“A voice cried, ‘Intellect is the lever by which to move the world,’ but another cried no less loudly that money was the fulcrum.” Honoré de Balzac, “Lost Illusions”
An Evolutionary Model of Ideology:
Abstract: “Ideology” can be modeled on the basis of natural selection. Such a selective formulation explains and predicts the frequency, distribution, and evolution of ideas. However, there are limits to the model. I explore the salient differences between ideology and Darwinian and Lamarckian mechanisms, and the difficulties with selective explanation in the context of ideology.
Introduction:
On Ideology : Getting Our Bearings with the Term
“The ideas of the ruling class are in every epoch the ruling ideas”, which are an “ideal expression of the dominant material relationships” (Marx). Marxists have often claimed that ideology, as a system of ideas, has a legitimating and normalizing function. The ideas of the ruling classes are represented as the common interest in order to have the workers and masses accept the domination of the ruling classes. Hence, these ideas owe their existence, at least in part, to the fact that they serve the interest of a ruling class.
It is more difficult to say whether such service invalidates an idea, or indeed how that service is performed.[1] Marxists almost always end up using functional explanations. Despite that, there has been much skepticism about functional explanation. The point, however, is to understand the scope and difficulties of functional explanation, and how functional explanations work in different contexts, before making such judgments. I will attempt to come to a new understanding of what it means to impute a function to an idea.
In this paper I propose a model of ideology in order to better understand ideology, and above all the way ideology works. The source of the model is the mechanism of natural selection. The target is the evolution, frequency, and distribution of ideas in a given social formation. Since the paper builds off of an analogy between the way ideology and natural selection work, I take pains to show there are strong intuitive grounds for a similarity in mechanism between natural selection and ideology.
I consider analogical reasoning, and the two-dimensional framework for material analogy. I discuss the source-system in a very simple model of natural selection.
I argue the similarity of mechanism (the vertical relation) comes from the fact that both natural selection and ideology count as “functional-cumulative” mechanisms.
I discuss the model of ideology and note the salient differences between source and target.
There have been various attempts to generalize or extend the mechanism of natural selection beyond populations of organisms. Natural selection does not operate only on populations of animals and plants, but on polypeptides, nucleotides, eukaryotes, bacteria, and viruses; i.e., natural selection is a mechanism said to operate at different “levels of selection”, from the macro-level of ant colonies and primate groups to the micro-level of self-replicating and catalyzed polypeptides[2].
However, outside of biology the application of evolutionary thinking has been a controversial issue (and for good reason). Within philosophy the most famous account is Popper’s “Evolutionary Epistemology”, which reasons on the basis of a fundamental similarity of mechanism between natural selection and the scientific mechanism of “method by trial and elimination by error”. There have been attempts to apply evolutionary thinking to “human society”, “human psychology”, and “economics” (Wilson, Pinkett). However, I want to focus on the analogy of natural selection, as well as Lamarckian mechanisms, with a population of ideas.
From the outset, I must be clear about what I mean by that slippery term “idea”. There is an ambiguity in the term: “idea” is a token-term that refers either to the processes or to the products of thinking. I restrict my attention to the products of thinking and will speak of the “vehicles of thinking”, e.g. media like talks, writings, pictures, and movies, as transmitters of information[3]. In order to speak of a “population of ideas” it will be necessary to go through a few steps, for a population is not simply a set of products; there is a hereditary aspect as well.
This implies two primary analytic tasks before I can speak more fully about how ideology works: 1) how to take an understanding of natural-selection mechanisms and use it to conceptualize the evolution, frequency, and distribution of a population of ideas; and 2) an account of natural selection itself.
Models and Material Analogies In Science: a model of ideology, and modeling ideology
What does it mean to say that I am modeling “ideology” on “natural selection”, and how is a model of ideology possible?
I distinguish the usage of a “model” from the strategy of “modeling”, and I do so because I will employ both terms. Statements of the form “to model x on / on the basis of y” are not necessarily the same as statements of the form “a model of/for x”[4], and it will be important to give an account of each in order to capture the difference for our purposes.
“A model of/for x” is thought of as a modal generalization about “x” (statements in first-order logic which include modal operators). Such a statement-form is often taken as an explanatory proxy for “laws” in biological, psychological, and social research (Sober). The modal generalization can take, or be translated into, the “if/then” form. It affirms only that the statement is possible, not when or to what extent the antecedent is satisfied; it says only that when the antecedent is satisfied, the inference is warranted (whether that warrant is deductive or otherwise). The modal generalization can be an abstraction or an idealization, an omission or “amplification” of the properties and relations of a system, and such categories can often help explain why the antecedent is not satisfied, or, if the antecedent is true and the conclusion false, why the inference is not warranted. In either case, whether the antecedent is satisfied or not, the model is explanatory. This is one of the benefits of a model for some phenomenon: it allows for a search in apportioning blame when a conclusion does not follow from the satisfaction of the antecedent.
R.A. Fisher was one of the principal founders of population genetics and statistical theory. Fisher gave a model of the sex-ratio in populations of organisms to argue that, given certain assumptions, the sex-ratio in populations will evolve to 1:1 and stay there. These assumptions are abstract or ideal, for instance the assumptions that organisms of different sexes mate randomly and that the population of organisms is infinite. Of course this is not the case in many populations, and there is no infinite population of organisms. The modal generalization “if/then” does not necessitate that a population achieve a stable 1:1 sex-ratio. At no given moment has the total population of human beings ever been, nor will it ever be, an exactly even 1:1 ratio. However, the model allows us to conclude that there will be an even split and, if not, why there is a discrepancy in a population.[5]

Harrod and Domar were both Keynesian economists who conjointly developed the Harrod-Domar model of economic growth. They gave a model to show that, given certain simplifying assumptions, such as that the marginal product of capital is some constant c and that the change in capital stock equals investment minus depreciation of capital stock, the change in output is a function of savings multiplied by the marginal product of capital (roughly profit) minus depreciation.[6] Many of these assumptions, like the constancy of the marginal product of capital, are simplifications made in order to derive the conclusion, and neither the antecedent nor the conclusion necessarily obtains. The mass of profit continuously expands and contracts; the rate of profit fluctuates and exhibits stochastic behavior. However, the conclusion of the model is said to describe a general mechanism of growth and is explanatory in that it gives a guide to apportioning causal responsibility[7].
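The Harrod-Domar relation just described can be sketched in a few lines. The sketch below is my own illustration, not the economists’ formulation; the parameter values (savings rate s, marginal product of capital c, depreciation rate delta) are hypothetical, chosen only to show how the “if/then” generalization yields a growth path once its antecedent is granted.

```python
# A minimal, illustrative sketch of the Harrod-Domar growth relation,
# under the simplifying assumptions in the text: the marginal product of
# capital c is constant, and the change in capital stock equals
# investment minus depreciation. All parameter values are hypothetical.

def harrod_domar_growth_rate(s, c, delta):
    """Warranted growth rate: g = s*c - delta."""
    return s * c - delta

def output_path(y0, s, c, delta, periods):
    """Output evolves as Y_{t+1} = Y_t * (1 + g)."""
    g = harrod_domar_growth_rate(s, c, delta)
    path = [y0]
    for _ in range(periods):
        path.append(path[-1] * (1 + g))
    return path

# With s = 0.2, c = 0.3, delta = 0.05, the warranted rate g is about 1%
# per period, so output grows slowly but steadily from its initial level.
path = output_path(y0=100.0, s=0.2, c=0.3, delta=0.05, periods=5)
```

The point of the sketch is exactly the one made in the text: the derivation goes through only if the idealized antecedent holds, and when observed growth deviates, the model tells us where to look for the discrepancy (savings, the productivity of capital, or depreciation).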
Modeling is a strategy in which one system is treated as a surrogate for another (Godfrey-Smith). The strategy is used when the “target” system that we are trying to investigate is little understood and there is greater familiarity with the “source” system[8]. Alternatively, the “target” system could be too difficult to study directly, so the “source” system becomes an indirect means of inquiring into how the “target” system works.
There have been two main accounts of analogical reasoning (Norton). Formal analogy has been a standard of classical logic. The inference is often given the following form in syllogistic logic:
S1 is P
S2 resembles S1 in being M
n ===================
S2 is P
Where n is degree of resemblance
The double-line is meant to signify that the inference is not a deduction, and the method is frequently unreliable: the premises can be true and the conclusion false. The epistemological question arises: if this strategy of inference is used so often, how, in a given case of analogical reasoning, can we know that the inference is warranted?
The formal account is immediately beset with a number of problems, which link back to the question of when it is appropriate to reason from analogy. The most common problem posited with the formal account revolves around the variable n. It is often stated that the more closely the two systems resemble each other, the more certain we are that the inference cashes out. But then the question depends on what is meant by resemblance and how it is quantified. One might retort that in the ideal case resemblance is the sharing of properties and relations, i.e. there is a one-to-one mapping of properties and relations from one system to another. But then, if properties and relations can be quantified over in a second-order logic, to say that n=1 is really to say that the two systems fully resemble one another iff they are the same in properties and relations. Then once again the question depends on much more difficult metaphysical questions of identity, properties, and relations. Of course such philosophical circles are common. But the point is that no amount of formal analysis can adjudicate a matter of fact about analogy.
The two-dimensional approach, developed by Hesse, Bartha, and Norton, is richer than the formal account, in that it is not only the properties and relations which are “transferred” from source to target, but also the second-order properties and relations at the source, which are transferred to the analogous properties and relations at the target. If S1 is P then S1 is Q, in which P stands in some causal or explanatory relation to Q; and if S2 carries an analogous property P*, then it is “reasonable to expect” that the system carries an analogous property Q*. There are vertical relations between the properties and relations within a system, and horizontal relations between the properties and relations of the source and the analogous properties and relations of the target, hence the “two-dimensional” account. The following chart, adapted from Bartha, depicts the “two-dimensional” relations:
Chart 1
The distinction allows for a sort of protocol. It allows us to understand the similarity between systems as a material analogy between “how the systems work”, and to better understand the warrant of the analogical inference as predicated on the transfer of causal and explanatory connections from source to target.
Hence, talk of “analogical inferences” is, strictly speaking, incorrect, because the method of analogical reasoning is content- and context-sensitive. Instead, comparisons are made on the basis of studied material analogies, a case-by-case understanding of the salient similarities and differences between how the systems work. Hence, there is no formal account.
Now, modeling can be used in different “epistemic contexts”. It can be used in the “context of discovery”, the “context of justification”, or the “mode of presentation”. Although the second might seem to interest us the most, I will make the case that the first and second are equal motivations in the model I develop.
Often modeling is a strategy used to advance our knowledge of “unknown” systems on the basis of our understanding of other systems with which we are relatively better acquainted. In the first place, a hypothetical mechanism is posited for some phenomenon that is analogous to the source system. The hypothetical mechanism does not need to lead to definitive explanatory conclusions, but such a method has, in the history of science, whether accidentally or not, advanced hypotheses and discoveries of new “unknown” systems. It is another matter whether such analogical reasoning does in fact lead to explanatory conclusions and definite discoveries. The justification depends finally on the material analogy between the workings of the source and target. A good example that shows the strategy at work in discovery and justification is the liquid-drop model of nuclear fission.[9]
Consider also the case of the electron wave and electron spin.
However, it is also possible to use analogical reasoning as a presentational and didactic device. Linus Pauling in his work General Chemistry gives us two very good and very different cases of analogical reasoning, both of which I class as presentational.[10] In the first case Pauling models the flow of electricity (S2) along a wire on the flow of water (S1) in a pipe. S1 has a quantity P, which is measured in liters, Q; S2 has an analogous quantity P*, which is measured in coulombs or stoneys, Q*. Notice that the vertical relations are not explanatory, but are inferences from properties to operational measurements. It seems here that Pauling is not making a deeper claim about the material analogy between systems on the basis of similitude of mechanism, but using said analogy to illustrate how electricity works. (I translate his terms to some degree; consult the footnote for the full presentation and elaboration.)[11]
Material Analogy between Natural Selection and Ideology:
I am not entirely certain we can say “ideology” is an “unknown” system, as there has been a rich and variegated literature revolving around and deploying the term. Such a model of ideology, built on the modeling of ideology on natural selection and Lamarckian mechanisms, is an attempt to present aspects of ideology, especially aspects of ideological transformation, in a new and didactic way. However, I think there is justification behind such analogical reasoning. There is a material analogy between the two systems. If there are vertical explanatory relations P and Q, a model of natural selection and Lamarckian mechanisms, and given analogous relations and properties P*, then an analogous vertical relation can be drawn to Q*, an analogous model of ideology.
The strategy is to present a model of the source-system to get the reader acquainted with how natural selection and Lamarckian mechanisms work, before discussing what I take to be analogous explanatory properties and relations found in Marxism. On that basis, it will be possible to present a model of the evolution, frequency, and distribution of ideas.
A Brief Discussion of a Model of Natural Selection and Lamarckian Mechanisms:
For the purposes of this paper I will present a model of natural selection that conforms to the brief characterization of a model I gave above. My discussion is indebted to Elliott Sober’s presentation of natural selection as a “force”.
Natural selection is said to be a force which acts on a population of organisms and results in evolution. Natural selection is said to apply to all organisms which exhibit heritable variation in fitness (Lewontin). The model of natural selection (MNS) is developed to demonstrate how a population of organisms will evolve if it is subject to certain conditions, and can be presented in the following logical generalization:
MNS: If in a population A organisms that have at least a characteristic P are better able to survive and reproduce than organisms that are not-P, and if P and not-P are transmitted to offspring and the offspring encounter the same selection pressures, then the representation of individuals with characteristic P will increase in future generations until the trait reaches fixation.
Take a simple illustrative case. In the first generation, the population has only one trait, not-P. An antibiotic treatment is administered. In the next generation, the population has one of two dichotomous traits, not-P and P. Those bacteria with the trait P are antibiotic-resistant and hence better survive and reproduce. The bacteria transmit the trait to descendants and, under the continued pressure of the antibiotic, in future generations more and more descendant bacteria will carry the trait P until the trait reaches fixation. If there were no variation between traits and no difference in fitness, there would be no evolution.
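The MNS generalization can be made concrete with a minimal sketch, assuming, as the schema does, dichotomous traits, faithful transmission, and a constant selection pressure. The fitness values below are hypothetical; this is an illustration of the schema, not a claim about real bacterial populations.

```python
# A minimal deterministic sketch of the MNS schema: one population, two
# dichotomous traits (P and not-P), with faithful transmission and
# constant selection pressure. Fitness values are hypothetical.

def next_frequency(p, w_p, w_not_p):
    """Frequency of trait P after one generation of selection."""
    mean_fitness = p * w_p + (1 - p) * w_not_p
    return p * w_p / mean_fitness

def generations_to_fixation(p0, w_p, w_not_p, threshold=0.999):
    """Iterate selection until P is effectively fixed in the population."""
    p, gens = p0, 0
    while p < threshold:
        p = next_frequency(p, w_p, w_not_p)
        gens += 1
    return gens, p

# Resistant bacteria (P) start at 1% of the population, with a 50%
# fitness advantage under continued antibiotic pressure:
gens, p_final = generations_to_fixation(p0=0.01, w_p=1.5, w_not_p=1.0)
```

Note that if the two fitness values are equal, next_frequency returns its input unchanged: no variation in fitness, no evolution, exactly as the text says.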
In short, if organisms exhibit heritable variation in fitness then there will be evolution (Lewontin). The simplicity and power of the model are incredible and, yet, appearances of simplicity are often deceiving. Many qualifications need to be made to this simple “if/then” model in order for us to understand how the source-system works.
On the side of the explanans:
“Fitness” is a tricky term, and this is not the place to discuss this difficult and fundamental term in biology at length. However, understanding what sort of term fitness is will give us greater insight into the workings of the source- and ultimately the target-system.
Fitness is a relational-property that reflects the interaction between organism and environment. The concept of fitness is deployed to explain evolution and change in gene frequency. Let us look further into these statements:
An organism’s fitness is relative to its environment. A biological environment is not simply a local spatiotemporal region. The dimensionality of a biological environment includes the local forces that interact with an organism to determine fitness levels. Two spatiotemporal regions may be physically identical, and yet there may be many distinct, overlapping biological environments. A biological environment is a complex system that is determined by local causes and physical properties as well as organismic properties. Those local causes and physical properties are varied: they may be macro-level causes and properties as identified in geology, geography, and meteorology, or they may go all the way down to the micro-level causes and properties sought after in chemistry and physics. The organismic properties are evidently not distinguished in kind from physical properties. However, these properties are more commonly thought of at the macro-level in anatomy, physiology, and ethology, all the way down to the micro-level in organic and molecular chemistry, as well as genetics. This should help clarify what is intended by such slippery terms as “local forces”, physical properties, and organismic properties.
There is no one-to-one mapping between an organism’s fitness level and a set of local forces, physical properties, and organismic properties. Take a simple case in which two highly similar bears are walking in a forest. A lightning bolt strikes one of the bears, killing it. Evidently, we would not decide on the basis of this “chancy” selective force that one of the bears was fitter than the other. Hence, there is a one-to-many mapping: “fitness” is supervenient. The property of “fitness” supervenes on a disjunctive, heterogeneous set of local forces, as well as physical and organismic properties.
Evidently, the complex dimensionality of biological environments, as well as the supervenience of fitness, constitutes a major difficulty in determining and measuring fitness levels, and hence in making judgments about comparative fitness within and between species, as well as trends in fitness across generations and evolution. And yet the difficulty must be confronted. Rosenberg suggests that “because of the one-to-many relation between fitness and its determinants, fitness must be measured by its effects”.
It is better to anticipate the passage by stating that fitness is measured by its effects, and that this is perfectly normal for theoretical and probabilistic terms. If this were not the case, then fitness could not be explanatory. We will explore these claims now.
In discussions of the measurement of fitness, the measurement of fitness is often taken to be a definition. Natural selection operates on all cycles of an organism’s lifetime, and evolution “occurs when organisms differ in viability and fertility” (Sober). Viability and fertility can be taken here, respectively, as probabilities of survival and reproduction. Fertility can also be thought of as a rate of reproduction. The question, however, is how to calculate these probabilities of viability and fertility. There is a difficulty in taking the probabilities of viability and fertility to be defined by actual frequency. Let us look at this interpretation of probability in terms of actual frequency.
The actual frequency of an event in a population is how many times the event occurs in the larger population. Take a simple case. Suppose we flip a fair coin 200 times, and say that the coin lands tails 70 times. T is the proposition that the coin lands on tails for some given flip. P(T) can be taken to mean how many times the coin lands on tails within the larger population of events. Hence, the probability here is P(T)=70/200. The actual frequency interpretation of probability is an objective interpretation. To generalize from the case: take the probability of some proposition to be the number of times the corresponding event occurs within a larger population. Probability just is actual frequency.
The problem is that there is no guarantee that actual frequency will converge with probability. In the case given above, even though the coin is fair, the actual frequency diverges from the probability. Also, if probability is actual frequency within a population of events, then what would it mean to say that a fair coin has an actual frequency of .5 for an odd-numbered population? It is often stated that in a hypothetical long-run series, an infinite population of events, actual frequencies converge with probability. Yet if there is convergence only under such unrealistic conditions, then there seems to be no case for expecting actual frequencies to reflect probability in nature. There have been attempts to deflect these problems with the actual frequency and hypothetical relative frequency interpretations, but we cannot go into this further issue.
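The divergence between actual frequency and probability can be illustrated with a small simulation (my own sketch; the 200-flip run mirrors the example above): a simulated fair coin has P(T) = 0.5, yet any finite run yields an actual frequency that need not be, and for odd run lengths cannot be, exactly 0.5.

```python
# Illustrative sketch: actual frequency of tails in finite runs of a
# simulated fair coin. The seed is arbitrary; it merely fixes one run.

import random

def actual_frequency(n_flips, p_tails=0.5, seed=0):
    """Fraction of flips landing tails in one finite run."""
    rng = random.Random(seed)
    tails = sum(1 for _ in range(n_flips) if rng.random() < p_tails)
    return tails / n_flips

freq_small = actual_frequency(200)      # a 200-flip run, as in the example
freq_large = actual_frequency(200_000)  # a much longer run
# freq_small need not equal 0.5 even though P(T) = 0.5; freq_large will
# typically lie closer to 0.5, but exact convergence is guaranteed only
# in the hypothetical infinite long run.
```

Note also the oddness flagged in the text: for an odd number of flips, the actual frequency is some integer divided by an odd number and so can never be exactly 0.5.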
Actual frequency is used as evidence for estimating probability, just as an understanding of long-run series and population size is important in making judgments about whether actual frequency will reflect probability. There is no simple connection between probability and actual frequency. “Fitness” is a probabilistic term, and this means that the actual frequencies of viability and fertility will often not reflect fitness levels. It also happens in populations, especially those of smaller size, that the fitter organism is eliminated by entirely less “chancy” forces such as predation or disease. There is simply always a chance that fitter organisms “leave behind” fewer descendants.
In many ways, the probabilistic character of “fitness” translates to “limits” in evolutionary theory. “Fitness” is not a “deterministic” property. Actual frequencies may deviate from the probability; the actual trajectory of evolutionary change may differ from the expected path of evolution predicted by “fitness” levels and natural selection. This is known as “drift” in evolutionary theory. “Drift” is often thought of as sampling error. It makes sense that measuring the probability underlying “fitness” must inherit the same difficulties as other probabilistic terms.
“Fitness” is also a theoretical term. Theoretical terms in science are often measured by their effects. It is not possible to define theoretical terms as a function of observational terms. Theoretical terms refer to (contingently) unobservable entities which are distinct from their observation and whose existence does not depend upon such an act. However, it is possible to identify and measure an unobservable entity with reference to observation. The effect is a measure in virtue of the fact that there is a causal and explanatory connection between the unobservable entity and the measure. An instrument in science relies on such a causal connection. Take the theoretical term “atmospheric pressure”. Atmospheric pressure reflects a force perpendicular to the surface of the earth; it is approximately the pressure of the air mass over the earth. A barometer is an instrument that measures atmospheric pressure and is often used as a predictive device for meteorological forecasts. Yet no meteorologist would say that a barometer or barograph defines atmospheric pressure. Instead “the operation of the (barometer) is explained by citing the phenomenon it measures” (Rosenberg 156). There is a theory that explains the effects of the unobservable entity on the instrument.
Fitness is measured by such units of measurement as births, deaths, viability, and fertility, but is not defined by the measurement. Instead, the term “fitness” is said to explain those facts. There may seem to be a tautology problem lurking: if fitness just is defined by those units of measurement, and fitness is a relation in which some organisms survive better, reproduce more, etc., then obviously it predicts an increase in their descendants. However, it is important to note the probabilistic and “non-deterministic” character of the term “fitness”. It is precisely because these units of measurement can diverge from fitness levels that fitness can explain trends in the units of measurement. As I mentioned before, “fitness” supervenes on local forces and physical properties, as well as organismic properties. The paradigmatic case in which we are able to understand the divergence is when we can identify a set of organismic mechanisms, and facts about selection pressures, in order to suggest why some organisms in a species are better adapted to the environment. In the case of the bacteria, we can try to locate the random mutation at the level of molecular chemistry or genetics, and identify a causal mechanism that makes a difference in defense against the antibiotic. There are identifiable mechanisms and selective pressures that account for the fitness of the bacteria, and the trajectory of evolution (Hull). The proper explanatory strength of fitness is characterized by the ability to generalize over distinct and heterogeneous biological systems.
The model of natural selection presupposes that traits are transmitted to offspring; in other words, the model is neutral on hereditary mechanisms. Darwin himself had to take the fact of transmission more or less for granted. He also recognized the difficulty with such an empirical hypothesis, since it was obvious enough that children never resemble their parents in full. Faced with such a difficulty he proposed a notion of “gemmules”, a prototype of genes, and a hereditary mechanism of “mixing”. However, this was not really any solution. The solution seemed elusive until the turn of the 20th century, when the scientific community rediscovered Mendelian genetics. The hypothesis of natural selection was further clarified by the fundamental discovery of the DNA molecule and the establishment of molecular biology. Recognizing this neutrality is important, because in certain instances hereditary mechanisms can affect the course of evolution. Take the case of the jaw: it can be said that the growth of the jaw-line does not owe itself to the fact that it confers higher fitness, but is a “side-effect” of the general growth of the human head and of the genetic information specifying its structure. This is a case of “pleiotropy”, in which a genome determines multiple phenotypic effects. Considerations of such hereditary constraints, one type being pleiotropy, are important in judgments about “fitness”, and hence about whether a trait or mechanism (or set of traits or mechanisms) is responsible for evolution.
On the side of the explanandum:
The distinction must be made between evolution by natural selection and change in gene frequency. Biology is interested in minute and large-scale changes in gene frequency. Yet there can be cases of change in gene frequency (migration, mutation, recombination) which are not strictly considered “good cases” of evolution. Change in gene frequency is better thought of as a non-absolute criterion for determining occurrences of evolution. Natural selection is not defined by change in gene frequency, since “natural selection has two consequences: a change in gene-frequencies and, in addition, an increase in the quantity (fitness) optimized” (italics included for emphasis). As we discussed earlier, “fitness” explains viability and fertility in a probabilistic manner, and hence it makes sense that the concept of “fitness” cannot be identified with its measure. The concept of fitness can explain differences and changes in viability and fertility because higher (or lower) fitness does not always lead to higher (or lower) viability and/or fertility. The same argument applies to natural selection writ large: natural selection cannot be identified with change in gene frequency because of the probabilistic character of the hypothesis. It is precisely because natural selection is not the only “cause” of change in gene frequency that it can explain such trends.[12]
Character of and Challenges With The Model:
The model describes what will happen given certain conditions and is subject to a ceteris paribus clause that would cancel out forces other than natural selection. In this sense, the model represents cases in which the system is operated upon by a single force.[13] The model does not state whether or to what extent the forces act, and Darwinian selection is only one kind of possible selection. Other forces, whether local or at the population level, can have a “biasing effect” in directing change in gene frequency. We have already looked at the probabilistic character of fitness and the concept of “drift”.[14] But even then the force at hand is non-deterministic. After all, we have seen that “fitness”, even when it can be given a material basis, is not sufficient to guarantee change in gene frequency.
The use of “force” might also strike some as odd. If “cause” is preferred, then there is no dispute over the specific word indicating the concept. However, so as not to introduce confusion, it is necessary to make a crucial distinction between “selective forces” and “forces of selection”: there is the process of selection, and the product of selection. The process of selection occurs at the individual level. An individual organism is bombarded by a multiplicity of local and short-term forces, constituted by organismic properties. These selective forces eliminate or retain organisms, and they do not necessarily select organisms on the basis of fitness. The product of selection, however, is a statistical, population-level outcome of individual selection processes. Natural selection is a population-level force, in the sense that it is an outcome for populations. The force acts, in the clearest case, when there are actual mechanisms, whether physiological, anatomical, or behavioral, responsible for viability and retention on average, together with a relative stability of the physical environment and individual selective pressures. Such a judgment cannot be made at the individual level, because “more chancy” selective pressures like lightning are characteristic of biological environments. Denying such a statement would lead us to consider the “selective force” of lightning a “force of selection”.
The model can be characterized as an abstraction from differences in local selective forces and population-level forces of selection over generations. It is an idealization in that it is assumed that there is a kind of selection pressure acting on all generations. In a way, for this model, “selective forces” are “forces of selection”.
It is also assumed that there is a trait that is retained and transmitted because it confers higher fitness on the organism. I mean by this that it is retained and transmitted because the trait makes a causal contribution to viability and fertility. Neither assumption holds strictly. Organisms in the same population often confront different selective pressures because they are subjected to many different local forces, even though biologists are often concerned with populations that on average confront “similar” selective pressures over generations. And offspring never fully resemble their parents in either genotype or phenotype.
Final Note on Justifying the Model of Ideology Based on Natural Selection
The analogy between natural selection and scientific knowledge has often been noted. The most common account of this analogy is given by Popper in his essay “Evolutionary Epistemology”. Popper’s account is based on the sort of analogical reasoning I have characterized above, in that he claims there is a fundamental similarity in mechanism between natural selection and scientific progress. Popper moves between horizontal and vertical relations and seems to understand that a material analogy exists where the systems in question function in similar manners. He even goes so far as to suggest that scientific progress is an evolutionary process, one that exists at a different level of selection. Of course, I am not pursuing the same analogy as Popper, but it will help to briefly discuss Popper’s paper in order to explicate the logic of the analogy.
Popper characterizes natural selection and scientific progress as cumulative and adaptive processes of “method by trial and elimination by error”.[15] The process is said to start at the level of “inherited structure”, which is the accumulation of as yet unfalsified scientific theories, conjectures, discoveries, experiments etc. The inherited structure is then transmitted to future generations and produces certain theoretical problems for the scientist. The scientist then produces a novel scientific theory or conjecture in response to the theoretical problem. The novel conjecture is then subjected to criticism and experimentation, and eliminated if false. If the conjecture is not falsified, then the “mutation” is transmitted and new theoretical problems emerge in an unlimited cycle.
Scientific progress is a process in which theories and conjectures become increasingly fitter to facts and better adapted to problems, i.e. they come to increasingly approximate the case. In such an account, the unit of selection is the “inmates of world III”, and the evolution is then the cumulative addition and retention of non-falsified theories and conjectures. Popper sees scientific progress as a specific sort of evolutionary process. But it would have been more accurate for him to say that scientific development exhibits a general mechanism at play. Scientific development is subjected to a mechanism of random variation and selective retention. In Popper’s account, the first refers to the production of novel scientific theories and conjectures, and the second to the elimination of false theories. Popper does not explicate the logic, but in both cases there is random variation and selective retention, and hence there is an analogy between the properties and relations of natural selection and those of scientific theories. Scientific theories have analogue properties for variation V*, differential fitness F*, and heredity H*. There are also analogous hereditary structures. [16]
A mechanism of random variation and selective retention is a “substrate-neutral algorithm”, and there is no a priori reason why such a mechanism could not apply to a population of ideas. The concept of a “substrate-neutral algorithm” was introduced by Daniel Dennett in speaking about what sort of thing natural selection is. An algorithm is a relatively uncontroversial and simple rule by which certain inputs determine a unique outcome. Of course, the simplicity of the rule is not a necessary feature, as I can think of an algorithm (take any value n and perform 2+3^6 x 8,00056786^3) which ostensibly no one will ever use. Addition is an algorithm in this sense: given at least two values x, y, z… there is a rule which determines a unique outcome x+y+z…. Now an algorithm can be implemented by many different kinds of systems. Not every algorithm is implemented by a conscious system, and there are many non-conscious systems that implement algorithms. A computer can implement an algorithm. A liter of water flows into another and the process implements the arithmetic algorithm. When a tree grows many rings, the system implements the arithmetic algorithm. When there is mitosis, the process of division implements the arithmetic algorithm. In this way, because the algorithm spans, or can be implemented by, many heterogeneous and distinct systems, we say that the algorithm is “substrate-neutral”.
Natural selection seems to be an intuitive candidate for such a concept. Natural selection can be formulated as an algorithm that takes Darwinian lineages and present fitnesses and determines a unique (set of) probabilistic outcome(s) of the expected evolution of lineages. In the biological world natural selection seems to be “substrate-neutral” in that many different conscious and non-conscious systems implement the Darwinian algorithm, and the algorithm is implemented at “multiple levels of organization”. For instance, it has been found that non-organic molecules undergo a process of variation and selection, and it has been hypothesized that such a process is responsible for the formation of organic molecules, polypeptides and nucleotides. Polypeptides, nucleotides, DNA, and RNA implement the Darwinian algorithm. Eukaryotes and prokaryotes implement the Darwinian algorithm. Individual organisms, and potentially groups of individuals, implement the Darwinian algorithm. Hence, there seems to be a strong empirical case that natural selection is a “substrate-neutral algorithm”. But I claim that mechanisms of variation and selection are in general “substrate-neutral algorithms”, for the same reason that such a process can operate on vastly different physical systems.
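The substrate-neutrality claim can be sketched in code. The loop below assumes nothing about what the members of a population are made of; the very same rule is then run on two unrelated “substrates” (numbers and strings). Every name and parameter here is an illustrative assumption of mine, not part of the argument.

```python
import random

def evolve(population, vary, fitness, generations, survivors):
    """A generic variation-selection loop: substrate-neutral in that
    nothing here assumes what the population's members are."""
    for _ in range(generations):
        offspring = [vary(m) for m in population]        # variation
        pool = population + offspring
        # selection: retain the fittest members of parents + offspring
        population = sorted(pool, key=fitness, reverse=True)[:survivors]
    return population

random.seed(0)

# Substrate 1: real numbers, selected for magnitude.
numbers = evolve([0.0] * 8,
                 vary=lambda x: x + random.uniform(-1, 1),
                 fitness=lambda x: x,
                 generations=50, survivors=8)

# Substrate 2: strings, selected for resemblance to a target word.
target = "IDEA"

def score(s):
    return sum(a == b for a, b in zip(s, target))

def mutate(s):
    i = random.randrange(len(s))
    return s[:i] + random.choice("ABCDEFGHI") + s[i + 1:]

words = evolve(["AAAA"] * 8, vary=mutate, fitness=score,
               generations=200, survivors=8)
```

Because the parents are kept in the selection pool, fitness never decreases: the numbers ratchet upward and the strings drift toward the target, two outcomes of one algorithm on two substrates.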
It is important to distinguish natural selection from mechanisms of variation and selection because, to put it quite vaguely at first, there are different types of variation-selection mechanisms and they are not exactly “substrate-neutral”. Natural selection is a type of variation-selection mechanism specific to the biological world (and the origin of biological entities) because the mechanism presupposes the accumulation and mutation of functional traits. However, there are other types of variation-selection mechanism that do not presuppose accumulation and mutation. Three illustrative cases will allow me to make the contrast between types of variation-selection mechanisms clearer.
Earlier I spoke of antibiotic-resistant bacteria. In this case, blind variation occurs, and the selection pressure includes the antibiotic. Hence, it is this local pressure which selects for those bacteria with the fittest variant, the trait that allows the bacteria to best adapt, survive, and reproduce. The variant trait is selected because it contributes to some causal capacity that allows the bacteria to survive the introduction of the antibiotic regimen. Now take a technique from the world of chemistry. When chemists want to develop a chemical with a certain function, they resort to “chemical libraries” which contain many, many complex molecules. They then pass these molecules through a screen that selects for the function of the molecule. The final example is a common toy, which is also an excellent philosophical device.
[diagram: a container of squares and circles, with layered levels pierced by circular openings]
There are two sorts of objects in the container, squares and circles. The container has multiple levels, each with circular openings but no square openings. After enough shaking, the squares will remain at the top and the circles will end up at the bottom.
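A minimal simulation of the toy makes the point concrete: the “selection” here is nothing but the interaction of shape, opening, and shaking. The level count and the per-shake probability are illustrative assumptions.

```python
import random

random.seed(1)
levels = 4                                 # 0 = top, levels - 1 = bottom
shapes = ['square'] * 5 + ['circle'] * 5
positions = [0] * len(shapes)              # every object starts at the top

for _ in range(200):                       # "shaking" the container
    for i, shape in enumerate(shapes):
        at_bottom = positions[i] == levels - 1
        # only circles fit through the circular openings
        if shape == 'circle' and not at_bottom and random.random() < 0.3:
            positions[i] += 1              # the circle slips down one level

# After enough shakes, squares remain on top and circles sit at the bottom.
```

Nothing in the loop selects "for" any causal contribution; the sorting falls out of the brute geometric properties of the objects and the container, which is exactly what makes this the weakest, "filtration" sense of selection.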
In each of these cases there is a variation-selection mechanism at work which presupposes local selective forces. In the first case, random variation can only be made sense of starting from the basis of the accumulation of functional traits and of mutations in the genotype (Lewens). The variation is then subjected to a local selective pressure, the introduction of the antibiotic regimen. The environment selects the trait in virtue of
the function and causal capacity it contributes to defense against the antibiotic.[17] In the second case, it makes no sense to speak of the accumulation of functional traits and of mutations in the lineage, because there is a “designed” system that selects for functions. The mass of molecules that constitutes the library is a product of chemical synthesis and other techniques. The “library” does not exhibit “random” variation in the same sense as the first case. The variation of molecules is produced by synthesis, and the agent who produced it did so because the library has many functional molecules. However, the mechanism exhibits “random” variation in the sense that the chemist has no intention of, and cannot control, the production of a specific mass of molecules. The local pressure is the screen, which is also designed to pick out certain properties that contribute certain causal capacities, and not (a or a set of) specific chemical structures. Given the heterogeneity of the mass, there is a high likelihood that the screen will end up picking out a molecule for the relevant function. In the final case, my intuition is that the mechanism of “random” variation and selective “retention” is most limited. There is design behind the variation, just as there is in the selective process. There is only variation in the brute sense of difference between properties. The design selects for circles to pass through the levels. The local pressure is the shaking, which over a period of time means the circles filter through the screen. There is no accumulation of functional traits, no random mutation to speak of. The sense in which the circles are selected is extremely weak. In the end a trait is not selected for any causal contribution, but just because of the micro and macro properties of the circle-objects and the organization of the physical container.
From these three cases I distinguish three types of variation-selection mechanism: “cumulative-functional”, “design-functional”, and “filtration systems”. The first is characteristic of evolutionary theory, the second of engineering and the chemical sciences; the third, as specified, need not be designed, but exhibits local selective pressures that “pick out” certain properties. However, I do not think design is conceptually necessary, as it is possible to imagine some physical system that operates in a similar way. I draw no absolute taxonomy, and pitch the characterization of “filtration systems” as a more general sort of mechanism, primarily relevant to non-designed and non-functional systems.
The similarity of mechanism comes from the fact that ideology operates most like a “cumulative-functional” mechanism. This will require some “pre-theoretical” familiarity with the term ideology and an argument that such a type most robustly applies to ideology. It must be remembered: a model is not devised when a system is the object of complete ignorance, but only when it is badly understood in a theoretical manner. In many ways, modeling x on y requires some kind of pre-theoretical acquaintance with both x and y, where there is greater theoretical understanding of y.
Ideology operates most like a “cumulative-functional” mechanism because a population of ideas has a functional lineage and a novel idea is selected in virtue of the causal contribution that idea makes for certain classes and people. The mechanism is not simply “design-functional”, nor is it a “filtration system”. To state the first is to presuppose that variant ideas are consciously designed by certain people for certain interests and purposively selected by certain people because of the contribution they make to their interests. Such a claim especially breaks down when applied to longer-scale ideological lineages. After all, every case of the evolution of a population of ideas in history cannot owe itself to some group of intellectuals consciously producing variant ideas, while some other group of lords and patrons carefully assesses whether and how each one will help their cause and selects the variant that seems most useful. At the same time, every case of the evolution of a population of ideas in history cannot owe itself to an arbitrary, non-functional selection. If an idea had no epistemic meaning or social significance, it would simply fail to be a candidate for selection and transmission. Neither constitutes a very plausible story about ideology.
I claim that ideology may constitute a mix of all three sorts of mechanisms.
Insofar as populations of ideas exhibit an analogous property to “drift”, in certain contexts it is helpful to think of the mechanism operating on the population as a “filtration system”. Insofar as populations of ideas and the variant traits they exhibit are conscious products of human activity, it is helpful to think of the operant mechanism as “design-functional”.
Let us summarize the logic now that we have arrived at the model of ideology: there is a material analogy between natural selection, Lamarckian mechanisms, and ideology because all are variation-selection mechanisms. Variation-selection mechanisms need not apply only to the biological world, because they are “substrate-neutral algorithms”. There are also different types of variation-selection mechanisms. The material analogy is further substantiated by the fact that the mechanism of ideology operates like the mechanisms of natural selection and Lamarckian inheritance, i.e. they are all cumulative-functional mechanisms. So it is possible to transfer the explanatory properties and relations of the source to the target. It is now time to pass to the model itself.
A Model of Ideology
The model will represent ideology as a force that acts on a population of ideas and results in ideological evolution. Understanding whether and to what extent this mechanism acts, and how it combines with others, will be an important part of the model.
Model of Ideological Selection (MIS): If in a population A there are ideas with at least one characteristic P that makes them more useful than ideas that are not-P, and if P and not-P are transmitted to future generations, then the representation of ideas with characteristic P will increase in future generations until the trait reaches fixation.
In short, if there is heritable difference in usefulness, there will be ideological evolution, and certain ideas will increase in frequency in a specific and relevant population of persons. The differential usefulness also means that certain populations will tend to select, produce, and reproduce certain ideas based on the higher relative usefulness of those ideas for their social environment, and that ideas will distribute across populations based on relative usefulness.
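MIS can be illustrated with a toy discrete-generation run, on the assumption that differential usefulness acts like a constant transmission advantage. The replicator-style update and all numbers below are my own illustrative choices, not part of the model's statement.

```python
def next_frequency(p, w_p, w_not):
    """One generation of differential transmission: ideas with trait P
    are passed on at rate w_p, ideas without it at rate w_not."""
    mean = p * w_p + (1 - p) * w_not        # average "usefulness"
    return p * w_p / mean                   # P's share of the next generation

p = 0.01                                    # trait P starts rare
for _ in range(2000):
    p = next_frequency(p, w_p=1.05, w_not=1.0)

# With a persistent 5% usefulness advantage, P's frequency climbs
# toward fixation, just as MIS predicts for the idealized case.
```

The run is the idealized, single-force case: a constant usefulness differential and faithful transmission with no drift, so fixation is guaranteed rather than merely probable.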
A few necessary points of clarification are in order. I speak of “populations of ideas”, but it is important not to reify ideas. After all, it is populations of humans that produce, select, and modify ideas. However, I have specified the distinction between process and product. I assume that once the products of human thinking are embodied in material objects, these products have autonomy, in the sense that the “correctness” or “usefulness” of the information signaled by the product no longer depends on its producer. The information is not a function of the beliefs of an individual, but is dependent on the state of the world and society.
I will then abstract from individual human-producers, and consider only the information signaled by the products. When I say that ideas have “traits”, I mean precisely the putative information transmitted by such products. This information is transmitted to contemporaries and future generations of people, who “inherit” and interpret the idea to produce novel ideas. These novel ideas resemble their parent-ideas to some degree and reproduce heritable information.
There are three conditions, analogous to those of natural selection, which when met lead to change in the frequency of ideas.
- The first condition is that there is an analogue to random variation P*. In the case of ideology, “random” variation comes from the fact that information is not necessarily produced with the intention of serving the interest of certain classes and people. History is not one long conspiracy of ideologues serving rulers by dominating and duping people. Instead, variant information is produced for any number of reasons. For instance, a scientist might produce a new theory because it is the best current solution to a theoretical problem, or a social theorist might produce a new theory of justice. However, variation need not consist in novelty.
“Randomness” exhibited by variant information does not mean that the information was produced for no reason, or more precisely that there is no agent-relative reason for the information. It also does not mean that any variant information is just as likely to be produced as any other. “Randomness” in biology alludes to the fact that the future conferral of fitness does not enter into the causes of the mutation. A mutation does not occur because it gives higher fitness to an organism, even though we have studied many causes that do produce mutations. Instead, “randomness” alludes to the fact that in the production of variant information, the future use and reproduction of that information, especially by a different agent, does not affect the agent-relative reason for the information. “Randomness” alludes in part to the fact that information can be put to use in ways that no producer could ever envisage, and that it can be used to serve ends in complete disharmony with the ends of the producer. Albert Einstein had to “recant” his involvement in the Manhattan Project, and critiqued the “political purposes” which nuclear weapons served.
- The second condition is that there is a functional analogue to “differential fitness”, Y*, which I call “differential usefulness”. I consider the structure of the concept to be similar to fitness, and I want to show how the “explanatory structure” of usefulness is similar to that of “fitness”.
“Fitness” is a quantity that explains and predicts the survival and reproduction of organisms in a population. Darwin stated there was a tendency for organisms within lineages, given a relative stability of environment and selective pressures, to become fitter. Natural selection is said to be an improver, in that it selects, on average, for the fittest variant. I am a bit more skeptical of attributing any such tendency to the overall history of ideas. However, in the case of ideology, “usefulness” explains the survival and reproduction of ideas, and why on average certain information is selected over other information for the cycle of production and reproduction.
“Usefulness” is a theoretical and probabilistic term. Usefulness is a quantity measured by viability and fertility. The viability of an idea is the degree to which the information contained in the product is transmitted to others. The fertility of an idea is the degree to which the idea is modified and reproduced in variant ideas. The usefulness of an idea may not be reflected in actual viability or fertility. There may be entirely chancy reasons, dependent either on the social environment or the idiosyncrasies of groups and people (or both), which mean that a useful idea is neither transmitted to others nor reproduced. Insofar as ideology is not the only kind of force of selection for ideas, or there may be entirely unique and chancy selective forces, less useful ideas may increase in frequency and distribution within a lineage. Also, the two measures may reinforce or counteract one another. An idea may be transmitted to many others and yet not be reproduced (and vice versa). However, the idea is that there is a tendency in capitalism, on average, for the quantity of usefulness to increase in a lineage, for the most useful ideas to be produced, transmitted, and reproduced.
An idea is more useful, or better adapted, than another idea if it transmits information to others that provides solutions to problems thrown up by the social environment. Usefulness is a relative property that reflects the interaction of an idea, not just with the “environment”, but with what I will broadly call the “social environment”. The social environment is determined by local forces and physical properties, as well as social properties of, and social relations between, people. It is composed of a world with people bearing certain relations to each other. Ideas are like tools that people use to solve problems posed by the social environment. Ideas are selected for on the basis of the causal contribution they make to the efforts of people in effecting and/or changing the social environment. People and classes select ideas, and the model charges that they do so, consciously or unconsciously, because those ideas and the information contained in them make a causal contribution to the efforts of people and groups. Ideas have functions, and those functions just are the causal contributions of ideas to the efforts of people and groups.
In this most basic way, people’s ideas consciously and/or unconsciously adapt to their ends. If the idea answers a problem, if it makes possible the performance of some action or the attainment of some result, then the idea is retained and transmitted. If the idea does not allow for such, then the idea is discarded and a new one produced. In this way, given that the social environment poses an indefinite number of problems, there is always room for criticism and generalization of the problem. I will discuss the selection process and the different kinds of selective pressures at greater length further on.
A final note, however, that touches on the point of selection: the usefulness of an idea is not a function of its truth-value. There are many statements that are true and useless, and many useful statements that are not true. Hence, there are contexts in which people and classes select an idea on the basis of a rational criterion, i.e. they select an idea because it is true, or because it in some way corresponds better with reality than another. The idea may be accepted because it is, putatively, the best theory of a certain phenomenon. In turn, that could imply instrumental value. This is very much the case for science, in which research agendas, even at public institutions, are heavily and subtly influenced by the demands of production, so that ideas, hypotheses, theories etc. are selected not only for their truth-value, but for their potential instrumental value.
Take a simple case, one that is very much the case today: a lab at the NIH produces a new paper on nanotechnology, and it is the best available solution to a problem in nanotechnology. A firm employs the paper to produce a commodity that requires nanotechnology. The firm selects the paper because it is the best theory in the lineage, the best theory to date on nanotechnology. Ultimately, they consciously select the paper on this basis because of the relative truth of the paper and the contribution it makes to designing the commodity. The outcome is that the information contained in the scientific paper and the commodity-design will be reproduced in the area of industry, and may lead to further investment in the production of new information related to the paper. There is selection for truth-value and instrumental value, and this is the case because there is a causal relation between the existence of unobservable structures and entities identified and studied by scientists, and human beings affecting those structures and entities to produce determinate effects.
- The third condition is that there is a hereditary analogue H* for the transmission and reproduction of information. This condition is the most difficult to deal with, and I admit that I do not understand in a clear and robust way the full workings of the hereditary mechanism with respect to information. Yet it need only be assumed that the information contained in the product has a multiple-parent relation, in which it resembles the parent-ideas to some degree.
In biology, the hereditary aspect is well understood. There are hereditary structures (DNA, tRNA, mRNA, cistrons, structural genes, regulatory genes, nucleotides, chromosomes, enzymes etc.) and processes (protein synthesis, transcription, translation) which replicate and transmit genetic information to offspring. These structures and processes specify a “genotype”, a sort of “plan” for protein construction and the regulation of that process, which “builds up” the “phenotype”, or the “observable traits of an organism”, in interaction with a specific environment. These hereditary structures and processes are also physically distinct processes in the body; there is a germ-soma division which makes possible the replication and transmission of information. This is connected with the relative stability of genetic structures and material in the face of the environment. This makes tracking species and phylogenetic trees much easier, as most times similarity of genetic information reflects species-membership. Hence, it becomes easier to say that an organism is a member of a species, and that the species is part of a larger biological lineage, a branch of the larger tree.
When it comes to ideas there is no comparable unity of hereditary structures and processes. There is also no one mode of transmission, and the parent-offspring relations are more complex than with genetic information.
There are no comparable hereditary structures that maintain, replicate, and transmit information to offspring. Someone might contend that language is the human structure for preserving and transmitting information; on this contention, the analogue of the genetic code is human language. Genetic information is primarily contained in DNA, a helical structure composed of triplets (codons) of four different nucleotides (adenine, cytosine, guanine, thymine). Human information is primarily contained in language, which is composed of complex statements (structure) reducible in components to simple statements, words and letters, morphemes and phonemes. The genotype is the totality of grammatical and logical possibilities, the rules and constituents, for the production of complex statements. The phenotype is the actual complex statements produced under a specific social environment. I suppose then that the analogue of the germ/soma division is the physical localization of the informational genotype.
Information is contained in products, but these products take many different physical forms. There are utterances, books, papers, hypotheses, theories, paintings, movies, posters, advertisements, commercials etc. This means there are many different ways an idea can be transmitted to people. Ideas can be heard, read, watched etc. However, transmission does not necessarily imply selection on the part of the audience. Selection for appropriation and use, and for reproduction, is another matter altogether, even if usefulness is a property which can generalize a link between continuous production and reproduction. There is only a parent-offspring relation when a producer affects an audience in such a way that the idea is used and/or reproduced. The most robust case in which one idea is the offspring of at least one parent-idea would be one in which the idea is used by people in order to modify and reproduce the information in different products, with variant information.
The offspring of parent-ideas resemble, in the information they contain, their parent-ideas. The set of information contained in the offspring differs from the set contained in the parent-ideas, and may intersect it or be a subset of it; conversely, the information contained in a parent-idea may be a subset of the information contained in the offspring. Hence, membership in a species would consist in a relative similitude of information, and a lineage, an evolutionary series of species of ideas, could be tracked by similitude of information with the parent-ideas. In genotypic terms this could mean a specific totality of logical and grammatical possibilities, a specific set of rules and protocols, for the production of the phenotype, the observable product.
A final note: an ideal cycle of production and reproduction might clarify the issue at hand. The cycle is P-Pr-Tr-Md-Rp. A problem (P) is posed by the social environment. Someone or something produces (Pr) a solution in the form of an idea, which signals information. The information is transmitted (Tr) by some means to other people. These people consciously or unconsciously select the information on the basis of similar problems. They modify (Md) the idea and information and produce variant ideas which reproduce (Rp) (ostensibly) differing information.
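The ideal cycle can be sketched as a pipeline over a toy representation of an idea as a set of informational “traits”. Every function and name here is an illustrative assumption of mine; the only point carried over from the text is that the offspring-idea retains, and varies, its parent's information.

```python
import random

rng = random.Random(3)

def produce(problem):
    """Pr: a solution to the problem, signalling some information."""
    return {problem, 'claim-' + str(rng.randint(0, 9))}

def transmit(idea):
    """Tr: the information reaches an audience (here, copied faithfully)."""
    return set(idea)

def modify(idea):
    """Md: the audience interprets the idea and varies its information."""
    variant = set(idea)
    variant.add('claim-' + str(rng.randint(0, 9)))
    return variant

def reproduce(idea):
    """Rp: the variant is embodied in a new product."""
    return idea

parent = produce('problem-X')                       # P posed, Pr produced
offspring = reproduce(modify(transmit(parent)))     # Tr, Md, Rp

# The offspring resembles its parent: their information overlaps.
```

Because transmission here is faithful and modification only adds a trait, the parent's information is wholly contained in the offspring; looser transmission or subtractive modification would give the intersecting or subset relations described above.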
- The model of ideology has an explanatory structure similar to that of the theory of natural selection. It is a non-deterministic model, and the inference from explanans to explanandum is non-deductive. The model of ideology also deploys certain theoretical terms, “useful”, “hereditary transmission” and “selective mechanism”, that, like theoretical terms in evolutionary theory, are both supervenient and multiply realizable. I will focus on “usefulness” and “selective mechanism” insofar as they play a central explanatory role in the model.
The fact that an idea is “useful” means that it performs a certain function for a certain person or class. I opt for a deflationary account of function, in which the function of information is simply the causal contribution that information makes to the efforts of agents and groups, individuals and classes. To say that “usefulness” is supervenient is to state that “usefulness” is synchronically determined by local forces and the state of the physical and social environment. After all, ideas are not spiritual entities, and mental systems are in some sense physical systems. To say that “usefulness” is multiply realizable is to state that a causal contribution is not necessarily implemented by a single physical kind. There is a disjunction of heterogeneous social and physical properties and relations that implement the same causal contribution, and there may be no one property or relation in virtue of which an idea is useful. [18] The explanatory power of such a supervenient term is that it allows us to generalize and collect over vastly different causes of ideological evolution. [19]
The central theoretical term of “usefulness” is probabilistic. The probability of usefulness is measured and supported by viability and fertility, and allows for an explanation of why certain ideas survive and reproduce more than others. On the basis of the probability, a certain trajectory of evolution is expected. However, usefulness is not a deterministic quantity: it is possible for usefulness to decrease in a lineage over time, and it is also possible for less useful ideas to have higher viability and fertility. There is an analogue of drift when it comes to ideological evolution. In some cases, for entirely “chancy” reasons, useful ideas die out and less useful ideas survive; the less useful ideas reproduce and the less useful lineage wins out. As with all “cumulative-functional” mechanisms, there are two levels of “forces”: the forces of selection and the selective forces, the statistical-level result for a population of ideas and the process of local causes “bombarding” individual producers. The differential usefulness of an idea explains population-level results, and there are many different kinds of local selective forces that can lead the trajectory of evolution either to fulfill or to deviate from expectation.
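The drift analogue can be illustrated with a small-population sampling sketch (a Wright-Fisher-style resampling; all numbers are illustrative assumptions): even when one variant is more useful, chancy transmission among a small group of carriers regularly lets the less useful variant win out.

```python
import random

def fixation(n, p, w_p, w_not, rng):
    """Resample a small population of n idea-carriers each generation,
    weighting transmission by usefulness. Returns True if the more
    useful variant P reaches fixation, False if it is lost."""
    while 0 < p < 1:
        weight = p * w_p / (p * w_p + (1 - p) * w_not)
        count = sum(rng.random() < weight for _ in range(n))
        p = count / n
    return p == 1.0

rng = random.Random(42)
runs = 500
# P is 10% more "useful", yet in a population of only 10 carriers...
losses = sum(not fixation(n=10, p=0.5, w_p=1.1, w_not=1.0, rng=rng)
             for _ in range(runs))
# ...an appreciable fraction of runs end with the useful variant lost.
```

This is why usefulness, like fitness, only licenses probabilistic expectations: the population-level force of selection can be swamped by sampling noise when the relevant population of carriers is small.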
I spoke of mechanisms of variation and selection, and of types of variation and selection. I categorized ideology as a “cumulative-functional” mechanism. [20]In the model of ideology, the selection of ideas occurs at each moment of the cycle of production, transmission, and reproduction. There are selectors of problems in the phase of production, selectors of information in the phase of transmission, and selectors of information and problems in the phase of reproduction. It is also possible in a particular case for these roles of selection to be combined. At each stage, there are different causes for the selection of the problem and/or information. People select problems, which are posed by the social environment and the ideological lineage, and produce information that ostensibly provides a solution. People select information, then actively interpret, modify, and disseminate it. People select the information and come up with a new problem and solution. As we can see, the overall process of selection occurring at each stage, P–Pr–Tr–Rp, is supervenient on different causes and micro-realizations. At each stage there are many possible micro-realizations, different social environments with different people, different methods and motives, so that the process of selection is multiply-realizable. Yet in each case of selection, there are people who select the problem or information because the solution makes a causal contribution to achieving their ends.
Let us take a look at the case of nanotechnology, a relatively simple process of production and reproduction. Here, there are selectors of problems at the stage of production, and selectors of information and new problems at the stage of reproduction. The social environment and the causes that affect selection at the two stages differ. The laboratory in capitalist society is a hierarchical institution with a division of labor. There is someone or some group charged with the selection of a problem, but there is also some other person or group charged with financing the research. There are many causes operating on the person that inform the selection of the problem and the financing of the research. From the point of view of the producer, there is the “internal” hereditary influence of previous lineages and problems, but also the present “internal” influence of contemporary work and problems in shaping the selection of a problem. There is also the external influence of the social environment and the institutions devoted to finance, which may even enter into the thought-process of selection. There is the issue of “progressive research programs” versus “regressive research programs”, and the fact that no scientifically-minded individual wants to participate in what seems to be a declining scientific project. Such an issue is not easily attributed to heredity or to the environment, as well-financed research programs can give the appearance of progress. All of this goes into informing the motive for the selection of a problem, and the new theory is a means to answering the problem. The capitalist firm has a simple motive in the appropriation of the problem and the production of variant information on making the technology operative. There is the external influence of effective demand for technological goods, and hence the state of the social environment. There are people charged to assess the market and the potential for profitability.
There are people in the R&D department charged with the selection of the information, with the selection of a technological problem that presupposes in some way the scientific problem and solution, and with the reproduction of a technological or engineering solution. Still others must assess whether the engineering solution is the optimal economic solution. The problem is selected because the firm is a nanotechnology firm which produces commodities and is competing for market-share. The specific paper, and the engineering problem and solution, are selected because the firm expects to make a profit off the technological good.
We see in this simple process of production and reproduction that the causes acting on the selectors at each stage differ, and the motives differ as well. Yet the product of the process is that an idea, a theory, information on nanotechnology, was selected for fertility and viability on the basis of its usefulness: because the theory is the best theory on nanotechnology and makes possible an engineering solution for the commodity-design, the information contained in the solution is reproduced, increasing in frequency and distributing itself across capitalist firms.
One thing I have not spoken greatly of is the competition of ideas. I will simply say that an idea competes within and outside of its (multiple) lineage(s); that is, there is inter- and intra-species competition. I have spoken of intra-species competition. At the level of production, related problems and variant solutions compete for selection. Within a species, variant information competes for attention as it competes for limited resources, and on average the more useful idea outcompetes others for attention and for a portion of the limited resources. The problems and solutions compete for selection. As any scientist or academic knows, competition does not always select the most promising or interesting problems and solutions, and too often “earthly” considerations enter into the selection. But they can also affirm that it is generally the problems or solutions most promising for the military and/or capitalist firms that receive healthy investment.
Differences:
Up until now we have focused on clarifying the conditions of the model of ideology. We tried to show the similarity of mechanism and of explanatory structure between natural selection and ideology. Now I want to focus on some of the many salient, general differences between natural selection and ideology, and present some brief critiques.
The general line of difference is the difference in the character of the “object explained”. Although a lineage of information is “cumulative-functional”, it shows a greater degree of intentionality than the completely meaningless, chance accumulation of functional traits that is necessary to make sense of the apparent “design” produced by natural selection. Ideas, after all, are intentional means whereby human beings affect their environment and themselves. They are “designed” solutions to problems posed by the social and natural environment, and they are not transmitted in haphazard ways but are consciously transmitted.[21]
The most serious difference in explanatory structure revolves around the question of heredity and information.
In biology, the hereditary aspect is well understood. Yet with ideology there are no hereditary structures and processes that replicate and transmit genetic information to offspring. There are no structures and processes that specify a “genotype”, a sort of “plan” for information-construction and the regulation of that process, with a specific mechanism for “building up” the “phenotype” of observable products. The “genotype” of language is not the same kind of entity as the genotype of an organism. As an indication, it does not have the same kind of stability, and it is affected by the social environment in ways that allow for modification and change. There do not seem to be localizable structures and processes in the body, an information-soma division, which make possible the replication and transmission of information.[22] This is connected with the relative instability of informational structures and material in the face of the environment. As many biologists have remarked, the capacity for learning can be a superior strategy for survival and reproduction: if changing information required a mutation for each change, the costs would exceed those of the learning-mechanism. The capacity to change and modify ideas so that they can adjust to present, and even future, problems means that the “code”, the proverbial genotype, is constantly changing.
This makes defining species, and tracking phylogenetic trees, much more difficult. In biology, similarity of genetic information often reflects species-membership, and this makes it relatively easy to identify when members are part of the same species, and to track down the ancestor-species and the relations between ancestor-species. In ideology, similar information in general does not reflect species-membership, because an idea is part of many species. This generalization stems from the many-many relation between parents and offspring that generally holds when it comes to ideas. Sexual reproduction is a two-to-one relation, asexual reproduction a one-to-one relation. A producer has many influences; they are affected by information from many different sources, and not simply by different ideas but by social environments and requirements, to a degree which is not the case with genetic information. They also influence many different people, and can have many people in their audience, many students or colleagues or workers. An idea may be part of many species, and then enter into many lineages. Also, there is no empirical necessity that ideas track back to an origin. In general, there is no “tree of ideology” as we find a tree of life in evolutionary theory.
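The structural point, that many-parent influence yields a directed graph rather than a tree, can be made concrete with a small sketch. The influence graph below is entirely hypothetical (the idea names are my own illustrative labels, though they echo the lineages mentioned in note [19]); it shows that an idea can be reached from a single descendant along multiple distinct paths, which no tree permits.

```python
# Hypothetical influence graph: each idea maps to the ideas that shaped it.
# A node may have many parents, so the structure is a directed acyclic
# graph, not a phylogenetic tree.
influences = {
    "nanotech_theory": ["computer_science", "biotech", "info_theory"],
    "computer_science": ["logic", "engineering"],
    "biotech": ["biology", "engineering"],
    "info_theory": ["logic"],
}

def ancestors(idea, graph):
    # Collect every idea reachable upstream of the given idea.
    seen = set()
    stack = [idea]
    while stack:
        node = stack.pop()
        for parent in graph.get(node, []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# "engineering" and "logic" are each reached along two distinct paths,
# so there is no unique lineage back to an origin.
print(sorted(ancestors("nanotech_theory", influences)))
```

Tracing ancestry in such a graph yields a set of influences rather than a single branch, which is exactly why species-membership and a “tree of ideology” are so hard to define here.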
A representation of the differing trees may clarify the issue: the first ideological tree assumes a common origin, the second a non-common origin. The empirical possibility is left open for both options.
Representation of trees
The difficulty in defining and tracking species-membership obviously makes it difficult to say what any variant information is competing with. It would seem that there is at best “multiple-species” membership, with both intra- and inter-species competition across those multiple species.
The model of ideology awaits a “genetic theory” of the sort of information that the model treats. Until then it remains difficult at almost every level to precisely state key concepts, like lineage, ideological tree, species, and competition.
I want to offer a brief admonition before passing on to my concluding remarks.
Adaptationism
Not sure of the empirical hypothesis
Conclusion:
I have presented a model of ideology built on natural selection. I stated and tried to defend the material analogy between the two mechanisms, i.e. the similarity of mechanism between the two. I stated that they were both “cumulative-functional” mechanisms. I modeled natural selection as a non-deterministic “if/then” model, and demonstrated the similar explanatory structure of the analogue conditions and concepts. In short, the model predicts that if information is transmittable, then more useful information will increase in lineages, and distribute on the basis of differential usefulness for different people and classes. I stated a major difference, which I believe is the point where the model breaks down most obviously.
I have made many concessions and qualifications to the model, but there are intuitions that suggest that the antecedents indeed obtain in many cases, and that this model explains quite a lot about the state of the intellectual world today. People do not just choose ideas that are useful, but ideas that are useful relative to their social environment. Their social environment includes the social relations in which people interact. Hence, people often choose ideas that are useful to their particular position and class, and even though this may be more robustly the case with “social ideas”, it is still the case with all ideas under capitalism. The distinction is only one of degree and not kind. The problems posed by the social environment are problems that revolve around struggle and conflict. The nanotechnologist is particularly well-funded by a public institution because their work has a potential for technological development. The public institution is itself in competition for a limited pool of resources and workers. The firm is emerging in a nascent market and wants to achieve market-share. It is consecrated to making a profit and beating out competitors, and to this end it deploys the idea. To be blunt, the firm uses the idea because it is profitable to do so, and in order to make a profit it must deploy good science. The usefulness of an idea for the career of a nanotechnologist is not so distant from the usefulness of an idea for the profit of the capitalist firm. A psychologist produces a work about the inevitable and empirically-confirmed progress under capitalism. A billionaire thinks the work is rather well confirmed, and that its moral evaluations stand upon solid foundations. The billionaire declares themselves the student and fan of the psychologist and sponsors all sorts of events, publications, and institutions to work with and reproduce the material in all sorts of new ways.
The usefulness of an idea for the career of a psychologist is not so distant from the usefulness of an idea for the power and prestige of the billionaire. Yet in a world where eight people own more wealth than three billion, and where the conditions behind the trends only seem to worsen, to preach such a message of quietism quite obviously corresponds well to the experience of a billionaire, and in the end serves that billionaire well in the best of all worlds for them. [23]
There is a reason that the mechanism of ideology ends up promoting, especially in the realm of “social ideas”, the ideas most consonant with the experience and demands of the ruling classes (the capitalists, elected politicians, bureaucrats, celebrities, officers of the military and defense establishment). The ruling class, by definition, has power, and hence power over ideas. They are producers, selectors, transmitters, and reproducers of information to an extent that the working class is not. They own and direct the institutions consecrated to the production, selection, transmission, and reproduction of information. Hence it would make sense that ideas are selected, transmitted, and reproduced on the basis of what is useful for them, and that information at large evolves based on their demands and interests. And what is useful for them is simply navigating a world where they dominate others and must perform certain actions to adapt to the situation. As Warren Buffett once said, “oh there is a class-war, and my class is winning it”.
[1] Another obvious difference is that the ideas of people in the ruling class are not homogeneous, and that the ruling class is not one thing, but that there are many ruling classes, “wielding”
[2]
[3] Information seems to be an empty term which is thrown around today as a substitute for “data”. I do not use “x is information” in the sense of being equivalent to “x is a fact”, since one can record information about e.g. Babylonian mythology without judging whether it is factual. I ground my usage in Shannon’s mathematical model of information. This model includes three basic components: source, transmitter, signal. The source is some piece of the world that can be in multiple states. A transmitter generates a signal that is received by an audience to reduce uncertainty about what is happening at the source. Hence it is possible to see that information can fail to record the state of the source. Successful information transmits a signal that records the state of the source. The reality is that no “scientific information” is fully successful.
[4] Harre taxonomy
[5]
[6]
[7] If there is “growth” in, say, some period, then is this because the assumptions are correct and the conclusion warranted, or is it because the conclusion is correct and there are other assumptions plus the assumptions
[8] not to be confused with “source” in information theory.
[9]
[10]
[11]
[12] However, I take it that Darwin contributed three main historical theses, which is to say theses about a specific time and place: 1) that there is a tree of life; 2) that natural selection is the predominant mechanism in the production of evolution, both the smaller-scale and the larger-scale events of speciation; and 3) that natural selection is the predominant mechanism for the “adaptedness” we observe of living organisms on earth. Ultimately, the fact that organisms do exhibit such a property as heritable variation in fitness, and that evolution by natural selection is the predominant mechanism in the production of evolution, are historical hypotheses. It is tempting to think of Darwin’s contribution as nothing more than an evidence-laden statement of the model given above, but this is not precise.
[13]
[14] It might be argued that the model is ontologically neutral as to whether there were or are organisms. In parallel, Boyle’s law is agnostic as to whether there are gas molecules. The law only states that in an enclosed system of gas, if the volume of the gas increases (or decreases) then the pressure of the gas decreases (or increases).
[15] Elucidation of essay
[16]
[17]
[19] However, usefulness is most clearly understood when there is a restricted disjunction of local forces, social properties, relations, and most importantly social mechanisms that specify the manner in which the information is produced, transmitted, and reproduced. For instance, to go back to the example on nanotechnology, there are institutions devoted to the production of scientific knowledge. There are past influences of the build-up of information on nanotechnology on the producers. We would look to the multiple lineages of computer science, biotechnology, information theory, and engineering for a report on the genotype. There are then the present influences of the social environment in transmitting the built-up information to the producers and in posing new problems for the producers, new problems on nanotechnology. The producers understand the alternative theories, and produce a new theory which contains variant information that differs in truth-value and
[20] How can an idea contain all the information it does? Take relativity?
[21] degree should not be overemphasized
[22]
[23]