REFUTATIONS AND REBUTTALS VI


The Probability of Life

Richard B. Hoppe

The relationship between chance events and ordering processes is a subtle and important issue in discussions of the process of organic change over time. There are two basic questions. First, just how likely (or unlikely) is it that self-replicating organic systems could have spontaneously (naturally) arisen from nonliving matter? Second, how can the ordered structures we see in living systems occur when the source of novelty, genetic mutation, is random in the respect that the theory of biological evolution assumes?

The creationists have simple answers to these questions. The emergence of self-replicating molecules from a primordial inorganic world is too improbable to have occurred without supernatural intervention, and the order we see in the world cannot be based on random processes. As a consequence, for order to occur in nature some purposive agency has to have imposed it -- there must have been a supernatural Designer to create the ordered world. Creationists offer elaborate calculations of probabilities that purportedly demonstrate that it is exceedingly unlikely that a random process could have produced the highly organized structures of life, and therefore a supernatural agency must have been at work. Because life in all its forms is so improbable, the creationists argue, it is necessary to invoke a supernatural creator to "scientifically" account for the emergence of life.

This essay will describe and criticize the creationists' account of the probability of the natural (spontaneous) emergence of self-replicating molecules from inanimate matter. The next essay will describe the interaction between random variation and naturalistic ordering processes that modern science holds responsible for the ordered world of living things.

One-Step Evolution

The first issue to be addressed is just how improbable the spontaneous emergence of self-replicating polymeric molecules from an inorganic world really is. The creationists would have us believe that it is so improbable as to be effectively impossible.

The most simple-minded of the creationist arguments are based on the assumption that scientific theories of the origin of complex self-replicating molecules suppose that they emerged in one giant leap from an unordered collection of their constituents. For example, Morton (1980), a cell biologist, and Oller (1981), a linguist, both make this assumption in calculating numbers that they think demonstrate the unbelievable improbability of life.

Denton (1985), a physician, makes the same assumption.

I cannot resist an extended parenthetical comment. In the course of a discussion of why highly developed living systems supposedly cannot evolve, Denton compares living systems to machines. On page 315, one finds the following three locutions, in this order: (1) "Living organisms are complex systems, analogous in many ways to non-living complex systems." (2) "Given the close analogy between living systems and machines ...". (3) "The fact that systems [machines] in every way analogous to living organisms cannot undergo evolution ...". I have added emphasis to the three quotations to show how on one page Denton promotes machines from being similar to living systems in many (unspecified) respects to being virtually identical with them in all respects. Denton's science is similar to his rhetoric: superficially persuasive but fundamentally unsound.

In this essay, however, I will focus on the most recent version of the public school edition of the textbook of creationism, "Scientific Creationism" (Morris, 1981), which has 10 pages of calculations that supposedly show the utter improbability that living systems could have emerged from inorganic precursors.

Morris looks at two approaches. The first is the "probability of a complex system arising instantly by chance" (p. 59). Morris calculates that the probability that a system composed of 100 different parts taken in just one order could assemble itself in one step by chance is 1 in 10^158, a vanishingly small number. Morris indulges in some hyperbole about the number of electrons in the universe and the number of seconds in 30 billion years to illustrate just how large 10^158 is and how correspondingly small 1/10^158 is. He then concludes, "It is thus inconceivable (to any but a doctrinaire evolutionist) that a living system could ever be formed by chance" (p. 62).
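Morris' figure is easy to check: the number of distinct orderings of 100 different parts is simply 100 factorial, which has 158 digits. A quick sketch of the arithmetic (my illustration, not part of Morris' text):

```python
import math

# Morris' "1 in 10^158" is just the reciprocal of the number of ways
# to arrange 100 distinct parts in one particular order: 100 factorial.
orderings = math.factorial(100)
print(orderings)             # about 9.33 x 10^157
print(len(str(orderings)))   # 158 digits, hence "1 in 10^158"
```

The number is indeed astronomically large; the dispute, as argued below, is over whether it is relevant.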

But of course his conclusion (and his calculation) is a non sequitur, because not even the most "doctrinaire evolutionist" supposes that a completely formed complex molecule instantly emerged, fully functional, from a pile of 100 separate parts. That is, nothing in the theory of evolution suggests that an organism or a DNA molecule started as a bunch of elementary parts and then, in one giant leap, appeared in its final form. Only "scientific" creationists suppose that to have happened, courtesy of divine intervention.

Almost without exception, biologists of the Darwinian persuasion have held that the fundamental process underlying the emergence of order in life is cumulative incremental evolution, not single-step evolution. That is, evolution proceeds step by small step, with each step in the series forming a stable system, more-or-less well adapted to its selective context and serving as a base from which other small steps might be made. Both points are critical: The steps are small, and the systems that survive selection at each small step are selectively fit in their current context -- they are not analogous to half-assembled airplanes but are themselves functioning systems. As a consequence, the calculation of the improbability of a 100-atom molecule spontaneously assembling itself out of its constituent atoms in one giant leap is simply irrelevant, and Morris’s conclusion regarding the improbability of the evolution of living systems is invalid, at least on his first premise.

Evolution by Accretion

Morris then takes up the (very slightly) more sophisticated hypothesis that "complex molecules might have been slowly and gradually synthesized (sic) by some process analogous to natural selection. That is, a system might advance from one part to a two-part system, then from two parts to three parts, and so on. At each step, if the combination turned out to be advantageous in its immediate environment, it would survive and then be ready to undertake the next step" (p. 63). Morris goes on to say that "On the other hand, if a particular step turned out to be harmful, as it normally would (since a random change in a well-functioning system normally would decrease its efficiency), then presumably the molecule would be destroyed, or at least would be inhibited from further advance" (p. 63). Morris infers from this that "each trial step would have to be immediately beneficial; there could be no failures or backward steps" (p. 63). It looks to me as though Morris assumes that only one molecule is involved, strange though that seems. If that one molecule fails to "advance," then the whole enterprise is doomed in Morris's story.

Morris provides calculations showing that the step-by-step formation of a particular string of elements, on the assumption that 1,500 accretions must occur one by one in a particular order, is again so improbable as to be practically impossible, and concludes that it therefore could not have happened naturally.


The usual 'evolutionist' response to this sort of argument has been to say things like, 'Well, even though the probability of such an event is exceedingly low, given untold billions of opportunities and a billion or so years even the most improbable events could have occurred.' I find this to be a profoundly unsatisfying and probably mistaken answer to the creationist argument.

Dawkins (1987), in an otherwise excellent book, has a different response, to the general effect that given the rarity of life (it has only occurred once in the universe, as far as we now know), we are a priori justified in accepting the emergence of life as an improbability. I find this equally unsatisfying as an answer. My own response is based on considering the assumptions underlying the probability calculations of creationists. I will use Morris' "evolution by accretion" scheme to illustrate how one might approach the problem of the probability of life.

In order to illustrate what I think is an appropriate approach, consider a model system in which there is a large collection of four different constituents -- call them A, B, C, and D. The target structure is the ordered string A-B-C-D.

Applying Morris' accretion approach to this system, the probability of forming an A-B-C-D string is the product of the individual probabilities of attaching a B to an A, forming an A-B subassembly; then attaching a C to this subassembly, forming A-B-C; and finally attaching a D to the three-element subassembly, forming the target string A-B-C-D. If the original constituents are equally distributed in the primordial medium, if the probability of attachment of any two elements is equal, and if the attachments are independent events, then Morris' approach calculates the probability of formation of an A-B-C-D string as the product of the individual probabilities of combination. The probability that a given A will form an A-B pair is 1/4; that the A-B pair will form an A-B-C triplet is 1/4; and that the A-B-C will form an A-B-C-D is 1/4. The overall probability of an A-B-C-D string is the product of the individual probabilities, or 1/4 * 1/4 * 1/4 = 1/64, or so Morris' reasoning goes.
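The 1/64 figure can be confirmed with a small Monte Carlo sketch of the one-by-one accretion model, under the same equal-probability assumptions Morris makes (the simulation is my illustration, not Morris'):

```python
import random

random.seed(1)

def follow_one_a(trials=200_000):
    """Track a single A through one-by-one accretion: at each of three
    steps, a uniformly random constituent attaches to the growing
    string.  Return the fraction of trials ending as exactly A-B-C-D."""
    hits = 0
    for _ in range(trials):
        string = ["A"]
        for _ in range(3):
            string.append(random.choice("ABCD"))
        if string == ["A", "B", "C", "D"]:
            hits += 1
    return hits / trials

print(follow_one_a())   # close to 1/64 = 0.015625
```

Note that this simulates the fate of one tracked A, which, as argued next, is exactly what Morris' calculation describes.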

Note that in this illustration, like Morris, I ignore the fact that the constituents will have different chemical properties if they are different atoms and/or molecules, and that therefore the assumption that any constituent can attach to any other constituent (including itself) with equal probability is false. I make the assumption for the sake of the illustration, to show the falsity of Morris' argument, since he makes the same assumption (though he doesn't say so) in his calculations. This issue, along with others that affect the probability of combining constituents, is discussed below.

Unfortunately for the accretion calculations, 1/64 is not the probability that an A-B-C-D string will form in the primordial medium, because Morris' scheme is not an accurate representation of the number of steps required to form an A-B-C-D string. Rather, if we randomly select an A and follow it around, 1/64 is the probability that the A we happened to pick will wind up as the initial element in an A-B-C-D string in a one-by-one accretion model. That is the probability that Morris' approach actually calculates, not the overall probability of formation of an A-B-C-D string.

Consider more closely what happens in our model system. In generation 0 we have just the four constituents, the individual A, B, C, and D elements. As soon as they start interacting they begin to form two-element subassemblies. Assuming that mixing is random and the probability of any element attaching to any other element (including itself) is equal, as Morris must given his mathematics, then in generation 1 we would find 14 different strings: the original four one-part constituents and 10 distinguishable two-element strings formed from them. (In a three-dimensional medium A-B is identical to B-A, and Morris' assumptions also allow pairs like A-A. The 16 possible ordered pairs of four elements reduce to 10 when equivalent pairs are counted once.) The relative proportions of the various single elements and two-element combinations depend on the nature of the attaching mechanism assumed.
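The count of ten distinguishable two-element strings can be verified mechanically; they are just the unordered selections of two elements from {A, B, C, D} with repetition allowed (a sketch using the standard library, my illustration):

```python
from itertools import combinations_with_replacement

# A-B and B-A are the same string in a well-mixed medium, and doubles
# like A-A are allowed, so the two-element strings are the unordered
# pairs (with repetition) drawn from the four constituents.
pairs = list(combinations_with_replacement("ABCD", 2))
print(len(pairs))   # 10
print(["-".join(p) for p in pairs])
```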

As mixing continues, a large assortment of strings will form in generation 2, including the 14 of generation 1 plus all the strings that can be formed from their combinations. I have not worked out just how many different subassemblies make up the generation 2 population, but there are at least two dozen structurally distinguishable offspring based on just the pair A-B.

Of most interest is that one of the generation 2 strings is the target string A-B-C-D, formed by combining the strings A-B and C-D formed in generation 1. Thus, instead of the three transitions in the Morris model (A to A-B to A-B-C to A-B-C-D), it requires only two transitions (generation 0 to generation 1 to generation 2) to produce the target string.

The reduction in the number of 'steps' that results from the combination of multi-element subassemblies is more pronounced as the length of the target string increases. For example, to form a seven-element string requires six steps in Morris' model, while the model that allows combining subassemblies requires just three steps: both seven- and eight-element strings will be formed in generation 3 by combinations of the three-element and four-element strings formed in generation 2. Given this approach, rather than the full 1,499 steps required by Morris' one-element-at-a-time-in-order model, it takes just 11 generations to assemble strings composed of 1,500 elements. In general, when subassemblies are allowed to combine under Morris' rules, the minimum number of generations required to produce a string of at least length N is the integer y for which 2^y ≥ N > 2^(y-1), that is, the base-2 logarithm of N rounded up.
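Since string length can at most double each generation (two subassemblies joining), the generation count is just the base-2 logarithm of the target length, rounded up. A sketch (my illustration):

```python
import math

def generations_needed(n):
    """Minimum generations to build a string of length >= n when any
    two subassemblies may join in each generation (length at most
    doubles per generation, starting from single elements)."""
    return math.ceil(math.log2(n))

print(generations_needed(7))      # 3, as in the seven-element example
print(generations_needed(1500))   # 11
```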

None of this speaks directly to the question of the likelihood of the emergence of self-replicating molecules, though; it merely shows that the creationist argument in this respect is fatally flawed. It still might be the case that the emergence of self-replicating molecules is an exceedingly improbable event, but the creationists have not demonstrated that by their calculations.

In order to estimate the likelihood that self-replicating systems of some specified degree of complexity could spontaneously emerge, we cannot merely calculate probabilities as though all combinations were equally probable. We must carefully consider the factors that can affect that process. What are the variables and questions that must be considered in order to estimate the likelihood of the formation of a large self-replicating molecule? Though I am not a biochemist, I can think of at least six that seem relevant.

The first variable to consider is the target string itself. Just how large and complex must a molecule be in order to display the functional property of self-replication? Given the combinatorial explosion the model sketched above allows, this is not a very powerful variable in determining the probability of formation unless there is some sort of structural instability associated with these larger molecules.

Second, what is the size of the class of potential targets? A subtle assumption in all the material above is that there is only one specific large molecule that is capable of self-replication. That assumption is very likely to be false; the class of potential self-replicating molecules may be very large.

Consider an analogy: how many different lay-down seven no-trump bridge hands are there? Surprisingly, the answer is "lots!" One could have three Ace-King-Queen-Jack sets plus the remaining Ace, for four distinguishable hands. One could hold the Ace-King-Queen of all four suits plus any one of the remaining Jacks, for four more ways. One could have the Ace-King-Queen of a suit, plus any seven of the ten remaining members of that suit (120 combinations of ten things taken seven at a time, times four suits), plus the other three Aces. This yields another 480 such hands, for a total of 488 hands so far that meet our functional criterion, and we have not yet considered still other combinations that would yield the same functional outcome: a lay-down seven no-trump hand.

Of course, the odds against getting any one particular hand are very slim; the odds of being dealt a hand with 13 specified cards are about 1 in 6.35 x 10^11. If we impose the further constraint that the 13 cards be dealt in a particular order, then the odds of getting a specific hand fall to approximately 1 in 4 x 10^21. However, as the number of functionally equivalent targets increases, the odds of getting at least one member of the set also increase. (Note, by the way, that these odds apply to any specified 13-card hand, not just high-powered hands. A specific mediocre bridge hand is exactly as improbable as a specific seven no-trump hand. One is dealt more mediocre hands merely because there are more members in the class of mediocre hands.)
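The hand counts and odds above are straightforward to verify (a check of the arithmetic, my illustration):

```python
import math

# The three families of lay-down seven no-trump hands described above.
three_akqj_sets = math.comb(4, 3)        # choose which 3 suits get A-K-Q-J
akq_everywhere = math.comb(4, 1)         # A-K-Q of all suits, plus any one Jack
long_suit_hands = 4 * math.comb(10, 7)   # A-K-Q plus 7 of the 10 low cards, per suit

print(three_akqj_sets + akq_everywhere + long_suit_hands)   # 488

# Odds of one specified 13-card hand, unordered and dealt in order:
print(math.comb(52, 13))   # 635,013,559,600 -- about 6.35 x 10^11
print(math.perm(52, 13))   # about 3.95 x 10^21
```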

Similarly, any of a potentially large set of self-replicating molecules might have emerged; we happen to be composed of one particular member of that set. But life could conceivably have been composed of another sort of self-replicating molecule had it emerged before the precursor of DNA, or had it been selectively superior sometime during a competition with DNA's precursor. DNA's precursor just happened to be the member of the set of potential outcomes that occurred. The larger the target set, the greater the probability that one of its members will occur, though the probability of occurrence of an individual member of the set does not change.
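The point about target-set size is just the arithmetic of a union of rare events: each member stays equally improbable, but the chance that some member occurs grows with the size of the set. A toy illustration (the probability value is arbitrary, chosen only for the demonstration):

```python
# Each of k functionally equivalent targets forms, independently, with
# the same tiny probability p.  The chance that at least one forms is
# 1 - (1 - p)^k, which grows with k even though p itself is unchanged.
p = 1e-9
for k in (1, 1_000, 1_000_000):
    at_least_one = 1 - (1 - p) ** k
    print(f"{k:>9} targets -> P(at least one) = {at_least_one:.3g}")
```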

Third, what are the constituents the process started with, and what was their distribution? We know from laboratory work, analyses of meteorites, and indirect data from objects in space that carbon-based (organic) molecules readily form in natural contexts. In particular, amino acids (constituents of proteins) and purines and pyrimidines (constituents of DNA and RNA) appear in laboratory simulations of plausible early terrestrial chemical environments, and complex organic molecules occur in comets.

Laboratory experiments have not yet produced a self-replicating molecule, but we know that the process did not have to start from scratch with raw atoms of the essential elements -- carbon, hydrogen, oxygen, nitrogen, and so on. Larger subassemblies were available, either formed on earth or deposited here by comet falls (Kunzig, 1988). The larger the subassemblies available, the fewer the steps required to reach the size and complexity necessary for self-replication, though again, given the combinatorial possibilities, this does not seem to be a particularly powerful limiting variable.

Fourth, are all combinations of constituents equally likely, or are there biases in the physics and chemistry of the various subassemblies that make some combinations more likely to occur?

As noted above, the physical and chemical properties of different constituents -- atoms and molecules -- produce biases in the probabilities of combinations of various constituents. Not all chemical combinations are equally likely, and these biases distort the equal-probability assumption Morris made in his calculations, making some paths more or less likely to occur. Furthermore, physical barriers, e.g., membrane-like films, can impose asymmetries on chemical processes, and the amphiphilic molecules that form biological membranes are not highly complex in structure, though their functioning is far from simple. Some pathways of assembly may have been more likely to be taken than others, due to the physics and chemistry of the constituents. Targets near those pathways would be more likely to emerge than targets on other, less favored pathways. Did our precursor lie on or near a chemically favored pathway? If so, the odds in its favor increase.

Fifth, were there other elements in the chemical environment of the subassemblies that affected their activity? In particular, were weak catalysts present that preferentially (albeit perhaps only slightly) facilitated one rather than another combination? Were some combinations self-catalyzing? If so, then once again the odds change in favor of the pathways that were catalyzed, and the overall odds of a hit increase if a member of the functionally equivalent target set was on or near a catalyzed pathway.
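The effect of such biases can be sketched with a toy Monte Carlo model (my illustration, with weighted attachment probabilities standing in for hypothetical chemical or catalytic bias, not real chemistry): even a mild bias toward the constituents a target needs raises the target's odds well above the equal-probability figure.

```python
import random

random.seed(2)

def p_target(weights, trials=200_000):
    """Probability that a tracked A accretes B, then C, then D, when
    each attachment draws a constituent with the given relative
    weights over A, B, C, D."""
    hits = 0
    for _ in range(trials):
        if random.choices("ABCD", weights=weights, k=3) == ["B", "C", "D"]:
            hits += 1
    return hits / trials

print(p_target([1, 1, 1, 1]))   # unbiased: about 1/64
print(p_target([1, 3, 3, 3]))   # mild bias toward B, C, D: distinctly higher
```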

Sixth, were there factors in the inorganic physical environment that preferentially preserved or supported one rather than another subassembly? For example, hypotheses concerning the role of clay crystals as physical supports or even structural templates for large organic molecules suggest that the physical environment (as distinguished from chemical catalysis) could have played a role in preferentially selecting some pathways, again changing the shape of the probability distribution.

There are undoubtedly more questions and variables that must be addressed, but the preceding seem to my chemically naive eye to be among the central issues that must be considered in assessing the likelihood of the emergence of large self-replicating molecules from inanimate matter. Note that for the most part they do not act independently, but interact in potentially complex ways to determine the overall likelihood of self-replicating systems.

Furthermore, in order to estimate the likelihood of occurrence in a given time span, one must know something about the rates of both formative and degenerative reactions, as well as likely concentrations of constituents, mixing processes, and so on. The bottom line is that while I do not know how likely or unlikely the emergence of self-replicating systems actually is, I do know that a candidate answer must adequately address at least the above issues and questions.

It is obvious why simple-minded calculations like those of Morris are not merely irrelevant: they are actively misleading. They tell us nothing useful about the probability of the formation of self-replicating systems because they ignore both what we already know and what we must seek to learn; they don't even address the real questions at issue. Morris provides a caricature of a process, shows that it could not work, and concludes that the world was created by a supernatural agency. But all he has really demonstrated is the inadequacy of his caricature and the depth of his own ignorance.

Finally, it is enormously important that the last three factors described above -- physical and chemical variables and the role of the inorganic context -- do not merely affect the overall probability of the emergence of self-replicating molecules, they preferentially bias the process by changing the odds against certain molecules and/or biochemical pathways. That is, they alter the shape of the probability distribution across molecules, favoring some over others. This differential favoring, this change in the shape of the probability distribution, is an ordering process. It is due solely to natural causes, and in sculpting the probabilistic landscape it begins the ordering that leads to life.

Note: I am grateful to Professor Owen York (an organic chemist) and Professor John Lutton (a biochemist), both in the Department of Chemistry at Kenyon College, and to Professor Ken Smail of the Department of Anthropology, for reviewing a draft of this essay. Any errors which remain are my responsibility.


References

Dawkins, R. (1987). The Blind Watchmaker. New York: Norton.

Denton, M. (1985). Evolution: A Theory in Crisis. Bethesda, MD: Adler & Adler Publishers, Inc.

Kunzig, R. (1988). Stardust memories: Kiss of life. Discover, March, 1988, 68-76.

Morris, H. M. (1981). Scientific Creationism. San Diego: CLP Publishers.

Morton, J. S. (1980). Glycolysis and alcoholic fermentation. ICR Impact Series #90, December, 1980.

Oller, J. W. (1981). Words: genetic and linguistic problems. ICR Impact Series #92, February, 1981.

(c) 1988 by Richard B. Hoppe
