How to Evolve Specified Complexity by Natural Means


 

Matt Young
 

Adjunct Professor of Physics, Colorado School of Mines, Golden, Colorado, 80401

Physicist, retired, US National Institute of Standards and Technology
 


This is Revision 1 of an article originally submitted to Metanexus [Young 2002a] and incorporates a response to William Dembski's [2002] criticism.
 
 

1. Many years ago, I read this advice to a young physicist desperate to get his or her work cited as frequently as possible: Publish a paper that makes a subtle misuse of the second law of thermodynamics. Then everyone will rush to correct you and in the process cite your paper. The mathematician William Dembski has taken this advice to heart and, apparently, made a career of it.

2. Specifically, Dembski tries to use information theory to prove that biological systems must have had an intelligent designer. [Dembski, 1999; Behe et al., 2000] A major problem with Dembski's argument is that its fundamental premise - that natural processes cannot increase information beyond a certain limit - is flatly wrong and precisely equivalent to a not-so-subtle misuse of the second law.

3. Let us accept for argument's sake Dembski's proposition that we routinely identify design by looking for specified complexity. I do not agree with his critics that he errs by deciding after the fact what is specified. That is precisely what we do when we look for extraterrestrial intelligence (or will do if we think we've found it).

4. Detecting specification after the fact is little more than looking for nonrandom complexity. Nonrandom complexity is a weaker criterion than specified complexity, but it is adequate for detecting design and I will show below that there is no practical difference between nonrandom complexity and Dembski's criterion. [For other criticisms of Dembski's work, see especially Fitelson et al., 1999; Korthof, 2001; Perakh, 2001; Wein, 2001. For the Argument from Design in general, see Young, 2001a.]

5. Specified or nonrandom complexity is, however, a reliable indicator of design only when we have no reason to suspect that the nonrandom complexity has appeared naturally, that is, only if we think that natural processes cannot bring about such complexity. More to the point, if natural processes can create a large quantity of information, then specified or nonrandom complexity is not a reliable indicator of design.

6. Let us, therefore, ask whether Dembski's "law" of conservation of information is correct or, more specifically, whether natural processes can create large quantities of information.
 

7. Entropy. We begin by considering a machine that tosses coins, say five at a time. We assume that the coins are fair and that the machine is not so precise that its tosses are predictable.

8. Each coin may turn up heads or tails with 50 % probability. There are in all 32 possible combinations:
 

H H H H H
 

H H H H T
 

H H H T T


and so on. Because the coins are independent of each other, we find that the total number of permutations is 2 x 2 x 2 x 2 x 2 = 2^5 = 32.

9. The exponent, 5, is known as the entropy of the system of five coins. The entropy is the number of bits of data we need to describe the arrangement of the coins after each toss. That is, we need one bit that describes the first coin (H or T), one that describes the second (H or T), and so on till the fifth coin. Here and in what follows, I am for simplicity neglecting noise.

10. In mathematical terms, the average entropy of the system is the sum over all possible states of the quantity - p(i) x log p(i), where p(i) is the probability of finding the system in a given state i and the logarithm is to the base 2. In our example, there are 2^N permutations, where N is the number of coins, in our case, 5. All permutations are equally likely and have probability 1/2^N. Thus, we may replace p(i) with the constant value p = 1/2^N. The sum over all i of - p(i) x log p(i) becomes 2^N x [-p x log(p)], which is just equal to -log(p). That is, the entropy of N coins tossed randomly (that is, unspecified) is -log(p), or 5.
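To make that sum concrete, here is a minimal Python sketch of my own; it enumerates every arrangement by brute force, which is feasible only because the system is so small:

    from itertools import product
    from math import log2

    def coin_entropy(n_coins):
        # Sum, over every possible arrangement, of -p(i) * log2 p(i);
        # for fair, independent coins every arrangement is equally likely.
        states = list(product("HT", repeat=n_coins))    # all 2^N permutations
        p = 1.0 / len(states)
        return sum(-p * log2(p) for _ in states)

    print(coin_entropy(5))    # prints 5.0, the entropy of five randomly tossed coins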
 

11. Information. At this point, we need to discuss an unfortunate terminological problem. The entropy of a data set is the quantity of data - the number of bits - necessary to transmit that data set through a communication channel. In general, a random data set requires more data to be transmitted than does a nonrandom data set, such as a coherent text. In information theory, entropy is also called uncertainty or information, so a random data set is said to have more information than a nonrandom data set. [Blahut, 1991] In common speech, however, we would say that a random data set contains little or no information, whereas a coherent text contains substantial information. In this sense, the entropy is our lack of information, or uncertainty, about the data set or, in our case, about the arrangement of the coins. [Stenger, 2002; for a primer on information theory, see also Schneider, 2000]

12. To see why, consider the case where the coins are arranged by an intelligent agent; for example, suppose that the agent has chosen a configuration of 5 heads. Then, there is only one permutation:
 

H H H H H


Because 2^0 = 1, the entropy of the system is now 0. The information gained (the information of the new arrangement of the coins) is the original entropy, 5, minus the final entropy, 0, or 5 bits.
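The arithmetic can be checked in a line or two; again, this is a sketch of my own, nothing more:

    from math import log2

    n_states_before = 2 ** 5    # 32 equally likely arrangements before the agent acts
    n_states_after = 1          # the agent permits only H H H H H

    information_gained = log2(n_states_before) - log2(n_states_after)
    print(information_gained)   # 5.0 bits, as computed above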

13. In the terminology of communication theory, we can say that a receiver gains information as it receives a message. [Pierce, 1980] When the message is received, the entropy (in the absence of noise) becomes 0. The information gained by the receiver is again the original entropy minus the final entropy.

14. In general, then, a decrease of entropy may be considered an increase of information. [Stenger, 2002; Touloukian, 1956] The entropy of the 5 coins arranged randomly is 5; when they are arranged in a specified way, it is 0. The information has increased by 5 bits. As I have noted, this definition of information jibes with our intuitive understanding that information increases as randomness decreases or order increases. In the remainder of this paper, I will use "entropy" when I mean entropy and "information" when I mean decrease of entropy. Information used in this way means "nonrandom information," and I will show below how it is related to Dembski's complex specified information.

15. Dembski correctly notes that you do not need a communication channel to talk of information. In precisely the sense that Dembski means it, the system of coins loses entropy and therefore gains information when the intelligent agent arranges the coins. That is, a nonrandom configuration displays less entropy and therefore more information than a random configuration. There is nothing magic about all heads, and we could have specified any other permutation with the same result.

16. Similarly, the genome contains information in nonrandom sequences of the four bases that code genetic information. It also contains a lot of junk DNA (at least as far as anyone is able to deduce). [Miller, 1994] If we write down the entire genome, we at first think it has a very high entropy (many bases with many more possible combinations). But once we find out which bases compose genes, we realize that those bases are arranged nonrandomly and that their entropy is 0 (or at least very much less than the entropy of an equivalent, random set of bases). That is, the genes contain information because their entropy is less than that of a random sequence of bases of the same length.
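As a toy illustration only (assuming, unrealistically, that the four bases are equally likely in a random sequence and that a gene's sequence is completely specified), each base position of a random sequence carries 2 bits of entropy, so a specified stretch of L bases represents 2L bits of information in the sense just defined:

    from math import log2

    BASES = "ACGT"
    gene_length = 1000    # a hypothetical gene of 1000 bases, chosen only for illustration

    # Each position of a random sequence can be any of the 4 bases: log2(4) = 2 bits apiece.
    entropy_random = gene_length * log2(len(BASES))    # 2000 bits
    entropy_specified = 0                              # one particular sequence, fully specified

    print(entropy_random - entropy_specified)          # 2000 bits of information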
 

17. Natural selection. Suppose now that we have a very large number, or ensemble, of coin-tossing machines. These machines toss their coins at irregular intervals. The base of each machine is made of knotty pine, and knots in the pine sometimes leak sap and create a sticky surface. As a result, the coins sometimes stick to the surface and are not tossed when the machine is activated.

18. For unknown reasons, machines that have a larger number of, say, heads have a lower probability of malfunctioning. Perhaps the reverse side of the coins is light-sensitive, corrodes, and damages the working of the machine. For whatever reason, heads confers an advantage to the machines.

19. As time progresses, many of the machines malfunction. But sometimes a coin sticks to the knotty pine heads up. A machine with just a few heads permanently showing is fitter than those with a few tails permanently showing or those with randomly changing permutations (because those last show tails half the time, on average). Given enough machines and enough time (and enough knots!), at least some of the machines will necessarily end up with five heads showing. These are the fittest and will survive the longest.

20. You do not need reproduction for natural selection. Nevertheless, it must be obvious by now that the coins represent the genome. If the machines were capable of reproducing, then machines with more heads would pass their "headedness" to their descendants, and those descendants would outcompete machines that displayed "tailedness." After a few generations, there would be a preponderance of headedness in the genomes of the ensemble.

21. Thus do we see a combination of regularity (the coin-tossing machines) and chance (the sticky knots) increasing the information in a genome.
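Here, for concreteness, is a toy simulation of the ensemble just described. The parameter values and the particular survival rule are arbitrary choices of mine, made only so the effect shows up quickly; it is a sketch of the idea, not a model of any real biology. When it runs, the surviving machines end up with a clear excess of coins stuck heads-up: a nonrandom, lower-entropy configuration that no agent arranged.

    import random

    # Toy version of the coin-tossing-machine ensemble of paragraphs 17-20.
    N_MACHINES, N_COINS, N_STEPS = 10000, 5, 50
    P_STICK = 0.05            # chance, per toss, that a coin lands on a sappy knot and sticks
    P_FAIL_PER_TAIL = 0.02    # chance of malfunction, per tail currently showing

    # A machine is a list of [face, stuck] pairs; every coin starts free.
    machines = [[["H", False] for _ in range(N_COINS)] for _ in range(N_MACHINES)]

    for _ in range(N_STEPS):
        survivors = []
        for coins in machines:
            for coin in coins:
                if not coin[1]:                          # stuck coins are no longer tossed
                    coin[0] = random.choice("HT")
                    coin[1] = random.random() < P_STICK
            tails = sum(face == "T" for face, _ in coins)
            if random.random() > P_FAIL_PER_TAIL * tails:  # more tails, more malfunctions
                survivors.append(coins)
        machines = survivors

    stuck_faces = [face for coins in machines for face, stuck in coins if stuck]
    print("surviving machines:", len(machines))
    print("stuck coins per survivor (bits of entropy lost):",
          len(stuck_faces) / max(len(machines), 1))
    print("fraction of stuck coins showing heads:",
          stuck_faces.count("H") / max(len(stuck_faces), 1))   # noticeably above one half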
 

22. Explanatory filter. Dembski's explanatory filter is a flow chart that is designed to distinguish between chance and design. The coin-tossing machines would escape Dembski's explanatory filter and suggest design where none exists, because the filter makes a false dichotomy between chance and design. Natural selection by descent with modification is neither chance nor design but a combination of chance and law. Many self-organizing systems would also pass through Dembski's filter and "prove" design where none exists. Indeed, the intelligent designauts give short shrift to self-organization, an area where they are most vulnerable.
 

23. The 747 argument. Nonrandom information can thus be generated by natural causes. In order to quantify the meaning of specified complexity, Dembski defines complex specified information as nonrandom information with 500 bits or more. He claims that complex specified information could not appear naturally in a finite time and argues that, therefore, life must have been designed. What about that?

24. You will hear the argument that there is a very small chance of building a Boeing 747 by tossing the parts into the air and expecting them to fall down as a fully assembled airplane. Similarly, the argument goes, there is a very small chance of building a complex organism (or, equivalently, a genome) by chance. The analogy is false for at least two reasons.

25. First, airplanes and mousetraps are assembled from blueprints. The arrangement of the parts is not a matter of chance. The locations of many of the parts are highly correlated, in the sense that subsystems such as motors are assembled separately from the airplane and incorporated into the airplane as complete units. All airplanes and mousetraps of a given generation are nominally identical. When changes are made, they are apt to be finite and intentional. This is one reason, by the way, that Michael Behe's mousetrap [Behe, 1996] as an analogy for an irreducibly complex organism is a false analogy. [Young, 2001b]

26. Birds and mice, by contrast, are assembled from recipes, not blueprints. The recipes are passed down with modification and sometimes with error. All birds and mice of a given generation are different. When changes are made, they are apt to be infinitesimal and accidental.

27. When Dembski appeals to specified complexity, he is presenting the 747 argument in a different guise. He presents a back-of-the-envelope calculation to "prove" that there has not been enough time for complex specified information to have accumulated in a genome. The calculation implicitly assumes that each bit in the genome is independent of all others and that no two changes can happen simultaneously.

28. Creationists used to argue, similarly, that there was not enough time for an eye to develop. A computer simulation by Dan Nilsson and Susanne Pelger [1994] gave the lie to that claim: Nilsson and Pelger estimated conservatively that 500,000 years was enough time. I say conservatively because they assumed that changes happened in series, whereas in reality they would almost certainly have happened in parallel, and that would have decreased the time required. Similarly with Dembski's probability argument: Undoubtedly many changes of the genotype occurred in parallel, not in series, as the naive probability argument assumes.
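To see why the serial assumption matters so much, here is a back-of-the-envelope illustration with numbers I have simply made up:

    # Made-up numbers, purely to illustrate the point about series versus parallel.
    n_changes = 1000                  # independent changes needed
    generations_per_change = 500      # generations for one change to spread, on average

    in_series = n_changes * generations_per_change    # each change waits for the one before it
    in_parallel = generations_per_change              # all changes spreading at once (idealized)

    print(in_series, "generations in series versus roughly", in_parallel, "in parallel")
    # 500000 versus 500: the serial assumption inflates the required time a thousandfold.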

29. Additionally, many possible genomes might have been successful; minor modifications in a given gene can still yield a workable gene. The odds of success are greatly increased when success is fuzzily defined. The airplane parts could as well have assembled themselves into a DC-10 and no one would have been the wiser. Dembski's analysis, however, ignores the DC-10 and all other possibilities, and in effect assumes that the only possible airplane is the 747. More specifically, by assigning a probability to a specific outcome, Dembski ignores all other possible outcomes and thereby calculates far too low a probability.
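A small numerical sketch, with numbers I have invented purely for illustration, shows how strongly the definition of success drives the answer:

    # Invented numbers, to show how the definition of the target drives the probability.
    bits = 100
    p_one_specific_genome = 2.0 ** -bits       # the "747 only" calculation: one exact target
    n_workable_genomes = 2.0 ** 90             # suppose a vast number of variants would also work
    p_any_workable_genome = n_workable_genomes * p_one_specific_genome

    print(p_one_specific_genome)   # about 8e-31: looks hopeless
    print(p_any_workable_genome)   # about 1e-3: a long shot, but no miracle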

30. In assuming that the genome is too complex to have developed in a mere billion years, Dembski in essence propagates the 747 argument. Organisms did not start out with a long, random genome and then by pure chance rearrange the bases until, presto, Adam appeared among the apes. To the contrary, they arguably started with a tiny genome. How that first genome appeared is another matter; I think here we are arguing about natural selection by descent with modification, not about the origin of life. No less quantitatively than Dembski, we may argue that the genome gradually expanded by well known mechanisms, such as accidental duplications of genes and incorporation of genomes from other organisms, until it was not only nonrandom, but also complex, that is, contained more than 500 bits. To put it as simply as possible, if an organism with a 400-bit genome incorporates an organism with a 300-bit genome, then the resulting organism has a genome of 700 bits. Similarly, if an organism with a 100-bit genome incorporates five other organisms with 100-bit genomes, the resulting genome has 600 bits. There is nothing to prevent either genome from growing even larger, either in theory or in practice. Dembski's law of conservation of information, which is really a law of conservation of complex specified information, can thus be rendered moot as regards an entire genome.

31. Even if the 500-bit limit had validity, then, it would have to be applied to individual genes or perhaps groups of genes rather than whole organisms - and then only if it can be shown that the bits in the genes in question mutated wholly independently of each other.

32. To see exactly what Dembski is doing, let us suppose that there are 2 manufacturers of jet engines and that they share the market equally. Then, in the absence of further information, we would assume that there is a 50 % chance that the engines of the 747 were made by Manufacturer A. Dembski, by contrast, would argue that Manufacturer A's engine has N parts that could have been bought from various subcontractors. He would assign a probability p(i) to each part and calculate the probability p = p(1) x p(2) x ... x p(N) that the engine exists in its present form. Since the engine has many parts, p is a very small number. Dembski would conclude that it is very unlikely that the 747 uses the engine of Manufacturer A. Indeed, he would think it extremely unlikely that the 747 has any engine at all.
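A caricature of that calculation, with part counts and probabilities of my own invention:

    # Invented figures: two manufacturers sharing the market; an engine of 300 parts,
    # each part assumed to have come from one of four possible subcontractors.
    p_market_share = 0.5                     # the sensible estimate: Manufacturer A makes half the engines

    n_parts = 300
    p_per_part = 1.0 / 4                     # probability assigned to each part's provenance
    p_part_by_part = p_per_part ** n_parts   # multiply the parts together, Dembski-style

    print(p_market_share)     # 0.5
    print(p_part_by_part)     # about 2e-181: "proof" that the engine can scarcely exist at all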

33. Even if complex specified information were a valid concept, it would not apply to the entire genome but only to specific genes. It is impossible to determine whether a specific gene is subject to the 500-bit limit, because the calculation depends on the unknown history of the gene (whether it contains duplicated segments, for example). I can, therefore, see no practical difference between specified complexity and nonrandom complexity. In distinguishing between specified and nonrandom complexity, I mean to imply that the concept of complex specified information is meaningless unless Dembski can demonstrate that the bits in a given gene mutated independently of each other, throughout the entire history of that gene; otherwise, the 500-bit limit does not apply.

34. At the risk of adding to Dembski's already complex terminology, let us define aggregated complexity. A complex entity is aggregated if it consists of a number of subunits, no one of which demonstrates specified complexity. Aggregated complexity may exceed 500 bits yet not be specified in the way that Dembski means it. Thus, given a gene or a genome with more than 500 bits, how will Dembski demonstrate that the information in that gene is truly specified and not simply aggregated? How will he demonstrate that my far simpler analysis is incorrect? If he can do neither, then complex specified information is at best a meaningless innovation and at worst a smokescreen to hide a simple misapplication of information theory.
 

35. Reversing entropy. The definition of entropy in information theory is precisely the same as that in thermodynamics, apart from a multiplicative constant. Thus, Dembski's claim that you cannot increase information beyond a certain limit is equivalent to the claim that you cannot reverse thermodynamic entropy. That claim, which has long been exploited by creationists, is not correct. The correct statement is that you cannot reverse entropy in a closed or isolated system. A living creature (or a coin-tossing machine) is not a closed system. A living creature thrives by reversing entropy and can do so in part because it receives energy from outside itself. It increases the entropy of the universe as a whole as it discards its wastes. Dembski's information-theoretical argument amounts to just another creationist ploy to yoke science in support of a religious preconception.
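The conversion is just a constant factor: thermodynamic entropy is S = k ln W, information entropy in bits is log2 W, so S = (k ln 2) x (entropy in bits). A small sketch, using only standard constants (nothing here is specific to Dembski's argument):

    from math import log

    K_BOLTZMANN = 1.380649e-23    # J/K

    def thermodynamic_entropy(bits):
        # S = k * ln(W) and bits = log2(W), so S = k * ln(2) * bits.
        return K_BOLTZMANN * log(2) * bits

    print(thermodynamic_entropy(5))    # about 4.8e-23 J/K for the five coins of the earlier example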
 
 

36. Acknowledgement: I thank William Dembski for taking the time to critique my earlier article and for making me formulate my argument in more detail. Many thanks to Victor Stenger, University of Colorado, and Chris Debrunner, Colorado School of Mines, for illuminating the concept of entropy, and to Brendan McKay, Australian National University, and Mark Perakh for their helpful comments. As far as I know, Victor Stenger was the first to recognize that Dembski's "law" of conservation of information is a misuse of the second law. Thanks also to Richard Wein for his comments and for the reference to his Metanexus article. In addition, Eugenie Scott, National Center for Science Education; Andrew Porter, Center for Theology and the Natural Sciences; and Larry Arnhart, Northern Illinois University, commented on the article and offered encouragement. Finally, many thanks to Andrew Porter for arranging to have this article posted.

Copyright © 2002 by Matt Young. All rights reserved.
 
 

References.
 

Michael J. Behe, 1996, Darwin's Black Box, Touchstone, New York.
 

Michael J. Behe, William A. Dembski, and Stephen C. Meyer, 2000, Science and Evidence for Design in the Universe, Ignatius, San Francisco.
 

Richard E. Blahut, 1991, "Information Theory," Encyclopedia of Physical Science and Technology, Academic, New York.
 

William A. Dembski, 1999, Intelligent Design: The Bridge between Science and Theology, Intervarsity, Downers Grove, Illinois.
 

William A. Dembski, 2002, "Refuted Yet Again! A Brief Reply to Matt Young," Metanexus, http://www.metanexus.net/archives/message_fs.asp?list=views&listtype=Magazine&action=sp_simple_archive_&page=1&ARCHIVEID=5414.
 

Brandon Fitelson, Christopher Stephens, and Elliott Sober, 1999, "How Not to Detect Design," Philosophy of Science, vol. 66, pp. 472-488.
 

Gert Korthof, 2001, "On the Origin of Complex Information by Means of Intelligent Design: A Review of William Dembski's Intelligent Design," Revision 2.3a, http://home.wxs.nl/~gkorthof/kortho44.htm, 2 November.
 

Kenneth Miller, 1994, "Life's Grand Design," Technology Review, vol. 97, no. 2, pp. 24-32, February-March.
 

Dan-E. Nilsson and Susanne Pelger, 1994, "A Pessimistic Estimate of the Time Required for an Eye to Evolve," Proceedings of the Royal Society of London, vol. 256, pp. 53-58.
 

Mark Perakh, 2001, "A Consistent Inconsistency," http://www.nctimes.net/~mark/bibl_science/dembski.htm, updated November 2001.
 

John R. Pierce, 1980, An Introduction to Information Theory, 2nd ed., Dover, New York.
 

Thomas D. Schneider, 2000, "Information Theory Primer with an Appendix on Logarithms," version 2.48, http://www.lecb.ncifcrf.gov/~toms/paper/primer, accessed November 2001.
 

Victor J. Stenger, 2002, Has Science Found God? The Latest Results in the Search for Purpose in the Universe, Prometheus Books, Amherst, NY, to be published.
 

Y. S. Touloukian, 1956, The Concept of Entropy in Communication, Living Organisms, and Thermodynamics, Research Bulletin 130, Purdue University, Engineering Experiment Station, Lafayette, Indiana.
 

Richard Wein, 2000, "Wrongly Inferred Design," Metanexus, http://www.metanexus.org/archives/message_fs.asp?list=views&ARCHIVEID=2654.
 

Matt Young, 2001a, No Sense of Obligation: Science and Religion in an Impersonal Universe, 1stBooks Library, Bloomington, Indiana; www.1stBooks.com/bookview/5559.
 

Matt Young, 2001b, "Intelligent Design is Neither," paper presented at the conference Science and Religion: Are They Compatible? Atlanta, Georgia, November 9-11, 2001, www.mines.edu/~mmyoung/DesnConf.pdf.
 

Matt Young, 2002a, "How to Evolve Specified Complexity by Natural Means," Metanexus, http://www.metanexus.net/archives/message_fs.asp?list=views&listtype=Magazine&action=sp_simple_archive_&page=1&ARCHIVEID=5349.
 

Matt Young, 2002b, "Yet Another Scientist Fails to Understand Dembski," Metanexus, to be posted.