Theism and Materialism

In Chapter 2 of Mind and Cosmos by Thomas Nagel, the author explores the typical positions held by proponents of theism and by proponents of evolutionary naturalism.  His focus is sharpened by analyzing the different ways each point of view attempts to make sense of human beings as part of a world that ought to be intelligible to us.

According to Nagel, theists appeal to a deity who is outside the natural order, but who nevertheless provides intention and directionality to the natural order and who assures us of the basic reliability of our observational capacity and our reasoning ability.  It is a reassuring position, but it comes at the cost of requiring a power outside the natural order.  It also suffers from the lack of any serious attempt to make human beings intelligible from within the natural order.

Evolutionary naturalists, on the other hand, claim that humanity is intelligible from within the natural order on the basis of science and reason.  But, again according to Nagel, the problem is that science and reason are themselves products of evolution, and we have no authority outside of ourselves to substantiate the reliability of our scientific understanding.  In Nagel’s terminology, evolutionary naturalism undermines its own claim of reliability.  Ultimately, the evolutionary explanations fail because the science we possess has failed to explain consciousness and therefore failed to explain why we should trust the judgments arising from our consciousness.

I think Nagel is stretching too far for a criticism of the evolutionary point of view.  Its main problem is the inability of science to explain consciousness.  Faulting evolution for its inability to provide reassurance that our reasoning is sound is a criticism that applies equally to the theist position.  Both positions are based on faith!  Theists have faith in God based on a religious community, and Darwinian evolutionists have faith in science based on the scientific community.  If anything, the evolutionary point of view has the advantage in that the scientific community is generally more unified and disciplined than the religious community.

The primary distinction between the two points of view, then, is the position and importance that each assigns to humanity.  Theism relies on a power outside the normal purview of science to explain and give meaning to human life and consciousness while evolution relies solely on current science at the expense of diminishing any essential or transcendent importance for human life and consciousness.

Nagel is searching for middle ground.  He wants an explanation for consciousness that does not rely on a power outside the natural order.  At this point in his book, I think he fails to see that any such explanation will rely on faith in something.  Whether that something is science or philosophy or some combination, it will still be an object of faith.  Given his constraint that there can be no power outside the natural order, his explanation would not be able to claim any more authority than evolutionary materialism.

From my point of view, a form of theism that provides a way for God to work through the natural order offers the best alternative.  The importance and discipline of science are maintained, while science is modified so that human life and consciousness have access to transcendent power for guidance and assurance.

Scientific reductionism ends at the quantum boundary, so the assumption of a transcendent consciousness working at the quantum level provides the needed adjustment to science while maintaining the entire scientific edifice based on empirical evidence and reductionist explanation.  And there is scientific evidence for an order-producing power working at the quantum level.  This evidence is being developed by the nascent scientific discipline of quantum biology.

The strongest evidence to date comes from quantum action during photosynthesis, but I expect much more evidence as quantum biology matures.  After all, isn’t all of physics based on quantum action?  The only alternative besides dualism would be a view that posits new scientific principles acting at the biological level.  But, it seems to me that there is too much continuity between chemistry and biology.  That continuity leaves little room for wholly new principles to be plausible.


Can the Philosophy of Mind Revise Science?

Thomas Nagel makes an astounding claim in his book, Mind and Cosmos.  That claim is that the entire edifice of science must change in order to accommodate the fact that human beings have evolved with minds that cannot be explained by science.  His reasoning in Chapter 1 goes like this:

“The great advances in the physical and biological sciences were made possible by excluding the mind from the physical world. This has permitted a quantitative understanding of that world, expressed in timeless, mathematically formulated physical laws. But at some point it will be necessary to make a new start on a more comprehensive understanding that includes the mind.”

“Mind, as a development of life, must be included as the most recent stage of this long cosmological history, and its appearance, I believe, casts its shadow back over the entire process and the constituents and principles on which the process depends.”

Nagel discounts intervention by an intelligent designer, but favors some sort of teleological explanation that can be contained within the laws of nature.  Presumably, Nagel’s teleological principle would modify the laws of physics so that those laws would be more likely to support the genesis of life and the evolutionary direction that he perceives.  What sort of modification could that be and still have the laws of physics support reductionism?  Although Nagel doesn’t require reductionism in his approach, I am adding the requirement of supporting reductionism because some form of causal reductionism is necessary to maintain the history of successful scientific explanation.

First, teleology is a philosophical position that attributes to nature the ability to proceed toward a final goal or objective.  That would seem to imply that whatever adjustment is made to physics, it would need to be sophisticated enough to correlate long-term implications with short-term actions.  Short-term actions that did not correlate strongly with the desired long-term outcome would need to be minimized.

Second, changing physics so that the entire structure of physics does not have to be rebuilt requires a rather subtle change.  One way that I have stated that change is as a bias in the laws of physics that favors life.  I think that still works in Nagel’s framework, although to be more compatible with Nagel, perhaps the bias also needs to favor consciousness.

Third, the place to insert such a subtle change so that it doesn’t greatly disturb the whole structure of science is at the most fundamental level.  For physics that would be at the quantum level.  And some physicists do argue that quantum physics is incomplete as it is now constituted.

There may be many such modifications, but the simplest modification that I can imagine would be a decision process in quantum physics that favors life and consciousness.  The decision process would need to produce the exact same results that quantum experiments currently confirm.  And it would need to provide for the action of biological molecules in the simplest organisms and in the most complex organisms.  Presumably such a bias would also greatly increase the likelihood that life originated from the available raw materials, either on Earth or nearby.

In such a framework where a decision process favoring life and consciousness has been added to quantum physics, the fundamental particles would be representatives of the decision process rather than mechanistic material particles.  Some explanation would be needed to differentiate the constituents of inorganic matter from living biological matter.

In order to explain mind with this new edifice, it will be necessary to explain how individual particles can bind together into larger entities that possess the attributes of a single mind.  That may not be easy, but it is probably easier than explaining how mind can emerge from a collection of mechanistic particles.

I believe that such a decision process requires an intelligent power at least as powerful as the human mind, but probably vastly more powerful since this power must encompass the entire scope and history of the Universe.  That is why I take the theistic position contrary to Nagel’s atheistic position.

I am looking forward to a more specific proposal from Nagel as I plunge into Chapter 2.

Has Scientific Reductionism Failed?

Yesterday, I began reading Thomas Nagel’s book, Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False.  This book has generated a lot of controversy and I wanted to comment on some of the author’s statements as I encountered them rather than waiting until I had finished the book.

In Chapter 1, Nagel lays out his basic argument.  He asserts that the central concept about the nature of the universe held by most secular-minded persons is not true.  That concept is that life began from a chemical accident several billion years ago and, once life began to evolve, proceeded by random mutation to develop new species, arriving at humankind, all within a time frame set by the age of the earth and the age of the universe.

According to Nagel, the reason that most secular-minded people hold this view is that many scientists present this view as the only possible scenario: “But among the scientists and philosophers who do express views about the natural order as a whole, reductive materialism is widely assumed to be the only serious possibility.”

Nagel goes on to say that reductive materialism has failed: “The starting point for the argument is the failure of psychophysical reductionism, a position in the philosophy of mind that is largely motivated by the hope of showing how the physical sciences could in principle provide a theory of everything.”

Later, Nagel qualifies this by saying he is mainly speaking about materialist reductionism as it applies to biology (and mind, presumably).  And here is where some clarification is needed.  Nagel uses several phrases to describe the type of reductionism he is speaking about.  In Chapter 1 they are: “psychophysical reductionism,” “physico-chemical reductionism,” and “materialist reductionism.”  What they all have in common is reductionism, so it will help to understand what reductionism is.

Reductionism is the idea that any complex entity can be completely understood and explained by analysis of its parts.  It is like peeling back the layers of an onion to reveal the innermost layer, which presumably is the fundamental layer from which everything can be explained.  Within the physical sciences this approach has been very successful.  The innermost, fundamental layer for the physical sciences is the layer described by the “Standard Model of Particle Physics.”  This model describes the fundamental particles, such as the electron and the quarks (the constituents of protons and neutrons), as well as the fundamental forces, such as the electromagnetic force.

The standard model has been very successful.  Its most recent achievement was the prediction and tentative confirmation of the Higgs boson, also known as the “God particle,” a nickname popularized by the title of Leon Lederman’s book rather than by working physicists.  So I was taken aback when I first read that reductionism had failed.

I think that Nagel is referring to the current inability to explain biology and particularly mind in terms of the features of the standard model.  I think that is an accurate statement:  living organisms cannot be fully understood or explained by appealing to their constituent particles and fundamental forces, if those entities are understood mechanically.

What I think is missing is the realization that the standard model may not be the most fundamental layer of scientific reductionism.  It is simply the layer that is best understood.  The standard model describes phenomena at the quantum boundary.  Its particles and forces are the smallest measurable entities on which science can perform experiments.  The components of the standard model are conceptual entities.  But they are conceptual entities that have a huge advantage over the layer beneath them: they are measurable.

One could argue that the quantum layer is more fundamental than the standard model.  The huge problem is that the conceptual entities of the quantum layer, quantum states, cannot be measured, even in principle.  Yet quantum states are the mathematical entities essential for the success of the standard model.  So who is to say that quantum states are any less real than electrons and protons?

At the quantum boundary, science has encountered the absolute limit on what can be measured.  So, in that sense, science has reached the limit of what it can confirm experimentally.  But, if one believes that the quantum world is real, then an entirely different picture emerges from the standard model.  Instead of mechanistic particles, the quantum world suggests that elementary particles are computed entities.  One does not need to attribute classical computation to these tiny bundles of energy.  What is important is that there exists a decisional process in the universe that determines the specific outcome whenever one of these particles participates in the transfer of energy from one place in space-time to another place in space-time.

In other words, the fundamental particles are more mind-stuff than material-stuff.  I think that counts as a success for scientific reductionism, not as failure.  Of course the problem is that one must make a leap of faith to the point of view that the quantum world represents reality.  That might be a leap too far for the many who have been trained in the classical view of reality.

Consciousness (Part 1)

So far, in this series on the evidence for a conscious, rational power working in and through the laws of nature, I have followed the trail of low entropy.  I have used a general notion of entropy where low entropy correlates with an increasing degree of order or where it correlates with an increasing concentration of energy.  Consequently, high entropy means a state of disorder or a state of energy dispersal, most often as wasted heat.  I began with the amazing state of low entropy (highly ordered, high energy concentration) in which the universe was created.

I followed the trail of low entropy through the complex of mathematically precise physical laws that represent the incredible ordering power of nature.  I spoke of lasers, superconductivity and photosynthesis as supreme examples of entropy lowering processes.  I looked at the incredibly diverse life processes, all based on DNA, RNA and protein synthesis, that would be impossible without the information coding capability and the molecular machines of the individual cell.  I described the computer-like processing capability of individual proteins and the inexplicable speed with which they fold into the precise shape for their purpose.

I have tried to avoid the teleological language of purposeful design, but when one looks at the trail from creation to conscious being, it is difficult to avoid the question.  Random chance cannot account for this remarkable journey.  The probabilities are just too small for undirected forces to have arrived at living beings that maintain low entropy and rely on entropy-lowering processes.  This implies, to me at least, that the laws of physics are favorable to life and consciousness.  What is it that has driven evolution to the point of prizing consciousness almost above other considerations?  Consciousness requires a huge energy budget; why should our brains deserve a 20 percent allocation of energy if not for their powerful entropy-lowering ability?

An incredible panoply of ordered life flows from the human imagination.  There is language, art, drama, literature, music and dance in addition to the social inventions of government, economic systems, justice systems, cultural institutions, family and kinship groups.  One could almost say that the creation of explicitly ordered social structures defines humanity.  And yet there is a profound puzzle in the pervasive human tendency to sow discord.  Why should that be?  Why are there wars, violence, terrorism, and dysfunctional social institutions if the human imagination can be so productive?

In discussing these and other questions of consciousness, I will attempt to follow my reductionist approach by relating emergent phenomena to the dynamics and properties of constituent components.  However, there will come a point where this approach will fail, and I will need to resort to different language to describe what I consider to be the key dynamic of consciousness: the self and its narrative.  Consciousness cannot be completely understood based on functional descriptions of biological or physical components.  But first, let me turn to the attempt to explain consciousness in terms of computation.

Considering that order emerges from entropy-lowering processes, it is odd that some observers think that consciousness and intelligence emerge from random, chaotic activity.  Pure randomness results in high entropy, so how can order be produced from chaos?  One such observer is Ray Kurzweil, a futurist, who has written a book titled The Singularity Is Near.  He states, “Intelligent behavior is an emergent property of the brain’s chaotic and complex activity.”  Neither he nor anyone else can explain how entropy-lowering intelligence can emerge from random, chaotic activity.  He does, however, distinguish intelligence from consciousness.  He cites experiments by Benjamin Libet that appear to show that decisions are an illusion and that “consciousness is out of the loop.”  Later, he describes a computer that could simulate intelligent behavior: “Such a machine will at least seem conscious, even if we cannot say definitely whether it is or not.  But just declaring that it is obvious that the computer . . . is not conscious is far from a compelling argument.”  Like many others, Kurzweil thinks that consciousness is present if intelligence can be successfully simulated by a machine.

Kurzweil is an optimistic supporter of the idea that the human brain will be completely mapped and understood to the point where it can be entirely simulated by computation.  He has predicted that this should occur in the fifth decade of the 21st century: “I set the date for the Singularity – representing a profound and disruptive transformation in human capability – at 2045.  The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.”  Kurzweil’s prediction is based on the number of neurons in the human brain and their many interconnections, arriving at a functional memory capacity of 10^18 bits of information for the human brain (10^11 neurons multiplied by 10^3 connections per neuron multiplied by 10^4 bits stored in each synaptic contact).
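As a sanity check, Kurzweil’s figures multiply out as follows (a quick Python sketch; the input numbers are his assumptions as quoted above, not established facts):

```python
# Back-of-the-envelope check of Kurzweil's brain-capacity estimate.
neurons = 10**11              # neurons in the human brain (his figure)
connections = 10**3           # synaptic connections per neuron (his figure)
bits_per_contact = 10**4      # bits stored in each synaptic contact (his figure)

total_bits = neurons * connections * bits_per_contact
print(f"{total_bits:.0e}")    # 1e+18

# Converting to petabytes: 8 bits per byte, 10^15 bytes per petabyte.
petabytes = total_bits / 8 / 10**15
print(petabytes)              # 125.0, i.e. "about 100 petabytes"
```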

Kurzweil welcomes this prospective technological leap as a great advancement in the intellectual potential for the world.  He writes about his vision for the world after the singularity which he names the fifth epoch: “The fifth epoch will enable our human machine civilization to transcend the human brain’s limitations of a mere hundred trillion extremely slow connections.”  He goes on to say that eventually this new paradigm for intelligence will saturate all matter and spread throughout the universe.  Kurzweil appears to have the opposite perspective from my own view which is that the universe began with consciousness and consciousness infused all matter from the beginning.

But other people look at Kurzweil’s predictions and are concerned.  I recently read an opinion piece by Huw Price in the New York Times about the dangers of artificial intelligence (AI).  Price was on his way to Cambridge to take up his newly appointed position as the Bertrand Russell Professor of Philosophy.  On the way, he had met Jaan Tallinn, one of the developers of Skype, who was concerned that AI technology would evolve to the point where it could replace humans and, through some accident, the computers would take control.  So Tallinn and Price joined up with Martin Rees, a cosmologist with a strong interest in biotechnology, to form a group called the Centre for the Study of Existential Risk (CSER).  I suspect that the group will focus more on the risk to human life posed by biotechnology than on AI, but the focus of Price’s column was the risk from artificial intelligence.

Professor Price presented the argument that, although the risk of such a computer takeover appears small, it shouldn’t be completely ignored.  Perhaps he has a valid point, but what are the empirical signs that such computer intelligence is near at hand?  Some might point to the victories in 2011 of IBM’s Watson computer over all challengers in the Jeopardy game show.  This was an impressive demonstration of computer prowess in natural language processing and in database searching, but did Watson demonstrate intelligence?  I think that Ray Kurzweil would answer yes.  To the extent that the Jeopardy game demonstrates intelligence, then, by that measure, Watson must be considered intelligent.

However, consider the following subsequent development.  In a recent news report, Watson was upgraded to use a slang dictionary called the Urban Dictionary.  As that source puts it,

“[T]he Urban Dictionary still turns out to be a rather profane place on the Web. The Urban Dictionary even defines itself as ‘a place formerly used to find out about slang, and now a place that teens with no life use as a burn book to whine about celebrities, their friends, etc., let out their sexual frustrations, show off their racist/sexist/homophobic/anti-(insert religion here) opinions, troll, and babble about things they know nothing about.’”  (From the International Business Times, January 10, 2013, “IBM’s Watson Gets A ‘Swear Filter’ After Learning The Urban Dictionary,” by Dave Smith.)

One of Watson’s developers, Eric Brown, thought that Watson would seem more human if it could incorporate slang into its vocabulary so he taught Watson to use the slang and curse words from the dictionary.  As the news report continued,

“Watson may have learned the Urban Dictionary, but it never learned the all-important axiom, ‘There’s a time and a place for everything.’ Watson simply couldn’t distinguish polite discourse from profanity.  Watson unfortunately learned all of the Urban Dictionary’s bad habits, including throwing in overly crass language at random points in its responses; in answering one question, Watson even reportedly used the word ‘bullshit’ within an answer to one researcher’s question. Brown told Forbes that Watson picked up similarly bad habits from reading Wikipedia.”

Perhaps the news story should have given us the researcher’s question so we could make our own decision about Watson’s epithet!  Eric Brown finally removed the Urban Dictionary from Watson.

In short, Watson was very good at what it was designed to do:  win at Jeopardy.  But it lacked the kind of social intelligence needed to distinguish appropriate situations for using slang.  It also appeared to lack a mechanism for learning from experience that some situations were inappropriate for slang or how to select slang words based on the social situation.  Watson was ultimately a typical computer system that had to be modified by its developers.  I know of no theoretical framework in which a computer system could maintain and enhance itself.

Now consider another facet of Watson versus the human Jeopardy contestant.  Our brain requires about 20% of our energy.  For a daily energy requirement of 2000 Calories, that amounts to 400 Calories for human mental activity, which works out to about 20 watts of power.  In terms of electricity usage, that is less than 6 cents per day in my area.  Somewhat surprisingly, the brain’s energy consumption does not depend much on one’s state of alertness; the brain uses energy at about the same rate even when you sleep.  Watson, in contrast, used 200,000 watts of power during the Jeopardy competition.  That computes to about $528 per day.  If computers are to compete with humans for evolutionary advantage, it seems to me that they will need to be much more efficient users of energy.
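The figures above can be reproduced with a little arithmetic.  The sketch below assumes an electricity price of $0.11 per kilowatt-hour, which is roughly what the quoted 6-cent and $528 figures imply; actual rates vary by region:

```python
# Energy and cost comparison: human brain versus Watson.
PRICE_PER_KWH = 0.11   # assumed electricity price, dollars per kWh

def daily_cost(watts):
    """Dollars per day to run a device continuously at the given wattage."""
    kwh_per_day = watts * 24 / 1000
    return kwh_per_day * PRICE_PER_KWH

# Human brain: 20% of a 2000-Calorie daily diet, spread over 24 hours.
brain_cal = 0.20 * 2000                  # 400 kcal per day
brain_watts = brain_cal * 4184 / 86400   # joules per day / seconds per day
print(round(brain_watts, 1))             # 19.4, i.e. about 20 W

print(round(daily_cost(20) * 100, 1))    # 5.3 cents per day
print(round(daily_cost(200_000)))        # 528 dollars per day for Watson
```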

In fact, the entire idea of comparing computers to human mental activity is absurd to many people.  Perhaps I have even encouraged this analogy by speaking of quantum computation relative to biological molecules.  But I think it will become very apparent that any putative quantum computation must be something quite unlike ordinary computer calculations.  The mathematician and physicist Roger Penrose thinks that the fact that human mathematicians can prove theorems is evidence for quantum computation and decisionality in human consciousness.  But he also thinks that such quantum computation must have capabilities that ordinary computers do not have.

John Searle, a philosophy professor at UC Berkeley, thinks that the current meme that the brain is a computer is simply a fad, no more relevant than the metaphors of past ages: the telephone switchboard or the telegraph system.  Professor Searle regards consciousness as a real subjective experience that is not open to objective verification.  It is therefore possible to explore consciousness philosophically, but not as an objective, measurable phenomenon.  Professor Searle is known for his example of the “Chinese Room,” in which a person who understands no Chinese follows rules for manipulating Chinese symbols well enough to answer questions posed in Chinese, yet, Searle claims, has no real understanding of what is being said.  Searle states, “. . . any attempt to produce a mind purely with computer programs leaves out the essential features of mind.”

Closely related to the “Chinese Room” is the Turing test, which seeks to determine whether a computer can simulate a human being well enough to fool another person.  In the Turing test, a person, the test subject, sits at a computer terminal which is connected either to another person sitting at a keyboard or to a computer.  The task of the test subject is to determine, by conversation alone, whether he or she is dialoguing with another person or a computer.  An actual contest, the Loebner Prize, has been held each year since 1991, with prizes awarded.  So far, no computer program has been able to fool the required 30 percent of test subjects.  Nevertheless, the computer program that fools the most test subjects wins a prize.  People also compete with each other, because half of the test subjects are connected to other persons, who must try to demonstrate some characteristic in the dialog that will convince the test subject that he or she is really talking to another person.  The person who does best at convincing test subjects that they are communicating with another person wins the “Most Human Human” award.  In 2009, Brian Christian won that prize and wrote a book about his experience: The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive.

One of Brian Christian’s key insights in his book is that human beings attempt to present a consistent self-image in any public or interpersonal encounter.  In a dialog with another person, there is a striving to get beyond the superficial in order to reveal something of the personality underneath.  But the revealed personality is not monolithic; there are key self-referential elements of the conversation that reveal other possibilities.  Nevertheless there is a strong commitment to an underlying self-image, even if that self-image is ambiguous:

“[The existentialist’s] answer, more or less, is that we must choose a standard to hold ourselves to. Perhaps we’re influenced to pick some particular standard; perhaps we pick it at random. Neither seems particularly ‘authentic,’ but we swerve around paradox here because it’s not clear that this matters. It’s the commitment to the choice that makes behavior authentic.”

Authentic dialog, therefore, contains elements of consistent self-image and commitment to that self-image in spite of ambiguity and paradox.  A strong sense of self-unity underlies the sometimes fragmentary nature and unpredictable direction that human discourse often takes.  This is very difficult for a computer to simulate.

I think the risk from AI is so minuscule that it doesn’t deserve the level of concern that Jaan Tallinn was portrayed as having in Huw Price’s article.  There are two main assumptions in the assessment of risk that are very unlikely to be substantiated.  One assumption is that sheer computing power will lead to a machine capable of human intelligence within any reasonable time frame.  The second assumption is that such a machine, if created, could somehow replace humans in an evolutionary sense.

There are two problems with the first assumption, one theoretical and one practical.  The theoretical problem is that there is a limit to the true, valid conclusions that any automated system can reach.  This limitation is called “Gödel incompleteness.”  It means that for any formal system powerful enough to draw useful conclusions, there will remain true statements that cannot be reached by computation alone.  The closely related result in computer theory is the “halting problem”: it is impossible to create a computer program that can decide, for every program and input, whether that program will halt, coming to completion with a valid result.  The practical manifestation of the halting problem is that there is no way to introduce complete self-awareness into computer systems.  One can create modules that simulate self-awareness of other modules, but the new module would not be self-aware of itself.  This limitation implies that human intelligence will always be needed to correct and modify computer systems.
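The diagonalization argument behind the halting problem can be sketched in a few lines of Python.  The `halts` decider below is purely hypothetical (no such program can exist); the sketch only verifies that any candidate decider must be wrong about its own “paradox” program:

```python
# Sketch of the classic diagonalization argument.  A decider `halts(f)`
# is supposed to return True exactly when calling f() would terminate.

def make_paradox(halts):
    """Build a function that does the opposite of whatever `halts` predicts about it."""
    def paradox():
        if halts(paradox):
            while True:       # decider said "halts" -> loop forever
                pass
        # decider said "loops forever" -> return immediately
    return paradox

# Whatever answer a claimed decider gives about its own paradox function,
# the paradox function does the opposite, so the decider is always wrong.
for verdict in (True, False):
    paradox = make_paradox(lambda f: verdict)
    predicted = verdict       # what the decider claims paradox will do
    actual = not verdict      # what paradox actually does, by construction
    assert predicted != actual

print("every candidate decider fails on its diagonal case")
```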

(Roger Penrose’s book, Shadows of the Mind, presents the case for quantum consciousness in detail.  A key part of his argument is that computers are fundamentally limited by “Gödel incompleteness.”  This implies, according to Penrose, that quantum coherence plays a key part in consciousness and that quantum calculations are capable of decisions exceeding the power of any ordinary computer calculation.)

The second problem with the first assumption is that it is very unlikely that a unified computer system with the computing power of the human brain can be developed in any reasonable time frame.  Professor Price doesn’t say what a reasonable time frame might be, but Ray Kurzweil does, placing the date for the singularity at 2045.  Kurzweil’s assumption is that the human brain contains storage for 10^18 bits (about 100 petabytes) of information.

In my previous post, I reported that Professor James Shapiro at the University of Chicago thinks that biological molecules, not the cell, are the most basic processing unit.  This implies that Kurzweil should be using the number of molecules in the brain rather than the number of neurons.  Assuming about 10^13 molecules per neuron, that increases the estimated human brain capacity to about 10^31 bits (ten trillion times Kurzweil’s figure)!  This concept of storing large volumes of data in biological molecules has been confirmed by recent research in which 5.5 petabytes of data were stored in one gram of DNA.  Keep in mind that we are speaking only of storage capacity (and only for neurons, omitting the glial cells) and not of processing power.  If the processing power of the biological molecule is aided by quantum computation, then we have no current method for estimating the processing power of the human neuron.

Assuming that processing power is on a par with storage capacity, and assuming that computer capacity and power can double according to Moore’s law (every two years – another questionable assumption because of quantum limits), then about 43 more doublings of storage capacity would be needed, or roughly 86 years beyond Kurzweil’s estimate of 2045.  That places the projection for Kurzweil’s “singularity” well into the twenty-second century.
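The doubling arithmetic is a quick back-of-the-envelope check (my own sketch, using the two capacity estimates discussed above):

```python
import math

# Doublings needed to grow from the 10^18-bit neuron-level estimate to
# the 10^31-bit molecule-level estimate, at one doubling every two years
# (Moore's law as assumed in the text):
kurzweil_bits = 1e18
molecular_bits = 1e31

doublings = math.log2(molecular_bits / kurzweil_bits)
years_beyond_2045 = 2 * doublings
print(round(doublings), round(years_beyond_2045))  # 43 86
```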

The second assumption is that sufficiently advanced machine intelligence, if it could be developed, would be able to replace humans through evolutionary competition.  I have already mentioned the energy-efficiency disadvantage of current silicon-based computers:  200 kilowatts for Watson’s Jeopardy performance versus 20 watts for human intelligence.  I have also described why computer algorithms cannot meaningfully modify themselves in an evolutionary sense.  I can also discount approaches based on evolutionary competition in which random changes are arbitrarily made to computer code.  I have seen too many attempts to fix computer programs by guesswork that amounts to little more than random changes in the code.  It doesn’t work for computer programmers, and it won’t work for competing algorithms!

My conclusion is that the main practical threat to human intellectual dominance will be biological and not computational (in addition to our own self-destructive tendencies).  That leaves open the possibility for biological computation, but that threat is subsumed by the general threat of biological genetic engineering and by the creation of biological environments that are detrimental to human health and well-being.

I have taken this lengthy excursion into the analysis of the computer / brain analogy in order to eliminate it as one path toward understanding consciousness.  The idea that computation can produce human consciousness is an example of functionalism:  the concept that a complete functional description of the brain will explain consciousness.  Human consciousness is a complex concept which resists empirical exploration.  Let’s look at the key problem.

David Chalmers is a professor of philosophy at Australian National University who has clearly articulated what has become known as the hard problem of consciousness.  In his 1995 paper, “Facing Up to the Problem of Consciousness,” he first describes the easy problems: explaining how the brain accomplishes a given function such as awareness, the articulation of mental states, or even the difference between wakefulness and sleep.  This last category, when pushed to consider different states of awareness, had previously seemed to me the most promising path toward understanding consciousness.

It has been known for some time that there are different levels of consciousness that are roughly correlated with the frequency of brain waves, which can be measured by electroencephalogram (EEG).  Different frequencies of brain waves have traditionally corresponded to different levels of alertness.  The frequency range that seems to hold the most promise for understanding consciousness is the gamma band, roughly 25 to 100 cycles per second (Hz, or Hertz), with 40 Hz usually cited as representative.  In 1990, Francis Crick (co-discoverer of the DNA structure) and Christof Koch proposed that oscillations in the 40 Hz to 70 Hz range were the key “neural correlate of consciousness.”  A neural correlate of consciousness is defined as any measurable phenomenon that can substitute for measuring consciousness directly.
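As a toy illustration of what "a dominant frequency" means here, the sketch below buries a 40 Hz sine wave in noise and picks it out with a naive single-frequency Fourier sum.  The synthetic signal and the candidate frequencies are invented for the example; this is not an EEG analysis method.

```python
import math, random

# Toy illustration: picking out a dominant 40 Hz "gamma" component from
# a noisy synthetic signal, much as EEG analysis identifies bands.
fs = 1000                                   # sampling rate, Hz
n = 2000                                    # two seconds of samples
random.seed(0)
signal = [math.sin(2 * math.pi * 40 * i / fs) + 0.5 * random.gauss(0, 1)
          for i in range(n)]

def power_at(freq):
    """Naive single-frequency DFT power estimate."""
    re = sum(x * math.cos(2 * math.pi * freq * i / fs)
             for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs)
             for i, x in enumerate(signal))
    return re * re + im * im

candidates = [10, 20, 40, 60, 80]           # Hz
dominant = max(candidates, key=power_at)
print(dominant)  # 40
```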

The neural correlate of consciousness is a measurable phenomenon, and measurable events are what distinguish the easy problems from the hard problem of consciousness.  The easy problems are amenable to empirical research and experiment; they explain complex function and structure in terms of simpler phenomena.  The hard problem, by contrast, raises a new question: how does the functional explanation of consciousness (the easy problems) produce the experience of consciousness?  Or, how does the experience of consciousness arise from function?  As Chalmers asks, why do we experience the blue frequency of light as blue?  Implicit in this question is the idea that consciousness is unified despite arising from distinct functions.  Color, shape, movement, odor, and sound all come together to form a unified experience; we sense that there is an “I” which has the unified experience and that this “I” is the same self that has had a history of similar or not so similar experiences.  My rephrasing of the hard question goes like this: how is it that we have a self with which to experience life?

Chalmers thinks that a new category for subjective experience will be needed to answer the hard question.  I think such an addition is equivalent to making consciousness a basic attribute of matter.  That is what panpsychism asserts, and I think the evidence from physics, chemistry, and biology supports the panpsychist view.  Panpsychism leads naturally to awareness, consciousness, and self-consciousness, and the concept of a self-reflective self is the natural conclusion of that progression.  David Chalmers thinks the idea has merit but differentiates his view from panpsychism, saying “panpsychism is just one way of working out the details.”

My next post will conclude this series and will directly present the theological question.

The Evidence from Evolution and Biology (Part 3)

In part 2 of this series on evolution and biology, I presented my analysis of the origin of life and my conclusion that life could not have arisen through random chance alone.  I have concluded, along with other observers, that the laws of physics and chemistry must be conducive to the creation of life and that such laws are evidence for a cosmic ordering power.  The question remains, however: what part does random chance play once life has been created?  In part 1 of this series, I raised the question of the role that random mutations play in natural selection.  In this part, I will present evidence that natural selection does not rely entirely on random mutation and that at least some portion of natural selection relies on directed mutation.

The most likely systematic way to create random changes in DNA is through copying errors.  One of the first researchers to deal rigorously with copying errors was Manfred Eigen with his “quasi-species” model.  In this mathematical model of natural selection, survival and fitness to survive are balanced against replication errors.  Here is Freeman Dyson’s description of the problem:

The central problem for any theory of the origin of replication is that a replicative apparatus has to function almost perfectly if it is to function at all. If it does not function perfectly, it will give rise to errors in replicating itself, and the errors will accumulate from generation to generation. The accumulation of errors will result in a progressive deterioration of the system until it is totally disorganized. This deterioration of the replication apparatus is called the “error catastrophe.”

Eigen’s model sets a theoretical limit on the allowable error rate necessary to avoid the “error catastrophe.”  It turns out that the maximum error rate is approximately the inverse of the number of DNA base pairs.  So for humans, with about 3.2 billion base pairs, the calculated maximum error rate is about 10^-9, or roughly one error per billion base pairs copied.  This is consistent with the actual error rate observed after proofreading and repair of the copied DNA.
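Eigen's rule of thumb is easy to tabulate.  A minimal sketch, using the M. genitalium genome size discussed later in this series and the human figure above:

```python
# Eigen's error-threshold rule of thumb: to avoid the error catastrophe,
# the per-base copying error rate must stay below roughly 1/L for a
# genome of L base pairs.
genomes = {
    "M. genitalium": 583_000,
    "human":         3_200_000_000,
}
thresholds = {name: 1.0 / length for name, length in genomes.items()}
for name, rate in thresholds.items():
    print(f"{name}: maximum error rate ~ {rate:.1e} per base per copy")
```

Note that the tolerable error rate shrinks as the genome grows, which is why high-fidelity proofreading machinery becomes more critical for larger genomes.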

But some copying errors will still survive.  What becomes of them?  James A. Shapiro is professor of microbiology at the University of Chicago.  In his book, Evolution: A View from the 21st Century, he writes, “Although our initial assumption is generally that cells die when they receive an irreparable trauma or accumulate an overwhelming burden of defects with age . . ., it turns out that a significant (perhaps overwhelming) proportion of cell deaths result from the activation of biochemical routines that bring about an orderly process of cellular disassembly known by the terms programmed cell death and apoptosis.”  In multicellular species, there is an elaborate signaling system for causing some cells to die.  This process is not necessarily disease related.  During embryonic development, some tissues grow that need to be eliminated before birth such as the webs that connect fingers and toes.  These are eliminated by apoptosis (programmed cell death).  This process also happens to embryonic neurons that do not have sufficient interconnections to be viable.  The implication of this response is that organisms have elaborate capability for determining when some cells need to be eliminated.  Some cancers are caused by problems with the apoptosis response.

Before proceeding to the evidence for directed mutation, I want to encourage an appreciation for the enormous orchestration that occurs inside the cell.  As an observer of the biological sciences, I am constantly amazed by the incredible variability and responsiveness of living cells.  If you have never watched videos or animations of cell division or other cellular processes, I urge you to do so.  They are simply fascinating!  And part of what makes them fascinating is the complex orchestration happening inside the cell.  Videos of mitosis are widely available, as are longer, more advanced animations of processes such as the cellular response to inflammation.

Another amazing aspect of cellular function and orchestration is protein folding.  In order for proteins to be effective, they must be folded into a three dimensional shape that is suited to their purpose.  As I explained in my previous post, the protein enzyme, sucrase, performs its function of splitting table sugar (sucrose) into the more easily metabolized glucose and fructose by “locking onto” the sucrose molecule.  Biologists have often used the analogy of a lock and key to explain the fitting of enzymes to their target molecules.

Protein misfolding plays a part in several disease processes, including Alzheimer’s disease, Creutzfeldt-Jakob disease (a human counterpart of “mad cow disease”), Tay-Sachs disease and sickle cell anemia.  In sickle cell anemia, the protein misfolds because of a mutation that alters the sequence of amino acids in one of the blood proteins needed to construct hemoglobin.  In the case of Creutzfeldt-Jakob disease, the cause of protein misfolding has not been conclusively identified, but it may be an “infectious protein” called a prion.  A prion is a normal human cell-membrane protein that has misfolded and that causes other normal proteins to misfold, resulting in the degeneration of brain tissue.  It would be unprecedented if Creutzfeldt-Jakob disease were conclusively proved to be caused by prions, because all other known disease agents involve replication of, or modifications to, DNA.

The instructions for protein folding are not contained in DNA (although the amino acid sequence is a crucial factor), but correct folding is absolutely necessary for good health.  DNA provides the peptide sequence information, and it is the task of the completed protein, after it has been manufactured by a ribosome, to fold into the correct shape.  In human cells there are regulatory mechanisms for determining whether a protein has folded into the correct shape; a misfolded protein can be detected and disassembled.  Some proteins have the help of chaperones, as mentioned in my previous post.  Computer simulations have recreated the folding of a short 39-residue segment of the ribosomal protein L9, identified as “NTL9.”  (The full protein, from Bacillus stearothermophilus, is just one of many that make up a ribosome.  It contains 149 amino acids and functions as a binding protein for the ribosomal RNA.)

Proteins fold at widely varying rates, from about 1 microsecond to well over 1 second, with many folding in the millisecond range.  The quickness with which most proteins fold led Cyrus Levinthal to observe in 1969 that if nature took the time to test all possible paths to the correct final configuration, it would take longer than the age of the universe for a protein to fold.  It is now thought that proteins fold in a hierarchical order, with segments of the protein chain folding quickly due to local forces, so that the final folding process need only configure a much smaller number of segments.  Nevertheless, simulations of protein folding often require huge computational resources to recreate the folding sequence.  One source estimated that it would take about 30 CPU-years to simulate one of the fastest-folding proteins; a slower protein would require 100 times the resources, or about 3,000 CPU-years.
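Levinthal's back-of-the-envelope argument is easy to reproduce.  The specific numbers below (three conformations per residue, one trial per picosecond, a 100-residue chain) are conventional illustrative choices, not figures from the text:

```python
import math

# Levinthal's argument: if each residue could take just 3 backbone
# conformations and each trial took 1 picosecond, exhaustively searching
# the conformations of a 100-residue protein would take vastly longer
# than the age of the universe.
residues = 100
conformations_per_residue = 3
trial_time_s = 1e-12                      # one picosecond per trial
universe_age_s = 4.3e17                   # ~13.8 billion years

total_states = conformations_per_residue ** residues
search_time_s = total_states * trial_time_s
print(f"{float(total_states):.1e} states, {search_time_s:.1e} s to search")
print(search_time_s / universe_age_s)     # enormously greater than 1
```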

So Levinthal’s question has not been completely answered.  How does nature enable proteins to fold so quickly?  The prevailing theory holds that the various intermediate states follow an energy funnel from a high-energy state (unfolded) to the lowest-energy state (folded).  Just as water seeks its lowest level, proteins seek the conformation that has the lowest energy.  The explanation for the wide variety of folding rates then rests on the nature of the path from the unfolded energy state to the folded energy state.  If the path is straight, the folding will be fast; if the path has energy barriers that must be circumnavigated or perhaps tunneled through, the folding will be slower.  These issues are still under active research, so there is currently no clear consensus.  But in a recent paper, two researchers conclude, “Our results show it is necessary to move outside the realm of classical physics when the temperature dependence of protein folding is studied quantitatively” (“Temperature dependence of protein folding deduced from quantum transition”; 2011, Liaofu Luo and Jun Lu).
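The funnel picture can be caricatured with a one-dimensional random walker: on a smooth quadratic funnel it reaches the bottom quickly, while the same funnel with added "energy barriers" slows it dramatically.  Everything here (the landscapes, temperature, step size) is an invented toy, not a model of real folding:

```python
import math, random

# Toy illustration of the energy-funnel picture: a Metropolis walker on
# a smooth funnel reaches the bottom in far fewer steps than on a rugged
# funnel whose path is interrupted by barriers.
def steps_to_fold(energy, start=5.0, target=0.2, temp=0.3, seed=1):
    """Count steps until the walker reaches |x| <= target (capped)."""
    random.seed(seed)
    x, steps = start, 0
    while abs(x) > target and steps < 1_000_000:
        trial = x + random.uniform(-0.1, 0.1)
        dE = energy(trial) - energy(x)
        # Accept downhill moves always, uphill moves with Boltzmann odds:
        if dE < 0 or random.random() < math.exp(-dE / temp):
            x = trial
        steps += 1
    return steps

smooth = lambda x: x * x                            # straight funnel
rugged = lambda x: x * x + 2 * (1 + math.cos(8 * x))  # funnel with barriers
print(steps_to_fold(smooth), steps_to_fold(rugged))
```

The rugged walker spends most of its time trapped in local wells, which is the toy analogue of the slower folding rates described above.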

I simply point out the similarity to the research on photosynthesis that showed that photons captured by photosynthesis follow a highly efficient path to the place where the photon’s energy can be turned into food production.  That research showed that quantum coherence played a significant role in the efficient transfer of energy and it was thought by analysts that a quantum computation of the energy landscape was a key part of the explanation.  It would not surprise me if quantum computation played a key role in protein folding by determining the most efficient path for navigating the energy funnel.  But without regard to whether quantum computation plays a role in protein folding, some scientists have not hesitated in applying the computer analogy to cell function.

Paul Davies is a physicist and science advocate who contrasted the vitalism of the 19th century with our understanding of biology today by saying, “The revolution in the biological sciences, particularly in molecular biology and genetics, has revealed not only that the cell is far more complex than hitherto supposed, but that the secret of the cell lies not so much with its ingredients as with its extraordinary information storing and processing abilities. In effect, the cell is less magic matter, more supercomputer.”

James A. Shapiro continues the computer metaphor when he writes about the cognitive ability of the cell. In his book, Evolution: A View from the 21st Century, he writes about the cell’s ability to regulate and control itself using a number of examples such as repair of damaged DNA, programmed cell death,  and regulation of the process of cell division.  He then continues to characterize the cell in computer-like terms (my emphasis):

The selected cases just described are examples where molecular biology has identified specific components of cell sensing, information transfer, and decision-making processes. In other words, we have numerous precise molecular descriptions of cell cognition, which range all the way from bacterial nutrition to mammalian cell biology and development. The cognitive, informatic view of how living cells operate and utilize their genomes is radically different from the genetic determinism perspective articulated most succinctly, in the last century, by Francis Crick’s famous “Central Dogma of Molecular Biology.”

Shapiro goes on to suggest modifications to the “Central Dogma of Molecular Biology.”  The “Central Dogma” summarizes the process of protein creation from RNA, which is transcribed from DNA.  Dr. Shapiro suggests that this one-way summary is too simple: there are many paths through which RNA and proteins can modify DNA.  The primary example of RNA modifying DNA comes from retroviruses, of which the well-known HIV virus is one example.  Retroviruses contain RNA that is translated into proteins that can convert the RNA into DNA and then insert the viral DNA into the host DNA.  It is estimated that between 5% and 8% of the human genome is comprised of DNA that has been inserted by retroviruses.

Dr. Shapiro also uses computer programming terminology when describing detailed biological function such as E. coli’s ability to metabolize lactose when glucose is not available: “Overall computation = IF lactose present AND glucose not present AND cell can synthesize active LacZ and LacY, THEN transcribe LacZY from LacP.”  That is a statement that could be implemented in almost any standard computing system with, of course, the proper functions available for “synthesize” and “transcribe,” etc.  I would also point out that a significant portion of a cell’s “cognitive” function is concerned with self-regulation.  In other words, there is a significant amount of self-knowledge available to the cell.
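Shapiro's "overall computation" translates almost directly into code.  A minimal sketch (the function name and arguments are mine, chosen for readability, not a real biochemistry API):

```python
# Shapiro's lac operon "overall computation," written as the Boolean
# rule it resembles: transcribe LacZY only if lactose is present,
# glucose is absent, and the cell can synthesize the active enzymes.
def transcribe_lacZY(lactose_present, glucose_present, can_synthesize_enzymes):
    """IF lactose AND NOT glucose AND enzymes can be made, THEN transcribe."""
    return lactose_present and not glucose_present and can_synthesize_enzymes

print(transcribe_lacZY(True, False, True))   # True  -> transcribe LacZY
print(transcribe_lacZY(True, True, True))    # False -> glucose is preferred
```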

Professor Shapiro itemizes five general principles of cellular automation processing:

  1. There is no Cartesian dualism in the E. coli (or any other) cell. In other words, no dedicated information molecules exist separately from operation molecules. All classes of molecule (proteins, nucleic acids, small molecules) participate in sensing, information transfer, and information processing, and many of them perform other functions as well (such as transport and catalysis).
  2. Information is transferred from cell surface or intracellular sensors to the genome using relays of proteins, second messengers, and DNA-binding proteins.
  3. Protein-DNA recognition often occurs at special recognition sites.
  4. DNA binding proteins and their cognate formatting signals operate in a combinatorial and cooperative manner.
  5. Proteins operate as conditional microprocessors in regulatory circuits. They behave differently depending on their interactions with other proteins or molecules.

Regarding evolution, Dr. Shapiro advocates a concept called “natural genetic engineering” whereby the cell makes adaptive and creative changes to its own DNA.  I have used the phrase “directed mutation” to mean essentially the same thing.  These changes to a cell’s own DNA are not random: “It is difficult (if not impossible) to find a genome change operator that is truly random in its action within the DNA of the cell where it works. All careful studies of mutagenesis find statistically significant nonrandom patterns of change, and genome sequence studies confirm distinct biases in location of different mobile genetic elements. These biases can sometimes be extreme . . . “

In a recent article, Professor Shapiro further clarified his use of the phrase, “natural genetic engineering,” or NGE:

NGE is shorthand to summarize all the biochemical mechanisms cells have to cut, splice, copy, polymerize and otherwise manipulate the structure of internal DNA molecules, transport DNA from one cell to another, or acquire DNA from the environment. Totally novel sequences can result from de novo untemplated polymerization or reverse transcription of processed RNA molecules.

NGE describes a toolbox of cell processes capable of generating a virtually endless set of DNA sequence structures in a way that can be compared to erector sets, LEGOs, carpentry, architecture or computer programming.

NGE operations are not random. Each biochemical process has a set of predictable outcomes and may produce characteristic DNA sequence structures. The cases with precisely determined outcomes are rare and utilized for recurring operations, such as generating proper DNA copies for distribution to daughter cells.

It is essential to keep in mind that “non-random” does not mean “strictly deterministic.” We clearly see this distinction in the highly targeted NGE processes that generate virtually endless antibody diversity.

In summary, NGE encompasses a set of empirically demonstrated cell functions for generating novel DNA structures. These functions operate repeatedly during normal organism life cycles and also in generating evolutionary novelties, as abundantly documented in the genome sequence record.

(From What Natural Genetic Engineering Does and Does Not Mean, Huffington Post, February 28, 2013.)

Perhaps the most important evidence for natural genetic engineering is the discovery of transposable elements in DNA.  These were first identified by Barbara McClintock in 1948, work for which she was later awarded the Nobel Prize.  Transposable elements, also called transposons and retrotransposons, are segments of DNA that can move or be replicated into another part of the DNA molecule.  In general, this process is either a “cut and paste” or a “copy and paste” operation using special proteins to operate on the DNA, sometimes with RNA as an intermediary molecule.

Retrotransposons make up a significant portion of the human genome, about 42%.  One type of transposable element, called an “Alu” sequence, accounts for about 10% of the human genome and is one of the main markers for primates (including humans).  However, almost all transposable elements are contained within the non-coding region of DNA and therefore are not directly expressed as proteins.  This DNA has typically been called “junk DNA,” but recent research from the ENCODE project (“Encyclopedia Of DNA Elements”) has demonstrated a wide variety of functions for the non-coding portions of DNA.

I have to mention that, as a computer designer and coder, this discovery of movable elements in the non-coding regions of DNA reminds me of one of the most common ways we would modify computer programs.  First, we would locate an old segment of code that functioned similarly to the desired new function.  Then we would copy that segment into another part of the program but leave it unexecuted until the new segment had been modified to accomplish its intended new function.  Finally, we would activate the new segment and test it.  Nevertheless, DNA represents computational capabilities that I have never seen in any existing computer system.  It has now been demonstrated that the so-called “junk DNA” has the ability to affect the “non-junk” portion of the genome by controlling when or whether certain proteins are expressed.
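The two transposition modes can be mimicked on a string "genome."  This is purely an illustration of cut-and-paste versus copy-and-paste on sequences, not a biological model; the function names and example sequence are invented:

```python
# Toy model of the two transposition modes, treating a genome as a string.
def cut_and_paste(genome, start, end, dest):
    """Move the element genome[start:end] to index dest of the remainder."""
    element = genome[start:end]
    remainder = genome[:start] + genome[end:]
    return remainder[:dest] + element + remainder[dest:]

def copy_and_paste(genome, start, end, dest):
    """Replicate genome[start:end] into index dest, keeping the original."""
    element = genome[start:end]
    return genome[:dest] + element + genome[dest:]

g = "AAACCCGGGTTT"
print(cut_and_paste(g, 3, 6, 0))    # CCCAAAGGGTTT  (genome length unchanged)
print(copy_and_paste(g, 3, 6, 12))  # AAACCCGGGTTTCCC  (genome grows)
```

Note the parallel to the programming workflow described above: copy-and-paste leaves the original element in place while the duplicate is free to diverge.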

Continuing with the computer analogy, Freeman Dyson also speaks of DNA as a computer program, characterizing DNA as software and proteins as hardware.  I think that is a little too simple, since individual proteins exhibit many of the cognitive abilities described by Dr. Shapiro.  Each separate molecule in the cell, including each protein, has its own processing capability.

One way in which Professor Dyson is correct, though, is in the discovery of proteins as molecular machines. This is another fascinating area of biology.  Many of the functions of the cell are carried out by proteins that can best be described as miniature machines.  One important example is the generator used to make ATP in the mitochondria.  ATP, or adenosine triphosphate, is the main energy molecule for almost all forms of life.  This generator, called ATP synthase, looks remarkably like a tiny motor.  The “motor” is powered by a hydrogen ion concentration differential across the mitochondrial membrane; that concentration gradient is generated by molecular pumps which push the hydrogen ions (protons) across the membrane.  Animations of ATP synthase in action are widely available and well worth watching.

The implications of all the above biology for lowering entropy are enormous.  The molecular machines are themselves an example of low entropy, being highly structured, functional assemblies of proteins.  The pumping of protons across a membrane uses some energy to create a state of low entropy by concentrating energy at a particular location.  The ATP itself is a storehouse of energy for future use.  Protein folding is another entropy-lowering process.  The DNA specifying the information necessary to manufacture proteins is perhaps the supreme example of low entropy, particularly now with the discovery of purposeful “junk DNA.”  One could easily conclude that all of life is powered by the miracle of low entropy overcoming the global tendency for entropy to increase.

Life can be viewed as a struggle to maintain low entropy.  We need sources of low entropy to live: food, shelter and energy, etc.  The ultimate source of low entropy is the sunlight used to create carbohydrates from plants.  However, once our low entropy material needs are secured, we seek an ordered personal life, family life, and social life.  Some say that old age is the result of the loss of our ability to maintain low entropy.  In other words, life is a struggle to maintain low entropy in the face of the law of increasing entropy.  As individuals, we will lose that struggle since death is certain.  As a species, however, the trend towards low entropy, towards more complex ordering, can continue.

Before life began to evolve, matter on earth was subject to laws of physics and chemistry.  One of those laws is the law of increasing entropy: low entropy sunlight is absorbed and then radiated back into space as high entropy heat.  However, the laws of nature themselves contain a provision for entropy lowering interactions.  I strongly believe that such a provision is the result of the decisionality inherent in the collapse of the quantum wave function.  My reason for such a belief lies mainly in the order that results from entropy lowering interactions, especially the order inherent in life.  All of our human experience tells us that order results from rational decisionality; it does not result from randomness.  The mathematics of random chance rules out any likelihood that life arose by chance alone.

After life began to evolve, it naturally took advantage of entropy-lowering processes.  Natural selection and fitness are crucially based on efficient use of energy.  A recent example is Microraptor, a small prehistoric four-winged dinosaur.  Microraptor’s four wings allowed it to make tight turns around the many forest trees in its habitat, but they caused additional drag and a consequent loss of speed and energy.  It therefore took Microraptor more energy to accomplish what modern birds can do.  Modern birds evolved two wings with additional muscle control for improved maneuverability but without the additional drag of a second set of wings.  Efficient use of energy is crucial for survival.

It is therefore very surprising that nature and evolution would have produced in humans a single organ that consumes 20% of our energy yet accounts for only about 2% of our total weight.  That is the amazing, almost unbelievable, statistic for the brain.  If we view human life as the pinnacle of evolution, then the entire evolutionary path must proceed towards higher consciousness and higher intelligence.  Therefore, if Professor Shapiro is right about natural genetic engineering (and I am convinced he is; he draws upon a huge body of research done by others), then modifications made at the cellular level must include a bias for enhanced consciousness.

In my next section, I will begin to address the evidence from consciousness.  This will be difficult because science can say very little about consciousness.  Some take the position that consciousness is an epiphenomenon: that it emerges, ex novo, from complex calculations and therefore has no real existence.  Some take the position that mind is a separate category from matter, leading to dualism.  I take the position that consciousness is embedded in matter, a position called panpsychism.  Furthermore, I hold that the way consciousness has become embedded in matter is through the inherent decisionality of quantum decoherence.  One way to view this position is that the universe performs a quantum calculation on every transfer of energy.  But it would be a mistake to think that the calculation is the same as a calculation that could be performed by a computer.  Stay tuned.

The Evidence from Evolution and Biology (Part 2)

The Origin of Life

(Thanks to all who took the time to comment on my previous post.  All the comments were helpful; as one who is attempting to summarize and draw conclusions from diverse areas of science, I sometimes struggle to find the right word or right example.  The comments on entropy were particularly helpful.)

The simplest living biological organisms that we know about are bacteria. (I am omitting viruses and infectious proteins because most biologists do not classify them as living, due to their dependence on living cells.)  Bacteria are one-celled organisms without a well-defined nucleus.  Cells without a well-defined nucleus are called prokaryotes.  Nucleated cells, like those found in multi-celled organisms, are called eukaryotes.  Bacteria are typically one tenth the size of eukaryotic cells, at less than 10 micrometers in length.

One of the smallest and simplest bacteria is an organism called Mycoplasma genitalium.  This bacterium infects the urinary tract of humans and primates.  It is less than 300 nanometers long, or about one tenth the size of a typical bacterium.  It also has one of the smallest sets of genetic code.  The amount of genetic code can be measured by the number of “base pairs.”  Base pairs are a count of the letters of the genetic code that make up the DNA of the organism: A (adenine), C (cytosine), G (guanine) and T (thymine).  They are called “pairs” because each letter is paired with another letter in the double-stranded DNA molecule: A pairs with T, and C pairs with G.  The DNA of M. genitalium contains about 583,000 base pairs.  The human genome, by comparison, contains about 3.2 billion base pairs.

The DNA of Mycoplasma genitalium (M. genitalium) contains code for 482 proteins.  Recall that the genetic code for a protein is a sequence of codons, each a group of three base pairs that refers to one of the twenty amino acids that comprise all proteins.  The average length of these proteins in M. genitalium is 366 amino acids, with a large range from smallest (37) to largest (1,805).  The amount of DNA needed to code 482 proteins with an average length of 366 is 529,236 base pairs, or about 91% of the genome.  For humans, the corresponding percentage is about 1.5%: the overwhelming majority of human DNA does not directly code for proteins, and its function remains somewhat uncharted (although recently some light has been shed on this part of our genome by the ENCODE project).
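A quick sketch verifying the coding-share arithmetic in the paragraph above:

```python
# M. genitalium's protein-coding share of the genome:
proteins = 482
avg_length_aa = 366
bp_per_amino_acid = 3            # one three-base-pair codon per amino acid
genome_bp = 583_000

coding_bp = proteins * avg_length_aa * bp_per_amino_acid
print(coding_bp)                           # 529236 base pairs
print(round(100 * coding_bp / genome_bp))  # ~91 percent of the genome
```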

One of the smaller proteins of M. genitalium is known as “P47633” and functions as part of a protein complex known as a protein-folding chaperone, or “chaperonin.”  P47633 is 110 amino acids long and is the “cap” to a much larger chaperonin, P47632 (543 amino acids).  Together, P47633 and P47632 provide an isolated, barrel-shaped environment in which proteins can fold properly.  (Some proteins will fold properly without a chaperonin.)  Proper folding is absolutely essential to protein function, and misfolded proteins in humans have been correlated with certain diseases.

If nature had attempted to form a molecule as simple as P47633, with its 110 amino acids, by random chance, she would have had to search for a unique combination out of approximately 20^110 possibilities (each of 110 positions can be filled by any of 20 amino acids)!  That is a huge number: approximately 1 followed by 143 zeroes!  A more realistic calculation would take into account that some amino acids can be substituted for others, but also that amino acids in nature come in two varieties (left and right handed) and biological molecules are formed only from the left handed versions.  But the number would still be huge.  If nature could search at the rate of one combination in the smallest unit of time possible (the Planck time), it would take about 10^100 seconds to find the protein.  That is well beyond the age of the universe (about 10^17 seconds).  Even if the search were taking place at multiple locations (say 10^80 different locations, the number of protons and neutrons in the universe), the length of time would still exceed the age of the universe.  And that’s just for one small protein.
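The orders of magnitude in this estimate can be checked numerically.  This sketch works in logarithms (base 10) and uses an approximate value for the Planck time:

```python
import math

combos = 110 * math.log10(20)         # log10 of 20^110
planck = math.log10(5.4e-44)          # log10 of the Planck time, in seconds

serial_seconds = combos + planck      # one trial per Planck time
parallel_seconds = serial_seconds - 80  # 10^80 simultaneous search sites

print(round(combos))            # 143: 20^110 is about 10^143
print(round(serial_seconds))    # 100: about 10^100 seconds of searching
print(round(parallel_seconds))  # 20: still far beyond 10^17 s (age of universe)
```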

This simple calculation, plus the fact that life requires numerous proteins, many of them much longer than 110 amino acids, has led many questers after the origin of life to discount the role of random chance.  Some think that the laws of nature are favorable to life, as I do.  Some think that the earliest organisms must have been much simpler and that more complex organisms such as M. genitalium would have been the product of natural selection.  The natural question to ask at this point is: how simple were the initial forms of life?  Or, how simple could they have been?

If we knew how complex the initial living cells were, we could then evaluate whether such cells were the likely product of random molecular encounters.  When I first heard of the Miller-Urey experiment in high school chemistry class, I had the impression that soon we would know how life began.  That was over 50 years ago and we don’t appear to be much closer to solving the mystery of the beginning of life.  The Miller-Urey experiment showed that some amino acids could be produced in an atmosphere of water vapor, ammonia, methane, and hydrogen by passing an electric spark through those chemicals.  The concept presented to me then was that life arose from a primordial chemical soup formed by such chance events.  That idea has since been discredited.  The current thinking is that life began deep underground or underwater near a thermal source of energy.  But the key question remains: did it arise by chance or do the laws of physics and chemistry favor the creation of life?

We don’t know how complex the initial forms of life were, so I am using the simplest example of life that we now have to illustrate a point about the role for random chance in the beginning of life.  M. genitalium is one of the simplest living organisms that biologists have studied.  It has the additional advantage of being the subject of the Minimal Genome Project which seeks to find the simplest possible genome.  Toward that objective, each of the 482 protein coding genes of M. genitalium was individually and systematically deleted until a viable cell with 382 proteins was created in the laboratory.   The Minimal Genome Project provides a lower bound on the complexity of a viable living organism capable of both metabolism and replication.

Metabolism is the ability of cells to produce and use energy for homeostasis, the ability of a cell to maintain itself in its typical environment.  Replication is the ability of a cell to pass along essential information to its progeny.  Metabolism is primarily protein-driven biochemistry, and replication is primarily DNA/RNA-driven cell division.  Since cell division involves metabolism (energy must be expended for a cell to divide), replication requires some minimal functioning protein-based chemistry.  Conversely, metabolism in the modern cell requires DNA-directed protein creation, creating what is known as the “chicken and egg” conundrum for origin of life researchers.  Physicist Paul Davies sums up the puzzle: “It is hard enough to imagine one of them forming by chance, but to suppose both nucleic acids [DNA] and proteins were happy chemical accidents occurring at the same time and place stretches credulity.”

Not all scientists agree that the simplest original living organism had both the capability for metabolism and replication.  There appear to be two camps: one favoring metabolism priority and one favoring replication priority.  For example, Freeman Dyson has put forth an abstract mathematical model for metabolism priority that requires only 8 to 10 monomers.  In Dyson’s model, monomers could be amino acid molecules, so, if the model is predictive, it would indicate that cells with a stable metabolism could be achieved from proteins built from about half the amino acids we now have.  Dyson also indicates that “a few hundred polymers [proteins]” would be sufficient.  The problem with metabolism priority is that the resulting cells have no way to reliably pass on the precise composition of their proteins to their progeny (cell division would occur through external events and splitting due to growth).  Dyson argues that imprecise replication would be sufficient and actually better than an error-prone, directed replication.

If Dyson’s model is accurate, then the earliest cells would be appreciably less complex than M. genitalium, with each protein needing between 10 and 100 amino acid molecules.  And there would be a maximum of only 10 amino acids.  This would reduce the probability due to random chance to between 1 chance in 10^10 and 1 chance in 10^100, a large range with the lower number (1 chance in 10 billion) within reach of a reasonable random search.  Still, Dyson’s model needs “a few hundred polymers [proteins],” and the combination of over 100 proteins, with the simplest protein needing 10 amino acids, still gives a large space for random combinations.  Dyson doesn’t say how many varieties of proteins there might be.
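The range implied by Dyson’s numbers is easy to reproduce.  This sketch assumes an alphabet of 10 amino acid types and chain lengths from 10 to 100 residues:

```python
# Search-space range implied by Dyson's metabolism-first model:
# proteins built from at most 10 amino acid types,
# with lengths from 10 to 100 residues.
alphabet = 10
shortest, longest = 10, 100

low = alphabet ** shortest     # 10^10 combinations
high = alphabet ** longest     # 10^100 combinations

print(low)                 # 10000000000 (10 billion)
print(len(str(high)) - 1)  # 100: a 1 followed by 100 zeroes
```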

But Dyson’s model doesn’t completely rely on random chance.  The model contains provisions for nature to favor life.  Part of his model is a table of probabilities that the correct “proteins” will be formed from monomers (amino acids).  These factors are called catalyst “discrimination factors” and Dyson characterizes them as “reasonable for the discrimination factor of primitive enzymes.”  Their values in his model range from 60 to 100.  He goes on to say:

A modern polymerase enzyme typically has a discrimination factor of 5000 or 10000. The modern enzyme is a highly specialized structure perfected by three thousand million years of fine-tuning. It is not to be expected that the original enzymes would have come close to modern standards of performance. On the other hand, simple inorganic catalysts frequently achieve discrimination factors of fifty. It is plausible that a simple peptide catalyst with an active site containing four or five amino acids would have a discrimination factor in the range preferred by the model from sixty to one hundred.

This is significant because metabolism requires enzymes (a type of protein; Dyson’s “peptide catalyst”) which act as catalysts to speed up reaction rates.  Without enzymes, reaction rates would be too slow to sustain life.  One example is the way that we metabolize sugar for energy.  Table sugar is called sucrose and is a combination of glucose and fructose, but glucose is the sugar that we best metabolize.  So sucrose is first split into glucose and fructose.   Most table sugar comes from either sugarcane or sugar beets which create the sugar through photosynthesis.

The chemical reaction to split sucrose simply requires water and can occur spontaneously, but table sugar placed in a glass of water would not dissociate fast enough to be useful.  The reason the reaction does not proceed quickly is that there is an energy cost to break the bonds between glucose and fructose so that the water can interact.  Given enough time and heat, thermal activity would eventually begin to break the bonds between glucose and fructose.  However, we have an enzyme in our small intestine named “sucrase-isomaltase” which is able to greatly speed up the reaction.

Sucrase-isomaltase is a dual enzyme: the sucrase portion is able to split sucrose into glucose and fructose, while the isomaltase portion breaks apart the starch from grains.  The entire enzyme is 1877 amino acids long, with the sucrase portion being 820 in length.  The sucrase portion works by locking onto the dual sugar sucrose and, through the proximity of its own molecular structure, lowering the energy cost of breaking the covalent bond between glucose and fructose.  A water molecule is then able to intervene and complete the dissociation of the two sugars.  There are thousands of enzymes needed for metabolism; life would not be possible without them, so enzyme efficiency is a key factor in any theory about the beginning of life.  Each enzyme is incredibly specific, so that, in the case of sugar metabolism, a separate enzyme is needed for sugars from grain (the isomaltase portion).  Enzyme specificity is created by the sequence of amino acids and the shape of the protein.  Protein folding is a key factor.
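The power of lowering an energy barrier can be illustrated with the Arrhenius relation, in which reaction rate is proportional to exp(-Ea/RT).  The activation energies below are invented for illustration; they are not measured values for sucrase:

```python
import math

R = 8.314    # gas constant, J/(mol*K)
T = 310.0    # body temperature, kelvin

def rate_factor(ea_joules_per_mol: float) -> float:
    """Arrhenius factor exp(-Ea/RT), proportional to reaction rate."""
    return math.exp(-ea_joules_per_mol / (R * T))

# Illustrative activation energies, in J/mol (hypothetical numbers):
uncatalyzed = 100_000
with_enzyme = 50_000

speedup = rate_factor(with_enzyme) / rate_factor(uncatalyzed)
print(f"{speedup:.1e}")   # roughly a hundred-million-fold faster
```

Halving an activation barrier does not double the rate; because the dependence is exponential, it multiplies the rate by many orders of magnitude, which is why enzymes matter so much.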

Freeman Dyson is a physicist who has turned his attention to the chemistry of life’s beginning.  Frank Anet, a professor of biochemistry at UCLA, has criticized Dyson’s model as too simple.  This is a charge that Dyson freely admits since his objective was to start the conversation on the metabolism first approach, which is clearly a minority position.  The main benefit of Dyson’s model is that it gives mathematical results and Dyson insists that the real proof will be in experimental results.  But I fail to see how Dyson’s model could be sufficiently convincing to generate much interest from research labs.

Professor Anet goes on to level a more serious criticism of Dyson’s model:

The range of required discrimination factors is comfortably less than the discrimination factor of several thousands in modern enzymes, as would be expected, and it is similar to the discrimination factor of simple inorganic catalyst. However, the nature of these inorganic catalysts is not given, nor are the catalysed reactions, nor is any reference to the literature provided. No reference is made to any experimental discrimination factors by oligopeptides [small protein enzymes] in catalytic reactions involving closely similar compounds, such as amino acids, which would be the appropriate reference systems. Such a large discrimination factor, it must be stressed, is far more difficult to achieve than mere catalysis. Dyson’s oligopeptides have on the order of 20 monomers, with an ‘active site’ of perhaps five monomers. However, the other fifteen monomers are important in determining the folding of the polymer and therefore also the catalytic efficiency of the active site. Additionally, with such small oligopeptides, the folding is likely to be poorly defined. Thus, it can be concluded that Dyson has no good experimental evidence for choosing high discrimination factors, which are probably too high by at least an order of magnitude. Unfortunately, this destroys his model.

I’m sure that Dyson would repeat his caveat that he is only pointing the way and that empirical results are the domain of chemists, not physicists.  Nevertheless, until there are definitive results that lend credibility to small effective organic enzymes, I think metabolism priority will continue to be ignored.  Professor Anet also reviews several other researchers who do attempt to find more specific results, but finds them all lacking.

Professor Anet is a proponent of the “RNA world” approach.  This is currently the majority position for origin of life research.  It is a form of replication priority, but has the advantage that RNA has been demonstrated to function as a catalyst in some situations.  Only 6 monomers are needed to form RNA.  There is ribose, a sugar that has been demonstrated to form from a reasonable pre-biotic environment on earth.  There is a phosphate group that combines with ribose to form the RNA backbone.  And there are the four bases for RNA, similar to the four bases for DNA: A (adenine), C (cytosine), G (guanine) and U (uracil).  The basic idea is that these 6 simple molecules were available in the pre-biotic environment and came together through random movement to form RNA.  Further chance encounters lead to larger RNA polymers that can replicate themselves and catalyze protein formation.  Once replication begins, natural selection can operate, leading to the more efficient and stable environment of protein enzymes and DNA code.

The RNA approach is an attractive picture, but strong doubts have been cast on the RNA scenario by Robert Shapiro (1935-2011; no relation to James Shapiro), previously Professor Emeritus of chemistry at NYU.  Professor Shapiro argued that the basic components of RNA were extremely unlikely to have formed in the early earth environment.  In particular, the spontaneous formation of ribose cannot proceed in the presence of nitrogen, which the four RNA bases need for their formation.  Both ribose and one of the bases (cytosine) have a relatively short half-life, and it is therefore unlikely that they could be formed at separate locations and then brought together by chance.  If life began near high temperature thermal vents, then formation of RNA is even more unlikely.  Robert Shapiro became an advocate of the metabolism priority approach after criticizing the RNA world for several years.

Professor Anet was well aware of Shapiro’s criticism and commented in 2004:

From [my] analysis . . ., it does not seem that the metabolism-first theories are ‘robust’ (or to be recommended), as claimed by Shapiro. On the other hand, Shapiro has stressed some very serious weaknesses of the replication-first theories. But this does not mean that a satisfactory replication-first theory is impossible, although theories . . . that require activated nucleotide monomers to be available prebiotically are not really acceptable. The replication-first approach does not require the existence of a primitive organic soup, it should be stressed, and local conditions on Earth may have been quite varied. Shapiro admits that new discoveries or ideas could lead to more optimistic conclusions on the viability of the replication-first approaches. Some new developments that have appeared after the publication of Shapiro’s paper will now be outlined briefly.

Anet goes on to catalog several recent developments, but, in 2004, the most convincing results were not yet available.  I am speaking of the momentous 2009 experiments by Tracy Lincoln and Gerald Joyce that showed that RNA could replicate itself in the lab.  True, there were several constraints and limitations on the test, but it did show that sustained replication could take place with RNA alone, albeit under artificial conditions.  As positive as these results are, they required relatively long RNA molecules of 189 nucleotide bases.  Some researchers think this can be shortened to 100 bases, but even 100 bases gives a very large search space for assembly by random chance: 4^100, or about 1 followed by 60 zeroes.  The search is made even harder by the fact that the half-life of RNA is measured in hours; RNA degrades relatively quickly.
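The size of that search space is easy to verify: four bases at each of 100 positions.

```python
# Search space for a 100-base RNA strand:
# four bases (A, C, G, U) at each of 100 positions.
space = 4 ** 100

print(len(str(space)) - 1)   # 60: roughly 1 followed by 60 zeroes
print(f"{space:.1e}")        # about 1.6e+60
```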

Professor Shapiro gets the last word on this even though he spoke in 2007, two years before the Lincoln-Joyce results.  Dr. Shapiro asked his audience of scientists to imagine a large pile of Scrabble letters.  Then he added, “If you scooped into that heap, and you flung them on the lawn there, and the letters fell into a line which contained the words, ‘To be or not to be, that is the question,’ that is roughly the odds of an RNA molecule, given no feedback —and there would be no feedback, because it wouldn’t be functional until it attained a certain length and could copy itself—appearing on earth.”  (If you do the math, Shapiro’s odds are about 1 in 10^57!)
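The 10^57 figure can be reproduced under one plausible reading: the quoted phrase is 40 characters long (counting spaces and the comma), and if each position could hold any of roughly 27 tile types (26 letters plus a blank), the number of arrangements is 27^40.  This is my reconstruction of the arithmetic, not Shapiro’s own:

```python
import math

# Reconstructing the Scrabble odds (assumptions: each of the phrase's
# characters is one tile position, drawn from ~27 tile types).
phrase = "To be or not to be, that is the question"
positions = len(phrase)    # 40 characters, including spaces and the comma
symbols = 27               # 26 letters plus a blank tile (an assumption)

log10_odds = positions * math.log10(symbols)
print(positions)           # 40
print(round(log10_odds))   # 57: about 1 chance in 10^57
```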

Given the ongoing and provisional status of research into the origin of life, what can we know with certainty?  And what does that knowledge bring to bear on the central theme of my writing: that there is a conscious, rational power at work in the universe without recourse to supernatural abilities?  Another way to frame the question is to ask: are the laws of physics and chemistry favorable to life and consciousness?  The one thing we can count on is that all of modern life is based on the central dogma of molecular biology: 1) proteins, the primary workhorses of the cell, must be composed of the correct sequence of amino acids and folded into the correct shape to be effective.  2) Proteins are created through the intermediary of RNA, acting along with other proteins.  3) The instructions for assembling proteins come from messenger RNA, which is transcribed from DNA, which contains both the genetic code for proteins and instructions for expressing that code.  The key point: the importance of DNA for protein assembly is its information content, not its chemical characteristics.

The modern cell is a protein manufacturing and information processing organism.  DNA contains the coded sequence of amino acids for proteins.  Information in the DNA drives the protein manufacturing work.  I worked in information processing for my entire career of more than 35 years.  There is no power in nature other than consciousness and intelligence that can create an information processing system.  A complete “artificial intelligence” solution to the software development bottleneck has been the holy grail of software management for decades.  It has not appeared, nor will it appear.  There will be improvements in automated design, but there are sound mathematical reasons to think that software development cannot be totally replaced by automation.  Modern computer systems have the mathematical property of Gödel incompleteness: they will always need an intelligent agent to make additions and improvements in them.  The bottom line is that a conscious, intelligent power that expresses itself through the laws of nature is the ultimate power behind all of life.  In other words, decisional consciousness is a property of nature.  This has been my conclusion from physics as well.  Random chance alone cannot account for the origin of life.

I think that research will eventually find smaller and possibly more primitive life forms, if not on Earth or in the lab, then possibly on Mars or some other nearby planet or moon.  I have taken the position that the laws of physics are favorable to life, and therefore I think that life developed gradually and incrementally from ordinary matter, which I think has been imbued with consciousness from the beginning.  Once life began, the power of consciousness in ordinary matter became expressed as natural selection.  Once natural selection began, something like DNA would essentially be a requirement for life, so it could store the code for the wonderful protein inventions discovered through natural selection.  Once evolution became advanced, then advanced consciousness would be a natural consequence of this information-rich system.

The Evidence from Evolution and Biology (Part 1)

My previous posts have focused on the evidence for a rational agent inherent in the laws of physics.  There has been an implicit assumption that the laws of physics are rigorously deterministic.  But clearly life is not deterministic, so it was necessary for me to point to some possible feature of the laws of physics that allowed for the wild variation and unpredictability of life.  I will summarize my thought process as follows:

  1. The universe is ordered by deterministic laws and forces such as the force of gravity and electromagnetism.  There are also non-deterministic laws such as quantum theory.  One law that combines both features is the law of increasing entropy.  Entropy always increases throughout the universe, but it is allowed to decrease locally.  Since quantum theory ultimately controls all interactions in the universe, all forces are non-deterministic at the quantum level.  (The only possible exception is gravitation, which has not yet been unified with quantum theory.)
  2. The deterministic laws (electromagnetism, etc.), by themselves, cannot account for life and consciousness.  There must be another factor in the fundamental laws of physics that allows living organisms to lower entropy.  The process of lowering entropy is essential to life because it concentrates energy for future use and organizes the genome for transmission to future generations.
  3. That factor in the laws of physics is the collapse of the wave function in quantum physics, also called decoherence.  Decoherence is absolutely necessary for any measurable energy transfer.  In decoherence, the universe actually chooses an outcome for every transfer of energy.  This choosing, or decisionality, on the part of the universe is what I have called rational agency and it is responsible for the forward direction of time.
  4. This decisionality on the part of the universe is always mixed up with randomness because we are prohibited from knowing precisely all the states of matter, particularly the states of entanglement between particles.  This is a consequence of a kind of cosmic censorship hypothesis.  The Heisenberg uncertainty principle is one such limitation on our knowledge.
  5. There can be no ordering principle or lowering of entropy based on true randomness.  True randomness, by definition, is maximum entropy.  In all of physics the only candidate for non-random yet non-deterministic action is decoherence.
  6. Therefore, this choice by the universe is directed choice.  It is a rational choosing based on the laws of physics and contains within it the possibility of lowering entropy.  It is the physical undergirding of all life and consciousness.  It is the physical action responsible for the forward direction of time.

Essentially, I think that the laws of physics favor life or are conducive to life.  In general, nature prefers to disperse energy; therefore there must be physical explanations for how energy gets concentrated.  Just as there is an explanation for how nature concentrates energy for lightning, there must also be an explanation for how living organisms concentrate energy and lower entropy.   These six steps summarize my explanation.  In this series on evolution and biology, I will lay out the case for the laws of physics favoring life as opposed to the case for life adapting to the laws of physics.  Both dynamics occur, but only laws conducive to life can create life from inanimate matter.

I don’t consider this logic highly dependent on particular experimental results.  Scientific theories are always provisional; they can be superseded by better theories or more accurate results.  My reasoning is broadly based on the general properties of physical laws.  A portion of the laws are rigorously deterministic and use mathematics to make predictions about future events.  Another portion deals with the presence of uncertainty in the universe.  I fully expect the laws of physics to be revised and improved, but I don’t expect that these general characteristics will be much altered.  If string theory is proved true, that would not change my basic logic, but my perspective might need to accommodate rational agency operating in a multiverse scenario.  String theory, for all its promise, does not yet make any testable predictions.

Along with the laws of physics, I view the theory of evolution as a valid scientific theory.  It is a theory based on the idea that all living organisms adapt to their specific environment and pass along adaptive traits through procreation.  Darwin’s concept of “natural selection” was devised in contradistinction to “artificial selection,” whereby human breeders selected the best mates in order to raise generations of specifically adapted animals.

Biology is a complex science.  For someone like me, who has spent a major part of his life focused on math and the physical sciences, the main shock of encountering biology is the sheer astronomical diversity of life.  Last year, I took one of the online courses offered by UC Berkeley.  It was the basic undergraduate course for biology majors, and it was something I needed because my previous biology class must have been in high school.  It was just as well that I didn’t have much previous instruction, because so much has changed between then and now.  The sheer volume of information is astounding.  I found myself wondering how on earth anyone organizes this much data.  In fact, it took three teachers to cover the material.  One instructor had a background in molecular biology, one was a specialist in genetics, and one came from a medical background.  I had the distinct feeling that complete mastery was beyond the capability of any one individual.  But I am still learning, and I do have some observations based on my perspective from the physical sciences.

One observation concerns the principle of emergence.  Emergence is the concept that complex living organisms are able to exhibit new properties and traits by virtue of their complexity and organization.  The example from the textbook for the UC Berkeley class is one that interests me: “For example, although photosynthesis occurs in an intact chloroplast, it will not take place in a disorganized test-tube mixture of chlorophyll and other chloroplast molecules.  Photosynthesis requires a specific organization of these molecules in the chloroplast.”  The text is saying that photosynthesis is an emergent phenomenon.  That is fine.  It helps organize knowledge, but for someone who wants to know how things work, there is a further question: how does that particular organization contribute to function?  What are the properties of the constituent parts that enable the composite function to emerge?  Too often, emergence is used simply as a label for a new function that can’t be explained any further.  When that happens, it becomes a kind of false knowledge: a category without explanatory power.

To take another example, water is composed of two room-temperature gases: hydrogen and oxygen.  I suppose you could say the emergent property of water is its liquidity.  But, with water, one can trace its properties to the molecular properties of hydrogen and oxygen and the strong bond between them as well as the weak bond between water molecules.  These particular molecular properties can also be used to explain surface tension, freezing and boiling.  My expectation is that biology will someday be explained in terms of molecular dynamics.  That day is a long way into the future.

Biological scientists are answering these kinds of questions, and it is painstaking work to demonstrate how biological molecules operate.  But I suppose that slow, tedious part of biology is the part that mainly interests me.  I have two main areas of interest in the biological sciences.  One is photosynthesis, because of its use of quantum coherence for efficient transmission of sunlight energy to the “reaction center” where chemical food production begins.  The other is the biological molecule tubulin.

Tubulin is a protein molecule that assembles into microtubules.  Microtubules are long, narrow, hollow tubes that play an amazing variety of roles in living cells.  There is a natural tendency for microtubules to assemble themselves because of the positive and negative polarity on the tubulin molecule.  Once assembled, microtubules play key roles in biological cell functions.  They play an essential role during mitosis, cell division, by grabbing hold of the chromosomes and causing the genome to precisely separate toward opposite ends of the cell.  Microtubules are part of the cell’s cytoskeleton; they give shape and form to the cell.  In plants, microtubules guide the alignment of cellulose and direct plant growth at the cellular level.

Microtubules form the infrastructure that transports molecules from outside the cell to the inside and vice versa.  Motor proteins “walk” vesicles containing molecules back and forth along microtubules to their destinations.  For example, pancreas cells that make insulin transport the insulin from inside the cell to the outside by this method.  In addition, microtubules are used for cell interaction with its environment.  They form some types of flagella and cilia for locomotion of the cell or movement of particles in the cell’s environment.  For example, the human sperm cell is propelled by the action of a flagellum made up of microtubules.

In short, microtubules are a very versatile cellular component.  Furthermore, they are an essential part of nerve cells.  Tubulin, the protein that forms microtubules, has a very high density in brain tissue.  That has led some researchers to propose a key role for microtubules in brain activity and consciousness.  Microtubules are long, hollow, round tubes that might be ideal for quantum coherence, and there has been some research along these lines.

Tubulin is the protein building block of microtubules, and it or similar proteins are probably very ancient, perhaps going back to the beginning of life.  One source specified that all cells have such proteins, except blue-green algae, also known as cyanobacteria.  However, cyanobacteria have a tubulin-like molecule (a homologue) called “FtsZ.”  An interesting connection between my two main interests is that cyanobacteria use photosynthesis to harvest energy from sunlight.  It is the light harvesting complex from cyanobacteria that is used in the experiments testing quantum coherence.

Cyanobacteria are among the oldest life forms on Earth, perhaps as old as 3.5 billion years.  It would be a very interesting development if microtubules or microtubule-like structures go back to the beginning of life and if it can be demonstrated that quantum coherence played a key role in efficient energy transmission in these structures.  Those are two very big “ifs” and most researchers are very cautious about any evidence pointing towards quantum coherence in biological molecules.  But I remember some fairly incautious statements about the beginning of life from many years ago.

I think it was probably in high school chemistry class that the teacher, one day, covered the Miller-Urey experiment.  This experiment was conducted in 1952 and involved sending a spark of electricity (to simulate lightning) through a mix of chemicals assumed to represent Earth’s primitive atmosphere.  The result was a mixture of amino acids and sugars, both essential building blocks of life.  Stanley Miller and Harold Urey had demonstrated that organic compounds necessary for life could be easily formed from reasonable atmospheric compounds, such as water, methane, ammonia and hydrogen.  Not only that, but the teacher thought that we would soon be able to synthesize life in the test tube.  Well, that was over 50 years ago and the synthesis of life seems as elusive as ever.  Science doesn’t yet know what makes biochemicals spring to life.

The mystery of the beginning of life notwithstanding, the theory of evolution brought incredible organizing power to the huge diversity of biology.  Darwin’s “natural selection” gave explanatory power to the huge diversity of species on Earth.  In the mid-twentieth century, the discovery of DNA and the genetic code supplied the evolutionary system with a mechanism for adaptation.  This has led to what has been called the “central dogma” of molecular biology: DNA makes RNA which makes proteins.  DNA contains coded information that is used to create a coded sequence of RNA, which is used to create a sequence of amino acids, which make up proteins.  The next step, which isn’t explicitly stated and is poorly understood, is that proteins must fold into a specific three dimensional form in order to be useful.  What is startling to me, coming from a computer programming background, is that the coded sequence of DNA contains just four characters representing four small molecules: A (adenine), C (cytosine), G (guanine) and T (thymine).
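The DNA-to-RNA-to-protein pipeline can be sketched in miniature.  The sequence and the four-entry codon table below are illustrative toys, not a full genetic code:

```python
# Minimal sketch of the central dogma: DNA -> mRNA -> protein.
# The sequence and this tiny codon table are for illustration only.
CODONS = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(coding_strand: str) -> str:
    """mRNA carries the coding-strand sequence with T replaced by U."""
    return coding_strand.replace("T", "U")

def translate(mrna: str) -> list[str]:
    """Read the mRNA three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODONS[mrna[i:i + 3]]
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

mrna = transcribe("ATGTTTGGCTAA")
print(mrna)               # AUGUUUGGCUAA
print(translate(mrna))    # ['Met', 'Phe', 'Gly']
```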

These four codes are interpreted in groups of three, which gives 64 possible “words” for amino acids in the genetic code (4 X 4 X 4).  Of the 64 possible combinations, 61 code for amino acids and three serve as “stop” signals; since only 20 amino acids are needed to make all the known proteins, there is built-in redundancy, with most DNA sequences specifying the same amino acid as some other sequence.  Only Tryptophan and Methionine rely on a single coded sequence; all the others have at least two sets of DNA codes and some (Serine, Leucine and Arginine) have six.  It seems possible to me that different evolutionary branches developed a reliance on different DNA sequences for the amino acids.  For someone with a data processing background, the DNA codes are reminiscent of a computer system that has been copied and modified to meet different objectives – even to the extent that duplicate codes are mainly sequential (e.g., Leucine: TTA, TTG, CTT, CTC, CTA, CTG).  From a “systems design” perspective it would seem that at one time there was provision for expansion, but after evolutionary modifications all 64 codes are now in use.  I suppose that if there developed a need for a 21st amino acid, one of the existing redundant codes would be used.  (In fact, some organisms encode a 21st amino acid, selenocysteine, by repurposing the UGA stop codon in a special context.)  The whole process is very complex, but the same basic DNA, RNA and amino acids are found in all life forms on Earth.  The genetic code is essentially universal to life as we know it.  (There are some exceptions.  The Paramecium uses the “stop” codons, UAG and UAA, to code for Glutamine.)
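The redundancy is easy to verify by tabulating the standard genetic code.  In the Python sketch below, the 64-character string is the conventional layout of the standard code (as published by NCBI), with each codon position cycling through T, C, A, G and “*” marking the three stop codons.

```python
from collections import Counter
from itertools import product

# Standard genetic code in the conventional TCAG ordering.
bases = "TCAG"
aas = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codon_table = {"".join(c): aa for c, aa in zip(product(bases, repeat=3), aas)}

# Count how many codons map to each amino acid (stops excluded).
redundancy = Counter(aa for aa in codon_table.values() if aa != "*")
print(len(redundancy))   # 20 amino acids
print(redundancy["L"])   # Leucine has 6 codons
print(redundancy["W"])   # Tryptophan has only 1
```

Running this confirms the counts above: 61 codons for 20 amino acids, with Leucine, Serine and Arginine at six codons each and Tryptophan and Methionine at one.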

“Natural selection” coupled with the genetic code has given enormous explanatory power to evolutionary biology.  But like all theories, it is a conceptual model of the physical processes that occur.  There remain many questions, such as how life began.  And then there’s the question asked by Stephen Hawking: “What is it that breathes fire into the equations and makes a universe for them to govern?”  What is it that actually makes the world act in a way that is consistent with the conceptual model?  Readers of my previous posts will suspect that my answer is similar to what I’ve written before: there is a decisional power at work in the universe that breathes life into biological molecules.  It is this decisionality that ensures that time flows forward and therefore gives evolution direction.

Some of the support for my answer resides in the evidence for directionality in evolution.  But, first of all, the evolutionary model is a rational model.  Even more amazing is that the implementation of the genetic code is an abstract, rational system!  Who would have thought that nature would arrive at the very rational system of using a three-character code to specify sequences of the 20 amino acids that comprise the proteins for all life?  Let me be direct: the genetic code is information.  The central dogma of molecular biology is an information processing system.  The end results are proteins and decisional governance of the cell.  This is exactly the type of system one might expect from a rational agent acting through nature.

As to directionality, the most immediate evidence is the adaptability of evolutionary change.  Evolutionary change produces living organisms that get better at adapting to their environment.  Not only are more advanced organisms better adapted, but they are better at adapting!  For higher life forms like mammals, and particularly humans, this implies a higher consciousness.  Therefore, the longer-range implication of evolution is higher consciousness.  I think this trend is evident from the archeological and historical record.  For almost 4 billion years, life has survived under the constant threat of a cosmic catastrophe such as the one that brought an end to the dinosaurs.  Today, we are beginning to track the asteroids and comets that have the potential to cause another life-ending cataclysm.  That would not be possible without some sort of advanced consciousness.  In a strange sort of self-reflection, adaptation has become adaptability, for which a higher consciousness is needed.  This implies a robust moral development as well, but that is beyond what I can cover in these posts on science and reason.

But a rational agent is not the only explanation.  The alternative view is that evolution is the byproduct of random mutation.  First of all, I don’t think randomness is a good scientific answer.  Science succeeds when it finds and explains rational patterns.  To say that a process is random is to admit defeat from a scientific point of view.  The second thing I would say is that when someone refers to random mutation, it is unclear what type of randomness they mean: lack-of-knowledge randomness or the genuine non-determinism of quantum physics.  The common view of evolution is that it requires generations of offspring in order for nature to select the best attributes and pass those on to future generations.  Is evolution inherently random because some individuals show up at the wrong place at the wrong time or, alternatively, at the right place at the right time?  Is it random because a cosmic ray has altered the genome?  Is it random because we can’t predict how our children will turn out?  The most likely reason mutation might be random is a transcription or copying error.  But modern cells have evolved elaborate safeguards against such copying errors.

It turns out that when evolutionists speak of “random mutation,” they mean something specific.  My biology textbook (on Kindle!) uses the phrase only once in over 1000 pages of small-font text, and that one occurrence refers to copies of genes that have lost functionality (i.e., the gene has been degraded) over time.  The textbook does not refer to new functionality as “random mutation,” but it does use the phrase “accidents during meiosis” (cell division in reproductive cells).  This phrase, too, has a specific meaning that might not match its everyday English interpretation.  In general, the textbook prefers to state evidence positively, in terms of what we know rather than what we don’t know.  As to genetic mutation, it refers to various mechanisms for altering the genome, such as transposition of small portions of the DNA from one location to another.

One internet site was particularly helpful in tracking down the origin of the phrase “random mutation.”  The site, associated with the UC Museum of Paleontology at Berkeley, is a teaching guide for evolution named “Evolution 101.”  This source was very explicit:

Mutations are random.
Mutations can be beneficial, neutral, or harmful for the organism, but mutations do not “try” to supply what the organism “needs.” In this respect, mutations are random—whether a particular mutation happens or not is unrelated to how useful that mutation would be.

Behind this brief description is a debate that began with Darwin.  Prior to Darwin, a French biologist named Jean-Baptiste Lamarck held the view that (1) individuals acquire traits that they need and lose traits that they don’t need, and (2) individuals inherit the traits of their ancestors.  He gave as examples the giraffe, whose neck was assumed to have stretched in order to reach higher leaves in trees, and blacksmiths, whose strong arms appeared to have been inherited by their sons.  But these ideas have been debunked.

When Darwin published Origin of Species in 1859, he gave some credibility to Lamarck’s view, and some later evolutionists elevated Lamarck’s idea to a major theme of evolution.  By the mid-twentieth century, biologists had become adept at doing experiments with bacteria.  In 1943, two biologists, Max Delbrück and Salvador Luria, set out to test Lamarck’s hypothesis in bacteria, which were thought to be a likely organism to use Lamarckian adaptation.  The Luria-Delbrück experiment tested whether bacteria exposed to a lethal virus would develop an adaptive mutation, and whether that mutation would be acquired prior to exposure or only upon exposure.  Their experiment showed conclusively that some bacteria had acquired an adaptive mutation prior to exposure, as did subsequent experiments by others, including Esther and Joshua Lederberg, who are referenced on the “Evolution 101” website.
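The elegance of the fluctuation test is statistical, and its logic can be captured in a short simulation.  The sketch below is my own, with made-up parameters, not the 1943 protocol: if resistance is induced only at exposure, resistant counts across parallel cultures follow a Poisson distribution (variance roughly equal to the mean); if mutations arise at random during growth, an early mutant founds a large clone, producing occasional “jackpot” cultures and a variance far above the mean.

```python
import math
import random

random.seed(42)

def poisson(lam):
    # Knuth's Poisson sampler; adequate for the small means used here.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

GENS, MU = 20, 2e-7  # assumed generations of growth and mutation rate

def spontaneous():
    # Darwinian model: mutations arise during growth; early ones found clones.
    pop, mutants = 1, 0
    for _ in range(GENS):
        mutants = 2 * mutants + poisson(pop * MU)  # clones double, new arise
        pop *= 2
    return mutants

def induced():
    # Lamarckian model: resistance appears only at exposure -> Poisson counts
    # (mean chosen to match the spontaneous model's expected count).
    return poisson(GENS * MU * 2 ** (GENS - 1))

def vmr(xs):
    # variance-to-mean ratio across parallel cultures
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1) / m

spont_vmr = vmr([spontaneous() for _ in range(1000)])
ind_vmr = vmr([induced() for _ in range(1000)])
print("spontaneous VMR:", spont_vmr)  # jackpots push this far above 1
print("induced VMR:    ", ind_vmr)    # stays close to 1
```

The spontaneous model’s variance-to-mean ratio comes out far larger than the induced model’s, which is essentially the pattern Luria and Delbrück observed in their colony counts.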

So, based on experiments, what evolutionists mean when they say that mutations are random is that some adaptive mutations occur before any exposure to infectious agents in a test.  The mutations do not occur because of the exposure.  Now this is a somewhat contentious finding because it defies the rather commonsense view that mutations happen for a reason, most likely one related to some inoculation or exposure to an agent.  In other words, either the finding appears to violate causality or the explanation is an admission of ignorance about the cause of adaptation.

I take the view that the finding is an admission of ignorance.  We really don’t know what might have caused an adaptive mutation to occur before exposure.  The real scientific question is what causes the mutation, and biologists prefer to focus on what we can discover.  One such biologist is James A. Shapiro, professor of microbiology at the University of Chicago.  He characterizes the association of “random mutation” with the Luria-Delbrück experiment as follows:

One has to be careful with the word “proof” in science. I always said that conventional evolutionists were hanging a very heavy coat on a very thin peg in the way they cited Luria and Delbrück. The peg broke in the first decade of this century.

Professor Shapiro goes on to write about mechanisms that bacteria have for “remembering” previous exposure to infectious agents.  Those mechanisms include modification of the bacterial DNA.  He states that Delbrück and Luria would have discovered this if they had not used a virus that was invariably lethal and if they had had the tools for DNA analysis.  The announcement of the DNA structure would take place in 1953, ten years after the Luria-Delbrück experiment, and the tools for analysis are still being developed.  It should not be too big a surprise that bacteria have elaborate mechanisms for DNA sharing and modification.  The human immune response to invasive agents also includes the recording of information in the DNA of certain white blood cells (lymphocytes).  You can read Shapiro’s entire article here:

It is no longer fashionable to speak of Lamarckian inheritance, but the field of epigenetics is devoted to inheritance by means other than changes to the DNA sequence.  My own view is that the amount of debate and discussion on the issue of “soft” inheritance points to a conclusion that this is unsettled science.  Microbiologists today have many more tools and techniques for answering questions about causes of adaptive inheritance than they did sixty years ago, and I suspect that they would prefer to look at changes to the DNA and other molecules rather than make statistical inferences as Luria and Delbrück did.  Current research of the type that James Shapiro is doing is demonstrating specific causes for adaptation.