Consciousness and Dualism (Part 3)

Late in chapter 3 of Mind and Cosmos, Nagel introduces one candidate for a solution to the problem of Cartesian dualism.  This approach can be called monism or panpsychism.  Panpsychism is the view that matter and mind are two manifestations of a single underlying substance, so that all matter has a mental aspect.  Nagel thinks this path offers one possible framework for an eventual solution:

“Everything, living or not, is constituted from elements having a nature that is both physical and nonphysical— that is, capable of combining into mental wholes. So this reductive account can also be described as a form of panpsychism: all the elements of the physical world are also mental.”

“A comprehensively reductive conception is favored by the belief that the propensity for the development of organisms with a subjective point of view must have been there from the beginning, just as the propensity for the formation of atoms, molecules, galaxies, and organic compounds must have been there from the beginning, in consequence of the already existing properties of the fundamental particles. If we imagine an explanation taking the form of an enlarged version of the natural order, with complex local phenomena formed by composition from universally available basic elements, it will depend on some kind of monism or panpsychism, rather than laws of psychophysical emergence that come into operation only late in the game.”

However, there is a serious problem.  We have no idea how elementary particles could possess subjectivity, the attribute Nagel calls “proto-mental.”  Such an understanding would be necessary in order to explain how individual particles could come together and form conscious organisms.  The best candidate for such an explanation is quantum physics, but our present understanding of quantum physics is limited to its computational aspects.  Even if those aspects are physically real, computation supplies only part of the solution, and it helps only if you assume that consciousness is at least partly the result of computation.

There are some philosophical positions that hold that consciousness is all computation (Dennett, Kurzweil).  Anyone holding such a position may be quite happy with a quantum solution since it would explain how minds could be made of matter with computational abilities.  Still, one would need to work out the details of how individual particles with computational ability can be organized so that the total organism’s mind appears to be an unbroken whole.  That work might be made easier by quantum entanglement, but there is very little theoretical understanding on which to build.

Quantum entanglement does provide a theoretical advantage over classical computation.  Entanglement allows quantum information to be encoded more compactly than classical information, which gives it the ability to reduce entropy, and low entropy is a characteristic of order.  In this way, quantum computation has order-producing power beyond that of classical computation.  But how this could take place in biological organisms remains a mystery.
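
To make the entropy claim concrete, here is a minimal numerical sketch (my own illustration, not an argument from Nagel or a model of biology): an entangled pair of qubits has zero total entropy even though each half, viewed alone, is maximally disordered, something no pair of classical bits can manage.  The function names below are mine.

    import numpy as np

    def von_neumann_entropy(rho):
        """Von Neumann entropy in bits: S = -Tr(rho log2 rho)."""
        eigvals = np.linalg.eigvalsh(rho)
        eigvals = eigvals[eigvals > 1e-12]       # drop zero eigenvalues
        return float(-np.sum(eigvals * np.log2(eigvals)))

    def trace_out_second_qubit(rho):
        """Reduced state of the first qubit of a two-qubit density matrix."""
        return np.einsum('abcb->ac', rho.reshape(2, 2, 2, 2))

    # Bell state |Phi+> = (|00> + |11>) / sqrt(2)
    phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
    rho_pair = np.outer(phi, phi)
    rho_half = trace_out_second_qubit(rho_pair)

    print(von_neumann_entropy(rho_pair))   # ~0.0 bits: the pair as a whole is perfectly ordered
    print(von_neumann_entropy(rho_half))   # ~1.0 bit: either half alone looks completely random
    # Conditional entropy S(A|B) = 0 - 1 = -1 bit, which is impossible for classical information.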

Nagel finds additional problems with panpsychism when he imagines how it might address the developmental problem of life.  How did life originally arise from non-living matter and how did this proto-mental attribute of matter overcome the unlikelihood of random chemical interactions leading to life?  He concludes the section on panpsychism with this pessimistic comment:

“The idea of a reductive answer to both the constitutive and the historical questions remains very dark indeed. It seeks a deeper and more cosmically unified explanation of consciousness than an emergent theory, but at the cost of greater obscurity, and it offers no evident advantage with respect to the historical problem of likelihood.”

I find myself more optimistic about the outlook for a form of panpsychism that is based on quantum physics.  I think the entropy lowering capability of quantum computation will go a long way towards explaining the ordering power inherent in life and consciousness.  The problem is that quantum computation does not really explain subjective experience unless you assume that subjectivity is the result of computation.  And that assumption re-introduces the problem of dualism because you would need to assume subjectivity in all matter, not just living matter.  As soon as you’ve assumed that subjective experience is an attribute of living matter only, then it has to be an optional attribute, introduced by something besides physical law.

Consider what happens at the moment of death of any living organism.  For a brief instant, the chemical composition remains unchanged, yet life and consciousness are gone.  Subjectivity as we have come to know it during life has disappeared.  This would seem to indicate that subjectivity is an optional attribute of the material world, and that is dualism.

I suppose it is possible that there are subtle changes in the chemical composition at the time of death, but would those changes be observable enough to indicate clearly which came first?  The subtlety may be telling us how miraculous consciousness is in the first place.  One can also consider the action of anesthetics, which cause temporary unconsciousness.  For example, ether can cause unconsciousness in humans and inactivity in the single-celled organism Paramecium, yet its exact mechanism of action remains unexplained.

While I think that panpsychism based on quantum physics offers hope for explaining the tremendous ordering power of life and consciousness, I do not find that it offers a complete answer to the problem of dualism when viewed from the materialist point of view.  I am drawn more to the idealist point of view as a solution to dualism.  No less a physicist than Leonard Susskind has suggested that the universe may be like a hologram.  (A hologram is a three-dimensional projection from a two-dimensional source.  For Susskind’s analogy to hold, the universe would need to be a four-dimensional projection from some external source.)  Susskind is an atheist, so he will not agree with my perspective that the universe is a projection from God, but that appears to be the only view that solves the problem of consciousness and dualism.

Thomas Nagel doesn’t agree with me either.  He finishes up chapter three by dismissing the theist path of an intentional power, but gives more credibility to what he calls the teleological framework.  The teleological path requires that the laws of nature are “value free,” yet they proceed toward a defined purpose or goal.  Nature’s laws would need to be “value free” to avoid the appearance of an intentional designer.  He needs to say more about how a desired goal can be free of value, and he promises to do so later in the book.


Consciousness and Dualism (Part 2)

“It has become clear that our bodies and central nervous systems are parts of the physical world, composed of the same elements as everything else and completely describable in terms of the modern versions of the primary qualities— more sophisticated but still mathematically and spatiotemporally defined. Molecular biology keeps increasing our knowledge of our own physical composition, operation, and development. Finally, so far as we can tell, our mental lives, including our subjective experiences, and those of other creatures are strongly connected with and probably strictly dependent on physical events in our brains and on the physical interaction of our bodies with the rest of the physical world.” – Thomas Nagel, Mind and Cosmos, Chapter 3.

With these words, Nagel begins to tell us about how the tremendous recent developments in biology and neuroscience have raised hopes for a materialist explanation of life and consciousness.  Science has made great strides in demonstrating the detailed workings of biology and brain function.  Yet, with all the scientific progress, the connection between mind and physical biology seems as elusive as ever.  Some philosophers have even taken the position that Nagel calls “eliminative materialism” in which mental events are illusory.

Nagel looks historically at conceptual approaches to solving the problem because he thinks that reductionism alone will not discover an answer.  A conceptual approach is based on adding something new to science in the hopes of providing the necessary leverage for new discovery.  This is similar to the approach of David Chalmers who has called for adding back into the scientific picture a fundamental quality of subjective experience.

Chalmers’ approach is to include within the science of consciousness the science of subjective experience, including the psychological and sociological implications of our mental states.  It is difficult to tell whether Nagel would agree with Chalmers.  Nagel thinks that whatever is added to science would need to be at least as radical as electromagnetic fields and relativity theory.  Here Nagel has written something very surprising from the viewpoint of science history.  The most radical scientific developments in the twentieth century were quantum physics and relativity theory, not electromagnetic fields and relativity theory.  Electromagnetic fields were definitively added to science in the 19th century by James Clerk Maxwell.  I wonder if Nagel has chosen to write “electromagnetic fields” in order to preserve some hope for quantum explanations of consciousness.

Nevertheless, Nagel’s description of the difficulty of avoiding dualism has challenged me.  As I have pursued the path of quantum explanations for consciousness, I have concentrated on the power of quantum calculation to bring about order.  There is real order-producing power in quantum computation that lends itself to a possible explanation for mental activity if consciousness is seen as a type of problem solving.

However, there is nothing in the computational model of quantum physics that can produce subjective experience.   I have worked over thirty years in computer systems design and programming and there is no way that classical computation alone will produce consciousness or subjective experience.  There must be a qualitative difference between classical computation and quantum computation that allows for the addition of a subjective sense of intent or purpose to quantum computation.  The type of subjective sense added to quantum calculation would probably need to be optional because it could not be required to be present for such non-conscious quantum calculations as take place in ordinary physics (for example, in lasers).  And if it is optional, or “contingent” as Nagel would say, then it must be considered a dualistic explanation.

The non-dualist solution to this problem takes me to George Berkeley, whom Nagel mentions in passing and whose idea of “subjective idealism” Nagel completely discounts.  Berkeley was an 18th-century philosopher whose perspective developed as a counterpoint to the new discoveries in science and the trend away from theism.  Berkeley’s idealism is the point of view that everything is mind and that all matter originates in God’s mind, and he maintains that view without denying the objective existence of material objects.  Some of his ideas influenced Albert Einstein, and his view of the role of consciousness in the act of perception has new echoes in quantum physics.  But few thinkers these days give much credence to idealism.  I think that Berkeley’s idealism takes on fresh meaning when seen through the lens of quantum physics.

Another non-dualist approach would be to discount subjective experience.  Those favoring a behaviorist approach would be happy with this line of reasoning.  If quantum computation is at the core of our mental ability, then that could explain our mental problem solving ability but it would leave unexplained any subjective experience.  Those who view subjective experience as an unnecessary byproduct of evolution, like the color of blood, might find this view attractive.  But it has the unfortunate consequence of turning us into zombies.

At this point in chapter 3, Nagel has left us with a significant puzzle: (1) there is no non-dualist materialist theory of consciousness that can explain subjective experience; (2) Idealist and theistic theories are not welcome; and (3) dualism leaves too much room for theistic explanation and therefore it is also not welcome.  I await his recommendation for something new to be added to the description of the physical world, because at this point, I am feeling challenged but also slightly unwelcome.

Theism and Materialism

In Chapter 2 of Mind and Cosmos by Thomas Nagel, the author explores the typical positions held by proponents of theism and by proponents of evolution.  His focus is sharpened by an analysis of the different ways that each point of view attempts to make sense of human beings as part of a world that ought to be intelligible to us.

According to Nagel, theists appeal to a deity who is outside the natural order, but who nevertheless provides intention and directionality to the natural order and who assures us of the basic reliability of our observational capacity and our reasoning ability.  It is a reassuring position at the expense of requiring a power outside of the natural order.  It suffers from a lack of any serious attempt to make human beings intelligible from within the natural order.

Evolutionary naturalists, on the other hand, claim that humanity is intelligible from within the natural order based on science and reason.  But, again according to Nagel, the problem is that both science and reason are the products of evolution and we have no authority outside of ourselves to substantiate the reliability of our understanding of science.  In Nagel’s terminology, evolutionary naturalism undermines its own claim of reliability.  Ultimately, the evolutionary explanations fail because the science that we possess has failed to explain consciousness and therefore failed to explain why we should trust the judgments arising from our consciousness.

I think Nagel is stretching too far for a criticism of the evolutionary point of view.  Its main problem is the inability of science to explain consciousness.  Faulting evolution for its inability to provide reassurance that our reasoning is sound is a criticism that applies equally to the theist position.  Both positions are based on faith!  Theists have faith in God based on a religious community, and Darwinian evolutionists have faith in science based on the scientific community.  If anything, the evolutionary point of view has the advantage in that the scientific community is generally more unified and disciplined than the religious community.

The primary distinction between the two points of view, then, is the position and importance that each assigns to humanity.  Theism relies on a power outside the normal purview of science to explain and give meaning to human life and consciousness while evolution relies solely on current science at the expense of diminishing any essential or transcendent importance for human life and consciousness.

Nagel is searching for middle ground.  He wants an explanation for consciousness that does not rely on a power outside the natural order.  At this point in his book, I think he fails to see that any such explanation will be relying on faith in something.  Whether that something is science or philosophy or some combination, it will still be the object of faith.  Given the constraints on his search that there can be no power outside the natural order, his explanation would not be able to claim any more authority than evolutionary materialism.

From my point of view, a form of theism that provides a way for God to work through the natural order provides the best alternative.  The importance and discipline of science is maintained and modified so that human life and consciousness have access to transcendent power for guidance and assurance.

Scientific reductionism ends at the quantum boundary, so the assumption of transcendent consciousness working at the quantum level provides for the needed adjustment to science while maintaining the entire scientific edifice based on empirical evidence and reductionist explanation.  And there is scientific evidence for an order producing power working at the quantum level.  This evidence is being developed by the nascent scientific discipline of quantum biology.

The strongest evidence to date comes from quantum action during photosynthesis, but I expect much more evidence as quantum biology matures.  After all, isn’t all of physics based on quantum action?  The only alternative besides dualism would be a view that posits new scientific principles acting at the biological level.  But, it seems to me that there is too much continuity between chemistry and biology.  That continuity leaves little room for wholly new principles to be plausible.

Has Scientific Reductionism Failed?

Yesterday, I began reading Thomas Nagel’s book, Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False.  This book has generated a lot of controversy and I wanted to comment on some of the author’s statements as I encountered them rather than waiting until I had finished the book.

In Chapter 1, Nagel lays out his basic argument.  He is asserting that the central concept about the nature of the universe held by most secular-minded persons is not true.  That concept is that life began from a chemical accident several billion years ago and once life began to evolve, it proceeded by random mutation to develop new species arriving at humankind all within a time frame set by the age of the earth and the age of the universe.

According to Nagel, the reason that most secular-minded people hold this view is that many scientists present this view as the only possible scenario: “But among the scientists and philosophers who do express views about the natural order as a whole, reductive materialism is widely assumed to be the only serious possibility.”

Nagel goes on to say that reductive materialism has failed: “The starting point for the argument is the failure of psychophysical reductionism, a position in the philosophy of mind that is largely motivated by the hope of showing how the physical sciences could in principle provide a theory of everything.”

Later, Nagel qualifies this by saying he is mainly speaking about materialist reductionism as it applies to biology (and mind, presumably).  And here is where some clarification is needed.  Nagel uses several phrases to describe the type of reductionism he is speaking about.  In Chapter 1 they are:  “psychophysical reductionism,” “physico-chemical reductionism,” and “materialist reductionism.”  What they all have in common is reductionism, so it will help to understand what reductionism is.

Reductionism is the idea that any complex entity can be completely understood and explained by analysis of its parts.  It is like peeling back the layers of an onion to reveal the innermost layer, which presumably is the fundamental layer from which everything can be explained.  Within the physical sciences this approach has been very successful.  The innermost, fundamental layer for the physical sciences is the layer described as the “Standard Model of Particle Physics.”  This model describes the fundamental particles, such as the electron and the quarks that make up protons and neutrons, as well as the fundamental forces, such as the electromagnetic force.

The standard model has been very successful.  Its most recent achievement was the prediction and tentative confirmation of the Higgs boson, known in the popular press as the “God particle,” a nickname that came from the title of a popular book and that most physicists dislike.  So I was taken aback when I first read that reductionism had failed.

I think that Nagel is referring to the current inability to explain biology and particularly mind in terms of the features of the standard model.  I think that is an accurate statement:  living organisms cannot be fully understood or explained by appealing to their constituent particles and fundamental forces, if those entities are understood mechanically.

What I think is missing is the realization that the standard model may not be the most fundamental layer of scientific reductionism.  It is simply the layer that is best understood.  The standard model describes phenomena at the quantum boundary.  Its particles and forces are the smallest measurable entities on which science can perform experiments.  The components of the standard model are conceptual entities.  But they are conceptual entities that have a huge advantage over the layer beneath them: they are measurable.

One could argue that the quantum layer is more fundamental than the standard model.  The huge problem is that the conceptual entities of the quantum layer, quantum states, cannot be measured, even in principle.  Yet quantum states are the mathematical entities that are essential for the success of the standard model.  So who is to say that quantum states are any less real than electrons and protons?

At the quantum boundary, science has encountered the absolute limit on what can be measured.  So, in that sense, science has reached the limit of what it can confirm experimentally.  But, if one believes that the quantum world is real, then an entirely different picture emerges from the standard model.  Instead of mechanistic particles, the quantum world suggests that elementary particles are computed entities.  One does not need to attribute classical computation to these tiny bundles of energy.  What is important is that there exists a decisional process in the universe that determines the specific outcome whenever one of these particles participates in the transfer of energy from one place in space-time to another place in space-time.

In other words, the fundamental particles are more mind-stuff than material-stuff.  I think that counts as a success for scientific reductionism, not as failure.  Of course the problem is that one must make a leap of faith to the point of view that the quantum world represents reality.  That might be a leap too far for the many who have been trained in the classical view of reality.

Consciousness (Part 1)

So far, in this series on the evidence for a conscious, rational power working in and through the laws of nature, I have followed the trail of low entropy.  I have used a general notion of entropy where low entropy correlates with an increasing degree of order or where it correlates with an increasing concentration of energy.  Consequently, high entropy means a state of disorder or a state of energy dispersal, most often as wasted heat.  I began with the amazing state of low entropy (highly ordered, high energy concentration) in which the universe was created.
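
To make that informal usage concrete, here is a minimal sketch using Shannon’s formula as a stand-in for the general notion (the probability distributions are purely illustrative):

    import math

    def shannon_entropy(probs):
        """Shannon entropy in bits: H = -sum(p * log2(p))."""
        return sum(-p * math.log2(p) for p in probs if p > 0)

    ordered = [1.0, 0.0, 0.0, 0.0]           # everything concentrated in a single state
    disordered = [0.25, 0.25, 0.25, 0.25]    # spread evenly across all states

    print(shannon_entropy(ordered))      # 0.0 bits: maximal order / concentration
    print(shannon_entropy(disordered))   # 2.0 bits: maximal disorder / dispersal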

I followed the trail of low entropy through the complex of mathematically precise physical laws that represent the incredible ordering power of nature.  I spoke of lasers, superconductivity and photosynthesis as supreme examples of entropy lowering processes.  I looked at the incredibly diverse life processes, all based on DNA, RNA and protein synthesis, that would be impossible without the information coding capability and the molecular machines of the individual cell.  I described the computer-like processing capability of individual proteins and the inexplicable speed with which they fold into the precise shape for their purpose.

I have tried to avoid the teleological language of purposeful design, but when one looks at the trail from creation to conscious being, it is difficult to avoid the question.  Random chance cannot account for this remarkable journey.  The probabilities are just too small for undirected forces to have arrived at living beings that maintain low entropy and rely on entropy-lowering processes.  This implies, to me at least, that the laws of physics are favorable to life and consciousness.  What is it that has driven evolution to the point of prizing consciousness almost above other considerations?  Consciousness requires a huge energy budget; why should our brains deserve a 20 percent allocation of the body’s energy if not for consciousness’s powerful entropy-lowering ability?

An incredible panoply of ordered life flows from the human imagination.  There is language, art, drama, literature, music and dance in addition to the social inventions of government, economic systems, justice systems, cultural institutions, family and kinship groups.  One could almost say that the creation of explicitly ordered social structures defines humanity.  And yet there is a profound puzzle in the pervasive human tendency to sow discord.  Why should that be?  Why are there wars, violence, terrorism, and dysfunctional social institutions if the human imagination can be so productive?

In discussing these and other questions of consciousness, I will attempt to follow my reductionist approach by relating emergent phenomena to the dynamics and properties of constituent components.  However, there will come a point where this approach will fail and I will need to resort to different language to describe what I consider to be the key dynamic of consciousness: the self and its narrative.  Consciousness cannot be completely understood based on functional descriptions of biological or physical components.  But first, let me turn to the attempt to explain consciousness in terms of computation.

Considering that order emerges from entropy-lowering processes, it is odd that some observers think that consciousness and intelligence emerge from random, chaotic activity.  Pure randomness results in high entropy, so how can order be produced from chaos?  One such person is Ray Kurzweil, a futurist, who has written a book titled The Singularity Is Near.  He states, “Intelligent behavior is an emergent property of the brain’s chaotic and complex activity.”  Neither he nor anyone else can explain how entropy-lowering intelligence can emerge from random, chaotic activity.  He does, however, distinguish intelligence from consciousness.  He cites experiments by Benjamin Libet that appear to show that decisions are an illusion and that “consciousness is out of the loop.”  Later, he describes a computer that could simulate intelligent behavior: “Such a machine will at least seem conscious, even if we cannot say definitely whether it is or not.  But just declaring that it is obvious that the computer . . . is not conscious is far from a compelling argument.”  Like many others, Kurzweil thinks that consciousness is present if intelligence can be successfully simulated by a machine.

Kurzweil is an optimistic supporter of the idea that the human brain will be completely mapped and understood to the point where it can be entirely simulated by computation.  He has predicted that this should occur in the fifth decade of the 21st century: “I set the date for the Singularity – representing a profound and disruptive transformation in human capability – at 2045.  The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.”  Kurzweil’s prediction is based on the number of neurons in the human brain and their many interconnections, arriving at a functional memory capacity of 10^18 bits of information for the human brain (10^11 neurons multiplied by 10^3 connections for each neuron multiplied by 10^4 bits stored in each of the synaptic contacts).
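
Kurzweil’s capacity figure is easy to reproduce from the numbers quoted above (a back-of-the-envelope check, nothing more):

    neurons = 10**11                 # estimated neurons in the human brain
    connections_per_neuron = 10**3   # synaptic connections per neuron
    bits_per_synapse = 10**4         # bits assumed stored per synaptic contact

    capacity_bits = neurons * connections_per_neuron * bits_per_synapse
    print(capacity_bits)                  # 10**18 bits
    print(capacity_bits / 8 / 10**15)     # ~125 petabytes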

Kurzweil welcomes this prospective technological leap as a great advancement in the intellectual potential for the world.  He writes about his vision for the world after the singularity which he names the fifth epoch: “The fifth epoch will enable our human machine civilization to transcend the human brain’s limitations of a mere hundred trillion extremely slow connections.”  He goes on to say that eventually this new paradigm for intelligence will saturate all matter and spread throughout the universe.  Kurzweil appears to have the opposite perspective from my own view which is that the universe began with consciousness and consciousness infused all matter from the beginning.

But other people look at Kurzweil’s predictions and are concerned.  I recently read an opinion piece by Huw Price in the New York Times about the dangers of artificial intelligence (AI).  Price was on his way to Cambridge to take up his newly appointed position as Bertrand Russell Professor of Philosophy.  On the way, he met Jaan Tallinn, one of the developers of Skype.  Tallinn was concerned that AI technology could evolve to the point where it might replace humans and that, through some accident, the computers could take control.  So Tallinn and Price joined up with Martin Rees, a cosmologist with a strong interest in biotechnology, to form a group called the Centre for the Study of Existential Risk (CSER).  I suspect that the group will focus more on the risk to human life posed by biotechnology than on the risk from AI, but the focus of Price’s column was on the risk from artificial intelligence.

Professor Price presented the argument that, although the risk of such a computer takeover appears small, it shouldn’t be completely ignored.  Perhaps he has a valid point, but what are the empirical signs that such computer intelligence is near at hand?  Some might point to the victories in 2011 of IBM’s Watson computer over all challengers in the Jeopardy game show.  This was an impressive demonstration of computer prowess in natural language processing and in database searching, but did Watson demonstrate intelligence?  I think that Ray Kurzweil would answer yes.  To the extent that the Jeopardy game demonstrates intelligence, then, by that measure, Watson must be considered intelligent.

However, consider the following subsequent development.  In a recent news report, Watson was upgraded to use a slang dictionary called the Urban Dictionary.  As that source puts it,

“[T]he Urban Dictionary still turns out to be a rather profane place on the Web. The Urban Dictionary even defines itself as ‘a place formerly used to find out about slang, and now a place that teens with no life use as a burn book to whine about celebrities, their friends, etc., let out their sexual frustrations, show off their racist/sexist/homophobic/anti-(insert religion here) opinions, troll, and babble about things they know nothing about.’”  (From the International Business Times, January 10, 2013, “IBM’s Watson Gets A ‘Swear Filter’ After Learning The Urban Dictionary,” by Dave Smith.)

One of Watson’s developers, Eric Brown, thought that Watson would seem more human if it could incorporate slang into its vocabulary so he taught Watson to use the slang and curse words from the dictionary.  As the news report continued,

“Watson may have learned the Urban Dictionary, but it never learned the all-important axiom, ‘There’s a time and a place for everything.’ Watson simply couldn’t distinguish polite discourse from profanity.  Watson unfortunately learned all of the Urban Dictionary’s bad habits, including throwing in overly crass language at random points in its responses; in answering one question, Watson even reportedly used the word ‘bullshit’ within an answer to one researcher’s question. Brown told Forbes that Watson picked up similarly bad habits from reading Wikipedia.”

Perhaps the news story should have given us the researcher’s question so we could make our own decision about Watson’s epithet!  Eric Brown finally removed the Urban Dictionary from Watson.

In short, Watson was very good at what it was designed to do:  win at Jeopardy.  But it lacked the kind of social intelligence needed to distinguish appropriate situations for using slang.  It also appeared to lack a mechanism for learning from experience that some situations were inappropriate for slang or how to select slang words based on the social situation.  Watson was ultimately a typical computer system that had to be modified by its developers.  I know of no theoretical framework in which a computer system could maintain and enhance itself.

Now consider another facet of Watson versus a human Jeopardy contestant.  Our brain requires about 20% of our energy.  For a daily energy requirement of 2000 Calories, that amounts to 400 Calories for human mental activity.  That works out to about 20 watts of power.  In terms of electricity usage, that is less than 6 cents per day in my area.  Somewhat surprisingly, the brain’s energy consumption does not depend much on one’s state of alertness.  The brain uses energy at about the same rate even when you sleep.  Watson, in contrast, used 200,000 watts of power during the Jeopardy competition.  That computes to about $528 per day.  If computers are to compete with humans for evolutionary advantage, it seems to me that they will need to be much more efficient users of energy.
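
The comparison works out roughly as follows (a quick check assuming an electricity rate of about 11 cents per kilowatt-hour, which is what I pay; your rate will differ):

    RATE_PER_KWH = 0.11      # assumed electricity price in dollars per kilowatt-hour
    HOURS_PER_DAY = 24

    # 400 food Calories per day = 400 * 4184 J / 86,400 s, or roughly 19 watts
    brain_watts = 400 * 4184 / 86_400

    def daily_cost(watts):
        return watts / 1000 * HOURS_PER_DAY * RATE_PER_KWH

    print(round(brain_watts))              # ~19 W, hence the "about 20 watts" figure
    print(round(daily_cost(20), 2))        # human brain: ~$0.05 per day (under 6 cents)
    print(round(daily_cost(200_000)))      # Watson during Jeopardy: ~$528 per day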

In fact, the entire idea of comparing computers to human mental activity is absurd to many people.  Perhaps I have even encouraged this analogy by speaking of quantum computation relative to biological molecules.  But I think it will become very apparent that any putative quantum computation must be something quite unlike ordinary computer calculations.  The mathematician and physicist Roger Penrose thinks that the fact that human mathematicians can prove theorems is evidence for quantum computation and decisionality in human consciousness.  But he also thinks that quantum computation must have capabilities that ordinary computers do not have.

John Searle is a philosophy professor at UC Berkeley who thinks that the current meme that the brain is a computer is simply a fad, no more relevant than the metaphors of past ages: the telephone switchboard or the telegraph system.  Searle holds that consciousness is a real subjective experience that is not open to objective verification.  It is therefore possible to explore consciousness philosophically, but not as an objective, measurable phenomenon.  Searle is known for his “Chinese Room” thought experiment, in which a person who knows no Chinese produces sensible Chinese responses by mechanically following rules for manipulating symbols; Searle claims that no real understanding of the Chinese is taking place.  Searle states, “. . . any attempt to produce a mind purely with computer programs leaves out the essential features of mind.”

Closely related to the “Chinese Room” is the Turing test, which asks whether a computer can simulate a human being well enough to fool another person.  In the Turing test, a person, the test subject, sits at a computer terminal which is connected either to another person sitting at a keyboard or to a computer.  The task of the test subject is to determine, by conversation alone, whether he or she is dialoging with another person or a computer.  An actual competition, the Loebner Prize, has been held each year since 1991, with prizes awarded.  So far, no computer program has been able to fool the required 30 percent of test subjects.  Nevertheless, the computer program that fools the most test subjects wins a prize.  People also compete with each other because half of the test subjects are connected to other persons who must try to demonstrate some characteristic in the dialog that will convince the test subject that he or she is really talking to another person.  The person who does best at convincing test subjects that they are communicating with another person wins the “Most Human Human” award.  In 2009, Brian Christian won that prize and wrote a book about his experience: The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive.

One of Brian Christian’s key insights in his book is that human beings attempt to present a consistent self-image in any public or interpersonal encounter.  In a dialog with another person, there is a striving to get beyond the superficial in order to reveal something of the personality underneath.  But the revealed personality is not monolithic; there are key self-referential elements of the conversation that reveal other possibilities.  Nevertheless there is a strong commitment to an underlying self-image, even if that self-image is ambiguous:

“[The existentialist’s] answer, more or less, is that we must choose a standard to hold ourselves to. Perhaps we’re influenced to pick some particular standard; perhaps we pick it at random. Neither seems particularly ‘authentic,’ but we swerve around paradox here because it’s not clear that this matters. It’s the commitment to the choice that makes behavior authentic.”

Authentic dialog, therefore, contains elements of consistent self-image and commitment to that self-image in spite of ambiguity and paradox.  A strong sense of self-unity underlies the sometimes fragmentary nature and unpredictable direction that human discourse often takes.  This is very difficult for a computer to simulate.

I think the risk from AI is so minuscule that it doesn’t deserve the level of concern that Jaan Tallinn was portrayed as having in Huw Price’s article.  There are two main assumptions in the assessment of risk that are very unlikely to be substantiated.  One assumption is that sheer computing power will lead to a machine capable of human intelligence within any reasonable time frame.  The second assumption is that such a machine, if created, could somehow replace humans in an evolutionary sense.

There are two problems with the first assumption, one theoretical and one practical.  The theoretical problem is that there is a limit to the true, valid conclusions that any automated system can reach.  This limitation is called “Gödel Incompleteness.”  It means that for any formal system powerful enough to draw useful conclusions, there will still remain true statements that cannot be reached by computation alone.  The computational counterpart of this limit is the “halting problem”: it is impossible to create a computer program that can decide, for every other program and input, whether that program will halt and produce a valid result.  The practical manifestation of the halting problem is that there is no way to introduce complete self-awareness into computer systems.  One can create modules that simulate self-awareness of other modules, but the new module would not be self-aware of itself.  This limitation implies that human intelligence will always be needed to correct and modify computer systems.
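
The core of the argument can be sketched in a few lines of code.  Suppose we were handed a function halts(program, data) that always answered correctly; the hypothetical construction below (a paraphrase of Turing’s diagonal argument, with names of my own choosing) shows why no such function can exist:

    def halts(program, data):
        """Hypothetical perfect halting decider -- assumed to exist, not implementable."""
        raise NotImplementedError

    def contrarian(program):
        # Ask the decider about a program run on its own source...
        if halts(program, program):
            while True:      # ...and do the opposite: loop forever if it would halt
                pass
        else:
            return           # ...or halt immediately if it would loop

    # Now consider contrarian(contrarian): if halts() says it halts, it loops;
    # if halts() says it loops, it halts.  Either answer is wrong, so no correct
    # halts() can exist.  This is the computational cousin of Godel incompleteness.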

(Roger Penrose’s book, Shadows of the Mind, presents the case for quantum consciousness in detail.  A key part of his argument is that computers are fundamentally limited by “Gödel Incompleteness.”  This implies, according to Penrose, that quantum coherence plays a key part in consciousness and that quantum calculations are capable of decisions exceeding the power of any ordinary computer calculation.)

The second problem with the first assumption is that it is very unlikely that a unified computer system with the computing power of the human brain can be developed in any reasonable time frame.  Professor Price doesn’t say what a reasonable time frame might be, but Ray Kurzweil does, placing the date for the singularity at 2045.  Kurzweil’s assumption is that the human brain contains storage for 10^18 bits (about 100 petabytes) of information.

In my previous post, I reported that Professor James Shapiro at the University of Chicago thinks that biological molecules, not the cell, are the most basic processing unit.  This implies that Kurzweil should be using the number of molecules in the brain rather than the number of neurons.  Assuming about 10^13 molecules per neuron, that increases the estimated human brain capacity to about 10^31 bits, ten trillion times Kurzweil’s figure.  This concept of storing large volumes of data in biological molecules is supported by recent research in which 5.5 petabytes of data were stored in one gram of DNA.  Keep in mind that we are speaking only of storage capacity (and only for neurons, omitting the glial cells), not of processing power.  If the processing power of the biological molecule is aided by quantum computation, then we have no current method for estimating the processing power of the human neuron.

Assuming that processing power is on a par with storage capacity, and assuming that computer capacity and power can double according to Moore’s law (every two years – another questionable assumption because of quantum limits), then there would need to be about 43 more doublings of storage capacity, or roughly another 86 years beyond Kurzweil’s estimate of 2045.  That places the projection for Kurzweil’s “singularity” well into the twenty-second century.
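
The doubling arithmetic behind that revised estimate (a quick sketch; the 10^31-bit figure is of course itself speculative):

    import math

    kurzweil_bits = 10**18    # Kurzweil's estimate of human brain storage
    molecular_bits = 10**31   # revised estimate if molecules, not neurons, hold the information

    doublings = math.log2(molecular_bits / kurzweil_bits)   # extra doublings still needed
    years = doublings * 2                                   # Moore's law: one doubling every two years

    print(round(doublings), round(years), 2045 + round(years))   # ~43 doublings, ~86 years, ~2131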

The second assumption is that sufficiently advanced machine intelligence, if it could be developed, would be able to replace humans through evolutionary competition.  I have already mentioned the energy efficiency disadvantage of current silicon-based computers:  200 kilowatts for Watson’s Jeopardy performance versus 20 watts for human intelligence.  I have also argued that there is no theoretical framework in which computer algorithms could modify themselves in an evolutionary sense.  I can also discount approaches based on evolutionary competition in which random changes are arbitrarily made to computer code.  I have seen too many attempts to fix computer programs by guesswork that amounts to little more than random changes in the code.  It doesn’t work for computer programmers and it won’t work for competing algorithms!

My conclusion is that the main practical threat to human intellectual dominance will be biological and not computational (in addition to our own self-destructive tendencies).  That leaves open the possibility for biological computation, but that threat is subsumed by the general threat of biological genetic engineering and by the creation of biological environments that are detrimental to human health and well-being.

I have taken this lengthy excursion into the analysis of the computer / brain analogy in order to eliminate it as one path toward understanding consciousness.  The idea that computation can produce human consciousness is an example of functionalism:  the concept that a complete functional description of the brain will explain consciousness.  Human consciousness is a complex concept which resists empirical exploration.  Let’s look at the key problem.

David Chalmers is a professor of philosophy at Australian National University who has clearly articulated what has become known as the hard problem of consciousness.  In his 1995 paper, “Facing Up to the Problem of Consciousness,” he first describes the easy problem.  The easy problem is the explanation of how the brain accomplishes a given function, such as awareness or the articulation of mental states, or even the difference between wakefulness and sleep.  This last category, when pushed to consider different states of awareness, previously had seemed to me to be the most promising path towards understanding consciousness.

It has been known for some time that there are different levels of consciousness that are roughly correlated to the frequency of brain waves, which can be measured by electroencephalogram (EEG).  Different frequencies of brain waves have traditionally corresponded to different levels of alertness.  The frequency range that seems to hold the most promise for understanding consciousness is the gamma band, at roughly 25 to 100 cycles per second (hertz, or Hz), with 40 Hz usually cited as representative.  In 1990, Francis Crick (co-discoverer of the DNA structure) and Christof Koch proposed that oscillations in the 40 Hz to 70 Hz range were the key “neural correlate of consciousness.”  The neural correlate of consciousness is defined to be any measurable phenomenon which can substitute for measuring consciousness directly.

The neural correlate of consciousness is a measurable phenomenon; and measurable events are what distinguish the easy problem from the hard problem of consciousness.  The easy problem is amenable to empirical research and experiment; it explains complex function and structure in terms of simpler phenomena.  The hard problem, by contrast, raises a new question: how is it that the functional explanation of consciousness (the easy question) produces the experience of consciousness, or how is it that the experience of consciousness arises from function?  As Chalmers says, why do we experience the blue frequency of light as blue?  Implicit in this question is the idea that consciousness is unified despite its many different functional inputs.  Color, shape, movement, odor, sound all come together to form a unified experience; we sense that there is an “I” which has the unified experience and that this “I” is the same as the self that has had a history of similar or not so similar experiences.  My rephrasing of the hard question goes like this: how is it that we have a self with which to experience life?

Chalmers thinks that a new category for subjective experience will be needed to answer the hard question.  I think that such an addition is equivalent to adding consciousness as a basic attribute of matter.  That is what panpsychism asserts, and I think that the evidence from physics, chemistry and biology supports the panpsychist view.  I think panpsychism leads directly to experiences of awareness, consciousness and self-consciousness and that the concept of a self-reflective self is the natural conclusion of such a thought process.  David Chalmers thinks that the idea has merit, but differentiates his view from panpsychism, saying “panpsychism is just one way of working out the details.”

My next post will conclude this series and will directly present the theological question.

The Evidence from Evolution and Biology (Part 3)

In part 2 of this series on evolution and biology, I presented my analysis on the origin of life and my conclusion that life could not have arisen through random chance alone.  I have concluded along with other observers that the laws of physics and chemistry must be conducive to the creation of life and that such laws are evidence for a cosmic ordering power.  The question remains, however, what part does random chance play once life was created?  In part 1 of this series, I raised the question about the role that random mutations play in natural selection.  In this part, I will present evidence that natural selection does not rely entirely on random mutation and that there is at least some portion of natural selection that relies on directed mutation.

The most likely systematic way to create random changes in DNA is through copying errors.  One of the first researchers to deal rigorously with copying errors was Manfred Eigen with his “quasi-species” model.  In this mathematical model of natural selection, survival and fitness to survive are balanced against replication errors.  Here is Freeman Dyson’s description of the problem:

The central problem for any theory of the origin of replication is that a replicative apparatus has to function almost perfectly if it is to function at all. If it does not function perfectly, it will give rise to errors in replicating itself, and the errors will accumulate from generation to generation. The accumulation of errors will result in a progressive deterioration of the system until it is totally disorganized. This deterioration of the replication apparatus is called the “error catastrophe.”

Eigen’s model sets a theoretical limit on the error rate that can be tolerated without triggering the “error catastrophe.”  It turns out that the maximum error rate is approximately the inverse of the number of DNA base pairs.  So for humans, with about 3.2 billion base pairs, the calculated maximum error rate is about 10^-9 per base pair copied, or roughly 1 error per billion base pairs.  This is consistent with the actual error rate observed after proofreading and repair of the copied DNA.
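
The arithmetic behind that threshold is simple (a rough estimate only, not Eigen’s full model):

    base_pairs = 3.2e9                  # approximate size of the human genome
    max_error_rate = 1 / base_pairs     # Eigen's rough threshold: ~1 / (genome length)
    print(max_error_rate)               # ~3e-10, i.e. on the order of 10^-9 per base pair copied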

But some copying errors will still survive.  What becomes of them?  James A. Shapiro is professor of microbiology at the University of Chicago.  In his book, Evolution: A View from the 21st Century, he writes, “Although our initial assumption is generally that cells die when they receive an irreparable trauma or accumulate an overwhelming burden of defects with age . . ., it turns out that a significant (perhaps overwhelming) proportion of cell deaths result from the activation of biochemical routines that bring about an orderly process of cellular disassembly known by the terms programmed cell death and apoptosis.”  In multicellular species, there is an elaborate signaling system for causing some cells to die.  This process is not necessarily disease related.  During embryonic development, some tissues grow that need to be eliminated before birth such as the webs that connect fingers and toes.  These are eliminated by apoptosis (programmed cell death).  This process also happens to embryonic neurons that do not have sufficient interconnections to be viable.  The implication of this response is that organisms have elaborate capability for determining when some cells need to be eliminated.  Some cancers are caused by problems with the apoptosis response.

Before proceeding to the evidence for directed mutation, I want to encourage an appreciation for the enormous orchestration that occurs inside the cell.  As an observer of the biological sciences, I am constantly amazed by the incredible variability and responsiveness of living cells.  If you have never watched videos or animations of cell division or other cellular processes, I would urge you to do so.   They are simply fascinating!  And part of what makes for a fascinating view is the complex orchestration that is happening inside the cell.    Here is a video dealing with mitosis, but there are many others:  http://www.youtube.com/watch?v=C6hn3sA0ip0.  A longer, more advanced animation on the cellular response to inflammation is here:  http://www.youtube.com/watch?v=GigxU1UXZXo&NR=1&feature=fvwp.

Another amazing aspect of cellular function and orchestration is protein folding.  In order for proteins to be effective, they must be folded into a three dimensional shape that is suited to their purpose.  As I explained in my previous post, the protein enzyme, sucrase, performs its function of splitting table sugar (sucrose) into the more easily metabolized glucose and fructose by “locking onto” the sucrose molecule.  Biologists have often used the analogy of a lock and key to explain the fitting of enzymes to their target molecules.

Protein misfolding plays a part in several disease processes including Alzheimer’s disease, Creutzfeldt-Jakob disease (related to “mad cow disease”), Tay-Sachs disease and sickle cell anemia.  In sickle cell anemia the protein misfolds because of a mutation that alters the sequence of amino acids in one of the blood proteins needed to construct hemoglobin.  In the case of Creutzfeldt-Jakob disease, the cause of protein misfolding has not been conclusively identified, but may be due to an “infectious protein” called a prion.  A prion is a misfolded form of a normal protein found in the cell membrane; it causes other copies of the normal protein to misfold, which results in degeneration of brain tissue.  It would be unprecedented if Creutzfeldt-Jakob disease were conclusively proved to be caused by prions, because every other known disease agent replicates by way of DNA or RNA.

The instructions for protein folding are not contained in DNA (although the amino acid sequence is a crucial factor), but correct folding is absolutely necessary for good health.  DNA provides the peptide sequence information, and it is the task of the completed protein, after it has been manufactured by a ribosome, to fold into the correct shape.  In human cells there are regulatory mechanisms for determining whether a protein has folded into the correct shape.  If a protein has misfolded, it can be detected and disassembled.  Some proteins have the help of chaperones, as mentioned in my previous post.  Here is an animation of a short 39-residue segment of the ribosomal protein L9, identified as “NTL9”, shown folding by computer simulation:  http://www.youtube.com/watch?v=gFcp2Xpd29I.  (The full protein from Bacillus stearothermophilus is just one of many that make up a ribosome.  It contains 149 amino acids and functions as a binding protein for the ribosomal RNA.)

Proteins fold at widely varying rates, from about 1 microsecond to well over 1 second with many folding in the millisecond range.  The quickness with which most proteins fold led to an observation in 1969 by Cyrus Levinthal that if nature took the time to test all the possible paths to a correct final configuration, it would take longer than the age of the universe for a protein to fold.  It is now thought that proteins fold in a hierarchical order, with segments of the protein chain folding quickly due to local forces so that the final folding process only need configure a much smaller number of segments.  Nevertheless, simulations of protein folding often require huge computational resources to recreate the folding sequence.  One source estimated that it would take about 30 CPU years to simulate one of the fastest folding proteins.  A slower protein would require 100 times the resources, or about 3000 CPU years.
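
Levinthal’s point can be reproduced with a toy calculation (the numbers are purely illustrative: say three conformations per residue, a 100-residue chain, and 10^13 conformations sampled per second):

    conformations_per_residue = 3
    residues = 100
    samples_per_second = 1e13
    seconds_per_year = 3.15e7

    total_conformations = conformations_per_residue ** residues          # ~5e47 possibilities
    years_to_search = total_conformations / samples_per_second / seconds_per_year

    print(f"{years_to_search:.1e} years")   # ~1.6e27 years, versus ~1.4e10 years since the Big Bang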

So Levinthal’s question has not been completely answered.  How does nature enable proteins to fold so quickly?  The prevailing theory on folding holds that the various intermediate states are following an energy funnel from a high energy state (unfolded) to the lowest energy state (folded).  Just as water seeks its lowest level, proteins seek the conformation that has the lowest energy.  The explanation for the wide variety of folding rates then rests on the nature of the path from the unfolded energy state to the folded energy state.  If the path is straight, the folding will be fast; if the path has energy barriers that must be circumnavigated or perhaps tunneled through, the folding will be slower.  These issues are still in active research, so there is currently no clear consensus.  But in a recent paper, two researchers conclude “Our results show it is necessary to move outside the realm of classical physics when the temperature dependence of protein folding is studied quantitatively” (“Temperature dependence of protein folding deduced from quantum transition”; 2011, Liaofu Luo and Jun Lu).

I simply point out the similarity to the research on photosynthesis that showed that photons captured by photosynthesis follow a highly efficient path to the place where the photon’s energy can be turned into food production.  That research showed that quantum coherence played a significant role in the efficient transfer of energy and it was thought by analysts that a quantum computation of the energy landscape was a key part of the explanation.  It would not surprise me if quantum computation played a key role in protein folding by determining the most efficient path for navigating the energy funnel.  But without regard to whether quantum computation plays a role in protein folding, some scientists have not hesitated in applying the computer analogy to cell function.

Paul Davies is a physicist and science advocate who contrasted the vitalism of the 19th century with our understanding of biology today by saying, “The revolution in the biological sciences, particularly in molecular biology and genetics, has revealed not only that the cell is far more complex than hitherto supposed, but that the secret of the cell lies not so much with its ingredients as with its extraordinary information storing and processing abilities. In effect, the cell is less magic matter, more supercomputer.”

James A. Shapiro continues the computer metaphor when he writes about the cognitive ability of the cell. In his book, Evolution: A View from the 21st Century, he writes about the cell’s ability to regulate and control itself using a number of examples such as repair of damaged DNA, programmed cell death,  and regulation of the process of cell division.  He then continues to characterize the cell in computer-like terms (my emphasis):

The selected cases just described are examples where molecular biology has identified specific components of cell sensing, information transfer, and decision-making processes. In other words, we have numerous precise molecular descriptions of cell cognition, which range all the way from bacterial nutrition to mammalian cell biology and development. The cognitive, informatic view of how living cells operate and utilize their genomes is radically different from the genetic determinism perspective articulated most succinctly, in the last century, by Francis Crick’s famous “Central Dogma of Molecular Biology.“

Shapiro goes on to suggest modifications to the “Central Dogma of Molecular Biology.”  The “Central Dogma” summarizes the process by which proteins are created from RNA, which is transcribed from DNA.  Dr. Shapiro suggests that this one-way summary is too simple: there are many paths through which RNA and proteins can modify the DNA.  The primary example of RNA that can modify DNA comes from retroviruses, of which the well-known HIV virus is one.  Retroviruses carry RNA that is translated into proteins which convert the viral RNA into DNA and then insert that viral DNA into the host DNA.  It is estimated that between 5% and 8% of the human genome consists of DNA inserted by retroviruses.

Dr. Shapiro also uses computer programming terminology when describing specific biological functions, such as E. coli’s ability to metabolize lactose when glucose is not available: “Overall computation = IF lactose present AND glucose not present AND cell can synthesize active LacZ and LacY, THEN transcribe LacZY from LacP.”  That is a statement that could be implemented in almost any standard computing system, given, of course, suitable functions for “synthesize,” “transcribe,” and so on.  I would also point out that a significant portion of a cell’s “cognitive” function is concerned with self-regulation.  In other words, a significant amount of self-knowledge is available to the cell.
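In fact, the statement translates almost line for line into ordinary code.  The sketch below is meant only to illustrate that point; the function name and boolean inputs are hypothetical stand-ins, not a model of the underlying biochemistry.

    # A direct translation of Shapiro's lac operon "computation."
    # The function name and boolean inputs are illustrative stand-ins,
    # not a model of the underlying biochemistry.

    def lac_operon_decision(lactose_present, glucose_present, can_make_lacz_lacy):
        """Decide whether to transcribe lacZY from the LacP promoter."""
        if lactose_present and not glucose_present and can_make_lacz_lacy:
            return "transcribe LacZY from LacP"
        return "do not transcribe"

    # The "computation" yields different outcomes as conditions change:
    print(lac_operon_decision(lactose_present=True, glucose_present=False, can_make_lacz_lacy=True))
    print(lac_operon_decision(lactose_present=True, glucose_present=True, can_make_lacz_lacy=True))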

Professor Shapiro itemizes five general principles of cellular automation processing:

  1. There is no Cartesian dualism in the E. coli (or any other) cell. In other words, no dedicated information molecules exist separately from operation molecules. All classes of molecule (proteins, nucleic acids, small molecules) participate in sensing, information transfer, and information processing, and many of them perform other functions as well (such as transport and catalysis).
  2. Information is transferred from cell surface or intracellular sensors to the genome using relays of proteins, second messengers, and DNA-binding proteins.
  3. Protein-DNA recognition often occurs at special recognition sites.
  4. DNA binding proteins and their cognate formatting signals operate in a combinatorial and cooperative manner.
  5. Proteins operate as conditional microprocessors in regulatory circuits. They behave differently depending on their interactions with other proteins or molecules.

Regarding evolution, Dr. Shapiro advocates a concept called “natural genetic engineering” whereby the cell makes adaptive and creative changes to its own DNA.  I have used the phrase “directed mutation” to mean essentially the same thing.  These changes to a cell’s own DNA are not random: “It is difficult (if not impossible) to find a genome change operator that is truly random in its action within the DNA of the cell where it works. All careful studies of mutagenesis find statistically significant nonrandom patterns of change, and genome sequence studies confirm distinct biases in location of different mobile genetic elements. These biases can sometimes be extreme . . .”

In a recent article, Professor Shapiro further clarified his use of the phrase, “natural genetic engineering,” or NGE:

NGE is shorthand to summarize all the biochemical mechanisms cells have to cut, splice, copy, polymerize and otherwise manipulate the structure of internal DNA molecules, transport DNA from one cell to another, or acquire DNA from the environment. Totally novel sequences can result from de novo untemplated polymerization or reverse transcription of processed RNA molecules.

NGE describes a toolbox of cell processes capable of generating a virtually endless set of DNA sequence structures in a way that can be compared to erector sets, LEGOs, carpentry, architecture or computer programming.

NGE operations are not random. Each biochemical process has a set of predictable outcomes and may produce characteristic DNA sequence structures. The cases with precisely determined outcomes are rare and utilized for recurring operations, such as generating proper DNA copies for distribution to daughter cells.

It is essential to keep in mind that “non-random” does not mean “strictly deterministic.” We clearly see this distinction in the highly targeted NGE processes that generate virtually endless antibody diversity.

In summary, NGE encompasses a set of empirically demonstrated cell functions for generating novel DNA structures. These functions operate repeatedly during normal organism life cycles and also in generating evolutionary novelties, as abundantly documented in the genome sequence record.

(From What Natural Genetic Engineering Does and Does Not Mean, Huffington Post, February 28, 2013.)

Perhaps the most important evidence for natural genetic engineering is the discovery of transposable elements in DNA.  These were first identified by Barbara McClintock in 1948, a discovery for which she was later awarded the Nobel Prize.  Transposable elements, also called transposons and retrotransposons, are segments of DNA that can move, or be replicated, into another part of the DNA molecule.  In general, the process is either a “cut and paste” or a “copy and paste” operation that uses special proteins to operate on the DNA, sometimes with RNA as an intermediary molecule.

Retrotransposons make up a significant portion of the human genome, about 42%.  One type of transposable element, called an “Alu” sequence, accounts for about 10% of the human genome and is one of the main markers for primates (including humans).  However, almost all transposable elements are contained within the non-coding regions of DNA and are therefore not directly expressed as proteins.  This DNA has typically been called “junk DNA,” but recent research from the ENCODE project (“Encyclopedia Of DNA Elements”) has demonstrated a wide variety of functions for the non-coding portions of DNA.

I have to mention that, as a computer designer and coder, this discovery of movable elements in the non-coding regions of DNA reminds me of one of the most common ways we would modify computer programs.  First, we would locate an old segment of code that performed a function similar to the desired new one.  Then we would copy that segment into another part of the program, but leave it unexecuted until it had been modified to accomplish its intended new function.  Finally, we would activate the new segment and test it.  Nevertheless, DNA exhibits computational capabilities that I have never seen in any existing computer system.  It has now been demonstrated that the so-called “junk DNA” has the ability to affect the “non-junk” portion of the genome by controlling when or whether certain proteins are expressed.
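For what it is worth, here is a minimal sketch of that copy-and-modify workflow expressed in modern code.  The routines and the activation flag are invented purely for illustration; the point is the pattern of duplicating a working segment, leaving the copy dormant, adapting it, and only then switching it on.

    # An illustration of the copy-and-modify workflow described above.
    # The routines and the flag are invented; the point is the pattern.

    def existing_routine(values):
        """The old, working segment of code."""
        return sum(values)

    def copied_routine(values):
        """A copy of the old segment, altered for a new purpose."""
        return sum(values) / len(values)      # now computes an average

    NEW_SEGMENT_ACTIVE = False                # the copy sits dormant until tested

    def run(values):
        if NEW_SEGMENT_ACTIVE:
            return copied_routine(values)     # executed only after activation
        return existing_routine(values)

    print(run([1, 2, 3]))                     # old behavior: 6
    NEW_SEGMENT_ACTIVE = True                 # switch the new segment on
    print(run([1, 2, 3]))                     # new behavior: 2.0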

Continuing with the computer analogy, Freeman Dyson also speaks of DNA as a computer program, characterizing DNA as software and proteins as hardware.  I think that is a little too simple, since individual proteins exhibit many of the cognitive abilities Dr. Shapiro describes.  Each separate molecule in the cell, including each protein, has its own processing capability.

One way in which Professor Dyson is correct, though, is in the discovery that proteins act as molecular machines.  This is another fascinating area of biology.  Many of the functions of the cell are carried out by proteins that can best be described as miniature machines.  One important example is the generator used to make ATP in the mitochondria.  ATP, or adenosine triphosphate, is the main energy molecule for almost all forms of life.  This generator, ATP synthase, looks remarkably like a tiny motor.  The “motor” is powered by a hydrogen ion concentration differential across the mitochondrial membrane, and that differential is generated by molecular pumps which push the hydrogen ions (protons) across the membrane.  An animation of ATP synthase can be seen here:  http://www.youtube.com/watch?v=PjdPTY1wHdQ.

The implications of all this biology for lowering entropy are enormous.  The molecular machines are themselves an example of low entropy, being highly structured, functional assemblies of proteins.  The pumping of protons across a membrane uses some energy to create a state of low entropy by concentrating energy at a particular location.  The ATP itself is a storehouse of energy for future use.  Protein folding is another entropy-lowering process.  The DNA specifying the information necessary to manufacture proteins is perhaps the supreme example of low entropy, particularly now with the discovery of purposeful “junk DNA.”  One could easily conclude that all of life is powered by the miracle of low entropy overcoming the global tendency for entropy to increase.

Life can be viewed as a struggle to maintain low entropy.  We need sources of low entropy to live: food, shelter, energy, and so on.  The ultimate source of low entropy is the sunlight plants use to create carbohydrates.  Once our low-entropy material needs are secured, we seek an ordered personal life, family life, and social life.  Some say that old age is the result of losing the ability to maintain low entropy.  In short, life is a struggle against the law of increasing entropy.  As individuals, we will lose that struggle, since death is certain.  As a species, however, the trend toward low entropy, toward more complex ordering, can continue.

Before life began to evolve, matter on earth was subject to laws of physics and chemistry.  One of those laws is the law of increasing entropy: low entropy sunlight is absorbed and then radiated back into space as high entropy heat.  However, the laws of nature themselves contain a provision for entropy lowering interactions.  I strongly believe that such a provision is the result of the decisionality inherent in the collapse of the quantum wave function.  My reason for such a belief lies mainly in the order that results from entropy lowering interactions, especially the order inherent in life.  All of our human experience tells us that order results from rational decisionality; it does not result from randomness.  The mathematics of random chance rules out any likelihood that life arose by chance alone.

After life began to evolve, it naturally took advantage of entropy-lowering processes.  Natural selection and fitness are crucially based on the efficient use of energy.  A recent example is Microraptor, a prehistoric four-winged relative of modern birds.  Microraptor’s four wings allowed it to make tight turns around the many forest trees in its habitat, but they also caused additional drag and a consequent loss of speed and energy.  It therefore took Microraptor more energy to accomplish what modern birds can do.  Modern birds evolved two wings with additional muscle control for improved maneuverability, without the drag of a second pair of wings.  Efficient use of energy is crucial for survival.

It is therefore very surprising that nature and evolution would devote to a single human organ 20% of our energy budget when that organ accounts for only about 2% of our body weight.  That is the amazing, almost unbelievable, statistic for the brain.  If we view human life as the pinnacle of evolution, then the entire evolutionary path must proceed toward higher consciousness and higher intelligence.  Therefore, if Professor Shapiro is right about natural genetic engineering (and I am convinced he is; he draws upon a huge body of research done by others), then modifications made at the cellular level must include a bias toward enhanced consciousness.

In my next section, I will begin to address the evidence from consciousness.  This will be difficult because science can say very little about consciousness.  Some take the position that consciousness is an epiphenomenon: that it emerges, ex novo, from complex calculations and therefore has no real existence.  Some take the position that mind is a separate category from matter, leading to dualism.  I take the position that consciousness is embedded in matter, a position called panpsychism.  Furthermore, I hold that the way consciousness has become embedded in matter is through the inherent decisionality of quantum decoherence.  One way to view this position is that the universe performs a quantum calculation on every transfer of energy.  But it would be a mistake to think that this calculation is the same as one that could be performed by a computer.  Stay tuned.

The Evidence from Evolution and Biology (Part 1)

My previous posts have focused on the evidence for a rational agent inherent in the laws of physics.  There has been an implicit assumption that the laws of physics are rigorously deterministic.  But clearly life is not deterministic, so it was necessary for me to point to some possible feature of the laws of physics that allowed for the wild variation and unpredictability of life.  I will summarize my thought process as follows:

  1. The universe is ordered by deterministic laws and forces, such as gravity and electromagnetism.  There are also non-deterministic laws, such as quantum theory.  One law that combines both features is the law of increasing entropy: entropy always increases in the universe as a whole, but it is allowed to decrease locally.  Since quantum theory ultimately governs all interactions in the universe, all forces are non-deterministic at the quantum level.  (The only possible exception is gravitation, which has not yet been unified with quantum theory.)
  2. The deterministic laws (electromagnetism, etc.) cannot, by themselves, account for life and consciousness.  There must be another factor in the fundamental laws of physics that allows living organisms to lower entropy.  The process of lowering entropy is essential to life because it concentrates energy for future use and organizes the genome for transmission to future generations.
  3. That factor in the laws of physics is the collapse of the wave function in quantum physics, also called decoherence.  Decoherence is absolutely necessary for any measurable energy transfer.  In decoherence, the universe actually chooses an outcome for every transfer of energy.  This choosing, or decisionality, on the part of the universe is what I have called rational agency and it is responsible for the forward direction of time.
  4. This decisionality on the part of the universe is always mixed up with randomness because we are prohibited from knowing precisely all the states of matter, particularly the states of entanglement between particles.  This is a consequence of a kind of cosmic censorship hypothesis.  The Heisenberg uncertainty principle is one such limitation on our knowledge.
  5. There can be no ordering principle or lowering of entropy based on true randomness.  True randomness, by definition, is maximum entropy.  In all of physics the only candidate for non-random yet non-deterministic action is decoherence.
  6. Therefore, this choice by the universe is directed choice.  It is a rational choosing based on the laws of physics and contains within it the possibility of lowering entropy.  It is the physical undergirding of all life and consciousness.  It is the physical action responsible for the forward direction of time.

Essentially, I think that the laws of physics favor life or are conducive to life.  In general, nature prefers to disperse energy; therefore there must be physical explanations for how energy gets concentrated.  Just as there is an explanation for how nature concentrates energy for lightning, there must also be an explanation for how living organisms concentrate energy and lower entropy.   These six steps summarize my explanation.  In this series on evolution and biology, I will lay out the case for the laws of physics favoring life as opposed to the case for life adapting to the laws of physics.  Both dynamics occur, but only laws conducive to life can create life from inanimate matter.

I don’t consider this logic highly dependent on particular experimental results.  Scientific theories are always provisional; they can be superseded by better theories or more accurate results.  My reasoning is broadly based on the general properties of physical laws.  A portion of those laws is rigorously deterministic and uses mathematics to make predictions about future events; another portion deals with the presence of uncertainty in the universe.  I fully expect the laws of physics to be revised and improved, but I don’t expect these general characteristics to be much altered.  If string theory is proved true, that would not change my basic logic, but my perspective might need to accommodate rational agency operating in a multiverse scenario.  String theory, for all its promise, does not yet make any testable predictions.

Along with the laws of physics, I view the theory of evolution as a valid scientific theory.  It is a theory based on the idea that all living organisms adapt to their specific environment and pass along adaptive traits through procreation.  Darwin’s concept of “natural selection” was devised in contradistinction to “artificial selection,” whereby human breeders selected the best mates in order to raise generations of specifically adapted animals.

Biology is a complex science.  For someone like me, who has spent a major part of his life focused on math and the physical sciences, the main shock of encountering biology is the sheer astronomical diversity of life.  Last year, I took one of the online courses offered by UC Berkeley.  It was the basic undergraduate course for biology majors, and it was something I needed because my previous biology class must have been in high school.  It was just as well that I didn’t have much previous instruction, because so much has changed between then and now.  The sheer volume of information is astounding, and I found myself wondering how on earth anyone organizes this much data.  In fact, it took three teachers to cover the material: one had a background in molecular biology, one was a specialist in genetics, and one came from a medical background.  I had the distinct feeling that complete mastery was beyond the capability of any one individual.  But I am still learning, and I do have some observations based on my perspective from the physical sciences.

One observation concerns the principle of emergence.  Emergence is the concept that complex living organisms exhibit new properties and traits by virtue of their complexity and organization.  The example from the textbook for the UC Berkeley class is one that interests me:  “For example, although photosynthesis occurs in an intact chloroplast, it will not take place in a disorganized test-tube mixture of chlorophyll and other chloroplast molecules.  Photosynthesis requires a specific organization of these molecules in the chloroplast.”  The text is saying that photosynthesis is an emergent phenomenon.  That is fine; it helps organize knowledge.  But for someone who wants to know how things work, there is a further question: how does that particular organization contribute to function?  What are the properties of the constituent parts that enable the composite function to emerge?  Too often, emergence is used simply as a label for a new function that cannot be explained any further.  When that happens, it becomes a kind of false knowledge: a category without explanatory power.

To take another example, water is composed of two room-temperature gases: hydrogen and oxygen.  I suppose you could say the emergent property of water is its liquidity.  But, with water, one can trace its properties to the molecular properties of hydrogen and oxygen and the strong bond between them as well as the weak bond between water molecules.  These particular molecular properties can also be used to explain surface tension, freezing and boiling.  My expectation is that biology will someday be explained in terms of molecular dynamics.  That day is a long way into the future.

Biological scientists are answering these kinds of questions, and it is painstaking work.  Demonstrating how biological molecules function is slow and tedious, but I suppose that is the part of biology that most interests me.  I have two main areas of interest in the biological sciences.  One is photosynthesis, because of its use of quantum coherence for the efficient transmission of sunlight energy to the “reaction center” where chemical food production begins.  The other is the biological molecule tubulin.

Tubulin is a protein molecule that assembles into microtubules.  Microtubules are long, narrow, hollow tubes that play an amazing variety of roles in living cells.  There is a natural tendency for microtubules to assemble themselves because of the positive and negative polarity on the tubulin molecule.  Once assembled, microtubules play key roles in biological cell functions.  They play an essential role during mitosis, cell division, by grabbing hold of the chromosomes and causing the genome to precisely separate toward opposite ends of the cell.  Microtubules are part of the cell’s cytoskeleton; they give shape and form to the cell.  In plants, microtubules guide the alignment of cellulose and direct plant growth at the cellular level.

Microtubules form the infrastructure that transports molecules from outside the cell to the inside and vice versa.  Motor proteins “walk” vesicles containing molecules back and forth along microtubules to their destinations.  For example, pancreas cells that make insulin transport the insulin from the inside of the cell to the outside by this method.  In addition, microtubules are used for the cell’s interaction with its environment.  They form some types of flagella and cilia for locomotion of the cell or for moving particles in the cell’s environment.  For example, the human sperm cell is propelled by the action of a flagellum made up of microtubules.

In short, microtubules are a very versatile cellular component.  Furthermore, they are an essential part of nerve cells.  Tubulin, the protein that forms microtubules, is found at very high density in brain tissue.  That has led some researchers to propose a key role for microtubules in brain activity and consciousness.  Microtubules are long, hollow, round tubes that might be ideal for quantum coherence, and there has been some research along these lines.

Tubulin, or a similar protein, is probably very ancient, perhaps going back to the beginning of life.  One source specified that all cells have such proteins except the blue-green algae, also known as cyanobacteria; even cyanobacteria, however, have a tubulin-like molecule (a homologue) called FtsZ.  An interesting connection between my two main interests is that cyanobacteria use photosynthesis to harvest energy from sunlight.  It is the light-harvesting complex from cyanobacteria that is used in the experiments testing quantum coherence.

Cyanobacteria are among the oldest life forms on Earth, perhaps as old as 3.5 billion years.  It would be a very interesting development if microtubules or microtubule-like structures go back to the beginning of life and if it can be demonstrated that quantum coherence played a key role in efficient energy transmission in these structures.  Those are two very big “ifs” and most researchers are very cautious about any evidence pointing towards quantum coherence in biological molecules.  But I remember some fairly incautious statements about the beginning of life from many years ago.

I think it was probably in high school chemistry class that the teacher one day covered the Miller-Urey experiment.  This experiment, conducted in 1952, involved sending a spark of electricity (to simulate lightning) through a mix of chemicals assumed to represent Earth’s primitive atmosphere.  The result was a mixture of amino acids and sugars, both essential building blocks of life.  Stanley Miller and Harold Urey had demonstrated that organic compounds necessary for life could easily be formed from plausible atmospheric compounds such as water, methane, ammonia and hydrogen.  Not only that, but the teacher thought we would soon be able to synthesize life in a test tube.  Well, that was over 50 years ago, and the synthesis of life seems as elusive as ever.  Science doesn’t yet know what makes biochemicals spring to life.

The mystery of the beginning of life notwithstanding, the theory of evolution brought incredible organizing power to the huge diversity of biology.  Darwin’s “natural selection” brought explanatory power to the huge diversity of species on Earth.  In the mid-twentieth century, the discovery of DNA and the genetic code brought into the evolutionary system a mechanism for adaptation.  This has led to what has been called the “central dogma” of molecular biology: DNA makes RNA, which makes proteins.  DNA contains coded information that is used to create a coded sequence of RNA, which in turn is used to create the sequence of amino acids that makes up a protein.  The next step, which isn’t explicitly stated and is poorly understood, is that the protein must fold into a specific three-dimensional form in order to be useful.  What is startling to me, coming from a computer programming background, is that the coded sequence of DNA contains just four characters representing four small molecules: A (adenine), C (cytosine), G (guanine) and T (thymine).

These four characters are interpreted in groups of three, which gives 64 possible “words,” or codons, in the genetic code (4 × 4 × 4).  Of those 64 combinations, only about 20 distinct meanings are actually needed, because just 20 amino acids make up all the known proteins.  Most codons therefore specify the same amino acid as some other codon, so there is built-in redundancy.  Only tryptophan and methionine rely on a single coded sequence; all the others have at least two DNA codes, and some (serine, leucine and arginine) have six.  It seems possible to me that different evolutionary branches developed a reliance on different DNA sequences for the amino acids.  For someone with a data processing background, the DNA codes are reminiscent of a computer system that has been copied and modified to meet different objectives, even to the extent that duplicate codes are mainly sequential (e.g., leucine: TTA, TTG, CTT, CTC, CTA, CTG).  From a “systems design” perspective, it would seem that at one time there was provision for expansion, with 64 codes available for 20 amino acids, but after evolutionary modification all 64 codes are now in use.  I suppose that if a need developed for a 21st amino acid, one of the existing redundant codes would be used.  The whole process is very complex, but the same basic DNA, RNA and amino acids are found in all life forms on Earth.  This amazing genetic code is universal to life as we know it.  (There are some exceptions: Paramecium, for instance, uses the “stop” codons UAG and UAA to code for glutamine.)
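To make the arithmetic and the information flow concrete, here is a small sketch in Python.  It enumerates the 64 possible codons, checks the leucine redundancy listed above, and translates a short, invented DNA fragment three letters at a time; the fragment and the three-entry codon table are made up for the example, since the real table has 64 entries.

    from itertools import product

    # The four DNA characters give 4 x 4 x 4 = 64 possible three-letter codons.
    bases = "ACGT"
    codons = ["".join(triplet) for triplet in product(bases, repeat=3)]
    print(len(codons), "possible codons for about 20 amino acids plus stop signals")

    # Redundancy, using the leucine codons listed above:
    leucine_codons = {"TTA", "TTG", "CTT", "CTC", "CTA", "CTG"}
    print(len(leucine_codons), "different codons all specify leucine")

    # A made-up fragment read three letters at a time (partial codon table only):
    dna = "ATGTTACTG"
    partial_codon_table = {"ATG": "Met", "TTA": "Leu", "CTG": "Leu"}
    protein = [partial_codon_table[dna[i:i + 3]] for i in range(0, len(dna), 3)]
    print(protein)                            # ['Met', 'Leu', 'Leu']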

“Natural selection” coupled with the genetic code has given enormous explanatory power to evolutionary biology.  But like all theories, it is a conceptual model of the physical processes that occur.  Many questions remain, such as how life began.  And then there is the question asked by Stephen Hawking: “What is it that breathes fire into the equations and makes a universe for them to govern?”  What is it that actually makes the world act in a way that is consistent with the conceptual model?  Readers of my previous posts will suspect that my answer is similar to what I have written before: there is a decisional power at work in the universe that breathes life into biological molecules.  It is this decisionality that ensures that time flows forward and therefore gives evolution direction.

Some of the evidence for my answer resides in the evidence for directionality in evolution.  But, first of all, the evolutionary model is a rational model.  Even more amazing is that the implementation of the genetic code is an abstract, rational system.  Who would have thought that nature would arrive at the very rational system of using a three-character code to specify sequences drawn from the 20 amino acids that comprise the proteins of all life?  Let me be direct: the genetic code is information.  The central dogma of molecular biology is an information-processing system.  The end results are proteins and decisional governance of the cell.  This is exactly the type of system one might expect from a rational agent acting through nature.

As to directionality, the most immediate evidence is the adaptability of evolutionary change.  Evolutionary change produces living organisms that get better at adapting to their environment.  Not only are more advanced organisms better adapted, they are better at adapting.  For higher life forms like mammals, and particularly humans, this implies a higher consciousness.  Therefore, the longer-range implication of evolution is higher consciousness.  I think this trend is evident from the archeological and historical record.  For almost 4 billion years, life has survived under the constant threat of a cosmic catastrophe such as the one that brought an end to the dinosaurs.  Today, we are beginning to track the asteroids and comets that have the potential to cause another life-ending cataclysm.  That would not be possible without some sort of advanced consciousness.  In a strange sort of self-reflection, adaptation has become adaptability, which requires a higher consciousness.  This implies a robust moral development as well, but that is beyond what I can cover in these posts on science and reason.

But a rational agent is not the only explanation.  The alternative view is that evolution is the byproduct of random mutation.  First of all, I don’t think randomness is a good scientific answer.  Science succeeds when it finds and explains rational patterns; to say that a process is random is to admit defeat from a scientific point of view.  Second, when someone refers to random mutation, it is unclear what type of randomness they mean: the randomness of incomplete knowledge, or the genuine non-determinism of quantum physics.  The common view of evolution is that it requires generations of offspring in order for nature to select the best attributes and pass them on to future generations.  Is evolution inherently random because some individuals show up at the wrong place at the wrong time or, alternatively, at the right place at the right time?  Is it random because a cosmic ray has altered the genome?  Is it random because we can’t predict how our children will turn out?  The most likely reason a mutation might be random is a transcription or copying error, but modern cells have evolved elaborate safeguards against such errors.

It turns out that when evolutionists speak of “random mutation,” they mean something specific.  My biology textbook (on Kindle!) uses the phrase only once in over 1,000 pages of small print, and that one occurrence refers to copies of genes that have lost functionality (i.e., the gene has been degraded) over time.  The textbook does not refer to new functionality as “random mutation,” but it does use the phrase “accidents during meiosis” (cell division in reproductive cells).  This phrase, too, has a specific meaning that a plain English reading might not suggest.  In general, the textbook prefers to state evidence positively, in terms of what we know rather than what we don’t know.  As to genetic mutation, it refers to various mechanisms for altering the genome, such as transposition of small portions of the DNA from one location to another.

One internet site was particularly helpful in tracking down the origin of the phrase “random mutation.”  The site, associated with the UC Museum of Paleontology at Berkeley, is a teaching guide for evolution named “Evolution 101.”  This source was very explicit:

Mutations are random.
Mutations can be beneficial, neutral, or harmful for the organism, but mutations do not “try” to supply what the organism “needs.” In this respect, mutations are random—whether a particular mutation happens or not is unrelated to how useful that mutation would be.

Behind this brief description is a debate that began with Darwin.  Before Darwin, the French biologist Jean-Baptiste Lamarck held the view that (1) individuals acquire traits that they need and lose traits that they don’t need, and (2) individuals inherit the traits of their ancestors.  He gave as examples the giraffe, whose neck was assumed to have stretched in order to reach higher leaves, and blacksmiths, whose strong arms appeared to have been inherited by their sons.  These ideas have since been debunked.

When Darwin published Origin of Species in 1859, he gave some credibility to Lamarck’s view, and later evolutionists elevated Lamarck’s idea to a major theme of evolution.  By the mid-twentieth century, biologists had become adept at doing experiments with bacteria.  In 1943, two biologists, Max Delbrück and Salvador Luria, set out to test Lamarck’s hypothesis in bacteria, which were thought to be the organisms most likely to use Lamarckian adaptation.  The Luria-Delbrück experiment tested whether bacteria exposed to a lethal virus would develop an adaptive mutation and whether that mutation would be acquired prior to exposure or not.  Their experiment showed conclusively that some bacteria had acquired an adaptive mutation prior to exposure, as did subsequent experiments by others, including Esther and Joshua Lederberg, who are referenced on the “Evolution 101” website.
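A toy simulation shows the statistical logic behind the fluctuation test.  If resistance mutations arise spontaneously during growth, before exposure, then a culture that happens to mutate early inherits a large “jackpot” clone of resistant cells, and the counts across parallel cultures vary far more than their average; if resistance were induced only at the moment of exposure, every cell would have the same small chance and the counts would cluster tightly around the average.  The sketch below, written with NumPy and using arbitrary illustrative parameters rather than the 1943 setup, contrasts the two hypotheses.

    import numpy as np

    # A toy fluctuation test in the spirit of Luria and Delbrueck.
    # All parameters are arbitrary illustrative choices, not the 1943 setup.
    rng = np.random.default_rng(0)

    GENERATIONS = 25          # each culture grows from 1 cell to 2**25 cells
    MUTATION_RATE = 2e-8      # chance of a resistance mutation per cell division
    CULTURES = 500            # number of parallel cultures plated with the virus

    def spontaneous_culture():
        """Mutations arise during growth; an early mutant leaves a large clone."""
        resistant_at_plating = 0
        for g in range(GENERATIONS):
            dividing_cells = 2 ** g
            new_mutants = rng.binomial(dividing_cells, MUTATION_RATE)
            # a mutant born at generation g has 2**(GENERATIONS - g - 1) descendants
            resistant_at_plating += new_mutants * 2 ** (GENERATIONS - g - 1)
        return resistant_at_plating

    # Alternative hypothesis: resistance is induced only at exposure, with the
    # per-cell probability tuned to give the same average number of mutants.
    final_population = 2 ** GENERATIONS
    induced_probability = MUTATION_RATE * GENERATIONS / 2

    spontaneous = np.array([spontaneous_culture() for _ in range(CULTURES)])
    induced = rng.binomial(final_population, induced_probability, size=CULTURES)

    for label, counts in [("mutation before exposure", spontaneous),
                          ("mutation induced at exposure", induced)]:
        print(f"{label:30s} mean {counts.mean():8.2f}   variance {counts.var():12.2f}")

Luria and Delbrück observed the high-variance, jackpot-laden pattern, which is why the experiment is read as evidence that the mutations arose before exposure.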

So, based on experiments, what evolutionists mean when they say that mutations are random is that some adaptive mutations occur before any exposure to the infectious agent in a test; the mutations do not occur because of exposure.  This is a somewhat contentious finding because it defies the commonsense view that mutations happen for a reason, and that the reason is most likely related to some inoculation or exposure to an agent.  In other words, either the finding appears to violate causality, or the explanation is an admission of ignorance about the cause of adaptation.

I take the view that the finding is an admission of ignorance.  We really don’t know what might have caused an adaptive mutation to occur before exposure.  The real scientific question is what causes the mutation, and biologists prefer to focus on what we can discover.  One such biologist is James A. Shapiro, professor of microbiology at the University of Chicago.  He characterizes the association of “random mutation” with the Luria-Delbrück experiment as follows:

One has to be careful with the word “proof” in science. I always said that conventional evolutionists were hanging a very heavy coat on a very thin peg in the way they cited Luria and Delbrück. The peg broke in the first decade of this century.

Professor Shapiro goes on to write about mechanisms that bacteria have for “remembering” previous exposure to infectious agents, mechanisms that include modification of the bacterial DNA.  He states that Delbrück and Luria would have discovered this if they had not used a virus that was invariably lethal and if they had had the tools for DNA analysis.  The structure of DNA would not be announced until 1953, ten years after the Luria-Delbrück experiment, and the tools for analysis are still being developed.  It should not be too big a surprise that bacteria have elaborate mechanisms for DNA sharing and modification; the human immune response to invasive agents also includes the recording of information in the DNA of certain white blood cells (lymphocytes).  You can read Shapiro’s entire article here: http://www.huffingtonpost.com/james-a-shapiro/epigenetics-ii-cellular-m_b_1668820.html.

It is no longer fashionable to speak of Lamarckian inheritance, but the field of epigenetics is devoted to adaptation by means other than DNA modification.  My own view is that the amount of debate and discussion on the issue of “soft” inheritance points to the conclusion that this is unsettled science.  Microbiologists today have many more tools and techniques for answering questions about the causes of adaptive inheritance than they did sixty years ago, and I suspect they would prefer to look at changes to the DNA and other molecules rather than make statistical inferences as Luria and Delbrück did.  Current research of the type that James Shapiro is doing is demonstrating specific causes for adaptation.