The Teleological Solution

At the end of Chapter 4 of Mind and Cosmos, Thomas Nagel finally turns to his candidate for the best solution to the problem of consciousness:

I am drawn to a fourth alternative, natural teleology, or teleological bias, as an account of the existence of the biological possibilities on which natural selection can operate. I believe that teleology is a naturalistic alternative that is distinct from all three of the other candidate explanations: chance, creationism, and directionless physical law. . . . Teleology means that in addition to physical law of the familiar kind, there are other laws of nature that are “biased toward the marvelous.”

A teleological bias in physical law would mean that the laws governing all fundamental interactions tend to produce outcomes that favor life and consciousness.  But, according to Nagel, such a bias could not be expressed in physical law of the familiar, timeless kind:

The idea of teleology as part of the natural order flies in the teeth of the authoritative form of explanation that has defined science since the revolution of the seventeenth century. Teleology would mean that some natural laws, unlike all the basic scientific laws discovered so far, are temporally historical in their operation. The laws of physics are all equations specifying universal relations that hold at every time and place among mathematically specifiable quantities like force, mass, charge, distance, and velocity. In a nonteleological system the explanation of any temporally extended process has to consist in the explanation, by reference to those laws, of how each state of the universe evolved from its immediate predecessor. Teleology, by contrast, would admit irreducible principles governing temporally extended development.

The challenge for any teleologically based theory is whether such time-dependent changes in physical law can be experimentally detected.  If such a goal-directed bias cannot be detected in physical law, then teleology is essentially a faith-based system.  Experiments to detect time-dependent changes in physical law have so far come up empty, so the effect, if it exists at all, is very small.

I also think there must be some time-dependent effects in physical law; otherwise, how could order-producing organisms arise in a universe where unguided physical law seems to favor an increase in disorder?  Of course, Nagel believes such effects are attributable to “naturalistic” causes, while I attribute them to an ordering power at work in the universe.  The difference is that an ordering power could be interpreted as evidence for God, whereas a naturalistic approach favors a non-theistic interpretation.  The advantage of Nagel’s interpretation is that atheists can embrace the evidence for an order-producing power without directly naming it or considering it evidence for God.

The main weakness in the current atheist argument is the denial of an active ordering power in the universe.  This position flies in the face of common sense and puts atheism on the defensive, relying on arguments based on the problems with religious institutions.  This may seem like a strange thing for a believer in God to say, but atheists need a strong metaphysical argument for the naturalistic position so that faith-based institutions will take seriously their arguments about the failings of religion.  Nagel provides that metaphysical basis and atheists would do well to pay attention to him.

However, others have criticized Mind and Cosmos because Nagel fails to garner support from science for his position.  Such support does indeed exist, and Nagel’s failure to include references to the science weakens his argument.  In fact, even Nagel’s summary of the book in the New York Times seems to step back from the teleological argument, because he does not mention it.

And then there is the charge that Nagel has been too soft on the theistic option.  I don’t find him soft on theism so much as failing to put forth the standard arguments against religion.  Those arguments consist almost entirely of the problems that religion can cause among its adherents, plus the incomprehensibility of religion from the point of view of non-theists.  But arguments against religion are not arguments against theism, so I find this criticism out of place.

Of course, Nagel does admit that his teleological explanation does not definitively rule out a theistic interpretation.  But he correctly points out that theism seems like an unnecessary complication if the naturalistic explanation is sufficient.  This is where I think the argument between believers and atheists should be:  to what extent is the naturalistic explanation sufficient to account for human subjective experience?  I think that a naturalistic teleology in particular and atheism in general will fall short of satisfying the deep human yearning for spiritual truths.

That is probably the real reason that Nagel’s atheistic critics do not like his book.  In a proper debate on the merits, the atheist position looks very weak unless some sort of ordering power is conceded.  But conceding an ordering power seems to be conceding too much, because it can be mistaken for evidence of divinity.  However, Nagel maintains that atheism is a viable alternative if one views the ordering power as a natural teleology that is not subject to divine control.  In this position he does not waver.

Consciousness and Dualism (Part 2)

“It has become clear that our bodies and central nervous systems are parts of the physical world, composed of the same elements as everything else and completely describable in terms of the modern versions of the primary qualities— more sophisticated but still mathematically and spatiotemporally defined. Molecular biology keeps increasing our knowledge of our own physical composition, operation, and development. Finally, so far as we can tell, our mental lives, including our subjective experiences, and those of other creatures are strongly connected with and probably strictly dependent on physical events in our brains and on the physical interaction of our bodies with the rest of the physical world.” – Thomas Nagel, Mind and Cosmos, Chapter 3.

With these words, Nagel begins to tell us about how the tremendous recent developments in biology and neuroscience have raised hopes for a materialist explanation of life and consciousness.  Science has made great strides in demonstrating the detailed workings of biology and brain function.  Yet, with all the scientific progress, the connection between mind and physical biology seems as elusive as ever.  Some philosophers have even taken the position that Nagel calls “eliminative materialism” in which mental events are illusory.

Nagel looks historically at conceptual approaches to solving the problem because he thinks that reductionism alone will not discover an answer.  A conceptual approach is based on adding something new to science in the hopes of providing the necessary leverage for new discovery.  This is similar to the approach of David Chalmers who has called for adding back into the scientific picture a fundamental quality of subjective experience.

Chalmers’s approach is to include within the science of consciousness the science of subjective experience, including the psychological and sociological implications of our mental states.  It is difficult to tell whether Nagel would agree with Chalmers.  Nagel thinks that whatever is added to science would need to be at least as radical as electromagnetic fields and relativity theory.  Here Nagel has written something very surprising from the viewpoint of the history of science.  The most radical scientific developments of the twentieth century were quantum physics and relativity theory, not electromagnetic fields and relativity theory; electromagnetic fields were definitively added to science in the nineteenth century by James Clerk Maxwell.  I wonder if he chose to write “electromagnetic fields” in order to preserve some hope for quantum explanations of consciousness.

Nevertheless, Nagel’s description of the difficulty of avoiding dualism has challenged me.  As I have pursued the path of quantum explanations for consciousness, I have concentrated on the power of quantum computation to bring about order.  There is real order-producing power in quantum computation that lends itself to a possible explanation for mental activity if consciousness is seen as a type of problem solving.

However, there is nothing in the computational model of quantum physics that can produce subjective experience.  I have worked for over thirty years in computer systems design and programming, and I see no way that classical computation alone could produce consciousness or subjective experience.  There must be a qualitative difference between classical computation and quantum computation that allows a subjective sense of intent or purpose to be added to quantum computation.  That subjective sense would probably need to be optional, because it could not be required for such non-conscious quantum calculations as take place in ordinary physics (for example, in lasers).  And if it is optional, or “contingent” as Nagel would say, then it must be considered a dualistic explanation.

The non-dualist solution to this problem takes me to George Berkeley, whom Nagel mentions in passing and whose idea of “subjective idealism” Nagel completely discounts.  Berkeley was an 18th-century philosopher whose perspective developed as a counterpoint to the new discoveries in science and the trend away from theism.  Berkeley’s idealism is the view that everything is mind and that all matter originates in God’s mind, and he maintains that view without denying the objective existence of material objects.  Some of his ideas influenced Albert Einstein, and his view of the role of consciousness in the act of perception has new echoes in quantum physics.  But few thinkers these days give much credence to idealism.  I think that Berkeley’s idealism takes on fresh meaning when seen through the lens of quantum physics.

Another non-dualist approach would be to discount subjective experience.  Those favoring a behaviorist approach would be happy with this line of reasoning.  If quantum computation is at the core of our mental ability, then that could explain our mental problem solving ability but it would leave unexplained any subjective experience.  Those who view subjective experience as an unnecessary byproduct of evolution, like the color of blood, might find this view attractive.  But it has the unfortunate consequence of turning us into zombies.

At this point in chapter 3, Nagel has left us with a significant puzzle: (1) there is no non-dualist materialist theory of consciousness that can explain subjective experience; (2) Idealist and theistic theories are not welcome; and (3) dualism leaves too much room for theistic explanation and therefore it is also not welcome.  I await his recommendation for something new to be added to the description of the physical world, because at this point, I am feeling challenged but also slightly unwelcome.

Consciousness and Dualism (Part 1)

Chapter 3 of Mind and Cosmos is simply titled, “Consciousness.”  Here, Thomas Nagel seeks to lay out the complete case for his assertion that naturalism, materialism, reductionism, etc., cannot fully explain consciousness.  He doesn’t define consciousness in this chapter, but he has written about it earlier in this book and in many other books and articles.  When I think about the issues Nagel is raising, I find that I need a clear definition of consciousness before understanding very much of what he is saying.

The reason consciousness is such a difficult word is that there are so many perceptions of what it is.  One extreme position is that consciousness doesn’t exist!  Usually what is meant by the non-existence of consciousness is that it is an accidental byproduct of evolution, sort of like the color of blood.  It is not a fundamental function; it simply emerges when physical organization becomes complex enough.  However, there is no explanation as to why consciousness should emerge from complex function in the way that one can explain the color of blood from its composition.

One way that Nagel defines consciousness is in terms of subjective experience, such as our sense of taste or our experience of color.  He has written elsewhere that we cannot be certain that other people experience the taste of chocolate the same way that we do.  We can agree to name a certain taste “chocolate,” but we cannot really be sure that we each experience that taste in the same way.  It is possible to extend the example of taste to other subjective experiences such as love, fear, pain, revulsion, shame, etc.

Once we consider the entire range of subjective experience, I think it is relatively easy to argue that such experience would develop naturally as a necessary attribute by natural selection.  That reasoning might go like this:  It is not only the individual that is important to evolution.  The kinship group, tribe or other social grouping is also important because such groups ensure the survival of the individual.  Groups can raise armies and provide for the common defense.  Groups can help ensure that individual DNA gets passed on to the next generation, and so forth.

But groups require social cohesion.  Groups don’t like individuals who don’t play well with others.  In order for an individual to be a good member of a group, that individual must consider the sensibilities of others.  The individual must develop empathy for others in order to understand why it might be wrong to take unfair advantage of other group members.  As Nagel has written elsewhere, an understanding of one’s own subjective experience is necessary for empathy.

The memory of our own subjective experience requires consciousness.  We must be able to focus our attention on experiences that cause feelings and remember them.  He has also referred to this ability as mental capability or “mind.”  The evolutionary reasoning alone seems to provide solid evidence for the reality of subjective experience and consciousness.  It’s at least as real as any other objects of philosophical inquiry such as the existence of the external world.  We cannot live successfully in the real world without a solid belief in the external world and the subjective experiences of ourselves and others.

So, if evolution naturally explains the development of subjective experience and consciousness, what is missing?  What is missing is the reductionist, scientific explanation that shows why subjective experience and consciousness arise from the physical aspects of evolution.  Unlike the color of blood, which can be explained in terms of its molecular composition, subjective experience has no obvious connection to the physical.

At this point one has to decide whether to consider dualist solutions to the problem.  And Nagel gives a very good explanation of the history of proposed solutions.  The problem begins with the nature of the post-Enlightenment scientific revolution, and he assigns a major role to René Descartes.  Descartes’ major accomplishment was the discovery of analytic geometry:  Descartes showed how geometry could be analyzed numerically.  This development was essential for Newton’s later formulation of the laws of motion; without analytic geometry, Newton could not have subjected motion to numerical analysis.  It also led to the development of calculus, which laid the foundation of all modern scientific explanation.

However, Descartes is famous for something else.  He thought deeply about consciousness and its connection to the nascent materialist explanations of the world.  The new developments in science were possible because they set aside the experience of consciousness.  A detailed, numerical description of the physical world became possible by leaving out any explanation of the mental world.  Descartes attempted to include the mental world as an associated phenomenon of the material world without an explicit, physical connection.  That is the source of his famous dictum, “Cogito ergo sum,” or “I think, therefore I am.”  Descartes’ assertion means that mental phenomena and physical phenomena arise together; we would not know that we exist in a material sense without consciousness.

This explanation of consciousness by Descartes is known as Cartesian Dualism and it has been the source of ongoing controversy about the nature of the world.  If one accepts dualism, then there is something other than matter in the universe and that something can be called spirit or soul or sometimes, “the ghost in the machine.”  Therefore, dualism is the bane of those desiring a materialist explanation.  And that includes Nagel.

But Nagel also shows how difficult it is to avoid dualism when all materialist explanations have failed.  Those difficulties have challenged me.  I will write about them next time.

Theism and Materialism

In Chapter 2 of Mind and Cosmos by Thomas Nagel, the author explores the typical positions held by proponents of theism and by proponents of evolution.  His focus is sharpened by analysis of the different ways that each point of view attempts to make sense of human beings who are part of the world that ought to be intelligible to us.

According to Nagel, theists appeal to a deity who is outside the natural order, but who nevertheless provides intention and directionality to the natural order and who assures us of the basic reliability of our observational capacity and our reasoning ability.  It is a reassuring position at the expense of requiring a power outside of the natural order.  It suffers from a lack of any serious attempt to make human beings intelligible from within the natural order.

Evolutionary naturalists, on the other hand, claim that humanity is intelligible from within the natural order based on science and reason.  But, again according to Nagel, the problem is that both science and reason are the products of evolution and we have no authority outside of ourselves to substantiate the reliability of our understanding of science.  In Nagel’s terminology, evolutionary naturalism undermines its own claim of reliability.  Ultimately, the evolutionary explanations fail because the science that we possess has failed to explain consciousness and therefore failed to explain why we should trust the judgments arising from our consciousness.

I think Nagel is stretching too far for a criticism of the evolutionary point of view.  Its main problem is the inability of science to explain consciousness.  Faulting evolution for failing to provide reassurance that our reasoning is sound is a criticism that applies equally to the theist position.  Both positions are based on faith!  Theists have faith in God based on a religious community, and Darwinian evolutionists have faith in science based on the scientific community.  If anything, the evolutionary point of view has the advantage in that the scientific community is generally more unified and disciplined than the religious community.

The primary distinction between the two points of view, then, is the position and importance that each assigns to humanity.  Theism relies on a power outside the normal purview of science to explain and give meaning to human life and consciousness while evolution relies solely on current science at the expense of diminishing any essential or transcendent importance for human life and consciousness.

Nagel is searching for middle ground.  He wants an explanation for consciousness that does not rely on a power outside the natural order.  At this point in his book, I think he fails to see that any such explanation will be relying on faith in something.  Whether that something is science or philosophy or some combination, it will still be the object of faith.  Given the constraints on his search that there can be no power outside the natural order, his explanation would not be able to claim any more authority than evolutionary materialism.

From my point of view, a form of theism that allows God to work through the natural order offers the best alternative.  The importance and discipline of science are maintained, modified only so that human life and consciousness have access to transcendent power for guidance and assurance.

Scientific reductionism ends at the quantum boundary, so the assumption of transcendent consciousness working at the quantum level provides for the needed adjustment to science while maintaining the entire scientific edifice based on empirical evidence and reductionist explanation.  And there is scientific evidence for an order producing power working at the quantum level.  This evidence is being developed by the nascent scientific discipline of quantum biology.

The strongest evidence to date comes from quantum action during photosynthesis, but I expect much more evidence as quantum biology matures.  After all, isn’t all of physics based on quantum action?  The only alternative besides dualism would be a view that posits new scientific principles acting at the biological level.  But, it seems to me that there is too much continuity between chemistry and biology.  That continuity leaves little room for wholly new principles to be plausible.

What if Evolution had Produced an Unintelligible World?

In Chapter 2 of Thomas Nagel’s book, Mind and Cosmos, the author explores more deeply the reasons why materialist reductionism is necessary for science and why mind cannot be explained by such a reductionist approach.  One important observation is that we believe that the world is intelligible:

“Science is driven by the assumption that the world is intelligible. That is, the world in which we find ourselves, and about which experience gives us some information, can be not only described but understood. That assumption is behind every pursuit of knowledge, including pursuits that end in illusion. In the natural sciences as they have developed since the seventeenth century, the assumption of intelligibility has led to extraordinary discoveries, confirmed by prediction and experiment, of a hidden natural order that cannot be observed by human perception alone. Without the assumption of an intelligible underlying order, which long antedates the scientific revolution, those discoveries could not have been made.”

On the one hand, I can hear the Darwinian materialist answer that evolution could not have done otherwise since we have evolved to be successful in the existing world.  Of course we understand the world!  Our survival depends on it!  However, this response only provides a partial answer.

Clearly, we are dependent on biological adaptation to the physical world.  All of our physical movements in the world depend on an intuitive understanding of the physical laws that govern such activity.  We could not long survive without intuitively grasping the law of gravity.  One could even argue that our direct experience with electricity and with radiation has come so recently that we have not yet had time to evolve an intuitive understanding of these forces.

Yet, it is difficult to see why evolution would need to respond to the electromagnetic force or the nuclear force by making them comprehensible, since direct experience with these forces is rare.  Why wouldn’t evolution respond to electricity and radiation by providing either immunity or an avoidance reaction?  So the question raised by Nagel boils down to this:  why have we been so successful in understanding such non-intuitive phenomena as electromagnetism and nuclear energy?

Our success at understanding non-intuitive scientific principles is even more remarkable when we consider how our minds are wired.  From my viewpoint as a computer programmer, the human brain works in a very inefficient manner.  We don’t simply compute responses to input stimuli, as one might expect.  The mind continuously generates expected results and compares them to observed activity, and our attention is drawn to places where the expected results do not match reality so that we can make quick adjustments.
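The predict-compare-adjust loop described above can be caricatured in a few lines of code.  This is only an illustrative sketch of the idea, not a model of any actual neural mechanism; the threshold, the running estimate, and the sample inputs are all invented for illustration.

```python
# Toy sketch of a predict-compare-adjust loop: the "mind" keeps a simple
# running narrative (a guess about the world) and attention fires only
# when the prediction error exceeds a threshold. Purely illustrative.

def run(observations, threshold=1.0):
    estimate = 0.0            # current narrative: "the value is about X"
    surprises = []            # moments where attention was drawn
    for t, observed in enumerate(observations):
        predicted = estimate              # generate the expected result
        error = observed - predicted      # compare with reality
        if abs(error) > threshold:        # mismatch grabs attention...
            surprises.append(t)
            estimate = observed           # ...and forces a quick adjustment
        else:
            estimate += 0.1 * error       # small, cheap correction
    return estimate, surprises

final, surprises = run([0.2, 0.1, 0.3, 5.0, 5.1, 5.2])
print(final, surprises)   # attention fires once, at the sudden jump
```

The point of the sketch is that most inputs are handled cheaply by the running narrative; only the large mismatch consumes attention.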

Our minds are wired to be generators of outcomes based on intuitive narratives learned from experience.  This is why professional tennis players and baseball players can hit a ball that is travelling faster than we can consciously follow.  Professional players have learned to see clues about the physical dynamics of play, clues absorbed through experience that they sometimes cannot even consciously explain.

In fact, our need for a narrative to generate expected outcomes is so great that it doesn’t much matter how good the narrative is as long as it does better than random chance.  That is why superstition can sometimes have such a strong grip on our outlook if we have had experiences that have confirmed those superstitions.  The same can be said about political and social beliefs.  The simple narrative always has an advantage over the more complex narrative because it is easier to use based on the way that our minds are wired.

So we are not naturally predisposed to understanding complex and abstract subjects like electromagnetism.  Yet we seem driven to explain the physical world even if we do not need such explanations for individual survival.  One could say this is about group power, but it seems to me that curiosity precedes the desire for prestige and power.  Time after time the satisfaction of curiosity is the only reward.  Few persons make the important discoveries that pay off in terms of power and prestige.

Nagel thinks this character of intelligibility leads to an important conclusion.  Since we have evolved to find the world intelligible, and we ourselves are part of the world, we must be able to understand ourselves.  We are certainly driven to try.  I am still waiting for a clear answer.

Consciousness (Part 2)


Within the framework of a universe whose total entropy is always increasing, there is the surprising fact that the laws of physics allow for temporary, local decreases in entropy.  This direction of decreasing entropy is counter to the overall increase in total entropy and yet does not violate any physical law.  Processes that temporarily decrease entropy are essential to life, and they are also present in such well-understood phenomena as lasers and superconductivity, where the process is traceable to quantum physics.  I think it is a reasonable position that all such processes of decreasing entropy are traceable to quantum physics, and to decoherence in particular.  Quantum computation is a real process, though it is not yet practical on a large scale.  Quantum computation happens during entangled states of coherence, and the results are reported by decoherence.  The most direct evidence that this is happening in biological organisms comes from photosynthesis.  During photosynthesis, extended quantum coherence takes place during the transport of the absorbed light energy from the light-harvesting chlorophyll molecules to the reaction center where food production begins.  This results in the near 100% efficiency with which light energy is transported.
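The thermodynamic point, that a local decrease in entropy is permitted so long as the total entropy of system plus surroundings increases, can be checked with elementary arithmetic.  The numbers below (a quantity of heat flowing between two reservoirs) are arbitrary example values.

```python
# Heat Q flows from a hot reservoir to a cold one. The hot reservoir's
# entropy *decreases* (a local decrease), yet the total entropy still
# rises, so the second law is not violated. Example values only.

Q = 1000.0        # joules transferred
T_hot = 400.0     # kelvin
T_cold = 300.0    # kelvin

dS_hot = -Q / T_hot       # local entropy decrease: -2.5 J/K
dS_cold = +Q / T_cold     # entropy gain of the cold reservoir: +3.33 J/K
dS_total = dS_hot + dS_cold

print(dS_hot, dS_cold, dS_total)   # total is positive, about +0.83 J/K
assert dS_hot < 0 and dS_total > 0
```

Living cells exploit the same loophole on a vastly more intricate scale: local order is paid for by entropy exported elsewhere.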

Quantum physics is the most fundamental of the physical sciences.  It accurately describes all interactions at the level of fundamental particles, whether they be matter particles such as protons, neutrons, or electrons, or energy particles such as photons (light).  If one takes the position that quantum states are real (though they are not directly measurable), then it is reasonable to conclude that the universe must decide where and when such states collapse into a single measurable quantity:  the universe must make a decision for every single transfer of energy.  That is the fundamental decisionality underlying all physical activity.

I think it is reasonable to extend the quantum role in photosynthesis to biology in general.  The protein folding problem in particular has significant similarities with photosynthesis in that an energy landscape must be navigated.  In photosynthesis, the energy landscape funnels the absorbed light energy to the plant cell’s reaction center where food production begins.  In the protein folding problem, the protein folds by navigating an energy funnel to a conformational state with a lower overall energy level.  The shape into which proteins fold is crucial to their function in the cell, and undetected misfolded proteins are implicated in some disease processes.  Recent research supports the view that quantum physics plays a role in protein folding (cf. Lou and Lu, 2011).
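The “energy funnel” picture, a system settling step by step into a lower-energy state, can be illustrated with a toy one-dimensional landscape.  The landscape and the greedy descent below are invented for illustration and have nothing like the complexity of a real protein’s energy surface.

```python
# Toy energy funnel: a parabola with small ripples. A greedy walker
# moves downhill until no neighboring position has lower energy,
# crudely mimicking a system settling into a low-energy state.
import math

def energy(x):
    return x * x + 0.3 * math.sin(8 * x)   # funnel plus ripples

def settle(x, step=0.05):
    while True:
        left, right = energy(x - step), energy(x + step)
        if energy(x) == min(left, energy(x), right):
            return x                        # a local minimum: stop here
        x = x - step if left < right else x + step

x_final = settle(3.0)
# The walker ends in a low-energy minimum, though not necessarily the
# global one -- a rough analogue of how proteins can misfold.
print(x_final, energy(x_final))
```

A real funnel-shaped landscape biases the search so strongly toward the bottom that the system finds a working conformation quickly despite the astronomical number of possible shapes.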

It is reasonable to extend the role of quantum computation to the operation of all biological molecules.  They are certainly small enough to be affected by quantum effects.  It fits the overall model that the cell is an information-processing marvel.  DNA contains information about the sequence of amino acids in proteins in a coded form (three nucleotides per amino acid).  RNA and ribosomes decode the sequence to produce proteins.  Proteins work together to form complex molecular machines that assist the cell in keeping its internal entropy low by exporting entropy to the environment.  There is an amazing real-time cooperation, an unbelievable choreography, among different cell functions that would be impossible to fully explain through any other mechanism.  A reasonable position is that quantum decisionality is active throughout all biological organisms and that the most basic unit of processing is the biological molecule, as Professor James A. Shapiro has suggested.  Recent research supports the view that quantum entanglement is active in the DNA molecule (cf. Rieper, Anders and Vedral, 2011).
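The “three nucleotides per amino acid” coding scheme can be made concrete with a toy translator.  The table below contains only a handful of real codon assignments (ATG is methionine and the start codon, TAA is one of the stop codons); a full table has 64 entries.

```python
# Minimal sketch of the genetic code: a DNA coding-strand sequence is
# read three bases at a time and each codon is looked up in a table.
# Only a few real codons are included here, for illustration.

CODON_TABLE = {
    "ATG": "Met",   # methionine, also the start codon
    "GCT": "Ala",   # alanine
    "AAA": "Lys",   # lysine
    "TGG": "Trp",   # tryptophan
    "TAA": "STOP",  # one of the three stop codons
}

def translate(dna):
    protein = []
    for i in range(0, len(dna) - 2, 3):     # three nucleotides per amino acid
        residue = CODON_TABLE[dna[i:i + 3]]
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

print(translate("ATGGCTAAATAA"))   # ['Met', 'Ala', 'Lys']
```

The real decoding is of course done by RNA polymerase, transfer RNAs, and the ribosome rather than a dictionary lookup, but the informational structure is exactly this: a fixed-width code read in frame.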

Although the exact path to creation of the earliest forms of life remains hidden from us, it is reasonable to think that the laws of physics and chemistry favored a path to life.  The alternatives to such a conclusion are either that life happened by blind chance or there was some supernatural intervention in the creation process that bypassed the laws of physics.  A reasonable thought process must rule out both of these options as highly improbable.   Since it is reasonable to think that quantum processes play a significant part in modern biological cells, it is a reasonable extension to think that quantum processes played a part in the creation of life.  It is the quantum computations that provide the necessary bias toward life.

(For those readers who have a theistic view of the universe, as I do, let me pose the question about supernatural intervention this way:  Why would the Creator of the universe put in place laws of nature that He or She would need to bypass?  I view the laws of nature as a kind of covenant with the universe.  The rules of quantum physics allow all the needed degrees of freedom for God’s intervention in history.)

Quantum computation is a real process, but it is still in the research and development phase.  Some very simple calculations have been demonstrated, such as factoring small numbers.  Factoring is one application that has garnered much interest because it can be done much faster on quantum computers than on ordinary computers.  If factoring could be done quickly on large numbers, then some public key cryptographic systems would become obsolete.  For example, the RSA scheme relies on the impracticality of factoring large numbers, typically 617 digits long.  Public key cryptography supports the security infrastructure that enables shoppers to connect securely to a web site when they provide payment information.  The largest number factored by conventional methods during the RSA factoring challenge, which ended in 2009, was 232 digits long, and Microsoft has recently blocked any keys with fewer than 309 digits.  The largest number so far factored by a quantum computer using the well-understood Shor’s algorithm is the 2-digit number 21.
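The digit counts in this paragraph line up with standard RSA key sizes, which can be checked directly: the largest 2048-bit number has 617 decimal digits, the largest 1024-bit number has 309, and the RSA-768 challenge number had 232.  The trial-division routine below also factors 21 classically in an instant; the point of Shor’s algorithm is not that 21 is hard, but that trial division becomes hopeless at 617 digits while the quantum algorithm scales polynomially.

```python
# Check the digit counts quoted for RSA key sizes: the largest n-bit
# number is 2**n - 1, and len(str(...)) counts its decimal digits.
for bits, digits in [(2048, 617), (1024, 309), (768, 232)]:
    assert len(str(2 ** bits - 1)) == digits

# Classical trial division: instant for 21, utterly impractical for a
# 617-digit modulus, since the work grows exponentially in the digits.
def factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None   # n is prime

print(factor(21))   # (3, 7)
```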

However, there is another quantum factoring algorithm that has demonstrated factoring of the three-digit number 143. Interestingly, this algorithm finds the factors by arranging the quantum hardware (qubits) in a pattern from which the answer can be read when the qubits settle into their lowest energy state, possibly using a quantum process similar to the one proposed for photosynthesis, that is, traversing an energy landscape toward the lowest overall energy level. However, there is no reason to think this new algorithm can be scaled up to very large numbers. (Within the past few weeks, there has been another report of quantum computation that works by traversing an energy landscape. That claim comes from D-Wave Systems, which hopes to produce the first commercial quantum computer.)

But it would be a mistake to assume that natural quantum computations of the type that take place during photosynthesis are implementations of any known mathematics.  The analysis that was done on the photosynthesis data led researchers to conclude that the quantum computation was not an ordinary search algorithm of the type that would have been implemented by a human programmer.  If quantum decisionality is the physical process underlying consciousness as Roger Penrose and Stuart Hameroff propose, then it is unlikely that the full spectrum of quantum computations could be implemented on any ordinary computer.  The quantum computations would be both non-deterministic and, ironically, non-computational in Penrose’s analysis.

In a recent paper, the physicist Roger Penrose and the anesthesiologist Stuart Hameroff have summarized their proposal and answered critics. The paper is titled “Consciousness in the Universe: Neuroscience, Quantum Space-Time Geometry and Orch OR Theory” (Journal of Cosmology, 2011, Vol. 14). The Penrose-Hameroff proposal for quantum consciousness calls for quantum coherence within the very numerous microtubules that support brain cell structure. Microtubules are very small hollow tubes in the neurons, about 25 nanometers in diameter, which would appear ideal for isolating quantum coherence from the environment. Unlike the microtubules in other cells, which form and break apart as needed, the microtubules in neurons are stabilized by another protein, called the tau protein. Mature nerve cells don’t divide, so their microtubules do not need to become spontaneously active during mitosis. Neuronal microtubules will break apart if the tau protein becomes compromised, and a malfunctioning tau protein is one possible cause of Alzheimer’s disease. Penrose and Hameroff end their summary paragraph with a surprising admission: “We conclude that consciousness plays an intrinsic role in the universe.” That is the first time I recall hearing such a statement from Penrose and Hameroff.

While the Penrose-Hameroff hypothesis for quantum consciousness has not been experimentally verified, it does fit my overall paradigm of quantum coherence and decisionality being the primary mover of life. I have been following the developments in quantum consciousness for over 20 years, since Roger Penrose’s first book on the subject, The Emperor’s New Mind, in 1989. I am sufficiently comfortable with the theory to think that quantum coherence is the phenomenon behind our amazing consciousness. The details will almost certainly be different from what Penrose and Hameroff propose, but the direction is solid, and I am confident that an intelligent, decisional consciousness does indeed play an intrinsic role in the universe.

I have also emphasized that one cannot immediately conclude that such a consciousness is God. The question of God is a theological question about oneself and one’s relationship to some power. The power that I have elucidated over my several postings on science is the decisional, conscious ordering power of the universe. It is a power that comes to us free of cost; it transcends time and space through the amazing non-local properties of entangled particles; it requires no energy; and it is the primary power causing a bias in the laws of physics toward life and toward entropy lowering processes. I emphasize, once again, that entropy lowering processes are temporary and localized, so that the total entropy in the universe increases. For readers wanting a less theistic view, I recommend “the Information Philosopher” (Bob Doyle). He also sees order in the universe emerging through the action of quantum physics, but he applies this concept to philosophy rather than metaphysics. He points out that even though the total entropy in the universe is increasing, so is the entropy available for ordering: there is always more available ordering power as time increases. His web site is

Even for readers with a theistic view of the universe, I would not recommend a direct correlation between the consciousness inherent in the laws of physics and God.  It may be impossible to completely distinguish the deterministic aspects of quantum computation from its non-deterministic aspects.  Therefore, I analogize the consciousness inherent in nature as the “hand of God,” although not in any anthropomorphic sense.  It is the vehicle through which God interacts with history.

Nevertheless, these developments in physics, biology and the very new science of consciousness have the potential of sending a shock wave through theology and religion. Throughout history, religion has responded poorly to, or reacted against, the best scientific evidence. From Galileo to Newton, religious dogma has been confronted with scientific truth and has struggled to respond appropriately. During the Enlightenment, as people of faith came to terms with Newton’s mechanistic universe, God was deemed to have withdrawn from the world, and Deism was the result. The modern secular imagination has no place for religions that demand God’s supernatural intervention in history or for a three-story universe. The scriptural worldview is woefully outdated. Yet theology holds that God is real and that God acts in history. So theology must answer the question of how God acts in history if supernatural explanations are no longer appropriate. The best answer that I know of today is something called “process theology.”

Process theology is a type of panentheism. A panentheistic view holds that God is present in all aspects of matter and energy, but that God is not limited to or identical with all matter and energy. Process theology is based on the process-relational philosophy of Alfred North Whitehead and Charles Hartshorne and emphasizes changeable relationships rather than permanent entities as the basis for truth. For process theologians, God’s power is exercised through acts of consciousness, through persuasion rather than coercion, and therefore requires a robust theology of free will. And because free will is an essential property of humankind, God is not the immutable, changeless God of traditional theology. God interacts with history and is changed by history. Since God is in all things, process theology also reconfigures the concepts of good and evil to avoid a dualistic Manichaeism; that is, there is no need for a devil or Satan to account for evil.

Process theology emphasizes God’s immanence in the world, but also acknowledges God’s transcendence of the world. Within process theology, evolution is guided by God, but not in a deterministic sense. God represents the creative aspect of evolution. According to several sources, process theology has influenced both Christian and Jewish writers such as Harold Kushner, Abraham Joshua Heschel, William E. Kaufman, W. Norman Pittenger, John B. Cobb, Thomas Berry and Marjorie Suchocki. Marjorie Suchocki’s pamphlet, “What is Process Theology,” is a good place to begin learning.

I found it interesting to read a review of Hartshorne’s discussion of the proof of God. In his 1970 book, Creative Synthesis and Philosophic Method, he identifies four possible philosophical options relating to cosmic order and God:

(A1) There is no cosmic order.
(A2) There is cosmic order, but no cosmic ordering power.
(A3) There is cosmic order and ordering power, but the power is not divine.
(T)    There is cosmic order and divine power.

Hartshorne holds the fourth position, identified as (T), but insists that he does not arrive there by “proof.” I have not read Hartshorne’s book, so I don’t know to what extent he uses empirical evidence for cosmic order, but I think that empirical evidence is very helpful in giving more weight to option (A3) compared to (A1) and (A2). In my view, reason can be very helpful in arriving at the conclusion that there is a cosmic ordering power, but faith is necessary to conclude that such a power is an expression of divine action. If the evidence from physics, chemistry and biology is all we have, then a “leap of faith” is required to get from position (A3) to (T). However, there is more evidence, and exploring it will require delving into the social sciences, particularly psychology, to elicit reasonable conclusions about the structure of experience and selfhood.

In this context, faith is a decision one makes when reason based on empirical evidence does not apply or cannot guide our logic.  But there are other kinds of evidence and that brings me back to consciousness and the role of empirical evidence in understanding consciousness. 

David Chalmers has probably done more than any other philosopher towards putting the science of consciousness on sound footing.  But his most contentious position is that there are two paths for making progress in consciousness studies.  One path is the traditional and reliable reductionist path where complex mental processes are explained in terms of simpler, experimentally verified biological functions.  The reductionist path can explain how the brain works and therefore what brain functions are necessary for consciousness.  Chalmers calls this the “easy” question, and it is a long way from being answered.  The second path has the very difficult problem of dealing with the experience of consciousness and why the sensation of being conscious arises from brain function.  I have phrased this second question as the question about the ontology of selfhood: why is it that we have a self with which to experience life?

Chalmers insists that the second question cannot be answered from function alone, and he posits a new category to deal with the “hard” question. Taking a cue from historical scientific efforts to subject new phenomena to reason, he favors creating a new fundamental category for subjective experience:

I suggest that a theory of consciousness should take experience as fundamental. We know that a theory of consciousness requires the addition of something fundamental to our ontology, as everything in physical theory is compatible with the absence of consciousness. We might add some entirely new nonphysical feature, from which experience can be derived, but it is hard to see what such a feature would be like. More likely, we will take experience itself as a fundamental feature of the world, alongside mass, charge, and space-time. If we take experience as fundamental, then we can go about the business of constructing a theory of experience.

Chalmers also proposes that a bridging theory can be constructed to correlate subjective experience with brain function, though meaningful progress might take 100 years. Others are not so sure. Daniel Dennett does not think the hard problem is real. For Dennett, consciousness is an epiphenomenon; it is an illusion generated by biological function. He thinks that consciousness is adequately explained by a reductionist approach that explains brain function biologically, neurologically or computationally. On the other hand, both Thomas Nagel and John Searle think that the hard problem of consciousness cannot be scientifically solved at all. For them, consciousness is real, but it must be explained philosophically.

Subjective experience may be the essential evidence necessary for consciousness science, but I think that it must be organized around a concept of self in order to be coherent and have an impact on our life.  We usually are not concerned with simply reporting individual and isolated experiences; we report and attempt to make sense of what these experiences mean for ourselves.  I think the fundamental entity is a sense of self; in other words, it is human self-consciousness that is the interesting phenomenon psychologically, philosophically and theologically.  And there already exist methods for dealing with experiential self-consciousness in the only universe we know about: developmental psychology, existentialism and process theology.

Developmental psychology has shown that there is a crucial point in the development process around 18 to 36 months where self-awareness appears.  If self-awareness can be equated with self-consciousness, then the structure of self-consciousness can be described through the analysis of psychological states arising at about the same time, namely empathy, embarrassment, shame and guilt.  But such an analysis must await another series.

Thomas Nagel once wrote a paper titled “What Is It Like to Be a Bat?”  The interesting question going forward is: what is it like to be a self?  H. Richard Niebuhr’s contribution to this question has been formative for me, so I expect future expositions to be based on this seminal quotation from The Meaning of Revelation: “To be a self is to have a god; to have a god is to have history, that is, events connected in a meaningful pattern.”  And, according to Niebuhr, a “god” can be any power to which we intentionally relate ourselves.

I began this series in order to examine the role of reason for a person of faith.  I believe that a mature faith must resolve certain issues with respect to science and with respect to history.  This portion has dealt only with the physical sciences, and it has confronted the question of the kind of god that is compatible with science.  I have followed the best science that I know of, but I have also woven a narrative through the scientific evidence to elucidate something of the nature of the universe and its creator.  This narrative is not meant as proof; it is an exercise in metaphysics, but it demonstrates to me that faith is compatible with science and that science informs us about how God is likely to interact with us.  Not only does science not disprove the existence of God, it provides the essential evidence for a cosmic ordering power which is, if one chooses, the hand of God.  I hope my future essays will explain possible motivations for making such an affirmation.

Consciousness (Part 1)

So far, in this series on the evidence for a conscious, rational power working in and through the laws of nature, I have followed the trail of low entropy.  I have used a general notion of entropy where low entropy correlates with an increasing degree of order or where it correlates with an increasing concentration of energy.  Consequently, high entropy means a state of disorder or a state of energy dispersal, most often as wasted heat.  I began with the amazing state of low entropy (highly ordered, high energy concentration) in which the universe was created.

I followed the trail of low entropy through the complex of mathematically precise physical laws that represent the incredible ordering power of nature.  I spoke of lasers, superconductivity and photosynthesis as supreme examples of entropy lowering processes.  I looked at the incredibly diverse life processes, all based on DNA, RNA and protein synthesis, that would be impossible without the information coding capability and the molecular machines of the individual cell.  I described the computer-like processing capability of individual proteins and the inexplicable speed with which they fold into the precise shape for their purpose.

I have tried to avoid the teleological language of purposeful design, but when one looks at the trail from creation to conscious being, it is difficult to avoid the question.  Random chance cannot account for this remarkable journey.  The probabilities are just too small for undirected forces to have arrived at living beings that maintain low entropy and rely on entropy lowering processes.  This implies, to me at least, that the laws of physics are favorable to life and consciousness.  What is it that has driven evolution to the point of prizing consciousness almost above other considerations?  Consciousness requires a huge energy budget; why should our brains deserve a 20% allocation of energy if not for their powerful entropy lowering ability?

An incredible panoply of ordered life flows from the human imagination.  There is language, art, drama, literature, music and dance in addition to the social inventions of government, economic systems, justice systems, cultural institutions, family and kinship groups.  One could almost say that the creation of explicitly ordered social structures defines humanity.  And yet there is a profound puzzle in the pervasive human tendency to sow discord.  Why should that be?  Why are there wars, violence, terrorism, and dysfunctional social institutions if the human imagination can be so productive?

In discussing these and other questions of consciousness, I will attempt to follow my reductionist approach by relating emergent phenomena to the dynamics and properties of constituent components.  However, there will come a point where this approach will fail and I will need to resort to different language to describe what I consider to be the key dynamic of consciousness: the self and its narrative.  Consciousness cannot be completely understood based on functional descriptions of biological or physical components.  But first, let me turn to the attempt to explain consciousness in terms of computation.

Considering that order emerges from entropy lowering processes, it is odd that some observers think that consciousness and intelligence emerge from random, chaotic activity.  Pure randomness results in high entropy, so how can order be produced from chaos?  One such person is Ray Kurzweil, a futurist, who has written a book titled The Singularity is Near.  He states, “Intelligent behavior is an emergent property of the brain’s chaotic and complex activity.”  Neither he nor anyone else can explain how entropy lowering intelligence can emerge from random, chaotic activity.  He does, however, distinguish intelligence from consciousness.  He cites experiments by Benjamin Libet that appear to show that decisions are an illusion and that “consciousness is out of the loop.” Later, he describes a computer that could simulate intelligent behavior: “Such a machine will at least seem conscious, even if we cannot say definitely whether it is or not.  But just declaring that it is obvious that the computer . . . is not conscious is far from a compelling argument.”  Like many others, Kurzweil thinks that consciousness is present if intelligence can be successfully simulated by a machine.

Kurzweil is an optimistic supporter of the idea that the human brain will be completely mapped and understood to the point where it can be entirely simulated by computation.  He has predicted that this should occur in the fifth decade of the 21st century: “I set the date for the Singularity – representing a profound and disruptive transformation in human capability – at 2045.  The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.”  Kurzweil’s prediction is based on the number of neurons in the human brain and their many interconnections, arriving at a functional memory capacity of 10^18 bits of information for the human brain (10^11 neurons multiplied by 10^3 connections for each neuron multiplied by 10^4 bits stored in each of the synaptic contacts).
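
Kurzweil’s estimate is simple arithmetic and easy to reproduce; this is a sketch of the quoted numbers, not a claim about actual brain capacity:

```python
neurons          = 10**11   # Kurzweil's neuron count
connections      = 10**3    # synaptic connections per neuron
bits_per_synapse = 10**4    # bits stored per synaptic contact

total_bits = neurons * connections * bits_per_synapse
print(total_bits == 10**18)      # True: 10^18 bits in total
print(total_bits / 8 / 10**15)   # 125.0 petabytes, i.e. roughly 100 PB
```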

Kurzweil welcomes this prospective technological leap as a great advancement in the intellectual potential for the world.  He writes about his vision for the world after the singularity which he names the fifth epoch: “The fifth epoch will enable our human machine civilization to transcend the human brain’s limitations of a mere hundred trillion extremely slow connections.”  He goes on to say that eventually this new paradigm for intelligence will saturate all matter and spread throughout the universe.  Kurzweil appears to have the opposite perspective from my own view which is that the universe began with consciousness and consciousness infused all matter from the beginning.

But other people look at Kurzweil’s predictions and are concerned.  I recently read an opinion piece by Huw Price in the New York Times about the dangers of artificial intelligence (AI).  Price was on his way to Cambridge to take up his newly appointed position as Bertrand Russell Professor of Philosophy when he met Jaan Tallinn, one of the developers of Skype.  Tallinn was concerned that AI technology would evolve to the point where it could replace humans and that, through some accident, the computers would take control.  So Tallinn and Price joined up with Martin Rees, a cosmologist with a strong interest in biotechnology, to form a group called the Centre for the Study of Existential Risk (CSER).  I suspect that the group will focus more on the risk to human life posed by biotechnology than on the risk from AI, but the focus of Price’s column was on artificial intelligence.

Professor Price presented the argument that, although the risk of such a computer takeover appears small, it shouldn’t be completely ignored.  Perhaps he has a valid point, but what are the empirical signs that such computer intelligence is near at hand?  Some might point to the victories in 2011 of IBM’s Watson computer over all challengers in the Jeopardy game show.  This was an impressive demonstration of computer prowess in natural language processing and in database searching, but did Watson demonstrate intelligence?  I think that Ray Kurzweil would answer yes.  To the extent that the Jeopardy game demonstrates intelligence, then, by that measure, Watson must be considered intelligent.

However, consider a subsequent development.  According to a recent news report, Watson was upgraded to use a slang dictionary called the Urban Dictionary.  As that source puts it,

“[T]he Urban Dictionary still turns out to be a rather profane place on the Web. The Urban Dictionary even defines itself as ‘a place formerly used to find out about slang, and now a place that teens with no life use as a burn book to whine about celebrities, their friends, etc., let out their sexual frustrations, show off their racist/sexist/homophobic/anti-(insert religion here) opinions, troll, and babble about things they know nothing about.’”  (From the International Business Times, January 10, 2013, “IBM’s Watson Gets A ‘Swear Filter’ After Learning The Urban Dictionary,” by Dave Smith.)

One of Watson’s developers, Eric Brown, thought that Watson would seem more human if it could incorporate slang into its vocabulary so he taught Watson to use the slang and curse words from the dictionary.  As the news report continued,

“Watson may have learned the Urban Dictionary, but it never learned the all-important axiom, ‘There’s a time and a place for everything.’ Watson simply couldn’t distinguish polite discourse from profanity.  Watson unfortunately learned all of the Urban Dictionary’s bad habits, including throwing in overly crass language at random points in its responses; in answering one question, Watson even reportedly used the word ‘bullshit’ within an answer to one researcher’s question. Brown told Forbes that Watson picked up similarly bad habits from reading Wikipedia.”

Perhaps the news story should have given us the researcher’s question so we could make our own decision about Watson’s epithet!  Eric Brown finally removed the Urban Dictionary from Watson.

In short, Watson was very good at what it was designed to do:  win at Jeopardy.  But it lacked the kind of social intelligence needed to distinguish appropriate situations for using slang.  It also appeared to lack a mechanism for learning from experience that some situations were inappropriate for slang or how to select slang words based on the social situation.  Watson was ultimately a typical computer system that had to be modified by its developers.  I know of no theoretical framework in which a computer system could maintain and enhance itself.

Now consider another facet of Watson versus a Jeopardy contestant.  Our brain requires about 20% of our energy.  For a daily energy requirement of 2000 Calories, that amounts to 400 Calories for human mental activity.  That works out to about 20 watts of power.  In terms of electricity usage, that is less than 6 cents per day in my area.  Somewhat surprisingly, the brain’s energy consumption does not much depend on one’s state of alertness; the brain uses energy at about the same rate even when you sleep.  Watson, in contrast, used 200,000 watts of power during the Jeopardy competition.  That computes to about $528 per day.  If computers are to compete with humans for evolutionary advantage, it seems to me that they will need to be much more efficient users of energy.
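
These power figures can be checked with back-of-envelope arithmetic.  The electricity rate of about 11 cents per kilowatt-hour is my assumption, inferred from the quoted daily costs:

```python
# Brain: 20% of a 2000-Calorie diet, spread over a 24-hour day.
brain_kcal_per_day = 2000 * 0.20                   # 400 kcal
brain_watts = brain_kcal_per_day * 4184 / 86400    # kcal -> joules -> J/s
print(round(brain_watts, 1))                       # ~19.4 W

# Electricity cost at an assumed ~$0.11 per kWh:
rate = 0.11
human_cost  = 0.020 * 24 * rate    # 20 W running for 24 h
watson_cost = 200.0 * 24 * rate    # 200 kW running for 24 h
print(round(human_cost, 3))        # ~$0.053 -- "less than 6 cents"
print(round(watson_cost))          # ~$528 per day
```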

In fact, the entire idea of comparing computers to human mental activity is absurd to many people.  Perhaps I have even encouraged this analogy by speaking of quantum computation relative to biological molecules.  But I think it will become very apparent that any putative quantum computation must be something quite unlike ordinary computer calculations.  The mathematician and physicist Roger Penrose thinks that the fact that human mathematicians can prove theorems is evidence for quantum computation and decisionality in human consciousness.  But he also thinks that quantum computation must have capabilities that ordinary computers do not have.

John Searle, a philosophy professor at UC Berkeley, thinks that the current meme that the brain is a computer is simply a fad, no more relevant than the metaphors of past ages: the telephone switchboard or the telegraph system.  Searle holds that consciousness is a real subjective experience that is not open to objective verification.  It is therefore possible to explore consciousness philosophically, but not as an objective, measurable phenomenon.  Searle is known for his example of the “Chinese Room,” in which a person who understands no Chinese follows rules to produce sensible Chinese answers, but where, Searle claims, there is no real understanding of what is being processed.  Searle states, “. . . any attempt to produce a mind purely with computer programs leaves out the essential features of mind.”

Closely related to the “Chinese Room” is the Turing test, which asks whether a computer can simulate a human being well enough to fool another person.  In the Turing test, a person, the test subject, sits at a computer terminal that is connected either to another person at a keyboard or to a computer.  The task of the test subject is to determine, by conversation alone, whether he or she is dialoging with another person or a computer.  An actual competition, the Loebner Prize, has been held annually since 1991, with prizes awarded.  So far, no computer program has been able to fool the required 30 percent of test subjects.  Nevertheless, the computer program that fools the most test subjects wins a prize.  People also compete with each other, because half of the test subjects are connected to other persons, who must try to demonstrate some characteristic in the dialog that will convince the test subject that he or she is really talking to another person.  The person who does best at convincing test subjects that they are communicating with another person wins the “Most Human Human” award.  In 2009, Brian Christian won that prize and wrote a book about his experience: The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive.

One of Brian Christian’s key insights in his book is that human beings attempt to present a consistent self-image in any public or interpersonal encounter.  In a dialog with another person, there is a striving to get beyond the superficial in order to reveal something of the personality underneath.  But the revealed personality is not monolithic; there are key self-referential elements of the conversation that reveal other possibilities.  Nevertheless there is a strong commitment to an underlying self-image, even if that self-image is ambiguous:

“[The existentialist’s] answer, more or less, is that we must choose a standard to hold ourselves to. Perhaps we’re influenced to pick some particular standard; perhaps we pick it at random. Neither seems particularly ‘authentic,’ but we swerve around paradox here because it’s not clear that this matters. It’s the commitment to the choice that makes behavior authentic.”

Authentic dialog, therefore, contains elements of consistent self-image and commitment to that self-image in spite of ambiguity and paradox.  A strong sense of self-unity underlies the sometimes fragmentary nature and unpredictable direction that human discourse often takes.  This is very difficult for a computer to simulate.

I think the risk from AI is so minuscule that it doesn’t deserve the level of concern that Jaan Tallinn was portrayed as having in Huw Price’s article.  There are two main assumptions in the assessment of risk that are very unlikely to be substantiated.  One assumption is that sheer computing power will lead to a machine capable of human intelligence within any reasonable time frame.  The second assumption is that such a machine, if created, could somehow replace humans in an evolutionary sense.

There are two problems with the first assumption, one theoretical and one practical.  The theoretical problem is that there is a limit to the true, valid conclusions that any automated system can reach.  This limitation is called “Gödel incompleteness.”  It means that for any formal system powerful enough to draw useful conclusions, there will remain true statements that cannot be reached by computation alone.  In computer theory, the related limitation is called the “halting problem”: it is impossible to create a computer program that can decide, for every other program, whether that program will halt, coming to completion and producing a valid result.  The practical manifestation of the halting problem is that there is no way to introduce complete self-awareness into computer systems.  One can create modules that simulate awareness of other modules, but the new module would not be aware of itself.  This limitation implies that human intelligence will always be needed to correct and modify computer systems.
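
The diagonal argument behind the halting problem can be sketched in a few lines: given any claimed halting decider, one can construct a program that does the opposite of whatever the decider predicts about it, so no decider can be correct on every program.  (This is an illustrative sketch; `always_halts` and `never_halts` are stand-ins for any candidate decider.)

```python
def make_counterexample(halts):
    """Given a claimed decider halts(program) -> bool, build a program
    that defeats it by doing the opposite of the prediction."""
    def trouble():
        if halts(trouble):
            while True:       # decider said "halts" -- so loop forever
                pass
        # decider said "loops" -- so halt immediately
    return trouble

always_halts = lambda program: True    # candidate decider 1
never_halts  = lambda program: False   # candidate decider 2

t1 = make_counterexample(always_halts)
# always_halts predicts that t1 halts, but running t1 would loop forever,
# so we only inspect the (wrong) prediction rather than run it:
print(always_halts(t1))   # True

t2 = make_counterexample(never_halts)
t2()                      # never_halts predicted "loops", yet this returns
print("t2 halted")        # the decider was wrong again
```

The same construction defeats any total decider, however clever, which is the content of Turing’s proof.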

(Roger Penrose’s book, Shadows of the Mind, presents the case for quantum consciousness in detail.  A key part of his argument is that computers are fundamentally limited by “Gödel incompleteness.”  This implies, according to Penrose, that quantum coherence plays a key part in consciousness and that quantum calculations are capable of decisions exceeding the power of any ordinary computer calculation.)

The second problem with the first assumption is that it is very unlikely that a unified computer system with the computing power of the human brain can be developed in any reasonable time frame.  Professor Price doesn’t say what a reasonable time frame might be, but Ray Kurzweil does, placing the date for the singularity at 2045.  Kurzweil’s assumption is that the human brain contains storage for 10^18 bits (about 100 petabytes) of information.

In my previous post, I reported that Professor James Shapiro at the University of Chicago thinks that biological molecules, not the cell, are the most basic processing unit.  This implies that Kurzweil should be using the number of molecules in the brain rather than the number of neurons.  Assuming about 10^13 molecules per neuron, that increases the estimated human brain capacity to about 10^31 bits (on the order of a quadrillion petabytes)!  This concept of storing large volumes of data in biological molecules has been confirmed by recent research where 5.5 petabytes of data have been stored in one gram of DNA.  Keep in mind that we are speaking only of storage capacity (and only for neurons, omitting the glial cells) and not of processing power.  If the processing power of the biological molecule is aided by quantum computation, then we have no current method for estimating the processing power of the human neuron.

Assuming that processing power scales with storage capacity, and assuming that computer capacity and power can keep doubling according to Moore’s law (every two years, itself a questionable assumption because of quantum limits), closing that ten-trillion-fold gap (10^13, or about 2^43) would require roughly 43 more doublings of storage capacity, or about another 85 years beyond Kurzweil’s estimate of 2045.  That places the projection for Kurzweil’s “singularity” well into the twenty-second century.
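The arithmetic above can be checked directly. A minimal sketch, using the figures assumed in this post (10^18 bits for Kurzweil's estimate, 10^13 molecules per neuron, one Moore's-law doubling every two years):

```python
import math

kurzweil_bits = 1e18           # Kurzweil's brain-storage estimate, in bits
molecules_per_neuron = 1e13    # Shapiro-inspired assumption used above
molecular_bits = kurzweil_bits * molecules_per_neuron  # 1e31 bits

gap = molecular_bits / kurzweil_bits   # 1e13: a ten-trillion-fold gap
doublings = math.log2(gap)             # number of doublings to close it
years_beyond_2045 = 2 * doublings      # one doubling every two years

print(round(doublings), round(2045 + years_beyond_2045))  # 43 2131
```

So under these assumptions the singularity lands around the year 2131, not 2045.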

The second assumption is that sufficiently advanced machine intelligence, if it could be developed, would be able to replace humans through evolutionary competition.  I have already mentioned the energy-efficiency disadvantage of current silicon-based computers: 200 kilowatts for Watson’s Jeopardy performance versus 20 watts for human intelligence.  I have also described why computer algorithms cannot, even in principle, modify themselves in an evolutionary sense.  I can also discount approaches to evolutionary competition in which random changes are arbitrarily made to computer code.  I have seen too many attempts to fix computer programs by guesswork that amounts to little more than random changes to the code.  It doesn’t work for computer programmers, and it won’t work for competing algorithms!

My conclusion is that the main practical threat to human intellectual dominance will be biological, not computational (in addition to our own self-destructive tendencies).  That leaves open the possibility of biological computation, but that threat is subsumed by the general threat of biological genetic engineering and by the creation of biological environments that are detrimental to human health and well-being.

I have taken this lengthy excursion into the analysis of the computer / brain analogy in order to eliminate it as one path toward understanding consciousness.  The idea that computation can produce human consciousness is an example of functionalism: the view that a complete functional description of the brain would explain consciousness.  Human consciousness is a complex phenomenon that resists empirical exploration.  Let’s look at the key problem.

David Chalmers is a professor of philosophy at Australian National University and has clearly articulated what has become known as the hard problem of consciousness.  In his 1995 paper, “Facing Up to the Problem of Consciousness,” he first describes the easy problem: explaining how the brain accomplishes a given function, such as awareness, the articulation of mental states, or even the difference between wakefulness and sleep.  This last category, when pushed to consider different states of awareness, had previously seemed to me the most promising path toward understanding consciousness.

It has been known for some time that there are different levels of consciousness roughly correlated with the frequency of brain waves, which can be measured by electroencephalogram (EEG).  Different frequency ranges of brain waves traditionally correspond to different levels of alertness.  The range that seems to hold the most promise for understanding consciousness is the gamma band, roughly 25 to 100 cycles per second (hertz, or Hz), with 40 Hz usually cited as representative.  In 1990, Francis Crick (co-discoverer of the structure of DNA) and Christof Koch proposed that oscillations in the 40 Hz to 70 Hz range were the key “neural correlate of consciousness.”  A neural correlate of consciousness is defined as any measurable phenomenon that can substitute for measuring consciousness directly.
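For reference, the conventional EEG bands can be sketched as a small lookup. This is an illustrative sketch only; exact band boundaries vary somewhat from source to source, and these are the rough values discussed above.

```python
# Conventional EEG frequency bands in Hz; exact boundaries vary by source.
EEG_BANDS = [
    ("delta", 0.5, 4.0),     # deep sleep
    ("theta", 4.0, 8.0),     # drowsiness
    ("alpha", 8.0, 13.0),    # relaxed wakefulness
    ("beta", 13.0, 25.0),    # active thinking
    ("gamma", 25.0, 100.0),  # the band discussed above
]

def band_of(freq_hz: float) -> str:
    """Classify a frequency into its conventional EEG band."""
    for name, lo, hi in EEG_BANDS:
        if lo <= freq_hz < hi:
            return name
    return "out of range"

print(band_of(40.0))  # the representative gamma frequency: prints "gamma"
```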

The neural correlate of consciousness is a measurable phenomenon, and measurable events are what distinguish the easy problem from the hard problem of consciousness.  The easy problem is amenable to empirical research and experiment; it explains complex function and structure in terms of simpler phenomena.  The hard problem, by contrast, raises a new question: how does the functional explanation of consciousness (the easy problem) produce the experience of consciousness?  Or, put the other way, how does the experience of consciousness arise from function?  As Chalmers asks, why do we experience the blue frequency of light as blue?  Implicit in this question is the idea that consciousness is unified despite arising from distinct functions.  Color, shape, movement, odor, and sound all come together to form a unified experience; we sense that there is an “I” that has the unified experience and that this “I” is the same self that has had a history of similar or not-so-similar experiences.  My rephrasing of the hard question goes like this: how is it that we have a self with which to experience life?

Chalmers thinks that a new category for subjective experience will be needed to answer the hard question.  I think that such an addition is equivalent to adding consciousness as a basic attribute of matter.  That is what panpsychism asserts, and I think the evidence from physics, chemistry, and biology supports the panpsychist view.  I think panpsychism leads directly to experiences of awareness, consciousness, and self-consciousness, and that the concept of a self-reflective self is the natural conclusion of such a thought process.  Chalmers thinks the idea has merit but differentiates his view from panpsychism, saying that “panpsychism is just one way of working out the details.”

My next post will conclude this series and will directly present the theological question.