Consciousness (Part 1)

So far, in this series on the evidence for a conscious, rational power working in and through the laws of nature, I have followed the trail of low entropy.  I have used a general notion of entropy where low entropy correlates with an increasing degree of order or where it correlates with an increasing concentration of energy.  Consequently, high entropy means a state of disorder or a state of energy dispersal, most often as wasted heat.  I began with the amazing state of low entropy (highly ordered, high energy concentration) in which the universe was created.

I followed the trail of low entropy through the complex of mathematically precise physical laws that represent the incredible ordering power of nature.  I spoke of lasers, superconductivity and photosynthesis as supreme examples of entropy lowering processes.  I looked at the incredibly diverse life processes, all based on DNA, RNA and protein synthesis, that would be impossible without the information coding capability and the molecular machines of the individual cell.  I described the computer-like processing capability of individual proteins and the inexplicable speed with which they fold into the precise shape for their purpose.

I have tried to avoid the teleological language of purposeful design, but when one looks at the trail from creation to conscious being, it is difficult to avoid the question.  Random chance cannot account for this remarkable journey.  The probabilities are just too small for undirected forces to have arrived at living beings that maintain low entropy and rely on entropy lowering processes.  This implies, to me at least, that the laws of physics are favorable to life and consciousness.  What is it that has driven evolution to the point of prizing consciousness above almost all other considerations?  Consciousness requires a huge energy budget; why should our brains deserve a 20% allocation of energy if not for their powerful entropy lowering ability?

An incredible panoply of ordered life flows from the human imagination.  There is language, art, drama, literature, music and dance in addition to the social inventions of government, economic systems, justice systems, cultural institutions, family and kinship groups.  One could almost say that the creation of explicitly ordered social structures defines humanity.  And yet there is a profound puzzle in the pervasive human tendency to sow discord.  Why should that be?  Why are there wars, violence, terrorism, and dysfunctional social institutions if the human imagination can be so productive?

In discussing these and other questions of consciousness, I will attempt to follow my reductionist approach by relating emergent phenomena to the dynamics and properties of constituent components.  However, there will come a point where this approach will fail and I will need to resort to different language to describe what I consider to be the key dynamic of consciousness: the self and its narrative.  Consciousness cannot be completely understood based on functional descriptions of biological or physical components.  But first, let me turn to the attempt to explain consciousness in terms of computation.

Considering that order emerges from entropy lowering processes, it is odd that some observers think that consciousness and intelligence emerge from random, chaotic activity.  Pure randomness results in high entropy, so how can order be produced from chaos?  One such observer is the futurist Ray Kurzweil, who has written a book titled The Singularity Is Near.  He states, “Intelligent behavior is an emergent property of the brain’s chaotic and complex activity.”  Neither he nor anyone else can explain how entropy lowering intelligence can emerge from random, chaotic activity.  He does, however, distinguish intelligence from consciousness.  He cites experiments by Benjamin Libet that appear to show that decisions are an illusion and that “consciousness is out of the loop.”  Later, he describes a computer that could simulate intelligent behavior: “Such a machine will at least seem conscious, even if we cannot say definitely whether it is or not.  But just declaring that it is obvious that the computer . . . is not conscious is far from a compelling argument.”  Like many others, Kurzweil thinks that consciousness is present if intelligence can be successfully simulated by a machine.

Kurzweil is an optimistic supporter of the idea that the human brain will be completely mapped and understood to the point where it can be entirely simulated by computation.  He has predicted that this should occur in the fifth decade of the 21st century: “I set the date for the Singularity – representing a profound and disruptive transformation in human capability – at 2045.  The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.”  Kurzweil’s prediction is based on the number of neurons in the human brain and their many interconnections, arriving at a functional memory capacity of 10^18 bits of information for the human brain (10^11 neurons multiplied by 10^3 connections per neuron multiplied by 10^4 bits stored in each synaptic contact).
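Kurzweil's estimate is a simple multiplication, and the arithmetic is easy to check.  A quick sketch, using his assumed figures (which are estimates, not measurements):

```python
# Kurzweil's back-of-the-envelope brain-capacity estimate,
# using the numbers cited in the text.
neurons = 10**11          # neurons in the human brain
connections = 10**3       # synaptic connections per neuron
bits_per_synapse = 10**4  # bits assumed stored per synaptic contact

total_bits = neurons * connections * bits_per_synapse
print(total_bits)         # 10^18 bits

petabytes = total_bits / 8 / 10**15   # 8 bits per byte, 10^15 bytes per PB
print(petabytes)          # 125.0 petabytes, i.e. roughly 100 PB
```

The result, 10^18 bits, is about 125 petabytes, which matches the "about 100 petabytes" figure used later in the post.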

Kurzweil welcomes this prospective technological leap as a great advancement in the intellectual potential for the world.  He writes about his vision for the world after the singularity which he names the fifth epoch: “The fifth epoch will enable our human machine civilization to transcend the human brain’s limitations of a mere hundred trillion extremely slow connections.”  He goes on to say that eventually this new paradigm for intelligence will saturate all matter and spread throughout the universe.  Kurzweil appears to have the opposite perspective from my own view which is that the universe began with consciousness and consciousness infused all matter from the beginning.

But other people look at Kurzweil’s predictions and are concerned.  I recently read an opinion piece in the New York Times by Huw Price about the dangers of artificial intelligence (AI).  Price, on his way to Cambridge to take up his newly appointed position as Bertrand Russell Professor of Philosophy, had met Jaan Tallinn, one of the developers of Skype.  Tallinn was concerned that AI technology would evolve to the point where it could replace humans and that, through some accident, the computers would take control.  So Tallinn and Price joined up with Martin Rees, a cosmologist with a strong interest in biotechnology, to form a group called the Centre for the Study of Existential Risk (CSER).  I suspect that the group will focus more on the risk to human life posed by biotechnology than on that from AI, but the focus of Price’s column was the risk from artificial intelligence.

Professor Price presented the argument that, although the risk of such a computer takeover appears small, it shouldn’t be completely ignored.  Perhaps he has a valid point, but what are the empirical signs that such computer intelligence is near at hand?  Some might point to the victories in 2011 of IBM’s Watson computer over all challengers on the Jeopardy game show.  This was an impressive demonstration of computer prowess in natural language processing and database searching, but did Watson demonstrate intelligence?  I think that Ray Kurzweil would answer yes: to the extent that the Jeopardy game demonstrates intelligence, Watson must, by that measure, be considered intelligent.

However, consider the following subsequent development.  According to a recent news report, Watson was upgraded to use a slang dictionary called the Urban Dictionary.  As that report puts it,

“[T]he Urban Dictionary still turns out to be a rather profane place on the Web. The Urban Dictionary even defines itself as ‘a place formerly used to find out about slang, and now a place that teens with no life use as a burn book to whine about celebrities, their friends, etc., let out their sexual frustrations, show off their racist/sexist/homophobic/anti-(insert religion here) opinions, troll, and babble about things they know nothing about.’”  (From the International Business Times, January 10, 2013, “IBM’s Watson Gets A ‘Swear Filter’ After Learning The Urban Dictionary,” by Dave Smith.)

One of Watson’s developers, Eric Brown, thought that Watson would seem more human if it could incorporate slang into its vocabulary so he taught Watson to use the slang and curse words from the dictionary.  As the news report continued,

“Watson may have learned the Urban Dictionary, but it never learned the all-important axiom, ‘There’s a time and a place for everything.’ Watson simply couldn’t distinguish polite discourse from profanity.  Watson unfortunately learned all of the Urban Dictionary’s bad habits, including throwing in overly crass language at random points in its responses; in answering one question, Watson even reportedly used the word ‘bullshit’ within an answer to one researcher’s question. Brown told Forbes that Watson picked up similarly bad habits from reading Wikipedia.”

Perhaps the news story should have given us the researcher’s question so we could make our own decision about Watson’s epithet!  Eric Brown finally removed the Urban Dictionary from Watson.

In short, Watson was very good at what it was designed to do:  win at Jeopardy.  But it lacked the kind of social intelligence needed to distinguish appropriate situations for using slang.  It also appeared to lack a mechanism for learning from experience that some situations were inappropriate for slang or how to select slang words based on the social situation.  Watson was ultimately a typical computer system that had to be modified by its developers.  I know of no theoretical framework in which a computer system could maintain and enhance itself.

Now consider another facet of Watson versus a human Jeopardy contestant.  Our brain requires about 20% of our energy.  For a daily energy requirement of 2000 Calories, that amounts to 400 Calories for human mental activity, which works out to about 20 watts of power.  In terms of electricity usage, that is less than 6 cents per day in my area.  Somewhat surprisingly, the brain’s energy consumption does not much depend on one’s state of alertness; the brain uses energy at about the same rate even when you sleep.  Watson, in contrast, used 200,000 watts of power during the Jeopardy competition, which computes to about $528 per day.  If computers are to compete with humans for evolutionary advantage, it seems to me that they will need to be much more efficient users of energy.
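These figures are easy to verify.  The sketch below redoes the arithmetic; the electricity rate of 11 cents per kilowatt-hour is my assumption, chosen because it reproduces the $528 figure (the actual rate is not stated above):

```python
# Brain vs. Watson: power and daily electricity cost.
kcal_per_day = 400                 # ~20% of a 2000-Calorie diet
joules_per_day = kcal_per_day * 4184   # 1 kcal = 4184 joules
brain_watts = joules_per_day / 86400   # 86400 seconds per day
print(round(brain_watts, 1))       # ~19.4 W, i.e. "about 20 watts"

rate = 0.11                        # assumed $/kWh
hours = 24
brain_cost = brain_watts / 1000 * hours * rate
watson_cost = 200_000 / 1000 * hours * rate
print(round(brain_cost, 3))        # ~$0.05/day, under 6 cents
print(round(watson_cost))          # ~$528/day
```

The ten-thousand-fold gap in power (20 W vs. 200 kW) is the point; the dollar figures simply scale with whatever local electricity rate one assumes.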

In fact, the entire idea of comparing computers to human mental activity is absurd to many people.  Perhaps I have even encouraged this analogy by speaking of quantum computation relative to biological molecules.  But I think it will become very apparent that any putative quantum computation must be something quite unlike ordinary computer calculations.  The mathematician and physicist Roger Penrose thinks that the fact that human mathematicians can prove theorems is evidence for quantum computation and decisionality in human consciousness.  But he also thinks that quantum computation must have capabilities that ordinary computers do not have.

John Searle, a philosophy professor at UC Berkeley, thinks that the current meme that the brain is a computer is simply a fad, no more relevant than the metaphors of past ages: the telephone switchboard or the telegraph system.  Professor Searle holds that consciousness is a real subjective experience that is not open to objective verification.  It is therefore possible to explore consciousness philosophically, but not as an objective, measurable phenomenon.  Professor Searle is known for his “Chinese Room” thought experiment, in which a person who knows no Chinese follows purely formal rules to produce sensible Chinese answers to Chinese questions; Searle argues that, like the room’s occupant, a computer program has no real understanding of the symbols it manipulates.  Searle states, “. . . any attempt to produce a mind purely with computer programs leaves out the essential features of mind.”

Closely related to the “Chinese Room” is the Turing test, which asks whether a computer can simulate a human being well enough to fool another person.  In the Turing test, a person, the test subject, sits at a computer terminal which is connected either to another person sitting at a keyboard or to a computer.  The task of the test subject is to determine, by conversation alone, whether he or she is dialoging with another person or a computer.  An actual contest, the Loebner Prize, has been held each year since 1991, with prizes awarded.  So far, no computer program has been able to fool the required 30 percent of test subjects.  Nevertheless, the computer program that fools the most test subjects wins a prize.  People also compete with each other, because half of the test subjects are connected to other persons, who must try to demonstrate some characteristic in the dialog that will convince the test subject that he or she is really talking to another person.  The person who does best at convincing test subjects that they are communicating with another person wins the “Most Human Human” award.  In 2009, Brian Christian won that prize and wrote a book about his experience: The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive.

One of Brian Christian’s key insights is that human beings attempt to present a consistent self-image in any public or interpersonal encounter.  In a dialog with another person, there is a striving to get beyond the superficial in order to reveal something of the personality underneath.  But the revealed personality is not monolithic; there are key self-referential elements of the conversation that reveal other possibilities.  Nevertheless, there is a strong commitment to an underlying self-image, even if that self-image is ambiguous:

“[The existentialist’s] answer, more or less, is that we must choose a standard to hold ourselves to. Perhaps we’re influenced to pick some particular standard; perhaps we pick it at random. Neither seems particularly ‘authentic,’ but we swerve around paradox here because it’s not clear that this matters. It’s the commitment to the choice that makes behavior authentic.”

Authentic dialog, therefore, contains elements of consistent self-image and commitment to that self-image in spite of ambiguity and paradox.  A strong sense of self-unity underlies the sometimes fragmentary nature and unpredictable direction that human discourse often takes.  This is very difficult for a computer to simulate.

I think the risk from AI is so minuscule that it doesn’t deserve the level of concern that Jaan Tallinn was portrayed as having in Huw Price’s article.  There are two main assumptions in the assessment of risk that are very unlikely to be substantiated.  One assumption is that sheer computing power will lead to a machine capable of human intelligence within any reasonable time frame.  The second assumption is that such a machine, if created, could somehow replace humans in an evolutionary sense.

There are two problems with the first assumption, one theoretical and one practical.  The theoretical problem is that there is a limit to the true, valid conclusions that any automated system can reach.  This limitation is called “Gödel incompleteness”: any formal system powerful enough to draw useful conclusions will leave some true statements that cannot be proved within the system.  Computer theory has a closely related result, the “halting problem”: it is impossible to write a program that can decide, for every program and input, whether that program will eventually halt.  The practical manifestation of these limits, as I see it, is that there is no way to introduce complete self-awareness into computer systems.  One can create modules that simulate self-awareness of other modules, but the new module would not be aware of itself.  This limitation implies that human intelligence will always be needed to correct and modify computer systems.
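The core of the halting-problem argument can be sketched in a few lines.  Given any claimed halting-decider, one can construct a program that does the opposite of whatever the decider predicts about it.  The sketch below is my own illustration (not from any source cited here), and it demonstrates only the branch of the contradiction that is safe to run:

```python
def make_counterexample(halts):
    """Given any claimed halting-decider `halts`, build a program it must misjudge."""
    def g():
        if halts(g):       # decider predicts that g halts...
            while True:    # ...so g defiantly loops forever
                pass
        return "halted"    # decider predicts that g loops, so g halts
    return g

# A decider that always answers "never halts" is refuted immediately:
always_no = lambda program: False
g = make_counterexample(always_no)
result = g()               # g halts, contradicting the decider's prediction
print(always_no(g), result)
```

A decider that answered "halts" for g would be refuted by the other branch, but demonstrating that would mean running a program that never terminates, so that half of the argument is made by reasoning rather than by execution.  Since every candidate decider fails on its own counterexample, no general halting-decider can exist.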

(Roger Penrose’s book Shadows of the Mind presents the case for quantum consciousness in detail.  A key part of his argument is that computers are fundamentally limited by Gödel incompleteness.  This implies, according to Penrose, that quantum coherence plays a key part in consciousness and that quantum processes are capable of decisions exceeding the power of any ordinary computer calculation.)

The second problem with the first assumption is that it is very unlikely that a unified computer system with the computing power of the human brain can be developed in any reasonable time frame.  Professor Price doesn’t say what a reasonable time frame might be, but Ray Kurzweil does, placing the date for the singularity at 2045.  Kurzweil’s assumption is that the human brain contains storage for 10^18 bits (about 100 petabytes) of information.

In my previous post, I reported that Professor James Shapiro at the University of Chicago thinks that the biological molecule, not the cell, is the most basic processing unit.  This implies that Kurzweil should be using the number of molecules in the brain rather than the number of neurons.  Assuming about 10^13 molecules per neuron, that increases the estimated human brain capacity to about 10^31 bits, some 10^13 times Kurzweil’s figure!  This concept of storing large volumes of data in biological molecules is supported by recent research in which data has been stored in DNA at densities of about 5.5 petabits per cubic millimeter.  Keep in mind that we are speaking only of storage capacity (and only for neurons, omitting the glial cells) and not of processing power.  If the processing power of the biological molecule is aided by quantum computation, then we have no current method for estimating the processing power of the human neuron.

Assuming that processing power is on a par with storage capacity, and assuming that computer capacity and power can double according to Moore’s law (every two years – another questionable assumption because of quantum limits), then there would need to be about 43 more doublings of storage capacity (2^43 ≈ 10^13), or roughly another 85 years beyond Kurzweil’s estimate of 2045.  That places the projection for Kurzweil’s “singularity” well into the twenty-second century.
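The doubling arithmetic can be checked directly:

```python
import math

# Doublings needed to go from Kurzweil's estimate of 10^18 bits to the
# molecule-based estimate of 10^31 bits, at one doubling every two years.
ratio = 10**31 / 10**18        # 10^13
doublings = math.log2(ratio)   # log2(10^13) is about 43.2
years = 2 * doublings          # about 86 years at 2 years per doubling
print(round(doublings), 2045 + round(years))
```

Roughly 43 doublings, landing in the early 2130s, which is how the projection ends up well into the twenty-second century.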

The second assumption is that sufficiently advanced machine intelligence, if it could be developed, would be able to replace humans through evolutionary competition.  I have already mentioned the energy efficiency disadvantage of current silicon-based computers: 200 kilowatts for Watson’s Jeopardy performance versus 20 watts for human intelligence.  I have also argued that computer algorithms cannot, even in principle, modify themselves in an evolutionary sense.  I can also discount approaches based on evolutionary competition in which random changes are arbitrarily made to computer code.  I have seen too many attempts to fix computer programs by guesswork that amounts to little more than random changes in the code.  It doesn’t work for computer programmers, and it won’t work for competing algorithms!

My conclusion is that the main practical threat to human intellectual dominance will be biological and not computational (in addition to our own self-destructive tendencies).  That leaves open the possibility for biological computation, but that threat is subsumed by the general threat of biological genetic engineering and by the creation of biological environments that are detrimental to human health and well-being.

I have taken this lengthy excursion into the analysis of the computer/brain analogy in order to eliminate it as one path toward understanding consciousness.  The idea that computation can produce human consciousness is an example of functionalism: the concept that a complete functional description of the brain will explain consciousness.  Human consciousness is a complex concept which resists empirical exploration.  Let’s look at the key problem.

David Chalmers is a professor of philosophy at Australian National University who has clearly articulated what has become known as the hard problem of consciousness.  In his 1995 paper, “Facing Up to the Problem of Consciousness,” he first describes the easy problem.  The easy problem is the explanation of how the brain accomplishes a given function, such as awareness, the articulation of mental states, or even the difference between wakefulness and sleep.  This last category, when pushed to consider different states of awareness, had previously seemed to me to be the most promising path toward understanding consciousness.

It has been known for some time that there are different levels of consciousness that are roughly correlated with the frequency of brain waves, which can be measured by electroencephalogram (EEG), and different frequency bands have traditionally corresponded to different levels of alertness.  The range that seems to hold the most promise for understanding consciousness is the gamma band, at roughly 25 to 100 cycles per second (Hz or Hertz); 40 Hz is usually cited as representative.  In 1990, Francis Crick (co-discoverer of the DNA structure) and Christof Koch proposed that oscillations in the 40 Hz to 70 Hz range were the key “neural correlate of consciousness.”  A neural correlate of consciousness is defined to be any measurable phenomenon which can substitute for measuring consciousness directly.

The neural correlate of consciousness is a measurable phenomenon, and measurable events are what distinguish the easy problem from the hard problem of consciousness.  The easy problem is amenable to empirical research and experiment; it explains complex function and structure in terms of simpler phenomena.  The hard problem, by contrast, raises a new question: how is it that the functional explanation of consciousness (the easy problem) produces the experience of consciousness, or how is it that the experience of consciousness arises from function?  As Chalmers says, why do we experience the blue frequency of light as blue?  Implicit in this question is the idea that consciousness is unified despite different functional impact.  Color, shape, movement, odor, sound all come together to form a unified experience; we sense that there is an “I” which has the unified experience and that this “I” is the same as the self that has had a history of similar or not so similar experiences.  My rephrasing of the hard question goes like this: how is it that we have a self with which to experience life?

Chalmers thinks that a new category for subjective experience will be needed to answer the hard question.  I think that such an addition is equivalent to adding consciousness as a basic attribute of matter.  That is what panpsychism asserts, and I think that the evidence from physics, chemistry and biology supports the panpsychist view.  I think panpsychism leads directly to experiences of awareness, consciousness and self-consciousness and that the concept of a self-reflective self is the natural conclusion of such a thought process.  David Chalmers thinks that the idea has merit, but differentiates his view from panpsychism, saying “panpsychism is just one way of working out the details.”

My next post will conclude this series and will directly present the theological question.
