Consciousness and Dualism (Part 3)

Late in chapter 3 of Mind and Cosmos, Nagel introduces one of his candidate solutions to the problem of Cartesian dualism.  This approach can be called monism or panpsychism.  Panpsychism is the view that matter and mind are two different manifestations of a single unnamed substance.  Nagel thinks this path offers one possible framework for an eventual solution:

“Everything, living or not, is constituted from elements having a nature that is both physical and nonphysical— that is, capable of combining into mental wholes. So this reductive account can also be described as a form of panpsychism: all the elements of the physical world are also mental.”

“A comprehensively reductive conception is favored by the belief that the propensity for the development of organisms with a subjective point of view must have been there from the beginning, just as the propensity for the formation of atoms, molecules, galaxies, and organic compounds must have been there from the beginning, in consequence of the already existing properties of the fundamental particles. If we imagine an explanation taking the form of an enlarged version of the natural order, with complex local phenomena formed by composition from universally available basic elements, it will depend on some kind of monism or panpsychism, rather than laws of psychophysical emergence that come into operation only late in the game.”

However, there is a serious problem.  We have no idea how elementary particles could possess subjectivity, the “proto-mental” attribute in Nagel’s terms.  Such an understanding would be necessary in order to build a framework for explaining how individual particles could combine to form conscious organisms.  The best candidate for such an explanation is quantum physics, but our understanding of quantum physics today is limited to its computational aspects.  If those aspects are real, then computation supplies only part of the solution, and one would still need to assume that consciousness is partly the result of computation.

There are some philosophical positions that hold that consciousness is all computation (Dennett, Kurzweil).  Anyone holding such a position may be quite happy with a quantum solution since it would explain how minds could be made of matter with computational abilities.  Still, one would need to work out the details of how individual particles with computational ability can be organized so that the total organism’s mind appears to be an unbroken whole.  That work might be made easier by quantum entanglement, but there is very little theoretical understanding on which to build.

Quantum entanglement does provide a theoretical advantage over classical computation: it enables quantum information to be coded more compactly than classical information, giving it the ability to reduce entropy.  Low entropy is a characteristic of order, so quantum computation has order-producing power beyond that of classical computation.  But how this could take place in biological organisms remains a mystery.

Nagel finds additional problems with panpsychism when he imagines how it might address the developmental problem of life.  How did life originally arise from non-living matter and how did this proto-mental attribute of matter overcome the unlikelihood of random chemical interactions leading to life?  He concludes the section on panpsychism with this pessimistic comment:

“The idea of a reductive answer to both the constitutive and the historical questions remains very dark indeed. It seeks a deeper and more cosmically unified explanation of consciousness than an emergent theory, but at the cost of greater obscurity, and it offers no evident advantage with respect to the historical problem of likelihood.”

I find myself more optimistic about the outlook for a form of panpsychism that is based on quantum physics.  I think the entropy lowering capability of quantum computation will go a long way towards explaining the ordering power inherent in life and consciousness.  The problem is that quantum computation does not really explain subjective experience unless you assume that subjectivity is the result of computation.  And that assumption re-introduces the problem of dualism because you would need to assume subjectivity in all matter, not just living matter.  As soon as you’ve assumed that subjective experience is an attribute of living matter only, then it has to be an optional attribute, introduced by something besides physical law.

Consider what happens at the moment of death of any living organism.  For a brief instant, the chemical composition remains unchanged, yet life and consciousness are gone.  Subjectivity as we have come to know it during life has disappeared.  This would seem to indicate that subjectivity is an optional attribute of the material world, and that is dualism.

I suppose it is possible that there are subtle chemical changes at the time of death, but would those changes be observable enough to indicate clearly which came first?  The subtlety may be telling us how miraculous consciousness is in the first place.  One can also consider the action of anesthetics, which cause temporary unconsciousness.  For example, ether causes unconsciousness in humans and inactivity in the single-celled organism Paramecium, yet its exact mechanism of action remains unexplained.

While I think that panpsychism based on quantum physics offers hope for explaining the tremendous ordering power of life and consciousness, I do not find that it offers a complete answer to the problem of dualism when viewed from the materialist point of view.  I am drawn more to the idealist point of view as a solution to dualism.  No less a physicist than Leonard Susskind has suggested that the universe may be like a hologram.  (A hologram is a three-dimensional projection from a two-dimensional source.  For Susskind’s analogy to hold, the universe would need to be a four-dimensional projection from some external source.)  Susskind is an atheist, so he will not agree with my perspective that the universe is a projection from God, but that appears to be the only view that solves the problem of consciousness and dualism.

Thomas Nagel doesn’t agree with me either.  He finishes chapter 3 by dismissing the theist path of an intentional power but gives more credibility to what he calls the teleological framework.  The teleological path requires that the laws of nature be “value free” yet proceed toward a defined purpose or goal.  Nature’s laws would need to be “value free” to avoid the appearance of an intentional designer.  He needs to say more about how a desired goal can be free of value, and he promises to do so later in the book.


Consciousness (Part 1)

So far, in this series on the evidence for a conscious, rational power working in and through the laws of nature, I have followed the trail of low entropy.  I have used a general notion of entropy where low entropy correlates with an increasing degree of order or where it correlates with an increasing concentration of energy.  Consequently, high entropy means a state of disorder or a state of energy dispersal, most often as wasted heat.  I began with the amazing state of low entropy (highly ordered, high energy concentration) in which the universe was created.

I followed the trail of low entropy through the complex of mathematically precise physical laws that represent the incredible ordering power of nature.  I spoke of lasers, superconductivity and photosynthesis as supreme examples of entropy lowering processes.  I looked at the incredibly diverse life processes, all based on DNA, RNA and protein synthesis, that would be impossible without the information coding capability and the molecular machines of the individual cell.  I described the computer-like processing capability of individual proteins and the inexplicable speed with which they fold into the precise shape for their purpose.

I have tried to avoid the teleological language of purposeful design, but when one looks at the trail from creation to conscious being, it is difficult to avoid the question.  Random chance cannot account for this remarkable journey.  The probabilities are just too small for undirected forces to have arrived at living beings that maintain low entropy and rely on entropy lowering processes.  This implies, to me at least, that the laws of physics are favorable to life and consciousness.  What is it that has driven evolution to the point of prizing consciousness almost above other considerations?  Consciousness requires a huge energy budget; why should our brains deserve a 20% allocation of the body’s energy if not for consciousness’s powerful entropy lowering ability?

An incredible panoply of ordered life flows from the human imagination.  There is language, art, drama, literature, music and dance in addition to the social inventions of government, economic systems, justice systems, cultural institutions, family and kinship groups.  One could almost say that the creation of explicitly ordered social structures defines humanity.  And yet there is a profound puzzle in the pervasive human tendency to sow discord.  Why should that be?  Why are there wars, violence, terrorism, and dysfunctional social institutions if the human imagination can be so productive?

In discussing these and other questions of consciousness, I will attempt to follow my reductionist approach by relating emergent phenomena to the dynamics and properties of constituent components.  However, there will come a point where this approach will fail and I will need to resort to different language to describe what I consider to be the key dynamic of consciousness: the self and its narrative.  Consciousness cannot be completely understood based on functional descriptions of biological or physical components.  But first, let me turn to the attempt to explain consciousness in terms of computation.

Considering that order emerges from entropy lowering processes, it is odd that some observers think that consciousness and intelligence emerge from random, chaotic activity.  Pure randomness results in high entropy, so how can order be produced from chaos?  One such observer is Ray Kurzweil, a futurist, who has written a book titled The Singularity Is Near.  He states, “Intelligent behavior is an emergent property of the brain’s chaotic and complex activity.”  Neither he nor anyone else can explain how entropy lowering intelligence can emerge from random, chaotic activity.  He does, however, distinguish intelligence from consciousness.  He cites experiments by Benjamin Libet that appear to show that decisions are an illusion and that “consciousness is out of the loop.”  Later, he describes a computer that could simulate intelligent behavior: “Such a machine will at least seem conscious, even if we cannot say definitely whether it is or not.  But just declaring that it is obvious that the computer . . . is not conscious is far from a compelling argument.”  Like many others, Kurzweil thinks that consciousness is present if intelligence can be successfully simulated by a machine.
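The link between randomness and entropy can be made concrete with Shannon’s information entropy, the information-theoretic analogue of thermodynamic entropy.  A minimal sketch (my own illustration, not Kurzweil’s): a string drawn from many equally likely symbols carries maximal entropy, while a perfectly uniform string carries none.

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    n = len(s)
    h = -sum((c / n) * math.log2(c / n) for c in Counter(s).values())
    return max(0.0, h)  # clamp the -0.0 that the single-symbol case produces

print(shannon_entropy("AAAAAAAA"))  # one symbol: minimum entropy, total order
print(shannon_entropy("AABBCCDD"))  # four symbols, equal frequency: 2 bits/symbol
print(shannon_entropy("ABCDEFGH"))  # eight distinct symbols: the 8-symbol maximum
```

The more evenly the symbols are spread (the more “random” the source), the higher the entropy; order shows up precisely as a departure from that maximum.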

Kurzweil is an optimistic supporter of the idea that the human brain will be completely mapped and understood to the point where it can be entirely simulated by computation.  He has predicted that this should occur in the fifth decade of the 21st century: “I set the date for the Singularity – representing a profound and disruptive transformation in human capability – at 2045.  The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.”  Kurzweil’s prediction is based on the number of neurons in the human brain and their many interconnections, arriving at a functional memory capacity of 10^18 bits of information for the human brain (10^11 neurons multiplied by 10^3 connections for each neuron multiplied by 10^4 bits stored in each of the synaptic contacts).
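Kurzweil’s figure is straightforward multiplication of round powers of ten; as a quick sanity check:

```python
neurons = 10**11             # neurons in the human brain (Kurzweil's figure)
connections = 10**3          # synaptic connections per neuron
bits_per_connection = 10**4  # bits stored in each synaptic contact

capacity_bits = neurons * connections * bits_per_connection
print(f"{capacity_bits:.0e} bits")  # 1e+18 bits
```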

Kurzweil welcomes this prospective technological leap as a great advancement in the intellectual potential of the world.  He writes about his vision for the world after the singularity, which he names the fifth epoch: “The fifth epoch will enable our human machine civilization to transcend the human brain’s limitations of a mere hundred trillion extremely slow connections.”  He goes on to say that eventually this new paradigm for intelligence will saturate all matter and spread throughout the universe.  Kurzweil’s perspective appears to be the opposite of my own view, which is that the universe began with consciousness and that consciousness has infused all matter from the beginning.

But other people look at Kurzweil’s predictions and are concerned.  I recently read an opinion piece by Huw Price in the New York Times about the dangers of artificial intelligence (AI).  Price was on his way to Cambridge to take up his newly appointed position as Bertrand Russell chair in Philosophy when he met Jaan Tallinn, one of the developers of Skype.  Tallinn was concerned that AI technology would evolve to the point where it could replace humans and that, through some accident, the computers would take control.  So Tallinn and Price joined up with Martin Rees, a cosmologist with a strong interest in biotechnology, to form a group called the Centre for the Study of Existential Risk (CSER).  I suspect that the group will focus more on the risk to human life posed by biotechnology than on AI, but the focus of Price’s column was the risk from artificial intelligence.

Professor Price presented the argument that, although the risk of such a computer takeover appears small, it shouldn’t be completely ignored.  Perhaps he has a valid point, but what are the empirical signs that such computer intelligence is near at hand?  Some might point to the victories in 2011 of IBM’s Watson computer over all challengers in the Jeopardy game show.  This was an impressive demonstration of computer prowess in natural language processing and in database searching, but did Watson demonstrate intelligence?  I think that Ray Kurzweil would answer yes.  To the extent that the Jeopardy game demonstrates intelligence, then, by that measure, Watson must be considered intelligent.

However, consider the following subsequent development.  According to a recent news report, Watson was upgraded with a slang lexicon called the Urban Dictionary.  As that report puts it,

“[T]he Urban Dictionary still turns out to be a rather profane place on the Web. The Urban Dictionary even defines itself as ‘a place formerly used to find out about slang, and now a place that teens with no life use as a burn book to whine about celebrities, their friends, etc., let out their sexual frustrations, show off their racist/sexist/homophobic/anti-(insert religion here) opinions, troll, and babble about things they know nothing about.’”  (From the International Business Times, January 10, 2013, “IBM’s Watson Gets A ‘Swear Filter’ After Learning The Urban Dictionary,” by Dave Smith.)

One of Watson’s developers, Eric Brown, thought that Watson would seem more human if it could incorporate slang into its vocabulary, so he taught Watson the slang and curse words from the dictionary.  As the news report continued,

“Watson may have learned the Urban Dictionary, but it never learned the all-important axiom, ‘There’s a time and a place for everything.’ Watson simply couldn’t distinguish polite discourse from profanity.  Watson unfortunately learned all of the Urban Dictionary’s bad habits, including throwing in overly crass language at random points in its responses; in answering one question, Watson even reportedly used the word ‘bullshit’ within an answer to one researcher’s question. Brown told Forbes that Watson picked up similarly bad habits from reading Wikipedia.”

Perhaps the news story should have given us the researcher’s question so we could make our own decision about Watson’s epithet!  Eric Brown finally removed the Urban Dictionary from Watson.

In short, Watson was very good at what it was designed to do:  win at Jeopardy.  But it lacked the kind of social intelligence needed to distinguish appropriate situations for using slang.  It also appeared to lack a mechanism for learning from experience that some situations were inappropriate for slang or how to select slang words based on the social situation.  Watson was ultimately a typical computer system that had to be modified by its developers.  I know of no theoretical framework in which a computer system could maintain and enhance itself.

Now consider another facet of Watson versus a Jeopardy contestant.  Our brain requires about 20% of our energy.  For a daily energy requirement of 2000 Calories, that amounts to 400 Calories for human mental activity.  That works out to about 20 watts of power.  In terms of electricity usage, that is less than 6 cents per day in my area.  Somewhat surprisingly, the brain’s energy consumption does not much depend on one’s state of alertness.  The brain uses energy at about the same rate even when you sleep.  Watson, in contrast, used 200,000 watts of power during the Jeopardy competition.  That computes to about $528 per day.  If computers are to compete with humans for evolutionary advantage, it seems to me that they will need to be much more efficient users of energy.
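The arithmetic behind these figures can be sketched as follows (the electricity rate of $0.11 per kilowatt-hour is my own illustrative assumption, chosen to match the costs quoted above):

```python
KCAL_TO_JOULES = 4184   # one food Calorie (kcal) in joules
SECONDS_PER_DAY = 24 * 3600
RATE_PER_KWH = 0.11     # assumed electricity price in dollars (illustrative)

# 20% of a 2000-Calorie diet goes to the brain
brain_kcal_per_day = 2000 * 0.20
brain_watts = brain_kcal_per_day * KCAL_TO_JOULES / SECONDS_PER_DAY
print(round(brain_watts, 1))  # just under 20 watts

def daily_cost(watts):
    """Dollars per day at the assumed electricity rate."""
    kwh = watts * 24 / 1000
    return kwh * RATE_PER_KWH

print(round(daily_cost(20), 2))    # brain: about a nickel a day
print(round(daily_cost(200_000)))  # Watson: hundreds of dollars a day
```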

In fact, the entire idea of comparing computers to human mental activity is absurd to many people.  Perhaps I have even encouraged this analogy by speaking of quantum computation relative to biological molecules.  But I think it will become very apparent that any putative quantum computation must be something quite unlike ordinary computer calculations.  The mathematician and physicist Roger Penrose thinks that the fact that human mathematicians can prove theorems is evidence for quantum computation and decisionality in human consciousness.  But he also thinks that quantum computation must have capabilities that ordinary computers do not have.

John Searle, a philosophy professor at UC Berkeley, thinks that the current meme that the brain is a computer is simply a fad, no more relevant than the metaphors of past ages: the telephone switchboard or the telegraph system.  Professor Searle regards consciousness as a real subjective experience that is not open to objective verification.  It is therefore possible to explore consciousness philosophically, but not as an objective, measurable phenomenon.  Professor Searle is known for his example of the “Chinese Room,” in which a person who knows no Chinese produces correct Chinese responses by mechanically following rules, yet, Searle claims, there is no real understanding of what is being processed.  Searle states, “. . . any attempt to produce a mind purely with computer programs leaves out the essential features of mind.”

Closely related to the “Chinese Room” is the Turing test, which asks whether a computer can simulate a human being well enough to fool another person.  In the Turing test, a person, the test subject, sits at a computer terminal which is connected either to another person sitting at a keyboard or to a computer.  The task of the test subject is to determine, by conversation alone, whether he or she is dialoging with another person or a computer.  An actual competition, the Loebner Prize, has been held annually since 1991, and prizes awarded.  So far, no computer program has been able to fool the required 30 percent of test subjects.  Nevertheless, the computer program that fools the most test subjects wins a prize.  People also compete with each other, because half of the test subjects are connected to other persons, who must try to demonstrate some characteristic in the dialog that will convince the test subject that he or she is really talking to another person.  The person who does best at convincing test subjects that they are communicating with another person wins the “Most Human Human” award.  In 2009, Brian Christian won that prize and wrote a book about his experience: The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive.

One of Brian Christian’s key insights in his book is that human beings attempt to present a consistent self-image in any public or interpersonal encounter.  In a dialog with another person, there is a striving to get beyond the superficial in order to reveal something of the personality underneath.  But the revealed personality is not monolithic; there are key self-referential elements of the conversation that reveal other possibilities.  Nevertheless there is a strong commitment to an underlying self-image, even if that self-image is ambiguous:

“[The existentialist’s] answer, more or less, is that we must choose a standard to hold ourselves to. Perhaps we’re influenced to pick some particular standard; perhaps we pick it at random. Neither seems particularly ‘authentic,’ but we swerve around paradox here because it’s not clear that this matters. It’s the commitment to the choice that makes behavior authentic.”

Authentic dialog, therefore, contains elements of consistent self-image and commitment to that self-image in spite of ambiguity and paradox.  A strong sense of self-unity underlies the sometimes fragmentary nature and unpredictable direction that human discourse often takes.  This is very difficult for a computer to simulate.

I think the risk from AI is so minuscule that it doesn’t deserve the level of concern that Jaan Tallinn was portrayed as having in Huw Price’s article.  There are two main assumptions in the assessment of risk that are very unlikely to be substantiated.  One assumption is that sheer computing power will lead to a machine capable of human intelligence within any reasonable time frame.  The second assumption is that such a machine, if created, could somehow replace humans in an evolutionary sense.

There are two problems with the first assumption, one theoretical and one practical.  The theoretical problem is that there is a limit to the true, valid conclusions that any automated system can reach.  This limitation is called “Gödel incompleteness”: any formal system powerful enough to draw useful conclusions contains true statements that cannot be proved within the system.  Its counterpart in computer theory is the “halting problem”: it is impossible to create a computer program that can decide, for every program and input, whether that program will eventually halt and produce a result.  The practical manifestation of the halting problem is that there is no way to introduce complete self-awareness into computer systems.  One can create modules that simulate self-awareness of other modules, but the new module would not be self-aware of itself.  This limitation implies that human intelligence will always be needed to correct and modify computer systems.
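The impossibility of a halting decider rests on a short diagonal argument, sketched here as hypothetical Python (none of these functions can actually be implemented; that is the point):

```python
def halts(program, arg):
    """Hypothetical oracle: would always answer whether program(arg) halts.
    Assumed for the sake of contradiction; no correct implementation exists."""
    raise NotImplementedError

def paradox(program):
    """If `program` would halt when run on itself, loop forever; else halt."""
    if halts(program, program):
        while True:
            pass
    return "halted"

# Now ask: does paradox(paradox) halt?
#  - If halts() answers yes, paradox loops forever, so the answer was wrong.
#  - If halts() answers no, paradox halts immediately, so the answer was wrong.
# Either way the oracle fails, so no total, correct halts() can exist.
```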

(Roger Penrose’s book, Shadows of the Mind, presents the case for quantum consciousness in detail.  A key part of his argument is that computers are fundamentally limited by “Gödel Incompleteness.”  This implies, according to Penrose, that quantum coherence plays a key part in consciousness and that quantum calculations are capable of decisions exceeding the power of any ordinary computer calculation.)

The second problem with the first assumption is that it is very unlikely that a unified computer system with the computing power of the human brain can be developed in any reasonable time frame.  Professor Price doesn’t say what a reasonable time frame might be, but Ray Kurzweil does, placing the date for the singularity at 2045.  Kurzweil’s assumption is that the human brain contains storage for 10^18 bits (about 100 petabytes) of information.

In my previous post, I reported that Professor James Shapiro at the University of Chicago thinks that biological molecules, not the cell, are the most basic processing unit.  This implies that Kurzweil should be using the number of molecules in the brain rather than the number of neurons.  Assuming about 10^13 molecules per neuron, that increases the estimated human brain capacity to about 10^31 bits, ten trillion times Kurzweil’s figure!  This concept of storing large volumes of data in biological molecules has been confirmed by recent research in which 5.5 petabytes of data were stored in one gram of DNA.  Keep in mind that we are speaking only of storage capacity (and only for neurons, omitting the glial cells) and not of processing power.  If the processing power of the biological molecule is aided by quantum computation, then we have no current method for estimating the processing power of the human neuron.

Assuming that processing power is on a par with storage capacity, and assuming that computer capacity and power can double according to Moore’s law (every two years – another questionable assumption because of quantum limits), then there would need to be about 43 doublings of storage capacity, or roughly 86 years beyond Kurzweil’s estimate of 2045.  That places the projection for Kurzweil’s “singularity” well into the twenty-second century.
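The doubling arithmetic under these assumptions can be checked directly:

```python
import math

current_estimate = 10**18    # Kurzweil's bit-capacity figure for the brain
molecular_estimate = 10**31  # capacity if molecules, not neurons, are the units

doublings = math.log2(molecular_estimate / current_estimate)
years = round(doublings * 2)  # Moore's law: one doubling every two years

print(round(doublings, 1))  # about 43 doublings needed
print(years)                # roughly 86 more years
print(2045 + years)         # lands well inside the twenty-second century
```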

The second assumption is that sufficiently advanced machine intelligence, if it could be developed, would be able to replace humans through evolutionary competition.  I have already mentioned the energy efficiency disadvantage of current silicon-based computers: 200 kilowatts for Watson’s Jeopardy performance versus 20 watts for human intelligence.  I have also described why computer algorithms cannot, even in principle, modify themselves in an evolutionary sense.  I can also discount approaches based on evolutionary competition in which random changes are arbitrarily made to computer code.  I have seen too many attempts to fix computer programs by guesswork that amounts to little more than random changes in the code.  It doesn’t work for computer programmers, and it won’t work for competing algorithms!

My conclusion is that the main practical threat to human intellectual dominance will be biological and not computational (in addition to our own self-destructive tendencies).  That leaves open the possibility for biological computation, but that threat is subsumed by the general threat of biological genetic engineering and by the creation of biological environments that are detrimental to human health and well-being.

I have taken this lengthy excursion into the analysis of the computer / brain analogy in order to eliminate it as one path toward understanding consciousness.  The idea that computation can produce human consciousness is an example of functionalism:  the concept that a complete functional description of the brain will explain consciousness.  Human consciousness is a complex concept which resists empirical exploration.  Let’s look at the key problem.

David Chalmers, a professor of philosophy at Australian National University, has clearly articulated what has become known as the hard problem of consciousness.  In his 1995 paper, “Facing Up to the Problem of Consciousness,” he first describes the easy problem.  The easy problem is the explanation of how the brain accomplishes a given function, such as awareness, the articulation of mental states, or even the difference between wakefulness and sleep.  This last category, when pushed to consider different states of awareness, had previously seemed to me to be the most promising path towards understanding consciousness.

It has been known for some time that there are different levels of consciousness that are roughly correlated with the frequency of brain waves, which can be measured by electroencephalogram (EEG).  The frequency range that seems to hold the most promise for understanding consciousness is the gamma band, at roughly 25 to 100 cycles per second (hertz, or Hz), with 40 Hz usually cited as representative.  In 1990, Francis Crick (co-discoverer of the DNA structure) and Christof Koch proposed that oscillations in the 40 Hz to 70 Hz range were the key “neural correlate of consciousness.”  A neural correlate of consciousness is defined to be any measurable phenomenon which can substitute for measuring consciousness directly.

The neural correlate of consciousness is a measurable phenomenon, and measurable events are what distinguish the easy problem from the hard problem of consciousness.  The easy problem is amenable to empirical research and experiment; it explains complex function and structure in terms of simpler phenomena.  The hard problem, by contrast, raises a new question: how is it that the functional explanation of consciousness (the easy question) produces the experience of consciousness, or how is it that the experience of consciousness arises from function?  As Chalmers asks, why do we experience the blue frequency of light as blue?  Implicit in this question is the idea that consciousness is unified despite different functional inputs.  Color, shape, movement, odor, and sound all come together to form a unified experience; we sense that there is an “I” which has the unified experience and that this “I” is the same as the self that has had a history of similar or not so similar experiences.  My rephrasing of the hard question goes like this: how is it that we have a self with which to experience life?

Chalmers thinks that a new category for subjective experience will be needed to answer the hard question.  I think that such an addition is equivalent to adding consciousness as a basic attribute of matter.  That is what panpsychism asserts, and I think that the evidence from physics, chemistry and biology supports the panpsychist view.  I think panpsychism leads directly to experiences of awareness, consciousness and self-consciousness and that the concept of a self-reflective self is the natural conclusion of such a thought process.  David Chalmers thinks that the idea has merit, but differentiates his view from panpsychism, saying “panpsychism is just one way of working out the details.”

My next post will conclude this series and will directly present the theological question.

The Evidence from Physics and Cosmology (Part 2)

My previous post describes the evidence for a rational agent based on an ordered universe created by the “Big Bang”.  But if the laws of nature are so orderly, where does unpredictability come from?  Where does uncertainty come from?  We will need to know more about what we mean by “laws,” and why some of those laws might allow for some sort of non-deterministic behavior.  Might some non-deterministic activity be evidence for an ongoing role for a rational power in the universe?  But first, what are our most certain assumptions about nature?  What is it that all of physics depends on?  Leonard Susskind specifies three unconditional laws of nature (from The Black Hole War):

  1. The maximum velocity of any object in the universe is the speed of light, c. This speed limit is not just a law about light but a law about everything in nature.
  2. All objects in the universe attract each other with a gravitational force proportional to the product of their masses times the Newton constant, G, and inversely proportional to the square of the distance between them. All objects means all objects, with no exceptions.
  3. For any object in the universe, the product of the mass and the uncertainties of position and velocity is never smaller than Planck’s constant, h.

Susskind emphasizes, “There is no dispute . . . .  They apply to any and all things – everything.  These three laws of nature truly deserve to be called universal.”  For the really picky reader, there are some additional qualifications that probably need to be added, but I’ll ignore those now to keep things as simple as possible.
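The third law can be illustrated numerically.  A minimal sketch, with masses and position uncertainties of my own choosing (and ignoring the factor-of-4π refinements of the formal uncertainty principle):

```python
H_PLANCK = 6.626e-34    # Planck's constant, in joule-seconds
M_ELECTRON = 9.109e-31  # electron mass, in kilograms

def min_velocity_uncertainty(mass, delta_x):
    """Smallest velocity spread allowed when m * dx * dv >= h (roughly)."""
    return H_PLANCK / (mass * delta_x)

# An electron confined to an atom-sized region (~1e-10 m): an enormous
# velocity spread, millions of meters per second.
print(f"{min_velocity_uncertainty(M_ELECTRON, 1e-10):.1e} m/s")

# A 1 kg ball located to within a micron: an utterly negligible spread,
# which is why we never notice quantum uncertainty in everyday life.
print(f"{min_velocity_uncertainty(1.0, 1e-6):.1e} m/s")
```

The same law that is invisible for everyday objects dominates at atomic scales, which is where the non-deterministic behavior discussed below enters.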

To these three unarguably fundamental laws, Susskind would probably add the conservation of energy (energy is neither created nor destroyed; mass is a form of energy by Einstein’s famous equation, E = mc^2); the conservation of charge (charge is neither created nor destroyed; electrons and protons are examples of charged particles); and, surprisingly, time reversibility, or conservation of information.  Susskind maintains that it is fundamental to the laws of physics that, in addition to predicting the future, the laws do not allow for an ambiguous past.  In other words, information about the prior states of a system is never lost.  He argued this point with Stephen Hawking and apparently won.  Roger Penrose appears to be a lone holdout in this debate about conservation of information.  Time reversibility will prove to be a paradoxical factor in the laws of physics.

Speaking of Roger Penrose, he would probably add to this list as well.  He rates our best scientific theories as ‘SUPERB’, ‘USEFUL’, or ‘TENTATIVE’.  In the ‘SUPERB’ category, he places Einstein’s theory of relativity (both special and general relativity), quantum theory, Newton’s laws of motion and law of gravity, and Maxwell’s theory of electromagnetism.  Into the ‘USEFUL’ category go the standard model of particle physics and the Big Bang theory.

Let’s look at some of the implications of these fundamental laws of physics for an orderly, rational world.  The first fundamental law, stating that the speed of light is the maximum speed for any observer, is part of Einstein’s special theory of relativity.  There are some remarkable features of the special theory of relativity.  The constancy of the speed of light for all observers leads to the conclusions that distances along the direction of motion must contract and time must slow down for any system that is moving with respect to another system.

This effect is symmetric with respect to two systems that are moving past each other at a uniform speed so that observers in each system will conclude that the other system is measuring shorter distances and slower times.  However, if one system reverses direction, this implication of the special theory of relativity will have permanent consequences.

For example, if identical twins are born on earth and one of them is placed on a rocket to a nearby star and that rocket is moving at a speed close to the speed of light, then the space traveler twin will return to earth younger than his or her sibling.  The effect of time slowing down is made permanent by the reversal of direction of the rocket.  The space traveler twin will experience acceleration and deceleration that break any symmetry of motion between the twins.  This is called the twin paradox.

The surprise is that each observer in motion has his or her own time frame.  As the twin paradox shows, anyone willing to travel fast enough can, in effect, move forward in time.  The space traveler returns to earth at a date further in the future than the traveler’s own clock or calendar would indicate.  If one is willing to travel fast enough and far enough, one could actually return far into the future.

This effect has been measured in particle accelerators and in the effects of cosmic rays that strike earth’s upper atmosphere.  The high energy cosmic rays that strike high altitude molecules will create exotic particles (muons) which normally decay so quickly that few would reach the earth.  However, some of these particles are moving so fast that time is slowed down to the point where more of them can be detected at a lower altitude.
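The muon story can be made quantitative with a short calculation (a sketch; the muon’s rest-frame lifetime of about 2.2 microseconds is standard, but the speed and altitude used here are illustrative):

```python
import math

c = 3.0e8        # speed of light, m/s
tau = 2.2e-6     # muon mean lifetime at rest, s (standard value)
v = 0.995 * c    # representative muon speed (illustrative)
altitude = 15e3  # illustrative production altitude, m

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)  # Lorentz time-dilation factor
t_trip = altitude / v                        # travel time in the earth frame

# Fraction of muons surviving the trip, with and without time dilation:
frac_no_dilation = math.exp(-t_trip / tau)
frac_dilated = math.exp(-t_trip / (gamma * tau))

print(f"gamma = {gamma:.1f}")
print(f"survival without dilation: {frac_no_dilation:.2e}")
print(f"survival with dilation:    {frac_dilated:.2e}")
```

Without time dilation essentially no muons would survive the trip; with it, a measurable fraction reaches lower altitudes, which is what detectors observe.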

You might wonder: if it’s possible to travel forward in time, is it also possible to travel back in time?  The answer is no.  Backward time travel would require traveling faster than the speed of light, which is prohibited by Einstein’s special theory of relativity and by Susskind’s first fundamental law above.  That is a good thing, because if one could travel back in time, causality could be violated: it would be possible to alter history (think of the movie “Back to the Future”).  It is not even possible to send a specified signal at a speed faster than light.  If one miraculously had a device that could send a signal faster than light, and if that signal could be relayed back to its source, then a report of a future event could be received in the past, thereby providing the option of avoiding the future event!

Even though special relativity makes time relative to each moving observer, it guarantees that time will always move forward, never backward.  It thereby guarantees causality:  causes will always precede effects.  Causality is one of the fundamental guarantees of a rational universe.

From time to time, there are scientific theories or experiments that appear to show that the universe has the possibility of violating causality.  One such possible implication arises in general relativity in the theory of black holes – stars so massive that not even light can escape.  Another implication of a possible violation of causality arises in the quantum theory of entangled particles.  Both of these situations imply that the universe has capabilities that are not made available through any normal activity.  But even if the universe has the ability to violate causality, that ability is not available to its inhabitants and it is still not possible to send any message back into the past.

In fact, the mere possibility of a violation of causality in relation to black hole singularities led Roger Penrose to propose a cosmic censorship hypothesis which states that it is not possible to observe any physical process that will lead to a violation of causality.  The sort of determinism in which time always flows forward is a key property of this universe.  Yet this property is in direct conflict with Leonard Susskind’s assertion that the laws of physics must be able to be reversed.  How will this tension be resolved?

Susskind’s law concerning the conservation of information is based on a fundamental assumption that the laws of physics are unambiguous with regard to the past.  This is sometimes stated as: the laws of physics remain true whether time runs forward or backward.  This feature of scientific theory is necessary if we are to project events backwards to arrive at a beginning point.  The obvious question then is why don’t we ever see time running backward?

In a previous post, I have framed my discussion of rational agency in terms of a contradiction between two concepts.  One idea is that the universe is fundamentally governed by deterministic laws which include a provision for random action. I have called this concept materialism, but its main determining factor is a randomness which accounts for any observational results that are not strictly predictable.  The other concept I have called a rational agent, but its main determining factor is directed, rational action that conforms to the deterministic laws.  I have stressed that these are two extremes and that the truth might lie somewhere in between.  So far in my discussion on science, I have described the Big Bang creation of the universe and special relativity.  Both of these narratives intimately involve matter.  Even if the rules governing matter are rational and rigorous, why does that imply a rational agent?  And how do rational laws result in uncertainty?

All of the theories listed above – from relativity to quantum theory – are models of physical reality.  That is, they describe physical reality using mathematical equations along with constraints or principles that are applied to the analysis of physical reality.  The mathematics associated with each theory is an integral part of the narrative that explains why the theory is true.  Without such a narrative, doubts would immediately set in if there were anomalous observations.  For a well-tested and mathematically consistent theory, there are strong reasons to doubt the anomalous data.

For example, not too long ago some observations suggested that neutrinos could travel faster than light.  In an experiment at CERN near Geneva, Switzerland, neutrinos sent to a detector about 450 miles away were timed arriving about 60 nanoseconds sooner than a light beam would have over the same distance.  If that observation had proved true, it would have been a significant violation of special relativity.  The problem was eventually traced to a GPS synchronization issue between the two clocks used to time the trip, but resolution took several months.  This episode illustrates both the confidence generated by a mathematically consistent, well-tested theory and the provisional nature of any theory.  The provisional nature of scientific evidence is one reason for looking at a gestalt of the evidence rather than relying too much on any one result.
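It is worth putting the size of the claimed anomaly in numbers (a rough sketch; the baseline and timing figures are the approximate ones reported):

```python
# Rough scale of the reported neutrino anomaly (figures approximate).
c = 3.0e8                  # speed of light, m/s
baseline = 450 * 1609.34   # ~450 miles converted to meters
early = 60e-9              # reported early arrival, s

t_light = baseline / c     # light travel time over the baseline
excess = early / t_light   # implied fractional speed excess (v - c)/c

print(f"light travel time: {t_light * 1e3:.2f} ms")
print(f"implied (v - c)/c: {excess:.1e}")
```

A 60-nanosecond discrepancy in a roughly 2.4-millisecond measurement amounts to a few parts in 100,000, which is exactly the precision regime where clock synchronization errors live.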

As mentioned above, the special theory of relativity describes the way that different observers, moving at different speeds, will view the same events.  Mathematical equations define how clocks and rulers change when moving at high speed.  These changes have been observed in particle accelerators: particles accelerated to high speed flatten out, like a pancake, and particle lifetimes increase in accord with special relativity.

The naïve question won’t go away:  how is it that matter in the form of very small particles knows how to obey the laws of special relativity?  Unless one thinks that the real world is a computer simulation (and some actually do think this), how do ‘inert’ particles know how to behave under the laws of physics?  Either the particles are not so ‘inert’ or there is a rational power that enforces the laws of physics, or both.  This is part of my evidence for panpsychism.  Matter and consciousness are intimately bound together.  Matter is knowledge made manifest.  Our mathematical theories hint at the connection.

What our best theories don’t tell us is how physical reality really works, or as Stephen Hawking says (quoted by Jim Holt): “What is it that breathes fire into the equations and makes a universe for them to govern?”  Or as someone once asked, “How does the electron know to follow the rules defined by the equations of magnetic force?”  Holt adds, “How do they [the equations] reach out and make a world?  How do they force events to obey them?”  Our scientific theories are rational models of how the universe works.  As such they are evidence for a rational process at work in the universe.  But they are not the actual power that enforces the physical laws.  That power lies outside our knowledge, but our best theories are pointers or signposts that indicate that the power is real.

My answer to these questions is that it is a rational agent that breathes the fire into the laws of physics and makes out of them a coherent world in which to live.  In order to understand how that happens without recourse to any supernatural power, I will need to describe two kinds of uncertainty or unpredictability that are present in our empirical view of the universe.

The first kind is easily dispensed with.  It is what Leonard Susskind calls experimental “sloppiness.”  I think that is a bit unkind, because what he means is the inability to keep track of all the minute details that are necessary for the prediction of a result.  Think of a drop of ink placed into a glass of water and how it spreads out with apparent randomness.  Theoretically, if we knew the positions and velocities of all the particles, we could predict the spreading.  Not only that, but we could reverse the spreading so that the dispersed ink coalesced into a drop and popped out of the water!  This is what Susskind means by “time reversal” or conservation of information.  But before information can be conserved, we have to know what that information is, and in complex systems, it is impossible to know all the variables that we would need to know.

What is important in the conservation of information is that it be theoretically possible to reconstruct the past, not that it ever be practical to do so.  This type of unpredictability is caused by the observer’s lack of complete knowledge.  But, as far as I can tell, there is no ordering power in lack of knowledge.  So this type of uncertainty is not very interesting.  (But I don’t mean to denigrate such useful scientific tools as stochastic modeling!)
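The in-principle reversibility can be illustrated with a toy model.  In the sketch below, particles drift apart from a common starting point like the ink in the drop; reversing every velocity and running the same number of steps brings each one back (to within floating-point error) to where it began.  This is a cartoon of deterministic reversibility, not a simulation of real diffusion:

```python
import math
import random

# Toy "ink drop": particles with random velocities drift apart,
# then we reverse every velocity and run the same number of steps.
random.seed(1)
n_particles, n_steps, dt = 100, 1000, 0.01

positions = [0.0] * n_particles  # everyone starts at the "drop"
velocities = [random.uniform(-1, 1) for _ in range(n_particles)]

for _ in range(n_steps):         # forward: the drop disperses
    positions = [x + v * dt for x, v in zip(positions, velocities)]

spread = max(positions) - min(positions)

velocities = [-v for v in velocities]  # reverse every velocity exactly
for _ in range(n_steps):               # backward: the drop re-forms
    positions = [x + v * dt for x, v in zip(positions, velocities)]

recovered = all(math.isclose(x, 0.0, abs_tol=1e-9) for x in positions)
print(f"spread after forward run: {spread:.2f}")
print(f"all particles returned:   {recovered}")
```

The catch, as the text notes, is that the reversal requires knowing every velocity exactly; lose track of even one and the reconstruction fails.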

Much more interesting from the perspective of conservation of information is the uncertainty that comes from quantum physics.  This is a completely different kind of uncertainty.  It is an unpredictability caused by the universe’s direct intervention in the outcome of any transfer of energy.  If you’ve heard of Schrödinger’s cat or the “collapse of the wave function,” you already know what this is.

Schrödinger’s cat is the archetypal and somewhat hackneyed example.  A live cat is placed in a box with a poison vial which can be broken by a single well-aimed photon that passes through a half-silvered mirror.  A photon arriving at a half-silvered mirror has a 50% chance of being reflected and a 50% chance of being transmitted.  So there is a 50% chance that the vial will be broken and the cat poisoned and a 50% chance that the cat will live.  The example concludes by speculating that we won’t know whether the cat is alive or dead until we look in the box.  But, more dramatically, the story raises the question of whether the cat exists in a quantum superposed state of half-dead and half-alive!  This is what distinguishes the quantum example from the first type of uncertainty, which is due to lack of complete information: Schrödinger’s cat would be both dead and alive.

We never observe half-dead cats, so most physicists believe that quantum superposition never rises to the level of cats or anything else as big as a cat.  That means that the photon wave function must collapse to a definite state before whole cats get involved.  Most people believe that the cat is either dead or alive before the box is opened.  (Lest anyone be troubled as to why the universe might get involved in choosing life or death for a cat, remember, it was the hypothetical scientist who set up the experiment!)
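A classical simulation makes the distinction sharp.  The sketch below samples the 50/50 outcome many times; it reproduces the statistics of the experiment, yet it has no way to represent the superposed state itself, because each simulated run is definitely one outcome or the other:

```python
import random

# Classical Monte Carlo of the half-silvered-mirror setup.  Each run
# produces a definite outcome (reflected or transmitted), so this
# captures only the statistics -- there is no half-dead/half-alive
# state anywhere in the program, which is exactly the feature that
# separates quantum uncertainty from ordinary ignorance.
random.seed(42)
n_runs = 100_000

cat_survives = sum(1 for _ in range(n_runs) if random.random() < 0.5)
survival_rate = cat_survives / n_runs
print(f"cat survives in {survival_rate:.1%} of runs")
```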

Oddly enough, science has not been able to resolve this deep puzzle about quantum physics.  Lee Smolin, in The Trouble with Physics, calls it one of the “five great problems in theoretical physics.”  Roger Penrose has written at least two books to put forward his theory that there must be some objective reduction in the wave function based on the laws of physics.  The main reason that this problem has resisted solution is that attempts to test when the wave function collapses typically cause the wave function to collapse.  There may be indirect evidence, however.

The indirect evidence to which I am referring is that the universe, by choosing an outcome in every transfer of energy, is actually adding knowledge to an observable process.  We shall need to look for processes at the quantum level that concentrate energy or concentrate information that would not be expected from the law of increasing entropy.  Some of this evidence will be found in my next post dealing with quantum coherence and quantum entanglement.  The remaining evidence will be described under the topic of evolution when I look at biological processes that increase order and concentrate energy.

Supplementing and partially compensating for the lack of direct evidence is the philosophical perspective of objective realism.  There are strong reasons to believe that the wave function collapses even when there is no observer; this is the quantum-physics version of the old conundrum: if a tree falls in the forest and no one hears it, did it really fall?  There are likewise strong reasons to believe in the reality of quantum states and to believe that the universe picks one of the possible quantum outcomes, but the evidence is circumstantial.

If one takes the point of view that the collapse of the wave function is a real event that is initiated by the universe (whether or not there are governing rules) then one has taken the position that the universe chooses one particular outcome among all the possible outcomes that are predicted by quantum theory.  That means that anytime energy is transferred, at least one choice is involved and more often many choices are required.  This is the basis for a fundamental ‘decisionality’ in the universe that underlies all activity.  It is this fundamental decision process that prohibits any backwards movement in time.  It is the reason that we only observe time moving forward.  And ‘decisionality’ is evidence for rational agency.

Leonard Susskind confirms this position by insisting that, in order for time to be reversed, the quantum state must not be disturbed:

“Take the photon. When we run the photon in reverse, does it reappear at its original location, or does the randomness of Quantum Mechanics ruin the conservation of information? The answer is weird: it all depends on whether or not we look at the photon when we intervene. By “look at the photon” I mean check where it is located or in what direction it is moving. If we do look, the final result (after running backward) will be random, and the conservation of information will fail. But if we ignore the location of the photon—do absolutely nothing to determine its position or direction of motion—and just reverse the law, the photon will magically reappear at the original location after the prescribed period of time. In other words, Quantum Mechanics, despite its unpredictability, nevertheless respects the conservation of information.”

In Susskind’s narrative about information conservation, I sense an underlying agreement with Roger Penrose.  It is the decision process associated with the collapse of the wave function that prevents time from running backwards and it is also part of the basic mystery of the law of increasing entropy.  In order for the fundamental ‘decisionality’ of the universe to lead to rational agency, it must demonstrate the ability to perform activities that minimize entropy.  We will see some of that evidence in my next post regarding lasers and superconductivity.

The inescapable conclusion is that the collapse of the wave function does indeed discard information: it concentrates information about the state of the universe; it eliminates possible energy states; it sets a limit on the increase of entropy.  Quantum physics began by solving a profound puzzle about the energy spectrum.  In the nineteenth century, the energy spectrum was considered continuous.  If the energy spectrum were continuous, then there would be an infinite number of energy states and any heated object would radiate infinite energy.  Everyone knew this wasn’t true, but it was Max Planck who postulated in 1900 that radiation energy is quantized in units that now bear his name.  This one simple change limited the number of energy states and reduced the hypothetical infinite energy to a finite energy that was confirmed by experiment.
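Planck’s fix can be seen numerically.  In the sketch below (standard formulas for the classical Rayleigh-Jeans and Planck spectral energy densities, evaluated at room temperature), doubling the frequency cutoff keeps inflating the classical total, the “ultraviolet catastrophe”, while the quantized Planck total has already converged to a finite value:

```python
import math

h = 6.626e-34  # Planck's constant, J*s
k = 1.381e-23  # Boltzmann's constant, J/K
c = 3.0e8      # speed of light, m/s
T = 300.0      # room temperature, K

def rayleigh_jeans(nu):
    # Classical spectral energy density: grows without bound as nu**2.
    return 8 * math.pi * nu**2 * k * T / c**3

def planck(nu):
    # Planck's quantized form: exponentially suppressed at high nu.
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (k * T))

def total_energy(density, nu_max, n=10_000):
    # Crude midpoint-rule integral of the energy density up to nu_max.
    dnu = nu_max / n
    return sum(density((i + 0.5) * dnu) for i in range(n)) * dnu

# Double the frequency cutoff and compare totals:
rj_1, rj_2 = total_energy(rayleigh_jeans, 1e14), total_energy(rayleigh_jeans, 2e14)
pl_1, pl_2 = total_energy(planck, 1e14), total_energy(planck, 2e14)

print(f"classical: {rj_1:.2e} -> {rj_2:.2e} J/m^3 (still growing)")
print(f"Planck:    {pl_1:.2e} -> {pl_2:.2e} J/m^3 (converged)")
```

The classical total multiplies by eight each time the cutoff doubles; the Planck total barely changes, which is the sense in which quantization tamed the infinity.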

If we are to truly understand how the universe works in all of its magnificence, how it is able to produce both deterministic order and adaptable life, then we need to understand any process that limits entropy or has the potential of reducing entropy.  That physical process is quantum physics.

These topics will be explored in my next post.

In the Clear Light of Day

In my previous post in this series, I related how I came to be interested in science through experiences I had while going to school in central Florida.  When I left home to attend college, I was determined to pursue a career in physics.  Away from the influence of family, I tried to maintain a faith in the religion of my childhood, but I could not do it.  I came to believe that the way to know about the world was through science.  I came to disbelieve that religion had anything to offer in terms of my life or well-being.  For me as a college student, religion became a form of superstition that could be replaced by a rational understanding of the world.  It wasn’t until I dropped out of school and confronted the full existential threat of the Vietnam War that I had to face the prospect that reason alone could not save me from the social and political forces over which I had no control.

In the midst of this crisis, I accidentally encountered a form of existential Christianity that I could accept.  This theology provided me with an assurance of self-worth without which I do not believe I could have extricated myself from certain doom.  This existential form of Christianity did not rely on supernatural explanations nor on scripture.  It relied on an understanding of human life to which I could relate given that I was facing a crisis.  It relied on a metaphorical explanation for God and for Christ in which these words referred to real human experience rather than some supernatural power.  This experience was invaluable to me because I do not think I could have moved ahead with a family or career without it.  But it did set up a long-term dilemma that I had to eventually resolve.

The dilemma evolved over time as I pursued a career and raised two children with my wife.  During this period I came to experience a real power at work in my consciousness about life and about myself that led me to be a more moral human being.  I did not perceive this new awareness to have come from within myself because in many cases I was persuaded to do things that did not at first appear to be in my self-interest.  Some people might attribute these experiences to social forces or life experience, but there was a unity of consciousness that came through to me based on the ‘God’ symbol and ‘Christ’ symbol that could not have been due to social forces alone.  The best single word that I can use to describe these experiences is ‘revelation.’  It was revelation based on my own personal situation and about which no one else could know, and it led me to make surprising decisions about my own life.

At any rate, I came to believe that there was a real power at work in human consciousness and that power could not be described as simply metaphorical.  In other words, my dilemma boiled down to this question: Did the word-symbol, ‘God,’ point only to human experience or did it point to a real power at work in the universe that I could experience through consciousness?  This question has led me to seek out a physical and non-supernatural basis for such consciousness that could be discovered through science and reason.

In order for me to convey some sense of my journey, I need to describe the intellectual dilemma raised by that question about the reality of ‘God’.  On one hand, I was taught through my science training that the universe was a collection of blind forces that were indifferent to human life, but out of which human life arose, struggled and prevailed.  On the other hand, my theological training and my experience of revelation convinced me that the universe was friendly to human life and that is the reason human life prevailed.  The key question for me became: how does the universe really work?  Is there some way for the universe itself to directly affect consciousness?  I came to the conclusion that the answer is yes.

Therefore, there are two opposite views about the nature of the universe that are useful for me to discuss.  One extreme is that the universe is a collection of blind forces that are, at best, indifferent to human fate.  The other extreme is that the universe is friendly to human existence.  It should be clear that either extreme can lead to aberrant life choices.  If one is convinced that the universe is hostile to life, paranoia or depression can result. I have, on occasion, experienced these emotions.  At the other extreme, naïve trust can result in a fatal ignorance of reality, and I have, on occasion, been tempted by misplaced trust in my own invulnerability.  Because these two extreme views can be so consequential, it will be helpful to discuss the evidence for either point of view.

In the past, I might have framed this discussion as ‘atheism’ versus ‘theism’, but I now believe this is misguided.  For one thing, raising the question of God at this point brings up a whole host of theological issues that are best left to a discussion based on faith rather than reason.  I prefer now to adhere to reason as much as possible, knowing that the first step towards faith is a reasoned assertion that we are not alone.  So perhaps the best way I can convey the relevant points of view regarding the nature of the universe would be through the concepts of materialism and rational agency.

Materialism is the point of view that the universe is a collection of blind forces that are responsible for everything.  It is the point of view that all of life and consciousness emerge from random interactions of matter with the known physical and chemical forces, namely electromagnetism, gravity, the strong nuclear force, and the weak force.  From a practical perspective, perhaps we can ignore the strong and weak nuclear forces because they play virtually no part in the biochemistry from which life and consciousness emerge.  It is about these constituent forces and associated particles that our best theories inform us.

Rational agency is the view that there is a rational power active in the universe that is or can be friendly to life.  This rational power, by definition, must have some ability to direct events in the universe and this activity must be done through normal physical forces.  There must be no appeal to supernatural powers. This purported rational power is not ‘God,’ but many physicists and scientists have used that word to describe a belief in a rational order to the universe.  I will try to avoid using the word ‘God’ in this way except when quoting or paraphrasing other writers for whom ‘God’ has this meaning.

To make the leap from rational power to God requires a faith in certain additional attributes such as omniscience, omnipotence, omnipresence and omnibenevolence.  Belief in all four attributes results in the theodicy paradox of why there is evil in the world which, again, has to be reconciled by faith.  The theodicy paradox requires that the theological problem of evil be reconciled with the divine attributes of complete power, complete knowledge and complete goodness.  The theodicy paradox has resulted in some well-known renunciations of faith.

Materialism and rational agency are at two extremes hypothesized for the purpose of understanding each one separately.  Randomness is a key attribute of materialism and directed action is a key attribute of rational agency.  In the real world, these two attributes are not mutually exclusive and may both be present to some extent.  It will be helpful to view materialism and rational agency as two concepts in a dialectic from which some synthesis may emerge.  I will need to lay out some significant scientific principles in order to explain how randomness and directed action might both be present in the world.

But, I would like to begin with a discussion of panpsychism which Jim Holt includes in his chapters on mathematical Platonism.  Panpsychism is the view that consciousness has a physical basis and is present in the tiniest units of matter.  I had gradually come to the conclusion that panpsychism is real through my studies of Roger Penrose and his view of consciousness which I describe in my very early posts from 2007.  But I didn’t know it by that word until I read Jim Holt’s book.

So it was with eager anticipation that I began to read of Jim Holt’s treatment of mathematical Platonism which begins with an interview of Roger Penrose.  Sir Roger explains his view that the universe can be understood as three interrelated worlds:  the platonic mathematical world, the physical world and the mental world.  These three worlds are arranged so that each is dependent on one of the others.  There is circularity about this arrangement not unlike M. C. Escher’s waterfall or staircase drawings: the platonic mathematical world is dependent on the mental world which is dependent on the physical world which is dependent on the platonic mathematical world.  In fact, Holt tells us that Penrose’s early work on “impossible objects” was the inspiration for some of Escher’s drawings.

The basic idea of mathematical Platonism is that the world of mathematics is a real world with an independent existence that can be discovered and explored by mathematicians, in much the same way that other empirical sciences work.  Many mathematicians take this point of view.  One of the best arguments for the reality of a mathematical world is the apparent fact that mathematics is indispensable to physics.  Holt brings up the counter-argument that there are ways to completely describe some parts of physics (Newtonian physics) without recourse to math.  I have severe doubts that such an approach could be extended to relativity or quantum physics, and, even if it were, no physicist would use it.  Therefore, it is very hard to escape the notion that math is crucial to doing physics even if one does not believe it has an independent platonic existence.

This puzzle is at the heart of Penrose’s worldview: Why is it that the abstract world of mathematics can agree so well with the empirical world of physics?  This agreement is not acquired easily.  There is almost always significant debate and disagreement over physical theories and the mathematics that represents those theories.  Decades sometimes elapse before the scientific community comes to some agreement about the correct form of a theory.  We tend to think that Albert Einstein, alone, came up with the theory of relativity, but the final form of the theory has benefitted from Einstein’s dialog with many others. But Holt seems to discount any meaning to the agreement between mathematics and experiment, preferring to characterize Sir Roger’s vision as “a spell” that gradually wore off.

I recently have been taking a series of online courses in theoretical physics.  The amount and quality of educational material online is absolutely amazing.  The particular physics series that I am taking is taught by Leonard Susskind, one of the premier senior theoretical physicists in the world.  There are more than 70 individual teaching sessions available at last count, and most sessions are more than 90 minutes long.  I have completed most of the sessions and all of the sessions on Quantum theory and Relativity.  I can say with confidence that any explanation of these theories without mathematics would be superficial.  The math is part of the indispensable narrative that explains why the theories are true.

It is one of the ironies of my life that while I was studying physics at an east coast college, over 2000 miles away, on the west coast, Richard Feynman was developing a new approach to teaching physics.  This new approach was the basis for his lecture series, The Feynman Lectures on Physics, published in 1965. Leonard Susskind was one of Feynman’s friends and colleagues.  As it turned out, I became dissatisfied with the physics curriculum and changed to math.  When I chose a career, I chose software engineering where my math background served me very well.  I spent several years developing the code for yield and present-value calculations for many different types of investments in an investment accounting package.  So this review of theoretical physics has been very helpful considering my detour through software engineering.

Math may be indispensable to physics, but not all of the relevant mathematics can be used to explain empirical results.  When there are multiple solutions to equations, some of the solutions clearly don’t correspond to real-world answers.  The simplest example is when theories require the square root of a value: in almost all cases, it is the positive value of the square root that is meant, because a negative number would have no physical significance.  In other cases, mathematical equations need to be qualified by physical insights that are not always obvious directly from the mathematics.  In special relativity, the Lorentz transformation uses both the speed of light and the velocity of an object.  Mathematically, the velocity could be greater than the speed of light, but that is prohibited by the theory.  In other words, the development of mathematical support for physics involves a complex dialog between what the mathematics is saying and knowledge about the actual real world that the math is intended to describe.  This illustrates Penrose’s point that the real physical world is described by only a small portion of mathematics and that the math only describes a portion of the real physical world.
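A small sketch makes the point concrete.  The Lorentz time-dilation factor γ = 1/√(1 − v²/c²) is well-defined algebra for any v, but for v ≥ c the quantity under the square root is zero or negative and γ is no longer a real, finite number, so the physical theory restricts the formula’s domain:

```python
import math

C = 3.0e8  # speed of light, m/s

def lorentz_gamma(v):
    """Time-dilation factor 1/sqrt(1 - v^2/c^2).

    The algebra is defined for other inputs too, but the physics
    restricts the domain: for v >= c the expression is not a real,
    finite number, so we reject such speeds rather than return a
    complex or infinite value.
    """
    if abs(v) >= C:
        raise ValueError("speeds at or above c are unphysical here")
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

print(lorentz_gamma(0.6 * C))    # modest relativistic speed: gamma = 1.25
print(lorentz_gamma(0.995 * C))  # near light speed: large dilation
```

The guard clause is the physical insight written into the code; the formula alone would happily accept a superluminal velocity and hand back a complex number.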

The mathematics that does correspond to physical reality is extremely important, however.  It allows physicists and engineers to make predictions about real-world results.  I think it is a mistake to discount this predictive ability of mathematics, as Holt seems to do.  I think there are deep underlying reasons why the abstract world of mathematics actually does correspond to physical reality even if those correspondences are hard-won.  I have come to the conclusion that this correspondence is not a coincidence and forms part of the evidence for the physical basis of consciousness called panpsychism. This correspondence is also the reason we think the universe is described by a well-ordered set of rules that we call the laws of physics.

Speaking of evidence, how much evidence constitutes proof?  The standards for proof vary with the importance of the consequences.  Using our legal system as an example, there are at least two standards that illustrate the correlation between the standard of proof and the consequence of proof.  One standard is proof beyond a reasonable doubt; the other is proof based on the preponderance of the evidence.  I first learned about the different standards of evidence in criminal and civil trials from the O. J. Simpson case.

For those who need to be reminded, in 1994 the retired football player and actor O. J. Simpson was charged with the murders of his ex-wife and another person.  He was brought to trial and, in October 1995, acquitted.  In 1996, the families of the victims brought a civil suit against Simpson for wrongful death.  The jury in the civil suit found Simpson liable and awarded the families $33.5 million in damages.

The standard of proof in a criminal case is certainty “beyond a reasonable doubt,” whereas the standard in a civil case is the “preponderance of the evidence.”  Part of the reason for the higher standard in a criminal case is that the consequence of a guilty verdict is significantly more severe than in a civil case.

The same idea can be applied to evidence for other questions.  On the question of whether our universe is better explained by materialism alone versus rational agency, how important are the consequences?  I came to the conclusion that though the consequences are very important, they probably don’t warrant absolute proof beyond any doubt.  Therefore, I think a standard of proof based on the preponderance of the evidence is appropriate, particularly considering that a reasoned acceptance of rational agency can be a first step toward faith.  Typically, that standard means there is a greater than 50% chance that the evidence supports one side of the question.  For me, the evidence far exceeds 50% confidence that rational agency is the better explanation for the nature of the universe.

The weight of the evidence is something on which reasonable people might disagree.  But disagreement does not mean that one party is being irrational or stubborn, and it is not a reason to condemn another person.  I don’t know of any examples where others have weighed the evidence and the standards of proof for rational agency in the universe, but I do know how such weighing has affected some people on the question of God.  One’s perception of the evidence will vary greatly based on personal experience.

In one case, Rabbi David Wolpe inscribed his 2008 book, Why Faith Matters, to his children: “For Eliana and Samara: All the proof I need.”  A Rabbi’s son himself, Wolpe was raised in a traditional Jewish faith.  As a teenager, he lost that faith over the problem of evil (the problem that theodicy attempts to answer) and became a devotee of Bertrand Russell, the noted atheist.  Wolpe describes this period in his life:  “Life was suddenly murky, a place of night and fog.  Human life was an accident and everything that happened was a simple product of blind forces.  I longed for help in navigating this new terrain.  How does one live in a chaotic world?  I found a path in the words of an English philosopher.”

In time, Wolpe became disenchanted with Russell primarily because Russell’s personal life was such a mess.  If Russell’s ideas were correct, why couldn’t he live a life that Wolpe wanted to emulate?  Wolpe tells us: “Russell proved in the end to be an unexpectedly useful guide.  The atheistic philosopher with his corrosive wit taught me to question, constantly and repeatedly.  What Russell did not teach was that questions could themselves lead to faith. A brittle faith fears questions; a robust faith welcomes them.”

Wolpe’s journey from faith to unfaith and back happened quickly enough for him to choose a career as a Rabbi.  At the opposite end of the spectrum is Antony Flew, whose journey back to faith took almost his entire life.  The son of a British Methodist minister, Flew was an outspoken critic of theism for over fifty years.  In the early 2000s, Flew gradually admitted that he had changed his belief to a form of Deism.  My reading of Flew’s book, There Is a God: How the World’s Most Notorious Atheist Changed His Mind, leads me to conclude that Flew’s Deism is very close to what I am calling a Rational Agent.  While Flew’s conversion spawned much controversy, it is notable for his insistence that he followed the evidence wherever it led, as he had done his entire life.

Let me now return to the physical basis for consciousness.  Holt’s summary of panpsychism is helpful:

The doctrine that consciousness pervades reality is called “panpsychism.” It seems to harken back to primitive superstitions like animism—the belief that trees and brooks harbor spirits. Yet it has attracted quite a bit of interest among contemporary philosophers. A few decades ago, Thomas Nagel showed that panpsychism, for all its apparent daftness, is an inescapable consequence of some quite reasonable premises. Our brains consist of material particles. These particles, in certain arrangements, produce subjective thoughts and feelings. Physical properties alone cannot account for subjectivity. (How could the ineffable experience of tasting a strawberry ever arise from the equations of physics?) Now, the properties of a complex system like the brain don’t just pop into existence from nowhere; they must derive from the properties of that system’s ultimate constituents. Those ultimate constituents must therefore have subjective features themselves—features that, in the right combinations, add up to our inner thoughts and feelings. But the electrons, protons, and neutrons making up our brains are no different from those making up the rest of the world. So the entire universe must consist of little bits of consciousness.

Another contemporary thinker who takes panpsychism seriously is the Australian philosopher David Chalmers. What attracts Chalmers to panpsychism is that it promises to solve two metaphysical problems for the price of one: the problem of stuff and the problem of consciousness. Not only does panpsychism furnish the basic stuff—mind-stuff—that might flesh out the purely structural world described by physics. It also explains why that otherwise gray physical world is bursting with Technicolor consciousness. Consciousness didn’t mysteriously “emerge” in the universe when certain particles of matter chanced to come into the right arrangement; rather, it’s been around from the very beginning, because those particles themselves are bits of consciousness. A single ontology thus underlies the subjective-information states in our minds and the objective-information states of the physical world—whence Chalmers’s slogan: “Experience is information from the inside; physics is information from the outside.”

Panpsychism is one of the philosophical presuppositions I needed to support the evidence for rational agency.  A second presupposition is objective realism:  the world is a real, independent phenomenon that exists whether I exist or not.  A third is that the universe is an ordered unity, understandable by science and reason.  There is empirical evidence for panpsychism and for the proposition that the universe is an ordered unity, but objective realism must simply be decided.  There cannot be much of a discussion about evidence if one takes the position that all evidence is subjective.

My next segment will begin a series on the evidence from physics and cosmology.