The Evidence from Evolution and Biology (Part 1)

My previous posts have focused on the evidence for a rational agent inherent in the laws of physics.  There has been an implicit assumption that the laws of physics are rigorously deterministic.  But clearly life is not deterministic, so it was necessary for me to point to some possible feature of the laws of physics that allowed for the wild variation and unpredictability of life.  I will summarize my thought process as follows:

  1. The universe is ordered by deterministic laws and forces such as gravity and electromagnetism.  There are also non-deterministic laws such as quantum theory.  One of the laws that combines both features is the law of increasing entropy.  Entropy always increases throughout the universe as a whole, but it is allowed to decrease locally.  Since quantum theory ultimately controls all interactions in the universe, all forces are non-deterministic at the quantum level.  (The only possible exception is gravitation, which has not yet been unified with quantum theory.)
  2. The deterministic laws (electromagnetism, etc.), by themselves, cannot account for life and consciousness.  There must be another factor in the fundamental laws of physics that allows living organisms to lower entropy.  The process of lowering entropy is essential to life because it concentrates energy for future use and organizes the genome for transmission to future generations.
  3. That factor in the laws of physics is the collapse of the wave function in quantum physics, which I will refer to as decoherence.  Decoherence is absolutely necessary for any measurable energy transfer.  In decoherence, the universe actually chooses an outcome for every transfer of energy.  This choosing, or decisionality, on the part of the universe is what I have called rational agency, and it is responsible for the forward direction of time.
  4. This decisionality on the part of the universe is always mixed up with randomness because we are prohibited from knowing precisely all the states of matter, particularly the states of entanglement between particles.  This is a consequence of a kind of cosmic censorship hypothesis.  The Heisenberg uncertainty principle is one such limitation on our knowledge.
  5. There can be no ordering principle or lowering of entropy based on true randomness.  True randomness, by definition, is maximum entropy.  In all of physics the only candidate for non-random yet non-deterministic action is decoherence.
  6. Therefore, this choice by the universe is directed choice.  It is a rational choosing based on the laws of physics and contains within it the possibility of lowering entropy.  It is the physical undergirding of all life and consciousness.  It is the physical action responsible for the forward direction of time.

Essentially, I think that the laws of physics favor life or are conducive to life.  In general, nature prefers to disperse energy; therefore there must be physical explanations for how energy gets concentrated.  Just as there is an explanation for how nature concentrates energy for lightning, there must also be an explanation for how living organisms concentrate energy and lower entropy.   These six steps summarize my explanation.  In this series on evolution and biology, I will lay out the case for the laws of physics favoring life as opposed to the case for life adapting to the laws of physics.  Both dynamics occur, but only laws conducive to life can create life from inanimate matter.

I don’t consider this logic highly dependent on particular experimental results.  Scientific theories are always provisional; they can be superseded by better theories or more accurate results.  My reasoning is broadly based on the general properties of physical laws.  One portion of the laws is rigorously deterministic and uses mathematics to make predictions about future events.  Another portion deals with the presence of uncertainty in the universe.  I fully expect the laws of physics to be revised and improved, but I don’t expect that these general characteristics will be much altered.  If string theory is proved true, that would not change my basic logic, but my perspective might need to accommodate rational agency operating in a multiverse scenario.  String theory, for all its promise, does not yet make any testable predictions.

Along with the laws of physics, I view the theory of evolution as a valid scientific theory.  It is a theory based on the idea that all living organisms adapt to their specific environment and pass along adaptive traits through procreation.  Darwin’s concept of “natural selection” was devised in contradistinction to “artificial selection,” whereby human breeders selected the best mates in order to raise generations of specifically adapted animals.

Biology is a complex science.  For someone like me, who has spent a major part of his life focused on math and the physical sciences, the main shock of encountering biology is the sheer astronomical diversity of life.  Last year, I took one of the online courses offered by UC Berkeley.  It was the basic undergraduate course for biology majors, and it was something I needed because my previous biology class must have been in high school.  It was just as well that I didn’t have much previous instruction, because so much has changed between then and now.  The sheer volume of information is astounding.  I found myself wondering how on earth anyone organizes this much data.  In fact, it took three teachers to cover the material.  One instructor had a background in molecular biology, one was a specialist in genetics, and one came from a medical background.  I had the distinct feeling that complete mastery was beyond the capability of any one individual.  But I am still learning, and I do have some observations based on my perspective from the physical sciences.

One observation concerns the principle of emergence.  Emergence is the concept that complex living organisms are able to exhibit new properties and traits by virtue of their complexity and organization.  The example from the textbook for the UC Berkeley class is one that interests me:  “For example, although photosynthesis occurs in an intact chloroplast, it will not take place in a disorganized test-tube mixture of chlorophyll and other chloroplast molecules.  Photosynthesis requires a specific organization of these molecules in the chloroplast.”  The text is saying that photosynthesis is an emergent phenomenon.  That is fine.  That helps organize knowledge, but for someone who wants to know how things work, there is a further question:  How does the particular organization contribute to function?  What are the properties of the constituent parts that enable the composite function to emerge?  Too often, emergence is used simply as a label for a new function that can’t be explained any further.  When that happens, it becomes a kind of false knowledge: a category without explanatory power.

To take another example, water is composed of two room-temperature gases: hydrogen and oxygen.  I suppose you could say the emergent property of water is its liquidity.  But, with water, one can trace its properties to the molecular properties of hydrogen and oxygen and the strong bond between them as well as the weak bond between water molecules.  These particular molecular properties can also be used to explain surface tension, freezing and boiling.  My expectation is that biology will someday be explained in terms of molecular dynamics.  That day is a long way into the future.

Biological scientists are answering these kinds of questions, and it is painstaking work.  Demonstrating how biological molecules function is slow and tedious, but I suppose that is the part of biology that most interests me.  I have two main areas of interest in the biological sciences.  One is photosynthesis, because of its use of quantum coherence for efficient transmission of sunlight energy to the “reaction center” where chemical food production begins.  The other is the biological molecule tubulin.

Tubulin is a protein molecule that assembles into microtubules.  Microtubules are long, narrow, hollow tubes that play an amazing variety of roles in living cells.  There is a natural tendency for microtubules to assemble themselves because of the positive and negative polarity on the tubulin molecule.  Once assembled, microtubules play key roles in biological cell functions.  They play an essential role during mitosis (cell division) by grabbing hold of the chromosomes and causing the genome to precisely separate toward opposite ends of the cell.  Microtubules are part of the cell’s cytoskeleton; they give shape and form to the cell.  In plants, microtubules guide the alignment of cellulose and direct plant growth at the cellular level.

Microtubules form the infrastructure that transports molecules from outside the cell to the inside and vice versa.  Motor proteins “walk” vesicles containing molecules back and forth along microtubules to their destination.  For example, pancreas cells that make insulin transport the insulin from inside the cell to the outside by this method.  In addition, microtubules are used for the cell’s interaction with its environment.  They form some types of flagella and cilia for locomotion of the cell or movement of particles in the cell’s environment.  For example, the human sperm cell is propelled by the action of a flagellum made up of microtubules.

In short, microtubules are a very versatile cellular component.  Furthermore, they are an essential part of nerve cells.  Tubulin, the protein that forms microtubules, has a very high density in brain tissue.  That has led some researchers to propose a key role for microtubules in brain activity and consciousness.  As long, hollow, round tubes, microtubules might be well suited for quantum coherence.  There has been some research along these lines.

Tubulin is the protein building block of microtubules, and it or similar proteins are probably very ancient, perhaps going back to the beginning of life.  One source specified that all cells have such proteins, except blue-green algae, also known as cyanobacteria.  However, cyanobacteria have a tubulin-like molecule (a homologue) called “FtsZ.”  An interesting connection between my two main interests is that cyanobacteria use photosynthesis for harvesting energy from sunlight.  It is the light-harvesting complex from cyanobacteria that is used in the experiments testing quantum coherence.

Cyanobacteria are among the oldest life forms on Earth, perhaps as old as 3.5 billion years.  It would be a very interesting development if microtubules or microtubule-like structures go back to the beginning of life and if it can be demonstrated that quantum coherence played a key role in efficient energy transmission in these structures.  Those are two very big “ifs” and most researchers are very cautious about any evidence pointing towards quantum coherence in biological molecules.  But I remember some fairly incautious statements about the beginning of life from many years ago.

I think it was probably in high school chemistry class that the teacher, one day, covered the Miller-Urey experiment.  This experiment was conducted in 1952 and involved sending a spark of electricity (to simulate lightning) through a mix of chemicals assumed to represent Earth’s primitive atmosphere.  The result was a mixture of amino acids and other organic compounds, essential building blocks of life.  Stanley Miller and Harold Urey had demonstrated that organic compounds necessary for life could easily be formed from plausible atmospheric ingredients, such as water, methane, ammonia and hydrogen.  Not only that, but the teacher thought that we would soon be able to synthesize life in the test tube.  Well, that was over 50 years ago, and the synthesis of life seems as elusive as ever.  Science doesn’t yet know what makes biochemicals spring to life.

The mystery of the beginning of life notwithstanding, the theory of evolution brought incredible organizing power to biology.  Darwin’s “natural selection” brought explanatory power to the huge diversity of species on Earth.  In the mid-twentieth century, the discovery of DNA and the genetic code brought into the evolutionary system a mechanism for adaptation.  This has led to what has been called the “central dogma” of molecular biology: DNA makes RNA, which makes proteins.  DNA contains coded information that is used to create a coded sequence of RNA, which in turn is used to create a sequence of amino acids that make up proteins.  The next step, which isn’t explicitly stated and is poorly understood, is that proteins must fold into a specific three-dimensional form in order to be useful.  What is startling to me, coming from a computer programming background, is that the coded sequence of DNA contains just four characters representing four small molecules: A (adenine), C (cytosine), G (guanine) and T (thymine).

These four characters are read in groups of three, which gives 64 possible “words,” or codons, in the genetic code (4 x 4 x 4).  Far fewer than 64 distinct meanings are actually needed, because only 20 amino acids are used to make all the known proteins.  Most of the 64 DNA sequences specify the same amino acid as another sequence, so there is built-in redundancy.  Only tryptophan and methionine rely on a single coded sequence; all the others have at least two sets of DNA codes, and some (serine, leucine and arginine) have six.  It seems possible to me that different evolutionary branches developed a reliance on different DNA sequences for the amino acids.  For someone with a data processing background, the DNA codes are reminiscent of a computer system that has been copied and modified to meet different objectives – even to the extent that duplicate codes are mainly sequential (e.g., leucine: TTA, TTG, CTT, CTC, CTA, CTG).  From a “systems design” perspective it would seem that at one time there was provision for expansion, with 64 codes available for the 20 amino acids, but after evolutionary modifications all 64 codes are now in use.  I suppose that if there developed a need for a 21st amino acid, one of the existing redundant codes would be used.  The whole process is very complex, but the same basic DNA, RNA and amino acids are found in all life forms on Earth.  This amazing genetic code is essentially universal to life as we know it.  (There are some exceptions.  The Paramecium uses the “stop” codons UAG and UAA to code for glutamine.)
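To make the redundancy concrete, here is a small Python sketch of my own (not from the textbook) that builds the standard codon table and counts how many codons map to each amino acid:

```python
from collections import Counter

# Build the standard genetic code (DNA codons, coding-strand convention).
bases = "TCAG"
codons = [a + b + c for a in bases for b in bases for c in bases]   # 64 codons
# One-letter amino acid symbols in the conventional TCAG order; '*' marks a stop codon.
amino_acids = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codon_table = dict(zip(codons, amino_acids))

# Count how many codons specify each amino acid (the built-in redundancy).
degeneracy = Counter(codon_table.values())
print(sorted(degeneracy.items(), key=lambda kv: kv[1]))
# -> W (tryptophan) and M (methionine) have 1 codon; '*' (stop) has 3;
#    L, S and R (leucine, serine, arginine) have 6; everything else has 2-4.

# The six leucine codons, showing how the duplicates cluster:
print(sorted(c for c, aa in codon_table.items() if aa == "L"))
# -> ['CTA', 'CTC', 'CTG', 'CTT', 'TTA', 'TTG']
```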

“Natural selection” coupled with the genetic code has given enormous explanatory power to evolutionary biology.  But like all theories, it is a conceptual model of the physical processes that occur.  There remain many questions, such as how life began.  And then there’s the question asked by Stephen Hawking, “What is it that breathes fire into the equations and makes a universe for them to govern?”  What is it that actually makes the world act in a way that is consistent with the conceptual model?  Readers of my previous posts will suspect that my answer is similar to what I’ve written before: there is a decisional power at work in the universe that breathes life into biological molecules.  It is this decisionality that ensures that time flows forward and therefore gives evolution direction.

Some of the evidence for my answer resides in the evidence for directionality in evolution.  But, first of all, the evolutionary model is a rational model.  Even more amazing is that the implementation of the genetic code is an abstract, rational system!  Who would have thought that nature would arrive at the very rational system of using a three-character code to specify sequences of the 20 amino acids that comprise the proteins for all life?  Let me be direct: the genetic code is information.  The central dogma of molecular biology is an information processing system.  The end results are proteins and decisional governance of the cell.  This is exactly the type of system one might expect from a rational agent acting through nature.

As to directionality, the most immediate evidence is the adaptability of evolutionary change.  Evolutionary change produces living organisms that get better at adapting to their environment.  Not only are more advanced organisms better adapted, but they are better at adapting!  For higher life forms like mammals, and particularly humans, this implies a higher consciousness.  Therefore, the longer-range implication of evolution is higher consciousness.  I think this trend is evident from the archeological and historical record.  For almost 4 billion years, life has survived under the constant threat of a cosmic catastrophe such as the one that brought an end to the dinosaurs.  Today, we are beginning to track the asteroids and comets that have the potential to cause another such cataclysm.  That would not be possible without some sort of advanced consciousness.  In a strange sort of self-reflection, adaptation has become adaptability, and adaptability requires a higher consciousness.  This implies a robust moral development as well, but that is beyond what I can cover in these posts on science and reason.

But a rational agent is not the only explanation.  The alternative view is that evolution is the byproduct of random mutation.  First of all, I don’t think randomness is a good scientific answer.  Science succeeds when it finds and explains rational patterns.  To say that a process is random is to admit defeat from a scientific point of view.  The second thing I would say is that when someone refers to random mutation, it is unclear what type of randomness they are referring to: lack-of-knowledge randomness or the genuine non-determinism of quantum physics.  The common view of evolution is that it requires generations of offspring in order for nature to select the best attributes and pass those on to future generations.  Is evolution inherently random because some individuals show up at the wrong place at the wrong time or, alternatively, at the right place at the right time?  Is it random because a cosmic ray has altered the genome?  Is it random because we can’t predict how our children will turn out?  The most likely reason a mutation might be random is a transcription or copying error.  But modern cells have evolved elaborate safeguards against such copying errors.

It turns out that when evolutionists speak of “random mutation,” they mean something specific.  My biology textbook (on Kindle!) only uses the phrase once in over 1000 pages of small-font text, and that one occurrence refers to copies of genes that have lost functionality (i.e., the gene has been degraded) over time.  The textbook does not refer to new functionality as “random mutation,” but does use the phrase “accidents during meiosis” (cell division in reproductive cells).  This phrase, too, has a specific meaning that a plain-English reading might not suggest.  In general, the textbook prefers to state evidence positively, in terms of what we know rather than in terms of what we don’t know.  As to genetic mutation, it refers to various mechanisms for altering the genome, such as transposition of small portions of the DNA from one location to another.

One internet site was particularly helpful in tracking down the origin of the phrase “random mutation.”  This site is associated with the UC Museum of Paleontology (at Berkeley) and is a teaching guide for evolution named “Evolution 101.”  The source is very explicit:

Mutations are random.
Mutations can be beneficial, neutral, or harmful for the organism, but mutations do not “try” to supply what the organism “needs.” In this respect, mutations are random—whether a particular mutation happens or not is unrelated to how useful that mutation would be.

Behind this brief description is a debate that began with Darwin.  Prior to Darwin, the French biologist Jean-Baptiste Lamarck held the view that (1) individuals acquire traits that they need and lose traits that they don’t need, and (2) individuals inherit the acquired traits of their ancestors.  He gave as examples the giraffe, whose neck was assumed to have stretched in order to reach higher leaves in trees, and blacksmiths, whose strong arms appeared to have been inherited by their sons.  But these ideas have been debunked.

When Darwin published Origin of Species in 1859, he gave some credibility to Lamarck’s view, but later evolutionists elevated Lamarck’s idea to a major theme of evolution.  By the mid-twentieth century, biologists had become adept at doing experiments with bacteria.  In 1943, two biologists, Max Delbrück and Salvador Luria, wanted to test Lamarck’s hypothesis for bacteria, which were thought to be the more likely organisms to use Lamarckian adaptation.  The Luria-Delbrück experiment tested whether bacteria exposed to a lethal virus would develop any adaptive mutation and whether that mutation would be acquired prior to exposure or not.  Their experiment showed conclusively that some bacteria had acquired an adaptive mutation prior to exposure, as did subsequent experiments by others, including Esther and Joshua Lederberg, who are referenced on the “Evolution 101” website.

So, based on experiments, what evolutionists mean when they say that mutations are random is that some adaptive mutations occur before any exposure to infectious agents in a test.  The mutations do not occur because of exposure.  Now this is a somewhat contentious finding because it defies the rather commonsense view that mutations happen for a reason, most likely one related to some inoculation or exposure to an agent.  In other words, either the finding appears to violate causality, or the explanation is an admission of ignorance about the cause of adaptation.
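The statistical logic behind that conclusion can be sketched in a few lines of Python.  This is my own toy model with made-up growth and mutation numbers, not the actual experimental protocol: if mutations arise spontaneously during growth, a few cultures inherit huge “jackpot” clones and the mutant counts fluctuate wildly from culture to culture; if mutations are induced only at exposure, every cell has the same small chance and the counts cluster tightly around the mean.

```python
import numpy as np

rng = np.random.default_rng(1)

def mutants_in_culture(generations=25, mu=1e-7, induced=False):
    """Grow one culture from a single cell; return its final count of resistant mutants."""
    sensitive, resistant = 1, 0
    for _ in range(generations):
        sensitive *= 2
        resistant *= 2
        if not induced:
            # Spontaneous mutation during growth: a mutation in an early generation
            # is inherited by all of that cell's descendants (a "jackpot" clone).
            new = rng.binomial(sensitive, mu)
            sensitive -= new
            resistant += new
    if induced:
        # Mutation only at the moment of exposure: each cell independently mutates with
        # a small probability (scaled here so the two scenarios have comparable means).
        resistant = rng.binomial(sensitive, mu * generations)
    return resistant

for label, induced in [("spontaneous (pre-exposure)", False), ("induced at exposure", True)]:
    counts = [mutants_in_culture(induced=induced) for _ in range(200)]
    mean, var = np.mean(counts), np.var(counts)
    print(f"{label:28s} mean={mean:7.1f}  variance={var:11.1f}  variance/mean={var/mean:8.1f}")
# The induced scenario gives variance/mean near 1 (Poisson-like); the spontaneous
# scenario gives a much larger ratio -- the fluctuation pattern Luria and Delbrück saw.
```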

I take the view that the finding is an admission of ignorance.  We really don’t know what might have caused an adaptive mutation to occur before exposure.  The real scientific question is what causes the mutation, and biologists prefer to focus on what we can discover.  One such biologist is James A. Shapiro, professor of microbiology at the University of Chicago.  He characterizes the association of “random mutation” with the Luria-Delbrück experiment as follows:

One has to be careful with the word “proof” in science. I always said that conventional evolutionists were hanging a very heavy coat on a very thin peg in the way they cited Luria and Delbrück. The peg broke in the first decade of this century.

Professor Shapiro goes on to write about mechanisms that bacteria have for “remembering” previous exposure to infectious agents.  Those mechanisms include modification of the bacterial DNA.  He states that Delbrück and Luria would have discovered this if they had not used a virus that was invariably lethal and if they had had the tools for DNA analysis.  The announcement of the DNA structure would take place in 1953, ten years after the Luria-Delbrück experiment, and the tools for analysis are still being developed.  It should not be too big a surprise that bacteria have elaborate mechanisms for DNA sharing and modification.  The human immune response to invasive agents also includes the recording of information in the DNA of certain white blood cells (lymphocytes).  You can read Shapiro’s entire article here: http://www.huffingtonpost.com/james-a-shapiro/epigenetics-ii-cellular-m_b_1668820.html.

It is no longer fashionable to speak of Lamarckian inheritance, but the field of epigenetics is devoted to adaptation by means other than changes to the DNA sequence.  My own view is that the amount of debate and discussion on the issue of “soft” inheritance points to a conclusion that this is unsettled science.  Microbiologists today have many more tools and techniques for answering questions about the causes of adaptive inheritance than they did sixty years ago, and I suspect that they would prefer to look at changes to the DNA and other molecules rather than make statistical inferences as Luria and Delbrück did.  Current research of the type that James Shapiro is doing is demonstrating specific causes for adaptation.

The Evidence from Physics and Cosmology (Part 3)

Quantum Uncertainty

So far in my discussion of the scientific evidence for a rational power at work in the universe, I have relied heavily on the orderliness inherent in the mathematical laws of physics that model nature’s governance of the world.  I have written about the orderly application of the laws of atomic physics during the creation of the universe.  I have written about the remarkable correlation between the abstractly ordered mathematical world of theoretical physics and the empirical world of observation.  I have presented the randomness that we observe as a form of incomplete knowledge.  Though I didn’t emphasize it, that incomplete knowledge is one of the fundamental laws.  It is called the Heisenberg uncertainty principle and was itemized as Leonard Susskind’s third universal law in my previous post.

But there is also another kind of uncertainty.  This second kind of uncertainty is based on nature’s involvement in every transfer of energy that takes place in the universe.  Quantum physics is one of Roger Penrose’s ‘SUPERB’ theories, and it calls for the orderly evolution of quantum states until some final measurable state is chosen by the universe.  The mathematics is complicated, but precise.  The theory has mathematically confirmed the measured magnetic moment of the electron to about one part in one billion.  The magnetic moment measures the reaction of an electron’s magnetic field (caused by its spin) to an external magnetic field.  This effect will cause the electron’s spin to precess, like a spinning toy top.  The precision of the correspondence between theory and experiment is like measuring the distance from New York to Los Angeles to within the width of a human hair!

Even though quantum theory is based on superposed quantum states (the idea that a particle can be in multiple places at once, for example), we have good reason to believe that these superposed states never rise to the level of large objects (for example, Schrödinger’s cat).  This implies that some decision process is taking place in what has traditionally been called “the collapse of the wave function”: the superposed states suddenly jump to a state that conforms to the measurement being made, based on the probabilities associated with the superposed states.  And this happens even if no measurement is being made in the scientific sense.  In the Schrödinger’s cat example, the very hypothetical superposed states of alive-cat and dead-cat each carried a 50% probability.  The question that I will explore in this part is to what extent that decision process can be considered random and to what extent it can be considered coherent.

First of all, we can dispense with one kind of randomness quickly.  This is the randomness due to incomplete knowledge and, as mentioned above, all of our knowledge is incomplete due to the uncertainty principle.  There will be, in any experimental situation, quantum states in the environment that the calculations cannot consider, either because they are too numerous or because we are theoretically limited in what states can be measured accurately.  I see no power in this type of randomness to create the kind of order that we observe in the universe.

One might think that this would be the end of the discussion, but there are natural quantum processes that demonstrate coherence and order.  We are aware of these powerful natural processes because of two scientific discoveries.  Let’s see if those discoveries will give us some clue as to how to proceed.

One of the surprising discoveries of the twentieth century based on quantum physics was the laser.  Today, lasers are used in many everyday applications.  They are used to record and play back compact discs of various types; they are used to read bar codes on products purchased at retail stores; they are used to measure distance and speed; they are used as pointing devices, surgical instruments and even as potential military weapons.

The surprising property of lasers on which I want to focus is that they produce coherent light; that is, light of a single color or frequency with all light particles (photons) in synchronization with each other.  This is highly ordered light, with entropy near zero.  The ability of lasers to produce highly coherent light is due to a special quantum physics property that only bosons possess.  Light particles are one of a number of elementary particles called bosons.  You may have heard of the Higgs boson, for which evidence has recently been discovered at the Large Hadron Collider (LHC) near Geneva, Switzerland.  All other ordinary matter – matter that makes up virtually all of the stuff necessary for life, for example electrons, protons and neutrons – is made of fermions.

Aside from the major distinction between light and matter, there is another very important difference between bosons and fermions.  The distinction is related to another fundamental law of physics called the Pauli Exclusion Principle.  This principle states that two fermions cannot share the same quantum state.  Without this law, ordinary chemistry would not be possible; life would not be possible.  The Pauli Exclusion Principle is the explanation for why electrons exist in different orbits in atoms.  Because electrons are in different orbits, the elements have different chemical properties, mostly due to the electrons that are in the outermost orbit.  This is why two atoms of hydrogen combine with one atom of oxygen to form water.  Hydrogen has one electron and one open slot in its outer orbit whereas oxygen has two open slots available in its outer orbit.  The two electrons from the two hydrogen atoms exactly satisfy the one oxygen atom’s tendency to fill up the outer orbit.  Water is highly stable with both hydrogen and oxygen sharing electrons to fill each other’s open slots for electrons.

Light particles (photons), like all bosons, do not obey the Pauli Exclusion Principle, and they can share the same quantum state.  And that is why lasers are possible.  Lasers work because photons actually prefer to be in the same quantum state as other photons.  It is very important that the photons are produced synchronously.  If photons are produced by heat, for example in an incandescent light bulb, they are produced at different energy levels.  Different energy levels mean different colors and different frequencies – hence incoherent light.  Lasers use partially silvered mirrors to reflect photons back and forth through a suitable material, where stimulated emission produces new photons in step with those already present, until the emitted light is synchronized.  The mirrors allow time for that synchronization to build up.

If energy transmitted by light particles can be synchronized, what about energy transmitted through matter?  Since fermions are prohibited from being in a synchronized state, they cannot transmit coherent energy.  Or can they?  Consider the phenomenon of superconductivity.  Superconductivity does not yet have a household application, but it is very useful in certain areas where very strong and concentrated magnetic fields are needed.  Superconductivity is the free flow of electricity through a conductor which is usually cooled to a very low temperature.  Electricity flow is accomplished by electrons (fermions).  So how do low temperatures produce coherent electron flow?

The beginning of the answer is that electrons have a property called spin.  Spin is the property responsible for magnetism in permanent magnets.  An iron atom has a partly filled inner shell containing several unpaired electrons, and those unpaired electrons are allowed to have the same spin.  The spins of the paired electrons in the filled shells cancel each other, leaving the net spin effect to the unpaired electrons.  If iron is placed in a magnetic field, the spins of those unpaired electrons throughout the material will align, and the iron will have a net magnetic field.  Iron will retain the magnetism because of its crystalline structure.  Heating will generally cause iron to lose its magnetism as strong thermal vibration randomizes the aligned spins.

It is one of those strange quantum physics rules that measured spin has only two values.  Let’s say we want to measure electron spin in the “up” direction.  The answer will always be either yes or no.  That is, the spin will always be up or down.  This will be true no matter what actual direction we call “up.”  If we first measure spin in the up-down direction and separate all spin up electrons from all spin down electrons, we can perform another measurement on, say, the spin up electrons.  If we measure them again for spin up then the answer will always be up.  100% of the time the second measurement will agree with the first.  But if we measure the spin in the left-right direction, then we find that half will have spin left and half will have spin right.   This strange property of spin is shared by all fermions.
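Those statistics can be captured in a tiny Python sketch of my own, using the standard quantum rule that the probability of getting “up” along a measurement axis tilted by an angle θ from the prepared spin direction is cos²(θ/2):

```python
import numpy as np

rng = np.random.default_rng(0)

def fraction_up(prepared_angle, axis_angle, n=100_000):
    """Prepare n spin-1/2 particles pointing along prepared_angle (radians) and
    measure each along axis_angle.  Quantum rule: P(up) = cos^2(angle_between / 2)."""
    p_up = np.cos((axis_angle - prepared_angle) / 2) ** 2
    return np.mean(rng.random(n) < p_up)

up = 0.0                              # particles prepared spin-up along the chosen axis
print(fraction_up(up, up))            # re-measure along the same axis     -> ~1.00 (always up)
print(fraction_up(up, np.pi / 2))     # measure along a perpendicular axis -> ~0.50 (half and half)
```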

Bosons, on the other hand, do not share this spin property.  Fermions have what is called “half-integer spin” and bosons have “integer spin.”  The measured spin of a fermion comes in odd multiples of one-half, whereas boson spin comes in whole numbers.  Photons, in particular, have spin one.  They do not divide into spin up and spin down.  Light can be polarized, but that is another story for another time.  So perhaps a way to cause electrons (fermions) to behave like bosons (light) is to cancel out their spin property.

That is in fact what happens in the phenomenon called superconductivity.  In the right material and at very cold temperatures, electrons can pair up so that one spin-up electron associates with a spin-down electron giving an overall spin of zero.  The electron pair can act like a boson and flow coherently and without resistance through a conductor.  The conductor must remain cold enough to prevent thermal molecular motion from splitting up the electron pair.  These pairs of electrons are called “Cooper pairs.”

As long as I’m writing about coherent light and electrons, I should mention one other interesting phenomenon: lasers can be used to cool atoms to a very low temperature.  Thus, the low entropy of the laser can be used to reduce the entropy of matter.  This does not violate any laws of thermodynamics, since entropy must be increased elsewhere in order to decrease entropy in a specific location.  However, the ability of a process to decrease entropy at a particular location is very important to life.  Both the efficient concentration of energy for fuel and the remarkable ordering of the genome are key factors in the evolution of life.

Therefore, in the examples of lasers, superconductivity and laser cooling, nature has given us a hint of where to look for processes that are essential for life.  The place to begin looking involves light interacting with matter.  In particular, we should be looking for evidence that coherence in the interaction between light and living matter results in some concentration of energy or increase in order beyond what we might expect for inert matter.  Not surprisingly, that points us to photosynthesis.

For comparison, we should consider what happens when sunlight interacts with ordinary inert matter.  Consider a particle of light, a photon, traveling from the sun to earth.  That trip takes about 8 minutes.  The peak energy emission from the sun is carried by photons in the green color range, with a wavelength of about 0.5 micrometers.  For comparison, the width of a human hair is about 25 micrometers, or 50 times larger.

When the photon strikes a surface and is absorbed, it will cause the molecules to vibrate slightly faster resulting in heat.  Over the course of a day, the direct sunlight will heat up materials significantly.  But at night, the heated material will cool by emitting infrared photons.  If the heated material is about 70 degrees Fahrenheit, the emitted radiation will have a wavelength of approximately 10 micrometers.  The emitted wavelength is about 20 times longer than the sunlight arriving from the sun, so it will require about 20 times as many photons to dissipate the same energy as was absorbed.  The increased number of photons required to dissipate the sun’s energy results in an increase in entropy.
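Here is a quick back-of-the-envelope check of those numbers, a sketch of my own using Wien’s displacement law and the fact that a photon’s energy is inversely proportional to its wavelength:

```python
# Rough check of the ~20x figure: Wien's law gives the peak wavelength of
# room-temperature thermal emission, and photon energy scales as 1/wavelength,
# so the number of emitted photons scales with the ratio of the wavelengths.
h, c = 6.626e-34, 3.0e8          # Planck's constant (J*s) and speed of light (m/s)
wien_b = 2.898e-3                # Wien displacement constant (m*K)

T_room = 294.0                   # about 70 degrees Fahrenheit, in kelvin
lam_in = 0.5e-6                  # incoming sunlight, ~0.5 micrometers (green)
lam_out = wien_b / T_room        # peak wavelength of the re-emitted infrared light

E_in = h * c / lam_in            # energy of one absorbed sunlight photon
E_out = h * c / lam_out          # energy of one emitted infrared photon

print(f"emitted wavelength ~ {lam_out * 1e6:.1f} micrometers")
print(f"photons needed to re-radiate one absorbed photon's energy ~ {E_in / E_out:.0f}")
# -> roughly 9.9 micrometers and about 20 photons, matching the figures above
```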

Now, what happens when a photon of sunlight is absorbed by the chlorophyll in a plant?  First of all, the green photons are largely reflected rather than absorbed: chlorophyll appears green precisely because it reflects green light while strongly absorbing blue and red light.  The real surprise is that the transport of the blue or red photon’s energy through the chlorophyll molecules is done with near 100% efficiency.  Virtually no energy is lost as heat.  I wrote about this capability in a previous post (see https://quantumveil.wordpress.com/2012/03/06/quantum-coherence-in-photosynthesis/).  I overstated the efficiency in that post since I included food production, but the essential point is that the transport of the photon’s energy from the initial point of contact in the chloroplast to a molecular structure called the “reaction center” is accomplished without heat loss.  The reaction center is where the process of using sunlight energy to convert water and carbon dioxide into food begins.

The experiments that have been done to confirm this photosynthetic process also show that the efficient conduction of sunlight to the reaction center is associated with quantum coherence.  The strong implication is that quantum coherence assists the lossless transfer of energy to the right location for food production.  Without such effects, the normal expectation would be for some of the sunlight energy to escape as heat.  By staying as cool as possible, the chlorophyll is able to convert the sun’s energy into food efficiently.  That keeps entropy low.  There are other processes that aid in cooling as well, but the evidence for quantum coherence in this process is a significant fact.

Because quantum coherence is involved in the transport of sunlight energy in photosynthesis, it is not out of the question that it is involved in other life processes.  All biochemical reactions involve both photons and electrons, the key components of quantum processes.  The overall conversion of sunlight into food involves a local decrease of entropy.  Water is split apart to form hydrogen and oxygen, and the hydrogen combines with the carbon from the carbon dioxide to make carbohydrates for food.  This is a concentration of energy and an increase in order that can be described as negative entropy.  It does not violate the law of increasing entropy because entropy rises elsewhere to compensate.  But the local decrease in entropy means a great deal to life processes.  Without the sugars and oxygen that plants produce, life on earth as we know it would not be possible.  We should all thank a plant for its miracle of negative entropy.

Analysts of the photosynthesis / quantum coherence experiments have described the phenomenon as a kind of quantum calculation.  Continuing with the computer analogy, any calculation, if it is to be useful, requires the result to be reported.  In the case of photosynthesis, the “report” is an actual decision on the path the photon should take to its destination.  I have generalized this understanding: any transfer of energy requires a decision by the universe.  That decision process is not random.  Energy must be conserved.  Momentum must be conserved.  Charge must be conserved.  Even quantum states must be preserved if the same state is measured again.  In the case of photosynthesis, there may be multiple paths to the reaction center, but it would not matter which path is chosen as long as the chosen path did not result in heat loss.  This is what I mean by “not random.”  There is uncertainty but not randomness.  Pure randomness results in increased entropy and all living organisms rely on an inherent ability to reduce or conserve entropy, or minimize entropy increase.

The best current theory is that quantum coherence enables calculations regarding the energy landscape of the molecules involved.  In photosynthesis, the thinking is that quantum coherence allows the photon to follow a “downhill” energy path to the reaction center.  This would strongly imply that quantum coherence makes calculations about the laws of physics.  We shall see more evidence of this type when we cover the phenomenon called “protein folding” where biological proteins fold into a shape that minimizes their energy.  I am using computer terminology because this is possibly the best way for people to think about the power of rational agency.  But, like any analogy, it can be stretched too far.

What kind of power could be responsible for this type of activity?  I think the evidence points to a rational power that transcends time and space.  I describe it as transcending space and time because quantum phenomena are non-local: they instantly affect widely dispersed particles.  The non-local properties of quantum theory have been established by several tests.  One such experiment was Alain Aspect’s 1981 test of Einstein’s EPR paradox, in which Einstein had attempted to show that quantum theory was incomplete.  Einstein described the phenomenon, which he clearly thought was impossible, as “spooky action at a distance.”  Another recent test confirmed John Archibald Wheeler’s delayed-choice experiment.  This 2007 test was also performed by a French team that included Alain Aspect.  The tests performed by the French teams were done using polarized light photons, but the results have been confirmed by additional experiments.

Since both the quantum phenomenon and the tests are complicated, perhaps the best way for me to describe the results is through analogy.  Let’s recall the ability of electricity to flow without resistance through a wire that has been cooled to near absolute zero.  Recall that under these special conditions, two electrons with opposite spin associate with each other and form a composite particle that has boson-like properties.  The composite particle, called a Cooper pair, can act like a boson in the sense that the pairs of electrons prefer to be in the same state as other Cooper pairs.  That means that the Cooper pair of electrons can be in a coherent state with other pairs and can move synchronously through the conductor.  The two electrons in a Cooper pair are called “entangled.”

Now, imagine that we can separate the entangled pair of electrons without disturbing their entangled state.  Progress has been made on actually performing this trick.  One of the quantum rules is that the spins must be in opposite directions, even after separation.  Suppose that a measurement of spin is done on one of the two electrons.  That measurement will cause the other electron to immediately jump to the opposite spin direction.  That will always happen, no matter what direction is chosen.  According to quantum theory, this will happen no matter how far the electrons are separated, though in the experiments with photons, the photons are generally only separated by a few meters.

This quantum trick is like a magician who puts three colored balls into one box and three balls of a different color into a second box.  Let’s say he puts a red, green and white ball into box one and he puts a blue, yellow and black ball into box two.  The boxes are separated; maybe even placed in different rooms, or even at great distance from each other.  A ball is drawn at random from box one and a ball is drawn at random from box two.  In every case, if a red ball is drawn from box one then a blue ball is drawn from box two; if a green ball is chosen from box one then the yellow ball comes out of box two; similarly for the white and black ball.  Every time the trick is performed, the ball drawn from box one appears to cause a particular ball to be drawn from box two.  Imagine the same trick with 100 balls or 1000 balls; that is the power of quantum entanglement.
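The statistics behind that analogy can be sketched with the standard prediction for a pair of spins prepared in the singlet state: measurements along the same axis always come out opposite, while measurements along perpendicular axes are uncorrelated.  Here is a toy simulation of my own that reproduces only the predicted statistics, not any underlying mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

def fraction_opposite(axis_a, axis_b, n=100_000):
    """Simulate n entangled (singlet-state) spin pairs measured along axes at angles
    axis_a and axis_b (radians).  Quantum rule: the two outcomes are opposite with
    probability cos^2((axis_a - axis_b) / 2)."""
    p_opposite = np.cos((axis_a - axis_b) / 2) ** 2
    return np.mean(rng.random(n) < p_opposite)

print(fraction_opposite(0.0, 0.0))        # same axis          -> ~1.00 (always opposite,
                                          #   like the paired balls drawn from the two boxes)
print(fraction_opposite(0.0, np.pi / 2))  # perpendicular axes -> ~0.50 (no correlation)
```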

Entangled particles have the power to instantly communicate a change in state to other particles.  This communication can cover great distances and occurs instantaneously.  This has led some to claim that the instantaneous communication violates the spirit of relativity.  While there is some truth to that claim, it is nevertheless impossible to use this instantaneous quantum connection to send a coded message.  Causality is not violated.  This appears to be another situation where the universe has an apparent ability to bypass causality, but we are prevented from using that ability to alter history.

Nor can we claim that entanglement is a rare event.  It is the norm.  This has led some to say that the entire universe is entangled.  I don’t know if that can ever be confirmed, but if entanglement can affect particles a few meters apart, then it can certainly affect biological molecules at much closer range.

This is why I think that scientific evidence supports a conclusion that a decisional, rational power is at work in the universe, a power that is conducive to life.  That power is at work in every transfer of energy because a decision must be made as to which of the quantum possibilities will be chosen.  That decision is not random; it follows certain well established rules that are the foundation of physics.  The best characterization of the decision process is that it is a quantum calculation.  It appears random to us because we do not and cannot know all the variables that affect any given particle.  In particular, we cannot know all the quantum entanglements by which any given particle is constrained.  I think this is the best explanation as to how life and consciousness can develop from ordinary matter: protons, neutrons, electrons and photons.  There is no alternative explanation as to how the forces of electromagnetism, the strong and weak forces, and gravity can accomplish the amazing reduction in entropy that exists in living organisms.