My previous posts have focused on the evidence for a rational agent inherent in the laws of physics. There has been an implicit assumption that the laws of physics are rigorously deterministic. But clearly life is not deterministic, so it was necessary for me to point to some possible feature of the laws of physics that allowed for the wild variation and unpredictability of life. I will summarize my thought process as follows:
- The universe is ordered by deterministic laws and forces such as gravity and electromagnetism. There are also non-deterministic laws such as quantum theory. One law that combines both features is the law of increasing entropy: the total entropy of the universe always increases, but entropy is allowed to decrease locally. Since quantum theory ultimately controls all interactions in the universe, all forces are non-deterministic at the quantum level. (The only possible exception is gravitation, which has not yet been unified with quantum theory.)
- The deterministic laws (electromagnetism, etc.), by themselves, cannot account for life and consciousness. There must be another factor in the fundamental laws of physics that allows living organisms to lower entropy. The process of lowering entropy is essential to life because it concentrates energy for future use and organizes the genome for transmission to future generations.
- That factor in the laws of physics is the collapse of the wave function in quantum physics, also called decoherence. Decoherence is absolutely necessary for any measurable energy transfer. In decoherence, the universe actually chooses an outcome for every transfer of energy. This choosing, or decisionality, on the part of the universe is what I have called rational agency and it is responsible for the forward direction of time.
- This decisionality on the part of the universe is always mixed up with randomness because we are prohibited from knowing precisely all the states of matter, particularly the states of entanglement between particles. This is a consequence of a kind of cosmic censorship hypothesis. The Heisenberg uncertainty principle is one such limitation on our knowledge.
- There can be no ordering principle or lowering of entropy based on true randomness. True randomness, by definition, is maximum entropy. In all of physics the only candidate for non-random yet non-deterministic action is decoherence.
- Therefore, this choice by the universe is directed choice. It is a rational choosing based on the laws of physics and contains within it the possibility of lowering entropy. It is the physical undergirding of all life and consciousness. It is the physical action responsible for the forward direction of time.
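The entropy bookkeeping behind these steps is simply the second law of thermodynamics written for a subsystem and its surroundings; in standard notation:

```latex
% Second law for a local subsystem inside a larger environment:
\Delta S_{\mathrm{universe}} \;=\; \Delta S_{\mathrm{local}} + \Delta S_{\mathrm{surroundings}} \;\ge\; 0
```

A living organism can make $\Delta S_{\mathrm{local}}$ negative only by exporting at least as much entropy to its surroundings (as heat and waste), so local ordering never violates the global law.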
Essentially, I think that the laws of physics favor life or are conducive to life. In general, nature prefers to disperse energy; therefore there must be physical explanations for how energy gets concentrated. Just as there is an explanation for how nature concentrates energy for lightning, there must also be an explanation for how living organisms concentrate energy and lower entropy. These six steps summarize my explanation. In this series on evolution and biology, I will lay out the case for the laws of physics favoring life as opposed to the case for life adapting to the laws of physics. Both dynamics occur, but only laws conducive to life can create life from inanimate matter.
I don’t consider this logic highly dependent on particular experimental results. Scientific theories are always provisional; they can be superseded by better theories or more accurate results. My reasoning is broadly based on the general properties of physical laws. A portion of the laws are rigorously deterministic and use mathematics to make predictions about future events. A portion of the laws of physics deals with the presence of uncertainty in the universe. I fully expect the laws of physics to be revised and improved, but I don’t expect that these general characteristics will be much altered. If string theory is proved true, that would not change my basic logic, but my perspective might need to accommodate rational agency operating in a multiverse scenario. String theory, for all its promise, does not yet make any testable predictions.
Along with the laws of physics, I view the theory of evolution as a valid scientific theory. It is a theory based on the idea that all living organisms adapt to their specific environment and pass along adaptive traits through procreation. Darwin’s concept of “natural selection” was devised in contradistinction to “artificial selection,” whereby human breeders selected the best mates in order to raise generations of specifically adapted animals.
Biology is a complex science. For someone like me, who has spent a major part of his life focused on math and the physical sciences, the main shock of encountering biology is the sheer astronomical diversity of life. Last year, I took one of the online courses offered by UC Berkeley. It was the basic undergraduate course for biology majors, and it was something I needed because my previous biology class must have been in high school. It was just as well that I didn’t have much previous instruction, because so much has changed between then and now. The sheer volume of information is astounding. I found myself wondering how on earth anyone organizes this much data. In fact, it took three teachers to cover the material: one instructor had a background in molecular biology, one was a specialist in genetics, and one came from a medical background. I had the distinct feeling that complete mastery was beyond the capability of any one individual. But I am still learning, and I do have some observations based on my perspective from the physical sciences.
One observation concerns the principle of emergence. Emergence is the concept that complex living organisms are able to exhibit new properties and traits by virtue of their complexity and organization. The example from the textbook for the UC Berkeley class is one that interests me: “For example, although photosynthesis occurs in an intact chloroplast, it will not take place in a disorganized test-tube mixture of chlorophyll and other chloroplast molecules. Photosynthesis requires a specific organization of these molecules in the chloroplast.” The text is saying that photosynthesis is an emergent phenomenon. That is fine; it helps organize knowledge. But for someone who wants to know how things work, there is a further question: how does the particular organization contribute to function? What are the properties of the constituent parts that enable the composite function to emerge? Too often, emergence is used simply as a label for a new function that can’t be explained any further. When that happens, it becomes a kind of false knowledge: a category without explanatory power.
To take another example, water is composed of two elements that are gases at room temperature: hydrogen and oxygen. I suppose you could say the emergent property of water is its liquidity. But with water, one can trace its properties to the molecular properties of hydrogen and oxygen, the strong covalent bonds between them, and the weak hydrogen bonds between water molecules. These particular molecular properties can also be used to explain surface tension, freezing and boiling. My expectation is that biology will someday be explained in terms of molecular dynamics. That day is a long way into the future.
Biological scientists are answering these kinds of questions, and it is slow, painstaking work to demonstrate how biological molecules function. But I suppose that is the part of biology that mainly interests me. I have two main areas of interest in the biological sciences. One is photosynthesis, because of its use of quantum coherence for efficient transmission of sunlight energy to the “reaction center” where chemical food production begins. The other is the biological molecule tubulin.
Tubulin is a protein molecule that assembles into microtubules. Microtubules are long, narrow, hollow tubes that play an amazing variety of roles in living cells. Microtubules have a natural tendency to assemble themselves because of the positive and negative polarity of the tubulin molecule. Once assembled, they play key roles in cell function. During mitosis (cell division), they grab hold of the chromosomes and cause the genome to separate precisely toward opposite ends of the cell. Microtubules are part of the cell’s cytoskeleton; they give shape and form to the cell. In plants, microtubules guide the alignment of cellulose and direct plant growth at the cellular level.
Microtubules form the infrastructure that transports molecules from outside the cell to the inside and vice versa. Motor proteins “walk” vesicles containing molecules back and forth along microtubules to their destinations. For example, pancreas cells that make insulin transport the insulin from inside the cell to the outside by this method. In addition, microtubules are used for the cell’s interaction with its environment. They form some types of flagella and cilia for locomotion of the cell or movement of particles in the cell’s environment. For example, the human sperm cell is propelled by the action of a flagellum made up of microtubules.
In short, microtubules are a very versatile cellular component. Furthermore, they are an essential part of nerve cells. Tubulin, the protein that forms microtubules, is found at very high density in brain tissue. That has led some researchers to propose a key role for microtubules in brain activity and consciousness. Their long, hollow, cylindrical geometry might be well suited for quantum coherence, and there has been some research along these lines.
Tubulin is the protein building block of microtubules, and it or similar proteins are probably very ancient, perhaps going back to the beginning of life. One source specified that all cells have such proteins, with the exception of blue-green algae, also known as cyanobacteria. However, even cyanobacteria have a tubulin-like molecule (a homologue) called “FtsZ.” An interesting connection between my two main interests is that cyanobacteria use photosynthesis to harvest energy from sunlight. It is the light-harvesting complex from cyanobacteria that is used in the experiments testing quantum coherence.
Cyanobacteria are among the oldest life forms on Earth, perhaps as old as 3.5 billion years. It would be a very interesting development if microtubules or microtubule-like structures go back to the beginning of life and if it can be demonstrated that quantum coherence played a key role in efficient energy transmission in these structures. Those are two very big “ifs” and most researchers are very cautious about any evidence pointing towards quantum coherence in biological molecules. But I remember some fairly incautious statements about the beginning of life from many years ago.
I think it was probably in high school chemistry class that the teacher, one day, covered the Miller-Urey experiment. This experiment was conducted in 1952 and involved sending a spark of electricity (to simulate lightning) through a mix of chemicals assumed to represent Earth’s primitive atmosphere. The result was a mixture of amino acids and sugars, both essential building blocks of life. Stanley Miller and Harold Urey had demonstrated that organic compounds necessary for life could be easily formed from plausible atmospheric compounds, such as water, methane, ammonia and hydrogen. Not only that, but the teacher thought we would soon be able to synthesize life in a test tube. Well, that was over 50 years ago, and the synthesis of life seems as elusive as ever. Science doesn’t yet know what makes biochemicals spring to life.
The mystery of the beginning of life notwithstanding, the theory of evolution brought incredible organizing power to the huge diversity of biology. Darwin’s “natural selection” brought explanatory power to the huge diversity of species on Earth. In the mid-twentieth century, the discovery of the structure of DNA and of the genetic code brought into the evolutionary system a mechanism for adaptation. This has led to what has been called the “central dogma” of molecular biology: DNA makes RNA, which makes proteins. DNA contains coded information that is used to create a coded sequence of RNA, which is used to create a sequence of amino acids, which make up proteins. The next step, which isn’t explicitly stated and is poorly understood, is that proteins must fold into a specific three-dimensional form in order to be useful. What is startling to me, coming from a computer programming background, is that the coded sequence of DNA contains just four characters representing four small molecules: A (adenine), C (cytosine), G (guanine) and T (thymine).
These four codes are interpreted in groups of three, which gives 64 possible “words,” or codons, in the genetic code (4 × 4 × 4). Of the 64 possible codons, 61 specify amino acids and 3 serve as “stop” signals; since only 20 amino acids are needed to make all known proteins, there is built-in redundancy: most codons specify the same amino acid as another codon. Only tryptophan and methionine rely on a single coded sequence; all the others have at least two DNA codes, and some (serine, leucine and arginine) have six. It seems possible to me that different evolutionary branches developed a reliance on different DNA sequences for the amino acids. For someone with a data processing background, the DNA codes are reminiscent of a computer system that has been copied and modified to meet different objectives – even to the extent that duplicate codes are mainly sequential (e.g., leucine: TTA, TTG, CTT, CTC, CTA, CTG). From a “systems design” perspective, it would seem that at one time there was provision for expansion, with 64 codes for 20 amino acids, but after evolutionary modifications all 64 codes are now in use. I suppose that if a need developed for a 21st amino acid, one of the existing redundant codes would be used. The whole process is very complex, but the same basic DNA, RNA and amino acids are found in all life forms on Earth. This amazing genetic code is universal to life as we know it. (There are some exceptions: Paramecium, for instance, uses the “stop” codons UAG and UAA to code for glutamine.)
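The codon arithmetic described above can be checked directly. The sketch below tallies the redundancy of the genetic code; the table is the standard DNA codon assignment, but the variable names and layout are my own:

```python
# Standard genetic code, keyed by amino acid (DNA alphabet, as in the text).
# "*" marks the three stop codons.
CODON_TABLE = {
    "Phe": ["TTT", "TTC"],
    "Leu": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
    "Ile": ["ATT", "ATC", "ATA"],
    "Met": ["ATG"],
    "Val": ["GTT", "GTC", "GTA", "GTG"],
    "Ser": ["TCT", "TCC", "TCA", "TCG", "AGT", "AGC"],
    "Pro": ["CCT", "CCC", "CCA", "CCG"],
    "Thr": ["ACT", "ACC", "ACA", "ACG"],
    "Ala": ["GCT", "GCC", "GCA", "GCG"],
    "Tyr": ["TAT", "TAC"],
    "His": ["CAT", "CAC"],
    "Gln": ["CAA", "CAG"],
    "Asn": ["AAT", "AAC"],
    "Lys": ["AAA", "AAG"],
    "Asp": ["GAT", "GAC"],
    "Glu": ["GAA", "GAG"],
    "Cys": ["TGT", "TGC"],
    "Trp": ["TGG"],
    "Arg": ["CGT", "CGC", "CGA", "CGG", "AGA", "AGG"],
    "Gly": ["GGT", "GGC", "GGA", "GGG"],
    "*":   ["TAA", "TAG", "TGA"],
}

total_codons = sum(len(v) for v in CODON_TABLE.values())
amino_acids = [aa for aa in CODON_TABLE if aa != "*"]
single = [aa for aa in amino_acids if len(CODON_TABLE[aa]) == 1]
sixfold = [aa for aa in amino_acids if len(CODON_TABLE[aa]) == 6]

print(total_codons)       # -> 64, i.e. 4 x 4 x 4
print(len(amino_acids))   # -> 20 amino acids
print(sorted(single))     # -> ['Met', 'Trp'], the only non-redundant codes
print(sorted(sixfold))    # -> ['Arg', 'Leu', 'Ser'], the most redundant
```

The run confirms the counts cited above: 64 codons, 20 amino acids, two amino acids with a single code, and three with six codes each.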
“Natural selection” coupled with the genetic code has given enormous explanatory power to evolutionary biology. But like all theories, it is a conceptual model of the physical processes that occur. There remain many questions, such as how life began. And then there’s the question asked by Stephen Hawking: “What is it that breathes fire into the equations and makes a universe for them to govern?” What is it that actually makes the world act in a way that is consistent with the conceptual model? Readers of my previous posts will suspect that my answer is similar to what I’ve written before: there is a decisional power at work in the universe that breathes life into biological molecules. It is this decisionality that ensures that time flows forward and therefore gives evolution direction.
Some of the evidence for my answer resides in the evidence for directionality in evolution. But, first of all, the evolutionary model is a rational model. Even more amazing is that the implementation of the genetic code is an abstract, rational system! Who would have thought that nature would have arrived at the very rational system of using a three character code to specify a sequence based on 20 amino acids that comprise the proteins for all life? Let me be direct: The genetic code is information. The central dogma of molecular biology is an information processing system. The end results are proteins and decisional governance of the cell. This is exactly the type of system one might expect from a rational agent acting through nature.
As to directionality, the most immediate evidence is the adaptability of evolutionary change. Evolutionary change produces living organisms that get better at adapting to their environment. Not only are more advanced organisms better adapted, but they are better at adapting! For higher life forms like mammals, and particularly humans, this implies a higher consciousness. Therefore, the longer-range implication of evolution is higher consciousness. I think this trend is evident from the archeological and historical record. For almost 4 billion years, life has survived under the constant threat of a cosmic catastrophe such as the one that brought an end to the dinosaurs. Today, we are beginning to track the asteroids and comets that have the potential to cause another life-ending cataclysm. That would not be possible without some sort of advanced consciousness. In a strange sort of self-reflection, adaptation has become adaptability, which requires a higher consciousness. This implies a robust moral development as well, but that is beyond what I can cover in these posts on science and reason.
But a rational agent is not the only explanation. The alternative view is that evolution is the byproduct of random mutation. First of all, I don’t think randomness is a good scientific answer. Science succeeds when it finds and explains rational patterns; to say that a process is random is to admit defeat from a scientific point of view. The second thing I would say is that when someone refers to random mutation, it is unclear what type of randomness they mean: lack-of-knowledge randomness or the genuine non-determinism of quantum physics. The common view of evolution is that it requires generations of offspring in order for nature to select the best attributes and pass those on to future generations. Is evolution inherently random because some individuals show up at the wrong place at the wrong time or, alternatively, at the right place at the right time? Is it random because a cosmic ray has altered the genome? Is it random because we can’t predict how our children will turn out? The most likely reason a mutation might be random is a transcription or copying error, but modern cells have evolved elaborate safeguards against such errors.
It turns out that when evolutionists speak of “random mutation,” they mean something specific. My biology textbook (on Kindle!) uses the phrase only once in over 1,000 pages of small-font text, and that one occurrence refers to copies of genes that have lost functionality (i.e., the gene has been degraded) over time. The textbook does not refer to new functionality as “random mutation,” but it does use the phrase “accidents during meiosis” (cell division in reproductive cells). This phrase, too, has a specific meaning that might not be expected from a plain-English reading. In general, the textbook prefers to state evidence positively, in terms of what we know rather than what we don’t know. As to genetic mutation, it refers to various mechanisms for altering the genome, such as transposition of small portions of the DNA from one location to another.
One internet site was particularly helpful in tracking down the origin of the phrase “random mutation”: a teaching guide for evolution named “Evolution 101,” associated with the UC Museum of Paleontology at Berkeley. This source was very explicit:
“Mutations are random.
Mutations can be beneficial, neutral, or harmful for the organism, but mutations do not “try” to supply what the organism “needs.” In this respect, mutations are random—whether a particular mutation happens or not is unrelated to how useful that mutation would be.”
Behind this brief description is a debate that began with Darwin. Prior to Darwin, a French biologist named Jean-Baptiste Lamarck held the view that (1) individuals acquire traits that they need and lose traits that they don’t need, and (2) individuals inherit the traits of their ancestors. He gave as examples the giraffe, whose neck was assumed to have stretched in order to reach higher leaves in trees, and blacksmiths, whose strong arms appeared to have been inherited by their sons. But these ideas have been debunked.
When Darwin published On the Origin of Species in 1859, he gave some credibility to Lamarck’s view, and later evolutionists elevated Lamarck’s idea to a major theme of evolution. By the mid-twentieth century, biologists had become adept at doing experiments with bacteria. In 1943, two biologists, Max Delbrück and Salvador Luria, wanted to test Lamarck’s hypothesis for bacteria, which were thought to be the more likely organisms to use Lamarckian adaptation. The Luria-Delbrück experiment tested whether bacteria exposed to a lethal virus would develop any adaptive mutation and whether that mutation was acquired prior to exposure or not. Their experiment showed conclusively that some bacteria had acquired an adaptive mutation prior to exposure, as did subsequent experiments by others, including Esther and Joshua Lederberg, who are referenced on the “Evolution 101” website.
So, based on experiments, what evolutionists mean when they say that mutations are random is that some adaptive mutations occur before any exposure to infectious agents in a test; the mutations do not occur because of exposure. Now this is a somewhat contentious finding, because it defies the commonsense view that mutations happen for a reason, most likely a reason related to some inoculation or exposure to an agent. In other words, either the finding appears to violate causality, or the explanation is an admission of ignorance about the cause of adaptation.
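The statistical signature that Luria and Delbrück relied on can be illustrated with a toy simulation. This is entirely my own sketch, not their protocol, and the parameters are arbitrary: if mutants arise spontaneously during growth, an occasional early mutation founds a “jackpot” clone, so mutant counts vary wildly across parallel cultures; if mutation were induced only at exposure, each cell would mutate independently and the counts would be near-Poisson, with variance close to the mean.

```python
import random
import statistics

rng = random.Random(1)  # fixed seed so the toy run is reproducible

def spontaneous(generations=12, mu=2e-4):
    """Mutants arise during growth; an early mutation founds a jackpot clone."""
    normals, mutants = 1, 0
    for _ in range(generations):
        normals, mutants = normals * 2, mutants * 2          # every cell divides
        new = sum(1 for _ in range(normals) if rng.random() < mu)
        normals, mutants = normals - new, mutants + new      # mutants breed true
    return mutants

def induced(final_pop=2**12, p=12 * 2e-4):
    """Mutation only at exposure: each final cell mutates independently."""
    return sum(1 for _ in range(final_pop) if rng.random() < p)

def fano(samples):
    """Variance-to-mean ratio; ~1 for a Poisson process, large for jackpots."""
    return statistics.pvariance(samples) / statistics.mean(samples)

spont = [spontaneous() for _ in range(100)]   # 100 parallel cultures
indu = [induced() for _ in range(100)]

print(f"spontaneous Fano factor: {fano(spont):.1f}")  # well above 1: jackpots
print(f"induced Fano factor:     {fano(indu):.1f}")   # near 1: Poisson-like
```

The variance-to-mean ratio (the Fano factor) is the discriminator: Luria and Delbrück observed the large-variance, jackpot-laden pattern, which is why they concluded that the mutations preceded exposure.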
I take the view that the finding is an admission of ignorance. We really don’t know what might have caused an adaptive mutation to occur before exposure. The real scientific question is what causes the mutation, and biologists prefer to focus on what we can discover. One such biologist is James A. Shapiro, professor of microbiology at the University of Chicago. He characterizes the association of “random mutation” with the Luria-Delbrück experiment as follows:
One has to be careful with the word “proof” in science. I always said that conventional evolutionists were hanging a very heavy coat on a very thin peg in the way they cited Luria and Delbrück. The peg broke in the first decade of this century.
Professor Shapiro goes on to write about mechanisms that bacteria have for “remembering” previous exposure to infectious agents, including modification of the bacterial DNA. He states that Delbrück and Luria would have discovered this if they had not used a virus that was invariably lethal, and if they had had the tools for DNA analysis. The structure of DNA would not be announced until 1953, ten years after the Luria-Delbrück experiment, and the tools for analysis are still being developed. It should not be too big a surprise that bacteria have elaborate mechanisms for DNA sharing and modification. The human immune response to invasive agents also includes the recording of information in the DNA of certain white blood cells (lymphocytes). You can read Shapiro’s entire article here: http://www.huffingtonpost.com/james-a-shapiro/epigenetics-ii-cellular-m_b_1668820.html.
It is no longer fashionable to speak of Lamarckian inheritance, but the field of epigenetics is devoted to heritable adaptation by means other than changes to the DNA sequence. My own view is that the amount of debate and discussion on the issue of “soft” inheritance points to the conclusion that this is unsettled science. Microbiologists today have many more tools and techniques for answering questions about the causes of adaptive inheritance than they did sixty years ago, and I suspect they would prefer to look at changes to DNA and other molecules rather than make statistical inferences as Luria and Delbrück did. Current research of the type that James Shapiro is doing is demonstrating specific causes for adaptation.