The Evidence from Evolution and Biology (Part 3)

In part 2 of this series on evolution and biology, I presented my analysis of the origin of life and my conclusion that life could not have arisen through random chance alone.  I concluded, along with other observers, that the laws of physics and chemistry must be conducive to the creation of life and that such laws are evidence for a cosmic ordering power.  The question remains, however: what part does random chance play once life has been created?  In part 1 of this series, I raised the question of the role that random mutations play in natural selection.  In this part, I will present evidence that natural selection does not rely entirely on random mutation and that at least some portion of it relies on directed mutation.

The most obvious systematic source of random changes in DNA is copying errors.  One of the first researchers to deal rigorously with copying errors was Manfred Eigen, with his “quasi-species” model.  In this mathematical model of natural selection, fitness to survive is balanced against replication errors.  Here is Freeman Dyson’s description of the problem:

The central problem for any theory of the origin of replication is that a replicative apparatus has to function almost perfectly if it is to function at all. If it does not function perfectly, it will give rise to errors in replicating itself, and the errors will accumulate from generation to generation. The accumulation of errors will result in a progressive deterioration of the system until it is totally disorganized. This deterioration of the replication apparatus is called the “error catastrophe.”

Eigen’s model sets a theoretical limit on the error rate a replicating genome can tolerate before succumbing to the “error catastrophe.”  It turns out that the maximum error rate is approximately the inverse of the number of DNA base pairs.  So for humans, with about 3.2 billion base pairs, the calculated maximum error rate is on the order of 10⁻⁹, or roughly 1 error per billion base pairs copied in each cell division.  This is consistent with the actual error rate after proofreading and repair of the copied DNA.
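
As a rough illustration, here is that arithmetic in a few lines of Python.  This is a minimal sketch using the simplified 1/L form of Eigen’s threshold; the full quasi-species result includes a term for the selective advantage of the fittest sequence, which I am ignoring here.

```python
# Simplified form of Eigen's error threshold: the tolerable error
# rate per base pair scales as the inverse of genome length L.
# (The full quasi-species result multiplies 1/L by the log of the
# selective advantage, which is omitted here.)

genome_length = 3.2e9                 # human genome, base pairs
max_error_rate = 1 / genome_length    # ~3.1e-10 per base pair

print(f"Maximum tolerable error rate: {max_error_rate:.1e} per base pair")
# -> on the order of 10^-9, consistent with the observed fidelity
#    of DNA replication after proofreading and repair.
```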

But some copying errors will still survive.  What becomes of them?  James A. Shapiro is a professor of microbiology at the University of Chicago.  In his book, Evolution: A View from the 21st Century, he writes, “Although our initial assumption is generally that cells die when they receive an irreparable trauma or accumulate an overwhelming burden of defects with age . . ., it turns out that a significant (perhaps overwhelming) proportion of cell deaths result from the activation of biochemical routines that bring about an orderly process of cellular disassembly known by the terms programmed cell death and apoptosis.”  In multicellular species, there is an elaborate signaling system for causing some cells to die, and the process is not necessarily disease related.  During embryonic development, some tissues grow that need to be eliminated before birth, such as the webs that connect fingers and toes; these are eliminated by apoptosis (programmed cell death).  The same fate befalls embryonic neurons that do not form sufficient interconnections to be viable.  The implication is that organisms have an elaborate capability for determining when certain cells need to be eliminated.  Some cancers are caused by failures of the apoptosis response.

Before proceeding to the evidence for directed mutation, I want to encourage an appreciation for the enormous orchestration that occurs inside the cell.  As an observer of the biological sciences, I am constantly amazed by the incredible variability and responsiveness of living cells.  If you have never watched videos or animations of cell division or other cellular processes, I would urge you to do so.   They are simply fascinating!  And part of what makes for a fascinating view is the complex orchestration that is happening inside the cell.    Here is a video dealing with mitosis, but there are many others:  http://www.youtube.com/watch?v=C6hn3sA0ip0.  A longer, more advanced animation on the cellular response to inflammation is here:  http://www.youtube.com/watch?v=GigxU1UXZXo&NR=1&feature=fvwp.

Another amazing aspect of cellular function and orchestration is protein folding.  For proteins to be effective, they must be folded into a three-dimensional shape suited to their purpose.  As I explained in my previous post, the protein enzyme sucrase performs its function of splitting table sugar (sucrose) into the more easily metabolized glucose and fructose by “locking onto” the sucrose molecule.  Biologists have often used the analogy of a lock and key to explain the fitting of enzymes to their target molecules.

Protein misfolding plays a part in several disease processes, including Alzheimer’s disease, Creutzfeldt-Jakob disease (the human counterpart of “mad cow disease”), Tay-Sachs disease, and sickle cell anemia.  In sickle cell anemia, the protein misfolds because of a mutation that alters the amino acid sequence of one of the protein subunits needed to construct hemoglobin.  In the case of Creutzfeldt-Jakob disease, the cause of protein misfolding has not been conclusively identified, but it may be due to an “infectious protein” called a prion.  A prion is a misfolded form of a normal human protein found in the cell membrane; it causes other copies of the normal protein to misfold as well, resulting in the degeneration of brain tissue.  It would be unprecedented if it were conclusively proved that Creutzfeldt-Jakob disease is caused by prions, because all other known disease agents carry and replicate their own DNA or RNA.

The instructions for protein folding are not explicitly contained in DNA (although the amino acid sequence it encodes is a crucial determinant), yet correct folding is absolutely necessary for good health.  DNA provides the peptide sequence information, and it is the task of the completed protein, after it has been manufactured by a ribosome, to fold into the correct shape.  In human cells there are regulatory mechanisms for determining whether a protein has folded into the correct shape; if a protein has misfolded, it can be detected and disassembled.  Some proteins have the help of chaperones, as mentioned in my previous post.  Here is an animation of a short 39-residue segment of the ribosomal protein L9, identified as “NTL9,” shown folding by computer simulation:  http://www.youtube.com/watch?v=gFcp2Xpd29I.  (The full protein, from Bacillus stearothermophilus, is just one of many that make up a ribosome.  It contains 149 amino acids and functions as a binding protein for the ribosomal RNA.)

Proteins fold at widely varying rates, from about 1 microsecond to well over 1 second, with many folding in the millisecond range.  The quickness with which most proteins fold led Cyrus Levinthal to observe in 1969 that if nature took the time to test all the possible paths to a correct final configuration, it would take longer than the age of the universe for a protein to fold.  It is now thought that proteins fold in a hierarchical order, with segments of the protein chain folding quickly due to local forces, so that the final folding process need only configure a much smaller number of segments.  Nevertheless, simulations of protein folding often require huge computational resources to recreate the folding sequence.  One source estimated that it would take about 30 CPU-years to simulate one of the fastest-folding proteins.  A slower protein would require 100 times the resources, or about 3,000 CPU-years.
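
Levinthal’s argument is easy to reproduce.  The sketch below uses the conventional textbook assumptions (about 3 conformations per residue and about 10¹³ conformations sampled per second); the specific numbers are illustrative, not measured.

```python
# Levinthal's back-of-the-envelope argument. Both numbers below are
# conventional illustrative assumptions, not measured values.

residues = 100                  # a modest-sized protein
conformations = 3 ** residues   # ~5e47 possible chain configurations
sampling_rate = 1e13            # conformations sampled per second
age_of_universe_s = 4.3e17      # ~13.8 billion years, in seconds

search_time_s = conformations / sampling_rate
print(f"Exhaustive search would take {search_time_s:.1e} seconds,")
print(f"or {search_time_s / age_of_universe_s:.1e} ages of the universe.")
# -> ~5e34 seconds (~1e17 universe ages) for a protein that in
#    reality folds in microseconds to seconds.
```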

So Levinthal’s question has not been completely answered: how does nature enable proteins to fold so quickly?  The prevailing theory holds that the various intermediate states follow an energy funnel from a high-energy state (unfolded) to the lowest-energy state (folded).  Just as water seeks its lowest level, proteins seek the conformation that has the lowest energy.  The explanation for the wide variety of folding rates then rests on the nature of the path from the unfolded energy state to the folded energy state: if the path is straight, the folding will be fast; if the path has energy barriers that must be circumnavigated or perhaps tunneled through, the folding will be slower.  These issues are still under active research, so there is currently no clear consensus.  But in a recent paper, two researchers conclude, “Our results show it is necessary to move outside the realm of classical physics when the temperature dependence of protein folding is studied quantitatively” (“Temperature dependence of protein folding deduced from quantum transition”; 2011, Liaofu Luo and Jun Lu).
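
To see how much a barrier along the path matters, here is a toy Arrhenius-style estimate.  The attempt frequency of 10⁶ per second (a round number near the proposed “folding speed limit”) and the barrier heights are assumptions for illustration; this is exactly the classical picture that Luo and Lu argue is incomplete.

```python
import math

# Toy Arrhenius-style estimate of how a single energy barrier along
# the folding path slows the rate: k = A * exp(-dG / RT). The
# attempt frequency A and the barrier heights are assumed round
# numbers for illustration only.

R = 8.314      # gas constant, J/(mol*K)
T = 298.0      # room temperature, K
A = 1e6        # assumed attempt frequency, 1/s ("folding speed limit")

for barrier_kJ in (0, 10, 20, 30):     # barrier height, kJ/mol
    k = A * math.exp(-barrier_kJ * 1e3 / (R * T))
    print(f"barrier {barrier_kJ:2d} kJ/mol -> folding time ~{1/k:.1e} s")
# Every additional ~6 kJ/mol (about 2.4 RT) slows folding roughly
# tenfold, spanning the microsecond-to-second range observed.
```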

I simply point out the similarity to the research on photosynthesis, which showed that the energy of photons captured in photosynthesis follows a highly efficient path to the place where it can be turned into food production.  That research showed that quantum coherence plays a significant role in the efficient transfer of energy, and some analysts suggested that a quantum computation of the energy landscape was a key part of the explanation.  It would not surprise me if quantum computation played a key role in protein folding by determining the most efficient path for navigating the energy funnel.  But whether or not quantum computation plays a role in protein folding, some scientists have not hesitated to apply the computer analogy to cell function.

Paul Davies is a physicist and science advocate who contrasted the vitalism of the 19th century with our understanding of biology today by saying, “The revolution in the biological sciences, particularly in molecular biology and genetics, has revealed not only that the cell is far more complex than hitherto supposed, but that the secret of the cell lies not so much with its ingredients as with its extraordinary information storing and processing abilities. In effect, the cell is less magic matter, more supercomputer.”

James A. Shapiro continues the computer metaphor when he writes about the cognitive ability of the cell.  In his book, Evolution: A View from the 21st Century, he writes about the cell’s ability to regulate and control itself, using examples such as repair of damaged DNA, programmed cell death, and regulation of the process of cell division.  He then characterizes the cell in computer-like terms (my emphasis):

The selected cases just described are examples where molecular biology has identified specific components of cell sensing, information transfer, and decision-making processes. In other words, we have numerous precise molecular descriptions of cell cognition, which range all the way from bacterial nutrition to mammalian cell biology and development. The cognitive, informatic view of how living cells operate and utilize their genomes is radically different from the genetic determinism perspective articulated most succinctly, in the last century, by Francis Crick’s famous “Central Dogma of Molecular Biology.”

Shapiro goes on to suggest modifications to the “Central Dogma of Molecular Biology.”  The “Central Dogma” summarizes the one-way flow of information by which proteins are created from RNA, which in turn is transcribed from DNA.  Dr. Shapiro suggests that this one-way summary is too simple: there are many paths through which RNA and proteins can modify the DNA.  The primary example of RNA that can modify DNA comes from retroviruses, of which the well-known HIV virus is one example.  A retrovirus carries RNA that is translated into proteins (including reverse transcriptase) that copy the viral RNA into DNA and then insert the viral DNA into the host DNA.  It is estimated that between 5% and 8% of the human genome consists of DNA that has been inserted by retroviruses.
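
As a toy illustration of these two directions of information flow, here is a string-level sketch in Python.  Real transcription and reverse transcription are enzymatic processes acting on nucleotides; base-pairing is the only biology this sketch captures.

```python
# String-level sketch of the two directions of information flow.
# Real transcription and reverse transcription are enzymatic
# processes; base-pairing is the only biology captured here.

DNA_TO_RNA = str.maketrans("ACGT", "UGCA")   # DNA template -> RNA
RNA_TO_DNA = str.maketrans("ACGU", "TGCA")   # viral RNA -> cDNA

def transcribe(dna_template: str) -> str:
    """Central Dogma direction: DNA template to complementary RNA."""
    return dna_template.translate(DNA_TO_RNA)

def reverse_transcribe(rna: str) -> str:
    """Retroviral direction: RNA back to complementary DNA (cDNA)."""
    return rna.translate(RNA_TO_DNA)

rna = transcribe("TACGGT")       # -> "AUGCCA"
cdna = reverse_transcribe(rna)   # -> "TACGGT", ready for insertion
print(rna, cdna)
```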

Dr. Shapiro also uses computer programming terminology when describing detailed biological functions, such as E. coli’s ability to metabolize lactose when glucose is not available: “Overall computation = IF lactose present AND glucose not present AND cell can synthesize active LacZ and LacY, THEN transcribe LacZY from LacP.”  That is a statement that could be implemented in almost any standard computing system, given, of course, the proper functions for “synthesize,” “transcribe,” and so on.  I would also point out that a significant portion of a cell’s “cognitive” function is concerned with self-regulation; in other words, a significant amount of self-knowledge is available to the cell.
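
Taking that claim literally, here is Shapiro’s lac operon rule rendered as a Python function.  The boolean inputs are stand-ins for molecular states; the actual control circuit (the LacI repressor and the CAP/cAMP activator) is abstracted away into three predicates.

```python
# Shapiro's lac operon "overall computation" rendered literally in
# Python. The three booleans are stand-ins for molecular states; the
# actual control circuit (LacI repression, CAP/cAMP activation) is
# abstracted away into these predicates.

def should_transcribe_lacZY(lactose_present: bool,
                            glucose_present: bool,
                            can_synthesize_lacZY: bool) -> bool:
    """IF lactose AND NOT glucose AND synthesis possible,
    THEN transcribe LacZY from LacP."""
    return lactose_present and not glucose_present and can_synthesize_lacZY

# E. coli in a lactose-only medium: the switch turns on.
print(should_transcribe_lacZY(lactose_present=True,
                              glucose_present=False,
                              can_synthesize_lacZY=True))   # True
```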

Professor Shapiro itemizes five general principles of cellular information processing:

  1. There is no Cartesian dualism in the E. coli (or any other) cell. In other words, no dedicated information molecules exist separately from operation molecules. All classes of molecule (proteins, nucleic acids, small molecules) participate in sensing, information transfer, and information processing, and many of them perform other functions as well (such as transport and catalysis).
  2. Information is transferred from cell surface or intracellular sensors to the genome using relays of proteins, second messengers, and DNA-binding proteins.
  3. Protein-DNA recognition often occurs at special recognition sites.
  4. DNA binding proteins and their cognate formatting signals operate in a combinatorial and cooperative manner.
  5. Proteins operate as conditional microprocessors in regulatory circuits. They behave differently depending on their interactions with other proteins or molecules.

Regarding evolution, Dr. Shapiro advocates a concept called “natural genetic engineering” whereby the cell makes adaptive and creative changes to its own DNA.  I have used the phrase “directed mutation” to mean essentially the same thing.  These changes to a cell’s own DNA are not random: “It is difficult (if not impossible) to find a genome change operator that is truly random in its action within the DNA of the cell where it works. All careful studies of mutagenesis find statistically significant nonrandom patterns of change, and genome sequence studies confirm distinct biases in location of different mobile genetic elements. These biases can sometimes be extreme . . . “

In a recent article, Professor Shapiro further clarified his use of the phrase, “natural genetic engineering,” or NGE:

NGE is shorthand to summarize all the biochemical mechanisms cells have to cut, splice, copy, polymerize and otherwise manipulate the structure of internal DNA molecules, transport DNA from one cell to another, or acquire DNA from the environment. Totally novel sequences can result from de novo untemplated polymerization or reverse transcription of processed RNA molecules.

NGE describes a toolbox of cell processes capable of generating a virtually endless set of DNA sequence structures in a way that can be compared to erector sets, LEGOs, carpentry, architecture or computer programming.

NGE operations are not random. Each biochemical process has a set of predictable outcomes and may produce characteristic DNA sequence structures. The cases with precisely determined outcomes are rare and utilized for recurring operations, such as generating proper DNA copies for distribution to daughter cells.

It is essential to keep in mind that “non-random” does not mean “strictly deterministic.” We clearly see this distinction in the highly targeted NGE processes that generate virtually endless antibody diversity.

In summary, NGE encompasses a set of empirically demonstrated cell functions for generating novel DNA structures. These functions operate repeatedly during normal organism life cycles and also in generating evolutionary novelties, as abundantly documented in the genome sequence record.

(From What Natural Genetic Engineering Does and Does Not Mean, Huffington Post, February 28, 2013.)

Perhaps the most important evidence for natural genetic engineering is the discovery of transposable elements in DNA.  These were first identified by Barbara McClintock in 1948, a discovery for which she was later awarded the Nobel Prize.  Transposable elements, also called transposons and retrotransposons, are segments of DNA that can move, or be replicated, into another part of the DNA molecule.  In general, this process is either a “cut and paste” or a “copy and paste” operation, using special proteins to operate on the DNA, sometimes with RNA as an intermediary molecule.
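
Continuing in the computational spirit of this post, here is a deliberately naive string-level sketch of the two modes.  The genome is modeled as a plain Python string; real transposition depends on specific enzymes and recognition sequences that this toy ignores.

```python
# Naive string-level model of the two transposition modes. The
# "genome" is a plain string; real transposition uses specific
# enzymes and recognition sequences that this toy ignores. Assumes
# the destination lies outside the element being moved.

def cut_and_paste(genome: str, start: int, end: int, dest: int) -> str:
    """DNA-transposon style: excise genome[start:end], reinsert at dest."""
    element = genome[start:end]
    remainder = genome[:start] + genome[end:]
    if dest > end:                    # account for the excised length
        dest -= len(element)
    return remainder[:dest] + element + remainder[dest:]

def copy_and_paste(genome: str, start: int, end: int, dest: int) -> str:
    """Retrotransposon style: duplicate genome[start:end] at dest
    (in real cells the copy travels through an RNA intermediate)."""
    element = genome[start:end]
    return genome[:dest] + element + genome[dest:]

g = "AAAA" + "TGCATGCATG" + "CCCC"
print(cut_and_paste(g, 4, 14, len(g)))    # element moves to the end
print(copy_and_paste(g, 4, 14, len(g)))   # element is duplicated
```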

Retrotransposons make up a significant portion of the human genome, about 42%.  One type of transposable element, the “Alu” sequence, alone accounts for about 10% of the human genome and is one of the main genetic markers for primates (including humans).  However, almost all transposable elements are contained within the non-coding regions of DNA and are therefore not directly expressed as proteins.  This DNA has typically been called “junk DNA,” but recent research from the ENCODE project (“Encyclopedia Of DNA Elements”) has demonstrated a wide variety of functions for the non-coding portions of DNA.

I have to mention that, as a computer designer and coder, this discovery of movable elements in the non-coding regions of DNA reminds me of one of the most common ways we would modify computer programs.  First, we would locate an old segment of code that functioned similarly to the desired new function.  Then we would copy that segment into another part of the program, but leave it unexecuted until the new segment was modified to accomplish its intended new function.  Finally, we would activate the new segment and test it.  Nevertheless, DNA represents computational capabilities that I have never seen in any existing computer system.  It has now been demonstrated that the so-called “junk DNA” has the ability to affect the “non-junk” portion of the genome by controlling when, or whether, certain proteins are expressed.

Continuing with the computer analogy, Freeman Dyson also speaks of DNA as a computer program, characterizing DNA as software and proteins as hardware.  I think that is a little too simple, since individual proteins exhibit many of the cognitive abilities described by Dr. Shapiro.  Each separate molecule in the cell, including each protein, has its own processing capability.

One way in which Professor Dyson is correct, though, is illuminated by the discovery of proteins that act as molecular machines.  This is another fascinating area of biology.  Many of the functions of the cell are carried out by proteins that can best be described as miniature machines.  One important example is the generator used to make ATP in the mitochondria.  ATP, or adenosine triphosphate, is the main energy molecule for almost all forms of life.  This ATP generator, or ATP synthase, looks remarkably like a tiny motor.  The “motor” is powered by a hydrogen ion concentration differential across the inner mitochondrial membrane; that differential is generated by molecular pumps which push hydrogen ions (protons) across the membrane.  An animation of ATP synthase follows:  http://www.youtube.com/watch?v=PjdPTY1wHdQ.
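
The energetics of this motor make for a nice back-of-the-envelope calculation.  The numbers below are typical textbook values, assumed for illustration rather than taken from any particular source: a proton-motive force of about 0.18 V and roughly 50 kJ/mol to make ATP under cellular conditions.

```python
# Back-of-the-envelope energetics of the ATP synthase motor, using
# typical textbook values (assumed, not from a specific source):
# a proton-motive force of ~0.18 V and ~50 kJ/mol to synthesize ATP
# under cellular conditions.

F = 96485.0       # Faraday constant, C/mol
pmf = 0.18        # proton-motive force across the membrane, volts
atp_cost = 50e3   # J/mol to make ATP in vivo

energy_per_proton = F * pmf              # J per mole of protons
protons_per_atp = atp_cost / energy_per_proton

print(f"Energy per proton: {energy_per_proton / 1e3:.1f} kJ/mol")
print(f"Protons needed per ATP: {protons_per_atp:.1f}")
# -> ~17 kJ/mol per proton, or roughly 3 protons per ATP, close to
#    the stoichiometry reported for the rotary mechanism.
```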

The implications of all the above biology for lowering entropy are enormous.  The molecular machines are themselves an example of low entropy, being highly structured, functional assemblies of proteins.  The pumping of protons across a membrane uses energy to create a state of low entropy by concentrating energy at a particular location.  The ATP itself is a storehouse of energy for future use.  Protein folding is another entropy-lowering process.  The DNA specifying the information necessary to manufacture proteins is perhaps the supreme example of low entropy, particularly now with the discovery of purposeful “junk DNA.”  One could easily conclude that all of life is powered by the miracle of low entropy overcoming the global tendency for entropy to increase.

Life can be viewed as a struggle to maintain low entropy.  We need sources of low entropy to live: food, shelter, energy, and so on.  The ultimate source of low entropy is the sunlight that plants use to create carbohydrates.  However, once our low-entropy material needs are secured, we seek an ordered personal life, family life, and social life.  Some say that old age is the result of the loss of our ability to maintain low entropy.  In other words, life is a struggle to maintain low entropy in the face of the law of increasing entropy.  As individuals, we will lose that struggle, since death is certain.  As a species, however, the trend toward low entropy, toward more complex ordering, can continue.

Before life began to evolve, matter on earth was subject to the laws of physics and chemistry.  One of those laws is the law of increasing entropy: low-entropy sunlight is absorbed and then radiated back into space as high-entropy heat.  However, the laws of nature themselves contain a provision for entropy-lowering interactions.  I strongly believe that such a provision is the result of the decisionality inherent in the collapse of the quantum wave function.  My reason for such a belief lies mainly in the order that results from entropy-lowering interactions, especially the order inherent in life.  All of our human experience tells us that order results from rational decisionality; it does not result from randomness.  The mathematics of random chance rules out any likelihood that life arose by chance alone.

After life began to evolve, it naturally took advantage of entropy-lowering processes.  Natural selection and fitness are crucially based on the efficient use of energy.  A recent example is Microraptor, a prehistoric four-winged animal.  Microraptor’s four wings allowed it to make tight turns around the many forest trees in its habitat, but they also caused additional drag and a consequent loss of speed and energy.  It therefore took Microraptor more energy to accomplish what modern birds can do.  Modern birds evolved two wings with additional muscle control, gaining improved maneuverability without the drag of a second pair of wings.  Efficient use of energy is crucial for survival.

It is therefore very surprising that nature and evolution would have produced in humans a single organ that consumes about 20% of our energy yet weighs only about 2% of our total body weight.  That is the amazing, almost unbelievable, statistic for the brain.  If we view human life as the pinnacle of evolution, then the entire evolutionary path must proceed towards higher consciousness and higher intelligence.  Therefore, if Professor Shapiro is right about natural genetic engineering (and I am convinced he is; he draws upon a huge body of research done by others), then the modifications made at the cellular level must include a bias toward enhanced consciousness.

In my next section, I will begin to address the evidence from consciousness.  This will be difficult, because science can say very little about consciousness.  Some take the position that consciousness is an epiphenomenon: that it emerges, ex novo, from complex calculations and therefore has no real existence.  Some take the position that mind is a separate category from matter, leading to dualism.  I take the position that consciousness is embedded in matter, a position called panpsychism.  Furthermore, I hold that the way consciousness has become embedded in matter is through the inherent decisionality of quantum decoherence.  One way to view this position is that the universe performs a quantum calculation on every transfer of energy.  But it would be a mistake to think that this calculation is the same as a calculation that could be performed by a computer.  Stay tuned.
