Irreducibility and unpredictability:

from simulation to creation?
computationally unfolding text produced by the multicellular automaton Otto B. Wiersma

27 Sept. 2003 – 15 May 2004 (last update)


How to study complex eventities

Both mathematical formalism and experimental evidence help scientists discover how eventities work. So far, many eventity-aspects have been reduced until they fit into relatively simple mathematical schemes and can be tested in relatively simple experimental setups. Despite the evident success of this piecemeal approach in tackling ‘how-does-it-work’ problems, it is questioned whether these reductive methods of investigation will cover so-called ‘complex’ eventities, which seem to resist such aspect-reduction. It might be that more complex eventities simply call for more complex mathematical formalisms and more complicated experiments. A different approach suggests the use of models with simple initial conditions and simple rules to provide an adequate simulation of the behavior of complex eventities. The issue is: should our descriptions of nature’s toolbox become ever more mathematically complex, or should we look for computationally simple simulations?

Are complex eventities irreducible?

A keyword that often pops up in the disputes about these different approaches is ‘irreducibility’, here vaguely phrased as the notion that a complex eventity would, under aspect-reduction, lose its characteristic properties, resulting in inadequate descriptions. Starting with Pythagoras (‘everything is numbers’), one can present the history of philosophy as marked by all kinds of aspect-reductions (e.g. atomism, materialism, physicalism, vitalism, logicism, historicism, spiritualism). In this light the arguments for ‘irreducibility’ are interesting, so I will also search for suggestions as to how irreducibility could become part of the philosopher’s toolbox.
Looking for a way to make vague notions about irreducibility in the research of complex eventities more concrete, I found references to a dispute triggered by a book by Michael Behe (Darwin’s Black Box, Free Press, 1996), in which he gives examples of biochemical systems that are (in his view) irreducibly complex. So let’s take some biochemical systems as possible examples of irreducibly complex eventities.

the cilium

the flagellum
Both the cilium and the flagellum can be found sticking through the cell membrane, and one of the obvious functions of both is movement: moving the cell through its environment or moving the environment along the cell.

Biochemical system as example of an irreducibly complex eventity?

Giving an adequate description of how a biochemical system such as a cell works already makes for a pretty complicated story, which is by now far from complete despite the overwhelming amount of detailed information documented in the basic textbooks. Providing a similarly adequate description of how such a system came into being seems at present to be beyond the reach of the available scientific tools, if it has to meet the demand for experimental evidence. A description of any step in a possible evolution towards a specific system would have to involve a description of the relevant properties and property-relations (laws) of the system’s predecessors and their environments, in order to provide evidence for at least the possibility (if not high probability) of next steps. Some understanding of how complicated this is can be gained by considering a few of the basic biochemical elements involved in a biochemical system.
Biochemical systems share a unique (but not the only possible) molecular ‘alphabet’ in and with which their existence is expressed. The structure of the active biochemical molecules in the present-day cell is largely determined by the structure of the DNA. The genetic archive DNA is a repeating sequence that contains four complementary paired bases as ‘building blocks’: Adenine – Thymine and Cytosine – Guanine, coupled in a double-stranded helix by hydrogen bonds and sugar-phosphate backbones. A typical animal cell can contain one meter of DNA, consisting of 3×10^9 nucleotides. A gene is a small part of this DNA sequence (a gene can typically be a sequence of around 6000 bases - represented by the bases in one strand this looks like ACTTGCCCATGGTCAAAGTTTCGATCGATTAGAGTCC etc.), which not only replicates (to transfer the genetic information), but also transcribes to single-stranded RNA with almost the same nucleotides (only the RNA base Uracil differs slightly from the DNA base Thymine, and there is a slight difference in the sugar used). The four bases of RNA are organized in triplets, forming 4^3 = 64 codons that through tRNA appear to be associated with the sequencing of 20 different amino acids, the building blocks of proteins. The mRNA is read by ribosomes that synthesize proteins (like DNA and RNA also repeating sequences, but now consisting of the 20 different amino acids). The thousands of different proteins in the cell have very different chemical and structural properties, serving a wide range of functions that are vital for the development and persistence of the cell. The common ‘alphabet’ and many homologies (matching (parts of) genes) in the genomes of different species suggest a common root of the great diversity of biochemical systems, a ‘tree of life’.
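The codon arithmetic in the paragraph above can be traced in a few lines of Python; this is a minimal sketch (the variable names are my own), enumerating the 64 triplets and transcribing the example gene fragment quoted above into its mRNA form by replacing Thymine with Uracil:

```python
from itertools import product

# The four RNA bases, organized in triplets, give 4**3 = 64 codons.
codons = [''.join(t) for t in product("UCAG", repeat=3)]
print(len(codons))  # 64

# Transcription sketch: the mRNA mirrors the coding strand of the DNA,
# with Uracil (U) in place of Thymine (T). The sequence is the example
# gene fragment quoted in the text above.
coding_strand = "ACTTGCCCATGGTCAAAGTTTCGATCGATTAGAGTCC"
mrna = coding_strand.replace("T", "U")
print(mrna)
```

Since 64 codons map onto only 20 amino acids (plus stop signals), the genetic code is necessarily redundant: several codons per amino acid.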
The mainstream (Neo-)Darwinist evolution theory aims to find and describe in detail the gene-creation/mutation/selection mechanisms, in order to build the framework of a natural history of common descent as a launch base for a future, better understood and perhaps even more controlled evolution.
Alberts et al. (1994) present evolution by natural selection as the central principle of biology: random variation in the genetic information, followed by selection in favor of advantageous genetic information. We cannot go back in time to witness the unique molecular events that took place billions of years ago, but those ancient events have left many traces for us to analyze (p. 3, cf. p. 697: hypotheses about events occurring on an evolutionary time scale are difficult to test, but clues abound (..)).
The early atmosphere was rich in reactive molecules and far from chemical equilibrium. In that kind of environment it just happens that from organic molecules (containing C) and other simple molecules (like CO2, NH3, CH4, H2, H2O) more complex molecules like amino acids, sugars, purines and pyrimidines are formed, which chain up to polymers (nucleotides and polypeptides): the building blocks of all present-day living cells (p. 4).
Some of the proteins act as catalysts (enhancing chemical reactions), through related RNA producing more of the protein itself (self-promoting), thus forming autocatalytic systems. Proteins cannot reproduce themselves; polynucleotides can, because of the complementary pairing of their nucleotide subunits. So a crucial reciprocal relation evolved between RNA synthesis and protein synthesis, which gives RNA a central role in the origin of life (p. 5, 6).
The interactions between RNA and proteins are determined by their chemical properties, which are responsible for the characteristic folding of RNA and proteins and the binding of specific molecules. By replicating its nucleotide sequences the RNA passes on its factual information, and by interacting with other molecules RNA exerts different actual functions (p. 7-9). Compartmentalization by membranes (made mainly of phospholipids with hydrophilic heads and hydrophobic tails) created the advantageous protective context for the catalytic cooperation of different RNAs and proteins. In a later stage DNA became the more stable genetic archive. Almost the same nucleotides as in the single-stranded RNA are found in the double-stranded DNA, which is (through this double-stranded structure) more stable (and therefore provides the opportunity for longer sequences) and easier and more reliable to replicate, with better error reduction (p. 10, 11).
<picture dnarnaproteins>
Discussing the irreducibility of any biochemical system, one should consider the polynucleotide and polypeptide sequences involved in the light of their traceable evolutionary and functional pathways. Are there traces of gene recombinations (duplications or translocations, raising new functions), fine-tuning point mutations, domain shuffling (using parts of other/older genes)? Related to the cilium and flagellum this means e.g. a detailed evolutionary and functional examination of the microtubules from which they are composed. Is there any relation to other membrane-specific functions like ion-pumping, ATP synthesis, protein secretion and lipid synthesis? Also relevant are the metabolic context (the enzymatic chains synthesizing the molecules that fuel the biochemical system), the cell-type context (taking into account the sequence of events in the development of the cell or combination of cells in which the biochemical system is expressed), and the outer-cell environment (not only internal signals but also external signals from the environment activate or repress gene expression, which determines the specificity of biochemical (sub-)systems and cell differentiation).
The simple roots of complexity
Wolfram (2002) shows that complexity can be found already in a system with one simple initial condition and a simple rule for its developing behavior. He demonstrates this with cellular automata, e.g. the rule 110 cellular automaton:
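The rule 110 automaton itself is easy to trace step by step. In this minimal Python sketch (function names are my own), each new cell is read off as one bit of the rule number, indexed by the (left, center, right) neighborhood above it:

```python
# Elementary cellular automaton: the neighborhood (left, center, right)
# forms a 3-bit index into the rule number; for rule 110 the binary
# pattern 01101110 fixes the new cell value for each of the 8 cases.
def step(cells, rule=110):
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

# Simple initial condition: one black cell on a row of white cells.
row = [0] * 31
row[15] = 1
for _ in range(10):
    print(''.join('#' if c else '.' for c in row))
    row = step(row)
```

(The row here wraps around at the edges; Wolfram’s pictures use an effectively unbounded row, but for a few steps the output is the same familiar growing triangle of structures.)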

Repetitive or nested behavior can be predicted (it is computationally reducible), but complex behavior cannot be predicted (it is computationally irreducible). Wolfram states that this is not due to a lack of mathematical tools (which one could expect to be developed in the future), but to a fundamental limit of mathematics that we meet in dealing with complex behavior. The complex behavior of many systems cannot be predicted by mathematical shortcuts (like equations); the only way to arrive at the behavior is to go all the way, tracing all the steps. This holds for all universal systems (systems that can emulate each other’s behavior, given the appropriate initial conditions and rules). Led by his conviction that all systems showing complex behavior are universal as well, Wolfram states the Principle of Computational Equivalence: all the different processes that the different universal systems are involved in can be viewed as computations. This equivalence (if valid) could be regarded as more fundamental than e.g. the equivalence of mass and energy, or the local equivalence of gravity and acceleration. The more so if Wolfram is right in stating that the threshold for universality is low (which means that there is an overwhelming number of universal systems) and that the upper limit of computational sophistication is reached by any system passing this threshold (which means that no system can ever carry out explicit computations more sophisticated than those carried out by universal systems like cellular automata or Turing machines).
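The contrast between a shortcut and step-by-step tracing can be made concrete with a deliberately trivial toy case of my own (not one of Wolfram’s examples): a doubling rule is computationally reducible, because a closed-form equation predicts the final state without tracing any intermediate steps:

```python
# A computationally *reducible* toy system: repeated doubling.
def trace(x0, n):
    x = x0
    for _ in range(n):
        x = 2 * x          # apply the rule step by step
    return x

def shortcut(x0, n):
    return x0 * 2 ** n     # the equation jumps straight to step n

print(trace(3, 20), shortcut(3, 20))  # both print 3145728
```

For class 4 automata like rule 110 no such shortcut is known; there, on Wolfram’s account, tracing all the steps is the only route to the final state.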

Different kinds of irreducibility

1 elementary irreducibility: if an element of a set cannot be decomposed into other elements of the same set according to (a) specific operation(s) or rule(s), this element can be called elementary irreducible. Examples in mathematics are prime numbers, which cannot be split according to the multiplication operation, or polynomials which cannot be factored into nontrivial polynomials over the same field. Another example: metrical units (e.g. length) are expressed with numbers, but cannot be reduced to numbers.
2 equational irreducibility: if a final state of a system cannot be predicted by combining an initial state with equations, this final state can be called equationally irreducible.
3 computational irreducibility: if complex behavior of a system cannot be predicted from or reduced to its initial conditions and underlying rules, this complex behavior can be called computationally irreducible. Wolfram’s (2002) examples: the complex behavior of class 4 cellular automata, which he generalizes (through the Principle of Computational Equivalence) to the complex behavior of all universal systems.
4 functional irreducibility: if a function of a system cannot be reduced to functions of its parts or to functions of its encapsulated subsystems, this function can be called functionally irreducible. Example: the function of the reverse transcriptase enzyme (making a DNA copy of an RNA strand to form a DNA-RNA hybrid helix as the basis for a double helix with two DNA strands) cannot be reduced to the functions of specific domains of the enzyme or to the functions of the amino acids of which the enzyme is composed.
5 evolutionary irreducibility: if an evolutionary pathway to a system cannot be described convincingly (which should also involve experimental evidence for the mutational (and beneficial) steps), this system can be called evolutionary irreducible. Behe’s (1996) examples of biochemical systems: the cilium, the flagellum, the blood clotting cascade.
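The first kind in the list above can be made fully concrete: a standard trial-division check (a textbook sketch, not tied to any source in the text) tests whether a natural number is elementary irreducible under multiplication, i.e. prime:

```python
def is_prime(n):
    # Elementary irreducibility under multiplication: n > 1 is prime
    # when no divisor between 2 and sqrt(n) splits it.
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

print([n for n in range(2, 30) if is_prime(n)])
```

Note that this kind of irreducibility is decidable by a finite check, unlike the computational kind (item 3), where by definition no shortcut test is available.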
Can these different kinds of irreducibility perhaps be reduced to each other? Are elements, states, behaviors, functions and systems not just explanatory abstractions, different points of view?
Books and articles:
Alberts, B et al., Molecular Biology of the Cell, 1994
Behe, M., Darwin’s Black Box. Free Press, 1996
Wolfram, S., A New Kind of Science. 2002
Internet links:
Eric Weisstein’s World of Mathematics
ISCID International Society for Complexity, Information and Design Encyclopedia
Fred Hoyle: it’s too unlikely > intelligent control
Mike Behe: it’s too complicated > intelligent design
Stephen Wolfram: it’s (not just too) simple < computational equivalence
The biochemical cell in numbers, structures and processes, e.g. a mammalian cell: the atoms CHON determine 99% of the cell weight; H2O covers 70% of the cell weight; proteins 18%, phospholipids and other lipids 5%, small metabolites 3%, polysaccharides 2%, RNA 1.1%, DNA 0.25%. DNA, mRNA, rRNA, tRNA in the cell (polynucleotides < 4 nucleotides (base + sugar + phosphate - several functions: carriers of chemical energy or particular chemical groups, universal signaling, storage of information) < U-A C-G / T-A C-G (nucleic acids) < CONHPS // functional combinations of polypeptides < hydrophobic/hydrophilic folding of proteins < 20 amino acids (different side-chains - different chemical properties) < CO-HN + H2O < NH3+ -CH-CO2- < CONHPS // energy < polysaccharides < compounds with CH2O // structures (e.g. membranes) < lipids (e.g. phospholipids) < fatty acids with hydrophilic heads (CO2) and hydrophobic (CH2) chain-tails that can form (bi-)layers). Bacteria contain several million nucleotides of DNA, humans three billion, some flowering plants several hundred billion. The 64 codons (combinations of the four bases UACG taken three at a time) are associated (through tRNA) with the 20 amino acids that connect to each other to form a chain (polypeptide), also known as a protein. Transcription: triggered e.g. by RNA polymerase binding to a DNA promoter sequence; DNA gene > RNA > amino acids. Translation: mRNA ‘read’ by ribosomes using tRNA > combining the amino acids that are carried by the tRNA into proteins. Gene regulation (examples: the life cycle of a bacteriophage; feedback controls and multiple factors cooperating to decide whether a single gene should be turned on or off; interfering/gene-blocking double-strand siRNA transcribed from what previously was labeled ‘junk-DNA’). Covalent (strong chemical bonds, sharing electrons, e.g. H3N (amine) or CO2H (carboxyl)) and weaker non-covalent bonds (ionic, hydrogen, van der Waals).
Kinetic energy in motions (translation, vibration, rotation), chemical energy e.g. carried by ATP.
A system comes into being, having some innovative property. What is the relation between conservation (of embedded or encapsulated subsystems) and innovation? Is innovation without conservation possible - something new out of the blue? Part of the complexity of developing systems lies in alternating conditions and alternating rules.
Michael Behe. The old belief in spontaneous generation: Ernst Haeckel and Thomas Henry Huxley, 1859, Urschleim - which appeared to be the mud that failed to grow. Since they were unaware of the complexity of cells, they found it easy to believe that cells could originate from simple mud. In Darwin's time all of biology was a black box: not only the cell, or the eye, or digestion, or immunity, but every biological structure and function, because, ultimately, no one could explain how biological processes occurred. (..) Proteins are the machinery of living tissue that builds the structures and carries out the chemical reactions necessary for life. Proteins carry out amazingly diverse functions: a typical cell contains thousands and thousands of different types of proteins to perform the many tasks necessary for life. Although the protein chain can consist of anywhere from about 50 to about 1,000 amino acid links, each position can only contain one of twenty different amino acids. In this way they are much like words: words can come in various lengths, but they are made up from a discrete set of 26 letters. Now, a protein in a cell does not float around like a floppy chain; rather, it folds up into a very precise structure, which can be quite different for different types of proteins. In general, biological processes on the molecular level are performed by networks of proteins, each member of which carries out a particular task in a chain. [biochemical explanation of vision] (..) In private even most evolutionary biologists will admit that science has no explanation for the beginning of life. (..) Darwin: "If it could be demonstrated that any complex organ existed which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down." Behe: A system which meets Darwin's criterion is one which exhibits irreducible complexity.
By irreducible complexity I mean a single system which is composed of several interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to effectively cease functioning. An irreducibly complex system cannot be produced gradually by slight, successive modifications of a precursor system, since any precursor to an irreducibly complex system is by definition nonfunctional. Example: the mousetrap - if any of the parts are missing, the trap does not function. (OBW: but see e.g. John McDonald, A Reducibly Complex Mousetrap; Pete Dunkelberg; David Ussery; Marnix Medema on Behe's theory; and Gert Korthof, who derives from Behe's logic that "all genetic diseases are examples of deliberate intelligent design", leading to the question: what is 'intelligent', a redundant, reducible system that is mutation-tolerant, or a non-redundant, irreducible system that is easily damaged by mutations? You can't have it both ways. (..) Could it be that design is another word for 'we-do-not-yet-know-a-natural-explanation'? It is a mentalistic (like vitalism) and essentialistic concept. See also: Irreducible Complexity and Michael Behe.)
Are there biochemical systems which are irreducibly complex?

Michael Denton (Evolution: A Theory in Crisis, 1986): intermediates do not exist (p. 108: "There is no hint anywhere of any sort of structure halfway to the complex molecular organization of these fascinating microhairs [cilia] through which evolution might have occurred."). Michael Behe goes one step further: intermediates cannot exist (the system is irreducibly complex). Behe explains: cilia are hairlike organelles on the surfaces of many animal and lower plant cells that serve to move fluid over the cell's surface or to "row" single cells through a fluid. Cilia are composed of at least half a dozen proteins, combined to perform one task, ciliary motion. In the cilia the flexible linker protein nexin converts the sliding motion of neighboring microtubuli into a bending motion. Irreducible complexity on the molecular scale: the components of cilia are single molecules. This means that there are no more black boxes to invoke; the complexity of the cilium is final, fundamental. The complexity of the cilium is irreducible: it cannot have functional precursors, so it cannot be produced by natural selection, which requires a continuum of function to work. Behe: if the cilium cannot be produced by natural selection, then the cilium was designed. Journal of Molecular Evolution (JME): zero papers discussing detailed models for intermediates in the development of complex biomolecular structures (other journals: the same lack of...) (OBW: but here Mike Behe did not do his homework, as extensively documented by e.g.
David Ussery). We are not inferring design to account for a black box, but to account for an open box: the fundamental mechanisms of life cannot be ascribed to natural selection, and therefore were designed. Behe: By "intelligent design" (ID) I mean to imply design beyond the laws of nature (different from the physicist Paul Davies and the geneticist Michael Denton in their recent books, respectively, The Fifth Miracle: The Search for the Origin and Meaning of Life (Davies 1999) and Nature’s Destiny: How the Laws of Biology Reveal Purpose in the Universe (Denton 1998); see also Pete Dunkelberg). Jerry Coyne: this ID is unfalsifiable. Behe: in fact, my argument for intelligent design is open to direct experimental rebuttal, e.g. produce a flagellum by natural selection. The claim of intelligent design is that "No unintelligent process could produce this system." The claim of Darwinism is that "Some unintelligent process (involving natural selection and random mutation) could produce this system." To falsify the first claim, one need only show that at least one unintelligent process could produce the system. The examples, I think, better get across the concept of irreducible complexity than does the definition I offered. (..) Although it produces some complexity, the self-organizing behavior so far observed in the physical world has not produced complexity and specificity comparable to irreducibly complex biochemical systems. In his recent book Tower of Babel: The Evidence against the New Creationism, however, philosopher of science Robert Pennock argues that science should avoid a theory of intelligent design because it must of necessity embrace "methodological naturalism" (Pennock 1999). (..) Behe: SETI radio-wave-watchers think they can detect intelligent signals. (..) Intelligence, human or not, is evident only in its effects. (..) Francis Crick has famously suggested that life on earth may have been deliberately seeded by space aliens (Crick and Orgel 1973).
Behe: The biochemical evidence strongly indicates design, but does not show who the designer was. But are gradual Darwinian natural selection and intelligent design the only potential explanations? Shanks and Joplin (1999) direct our attention to complexity theory, which concerns the ability of systems to self-organize abruptly, sometimes in surprising ways. They suggest that irreducibly complex biochemical systems might in principle be explained by self-organization, eliminating the need to invoke intelligence. They then go on to argue that biochemical systems are "redundantly complex"--that is, contain components that can be removed without entirely eliminating function (Shanks, Niall and Karl H. Joplin (1999), "Redundant complexity: A critical analysis of intelligent design in biochemistry", Philosophy of Science 66: 268-282). Example: the Belousov-Zhabotinsky reaction, a self-organizing chemical system. Behe: the system lacks a crucial feature--the components are not "well-matched", so it is a "simple interactive" system (designated 'SI'). Systems that require well-matched components are irreducibly complex ('IC'). The line dividing SI and IC systems is not sharp. (..) Even if a biological system displays self-organizing behavior, the question of its origin remains. (..) The observation that some biochemical systems are redundant, however, does not entail that all are.

Dr. Kenneth Miller (Finding Darwin's God, 1999) opposes the notion that the eubacterial flagellum is irreducibly complex, and therefore could not have evolved in a step-by-step Darwinian process; he proposes that the flagellum is simply a composite of pieces co-opted from other systems within the cell. See also: Ian Musgrave,
Evolution of the Bacterial Flagella. And Ken Miller about the evolution of the blood clotting cascade.
William Dembski (Intelligent Design: The Bridge Between Science and Theology, 1999) claims that DNA is a contingent, complex and specified string. Contingency: freedom of choice for elements of a string, e.g. words in sentences, bases in DNA; complexity: objects that are not so simple that they can be explained by chance; specificity: the object exhibits a pattern characteristic of intelligence. WD defines complex specified information as "any specified information whose complexity exceeds 500 bits of information"
(p. 166), which would mean that the cytochrome-c family of genes would not qualify, having an information content of 233-373 bits (Gert Korthof). GK: we must distinguish between two questions: (1) is there a natural mechanism that creates the very first information (= origin of life, e.g. what generated autotrophic prokaryotes from chemical building blocks (cf. Senapathy, 1994))? (2) can natural selection and mutation increase the information content of DNA? WD is forced to believe that CSI existed before the origin of life: CSI could be 'abundant in the universe' and 'CSI is inherent in a lifeless universe'. This amounts to free-floating ghostly information in space, which is too far removed from down-to-earth biological science. The main thesis of Dembski's book is that an intelligent designer is a valid explanation for the origin of Complex Specified Information. But how can he accuse Darwinists of only explaining the flow of information, while his own explanation of CSI relies on pre-existing CSI? For Dembski holds that "To explain an instance of CSI requires at least as much CSI as we started with." Cf. Paul Davies (..): "all random sequences of the same length encode about the same amount of information" (sc. mathematically defined information, e.g. in terms of compressibility). Davies concludes that DNA sequences can only be subdivided qualitatively (in a non-mathematical way); the biologically relevant subset can only be defined in biological terms. Dembski accepts that natural selection can produce 'micro-evolution', but not macro-evolution. WD: Darwinists often claim to have explained the origin of genetic information, while in fact they have only explained the flow of information. The set of indispensable parts of a system is known as the irreducible core of that system - to be found using knockout experiments; the core can be thought of in terms of components, functions, or mutations.
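Dembski’s 500-bit threshold can be put in perspective with the simplest possible information model (uniform, independent bases - a deliberately naive assumption, and the function name is my own): each DNA base then carries log2(4) = 2 bits, so 500 bits correspond to a sequence of only 250 bases:

```python
import math

def naive_information_bits(seq, alphabet_size=4):
    # Naive symbol-counting model: every symbol carries
    # log2(alphabet_size) bits, ignoring any biological structure
    # or meaning in the sequence.
    return len(seq) * math.log2(alphabet_size)

print(naive_information_bits("ACGT" * 10))  # 40 bases -> 80.0 bits
print(500 / math.log2(4))                   # 500-bit threshold -> 250.0 bases
```

Davies’s observation bites exactly here: on such a counting definition a random string of 250 bases scores as high as a gene of 250 bases, which is why the biologically relevant subset can only be defined in biological, not purely mathematical, terms.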
GK: It could be that we easily accept that DNA contains information not because of, but despite the mathematical definition of information. Clearly something is missing: meaning or quality. The presence of information in DNA does not fully explain the living organism. What matters is gene regulation: the expression of information. One gene > ten proteins; what is the relation to the information content? The protein itself cannot try out all folding configurations, and yet protein folding is done in (milli)seconds: the Levinthal paradox. There is a real increase in geometrical complexity during development. (..) It is contradictory to reject natural law as something that is designed in the design inference, and to accept natural law as something that is designed in the Fine Tuning argument; this undermines the logic of his Explanatory Filter. If Dembski claims 'Intelligent Design' for all genes greater than 500 bits, then he cannot deny Intelligent Design for viruses, oncogenes and mobile elements (an ethical problem for WD?).
Russell Doolittle: It is conceivable that there would be some system in which a multiple knockout might function to some extent even though either single knockout loses function.
Keith Robison attempted to show that the blood clotting system is not irreducibly complex by devising a possible evolutionary pathway for its genesis. Behe: I was struck that irreducible complexity could be better formulated in evolutionary terms by focusing on a proposed pathway, and on whether each step (sc. necessary but unselected mutation) that would be needed to build a certain system along that pathway was selected or unselected. Behe suggests a quantification of irreducible complexity: the number of unselected steps on the evolutionary pathway. (OBW: the Darwinian mechanism is a (slow) one-step-at-a-time evolution by mutation & selection (adaptation > speciation). This small-steps mechanism is accepted and applied. What are the alternative proposals in terms of big steps? A setup with initial conditions under which a simultaneous modification of different parts of the genome would take place and survive would be a convincing experiment to prove the possibility of big-steps-at-a-time evolution. Could there be something like a natural simulation of multiple-steps-at-a-time / high step-speed (quantum-eventities) > biochemical encoding of the best simulation (simulation > speciation)? When and how does the ‘software’ run that generates the genetic alphabet and the key-code expressed in that alphabet? This is all speculative; there is no experimental evidence for now.)
In their 1994 theoretical paper, Nilsson and Pelger modeled one possible evolutionary pathway to the geometry of a fish-like eye from a patch of photoresponsive cells (Nilsson, D., and Pelger, S., "A Pessimistic Estimate of the Time Required for an Eye to Evolve", 1994). The point was to determine how many plausible, populational micro-steps of variation would be needed for very weak selection to yield a fish-like eye, and then, under reasonable assumptions, to convert micro-steps into generations and years. The answer was about 350,000: a geological blink of the eye. This answer is just one of many to the failed 19th-century complaint of insufficient time for evolution to have taken place. (..) evolutionary developmental biology: there, with the discovery of the developmental regulatory genes, we have learned how subtle, how versatile, and yet how simple the mechanisms can be for transforming one biological structure into another. (A professional but accessible account can be found in From DNA to Diversity: Molecular Genetics and the Evolution of Animal Design (2001) by Sean B. Carroll, Jennifer K. Grenier, and Scott D. Weatherbee. A popular but sound insight is available in "Which Came First, the Feather or the Bird?" by Richard O. Prum and Alan H. Brush, Scientific American, March 2003.) (..) We now know that evolution progresses in a modular way, with different systems evolving in parallel and nearly independently. (..) A 56-page article by Salvini-Plawen and Mayr in Evolutionary Biology (vol. 10, 1977), "On the Evolution of Photoreceptors and Eyes", answers many of the questions that Mr. Berlinski asserts are unanswered or unanswerable. Berlinski: In explaining the evolution of the eye in terms of such global geometrical processes, Nilsson and Pelger rather resemble an art historian prepared to explain the emergence of the Mona Lisa in terms of preparing the wood, mixing the paint, and filling in the details.
The conclusion—that Leonardo completed his masterpiece in more than a minute and less than a lifetime—while based squarely on the facts, seems rather less than a contribution to understanding. (..) If the papers by Snyder and Warrant & McIntyre say nothing about fish or octopuses, neither do they say anything about evolution. No mention there of Darwin’s theory, no discussion of morphology, not a word about invagination, aperture constriction, or lens formation, and nothing about the time required to form an eye, whether simple, compound, or camera-like. (..) Responding to my observation that no quantitative argument supports their quantitative conclusions—no argument at all, in fact—Mr. Nilsson has thus (1) offered a mathematically incoherent appeal to his only equation; (2) cited references that make no mention of any morphological or evolutionary process; (3) defended a theory intended to describe the evolution of vertebrate camera eyes by referring to a theory describing the theoretical optics of compound invertebrate eyes; (4) failed to explain why his own work has neglected to specify any relevant biological parameter precisely; and (5) championed his results by means of assumptions that his own sources indicate are false across a wide range of organisms. (..) random variation: the heart of the matter (..) If one assumes, as Nilsson and Pelger do, that probabilities need not be taken into account because all transitions occur with a probability of one, there is no problem to be discussed—but nothing of any conceivable interest, either. (..) A possible evolutionary pathway: the existence of such a path is hardly in doubt. Every normal human being creates an eye from a patch of photoresponsive cells in nine months. (..)To repeat, the flaw in Nilsson and Pelger’s work to which I attach the greatest importance is that, as a defense of Darwinian theory, it makes no mention of Darwinian principles. 
Those principles demand that biological change be driven first by random variation and then by natural selection. There are no random variations in Nilsson and Pelger's theory. (OBW: random variation = mutations in the genetic code in DNA, e.g. at the moment of replication.) (..) Behe, 1996, pp 154-156: Horowitz (1945), de Duve (Blueprint for a Cell) and Kauffman (The Origins of Order, 1993) identify problems for gradualistic evolution – their alternative suggestions are not backed by biochemical details (Maynard Smith: 'fact-free science'). Behe: origin-of-life experiments (Miller), amino acids > proteinoids production (Fox) and RNA-origin theories ('the prebiotic chemist's nightmare' (Joyce & Orgel)), mathematical models of evolution and sequence analysis did not succeed in 'proving evolution' so far. Biochemical journals and textbooks almost completely ignore evolution (see indices). Alternative theories (e.g. the symbiotic theory (Margulis) or complexity theory (Kauffman)) don't provide biochemical evidence either. Behe relates his findings of 'irreducibly complex biochemical systems' to intelligent design, vs Dawkins (The Blind Watchmaker: natural selection as automatic process, the click-trick), Dennett (Darwin's Dangerous Idea), Futuyma (vestigial organs > sloppy 'design'), Miller (pseudo-genes > sloppy 'design'). Behe, 1996, p 229: Design-theory has nothing to say about a biochemical or biological system, unless all the components of the system are known and it is demonstrated that the system is composed of several interacting parts. (..) 229 other factors that might have affected the development of life: common descent (B: gene duplication points only to common descent, not to the mechanism of evolution), natural selection, migration, population size, founder effects, genetic drift, linkage, meiotic drive, transposition and much more (..) 230 intelligent design does not mean that any of the other factors are not operative, common or unimportant (..)
one has to determine that a system is not a composite of several separable systems. (..) Arguments concerning evolutionary pathways: experimental evidence is much preferred to mere model building, since it would be extremely difficult for models to predict whether proposed changes in complex systems might have unforeseen detrimental effects.
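Nilsson and Pelger's headline number can be checked from the figures they report. A minimal sketch, assuming the inputs of the 1994 paper (an overall 80,129,540-fold morphological change taken in 1% steps; heritability 0.5, selection intensity 0.01, coefficient of variation 0.01):

```python
import math

# Figures reported by Nilsson & Pelger (1994); treated here as given inputs.
total_change = 80_129_540      # overall factor of morphological change
step = 0.01                    # each micro-step alters a quantity by 1%

# Number of 1%-steps from light-sensitive patch to camera eye
n_steps = math.log(total_change) / math.log(1 + step)

# Per-generation response under very weak selection:
# change factor = 1 + h2 * i * V (heritability, selection intensity, variation)
h2, i, V = 0.5, 0.01, 0.01
per_gen = 1 + h2 * i * V

n_generations = math.log(total_change) / math.log(per_gen)

print(round(n_steps))        # ~1829 steps of 1%
print(round(n_generations))  # ~364,000 generations
```

With one generation per year, this is the "geological blink of the eye" quoted above.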
Pete Dunkelberg,
Irreducible Complexity Demystified, 2003. Organisms and ecosystems evolve. It may not even make sense to expect a precursor to have had the same function (cf. a cow's tail). Instead of Behe's mousetrap, consider the Venus flytrap, which traps and digests insects to make up for the lack of nitrogen in the soils of its habitat. (..) Snap-traps very likely evolved from flypaper traps, as Darwin (Insectivorous Plants) thought. Venus flytrap: first, rather than gaining a part, it lost a part - the glue that the sundews use. Even more interestingly, the trap was able to evolve because the parts evolved. (..) Pentachlorophenol (PCP) is a highly toxic chemical, not known to occur naturally, that has been used as a wood preservative since the 1930s. It is now recognized as a dangerous pollutant that we need to dispose of. But how? Evolution to the rescue! A few soil bacteria have already worked out a way to break it down and even eat it. (OBW: by creating new genes, or by activating present genes?) Whales, mammals like us, lack a key part called Hageman factor, but their blood clots anyway. Under questioning at a meeting (Miller questions Behe at the Question and Answer session at the American Museum of Natural History meeting, April 23, 2002) Behe finally agreed that the cascade is not IC after all. (..) Cilia are many and diverse (for examples see Miller, K., Finding Darwin's God, 1999) and may contain two hundred or so different proteins and various numbers of microtubules. It turns out that being IC or not is not a property of the cilium itself. It depends on choices [of parts] we make. Behe regards proto-cilia as dysfunctional, but they are there and function, e.g. axopodia, reticulopodia. (..) The flagella of archaebacteria are not used for swimming, but for simpler ways of moving. Behe's three parts of the flagellum (motor, rotor and paddle) need additional parts, e.g.
proteins at the base that react to external stimuli and turn the motor on and off, and in some flagella cause it to change directions. The type three secretion system (TTSS) is part of the flagellum and has other functions than swimming (e.g. export of proteins that cause sickness). Bacteria can move across surfaces in organized swarms, and quickly colonize a new food source such as your own much larger cells. When swarming, they often grow many more flagella than usual and make cell-to-cell contacts with these flagella. Some bacteria also use their flagella to hang on to our cells as they try to break in and eat the cell contents. Flagella participate in the cause of quite a few bacterial diseases, including diarrhea, ulcers and urinary tract infections. (..)The whole irreducible complexity argument is based on fixed functions, parts, systems, organisms and environments. In nature all these things vary. IC is a matter of an observer specifying a combination of function, parts and system so that the specified function requires all the parts. There is no way for evolution to be sensitive to this, no way for it to matter at all. Nor does nature care about 'direct' vs 'indirect' evolution as perceived by us. (..) It always comes down to the same things: Given a population with inherited variation and also new variations from mutation or immigration, evolution occurs. Natural selection (instead of only random drift) occurs if some heritable variations are related to reproductive success.
Andrew Parker, In the Blink of an Eye, 2003. Light Switch theory: the development of eyes in predators (543 million years ago) triggered the evolution of hard parts in prey. The development of hard parts (as external properties) boosted the number of animal phyla from 3 in the Precambrian to 38 in the Cambrian. The theoretical time calculated for the evolution of the eye fits neatly with what palaeontology tells us: between 544 and 543 million years ago. So it seems Parker has a solution for two old Darwinian enigmas, the evolution of the eye and the Cambrian explosion, at one stroke.
Andrew Knoll ("Life on a Young Planet: The First Three Billion Years of Evolution on Earth", Princeton University Press, 2003) believes that the rise of oxygen fuelled the Cambrian explosion.
Paul Davies, The Fifth Miracle. The Search for the Origin and Meaning of Life, 1999. The target DNA-sequences can only be discovered (if ever) by biologists experimentally, not calculated by mathematicians. Paul Davies: early life forms (> 3.5 billion years old) found in rock formations in Australia (?). Life forms in oxygen-free conditions (e.g. in oil wells deep in the earth or other very hot environments). Transpermia hypothesis: life could originate from other planets and be transported to earth by meteorites. Look for life on Mars.
Stuart Kauffman's Autocatalytic Set theory is a theory about the origin of life and, by implication, of information. GK: The information content of DNA is not limited to being a linear sequence of 4 bases. Extra information is needed to translate the DNA into proteins; one gene can yield more than one protein; every amino acid can be coupled in 10 geometrical ways with its neighbour, so a sequence of 200 amino acids has 10^200 possible folding-configurations (this cannot be computed by any number of (super)computers, yet protein folding is done in (milli)seconds); there is a real increase in geometrical complexity during development from the 1D string of DNA-bases to the 3D organism; there are more than 30 different mathematical descriptions of complexity – which one(s) are appropriate for biology?
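The folding arithmetic above (10 conformations per residue, 200 residues) is easy to make vivid: even an absurdly fast brute-force search touches a vanishing fraction of the space. The sampling rate and the age of the universe below are illustrative assumptions, not figures from the text:

```python
# Levinthal-style arithmetic behind the "10^200 configurations" claim.
# Assumed numbers: 10 conformations per residue, 200 residues,
# 10^12 conformations sampled per second, ~4.3e17 seconds since the Big Bang.
residues = 200
per_residue = 10
configurations = per_residue ** residues          # 10**200, exact integer

samples_per_second = 10**12
age_of_universe_s = 4.3 * 10**17
sampled = samples_per_second * age_of_universe_s  # ~4.3e29 conformations tried

fraction = sampled / configurations
print(f"configurations: 10^{residues}")
print(f"fraction explored by brute force: {fraction:.1e}")
```

The point is not the exact constants but the scale: exhaustive search is out of the question, yet real proteins fold in milliseconds.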
Richard H. Thornhill and David W. Ussery ("A classification of possible routes of Darwinian evolution", The Journal of Theoretical Biology, 203: 111-116, 2000) suggest four classes of evolution: serial direct (e.g. increments of giraffe neck length; gradual change in enzyme specificity and activity resulting from single amino acid substitutions), parallel direct (approximately synchronous changes in more than one component, e.g. the eye: evolution of the retina approximately synchronous with that of the pinhole eye), elimination of functional redundancy (evolution from reptilian to mammalian jaws: the fossil intermediates Morganucodon and Kuehneotherium had both quadrate-articular and dentary-squamosal articulation), and adoption from a different function (e.g. recently discovered fossil evidence suggests that feather evolution did indeed follow such a sequence, with proto-feathers, composed of the same proteins as feathers, in Sinosauropteryx (Chen et al., 1998; Padian, 1999), probably marginally airworthy feathers in the non-flying Caudipteryx and Protarchaeopteryx (Ji et al., 1998), and feathers in the flying Archaeopteryx (Padian, 1998)). The proto-feathers and feathers probably also possessed functions in display, camouflage, recognition, etc., and it is possible that the actual sequence was more complicated than the above hypothetical one, with evolution at some stages being driven primarily by selection for such functions (Padian & Chiappe, 1998). However, the proto-feathers in Sinosauropteryx were so thickly distributed that they almost certainly did function as insulation (Padian, 1999). Another example: antifreeze glycoprotein in the blood of Antarctic notothenioid fishes, which enables them to survive in icy seas, is considered to have evolved from a functionally unrelated pancreatic trypsinogen-like protease, and the recent discovery of chimeric genes which encode both the protease and an antifreeze glycoprotein polyprotein strongly supports this theory (Cheng & Chen, 1999).
Piatigorsky uses the term "gene-sharing" for the encoding in a single gene of a protein with two or more functions, and suggests that this may be a widespread evolutionary 'strategy' (1998).
Fred Hoyle,
The Intelligent Universe (1983) thinks it unlikely that life, even on a cosmic scale, arose from non-living matter. Information necessary for the development of life comes from the future. The information is coming from a source of information, an intelligence, placed in the remote future.
Fred Hoyle,
The Mathematics of Evolution (1999; panspermia – life from space) worries about deleterious mutations in the human species (James Crow: "3 new deleterious mutations per person per generation"; why aren't we extinct? Mark Ridley's Mendel's Demon (..) also speculates about how to get rid of the excess of mutations. What Hoyle called 'genetic erosion' is now known as 'mutational meltdown'.). Hoyle about histone-H4 (having a chain of 102 amino acids and an extremely conserved structure in all eukaryote species): where are all the functional intermediates of histone-H4? Histone specialists G. Felsenfeld and M. Groudine conclude that "core histones are among the most highly conserved eukaryotic proteins known". GK: Neither John Maynard Smith nor anybody else has given evidence that natural selection is in fact capable of producing all the necessary intermediates of functional proteins such as histone-H3/H4.
Gardner, J. My Selfish Biocosm hypothesis asserts that life and intelligence are, in fact, the primary cosmic phenomena and that everything else—the constants of nature, the dimensionality of the universe, the origin of carbon and other elements in the hearts of giant supernovas, the pathway traced by biological evolution—is secondary and derivative.
Stephen Jay Gould, The Structure of Evolutionary Theory, 2002. Theory of punctuated equilibria: if evolution proceeds discontinuously, as the natural history of the dead suggests, natural selection must in part act on the level of a species.
Pääbo, The mosaic that is our genome Nature, 421, 409 - 412 (23 Jan 2003) Evidence from comparisons of DNA sequences between humans and the great apes. It seems clear that the human evolutionary lineage diverged from that of chimpanzees about 4–6 million years ago, from that of gorillas about 6–8 million years ago, and from that of the orangutans about 12–16 million years ago. (..) Not one history, but different histories for different segments of our genome. In this respect, our genome is a mosaic, where each segment has its own relationship to that of the African apes. (..) Within the human gene pool, most variation is found in Africa and what is seen outside Africa is a subset of the variation found within Africa. (..) We have come to realize that almost all features that set humans apart from apes may turn out to be differences in grade rather than absolute differences. (..) identify regions of the human genome where the patterns of variation suggest the recent occurrence of a mutation that was positively selected and swept through the entire human population. (..) study how genes interact with each other to influence developmental and physiological systems. As these goals are achieved, we will be able to determine the order and approximate times of genetic changes during the emergence of modern humans that led to the traits that set us apart among animals.
Chemical reaction times are on the order of femtoseconds (10^-15 seconds); Ahmed Zewail received the 1999 Nobel Prize for research in this area.
Mutation frequency is high enough to produce enough mutations for natural selection to work on.

The 'genetic code' is the key to decoding the encoded instructions in DNA. DNA is not directly useful for an organism; it has to be translated into proteins in order to be useful. The 'genetic code' does the translation from DNA-world to protein-world. The genetic code and the genetic content of an organism belong together, like a key and a lock. Genes change, but the genetic code did not (with a few minor exceptions). It's a highly arbitrary assignment of 64 codons to 20 naturally occurring amino acids (the building blocks of proteins). A chemical necessity in the association of amino acids with the codons has never been found. The code has been called 'a frozen accident'. The universality of the genetic code implies a common origin. Designs at the genetic level that clearly contradict evolution are possible but absent. (..) To make Lamarckian inheritance a serious evolutionary mechanism, one needs at least: (a) environmentally induced production of a new and beneficial protein, (b) a mechanism to translate it back via RNA into DNA, (c) a way to transport it to the germline, and (d) successful integration into the chromosome. These are all serious obstacles! To my knowledge there is no evidence that this strong form of Lamarckian inheritance ever occurred in nature. (OBW: but see e.g. Steele: retrofection, reverse transcription.)
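The 'highly arbitrary assignment of 64 codons to 20 naturally occurring amino acids' can be made concrete with the standard codon table. The sketch below builds it from the NCBI one-letter string for the standard code (transl_table=1) and counts its degeneracy:

```python
# The standard genetic code: 64 codons -> 20 amino acids + 3 stop signals.
# One-letter amino-acid string in NCBI "transl_table=1" order (bases T,C,A,G).
AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
BASES = "TCAG"

codon_table = {
    b1 + b2 + b3: AAS[16 * i + 4 * j + k]
    for i, b1 in enumerate(BASES)
    for j, b2 in enumerate(BASES)
    for k, b3 in enumerate(BASES)
}

amino_acids = set(codon_table.values()) - {"*"}            # the 20 amino acids
stops = [c for c, aa in codon_table.items() if aa == "*"]  # TAA, TAG, TGA

# Degeneracy: how many codons encode each amino acid (from 1 up to 6)
from collections import Counter
degeneracy = Counter(aa for aa in codon_table.values() if aa != "*")

print(len(codon_table), len(amino_acids), sorted(stops))
print(max(degeneracy.values()), min(degeneracy.values()))  # 6 (Leu/Ser/Arg), 1 (Met/Trp)
```

Nothing in the chemistry of the table forces this particular mapping, which is exactly the 'frozen accident' point.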

Edward J.
Steele, Robyn A. Lindley, Robert V. Blanden, Lamarck's Signature: How Retrogenes Are Changing Darwin's Natural Selection Paradigm, 1998. The main scientific claim, roughly stated, is that an acquired property such as a specific immune response can be inherited. A mechanism for the flow of information from somatic RNA to germline DNA is called retrofection and was described by Linial ("Creation of a processed pseudogene by retroviral infection", Cell 49:93-102, 1987). What is the benefit of reverse transcription for the organism? The widespread presence of reverse transcriptase in organisms without an immune system (bacteria and plants) is still a mystery. The so-called 'Central Dogma (CD) of molecular biology' (Francis Crick) seems to forbid the inheritance of acquired characteristics. CD: DNA > RNA > proteins. After the discovery of reverse transcription: DNA <> RNA > proteins. Steele contradicts the 'Weismann barrier'. Steele et al.: neo-Lamarckism? A. Mellor's group (Journal of Immunology, Jan 15, 1999): evidence is presented for the presence of cDNA reverse transcripts of the TCR alpha-chain within the hybridoma, suggesting a role for reverse transcriptase in the generation of mutations.
John Maynard Smith & Eörs Szathmáry
, The Origins of Life. From the Birth of Life to the Origin of Language. (1999): The path of evolution on Earth is characterised by major transitions. These major transitions are in fact the most important innovations of life. The very first step was from individual replicating molecules to a population of replicators in compartments (a 'cell' with a membrane). All life is based upon cells. The second was integration of independent replicators into chromosomes (could we live with 40,000 free floating genes in our cells?). The third was the transition from the 'RNA world' to a DNA and protein world, which includes the evolution of the current universal genetic code. All bacteria are in this phase of evolution. With hindsight they are called prokaryotes, because they lack a nucleus. The fourth invention was the nucleus (eukaryotes). Single-celled organisms are included (Amoeba). Although the fifth transition is one of the most puzzling transitions (from a-sexual (a word again with hindsight) to sexual reproduction) , most critics of evolution completely missed it.
Henry Gee, In search of deep time. Beyond the fossil record to a new history of life, (1999). Fossils are isolated points in Deep Time that cannot be connected. The fossil evidence is unable to support evolutionary narratives.
Cladistics is a way of looking at the world in terms of the pattern rather than the process that creates the pattern. The method tries to find the minimum evolutionary cost of an evolutionary tree, i.e. the most parsimonious tree. When applied to molecular data such as proteins and DNA, this is a powerful method. Common descent of organisms must be a necessary assumption of cladistics, because cladograms are based on common evolutionary innovations (presupposing evolution).
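The 'minimum evolutionary cost' that cladistics scores can be illustrated with Fitch's small-parsimony algorithm, which counts the fewest character changes a given tree requires. The toy tree and character states below are invented for illustration:

```python
# Fitch's small-parsimony algorithm: the minimum number of character changes
# on a fixed tree, i.e. the scoring step behind "the most parsimonious tree".

def fitch(tree, states):
    """tree: nested 2-tuples of leaf names; states: leaf -> character state.
    Returns (possible state set at this node, minimum change count below it)."""
    if isinstance(tree, str):                 # leaf node
        return {states[tree]}, 0
    left_set, left_cost = fitch(tree[0], states)
    right_set, right_cost = fitch(tree[1], states)
    common = left_set & right_set
    if common:                                # children can agree: no change
        return common, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1  # one change

# Hypothetical tree ((human, chimp), (mouse, fly)) with a binary character
tree = (("human", "chimp"), ("mouse", "fly"))
states = {"human": "1", "chimp": "1", "mouse": "1", "fly": "0"}
root_set, cost = fitch(tree, states)
print(cost)  # 1: a single change explains the observed distribution
```

A cladistics program runs this kind of scoring over many candidate trees and keeps the cheapest, which is why common descent is built into the method rather than tested by it.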
Wallace Arthur, Evolutionary Developmental Biology: Finishing Darwin's Unfinished Symphony? (1997). The creative side of evolution: apply the knowledge that development is controlled by genes to evolutionary puzzles. Darwinism centres around adaptation to external selective agents (selection as destructive/eliminative force) - internal selection is neglected. The idea of internal selection promises to explain why there are constancies in the morphology and physiology of organisms despite external adaptations. Neo-Darwinism is characterised as gradualist/externalist. According to Arthur the dichotomy micro-macro is wrong, because there is a continuous scale from micro to macro with respect to effects on the phenotype. John Maynard Smith (1998): we need to know how changes in genes cause changes in morphology, and that requires an understanding of development.
Mark Ridley, Mendel's Demon (2000): the creative side of mutation was not only missing in the neo-Darwinian Synthesis, but was not even seen as an urgent problem. Our topic here is not evolutionary change. Our topic is mutational decay and how life preserves itself against it. Short DNA messages can be copied reliably. The longer the message, the more mistakes. Humans have a DNA length 2000 times larger than bacteria (about 33,000 genes) and produce 200 copying mistakes per offspring, of which 2-20 are harmful. So there is an upper limit of complexity (number of genes). Details of Mendelian inheritance are crucial for complex life, e.g. the process that produces haploid gametes from diploid cells (meiosis): diploid (2n) >> tetraploid (4n) >> diploid (2n) >> haploid (n) instead of one step: diploid (2n) >> haploid (n). According to Ridley gender completely depends on a historical accident: the Margulian merger. Without that merger gender would not exist. (..) Error reduction depends on the evolution of 50 new proofreading genes. Where did they come from? (..) Stuart Kauffman uses the trial and error metaphor extensively, almost exclusively. In this metaphor evolution is a search in sequence space, protein space or shape space.
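Ridley's scaling argument reduces to a back-of-envelope calculation: copying errors grow linearly with message length. The genome sizes below are illustrative assumptions; the figure of 200 mistakes per offspring is his:

```python
# Back-of-envelope behind Ridley's upper limit on complexity.
# His figures: human DNA ~2000x a bacterium's, ~200 copying mistakes per
# offspring. The absolute genome sizes here are illustrative assumptions.
human_genome = 8e9                # bases (assumed: ~2000x a 4e6-base bacterium)
mistakes_per_offspring = 200

per_base_rate = mistakes_per_offspring / human_genome   # implied error rate

# The same copying fidelity applied to a bacterium-sized genome:
bacterium_genome = human_genome / 2000
print(per_base_rate)                          # ~2.5e-08 per base per generation
print(per_base_rate * bacterium_genome)       # ~0.1 mistake per replication
```

At the same fidelity a short genome accumulates almost no errors per copy, while a genome 2000 times longer cannot avoid hundreds, which is Ridley's reason for an upper limit on gene number.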
Gabriel Dover, Dear Mr Darwin. Letters on the evolution of life and human nature, 2000. Molecular drive (besides natural selection and neutral drift) as evolutionary force. Transmission of genes (M and F, X and Y) biased in favour of one of the two copies (gene conversion). Moving of genes from one chromosome to another (transposition). Natural selection is not a creative force. Dover rejects the 'gene as the ultimate selfish unit of selection'. The centipede (15 to 173 pairs of legs) is one of Dover's most illuminating and intriguing examples. Dover convincingly casts doubt on natural selection and neutral drift as the sole cause for the evolution from 15 to 173 pairs of legs. Identical genes present in many copies (e.g. 700 copies of the ribosomal RNA gene in humans), all being functional. Dover: Molecular Drive as a collection of mechanisms that keeps copies of genes identical. Genomes are ten thousand or a hundred thousand times larger than necessary. ... For example humans have alpha-satellite DNA, that consists of several hundred thousand copies spread in tandem arrays all over our 23 pairs of chromosomes. ... Humans carry enough DNA in each cell nucleus to code for 3 million genes. In reality we need only about 70,000 genes.... Why are all genomes subject to such a bizarre variety of Non-Mendelian mechanisms? Mutations also produce and modify the bodyplan of organisms. All genes are interacting with one another. One gene can contribute to many different structures and functions, and any given structure is built by many different genes. Genetics (molecular encoding) – development (from cell to organism) – evolution (to different organisms). The fingerprinting technique is based on variation in the number of copies of a 20 base DNA sequence. The repeat number is so variable that everybody has a unique genetic fingerprint. The mechanisms responsible for this variability are unequal crossing-over and slippage. 
Slippage is the most frequently occurring mechanism of gain and loss of DNA in genomes. It is one of the mechanisms Dover included in 'Molecular Drive'. Thanks to Molecular Drive DNA-fingerprinting is possible.
Lee Spetner, Not By Chance! Shattering the Modern Theory of Evolution, 1998. Mutations are random copying errors of the bases in DNA, of which only a small percentage are beneficial and therefore selected. How many small random mutations are needed 'to get a new species'? In the scientific literature 'to get a new species' is known as speciation. There is a speciation mechanism which does not depend on the gradual accumulation of many small mutations between populations and which can lead to the immediate establishment of a new species: hybridisation followed by a doubling of chromosome number: polyploidy (well known in plants). Also a chromosomal change such as an inversion involves many genes at a time. According to textbooks the most important step in speciation is reproductive isolation, and this is achieved by geographic isolation or natural selection. Reproductive isolation could be caused by only two genes. After reproductive isolation genetic differences can and will accumulate, building up a 'genetic distance'.
Gert Korthof: Spetner's examples are from physiology (food metabolism), not from morphology, anatomy or embryology. Are mutations in the metabolism of an organism the right kind to produce reptiles, birds, mammals and humans (=macro-evolution)? Spetner does not tell us. Point mutations - short-term 'solutions' to environmental problems of bacteria and insects - irrelevant for the problems of macro-evolution. One has to look at complete genomes, identify genes (for example) for the eye and subsequently find organisms with similar genes that are not yet used for building an eye. Those genes are the precursors. Only then can we trace the path that genes followed in building new organs. For example the fact that we can see red and green colours is caused by two opsin genes. The two genes are identical for 96% and this points to a recent gene duplication. Spetner: "The genome were set up for an adaptive change". GK: Spetner's 'set up' idea is a question-begging idea, a skyhook. Textbooks and to a lesser degree scientific journals tend to ignore crucial questions of how new genes are created and focus on neutral or slightly deleterious mutations and 'purifying' selection. (..) D.R. Forsdyke Two Levels of Information in DNA: "Over 90% (and probably over 98%) of all speciation events are accompanied by karyotic changes [chromosomal macromutations], and ... in the majority of cases the structural chromosomal rearrangements have played a primary role in initiating divergence". (..) Sp: It’s difficult to quantify information on a biological level where information is linked to functionality (meaning) - if a mutation causes an enzyme to lose its functionality, then information is lost. A back mutation (re)gains information. The statement on p. 160 of my book still stands: "Not even one mutation has been observed that adds a little information to the genome." 
My point is that, although loss of specificity can have survival value, it cannot be typical of the mutations needed by NDT. There are as yet no known examples of mutations that can serve as prototypes of the mutations required by NDT. GK: the central problem of biology is the existence of a million species. Darwin explained this observation by Common Descent (CD) of all life. Darwin had a point, although his theory of inheritance was wrong. Neo-Darwinism is about mechanism(s). Life on Earth is a whole, a unity, because of the identical genetic code. Darwinism and neo-Darwinism do not explain the origin of life, which is a separate issue on the borderline of biology and chemistry.
Christian Schwabe: 'The Genomic Potential Hypothesis: a Chemist's View of the Origins, Evolution and Unfolding of Life', 2001: hypothesis that all species on earth have an independent but natural origin; all species start life as single cells ('pro-forms'), they continue to live as such until they develop into adults when the fossil record demands them to do so.
P. Senapathy, Independent Birth of Organisms (1994). His central idea is: 1. split genes (genes with introns) are easy to find in computer-generated random DNA sequences, while genes without introns are impossible to find; 2. so when in real life DNA sequences are randomly assembled from their building blocks, genes with introns will easily be formed by accident; 3. since split genes are only found in eukaryotes (plants and animals), eukaryotes must have originated first; and 4. prokaryotes (which don't have introns) must have evolved from eukaryotes by losing introns (pieces of DNA in the middle of a gene, which are eliminated when the gene is translated into protein, so the intron sequence does not end up in the protein). Gert Korthof: a sequence without cellular machinery is like software without a computer. James D. Watson: The major problem, I think, is chromatin (GK: the dynamic complex of DNA and histone proteins that makes up chromosomes). What determines whether a given piece of DNA along the chromosome is functioning, since it's covered with the histones? What is happening at the level of methylation and epigenetics (GK: chemical modification of the DNA that affects gene expression)? You can inherit something beyond the DNA sequence. That's where the real excitement of genetics is now. (Scientific American, April 2003) Gert Korthof: A stopcodon is the DNA code for the end of the protein (OBW: for the RNA?). If the distribution of stopcodons in current genomes would match a purely random distribution, it would be strongly suggestive of the random origin of genomes. 3 out of the 64 codons are stopcodons (4.7%). So I would expect 47 stopcodons in 1000 codons (=3000 bases). The average length of genes between two stopcodons would be 1000/47 = 21 codons (=63 bases) – too small for a gene, which is hundreds or thousands of bases long.
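The stop-codon expectation can also be checked by simulation: with 3 stops among 64 codons, the mean open run between stops in random DNA is (64-3)/3 ≈ 20 codons, in line with the ~21-codon estimate above and far short of real genes. A minimal Monte Carlo sketch (sequence length and random seed are arbitrary choices):

```python
import random

# In random DNA, 3 of 64 codons are stops, so runs of sense codons between
# stops are geometrically distributed with mean (1-p)/p = 61/3 ~ 20.3 codons.
random.seed(1)
STOPS = {"TAA", "TAG", "TGA"}
BASES = "ACGT"

def random_codons(n):
    """n codons drawn uniformly from the 64 possibilities."""
    return ["".join(random.choice(BASES) for _ in range(3)) for _ in range(n)]

codons = random_codons(200_000)
runs, length = [], 0
for c in codons:
    if c in STOPS:
        runs.append(length)   # record the open run just ended
        length = 0
    else:
        length += 1

mean_run = sum(runs) / len(runs)
print(round(mean_run, 1))   # close to 61/3, i.e. about 20 codons between stops
```

A random genome would therefore be chopped into reading frames of a few dozen bases, which is Korthof's objection to Senapathy's random-assembly scenario.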
SP "More than 95% of all random genes are shorter than 100 bases", but genes in organisms are often 9000 bases (=3000 amino acids) long (p234). SP does not seem to be discouraged by this result. He invokes a kind of processing of DNA that results in eliminating all the shorter DNA sequences and leaving all the longer sequences. But this is more tricky than SP seems to consider. Gert Korthof: there should be something like:
XYZTOXYZ XYZBE XYZ XYZOR XYZ XYZNOTXYZ XYZTO XYZ XYZBE XYZ (XYZ being splicing recognition sites). How else can the words (exons) TO BE OR NOT TO BE be recognised? The number of codons for amino acids varies from one up to six. Some are rarely used, others with high frequency. Furthermore: the A:T and C:G ratios are 1:1 (Chargaff ratio), but the (A+T):(C+G) ratio varies from 25% to 75% (Syozo Osawa (1995) Evolution of the Genetic Code, p. 45). Animals and plants have two types of cells: diploid body cells (two sets of chromosomes) and haploid sex cells (one set of chromosomes). Even though there are haploid organisms (e.g. the male honeybee), most species are not haploid. It's highly improbable that matching male and female haploid cells would develop independently (having thousands of genes in the right positions on the right chromosomes). Evolutionary theory starts with relatively simple haploid cells which reproduce without sex and without meiosis. The transition from asexual clones to sexual reproduction is one of the 8 major transitions of life (John Maynard Smith & Eörs Szathmáry, The Origins of Life. From the Birth of Life to the Origin of Language, 1999). In a hermaphrodite species all individuals have both male and female reproductive organs. Males of grasshoppers and aphids ('plant bugs') do not have a Y chromosome. The standard textbook view is that the first organisms found in the fossil record are 3,500 million years old and are prokaryotes (bacteria). The first indirect evidence of eukaryotes appeared 2,700 million years ago and the first fossil eukaryotes appeared 1,700 million years ago. Mitochondria are organelles in all eukaryotic cells; they are crucial for eukaryotic life; they multiply independently within eukaryotic cells; have their own DNA (37 genes in humans) which is autonomously copied; and are exclusively inherited via egg cells (maternal inheritance).
These facts and diverse other facts support the hypothesis that mitochondria were once free living single-celled prokaryotes. This hypothesis is called endosymbiosis theory and was proposed by Lynn Margulis in 1970. (..) How easy is it to produce a genome? Building blocks are not enough to produce complex systems. Creationists and Darwinists reject the possibility that a complex system can arise by chance in one trial. According to evolutionary biologists, numerous selection steps are needed. According to creationists, intelligence is needed. The Darwinian mechanism is a test of a small modification of a genome. Darwinian selection applies to individuals of a species. Conserved chromosome segments between human and mouse are the final refutation of independent origin; a great number of genes appear in the same order in different species.
Hubert Yockey, Information Theory and Molecular Biology (1992). Information is located in the one-dimensional DNA-sequence, which is translated with the help of the genetic code into the one-dimensional sequence in proteins. This one-dimensional sequence of the protein determines the 3-D structure of proteins. The 3-D structure of proteins enables specific biochemical reactions to be speeded up. This sustains structures essential to life. In the end, information is the difference between life and matter, between biology and physics. Information is the ultimate explanation of life. Information is the secret of life. DNA as encoded information, proteins as decoded information. Gert Korthof: 1. contrary to engineering systems, there is no encoding process in the biological world; 2. contrary to engineering systems, the decoder device (the genetic code) is itself transmitted through the same channel as the message. So the message and the decode instructions are transmitted via the same DNA channel. However, both are encoded. Although the decode instructions are stored in DNA, they are encoded themselves. So the first cell of the embryo needs at least one molecule of each of the 64 translators (transfer-RNA) to start (boot up). Once the embryo has functional translators, it can produce more of them. The only solution seems to be to transmit the tRNA's (the decode instructions) via an extra independent channel. But how? They must be present in every cell of every individual; otherwise, no DNA could be translated (or: no message could be decoded). It seems that the egg is most suitable to 'transmit' those decode instructions from one generation to the next generation. They must be present in the cytoplasm of the egg (outside the nucleus) as ready-to-use translator molecules. This boot problem needs to be solved at the origin of life. HY: One molecule of iso-1-cytochrome c can be formed spontaneously with a probability of 0.95 in 1.5 x 10^44 trials.
So it seems that we know of no scenarios for how life could arise by chance. Yockey remains agnostic about the origin of life and about the origin of the genetic code as well. OBW: information is not a life-specific property. Information is already a linguistic property of numbers, space, kinetics and physics, and it is a linguistic property of biochemics as well. What are the typical life-specific properties?
Are there (better) alternatives to DNA? Gert Korthof quotes Science magazine (Robert F. Service, "Creation's Seventh Day", Science, Volume 289, 14 Jul 2000, p. 232-235), which reported that the Romesberg-Schultz team had designed a new base called "PICS", incorporated it in DNA, and showed that the new DNA could be replicated with a new DNA polymerase. Furthermore, a team of Japanese investigators (Hirao et al (2002) Nature Biotechnology, vol. 20, pp. 177-82, quoted by Christian de Duve (2002) Life Evolving, p. 249) introduced two synthetic complementary bases S and Y. DNA molecules modified in this manner formed normal double helices, with S pairing with Y. So the four bases A, T, C, G are not uniquely fit for forming DNA. In 2003 Haibo Liu et al
(Haibo Liu et al (2003) "A Four-Base Paired Genetic Helix with Expanded Size", Science 31 Oct 2003, 868-871) reported a DNA which has all four base pairs replaced by new, larger pairs. The expanded double helices are thermodynamically more stable than the Watson-Crick helix. The new pairs apparently form hydrogen bonds analogous to the natural Watson-Crick pairs. The new bases pair with the natural bases, so DNA with 8 bases can exist and has an increased potential for encoding information. The authors conclude that there is no apparent prohibition against genetic systems having sizes different from the natural one. What about the ribose component of DNA? Schöning et al (K. Schöning, Chemical etiology of nucleic acid structure, Science 290, 1347-1351, 2000) have synthesised a chemical analogue of RNA, derived from a sugar ring that contains four carbons (tetrose) instead of the five found in ribose. This simpler RNA analogue, called TNA, can form stable double helices with itself and also with complementary RNAs and DNAs. What about amino acids? There are 20 amino acids occurring in proteins. Can other amino acids be incorporated into proteins? The Schultz team has now added more than 80 different non-natural amino acids to proteins. Chin et al have developed a technique that potentially expands the eukaryotic genetic code with an arbitrary unnatural amino acid (Jason Chin et al (2003) "An Expanded Eukaryotic Genetic Code", Science 301 (5635): 964, 15 Aug 2003). So clearly those 20 are not the only possible amino acids.
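The remark that 8-base DNA "has an increased potential for encoding information" can be quantified: each position in a sequence drawn from an alphabet of n symbols carries log2(n) bits. A minimal sketch (the function name is mine):

```python
import math

def bits_per_base(alphabet_size: int) -> float:
    """Information capacity per sequence position for a given alphabet."""
    return math.log2(alphabet_size)

print(bits_per_base(4))  # 2.0 bits per base: natural A, T, C, G
print(bits_per_base(8))  # 3.0 bits per base: the expanded 8-letter DNA
```

So doubling the alphabet from 4 to 8 bases raises the capacity from 2 to 3 bits per position, a 50% gain for the same sequence length.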
Why do some species not change over periods of millions of years? Michael Denton (Nature's Destiny. How the Laws of Biology reveal Purpose in the Universe, 1998) explains that the nematode, a small and simple multicellular worm, is assembled in such a way that practically all its organs are intimately interconnected with all other organs or parts of the organism. The result is that virtually every mutation will disrupt the development and functioning of the nematode. Gert Korthof: less interconnectedness = more open to change. So the degree of developmental constraints could be a beautiful explanation of why some organisms evolve and others do not.
John Wilkins gives the following Darwinian theses under dispute: 1. Transmutationism - that species change form to become other species; the alternative view is Statism. 2. Common descent - that similar species have common ancestors; the alternative is a view I can only call Parallel descent (a view held by Lamarck). 3. Struggle for existence - that more are born than can survive; the alternate view is sometimes called Commensalism. 4. Natural selection - that the relatively better adapted have more offspring, sometimes called Malthusianism; the alternate has no name. 5. Sexual selection - that the more "attractive" organisms of sexual species mate more (and have more offspring), causing unfit traits to spread; again there is no alternate, just a denial that it happens. 6. Biogeographic distribution - that species occur close by related species, explaining the distributions of various genera; this view, first published by Wallace, is in opposition to the older "single centre of creation" notion. 7. Heredity - a. Darwin's own theory was called "pangenesis" and is no longer accepted (it was a form of what we now call "neo-Lamarckism", or the inheritance of acquired characters); b. Weismannism - the more modern view that genes don't record information about the life of organisms. To this I must add four other more recent theories: 8. Random mutation - the notion that changes in genes aren't directed towards "better" alternatives; in other words, that mutations are blind to the needs imposed by the ecology in which organisms find themselves. 9. Genetic drift/neutralism - the view that some changes in genes are due to chance or the so-called "sampling error" of small populations of organisms. Molecular neutralism is the view that the very structure of genes changes in purely random ways. 10.
Functionalism - the view that features of organisms are neither due to nor constrained by the shapes (morphology) of their lineage, but are due to their functional, or adaptive, benefits. Darwinism, in common with several other sciences dealing with historical change, is also sometimes held to assert: 11. Gradualism - the view that changes do not occur all at once, and that there are intermediate steps from an earlier stage to the next. Anti-Darwinisms include: Special creationism (sometimes just "Creationism", the view that species are created "specially" in each case): challenges 1, 2, 6 and usually 8. Orthogenesis (linear evolution, aka Great Chain of Being thinking, the view that evolution proceeds in direct lines to goals, also sometimes called teleological evolution or progressionism): challenges 8 and 9. Examples: Lamarck, Nägeli, Eimer, Osborn, Severtsov, Teilhard. Often found as vague statements in more orthodox biology (in terms like "primitive" and "advanced" forms instead of the usual meanings in biology of older and derived). Neo-Lamarckism (aka Instructionism, the view that the environment instructs the genome, and/or the view that changes occur to anticipate the needs of the organism): challenges 7b, 8 and 9. Examples: Darwin, Haeckel, ED Cope, S Butler, Kropotkin, GBS Shaw, Kammerer, Koestler, E Steele, Goldschmidt. Process Structuralism (aka Formalism, aka the Laws of growth tradition, also called Naturphilosophie, deriving from Goethe and Oken - the view that there are deep laws of change that determine some or all of the features of organisms): challenges 3 to 5 and 10. Examples: Goethe, Geoffroy, D'Arcy Thompson, Goodwin, Salthe, Gould, Løvtrup. Saltationism (in texts before about 1940 also called "Mutationism" or "Mutation Theory", the view that changes between forms occur all-at-once or not at all): challenges 11, and sometimes 2. Examples: Galton, TH Huxley, De Vries, TH Morgan, Johannsen, Goldschmidt.
Mark Isaak discusses five common misconceptions about evolution: 1 Evolution has never been observed. (See the "Observed Instances of Speciation" FAQ, and further predictions regarding the fossil record, comparative anatomy, genetic sequences, geographical distribution of species, etc.) 2 Evolution violates the 2nd law of thermodynamics. (The entropy of a closed system cannot decrease. Life is not a closed system. Order from disorder is common in nonliving systems too. Evolution says that organisms reproduce with only small changes between generations.) 3 There are no transitional fossils. (Paleontology has progressed a bit since Origin of Species was published, uncovering thousands of transitional fossils, by both the temporally restrictive and the less restrictive definitions; see the transitional fossils FAQ in the archive. The hypothesis of punctuated equilibrium was proposed to explain the relative rarity of transitional forms and why speciation appears to happen relatively quickly in some cases, gradually in others, and not at all during some periods for some species.) 4 The theory of evolution says that life originated, and evolution proceeds, by random chance. (Selection is the very opposite of chance. Chance, in the form of mutations, provides genetic variation, which is the raw material that natural selection has to work with. Nor is abiogenesis (the origin of the first life) due purely to chance: atoms and molecules arrange themselves not purely randomly, but according to their chemical properties. The theory of evolution doesn't depend on how the first life began.) 5 Evolution is only a theory; it hasn't been proved. (Evolution's strict biological definition is "a change in allele frequencies over time." By that definition, evolution is an indisputable fact. Common descent is not the theory of evolution, but just a fraction of it.
The theory of evolution not only says that life evolved, it also includes mechanisms, like mutations, natural selection, and genetic drift, which go a long way towards explaining how life evolved. What evolution has is what any good scientific claim has: evidence, and lots of it.)
Chris Colby,
Introduction to Evolutionary Biology
Douglas Theobald. 29+ Evidences for Macroevolution. The Scientific Case for Common Descent
John Wilkins: Evolution and Philosophy. Survival of the fittest because of their better adaptation to a changing environment. Adaptation is not a logical or semantic a priori definition (vs Popper, 1976), but a functional notion. Fitness as a disposition of traits to reproduce better, fitness as a statistical property of the genes, as a supervenient property of different physical structures, as an emergent property of complex systems. Popper: something is science if it is liable to be falsified by data, if it is tested by observation and experiment, and if it makes predictions. Kuhn: science undergoes revolutions, and the only way to determine if something is scientific is to see what scientists do (there is an obvious circularity here). Feyerabend wanted scientists to do anything they wanted, and call it science. Lakatos argued that science was a historical series of research programs. Pragmatism holds that the truth or value of a statement like a theory or hypothesis lies in its practical outcomes. Many of the things Darwin said have in fact been falsified. Many of his assertions of fact have been revised or denied, many of his mechanisms rejected or modified, even by his strongest supporters. Science moves on, and if a theory doesn't, that is strong prima facie evidence it actually is a metaphysical belief. (..) Explanation: premises + (universal) laws > things to be explained; initial conditions + (universal) laws > observed phenomena. Evolution is highly sensitive to the initial conditions and the boundary conditions that arise during the course of evolution. You cannot predict with any reasonable degree of accuracy what mutations will arise, which genotypes will recombine, and what other events will perturb the way species develop over time. Moreover, the so-called 'laws' of genetics and other biological rules are not laws. They are exceptional. Literally.
For every law, right down to the so-called 'central dogma' of molecular genetics, there is at least one exception. And yet, we know the properties of many biological processes and systems well enough to predict what they will do in the absence of any other influences. Mostly, explanations in evolution take the following format: initial conditions at time t-n + properties of biological systems > observed phenomena at t. These explanations are retrodictions, not predictions. A good many scientific explanations rest not on laws but propensities, that is, likelihood to behave in a certain way. (..) Ernst Mayr: "typological essentialism" is the opinion that species have essences in some Aristotelian fashion > the Linnean system of classification. Species by morphology or by descent. "Population thinking": aggregates of individuals, groups, have a profile that shows a distribution of characteristics. Ghiselin and Hull propose that species are not universal types, or classes, but are historical individuals. Cladistics attempts to 'reconstruct the past' - recreate phylogeny - using as few theoretical assumptions as possible, on the basis of the present distributions of organismic traits. Quine (1969): 'natural kinds' - things that exist naturally at certain times and places. Species are biological entities that change. (..) Ontological reduction: biology > chemistry > physics. Epistemic reduction: higher level properties as effects of lower level processes. DNA Molecules – Mendelian genes – Population traits. Gene-centrism (e.g. Dawkins, 1976): only genes are selected (that is, are evolutionarily important). Gould, Eldredge, Vrba, Williams: groups are thought to survive extinction events differentially, based on adaptations of their component organisms, so the organisms are adapted, not the groups. (..) By the 1970s, progress had been abandoned by working biologists. 
Gould (1996) thinks that the apparent trend to complexity is just a matter of random evolution that started at a minimal 'wall' of complexity. The traditional notion of progress as an increase in perfection or optimality (the "Ladder of Perfection") has been abandoned: evolution is a bush, not a tree (Gould). In science, (functional) teleology is a way of modelling a system's behaviour by referring to its end-state, or goal. It is an answer to a question about function and purpose, reducing to historical explanations. External teleological explanation derives from Plato: a goal is imposed by an agent, a mind, which has intentions and purpose. This external teleology is declared dead in biology. Aristotle: material, form, cause, purpose > internal teleology. Mayr (1982) distinguishes teleomatic (law-like), adapted (existing through survival), teleonomic (goal-seeking) and teleologic (end-seeking) behavior.

Science is about explaining things that are observed. Methodological naturalism: everything observed is amenable to a naturalistic investigation. Explanatory naturalism: any explanation that uses a non-natural explanans (thing doing the explaining, like e.g. the Invisible Pink Unicorn's powers) fails to be testable. Ontological naturalism: all that exists, is natural. Ockham's Razor ('do not unnecessarily multiply entities in explanation') - also known as parsimony - is used to trim as much away as possible in order to achieve the leanest explanation. Moral naturalism: moral systems are explained in terms of the social or biological properties of humans. (..) Contingency of complexity vs design + purpose. (..) While some extreme cultural relativists do try to claim that science is no more than the sum of its cultural environments, this view fails to explain how it is that science gets such consistent results and acquires such broad agreement on matters of fact.
OBW: organisms and environments - what changes precisely? What changes first? Simultaneous changes? Mutual adaptation? What are the specific processes that could be called "adaptation"? Selection as a post hoc description of the result of processes? How much explanatory power does the concept of selection have? It explains e.g. why organisms having specific genes can survive by expressing these genes in specific environments. Gene-categories: developmental genes, persistence-genes, etc. Protein-categories: (..) The working-relations and evolutionary relations of DNA-sequences, RNA-sequences and protein-sequences. (..) Common descent - assumption, inference or fact? What is the smallest RNA-proteins-replication-system? Is mutation a random and selection a non-random mechanism? What is the relation between genetic copy-error-reduction and genetic copy-innovation? The present original is a copy of a copy containing surviving mutations. Mutation: the sequence(-location) in the copy that is different from the original. Selective expression of genetic information. (..)
Warrant & McIntyre (1993); Falconer (1989); Futuyma (1986); Mark Ridley, Evolution, 2nd Edition (1996); John Gerhart and Marc Kirschner, Cells, Embryos, and Evolution (1997); Rudolf A. Raff, The Shape of Life (1996).
Tibor Ganti. The principles of life (2003). Gert Korthof: What is life? Senapathy claims to explain the origin of life. But what is life? If one has a wrong idea of what life is, then the theory to explain 'life' is useless. So, what is life? According to chemical engineer Tibor Ganti life (or what he calls ‘chemoton’ as minimal life-system) consists of 3 subsystems: 1) a chemical motor (metabolism) that supplies energy to synthesise compounds necessary for the other two subsystems and is stable (proteins); 2) a membrane which keeps the other 2 subsystems together, protects against dilution and is itself stable (lipids) 3) an information-carrying subsystem which enables reproduction of the 3 subsystems (RNA or DNA). Together these 3 subsystems are a living system. Senapathy's theory is concerned with the information-carrying subsystem (DNA) only. So he has a mistaken view of what life is. Therefore, his theory is useless.

Informative sites about Evolution Theory:
see also:
and: the site of Gert Korthof with extended book-reviews
Institutes/Journals for the study of complex systems:
Software for genetic/evolutionary simulations:
From the SCID Encyclopedia:
Irreducible Core
The parts of a complex system which are indispensable to the basic functioning of the system.
SCID Irreducible Complexity
Michael Behe's Original Definition:
A single system composed of several well-matched, interacting parts that contribute to the basic function of the system, wherein the removal of any one of the parts causes the system to effectively cease functioning. (Darwin's Black Box, 39)
William Dembski's Enhanced Definition:
A system performing a given basic function is irreducibly complex if it includes a set of well-matched, mutually interacting, nonarbitrarily individuated parts such that each part in the set is indispensable to maintaining the system's basic, and therefore original, function. The set of these indispensable parts is known as the irreducible core of the system. (No Free Lunch, 285)
Michael Behe's "Evolutionary" Definition
An irreducibly complex evolutionary pathway is one that contains one or more unselected steps (that is, one or more necessary-but-unselected mutations). The degree of irreducible complexity is the number of unselected steps in the pathway.
Book Resources On Irreducible Complexity
Darwin's Black Box by Michael Behe
See also:
Irreducible Complexity and Michael Behe FAQs and Irreducible Complexity Demystified.
No Free Lunch by William Dembski
See also:
Not a Free Lunch But a Box of Chocolates, A Presentation Without Arguments, Mr. Dembski's Compass and The AntiEvolutionists: William A. Dembski.
SCID Computational Irreducibility
Much of theoretical physics has traditionally been concerned with trying to find "shortcuts" to nature - that is, with trying to find methods that can reproduce the final state of a system from knowledge of its initial state, without having to meticulously trace out each step from initial to final state. The fact that we can write down a simple parabola for the path a thrown object makes in a gravitational field is an example of an instance where this is possible. Clearly such shortcuts ought to be possible in principle if the calculation is more sophisticated than the computations the physical system itself is able to make. But consider a computer. Because a computer is itself a physical system, it can determine the outcome of its evolution only by explicitly following it through. No shortcut is possible. Such computational irreducibility occurs whenever a physical system can act as a computer. In such cases, no general predictive ability is possible. Computational irreducibility implies that there is a highest level at which abstract models of physical systems can be made. Above that level, one can model only by explicit simulation.
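The parabola example can be made concrete: for a thrown object, a closed-form formula jumps straight to the final state, while a step-by-step simulation traces every instant; both give the same answer. Computational irreducibility is precisely the claim that for some systems only the second route exists. A sketch (the time step and values are illustrative):

```python
G = 9.81  # gravitational acceleration, m/s^2

def height_closed_form(v0: float, t: float) -> float:
    """The 'shortcut': the parabola y = v0*t - g*t^2/2 gives the final
    state in one step, without tracing the trajectory."""
    return v0 * t - 0.5 * G * t * t

def height_simulated(v0: float, t: float, dt: float = 1e-4) -> float:
    """The 'irreducible' route: explicitly follow the motion step by step."""
    y, v = 0.0, v0
    for _ in range(int(t / dt)):
        y += v * dt
        v -= G * dt
    return y

# Both routes agree up to integration error; the closed form is the kind
# of computational reduction an irreducible system would not admit.
print(abs(height_closed_form(20.0, 2.0) - height_simulated(20.0, 2.0)) < 0.1)  # True
```

For a universal computer (or a rule 30 CA), no analogue of `height_closed_form` is known: the step-by-step route is the only one.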
SCID Complex Systems
Complex systems research is the study of nonlinear, adaptive, dynamical systems. These systems consist of many interacting components and often perform self-regulation, feedback and adaptation. It is often the case that complex systems display emergent functions and behaviors that are irreducible to their constituent subsystems. It follows that a general feature of most complex systems is computational irreducibility. Put more simply, this means that complex systems tend not to be amenable to complete mathematical descriptions. The field of complex systems is interdisciplinary and received a great deal of exposure over the last few decades of the twentieth century, due in large part to Stuart Kauffman and his application of self-organizational theory to biology. Examples of complex systems include economies, ecosystems, societies, as well as certain molecular machines at the cellular level.
Stephen Wolfram
A pattern of data is random if no simple program can detect any regularities in it (NKS, 556); data generated by a simple program can by definition never be algorithmically random (NKS 1067). SW: a pattern of data is complex if no short(er) description can be given (NKS, 559). From 1890 on, complexity has been associated with the three-body problem (Poincaré), mathematical formulas, large numbers of components with different types of behavior, sizes of axioms for logical theories, information content (DNA), algorithmic information content, sizes of descriptions, and resources needed for computational tasks. Instead of defining complexity, SW wants to capture everyday notions of complexity and see how systems (like cellular automata) can produce these (NKS, 1068,1069).
OBW: Complexity between regularity and randomness? How to quantify levels of all three of them (e.g. compressibility)? A wave-form of increasing and decreasing complexity related to scale-change and focus (cf complexity within the atomic nucleus, complexity of the relation of the nucleus and the electrons, complexity of atoms (chemically inert = less complex?)). Criteria for increasing / decreasing complexity as a property of set-elements, states, equations, behavior, functions, systems? Relation between complexity and the number of non-repetitive patterns that can be identified? Is the setup of a CA, its initial conditions and rules, "simple" or already "complex" (e.g. as a nice example of 'intelligent design')?
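The compressibility idea is easy to try out: a general-purpose compressor acts as a crude regularity-detector, so regular data shrinks dramatically while (pseudo)random data hardly shrinks at all. A minimal sketch using Python's zlib (the thresholds are illustrative):

```python
import random
import zlib

def compressed_ratio(data: bytes) -> float:
    """Compressed size relative to original size: low = regular, ~1 = random."""
    return len(zlib.compress(data)) / len(data)

repetitive = b"01" * 5000                                    # highly regular
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(10000))   # pseudo-random

print(compressed_ratio(repetitive) < 0.05)  # True: regularity detected
print(compressed_ratio(noisy) > 0.9)        # True: no shortcut description found
```

This only approximates the NKS notion, since zlib detects a limited family of regularities; data that zlib cannot compress may still be output of a simple program (rule 30 is exactly such a case) and hence not algorithmically random.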
SCID Cellular Automata
Cellular automata (CA) are a class of spatially and temporally discrete, deterministic mathematical systems characterized by local interaction and an inherently parallel form of evolution. First introduced by von Neumann in the early 1950s to act as simple models of biological self-reproduction, CA are prototypical models for complex systems and processes consisting of a large number of identical, simple, locally interacting components. The study of these systems has generated great interest over the years because of their ability to generate a rich spectrum of very complex patterns of behavior out of sets of relatively simple underlying rules. Moreover, they appear to capture many essential features of complex self-organizing cooperative behavior observed in real systems. Although much of the theoretical work with CA has been confined to mathematics and computer science, there have been numerous applications to physics, biology, chemistry, biochemistry, and geology, among other disciplines. Some specific examples of phenomena that have been modeled by CA include fluid and chemical turbulence, plant growth and the dendritic growth of crystals, ecological theory, DNA evolution, the propagation of infectious diseases, urban social dynamics, forest fires, and patterns of electrical activity in neural networks. CA have also been used as discrete versions of partial differential equations in one or more spatial variables.
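The definition above - discrete cells, local interaction, parallel updating - fits in a few lines of code. The sketch below implements a one-dimensional, two-state CA with Wolfram's rule-number convention and periodic boundaries; the width, step count and choice of rule 90 are illustrative:

```python
def ca_step(cells, rule):
    """One parallel update: each cell's next value depends only on itself
    and its two neighbors; bit k of `rule` gives the outcome for the
    neighborhood whose three cells read as the binary number k."""
    n = len(cells)
    return [
        (rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run_ca(rule, width=31, steps=15):
    cells = [0] * width
    cells[width // 2] = 1          # single non-zero seed
    history = [cells]
    for _ in range(steps):
        cells = ca_step(cells, rule)
        history.append(cells)
    return history

# Rule 90 from a single seed grows a nested (Sierpinski-like) pattern,
# Wolfram's class 2 "nesting" behavior.
for row in run_ca(90, steps=8):
    print("".join(".#"[c] for c in row))
```

Despite the triviality of the update rule, swapping in other rule numbers (30, 110) already produces the chaotic and complex pattern classes discussed below.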
Web Resources On Cellular Automata:
A cellular automaton is an old kind of model of which Stephen Wolfram (2002) found several new properties, from which he derived his Principle of Computational Equivalence. SW demonstrates that the initial conditions and rules of universal CA’s can be set up to perform e.g. as calculating systems or even as theorem-generating axiom systems. Even very simple CA’s can show remarkable behavior. An interesting computation is e.g. the intrinsic randomness generated by the simple rule 30 CA. Interesting CA’s are the ones that behave as universal systems or Turing machines. (..) Application of the CA approach in molecular biology: RNA triplets of four possible bases generate one of 20 possible amino acids - simulation in a CA of the generation of sequences that are stable enough to survive and unstable enough to be modified. How could the environmental factors be represented in such a CA (like e.g. water, which determines the hydrophobic/hydrophilic behavior and thereby the stability of sequences)?
SCID Phase Transition
An abrupt change in a system's behavior. A common example is the gas-liquid phase transition undergone by water. In such a transition, a plot of density versus temperature shows a distinct discontinuity at the critical temperature marking the transition point. Similar behavior can be seen in systems described by ordinary differential flows and discrete mappings. In nonlinear dynamical systems, the transition from self-organizing to chaotic behavior is sometimes referred to as a phase transition (or, more specifically, as an order-disorder transition).
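The order-disorder transition mentioned for discrete mappings can be illustrated with the logistic map x → r·x·(1−x): below the first transition the orbit settles to a single fixed point; past it, the long-run behavior changes abruptly to oscillation (and, for larger r, to chaos). The parameter values below are standard textbook choices, not from the source:

```python
def logistic_orbit(r, x0=0.2, n=1000):
    """Iterate the discrete mapping x -> r*x*(1-x) n times."""
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
    return x

# r = 2.9 (ordered regime): the orbit settles onto the fixed point 1 - 1/r.
print(abs(logistic_orbit(2.9) - (1 - 1 / 2.9)) < 1e-6)   # True

# r = 3.2 (past the transition): no single final state; the orbit flips
# between two values, so successive iterates stay far apart forever.
print(abs(logistic_orbit(3.2, n=1000) - logistic_orbit(3.2, n=1001)) > 0.1)  # True
```

Plotting the long-run values against r would show the distinct discontinuities at the critical parameter values, the analogue of the density-versus-temperature plot in the gas-liquid case.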
SCID Autoplectic Systems
Consider a dynamical system whose behavior appears random or chaotic. There are two ways in which such apparent randomness can occur: (1) external noise, so that if the evolution of the system is unstable, external perturbations amplify exponentially with time - such systems are called homoplectic; (2) internal mechanisms, so that the randomness is generated purely by the dynamics itself and does not depend on any external sources or require that randomness be present in the initial conditions - such systems are called autoplectic. An example of an autoplectic system is the one-dimensional, two-state, two-neighbor cellular automaton rule 30, starting from a single non-zero site. The temporal sequence of binary values starting from that single non-zero initial seed is completely random, despite the fact that the evolution is strictly deterministic and the initial state is ordered.
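The autoplectic example can be run directly: evolve rule 30 from a single non-zero site and read off the centre cell at each step. Every step is deterministic, yet the resulting bit sequence (1, 1, 0, 1, 1, ...) shows no simple regularity. A sketch (the width is chosen large enough that the boundaries never influence the centre within the run):

```python
# Rule 30 written out as a lookup table: new cell = left XOR (center OR right).
RULE30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
          (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def center_column(steps):
    """Temporal sequence of the centre cell of rule 30, single-seed start."""
    width = 2 * steps + 3            # wide enough: boundary effects cannot
    cells = [0] * width              # reach the centre within `steps` steps
    cells[width // 2] = 1
    bits = [cells[width // 2]]
    for _ in range(steps):
        cells = [RULE30[(cells[i - 1], cells[i], cells[(i + 1) % width])]
                 for i in range(width)]
        bits.append(cells[width // 2])
    return bits

print(center_column(10))  # starts 1, 1, 0, 1, 1, ...
```

No external noise and no random seed enter anywhere; the irregularity of the bit stream is generated entirely by the rule itself, which is what "autoplectic" names.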
Stephen Wolfram, A New Kind of Science (NKS, 2002)
There is no correspondence between complexity of behavior and complexity of underlying rules (351). Complex behavior can emerge from simple programs with simple initial conditions (19). Analogy: computer machine instructions (rules), program (initial conditions) (41). Classes of patterns in cellular automata: 1 simple (uniform final state), 2 repetitive / nesting (cf fractals), 3 chaotic/random (e.g. the rule 30 CA, used in Mathematica to generate random numbers (317)), 4 complex (mixture of order and randomness) (231). A system following continuous rules can often appropriately be modelled by discrete systems like CA’s if the system exhibits discrete overall behavior (342), cf discrete transitions (frozen > fluid > gas). Mathematical equations impose constraints on what the behavior of a system must be; rule models just let the behavior evolve (368). Using CA’s SW models physical systems like the growth of snowflakes (370), breaking patterns of solid material (374), fluid flow (376), and biological systems like leaf growth (400), shell shapes (414), pigmentation patterns (422). Complex features in organisms arise in spite of natural selection, which tends to make things simpler (396,398). Randomness in the behaviors is often intrinsically generated by the evolution of the systems (from the rules, as opposed to from the environment or from the initial conditions) (299, 432). SW characterizes the simple programs he uses for modelling as "a basis for understanding, capturing the essence of what is going on" (433), as "metaphors for physical systems" that "emerge physical laws" (434). Line of reasoning leading to the conclusion that the Second Law of Thermodynamics is not universally valid (435-457). CA’s already have a too rigid built-in notion of space (467). Space emerges from "patterns of connectivity that tend to exist" (..) "the ultimate rule of the universe will turn out to look quite simple" (468).
Physical laws are "not fundamental, but emergent features of the large-scale behavior of some ultimate underlying rule" (470). Space and time emerge (as discrete features, 482) from a "network of nodes" (475). "Distance" in a network of nodes is just the number of connections from one node to the other (478). (OBW: network-distance = 0 and network-connectivity = (almost) infinite in a network where all nodes are connected to each other - what should the basic connectivity be to start the emergence with? SW jumps from dimensions (as network-properties) to space (480).) After space and time have emerged, the network is a network of causal connections between events (516), showing seemingly random behavior at small scales and average properties at larger scales (517), like e.g. particles as specific patterns of node connections (526), faster particles having more nodes associated with them (529). Continual updating of the network-connections according to some simple set of underlying rules (539). A system is random if no simple program detects regularities in it (556). A system is complex if no mathematical formulas can give a shortcut description of its behavior (606-620). A universal system can emulate any task (643). Universal cellular automata can emulate a Turing machine (658), a substitution system (659), a register machine (661), number systems (661), basic logic circuits (662) and the use of random-access memory (663); also the other way around: these systems can emulate a universal CA (664). Class 4 universal CA’s can transmit information over (large) distances (694). SW hunts for the smallest universal Turing machine, and comes up with 14 2-state, 3-color CA’s showing complex behavior (706-709).
If processes are viewed as computations, with rules defined by the basic laws of nature (716), SW finds a fundamental equivalence: any system that can achieve universality (and there is an overwhelming number of non-simple systems that are universal) can exhibit computational sophistication (717), which can be generated by simple rules (718). Just as the notion of heat can be associated with microscopic motion, so the notion of computation can be associated with any behavior (726). Continuous mathematical models consist of formulas relating a few overall numerical quantities, often giving constraints on behavior rather than explicit rules for behavior (728). Discrete computational systems process locally (the behavior is determined by neighboring cells) (730). SW believes there is no 'higher' regularity beyond repetition and nesting, and that systems beyond this are universal and equivalent in their computational sophistication (735). So the Principle of Computational Equivalence (PCE) holds for perception and analysis as well as for the perceived/analyzed systems (736): observers are computationally equivalent to the observed universal systems (737). From the PCE follows computational irreducibility: the outcome of a universal system after n steps can only be found by explicitly tracing each step (738). Even if one knows the underlying rules and the initial conditions, it takes an irreducible amount of computational work to work out the behavior of the system (739). In practice only repetitive and nested systems are predictable (computationally reducible) (741). All this leaves SW with a pretty weak version of "free will", overall behavior seeming free from "reasonable laws" or "obvious rules" (750-752). Implications of the PCE for mathematics: very incomplete axiom systems generate very few theorems; very inconsistent axiom systems generate almost all theorems (797).
Axiom systems which are not both complete and consistent, always contain theorems that are undecidable/unprovable (Gödel’s Theorem as a consequence of the PCE) (782). It’s not possible to construct a finite set of axioms that can be guaranteed to lead to completeness and consistency (783). Human intelligence is not outstanding – in very basic universal systems one can find learning, memory, adaptation, self-organization, self-reproduction and complexity (823,824). We share the same level of computational sophistication with our whole universe (845). The PCE implies that all the wonders of our universe can in effect be captured by simple rules, yet it shows that there can be no way to know all the consequences of these rules, except in effect just to watch and see how they unfold (846).
Principle of Computational Equivalence: simple programs, following a very limited set of simple rules, and starting from simple initial conditions, are able to generate complexity that in all aspects is equivalent to any kind of complexity anywhere in nature.
Example: CA rule 30 (00011110) generates a random pattern.
CA’s (e.g. the rule 110 cellular automaton) can emulate the universal Turing machine (it can calculate or prove anything that could be calculated or proved by any purely mechanical procedure).
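The behavior of these elementary cellular automata is easy to reproduce. A minimal sketch in Python, assuming Wolfram's standard rule-numbering convention (the bits of the rule number give the new cell value for each of the eight 3-cell neighborhoods, 111 down to 000); the function names are mine, not Wolfram's:

```python
def ca_step(cells, rule):
    """Apply one update of elementary CA `rule` with periodic boundaries.

    Each cell's new value is the bit of `rule` indexed by the 3-cell
    neighborhood read as a binary number (left*4 + center*2 + right)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run_ca(rule, width=31, steps=15):
    """Start from a single black cell and return the rows of the evolution."""
    row = [0] * width
    row[width // 2] = 1
    rows = [row]
    for _ in range(steps):
        row = ca_step(row, rule)
        rows.append(row)
    return rows

if __name__ == "__main__":
    # Rule 30: the chaotic pattern; swap in 110 for the universal one.
    for row in run_ca(30):
        print("".join("#" if c else "." for c in row))
```

Running this for rule 30 shows the familiar irregular triangle pattern; rule 110 shows the localized structures used in its universality proof.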
Download Stephen Muires WolframGenerator program, a free utility to generate the first 256 Cellular Automata: (Stephen Muires: We like things that are simple, cheap, easy and fast – cf the dragon in "Elven star" that sees people running away from him and shouts: "Ah, fast food!")
Web Resources On Cellular Automata:
Karl Schramm, slideshow:
Karl Schramm, Digital Signal Processing, Cellular Automata, and Parallelism
Ray Kurzweil A key issue to ask is this: Just how complex are the results of Class 4 Automata? Wolfram effectively sidesteps the issue of degrees of complexity. There is nonetheless a distinct limit to the complexity produced by these Class 4 automata. They do not evolve into, say, insects, or humans, or Chopin preludes, or anything else that we might consider of a higher order of complexity than the streaks and intermingling triangles that we see in these images. A human being has a far higher level of order or complexity. SW does not show how a Class 4 automaton can ever increase its complexity. It is the complexity of the software that runs on a universal computer that is precisely the issue. Wolfram would say that the Class 4 automata and an evolutionary algorithm are "computationally equivalent." But that is only true on what I would regard as the "hardware" level. On the software level, the order of the patterns produced is clearly different, and of a different order of complexity. Although genetic algorithms are a useful tool in solving specific problems, they have never achieved anything resembling "strong AI," i.e., aptitude resembling the broad, deep, and subtle features of human intelligence, particularly its powers of pattern recognition and command of language. (..) It is true that computation is a universal concept, and that all software is equivalent on the hardware level (i.e., with regard to the nature of computation), but it is not the case that all software is of the same order of complexity. The order of complexity of a human is greater than the interesting but ultimately repetitive (albeit random) patterns of a Class 4 automaton. The phenomenon of randomness readily produced by cellular automaton processes is a good model for fluid turbulence, but not for the intricate hierarchy of features in higher organisms. (..) Information-based physics: Richard Feynman, Norbert Wiener.
Edward Fredkin believes that the Universe is very literally a computer and that it is being used by someone, or something, to solve a problem. It sounds like a good-news / bad-news joke: the good news is that our lives have purpose; the bad news is that the purpose is to help some remote hacker estimate pi to nine jillion decimal places. (..) We find the nature of the process often alternates between analog and digital representations of information (cf the sound paths through air, wires, electronic devices, cells, brain). (..) There is no such thing as the first person in science, so inevitably concepts such as free will and consciousness end up being meaningless. We can either view these first person concepts as mere illusions, as many scientists do, or we can view them as the appropriate province of philosophy, which seeks to expand beyond the objective framework of science. (..) In my view, the fundamental reality in the world is not stuff, but patterns. I am a completely different set of stuff than I was a month ago (cf the pattern of the flow of water around a rock). All that persists is the pattern of organization of that stuff. The pattern changes also, but slowly and in a continuum from my past self. Reality ultimately is a pattern of information. Information is the ultimate reality. What we perceive as matter and energy are simply abstractions, i.e., properties of patterns. (..) In summary, Wolfram's sweeping and ambitious treatise paints a compelling but ultimately overstated and incomplete picture. Wolfram joins a growing community of voices that believe that patterns of information, rather than matter and energy, represent the more fundamental building blocks of reality.
Computists vs physicists: "It’s our stuff (information), not your stuff (matter/energy) that rules the universe!" CA-fans seem not to expect that a donkey will emerge from CA-like models; at most (with the appropriate initial conditions and rules) a picture and perhaps even some behavior of a donkey could be simulated (leaving open the question which types of models are more adequate for which phenomena: continuous or discrete, which again leaves open the question whether the phenomena themselves are ‘actually’ continuous or discrete ("A rational question is: are integers the real stuff or is it more integer to work with reals?")). According to SW a bit more is needed to get at some physical level, in the light of his note: "If I am correct that there is a simple underlying program for the universe, then this means that theoretical physics must at some level have only a very small amount of true physical input [sc. in the form of models, but what does "true physical" mean then? OBW] – and the rest must in a sense all just be mathematics." (NKS, 1026). SW ends up with a pretty equalized concept of meaning: his simple-rule-universe suggests "that it makes no more or less sense to talk about the meaning of phenomena in our universe than it does to talk about the meaning of phenomena in the digit sequence of π" (NKS, 1027).
Steven Weinberg Only if Wolfram were right that neither space nor time nor anything else is truly continuous (which is a separate issue) would the Turing machine or the rule 110 cellular automaton be computationally equivalent to an analog computer or a quantum computer or a brain or the universe. (..) I am an unreconstructed believer in the importance of the word, or its mathematical analogue, the equation. (..) Wolfram's survey of the complex patterns produced by automata may yet attract the attention of other scientists if it leads to some clear and simple mathematical statement about complexity.
Jim Crutchfield
proposes statistical complexity as a measure of meaningful information. Statistical complexity is low when randomness is very high, but is also low when randomness is very low (regularities). In the intermediate range, where there's randomness mixed in with regularity, the statistical complexity is high.
Gert Korthof proposes to call the mathematical Chaitin-Kolmogorov concept of information 'compressibility' in order to prevent confusion of 'mathematical information' with 'human information'. He also proposes a dictionary-based Information Content (based on a dictionary specified in advance) that could easily be implemented and executed by a computer program and that would capture 'meaning'. OBW: This specify-a-dictionary-in-advance approach seems to be somewhat circular: the fact that some sequence has "meaning" determines whether it will find a place in a dictionary; the pieces of analyzed data that match this sequence are taken out of the data to be labeled as information, and this information now ‘captures meaning’. This way the problem of determining information/meaning has been shifted to the question of what should be placed into the dictionary (in which dictionary will one find the word "flabberdunkynotch"?). The hard core of the combination "sequence, information, meaning" seems to be a diversity of repeated occurrences of sequences and combinations of sequences (the kind of repetitive regularity that makes the combination of all occurring sequences in a large set of complex data more compressible). Medium compressibility suggests a possible relation between the diversity of repeated occurrences of sequences and some kind of expression of meaning (e.g. the production of proteins or of understanding). A search for regularities in the relation between compressibility and lexicality would be interesting when applied to different areas, like e.g. large texts in several languages or e.g. genomes. With the concept lexicality I suggest a quantification of the (possibly meaning-specific) regularity of a sufficiently large data-set. Comparable levels of lexicality could be realized by quantifying the results of automatic lexicalization of the regularities in data-sets.
Strictly speaking lexicality is the word-nonword ratio, which would always yield 1 for any (random) series of words in any well-defined language. I propose to add other characteristics of the derived lexical units (like number of different lexical units and types of relations between them), making lexicality just one dimension of a multi-dimensional measure, which would be more appropriate for (seemingly?) redundant data-sets (like e.g. genomes). The linguistic concepts grammaticality and semanticality suggest dimensions of relations between lexical units that could be quantified as well, also when applied to other fields of research: what are e.g. bio-analogues of well-formed sentences, and what are e.g. bio-analogues of sentences that are not only well-formed (grammatically), but at the same time are sentences-that-make-sense (semantically)? (..)
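The compressibility reading of Chaitin-Kolmogorov information discussed above can be illustrated with an off-the-shelf compressor. This is only a crude stand-in (true algorithmic information is uncomputable, and zlib is just one practical compressor), but it shows the three regimes the text distinguishes: pure repetition compresses almost completely, a restricted-alphabet 'DNA-like' string compresses partially, and full-range noise hardly at all:

```python
import random
import zlib

def compressibility(data: bytes) -> float:
    """Compressed size / original size; lower means more regular."""
    return len(zlib.compress(data, 9)) / len(data)

random.seed(0)  # reproducible pseudo-randomness

repetitive = b"ABCD" * 1000                                  # pure regularity
dna_like = bytes(random.choice(b"ACGT") for _ in range(4000))  # 4-symbol alphabet
noise = bytes(random.randrange(256) for _ in range(4000))      # full-range noise

# Expected ordering: repetitive < dna_like < noise
print(compressibility(repetitive), compressibility(dna_like), compressibility(noise))
```

The middle case is the interesting one for the 'medium compressibility' hypothesis: regularity (only four symbols) mixed with randomness (their order).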
Mathematical methods to compress DNA sequences:
empirical curve fitting (Huen Y.K.)
Is junk DNA really junk? Huen Y.K. argues it isn’t. According to Huen, using sequence algebra (SA) and the theory of first-order compressibility of k-nary strings, the following properties of junk DNA can be proved: 1. Junk DNA is not junk, as it contains a finite amount of algorithmic information. 2. All genomes have degrees of instability. No corrective action could bring them to full stability or equilibrium. If a genome could achieve full equilibrium, stagnation and death would set in. Thus equilibrium is a state to be avoided. 3. The main role of junk DNA is to supply introns to increase the stability of genomes, but the corrective action is never sufficient to bring a genome to full equilibrium. 4. Genomes must satisfy two constraints, viz., a biochemical constraint and a 4-nary string constraint. 5. SA can be used to predict certain macroscopic string properties of genomes without detailed knowledge of DNA coding. But microscopic properties can only be predicted biochemically.
Huen proposes a method for DNA-sequence compression, using Taylor-Laurent expressions like A*(unit interval (jump), starting-position, times) applied to the substrings of only-A, only-C, only-G, and only-T positions; for the DNA-sequence "ACGTGATAGCCA" this yields e.g.:

DNA_A = A0000A0A000A = A(5,1,2)+A(4,8,2)
DNA_C = 0C0000000CC0 = C/z^2 + C(1,10,2)
DNA_G = 00G0G000G000 = G(2,3,2) + G/z^9
DNA_T = 000T00T00000 = T(3,4,2)

So DNA = A(5,1,2) + A(4,8,2) + C/z^2 + C(1,10,2) + G(2,3,2) + G/z^9 + T(3,4,2)
= expand(A*n(5,1,2)+A*n(4,8,2)+ C/z^2 + C*n(1,10,2) + G*n(2,3,2) + G/z^9 + T*n(3,4,2));
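These terms can be unfolded mechanically back into the sequence. A sketch (the helper `expand` and its tuple encoding are hypothetical names of mine, not Huen's): X(interval, start, times) places `times` copies of base X starting at the 1-based position `start`, `interval` positions apart, and X/z^k is read as a single X at position k:

```python
def expand(terms, length):
    """Rebuild a DNA string from (base, interval, start, times) terms.

    Positions are 1-based, matching the notation in the example above."""
    seq = ["0"] * length
    for base, interval, start, times in terms:
        for t in range(times):
            seq[start - 1 + t * interval] = base
    return "".join(seq)

terms = [
    ("A", 5, 1, 2), ("A", 4, 8, 2),   # A(5,1,2) + A(4,8,2)
    ("C", 1, 2, 1), ("C", 1, 10, 2),  # C/z^2 read as a single C at position 2
    ("G", 2, 3, 2), ("G", 1, 9, 1),   # G(2,3,2) + G/z^9 (single G at position 9)
    ("T", 3, 4, 2),                   # T(3,4,2)
]
print(expand(terms, 12))  # reconstructs "ACGTGATAGCCA"
```

Checking the output against the original sequence confirms that the term decomposition above is consistent.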
Huen defines algorithmic information (AI) as the most concise formula that generates the sequence, and he defines the Compressibility ratio or Comprat as the ratio of the Algorithmic Information of a DNA-sequence divided by the Algorithmic Information of a natural number sequence ("1111111...n") of the same length. (..) Then he calculates the Comprats for different gene and intron lengths, which yields lower Comprats with longer introns. Huen then states: we know that the longer the DNA-sequence, the more compressible it is, and the more compressible a string is, the closer it drifts towards equilibrium. Since the genes cannot be tampered with, the only option is to increase the amount of introns. Introns contain a lot of repeats and are more compressible than the gene sequences. This is how introns stabilize an unstable genome. The role of INTRONS is to reduce the Comprat value of the GENOME, since the lower the value, the more stable the genome.
Examples of use of the word irreducibility:
Sean Meyn
Markov chains > irreducibility (ch. 4), 2002

Irreducibility in the Halting Problem. Given a description of an algorithm and a description of its initial arguments, determine whether the algorithm, when executed with these arguments, ever halts (the alternative is that it runs forever without halting). Church/Turing (1936 - the work of both authors heavily influenced by Kurt Gödel’s earlier work on his incompleteness theorem, especially by the method of assigning numbers to logical formulas in order to reduce logic to arithmetic): there is no general method that can solve the halting problem for all possible inputs, so this is an undecidable problem. Rice’s theorem: the truth of any non-trivial statement about the function that is defined by an algorithm is undecidable. Note that this theorem holds for the function defined by the algorithm and not the algorithm itself. Example: does this C program halt (i, j, k integers; the algorithm looking for an odd perfect number)?
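The referenced C program is not reproduced here; the classic example it alludes to can be sketched as follows (in Python rather than C, with hypothetical function names): a search that halts if and only if an odd perfect number exists, so that deciding whether this particular program halts would settle an open problem in number theory:

```python
def is_perfect(n):
    """True if n equals the sum of its proper divisors, e.g. 6 = 1 + 2 + 3."""
    return n > 1 and sum(d for d in range(1, n // 2 + 1) if n % d == 0) == n

def first_odd_perfect():
    """Search the odd numbers for a perfect one; halts iff one exists.

    No odd perfect number is known, and none exists below astronomically
    large bounds, so in practice this loop never terminates."""
    n = 1
    while not is_perfect(n):
        n += 2
    return n
```

In Rice's-theorem terms: "halts" is a non-trivial property of the function this program computes, so no general procedure can decide it for arbitrary programs of this kind.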
(..) fundamental irreducibility of that same population to one homogeneous identity.
Drees, W.B., Onherleidbaarheid, natuurwetenschap en geloof [Irreducibility, natural science and faith]. In M.A. Kaashoek, A.W. Musschenga, W.B. Drees (eds.), De eigen wijsheid van wetenschap en geloof: Essays in gedachtenis aan Maarten A. Maurice. Amsterdam: VU-Uitgeverij, 1996. p. 199 everyday experience is irreducible: it cannot be described exhaustively or predicted with the help of precise models; 200 (..) that irreducibility has no ontological basis (..); theoretical reduction, e.g. in physics the description of gases in thermodynamic quantities (e.g. pressure, temperature, volume) or in molecular quantities (e.g. collisions, kinetic energy) (type-type identity); 201 the difference with token-token identity: e.g. a money transaction (economic act - physical event); physicist, economist, biologist: different units and different ways of classifying; 202 the role of the context, which e.g. is what gives a neurophysiological state the meaning of a mental state (cf Murphy, 1997, supervenience); an explanation in other terms can change a particular theory of reality (e.g. the snake in the corner (panic!) is a coil of rope); 203 the Boyle-Gay Lussac gas law (p*V/T is constant, thermodynamic domain) is a good approximation of the Van der Waals law (molecular domain); 204 commonsense notions of substance vs philosophical notions of substance that have to be given up (cf Eddington's empty table); partial reduction of psychology to physiology -> elimination of the idea that there is a separate referent for mental processes (such as e.g. an entity 'soul'); reduction as passing questions along (e.g. biologist -> biochemist -> astrophysicist) (cf Weinberg, S., Dreams of a Final Theory, 1992); 205 -> limit questions (why is reality the way it is?); 208 could wisdom that is anchored in stories, myths and rituals be expressed (be reduced) in any other way?; Drees distinguishes between the cosmological (mystical, unity, dependence, belonging to) and the functional (prophetic, what is vs what ought to be) approach.
Carlo, W.E., The ultimate reducibility of essence to existence in existential metaphysics. The Hague, 1966. Essence as a mode of existence, as intrinsic limitation of existence. 99 Essence is the intrinsic modification of the dynamism of the actual exercise of the act of being. Thomas: esse is the source of all cognoscibility and intelligibility (Unumquodque, quantum habet de esse, tantum habet de cognoscibilitate). (vs e.g. Giles of Rome: essence as the source of all intelligibility) 100 Thomas: esse not as abstract concept of being, (..) being conceived as genus: esse possesses within itself all perfection (Vivere in viventibus est esse. Intelligere in intelligibus est esse.). 101 In God, Essence is identical with Existence as Ipsum Esse. Doctrine of the non-being of essence and the ultimate reducibility of essence to esse as a logical consequence of the interpretation of essence as a mode of being. 102 Because it is a limited esse, it can be grasped by the finite intellect. Modern existentialism eliminates essence from the metaphysical picture. 103 Henry of Ghent and Francisco Suarez reduce existence to essence. C: essence as the intrinsic limitation of esse, the point at which existence stops, bordered by nothingness. There is nothing in water which is not water – there is nothing in an existent that is not existence. 104 Essence co-existent rather than an existent. Essence is not that which limits esse, it is the limitation of esse; it is not that which receives, determines and specifies esse, it is the very specification itself of existence. 105 Esse as moulded and determined intrinsically is then conceptualizable and known in the concept, and called essence.
But essence itself is not what is most fundamental in reality. 138 Plato: essences separate from things; Aristotle: essences within things; C: reducibility of essences to esse, achieving a unity of the plurality of metaphysical principles; difference between descriptive and explanatory metaphysics; 139 the laws of being are at the same time the laws of the unification of knowledge or what we call theory. Essence is not a positive being apart from the existence of which it is the limitation, but a positive principle of philosophy as the intrinsic limitation of esse (..) displaying and showing forth the riches and intelligible perfections of esse. An actually existing essence (..) as limited esse. 140 All essences are modes of esse. Essence is the intrinsic principle of limitation to esse. Before existence nothing happens. After existence all that happens does so. Essence is not absorbed into esse, (..) it is the limitation of esse which restricts esse to this kind of thing and which is consequently conceptually knowable and definable. 141 Metaphysics acquires its scientific structure from the fact of the Ultimate Reducibility of all metaphysical doctrines to being, all entities to esse and matter in particular.
Some notes which are only remotely associated with the subject.
Reduction: long > short, complex > simple
The large gaps in our knowledge provide a wide space for speculations.
Is there some meaning beyond function – of something which function it is to have meaning?
If you call a tail a leg, does this mean that dogs have five legs?
We share the roots of our cultures, let’s share the fruits as well.
The doctor who prescribes the same medicines to people of different races is definitely not a racist, but nevertheless stupid (cf different levels of TGF-beta, lactase, etc.; research by J. Philippe Rushton – Race, Evolution, and Behavior (..) After all, (genetic) differences between men and women are greater than differences between racial backgrounds).

Relation between symmetry and the conservation laws (Emmy Noether).
Gustaaf Joos (the cardinal friend of the pope): people are full of themselves, and into a full vessel you cannot pour wine.
Pushing back frontiers (in knowledge and action) as a calling in life.
Pieter Pekelharing: Philosophy is the conviction that if you look at reality reasonably, that reality also looks back reasonably. Philosophy is (..) hope for reasonableness. (Trouw, 29 Dec. 2003, p 10).
chemical initial conditions + ?-rules > biochemical behavior > biochemical systems (..) H2O in a test tube is a chemical molecule, H2O in a cell is a biochemical molecule (..)
Hello Gert,
I just read your interesting summary of Tibor Ganti's operationalization of 'life' as a biochemical concept. If I understand you correctly, in your view a virus is no more than a chemical system - viruses are 'evolutionary units', but they are not 'alive'. This follows from the criterion that e.g. the proteins that take care of the replication of the system must be part of the system itself. One could generalize that to: a system is a living system only if it can itself generate the proteins it needs in order to survive. Which would mean that a human being is not a living system, because (in order to survive) we depend on proteins that are generated not by ourselves but by other systems (plants, animals). This is a nice reasoning 'ad absurdum', but it may yield an argument for regarding viruses as 'borderline cases' of living systems, because, as units of viral RNA + capsid proteins + envelope proteins, they make very effective 'use of' the biosynthetic proteins of cells for their replication. It may also yield an argument for regarding other 'replicating systems' that (unlike viruses) stay inside the cell as 'living systems' too, even though no membranes are involved (plasmids, transposable elements). A complete internal (system-specific) metabolism and biosynthetic machinery may also set the bar a little too high for ever being able to determine possible 'origins of life' - an overly rich definition of biochemical life could get in the way there.
The 'property-set of persistence' (the 'necessary and sufficient properties' of a biochemical system that stays 'alive' more or less independently) could put you on the wrong track if you assume that the 'property-set of origination' (the 'necessary and sufficient properties' that play a role in the genesis of a biochemical system) must be a subset of the 'property-set of persistence' (which you suggest by stating that the chemoton model could be a 'model for the origin and creation of life').
The 'absolute' (no less!) criteria Ganti puts forward strike me as rather dissimilar in kind:
- 'unity' and 'stability' seem to me to be features of the concept 'system' already
- aren't 'regulation and control' already given with 'metabolism'?
- 'information-carrying subsystem' places certain molecules (RNA/DNA) alongside the set of processes denoted by 'metabolism'; what is there against saying that proteins too are 'information carriers' that are actively involved in passing on their information (given the more or less unambiguous relation between the RNA/DNA bases and the protein amino acids)?
- the difference remains diffuse between structures ((bio)chemical molecules and the grounds for classifying these molecules as distinct from one another) and processes (in which the functions of the molecules become apparent); clarity on this point seems necessary to me in order to describe and explain chemistry > biochemistry jumps
ad criterion 1 "a cell is a unit, because it cannot be subdivided without losing living properties":
? a cell divides itself - that seems to me a rather lively feature, as does the way in which a virus replicates 'itself' - the question is: what determines the 'recognizable identity' of a system during its life cycle (of which decomposing & recomposing of the system itself can also form part)
ad criterion 2
with which minimum set of active RNAs and proteins could one speak of jumps from chemistry to biochemistry?
Intelligent Design: discussions in the Netherlands: