Published in Biosemiotics 1(2), 147-168 (2008)
Physical and functional conditions for symbols, codes, and languages
H. H. Pattee*
Department of Systems Science and Industrial Engineering
T. J. Watson School of Engineering and Applied Science
State University of New York at Binghamton
Binghamton, NY 13902-6000
(e-mail: hpattee@roadrunner.com)
*Correspondence address: 1611 Cold Spring Road, Suite 210, Williamstown, MA 01267, USA.
Abstract
All sciences have epistemic assumptions, a language for expressing their theories or
models, and symbols that reference observables that can be measured. In most sciences the
languages in which their models are expressed are not the focus of their attention, although
the choice of language is often crucial for the model. On the contrary, biosemiotics, by
definition, cannot escape focusing on the symbol-matter relationship. Symbol systems first
controlled material construction at the origin of life. At this molecular level it is only in the
context of open-ended evolvability that symbol-matter systems and their functions can be
objectively defined. Symbols are energy-degenerate structures not determined by laws that
act locally as special boundary conditions or constraints on law-based energy-dependent
matter in living systems. While this partial description holds for all symbol systems,
cultural languages are much too complex to be adequately described only at the molecular
level. Genetic language and cultural languages have common basic requirements, but there
are many significant differences in their structures and functions.
Key words: symbol-matter problem, mind-body problem, epistemic cut, measurement problem,
evolution, natural selection
“In the beginning of heaven and earth there were no symbols. Symbols came
out of the womb of matter.”― Laotzu (ca. 600 BCE)
1. Epistemology and terminology are problems for biosemiotics
There are classical epistemic problems that have troubled the greatest minds for over 2000
years without reaching any consensus. This is the case for conceptual dualisms, like discrete and
continuous, chance and determinism, form and function, and especially the mind-body problem
that has persistently puzzled philosophers, and is still a central issue for philosophy, psychology,
artificial life, artificial intelligence, and cognitive science. It is also closely related to
fundamental issues in physics, the information-energy relation and what is known as the
measurement problem. All of these problems are related to a category I have generalized as
symbol-matter problems. Biosemiotics, virtually by definition, cannot escape these problems.
In my view, the central difficulty with the historical mind-body problem is that philosophy
approached it from the wrong end of evolution. The many concepts and terminologies that have
been invented to describe thought and language at the highest evolutionary levels of human
cognition are not conceptually or empirically clear enough to adequately describe symbolic
control at the cellular level where this duality first appears, and where it is simple enough to
understand. Because my education began as a physics student, I learned that one must thoroughly
understand very simple problems before one could even think clearly about more complex
problems.
For many years I have thought about the symbol-matter relation at the level of single cells,
and I am convinced that if we do not thoroughly understand the symbol-matter relations at the
molecular level in the cell, where it began, we will not fully understand it at any higher evolved
level (e.g., Pattee, 1968, 1969, 1972). At the molecular genetic level I believe we can make the
necessary objective distinction between symbol sequences, chosen by natural selection and not
determined by physical laws, and the law-based dynamics that they control. At this simplest level
I do not claim or expect to find complete answers to what are clearly context-dependent symbol-matter relations occurring at many evolved hierarchical levels of organization. However, I do
find minimum conditions for symbol systems that must hold at all levels, even for the brain.
I want to be clear that my studying the problem at the molecular level does not mean I am a
philosophical reductionist; but as a physicist I know that specific detailed models are essential
for full understanding of even the simplest symbolic control of material systems. I emphasize
that such elementary models are incomplete with respect to emergent functions and meanings,
and certainly inadequate as conceptual explanations of semiotic behavior at the cognitive level.
I also am not a dualist. I believe that all semiotic activity and all evolved emergent processes,
including consciousness, must have a material basis, and that this material aspect must be
understood even though it is only one aspect of the symbol-matter problem. The fact that after so
many years and so much intellectual effort there is no consensus on how to picture symbol-matter interactions at higher evolutionary levels is strong evidence that the problem requires
looking more carefully at the epistemic assumptions underlying the basic concepts of symbol and
matter.
There is also the problem of terminology. The terminology of physical laws that can describe
material symbol vehicles is of little use for describing symbol function and meaning. On the
other hand, semiotic terminology that arose for the study of human language has nothing to do
with physical laws. In between these extremes are many levels of biological models with their
own well-established terminologies, but they do not address the symbol-matter relation. There is
no reason to expect models at different levels of organization to be reducible, one to the other, or
even to be formally consistent with one another. For example, the cell’s own genetic symbols are
descriptions that allow a cell population to survive the tests of natural selection. The
terminologies that geneticists and evolutionists use are those that survive in the society of
biologists. The mathematical symbol systems that express natural laws are the descriptions that
survive the tests that are essential for physics.
What is of greatest interest for the field of biosemiotics is that biologists, physicists,
philosophers, and linguists see similarities between the cell’s symbol systems and the human
brain’s symbol systems even though they are separated by 4 billion years of evolution. However,
the significance of these similarities is not obvious, while there are significant differences. It is
these similarities and differences that I will discuss in the following sections.
Biosemiotics will become influential in the biological sciences only if it can persuade
biologists that semiotic concepts and terminology are objectively necessary for a full
understanding of genetic control. This will be a difficult task because the rapidly expanding
technologies of molecular biology, like gene sequencing, genomics, and proteomics have
developed their own empirically based terminologies that describe processes also in the domain
of biosemiotics, even if they are not recognized by that name. In order to establish itself as a
scientific discipline, biosemiotics must state clearly the epistemic principles on which symbols
and matter are empirically distinguished, as well as the explanatory value of semiotic models that
cannot be achieved by physical models.
2. The rules of symbols and the laws of matter have incompatible epistemic assumptions.
Let me begin with two epistemic assumptions that are a common source of misunderstanding.
Models of symbol systems and material systems are based on incompatible epistemic
assumptions. Physical laws ― the laws of matter and energy ― are based on the principle that
any candidate for a law must give the same results for all conceivable observers and for arbitrary
changes of reference frames. These conditions are called invariance and symmetry principles.
Physical laws seek to describe those relations between events over which individual agents,
whether single cells or humans, ideally have no control or freedom to make changes. Laws must
appear universal and inexorable. By contrast, any candidate for a symbol system is based on the
condition that all individual agents, from cells to humans, ideally have complete symbol-writing
freedom within the syntactic constraints of the symbol system.
In other words, physical laws must give the impression that events do not have alternatives
and could not be otherwise (Wigner, 1964), while informational symbolic structures must give
the impression that they could be otherwise, and must have innumerable ways of actually being
otherwise. Semiotic events are based on an endless choice of alternatives, not only in symbol
sequences but also in codes that interpret the symbols. It is just those innumerable alternatives,
selected by heritable propagation, that are the prerequisites for evolution as well as for creative
thought.
Each of these incompatible conditions by itself appears to be reasonable and even necessary
in the context of its corresponding category. In fact, they are the essential defining principles
that distinguish the two categories of laws and symbol systems. For that reason both conditions
must be recognized as metaphysical assumptions. They are metaphysical because they are not
subject to falsification by experiment. They are ultimately normative choices, even though one
may support either of them by ontological beliefs. In other words, whether you choose to model
a cell as a material system governed by laws or as a semiotic system constrained by information,
or as some combination of both, is an epistemological choice decided primarily by personal
aesthetics, ontological beliefs, and especially by the types of questions you want the model to
answer.
This is a general epistemic choice that is available to anyone, physicists as well as biologists.
For example, some physicists choose to view matter as fundamental and information as a derived
concept (e.g., Crick, 1994) while others may choose to view information as fundamental and
matter as the derived concept (e.g., Wheeler, 1991). Of course, in these two examples Crick and
Wheeler were asking different questions and using different ontological views of the universe.
My point is that without stating one’s epistemic assumptions and asking a specific question it is
unproductive to argue about which model gives the best answer. The symbol-matter relation is a
problem just because any answer must incorporate both types of models with their differing
epistemologies.
As a direct result of these two epistemic assumptions, models of material systems and models
of symbol systems have different dependencies on energy and time. Stated briefly, all the
fundamental physical laws are expressed in terms of energy and time, specifically, rates or time
derivatives. By contrast, energy, time, and rates of change are of little significance for the
function or meaning of symbolic expressions. Also, fundamental laws are state-determined
which means they do not depend on memory, while symbol systems require some form of
memory. It is just this independence of symbolic expression from energy and time, and its
dependence on memory that gives organisms the freedom to evolve by heritable variation and
natural selection, and the brain freedom to think.
These two incompatible epistemic conditions for symbolic and material descriptions produce
two nearly disjoint attitudes toward studying life. Almost every philosopher has recognized the
symbol-matter problem in one form or another, but because of the arbitrariness of the physical
symbols and coding vehicles, they focus on the symbolic half of the problem, and they are
seldom interested in the material embodiments that store symbols and execute codes.
The basic fact that is often ignored by philosophers, including philosophers of biosemiotics, is
that to construct symbols requires physical control of the material symbol vehicles, and that to
read symbols requires some material pattern recognizer or measurement mechanism. Similarly,
while the concepts of symbol, code, control, construction, pattern recognition, and measurement
all require triadic relations, these relations themselves must have a physical executor or
embodiment and they cannot be adequately understood only as abstractions. This requirement
that symbols, codes and measurements must have a material embodiment is just another aspect
of the symbol-matter problem. This requirement raises the classical question: How does this
arbitrariness of symbols and codes come about if all matter must obey universal physical laws?
3. How can symbols and codes be free of physical laws?
This question is a second common source of misunderstanding about the symbol-matter
problem. It is a belief among many scientists and philosophers that because physical laws are
universal and inexorable there is in principle no room for alternative behaviors, in particular, the
freedom of symbolic information, and ultimately free will. This is a belief with a psychological
basis that long predates the discovery of physical laws. It is rooted in the concept of causality ―
the feeling that every event must have a cause. Aristotle and Lucretius could not accept
indeterminism long before Laplace and Einstein.
“Again, if all movement is interconnected, the new arising from the old in a
determinate order - if the atoms never swerve so as to originate some new
movement that will snap the bonds of fate, the everlasting sequence of cause and
effect – what is the source of the free will possessed by living things throughout
the earth?”
Lucretius – De Rerum Natura
Today, physics makes a clear distinction between laws, initial conditions and boundary
conditions, and recognizes that the latter conditions are undeterminable by laws (e.g., Wigner,
1964). It also recognizes that measurement is an irreducible symbol-matter relation that is not
describable by laws (e.g., von Neumann, 1955). Eddington (1929) in discussing free will
emphasized that, “There is nothing to prevent the assemblage of atoms constituting a brain from
being of itself a thinking object in virtue of that nature which physics leaves undetermined and
undeterminable.” Most physicists have also come to accept the evidence that all physical laws
are fundamentally probabilistic so that no event is strictly deterministic. What appears as
deterministic is just very reliable statistics. Physicists also have no reluctance to use metaphors in
their models because common sense concepts are clearly inadequate for picturing matter,
especially at very small sizes and at very high energies.
Nevertheless, there are still scientists and philosophers who believe in a literal concept of
material reductionism (e.g., Churchland and Churchland, 1998). It is clear that the symbols in the
brain depend on molecular processes just as do the symbols of the genes, but dependency on
laws does not mean determined by laws.
In spite of the fact that chance events are common in everyday experience there is still a
common feeling that the complex structures in the universe are the result of natural laws. This is
not the case. While laws always apply, the laws also leave many events undetermined and
undeterminable. Many accidental structures arise by chance, and persist by some form of natural
or artificial selection. Only a small fraction of the complexity of the universe is the result of
fundamental laws. The physicist Murray Gell-Mann has emphasized that most of the complexity
in the universe is the result of “frozen accidents” (Gell-Mann, 1994). In the case of life, we know
that biological complexity is largely the result of chance events and natural selection. Natural
selection is itself a statistical influence over an indefinite length of time on the survival of a
replicating population distribution. While species are not entirely frozen or accidental, their
evolution is undeterminable over the long run.
The concept of universal and inexorable physical laws can be misleading if you ignore these
accidental or naturally selected structures that act locally as boundary conditions or constraints
on the lawful dynamics. It is true that many local structures like atoms, molecules, crystals, and
solids are determined by laws insofar as they are local minimum energy configurations.
Consequently, they are effectively stable or unchanging with respect to the lawful dynamics.
They can act on the dynamics as boundary conditions, and as local structures they must be
described separately from universal laws. Most of our artifacts and machines can be described
physically as special boundary conditions constructed from 3-dimensional solid parts.
What should by now be obvious is that there is an enormously larger class of natural
structures that have nearly equal probabilities of formation because they are 1-dimensional and
have nearly equivalent energies. They are the linear copolymers, like the polynucleotides and
polypeptides. Life and evolution depend on this class of copolymer that forms an unbounded
sequence space, undetermined by laws. Life also depends on the fact that these sequences act as
special boundary conditions or constraints on the energy-based forces that fold them into 3-dimensional structures. This unbounded sequence space is the first component of the freedom
from laws necessary for evolution.
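The scale of this sequence space can be illustrated with a few lines of arithmetic (my own toy calculation, not from the text): because the sequences are nearly energy-degenerate, nothing in the dynamics prefers one of the exponentially many alternatives over another.

```python
# Toy arithmetic (illustration only): the number of energy-degenerate linear
# copolymer sequences grows exponentially with chain length, so even short
# chains define a sequence space far too large for exhaustive physical search.

def sequence_space_size(alphabet_size: int, length: int) -> int:
    """Number of distinct linear sequences over a given monomer alphabet."""
    return alphabet_size ** length

# Polynucleotides use a 4-letter alphabet; polypeptides a 20-letter one.
gene = sequence_space_size(4, 300)       # a short gene of 300 nucleotides
protein = sequence_space_size(20, 100)   # a small protein of 100 residues

print(f"4^300  is about 10^{len(str(gene)) - 1}")
print(f"20^100 is about 10^{len(str(protein)) - 1}")
```

For comparison, the observable universe contains roughly 10^80 atoms, so even these short chains define spaces that no physical process could ever enumerate.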
The second component of this freedom from laws is the remarkable fact that the entire range
of structures and functions of the organism can be effectively controlled by members of this 1-dimensional sequence space. This second freedom is the freedom of interpretation of the
symbols. This requires an arbitrary code, a condition necessary for a general-purpose language,
a concept that I will discuss in Sec.5.
The implementation of a code requires a subtle but essential physical condition. The code’s
arbitrariness means that the constraint of the coding process must have freedom that is not
completely under the control of the dynamics, but at the same time it must depend on the
dynamics for execution of the coding rules. This requires that the constraint must have more
degrees of freedom in its structural configurations than the laws allow in its energy-based
dynamic behavior. This type of special boundary condition is called a non-holonomic constraint
(Pattee, 1968, 1972). This is a common type of structure found in almost all our machine
artifacts, but because they are special local boundary conditions they are of interest for
engineering but not for basic physics. I recognized this as the same type of constraint von
Neumann (1966) found necessary for computational generality with a single hardware machine,
namely, that the symbolic instructions must be unrestricted by the hardware dynamics, and at the
same time the symbols must control the dynamic symbol manipulations of the hardware. I
discuss this further in Sec. 5.
The enzyme is the primeval natural example. The binding to its substrate and to the bonds it
catalyzes depends on lawful energetic dynamics, but the shapes or specificity of its substrate
binding site and its reaction site are what determines its choice of substrate and the reaction it
catalyzes. These 3-dimensional shapes are determined by the 1-dimensional gene sequences that
constrain their folding. In other words, enzyme function is under symbolic control and is not
determined by the energetic dynamics, but the substrate binding, the allosteric coupling to the
catalytic site, and the catalysis itself depend on the energy-based dynamics.
Symbol-constrained folding of copolymer chains in replicating cells is the primeval symbol-matter interaction. Protein and nucleic acid folding is a symbol-constrained, law-based,
minimum energy process. In typical proteins the number of variables is so large that even the
fastest computers cannot calculate the folding in full detail (e.g., Frauenfelder and Wolynes,
1994). The concept of folding depends on the existence of strong and weak chemical bonds.
Folding means forming a 3-dimensional configuration by many weak bonds without breaking the
strong-bonded 1-dimensional sequence. In the context of the cell, the strong-bonded, energy-degenerate, linear sequence is the symbolic boundary condition on the energy-based folding
process. Polanyi (1968) has aptly characterized the basis of life as special boundary conditions
that “harness the laws of nature.”
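The way a 1-dimensional symbol sequence constrains an energy-based folding process can be caricatured with the standard HP lattice toy model (this is my illustration, not an example from the text): monomers are H (hydrophobic) or P (polar), conformations are self-avoiding walks on a square lattice, and folding is energy minimization over conformations, with the sequence fixing which conformation wins.

```python
# Minimal 2-D HP lattice folding sketch (toy model, illustration only).
# The 1-D H/P sequence is the "symbolic" boundary condition; the "law-based"
# part is minimization of an energy that rewards non-consecutive H-H contacts.

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def conformations(n):
    """Yield all self-avoiding walks of n monomers on the square lattice."""
    def extend(path):
        if len(path) == n:
            yield path
            return
        x, y = path[-1]
        for dx, dy in MOVES:
            step = (x + dx, y + dy)
            if step not in path:
                yield from extend(path + [step])
    # Fix the first bond to factor out rotations and reflections.
    yield from extend([(0, 0), (1, 0)])

def energy(seq, path):
    """Score -1 for each non-consecutive H-H pair on adjacent lattice sites."""
    pos = {p: i for i, p in enumerate(path)}
    e = 0
    for (x, y), i in pos.items():
        for dx, dy in MOVES:
            j = pos.get((x + dx, y + dy))
            if j is not None and j > i + 1 and seq[i] == seq[j] == "H":
                e -= 1
    return e

def fold(seq):
    """Return (minimum energy, one minimum-energy conformation)."""
    return min((energy(seq, p), p) for p in conformations(len(seq)))

e, shape = fold("HPPHPPHH")
print(e, shape)
```

Changing a single symbol in the sequence can change the minimum-energy shape, which is the toy analogue of a mutation altering an enzyme's folded specificity.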
There are no fundamental scientific principles beyond laws, chance, and natural selection
that we can use as ultimate explanations of natural events. Consequently, because symbols and
informational events are not determined by natural laws, their existence must ultimately be the
result of chance and natural or artificial selection.
The chance occurrence of random polymers that form autocatalytic cycles was probably
important in the origin of life (e.g., Eigen & Schuster, 1979; Kauffman, 1993) but there is no
objective necessity of calling these cycles symbolic until they exert heritable control over individual, locally bounded, replicating units that are subject to natural selection. Only natural selection can
determine if a genetic sequence represents functional information. A mutation in a gene is just an
error until replication and natural selection shows it to be adaptive.
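The statistical character of this test can be made concrete with a toy replicator simulation (entirely my sketch; the population size, fitness values, and Wright-Fisher-style resampling are assumptions for illustration). The same variant is an error in one environment and functional information in another:

```python
# Toy selection sketch (illustration only): a rare variant 'B' arises in a
# replicating population; only differential replication over many generations
# decides whether it was "just an error" or adaptive.

import random

def evolve(fitness, pop_size=1000, generations=200, seed=1):
    """Wright-Fisher style resampling; returns the final frequency of 'B'."""
    random.seed(seed)
    pop = ["A"] * (pop_size - 10) + ["B"] * 10  # 'B' starts as a rare mutation
    for _ in range(generations):
        weights = [fitness[x] for x in pop]
        pop = random.choices(pop, weights=weights, k=pop_size)
    return pop.count("B") / pop_size

# The same mutation under two environments (fitness values are arbitrary):
print(evolve({"A": 1.0, "B": 0.8}))  # selected against: 'B' is just an error
print(evolve({"A": 1.0, "B": 1.2}))  # selected for: 'B' is adaptive
```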
4. Early personal history of the problem
There are more requirements for a polymer sequence to function as a symbol besides energy
degeneracy, coding rules, and the ability to fold into a specific catalyst. The entire system must
be able to replicate and to persist by heritable variation and natural selection. It was only after
studying the nature of hierarchical organization, von Neumann’s logic of self-replication, and the
measurement problem that I began to understand the essential semiotic requirement that symbols
and codes must be part of a language to allow open-ended evolution. To explain this I need to
recount a brief personal history.
The symbol-matter problem first arose in my thinking about the origin of life. I have to agree
with Laotzu that symbols emerged from the lawful material universe at the origin of life. From
an evolutionary point of view I do not see how one can support the claim that semiotic principles
are on the same footing as physical laws. Symbols and life are coextensive concepts and their
occurrence in the universe is cosmically very recent and exceedingly sparse, at least for life as
we know it.
Before the discoveries of the genetic code and protein synthesis, physicists often viewed life
as a basic challenge to natural laws, and many expressed doubt that life is reducible to physical
laws. Bohr (1933), Delbrück, and Schrödinger are prominent examples of those whose thoughts
on the subject are in the literature (e.g., McKaughan, 2005). Like many other physicists at the
time, I was challenged by the central question raised in Schrödinger’s “What Is Life?” He
asks how the gene, “that miniature code,” could reliably control the development of such highly
complicated organisms. In the 1960s there were two schools of thought; one school focused on
the molecular structure and biochemistry of life, the other school (that should now be recognized
as “biosemiotics”) focused on the informational aspects of genetic control (e.g., Beadle, 1963;
Kendrew 1968; Stent, 1968; Delbrück, 1970).
I first belonged to the material school because my physics research was on x-ray microscopic
and microdiffraction techniques for studying cell structure. Because the origin of life certainly
requires understanding the origin of higher levels of organization, I also began to study
hierarchical structures, specifically how new levels of organization are distinguished and whether
higher levels of structure were objective, a descriptive convenience, or an epistemic necessity.
My ideas about hierarchic structures were influenced by Herbert Simon’s paper, “The
Architecture of Complexity: Hierarchic Systems” (Simon, 1962) in which he explains why the
evolution of complex systems favors hierarchical levels and why different levels require different
models that often appear formally incompatible. His central thesis is that, “hierarchically
organized systems will evolve far more quickly than non-hierarchical systems of comparable
size” (ibid. p. 184). By “hierarchically organized” he means that each level is defined by its
stability relative to its components at the lower level. What he means by “stability” is that the
dynamics of the higher level appears slow (and temporally incoherent) compared to the faster
dynamics of the subunits that make up the lower level system. This temporal incoherence means
that the faster dynamic description of the lower level cannot adequately describe the features of
interest at the next higher level. This is one way to view symbol structures.
It became clear from my own studies of hierarchies that complex systems require multiple
descriptions that are often formally incompatible (Pattee, 1973). Robert Rosen reached the same
conclusion from a more epistemic viewpoint: “No one type of system description can possibly
display by itself a definite type of hierarchical structure for the simple reason that we recognize
such structure only by the necessity for different kinds of system description” (Rosen, 1969: p.
188). There are also many such cases in physics and chemistry where different descriptions are
necessary at different levels of organization (e.g., Platt, 1961; Anderson, 1972). For example,
chemical bonds can be explained by quantum dynamics, but in practice chemists use higher-level
descriptions because most molecules are too complex for practical computation at the quantum
dynamical level. The relation between the reversible microscopic laws of physics and the
irreversible statistical laws is a classical case of two levels with formally incompatible
descriptions. The formal incompatibility arises from different epistemological assumptions about
what is observable at each level.
Simon also made the important distinction between a “state description” and a “process
description” that is similar to Ryle’s (1949) distinction between “knowing that” and “knowing
how.” All genetic descriptions are instructions that function as process descriptions in the
knowing how category. It is another unresolved semiotic problem to decide where, in the evolution of brains, the first cognitive state descriptions, or knowing that, arose. The problem also
arises in the symbol-grounding problem in artificial intelligence (e.g., Harnad, 1990), and in
criteria (like the Turing test) for deciding whether a computer can think. There is a current belief
in artificial life research that any empirical sense of “meaning” or “knowing” can occur only
when symbol systems are connected by sensory inputs and outputs to material systems that in
turn are immersed in a realistic physical environment. This area of research is often called
situated robotics (e.g., Clark, 1997).
I first tried several models of self-organizing tactic heteropolymers that appear to evolve
higher levels of organization. I also discovered simple self-replicating models that were
essentially simulations of autocatalytic cycles, but in none of these models did I make a clear
distinction between symbolic descriptions and material constructions (Pattee, 1961). As a result,
what I learned from these self-organizing replicating models was that none of them had any
evolutionary potential. While a few different “species” could arise by mutation, it became
obvious that variation was strictly limited, because all but a few mutations destroyed the
possibility of further replication, as did any change in their artificial environment.
It was at that time (1966) that von Neumann's lecture notes on self-reproducing automata
were edited and published by Arthur Burks. Von Neumann’s basic thoughts were about
computers (automata), but his motivation was the conceptual problem of why some systems
decay while living systems evolve endless complexity. Von Neumann said, “There is thus this
completely decisive property of complexity, that there exists a critical size below which the
process of synthesis is degenerative, but above which the phenomenon of synthesis, if properly
arranged, can become explosive, in other words, where synthesis of automata can proceed in
such a manner that each automaton will produce other automata which are more complex and of
higher potentialities than itself” (von Neumann, 1966, p. 80).
Von Neumann’s basic argument was informal and incomplete, but I discovered three points of
great significance. They explained the failure of my early evolutionary models.
(1) Open-ended evolution requires a rate-independent descriptive language, distinct from the
dynamical rate-dependent construction that it controls. This implies some form of rate-independent memory structure.
(2) In order to replicate the whole system, this description must be read twice: once as a state
description, and once as a process description. That is, in order to copy the description it must be
read once as the state of a material structure without regard to its meaning. Then the description
must be read again and interpreted (decoded) as a process description.
(3) The descriptive language must be rich enough in the context of a changing environment to
allow open-ended evolution within that environment.
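Point (2), the two readings of one description, can be sketched in a few lines (my own hedged illustration; the symbol-to-action code below is arbitrary and hypothetical):

```python
# Sketch of von Neumann's two readings (illustration only): the description
# is read once as an uninterpreted string to be copied, and once through an
# arbitrary code as instructions for material construction.

CODE = {"a": "add-part-A", "b": "add-part-B"}  # hypothetical coding rules

def replicate(description):
    # Reading 1: copy the description without regard to its meaning.
    copy = str(description)
    # Reading 2: decode the same string as a process description.
    construction = [CODE[symbol] for symbol in description]
    return copy, construction

memory, body = replicate("abba")
print(memory)  # the inherited, uninterpreted description
print(body)    # the sequence of construction steps it encodes
```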
A symbol system with its code that is rich enough to allow open-ended evolution in a
changing environment is sometimes called a “universal language,” or more modestly a “general-purpose language,” it being understood that it is universal or general only in the context of a
specified type of environment. I will elaborate on this concept of language in the next section.
Von Neumann’s points made sense to me, first, because of their sound logic, second, because
they explained why none of my computer models of self-organizing systems evolved, and third,
because his argument described how all existing cells actually replicate and evolve. With these
points in mind I began to think about how they would apply to a physical system that had to obey
the rate-dependent laws of energy and matter.
5. The necessity of a general-purpose language.
At first I did not fully appreciate von Neumann’s chain of thought. To understand it you need
to understand the formal concept of a universal language that is the basis of Turing’s Universal
Turing Machine. Von Neumann used the principle of Turing’s universal machine as an analogy
for his evolvable self-replicating machine. The Universal Turing Machine is defined as a single
formal machine that when fed a complete description of any other Turing machine would
compute the same function as that machine. The meaning of universal language here simply
means that the language is rich enough to explicitly describe all possible Turing machines. This
defines the conceptually limited, but cardinally infinite, environment describable by that
language.
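The sense in which one fixed machine can compute whatever any described machine computes can be made concrete with a minimal interpreter (my sketch; the transition-table encoding is one arbitrary choice among many):

```python
# Minimal Turing-machine interpreter (illustration only): the interpreter is
# one fixed "machine," and each particular machine is fed to it as a symbolic
# description, here a transition table mapping (state, symbol) to actions.

def run(table, tape, state="q0", head=0, max_steps=10_000):
    """Execute a described machine; '_' is the blank symbol."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        state, write, move = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# One described machine: flip every bit, halt at the first blank.
flipper = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}
print(run(flipper, "0110"))  # prints 1001
```

Here `run` plays the role of the universal machine, and `flipper` is one of the unboundedly many machine descriptions its language can express.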
Von Neumann’s analogy required two steps. The first step was to extend the entirely formal
idea of a Universal Turing Machine to a real material computer. By “formal” here I mean a pure
symbol system governed by axioms and syntax that have no necessary relation to physical laws
and no necessary external referents. Von Neumann is usually credited as the first to fully
understand the practical significance of this formal principle: that to achieve open-ended
computational generality with a single hardware machine, the symbolic instructions must be
unrestricted by the hardware dynamics, and at the same time the symbols must control the
dynamic symbol manipulations of the hardware. Again, this implies that the symbol system (the
programming language) is rich enough to define all computations. This principle allows in
practice a material approximation of the symbol-manipulating formal Universal Turing
Machine. This approximation can be realized only if we separate the dynamically unconstrained
symbolic memory, or software, from the symbolically constrainable dynamic symbol-changing
hardware, and if we provide a rich enough programming language. This is why it is known as a
general purpose, memory-stored program computer. Discovering what is meant by a “rich
enough language” for computation is again a context-dependent concept covering many
hierarchical levels in real computers. Designing these programming languages and writing
programs are difficult problems that have generated new academic departments as well as new
industries.
The second step that von Neumann used to explain his “threshold of complexity” applied the
same fundamental principle of separate description, but replaced the computational hardware
with construction hardware. This was a conceptual step that necessarily involves the symbol-matter problem because construction is a material process. By freeing the non-dynamic,
“quiescent” symbolic description from the dynamics of the universal constructor it controls, he
showed that this one ideal construction process could in principle evolve unlimited complexity;
first, because the controlling descriptive language was assumed to be complete for all
constructions, and second, because it was open-ended (ibid. p. 86). Von Neumann briefly
discussed some of the properties that a material constructor must have, but he did not pursue the
physics of this problem or the symbol-matter problem in the context of a real physical
environment. He spent his further efforts trying to formalize his replicating automaton.
Consequently, I began to work on the physical conditions necessary to allow symbolic control of
material, law-based, dynamical systems.
Von Neumann did explicitly state that he was intentionally avoiding the symbol-matter
problem. He emphasized that formalizing was only half the problem. He says, “one does not ask
the most intriguing, exciting, and important question of why the molecules or aggregates which
in nature really occur in these parts are the sorts of things they are” (ibid. p. 77). In another
lecture he warned that it was “clear at which point the analogy with the Turing Machine ceases
to be valid” (von Neumann, 1951, p. 318), which was the fact that, unlike computation, the
construction process is not algorithmic. Von Neumann stated that any analogy of a formal
Turing Machine with a material organism would break down just because algorithmic
computational languages require complete, unambiguous symbolic instructions for every step,
while genetic description is not complete and therefore not algorithmic. I will explain in Sec. 7
why no single description of a matter-symbol system can be complete in any algorithmic sense.
In fact, it is just this ability of symbolic constraints to control functions with incomplete
descriptive detail that allows such effective adaptive evolution (Conrad, 1984, 1990).
6. Life is matter controlled by symbols
I first publicly discussed these ideas in 1966 at Waddington’s theoretical biology symposia at
the Villa Serbelloni. At the first symposium I explained the physical requirements for a symbol
code (Pattee, 1968). At the final Waddington symposium I emphasized that, “dependence on
symbol structures and language constraints is the essence of life.” I also warned that an exclusive
reductionist view of molecular biology was obscuring the fundamental symbol-matter problem.
“In fact, the acceptance of the structural data of molecular biology as ‘the physical
basis of life’ tends to obscure the basic question rather than illuminate it. We are
taught more and more to accept the genetic instructions as nothing but ordinary
molecules, and to forget the integrated constraints that endow what otherwise would
be ordinary molecules with their functional symbolic properties.”
“What I would like to counteract is the oversimplification, or perhaps what is
better called the evasion of the genotype-phenotype distinction. In order to have an
explanation of life, this distinction cannot be treated as merely a descriptive
convenience for what is popularly assumed to be the molecular basis of life. This
interpretation is not the property of a single molecule, which is only a symbol
vehicle, but a consequence of a coherent set of constraints with which they interact”
(Pattee, 1972: p. 249).
At the time, or at least among those biologists I knew, there was no objection to my use of
the concept of language at the genetic level. In fact, even linguists had developed the analogy
(e.g., Jakobson, 1970). At the same symposium Brian Goodwin in his paper titled “Biology and
meaning” stated clearly why energy-independent symbol systems are essential for evolution.
“It is genetic symbolism that enables living matter to step outside the
constraints imposed by physical laws formulated in terms of minimum potential
criteria.”
“The symbolic nature of the genetic material is what provides a virtually
inexhaustible reservoir of potential genetic states for evolution, since symbols can
be juxtaposed in very many different ways to provide new ‘statements’, new
hypotheses, which can then be tested [by natural selection]” (Goodwin, 1972:
p.270).
While the language of physics is reasonably simple and unambiguous, I discovered quickly
that the terminologies of semiotics are so complicated and controversial that I could not hope to
find consensus on primitive symbol system terminology. As I stated earlier, linguistic
terminology originated at the highest levels with human language and human behavior. Not only
that, but in the recent past it was believed that symbolic language was what distinguished
humans from lower animals. Consequently, the idea of a symbolic language at the origin of life
was resisted by many linguists, anthropologists, and philosophers as unreasonable, except as a
vague metaphor or distant analogy.
Evolution does not exhibit sharp and clear distinctions between functional activities like
language usages even though we can see enormous differences between genetic symbol systems
and the brain’s symbol systems. The important question for biosemiotics is what similarities are
fundamental and what differences are irrelevant for their primary survival functions.
I define symbol as a material constraint not determined by physical laws that controls specific
physical dynamics of a self-replicating system. My usage is consistent with other definitions of a
symbol, like Peirce’s triadic symbol–interpreter–referent definition. However, I find all the
detailed refinements of Peirce’s complicated terminologies empirically ambiguous and
unnecessary at the cellular level. I should add that symbols are arbitrary only to the extent that
they are not proximally or structurally determined by laws. However, they are non-arbitrary to
the extent that
they have been naturally or artificially selected to satisfy functional requirements. At the
molecular level Monod (1971) calls this structural freedom the principle of “chemical gratuity.”
However, a proximal constraint, as in control of protein synthesis, is only the beginning of a
coherent network of energy-based interactions necessary for the unit’s replication and survival in
an environment. Ultimately natural selection will determine the symbol’s survival.
In one of my early papers, “How does a molecule become a message?” (Pattee, 1969),
presented to developmental biologists at a symposium titled “Communication in Development,”
I explained why no isolated individual object can be a symbol. A symbol is always an element of
a coherent symbol system functioning as a language that allows adaptive behavior. Furthermore,
the simplest concept of adaptive behavior does not make sense unless its language is about
survival of a population distribution in an environment. In other words, the semiotic concepts of
symbols, codes, messages, and languages have objectively definable meanings only insofar as
they are physically arbitrary and function in the survival of a replicating population of cells in an
ecosystem. Heritable replication is the first empirically observable objective function.
Consequently, the use of the word “symbol” at any lower level is scientifically gratuitous. Of
course, as organisms evolve, so will higher levels of functions and meanings, and at all levels
survival remains as the ultimate test.
7. Symbolic descriptions are necessarily incomplete.
By the incompleteness of descriptions here I am not referring to the well-known formal
logical incompleteness theorems of Gödel and others, but to the need to use complementary
descriptions for symbol structure and symbol function. Conceptually and logically it is also
necessary to separate the description from what it describes, the measuring device from what it
measures, and the controller from what it controls if these concepts are to make any sense. The
basic meaning of a symbol, a description, a measurement, and a control is that they stand for, or
act on, something other than themselves. This necessary separation I have called an epistemic
cut, following a discussion of complementarity by Pauli (1995, p. 41).
The fundamental reason why a complete single description of a symbol-matter system is
impossible derives from the two incompatible epistemological bases I discussed in Sec. 2. Von
Neumann clearly explained how this applies to the measurement problem in physics (von
Neumann, 1955, pp. 419-420). Measurement is a special type of description, but his argument
can be generalized to all descriptions. The function of measurement is to produce a record of
specific aspects of the observed system. Because physical laws are universal and apply to all
possible systems, they are moot with respect to a specific system until the system is explicitly
defined by measuring its initial conditions or its state. Measurement, then, supplies the record of
these initial conditions. Because the measuring device itself is a material structure
obeying laws, if it is to perform its function, the detailed initial conditions of the measuring
device must be selectively ignored; otherwise the initial conditions of the measuring device must
be measured by another measuring device ― a process that becomes a useless infinite regress.
This is not a metaphysical opinion but a pragmatic fact. To obtain a meaningful record we
must define an epistemic cut between the measuring device and what it measures.
“That is, we must always divide the world into two parts, the one being the
observed system, the other the observer. In the former, we can follow up all
physical processes (in principle at least) arbitrarily precisely. In the latter, this is
meaningless. The boundary between the two is arbitrary to a very large extent . . .
but this does not change the fact that in each method of description the boundary
must be placed somewhere, if the method is not to proceed vacuously, i.e., if a
comparison with experiment is to be possible.” (von Neumann, 1955, p. 420)
Measurement implies choosing explicitly where to place the cut and what to measure. In other
words, we must be able to selectively measure something without having to measure everything.
It follows that to give a functional result, a number of observable degrees of freedom in the
measuring device must be selectively ignored. As a consequence of this loss of detailed
information, measurement can be described only as an intrinsically irreversible process. That is,
the record of the measurement must come after the measurement, and the process cannot be
meaningfully reversed. On the contrary, all the most detailed fundamental physical laws are
time-symmetric or reversible. Because of this necessary loss of information, a single formal
description of the measurement device cannot be complete if the process is to function as a
measurement.
Just as in the case of measurement, in order to have any useful function, genes must be able to
symbolize specific parts without having to symbolize all possible parts; otherwise genetic
information would never end. Without selective simplification genetic descriptions would suffer
the same infinite regress, as would the detailed descriptions of measuring devices. This is a very
general argument that applies to all descriptions and functional symbol systems (Pattee, 1995).
For example, if the program of a computer had to describe the solid-state dynamics of all the
transistor circuits, it would never finish a computation. Because of the high cost of evolution by
variation and selection, genetic descriptions are extremely concise and efficient process
descriptions.
Polanyi has also pointed this out even more generally. Even if an ideal reductionist could
follow the entire universe in one detailed and complete Laplacean lawful description it would be
of no functional or explanatory value whatever, because, “it substitutes for the subjects in which
we are interested a set of data which tell us nothing that we want to know” (Polanyi, 1964, p.
140). There would be no way in this endless amount of meaningless data to find a description of
those subsystems that are of interest, such as genes, organisms, measurements, and brain states.
8. To what must symbols refer in order to replicate cells?
The first essential reference of the genetic symbols is to the material parts needed to replicate
or construct a copy of the cell. First, we have to know what the cell can identify in its
environment, or what Uexküll called the cell’s Umwelt, because the cell must recognize those
parts that are necessary to construct a daughter cell. Clearly, if the recognizable parts are too
small, like atoms, the length of the genetic description will be too long to define a useful
structure like an enzyme. If the number of types of elementary parts is large the genes must have
a correspondingly large vocabulary of nouns referring to these parts. Also, if the parts are too
large or too complex the variety of constructions will be too limited, even in the unlikely event
that such large parts are available in the environment. Von Neumann called the choice of an
evolvable set of parts the “parts problem.” Nature somehow solved this problem with a relatively
small set of parts that were available in the prebiotic environment, although we are not sure what
these parts were at the origin of life.
The second essential reference of genetic symbols is to the genetic sequences themselves.
This self-reference turns out to be the most important and complex aspect of self-replication and
development. I discuss how this differs from cultural language in the next section. One important
problem for biosemiotics is how one can usefully follow the meaning of genetic symbol
reference from its relatively simple and explicit protein synthesis to the many interactions of the
protein with the rest of the cell as it spreads its epigenetic effects on the adaptive behavior at
many hierarchical levels of organization and ultimately contributes to the survival of an entire
population. Hoffmeyer (2007) has coined the term semiotic causation for this extended
integrative function. This is the central issue in developmental biology and it is the focus of new
fields like proteomics. Understanding it will probably require a hermeneutic circle more complex
than any cultural language text.
9. Similarities in genetic language and natural language.
The significant similarities of genetic and natural language might be defined as those essential
characteristics required for a general-purpose language. As I discussed in Sec. 3, a general-purpose language requires two freedoms from physical laws in order to allow open-ended
evolution as well as freedom of thought and free will. The first physical requirement is energy
degeneracy of the symbol vehicles so that there is an endless variety of structures undetermined
by physical laws. The second condition is the freedom of the code, or the translation or
interpretation of the symbols. The simplest solution for the first condition is a 1-dimensional
discrete string of at least two distinguishable units. The 1-dimensionality also gives the minimum
number of interactions that can still maintain an ordered sequence. In other words, there is the
least effect of physical laws on the order as long as it remains unfolded. One can imagine other
possibilities, like a 2-dimensional sheet, but that would introduce more minimum energy
requirements of physical laws. Also, construction and reading problems would be more
complicated.
The second freedom requirement addresses what is required of a code, as discussed in Sec. 3.
Physically a code is a special type of constraint that cannot be formally separated from the
dynamics it constrains. A code must also be complete, meaning it must cover all sequences. I do
not think of a code in the narrow sense of an arbitrary mapping between two symbol systems,
associated with a translation or hiding a message. I use the original meaning in the context of a
social code or system of rules used in common by an organization with the function of
maintaining the coherence of the organization. In this sense, a code implies a complete set of
rules that is associated with a symbol system or language. This is the case with the genetic code
that can read any genetic sequence. It corresponds in some ways to the syntax of cultural
languages that applies to all texts; however, the genetic code is so much simpler that the
comparison is weak. A partial code appears to be useless for replication. The concept of a
general-purpose language is crucial for open-ended evolution, but there must have been
precursors that were simpler. This is the origin problem that is still mysterious for both life and
human language.
Beyond these basic physical freedom requirements, the nature of language will depend on the
organism and its environment. What are the symbol vehicles? How are they written and read? To
what do they refer? What are the essential functions of the language? The answers to these and
other questions are where there are great differences between genetic and natural languages.
10. Differences in genetic language and natural language.
The two fundamental symbol-matter processes in all organisms are the symbol-constrained
dynamics of protein folding, and the complementary template pairing in the nucleic acids. In
both cases the information in a symbol sequence constrains the energy-based lawful dynamics.
Evolution depends on the efficacy of this folding process where 1-dimensional (energy
degenerate) symbol sequences constrain the 3-dimensional (energy-dependent) control
dynamics. This coding from a sequence space to a function space is a many-to-many mapping
that has special properties of redundancy and gradualness that are essential for the effectiveness
of natural selection (e.g., Conrad, 1990; Schuster, 1998).
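The redundancy side of this mapping can be seen in a fragment of the real codon table: all four codons beginning GG name glycine, so third-position point mutations are neutral. The mutation helper below is my own illustrative construction, not biological machinery.

```python
# A fragment of the real genetic code: the four GG_ codons all name glycine.
# Redundancy of this kind makes many point mutations neutral, giving the
# sequence-to-function mapping the gradualness selection depends on.
# (`mutate_third_base` is an illustrative helper, not a biological process.)

GLY_CODONS = {"GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly"}

def mutate_third_base(codon, base):
    """Substitute the third (wobble) position of a codon."""
    return codon[:2] + base

for base in "UCAG":
    mutant = mutate_third_base("GGU", base)
    assert GLY_CODONS[mutant] == "Gly"  # every substitution is synonymous
print("all third-position mutations of GGU are neutral")
```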
It is difficult to find any close correspondence with these processes in natural language.
Classical linguistic terminology is not of much help at this level. The so-called universals that
address syntactic similarities of natural language do not apply to the basic functions of genetic
language (e.g., Greenberg, 1963). Furthermore, where and how natural language symbol
sequences are transformed in the brain to produce images or actions is unknown. I have
suggested that some functional analog of folding must occur in the n-dimensional network of
neurons to remove the degeneracy of 1-dimensional symbol strings of speech or text (Pattee,
1980) but so far we know too little of how brains generate symbols and transform speech and
text to action or meaning to make any comparison.
A second difference between genetic expressions and natural language expressions is that
genetic construction sequences are extremely simple compared to natural language. Syntactically
speaking, the basic genetic controlled constructions have only one person, one tense, one mood,
and one part of speech. All constructive expressions correspond to the 1st person singular, present
imperative, and use only nouns. Genes cannot make simple declarative statements, ask questions,
or make propositional statements that can be true or false. All structural genetic sequences are
made up of statements of the form, “Add this part now!” The so-called structural DNA
sequences are proper names directly referring, by way of the code, to each of the 20 amino acids
and, in copying, to the 4 bases. None of the cell’s actions are expressed in gene sequences by
verbs. Actions are the result of the constraints of these noun sequences on the physical dynamics.
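This single imperative mood can be reduced to a short sketch; the loop below is my own illustrative simplification, using only a small fragment of the real codon table. Each codon is read as a proper name, and the only verb is supplied by the reading machinery itself.

```python
# The imperative mood of structural genes, reduced to code: each codon is a
# proper name for a part, and the only action is "add this part now".
# Only a small fragment of the real codon table is shown here.

CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly"}
STOP = {"UAA", "UAG", "UGA"}  # the real stop codons

def translate(mrna):
    """Read codons left to right; every step is the same imperative act --
    no verbs, tenses, or propositions appear in the text itself."""
    chain = []
    for i in range(0, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        if codon in STOP:
            break
        chain.append(CODON_TABLE[codon])  # "Add this part now!"
    return chain

print(translate("AUGUUUGGCUAA"))  # -> ['Met', 'Phe', 'Gly']
```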
The much larger numbers of non-structural and non-coding genetic sequences have many
functions that are still being discovered. Some of them function syntactically like punctuation,
but most controls have no apparent similarity to natural language syntax. For example, some
DNA sequences are read in both directions and each product has a different function.
The third basic difference between genetic expression and natural language is that in order to
self-replicate the physical structures that execute the copying of symbol sequences and that
execute the reading or decoding of symbol sequences must be described by these same symbol
sequences. This is a long and growing list, including DNA and RNA polymerases, transfer
RNAs, aminoacyl tRNA synthetases, ribosomal RNAs and proteins, microRNAs, and many
other genetically described control structures that execute the replicative and developmental
functions of the organism. This complete self-description requirement is called semiotic closure
(Pattee, 1995). Self-description is possible in cultural-based languages, but nothing corresponds
to this self-construction requirement.
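A loose computational analog of semiotic closure is a quine: a program whose symbol sequence describes the very process that reproduces that sequence. The analogy is partial, as the text notes, since nothing material is constructed; only text is copied.

```python
# A quine: the program's description contains the instructions for copying
# itself. Unlike a cell, nothing here constructs hardware; only text is
# reproduced, so this is an analogy to semiotic closure, not an instance.

s = 's = %r\nprint(s %% s)'
print(s % s)  # output is exactly the two code lines above, comments aside
```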
The fourth and most important evolution-based difference between genetic expression and
natural language is the genetic system’s heavy dependence on networks of self-reference or metalinguistic control
sequences. The importance of genetic metalanguage was first discussed by Waddington in his
Epilogue to the Serbelloni meetings (Waddington, 1972). There are of course metalanguages
embedded in natural language and logic, but in normal discourse these metalanguages do not
control or determine what sentences and paragraphs are read or omitted in a text. By contrast,
replication and development in organisms depends entirely on gene regulation ― the coordinated
expression or omission of structural genetic sequences as determined by many control sequences
that are themselves influenced by a complex network of interactions with the internal and
external environment.
Much closer analogs to genetic language are our artificial computer programming languages,
especially those that are used to control robots or machines. That is partly due to designers
copying nature. These languages use modular subroutines that are addressed directly or
conditionally, based on inputs from their hardware environment. Early engineering control
theory dealt with localized systems in real time or with relatively short time delays. Programmed
computation is controlled by random-access memory, which means non-dynamic (temporally
incoherent) symbolic displacements in space and time. What we have learned is that once we
have a system with expandable memory storage and programmable computation we are in a
semiotic control domain with open-ended potential for complicated behavior.
On the other hand, we also know that symbol-constrained physical processes, like folding,
can outstrip the speed and power of digital symbolic computations. It therefore has become
useful in engineering design to copy efficient biological strategies (e.g., Miles and Hoy, 2006) as
well as the basic evolutionary strategy of hereditary variation and natural selection. This is now
done using many forms of genetic algorithms.
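A minimal genetic algorithm makes the strategy concrete; the target string, mutation rate, and population size below are illustrative choices of mine, not anything from the cited work.

```python
import random

# A minimal genetic algorithm: heritable variation (copying with mutation)
# plus selection on a fitness function. All parameters are illustrative.

random.seed(0)  # reproducible run
TARGET = "11111111"

def fitness(genome):
    """Count positions matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Copy with occasional random substitution -- heritable variation."""
    return "".join(random.choice("01") if random.random() < rate else g
                   for g in genome)

population = ["00000000"] * 20
best = population[0]
for generation in range(200):
    best = max(population, key=fitness)  # selection of the fittest genome
    if best == TARGET:
        break
    # elitist reproduction: keep the best, vary the rest
    population = [best] + [mutate(best) for _ in range(19)]

print(generation, best)
```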
A fifth difference between genetic language and cultural texts is that the latter are continually
being created and tend not to repeat existing texts. In fact, too much copying is not considered
creative and plagiarism is unacceptable. There are also thousands of natural languages. By
contrast, genetic texts are essential evolutionary histories that are highly conserved if they
describe a useful protein or control function. There is still only one basic genetic language, and
plagiarism is the norm. The major differences in species are the result of the metalanguage
control of conserved gene expressions. As I mentioned in Sec. 7, this is because evolution by
variation and natural selection is so costly that genes provide only the minimum symbolic
information to survive. Consequently, organisms at all levels of organization depend as much as
possible on dynamic energy-based self-assembly processes that require only the minimum
symbolic informational constraints.
Many other differences exist between genetic and cultural languages. For an instructive
discussion of additional similarities and differences between genetic and cultural language see
the paper by Martin Sereno (1991).
11. The symbolic dependence of analog signals and controls
All the general-purpose languages we know are recognizable as discrete strings of symbols
from a small alphabet. How should we understand the continuous dynamical processes
representing biosemiotic signals or analog controls? Insect sounds, bird songs, bee dances,
mating displays, mimicking, and many other forms of communication are continuous in real
time. There are many names given to these continuous dynamical behaviors, but I would
emphasize that they are all special-purpose structures that are constructed under the control of the
discrete genetic symbol system. Even when learning occurs in the nervous system, this ability is
a naturally selected genetic trait. Therefore I believe it is correct to say that these special-purpose
activities depend on discrete genetic symbols acting as constraints on lawful, energy-based
dynamical systems.
Of course that statement leaves the question of the origin of symbols as mysterious as ever.
Hoffmeyer and Emmeche (1991) proposed that the symbol-matter relation arose from the
coupling of natural discrete and continuous processes requiring a symbolic and analog code
duality. In one sense this is certainly the case, because all we have to start with is natural law-based dynamics and random copolymer sequences. However, calling this a code duality does not
explain how any natural dynamics can achieve the freedom of a code. The possibility of
dynamical systems generating discrete symbol-like structures has been suggested using many
different models. For example, Simon’s (1962) explanation of hierarchical levels suggests that it
is primarily our perceptual categories that determine continuity and discreteness. What appears
as discrete incoherence at one level can be described as a coherent continuous dynamical
observable at a higher level. Rocha and Hordijk (2005) have tried to show how simulating
continuous dynamical systems in cellular automata can evolve, using genetic algorithms, so as
to generate quiescent structures that can be interpreted as discrete symbolic memory. Raczaszek-Leonardi and Kelso (2008) have pointed out how instabilities and bifurcations in continuous
dynamical descriptions may be usefully interpreted, on a longer time scale, as discrete stable
symbols.
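The bifurcation reading can be sketched with a simple bistable map; this toy model is my own illustration, not the cited authors' formalism. The continuous iteration settles, on a slower time scale, into one of two attractors, and the settled state can then be read as a discrete one-bit symbol.

```python
import math

# A toy bistable dynamics read as a symbol: x -> tanh(a*x) with a > 1 has two
# attractors, one positive and one negative. Any nonzero start settles into
# one of them; the long-run state can be interpreted as a one-bit symbol.

def settle(x, a=2.0, steps=100):
    """Iterate the continuous dynamics long enough to reach an attractor."""
    for _ in range(steps):
        x = math.tanh(a * x)
    return x

def read_symbol(x0):
    """Interpret the settled state of the dynamics as a discrete bit."""
    return 1 if settle(x0) > 0 else 0

print(read_symbol(0.3), read_symbol(-0.0001))  # -> 1 0
```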
The problem of objectively separating continuous dynamics and discrete symbols is currently
intractable at the level of cognitive activity where we know less about the mechanisms. How
much of the brain's activity should be interpreted as discrete symbolic activity and how much
should be modeled as continuous dynamics is still a central issue in cognitive theory. Older AI
models emphasized rule-based symbolic representation while the more recent connectionist
theories emphasize the dynamics of concurrent, distributed networks.
12. Limitations of language.
Wherever we find sharp dichotomies in our linguistic terminologies, as between symbol and
matter, physical laws and syntactic rules, or between continuous dynamics and discrete objects, it
is instructive to consider why there is nothing in between. Our formal mathematical languages
are based conceptually on discrete and continuous sets, but formal mathematics is expressed only
with discrete symbols. Consequently, establishing the relation between discrete sets and
continuous or uncountable infinite sets has been a central problem in logic and mathematics
since Zeno’s paradox of motion. Aristotle explained the paradox away by noting,
“That which moves does not move by counting.” As I have argued, the symbol-matter relation
was first established by a discrete-continuous interaction of symbolic constraints acting on the
energy-based continuous dynamics of folding. There is no efficient discrete description of the
folding itself. Following Aristotle, I would say in evolving systems, that which folds does not
fold by symbolic descriptions.
Physical theory routinely uses the concepts of continuous fields and discrete particles as
complementary modes of description of the same system, but again the mathematics is all
discrete symbols. Are these sharp dichotomies linguistic artifacts or a property of nature? These
two modes are apparently all our brains have available. We know the brain has edge detectors
that emphasize discreteness and motion detectors by which we perceive continuity, and it is
difficult to imagine something in between.
The concepts of discreteness and continuity in physics do not appear to be objective concepts,
but depend on our choice of observables and the choice of space and time scales of our
measurements and models. Modeling at the sensorimotor level, controls may appear continuous
and coherent, but at the cognitive level, such real-time coherence is lost in thought, so to speak.
Nerve pulses may be modeled as discrete at one level, as a continuous dynamics at a lower level,
and again as statistical dynamics at a higher level. Even in populations of biological organisms
that act largely independently as individuals we can discover a coherent statistical dynamics at
the population level.
The fact remains that there is little evidence for the existence of any general-purpose
languages other than the genes and natural languages; and there is no persuasive model of how a
continuous dynamics could evolve into a general-purpose discrete coded symbol system. In fact,
that is the reason the origin of life and the origin of language remain great mysteries.
13. Is human language a Promethean trade-off?
From our human perspective, one could argue that human language was the greatest
evolutionary discovery since the origin of life itself. On the other hand, from an evolutionary
perspective, human language is only a very late evolutionary discovery with an untested survival
value. While language is clearly adaptive in many respects, our language-based cultures have, in
many conflicts, intensified competitive genetic traits beyond their natural adaptive value.
Human language has been largely responsible for our cultures for well over 5000 years.
However, in the last 200 years the growth of science has given us the power to construct artificial
language-based technologies like computers, the Internet, and genetic engineering that have the
potential for almost unlimited control of information from cells to societies. One problem with
these modern sources of information is that the selective pressures that would normally
discriminate between adaptive and destructive information are being virtually eliminated. The
cost of production, communication, and storing information is now so low that we are
accumulating more information than we can ever hope to selectively recall. This lack of
discrimination is increasing not only for our culture as a whole, but also for many individuals
who have access to the Internet. Because of this technology, the amount of information in our
modern culture is growing exponentially. By contrast, genetic information is sparse and strongly
selected, as I pointed out in Sec. 8.
From an evolutionary perspective the survival value of even culturally selected human
information is not obvious. Our symbol-based technological culture has given humans power
over natural forces that can increase lifespan, counter the effects of genetic deficiencies, and
unleash weapons of mass destruction. None of these powers has any advantage from an
evolutionary perspective. To put it another way, while our cultures clearly depend on semiotic
technologies, there is little evidence that these cultures are adaptive for the species in the time
scale of evolution.
Biotechnology will eventually give us the power to edit genetic instructions that have been tested
by billions of years of natural selection and to replace them with artificial instructions based on
current human desires. We cannot imagine what greater powers another 5000 years of
technology will give human culture, if it lasts that long. We should not forget that 5000 years is
only a moment in evolutionary time, that natural selection operates over indefinitely longer
time scales, and that it will ultimately decide survival or extinction. We are still entirely dependent on
the genetic language to construct the neural architecture that allows natural language. This
evolved architecture is not specific to any particular language, and learning a specific language
probably also involves evolutionary selection processes (e.g., Edelman, 1987; Bradie, 1994). In other words,
no matter what humans desire, it is the information in genetic language, not the information in
human language, that will ultimately determine survival or extinction of the species over
evolutionary time.
Looking at global patterns of adaptive evolution, it is clear that survival at all levels requires
balancing competition and cooperation. We do not know what balance of cooperation and competition,
among humans and with other species, would best promote our survival. Nevertheless, within our
short cultural time scales, how we balance competition and cooperation is largely our human choice.
We might prefer living on the “edge of chaos” ― a lifestyle that favors maximum individual
freedom with its high level of risk ― even if we knew that extinction was a threat, rather than
choosing to live regimented lives with enforced cooperation, like the social insects, on the
chance our species might survive longer. Perhaps it is just as well that we cannot foresee the
evolutionary consequences of our cultural and technological choices.
REFERENCES
Anderson, P. W. 1972. More is different. Science 177, 393-396.
Beadle, G.W. 1963. The language of the gene. In The Languages of Science. P. LeCorbellier
(ed.), New York: Basic Books, pp. 57-84.
Bohr, N. 1933. Light and Life. Nature 131, 421-423, 457-459.
Bradie, M. 1994. Epistemology from an Evolutionary Point of View. In Conceptual Issues in
Evolutionary Biology, Second Edition, Elliott Sober (ed.), Cambridge, MA: MIT Press, pp. 453-475.
Churchland, P. M. and Churchland, P. S. 1998. On the Contrary: Critical Essays 1987-1997.
Cambridge, MA: MIT Press.
Clark, A. 1997. Being There. Cambridge, MA: MIT Press.
Conrad, M. 1990. The geometry of evolution. BioSystems 24, 61-81.
Conrad, M. 1984. Microscopic-macroscopic interface in biological information processing.
BioSystems 16, 345-363.
Crick, F. 1994. The Astonishing Hypothesis. New York: Macmillan.
Delbrück, M. 1970. A physicist’s renewed look at biology: 20 years later. Science 168, 1312-1315.
Eddington, A. 1929. The Nature of the Physical World, Cambridge Univ. Press, p. 260.
Edelman, G. 1987. Neural Darwinism: The Theory of Neuronal Group Selection. New York:
Basic Books.
Eigen, M. and Schuster, P. 1979. The Hypercycle. A Principle of Natural Self-Organization.
Berlin: Springer.
Frauenfelder H. and Wolynes, P. G. 1994. Biomolecules: where the physics of complexity and
simplicity meet. Physics Today, February, 1994, pp. 58-64.
Gell-Mann, M. 1994. The Quark and the Jaguar. New York: W. H. Freeman, p. 134.
Goodwin, B. 1972. Biology and meaning. In Towards a Theoretical Biology 4, C. H.
Waddington (ed.), Edinburgh Univ. Press, pp. 259-275.
Greenberg, J. H. 1963. Some universals of grammar with particular reference to the order of
meaningful elements. In Universals of Language. Greenberg, J. H. (ed.), London: MIT Press, pp.
73-113.
Harnad, S. 1990. The symbol grounding problem. Physica D 42, 335-346.
Hoffmeyer, J. 2007. Semiotic scaffolding of living systems. In Introduction to Biosemiotics, M.
Barbieri, ed., Dordrecht, The Netherlands: Springer, pp.149-166.
Hoffmeyer, J. and Emmeche, C. 1991. Code duality and the semiotics of nature. In On Semiotic
Modeling, M. Anderson and F. Merrell (eds.), New York: Mouton de Gruyter, pp. 117-166.
Jakobson, R. 1970. Linguistics. In Main Trends in Research in the Social and Human Sciences I.
Mouton-UNESCO, pp. 437-440.
Kauffman, S. 1993. The Origins of Order, Oxford University Press.
Kendrew, J. C. 1968. How molecular biology got started. Book review of J. Cairns, G. Stent, and
J. Watson, Phage and the Origins of Molecular Biology, Cold Spring Harbor. In Scientific
American, 216 (March), 141-144.
Lucretius. De Rerum Natura (On the Nature of Things). Verse translation by R. E. Latham, 1951;
Penguin revised edition, 1994.
McKaughan, D. J. 2005. The Influence of Niels Bohr on Max Delbrück: Revisiting the Hopes
Inspired by “Light and Life.” Isis, 96: 507–529.
Miles, R. N. and Hoy, R. R. 2006. The development of a biologically-inspired directional
microphone for hearing aids. Audiology & Neurotology 11(2), 86-93.
Monod, J. 1971. Chance and Necessity: An Essay on the Natural Philosophy of Modern
Biology, New York: Alfred A. Knopf.
Pattee, H. H. 1961. On the origin of macromolecular sequences. Biophysical Journal 1, 683-710.
Pattee, H. H. 1968. The physical basis of coding and reliability in biological evolution. In
Towards a Theoretical Biology 1, C. H. Waddington (ed.), Edinburgh Univ. Press, pp. 67-93.
Pattee, H. H. 1969. How Does a Molecule Become a Message? Developmental Biology
Supplement 3, 1-16.
Pattee, H. H. 1972. Laws, Constraints, Symbols, and Languages. In Towards a Theoretical
Biology 4, C. H. Waddington (ed.), Edinburgh Univ. Press, pp. 248-258.
Pattee, H. H. 1973. Hierarchy Theory. The Challenge of Complex Systems. New York: George
Braziller.
Pattee, H. H. 1980. Clues from molecular symbol systems. In Signed and Spoken Language:
Biological Constraints on Linguistic Form, U. Bellugi and M. Studdert-Kennedy (eds.), Dahlem
Konferenzen, Verlag-Chemie, pp. 261-274.
Pattee, H. H. 1995. Evolving self-reference: matter, symbols, and semantic closure.
Communication and Cognition - Artificial Intelligence 12(1-2), 9-27. Special issue on
Self-Reference in Biological and Cognitive Systems, L. Rocha (ed.).
Pauli, W. 1994. The philosophical significance of the idea of complementarity. In Writings on
Physics and Philosophy, C. P. Enz and K. von Meyenn (eds.), Berlin: Springer-Verlag, pp. 35-48.
(Quotation on p. 41. First published under the title Die philosophische Bedeutung der Idee der
Komplementarität, Experientia 6(Heft 2), pp. 72-75, 1950.)
Platt, J. R. 1961. Properties of large molecules that go beyond the properties of their chemical
subgroups. J. Theoretical Biology 1, 342-358.
Polanyi, M. 1968. Life's Irreducible Structure. Science 160: 1308-1312.
Polanyi, M. 1964. Personal Knowledge, Torchbook Edition. New York: Harper & Row.
Rocha, L. and Hordijk, W. 2005. Material representations: From the genetic code to the
evolution of cellular automata. Artificial Life 11(1-2), 189-214.
Raczaszek-Leonardi, J. and Kelso, S. 2007. Reconciling symbolic and dynamic
aspects of language: Toward a dynamic psycholinguistics. New Ideas in Psychology,
doi:10.1016/j.newideapsych.2007.07.003.
Rosen, R. 1969. Hierarchical organization in automata theoretic models of biological systems.
In Hierarchical Structures, L. L. Whyte, A. G. Wilson, and D.Wilson (eds.), New York:
American Elsevier, pp. 179-199.
Ryle, G. 1949. The Concept of Mind. Chicago: University of Chicago Press.
Schuster, P. 1998. Evolution in an RNA world. In M. G. Ord and L. A. Stocken, Eds.,
Foundations of Modern Biochemistry, Vol. IV: More Landmarks in Biochemistry. Stamford, CT:
JAI Press, pp. 159-198.
Sereno, M.I. 1991. Four analogies between biological and cultural/linguistic evolution. Journal
of Theoretical Biology 151:467-507.
Simon, H. A., 1962, The architecture of complexity: hierarchic systems, Proc. Am. Philos. Soc.
106, 467-482. Reprinted with revisions in The Sciences of the Artificial, 3rd Ed. MIT Press, 1996,
pp. 183-216.
Stent, G. S. 1968. That was the molecular biology that was. Science, 160, 390-395.
von Neumann, J. 1951. General and logical theory of automata. In Cerebral Mechanisms of
Behavior, The Hixon Symposium, vol. 5, No. 9, Jeffress, L. A. (ed.), New York: Wiley, pp. 316-318.
von Neumann, J. 1955. Mathematical Foundations of Quantum Mechanics. Princeton, NJ:
Princeton Univ. Press, p. 420.
von Neumann, J. 1966. The Theory of Self-Reproducing Automata. Edited and completed by A.
Burks, Urbana, IL: University of Illinois Press, Fifth Lecture, pp. 74-87.
Waddington, C. H. 1972. Epilogue. In Towards a Theoretical Biology 4, C. H. Waddington (ed.),
Edinburgh Univ. Press, pp. 283-289.
Wheeler, J. A. 1991. Information, Physics, Quantum: The Search for Links. In Complexity,
Entropy and the Physics of Information. Zurek, W. H. (ed.), Redwood City, CA: Addison-Wesley.
Wigner, E. 1964. Events, Laws, and Invariance Principles. Wigner’s Nobel Lecture, Stockholm,
10 Dec. 1963. Reprinted in Science 145, 995-999.