The Straitjacket Theory of Computation

Since the introduction of the Church-Turing thesis some 70 years ago, computational activities and a series of related studies have either fallen under the spell of its limitations or, in the case of several energetic and practical disciplines, chosen to ignore it and proceed without a theory.
Problems, Needs, Efforts
Under the shadow of Hilbert's Entscheidungsproblem, it was de rigueur during the '30s and '40s to have a
working understanding and explanation of effective computability (or of equivalent concepts offered
from different perspectives). However, once this limiting dictum was established, efforts were
immediately launched, even by its own creators, to transcend it, and this trend has never slowed
during the last six decades. All of the underlying parameters of the Church-Turing thesis have been
targeted: discreteness (challenged by Shannon's General Purpose Analog Computer and Siegelmann's
Analog Recurrent Neural Networks), finite-length input (challenged by von Neumann's Cellular
Automata and omega-word studies), the restriction to computable numbers/functions (challenged by
Turing's own o-machine), time independence (challenged by Copeland's accelerating Universal Turing
Machine), destruction of the "working tape" (challenged by Goldin's Persistent Turing Machine), and so on.
In the '90s, as today, the tentative search for effective open computability has become attractive,
given the urgent need to fathom the nature of computer science and software engineering practices
that, since the field's inception, could not be accounted for by the TM or by certain extensions of it.
As opposed to the internal motivators listed in the preceding paragraph, and beyond the relatively
simple phenomenon presented by all kinds of ubiquitous interactive devices (which emphasize just
another aspect of open computing), external motivators such as the widespread existence of the
Internet and the emergence of bioinformatics and systems biology are badly in need of a theory that
accounts for the loosely coupled nature of their constituents. Since the early '60s, the social sciences
have been struggling, unsuccessfully, to absorb nonlinearity, perceived as the only viable contender
to rigid linear (computational) models.
A rather peculiar situation should be mentioned here: during these six decades of past, and presently
acute, developments, the philosophical community, which itself otherwise underwent a computational
turn via the interactions of AI and the philosophy of mind (typically traced to Smart and Putnam),
ignored this turbulent dynamics in the computational/informational sciences until roughly 2003
(apart from its contributions to ethical and social issues that effected radical transformations in
society, and a few singular initiatives such as the Vienna group, which will be investigated further below).
Left to itself, the computational community made some explicit but mainly implicit efforts (the
latter considerably larger in scope and profounder in depth) to transcend the CT thesis.
At least at the outset, most of the singular efforts that explicitly targeted the CT thesis, as already
exemplified in the opening paragraph, were of a mathematical/rationalistic character, and this rather
heterogeneous set of individualistic ideas has recently precipitated under the banner of hypercomputation,
challenging the notion of limit. The title of Toby Ord's field survey reflects it readily:
"Hypercomputation: Computing More Than the Turing Machine" [Ord2002]; Martin Davis
has firmly challenged this methodology [Davis2003, 2006].
In contrast, some ideas were rather the product of the latest scientific/technological progress across
cross-pollinating fronts and of emerging needs and problems; they were therefore envisioned
simultaneously by several researchers and almost always enjoy a sizeable community. The topics
discussed during the short-lived (1998-2002) and now seemingly defunct UMC (Unconventional
Models of Computation) conferences (which inherited the other half of the also defunct PAC
conferences, alongside Ed Fredkin's digital physics school) were tackled from their inception onward
by a relatively crowded community.
Perhaps the most ardent, explicit, and longest-running cry comes from the now decade-and-a-half-old
interactionism group guided by Peter Wegner and company; in the last paper of a three-piece series
[Wegner1997] [Wegner et al., 2003] [Goldin et al., 2004], they overtly call the dominance of (the
strong version of) the CT thesis a myth and try to "deconstruct" it. They are also apologetic about
the absence of such a proclamation during the late '70s on behalf of Hoare and Milner, the original
leaders of theoretical concurrency studies:
"Milner's Turing Award lecture in 1991 presented models of interaction as complementary to the
closed-box computation of Turing machines. However, he avoided the question whether the
computation of CCS and the π-calculus went beyond Turing machines and algorithms. TMs had been
accepted as a principal paradigm of complete computation, and it was premature to openly challenge
this view in the late '70s and the early '80s." [Wegner et al., 2002]
Most recently, the same group organized a workshop, apparently with the intent of an intellectual
coming-out, with a very explicit statement in their introduction: "However, a satisfactory
unifying foundational framework for interactive computation, analogous to what Turing machines and
the lambda-calculus provide for algorithms, is still lacking." (FInCo 2005, Workshop on the
Foundations of Interactive Computation). Even though their diagnosis seems to be on the right track,
namely that the CT thesis does not serve, and does no justice to, a discipline that reveals more and
more empirical tendencies, their several attempts to offer a solution to this lasting problem
unfortunately fall back into the rationalistic (mathematical) camp, with constructs such as Interaction
Machines [Wegner et al., 1999] and Persistent TMs (PTMs) [Goldin1998, 2000], and do not achieve
the stated goals.
With backgrounds ranging from the philosophy of computing to artificial intelligence to cognitive
psychology, the "virtual" Vienna group (Smith, Agre, Copeland, Cussins, Harnad, Haugeland, Scheutz)
gathered in Vienna during the NTCS'99 conference, Computationalism - The Next Generation; the
proceedings were summarized in a book [Scheutz2003]. In this work, the title of Aaron Sloman's
article speaks for itself: "The Irrelevance of Turing Machines to Artificial Intelligence". Scheutz
notes that this observation eventually applies to computationalism itself.
Finally, parallel to these developments, the last decade saw the emergence of a "strongly empirical"
network science extending the mathematically inclined, simplistic "random graph" theories of Erdős
and Rényi with the "small world" networks of Watts-Strogatz (1998), Newman-Watts (1999),
Kasturirangan (1999), and Dorogovtsev-Mendes (1999) and the scale-free networks of
Barabási-Albert, which seem to faithfully (although still somewhat immaturely) reflect real-world
phenomena such as biological systems, social networks, and the Web. The confrontation and
cross-pollination with the computational community is pending.
In search of a unifying framework that would satisfactorily serve all theoretical and practical
activities, we should start with the confession that the CT thesis, the de jure central dogma of
computation, is not up to the task; this fact should be recognized, properly enunciated, and then
(re)conceptualized. It is not that the thesis itself is at fault; rather, it is the straitjacket we try to
impose upon our perception of apparently complex activities that evolve around computation
simpliciter. Unfortunately, instead of giving an account of such complex phenomena as a composite
whole made up of elements that are individually subject to the CT thesis, several parties have been
trying to devise different ideological perspectives on what the core term "computation" itself should
be, extending/modifying the traditional algorithmicism of the current canon into: the transcendental
algorithmicism of the hypercomputation school, the interactionism of Peter Wegner and company,
several incarnations of physicalism (Fredkin, Milkowski, Penrose, et al.), the dynamicalism of van
Gelder, Floridi's informationism, etc.
To explore the above proposed "composite whole" approach, the concepts of multitude, asynchrony,
concurrency and nondeterminism, which partially rest on myths, missing and conflicting definitions
and misunderstandings, should be reexamined.
Multitude
In addition to structurally and functionally enriching the underlying system, both token multitude
(redundancy) and type multitude (diversity) offer evolutionary benefits for organic systems [Forrest et
al., 2000]. Turing Machines (TMs) can be coupled together into ensembles to enhance overall
performance; however, this introduces no new capabilities, as such an ensemble can be simulated by a
Universal TM (UTM), which has the same expressiveness and computational power as a single TM.
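The simulation argument can be made concrete with a toy sketch: a single sequential loop drives a fixed ensemble in lockstep, one global clock tick at a time. The `StepMachine` abstraction below is a deliberately simplified, hypothetical stand-in for a TM, not a faithful encoding:

```python
# Hypothetical illustration: one sequential loop simulates a synchronous
# ensemble of step-machines in lockstep, suggesting why synchronous
# coupling adds no computational power.

class StepMachine:
    """A trivial 'machine': repeatedly applies a transition to its state."""
    def __init__(self, state, transition):
        self.state = state
        self.transition = transition
        self.halted = False

    def step(self):
        nxt = self.transition(self.state)
        if nxt is None:
            self.halted = True
        else:
            self.state = nxt

def simulate_synchronously(machines, max_rounds):
    """One sequential loop drives the whole ensemble, one global clock
    tick at a time -- the classic interleaving argument."""
    for _ in range(max_rounds):
        if all(m.halted for m in machines):
            break
        for m in machines:  # lockstep: every live machine moves once per tick
            if not m.halted:
                m.step()
    return [m.state for m in machines]

# Two deterministic units: a doubler and a counter, each halting at a bound.
doubler = StepMachine(1, lambda s: s * 2 if s < 64 else None)
counter = StepMachine(0, lambda s: s + 1 if s < 5 else None)
print(simulate_synchronously([doubler, counter], 100))  # [64, 5]
```

The single `for` loop plays the role of the UTM: because every machine advances on the same global clock, the whole ensemble is just one more sequential computation.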
Asynchrony
It is often not mentioned explicitly, however, that the above observation hinges on a special
assumption: the argument is valid only if the coupling is synchronous. In general, an asynchronous
coupling of TMs cannot be simulated by a UTM. Yet a strategic Concurrency Working Group
Report [Cleaveland et al., 1996] does not mention asynchrony at all. The point is sometimes stated in a
circumlocutory way:
circumlocutory way:
While apparently a small point, in fact it is crucial. ... a parallel machine's behavior is ... utterly
different from the sequential variety. [Boucher1997]
or bluntly denied:
Boucher (1997) argues that "parallel computation is fundamentally different from sequential
computation" ... But parallelism, ..., is irrelevant. [Bringsjord2000]
Copeland mentions it, en passant, only once in his extensive studies of the CT thesis:
... (the idea that) any finite assembly of Turing machines can be simulated by a single universal Turing
machine ... is sound only in the case where the machines in the assembly are operating in synchrony
(yet this restriction is seldom mentioned) [Copeland et al., 1999]
It is Sloman who expresses it explicitly, in support of his main thesis:
Any collection of synchronized parallel computers can be mapped onto a Turing machine by
interleaving their execution. However, for unsynchronized variable speed parallel machines, this
mapping cannot always be specified. The system does not have well-defined global states and
well-defined state transformations, as a Turing Machine does [Sloman1995].
Cardelli also states it bluntly:
Are existing languages and semantic models adequate to represent these kinds of situations? Many
classical approaches are relevant, but I believe the current answer must be: definitely not. ... The
systems to be described are massively concurrent, heterogeneous, and asynchronous [Cardelli2004e]
Recently, he has been more specific:
The fundamental flavor of the Gene Machine is: slow asynchronous stochastic broadcast.
[Cardelli2005a]
Surprisingly, in his gigantic revolt against the central dogma of computation, Peter Wegner does not
consider asynchrony, the major fault line of the established conception, at all.
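The difference is easy to exhibit even on toy units: under lockstep coupling the composite has exactly one behavior, while under asynchronous coupling every order-preserving interleaving of the units' actions is a possible behavior. A minimal sketch (the two-unit setup is illustrative only):

```python
from itertools import combinations

# Two units, each emitting a fixed, deterministic sequence of actions.
A = ["a1", "a2"]
B = ["b1", "b2"]

def synchronous_run(xs, ys):
    """Lockstep coupling: one fixed alternation, hence exactly one behavior."""
    out = []
    for x, y in zip(xs, ys):
        out += [x, y]
    return tuple(out)

def asynchronous_runs(xs, ys):
    """Asynchronous coupling: any order-preserving interleaving may occur."""
    n, m = len(xs), len(ys)
    results = set()
    for slots in combinations(range(n + m), n):  # which slots go to xs
        trace, i, j = [], 0, 0
        for k in range(n + m):
            if k in slots:
                trace.append(xs[i]); i += 1
            else:
                trace.append(ys[j]); j += 1
        results.add(tuple(trace))
    return results

print(synchronous_run(A, B))          # ('a1', 'b1', 'a2', 'b2') -- one behavior
print(len(asynchronous_runs(A, B)))   # 6 -- C(4, 2) possible behaviors
```

The count of interleavings, C(n+m, n), grows combinatorially with the units' lengths, which is one way of seeing why the asynchronous composite does not reduce to a single fixed sequential schedule.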
Concurrency
Years ago, the status of concurrency research was stated frankly as: "The literature already contains a
myriad of proposals for highly abstract, general-purpose models of concurrency..." [Cleaveland et
al.,1996], or, "veritable Babel of formalisms ... suggests that the current methodologies for concurrency
are insufficiently constrained, or perhaps that some key ideas are still missing" [Goguen et al., 1997].
Indeed, a decade later, in 2005, Abramsky finally conceded as much in his critical article: "What are
the fundamental structures of concurrency? We still don't know!" [Abramsky2005]
Nondeterminism
Most authors agree that "... concurrency introduces ... nondeterminism" [Milner1991], or,
"(n)ondeterminism arises in a natural way when discussing concurrency, and models of concurrency
typically also model nondeterminism." [Meldal+1995]. However, it is not "choice (local)
nondeterminism" that is relevant here; the computational (concurrency) community chickens out
(probably due to its rationalistic Western heritage) exactly at this point and ignores the global,
unbounded version:
Bounded nondeterminism refers to the case where every terminating computation has only a finite
number of possible results; unbounded - to the one where the set of the possible results may be infinite.
It may be argued [30, 54] whether unbounded nondeterminism has a plausible computational
interpretation. ... This either presents a serious challenge to the Church-Turing thesis and the classical
notion of computability or, perhaps, discredits unbounded nondeterminism as a notion without
computational relevance. [Meldal, et al.,1995]
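The contrast is easy to state in code. A bounded choice has a finite result set; the textbook unbounded case is a loop that, at each step, may nondeterministically be told to stop: under a fairness assumption every run terminates, yet the set of possible results is infinite. In the sketch below the nondeterministic choice is supplied from outside as a hypothetical "oracle" predicate:

```python
def bounded_choice(options):
    """Bounded nondeterminism: the set of possible results is finite."""
    return set(options)  # e.g. {0, 1}

def count_until_stop(decide_stop):
    """Unbounded nondeterminism: keep counting until an external
    nondeterministic 'oracle' says stop. Every fair run terminates,
    but the possible results range over all natural numbers."""
    n = 0
    while not decide_stop(n):
        n += 1
    return n

# Each resolution of the nondeterminism gives a terminating run, yet no
# finite set covers the possible outcomes across all resolutions.
samples = {count_until_stop(lambda n, k=k: n >= k) for k in (0, 7, 1000)}
print(sorted(samples))  # [0, 7, 1000]
```

Every individual run is an ordinary terminating computation; the trouble, as the quotation above notes, lies only in the unbounded result set of the mechanism taken as a whole.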
The Solution
The necessary step to be taken, rather, is much more mundane. Without altering the concept and utility
of computation simpliciter, it should be treated as atomic and the principles that govern the structure
and process of composing more complex systems should be established. A new concept, called
paracomputation, would be an umbrella term to describe this newly created meta-mechanism: a
multitude of asynchronous computational units that are individually subject to the CT thesis, but
otherwise cannot be conceived as a deterministic whole. Surely, the unbounded nondeterminism of
such a mechanism as a whole is tantamount to losing predictive, verificatory, and specificatory
control over such systems; but this can be recovered at a later stage. Paracomputation offers the
"subvenient" stratum below the recently discovered "real-world" network theories, in a "computational"
perspective.
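Such a mechanism can be sketched directly: each unit is an ordinary deterministic program, hence individually subject to the CT thesis, while the asynchronous composition fixes only each unit's local order, never the global one. The names below (`unit`, `run_ensemble`) are illustrative, not a proposed formalism:

```python
import threading
import queue

def unit(uid, steps, out):
    """A deterministic computational unit: its own output order is fixed."""
    for i in range(steps):
        out.put((uid, i))

def run_ensemble(n_units, steps):
    """Asynchronous composition: the global trace is whatever interleaving
    the scheduler happens to produce -- nondeterministic as a whole."""
    out = queue.Queue()
    threads = [threading.Thread(target=unit, args=(u, steps, out))
               for u in range(n_units)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return [out.get() for _ in range(n_units * steps)]

trace = run_ensemble(3, 4)
# Each unit, taken alone, is deterministic: its own events appear in order...
for u in range(3):
    assert [i for (w, i) in trace if w == u] == list(range(4))
# ...but the ensemble's global event order is not determined in advance.
print(sorted(trace) == [(u, i) for u in range(3) for i in range(4)])  # True
```

The invariants checked above capture the proposed picture: the parts obey the CT thesis individually, while the whole admits many global behaviors.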
Phyla of "Systems"
Currently, to explain or model any phenomenon, the computational, "complexity" (complex systems),
and network science communities use different conceptualizations; most of the time, cross-references
are nonexistent and their paradigm examples do not overlap, making comparisons and translations
very difficult, if not impossible. Also, complex networks are a relatively recent discovery, and
nondeterministic (small) assemblies are not considered at all (as they are not tractable):
simple systems (few, tightly coupled)
- linear
- nonlinear
complex systems (many, tightly coupled)
- linear
- nonlinear
stochastic systems (many, loosely coupled)
- complex networks (structured)
  - small-world models
  - scale-free networks
- random graph networks (random)
? (few, loosely coupled) (nondeterministic)
- nondeterministic (small) assemblies
Using asynchrony/nondeterminism as a principal concept, alongside multitude, observation,
randomness, and structure, the phyla of "systems" become systematic and complete with the
introduction of paracomputation. Structuredness is orthogonal to multitude [Whitney2006] and to
asynchrony, but not to randomness; "synchrony/asynchrony" is a more precise conceptualization than
"tightly/loosely coupled".
linear systems (synchronous/deterministic) (observation is microscopic)
- simple (few)
- complex (many) ? (complexity a la Murray Gell-Mann: "many" interactions, few nodes)
nonlinear systems (synchronous/deterministic) (observation is microscopic)
- simple (few)
- complex (many) ? (complexity a la Prigogine: "many" nodes, few interactions)
paracomputational systems (asynchronous/nondeterministic)
- (small) assemblies (microscopically random, unstructured) (few) (observationally random > observation is microscopic)
- complex networks (nonrandom, structured) (many)
  - small-world models
  - scale-free networks
stochastic systems (asynchronous/nondeterministic)
- (microscopically random) (many) (observationally nonrandom > observation is macroscopic)
Practically, paracomputational models offer a correct approach to many nondeterministic real-world
phenomena for which monolithic deterministic models have previously been used without much
success: the philosophy of mind (computationalism) and the social sciences, in desperate search of a
nondeterministic framework, would benefit most. Such models also conceptually clear the way to
further advances by empirical network science and allow the computational community, with its
considerable experience in concurrency (even if only in bounded nondeterminism), to join the
efforts: e.g., with his ambient and stochastic pi-calculus behind him, Cardelli is already leading
theoretical computational attempts in synthetic biology [Cardelli2004e].
References
 [Abramsky2005] - What are the fundamental structures of concurrency? We still don't know!
Samson Abramsky, 2005
 [Barabasi2005] - Taming Complexity, Albert-Laszlo Barabasi, November 2005, Nature Physics
1, 68-70
 [Barabasi2004] - Network Biology: Understanding the Cell's Functional Organization, Albert-Laszlo Barabasi and Zoltan N. Oltvai, 2004, Nature Reviews Genetics 5, 101-113
 [Boucher1997] - Parallel Machines, Andrew Boucher, 1997, Minds and Machines 7, 542-551
 [Bringsjord2000] - In Computation, Parallel is Nothing, Physical Everything, Selmer
Bringsjord, 2000
 [Cardelli2005a] - Abstract Machines of Systems Biology, Luca Cardelli, February 2005, draft
 [Cardelli2004e] - Languages for Systems Biology, Luca Cardelli, 2004, position paper for
Grand Challenges UK, GC1
 [Cleaveland et al.,1996] - Strategic Directions in Computing Research - Concurrency Working
Group Report, edited by: Rance Cleaveland, Scott A. Smolka , September 1996
[Copeland et al., 1999] - Beyond the Universal Turing Machine, B. Jack Copeland, Richard
Sylvan, Australasian Journal of Philosophy 77, pp. 46-66, 1999
[Davis2006] - The Church-Turing Thesis: Consensus and Opposition, Martin Davis, 2006
[Davis2003] - The Myth of Hypercomputation, Martin Davis, 2003
[Forrest et al., 2000] - Computation in the Wild, Stephanie Forrest, Justin Balthrop, Matthew
Glickman, David Ackley, 2002
[Goguen et al., 1997] - A Hidden Agenda, Joseph Goguen, Grant Malcolm, 1997, preprint for
Theoretical Computer Science
[Goldin et al., 2004] - The Origins of the Turing Thesis Myth, Dina Goldin, Peter Wegner, June
2004
[Goldin2000] - Persistent Turing Machines as a Model of Interactive Computation, Dina Goldin, in:
K-D. Schewe and B. Thalheim (Eds.), First Int'l Symposium (FoIKS'2000)
[Goldin1998] - Persistence as a Form of Interaction, Dina Goldin, Peter Wegner, Brown University
Technical Report CS 98-07, March 1998
[Meldal et al., 1995] - Nondeterministic Operators in Algebraic Frameworks, Sigurd Meldal,
Michal Walicki,1995
[Milner1991] - Elements of Interaction: Turing Award Lecture, Robin Milner, Communications
of the ACM, Volume 36, Issue 1, January 1993
[Newman2003] - The Structure and Function of Complex Networks, M.E.J. Newman, 2003,
SIAM Review
[Newman2000] - Small Worlds - The Structure of Social Networks, M.E.J. Newman, 2000
[Oltvai et al., 2002] - Life's Complexity Pyramid, Zoltan Oltvai, Albert-Laszlo Barabasi, 2002
[Ord2002] - Hypercomputation: Computing More Than the Turing Machine, Toby Ord, 2002
[Scheutz2003] - Computationalism: New Directions, Matthias Scheutz, editor, 2003
[Sloman1995] - Beyond Turing Equivalence, revised edition - Aaron Sloman, 1995
[Wegner et al., 2002] - Computation Beyond Turing Machines - Peter Wegner et al., 2002
[Wegner et al., 2003] - Computation Beyond Turing Machines. Peter Wegner, Dina Goldin,
Comm. ACM, Apr. 2003
[Wegner et al., 1999] - Interaction as a Framework for Modeling, Peter Wegner, Dina Goldin,
April 1999 Springer-Verlag, Berlin 2000, pp. 116-135
[Wegner1997]- Why Interaction is More Powerful Than Algorithms, Peter Wegner, Comm.
ACM, May 1997
[Whitney2006] - Network Models of Mechanical Assemblies, Daniel E. Whitney, August 2006
Additional Bibliography
interactionism
 [Wegner95] Tutorial Notes: Models and Paradigms of Interaction, September 1995 (1993?)
 [Wegner96a] Coordination as Constrained Interaction, LNCS 1061, pp. 28-33, April 1996
 [Wegner96b] Interactive Software Technology, Handbook of Computer Science and
Engineering, CRC Press, 1996.
 [Wegner97a] Why Interaction Is More Powerful Than Algorithms, Communications of the
ACM, May 1997
(The Paradigm Shift from Algorithms to Interaction, October 1996)
 [Wegner97b] Frameworks for Compound Active Documents, (draft), October 20 1997
 [Wegner97c] Rationalism, Empiricism, and Virtual Reality - November 1997 (Brown
University Faculty Bulletin, Volume X, Number 1, November 1997)
[Wegner98a] A Research Agenda for Interactive Computing, work in Progress, January 1998
[Wegner98b] Interactive Foundations of Computing, (final draft), Theoretical Computer
Science, February 1998
[Wegner98c] Persistence as a Form of Interaction, July 1998 (Goldin, Wegner)
[Wegner99a] Towards Empirical Computer Science, The Monist, Spring 1999
[Wegner99b] Mathematical Models of Interactive Computing *, Brown Technical Report CS
99-13
[Wegner99c] Interaction as a Framework for Modeling *, LNCS #1565, April '99
[Wegner99d] Coinductive Models of Finite Computing Agents *, Electronic Notes in
Theoretical Computer Science, March 1999
[Wegner??e] Interactive Visual Programming: Principles and Examples, work in Progress
[Wegner99f] Modeling, Formalization, and Intuition, Brown Faculty Bulletin, March '99
[Wegner99g] Models of Interaction, ECOOP '99 Course Notes
[Wegner99h] Interaction, Computability, and Church's Thesis *, work in Progress, May 1999
[Wegner99i] Behavior and Expressiveness of Persistent Turing Machines *, Brown Technical
Report CS 99-14
[Wegner99j] Draft of ECOOP'99 Banquet Speech, Lisbon, Portugal
[Wegner01] An Interactive Viewpoint on the Role of UML*, Book chapter, published in Unified
Modeling Language: Systems Analysis, Design, and Development Issues, Idea Group
Publishing, 2001.
[Wegner01] Turing Machines, Transition Systems, and Interaction *, work in progress
[Wegner02] Paraconsistency of Interactive Computation*, PCL 2002 (Workshop on
Paraconsistent Computational Logic), Denmark, July 2002
[Wegner+02] Computation Beyond Turing Machines* (RTF), Accepted to Communications of
the ACM, June 2002
[Wegner+03] Turing's Ideas and Models of Computation (draft, RTF). Book chapter, to be
published in 2004 (co-authored with Eugene Eberbach, Dina Goldin), March 2003
[Wegner+04] Interactive Computation: the New Paradigm. Book in progress (co-edited with
Dina Goldin, Scott Smolka), June 2003
criticism of interactionism
 [Ekdahl1999] Interactive computing does not supersede Church's thesis, Bertil Ekdahl, 1999
 [Rittgen+1998] Why Church's Thesis Still Holds, Peter Rittgen, Michael Prasse, 1998
asynchronicity
 [Cardelli2005a] - Abstract Machines of Systems Biology, Luca Cardelli, February 2005, draft
 [Cardelli2004e] - Languages for Systems Biology, Luca Cardelli, 2004, position paper for
Grand Challenges UK, GC1 InVivo<=>InSilico
 [Cleaveland+1996] - Strategic Directions in Computing Research - Concurrency Working
Group Report, Rance Cleaveland, Scott A. Smolka, 1996
 [Hoare1985-2004] - CSP
 [Abramsky1989] - A Generalized Kahn Principle for Abstract Asynchronous Networks, Samson
Abramsky, November 1989
concurrency
 [Cleaveland+1996] - Strategic Directions in Computing Research - Concurrency Working
Group Report, Rance Cleaveland, Scott A. Smolka, 1996
[Abramsky+1995] - Interaction Categories and the Foundations of Typed Concurrent
Programming, Samson Abramsky, Simon Gay, Rajagopal Nagarajan, 1995
[Lamport1990] - Distributed Computing: Models and Methods, Leslie Lamport, Nancy Lynch,
1990, Handbook of Theoretical Computer Science, Volume B: Formal Models and Semantics,
Jan van Leeuwen, editor, Elsevier
[Lamport1986] - A Simple Approach to Specifying Concurrent Systems, Leslie Lamport, 1986
[Hoare1985-2004] - CSP
nondeterminism
 [Menzies+2003] - Nondeterminism: Unsafe? Tim Menzies, David Owen, Mats Heimdahl, Jimin
Gao, Bojan Cukic, 2003 >>> process mode > symbol based computing >
computing/verification
 [Atmanspacher+2003] - Epistemic and Ontic Quantum Realities, Harald Atmanspacher, Hans
Primas, 2003
 [Atmanspacher2001] - Determinism Is Ontic, Determinability Is Epistemic, Harald
Atmanspacher, 2001
 [Goguen+2000 (1997)] - A Hidden Agenda - Joseph Goguen, Grant Malcolm, 2000 (1997,
preprint for Theoretical Computer Science )
 [Primas1999] - Basic Elements and Problems of Probability Theory, Hans Primas
 [Meldal+1995] - Nondeterministic Operators in Algebraic Frameworks, Sigurd Meldal, Michal
Walicki,1995
computability
 [Shagrir2006] - Gödel on Turing on Computability, Oron Shagrir, 2006
 [de Pisapia1999] - Gandy Machines: an Abstract Model of Parallel Computation for Turing
Machines, the Game of Life, and Artificial Neural Networks, Nicola de Pisapia, Master Thesis,
1999
 [Sieg2002] - Church Without Dogma: Axioms of Computability, Wilfried Sieg, 2002
structure
 [Whitney2006] - Network Models of Mechanical Assemblies, Daniel E. Whitney, August 2006
 [Whitney+2006] - Are technological and social networks really different? Daniel Whitney,
David Alderson, 2006
 [Whitney2005] - Degree Correlations and Motifs in Technological Networks, Daniel E.
Whitney, August 2005
systems biology
 [Cardelli2006] - Biological Systems as Reactive Systems, Luca Cardelli, 2006, CiE2006
 [Ellis+2004] - From Genomes to Systems, David I Ellis, Steve O'Hagan, Warwick B Dunn,
Marie Brown and Seetharaman Vaidyanathan, October 2004
 [Chen+2002] - Petri Net Based Modelling and Simulation of Metabolic Networks in the Cell,
Ming Chen, Andreas Freier, 2002?
 [Deville2002] - An Overview of Data Models for the Analysis of Biochemical Networks
(Bibliography), compiled by Yves Deville, 15 November 2002
 [Barabasi] - The large-scale organization of metabolic networks, Barabasi, Nature 406: 651-654, 2000
Santa Fe Institute
 [Flack+2006] - Encoding Power in Communication Networks, Jessica C. Flack, D. C. Krakauer,
September 2006
 [Forrest et al., 2000] - Computation in the Wild, Stephanie Forrest, Justin Balthrop, Matthew
Glickman, David Ackley, 2002