The Complex Evolution of Duncan K. Foley as a Complexity Economist
J. Barkley Rosser, Jr.
James Madison University
rosserjb@jmu.edu
August, 2011
Introduction
Duncan K. Foley has dealt with the larger and deeper questions of
economics ever since the day he walked into Herbert Scarf’s class on mathematical
economics at Yale University in the 1960s, with Scarf becoming his major professor.
Scarf (1960, 1973) was the leading figure in the study of how to compute general
equilibria and of the problems of their uniqueness and stability, problems that Foley
(2010a) has pursued to the present day in the form of the related issue of convergence to
general equilibrium from non-equilibrium states. The pursuit of these issues would lead
him to the problem of the microfoundations of macroeconomics, which he saw as
involving money and which he studied with the late Miguel Sidrauski (Foley and
Sidrauski, 1971). Considering the problem of money would lead him to pursue the study
of Marxian economics as possibly resolving this problem (Foley, 1982), a pursuit that
would lead to his experiencing professional difficulties in the 1970s. But he has
continued to pursue the broader issue of the role of money in the economy, even while
feeling some frustration along the way at trying to fill this “one of the big lacunae of
economic theory.”1
While the topics discussed so far are not obviously part of what is called
complexity economics, the pursuit of these matters would indeed lead him to pursue
various forms of complexity economics. The initial link would be his reviving the model
of Goodwin (1951) in a paper on the role of liquidity in business cycles, drawn by the
endogenous appearance of limit cycles, although such cycles are at most on the edge of
complexity models (Foley, 1986). This laid the groundwork for him to pursue ideas and
1 Colander, Holt, and Rosser (2004, p. 196). This is from an interview with Foley in that book; in
connection with the remark he compared studying money to an odyssey like that of Ulysses trying to reach
his home in Ithaca, finally confessing, when asked “So, you haven’t reached Ithaca yet?”, that “No, and I
probably never will” (ibid.).
approaches he had been previously interested in regarding statistical mechanics and also
the role of the computer in economic decisionmaking and a more algorithmic and
engineering approach to economics.2 Meeting the late Peter Albin in the late 1980s
would lead to collaboration with him (Albin and Foley, 1992; Albin with Foley, 1998) in
which he pursued a more computational approach to complexity, albeit with some
discussions of dynamic issues as well. During the same period he also took up the
problem of applying statistical mechanics to general equilibrium theory (Foley, 1994),
following on the work of Föllmer (1974), with this leading to a more specifically
econophysics approach that linked more fully to considerations of the law of entropy
(Foley and Smith, 2008) as he became associated with the Santa Fe Institute and moved
more clearly into the complexity approach to modeling economics.
The rest of this essay honoring Duncan will focus on how his work in this area of
complexity economics has related to ongoing controversies regarding the nature of
complexity, and particularly which approach is most useful for economics. In particular,
there has been an ongoing debate between those who focus more on computational
complexity (Velupillai, 2009), in contrast with those who concentrate on dynamic
complexity (Rosser, 2009). It can be broadly said that while Foley has tended to pursue
and advocate more the computational complexity approach based on his work with Albin,
he has also studied and articulated views about the dynamic approach and has suggested
possible ways that they can be reconciled.
A Brief Overview of Competing Approaches to Complexity
2 He worked as a computer programmer in an instrument and control company for three summers when
young (Colander, Holt, and Rosser, 2004, p. 186).
What complexity is has been beaten more or less to death in the
existing literature, so we shall keep our discussion short here. Over a decade ago, Seth
Lloyd of MIT famously gathered 45 definitions of complexity (Horgan, 1997, Chap. 11),
and even his list was not comprehensive. This list can be viewed as comprising
meta-complexity, the full set of definitions of complexity. Many of these 45 can be
aggregated into sub-categories, with the sub-category arguably containing more of these
definitions than any other being computational complexity, or variants thereof. Many of
these definitions are variations on the theme of the minimum length of a computer
program required to solve a problem. Furthermore, a hardline group argues that the only
truly computationally complex systems are those that are undecidable due to halting
problems in their programs (Blum, Cucker, Shub, and Smale, 1998). In any case, an
argument made by those advocating the superiority of the computational approach to
complexity is that it may be more precisely measurable, assuming one can agree on what
the appropriate measure is, with well-defined hierarchies involved as well, although the
hierarchy favored by Foley is not necessarily the same as that emphasized by some others.
Among economists, almost certainly the main rival is dynamic complexity,
defined by Rosser (1999), following Day (1994), as characterizing systems that do not
endogenously move to a point, a limit cycle, or a smooth implosion or smooth explosion.3
(A short numerical sketch at the end of this section illustrates this definition.) Such
systems inevitably involve either nonlinear dynamics or coupled linear systems that
can be reduced to nonlinear systems (Goodwin, 1947). There are sub-categories of this
form of complexity, with the “4 C’s” described by Horgan (op. cit.) being the main ones:
3 Ironically, this definition is not among the 45 that Lloyd listed earlier in the 1990s, although a few of
those he lists have some similarities to it.
cybernetics, catastrophe theory, chaos theory, and heterogeneous agent complexity. The
latter has increasingly become what many mean by the term “complexity” in economics,
with Arthur, Durlauf, and Lane (1997) laying out the crucial characteristics of this
approach, favored at the Santa Fe Institute and often involving computer simulations.
Given Foley’s interest in models using computer simulations of the sorts used at the
Santa Fe Institute, such as cellular automata, it is not surprising that he is interested in
these forms of complexity, although he often emphasizes the computability side of their
use over the dynamics side. As laid out by Arthur, Durlauf, and Lane (1997), this
approach is very much an antithesis of a general equilibrium perspective, with agents
interacting locally with each other without necessarily ever being in, or arriving at, any
overall or general equilibrium. Many econophysics models arguably follow this pattern,
although it can be debated whether Foley’s own forays into econophysics fully fit this
mold.
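To see the Day-Rosser definition concretely, consider the logistic map, the
standard textbook example of such dynamics: for some parameter values it endogenously
converges to a point or a limit cycle (and so is not dynamically complex), while for
others it generates erratic trajectories that never settle down. The following minimal
sketch is an illustration added here, not code from any of the works cited; the parameter
values are conventional textbook choices.

```python
def logistic_orbit(r: float, x0: float = 0.3, burn: int = 500, keep: int = 6):
    """Iterate x -> r*x*(1-x); discard a transient, return the next states."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

print(logistic_orbit(2.8))  # converges to a point attractor: not complex
print(logistic_orbit(3.2))  # converges to a 2-cycle: not complex
print(logistic_orbit(3.9))  # aperiodic, chaotic: dynamically complex
```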
Foley on Computational Complexity
Foley’s most important direct work on questions related to computational
complexity arose from his collaborations with the late Peter Albin. Most significantly,
when Albin fell ill while working on a book covering his approach to this, Foley stepped
in and edited a volume consisting mostly of previously published papers by Albin (Albin
with Foley, 1998). As part of that, he wrote a substantial introductory chapter (pp. 3-72)
in which he laid out his own perspective on such matters as nonlinear dynamical systems,
the nature of complexity, various approaches to computational complexity, and other
related issues. As the introductory chapter to a book consisting mostly of Albin’s work, it
followed Albin’s views very much, although it would seem that, at least at that time,
Foley’s views were fairly much in synch with Albin’s. This matters because Albin was
arguably the first economist to study computational complexity as it relates to economic
systems (Albin, 1982, reprinted as Chap. 2 of Albin with Foley, 1998).
In particular, Albin emphasized computability problems associated with the
halting problem, and the limits (or “barriers and bounds”) this imposed on full rationality
by economic agents. Drawing on the literature deriving from Gödel’s Incompleteness
Theorem as connected to computer science by Alan Turing (1936-37), he emphasized
especially the role of self-referencing in bringing about these failures to halt, or infinite
do-loops in programs. Such self-referencing can lead to such problems as the Cretan Liar
paradox, “Is a Cretan who says ‘All Cretans are liars’ telling the truth?” Such questions
can lead to an endless going back and forth between saying “yes” and “no,” thus
becoming undecidable as the program fails to halt at a solution. Albin argued that
economies are full of such self-referencing, implying such non-decidabilities from a
computability perspective, a swarm of unstoppable infinite regresses, and Foley agreed.4
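The flavor of this can be conveyed by a toy sketch (an illustration added here, not
drawn from Albin or Foley): encode the Cretan Liar as a function and feed its answer
back into itself. The evaluation alternates between “yes” and “no” forever, never halting
at a stable truth value.

```python
def cretan_liar(statement_is_true: bool) -> bool:
    # A Cretan says "All Cretans are liars": if the statement is true,
    # the speaker lies, so the statement must be false, and vice versa.
    return not statement_is_true

# Iterating the self-reference never settles at an answer: a decision
# procedure built this way would loop forever rather than halt.
verdict = True
for step in range(6):
    verdict = cretan_liar(verdict)
    print(step, verdict)  # alternates False, True, False, ...
```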
Albin was strongly influenced by the ideas of Stephen Wolfram (1986), who in
turn has been strongly influenced by those of Noam Chomsky (1959). In his discussion
of complexity in this introductory chapter, Foley follows through on these lines,
discussing in succession computational complexity, linguistic complexity, and machine
complexity. His initial discussion of computational complexity makes it clear that he is
4 Velupillai (2000) has emphasized the distinction between “computable economics” and “computational
economics,” with the former being more concerned with these matters of whether a program halts or not,
as Albin and Foley see it, whereas more conventional computational economics is concerned with such
issues as trying to figure out the most efficient way to program a system or problem (Judd and Tesfatsion,
2006). Curiously enough, the issue of computational complexity is involved with the concerns of
computational economics, as the most efficient program may also be the shortest in length, which is tied to
many of the more widely used measures of computational complexity.
going to embed it within the four-level hierarchy developed by Chomsky and show how
undecidability is a form of computational complexity that undermines full rationality.
This is followed by a discussion of how computational complexity and dynamic
complexity relate, which in turn is followed by a more detailed discussion of cellular
automata and the forms of complexity in them.
Central to this entire discussion is the four-level hierarchy of languages initially
due to Chomsky (1959). He defines a formal language as consisting of finite sets of an
alphabet of T terminal symbols (words), intermediate variables V, and a distinguished
variable S that serves to initiate productions in the language. Productions take the form
P → Q, where P is a string composed of one or more variables with zero or more
terminals and Q is a string composed of any combination of variables and terminals.
The lowest of these is the category of regular languages, generated by grammars
whose productions take the form P → T or P → TQ, where P and Q are variables and T
is a string of terminal symbols making up the regular language. The next level up are the
context-free languages, which include the regular languages. The grammars generating
these take the form P → Q, where P is a string of variables and Q a string composed of
terminal symbols and variables, without contextual restrictions. Above this is the level of
context-sensitive languages, generated by grammars whose productions P → Q have the
length of Q at least as long as the length of P, this group likewise containing the one
below it. For these, if Q is a non-empty string and P1PP2 → P1QP2, then Q and P can be
substituted for each other in the context of P1 and P2, but not necessarily in other
contexts. The highest level of these grammars generates the unrestricted languages,
which need only follow the most general rules described above and include the most
complex formal languages. Each of the higher levels contains the level below it
in an embedded or nested form.
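A minimal sketch may help fix ideas. The little program below generates strings
from a hypothetical regular grammar with productions S → aS and S → b (the grammar
itself is an invented example, not one from Albin or Foley); every production has the
restricted form P → T or P → TQ required at the lowest level of the hierarchy.

```python
import random

# A hypothetical regular grammar: S -> aS | b.  Each production maps a
# variable to a terminal, optionally followed by a single variable.
PRODUCTIONS = {"S": [("a", "S"), ("b", None)]}

def derive(start: str = "S", seed: int = 0) -> str:
    rng = random.Random(seed)
    symbol, out = start, []
    while symbol is not None:
        terminal, symbol = rng.choice(PRODUCTIONS[symbol])
        out.append(terminal)
    return "".join(out)

print([derive(seed=s) for s in range(5)])  # strings of the form a...ab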
This hierarchy is then seen to translate directly into a four-level hierarchy of
machine complexity. At the lowest level is the finite automaton, which has a finite set
of states and reads simple inputs off a tape to generate simple outputs, a pocket
calculator being an example. This corresponds to the regular languages. By adding an
unbounded pushdown stack, on which the automaton can store and retrieve symbols on a
first-in, last-out basis, one can recognize context-free languages. Having two
bounded pushdowns moves one to the context-sensitive level. Having two unbounded
pushdowns generates the level of unrestricted languages and becomes equivalent to an
abstract Turing machine. Foley interprets all of these as representing informational and
computational problems for economic decisionmakers.5
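As a concrete illustration of the second level, the sketch below (added here as an
example; balanced parentheses are the canonical context-free language, not an example
specific to Albin or Foley) recognizes balanced parentheses with a single unbounded
pushdown stack, something no finite automaton can do, since it requires unbounded
memory of how many parentheses remain open.

```python
def balanced(s: str) -> bool:
    """Recognize the context-free language of balanced parentheses
    with one unbounded pushdown stack (first-in, last-out)."""
    stack = []
    for ch in s:
        if ch == "(":
            stack.append(ch)      # push on an opening symbol
        elif ch == ")":
            if not stack:
                return False      # a close with nothing open: reject
            stack.pop()           # match the most recent open
    return not stack              # accept iff everything opened was closed

print(balanced("(()())"), balanced("(()"))  # True False
```

The open-now, close-much-later pattern this machine tracks is the same long-period
dependence that Foley, as noted below, likens to a loan made now and repaid later.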
Foley then pulls off the neat hat trick of applying all this to the analysis of
dynamic complexity, which he sees as also having such a hierarchy, although in this he
follows Wolfram. The dynamic equivalent of regular languages and finite automata are
simple linear dynamical systems that converge on point attractors. The next stage up,
equivalent to context-free languages and one-pushdown automata, consists of nonlinear
systems that endogenously generate periodic cycles as their attractors.
Context-sensitive languages and the two-bounded pushdown automata are
equivalent to nonlinear dynamical systems that can go to a chaotic attractor. The break
between this level and the previous one is equivalent to the line between the non-
5 Mirowski (2007) uses this four-level hierarchy to extend this analysis directly to markets, arguing that
they are fundamentally algorithms and that market forms themselves evolve in a natural-selection manner,
with the equivalent of finite automata being spot markets, futures markets the next level up, options
markets above and embedding them, and so forth.
dynamically complex and the dynamically complex systems according to the Day-Rosser
definition. Foley notes that at this level there can be long-period relations such as a
parenthesis that opens and is closed much later. He identifies this with such economic
phenomena as a loan being made that must then be paid back at some later time. He also
notes that for this level, computing costs may rise so sharply that it may become
impractical to actually solve problems, even if they are decidable in principle. This is
somewhat equivalent to the break between P and NP problems more generally in the
computational complexity literature, although it remains unproven that these really are
distinct classes.
Finally, the equivalent of the unrestricted languages and the two-unbounded-pushdown
automata, that is, Turing machines, may be undecidable. Monotonicities
holding at the other levels break down at this level.
Foley then discusses Wolfram’s more specific adaptation of these categories to
cellular automata. Type 1 automata evolve to uniform states. Type 2 evolve to periodic
patterns from arbitrary initial conditions. Type 3 evolve to irregular patterns. And Type 4
generate a much broader range of possibilities, including non-monotonic ones in which a
simple outcome may be the result of an enormously complicated set of generated
structures. Foley links this to “edge of chaos” ideas of self-organizing systems and sees
this level as the one where new structures can emerge. In Albin’s discussion of this he
ties it to the original von Neumann (1966) formulation of cellular automata that can
self-reproduce, with von Neumann describing the jump from one level to another as being
marked by complexity thresholds.
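A compact sketch of elementary one-dimensional cellular automata makes this
typology concrete (the code is an illustration added here, not Albin’s or Foley’s, and the
rule numbers are standard examples commonly associated with these classes).

```python
def step(cells, rule):
    """One synchronous update of an elementary cellular automaton: each
    new cell depends on its left neighbor, itself, and its right neighbor,
    with wraparound at the edges (Wolfram's rule numbering)."""
    n = len(cells)
    return [
        (rule >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=31, steps=12):
    cells = [0] * width
    cells[width // 2] = 1  # start from a single live cell
    print(f"rule {rule}:")
    for _ in range(steps):
        print("".join(".#"[c] for c in cells))
        cells = step(cells, rule)

run(254)  # Type 1 behavior: quickly evolves to a uniform all-# state
run(4)    # Type 2 behavior: a fixed localized pattern persists
run(30)   # Type 3 behavior: irregular, effectively random-looking
run(110)  # Type 4 behavior: long-lived interacting structures emerge
```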
This is a stunning way of integrating the supposedly competing approaches to
complexity, and this observer finds it impressive. However, one may question
whether it fully captures all the dynamic phenomena observed in dynamically
complex systems. In particular there is the matter of emergence, argued by many to be at
the very heart of dynamic complexity in systems ranging from evolution to urban
development (Rosser, 2011). One comeback is that emergence may be captured by that
highest level of Chomsky-Wolfram complexity. Another is to say that the supposed
inability of computational systems to deal with it is a sign that such emergence is not
really a proper complexity concept, with computational systems that provide genuine
novelty (Moore, 1990) being the more useful concept.6
The Dynamic Complexity of Foley’s Econophysics
In the wake of his work with Peter Albin, Duncan Foley was also moving in
another direction into the world of complexity economics, that of reconsidering the role
of statistical mechanics as a foundation for general equilibrium theory, or more
accurately, an alternative version of that theory. Mirowski (1989) has documented that
the seminal developer of statistical mechanics theory, J. Willard Gibbs (1902),7 was a
profound influence on efforts in the US to mathematize neoclassical economic theory,
both through Irving Fisher and later through Paul Samuelson (1947), who always looked
to Gibbs as a source and inspiration thanks to the direct influence of Gibbs’s student,
Edwin Bidwell Wilson, at Harvard. However, after this time the ideas
6 Foley (2010b) has continued to deal with problems of computational complexity at a theoretical level.
7 Gibbs also invented vector analysis independently of Oliver Heaviside.
of Gibbs faded from influencing many in economics directly. In any case, Samuelson
was more influenced by other parts of Gibbs’s work than that on statistical mechanics.
The idea that an equilibrium might be not just a vector of prices (an idea certainly
consistent with Gibbsian vector analysis) but a probability distribution of prices was
what Foley would pursue.8 His important 1994 paper in the Journal of Economic
Theory would do just that and would fit in with the emerging confluence of
economics and physics then happening at the Santa Fe Institute, with which he
would become affiliated, and which would shortly thereafter crystallize as econophysics,
H. Eugene Stanley coining the term in 1995, a year after Foley’s article
appeared (Rosser, 2008).
While Foley has avoided these debates, econophysics has become quite
controversial. Several economists generally sympathetic to it have criticized it fairly
recently (Gallegati, Keen, Lux, and Ormerod, 2006), with some physicists replying
sharply (Ball, 2006; McCauley, 2006). The critics make four points: 1) that many
econophysicists make exaggerated claims of originality due to not knowing relevant
economics literature, 2) that they often fail to use a sufficiently rigorous statistical
methodology or techniques, 3) that they make exaggerated claims regarding the
universality of some of their empirical findings, and 4) that they often lack any
theoretical foundation or explanation for what they study. Interestingly, Foley’s work on
this topic may help out on this last point. It is a bit ironic that he is an economist using a
physics-based approach that may provide a theoretical foundation for certain portions of
current econophysics.
8 Hans Föllmer (1974) had pursued this idea, only to have no one follow up on it with a more general
approach until Foley.
Central to Foley’s approach was the idea that an equilibrium state should be
associated with a maximization of entropy, a deeply Gibbsian idea, as Gibbs was also a
founder of chemical thermodynamics. This was an aspect that Samuelson did not
approve of so much. He made this clear at a symposium honoring Gibbs (Samuelson,
1990, p. 263):
“I have come over the years to have some impatience and boredom to find
those who try to find an analogue of the entropy of Clausius or Boltzmann or
Shannon9 to put into economic theory. It is the mathematical structure of
classical (phenomenological, macroscopic, nonstochastic) thermodynamics that
has isomorphisms with theoretical economics.” [italics in original]
Not for Samuelson would there be stochastic processes or probability distributions of
price outcomes. His Gibbs was the “nonstochastic” Gibbs.
Foley followed the later Gibbs of statistical mechanics, the student of the
foundation of temperature in the stochastic processes of molecular dynamics. It was in
this nonlinear world that phase transitions became important in studying such phenomena
as the freezing or melting or boiling or condensation of water at critical temperatures.
Foley’s interest in nonlinear dynamics had long been more along the lines of
understanding bifurcations of dynamical systems than of the precise natures of erratic
trajectories, whether chaotic or not. Statistical mechanics offered an approach to
understanding such bifurcations within the purview of his old topic of study, general
equilibrium theory.
9 It was Shannon (1951) who would link entropy with information theory. A major influence on Foley’s
thinking came through this link, as discussed by Jaynes (1957).
In any case, he pursued the idea of a thermodynamic equilibrium in which entropy
is maximized.10 A difference between this sort of equilibrium and the more standard
Walrasian one is that the statistical mechanics equilibrium lacks the welfare economics
implications the Walrasian one has. There would be no Pareto optimality for such a
stochastic equilibrium, an equilibrium in which a good is exchanged at many
different prices across a set of heterogeneous traders in discrete bilateral transactions,
with the set of those prices conforming to the distribution given by the maximization of
entropy, this occurring across all goods and services.
One limit of the traditional Gibbsian model is that it is a conservative system,
which usually means for a model of economic exchange that one is dealing with a pure
exchange model. While much of general equilibrium theory has been developed with
such an assumption, Foley was able to expand this to include both production and
exchange. The key strong assumption he made was that all possible transactions in the
economy have an equal probability. A curious outcome of this model is the possibility of
negative prices coinciding with positive ones for the same good. While conventional
theory rejects such ideas, indeed even the idea of a negative price for something at all,11
such outcomes have been observed in real life, as in the Babylonian bride markets
reported by Herodotus, wherein the most beautiful brides commanded positive prices
while the ugliest commanded negative ones, even in the same auction. This phenomenon
has come to be known as the “Herodotus paradox” (Baye, Kovenock, and de Vries, 2011),
10 While not doing so along the lines of Foley, others who have applied the entropy concept to either
economics or spatial dynamics include Wilson (1970), Georgescu-Roegen (1971), Stutzer (1994), and
Weidlich (2000). Julius Davidson (1919) posed the law of entropy as the ultimate foundation for the law of
diminishing returns.
11 As long as the price of something is strictly negative, one can turn it into a positive price for removing
the object. We pay for water when it is scarce, but if it floods our basement, we pay people to remove it.
and it is to Foley’s credit that he allows for such outcomes, in contrast with long
traditions of general equilibrium theory that insist on strictly non-negative prices for
equilibrium solutions.
In his model there are m commodities and n agents of r types, with each type k
having an offer set A_k. The proportion of agents of type k who achieve a particular
transaction x out of the mn possible ones is given by h_k[x]. The multiplicity of an
assignment is the number of ways n agents can be assigned to S actions, with n_s being
the number of agents assigned to action s, which is given by

W[\{n_s\}] = \frac{n!}{n_1! \cdots n_s! \cdots n_S!}.   (1)

A measure of this multiplicity is the informational entropy of the distribution, given by

H\{h_k[x]\} = -\sum_{k=1}^{r} w_k \sum_{x=1}^{mn} h_k[x] \ln h_k[x].   (2)

This is then maximized subject to a pair of feasibility constraints,

\sum_{x \in A_k} h_k[x] = 1,   (3)

\sum_{k=1}^{r} w_k \sum_{x \in A_k} h_k[x]\, x = 0,   (4)

yielding, if the feasibility set is non-empty, a unique solution of the canonical Gibbs form

h_k[x] = \frac{\exp[-\pi \cdot x]}{\sum_{x \in A_k} \exp[-\pi \cdot x]},   (5)

where the vectors \pi \in \mathbb{R}^m are the entropy shadow prices.
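To see how the canonical Gibbs form works in practice, the following minimal
numerical sketch (added here for illustration; the single agent type, single commodity,
and the particular offer set are hypothetical choices, not taken from Foley’s paper) finds
the entropy shadow price that makes the Gibbs distribution over a small offer set satisfy
the clearing constraint (4).

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical offer set for one agent type trading one commodity:
# each x is a net quantity received (negative = net seller).
offers = np.array([-1.0, 0.0, 2.0])

def gibbs(pi: float) -> np.ndarray:
    """Canonical Gibbs distribution h[x] = exp(-pi*x) / Z, as in (5)."""
    weights = np.exp(-pi * offers)
    return weights / weights.sum()

def excess(pi: float) -> float:
    """Aggregate net transaction sum_x h[x]*x; constraint (4) wants 0."""
    return float(gibbs(pi) @ offers)

pi_star = brentq(excess, -10.0, 10.0)  # solve for the shadow price
print(pi_star, gibbs(pi_star))         # entropy-maximizing distribution
```

In the resulting equilibrium, several distinct transactions coexist with positive
probability, which is the sense in which this statistical equilibrium is a distribution
rather than a single price vector.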
Foley has continued to follow this line of inquiry (Foley and Smith, 2008), which
is more in line with the developments in econophysics that have followed from that time
forward. Foley has not pursued the question of whether these distributions obey power
laws, a major concern of econophysicists, although he does accept their argument that
financial market returns do tend to follow such laws. He has expressed the view that
these newer ideas from physics can help move economics into a broader social science
perspective along more realistic lines than has been seen in the past (Colander, Holt, and
Rosser, 2004, p. 212):
“The study of economic data surely has a future, but the question is
whether it will be recognizable as economics in today’s terms and whether it will
exhibit any unity of subject matter and method.”
Conclusion
Duncan Foley’s intellectual career has emphasized fundamental issues of
economic theory, particularly general equilibrium theory. This initially concerned the
loose ends of uniqueness and stability and how the role of money might allow for a
proper microfoundation for macroeconomics. However, as he pursued these questions
through various approaches and over time, he eventually moved towards the two leading
strands of complexity within economics, the computational and the dynamic. He became
involved in the former through his collaboration with the late Peter Albin, the first
economist to seriously deal with problems of computational complexity. They followed
the route of using the four-level hierarchy model of Chomsky as adumbrated by
Wolfram. A particular emphasis of their interest was on the problems of non-decidability
that can arise in such systems and how these limit full rationality by agents.
He would also move towards the sorts of dynamics associated with the
econophysics movement by using the statistical mechanics methods developed initially
by J. Willard Gibbs, although as applied to information theory by such figures as
Shannon and Jaynes. This allowed him to establish a maximum entropy equilibrium
of a distribution of price outcomes that could allow for both positive and negative prices
for specific goods and that also lacked any welfare implications.
A particular outcome of the earlier work was a possible integration of the
computational and dynamic approaches. This involved classifying patterns of dynamic
paths arising from nonlinear economic systems into the four-level hierarchy developed by
Chomsky and Wolfram. One may disagree that this is the best way to reconcile these
mostly competing approaches to economic complexity. However, it must be granted that
this is an ingenious intellectual achievement, not the only one that Duncan Foley has
achieved in his distinguished intellectual career.
References
Albin, Peter S. 1982. “The metalogic of economic predictions, calculations, and
propositions.” Mathematical Social Sciences 3, 329-358.
Albin, Peter and Duncan K. Foley. 1992. “Decentralized, dispersed exchange without an
auctioneer: A simulation study.” Journal of Economic Behavior and Organization 18,
27-51.
Albin, Peter S. with Duncan K. Foley. 1998. Barriers and Bounds to Rationality: Essays
on Economic Complexity and Dynamics in Interactive Systems. Princeton: Princeton
University Press.
Arthur, W. Brian, Steven N. Durlauf, and David A. Lane. 1997. “Introduction.” In:
Arthur, W.B, Durlauf, S.N., and Lane, D.A. (eds.). The Economy as an Evolving
Complex System II. Reading: Addison-Wesley, pp. 1-14.
Ball, Philip. 2006. “Culture clash.” Nature 441, 686-688.
Baye, Michael A., Don Kovenock, and Caspar de Vries. 2011. “The Herodotus paradox.”
Games and Economic Behavior, forthcoming.
Blum, Lenore, Felipe Cucker, Michael Shub, and Steve Smale. 1998. Complexity and
Real Computation. New York: Springer.
Chomsky, Noam. 1959. “On certain properties of grammars.” Information and Control 2,
137-167.
Colander, David, Richard P.F. Holt, and J. Barkley Rosser, Jr. 2004. The Changing Face
of Economics: Conversations with Cutting Edge Economists. Ann Arbor: University of
Michigan Press.
Davidson, Julius. 1919. “One of the physical foundations of economics.” Quarterly
Journal of Economics 33, 717-724.
Day, Richard H. 1994. Complex Economic Dynamics: An Introduction to Dynamical
Systems and Market Mechanisms, Volume I. Cambridge: MIT Press.
Foley, Duncan K. 1982. “Realization and accumulation in a Marxian model of the circuit
of capital.” Journal of Economic Theory 28, 300-319.
Foley, Duncan K. 1986. “Liquidity-profit rate cycles in a capitalist economy.” Journal of
Economic Behavior and Organization 8, 363-376.
Foley, Duncan K. 1994. “A statistical equilibrium theory of markets.” Journal of Economic
Theory 62, 321-345.
Foley, Duncan K. 2010a. “What’s wrong with the fundamental existence and welfare
theorems?” Journal of Economic Behavior and Organization 75, 115-131.
Foley, Duncan K. 2010b. “Model description length priors in the urn problem.” In:
Zambelli, S. (ed.). Computable, Constructive and Behavioural Economic Dynamics:
Essays in Honour of Kumaraswamy (Vela) Velupillai. Milton Park: Routledge, pp. 109-121.
Foley, Duncan K. and Miguel Sidrauski. 1971. Monetary and Fiscal Policy in a Growing
Economy. New York: Macmillan.
Foley, Duncan K. and Eric Smith. 2008. “Classical thermodynamics and general
equilibrium theory.” Journal of Economic Dynamics and Control 32, 7-65.
Föllmer, Hans. 1974. “Random economies with many interacting agents.” Journal of
Mathematical Economics 1, 51-62.
Gallegati, Mauro, Steve Keen, Thomas Lux, and Paul Ormerod. 2006. “Worrying trends
in econophysics.” Physica A 370, 1-6.
Georgescu-Roegen, Nicholas. 1971. The Entropy Law and the Economic Process.
Cambridge: Harvard University Press.
Gibbs, J. Willard. 1902. Elementary Principles of Statistical Mechanics. New Haven:
Yale University Press.
Goodwin, Richard M. 1947. “Dynamical coupling with especial reference to markets
having production lags.” Econometrica 15, 181-204.
Goodwin, Richard M. 1951. “The nonlinear accelerator and the persistence of business
cycles.” Econometrica 19, 1-17.
Horgan, John. 1997. The End of Science: Facing the Limits of Knowledge in the Twilight
of the Scientific Age. New York: Broadway Books.
Jaynes, E.T. 1957. “Information theory and statistical mechanics.” Physical Review 106,
620-638; 108, 171-190.
McCauley, Joseph L. 2006. “Response to ‘Worrying trends in econophysics’.” Physica A
371, 601-609.
Mirowski, Philip. 1989. More Heat than Light: Economics as Social Physics, Physics as
Nature’s Economics. Cambridge: Cambridge University Press.
Mirowski, Philip. 2007. “Markets come to bits: Evolution, computation, and makomata in
economic science.” Journal of Economic Behavior and Organization 63, 209-242.
Moore, Christopher. 1990. “Undecidability and unpredictability in dynamical systems.”
Physical Review Letters 64, 2354-2357.
Rosser, J. Barkley, Jr. 1999. “On the complexities of complex economic dynamics.”
Journal of Economic Perspectives 13(4), 169-192.
Rosser, J. Barkley, Jr. 2008. “Debating the role of econophysics.” Nonlinear Dynamics,
Psychology, and Life Sciences 12, 311-323.
Rosser, J. Barkley, Jr. 2009. “Computational and dynamic complexity in economics.” In:
Rosser, J.B., Jr. (ed.). Handbook of Complexity Research. Cheltenham: Edward Elgar, pp.
22-35.
Rosser, J. Barkley, Jr. 2011. Complex Evolutionary Dynamics in Urban-Regional and
Ecologic-Economic Systems. New York: Springer.
Samuelson, Paul A. 1947. The Foundations of Economic Analysis. Cambridge: Harvard
University Press.
Samuelson, Paul A. 1990. “Gibbs in economics.” In: Caldi, G. and Mostow, G.D. (eds.).
Proceedings of the Gibbs Symposium. Providence: American Mathematical Society, pp.
255-267.
Scarf, Herbert E. 1960. “Some examples of global instability of competitive
equilibrium.” International Economic Review 1, 157-172.
Scarf, Herbert E. 1973. The Computation of Economic Equilibria. New Haven: Yale
University Press.
Shannon, Claude E. 1951. “Prediction and entropy of printed English.” The Bell System
Technical Journal 30, 50-64.
Stutzer, Michael J. 1994. “The statistical mechanics of asset prices.” In: Elworthy, K.D.,
Everitt, W.N., and Lee, E.B. (eds.). Differential Equations, Dynamical Systems, and
Control Science: A Festschrift in Honor of Leonard Markus. New York: Marcel Dekker,
pp. 321-342.
Tesfatsion, Leigh and Kenneth L. Judd (eds.). 2006. Handbook of Computational
Economics, Volume 2: Agent-Based Computational Economics. Amsterdam: Elsevier.
Turing, Alan M. 1936-37. “On computable numbers, with an application to the
Entscheidungsproblem.” Proceedings of the London Mathematical Society, Series 2, 42,
230-265.
Velupillai, K. Vela. 2000. Computable Economics. Oxford: Oxford University Press.
Velupillai, K. Vela. 2009. “A computable economist’s perspective on computational
complexity.” In: Rosser, J.B., Jr. (ed.). Handbook of Complexity Research. Cheltenham:
Edward Elgar, pp. 36-83.
von Neumann, John, completed by Arthur W. Burks. 1966. Theory of Self-Reproducing
Automata. Urbana: University of Illinois Press.
Weidlich, Wolfgang. 2000. Sociodynamics: A Systematic Approach to Mathematical
Modelling in the Social Sciences. Amsterdam: Harwood.
Wilson, Alan G. 1970. Entropy in Urban and Regional Modelling. London: Pion.
Wolfram, Stephen (ed.). 1986. Theory and Applications of Cellular Automata.
Singapore: World Scientific.