RANDOM PROCESSES AND TRANSFORMATIONS
S. ULAM
It is intended to present here a general point of view and specific problems
connected with the relation between descriptions of physical phenomena by
random processes and the theory of probabilities on the one hand, and the
deterministic descriptions by methods of classical analysis in mathematical
physics, on the other. We shall attempt to formulate procedures of random processes which will permit heuristic and also quantitative evaluations of the
solutions of differential or integral-differential equations. Broadly speaking,
such methods will amount to construction of statistical models of given physical
situations and statistical experiments designed to evaluate the behavior of the
physical quantities involved.
The role of probability theory in physics is really manifold. In classical theories
the role of initial conditions is consciously idealized. In reality these initial conditions are known only within certain ranges of values. One could say that
probability distributions are given for the actual values of initial parameters.
The influence of the variation of initial conditions, subject to "small" fluctuations,
on the properties of solutions has been studied in numerous cases and forms one
subject of the theories of "stability."
In a more general way, not only the initial constants, but even the operators
describing the behavior of a given physical situation may not be known exactly.
We might assume that, for example, the forces acting on a given mechanical
system are known only within certain limits. They might depend, for example, to
some extent on certain "hidden" parameters and we might again study the influence of random terms in these forces on the given system.
In quantum theory, of course, the role of a stochastic point of view is even more
fundamental. The variables describing a physical system are of higher mathematical type. They are sets of points or sets of numbers (real, complex, or still
more general) rather than the numbers themselves. The probability distributions
enter from the beginning as the primitive notions of the theory. The observable or
measurable quantities are values of certain functionals or eigenvalues of operators
acting on these distributions. Again, in addition to this fundamental role of the
probabilities formulation, there will enter the fact that the nature of forces or
conditions may not be known correctly or exactly, but the operators corresponding to them will depend on "hidden" parameters in a fashion similar to that
in classical physics. In fact, at the present time considerable latitude exists in
the choice of operators corresponding to "forces" in nuclear physics.
There is, in addition, another reason for the recourse to descriptions in the
spirit of the theory of probabilities which permit, from the beginning, a flexibility
and, therefore, greater generality of formulations. It is obvious that a general
mathematical formalism for dealing with "complications" in models of reality
is needed already on a heuristic level. This need is mainly due to the lack of simplicity in the presently employed models for the behavior of matter and radiation. The combinatorial complexity alone, present in such diverse problems as
hydrodynamics, the theory of cosmic rays, the theory of nuclear reaction in heterogeneous media, is very great. One has to remember that even in the present
theories of so-called elementary particles themselves one employs rather complicated models for each of these particles and their interactions. Often the complications relate already to the qualitative topological and algebraic structure even
before one attempts to pursue analysis of these models. One reason for these
complications is that such problems involve a considerable number of independent variables. The infinitesimal analysis, i.e., the methods of calculus,
become, for the case of many variables, unwieldy and often only purely symbolic.
The class of "elementary" functions within which the operators of the calculus
act in an algebraically tolerable fashion is restricted in the main to functions of
one variable (real or complex). Mathematical physics deals with this increasing
complexity in two opposite limiting methods. The first is the study of systems of
differential or integral-differential equations describing in detail the behavior of
each element of the system under consideration. The second, an opposite extreme
in treatment, is found in theories like statistical mechanics dealing with only a
few total or integral properties of systems which consist of enormous numbers of
objects. There we resign ourselves to the study of only a few functionals or
operators on such ensembles.
Systems involving, so to say, an intermediate situation have been becoming,
in recent years, more and more important in both theory and practice. A mechanical problem of a system of N bodies with forces acting between them (we think
here of N as having a value like, say, 10 or 20) would present an example of this
kind. Similarly one can think of a continuum, say a fluid subject to given forces
in which, however, we are interested in the values of N quantities describing the
whole continuum of the fluid. Neither of the two extreme approaches which we
mentioned is very practical in such cases. It will be impractical to try to solve
exactly the deterministic equations. The purely statistical study of the system, in
the spirit of thermodynamics, will not be detailed enough. The approach should
be rather a combination of the two extreme points of view, the classical one of
following step by step in time and space the action of differential and integral
operators and the stochastic method of averaging over whole classes of initial
conditions, relations, and interactions. We propose a way to combine the deterministic and probability method by some general mathematical algorithms.
In mathematics itself combinatorial analysis lacks general methods, and
methodologically resembles an experimental science. Its problems are suggested
by arrangements and combinations of physically existing situations and each
requires for solution specific ingenuity. In analysis the subject of functional
equations is in a similar position. There is a variety of special cases, each treated
by special methods. According to Poincaré it is even impossible to define, in
general, functional equations.
We shall now give examples of heuristic approaches all based on the same
principle: of an equivalent random process through which one can examine the
various problems of mathematical physics alluded to above.
One should remember that mathematical logic itself or the study of mathematics as a formal system can be considered a branch of combinatorial analysis.
Metamathematics introduces a class of games—"solitaires"—to be played with
symbols according to given rules. One sense of Gödel's theorem is that some
properties of these games can be ascertained only by playing them.
From the practical point of view, investigation of random processes by playing
the corresponding games is facilitated by the electronic computing machines.
(In this connection: a simple computational device for production of a sequence
of numbers with certain properties of randomness is desirable. By iterating the
function x' = 4x(1 − x) one obtains, for almost all x, the same ergodic distribution of iterates in (0, 1) [10; 12].)
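The invariant density of this map on (0, 1) is known to be 1/(π√(x(1 − x))). A minimal computational sketch (Python; the starting point and the sample size are arbitrary choices, and floating-point roundoff is ignored) checks the asserted ergodic behavior empirically:

```python
import numpy as np

def logistic_orbit(x0, n):
    """Iterate x' = 4x(1 - x) and return the first n iterates."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = 4.0 * x * (1.0 - x)
        xs[i] = x
    return xs

# Fraction of iterates in (a, b), compared with the integral of the known
# invariant density 1/(pi * sqrt(x(1 - x))) over (a, b).
orbit = logistic_orbit(x0=0.1234, n=200_000)
a, b = 0.25, 0.75
empirical = np.mean((orbit > a) & (orbit < b))
theoretical = 2.0 / np.pi * (np.arcsin(np.sqrt(b)) - np.arcsin(np.sqrt(a)))
print(empirical, theoretical)   # both close to 1/3
```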
II
One should remember that the distinction between a probabilistic and deterministic point of view lies often only in the interpretation and not in the
mathematical treatment itself. A well-known example of this is the comparison
of two problems, (1) Borel's law of large numbers for the sequence of the throws
of a coin, and (2) a simple version of the ergodic theorem of Birkhoff: if one
applies this ergodic theorem to a very special situation, namely, the system of
real numbers in a binary expansion, the transformation T of this set on itself
being a shift of the binary development by 1, one will realize that the theorems
of Borel and Birkhoff assert in this case the same thing (this was noticed first,
independently, by Doob, E. Hopf, and Khintchine). In this case a formulation of
the theory of probability and a deterministic one of iterating a well-defined transformation are mathematically equivalent.
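A small computational sketch (Python; the length of the sequence is an arbitrary choice) exhibits the two statements as one and the same sum: a single sequence of coin throws is read once as throws of a coin, and once as the orbit of the shift T(x) = 2x mod 1 applied to the number whose binary digits they are.

```python
from fractions import Fraction
import random

random.seed(1)
n = 5000
bits = [random.randint(0, 1) for _ in range(n)]          # the coin throws

# (1) Borel's law of large numbers: frequency of heads among the throws.
borel_average = sum(bits) / n

# (2) Birkhoff's theorem for the shift: form the number x with these binary
#     digits, iterate T(x) = 2x mod 1, and average f(x) = [x >= 1/2],
#     i.e. the current first binary digit.
x = sum(Fraction(b, 2 ** (k + 1)) for k, b in enumerate(bits))
total = 0
for _ in range(n):
    total += 1 if x >= Fraction(1, 2) else 0
    x = (2 * x) % 1
birkhoff_average = total / n

print(borel_average, birkhoff_average)    # identical: the same sum read two ways
```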
In simple situations one might combine the two points of view: the one of
probability theories, the other of iterating given transformations as follows.
Given is a space E; given also are several measure preserving transformations
T1, T2, ···, TN. We start with a point p and apply to it in turn at random the
given transformations. Assume for simplicity that at each time each of the N
given transformations has an equal chance = 1/N of being applied. It was proved
by von Neumann and the author that the ergodic theorem still holds in the following version: for almost every sequence of choices of these transformations and for
almost every point p the ergodic limit will exist [10; 12]. The proof consists in the
use of the ergodic theorem of Birkhoff in a suitably defined space embodying, as
it were, the space of all choices of the given transformations over the space E.
The question of metric transitivity of a transformation, i.e., the question whether
the limit in time is equal to the space average, can be similarly generalized
from the iteration of a given transformation to the situation dealt with above;
that is, the behavior of a sequence of points obtained by using several transformations at random. One can again show, similarly to the case of one transformation [11], that metric transitivity obtains in very general cases.
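A minimal sketch of this random ergodic behavior (Python; the two rotations, the test function, and the starting point are illustrative choices, not part of the theorem) applies one of two measure-preserving maps of the unit interval at random at each step and computes the time average:

```python
import math
import random

# Two measure preserving transformations of [0, 1): rotations by irrational amounts.
alpha, beta = math.sqrt(2) - 1, math.sqrt(3) - 1
transformations = [lambda x: (x + alpha) % 1.0,
                   lambda x: (x + beta) % 1.0]

def f(x):
    return math.cos(2 * math.pi * x)       # space average over [0, 1) is 0

random.seed(2)
p, n, running = 0.371, 200_000, 0.0
for _ in range(n):
    running += f(p)
    p = random.choice(transformations)(p)  # apply T1 or T2, each with chance 1/2
print(running / n)                         # the ergodic limit: close to the space average 0
```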
III
A very simple practical illustration of a statistical approach to a mathematically well-defined problem is the evaluation of integrals by a sampling
procedure: suppose R is a region in a k-dimensional space defined by the inequalities:

f1(x1, ···, xk) < 0
f2(x1, ···, xk) < 0
···
fl(x1, ···, xk) < 0.
The region is contained, say, in the unit cube. The problem is to evaluate the
volume of this region. The most direct approximation is from the definition of the
integral: one divides each of the k axes into a number N of, say, equidistant
points. We obtain in our cube N^k lattice points and by counting the fraction of
those which do belong to the given region we obtain an approximate value of its
volume. An alternative procedure would be to produce, at random, with uniform
probability a number M of points in the unit cube and count again the fraction of
those belonging to the given region. From Bernoulli's law of large numbers it
follows that as M tends to infinity this fraction will, with probability 1, tend to
the value of the volume in question. It is clear from the practical point of view
that for large values of k, the second procedure will be, in general, more economical. We know the probability of an error in M tries and, given the error, the
necessary value of M will be for large k much smaller than N^k. Thus it can be
seen in this simple problem that by playing a game of chance (producing the
points at random) we may obtain quantitative estimates of numbers defined by
a strictly deterministic rule. Analogously, one can evaluate, by such statistical
procedures, integrals occurring in more general problems of "geometric probabilities."
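As a concrete sketch (Python; the particular region, defined by the single inequality x1² + ··· + xk² − 1 < 0, and the values of k, N, and M are illustrative choices), the two procedures can be compared directly:

```python
import math
import numpy as np

def in_region(points):
    """The region R in the unit cube: x1^2 + ... + xk^2 - 1 < 0."""
    return (points ** 2).sum(axis=1) < 1.0

k, N, M = 6, 8, 100_000                 # dimension, lattice points per axis, random samples
rng = np.random.default_rng(0)

# First procedure: N^k lattice points (their number grows rapidly with k).
axes = [np.linspace(0.0, 1.0, N)] * k
lattice = np.stack(np.meshgrid(*axes), axis=-1).reshape(-1, k)
lattice_estimate = in_region(lattice).mean()

# Second procedure: M points produced at random with uniform probability.
samples = rng.random((M, k))
monte_carlo_estimate = in_region(samples).mean()

exact = math.pi ** (k / 2) / math.gamma(k / 2 + 1) / 2 ** k   # volume of the ball sector
print(lattice_estimate, monte_carlo_estimate, exact)
```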
IV
Statistical models, that is, the random processes equivalent to the deterministic
transformations, are obvious in the case of physical processes described by
differential diffusion equations or by integral differential equations of the Boltzmann type. These processes are, of course, the corresponding "random walks".
One finds in the extensive literature dealing with stochastic processes the foundations
for construction and study of such models, at least for simple problems of the
above type. It is known that limiting distributions resulting from such processes
obey certain partial differential equations. Our aim is to invert the usual procedure. Given a partial differential equation, we construct models of suitable
games and obtain distributions or solutions of the corresponding equations by
playing these games, i.e., by experiment. As an illustration consider the problem
of description of large cosmic ray showers. It can be schematized as follows:
An incoming particle produces with certain probabilities new particles; each
of these new particles, which are of several kinds, is, moreover, characterized by
additional indices giving its momenta and energies. These particles can further
multiply into new ones until the energies in the last generation fall under certain
given limits. The problem is first: to predict, from the given probabilities of
reactions, the statistical properties of the shower; secondly, a more difficult
one, the inverse problem, where the elementary probabilities of transformation
are not known but statistics of the showers are available, to estimate these
probabilities from the properties of the shower. Mathematically, the problem is
described by a system of ordinary differential equations or by a matrix of transitions, which has to be iterated.
A way to get the necessary statistics may be, of course, to "produce" a large
number of these showers by playing a game of chance with assumed probabilities
and examine the resulting distributions statistically. This may be more economical than the actual computation of the powers of the matrices describing the
transition and transmutation probabilities: the multiplication of matrices corresponds to evaluation of all contingencies at each stage, whereas by playing a
game of chance we select at each stage only one of the alternatives.
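A toy sketch of "producing" showers by a game of chance follows (Python; the splitting probability, the rule that a particle divides its energy at random between two offspring, and the energy cutoff are purely illustrative assumptions, not the physical probabilities of reaction):

```python
import random

def play_shower(e0, p_split=0.7, cutoff=1.0):
    """Play one shower by a game of chance; return the number of particles produced."""
    active, produced = [e0], 0
    while active:
        e = active.pop()
        if e < cutoff:
            produced += 1                      # energy has fallen under the given limit
        elif random.random() < p_split:
            u = random.random()                # divide the energy at random between two offspring
            active.extend([u * e, (1.0 - u) * e])
        else:
            produced += 1                      # the particle does not multiply further
    return produced

random.seed(3)
sizes = [play_shower(e0=100.0) for _ in range(10_000)]
print(sum(sizes) / len(sizes), max(sizes))     # statistics of the showers so produced
```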
Another example: given is a medium consisting of several nuclearly different
materials, one of which is uranium. One introduces one or several neutrons which
will cause the generation of more neutrons through fissions in uranium. We introduce types, i.e., indices of particles corresponding to different kinds of nuclei
present. In addition, the positions and velocities of particles of each type can be
also characterized by additional indices of the particle so that these continuous
variables are also, approximately, represented by a finite class of discrete indices. The given geometrical properties of the whole assembly and nuclear constants corresponding to the probabilities of reaction of particles (they are, in
general, functions of velocities) would give us a matrix of transitions and transmutations. Assuming that time proceeds by discrete fixed intervals, we can then
study the powers of the matrix. These will give us the state of the system at the
nth interval of time. It is important to remember that the Markoff process involved here has infinitely many states because the numbers of particles of each
type are not a priori bounded. A very schematized mathematical treatment would
be given by the partial differential equation
∂w/∂t = aΔw + b(x)w.
This equation describes the behavior of a diffusing and multiplicative system of
particles of one type, x denoting the "index" of position. For a mathematical
description of this system it is preferable, instead of picturing it as an infinite-dimensional Markoff process, to treat it as an iteration of a transformation of a
space given by the generating functions [2; 3; 5; 6; 9]. (Considerable work has
been done on a theory of such processes also by Russian mathematicians [8].)
The transformation T, given by the generating functions, which is of the form
x'i = fi(x1, ···, xn), i = 1, ···, n, where the fi are power series with non-negative coefficients, will define a linear transformation A whose terms aij will be the
expected values of the numbers of particles of type j produced by starting with a
particle of type i. Ordinarily, to interpret a matrix by a probabilistic game, one
should have all of the terms non-negative, and the sum of each row should be
equal to 1. One can generalize the interpretation of matrices, however, by playing
a probability game, considering the terms not as transition probabilities but
rather as the first moments or expected values of the numbers of particles of type
j produced by one particle of type i. (The probabilities, of course, can be fixed in
many different ways so as to yield the same given values of the moments.) One
can go still further. Multiplication of matrices with arbitrary real coefficients can
be studied by playing a probability game if we interpret the real numbers in
each term as matrices with non-negative coefficients over two symbols:
1 ↔ (1 0; 0 1),    −1 ↔ (0 1; 1 0).
The negative and positive numbers require then each its own "particles" with
separate indices. This correspondence preserves, of course, both addition and
multiplication on matrices. Obviously, more general matrices with complex
numbers as general terms admit, therefore, also of analogous probabilistic interpretation, each complex number requiring 4 types of "particles" in this
correspondence [4].
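A computational sketch of this correspondence follows (Python; the particular encoding of a real number a as the non-negative block [[a⁺, a⁻], [a⁻, a⁺]], where a = a⁺ − a⁻, is one concrete reading of the construction alluded to in [4], not a quotation of it):

```python
import numpy as np

def doubled(A):
    """Replace each real entry a by the non-negative 2x2 block [[a+, a-], [a-, a+]]."""
    pos, neg = np.maximum(A, 0.0), np.maximum(-A, 0.0)
    identity = np.eye(2)
    swap = np.array([[0.0, 1.0], [1.0, 0.0]])     # the block standing for -1
    return np.kron(pos, identity) + np.kron(neg, swap)

def value(D):
    """Read back the real matrix represented by a doubled matrix."""
    return D[::2, ::2] - D[::2, 1::2]

rng = np.random.default_rng(4)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))

# The doubled matrices have only non-negative entries; multiplying or adding them
# and reading off the values gives back the ordinary product or sum.
print(np.allclose(value(doubled(A) @ doubled(B)), A @ B))   # True
print(np.allclose(value(doubled(A) + doubled(B)), A + B))   # True
```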
The following theorem provides one mathematical relation between the properties of the iterates of the transformation given by generating functions and the
iterates of the associated linear transformation (given by the expected values):
With probability 1 the ratios of the numbers of particles of any two types will
approach the ratios defined by the direction of the invariant vector given by
Frobenius' theorem for the linear matrix [2; 3; 5; 6; 9].
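A sketch of this statement for two types of particles (Python; the mean matrix and the choice of Poisson offspring numbers with the stated expected values are illustrative assumptions, and the invariant direction is taken as the Perron eigenvector of the transposed matrix of expected values):

```python
import numpy as np

# a[i, j] = expected number of particles of type j produced by one particle of type i.
A = np.array([[1.2, 0.3],
              [0.5, 0.9]])
rng = np.random.default_rng(5)

counts = np.array([1, 0])                      # start with one particle of type 0
for _ in range(25):                            # play the game generation by generation
    expected = counts @ A                      # expected offspring of each type
    counts = rng.poisson(expected)             # sample offspring with these expected values
    if counts.sum() == 0:                      # the process may die out; then there is no ratio
        break

eigenvalues, eigenvectors = np.linalg.eig(A.T)
perron = np.abs(eigenvectors[:, np.argmax(eigenvalues.real)])
print(counts / counts.sum(), perron / perron.sum())   # the two ratio vectors nearly agree
```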
It is possible to interpret the "particles" in a rather general and abstract
fashion. Thus, for example, one may introduce an auxiliary particle whose role is
that of a clock [2, part 2]. A distribution in the 4-dimensional time-space continuum can be investigated by an iteration of transition and transmutation
matrices. The parameter of iteration will then be a purely mathematical variable
τ, having no direct physical meaning since physical time is now one of the dependent variables.
V
In some cases one could deal with a partial differential equation as follows.
First, purely formally, we transform it into an equation of the diffusion-multiplication type. We then interpret this equation as describing the behavior of a
system consisting of a large number of particles of various types which diffuse
and transmute into each other. Finally we study the behavior of such a system
empirically by playing a game with these particles according to prescribed
chances of transitions. Suppose, for example, we have the time independent
Schrödinger equation:
aΔψ + (E − V(x, y, z))ψ = 0.
By introducing a new variable τ and the function

u = ψ e^(−Eτ),

we shall obtain the equation

∂u/∂τ = aΔu − Vu.
This latter is of the desired type. The potential V(x, y, z) plays the role of expected value of the multiplication factor at the position given by the vector
x [1]. Dirac's equation can also be treated in a similar fashion. (We have to
introduce at least 4 types of particles since the description is not by means of real
numbers but through Dirac's matrices. Again the parameter τ, as in Schrödinger's
equation, is a purely auxiliary variable not interpretable as time.) Such probability models certainly have heuristic value in cases where no analytical methods
are readily applicable to obtain solutions of the corresponding equations in
closed form. This is, for example, the case when the potential function is not of
simple enough type or in problems dealing with three or more particles. The
result of a probability game will, of course, never give us the desired quantities
accurately but could only allow the following possible interpretation: Given
ε > 0, η > 0, with probability 1 − η, the values of quantities which we try to
compute lie within ε of the constants obtained by our random process for a sufficiently great number n of the sampling population.
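A minimal sketch in the spirit of the sampling method of [1] (Python; the choices a = 1/2 and V(x) = x²/2, for which the lowest eigenvalue is 1/2, the time step, and the device of keeping the population fixed by resampling in proportion to the weights are all illustrative assumptions):

```python
import numpy as np

# du/dtau = a u'' - V(x) u  with  a = 1/2,  V(x) = x^2 / 2.
# The total weight of the particle population decays like exp(-E0 * tau),
# so the decay rate estimates the lowest eigenvalue E0 (here E0 = 1/2).
a, dtau, n_steps, n_walkers = 0.5, 0.01, 2000, 10_000
rng = np.random.default_rng(6)

x = np.zeros(n_walkers)                        # positions of the diffusing particles
log_total_weight = 0.0
for _ in range(n_steps):
    x += rng.normal(scale=np.sqrt(2 * a * dtau), size=n_walkers)   # diffusion step
    w = np.exp(-0.5 * x ** 2 * dtau)                               # multiplication factor e^{-V dtau}
    log_total_weight += np.log(w.mean())
    x = rng.choice(x, size=n_walkers, p=w / w.sum())               # keep the population size fixed

print(-log_total_weight / (n_steps * dtau))    # estimate of E0, close to 1/2
```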
One should remember that in reality the integral or partial differential equations often describe only the behavior of averages or expected values of physical
quantities. Thus, for example, if one assumes as fundamental a model of the
fluid as does the kinetic theory, the equations of hydrodynamics will describe
the behavior of average quantities; velocities, pressures, etc., are defined by
averaging these over very large numbers of atoms near a given position. The
results of a probability game will reflect, to some extent, the deviation of such
quantities from their average values. That is to say, the fluctuations unavoidably
present as a result of the random processes performed may not be purely mathematical but may reflect, to some extent, the physical reality.
VI
One economy of a statistical formulation is this: often, in a physical problem,
one is merely interested in finding the values of only a few functionals of an unknown distribution. Thus, for example, in a hydrodynamic problem we would
like to know, say, the average velocity and the average pressure in a certain
region of the fluid. In order to compute these one has to know, in an analytic
formulation of the problem, the positions of all the particles of the fluid. One
needs then the knowledge of the functions for all values of the independent variables. In an abstract formulation the situation is this: given is an operator
equation U(f) = 0 where f is a function of k variables; what we want to know
is the value of several given functionals G1(f), G2(f), ···, Gl(f). (Sometimes, of
course, even the existence of a solution of the equation U(f) = 0 or, which is the
same, of the equation V(f) = U(f) + f = f, that is, the fixed point of the operator
V(f), is not a priori guaranteed.) The physical problem, however, consists merely
in finding the values of Gi(f). Mathematically it amounts to looking for functions
f for which Gi(V(f)) = Gi(f). We might call such f quasi-fixed points of the
transformation V (with respect to the given functionals Gi). Obviously, the
existence of quasi-fixed points is, a priori, easier to establish than the existence
of a solution in the strict sense. A simple mathematical illustration follows:
let T be a continuous transformation of the plane onto itself given by x' = f(x, y),
y' = g(x, y). There need not, of course, exist a fixed point. There will always exist
a point (x0, y0) such that | x'0 | = | x0 |, | y'0 | = | y0 |; analogously in n dimensions.
Similar theorems in function spaces would permit one to assert the existence of
quasi-solutions of operator equations V(f) = /. A quasi-solution (for given
functionals) is then a function which possesses the same first n moments or the
same first n coefficients in its Fourier series as its transform under V. For each n there should exist such quasi-solutions.
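A crude numerical sketch of the plane statement (Python; the particular map, chosen so that it has no fixed point, and the search grid are illustrative):

```python
import numpy as np

def f(x, y):                              # a continuous map of the plane with no fixed point
    return x + 1.0 + 0.3 * np.sin(y)      # (f(x, y) = x is impossible here)

def g(x, y):
    return -y - 0.5 + 0.2 * np.cos(x)

# Grid search for a quasi-fixed point: |x'| = |x| and |y'| = |y|.
xs = np.linspace(-5.0, 5.0, 1001)
X, Y = np.meshgrid(xs, xs)
defect = (np.abs(f(X, Y)) - np.abs(X)) ** 2 + (np.abs(g(X, Y)) - np.abs(Y)) ** 2
i, j = np.unravel_index(np.argmin(defect), defect.shape)
print(X[i, j], Y[i, j], defect[i, j])     # defect nearly 0 at the quasi-fixed point
```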
In a random process "equivalent" to a given equation, the values of functionals
of the desired solution or, more generally, quasi-solutions, are obtained quite
automatically as the process proceeds. The convergence in probability of the
data, obtained during the process, to their true value may, in some cases, be
much more rapid than the convergence of the data describing the functions themselves. This will be in general the case for functionals which have the form of
integrals over the distributions.
VII
The role of "small" variations introduced in the operators which describe
physical processes is discussed in elementary cases in the theories of stability.
In the simplest cases one deals with the influence which variations of constants
have on the behavior of solutions, say, of linear differential equations. In many
purely mathematical theories one can conceive the problem of stability in a very
general way. One can, for example, study instead of functional equations, functional inequalities and ask the question whether the solutions of these inequalities
are, of necessity, close to the solutions of the corresponding equations. Perhaps
the simplest example would be given by the equation
T(x + y) = T(x) + T(y)
for all x, y which are elements of a vector space E, and the corresponding functional inequality:
|| S(x + y) − S(x) − S(y) || < ε
for all x, y.
A result of Hyers is that there exists a T satisfying the equation such that for
all x, we have then
|| T(x) − S(x) || < ε.
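A small sketch of Hyers' construction (Python; the approximately additive S and the bound ε are a toy example of my own): the additive T is obtained as the limit of S(2ⁿx)/2ⁿ.

```python
import math

eps = 0.6
def S(x):
    """A toy map additive up to eps: |S(x+y) - S(x) - S(y)| <= 0.6 for all x, y."""
    return 2.0 * x + 0.2 * math.sin(x)

def T(x, n=40):
    """Hyers' construction: T(x) = lim S(2^n x) / 2^n, an exactly additive map (here 2x)."""
    return S(2.0 ** n * x) / 2.0 ** n

for x in (0.3, -1.7, 4.9):
    print(abs(T(x) - S(x)) <= eps,            # T stays within eps of S
          abs(T(x + 1.1) - T(x) - T(1.1)))    # additivity defect of T: essentially 0
```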
Or, more generally, one could ask the question: given an ε-isomorphism F of a
metric group, is there always an actual isomorphism G within, say, k times ε
of the given F? Another example is the question of ε-isometric transformations
T, i.e., transformations T such that for all p, q:
| ρ(p, q) − ρ(T(p), T(q)) | < ε.
Here again one can show that such T differ only by k·ε from strictly isometric
transformations. To give still another example one can introduce a notion of
almost convex functions and almost convex sets. Again it is possible to show that
such objects differ little from strictly convex bodies which, one proves, will exist
in their vicinity.
All this is mentioned here because, in order to establish rigorously the comparison between random process models of physical problems and their classical
descriptions by analysis, mathematical theorems will be needed which will allow
us to estimate more precisely the influence of variations not merely of constants
but of the operators themselves.
In many mathematical theories it is natural to subject the definitions themselves to ε-variations. Thus, for example, the notion of the homeomorphic transformation can be replaced by a notion of a continuous transformation which is up
to ε a one-to-one transformation. Again one finds that many theorems about one-to-one transformations can be generalized to hold for the almost one-to-one case.
Little is known at present about solutions of functional inequalities. One needs,
of course, beyond theorems on stability, more precise information on the rapidity
of the convergence in probability.
VIII
In theories which would deal with actually infinite assemblies of points—the
probability point of view can become axiomatic and more fundamental rather
than only of the approximative character evident in the previous discussion.
Let us indicate as an example a purely schematic set-up of this sort. We want to
treat a dynamic system of an infinite number of mass points interacting with each
other. Imagine that on the infinite real axis we have put, with probability equal
to ½, on each of the integer points a material point of mass 1. That is to say, for
each integer we decided by a throw of a coin whether or not to put such a mass
point on it. Having made infinitely many such decisions, we shall obtain a distribution of points on the line. It can be denoted by a real number in binary development, e.g., the indices corresponding to ones give us, say, for odd places,
the non-negative integers where mass points are located, for the even indices of
ones, we obtain the location of the mass points on the negative part of the line.
Imagine that this binary number represents our system at the time T = 0.
Assume further that the mass points attract each other with forces proportional
to the inverse squares of the distances. (It is obvious that forces on each point
are well-defined at all times since the sum of the inverse squares of integers
converges absolutely.) Motions will now ensue. We propose to study properties of
the motion common to almost all initial conditions, or theorems valid for almost
all binary sequences (normal numbers in the sense of Borel) as representing
initial conditions. One may make the assumption that as two points collide
they will from then on stay together and form a point with a greater mass whose
motion will be determined by the preservation of the momentum. It is interesting
to note here that, because the total mass of the system is infinite, the various
formulations of mechanics which are equivalent to each other in the case of
finite systems cease to be so in this case. One can use, however, Newton's equations quite legitimately in our case. The interesting thing to notice is that the
behavior of our infinite system will not be obtainable as a limiting case of the
behavior of very large but finite systems approximating it. One shows, for
example, that the average density of the system will remain constant equal to
½ for all time. One can prove that collisions will lead to formations or condensations of arbitrarily high orders. For all time T there will be particles which
have not yet collided with another particle. On the other hand, given a particle,
the probability that it will collide at some time tends to 1. We might add that
one could treat similarly systems of points distributed on integer-valued lattice
points in the plane or in 3-dimensional space. The forces will not be determined
any more by absolute convergence, but in 2 and 3 dimensions one can show that
if we sum over squares or spheres the forces acting on a point from all the other
points in the spheres whose radii tend to infinity, the limits will exist for each
point with probability 1. That is, for almost every initial condition of the whole
system the force is defined everywhere. In a problem of this sort it is obvious that
the role of probability formulation is fundamental. Actually infinite systems of
this kind may be thought of, however, as a new kind of idealization of systems
already considered in present theories. This is so if we allow in advance for an
infinity of hidden parameters present in the physical system, and which are not
so far treated explicitly in the model. An important case in which the idealization
to an actual infinity of many degrees of freedom interacting with each other
seems to be useful is the recent theory of turbulence of Kolmogoroff, Onsager,
and Heisenberg.
An interesting field of application for models consisting of an infinite number of
interacting elements may exist in the recent theories of automata.1 A general
model, considered by von Neumann and the author, would be of the following
sort:
Given is an infinite lattice or graph of points, each with a finite number of
connections to certain of its "neighbors." Each point is capable of a finite number
of "states." The states of neighbors at time tn induce, in a specified manner, the
state of the point at time tn+1. This rule of transition is fixed deterministically
or, more generally, may involve partly "random" decisions.
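A minimal sketch of such a lattice of elements (Python; the particular rule, the sum of the four neighbors' states modulo the number of states, perturbed by occasional random flips, is an illustrative choice and not the model studied by von Neumann and the author):

```python
import numpy as np

n_states, size, steps, flip_prob = 3, 40, 60, 0.01
rng = np.random.default_rng(7)

grid = np.zeros((size, size), dtype=int)
grid[size // 2, size // 2] = 1                    # a finite "activated" region

for _ in range(steps):
    # the states of the four neighbors at time t_n induce the state at time t_{n+1}
    neighbors = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
                 np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
    grid = neighbors % n_states
    # the rule may also involve partly "random" decisions: flip a few cells at random
    flips = rng.random(grid.shape) < flip_prob
    grid = np.where(flips, rng.integers(0, n_states, grid.shape), grid)

print(np.bincount(grid.ravel(), minlength=n_states))   # how many points are in each state
```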
One can define now closed finite subsystems to be called automata or organisms.
They will be characterized by a periodic or almost periodic sequence of their
states as function of time and by the following "spatial" character: the state of
the neighbors of the "organism" has only a "weak" influence on the state of the
elements of the organism; the organism can, on the contrary, influence with full
generality the states of the neighboring points which are not part of other
organisms.
One aim of the theory is to establish the existence of subsystems which are
able to multiply, i.e., create in time other systems identical ("congruent") to
themselves.
As time proceeds, by discrete intervals, one will generate, starting from a
finite "activated" region, organisms of different types. One problem is again to
find the equilibrium ratios of the numbers of individual species, similarly to the
situation described in §IV. The generalization of Frobenius' theorem mentioned there gives one basis for the existence of limits of the ratios.
The existence of finite universal organisms forms one of the first problems of
such a theory. These would be closed systems able to generate arbitrarily large
(or "complicated") closed systems.
One should perhaps notice that any metamathematical theory has, to some
extent, formally a character of the above sort: one generates, by given rules,
from given classes of symbols, new such classes.
Mathematically, the simplest versions of such schemes would consist simply
of the study of iterates of infinite matrices, having nonzero elements in only a
finite number of terms in each row. The problems consist of finding the properties
of the finite submatrices appearing along the diagonal, as one iterates the matrix.
REFERENCES
1. M. D. DONSKER and M. KAC, A sampling method for determining the lowest eigenvalue
and the principal eigenfunction of Schrödinger's equation, Journal of Research of the National Bureau of Standards vol. 44 (1950) pp. 551-557.
2. C. J. EVERETT and S. ULAM, Los Alamos Report L. A. D. C. 533, 534, and 2532.
3. C. J. EVERETT and S. ULAM, Multiplicative systems, I, Proc. Nat. Acad. Sci. U. S. A. vol. 34 (1948) pp. 403-405.
4. C. J. EVERETT and S. ULAM, On an application of a correspondence between matrices over real algebras and
matrices of positive real numbers, Bull. Amer. Math. Soc. Abstract 56-1-96.
5. T. E. HARRIS, Some mathematical models for branching processes, RAND Corp. Report,
September, 1950.
¹ J. von Neumann lectures at the University of Illinois, December, 1949.
6. D. HAWKINS and S. ULAM, Los Alamos Report L. A. D. C. 265, 1944.
7. G. W. KING, Stochastic methods in quantum mechanics, A. D. Little Co. Report, February, 1950.
8. A. N. KOLMOGOROFF and B. A. SEVASTYANOV, The calculation of final probabilities for
branching random processes, C. R. (Doklady) Acad. Sci. URSS (N.S.) vol. 56 (1947) pp.
783-786.
9. N. METROPOLIS and S. ULAM, The Monte Carlo method, Journal of the American Statistical Association vol. 44 (1949) pp. 335-341.
10. J. VON NEUMANN and S. ULAM, Random ergodic theorems, Bull. Amer. Math. Soc.
Abstract 51-9-165.
11. J. C. OXTOBY and S. ULAM, Measure preserving homeomorphisms and metrical transitivity, Ann. of Math. vol. 42 (1941) pp. 874-920.
12. S. M. ULAM and J. VON NEUMANN, On combination of stochastic and deterministic
processes, Bull. Amer. Math. Soc. Abstract 53-11-403.
LOS ALAMOS SCIENTIFIC LABORATORY,
LOS ALAMOS, N. M., U. S. A.