Stability of First Order Ordinary Differential Equations with Colored Noise Forcing

Timothy Blass¹, L. A. Romero²

November 3, 2010

¹Department of Mathematics, University of Texas at Austin, 1 University Station C1200, Austin, TX 78712-0257 (USA), tblass@math.utexas.edu
²Computational Mathematics and Algorithms Department, Sandia National Laboratories, MS 1320, P.O. Box 5800, Albuquerque, NM 87123-1320, lromero@sandia.gov
³Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Abstract
We present a method for determining the stability of a class of stochastically forced ordinary differential equations, where the forcing term can
be obtained by passing white noise through a filter of arbitrarily high
degree. We use the Fokker-Planck equation to write a partial differential equation for the second moments, which we turn into an eigenvalue
problem for a second-order differential operator. The eigenvalues of this
operator determine the stability of the system. Inspired by Dirac’s creation and annihilation operator method, we develop “ladder” operators
to determine analytic expressions for the eigenvalues and eigenfunctions
of our operator.
1 Introduction
The original goal of this work was to develop a framework for analyzing the
stability of the stochastically forced Mathieu equation:
ẍ + γẋ + (ω₀² + εf(t))x = 0,   (1)
where f is a stochastic process, and the stability is determined by the boundedness of the second moment ⟨x²(t)⟩ [2, 7]. Here, ⟨·⟩ denotes the sample average.
We wanted to avoid heuristic methods and to consider cases where f(t) is a stochastic process with a realistic power spectral density. In particular, we did not want to assume that f is white noise. Hence we analyze
the case where f (t) is colored noise. However, in order to rigorously derive
a Fokker-Planck equation for a stochastic differential equation, the governing
equation must include only white noise [2]. We can achieve both goals of rigor
and realistic PSD by letting f be the output of a linear filter. That is, we define
f (t) as the output from solving
ṡ = Hs + ξ(t),
f(t) = aᵀs(t),   (2)

where

⟨ξ(t + τ)ξᵀ(t)⟩ = Bδ(τ),   (3)
Figure 1: S(ω) vs ω for α = 0.2, κ = 0.8, β = 1, σ = 1.
and a ∈ ℝⁿ, H and B are n × n matrices, B is symmetric positive semi-definite, and ξ(t) is a vector of white noises, meaning that if Wₜ is an n-dimensional Brownian motion, then dWₜ = ξ(t)dt in our notation. We refer to f as an n-th order filter because it is generated by an n-dimensional ODE system. We will not lose any generality in generating our filter if we assume that B is diagonal.
In Appendix A we prove the following theorem.
Theorem 1. As t → ∞, the function f(t) defined in Eqns. (2) approaches a stationary random signal with power spectral density given by

S(ω) = aᵀ𝕊(ω)a,   (4)

where

𝕊(ω) = (H + iωI)⁻¹ B (Hᵀ − iωI)⁻¹.   (5)
We were originally interested in equation (1) as a model for the dispersion
relation of a capillary gravity wave in a time-varying gravitational field arising
from random vertical motions of a container with a free surface (as in [10]).
Here f(t) represents the random fluctuations in acceleration. Since the Fourier transform of an acceleration should vanish at zero, along with its derivative, the power spectral density of a realistic process f should satisfy S(0) = S′(0) = 0.
For example, we can construct a two-dimensional filter using the system (2) that has the power spectral density

S(ω) = α²ω² / ((ω² + κ²)(ω² + β²)),  for  H = [ −κ, 0; α, −β ],  a = [ α; −β ].
An example profile for S(ω) is shown in Figure 1.
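A minimal numerical sketch of Theorem 1 for this example, assuming Python with numpy (the frequency values below are arbitrary choices, and the code is an illustration rather than part of the original analysis): it evaluates S(ω) = aᵀ𝕊(ω)a directly from (4)–(5) and compares it with the closed form above.

    import numpy as np

    # Evaluate S(w) = a^T (H + iwI)^{-1} B (H^T - iwI)^{-1} a, Eqns. (4)-(5),
    # for the two-dimensional example filter of the introduction.
    alpha, kappa, beta = 0.2, 0.8, 1.0
    H = np.array([[-kappa, 0.0], [alpha, -beta]])
    B = np.diag([1.0, 0.0])
    a = np.array([alpha, -beta])
    I2 = np.eye(2)

    def S(w):
        M = np.linalg.inv(H + 1j * w * I2) @ B @ np.linalg.inv(H.T - 1j * w * I2)
        return (a @ M @ a).real

    for w in (0.0, 0.5, 1.0, 2.0):
        closed = alpha**2 * w**2 / ((w**2 + kappa**2) * (w**2 + beta**2))
        print(w, S(w), closed)   # the last two columns agree; S(0) = 0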
By numerically computing the eigenvalues of a partial differential operator we successfully determined the stability of equation (1) with the second-order filter as a forcing term, and we plan to present this in a later paper. However, before
solving (1), we considered the simpler example of a first order equation, of the
form
ẋ = (−γ + εf(t))x,   (6)
with the same type of forcing term f (t). The insight that we gained from solving
this system helped us greatly simplify the solution to the stochastically forced
Mathieu equation. In analyzing this simpler problem, we realized that it had a
quite general analytic solution for the stability boundary, and the purpose of this
paper is to present our solution for this simpler problem in detail. We believe
it should be of interest to people working in the field of stochastic stability.
We begin by discussing our ideas for a first-order filter forcing a first-order
equation. Here we are interested in solving equation (6) where f (t) is obtained
using
ṡ = −rs + bn(t),
f(t) = s(t).   (7)
Here n(t) is white noise with ⟨n(t + τ)n(t)⟩ = δ(τ), and we assume r > 0 so that our filter is stable. In this case the filter f(t) no longer represents an acceleration. However, the value of ε for which solutions to (6) become unstable has an analytic expression. The Fokker-Planck equation (see Section 2) for the probability density function associated with (6) and (7) is
∂ₜP = (b²/2)∂²ₛP + r∂ₛ(sP) + ∂ₓ[(γ − εs)xP].   (8)
We determine the stability of (6) by the long-time behavior of the second moments of the solutions. If the system is linear and the forcing is by white noise, then the associated equations for the moments of the solutions are a simple system of ordinary differential equations (see [2, 7]). However, in the present case, the equations are nonlinear due to the term s(t)x in our system of differential equations, which eliminates this approach to our study of stability. Van Kampen has presented a heuristic approach to the case of colored noise [11]. We improve on this by developing a rigorous theory for colored noise forcing.
Our analysis considers the second moment in x as a function of s and t:
Mxx(s, t) = ∫_ℝ x² P(x, s, t) dx.   (9)
We derive a differential equation for Mxx(s, t) simply by multiplying (8) by x² and integrating over x. Assuming that the boundary terms vanish, we arrive at

∂ₜMxx = (b²/2)∂²ₛMxx + r∂ₛ(sMxx) − 2γMxx + 2εsMxx.   (10)
We look for Mxx(s, t) = e^{λt}M(s), so that (10) becomes an eigenvalue problem for the differential operator D, where

Dφ = (b²/2)∂²ₛφ + r∂ₛ(sφ) − 2γφ + 2εsφ.   (11)

In Appendix B we use standard techniques from the theory of differential equations to show that the eigenfunctions of D are given in terms of Hermite polynomials, and the eigenvalues are of the form {λⱼ = −jr − 2γ + 2ε²b²/r²}ⱼ≥₀. Using the Hermite polynomials as a basis was key to generalizing the analysis to more complicated systems, and was an important part of developing our numerical approach. This is explained in more detail in Section 6.
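The eigenvalues above are easy to probe numerically. The following sketch, assuming a finite-difference discretization of (11) on a truncated domain (the grid size, domain half-width, and parameter values are arbitrary choices), compares the largest computed eigenvalue with λ₀ = −2γ + 2ε²b²/r²:

    import numpy as np

    # Finite-difference sketch for
    # D phi = (b^2/2) phi'' + r (s phi)' - 2 gamma phi + 2 eps s phi  (Eq. (11))
    # on s in [-L, L] with zero boundary conditions.
    b, r, gamma, eps = 1.0, 1.0, 0.1, 0.1
    N, L = 600, 10.0
    s = np.linspace(-L, L, N)
    h = s[1] - s[0]

    I = np.eye(N)
    D2 = (np.diag(np.ones(N - 1), 1) - 2 * I + np.diag(np.ones(N - 1), -1)) / h**2
    D1 = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)

    D = 0.5 * b**2 * D2 + r * D1 @ np.diag(s) - 2 * gamma * I + 2 * eps * np.diag(s)
    print(max(np.linalg.eigvals(D).real))           # numerical top eigenvalue
    print(-2 * gamma + 2 * eps**2 * b**2 / r**2)    # analytic lambda_0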
The same technique with a second-order filter leads to a similar eigenvalue
problem, except D becomes a partial differential operator in two variables. In
the case of an n-th order filter, the equation (10) is an eigenvalue problem in n
variables. We originally thought that although these eigenvalue problems could
be solved numerically, the solutions would become intractable as the order of the
filter became large. However, the numerical solutions to the second-order filter
were quite simple and the eigenvalues had a similar form to those in the case of
a first-order filter (i.e., a base value plus integer multiples). Furthermore, the
eigenvalues were exactly quadratic in ε. The main purpose of this paper is to
explain this behavior and show that it applies generally to the case where the
first-order equation is forced by a general n-th order filter. In particular, we
derive an analytical expression for the stability boundary of equation (6), where
f (t) arises from a general nth order filter as in equation (2).
To outline our results, consider the case of an n-th order filter, given by (2),
forcing equation (6). In Section 2 we use the Fokker-Planck equation to derive
an equation for the second moment, Mxx (t, s1 , . . . , sn ), and let D denote the
corresponding partial differential operator in n variables, so the equation for
Mxx is ∂t Mxx = DMxx . Again letting Mxx (t, s1 , . . . , sn ) = eλt M (s1 , . . . , sn ),
we have the eigenvalue problem
λM = DM.   (12)
If there is a solution to (12) with λ > 0, then Mxx will grow exponentially with
time. We make the physically reasonable assumption that the eigenvalues of D do
not have arbitrarily large real parts. If this were not the case, then there would
be solutions of ∂t Mxx = DMxx that become arbitrarily large in an arbitrarily
small time interval. We find an analytic expression for the eigenvalues, λ, and
are thus able to predict the stability.
An inspection of our operator D shows that it can be written as (for example, see equation (11))

D = Σ_{i,j=1}^{2n} ½ dᵢⱼ LᵢLⱼ + γ,   (13)

where the dᵢⱼ are constants and the Lᵢ are all in a simple class of linear operators. In particular, Lᵢ = ∂_{sᵢ} for i = 1, …, n and Lᵢ = s_{i−n} for i = n + 1, …, 2n, and γ is a constant. In Section 2, Lemma 3 gives an explicit formula for calculating the matrix D whose coefficients are the dᵢⱼ. The main result of the paper is to derive
an analytic solution to (12) when D has the form (13) by generalizing Dirac’s
method for finding the eigenvalues and eigenfunctions of the harmonic oscillator
[5]. In particular, we find ladder operators, Xk , such that
[D, Xₖ] = μₖXₖ.   (14)

Here [·, ·] denotes the commutator. Xₖ is called a ladder operator because if Dφ = λφ then (DXₖ − XₖD)φ = μₖXₖφ, and hence D(Xₖφ) = (λ + μₖ)Xₖφ, so that Xₖφ is also an eigenfunction of D with eigenvalue λ + μₖ, provided that
that Xk φ is also an eigenfunction of D with eigenvalue λ + µk , provided that
Xₖφ is not identically zero. By assumption, the eigenvalues of D cannot have arbitrarily large real parts, and by successively applying the ladder operators we must eventually generate a null function.
In Section 2, Lemma 4 shows that we can determine Xₖ and μₖ as the solution of a (2n + 1)-dimensional eigenvalue problem

Tx = μx,  T = DA,   (15)
where D is a symmetric matrix, and A is an antisymmetric matrix. In Section
3, Lemma 5 shows that there are n raising operators Xₖ, which generate new eigenfunctions with an increase in the real part of the eigenvalue, and n lowering operators X₋ₖ that correspondingly decrease the real part of an eigenvalue. In
Section 4 we show that the properties of these ladder operators follow from basic theorems in linear algebra concerning the eigenvalue problem DAx = μx that are proved in Section 3. In particular, in Corollary 12 and Theorem 13 we show that we can find constants λ₀ and νₖ, k = 1, …, n, such that
D = Σ_{k>0} νₖX₋ₖXₖ + λ₀.   (16)
In Section 4 we argue that because the real parts of the eigenvalues are bounded above, there is a highest eigenfunction, which must satisfy Xₖφ = 0 for k = 1, …, n; otherwise one of the Xₖφ would give an eigenvalue with a larger real part. These n equations form an overdetermined system of first-order partial differential equations for the determination of the top eigenfunction. In Theorem 13 we show that this overdetermined system of partial differential equations is solvable. This reasoning also shows that λ₀ must be the eigenvalue of D with largest real part, which determines the stability.
As a final step in our analysis we ask if our analytical stability criterion can be expressed in terms of the asymptotic power spectral density S(ω) (as in Thm. 1) of the function f(t) coming out of our filter. We find that there is a simple relationship between the largest eigenvalue and the asymptotic power spectral density of f. In particular, the largest eigenvalue is given by

λ₀ = −2γ + 2S(0)ε².   (17)

This expression determines the stability for the second moments, but in Section 7 we give a similar expression that determines the stability for any moment.

Theorem 2. Consider the equation (6) where f(t) is generated by the nth order filter in equation (2), with asymptotic power spectral density S(ω) given by Thm. 1. The pth moment of x(t) is unstable if and only if ε² > 2γ/(pS(0)).
2 The Fokker-Planck Equation and the Eigenvalue Problem
In this section we derive the equation for the second x-moment of solutions to
a linear equation with a general n-th order filter from the associated Fokker-Planck equation. We then show how this partial differential equation can be
cast as a matrix eigenvalue problem for the ladder operators associated with the
moment equation.
Let H and B be n × n matrices, where the components of H are written as hᵢ,ⱼ, and B is diagonal and positive semi-definite with bᵢ as its ith diagonal entry. The system we will study is given by

ṡ = Hs + ξ(t),
ẋ = −γx + εaᵀs(t)x,   (18)

where a ∈ ℝⁿ, ε ≥ 0, and ξ is a vector of white noises, as described after equation (2). We also assume that H has a complete set of eigenvectors, and that the eigenvalues of H all have negative real part, so that the filter remains bounded. From this stochastic differential equation we can derive the associated Fokker-Planck equation for the probability density function P(s₁, …, sₙ, x, t).
The Fokker-Planck equation for (18) is

∂ₜP = Σᵢ₌₁ⁿ [ (bᵢ/2)∂²_{sᵢ}P − ∂_{sᵢ}((hᵢ,₁s₁ + hᵢ,₂s₂ + … + hᵢ,ₙsₙ)P) ] + ∂ₓ[(γx − εaᵀs x)P].   (19)
See [6], page 96 for how to derive this equation. In the notation there, (18) would be written as dX = HX dt + √B dW(t), where W(t) is an n-dimensional Brownian motion. We note that (19) is the same in both the Itô and Stratonovich interpretations because the matrix B is independent of s and x (see [6]).
Multiplying equation (19) by x² and integrating over all x, we get the equation for the second moment Mxx(s, t):

∂ₜMxx = DMxx,  Mxx ∈ L¹(ℝⁿ),   (20)
where
DMxx = Σᵢ₌₁ⁿ [ (bᵢ/2)∂²_{sᵢ}Mxx − ∂_{sᵢ}((hᵢ,₁s₁ + hᵢ,₂s₂ + … + hᵢ,ₙsₙ)Mxx) ] + (−2γ + 2εaᵀs)Mxx.   (21)
We take Mxx in L¹ so that ⟨x²(t)⟩ = ∫_{ℝⁿ} Mxx(s, t) ds is well-defined. At this point, it is clear that we can write down a similar equation for an x-moment of arbitrary order, that is, for the function Mₙ(s, t) = ∫_ℝ xⁿP(s, x, t) dx. Our analysis for the second moment can be applied in this general case as well, and this is discussed in Section 7. In particular, it can be shown that the second moment will become unstable for values of ε where the mean is still stable.

In all that follows, the operator D will refer to the differential operator in equation (21). If Mxx(s, t) = e^{λt}M(s) we arrive at λM = DM, just as in equation (12). The analysis of our system will use the operators Lᵢ defined below.
Definition 1. We define the operators Lᵢ, i = 1, …, 2n + 1, as follows:

Lᵢφ = ∂_{sᵢ}φ for i = 1, …, n,
Lᵢ₊ₙφ = sᵢφ for i = 1, …, n,   (22)
L₂ₙ₊₁φ = φ.
With the operators Li defined as above, we have the following lemma.
Lemma 3. The operator D in equation (21) can be expressed as

D = Σ_{i,j=1}^{2n+1} ½ dᵢⱼ LᵢLⱼ,   (23)

where the dᵢⱼ define a symmetric matrix D. In particular, the matrix D is given by

D = [ B, −H, 0; −Hᵀ, 0ₙ, 2εa; 0, 2εaᵀ, −4γ − tr(H) ],   (24)

where H, B, a, γ, and ε are as in equation (18).
Proof. Each of the terms in equation (21) can be written as LᵢLⱼ where Lᵢ and Lⱼ are operators as defined in equation (22). Hence, even if we do not require that D is symmetric, it is clear that D can be written as in equation (23). We now show that we can always choose the coefficients dᵢⱼ so they are symmetric. This follows from the fact that the choice of the coefficients dᵢⱼ is not unique. The terms involving hᵢⱼ in equation (21) all have the form ∂_{sᵢ}sⱼ, and we can write ∂_{sᵢ}sⱼ = ½∂_{sᵢ}sⱼ + ½sⱼ∂_{sᵢ} when i ≠ j, and ∂_{sᵢ}sᵢ = ½∂_{sᵢ}sᵢ + ½sᵢ∂_{sᵢ} + ½. From these two expressions we see that we can split each term hᵢⱼ∂_{sᵢ}sⱼ into a ∂_{sᵢ}sⱼ term and a sⱼ∂_{sᵢ} term, and possibly a constant term, where each term has the same coefficient hᵢⱼ/2. Furthermore, 2εaᵢsᵢ = εaᵢLᵢ₊ₙL₂ₙ₊₁ + εaᵢL₂ₙ₊₁Lᵢ₊ₙ, so we can represent D as in (24).
To study the stability of the system (18), we must determine the eigenvalues of equation (12) with the operator D defined as in Lemma 3. As mentioned in the introduction, our analysis of this equation depends on the construction of ladder operators Xₖ with [D, Xₖ] = μₖXₖ. These operators will also be expressed in terms of the Lᵢ, and we will write

Xₖ = Σ_{i=1}^{2n+1} x^k_i Lᵢ.   (25)

We write xᵏ for the vector of coefficients of Xₖ. Note that [Lᵢ, Lⱼ] = 0 unless |i − j| = n, and [Lᵢ, Lᵢ₊ₙ] = 1. We define the matrix A by

[Lᵢ, Lⱼ] = aᵢⱼ.   (26)

It is a simple exercise to see that the matrix A can be written as

A = [ 0ₙ, Iₙ, 0; −Iₙ, 0ₙ, 0; 0, 0, 0 ].   (27)
Lemma 4. With Xₖ defined as in equation (25), the equation [D, Xₖ] = μₖXₖ can be written as a matrix eigenvalue problem Txᵏ = μₖxᵏ, where T is a (2n + 1) × (2n + 1) matrix. T can be written as T = DA, where D is the symmetric matrix defined in equation (24) and A is the antisymmetric matrix defined in equation (27).
Proof. We compute an expression for [D, Xₖ] in terms of D and A:

[D, Xₖ] = Σ_{i,j,m} ½ dᵢⱼ x^k_m (LᵢLⱼLₘ − LₘLᵢLⱼ)
        = Σ_{i,j,m} ½ dᵢⱼ x^k_m (Lᵢ[Lⱼ, Lₘ] + [Lᵢ, Lₘ]Lⱼ)
        = Σ_{i,j,m} ½ dᵢⱼ x^k_m (Lᵢaⱼ,ₘ + aᵢ,ₘLⱼ)
        = Σ_{i,m} (½(D + Dᵀ)A)ᵢ,ₘ x^k_m Lᵢ.

For the equation [D, Xₖ] = μₖXₖ, this implies that we have

Σ_{i,m} (½(D + Dᵀ)A)ᵢ,ₘ x^k_m Lᵢ = μₖ Σᵢ x^k_i Lᵢ.

In matrix notation, this is just ½(D + Dᵀ)Axᵏ = μₖxᵏ. Since we can take D to be symmetric without loss of generality, we have that [D, Xₖ] = μₖXₖ is equivalent to Txᵏ = μₖxᵏ with T = DA.
This proof holds even if we do not assume that D is symmetric. In that case the analysis that follows would be done in terms of the symmetric matrix S = ½(D + Dᵀ) instead of D. Thus, it is only for convenience that we assume D is symmetric.
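As a small illustration of these objects, the following sketch (assuming the first-order filter (7), so n = 1 and the matrices are 3 × 3) builds D from (24) and A from (27), forms T = DA, and prints its eigenvalues; they come in the ± pairs, plus a zero, established in the next section.

    import numpy as np

    # D from (24) and A from (27) for the first-order filter (7):
    # H = [-r], B = [b^2], a = [1].
    r, b, gamma, eps = 1.0, 1.0, 0.1, 0.1
    D = np.array([[b**2, r,       0.0],
                  [r,    0.0,     2 * eps],
                  [0.0,  2 * eps, -4 * gamma + r]])   # corner: -4*gamma - tr(H)
    A = np.array([[0.0,  1.0, 0.0],
                  [-1.0, 0.0, 0.0],
                  [0.0,  0.0, 0.0]])
    T = D @ A
    print(np.linalg.eigvals(T))   # approximately {r, -r, 0}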
3 General Properties of Tx = μx
In this section we collect some useful properties of solutions to the eigenvalue
problem
Tx = DAx = μx,   (28)
where D is symmetric, A is antisymmetric, and both D and A are real. The
lemmas we prove in this section involve standard linear algebraic arguments. In
the next section we show that the relevant properties of the ladder operators Xk
are almost immediate consequences of these lemmas. It should be emphasized
that the lemmas we prove in this section do not rely on any particular form for
the matrices D and A, but only on the fact that they are respectively symmetric
and anti-symmetric.
We will use the notation ⟨·, ·⟩ for the inner product on ℝ²ⁿ⁺¹.
Lemma 5. If μ is an eigenvalue of T, then so is −μ. If y is an adjoint eigenvector associated with μ, then Dy is an eigenvector of T with associated eigenvalue −μ.

Proof. Let T be a (2n + 1) × (2n + 1) matrix with simple eigenvalues. Since T is real, if μ is an eigenvalue of T, then it is also an eigenvalue of its adjoint. Let y be an eigenvector of Tᵀ with eigenvalue μ. Since Tᵀ = −AD we have ADy = −μy. Left-multiplying this by D we arrive at DADy = −μDy. Hence Dy is an eigenvector of T with eigenvalue −μ.
Definition 2. We arrange the eigenvalues μₖ, k = −n, …, n, of T so that for k > 0 the real parts are greater than zero, and so that μ₋ₖ = −μₖ. The corresponding eigenvectors and adjoint eigenvectors xᵏ and yᵏ are normalized so that ⟨yⁱ, xʲ⟩ = δᵢⱼ. We define normalization constants νₖ so that Dyᵏ = νₖx⁻ᵏ.

From the results of Lemma 5 and the fact that T has an odd number of eigenvalues, 0 must be an eigenvalue. We can write the eigenvalues as 0, μ±₁, …, μ±ₙ and the eigenvectors as x⁰, x^{±1}, …, x^{±n}, with the convention that μₖ has positive real part for k = 1, …, n.
Lemma 6. If the adjoint eigenvectors yʲ are normalized so that ⟨xⁱ, yʲ⟩ = δᵢ,ⱼ for i, j = −n, …, n, then ⟨xⁱ, Axʲ⟩ = 0, and ⟨xⁱ, Ax⁻ʲ⟩ = (μ₋ⱼ/νⱼ)δᵢ,ⱼ, for i, j ≥ 0.

Proof. Consider ⟨xⁱ, Axʲ⟩ for i, j ≥ 0. By Lemma 5 we have xʲ = (1/ν₋ⱼ)Dy⁻ʲ, and Tᵀy⁻ʲ = μ₋ⱼy⁻ʲ by definition. Thus

⟨xⁱ, Axʲ⟩ = ⟨xⁱ, (1/ν₋ⱼ)ADy⁻ʲ⟩ = (1/ν₋ⱼ)⟨xⁱ, −Tᵀy⁻ʲ⟩ = (−μ₋ⱼ/ν₋ⱼ)⟨xⁱ, y⁻ʲ⟩ = 0.

Similarly, for i, j ≥ 0, ⟨xⁱ, Ax⁻ʲ⟩ = (−μⱼ/νⱼ)⟨xⁱ, yʲ⟩ = (μ₋ⱼ/νⱼ)δᵢ,ⱼ.
Lemma 7. If the eigenvectors are normalized so that ⟨xⁱ, yʲ⟩ = δᵢ,ⱼ for −n ≤ i, j ≤ n, then Σ_{k=−n}^{n} x^k_i y^k_j = δᵢⱼ.

Proof. If we define the matrices Y = [y⁻ⁿ, …, yⁿ] and X = [x⁻ⁿ, …, xⁿ], then XᵀY = I₂ₙ₊₁ because (xⁱ)ᵀyʲ = δᵢⱼ for −n ≤ i, j ≤ n. But this means YXᵀ = I₂ₙ₊₁ as well, and writing out the components of YXᵀ = I₂ₙ₊₁ gives Σ_{k=−n}^{n} x^k_i y^k_j = δᵢⱼ.
Lemma 8. νₖ = ν₋ₖ for each k ≥ 0.

Proof. Each νₖ was defined by Dyᵏ = νₖx⁻ᵏ, with the eigenvectors normalized so that ⟨xⁱ, yʲ⟩ = δᵢⱼ. Thus Dyᵏ = νₖx⁻ᵏ and Dy⁻ᵏ = ν₋ₖxᵏ. If we take the inner product of the first expression with y⁻ᵏ and of the second expression with yᵏ, then we get

⟨y⁻ᵏ, Dyᵏ⟩ = νₖ⟨y⁻ᵏ, x⁻ᵏ⟩ = νₖ,
⟨yᵏ, Dy⁻ᵏ⟩ = ν₋ₖ⟨yᵏ, xᵏ⟩ = ν₋ₖ.

But D is symmetric, so ⟨yᵏ, Dy⁻ᵏ⟩ = ⟨Dyᵏ, y⁻ᵏ⟩ = ⟨y⁻ᵏ, Dyᵏ⟩. Hence, νₖ = ν₋ₖ.
4 Properties of D and the X±ₖ
We now apply the results of Section 3 to D and its ladder operators Xₖ. The matrix T = DA from Lemma 4 satisfies the assumptions of Lemmas 5 and 6. Hence, for each eigenvector xᵏ of T with eigenvalue μₖ, we have a corresponding eigenvalue μ₋ₖ = −μₖ and eigenvector x⁻ᵏ. The ladder operator associated with xᵏ is written as Xₖ, and that associated with x⁻ᵏ is written as X₋ₖ.
The notion of a ladder operator is based on the following lemma.

Lemma 9. Suppose Xₖ is a ladder operator such that [D, Xₖ] = μₖXₖ. Let φ be an eigenfunction of D with eigenvalue λ. Then either Xₖφ = 0, or Xₖφ is an eigenfunction of D with eigenvalue λ + μₖ.

Proof. We have DXₖφ − XₖDφ = μₖXₖφ. Since φ is an eigenfunction of D, this gives us DXₖφ = (λ + μₖ)Xₖφ. The lemma follows from this.
Expressing D in terms of the X±ₖ will be very useful in determining the eigenvalue of D with largest real part.

Lemma 10. D = Σ_{i,j=1}^{2n+1} ½dᵢ,ⱼLᵢLⱼ can also be written as D = Σ_{k=−n}^{n} (νₖ/2)X₋ₖXₖ.

Proof. For each k = −n, …, n we have (νₖ/2)X₋ₖXₖ = Σ_{p,m=1}^{2n+1} (νₖ/2) x^{−k}_m x^k_p LₘLₚ. Without loss of generality, we can assume the matrix D is symmetric, as shown in Lemma 3. Thus, for each k, Dyᵏ = νₖx⁻ᵏ, which follows from Lemma 5. Hence x^{−k}_m = (1/νₖ) Σ_{q=1}^{2n+1} dₘ,q y^k_q, so if we replace the term x^{−k}_m in the above expression for (νₖ/2)X₋ₖXₖ, and sum over k, we get

Σ_{k=−n}^{n} (νₖ/2)X₋ₖXₖ = Σ_{k=−n}^{n} Σ_{p,m,q=1}^{2n+1} (νₖ/2)(1/νₖ) dₘ,q y^k_q x^k_p LₘLₚ = Σ_{p,m,q=1}^{2n+1} ½dₘ,q LₘLₚ Σ_{k=−n}^{n} y^k_q x^k_p.

Lemma 7 states that Σ_{k=−n}^{n} y^k_q x^k_p = δ_{qp}, so

Σ_{k=−n}^{n} (νₖ/2)X₋ₖXₖ = Σ_{p,m,q=1}^{2n+1} ½dₘ,q δ_{qp} LₘLₚ = Σ_{p,m=1}^{2n+1} ½dₘ,p LₘLₚ = D.
We would like to write D as a sum of the X₋ₖXₖ with k > 0 only. To do this, we first find the commutator of the ladder operators.

Lemma 11. For i, j ≥ 1 we have [Xᵢ, Xⱼ] = 0 and [Xᵢ, X₋ⱼ] = −(μᵢ/νᵢ)δᵢⱼ.

Proof. Recall that A was defined as having coefficients aₘₚ = [Lₘ, Lₚ]. Writing out [Xᵢ, Xⱼ] in terms of the Lₘ we have

[Xᵢ, Xⱼ] = Σ_{m,p=1}^{2n+1} x^i_m x^j_p [Lₘ, Lₚ] = Σ_{m,p=1}^{2n+1} x^i_m x^j_p aₘₚ = (xⁱ)ᵀAxʲ = ⟨xⁱ, Axʲ⟩ = 0,

by the result of Lemma 6. Similarly, [Xᵢ, X₋ⱼ] = ⟨xⁱ, Ax⁻ʲ⟩ = (μ₋ⱼ/νⱼ)δᵢ,ⱼ = −(μᵢ/νᵢ)δᵢⱼ.
Corollary 12. We have the identity D = Σ_{k=1}^{n} (νₖX₋ₖXₖ − ½μₖ) + ½ν₀.

Proof. From Lemma 10 we know D = Σ_{k=−n}^{n} (νₖ/2)X₋ₖXₖ. For each k > 0, we can write (ν₋ₖ/2)XₖX₋ₖ = (νₖ/2)X₋ₖXₖ − μₖ/2, by the result of Lemma 11 and the fact that νₖ = ν₋ₖ (Lemma 8). The k = 0 term contributes the constant ½ν₀, since x⁰ = (0, …, 0, 1)ᵀ (see (33)), so that X₀ is the identity operator. Hence,

D = Σ_{k=1}^{n} (νₖX₋ₖXₖ − ½μₖ) + ½ν₀.
The form of D in Corollary 12 will lead to an expression for the largest eigenvalue of D, and therefore determine the stability of (18).

Theorem 13. The eigenvalue of D with largest real part is

λ₀ = ½ν₀ − ½ Σ_{k=1}^{n} μₖ,   (29)

where ν₀ is defined in Definition 2, the μₖ are the eigenvalues of T = DA with positive real part, and D, A are defined in (24) and (27). If Dφ₀ = λ₀φ₀, then φ₀ is a solution to the overdetermined system

Xₖφ₀ = 0, for all k = 1, 2, …, n.   (30)
Proof. The n equations (30) are solvable because the Xₖ are in involution; that is, for k, j = 1, …, n, [Xₖ, Xⱼ] = Σ_{i=1}^{n} cᵢⱼₖXᵢ for some constants cᵢⱼₖ. Indeed, as established in Lemma 11, [Xₖ, Xⱼ] = 0 for k, j = 1, …, n, so the equations Xₖφ₀ = 0, for k = 1, 2, …, n, are simultaneously solvable by the Frobenius Theorem (see [3], [1]). Let φ₀ be a solution of (30). Writing D in the form given in Corollary 12, we immediately see that Dφ₀ = (½ν₀ − ½Σ_{k=1}^{n}μₖ)φ₀, so φ₀ is an eigenfunction of D.

By assumption, the eigenvalues of D have real parts that are bounded above, so let φ be an eigenfunction of D whose eigenvalue λ₀ has the largest possible real part. Then Xₖφ ≡ 0 for k ≥ 1; otherwise Xₖφ would be an eigenfunction with eigenvalue λ₀ + μₖ, as explained in the paragraph following equation (14), and λ₀ + μₖ would have a larger real part than λ₀ because Re(μₖ) > 0 for k ≥ 1, which is a contradiction. Hence, φ solves (30), and by the reasoning in the previous paragraph, λ₀ = ½ν₀ − ½Σ_{k=1}^{n}μₖ.
We can derive a simple formula for ν₀ in terms of the matrix D. D already has the block form given in equation (24), but to simplify notation in the following lemma we will write

D = [ D₀, w; wᵀ, κ ],   (31)

where w ∈ ℝ²ⁿ, κ ∈ ℝ, and D₀ is a (2n) × (2n) matrix.

Lemma 14. If D₀ is invertible, then

ν₀ = κ − wᵀD₀⁻¹w,   (32)

with D₀, w, and κ as defined in (31).
Proof. For the normalized eigenvectors x⁰ and y⁰, ν₀ is the constant satisfying Dy⁰ = ν₀x⁰. Note that x⁰ and y⁰ are real, because T is real and they correspond to the eigenvalue 0. From the equations DAx⁰ = 0 and ADy⁰ = 0 it is easy to see that

x⁰ = (0, …, 0, 1)ᵀ, and y⁰ = [ −D₀⁻¹w; 1 ].   (33)

From this, we see that

Dy⁰ = [ 0; −wᵀD₀⁻¹w + κ ],

hence ν₀ = κ − wᵀD₀⁻¹w.
5 Computing the Eigenvalues and Eigenfunctions

The results in the last section did not require D and A to have the particular forms given in (24) and (27). Taking advantage of those forms will allow us to write down a formula for λ₀ in terms of H, B, a, and the eigenvalues of H. The blocks comprising D₀ in (31) are given by

D₀ = [ B, −H; −Hᵀ, 0ₙ ],  w = [ 0; 2εa ],   (34)

and κ = −4γ − tr(H).
The following lemma shows that the eigenvalues of T are known once we know the eigenvalues of H.

Lemma 15. For each k > 0, −μₖ is an eigenvalue of H, where μₖ is an eigenvalue of T with positive real part.

Proof. Let x ∈ ℝ²ⁿ⁺¹ solve Tx = DAx = μx. If we write x = (u; v; p), where u, v ∈ ℝⁿ and p ∈ ℝ, then DAx = μx becomes

[ H, B, 0; 0ₙ, −Hᵀ, 0; −2εaᵀ, 0, 0 ] [ u; v; p ] = μ [ u; v; p ].   (35)

If μ ≠ 0 we have the equations

Hu + Bv = μu,
−Hᵀv = μv,   (36)
−2εaᵀu = μp.

Hence, if v is not the zero vector, we must have that −μ is an eigenvalue of Hᵀ, and therefore of H. The eigenvalues of H were assumed to have negative real parts, so it must be that −μ = −μₖ for some k > 0, because μ is an eigenvalue of T. Thus −μₖ is an eigenvalue of H. Similarly, if v = 0, then the first equation in (36) becomes Hu = μu. Thus μ is an eigenvalue of H, so it has negative real part, and μ = μ₋ₖ = −μₖ for some k > 0.
From this result we can show a relationship between the νₖ from Definition 2 and the μₖ.

Corollary 16. For k = 1, 2, …, n, νₖ = μₖ.

Proof. Let k > 0, let xᵏ be an eigenvector of T with eigenvalue μₖ, and let yᵏ be its normalized adjoint eigenvector as in Definition 2. Then yᵏ is an eigenvector of Tᵀ with eigenvalue μₖ, and x⁻ᵏ is an eigenvector of T with eigenvalue −μₖ. We know that −μₖ is an eigenvalue of H by Lemma 15, so let zᵏ be an eigenvector of H with eigenvalue −μₖ. From the block form of T in (35) we see that yᵏ and x⁻ᵏ have the block forms

yᵏ = [ 0; zᵏ; 0 ],  x⁻ᵏ = [ zᵏ; 0; (2ε/μₖ)aᵀzᵏ ].

From the form of D given in (24) we have

Dyᵏ = [ −Hzᵏ; 0; 2εaᵀzᵏ ] = [ μₖzᵏ; 0; μₖ · (2ε/μₖ)aᵀzᵏ ] = μₖx⁻ᵏ,

but Dyᵏ = νₖx⁻ᵏ by the definition of νₖ; thus νₖ = μₖ.
Lemma 17. λ₀ = −2γ + 2ε²aᵀH⁻¹BH⁻ᵀa.

Proof. By Lemma 15, tr(H) = −Σ_{k=1}^{n}μₖ. Combining this with the results of Lemma 14 and Theorem 13,

λ₀ = ½ν₀ − ½Σ_{k=1}^{n}μₖ = ½(κ − wᵀD₀⁻¹w) − ½Σ_{k=1}^{n}μₖ   (37)
   = −2γ − ½tr(H) − ½wᵀD₀⁻¹w − ½Σ_{k=1}^{n}μₖ   (38)
   = −2γ − ½wᵀD₀⁻¹w.

From equation (34) we see that

D₀⁻¹ = [ 0ₙ, −H⁻ᵀ; −H⁻¹, −H⁻¹BH⁻ᵀ ].

Because wᵀ = (0, …, 0, 2εaᵀ), we have −½wᵀD₀⁻¹w = 2ε²aᵀH⁻¹BH⁻ᵀa, hence λ₀ = −2γ + 2ε²aᵀH⁻¹BH⁻ᵀa.
Combining the results of Lemma 17 and Theorem 1, we can write λ₀ in terms of the power spectral density of the filter.

Theorem 18. Let S(ω) be the power spectral density of the n-th order filter f(t) as given in equation (2). Then the largest eigenvalue of D, defined in (21), is

λ₀ = −2γ + 2S(0)ε².

The stability boundary for (18) is given by solutions to λ₀ = 0.
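The chain Lemma 14 → Theorem 13 → Theorem 18 can be verified numerically. The sketch below assumes the second-order filter of Section 6.1 with the Table 1 parameters (an arbitrary but convenient choice) and compares λ₀ computed from the matrix problem T = DA with the closed form of Theorem 18; at ε = 1/60 both should vanish to rounding.

    import numpy as np

    # Second-order filter of Section 6.1 with the Table 1 parameters.
    alpha, kappa, beta, gamma = 0.2, 0.8, 1.0, 0.01
    a = np.array([0.2, -1.0])
    eps = 1.0 / 60                    # the predicted stability boundary
    H = np.array([[-kappa, 0.0], [beta, -alpha]])
    B = np.diag([1.0, 0.0])
    n = 2

    # D from (24), A from (27), T = DA.
    D = np.block([[B, -H, np.zeros((n, 1))],
                  [-H.T, np.zeros((n, n)), 2 * eps * a[:, None]],
                  [np.zeros((1, n)), 2 * eps * a[None, :],
                   np.array([[-4 * gamma - np.trace(H)]])]])
    A = np.block([[np.zeros((n, n)), np.eye(n), np.zeros((n, 1))],
                  [-np.eye(n), np.zeros((n, n)), np.zeros((n, 1))],
                  [np.zeros((1, 2 * n + 1))]])
    T = D @ A

    mu = np.linalg.eigvals(T)
    mu_pos = mu[mu.real > 1e-12]                   # eigenvalues with positive real part
    D0, w, kap = D[:2 * n, :2 * n], D[:2 * n, 2 * n], D[2 * n, 2 * n]
    nu0 = kap - w @ np.linalg.solve(D0, w)         # Lemma 14
    lam0 = 0.5 * nu0 - 0.5 * mu_pos.sum().real     # Theorem 13
    S0 = a @ np.linalg.inv(H) @ B @ np.linalg.inv(H.T) @ a   # S(0) from (5)
    print(lam0, -2 * gamma + 2 * eps**2 * S0)      # both ~0 at eps = 1/60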
To find the eigenfunctions of D we can look for φ₀, and generate the other eigenfunctions by applying the X₋ₖ operators. We will use the n equations from Theorem 13 to determine φ₀, which will be an exponential of a quadratic polynomial in s₁, …, sₙ.
Lemma 19. There is a real symmetric matrix Γ and a real vector c such that

φ₀ = e^{½sᵀΓs + cᵀs}.   (39)

Proof. If φ₀ has the form in (39), then ∇φ₀ = (Γs + c)φ₀. If we write xᵏ as xᵏ = (uᵏ; vᵏ; pₖ), then Xₖφ₀ = 0 is just

(uᵏ)ᵀ(Γs + c) + (vᵏ)ᵀs + pₖ = 0.

So for each k > 0, (uᵏ)ᵀΓ = −(vᵏ)ᵀ and (uᵏ)ᵀc = −pₖ. We let U and V be the matrices with columns uᵏ and vᵏ, respectively, and let p ∈ ℝⁿ have components (p)ₖ = pₖ. Then the equations for Γ and c become

UᵀΓ = −Vᵀ, and Uᵀc = −p.   (40)

Hence, Γ = −U⁻ᵀVᵀ and c = −U⁻ᵀp.

To see that Γ is symmetric, recall that (xᵏ)ᵀAxʲ = 0 for each j, k > 0, by Lemma 6. Thus (uᵏ)ᵀvʲ = (vᵏ)ᵀuʲ for j, k > 0, which we can write as UᵀV = VᵀU. Therefore U⁻ᵀVᵀ = VU⁻¹, so Γ is symmetric.
6 Numerical Methods
Since we have an analytic expression for the stability boundary of (18), we do not
need numerics to study this problem. However, we present a numerical method
for determining this boundary for two reasons. First, it is a nice way to verify
our solution. Second, the method we develop generalizes to more complicated
systems, such as the Mathieu equation (1), where an analytic expression for the
stability boundary is not available. The rapid convergence of this numerical method in our case helps us gain confidence when applying it to a case where an analytical solution is not known.
We present two different types of numerical confirmation of our results. In
the first we numerically compute the eigenvalues of the operator D in equation
(21). The numerical scheme we present easily generalizes to the case where we
replace our first-order equation (6) by a higher-dimensional equation. The second confirmation of our results comes from numerically integrating
the stochastic differential equation as an initial value problem, and taking an
ensemble average over many different stochastic integrations.
6.1 Numerical Computation of the Eigenvalues
We limit ourselves to the case of a second-order filter given by (2), where H, B, and a are

H = [ −κ, 0; β, −α ],  B = [ 1, 0; 0, 0 ],  a = [ a₁; a₂ ],   (41)

where β, a₁, a₂ ∈ ℝ and κ, α > 0.
Our goal is to determine the eigenvalues of the operator in equation (21) for our specific case, by numerically solving for the eigenvalues. To do this, we will take moments in the variable s₂. The equation for the jth moment in s₂ of M(s₁, s₂) depends on the (j − 1)st and (j + 1)st moments. This will give us an eigenvalue problem involving an infinite-dimensional system of ordinary differential equations in s₁. In particular, if Mⱼ is the jth moment in s₂ of Mxx, we get

λMⱼ = ½∂²_{s₁}Mⱼ + κ∂_{s₁}(s₁Mⱼ) − jαMⱼ + jβs₁Mⱼ₋₁ + 2εa₁s₁Mⱼ + 2εa₂Mⱼ₊₁ − 2γMⱼ   (42)

for j ≥ 0. We turn this into a finite-dimensional problem by expanding each of the moments Mⱼ in terms of a finite sum of N₂ Hermite polynomials, and by writing down the equations for only a finite number N₁ of the moments. This gives us a finite-dimensional eigenvalue problem

λw = Qw.   (43)
Table 1 shows the errors in the numerically computed eigenvalues as compared to the analytical eigenvalues. We see that we are getting rapid convergence to the analytical answers as we increase N₁ and N₂. This both validates our analytical answers and suggests that this is a robust numerical technique for computing the eigenvalues in other situations.
ε       N₁ = 7, N₂ = 5   N₁ = 11, N₂ = 5   N₁ = 15, N₂ = 7   λ₀
1/65    1.08 × 10⁻⁶      2.72 × 10⁻¹⁰      7.95 × 10⁻¹⁵      −2.96 × 10⁻³
1/60    2.04 × 10⁻⁶      6.56 × 10⁻¹⁰      2.99 × 10⁻¹⁴      0
1/55    4.04 × 10⁻⁶      1.72 × 10⁻⁹       1.18 × 10⁻¹³      3.80 × 10⁻³

Table 1: Values of the error in computing λ₀ for different truncation lengths N₁, N₂, and for three values of ε. All other parameters are fixed: α = 0.2, κ = 0.8, β = 1, γ = 0.01, a₁ = 0.2, a₂ = −1.
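One way to realize (42)–(43) in code is sketched below; as an assumption it discretizes s₁ by finite differences on a truncated interval instead of the Hermite expansion used for Table 1, so the attainable accuracy is set by the grid rather than by N₂. With the Table 1 parameters the largest eigenvalue should come out close to λ₀.

    import numpy as np

    # Truncated eigenvalue problem (43) assembled from the moment recursion (42),
    # with finite differences in s1 (the paper expands in Hermite polynomials).
    alpha, kappa, beta, gamma = 0.2, 0.8, 1.0, 0.01
    a1, a2, eps = 0.2, -1.0, 1.0 / 55
    N1 = 10                       # number of s2-moments kept
    N, L = 121, 8.0               # grid points and half-width in s1
    s1 = np.linspace(-L, L, N)
    h = s1[1] - s1[0]

    I = np.eye(N)
    D2 = (np.diag(np.ones(N - 1), 1) - 2 * I + np.diag(np.ones(N - 1), -1)) / h**2
    D1 = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)
    S = np.diag(s1)
    base = 0.5 * D2 + kappa * D1 @ S + 2 * eps * a1 * S - 2 * gamma * I

    Q = np.zeros((N1 * N, N1 * N))
    for j in range(N1):
        Q[j*N:(j+1)*N, j*N:(j+1)*N] = base - j * alpha * I
        if j > 0:                 # coupling to the (j-1)st moment
            Q[j*N:(j+1)*N, (j-1)*N:j*N] = j * beta * S
        if j < N1 - 1:            # coupling to the (j+1)st moment
            Q[j*N:(j+1)*N, (j+1)*N:(j+2)*N] = 2 * eps * a2 * I

    print(max(np.linalg.eigvals(Q).real))   # close to 3.80e-3 for eps = 1/55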
6.2 Comparison to Stochastic Integrations
For a two-dimensional filter, the solution to the Fokker-Planck equation gives us the probability density P(s₁, s₂, x, t) for the system being in the state (s₁, s₂, x) at time t. We can define the distribution

P₂(s₁, s₂, t) = ∫_{−∞}^{∞} x²P(s₁, s₂, x, t) dx   (44)

for the second moment in x.
If we have deterministic initial conditions

s₁(0) = s₁₀,   (45)
s₂(0) = s₂₀,   (46)
x(0) = x₀,   (47)

then we have

P(s₁, s₂, x, 0) = δ(s₁ − s₁₀)δ(s₂ − s₂₀)δ(x − x₀)   (48)

and

P₂(s₁, s₂, 0) = x₀²δ(s₁ − s₁₀)δ(s₂ − s₂₀).   (49)
Once we know the eigenvalues of the operator D in equation (21), we can express the solution P₂(s₁, s₂, t) in terms of these. In particular we have

P₂(s₁, s₂, t) = Σ_{k=0}^{∞} aₖe^{λₖt}φₖ(s₁, s₂),   (50)

where the λₖ are the eigenvalues of D, and the φₖ(s₁, s₂) are the eigenfunctions. If ψₖ(s₁, s₂) are the adjoint eigenfunctions, normalized so that

∫_{−∞}^{∞} ∫_{−∞}^{∞} φₖ(s₁, s₂)ψ̄ₖ(s₁, s₂) ds₁ ds₂ = 1,   (51)

then we can compute the coefficients aₖ by integrating the initial condition for P₂(s₁, s₂, 0) against the adjoint eigenfunctions ψₖ(s₁, s₂). For the case of deterministic initial conditions we get

aₖ = x₀²ψ̄ₖ(s₁₀, s₂₀).   (52)
If we define the constants

χₖ = ∫_{−∞}^{∞} ∫_{−∞}^{∞} φₖ(s₁, s₂) ds₁ ds₂,   (53)

we see that the expected value of the second moment in x as a function of time is given by

Mxx(t) = Σ_{k=0}^{∞} ψ̄ₖ(s₁₀, s₂₀)χₖe^{λₖt}.   (54)
We could compute the ψₖ(s₁₀, s₂₀) and χₖ using our theory of ladder operators. However, in this case we computed these quantities from our numerical solution to the eigenvalue problem. We will not give the details of how this is done here, but it is a fairly straightforward process to extract this information from the eigenvalues and eigenvectors of the eigenvalue problem in equation (43).
Equation (54) gives us an expression for computing the expected value of the
second moment in x as a function of time. Assuming the eigenvalues are ordered
so that the eigenvalues with the largest real part have the smallest indices, for
large values of t we get accurate answers using only the first term in this series,
and for moderate times it is only necessary to keep a few terms in this series.
We can also compute the expected value of the second moment by integrating
the stochastic equations for many different realizations of the stochastic function
f (t). In particular, we integrate the equations using a second-weak-order scheme
as discussed in detail in [8], [4].
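A cruder but self-contained alternative is sketched below: it assumes a plain Euler-Maruyama integration of (18) — weak order one, rather than the second-weak-order scheme of [8], [4] — and a smaller ensemble, so its output will be noisier than the solid lines of Figure 2.

    import numpy as np

    # Euler-Maruyama ensemble estimate of <x^2(t)> for (18) with the
    # second-order filter (41); parameters as in Figure 2(a), eps = 1/50.
    rng = np.random.default_rng(0)
    alpha, kappa, beta, gamma = 0.2, 0.8, 1.0, 0.01
    a = np.array([0.2, -1.0])
    eps = 1.0 / 50
    H = np.array([[-kappa, 0.0], [beta, -alpha]])
    sqrt_b = np.array([1.0, 0.0])      # componentwise sqrt of diag(B)

    dt, T, paths = 1e-3, 10.0, 5000    # smaller ensemble than the 45,000 of Figure 2
    s = np.zeros((paths, 2))           # zero initial conditions for the filter
    x = np.ones(paths)                 # x(0) = 1
    for _ in range(int(T / dt)):
        dW = rng.standard_normal((paths, 2)) * np.sqrt(dt)
        s = s + (s @ H.T) * dt + dW * sqrt_b
        x = x * (1.0 + (-gamma + eps * (s @ a)) * dt)
    print(np.mean(x**2))               # estimate of <x^2(T)>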
Two examples are shown in Figures 2(a) and 2(b) for different values of the parameter ε. The solid lines are the results of the stochastic integrator plotted against t ∈ [0, T] with T = 10, and the dotted lines are the results of the truncated initial value problem against t. In each case, the truncation lengths for the number of moments and the number of Hermite polynomials were N₁ = N₂ = 8. Only six eigenvalues were used to express Mxx(t) as in (54). As expected, the approximations are off considerably at t = 0, but quickly approach the solution as t increases.

[Figure 2 here, with panels (a) ε = 1/50 and (b) ε = 1/10.] Figure 2: The horizontal axis is time, t ∈ [0, 10]. The parameter values on both plots are α = 0.2, κ = 0.8, β = 1, γ = 0.01, a₁ = 0.2, a₂ = −1. The solid line is the result of the stochastic integrator, using 45,000 sample paths. The dotted line is the result of the initial value problem with N₁ = N₂ = 8 and the six largest eigenvalues from (54).

The critical value of ε for these parameter values is ε = 1/60, so in fact the solution is unstable in both cases. This is easily seen in Figure 2(b), but is not apparent in Figure 2(a). If the analytic expression for the eigenvalues of D were unknown to us, we might think the ε = 1/50 case is stable. To see the instability using the stochastic integrator, one would have to integrate over a much longer time interval. However, when solving the initial value problem, we find the largest eigenvalue of Q, and hence of D, which in the case ε = 1/50 is λ₀ ≈ 0.0088, so we would know immediately that the solutions are unstable for ε = 1/50. In more complicated systems, like (1), the analytic expression for the eigenvalues is unknown. However, this numerical technique can be applied and the stability determined without relying on a stochastic integrator.
7 Extension to Higher Moments
From Theorem 18 we know that the solution of

ṡ = Hs + ξ(t),
ẋ = −γx + εaᵀs(t)x

has bounded second moment ⟨x²(t)⟩ for ε ∈ [0, √(γ/S(0))], where S(ω) is the power spectral density of f(t) = aᵀs(t), which becomes stationary as t → ∞. The study of ⟨x²(t)⟩ was recast first into the study of Mxx(s, t) in equation (20), and then into a simple matrix eigenvalue problem for the ladder operators associated to D.
We could have done the analysis for the pth moment instead of the second moment. We replace ⟨x²(t)⟩ by

⟨xᵖ(t)⟩ = ∫_{ℝⁿ⁺¹} xᵖP(s, x, t) dx ds,

in which case the differential equation (20) would be replaced by ∂ₜMₓᵖ = DₚMₓᵖ, where

DₚMₓᵖ = Σᵢ₌₁ⁿ [ (bᵢ/2)∂²_{sᵢ}Mₓᵖ − ∂_{sᵢ}((hᵢ,₁s₁ + hᵢ,₂s₂ + … + hᵢ,ₙsₙ)Mₓᵖ) ] + (−pγ + pεaᵀs)Mₓᵖ.   (55)
The vector w and constant κ in formula (24) for D would be replaced by wᵀ = (0, …, 0, pεaᵀ) and κ = −2pγ − tr(H). Then we can write a formula for the largest eigenvalue of Dₚ, just as in Theorem 18. We have

Theorem 20. Let S(ω) be the power spectral density of the n-th order filter f(t) = aᵀs(t), where ṡ = Hs + ξ(t). Then the largest eigenvalue of Dₚ is

λ₀ = −pγ + (p²/2)S(0)ε².

From this, we see that the pth moment becomes unstable when ε > √(2γ/(pS(0))).
Thus the second moment, and the variance, will become unbounded before the
mean. So if one investigates only the mean in a stability analysis of (18), then
one may find that the mean is stable, but when integrating the equation numerically, find unbounded growth due to the unbounded growth of the variance.
8 Conclusions
We have developed a method for studying (18), where the minor restrictions on H and B allow f to be taken from a broad class of stochastic processes. By studying the PDE for the x-moments as derived from the Fokker-Planck equation, we are able to determine the value of ε for which the system becomes unstable, as determined by the unboundedness of the second moment. The boundedness is determined by the solution of the eigenvalue problem (12), which reduces to a linear algebra problem via the use of the ladder operators Xₖ. The methods provide an efficient numerical technique for computing the eigenvalue that determines the stability, and give a formula for determining when the pth moment will become unbounded.
The numerical technique generalizes to higher order ODEs and has been successfully applied to the stability of (1) with a second-order filter. The general
technique of studying the PDE for the moments and the associated ladder operators also provides a way to do perturbation theory for higher order equations.
This perturbation theory has been carried out for (1) and potentially generalizes
to n-th order ODEs.
9 Acknowledgements
We would like to thank John Torczynski for motivating and finding funding for
this work. We also thank Jim Ellison and Nawaf Bou Rabee for several fruitful
discussions concerning stochastic differential equations.
10 Appendix A
In this appendix we prove Thm. 1. We begin with a simple lemma.

Lemma 21. Let s(t) be a real stationary random process with autocorrelation function R(τ) and (matrix) power spectral density 𝕊(ω). If we define

𝕊₊(ω) = ∫₀^∞ R(τ)e^{−iωτ} dτ,   (56)

then we have

𝕊(ω) = 𝕊₊(ω) + 𝕊₊*(ω),   (57)

where * denotes the conjugate transpose.

Proof. This is a direct consequence of the fact that the power spectral density is the Fourier transform of R(τ) = ⟨s(t)sᵀ(t + τ)⟩, and the fact that R(τ) = ⟨s(t)sᵀ(t + τ)⟩ = ⟨s(t − τ)sᵀ(t)⟩ = Rᵀ(−τ).
We now prove a lemma concerning the autocorrelation function of s(t) as defined in equation (2).

Lemma 22. Let s(t) be the solution to equation (2) with zero initial conditions. As t → ∞ the autocorrelation function R(τ) = ⟨s(t)sᵀ(t + τ)⟩ is given by

R(τ) = K₀e^{Hᵀτ} for τ > 0,   (58)

where

K₀ = lim_{t→∞} ∫₀ᵗ K(t − s) ds,   (59)

and where

K(q) = e^{Hq}Be^{Hᵀq}.   (60)
Proof. The solution to equation (2) (with zero initial conditions) is given by

s(t) = ∫₀ᵗ e^{H(t−s)}ξ(s) ds.   (61)

We can write

s(t)sᵀ(t + τ) = ∫₀ᵗ ∫₀^{t+τ} e^{H(t−s)}ξ(s)ξᵀ(r)e^{Hᵀ(t+τ−r)} dr ds.   (62)

If we take the expected value of both sides of this equation, and use the fact that

⟨ξ(s)ξᵀ(r)⟩ = Bδ(r − s),   (63)

we arrive at the equation

⟨s(t)sᵀ(t + τ)⟩ = ∫₀ᵗ e^{H(t−s)}Be^{Hᵀ(t−s)}e^{Hᵀτ} ds.   (64)

When deriving this last equation we have assumed that the variable r is equal to the variable s at some point when doing the integration. This will only be guaranteed if τ > 0, and hence this is only valid for τ > 0. The expression for τ < 0 is obtained by using the fact that the autocorrelation function must satisfy R(−τ) = Rᵀ(τ).

Assuming that all of the eigenvalues of H have negative real part, the process s(t) will become stationary as t → ∞. Taking the limit of equation (64) as t → ∞ gives

R(τ) = K₀e^{Hᵀτ},   (65)

where K₀ is defined as in equation (59).
We now prove a simple lemma about the matrix K₀.

Lemma 23. With K₀ defined as in equation (59), and assuming all of the eigenvalues of H have negative real part, we have HK₀ + K₀Hᵀ = −B.

Proof. We have

(d/ds)K(s) = HK(s) + K(s)Hᵀ.   (66)

It follows that

HK₀ + K₀Hᵀ = −lim_{t→∞} ∫₀ᵗ (d/ds)(K(t − s)) ds.   (67)

We can evaluate this integral using the fundamental theorem of calculus. When we do this we find that the contribution at s = 0 vanishes in the limit as t → ∞. Since K(0) = B, the contribution at s = t is just −B, which proves the lemma.
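Since K₀ solves a continuous Lyapunov equation, Lemma 23 and the identity (70) below can be checked together in a few lines; the sketch assumes scipy and reuses the second-order filter of Section 6.1.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    H = np.array([[-0.8, 0.0], [1.0, -0.2]])
    B = np.diag([1.0, 0.0])
    a = np.array([0.2, -1.0])
    I2 = np.eye(2)

    K0 = solve_continuous_lyapunov(H, -B)     # H K0 + K0 H^T = -B (Lemma 23)

    def S_direct(w):    # Eq. (5): (H + iwI)^{-1} B (H^T - iwI)^{-1}
        M = np.linalg.inv(H + 1j * w * I2) @ B @ np.linalg.inv(H.T - 1j * w * I2)
        return (a @ M @ a).real

    def S_from_K0(w):   # Eq. (70): -K0 (H^T - iwI)^{-1} - (H + iwI)^{-1} K0
        M = -K0 @ np.linalg.inv(H.T - 1j * w * I2) \
            - np.linalg.inv(H + 1j * w * I2) @ K0
        return (a @ M @ a).real

    for w in (0.0, 0.7, 2.0):
        print(S_direct(w), S_from_K0(w))      # the two columns agree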
We can now compute 𝕊₊(ω).

Lemma 24. We have

𝕊₊(ω) = −K₀(Hᵀ − iωI)⁻¹.   (68)

Proof. Using the expression for R in equation (58) we get

𝕊₊(ω) = ∫₀^∞ R(τ)e^{−iωτ} dτ = ∫₀^∞ K₀e^{(Hᵀ−iωI)τ} dτ = −K₀(Hᵀ − iωI)⁻¹.   (69)
The proof of Thm. 1 follows almost immediately from these lemmas. From Lemmas 21 and 22, we get

𝕊(ω) = 𝕊₊(ω) + 𝕊₊*(ω) = −K₀(Hᵀ − iωI)⁻¹ − (H + iωI)⁻¹K₀.   (70)

This can be written as

𝕊(ω) = −(H + iωI)⁻¹ [(H + iωI)K₀ + K₀(Hᵀ − iωI)] (Hᵀ − iωI)⁻¹,   (71)

which in turn can be written as

𝕊(ω) = −(H + iωI)⁻¹ (HK₀ + K₀Hᵀ) (Hᵀ − iωI)⁻¹.   (72)

If we use Lemma 23 we arrive at the result of Thm. 1.
11 Appendix B
In this appendix we compute the eigenvalues and eigenfunctions of the operator
D in equation (11) using standard techniques for differential equations. These
eigenvalues could also be computed using the ladder operators described in this
paper.
We begin by considering the eigenvalue problem arising from letting ε = γ = 0. That is, we consider the eigenvalue problem

D₀φ = (b²/2)(d²φ/ds²) + r(d/ds)(sφ) = λφ,   (73)
φ → 0 as s → ±∞.   (74)
We begin by making the change of variables

φ(s) = ψ(s)e^{−rs²/b²}.   (75)

In terms of ψ, equation (73) can be written as

(b²/2)(d²ψ/ds²) − rs(dψ/ds) = λψ.   (76)
The Hermite polynomials satisfy

Hₙ″(y) − 2yHₙ′(y) = −2nHₙ(y).   (77)

These are eigenfunctions (with eigenvalue λ = −2n for n ≥ 0) of the equation

d²ψ/dy² − 2y(dψ/dy) = λψ,   (78)

where we require that e^{−y²}ψ(y) remain bounded as y → ±∞. We see that this implies that

φₙ(s, r) = Hₙ(s√r/b)e^{−rs²/b²}   (79)

is an eigenfunction of equation (73) with eigenvalue

λₙ = −nr, n ≥ 0.   (80)
We see that when ε = 0 we have an analytical solution of our eigenvalue problem. To extend this result to the case where ε ≠ 0, we prove the following lemma.

Lemma 25. With D₀ defined as in equation (73), the eigenvalues of

D₀φ + κ(dφ/ds) = λφ, φ → 0 as s → ±∞,   (81)

are independent of κ.

Proof. A simple calculation shows that if φ₀(s) is the eigenfunction (with eigenvalue λ) for κ = 0, then the function φ(s) = φ₀(s + κ/r) is an eigenfunction of equation (81) with eigenvalue λ.
Lemma 26. The eigenvalue problem in equation (11) has the same eigenvalues as

D₀ψ + μb²(dψ/ds) + (½b²μ² − 2γ)ψ = λψ,   (82)

where

μ = −2ε/r,   (83)

and where we require that ψ go to zero as s → ±∞.

Proof. If φ(s) is an eigenfunction of equation (11) with eigenvalue λ, then ψ(s) = e^{−μs}φ(s) will be an eigenfunction of equation (82) with the same eigenvalue.
These two lemmas imply the following lemma.

Lemma 27. The eigenvalues of equation (11) are given by

λₙ = −nr − 2γ + 2b²ε²/r², n ≥ 0.   (84)
References
[1] Ralph Abraham and Jerrold E. Marsden. Foundations of mechanics. Benjamin/Cummings Publishing Co. Inc. Advanced Book Program, Reading,
Mass., 1978. Second edition, revised and enlarged, With the assistance of
Tudor Raţiu and Richard Cushman.
[2] Ludwig Arnold. Stochastic differential equations as dynamical systems. In
Realization and modelling in system theory (Amsterdam, 1989), volume 3 of
Progr. Systems Control Theory, pages 489–495. Birkhäuser Boston, Boston,
MA, 1990.
[3] V. I. Arnol′d. Geometrical methods in the theory of ordinary differential equations, volume 250 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, New York, second edition, 1988. Translated from the Russian by Joseph Szücs [József M. Szűcs].
[4] Søren Asmussen and Peter W. Glynn. Stochastic simulation: algorithms
and analysis, volume 57 of Stochastic Modelling and Applied Probability.
Springer, New York, 2007.
[5] P. A. M. Dirac. The Principles of Quantum Mechanics. Oxford, at the
Clarendon Press, 1947. 3d ed.
[6] C. W. Gardiner. Handbook of stochastic methods, volume 13 of Springer
Series in Synergetics. Springer-Verlag, Berlin, second edition, 1985. For
physics, chemistry and the natural sciences.
[7] R.Z. Khasminski. Stochastic stability of differential equations. Kluwer Academic Pub, 1980.
25
[8] Peter E. Kloeden and Eckhard Platen. Numerical solution of stochastic differential equations, volume 23 of Applications of Mathematics (New York).
Springer-Verlag, Berlin, 1992.
[9] Horace Lamb. Hydrodynamics. Cambridge Mathematical Library. Cambridge University Press, Cambridge, sixth edition, 1993. With a foreword
by R. A. Caflisch [Russel E. Caflisch].
[10] R. Repetto and V. Galletta. Finite amplitude Faraday waves induced by a
random forcing. Physics of fluids, 14:4284, 2002.
[11] N. G. van Kampen. Stochastic processes in physics and chemistry, volume 888 of Lecture Notes in Mathematics. North-Holland Publishing Co.,
Amsterdam, 1981.