Uncertainty Quantification and Prediction for
Non-autonomous Linear and Nonlinear Systems

by

Akash Phadnis

M.Tech. and B.Tech., Indian Institute of Technology Bombay (2011)

Submitted to the Department of Mechanical Engineering
in partial fulfillment of the requirements for the degree of
Master of Science in Mechanical Engineering
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY

September 2013

© Massachusetts Institute of Technology 2013. All rights reserved.

Author: Department of Mechanical Engineering, July 31, 2013

Certified by: Pierre F. J. Lermusiaux, Associate Professor of Mechanical Engineering, Thesis Supervisor

Accepted by: David E. Hardt, Chairman, Department Committee on Graduate Theses
Uncertainty Quantification and Prediction for Non-autonomous
Linear and Nonlinear Systems
by
Akash Phadnis
Submitted to the Department of Mechanical Engineering
on July 31, 2013, in partial fulfillment of the
requirements for the degree of
Master of Science in Mechanical Engineering
Abstract
The science of uncertainty quantification has gained considerable attention in recent years, both because models of real processes always contain some elements of uncertainty and because real systems can often be better described using stochastic components. Stochastic models can therefore be utilized to provide a most informative prediction of possible future states of the system. In light of the multiple scales, nonlinearities and uncertainties in ocean dynamics, stochastic models can be most useful for describing ocean systems.
Uncertainty quantification schemes developed in recent years include order reduction
methods (e.g. proper orthogonal decomposition (POD)), error subspace statistical estimation (ESSE), polynomial chaos (PC) schemes and dynamically orthogonal (DO) field
equations. In this thesis, we focus our attention on DO and various PC schemes for quantifying and predicting uncertainty in systems with external stochastic forcing. We develop
and implement these schemes in a generic stochastic solver for a class of non-autonomous
linear and nonlinear dynamical systems. This class of systems encapsulates most systems
encountered in classic nonlinear dynamics and ocean modeling, including flows modeled
by Navier-Stokes equations. We first study systems with uncertainty in input parameters (e.g. stochastic decay models and Kraichnan-Orszag system) and then with external
stochastic forcing (autonomous and non-autonomous self-engineered nonlinear systems).
For time-integration of system dynamics, stochastic numerical schemes of varied order
are employed and compared. Using our generic stochastic solver, the Monte Carlo, DO
and polynomial chaos schemes are intercompared in terms of accuracy of solution and
computational cost.
To allow accurate time-integration of uncertainty due to external stochastic forcing, we
also derive two novel PC schemes, namely, the reduced space KLgPC scheme and the modified TDgPC (MTDgPC) scheme. We utilize a set of numerical examples to show that the
two new PC schemes and the DO scheme can integrate both additive and multiplicative
stochastic forcing over significant time intervals. For the final example, we consider shallow water ocean surface waves and the modeling of these waves by deterministic dynamics
and stochastic forcing components. Specifically, we time-integrate the Korteweg-de Vries
(KdV) equation with external stochastic forcing, comparing the performance of the DO
and Monte Carlo schemes. We find that the DO scheme is computationally efficient to
integrate uncertainty in such systems with external stochastic forcing.
Thesis Supervisor: Pierre F. J. Lermusiaux
Title: Associate Professor of Mechanical Engineering
Acknowledgments
I would like to express my sincere gratitude towards my advisor, Prof. Pierre Lermusiaux, for introducing me to this fascinating topic and for providing invaluable guidance
and support throughout the course of this work. His enthusiasm and energy, along with
his helpful comments and words of encouragement, have been instrumental in producing
this thesis. I would also like to thank Wayne, Pat, Marcia and Leslie for all their help
during my stay at MIT.
I am also grateful to the rest of the MSEAS group, and in particular to Matt, Tapovan
and Jing for their support and for the numerous exciting and informative discussions I
had with them, without which this thesis would not have been possible.
Finally, I would like to thank my family and friends, for always being there for me
and for constantly showering me with everlasting love and affection.
Contents

1 Introduction
  1.1 Background and Motivation
  1.2 Problem Statement and Research Objectives
  1.3 Thesis Outline

2 Background
  2.1 Basic Theory and Definitions
      2.1.1 Stochastic processes and random fields
      2.1.2 Brownian motion and stochastic differential equations
      2.1.3 Stochastic integrals and their convergence
  2.2 Numerical Integration Schemes for the Solution of SDEs
      2.2.1 Euler-Maruyama (EM) method
      2.2.2 Extrapolated Euler-Maruyama (ExEM) method
      2.2.3 Stochastic Runge-Kutta method by Kloeden and Platen (RKKP)
      2.2.4 Stochastic Runge-Kutta method by Rößler (RKR)
      2.2.5 Computational cost of numerical integration schemes
  2.3 Methods for Uncertainty Quantification
      2.3.1 Literature review
      2.3.2 Generalized polynomial chaos
      2.3.3 Time-dependent generalized polynomial chaos
      2.3.4 Wiener chaos using sparse truncation
      2.3.5 Dynamically orthogonal field equations

3 Derivation of Evolution Equations
  3.1 Dynamically Orthogonal Equations
  3.2 Generalized Polynomial Chaos
  3.3 Time-dependent Generalized Polynomial Chaos
  3.4 New Modified TDgPC Scheme for Handling External Stochastic Forcing
  3.5 New Reduced Space and Reduced Order Polynomial Chaos Scheme
      3.5.1 Singular value decomposition
      3.5.2 Karhunen-Loeve expansion
      3.5.3 KLgPC algorithm and derivation of evolution equations

4 Uncertainty in Input Parameters
  4.1 One-dimensional Decay Model
  4.2 Kraichnan-Orszag Three Mode Problem
  4.3 Long Time Integration of Nonlinear Systems using the gPC Scheme
  4.4 Discussion on Performance of Uncertainty Quantification Schemes

5 Uncertainty due to External Stochastic Forcing
  5.1 Limitations of Existing Polynomial Chaos Schemes
  5.2 Modeling Stochastic Noise using our New Polynomial Chaos Schemes
  5.3 Discussion on Performance of Uncertainty Quantification Schemes

6 Uncertainty Prediction for Stochastic Ocean Systems
  6.1 A Self-Engineered 20-Dimensional Test Case
  6.2 Modeling Uncertainty in Nonlinear Ocean Surface Waves
      6.2.1 Introduction
      6.2.2 Korteweg-de Vries (KdV) Equation
      6.2.3 Deterministic KdV Equation: Solitary Wave Solutions
      6.2.4 Stochastic KdV Equation

7 Conclusions
  7.1 Future Work

A Detailed Derivation of Evolution Equations
  A.1 Dynamically Orthogonal Equations
  A.2 Generalized Polynomial Chaos

B Algorithm for General Stochastic Solver
  B.1 Functions specA() and specB()
  B.2 Script setup-script
  B.3 Script plot-script
  B.4 Function main()

C Table of Runs
List of Figures

2-1 Analytical solutions for the first moment E^w[X_t] (left) and the second moment E^w[(X_t)^2] (right) for SDE (2.29)
2-2 Mean errors in the approximation of the first moment E^w[X_t] (left) and the second moment E^w[(X_t)^2] (right) for SDE (2.29)
2-3 Total computational effort vs. time-step size (left) and mean error vs. total computational effort (right) for the approximation of E^w[X_t] for SDE (2.29)
2-4 Analytical solution (left) and mean errors (right) in the approximation of the first moment E^w[X_t] for SDE (2.30)
2-5 Total computational effort vs. time-step size (left) and partial computational effort vs. time-step size (right) for the approximation of E^w[X_t] for SDE (2.30)
2-6 Analytical solution (left) and mean errors (right) in the approximation of the first moment E^w[X_t] for SDE (2.31) with m = 2
2-7 Analytical solution (left) and mean errors (right) in the approximation of the first moment E^w[X_t] for SDE (2.31) with m = 4
2-8 Analytical solution (left) and mean errors (right) in the approximation of the first moment E^w[X_t] for SDE (2.31) with m = 6
2-9 Total computational effort vs. time-step size (left) and partial computational effort vs. time-step size (right) for the approximation of E^w[X_t] for SDE (2.31) with m = 2
2-10 Total computational effort vs. time-step size (left) and partial computational effort vs. time-step size (right) for the approximation of E^w[X_t] for SDE (2.31) with m = 4
2-11 Total computational effort vs. time-step size (left) and partial computational effort vs. time-step size (right) for the approximation of E^w[X_t] for SDE (2.31) with m = 6
4-1 Mean and variance of the solution u(t; w) for the 1-D decay model using the MC scheme
4-2 Relative error in the mean and variance of u(t; w) for the 1-D decay model using the MC scheme
4-3 (a) Mean and variance of u(t; w) for the 1-D decay model using the gPC scheme with p = 2 (b) Relative error in the mean and variance of u(t; w) using the gPC scheme with p = 2
4-4 As figure (4-3), but using the gPC scheme with p = 4
4-5 As figure (4-3), but using the gPC scheme with p = 6
4-6 As figure (4-3), but using the gPC scheme with p = 8
4-7 As figure (4-3), but using the gPC scheme with p = 10
4-8 (a) Mean and variance of u(t; w) for the 1-D decay model using the DO scheme with s = 1 (b) Relative error in the mean and variance of u(t; w) using the DO scheme with s = 1
4-9 Mean and variance of the state variable x_1(t; w) for the K-O system with initial uncertainty in one state variable using the MC scheme
4-10 Mean and variance of the state variable x_1(t; w) for the K-O system with initial uncertainty in one state variable using the gPC scheme with order p = 2
4-11 As figure (4-10), but using the gPC scheme with order p = 4
4-12 As figure (4-10), but using the gPC scheme with order p = 6
4-13 Mean and variance of the state variable x_1(t; w) for the K-O system with initial uncertainty in one state variable using the DO scheme with s = 3
4-14 Mean and variance of the state variable x_1(t; w) for the K-O system with initial uncertainty in all state variables using the TDgPC scheme with order p = 2
4-15 As figure (4-14), but using the TDgPC scheme with order p = 3
4-16 Mean and variance of the state variable x_1(t; w) for the K-O system with initial uncertainty in all state variables using the DO scheme with s = 3
4-17 Probability density function of u(t; w) for the 1-D decay model at t = 0.5 s
4-18 As figure (4-17), but at time t = 1.0 s
4-19 As figure (4-17), but at time t = 1.5 s
4-20 As figure (4-17), but at time t = 2.0 s
4-21 As figure (4-17), but at time t = 2.5 s
4-22 As figure (4-17), but at time t = 3.0 s
4-23 As figure (4-17), but at time t = 3.5 s
4-24 As figure (4-17), but at time t = 4.0 s
4-25 As figure (4-17), but at time t = 4.5 s
4-26 As figure (4-17), but at time t = 5.0 s
4-27 As figure (4-17), but at time t = 5.5 s
4-28 As figure (4-17), but at time t = 6.0 s
4-29 Mean and variance of the state variable x_1(t; w) for the K-O system with initial uncertainty in one state variable using the MC scheme
4-30 Mean and variance of the state variable x_1(t; w) for the K-O system with initial uncertainty in one state variable using the gPC scheme with p = 6
4-31 Mean and variance of the state variable x_1(t; w) for the K-O system with initial uncertainty in one state variable using the TDgPC scheme with p = 3
4-32 Probability density function of x_1(t; w) for the K-O system at t = 0.0 s
4-33 As figure (4-32), but at time t = 2.0 s
4-34 As figure (4-32), but at time t = 4.0 s
4-35 As figure (4-32), but at time t = 6.0 s
4-36 As figure (4-32), but at time t = 8.0 s
4-37 As figure (4-32), but at time t = 10.0 s
4-38 As figure (4-32), but at time t = 12.0 s
4-39 As figure (4-32), but at time t = 14.0 s
5-1 Mean and variance of the state variable x_1(t; w) for the three-dimensional autonomous system with additive noise using the MC scheme
5-2 Mean and variance of the state variable x_1(t; w) for the three-dimensional autonomous system with additive noise using the DO scheme with s = 3
5-3 Mean and variance of the state variable x_1(t; w) for the three-dimensional autonomous system with additive noise using the new MTDgPC scheme with p = 3
5-4 Mean and variance of the state variable x_2(t; w) for the three-dimensional autonomous system with additive noise using the new MTDgPC scheme with p = 3
5-5 Mean and variance of the state variable x_3(t; w) for the three-dimensional autonomous system with additive noise using the new MTDgPC scheme with p = 3
5-6 Mean and variance of x_1(t; w) for the three-dimensional non-autonomous system with additive noise using the MC scheme
5-7 Mean and variance of x_1(t; w) for the three-dimensional non-autonomous system with additive noise using the new KLgPC scheme with f_red = 3 and p = 3
5-8 Mean and variance of x_2(t; w) for the three-dimensional non-autonomous system with additive noise using the new KLgPC scheme with f_red = 3 and p = 3
5-9 Mean and variance of x_3(t; w) for the three-dimensional non-autonomous system with additive noise using the new KLgPC scheme with f_red = 3 and p = 3
5-10 Mean and variance of x_1(t; w) for the three-dimensional non-autonomous system with additive noise using the new MTDgPC scheme with p = 3
5-11 Mean and variance of x_2(t; w) for the three-dimensional non-autonomous system with additive noise using the new MTDgPC scheme with p = 3
5-12 Mean and variance of x_3(t; w) for the three-dimensional non-autonomous system with additive noise using the new MTDgPC scheme with p = 3
5-13 Mean and variance of x_1(t; w) for the four-dimensional autonomous system with multiplicative noise using the MC scheme
5-14 Mean and variance of x_1(t; w) for the four-dimensional autonomous system with multiplicative noise using the DO scheme
5-15 Mean and variance of x_1(t; w) for the four-dimensional autonomous system with multiplicative noise using the new KLgPC scheme with f_red = 4 and p = 3
5-16 Mean and variance of x_1(t; w) for the four-dimensional autonomous system with multiplicative noise using the new MTDgPC scheme with p = 3
6-1 (a) Eigenvalues of matrix A for the 20-dimensional self-engineered test case (b) First six eigenvectors of matrix A for the 20-dimensional self-engineered test case
6-2 (a) Singular values of matrix B for the 20-dimensional self-engineered test case (b) First six left singular vectors of matrix B for the 20-dimensional self-engineered test case
6-3 Mean of the solution field x(t; w) for the 20-dimensional self-engineered test case using the MC scheme
6-4 Variance of the solution field x(t; w) for the 20-dimensional self-engineered test case using the MC scheme
6-5 Variance of the first six modes for the 20-dimensional self-engineered test case using the MC scheme
6-6 Comparison of the variances of the first six modes for the 20-dimensional self-engineered test case using the MC scheme
6-7 First three modes for the 20-dimensional self-engineered test case using the MC scheme
6-8 Mean of the solution field x(t; w) for the 20-dimensional self-engineered test case using the DO scheme with s = 6
6-9 Variance of the solution field x(t; w) for the 20-dimensional self-engineered test case using the DO scheme with s = 6
6-10 Variance of the first six modes for the 20-dimensional self-engineered test case using the DO scheme with s = 6
6-11 First three modes for the 20-dimensional self-engineered test case using the DO scheme with s = 6
6-12 Mean of the solution field x(t; w) for the 20-dimensional self-engineered test case using the DO scheme with s = 10
6-13 Variance of the solution field x(t; w) for the 20-dimensional self-engineered test case using the DO scheme with s = 10
6-14 Variance of the first six modes for the 20-dimensional self-engineered test case using the DO scheme with s = 10
6-15 First three modes for the 20-dimensional self-engineered test case using the DO scheme with s = 10
6-16 Mean of the solution field x(t; w) for the 20-dimensional self-engineered test case using the KLgPC scheme with f_red = 6 and p = 2 (s_red = 28)
6-17 Variance of the solution field x(t; w) for the 20-dimensional self-engineered test case using the KLgPC scheme with f_red = 6 and p = 2 (s_red = 28)
6-18 Variance of the first six modes for the 20-dimensional self-engineered test case using the KLgPC scheme with f_red = 6 and p = 2 (s_red = 28)
6-19 First three modes for the 20-dimensional self-engineered test case using the KLgPC scheme with f_red = 6 and p = 2 (s_red = 28)
6-20 Numerical KdV 1-soliton wave train, computed using the Z-K scheme with the correction (6.20): (a) 3-D view (b) Top view
6-21 Analytical KdV 1-soliton wave train: (a) 3-D view (b) Top view
6-22 KdV 1-soliton numerical and analytical solutions at different time instances
6-23 Numerical KdV 2-soliton wave train, computed using the Z-K scheme: (a) 3-D view (b) Top view
6-24 Analytical KdV 2-soliton wave train: (a) 3-D view (b) Top view
6-25 KdV 2-soliton numerical and analytical solutions at different time instances
6-26 Mean of the stochastic KdV solution field at different time instances
6-27 Variance of the stochastic KdV solution field at different time instances
6-28 Third central moment of the stochastic KdV solution field at different time instances
6-29 Fourth central moment of the stochastic KdV solution field at different time instances
List of Tables

2.1 Butcher tableau for RKR methods
2.2 Butcher tableau for the RKR scheme RI5
2.3 Butcher tableau for the RKR scheme RI6
2.4 Computational cost of numerical integration schemes
2.5 Basis functions for polynomial chaos expansions
4.1 Run times for uncertainty quantification schemes of comparable accuracy for integrating in time uncertainty in input parameters
5.1 Run times for UQ schemes for integrating uncertainty due to external stochastic forcing
6.1 Run times for UQ schemes for time-integrating uncertainty in the 20-dimensional self-engineered test case
6.2 Run times for UQ schemes for time-integrating uncertainty in the KdV equation with external stochastic forcing
6.3 Run times for the DO (with s = 14) and the MC schemes for different values of the dimensionality (N) of the state space
6.4 Run times for the DO (with s = 14) and the MC schemes for different numbers of sample realizations (M_r)
C.1 Uncertainty in Input Parameters
C.2 Uncertainty due to External Stochastic Forcing
C.3 Uncertainty Prediction for Stochastic Ocean Systems
Chapter 1
Introduction
1.1 Background and Motivation
Over recent years, stochastic modeling and uncertainty quantification (UQ) have become
essential means of study in a wide variety of application areas. Today, stochastic models
play a prominent role in studying systems in many diverse fields such as biology, mechanics, economics and finance, weather, ocean and climate predictions and so forth. These
stochastic models take into account system and input variability as well as modeling and
data errors, and hence provide a more comprehensive understanding of the system when
compared to their deterministic counterparts. They also provide a more informative prediction of the future state of the system. These attributes make them very well suited
for applications involving systems with a high amount of variability and uncertainty. One
such field of study where stochastic modeling is starting to be used extensively is ocean
modeling.
Oceans are highly dynamical and complex environments with countless physical and
biological processes taking place simultaneously. These processes, many of which are
nonlinear, take place over a wide range of temporal and spatial scales and often interact
with one another, making them difficult to observe and study individually (Lermusiaux,
2006). Due to such challenges, the models formulated to study these coupled nonlinear
ocean systems are often imperfect. There are several sources of uncertainties in ocean
models. These include, but are not limited to, modeling of a restricted range of temporal and spatial scales (Nihoul and Djenidi, 1998), limited knowledge of processes within
this modeled scale window, limited coverage and accuracy of measurement data, approximations in interactions between various processes and miscalculations due to numerical
implementations (Lermusiaux et al., 2006). To account for these uncertainties, in most
systems, it is a common practice to assume random components for some parameters of
the dynamical equations and/or for some of the initial and boundary conditions. This
type of randomness is called uncertainty in input parameters. It has drawn considerable
attention over the last few years and substantial progress has been made in modeling
such systems, especially to account for initial input uncertainties. Although this type of
approach to modeling uncertainty is suitable for a large number of ocean systems, there
are many cases where considering randomness only in the form of input parameters is
not enough. For example, for studying systems where multiple scales are involved, the
effect of unresolved processes outside the modeled scale window needs to be incorporated
into the governing dynamical equations. Often times, it is not possible to parameterize
such effects in terms of the initial input parameters of the dynamical equations. Sometimes, these unresolved sub-scale processes may have a dependence on a phenomenon
not directly related to the main process being studied. In such cases, it is suggested
that the resulting uncertainty be modeled in the form of additional stochastic terms in
the governing dynamical equations of the system (Muller and Henyey, 1997). This type
of randomness is known as uncertainty due to external stochastic forcing or stochastic
noise, and the resulting dynamical equations are known as stochastic partial differential
equations (SPDEs).
The easiest way to model stochastic systems is through Monte Carlo simulations.
However, Monte Carlo schemes can have a high computational cost and may become
inefficient or even infeasible to implement for high dimensional systems.
As a result,
alternate methods for uncertainty quantification and prediction have been developed and
significant progress has been made for systems with uncertainty in input parameters.
Some of these schemes include generalized Polynomial Chaos (gPC) (see, e.g., the works of Ghanem and Spanos (1991), Xiu and Karniadakis (2002), Debusschere et al. (2002), Le Maitre et al. (2002), and Knio and Le Maitre (2006)) and its several variations
(Wan and Karniadakis, 2005, Gerritsma et al., 2010, Hou et al., 2006, Le Maitre et al.,
2004a), as well as Proper Orthogonal Decomposition (POD) (Berkooz et al., 1993, Holmes
et al., 1998, Gay and Ray, 1995).
For modeling systems with external stochastic forcing, a major challenge stems from
the difficulties in solving stochastic partial differential equations, in part due to lack of
regularities with respect to time. Additionally, modeling stochastic noise involves dealing
with a constant influx of randomness, which becomes very expensive to handle over larger
time scales, especially for most of the aforementioned methods. Monte Carlo simulations have thus been the method of choice for such larger time simulations with external
stochastic forcing. However, in the past fifteen years, methodologies that aim to optimally and dynamically capture most of the uncertainty have been derived and utilized.
Such methodologies utilize the property that many nonlinear systems concentrate most of
their stochastic energy in a subspace of dimension much smaller than the true dimension
of the system. In other words, they consider stochastically forced nonlinear systems of
probability measure that decays rapidly away from the mean or that is sufficiently localized by nonlinear effects. In these situations, focusing on capturing uncertainty in the
dynamical stochastic subspace is efficient. One such method is the Error Subspace Statistical Estimation (ESSE) scheme (Lermusiaux and Robinson, 1999, Lermusiaux, 1999)
developed for optimal state estimation and data assimilation. ESSE aims to adaptively
predict the dominant eigenvalue decomposition of the system covariance, using a Monte
Carlo scheme for the nonlinear evolution of this error subspace and a learning scheme
for adapting the subspace. It was utilized for systems with uncertain parameters, with
uncertain initial and boundary conditions, and with stochastic forcing. More recently,
Dynamically Orthogonal (DO) equations (Sapsis and Lermusiaux, 2009, Sapsis, 2010)
were obtained, aiming to predict the dominant uncertainty of nonlinear systems. These
differential equations govern the mean and a reduced time-dependent Karhunen-Loeve
(K-L) expansion of the system state. Applications of DO equations have so far mostly
focused on uncertainty arising due to initial and boundary conditions only. One of the
motivations of this thesis is thus uncertainty quantification and prediction for nonlinear
systems with stochastic forcing.
The present research is concerned with the development, implementation and evaluation of uncertainty quantification and prediction schemes for dynamical systems of a
general class. Existing schemes are first applied to a range of specific dynamical systems
belonging to that class and the results are intercompared. These schemes are then modified and improved so as to be able to deal with additive and multiplicative stochastic
forcing of varied types over significant periods of time. The implementation of the general class of stochastic dynamical systems and varied solution methods is generic. The
resulting computational framework allows the incubation of methods and the scientific
and engineering investigation of varied questions related to stochastic dynamical systems,
including more realistic ocean systems. The specific ocean science application considered
in this thesis is shallow water ocean surface waves and the modeling of these waves by
deterministic dynamics and stochastic forcing components. As mentioned above, such
stochastic forcing would then model the statistics of the waves not governed by the deterministic components, for example, the smaller-scale surface waves and the remotely
forced (incoming) waves.
The governing equations for the class of non-autonomous linear and nonlinear stochastic dynamical systems that we consider are defined by the general stochastic differential system of equations of the form

dX(r, t; w) = A(X(r, t; w), t) X(r, t; w) dt + B(X(r, t; w), t) dW(t; w)        (1.1)

where X(r, t; w) represents an N-dimensional stochastic field, W(t; w) represents an m-dimensional Brownian motion, r is a multi-dimensional spatial variable, t is time, and w denotes the elementary events representing the uncertainty. A(X(r, t; w), t) and B(X(r, t; w), t) represent the deterministic and stochastic components of the system dynamics, respectively. The system (1.1) is said to be non-autonomous when A and B are explicitly dependent on t, because this dependence is then not modeled within the system (1.1) but is provided from an external (and possibly dynamical) influence. When t represents time (as is the case here), the non-autonomous system is then also simply said to be time-variant.
In order to develop an efficient generic solver, we only consider systems where A and B
have at most a linear dependence in X(r, t; w). We note that this is not a strong limitation
since many highly nonlinear and high-dimensional ocean systems directly belong to this
class when their governing equations are discretized in space or when they can be simplified
to systems of that class. For example, fluid flows modeled by Navier-Stokes equations and
classic conservation equations, and thus most ocean circulation and wave models, when
discretized in space, fall under this class of nonlinear systems. Most (low dimensional)
systems that are the hallmarks of strongly nonlinear or chaotic dynamical system theory
(Strogatz, 2001) also fall under this class (e.g., the Lorenz system, the Rössler attractor, etc.). As we will see, the resulting quadratic nonlinearities in X(r, t; w) (due to the linear-in-X(r, t; w) assumption on A and B) restrict the computation of statistics to third moments.
Under this assumption, we can re-write the matrices A and B as

A(X(r, t; w), t) = A_0(r, t; w) + A_1(X(r, t; w), t)
B(X(r, t; w), t) = B_0(r, t; w) + B_1(X(r, t; w), t)        (1.2)

where A_0(r, t; w) and B_0(r, t; w) are functions of time, space and uncertainty (w) only, and A_1(X(r, t; w), t) and B_1(X(r, t; w), t) are linear in X(r, t; w). This assumption has been used to develop a computationally efficient solver which can be utilized for studying uncertainty in a large variety of nonlinear systems.
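To make the structure of equations (1.1)-(1.2) concrete, the following minimal sketch (an illustrative example, not the thesis solver of Appendix B) casts the classic Lorenz-63 system, one of the low-dimensional examples mentioned above, into this form; the constant additive forcing amplitude is a hypothetical choice standing in for B_0, and the quadratic nonlinearity enters only through the linear-in-X part A_1(X).

    # Illustrative sketch: Lorenz-63 cast as dX = A(X) X dt + B dW with
    # A(X) = A0 + A1(X) linear in X, following the form (1.1)-(1.2).
    import numpy as np

    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0     # standard Lorenz-63 parameters

    def A(X, t):
        """Deterministic dynamics matrix A(X, t) = A0 + A1(X)."""
        x = X[0]
        A0 = np.array([[-sigma, sigma, 0.0],
                       [rho,    -1.0,  0.0],
                       [0.0,     0.0, -beta]])
        A1 = np.array([[0.0, 0.0, 0.0],
                       [0.0, 0.0, -x],
                       [0.0, x,   0.0]])
        return A0 + A1                            # A(X) @ X gives the Lorenz right-hand side

    def B(X, t):
        """Stochastic forcing matrix B(X, t) = B0 (state-independent, additive)."""
        return 0.1 * np.eye(3)                    # hypothetical noise amplitude

    def em_step(X, t, h, rng):
        """One Euler-Maruyama step of dX = A(X) X dt + B(X) dW (see Section 2.2)."""
        dW = rng.normal(0.0, np.sqrt(h), size=3)
        return X + A(X, t) @ X * h + B(X, t) @ dW

    rng = np.random.default_rng(0)
    X = np.array([1.0, 1.0, 1.0])
    for n in range(1000):
        X = em_step(X, n * 1e-3, 1e-3, rng)
    print(X)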
As discussed earlier, most ocean models involve multi-scale, nonlinear and high dimensional systems. Computational cost is often the limiting factor in modeling such systems,
especially when they involve stochastic inputs and forcing. In this work, we utilize the
property that for a subset of the dynamical systems described by (1.1) or (1.2), the computational cost of uncertainty quantification schemes can be reduced significantly. This
occurs when the system's deterministic and stochastic dynamics (represented by matrices
A and B respectively) have singular value decompositions (SVDs) with rapidly decaying
singular values. This is the case with most high-dimensional ocean models. Of course,
these singular values vary in general with X and with time (as both A and B vary with
time), such that the reduction approach needs to adapt.
In what follows, we first state our problem statement and research objectives and
summarize the work accomplished. Then, we provide an outline of the thesis.
1.2 Problem Statement and Research Objectives
The problem statement for the present work is to evaluate existing and develop new
uncertainty quantification and prediction schemes for a class of non-autonomous linear
and nonlinear stochastic dynamical systems, which are represented by equations (1.1) and
(1.2), and to implement a computationally efficient framework for the resolution of these
generic equations (1.2) such that it can be utilized for future incubation of methodologies
and future scientific and engineering inquiries involving such stochastic systems.
The specific objectives of the present research are twofold. The first is to study dynamically orthogonal field equations and polynomial chaos schemes and to compare their
performance. The new focus is on studying the applicability of these schemes for modeling
uncertainty due to external stochastic forcing, specifically for ocean systems. Since generalized polynomial chaos has limitations in modeling stochastic forcing over large time
intervals, two new schemes using the polynomial chaos framework are developed. These
schemes are capable of modeling stochastic noise over arbitrarily large time intervals. The
second objective is to develop a generic solver for future incubation of methods for uncertainty quantification and prediction for a wide class of systems, and to utilize the solver
to study the stochastic characteristics of coupled nonlinear ocean systems. Towards this
goal, a computationally efficient solver for the solution of the general stochastic differential equations (1.2) is developed and is used for studying shallow water ocean surface
waves governed by the Korteweg-de Vries (KdV) equation with stochastic noise.
For the existing uncertainty quantification and prediction methods, we focus on the
dynamically orthogonal equations and polynomial chaos schemes.
The corresponding
equations for evolution of uncertainty for systems (1.2) are derived using these stochastic
modeling schemes. First, uncertainty in input parameters is studied using the example
of a one-dimensional stochastic ODE and later using the Kraichnan-Orszag three mode
problem. The accuracy and computational cost of the UQ schemes are examined and some
limitations are discussed. Next, uncertainty due to external stochastic forcing is analyzed.
It is seen that all the existing classic polynomial chaos schemes have an inherent limitation in modeling stochastic noise over large time intervals. Two novel polynomial chaos
methods (KLgPC and modified TDgPC) for modeling stochastic noise over an arbitrarily
large time interval are introduced. A few self-engineered model test cases are integrated
using dynamically orthogonal field equations and the new polynomial chaos schemes, and
their performance is investigated. In order to investigate the likely performance of the
UQ schemes at modeling more realistic ocean systems, a 20-dimensional time-dependent
test case with external stochastic forcing has been engineered. The results of the different
schemes are compared and their relative merits and limitations are explained. Subsequently, a stochastic ocean system (the Korteweg-de Vries equation with external stochastic
forcing) is considered and integrated with different schemes, and the resulting uncertainty
characteristics are examined.
1.3 Thesis Outline
Chapter 2 develops the background for uncertainty quantification of dynamical systems.
The first part reviews the concepts of stochastic processes and stochastic differential equations (SDEs). A basic understanding of Brownian motion is developed, and the Itô and Stratonovich forms of stochastic integrals are defined. Next, the notions of strong and weak convergence are presented and numerical integration schemes for the weak approximation of solutions of the Itô form of stochastic differential equations are studied. The later
part of chapter 2 deals with uncertainty quantification methods for non-autonomous dynamical systems. Monte Carlo simulations, dynamically orthogonal (DO) field equations
and polynomial chaos schemes are described in greater detail. In chapter 3, dynamically
orthogonal field equations and polynomial chaos schemes have been applied to stochastic
non-autonomous systems described by equations (1.1) and (1.2), and the evolution equations for the stochastic field have been derived. Building on an existing time-dependent
generalized polynomial scheme, a new modified TDgPC (MTDgPC) scheme is developed
in such a way as to enable it to handle time-integration of stochastic forcing. Furthermore, a novel reduced space and reduced order scheme in the polynomial chaos framework
(KLgPC) capable of integrating stochastic noise over arbitrarily large time intervals is
introduced. Chapter 4 focuses on systems with uncertainty in input parameters. The examples studied in this chapter include a one-dimensional stochastic decay model and the
Kraichnan-Orszag three mode problem. Solutions of these systems have been presented,
followed by a discussion on the performance of UQ schemes. Chapter 5 addresses uncertainty due to external stochastic forcing. A basic limitation of existing polynomial chaos
schemes in modeling stochastic noise is discussed. The two new polynomial chaos schemes
introduced in chapter 3, namely the MTDgPC and KLgPC schemes, are used for integrating systems with uncertainty due to external stochastic forcing of both additive and
multiplicative nature. Chapter 6 deals with uncertainty quantification of stochastic ocean
systems. The implementation of DO and KLgPC schemes on a reduced subspace is studied
using a 20-dimensional self-engineered test case with external stochastic forcing and the
performance of the two schemes is compared. Next, the Korteweg-de Vries (KdV) equation is introduced and its deterministic solutions are obtained using the Zabusky-Kruskal (Z-K) numerical scheme. Then, using our computationally efficient stochastic solver, the KdV equation with uncertainty due to external stochastic forcing is time-integrated and its uncertainty characteristics are examined. Conclusions and directions for future research are
presented in chapter 7.
Chapter 2
Background
In this chapter, we review the background theory for studying uncertainty quantification
and prediction of dynamical systems. We begin with an overview of the basic theory and
definitions related to stochastic algebra. Next we study numerical integration schemes for
the solution of stochastic differential equations (SDEs), and intercompare their accuracy
and computational costs. In the later part of the chapter, we present a literature review
on existing methods for uncertainty quantification, including a detailed description of
the generalized polynomial chaos (gPC) scheme, time-dependent generalized polynomial
chaos (TDgPC) scheme, Wiener chaos using sparse truncation and dynamically orthogonal
(DO) equations.
2.1 Basic Theory and Definitions

2.1.1 Stochastic processes and random fields
Let (Ω, F, P) be a probability space. Ω is the sample space, which is the set of all possible outcomes of a random experiment. An event A is a subset of the sample space Ω; it is a set of outcomes to which a probability is assigned. F is the σ-algebra associated with Ω; it is a set consisting of measurable subsets of Ω, and its elements are events about which it is possible to obtain information. P is the probability measure, P : F → [0, 1], such that

1. P(Ω) = 1
2. 0 ≤ P(A) ≤ 1
3. P(A_i ∪ A_j) = P(A_i) + P(A_j), if A_i ∩ A_j = ∅
A function Y is said to be F-measurable if Y^{-1}(B) ∈ F for every Borel set B ∈ B(R). A random variable X(w) defined on the probability space (Ω, F, P) is a real-valued F-measurable function X : Ω → R. In simple words, a random variable is a real-valued measurable quantity whose value is based on the outcome w ∈ Ω of a random experiment.

A stochastic process or random process X(t; w) is a real-valued measurable function X : [0, ∞) × Ω → R, where t ∈ [0, ∞) is the time variable. One can also consider stochastic processes defined on finite time intervals. Often, when the dependence on w is not required to be shown explicitly, the stochastic process is represented simply as X(t). Stochastic processes can be thought of in two ways (Hunter, 2009). If we fix w ∈ Ω, we have X^w : t → X(t; w); X^w is called a sample path or realization of the stochastic process. Similarly, if we fix t ∈ [0, ∞), we have X_t : w → X(t; w). From this viewpoint, a stochastic process may also be thought of as a collection of random variables {X_t} indexed by the time variable 0 ≤ t < ∞. Consider a spatial domain D ⊂ R^l and a time domain T ⊆ [0, ∞). A stochastic field or random field X(r, t; w) is defined as a real-valued measurable function X : D × T × Ω → R^n.

2.1.2 Brownian motion and stochastic differential equations
In order to model and study uncertainty due to external stochastic forcing, it is imperative to understand Stochastic Differential Equations (SDEs). These are differential
equations consisting of a deterministic part and an additional stochastic part described
by a Brownian motion (see Jazwinski (1970), Oksendal (2010) for detailed theory). In
1827, the biologist Robert Brown, while studying pollen particles floating in water under the microscope, observed minute particles within the pollen grains executing a jittery motion but was
not able to determine the theory behind this motion (Brown, 1828). This random jittery
motion later came to be known as Brownian motion, named after the biologist. The first
theory of Brownian motion was given by Louis Bachelier in his PhD thesis "The theory
of speculation" in 1900 (Bachelier, 1900). It was later in 1905, when Albert Einstein used
a probabilistic model to sufficiently explain Brownian motion, that it came to be recognized as an important topic of research (Einstein, 1905). The construction and existence
of Brownian motion as it is known today, was established by Norbert Wiener in 1923
(Wiener, 1923), who was a professor of mathematics at MIT. One-dimensional Brownian
motion also came to be known as a Wiener process, in honor of the great mathematician.
A scalar standard Brownian motion, also known as a one-dimensional standard Wiener process, defined over the time period t ∈ [t_0, T], is a random variable W(t) that depends continuously on t. For t', t such that t_0 ≤ t' < t ≤ T, the random variable ΔW(t, t') = W(t) − W(t') is Gaussian with mean μ = 0 and variance σ^2 = t − t'. Equivalently, ΔW(t, t') ~ √(t − t') N(0, 1) (Higham, 2001). In order to be called a standard Brownian motion, the random variable W(t) should satisfy the following three conditions:

1. W(0) = 0 (with probability 1)

2. For t_0 ≤ t' < t ≤ T, ΔW(t, t') = W(t) − W(t') is Gaussian with mean μ = 0 and variance σ^2 = t − t', i.e., ΔW(t, t') ~ √(t − t') N(0, 1)

3. For t_0 ≤ t' < t < u < v ≤ T, the increments ΔW(t, t') = W(t) − W(t') and ΔW(v, u) = W(v) − W(u) are independent random variables
It is useful to consider discretized Brownian motion for computational purposes, where W(t) is specified at discrete values of time t. Let us consider the time interval [0, T] and divide it into n steps by setting h = T/n, for some positive integer n. Let us denote W(t_j) by W_j, where t_j = jh (j = 0, 1, 2, ..., n). We have W_0 = 0 from condition 1 for Brownian motion. Also, from conditions 2 and 3, we get

W_j = W_{j-1} + dW_j,    j = 1, 2, ..., n        (2.1)

where each dW_j is an independent random variable such that dW_j ~ √h N(0, 1).
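For illustration, a brief sketch of equation (2.1) in code form is given below (assuming NumPy; the horizon, step count and number of sample paths are arbitrary choices):

    # Discretized Brownian motion, eq. (2.1): W_j = W_{j-1} + dW_j with
    # dW_j ~ sqrt(h) N(0, 1) and W_0 = 0 (with probability 1).
    import numpy as np

    T, n, n_paths = 1.0, 500, 10000              # horizon, steps, sample paths
    h = T / n
    rng = np.random.default_rng(42)

    dW = np.sqrt(h) * rng.standard_normal((n_paths, n))     # independent increments
    W = np.concatenate([np.zeros((n_paths, 1)),
                        np.cumsum(dW, axis=1)], axis=1)     # cumulative sums give W_j

    print(W.shape)            # (10000, 501): paths sampled on the grid t_j = j h
    print(W[:, -1].var())     # sample variance of W(T); close to T = 1 by condition 2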
A scalar stochastic differential equation is of the form

dX(t) = a(X(t), t) dt + b(X(t), t) dW(t),    X(0) = X_0,    t ∈ [0, T]        (2.2)

where a(X(t), t) and b(X(t), t) are scalar functions of the continuously time-dependent scalar stochastic process X(t), and the initial condition X(0) = X_0 is a random variable. Here, W(t) = ∫_0^t dW(τ) represents a standard Brownian motion path over the time domain [0, T]. The function a(X(t), t) characterizes the deterministic part of the SDE and is called the drift coefficient. On the other hand, the function b(X(t), t) characterizes the stochastic part and is known as the diffusion coefficient. In its integral form, equation (2.2) is written as

X(t) = X(0) + ∫_0^t a(X(τ), τ) dτ + ∫_0^t b(X(τ), τ) dW(τ),    t ∈ [0, T]        (2.3)
The first integral on the right hand side of the equation is an ordinary Riemann integral,
whereas the second integral is a stochastic integral along the Brownian motion path, which
will be discussed in greater detail in section 2.1.3. A general N-dimensional SDE with an
m-dimensional driving Wiener process is given as
X(t) = X(0) + ∫_0^t a(X(τ), τ) dτ + Σ_{j=1}^{m} ∫_0^t b_j(X(τ), τ) dW_j(τ),    t ∈ [0, T]        (2.4)

where X(t) is an N-dimensional stochastic process and b_j(X(t), t) (j = 1, 2, ..., m) are m diffusion coefficients corresponding to the m independent Brownian motion paths W_j(t) = ∫_0^t dW_j(τ) represented in equation (2.4). From here onwards, we will omit the explicit
time-dependence of a(X(t), t) and bj(X(t), t), i.e., as if the systems were time-invariant
or autonomous, in order to simplify the notation. However, all numerical properties and
schemes discussed later still apply to the non-autonomous case.
2.1.3 Stochastic integrals and their convergence
Brownian motion is complex mathematically because it is not differentiable at any point
and hence the rules of ordinary calculus are not applicable to it. For a given ordinary
function f(t), the integral ∫_0^T f(t) dt can be approximated by a Riemann sum as

∫_0^T f(t) dt ≈ Σ_{j=0}^{n-1} f(t_j)(t_{j+1} − t_j)        (2.5)

where t_j = jh (j = 0, 1, 2, ..., n − 1) and h → 0. However, the stochastic integrals used in equations (2.3) and (2.4) represent integrals over non-differentiable Brownian
motion paths, and hence cannot be approximated by a Riemann sum. The first version of stochastic calculus was developed by the Japanese mathematician K. Itô in the 1940s (Itô, 1944). The Itô approximation of the stochastic integral ∫_0^T f(t) dW(t) is given as

∫_0^T f(t) dW(t) ≈ Σ_{j=0}^{n-1} f(t_j)(W(t_{j+1}) − W(t_j))        (2.6)

This is known as the Itô stochastic integral. An alternative approximation for the stochastic integral was proposed by the Russian physicist R.L. Stratonovich in the 1960s (Stratonovich, 1966). This integral is known as the Stratonovich stochastic integral and is represented as

∫_0^T f(t) ∘ dW(t) ≈ Σ_{j=0}^{n-1} f((t_j + t_{j+1})/2)(W(t_{j+1}) − W(t_j))        (2.7)
The symbol "o" is used to distinct Stratonovich integral approximation from its corresponding It6 integral approximation. The Stratonovich form of a general N-dimensional
SDE with an rn-dimensional driving Wiener process, represented by equation (2.4) in its
It6 form, is given as
X(t) = X(0) +
a(X(s))ds + Z
b (X(s)) o dWj(s)
t E [0, T]
(2.8)
Both It6 and Stratonovich forms of stochastic integrals are popular and each form has its
own advantages and disadvantages. The choice of stochastic calculus used is a matter of
convenience and depends on the problem at hand, with several existing guidelines (for e.g.,
see (Oksendal, 2010)). It is possible to convert an It6 form of an SDE into its Stratonovich
form and vice-a-versa quite easily. The present work focuses exclusively on the It6 form
of stochastic differential equations, represented by equation (2.4).
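As a small numerical illustration of the difference between the two discrete sums (this example, with the integrand f(t) = W(t), is our own choice and not one of the thesis test cases), the Itô sum (2.6) tends to W(T)^2/2 − T/2 while the Stratonovich sum (2.7) tends to W(T)^2/2 as h → 0:

    # Ito sum (2.6) versus Stratonovich sum (2.7) for the integrand f(t) = W(t).
    # In the limit h -> 0 the Ito value tends to W(T)^2/2 - T/2 and the
    # Stratonovich value to W(T)^2/2, which is why the two calculi differ.
    import numpy as np

    T, n = 1.0, 20000
    h = T / n
    rng = np.random.default_rng(1)

    # Brownian path sampled on a half-step grid so that midpoints are available.
    dW_half = np.sqrt(h / 2) * rng.standard_normal(2 * n)
    W = np.concatenate([[0.0], np.cumsum(dW_half)])   # W at times 0, h/2, h, ...

    W_left = W[0:2 * n:2]        # W(t_j)
    W_mid = W[1:2 * n:2]         # W((t_j + t_{j+1}) / 2)
    W_right = W[2:2 * n + 2:2]   # W(t_{j+1})
    incr = W_right - W_left

    ito = np.sum(W_left * incr)               # eq. (2.6)
    strat = np.sum(W_mid * incr)              # eq. (2.7)
    print(ito, 0.5 * W[-1] ** 2 - 0.5 * T)    # close for small h
    print(strat, 0.5 * W[-1] ** 2)            # close for small h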
The definitions of strong and weak convergence of stochastic processes are adopted from Kloeden and Platen (Kloeden and Platen, 2011). Assume that Y_n is a discrete time approximation of a stochastic process X(t) at time t = T using some discrete numerical time integration scheme. In many situations, it is required that the sample paths of the approximation Y_n be close to those of the stochastic process X(t). The notion of strong convergence is useful in such cases. A numerical time integration method is said to have a strong convergence of order γ ∈ (0, ∞) if there exist a finite constant K and a positive constant h_0 such that

E^w[|X(T) − Y_n|] ≤ K h^γ        (2.9)

for any time discretization with maximum step size h ∈ (0, h_0), where E^w[X] is the expectation operator, defined as the mean or expected value of the random variable X. For the deterministic case, when the diffusion coefficients b_j of equation (2.4) are zero, the strong convergence criterion reduces to the usual deterministic convergence criterion for ordinary differential equations (ODEs).
In many practical situations, it is not necessary to have a close pathwise approximation
of a stochastic process. Often times, only the expectation of a certain function of the
stochastic process might be of interest. The type of convergence criterion required in this
case is much weaker than the strong convergence criterion discussed above. In order to
have such a functional approximation, it suffices to have a good approximation of the
probability distribution of the stochastic process X(t) rather than a close approximation
of its sample paths. The notion of weak convergence is useful in such cases. The strong
convergence criterion asks for the difference in trajectory, whereas the weak convergence
criterion asks for a difference in distribution. Consider a function g(X(t)) of the stochastic process X(t) at time t = T. A numerical time integration scheme is said to have a weak convergence of order β if, for any polynomial function g, there exist a finite constant K and a positive constant h_0 such that

|E^w[g(X(T))] − E^w[g(Y_n)]| ≤ K h^β        (2.10)

for any time discretization with maximum step size h ∈ (0, h_0). For the deterministic case, when the diffusion coefficients b_j of equation (2.4) are zero, the weak convergence criterion also reduces to the usual deterministic convergence criterion for ordinary differential equations (ODEs) when we set g(X(T)) = X(T). In ocean modeling applications, one is often interested in the statistics of the random fields and not so much in their individual sample paths. Hence, the weak convergence criterion is often more relevant. The present work thus focuses primarily on the weak convergence criterion.
2.2 Numerical Integration Schemes for the Solution of SDEs
In order to model stochastic processes and random fields and to study the evolution of
uncertainty with time, SDEs need to be numerically integrated. The presence of Brownian
motion renders the deterministic numerical integration schemes incapable of integrating
SDEs. Almost all algorithms that are used for the solution of ODEs display very poor
numerical convergence when applied to SDEs. Numerical solution of SDEs is a relatively
new topic of research when compared to regular ODEs or PDEs. The development of
numerical time integration schemes for SDEs began with the Euler-Maruyama scheme
developed by Maruyama in 1955 (Maruyama, 1955). This scheme is the first order Taylor
approximation of an SDE. Subsequently, higher order Taylor approximations were developed, but these required the evaluation of derivatives of the drift and diffusion coefficients (defined in section 2.1.2), making the algorithms inefficient and cumbersome. This shifted the focus to the development of derivative-free approximations to these higher order Taylor expansions. This has become a major topic of research in the last few decades and several derivative-free higher order numerical schemes have been developed for both the strong and weak approximation of solutions of SDEs (defined in section 2.1.3); see, e.g., Giles (2008a), Kloeden and Platen (2011), Milstein and Tretyakov (2004) and the references therein. In particular, derivative-free Runge-Kutta schemes for the strong approximation of solutions of SDEs have been proposed by Burrage and Burrage (1996), Burrage and Burrage (2000), Kloeden and Platen (2011), Milstein and Tretyakov (2004), Newton (1991), Rößler (2010) and Rümelin (1982). Similarly, Runge-Kutta schemes for the weak approximation of solutions of SDEs have been developed by Kloeden and Platen (2011), Komori et al. (1997), Komori (2007a), Komori (2007b), Küpper et al. (2007), Rößler (2007), Rößler (2009), Talay (1990) and Tocino and Vigo-Aguiar (2002). Next, we discuss in greater detail some of these numerical integration schemes for the weak approximation of solutions of SDEs.
2.2.1 Euler-Maruyama (EM) method
The Euler-Maruyama scheme is the simplest Taylor approximation of the stochastic differential equation (2.4), and was proposed by Maruyama (Maruyama, 1955). It attains a strong order of convergence of γ = 0.5 and a weak order of convergence of β = 1.0. The Euler-Maruyama method has the form

Y_{n+1} = Y_n + a(Y_n) h + Σ_{j=1}^{m} b_j(Y_n) ΔW_j        (2.11)

where

ΔW_j = W_j(t_{n+1}) − W_j(t_n)        (2.12)
Higher order Taylor approximations of equation (2.4) require the evaluation of derivatives
of diffusion coefficients bj(Y), which are often non-existent or too difficult to evaluate.
Hence, higher order Taylor approximations are not very useful as numerical integration
methods for SDEs. However, several derivative-free integration schemes have been developed in recent years by substituting appropriate approximations for the derivatives in
these higher order Taylor expansions.
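A minimal sketch of the Euler-Maruyama step (2.11)-(2.12), together with a crude check of the weak error in the sense of (2.10), is given below; the test SDE (geometric Brownian motion) and all parameter values are illustrative choices, not the thesis test SDEs (2.29)-(2.31):

    # Euler-Maruyama, eqs. (2.11)-(2.12), with a weak-error check in the sense
    # of (2.10). Test SDE: dX = mu X dt + sig X dW (geometric Brownian motion),
    # an illustrative choice with the known mean E[X(T)] = X0 exp(mu T).
    import numpy as np

    mu, sig, X0, T = 1.5, 0.5, 1.0, 1.0
    a = lambda x: mu * x           # drift coefficient
    b = lambda x: sig * x          # diffusion coefficient (single Wiener process)

    def em_weak_mean(h, n_samples, rng):
        """Estimate E^w[X(T)] with Euler-Maruyama over independent realizations."""
        n = int(round(T / h))
        X = np.full(n_samples, X0)
        for _ in range(n):
            dW = np.sqrt(h) * rng.standard_normal(n_samples)   # eq. (2.12)
            X = X + a(X) * h + b(X) * dW                        # eq. (2.11)
        return X.mean()

    rng = np.random.default_rng(2)
    exact = X0 * np.exp(mu * T)
    for h in (0.1, 0.05, 0.025):
        err = abs(em_weak_mean(h, 200_000, rng) - exact)
        print(f"h = {h:0.3f}   weak error in E[X(T)] ~ {err:0.4f}")
    # The error should shrink roughly linearly with h (weak order beta = 1),
    # up to Monte Carlo sampling noise in the mean estimate.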
2.2.2 Extrapolated Euler-Maruyama (ExEM) method
The Extrapolated Euler-Maruyama scheme, proposed by Talay and Tubaro (Talay and Tubaro, 1990), is based on Euler-Maruyama approximations Ỹ_{n+1} and Y_{n+1}^{h/2}, computed with step sizes h and h/2 respectively. It attains a weak order of convergence of β = 2.0 and has the form

Ỹ_{n+1} = Y_n + a(Y_n) h + Σ_{j=1}^{m} b_j(Y_n) ΔW_j        (2.13)

Y_{n+1/2}^{temp} = Y_n + a(Y_n) (h/2) + Σ_{j=1}^{m} b_j(Y_n) ΔW'_j        (2.14)

Y_{n+1}^{h/2} = Y_{n+1/2}^{temp} + a(Y_{n+1/2}^{temp}) (h/2) + Σ_{j=1}^{m} b_j(Y_{n+1/2}^{temp}) ΔW''_j        (2.15)

Y_{n+1} = 2 Y_{n+1}^{h/2} − Ỹ_{n+1}        (2.16)

where

ΔW_j = ΔW'_j + ΔW''_j        (2.17)

ΔW'_j = W_j(t_{n+1/2}) − W_j(t_n)        (2.18)

ΔW''_j = W_j(t_{n+1}) − W_j(t_{n+1/2})        (2.19)
2.2.3 Stochastic Runge-Kutta method by Kloeden and Platen (RKKP)

An explicit derivative-free stochastic Runge-Kutta method has been proposed by Kloeden and Platen (Kloeden and Platen, 1989, 2011). This method attains a weak order of convergence of β = 2.0. It has the form

$$
\begin{aligned}
Y_{n+1} = Y_n &+ \tfrac{1}{2}\left(a(Y_n) + a(H^{(0)})\right) h \\
&+ \tfrac{1}{4}\sum_{k=1}^{m}\left(b_k(H_+^{(k)}) + b_k(H_-^{(k)}) + 2\,b_k(Y_n)\right)\hat{I}^{(k)} \\
&+ \tfrac{1}{4}\sum_{k=1}^{m}\sum_{\substack{l=1\\ l\neq k}}^{m}\left(b_k(\hat{H}_+^{(l)}) + b_k(\hat{H}_-^{(l)}) - 2\,b_k(Y_n)\right)\hat{I}^{(k)} \\
&+ \tfrac{1}{4}\sum_{k=1}^{m}\left(b_k(H_+^{(k)}) - b_k(H_-^{(k)})\right)\frac{(\hat{I}^{(k)})^2 - h}{\sqrt{h}} \\
&+ \tfrac{1}{4}\sum_{k=1}^{m}\sum_{\substack{l=1\\ l\neq k}}^{m}\left(b_k(\hat{H}_+^{(l)}) - b_k(\hat{H}_-^{(l)})\right)\frac{\hat{I}^{(k)}\hat{I}^{(l)} + V_{k,l}}{\sqrt{h}}
\end{aligned}
\qquad (2.20)
$$

where

$$ H^{(0)} = Y_n + a(Y_n)\,h + \sum_{k=1}^{m} b_k(Y_n)\,\hat{I}^{(k)} \qquad (2.21) $$

$$ H_{\pm}^{(k)} = Y_n + a(Y_n)\,h \pm b_k(Y_n)\sqrt{h} \qquad (2.22) $$

$$ \hat{H}_{\pm}^{(k)} = Y_n \pm b_k(Y_n)\sqrt{h} \qquad (2.23) $$

for k = 1, 2, ..., m. The random variables Î^(k) are three-point distributed with P(Î^(k) = ±√(3h)) = 1/6 and P(Î^(k) = 0) = 2/3, for k = 1, 2, ..., m. The V_{k,l} are two-point distributed random variables with P(V_{k,l} = ±h) = 1/2 for l = 1, 2, ..., k−1, with V_{k,k} = −h, and with V_{k,l} = −V_{l,k} for l = k+1, ..., m.
2.2.4 Stochastic Runge-Kutta method by Rößler (RKR)

Recently, a new class of stochastic Runge-Kutta methods for the weak approximation of solutions of SDEs has been developed by Rößler (Rößler, 2009). The novelty of this class of Runge-Kutta methods is that the number of stages does not depend on the dimension m of the driving Wiener process. The stochastic Runge-Kutta method by Rößler has the form

$$
\begin{aligned}
Y_{n+1} = Y_n &+ \sum_{i=1}^{s} \alpha_i\, a(t_n + c_i^{(0)} h,\, H_i^{(0)})\, h \\
&+ \sum_{i=1}^{s}\sum_{k=1}^{m} \beta_i^{(1)}\, b_k(t_n + c_i^{(1)} h,\, H_i^{(k)})\, \hat{I}^{(k)}
 + \sum_{i=1}^{s}\sum_{k=1}^{m} \beta_i^{(2)}\, b_k(t_n + c_i^{(1)} h,\, H_i^{(k)})\, \frac{\hat{I}^{(k,k)}}{\sqrt{h}} \\
&+ \sum_{i=1}^{s}\sum_{k=1}^{m} \beta_i^{(3)}\, b_k(t_n + c_i^{(2)} h,\, \hat{H}_i^{(k)})\, \hat{I}^{(k)}
 + \sum_{i=1}^{s}\sum_{k=1}^{m} \beta_i^{(4)}\, b_k(t_n + c_i^{(2)} h,\, \hat{H}_i^{(k)})\, \sqrt{h}
\end{aligned}
\qquad (2.24)
$$

where s is the number of stages and

$$ H_i^{(0)} = Y_n + \sum_{j=1}^{s} A_{ij}^{(0)}\, a(t_n + c_j^{(0)} h,\, H_j^{(0)})\, h + \sum_{j=1}^{s}\sum_{l=1}^{m} B_{ij}^{(0)}\, b_l(t_n + c_j^{(1)} h,\, H_j^{(l)})\, \hat{I}^{(l)} \qquad (2.25) $$

$$ H_i^{(k)} = Y_n + \sum_{j=1}^{s} A_{ij}^{(1)}\, a(t_n + c_j^{(0)} h,\, H_j^{(0)})\, h + \sum_{j=1}^{s} B_{ij}^{(1)}\, b_k(t_n + c_j^{(1)} h,\, H_j^{(k)})\, \sqrt{h} \qquad (2.26) $$

$$ \hat{H}_i^{(k)} = Y_n + \sum_{j=1}^{s} A_{ij}^{(2)}\, a(t_n + c_j^{(0)} h,\, H_j^{(0)})\, h + \sum_{j=1}^{s}\sum_{\substack{l=1\\ l\neq k}}^{m} B_{ij}^{(2)}\, b_l(t_n + c_j^{(1)} h,\, H_j^{(l)})\, \frac{\hat{I}^{(k,l)}}{\sqrt{h}} \qquad (2.27) $$

for i = 1, 2, ..., s and k = 1, 2, ..., m. The random variables Î^(k,l) are defined as

$$
\hat{I}^{(k,l)} =
\begin{cases}
\tfrac{1}{2}\left(\hat{I}^{(k)}\hat{I}^{(l)} - \sqrt{h}\,\tilde{I}^{(k)}\right) & \text{if } k < l \\
\tfrac{1}{2}\left((\hat{I}^{(k)})^2 - h\right) & \text{if } k = l \\
\tfrac{1}{2}\left(\hat{I}^{(k)}\hat{I}^{(l)} + \sqrt{h}\,\tilde{I}^{(l)}\right) & \text{if } k > l
\end{cases}
\qquad (2.28)
$$

for 1 ≤ k, l ≤ m. The random variables Î^(k) are three-point distributed with P(Î^(k) = ±√(3h)) = 1/6 and P(Î^(k) = 0) = 2/3 for k = 1, 2, ..., m, and the random variables Ĩ^(k) are two-point distributed with P(Ĩ^(k) = ±√h) = 1/2 for k = 1, 2, ..., m−1.

The coefficients of these stochastic Runge-Kutta methods can be represented by an extended Butcher tableau as shown in table (2.1). The order of the schemes varies depending on the actual values of the coefficients used. The values of the coefficients must satisfy the corresponding order conditions based on the colored rooted tree analysis (Rößler, 2004, 2006). Two schemes of this class have been studied in the present work. The coefficients for the RKR schemes RI5 and RI6 derived by Rößler (2009) are shown in tables (2.2) and (2.3), respectively. Both these schemes have s = 3 stages and attain a weak order of convergence of β = 2.0.

Table 2.1: Extended Butcher tableau for RKR methods

    c^(0) | A^(0) | B^(0)
    c^(1) | A^(1) | B^(1)
    c^(2) | A^(2) | B^(2)
    ------+-------+------
          |  α^T  | β^(1)T  β^(2)T  β^(3)T  β^(4)T

Table 2.2: Butcher tableau for the RKR scheme RI5 (coefficients from Rößler (2009))

Table 2.3: Butcher tableau for the RKR scheme RI6 (coefficients from Rößler (2009))

2.2.5 Computational cost of numerical integration schemes
In this section, we investigate the computational cost of implementing the numerical schemes, in terms of the number of floating-point operations (flops) required per iteration of each scheme. We use theoretical estimates to compute the number of flops, and we intercompare the numerical solutions obtained using the aforementioned numerical schemes.
Theoretical estimates. We consider an N-dimensional SDE with an m-dimensional driving Wiener process, as given by equation (2.4). Consider a numerical implementation by Monte Carlo simulations using M_r realizations. Since the number of flops required for evaluating the drift and diffusion coefficients depends on the problem at hand, assume that the computational cost of evaluating the drift coefficient a(X) (for M_r realizations of X) is n_a and that of evaluating the diffusion coefficients b_k(X) is n_{b_k}, for k = 1, 2, ..., m. Also assume that the number of flops required for generating M_r realizations of a Gaussian random variable is n_{w_G}, and that the numbers of flops required for generating M_r realizations of the two-point distributed random variable V_{k,l} (used in the RKKP scheme) and of the three-point distributed random variable Î^(k) (used in the RKKP and RKR schemes) are n_{w_2} and n_{w_3}, respectively. We evaluate the number of flops per iteration for all the schemes in terms of N, m, M_r, n_a, n_{b_k}, n_{w_G}, n_{w_2} and n_{w_3} (with s = 3 stages for the RKR scheme) and present them in table (2.4).
We observe that the number of flops required for the evaluation of the diffusion coefficients in the RKKP scheme depends on the dimension m of the driving Wiener process, which is also true of most other stochastic Runge-Kutta schemes. However, this is not the case for the RKR and ExEM schemes. Also, the number of random variables that need to be generated for implementing the RKR and ExEM schemes is O(m), whereas for the RKKP scheme it is O(m^2). For SDEs with higher-dimensional driving Wiener processes, these two computations account for a major share of the computational cost of the algorithm, and hence the RKR and ExEM schemes are expected to be computationally cheaper than other stochastic Runge-Kutta schemes like RKKP, especially for more complex and higher-dimensional SDEs with large m. Additionally, although the Extrapolated Euler-Maruyama scheme (ExEM) may be computationally cheap and easy to implement, there are situations in which extrapolation methods have only limited value (Milstein and Tretyakov, 2004). One example is the case of stiff problems, for which the ExEM scheme has restricted stability regions (Kloeden et al., 1995). Thus, the recently developed Runge-Kutta schemes by Rößler are of additional importance for computing higher-order weak approximations of SDEs in a computationally cheap manner.
Table 2.4: Computational cost of numerical integration schemes

    Scheme    Flops per iteration
    1. EM     2(m+1) M_r N + m M_r + n_a + Σ_{k=1}^m n_{b_k} + m n_{w_G}
    2. ExEM   2(3m+4) M_r N + 2m M_r + 3 n_a + 3 Σ_{k=1}^m n_{b_k} + 2m n_{w_G}
    3. RKKP   (11m^2 + 14m + 5) M_r N + 3m^2 M_r + 2 n_a + (2m+1) Σ_{k=1}^m n_{b_k} + m n_{w_3} + 0.5 m(m−1) n_{w_2}
    4. RKR    (9m^2 + 48m + 12) M_r N + (4m^2 + 2m − 2) M_r + 3 n_a + 6 Σ_{k=1}^m n_{b_k} + m n_{w_3} + (m−1) n_{w_2}

We estimate the number of flops involved in generating the required random variables using the theory on numerical computing with MATLAB given by Moler (Moler, 2004). Two-point and three-point distributed random variables can be generated using an "if" condition on uniform random variables. MATLAB generates uniform random variables using the "subtract-with-borrow" step, which requires 2 flops per random variable. Hence, the generation of M_r uniform random variables requires 2 M_r flops. Also, the generation of two-point and three-point random variables from M_r uniform random variables requires an additional M_r and 2 M_r flops, respectively. Thus, we have n_{w_2} = 3 M_r and n_{w_3} = 4 M_r. All algorithms for generating Gaussian (normal) random variables are based on transformations of uniform random variables. The simplest way to generate an a-by-b matrix of approximately normally distributed random variables is to use the expression sum(rand(a,b,12),3) - 6. This expression requires 12ab additional flops for generating ab Gaussian random variables from 12ab uniform random variables. Hence, the number of flops for generating M_r Gaussian random variables using this expression is 36 M_r (24 M_r for generating 12 M_r uniform random variables and 12 M_r additional flops for the required transformations). A more sophisticated table-lookup algorithm called the ziggurat algorithm is used by MATLAB to generate Gaussian random variables, but for simplicity we assume n_{w_G} = 36 M_r.
Numerical estimates. In order to verify the theoretical results and estimates, we implement the numerical integration schemes discussed in sections 2.2.1-2.2.4 for three different systems of stochastic differential equations from Rößler (2009). We compare the schemes using an ensemble of M_r Monte Carlo simulations for each scheme; the sample averages are thus approximated using M_r independently simulated realizations. The mean error for the numerical integration schemes is computed as

$$ \hat{\mu} = E^{\omega}\left[f(X_T)\right] - \frac{1}{M_r}\sum_{k=1}^{M_r} f(Y_{T,k}) $$

All errors have been computed for the final time T = 1.0 s and are plotted on a log-log scale in order to see the convergence results clearly.
As a first example, a linear SDE system (d = m = 2) with commutative noise has been considered. The governing stochastic differential equation for this system, equation (2.29), is a two-dimensional linear SDE driven by two Wiener processes, with drift and diffusion coefficients linear in X_t; the system and its coefficients are taken from Rößler (2009). The analytical solutions for the first moment E^w[X_t^i] and the second moment E^w[(X_t^i)^2] are known in closed form and grow exponentially in time, for i = 1, 2; those for i = 1 are shown in figure (2-1).

Figure 2-1: Analytical solutions for the first moment E^w[X_t^1] (left) and the second moment E^w[(X_t^1)^2] (right) for SDE (2.29)
For this case, we choose M_r = 1 × 10^8. The mean errors |μ̂| for the first and second moments are shown in figure (2-2). It is observed from figure (2-2) that the EM scheme does display a weak order of convergence of β = 1.0, whereas the other schemes display an order of convergence of β = 2.0. The calculated empirical variances corresponding to the mean errors are reasonably small, and hence our results are consistent. Also, we observe that the mean errors for the scheme RI5 are significantly lower than those of the other schemes.

In order to estimate the computational effort involved in implementing the numerical schemes, the number of flops required for each scheme is shown in figure (2-3). For a fair comparison of accuracy and computational effort, figure (2-3) also shows a plot of mean error vs. computational effort for all the schemes considered, for their estimates of E^w[X_t^1] for the system of SDEs (2.29).
Figure 2-2: Mean errors in the approximation of the first moment E^w[X_t^1] (left) and the second moment E^w[(X_t^1)^2] (right) for SDE (2.29)
Figure 2-3: Total computational effort vs. time-step size (left) and mean error vs. total computational effort (right) for the approximation of E^w[X_t^1] for SDE (2.29)
It is observed that although the EM scheme is the simplest to implement, its error and convergence characteristics make it the most expensive scheme for a fixed desired mean error. Also, we observe that the schemes ExEM, RKKP and RI6 have similar slopes in the plot of mean error vs. computational effort (figure (2-3)). This means that in order to improve the accuracy of the approximation, a proportionally similar increase in computational effort would be required for all three schemes. It is also observed that the scheme RI5 shows the most desirable characteristics in terms of accuracy and computational cost for this example.
The second example considered is a nonlinear SDE system (d = m = 2) with non-commutative noise.
Figure 2-4: Analytical solution (left) and mean errors (right) in the approximation of the first moment E^w[X_t^1] for SDE (2.30)
The governing stochastic differential equation for this system, equation (2.30), has a drift that is nonlinear in the state and diffusion coefficients that do not commute; the system is again taken from Rößler (2009). The analytical solution for the first moment in this case grows proportionally to exp(t), for i = 1, 2. Here, we choose M_r = 1 × 10^9. The analytical solution for E^w[X_t^1] and the mean errors |μ̂| in its approximation for the various numerical integration schemes are presented in figure (2-4). We observe that the results are similar to those obtained in the previous example. Because of the non-commutative nature of the noise and the increased complexity of the system, the empirical variances are higher than in the previous example, and hence a larger number of realizations is required to keep them at acceptable levels. We also observe that, again, the error characteristics of the schemes ExEM, RKKP and RI6 are similar, whereas the mean errors of the scheme RI5 are lower than those of the others.
A plot of mean error vs. total computational effort is shown in figure (2-5) on the left side. Again, trends very similar to those of the previous example are observed.
Figure 2-5: Mean error vs. total computational effort (left) and mean error vs. partial computational effort (right) for the approximation of E^w[X_t^1] for SDE (2.30)
For complex SDEs with stochastic forcing from higher-dimensional Wiener processes, the generation of random variables and the evaluation of the drift and diffusion coefficients become the major computational costs. Hence, it is useful to study these partial computational costs for the schemes considered. Such a comparison between the mean errors and the partial computational costs of the numerical schemes is also shown in figure (2-5), on the right side. By comparing the two plots shown in figure (2-5), we observe that the costs of evaluating the drift and diffusion coefficients and generating random variables are relatively lower for the Runge-Kutta schemes than for the EM and ExEM schemes. Also, since this system has a driving Wiener process of relatively small dimension, m = 2, the advantages of the RKR schemes over other Runge-Kutta schemes (like RKKP) in terms of computational cost, which were mentioned earlier in this section, are not evident here. In order to see the advantages of the RKR schemes over other Runge-Kutta schemes, we need an SDE system with a higher-dimensional driving Wiener process.
Next, we consider a nonlinear SDE with non-commutative noise and a higher dimension d = 4. For given λ, ρ ∈ {0, 1}, the governing stochastic differential equation for this example, equation (2.31), is a four-dimensional nonlinear SDE from Rößler (2009) whose diffusion terms involve square-root nonlinearities of the state components; two driving Wiener processes are always present, and two additional pairs of driving Wiener processes are switched on by λ and ρ. Thus, we have m = 2 + 2λ + 2ρ as the dimension of the driving Wiener process. In this case, the first moment of the solution can be calculated analytically and grows proportionally to exp(2t) for each component. The number of realizations is M_r = 1 × 10^7. In this example, the performance of the schemes is compared for the cases m = 2 (with λ = ρ = 0), m = 4 (with λ = 1 and ρ = 0) and m = 6 (with λ = ρ = 1). The analytical solution for E^w[X_t^1] and the mean errors |μ̂| in its approximation by the numerical integration schemes for m = 2, 4 and 6 are presented in figures (2-6), (2-7) and (2-8), respectively.
Figure 2-6: Analytical solution (left) and mean errors (right) in the approximation of the first moment E^w[X_t^1] for SDE (2.31) with m = 2
Figure 2-7: Analytical solution (left) and mean errors (right) in the approximation of the first moment E^w[X_t^1] for SDE (2.31) with m = 4
From these figures, we observe that the convergence characteristics of the numerical schemes are again similar to those obtained in the first two examples.

The main purpose of this third example is to observe the change in computational effort of the schemes due to the increase in the dimension m of the driving Wiener process. Plots depicting mean error vs. total computational effort and mean error vs. partial computational effort for the approximation of E^w[X_t^1] for m = 2, 4 and 6 are thus shown in figures (2-9), (2-10) and (2-11), respectively. Comparing figures (2-3), (2-5) and (2-9), we observe that, since we are using a smaller number of Monte Carlo realizations, the performance of the EM and ExEM schemes in terms of computational cost is better than in the previous two examples. As m is increased (figures (2-10) and (2-11)), the computational cost of the RKKP scheme increases faster than that of the other schemes.
Figure 2-8: Analytical solution (left) and mean errors (right) in the approximation of the first moment E^w[X_t^1] for SDE (2.31) with m = 6
This is because in RKKP (and in most other stochastic Runge-Kutta schemes) the number of evaluations of the diffusion coefficients scales linearly with the dimension m of the driving Wiener process. Also, the number of random variables that need to be generated for implementing the scheme is O(m^2) for most stochastic Runge-Kutta methods (including RKKP), which adds further to the computational cost. However, the family of Runge-Kutta schemes proposed by Rößler does not have this inefficiency, and hence we see that the computational cost of the schemes RI5 and RI6 does not increase faster than that of the schemes EM and ExEM with an increase in m. Moreover, the performance of the scheme RI5, in terms of accuracy of solution and computational cost of implementing the algorithm, is again observed to be better than that of the other numerical schemes considered here.
In order to minimize the statistical error, M_r has to be chosen to be very large (Kloeden and Platen, 2011). In usual practice, however, a much smaller M_r is sufficient to get a reasonably good approximation of the solution. In fact, for most ocean modeling applications, M_r cannot be chosen to be very large due to the computational constraints introduced by high dimensionality. For such a choice of M_r, the statistical errors are often dominant and the choice of a higher-order numerical scheme does not necessarily improve the accuracy and convergence characteristics of the solution. Hence, for a majority of ocean modeling applications, the Euler-Maruyama method is often the most suitable numerical scheme due to its low computational cost.
Figure 2-9: Mean error vs. total computational effort (left) and mean error vs. partial computational effort (right) for the approximation of E^w[X_t^1] for SDE (2.31) with m = 2
Figure 2-10: Mean error vs. total computational effort (left) and mean error vs. partial computational effort (right) for the approximation of E^w[X_t^1] for SDE (2.31) with m = 4
Figure 2-11: Mean error vs. total computational effort (left) and mean error vs. partial computational effort (right) for the approximation of E^w[X_t^1] for SDE (2.31) with m = 6
The EM scheme has been used as the numerical integration scheme for systems with uncertainty due to stochastic forcing in all subsequent analyses in the present work. For the numerical integration of ordinary differential equations and partial differential equations (for studying uncertainty in input parameters), we use the explicit fourth-order Runge-Kutta scheme (RK-4).
2.3 Methods for Uncertainty Quantification

In this section, we study schemes for uncertainty quantification of stochastic dynamical systems. We consider a general stochastic dynamical system represented by the differential equation

$$ \frac{\partial X(r,t;\omega)}{\partial t} = \mathcal{L}\left[X(r,t;\omega);\omega\right], \qquad r \in D,\; t \in T,\; \omega \in \Omega \qquad (2.32) $$

where X(r,t;ω) ∈ R^N is an N-dimensional stochastic variable representing the state of the system at any given time t ∈ T and L is any given nonlinear operator. The initial condition is given as

$$ X(r,t_0;\omega) = X_0(r,\omega), \qquad r \in D,\; \omega \in \Omega \qquad (2.33) $$

and the boundary conditions are specified as

$$ \mathcal{B}\left[X(r_0,t;\omega)\right] = h(r_0,t;\omega), \qquad r_0 \in \partial D,\; t \in T,\; \omega \in \Omega \qquad (2.34) $$
We begin with a literature review of existing schemes and methodologies, including Monte
Carlo simulations, generalized polynomial chaos, dynamically orthogonal field equations,
proper orthogonal decomposition and error subspace statistical estimation.
Next, we
study generalized polynomial chaos and its derivative schemes, and dynamically orthogonal field equations in greater detail.
2.3.1 Literature review
The Monte Carlo simulation technique is perhaps the simplest and one of the most common methods for uncertainty quantification. This statistical sampling method was first
introduced by von Neumann, who used it to study neutron diffusion (Eckhardt, 1987).
49
The mathematical foundation of Monte Carlo simulations lies in the Law of Large Numbers, which states that the average of a large number of independent samples of a random
variable converges to the analytical mean of that random variable (Bertsekas and Tsitsiklis, 2002). Its theoretical basis has been described in detail in Zaremba (1968) and James
(2000). Monte Carlo methods can be readily applied to solve problems to an arbitrary
degree of accuracy, provided a sufficiently large number of samples is used. Over the
years, significant advances have been made to improve the efficiency of these schemes.
Some of these include variance reduction (Fishman, 1996, McGrath and Irving, 1973),
importance sampling (Hastings, 1970, Gilks et al., 1995), sequential Monte-Carlo (Doucet
et al., 2001), quasi-Monte Carlo (Niederreiter, 2010) and multi-level Monte Carlo methods
(Giles, 2008b). Monte Carlo methods are easy to implement and are shown to converge
without stringent regularity conditions. Another major advantage is that if we have an
existing deterministic code for modeling a dynamical system, we do not need to modify
it extensively to incorporate uncertainty when implementing the Monte Carlo scheme, as
long as we can generate a sufficiently large number of independent samples. Because of its
ease of implementation, Monte Carlo scheme has been a preferred method of solution for
complex problems which are difficult to model using other stochastic methods. However,
Monte Carlo schemes can have a very high computational cost and can become infeasible
to implement for high dimensional systems. Hence, several new uncertainty quantification schemes have emerged in recent years, in order to increase the efficiency of modeling
stochastic systems.
Development of new uncertainty quantification schemes continues to be a topic of
active research. Polynomial chaos is a new approach to model uncertainty which has attracted increasing attention over recent years. The origin of this method is based on the
work done by Norbert Wiener on homogeneous chaos theory (Wiener, 1938). Cameron
and Martin showed that the expansion of any square-integrable functional in a series of
Hermite polynomials of Gaussian random variables is L2-convergent (Cameron and Martin, 1947). The Cameron-Martin theorem forms the mathematical basis of the polynomial
chaos scheme. The polynomial chaos scheme uses polynomial functions as stochastic bases
for expanding the uncertainty components of the stochastic field. The Wiener-Hermite
50
expansions were first used to study Gaussian and other similar stochastic processes in
modeling turbulence (Kraichnan, 1963, Orszag and Bissonnette, 1967), but they were
found to have some serious limitations (Crow and Canavan, 1970, Chorin, 1974), leading to a decrease of interest in the method. Later, Ghanem and Spanos developed a
spectral stochastic finite element method using truncated Wiener-Hermite expansions in
a Galerkin framework (Ghanem and Spanos, 1991). This work revived interest in the
scheme and led to the development of stochastic Galerkin approach to polynomial chaos.
Xiu and Karniadakis generalized the stochastic basis to include non-Gaussian stochastic
processes by using Askey-scheme-based orthogonal polynomials (Xiu and Karniadakis,
2002), introducing the generalized polynomial chaos (gPC) scheme.
The gPC scheme
has been used in a variety of applications to study uncertainty, including transport in
porous media (Ghanem, 1998), solid mechanics (Ghanem, 1999a,b), electrochemical flow
(Debusschere et al., 2002, 2003), fluid flow (Le Maitre et al., 2002, Xiu and Karniadakis,
2003) and computational fluid dynamics (Knio and Le Maitre, 2006, Najm, 2009). It has
also been extended to include arbitrary random distributions outside the Askey family
of orthogonal polynomials (Soize and Ghanem, 2004, Wan and Karniadakis, 2006b). Although the gPC scheme has been used successfully to model uncertainty in many diverse
applications, there are certain situations in which it is not very effective. These include
modeling of parametric uncertainty in nonlinear systems with complex dynamics (limit
cycles, strange attractors etc.) and long time integration of systems with external stochastic forcing (Branicki and Majda, 2012). Several new polynomial chaos schemes have been
introduced in recent years which try to alleviate the shortcomings of gPC by using a
dynamically adaptive polynomial chaos basis. Examples include the multi-element polynomial chaos (ME-gPC) scheme (Wan and Karniadakis, 2005, 2006a), time-dependent
polynomial chaos (TDgPC) scheme (Gerritsma et al., 2010), Wiener chaos using sparse
truncation (Hou et al., 2006, Luo, 2006), polynomial chaos using Wiener-Haar expansions
(Le Maitre et al., 2004a) and multi-resolution polynomial chaos schemes (Le Maitre et al.,
2004b, Le Maitre et al., 2007, Najm et al., 2009). Some of these schemes are described
in detail in section 3.2. Despite the success of these schemes in specific applications, long
time integration of unsteady dynamics and uncorrelated stochastic noise continues to be a
51
significant challenge for polynomial chaos schemes (Debusschere et al., 2004, Ernst et al.,
2012, Branicki and Majda, 2012).
Order reduction methods have also been utilized for uncertainty quantification.
A classical order reduction method is the Proper Orthogonal Decomposition (POD) scheme
(Berkooz et al. (1993), Holmes et al. (1998)). This method uses a family of orthogonal
spatial functions as bases to capture the dominant components of the stochastic processes.
The POD scheme has been applied for uncertainty quantification in a wide range of
applications such as turbulence (Sirovich (1987), Berkooz et al. (1993), Holmes et al.
(1998), Lumley (2007)) and control of chemical processes (Gay and Ray (1995), Zhang
et al. (2003), Shvartsman and Kevrekidis (2004)). However, the basis functions of POD
scheme need to be chosen a priori and hence may not always be able to model the evolution
of nonlinear dynamical processes accurately. Other examples of uncertainty quantification
schemes include those mentioned in the works of Majda et al. (2001), Borcea et al. (2002),
Schwab and Todor (2006), Babuska et al. (2007), Majda et al. (2008), Ma and Zabaras
(2009), Owhadi et al. (2010), Dashti and Stuart (2011), Doostan and Owhadi (2011) and
Majda and Branicki (2012).
Another approach to predict uncertainty for data assimilation is the Error Subspace
Statistical Estimation scheme (ESSE) (Lermusiaux and Robinson, 1999, Lermusiaux,
1999). The motivation behind this scheme is the multi-scale, intermittent, non-stationary and non-homogeneous uncertainty fields of ocean dynamics (Lermusiaux, 2006). This method uses adaptive and time-varying basis functions to represent the stochastic field. It focuses on the subspace where most of the uncertainty is concentrated, as measured by a dominant decomposition of the time-variant covariance of the system, i.e., a time-dependent dominant Karhunen-Loeve (K-L) expansion. This dominant decomposition
and its basis eigenfunctions are evolved using ensemble predictions through a Monte Carlo
approach. The advantage of this scheme is that it does not make any explicit assumption
on the form of the stochastic field. A drawback is that the evolution of the basis functions
uses Monte Carlo simulations.
Dynamically Orthogonal (DO) equations are new equations to predict uncertainty
for dynamical systems (Sapsis and Lermusiaux, 2009, Sapsis, 2010). The DO method is
52
based on a time-dependent dominant K-L expansion and a Dynamical Orthogonality (DO)
condition. The DO condition simply imposes that the rate of change of the dominant K-L
expansion basis is orthogonal to the basis itself. This DO condition eliminates redundancy
in the representation of the solution field, without imposing additional restrictions. As
in ESSE, a major advantage is that the basis functions evolve in time, and are thus
able to follow the dominant uncertainty in nonlinear dynamical systems in an efficient
manner.
The DO equations are a generalization of proper orthogonal decomposition
(POD) and generalized polynomial chaos (gPC) schemes, and the governing equations of
these schemes can be recovered from the DO equations with additional assumptions, a
steady (spatial) basis or a steady stochastic basis, respectively. A closely linked scheme is
the Dynamically Bi-Orthogonal (DyBO) method (Cheng et al., 2012). The DO equations
have been used for uncertainty quantification in fluid and ocean-relevant flows (Sapsis,
2010, Sapsis and Lermusiaux, 2012, Ueckermann et al., 2013) and extended for use in non-Gaussian data assimilation (Sondergaard, 2011, Sondergaard and Lermusiaux, 2013a,b).
2.3.2 Generalized polynomial chaos

A polynomial chaos expansion is an expansion of the stochastic field in terms of polynomial functions of stochastic random variables. A uni-variate polynomial chaos expansion of a stochastic field X(r,t;ω) has the form

$$ X(r,t;\omega) = \sum_{i=0}^{p} x_i(r,t)\, \psi_i(\xi(\omega)) \qquad (2.35) $$

where p is the polynomial order of the expansion, ψ_i(ξ(ω)) are the stochastic basis functions of the random variable ξ(ω) and x_i(r,t) are the corresponding deterministic coefficients. For optimal convergence, the choice of polynomial basis functions depends on the expected form of the uncertainty in the dynamical system. For example, if it is known that the random variable ξ is a uniform random variable, the basis functions ψ_i are chosen to be Legendre polynomials, i.e., ψ_i(ξ) = L_i(ξ). Legendre polynomials are given by the recurrence relation

$$ L_0(\xi) = 1, \qquad L_1(\xi) = \xi, \qquad L_{i+1}(\xi) = \frac{(2i+1)\,\xi\,L_i(\xi) - i\,L_{i-1}(\xi)}{i+1} \quad \text{for } i = 1, 2, 3, \ldots \qquad (2.36) $$

Similarly, for Gaussian uncertainty, the basis functions are chosen to be Hermite polynomials, i.e., ψ_i(ξ) = H_i(ξ). Hermite polynomials are given by the recurrence relation

$$ H_0(\xi) = 1, \qquad H_1(\xi) = \xi, \qquad H_{i+1}(\xi) = \xi\,H_i(\xi) - i\,H_{i-1}(\xi) \quad \text{for } i = 1, 2, 3, \ldots \qquad (2.37) $$

Table 2.5: Basis functions for polynomial chaos expansions

    Random variable ξ(ω)       Polynomial basis ψ(ξ)    Support
    Continuous:
      Gaussian                 Hermite                  (−∞, ∞)
      Gamma                    Laguerre                 [0, ∞)
      Beta                     Jacobi                   [a, b]
      Uniform                  Legendre                 [a, b]
    Discrete:
      Poisson                  Charlier                 {0, 1, 2, ...}
      Binomial                 Krawtchouk               {0, 1, ..., N}
      Negative Binomial        Meixner                  {0, 1, 2, ...}
      Hypergeometric           Hahn                     {0, 1, ..., N}

    where a, b ∈ R and N > 0 is a finite integer.
The choice of polynomial basis functions corresponding to various forms of uncertainty is presented in table (2.5). For the multi-variate stochastic case, the generalized polynomial chaos expansion has the form

$$ X(r,t;\omega) = \sum_{|\alpha| \le p} x_{\alpha}(r,t)\, \Psi_{\alpha}\big(\xi_1(\omega), \xi_2(\omega), \ldots, \xi_{\ell}(\omega)\big) = \sum_{i=0}^{s-1} x_i(r,t)\, \Psi_i(\omega) \qquad (2.38) $$
where α is a multi-index α = (α_1, α_2, ..., α_ℓ) with |α| = Σ_i α_i, ℓ is the number of random variables (the stochastic dimension) and p is the order of the multi-variate polynomial chaos expansion. In addition, in equation (2.38), the multi-variate polynomial basis functions are defined as

$$ \Psi_{\alpha}(\xi(\omega)) = \prod_{j=1}^{\ell} \psi_{\alpha_j}(\xi_j(\omega)) \qquad (2.39) $$

where ψ_{α_j}(ξ_j(ω)) are the uni-variate polynomial basis functions, and the total number of terms in the expansion is given by

$$ s = \binom{\ell + p}{\ell} = \frac{(\ell + p)!}{\ell!\, p!} \qquad (2.40) $$
As an example, we consider a Hermite chaos expansion with ℓ = 3 and p = 2. The total number of terms in the expansion is s = (3+2)!/(3! 2!) = 10. The set of multi-indices α is given as {α} = {(0,0,0), (1,0,0), (0,1,0), (0,0,1), (2,0,0), (0,2,0), (0,0,2), (1,1,0), (1,0,1), (0,1,1)}. The multi-variate basis functions are calculated as

$$
\begin{aligned}
\Psi_{(0,0,0)} &= H_0(\xi_1)H_0(\xi_2)H_0(\xi_3) = 1, &
\Psi_{(1,0,0)} &= H_1(\xi_1)H_0(\xi_2)H_0(\xi_3) = \xi_1, \\
\Psi_{(0,1,0)} &= H_0(\xi_1)H_1(\xi_2)H_0(\xi_3) = \xi_2, &
\Psi_{(0,0,1)} &= H_0(\xi_1)H_0(\xi_2)H_1(\xi_3) = \xi_3, \\
\Psi_{(2,0,0)} &= H_2(\xi_1)H_0(\xi_2)H_0(\xi_3) = \xi_1^2 - 1, &
\Psi_{(0,2,0)} &= H_0(\xi_1)H_2(\xi_2)H_0(\xi_3) = \xi_2^2 - 1, \\
\Psi_{(0,0,2)} &= H_0(\xi_1)H_0(\xi_2)H_2(\xi_3) = \xi_3^2 - 1, &
\Psi_{(1,1,0)} &= H_1(\xi_1)H_1(\xi_2)H_0(\xi_3) = \xi_1\xi_2, \\
\Psi_{(1,0,1)} &= H_1(\xi_1)H_0(\xi_2)H_1(\xi_3) = \xi_1\xi_3, &
\Psi_{(0,1,1)} &= H_0(\xi_1)H_1(\xi_2)H_1(\xi_3) = \xi_2\xi_3
\end{aligned}
\qquad (2.41)
$$

Finally, the multi-variate polynomial chaos expansion for this example is given as

$$
\begin{aligned}
X(r,t;\omega) &= \sum_{|\alpha| \le 2} x_{\alpha}(r,t)\, \Psi_{\alpha}\big(\xi_1(\omega), \xi_2(\omega), \xi_3(\omega)\big) \\
&= x_{(0,0,0)}(r,t) + x_{(1,0,0)}(r,t)\,\xi_1 + x_{(0,1,0)}(r,t)\,\xi_2 + x_{(0,0,1)}(r,t)\,\xi_3 \\
&\quad + x_{(2,0,0)}(r,t)\,(\xi_1^2 - 1) + x_{(0,2,0)}(r,t)\,(\xi_2^2 - 1) + x_{(0,0,2)}(r,t)\,(\xi_3^2 - 1) \\
&\quad + x_{(1,1,0)}(r,t)\,\xi_1\xi_2 + x_{(1,0,1)}(r,t)\,\xi_1\xi_3 + x_{(0,1,1)}(r,t)\,\xi_2\xi_3 \\
&= \sum_{i=0}^{9} x_i(r,t)\, \Psi_i(\omega)
\end{aligned}
\qquad (2.42)
$$
Next, we consider the stochastic dynamical system (2.32) with the initial and boundary conditions (2.33) and (2.34), respectively. Consider a multi-variate polynomial chaos expansion of X(r,t;ω) given by

$$ X(r,t;\omega) = \sum_{i=0}^{s-1} x_i(r,t)\, \Psi_i(\omega) \qquad (2.43) $$

where Ψ_i(ω) represent the stochastic polynomial basis functions and x_i(r,t) ∈ R^N represent their corresponding N-dimensional deterministic coefficients. To obtain the evolution equations for the polynomial chaos scheme, we substitute the polynomial chaos expansion into the governing equation and take a stochastic Galerkin projection, which is equivalent to multiplying the resultant equation by Ψ_j(ω) and applying the mean value operator. Substituting the expansion from equation (2.43) into equation (2.32) and multiplying by Ψ_j(ω), we get (in Einstein notation)

$$ \frac{\partial x_i(r,t)}{\partial t}\, \Psi_i(\omega)\, \Psi_j(\omega) = \mathcal{L}\left[X(r,t;\omega);\omega\right] \Psi_j(\omega) \qquad (2.44) $$

Applying the mean value operator to equation (2.44), we obtain

$$ \frac{\partial x_i(r,t)}{\partial t}\, E^{\omega}\left[\Psi_i(\omega)\,\Psi_j(\omega)\right] = E^{\omega}\left[\mathcal{L}\left[X(r,t;\omega);\omega\right] \Psi_j(\omega)\right] \qquad (2.45) $$

The evolution equations for the coefficients x_i(r,t) are given as

$$ \frac{\partial x_i(r,t)}{\partial t} = E^{\omega}\left[\mathcal{L}\left[X(r,t;\omega);\omega\right] \Psi_j(\omega)\right] C^{-1}_{\Psi,ji}, \qquad 0 \le i, j \le s-1 \qquad (2.46) $$

where

$$ C_{\Psi,ij} = E^{\omega}\left[\Psi_i(\omega)\,\Psi_j(\omega)\right] \qquad (2.47) $$

The initial and boundary conditions for the coefficients are obtained by substituting the polynomial chaos expansion from (2.43) into (2.33) and (2.34), respectively, and then taking Galerkin projections. The initial conditions on the coefficients are given as

$$ x_i(r,t_0) = x_{i,0}(r), \qquad 0 \le i \le s-1 \qquad (2.48) $$

and the associated boundary conditions are of the form

$$ \mathcal{B}\left[x_i(r_0,t)\right] = E^{\omega}\left[h(r_0,t;\omega)\, \Psi_j(\omega)\right] C^{-1}_{\Psi,ji}, \qquad 0 \le i, j \le s-1 \qquad (2.49) $$
2.3.3 Time-dependent generalized polynomial chaos
The time-dependent generalized polynomial chaos scheme (TDgPC), developed by Gerritsma et al. (2010), is a modification of the gPC scheme and is useful for long time
integration of nonlinear stochastic systems with uncertainty in input parameters. The
TDgPC scheme utilizes a Gram-Schmidt orthogonalization technique to generate a family of orthogonal polynomials, to be used as stochastic bases in the polynomial chaos
expansion of the uncertainty field. The basic idea is that as the probability density function (pdf) of the uncertainty field changes as a function of time, it requires a different set
of orthogonal polynomials for its representation (as also discussed in section 2.3.1). In the
TDgPC scheme of Gerritsma et al. (2010), the basis functions are time-dependent and are
generated at every re-initialization step. However, the equations for the time evolution
of coefficients have the same form as the evolution equations for the gPC scheme.
2.3.4 Wiener chaos using sparse truncation

Hou et al. (2006) have developed a numerical method based on the gPC scheme for uncertainty quantification of dynamical systems with external stochastic forcing. This scheme involves the use of a compression technique to deal with the sustained influx of new random variables, as provided by the sustained stochastic forcing with time. Let us consider the stochastic dynamical system represented by equation (1.1). Also consider any time T > 0 and any orthonormal basis {m_i(t), i = 1, 2, ...} in L2([0,T]). For example, consider the trigonometric basis defined as

$$ m_1(t) = \frac{1}{\sqrt{T}}, \qquad m_i(t) = \sqrt{\frac{2}{T}}\cos\!\left(\frac{(i-1)\pi t}{T}\right), \quad i = 2, 3, \ldots \qquad (2.50) $$

We define random variables ξ_i(ω) as

$$ \xi_i(\omega) = \int_0^T m_i(t)\, dW(t;\omega), \qquad i = 1, 2, \ldots \qquad (2.51) $$

It can be shown that when the stochastic noise is Gaussian, the ξ_i(ω) are independent identically distributed standard Gaussian random variables, i.e., ξ_i(ω) ~ N(0,1). Also, the Wiener process can be represented as

$$ W(t;\omega) = \sum_{i=1}^{\infty} \xi_i(\omega) \int_0^t m_i(\tau)\, d\tau \qquad (2.52) $$

and the series converges in the mean square sense,

$$ E^{\omega}\!\left[\left( W(t;\omega) - \sum_{i=1}^{n} \xi_i(\omega) \int_0^t m_i(\tau)\, d\tau \right)^{2}\right] \to 0 \quad \text{as } n \to \infty \qquad (2.53) $$

for t ≤ T. The mean square convergence is a result of the Cameron-Martin theorem discussed in section 2.3.1. Expanding this representation of the stochastic forcing integrated over time [0,T], instead of the original instantaneous dW(t;ω), can be more efficient since the number of random variables needed for a good approximation can be reduced by the time-integration. In those cases, using equations (2.51) and (2.52), we can employ an efficient polynomial chaos expansion in the random variables ξ_i(ω) and derive equations for the evolution of the corresponding deterministic coefficients, similar to the gPC scheme.
Additionally, the scheme utilizes sparse truncation to reduce the number of terms in the polynomial chaos expansion. In the generalized polynomial chaos scheme, for all random variables ξ_i(ω), 1 ≤ i ≤ ℓ, polynomials of all orders are used, with the truncation condition specified as Σ_i α_i ≤ p. With the sparse truncation scheme, however, it is recognized that it is better to use lower order polynomials for the ξ_i(ω) with higher values of the subscript i. Hence, additional constraints such as α_i ≤ p − i are introduced, which reduce the number of terms required in the polynomial chaos representation of the stochastic field. This idea is similar to the sparse tensor product developed by Schwab for solving random elliptic problems by polynomial chaos methods (Frauenfelder et al., 2005). Moreover, some of the random variables are decoupled from the rest to further reduce the number of terms in the expansion.

The sparse truncation Wiener chaos scheme by Hou et al. (2006) has been used successfully for uncertainty quantification of dynamical systems with Gaussian noise for short time intervals and has been shown to be computationally more efficient than the Monte Carlo scheme. However, the number of random variables ξ_i(ω) required to accurately represent the dynamics of the system can still increase rapidly with time and quickly grow beyond computationally acceptable limits, making the scheme unsuitable for uncertainty quantification over large time intervals (Branicki and Majda, 2012).
2.3.5 Dynamically orthogonal field equations

A Dynamically Orthogonal (DO) decomposition of the stochastic field X(r,t;ω) has the form

$$ X(r,t;\omega) = \bar{x}(r,t) + \sum_{i=1}^{s} \tilde{x}_i(r,t)\, \phi_i(t;\omega) \qquad (2.54) $$

where x̄(r,t) is the mean of the stochastic field, defined as

$$ \bar{x}(r,t) = E^{\omega}\left[X(r,t;\omega)\right] \qquad (2.55) $$

and s is a non-negative integer representing the number of terms in the DO expansion. The basis functions x̃_i(r,t) are deterministic and are also known as DO modes, whereas the φ_i(t;ω) represent the stochastic coefficients. The stochastic subspace V_s is the linear space spanned by the s DO modes and is defined as

$$ V_s = \mathrm{span}\{\tilde{x}_i(r,t)\}, \qquad 1 \le i \le s \qquad (2.56) $$

It represents a subspace of the physical space where most of the uncertainty (probability measure) of the random field X(r,t;ω) resides at time t.

It is seen from equation (2.54) that the mean x̄(r,t), modes x̃_i(r,t) and coefficients φ_i(t;ω) are all time-dependent quantities. It is also observed that the simultaneous variation of the modes and the coefficients in time to represent the evolution of uncertainty is a source of redundancy. Therefore, additional constraints can be imposed to derive closed-form evolution equations for the mean, modes and coefficients. The additional constraint is imposed by restricting the time-rate-of-change of the basis x̃_i(r,t) to be normal to the stochastic subspace V_s itself. This condition on the evolution of the modes is known as the dynamically orthogonal (DO) condition and is represented as

$$ \frac{dV_s}{dt} \perp V_s \;\Leftrightarrow\; \left\langle \frac{\partial \tilde{x}_i(\bullet,t)}{\partial t},\; \tilde{x}_j(\bullet,t) \right\rangle = 0, \qquad 1 \le i, j \le s \qquad (2.57) $$

where ⟨·,·⟩ represents an inner product in space (evaluated by an integral) and "•" represents the dummy variable for this integration. Using condition (2.57), one can derive a set of independent explicit equations for the evolution of the mean, modes and coefficients. The DO evolution equations for the stochastic dynamical system represented by (2.32), (2.33) and (2.34) are derived and discussed in Sapsis and Lermusiaux (2009, 2012) and Ueckermann et al. (2013). Next, we summarize some of this previous work.

Consider the DO expansion of the stochastic field X(r,t;ω) given by equation (2.54). Substituting this expansion into equation (2.32), we obtain

$$ \frac{\partial \bar{x}(r,t)}{\partial t} + \frac{d\phi_i(t;\omega)}{dt}\, \tilde{x}_i(r,t) + \phi_i(t;\omega)\, \frac{\partial \tilde{x}_i(r,t)}{\partial t} = \mathcal{L}\left[X(r,t;\omega);\omega\right] \qquad (2.58) $$

which is the basic equation from which the DO governing equations for the mean, coefficients and modes are obtained.
Evolution equation for mean field. By applying the mean value operator to equation (2.58), we obtain the evolution equation for the mean of the stochastic field, given as

$$ \frac{\partial \bar{x}(r,t)}{\partial t} = E^{\omega}\left[\mathcal{L}\left[X(r,t;\omega);\omega\right]\right] \qquad (2.59) $$

Evolution equation for stochastic coefficients. We first take the inner product of equation (2.58) with each of the modes x̃_j(r,t) to obtain

$$ \left\langle \frac{\partial \bar{x}(\bullet,t)}{\partial t},\, \tilde{x}_j(\bullet,t) \right\rangle + \frac{d\phi_i(t;\omega)}{dt} \left\langle \tilde{x}_i(\bullet,t),\, \tilde{x}_j(\bullet,t) \right\rangle + \phi_i(t;\omega) \left\langle \frac{\partial \tilde{x}_i(\bullet,t)}{\partial t},\, \tilde{x}_j(\bullet,t) \right\rangle = \left\langle \mathcal{L}\left[X(\bullet,t;\omega);\omega\right],\, \tilde{x}_j(\bullet,t) \right\rangle \qquad (2.60) $$

The third term on the left-hand side vanishes by the DO condition (2.57). Additionally, by orthogonality of the DO modes (if they are orthogonal initially, (2.57) ensures that they remain orthogonal for all times), the second term on the left-hand side is non-zero only for i = j. For the term on the right-hand side involving the mean, we take the inner product of (2.59) with x̃_j(r,t) and obtain

$$ \left\langle \frac{\partial \bar{x}(\bullet,t)}{\partial t},\, \tilde{x}_j(\bullet,t) \right\rangle = E^{\omega}\left[\left\langle \mathcal{L}\left[X(\bullet,t;\omega);\omega\right],\, \tilde{x}_j(\bullet,t) \right\rangle\right] \qquad (2.61) $$

Using these facts, equation (2.60) reduces to the evolution equation for the stochastic coefficients, given as

$$ \frac{d\phi_i(t;\omega)}{dt} = \left\langle \mathcal{L}\left[X(\bullet,t;\omega);\omega\right] - E^{\omega}\left[\mathcal{L}\left[X(\bullet,t;\omega);\omega\right]\right],\, \tilde{x}_i(\bullet,t) \right\rangle, \qquad 1 \le i \le s \qquad (2.62) $$

Evolution equation for DO modes. Instead of a projection of (2.58) onto the modes and a spatial integration, by duality, we now multiply with the stochastic coefficients and take a statistical average. Hence, multiplying (2.58) with φ_j(t;ω) and applying the mean value operator, the first term cancels since E^ω[φ_j(t;ω)] = 0, and we obtain

$$ E^{\omega}\left[\frac{d\phi_i(t;\omega)}{dt}\, \phi_j(t;\omega)\right] \tilde{x}_i(r,t) + E^{\omega}\left[\phi_i(t;\omega)\,\phi_j(t;\omega)\right] \frac{\partial \tilde{x}_i(r,t)}{\partial t} = E^{\omega}\left[\mathcal{L}\left[X(r,t;\omega);\omega\right]\phi_j(t;\omega)\right] \qquad (2.63) $$

To replace dφ_i(t;ω)/dt in the first term of (2.63), we multiply (2.62) with φ_j(t;ω) and apply the mean value operator to obtain

$$ E^{\omega}\left[\frac{d\phi_i(t;\omega)}{dt}\, \phi_j(t;\omega)\right] = E^{\omega}\left[\left\langle \mathcal{L}\left[X(\bullet,t;\omega);\omega\right] - E^{\omega}\left[\mathcal{L}\left[X(\bullet,t;\omega);\omega\right]\right],\, \tilde{x}_i(\bullet,t) \right\rangle \phi_j(t;\omega)\right] \qquad (2.64) $$

Inserting this expression into (2.63) and simplifying, we obtain the evolution equation for the DO modes, which is given as

$$ \frac{\partial \tilde{x}_i(r,t)}{\partial t} = \Pi_{V_s^{\perp}}\!\left[E^{\omega}\left[\mathcal{L}\left[X(r,t;\omega);\omega\right]\phi_j(t;\omega)\right]\right] C^{-1}_{\phi_j\phi_i}, \qquad 1 \le i, j \le s \qquad (2.65) $$

where

$$ \Pi_{V_s^{\perp}}\left[F(r)\right] = F(r) - \Pi_{V_s}\left[F(r)\right] = F(r) - \left\langle F(\bullet),\, \tilde{x}_k(\bullet,t) \right\rangle \tilde{x}_k(r,t) \qquad (2.66) $$

and

$$ C_{\phi_i\phi_j} = E^{\omega}\left[\phi_i(t;\omega)\,\phi_j(t;\omega)\right] \qquad (2.67) $$

The initial and boundary conditions for the evolution equations can be obtained by substituting the DO expansion into equations (2.33) and (2.34), using a similar analysis as done for the governing equation. The initial conditions are given as

$$ \bar{x}(r,t_0) = \bar{x}_0(r) = E^{\omega}\left[X_0(r,\omega)\right], \qquad \phi_i(t_0;\omega) = \left\langle X_0(\bullet,\omega) - \bar{x}_0(\bullet),\, \tilde{x}_i(\bullet,t_0) \right\rangle, \qquad \tilde{x}_i(r,t_0) = \tilde{x}_{i,0}(r) \qquad (2.68) $$

where the x̃_{i,0}(r) are the s most dominant modes of the initial error covariance of the system state. The associated boundary conditions are given as

$$ \mathcal{B}\left[\bar{x}(r_0,t)\right] = E^{\omega}\left[h(r_0,t;\omega)\right], \qquad \mathcal{B}\left[\tilde{x}_i(r_0,t)\right] = E^{\omega}\left[h(r_0,t;\omega)\,\phi_j(t;\omega)\right] C^{-1}_{\phi_j\phi_i} \qquad (2.69) $$
Chapter 3
Derivation of Evolution Equations
In this chapter, we derive evolution equations for the class of non-autonomous linear
and nonlinear systems represented by (1.1) and its special case (1.2), using Dynamically
Orthogonal (DO) equations, generalized Polynomial Chaos (gPC) and several polynomial
chaos derivative schemes. We also introduce two novel polynomial chaos schemes, namely,
the modified TDgPC scheme and the KLgPC scheme, which are capable of modeling
dynamical systems with additive and multiplicative stochastic forcing over arbitrarily
large time intervals.
3.1 Dynamically Orthogonal Equations
We consider the class of stochastic dynamical systems whose governing equation is (1.1)
and derive its DO evolution equations. We then address its special quadratic-nonlinearity
case (1.2).
Derivation of DO governing equations for (1.1)

Substituting the DO expansion (2.54) in equation (1.1), we have

$$
\begin{aligned}
d\bar{x}(r,t) + d\phi_i(t;\omega)\, \tilde{x}_i(r,t) + \phi_i(t;\omega)\, d\tilde{x}_i(r,t)
&= A\big(\bar{x}(r,t) + \tilde{x}_k(r,t)\phi_k(t;\omega),\, t\big)\, \big(\bar{x}(r,t) + \tilde{x}_l(r,t)\phi_l(t;\omega)\big)\, dt \\
&\quad + B\big(\bar{x}(r,t) + \tilde{x}_m(r,t)\phi_m(t;\omega),\, t\big)\, dW(t;\omega)
\end{aligned}
\qquad (3.1)
$$

This is again the basic equation from which the DO governing equations for the mean, coefficients and modes are obtained. In the remaining part of the derivation, we omit the arguments (r,t) after the deterministic mean x̄(r,t) and modes x̃_i(r,t), and the arguments (t;ω) after the stochastic coefficients φ_i(t;ω) and the incremental Brownian motion dW(t;ω).
Evolution equation for mean field:
Applying the mean value operator to (3.1), we obtain the evolution equation for the mean of the stochastic field, given as

$$ d\bar{x} = E^{\omega}\left[A(\bar{x} + \tilde{x}_k\phi_k,\, t)\,(\bar{x} + \tilde{x}_l\phi_l)\right] dt + E^{\omega}\left[B(\bar{x} + \tilde{x}_m\phi_m,\, t)\, dW\right] \qquad (3.2) $$

Evolution equation for stochastic coefficients:
Taking the inner product of (3.1) with each of the modes x̃_i(r,t) and simplifying the resulting equation, we obtain the evolution equation for the stochastic coefficients, given as

$$
\begin{aligned}
d\phi_i &= \left\langle A(\bar{x} + \tilde{x}_j\phi_j,\, t)\,[\bar{x} + \tilde{x}_k\phi_k] - E^{\omega}\left[A(\bar{x} + \tilde{x}_l\phi_l,\, t)\,[\bar{x} + \tilde{x}_m\phi_m]\right],\; \tilde{x}_i \right\rangle dt \\
&\quad + \left\langle B(\bar{x} + \tilde{x}_n\phi_n,\, t)\, dW - E^{\omega}\left[B(\bar{x} + \tilde{x}_p\phi_p,\, t)\, dW\right],\; \tilde{x}_i \right\rangle
\end{aligned}
\qquad (3.3)
$$
Evolution equation for DO modes:
Instead of a projection of (3.1) onto the modes and a spatial integration, by duality, we now multiply with the stochastic coefficients and take a statistical average. Hence, multiplying (3.1) with φ_j(t;ω) and applying the mean value operator, the first term cancels since E^ω[φ_i(t;ω)] = 0, and we obtain

$$ E^{\omega}\left[d\phi_i\, \phi_j\right] \tilde{x}_i + E^{\omega}\left[\phi_i\phi_j\right] d\tilde{x}_i = E^{\omega}\left[A(\bar{x} + \tilde{x}_k\phi_k,\, t)\,[\bar{x} + \tilde{x}_l\phi_l]\, \phi_j\right] dt + E^{\omega}\left[B(\bar{x} + \tilde{x}_m\phi_m,\, t)\, dW\, \phi_j\right] \qquad (3.4) $$

The expression for E^ω[dφ_i φ_j] can be obtained by multiplying equation (3.3) by φ_j(t;ω) and applying the mean value operator. Substituting this expression in equation (3.4) and simplifying, we obtain the evolution equation for the DO modes, given as

$$
\begin{aligned}
d\tilde{x}_i &= \Big( E^{\omega}\left[A(\bar{x} + \tilde{x}_k\phi_k,\, t)\,[\bar{x} + \tilde{x}_l\phi_l]\, \phi_j\right] - \big\langle E^{\omega}\left[A(\bar{x} + \tilde{x}_n\phi_n,\, t)\,[\bar{x} + \tilde{x}_p\phi_p]\, \phi_j\right],\, \tilde{x}_m \big\rangle\, \tilde{x}_m \Big)\, C^{-1}_{\phi_j\phi_i}\, dt \\
&\quad + \Big( E^{\omega}\left[B(\bar{x} + \tilde{x}_q\phi_q,\, t)\, dW\, \phi_j\right] - \big\langle E^{\omega}\left[B(\bar{x} + \tilde{x}_l\phi_l,\, t)\, dW\, \phi_j\right],\, \tilde{x}_m \big\rangle\, \tilde{x}_m \Big)\, C^{-1}_{\phi_j\phi_i}
\end{aligned}
\qquad (3.5)
$$
Summary of DO governing equations for (1.1)

To summarize, we reinsert the arguments (r,t) after the deterministic mean x̄ and modes x̃_i, and the arguments (t;ω) after the stochastic coefficients φ_i and the incremental Brownian motion dW. The DO evolution equations for the mean, coefficients and modes have the final form

$$ d\bar{x}(r,t) = E^{\omega}\left[A\big(\bar{x}(r,t) + \tilde{x}_k(r,t)\phi_k(t;\omega),\, t\big)\big(\bar{x}(r,t) + \tilde{x}_l(r,t)\phi_l(t;\omega)\big)\right] dt + E^{\omega}\left[B\big(\bar{x}(r,t) + \tilde{x}_m(r,t)\phi_m(t;\omega),\, t\big)\, dW(t;\omega)\right] \qquad (3.6) $$

$$
\begin{aligned}
d\phi_i(t;\omega) &= \Big\langle A\big(\bar{x}(\bullet,t) + \tilde{x}_j(\bullet,t)\phi_j(t;\omega),\, t\big)\big[\bar{x}(\bullet,t) + \tilde{x}_k(\bullet,t)\phi_k(t;\omega)\big] \\
&\qquad\; - E^{\omega}\Big[A\big(\bar{x}(\bullet,t) + \tilde{x}_l(\bullet,t)\phi_l(t;\omega),\, t\big)\big[\bar{x}(\bullet,t) + \tilde{x}_m(\bullet,t)\phi_m(t;\omega)\big]\Big],\; \tilde{x}_i(\bullet,t) \Big\rangle\, dt \\
&\quad + \Big\langle B\big(\bar{x}(\bullet,t) + \tilde{x}_n(\bullet,t)\phi_n(t;\omega),\, t\big)\, dW(t;\omega) - E^{\omega}\Big[B\big(\bar{x}(\bullet,t) + \tilde{x}_p(\bullet,t)\phi_p(t;\omega),\, t\big)\, dW(t;\omega)\Big],\; \tilde{x}_i(\bullet,t) \Big\rangle
\end{aligned}
\qquad (3.7)
$$

$$
\begin{aligned}
d\tilde{x}_i(r,t) &= \Big( E^{\omega}\Big[A\big(\bar{x}(r,t) + \tilde{x}_k(r,t)\phi_k(t;\omega),\, t\big)\big[\bar{x}(r,t) + \tilde{x}_l(r,t)\phi_l(t;\omega)\big]\, \phi_j(t;\omega)\Big] \\
&\qquad\; - \Big\langle E^{\omega}\Big[A\big(\bar{x}(\bullet,t) + \tilde{x}_n(\bullet,t)\phi_n(t;\omega),\, t\big)\big[\bar{x}(\bullet,t) + \tilde{x}_p(\bullet,t)\phi_p(t;\omega)\big]\, \phi_j(t;\omega)\Big],\, \tilde{x}_m(\bullet,t) \Big\rangle\, \tilde{x}_m(r,t) \Big)\, C^{-1}_{\phi_j\phi_i}\, dt \\
&\quad + \Big( E^{\omega}\Big[B\big(\bar{x}(r,t) + \tilde{x}_q(r,t)\phi_q(t;\omega),\, t\big)\, dW(t;\omega)\, \phi_j(t;\omega)\Big] \\
&\qquad\; - \Big\langle E^{\omega}\Big[B\big(\bar{x}(\bullet,t) + \tilde{x}_l(\bullet,t)\phi_l(t;\omega),\, t\big)\, dW(t;\omega)\, \phi_j(t;\omega)\Big],\, \tilde{x}_m(\bullet,t) \Big\rangle\, \tilde{x}_m(r,t) \Big)\, C^{-1}_{\phi_j\phi_i}
\end{aligned}
\qquad (3.8)
$$
DO governing equations for quadratic-nonlinear systems (1.2)

For the special case when A and B are at most linear in X(r,t;ω), as represented by equation (1.2), we can further simplify equations (3.6), (3.7) and (3.8). After performing these simplifications, the final evolution equation for the mean x̄(r,t) of the stochastic field is given as

$$ d\bar{x} = A_0(t)\,\bar{x}\, dt + A_1(\bar{x},t)\,\bar{x}\, dt + A_1(\tilde{x}_i,t)\,\tilde{x}_j\, E^{\omega}\left[\phi_i\phi_j\right] dt + B_1(\tilde{x}_i,t)\, E^{\omega}\left[\phi_i\, dW\right] \qquad (3.9) $$

The evolution equation for the stochastic coefficients φ_i(t;ω) has the form

$$
\begin{aligned}
d\phi_i &= \left\langle A_0(t)\,\tilde{x}_j,\, \tilde{x}_i\right\rangle \phi_j\, dt + \left\langle A_1(\bar{x},t)\,\tilde{x}_j,\, \tilde{x}_i\right\rangle \phi_j\, dt + \left\langle A_1(\tilde{x}_k,t)\,\bar{x},\, \tilde{x}_i\right\rangle \phi_k\, dt \\
&\quad + \left\langle A_1(\tilde{x}_k,t)\,\tilde{x}_j,\, \tilde{x}_i\right\rangle \left(\phi_k\phi_j - E^{\omega}\left[\phi_k\phi_j\right]\right) dt \\
&\quad + \left\langle B_0(t),\, \tilde{x}_i\right\rangle dW + \left\langle B_1(\bar{x},t),\, \tilde{x}_i\right\rangle dW + \left\langle B_1(\tilde{x}_k,t),\, \tilde{x}_i\right\rangle \left(\phi_k\, dW - E^{\omega}\left[\phi_k\, dW\right]\right)
\end{aligned}
\qquad (3.10)
$$

Finally, the evolution equation for the DO modes is given as

$$
\begin{aligned}
d\tilde{x}_i &= \left[A_0(t)\,\tilde{x}_i - \left\langle A_0(t)\,\tilde{x}_i,\, \tilde{x}_m\right\rangle \tilde{x}_m\right] dt + \left[A_1(\bar{x},t)\,\tilde{x}_i - \left\langle A_1(\bar{x},t)\,\tilde{x}_i,\, \tilde{x}_m\right\rangle \tilde{x}_m\right] dt \\
&\quad + \left[A_1(\tilde{x}_k,t)\,\bar{x} - \left\langle A_1(\tilde{x}_k,t)\,\bar{x},\, \tilde{x}_m\right\rangle \tilde{x}_m\right] E^{\omega}\left[\phi_k\phi_j\right] C^{-1}_{\phi_j\phi_i}\, dt \\
&\quad + \left[A_1(\tilde{x}_k,t)\,\tilde{x}_l - \left\langle A_1(\tilde{x}_k,t)\,\tilde{x}_l,\, \tilde{x}_m\right\rangle \tilde{x}_m\right] E^{\omega}\left[\phi_k\phi_l\phi_j\right] C^{-1}_{\phi_j\phi_i}\, dt \\
&\quad + \left[B_0(t) - \left\langle B_0(t),\, \tilde{x}_m\right\rangle \tilde{x}_m\right] E^{\omega}\left[dW\, \phi_j\right] C^{-1}_{\phi_j\phi_i} + \left[B_1(\bar{x},t) - \left\langle B_1(\bar{x},t),\, \tilde{x}_m\right\rangle \tilde{x}_m\right] E^{\omega}\left[dW\, \phi_j\right] C^{-1}_{\phi_j\phi_i} \\
&\quad + \left[B_1(\tilde{x}_k,t) - \left\langle B_1(\tilde{x}_k,t),\, \tilde{x}_m\right\rangle \tilde{x}_m\right] E^{\omega}\left[\phi_k\, dW\, \phi_j\right] C^{-1}_{\phi_j\phi_i}
\end{aligned}
\qquad (3.11)
$$
3.2 Generalized Polynomial Chaos
We consider again the class of stochastic dynamical systems whose governing equation is (1.1) and derive its gPC evolution equations. We then address its special quadratic-nonlinearity case (1.2).

Derivation of gPC governing equations for (1.1)

Substituting the polynomial chaos expansion from equation (2.43) into equation (1.1) and taking a stochastic Galerkin projection, we have

$$ dx_i(r,t)\, E^{\omega}\left[\Psi_i(\omega)\,\Psi_k(\omega)\right] = E^{\omega}\left[A\big(x_j(r,t)\Psi_j(\omega),\, t\big)\, x_l(r,t)\,\Psi_l(\omega)\,\Psi_k(\omega)\right] dt + E^{\omega}\left[B\big(x_j(r,t)\Psi_j(\omega),\, t\big)\, dW(t;\omega)\,\Psi_k(\omega)\right] \qquad (3.12) $$

The evolution equations for the coefficients are then given as

$$ dx_i(r,t) = E^{\omega}\left[A\big(x_k(r,t)\Psi_k(\omega),\, t\big)\, x_l(r,t)\,\Psi_l(\omega)\,\Psi_j(\omega)\right] C^{-1}_{\Psi,ji}\, dt + E^{\omega}\left[B\big(x_k(r,t)\Psi_k(\omega),\, t\big)\, dW(t;\omega)\,\Psi_j(\omega)\right] C^{-1}_{\Psi,ji} \qquad (3.13) $$

gPC governing equations for quadratic-nonlinear systems (1.2)

For the special case when A and B are at most linear in X(r,t;ω), as represented by equation (1.2), we can further simplify (3.13). Inserting (2.43) into (1.2), we first obtain

$$ A\big(x_k(r,t)\Psi_k(\omega),\, t\big) = A_0(t) + A_1\big(x_k(r,t),\, t\big)\,\Psi_k(\omega), \qquad B\big(x_k(r,t)\Psi_k(\omega),\, t\big) = B_0(t) + B_1\big(x_k(r,t),\, t\big)\,\Psi_k(\omega) \qquad (3.14) $$

where A_0(t) and B_0(t) are functions of time only and A_1(x_k(r,t),t) and B_1(x_k(r,t),t) are linear in x_k(r,t). Substituting (3.14) into (3.13) and simplifying, we obtain the final evolution equations for this special case, which are given as

$$
\begin{aligned}
dx_i(r,t) &= A_0(t)\, x_i(r,t)\, dt + A_1\big(x_k(r,t),\, t\big)\, x_l(r,t)\, E^{\omega}\left[\Psi_k(\omega)\Psi_l(\omega)\Psi_j(\omega)\right] C^{-1}_{\Psi,ji}\, dt \\
&\quad + B_0(t)\, E^{\omega}\left[dW(t;\omega)\,\Psi_j(\omega)\right] C^{-1}_{\Psi,ji} + B_1\big(x_k(r,t),\, t\big)\, E^{\omega}\left[\Psi_k(\omega)\, dW(t;\omega)\,\Psi_j(\omega)\right] C^{-1}_{\Psi,ji}
\end{aligned}
\qquad (3.15)
$$

where C_{Ψ,ij} is given by (2.47).
3.3 Time-dependent Generalized Polynomial Chaos

As presented in section 2.3.3, the TDgPC scheme of Gerritsma et al. (2010) was derived as a modification of the gPC scheme and is useful for longer time integration of nonlinear dynamical systems with uncertainty in input parameters. It evolves the deterministic coefficients as the gPC scheme does, but with an additional re-initialization of the stochastic basis functions, carried out whenever the nonlinearity effects dominate. Next, we first
motivate the need for the TDgPC scheme and then present its re-initialization step.
We still consider the nonlinear stochastic dynamical system represented by (1.1). We
begin with the gPC expansion given by (2.43) and time integrate (3.15) to obtain the
time evolution of the deterministic coefficients. For nonlinear systems, the magnitude of
the coefficients of higher-order polynomials would increase in time, indicating that the
stochastic characteristics of the solution are changing. When the magnitude of the nonlinear part reaches a certain threshold level (which is decided heuristically based on the
problem at hand), we halt the gPC scheme and perform a change of stochastic variables,
which is called re-initialization. This is obtained next, following Gerritsma et al. (2010).
Basis re-initialization

Assuming that the threshold for re-initialization is reached at time t = t_1, the change of random variables is given by

$$ \zeta_i(t_1;\omega) = X^{(i)}(r, t_1; \omega), \qquad 1 \le i \le N \qquad (3.16) $$

where ζ_i(t_1;ω), 1 ≤ i ≤ N, represent the stochastic variables to be used for generating the new polynomial chaos basis and the scalar X^{(i)}(r,t_1;ω) represents the i-th component of the stochastic field X(r,t_1;ω) in the N-dimensional state space, given by

$$ X^{(i)}(r, t_1; \omega) = \sum_{k=0}^{s-1} x_k^{(i)}(r, t_1)\, \Psi_k(\omega) \qquad (3.17) $$

where similarly the scalar x_k^{(i)}(r,t_1) is the i-th component of the N-dimensional deterministic coefficient x_k(r,t_1) corresponding to the polynomial basis function Ψ_k(ω). Note that s is the total number of terms in the re-initialized polynomial chaos expansion and is given by equation (2.40) with ℓ = N.
by equation (2.40) with f = N.
Next, we use Gram-Schmidt orthogonalization to generate a set of uni-variate poly-
70
nomial basis functions for each of the stochastic variables
1
new(t;W)
*k"(ti;
W)
(j, 1 < i < N, given by
=
k k-1
- k-i
W )]
[
_=(0
jOnew
[(tl
(3.18)
(ti;W.
1
ew(ti;w ),
e (ti;w))
< k <p
-
Subsequently, we use the uni-variate polynomial basis functions to generate the corresponding multi-variate polynomial basis. We have

$$ \Psi_{\alpha}^{\mathrm{new}}(t_1;\omega) = \prod_{j=1}^{N} \psi_{\alpha_j}^{j,\mathrm{new}}\big(\zeta_j(t_1;\omega)\big) \qquad (3.19) $$

Using a single-dimensional representation for the multi-index and arranging the basis functions by increasing order, we observe that the first basis function is a constant and the next N multi-variate polynomial basis functions are linear functions of the random variables ζ_i, 1 ≤ i ≤ N, given as

$$ \Psi_0^{\mathrm{new}}(t_1;\omega) = 1, \qquad \Psi_k^{\mathrm{new}}(t_1;\omega) = \psi_1^{k,\mathrm{new}}\big(\zeta_k(t_1;\omega)\big) = \zeta_k(t_1;\omega) - E^{\omega}\left[\zeta_k(t_1;\omega)\right], \qquad 1 \le k \le N \qquad (3.20) $$

The next set of multi-variate polynomial functions are quadratic functions of ζ_i, 1 ≤ i ≤ N, then cubic and so on. Once we obtain the new polynomial basis, we need to re-initialize the deterministic coefficients. The re-initialized coefficients are given as

$$ x_0^{\mathrm{new}}(r,t_1) = E^{\omega}\left[X(r,t_1;\omega)\right], \qquad x_k^{\mathrm{new}}(r,t_1) = e_k, \quad 1 \le k \le N, \qquad x_k^{\mathrm{new}}(r,t_1) = 0, \quad N+1 \le k \le s-1 \qquad (3.21) $$

where e_k represents an N-dimensional unit vector such that

$$ e_k^{(i)} = \begin{cases} 1, & \text{if } i = k \\ 0, & \text{otherwise} \end{cases} \qquad (3.22) $$

and 0 represents an N-dimensional zero vector. The total number of terms in the new polynomial chaos expansion is given (using equation (2.40)) as

$$ s = \binom{N+p}{N} = \frac{(N+p)!}{N!\, p!} \qquad (3.23) $$

It is observed that in the re-initialized polynomial chaos expansion, the coefficients corresponding to quadratic and other higher-order polynomials are zero, and hence the expansion is linear in terms of the polynomial basis functions. Once the polynomial basis has been re-initialized, the time evolution of the coefficients is continued in the same manner as in the gPC scheme, until the threshold is reached again.
3.4 New Modified TDgPC Scheme for Handling External Stochastic Forcing
All existing PC schemes known to us, including gPC and TDgPC, have fundamental
limitations in modeling stochastic noise for long time intervals (as also illustrated in
chapter 5). One of our research goals is to evaluate and obtain methodologies that can
predict uncertainty in dynamical systems with external stochastic forcing, as employed
for oceanic systems (Lermusiaux, 2006).
For such predictions, one needs to integrate
stochastic noise over sufficiently long time intervals. In this section, we introduce and
derive a new Modified TDgPC (MTDgPC) scheme capable of modeling stochastic noise
over arbitrarily large time intervals.
We again consider the stochastic dynamical system represented by (1.1). Consider the
gPC expansion of the field variable X(r, t; w) given by (2.43), with a modification that
the polynomial basis functions are now a function of time as well. Thus, we have
X(r, t; \omega) = \sum_{i=0}^{s-1} \hat{x}_i(r, t) \, \Psi_i(t; \omega)            (3.24)
The main idea of our new MTDgPC scheme is to modify the polynomial basis functions
Ψ_i(t; ω) in time so as to include the effect of the external stochastic forcing as it occurs.
The algorithm is as follows:
1. Evolve the deterministic coefficients x̂_i(r, t) at each time step using the classic
   TDgPC scheme, but based on the system dynamics without new stochastic forcing,
   i.e. neglecting the new stochastic noise dW(t; ω) in (1.1) during the time step. This
   involves evolving the coefficients using the gPC scheme, followed by re-initialization
   of the polynomial basis. Thus, we apply the TDgPC scheme to the differential
   equation

   dX(r, t; \omega) = A(X(r, t; \omega), t) \, X(r, t; \omega) \, dt            (3.25)
   Let the corresponding time-evolved coefficients obtained by (3.21) be represented
   as x̂_i^new(r, t + dt). It is observed from (3.21) that for these coefficients we have

   \langle \hat{x}_i^{new}(\cdot, t+dt), \, \hat{x}_j^{new}(\cdot, t+dt) \rangle = e_i \cdot e_j = \delta_{ij}, \qquad 1 \le i, j \le N            (3.26)

   Thus, the set of deterministic coefficients { x̂_i^new(r, t + dt) : 1 ≤ i ≤ N } spans the
   entire N-dimensional physical space. The intermediate solution of the stochastic
   field at time t + dt is then represented as

   X_{int}(r, t+dt; \omega) = \sum_{i=0}^{s-1} \hat{x}_i^{new}(r, t+dt) \, \Psi_i(t; \omega)            (3.27)
2. Next, we modify the basis functions so as to add the effect of the new stochastic
noise (both multiplicative and additive).
We define w̃_i as the projection of the stochastic noise term on the new set of
deterministic coefficients obtained in step 1. We have

   \tilde{w}_i = \langle B(X(\cdot, t; \omega), t) \, dW(t; \omega), \; \hat{x}_i^{new}(\cdot, t+dt) \rangle, \qquad 1 \le i \le N            (3.28)

The new set of polynomial basis functions Ψ_i^new(t + dt; ω) is then given as

   \Psi_i^{new}(t+dt; \omega) = \begin{cases} \Psi_i(t; \omega), & \text{if } i = 0 \\ \Psi_i(t; \omega) + \tilde{w}_i, & \text{if } 1 \le i \le N \\ \Psi_i(t; \omega), & \text{if } N+1 \le i \le s-1 \end{cases}            (3.29)
where s is given by (3.23).
The final form of solution of the stochastic field at time t + dt is represented as
X(r, t+dt; \omega) = \sum_{i=0}^{s-1} \hat{x}_i^{new}(r, t+dt) \, \Psi_i^{new}(t+dt; \omega)            (3.30)

where the deterministic coefficients x̂_i^new(r, t + dt) and the stochastic polynomial basis
functions Ψ_i^new(t + dt; ω) are as defined in steps 1 and 2 of the algorithm respectively.
We now show that the final solution of the stochastic field at time t + dt is indeed the
solution of (1.1).
To do so, we substitute for Ψ_i^new(t + dt; ω) from equation (3.29) into equation (3.30),
and obtain

X(r, t+dt; \omega) = \sum_{i=0}^{s-1} \hat{x}_i^{new}(r, t+dt) \, \Psi_i^{new}(t+dt; \omega)
  = \hat{x}_0^{new}(r, t+dt) \, \Psi_0^{new}(t+dt; \omega) + \sum_{i=1}^{N} \hat{x}_i^{new}(r, t+dt) \, \Psi_i^{new}(t+dt; \omega) + \sum_{i=N+1}^{s-1} \hat{x}_i^{new}(r, t+dt) \, \Psi_i^{new}(t+dt; \omega)
  = \hat{x}_0^{new}(r, t+dt) \, \Psi_0(t; \omega) + \sum_{i=1}^{N} \hat{x}_i^{new}(r, t+dt) \left[ \Psi_i(t; \omega) + \tilde{w}_i \right] + \sum_{i=N+1}^{s-1} \hat{x}_i^{new}(r, t+dt) \, \Psi_i(t; \omega)            (3.31)
  = \sum_{i=0}^{s-1} \hat{x}_i^{new}(r, t+dt) \, \Psi_i(t; \omega) + \sum_{i=1}^{N} \tilde{w}_i \, \hat{x}_i^{new}(r, t+dt)
  = X_{int}(r, t+dt; \omega) + \sum_{i=1}^{N} \tilde{w}_i \, \hat{x}_i^{new}(r, t+dt)
Using the definition of w̃_i from equation (3.28), we have

X(r, t+dt; \omega) = X_{int}(r, t+dt; \omega) + \sum_{i=1}^{N} \langle B(X(\cdot, t; \omega), t) \, dW(t; \omega), \, \hat{x}_i^{new}(\cdot, t+dt) \rangle \, \hat{x}_i^{new}(r, t+dt)            (3.32)
  = X_{int}(r, t+dt; \omega) + \sum_{i=1}^{N} \langle B(X(\cdot, t; \omega), t) \, dW(t; \omega), \, e_i \rangle \, e_i
  = X_{int}(r, t+dt; \omega) + B(X(r, t; \omega), t) \, dW(t; \omega)
which is the correct representation of the solution field at time t + dt, including the effect
of stochastic noise.
Considering computations, the time integration of the system dynamics without the
stochastic forcing terms and the integration of these new stochastic forcing terms alone
need to be compatible. In the numerical sense, the scheme splits the governing dynamics
in two parts (deterministic dynamics and stochastic forcing), which is straightforward to
handle for explicit schemes. For implicit or multi-step time-discretization schemes, some numerical choices will lead to optimal within-time-step coupling and others will not.
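As a schematic illustration of one MTDgPC time step, the Python sketch below carries out step 2 of the algorithm, equations (3.28)-(3.29), on Monte Carlo realizations of the basis functions; the deterministic TDgPC propagation and re-initialization of step 1 are assumed to have already produced the coefficient matrix and the basis samples. All names, sizes, and the illustrative forcing matrix are assumptions of this sketch, not the thesis implementation.

import numpy as np

def mtdgpc_noise_update(x_new, Psi, B_dW, N):
    """Fold the stochastic forcing over one step into the basis (eqs. 3.28-3.29).

    x_new : (N, s) coefficients x_hat_i^new(t+dt); columns 1..N are the unit vectors e_i
    Psi   : (s, M) basis functions evaluated on M realizations
    B_dW  : (N, M) realizations of B(X, t) dW over the step
    """
    Psi_new = Psi.copy()
    for i in range(1, N + 1):
        # projection of the noise on the i-th coefficient (eq. 3.28);
        # since x_hat_i^new = e_i this simply picks the i-th noise component
        w_tilde = x_new[:, i] @ B_dW
        Psi_new[i] = Psi[i] + w_tilde                     # eq. (3.29), 1 <= i <= N
    return Psi_new

# toy usage: N = 3 states, s = 10 basis terms, M realizations
rng = np.random.default_rng(1)
N, s, M, dt = 3, 10, 5000, 0.01
x_new = np.zeros((N, s)); x_new[:, 1:N + 1] = np.eye(N)   # structure of eq. (3.21)
Psi = np.vstack([np.ones(M), rng.standard_normal((s - 1, M))])
B = 0.5 * np.eye(N)                                       # arbitrary illustrative forcing matrix
B_dW = B @ (np.sqrt(dt) * rng.standard_normal((N, M)))
X_next = x_new @ mtdgpc_noise_update(x_new, Psi, B_dW, N) # eq. (3.30): realizations at t+dt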
3.5
New Reduced Space and Reduced Order Polynomial Chaos Scheme
The MTDgPC scheme presented in section 3.4 can model stochastic noise over arbitrarily
large time intervals.
However, as seen later in chapter 5, it has limited applicability for realistic ocean modeling due to its high computational cost. Hence, for high-dimensional dynamical systems that concentrate most of their stochastic energy in a time-dependent subspace, we propose another new reduced space and reduced order polynomial
chaos scheme. Next, we first review the known concepts of Singular Value Decomposition
(SVD) and Karhunen-Loeve (K-L) expansion. Then, the algorithm for the new SVD and
K-L expansion based generalized polynomial chaos (KLgPC) scheme is presented and the
75
evolution equations for the stochastic system represented by equation (1.1) using the new
KLgPC scheme are derived.
3.5.1
Singular value decomposition
A singular value decomposition (SVD) of a real m x n matrix A has the form
A = U \Sigma V^T            (3.33)

where U is an m x n orthogonal matrix of size m x m, V is an n x n orthogonal matrix and \Sigma is an
m x n matrix with elements

\Sigma_{ij} = \begin{cases} \sigma_i, & \text{if } 1 \le i = j \le r \\ 0, & \text{otherwise} \end{cases}            (3.34)

where r here is the rank of matrix A. By definition, \sigma_i^2 are the eigenvalues of A A^T. These
\sigma_i are called the singular values of matrix A and are defined such that \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0.
The matrix A can be represented as

A = \sum_{i=1}^{r} \sigma_i \, u_i \, v_i^T            (3.35)

where u_i and v_i are the ith column vectors of matrices U and V respectively. A low rank
approximation of A is given by a truncated SVD, which is represented as

A_{red} = \sum_{i=1}^{r_{red}} \sigma_i \, u_i \, v_i^T            (3.36)

where r_{red} \le r.
Consider an n x m data matrix X (with stochastic mean removed), representing m
observations of a stochastic field u(x, t; ω) at time t ∈ T in a physical domain D ⊂ R^n.
The covariance matrix of the random field is then defined as C_{u(t)u(t)} = X X^T.
The eigenvalues λ_i = σ_i^2 of the covariance matrix (where σ_i are the singular values of
the data matrix X) represent the magnitude of the variance of the stochastic field in
the directions of the corresponding eigenvectors. This idea is central to the concept of
principal component analysis (PCA) (see, e.g., Jolliffe (1986)), and is used in several
uncertainty quantification schemes, including proper orthogonal decomposition (POD)
and error subspace statistical estimation (ESSE).
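As an illustration of equations (3.33)-(3.36), the short Python sketch below forms a truncated SVD of a mean-removed data matrix and reports the fraction of variance retained by the leading modes; it relies only on numpy, uses synthetic data, and is not part of the thesis solver.

import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 200))              # n x m data matrix
X = X - X.mean(axis=1, keepdims=True)           # remove the sample mean

U, sigma, Vt = np.linalg.svd(X, full_matrices=False)
r_red = 10
X_red = U[:, :r_red] * sigma[:r_red] @ Vt[:r_red, :]     # truncated SVD, cf. (3.36)

# eigenvalues of the covariance X X^T are the squared singular values
retained = np.sum(sigma[:r_red] ** 2) / np.sum(sigma ** 2)
print(f"variance retained by r_red = {r_red} modes: {retained:.3f}")
print("reconstruction error:", np.linalg.norm(X - X_red))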
3.5.2
Karhunen-Loeve expansion
The Karhunen-Loeve expansion (see Ghanem and Spanos (1991), Le Maitre and Knio
(2010)), like the polynomial chaos expansion, is a common spectral representation of a
stochastic process u(x; w). It is analogous to principal component analysis (PCA) in an
infinite dimensional setting. Loosely speaking, the Karhunen-Loeve theorem states that
any square integrable random process u(x; w) with a continuous covariance function can
be represented as an infinite sum of uncorrelated random variables.
Using the Karhunen-Loeve theorem, a stochastic process u(x; ω) defined on a probability
space (Ω, F, P) and a finite domain D with a known covariance function C_{uu}(x_1, x_2)
can be represented as

u(x; \omega) = \bar{u}(x) + \sum_{i=1}^{\infty} \sqrt{\lambda_i} \, Z_i(\omega) \, \hat{u}_i(x)            (3.37)

where \bar{u}(x) = E^{\omega}[u(x; \omega)] is the mean value of the stochastic process, Z_i(ω) are uncorrelated
random variables with zero mean, and λ_i and \hat{u}_i(x) are the eigenvalues and
eigenfunctions of the covariance function C_{uu}(x_1, x_2). In short, we have the following
properties and definitions,

\int_D C_{uu}(x_1, x_2) \, \hat{u}_i(x_2) \, dx_2 = \lambda_i \, \hat{u}_i(x_1)
\int_D \hat{u}_i(x) \, \hat{u}_j(x) \, dx = \delta_{ij}
Z_i(\omega) = \frac{1}{\sqrt{\lambda_i}} \int_D \left( u(x; \omega) - \bar{u}(x) \right) \hat{u}_i(x) \, dx            (3.38)
E^{\omega}[Z_i(\omega)] = 0
E^{\omega}[Z_i(\omega) Z_j(\omega)] = \delta_{ij}
Furthermore, the total variance of u(x; w) is given as
\int_D \mathrm{Var}[u(x; \omega)] \, dx = \sum_{i=1}^{\infty} \lambda_i            (3.39)
Next, let us consider a random field u(x, t; ω) defined on the probability space (Ω, F, P),
spatial domain D and time domain T, with a continuous covariance function C_{u(t)u(t)}(x_1, x_2).
For any t ∈ T, the Karhunen-Loeve expansion of the stochastic field is given as

u(x, t; \omega) = \bar{u}(x, t) + \sum_{i=1}^{\infty} \sqrt{\lambda_i(t)} \, Z_i(t; \omega) \, \hat{u}_i(x, t)            (3.40)
The K-L expansion represented by equation (3.40) is exact, but in order to facilitate
computations, needs to be truncated to a finite number of terms. The set of r_red terms
that maximizes the total variance of the stochastic field and hence leads to the most
accurate representation (in this variance sense) is given by the first r_red terms of the
expansion, assuming that the eigenvalues {λ_i(t)} have been arranged in decreasing
order of magnitude. Hence, the truncated Karhunen-Loeve expansion is given as

u_{red}(x, t; \omega) = \bar{u}(x, t) + \sum_{i=1}^{r_{red}} \sqrt{\lambda_i(t)} \, Z_i(t; \omega) \, \hat{u}_i(x, t)            (3.41)
The truncated K-L expansion Ured(X, t; w) converges to u(x, t; w) in the mean square sense
(given by Mercer's theorem). Therefore, we have
E^{\omega}\!\left[ \int_D \left| u_{red}(x, t; \omega) - u(x, t; \omega) \right|^2 dx \right] = \sum_{i = r_{red}+1}^{\infty} \lambda_i(t) \;\to\; 0 \quad \text{as } r_{red} \to \infty            (3.42)
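The truncation property (3.41)-(3.42) can be checked numerically. The sketch below builds a sample covariance from realizations of a random field on a 1-D grid, truncates its eigen-expansion, and compares the mean-square truncation error with the sum of the discarded eigenvalues; the field statistics and grid are arbitrary choices made only for this illustration.

import numpy as np

rng = np.random.default_rng(3)
n_x, M = 64, 20000
x = np.linspace(0.0, 1.0, n_x)
# synthetic zero-mean field with a smooth (squared-exponential) covariance
C = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.2) ** 2)
L = np.linalg.cholesky(C + 1e-10 * np.eye(n_x))
u = L @ rng.standard_normal((n_x, M))                    # realizations u(x; omega)

lam, phi = np.linalg.eigh(np.cov(u))                     # eigenpairs of the sample covariance
lam, phi = lam[::-1], phi[:, ::-1]                       # decreasing order
r_red = 5
Z = phi[:, :r_red].T @ u                                 # projections on the retained modes
u_red = phi[:, :r_red] @ Z                               # truncated K-L reconstruction, cf. (3.41)

mse = np.mean(np.sum((u - u_red) ** 2, axis=0) / n_x)    # approximates E[ int |u_red - u|^2 dx ]
print(mse, np.sum(lam[r_red:]) / n_x)                    # close, as predicted by (3.42)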
3.5.3
KLgPC algorithm and derivation of evolution equations
The main idea of the new KLgPC scheme is to utilize the concepts of SVD and K-L expansion to obtain a reduced space and reduced order representation of the stochastic field
at every time step, taking into account the effect of stochastic noise, and to use this truncated representation to implement a polynomial chaos based uncertainty quantification
scheme which minimizes the computational effort involved, while solving for the stochastic
field with sufficient accuracy. In some sense, it combines the uncertainty prediction part of the ESSE scheme with gPC.
As in section 3.4, we again consider the stochastic dynamical system (1.1) and assume
that at time t, the solution field X(r, t; w) is represented in the form of a gPC expansion
given by (3.24). The algorithm for our KLgPC scheme is as follows:
1. Evolve the deterministic coefficients x̂_i(r, t) at each time step using the gPC scheme,
   but based on the system dynamics without the new stochastic forcing, i.e. neglecting
   the new stochastic noise dW(t; ω) in (1.1) during the time step. Thus, we apply the
   gPC scheme to the differential equation (3.25). Let the time evolved coefficients be
   represented as x̂_i(r, t + dt). The intermediate or predictor solution field (without
   including the effect of the new external stochastic forcing) is then represented as

   X_{int}(r, t+dt; \omega) = \sum_{i=0}^{s-1} \hat{x}_i(r, t+dt) \, \Psi_i(t; \omega)            (3.43)
which is similar to (3.27) from step 1 of the modified TDgPC algorithm. However,
only an optimally truncated version of this expansion will be used in KLgPC. This
is developed next in steps 2 and 3.
2. Approximate the intermediate stochastic field X_int(r, t + dt; ω) using a reduced spectral
   representation. We have from equation (3.43)

   X_{int}(r, t+dt; \omega) = \hat{x}_0(r, t+dt) \, \Psi_0(t; \omega) + \sum_{i=1}^{s-1} \hat{x}_i(r, t+dt) \, \Psi_i(t; \omega)
     = \hat{x}_0(r, t+dt) \, \Psi_0(t; \omega) + \hat{X}(r, t+dt) \, \Psi(t; \omega)            (3.44)

   where X̂(r, t + dt) is an N x (s - 1) matrix with column vectors x̂_i(r, t + dt), 1 ≤ i ≤ (s - 1),
   and Ψ(t; ω) is an (s - 1) dimensional column vector with elements Ψ_i(t; ω), 1 ≤ i ≤ (s - 1).
   Here, Ψ_0(t; ω) = 1 and x̂_0(r, t + dt) represents the mean or expected value of the stochastic
   field, i.e., x̂_0(r, t + dt) = E^ω[X_int(r, t + dt; ω)] = X̄_int(r, t + dt). Taking the complete SVD
   of matrix X̂(r, t + dt) and substituting it in equation (3.44), we have

   X_{int}(r, t+dt; \omega) = \bar{X}_{int}(r, t+dt) + \sum_{i=1}^{\hat{r}} \hat{\sigma}_i \, \hat{u}_i \, \hat{v}_i^T \Psi(t; \omega)            (3.45)

   where r̂ is the rank of matrix X̂(r, t + dt), σ̂_i are its singular values and û_i and v̂_i
   are the left and right singular vectors respectively. Truncating the singular value
   decomposition to r̂_red terms (r̂_red ≤ r̂), we get

   X_{int}(r, t+dt; \omega) \approx \bar{X}_{int}(r, t+dt) + \sum_{i=1}^{\hat{r}_{red}} \hat{\sigma}_i \, \hat{u}_i \, \hat{v}_i^T \Psi(t; \omega)            (3.46)

   Defining Z_i(t; ω) = v̂_i^T Ψ(t; ω), we rewrite equation (3.46) to get

   X_{int}(r, t+dt; \omega) \approx \bar{X}_{int}(r, t+dt) + \sum_{i=1}^{\hat{r}_{red}} \hat{\sigma}_i \, Z_i(t; \omega) \, \hat{u}_i            (3.47)
which is similar to a truncated K-L expansion, as defined by (3.41).
3. Next, we use the truncated spectral representation (3.47) to obtain a gPC expansion
   of the intermediate stochastic field. First, we define new random variables ζ_i(t + dt; ω)
   such that

   \zeta_i(t+dt; \omega) = \hat{\sigma}_i \, Z_i(t; \omega), \qquad 1 \le i \le \hat{r}_{red}            (3.48)

   Using these new random variables, we generate a new set of polynomial basis functions
   Ψ_i^new(t; ω), 0 ≤ i ≤ s_red - 1, where s_red = (r̂_red + p)! / (r̂_red! p!), using Gram-Schmidt
   orthogonalization, following ideas similar to that of the TDgPC scheme, using equations
   (3.18) and (3.19). The corresponding deterministic coefficients are then similar to
   (3.21) and are given as

   \hat{x}_0^{new}(r, t+dt) = E^{\omega}[X_{int}(r, t+dt; \omega)]
   \hat{x}_k^{new}(r, t+dt) = \hat{u}_k, \qquad 1 \le k \le \hat{r}_{red}            (3.49)
   \hat{x}_k^{new}(r, t+dt) = 0, \qquad \hat{r}_{red}+1 \le k \le s_{red}-1

   Thus, the gPC expansion of the intermediate solution field is given as

   X_{int}(r, t+dt; \omega) = \sum_{i=0}^{s_{red}-1} \hat{x}_i^{new}(r, t+dt) \, \Psi_i^{new}(t; \omega)            (3.50)

   where x̂_i^new(r, t + dt) and Ψ_i^new(t; ω) are as defined above. This is the KLgPC approximation
   of equation (3.43), and it is to this equation (3.50) that the new stochastic noise is added.
4. Modify the stochastic basis functions so as to add the effect of new stochastic noise
over dt (both additive and multiplicative noise), using an approach similar to our
modified TDgPC scheme. We define
   \tilde{w}_i = \langle B(X(r, t; \omega), t) \, dW(t; \omega), \; \hat{x}_i^{new}(r, t+dt) \rangle, \qquad 1 \le i \le \hat{r}_{red}            (3.51)

   as the projection of the stochastic noise term on the set of deterministic coefficients.
   The new set of polynomial basis functions Ψ_i^new(t + dt; ω) is then given as

   \Psi_i^{new}(t+dt; \omega) = \begin{cases} 1, & \text{if } i = 0 \\ \Psi_i^{new}(t; \omega) + \tilde{w}_i, & \text{if } 1 \le i \le \hat{r}_{red} \\ \Psi_i^{new}(t; \omega), & \text{if } \hat{r}_{red}+1 \le i \le s_{red}-1 \end{cases}            (3.52)
The final form of solution of the stochastic field at time t + dt is represented as
X(r, t+dt; \omega) = \sum_{i=0}^{s_{red}-1} \hat{x}_i^{new}(r, t+dt) \, \Psi_i^{new}(t+dt; \omega)            (3.53)

where the deterministic coefficients x̂_i^new(r, t + dt) and the stochastic polynomial basis
functions Ψ_i^new(t + dt; ω) are as defined previously in the algorithm. Using an analysis
similar to that represented in equations (3.31) and (3.32), it can be shown that the solution
represented by equation (3.53) is indeed the correct representation of the solution field at
time t + dt, including the effect of stochastic noise, but only considering the dominant
components of the two terms in (1.1).
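A compact Python sketch of steps 2-4 of the KLgPC algorithm, acting on Monte Carlo realizations of the basis functions, is given below: the intermediate field of step 1 is assumed given, its coefficient matrix is truncated by SVD as in (3.44)-(3.47), and the noise projections of (3.51)-(3.52) are folded into the retained (linear) directions. The Gram-Schmidt construction of the full reduced-order basis in step 3 is abstracted away, and all names and sizes are illustrative assumptions.

import numpy as np

def klgpc_noise_step(x0, Xhat, Psi, B_dW, r_red):
    """One KLgPC update on M realizations.

    x0   : (N,)     mean coefficient x_hat_0(t+dt)
    Xhat : (N, s-1) coefficients x_hat_i(t+dt), i = 1..s-1
    Psi  : (s-1, M) basis functions Psi_i(t) evaluated on realizations
    B_dW : (N, M)   realizations of the stochastic forcing over the step
    """
    U, sig, Vt = np.linalg.svd(Xhat, full_matrices=False)
    U, sig, Vt = U[:, :r_red], sig[:r_red], Vt[:r_red]   # truncation, cf. (3.46)
    zeta = sig[:, None] * (Vt @ Psi)                     # new variables, cf. (3.48)
    w_tilde = U.T @ B_dW                                 # noise projections, cf. (3.51)
    # mean + retained directions, with noise added to the linear part of the new basis
    return x0[:, None] + U @ (zeta + w_tilde)

rng = np.random.default_rng(4)
N, s, M = 6, 15, 4000
x0 = rng.standard_normal(N)
Xhat = rng.standard_normal((N, s - 1)) * np.array([3., 1., .3, .1] + [1e-3] * (s - 5))
Psi = rng.standard_normal((s - 1, M))
B_dW = 0.1 * np.sqrt(0.01) * rng.standard_normal((N, M))
X_next = klgpc_noise_step(x0, Xhat, Psi, B_dW, r_red=3)  # (N, M) realizations at t+dt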
Remarks:
a) We note that the dominant left-singular-vector subspace of X̂(r, t+dt) corresponding
to the first (deterministic) dynamical term A(X(r, t; ω), t) X(r, t; ω) dt and the dominant
left-singular-vector subspace of the matrix of w̃_i's corresponding to the second (stochastic)
dynamical term B(X(r, t; ω), t) dW(t; ω) of (1.1) need not be the same. The two SVDs
of these two matrices are not equal. In fact, if one of the matrices requires a smaller/larger
basis than the other to represent most of its variance, then the two subspaces have different
dimensions. The final subspace that represents the sum of the two dynamical terms is the
union of the two subspaces. This union-subspace can be further reduced to maintain a
certain percentage of variance (as done in the ESSE scheme). These different versions of
subspace combination are not discussed above, but they clearly allow the size of
the subspace to vary with time, in accord with the nonlinear deterministic and stochastic
terms of the dynamical system (1.1).
b) A special example of the situation described in remark (a) is the case of zero initial
uncertainty in the stochastic field, x̂_i(r, 0) = 0 for 1 ≤ i ≤ (s - 1). Hence, after step 1 of
the algorithm, we have x̂_i(r, dt) = 0 for 1 ≤ i ≤ (s - 1), and the singular value decomposition of
the matrix X̂(r, dt) in step 2 of the algorithm would not have any non-zero singular values.
This situation would also be true for the MTDgPC scheme. There is nothing incorrect
with this, the computation simply indicates that over the first time-step, the zero initial
uncertainty leads to zero future uncertainty if there is no new external stochastic forcing.
However, if there is new stochastic forcing over dt, then it needs to be accounted for and
different options are feasible. The best option for KLgPC is to set the initial basis such
that the second dynamical term in (1.1), i.e. the noise term over dt, is best represented
in the variance sense. That basis is the dominant right singular vectors of the matrix
of w̃_i's corresponding to this second term B(X(r, t; ω), t) dW(t; ω). Another choice (less
efficient) is to select a complete initial basis (which is expensive for large systems) and
then represent that second term fully, possibly truncating it for the next time-step. A
third choice (less efficient) is to start the scheme with a Monte-Carlo approach. After
that first time step, the above KLgPC scheme can be used.
Chapter 4
Uncertainty in Input Parameters
In this chapter, we study stochastic linear and nonlinear dynamical systems with uncertainty in input parameters. These systems involve uncertainty in the form of initial conditions, boundary conditions and model parameters (without sustained stochastic noise).
We have studied a set of both self-engineered and classic nonlinear dynamical systems (see,
e.g., Strogatz (2001)), but only a summary subset of these results is presented here.
As a first example, we consider a one-dimensional stochastic ordinary differential equation and solve it using DO and gPC schemes. Next, we move on to the Kraichnan-Orszag
three mode problem (Kraichnan, 1963, Orszag and Bissonnette, 1967), which involves a
nonlinear system originally used for modeling fluid turbulence. The long-time integration issues of classic gPC schemes are discussed and the TDgPC scheme introduced by
Gerritsma et al. (Gerritsma et al., 2010) is used to obtain improved solutions. These
solutions are compared to the solutions obtained using DO and MC schemes and the
performance of the uncertainty quantification schemes is analyzed in terms of accuracy
of solution and computational cost of implementation. The new MTDgPC and KLgPC
schemes derived in chapter 3 are not used in this chapter since for now, only uncertainty
in input parameters is considered.
4.1
One-dimensional Decay Model
The one-dimensional decay model is represented by the stochastic ordinary differential
equation
du(t; \omega) + k(\omega) \, u(t; \omega) \, dt = 0            (4.1)
where the decay rate k(w) is stochastic. We consider a deterministic initial condition,
given as
u(0; \omega) = 1            (4.2)
For this problem, we consider the decay rate to be a uniformly distributed random variable
in the interval [0, 1]. Thus, the decay rate k(w) is represented as
k(\omega) \sim U[0, 1]            (4.3)
The one-dimensional stochastic ODE represented by equations (4.1), (4.2) and (4.3) has
an analytical solution, which is given by the equation
u(t; \omega) = e^{-k(\omega) t}            (4.4)
Using this analytical stochastic solution (4.4), the mean and the variance of u(t; w) can
be computed analytically. The mean of u(t; w) is
\bar{u}(t) = E^{\omega}[u(t; \omega)] = \frac{1 - e^{-t}}{t}            (4.5)
and the variance is
\mathrm{Var}[u(t; \omega)] = \frac{1 - e^{-2t}}{2t} - \left( \frac{1 - e^{-t}}{t} \right)^2            (4.6)
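The closed-form moments (4.5)-(4.6) are easily cross-checked by sampling the exact solution (4.4); the following short numpy sketch does so at a single time and is included only as a sanity check, not as one of the solvers compared below.

import numpy as np

rng = np.random.default_rng(5)
t = 5.0
k = rng.uniform(0.0, 1.0, size=200_000)          # k(omega) ~ U[0, 1]
u = np.exp(-k * t)                               # exact realizations, eq. (4.4)

mean_exact = (1.0 - np.exp(-t)) / t              # eq. (4.5)
var_exact = (1.0 - np.exp(-2.0 * t)) / (2.0 * t) - mean_exact ** 2   # eq. (4.6)
print(u.mean(), mean_exact)                      # agree up to sampling error
print(u.var(), var_exact)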
The one-dimensional stochastic decay model is solved using various uncertainty quantification schemes for t ∈ [0, 30], using a time step of Δt = 0.01. The stochastic mean and
variance of the numerical solution obtained using Monte Carlo simulations with 10000
sample realizations, is plotted in figure (4-1). For comparison, the analytical mean and
variance of u(t; ω) are also shown in the same figure. The relative error in the mean and
variance of the stochastic field is depicted in figure (4-2). From figures (4-1) and (4-2),
Figure 4-1: Mean and variance of the solution u(t; ω) for 1-D decay model using the MC scheme

Figure 4-2: Relative error in mean and variance of u(t; ω) for 1-D decay model using the MC scheme
it is observed that the Monte Carlo scheme with 10000 sample realizations shows a very
good agreement with the analytical results. For stochastic systems without analytical
solutions, the Monte Carlo solution with a sufficiently high number of sample realizations is a good estimate of the true solution.
Next, we solve the stochastic decay model using gPC scheme (described in section
2.3.2), with varying polynomial order. The stochastic mean and variance of the numerical
solution obtained with polynomial expansions of order 2, 4, 6, 8 and 10, along with their
relative errors, are plotted in figures (4-3), (4-4), (4-5), (4-6) and (4-7) respectively. We
observe that the accuracy of the results is dependent on the order of the gPC scheme.
The solution of u(t; w) using a lower order gPC scheme (p = 2 and p = 4) is not very
accurate. As the order of the polynomial chaos expansion increases, there is a better
Figure 4-3: (a) Mean and variance of u(t; ω) for 1-D decay model using the gPC scheme with p = 2. (b) Relative error in the mean and variance of u(t; ω) using the gPC scheme with p = 2
agreement between the numerical results obtained using gPC scheme and the analytical
solution. The solution obtained using gPC scheme of order p = 10, exhibits a fairly good
accuracy, similar to that of the Monte Carlo scheme.
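For the decay model, the Galerkin-projected gPC system can be assembled explicitly. The sketch below uses Legendre chaos in ξ ~ U[-1, 1] with the standard affine map k(ω) = 0.5 + 0.5 ξ(ω), computes the Galerkin matrix by Gauss-Legendre quadrature, and advances the coefficients with forward Euler; it is a minimal stand-alone illustration of a scheme of this kind, not the thesis implementation, and the order, step size, and quadrature level are arbitrary choices.

import numpy as np
from numpy.polynomial import legendre as L

p = 10                                            # polynomial order
nodes, weights = L.leggauss(64)                   # quadrature on [-1, 1]
P = np.array([L.legval(nodes, np.eye(p + 1)[i]) for i in range(p + 1)])   # P_i at nodes
norm2 = np.array([1.0 / (2 * i + 1) for i in range(p + 1)])               # <P_i, P_i> with weight 1/2

# Galerkin system for du/dt = -(0.5 + 0.5 xi) u:
# du_i/dt = -sum_j u_j <(0.5 + 0.5 xi) P_j P_i> / <P_i^2>
k_vals = 0.5 + 0.5 * nodes
A = -np.einsum('q,q,iq,jq->ij', 0.5 * weights, k_vals, P, P) / norm2[:, None]

u = np.zeros(p + 1); u[0] = 1.0                   # deterministic initial condition u(0) = 1
dt, T = 0.01, 30.0
for _ in range(int(T / dt)):
    u = u + dt * (A @ u)                          # forward Euler on the coefficients
mean = u[0]                                       # E[u] = coefficient of P_0
variance = np.sum(u[1:] ** 2 * norm2[1:])         # sum of squared higher coefficients
print(mean, variance)                             # compare with (4.5), (4.6) at t = 30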
Finally, we solve the 1-D decay model using dynamically orthogonal equations (described in section 2.3.5). The stochastic mean and variance of the solution u(t; w) using
the DO scheme with one stochastic mode, i.e., s = 1, and their corresponding relative
errors, are plotted in figure (4-8). It is observed that the DO solution with one stochastic
mode is identical (up to stochastic realizations) to the Monte Carlo solution, and shows
a very good agreement with the analytical results. This is normal since for s = 1 the DO
scheme captures the whole dynamical system and becomes a Monte Carlo scheme (if the
stochastic coefficients are solved for using a Monte Carlo scheme).
Figure 4-4: As figure (4-3), but using the gPC scheme with p = 4

Figure 4-5: As figure (4-3), but using the gPC scheme with p = 6
Figure 4-6: As figure (4-3), but using the gPC scheme with p = 8

Figure 4-7: As figure (4-3), but using the gPC scheme with p = 10
Figure 4-8: (a) Mean and variance of u(t; ω) for 1-D decay model using the DO scheme with s = 1. (b) Relative error in the mean and variance of u(t; ω) using the DO scheme with s = 1
4.2
Kraichnan-Orszag Three Mode Problem
In this section, we study the Kraichnan-Orszag three mode problem, which is a well-known
system in literature, used for studying the long time numerical behavior of uncertainty
quantification schemes. The Kraichnan-Orszag (K-O) system was first introduced by R.
H. Kraichnan (Kraichnan, 1963) and later used by S. A. Orszag (Orszag and Bissonnette,
1967) for studying uncertainties of Gaussian nature in turbulence. It is derived from simplified inviscid Navier-Stokes equations and consists of three coupled nonlinear ordinary
differential equations, given as
\frac{dx_1(t; \omega)}{dt} = x_2(t; \omega) \, x_3(t; \omega)
\frac{dx_2(t; \omega)}{dt} = x_3(t; \omega) \, x_1(t; \omega)            (4.7)
\frac{dx_3(t; \omega)}{dt} = -2 \, x_1(t; \omega) \, x_2(t; \omega)
Uncertainty in the system is introduced through initial conditions.
Each of the three
state variables x1 (t; w), x 2 (t; w) and X3 (t; w), may have an uncertain initial condition. The
number of state variables having initial uncertainty determines the stochastic dimension
of the system. The solution of the Kraichnan-Orszag system is known to be heavily
dependent on the initial state of the system.
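A minimal Monte Carlo integration of (4.7) can be written in a few lines; the sketch below uses a fourth-order Runge-Kutta step, vectorized over realizations, with a uniformly perturbed x_1(0) of the kind considered next. Step size, sample count, and function names are illustrative choices only.

import numpy as np

def ko_rhs(x):
    """Right-hand side of the Kraichnan-Orszag system (4.7); x has shape (3, M)."""
    return np.array([x[1] * x[2], x[2] * x[0], -2.0 * x[0] * x[1]])

def rk4_step(x, dt):
    k1 = ko_rhs(x); k2 = ko_rhs(x + 0.5 * dt * k1)
    k3 = ko_rhs(x + 0.5 * dt * k2); k4 = ko_rhs(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(6)
M, dt, T = 10000, 0.01, 40.0
xi = rng.uniform(-1.0, 1.0, M)
x = np.vstack([0.99 + 0.01 * xi, np.ones(M), np.ones(M)])   # perturbed x1(0)
for _ in range(int(T / dt)):
    x = rk4_step(x, dt)
print(x[0].mean(), x[0].var())        # MC estimates of the mean and variance of x1(T)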
First we consider the case with initial uncertainty in just one state variable. In this
case, the initial conditions are represented as
x_1(0; \omega) = 0.99 + 0.01 \, \xi(\omega)
x_2(0; \omega) = 1.0            (4.8)
x_3(0; \omega) = 1.0

where ξ(ω) ~ U[-1, 1] is a uniform random variable. We consider the time domain
t ∈ [0, 40] and use a time step of Δt = 0.01. The stochastic mean and variance of the state
variable x 1 (t; w) using Monte Carlo simulations with 10000 sample realizations is plotted
in figure (4-9). Since the analytical solution is not available for the Kraichnan-Orszag
system, the stochastic solution obtained using the Monte Carlo scheme is assumed to be
the true solution. Next we use the gPC scheme to integrate the Kraichnan-Orszag system
with initial conditions given by equation (4.8). The mean and variance of x 1 (t; w) using
the gPC scheme with polynomial order p = 2 is shown in figure (4-10). For comparison,
the Monte Carlo solution is also plotted in the same figure. We observe that the gPC
scheme with p = 2 is unable to model the evolution of uncertainty in the Kraichnan-Orszag three mode problem. In an attempt to improve the accuracy of solution, we solve
the same system with a gPC scheme of higher polynomial order. The stochastic mean
and variance using gPC schemes of order p = 4 (s = 5) and p = 6 (s = 7) are plotted
in figures (4-11) and (4-12) respectively. It is observed that even with an increase in the
order of polynomial chaos expansion, there is not much improvement in the accuracy of the
solution. The inability of the gPC scheme in modeling uncertainty in nonlinear dynamical
systems over large time-intervals is well known, and is discussed in detail in section 4.3.
The stochastic mean and variance of x,(t; w) using the DO scheme with three stochastic
modes, i.e., s = 3, is depicted in figure (4-13). As in the case of the one-dimensional decay
Figure 4-9: Mean and variance of the state variable x_1(t; ω) for the K-O system with initial uncertainty in one state variable using the MC scheme

Figure 4-10: Mean and variance of the state variable x_1(t; ω) for the K-O system with initial uncertainty in one state variable using the gPC scheme with order p = 2
problem, the DO solution is found to be identical to the MC solution.
Now, we consider the case with initial uncertainty in all three state variables. The
initial conditions are then represented as
x_1(0; \omega) = 0.99 + 0.01 \, \xi_1(\omega)
x_2(0; \omega) = 1.0 + 0.01 \, \xi_2(\omega)            (4.9)
x_3(0; \omega) = 1.0 + 0.01 \, \xi_3(\omega)

where ξ_1(ω), ξ_2(ω), ξ_3(ω) ~ U[-1, 1] are independent and identically distributed (i.i.d.)
uniform random variables. We again consider the time domain t ∈ [0, 40] and use a time
step of Δt = 0.01. We use the time-dependent generalized polynomial chaos scheme proposed
Figure 4-11: As figure (4-10), but using the gPC scheme with order p = 4

Figure 4-12: As figure (4-10), but using the gPC scheme with order p = 6

Figure 4-13: Mean and variance of the state variable x_1(t; ω) for the K-O system with initial uncertainty in one state variable using the DO scheme with s = 3
Figure 4-14: Mean and variance of the state variable x_1(t; ω) for the K-O system with initial uncertainty in all state variables using the TDgPC scheme with order p = 2

Figure 4-15: As figure (4-14), but using the TDgPC scheme with order p = 3
by Gerritsma et al. (2010) to compute the solution. The stochastic mean and variance
of the state variable xi(t; w) using the TDgPC scheme with p = 2 (s = 10) and p = 3
(s = 20) are plotted in figures (4-14) and (4-15), along with their respective Monte Carlo
solutions. From these figures, we observe that the TDgPC scheme, even with relatively
low polynomial orders, is able to model uncertainty in nonlinear systems accurately over
large time intervals. The mean and variance of xi(t; w) using DO scheme with s = 3 is
shown in figure (4-16). Once again, the DO solution is found to be identical to the MC
solution.
Figure 4-16: Mean and variance of the state variable x_1(t; ω) for the K-O system with initial uncertainty in all state variables using the DO scheme with s = 3
4.3
Long Time Integration of Nonlinear Systems using the gPC Scheme
This section studies in greater detail the limitation of classic gPC schemes in integrating
uncertainties in nonlinear dynamical systems over large time intervals. We begin by revisiting the one-dimensional decay model, studied earlier in section 4.1. We found in
section 4.1 that the lower order gPC schemes (with p = 2 and p = 4) were not effective
in integrating the stochastic ODE accurately. As we increased the order of the gPC
expansion, the agreement between the numerical results obtained using gPC and the
analytical solution increased. However, even for a higher order scheme (for e.g., with
p = 6), we found that the gPC scheme started to deviate from the analytical solution at
a later time. This effect is a consequence of the continuous increase in the magnitude of
nonlinearities in the PC expansion of the solution field with time. To see this effect more
clearly, the probability density function (pdf) of the numerical gPC solutions u(t; w) using
different polynomial orders (p = 2 and p = 3) and at different values of time t is plotted in
figures (4-17) through (4-28), along with its stochastic mean and variance. From figures
(4-17) and (4-18), it is observed that initially, both the schemes represent the pdf of the
solution accurately. At around t = 1.5 s, the gPC scheme with order p = 2 starts to
deviate from the Monte Carlo solution (figure (4-19)). This is due to the inability of the
scheme to handle growing nonlinearities. The higher order gPC scheme (with p = 3) is
Figure 4-17: Probability density function of u(t; ω) for the 1-D decay model at t = 0.5 s
able to handle the growing nonlinearities better than the lower order scheme, but at a
later time t = 4.0 s, the higher order scheme also begins to deviate from the true solution
(figure (4-24)). From figures (4-25), (4-26), (4-27) and (4-28), it is evident that for t > 4.5
s, the probability distribution function of u(t; w) represented by both the gPC schemes is
inaccurate.
To explain this behavior, consider the polynomial chaos expansion of the stochastic
field X(r, t; ω), given by equation (2.43), with f stochastic random variables. The first
deterministic coefficient of the expansion x̂_0(r, t) (with ψ_0(ω) = 1) represents the stochastic
mean of the solution field. The next f terms (x̂_i(r, t), 1 ≤ i ≤ f) represent the coefficients
corresponding to the linear terms of the polynomial expansion of the stochastic random
variables, and subsequent terms (x̂_i(r, t), f + 1 ≤ i ≤ s - 1) represent the coefficients
corresponding to quadratic and other higher order terms. In many cases, the initial uncertainty
is almost always linear, meaning that the magnitude of the coefficients corresponding to
quadratic and other higher terms is zero initially (x̂_i(r, 0) = 0, f + 1 ≤ i ≤ s - 1). As
the system is evolved numerically in time, the stochastic characteristics of the solution
Figure 4-18: As figure (4-17), but at time t = 1.0 s

Figure 4-19: As figure (4-17), but at time t = 1.5 s
Figure 4-20: As figure (4-17), but at time t = 2.0 s

Figure 4-21: As figure (4-17), but at time t = 2.5 s
Figure 4-22: As figure (4-17), but at time t = 3.0 s

Figure 4-23: As figure (4-17), but at time t = 3.5 s
Figure 4-24: As figure (4-17), but at time t = 4.0 s

Figure 4-25: As figure (4-17), but at time t = 4.5 s
Figure 4-26: As figure (4-17), but at time t = 5.0 s

Figure 4-27: As figure (4-17), but at time t = 5.5 s
Figure 4-28: As figure (4-17), but at time t = 6.0 s
field change. Depending on the extent of nonlinearities in system dynamics, the magnitude of the coefficients corresponding to the higher order terms in the gPC expansion
increases with time. With increasing magnitudes of nonlinear terms, the ability of the
polynomial chaos expansion to approximate the true pdf of the solution field diminishes,
leading to a reduction in the accuracy of the scheme. A higher order gPC scheme, by virtue of its larger number of higher order terms, will be able to accurately model the nonlinearities for a longer period of time compared to a lower order scheme, but will tend to lose its
accuracy eventually. This causes the gPC scheme to be ineffective in modeling highly
nonlinear systems over large time-intervals. The TDgPC scheme proposed by Gerritsma
et al. (2010) alleviates this limitation by redefining the polynomial basis so as to make the
magnitude of the coefficients corresponding to the higher order terms equal to zero during
the re-initialization step, thereby reducing the polynomial chaos expansion to linear terms
only.
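One practical way to detect when a gPC expansion has drifted into this regime is to monitor the energy carried by the higher-order (nonlinear) coefficients relative to the linear ones and to trigger re-initialization when a threshold is exceeded. The helper below is a hypothetical illustration of such a check; the threshold value and array layout are assumptions left to the user.

import numpy as np

def needs_reinit(coeffs, n_linear, norms, threshold=0.1):
    """Return True when the higher-order part of a gPC expansion dominates.

    coeffs   : (s, N) deterministic coefficients x_hat_i(t)
    n_linear : number of linear terms (index 0 is the mean)
    norms    : (s,) basis norms <Psi_i^2>
    """
    energy = np.sum(coeffs ** 2, axis=1) * norms          # variance contribution per term
    linear = np.sum(energy[1:n_linear + 1])
    higher = np.sum(energy[n_linear + 1:])
    return higher > threshold * max(linear, 1e-14)

# example: re-initialize once quadratic and cubic terms carry >10% of the linear variance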
To further explain the limitation of gPC scheme in integrating nonlinear systems, we
consider the highly nonlinear Kraichnan-Orszag three mode problem (from section 4.2).
Figure 4-29: Mean and variance of the state variable x_1(t; ω) for the K-O system with initial uncertainty in one state variable using the MC scheme
Consider the K-O system represented by equation (4.7) with the initial condition given
as
x_1(0; \omega) = 0.995 + 0.01 \, \xi(\omega)
x_2(0; \omega) = 1.0            (4.10)
x_3(0; \omega) = 1.0

where ξ(ω) ~ U[-1, 1] is a uniform random variable. We again consider the time domain
t ∈ [0, 40] and use a time step of Δt = 0.01. The stochastic mean and variance of the state
variable xi(t; w) using Monte Carlo simulations with 10000 sample realizations is plotted
in figure (4-29). The mean and variance of x1 (t; w) using gPC scheme with polynomial
order p = 6 and using TDgPC scheme with polynomial order p = 3 are shown in figures
(4-30) and (4-31), respectively. From these figures, we find that the gPC scheme is unable
to integrate the evolution of uncertainty in the K-O system, whereas the TDgPC scheme
is fairly accurate in the chosen example.
For a clearer comparison, we again show the pdf of x, (t; w) at different values of time
t using the two schemes, in figures (4-32) through (4-39). The corresponding pdf using
Monte Carlo simulations is also depicted in the same figures. The gPC solution is found
to be accurate only for smaller values of time t. From figures (4-32) and (4-33), it is
observed that initially, the pdf of x, (t; w) is similar in appearance to that of a uniform
random variable. As the system evolves in time, the nonlinear characteristics of the gPC
Figure 4-30: Mean and variance of the state variable x_1(t; ω) for the K-O system with initial uncertainty in one state variable using the gPC scheme with p = 6

Figure 4-31: Mean and variance of the state variable x_1(t; ω) for the K-O system with initial uncertainty in one state variable using the TDgPC scheme with p = 3
Figure 4-32: Probability density function of x_1(t; ω) for the K-O system at t = 0.0 s
expansion increase, and the pdf of x 1 (t; w) assumes a shape completely different from that
of a uniform random variable (seen in figures (4-35) through (4-39)). As the stochastic
characteristics of the solution field change, the gPC scheme loses its ability to represent
the pdf of x, (t; w) accurately, and hence deviates from the true solution. On the other
hand, the TDgPC scheme is able to represent the pdf of the solution field fairly accurately
at all values of time t.
4.4
Discussion on Performance of Uncertainty Quantification Schemes
In this section, we discuss and investigate the performance of uncertainty quantification
schemes for integrating uncertainties in input parameters, in terms of accuracy of solution
and computational cost. First, we remark that the solution field obtained using the DO
scheme implemented over full space is exactly identical to the solution field obtained
using Monte Carlo simulations. By studying the DO evolution equations (3.15), (3.16)
Figure 4-33: As figure (4-32), but at time t = 2.0 s

Figure 4-34: As figure (4-32), but at time t = 4.0 s
Figure 4-35: As figure (4-32), but at time t = 6.0 s

Figure 4-36: As figure (4-32), but at time t = 8.0 s
Figure 4-37: As figure (4-32), but at time t = 10.0 s

Figure 4-38: As figure (4-32), but at time t = 12.0 s
Figure 4-39: As figure (4-32), but at time t = 14.0 s
and (3.17) in greater detail, it can be observed that when the DO scheme is implemented
over full space, the DO modes do not (need to) evolve in time. As such, a full space DO
scheme with MC integration of the stochastic coefficient becomes equivalent to a classic
MC scheme.
As was shown in the previous section, the gPC scheme is ineffective in integrating
nonlinear dynamical systems over large time intervals. This limitation is removed in the
TDgPC scheme by using time-dependent gPC bases. However, re-initializing the bases
frequently in time significantly increases the computational cost of the TDgPC scheme.
Moreover, since the re-initialization increases the number of random variables (f) to be
equal to the size of the physical space (N), the TDgPC scheme becomes computationally
expensive for large dimensional stochastic systems. Hence, it might have limited applicability in integrating uncertainty in realistic ocean models, which typically involve high
dimensionality.
The run times for the various examples studied in this chapter using UQ schemes of
comparable accuracy are presented in table (4.1). Since the gPC scheme is unable to model
Table 4.1: Run times for uncertainty quantification schemes of comparable accuracy for
integrating in time uncertainty in input parameters

S. No.  Test Case                            MC          gPC             TDgPC          DO
1.      1-D Decay Model                      1380.63s    121.26s         273.81s        62.51s
        Equations (4.1), (4.2), (4.3)                    (p = 10, s = 11) (p = 3, s = 4) (full space, s = 1)
2.      K-O Model with initial               1908.48s    Not Applicable  4376.06s       196.98s
        uncertainty in x_1                                               (s = 20)       (s = 3)
        Equations (4.7), (4.8)
3.      K-O Model with initial               1874.08s    Not Applicable  4560.03s       157.59s
        uncertainty in x_1                                               (s = 20)       (s = 3)
        Equations (4.7), (4.10)
4.      K-O Model with initial               1895.61s    Not Applicable  5633.97s       194.59s
        uncertainty in x_1, x_2, x_3                                     (s = 20)       (s = 3)
        Equations (4.7), (4.9)
the Kraichnan-Orszag three mode problem with sufficient accuracy, its run times are not
included. We find from table (4.1) that for the examples studied in this chapter, the run
times for the DO scheme are smaller than those of the other schemes. Additionally, the
run times for the TDgPC scheme are significantly higher compared to the other schemes,
which is a result of the frequent re-initialization of the gPC basis. Thus, the TDgPC
scheme compromises computational cost in order to integrate the evolution of uncertainty
accurately using a PC framework. It is also evident that increasing the dimensionality of
the system leads to a significant increase in run times of all the schemes, as seen here in the
case of the one-dimensional and three-dimensional examples. Specifically, the DO run time increases linearly with the system size (as is also the case for ESSE or MC integrations). Overall,
for integrating uncertainty in high dimensional realistic ocean systems, schemes which
may be implemented over a reduced space may have a significant advantage in terms of
computational cost.
Chapter 5
Uncertainty due to External
Stochastic Forcing
In this chapter, we study stochastic linear and nonlinear dynamical systems with uncertainty due to external stochastic forcing. The type of forcing we consider is given by the
second term of our system of equations (1.1), i.e. B(X(r, t; w), t) dW(t; W).
It is impor-
tant to realize that this covers both additive and multiplicative noise since B is in general
a function of the stochastic state variable X(r, t; ω). The source for the noise is the
multi-dimensional dW(t; w) which are Brownian motions, defined in section 2.1.2. This
Gaussian forcing (also known as white noise) is chosen because our long-term interest is
in efficient uncertainty quantification and prediction for realistic ocean systems. Gaussian stochastic forcing is a common and relevant type of forcing to model processes not
represented deterministically in classic ocean circulation equations. Of course, the whole
stochastic forcing is B(X(r, t; w), t) dW(t; w), which accounts for nonlinear interactions of
state variables X(r, t; w) with the multi-dimensional Gaussian forcing. In fact, one can
show that a large class of statistics can be represented by stochastic forcing of this type
(Jazwinski, 1970).
In what follows, we begin by discussing and illustrating a fundamental limitation of
existing PC schemes in capturing sustained influx of randomness, resulting in their inability to accurately integrate stochastic forcing over long time intervals. Next, we utilize and
study our new MTDgPC scheme (section 3.4) to integrate a stochastic three-dimensional
autonomous system with Gaussian noise and compare the results with those of DO and
MC schemes.
We then consider again a stochastic three-dimensional system but this
time with both autonomous and non-autonomous dynamics, and we compare the results
and efficiency of our new reduced space and reduced order KLgPC scheme (section 3.5)
to those of our new MTDgPC scheme.
Finally, we study the application of our new
MTDgPC and KLgPC schemes for integrating multiplicative noise in a four-dimensional
autonomous system. We end the chapter with a discussion on the performance of uncertainty quantification schemes for integrating multi-dimensional stochastic forcing.
5.1
Limitations of Existing Polynomial Chaos Schemes
As a first example of a system with stochastic noise, we consider a three-dimensional autonomous system X(t; ω) = [x_1(t; ω), x_2(t; ω), x_3(t; ω)]^T with a three-dimensional driv-
ing Wiener process. Consider the dynamical system represented by equation (1.1), where
the matrices A and B are given as
A = \begin{bmatrix} -3 & 1 & 1 \\ 2 & -5 & 1 \\ 3 & 1 & -6 \end{bmatrix}, \qquad
B = \begin{bmatrix} 0 & 2 & 0 \\ 1 & 1 & 0 \\ 2 & 0 & 0 \end{bmatrix}            (5.1)
In this case, the initial conditions are deterministic and are represented as
x_1(0; \omega) = 1.0
x_2(0; \omega) = 1.0            (5.2)
x_3(0; \omega) = 1.0
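A Monte Carlo reference solution of (1.1) with constant matrices and deterministic initial conditions only requires an Euler-Maruyama loop; the sketch below is an illustrative numpy version. The entries of A and B follow the reconstruction of (5.1) above and are assumptions of this sketch, as are the sample count and step size.

import numpy as np

A = np.array([[-3.0, 1.0, 1.0], [2.0, -5.0, 1.0], [3.0, 1.0, -6.0]])
B = np.array([[0.0, 2.0, 0.0], [1.0, 1.0, 0.0], [2.0, 0.0, 0.0]])   # as reconstructed in (5.1)

rng = np.random.default_rng(7)
M, dt, T = 200_000, 0.01, 4.0
X = np.ones((3, M))                                  # deterministic initial condition (5.2)
for _ in range(int(T / dt)):
    dW = np.sqrt(dt) * rng.standard_normal((3, M))   # Brownian increments
    X = X + dt * (A @ X) + B @ dW                    # Euler-Maruyama step
print(X[0].mean(), X[0].var())                       # MC mean and variance of x1(T)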
We consider the time domain t ∈ [0, 4] and use a time step of Δt = 0.01. The stochastic
mean and variance of the state variable x_1(t; ω) estimated using Monte Carlo simulations
with 2 × 10^5 realizations is shown in figure (5-1). The same result using the DO scheme
with three stochastic modes (s = 3) is plotted in figure (5-2).
analytical results are also depicted in the same figures.
For comparison, the
It is observed that even for
dynamical systems with external stochastic forcing, the solution obtained using the DO
Figure 5-1: Mean and variance of the state variable x_1(t; w) for the three-dimensional autonomous system with additive noise using the MC scheme
Figure 5-2: Mean and variance of the state variable x_1(t; w) for the three-dimensional autonomous system with additive noise using the DO scheme with s = 3
It is observed that even for dynamical systems with external stochastic forcing, the solution obtained using the DO scheme implemented over the full space is identical to the solution obtained using Monte Carlo simulations.
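As a concrete illustration of this reference Monte Carlo computation, a minimal sketch is given below. It is written in Python with NumPy (it is not the thesis solver itself), it assumes the row-wise arrangement of B recovered from equation (5.1), and it uses a reduced ensemble of 2 × 10^4 realizations so that the example stays light; otherwise the matrices, initial condition and time step follow equations (5.1)-(5.2).

```python
import numpy as np

# Dynamics of the 3-D autonomous system with additive noise, eq. (5.1)
A = np.array([[-3.0,  1.0,  1.0],
              [ 2.0, -5.0,  1.0],
              [ 3.0,  1.0, -6.0]])
B = np.array([[0.0, 2.0, 0.0],
              [1.0, 1.0, 0.0],
              [2.0, 0.0, 0.0]])   # assumed arrangement of B in eq. (5.1)

dt, T = 0.01, 4.0
n_steps = int(T / dt)
n_mc = 20_000                     # reduced ensemble (the thesis runs use 2e5)

rng = np.random.default_rng(0)
X = np.ones((n_mc, 3))            # deterministic initial condition, eq. (5.2)

mean_x1 = np.empty(n_steps + 1)
var_x1 = np.empty(n_steps + 1)
mean_x1[0], var_x1[0] = X[:, 0].mean(), X[:, 0].var()

for k in range(n_steps):
    # Independent Brownian increments dW ~ N(0, dt I) for every realization
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_mc, 3))
    # Euler-Maruyama update: X_{k+1} = X_k + A X_k dt + B dW_k
    X = X + (X @ A.T) * dt + dW @ B.T
    mean_x1[k + 1] = X[:, 0].mean()
    var_x1[k + 1] = X[:, 0].var()
```

The ensemble statistics collected in mean_x1 and var_x1 are what figure (5-1) reports for the full-size ensemble.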
All existing PC schemes known to us, including the gPC and TDgPC schemes, have
inherent limitations in integrating stochastic forcing of dynamical systems. To better
understand this limitation, it is useful to revisit the concept of Brownian motion. Let us consider a Brownian motion W(t) over a time interval t ∈ [0, T], numerically discretized into n steps by setting h = T/n, as discussed in section 2.1.2. Let dW_j = W(t_j) - W(t_{j-1}), where j = 1, 2, ..., n, be the Brownian motion increment over the time step interval (Δt)_j = t_j - t_{j-1}. Then, by definition of a Brownian motion, each dW_j is an independent random variable. Thus, in order to represent the solution field over the time interval [0, T] using polynomial chaos, we need to construct a PC expansion of the solution field using all dW_j
as independent random variables. Even in a system with no other form of uncertainty,
the total number of random variables needed to model an m-dimensional Wiener process
over n time steps turns out to be ℓ = mn. The total number of terms in the polynomial chaos expansion is then given as

s = \binom{\ell + p}{p} = \frac{(mn + p)!}{(mn)! \; p!}    (5.3)
It is clearly seen that the number of terms in the expansion increases very rapidly beyond
computational capabilities as the number of time steps n increases or as the modeling
duration T increases. As an example, we again consider the three-dimensional autonomous system described by equations (1.1) with dynamics (5.1) and (5.2) and over a duration t ∈ [0, 4]. The number of terms required to represent the solution field using a PC expansion of order p = 3 and using a time step of Δt = 0.01 is given by

s = \frac{(1200 + 3)!}{1200! \; 3!} \approx 2.9 \times 10^8    (5.4)
which is beyond acceptable computational limits for such a small system. Ideally, as Δt → 0, a Brownian motion is represented by an infinite number of random variables.
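For concreteness, the count in equations (5.3)-(5.4) can be reproduced with a few lines of Python; this is a small sketch using only the standard library, and the helper function name is ours.

```python
from math import comb

def num_pc_terms(num_rv: int, order: int) -> int:
    """Number of terms in a total-order PC expansion, eq. (5.3)."""
    return comb(num_rv + order, order)

m, n, p = 3, 400, 3             # 3-D Wiener process, 400 steps on [0, 4], order 3
print(num_pc_terms(m * n, p))   # about 2.9e8, matching eq. (5.4)
```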
In order to represent a white noise process numerically using PC schemes, one has
to deal with a sustained flux of new random variables, which grows very quickly in time
and becomes computationally infeasible. Hou et al. (2006) and a few other authors have
accurately integrated dynamical systems with stochastic noise over short time intervals
using PC schemes with sparse truncation methods (described in detail in section 2.3.4).
However, even when using sparse truncation methods, the number of terms in the PC
approximation of the solution field grows very quickly in time and hence, these schemes
lose their effectiveness in integrating uncertainty over longer time intervals. With our
interest in modeling realistic ocean systems with external stochastic forcing, we introduced
two new PC schemes, the Modified TDgPC scheme (section 3.4) and the reduced space
and reduced order KLgPC scheme (section 3.5). As we will show next, these new schemes
are capable of integrating stochastic noise over arbitrarily long time intervals.
Figure 5-3: Mean and variance of the state variable x_1(t; w) for the three-dimensional autonomous system with additive noise using the new MTDgPC scheme with p = 3
5.2 Modeling Stochastic Noise using our New Polynomial Chaos Schemes
In this section, we utilize our new polynomial chaos schemes, namely, the MTDgPC and the KLgPC schemes, to integrate stochastic forcing. First, we consider again the three-dimensional autonomous system represented by equations (1.1) and dynamics (5.1) and (5.2). We again consider the time domain t ∈ [0, 4] and use a time step of Δt = 0.01. The stochastic mean and variance of the state variable x_1(t; w) using the new MTDgPC scheme with polynomial order p = 3 (s = 20) are plotted in figure (5-3). The analytical solutions are also depicted in the same figure. Similarly, the mean and variance of the state variables x_2(t; w) and x_3(t; w) using the new MTDgPC scheme with p = 3, along with their corresponding analytical solutions, are shown in figures (5-4) and (5-5) respectively. It is observed that all the errors are reasonably small over the entire time domain. Thus, the new MTDgPC scheme accurately models stochastic additive noise in the three-dimensional autonomous system. However, it has a relatively high computational cost (discussed in detail in section 5.3).
Next, we study a three-dimensional non-autonomous dynamical system with a three-dimensional driving Wiener process. In this example as well, the external stochastic forcing is additive in nature.
Figure 5-4: Mean and variance of the state variable x_2(t; w) for the three-dimensional autonomous system with additive noise using the new MTDgPC scheme with p = 3
Figure 5-5: Mean and variance of the state variable x_3(t; w) for the three-dimensional autonomous system with additive noise using the new MTDgPC scheme with p = 3
Specifically, the system is governed by (1.1), but with the dynamics matrices A and B now given as

A(t) = \begin{bmatrix} -3 + 2\cos(2\pi t) & 1 & 1 + 2\sin(2\pi t) \\ 2 & -5 & 1 \\ 3 & 1 + 3\sin(4\pi t) & -6 + 3\cos(4\pi t) \end{bmatrix}, \qquad B(t) = \begin{bmatrix} 0.5\cos(2\pi t) & 2 + 0.5\sin(2\pi t) & 0 \\ 2 & 1 + \sin(2\pi t) & 0 \\ \cos(2\pi t) & 0 & 0 \end{bmatrix}    (5.5)
The initial conditions are again chosen to be deterministic and are represented by equation (5.2). We still consider the time domain t ∈ [0, 4] and use a time step of Δt = 0.01, and we compare the Monte Carlo solution to the MTDgPC and KLgPC solutions.
The three-dimensional non-autonomous system is first integrated using the Monte Carlo scheme, and the stochastic mean and variance of x_1(t; w) estimated using the MC scheme with 2 × 10^5 sample realizations are shown in figure (5-6). The same state variable, but estimated using our new KLgPC scheme with ℓ_red = 3 (full space) and a polynomial order p = 3 (s = 20), is plotted in figure (5-7). The corresponding analytical solutions are also depicted in the same figures. Similar results for the state variables x_2(t; w) and x_3(t; w) using the KLgPC scheme are shown in figures (5-8) and (5-9) respectively. From these results, it is observed that the new KLgPC scheme accurately models the three-dimensional non-autonomous system with additive forcing. Note that it is possible to integrate the system using a reduced space KLgPC scheme, e.g., using ℓ_red = 2. For this case, the results would be accurate if the singular values of the dynamics matrices A and B decay rapidly and the dynamics are represented adequately using only the first two singular values (i.e., when the third singular value is sufficiently small). We study examples using reduced space KLgPC and DO schemes in chapter 6. For comparison of numerical results, the stochastic mean and variance of the state variables x_1(t; w), x_2(t; w) and x_3(t; w) estimated using the new MTDgPC scheme with polynomial order p = 3 (s = 20), along with their corresponding analytical solutions, are plotted in figures (5-10), (5-11) and (5-12) respectively. It is observed that the new MTDgPC scheme also models the uncertainty with reasonable accuracy.
Figure 5-6: Mean and variance of x_1(t; w) for the three-dimensional non-autonomous system with additive noise using the MC scheme
Figure 5-7: Mean and variance of x_1(t; w) for the three-dimensional non-autonomous system with additive noise using the new KLgPC scheme with ℓ_red = 3 and p = 3
Figure 5-8: Mean and variance of x_2(t; w) for the three-dimensional non-autonomous system with additive noise using the new KLgPC scheme with ℓ_red = 3 and p = 3
Figure 5-9: Mean and variance of x_3(t; w) for the three-dimensional non-autonomous system with additive noise using the new KLgPC scheme with ℓ_red = 3 and p = 3
Figure 5-10: Mean and variance of x_1(t; w) for the three-dimensional non-autonomous system with additive noise using the new MTDgPC scheme with p = 3
Figure 5-11: Mean and variance of x_2(t; w) for the three-dimensional non-autonomous system with additive noise using the new MTDgPC scheme with p = 3
Figure 5-12: Mean and variance of x_3(t; w) for the three-dimensional non-autonomous system with additive noise using the new MTDgPC scheme with p = 3
Finally, we consider the application of the new KLgPC and MTDgPC schemes for modeling uncertainty due to multiplicative noise. To do so, we consider a four-dimensional autonomous system with a four-dimensional driving Wiener process multiplying the dynamical variables via the matrix B. Specifically, the governing equation for this system is again (1.1), where the dynamics matrices A and B are given by equation (5.6): A is a constant 4 × 4 matrix with diagonal entries equal to -3 and weak off-diagonal couplings of magnitude 1/20, while the entries of the 4 × 4 matrix B are proportional to the state variables x_1, ..., x_4 (with small coefficients, of the order of 1/10 to 1/100), so that the stochastic forcing is multiplicative.    (5.6)
We use a deterministic initial condition, which is represented as

\begin{bmatrix} x_1(0; w) \\ x_2(0; w) \\ x_3(0; w) \\ x_4(0; w) \end{bmatrix} = \begin{bmatrix} 1.0 \\ 1.0 \\ 1.0 \\ 1.0 \end{bmatrix}    (5.7)
We consider the time domain t ∈ [0, 3] and use a time step of Δt = 0.01. The stochastic mean and variance of the state variable x_1(t; w) estimated using the MC scheme with 5 × 10^4 sample realizations and using the DO scheme with s = 4 (full space) are shown in figures (5-13) and (5-14) respectively.
Figure 5-13: Mean and variance of x_1(t; w) for the four-dimensional autonomous system with multiplicative noise using the MC scheme
Figure 5-14: Mean and variance of x_1(t; w) for the four-dimensional autonomous system with multiplicative noise using the DO scheme
For comparison, the corresponding analytical solutions are also depicted in the same figures. The same results using the KLgPC scheme with ℓ_red = 4 (full space) and p = 3 (s = 35), and using the new MTDgPC scheme with p = 3 (s = 35), along with the corresponding analytical solutions, are plotted in figures (5-15) and (5-16) respectively. It is observed that both the KLgPC and new MTDgPC schemes are able to integrate uncertainty due to multiplicative noise with sufficiently good accuracy.
From the examples discussed in this section, it can be concluded that the new MTDgPC and KLgPC schemes can accurately integrate dynamical systems with external
stochastic forcing of additive as well as multiplicative nature over long time intervals.
However, as discussed later in section 5.3, the new MTDgPC scheme utilizes the entire
physical space for its implementation and hence its computational cost is largely dependent on the dimensionality of the state space.
Figure 5-15: Mean and variance of x_1(t; w) for the four-dimensional autonomous system with multiplicative noise using the new KLgPC scheme with ℓ_red = 4 and p = 3
Figure 5-16: Mean and variance of x_1(t; w) for the four-dimensional autonomous system with multiplicative noise using the new MTDgPC scheme with p = 3
Thus, the new MTDgPC scheme is useful for modeling relatively small physical systems, but loses its effectiveness when used for
modeling large dimensional systems. The KLgPC scheme, on the other hand, is implemented on an adaptive (time-dependent) reduced space and is computationally effective
in integrating uncertainty in large-dimensional nonlinear systems and over long time intervals. Of course, this will work when the systems actually concentrate most of their
stochastic energy in a time-dependent subspace. If this is not the case, the whole state
space needs to be retained and the KLgPC scheme becomes similar to the MTDgPC
scheme.
5.3 Discussion on Performance of Uncertainty Quantification Schemes
We now investigate the efficiency of the UQ schemes in integrating uncertainty due to
external stochastic forcing. Specifically, we discuss the schemes in terms of accuracy of
solution and computational cost.
As in section 4.4, it is again observed that the solution field obtained using the DO
scheme implemented over the full space is exactly identical to the solution field obtained
using Monte Carlo simulations. Thus, even for dynamical systems with stochastic noise,
the full space DO scheme with the DO coefficients integrated with an MC scheme is
equivalent to the MC scheme.
We illustrated in section 5.1 that all existing PC schemes known to us, including the
gPC and TDgPC schemes, have an inherent limitation in integrating effects of stochastic
noise over long time intervals.
We also showed that our new MTDgPC and KLgPC schemes were capable of integrating dynamical systems with external stochastic forcing
of additive as well as multiplicative nature with reasonable accuracy. Next, we discuss the
applicability of these new schemes in integrating uncertainty in realistic ocean systems.
The run times for the various examples studied in this chapter using different UQ
schemes are presented in table (5.1). It is observed that for the examples shown here,
the run times for the test cases with additive noise using the DO scheme are significantly
smaller than those of the other schemes.
Table 5.1: Run times for UQ schemes for integrating uncertainty due to external stochastic forcing

S. No.  Test Case                                          MC        DO (full space)   Modified TDgPC (p = 3)   KLgPC (p = 3, full space)
1.      3-D autonomous system with additive noise,        631.50s   131.41s (s = 3)   15764.36s (s = 20)       19378.66s (s = 20)
        equations (5.1), (5.2)
2.      3-D non-autonomous system with additive noise,    613.62s   137.00s (s = 3)   16435.68s (s = 20)       19719.64s (s = 20)
        equations (5.5), (5.2)
3.      4-D autonomous system with multiplicative noise,  176.07s   401.50s (s = 4)   11097.10s (s = 35)       14746.56s (s = 35)
        equations (5.6), (5.7)
However, in the current framework of implementation, the run time for modeling multiplicative noise using the DO scheme is considerably
higher than the run time for modeling additive noise, which is not the case with other
schemes. It is also seen that the run times for the new gPC schemes are comparable.
From the given examples, it appears that the computational cost of our MTDgPC scheme is slightly lower than that of our KLgPC scheme implemented over the full space. However, unlike the KLgPC scheme, the MTDgPC scheme cannot be implemented on a reduced subspace. This is because at every re-initialization step of the MTDgPC scheme, the number of random variables (ℓ) is set to be equal to the size of the physical space (N). Hence, the stochastic dimension of the system in the new MTDgPC scheme is always equal to the dimensionality of the full physical space, i.e., ℓ = N. Therefore, for high dimensional
realistic ocean models where N is large, the computational cost of the MTDgPC scheme
would be much higher than that of the reduced space KLgPC scheme. Thus, we anticipate that in comparison to our MTDgPC scheme, our reduced space KLgPC scheme will
be much more efficient for integrating realistic ocean systems with external stochastic
forcing.
Chapter 6
Uncertainty Prediction for
Stochastic Ocean Systems
This chapter focuses on uncertainty prediction schemes that can be used for stochastic
models of realistic ocean systems. We first study the application of the DO and the new
KLgPC schemes on a stochastic dynamical system that concentrates most of its probability measure in a time-dependent subspace. The system considered is a self-engineered
20-dimensional non-autonomous stochastic dynamical system. Its time integration is used
as a test case for the reduced space DO and KLgPC schemes, and the two schemes are analyzed in terms of their solution accuracy and computational efficiency. Next, we employ
our computationally efficient stochastic solver to integrate the probability density function and moments of stochastic shallow-water ocean surface waves. We begin with a brief
introduction of general surface waves and study a few differential equation models used
to describe their dynamical evolution. Then, we focus on shallow water surface waves of
limited amplitudes, governed by the weakly nonlinear Korteweg-de Vries (KdV) equation
with stochastic forcing.
Such stochastic forcing can model the statistics of waves that
force the real system but are not solely represented by the deterministic KdV equation.
We obtain numerical solutions of the stochastic KdV equation using the DO scheme and
study the corresponding probability density function and uncertainty characteristics. The
chapter ends with a brief discussion on the application of the reduced space DO scheme for
the integration of models of real ocean systems that include stochastic forcing.
6.1 A Self-Engineered 20-Dimensional Test Case
In the previous chapter, it was seen that amongst all uncertainty quantification schemes
which use the polynomial chaos framework, the KLgPC scheme is most suited for modeling
high dimensional realistic nonlinear ocean systems. Here, we utilize the reduced space DO
and KLgPC schemes to integrate a stochastically forced 20-dimensional self-engineered
test case. The test case is constructed in such a way that it has similar characteristics to
that of a realistic nonlinear ocean model which has been observed to concentrate stochastic
energy in a reduced subspace of the full state space (e.g., see the work of Lermusiaux
(2006)).
The self-engineered stochastic dynamical system is still within the class of systems
governed by (1.1), with the dynamics matrices A and B now constructed using eigenvalue
and singular value decompositions, respectively. The system is a 20-dimensional system
with an 8-dimensional driving Wiener process. The eigenvalues of matrix A are presented
in figure (6-1). The eigenvalue spectrum chosen aims to mimic the observed red-spectrum
of ocean variability, i.e., the largest and smallest eigenvalues decay a bit faster than the
other eigenvalues (we note that these properties are often more pronounced in real ocean
systems and the decays are faster). In the test case (as in the real ocean), the positive
eigenvalues represent growing characteristics of the system whereas the negative eigenvalues represent decaying characteristics. We use trigonometric sine and cosine functions as
the corresponding eigenvectors. The first six eigenvectors of matrix A are shown in figure
(6-1).
Similarly, the singular values of matrix B are plotted in figure (6-2). We use Legendre
polynomials as the left singular vectors and 6k, 1 < k < 8 (defined by equation (3.22)) as
the right singular vectors. The first six left singular vectors of matrix B are depicted in
figure (6-2).
The dynamics of the system are made time-dependent by varying the largest 5 eigenvalues of matrix A in time, using trigonometric cosine functions to represent their time
evolution. Eigenvalues which are positive initially become negative at a later time and
vice versa. Thus, eigenvectors that grow initially start decaying later, resulting in other
eigenvectors becoming dominant at a later time.
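A minimal sketch of this type of construction is given below (Python/NumPy). It is an illustration only, not the thesis code: the specific eigenvalue magnitudes, the cosine modulation of the leading eigenvalues and the singular-value decay are placeholder choices, and the right singular vectors are taken to be the canonical basis rather than the vectors defined by equation (3.22).

```python
import numpy as np

N, m = 20, 8                       # state dimension and Wiener-process dimension
x = np.linspace(-1.0, 1.0, N)      # abscissa used to evaluate the basis vectors

# Trigonometric (sine/cosine) columns, orthonormalized, as eigenvectors of A
V = np.ones((N, N))
for k in range(1, N):
    freq = (k + 1) // 2
    V[:, k] = np.cos(freq * np.pi * x) if k % 2 == 0 else np.sin(freq * np.pi * x)
V, _ = np.linalg.qr(V)             # enforce orthonormality

lam0 = np.linspace(2.0, -2.0, N)   # placeholder spectrum with growing and decaying parts

def A_of_t(t: float) -> np.ndarray:
    """Time-dependent dynamics matrix: the largest 5 eigenvalues oscillate in time."""
    lam = lam0.copy()
    lam[:5] = lam0[:5] * np.cos(2.0 * np.pi * t / 4.0)   # sign changes over t in [0, 4]
    return V @ np.diag(lam) @ V.T

# B from a prescribed SVD: Legendre polynomials as left singular vectors
U = np.polynomial.legendre.legvander(x, m - 1)
U, _ = np.linalg.qr(U)             # orthonormalize the first m Legendre columns
svals = np.exp(-0.4 * np.arange(m))          # placeholder decaying singular values
W = np.eye(m)                      # right singular vectors (canonical basis here)
B = U @ np.diag(svals) @ W.T       # N x m forcing matrix
```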
Figure 6-1: (a) Eigenvalues of matrix A for the 20-dimensional self-engineered test case (b) First six eigenvectors of matrix A for the 20-dimensional self-engineered test case
Figure 6-2: (a) Singular values of matrix B for the 20-dimensional self-engineered test case (b) First six left singular vectors of matrix B for the 20-dimensional self-engineered test case
We consider the time domain t ∈ [0, 4] and use a time step of Δt = 0.01. The initial uncertainty is set to zero (deterministic initial condition) and the initial condition is represented as

\begin{bmatrix} x_1(0; w) \\ x_2(0; w) \\ \vdots \\ x_{20}(0; w) \end{bmatrix} = \begin{bmatrix} 1.0 \\ 1.0 \\ \vdots \\ 1.0 \end{bmatrix}    (6.1)
The stochastic mean of the entire 20-dimensional system using Monte Carlo simulations with 2 × 10^4 sample realizations is shown in figure (6-3). It is observed that the mean values of all the state variables follow the exact same dynamics. This is because the initial conditions project only on the first eigenvector of A(t) at t = 0. Since in our test case the mean equation is dX̄/dt = A X̄, the mean will at all times remain orthogonal to the other eigenvectors and stay in the direction of the state space that corresponds to that constant first eigenvector of A. Hence, all of the elements of the mean should remain equal for all times if they are equal initially. The variance of the system state estimated using the MC scheme is plotted in figure (6-4). Unlike the mean, the variances of the different state variables exhibit dissimilar dynamics. This is because the variance is driven by the sustained stochastic forcing B(X(t; w), t) dW(t; w), where B is not a square matrix (it is a 20 × 8 matrix) and is not orthogonal to the eigenvectors of A.
To obtain more information about the system state, the MC solution is decoupled into deterministic modes and corresponding stochastic coefficients (similar to the DO scheme), using a singular value decomposition (SVD) of the ensemble spread matrix (as in the ESSE scheme). The singular vectors (modes) are arranged in order of decreasing variance. The variances of the first six, as estimated with the MC scheme, are shown in figure (6-5). By construction, the magnitude of the variances decreases significantly from the first mode to the second, from the second mode to the third, and so on. For instance, at t = 4s, the first mode captures around 61.8 percent of the total variance, the first two modes together capture around 92.4 percent of the total variance and the first four modes together capture around 99.2 percent of the total variance of the system.
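A sketch of this decoupling step is given below (Python/NumPy), assuming the Monte Carlo realizations at a fixed time instant are stored in an array X of shape (n_mc, N); it illustrates the SVD of the ensemble spread matrix rather than the thesis implementation itself.

```python
import numpy as np

def decouple_ensemble(X: np.ndarray, s: int):
    """Split an ensemble into s dominant modes and their variances.

    X : (n_mc, N) array of realizations at one time instant.
    Returns (modes, coeffs, variances): modes is (N, s) with orthonormal
    columns, coeffs is (n_mc, s), variances is (s,) in decreasing order.
    """
    n_mc = X.shape[0]
    spread = X - X.mean(axis=0)              # remove the ensemble mean
    # Thin SVD of the spread; right singular vectors are the spatial modes
    _, sing, Vt = np.linalg.svd(spread, full_matrices=False)
    modes = Vt[:s].T                         # (N, s), ordered by decreasing variance
    coeffs = spread @ modes                  # stochastic coefficients of each mode
    variances = sing[:s] ** 2 / (n_mc - 1)   # per-mode variance estimates
    return modes, coeffs, variances

# Example use (X as described above):
# modes, coeffs, var = decouple_ensemble(X, s=6)
# print(np.cumsum(var) / np.sum(np.var(X, axis=0, ddof=1)))  # captured variance fraction
```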
Figure 6-3: Mean of solution field x(t; w) for the 20-dimensional self-engineered test case using the MC scheme
Figure 6-4: Variance of solution field x(t; w) for the 20-dimensional self-engineered test
case using the MC scheme
Figure 6-5: Variance of first six modes for the 20-dimensional self-engineered test case
using the MC scheme
To further illustrate the dynamics of the decoupled system, a comparison of the time evolution of the variances of the first six singular vectors is shown in figure (6-6). We find that the
evolution of eigenvalues of matrix A causes the dynamics of the decoupled system state
to change in time. As the system is evolved, the variances of the modes change relative
to each other, causing the modes to interchange their relative positions at certain time
instances. For example, at around t = 1.43s, the first and second modes interchange their
positions. Similarly, at around t = 2.20s, the second and third modes interchange their
positions. The time evolution of the first three deterministic modes is depicted in figure
(6-7). The interchange of relative positions of the modes is seen very clearly in this figure.
In what follows, the MC solution is assumed to be the true solution.
Next, we model the self-engineered test case using the DO scheme with s = 6. The stochastic mean and variance of the entire system are presented in figures (6-8) and (6-9) respectively. For comparison, the corresponding MC solutions are also plotted in the same figures. From these figures, we find that the DO scheme with s = 6 time-integrates the mean and variance with reasonable accuracy.
The variances of the first six decoupled DO modes are shown in figure (6-10).
Figure 6-6: Comparison of variances of first six modes for the 20-dimensional self-engineered test case using the MC scheme
Figure 6-7: First three modes for the 20-dimensional self-engineered test case using the MC scheme
Figure 6-8: Mean of solution field x(t; w) for the 20-dimensional self-engineered test case
using the DO scheme with s = 6
Figure 6-9: Variance of solution field x(t; w) for the 20-dimensional self-engineered test case using the DO scheme with s = 6
Figure 6-10: Variance of first six modes for the 20-dimensional self-engineered test case
using the DO scheme with s = 6
We find that the variances of the first three DO modes show an excellent agreement with
the variances of the first three MC modes. The variances of DO modes four and five are also captured well, but that of the sixth DO mode does not show as good an agreement
with the corresponding MC mode. However, the sixth and subsequent modes together
account for only around 0.3 percent of the total variance of the system, and are hence
not significant. The time evolution of the deterministic DO modes is illustrated in figure
(6-11). The corresponding errors are also shown in the same figure. We find that apart
from the short time intervals where there is a change in the relative positions of the modes,
the errors are quite small in the entire time domain.
To improve the accuracy of the solution, we now model the self-engineered test case
using the DO scheme with s = 10 modes. The corresponding stochastic mean and variance, along with the MC solutions, are plotted in figures (6-12) and (6-13) respectively.
The variances of the first six decoupled DO modes are shown in figure (6-14) and the time
evolution of the deterministic DO modes, along with the corresponding errors, is depicted
in figure (6-15).
Figure 6-11: First three modes for the 20-dimensional self-engineered test case using the
DO scheme with s = 6
From figures (6-12) and (6-13), we observe that the solution using the DO scheme with s = 10 is reasonably accurate, which is as expected. Additionally, from figure (6-14), the variances of the first five DO modes show an excellent agreement with the first five MC modes. Thus, the DO scheme with s = 10 is able to capture as much as 99.7 percent of the total stochastic variance accurately. Moreover, from figure (6-15), we observe that for the DO scheme with s = 10, the errors in the evolution of the deterministic modes are extremely small over the entire time domain, including the time intervals where there is a change in the relative position of the modes. Thus, the DO scheme with s = 10 shows an observable improvement in accuracy compared to the DO scheme with s = 6.
Finally, we use our new KLgPC scheme to time-integrate this dynamical test case. The KLgPC scheme is implemented on a reduced space of ℓ_red = 6 and using the polynomial order p = 2 (s_red = 28). The stochastic mean and variance of the solution field X(t; w) using the KLgPC scheme, along with the MC solutions, are illustrated in figures (6-16) and (6-17) respectively. The comparison of the results of the KLgPC and MC schemes shows that the KLgPC scheme is also accurate in modeling the evolution of uncertainty in the self-engineered test case.
To gain more insight into the KLgPC solution, it is also decoupled using SVD, similar
to the DO scheme. The variances of the first six decoupled KLgPC modes are shown in
figure (6-18). For comparison, the corresponding variances of the decoupled MC modes
are also shown. We find that except for the sixth mode, the variances of the KLgPC modes
show a good agreement with the corresponding variances of the MC modes. However,
the accuracy of the KLgPC scheme in time integrating these variances is less than that
of the DO scheme with s = 10. The time evolution of the deterministic modes for the
KLgPC scheme is depicted in figure (6-19). The corresponding errors are also plotted in
the same figure. It is seen that the errors in the evolution of the deterministic modes are
small over the entire time domain, including the time intervals where there is a change
in the relative position of the modes. Overall, for this test case, the KLgPC scheme with
ℓ_red = 6 and p = 2 is more accurate than the DO scheme with s = 6, but slightly less
accurate than the DO scheme with s = 10.
Figure 6-12: Mean of solution field x(t; w) for the 20-dimensional self-engineered test case using the DO scheme with s = 10

Figure 6-13: Variance of solution field x(t; w) for the 20-dimensional self-engineered test case using the DO scheme with s = 10
Figure 6-14: Variance of first six modes for the 20-dimensional self-engineered test case
using the DO scheme with s = 10
The run times for the different UQ schemes presented for this 20-dimensional self-engineered test case are given in table (6.1). We find that the run times for the DO scheme are significantly smaller than those of the KLgPC scheme. Even for the DO scheme with s = 10, which is found to be slightly more accurate than the KLgPC scheme with ℓ_red = 6 and p = 2, the run time is considerably smaller than that of the KLgPC
scheme. Thus, for this stochastic test case, the DO scheme appears to be a more efficient
choice. In conclusion, although the KLgPC scheme appears to be the best polynomial
chaos scheme for the time-integration of uncertainty due to stochastic forcing in high
dimensional stochastic systems over large time intervals, its relatively high computational
cost (in comparison to the DO scheme) may prove to be a hindrance in its applicability
to realistic ocean systems.
Figure 6-15: First three modes for the 20-dimensional self-engineered test case using the
DO scheme with s = 10
Table 6.1: Run times for UQ schemes for time-integrating uncertainty in the 20-dimensional self-engineered test case

S. No.  Uncertainty Quantification Scheme        Run Time
1.      MC scheme                                84.21s
2.      DO scheme with s = 6                     75.18s
3.      DO scheme with s = 8                     163.15s
4.      DO scheme with s = 10                    221.31s
5.      KLgPC scheme with ℓ_red = 6 and p = 2    5271.66s
Figure 6-16: Mean of solution field x(t; w) for the 20-dimensional self-engineered test case using the KLgPC scheme with ℓ_red = 6 and p = 2 (s_red = 28)
Figure 6-17: Variance of solution field x(t; w) for the 20-dimensional self-engineered test case using the KLgPC scheme with ℓ_red = 6 and p = 2 (s_red = 28)
Figure 6-18: Variance of first six modes for the 20-dimensional self-engineered test case using the KLgPC scheme with ℓ_red = 6 and p = 2 (s_red = 28)

6.2 Modeling Uncertainty in Nonlinear Ocean Surface Waves

6.2.1 Introduction
Ocean surface waves are a well-known physical phenomenon and constitute an important
section of ocean science. Although surface waves have intrigued people for a very long time, it is only in the last 200 years that a strong physical and mathematical foundation for understanding wave phenomena has been established. The most
directly experienced and frequently occurring surface waves are the wind-driven waves,
which are caused by the interaction of the wind with the ocean surface at the air-water
interface. A less frequent but more catastrophic form of surface waves are the tsunamis,
which are long-period oscillations caused by geologic effects such as earthquakes or landslides. Other examples of surface waves include those caused by human activity (motion
of ships, explosions etc.) and marine biology.
Figure 6-19: First three modes for the 20-dimensional self-engineered test case using the KLgPC scheme with ℓ_red = 6 and p = 2 (s_red = 28)
In the present work, we focus our attention on wind-driven surface waves, which occur on the free surface of large water bodies (such as oceans, lakes and rivers) due to the
action of the wind. Since gravity is the main restoring force in these waves, they are often
referred to as wind-driven surface gravity waves. Under the assumption of irrotational and
inviscid fluid flow, the governing equations of motion for these waves are the coupled and
highly nonlinear Euler equations (which are derived from the Navier-Stokes equations).
These equations typically involve coupled partial differential equations for two unknown
variables, namely, the surface elevation η(r, t) and the velocity potential φ(r, t). The
Euler equations in their general form are highly nonlinear and non-integrable analytically.
In view of their mathematical complexity, several lower order approximations to model
wind-driven surface waves have been proposed over the last few decades. These models
have been derived by making further simplifying assumptions on the Euler equations, by
considering special cases where certain terms in the equations dominate over others.
The Green-Naghdi equations (Green et al. (1974), Green and Naghdi (1976)), also
known as the Su-Gardner equations (Su and Gardner, 1969), are derived under the additional assumption that the wavelength of the surface wave (λ) is much greater than the water depth (h), i.e., ε = h/λ ≪ 1. This assumption is typically used for studying waves in shallow water. The one-dimensional Green-Naghdi equations are given as

\eta_t + [(h + \eta)\, u]_x = 0
u_t + u\, u_x + g\, \eta_x = \frac{1}{3(h + \eta)} \left[ (h + \eta)^3 \left( u_{xt} + u\, u_{xx} - u_x^2 \right) \right]_x    (6.2)
where η(x, t) is the surface elevation and u(x, t) is the depth-mean velocity field. The Green-Naghdi equations (6.2), like the Euler equations, are highly nonlinear. If we additionally assume that the wave amplitude is much smaller than the water depth, we can simplify equation (6.2) further to obtain lower order approximations. Using the assumption that η/h = O(ε²) and u/(gh)^{1/2} = O(ε²), and dropping terms of order higher than O(ε⁴), the Green-Naghdi equations reduce to the weakly nonlinear Boussinesq equations, which are given as

\eta_t + [(h + \eta)\, u]_x = 0
u_t + u\, u_x + g\, \eta_x = \frac{h^2}{3}\, u_{xxt}    (6.3)
The Boussinesq equations (6.3) represent waves of bi-directional nature and can be simplified further by assuming uni-directional waves, giving rise to the Korteweg-de Vries
(KdV) equation (de Jager, 2006). The one-dimensional KdV equation is given as

\eta_t + c_0\, \eta_x + \frac{3 c_0}{2h}\, \eta\, \eta_x + \frac{c_0 h^2}{6}\, \eta_{xxx} = 0    (6.4)

where c_0 = \sqrt{g h}.
The analytical solutions of many nonlinear wave equation models can be computed using a relatively new method of mathematical physics called the inverse scattering transform (IST), which is a generalization of the linear Fourier transform (see Osborne (2010) for details). For a given wave speed c, the analytical expression for the surface elevation η(x, t) of a solitary wave for the Green-Naghdi equations (6.2) is given as

\eta(x, t)_{GN} = \left( \frac{c^2}{g} - h \right) \mathrm{sech}^2 \left[ \frac{\sqrt{3(c^2 - c_0^2)}}{2 c h} \, (x - ct) \right]    (6.5)

Similarly, the solitary wave solution for the KdV equation (6.4) is given as

\eta(x, t)_{KdV} = \frac{2h(c - c_0)}{c_0} \, \mathrm{sech}^2 \left[ \sqrt{\frac{3(c - c_0)}{2 c_0}} \, \frac{x - ct}{h} \right]    (6.6)

The relation between the speed of propagation c and the wave amplitude a is given as

c = c_0 \left( 1 + \frac{a}{2h} \right)    (6.7)
The analytical solution of the solitary wave for the Boussinesq equations (6.3) is not
known (Li et al., 2004).
6.2.2 Korteweg-de Vries (KdV) Equation
A detailed historical account of the origin of the Korteweg-de Vries equation is provided
by Miles (1981) and de Jager (2006). The motivation behind the development of this equation came from the observations made by John Scott Russell in the Union Canal in 1834,
and his subsequent experiments (Russell, 1844). He observed a round and smooth wave
moving at a constant speed in the canal without changing its form, a phenomenon which
later came to be known as a solitary wave. Lord Rayleigh and Joseph Valentin Boussinesq
independently developed mathematical theories to explain these observations (Rayleigh
(1876), Boussinesq (1877)). The KdV equation is present implicitly in the work of Boussinesq (Boussinesq, 1872).
Diederik Johannes Korteweg was a student of J. D. van der Waals and received his
doctoral degree from the University of Amsterdam in 1878. He appears to have believed
that the mathematical theories of Rayleigh and Boussinesq did not properly resolve the
long wave paradox (Ursell, 1953) between Russell's observations and the theory of shallow
water waves proposed by George Biddell Airy (Airy, 1845). It was for this reason that he
suggested this problem to his student Gustav de Vries. The first explicit appearance of
the KdV equation is in the dissertation of de Vries (de Vries, 1894). Subsequently, the
theory of Korteweg and de Vries was published in a paper in the Philosophical Magazine
in 1895 (Korteweg and de Vries, 1895). Interest in solitary waves declined in the coming
years, until the discovery of the soliton in 1965 (Zabusky and Kruskal, 1965). This led to
the development of inverse scattering theory (see Ablowitz and Segur (1981) for details),
which was used for finding analytical solution of the KdV equation (Gardner et al., 1967),
reviving academic interest in solitary waves.
The Korteweg-de Vries equation models the propagation of weakly nonlinear, weakly
dispersive, incompressible, inviscid and irrotational water waves of long wavelength and
small amplitude in shallow water. The physical form of the KdV equation is represented
by equation (6.4). However, several other forms of the equation are also employed. Using the variable transformation σ = η + 2h/3, equation (6.4) reduces to the simplified form

\sigma_t + \frac{3 c_0}{2h}\, \sigma\, \sigma_x + \frac{c_0 h^2}{6}\, \sigma_{xxx} = 0    (6.8)

Equation (6.8) can then be non-dimensionalized with the transformations ζ = 3σ/(2h³) and τ = c_0 h² t / 6, leading to the non-dimensional form of the KdV equation, given as

\zeta_\tau + 6\, \zeta\, \zeta_x + \zeta_{xxx} = 0    (6.9)

On the other hand, if we non-dimensionalize equation (6.8) using ξ = 9σ/h³ and τ = c_0 h² t / 6, we get another form of the non-dimensional KdV equation, represented as

\xi_\tau + \xi\, \xi_x + \xi_{xxx} = 0    (6.10)

Finally, the sign of the nonlinear terms 6ζζ_x and ξξ_x in (6.9) and (6.10) can be reversed by using a negative sign in the variable transformations of ζ and ξ respectively.
The expression for the depth-averaged velocity field u(x, t) for the KdV approximation can be obtained from conservation of mass for a homogeneous fluid, i.e., the first of the equations in (6.3), in conjunction with the KdV equation. To do so, we re-write this conservation of mass (continuity equation) as

\eta_t + [(h + \eta)\, u]_x = 0    (6.11)

We also re-write the KdV equation (6.4) in conservative form, given as

\eta_t + \left[ c_0\, \eta + \frac{3 c_0}{4h}\, \eta^2 + \frac{c_0 h^2}{6}\, \eta_{xx} \right]_x = 0    (6.12)

Comparing equations (6.11) and (6.12), we obtain

u = \frac{1}{h + \eta} \left[ c_0\, \eta + \frac{3 c_0}{4h}\, \eta^2 + \frac{c_0 h^2}{6}\, \eta_{xx} \right]    (6.13)

Since η/h = O(ε²) ≪ 1, we can utilize a Taylor series expansion to approximate 1/(h + η). Neglecting (η/h)² and other higher order terms, the expression for u(x, t) of equation (6.13) can be written as

u = \frac{1}{h} \left[ c_0\, \eta + \frac{3 c_0}{4h}\, \eta^2 + \frac{c_0 h^2}{6}\, \eta_{xx} \right] \left( 1 - \frac{\eta}{h} \right)    (6.14)

Neglecting terms of order higher than O(ε⁴) and simplifying further, we obtain the final expression for the depth-mean velocity field for the KdV approximation, given as

u = \frac{c_0}{h}\, \eta - \frac{c_0}{4 h^2}\, \eta^2 + \frac{c_0 h}{6}\, \eta_{xx}    (6.15)
The classic KdV equations model the propagation of weakly nonlinear water waves of
long wavelength and small amplitude in shallow water of constant depth. To extend their
applicability to ocean waves with variable bathymetry, several modified KdV-like equations have been introduced in recent years. The class of generalized KdV-like differential
equations with variable coefficients which model the propagation of ocean surface waves
of small amplitude over shallow water of variable depth is known as KdV-top equations (for more details, see the works of Van Groesen and Pudjaprasetya (1993), Dingemans (1997), Israwi (2009), Duruflé and Israwi (2012)). In what follows, we only consider the
classic KdV equations, with constant bathymetry.
6.2.3 Deterministic KdV Equation: Solitary Wave Solutions
In this section, we numerically solve for and study the propagation of solitary waves using
the non-dimensional form (6.9) of the one-dimensional KdV equation (6.4). As discussed
earlier, a solitary wave is a smooth and rounded wave consisting of a single elevation,
which propagates without a change in its form. The stability in form of a solitary wave
is due to the balance between nonlinear steepening (mathematically represented by the
term 6ζζ_x) and dispersion (represented by the term ζ_xxx). The single soliton analytical solution of the non-dimensional KdV equation (6.9) is given by

\zeta(x, t) = 2 \kappa^2 \, \mathrm{sech}^2 \left[ \kappa \left( x - x_0 - 4 \kappa^2 t \right) \right]    (6.16)

where x_0 is the initial location of the soliton (at t = 0). Further, the amplitude of the solitary wave is 2κ², its width is 1/κ and its velocity is 4κ². Similarly, the two soliton solution of the non-dimensional KdV equation (6.9) is given as

\zeta(x, t) = \frac{8 \left( \kappa_1^2 - \kappa_2^2 \right) \left[ \kappa_1^2 \cosh^2 \theta_2 + \kappa_2^2 \sinh^2 \theta_1 \right]}{\left[ \left( \kappa_1 - \kappa_2 \right) \cosh\left( \theta_1 + \theta_2 \right) + \left( \kappa_1 + \kappa_2 \right) \cosh\left( \theta_1 - \theta_2 \right) \right]^2}    (6.17)

where θ_1 and θ_2 are given by

\theta_1 = \kappa_1 \left[ x - x_1 - 4 \kappa_1^2 t \right], \qquad \theta_2 = \kappa_2 \left[ x - x_2 - 4 \kappa_2^2 t \right]    (6.18)

Here, for the first solitary wave, the amplitude is 2κ_1², the width is 1/κ_1 and the velocity is 4κ_1², whereas for the second solitary wave, the amplitude is 2κ_2², the width is 1/κ_2 and the velocity is 4κ_2². The initial locations of the two solitons are represented by x_1 and x_2 respectively.
First, we numerically solve for the 1-soliton solution of the non-dimensional KdV
equation (6.9).
We consider the spatial domain x ∈ [-10, 20] and use a grid spacing of Δx = 0.1 for spatial discretization of the governing equation. Also, we utilize periodic boundary conditions for our numerical simulation. For time integration, we use the Zabusky-Kruskal (Z-K) numerical scheme (Zabusky and Kruskal (1965), Ablowitz and
Taha (1984)). The Z-K scheme is an explicit leapfrog finite difference scheme with second
order convergence in space and time. The linear stability condition for the Z-K scheme
has been derived by Vliegenthart (Vliegenthart, 1971) and is given as
\Delta t \leq \left[ \frac{6\, |\zeta(x, t)|}{\Delta x} + \frac{4}{(\Delta x)^3} \right]^{-1}    (6.19)

Rose (2006) has shown that Δt = (Δx)³/4 satisfies the stability condition (6.19). We consider the time domain t ∈ [0, 10] and use a time step of Δt = (Δx)³/4 = 0.00025.
Herman and Knickerbocker (1993) have found that the numerical solution of the KdV equation using the Z-K scheme induces a numerical shift in the position of the solitary wave. They have proposed a correction in the velocity of the numerical KdV wave. The corrected velocity of the numerical solution is given by

v = \frac{d x_c}{dt} = 4 \kappa^2 - \frac{4}{5} \kappa^4 (\Delta x)^2    (6.20)

The first term in equation (6.20) is the expression for the analytical velocity and the second term is a correction term due to truncation error. Our numerical solution is also adjusted to account for this truncation error.
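A compact sketch of the Zabusky-Kruskal update for the non-dimensional KdV equation (6.9) is given below (Python/NumPy, periodic boundaries via array rolling). It is an illustration rather than the thesis code: the first leapfrog step is bootstrapped with a forward Euler step, and the velocity correction (6.20) would only enter when comparing numerical soliton positions against the analytical solution.

```python
import numpy as np

def soliton(x, t, kappa=1.0, x0=0.0):
    """Analytical 1-soliton of eq. (6.16): amplitude 2*kappa^2, speed 4*kappa^2."""
    return 2.0 * kappa**2 / np.cosh(kappa * (x - x0 - 4.0 * kappa**2 * t))**2

def zk_rhs(z, dx):
    """Spatial terms of zeta_t = -(6 zeta zeta_x + zeta_xxx), Z-K discretization."""
    zp1, zm1 = np.roll(z, -1), np.roll(z, 1)
    zp2, zm2 = np.roll(z, -2), np.roll(z, 2)
    nonlinear = (zp1 + z + zm1) * (zp1 - zm1) / dx            # averaged 6*zeta*zeta_x
    dispersion = (zp2 - 2.0 * zp1 + 2.0 * zm1 - zm2) / (2.0 * dx**3)
    return -(nonlinear + dispersion)

dx = 0.1
x = np.arange(-10.0, 20.0, dx)
dt = dx**3 / 4.0                      # satisfies the stability condition (6.19)
n_steps = int(10.0 / dt)

z_old = soliton(x, 0.0)               # zeta at t = 0
z = z_old + dt * zk_rhs(z_old, dx)    # bootstrap the first step (forward Euler)

for _ in range(n_steps - 1):          # leapfrog: zeta^{n+1} = zeta^{n-1} + 2 dt * rhs(zeta^n)
    z_new = z_old + 2.0 * dt * zk_rhs(z, dx)
    z_old, z = z, z_new
```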
For our example, we consider a solitary wave with κ = 1.0 and x_0 = 0.0. The 1-soliton wave train for the domain [-10, 20] × [0, 10] using the Z-K numerical integration
scheme with correction (6.20) is shown in figure (6-20). For comparison, the analytical
1-soliton wave train is plotted in figure (6-21). It is seen that the Z-K scheme is accurate
in modeling the propagation of a single soliton. The physical form of the solitary wave
at different time instances is depicted in figure (6-22). We observe that the solitary wave
travels without a change in its form, which is as expected.
Next, we investigate the interaction of two solitons, still using the KdV equation
(6.9).
We consider the spatial domain x ∈ [-10, 10] and as before use Δx = 0.1 for spatial discretization. We again use periodic boundary conditions for the physical domain. We consider the time domain t ∈ [0, 2] and again choose a time step of Δt = 0.00025. We consider two solitons with κ_1 = 1.2, κ_2 = 0.8, x_1 = -6.0 and x_2 = -2.0. The numerical evolution of the 2-soliton wave train for the domain [-10, 10] × [0, 2] using the Z-K numerical integration scheme is shown in figure (6-23).
For comparison, the
analytical 2-soliton wave train is plotted in figure (6-24). It is found that the Z-K scheme
is accurate in modeling the interaction of the two solitons. The physical form of the
solution at different time instances is depicted in figure (6-25).
It is observed that the
solitons interact elastically, i.e., their amplitudes and physical forms are unchanged before
and after the interaction. However, the interaction of two solitons induces a noticeable
phase shift in their positions, as evident from the top view of the wave trains in figures
(6-23) and (6-24).
6.2.4 Stochastic KdV Equation
In this section, we numerically solve for and study solutions of the Korteweg-de Vries (KdV) equation with external stochastic forcing. Starting again from the non-dimensional deterministic KdV equation (6.9), the governing equation of the resulting stochastic KdV system is then given as

\zeta_t + 6\, \zeta\, \zeta_x + \zeta_{xxx} = \epsilon\, \dot{W}(t; w)    (6.21)

where \dot{W}(t; w) is stochastic noise (represented by a one-dimensional Wiener process W(t; w)) with amplitude ε. For our example, we consider the spatial domain x ∈ [-5, 10]. We impose periodic boundary conditions for all simulations and consider the time domain t ∈ [0, 1].
Figure 6-20: Numerical KdV 1-soliton wave train, computed using the Z-K scheme with the correction (6.20): (a) 3-D view (b) Top view
Figure 6-21: Analytical KdV 1-soliton wave train: (a) 3-D view (b) Top view
154
Figure 6-22: KdV 1-soliton numerical and analytical solutions at different time instances
We assume an initial solitary wave with η̃ = 1.0 and x₀ = 0.0, given by equation (6.16) with t = 0.
We utilize the Euler-Maruyama scheme (described in section 2.2.1) for the discretization of the stochastic PDE (6.21). We employ Δx = 0.2 for spatial discretization. Thus, the dimensionality of the spatially discretized system is N = 75. We compare the numerical stochastic solutions of (6.21) obtained using a Monte Carlo (MC) scheme to those obtained using the Dynamically Orthogonal (DO) equations. For the MC scheme, we utilize M_r = 1000 sample realizations, whereas for the DO scheme, we compare solutions using s = 6, s = 10 and s = 14, in each case using M_r = 1000 realizations for the corresponding s stochastic DO coefficients. We found that, for our implementation of the DO scheme, a substantially smaller time step was needed to ensure numerical stability of the evolution equation for the DO modes. Hence, we employ a time step of Δt = 0.005 × (Δx)³/4 = 0.00001 for both the MC and DO schemes.
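A minimal sketch of the Euler-Maruyama update for the Monte Carlo ensemble is given below. It assumes centered stencils for the spatial derivatives, a spatially uniform one-dimensional Wiener forcing of amplitude eps as in (6.21), and illustrative variable names (not those of our solver).

function U = em_step(U, dx, dt, eps)
% U : N x Mr matrix; column r is one Monte Carlo realization of the state.
% One Euler-Maruyama step of zeta_t + 6*zeta*zeta_x + zeta_xxx = eps*xi(t)
% on a periodic grid with spacing dx and time step dt.
[N, Mr] = size(U);
up1 = circshift(U, -1);   um1 = circshift(U, 1);
up2 = circshift(U, -2);   um2 = circshift(U, 2);
zeta_x   = (up1 - um1) / (2*dx);
zeta_xxx = (up2 - 2*up1 + 2*um1 - um2) / (2*dx^3);
drift    = -6*U.*zeta_x - zeta_xxx;
dW       = sqrt(dt) * randn(1, Mr);               % one Wiener increment per realization
U        = U + dt*drift + eps*repmat(dW, N, 1);   % forcing is uniform in space
end

With Δx = 0.2 and Δt = 0.00001 as above, the MC solution is obtained by applying this update repeatedly to all M_r = 1000 columns simultaneously.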
The evolution of the mean of the solution, as estimated using the MC scheme and using the DO scheme with three different DO subspace sizes s, is presented in figure (6-26). The evolution of the corresponding variance of the solution field at the same time
Figure 6-23: Numerical KdV 2-soliton wave train, computed using the Z-K scheme: (a) 3-D view (b) Top view
Figure 6-24: Analytical KdV 2-soliton wave train: (a) 3-D view (b) Top view
Figure 6-25: KdV 2-soliton numerical and analytical solutions at different time instances
instances using the MC and DO schemes is shown in figure (6-27). We assume that the
MC solution is the true solution. From these figures, it is observed that all three DO
schemes model the mean of the solution field with reasonable accuracy. However, the
variance of the solution field using the DO scheme with s = 6 starts to deviate from the
Monte Carlo solution at t = 0.7s, while the other two schemes (with s = 10 and s = 14)
maintain their accuracy at all time instances.
In order to get a better idea of the accuracy of the numerical schemes, we investigate
higher order moments of the solution field. The third and fourth central moments of
the stochastic KdV solution field using the DO scheme with s = 6, s = 10 and s =
14, along with the corresponding MC solutions, are plotted in figures (6-28) and (6-29)
respectively. Just as we expect the relative accuracy of the MC scheme with too small a sample size to decrease for higher order moments, we expect the relative accuracy of DO schemes with too small a subspace size s to decrease as we estimate higher order moments. It is observed from figures (6-28) and (6-29) that this is indeed the case, and that the relative accuracy of the DO schemes in modeling higher order moments (third and fourth moments) is lower than their accuracy in modeling the corresponding lower order moments
Figure 6-26: Mean of the stochastic KdV solution field at different time instances
Figure 6-27: Variance of the stochastic KdV solution field at different time instances
Figure 6-28: Third central moment of the stochastic KdV solution field at different time instances
(mean and variance). It is also seen that the accuracy of the DO scheme increases with
an increase in the number of modes, which is as expected. The DO scheme with s = 14
accurately models all four moments of the solution field in the given time domain. If
we allowed the simulation to run for a very long time (without data assimilation), we
expect all reduced space DO schemes to eventually start losing their accuracy. However,
the higher the number of modes used in the scheme, the larger the time up to which the
scheme can accurately model the statistics of the solution field.
The run times for numerical integration of the KdV equation with external stochastic
forcing using the different UQ schemes are presented in table (6.2). It is observed that
the run time for the MC scheme is similar to the run time for DO scheme with s = 10.
Also, the run time for DO scheme with s = 14 is approximately twice the run time for
the MC scheme.
These observations may lead us to believe that the Monte Carlo scheme is more economical for modeling uncertainty in realistic ocean systems as compared to a DO scheme
Figure 6-29: Fourth central moment of the stochastic KdV solution field at different time instances
Table 6.2: Run times for UQ schemes for time-integrating uncertainty in the KdV equation with external stochastic forcing

S. No.  Uncertainty Quantification Scheme    Run Time
1.      MC scheme                            5271.68 s
2.      DO scheme with s = 4                 1829.38 s
3.      DO scheme with s = 6                 2011.18 s
4.      DO scheme with s = 8                 3599.65 s
5.      DO scheme with s = 10                5396.80 s
6.      DO scheme with s = 12                7709.99 s
7.      DO scheme with s = 14                11119.24 s
Table 6.3: Run times for the DO (with s = 14) and the MC schemes for different values of dimensionality (N) of state space

S. No.  Δx      N     Run time for DO scheme    Run time for MC scheme
1.      0.200   75    24.91 s                   10.46 s
2.      0.150   100   25.79 s                   32.52 s
3.      0.100   150   27.61 s                   53.95 s
4.      0.075   200   29.76 s                   74.75 s
of sufficiently high accuracy. However, this is not the case. All the stochastic dynamical systems that we have studied in the present work, including the stochastic KdV system, have a relatively low dimensionality (N). Most real ocean fields in two- or three-dimensional physical space may have a dimensionality of up to N ~ 10⁶–10¹⁰. From the two
realistic test cases studied in this chapter, i.e., the 20-dimensional self-engineered test case
and the stochastic KdV system with N = 75, it can be observed from tables (6.1) and
(6.2) that the run times for the MC scheme increase significantly with an increase in the
dimensionality of the stochastic dynamical system. The MC scheme suffers from what
is known as the curse of dimensionality (Sapsis and Lermusiaux, 2012). For real ocean
systems with high dimensionality, the MC scheme becomes computationally infeasible.
In order to validate this argument, we run the stochastic KdV test case for different values of dimensionality N (by using different values of Δx for spatial discretization). The
run times for integrating the stochastic KdV system for the first 200 time steps, using
different values of N are presented in table (6.3). It is observed that the run times for
the DO scheme (with s = 14) increase relatively slowly with an increase in N. However,
the run times for the MC scheme increase significantly with an increase in dimensionality
N. Hence, we conclude that the MC scheme becomes computationally infeasible for high
dimensional realistic ocean systems.
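A deliberately rough operation count illustrates this scaling; the estimates below are indicative only and are not measured costs from our solver: per time step the MC scheme advances M_r copies of the full N-dimensional state, whereas the DO scheme advances s modes of dimension N plus M_r copies of only s coefficients.

% Rough per-step work estimates (illustrative scaling only, not measured cost).
N  = [75 100 150 200];          % state dimensions from table (6.3)
Mr = 1000;  s = 14;             % realizations and DO subspace size used above
work_mc = Mr .* N;              % ~ O(Mr*N): every realization touches the full state
work_do = s .* N + Mr .* s;     % ~ O(s*N + Mr*s): modes plus low-dimensional coefficients
disp(work_mc ./ work_do)        % the gap widens as N grows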
Next, we investigate the sensitivity of the computational cost to the number of sample realizations (M_r). For this, we use the same example as above, but with Δx = 0.15. The
run times for integrating the stochastic KdV system for the first 200 time steps, using
different values of Mr are presented in table (6.4). We observe that the run times for the
Table 6.4: Run times for the DO (with s = 14) and the MC schemes for different number of sample realizations (M_r)

S. No.  M_r    Run time for DO scheme    Run time for MC scheme
1.      500    21.80 s                   15.97 s
2.      1000   25.79 s                   32.52 s
3.      2000   34.35 s                   65.28 s
MC scheme increase linearly with an increase in the number of realizations (M_r), which
is as expected. The increase in the run times for the DO scheme is marginally smaller
than the increase in the run times for the MC scheme. We infer that the DO scheme is
less sensitive than the MC scheme to an increase in the number of realizations and hence,
is more efficient in modeling uncertainty.
In the stochastic KdV test case, we observed that the DO scheme with s = 14 modes
was successful in integrating the stochastic differential equation with high accuracy. For
most realistic high dimensional ocean systems, the number of dominant directions of
uncertainty is much smaller than the dimensionality N. Thus, even large ocean systems
with high dimensionality can be integrated accurately using a relatively small number
of DO modes.
Hence, based on the numerical examples studied in this and previous
chapters, we conclude that the reduced space DO scheme is suitable for modeling realistic
ocean systems with uncertainty in input parameters as well as uncertainty due to external
stochastic forcing.
Chapter 7
Conclusions
This chapter presents a summary of the main results and conclusions, and identifies
possible directions for future research. In this thesis, we focused our attention on a special
class of non-autonomous stochastic dynamical systems, represented by equations (1.1) and
(1.2). This class of systems encapsulates most systems encountered in classic nonlinear
dynamics and ocean modeling, including flows modeled by Navier-Stokes equations. The
main motivation behind this research was to quantify and predict uncertainty for ocean
systems with external stochastic forcing.
The first goal of the present work was to develop a computationally efficient solver
for time-integrating the class of non-autonomous linear and nonlinear dynamical systems
(1.2), and to use this solver to study the uncertainty characteristics of stochastic ocean
systems.
The second objective was to investigate the use of dynamically orthogonal
(DO) field equations and polynomial chaos (PC) schemes for integrating systems with
uncertainty in input parameters and due to external stochastic forcing. Towards the first
goal, a computationally efficient solver for the solution of the general stochastic differential
equation (1.1) was developed and used to time-integrate the Korteweg-de Vries equation
with stochastic forcing. Using this solver, the DO and PC schemes were intercompared
in terms of accuracy of solution and computational cost. To allow time-integration of
uncertainty due to external stochastic forcing over long time intervals, we derived two
new polynomial schemes, namely, the reduced space KLgPC scheme and the modified
TDgPC (MTDgPC) scheme.
In chapter 2, we developed the background for the current research. We studied numerical integration schemes for the weak approximation of solutions of the Itô form of stochastic
differential equations and investigated their convergence characteristics. Next, a literature review of UQ methods was presented and the DO and PC schemes were studied
in greater detail. In chapter 3, we derived the evolution equations for the class of nonautonomous systems represented by equations (1.1) and (1.2), using DO and polynomial
chaos schemes. We also introduced two new polynomial chaos schemes (KLgPC scheme
and MTDgPC scheme) capable of integrating uncertainty due to external stochastic forcing over significant time intervals.
In chapter 4, we focused our attention on uncertainty in input parameters. Using the
one-dimensional decay model and the Kraichnan-Orszag system, we studied the limitation of the gPC scheme in integrating uncertainty in nonlinear dynamical systems over
large time intervals. The TDgPC scheme introduced by Gerritsma et al. (2010) was able
to alleviate this limitation by re-initializing the polynomial chaos basis in time. Chapter
5 dealt with uncertainty due to external stochastic forcing. We discussed a fundamental
limitation of existing classic PC schemes in time-integrating uncertainty due to external
stochastic forcing. The reduced space KLgPC and the MTDgPC schemes, introduced in
chapter 3, were used successfully to integrate uncertainty due to stochastic noise of both
additive and multiplicative nature.
However, we observed that the MTDgPC scheme
utilizes the entire state space for its implementation, and would hence become computationally expensive for high dimensional systems. Thus, we concluded that the reduced
space KLgPC scheme is more efficient computationally than the MTDgPC scheme for
integrating uncertainty in ocean systems with stochastic forcing.
Chapter 6 dealt with uncertainty quantification and prediction of stochastic ocean
systems.
We engineered a 20-dimensional system to study the application of reduced
space DO and KLgPC schemes on a stochastic dynamical system that concentrates most
of its probability measure on a time-dependent subspace. We observed that for the same
level of accuracy, the DO scheme has a significantly lower computational cost as compared
to the KLgPC scheme. Next, the deterministic Korteweg-de Vries (KdV) equation was
introduced and was numerically integrated using the Zabusky-Kruskal (Z-K) scheme.
Then, we time-integrated the KdV equation with external stochastic forcing using our
computationally efficient stochastic solver and compared the performance of the DO and
Monte Carlo schemes. Finally, based on the numerical examples studied in this work, we
concluded that the reduced space DO scheme is computationally efficient for integrating
stochastic ocean systems with uncertainty due to external stochastic forcing.
7.1 Future Work
In this thesis, we studied the application of reduced space DO scheme for uncertainty
quantification and prediction of ocean systems with external stochastic forcing. The DO
scheme has already been used with success for modeling stochastic fluid flows and other
ocean related systems. However, the use of the DO scheme has so far been restricted to
integrating systems with uncertainty in input parameters. A next step would be to use
the DO scheme to model fluid flows and other high dimensional systems with uncertainty
due to external stochastic forcing. This would greatly enhance our capabilities to model
and understand the dynamics and interactions of various coupled nonlinear ocean systems
which are modeled as dynamical systems with stochastic forcing. For example, the DO
scheme could be used to study the uncertainty characteristics in the coupling of ocean
acoustics and dynamics, or the effect of wind-driven surface waves on internal waves.
It was observed in chapter 5 that the KLgPC scheme is the most computationally efficient polynomial chaos scheme for time-integrating uncertainty due to external stochastic
forcing over large time intervals. However, the computational cost of the reduced space
KLgPC scheme is still much higher as compared to the reduced space DO scheme. Hence,
another possible research direction could be to investigate possible ways of reducing the
computational cost of the KLgPC scheme. This might involve researching ways of computing the time-dependent polynomial chaos basis either analytically or using other efficient
methods.
In chapter 6, while investigating the 20-dimensional self-engineered test case, it was observed that the accuracy of the DO scheme with s = 6 is lower than the accuracy of the KLgPC scheme with s_red = 6. The KLgPC scheme involves taking a singular value
decomposition (SVD) of the solution field and hence represents the best possible reduced
space approximation of the stochastic field. Thus, the DO approximation of the solution
field might not be the optimal one. Further research might be required to improve the
DO orthogonality condition in order to obtain a better reduced space DO approximation
of the solution field.
Furthermore, it was discussed in chapter 6 that the DO scheme is effective for modeling
high dimensional stochastic systems because many of these dynamical systems concentrate most of their probability measure on a reduced time-dependent subspace. For such
systems, as the dimensionality N of the system increases, the increase in the number of
DO modes (s) required to accurately model the dynamical system is significantly lower
than the corresponding increase in the dimensionality (N) of the system. Although the
singular value decompositions (SVDs) of the dynamics matrices A and B give us some
idea about the number of DO modes required to accurately integrate a dynamical system,
further research may help us to better understand the relationship between the dimensionality of a system, its dynamics and the number of DO modes required to integrate it
accurately.
Appendix A
Detailed Derivation of Evolution
Equations
Here, we derive in detail the evolution equations for the stochastic dynamical systems
represented by (1.1) using DO and gPC schemes. The evolution equations for the other
PC schemes are very similar in form to the evolution equations for the gPC scheme, and
can be derived relatively easily building on the evolution equations for the gPC scheme,
and thus have been omitted here.
A.1 Dynamically Orthogonal Equations
Consider the stochastic dynamical system represented by the differential equation (1.1). We consider a DO expansion given by equation (2.54). We omit the arguments of the mean, modes and coefficients for the sake of simplicity. Substituting the DO expansion (2.54) into (1.1), we have

dx̄ + x̃_i dφ_i + dx̃_i φ_i = A(x̄ + x̃_i φ_i) [x̄ + x̃_j φ_j] dt + B(x̄ + x̃_i φ_i) dW    (A.1)
Evolution equation for mean field:
Taking the mean value of equation (A.1), we get

dx̄ + x̃_i d(E^ω[φ_i]) + dx̃_i E^ω[φ_i] = E^ω[ A(x̄ + x̃_i φ_i) [x̄ + x̃_j φ_j] ] dt + E^ω[ B(x̄ + x̃_i φ_i) dW ]    (A.2)

The stochastic noise can be considered to have zero mean value without loss of generality. Also, since E^ω[φ_i] = 0, we have

dx̄ = E^ω[ A(x̄ + x̃_i φ_i) [x̄ + x̃_j φ_j] ] dt + E^ω[ B(x̄ + x̃_i φ_i) dW ]    (A.3)

which is the DO evolution equation for the mean.
Evolution equation for stochastic coefficients:
Subtracting equation (A.3) from equation (A.1) and taking the projection of the result on x̃_k, we get

⟨x̃_i, x̃_k⟩ dφ_i + ⟨dx̃_i, x̃_k⟩ φ_i = ⟨ A(x̄ + x̃_i φ_i) [x̄ + x̃_j φ_j] dt − E^ω[ A(x̄ + x̃_i φ_i) [x̄ + x̃_j φ_j] ] dt, x̃_k ⟩
  + ⟨ B(x̄ + x̃_i φ_i) dW − E^ω[ B(x̄ + x̃_i φ_i) dW ], x̃_k ⟩    (A.4)

Imposing the DO condition ⟨dx̃_i, x̃_k⟩ = 0 and using the condition that the modes are orthonormal, we get

dφ_k = ⟨ A(x̄ + x̃_i φ_i) [x̄ + x̃_j φ_j] dt − E^ω[ A(x̄ + x̃_i φ_i) [x̄ + x̃_j φ_j] ] dt, x̃_k ⟩
  + ⟨ B(x̄ + x̃_i φ_i) dW − E^ω[ B(x̄ + x̃_i φ_i) dW ], x̃_k ⟩    (A.5)

which are the DO evolution equations for the stochastic coefficients.
Evolution equation for DO modes:
Multiplying equation (A.1) by φ_j and taking its mean value, we get

dx̄ E^ω[φ_j] + x̃_i E^ω[dφ_i φ_j] + dx̃_i E^ω[φ_i φ_j] = E^ω[ A(x̄ + x̃_k φ_k) [x̄ + x̃_k φ_k] φ_j ] dt + E^ω[ B(x̄ + x̃_k φ_k) dW φ_j ]    (A.6)

Also, multiplying equation (A.5) by φ_j and taking its mean value, we have

E^ω[dφ_k φ_j] = E^ω[ ⟨ A(x̄ + x̃_i φ_i) [x̄ + x̃_p φ_p] dt − E^ω[ A(x̄ + x̃_i φ_i) [x̄ + x̃_p φ_p] ] dt, x̃_k ⟩ φ_j ]
  + E^ω[ ⟨ B(x̄ + x̃_i φ_i) dW − E^ω[ B(x̄ + x̃_i φ_i) dW ], x̃_k ⟩ φ_j ]    (A.7)

Substituting for E^ω[dφ_k φ_j] from equation (A.7) in equation (A.6) and simplifying, we obtain

dx̃_i = { E^ω[ A(x̄ + x̃_k φ_k) [x̄ + x̃_k φ_k] φ_j ] dt − ⟨ E^ω[ A(x̄ + x̃_k φ_k) [x̄ + x̃_k φ_k] φ_j ] dt, x̃_m ⟩ x̃_m
  + E^ω[ B(x̄ + x̃_k φ_k) dW φ_j ] − ⟨ E^ω[ B(x̄ + x̃_k φ_k) dW φ_j ], x̃_m ⟩ x̃_m } { E^ω[φ_i φ_j] }⁻¹    (A.8)

which is the DO evolution equation for the modes.
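As a simple illustrative special case of equations (A.3), (A.5) and (A.8) (stated here only for orientation), assume A = A₀(t) and B = B₀(t) are independent of the state, and that the Itô noise is uncorrelated with the instantaneous coefficients so that E^ω[φ_i dW] = 0. The DO equations then reduce to

dx̄ = A₀ x̄ dt
dφ_k = ⟨A₀ x̃_j, x̃_k⟩ φ_j dt + ⟨B₀ dW, x̃_k⟩
dx̃_i = A₀ x̃_i dt − ⟨A₀ x̃_i, x̃_m⟩ x̃_m dt

i.e., the mean evolves deterministically, the coefficients feel the noise only through its projection onto the modes, and the modes rotate within the orthogonal complement of the current subspace.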
DO governing equations for quadratic-nonlinear systems
We consider the special case when matrices A and B are of the form

A = A₀ + A₁(x̄ + x̃_i φ_i)
B = B₀ + B₁(x̄ + x̃_i φ_i)    (A.9)

where A₀ and B₀ are functions of time only and A₁ and B₁ are linear in X = x̄ + x̃_i φ_i. Hence, we can write A and B as

A = A₀ + A₁(x̄) + A₁(x̃_i φ_i)
B = B₀ + B₁(x̄) + B₁(x̃_i φ_i)    (A.10)

Substituting from equation (A.10) in equations (A.3), (A.5) and (A.8), and simplifying the resulting equations, we get the DO evolution equations for this special case, which are given as
dx̄ = A₀ x̄ dt + A₁(x̄) x̄ dt + E^ω[ A₁(x̃_k φ_k) x̃_j φ_j ] dt + E^ω[ B₁(x̃_k φ_k) dW ]    (A.11)

dφ_i = ⟨A₀ x̃_j, x̃_i⟩ φ_j dt + ⟨A₁(x̄) x̃_j, x̃_i⟩ φ_j dt + ⟨A₁(x̃_k φ_k) x̄, x̃_i⟩ dt
  + ⟨ A₁(x̃_k φ_k) x̃_j φ_j − E^ω[ A₁(x̃_k φ_k) x̃_j φ_j ], x̃_i ⟩ dt
  + ⟨B₀, x̃_i⟩ dW + ⟨B₁(x̄), x̃_i⟩ dW + ⟨ B₁(x̃_k φ_k) dW − E^ω[ B₁(x̃_k φ_k) dW ], x̃_i ⟩    (A.12)

dx̃_i = [ A₀ x̃_i − ⟨A₀ x̃_i, x̃_m⟩ x̃_m ] dt + [ A₁(x̄) x̃_i − ⟨A₁(x̄) x̃_i, x̃_m⟩ x̃_m ] dt
  + [ E^ω[ A₁(x̃_k φ_k) x̄ φ_j ] − ⟨ E^ω[ A₁(x̃_k φ_k) x̄ φ_j ], x̃_m ⟩ x̃_m ] { E^ω[φ_i φ_j] }⁻¹ dt
  + [ E^ω[ A₁(x̃_k φ_k) x̃_p φ_p φ_j ] − ⟨ E^ω[ A₁(x̃_k φ_k) x̃_p φ_p φ_j ], x̃_m ⟩ x̃_m ] { E^ω[φ_i φ_j] }⁻¹ dt
  + [ E^ω[ B₀ dW φ_j ] − ⟨ E^ω[ B₀ dW φ_j ], x̃_m ⟩ x̃_m ] { E^ω[φ_i φ_j] }⁻¹
  + [ E^ω[ B₁(x̄) dW φ_j ] − ⟨ E^ω[ B₁(x̄) dW φ_j ], x̃_m ⟩ x̃_m ] { E^ω[φ_i φ_j] }⁻¹
  + [ E^ω[ B₁(x̃_k φ_k) dW φ_j ] − ⟨ E^ω[ B₁(x̃_k φ_k) dW φ_j ], x̃_m ⟩ x̃_m ] { E^ω[φ_i φ_j] }⁻¹    (A.13)
DO governing equations in vector form
We define X̃ ∈ R^{N×s} as X̃ = [x̃₁ x̃₂ ... x̃_s] and Φ ∈ R^{s×1} as Φ = [φ₁ φ₂ ... φ_s]^T. Then we have the vector form of the DO evolution equations, given as

dx̄ = A₀ x̄ dt + A₁(x̄) x̄ dt + E^ω[ A₁(X̃Φ) X̃Φ ] dt + E^ω[ B₁(X̃Φ) dW ]    (A.14)

dΦ = X̃^T A₀ X̃ Φ dt + X̃^T A₁(x̄) X̃ Φ dt + X̃^T A₁(X̃Φ) x̄ dt + X̃^T [ A₁(X̃Φ) X̃Φ − E^ω[ A₁(X̃Φ) X̃Φ ] ] dt
  + X̃^T B₀ dW + X̃^T B₁(x̄) dW + X̃^T [ B₁(X̃Φ) dW − E^ω[ B₁(X̃Φ) dW ] ]    (A.15)

dX̃ = [ A₀ X̃ − X̃ X̃^T A₀ X̃ ] dt + [ A₁(x̄) X̃ − X̃ X̃^T A₁(x̄) X̃ ] dt
  + E^ω[ A₁(X̃Φ) x̄ Φ^T − X̃ X̃^T A₁(X̃Φ) x̄ Φ^T ] P(Φ)⁻¹ dt
  + E^ω[ A₁(X̃Φ) X̃Φ Φ^T − X̃ X̃^T A₁(X̃Φ) X̃Φ Φ^T ] P(Φ)⁻¹ dt
  + E^ω[ B₀ dW Φ^T − X̃ X̃^T B₀ dW Φ^T ] P(Φ)⁻¹
  + E^ω[ B₁(x̄) dW Φ^T − X̃ X̃^T B₁(x̄) dW Φ^T ] P(Φ)⁻¹
  + E^ω[ B₁(X̃Φ) dW Φ^T − X̃ X̃^T B₁(X̃Φ) dW Φ^T ] P(Φ)⁻¹    (A.16)

where P(Φ) = E^ω[Φ Φ^T] is the covariance matrix of Φ.
DO governing equations using realizations
Let Φ^r ∈ R^{s×1} and dW^r ∈ R^{N×1} represent one specific realization of the stochastic variables Φ and dW respectively. Then we have, for these specific realizations, the DO evolution equations for the mean, stochastic coefficients and modes, respectively, given as

dx̄ = A₀ x̄ dt + A₁(x̄) x̄ dt + E^ω[ A₁(X̃Φ^r) X̃Φ^r ] dt + E^ω[ B₁(X̃Φ^r) dW^r ]    (A.17)

dΦ^r = X̃^T A₀ X̃ Φ^r dt + X̃^T A₁(x̄) X̃ Φ^r dt + X̃^T A₁(X̃Φ^r) x̄ dt + X̃^T [ A₁(X̃Φ^r) X̃Φ^r − E^ω[ A₁(X̃Φ^r) X̃Φ^r ] ] dt
  + X̃^T B₀ dW^r + X̃^T B₁(x̄) dW^r + X̃^T [ B₁(X̃Φ^r) dW^r − E^ω[ B₁(X̃Φ^r) dW^r ] ]    (A.18)

dX̃ = [ A₀ X̃ − X̃ X̃^T A₀ X̃ ] dt + [ A₁(x̄) X̃ − X̃ X̃^T A₁(x̄) X̃ ] dt
  + E^ω[ A₁(X̃Φ^r) x̄ (Φ^r)^T − X̃ X̃^T A₁(X̃Φ^r) x̄ (Φ^r)^T ] P(Φ^r)⁻¹ dt
  + E^ω[ A₁(X̃Φ^r) X̃Φ^r (Φ^r)^T − X̃ X̃^T A₁(X̃Φ^r) X̃Φ^r (Φ^r)^T ] P(Φ^r)⁻¹ dt
  + E^ω[ B₀ dW^r (Φ^r)^T − X̃ X̃^T B₀ dW^r (Φ^r)^T ] P(Φ^r)⁻¹
  + E^ω[ B₁(x̄) dW^r (Φ^r)^T − X̃ X̃^T B₁(x̄) dW^r (Φ^r)^T ] P(Φ^r)⁻¹
  + E^ω[ B₁(X̃Φ^r) dW^r (Φ^r)^T − X̃ X̃^T B₁(X̃Φ^r) dW^r (Φ^r)^T ] P(Φ^r)⁻¹    (A.19)
Vectorized DO governing equations using realizations
We define Φ^R ∈ R^{s×M_r} as Φ^R = [Φ¹ Φ² ... Φ^{M_r}] to represent M_r realizations of Φ, and dW^R ∈ R^{N×M_r} as dW^R = [dW¹ dW² ... dW^{M_r}] to represent M_r realizations of dW. Then we have the vectorized form of the DO evolution equations with realizations, given as

dx̄ = A₀ x̄ dt + A₁(x̄) x̄ dt + E^ω[ A₁(X̃Φ^R) X̃Φ^R ] dt + E^ω[ B₁(X̃Φ^R) dW^R ]    (A.20)

dΦ^R = X̃^T A₀ X̃ Φ^R dt + X̃^T A₁(x̄) X̃ Φ^R dt + X̃^T A₁(X̃Φ^R) x̄ dt + X̃^T [ A₁(X̃Φ^R) X̃Φ^R − E^ω[ A₁(X̃Φ^R) X̃Φ^R ] ] dt
  + X̃^T B₀ dW^R + X̃^T B₁(x̄) dW^R + X̃^T [ B₁(X̃Φ^R) dW^R − E^ω[ B₁(X̃Φ^R) dW^R ] ]    (A.21)

dX̃ = [ A₀ X̃ − X̃ X̃^T A₀ X̃ ] dt + [ A₁(x̄) X̃ − X̃ X̃^T A₁(x̄) X̃ ] dt
  + E^ω[ A₁(X̃Φ^R) x̄ (Φ^R)^T − X̃ X̃^T A₁(X̃Φ^R) x̄ (Φ^R)^T ] P(Φ^R)⁻¹ dt
  + E^ω[ A₁(X̃Φ^R) X̃Φ^R (Φ^R)^T − X̃ X̃^T A₁(X̃Φ^R) X̃Φ^R (Φ^R)^T ] P(Φ^R)⁻¹ dt
  + E^ω[ B₀ dW^R (Φ^R)^T − X̃ X̃^T B₀ dW^R (Φ^R)^T ] P(Φ^R)⁻¹
  + E^ω[ B₁(x̄) dW^R (Φ^R)^T − X̃ X̃^T B₁(x̄) dW^R (Φ^R)^T ] P(Φ^R)⁻¹
  + E^ω[ B₁(X̃Φ^R) dW^R (Φ^R)^T − X̃ X̃^T B₁(X̃Φ^R) dW^R (Φ^R)^T ] P(Φ^R)⁻¹    (A.22)
Vectorization for optimal numerical computations
Since the matrix A₁ is linear in x̃_k φ_k, all the stochastic terms can be combined together. Thus, from equation (A.11), we have

dx̄ = A₀ x̄ dt + A₁(x̄) x̄ dt + A₁(x̃_k) x̃_j E^ω[φ_k φ_j] dt + B₁(x̃_k) E^ω[φ_k dW]    (A.23)

We define V₁ ∈ R^{N×s²} as V₁ = A₁(x̃_k) x̃_j, 1 ≤ k, j ≤ s, and M₂(Φ^R) ∈ R^{s²×M_r} as M₂(Φ^R) = φ_k^r φ_j^r, 1 ≤ k, j ≤ s, 1 ≤ r ≤ M_r. Also, we define V₃ ∈ R^{N×sN} as V₃ = B₁(x̃_k), 1 ≤ k ≤ s, and Mn₁(Φ^R, dW^R) ∈ R^{sN×M_r} as Mn₁(Φ^R, dW^R) = φ_k^r dW_p^r, 1 ≤ k ≤ s, 1 ≤ p ≤ N, 1 ≤ r ≤ M_r. Then from equation (A.23), we get the final DO evolution equation for the mean, given as

dx̄ = A₀ x̄ dt + A₁(x̄) x̄ dt + V₁ E^ω[ M₂(Φ^R) ] dt + V₃ E^ω[ Mn₁(Φ^R, dW^R) ]    (A.24)

Similarly, from equation (A.12), we have

dφ_i = ⟨A₀ x̃_j, x̃_i⟩ φ_j dt + ⟨A₁(x̄) x̃_j, x̃_i⟩ φ_j dt + ⟨A₁(x̃_k) x̄, x̃_i⟩ φ_k dt + ⟨A₁(x̃_k) x̃_j, x̃_i⟩ ( φ_k φ_j − E^ω[φ_k φ_j] ) dt
  + ⟨B₀, x̃_i⟩ dW + ⟨B₁(x̄), x̃_i⟩ dW + ⟨B₁(x̃_k), x̃_i⟩ ( φ_k dW − E^ω[φ_k dW] )    (A.25)

We define V₂ ∈ R^{N×s} as V₂ = A₁(x̃_k) x̄, 1 ≤ k ≤ s. Using this and the earlier definitions, and vectorizing equation (A.25), we get the final DO evolution equation for the stochastic coefficients, given as

dΦ^R = X̃^T A₀ X̃ Φ^R dt + X̃^T A₁(x̄) X̃ Φ^R dt + X̃^T V₂ Φ^R dt + X̃^T V₁ [ M₂(Φ^R) − E^ω[ M₂(Φ^R) ] ] dt
  + X̃^T B₀ dW^R + X̃^T B₁(x̄) dW^R + X̃^T V₃ [ Mn₁(Φ^R, dW^R) − E^ω[ Mn₁(Φ^R, dW^R) ] ]    (A.26)

Furthermore, from equation (A.13), we have

dx̃_i = [ A₀ x̃_i − ⟨A₀ x̃_i, x̃_m⟩ x̃_m ] dt + [ A₁(x̄) x̃_i − ⟨A₁(x̄) x̃_i, x̃_m⟩ x̃_m ] dt
  + [ A₁(x̃_k) x̄ − ⟨A₁(x̃_k) x̄, x̃_m⟩ x̃_m ] E^ω[φ_k φ_j] C⁻¹_{ji} dt
  + [ A₁(x̃_k) x̃_p − ⟨A₁(x̃_k) x̃_p, x̃_m⟩ x̃_m ] E^ω[φ_k φ_p φ_j] C⁻¹_{ji} dt
  + [ B₀ − ⟨B₀, x̃_m⟩ x̃_m ] E^ω[dW φ_j] C⁻¹_{ji}
  + [ B₁(x̄) − ⟨B₁(x̄), x̃_m⟩ x̃_m ] E^ω[dW φ_j] C⁻¹_{ji}
  + [ B₁(x̃_k) − ⟨B₁(x̃_k), x̃_m⟩ x̃_m ] E^ω[φ_k dW φ_j] C⁻¹_{ji}    (A.27)

where C_{ji} = E^ω[φ_j φ_i]. We define M₃(Φ^R) ∈ R^{s²×s} as M₃(Φ^R) = E^ω[φ_k φ_p φ_j], 1 ≤ k, p, j ≤ s, and Mn₂(Φ^R, dW^R) ∈ R^{sN×s} as Mn₂(Φ^R, dW^R) = E^ω[φ_k dW_p φ_j], 1 ≤ k, j ≤ s, 1 ≤ p ≤ N. Using this and the earlier definitions, and simplifying and vectorizing equation (A.27), we get the final DO evolution equation for the modes, given as

dX̃ = [ A₀ X̃ − X̃ X̃^T A₀ X̃ ] dt + [ A₁(x̄) X̃ − X̃ X̃^T A₁(x̄) X̃ ] dt + [ V₂ − X̃ X̃^T V₂ ] dt
  + [ V₁ − X̃ X̃^T V₁ ] M₃(Φ^R) P(Φ^R)⁻¹ dt
  + [ B₀ − X̃ X̃^T B₀ ] E^ω[ dW^R (Φ^R)^T ] P(Φ^R)⁻¹
  + [ B₁(x̄) − X̃ X̃^T B₁(x̄) ] E^ω[ dW^R (Φ^R)^T ] P(Φ^R)⁻¹
  + [ V₃ − X̃ X̃^T V₃ ] Mn₂(Φ^R, dW^R) P(Φ^R)⁻¹    (A.28)
Final form of DO governing equations
Thus, from equations (A.24), (A.26) and (A.28), we have the final form of the DO evolution equations for the mean, stochastic coefficients and modes respectively, given as follows:

Mean
dx̄ = A₀ x̄ dt + A₁(x̄) x̄ dt + V₁ E^ω[ M₂(Φ^R) ] dt + V₃ E^ω[ Mn₁(Φ^R, dW^R) ]    (A.29)

Stochastic coefficients
dΦ^R = X̃^T A₀ X̃ Φ^R dt + X̃^T A₁(x̄) X̃ Φ^R dt + X̃^T V₂ Φ^R dt + X̃^T V₁ [ M₂(Φ^R) − E^ω[ M₂(Φ^R) ] ] dt
  + X̃^T B₀ dW^R + X̃^T B₁(x̄) dW^R + X̃^T V₃ [ Mn₁(Φ^R, dW^R) − E^ω[ Mn₁(Φ^R, dW^R) ] ]    (A.30)

DO modes
dX̃ = [ A₀ X̃ − X̃ X̃^T A₀ X̃ ] dt + [ A₁(x̄) X̃ − X̃ X̃^T A₁(x̄) X̃ ] dt + [ V₂ − X̃ X̃^T V₂ ] dt
  + [ V₁ − X̃ X̃^T V₁ ] M₃(Φ^R) P(Φ^R)⁻¹ dt
  + [ B₀ − X̃ X̃^T B₀ ] E^ω[ dW^R (Φ^R)^T ] P(Φ^R)⁻¹
  + [ B₁(x̄) − X̃ X̃^T B₁(x̄) ] E^ω[ dW^R (Φ^R)^T ] P(Φ^R)⁻¹
  + [ V₃ − X̃ X̃^T V₃ ] Mn₂(Φ^R, dW^R) P(Φ^R)⁻¹    (A.31)

A.2 Generalized Polynomial Chaos
We again consider the dynamical system represented by the differential equation (1.1). Consider the multi-variate polynomial chaos expansion given by equation (2.43). Substituting this expansion in equation (1.1), we have

d(x_j Ψ_j) = A(x_i Ψ_i) [x_j Ψ_j] dt + B(x_i Ψ_i) dW    (A.32)

Multiplying equation (A.32) by Ψ_k and taking the expectation of the resulting equation, we get

dx_i E^ω[Ψ_i Ψ_k] = E^ω[ A(x_i Ψ_i) x_j Ψ_j Ψ_k ] dt + E^ω[ B(x_i Ψ_i) dW Ψ_k ]    (A.33)

gPC governing equations for quadratic non-linear systems
We again consider the special case where matrices A and B have the form given by equations (A.9) and (A.10). Substituting from equation (A.10) in equation (A.33) and simplifying, we get the gPC evolution equation for this special case, which is given by

dx_i = A₀ x_i dt + E^ω[ A₁(x_k Ψ_k) x_j Ψ_j Ψ_l ] C⁻¹_{li} dt + B₀ E^ω[ dW Ψ_l ] C⁻¹_{li} + E^ω[ B₁(x_k Ψ_k) dW Ψ_l ] C⁻¹_{li}    (A.34)

gPC governing equation in vector form
We define X̂ ∈ R^{N×s} as X̂ = [x₁ x₂ ... x_s] and Ψ ∈ R^{s×1} as Ψ = [Ψ₁ Ψ₂ ... Ψ_s]^T. Then we have the vector form of the gPC evolution equation, given as

dX̂ = A₀ X̂ dt + E^ω[ A₁(X̂Ψ) X̂Ψ Ψ^T ] P(Ψ)⁻¹ dt + B₀ E^ω[ dW Ψ^T ] P(Ψ)⁻¹ + E^ω[ B₁(X̂Ψ) dW Ψ^T ] P(Ψ)⁻¹    (A.35)

where P(Ψ) = E^ω[Ψ Ψ^T] is the covariance matrix of Ψ.

gPC governing equation using realizations
Let Ψ^r ∈ R^{s×1} and dW^r ∈ R^{N×1} represent one specific realization of the stochastic variables Ψ and dW respectively. Then we have, for these specific realizations, the gPC evolution equation for the deterministic coefficients, given as

dX̂ = A₀ X̂ dt + E^ω[ A₁(X̂Ψ^r) X̂Ψ^r (Ψ^r)^T ] P(Ψ^r)⁻¹ dt + E^ω[ B₀ dW^r (Ψ^r)^T ] P(Ψ^r)⁻¹ + E^ω[ B₁(X̂Ψ^r) dW^r (Ψ^r)^T ] P(Ψ^r)⁻¹    (A.36)

Vectorization for optimal numerical computations
Since the matrix A₁ is linear in x_k Ψ_k, all the stochastic terms can be combined together. Thus, from equation (A.34), we have

dx_i = A₀ x_i dt + A₁(x_k) x_j E^ω[Ψ_k Ψ_j Ψ_l] C⁻¹_{li} dt + B₀ E^ω[dW Ψ_l] C⁻¹_{li} + B₁(x_k) E^ω[Ψ_k dW Ψ_l] C⁻¹_{li}    (A.37)

We define V₁ ∈ R^{N×s²} as V₁ = A₁(x_k) x_j, 1 ≤ k, j ≤ s, and M₃(Ψ^R) ∈ R^{s²×s} as M₃(Ψ^R) = E^ω[Ψ_k Ψ_j Ψ_l], 1 ≤ k, j, l ≤ s. Also define V₃ ∈ R^{N×sN} as V₃ = B₁(x_k), 1 ≤ k ≤ s, and Mn₂(Ψ^R, dW^R) ∈ R^{sN×s} as Mn₂(Ψ^R, dW^R) = E^ω[Ψ_k dW_p Ψ_l], 1 ≤ k, l ≤ s, 1 ≤ p ≤ N. Then from equation (A.37), we get the final gPC evolution equation, given as

dX̂ = A₀ X̂ dt + V₁ M₃(Ψ^R) P(Ψ^R)⁻¹ dt + B₀ E^ω[ dW^R (Ψ^R)^T ] P(Ψ^R)⁻¹ + V₃ Mn₂(Ψ^R, dW^R) P(Ψ^R)⁻¹    (A.38)

Final form of gPC governing equation
Thus, from equation (A.38), we have the final form of the gPC evolution equation, which is given as

dX̂ = A₀ X̂ dt + V₁ M₃(Ψ^R) P(Ψ^R)⁻¹ dt + B₀ E^ω[ dW^R (Ψ^R)^T ] P(Ψ^R)⁻¹ + V₃ Mn₂(Ψ^R, dW^R) P(Ψ^R)⁻¹    (A.39)
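For orientation, the covariance P(Ψ) appearing in (A.35)–(A.39) and the noise moment E^ω[dW Ψ^T] can be estimated directly from realizations and the inverse applied through a linear solve; a minimal self-contained sketch with illustrative sizes and stand-in random data is:

% Estimating P(Psi) and E[dW Psi^T] from realizations (illustrative data only).
s = 6;  Mr = 2000;  N = 75;
PsiR   = randn(s, Mr);                 % stand-in realizations of the PC basis
dWR    = sqrt(1e-5) * randn(N, Mr);    % Wiener increments for an assumed dt = 1e-5
P_Psi  = (PsiR * PsiR') / Mr;          % P(Psi) = E[Psi Psi^T]
EdWPsi = (dWR * PsiR') / Mr;           % E[dW Psi^T]
noise_term = EdWPsi / P_Psi;           % E[dW Psi^T] * P(Psi)^{-1}, as it enters (A.39)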
Appendix B
Algorithm for General Stochastic
Solver
Here, we discuss the details of the stochastic solver developed for efficient time integration of stochastic dynamical systems represented by (1.1) and (1.2). The numerical code for the general stochastic solver is divided into four main parts, which are as follows:

- functions spec_A() and spec_B(): used to define the matrices A and B for the class of dynamical systems governed by the differential equation (1.1)
- script setup_script: used to define the model parameters and initial conditions for the dynamical system
- script plot_script: used for plotting the final solution
- function main(): used to perform the time integration of the system (1.1)

In what follows, we discuss each of these four parts of the stochastic solver.
B.1 Functions spec_A() and spec_B()
These functions use a sparse representation to define matrices A and B for stochastic dynamical systems represented by (1.1). The matrices A and B are decomposed into state-independent and state-dependent parts, given as
A(X(r, t; ω), t) = A₀(r, t; ω) + A₁(X(r, t; ω), t)
B(X(r, t; ω), t) = B₀(r, t; ω) + B₁(X(r, t; ω), t)    (B.1)
where A₀(r, t; ω) and B₀(r, t; ω) are functions of time, space and uncertainty (ω) only, and A₁(X(r, t; ω), t) and B₁(X(r, t; ω), t) are linear in X(r, t; ω). The matrices A₀ and B₀ are defined directly, whereas a sparse representation is used for defining matrices A₁ and B₁. For these matrices, the array node1 stores the locations of the non-empty entries of the matrix, the array N1 stores which state variable is specified at the corresponding non-empty location, and the array C1 stores the corresponding coefficient multiplying that state variable. For example, consider the governing equations for the Kraichnan-Orszag three-mode problem, given as
dx₁(t; ω)/dt = x₂(t; ω) x₃(t; ω)
dx₂(t; ω)/dt = x₃(t; ω) x₁(t; ω)
dx₃(t; ω)/dt = −2 x₁(t; ω) x₂(t; ω)    (B.2)
The differential equations (B.2) can be represented non-uniquely in the form (1.1) as

[dx₁/dt]   [ 0    x₃    0  ] [x₁]
[dx₂/dt] = [ 0    0     x₁ ] [x₂]    (B.3)
[dx₃/dt]   [ 0   −2x₁   0  ] [x₃]
from which we have
A = [ 0    x₃    0  ]
    [ 0    0     x₁ ]    (B.4)
    [ 0   −2x₁   0  ]
Thus, we define the matrix A₀ as

A₀ = [ 0  0  0 ]
     [ 0  0  0 ]    (B.5)
     [ 0  0  0 ]

Furthermore, we define the matrix A₁ using node1, N1 and C1 as

node1 = [4   6  8]
N1    = [3   1  1]    (B.6)
C1    = [1  −2  1]

A similar approach is used for defining matrix B in terms of matrices B₀ and B₁.
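A minimal sketch of how a spec_A-type function can assemble A₀ and the state-dependent part A₁(x) from this sparse representation is shown below for the Kraichnan-Orszag system; the function and variable names are illustrative and are not necessarily those of the actual solver.

function [A0, A1] = spec_A_KO(x)
% Sparse-representation definition of A = A0 + A1(x) for the
% Kraichnan-Orszag three-mode problem (B.2)-(B.6).
n     = 3;
A0    = zeros(n);                 % state-independent part, identically zero here
node1 = [4 6 8];                  % column-major linear indices of non-empty entries of A1
N1    = [3 1 1];                  % which state variable enters at each entry
C1    = [1 -2 1];                 % coefficient multiplying that state variable
A1 = zeros(n);
for q = 1:numel(node1)
    A1(node1(q)) = C1(q) * x(N1(q));
end
end

For example, spec_A_KO([x1; x2; x3]) reproduces the matrix in (B.4), and (A0 + A1) multiplied by the state vector recovers the right-hand side of (B.2).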
B.2 Script setup_script
This script is used for setting up the problem by defining the model parameters and the initial conditions for the dynamical system (1.1). The following model parameters are used in the numerical code:

- n : Dimension of state space
- n_noise : Dimension of driving Wiener process
- s : Dimension of the DO subspace and the number of polynomial basis functions for PC expansion of the solution state
- m : Number of Monte Carlo realizations
- delt : Time step size
- k_max : Number of time steps up to which we perform numerical time integration
- Xsi : Realizations of random variables used to define uncertainty in the dynamical system
- num_int_choice : Variable specifying the choice of numerical time integration scheme
The initial conditions are defined according to the uncertainty quantification scheme being
used for integrating the stochastic differential equation.
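For orientation, a minimal sketch of such a setup script is given below; the parameter names follow the list above (with underscores assumed in place of the hyphens introduced by typesetting), while all numerical values are purely illustrative.

% setup_script (illustrative sketch only)
n              = 3;                    % dimension of state space
n_noise        = 1;                    % dimension of driving Wiener process
s              = 4;                    % DO subspace size / number of PC basis functions
m              = 1000;                 % number of Monte Carlo realizations
delt           = 0.01;                 % time step size
k_max          = 4000;                 % number of time steps to integrate
Xsi            = rand(m, 1)*2 - 1;     % e.g. U(-1,1) realizations of the random variable
num_int_choice = 1;                    % choice of numerical time-integration scheme
% The initial conditions are then set according to the UQ scheme in use
% (mean, modes and coefficients for DO; PC coefficients for gPC; an ensemble for MC).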
B.3 Script plot_script
This script is used for saving and plotting the final solution after the numerical run is complete.
B.4 Function main()
This function first evaluates setup_script to define the model parameters and initial conditions for the problem. Next, time integration of the evolution equations is performed based on the choice of numerical integration scheme given through the model parameter num_int_choice defined in setup_script. The following functions are called in main() during the time integration when using the DO scheme:

- calc_A1 : used to calculate the matrix A₁(x̄)
- calc_B1 : used to calculate the matrix B₁(x̄) or B₁(x̃_i) for any particular i, where 1 ≤ i ≤ s
- calc_M2Phi : used to calculate the second moment of the stochastic coefficients, i.e., E[φ_i φ_j], for all 1 ≤ i, j ≤ s
- calc_M3Phi : used to calculate the third moment of the stochastic coefficients, i.e., E[φ_i φ_j φ_k], for all 1 ≤ i, j, k ≤ s (a sketch of these moment computations is given after this list)
- calc_Mn1Phi : used to calculate the mean value of the n_noise-dimensional Wiener process multiplied by the stochastic coefficients, i.e., E[dW φ_i], for all 1 ≤ i ≤ s
- calc_Mn2Phi : used to calculate the mean value of E[dW φ_i φ_j], for all 1 ≤ i, j ≤ s
- calc_val1 : used to calculate the value of A₁(x̃_i) x̃_j, for all 1 ≤ i, j ≤ s
- calc_val2 : used to calculate the value of A₁(x̃_i) x̄, for all 1 ≤ i ≤ s
- calc_val3 : used to calculate the value of B₁(x̃_i), for all 1 ≤ i ≤ s
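The sketch below indicates the moment computations performed by calc_M2Phi and calc_M3Phi, assuming that the stochastic coefficients are stored as an s × M_r array Phi of realizations (the array name and shapes are assumptions):

function [M2, M3] = coefficient_moments(Phi)
% Phi : s x Mr array; Phi(i, r) is the r-th realization of phi_i.
[s, Mr] = size(Phi);
M2 = (Phi * Phi') / Mr;                      % M2(i,j)   = E[phi_i phi_j]
M3 = zeros(s, s, s);
for k = 1:s
    Pk = Phi .* repmat(Phi(k, :), s, 1);     % realizations of phi_i*phi_k
    M3(:, :, k) = (Pk * Phi') / Mr;          % M3(i,j,k) = E[phi_i phi_j phi_k]
end
end

calc_Mn1Phi and calc_Mn2Phi are computed analogously, with the Wiener increments dW^R taking the place of one of the coefficient factors.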
Additionally, orthonormalization of the modes is performed after every time step to ensure
that the modes remain numerically orthonormal. The algorithm for orthonormalization
is as follows:
M   = X' * X;                   % Gram matrix of the current modes
[v, d] = eig(M);                % eigen-decomposition of M
X   = X * v / sqrt(d);          % rotate and rescale the modes to orthonormality
Phi = sqrt(d) * v' * Phi;       % rotate the stochastic coefficients consistently    (B.7)

where X and Phi are matrices comprising the deterministic modes x̃_i, 1 ≤ i ≤ s, and the stochastic coefficients φ_i, 1 ≤ i ≤ s, respectively. Furthermore, the following other functions are called in main() during
the time integration step when using the MC scheme:

- calc_mc1 : used to calculate the value of A₁(X^R) X^R
- calc_mc2 : used to calculate the value of B₁(X^R) dW^R
While using the gPC scheme, the following functions are called in main() during the time integration step:

- Hermite : used to generate Hermite polynomials up to order p
- Legendre : used to generate Legendre polynomials up to order p
- calc_index : used to calculate the multi-index for the polynomial chaos expansion
- calc_psi : used to calculate the realizations of the multi-variate polynomial basis functions
- calc_A1 : used to calculate the matrix A₁(x̄)
- calc_B1 : used to calculate the matrix B₁(x̄) or B₁(X_mode(i)) for any particular spatial mode
- calc_M3Phi : used to calculate the third moment of the stochastic basis, i.e., E[Ψ_i Ψ_j Ψ_k]
- calc_Mn2Phi : used to calculate the moment of the noise multiplied by the square of the stochastic basis, i.e., E[dW Ψ_i Ψ_j]
- calc_val1 : used to calculate the value of A₁(X_mode(i)) X_mode(j)
- calc_val3 : used to calculate the value of B₁(X_mode(i))
In the TDgPC, the new Modified TDgPC (MTDgPC) and the new KLgPC schemes, in addition to all the functions used in the gPC scheme, the function calc_Psi_new_multi is used during time integration to calculate the new polynomial basis numerically using the Gram-Schmidt orthogonalization method (during the re-initialization step).
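A minimal sketch of such a numerical Gram-Schmidt re-initialization is given below; it orthonormalizes a set of candidate basis functions with respect to the empirical measure carried by the realizations, with assumed array shapes and names (this is not the actual interface of calc_Psi_new_multi):

function Q = gram_schmidt_basis(P)
% P : Mr x s array; column j holds the Mr realizations of the j-th candidate
%     basis function (e.g. monomials of the current solution), column 1 = 1.
[Mr, s] = size(P);
Q = zeros(Mr, s);
for j = 1:s
    v = P(:, j);
    for i = 1:j-1
        v = v - (Q(:, i)' * v / Mr) * Q(:, i);   % subtract the E[q_i p_j] component
    end
    Q(:, j) = v / sqrt(v' * v / Mr);             % normalize so that E[q_j^2] = 1
end
end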
When the time integration step is complete, the script plot_script is called in order to save the solution and to plot the relevant figures.
Appendix C
Table of Runs
Here, we tabularize the systems and simulations that we have investigated in this research
work, including those for the study of uncertainty in input parameters, uncertainty due
to external stochastic forcing and uncertainty prediction for stochastic ocean systems.
Table C.1: Uncertainty in Input Parameters

P1 — 1-D Decay Model (Gerritsma et al., 2010). Governing equations: du/dt + ku = 0, u(0) = 1, k ~ U(−1, 1). Model parameters: realizations = 1 × 10⁴; Δt = 0.01, 0 ≤ t ≤ 30; scheme: RK-4.

P2 — K-O Problem (Gerritsma et al., 2010). Governing equations: dx₁/dt = x₂x₃, dx₂/dt = x₁x₃, dx₃/dt = −2x₁x₂; x₁(0) = 0.99 + 0.01ξ₁, x₂(0) = 1, x₃(0) = 1, ξ₁ ~ U(−1, 1). Model parameters: realizations = 1 × 10⁴; Δt = 0.01, 0 ≤ t ≤ 40; scheme: RK-4.

P3 — K-O Problem (Gerritsma et al., 2010). Governing equations: dx₁/dt = x₂x₃, dx₂/dt = x₁x₃, dx₃/dt = −2x₁x₂; x₁(0) = 0.995 + 0.01ξ₁, x₂(0) = 1, x₃(0) = 1, ξ₁ ~ U(−1, 1). Model parameters: realizations = 1 × 10⁴; Δt = 0.01, 0 ≤ t ≤ 40; scheme: RK-4.

P4 — K-O Problem (Gerritsma et al., 2010). Governing equations: dx₁/dt = x₂x₃, dx₂/dt = x₁x₃, dx₃/dt = −2x₁x₂; x₁(0) = 0.99 + 0.01ξ₁, x₂(0) = 1.0 + 0.01ξ₂, x₃(0) = 1.0 + 0.01ξ₃, ξ₁, ξ₂, ξ₃ ~ U(−1, 1). Model parameters: realizations = 1 × 10⁴; Δt = 0.01, 0 ≤ t ≤ 40; scheme: RK-4.
Table C.2: Uncertainty due to External Stochastic Forcing

P5 — 3-D Autonomous (additive noise). Governing equations: dX = AX dt + B dW, with A = [−3 1 1; 2 −5 1; 3 1 −6], B = [0 2 1; 0 1 0; 2 0 0] and X(t = 0) = [1 1 1]^T. Model parameters: realizations = 2 × 10⁵; Δt = 0.01, 0 ≤ t ≤ 4; scheme: EM.

P6 — 3-D Non-autonomous (additive noise). Governing equations: dX = AX dt + B dW, where A and B are the time-dependent matrices A(t) = [−3 + 2cos(2πt), 1 + 2sin(2πt), 1; 2, −5, 1; 3, 1 + 3sin(4πt), −6 + 3cos(4πt)] and B(t) = [2 + 0.5sin(2πt), 1, 0.5cos(2πt); 0, 1 + sin(2πt), cos(2πt); 2, 0, 0], with X(t = 0) = [1 1 1]^T. Model parameters: realizations = 5 × 10⁴; Δt = 0.01, 0 ≤ t ≤ 4; scheme: EM.

P7 — 4-D Autonomous (multiplicative noise). Governing equations: dX = AX dt + B dW, where A and B are 4 × 4 matrices whose entries are linear in the state variables x₁, ..., x₄ (with coefficients of order 1/10 and 1/20), so that the noise enters multiplicatively; X(t = 0) = [1 1 1 1]^T. Model parameters: realizations = 2 × 10⁵; Δt = 0.01, 0 ≤ t ≤ 3; scheme: EM.
Table C.3: Uncertainty Prediction for Stochastic Ocean Systems

P8 — 20-D Non-autonomous (additive noise). Governing equations: dX = AX dt + B dW, with self-engineered A and B, and X(t = 0) = [1 1 ... 1]^T. Model parameters: realizations = 2 × 10⁴; Δt = 0.01, 0 ≤ t ≤ 4; scheme: EM.

P9 — KdV 1-soliton (deterministic). Governing equation: ζ_t + 6ζζ_x + ζ_xxx = 0, with η̃ = 1.0, x₀ = 0.0. Model parameters: Δx = 0.1, x ∈ [−10, 20]; Δt = 0.00025, 0 ≤ t ≤ 10; scheme: Z-K.

P10 — KdV 2-soliton (deterministic). Governing equation: ζ_t + 6ζζ_x + ζ_xxx = 0, with η̃₁ = 1.2, η̃₂ = 0.8, x₁ = −6.0, x₂ = −2.0. Model parameters: Δx = 0.1, x ∈ [−10, 10]; Δt = 0.00025, 0 ≤ t ≤ 2; scheme: Z-K.

P11 — KdV 1-soliton (stochastic). Governing equation: ζ_t + 6ζζ_x + ζ_xxx = ε ξ(t; ω), with η̃ = 1.0, x₀ = 0.0. Model parameters: realizations = 1000; Δx = 0.2, x ∈ [−5, 10]; Δt = 0.00001, 0 ≤ t ≤ 1; scheme: EM.
Bibliography
Ablowitz, M. J. and Segur, H. (1981). Solitons and the inverse scattering transform, volume 4. SIAM.
Airy, G. B. (1845). Tides and waves. Encyclopaedia Metropolitana,pages 241 396.
Ablowitz, M. and Taha, T. (1984). Analytical and numerical aspects of certain nonlinear evolution equations. iii. korteweg-de vries equation. Journal of Computational Physics, 55(2):231-253.
Babuska, I., Nobile, F., and Tempone, R. (2007). A stochastic collocation method for
elliptic partial differential equations with random input data. SIAM Journal on Numerical Analysis, 45(3):1005 1034.
Bachelier, L. (1900). Théorie de la spéculation. Gauthier-Villars.
Berkooz, G., Holmes, P., and Lumley, J. L. (1993). The proper orthogonal decomposition
in the analysis of turbulent flows. Annual review of fluid mechanics, 25(1):539 575.
Bertsekas, D. and Tsitsiklis, J. (2002). Introduction to probability, volume 1. Athena Scientific, Nashua, NH.
Borcea, L., Papanicolaou, G., Tsogka, C., and Berryman, J. (2002). Imaging and time
reversal in random media. Inverse Problems, 18(5):1247.
Boussinesq, J. (1872). Théorie des ondes et des remous qui se propagent le long d'un canal rectangulaire horizontal, en communiquant au liquide contenu dans ce canal des vitesses sensiblement pareilles de la surface au fond. J. Math. Pures Appl., 17(2):55-108.
Boussinesq, J. (1877). Essai sur la théorie des eaux courantes. Imprimerie nationale.
Branicki, M. and Majda, A. J. (2012). Fundamental limitations of polynomial chaos for
uncertainty quantification in systems with intermittent instabilities. Comm. Math. Sci,
11.
Brown, R. (1828). Xxvii. a brief account of microscopical observations made in the months
of june, july and august 1827, on the particles contained in the pollen of plants; and
on the general existence of active molecules in organic and inorganic bodies. The
Philosophical Magazine, or Annals of Chemistry, Mathematics, Astronomy, Natural
History and General Science, 4(21):161 173.
Burrage, K. and Burrage, P. (1996). High strong order explicit runge-kutta methods for
stochastic ordinary differential equations. Applied Numerical Mathematics, 22(1):81
101.
Burrage, K. and Burrage, P. (2000). Order conditions of stochastic runge kutta methods
by b-series. SIAM Journal on Numerical Analysis, 38(5):1626 1646.
Cameron, R. H. and Martin, W. T. (1947). The orthogonal development of nonlinear functionals in series of fourier-hermite functionals. The Annals of Mathematics,
48(2):385 392.
Cheng, M., Hou, T., and Zhang, Z. (2012). A dynamically bi-orthogonal method for
time-dependent stochastic partial differential equations i: Derivation and algorithms'.
Chorin, A. J. (1974).
Gaussian fields and random flow.
Journal of Fluid Mechanics,
63:21 32.
Crow, S. and Canavan, G. (1970). Relationship between a wiener-hermite expansion and
an energy cascade. J. Fluid Mech, 41(2):387 403.
Dashti, M. and Stuart, A. M. (2011). Uncertainty quantification and weak approximation
of an elliptic inverse problem. SIAM Journal on Numerical Analysis, 49(6):2524 2542.
de Jager, E. (2006). On the origin of the korteweg-de vries equation. arXiv preprint math/0602661.
de Vries, G. (1894). Bijdrage tot de kennis der lange golven. PhD thesis, Universiteit van
Amsterdam.
Debusschere, B., Najm, H., Matta, A., Shu, T., Knio, 0., Ghanem, R., and Le Maitre,
0. (2002). Uncertainty quantification in a reacting electrochemical microchannel flow
model. In Proc. 5th Int. Conf. on Modeling and Simulation of Microsystems, pages
384 387.
Debusschere, B., Najm, H., Pébay, P., Knio, O., Ghanem, R., and Le Maitre, O. (2004). Numerical challenges in the use of polynomial chaos representations for stochastic processes. SIAM Journal on Scientific Computing, 26(2):698-719.
Debusschere, B. J., Najm, H. N., Matta, A., Knio, 0. M., Ghanem, R. G., and Le Maitre,
0. P. (2003). Protein labeling reactions in electrochemical microchannel flow: Numerical
simulation and uncertainty propagation. Physics of fluids, 15:2238.
Dingemans, M. W. (1997). Water wave propagation over uneven bottoms: Linear wave
propagation. Part 1, volume 13. World Scientific.
Doostan, A. and Owhadi, H. (2011). A non-adapted sparse approximation of pdes with
stochastic inputs. Journal of ComputationalPhysics, 230(8):3015 3034.
Doucet, A., De Freitas, N., Gordon, N., et al. (2001). Sequential Monte Carlo methods in
practice, volume 1. Springer New York.
Duruflé, M. and Israwi, S. (2012). A numerical study of variable depth kdv equations and generalizations of camassa-holm-like equations. Journal of Computational and Applied Mathematics, 236(17):4149-4165.
Eckhardt, R. (1987). Stan ulam, john von neumann, and the monte carlo method. Los
Alamos Sci, 15:131 143.
Einstein, A. (1905). Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen. Annalen der Physik, 322(8):549-560.
Ernst, 0. G., Mugler, A., Starkloff, H.-J., and Ullmann, E. (2012). On the convergence
of generalized polynomial chaos expansions. ESAIM: Mathematical Modelling and Numerical Analysis, 46(02):317 339.
Fishman, G. (1996). Monte Carlo: concepts, algorithms, and applications. Springer.
Frauenfelder, P., Schwab, C., and Todor, R. A. (2005). Finite elements for elliptic problems
with stochastic coefficients. Computer methods in applied mechanics and engineering,
194(2):205 228.
Gardner, C. S., Greene, J. M., Kruskal, M. D., and Miura, R. M. (1967). Method for
solving the korteweg-devries equation. Physical Review Letters, 19(19):1095 1097.
Gay, D. H. and Ray, W. H. (1995). Identification and control of distributed parameter
systems by means of the singular value decomposition. Chemical Engineering Science,
50(10):1519 1539.
Gerritsma, M., Van der Steen, J., Vos, P., and Karniadakis, G. (2010). Time-dependent generalized polynomial chaos. Journal of Computational Physics, 229(22):8333-8363.
Ghanem, R. (1998). Probabilistic characterization of transport in heterogeneous media.
Computer Methods in Applied Mechanics and Engineering, 158(3):199 220.
Ghanem, R. (1999a). Ingredients for a general purpose stochastic finite elements implementation. Computer Methods in Applied Mechanics and Engineering, 168(1):19 34.
Ghanem, R. (1999b). Stochastic finite elements with multiple random non-gaussian properties. Journal of Engineering Mechanics, 125(1):26 40.
Ghanem, R. G. and Spanos, P. D. (1991). Stochastic finite elements: a spectral approach.
Springer-Verlag.
Giles, M. (2008a). Improved multilevel monte carlo convergence using the milstein scheme.
Monte Carlo and quasi-Monte Carlo methods 2006, pages 343 358.
Giles, M. B. (2008b).
Multilevel monte carlo path simulation.
Operations Research,
56(3):607 617.
Gilks, W. R., Richardson, S., and Spiegelhalter, D. (1995). Markov Chain Monte Carlo
in practice: interdisciplinarystatistics, volume 2. Chapman & Hall/CRC.
Green, A., Laws, N., and Naghdi, P. (1974). On the theory of water waves. Proceedings of
the Royal Society of London. A. Mathematical and Physical Sciences, 338(1612):43 55.
Green, A. and Naghdi, P. (1976). A derivation of equations for wave propagation in water
of variable depth. Journal of Fluid Mechanics, 78(02):237 246.
Hastings, W. K. (1970). Monte carlo sampling methods using markov chains and their
applications. Biometrika, 57(1):97 109.
Herman, R. and Knickerbocker, C. (1993). Numerically induced phase shift in the kdv soliton. Journal of Computational Physics, 104(1):50-55.
Higham, D. (2001). An algorithmic introduction to numerical simulation of stochastic
differential equations. SIAM review, 43(3):525 546.
Holmes, P., Lumley, J. L., and Berkooz, G. (1998). Turbulence, coherent structures,
dynamical systems and symmetry. Cambridge University Press.
Hou, T., Luo, W., Rozovskii, B., and Zhou, H. (2006). Wiener chaos expansions and
numerical solutions of randomly forced equations of fluid mechanics. Journal of Computational Physics, 216(2):687 706.
Hunter, J. K. (2009). Math280: Applied mathematics. Math280 UCDavis class notes.
Israwi, S. (2009). Variable depth kdv equations and generalizations to more nonlinear
regimes. arXiv preprint arXiv:0901.3201.
Itô, K. (1944). Stochastic integral. Proceedings of the Japan Academy, Series A, Mathematical Sciences, 20(8):519-524.
James, F. (2000). Monte carlo theory and practice. Reports on Progress in Physics,
43(9):1145 1189.
Jazwinski, A. H. (1970). Stochastic processes and filtering theory. Academic Press, New
York.
Jolliffe, I. T. (1986).
York.
Principal component analysis, volume 487. Springer-Verlag New
Kloeden, P. and Platen, E. (2011). Numerical solution of stochastic differential equations,
volume 23. Springer.
Kloeden, P., Platen, E., and Hofmann, N. (1995). Extrapolation methods for the weak approximation of Itô diffusions. SIAM journal on numerical analysis, 32(5):1519-1534.
Kloeden, P. E. and Platen, E. (1989). A survey of numerical methods for stochastic
differential equations. Stochastic Hydrology and Hydraulics, 3(3):155 178.
Knio, 0. and Le Maitre, 0. (2006). Uncertainty propagation in cfd using polynomial
chaos decomposition. Fluid Dynamics Research, 38(9):616 640.
Komori, Y. (2007a). Multi-colored rooted tree analysis of the weak order conditions of a
stochastic runge kutta family. Applied numerical mathematics, 57(2):147 165.
Komori, Y. (2007b). Weak second-order stochastic runge-kutta methods for non-commutative stochastic differential equations. Journal of computational and applied mathematics, 206(1):158-173.
Komori, Y., Mitsui, T., and Sugiura, H. (1997). Rooted tree analysis of the order conditions of row-type scheme for stochastic differential equations. BIT Numerical Mathematics, 37(1):43 66.
Korteweg, D. J. and de Vries, G. (1895). On the change of form of long waves advancing
in a rectangular canal, and on a new type of long stationary waves. The London,
Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 39(240):422
443.
Kraichnan, R. H. (1963). Direct-interaction approximation for a system of several interacting simple shear waves. Physics of Fluids, 6(11):1603 1609.
Küpper, D., Lehn, J., and Rößler, A. (2007). A step size control algorithm for the weak approximation of stochastic differential equations. Numerical Algorithms, 44(4):335-346.
Le Maitre, 0. and Knio, 0. (2010). Spectral methods for uncertainty quantification: with
applications to computational fluid dynamics. Springer.
Le Maitre, 0., Knio, 0., Najm, H., and Ghanem, R. (2004a). Uncertainty propagation
using wiener-haar expansions. Journal of Computational Physics, 197(1):28 57.
Le Maitre, 0., Najm, H., Ghanem, R., and Knio, 0. (2004b). Multi-resolution analysis
of wiener-type uncertainty propagation schemes. Journal of Computational Physics,
197(2):502 531.
Le Maitre, 0., Najm, H., Pebay, P., Ghanem, R., and Knio, 0. (2007). Multi-resolutionanalysis scheme for uncertainty quantification in chemical systems. SIAM Journal on
Scientific Computing, 29(2):864 889.
Le Maitre, 0. P., Reagan, M. T., Najm, H. N., Ghanem, R. G., and Knio, 0. M. (2002).
A stochastic projection method for fluid flow: Ii. random process. Journal of computational Physics, 181(1):9 44.
Lermusiaux, P. F. (1999). Data assimilation via error subspace statistical estimation.
part ii: Middle atlantic bight shelfbreak front simulations and esse validation. Monthly
Weather Review, 127(7):1408 1432.
Lermusiaux, P. F. (2006). Uncertainty estimation and prediction for interdisciplinary
ocean dynamics. Journalof ComputationalPhysics, 217(1):176 199.
Lermusiaux, P. F., Chiu, C.-S., Gawarkiewicz, G. G., Abbot, P., Robinson, A. R., Miller,
R. N., Haley, P. J., Leslie, W. G., Majumdar, S. J., Pang, A., et al. (2006). Quantifying
uncertainties in ocean predictions. Technical report, DTIC Document.
Lermusiaux, P. F. and Robinson, A. (1999). Data assimilation via error subspace statistical estimation. part i: Theory and schemes. Monthly Weather Review, 127(7):1385
1407.
Li, Y. A., Hyman, J. M., and Choi, W. (2004). A numerical study of the exact evolution
equations for surface waves in water of finite depth. Studies in Applied Mathematics,
113(3):303 324.
Lumley, J. L. (2007). Stochastic tools in turbulence. Dover Publications.
Luo, W. (2006). Wiener chaos expansion and numerical solutions of stochastic partial
differential equations. PhD thesis, California Institute of Technology, Department of
Applied and Computational Mathematics.
Ma, X. and Zabaras, N. (2009). An adaptive hierarchical sparse grid collocation algorithm
for the solution of stochastic differential equations. Journal of Computational Physics,
228(8):3084 3113.
Majda, A. J. and Branicki, M. (2012). Lessons in uncertainty quantification for turbulent
dynamical systems. Discrete Contin. Dyn Syst, 32:3133 231.
Majda, A. J., Franzke, C., and Khouider, B. (2008). An applied mathematics perspective
on stochastic modelling for climate. Philosophical Transactions of the Royal Society A:
Mathematical, Physical and Engineering Sciences, 366(1875):2427 2453.
Majda, A. J., Timofeyev, I., and Vanden Eijnden, E. (2001). A mathematical framework
for stochastic climate models. Communications on Pure and Applied Mathematics,
54(8):891 974.
Maruyama, G. (1955). Continuous markov processes and stochastic equations. Rendiconti
del Circolo Matematico di Palermo, 4(1):48 90.
McGrath, E. J. and Irving, D. (1973). Techniques for efficient monte carlo simulation.
volume iii. variance reduction. Technical report, DTIC Document.
Miles, J. W. (1981). Korteweg-de vries equation: A historical essay. Journal of fluid
mechanics, 106:131 147.
Milstein, G. and Tretyakov, M. (2004). Stochastic numerics for mathematical physics.
Scientific Computation. Springer-Verlag, Berlin.
Moler, C. (2004). Numerical computing with MATLAB. Society for Industrial Mathematics.
Muller, P. and Henyey, F. (1997). Workshop assesses monte carlo simulations in oceanography. In Monte Carlo simulations in oceanography: proceedings, Aha Huliko a Hawaiian Winter Workshop, University of Hawaii at Manoa, January 14-17, 1997, volume 9,
page 201. School of Ocean and Earth Science and Technology.
Najm, H. N. (2009). Uncertainty quantification and polynomial chaos techniques in computational fluid dynamics. Annual Review of Fluid Mechanics, 41:35 52.
Najm, H. N., Debusschere, B. J., Marzouk, Y. M., Widmer, S., and Le Maitre, 0. (2009).
Uncertainty quantification in chemical systems. Internationaljournal for numerical
methods in engineering, 80(6-7):789 814.
Newton, N. (1991). Asymptotically efficient runge-kutta methods for a class of ito and
stratonovich equations. SIAM Journal on Applied Mathematics, 51(2):542 567.
Niederreiter, H. (2010). Quasi-Monte Carlo Methods. Wiley Online Library.
Nihoul, J. and Djenidi, S. (1998). Global coastal ocean: the processes and methods. In
Global coastal ocean: the processes and methods, volume 10, chapter Coupled physical,
chemical and biological models. Harvard University Press.
Oksendal, B. (2010). Stochastic differential equations: an introduction with applications.
Springer.
Orszag, S. A. and Bissonnette, L. (1967). Dynamical properties of truncated wienerhermite expansions. Physics of Fluids, 10:2603.
Osborne, A. (2010). Nonlinear Ocean Waves & the Inverse Scattering Transform, volume 97. Academic Press.
Owhadi, H., Scovel, C., Sullivan, T. J., McKerns, M., and Ortiz, M. (2010). Optimal uncertainty quantification. arXiv preprint arXiv:1009.0679.
Rayleigh, L. (1876). On waves. Phil. Mag, 1(5):257 279.
Rose, A. (2006). Numerical simulations of the stochastic KDV equation. PhD thesis,
University of North Carolina.
R6Bler, A. (2004). Stochastic taylor expansions for the expectation of functionals of
diffusion processes. Stochastic analysis and applications, 22:1553 1576.
R6Bler, A. (2006). Rooted tree analysis for order conditions of stochastic runge-kutta
methods for the weak approximation of stochastic differential equations. Stochastic
analysis and applications,24(1):97 134.
R6Bler, A. (2007). Second order runge-kutta methods for stratonovich stochastic differential equations. BIT Numerical Mathematics, 47(3):657 680.
R6Bler, A. (2009). Second order runge-kutta methods for it6 stochastic differential equations. SIAM Journal on Numerical Analysis, 47(3):1713 1738.
Rößler, A. (2010). Runge-Kutta methods for the strong approximation of solutions of stochastic differential equations. SIAM Journal on Numerical Analysis, 48(3):922–952.
Rümelin, W. (1982). Numerical treatment of stochastic differential equations. SIAM Journal on Numerical Analysis, 19(3):604–613.
Russell, J. S. (1844). Report on waves. In 14th meeting of the British Association for the
Advancement of Science, volume 311, page 390.
Sapsis, T. P. (2010). Dynamically orthogonal field equations for stochastic fluid flows and
particle dynamics. PhD thesis, Massachusetts Institute of Technology, Department of
Mechanical Engineering.
Sapsis, T. P. and Lermusiaux, P. F. (2009). Dynamically orthogonal field equations for continuous stochastic dynamical systems. Physica D: Nonlinear Phenomena, 238(23-24):2347–2360.
Sapsis, T. P. and Lermusiaux, P. F. (2012). Dynamical criteria for the evolution of the stochastic dimensionality in flows with uncertainty. Physica D: Nonlinear Phenomena, 241(1):60–76.
Schwab, C. and Todor, R. A. (2006). Karhunen-Loève approximation of random fields by generalized fast multipole methods. Journal of Computational Physics, 217(1):100–122.
Shvartsman, S. Y. and Kevrekidis, I. G. (2004). Nonlinear model reduction for control of distributed systems: A computer-assisted study. AIChE Journal, 44(7):1579–1595.
Sirovich, L. (1987). Turbulence and the dynamics of coherent structures. I: Coherent structures. II: Symmetries and transformations. III: Dynamics and scaling. Quarterly of Applied Mathematics, 45:561–571.
Soize, C. and Ghanem, R. (2004). Physical systems with random uncertainties: chaos representations with arbitrary probability measure. SIAM Journal on Scientific Computing, 26(2):395–410.
Sondergaard, T. (2011). Data assimilation with Gaussian mixture models using the dynamically orthogonal field equations. Master's thesis, Massachusetts Institute of Technology.
Sondergaard, T. and Lermusiaux, P. F. (2013a). Data assimilation with Gaussian mixture models using the dynamically orthogonal field equations. Part I: Theory and scheme. Monthly Weather Review, 141(6):1737–1760.
Sondergaard, T. and Lermusiaux, P. F. (2013b). Data assimilation with Gaussian mixture models using the dynamically orthogonal field equations. Part II: Applications. Monthly Weather Review, 141(6):1761–1785.
Stratonovich, R. (1966). A new representation for stochastic integrals and equations.
SIAM Journal on Control, 4(2):362–371.
Strogatz, S. (2001). Nonlinear dynamics and chaos: with applications to physics, biology,
chemistry and engineering. Perseus Books Group.
Su, C. and Gardner, C. S. (1969). Korteweg-de Vries equation and generalizations. III. Derivation of the Korteweg-de Vries equation and Burgers equation. Journal of Mathematical Physics, 10:536.
Talay, D. (1990). Second-order discretization schemes of stochastic differential systems
for the computation of the invariant law. Stochastics: An International Journal of
Probability and Stochastic Processes, 29(1):13–36.
Talay, D. and Tubaro, L. (1990). Expansion of the global error for numerical schemes solving stochastic differential equations. Stochastic Analysis and Applications, 8(4):483–509.
Tocino, A. and Vigo-Aguiar, J. (2002). Weak second order conditions for stochastic Runge-Kutta methods. SIAM Journal on Scientific Computing, 24(2):507–523.
Ueckermann, M., Lermusiaux, P., and Sapsis, T. (2013). Numerical schemes for dynamically orthogonal equations of stochastic fluid and ocean flows. Journal of Computational Physics, 233:272–294.
Ursell, F. (1953). The long-wave paradox in the theory of gravity waves. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 49, pages 685–694. Cambridge University Press.
Van Groesen, E. and Pudjaprasetya, S. (1993). Uni-directional waves over slowly varying
bottom. Part I: Derivation of a KdV-type of equation. Wave Motion, 18(4):345–370.
Vliegenthart, A. (1971). On finite-difference methods for the Korteweg-de Vries equation. Journal of Engineering Mathematics, 5(2):137–155.
Wan, X. and Karniadakis, G. (2005). An adaptive multi-element generalized polynomial
chaos method for stochastic differential equations. Journal of Computational Physics,
209(2):617–642.
Wan, X. and Karniadakis, G. (2006a). Multi-element generalized polynomial chaos for
arbitrary probability measures. SIAM Journal on Scientific Computing, 28(3):901–928.
Wan, X. and Karniadakis, G. E. (2006b). Beyond Wiener-Askey expansions: handling arbitrary PDFs. Journal of Scientific Computing, 27(1):455–464.
Wiener, N. (1923). Differential space. Journal of Mathematics and Physics, 2:131–174.
Wiener, N. (1938). The homogeneous chaos. Amer. J. Math., 60(4):897–936.
Xiu, D. and Karniadakis, G. E. (2002). The Wiener-Askey polynomial chaos for stochastic differential equations. SIAM Journal on Scientific Computing, 24(2):619–644.
Xiu, D. and Karniadakis, G. E. (2003). Modeling uncertainty in flow simulations via
generalized polynomial chaos. Journal of Computational Physics, 187(1):137–167.
Zabusky, N. J. and Kruskal, M. D. (1965). Interaction of "solitons" in a collisionless plasma and the recurrence of initial states. Physical Review Letters, 15(6):240–243.
Zaremba, S. (1968). The mathematical basis of Monte Carlo and quasi-Monte Carlo methods. SIAM Review, 10(3):303–314.
Zhang, Y., Henson, M. A., and Kevrekidis, Y. G. (2003). Nonlinear model reduction for dynamic analysis of cell population models. Chemical Engineering Science, 58(2):429–445.