Quasi- and Multilevel Monte Carlo Methods for
Bayesian Inverse Problems in Subsurface Flow
Aretha Teckentrup
Mathematics Institute, University of Warwick
Joint work with:
Rob Scheichl (Bath), Andrew Stuart (Warwick)
5 February, 2015
A. Teckentrup (WMI)
MLMC for Bayesian Inverse Problems
5 February, 2015
1 / 20
Outline

1. Introduction
2. Posterior Expectation as High Dimensional Integration
3. MC, QMC and MLMC
4. Convergence and Complexity
5. Numerical Results
6. Conclusions
Introduction: Model problem

Modelling and simulation of subsurface flow is essential in many
applications, e.g. oil reservoir simulation.

Darcy's law for an incompressible fluid leads to an elliptic partial
differential equation:

−∇ · (k∇p) = f

Lack of data leads to uncertainty in the model parameter k.

We quantify the uncertainty in the model parameter through stochastic
modelling (k and p become random fields).

[Figure: geological cross-section of a repository site showing many rock
strata (waste vaults, granite, BVG, St Bees, Calder, Mercia mudstone, ...);
image © NIREX UK Ltd.]
Introduction: Model problem

The end goal is usually to estimate the expected value of a quantity of
interest (QoI) φ(p) or φ(k, p), e.g.

- point values or local averages of the pressure p
- point values or local averages of the Darcy flux −k∇p
- outflow over parts of the boundary
- travel times of contaminant particles

We will work in the Bayesian framework, where we put a prior distribution
on k and obtain a posterior distribution on k by conditioning the prior on
observed data.
Introduction: Prior distribution

A typical simplified model for k is a log-normal random field, k = exp[g],
where g is a scalar, isotropic Gaussian field, e.g.

E[g(x)] = 0,    E[g(x)g(y)] = σ² exp[−|x − y|/λ].

Groundwater flow problems are typically characterised by:

- low spatial regularity of the permeability k and the resulting pressure field p
- high dimensionality of the stochastic space (possibly infinite-dimensional)
- unboundedness of the log-normal distribution
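A realisation of such a log-normal field can be drawn by factorising the
covariance matrix of g on a grid. The sketch below uses a dense Cholesky
factorisation on a 1-D grid; the grid size and the jitter term are
illustrative choices, not part of the slides.

```python
import numpy as np

def sample_lognormal_field(n=64, sigma=1.0, lam=0.3, rng=None):
    """Draw one realisation of k = exp(g) on a uniform grid on [0, 1],
    where g is mean-zero Gaussian with covariance sigma^2 exp(-|x-y|/lam)."""
    rng = np.random.default_rng(rng)
    x = np.linspace(0.0, 1.0, n)
    # Dense covariance matrix of g at the grid points
    C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / lam)
    # Factor C = L L^T and colour white noise: g = L z has covariance C
    L = np.linalg.cholesky(C + 1e-12 * np.eye(n))  # small jitter for stability
    g = L @ rng.standard_normal(n)
    return x, np.exp(g)  # k = exp(g) is log-normal, hence positive

x, k = sample_lognormal_field(rng=0)
```

For large grids one would replace the dense Cholesky factorisation by a
Karhunen-Loève truncation or circulant embedding, but the principle is the same.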
Introduction: Posterior distribution

In addition to the presumed log-normal distribution, one usually has
available some data y ∈ R^m related to the outputs (e.g. pressure data).

Denote by µ₀ the prior log-normal measure on k, and assume

y = O(p) + η,

where η is a realisation of the Gaussian random variable N(0, σ_η² I_m).

Bayes' Theorem:

dµ^y/dµ₀ (k) = (1/Z) exp[−|y − O(p(k))|² / (2σ_η²)] =: (1/Z) exp[−Φ(p(k))]

Here,

Z = ∫ exp[−Φ(p(k))] dµ₀ = E_{µ₀}[exp[−Φ(p(k))]].
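The potential Φ and the unnormalised posterior weight exp[−Φ] are simple to
evaluate once the observation operator has been applied. A minimal sketch,
where `p_obs` stands in for O(p(k)) and the data and noise level are
illustrative:

```python
import numpy as np

def phi(p_obs, y, sigma_eta):
    """Data misfit Phi = |y - O(p)|^2 / (2 sigma_eta^2)."""
    return np.sum((y - p_obs) ** 2) / (2.0 * sigma_eta**2)

def posterior_weight(p_obs, y, sigma_eta):
    """Unnormalised Radon-Nikodym weight exp(-Phi)."""
    return np.exp(-phi(p_obs, y, sigma_eta))

y = np.array([0.5, 0.7])
w_perfect = posterior_weight(y, y, 0.3)  # perfect fit: Phi = 0, weight 1
```

In practice each evaluation of `p_obs` hides a full PDE solve, which is what
makes the prior expectations below expensive.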
Posterior Expectation as High Dimensional Integration

We are interested in computing E_{µ^y}[φ(p)]. Using Bayes' Theorem, we can
write this as

E_{µ^y}[φ(p)] = E_{µ₀}[(1/Z) exp[−Φ(p)] φ(p)] = E_{µ₀}[φ(p) exp[−Φ(p)]] / E_{µ₀}[exp[−Φ(p)]].

We have rewritten the posterior expectation as a ratio of two prior
expectations.

We can now approximate

E_{µ^y}[φ(p)] ≈ Q̂ / Ẑ,

where Q̂ is an estimator of E_{µ₀}[φ(p) exp[−Φ(p)]] =: E_{µ₀}[ψ(p)] := Q, and
Ẑ is an estimator of Z.

Remark: If m is very large or σ_η² is very small, the two prior expectations
will be difficult to evaluate.
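The ratio estimator Q̂/Ẑ can be sketched with plain Monte Carlo over the same
prior samples for numerator and denominator. The toy scalar "PDE output" and
misfit below are stand-ins so the example is self-contained and checkable
against a closed-form posterior; they are not the model from the slides.

```python
import numpy as np

def ratio_estimator(N, phi_qoi, misfit, sample_prior_output, rng=None):
    """Self-normalised estimate of E_{mu_y}[phi] = Qhat / Zhat."""
    rng = np.random.default_rng(rng)
    q_sum = z_sum = 0.0
    for _ in range(N):
        p = sample_prior_output(rng)   # one prior draw of the output
        w = np.exp(-misfit(p))         # posterior weight exp(-Phi(p))
        q_sum += phi_qoi(p) * w        # accumulates Qhat
        z_sum += w                     # accumulates Zhat
    return (q_sum / N) / (z_sum / N)   # Qhat / Zhat

# Toy conjugate model: p ~ N(0,1) prior, one observation y = 0.4 with
# noise std 0.5, so the true posterior mean is 0.4 / (1 + 0.25) = 0.32.
est = ratio_estimator(
    N=20000,
    phi_qoi=lambda p: p,
    misfit=lambda p: (0.4 - p) ** 2 / (2 * 0.5**2),
    sample_prior_output=lambda rng: rng.standard_normal(),
    rng=1,
)
```

Using the same samples in Q̂ and Ẑ is the natural choice; the analysis on the
following slides controls the error of the ratio, not of Q̂ and Ẑ separately.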
MC, QMC and MLMC
[Niederreiter '94], [Graham et al '14]

The standard Monte Carlo (MC) estimator

Q̂^MC_{h,N} = (1/N) Σ_{i=1}^{N} ψ(p_h^{(i)})

is an equally weighted average of N i.i.d. samples ψ(p_h^{(i)}), where p_h
denotes a finite element discretisation of p with mesh width h.

The quasi-Monte Carlo (QMC) estimator

Q̂^QMC_{h,N} = (1/N) Σ_{j=1}^{N} ψ(p_h^{(j)})

is an equally weighted average of N deterministically chosen samples
ψ(p_h^{(j)}), with p_h as above.
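The only difference between the two estimators is where the sample points come
from. A minimal contrast on a smooth toy integrand over [0,1] (a stand-in for
the prior expectation of ψ), using a golden-ratio Kronecker sequence as one
simple low-discrepancy point set; the slides' QMC rules (lattice rules, Sobol'
points) are more sophisticated:

```python
import numpy as np

def psi(u):
    """Smooth toy integrand, normalised so the true integral over [0,1] is 1."""
    return np.exp(u) / (np.e - 1.0)

def mc_estimate(N, rng=None):
    u = np.random.default_rng(rng).random(N)  # random nodes
    return psi(u).mean()                      # equally weighted average

def qmc_estimate(N):
    golden = 0.6180339887498949
    u = (np.arange(N) * golden) % 1.0         # deterministic low-discrepancy nodes
    return psi(u).mean()                      # same average, different nodes

err_mc = abs(mc_estimate(4096, rng=0) - 1.0)
err_qmc = abs(qmc_estimate(4096) - 1.0)
```

For smooth integrands the QMC error decays close to O(N⁻¹), versus the
O(N⁻¹/²) sampling error of MC, which is exactly the gain quantified later in
the complexity theorem.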
MC, QMC and MLMC
[Giles '07], [Cliffe et al '11]

The multilevel method works on a sequence of levels, with h_ℓ = (1/2) h_{ℓ−1},
ℓ = 0, 1, . . . , L. The finest mesh width is h_L.

Linearity of expectation gives us

E_{µ₀}[ψ(p_{h_L})] = E_{µ₀}[ψ(p_{h_0})] + Σ_{ℓ=1}^{L} E_{µ₀}[ψ(p_{h_ℓ}) − ψ(p_{h_{ℓ−1}})].

The multilevel Monte Carlo (MLMC) estimator

Q̂^ML_{{h_ℓ, N_ℓ}} := (1/N₀) Σ_{i=1}^{N₀} ψ(p_{h_0}^{(i)}) + Σ_{ℓ=1}^{L} (1/N_ℓ) Σ_{i=1}^{N_ℓ} [ψ(p_{h_ℓ}^{(i)}) − ψ(p_{h_{ℓ−1}}^{(i)})]

is a sum of L + 1 independent MC estimators. The sequence {N_ℓ} is decreasing,
so a significant portion of the computational effort is shifted onto the
coarse grids.
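The telescoping sum above translates directly into code. In the sketch below,
`psi_level(l, z)` is a stand-in for ψ(p_{h_ℓ}) evaluated from the same
underlying sample z on levels ℓ and ℓ−1, so each correction term has small
variance; the geometric 4⁻ℓ term mimics an O(h_ℓ²) discretisation bias and is
purely illustrative.

```python
import numpy as np

def psi_level(l, z):
    """Toy level-l approximation: exact value plus a bias decaying like 4^-l."""
    return np.sin(z) + 4.0 ** (-l) * np.cos(z)

def mlmc_estimate(N, rng=None):
    """MLMC telescoping estimator; N[l] samples on level l, decreasing in l."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for l, n_l in enumerate(N):
        z = rng.standard_normal(n_l)           # fresh samples on each level
        if l == 0:
            total += np.mean(psi_level(0, z))  # coarse-level estimator
        else:
            # same samples z on both levels -> low-variance correction
            total += np.mean(psi_level(l, z) - psi_level(l - 1, z))
    return total

est = mlmc_estimate([10000, 2000, 400], rng=0)  # decreasing N_l, as on the slide
```

Because the corrections shrink with ℓ, the small sample counts on the fine
(expensive) levels suffice, which is the source of the cost savings in the
complexity table later on.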
Convergence and Complexity: Mean square error

We want to bound the mean square error (MSE)

e(Q̂/Ẑ)² = E[(Q̂/Ẑ − Q/Z)²].

In the log-normal case, it is not sufficient to bound the individual mean
square errors E[(Q − Q̂)²] and E[(Z − Ẑ)²].

We require a bound on E[(Q − Q̂)²] and E[(Z − Ẑ)^p], for some p > 2.

We split the error into two contributions: the discretisation error and the
sampling error.
Convergence and Complexity: Discretisation error

Denote Q_h := E_{µ₀}[ψ(p_h)] and Z_h := E_{µ₀}[exp[−Φ(p_h)]]. Then

e(Q̂/Ẑ)² ≤ 2 [ E[(Q̂/Ẑ − Q_h/Z_h)²] + (Q_h/Z_h − Q/Z)² ],

where the first term is the sampling error and the second the discretisation
error.

Theorem (Scheichl, Stuart, T., in preparation)

Under a log-normal prior, k = exp[g], and suitable assumptions on O and φ,
we have

|Q_h/Z_h − Q/Z| ≤ C_FE h^s,

where the rate s is problem dependent.
Convergence and Complexity: Sampling error

MC: follows from results on moments of sample means of i.i.d. random
variables. ✓

MLMC: follows from the results for MC, plus bounds on moments of sums of
independent estimators (with constants independent of L). ✓

Lemma

Let {Y_i}_{i=0}^{L} be a sequence of independent, mean-zero random variables.
Then for any p ∈ N,

E[(Σ_{i=0}^{L} Y_i)^{2p}] ≤ C_p (Σ_{i=0}^{L} E[Y_i^{2p}]^{1/p})^p,

where the constant C_p depends only on p.

QMC: requires an extension of current QMC theory to non-linear functionals (✓)
and to higher order moments of the worst-case error (✗)
Convergence and Complexity

Theorem (Scheichl, Stuart, T., in preparation)

Under a log-normal prior, k = exp[g], we have

e(Q̂^MC_{h,N} / Ẑ^MC_{h,N})² ≤ C_MC (N^{−1} + h^{2s}),

e(Q̂^ML_{{h_ℓ,N_ℓ}} / Ẑ^ML_{{h_ℓ,N_ℓ}})² ≤ C_ML (Σ_{ℓ=0}^{L} h_ℓ^{2s}/N_ℓ + h_L^{2s}),

where the convergence rate s is problem dependent. If k = exp[g] + c, for
some c > 0, then we additionally have

e(Q̂^QMC_{h,N} / Ẑ^QMC_{h,N})² ≤ C_QMC (N^{−2+δ} + h^{2s}),

for any δ > 0.

These are the same convergence rates as for the individual estimators Q̂ and Ẑ!
Convergence and Complexity

The computational ε-cost is the number of FLOPs required to achieve an MSE
of O(ε²).

For the groundwater flow problem in d dimensions, we typically have s = 1,
and with an optimal linear solver, the computational ε-costs are bounded by:

d | MLMC    | QMC     | MC
1 | O(ε⁻²)  | O(ε⁻²)  | O(ε⁻³)
2 | O(ε⁻²)  | O(ε⁻³)  | O(ε⁻⁴)
3 | O(ε⁻³)  | O(ε⁻⁴)  | O(ε⁻⁵)
Numerical Results: Mean square error

- 2-dimensional flow cell model problem on (0, 1)²
- k log-normal random field with exponential covariance function,
  correlation length λ = 0.3, variance σ² = 1
- observed data corresponds to local averages of the pressure p at 9 points,
  σ_η² = 0.09
- QoI is the outflow over the right boundary
Numerical Results: Discretisation and sampling errors

[Figure: discretisation error versus mesh width, with reference slope h, and
sampling error for MC and QMC]
Numerical Results: Sampling error

[Figure: sampling error of MC, with reference slope N^{−1/2}, and of QMC,
with reference slope N^{−1}]
Conclusions
Posterior expectations can be written as the ratio of prior
expectations, and in this way approximated using QMC and MLMC
methods.
A convergence and complexity analysis of the resulting estimators
showed that the complexity of this approach is the same as
computing prior expectations.
Numerical investigations confirm the effectiveness of the QMC and
MLMC estimators for a typical model problem in subsurface flow.
References I
R. Scheichl, A.M. Stuart, and A.L. Teckentrup.
Quasi-Monte Carlo and Multilevel Monte Carlo Methods for
Computing Posterior Expectations in Elliptic Inverse Problems.
In preparation.
H. Niederreiter.
Random Number Generation and Quasi-Monte Carlo Methods.
SIAM, 1994.
I.G. Graham, F.Y. Kuo, J.A. Nichols, R. Scheichl, Ch. Schwab, and
I.H. Sloan.
Quasi-Monte Carlo Finite Element Methods for Elliptic PDEs with
Log-normal Random Coefficients.
Numerische Mathematik (published online), 2014.
References II
M.B. Giles.
Multilevel Monte Carlo path simulation.
Operations Research, 56(3):607–617, 2008.
K.A. Cliffe, M.B. Giles, R. Scheichl, and A.L. Teckentrup.
Multilevel Monte Carlo methods and applications to elliptic PDEs with
random coefficients.
Computing and Visualization in Science, 14:3–15, 2011.
A.L. Teckentrup, R. Scheichl, M.B. Giles, and E. Ullmann.
Further analysis of multilevel Monte Carlo methods for elliptic PDEs
with random coefficients.
Numerische Mathematik, 125(3):569–600, 2013.