
Gaussian Process Regression: Solving Non-linear Problems

Solving Challenging Non-linear Regression Problems
by Manipulating a Gaussian Distribution
Machine Learning Summer School, Cambridge 2009
Carl Edward Rasmussen
Department of Engineering, University of Cambridge
August 30th - September 10th, 2009
The Prediction Problem

[Figure: atmospheric CO2 concentration (ppm, roughly 320-420) plotted against year from 1960 to 2020, with a "?" marking the region to be predicted.]
Maximum likelihood, parametric model

Supervised parametric learning:
• data: x, y
• model: y = f_w(x) + ε

Gaussian likelihood:
p(y|x, w) ∝ ∏_c exp( −(y_c − f_w(x_c))² / (2σ²_noise) ).

Maximize the likelihood:
w_ML = argmax_w p(y|x, w).

Make predictions by plugging in the ML estimate:
p(y∗|x∗, w_ML)
Bayesian Inference, parametric model

Supervised parametric learning:
• data: x, y
• model: y = f_w(x) + ε

Gaussian likelihood:
p(y|x, w) ∝ ∏_c exp( −(y_c − f_w(x_c))² / (2σ²_noise) ).

Parameter prior:
p(w)

Posterior parameter distribution by Bayes' rule, p(a|b) = p(b|a)p(a)/p(b):
p(w|x, y) = p(w) p(y|x, w) / p(y|x)
Bayesian Inference, parametric model, cont.

Making predictions:
p(y∗|x∗, x, y) = ∫ p(y∗|w, x∗) p(w|x, y) dw

Marginal likelihood:
p(y|x) = ∫ p(w) p(y|x, w) dw.
The Gaussian Distribution

The Gaussian distribution is given by
p(x|µ, Σ) = N(µ, Σ) = (2π)^(−D/2) |Σ|^(−1/2) exp( −½ (x − µ)ᵀ Σ⁻¹ (x − µ) ),
where µ is the mean vector and Σ the covariance matrix.
Conditionals and Marginals of a Gaussian
[Figure: two panels of a joint Gaussian density; one panel overlays a conditional distribution, the other a marginal distribution.]
Both the conditionals and the marginals of a joint Gaussian are again Gaussian.
Conditionals and Marginals of a Gaussian

In algebra, if x and y are jointly Gaussian
p(x, y) = N( [a; b], [A  B; Bᵀ  C] ),
the marginal distribution of x is
p(x, y) = N( [a; b], [A  B; Bᵀ  C] )  =⇒  p(x) = N(a, A),
and the conditional distribution of x given y is
p(x, y) = N( [a; b], [A  B; Bᵀ  C] )  =⇒  p(x|y) = N( a + BC⁻¹(y − b), A − BC⁻¹Bᵀ ),
where x and y can be scalars or vectors.
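These two identities are all that GP regression needs. A small numerical illustration in Octave/Matlab (the joint Gaussian below and the observed value are arbitrary choices, not taken from the slides):

% Condition a bivariate Gaussian on an observed y
a = 0;  b = 1;                 % means of x and y
A = 1;  B = 0.8;  C = 2;       % joint covariance [A B; B C] (must be positive definite)
yobs = 2.5;                    % an observed value of y

m_x_given_y = a + B/C * (yobs - b)   % conditional mean:     a + B C^{-1} (y - b)
v_x_given_y = A - B/C * B            % conditional variance: A - B C^{-1} B'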
What is a Gaussian Process?

A Gaussian process is a generalization of a multivariate Gaussian distribution to
infinitely many variables.

Informally: infinitely long vector ≃ function

Definition: a Gaussian process is a collection of random variables, any
finite number of which have (consistent) Gaussian distributions.

A Gaussian distribution is fully specified by a mean vector, µ, and covariance
matrix Σ:
f = (f₁, . . . , f_n)ᵀ ∼ N(µ, Σ),   indices i = 1, . . . , n

A Gaussian process is fully specified by a mean function m(x) and covariance
function k(x, x′):
f(x) ∼ GP( m(x), k(x, x′) ),   index: x
The marginalization property

Thinking of a GP as a Gaussian distribution with an infinitely long mean vector
and an infinite by infinite covariance matrix may seem impractical. . .

. . . luckily we are saved by the marginalization property:

Recall: p(x) = ∫ p(x, y) dy.

For Gaussians:
p(x, y) = N( [a; b], [A  B; Bᵀ  C] )  =⇒  p(x) = N(a, A)
Random functions from a Gaussian Process

Example one dimensional Gaussian process:
p(f(x)) ∼ GP( m(x) = 0, k(x, x′) = exp(−½ (x − x′)²) ).

To get an indication of what this distribution over functions looks like, focus on a
finite subset of function values f = (f(x₁), f(x₂), . . . , f(x_n))ᵀ, for which
f ∼ N(0, Σ),   where Σ_ij = k(x_i, x_j).

Then plot the coordinates of f as a function of the corresponding x values.
Some values of the random function

[Figure: sampled function values f(x) between roughly −1.5 and 1.5, plotted as points against inputs x in [−5, 5].]
Joint Generation

To generate a random sample from a D-dimensional joint Gaussian with
covariance matrix K and mean vector m (in Octave or Matlab):

z = randn(D,1);
y = chol(K)'*z + m;

where chol returns the Cholesky factor R such that RᵀR = K.

Thus, the covariance of y is:
E[(y − m)(y − m)ᵀ] = E[Rᵀ z zᵀ R] = Rᵀ E[z zᵀ] R = Rᵀ I R = K.
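Putting this together with the squared exponential covariance from the previous slides, a self-contained Octave/Matlab sketch for drawing prior sample functions might look as follows (the grid, the number of samples and the jitter added for numerical stability are illustrative choices, not from the slides):

% Draw three sample functions from a zero-mean GP with SE covariance
n  = 201;
x  = linspace(-5, 5, n)';                    % input grid
D2 = (repmat(x,1,n) - repmat(x',n,1)).^2;    % squared distances (x_i - x_j)^2
K  = exp(-0.5*D2);                           % k(x,x') = exp(-(x-x')^2/2)
K  = K + 1e-10*eye(n);                       % tiny jitter so chol succeeds numerically
R  = chol(K);                                % upper triangular, R'*R = K
f  = R' * randn(n, 3);                       % three independent zero-mean samples
plot(x, f);  xlabel('input, x');  ylabel('output, f(x)');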
Sequential Generation

Factorize the joint distribution
p(f₁, . . . , f_n | x₁, . . . , x_n) = ∏_{i=1}^{n} p(f_i | f_{i−1}, . . . , f₁, x_i, . . . , x₁),
and generate function values sequentially.

What do the individual terms look like? For Gaussians:
p(x, y) = N( [a; b], [A  B; Bᵀ  C] )  =⇒  p(x|y) = N( a + BC⁻¹(y − b), A − BC⁻¹Bᵀ )

Do try this at home!
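A naive Octave/Matlab version of this exercise, conditioning each new value on all previous ones with the formula above (the grid and unit length scale are assumptions of the example; direct solves at every step keep it transparent rather than fast):

% Sequential generation: sample f_i | f_1, ..., f_{i-1} one input at a time
k = @(a, b) exp(-0.5 * bsxfun(@minus, a, b).^2);   % SE covariance, unit length scale
x = linspace(-5, 5, 101)';                         % inputs, visited in order
n = numel(x);  f = zeros(n, 1);
f(1) = sqrt(k(x(1), x(1))) * randn();              % first value: marginal N(0, k(x1,x1))
for i = 2:n
  B = k(x(i), x(1:i-1)');                          % 1 x (i-1) cross-covariances
  C = k(x(1:i-1), x(1:i-1)');                      % covariance of the past values
  m = B * (C \ f(1:i-1));                          % conditional mean  B C^{-1} f_past
  v = k(x(i), x(i)) - B * (C \ B');                % conditional variance A - B C^{-1} B'
  f(i) = m + sqrt(max(v, 0)) * randn();            % max() guards against tiny negative v
end
plot(x, f);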
Function drawn at random from a Gaussian Process with Gaussian covariance

[Figure: a smooth random surface drawn from a GP over two-dimensional inputs on roughly [−6, 6] × [−6, 6], with function values between about −2 and 8.]
Maximum likelihood, parametric model

Supervised parametric learning:
• data: x, y
• model: y = f_w(x) + ε

Gaussian likelihood:
p(y|x, w, M_i) ∝ ∏_c exp( −(y_c − f_w(x_c))² / (2σ²_noise) ).

Maximize the likelihood:
w_ML = argmax_w p(y|x, w, M_i).

Make predictions by plugging in the ML estimate:
p(y∗|x∗, w_ML, M_i)
Bayesian Inference, parametric model

Supervised parametric learning:
• data: x, y
• model: y = f_w(x) + ε

Gaussian likelihood:
p(y|x, w, M_i) ∝ ∏_c exp( −(y_c − f_w(x_c))² / (2σ²_noise) ).

Parameter prior:
p(w|M_i)

Posterior parameter distribution by Bayes' rule, p(a|b) = p(b|a)p(a)/p(b):
p(w|x, y, M_i) = p(w|M_i) p(y|x, w, M_i) / p(y|x, M_i)
Bayesian Inference, parametric model, cont.

Making predictions:
p(y∗|x∗, x, y, M_i) = ∫ p(y∗|w, x∗, M_i) p(w|x, y, M_i) dw

Marginal likelihood:
p(y|x, M_i) = ∫ p(w|M_i) p(y|x, w, M_i) dw.

Model probability:
p(M_i|x, y) = p(M_i) p(y|x, M_i) / p(y|x)

Problem: integrals are intractable for most interesting models!
Non-parametric Gaussian process models

In our non-parametric model, the “parameters” are the function itself!

Gaussian likelihood:
y|x, f(x), M_i ∼ N(f, σ²_noise I)

(Zero mean) Gaussian process prior:
f(x)|M_i ∼ GP( m(x) ≡ 0, k(x, x′) )

Leads to a Gaussian process posterior:
f(x)|x, y, M_i ∼ GP( m_post(x) = k(x, x)[K(x, x) + σ²_noise I]⁻¹ y,
                     k_post(x, x′) = k(x, x′) − k(x, x)[K(x, x) + σ²_noise I]⁻¹ k(x, x′) ).

And a Gaussian predictive distribution:
y∗|x∗, x, y, M_i ∼ N( k(x∗, x)ᵀ[K + σ²_noise I]⁻¹ y,
                      k(x∗, x∗) + σ²_noise − k(x∗, x)ᵀ[K + σ²_noise I]⁻¹ k(x∗, x) )
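A minimal Octave/Matlab sketch of these predictive equations with the squared exponential covariance (the length scale ell, signal variance sf2 and noise variance sn2 are treated as known inputs; a Cholesky solve is used in place of the explicit inverse):

% GP regression: predictive mean and variance of y* at test inputs xs
function [mu, s2] = gp_predict(x, y, xs, ell, sf2, sn2)
  % x: n x 1 training inputs, y: n x 1 targets, xs: m x 1 test inputs
  k  = @(a, b) sf2 * exp(-0.5 * bsxfun(@minus, a, b').^2 / ell^2);  % SE covariance
  K  = k(x, x) + sn2 * eye(numel(x));       % K(x,x) + sigma_noise^2 I
  Ks = k(xs, x);                            % m x n cross-covariances k(x*, x)
  L  = chol(K, 'lower');                    % K = L L'
  alpha = L' \ (L \ y);                     % [K + sigma_noise^2 I]^{-1} y
  mu = Ks * alpha;                          % predictive mean
  V  = L \ Ks';                             % n x m
  s2 = sf2 + sn2 - sum(V.^2, 1)';           % predictive variance of y*
end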
Prior and Posterior

[Figure: two panels of functions f(x) on x ∈ [−5, 5], illustrating samples from the GP prior and from the GP posterior after conditioning on observations.]

Predictive distribution:
p(y∗|x∗, x, y) ∼ N( k(x∗, x)ᵀ[K + σ²_noise I]⁻¹ y,
                    k(x∗, x∗) + σ²_noise − k(x∗, x)ᵀ[K + σ²_noise I]⁻¹ k(x∗, x) )
Graphical model for Gaussian Process

[Figure: graphical model with observed inputs x_i and outputs y_i (square nodes), latent function values f_i (round nodes), and test triplets x∗, f∗, y∗.]

Square nodes are observed (clamped), round nodes stochastic (free).

All pairs of latent variables are connected.

Predictions y∗ depend only on the corresponding single latent f∗.

Notice that adding a triplet x∗_m, f∗_m, y∗_m does not influence the distribution. This
is guaranteed by the marginalization property of the GP.

This explains why we can make inference using a finite amount of computation!
Some interpretation

Recall our main result:
f∗|X∗, X, y ∼ N( K(X∗, X)[K(X, X) + σ²_n I]⁻¹ y,
                 K(X∗, X∗) − K(X∗, X)[K(X, X) + σ²_n I]⁻¹ K(X, X∗) ).

The mean is linear in two ways:
µ(x∗) = k(x∗, X)[K(X, X) + σ²_n I]⁻¹ y = Σ_{c=1}^{n} β_c y^(c) = Σ_{c=1}^{n} α_c k(x∗, x^(c)).

The last form is most commonly encountered in the kernel literature.

The variance is the difference between two terms:
V(x∗) = k(x∗, x∗) − k(x∗, X)[K(X, X) + σ²_n I]⁻¹ k(X, x∗),

the first term is the prior variance, from which we subtract a (positive) term
telling how much the data X has explained. Note that the variance is
independent of the observed outputs y.
The marginal likelihood

Log marginal likelihood:
log p(y|x, M_i) = −½ yᵀK⁻¹y − ½ log |K| − (n/2) log(2π)

is the combination of a data fit term and complexity penalty. Occam’s Razor is
automatic.

Learning in Gaussian process models involves finding
• the form of the covariance function, and
• any unknown (hyper-) parameters θ.

This can be done by optimizing the marginal likelihood:
∂ log p(y|x, θ, M_i) / ∂θ_j = ½ yᵀ K⁻¹ (∂K/∂θ_j) K⁻¹ y − ½ trace( K⁻¹ ∂K/∂θ_j )
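A sketch of the log marginal likelihood in Octave/Matlab (K is assumed to already include the noise variance on its diagonal; the Cholesky factor supplies both the quadratic term and the log determinant):

% Log marginal likelihood of a GP with covariance matrix K (noise included) and targets y
function lml = gp_log_marginal(K, y)
  n = numel(y);
  L = chol(K, 'lower');                     % K = L L'
  alpha = L' \ (L \ y);                     % K^{-1} y
  lml = -0.5*(y'*alpha) - sum(log(diag(L))) - 0.5*n*log(2*pi);
  % log|K| = 2*sum(log(diag(L))), hence the -sum(log(diag(L))) term above
end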
Example: Fitting the length scale parameter

Parameterized covariance function: k(x, x′) = v² exp( −(x − x′)² / (2ℓ²) ) + σ²_n δ_{xx′}.

[Figure: observations on x ∈ [−10, 10] together with posterior mean functions for a too short, a good, and a too long length scale.]

The mean posterior predictive function is plotted for 3 different length scales (the
green curve corresponds to optimizing the marginal likelihood). Notice that an
almost exact fit to the data can be achieved by reducing the length scale – but the
marginal likelihood does not favour this!
Why, in principle, does Bayesian Inference work?

Occam’s Razor

[Figure: the evidence P(Y|M_i) plotted against all possible data sets Y, for a model that is too simple, one that is “just right”, and one that is too complex.]
An illustrative analogous example

Imagine the simple task of fitting the variance, σ², of a zero-mean Gaussian to a
set of n scalar observations.

The log likelihood is log p(y|µ, σ²) = −½ yᵀIy/σ² − ½ log |Iσ²| − (n/2) log(2π)
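To make the analogy explicit (this step is not spelled out on the slide): the data-fit term −½ yᵀy/σ² on its own is maximized by taking σ² as large as possible, while the complexity term −½ log |Iσ²| = −(n/2) log σ² favours a small σ²; setting the derivative with respect to σ² to zero balances the two and gives σ²_ML = yᵀy/n.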
From random functions to covariance functions

Consider the class of linear functions:
f(x) = ax + b, where a ∼ N(0, α), and b ∼ N(0, β).

We can compute the mean function:
µ(x) = E[f(x)] = ∫∫ f(x) p(a) p(b) da db = ∫ ax p(a) da + ∫ b p(b) db = 0,

and covariance function:
k(x, x′) = E[(f(x) − 0)(f(x′) − 0)] = ∫∫ (ax + b)(ax′ + b) p(a) p(b) da db
         = ∫ a² xx′ p(a) da + ∫ b² p(b) db + (x + x′) ∫∫ ab p(a) p(b) da db = αxx′ + β.
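A quick Monte Carlo check of this result in Octave/Matlab (the values of α, β and the two inputs are arbitrary illustrative choices):

% Empirical check that Cov[f(x), f(x')] = alpha*x*x' + beta for f(x) = a*x + b
alpha = 2.0;  beta = 0.5;  N = 1e6;
a  = sqrt(alpha) * randn(N, 1);        % a ~ N(0, alpha)
b  = sqrt(beta)  * randn(N, 1);        % b ~ N(0, beta)
x1 = 0.7;  x2 = -1.3;                  % two arbitrary inputs
f1 = a*x1 + b;  f2 = a*x2 + b;         % corresponding function values
empirical = mean(f1 .* f2)             % E[f(x1) f(x2)] (the mean function is zero)
analytic  = alpha*x1*x2 + beta         % alpha*x*x' + beta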
From random functions to covariance functions II

Consider the class of functions (sums of squared exponentials):
f(x) = lim_{n→∞} (1/n) Σ_i γ_i exp(−(x − i/n)²), where γ_i ∼ N(0, 1), ∀i
     = ∫_{−∞}^{∞} γ(u) exp(−(x − u)²) du, where γ(u) ∼ N(0, 1), ∀u.

The mean function is:
µ(x) = E[f(x)] = ∫_{−∞}^{∞} exp(−(x − u)²) ∫ γ p(γ) dγ du = 0,

and the covariance function:
E[f(x)f(x′)] = ∫ exp( −(x − u)² − (x′ − u)² ) du
             = ∫ exp( −2(u − (x + x′)/2)² + (x + x′)²/2 − x² − x′² ) du ∝ exp( −(x − x′)²/2 ).

Thus, the squared exponential covariance function is equivalent to regression
using infinitely many Gaussian shaped basis functions placed everywhere, not just
at your training points!
Using finitely many basis functions may be dangerous!

[Figure: a regression fit on x ∈ [−10, 10] based on finitely many basis functions, with a “?” marking a region where the behaviour of the fit is questionable.]
Spline models

One dimensional minimization problem: find the function f(x) which minimizes:
Σ_{i=1}^{c} (f(x^(i)) − y^(i))² + λ ∫_0^1 (f″(x))² dx,

where 0 < x^(i) < x^(i+1) < 1, ∀i = 1, . . . , n − 1, has as solution the Natural
Smoothing Cubic Spline: first order polynomials when x ∈ [0; x^(1)] and when
x ∈ [x^(n); 1], and a cubic polynomial in each x ∈ [x^(i); x^(i+1)], ∀i = 1, . . . , n − 1,
joined to have continuous second derivatives at the knots.

The identical function is also the mean of a Gaussian process: Consider the class
of functions given by:
f(x) = α + βx + lim_{n→∞} (1/√n) Σ_{i=0}^{n−1} γ_i (x − i/n)_+ ,   where (x)_+ = x if x > 0, and 0 otherwise,

with Gaussian priors:
α ∼ N(0, ξ),   β ∼ N(0, ξ),   γ_i ∼ N(0, Γ), ∀i = 0, . . . , n − 1.
The covariance function becomes:
k(x, x′) = ξ + xx′ξ + Γ lim_{n→∞} (1/n) Σ_{i=0}^{n−1} (x − i/n)_+ (x′ − i/n)_+
         = ξ + xx′ξ + Γ ∫_0^1 (x − u)_+ (x′ − u)_+ du
         = ξ + xx′ξ + Γ( ½ |x − x′| min(x, x′)² + ⅓ min(x, x′)³ ).

In the limit ξ → ∞ and λ = σ²_n/Γ the posterior mean becomes the natural cubic
spline.

We can thus find the hyperparameters σ²_n and Γ (and thereby λ) by maximising
the marginal likelihood in the usual way.

Defining h(x) = (1, x)ᵀ, the posterior predictions have mean and variance:
µ̄(X∗) = H(X∗)ᵀβ̄ + K(X, X∗)[K(X, X) + σ²_n I]⁻¹ (y − H(X)ᵀβ̄)
Σ̄(X∗) = Σ(X∗) + R(X, X∗)ᵀ A(X)⁻¹ R(X, X∗)
β̄ = A(X)⁻¹ H(X)[K + σ²_n I]⁻¹ y,   A(X) = H(X)[K(X, X) + σ²_n I]⁻¹ H(X)ᵀ
R(X, X∗) = H(X∗) − H(X)[K + σ²_n I]⁻¹ K(X, X∗)
Cubic Splines, Example

Although this is not the fastest way to compute splines, it offers a principled way
of finding hyperparameters, and uncertainties on predictions.

Note also that although the posterior mean is smooth (piecewise cubic), posterior
sample functions are not.

[Figure: observations on x ∈ [0, 1] together with the posterior mean, two random posterior sample functions, and a 95% confidence band.]
Model Selection in Practice; Hyperparameters

There are two types of task: form and parameters of the covariance function.

Typically, our prior is too weak to quantify aspects of the covariance function.
We use a hierarchical model with hyperparameters. E.g., in ARD:

k(x, x′) = v₀² exp( −Σ_{d=1}^{D} (x_d − x′_d)² / (2v_d²) ),   hyperparameters θ = (v₀, v₁, . . . , v_D, σ²_n).

[Figure: three surfaces of sample functions over (x1, x2) ∈ [−2, 2]²: v1 = v2 = 1; v1 = 0.32 and v2 = 1; v1 = v2 = 0.32.]
Rational quadratic covariance function

The rational quadratic (RQ) covariance function:
k_RQ(r) = ( 1 + r²/(2αℓ²) )^(−α)

with α, ℓ > 0 can be seen as a scale mixture (an infinite sum) of squared
exponential (SE) covariance functions with different characteristic length-scales.

Using τ = ℓ⁻² and p(τ|α, β) ∝ τ^(α−1) exp(−ατ/β):
k_RQ(r) = ∫ p(τ|α, β) k_SE(r|τ) dτ
        ∝ ∫ τ^(α−1) exp(−ατ/β) exp(−τr²/2) dτ ∝ ( 1 + r²/(2αℓ²) )^(−α),
Rational quadratic covariance function II

[Figure: left, the RQ covariance as a function of input distance for α = 1/2, α = 2, and α → ∞; right, corresponding sample functions f(x) on x ∈ [−5, 5].]

The limit α → ∞ of the RQ covariance function is the SE.
Matérn covariance functions

Stationary covariance functions can be based on the Matérn form:
k(x, x′) = [ 1 / (Γ(ν) 2^(ν−1)) ] [ (√(2ν)/ℓ) |x − x′| ]^ν K_ν( (√(2ν)/ℓ) |x − x′| ),

where K_ν is the modified Bessel function of second kind of order ν, and ℓ is the
characteristic length scale.

Sample functions from Matérn forms are ⌈ν⌉ − 1 times differentiable. Thus, the
hyperparameter ν can control the degree of smoothness.

Special cases:
• k_{ν=1/2}(r) = exp(−r/ℓ): Laplacian covariance function, Brownian motion (Ornstein-Uhlenbeck)
• k_{ν=3/2}(r) = (1 + √3 r/ℓ) exp(−√3 r/ℓ) (once differentiable)
• k_{ν=5/2}(r) = (1 + √5 r/ℓ + 5r²/(3ℓ²)) exp(−√5 r/ℓ) (twice differentiable)
• k_{ν→∞}(r) = exp(−r²/(2ℓ²)): smooth (infinitely differentiable)
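A sketch of these special cases as Octave/Matlab anonymous functions of the distance r = |x − x′| (the plotting range and unit length scale are assumptions of the example):

% Matérn covariance special cases as functions of the distance r = |x - x'|
matern32 = @(r, ell) (1 + sqrt(3)*r/ell) .* exp(-sqrt(3)*r/ell);
matern52 = @(r, ell) (1 + sqrt(5)*r/ell + 5*r.^2/(3*ell^2)) .* exp(-sqrt(5)*r/ell);
ou       = @(r, ell) exp(-r/ell);            % nu = 1/2: Ornstein-Uhlenbeck / Laplacian
se       = @(r, ell) exp(-r.^2/(2*ell^2));   % nu -> infinity: squared exponential

r = linspace(0, 3, 301);
plot(r, ou(r,1), r, matern32(r,1), r, matern52(r,1), r, se(r,1));
legend('\nu=1/2', '\nu=3/2', '\nu=5/2', '\nu\rightarrow\infty');
xlabel('input distance');  ylabel('covariance');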
Matérn covariance functions II

Univariate Matérn covariance function with unit characteristic length scale and
unit variance:

[Figure: left, the covariance function against input distance for ν = 1/2, ν = 1, ν = 2, and ν → ∞; right, corresponding sample functions on x ∈ [−5, 5].]
Periodic, smooth functions

To create a distribution over periodic functions of x, we can first map the inputs
to u = (sin(x), cos(x))ᵀ, and then measure distances in the u space. Combined
with the SE covariance function with characteristic length scale ℓ, we get:

k_periodic(x, x′) = exp( −2 sin²(π(x − x′)) / ℓ² )

[Figure: two panels of random functions on x ∈ [−2, 2] drawn from the periodic covariance.]

Three functions drawn at random; left ℓ > 1, and right ℓ < 1.
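A short Octave/Matlab sketch drawing random periodic functions from this covariance (the grid, the two length scales and the jitter term are illustrative choices):

% Sample functions with k(x,x') = exp(-2 sin^2(pi(x-x'))/ell^2)
n = 301;  x = linspace(-2, 2, n)';
kper = @(a, b, ell) exp(-2*sin(pi*bsxfun(@minus, a, b')).^2 / ell^2);
for ell = [2, 0.5]                          % ell > 1 versus ell < 1
  K = kper(x, x, ell) + 1e-10*eye(n);       % jitter for numerical stability
  f = chol(K)' * randn(n, 3);               % three random functions
  figure;  plot(x, f);  title(sprintf('ell = %.2f', ell));
end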
The Prediction Problem

[Figure: atmospheric CO2 concentration (ppm, roughly 320-420) against year, 1960-2020, with a “?” marking the region to be predicted.]
Covariance Function

The covariance function consists of several terms, parameterized by a total of 11
hyperparameters:

• long-term smooth trend (squared exponential)
  k1(x, x′) = θ1² exp( −(x − x′)²/θ2² ),
• seasonal trend (quasi-periodic smooth)
  k2(x, x′) = θ3² exp( −2 sin²(π(x − x′))/θ5² ) × exp( −½ (x − x′)²/θ4² ),
• short- and medium-term anomaly (rational quadratic)
  k3(x, x′) = θ6² ( 1 + (x − x′)²/(2θ8θ7²) )^(−θ8),
• noise (independent Gaussian, and dependent)
  k4(x, x′) = θ9² exp( −(x − x′)²/(2θ10²) ) + θ11² δ_{xx′}.

k(x, x′) = k1(x, x′) + k2(x, x′) + k3(x, x′) + k4(x, x′)

Let’s try this with the gpml software (http://www.gaussianprocess.org/gpml).
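The composite covariance is also straightforward to write down directly; a hand-rolled Octave/Matlab sketch for scalar inputs (t is the vector of 11 hyperparameters in the order above; this is not the gpml toolbox's own covariance interface):

% Composite CO2 covariance k = k1 + k2 + k3 + k4 for scalar inputs x, xp
function k = co2_cov(x, xp, t)        % t: vector of 11 hyperparameters theta
  d  = x - xp;
  k1 = t(1)^2 * exp(-d^2 / t(2)^2);                                    % long-term trend
  k2 = t(3)^2 * exp(-2*sin(pi*d)^2 / t(5)^2) * exp(-0.5*d^2 / t(4)^2); % seasonal
  k3 = t(6)^2 * (1 + d^2 / (2*t(8)*t(7)^2))^(-t(8));                   % medium-term anomaly
  k4 = t(9)^2 * exp(-d^2 / (2*t(10)^2)) + t(11)^2 * (d == 0);          % noise terms
  k  = k1 + k2 + k3 + k4;
end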
Long- and medium-term mean predictions

[Figure: mean predictions plotted against year, 1960-2020; one vertical axis spans about 320-400 ppm, the other about −1 to 1 ppm, both labelled CO2 concentration.]
Mean Seasonal Component

[Figure: contour plot of the mean seasonal component as a function of month (J-D) and year (1960-2020), with contour levels between about −3.6 and 3.1 ppm.]
Seasonal component: magnitude θ3 = 2.4 ppm, decay-time θ4 = 90 years.
Dependent noise, magnitude θ9 = 0.18 ppm, decay θ10 = 1.6 months.
Independent noise, magnitude θ11 = 0.19 ppm.
Optimize or integrate out? See MacKay [3].
Binary Gaussian Process Classification

[Figure: left, a latent function f(x); right, the corresponding class probability π(x) between 0 and 1.]

The class probability is related to the latent function, f, through:
p(y = 1|f(x)) = π(x) = Φ( f(x) ),
where Φ is a sigmoid function, such as the logistic or cumulative Gaussian.

Observations are independent given f, so the likelihood is
p(y|f) = ∏_{i=1}^{n} p(y_i|f_i) = ∏_{i=1}^{n} Φ(y_i f_i).
Prior and Posterior for Classification

We use a Gaussian process prior for the latent function:
f|X, θ ∼ N(0, K)

The posterior becomes:
p(f|D, θ) = p(y|f) p(f|X, θ) / p(D|θ) = [ N(f|0, K) / p(D|θ) ] ∏_{i=1}^{m} Φ(y_i f_i),
which is non-Gaussian.

The latent value at the test point, f(x∗), is
p(f∗|D, θ, x∗) = ∫ p(f∗|f, X, θ, x∗) p(f|D, θ) df,

and the predictive class probability becomes
p(y∗|D, θ, x∗) = ∫ p(y∗|f∗) p(f∗|D, θ, x∗) df∗,

both of which are intractable to compute.
Gaussian Approximation to the Posterior

We approximate the non-Gaussian posterior by a Gaussian:
p(f|D, θ) ≃ q(f|D, θ) = N(m, A)

then q(f∗|D, θ, x∗) = N(f∗|µ∗, σ∗²), where
µ∗ = k∗ᵀ K⁻¹ m
σ∗² = k(x∗, x∗) − k∗ᵀ( K⁻¹ − K⁻¹AK⁻¹ )k∗.

Using this approximation with the cumulative Gaussian likelihood
q(y∗ = 1|D, θ, x∗) = ∫ Φ(f∗) N(f∗|µ∗, σ∗²) df∗ = Φ( µ∗ / √(1 + σ∗²) )
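The final averaging step has the closed form above; in Octave/Matlab it is a one-liner (the cumulative Gaussian is written via erfc so that no extra package is needed):

% Averaged predictive class probability under the Gaussian approximation
normcdf_ = @(z) 0.5 * erfc(-z / sqrt(2));             % standard Gaussian CDF
p_star   = @(mu, s2) normcdf_(mu ./ sqrt(1 + s2));    % q(y* = 1 | D, theta, x*)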
Laplace’s method and Expectation Propagation

How do we find a good Gaussian approximation N(m, A) to the posterior?

Laplace’s method: find the Maximum A Posteriori (MAP) latent values f_MAP,
and use a local expansion (Gaussian) around this point, as suggested by Williams
and Barber [8].

Variational bounds: bound the likelihood by some tractable expression.
A local variational bound for each likelihood term was given by Gibbs and
MacKay [1]. A lower bound based on Jensen’s inequality was given by Opper and by Seeger [5].

Expectation Propagation: use an approximation of the likelihood, such that the
moments of the marginals of the approximate posterior match the (approximate)
moments of the posterior, Minka [4].
Expectation Propagation

[Figure: graphical model with training latents f_x1, . . . , f_xn and test latents f_x∗, each connected to its observation y.]

f̃_i = proj[ p(y_i|f_i) q\i ] / q\i,   where q\i = p(f) ∏_{j≠i} f̃_j.
USPS Digits, 3s vs 5s

[Figure: contour plots over log lengthscale, log(l), and log magnitude, log(σf); some panels show an approximate log marginal likelihood (contour levels about −92 to −200), others a predictive performance measure (levels about 0.25 to 0.89).]
USPS Digits, 3s vs 5s

[Figure: two panels comparing MCMC samples of the posterior marginal p(f|D) with the Laplace and EP Gaussian approximations, for two different latent values f.]
The Structure of the posterior

[Figure: a density p(x_i) plotted against x_i over the range 0 to 6, with values up to about 0.45.]
Sparse Approximations

Recall the graphical model for a Gaussian process. Inference is expensive because
the latent variables are fully connected.

[Figure: the GP graphical model with fully connected training latents f_x1, . . . , f_xn and test latents f_x∗, each with its observation y.]

Exact inference: O(n³).

Sparse approximations: solve a smaller, sparse, approximation of the original problem.

Algorithm: Subset of data.

Are there better ways to sparsify?
Inducing Variables

Because of the marginalization property, we can introduce more latent variables
without changing the distribution of the original variables.

The u = (u₁, u₂, . . .)ᵀ are called inducing variables.

The inducing variables have associated inducing inputs, s, but no associated output values.

[Figure: the GP graphical model augmented with inducing latents f_u1, f_u2, f_u3.]

The marginalization property ensures that
p(f, f∗) = ∫ p(f, f∗, u) du
The Central Approximations

In a unifying treatment, Quiñonero-Candela and Rasmussen [2] assume that training and test
sets are conditionally independent given u.

Assume: p(f, f∗) ≃ q(f, f∗), where
q(f, f∗) = ∫ q(f∗|u) q(f|u) p(u) du.

The inducing variables induce the dependencies between training and test cases.

[Figure: graphical model in which the inducing latents f_u1, f_u2, f_u3 separate the training latents from the test latents.]

Different sparse algorithms in the literature correspond to different
• choices of the inducing inputs
• further approximations
Training and test conditionals

The exact training and test conditionals are:
p(f|u) = N( K_{f,u} K_{u,u}⁻¹ u,  K_{f,f} − Q_{f,f} )
p(f∗|u) = N( K_{f∗,u} K_{u,u}⁻¹ u,  K_{f∗,f∗} − Q_{f∗,f∗} ),
where Q_{a,b} = K_{a,u} K_{u,u}⁻¹ K_{u,b}.

These equations are easily recognized as the usual predictive equations for GPs.

The effective prior is:
q(f, f∗) = N( 0, [ K_{f,f}  Q_{f,∗} ;  Q_{∗,f}  K_{∗,∗} ] )
Example: Sparse parametric Gaussian processes

Snelson and Ghahramani [6] introduced the idea of sparse GP inference based on
a pseudo data set, integrating out the targets, and optimizing the inputs.

Equivalently, in the unifying scheme:
q(f|u) = N( K_{f,u} K_{u,u}⁻¹ u,  diag[K_{f,f} − Q_{f,f}] )
q(f∗|u) = p(f∗|u).

The effective prior becomes
q_FITC(f, f∗) = N( 0, [ Q_{f,f} − diag[Q_{f,f} − K_{f,f}]  Q_{f,∗} ;  Q_{∗,f}  K_{∗,∗} ] ),

which can be computed efficiently.

The Bayesian Committee Machine [7] uses block diag instead of diag, and takes the
inducing variables to be the test cases.
The FITC Factor Graph

Sparse Parametric Gaussian Process ≡ Fully Independent Training Conditional

[Figure: graphical model in which each training and test latent connects to the inducing latents f_u1, f_u2, f_u3, with no direct connections among the training latents.]
Conclusions
Complex non-linear inference problems can be solved by manipulating plain old Gaussian distributions
More on Gaussian Processes
Rasmussen and Williams
Gaussian Processes for Machine Learning,
MIT Press, 2006.
http://www.GaussianProcess.org/gpml
Gaussian process web (code, papers, etc): http://www.GaussianProcess.org
A few references
[1] Gibbs, M. N. and MacKay, D. J. C. (2000). Variational Gaussian Process Classifiers. IEEE
Transactions on Neural Networks, 11(6):1458–1464.
[2] Quiñonero-Candela, J. and Rasmussen, C. E. (2005). A Unifying View of Sparse Approximate
Gaussian Process Regression. Journal of Machine Learning Research, 6:1939–1959.
[3] MacKay, D. J. C. (1999). Comparison of Approximate Methods for Handling Hyperparameters.
Neural Computation, 11(5):1035–1068.
[4] Minka, T. P. (2001). A Family of Algorithms for Approximate Bayesian Inference. PhD thesis,
Massachusetts Institute of Technology.
[5] Seeger, M. (2003). Bayesian Gaussian Process Models: PAC-Bayesian Generalisation Error
Bounds and Sparse Approximations. PhD thesis, School of Informatics, University of Edinburgh.
http://www.cs.berkeley.edu/∼mseeger.
[6] Snelson, E. and Ghahramani, Z. (2006). Sparse Gaussian Processes using Pseudo-inputs. In
Advances in Neural Information Processing Systems 18. MIT Press.
[7] Tresp, V. (2000). A Bayesian Committee Machine. Neural Computation, 12(11):2719–2741.
[8] Williams, C. K. I. and Barber, D. (1998). Bayesian Classification with Gaussian Processes. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 20(12):1342–1351.