3469 - Practical Numerical Simulations Mike Peardon — Michaelmas Term 2015


3469 - Practical Numerical Simulations

Mike Peardon — mjp@maths.tcd.ie

http://www.maths.tcd.ie/~mjp/3469

School of Mathematics

Trinity College Dublin

Michaelmas Term 2015

Monte Carlo - introduction

Monte Carlo is a simple-to-implement technique to estimate high-dimensional integrals or very large sums

Monte Carlo estimation of I_h = ∫ dx h(x)

If we have a sequence of random numbers {X_1, X_2, X_3, . . .} drawn from a distribution f_X(x), and a function g such that f_X(x) g(x) = h(x), then we form the ensemble {G_1 = g(X_1), G_2 = g(X_2), . . .} and

E[G_k] = ∫ dx g(x) f_X(x) = I_h

The sample mean Ḡ has the same expected value and (if the underlying distribution of G has finite variance) has variance proportional to 1/N, and "looks" normally distributed.

Monte Carlo: Integration ↔ sampling.
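As a minimal sketch of the estimator (in Python, with a hypothetical integrand): taking X uniform on [0, 1] gives f_X(x) = 1, so g = h and the sample mean of h(X_k) estimates I_h.

```python
import random

def mc_integrate(h, n):
    """Estimate I_h = ∫_0^1 h(x) dx with X uniform on [0, 1]:
    f_X(x) = 1, so g = h and the sample mean of h(X_k) has
    expectation I_h; its variance falls like 1/N."""
    total = 0.0
    for _ in range(n):
        total += h(random.uniform(0.0, 1.0))
    return total / n

random.seed(1)
# Hypothetical integrand x^2; the exact value of the integral is 1/3.
estimate = mc_integrate(lambda x: x * x, 100_000)
```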


“Monte Carlo is an extremely bad method, it should be used only when all alternative methods are worse.”

Alan Sokal


Monte Carlo - introduction (2)

The choice of sampling distribution is often important in controlling the variance of the estimator

The variance of G is minimised when

f_X(x) = |h(x)| / ∫ dx |h(x)|

Not usually practical, but choosing f to get close to this optimum is termed importance sampling
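A sketch of the variance reduction (with a hypothetical integrand h(x) = x³ on [0, 1]): sampling from f_X(x) = 2x, which roughly tracks |h|, shrinks the variance of G compared with uniform sampling.

```python
import random

def estimate_and_variance(sampler, g, n):
    """Sample mean and variance of the ensemble {G_k = g(X_k)}, X_k ~ sampler."""
    vals = [g(sampler()) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, var

random.seed(2)
n = 50_000
# Target: I = ∫_0^1 x^3 dx = 1/4.
# Uniform sampling: f_X(x) = 1, so g(x) = x^3.
mean_u, var_u = estimate_and_variance(lambda: random.random(),
                                      lambda x: x ** 3, n)
# Importance sampling: f_X(x) = 2x (draw X = sqrt(U) by inverse transform),
# so g(x) = h(x) / f_X(x) = x^2 / 2.
mean_i, var_i = estimate_and_variance(lambda: random.random() ** 0.5,
                                      lambda x: x ** 2 / 2.0, n)
```

Both estimators have expectation 1/4, but the importance-sampled ensemble has a markedly smaller variance.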

For many problems in systems with a large number of interacting degrees of freedom, it is not easy to draw from a good distribution

Markov Chain Monte Carlo provides a recipe to draw from complicated distributions - the sequence is generated iteratively following a random process


Markov Chain Monte Carlo

A Markov Chain is a sequence of random states of a system

{ψ_1, ψ_2, ψ_3, . . .} where the conditional distribution of the system at iteration k depends only on ψ_{k−1} and on no earlier or later entries in the chain.

Markov chains can be constructed such that the probability

(or probability density) of ψ at equilibrium can be arbitrarily chosen

A Markov process can be used to sample any sensible distribution.

A useful (but not necessary) condition for a process to have density f_X as its fixed point is detailed balance:

P(X′ | X) f_X(X) = P(X | X′) f_X(X′)
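A small numerical check of detailed balance, assuming a three-state toy target distribution and a Metropolis-style transition matrix:

```python
# Three-state toy example: hypothetical target distribution pi.
pi = [0.5, 0.3, 0.2]
n = len(pi)

# Metropolis-style transition matrix: propose any other state uniformly,
# accept with probability min(1, pi[j] / pi[i]).
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        if i != j:
            P[i][j] = (1.0 / (n - 1)) * min(1.0, pi[j] / pi[i])
    P[i][i] = 1.0 - sum(P[i])  # rejected proposals keep the chain at i

# Detailed balance: pi_i P_ij == pi_j P_ji for every pair (i, j).
balanced = all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < 1e-12
               for i in range(n) for j in range(n))

# Consequence: pi is a fixed point of the chain, i.e. pi P == pi.
piP = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
```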


Example: The 2d q-state Potts model

Generalisation of the Ising model (q = 2)

Consider a 2d grid of sites, with a variable σ(x, y) ∈ {1, . . . , q} at each site (x, y)

The grid has extent L, so the total number of states of the system is q^(L²)

Statistical mechanics: thermal properties of the system are computed by summing over the set Σ of allowed states,

⟨f⟩ = (1/Z) Σ_{σ ∈ Σ} f(σ) e^(−β S(σ))

β is the inverse temperature
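For tiny systems the sum over Σ can be done exactly, which makes a useful cross-check for Monte Carlo code. A sketch, with a hypothetical toy action standing in for S:

```python
import math
from itertools import product

def thermal_average(f, S, q, L, beta):
    """<f> = (1/Z) * sum over sigma of f(sigma) * exp(-beta * S(sigma)),
    enumerating all q**(L*L) states (feasible only for tiny grids)."""
    Z = 0.0
    acc = 0.0
    for flat in product(range(1, q + 1), repeat=L * L):
        w = math.exp(-beta * S(flat))
        Z += w
        acc += f(flat) * w
    return acc / Z

# Hypothetical toy action for illustration: number of unequal adjacent
# pairs along the flattened state (not the full 2d Potts action).
toy_S = lambda s: sum(a != b for a, b in zip(s, s[1:]))
avg_one = thermal_average(lambda s: 1.0, toy_S, q=2, L=2, beta=1.0)
```

By construction Z normalises the average, so ⟨1⟩ = 1 exactly.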


The 2d q-state Potts model

The action, S(σ), is given by

S(σ) = Σ_{(x,y)} Σ_{(x′,y′) ∈ n(x,y)} (1 − δ_{σ(x,y), σ(x′,y′)})

where n(x, y) is the set of nearest neighbours of site (x, y).

Z is the constant that normalises the thermodynamical expectation value: ⟨1⟩ = 1.

Each spin interacts directly just with its four nearest neighbours (like our Laplace discretisation).

A simple algorithm to generate a Markov process can be constructed using the Metropolis algorithm , which obeys detailed balance for fixed-point probability generated by S .
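A direct transcription of the action (a sketch in Python, with periodic boundaries assumed): summing over every site and all four of its neighbours counts each unequal link twice, matching the double sum above.

```python
def potts_action(sigma, L):
    """Double-counted Potts action: for every site, add 1 for each of its
    four nearest neighbours (periodic boundaries) holding a different value.
    Each unequal link therefore contributes 2 to S."""
    S = 0
    for x in range(L):
        for y in range(L):
            for (xn, yn) in (((x + 1) % L, y), ((x - 1) % L, y),
                             (x, (y + 1) % L), (x, (y - 1) % L)):
                if sigma[x][y] != sigma[xn][yn]:
                    S += 1
    return S
```

For example, a uniform configuration has S = 0, while a 2×2 checkerboard has every neighbour unequal at all four sites, giving S = 16.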


The Metropolis algorithm

Given a desired fixed-point distribution π, we can build a Markov process following a simple two-step recipe:

Metropolis algorithm

For a system initially in state χ:

1. Propose a new state of the system χ′ in a reversible way: P(χ′ | χ) = P(χ | χ′)

2. Compute r = min(1, π(χ′)/π(χ)). Accept χ′ as the next entry in the chain with probability r.

Important: if the proposal is rejected, copy χ as the next entry in the chain.
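The two-step recipe, sketched in Python for a continuous one-dimensional target (the unnormalised Gaussian is a hypothetical example; only the ratio π(χ′)/π(χ) appears, so π need not be normalised):

```python
import math
import random

def metropolis_chain(pi, x0, n_steps, step=1.0):
    """Metropolis recipe: a symmetric (hence reversible) uniform proposal,
    accepted with probability r = min(1, pi(x') / pi(x)); on rejection the
    current state is copied into the chain."""
    chain = [x0]
    x = x0
    for _ in range(n_steps):
        x_new = x + random.uniform(-step, step)   # P(x'|x) = P(x|x')
        if random.random() < min(1.0, pi(x_new) / pi(x)):
            x = x_new                             # accept
        chain.append(x)                           # rejected -> copy x
    return chain

# Hypothetical target: unnormalised standard normal density.
random.seed(3)
chain = metropolis_chain(lambda x: math.exp(-0.5 * x * x), 0.0, 200_000)
mean = sum(chain) / len(chain)
```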


Metropolis

The Metropolis algorithm has a lot of design freedom - many possible reversible proposal steps can be built.

Close to equilibrium, large changes are usually rejected while small changes are usually accepted, but small changes mean the next state in the chain is closely correlated with the current one.

Rule-of-thumb: aim for acceptance rate of about 50%. Not always possible to achieve this, particularly for a system with a discrete set of states (like the Potts model).

In many systems with a large number of degrees of freedom, it is better to change just a few variables in each update

The system takes a random walk through the state space, so monitor autocorrelations.
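A sketch of the normalised autocorrelation estimate used to monitor such a chain (the AR(1) series below is a hypothetical stand-in for Monte Carlo measurements):

```python
import random

def autocorrelation(series, lag):
    """Normalised autocorrelation C(t) = cov(G_k, G_{k+t}) / var(G_k)."""
    n = len(series)
    m = sum(series) / n
    var = sum((g - m) ** 2 for g in series) / n
    cov = sum((series[k] - m) * (series[k + lag] - m)
              for k in range(n - lag)) / (n - lag)
    return cov / var

# Hypothetical stand-in for Monte Carlo measurements: an AR(1) series
# with coefficient 0.9, so successive entries are strongly correlated.
random.seed(4)
series = [0.0]
for _ in range(20_000):
    series.append(0.9 * series[-1] + random.gauss(0.0, 1.0))
```

A slow decay of C(t) with t signals that many updates are needed between effectively independent measurements.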


Metropolis and the Potts model

Proposing a change to all variables at once is almost always rejected (in equilibrium).

Computing r means measuring the change in the action after the proposal.

Local interaction means change in action can be determined in O ( 1 ) operations. Non-local interactions are more difficult.

For an example single-site proposal:

∆S = S_new − S_old = 6 − 4 = 2

Accept with probability e^(−2β).
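A sketch of the O(1) update for the (double-counted) Potts action, assuming periodic boundaries: only the four links touching the changed site are re-examined.

```python
import math
import random

def local_delta_S(sigma, L, x, y, new_val):
    """dS for sigma[x][y] -> new_val in the double-counted Potts action:
    only the four links touching (x, y) change, each counted twice in the
    full double sum, so dS is found in O(1) operations."""
    old_val = sigma[x][y]
    dS = 0
    for (xn, yn) in (((x + 1) % L, y), ((x - 1) % L, y),
                     (x, (y + 1) % L), (x, (y - 1) % L)):
        dS += 2 * (int(new_val != sigma[xn][yn]) - int(old_val != sigma[xn][yn]))
    return dS

def metropolis_site_update(sigma, L, q, beta, x, y):
    """One local Metropolis step: uniform proposal at a single site,
    accepted with probability min(1, exp(-beta * dS))."""
    new_val = random.randint(1, q)
    if random.random() < math.exp(-beta * local_delta_S(sigma, L, x, y, new_val)):
        sigma[x][y] = new_val
```

Sweeping (x, y) over the whole grid and repeating gives the full Markov chain for the Potts model.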

