Licia Verde (Penn)

advertisement

What you need to know about large scale structure


Licia Verde

University of Pennsylvania www.physics.upenn.edu/~lverde

Outline

1) Motivation and basics

Large Scale Structure probes

(spherical cows)

2) Real world effects

3) Measuring P(k) & (Statistics)

(less spherical cows)

The standard cosmological model

96% of the Universe is missing!!!


Major questions:

Questions that can be addressed exclusively by looking up at the sky

1) What created the primordial perturbations?

2) What makes the Universe accelerate?


These questions may not be unrelated

CMB is great and told us a lot, but large scale structures are still useful:

Check the consistency of the model

If this test is passed, combine the two to reduce degeneracies

We will concentrate on dark energy and inflation

On blackboard:

Power spectrum (for DM) definitions

Gaussian random fields

Linear perturbations growth

Transfer function
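As a concrete handle on the "Gaussian random fields" and "power spectrum" items above, here is a minimal sketch (not from the lecture; grid size, box size and spectral index are illustrative assumptions) of drawing a 1D Gaussian random field with a power-law power spectrum by coloring white noise in Fourier space:

```python
import numpy as np

def gaussian_random_field(npix=1024, boxsize=1000.0, A=1.0, n=0.96, seed=0):
    """Draw a 1D Gaussian random field whose power spectrum is P(k) = A k^n
    (shape only; the overall normalization convention is left arbitrary)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(npix)              # white noise, unit variance
    k = 2 * np.pi * np.fft.rfftfreq(npix, d=boxsize / npix)
    pk = np.zeros_like(k)
    pk[1:] = A * k[1:] ** n                        # leave out the k = 0 (mean) mode
    field_k = np.fft.rfft(white) * np.sqrt(pk)     # color the noise by sqrt(P(k))
    return np.fft.irfft(field_k, n=npix)

delta = gaussian_random_field()
```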

Primordial power spectrum: P(k) = A k^n, plotted as ln P(k) vs ln k. n is the slope and A the amplitude of the power law (convention dependent).

Primordial power spectrum with running: P(k) = A k^n(k), plotted as ln P(k) vs ln k, with slope n(k) and running α = dn/dlnk. A and n are convention dependent.
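A hedged sketch of the spectrum just described, in the common convention where P(k) = A (k/k0)^(n + (α/2) ln(k/k0)) so that dn/dlnk = α (the pivot k0 and the parameter values are assumptions for illustration):

```python
import numpy as np

def primordial_pk(k, A=1.0, n=0.96, run=0.0, k0=0.05):
    """Power-law primordial spectrum with running alpha = dn/dlnk = run."""
    lnk = np.log(k / k0)
    return A * (k / k0) ** (n + 0.5 * run * lnk)

k = np.logspace(-4, 0, 5)          # k in the same units as the pivot k0
print(primordial_pk(k, run=-0.02))
```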

CONSTRAINTS ON

NEUTRINO MASS


Neutrino mass

Spergel et al ‘07


CMB+SDSS LRG

Tegmark et al '07

Σm_ν < 0.9 eV (95% CL)

CDM density: constraints from WMAP II, WMAP + high-ℓ experiments, SDSS main, 2dFGRS, and SDSS LRG (figure).

Flatness: constraints from SN Ia (Riess et al. '04), WMAP II + H0, WMAP II, and 2dFGRS ('02) (figure).


From Spergel et al. '07

How about dark energy?

Planck scale: the observed vacuum energy is roughly 120 orders of magnitude below the naive Planck-scale expectation

(At the EW scale it's only 56 orders of magnitude)

If it dominated earlier, structures would not have formed


And it’s moving fast

What’s going on?

Non-exhaustive list of possibilities:

We just got lucky

"Landscape": there are many other vacuum energies out there with more reasonable values

It is a slowly varying dynamical component (quintessence)

Einstein was wrong (we still do not understand gravity)

Quintessence

Equation of state parameter w = p/ρ; w = -1 is the cosmological constant. What other options should we consider?

clustering?

Couplings?

If dark energy properties are time dependent, so are other basic physical parameters

Varying fine structure constant α

Oklo Natural reactor:

1.8 billion yr ago there was a natural water-moderated fission reactor in Gabon.


Isotopic abundances constrain the 149Sm neutron capture cross section and thus α


Dark energy: constraints from 2dFGRS, an H0 prior, WMAP II, and SN; with DE clustering (figure).

Why so weak dark energy constraints from CMB?

The limitation of the CMB in constraining dark energy is that it probes a single epoch, z ≈ 1090.

We need to look at the expansion history

(I.e. at least two snapshots of the Universe)

What if one could see the peaks pattern also at lower redshifts?

Baryonic Acoustic Oscillations

Evolution of a single perturbation,

Imagine a superposition

For those of you who think in Real space

Courtesy of D. Eisenstein

If baryons are ~1/6 of the dark matter, these baryonic oscillations should leave some imprint on the dark matter distribution

For those of you who think in Fourier space

Matter-radiation equality


Acoustic horizon at last scattering

Data from Tegmark et al 2006


DR5 from Percival et al 2006

Robust and insensitive to many systematics
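To make the "acoustic horizon" quantitative, here is a hedged sketch (the fiducial parameter values are assumptions, not taken from the slides) of the comoving sound horizon at last scattering, the standard ruler behind the BAO feature:

```python
import numpy as np
from scipy.integrate import quad

c = 299792.458                 # speed of light, km/s
h = 0.70
H0 = 100.0 * h                 # km/s/Mpc
om_m, om_b = 0.27, 0.045
om_g = 2.47e-5 / h**2          # photon density parameter
om_r = 4.15e-5 / h**2          # photons + massless neutrinos
om_de = 1.0 - om_m - om_r      # flat universe assumed

def H(z):
    return H0 * np.sqrt(om_m*(1+z)**3 + om_r*(1+z)**4 + om_de)

def c_s(z):
    R = 0.75 * (om_b / om_g) / (1.0 + z)   # baryon-to-photon momentum density
    return c / np.sqrt(3.0 * (1.0 + R))

z_dec = 1090.0
r_s, _ = quad(lambda z: c_s(z) / H(z), z_dec, np.inf)
print(f"comoving sound horizon r_s ~ {r_s:.0f} Mpc")
```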

THE SYMPTOMS

Or OBSERVATIONAL EFFECTS of DARK ENERGY

Recession velocity vs brightness of standard candles: d_L(z)

CMB acoustic peaks: D_A to last scattering

D_A to z_survey

LSS: perturbation amplitude today, to be compared with the CMB

Perturbation amplitude at z_survey

Galaxy clusters number counts
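A hedged sketch (flat universe and constant w assumed, with illustrative fiducial values) of the first two "symptoms" in the list above, the distances d_L(z) and D_A(z), showing where dark energy enters:

```python
import numpy as np
from scipy.integrate import quad

c, H0, om_m = 299792.458, 70.0, 0.27      # km/s, km/s/Mpc (assumed fiducials)

def H(z, w=-1.0):
    return H0 * np.sqrt(om_m*(1+z)**3 + (1 - om_m)*(1+z)**(3*(1+w)))

def comoving_distance(z, w=-1.0):         # Mpc, flat universe
    return quad(lambda zp: c / H(zp, w), 0.0, z)[0]

def D_A(z, w=-1.0):
    return comoving_distance(z, w) / (1.0 + z)

def d_L(z, w=-1.0):
    return comoving_distance(z, w) * (1.0 + z)

for w in (-1.0, -0.8):                    # cosmological constant vs w = -0.8
    print(w, round(d_L(1.0, w)), round(D_A(1090.0, w)))
```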

Galaxy clusters are rare events:

P(M,z) ∝ exp(-δ_c² / 2σ(M,z)²)

In here there is the growth of structure: σ(M,z) scales with the growth factor.

Beware of systematics! “What’s the mass of that cluster?”

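A minimal sketch of the exponential sensitivity quoted above (a Press-Schechter-like suppression is assumed for illustration): small changes in σ(M,z), i.e. in the growth, change the abundance of rare clusters dramatically.

```python
import numpy as np

delta_c = 1.686                       # linear collapse threshold
for sigma in (0.9, 0.8, 0.7):         # sigma(M, z) of a massive cluster (illustrative)
    print(sigma, np.exp(-delta_c**2 / (2.0 * sigma**2)))
```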

V(φ)

Inflation

H ~ const

Solves cosmological problems (Horizon, flatness).

Cosmological perturbations arise from quantum fluctuations, evolve classically.

Guth (1981), Linde (1982), Albrecht & Steinhardt (1982), Sato (1981), Mukhanov &

Chibisov (1981), Hawking (1982), Guth & Pi (1982), Starobinsky (1982), J. Bardeen,

P.J. Steinhardt, M. Turner (1983), Mukhanov et al. (1992), Parker (1969), Birrell and

Davies (1982)

Flatness problem

Horizon problem

Structure Problem

Seeing (indirectly) z>>1100

Information about the shape of the inflaton potential is enclosed in the shape and amplitude of the primordial power spectrum of the perturbations.

Information about the energy scale of inflation (the height of the potential) can be obtained by the addition of B modes polarization amplitude.

In general the observational constraint N_efold > 50 requires the potential to be flat (not every scalar field can be the inflaton). But detailed measurements of the shape of the power spectrum can rule different potentials in or out.

But the spacing of the fluctuations (their power as a function of scale) depends on how fast they exited the horizon (H), which in turn depends on the inflaton potential.

The shape of the primordial power spectrum encloses information on the shape of the inflaton potential!

Specific models critically tested: constraints in the (n_s, r) plane, with and without running (dn_s/dlnk = 0), compared with the predictions of V(φ) ~ φ^p models (p = 2, 4) and the Harrison-Zel'dovich (HZ) point, for 50 and 60 e-foldings (p fixed with N_e varying, and p varying with N_e fixed) (figure).
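For reference, the standard slow-roll predictions for these monomial potentials, n_s ≈ 1 - (p+2)/(2N) and r ≈ 4p/N, can be tabulated directly (a sketch, not taken from the slides):

```python
def ns_r(p, N):
    """Slow-roll (n_s, r) for V(phi) ~ phi^p after N e-foldings."""
    return 1.0 - (p + 2.0) / (2.0 * N), 4.0 * p / N

for p in (2, 4):
    for N in (50, 60):
        ns, r = ns_r(p, N)
        print(f"p={p}, N={N}: n_s={ns:.3f}, r={r:.3f}")
```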

Possible probes of large scale structure

Galaxy surveys

Clusters surveys (SZ, thermal and Kinetic)

Lyman alpha surveys

Weak lensing surveys (***)

HI 21 cm surveys (far future)

Weak lensing (cosmic shear)


Very near future:

Atacama Cosmology Telescope

(& South Pole Telescope, & Planck)

High resolution map of the CMB

Use the CMB as a background light to “illuminate” the growth of foreground cosmological structures

Thermal Sunyaev-Zeldovich

Kinetic SZ

Coma Cluster: electron temperature T_e ≈ 10^8 K

CMB gravitational lensing

Summary

Large-scale structure (LSS), in combination with the CMB, can be used to test the consistency of the model (LCDM) and, if that holds, to better constrain cosmology.

2 problems, dark energy and inflation, can be addressed exclusively by looking up at the sky.

In the future expect an avalanche of LSS data (and acronyms)

So far we have seen the basic theory behind LSS

Next time: real world effects

Redshift space distortions

Great walls

Fingers-of-God

In linear theory : enhancement of P(k) along the line of sight

Kaiser (1987)

P(k) → P(k) (1 + 2f/3 + f²/5)
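A tiny sketch of that angle-averaged Kaiser boost (for biased tracers f is replaced by β = f/b; the Ω_m and bias values are illustrative):

```python
def kaiser_boost(omega_m, b=1.0):
    """Angle-averaged linear redshift-space boost, 1 + 2*beta/3 + beta**2/5."""
    beta = omega_m**0.6 / b
    return 1.0 + 2.0 * beta / 3.0 + beta**2 / 5.0

print(kaiser_boost(0.27))          # unbiased tracers (b = 1)
print(kaiser_boost(0.27, b=1.3))   # e.g. galaxies with an assumed bias b = 1.3
```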

Redshift-space distortions (Kaiser 1987): z_obs = z_true + δv/c, with δv ∝ Ω_m^0.6 δρ/ρ = Ω_m^0.6 b⁻¹ δn/n (b is the bias). In the linear regime, spherical shells appear squashed along the line of sight (σ–π plane, Fourier space); in the non-linear regime one gets great walls and Fingers-of-God (figure).

What’s bias?


Measured for 2dFGRS (Verde et al. 2002)

“If tortured sufficiently, data will confess to almost anything”

Fred Menger

Treat your data with respect

(Licia Verde)

Interpretation:

CMBFAST or CAMB to get P(k)

Likelihood analysis

Bayes Theorem:

P(α_i | Data) = P(α_i) P(Data | α_i) / P(Data)

The posterior P(α_i | Data) is what you really want; P(α_i) is the prior (which you should not forget) and P(Data | α_i) is the likelihood.

Likelihood: Gaussian vs non-gaussian

What is the probability distribution of your data?

Examples: C_ℓ, a_ℓm, δ, etc.

Gaussian likelihood:

L = 1 / √((2π)^n det C) × exp[ -½ Δx^T C⁻¹ Δx ],  with Δx = (data - theory)

If data uncorrelated… much simpler
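A minimal sketch of evaluating the Gaussian likelihood above, done through the log-likelihood for numerical stability (the function and variable names are illustrative):

```python
import numpy as np

def gaussian_loglike(data, theory, cov):
    """ln L for a multi-variate Gaussian with covariance cov (assumed positive definite)."""
    dx = data - theory
    _, logdet = np.linalg.slogdet(cov)
    chi2 = dx @ np.linalg.solve(cov, dx)
    return -0.5 * (chi2 + logdet + len(dx) * np.log(2.0 * np.pi))

# If the data are uncorrelated, cov is diagonal and chi2 reduces to
# sum((data - theory)**2 / sigma**2), which is indeed much simpler.
```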

Central Limit Theorem → the distribution will converge to a Gaussian

Best fit parameters → maximize the likelihood

Likelihood analysis

Error bars and Confidence Levels

Why errors?

The estimate α̂_i scatters around the true value α_i (cosmic variance + noise; ignore approximations, mistakes etc.). Confidence levels: 1σ, 2σ, 3σ correspond to 68.3%, 95.4%, 99.73%. Joint or marginalized?

Errors

From: “Numerical recipes” Ch. 15

Errors

-2 Δln L = Δχ²

From: "Numerical recipes" Ch. 15

If likelihood is Gaussian and Covariance is constant

Example: for multi-variate Gaussian

Errors

There is a BIG difference between Δχ² and reduced χ²

(the identification -2 Δln L = Δχ² holds only for a multi-variate Gaussian with constant covariance)

Statistical and systematic errors

Examples of statistical (random) errors: cosmic variance, instrumental noise, roundoff (!)…..

Examples of systematic errors: approximations, incomplete modeling, numerics, ….

(introduce biases)

As you add more data points (or improve the S/N) the statistical errors become smaller but the systematic errors do not.

Errors

Operationally:

Grid-based approach

e.g., 2 params (Ω_m, σ_8): a 10 × 10 grid

What if you have (say) 7 parameters?

You've got a problem!
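A sketch of why the grid blows up (the toy χ² and parameter ranges are assumptions): with ~10 values per parameter the cost scales as 10 to the number of parameters.

```python
import numpy as np

om_grid = np.linspace(0.1, 0.5, 10)     # e.g. Omega_m
s8_grid = np.linspace(0.6, 1.0, 10)     # e.g. sigma_8

def chi2(om, s8):                       # stand-in for a real likelihood evaluation
    return ((om - 0.27) / 0.03)**2 + ((s8 - 0.80) / 0.05)**2

grid = np.array([[chi2(om, s8) for s8 in s8_grid] for om in om_grid])
i, j = np.unravel_index(grid.argmin(), grid.shape)
print("best fit:", om_grid[i], s8_grid[j], "after", grid.size, "evaluations")
# 10 x 10 = 100 evaluations here; with 7 parameters it would be 10^7.
```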

Markov Chain Monte Carlo (MCMC)

Random walk in parameter space

At each step, sample one point in parameter space

The density of sampled points ∝ the posterior distribution

FAST: before ~10^7 likelihood evaluations, now ~10^5

Marginalization is easy: just project the points and recompute their density

Adding external data sets is often very easy

Operationally:

1. Start at a random location in parameter space: α_old

2. Try to take a random step in parameter space: α_new

3a. If L_new ≥ L_old: accept (take and save) the step, "new" → "old", and go to 2.

3b. If L_new < L_old: draw a random number x uniform in [0,1].
If x > L_new/L_old: do not take the step (i.e. save "old" again) and go to 2.
If x ≤ L_new/L_old: do as in 3a.

KEEP GOING….
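The recipe above is the Metropolis algorithm; a minimal sketch (with a toy Gaussian likelihood and an assumed proposal width) looks like this:

```python
import numpy as np

def log_like(theta):
    return -0.5 * np.sum((theta / 0.1)**2)       # toy target, stands in for the real L

def metropolis(n_steps=20000, step=0.05, ndim=2, seed=0):
    rng = np.random.default_rng(seed)
    old = rng.uniform(-1.0, 1.0, ndim)           # 1. random starting location
    logL_old = log_like(old)
    chain = []
    for _ in range(n_steps):
        new = old + step * rng.standard_normal(ndim)   # 2. random step (proposal)
        logL_new = log_like(new)
        # 3a/3b: always accept an uphill step; accept a downhill one
        # with probability L_new / L_old
        if np.log(rng.uniform()) < logL_new - logL_old:
            old, logL_old = new, logL_new
        chain.append(old.copy())                 # save the current ("old") point
    return np.array(chain)

chain = metropolis()                             # density of points ~ posterior
```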

“Take a random step”

The probability distribution of the step is the

“ proposal distribution ”, which you should not change once the chain has started.

The proposal distribution (the step-size) is crucial to the MCMC efficiency.

Steps too small → poor mixing

Steps too big → poor acceptance rate

MCMC

When the MCMC has forgotten about the starting location and has well explored the parameter space you’re ready to do parameter estimation.

USE a MIXING and CONVERGENCE criterion!!!

Burn-in

(From Verde et al 2003)

Beware of DEGENERACIES: e.g. h and Ω_c are individually degenerate, while the physical density Ω_c h² is better constrained.

Reparameterization, e.g., Kosowsky et al. 2002

Once you have the MCMC output:

The density of points in parameter space gives you the posterior distribution

To obtain the marginalized distribution, just project the points

To obtain confidence intervals, integrate the “likelihood” surface

-compute where e.g. 68.3% of points lie

To each point in parameter space sampled by the MCMC give a weight proportional to the number of times it was saved in the chain

To add to the analysis another dataset (that does not require extra parameters) renormalize the weight by the “likelihood” of the new data set.

No need to re-run cmbfast!

Warning: if the new data set is not consistent with the old one → nonsense
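A sketch of that reweighting step (importance sampling; new_loglike is a stand-in for the new data set's likelihood):

```python
import numpy as np

def reweight(chain, weights, new_loglike):
    """chain: (n, ndim) samples; weights: multiplicities saved by the MCMC."""
    logL = np.array([new_loglike(p) for p in chain])
    w = weights * np.exp(logL - logL.max())      # subtract max for numerical stability
    mean = np.average(chain, axis=0, weights=w)
    return w, mean

# This only works if the new data are broadly consistent with the old posterior;
# otherwise a handful of points carry all the weight and the result is nonsense.
```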

Thermal Sunyaev-Zeldovich effect

Our Tools

Expansion rate of the universe a(t):
ds² = -dt² + a²(t) [dr²/(1-kr²) + r² dΩ²]

Einstein equation:
(ȧ/a)² = H² = (8πG/3) ρ = (8πG/3) [ρ_m + ρ_DE]
H²(z) = (8πG/3) [ρ_m(0)(1+z)³ + C exp{-3 ∫ dln a [1 + w(z)]}]

Growth rate of density fluctuations:
g(z) = (δρ_m/ρ_m)/a  (it obeys a second-order differential equation)

Poisson equation:
∇²Φ(a) = 4πG a² δρ_m = 4πG ρ_m(0) g(a)
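A hedged sketch putting these tools together (the fiducial values and the constant-w choice are assumptions): H(z) for a general w(z) and the linear growth factor from its second-order equation, integrated in ln a:

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

H0, om_m = 70.0, 0.27                  # assumed fiducial values

def w(z):                              # dark energy equation of state; swap in any w(z)
    return -1.0

def de_density(a):                     # rho_DE(a)/rho_DE(0) = exp{3 int_a^1 dln a' [1 + w]}
    return np.exp(3.0 * quad(lambda lna: 1.0 + w(np.exp(-lna) - 1.0),
                             np.log(a), 0.0)[0])

def E(a):                              # H(a)/H0
    return np.sqrt(om_m * a**-3 + (1.0 - om_m) * de_density(a))

def growth_rhs(x, y):                  # x = ln a, y = [delta, d delta / d ln a]
    a = np.exp(x)
    dlnE = (np.log(E(1.001 * a)) - np.log(E(0.999 * a))) / 0.002   # d ln E / d ln a
    om_a = om_m * a**-3 / E(a)**2
    return [y[1], -(2.0 + dlnE) * y[1] + 1.5 * om_a * y[0]]

a_i = 1e-3                             # start deep in matter domination, delta ~ a
sol = solve_ivp(growth_rhs, [np.log(a_i), 0.0], [a_i, a_i], rtol=1e-6)
print("total linear growth since a_i:", sol.y[0, -1] / a_i)
```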
