Monte Carlo Integration
COS 323
Acknowledgment: Tom Funkhouser
Integration in 1D

\int_0^1 f(x)\,dx = ?

[Figure: f(x) over [0, 1]; slide courtesy of Peter Shirley]
We can approximate

\int_0^1 f(x)\,dx \approx \int_0^1 g(x)\,dx

[Figure: f(x) and a simpler approximating function g(x); slide courtesy of Peter Shirley]
Or we can average

\int_0^1 f(x)\,dx = E(f(x))

[Figure: f(x) and its mean value E(f(x)); slide courtesy of Peter Shirley]
Estimating the average

\int_0^1 f(x)\,dx \approx \frac{1}{N}\sum_{i=1}^{N} f(x_i)

[Figure: random samples x_1 ... x_N of f(x); slide courtesy of Peter Shirley]
Other Domains

\int_a^b f(x)\,dx \approx \frac{b-a}{N}\sum_{i=1}^{N} f(x_i)

[Figure: average value of f over [a, b]; slide courtesy of Peter Shirley]
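As a concrete illustration (mine, not from the slides), here is a minimal Python sketch of this estimator; the integrand x^2 is an arbitrary example whose integral over [0, 1] is known to be 1/3.

```python
import random

def mc_integrate(f, a, b, n):
    """Estimate the integral of f over [a, b] by averaging n samples."""
    total = 0.0
    for _ in range(n):
        x = random.uniform(a, b)   # uniform sample in [a, b]
        total += f(x)
    return (b - a) * total / n     # (b - a) times the average of f

# Example: the integral of x^2 over [0, 1] is exactly 1/3.
print(mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000))
```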
Benefits of Monte Carlo
• No “exponential explosion” in required number of samples as the dimension increases
• Resistance to badly-behaved functions
Variance

Var(f(x)) = \frac{1}{N}\sum_{i=1}^{N} \left[ f(x_i) - E(f(x)) \right]^2

[Figure: spread of samples x_1 ... x_N around E(f(x))]
Variance

Var(E(f(x))) = \frac{1}{N} Var(f(x))

Variance decreases as 1/N
Error decreases as 1/\sqrt{N}
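To make the 1/\sqrt{N} behavior concrete, this small experiment (my own addition) measures the RMS error of repeated estimates as N grows; quadrupling N should roughly halve the error.

```python
import math
import random

def mc_estimate(f, n):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(random.random()) for _ in range(n)) / n

f = lambda x: x * x                  # true integral over [0, 1] is 1/3
for n in (100, 400, 1600, 6400):
    errs = [mc_estimate(f, n) - 1.0 / 3.0 for _ in range(200)]
    rms = math.sqrt(sum(e * e for e in errs) / len(errs))
    print(n, rms)                    # RMS error shrinks about 2x per row
```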
Variance
• Problem: variance decreases as 1/N
– Increasing # samples removes noise slowly
Variance Reduction Techniques
• Problem: variance decreases with 1/N
– Increasing # samples removes noise slowly
• Variance reduction:
– Stratified sampling
– Importance sampling
Stratified Sampling
• Estimate subdomains separately

[Figure: per-stratum averages E_k(f(x)); image credit Arvo]
Stratified Sampling
• This is still unbiased:

F_N = \frac{1}{N}\sum_{i=1}^{N} f(x_i) = \frac{1}{N}\sum_{k=1}^{M} N_k F_k

[Figure: per-stratum averages E_k(f(x))]
Stratified Sampling
• Less overall variance if there is less variance in the subdomains:

Var(F_N) = \frac{1}{N^2}\sum_{k=1}^{M} N_k^2 \, Var(F_k)
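A minimal sketch of the idea, under my own simplifying assumptions (equal-width strata on [0, 1] and the same number of samples in each):

```python
import random

def stratified_estimate(f, m, per_stratum):
    """Split [0, 1] into m equal strata and average samples from each."""
    total = 0.0
    for k in range(m):
        for _ in range(per_stratum):
            x = (k + random.random()) / m   # uniform within stratum k
            total += f(x)
    return total / (m * per_stratum)

# Same total sample count as plain Monte Carlo, usually lower variance:
print(stratified_estimate(lambda x: x * x, 16, 4))
```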
Importance Sampling
• Put more samples where f(x) is bigger:

\int f(x)\,dx \approx \frac{1}{N}\sum_{i=1}^{N} Y_i, \qquad Y_i = \frac{f(x_i)}{p(x_i)}

[Figure: samples x_1 ... x_N drawn preferentially where f is large]
Importance Sampling
• This is still unbiased, for all N:

E(Y_i) = \int Y(x)\,p(x)\,dx = \int \frac{f(x)}{p(x)}\,p(x)\,dx = \int f(x)\,dx
Importance Sampling
• Zero variance if p(x) \propto f(x):

p(x) = c\,f(x), \qquad Y_i = \frac{f(x_i)}{p(x_i)} = \frac{1}{c}, \qquad Var(Y) = 0

• Less variance with better importance sampling
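Here is a minimal sketch (my own example) that also demonstrates the zero-variance case: with f(x) = 2x on [0, 1] and p chosen proportional to f, every Y_i equals exactly 1.

```python
import math
import random

def importance_estimate(f, sample_p, pdf_p, n):
    """Average Y_i = f(x_i) / p(x_i) with the x_i drawn from p."""
    total = 0.0
    for _ in range(n):
        x = sample_p()
        total += f(x) / pdf_p(x)
    return total / n

f = lambda x: 2.0 * x
est = importance_estimate(
    f,
    lambda: math.sqrt(random.random()),  # sample p(x) = 2x via CDF inversion
    lambda x: 2.0 * x,                   # pdf of p
    1000)
print(est)   # exactly 1.0, with zero variance
```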
Generating Random Points
• Uniform distribution:
– Use pseudorandom number generator

[Figure: flat probability density, equal to 1 on the unit interval]
Pseudorandom Numbers
• Deterministic, but have statistical properties resembling true random numbers
• Common approach: each successive pseudorandom number is a function of the previous one
• Linear congruential method: x_{n+1} = (a x_n + b) \mod c
– Choose constants carefully, e.g.
a = 1664525
b = 1013904223
c = 2^{32} – 1
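A direct transcription of this recurrence in Python (illustrative only; production generators use carefully tuned variants):

```python
class LCG:
    """Linear congruential generator: x_{n+1} = (a*x_n + b) mod c,
    with the example constants from the slide."""
    A, B, C = 1664525, 1013904223, 2**32 - 1

    def __init__(self, seed=1):
        self.x = seed

    def next_int(self):
        self.x = (self.A * self.x + self.B) % self.C
        return self.x

rng = LCG(seed=42)
print([rng.next_int() for _ in range(3)])   # deterministic sequence
```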
Pseudorandom Numbers
• To get floating-point numbers in [0..1), divide the integers by c + 1
• To get integers in the range [u..v], divide by (c+1)/(v–u+1), truncate, and add u
– Better statistics than using modulo (v–u+1)
– Only works if u and v are small compared to c
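Continuing the LCG sketch above (reusing its LCG class and rng), the two mappings look like this; the min() clamp is my own guard for the rare top-of-range case where the truncated quotient would land one past v:

```python
def next_float(rng):
    """Floating-point value in [0, 1): divide the integer by c + 1."""
    return rng.next_int() / (LCG.C + 1)

def next_in_range(rng, u, v):
    """Integer in [u, v]: divide by (c+1)/(v-u+1), truncate, add u."""
    bucket = (LCG.C + 1) // (v - u + 1)
    return min(v, rng.next_int() // bucket + u)  # clamp the top edge

print(next_float(rng))           # a value in [0, 1)
print(next_in_range(rng, 1, 6))  # e.g. a die roll
```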
Importance Sampling
• Specific probability distribution:
– Function inversion
– Rejection

[Figure: target density f(x)]
Importance Sampling
• “Inversion method”
– Integrate f(x): Cumulative Distribution Function
– Invert CDF, apply to uniform random variable

[Figure: density f(x) and its CDF \int f(x)\,dx rising to 1]
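A minimal sketch of the inversion method (my example density): for p(x) = 2x on [0, 1], the CDF is P(x) = x^2, and inverting it maps a uniform u to x = sqrt(u).

```python
import math
import random

def sample_linear():
    """Draw x with density p(x) = 2x on [0, 1] via CDF inversion."""
    u = random.random()      # uniform in [0, 1)
    return math.sqrt(u)      # invert the CDF P(x) = x^2

print([round(sample_linear(), 3) for _ in range(5)])
```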
Generating Random Points
• “Rejection method”
– Generate random (x, y) pairs, y between 0 and max(f(x))
– Keep only samples where y < f(x)
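A minimal sketch of rejection sampling (my example uses f(x) = x^2 on [0, 1], whose maximum is 1):

```python
import random

def rejection_sample(f, f_max, n):
    """Draw n samples on [0, 1] with density proportional to f."""
    samples = []
    while len(samples) < n:
        x = random.random()
        y = random.uniform(0.0, f_max)  # y between 0 and max(f)
        if y < f(x):                    # keep points under the curve
            samples.append(x)
    return samples

print(rejection_sample(lambda x: x * x, 1.0, 5))
```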
Monte Carlo in Computer Graphics
or, Solving Integral Equations for Fun and Profit
or, Ugly Equations, Pretty Pictures
Computer Graphics Pipeline

Modeling → Animation → Lighting and Reflectance → Rendering
Rendering Equation

L_o(x, \omega') = L_e(x, \omega') + \int_\Omega L_i(x, \omega)\, f_r(x, \omega, \omega')\, (\omega \cdot n)\, d\omega

[Figure: surface point x with normal n; incoming direction \omega from the light, outgoing direction \omega' toward the viewer]

[Kajiya 1986]
Rendering Equation

L_o(x, \omega') = L_e(x, \omega') + \int_\Omega L_i(x, \omega)\, f_r(x, \omega, \omega')\, (\omega \cdot n)\, d\omega

• This is an integral equation
• Hard to solve!
– Can’t solve this in closed form
– Simulate complex phenomena

[Image credits: Heinrich, Jensen]
Monte Carlo Integration

\int_0^1 f(x)\,dx \approx \frac{1}{N}\sum_{i=1}^{N} f(x_i)

[Figure: sampled f(x); image credit Shirley]
Monte Carlo Path Tracing
Estimate the integral for each pixel by random sampling
Monte Carlo Global Illumination
• Rendering = integration
– Antialiasing
– Soft shadows
– Indirect illumination
– Caustics
Monte Carlo Global Illumination
• Rendering = integration
– Antialiasing (see the sketch after this slide):

L_P = \int_S L(x \to e)\, dA

– Soft shadows
– Indirect illumination
– Caustics

[Figure: eye looking through a pixel at surface point x]
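As a toy sketch of pixel antialiasing (my own example; radiance(u, v) is a hypothetical stand-in for tracing a ray through pixel coordinates (u, v)):

```python
import random

def estimate_pixel(radiance, n):
    """Average radiance over n jittered sample points inside one pixel."""
    total = 0.0
    for _ in range(n):
        u, v = random.random(), random.random()  # jitter within the pixel
        total += radiance(u, v)
    return total / n

# Toy "scene": an edge crosses the pixel; half of it is bright.
# The estimate converges to the true coverage, 0.5.
print(estimate_pixel(lambda u, v: 1.0 if u + v < 1.0 else 0.0, 256))
```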
Monte Carlo Global Illumination
• Rendering = integration
– Antialiasing
– Soft shadows:

L(x' \to e) = L_e(x' \to e) + \int_S f_r(x'' \to x' \to e)\, L(x'' \to x')\, V(x', x'')\, G(x', x'')\, dA

– Indirect illumination
– Caustics

[Figure: light at x'', surface point x', eye]
[Image: soft shadows example; credit Herf]
Monte Carlo Global Illumination
• Rendering = integration
– Antialiasing
– Soft shadows
– Indirect illumination (see the sketch after this slide):

L_o(x, \omega) = L_e(x, \omega) + \int_\Omega f_r(x, \omega', \omega)\, L_i(x, \omega')\, (\omega' \cdot n)\, d\omega'

– Caustics

[Figure: surface point x gathering light over the hemisphere of directions \omega', reflecting toward the eye along \omega]
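A minimal Monte Carlo estimator for one level of this integral, written in local coordinates with n = (0, 0, 1) (my own sketch; Le, Li, and fr are hypothetical stand-in functions a renderer would supply):

```python
import math
import random

def sample_hemisphere():
    """Uniform direction on the hemisphere around n = (0,0,1); pdf = 1/(2*pi)."""
    u1, u2 = random.random(), random.random()
    z = u1                               # cos(theta) is uniform on [0, 1]
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z)

def estimate_Lo(Le, Li, fr, n):
    """One-level Monte Carlo estimate of the reflection integral at a point."""
    pdf = 1.0 / (2.0 * math.pi)
    total = 0.0
    for _ in range(n):
        w = sample_hemisphere()
        cos_theta = w[2]                 # (w . n) with n = (0, 0, 1)
        total += fr(w) * Li(w) * cos_theta / pdf
    return Le + total / n

# Sanity check: constant Li = 1 with a diffuse BRDF fr = 1/pi integrates to 1.
print(estimate_Lo(0.0, lambda w: 1.0, lambda w: 1.0 / math.pi, 10_000))
```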
[Image: indirect illumination example; credit Debevec]
Monte Carlo Global Illumination
• Rendering = integration
– Antialiasing
– Soft shadows
– Indirect illumination
– Caustics: light focused by a specular surface onto a diffuse surface

L_o(x, \omega) = L_e(x, \omega) + \int_\Omega f_r(x, \omega', \omega)\, L_i(x, \omega')\, (\omega' \cdot n)\, d\omega'

[Figure: light reflecting off a specular surface onto a diffuse surface point x, seen by the eye]
[Image: caustics example; credit Jensen]
Challenge
• Rendering integrals are difficult to evaluate
– Multiple dimensions
– Discontinuities
• Partial occluders
• Highlights
• Caustics

L(x' \to e) = L_e(x' \to e) + \int_S f_r(x'' \to x' \to e)\, L(x'' \to x')\, V(x', x'')\, G(x', x'')\, dA

[Image credits: Drettakis, Jensen]
Monte Carlo Path Tracing

[Image: big diffuse light source, 20 minutes; credit Jensen]
[Image: 1000 paths/pixel; credit Jensen]
Monte Carlo Path Tracing
• Drawback: can be noisy unless lots of paths are simulated
• 40 paths per pixel: [image; credit Lawrence]
• 1200 paths per pixel: [image; credit Lawrence]
Reducing Variance
• Observation: some paths are more important (carry more energy) than others
– For example, shiny surfaces reflect more light in the ideal “mirror” direction
• Idea: put more samples where f(x) is bigger
Importance Sampling
• Idea: put more samples where f(x) is bigger:

\int_0^1 f(x)\,dx \approx \frac{1}{N}\sum_{i=1}^{N} Y_i, \qquad Y_i = \frac{f(x_i)}{p(x_i)}
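In rendering, a standard concrete instance of this idea (not spelled out on the slide) is cosine-weighted hemisphere sampling for diffuse surfaces: the cos(theta) factor and the pdf cancel, so for constant incoming light the estimator below has zero variance, unlike the uniform-hemisphere version.

```python
import math
import random

def sample_cosine():
    """Cosine-weighted hemisphere direction; pdf = cos(theta) / pi."""
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)                     # sin(theta)
    phi = 2.0 * math.pi * u2
    z = math.sqrt(max(0.0, 1.0 - u1))     # cos(theta)
    return (r * math.cos(phi), r * math.sin(phi), z)

def estimate_diffuse(Li, n):
    """Reflection integral for a diffuse BRDF fr = 1/pi with cosine sampling.
    The cos(theta) and the pdf cancel, so each term is simply Li(w)."""
    return sum(Li(sample_cosine()) for _ in range(n)) / n

# With constant incoming light, every sample contributes the same value,
# so this importance-sampled estimator has zero variance.
print(estimate_diffuse(lambda w: 1.0, 1000))
```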
Effect of Importance Sampling
• Less noise at a given number of samples

[Images: uniform random sampling vs. importance sampling]

• Equivalently, need to simulate fewer paths for some desired limit of noise