Computational Photography

Image-Based Lighting
© Eirik Holmøyvik
adapted by Klaus Mueller, added some slides from Jian Huang, UTK
Slides from Alexei Efros
and Paul Debevec
cs129: Computational Photography
James Hays, Brown, Spring 2011
Inserting Synthetic Objects
Why does this look so bad?
• Wrong orientation
• Wrong lighting
• No shadows
Environment Maps
Simple solution for shiny objects
• Models complex lighting as a panoramic image
• i.e. amount of radiance coming in from each direction
• A plenoptic function!!!
Environment Mapping
projector function converts
reflection vector (x, y, z) to
texture image (u, v)
Reflected ray: r=2(n·v)n-v
viewer
n
v
r
reflective surface
environment
texture image
Texture is transferred in the direction of the reflected ray
from the environment map onto the object
What is in the map?
What approximations are made?
The map should contain a view of the world with the
point of interest on the object as the Center of
Projection
• We can’t store a separate map for each point, so one map is
used with the COP at the center of the object
• Introduces distortions in the reflection, but we usually don’t
notice
• Distortions are minimized for a small object in a large room
The object will not reflect itself!
Environment Maps
The environment map may take various forms:
• Cubic mapping
• Spherical mapping
• other
Describes the shape of the surface on which the map
“resides”
Determines how the map is generated and how it is
indexed
Cubic Mapping
The map resides on the surfaces of a cube around the
object
• Typically, align the faces of the cube with the coordinate axes
To generate the map:
• For each face of the cube, render the world from the center of
the object with the cube face as the image plane
– Rendering can be arbitrarily complex (it’s off-line)
• Or, take 6 photos of a real environment with a camera in the
object’s position
To use the map:
• Index the R ray into the correct cube face
• Compute texture coordinates
Cubic Map Example
Calculating the Reflection Vector
Normal vector of the surface : N
Incident Ray : I
Reflection Ray: R
N,I,R all normalized
R = I -2 N ( N . I )
The texture coordinate is based
on the reflection vector
Assuming the origin of the
vector is always in the center
of the cube environment map
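The reflection formula above can be sketched in a few lines of Python. This is a minimal illustration (function name and vectors are ours, not from the slides); note it uses the R = I − 2N(N·I) convention with I pointing toward the surface, whereas the earlier slide's r = 2(n·v)n − v uses v pointing toward the viewer:

```python
import numpy as np

def reflect(incident, normal):
    """Reflect an incident ray about a unit surface normal.

    Implements R = I - 2 N (N . I) from the slide; both vectors
    are assumed to be normalized, with I pointing toward the surface.
    """
    incident = np.asarray(incident, dtype=float)
    normal = np.asarray(normal, dtype=float)
    return incident - 2.0 * normal * np.dot(normal, incident)

# A ray travelling straight down onto an upward-facing surface
# bounces straight back up.
print(reflect([0.0, 0.0, -1.0], [0.0, 0.0, 1.0]))  # [0. 0. 1.]
```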
Indexing Cubic Maps
Assume you have R and the cube’s
faces are aligned with the coordinate
axes, and have texture coordinates
in [0,1]x[0,1]
• How do you decide which face to use?
• Use the reflection vector coordinate with
the largest magnitude
(0.3, 0.2, 0.8) → face in +z direction
Indexing Cubic Maps
How do you decide which texture coordinates to use?
• Divide by the coordinate with the largest magnitude
• Now ranging [-1,1]
• Remapped to a value between 0 and 1.
(0.3, 0.2, 0.8) → ((0.3/0.8 + 1)*0.5, (0.2/0.8 + 1)*0.5) = (0.6875, 0.625)
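The face selection and remapping steps above can be sketched in Python. This is a minimal illustration (function name is ours); production cube-map conventions additionally flip u or v per face, which is omitted here:

```python
import numpy as np

def cube_face_uv(r):
    """Map a reflection vector to (face, u, v) on a cube map.

    Face selection uses the coordinate of largest magnitude; the
    remaining two coordinates are divided by that magnitude and
    remapped from [-1, 1] to [0, 1], as in the slides' example.
    """
    r = np.asarray(r, dtype=float)
    axis = int(np.argmax(np.abs(r)))           # 0 = x, 1 = y, 2 = z
    face = ('+' if r[axis] > 0 else '-') + 'xyz'[axis]
    u_axis, v_axis = [a for a in (0, 1, 2) if a != axis]
    u = (r[u_axis] / abs(r[axis]) + 1.0) * 0.5
    v = (r[v_axis] / abs(r[axis]) + 1.0) * 0.5
    return face, u, v

print(cube_face_uv((0.3, 0.2, 0.8)))  # ('+z', 0.6875, 0.625)
```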
A Sphere Map
• A mapping between the reflection vector and a circular
texture
• Prepare an image/texture in which the environment is
mapped onto a sphere
Sphere Mapping
To generate the map:
• Take a photograph of a shiny sphere
• Map a cubic environment map onto a sphere
• For synthetic scenes, use ray tracing
Sphere Map
Compute the reflection vector at the surface of the object
Find the corresponding texture on the sphere map
Use the texture to color the surface of the object
Indexing Sphere Maps
Given the reflection vector R = (Rx, Ry, Rz):

u = Rx/m + 1/2,  v = Ry/m + 1/2
where m = 2·√(Rx² + Ry² + (Rz + 1)²)

• (u, v) on the spherical map, 0 ≤ u, v ≤ 1
Indexing Sphere Maps
The normal vector is the sum of the reflection vector and the eye vector:

N = (Rx, Ry, Rz + 1)

Normalizing the vector gives

n = (Rx, Ry, Rz + 1) / √(Rx² + Ry² + (Rz + 1)²)

If the normal is on a sphere of radius 1, its x, y coordinates are also a location on the sphere map.
Finally, convert them to the range [0, 1]:

u = Rx/m + 1/2,  v = Ry/m + 1/2
where m = 2·√(Rx² + Ry² + (Rz + 1)²)
Non-linear Mapping
Problems:
• Highly non-uniform sampling
• Highly non-linear mapping
Linear interpolation of texture coordinates picks up the wrong
texture pixels
• Do per-pixel sampling or use high resolution polygons
• Can only view from one direction
Correct
Linear
Example
What about real scenes?
From Flight of the Navigator
What about real scenes?
from Terminator 2
Real environment maps
We can use photographs to capture environment maps
How do we deal with light sources? Sun, lights, etc.?
• They are much, much brighter than the rest of the
environment
Use High Dynamic Range photography, of course!
Several ways to acquire environment maps:
• Stitching mosaics
• Fisheye lens
• Mirrored Balls
Stitching HDR mosaics
http://www.gregdowning.com/HDRI/stitched/
Scanning Panoramic
Cameras
Pros:
very high res (10K x 7K+)
Full sphere in one scan – no stitching
Good dynamic range, some are HDR
Issues:
More expensive
Scans take a while
Companies: Panoscan, Sphereon
Fisheye Images
Mirrored Sphere
Sources of Mirrored Balls
• 2-inch chrome balls ~ $20 ea.
  McMaster-Carr Supply Company
  www.mcmaster.com
• 6-12 inch large gazing balls
  Baker's Lawn Ornaments
  www.bakerslawnorn.com
• Hollow Spheres, 2in – 4in
  Dube Juggling Equipment
  www.dube.com
• FAQ on www.debevec.org/HDRShop/
Calibrating Mirrored Sphere
Measured values 0.34 vs. 0.58 ⇒ reflectivity 59%
Real-World HDR Lighting Environments
Funston
Beach
Eucalyptus
Grove
Uffizi
Gallery
Grace
Cathedral
Lighting Environments from the Light Probe Image Gallery:
http://www.debevec.org/Probes/
Acquiring the Light Probe
Assembling the Light Probe
Not just shiny…
We have captured a true radiance
map
We can treat it as an extended (e.g.
spherical) light source
Can use Global Illumination to
simulate light transport in the scene
• So, all objects (not just shiny) can
be lit
• What's the limitation?
Illumination Results
Comparison: Radiance
map versus single image
Putting it all together
Synthetic Objects
+
Real light!
CG Objects Illuminated by a Traditional CG
Light Source
Paul Debevec. A Tutorial on Image-Based Lighting. IEEE
Computer Graphics and Applications, Jan/Feb 2002.
Rendering with Natural Light
SIGGRAPH 98 Electronic Theater
RNL Environment
mapped onto
interior of large
cube
MOVIE!
Illuminating a Small Scene
We can now illuminate
synthetic objects with real light.
How do we add synthetic objects to a
real scene?
Real Scene Example
Goal: place synthetic objects on table
Light Probe / Calibration Grid
Modeling the Scene
light-based model
real scene
The Light-Based Room Model
Modeling the Scene
light-based model
local scene
synthetic objects
real scene
The Lighting Computation
distant scene (light-based, unknown BRDF)
synthetic objects
(known BRDF)
local scene
(estimated BRDF)
Rendering into the Scene
Background Plate
Rendering into the Scene
Objects and Local Scene matched to Scene
Differential Rendering
Local scene w/o objects, illuminated by model
Differential Rendering (2)
Difference in local scene = (local scene with objects) − (local scene without objects)
Differential Rendering
Add differences to images -- Final Result
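The differential-rendering composite can be sketched as follows. This is a minimal illustration with hypothetical image arrays in [0, 1]; it adds the object-induced difference everywhere, whereas a full implementation would take object pixels directly from the render via a mask:

```python
import numpy as np

def differential_render(background, with_objects, without_objects):
    """Composite synthetic objects into a background photograph.

    Following the differential-rendering idea on the slides: the
    change the synthetic objects cause in the rendered local scene
    (with_objects - without_objects) is added to the real background
    plate, so shadows and interreflections carry over even when the
    local scene's estimated BRDF is imperfect.
    """
    diff = with_objects.astype(float) - without_objects.astype(float)
    return np.clip(background.astype(float) + diff, 0.0, 1.0)

# A pixel darkened by 0.1 in the render (a shadow) darkens the
# photograph by the same amount.
bg = np.array([[0.5]])
print(differential_render(bg, np.array([[0.6]]), np.array([[0.7]])))
```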
High-Dynamic Range
Photography
• Standard digital images typically represent
only a small fraction of the dynamic range—
the ratio between the dimmest and brightest
regions accurately represented—present in
most real-world lighting environments.
Bright lights saturate the pixel colour.
A high dynamic range image is needed, which can
be produced by combining images taken at
different shutter speeds.
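The combination step can be sketched as below. This is a minimal sketch, not Debevec and Malik's full response-curve recovery: it assumes pixel values are already linear in [0, 1], divides each exposure by its shutter time to get a radiance estimate, and averages with a hat weight that downweights pixels near 0 (noisy) or 1 (saturated):

```python
import numpy as np

def combine_exposures(images, shutter_times):
    """Merge differently exposed linear images into a radiance map."""
    images = [np.asarray(im, dtype=float) for im in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, shutter_times):
        w = 1.0 - np.abs(2.0 * im - 1.0)   # hat weight, peak at 0.5
        num += w * im / t                  # weighted radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)

# Two exposures of the same scene: a 1 s frame reading 0.5 and a
# 0.5 s frame reading 0.25 both estimate the same radiance, 0.5.
hdr = combine_exposures(
    [np.full((2, 2), 0.5), np.full((2, 2), 0.25)], [1.0, 0.5])
print(hdr)  # all entries 0.5
```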
High-Dynamic Range
Photography
HDR images
IMAGE-BASED LIGHTING IN FIAT LUX
Paul Debevec, Tim Hawkins, Westley Sarokin, H. P. Duiker, Christine Cheng, Tal Garfinkel, Jenny Huang
SIGGRAPH 99 Electronic Theater
HDR Image Series
2 sec
1/4 sec
1/30 sec
1/250 sec
1/2000 sec
1/8000 sec
Step 1: Panorama
Assembled Panorama
Light Probe Images
Capturing a Spatially-Varying
Lighting Environment
The Movie