
Introduction to Image-Based
Rendering
Jian Huang, CS 594, Spring 2002
Part of this slide set references slides used at Stanford
by Prof. Pat Hanrahan and Philipp Slusallek.
What is Image-Based Rendering?
• Not just using images on geometry (akin to
texture mapping)
• Built on the desire to bypass the manual
modeling phase
• Use images (of some kind) for modeling
and rendering
Types of IBR
• Panoramas/Image Mosaics/Light Fields,
Lumigraph
– QuicktimeVR
– Concentric Mosaics, light fields/lumigraph
• View Interpolation
• Model based methods
– Depth Images
– Geometry from Images
Plenoptic Function
• Plenoptic function (7D) depicts light rays
passing through:
– center of camera at any location (x, y, z)
– at any viewing angle (θ, φ)
– for every wavelength (λ)
– for any time (t)
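Written out, these parameters make the plenoptic function a single 7D map from a viewing ray to radiance:

```latex
% Radiance observed from position (x, y, z), looking in direction
% (theta, phi), at wavelength lambda and time t:
P = P(x,\, y,\, z,\, \theta,\, \phi,\, \lambda,\, t)
```

The reductions on the following slides fix or drop some of these seven arguments.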
Limiting Dimensions of
Plenoptic Functions
• Plenoptic modeling (5D) : ignore time &
wavelength
• Lumigraph/Lightfield (4D) : constrain the
scene (or the camera view) to a bounding
box
• 2D Panorama : fix the viewpoint; only the
viewing direction and camera zoom can be
changed
Limiting Dimensions of
Plenoptic Functions
• Concentric mosaics (3D) : index all input
image rays in 3 parameters: radius, rotation
angle and vertical elevation
Apple’s QuickTime VR
[Figures: outward-looking and inward-looking panoramas]
Mars Pathfinder Panorama
Creating a Cylindrical Panorama
From www.quicktimevr.apple.com
Commercial Products
– QuickTime VR, LivePicture, IBM (Panoramix)
– VideoBrush
– IPIX (PhotoBubbles), Be Here, etc.
Light Field and Lumigraph
• Take advantage of empty space to
– reduce the plenoptic function to 4D: in free space,
radiance is constant along a ray, so one dimension is redundant
Capturing Lightfields
• Need a 2D set of (2D) images
• Choices:
– Camera motion: human vs. computer
– Constraints on camera motion
– Coverage and sampling uniformity
– Aliasing
Lightfield Parameterization
• Point / angle
• Two points on a sphere
• Points on two planes
• Original images and camera positions
Two Plane Parametrization
[Figure: a ray indexed by its intersections with the camera plane (uv) and the focal plane (st) near the object]
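A minimal sketch of how a viewing ray is indexed under this parameterization: intersect it with the camera plane to get (u, v) and with the focal plane to get (s, t), then look up the stored radiance. The names, plane placements, and nearest-neighbor lookup below are illustrative assumptions; a real renderer interpolates quadrilinearly.

```python
import numpy as np

def ray_to_uvst(origin, direction, z_uv=0.0, z_st=1.0):
    """Intersect a ray with the camera plane (z = z_uv) and the focal
    plane (z = z_st) to get its 4D light-field coordinate (u, v, s, t).
    The plane placement is an assumption made for this sketch."""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    t_uv = (z_uv - o[2]) / d[2]          # ray parameter at the camera plane
    t_st = (z_st - o[2]) / d[2]          # ray parameter at the focal plane
    u, v = (o + t_uv * d)[:2]
    s, t = (o + t_st * d)[:2]
    return u, v, s, t

def sample_lightfield(lf, u, v, s, t):
    """Nearest-neighbor lookup into a discretized light field
    lf[u, v, s, t] -> RGB.  Assumes the plane coordinates are already
    expressed in grid-index units."""
    nu, nv, ns, nt = lf.shape[:4]
    iu = int(np.clip(round(u), 0, nu - 1))
    iv = int(np.clip(round(v), 0, nv - 1))
    js = int(np.clip(round(s), 0, ns - 1))
    jt = int(np.clip(round(t), 0, nt - 1))
    return lf[iu, iv, js, jt]
```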
Reconstruction
Light Field
• Key Ideas:
– 4D function
• Valid outside convex hull
– 2D slice = image
• Insert to create
• Extract to display
– Inward or outward
Lightfields
• Advantages:
– Simpler computation vs. traditional CG
– Cost independent of scene complexity
– Cost independent of material properties and
other optical effects
• Disadvantages:
– Static geometry
– Fixed lighting
– High storage cost
Concentric Mosaics
• Concentric mosaics : easy to capture, small
in storage size
Concentric Mosaics
• A set of manifold mosaics constructed from
slit images taken by cameras rotating on
concentric circles
Sample Images
Rendering a Novel View
Construction of Concentric
Mosaics
• Synthetic scenes
– uniform angular direction sampling
– square root sampling in radial direction
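A tiny sketch of that sampling rule, assuming a maximum camera-circle radius: angles are spaced uniformly, while radii follow a square-root law so the plane is covered evenly (function and variable names are made up for illustration).

```python
import numpy as np

def concentric_mosaic_sampling(num_circles, num_angles, max_radius):
    """Uniform angular sampling, square-root sampling in the radial
    direction, as suggested for synthetic concentric mosaics."""
    radii = max_radius * np.sqrt(np.arange(1, num_circles + 1) / num_circles)
    angles = np.linspace(0.0, 2.0 * np.pi, num_angles, endpoint=False)
    return radii, angles
```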
Construction of Concentric
Mosaics (2)
• Real scenes
[Figures: capture setups, from a bulky, costly rig to a cheaper, easier single-camera setup]
Construction of Concentric
Mosaics (3)
• Problems with single camera:
– Limited horizontal fov
– Non-uniform spatial horizontal resolution
• Resampling necessary
– bilinear is better than point sampling
• Video sequence can be compressed with VQ
and entropy encoding (25x)
• Compressed stream gives 20 fps on a PII 300
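Since the slide above notes that bilinear resampling beats point sampling, here is a minimal, self-contained bilinear lookup to make the difference concrete (the function name is illustrative, not from the original system):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Bilinearly interpolate image img (H x W or H x W x C) at the
    real-valued position (x, y).  Point sampling would simply return
    img[round(y), round(x)] and show stair-step artifacts."""
    h, w = img.shape[:2]
    x = float(np.clip(x, 0, w - 1))
    y = float(np.clip(y, 0, h - 1))
    x0 = min(int(x), w - 2)
    y0 = min(int(y), h - 2)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0]     + fx * img[y0, x0 + 1]
    bot = (1 - fx) * img[y0 + 1, x0] + fx * img[y0 + 1, x0 + 1]
    return (1 - fy) * top + fy * bot
```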
Results
Results (2)
View Interpolation
• Sprites/Imposters with Depth
– Better image warping:
• Wider range of reuse
• Backward mapping only with a homography
– New mapping:
• Stored depth map
• Forward map depth map
(approximate geometry)
• Backward mapping of color
using depth information
[Figure: depth d in the source view warped to d' in the new view]
Mapping with Depth
• Forward Mapping:
– Holes and aliasing
[Figure: forward mapping from source image I1 with depth d1 into image I2]
Mapping with Depth
• Backward Mapping:
– What is d?
[Figure: backward mapping into target image I2; the required depth d2 is unknown]
Mapping with Depth
• Solution:
– Forward map depth
– Reconstruct approximate geometry
– Backward map color
[Figure: forward-mapped depth d2 in I2 used to backward-map color from I1]
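A hedged sketch of the three steps above for a single scanline. The disparity model (horizontal shift proportional to inverse depth) is a deliberately simplified stand-in for the full 3D warp, and the hole filling is the crudest possible; the point is only to show the forward-depth / backward-color order.

```python
import numpy as np

def warp_scanline(color_src, depth_src, baseline):
    """Forward-map the depth map, reconstruct approximate target depth,
    then backward-map color.  Assumes positive depths and a toy
    disparity model  shift = baseline / depth."""
    n = len(color_src)
    depth_tgt = np.full(n, np.inf)
    # 1) Forward map depth, keeping the nearest surface per target pixel.
    for x in range(n):
        xt = int(round(x + baseline / depth_src[x]))
        if 0 <= xt < n:
            depth_tgt[xt] = min(depth_tgt[xt], depth_src[x])
    # 2) Crude hole filling: reuse the left neighbor's depth
    #    (the "approximate geometry" of the slide).
    for x in range(1, n):
        if np.isinf(depth_tgt[x]):
            depth_tgt[x] = depth_tgt[x - 1]
    # 3) Backward map color using the reconstructed depth.
    color_tgt = np.zeros_like(color_src)
    for x in range(n):
        xs = int(round(x - baseline / depth_tgt[x]))
        if 0 <= xs < n:
            color_tgt[x] = color_src[xs]
    return color_tgt, depth_tgt
```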
Layered Depth Images
• Idea:
– Handle disocclusion
– Store invisible geometry in depth images
• Data structure:
– Per pixel list of depth samples
– Per depth sample:
• RGBA
• Z
• Encoded: Normal direction, distance
– Pack into cache lines
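A minimal sketch of that data structure, leaving out the cache-line packing; field names and the normal encoding are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DepthSample:
    """One sample along a pixel's ray."""
    rgba: Tuple[int, int, int, int]   # color and coverage
    z: float                          # depth of the sample
    normal_code: int                  # quantized normal direction (assumed encoding)
    distance: float                   # distance term stored with the normal

@dataclass
class LDIPixel:
    samples: List[DepthSample] = field(default_factory=list)

class LayeredDepthImage:
    """Per-pixel lists of depth samples, kept sorted front to back so
    surfaces hidden in the reference view remain available after a warp."""
    def __init__(self, width: int, height: int):
        self.width, self.height = width, height
        self.pixels = [[LDIPixel() for _ in range(width)] for _ in range(height)]

    def insert(self, x: int, y: int, sample: DepthSample) -> None:
        lst = self.pixels[y][x].samples
        lst.append(sample)
        lst.sort(key=lambda s: s.z)
```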
Layered Depth Images
• Computation:
– Incremental warping computation
– Implicit ordering information
• Process in up to four quadrants
– Splat size computation
• Table lookup
• Fixed splat templates
– Clipping of LDIs
Layered Depth Images
Model-based IBR
• Basic Idea: Sparse set of images [Debevec’97, Pulli’96]
• Overview:
– Approximate Modeling
• Photogrammetric modeling
• Triangulated depth maps
– View-dependent Texture Mapping
• Weighting
• Hardware accelerated rendering
– Model-based Stereo
• Details from stereo algorithms
Hybrid Approach
Courtesy: P. Debevec
Approximate Modeling
• User-assisted photogrammetry [Debevec ‘97]:
– Based on “Structure from Motion”
– Use constraints in architectural models
• Approach:
– Simple block model
– Constraints reduce DOF
– Matching based on lines
– Non-linear optimization (see the sketch after this list)
– Initial camera positions
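This is not Debevec's formulation, but a toy sketch of what the non-linear optimization step looks like: a handful of block dimensions are refined so that the reprojected corners match user-marked image points. The camera model, parameterization, and correspondence order are all assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def project(points_3d, cam_pos, focal):
    """Toy pinhole camera at cam_pos looking along +z (assumed setup)."""
    rel = points_3d - cam_pos
    return focal * rel[:, :2] / rel[:, 2:3]

def block_corners(width, height, depth):
    """Eight corners of an axis-aligned block; the small number of free
    parameters is what the architectural constraints buy us."""
    return np.array([[x, y, z] for x in (0.0, width)
                               for y in (0.0, height)
                               for z in (0.0, depth)])

def refine_block(observed_2d, cam_pos, focal, init=(1.0, 1.0, 1.0)):
    """Refine (width, height, depth) so reprojected corners match the
    marked 2D points (observed_2d must follow block_corners' order)."""
    def residuals(params):
        proj = project(block_corners(*params), cam_pos, focal)
        return (proj - observed_2d).ravel()
    return least_squares(residuals, init).x
```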
Approximate Modeling: Block
Model
Courtesy: P. Debevec
Approximate Modeling
• Active Light:
– Calibrated camera and projector
– Plane of light and triangulation (see the sketch below)
– Registration of multiple views
– Triangulation of point cloud
[Figure: calibrated camera and projector arrangement]
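A small sketch of the triangulation itself: every lit pixel defines a ray from the calibrated camera, and intersecting that ray with the known plane of projected light gives a 3D point (names and the plane representation n · p + d = 0 are illustrative).

```python
import numpy as np

def triangulate_on_light_plane(cam_center, ray_dir, plane_normal, plane_d):
    """Intersect the camera ray  p(t) = cam_center + t * ray_dir  with
    the projector's plane of light  n . p + d = 0."""
    n = np.asarray(plane_normal, dtype=float)
    o = np.asarray(cam_center, dtype=float)
    r = np.asarray(ray_dir, dtype=float)
    denom = n.dot(r)
    if abs(denom) < 1e-9:
        return None              # ray (nearly) parallel to the light plane
    t = -(n.dot(o) + plane_d) / denom
    return o + t * r
```

Repeating this over a sweep of light planes and registered views yields the point cloud that is then triangulated into a mesh.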
Approximate Modeling
Projecting Images
• Technique:
– Known camera positions
– Projective texture mapping (see the sketch below)
– Shadow buffer for occlusions
– Blending between textures
– Filling in
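A minimal sketch of the projective texture lookup behind this technique: transform a surface point by the photograph's view-projection matrix, do the perspective divide, and map the result to [0, 1] texture coordinates. The 4x4 clip-space convention is an assumption of this sketch.

```python
import numpy as np

def projective_tex_coords(point_world, view_proj):
    """Texture coordinate at which a 3D point was observed by the
    source photograph (view_proj = projection @ view, 4x4)."""
    p = view_proj @ np.append(point_world, 1.0)   # to homogeneous clip space
    if p[3] <= 0.0:
        return None                               # behind the source camera
    ndc = p[:3] / p[3]                            # perspective divide
    u = 0.5 * (ndc[0] + 1.0)                      # map [-1, 1] to [0, 1]
    v = 0.5 * (ndc[1] + 1.0)
    return u, v
```

The shadow-buffer test mentioned above then compares the point's depth in this source view against a depth map rendered from it, so occluded points do not receive that photograph's texture.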
Visibility
Projecting Images
Projecting Images
• Simple compositing vs. blending
• Blending:
– Select “best” image
• closeness to viewing direction
• distance to border
• sampling density [Pulli]
• deletion of features
• Some computation in HW
– Smooth transition between pixels and frames
• Alpha blending, soft Z-buffer, confidence
Projecting Images
• Closeness to viewing direction:
– Triangulate the Hemisphere
• Delaunay triangulation of viewing directions
• Regular triangulation: label each vertex with best view
– Interpolate based on barycentric coordinates
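A small sketch of the interpolation step: once the current viewing direction falls inside one triangle of stored viewing directions (projected to 2D here for simplicity), its barycentric coordinates are used directly as the blending weights of the three corresponding photographs.

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c);
    returned in the order (w_a, w_b, w_c) and summing to 1."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    w_b = (d11 * d20 - d01 * d21) / denom
    w_c = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - w_b - w_c, w_b, w_c])
```

Each view's weight fades to zero as the viewpoint crosses into a neighboring triangle, which gives the smooth transitions between pixels and frames mentioned earlier.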
Blending of Textures
Model-Based Stereo
• Problems with conventional stereo algorithms:
– Correspondences are difficult to find
– Large disparities
– Foreshortening, projective distortions
• Approach:
– Use approximate geometry to reproject one image
– Compute disparity of warped image
• Significantly smaller disparity and foreshortening
Model-Based Stereo
Model-Based Stereo
Model-Based Stereo
Demos
• Download