CSCE 641: Computer Graphics
Image-based Rendering
Jinxiang Chai
Image-based Modeling: Challenging Scenes
Why do image-based modeling methods produce poor results on these scenes?
- lack of discernible features
- occlusions
- difficult to capture high-level structure
- illumination changes
- specular surfaces
Some Solutions
- Use priors to constrain the solution space
- Aid modeling process with minimal user interaction
- Combine image-based modeling with other modeling approaches
Videos
Morphable face (click here)
Image-based tree modeling (click here)
Video trace (click here)
3D modeling by ortho-images (Click here)
Spectrum of IBMR
[Diagram: the spectrum of image-based modeling and rendering. At one end, image-based rendering works from images alone (panorama, light field, images + depth). In the middle, image-based modeling recovers camera + geometry and geometry + images from images, user input, and range scans. At the other end sits the traditional model: geometry + materials, kinematics, dynamics, etc.]
Outline
Layered depth image/Post-Rendering 3D Warping
View-dependent texture mapping
Light field rendering [Levoy and Hanrahan SIGGRAPH 96]
Layered depth image [Shade et al., SIGGRAPH 98]
Layered depth image: (r,g,b,d)
- image with depths
- rays with colors and depths
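As a concrete illustration (not from the slides), here is a minimal Python sketch of the LDI idea: each pixel stores a depth-ordered list of (r, g, b, d) samples instead of a single color. The class and method names are made up for this sketch.

from dataclasses import dataclass, field

@dataclass
class Sample:
    r: float
    g: float
    b: float
    d: float  # depth along the pixel's ray

@dataclass
class LayeredDepthImage:
    width: int
    height: int
    layers: list = field(default_factory=list)  # one list of samples per pixel

    def __post_init__(self):
        self.layers = [[[] for _ in range(self.width)] for _ in range(self.height)]

    def add_sample(self, x, y, sample):
        """Insert a sample, keeping the pixel's list sorted front to back."""
        pixel = self.layers[y][x]
        pixel.append(sample)
        pixel.sort(key=lambda s: s.d)

# usage: two surfaces seen through the same pixel
ldi = LayeredDepthImage(width=4, height=4)
ldi.add_sample(1, 2, Sample(1.0, 0.0, 0.0, d=5.0))   # far red surface
ldi.add_sample(1, 2, Sample(0.0, 1.0, 0.0, d=2.0))   # near green surface
print([s.d for s in ldi.layers[2][1]])               # [2.0, 5.0]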
Layered depth image [Shade et al., SIGGRAPH 98]
Rendering from layered depth image
How to deal with the occlusion/visibility problem? Depth comparison
- Incremental in X and Y
- Forward warping one pixel with depth
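A rough Python sketch of this rendering step, assuming per-pixel sample lists as above, pinhole intrinsics, and a known rigid transform from the LDI camera to the target camera. It illustrates forward warping plus the depth comparison, not the paper's incremental-warp ordering; all names are illustrative.

import numpy as np

def render_ldi(ldi_layers, K_ldi, K_dst, R, t, out_shape):
    """Forward-warp every layered sample and resolve visibility with a z-buffer.

    ldi_layers[y][x] is a list of (r, g, b, d) samples, d being depth along the
    LDI camera's optical axis. K_ldi, K_dst are 3x3 intrinsics; R, t map
    LDI-camera coordinates to target-camera coordinates."""
    H, W = out_shape
    out = np.zeros((H, W, 3))
    zbuf = np.full((H, W), np.inf)
    K_inv = np.linalg.inv(K_ldi)

    for y, row in enumerate(ldi_layers):
        for x, samples in enumerate(row):
            ray = K_inv @ np.array([x, y, 1.0])
            for r, g, b, d in samples:
                p = R @ (d * ray) + t                  # sample in target camera frame
                if p[2] <= 0:
                    continue                           # behind the target camera
                u, v, _ = (K_dst @ p) / p[2]
                u, v = int(round(u)), int(round(v))
                if 0 <= u < W and 0 <= v < H and p[2] < zbuf[v, u]:
                    zbuf[v, u] = p[2]                  # depth comparison handles occlusion
                    out[v, u] = (r, g, b)
    return out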
How to form LDIs
Synthetic world with known geometry and texture
- from multiple depth images
- modified ray tracer
Real images
- reconstruct geometry from multiple images (e.g., voxel coloring, stereo reconstruction)
- form LDIs using multiple images and reconstructed geometry
Kinect sensors
- record both image data and depth data
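A toy sketch of the "multiple images + reconstructed geometry" route, assuming each input view has already been expressed as colored 3D points in the LDI camera frame; the depth tolerance and all names are illustrative only.

import numpy as np

def build_ldi(views, K_ldi, ldi_shape, depth_tol=0.05):
    """Merge several views into per-pixel lists of (r, g, b, d) samples.

    views: list of (colors, points), where colors is (N, 3) RGB and points is
    (N, 3) positions in the LDI camera frame. Samples whose depths nearly
    coincide on a pixel are treated as the same surface layer."""
    H, W = ldi_shape
    layers = [[[] for _ in range(W)] for _ in range(H)]

    for colors, points in views:
        for color, p in zip(colors, points):
            if p[2] <= 0:
                continue
            u, v, _ = (K_ldi @ p) / p[2]          # project into the LDI image
            u, v = int(round(u)), int(round(v))
            if not (0 <= u < W and 0 <= v < H):
                continue
            pixel = layers[v][u]
            for s in pixel:                        # merge near-coincident depths
                if abs(s[3] - p[2]) < depth_tol:
                    break
            else:
                pixel.append((*color, p[2]))
                pixel.sort(key=lambda s: s[3])     # keep front-to-back order
    return layers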
Image-based Rendering Using Kinect Sensors
Capture both video/depth data using Kinect sensors
Use 3D warping to render an image from a novel viewpoint [e.g., Post-Rendering 3D Warping]
3D Warping
Render an image from a novel viewpoint by warping an RGB-D image.
The 3D warp can expose areas of the scene for which the reference frame has no information (shown here in black).
Image-based Rendering Using Kinect Sensors
Demo: click here
3D Warping
- A single warped frame will lack information about areas occluded in its reference frame.
- Multiple reference frames can be composited to produce a more complete derived frame (see the sketch below).
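A compact sketch of that compositing step, assuming each reference frame has already been 3D-warped into the novel view (e.g., by a warp like the LDI one above) and that pixels the warp never reached carry depth +inf; names and conventions are illustrative.

import numpy as np

def composite_warped(warped_frames):
    """Combine several reference frames already warped into the same novel view.

    warped_frames: list of (color, depth) pairs, each (H, W, 3) and (H, W),
    where depth is +inf at pixels the warp left empty. Per pixel, the sample
    closest to the novel camera wins; pixels no frame reaches stay black."""
    colors = np.stack([c for c, _ in warped_frames])          # (N, H, W, 3)
    depths = np.stack([d for _, d in warped_frames])          # (N, H, W)
    best = np.argmin(depths, axis=0)                          # index of closest sample
    out = np.take_along_axis(colors, best[None, :, :, None], axis=0)[0]
    holes = np.isinf(np.min(depths, axis=0))                  # no information anywhere
    out[holes] = 0.0
    return out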
How to extend to surface representation?
Outline
Layered depth image/Post-Rendering 3D Warping
View-dependent texture mapping
Light field rendering
View-dependent surface representation
From multiple input images:
- reconstruct the geometry
- view-dependent texture
View-dependent texture mapping [Debevec et al. 98]
- Virtual camera at point D
- Subject's 3D proxy points q0, q1, q2, q3
- Textures from camera Ci mapped onto triangle faces
- Blending weights at vertex V
- Angle θi is used to compute the weight values: wi = exp(-θi²/(2σ²))
[Figure: virtual camera D, input cameras C0-C3, proxy vertices q0-q3, and vertex V]
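A small sketch of these blending weights, assuming θi is measured at the vertex between the direction to the virtual camera D and the direction to input camera Ci; σ and all names here are illustrative choices, not values from the paper.

import numpy as np

def view_blend_weights(vertex, virtual_cam, cams, sigma=np.radians(15)):
    """Per-camera weights wi = exp(-θi² / (2σ²)), normalized to sum to 1."""
    def unit(v):
        return v / np.linalg.norm(v)

    d = unit(virtual_cam - vertex)                    # direction to virtual camera D
    thetas = np.array([
        np.arccos(np.clip(np.dot(d, unit(c - vertex)), -1.0, 1.0)) for c in cams
    ])
    w = np.exp(-thetas**2 / (2.0 * sigma**2))
    return w / w.sum()

# usage: a vertex V seen by four cameras C0..C3 and a virtual camera D
V = np.array([0.0, 0.0, 0.0])
D = np.array([0.0, 1.0, 2.0])
C = [np.array([-2.0, 1.0, 2.0]), np.array([-0.5, 1.0, 2.0]),
     np.array([0.5, 1.0, 2.0]), np.array([2.0, 1.0, 2.0])]
print(view_blend_weights(V, D, C))  # cameras closest in angle to D get the most weight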
Videos: View-dependent Texture Mapping
Demo video
Can we render an image without any geometric information?
Outline
Layered depth image/Post-Rendering 3D Warping
View-dependent texture mapping
Light field rendering [Levoy and Hanrahan SIGGRAPH 96]
Light Field Rendering
Video demo: click here
Light Field Rendering
[Figure: light field capture and rendering, with a camera plane and an image plane]
Plenoptic Function
P(x,y,z,θ,φ,λ,t)
Can reconstruct every possible view, at every moment, from every position, at every wavelength
Contains every photograph, every movie, everything that anyone has ever seen! It completely captures our visual reality!
An image is a 2D sample of the plenoptic function!
Ray
Let’s not worry about time and color:
P(x,y,z,θ,φ)
5D
• 3D position
• 2D direction
How can we use this?
- Static lighting
- Static object
- No change in radiance
[Figure: camera viewing a static object under static lighting]
Ray Reuse
Infinite line
• Assume light is constant (vacuum)
• Non-dispersive medium
4D
• 2D direction
• 2D position
Slide by Rick Szeliski and Michael Cohen
Synthesizing novel views
Assume we capture every ray in 3D space!
Synthesizing novel views
Light field / Lumigraph
- Outside the convex space around the scene ("stuff"), space is empty
- The rays through that empty region form a 4D set
[Figure: scene ("stuff") enclosed in a convex region, empty space outside]
Light Field
How to represent rays?
How to capture rays?
How to use captured rays for rendering?
Light field - Organization
- 2D position + 2D direction: a ray given by a point s and a direction θ
- 2D position + 2D position: the 2-plane parameterization
- A ray is recorded by where it crosses two parallel planes: (u,v) on one plane and (s,t) on the other
[Figure: 2-plane parameterization with the (u,v) and (s,t) planes]
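A sketch of the 2-plane parameterization under an assumed convention (the u,v plane at z = 0 and the s,t plane at z = 1, both parallel to the x-y plane): a ray is recorded by its two plane intersections.

import numpy as np

def ray_to_uvst(origin, direction, z_uv=0.0, z_st=1.0):
    """Map a ray to two-plane coordinates (u, v, s, t).

    Assumed convention: the u,v plane is z = z_uv and the s,t plane is z = z_st.
    The ray must not be parallel to the planes."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    if abs(d[2]) < 1e-12:
        raise ValueError("ray is parallel to the parameterization planes")
    pu = o + ((z_uv - o[2]) / d[2]) * d      # intersection with the u,v plane
    ps = o + ((z_st - o[2]) / d[2]) * d      # intersection with the s,t plane
    return pu[0], pu[1], ps[0], ps[1]

# usage: a ray from (0.2, 0.1, -1) toward +z, slightly tilted
print(ray_to_uvst(origin=[0.2, 0.1, -1.0], direction=[0.1, 0.0, 1.0]))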
Light field - Organization
Hold u,v constant
Let s,t vary
What do we get? An image: the view captured from position (u,v) on the camera plane
[Figure: the light field / Lumigraph as a 2D array of such images]
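A tiny sketch of that observation, assuming the sampled light field is stored as a 5-axis array indexed [u, v, s, t, rgb]: fixing (u, v) slices out one ordinary image.

import numpy as np

# a toy sampled light field: 8x8 camera positions, 64x64-pixel images, RGB
U, V, S, T = 8, 8, 64, 64
lightfield = np.zeros((U, V, S, T, 3), dtype=np.float32)

# holding (u, v) constant and letting (s, t) vary gives one image:
# the photograph taken from camera-plane position (u, v)
u, v = 3, 5
image = lightfield[u, v]     # shape (64, 64, 3)
print(image.shape)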
Light field/lumigraph - Capture
Idea:
• Move camera carefully over the u,v plane
• Gantry
  > see Light Field paper
[Figure: camera on the u,v plane capturing the s,t plane]
Stanford multi-camera array
640 × 480 pixels × 30 fps × 128 cameras
synchronized timing
continuous streaming
flexible arrangement
Light field/lumigraph - rendering

For each output pixel
• determine s,t,u,v
• either
  • use closest discrete RGB
  • interpolate near values
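A hedged sketch of this per-pixel loop using the "closest discrete RGB" option, with the same toy two-plane convention as the ray-parameterization sketch above; grid-unit coordinates and all names are assumptions for illustration.

import numpy as np

def render_view_nearest(lf, eye, pixel_rays):
    """Render a novel view with nearest-ray lookup into a sampled light field.

    lf: (U, V, S, T, 3) array; assumed convention: u,v plane at z = 0, s,t plane
    at z = 1, plane coordinates in grid units. eye: (3,) novel camera centre;
    pixel_rays: (H, W, 3) ray direction for each output pixel."""
    U, V, S, T, _ = lf.shape
    H, W, _ = pixel_rays.shape
    out = np.zeros((H, W, 3), dtype=lf.dtype)
    eye = np.asarray(eye, float)
    for y in range(H):
        for x in range(W):
            d = pixel_rays[y, x]
            if abs(d[2]) < 1e-12:
                continue                                    # parallel to the planes
            u, v = (eye + (0.0 - eye[2]) / d[2] * d)[:2]    # hit on the u,v plane
            s, t = (eye + (1.0 - eye[2]) / d[2] * d)[:2]    # hit on the s,t plane
            # closest discrete sample: round each coordinate and clamp to the grid
            iu = min(max(int(round(u)), 0), U - 1)
            iv = min(max(int(round(v)), 0), V - 1)
            js = min(max(int(round(s)), 0), S - 1)
            jt = min(max(int(round(t)), 0), T - 1)
            out[y, x] = lf[iu, iv, js, jt]
    return out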
Light field/lumigraph - rendering
Nearest
• closest s
• closest u
• draw it
Blend 16 nearest
• quadrilinear interpolation
Ray interpolation
[Figure: nearest neighbor vs. linear interpolation in s-t vs. quadrilinear interpolation]
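A sketch of the quadrilinear option: the ray's continuous (u, v, s, t) coordinates fall between grid samples, and the 16 surrounding samples are blended with weights that are products of four 1D linear weights. The array layout matches the toy light field sketches above; names are illustrative.

import numpy as np
from itertools import product

def quadrilinear_lookup(lf, u, v, s, t):
    """Blend the 16 nearest light field samples around continuous (u, v, s, t).

    lf has shape (U, V, S, T, 3); coordinates are in grid units. Each corner's
    weight is the product of the four 1D linear interpolation weights."""
    coords = np.array([u, v, s, t])
    lo = np.floor(coords).astype(int)
    lo = np.minimum(np.maximum(lo, 0), np.array(lf.shape[:4]) - 2)  # clamp to grid
    frac = coords - lo

    color = np.zeros(3)
    for corner in product((0, 1), repeat=4):          # 16 surrounding samples
        corner = np.array(corner)
        weight = np.prod(np.where(corner == 1, frac, 1.0 - frac))
        color += weight * lf[tuple(lo + corner)]
    return color

# usage with a light field array like the toy one above (all zeros -> black)
# print(quadrilinear_lookup(lightfield, 3.4, 5.1, 20.7, 33.2))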
Light fields
Advantages:
• No geometry needed
• Simpler computation vs. traditional CG
• Cost independent of scene complexity
• Cost independent of material properties and other optical effects
Disadvantages:
• Static geometry
• Fixed lighting
• High storage cost