
Introduction to Image-Based Rendering
Lining Yang ([email protected])
11/18/2003
Part of this slide set is based on slides used at Stanford by Prof. Pat Hanrahan and Philipp Slusallek.

References:
- S. E. Chen, "QuickTime VR – An Image-Based Approach to Virtual Environment Navigation," Proc. SIGGRAPH '95, pp. 29-38, 1995
- S. Gortler, R. Grzeszczuk, R. Szeliski, and M. Cohen, "The Lumigraph," Proc. SIGGRAPH '96, pp. 43-54, 1996
- M. Levoy and P. Hanrahan, "Light Field Rendering," Proc. SIGGRAPH '96, 1996
- L. McMillan and G. Bishop, "Plenoptic Modeling: An Image-Based Rendering System," Proc. SIGGRAPH '95, pp. 39-46, 1995
- J. Shade, S. Gortler, L.-W. He, and R. Szeliski, "Layered Depth Images," Proc. SIGGRAPH '98, pp. 231-242, 1998
- H.-Y. Shum and L.-W. He, "Rendering With Concentric Mosaics," Proc. SIGGRAPH '99, pp. 299-306, 1999

Problem Description
- Rendering a complex synthetic scene takes too long to finish, so interactivity is impossible.
- Interactive visualization of extremely large scientific data is likewise out of reach.
- Image-Based Rendering (IBR) is used to accelerate such renderings.

Examples of Complex Rendering
- POV-Ray quarterly competition site, March-June 2001

Examples of Large Datasets
- LLNL ASCI quantum molecular simulation site

Image-Based Rendering (IBR)
- The models used in conventional polygon-based graphics have become too complex.
- IBR represents a complex 3D environment with a set of images taken from different (predefined) viewpoints.
- Images for new views are produced from this finite set of input images plus additional information, such as depth.
- The computational complexity is bounded by the image resolution rather than by the scene complexity.
Image-Based Rendering (IBR)
- Mark Levoy's 1997 SIGGRAPH talk

Overview of IBR Systems
- Plenoptic function
- QuickTime VR
- Light fields / Lumigraph
- Concentric mosaics
- Plenoptic modeling and layered depth images

Plenoptic Function
The plenoptic function (7D) describes the light ray passing through:
- the camera center at any location (x, y, z)
- at any viewing angle (θ, φ)
- for every wavelength (λ)
- at any time (t)

Limiting the Dimensions of the Plenoptic Function
- Plenoptic modeling (5D): ignore time and wavelength
- Lumigraph / light field (4D): constrain the scene (or the camera view) to a bounding box
- Panorama (2D): fix the viewpoint; only the viewing direction and camera zoom may change
- Concentric mosaics (3D): index all input image rays by three parameters: radius, rotation angle, and vertical elevation

QuickTime VR
- Uses environment maps: cylindrical, cubic, or spherical
- At a fixed point, sample all ray directions
- Users can look around in both the horizontal and vertical directions

Mars Pathfinder Panorama

Creating a Cylindrical Panorama (from www.quicktimevr.apple.com)

Commercial Products
- QuickTime VR, LivePicture, IBM (Panoramix), VideoBrush, IPIX (PhotoBubbles), Be Here, etc.

Panoramic Cameras
- Rotating cameras: Kodak Cirkut, Globuscope
- Stationary cameras: Be Here

QuickTime VR
- Advantages: uses an environment map; easy and efficient
- Disadvantages: cannot move away from the current viewpoint; no motion parallax

Light Field and Lumigraph
- Take advantage of empty space to reduce the plenoptic function to 4D
- The object (or the viewpoint) lies inside a convex hull
- Radiance does not change along a line unless the line is blocked

Light Field Parameterization
- Parameterize the radiance lines by their intersections with two planes.
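The two-plane (light-slab) parameterization can be sketched by intersecting a ray with the two planes. Placing the camera plane at z = 0 and the focal plane at z = 1 is an illustrative assumption (any two parallel planes work), and the function and variable names are hypothetical, not from the original slides:

```python
def slab_coords(origin, direction):
    """Light-slab coordinates L(u, v, s, t) of a ray.

    Assumes the camera plane (u, v) sits at z = 0 and the focal plane
    (s, t) at z = 1; the ray must not be parallel to the planes.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz == 0:
        raise ValueError("ray is parallel to the slab planes")
    t0 = (0.0 - oz) / dz                 # ray parameter at the camera plane
    t1 = (1.0 - oz) / dz                 # ray parameter at the focal plane
    u, v = ox + t0 * dx, oy + t0 * dy    # intersection with z = 0
    s, t = ox + t1 * dx, oy + t1 * dy    # intersection with z = 1
    return (u, v, s, t)
```

Rendering a new view then amounts to computing (u, v, s, t) for each image ray and looking up (or interpolating) the stored radiance, which is why the cost depends on image resolution rather than scene complexity.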
A Light Slab
- Figure: a ray L(u, v, s, t) parameterized by its intersections (u, v) and (s, t) with two planes

Two-Plane Parameterization
- Camera plane (u, v) and focal plane (s, t)

Object Reconstruction
- (u, v) and (s, t) are found by intersecting the image ray with the two planes.
- This can also be done via texture mapping: (x, y) to (u, v) or (s, t) is a projective mapping.

Capturing Light Fields
- Need a 2D set of (2D) images
- Choices:
  - camera motion: human vs. computer
  - constraints on camera motion: planar (easier to construct) vs. spherical (better coverage and sampling uniformity)

Light Field Gantry
- Designed by Marc Levoy et al.
- Applications: digitizing light fields, measuring BRDFs, range scanning

Light Field Key Ideas
- A 4D function, valid outside the convex hull
- A 2D slice is an image: insert slices to create the light field, extract slices to display it

Light Fields
- Advantages:
  - simpler computation than traditional computer graphics
  - cost independent of scene complexity
  - cost independent of material properties and other optical effects
- Disadvantages:
  - static geometry
  - fixed lighting
  - high storage cost

Concentric Mosaics
- Easy to capture and small in storage size
- A set of manifold mosaics constructed from slit images taken by cameras rotating on concentric circles

Sample Images

Rendering a Novel View

Construction of Concentric Mosaics
- Synthetic scenes: uniform sampling in the angular direction, square-root sampling in the radial direction
- Real scenes: a camera array is bulky and costly; a single rotating camera is cheaper and easier
- Problems with a single camera: limited horizontal field of view, non-uniform horizontal spatial resolution
- The video sequence can be compressed with VQ and entropy encoding (25x); the compressed stream renders at 20 fps on a Pentium II 300

Results

Image Warping
- McMillan's 5D plenoptic modeling system
- Render or capture reference views
- Create novel views using the reference
views' color and depth information with the warping equation.
- For opaque scenes, the location (depth) of the point reflecting the color is usually determined; for real imagery it is calculated using vision techniques.

Image Warping (Filling Holes)
- Disocclusion problem: objects occluded in the reference view can become visible in the new view
- Fill in holes from other viewpoints or images (William Mark et al.)

Layered Depth Images
- Different primitives according to depth values: image, image with depth, LDI, polygons

Layered Depth Images
- Idea: handle disocclusion by storing invisible geometry in depth images

Layered Depth Image Data Structure
- Per pixel: a list of depth samples
- Per depth sample: RGBA color, depth Z, and an encoded normal direction and distance

Layered Depth Images: Computation
- Implicit ordering information
- Incremental warping computation (start + x-increment): the LDI is broken into four regions around the epipolar point and warped in back-to-front order
- Splat size computation via table lookup
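The per-pixel data structure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class and field names are hypothetical, and the encoded normal direction and distance mentioned in the slides are omitted. Keeping each pixel's sample list sorted near-to-far lets a warp traverse it in reverse for back-to-front compositing.

```python
from dataclasses import dataclass, field

@dataclass
class DepthSample:
    """One sample along a pixel's line of sight (names are illustrative)."""
    rgba: tuple   # (r, g, b, a) color
    z: float      # depth along the reference-camera ray

@dataclass
class LDIPixel:
    samples: list = field(default_factory=list)

    def insert(self, sample):
        # Keep samples sorted near-to-far so back-to-front warping
        # can simply traverse the list in reverse.
        self.samples.append(sample)
        self.samples.sort(key=lambda s: s.z)

def make_ldi(width, height):
    """A layered depth image: a 2D grid of per-pixel sample lists."""
    return [[LDIPixel() for _ in range(width)] for _ in range(height)]
```

Storing several samples per pixel is what lets the LDI fill disocclusions: when the new viewpoint looks past the front surface, the deeper samples in the list supply the previously hidden color.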