Rendering Synthetic Objects into Real-World Scenes by Paul Debevec
SIGGRAPH 98 Conference
Presented by Justin N. Rogers for Advanced Computer Graphics, Spring 2002
Introduction

- Realistically adding synthetic objects to real-world scenes is difficult
- The interplay of light between the objects and their surroundings must be consistent:
  - Objects should cast shadows on the surroundings
  - Objects should appear in reflections
  - Objects should refract, focus, and emit light as real objects would
Introduction

- Current techniques:
  - manually model the light sources
  - photograph a reference object and use it as a guide for the lighting environment
- Problems with current techniques:
  - require considerable hand-refinement
  - hard to simulate the effects of indirect illumination from the environment
Introduction

- Related Work
  - Reflection mapping produces realistic results for mirror-like objects
    - Disadvantage: doesn't account for objects casting light or shadows on the environment
  - Geometric models of the environment local to the object can be used to compute shadows from various light sources
    - Disadvantage: requires complete knowledge of each light source in the scene
    - Disadvantage: doesn't account for diffuse reflection from the scene
Introduction

- Related Work
  - Recent developments have produced algorithms and software packages that realistically simulate lighting, including indirect lighting with diffuse and specular reflections
Recording light measurements

- Illuminate objects with actual samples of light from real scenes
  - Provides a unified and physically accurate alternative to manually replicating incident illumination
- Difficulties
  - Recording light in scenes is difficult due to the high dynamic range that usually exists, since light sources are usually concentrated
  - However, both the direct light from light sources and the indirect light from the environment are important parts of the illumination solution
Recording light measurements

- Conventional imaging equipment is used to derive radiance maps from scenes
  - Photographs taken over a range of f-stops are combined to record the full dynamic range of light and form a radiance map (see the sketch below)
    - The f-stop (f-number) refers to the maximum lens aperture
    - The f-stop also refers to the specific aperture selected for optimal brightness
- Synthetic objects are illuminated using the radiance maps
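A minimal sketch of how an exposure stack can be merged into a radiance map. This simplification assumes a linear camera response and known exposure times; the actual method (Debevec and Malik, SIGGRAPH 97) first recovers the camera's response curve. All names here are illustrative.

```python
import numpy as np

def assemble_radiance_map(images, exposure_times):
    """Merge differently exposed photographs into one HDR radiance map.

    images         -- stack of shape (N, H, W), pixel values in [0, 1]
    exposure_times -- N exposure times, one per photograph

    Assumes a linear camera response; the real pipeline recovers the
    response curve before this step.
    """
    images = np.asarray(images, dtype=np.float64)
    times = np.asarray(exposure_times, dtype=np.float64)

    # Hat-shaped weights: trust mid-range pixels, distrust pixels near
    # the noise floor (0) or saturation (1).
    weights = 1.0 - np.abs(2.0 * images - 1.0)

    # Per-exposure radiance estimate: pixel value divided by exposure time.
    estimates = images / times[:, None, None]

    # Weighted average over the exposure stack.
    return np.sum(weights * estimates, axis=0) / np.maximum(
        np.sum(weights, axis=0), 1e-8)
```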
An omnidirectional radiance map. This full dynamic range lighting environment was acquired by photographing a mirrored ball balanced on the cap of a pin sitting on a table. The three views of this image are adjusted to (a) +0 stops, (b) -3.5 stops, and (c) -7.0 stops to show that the full dynamic range of the scene has been captured without saturation.
Illuminating synthetic objects with real light. (Top row: a, b, c, d, e) With full dynamic range measurements of scene radiance from the previous slide. (Bottom row: f, g, h, i, j) With low dynamic range information from a single photograph of the ball. The right sides of images (h, i, j) have been brightened by a factor of six to allow qualitative comparison to (c, d, e). The high dynamic range measurements of scene radiance are necessary to produce proper lighting on the objects.
Synthetic objects lit by two different environments. (a) A collection of objects is illuminated using the radiance information from the previous radiance map. (b) The same objects are illuminated by radiance information obtained in an outdoor environment on an overcast day. The radiance maps used for illumination are displayed in the upper left-hand corner of the images.
Adding synthetic objects to scenes

- The scene is broken into three components: the distant scene, the local scene, and the synthetic objects
- Global illumination is used to simulate the interplay of light between the three components
  - light reflected back to the distant scene is ignored
Adding synthetic objects to scenes

- Distant Scene
  - Radiates light toward the local scene and synthetic objects, but light reflected back to it is ignored
- Local Scene
  - Contains the surfaces that will interact with the synthetic objects
  - Full geometry and reflectance properties must be known to ensure proper interactions
Adding synthetic objects to scenes

- Synthetic Objects
  - May consist of a variety of shapes and materials
  - Should be placed in the desired correspondence to the local scene
- After the three components are modeled and positioned, the global illumination software is used to produce renderings
Three Components of the General Method

- Distant Scene: light-based (no reflectance model); radiates light toward the other components
- Local Scene: estimated reflectance model
- Synthetic Objects: known reflectance model
Compositing objects into scene

- Constructing the light-based model with a light probe
  - The light-based model of the distant scene needs to appear correct near the synthetic objects
    - It is used to calculate the incident light that illuminates the synthetic objects
  - Obtaining a radiance map of the distant scene:
    - photograph a spherical, mirror-like object (the light probe) near the location of the synthetic object
    - the radiance measurements are mapped onto the geometry of the distant scene
Mapping from probe to scene model

- A correct mapping between coordinates on the ball and rays in the world requires recording the position of the ball relative to the camera, the size of the ball, and the camera parameters such as its location in the scene and focal length (see the sketch after this list)
- The data from a single ball image will display some artifacts:
  1. The camera will be visible in the reflection
  2. The ball interacts with the scene: the ball (and its support) can appear in reflections, cast shadows, and reflect light back onto surfaces
  3. The ball won't reflect the scene directly behind it, and will provide a poor sample of the nearby area
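A minimal sketch of the probe-to-world mapping, under strong simplifying assumptions: an orthographic camera looking down the -z axis at a unit mirrored ball centered at the origin. A real setup must fold in the recorded ball position, ball size, and camera location/focal length; the function name and coordinate conventions are illustrative.

```python
import numpy as np

def probe_direction(u, v):
    """Map normalized probe-image coordinates (u, v) in [-1, 1] to the
    world-space direction whose radiance that pixel recorded.

    Assumes an orthographic camera looking down -z at a unit mirrored
    ball centered at the origin.
    """
    r2 = u * u + v * v
    if r2 > 1.0:
        raise ValueError("(u, v) lies outside the ball's silhouette")

    # Surface normal of the sphere under the pixel (z from the sphere equation).
    normal = np.array([u, v, np.sqrt(1.0 - r2)])

    # Incoming view direction for a camera looking down -z.
    view = np.array([0.0, 0.0, -1.0])

    # Mirror reflection: d = v - 2 (v . n) n.
    return view - 2.0 * np.dot(view, normal) * normal
```

The center of the probe image reflects the camera itself, while pixels near the silhouette reflect the scene behind the ball at grazing angles, which is why that region is poorly sampled.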
Mapping from probe to scene model: Problems

- Careful positioning of the ball and camera minimizes these effects so that they don't have a dramatic impact on the final renderings
- If the artifacts are significant, the images can be altered (1) manually in image-editing software or (2) by combining images of the ball taken from different angles
Mapping from probe to scene model: Problems

- Combining two images of the ball taken 90° apart eliminates the camera's appearance and helps avoid poor sampling
Compositing objects into scene

- Creating final renderings (a compositing sketch follows this list)
  - A synthetic local scene model is created, and images of the scene are taken from the desired viewpoint
  - The global illumination software is run to render the synthetic objects, local scene, and distant scene from the desired viewpoint
  - Finally, the synthetic objects and local scene are composited onto the background image
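A minimal sketch of the final compositing step, assuming the rendering and the background plate are aligned float images and that a per-pixel matte covering the synthetic objects and local scene is available. The matte-based blend shown here is an illustrative simplification; the differential rendering refinement appears later.

```python
import numpy as np

def composite_over_background(background, rendering, matte):
    """Blend the rendering over the background plate using a matte.

    background -- photographed background plate, shape (H, W, 3)
    rendering  -- global illumination rendering from the same viewpoint
    matte      -- per-pixel weight in [0, 1], shape (H, W); 1 selects the
                  rendered synthetic objects and local scene
    """
    m = matte[..., None]  # broadcast the matte over the color channels
    return m * rendering + (1.0 - m) * background
```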
Using a light probe. (a) The background plate of a scene is taken. (b) A light probe records the incident radiance near the desired location of the synthetic objects. (c) A simplified light-based model of the distant scene is created. The objects on the table, which were not explicitly modeled, become projected onto the table. (d) Synthetic objects and a BRDF model of the local scene are added to the light-based model of the distant scene. A global illumination solution of the model is computed, with light coming from the distant scene and interacting with the local scene and synthetic objects. Light reflected back to the distant scene is ignored. Finally, the results of this rendering are composited into the background plate from (a) to achieve the final result.
Rendering with a Combined Probe Image. The full dynamic range environment map shown above was assembled from two light probe images taken 90° apart. As a result, the only visible artifact is a small amount of the probe support visible on the floor. The map is shown at -4.5, 0, and +4.5 stops.
Rendering with a Combined Probe Image. The rendering was produced using the lighting information from the previous slide. It exhibits diffuse and specular reflections, shadows from different sources of light, reflections, and caustics.
Improving quality with differential rendering

- The method presented thus far requires that the geometry and material properties of the local scene be modeled accurately
- If the model is inaccurate, the appearance of the local scene will not be consistent with the adjacent distant scene; differential rendering introduces a method for greatly reducing such effects
Improving quality with differential rendering

- LSb: the local scene as it appears in the light-based model
- LSnoobj: the rendered local scene without the synthetic objects
- LSobj: the rendered local scene with the synthetic objects
- The error in the rendered local scene is Errls = LSnoobj - LSb. This error results from the difference between the BRDF characteristics of the actual local scene and those of the modeled local scene.
Improving quality with differential rendering

- We can compensate for the error by computing the final rendering as LSfinal = LSobj - Errls, or equivalently LSfinal = LSb + (LSobj - LSnoobj)
- When LSobj and LSnoobj are the same, the final rendering is equivalent to LSb. When LSobj is darker than LSnoobj, light is subtracted from the background to form shadows, and vice versa.
Improving quality with differential rendering

- Incorrect results can still be produced, depending on the amount of error in the estimated local scene BRDF and on inaccuracies in the light-based model of the distant scene. An alternative approach is to adjust for the relative error in the local scene: LSfinal = LSb (LSobj / LSnoobj). A sketch of both forms follows.
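A minimal sketch of both differential compositing forms, assuming the three images are aligned float arrays of the same shape; the names are illustrative.

```python
import numpy as np

def differential_composite(ls_b, ls_obj, ls_noobj, relative=False):
    """Composite synthetic objects into the background by differential
    rendering.

    ls_b     -- local scene as photographed (the background plate)
    ls_obj   -- rendered local scene with the synthetic objects
    ls_noobj -- rendered local scene without the synthetic objects
    relative -- use the ratio form, which adjusts for relative error
    """
    if relative:
        # LSfinal = LSb * (LSobj / LSnoobj)
        return ls_b * ls_obj / np.maximum(ls_noobj, 1e-8)
    # LSfinal = LSb + (LSobj - LSnoobj): pixels darker than the empty
    # rendering subtract light (shadows); brighter pixels add light
    # (reflections, caustics).
    return ls_b + (ls_obj - ls_noobj)
```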
BRDF

- Materials interact with light in different ways, and different materials have different appearances given the same lighting conditions
- The reflectance properties of a surface are described by a reflectance function, which models the interaction of light reflecting at a surface
- The bi-directional reflectance distribution function (BRDF) is the most general expression of the reflectance of a material
- The BRDF is defined as the ratio between the differential radiance reflected in an exitant direction and the incident irradiance through a differential solid angle (written out below)
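A standard way to write this definition (the notation is assumed here, not taken from the slides): L_o is reflected radiance, L_i is incident radiance, and theta_i is the angle of incidence.

```latex
f_r(\omega_i, \omega_o)
  = \frac{\mathrm{d}L_o(\omega_o)}{\mathrm{d}E_i(\omega_i)}
  = \frac{\mathrm{d}L_o(\omega_o)}{L_i(\omega_i)\cos\theta_i \,\mathrm{d}\omega_i}
```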
BRDF
Estimating the local scene BRDF

1. Assume a reflectance model for the local scene
2. Choose approximate initial values for the parameters of the reflectance model
3. Compute a global illumination solution for the local scene with the current parameters, using the observed lighting configuration
4. Compare the appearance of the rendered local scene to its actual appearance in one or more views
Estimating the local scene BRDF

5. If the renderings aren't consistent, adjust the parameters of the reflectance model and return to step 3

- Assuming a diffuse-only model of the local scene in step 1 makes the adjustment in step 5 straightforward
Estimating the local scene BRDF
Estimating the local scene BRDF

- The global illumination software renders each patch as a perfectly diffuse reflector, and the resulting radiance is compared to the observed value. Dividing the two quantities produces the next estimate of the diffuse reflection coefficient ρ′d. If there is no interreflection within the local scene, the ρ′d estimates will make the renderings consistent; interreflection requires iterating the algorithm until convergence (see the sketch below).
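A minimal sketch of this iteration in NumPy. Here render_patches stands in for the global illumination solver and is an assumed interface, as are all other names.

```python
import numpy as np

def estimate_diffuse_albedo(observed, render_patches, initial_albedo,
                            iterations=10, tol=1e-4):
    """Iteratively estimate per-patch diffuse reflection coefficients.

    observed       -- observed radiance per patch (1-D array)
    render_patches -- function mapping an albedo array to rendered
                      radiance per patch (the global illumination solver)
    initial_albedo -- starting guess for the diffuse coefficients

    Each step scales every patch's albedo by the ratio of observed to
    rendered radiance; with no interreflection one step suffices, with
    interreflection the loop runs until the estimates converge.
    """
    albedo = np.asarray(initial_albedo, dtype=np.float64)
    for _ in range(iterations):
        rendered = render_patches(albedo)
        new_albedo = albedo * observed / np.maximum(rendered, 1e-8)
        new_albedo = np.clip(new_albedo, 0.0, 1.0)  # physically plausible range
        if np.max(np.abs(new_albedo - albedo)) < tol:
            break
        albedo = new_albedo
    return albedo
```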
Compositing Results
Compositing Results
Conclusion

- A general framework for adding new objects into light-based models with correct illumination has been presented
- The method uses high dynamic range images of real scene radiance to realistically illuminate synthetic objects
- A practical instance of the method was presented that uses a light probe to record the incident illumination in the vicinity of the synthetic objects