Shadows
Dinesh Manocha
Computer Graphics
COMP-770 lecture
Spring 2009
What are Shadows?
From Webster’s dictionary:
Shad-ow (noun): partial darkness or obscurity
within a part of space from which rays from a
source of light are cut off by an interposed
opaque body
Is this definition sufficient?
What are Shadows?
• Does the occluder have to be opaque to cast a shadow?
– transparency (no scattering)
– translucency (scattering)
• What about indirect light?
– reflection
– atmospheric scattering
– wave properties: diffraction
• What about volumetric or
atmospheric shadowing?
– changes in density
Is this still a shadow?
What are Shadows Really?
Volumes of space that receive no light or light
that has been attenuated through obscuration
• Is this definition sufficient?
• In practice, too general!
• We need some restrictions
Common Shadow Algorithm Restrictions
• No transparency or translucency!
– Limited forms can sometimes be handled efficiently
– Backwards ray-tracing has no trouble with these effects, but it
is much more expensive than typical shadow algorithms
• No indirect light!
– More sophisticated global illumination algorithms handle this
at great expense (radiosity, backwards ray-tracing)
• No atmospheric effects (vacuum)!
– No indirect scattering
– No shadowing from density changes
• No wave properties (geometric optics)!
What Do We Call Shadows?
• Regions not completely
visible from a light source
• Assumptions:
– Single light source
– Finite area light sources
– Opaque objects
• Two parts:
– Umbra: totally blocked from light
– Penumbra: partially obscured
[Figure: an area light source casting a shadow with an umbra (totally blocked) and a penumbra (partially obscured)]
Basic Types of Light & Shadows
From more realistic to simpler:
– area, direct & indirect
– area, direct only
(SOFT SHADOWS)
– point, direct only
– directional, direct only
(HARD or SHARP SHADOWS)
Point lights are more realistic for small-scale scenes; directional light is realistic for scenes lit by sunlight.
Goal of Shadow Algorithms
Ideally, for all surfaces, find the fraction of light
that is received from a particular light source
• Shadow computation can be considered a global
illumination problem
– this includes ray-tracing and radiosity!
• Most common shadow algorithms are restricted to
direct light and point or directional light sources
• Area light sources are usually approximated by many
point lights or by filtering techniques
Global Shadow Component in
Local Illumination Model
Without shadows:
I  GlobalAmbient 
NumLights
 Dist
i 1
i
 Spoti  Ambienti  Diffusei  Speculari 
With shadows:
I  GlobalAmbient 
NumLights
 Dist
i 1
i
 Spoti Ambienti 
NumLights
 Shadow  Dist
i 1
i
i
 Spoti  Diffusei  Speculari 
• Shadow_i is the fraction of light received at the surface
– For point lights, 0 (shadowed) or 1 (lit)
– For area lights, a value in [0,1]
• Ambient term approximates indirect light
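As a concrete reading of the model above, here is a minimal C++ sketch of the per-light sum for one color channel (the Light struct and its field names are illustrative, not from any real API):

```cpp
#include <vector>

// Illustrative per-light terms; all values are for a single surface point.
struct Light {
    float dist, spot;                  // distance and spotlight attenuation
    float ambient, diffuse, specular;  // unattenuated lighting terms
    float shadow;                      // Shadow_i: 0/1 for point lights, [0,1] for area
};

float shade(float globalAmbient, const std::vector<Light>& lights) {
    float I = globalAmbient;
    for (const Light& L : lights) {
        float att = L.dist * L.spot;
        I += att * L.ambient;                            // ambient is never shadowed
        I += L.shadow * att * (L.diffuse + L.specular);  // only direct terms are
    }
    return I;
}
```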
What else does this say?
I  GlobalAmbient 
NumLights
 Dist
i 1
i
 Spoti Ambienti 
NumLights
 Shadow  Dist
i 1
i
i
 Spoti  Diffusei  Speculari 
• Multiple lights are not really difficult (conceptually)
• Complex multi-light effects are many single-light
problems summed together!
– Superposition property of the illumination model
• This works for shadows as well!
• Focus on single-source shadow computation
• Generalization is simple, but efficiency may be improved
Characteristics of Shadow Algorithms
• Light-source types
– Directional
– Point
– Area
• Light transfer types
– Direct vs. indirect
– Opaque only
– Transparency / translucency
– Atmospheric effects
• Geometry types
– Polygons
– Higher-order surfaces
Characteristics of Shadow Algorithms
• Computational precision (like visibility algorithms)
– Object precision (geometry-based, continuous)
– Image precision (image-based, discrete)
• Computational complexity
– Running-time
– Speedups from static viewer, lights, scene
– Amount of user intervention (object sorting)
• Numerical degeneracies
Characteristics of Shadow Algorithms
• When shadows are computed
– During rendering of fully-lit scene (additive)
– After rendering of fully-lit scene (subtractive): not correct, but fast and often good enough
• Types of shadow/object interaction
– Between shadow-casting object and receiving object
– Object self-shadowing
– General shadow casting
Taxonomy of Shadow Algorithms
• Object-based
– Local illumination model (Warnock69, Gouraud71, Phong75)
– Area subdivision (Nishita74, Atherton78)
– Planar projection (Blinn88)
– Radiosity (Goral84, Cohen85, Nishita85)
– Lloyd (2004)
• Image-based
– Shadow-maps (Williams78, Hourcade85, Reeves87, Stamminger/Drettakis02, Lloyd07)
– Projective textures (Segal92)
• Hybrid
– Scan-line approach (Appel68, Bouknight70)
– Ray-tracing (Appel68, Goldstein71, Whitted80, Cook84)
– Backwards ray-tracing (Arvo86)
– Shadow-volumes (Crow77, Bergeron86, Chin89)
Good Surveys of Shadow Algorithms
Early complete surveys can be found in Crow77 and Woo90
Recent survey on hard shadows: Lloyd 2007 (Ph.D. thesis)
Recent survey on soft shadows: Laine 2007 (Ph.D. thesis)
Survey of Shadow Algorithms
Focus is on the following algorithms:
– Local illumination
– Ray-tracing
– Planar projection
– Shadow volumes
– Projective textures
– Shadow-maps
Will briefly mention:
– Scan-line approach
– Area subdivision
– Backwards ray-tracing
– Radiosity
Local Illumination “Shadows”
• Backfacing polygons are in shadow (only lit by ambient)
• Point/directional light sources only
• Partial self-shadowing
– like backface culling is a partial visibility solution
• Very fast (often implemented in hardware)
• General surface types in almost any rendering system!
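The entire "shadow test" of local illumination fits in a couple of lines; a minimal sketch with illustrative types:

```cpp
struct Vec3 { float x, y, z; };

inline float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// A point whose normal faces away from the light direction receives
// only the ambient term: backfacing w.r.t. the light means "in shadow".
inline bool litLocally(const Vec3& normal, const Vec3& toLight) {
    return dot(normal, toLight) > 0.0f;
}
```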
Local Illumination “Shadows”
• Typically, not considered a shadow algorithm
• Just handles shadows of the most restrictive form
• Dramatically improves the look of other restricted
algorithms
Local Illumination “Shadows”
Properties:
–
–
–
–
–
–
–
Point or directional light sources
Direct light
Opaque objects
All types of geometry (depends on rendering system)
Object precision
Fast, local computation (single pass)
Only handles limited self-shadowing
convenient since many algorithms do not handle any self-shadowing
– Computed during normal rendering pass
– Simplest algorithm to implement
Ray-tracing Shadows
Only interested in shadow-ray tracing (shadow feelers)
– For a point P in space, determine if it is in shadow with respect to a single point light source L by intersecting the line segment PL (the shadow feeler) with the environment
– If the segment intersects an object, then P is in shadow; otherwise, P is illuminated by light source L
[Figure: the shadow feeler, segment PL, from surface point P to light L]
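A minimal C++ sketch of the shadow-feeler test; the Object type and the intersectSegment() helper are hypothetical stand-ins for whatever geometry and intersection routines the ray tracer already has:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Object { /* geometry omitted */ };

// Hypothetical: returns true if the segment from p to l hits obj,
// writing the segment parameter t of the hit.
bool intersectSegment(const Object& obj, const Vec3& p, const Vec3& l, float* t);

bool inShadow(const Vec3& p, const Vec3& lightPos,
              const std::vector<Object>& scene) {
    for (const Object& obj : scene) {
        float t;
        // Strict (0,1) avoids hits exactly at P or at the light itself.
        if (intersectSegment(obj, p, lightPos, &t) && t > 0.0f && t < 1.0f)
            return true;   // an occluder lies between P and L
    }
    return false;          // P sees the light
}
```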
Ray-tracing Shadows
• Arguably, the simplest general algorithm
• Can even handle area light sources
– point-sample area source: distributed ray-tracing (Cook84)
I  GlobalAmbient 
NumLights
 Dist
i 1
i
 Spoti Ambienti 
NumLights
 Shadow  Dist
i
i 1
 Spoti  Diffusei  Speculari 
[Figure: area light L_i sampled at five points, with shadow feelers from P. Left: all samples blocked, Shadow_i = 0. Right: two of five samples visible, Shadow_i = 2/5]
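A sketch of point-sampling the area light in the spirit of distributed ray tracing; samplePointOnLight() is a hypothetical helper, and inShadow() is the feeler test from the earlier sketch:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Object { /* geometry omitted */ };
struct AreaLight { /* emitter geometry omitted */ };

// Hypothetical helpers: the s-th (e.g. jittered) sample point on the
// light, and the segment test from the shadow-feeler sketch earlier.
Vec3 samplePointOnLight(const AreaLight& light, int s);
bool inShadow(const Vec3& p, const Vec3& lightPos,
              const std::vector<Object>& scene);

float shadowFraction(const Vec3& p, const AreaLight& light,
                     const std::vector<Object>& scene, int numSamples) {
    int visible = 0;
    for (int s = 0; s < numSamples; ++s)
        if (!inShadow(p, samplePointOnLight(light, s), scene))
            ++visible;
    // e.g. 2 of 5 samples visible gives Shadow_i = 2/5, as in the figure.
    return float(visible) / float(numSamples);
}
```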
Ray-tracing Shadows
Sounds great, what’s the problem?
– Slow
• Intersection tests are (relatively) expensive
• May be sped up with standard ray-tracing acceleration techniques
– Shadow feeler may incorrectly intersect object touching P
• Depth bias
• Object tagging
– Don’t intersect shadow feeler with object touching P
– Works only for objects not requiring self-shadowing
Ray-tracing Shadows
How do we use the shadow feelers?
2 different rendering methods
– Standard ray-casting with shadow feelers
– Hardware Z-buffered rendering with shadow feelers
Ray-tracing Shadows
Ray-casting with shadow feelers
For each pixel:
• Trace a ray from the eye through the pixel center
• Compute the closest object intersection point P along the ray
• Calculate Shadow_i for the point by performing the shadow feeler intersection test
• Calculate illumination at point P
Ray-tracing Shadows
Z-buffering with shadow feelers
• Render the scene into the depth buffer (no need to compute color)
• For each pixel, determine if it is in shadow:
– "Unproject" the screen-space pixel point to transform it into eye space
– Perform the shadow feeler test with the light in eye space to compute Shadow_i
– Store Shadow_i for each pixel
• Light the scene using the per-pixel Shadow_i values
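The "unproject" step can be done with GLU; a sketch, where depth is the Z-buffer value at (x, y) (e.g. read back with glReadPixels of GL_DEPTH_COMPONENT):

```cpp
#include <GL/glu.h>

// Recover the 3D point behind a pixel so a shadow feeler can be traced
// from it. Passing an identity modelview yields eye-space coordinates;
// passing the view matrix yields world space.
bool unprojectPixel(int x, int y, float depth,
                    const GLdouble modelview[16],
                    const GLdouble projection[16],
                    const GLint viewport[4],
                    GLdouble out[3]) {
    return gluUnProject((GLdouble)x, (GLdouble)y, (GLdouble)depth,
                        modelview, projection, viewport,
                        &out[0], &out[1], &out[2]) == GL_TRUE;
}
```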
Ray-tracing Shadows
Z-buffering with shadow feelers
How do we use per-pixel Shadow_i values to light the scene?
Method 1: compute lighting at each pixel in software
• Deferred shading
• Requires object surface info (normal, materials)
• Could use more complex lighting model
Ray-tracing Shadows
Z-buffering with shadow feelers
How do we use per-pixel Shadow_i values to light the scene?
Method 2: use graphics hardware
For point lights:
• Shadow_i values are either 0 or 1
• Use the stencil buffer, with stencil values = Shadow_i values
• Re-render the scene with the corresponding light on, using the stencil test to write only into lit pixels (stencil = 1). Use additive blending, and render the ambient-lit scene during the depth computation pass. (A code sketch follows.)
For area lights:
• Shadow_i values are continuous in [0,1]
• Multiple passes and modulation blending
• Pixel contribution = Ambient_i + Shadow_i * (Diffuse_i + Specular_i)
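A minimal OpenGL sketch of the point-light re-lighting pass; drawSceneWithLight() is a hypothetical callback that draws geometry with light i enabled, emitting only diffuse and specular contributions:

```cpp
#include <GL/gl.h>

void drawSceneWithLight(int i);   // hypothetical scene callback

void relightLitPixels(int i) {
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_EQUAL, 1, 0xFF);        // pass only where Shadow_i == 1
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);  // leave stencil values unchanged
    glDepthFunc(GL_EQUAL);                   // shade exactly the visible surfaces
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);             // add onto the ambient-lit frame
    drawSceneWithLight(i);
}
```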
Ray-tracing Shadows
Properties
– Point, directional, and area light sources
– Direct light (may be generalized to indirect)
– Opaque (thin-film transparency easily handled)
– All types of geometry (just need a segment intersection test)
– Hybrid: object precision (segment intersection), image precision for generating pixel rays
– Slow, but many acceleration techniques are available
– General shadow algorithm
– Computed during illumination (additive, but subtractive is possible)
– Simple to implement
Planar Projection Shadows
• Shadows cast by objects onto planar surfaces
• Brute force: project shadow casting objects onto the
plane and draw projected object as a shadow
[Figure: a directional light (parallel projection) and a point light (perspective projection) projecting an object onto a ground plane]
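The projection can be packed into a single 4x4 matrix; a sketch (after Blinn88) of building it, ready for the OpenGL matrix stack:

```cpp
// Build the 4x4 matrix that projects geometry onto the plane
// a*x + b*y + c*z + d = 0 away from a homogeneous light position L
// (L[3] = 1 for a point light, L[3] = 0 for a directional light).
// The result is column-major, ready for glMultMatrixf().
void buildPlanarShadowMatrix(float m[16], const float plane[4],
                             const float L[4]) {
    float dot = plane[0]*L[0] + plane[1]*L[1]
              + plane[2]*L[2] + plane[3]*L[3];
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            m[col*4 + row] = -L[row] * plane[col];   // dot*I - L*plane^T
            if (row == col)
                m[col*4 + row] += dot;
        }
}
```

To draw the shadows, multiply this matrix onto the modelview stack (glMultMatrixf) and re-draw the casters in a shadow color.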
Planar Projection Shadows
Not sufficient:
– co-planar polygons (Z-fighting): use depth bias
– requires clipping to the relevant portion of the plane: use shadow-receiver stenciling
Planar Projection Shadows
better approach, subtractive strategy
Render the scene fully lit by a single light
For each planar shadow receiver:
• Render the receiver: stencil the pixels covered
• Render the projected shadow casters in a shadow color with depth testing on, depth biasing (offset from the plane), modulation blending, and stenciling (to write only on the receiver and to avoid double pixel writes)
– Receiver stencil value = 1; only write where stencil equals 1; change it to zero after modulating the pixel
A code sketch follows.
[Figure: the receiver's texture remains visible in the shadow]
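A minimal OpenGL sketch of this subtractive pass for one receiver, assuming the fully-lit scene is already in the color and depth buffers; drawReceiver() and drawProjectedCasters() are hypothetical callbacks:

```cpp
#include <GL/gl.h>

void drawReceiver();           // the planar shadow receiver
void drawProjectedCasters();   // casters multiplied by the shadow matrix

void subtractivePlanarShadowPass() {
    glEnable(GL_STENCIL_TEST);

    // 1) Stencil the receiver's pixels to 1 (no color writes needed).
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    drawReceiver();
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    // 2) Draw projected casters: modulate the framebuffer, bias the depth
    //    off the plane, write only where stencil == 1, and zero the stencil
    //    afterwards so no pixel is darkened twice.
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_ZERO);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ZERO, GL_SRC_COLOR);       // dst *= shadow color
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(-1.0f, -1.0f);            // pull the shadow toward the eye
    drawProjectedCasters();
    glDisable(GL_POLYGON_OFFSET_FILL);
    glDisable(GL_BLEND);
    glDisable(GL_STENCIL_TEST);
}
```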
Planar Projection Shadows
problems with subtractive strategy
• Called subtractive because it begins with full-lighting and
removes light in shadows (modulates)
• Can be more efficient than additive (avoids passes)
• Not as accurate as additive; it does not follow the lighting model:
– Specular and diffuse components remain in shadow
– Modulates the ambient term
– Shadow color is chosen by the user
I  GlobalAmbient 
NumLights
 ShadowColor  Dist
i 1
i
i
 Spoti  Ambienti  Diffusei  Speculari 
as opposed to the correct version
I  GlobalAmbient 
NumLights
 Dist
i 1
i
 Spoti Ambienti 
NumLights
 Shadow  Dist
i 1
i
i
 Spoti  Diffusei  Speculari 
Planar Projection Shadows
even better approach, additive strategy
• Draw the ambient-lit shadow-receiving scene (global and all lights' local ambient)
• For each light source:
For each planar receiver:
– Render the receiver: stencil the pixels covered
– Render projected shadow casters into the stenciled receiver area: depth testing on, depth biasing; stencil the pixels covered by shadow
– Re-render the receiver lit by the single light source (no ambient light): depth test set to EQUAL, additive blending; write only into stenciled areas on the receiver that are not in shadow
• Draw the shadow-casting scene: full lighting
Planar Projection Shadows
Properties
– Point or directional light sources
– Direct light
– Opaque objects (could fake transparency using subtractive)
– Polygonal shadow-casting objects, planar receivers
– Object precision
– Number of passes: L = num lights, P = num planar receivers
• subtractive: 1 fully lit pass, L*P special passes (no lighting)
• additive: 1 ambient-lit pass, 2*L*P receiver passes, L*P caster passes
Planar Projection Shadows
Properties
– Can take advantage of static components:
• static objects & lights: precompute silhouette polygon from light source
• static objects & viewer: precompute first pass over entire scene
– Visibility from light is handled by user
(must choose casters and receivers)
– No self-shadowing (relies on local illumination)
– Both subtractive and additive strategies presented
– Conceptually simple, surprisingly difficult to get right; gives the techniques needed to handle more sophisticated multi-pass methods
Shadow Volumes
What are they?
Volume of space in shadow of a single occluder with respect to a point light source
OR
Volume of space swept out by extruding an occluding polygon away from a point
light source along the projector rays originating at the point light and passing
through the vertices of the polygon
[Figure: a point light and an occluding triangle extruded into a 3D shadow volume]
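A C++ sketch of the extrusion: each edge (v0, v1) of the occluding polygon yields one side quad of the shadow volume (caps and a sensible extrusion distance are left out; types are illustrative):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Push a vertex away from the light along its projector ray.
Vec3 extrude(const Vec3& v, const Vec3& light, float dist) {
    Vec3 d = { v.x - light.x, v.y - light.y, v.z - light.z };
    return { v.x + d.x * dist, v.y + d.y * dist, v.z + d.z * dist };
}

// Returns 4 vertices per side quad of the volume's boundary.
std::vector<Vec3> shadowVolumeSides(const std::vector<Vec3>& poly,
                                    const Vec3& light, float dist) {
    std::vector<Vec3> quads;
    for (size_t i = 0; i < poly.size(); ++i) {
        const Vec3& v0 = poly[i];
        const Vec3& v1 = poly[(i + 1) % poly.size()];
        quads.push_back(v0);
        quads.push_back(v1);
        quads.push_back(extrude(v1, light, dist));
        quads.push_back(extrude(v0, light, dist));
    }
    return quads;
}
```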
Shadow Volumes
How do you use them?
• Parity test to see if a point P on a visible surface is in shadow:
– Initialize parity to 0
– Shoot a ray from the eye to point P
– Each time a shadow-volume boundary is crossed, invert the parity
• If parity = 0, P is lit; if parity = 1, P is in shadow
What are some potential problems?
[Figure: rays from the eye to three surface points cross the shadow-volume boundaries of an occluder; the resulting parities are 0 (lit), 1 (in shadow), and 0 (lit)]
Shadow Volumes
Problems with Parity Test
• Eye inside of shadow volume
– Incorrectly shadows points (reversed parity)
• Self-shadowing of visible occluders
– Should a point on the occluder flip the parity? (consistent if not flipped)
– A point on the occluder should not flip the parity
– Touching the boundary is not counted as a crossing
• Multiple overlapping shadow volumes
– Incorrectly shadows points (incorrect parity)
– Is parity's binary condition sufficient?
Shadow Volumes
Solutions to Parity Test Problems
• Eye inside of shadow volume
– Initialize parity to 0 when starting outside and to 1 when inside
• Self-shadowing of visible occluders
– Do not flip parity when viewing the "in" side of an occluder
– Do not flip parity when viewing the "out" side of an occluder either
• Multiple overlapping shadow volumes
– A binary parity value is not sufficient; we need a general counter for boundary crossings: +1 entering a shadow volume, -1 exiting
Shadow Volumes
A More General Solution
Determine if point P is in shadow:
– Initialize the boundary crossing counter to the number of shadow volumes containing the eye point
Why? Because the ray must leave this many shadow volumes to reach a lit point
– Along the ray, increment the counter each time a shadow volume is entered, and decrement it each time one is exited
– If the counter is > 0, P is in shadow
Special case when P is on an occluder:
– Do not increment or decrement the counter
– A point on the boundary does not count as a crossing
[Figure: a ray crossing boundaries (+1, +1, -1, -1), with counter values 0, 1, 2, 1, 0 along the way]
Shadow Volumes
More Examples
Can you calculate the final boundary count for these visible points?
Shadow Volumes
More Examples
Can you calculate the final boundary count for these visible points?
[Figure: worked answers; each boundary crossing contributes +1 or -1, giving final counts such as 0, 1, and 2 at the visible points]
Shadow Volumes
How do we use this information to find shadow pixels?
Could just use ray-casting (ray through each pixel)
– Too slow; possibly more primitives to intersect with
– Could use silhouettes of complex objects to simplify the shadow volumes
[Figure: pixel rays counting +1/-1 shadow-volume boundary crossings, yielding counts of 0, 1, and 2]
Shadow Volumes
Using Standard Graphics Hardware
Simple observations:
– For convex occluders, shadow volumes form a convex shape
– Rays enter through front-facing shadow-volume boundaries and exit through back-facing ones
[Figure: front-facing boundary polygons marked +, back-facing marked -, along rays from the eye]
Shadow Volumes
Using Standard Graphics Hardware
Use standard Z-buffered rendering and the stencil buffer (8 bits) to calculate the boundary count for each pixel:
– Create shadow volumes for each occluding object (should be convex)
– Render the ambient-lit scene, keep the depth values
– For each light source:
• Initialize stencil values to the number of volumes containing the eye point
• Still using the Z-buffer depth test (strictly less-than), but with no depth update
– Render the front-facing shadow-volume boundary polygons, incrementing stencil values for all covered pixels that pass the depth test
– Render the back-facing boundary polygons, but decrement the stencil
• Pixels with a stencil value of zero are lit; re-render the scene with lighting on (no ambient, depth test set to EQUAL). A code sketch follows.
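A minimal OpenGL sketch of this stencil counting (the depth-pass variant), assuming the ambient pass has already been drawn; drawShadowVolumes() and drawSceneLit() are hypothetical callbacks:

```cpp
#include <GL/gl.h>

void drawShadowVolumes();  // boundary polygons of all shadow volumes
void drawSceneLit();       // one-light pass, no ambient, additive blend

void stencilShadowVolumePass(int volumesContainingEye) {
    glClearStencil(volumesContainingEye);   // ray must exit this many volumes
    glClear(GL_STENCIL_BUFFER_BIT);

    glDepthMask(GL_FALSE);                  // keep depth, do not update it
    glDepthFunc(GL_LESS);                   // strictly less-than test
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 0, 0xFF);
    glEnable(GL_CULL_FACE);

    glCullFace(GL_BACK);                    // draw front faces: +1 on depth pass
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
    drawShadowVolumes();

    glCullFace(GL_FRONT);                   // draw back faces: -1 on depth pass
    glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
    drawShadowVolumes();

    // Stencil == 0 means the visible point is outside all shadow volumes.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glCullFace(GL_BACK);
    glStencilFunc(GL_EQUAL, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glDepthFunc(GL_EQUAL);                  // shade exactly the visible surface
    drawSceneLit();
}
```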
Shadow Volumes
Using Standard Graphics Hardware: step-by-step
• Create shadow volumes
• Initialize stencil buffer values
to # of volumes containing eye
[Figure: per-pixel stencil values, initially 0]
Shadow Volumes
Using Standard Graphics Hardware: step-by-step
• Render the ambient lit scene
• Store the Z-buffer
• Set depth-test to strictly less-than
Shadow Volumes
Using Standard Graphics Hardware: step-by-step
• Render front-facing shadow-volume boundary polygons
– Why front faces first? Unsigned stencil values
• Increment stencil values for pixels covered that pass depth-test
Shadow Volumes
Using Standard Graphics Hardware: step-by-step
• Render back-facing shadow-volume boundary polygons
• Decrement stencil values for pixels covered that pass depth-test
Shadow Volumes
Using Standard Graphics Hardware: step-by-step
• Pixels with stencil value of zero are lit
• Set depth-test to strictly equals
• Re-render lit scene with no ambient into lit pixels
Shadow Volumes
More Potential Problems
• Lots o' geometry!
– Only create volumes for shadow-casting objects (approximation)
– Use only silhouettes
• Lots o' fill!
– Reduce geometry
– Have a good "max distance"
– Clip to the view-volume
• Near-plane clipping
Shadow Volumes
Properties
– Point or directional light sources
– Direct light
– Opaque objects (could fake transparency using subtractive)
– Restricted to polygonal objects (could be generalized)
– Hybrid: object precision in the creation of shadow volumes, image precision in the per-pixel stencil evaluation
– Number of passes: L = num lights, N = number of tris
• additive: 1 ambient-lit, 3*N*L shadow-volume, 1 fully lit
• subtractive: 1 fully lit, 3*N*L shadow-volume, 1 image pass (modulation)
• Could be made faster by silhouette simplification and by hand-picking shadow casters and receivers
Shadow Volumes
Properties
– Can take advantage of static components:
• static objects & lights: precompute shadow volumes from light sources
• static objects & viewer: precompute first pass over entire scene
– General shadow algorithm, but could be restricted for more
speed
– Both subtractive and additive strategies presented
Projective Texture Shadows
What are Projective Textures?
Texture-maps that are mapped to a
surface through a projective
transformation of the vertices into
the texture’s “camera” space
Projective Texture Shadows
How do we use them to create shadows?
Project a modulation image of the shadow casting objects from the
light’s point-of-view onto the shadow receiving objects
[Figure: the light's point-of-view; the shadow projective texture (modulation image or light-map); and the eye's point-of-view with the projective texture applied to the ground plane (self-shadowing here is from another algorithm)]
Projective Texture Shadows
More details
Fast, subtractive method
• For each light source:
– Create a light camera that encloses the shadowed area
– Render shadow-casting objects into the light's view; only need to create a light map (1 in light, 0 in shadow)
– Create a projective texture from the light's view
– Render fully-lit shadow-receiving objects with the modulation projective texture applied (additive blending is needed for all light sources except the first)
• Render fully-lit shadow-casting objects
Projective Texture Shadows
More examples
[Figure captions: Cast shadows from complex objects onto complex objects in only 2 passes over shadow casters and 1 pass over receivers (for 1 light). Lighting for shadowed objects is computed independently for each light source and summed into a final image. Colored light sources: lit areas are modulated by a value of 1, and shadow areas can be any ambient modulation color]
Projective Texture Shadows
Problems
• Does not use visibility information from the light’s view
– Objects must be depth-sorted
– Parts of an object that are not visible from the light also have the projective texture applied (ambient light appears darker on shadow-receiving objects)
• Receiving objects may already be textured
– Typically, only one texture can be applied to an object at a time
Projective Texture Shadows
Solutions… well, sort of...
• Does not use visibility information from the light’s view
– User selects shadow casters and receivers
– Casters can be receivers, receivers can be casters
– Must create and apply projective textures in front-to-back order
from the light
– Darker ambient lighting is accepted. Finding these regions
requires a more general shadow algorithm
• Receiving objects may already be textured
– Use two passes: first to apply base texture, second apply
projective texture with modulation blending
– Use multi-texture: this is what it is for! Avoids passes over the
geometry!
Projective Texture Shadows
Properties
• Point or directional light sources
• Direct light (fake transparency with different modulation colors)
• All types of geometry (depends on the rendering system)
• Image precision (image-based)
• For each light, 2 passes over shadow-casting objects (1 to create the modulation image, 1 with full lighting), 1 pass over shadow-receiving objects (fully lit w/ projective texture)
• More passes will be required for shadow-casting objects that are already textured
• Benefits mostly from static scenes (precompute shadow textures)
• User must partition objects into casters and receivers (casters could be receivers and vice versa)
Projective Texture Shadows
How do we apply projective textures?
• All points on the textured
surface must be mapped into
the texture’s camera space
(projective transformation)
• Position on texture’s camera
viewplane window maps into
the 2D texture-map
How can this be done efficiently?
Slight modification to perspectively-correct texture-mapping
Projective Texture Shadows
Perspectively-incorrect Texture-mapping
• Relies on interpolating screen-space values along projected edge
• Vertices after perspective transformation and perspective divide:
(x, y, z, w) → (x/w, y/w, z/w, 1)
A = (x_1/w_1, y_1/w_1, z_1/w_1, s_1, t_1)
B = (x_2/w_2, y_2/w_2, z_2/w_2, s_2, t_2)
I(t) = (1 - t) \cdot A + t \cdot B
Projective Texture Shadows
Perspectively-correct Texture-mapping
• Add a third, homogeneous coordinate to the texture-coords: (s, t, 1)
• Divide all vertex components by w after the perspective transformation
• Interpolate all values, including 1/w
• Obtain perspectively-correct texture-coords (s', t') by applying another homogeneous normalization (divide the interpolated s/w and t/w terms by the interpolated 1/w term)
A = (x_1/w_1, y_1/w_1, z_1/w_1, s_1/w_1, t_1/w_1, 1/w_1)
B = (x_2/w_2, y_2/w_2, z_2/w_2, s_2/w_2, t_2/w_2, 1/w_2)
I(t) = (1 - t) \cdot A + t \cdot B
Final perspectively-correct values, by normalizing the homogeneous texture-coords:
I_{perspcorrect} = (x', y', z', s', t') = (I_x, I_y, I_z, I_{s/w} / I_{1/w}, I_{t/w} / I_{1/w})
Projective Texture Shadows
Projective Texture-mapping
• Texture-coords become 4D just like vertex coords: (x, y, z, w) → (s, t, r, q)
• A full 4x4 matrix transformation is applied to the texture-coords
• Projective transformations are also allowed; another perspective divide is needed for the texture-coords:
Vertices: homogeneous space to screen space: (x, y, z, w) → (x/w, y/w, z/w)
Texture-coords: homogeneous space to texture space: (s, t, r, q) → (s/q, t/q, r/q)
• Requires another per-vertex transformation, but the per-pixel work is the same as in perspectively-correct texture-mapping (Segal92)
Projective Texture Shadows
Projective Texture-mapping
Given vertex v, corresponding texture-coords t, and two 4x4 matrix transformations M and T (M = composite modeling, viewing, and projection transformations; T = texture-coords transformation matrix):
– Each vertex is represented as [ M*v, T*t ] = [ x y z w s t r q ]
– Transformed into screen space through a perspective divide of all components by w:
[ x y z w s t r q ] → [ x/w y/w z/w s/w t/w r/w q/w ]
– All values are linearly interpolated along edges (across the polygon face)
– Perform a per-pixel homogeneous normalization of the texture-coords by dividing by the interpolated q/w value:
[ x' y' z' s' t' r' ] = [ x/w y/w z/w (s/w)/(q/w) (t/w)/(q/w) (r/w)/(q/w) ]
– Same as perspectively-correct texture-mapping, but instead of dividing by the interpolated 1/w, divide by the interpolated q/w (Segal92)
Projective Texture Shadows
Projective Texture-mapping
A = (x_1/w_1, y_1/w_1, z_1/w_1, s_1/w_1, t_1/w_1, r_1/w_1, q_1/w_1)
B = (x_2/w_2, y_2/w_2, z_2/w_2, s_2/w_2, t_2/w_2, r_2/w_2, q_2/w_2)
I(t) = (1 - t) \cdot A + t \cdot B
Final perspectively-correct values, by normalizing the homogeneous texture-coords:
I_{perspcorrect} = (x', y', z', s', t', r') = (I_x, I_y, I_z, I_{s/w} / I_{q/w}, I_{t/w} / I_{q/w}, I_{r/w} / I_{q/w})
Projective Texture Shadows
Projective Texture-mapping
So how do we actually use this to apply the shadow texture?
• Use the vertex’s original coords as the texture-coords
• Texture transformation: T = LightProjection * LightViewing * NormalModeling (a setup sketch follows)
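A minimal OpenGL sketch of this setup; lightProjection, lightView, and model are hypothetical column-major matrices (model corresponds to NormalModeling in the slide's terms), and object-linear texgen with identity planes feeds the vertex's own coords in as texture-coords:

```cpp
#include <GL/gl.h>

void setupShadowTextureTransform(const GLfloat lightProjection[16],
                                 const GLfloat lightView[16],
                                 const GLfloat model[16]) {
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glTranslatef(0.5f, 0.5f, 0.5f);   // remap clip-space [-1,1] to [0,1]
    glScalef(0.5f, 0.5f, 0.5f);
    glMultMatrixf(lightProjection);
    glMultMatrixf(lightView);
    glMultMatrixf(model);
    glMatrixMode(GL_MODELVIEW);

    // Identity texgen planes: texture-coords = object-space vertex coords.
    static const GLfloat sP[4] = {1,0,0,0}, tP[4] = {0,1,0,0},
                         rP[4] = {0,0,1,0}, qP[4] = {0,0,0,1};
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
    glTexGenfv(GL_S, GL_OBJECT_PLANE, sP);
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
    glTexGenfv(GL_T, GL_OBJECT_PLANE, tP);
    glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
    glTexGenfv(GL_R, GL_OBJECT_PLANE, rP);
    glTexGeni(GL_Q, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
    glTexGenfv(GL_Q, GL_OBJECT_PLANE, qP);
    glEnable(GL_TEXTURE_GEN_S); glEnable(GL_TEXTURE_GEN_T);
    glEnable(GL_TEXTURE_GEN_R); glEnable(GL_TEXTURE_GEN_Q);
}
```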
Shadow-Maps
for accelerating ray-traced shadow feelers
• Previously, shadow feelers had to be
intersected against all objects in the scene
• What if we knew the nearest intersection
point for all rays leaving the light?
• The depth-buffer of the rendered scene
from a camera at the light would give us a
discretized version of this
• This depth-buffer is called a shadow-map
• Instead of intersecting rays with objects, we intersect the ray with the light's viewplane and look up the nearest depth value
• If the light's depth value at this point is less than the depth of the eye-ray's nearest intersection point, then this point is in shadow!
[Figure: L is the light-ray's nearest intersection point, E is the eye-ray's nearest intersection point; if L is closer to the light than E, then E is in shadow]
Shadow-Maps
for accelerating ray-traced shadow feelers
Cool, we can really speed up ray-traced shadows now!
– Render from eye view to accelerate first-hit ray-casting
– Render from light view to store first-hits from light
– For each pixel-ray in the eye’s view, we can project the first
hit point into the light’s view and check if anything is
intersecting the shadow feeler with a simple table lookup!
– The shadow-map is discretized, but we can just use the
nearest value.
What are the potential problems?
Shadow-Maps
Problems with Ray-traced Shadow Maps
• Still too slow
– requires many per-pixel operations
– does not take advantage of pixel coherence in eye view
• Still has self-shadowing problem
– need a depth bias
• Discretization error
– Using the nearest depth value to the projected point may not be sufficient
– How can we filter the depth values? The standard way does not really make sense here
Shadow-Maps
faster way: standard shadow-map approach
• Not normally used as a ray-tracing acceleration
technique, normally used in a standard Z-buffered
graphics system
• Two methods presented (Williams78):
– Subtractive: post-processing on final lit image (like full-scene
image warping)
– Additive: as implemented in graphics hardware (OpenGL
extension on InfiniteReality)
Shadow-Maps
illustration of basic idea
Shadow-map from light 1
Shadow-map from light 2
Final view
Shadow-Maps
Subtractive
• Render the fully-lit scene
• Create the shadow-map: render depth from the light's view
• For each pixel in the final image:
– Project the point at each pixel from eye screen-space into light screen-space (keep the eye-point depth De)
– Look up the light depth value Dl
– Compare depth values; if Dl < De, the eye-point is in shadow
– Modulate if the point is in shadow (a sketch follows)
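The per-pixel comparison can be sketched as below; every helper (eyeDepth, eyeToLightScreen, shadowMapDepth, modulatePixel) is hypothetical, and the bias term anticipates the self-shadowing fix discussed later:

```cpp
struct Vec3 { float x, y, z; };
float eyeDepth(int x, int y);                       // eye-view Z-buffer
Vec3  eyeToLightScreen(int x, int y, float depth);  // eye screen -> light screen
float shadowMapDepth(int x, int y);                 // stored light depth
void  modulatePixel(int x, int y);                  // darken toward shadow color

void subtractiveShadowPass(int width, int height, float bias) {
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            float de = eyeDepth(x, y);
            Vec3 p = eyeToLightScreen(x, y, de);    // p.z is De in light space
            float dl = shadowMapDepth((int)p.x, (int)p.y);  // nearest sample
            if (dl + bias < p.z)                    // an occluder is closer to the light
                modulatePixel(x, y);                // this pixel is in shadow
        }
}
```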
Shadow-Maps
Subtractive: advantages
• Constant-time shadow computation!
just like full-scene image-warping: eye-view pixels are warped to the light view and then a depth comparison is performed
• Only a 2-pass algorithm:
1 eye pass, 1 light pass (and 1 constant-time image-warping pass)
• Deferred shading (for shadow computation)
Zhang98 presents a similar approach using a forward mapping (from light to eye), reversing this whole process
Shadow-Maps
Subtractive: disadvantages
• Not as accurate as additive (same reasons)
– Specular and diffuse components in shadow
– Modulates ambient term
• Has standard shadow-map problems:
– Self-shadowing : depth-bias needed
– Depth sampling error : how do we accurately reconstruct
depth values from a point-sampling?
Shadow-Maps
Additive
• Create the shadow-map: render depth from the light's view
• Use the shadow-map as a projective texture!
• While scan-converting triangles:
– apply the shadow-map projective texture
– instead of modulating with the looked-up depth value Dl, compare that value against the r-value (De) of the transformed point on the triangle
– Compare De to Dl; if Dl < De, the eye-point is in shadow
Basically, we are scan-converting the triangle in both eye and light space simultaneously and performing a depth comparison in light space against previously stored depth values
Shadow-Maps
Additive: advantages
• Easily implemented in hardware
only a slight change to the standard perspectively-correct
texture-mapping hardware: add an r-component compare op
• Fastest, most general implementation to date!
As fast as projective textures, but general!
Shadow-Maps
Additive: disadvantages
• Computes shadows on a per-primitive basis
All pixels covered by all primitives must go through shadowing
and lighting operation whether visible or not (no deferred
shading)
• Still has standard shadow-mapping problems
– Self-shadowing
– Depth sampling error
Shadow-Maps
Solving main problems: self-shadowing
Use a depth bias during the transformation into light space:
– Add a z-translation towards the light source after the transformation from eye to light
OR
– Add a z-translation towards the eye before transforming into light space
OR
– Translate the eye-space point along the surface normal before transforming into light space (a sketch follows)
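A tiny sketch of the third option; the types and the toLightSpace() helper are illustrative, and eps is scene-dependent:

```cpp
struct Vec3 { float x, y, z; };
Vec3 toLightSpace(const Vec3& p);   // hypothetical eye -> light transform

Vec3 biasedLightSpacePoint(const Vec3& p, const Vec3& n, float eps) {
    // Nudge the point off its surface along the normal before the
    // light-space transform, then compare its depth to the shadow-map.
    Vec3 nudged = { p.x + n.x * eps, p.y + n.y * eps, p.z + n.z * eps };
    return toLightSpace(nudged);
}
```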
Shadow-Maps
Solving main problems: depth sampling
Could just use the nearest sample, but how would you anti-alias depth?
Shadow-Maps
Depth sampling: normal filtering
• Averaging depth doesn’t really make sense
(unrelated to surface, especially at shadow boundaries!)
• Still a binary result, (no anti-aliased softer shadows)
Shadow-Maps
Depth sampling: percentage-closer filtering (Reeves87)
• Could average the binary results of all depth-map pixels covered
• Soft, anti-aliased shadows
• Very similar to point-sampling across an area light source in ray-traced shadow computation (a sketch follows)
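A sketch of percentage-closer filtering over a 3x3 footprint: compare depths first, then average the binary results (shadowMapDepth() and the footprint size are illustrative):

```cpp
float shadowMapDepth(int x, int y);   // hypothetical stored light depth

float percentageCloser(int sx, int sy, float de, float bias) {
    int lit = 0, total = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            if (de <= shadowMapDepth(sx + dx, sy + dy) + bias)
                ++lit;                     // this sample sees the light
            ++total;
        }
    return float(lit) / float(total);      // Shadow_i in [0,1]
}
```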
Shadow-Maps
How do you choose the samples?
The quadrilateral represents the area covered by a pixel's projection onto a polygon after being projected into the shadow-map
Scanline Algorithms
classic by Bouknight and Kelley
• Project edges of shadow
casting triangles onto
receivers
• Use shadow-volume-like
parity test during scanline
rasterization
Area-Subdivision Algorithms
based on Atherton-Weiler clipping
• Find actual visible
polygon fragments
(geometrically) through
generalized clipping
algorithm
• Create model composed
of shadowed and lit
polygons
• Render as surface detail
polygons
Area-Subdivision Algorithms
based on Atherton-Weiler clipping
Multiple Light Sources
for any single-light algorithm
• Accumulate all fully-lit single-light images into a single image through a summing blend op (standard accumulation buffer or blending operations); a sketch follows
• The global-ambient-lit scene should be added in separately
• Very easy to implement
• Could be inefficient for some algorithms
• Use the higher accuracy of the accumulation buffer (usually 12 bits per color component)
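A minimal OpenGL sketch of the accumulation; drawAmbientOnlyScene() and drawSceneOneLight() are hypothetical callbacks:

```cpp
#include <GL/gl.h>

void drawAmbientOnlyScene();      // global-ambient pass
void drawSceneOneLight(int i);    // full shadowed pass for light i, no ambient

void accumulateLights(int numLights) {
    glClear(GL_ACCUM_BUFFER_BIT);
    drawAmbientOnlyScene();
    glAccum(GL_ACCUM, 1.0f);              // add the ambient pass
    for (int i = 0; i < numLights; ++i) {
        drawSceneOneLight(i);
        glAccum(GL_ACCUM, 1.0f);          // sum at accumulation-buffer precision
    }
    glAccum(GL_RETURN, 1.0f);             // copy the sum back to the framebuffer
}
```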
Area light Sources
for any point-light algorithm
• Soft or “fuzzy” shadows (penumbra)
• Some algorithms have some “natural” support for these
• For restricted algorithms, we can always sample the
area light source with many point light sources: jitter
and accumulate
• Very expensive: many “high quality” passes to obtain
something fuzzy
• Not really feasible in most interactive applications
• Convolution and image-based methods are usually more efficient here
Backwards Ray-tracing
• Big topic: sorry, no time
Radiosity
• Big topic: sorry, no time
References
Appel A. “Some Techniques for Shading Machine Renderings of Solids,” Proc AFIPS
JSCC, Vol 32, 1968, pgs 37-45.
Arvo, J. “Backward Ray Tracing,” in A.H. Barr, ed., Developments in Ray Tracing,
Course Notes 12 for SIGGRAPH 86, Dallas, TX, August 18-22, 1986.
Atherton, P.R., Weiler, K., and Greenberg, D. “Polygon Shadow Generation,” SIGGRAPH
78, pgs 275-281.
Bergeron, P. “A General Version of Crow’s Shadow Volumes,” CG & A, 6(9), September
1986, pgs 17-28.
Blinn, Jim. “Jim Blinn’s Corner: Me and My (Fake) Shadow,” IEEE CG&A, vol 8, no 1,
Jan 1988, pgs 82-86.
Bouknight, W.J. “A Procedure for Generation of Three-Dimensional Half-Toned
Computer Graphics Presentations,” CACM, 13(9), September 1970, pgs 527-536. Also
in FREE80, pgs 292-301.
Bouknight, W.J. and Kelly, K.C. “An Algorithm for Producing Half-Tone Computer
Graphics Presentations with Shadows and Movable Light Sources,” SJCC, AFIPS
Press, Montvale, NJ, 1970, pgs 1-10.
Chin, N., and Feiner, S. “Near Real-Time Shadow Generation Using BSP Trees,”
SIGGRAPH 89, pgs 99-106.
References
Cohen, M.F., and Greenberg, D.P. “The Hemi-Cube: A Radiosity Solution for Complex
Environments,”SIGGRAPH 85, pgs 31-40.
Cook, R.L. “Shade Trees,” SIGGRAPH 84, pgs 223-231.
Cook, R.L., Porter, T., and Carpenter, L. “Distributed Ray Tracing,” SIGGRAPH 84, pgs
127-145.
Crow, Frank. “Shadow Algorithms for Computer Graphics,” SIGGRAPH ‘77.
Goldstein, R.A. and Nagel, R. “3-D Visual Simulation,” Simulation, 16(1), January 1971,
pgs 25-31.
Goral, C.M., Torrance, K.E., Greenberg, D.P., and Battaile, B. “Modeling the Interaction
of Light Between Diffuse Surfaces,” SIGGRAPH 84 pgs 213-222.
Gouraud, H. “Continuous Shading of Curved Surfaces,” IEEE Trans. on Computers, C-20(6), June 1971, pgs 623-629. Also in FREE80, pgs 302-308.
Hourcade, J.C. and Nicolas, A. “Algorithms for Antialiased Cast Shadows,” Computers &
Graphics, 9(3), 1985, pgs 259-265.
Nishita, T. and Nakamae, E. “An Algorithm for Half-Tone Representation of Three-Dimensional Objects,” Information Processing in Japan, Vol. 14, 1974, pgs 93-99.
Nishita, T., and Nakamae, E. “Continuous Tone Representation of Three-Dimensional
Objects Taking Account of Shadows and Interreflection,” SIGGRAPH 85, pgs 23-30.
References
Reeves, W.T., Salesin, D.H., and Cook, R.L. “Rendering Antialiased Shadows with Depth
Maps,” SIGGRAPH 87, pgs 283-291.
Segal, M., Korobkin, C., van Widenfelt, R., Foran, J., and Haeberli, P. “Fast Shadows and
Lighting Effects Using Texture Mapping,” Computer Graphics, 26, 2, July 1992, pgs
249-252.
Warnock, J. “A Hidden-Surface Algorithm for Computer Generated Half-Tone Pictures,”
Technical Report TR 4-15, NTIS AD-753 671, Computer Science Department,
University of Utah, Salt Lake City, UT, June 1969.
Whitted, T. “An Improved Illumination Model for Shaded Display,” CACM, 23(6), June
1980, pgs 343-349.
Williams, L. “Casting Curved Shadows on Curved Surfaces,” SIGGRAPH 78, pgs 270-274.
Woo, Andrew, Pierre Poulin, and Alain Fournier. “A Survey of Shadow Algorithms,”
IEEE CG&A, Nov 1990, pgs 13-32.
Zhang, H. “Forward Shadow Mapping,” Rendering Techniques 98, Proceedings of the 9th
Eurographics Rendering Workshop.
Acknowledgements
Mark Kilgard (nVidia) : for various pictures from presentation
slides (www.opengl.org)
Advanced OpenGL Rendering course notes (www.opengl.org)