Advanced Mapping
Computer Graphics
Types of Mapping
• Maps affect various values in the rendering process:
– Color
• Texture mapping
• Light mapping
– Transparency
• Alpha mapping
– Specular component
• Environment mapping
• Gloss Mapping
– Surface normal
• Bump mapping
– Vertex position
• Displacement mapping
MultiTexturing
• Most of the advanced mapping techniques we will
be looking at will be made possible by
multitexturing
• Multitexturing is simply the ability of the graphics
card to apply more than one texture to a surface in
a single rendering pass
• Specifically, there is a hardware pipeline of
texture units, each of which applies a single
texture
MultiTexturing
[Diagram: the interpolated vertex value passes through texture unit 1, texture unit 2, and texture unit 3 in turn, each unit applying its own texture value to the running result]
MultiTexturing
• Each of the texture units is independent, with its own:
– Specification of “texture” parameters
– Type of information stored in the “texture”
– Parameter in the rendering process that is modified by the “texture” values (see the setup sketch below)
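As a concrete illustration, here is a minimal fixed-function OpenGL sketch of two texture units applied in a single pass (assuming an OpenGL 1.3+ header and context; the texture object names baseTex and secondTex and the quad coordinates are illustrative, not from the slides):

    #include <GL/gl.h>

    void drawMultitexturedQuad(GLuint baseTex, GLuint secondTex)
    {
        // Texture unit 0: the base texture, replacing the fragment color
        glActiveTexture(GL_TEXTURE0);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, baseTex);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

        // Texture unit 1: a second texture, modulating unit 0's result
        glActiveTexture(GL_TEXTURE1);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, secondTex);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

        // Each vertex carries one texture coordinate per unit
        glBegin(GL_QUADS);
        glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 0.0f); glMultiTexCoord2f(GL_TEXTURE1, 0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
        glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 0.0f); glMultiTexCoord2f(GL_TEXTURE1, 1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
        glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 1.0f); glMultiTexCoord2f(GL_TEXTURE1, 1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
        glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 1.0f); glMultiTexCoord2f(GL_TEXTURE1, 0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
        glEnd();
    }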
Multipass Rendering
• In theory, all illumination equation factors are
evaluated at once and a sample color is generated
• In practice, various parts of the light equations can
be evaluated in separate passes, each successive
pass modifying the previous result
– Results are accumulated in the offscreen framebuffer
(a.k.a. colorbuffer)
• Multipass rendering is an older technique than
MultiTexturing
Multipass Rendering
• The multipass idea came about as more of the
rendering pipeline moved into hardware
– When all rendering was done in software, one had
control over all the details of the rendering process
– Moving rendering to hardware significantly increased
performance, at the expense of flexibility
• This lack of flexibility means we can’t program arbitrarily
complex lighting models in a single pass
– Vertex and Pixel Shaders give us back some of the
flexibility while still being done in hardware (later)
Multipass Rendering
• There are several techniques we will see
that can be performed by either Multipass
rendering or MultiTexturing
• MultiTexturing is newer, and not all graphics cards support it
– Although this is quickly changing
Multipass Rendering
• Multipass also has the advantage that a program
can automatically adjust to the capabilities/speed
of the graphics card it is being run on
• That is, the program can perform the basic passes
it needs to produce an acceptable picture. Then if
it has time (e.g. the frame rate isn’t too low) it can
perform extra passes to improve the quality of the
picture for those users who own better cards.
Multipass Rendering
• Quake III engine uses 10 passes:
– (Passes 1-4: accumulate bump map)
– Pass 5: diffuse lighting
– Pass 6: base texture
– (Pass 7: specular lighting)
– (Pass 8: emissive lighting)
– (Pass 9: volumetric/atmospheric effects)
– (Pass 10: screen flashes)
• The passes in ( ) can be skipped for slower cards
Light Mapping
• Lightmaps are simply texture maps that contain
illumination information (lumels)
• How can lighting be done in a texture map?
– Recall that the diffuse component of the lighting
equation is view independent
– Thus, for static light sources on static objects the
lighting is always the same no matter where the viewer
is located
– The light reflected from a surface can be pre-computed
and stored in a lightmap
Light Mapping
• What are the benefits?
– Speed: the lighting equations can be turned off while
rendering the object that contains the lightmap
– More realism: we are not
constrained by the Phong local
reflection model when
calculating our lighting
• View-independent global models
such as radiosity can even be used
Light Mapping
• The illumination information can be combined
with the texture information, forming a single
texture map
• But there are benefits to not combining them:
– Lightmaps can be reused on different textures
– Textures can be reused with different lightmaps
– Repeating textures don’t look good with repeating light
– Lightmaps are usually stored at a lower resolution so they don’t take up much space anyway
– Extensions will allow us to perform dynamic lightmaps
Light Mapping
• In order to keep the texture and light maps
separate, we need to be able to perform
multitexturing – application of multiple
textures in a single rendering pass
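Conceptually, the per-pixel combine is just a component-wise multiply of the base texel by the pre-computed lumel; a minimal sketch (the Color type and function name are illustrative, not from any particular API):

    struct Color { float r, g, b; };

    // The lighting equation was "baked" into the lightmap ahead of time,
    // so shading a pixel reduces to base * light, component-wise.
    Color applyLightmap(const Color& baseTexel, const Color& lumel)
    {
        return { baseTexel.r * lumel.r,
                 baseTexel.g * lumel.g,
                 baseTexel.b * lumel.b };
    }

With multitexturing this multiply is exactly what a MODULATE combine on the second texture unit performs in hardware.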
Light Mapping
• How do you create light maps?
• When creating a light map for a non-planar object, things get complex fast:
– Need to divide the object into groups of triangles with similar orientations
– These similarly oriented triangles can all be mapped with a single light map
Light Mapping
• Things are usually much easier for standard games, since the objects being light mapped are usually planar:
– Walls
– Ceilings
– Boxes
– Tables
• Thus, the entire planar object can be mapped with
a single texture map
Light Mapping
• Can dynamic lighting be simulated by using
a light map?
• If the light is moving (perhaps attached to the
viewer or a projectile) then the lighting will
change on the surface as the light moves
– The light map values can be partially updated
dynamically as the program runs
– Several light maps at different levels of intensity could
be pre-computed and selected depending on the light’s
distance from the surface
Alpha Mapping
• An Alpha Map contains a single value with
transparency information
– 0 → fully transparent
– 1 → fully opaque
• Can be used to make sections of objects
transparent
• Can be used in combination with standard texture
maps to produce cutouts
– Trees
– Torches
Alpha Mapping
[Image: trees drawn as alpha-mapped textures on flat polygons]
Alpha Mapping
• In the previous tree example, all the trees are texture
mapped onto flat polygons
• The illusion breaks down if the viewer sees the tree from
the side
• Thus, this technique is usually used with another technique
called “billboarding”
– Simply automatically rotating the polygon so it always
faces the viewer
• Note that if the alpha map is used to provide transparency
for texture map colors, one can often combine the 4 pieces
of information (R,G,B,A) into a single texture map
Alpha Mapping
• The only issue as far as the rendering pipeline is
concerned is that the pixels of the object made
transparent by the alpha map cannot change the
value in the z-buffer
– We saw similar issues when talking about whole objects that were partially transparent → render them last with the z-buffer in read-only mode
– However, alpha mapping requires changing z-buffer
modes per pixel based on texel information
– This implies that we need some simple hardware
support to make this happen properly
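One common form of that hardware support is alpha testing, which rejects a fragment before it can write either color or depth; a hedged OpenGL sketch (the 0.5 threshold is an arbitrary illustrative choice):

    #include <GL/gl.h>

    // Fragments whose alpha falls at or below the threshold are discarded
    // before the z-buffer write, so fully transparent texels never occlude.
    void enableAlphaCutout()
    {
        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GREATER, 0.5f);   // keep only fragments with alpha > 0.5
    }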
Environment Mapping
• Environment Mapping is used to approximate
mirrored surfaces
Environment Mapping
• The standard Phong lighting equation
doesn’t take into account reflections
– Just specular highlights
• Raytracing (a global model) bounces rays
off the object in question and into the world
to see what they hit
Environment Mapping
• Environment Mapping approximates this
process by capturing the “environment” in a
texture map and using the reflection vector
to index into this map
Environment Mapping
• The basic steps are as follows:
– Generate (or load) a 2D map of the environment
– For each pixel that contains a reflective object, compute
the normal at the location on the surface of the object
– Compute the reflection vector from the view vector (V)
and the normal (N) at the surface point
– Use the reflection vector to compute an index into the
environment map that represents the objects in the
reflection direction
– Use the texel data from the environment map to color
the current pixel
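A small sketch of the reflection-vector step, assuming V is the unit vector from the surface point toward the viewer and N is the unit surface normal (the Vec3 type and function names are illustrative):

    struct Vec3 { float x, y, z; };

    float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // R = 2(N.V)N - V : the mirror reflection of the view vector about N
    Vec3 reflectView(const Vec3& V, const Vec3& N)
    {
        float d = 2.0f * dot(N, V);
        return { d*N.x - V.x, d*N.y - V.y, d*N.z - V.z };
    }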
Environment Mapping
• Put into texture mapping terminology:
– The projector function converts the reflection vector (x,
y, z) to texture parameter coordinates (u, v)
• There are several such projector functions in
common use today for environment mapping:
– Cubic Mapping
– Spherical Mapping
– Parabolic mapping
Cubic Environment Mapping
• The map is constructed by placing a camera at the
center of the object and taking pictures in 6 directions
Cubic Environment Mapping
• Or the map can be easily created from
actual photographs to place CG objects into
real scenes (Abyss, T2, Star Wars)
Cubic Environment Mapping
• When the object being mapped moves, then
the maps need to change
– Can be done in real-time using multipass
• 6 rendering passes to accumulate the environment
map
• 1 rendering pass to apply the map to the object
– Can be done with actual photographs
• Take 6 pictures at set locations along the path
• Warp the images to create intermediate locations
Cubic Environment Mapping
• How to define the projector function:
– The reflection-vector coordinate with the largest magnitude selects the corresponding face
– The remaining two coordinates are divided by the absolute value of the largest coordinate
• They now range over [-1..+1]
– Then they are remapped to [0..1] and used as our texture parameter space coordinates on the particular face selected (sketched in code below)
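A sketch of that projector function; the face numbering is an illustrative assumption, and real cube-map APIs (e.g. OpenGL) add per-face sign and orientation conventions on top of this:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    void cubeMapLookup(const Vec3& r, int& face, float& u, float& v)
    {
        float ax = std::fabs(r.x), ay = std::fabs(r.y), az = std::fabs(r.z);
        float sc, tc, ma;   // the two remaining coordinates and the major-axis magnitude

        if (ax >= ay && ax >= az) { face = (r.x > 0) ? 0 : 1; ma = ax; sc = r.y; tc = r.z; }
        else if (ay >= az)        { face = (r.y > 0) ? 2 : 3; ma = ay; sc = r.x; tc = r.z; }
        else                      { face = (r.z > 0) ? 4 : 5; ma = az; sc = r.x; tc = r.y; }

        u = 0.5f * (sc / ma + 1.0f);   // divide by |major|, then remap [-1..+1] -> [0..1]
        v = 0.5f * (tc / ma + 1.0f);
    }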
Cubic Environment Mapping
• Just like with normal texture mapping, the texture
coordinates are computed at the vertices and then
interpolated across the triangle
• However, this poses a problem when 2 vertices
reflect onto different cube faces
• The software solution to this is to subdivide the
problematic polygon along the EM cube edge
• The hardware solution puts reflection interpolation
and face selection onto the graphics card
– This is what most modern hardware does
Cubic Environment Mapping
• The main advantages of cube maps:
– Maps are easy to create (even in real-time)
– They are view-independent
• The main disadvantage of cube maps:
– Special hardware is needed to perform the face
selection and reflection vector interpolation
Spherical Environment Mapping
• The map is obtained by orthographically
projecting an image of a mirrored sphere
– Map stores colors seen by reflected rays
Spherical Environment Mapping
• The map can be obtained
from a synthetic scene by:
– Raytracing
– Warping automatically
generated cubic maps
• The map can be obtained
from the real world by:
– Photographing an actual
mirrored sphere
Spherical Environment Mapping
• Note that the sphere map contains information about both the environment in front of the sphere and in back of the sphere
Spherical Environment Mapping
• To map the reflection vector R to the sphere map, the following equations are used:
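The mapping referred to here is the standard sphere-map projection (the same one used by OpenGL's GL_SPHERE_MAP texture-coordinate generation); a sketch, with R the eye-space reflection vector:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // m = 2 * sqrt(Rx^2 + Ry^2 + (Rz + 1)^2),  u = Rx/m + 1/2,  v = Ry/m + 1/2
    void sphereMapUV(const Vec3& R, float& u, float& v)
    {
        float m = 2.0f * std::sqrt(R.x*R.x + R.y*R.y + (R.z + 1.0f)*(R.z + 1.0f));
        u = R.x / m + 0.5f;
        v = R.y / m + 0.5f;
    }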
Spherical Environment Mapping
• Some disadvantages of Spherical maps:
– Maps are hard to create on the fly
– Sampling is non-linear
– Sampling is non-uniform
• View-point dependent!
• Some advantages of Spherical maps:
– No interpolation across map seams
• Normal texture mapping hardware can be used
Parabolic Environment Mapping
• Similar to Spherical maps, but 2 parabolas
are used instead of a single sphere
• Each parabola forms an environment map
– One for the front, one for the back
– Image shown
is a single
parabola
Parabolic Environment Mapping
• The maps are still circles in 2D
– The following is a comparison of the 2 parabolic maps
(left) to a single spherical map (right)
Parabolic Environment Mapping
• The main advantages of the parabolic maps:
– Sampling is fairly uniform
• They are view-independent!
– Can be performed on most graphics hardware that
supports texturing
• Interpolation between vertices even over seam between front
and back maps can be done with a trick
• The main disadvantage of parabolic maps:
– Creating the map is difficult
• Cube maps are easily created from both real and synthetic
environments (even on the fly)
• Sphere maps are easily created from real-world scenes
General Environment Mapping
• Potential problems with Environment Maps:
– Object must be small w.r.t. environment
– No self-reflections (only convex objects)
– Separate map is required for each object in the
scene that is to be environment mapped
– Maps may need to be changed whenever the
viewpoint changes (i.e. may not be viewpoint
independent – depends on map type)
Gloss Mapping
• Not all objects are uniformly shiny over their
surface
– Tile floors are worn in places
– Metal has corrosion in spots
– Partially wet surfaces
• Gloss mapping is a way to
adjust the amount of specular
contribution in the lighting
equation
Gloss Mapping
• The lighting equations can be computed at the
vertices and the resulting values can be
interpolated across the surface
– Similar to Gouraud shading
• But the diffuse and specular contributions must be
interpolated across the pixels separately
• This is because the gloss map contains a single
value that controls the specular contribution on a
per pixel basis
– Adjusts the Ks value, not the n (shininess) value
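Per pixel, the combine is then a simple sketch like the following, assuming the diffuse and specular terms have been interpolated separately across the triangle and gloss is the single [0..1] value read from the gloss map (types and names are illustrative):

    struct Color { float r, g, b; };

    // The gloss value scales only the specular contribution
    // (it acts on Ks, not on the shininess exponent n).
    Color shadeWithGloss(const Color& diffuse, const Color& specular, float gloss)
    {
        return { diffuse.r + gloss * specular.r,
                 diffuse.g + gloss * specular.g,
                 diffuse.b + gloss * specular.b };
    }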
Gloss Mapping
• This is more complex than Gouraud shading:
– 2 values (diffuse / specular) need to be interpolated across the
surface rather than just the final color
– They need to be combined per pixel rather than just at the vertices
• But simpler than Phong shading:
– The normal, lighting, viewing directions still only need to be
computed at the vertices
– The cos (dot products) only need to be computed at the vertices
• Of course, Phong shading produces better specular highlights for surfaces that have large triangles:
– Could use full Phong shading
– Or tessellate the surface finer to capture the specular highlights with Gouraud shading
Gloss Mapping
• What is needed in terms of hardware extensions to the classic rendering pipeline to get gloss mapping to work?
– We need to separate the computation of the diffuse and
specular components
• Or we can simply use a multipass rendering
technique to perform gloss mapping on any
hardware
– 1st pass computes diffuse component
– 2nd pass computes specular with gloss map applied as a
lightmap, adding the result to the 1st pass result
Bump Mapping
• A technique to make a surface appear
bumpy without actually changing the
geometry
• The bump map changes the surface normal
value by some small angular amount
• This happens before the normal is used in
the lighting equations
1D Bump Map Example
[Figure: 1D example showing the surface, the bump map, the “goal” surface, and the “actual” surface]
Bump Mapping
• Advantages of bump mapping over actually
displacing the geometry
– Significantly less modeling time involved
– A simple bump map can be tiled across a large surface
• Disadvantages include
– Hardware must support it because it involves modifying
the normal before the light equation is computed
– The geometry is not actually changed, so any silhouette
edge will not look bumpy
– No self-shading without extra shadow computation
Bump Mapping
• What sort of data is stored in the Bump map?
– Offset Vector Map:
• 2 values, bu and bv, are stored at each location
• bu and bv are used to offset the normal vector in the u and v directions, respectively
Bump Mapping
• What sort of data is stored in the Bump map?
– Heightfield map:
• 1 value, h, is stored at each location
• h is used to represent a height
• The h values are used to derive bu and bv by taking differences of neighbors in the u and v directions, respectively (see the sketch below)
• bu and bv are then applied as in the Offset Vector Map
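A minimal sketch of deriving bu and bv from a heightfield by finite differences; the Heightfield type, wrap-around addressing, and lack of scaling are illustrative assumptions:

    struct Heightfield
    {
        int w, h;
        const float* data;
        float at(int u, int v) const { return data[(v % h) * w + (u % w)]; }
    };

    void bumpOffsets(const Heightfield& hf, int u, int v, float& bu, float& bv)
    {
        bu = hf.at(u + 1, v) - hf.at(u, v);   // slope in the u direction
        bv = hf.at(u, v + 1) - hf.at(u, v);   // slope in the v direction
    }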
Bump Mapping
• When should you not use Bump Mapping?
– If you have no specular highlights, no moving
lights, and the object in question is also not
moving
• In the above case, you should simply precompute what the bumpy lighting would
look like and store that information in a
Lightmap
Bump Mapping
• The cost of bump mapping
– Classical bump mapping varies the normal at each
back-projected pixel and computes a full illumination
equation to calculate a light value for that pixel
• This sounds like: Phong shading with the
interpolated normal adjusted before being used in
the light equation
– Recall that Phong shading is not currently a real-time
option (in OpenGL or DirectX)
• Phong shading can be used in batch-processing
animation systems
Bump Mapping
• So, how do we simulate Bump Mapping in
real-time?
– Emboss Bump Mapping
– Dot Product Bump Mapping (DOT3)
– Environment Map Bump Mapping (EMBM)
Emboss Bump Mapping
• This employs a technique borrowed from 2D
image processing, called embossing
• Recall the diffuse lighting eqn: Id = Ii (L•N)
– True bump mapping adjusts N per pixel
– Emboss bump mapping approximates (L•N)
• A heightfield is used to describe the surface offset
– First derivative of heightfield represents slope m
– m is used to increase/decrease base diffuse value
– (Base diffuse value + m) approximates (L•N) per pixel
Emboss Bump Mapping
• Embossing approximates the derivative
– Look up height H0 at point (u, v)
– Look up height H1 at a point perturbed slightly toward the light source, (u+Δu, v+Δv)
– Subtract the original height H0 from the perturbed height H1
– The difference represents the instantaneous slope, m = H1 - H0 (see the sketch below)
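In code, the per-pixel approximation amounts to the following (names are illustrative; clamping the result to [0..1] would normally be handled by the blending hardware):

    // baseDiffuse approximates Ii(L.N) at the un-bumped surface;
    // H0 is the height at (u, v) and H1 the height shifted toward the light.
    float embossDiffuse(float baseDiffuse, float H0, float H1)
    {
        float m = H1 - H0;        // instantaneous slope toward the light
        return baseDiffuse + m;   // brightens slopes facing the light, darkens the rest
    }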
Emboss Bump Mapping
[Figure: the original bump (H0); the original bump overlaid with a second bump (H1) perturbed toward the light source; subtracting the original from the second (H1 - H0) brightens slopes facing the light and darkens slopes facing away]
Emboss Bump Mapping
The specific algorithm is:
1. Render the surface with the heightfield applied as a
diffuse monochrome texture
2. Shift all the vertex (u, v) coordinates in the direction
of the light
3. Render the surface with the shifted heightfield as a
diffuse texture, subtracting from the first-pass result
→ this produces the emboss effect (the derivative)
4. Render the surface again with no heightfield,
diffusely illuminated and Gouraud-shaded, adding
this shaded image to the emboss result
Emboss Bump Mapping
[Images: the heightfield, the embossed effect, and the emboss added to the diffuse-lit result]
Emboss Bump Mapping
• The difficult part of the algorithm is determining how
much to shift the vertex values in Step 2
• Need to find the light’s direction relative to the surface
– Transform the light from global to vertex tangent space
• The vertex tangent space coordinate system is defined by:
– The normal, n, of the surface at the vertex in question
– A surface vector, s, that follows the u texture axis
– A surface vector, t, that follows the other texture axis, v
Emboss Bump Mapping
• The following matrix can be used to transform the light vector (the vector from the vertex to the light) into vertex tangent space:

    | Sx Sy Sz 0 |
    | Tx Ty Tz 0 |
    | Nx Ny Nz 0 |
    |  0  0  0 1 |

• The resulting light vector is then projected onto the ST-plane → L'
• The x and y coordinates of L' are used to shift the texture coordinates (u, v) in the direction of the light
• This matrix needs to be computed per vertex to determine the (u, v) offsets per vertex (see the sketch below)
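A sketch of that transform and projection, assuming S, T and N form the tangent-space basis at the vertex and L is the normalized vertex-to-light vector; the texel-sized scale factor is an illustrative assumption:

    struct Vec3 { float x, y, z; };

    float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    void embossShift(const Vec3& L,                               // vertex-to-light, normalized
                     const Vec3& S, const Vec3& T, const Vec3& N, // tangent basis at the vertex
                     float texelSize,                             // roughly one texel in (u, v)
                     float& du, float& dv)
    {
        Vec3 Lts = { dot(S, L), dot(T, L), dot(N, L) };  // light in tangent space
        du = Lts.x * texelSize;                          // project onto the ST-plane:
        dv = Lts.y * texelSize;                          // simply drop the N component
    }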
Emboss Bump Mapping
• Limitations of this method:
– Applies only to diffuse surfaces – specular highlights
are not possible
– When light is directly over a surface, no offset occurs,
so the bumps disappear entirely
– Basic algorithm can’t handle bumps facing away from
the light
– Mipmap filtering cannot be done on the bump map:
• Algorithm is based on shifting approximately one original
texel in the texture map
• As smaller mipmaps levels are used, not enough shifting
occurs and the surface goes flat
Emboss Bump Mapping
• The main advantage of using this method is that it
will run on almost any hardware
– Doesn’t require modification to the pipeline
– Requires multipass rendering or multitexturing
• Nehe Lesson 22 uses this approach
Dot Product Bump Mapping
• Primary bump map method implemented on
modern graphics hardware
• Often called the “DOT3” method
• Instead of storing heights or offsets, actual
surface normals are stored in the map
– Sometimes called a Normal Map
– Each texel contains 3 values: (x, y, z)
• [-1..+1] is mapped to [0..255] for each coordinate
Dot Product Bump Mapping
• Light source locations are transformed into vertex
tangent space at each vertex
– Same as for Emboss, but without the final projection to
the st-plane
• These light source vectors are interpolated across
the surface
– Like colors or depth values are interpolated
• Now we have a light source vector at each pixel
(from above steps) and a normal vector at each
pixel (from normal map)
– Compute (L•N) at each pixel
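Per pixel, the work amounts to decoding the normal-map texel from [0..255] back to [-1..+1] and dotting it with the interpolated tangent-space light vector; a sketch (the 127.5 decode convention is a common but not universal choice):

    struct Vec3 { float x, y, z; };

    // Map each stored byte back to the [-1..+1] range
    Vec3 decodeNormal(unsigned char r, unsigned char g, unsigned char b)
    {
        return { r / 127.5f - 1.0f, g / 127.5f - 1.0f, b / 127.5f - 1.0f };
    }

    // Per-pixel (L.N), clamped so back-facing bumps go to zero
    float dot3(const Vec3& n, const Vec3& lightTS)
    {
        float d = n.x*lightTS.x + n.y*lightTS.y + n.z*lightTS.z;
        return d > 0.0f ? d : 0.0f;
    }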
Dot Product Bump Mapping
• This is often implemented by pretending the light
source vector is the “color” of the vertex
• Then the usual color interpolation is used to
spread the light source vector across the pixels
– Light vectors and colors both have 3 components
• Then a special texture blending function is used to
blend the interpolated surface “colors” (light
vectors) with the texture map values (normals).
– This blending function is often called DOT3
– The hardware and API must support “DOT3” as a
texture blending function (DirectX does)
Environment Map Bump Mapping
• Used to give bump appearance to shiny
reflective surfaces
• The idea is to perturb the (u, v) environment-mapping coordinates by the Δu and Δv differentials found in the bump texture
– Produces a modified reflection vector rather
than a modified normal and therefore distorts
the reflected image
Displacement Mapping
• In Bump Mapping a Heightfield is often used to
adjust the normals of the surface
• In Displacement Mapping the actual vertices are
displaced by the given height along the surface
normal direction
• This can be done in software by simply creating a
highly tessellated surface and moving the vertices
along the normals
– However, highly tessellated surfaces are expensive to
send across the bus to the graphics card
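A sketch of the software version: each vertex of the tessellated surface is pushed along its normal by the height sampled from the map (the Vertex type, the source of the height value, and the scale factor are illustrative assumptions):

    struct Vec3 { float x, y, z; };
    struct Vertex { Vec3 pos, normal; float u, v; };

    // 'height' would normally come from sampling the heightfield at (u, v)
    void displace(Vertex& vert, float height, float scale)
    {
        vert.pos.x += vert.normal.x * height * scale;
        vert.pos.y += vert.normal.y * height * scale;
        vert.pos.z += vert.normal.z * height * scale;
    }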
Displacement Mapping
• Technique originally produced in Pixar’s
Renderman animation renderer (not real-time)
• For real-time, we need to be able to:
– Send a low polygon model to the hardware
– Send a heightfield map to the hardware
– Have the hardware produce a finely tessellated version
of the surface, adjusting the newly created vertices by
sampling the heightfield
• Thus, we must have hardware and API support
Displacement Mapping
• The main advantage Displacement Mapping has over Bump Mapping is that the object will show bumps even at the silhouette