KeyFrame animation
•Underlying technique is interpolation
-The in-between frames are interpolated from the keyframes
-Originally done by armies of underpaid animators
•Interpolating splines are smooth curves that interpolate their control points
•Perfect for keyframe animation
•Time is directly associated with the parameter value, controlling speed
•Anything can be keyframed and interpolated: Position, Orientation, Scale,
Deformation, Patch Control Points (facial animation), Color, Surface normal
•Special interpolation schemes for things like rotations
- Use quaternions to represent rotation and interpolate between quaternions
•Control of parameterization controls speed of animation
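As a minimal illustration of the idea (not from the notes), the sketch below linearly interpolates a 1D keyframed value; a real keyframe system would interpolate positions, orientations, and so on, typically with interpolating splines rather than straight lines. The Keyframe struct and evaluate function are illustrative names.

    #include <stdio.h>

    /* A keyframe stores a time and a value (here a 1D parameter; in practice
       it could be a position, a scale factor, a joint angle, ...). */
    typedef struct { float time; float value; } Keyframe;

    /* Linearly interpolate the keyframed value at time t.
       Assumes keys[] is sorted by time and n >= 2. */
    float evaluate(const Keyframe *keys, int n, float t) {
        if (t <= keys[0].time)   return keys[0].value;
        if (t >= keys[n-1].time) return keys[n-1].value;
        for (int i = 0; i < n - 1; i++) {
            if (t <= keys[i+1].time) {
                /* Map t into [0,1] on this segment, then blend the two keys. */
                float u = (t - keys[i].time) / (keys[i+1].time - keys[i].time);
                return (1.0f - u) * keys[i].value + u * keys[i+1].value;
            }
        }
        return keys[n-1].value;
    }

    int main(void) {
        Keyframe keys[] = { {0.0f, 0.0f}, {1.0f, 10.0f}, {3.0f, 4.0f} };
        for (float t = 0.0f; t <= 3.0f; t += 0.5f)
            printf("t=%.1f  value=%.2f\n", t, evaluate(keys, 3, t));
        return 0;
    }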
Motion Capture
•Extract data from real-world people acting out a scene
-Optical – take video and extract motion
-Magnetic/Radio – attach magnets or transponders and use sensors to track their positions
-Mechanical methods of extracting motion (for small motions)
•All are limited in the complexity of the scenes they can capture
- Solution: Break scenes into smaller pieces and re-construct later
Procedural
•Animation is generated by writing a program that spits out the
position/shape/whatever of the scene over time
•Generally:
-Program some rules for how the system will behave
-Choose some initial conditions for the world
-Run the program, maybe with user input to guide what happens
•Advantage: Once you have the program, you can get lots of motion
•Disadvantage: The animation is generally hard to control, which makes it
hard to tell a story with purely procedural means
Particle system
•Used for everything from explosions to smoke to water
•Basic idea:
-Everything is a particle
-Particles exert forces of some form on each other, and the world, and the
world might push back
-Simulate the system to find out what happens
-Attach something to the particles to render
•Different force rules and different renderings give all different behaviors
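A minimal sketch of the simulate-then-render idea, assuming gravity as the only force and forward Euler integration; the Particle struct and step function are illustrative, not from any particular engine.

    #include <stdio.h>

    typedef struct { float x, y, z; } Vec3;
    typedef struct { Vec3 pos, vel; float life; } Particle;

    #define N 3

    /* One simulation step: accumulate forces (here only gravity),
       integrate with forward Euler, and age the particles. */
    void step(Particle *p, int n, float dt) {
        const Vec3 gravity = { 0.0f, -9.8f, 0.0f };
        for (int i = 0; i < n; i++) {
            p[i].vel.x += gravity.x * dt;
            p[i].vel.y += gravity.y * dt;
            p[i].vel.z += gravity.z * dt;
            p[i].pos.x += p[i].vel.x * dt;
            p[i].pos.y += p[i].vel.y * dt;
            p[i].pos.z += p[i].vel.z * dt;
            p[i].life  -= dt;          /* dead particles would be respawned */
        }
    }

    int main(void) {
        Particle p[N] = {
            { {0,0,0}, { 1, 5, 0}, 2.0f },
            { {0,0,0}, { 0, 6, 1}, 2.0f },
            { {0,0,0}, {-1, 4, 0}, 2.0f },
        };
        for (int frame = 0; frame < 5; frame++) {
            step(p, N, 1.0f / 30.0f);
            /* "Attach something to the particles to render": here just print. */
            printf("frame %d: first particle at (%.2f, %.2f, %.2f)\n",
                   frame, p[0].pos.x, p[0].pos.y, p[0].pos.z);
        }
        return 0;
    }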
Spring mass
•Model objects as systems of springs and masses
•The springs exert forces, and you control them by changing their rest length
•A reasonable, but simple, physical model for muscles
•Advantage: Good looking motion when it works
•Disadvantage: Expensive and hard to control
Physically based
•Create a model based on the physics of a situation, solve for what happens
•Has been applied to: Colliding rigid objects, Cloth, Water, Smoke, Squishy
objects, Humans, New ones every year
•Problem: Expensive, hard to control, and not necessarily realistic
Exact Visibility: Tells you what is visible and only what is visible
- No over-rendering: Warnock's algorithm is an example
- Difficult to achieve efficiently in practice: small detail objects
- In maze-like indoor environments, it can be done extremely efficiently with cells and portals
Cells: simple shapes - rooms in a building, for instance
Portals: the transparent boundaries between cells – doorways between rooms
Rendering
1. Start in the viewer’s cell with the full viewing frustum
2. Render the walls of that room and its contents
3. Recursively clip the viewing frustum to each portal out of the cell, and
call the algorithm on the cell beyond the portal
Advantages
- Extremely efficient - only looks at visible cells: visibility culling
- Easy to modify for approximate visibility - render all of partially visible
cells, let depth buffer clean up
- Can handle mirrors as well - flip world and pretend mirror is a portal
Disadvantages: Restricted to environments with a good cell/portal structure
Shading
Local Shading Models
•Local shading models provide a way to determine the intensity and color of a
point on a surface
- Fast and simple to compute
- Local: they do not require knowledge of the entire scene, because other objects are not considered
What they capture: Approximate effects of global lighting
- Direct illumination from light sources
- Diffuse and Specular components
What they don’t do: Shadows, Mirrors, Refraction, and so on.
Consists of three terms linearly combined:
  I = ka*Ia + Ii*( kd*(L·N) + ks*(H·N)^n ),   with H = (L + V)/2
•Diffuse term kd*Ii*(L·N): the amount of incoming light reflected equally in all directions
•Specular term ks*Ii*(H·N)^n: the amount of light reflected in a mirror-like fashion
•Ambient term ka*Ia: approximates light arriving via other surfaces
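A sketch of evaluating this local model for a single light at a single surface point, assuming the vectors N (normal), L (to light), and V (to viewer) are unit length; the function and parameter names are illustrative.

    #include <math.h>
    #include <stdio.h>

    typedef struct { float x, y, z; } Vec3;

    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  normalize(Vec3 v) {
        float len = sqrtf(dot(v, v));
        Vec3 r = { v.x/len, v.y/len, v.z/len };
        return r;
    }

    /* I = ka*Ia + Ii*( kd*(L.N) + ks*(H.N)^n ), with H the normalized half vector.
       L, N, V are unit vectors (to light, surface normal, to viewer). */
    float shade(Vec3 N, Vec3 L, Vec3 V,
                float ka, float kd, float ks, float n,
                float Ia, float Ii) {
        Vec3 H = normalize((Vec3){ L.x + V.x, L.y + V.y, L.z + V.z });
        float diff = dot(L, N); if (diff < 0.0f) diff = 0.0f;
        float spec = dot(H, N); if (spec < 0.0f) spec = 0.0f;
        return ka*Ia + Ii*(kd*diff + ks*powf(spec, n));
    }

    int main(void) {
        Vec3 N = {0,0,1}, L = normalize((Vec3){1,0,1}), V = {0,0,1};
        printf("intensity = %.3f\n",
               shade(N, L, V, 0.1f, 0.6f, 0.4f, 32.0f, 1.0f, 1.0f));
        return 0;
    }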
Shading Interpolation
•Flat shading: computes shading at a representative point and applies it to the whole polygon
- Advantages: Fast - one shading computation per polygon
- Disadvantages: Inaccurate, What are the artifacts?
•Gouraud interpolation:
- Advantages:
1. Fast - incremental calculations when rasterizing
2. Much smoother - use one normal per shared vertex to get continuity
between faces
- Disadvantages:
1. What are the artifacts?
2. Is it accurate?
•Phong interpolation
- Advantages:High quality, narrow specularities
- Disadvantages:Expensive, still an approximation for most surfaces
Consider two sub-problems of illumination
- Where does the light go? Light transport
- What happens at surfaces? Reflectance models
•The direction L and the intensity I of the light sources are important for a local shading model:
•Various light source types
- Point light source, Directional, Spotlight, Area light: Light from a continuum
of points
Mapping
Texture mapping associates the color of a point with the color in an image
- Establish a mapping from surface points to image points
•Texture Interpolation: linearly interpolate the mapping for other points in
world space
- Straight lines in world space go to straight lines in texture space
•Example: a triangle with vertices (x1, y1), (x2, y2), (x3, y3) carrying texture coordinates (s1, t1), (s2, t2), (s3, t3). For a scanline at height y, interpolate s down the two edges and then across the span between xL and xR (and similarly for t):
  sL = (1 - (y - y2)/(y3 - y2)) * s2 + ((y - y2)/(y3 - y2)) * s3
  sR = (1 - (y - y1)/(y3 - y1)) * s1 + ((y - y1)/(y3 - y1)) * s3
  s  = (1 - (x - xL)/(xR - xL)) * sL + ((x - xL)/(xR - xL)) * sR
Textures are subject to aliasing:
- A polygon point maps into a texture image, essentially sampling the texture
at a point like image resizing
Standard approaches:
- Pre-filtering: Filter the texture down before applying it
- Post-filtering: Take multiple pixels from the texture and filter them before
applying to the polygon fragment
Mipmapping(pre-filtering)
- Interpolate between the two nearest mipmaps using nearest or interpolated
points from each, GL_LINEAR_MIPMAP_LINEAR
Boundary:glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S, p)
• When Mapping outside the texture image
- Repeat: Assume the texture is tiled: GL_REPEAT
- Clamp / Clamp to Edge: the texture coordinates are clamped to valid values, and then used: GL_CLAMP, GL_CLAMP_TO_EDGE
- Can specify a special border color: GL_TEXTURE_BORDER_COLOR,
R,G,B,A
Procedural Texture: Use a function that computes the texture value on the fly
•Advantages:
- Near-infinite resolution with small storage cost
- Idea works for many other things
•Disadvantage: can be slow in many cases
Other Type
•Environment mapping looks up incoming illumination in a map -- Simulates
reflections from shiny surfaces
•Bump-mapping computes an offset to the normal vector at each rendered
pixel -- No need to put bumps in geometry, but silhouette looks wrong
•Displacement mapping adds an offset to the surface at each point -- Like
putting bumps on geometry, but simpler to model
Modeling
Overview
•Modeling is the process of describing an object
•Sometimes the description is an end in itself
•More typically in graphics, the model is then used for rendering
•The computer graphics motto: “If it looks right it is right”
Polygons Dominate (AD)
•Almost Everything can be turned into polygons
•Know how to render polygons quickly
•Many operations are easy to do with polygons
•Memory and disk space is cheap
•Simplicity and inertia
Polygons Aren’t Great (DISAD)
•An approximation to curved surfaces
- But can be as good as you want, at the price of size
- Normal vectors are approximate
- They throw away information
- Most real-world surfaces are curved, particularly natural surfaces
•They can be very unstructured
•It is difficult to perform many geometric operations
Properties of Polygon Meshes
•Convex/Concave - Convexity makes many operations easier: Clipping,
intersection, collision detection, rendering, volume computations, …
•Closed/Open -- Closed if they as a group contain a closed space
- Can’t have “dangling edges” or “dangling faces”
- Every edge lies between two faces
- Closed also referred to as watertight
•Simple
- Faces intersect each other only at edges and vertices
- Edges only intersect at vertices
Polygonal Data structures: three common components
- The location of the vertices
- The connectivity - which vertices make up which faces
- Associated data: normals, texture coordinates, plane equations, …
Polygon Soup Evaluation: vertices are stored directly in each face
•Advantages
- simple to read, write, transmit, etc.
- A common output format from CAD modelers
- The format required for OpenGL
•Disadvantage: No higher order information
- No information about neighbors - hard to find neighboring polygons
- No open/closed information
- No guarantees on degeneracies - Difficult to ensure that polygons meet
correctly
- Wastes memory - each vertex repeated many times
Vertex Indirection
- Put all the vertices in a list
- Each face stores the list indices of its vertices
•Advantages:
- Connectivity is easier to evaluate because vertex equality is obvious
- Saving in storage:
-- Index might be only 2 bytes, and a vertex 12 bytes
-- Each vertex gets used 3-6 times, but is only stored once
-- Normals, texture coords, colors etc. can all be stored the same way
•Disadvantages: Connectivity information is not explicit
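A minimal sketch of vertex indirection: triangle faces store indices into a shared vertex list rather than copies of the vertices. The struct names are illustrative.

    #include <stdio.h>

    typedef struct { float x, y, z; } Vertex;
    typedef struct { unsigned short v[3]; } Triangle;   /* indices into the vertex list */

    /* A unit square split into two triangles: 4 vertices shared by 6 face slots. */
    static const Vertex   verts[4] = { {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0} };
    static const Triangle faces[2] = { {{0,1,2}}, {{0,2,3}} };

    int main(void) {
        for (int f = 0; f < 2; f++) {
            printf("face %d:", f);
            for (int k = 0; k < 3; k++) {
                const Vertex *p = &verts[faces[f].v[k]];   /* one level of indirection */
                printf(" (%g,%g,%g)", p->x, p->y, p->z);
            }
            printf("\n");
        }
        return 0;
    }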
Variant
• Many algorithms can take advantage of neighbor information
- Faces store pointers to their neighbors
- Edges may be explicitly stored
- Helpful for:
-- Building strips and fans for rendering, Collision detection, Mesh decimation
(combines faces), Slicing and chopping, Many other things
- Information can be extracted or explicitly saved/loaded
Normal Vector: give information about the true surface shape
•Per-Face normals: One normal vector for each face, stored as part of the face (as used in flat shading)
•Per-Vertex normals: A normal for every vertex (smooth shading)
- Can keep an array of normals analogous to array of vertices
- Faces store vertex indices and normal indices separately
- Allows for normal sharing independent of vertex sharing
Computing Normal Vector for per-vertex normals:
- Compute per-face normals, then average the normals of the faces surrounding each vertex
-- This is where neighbor information is useful; one choice is whether to use area-weighted averaging
-- Can define a crease angle to avoid smoothing over sharp edges: do not average if the angle between faces is greater than the crease angle
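A sketch of the averaging step for per-vertex normals, assuming triangle faces; the cross product of two edge vectors has magnitude proportional to the face area, so summing un-normalized face normals and normalizing at the end gives an area-weighted average. Crease-angle handling is omitted.

    #include <math.h>
    #include <stdio.h>

    typedef struct { float x, y, z; } Vec3;

    static Vec3 sub(Vec3 a, Vec3 b)   { return (Vec3){ a.x-b.x, a.y-b.y, a.z-b.z }; }
    static Vec3 cross(Vec3 a, Vec3 b) { return (Vec3){ a.y*b.z - a.z*b.y,
                                                       a.z*b.x - a.x*b.z,
                                                       a.x*b.y - a.y*b.x }; }

    #define NV 4
    #define NF 2
    static const Vec3 verts[NV]    = { {0,0,0}, {1,0,0}, {1,1,0}, {0,0,1} };
    static const int  faces[NF][3] = { {0,1,2}, {0,3,1} };

    int main(void) {
        Vec3 normals[NV] = { {0,0,0} };
        /* Accumulate the (area-weighted) normal of every face into its vertices. */
        for (int f = 0; f < NF; f++) {
            Vec3 e1 = sub(verts[faces[f][1]], verts[faces[f][0]]);
            Vec3 e2 = sub(verts[faces[f][2]], verts[faces[f][0]]);
            Vec3 n  = cross(e1, e2);
            for (int k = 0; k < 3; k++) {
                normals[faces[f][k]].x += n.x;
                normals[faces[f][k]].y += n.y;
                normals[faces[f][k]].z += n.z;
            }
        }
        /* Normalize the per-vertex sums. */
        for (int v = 0; v < NV; v++) {
            float len = sqrtf(normals[v].x*normals[v].x +
                              normals[v].y*normals[v].y +
                              normals[v].z*normals[v].z);
            if (len > 0.0f) {
                normals[v].x /= len; normals[v].y /= len; normals[v].z /= len;
            }
            printf("vertex %d normal: (%.2f, %.2f, %.2f)\n",
                   v, normals[v].x, normals[v].y, normals[v].z);
        }
        return 0;
    }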
Storing Other Information
•Colors, Texture coordinates and so on can be treated like vertices or normals
•Lighting/Shading coefficients may be per-face or per-object, rarely per vertex
•Key idea is sub-structuring:
- Faces are sub-structure of objects
Indexed vs. Pointers
•Storing indices of vertices:- Lots of address computations, Works with
OpenGL’s vertex arrays
•Can store pointers directly: -Probably faster because of fewer address
computations, Easier to write, Doesn’t work directly with OpenGL, Messy to
save/load, copy (pointer arithmetic)
Meshes from Scanning
•Laser scanners sample 3D positions
- Uses triangulation, time of flight
- Some take images also for use as textures
- Famous example: Scanning the David
Level of Detail: attempt to balance the resolution of the mesh against the
viewing conditions
- Must have a way to reduce the complexity of meshes
- Must have a way to switch from one mesh to another
- Also called mesh decimation, multi-resolution modeling, and other names
Problems with Polygons
•They are inherently an approximation
-Things like silhouettes can never be perfect without very large numbers of polygons, and corresponding expense
-Normal vectors are not specified everywhere
•Interaction is a problem
-Dragging points around is time consuming
-Maintaining things like smoothness is difficult
•Low level information
-Eg: Hard to increase, or decrease, the resolution
-Hard to extract information like curvature
Parametric Instancing
•Primitives are described by a label and a few parameters
- Cylinder: Radius, length, does it have end-caps, …
- Bolts: length, diameter, thread pitch, …
- A modeling format:
-- Provide software that knows how to draw the object given the parameters, or
knows how to produce a polygonal mesh
-- How you manage the model depends on the rendering style
-- Can be an exact representation
Rendering Instances
•A routine takes parameters and produces a polygonal representation
-Brings parametric instancing into the rendering pipeline
-May include texture maps, normal vectors, colors, etc
-OpenGL utility library (GLu) defines routines for cubes, cylinders, disks, and
other common shapes
•The procedure may be dynamic - For example, adjust the polygon resolution
according to distance from the viewer
Display List
•The list cannot be modified after it is compiled
•When to use display lists: Do the same thing over and over again
•Advantages:
- Can’t be much slower than the original way
- Can be much much faster
•Disadvantages:
- Doesn’t support real parameterized instancing, because you can’t have any
parameters (except transformations)
- Can’t use various commands that would offer other speedups For example,
can’t use glVertexPointer()
Hierarchical Modeling: unites parametric instances into one object
•Represented as a tree, with transformations and instances at nodes
•Rendered by traversing the tree, applying the transformations, and rendering
the instances
•Particularly useful for animation: Human is a hierarchy of body, head, upper
arm, lower arm, etc… Animate by changing the transformations at the nodes
Vitally Important Point:
•Every node has its own local coordinate system.
•This makes specifying transformations much much easier.
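A minimal sketch of traversing such a hierarchy; to keep it short, each node's local transform is just a translation (a real system would store a full 4x4 matrix and compose matrices the same way). The Node struct and draw function are illustrative.

    #include <stdio.h>

    /* Each node has its own local coordinate system; here the local transform is
       only a translation to keep the sketch short. */
    typedef struct Node {
        const char *name;
        float tx, ty, tz;              /* local translation relative to the parent */
        struct Node *children[4];
        int nchildren;
    } Node;

    /* Traverse the tree, composing the parent transform with the local one,
       and "render" each instance by printing its world-space origin. */
    void draw(const Node *n, float px, float py, float pz) {
        float wx = px + n->tx, wy = py + n->ty, wz = pz + n->tz;
        printf("%s at world (%.1f, %.1f, %.1f)\n", n->name, wx, wy, wz);
        for (int i = 0; i < n->nchildren; i++)
            draw(n->children[i], wx, wy, wz);
    }

    int main(void) {
        Node lower = { "lower arm", 0.0f, -3.0f, 0.0f, {0}, 0 };
        Node upper = { "upper arm", 1.5f, -1.0f, 0.0f, {&lower}, 1 };
        Node body  = { "body",      0.0f,  5.0f, 0.0f, {&upper}, 1 };
        /* Animating means changing the node transforms (tx, ty, tz) over time. */
        draw(&body, 0.0f, 0.0f, 0.0f);
        return 0;
    }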
Regularized Set Operations
•Hierarchical modeling is not good enough if objects in the hierarchy intersect
each other
- Transparency will reveal internal surfaces that should not exist
- Computing properties like mass may count the same volume twice
•Solution is to define regularized set operations:
- Every object must be a closed volume (mathematically closed)
- Define mathematical set operations (union, intersection, difference,
complement) to the sets of points within the volume
Constructive Solid Geometry (CSG)
•Based on a tree structure
- The nodes are set operations: union, intersection or difference
- The edges of the tree have transformations associated with them
- The leaves contain only geometry
•Allows complex shapes with only a few primitives
•Motivated by computer aided design and manufacture
- A common format in CAD products
Rendering
•Normals and texture coordinates come from the underlying primitives
•Some rendering algorithms can render CSG directly
- Raytracing, Scan-line with an A-buffer, Do 2D with tessellators in OpenGL
•For OpenGL and other polygon renderers, must convert CSG to polygonal
representation
- Must remove redundant faces, and chop faces up
- Basic algorithm: Split polygons until they are inside, outside, or on boundary.
Then choose appropriate set for final answer.
- Generally difficult, messy and slow
- Numerical imprecision is the major problem
•Advantages:
- Good for describing many things, particularly machined objects
- Better if the primitive set is rich
- Early systems used quadratic surfaces
- Moderately intuitive and easy to understand
•Disadvantages:
- Not a good match for polygon renderers
- Some objects may be very hard to describe
•Geometric computations are sometimes easy, sometimes hard
•A volume representation (hence solid in the name)
- Boundary (surface representation) can also work
Sweep: the path may be any curve
•Define a polygon by its edges, and sweep it along a path
•The path taken by the edges form a surface - the sweep surface
•Special cases
- Surface of revolution: Rotate edges about an axis
- Extrusion: Sweep along a straight line
•The polygon may be transformed as it is moved along the path
- Scale, rotate with respect to path orientation, …
•One common way to specify is:
- Give a poly-line (sequence of line segments) as the path
- Give a poly-line as the shape to sweep
- Give a transformation to apply at the vertex of each path segment
•Difficult to avoid self-intersection
Rendering Sweeps
•Convert to polygons
- Break path into short segments, create a copy of the sweep polygon at each
segment, join the corresponding vertices between the polygons
- May need things like end-caps on surfaces of revolution and extrusions
•Normals come from sweep polygon and path orientation
•Sweep polygon defines one texture parameter, path defines the other
Spatial Enumeration: describe something by the space it occupies
- For example, break the volume of interest into lots of tiny cubes, and say
which cubes are inside the object
- Works well for things like medical data such as MRI or CAT scans
-- Data is associated with each voxel (volume element)
•Problem to overcome:
- The number of voxels may explode
- The number of voxels grows with the cube of linear dimension
Octrees (and Quadtrees)
•Build a tree where successive levels represent better resolution
• Large uniform spaces result in shallow trees
•Quadtree is for 2D, Octree is for 3D (eight children for each node)
Rendering Octrees
•Volume rendering renders octrees and associated data directly
- A special area of graphics, visualization, not covered in this class
•Can convert to polygons by a few methods:
- Just take faces of voxels that are on the boundary
- Find iso-surfaces within the volume and render those
- Typically do some interpolation (smoothing) to get rid of the artifacts from
the voxelization
•Typically render with colors that indicate something about the data, but other
methods exist
Spatial Data Structures: A data structure specifically designed for storing
information of a spatial nature
• Octrees are an example of a spatial data structure
• In graphics, octrees are frequently used to store information about where
polygons, or other primitives, are located in a scene
•Speeds up many computations by making it fast to determine when something
is relevant or not
•Others include BSP trees, KD-Trees, Interval trees, …
Blobs and Metaballs
•Define the location of some points
•Define a function of the distance to each given point (x,y,z), sum these functions up, and use the sum as an implicit function
•Question: If I have two special points, in 2D, and my function is just the
distance, what shape results?
•More generally, use Gaussian functions of distance, or other forms
- Various results are called blobs or metaballs
Rendering Implicit Surfaces
•Some methods can render them directly
- Raytracing - find intersections with Newton’s method
•For polygonal renderer, must convert to polygons
•Advantages:
- Good for organic looking shapes eg human body
- Reasonable interfaces for design
•Disadvantages:
- Difficult to render and control when animating
- Being replaced with subdivision surfaces, it appears
Production Rules: model an object by a set of rules that are followed to generate it
•Works best for things like plants:
- Start with a stem
- Replace it with stem + branches
- Replace some part with more stem + branches, and so on
•Essentially, generate a string that describes the object by replacing sub-strings
with new sub-strings
•Render by generating geometry
- Parametric instances of branch, leaf, flower, etc
- Or polygons, or blobs, or …
Shortcoming of the previous method
•Meshes are large, difficult to edit, require normal approximations, …
•Parametric instancing has a limited domain of shapes
•CSG is difficult to render and limited in range of shapes
•Implicit models are difficult to control and render
•Production rules work in highly limited domains
Parametric curves and surfaces address many of these issues
-More general, Easier to control
•Parametric curves are intended to provide the generality of polygon meshes
but with fewer parameters for smooth surfaces
•Fewer parameters makes it faster and easier to create and edit a curve
•Normal fields can be properly defined everywhere
•Parametric curves are easier to animate than polygon meshes
Parametric curve: use a parameter t to control the position along the curve
Hermite Spline
•A spline is a parametric curve defined by control points
-A spline was originally a piece of flexible wood used to draw smooth curves
-The control points are adjusted by the user to control the shape of the curve
•A Hermite spline is specified by the endpoints of the curve and, for a cubic Hermite spline, the parametric derivatives dx/dt, dy/dt, dz/dt of the curve at the endpoints
 2 3 0 0 t 3 
Hermite Spline
 2  3 0 1  2 
 t 
•The form of A cubic spline has degree 3: x  x1 x0 x1 x0 
x  at 3  bt 2  ct  d
 1 0 0  t 
 
 2 1 0  1 
1

1
Basic Function: A point on a Hermite curve is obtained by multiplying each
control point by some function which called basis functions and summing
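A sketch of evaluating one cubic Hermite segment using the basis functions above, in 1D (apply per coordinate for x, y, z):

    #include <stdio.h>

    /* Cubic Hermite segment: p0, p1 are endpoint values, m0, m1 are the
       parametric derivatives at the endpoints, t in [0,1]. */
    float hermite(float p0, float p1, float m0, float m1, float t) {
        float t2 = t * t, t3 = t2 * t;
        float h00 =  2*t3 - 3*t2 + 1;    /* blends p0 */
        float h01 = -2*t3 + 3*t2;        /* blends p1 */
        float h10 =    t3 - 2*t2 + t;    /* blends m0 */
        float h11 =    t3 -   t2;        /* blends m1 */
        return h00*p0 + h01*p1 + h10*m0 + h11*m1;
    }

    int main(void) {
        for (float t = 0.0f; t <= 1.001f; t += 0.25f)
            printf("t=%.2f  x=%.3f\n", t, hermite(0.0f, 1.0f, 3.0f, 3.0f, t));
        return 0;
    }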
Bezier Curves
•The curve is x(t) = sum_{i=0..d} p_i * B_i^d(t)
•Different choices of basis functions give different curves
•For Bezier curves, two control points define the endpoints, and two control the tangents at the endpoints in a geometric way
•The functions B_i^d are the Bernstein polynomials of degree d:
  B_i^d(t) = (d choose i) * t^i * (1 - t)^(d - i)
•The user supplies the d+1 control points, p_0 … p_d
•The first and last control points are interpolated
•The tangent to the curve at the first control point is along the line joining the first and second control points (and similarly at the last control point)
•The curve lies entirely within the convex hull of its control points
- The Bernstein polynomials sum to 1 and are everywhere positive
Rendering
•Interpolate a fixed set of parameter values
•Advantage: Very simple
•Disadvantages: Expensive to evaluate the curve at many points; no easy way of knowing how finely to sample, and the sampling rate may need to differ along the curve; no easy way to adapt - in particular, it is hard to measure the deviation of a line segment from the exact curve
Sub-division method
•Recall that a Bezier curve lies entirely within the convex hull
•If the control vertices are nearly collinear, then the convex hull is a good
approximation. A cubic Bezier curve can be broken into two shorter cubic
Bezier curves that exactly cover the original curve
•This suggests a rendering algorithm:
- Keep breaking the curve into sub-curves, Stop when the control points of each
sub-curve are nearly collinear, Draw the control polygon - the polygon formed
by the control points
De Casteljau's Algorithm
•Repeatedly interpolate between adjacent control points at the chosen parameter (the figure uses t = 0.25): from P0, P1, P2, P3 compute M01, M12, M23, then M012, M123, then M0123
•M0123 is the point on the curve at t, and the two halves have control polygons (P0, M01, M012, M0123) and (M0123, M123, M23, P3)
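A sketch of de Casteljau evaluation for a 1D cubic Bezier: repeated linear interpolation produces the M points above, the last of which is the point on the curve, and the intermediate points are the control points of the two sub-curves used by the subdivision renderer.

    #include <stdio.h>

    static float lerp(float a, float b, float t) { return (1.0f - t) * a + t * b; }

    /* Evaluate a cubic Bezier curve by repeated linear interpolation.
       The intermediate points are also the control points of the two sub-curves. */
    float decasteljau(float p0, float p1, float p2, float p3, float t) {
        float m01  = lerp(p0, p1, t);
        float m12  = lerp(p1, p2, t);
        float m23  = lerp(p2, p3, t);
        float m012 = lerp(m01, m12, t);
        float m123 = lerp(m12, m23, t);
        return lerp(m012, m123, t);     /* m0123: the point on the curve */
    }

    int main(void) {
        printf("x(0.25) = %.4f\n", decasteljau(0.0f, 1.0f, 3.0f, 4.0f, 0.25f));
        return 0;
    }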
Invariance
•Translational invariance means that translating the control points and
evaluating the curve is the same as evaluating and translating curve
•Rotational invariance means that rotating the control points and evaluating the
curve is the same as evaluating and rotating the curve
•These properties are essential for parametric curves used in graphics
•Bezier curves, Hermite curves and everything else we will study
•Some forms of curves, rational splines, are also perspective invariant
Longer Curve
•A single cubic Bezier or Hermite curve can only capture a small class of
curves - At most 2 inflection points
•One solution is to raise the degree: at the expense of more control points and
higher degree polynomials, control is not local
•Join pieces of cubic curve together into piecewise cubic curves
- Total curve can be broken into pieces, each of which is cubic
- Local control: Each control point only influences a limited part
- Interaction and design is much easier
Continuity: When two curves are joined, we typically want some degree of
continuity across the boundary (the knot)
- C0, “C-zero”, point-wise continuous, C1, “C-one”, continuous derivatives, C2,
“C-two”, continuous second derivatives
Achieving Continuity
•For Hermite curves, the user specifies the derivatives, so C1 is achieved simply
by sharing points and derivatives across the knot
•For Bezier curves:
-They interpolate their endpoints, so C0 is achieved by sharing control points
-The parametric derivative is a constant multiple of the vector joining the
first/last 2 control points
-So C1 is achieved by setting P0,3=P1,0=J, and making P0,2 and J and P1,1
collinear, with J-P0,2=P1,1-J
-C2 comes from further constraints on P0,1 and P1,2
DOF and Locality
•The number of degrees of freedom (DOF) can be thought of as the number of
things a user gets to specify
-- C0 for n piece Bezier curves is 3n+1, C1 is 2n + 2
•Locality refers to the number of curve segments affected by a change in a
control point - Local change affects fewer segments
Geometric Continuity
•Derivative continuity is important for animation: If an object moves along the
curve with constant parametric speed, there should be no sudden jump at the
knots
•For other applications, tangent continuity might be enough
-Curves could be made C1 with a re-parameterization
-The geometric version of C2 is G2, based on curves having the same radius of
curvature across the knot
•What is the tangent continuity constraint for a Bezier curve?
Parametric Surfaces
Implicit Functions: some surfaces can be represented as the vanishing points of functions - places where a function f(x,y,z) = 0
•Some objects are easy to represent this way
- Spheres, ellipses, and similar; more generally, quadric surfaces:
  a*x^2 + b*x + c*y^2 + d*y + e*z^2 + f*z + g = 0
Bilinear patch: interpolate the four corner control points P0,0, P1,0, P0,1, P1,1
  x(s,0) = (1 - s)*P0,0 + s*P1,0
  x(s,1) = (1 - s)*P0,1 + s*P1,1
  x(s,t) = (1 - t)*x(s,0) + t*x(s,1)
With the blending functions F0(s) = 1-s, F1(s) = s, F0(t) = 1-t, F1(t) = t this is
  x(s,t) = sum_{i=0..1} sum_{j=0..1} P_{i,j} * F_i(s) * F_j(t)
Tensor Product Surface Patches
•Defined over a rectangular domain: 0 ≤ s < 1, 0 ≤ t < 1
•Use a rectangular grid of control points to specify the surface
- 4 points in the bi-linear case on the previous slide, more in others
•Surface takes the form x(s,t) = sum_{i=0..ds} sum_{j=0..dt} P_{i,j} * F_i(s) * F_j(t), for some functions F_i(s) and F_j(t)
Bezier Patches
•x(s,t) = sum_{i=0..n} sum_{j=0..m} P_{i,j} * B_i^n(s) * B_j^m(t)
•Edge curves are Bezier curves
•Any curve of constant s or t is a Bezier curve
•One way to think about it:
-Each row of 4 control points defines a Bezier curve in s
-Evaluating each of these curves at the same s provides 4 virtual control points
-The virtual control points define a Bezier curve in t
-Evaluating this curve at t gives the point x(s,t) = sum_j sum_k P_{j,k} * B_{j,d}(s) * B_{k,d}(t)
•Instead of subdivision, view splitting as refinement:
-Inserting additional control points, and knots, between the existing ones
-Useful not just for rendering - also a user interface tool
-Done by the Oslo algorithm
Properties of Bezier patches
•The patch interpolates its corner points from the interpolation of the
underlying curves
•The tangent plane at each corner interpolates the corner vertex and the two
neighboring edge vertices
- The tangent plane is the plane perpendicular to the normal vector at a point; the tangent plane property derives from the curve tangent properties and from the way normal vectors are computed
•The patch lies within the convex hull of its control vertices
Matrix form
x(s,t) = S * M_B * P * M_B^T * T
where S = [s^3 s^2 s 1] is a row vector, T = [t^3 t^2 t 1]^T is a column vector, P is the 4x4 grid of control points P0,0 … P3,3, and M_B is the cubic Bezier basis matrix:
  M_B = [ -1  3 -3  1
           3 -6  3  0
          -3  3  0  0
           1  0  0  0 ]
Bezier patch meshes
-Patches meet along complete edges
-Each patch must be a quadrilateral
Bezier Mesh Continuity
-C0 continuity along an edge? Share control points at the edge
-C1 continuity along an edge? Control points across edge are collinear and
equally spaced
-C2 continuity along an edge? Constraints extent to points farther from the edge
•For geometric continuity, constraints are less rigid. Still collinear for G1, but
can be anywhere along the line
•What can you say about the vertices around a corner if there must be C1 continuity at the corner point? They are co-planar
Rendering Bezier Patches
•Option 1: Evaluate parameter values, join up with triangles
•Option 2: Subdivide
Computing Normal Vectors
•The partial derivative in the s direction is one tangent vector
•The partial derivative in the t direction is another
•Take their cross product, and normalize, to get the surface normal vector
  ∂x/∂s at (s,t) = sum_{i=0..n} sum_{j=0..m} P_{i,j} * (dB_i^n(s)/ds) * B_j^m(t)   (and similarly for ∂x/∂t)
  n = ∂x/∂s × ∂x/∂t,   n_hat = n / |n|
Problems with Bezier Curve
•Requires using many segments
•Maintaining continuity requires constraints on the control points
-Cannot arbitrarily move control and automatically get continuity
-The constraints must be explicitly maintained
-It is not intuitive to have control points that are not free
B-Spline
•Automatically take care of continuity, with exactly one control vertex per
curve segment
•Many types : degree may be different (linear, quadratic, cubic,…) and they
may be uniform or non-uniform
•With uniform B-splines, continuity is always one degree lower than the degree
of each curve piece
Uniform Cubic B-spline on [0,1)
•Four control points define the curve for 0 ≤ t < 1 (t is the parameter), giving 4 degrees of freedom; the basis functions are called blending functions - they describe how to blend the control points to make the curve:
  x(t) = sum_{i=0..3} P_i * B_{i,4}(t)
       = (1/6)*[ (1 - 3t + 3t^2 - t^3)*P0 + (4 - 6t^2 + 3t^3)*P1 + (1 + 3t + 3t^2 - 3t^3)*P2 + t^3*P3 ]
or, in matrix form,
  x(t) = (1/6) * [P0 P1 P2 P3] * M * [t^3 t^2 t 1]^T,  where
  M = [ -1  3 -3  1
         3 -6  0  4
        -3  3  3  1
         1  0  0  0 ]
Properties
•The blending functions sum to one, and are positive everywhere
•The curve does not interpolate its endpoints
Uniform Cubic B-splines on [0,m)
•Curve: X(t) = sum_k P_k * B_{k,d}(t), where
-n is the total number of control points
-d is the order of the curves, 2 ≤ d ≤ n+1
-B_{k,d} are the uniform B-spline blending functions of degree d-1
-P_k are the control points
-Each B_{k,d} is only non-zero for a small range of t values, so the curve has local control; the blending functions are created by convolving a box on (0,1) with itself
Uniform B-spline at arbitrary t
•The interval from i to i+1 is essentially the same as the interval from 0 to 1
-The parameter value is offset by i
-To evaluate at an arbitrary parameter value t:
-- Find the greatest integer less than or equal to t: i = floor(t)
-- Evaluate: X(t) = sum_{k=0..3} P_{i+k} * B_{k,4}(t - i)
-- Valid parameter range: 0 ≤ t < n-3
•To create a loop, use control points from the start of the curve when computing values at the end of the curve:
  X(t) = sum_{k=0..3} P_{(i+k) mod n} * B_{k,4}(t - i)
•Any parameter value is now valid
B-Splines and Interpolation, Continuity
•Uniform B-splines do not interpolate control points, unless:
- You repeat a control point three times, But then all derivatives also vanish
(=0) at that point, To do interpolation with non-zero derivatives you must use
non-uniform B-splines with repeated knots
•To align tangents, use double control vertices
•Uniform B-splines are automatically C2
B-Splines surfaces
•Continuity is automatically obtained everywhere
•BUT, the control points must be in a rectangular grid
Non-Uniform B-Spline
•Uniform B-splines are a special case of B-splines
•Each blending function is the same
•A blending function starts at t=-3, t=-2, t=-1, …
•Each blending function is non-zero for 4 units of the parameter
•Non-uniform B-splines can have blending functions starting and stopping
anywhere, and the blending functions are not all the same
B-Spline Knot Vectors
•Knots: Define a sequence of parameter values at which the blending functions
will be switched on and off
•Knot values are increasing, and there are n+d+1 of them, forming a knot
vector: (t0,t1,…,tn+d) with t0  t1  …  tn+d
•Curve only defined for parameter values between td-1 and tn+1
•These parameter values correspond to the places where the pieces of the curve
meet
•There is one control point for each value in the knot vector
•The blending functions are recursively defined in terms of the knots and the curve degree:
  B_{k,1}(t) = 1 if t_k ≤ t < t_{k+1}, 0 otherwise
  B_{k,d}(t) = ((t - t_k) / (t_{k+d-1} - t_k)) * B_{k,d-1}(t) + ((t_{k+d} - t) / (t_{k+d} - t_{k+1})) * B_{k+1,d-1}(t)
B-Spline Blending Functions
•The recurrence relation starts with the 1st order B-splines - just boxes - and builds up successively higher orders
•This algorithm is the Cox - de Boor algorithm
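A direct, unoptimized sketch of the Cox - de Boor recurrence; terms with a zero denominator are taken to be zero, as is conventional. knot[] is the knot vector.

    #include <stdio.h>

    /* B_{k,d}(t) from the Cox - de Boor recurrence.
       d = 1 gives the box functions; higher d builds on lower orders. */
    float bspline_basis(int k, int d, float t, const float *knot) {
        if (d == 1)
            return (knot[k] <= t && t < knot[k+1]) ? 1.0f : 0.0f;
        float left = 0.0f, right = 0.0f, den;
        den = knot[k+d-1] - knot[k];
        if (den != 0.0f) left  = (t - knot[k]) / den * bspline_basis(k, d-1, t, knot);
        den = knot[k+d] - knot[k+1];
        if (den != 0.0f) right = (knot[k+d] - t) / den * bspline_basis(k+1, d-1, t, knot);
        return left + right;
    }

    int main(void) {
        /* A uniform knot vector; B_{0,4} is then one cubic blending function. */
        float knot[] = { 0, 1, 2, 3, 4, 5, 6, 7 };
        for (float t = 0.0f; t < 4.0f; t += 1.0f)
            printf("B_{0,4}(%.1f) = %.4f\n", t, bspline_basis(0, 4, t, knot));
        return 0;
    }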
Uniform Cubic B-Spline
•Uniform cubic B-splines arise when the knot vector is of the form (-3,-2,-1,0,1,…,n+1)
•Each blending function is non-zero over a parameter interval of 4
•All of the blending functions are translations of each other Bk,d(t)=Bk+1,d(t+1)
•The blending functions are the result of convolving a box with itself d times,
although we will not use this fact
(Figure: plot of the uniform cubic B-spline blending functions as functions of t)
Rendering B-Splines
•Evaluate at a set of parameter values and join with lines
•Use a subdivision rule to break the curve into small pieces, and then join
control points
Refining Uniform Cubic B-Splines
•Basic idea: generate 2n-3 new control points:
-Add a new control point in the middle of each curve segment: P'0,1, P'1,2, P'2,3, …, P'n-2,n-1
-Modify the existing control points: P'1, P'2, …, P'n-2
-Throw away the first and last control points
•Rules:  P'i,j = (Pi + Pj) / 2,   P'i = (Pi-1 + 6*Pi + Pi+1) / 8
•If the curve is a loop, generate 2n new control points by averaging across the loop
•When drawing, don't draw the control polygon; join the X(i) points
Shading Revisited
•To produce photorealistic pictures like a photograph
-A better metric is perceptual: the image should generate a target set of
perceptions
-Applications include: Film special effects, Training simulations, Computer
games, Architectural visualizations, Psychology experiments, …
•To achieve the goal of photorealism, we must think carefully about light and
how it interacts with surfaces
Light Transport concerned with how much light arrives at any surface, and
from what direction
•The physical quantity is radiance: How much light is traveling along a line in
space per unit foreshortened area per unit solid angle
•Similar problems arise in radiated heat transport (i.e. satellites)
Radiometry:The study of light distribution:
Rational Curves: each point is the ratio of two curves
- The curve is given in homogeneous form: [x(t), y(t), z(t), w(t)] maps to ( x(t)/w(t), y(t)/w(t), z(t)/w(t) )
- NURBS: x(t), y(t), z(t) and w(t) are non-uniform B-splines
•Advantages:
-Perspective invariant, so can be evaluated in screen space
-Can perfectly represent conic sections: circles, ellipses, etc.
--Piecewise cubic polynomial curves cannot do this
NURBS Non-uniform Rational B-splines
- The curved surface of choice in CAD packages
•Support routines are part of the GLu utility library
•Allows you to specify how they are rendered:
- Can use points constantly spaced in parametric space
- Can use various error tolerances - the good way!
•Allows you to get back the lines that would be drawn
•Allows you to specify trim curves
- Only for surfaces
- Cut out parts of the surface - in parametric space
From B-spline to Bezier
•Recall, a point on the curve can be represented by a matrix equation: x(t) = P M T
•M depends on the representation: M_B-spline or M_Bezier; T is the column vector containing t^3, t^2, t, 1
•By equating points generated by each representation, we can find a matrix M_B-spline->Bezier that converts B-spline control points into Bezier control points
How to choose
•Hermite curves are good for single segments where you know the parametric
derivative or want easy control of it
•Bezier good for single segments or patches where a user controls the points
•B-splines are good for large continuous curves and surfaces
•NURBS are the most general, and are good when that generality is useful, or
when conic sections must be accurately represented (CAD)
Tessellating a sphere
•Tessellation is the process of approximating a surface with a polygon mesh
•One option for tessellating a sphere: step around and up the sphere in constant steps of the spherical angles θ and φ
-Problem: Polygons are of wildly different sizes, and some vertices have very
high degree
•Begin with a coarse approximation to sphere, that uses only triangles
-Two good candidates are platonic solids with triangular faces: Octahedron,
Icosahedron
- They have uniformly sized faces and uniform vertex degree
•Repeat the following process:
-Insert a new vertex in the middle of each edge
-Push the vertices out to the surface of the sphere
-Break each triangular face into 4 triangles using the new vertices
•Advantage
-All the triangles at any given level are the same size
--Relies on the initial mesh having equal sized faces, and properties of the
sphere
-The new vertices all have the same degree
--Mesh is uniform in newly generated areas
-The location and degree of existing vertices does not change
--The only extraordinary points lie on the initial mesh
--Extraordinary points are those with degree different to the uniform areas
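A sketch of the refinement step, starting from a single octahedron face: split each edge at its midpoint, push the new vertices out to the unit sphere, and recurse into the four sub-triangles. The full sphere would apply this to all eight faces.

    #include <math.h>
    #include <stdio.h>

    typedef struct { float x, y, z; } Vec3;

    static Vec3 normalize(Vec3 v) {
        float len = sqrtf(v.x*v.x + v.y*v.y + v.z*v.z);
        return (Vec3){ v.x/len, v.y/len, v.z/len };
    }
    static Vec3 midpoint_on_sphere(Vec3 a, Vec3 b) {
        /* Midpoint of the edge, pushed out to the unit sphere. */
        return normalize((Vec3){ (a.x+b.x)*0.5f, (a.y+b.y)*0.5f, (a.z+b.z)*0.5f });
    }

    /* Recursively split a spherical triangle into 4 until the desired level,
       then emit (here: print) the leaf triangles. */
    static void subdivide(Vec3 a, Vec3 b, Vec3 c, int level) {
        if (level == 0) {
            printf("tri (%.2f,%.2f,%.2f) (%.2f,%.2f,%.2f) (%.2f,%.2f,%.2f)\n",
                   a.x, a.y, a.z, b.x, b.y, b.z, c.x, c.y, c.z);
            return;
        }
        Vec3 ab = midpoint_on_sphere(a, b);
        Vec3 bc = midpoint_on_sphere(b, c);
        Vec3 ca = midpoint_on_sphere(c, a);
        subdivide(a,  ab, ca, level - 1);
        subdivide(ab, b,  bc, level - 1);
        subdivide(ca, bc, c,  level - 1);
        subdivide(ab, bc, ca, level - 1);
    }

    int main(void) {
        /* One face of an octahedron; the full sphere uses all eight faces. */
        subdivide((Vec3){1,0,0}, (Vec3){0,1,0}, (Vec3){0,0,1}, 2);
        return 0;
    }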
Fractal Surface
•Fractals are objects that show self similarity
•Start with a coarse mesh
-Vertices on this mesh won’t move, so they can be used to set mountain peaks
and valleys
•Also defines the boundary
•Mesh must not have dangling edges or vertices
•Every edge and every vertex must be part of a face
•Also define an “up” direction
•Then repeatedly:
-Add new vertices at the midpoint of each edge, and randomly push them up or
down
-Split each face into four, as for the sphere
Fractal Terrain Details
•There are options for choosing where to move the new vertices
- Uniform random offset, Normally distributed offset – small motions more
likely, Procedural rule – eg Perlin noise
•Scaling the offset of new points according to the subdivision level is essential,
For the subdivision to converge to a smooth surface, the offset must be reduced
for each level
•Colors are frequently chosen based on “altitude”
Rendering
•To render, we must be able to find the vertices around a face
•We also require vertex normals for smooth shading
•And we might require texture coordinates
- When an edge is split to create a new vertex, average the endpoint texture
coordinates to get the coordinates for the new vertex
General Subdivision Scheme
•Subdivision schemes also used where there is no “target” surface
•They aim to replace a polygonal mesh with a smooth surface that
approximates the coarse mesh
•Butterfly scheme (for triangular meshes), Catmull-Clark subdivision (for
mostly rectangular meshes, converges to B-splines in uniform regions), Loop’s
scheme (for triangular meshes), Modified butterfly scheme (for triangular
meshes), Many more…
Butterfly Scheme
•Subdivides: Each edge is split, Each face is split into four
•Rules are defined for computing the splitting vertex of each edge
•Basic rule for a uniform region: take a weighted sum of the neighboring vertices, labeled a, b, c, d in the stencil
•The weights define the rules:
  a: 1/2,   b: 1/8 + 2w,   c: -1/16 - w,   d: w
Modified Butterfly Scheme
•The butterfly scheme must be modified to deal with edges with an endpoint of degree ≠ 6
•In that case, compute the new vertex based only on the neighbors of the extraordinary vertex
•If an edge has two extraordinary endpoints, average the results from each
endpoint to get the new endpoint
•The modified butterfly scheme is provably continuous about extraordinary
vertices -- Proof formulates subdivision as a matrix operator and does eigenanalysis of subdivision matrix
•Weights (v is the extraordinary vertex of degree N, e_0 … e_{N-1} are its neighbors in order):
  N = 3:  v: 3/4,  e0: 5/12,  e1: -1/12,  e2: -1/12
  N = 4:  v: 3/4,  e0: 3/8,  e1: 0,  e2: -1/8,  e3: 0
  N ≥ 5:  v: 3/4,  e_j: (1/N) * ( 1/4 + cos(2*pi*j/N) + (1/2)*cos(4*pi*j/N) )
The Global Illumination Equation:
  L(x, θo, φo) = Le(x, θo, φo) + ∫ ρbd(x, θo, φo, θi, φi) * Li(x, θi, φi) * cosθi dω
  (light leaving = exitance + sum over incoming directions of BRDF × incoming light, i.e. the incoming light reflected at the point)
Subdivision Scheme
•Basic idea: Start with something coarse, and refine it into smaller pieces,
smoothing along the way
Reflectance Modeling is concerned with the way in which light reflects off surfaces
-Needed to decide what surfaces look like when solving the light transport problem
•Physical quantity is BRDF: Bidirectional Reflectance Distribution Function
-A function of a point on the surface, an incoming light direction, and an
outgoing light direction to tell you how much of the light that comes in from
one direction goes out in another direction
Assumption
•Diffuse surfaces: Uniformly reflect all the light they receive
--Sum up all the light that is arriving: Irradiance
--Send it back out in all directions
-A reasonable approximation for matte paints, soot, carpet
•Perfectly specular surfaces: Reflect only in the mirror direction
•Rough specular surfaces: Reflect around the mirror direction
•Diffuse + Specular:A diffuse component and a specular component
Light Sources: emit light: exitance
•Different light sources are defined by how they emit light:
-How much they emit in each direction from each point on their surface
-For some algorithms, “point” lights cannot exist, for other algorithms, only
“point” light can exist
Global Illumination Equation
Photorealistic Lighting requires solving the equation!
•Light transport is concerned with the “incoming light” part
- To know how much light leaves a point, you need to know how much light reaches it; to know how much light reaches a point, you need to know how much light leaves every other point
•Reflectance modeling is concerned with the BRDF
Classifying Rendering Algorithms
•According to the type of light interactions they capture
•For example: The OpenGL lighting model captures:
-Direct light to surface to eye light transport
-Diffuse and rough specular surface reflectance
-It actually doesn’t do light to surface transport correctly, because it doesn’t do
shadows
Classifying Light Paths according to where they come from, where they go to,
and what they do along the way
•Assume only two types of surface interactions:
- Pure diffuse, D, Pure specular, S
•Assume all paths of interest: start at light source, L, End at the eye, E
•Use regular expressions on the letters D, S, L and E to describe light paths:
Valid paths are L(D|S)*E
Simple Light Path Examples
•LE:The light goes straight from the source to the viewer
•LDE: The light goes from the light to a diffuse surface that the viewer can see
•LSE:The light is reflected off a mirror into the viewer’s eyes
•L(S|D)E:The light is reflected off either a diffuse surface or a specular surface
toward the viewer
The OpenGL Model: The “standard” graphics lighting model captures only
L(D|S)E and It is missing:
-Light taking more than one diffuse bounce: LD*E
-- Should produce an effect called color bleeding, among other things
-- Approximated, grossly, by ambient light
-Light refracted through curved glass
--Consider the refraction as a “mirror” bounce: LDS
- Light bouncing off a mirror to illuminate a diffuse surface: LS+D+E
Raytracing
•Cast rays out from the eye, through each pixel, and determine what they hit first - builds the image pixel by pixel, one at a time
•Cast additional rays from the hit point to determine the pixel color
-Shadow rays toward each light; if they hit something, then the object is shadowed from that light, otherwise use the "standard" model for the light
-Reflection rays for mirror surfaces, to see what should be reflected
-Transmission rays to see what can be seen through transparent objects
-Sum all the contributions to get the pixel color
Recursive Ray Tracing
•When a reflected or refracted ray hits a surface, repeat the whole process from
that point
-Send out more shadow rays
-Send out new reflected ray (if required)
-Send out a new refracted ray (if required)
-Generally, reduce the weight of each additional ray when computing the
contributions to surface color
-Stop when the contribution from a ray is too small to notice
•What light paths does recursive ray tracing capture?
RayTracing Implementation
•Raytracing breaks down into two tasks: Constructing the rays to cast,
Intersecting rays with geometry
•The former problem is simple vector arithmetic
•The intersection problem arises in many areas of computer graphics: Collision
detection, Other rendering algorithms
•Intersection is essentially root finding (as we will see)
Constructing Rays
•Define rays by an initial point and a direction: x(t)=x0+td
•Eye rays: Rays from the eye through a pixel
•Shadow rays: Rays from a point on a surface to the light.
•Reflection rays: Rays from a point on a surface in reflection direction
•Transmitted rays: Rays from a point on a transparent surface through the
surface
Ray-Object Intersections
•Aim: Find the parameter value, ti, at which the ray first meets object i
•Transform the ray into the object’s local coordinate system
- Makes ray-object intersections generic: ray-sphere, ray-plane, …
•Write the surface of the object implicitly: f(x)=0
-Unit sphere at the origin is x•x-1=0
-Plane with normal n passing through origin is: n•x=0
•Put the ray equation in for x
-Result is an equation of the form f(t)=0 where we want t
  Ray: x(t) = x0 + t*d
-Now it's just root finding
Ray-Plane Intersection
  Plane: n·x = 0
  Substitute: n·(x0 + t*d) = 0
            → (n·d)*t + n·x0 = 0
            → t = -(n·x0) / (n·d)
•To do polygons, intersect with the plane then do a point-in-polygon test…
Ray-Sphere Intersection
  Sphere: x·x - 1 = 0
  Substitute: (x0 + t*d)·(x0 + t*d) - 1 = 0
            → (d·d)*t^2 + 2*(x0·d)*t + (x0·x0 - 1) = 0, a quadratic in t
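A sketch of the ray vs. unit-sphere test derived above, returning the smallest positive root; the ray is assumed to have already been transformed into the sphere's local coordinates.

    #include <math.h>
    #include <stdio.h>

    typedef struct { float x, y, z; } Vec3;
    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* Intersect x(t) = x0 + t*d with the unit sphere x.x - 1 = 0.
       Returns 1 and the nearest t > 0 on a hit, 0 on a miss. */
    int ray_sphere(Vec3 x0, Vec3 d, float *t_hit) {
        float a = dot(d, d);
        float b = 2.0f * dot(x0, d);
        float c = dot(x0, x0) - 1.0f;
        float disc = b*b - 4.0f*a*c;
        if (disc < 0.0f) return 0;                 /* no real roots: ray misses */
        float s  = sqrtf(disc);
        float t0 = (-b - s) / (2.0f * a);          /* smaller root first */
        float t1 = (-b + s) / (2.0f * a);
        float t  = (t0 > 0.0f) ? t0 : t1;          /* ignore hits behind the eye */
        if (t <= 0.0f) return 0;
        *t_hit = t;
        return 1;
    }

    int main(void) {
        Vec3 origin = { 0.0f, 0.0f, -3.0f }, dir = { 0.0f, 0.0f, 1.0f };
        float t;
        if (ray_sphere(origin, dir, &t))
            printf("hit at t = %.3f\n", t);        /* expect t = 2 */
        return 0;
    }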
Point-in-Polygon Testing
•Project point and polygon onto a 2D plane: project to the smaller two normal
vector elements’ plane
•Cast a ray from the point to infinity and count the number of edges it crosses
-Odd number means point is inside
-Edge crossing tests are very fast - think clipping
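A sketch of the 2D odd-even crossing test after projection: cast a ray toward +x and count how many polygon edges it crosses; an odd count means the point is inside.

    #include <stdio.h>

    /* Odd-even crossing test: count edges of the polygon (px[i],py[i]) that a
       horizontal ray from (x,y) toward +x crosses. Odd count means inside. */
    int point_in_polygon(const float *px, const float *py, int n, float x, float y) {
        int inside = 0;
        for (int i = 0, j = n - 1; i < n; j = i++) {
            /* Does edge (j -> i) straddle the horizontal line through y,
               and is the crossing point to the right of x? */
            if (((py[i] > y) != (py[j] > y)) &&
                (x < (px[j] - px[i]) * (y - py[i]) / (py[j] - py[i]) + px[i]))
                inside = !inside;
        }
        return inside;
    }

    int main(void) {
        float px[] = { 0, 4, 4, 0 }, py[] = { 0, 0, 4, 4 };   /* a square */
        printf("(2,2): %d  (5,2): %d\n",
               point_in_polygon(px, py, 4, 2, 2),
               point_in_polygon(px, py, 4, 5, 2));
        return 0;
    }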
More complex tests
•Ray-Polygon test reveals a common strategy
-Intersect with something easy - a superset of the actual shape
-Do a bounds check to make sure you have actually hit the shape
•Also works for cylinders, disks, cones
•CSG is well suited to raytracing
-Find intersection along ray of all the CSG primitives
-Break the ray into intervals according to which primitives it is in
-Do set operations to find the first interval that is actually inside the CSG
Ray : x(t )  x 0  td
Ray-Patch Intersection
3
3
•Equation in 3 parameters, two for
Patch : x(u , v)   Pij Bi (u ) B j (v)  0
surface and one for ray
i 0 j 0
3
3
•Solve using Newton’s method
Substitute : x 0  td   Pij Bi (u ) B j (v)  0
for root finding
i 0 j 0
-Have derivatives from basis functions
-Starting point from control polygon, or random guess, or try a whole set of
different starting values
Details
•Must find the first intersection of the ray from the eye: take the soonest hit in front of the eye
-Avoid testing all objects: bounding boxes, octrees for organizing objects
-Take care to eliminate intersections behind the eye
-The same rules apply for reflection and transmission rays
•A shadow ray just has to find any intersection shadowing the light source; speedup: keep a cache of shadowing objects and test those first
Transforming Normal Vectors
•Normal vectors are not transformed the same way points are: ray directions behave like normal vectors
•The plane equation should still be true with transformed points:
  Plane eqn: n^T x = 0
  Transform the points by M and the normal by some matrix K: (K n)^T (M x) = n^T K^T M x = 0
  This holds if K^T M = I, i.e. M^T K = I, so K = (M^T)^-1 = (M^-1)^T
•Transform normal vectors with the inverse transpose of the transformation matrix
•For rotations, the matrix is its own inverse transpose
Numerical Issues
•Shadow, reflection and transmission rays have to be sure they don’t intersect
the surface they are leaving
-Can’t just ignore the object the ray is leaving - some objects self-shadow
-Solution: Use a tolerance - offset the starting point off the surface a little in the
normal direction
•Finding all the intersections with a spline surface patch is difficult
•CSG can have problems when doing set operations
-Make sure pieces being subtracted protrude above surfaces
Mapping Techniques
•Raytracing provides a wealth of information about the visible surface point:
- Position, normal, texture coordinates, illuminants, color…
•Raytracing also has great flexibility
-Every point is computed independently, so effects can easily be applied on a per-pixel basis
-Reflection and transmission and shadow rays manipulated for various effects
-Even the intersection point can be modified
Soft Shadow
•Light sources that extend over an area (area light sources) should cast soft-edged shadows: fully illuminated, umbra, penumbra regions
•To ray-trace area light sources, cast multiple shadow rays
-Each one to a different point on the light source
-Weight the illumination by the fraction of shadow rays that get through (all get through: fully illuminated; some: penumbra; none: umbra)
Anti-Aliasing
•Raytracing can alias badly
-Each ray is a single point sample
-The problem is made worse by recursive rays - the point sample depends on other point samples
•Common solutions:
-Super-sampling: cast multiple rays per pixel and average their contributions
-Jittered sampling: frequently used with super-sampling, randomly jitters each ray within the pixel
-Adaptive sampling: cast extra rays through the pixel if some initial sample rays indicate that they are needed
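A sketch of jittered super-sampling for one pixel; the trace() stub stands in for the raytracer, which would return the color of the ray through image position (u, v).

    #include <stdio.h>
    #include <stdlib.h>

    /* Stub standing in for the actual raytracer: returns a "color" (one channel)
       for the ray through image-plane position (u, v). */
    static float trace(float u, float v) { return (u > 0.5f) ? 1.0f : 0.0f; }

    static float frand(void) { return (float)rand() / (float)RAND_MAX; }

    /* Jittered super-sampling: divide the pixel into an s x s grid of cells and
       cast one randomly jittered ray inside each cell, then average. */
    float sample_pixel(int px, int py, int s, int width, int height) {
        float sum = 0.0f;
        for (int j = 0; j < s; j++) {
            for (int i = 0; i < s; i++) {
                float u = (px + (i + frand()) / s) / (float)width;
                float v = (py + (j + frand()) / s) / (float)height;
                sum += trace(u, v);
            }
        }
        return sum / (float)(s * s);
    }

    int main(void) {
        srand(1);
        printf("pixel (8,4) = %.3f\n", sample_pixel(8, 4, 4, 16, 16));
        return 0;
    }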
Distribution Raytracing : casts more than one ray for each sample
•Multiple rays for each pixel, distributed in time: gives you motion blur, a
strong visual clue for motion
•Cast multiple reflection rays at a reflective surface: gives you rough, blurry
reflections
•Simulate multiple paths through the camera lens system: gives you depth of
field, an important visual clue for depth
Missing Paths: Raytracing cannot do:
-LS*D+E: Light bouncing off a shiny surface like a mirror and illuminating a
diffuse surface
-LD+E: Light bouncing off one diffuse surface to illuminate others
•Basic problem: The raytracer doesn’t know where to send rays out of the
diffuse surface to capture the incoming light
•Also a problem for rough specular reflection - fuzzy reflections in rough shiny surfaces
Ray-Tracing and Sampling
•Basic ray-tracing casts one ray through each pixel, sends one ray for each
reflection, one ray for each point light, etc
•This represents a single sample for each point, and for an animation, a single
sample for each frame
•Many important effects require more samples:
-Motion blur: A photograph of a moving object smears the object across the
film (longer exposure, more motion blur)
-Depth of Field: Objects not located at the focal distance appear blurred when
viewed through a real lens system
-Rough reflections: Reflections in a rough surface appear blurred
11 principles
•Squash-and-Stretch, Timing, Anticipation, Follow Through and
Overlapping Action, Straight Ahead Action and Pose-to-Pose Action,
Slow In and Out, Arcs, Exaggeration, Secondary Action, Appeal
•Basically, principles are driven by
-Perceptual factors, such as directing the viewer’s attention and
smoothing the motion for easier perception
-Conveying emotion through motion
•Keyframe animation: Animator specifies important positions throughout the
animation – the keyframes, Someone or something fills in the intermediate
frames – inbetweening, or just ’tweening
•Motion capture: System captures motion data from a real enactment of the
animation, The data then drives a virtual character
•Procedural animation, A set of equations or rules are evaluated to determine
how the animation behaves
Comparison of techniques (Control / Time to Create / Computation Cost / Interactivity):
- Key-Framed: Control excellent; time to create poor; computation cost low; interactivity low
- Motion Capture: Control good at time of creation, after that poor; time to create medium; computation cost medium; interactivity medium
- Procedural: Control poor; time to create poor (must create the program); computation cost high; interactivity high