From Vertices to Fragments

1 Basic Implementation Strategies
1.1 Object-oriented
1.1.1 foreach (object) render(object);
1.1.2 pipeline renderer: each primitive processed independently (in
parallel)
1.1.3 needs lots of memory (framebuffer, depthbuffer)
1.1.4 no global illumination
1.2 Image-oriented
1.2.1 foreach (pixel) assign_a_color(pixel)
1.2.2 less memory: maybe can compute pixels during refresh!
1.2.3 needs data structure to tell which primitives matter
1.2.4 can do global illumination
2 Four Major Tasks
2.1 Modeling
2.1.1 produces geometric objects
2.1.2 usually, a user program
2.1.3 clipping/culling: can reduce number of primitives using
application knowledge
2.2 Geometry Processing
2.2.1 determine which objects appear on the display; assign colors
2.2.2 operations
projection
transform vertices to the normalized view volume: a cube centered at the
origin (clip coordinates: homogeneous)
primitive assembly
groups vertices into objects (need to tell if objects lie outside, inside, or partially
inside the view volume)
(visible surface found in fragment processing)
clipping
determine which primitives lie in the view volume
for partially visible primitives, create new primitives that are completely visible
afterwards, perform perspective division
shading
assign colors to each vertex
2.3 Rasterization
2.3.1 for line segments: determine fragments along a line between
projected vertices
2.3.2 for polygons: determine which pixels lie inside polygon defined
by projected vertices
2.3.3 input in normalized device coords
2.3.4 outputs fragments in window coordinates (performs a 2D
transformation to the viewport)
2.4 Fragment Processing
2.4.1 assign colors to fragments, place in frame buffer
2.4.2 interpolate per-vertex colors
2.4.3 lookup texture values
2.4.4 perform depth test
2.4.5 blending
3 Clipping
3.1 Overview
3.1.1 clipper determines which primitives are accepted or rejected
(culled).
3.1.2 performed before perspective divide
3.1.3 visible objects lie in a cube given in clip coordinates (these become
normalized device coordinates after the perspective divide)
-w <= x <= w
-w <= y <= w
-w <= z <= w
3.1.4 partially visible primitives are clipped to fit
3.1.5 clipping can occur at various points in pipeline
modeler can remove some objects
can clip after projecting to screen (in 2D)
can clip to 3D view volume (OpenGL)
3.2 Line-Segment Clipping
3.2.1 Lines to be clipped
3.2.2 Cohen-Sutherland Clipping
Cohen-Sutherland outcodes
"outcodes" b0b1b2b3 classify a point (x,y) wrt boundaries
b0=(y>ymax)?1:0
b1=(y<ymin)?1:0
b2=(x>xmax)?1:0
b3=(x<xmin)?1:0
Algorithm
for each endpoint of the line, compute the outcodes o1 and o2
if o1 = o2 = 0, both endpoints are inside the window: accept
if one outcode is 0 and the other is nonzero, the segment must be shortened;
the nonzero outcode bits tell which boundary(s) to intersect with the line
if (o1 AND o2) != 0, both endpoints lie outside the same boundary: reject
if both outcodes are nonzero but (o1 AND o2) = 0, both endpoints are outside,
but the line may still pass through the window: intersect with a boundary,
check the intersection's outcode, and reapply the algorithm (a code sketch
follows below)
Computing intersections
line endpoints:(x1,y1) and (x2,y2)
m = (y2-y1)/(x2-x1)
recall y = mx + b
y of intersection with a vertical boundary: y = y1 + m ( x_bdy - x1)
x of intersection with a horiz bdy: x = x1 + (y_bdy - y1) / m
works best when most lines can be trivially accepted or rejected
may need to shorten a segment multiple times before accepting it
extends to 3D
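A minimal C++ sketch of the 2D outcode clipper described above; the window
bounds (xmin, xmax, ymin, ymax) and the function names are illustrative
assumptions, and the bit assignments follow the outcode definitions given
earlier:
    const float xmin = 0, xmax = 1, ymin = 0, ymax = 1;  // assumed window

    int outcode(float x, float y) {
        int code = 0;
        if (y > ymax) code |= 1;  // b0: above
        if (y < ymin) code |= 2;  // b1: below
        if (x > xmax) code |= 4;  // b2: right
        if (x < xmin) code |= 8;  // b3: left
        return code;
    }

    // Clip segment (x1,y1)-(x2,y2) in place; returns false if rejected.
    bool clip_line(float &x1, float &y1, float &x2, float &y2) {
        for (;;) {
            int o1 = outcode(x1, y1), o2 = outcode(x2, y2);
            if ((o1 | o2) == 0) return true;    // both inside: accept
            if ((o1 & o2) != 0) return false;   // same outside side: reject
            int o = o1 ? o1 : o2;               // pick an outside endpoint
            float x, y;
            if (o & 1)      { y = ymax; x = x1 + (x2 - x1) * (ymax - y1) / (y2 - y1); }
            else if (o & 2) { y = ymin; x = x1 + (x2 - x1) * (ymin - y1) / (y2 - y1); }
            else if (o & 4) { x = xmax; y = y1 + (y2 - y1) * (xmax - x1) / (x2 - x1); }
            else            { x = xmin; y = y1 + (y2 - y1) * (xmin - x1) / (x2 - x1); }
            if (o == o1) { x1 = x; y1 = y; } else { x2 = x; y2 = y; }
        }
    }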
3.2.3 Liang-Barsky Clipping
use parametric form: p(a)=(1-a)p1 + a p2
consider the parameter values at the intersections with the boundaries:
a1,a2,a3,a4
Liang-Barsky intersections
sort the a values to determine if line can be accepted, rejected, or needs
to have an intersection computed
Computing Intersections
we want to avoid computing intersections unless needed
intersection with top edge: a = (y_max - y1) / (y2 - y1)
rewrite: a(y2-y1)=(y_max-y1) or a (delta_y) = delta_y_max
can then restate tests in terms of delta_y and delta_y_max, avoiding divisions
unless a is needed
avoids multiple shortenings (see the sketch below)
extends to 3D
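A minimal Liang-Barsky sketch against the same illustrative window bounds as
above; for brevity this version performs the division inside each test,
whereas the note above shows how the comparisons can be restated to defer it:
    const float xmin = 0, xmax = 1, ymin = 0, ymax = 1;  // assumed window

    bool clip_liang_barsky(float &x1, float &y1, float &x2, float &y2) {
        float dx = x2 - x1, dy = y2 - y1;
        float a0 = 0.0f, a1 = 1.0f;    // parameter range of the visible part
        // One boundary test: p = -delta toward the boundary, q = distance.
        auto test = [&](float p, float q) {
            if (p == 0) return q >= 0;            // parallel: inside or not
            float a = q / p;
            if (p < 0) {                          // entering intersection
                if (a > a1) return false;
                if (a > a0) a0 = a;
            } else {                              // leaving intersection
                if (a < a0) return false;
                if (a < a1) a1 = a;
            }
            return true;
        };
        if (!test(-dx, x1 - xmin) || !test(dx, xmax - x1) ||
            !test(-dy, y1 - ymin) || !test(dy, ymax - y1)) return false;
        x2 = x1 + a1 * dx;  y2 = y1 + a1 * dy;    // shorten at most once,
        x1 = x1 + a0 * dx;  y1 = y1 + a0 * dy;    // after all tests pass
        return true;
    }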
3.3 Polygon Clipping
3.3.1 can clip to rectangular windows, or other shapes
clipping to the shadow volume of a polygon
3.3.2 clipping a concave polygon can create multiple polygons, or a single
malformed one, so many implementations forbid concave polygons
A concave polygon to be clipped
Multiple polygons resulting from clipping
A single, weird polygon can result (e.g. using Sutherland-Hodgman)
3.3.3 Sutherland-Hodgman Algorithm
good for pipelines
clip the polygon to one side of the clip window, pass the result to the
next stage (see the sketch below)
Sequence of clipping operations
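A sketch of one pipeline stage, assuming the clip edge is given as a
half-plane a*x + b*y + c >= 0; chaining four such stages clips to a
rectangular window. Pt and the function name are illustrative:
    #include <vector>

    struct Pt { float x, y; };

    // Clip polygon `in` against the half-plane a*x + b*y + c >= 0.
    std::vector<Pt> clip_to_halfplane(const std::vector<Pt> &in,
                                      float a, float b, float c) {
        std::vector<Pt> out;
        for (size_t i = 0; i < in.size(); i++) {
            Pt s = in[i], e = in[(i + 1) % in.size()];
            float ds = a * s.x + b * s.y + c;   // signed distance of s
            float de = a * e.x + b * e.y + c;   // signed distance of e
            if (ds >= 0) out.push_back(s);      // s is inside: keep it
            if ((ds >= 0) != (de >= 0)) {       // edge s-e crosses boundary
                float t = ds / (ds - de);
                out.push_back({ s.x + t * (e.x - s.x), s.y + t * (e.y - s.y) });
            }
        }
        return out;
    }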
3.4 Other Primitives
3.4.1 Bounding Boxes and Volumes
Axis-aligned bounding boxes (AABB)
Complex objects with bounding volumes
if box totally or partially visible, draw the contents
if box completely outside window, don't draw contents
Bounding Spheres
can simplify collision detection (if volumes don't intersect, no collision;
see the sketch below)
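Minimal sketches of the quick accept/reject tests, with illustrative struct
names: an axis-aligned box overlap test and a bounding-sphere test that
compares squared distances to avoid a square root:
    struct AABB { float minx, miny, minz, maxx, maxy, maxz; };
    struct Sphere { float x, y, z, r; };

    bool aabb_overlap(const AABB &a, const AABB &b) {
        return a.minx <= b.maxx && b.minx <= a.maxx &&
               a.miny <= b.maxy && b.miny <= a.maxy &&
               a.minz <= b.maxz && b.minz <= a.maxz;
    }

    bool spheres_intersect(const Sphere &a, const Sphere &b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        float rr = a.r + b.r;
        return dx * dx + dy * dy + dz * dz <= rr * rr;  // no square root
    }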
3.4.2 Curves, Surfaces, Text
curves can be complicated: convert to lines or polys and clip those
text: can convert to geometry and clip that. May want to clip at higher
level: e.g., words
3.5 In Three Dimensions
3.5.1 clip to a volume instead of a window
Clipping to a 3D volume
3.5.2 Cohen-Sutherland: extend outcodes to 6 bits
3D outcodes
3.5.3 Liang-Barsky: extend line equation to 3D parametric
3.5.4 Computing Intersections
Now we are intersecting lines with planes
Intersection of a line and a plane
Equations for intersection of a line and a plane
3.5.5 OpenGL supports additional clip planes, of arbitrary orientation
4 Rasterization
4.1 Lines
4.1.1 DDA
Digital Differential Analyzer
assume 0<= m <= 1 (get other slopes by symmetry)
for (x = x1, y = y1; x <= x2; x++) { write_pixel(x, round(y), color); y += m; }
Driving axis: incremented by 1 each pixel
Using the wrong driving axis causes gaps
Use y as driving axis and 1/m as increment for x, if m > 1
Correct driving axis gives desired results
needs a floating-point add per pixel (a full sketch follows below)
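A complete DDA sketch that chooses the driving axis from the slope, as
described above; write_pixel() is assumed to exist:
    #include <algorithm>
    #include <cmath>

    void write_pixel(int x, int y, unsigned color);  // assumed to exist

    void dda_line(float x1, float y1, float x2, float y2, unsigned color) {
        float dx = x2 - x1, dy = y2 - y1;
        // Drive whichever axis changes faster, so no gaps appear.
        int steps = (int)std::max(std::fabs(dx), std::fabs(dy));
        if (steps == 0) {  // degenerate segment: a single point
            write_pixel((int)std::lround(x1), (int)std::lround(y1), color);
            return;
        }
        float xinc = dx / steps, yinc = dy / steps;  // one is roughly +-1
        float x = x1, y = y1;
        for (int i = 0; i <= steps; i++) {
            write_pixel((int)std::lround(x), (int)std::lround(y), color);
            x += xinc;  y += yinc;  // a floating-point add per axis per pixel
        }
    }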
4.1.2 Bresenham's Algorithm
Efficient: uses only integer adds and shifts
assume 0 <= m <= 1 (take a unit step in x)
after drawing a pixel, the next one will be to the right, or to right and
above
which pixel is next?
use sign of a decision variable, d=a-b, to tell which is closest to true line.
which pixel is closer?
Derivation
Algorithm
// assumes 0 <= m <= 1 and the segment runs left to right
bresenham ( x', y', x", y" )
(x, y) = leftmost endpoint
x_end = x coordinate of the right endpoint
delta_x = abs ( x" - x' )
delta_y = abs ( y" - y' )
p = 2 * delta_y - delta_x             // decision variable
incr1 = 2 * delta_y                   // used when p < 0 (stay on this row)
incr2 = 2 * ( delta_y - delta_x )     // used when p >= 0 (also step up)
write_pixel ( x, y, color )
while ( x < x_end ) {
    x = x + 1
    if ( p < 0 ) {
        p = p + incr1
    } else {
        p = p + incr2
        y = y + 1                     // y advances only in this branch
    }
    write_pixel ( x, y, color )
}
4.2 Polygons
4.2.1 inside-outside test
odd-even (crossing) test
from the point being considered, send a ray (scanline) to infinity, and count the
number of polygon edges crossed
inside if the number of crossings is odd, outside if even
odd-even test on a non-simple polygon
winding number
from the point considered, count how many times the polygon encircles the point
windings = ((sum of angles subtended by each edge) / 2 pi)
if 0, point outside.
if 1 (or more), point inside
winding test fills it all in (a crossing-test sketch follows below)
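A minimal odd-even (crossing) test sketch; the vertex arrays and names are
illustrative. Note that the half-open comparison (py[i] > y) != (py[j] > y)
also sidesteps the vertex-on-scanline singularities discussed in 4.2.3:
    // px/py hold the polygon's n vertices; (x, y) is the query point.
    bool point_in_polygon(const float *px, const float *py, int n,
                          float x, float y) {
        bool inside = false;
        for (int i = 0, j = n - 1; i < n; j = i++) {
            // Edge (j, i) straddles the horizontal ray through y ...
            if ((py[i] > y) != (py[j] > y)) {
                float xc = px[j] + (y - py[j]) * (px[i] - px[j]) / (py[i] - py[j]);
                if (xc > x)            // ... and crosses it to the right of x
                    inside = !inside;  // each crossing flips the parity
            }
        }
        return inside;
    }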
4.2.2 flood fill
first rasterize edges, then start from a seed point and fill the interior
Flood-fill algorithm
make efficient by removing recursion and filling spans (along scanlines)
instead of single pixels (see the sketch below)
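A recursion-free flood-fill sketch using an explicit stack of seed points
(span filling is omitted for brevity); read_pixel()/write_pixel() are
assumed to exist, and the border is assumed to fully enclose the seed:
    #include <utility>
    #include <vector>

    unsigned read_pixel(int x, int y);               // assumed to exist
    void write_pixel(int x, int y, unsigned color);  // assumed to exist

    void flood_fill(int x, int y, unsigned border, unsigned fill) {
        std::vector<std::pair<int, int>> stack{{x, y}};
        while (!stack.empty()) {
            auto [cx, cy] = stack.back();
            stack.pop_back();
            unsigned c = read_pixel(cx, cy);
            if (c == border || c == fill) continue;  // edge or already filled
            write_pixel(cx, cy, fill);
            stack.push_back({cx + 1, cy});
            stack.push_back({cx - 1, cy});
            stack.push_back({cx, cy + 1});
            stack.push_back({cx, cy - 1});
        }
    }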
4.2.3 singularities
when vertices lie directly on scanlines, can cause problems when
counting edge crossings
singularities
(a) count as 0 or 2 crossings, (b) count as one crossing
avoiding singularities:
move vertex slightly if on scanline
or, consider pixel centers at halfway between integers, vertices at integers
5 Hidden Surface Removal
5.1 Object-space
5.1.1 For each object, compare it to the other objects
5.1.2 Display the visible part of the object
5.1.3 for k objects, runtime is O(k^2)
5.2 Image-space
5.2.1 for each pixel, generate a ray from eye through pixel
5.2.2 Intersect the ray with all objects, pick closest intersection
5.2.3 For MxN image with k objects, need MxNxk intersection
calculations. O(k) worst case, can be better, e.g. O(log k) with a
spatial indexing structure
5.3 Sorting
5.3.1 Object-space method essentially is a sort. Can be improved to
O(k log k)
5.4 Scanline Algorithms
5.4.1 Can create pixels as they are displayed (during refresh)
5.4.2 span=contiguous set of interior pixels along a scanline
several spans
5.4.3 Sort the edge intersections by y (scanline), within scanline by x
unsorted intersections
sorted intersections
5.4.4 can use a radix sort to speed it up: keep a bucket of edge
intersections per scanline, each bucket sorted by x (keep z values of
intersections, too)
scanline buckets
5.4.5 solve the visible surface problem within each scanline (the 2D
problem is easier than the 3D one; see the sketch below)
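A sketch of one scanline's work under the bucket scheme above; Crossing,
bucket, and draw_span() are illustrative assumptions:
    #include <algorithm>
    #include <vector>

    struct Crossing { float x, z; };   // keep z for the visibility test
    void draw_span(int y, const Crossing &l, const Crossing &r);  // assumed

    // One bucket of edge crossings per scanline, as described above.
    std::vector<std::vector<Crossing>> bucket;

    void fill_scanline(int y) {
        std::vector<Crossing> &xs = bucket[y];
        std::sort(xs.begin(), xs.end(),
                  [](const Crossing &a, const Crossing &b) { return a.x < b.x; });
        // Odd-even rule: spans between crossings 0-1, 2-3, ... are interior.
        for (size_t i = 0; i + 1 < xs.size(); i += 2)
            draw_span(y, xs[i], xs[i + 1]);
    }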
5.5 Back-Face Removal
5.5.1 keep the face if dot(n,v) >= 0, where v points from the face toward
the viewer (see the sketch below)
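A minimal sketch of the test, assuming n is the outward face normal and v
points from the face toward the viewer:
    struct Vec3 { float x, y, z; };

    float dot(const Vec3 &a, const Vec3 &b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    bool front_facing(const Vec3 &n, const Vec3 &v) {
        return dot(n, v) >= 0;  // cull (skip) the face when this is false
    }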
5.6 Z-buffer algorithm
5.6.1 keep depth of each pixel in a separate depth buffer
5.6.2 When generating fragments during rasterization, write the
fragment to the color buffer only if it's closer than the fragment last
written at that position (see the sketch below)
5.6.3 update the depth buffer if the fragment was drawn.
5.6.4 the depth of each fragment along a scanline changes by a
constant amount between adjacent pixels
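A per-fragment z-buffer sketch; the buffer layout, dimensions, and the
smaller-z-is-nearer convention are illustrative assumptions:
    const int W = 640, H = 480;   // illustrative framebuffer size
    float zbuf[W * H];            // depth buffer, cleared to far (e.g. 1.0)
    unsigned cbuf[W * H];         // color buffer

    void write_fragment(int x, int y, float z, unsigned color) {
        int i = y * W + x;
        if (z < zbuf[i]) {        // smaller z = nearer (assumed convention)
            cbuf[i] = color;      // write the color buffer ...
            zbuf[i] = z;          // ... and update the depth buffer
        }
    }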
5.7 Depth Sort and the Painter's Algorithm
5.7.1 Sort polygons by distance from viewer
5.7.2 Draw polygons in back-to-front order
5.7.3 Problems
big+small polygons
cyclic overlap
intersecting polygons
5.8 BSP tree algorithm
5.8.1 Consider a single polygon P, and the plane Hp in which it lies
The plane Hp, embedding polygon P, partitions space
"near side" of Hp = side with viewpoint. other side = "far side"
Properties
A poly lying in the near side of Hp might obscure P or a poly in the far side
P might obscure a poly on the far side, but cannot obscure anything in the near
side
A poly lying on the far side cannot obscure P or anything in the near side
A poly lying on both sides is split into two polygons, one in the near side,
the other in the far side (via a modified Cohen-Sutherland algorithm)
PartitionPolygons ( set<polygon> S, set<polygon> &frontside, set<polygon> &backside,
                    PlaneEquation &Hp, set<polygon> &inHp )
{
    polygon P = select a polygon from S at random
    Hp = plane equation for P
    add P to inHp
    for each polygon Q in S, other than P {
        test each vertex of Q against Hp
        if all vertices on front side of Hp
            add Q to frontside
        else if all vertices on back side
            add Q to backside
        else if all vertices lie in Hp
            add Q to inHp
        else {
            split Q into Qfront and Qback
            add Qfront to frontside
            add Qback to backside
        }
    }
}
Given these properties, should draw in this order:
far side polygons
P
near side polygons
But, there may be more than one poly in near and/or far sides.
So, apply the idea recursively to the near and far sides
5.8.2 Create a BSP Tree by recursively partitioning the polygons on
either side of the plane.
struct BSPTree {
    set<polygon> inHp;       // polygons lying in the plane Hp
    plane_equation Hp;
    BSPTree *left, *right;   // back-side and front-side subtrees
};
BSPTree *CreateBSP ( set<polygon> polys ) {
    if ( polys is empty ) return NULL;
    set<polygon> frontpolys, backpolys, inHpPolys;
    plane_equation Hp;
    PartitionPolygons ( polys, frontpolys, backpolys, Hp, inHpPolys );
    BSPTree *tree = new BSPTree ( Hp, inHpPolys );
    tree->left = CreateBSP ( backpolys );
    tree->right = CreateBSP ( frontpolys );
    return tree;
}
BSP Tree schema
5.8.3 if eye in front of Hp, then frontside = "near" side, backside =
"far" side
else vice versa
5.8.4 Back-to-front rendering
void Draw ( BSPTree *t ) {
    if ( t ) {
        d = evaluate t->Hp at viewpoint
        if ( d > 0 ) {           // eye on the front side of Hp
            far = t->left;  near = t->right;
        } else {
            far = t->right; near = t->left;
        }
        Draw ( far );
        DrawPolygons ( t->inHp );
        Draw ( near );
    }
}
5.8.5 Mark leaves as "in" or "out" of the solid, and use the tree to
classify points
struct BSPSolidTree {
    set<polygon> inHp;
    plane_equation Hp;
    BSPSolidTree *left, *right;
};
bool PointInSolid ( Point p, BSPSolidTree *solid ) {
    if ( solid is the "in" leaf ) return true;
    if ( solid is the "out" leaf ) return false;
    if ( p lies in "front" halfspace of solid->Hp )
        return PointInSolid ( p, solid->right );
    else if ( p lies in "back" halfspace of solid->Hp )
        return PointInSolid ( p, solid->left );
    else  // p lies on the plane: inside only if both sides say inside
        return PointInSolid ( p, solid->left )
            && PointInSolid ( p, solid->right );
}
5.8.6 Example
Initial polygons
Choose polygon1 to create root. Split 4 into 4 and 4'.
split back side of "a" using polygon 4
split back side of "b" with polygon 5
split frontside of "a" with polygon 2
split back side of "d" with polygon 3
split back side of "e" with polygon 4'
6 Antialiasing
6.1 Displays are discrete (limited resolution), but primitives are
continuous (infinite resolution)
6.2 Many different ideal lines will result in the same set of pixels
(they are aliases of one another)
6.3 Pixels are on a uniform grid, with a fixed size and shape
6.4 Area averaging
6.4.1 Consider the "ideal" line as having area
6.4.2 If the display supports 2 or more bits per pixel, we can color
pixels based on the percentage of the pixel covered by the ideal line
6.4.3 (a) aliased (b) antialiased (c) zoom on aliased (d) zoom on
antialiased
6.4.4 For pixels containing portions of different primitives, can color
the pixel with the area-weighted average of the primitives' colors (see
the sketch below)
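A minimal area-averaging sketch: the pixel keeps a coverage-weighted average
of the primitive's color and what was there before. RGB and the function
name are illustrative:
    struct RGB { float r, g, b; };

    // coverage in [0,1]: fraction of the pixel the primitive covers.
    RGB blend_coverage(RGB pixel, RGB prim, float coverage) {
        return { prim.r * coverage + pixel.r * (1 - coverage),
                 prim.g * coverage + pixel.g * (1 - coverage),
                 prim.b * coverage + pixel.b * (1 - coverage) };
    }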
6.5 time-domain aliasing
6.5.1 flash/flicker of small objects when raytracing (fix with
oversampling and averaging)
6.5.2 strobe-effect on moving objects (fix with motion blur)
6.5.3 wagon-wheel effect (fix with oversampling and averaging in
time-domain (i.e. motion blur))
6.6 Expensive
7 Display Considerations
7.1 Color Systems
7.1.1 We can convert between RGB values and other systems for
specifying color
7.1.2 color gamut = displayable colors
7.1.3 chromaticity coordinates
describe color without regard for intensity
visible (spectral) colors vs. CRT color gamut
7.1.4 HLS: hue, lightness, saturation
Hue: color
Lightness: how bright
Saturation: "purity" of color. Less saturated = mixed with white
How HLS cone relates to RGB cube
HLS space is actually a double cone.
7.2 The Color Matrix
7.2.1 glMatrixMode(GL_COLOR)
7.2.2 transforms RGBA colors by a 4x4 matrix
7.3 Gamma Correction
7.3.1 Eyes perceive brightness logarithmically
7.3.2 If we want brightness (perceived intensity) values to be equally
spaced, the physical intensities should increase exponentially
If pixel values are unevenly spaced in brightness (perceived) space, one of two
things happens: either brightness changes too slowly to be noticed, or it changes
too quickly, causing visible transitions between regions of pixels that differ in
value by one. The latter occurs at low intensities, where our eyes are more
sensitive to changes: we perceive relative differences in intensity, i.e.,
perception is on a logarithmic scale.
"So we're glad that the phosphors have this curve, because by darkening our colors it lets us
keep more resolution in the darks. For a gamma of 2.2, for example, light at 50% intensity is
generated with an 8-bit value of 186. This gives us 186 shades of dark and 70 shades of bright.
The reduced resolution in the bright areas isn't noticed because the eye isn't as sensitive there.
We need to take this distortion into account in our graphics math, however, or colors will end up
too dark and things like anti-aliasing won't have the effect we want."
--- from "Gamma Correction in Computer Graphics"
(http://www.teamten.com/lawrence/graphics/gamma/)
7.3.3 Monitor intensity is proportional to the input voltage (pixel
value) raised to a power (gamma). This is (approximately) the inverse
of the eye's response.
The input voltage is linear in the pixel values, since the DAC usually performs a linear conversion.
For NTSC, gamma = 2.2
For PAL, gamma = 2.8
For HDTV, gamma = 2.5
(bigger gamma, more contrast)
"So we're glad that the phosphors have this curve, because by darkening our colors it lets us
keep more resolution in the darks. For a gamma of 2.2, for example, light at 50% intensity is
generated with an 8-bit value of 186. This gives us 186 shades of dark and 70 shades of bright.
The reduced resolution in the bright areas isn't noticed because the eye isn't as sensitive there.
We need to take this distortion into account in our graphics math, however, or colors will end up
too dark and things like anti- aliasing won't have the effect we want."
--- from "Gamma Correction in Computer Graphics"
(http://www.teamten.com/lawrence/graphics/gamma/)
7.3.4 In most graphics packages, pixel values are computed
assuming a linear relationship between pixel value and intensity.
For example, for antialiasing, we may compute a 50% coverage by a black object over a white
pixel, and update the pixel color to be
(0.5 * 0 + 0.5 * 1.0) * 255 ≈ 128
7.3.5 This is WRONG, however, since the monitor applies the gamma
function. Our colors will look too dark.
The monitor will output an intensity of about 22% for a pixel value of 128.
7.3.6 If you want to compute intensities in "linear space", you need to
apply the inverse of the gamma curve when you store the pixel
values.
We can use a lookup table for this (see the sketch below).
You can also use a non-linear DAC to do the conversion (SGI).
You could also use a fragment program.
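A sketch of such a lookup table: linear 8-bit intensities in, gamma-encoded
pixel values out. The table applies the inverse power so the monitor's gamma
maps the stored value back to the intended linear intensity; with gamma = 2.2,
linear 0.5 maps to about 186, as in the quote above:
    #include <cmath>

    unsigned char gamma_lut[256];

    void build_gamma_lut(float gamma) {  // e.g. 2.2
        for (int i = 0; i < 256; i++) {
            float linear = i / 255.0f;
            gamma_lut[i] = (unsigned char)std::lround(
                255.0f * std::pow(linear, 1.0f / gamma));
        }
    }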
7.3.7 Using a LUT, you are losing resolution in the dark range. This
can result in visible banding in dark areas.
Half of the values are "darks", half are "brights". This is too much for the "brights", and too little
for the "darks".
7.3.8 To deal with this problem, we can use a higher resolution
representation. Use 16-bit integers, or floats (NVidia's float16) per
primary.
We should perform all intensity computations (blending, antialiasing, etc.)
using high-resolution, linear intensities.
Avoid converting to 8-bpp until the very last step (if at all).
7.3.9 Also, displays usually have a "black offset" : can't display pure
black.
7.4 Dithering and Halftoning
7.4.1 For displays with a limited range of intensities, we can trade off
spatial resolution for intensity resolution
7.4.2 Halftoning: use black dots of different sizes to get shades of
gray (The eye integrates.)
7.4.3 Digital halftoning (dithering): use patterns (e.g. 4x4) of 1-bit
pixels to get shades
Can get 17 different shades with 16 pixels (see the sketch below).
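A sketch of ordered dithering with the standard 4x4 Bayer threshold matrix
(one common choice of pattern); gray is assumed prescaled to 0..16:
    const int bayer4[4][4] = {
        {  0,  8,  2, 10 },
        { 12,  4, 14,  6 },
        {  3, 11,  1,  9 },
        { 15,  7, 13,  5 },
    };

    // Exactly `gray` dots per 4x4 tile turn on, giving 17 distinct shades.
    bool dither_pixel(int x, int y, int gray) {
        return gray > bayer4[y % 4][x % 4];
    }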