www.Bookspar.com | Website for Students | VTU - Notes - Question Papers
UNIT – 8
IMPLEMENTATION
Line-Segment Clipping
A clipper decides which primitives, or parts of primitives, appear on the display. Primitives that fit within the
specified view volume pass through the clipper, or are accepted. Primitives that cannot appear on the display are
eliminated, or rejected or culled. Primitives that are only partially within the view volume must be clipped such that any
part lying outside the volume is removed.
Clipping can occur at one or more places in the viewing pipeline. The modeler may clip to limit the primitives
that the hardware must handle. The primitives may be clipped after they have been projected from 3-D to 2-D
objects.
In OpenGL, at least conceptually, primitives are clipped against a 3-D view volume before projection and
rasterization.
Here, two 2-D line-segment clippers are discussed; both extend directly to three dimensions and to the clipping of polygons.
Cohen-Sutherland Clipping
The 2-D clipping problem for line segments is shown in figure.
Assume that this problem arises after 3-D line segments have been projected
onto the projection plane, and that the window is part of the projection plane
that is mapped to the viewport on the display. All values are specified as real
numbers.
Instances of the problem: Entire line segment AB appears on the display, whereas none of CD appears. EF and
GH have to be shortened before being displayed. Although a line segment is completely determined by its
endpoints, GH shows that, even if both endpoints lie outside the clipping window, part of the line segment may
still appear on the display.
It is possible to determine the necessary information for clipping by computing the intersections of the lines of
which the segments are parts with the sides of the window. However, these calculations require floating-point division
and hence should be avoided where possible.
The Cohen-Sutherland algorithm replaces most of the expensive floating-point multiplications and divisions
with a combination of floating-point subtractions and bit operations.
The algorithm starts by extending the sides of the window to infinity, thus breaking up space into the
nine regions shown below.
Each region can be assigned a unique 4-bit binary number, or outcode, b0b1b2b3, as follows.
Suppose that (x, y) is a point in the region; then,
b0 = 1 if y > ymax, 0 otherwise
b1 = 1 if y < ymin, 0 otherwise
b2 = 1 if x > xmax, 0 otherwise
b3 = 1 if x < xmin, 0 otherwise
The resulting codes are indicated in the above figure.
The outcode for each endpoint of each line segment is then computed. This step requires eight
floating-point subtractions per line segment.
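As an illustrative sketch (not code from the notes), the outcode computation can be written as follows; the window bounds XMIN, XMAX, YMIN, YMAX are hypothetical values chosen only for the example:

```python
# Hypothetical window bounds, chosen only for illustration.
XMIN, XMAX, YMIN, YMAX = 0.0, 10.0, 0.0, 10.0

# One bit per extended window edge: b0..b3 in the text.
ABOVE, BELOW, RIGHT, LEFT = 8, 4, 2, 1

def outcode(x, y):
    """Compute the 4-bit Cohen-Sutherland outcode of the point (x, y)."""
    code = 0
    if y > YMAX: code |= ABOVE   # b0
    if y < YMIN: code |= BELOW   # b1
    if x > XMAX: code |= RIGHT   # b2
    if x < XMIN: code |= LEFT    # b3
    return code
```

Each comparison is effectively a subtraction followed by a sign test, which is where the eight subtractions per segment come from.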
Consider a line segment whose outcodes are given by O1 = outcode(x1, y1) and O2 =outcode(x2, y2).
On the basis of these outcodes, the following four cases arise:
1. (O1 = O2 =0) : Both endpoints are inside the clipping window, as is true
for segment AB in figure. The entire line segment is inside, and the segment
can be sent on to be rasterized.
2. (O1 ≠ 0, O2 =0; or vice versa) One endpoint is inside the clipping window;
one is outside (say segment CD). The line segment must be shortened. The
nonzero outcode indicates which edge, or edges, of the window are crossed by the segment. One or two
intersections must be computed. After one intersection is computed, its outcode is computed to determine
whether another intersection calculation is required.
3. (O1 & O2 ≠ 0): If the bitwise AND of the outcodes is nonzero, the two endpoints lie on the same outside side
of the window, and the line segment can be discarded (segment EF in the figure).
4. (O1 & O2 = 0): Both endpoints are outside, but they are on the outside of different edges of the window (say,
segments GH and IJ). But it cannot be determined from just the outcodes whether the segment can be discarded
or must be shortened. Hence intersection with one of the sides of the window is computed and the outcode of
the resulting point is checked.
To compute any required intersection: The form this calculation takes depends on how the line segments
are represented, although only a single division should be required in any case. Consider the standard
explicit form of a line, y = mx + h, where m is the slope of the line and h is the line's y intercept, then,
m and h can be computed from the endpoints.
However, vertical lines cannot be represented in this form - a critical weakness of the explicit form.
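The four cases can be combined into a complete clipper. The following is a hypothetical Python sketch (the window bounds and function names are assumptions, not from the notes); it uses the two-point form of the intersection, so vertical segments need no special case:

```python
# Hypothetical window bounds, for illustration only.
XMIN, XMAX, YMIN, YMAX = 0.0, 10.0, 0.0, 10.0
ABOVE, BELOW, RIGHT, LEFT = 8, 4, 2, 1

def outcode(x, y):
    code = 0
    if y > YMAX: code |= ABOVE
    if y < YMIN: code |= BELOW
    if x > XMAX: code |= RIGHT
    if x < XMIN: code |= LEFT
    return code

def cohen_sutherland(x1, y1, x2, y2):
    """Return the clipped segment (x1, y1, x2, y2), or None if rejected."""
    while True:
        o1, o2 = outcode(x1, y1), outcode(x2, y2)
        if o1 == 0 and o2 == 0:          # case 1: trivially accepted
            return (x1, y1, x2, y2)
        if o1 & o2 != 0:                 # case 3: trivially rejected
            return None
        # Cases 2 and 4: shorten the segment at one crossed edge.
        o = o1 if o1 != 0 else o2
        if o & ABOVE:
            x = x1 + (x2 - x1) * (YMAX - y1) / (y2 - y1); y = YMAX
        elif o & BELOW:
            x = x1 + (x2 - x1) * (YMIN - y1) / (y2 - y1); y = YMIN
        elif o & RIGHT:
            y = y1 + (y2 - y1) * (XMAX - x1) / (x2 - x1); x = XMAX
        else:  # LEFT
            y = y1 + (y2 - y1) * (XMIN - x1) / (x2 - x1); x = XMIN
        if o == o1:
            x1, y1 = x, y
        else:
            x2, y2 = x, y
```

Note how case 4 is handled implicitly: the loop shortens the segment at one edge and then re-examines the outcodes, just as the text describes.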
Advantages:
- Checking of outcodes requires only Boolean operations.
- Intersection calculations are done only when they are needed, as in the second case, or as in the fourth case, where the outcodes do not contain enough information.
- The Cohen-Sutherland algorithm works best when there are many line segments but few are actually displayed. In this case, most of the line segments lie fully outside one or two of the extended sides of the clipping rectangle, and thus can be eliminated on the basis of their outcodes alone.
- The algorithm can be extended to three dimensions.
Liang-Barsky Clipping
It uses the parametric form of lines. Consider a line segment defined by the two endpoints
p1 = [x1, y1]T and p2 = [x2, y2]T
These endpoints can be used to define a unique line which can be expressed parametrically:
1. In matrix form, p(α) = (1 - α)p1 + αp2, or
2. As two scalar equations,
x(α) = (1 - α).x1 + α x2,
y(α) = (1 - α).y1 + α y2.
Parametric form is robust and needs no changes for horizontal or vertical lines.
- Varying the parameter α from 0 to 1 is equivalent to moving along the segment from p1 to p2.
- Negative values of α yield points on the line on the other side of p1 from p2.
- Similarly, values of α > 1 give points on the line past p2, going off to infinity.
Consider the line segment and the line of which it is part, as shown below.
As long as the line is not parallel to a side of the window (if it is, it can be handled
easily), there are four points where the line intersects the extended sides of the window.
These points correspond to the four values of the parameter: α1, α2, α3 and α4. One of
these values corresponds to the line entering the window; another corresponds to the line
leaving the window.
These intersections can be computed and arranged in order. Then the intersection values that are needed for clipping can be determined.
For the given example, 1 > α4 > α3 > α2 > α1 > 0.
Hence, all four intersections are inside the original line segment, with the two innermost (α2 and α3)
determining the clipped line segment.
This case can be distinguished from the case in figure below:
It also has the four intersections between the endpoints of the line segment, by noting
that the order for this case is 1 > α4 > α2 > α3 > α1 > 0.
The line intersects both the top and the bottom of the window before it intersects either
the left or the right; thus, the entire line segment must be rejected.
Other cases of the ordering of the points of intersection can be argued in a
similar way.
The implementation is efficient if:
- Computing intersections is avoided until they are needed; many lines can be rejected before all four intersections are known.
- Floating-point divisions are avoided wherever possible.
If the parametric form is used to determine the intersection with the top of the window, the intersection value is
α = (ymax - y1) / (y2 - y1)
Similar equations hold for the other three sides of the window.
Rather than computing these intersections, at the cost of a division for each, the equation can be written
as
α (y2 - y1) = α Δy = ymax - y1 = Δymax
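A sketch of the full algorithm, with assumed window bounds: for readability, this version performs a division per side rather than comparing products as the text suggests, so it illustrates the parameter-interval logic rather than the division-avoiding optimization:

```python
# Hypothetical window bounds, for illustration only.
XMIN, XMAX, YMIN, YMAX = 0.0, 10.0, 0.0, 10.0

def liang_barsky(x1, y1, x2, y2):
    """Return the clipped segment (x1, y1, x2, y2), or None if rejected."""
    dx, dy = x2 - x1, y2 - y1
    # For each side, p*alpha <= q describes the inside half-plane.
    p = [-dx, dx, -dy, dy]
    q = [x1 - XMIN, XMAX - x1, y1 - YMIN, YMAX - y1]
    a_in, a_out = 0.0, 1.0            # parameter interval kept so far
    for pi, qi in zip(p, q):
        if pi == 0:                   # segment parallel to this side
            if qi < 0:
                return None           # parallel and outside: reject
        else:
            a = qi / pi
            if pi < 0:
                a_in = max(a_in, a)   # entering intersection
            else:
                a_out = min(a_out, a) # leaving intersection
    if a_in > a_out:
        return None                   # leaves before it enters: reject
    return (x1 + a_in * dx, y1 + a_in * dy,
            x1 + a_out * dx, y1 + a_out * dy)
```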
Advantages:
- All the tests required by the algorithm can be restated in terms of Δymax, Δy, and similar terms computed for the other sides of the window. Thus, all decisions about clipping can be made without floating-point division.
- Division is done only if an intersection is needed, because a segment has to be shortened.
- The efficiency advantage over the Cohen-Sutherland algorithm is that multiple shortenings of line segments, and the related re-executions of the clipping algorithm, are avoided.
- The algorithm can be extended to three dimensions.
Polygon Clipping:
Polygons are to be clipped against rectangular windows for display or against other polygons in some other
situations.
E.g. Figure shows the shadow of a polygon created by clipping a polygon that is closer
to the light source against polygons that are farther away.
Polygon-clipping algorithms can be generated directly from line-clipping algorithms
by clipping the edges of the polygon successively. However, the polygon is an object.
Hence depending on the form (non-convex/convex) of the polygon, clipping may
generate more than one polygonal object.
E.g. Consider the non-convex (or concave) polygon below.
If it is clipped against a rectangular window, the result is shown below, with the three polygons.
Unfortunately, implementing a clipper that can increase the number of objects is problematic. Instead,
the clipper can be made to produce a single polygon with edges that overlap
along the sides of the window. But this choice might cause difficulties in other parts of the
implementation.
Convex polygons do not present such problems. Clipping a convex polygon against a rectangular window can
leave at most a single convex polygon. A graphics system might then either forbid the use of concave polygons,
or divide (tessellate) a given polygon into a set of convex polygons, as shown below.
For rectangular clipping regions, both the Cohen-Sutherland and the Liang-Barsky algorithms can be applied to polygons on an edge-by-edge basis.
Sutherland-Hodgman Algorithm:
- A line-segment clipper can be envisioned as a black box whose input is the pair of vertices from the segment to be tested and clipped, and whose output is either a pair of vertices corresponding to the clipped line segment, or nothing if the input line segment lies outside the window.
- Rather than considering the clipping window as four line segments, it can be considered as the object created by the intersection of four infinite lines that determine the top, bottom, right, and left sides of the window.
- Then the clipper can be subdivided into a pipeline of simpler clippers, each of which clips against a single line that is the extension of an edge of the window. The black-box view can be used on each of the individual clippers.
Consider clipping against only the top of the
window. This operation can be considered as a
black box shown in figure (b) below, whose input
and output are pairs of vertices, with the value of
Ymax as a parameter known to the clipper.
- Using similar triangles, if there is an intersection with the top of the window, it lies at
x3 = x1 + (ymax - y1)(x2 - x1)/(y2 - y1),
y3 = ymax.
Thus, the clipper returns one of three pairs:
{(x1, y1), (x2,y2)}, {(x1, y1), (xi,ymax)}, or {(xi, ymax), (x2,y2)}.
- Clipping against the bottom, right, and left lines can be done independently, using the same equations with the roles of x and y exchanged as necessary, and the values for the sides of the window inserted. The four clippers can now be arranged in the pipeline of the figure below.
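One stage of the pipeline can be sketched as follows (a hypothetical Python rendering: vertices are (x, y) tuples and the window bound is an assumed value). The same clip_against helper, given different inside and intersect arguments, serves for the other three stages:

```python
YMAX = 10.0  # hypothetical top of the window

def clip_against(poly, inside, intersect):
    """One pipeline stage: clip a polygon (vertex list) against one line."""
    out = []
    for i, cur in enumerate(poly):
        prev = poly[i - 1]            # wraps to the last vertex when i == 0
        if inside(cur):
            if not inside(prev):
                out.append(intersect(prev, cur))  # edge enters the window
            out.append(cur)
        elif inside(prev):
            out.append(intersect(prev, cur))      # edge leaves the window
    return out

def x_at_y(p, q, y):
    """Intersection of edge pq with the horizontal line at height y."""
    (x1, y1), (x2, y2) = p, q
    return (x1 + (y - y1) * (x2 - x1) / (y2 - y1), y)

def clip_top(poly):
    return clip_against(poly,
                        lambda v: v[1] <= YMAX,
                        lambda p, q: x_at_y(p, q, YMAX))
```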
If this configuration is built in hardware, the clipper can work on four vertices concurrently.
Clipping of Other Primitives
Bounding Boxes: Consider the many-sided polygon shown below. A clipping algorithm could clip the polygon by
individually clipping all of its edges, only to find that the entire polygon lies outside the clipping window.
This observation can be exploited through the use of the bounding box, or extent, of the
polygon. The bounding box is the smallest rectangle, aligned with the window, that contains the
polygon.
Calculating the bounding box requires only going through the vertices of the polygon to find the minimum and
maximum of both the x and y values.
Once the bounding box is obtained, detailed clipping can be avoided.
Consider the three cases shown below.
For the polygon above the window, no clipping is necessary, because the minimum y for the
bounding box is above the top of the window.
By comparing the bounding box with the window the polygon can be determined to be inside
the window. More care must be taken only when the bounding box straddles the window. Then
detailed clipping using all the edges of the polygon must be performed. The use of extents is
such a powerful technique - in both two and three dimensions - that modeling systems often
compute a bounding box for each object, automatically, and store the bounding box with the
object.
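A sketch of the extent computation and the resulting trivial accept/reject test, with assumed window bounds (names are hypothetical):

```python
# Hypothetical window bounds, for illustration only.
XMIN, XMAX, YMIN, YMAX = 0.0, 10.0, 0.0, 10.0

def bounding_box(poly):
    """Smallest axis-aligned rectangle containing the polygon's vertices."""
    xs = [x for x, _ in poly]
    ys = [y for _, y in poly]
    return min(xs), max(xs), min(ys), max(ys)

def classify(poly):
    """Trivial accept/reject using the extent; 'straddles' needs real clipping."""
    bxmin, bxmax, bymin, bymax = bounding_box(poly)
    if bxmax < XMIN or bxmin > XMAX or bymax < YMIN or bymin > YMAX:
        return "outside"        # no clipping necessary: reject whole polygon
    if XMIN <= bxmin and bxmax <= XMAX and YMIN <= bymin and bymax <= YMAX:
        return "inside"         # no clipping necessary: accept whole polygon
    return "straddles"          # detailed clipping of every edge required
```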
Curves, Surfaces, and Text
The variety of curves and surfaces that can be defined mathematically makes it difficult to find general
algorithms for processing these objects. The potential difficulties can be seen from the 2-D curves in figure.
For a simple curve, such as a quadratic, intersections can be computed,
although at a cost higher than that for lines. For more complex curves, such
as the spiral, not only must intersection calculations be computed with
numerical techniques, but even determining how many intersections need
to be computed, may be difficult.
Such problems can be avoided by approximating curves with line segments and surfaces with planar polygons.
The use of bounding boxes can also prove helpful, especially in cases such as quadratics, where the intersections
can be computed exactly but where one would prefer to make sure that the calculation is necessary before carrying it out.
The handling of text differs from API to API, with many APIs allowing the user to specify how detailed a
rendering of text is required. There are two extremes.
1. The text is stored as bit patterns and is rendered directly by the hardware without any geometric
processing. Any required clipping is done in the frame buffer.
2. Text is defined like any other geometric object, and is then processed through the standard viewing
pipeline.
OpenGL allows both of these cases by not having a separate text primitive. The user can choose the
desired mode by defining either bitmapped characters, using pixel operations, or stroke characters, using
the standard primitives.
Other APIs, such as PHIGS and GKS, add intermediate options, by having text primitives and a variety
of text attributes. In addition to attributes that set the size and color of the text, there are others that allow
the user to ask the system to use techniques such as bounding boxes to clip out strings of text that cross a
clipping boundary.
Clipping in the Frame Buffer
Clipping can be done after the objects have been projected and converted into screen coordinates. Clipping can
be done in the frame buffer through a technique called scissoring. However, it is usually better to clip geometric
entities before the vertices reach the frame buffer; thus, clipping within the frame buffer generally is required
for only raster objects, such as blocks of pixels.
Clipping in Three Dimensions
In three dimensions, clipping is performed against a bounded volume, rather than against a bounded region in the plane.
Consider the right parallelepiped clipping region below.
The three clipping algorithms (Cohen-Sutherland, Liang-Barsky, and Sutherland-Hodgman) and the use of
extents can be extended to three dimensions.
Extension for the Cohen-Sutherland algorithm: The 4-bit outcode is replaced with a 6-bit outcode. The
additional 2 bits are set if the point lies either in front of or behind the clipping volume.
The testing strategy is virtually identical for the two- and three-dimensional cases.
Extension For the Liang-Barsky algorithm: The equation z(α) = (1 - α).z1 + α z2 is added to obtain a 3-D
parametric representation of the line segment. Six intersections with the surfaces that form the clipping volume
must be considered and the same logic used in 2-D can be used.
Pipeline clippers add two modules to clip against the front and back of the clipping volume.
The major difference between two- and three-dimensional clippers is that instead of clipping lines against lines
(as in two dimensions), in three dimensions lines are clipped against surfaces, or surfaces against surfaces.
Consequently, the intersection calculations must be changed. A typical intersection calculation can be posed in
terms of a parametric line in 3-Ds intersecting a plane shown below.
If the line and plane equations are written in matrix form (where n is the normal to
the plane and p0 is a point on the plane), following equations need to be solved:
p(α) = (1 - α).p1 + α p2
n. (p(α) – p0) = 0,
for the α corresponding to the point of intersection. This value is
α = n · (p0 - p1) / n · (p2 - p1),
and computation of an intersection requires six multiplications and a division.
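A minimal sketch of this intersection calculation (hypothetical helper names; points and the normal are 3-tuples):

```python
def dot(u, v):
    """Dot product of two 3-vectors: three multiplications."""
    return sum(a * b for a, b in zip(u, v))

def intersect_plane(p1, p2, n, p0):
    """alpha where p(alpha) = (1 - alpha)p1 + alpha*p2 meets n.(p - p0) = 0."""
    num = dot(n, [a - b for a, b in zip(p0, p1)])   # n . (p0 - p1)
    den = dot(n, [a - b for a, b in zip(p2, p1)])   # n . (p2 - p1)
    return num / den   # six multiplications and one division in total
```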
However, simplifications are possible with standard viewing volumes.
o For orthographic viewing, shown below, the view volume is a right parallelepiped, and each intersection
calculation reduces to a single division, as it did for 2-D clipping.
o For an oblique view, shown below, the clipping volume no longer is a right parallelepiped.
Although the computation of dot products to clip against the sides of the volume is needed, here is where the
normalization process pays dividends. It has been shown that an oblique projection is equivalent to a shearing of
the data followed by an orthographic projection. Although the shear transformation distorts objects, they are
distorted such that they project correctly. The shear also distorts the clipping volume from a general
parallelepiped to a right parallelepiped.
Figure (a) shows a top view of an oblique volume with a cube inside the volume. Figure (b) shows the volume
and object after they have been distorted by the shear. As far as projection is concerned, carrying out the
oblique transformation directly or replacing it by a shear transformation and an orthographic projection requires
the same amount of computation. When clipping is added in, it is clear that the second approach has a definite
advantage, because clipping can be performed against a right parallelepiped. This example illustrates the
importance of considering the incremental nature of the steps in an implementation. Analysis of either
projection or clipping, in isolation, fails to show the importance of the normalization process.
o For perspective projections, the argument for normalization is just as strong. By carrying out the
perspective-normalization transformation, but not the orthographic projection, again a rectangular
clipping volume can be created and thereby all subsequent intersection calculations can be simplified.
OpenGL supports additional clipping planes that can be oriented arbitrarily. Hence, if this feature is used in a
user program, the implementation must carry out a general clipping routine in software, at a performance cost.
Rasterization / Scan Conversion
The process of setting pixels in the frame buffer from the specification of geometric entities in an application
program is called scan conversion.
Scan Converting Line-Segments:
DDA (Digital Differential Analyzer) algorithm: DDA is an early electromechanical device for digital
simulation of differential equations. A line satisfies the differential equation dy / dx = m, where m is the slope.
Hence generating a line segment is equivalent to solving a simple differential equation numerically.
- Consider a line segment defined by the endpoints (x1, y1) and (x2, y2). These values are rounded to integer values, so the line segment starts and ends at a known pixel.
- The slope is given by m = (y2 - y1)/(x2 - x1) = Δy/Δx. Assume that 0 ≤ m ≤ 1. Other values of m can be handled using symmetry.
- The algorithm writes a pixel, using write_pixel, for each value of ix as x goes from x1 to x2.
For the line segment shown in the figure, for any change in x equal to Δx, the corresponding
change in y must be Δy = m Δx. While moving from x1 to x2, if x is increased by 1 in each iteration, the algorithm increases y by Δy = m.
- Although each x is an integer, each y is not, because m is a floating-point number, and the algorithm must round y to find the appropriate pixel.
The algorithm, in pseudocode, is
for (ix = x1; ix <= x2; ix++)
{
    y += m;
    write_pixel(ix, round(y), line_color);
}
where round is a function that rounds a real to an integer.
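A complete, runnable version of the DDA loop (a sketch only: it collects the pixels in a list rather than calling write_pixel, and it initializes y so the segment starts at (x1, y1)):

```python
def dda(x1, y1, x2, y2):
    """Generate the pixels of a line segment with 0 <= slope <= 1."""
    pixels = []
    m = (y2 - y1) / (x2 - x1)    # floating-point slope, 0 <= m <= 1 assumed
    y = float(y1)
    for ix in range(x1, x2 + 1):
        pixels.append((ix, round(y)))   # write_pixel(ix, round(y), color)
        y += m                          # one floating-point add per pixel
    return pixels
```

Note that Python's built-in round uses round-half-to-even, so ties such as y = 0.5 round down; a real rasterizer would pick a fixed tie-breaking rule.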
- The above algorithm is of the form: for each x, find the best y. If this is used for lines with larger slopes (slope > 1), the separation between colored pixels can be large, generating an unacceptable approximation to the line segment. Hence, for larger slopes, the algorithm swaps the roles of x and y, becoming: for each y, find the best x.
- Use of symmetry removes any potential problems from either vertical or horizontal line segments.
- The cases for negative slopes can be derived similarly.
- The DDA algorithm appears efficient and can be coded easily, but it requires a floating-point addition for each pixel generated.
Bresenham’s Algorithm
Bresenham’s algorithm avoids all floating-point calculations and has become the standard algorithm used in
hardware and software rasterizers.
- Consider a line segment defined by the endpoints (x1, y1) and (x2, y2). These values are rounded to integer values, so the line segment starts and ends at a known pixel.
- The slope is given by m = (y2 - y1)/(x2 - x1) = Δy/Δx. Assume that 0 ≤ m ≤ 1. Other values of m can be handled using symmetry.
- Consider that, in the middle of the scan conversion of a line segment, a pixel has been placed at (i + 1/2, j + 1/2). The line (of which the segment is part) can be represented as y = mx + h. At x = i + 1/2, this line must pass within one-half the length of a pixel of (i + 1/2, j + 1/2); otherwise, the rounding operation would not have generated this pixel.
- When moving ahead to x = i + 3/2, the slope condition indicates that one of two possible pixels must be colored: the pixel at (i + 3/2, j + 1/2) or the pixel at (i + 3/2, j + 3/2).
- This selection can be made in terms of the decision variable d = a - b, where a and b are the distances between the line and the upper and lower candidate pixels, respectively, at x = i + 3/2, as shown below. If d is positive, the line passes closer to the lower pixel, so the pixel at (i + 3/2, j + 1/2) is selected; otherwise, the pixel at (i + 3/2, j + 3/2) is selected.
- d can be computed from y = mx + h, but this is avoided because m is a floating-point number. Bresenham's algorithm offers a computational advantage through two further steps.
1. It replaces floating-point operations with fixed-point operations.
2. It is applied incrementally, starting by replacing d with the new decision variable d = (x2 - x1)(a - b) = Δx(a - b), a change that cannot affect which pixels are drawn, because only the sign of the decision variable matters.
๐‘ฆ2−๐‘ฆ1 โˆ†๐‘ฆ
๏‚ง If a and b values are substituted, using the equation of the line, and noting that m=
= ,
๐‘ฅ2−๐‘ฅ1 โˆ†๐‘ฅ
h = y2 - mx2, then d is an integer. Floating-point calculations are thus eliminated but the direct computation of d
requires a fair amount of fixed-point arithmetic.
1
Slightly different approach: Suppose that dk is the value of d at x = k + .
2
Then while computing dk+1 incrementally from dk, there are two situations, depending on whether the y
location of the pixel is incremented at the previous step or not; these situations are shown in figure below.
Observing that a is the distance between the location of the upper candidate pixel and the line, a increases
by m only if y was increased by the previous decision; otherwise, it decreases by m - 1.
Likewise, b either decreases by -m or increases by 1 - m when y is incremented. Multiplying by Δx, the possible
changes in d are either 2Δy or 2(Δy - Δx). This result can be stated in the form
2 โˆ†๐‘ฆ
๐‘–๐‘“ ๐‘‘๐‘˜ < 0
dk+1 = dk + {
2 (โˆ†๐‘ฆ − โˆ†๐‘ฅ) ๐‘œ๐‘กโ„Ž๐‘’๐‘Ÿ๐‘ค๐‘–๐‘ ๐‘’
The calculation of each successive pixel in the frame buffer requires only an addition and a sign test. This
algorithm is so efficient that it has been incorporated as a single instruction on graphics chips.
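An integer-only sketch of the inner loop for 0 ≤ m ≤ 1 (sign conventions for d vary between presentations; this hypothetical version uses the common initial value d = 2Δy - Δx, with d ≥ 0 selecting the upper pixel):

```python
def bresenham(x1, y1, x2, y2):
    """Integer-only Bresenham rasterization for 0 <= slope <= 1."""
    dx, dy = x2 - x1, y2 - y1
    d = 2 * dy - dx                 # initial decision variable
    y = y1
    pixels = []
    for x in range(x1, x2 + 1):
        pixels.append((x, y))       # write_pixel(x, y, color)
        if d < 0:
            d += 2 * dy             # stay on the same row
        else:
            d += 2 * (dy - dx)      # move up one row
            y += 1
    return pixels
```

Each iteration performs only an addition and a sign test, as the text states.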
Scan Conversion of Polygons:
Raster systems have the ability to display filled polygons. The phrases rasterizing polygons and polygon scan
conversion came to mean filling a polygon with a single color. There are many viable methods for rasterizing
polygons.
Inside-Outside Testing
For non-simple polygons, it must be determined whether a given point is inside or outside the polygon.
Crossing or odd-even test: It is used for making inside-outside decisions. Suppose that p is a point inside a
polygon. Any ray emanating from p and going off to infinity must cross an odd number of edges. Any ray
emanating from a point outside the polygon and entering the polygon crosses an even number of edges before
reaching infinity. Hence, a point can be defined as being inside if, following a line drawn through it and starting
on the outside, an odd number of edges is crossed before reaching it. For the star-shaped polygon below, the inside coloring is shown.
Implementation of this testing replaces rays through points with scan-lines, and counts the
crossing of polygon edges to determine inside and outside.
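A sketch of the odd-even test for a point against a polygon given as a vertex list (a hypothetical implementation; the half-open comparison avoids double-counting a crossing when the ray passes exactly through a vertex):

```python
def inside_odd_even(point, poly):
    """Odd-even rule: cast a ray to the right and count edge crossings."""
    px, py = point
    crossings = 0
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        # The edge straddles the horizontal ray through py.
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:        # crossing lies to the right of the point
                crossings += 1
    return crossings % 2 == 1
```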
Winding test:
Suppose, however, that the star polygon is to be filled as shown; the winding test produces this result. This test considers the polygon as a knot being wrapped around a point or a line. To implement the test, the edges of the polygon are traversed from any starting vertex, going around the polygon in a particular direction (which direction does not matter) until the starting point is reached. The path is illustrated by labeling the edges, as shown in figure (b).
Then an arbitrary point is considered. The winding number for this point is the number of times it is encircled
by the edges of the polygon. Clockwise encirclements are counted as positive and counterclockwise
encirclements as negative (or vice versa). Thus, points outside the star in the figure are not encircled and have a
winding number of 0; points that were filled by the odd-even test all have a winding number of 1; and points
in the center that were not filled by the odd-even test have a winding number of 2. If the fill rule is changed so
that a point is inside the polygon if its winding number is not zero, then the inside of the polygon is filled as
shown in figure (a).
Problems with the aforesaid definition of the winding number:
Consider the S-shaped curve in the figure, which can be approximated with a polygon containing
many vertices. The definition of encirclement for points inside the curve is not clear. But the odd-even
definition can be modified to improve the definition of the winding number and to obtain a way
of measuring the winding number for an arbitrary point.
Consider any line through an interior point p that cuts through the polygon completely, as shown in above
figure (b), and that is not parallel to an edge of the polygon. The winding number for this point is the number of
edges that cross the line in the downward direction, minus the number of edges that cross the line in the upward
direction. If the winding number is not zero, the point is inside the polygon. Note that for this test, it does not
matter how up and down are defined.
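This up/down-crossing definition can be sketched directly (a hypothetical implementation; each edge that crosses a rightward ray from the point is classified as crossing upward or downward):

```python
def winding_number(point, poly):
    """Count signed up/down crossings of a rightward ray from the point."""
    px, py = point
    wn = 0
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if y1 <= py < y2:        # edge crosses the ray going upward
            if x1 + (py - y1) * (x2 - x1) / (y2 - y1) > px:
                wn += 1
        elif y2 <= py < y1:      # edge crosses the ray going downward
            if x1 + (py - y1) * (x2 - x1) / (y2 - y1) > px:
                wn -= 1
    return wn
```

The nonzero fill rule then classifies the point as inside whenever the result is not 0, regardless of its sign.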
OpenGL and Concave Polygons
To ensure correct rendering of more general polygons, the possible approaches are:
1. The application must ensure that all polygons obey the restrictions needed for proper rendering.
2. The application tessellates a given polygon into flat convex polygons, usually triangles. There are many ways
to divide a given polygon into triangles. A good tessellation should not produce triangles that are long
and thin; it should, if possible, produce sets of triangles that can use supported features, such as triangle
strips and triangle fans. There is a tessellator in the GLU library. For a simple polygon without holes, a
tessellator object is declared first, and then the vertices of the polygon are given to it as follows:
mytess = gluNewTess();
gluTessBeginPolygon(mytess, NULL);
gluTessBeginContour(mytess);
for (i = 0; i < nvertices; i++)
    gluTessVertex(mytess, vertex[i], vertex[i]);
gluTessEndContour(mytess);
gluTessEndPolygon(mytess);
Although there are many parameters that can be set, the basic idea is that a contour is described, and the
tessellating software generates the required triangles and sends them off to be rendered based on the present
tessellation parameters.
Scan-Conversion with the z-Buffer
Careful use of the z-buffer algorithm can accomplish the following three tasks simultaneously:
๏‚ท Computation of the final orthographic projection
๏‚ท Hidden-surface removal, and
๏‚ท Shading.
Consider the dual representations of a polygon shown below:
In (a) the polygon is represented in 3-D normalized device coordinates. In (b) it is
shown after projection in screen coordinates.
The strategy is to process each polygon, one scan line at a time. In terms of the
dual representations, a scan line, projected backward from screen coordinates,
corresponds to a line of constant y in normalized device coordinates as below:
To march across this scan line and its back projection, for the scan line in screen
coordinates, in each step one pixel width is moved. Normalized-device-coordinate
line is used to determine depths incrementally, and to see whether or not the pixel in
screen coordinates corresponds to a visible point on the polygon. Having computed
shading for the vertices of the original polygon, bilinear interpolation can be used to
obtain the correct color for visible pixels. This process requires little extra effort; it is controlled, and thus
limited, by the rate at which the polygons can be sent through the pipeline.
Fill and Sort
A different approach to rasterization of polygons starts with the idea of a polygon processor: a black box whose
inputs are the vertices for a set of 2-D polygons and whose output is a frame buffer with the correct pixels set.
Consider filling each polygon with a constant color. First, consider a single polygon. The basic rule for filling a
polygon is as follows: if a point is inside the polygon, color it with the inside (fill) color. This conceptual
algorithm indicates that polygon fill is a sorting problem, in which all the pixels in the frame buffer are sorted into
those that are inside the polygon and those that are not. From this perspective, different polygon-fill algorithms
can be obtained using different ways of sorting the points. The following possibilities are introduced:
1. Flood fill
2. Scan-line fill
3. Odd-even fill
Flood-Fill
An unfilled polygon can be displayed by rasterizing its edges into the frame buffer using Bresenham's
algorithm. Assume two colors: a background color (white) and a foreground, or drawing-color (black).
Foreground color can be used to rasterize the edges, resulting in a frame buffer colored as shown below:
If an initial point (x, y) inside the polygon is found - a seed point - then its neighbors can be
found recursively, coloring them with the foreground color if they are not edge points. The
flood-fill algorithm can be expressed in pseudocode, assuming that there is a function
read_pixel that returns the color of a pixel:
flood_fill(int x, int y)
{
    if (read_pixel(x, y) == WHITE)
    {
        write_pixel(x, y, BLACK);
        flood_fill(x - 1, y);
        flood_fill(x + 1, y);
        flood_fill(x, y - 1);
        flood_fill(x, y + 1);
    }
}
A number of variants of flood fill can be obtained by removing the recursion. One way to do so is to work one
scan line at a time.
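One such non-recursive variant can be sketched as follows. This is a minimal illustration in Python (the grid representation, colors, and seed point are assumed values, not from the text): an explicit stack replaces the recursion, and each step fills a whole horizontal span of one scan line before seeding the lines above and below it.

```python
# Iterative scan-line flood fill: an explicit stack replaces the recursion.
# WHITE/BLACK values and the list-of-lists grid are illustrative assumptions.
WHITE, BLACK = 0, 1

def flood_fill(grid, x, y):
    h, w = len(grid), len(grid[0])
    stack = [(x, y)]
    while stack:
        x, y = stack.pop()
        if not (0 <= x < w and 0 <= y < h) or grid[y][x] != WHITE:
            continue
        # Expand to the full horizontal span of white pixels on this scan line.
        left = x
        while left > 0 and grid[y][left - 1] == WHITE:
            left -= 1
        right = x
        while right < w - 1 and grid[y][right + 1] == WHITE:
            right += 1
        for i in range(left, right + 1):
            grid[y][i] = BLACK
            # Seed the scan lines above and below the span just filled.
            for ny in (y - 1, y + 1):
                if 0 <= ny < h and grid[ny][i] == WHITE:
                    stack.append((i, ny))
```

Working one span at a time keeps the stack small compared with the four-way recursive version, which pushes one frame per pixel.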
Scan-Line Algorithms
They generate pixels as they are displayed.
Consider the polygon with one scan line shown.
Span: Groups of pixels of the scan line that lie inside the polygon. There are three spans
in the figure.
Scan-line constant-filling algorithms identify the spans first and then color the interior
pixels of each span with the fill color.
The spans are determined by the set of intersections of polygons with scan lines. The vertices contain all the
information needed to determine these intersections.
But the order in which these intersections are generated is determined by the method used to represent the
polygon.
E.g. Consider the polygon represented by an ordered list of vertices.
The most obvious way to generate scan-line-edge intersections is to process edges defined by
successive vertices. Figure shows these intersections, indexed in the order in which this
method would generate them. Note that this calculation can be done incrementally.
However, to fill one scan line at a time, the aforesaid order is not useful.
Instead, the scan-line algorithms sort the intersections
• initially by scan lines, and
• then by the order of x on each scan line, as shown in figure below.
y-x algorithm:
It creates a bucket for each scan line. As edges are processed, the intersections with scan
lines are placed in the proper buckets. Within each bucket, an insertion sort
orders the x values along each scan line. The data structure is shown in figure.
A properly chosen data structure can speed up the algorithm.
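The bucket structure can be sketched as follows. This is a minimal Python illustration (the vertex-list input, the half-open edge rule, and the grid output are assumptions made for the sketch) that combines the y-x sort with odd-even span filling:

```python
# y-x scan-line fill sketch: bucket edge/scan-line intersections by y,
# sort each bucket by x, then fill spans with the odd-even rule.
def scanline_fill(vertices, height, width):
    buckets = [[] for _ in range(height)]          # one bucket per scan line
    n = len(vertices)
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        if y0 == y1:
            continue                               # horizontal edges add no crossings
        if y0 > y1:
            x0, y0, x1, y1 = x1, y1, x0, y0
        for y in range(y0, y1):                    # half-open rule avoids vertex double-counting
            x = x0 + (x1 - x0) * (y - y0) / (y1 - y0)
            buckets[y].append(x)
    grid = [[0] * width for _ in range(height)]
    for y, xs in enumerate(buckets):
        xs.sort()                                  # the "x" part of the y-x sort
        for x_in, x_out in zip(xs[0::2], xs[1::2]):
            for x in range(int(x_in), int(x_out)):
                grid[y][x] = 1                     # interior pixel of a span
    return grid
```

The half-open treatment of each edge (including the lower endpoint, excluding the upper) is one simple way to sidestep the vertex singularities discussed next.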
Singularities
Most polygon-fill algorithms can be extended to other shapes. Polygons have the distinct advantage that the
locations of their edges are known exactly. Even polygons can present problems, however, when vertices lie on
scan lines. Consider the two cases below.
The odd-even fill definition treats these two cases differently. For part (a), the intersection of the scan
line with the vertex can be counted as either zero or two edge crossings; for part (b), the vertex-scan-line intersection must be counted as one edge crossing.
The algorithm can be fixed in one of two ways:
• The algorithm checks which of the two situations it has encountered and then counts the edge crossings appropriately.
• The algorithm prevents the special case of a vertex lying on a scan line - a singularity - from ever arising, by making a rule that no vertex has an integer y value. If a vertex is found to have an integer y value, its location is perturbed slightly. Another method, one that is especially valuable when working in the
frame buffer - is to consider a virtual frame buffer of twice the resolution of the real frame buffer. In the virtual
frame buffer, pixels are located at only even values of y, and all vertices are located at only odd values of y.
Placing pixel centers half-way between integers, as does OpenGL, is equivalent to using this approach.
Hidden-Surface Removal (or visible-surface determination)
Before rasterizing any of the objects after transformation and clipping, the hidden-surface-removal
problem must be solved: it must be discovered whether each object is visible to the viewer, or is obscured from the viewer by
other objects. There are two types of hidden-surface-elimination algorithms:
1. Object-Space Algorithms
They determine which objects are in front of others. Consider a scene composed of k 3-D opaque flat polygons.
Object-space algorithms consider the objects pairwise, as seen from the center of projection. Consider two such
polygons, A and B. There are four possibilities (figure below):
1. A completely obscures B from the camera; then only A is displayed.
2. B completely obscures A; only B is displayed.
3. A and B both are completely visible; both A and B are displayed.
4. A and B partially obscure each other; then the visible parts of each polygon must be computed.
For simplicity, the determination of which case holds and the calculation of the visible part of a polygon
can be considered as a single operation.
The algorithm proceeds iteratively. One of the k polygons is selected and compared pairwise with the remaining
k - 1 polygons. After this procedure, the part of this polygon that is visible, if any, is known, and that
visible part is rendered. The process is repeated with any of the other k - 1 polygons. Each step involves
comparing one polygon, pairwise, with the remaining polygons, until only two polygons remain, and they are
compared to each other.
The complexity of this calculation is O(k²). Thus, without deriving the details of any particular object-space algorithm, it can be said that the object-space approach works best for scenes that contain relatively few
polygons.
2. Image-Space Algorithms
The image-space approach follows the viewing and ray-casting model, as shown below.
• Consider a ray that leaves the center of projection and passes through a pixel.
• This ray can be intersected with each of the planes determined by the k polygons.
• For each such intersection, it is determined whether the ray actually passes through the polygon lying in that plane.
• Finally, among those intersections, the one closest to the center of projection is found, and the pixel is colored with the shade of that polygon at the point of intersection.
The fundamental operation is the intersection of rays with polygons. For an n x m display, this operation must
be done nmk times, giving O(k) complexity. However, because image-space approaches work at the pixel level,
they can create renderings more jagged than those of object-space algorithms.
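The per-ray work can be sketched as follows in Python. The (a, b, c, d) plane representation and the color values are illustrative assumptions, and the point-in-polygon test that a full implementation needs after the plane intersection is omitted to keep the sketch short:

```python
# Image-space sketch: cast one ray and keep the closest plane intersection.
# Planes are (a, b, c, d) coefficients of ax + by + cz + d = 0, paired with
# an illustrative color; the in-polygon test is omitted.
def closest_hit(origin, direction, planes):
    best_t, best_color = float("inf"), None
    for (a, b, c, d), color in planes:
        denom = a * direction[0] + b * direction[1] + c * direction[2]
        if denom == 0:
            continue                       # ray parallel to the plane
        t = -(d + a * origin[0] + b * origin[1] + c * origin[2]) / denom
        if 0 < t < best_t:                 # closest intersection in front of the viewer
            best_t, best_color = t, color
    return best_color
```

For an n x m display this loop runs once per pixel, which is where the nmk operation count in the text comes from.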
Back-Face Removal
The work required for hidden surface removal can be reduced by eliminating all back-facing polygons before
applying any other hidden-surface-removal algorithm.
The test for culling a back-facing polygon can be derived from figure below:
The front of a polygon is seen if the normal, which comes out of the front face, points
toward the viewer. If θ is the angle between the normal and the viewer direction, then the
polygon is facing forward if and only if
−90° ≤ θ ≤ 90°, or, equivalently, cos θ ≥ 0.
The second condition is much easier to test because, instead of computing the cosine,
the dot product n · v ≥ 0 can be used.
Usually back-face removal is applied after transformation to normalized device
coordinates, in which all views are orthographic with the direction of projection along the z axis. This
simplifies the test further. Hence, in homogeneous coordinates,
v = [0, 0, 1, 0]^T
Thus, if the polygon is on the surface ax + by + cz + d = 0 in normalized device coordinates, the algorithm only
needs to check the sign of c to determine whether the polygon is a front- or back-facing.
In OpenGL, the function glCullFace turns on back-face elimination.
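As an illustration, the test can be sketched in Python (the vertex-tuple representation and the counterclockwise front-face convention are assumptions): the polygon's normal is computed from three vertices, and in normalized device coordinates only its z component, the plane coefficient c, decides facing.

```python
# Back-face test sketch: compute the plane normal from three vertices and,
# with the direction of projection along z, check only the sign of its
# z component (the coefficient c of the polygon's plane).
def normal(p0, p1, p2):
    # cross product of two edge vectors gives the plane normal
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p0[i] for i in range(3))
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

def front_facing(p0, p1, p2):
    # n . v with v = (0, 0, 1, 0) reduces to the z component of n
    return normal(p0, p1, p2)[2] >= 0
```

With counterclockwise vertex order, a triangle facing the viewer yields a normal with positive z and passes the test.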
The z-Buffer Algorithm
It is the most widely used algorithm. It has the advantages of being easy to implement, in either hardware or
software, and of being compatible with pipeline architectures, where it can execute at the speed at which
vertices are passing through the pipeline. Although the algorithm works in image space, it loops over the
polygons, rather than over pixels, and can be regarded as part of the scan-conversion process (discussed later).
Assume the process of rasterizing one of the two polygons in figure.
A color for each point of intersection between a ray from the center of
projection and a pixel can be computed, using the shading model. In
addition, algorithm checks whether this point is visible. It will be
visible if it is the closest point of intersection along the ray. Hence,
while rasterizing B, its shade will appear on the screen if the distance
z2 < z1 to polygon A. Conversely, while rasterizing A, the pixel that
corresponds to the point of intersection will not appear on the display.
Because the algorithm proceeds polygon by polygon, the information
on all other polygons while rasterizing any given polygon must be
retained. However, the depth information can be stored and updated
during scan conversion.
A buffer, called the z buffer, with the same spatial resolution as the frame buffer and with depth consistent with the
resolution required for distances, can be used to store these distances.
E.g. Consider a 1024 x 1280 display and single-precision floating-point numbers used for the depth
calculation. Then a 1024 x 1280 z buffer with 32-bit elements can be used. Initially, each element in the depth
buffer is initialized to the maximum distance away from the center of projection. The frame buffer is initialized
to the background color. At any time during rasterization, each location in the z buffer contains the distance
along the ray corresponding to this location of the closest intersection point on any polygon found so far.
The calculation proceeds as follows. Rasterization is done polygon by polygon. For each point on the polygon
corresponding to the intersection of the polygon with a ray through a pixel, the distance from the center of
projection is computed. This distance is compared to the value in the z buffer corresponding to this point. If this
distance is greater than the distance in the z buffer, then it means that a polygon closer to the viewer has already
been processed, and this point is not visible. If the distance is less than the distance in the z buffer, then it is the
point closer to the viewer and hence the distance in the z buffer is updated and the shade computed for this point
is placed at the corresponding location in the frame buffer.
OpenGL uses the z-buffer algorithm for hidden-surface removal.
The z-buffer algorithm works well with the image-oriented approaches to implementation, because the amount
of incremental work is small.
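The compare-and-update loop can be sketched as follows in Python. The fragment representation, precomputed (x, y, z, color) tuples, is an assumption made to keep the sketch short; a real rasterizer generates these values incrementally during scan conversion.

```python
# z-buffer sketch: a depth buffer and a frame buffer with a per-fragment
# compare-and-update. Buffer sizes, the far value, and colors are illustrative.
def render(fragments, width, height, far=1e9, background=None):
    zbuf = [[far] * width for _ in range(height)]         # init to max distance
    frame = [[background] * width for _ in range(height)]  # init to background
    for x, y, z, color in fragments:
        if z < zbuf[y][x]:        # closer than anything seen so far at this pixel
            zbuf[y][x] = z
            frame[y][x] = color
    return frame
```

Note that the result is independent of the order in which fragments arrive, which is why the algorithm can proceed polygon by polygon.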
Depth Sort and the Painter's Algorithm
Depth sort is a direct implementation of the object-space approach. It is a variant of an even simpler algorithm
known as the painter's algorithm.
Consider a collection of polygons sorted based on their distance from the viewer.
Consider the figure (a) and (b). To render the scene correctly, one of the following approaches must be followed:
• The part of the rear polygon that is visible must be found, and that part must be rendered into the frame buffer. This is a calculation that requires clipping one polygon against the other.
• Another approach is analogous to the way an oil painter might render the scene. The painter would paint the rear polygon in its entirety, and then would paint the front polygon, in the process painting over the part of the rear polygon not visible to the viewer. Both polygons are rendered completely, with hidden-surface removal done as a consequence of the back-to-front rendering of the polygons.
Depth sort addresses the following two questions:
• How to sort?
Assume that the extent of each polygon has already been computed. The next step of depth sort is to order all the polygons by how far away from the viewer their maximum z value is. This step gives the algorithm its name.
• What to do if polygons overlap?
If two polygons overlap, the depth-sort algorithm runs a number of increasingly more difficult tests, attempting to find an ordering in which the polygons can be painted (rendered) individually and still yield the correct image. E.g.
Consider a pair of polygons whose z extents overlap. The simplest test is to check their x and y extents. If either
the x or the y extents do not overlap, neither polygon can obscure the other, and they can be painted in either
order. Even if these tests fail, it may still be possible to find an order in which the polygons can be painted
individually.
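The extent test can be sketched as follows in Python, where the (min, max) pair representation of each extent is an assumption:

```python
# Depth-sort extent test sketch: if the x or the y extents of two polygons are
# disjoint, neither polygon can obscure the other and the painting order is free.
def overlaps(ext_a, ext_b):
    # ext = (min, max); intervals overlap iff each starts before the other ends
    return ext_a[0] <= ext_b[1] and ext_b[0] <= ext_a[1]

def order_is_free(poly_a, poly_b):
    # poly = ((xmin, xmax), (ymin, ymax))
    (ax, ay), (bx, by) = poly_a, poly_b
    return not overlaps(ax, bx) or not overlaps(ay, by)
```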
Other problems:
• Cyclical overlapping of three or more polygons: then there is no correct order for painting. The application divides at least one of the polygons into two parts and attempts to find an order in which to paint the new set of polygons.
• A polygon piercing another polygon, as shown below. To continue with depth sort, the program must derive the details of the intersection - a calculation equivalent to clipping one polygon against the other. If the intersecting polygons have many vertices, another algorithm that requires less computation can be adopted.
A performance analysis of depth sort is difficult because the particulars of the application determine how often
the more difficult cases arise.
The Scan-Line Algorithm
The algorithm combines polygon scan conversion with hidden-surface removal. Figure below shows two
intersecting polygons and their edges.
If the polygon is rasterized scan line by scan line, the incremental depth calculation
can be used. However, by observing the figure, still greater efficiency is possible.
• Scan line i crosses edge a of polygon A. There is no reason to carry out a depth calculation, because only the first polygon has been entered on this scan line; no other polygon can yet affect the colors along it. The scan line leaves polygon A when the next edge, b, is encountered, and the corresponding pixels are colored with the background color. When edge c of polygon B is encountered, still only a single polygon is active, and hence depth calculations can be avoided.
• Scan line j shows a more complex situation. First, edge a is encountered again, and colors can be assigned without a depth calculation. The second edge encountered is c; thus, there are two polygons to worry about. Until edge d is passed, depth calculations must be performed, and incremental methods can be used.
Although this strategy has elements in common with the z-buffer algorithm, it is fundamentally different
because it is working one scan line at a time, rather than one polygon at a time. A good implementation of the
algorithm requires a moderately sophisticated data structure for representing which edges are encountered on
each scan line. The basic structure is an array of pointers, one for each scan line, each of which points to an
incremental edge structure for the scan line.
Antialiasing
Rasterized line segments and edges of polygons look jagged even on a high-resolution CRT. This is due to the
mapping of continuous representation of an object, which has infinite resolution, to a discrete approximation,
which has limited resolution. The name aliasing has been given to this effect, because of the tie with aliasing in
digital signal processing. The errors are caused by three related problems with the discrete nature of the frame
buffer.
1. The number of pixels in an n x m frame buffer is fixed, so only certain patterns can be generated to
approximate a line segment. Many different continuous line segments may be approximated by the same
pattern of pixels. Alternatively, it can be said that all these segments are aliased as the same sequence of
pixels.
2. Pixel locations are fixed on a uniform grid, regardless of where pixels need to be placed. That is, pixels
cannot be placed at anything other than evenly spaced locations.
3. Pixels have a fixed size and shape.
• Spatial-domain aliasing and antialiasing:
Consider the ideal raster line segment shown below. This line cannot be drawn exactly, because the display
consists of square pixels. Bresenham's algorithm can be viewed as a method for approximating the ideal
one-pixel-wide line with real pixels. The ideal one-pixel-wide line partially covers many pixel-sized
boxes, yet the scan-conversion algorithm is forced to choose exactly one pixel value for each value of x,
for lines of slope less than 1.
Instead, each box can be shaded by the percentage of the ideal line that crosses it; then a smoother-appearing
image is obtained. This technique is known as antialiasing by area averaging. The calculation is similar to
polygon clipping.
There are other approaches to antialiasing, and antialiasing algorithms that can be applied to other primitives,
such as polygons.
Use of the z-buffer algorithm also poses an aliasing problem, in that the color of a given pixel is determined by the shade
of a single primitive.
Consider the pixel shared by the three polygons shown in figure. If each polygon has a
different color, the color assigned to the pixel is the one associated with the polygon closest to
the viewer.
In such situations, the pixel's color can be assigned based on an area-weighted average of the colors of
the three polygons, to obtain a much better image.
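The area-weighted average can be sketched as follows in Python; the coverage fractions and RGB colors are illustrative values:

```python
# Area-weighted color average sketch for a pixel shared by several polygons.
# contributions: list of (area_fraction, (r, g, b)); fractions should sum to 1.
def blend(contributions):
    return tuple(sum(f * c[i] for f, c in contributions) for i in range(3))
```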
• Time-domain aliasing:
This problem arises when generating sequences of images, such as for animations. Consider a small object that
is moving in front of the projection plane and that has been ruled into pixel-sized units, as shown in figure.
If the rendering process sends a ray through the center of each pixel and determines
what it hits, then sometimes the object is intersected and sometimes, if the
projection of the object is small, the object is missed. The viewer will have the
unpleasant experience of seeing the object flash on and off the display as the
animation progresses. There are several ways to deal with this problem. For
example, more than one ray per pixel (a technique common in ray tracing) can be
used.
All antialiasing techniques require considerably more computation than does
rendering without antialiasing. In practice, for high-resolution images, antialiasing is done off-line, and is done
only when a final image is needed.
Display Considerations
In most interactive applications, the application programmer does not have to worry about how the contents of
the frame buffer are displayed.
• In scan-line-based systems, the display is generated directly by the rasterization algorithms.
• In the more common approach for workstations, the frame buffer consists of dual-ported memory; the
process of writing into the frame buffer is completely independent of the process of reading the frame
buffer's contents for display.
Thus, the hardware redisplays the present contents of the frame buffer at a rate sufficient to avoid flicker,
usually 60 to 85 Hz, and the application programmer worries only about whether or not the program can execute
and fill the frame buffer fast enough. E.g. although the use of double buffering allows the display to change
smoothly, it does not help if the program cannot generate primitives at the desired speed.
Numerous other problems affect the quality of the display and often cause users to be unhappy with the output
of the programs. For example, two CRT displays may have the same nominal resolution but may display pixels
of different sizes.
Color Systems
Problems with RGB Color Systems:
Differences across RGB systems among devices: E.g. the same RGB triplet (0.8, 0.6, 0.0) used to drive both
a CRT and a film-image recorder may produce different colors, because the film dyes and the CRT phosphors have
different color distributions. These differences in display properties, due to device dependency, are not addressed
by most APIs.
The colorimetry literature contains standards for many of the common existing color systems.
E.g. CRTs are based on the National Television Systems Committee (NTSC) RGB system.
Differences in color systems can be viewed as equivalent to different coordinate systems for representing the
tri-stimulus values. If C1 = [R1, G1, B1]^T and C2 = [R2, G2, B2]^T are the representations of the same color in two
different systems, there is a 3 x 3 color-conversion matrix M such that C2 = M C1. This solution may not be
sufficient, due to the further problems given below:
1. Difference in the color gamuts of the two systems: a color may not be producible on one of the
systems.
2. Use of a four-color subtractive system (CMYK) in the printing and graphic-arts industries: it adds black
(K) as a fourth primary. Conversion between RGB and CMYK often requires a great deal of human
expertise.
3. Limitations on the linear RGB color theory: The distance between colors in the color cube is not a
measure of how far apart the colors are perceptually.
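Setting those problems aside, the basic matrix conversion C2 = M C1 can be sketched minimally in Python; the matrix used in the test below is an illustrative identity placeholder, not any standard's actual conversion matrix:

```python
# Color-conversion sketch: apply a 3x3 matrix M to a tri-stimulus triple c1.
def convert(M, c1):
    # ordinary matrix-vector product, C2 = M C1
    return tuple(sum(M[i][j] * c1[j] for j in range(3)) for i in range(3))
```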
Other Color Systems as alternatives to RGB Color System:
• Color researchers often prefer to work with chromaticity coordinates rather than tri-stimulus values.
The chromaticity of a color is the three fractions of the color in the three primaries. Thus, if the tri-stimulus
values are T1, T2, and T3, for a particular RGB color, its chromaticity coordinates are:
t1 = T1/(T1 + T2 + T3),   t2 = T2/(T1 + T2 + T3),   t3 = T3/(T1 + T2 + T3)
Adding the three equations, t1 + t2 + t3 = 1, and thus it is possible to work in 2-D in t1, t2 space, finding t3 only
when its value is needed. The information that is missing from chromaticity coordinates, which was contained
in the original tri-stimulus values, is the sum T1+ T2+ T3, a value related to the intensity of the color. When
working with color systems, this intensity is often not important to issues related to producing colors or
matching colors across different systems. Because each color fraction must be nonnegative, the chromaticity
values are limited by 1 ≥ ti ≥ 0.
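The chromaticity computation can be sketched directly in Python:

```python
# Chromaticity sketch: each tri-stimulus value divided by their sum,
# so the three fractions always add to 1.
def chromaticity(T1, T2, T3):
    s = T1 + T2 + T3
    return (T1 / s, T2 / s, T3 / s)
```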
• The hue-saturation-lightness (HLS) system is used by artists and some display manufacturers. The hue
is the name of a color: red, yellow, gold. The lightness is how bright the color appears. Saturation is
the color attribute that distinguishes a pure shade of a color from a shade of the same hue that has
been mixed with white, forming a pastel shade. These attributes can be related to a typical RGB color as
shown below:
Given a color in the color cube, the lightness is a measure of how far the point is
from the origin (black). The principal diagonal of the cube goes from black to
white; all the colors on it are shades of gray and are totally unsaturated.
Then the saturation is a measure of how far the given color is from this diagonal.
Finally, the hue is a measure of where the color vector is pointing. HLS colors
are usually described in terms of a color cone, as shown below.
HLS system can be considered a representation of an RGB color
in polar coordinates.
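For illustration, Python's standard colorsys module implements an RGB-to-HLS conversion; as described above, a gray on the principal diagonal comes out with zero saturation, while a pure primary is fully saturated:

```python
# RGB to HLS via the standard-library colorsys module.
import colorsys

gray_h, gray_l, gray_s = colorsys.rgb_to_hls(0.5, 0.5, 0.5)  # a mid-gray
red_h, red_l, red_s = colorsys.rgb_to_hls(1.0, 0.0, 0.0)     # pure red
```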
Gamma Correction
Brightness is the perceived intensity. The intensity of a CRT is related to the applied voltage in a way that
depends on the specific properties of the particular CRT; hence two monitors may generate different brightnesses
for the same values in the frame buffer.
Gamma correction uses a look-up table in the display whose values can be adjusted for the particular
characteristics of the monitor.
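Such a table can be sketched as follows in Python; the value gamma = 2.2 and the 256-entry size are typical assumed values, not taken from the text:

```python
# Gamma-correction sketch: a look-up table mapping frame-buffer values to
# corrected output values for a monitor with the given gamma.
def gamma_lut(gamma=2.2, size=256):
    top = size - 1
    # apply the inverse power so that mid-tones are brightened appropriately
    return [round(((i / top) ** (1.0 / gamma)) * top) for i in range(size)]
```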
Dithering / Halftoning
Halftoning techniques in the printing industry use photographic means to simulate gray levels by creating
patterns of black dots of varying size. The human visual system tends to merge small dots together and sees, not
the dots, but rather an intensity proportional to the percentage of black in a small area.
Digital halftones differ because the size and location of displayed pixels are fixed. Consider a 4 x 4 group of 1-bit pixels, as shown below:
If this pattern is looked at from far away, individual pixels are not seen; rather, a
gray level based on the number of black pixels is seen. For the 4 x 4 example, although
there are 2^16 = 65,536 different patterns of black and white pixels, there are only 17 possible
shades, corresponding to 0 to 16 black pixels in the array. There are many algorithms
for generating halftone, or dither, patterns.
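One classical pattern-generation scheme, ordered dithering with a 4 x 4 Bayer matrix, can be sketched as follows in Python; the Bayer matrix is a standard choice, though the text does not name a specific algorithm:

```python
# Ordered-dither sketch: thresholding against a 4x4 Bayer matrix turns a gray
# level (0..16 black pixels) into one of the 17 possible halftone patterns.
BAYER4 = [[ 0,  8,  2, 10],
          [12,  4, 14,  6],
          [ 3, 11,  1,  9],
          [15,  7, 13,  5]]

def dither_cell(level):
    # level in 0..16: the number of black (1) pixels the 4x4 cell should hold;
    # each Bayer entry 0..15 appears exactly once, so the count is exact
    return [[1 if BAYER4[y][x] < level else 0 for x in range(4)] for y in range(4)]
```

Because the matrix spreads the thresholds evenly over the cell, successive gray levels add black pixels far apart from one another, which the eye merges smoothly.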
Halftoning (or dithering) is often used with color, especially with hard-copy displays, such as ink-jet printers,
that can produce only fully on or off colors. Each primary can be dithered to produce more visual colors.
OpenGL supports such displays and allows the user to enable dithering (glEnable(GL_DITHER)).