UNIT-1
Graphics Systems and Models
Computer graphics is concerned with all aspects of producing pictures or images
using a computer.
The field began with the display of a few lines on a cathode-ray tube (CRT) and
now, the field generates photograph-equivalent images.
The development of computer graphics has been driven both by the needs of the
user community and by advances in hardware and software.
The combination of computers, networks, and the complex human visual system,
through computer graphics, has led to new ways of displaying information,
seeing virtual worlds, and communicating with both other people and machines.
 Applications of Computer Graphics:
Four major areas of application are:
1. Display of information:
 Computer-based drafting systems produce the information
needed by architects, mechanical designers, and draftspeople.
 Computer plotting packages provide a variety of plotting
techniques and color tools and can handle multiple large
data sets.
 Medical imaging systems, such as Computed Tomography (CT),
Magnetic Resonance Imaging (MRI), ultrasound, and positron-emission
tomography (PET), can generate three-dimensional data that must be
subjected to algorithmic manipulation to provide useful information.
 Graphical tools provided by the field of scientific visualization
help researchers interpret the vast quantities of data that
supercomputers generate while solving previously intractable
problems.
 Data can be converted to geometric entities from which images
can be produced. This capability has yielded new insights into
complex processes in fields such as fluid flow, molecular
biology, and mathematics.
 Maps relevant to geographic information systems can be
developed and manipulated over the Internet in real time.
2. Design:
 Professions such as engineering and architecture are concerned
with design.
 Design starts with a set of specifications and ends with a cost-effective and aesthetic solution that satisfies the specifications.
 But arriving at such a solution takes several iterations of the
design process. Thus, the designer generates a possible design, tests
it, and then uses the results as the basis for exploring other
solutions.
 Normally the solutions are not unique.
 Moreover, design problems are either over-determined, such
that they possess no optimal solution, or under-determined,
such that they have multiple solutions.
 Computer graphics aids this iterative process and helps in
arriving at an optimal solution.
 Computer-aided design (CAD) uses interactive graphical tools.
CAD is used in architecture and the design of mechanical parts
and of very-large-scale integrated (VLSI) circuits. In many such
applications, the graphics is used in a number of distinct ways.
For example, in a VLSI design, the graphics provides an
interactive interface between the user and the design package,
usually via tools such as menus and icons. In addition, after the
user evolves a possible design, other tools analyze the design
and display the analysis graphically.
3. Simulation:
 Graphics systems are capable of generating sophisticated
images in real time, and engineers and researchers use them
as simulators. Uses of simulation include:
 Graphical flight simulators: have proved cost-effective and safe
for training pilots.
 Arcade games: are as sophisticated as flight simulators.
 Games and educational software for home computers are
almost as impressive.
 Designing robots and planning their paths and behavior in
complex environments.
 The television, motion-picture, and advertising industries use
computer graphics to generate photorealistic images.
 Entire animated movies can be made by computer at a cost
comparable to that of movies made with traditional hand-animation
techniques.
 In the field of virtual reality (VR), a human viewer is equipped
with a display headset with which separate images can be seen
with the right and left eyes, producing the effect of stereoscopic
vision. In addition, the body location and position, possibly including
head and finger positions, of the viewer are tracked by the computer.
With other interactive devices available, including force-sensing
gloves and sound, the viewer can then act as part of a computer-generated
scene, limited only by the image-generation ability of the computer.
 For example, a surgical intern might be trained to do an
operation in this way, or an astronaut might be trained to work
in a weightless environment.
4. User interfaces:
 The visual paradigm includes windows, icons, menus, and a pointing
device, such as a mouse. This paradigm has increased human-computer
interaction through windowing systems such as the X Window System,
Microsoft Windows, and the Macintosh operating system.
 Graphical network browsers, such as Netscape and Internet
Explorer, have created millions of users on the Internet. There
are also graphical interfaces other than these graphical user interfaces.

www.bookspar.com
2
www.Bookspar.com | Website for Students | VTU Notes - Question Papers
 A Graphics System
 A computer graphics system is a computer system with a special
component called the frame buffer in addition to the common components
of a general-purpose computer system.
 Block Diagram & 5 Major components of a Graphics system
1. Processor
2. Memory
3. Frame buffer
4. Output devices
5. Input devices



Raster-Graphics Display: It is a point-plotting output device based on the
cathode ray tube.
Raster: A matrix (arranged in rows and columns) of discrete cells (pixels)
that can be illuminated on a raster-graphics display is called a raster.
Pixel: Each discrete cell in a raster is called a pixel (short for picture element).
Frame Buffer:
 A part of memory used by the graphics system to store pixels is called the
frame buffer.
 The frame buffer can be viewed as the core element of a graphics system.
 In simpler systems, the frame buffer is part of standard memory.
 In high-end systems, the frame buffer is implemented with special types of
memory chips, such as video random-access memory (VRAM) or dynamic
random-access memory (DRAM), that enable fast redisplay of the
contents of the frame buffer.
Depth of the frame buffer:
 The number of bits used for each pixel, which determines the properties
of each pixel (say, the color of a pixel), is called the depth of the frame buffer.
E.g.
 A 1-bit-deep frame buffer allows only two colors.
 An 8-bit-deep frame buffer allows 2^8 = 256 colors.
 Full / true / RGB-color systems: Display systems with a frame buffer
depth of 24 bits per pixel, in which three groups of 8 bits each are
assigned to the three primary colors (red, green, and blue) used in
most displays. Such systems can display sufficient colors to represent most
images realistically.
Resolution of frame buffer:
 The number of pixels in the frame buffer, which determines the detail of an
image, is called the resolution.
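As an aside, the memory a frame buffer needs follows directly from its resolution and depth. The C sketch below computes the size for an assumed 1280 x 1024 true-color display and shows how one pixel of such a buffer might be set; the values and storage layout are illustrative assumptions, not fixed standards.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void)
{
    /* Illustrative values: a 1280 x 1024 raster with a 24-bit-deep
       (true-color) frame buffer, 8 bits each for red, green, and blue. */
    const int width = 1280, height = 1024, depth_bits = 24;

    long bytes = (long)width * height * (depth_bits / 8);
    printf("Frame buffer size: %ld bytes (%.1f MB)\n",
           bytes, bytes / (1024.0 * 1024.0));

    /* The raster as a 2-D array of pixels, 3 bytes (R, G, B) per pixel. */
    uint8_t *fb = calloc((size_t)width * height, 3);
    if (fb == NULL) return 1;
    int x = 100, y = 200;                 /* set one pixel to pure red */
    fb[3 * (y * width + x) + 0] = 255;    /* R */
    fb[3 * (y * width + x) + 1] = 0;      /* G */
    fb[3 * (y * width + x) + 2] = 0;      /* B */
    free(fb);
    return 0;
}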
Processor:
 In a simple system, there may be only one processor, which must do both the
normal processing and the graphical processing. Sophisticated graphics
systems are characterized by various special-purpose processors, each
custom-tailored to specific graphics functions.
Graphical Processing Functions – Rasterization / Scan Conversion:
 The graphical process of converting geometric entities (such as lines,
circles, and polygons) generated by application programs into pixel
assignments in the frame buffer that best represent those entities.
Output Devices – say Cathode-Ray Tube (CRT):
An electron gun produces a beam of electrons. The output of the computer is
converted, by digital-to-analog converters, to voltages across the x and y
deflection plates, which control the direction of the electron beam. When
electrons strike the phosphor coating on the tube, light is emitted. Light
appears on the surface of the CRT when a sufficiently intense beam of
electrons is directed at the phosphor.
TYPES OF CRTs
 Random-scan or calligraphic CRT:
The CRTs used in early graphics systems, in which an electron beam can be moved
directly from any position to any other position. If the voltages steering the beam
change at a constant rate, the beam will trace a straight line, visible to a viewer. If
the intensity of the beam is turned off, the beam can be moved to a new position
without changing any visible display.
A typical CRT will emit light for only a short time, usually a few milliseconds,
after the phosphor is excited by the electron beam. For a human to see a steady
image on most CRT displays, the same path must be retraced, or refreshed, by the
beam at least 50 times per second.
 Raster system: The CRTs in present graphics systems, in which the pixels are
taken from the frame buffer and displayed as points on the surface of the display.
Refresh Rate: The high rate at which the entire contents of the frame buffer are
displayed on the CRT to avoid flicker.
Two types of Raster Systems:
These types are according to the two ways of displaying pixels.
 Interlaced display:
 Odd rows and even rows are refreshed alternately.
 Interlaced displays are used in commercial television.
 In an interlaced display operating at 60 Hz, the screen is redrawn
in its entirety only 30 times per second, although the visual
system is tricked into thinking the refresh rate is 60 Hz rather
than 30 Hz.
 Non-interlaced system:
 The pixels are displayed row by row, or scan line by scan line, at
the refresh rate, which is usually 50 to 85 times per second, or 50
to 85 hertz (Hz).
 Non-interlaced displays are becoming more widespread, even
though these displays process pixels at twice the rate of the
interlaced display.
 Viewers located near the screen, however, can tell the difference between
the interlaced and non-interlaced displays.
 Color CRTs: They have three different colored phosphors (red, green, and blue),
arranged in small groups. One common style arranges the phosphors in triangular
groups called triads, each triad consisting of three phosphors, one of each primary.
Most color CRTs have three electron beams, corresponding to the three types of
phosphors.
 Shadow-mask CRT: A metal screen with small holes (the shadow mask) ensures that an electron beam excites only phosphors of the proper color.
Other output devices, such as liquid-crystal displays (LCDs), must also be refreshed,
whereas hard-copy devices, such as printers, do not need to be refreshed, although both
are raster-based.
 Input Devices (Chapter 3 covers these devices)
 Positional Input Devices: Provide positional information to the system and
are usually equipped with one or more buttons to provide signals to the
processor. E.g. mouse, joystick, and data tablet.
 Pointing devices: Allow a user to indicate a particular location on the
display. E.g. Light pen.
Other devices: e.g. the keyboard.
 Computer-Generated Images: Synthetic (Artificial) Images
 Computer-generated images are synthetic or artificial, in the sense that the
objects being imaged do not exist physically.
 They can be formed in a manner similar to traditional imaging methods,
i.e. optical systems such as cameras and the human visual system.
 Hence, to understand and develop computer-generated imaging systems,
the following sequence of study is needed:
1. Traditional imaging systems.
2. A model (paradigm) of the image formation process needs to
be constructed. This model is based on the traditional imaging
methods.
3. Computer architecture for implementing that model
(paradigm). (Covered in subsequent chapters with relevant
equations).
Basic entities of image formation: Objects and Viewers
In computer graphics, graphic objects (various geometric primitives, such as points,
lines, and polygons) are synthetic and are specified/defined/approximated by their
positions (locations) in space and sometimes by the relationships among them. They
exist in space independently of any viewer or image-formation process.
E.g.
 A line can be defined by two vertices.
 A polygon can be defined by an ordered list of vertices.
 A sphere can be specified by two vertices: one specifying its center and the
other any point on its surface.
Viewer: It forms the image of objects. Viewer may be a human, a camera, or a
digitizer.
It is easy to confuse images and objects. Usually an object is seen from an individual's
single perspective, forgetting that other viewers, located in other places, will see the
same object differently.
In a camera system viewing a building, both the object (the building) and the viewer
exist in a three-dimensional world. However, the image that they define, formed on the
film plane, is two-dimensional.
Thus the process by which the specification of the object is combined with the
specification of the viewer to produce a two-dimensional image is the essence of image
formation.
Other entities of image formation:
 Light source: It makes the objects visible in the image without which the objects
would be dark and there would be nothing visible in the image.
 Color: The way the color enters the picture
 Different kinds of surfaces on objects affecting an image
A simple physical imaging system – a camera system with a light source, taking a more
physical approach:
It consists of a physical object, a viewer (the camera), and a light source in the scene.
Light from the source strikes various surfaces of the object, and a portion of the
reflected light enters the camera through the lens. The details of the interaction
between light and the surfaces of the object determine how much light enters the
camera.
Light sources emit light energy at a fixed rate, or intensity. Light travels in straight
lines from the sources to those objects with which it interacts. A particular light source
is characterized by the intensity of light that it emits at each frequency and by that
light's directionality.
 An ideal point source emits energy from a single location at one or more
frequencies equally in all directions.
 More complex sources, such as a light bulb, can be characterized as emitting light
over an area and as emitting more light in one direction than another. Such sources
often can be modeled by a number of carefully placed point sources (Chapter 6).
Note: Here only monochromatic (single-frequency) point sources are considered, for
simplicity. This is analogous to discussing black-and-white television before examining
color television.
Imaging systems/ Models / Paradigms of Graphics Systems
Ray Tracing
Building an imaging model by following light from a source: consider the scene in the
figure, illuminated by a single point source. The viewer is included because the light
that reaches the viewer's eye is of interest. The viewer can also be a camera, as shown
below:
Ray: It is a semi-infinite line that emanates from a point and travels to infinity in a
particular direction; rays model light because light travels in straight lines. A portion
of these infinite rays contributes to the image on the film plane of the camera. E.g. if
the source is visible from the camera, some of the rays go directly from the source
through the lens of the camera and strike the film plane. Most rays go off to infinity,
neither entering the camera directly nor striking any of the objects. These rays
contribute nothing to the image, although they may be seen by some other viewer. The
remaining rays strike and illuminate objects. These rays can interact with the objects'
surfaces in a variety of ways.
E.g.
 Mirror surface: If the surface is a mirror, a reflected ray might, depending on the
orientation of the surface, enter the lens of the camera and contribute to the
image.
 Diffuse surfaces: They scatter light in all directions.
 Transparent surfaces: Allow the light ray from the source to pass through,
perhaps being bent or refracted; the ray may then interact with other objects, enter
the camera, or travel to infinity without striking another surface.
Ray tracing is an image-formation technique based on the ideas above: tracing rays of
light to form an image. This paradigm is useful in understanding the interaction
between light and materials that is essential to physical image formation. Only a small
fraction of all the rays leaving a source enter the imaging system, so the time spent
tracing most rays is wasted. Ray tracing is an alternative way to develop a computer
graphics system: it can simulate even complex physical effects, at the expense of the
requisite computing. It is a close approximation to the physical world, but it
is not well suited for fast computation.
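To make the ray abstraction concrete, the sketch below tests whether a ray strikes a sphere, the basic question a ray tracer asks for every ray and every object. The quadratic-equation approach is standard, but this particular code, and its type and function names, are illustrative assumptions rather than an algorithm given in these notes.

#include <math.h>

typedef struct { double x, y, z; } vec3;

static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3 sub(vec3 a, vec3 b) { vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }

/* A ray p(t) = origin + t*dir (t >= 0) hits a sphere with the given center
   and radius iff |p(t) - center|^2 = radius^2 has a root t >= 0.
   Expanding gives the quadratic a*t^2 + b*t + c = 0 solved below. */
int ray_hits_sphere(vec3 origin, vec3 dir, vec3 center, double radius)
{
    vec3 oc = sub(origin, center);
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b*b - 4.0*a*c;
    if (disc < 0.0)
        return 0;                            /* the ray misses the sphere */
    double t_far = (-b + sqrt(disc)) / (2.0*a);
    return t_far >= 0.0;                     /* a hit lies on the semi-infinite ray */
}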
However, with further simplification, it is possible to reduce the computational burden.
E.g.
 By assuming that all objects are uniformly bright: say, from the perspective of the
viewer, a red triangle appears to have the same shade of red at every point and
is indistinguishable from a uniformly red emitter of light. Given this assumption,
sources can be neglected, and simple trigonometric methods can be used to
calculate the image.
 This works because, from a physical perspective, if an object appears uniformly
bright, one cannot tell whether it is reflecting light or emitting light from
internal energy sources. The assumption also reduces computation.
The Synthetic-Camera Model
It is a modern model of three-dimensional computer graphics in which creating a
computer-generated image is similar to forming an image using an optical system.
Consider the imaging system shown in the figure, containing objects and a viewer. The
viewer is a bellows camera. In a bellows camera, the lens is located at the front plane
and the film plane is located at the back of the camera; the two are connected by
flexible sides. Thus, the back of the camera can be moved independently of the front of
the camera, introducing additional flexibility in the image-formation process. The
image is formed on the film plane at the back of the camera, so this process emulates
the creation of artificial images.
Basic Principles:
1. The specification of the objects is independent of the specification of the viewer.
=> Within a graphics library, there will be separate functions for specifying the
objects and the viewer.
2. The image can be computed using simple trigonometric calculations in a
straightforward manner. Consider the side view of the camera and a simple
object in the figure below: view (b) is obtained by noting the similarity of the
two triangles in (a).
The image/film plane is moved in front of the lens. In three dimensions, it is possible
to work with the arrangement of the figure. The image of a point on the object is
obtained by drawing a line, called a projector, from the point to the center of the lens,
or the center of projection. All projectors are rays emanating from the center of
projection. The film plane that is moved in front of the lens is called the projection
plane. The image of the point is located where the projector passes through the
projection plane. (Chapter 5 discusses this further and derives the relevant
mathematical formulas.)
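As a minimal sketch of the similar-triangles computation, assume the center of projection is at the origin and the projection plane is at z = d; the function and type names are illustrative, not part of any API.

typedef struct { double x, y, z; } point3;

/* Project a point onto the projection plane z = d, with the center of
   projection at the origin.  By similar triangles, x_p / d = x / z and
   y_p / d = y / z.  Assumes p.z is nonzero. */
point3 project(point3 p, double d)
{
    point3 q;
    q.x = p.x / (p.z / d);
    q.y = p.y / (p.z / d);
    q.z = d;              /* every projected point lies on the plane */
    return q;
}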
3. The image size is limited, i.e., not all objects can be imaged onto the film plane. A
clipping rectangle, or clipping window, placed in the projection plane indicates
this limitation. This rectangle acts as a window through which a viewer,
located at the center of projection, sees the world. Given the location of the
center of projection, the location and orientation of the projection plane, and the
size of the clipping rectangle, it is possible to determine which objects will
appear in the image.
4. Synthetic-camera model leads to the notion of a pipeline architecture in which
each of the various stages in the pipeline performs distinct operations on
geometric entities, then passes on the transformed objects to the next stage.
The Modeling-Rendering Paradigm
It treats image formation as a two-step process:
1. Modeling of the scene: This process designs and positions the objects of the
scene. This step is highly interactive. The details of images of the objects need
not be specified at this stage. Hence this step is carried out on a graphical
workstation.
2. Rendering/ Production of the scene: It renders the designed scene by adding
light sources, material properties, and a variety of other detailed effects, to form
a production-quality image. This step requires a tremendous amount of
computation, and hence requires a number-cruncher machine.
These two steps differ not only in the optimal hardware required but also in their
software requirements.
The interface between the modeler and renderer can be as simple as a file produced by
the modeler that describes the objects, and that contains additional information
important to only the renderer, such as light sources, viewer location, and material
properties.
Pixar's RenderMan Interface follows this approach and uses a file format that allows
modelers to pass models to the renderer in text format.
Modeling-Rendering Pipeline
It suggests that the modeler and the renderer can
be implemented with different software and
hardware.
Advantages:
 It allows developing modelers that, although they use the same renderer, are
custom-tailored to particular applications.
 Likewise, different renderers can take as input the same interface file.
 It is even possible, at least in principle, to dispense with the modeler completely,
and to use a standard text editor to generate an interface file.
Disadvantages: For complex scenes it is difficult for the users to edit lists of information
for a renderer. Hence an interactive modeler is used. Such modelers are based upon the
simple synthetic-camera model.
Applications: In CAD applications and in development of realistic images, such as for
movies.
The Programmer’s Interface
An interface provides ways for a user to interact with a graphics system. With
completely self-contained packages, using a mouse and keyboard, the menus and icons
representing possible actions can be selected, and the user can guide the software and
produce images without having to write programs.
To write one's own graphics applications, an API is used:
Application programmer's interface (API): The set of functions that resides in a graphics
library, and with which the interface between an application program and a graphics
system is specified, is called the API.
The application programmer's model of the system is shown below. The application
programmer sees only the API and is thus shielded from the details of both the
hardware and the software implementation of the graphics library. From the
perspective of the writer of an application program, the functions available through
the API should match the conceptual model that the user wishes to employ to specify
images.
The synthetic-camera model is the basis for a number of popular APIs, including
OpenGL.
There are functions to specify:
 Objects:
Objects are usually defined by sets of vertices. For simple geometric objects, such as
line segments, rectangles, and polygons, there is a simple relationship between a list of
vertices and the object. For more complex objects, there may be multiple ways of
defining the object from a set of vertices. A circle, for example, can be defined by three
points on its circumference, or by its center and one point on the circumference.
Most APIs provide similar sets of primitive objects for the user. These primitives are
usually those that can be displayed rapidly on the hardware. The usual sets include
points, line segments, polygons, and, sometimes, text. OpenGL defines primitives
through lists of vertices.
E.g. To define a triangular polygon in OpenGL through five function calls:
glBegin(GL_POLYGON);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(0.0, 1.0, 0.0);
glVertex3f(0.0, 0.0, 1.0);
glEnd();
Note:
 By adding additional vertices, an arbitrary polygon can be defined.
 The same vertices can be used to define a different geometric primitive simply by
changing the type parameter, GL_POLYGON. The type GL_LINE_STRIP uses the
vertices to define two connected line segments, whereas the type GL_POINTS uses
the same vertices to define three points (see the sketch after these notes).
 Some APIs let the user work directly in the frame buffer by providing functions
that read and write pixels.
 Some APIs provide curves and surfaces as primitives; often, however, these types
are approximated by a series of simpler primitives within the application
program. OpenGL provides access to the frame buffer, curves, and surfaces.
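The sketch below illustrates the note above: the same three vertices displayed as a filled polygon, as connected line segments, or as points, by changing only the type parameter. The wrapper function is an illustrative assumption; the GL_* constants are standard OpenGL.

#include <GL/gl.h>

/* Draw the same three vertices as a primitive of the given type. */
void draw_primitive(GLenum type)
{
    glBegin(type);
    glVertex3f(0.0, 0.0, 0.0);
    glVertex3f(0.0, 1.0, 0.0);
    glVertex3f(0.0, 0.0, 1.0);
    glEnd();
}

/* draw_primitive(GL_POLYGON);     a filled triangle
   draw_primitive(GL_LINE_STRIP);  two connected line segments
   draw_primitive(GL_POINTS);      three separate points       */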
 Viewer
A viewer or camera can be defined in a variety of ways. Available APIs differ both in
how much flexibility they provide in camera selection and in how many different
methods they allow.
There are four types of necessary specifications:
1. Position: The camera location usually is given by the
position of the center of the lens (the center of
projection).
2. Orientation: Once the camera is positioned, a camera
coordinate system can be placed with its origin at the
center of projection. Then the camera can be rotated
independently around the three axes of this system.
3. Focal length: The focal length of the lens determines
the size of the image on the film plane or, equivalently,
the portion of the world the camera sees.
4. Film plane: The back of the camera has a height and a
width. On the bellows camera, and in some APIs, the orientation of the back of the
camera can be adjusted independently of the orientation of the lens.
These specifications can be satisfied in various ways:
1. Developing the specifications for the camera location and orientation uses a
series of coordinate system transformations. These transformations convert
object positions represented in the coordinate system that specifies object
vertices to object positions in a coordinate system centered at the center of
projection. This approach is useful, both for doing implementation and for
getting the full set of views that a flexible camera can provide. (Chapter 5)
2. The synthetic-camera model emphasizes that the object specification is independent
of the view, but the classical viewing techniques stress the relationship between the
object and the viewer. Thus, the classical two-point perspective of a cube shown below
is a two-point perspective because of a particular relationship between the viewer and
the planes of the cube.
3. In the OpenGL API, all transformations can be set with complete freedom. In
addition, OpenGL provides helpful extra functions.
E.g.
Function call
gluLookAt(cop_x, cop_y, cop_z, at_x, at_y, at_z, ...);
points the camera from a center of projection toward a desired point.
Function call
gluPerspective(field_of_view, ...);
selects a lens for a perspective view. (A combined usage sketch follows this list.)
4. However, none of the APIs built on the synthetic-camera model (OpenGL included)
provides functions for specifying desired relationships between the camera and
an object.
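Putting the two functions together, a typical camera setup in the fixed-function OpenGL/GLU API looks like the sketch below; the particular parameter values and the function name set_camera are illustrative assumptions.

#include <GL/gl.h>
#include <GL/glu.h>

void set_camera(void)
{
    /* Select a lens: a 60-degree field of view, 4:3 aspect ratio, and
       near and far clipping distances of 1.0 and 100.0. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 4.0 / 3.0, 1.0, 100.0);

    /* Point the camera: center of projection at (0, 0, 5), looking at
       the origin, with the y-axis as the up direction. */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,    /* eye (center of projection) */
              0.0, 0.0, 0.0,    /* at point */
              0.0, 1.0, 0.0);   /* up vector */
}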
 Light sources
Light sources can be defined by their location, strength, color, and directionality.
APIs provide a set of functions to specify these parameters for each source.
 Material Properties
Material properties are characteristics, or attributes, of the objects, and such properties
are usually specified through a series of function calls at the time that each object is
defined. Both light sources and material properties depend on the models of
light-material interactions supported by the API. (Chapter 6)
Graphics Architectures
 A model of early graphics systems:
They used general-purpose computers with the
standard von Neumann architecture. Such
computers are characterized by a single
processing unit that processes a single
instruction at a time. The display in these
systems was based on a CRT display that included the necessary circuitry to generate a
line segment connecting two points. The job of the host computer was to run the
application program, and to compute the endpoints of the line segments in the image (in
units of the display). This information had to be sent to the display at a rate high enough
to avoid flicker on the display. Computers were so slow that refreshing even simple
images, containing a few hundred line segments, would burden an expensive computer.
 Display Processor Architecture: It relieves the general-purpose computer from
the task of refreshing the display continuously by incorporating a special display
processor. These display processors had conventional architectures, similar to those
of general-purpose computers, but included instructions to display primitives on the
CRT.
The main advantage of the display
processor was that the instructions to
generate the image could be
assembled once in the host and sent to
the display processor, where they were stored in the display processor's own memory
as a display list or display file. The display processor would then execute the
program in the display list repetitively, at a rate sufficient to avoid flicker,
independently of the host, thus freeing the host for other tasks. This is similar to a
client-server architecture.
 Pipeline Architectures
The major advances in graphics architectures closely parallel the advances in
workstations. In both cases, the ability to create special-purpose VLSI circuits was the
key enabling technology. In addition, the availability of cheap solid-state memory led
to the universality of raster displays.
For computer graphics applications, the most important use of custom VLSI circuits has
been in creating pipeline architectures. The concept of pipelining is illustrated in the
figure below for a simple arithmetic calculation.
In this pipeline, there is an adder and a multiplier. If this configuration is used to
compute a + (b * c), the calculation takes one multiplication and one addition, the
same amount of work required if a single processor is used to carry out both
operations.
However, suppose that the same computation is to be performed with many values of a,
b, and c. The multiplier can pass on the results of its calculation to the adder, and can
start its next multiplication while the adder carries out the second step of the
calculation on the first set of data. Here, the rate at which data flows through the system,
the throughput of the system, has been doubled.
Pipelines can be constructed for more complex arithmetic calculations that will afford
even greater increases in throughput. There is no point in building a pipeline unless the
same operation is to be performed on many data sets.
Pipeline architecture suits computer graphics because, in computer graphics, large sets
of vertices need to be processed in the same manner.
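The arithmetic example can be written out in software. The loop below computes a[i] + b[i] * c[i] for many data sets; in hardware, the multiplier (stage 1) and the adder (stage 2) would work concurrently on successive elements, which is what doubles throughput. This is a sequential sketch of the data flow only, with array names assumed for illustration.

/* Two-stage pipeline applied to many data sets: out[i] = a[i] + b[i]*c[i].
   In hardware the stages run concurrently on successive elements; this
   loop shows only the per-element data flow. */
void pipeline(const double *a, const double *b, const double *c,
              double *out, int n)
{
    for (int i = 0; i < n; ++i) {
        double product = b[i] * c[i];   /* stage 1: multiplier */
        out[i] = a[i] + product;        /* stage 2: adder */
    }
}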
Geometric Pipeline and the Four Major Steps in Image Processing:
Suppose a set of geometric primitives is defined by a set of vertices. The set of primitive
types and vertices can be referred to as the geometry of the data. In a complex scene,
there may be thousands, even millions, of vertices that define the objects. All these
vertices must be processed in a similar manner to form an image in the frame buffer.
This pipelining shows four major steps in the imaging process:
1. Vertex Processing:
Each vertex is processed independently. This block includes two major functions:

Transformation:
Representing the same object in different coordinate systems requires transformation.
E.g.
 In the synthetic-camera model, objects must be converted from their internal
representation (whether in the camera coordinate system, or perhaps in a system used
by the graphics software) to a representation in the display coordinate system while
putting an image onto the CRT or output display.
Each change of coordinate systems can be represented by a matrix. Successive changes
in coordinate systems can be represented by multiplying, or concatenating, the
individual matrices into a single matrix. (Chapter 4)
Because multiplying one matrix by another yields a third matrix, a sequence of
transformations is an obvious candidate for a pipeline architecture. In addition, because
the matrices used in computer graphics are always small (4 x 4), there is an
opportunity to use parallelism within the transformation blocks in the pipeline (a
concatenation sketch follows below).
Eventually, after multiple stages of transformation, the geometry is transformed by a
projection transformation. This step can be implemented using 4 x 4 matrices (Chapter
5) and thus projection fits in the pipeline.
In general, 3-D information needs to be kept as long as possible, as objects pass through
the pipeline. Further, there is a variety of projections that can be implemented (Chapter
5).
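Concatenation is ordinary 4 x 4 matrix multiplication. A minimal sketch follows; the row-major storage layout and the function name are assumptions made for illustration.

/* Concatenate two 4 x 4 transformation matrices: r = m * n, so applying
   r to a vertex is equivalent to applying n first, then m.  Matrices are
   stored row-major: element (i, j) is at index 4*i + j. */
void concat4x4(const double m[16], const double n[16], double r[16])
{
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            double s = 0.0;
            for (int k = 0; k < 4; ++k)
                s += m[4*i + k] * n[4*k + j];
            r[4*i + j] = s;
        }
}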

Assignment of Vertex Color:
The assignment of vertex colors can be as simple as the program specifying a color or as
complex as the computation of a color from a physically realistic lighting model that
incorporates the surface properties of the object and the characteristic light sources in
the scene. (Chapter 6).
2. Primitive assembly and Clipping:
Clipping is done because of the limitation that no imaging system can see the whole
world at once. E.g. cameras have film of limited size, and their fields of view can be
adjusted by selecting different lenses. An equivalent property can be obtained in the
synthetic-camera model by considering a clipping volume, such as the pyramid in front
of the lens. The projections of objects in this volume appear in the image; those that are
outside do not, and are said to be clipped out. Objects that straddle the edges of the
clipping volume are partly visible in the image.
Clipping must be done on a primitive-by-primitive basis rather than on a vertex-by-vertex
basis. Thus, sets of vertices must be assembled into primitives, such as line
segments and polygons, before clipping can take place within this stage of the pipeline.
Consequently, the output of this stage is a set of primitives whose projections can
appear in the image. (Chapter 7 covers efficient clipping algorithms.)
Clipping can occur at various stages in the imaging process. For simple geometric
objects, whether or not an object is clipped out can be determined from its vertices.
Because clippers work with vertices, clippers can be inserted with transformers into the
pipeline. Clipping can even be subdivided further into a sequence of pipelined clippers.
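As a minimal sketch of the per-primitive decision, the functions below classify a line segment (a primitive assembled from two vertices) against a canonical clipping volume running from -1 to 1 in each coordinate. The canonical volume and the function names are illustrative assumptions; the efficient algorithms of Chapter 7 refine this idea.

typedef struct { double x, y, z; } vertex;

/* Is a vertex inside the canonical clipping volume -1 <= x, y, z <= 1? */
int inside(vertex v)
{
    return v.x >= -1 && v.x <= 1 &&
           v.y >= -1 && v.y <= 1 &&
           v.z >= -1 && v.z <= 1;
}

/* A segment whose endpoints both lie outside the same face of the volume
   cannot intersect it and is trivially clipped out.  Segments with both
   endpoints inside are accepted as is; all others straddle the boundary
   and must be clipped against it. */
int trivially_rejected(vertex a, vertex b)
{
    return (a.x < -1 && b.x < -1) || (a.x > 1 && b.x > 1) ||
           (a.y < -1 && b.y < -1) || (a.y > 1 && b.y > 1) ||
           (a.z < -1 && b.z < -1) || (a.z > 1 && b.z > 1);
}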
3. Rasterization or Scan Conversion
The primitives that emerge from the clipper are still represented in terms of their
vertices and must be further processed to generate pixels in the frame buffer. E.g. if
three vertices specify a triangle filled with a solid color, the rasterizer must determine
which pixels in the frame buffer are inside the polygon. (Chapter 8 discusses
rasterization for line segments and polygons). The output of the rasterizer is a set of
fragments for each primitive. A fragment can be thought of as a potential pixel that
carries with it information, including its color and location, that is used to update the
corresponding pixel in the frame buffer. Fragments can also carry along depth
information that allows later stages to determine if a particular fragment lies behind
other previously rasterized fragments for a given pixel.
4. Fragment Processing
It updates the pixels in the frame buffer for the fragments generated by the rasterizer. If
the application generated 3-D data, some fragments may not be visible because the
surfaces that they define are behind other surfaces. The color of a fragment may be
altered by texture mapping or bump mapping. The color of the pixel that corresponds to
a fragment can also be read from the frame buffer and blended with the fragment's
color to create translucent effects. (These effects will be covered in Chapters 8 and 9.)
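A sketch of the fragment operations just described: each fragment carries a location, a color with opacity, and a depth; the nearest fragment per pixel survives, and a translucent fragment's color is blended with the color already in the frame buffer. The structure and buffer layout are illustrative assumptions, not a real graphics API.

typedef struct {
    int   x, y;          /* pixel location in the frame buffer */
    float r, g, b, a;    /* color and opacity (alpha) */
    float depth;         /* used for hidden-surface removal */
} fragment;

/* Process one fragment against per-pixel color and depth buffers
   (3 floats of color per pixel, 1 float of depth per pixel). */
void process_fragment(fragment f, float *color, float *depth, int width)
{
    int i = f.y * width + f.x;
    if (f.depth >= depth[i])
        return;            /* hidden behind a previously rasterized fragment */
    depth[i] = f.depth;

    /* Blending: mix the fragment's color with the stored pixel color
       according to the fragment's opacity, giving translucent effects. */
    color[3*i + 0] = f.a * f.r + (1.0f - f.a) * color[3*i + 0];
    color[3*i + 1] = f.a * f.g + (1.0f - f.a) * color[3*i + 1];
    color[3*i + 2] = f.a * f.b + (1.0f - f.a) * color[3*i + 2];
}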
Performance Characteristics:
Two fundamentally different types of processing in Graphics architecture:
 Front-end geometric processing, based on processing vertices through the
various clippers and transformers. This processing is ideally suited for
pipelining and usually involves floating-point calculations.
 The geometry engine developed by Silicon Graphics was a VLSI implementation
of many of these operations in a special-purpose chip that became the basis for
a series of fast graphics workstations.
 Later, floating-point accelerator chips, such as the Intel i860, put 4 x 4
matrix-transformation units on the chip, reducing a matrix multiplication to a
single instruction.
 Graphics workstations and add-on graphics boards use Application-Specific
Integrated Circuits (ASICs) that perform many of the graphics operations at the
chip level.
Pipeline architectures are the dominant type of high-performance system. As more
boxes are added to the pipeline, however, it takes more time for a single datum to pass
through the system. This time is called the latency of the system; latency must be
balanced against increased throughput in evaluating the performance of a pipeline.
 Back-end processing: direct manipulation of bits in the frame buffer. Beginning
with rasterization, and including many other features, these stages process the bits
in the frame buffer directly. This is fundamentally different from front-end
processing, and can be implemented most effectively using architectures that have
the ability to move blocks of bits quickly.
The overall performance of a system is characterized by how fast the geometric entities
are moved through the pipeline, and by how many pixels per second can be altered in
the frame buffer.
Consequently, the fastest graphics workstations are characterized by pipelines at the
front ends and parallel bit processors at the back ends.
Pipeline architectures dominate the graphics field, especially where real-time
performance is of importance.
Note: A pipeline architecture can be implemented not only in hardware but also in a
software implementation of an API. The power of the synthetic-camera paradigm is
that the pipelining works well in both cases.
Review Questions:
1. What is Computer Graphics? Explain its applications.
2. Describe the components of a graphics system.
3. Explain different types of CRTs.
4. Comment on the Synthetic Camera Model of an Imaging System.
5. What do you mean by API? Discuss the nature of functions supported by an API to specify objects and camera.
6. Discuss different Graphics Architectures.
7. What do you mean by a geometric pipeline? What are the major steps involved in it? Explain.
8. Write a note on the performance characteristics of a graphics system.