02_IntroductionToComputerGraphics

CS 450: COMPUTER GRAPHICS
INTRODUCTION TO
COMPUTER GRAPHICS
SPRING 2015
DR. MICHAEL J. REALE
DEFINITION AND APPLICATIONS
WHAT IS COMPUTER GRAPHICS?
• Computer graphics – generating and/or displaying imagery using computers
• Also touches on problems related to 3D model processing, etc.
• The focus of this course will be on the underlying mechanics and algorithms of computer graphics
• As opposed to a computer art course, for instance, which focuses on using tools to make graphical content
SO, WHY DO WE NEED IT?
• Practically EVERY field/discipline/application needs to use computer graphics in some way
• Science
• Art
• Engineering
• Business
• Industry
• Medicine
• Government
• Entertainment
• Advertising
• Education
• Training
• …and more!
APPLICATIONS: GRAPHS, CHARTS, AND DATA
VISUALIZATION
• Graphs and charts
• One of the earliest applications → plotting data using a printer
• Data visualization
• Sciences
• Show visual representation → see patterns in data
• E.g., the flow of fluid (LOx) around a tube is described using streamtubes
• Challenges: large data sets, best way to display data
• Medicine
• 2D images (CT, MRI scans) → 3D volume rendering
• E.g., volume rendering and image display from the Visible Woman dataset
Images from VTK website: http://www.vtk.org/VTK/project/imagegallery.php
APPLICATIONS: CAD/CADD/CAM
http://www.nvidia.com/object/siemens-plm-software-quadro-visualization.html
• Design and Test
• CAD = Computer-Aided Design
• CADD = Computer-Aided Drafting and Design
• Usually rendered in wireframe
• Manufacturing
• CAM = Computer-Aided Manufacturing
• Used for:
• Designing and simulating vehicles, aircraft, mechanical devices, electronic circuits/devices, …
• Architecture and rendered building designs
• ASIDE: Often need special graphics cards to make absolutely SURE the rendered image is correct (e.g., NVIDIA Quadro vs. a garden-variety GeForce)
GAMES!!!
APPLICATIONS: COMPUTER ART / MOVIES
• First film to use a scene that was completely computer generated:
• Tron (1982)
• Star Trek II: The Wrath of Khan (1982)
• …(depends on who you talk to)
• First completely computer-generated full-length film:
• Toy Story (1995)
http://upload.wikimedia.org/wikipedia/en/1/17/Tron_poster.jpg
http://blog.motorcycle.com/wp-content/uploads/2008/10/tron_movie_image_light_cycles__1_.jpg
https://bigcostas.files.wordpress.com/2007/12/genesis1.jpg
http://www.standbyformindcontrol.com/wp-content/uploads/2013/05/khan-poster.jpg
http://s7d2.scene7.com/is/image/Fathead/1515991X_dis_toy_story_prod?layer=comp&fit=constrain&hei=350&wid=350&fmt=pngalpha&qlt=75,0&op_sharpen=1&resMode=bicub&op_usm=0.0,0.0,0,0&iccEmbed=0
APPLICATIONS: VIRTUAL-REALITY ENVIRONMENTS /
TRAINING SIMULATIONS
• Used for:
• Training/Education
• Military applications (e.g., flight simulators)
• Entertainment
https://dbvc4uanumi2d.cloudfront.net/cdn/4.3.3/wp-content/themes/oculus/img/order/dk2-product.jpg
"Virtuix Omni Skyrim (cropped)" by Czar - Own work. Licensed under CC BY-SA 3.0 via Wikimedia Commons http://commons.wikimedia.org/wiki/File:Virtuix_Omni_Skyrim_(cropped).jpg#mediaviewer/File:Virtuix_Omn
i_Skyrim_(cropped).jpg
APPLICATIONS: GAMES!
Unreal 4 Engine
CryEngine 3
APPLICATIONS: GAMES!
Unity Engine
(Wasteland 2)
Source Engine
(Portal 2)
APPLICATIONS: GAMES!
http://www.dosgamesarchive.com/download/wolfenstein-3d/
http://mashable.com/2014/05/23/wolfenstein-then-and-now/
http://www.gamespot.com/images/1300-2536458
Wolfenstein: Then and Now
CHARACTERISTICS OF COMPUTER GRAPHICS
• Depending on your application, your focus and goals will be different:
• Real-time vs. Non-real-time
• Virtual Entities / Environments vs. Visualization / Representation
• Developing Tools / Algorithms vs. Content Creation
CG CHARACTERISTICS:
REAL-TIME VS. NON-REAL-TIME
• Real-time rendering
• 15 frames per second (AT BARE MINIMUM – still see skips, but it will look more or less animated)
• 24 fps = video looks smooth (no skips/jumps)
• 24 – 60 fps is a more common requirement
• Examples: first-person simulations, games, etc.
• Non-real-time
• Could take hours for one frame
• Examples: CG in movies, complex physics simulations, data visualization, etc.
• Often a trade-off between speed and quality (image, accuracy, etc.)
Face using CryEngine: http://store.steampowered.com/app/220980/
CG CHARACTERISTICS:
VIRTUAL ENTITIES / ENVIRONMENTS VS.
VISUALIZATION / REPRESENTATION
• Virtual Entities / Environments
• Rendering a person, place, or thing
• Often realistic rendering, but it doesn't have to be
• Examples: simulations (any kind), games, virtual avatars, movies
• Visualization / Representation
• Rendering data in some meaningful way
• Examples: graphs/charts, data visualization, (to a lesser extent) graphical user interfaces
• Both
• Rendering some object / environment, but also highlighting important information
• Examples: CAD/CAM
Remy from Pixar's Ratatouille: http://disney.wikia.com/wiki/Remy
Tiny and Big: Grandpa's Leftovers game: http://www.mobygames.com/game/windows/tiny-and-big-grandpas-leftovers/screenshots/gameShotId,564196/
http://www.vtk.org/VTK/project/imagegallery.php
CG CHARACTERISTICS:
TOOLS/ALGORITHMS VS. CONTENT CREATION
• Developing Tools / Algorithms
• It's…well…developing tools and algorithms for graphical purposes
• Using computer-graphics application programming interfaces (CG APIs)
• Common CG APIs: GL, OpenGL, DirectX, VRML, Java 2D, Java 3D, etc.
• Interface between programming language and hardware
• Also called "general programming packages" in the Hearn-Baker book
• Example: how do I write code that will render fur realistically?
• Content Creation
• Using pre-made software to create graphical objects
• Called "special-purpose software packages" in the Hearn-Baker book
• Example: how do I create a realistic-looking dog in a 3D modeling program?
THIS COURSE
• In this course, we'll mostly be focusing on developing tools / algorithms to render virtual entities / environments in real time
VIDEO DISPLAY DEVICES
INTRODUCTION
• A lot of why computer graphics works the way it does is based in the hardware
• Graphics cards, display devices, etc.
• In this section, we’ll talk about video display devices (and some of the terminology associated with them)
• CRT
• Plasma
• LCD/LED
• 3D
• For an excellent explanation of how…
• LCD/LED monitors work: http://electronics.howstuffworks.com/lcd.htm
• Plasma monitors work: http://electronics.howstuffworks.com/plasma-display.htm
CRT: CATHODE-RAY TUBE
• Primary video display mechanism for a long time
• Now mostly replaced with LCD monitors/TVs
• Cathode = an electrode (conductor) where electrons leave the device
• Cathode rays = beam of electrons
• Basic idea:
• Electron gun (cathode + control grid) shoots electrons in a vacuum tube
• Magnetic or electric coils focus and deflect the beam of electrons so it hits each location on the screen
• Screen coated with phosphor → glows when hit by electrons
• Phosphor will fade in a short time → so keep directing the electron beam over the same screen points
• Called a refresh CRT
By Theresa Knott (en:Image:Cathode ray Tube.PNG) [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0/)], via Wikimedia Commons
http://i.imgur.com/ayHx5.jpg?1
DISPLAY DEFINITIONS
• Refresh rate = frequency at which the picture is redrawn
• Term still used for things other than CRTs
• Usually expressed in Hertz (e.g., 60 Hz)
• Persistence = how long phosphors emit light after being hit by electrons
• Low persistence → need higher refresh rates
• LCD monitors have an analogous concept → response time
MORE DISPLAY DEFINITIONS
• Pixel = "picture element"; a single point on the screen
• Resolution = maximum number of points that can be displayed without overlap
• For CRTs → a more analogue device, so the definition is a little more involved
• Now, usually just means (number of pixels in width) x (number of pixels in height)
• Aspect ratio = resolution width / height
• (Although sometimes vice versa)
TYPES OF CRT MONITORS
• There were two basic types of CRT monitors:
• Vector displays
• Raster-scan displays
CRT: VECTOR DISPLAYS
• Also called random-scan, stroke-writing, or calligraphic displays
• Electron beam actually draws points, lines, and curves directly
• List of things to draw stored in a display list (also called a refresh display file, vector file, or display program)
• Long list → just draw as quickly as you can
• Short list → delay the refresh cycle so you don't burn out the screen!
• Advantages: draws non-aliased lines
• Disadvantages: not very flexible; cannot draw shaded polygons
• Mostly abandoned in favor of raster-scan displays
CRT: RASTER-SCAN DISPLAYS
• Most common type of CRT
• Refresh buffer (or frame buffer) = contains the picture of the screen you want to draw
• Electron gun sweeps across the screen, one row at a time, from top to bottom
• Each row = scan line
• Advantages: flexible
• Disadvantages: lines, edges, etc. can look jagged (i.e., aliased)
CRT: INTERLACING
• Interlacing = first draw the even-numbered scan lines, then do the odd-numbered lines
• Effectively doubles your refresh rate
• Also used to save bandwidth in TV transmission
CRT: COLOR
• Two ways to do color with CRTs:
• Beam-penetration
• Has red and green layers of phosphors
• Slow electrons → only red layer
• Fast electrons → only green layer
• Medium-speed electrons → both
• Inexpensive, but limited in number of colors
• Shadow-mask
• Uses the red-green-blue model for color (RGB)
• Three electron guns and three phosphor dots (one for red, one for green, and one for blue)
• Shadow mask makes sure the 3 guns hit the 3 dots
PLASMA DISPLAYS
• Fill the region between two glass plates with a mixture of gases (usually includes neon)
• Vertical conducting ribbons on one plate; horizontal conducting ribbons on the other
• Firing voltages across an intersecting pair of horizontal and vertical conductors → gas at the intersection breaks down into a glowing plasma of electrons and ions
• For color → use three subpixels (red, green, and blue)
• Advantages: very thin display; pixels very bright, so good at any angle
• Disadvantages: expensive
http://electronics.howstuffworks.com/plasma-display2.htm
LCD DISPLAYS
• LCD = Liquid Crystal Display
• Liquid crystal = maintains a certain structure, but can move around like a liquid
• Structure is twisted, but applying electrical current straightens it out
• Basic idea:
• Two polarized light filters (one vertical, one horizontal)
• Light passes through the first filter → polarized light in the vertical direction
• "ON STATE" → no current → crystal twisted → causes light to be reoriented so it passes through the horizontal filter
• "OFF STATE" → current → crystal straightens out → light does NOT pass through
LCD DISPLAYS: WHERE DOES THE LIGHT COME FROM?
• Mirror in back of display
• Cheap LCD displays (e.g., calculator)
• Just reflects ambient light in the room (or prevents it from reflecting)
• Fluorescent light in center of display
• LED lights
• Could be edge-lit or full array (i.e., LEDs covering the entire back of the screen)
• Usually what people mean when they say an "LED monitor" = LCD display backlit by LEDs
LCD DISPLAYS: PASSIVE VS. ACTIVE
• Passive-matrix LCDs → use a grid that sends charge to pixels through transparent conductive materials
• Simple
• Slow response time
• Imprecise voltage control
• When activating one pixel, nearby pixels are also partially turned on → makes the image fuzzy
• Active-matrix LCDs → use a transistor at each pixel location (thin-film transistor technology)
• Transistors control the voltage at each pixel location → prevent leakage to other pixels
• Controlling the voltage → get 256 shades of gray
LCD DISPLAYS: COLOR
• Color → have 3 subpixels (one red, one green, and one blue)
http://electronics.howstuffworks.com/lcd5.htm
3D DISPLAYS: INTRODUCTION
• In real life, we see depth because we have two eyes (binocular vision)
• One eye sees one angle, the other sees another
• Brain meshes the two images together to figure out how far away things are
• By "3D" displays, we mean giving the illusion of depth by purposely giving each eye a different view
3D DISPLAYS: OLDER APPROACHES
• Anaglyph 3D
• Red and blue 3D glasses
• Show different images (one more reddish, the other more blue-ish)
• Color quality (not surprisingly) is not that great
http://science.howstuffworks.com/3-d-glasses2.htm
• View-Master toys
• First introduced in 1939
• Take photographs at two different angles
http://www.ebay.com/gds/How-to-Make-a-View-Master-Reel/10000000178723069/g.html
3D DISPLAYS: ACTIVE 3D
• Active 3D
• Special "shutter" glasses that sync up with the monitor/TV
• Shows only one image at a time (but REALLY fast)
• Show left image on TV → glasses close right eye
• Show right image on TV → glasses close left eye
• Advantages:
• If your monitor/TV has a high enough refresh rate, you're good to go
• See full screen resolution
• Disadvantages:
• If out of sync, see flickering
• Image can look darker overall
• Glasses can be cumbersome
http://www.nvidia.com/object/product-geforce-3d-vision2-wirelessglasses-kit-us.html
3D DISPLAYS: PASSIVE 3D
• Passive 3D
• Polarized light glasses
• TV shows two images at once
• Uses alternating lines of resolution
• Similar to red-blue glasses, but color looks right
• Advantages:
• Glasses are lightweight
• Image is brighter than active 3D
• Disadvantages:
• Need a special TV
• Only seeing HALF the vertical resolution!
Left: Passive 3D through glasses - Middle: Passive 3D without glasses - Right: Active 3D
http://www.cnet.com/news/active-3d-vs-passive-3d-whats-better/
3D DISPLAYS: VR
• Virtual reality displays (like the Oculus Rift) basically have two separate screens (one for each eye)
• Advantages: no drop in resolution or brightness
• Disadvantages: heavy headset
http://www.engadget.com/2013/09/30/vorpx-beta-launch/
THE GRAPHICS RENDERING PIPELINE
DEFINITIONS
• Graphics Rendering Pipeline
• Generates (or renders) a 2D image given a 3D scene
• AKA the "pipeline"
We've got this…
…and we want this
• A 3D scene contains:
• A virtual camera – has a position and orientation (which way it's pointing and which way is up), like in the image above
• 3D Objects – stuff to render; have position, orientation, and scale
• Light sources – where the light is coming from, what color the light is, what kind of light it is, etc.
• Textures/Materials – determine how the surfaces of the objects should look
PIPELINE STAGES
• The Graphics Rendering Pipeline can be divided into 3 broad stages:
• Application
• Determined by (you guessed it) the application you're running
• Runs on the CPU
• Example operations: collision detection, animation, physics, etc.
• Geometry
• Computes what will be drawn, how it will be drawn, and where it will be drawn
• Deals with transforms, projections, etc. (we'll talk about these later)
• Typically runs on the GPU (graphics processing unit, or your graphics card)
• Rasterizer
• Renders the final image and performs per-pixel computations (if desired)
• Runs completely on the GPU
• Each of these stages can also be a pipeline itself
PIPELINE SPEEDS
• Like any pipeline, it only runs as fast as its slowest stage
• Slowest stage (bottleneck) → determines rendering speed
• Rendering speed usually expressed in:
• Frames per second (fps)
• Hertz (1/seconds)
• E.g., 60 fps → each frame has a budget of roughly 16.7 ms (see the sketch below)
• Rendering speed → called throughput in other pipeline contexts
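• A minimal sketch (in C++; not from the slides) of converting between rendering speed and per-frame time budget:

    // frame_budget.cpp - a minimal sketch converting between rendering
    // speed (fps / Hertz) and the time budget each frame gets.
    #include <cstdio>

    int main() {
        const double fps = 60.0;                   // target rendering speed in Hz
        const double frameBudgetMs = 1000.0 / fps; // milliseconds allowed per frame
        std::printf("%.0f fps -> %.2f ms per frame\n", fps, frameBudgetMs);
        // The slowest (bottleneck) stage must finish within this budget,
        // or the overall frame rate drops below the target.
        return 0;
    }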
APPLICATION STAGE
• Programmer has complete control over this
• Can be parallelized on CPU (if you have the cores for it)
• Whatever else it does, it must send the geometry to be rendered to the Geometry Stage
• Geometry – rendering primitives, like points, lines, triangles, polygons, etc.
DEFINING 3D OBJECTS
• A 3D object (or 3D model) is defined in terms of geometry or geometric primitives
• Vertex = point
• Most basic primitives:
• Points (1 vertex)
• Lines (2 vertices)
• Triangles (3 vertices)
• MOST of the time, an object/model is defined as a triangle mesh
DEFINING 3D OBJECTS
• Why triangles?
• Simple
• Fits in a single plane
• Can define any polygon in terms of triangles
• At minimum, a triangle mesh includes (see the sketch after this list):
• Vertices
• Position in (x,y,z) → Cartesian coordinates
• Face definitions
• For each face, a list of vertex indices (i.e., which vertices go with which triangle)
• Vertices can be reused
• (Optional) Normals, texture coordinates, color/material information, etc.
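• A minimal sketch of such a mesh in C++ (the struct and field names are illustrative, not from the slides):

    // triangle_mesh.cpp - a minimal, hypothetical triangle mesh layout.
    #include <vector>

    struct Vertex {
        float x, y, z;   // position in Cartesian coordinates
    };

    struct TriangleMesh {
        std::vector<Vertex>   vertices; // each vertex stored once...
        std::vector<unsigned> indices;  // ...and reused via face definitions:
                                        // every 3 indices = one triangle
    };

    int main() {
        // A square built from 4 shared vertices and 2 triangles:
        TriangleMesh quad;
        quad.vertices = { {-1, 1, 0}, {1, 1, 0}, {1, -1, 0}, {-1, -1, 0} };
        quad.indices  = { 0, 1, 2,    // first triangle
                          0, 2, 3 };  // second triangle reuses vertices 0 and 2
        return 0;
    }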
DEFINING 3D OBJECTS
• In addition to points, lines, and triangles, other primitives exist, such as:
• Polygons
• Again, however, you can define any polygon in terms of triangles
• Circles, ellipses, and other curves
• Often approximated with line segments
• Splines
• Also often approximated with lines (or triangles, if using splines to define a 3D surface)
• Spheres
• Center point + radius
• Often approximated with triangles
DEFINING 3D OBJECTS
• Sometimes certain kinds of 3D shapes are referred to as "primitives" (but ultimately these are often approximated with a triangle mesh)
• Cube
• Sphere
• Torus
• Cylinder
• Cone
• Teapot
Original drawing of the teapot: http://www.computerhistory.org/revolution/computer-graphics-music-and-art/15/206
ASIDE: WAIT, TEAPOT?
• The Utah teapot or Newell teapot
• In 1975, Martin Newell at the University of Utah needed a 3D model, so he measured a teapot and modeled it by hand
• Has become a very standard model for testing different graphical effects (and a bit of an inside joke)
http://community.thefoundry.co.uk/discussion/topic.aspx?f=8&t=33283
GEOMETRY STAGE
• Performs the majority of per-polygon and per-vertex operations
• In days of yore (before graphics accelerators), this stage ran on the CPU
• Has 5 sub-stages:
• Model and View Transform
• Vertex Shading
• Projection
• Clipping
• Screen Mapping
MODEL COORDINATES
• Usually the vertices of a polygon mesh are relative to the model's center point (origin)
• Example: vertices of a 2D square
• (-1,1)
• (1,1)
• (1,-1)
• (-1,-1)
• Called modeling or local coordinates
• Before this model gets to the screen, it will be transformed into several different spaces or coordinate systems
• When we start, the vertices are in model space (that is, relative to the model itself)
GEOMETRY STAGE:
MODEL TRANSFORM
Different model transforms
• Let's say I have a teapot in model coordinates
• I can create an instance (copy) of that model in the 3D world
• Each instance has its own model transform
• Transforming model coordinates → world coordinates
• Coordinates are now in world space
• Transform may include translation, rotation, scaling, etc.
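• A sketch of building a model transform with the GLM math library (a common companion to OpenGL; the translation/rotation/scale values below are made up):

    // model_transform.cpp - a sketch of building a model (model-to-world)
    // transform with GLM; the numeric values are made-up examples.
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    glm::mat4 makeModelTransform() {
        glm::mat4 model(1.0f); // start with the identity
        // Applied to a vertex as model * v, so the vertex is scaled first,
        // then rotated, then translated:
        model = glm::translate(model, glm::vec3(2.0f, 0.0f, -5.0f));
        model = glm::rotate(model, glm::radians(45.0f), glm::vec3(0, 1, 0));
        model = glm::scale(model, glm::vec3(0.5f));
        return model; // each instance of a mesh gets its own such matrix
    }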
RIGHT-HAND RULE
• Before we go further with coordinate spaces, etc., we need to talk about which way the x, y, and z axes go relative to each other
• OpenGL (and other systems) use the right-hand rule
• Point right hand toward X, with palm up towards Y → thumb points toward Z
GEOMETRY STAGE:
VIEW TRANSFORM
• Only things visible to the virtual camera will be rendered
• Camera has a position and orientation
• The view transform will transform both the camera and all objects so that:
• Camera starts at the world origin (0,0,0)
• Camera points in the direction of the negative z-axis
• Camera has an up direction of the positive y-axis
• Camera is set up such that the x-axis points to the right
• NOTE: This is with the right-hand rule setup (OpenGL)
• DirectX uses the left-hand rule
• Coordinates are now in camera space (or eye space)
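• A sketch of building the view transform with GLM's look-at helper (the camera position and target below are made-up values):

    // view_transform.cpp - a sketch of building a view transform with GLM.
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    glm::mat4 makeViewTransform() {
        const glm::vec3 eye(0.0f, 2.0f, 10.0f);   // camera position in world space
        const glm::vec3 target(0.0f, 0.0f, 0.0f); // point the camera looks at
        const glm::vec3 up(0.0f, 1.0f, 0.0f);     // world-space up direction
        // Produces a matrix that moves the camera to the origin, looking down
        // the negative z-axis with +y up (the OpenGL convention above):
        return glm::lookAt(eye, target, up);
    }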
OUR GEOMETRIC NARRATIVE THUS FAR…
• Model coordinates → MODEL TRANSFORM → World coordinates → VIEW TRANSFORM → Camera coordinates
• Or, put another way:
• Model space → MODEL TRANSFORM → World space → VIEW TRANSFORM → Camera (Eye) space
GEOMETRY STAGE:
VERTEX SHADING
• Lights are defined in the 3D scene
• 3D objects usually have one or more materials attached to them
• So, a metal can model might have a metallic-looking material, for instance
• Shading – determining the effect of a light (or lights) on a material
• Vertex shading – shading calculations using vertex information
• Vertex shading is programmable!
• I.e., you have a great deal of control over what happens during this stage
• We'll talk about this in more detail later; for now, know that the geometry stage handles this part…
GEOMETRY STAGE: PROJECTION
• View volume – the region inside the camera's view that contains the objects we must render
• For perspective projections, called the view frustum
• Ultimately, we will need to map 3D coordinates to 2D coordinates (i.e., points on the screen) → points must be projected from three dimensions to two dimensions
• Projection – transforms the view volume into a unit cube
• Converts camera coordinates → normalized device coordinates
• Simplifies clipping later
• Still keeps z coordinates for now
• In OpenGL, unit cube = (-1,-1,-1) to (1,1,1)
GEOMETRY STAGE: PROJECTION
• Two most commonly used projection methods:
• Orthographic (or parallel)
• View volume = rectangular box
• Parallel lines remain parallel
• Perspective
• View volume = truncated pyramid with rectangular base → called the view frustum
• Things look smaller when farther away
Orthographic
Perspective
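• A sketch of building both projection matrices with GLM (all numeric parameters below are made-up examples):

    // projection.cpp - a sketch of the two common projection matrices in GLM.
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Orthographic: view volume is a rectangular box; parallel lines stay parallel.
    glm::mat4 makeOrtho() {
        return glm::ortho(-10.0f, 10.0f,  // left, right
                          -10.0f, 10.0f,  // bottom, top
                           0.1f, 100.0f); // near, far
    }

    // Perspective: view volume is a frustum; distant objects look smaller.
    glm::mat4 makePerspective(float aspectRatio) {
        return glm::perspective(glm::radians(60.0f), // vertical field of view
                                aspectRatio,         // width / height
                                0.1f, 100.0f);       // near and far planes
    }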
GEOMETRY STAGE:
CLIPPING
• What we draw is determined by what's in the view volume:
• Completely inside → draw
• Completely outside → don't draw
• Partially inside → clip against the view volume and only draw the part inside the view volume
• When clipping, we have to add new vertices to the primitive (see the sketch below)
• Example: a line is clipped against the view volume, so a new vertex is added where the line intersects the view volume
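• A sketch of the math behind that new vertex, for a single clip plane (the function and parameter names are illustrative):

    // clip_line.cpp - a sketch of finding the new vertex where a line segment
    // crosses one clip plane. The plane is given by unit normal n and offset d,
    // with points "inside" when dot(n, p) + d >= 0.
    #include <glm/glm.hpp>

    glm::vec3 clipIntersection(const glm::vec3& a, const glm::vec3& b,
                               const glm::vec3& n, float d) {
        float da = glm::dot(n, a) + d; // signed distance of a from the plane
        float db = glm::dot(n, b) + d; // signed distance of b from the plane
        float t  = da / (da - db);     // parameter where the distance hits 0
        return a + t * (b - a);        // the new vertex added by clipping
    }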
GEOMETRY STAGE: SCREEN MAPPING
• We now have our (clipped) primitives in normalized device coordinates (which are still 3D)
• Assume we have a window with a minimum corner (x1, y1) and maximum corner (x2, y2)
• Screen mapping (see the sketch below):
• x and y of normalized device coordinates → x' and y' screen coordinates (also device coordinates)
• z coordinates unchanged
• (x', y', z) = window coordinates = screen coordinates + z
• Window coordinates passed to the rasterizer stage
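• A sketch of the mapping itself, assuming x and y in normalized device coordinates run from -1 to 1 (names are illustrative):

    // screen_mapping.cpp - a sketch of mapping normalized device coordinates
    // to window coordinates for a window with minimum corner (x1, y1) and
    // maximum corner (x2, y2).
    #include <glm/glm.hpp>

    glm::vec3 screenMap(const glm::vec3& ndc,
                        float x1, float y1, float x2, float y2) {
        float xp = (ndc.x + 1.0f) * 0.5f * (x2 - x1) + x1; // NDC x -> screen x'
        float yp = (ndc.y + 1.0f) * 0.5f * (y2 - y1) + y1; // NDC y -> screen y'
        return glm::vec3(xp, yp, ndc.z); // z passes through unchanged
    }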
GEOMETRY STAGE: SCREEN MAPPING
• Where is the starting point (origin) in screen coordinates?
• OpenGL → lower-left corner (Cartesian)
• DirectX → sometimes the upper-left corner
• Pixel = picture element
• Basically each discrete location on the screen
• Where is the center of a pixel?
• Given pixel (0,0):
• OpenGL → (0.5, 0.5)
• DirectX → (0.0, 0.0)
In OpenGL
RASTERIZER STAGE
• Have transformed and projected vertices with associated shading data from the geometry stage
• Primary goal → rasterization (or scan conversion)
• Computing and setting colors for the pixels covered by the objects
• Convert 2D vertices + z value + shading info → pixels on screen
• Has four basic stages:
• Triangle setup
• Triangle traversal
• Pixel shading
• Merging
• Runs completely on GPU
RASTERIZER STAGE:
TRIANGLE SETUP AND TRIANGLE TRAVERSAL
• Triangle setup
• Performs calculations needed for the next stage
• Triangle Traversal (or Scan Conversion)
• Finds which samples/pixels are inside each triangle
• Generates a fragment for each part of a pixel covered by a triangle
• Fragment properties → interpolated from the triangle's vertices
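• One common way to do the inside test is with edge functions; here is a sketch (a standard technique, not necessarily what any particular GPU does):

    // traversal.cpp - a sketch of a triangle inside-test using 2D edge functions.
    struct Point2D { float x, y; };

    // Positive when (a, b, p) make a counter-clockwise turn.
    float edgeFunction(const Point2D& a, const Point2D& b, const Point2D& p) {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    // A pixel sample p is inside triangle (v0, v1, v2), with counter-clockwise
    // winding, when it lies on the positive side of all three edges.
    bool insideTriangle(const Point2D& v0, const Point2D& v1,
                        const Point2D& v2, const Point2D& p) {
        return edgeFunction(v0, v1, p) >= 0.0f &&
               edgeFunction(v1, v2, p) >= 0.0f &&
               edgeFunction(v2, v0, p) >= 0.0f;
    }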
RASTERIZER STAGE: PIXEL SHADING
• Performs per-pixel shading computations using the interpolated shading data from the previous stage
• Example: texture coordinates interpolated across triangles → get the correct texture value for each pixel
• Output: one or more colors for each pixel
• Programmable!
FRAGMENT DEFINITION
• Fragment = the data necessary to shade/color a pixel due to a primitive covering or partially covering that pixel
• Data can include color, depth, texture coordinates, normal, etc.
• Values are interpolated from the primitive's vertices
• Can have multiple fragments per pixel
• Final pixel color will either be one of the fragments (i.e., the z-buffer chooses the nearest one) or a combination of fragments (e.g., alpha blending)
RASTERIZER STAGE: MERGING
• Color buffer → stores a color for each pixel
• Merging stage → combines each fragment color with the color currently stored in the color buffer
• Need to check if the fragment is visible → e.g., with a Z-buffer (see the sketch below)
• Check the z value of the incoming fragment
• If closer to the camera than the previous value in the Z-buffer → override the color in the color buffer and update the z value
• Advantages:
• O(n) → n = number of primitives
• Simple
• Can draw OPAQUE objects in any order
• Disadvantages:
• Transparent objects are more complicated
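• A sketch of the Z-buffer test during merging (buffer layout and names are illustrative, not a real API):

    // zbuffer_merge.cpp - a sketch of the Z-buffer depth test during merging.
    #include <vector>

    struct Color { unsigned char r, g, b; };

    void mergeFragment(std::vector<Color>& colorBuffer, // one color per pixel
                       std::vector<float>& zBuffer,     // one depth per pixel
                       int pixel, const Color& fragColor, float fragZ) {
        // Smaller z = closer to the camera in this sketch.
        if (fragZ < zBuffer[pixel]) {
            colorBuffer[pixel] = fragColor; // fragment is visible: override color
            zBuffer[pixel]     = fragZ;     // remember the new nearest depth
        }
        // Otherwise the fragment is hidden and simply discarded.
    }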
RASTERIZER STAGE: MERGING
• Frame buffer
• Means all the buffers on the system (but sometimes just refers to the color buffer + Z-buffer)
• To prevent the user from seeing the buffer while it's being updated → use double-buffering
• Two buffers, one visible and one invisible
• Draw on the invisible buffer → swap buffers
FIXED-FUNCTION VS. PROGRAMMABLE
• Fixed-function pipeline stages
• Elements are set up in hardware in a specific way
• Usually can only turn things on or off or change options (defined by the hardware and graphics API)
• Programmable pipeline stages
• Vertex shading and pixel (fragment) shading
• Have direct control over what is done at the given stage
OTHER PIPELINES
• The outline we described is NOT the only way to do a graphics pipeline
• E.g., ray-tracing renderers → pretty much do EVERYTHING in software, and then just set the pixel colors with the graphics card
COMPUTER GRAPHICS API
DEFINITIONS
• Computer-graphics application programming interfaces (CG APIs)
• Common CG APIs: GL, OpenGL, DirectX, VRML, Java 2D, Java 3D, etc.
• Interface between programming language and hardware
• Let's briefly go over some CG APIs…
GKS AND PHIGS
• GKS (Graphical Kernel System) – 1984
• International effort to develop a standard for computer graphics software
• Adopted as the first graphics software standard by ISO (International Organization for Standardization) and ANSI (American National Standards Institute)
• Originally 2D → 3D extension developed later
• PHIGS (Programmer's Hierarchical Interactive Graphics Standard)
• Extension of GKS
• Developed in the 1980s → a standard by 1989
• 3D standard
• Increased capabilities for hierarchical modeling, color specifications, surface rendering, and picture manipulations
• PHIGS+ → added more advanced 3D surface rendering
GL AND OPENGL
• GL (Graphics Library)
• Developed by Silicon Graphics, Inc. (SGI) for their graphics workstations
• Became the de facto graphics standard
• Fast, real-time rendering
• Proprietary system
SGI O2 workstation: http://www.engadget.com/products/sgi/o2/
• OpenGL
• Developed as a hardware-independent version of GL in the 1990s
• Specification
• Was maintained/updated by the OpenGL Architecture Review Board; now maintained by the non-profit Khronos Group
• Both are consortiums of representatives from many graphics companies and organizations
• Designed for efficient 3D rendering, but also handles 2D (just set z = 0)
• Stable; new features added as extensions
DIRECTX AND DIRECT3D
• DirectX
• Developed by Microsoft for Windows 95 in 1996
• Originally called the "Game SDK"
• Actually a combination of different APIs: Direct3D, DirectSound, DirectInput, etc.
• Less stable → adopts new features fairly quickly (for better or for worse)
• Only works on Windows and Xbox
WHY ARE WE USING OPENGL?
• In this course, we will be using OpenGL because:
• It works with practically every platform/system (Windows, Unix/Linux, Mac, etc.)
• It's arguably easier to learn/understand
• It is NOT because DirectX/Direct3D is a bad system
REFERENCES
• Many of the images in these slides come from the book "Real-Time Rendering" by Akenine-Möller, Haines, and Hoffman (3rd Edition), as well as the online supplemental material found on their website: http://www.realtimerendering.com/
• Some are also from the book "Computer Graphics with OpenGL" by Hearn, Baker, and Carithers (4th Edition)