INTRODUCTION TO COMPUTER GRAPHICS
3D Viewing

3D HOMOGENEOUS COORDINATES
• Again, we'll use homogenized coordinates P(x, y, z, W), with W ≠ 0
• Add a z coordinate to our column vectors
• Add a fourth row and fourth column to our matrices
• Point P(x, y, z) and the 4×4 identity matrix:

$$ P = \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \qquad I = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$

HOMOGENEOUS COORDINATES
• 3D → 4D homogeneous, and 4D homogeneous → 3D:

$$ \begin{bmatrix} x \\ y \\ z \end{bmatrix} \Rightarrow \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \qquad\qquad \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix} \Rightarrow \begin{bmatrix} x/w \\ y/w \\ z/w \end{bmatrix} $$

ROW-ORDER VS. COLUMN-ORDER
• Translation, in row-major order and in column-major order:

$$ T_{row} = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad T_{col} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ t_x & t_y & t_z & 1 \end{bmatrix} $$

ROW-ORDER VS. COLUMN-ORDER
• Rotation – row-major order (about x, y, and z):

$$ R_x = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad R_y = \begin{bmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad R_z = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$

ROW-ORDER VS. COLUMN-ORDER
• Rotation – column-major order (the transposes of the matrices above):

$$ R_x = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & \sin\theta & 0 \\ 0 & -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad R_y = \begin{bmatrix} \cos\theta & 0 & -\sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ \sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad R_z = \begin{bmatrix} \cos\theta & \sin\theta & 0 & 0 \\ -\sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$

PRE-MULTIPLY / POST-MULTIPLY
• To apply transformation A followed by transformation B:
• $v_{new} = B A\, v_{old}$ : post-multiply (vertex coords), used by OpenGL
• $v_{new} = v_{old}\, A B$ : pre-multiply (vertex coords), used by DirectX
• Post-multiplying with column-major matrices produces the same result as pre-multiplying with row-major matrices.
• We will use column-major order / post-multiply for the remainder of the slides (a short code sketch of this convention follows the TRANSFORMATIONS AND CAMERA slide below).

3D Viewing as a Kodak Moment

CAMERA MODEL
• The image is a projection of the world
• Onto a plane
• From the point of view of the camera

REAL CAMERAS
• Cameras have apertures
• Light is focused through the aperture using one or more lenses
• A lens bends light going through it based on its geometry
• Convex lens – thicker in the center than at the edges; light rays converge
• Concave lens – thinner in the center than at the edges; light rays diverge
• Lens applet: http://lectureonline.cl.msu.edu/~mmp/applist/optics/o.htm

REAL CAMERA
• http://photographycourse.net/lessons/exposure-apertureshutter-speed-iso/

CG CAMERA
• (Figure.)

GRAPHICS PIPELINE
• MC → Modeling Transformation → WC → Viewing Transformation → VC → Projection Transformation → PC → Normalization Transformation and Clipping → NC → Viewport Transformation → DC

TRANSFORMATIONS AND CAMERA
• Modeling transformations: moving the model, e.g. stacking fruit in a bowl
• Camera transformations: place the camera on a tripod and point it toward the desired scene
• Projection transformations: adjust the lens of the camera
• Viewport transformations: enlarge or reduce the physical photograph
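To make the column-major / post-multiply convention concrete before moving on to the camera, here is a minimal sketch (not from the slides; the Mat4/Vec4 aliases and helper names are made up for illustration). It stores 4×4 matrices in OpenGL's column-major float layout, builds a translation and a rotation about z, and applies "A followed by B" as v_new = B·A·v_old.

```cpp
// Minimal sketch: 4x4 matrices in OpenGL's column-major float layout,
// composed with the post-multiply convention v_new = B * A * v_old.
#include <array>
#include <cmath>
#include <cstdio>

using Mat4 = std::array<float, 16>;   // column-major: element (row, col) lives at m[col*4 + row]
using Vec4 = std::array<float, 4>;

Mat4 identity() {
    Mat4 m{};                          // zero-initialized
    m[0] = m[5] = m[10] = m[15] = 1.0f;
    return m;
}

Mat4 translate(float tx, float ty, float tz) {
    Mat4 m = identity();
    m[12] = tx; m[13] = ty; m[14] = tz;   // translation sits in the last column
    return m;
}

Mat4 rotateZ(float theta) {               // rotation about z, theta in radians
    Mat4 m = identity();
    m[0] = std::cos(theta);  m[4] = -std::sin(theta);
    m[1] = std::sin(theta);  m[5] =  std::cos(theta);
    return m;
}

Mat4 mul(const Mat4& a, const Mat4& b) {  // c = a * b
    Mat4 c{};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                c[col*4 + row] += a[k*4 + row] * b[col*4 + k];
    return c;
}

Vec4 mulVec(const Mat4& m, const Vec4& v) {  // post-multiply: m * v
    Vec4 r{};
    for (int row = 0; row < 4; ++row)
        for (int k = 0; k < 4; ++k)
            r[row] += m[k*4 + row] * v[k];
    return r;
}

int main() {
    // Transformation A (rotate 90 degrees about z) followed by B (translate along x):
    Mat4 A = rotateZ(3.14159265f / 2.0f);
    Mat4 B = translate(1.0f, 0.0f, 0.0f);
    Vec4 p = mulVec(mul(B, A), {1.0f, 0.0f, 0.0f, 1.0f});   // v_new = B * A * v_old
    std::printf("%.2f %.2f %.2f %.2f\n", p[0], p[1], p[2], p[3]);
    return 0;
}
```

Rotating the point (1, 0, 0) by 90° about z and then translating by (1, 0, 0) lands it at (1, 1, 0), which is what the program prints.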
SPECIFYING WHAT YOU CAN SEE
• The OpenGL projection model uses camera/eye coordinates
• The "eye" (camera) is located at the origin, looking down the −z axis
• Projection matrices use a six-plane model:
• Near (image) plane and far plane, both given as positive distances from the eye
• Enclosing planes: top and bottom, left and right
• Together these set up a viewing frustum, a.k.a. the view volume
• The frustum specifies how much of the world we can see; anything outside the viewing frustum is clipped
• Two steps:
• Specify its location in space (view transform)
• Specify the size of the frustum (projection transform)

PROJECTION
• Orthographic/parallel projection is best for:
• Architectural drawings, where checking that things line up or are the same size is essential
• Scenes you are not trying to "fly" through
• Perspective/frustum projection is best for:
• Realism
• Moving through the scene
• Situations where aligning/measuring is not an issue

VIEWING FRUSTUM / VIEW VOLUME
• (Figure: the orthographic view volume vs. the perspective/frustum view volume.)

VIEWING TRANSFORM
• The camera has its own 3D coordinate system based on its orientation: (u, v, n)
• u corresponds to x (as seen by the camera)
• v corresponds to y (as seen by the camera)
• n corresponds to z (as seen by the camera)
• Negative n points into the scene
• Notice that in OpenGL both the world and camera systems are right-handed!
• This is not the case in all APIs

CAMERA COORDINATES
• We define the camera orientation in world coordinates:
• Provide the camera location (eyepoint)
• Indicate what direction the camera is looking (lookat)
• Give the "up" direction of the camera
• Note: the up direction is independent of v
• Then:
• n = eyepoint − lookat (normalized)
• u = up × n (normalized)
• v = n × u

CAMERA COORDINATE DEFAULTS
• The default orientation coincides with the world axes:
• Eyepoint at (0, 0, 0)
• Lookat (0, 0, −1)
• Up vector (0, 1, 0)
• Then:
• n = normalize(eyepoint − lookat) = (0, 0, 1)
• u = normalize(up × n) = (1, 0, 0)
• v = normalize(n × u) = (0, 1, 0)

GRAPHICS PIPELINE
• MC → Modeling Transformation → WC → Viewing Transformation → VC → Projection Transformation → PC → Normalization Transformation and Clipping → NC → Viewport Transformation → DC

VIEWING TRANSFORMATION
• The viewing transform converts coordinates from world space to camera/eye space.
• The coordinate system of the world must be lined up with the axes of the camera.
• The origin of camera space is the position of the camera.

WORLD TO CAMERA COORDINATE TRANSFORMATION
• When considering transformations of objects, we can achieve the same effect by transforming the objects or, alternatively, by transforming the coordinate systems.
• For example, consider two 2D coordinate systems: x, y (the world coordinates) and u, v (the camera/eye coordinates).

WORLD TO CAMERA COORDINATE TRANSFORMATION
• (Figure: the camera frame with origin at (x1, y1) is aligned with the world frame by translating by T(−x1, −y1) and then rotating by R(−θ).)

WORLD TO CAMERA COORDINATE TRANSFORMATION
• We can do the same in 3D.
• The goal is to find out what the coordinates of the model, which are available in world coordinates, are in camera coordinates.
• What are the steps?
• 1. Translate the camera/eye coordinate system so that the origins of the two coordinate systems are coincident.
• 2. Rotate the camera/eye coordinate system about the y-axis of the world until the n-axis is in the yz-plane.
• 3. Rotate the camera/eye coordinate system about the x-axis of the world until the n-axis is coincident with the z-axis of the world.
• 4. Rotate the camera/eye coordinate system about the z-axis of the world until the u and v axes are coincident with the x and y axes, respectively.
• NOTE: the world axes never move!
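The four alignment steps above are exactly what a lookAt-style helper performs. Below is a sketch (assumed names, mirroring what gluLookAt computes) that builds the world-to-camera matrix directly from the n, u, v definitions on the CAMERA COORDINATES slide: the rows of the rotation part are u, v, and n, and the translation part moves the eyepoint to the origin. The column-major layout matches the earlier sketch.

```cpp
// Sketch: world-to-camera (viewing) matrix from eyepoint, lookat, and up,
// using the same n, u, v definitions as the CAMERA COORDINATES slide.
#include <array>
#include <cmath>
#include <cstdio>

using Vec3 = std::array<float, 3>;
using Mat4 = std::array<float, 16>;   // column-major: m[col*4 + row]

Vec3 sub(Vec3 a, Vec3 b)   { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
Vec3 cross(Vec3 a, Vec3 b) { return {a[1]*b[2] - a[2]*b[1],
                                     a[2]*b[0] - a[0]*b[2],
                                     a[0]*b[1] - a[1]*b[0]}; }
float dot(Vec3 a, Vec3 b)  { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
Vec3 normalize(Vec3 a)     { float len = std::sqrt(dot(a, a));
                             return {a[0]/len, a[1]/len, a[2]/len}; }

Mat4 lookAt(Vec3 eye, Vec3 lookat, Vec3 up) {
    Vec3 n = normalize(sub(eye, lookat));   // n = eyepoint - lookat (normalized)
    Vec3 u = normalize(cross(up, n));       // u = up x n (normalized)
    Vec3 v = cross(n, u);                   // v = n x u
    // Rotation rows are u, v, n; the translation moves the eyepoint to the origin.
    Mat4 m{};
    m[0] = u[0]; m[4] = u[1]; m[8]  = u[2]; m[12] = -dot(u, eye);
    m[1] = v[0]; m[5] = v[1]; m[9]  = v[2]; m[13] = -dot(v, eye);
    m[2] = n[0]; m[6] = n[1]; m[10] = n[2]; m[14] = -dot(n, eye);
    m[15] = 1.0f;
    return m;
}

int main() {
    // With the values from the CAMERA COORDINATE DEFAULTS slide,
    // the viewing matrix is the identity.
    Mat4 view = lookAt({0.0f, 0.0f, 0.0f}, {0.0f, 0.0f, -1.0f}, {0.0f, 1.0f, 0.0f});
    std::printf("%.0f %.0f %.0f\n", view[0], view[5], view[10]);   // prints: 1 1 1
    return 0;
}
```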
VIEWING TRANSFORM
• The 3D viewing process:
• Build our own set of world and camera coordinates
• Define the four transformations needed to make the axes coincident
• These are the same transformations that will convert world coordinates to camera coordinates
• The composition of these transformations is our camera (viewing) transformation

PROJECTION
• The role of the camera can be described as projecting a 3D scene onto a 2D plane
• Projection plane (also called the view plane):
• The 2D plane onto which the 3D scene is being projected
• In OpenGL, it is the near clipping plane (discussed later)
• (Figure: a point p(u, v, n) projects to p′(u, v, 0) in the view-plane window, with the COP at the origin of the (u, v, n) frame.)

PROJECTION – METHODOLOGY
• We must determine how objects are projected onto the plane
• The world window lies "on" the view plane
• Objects projected onto the view plane that fall within the window will be mapped to the viewport
• To define the projection, we need two things:
• Center of Projection (COP): the point of convergence for the projection; also called a vanishing point; in some texts called the Projection Reference Point
• Type of projection: orthographic (parallel) or perspective

PROJECTION – METHODOLOGY
• The projection of a 3D object is defined by projectors:
• Straight projection rays
• Projectors emanate from the center of projection (the eyepoint in OpenGL)
• They travel from the center of projection to the vertices of the objects
• The intersection point with the view plane is where the object is projected

ORTHOGRAPHIC PROJECTION
• Sometimes called parallel projection
• Objects of equal size appear the same size after being projected, regardless of their distance from the viewing plane
• The center of projection is at infinity
• Default projection type in OpenGL
• (Figure: a 3D scene projected onto the view plane, with the center of projection at infinity.)

CALCULATING THE EQUATIONS OF PROJECTION
• Given an OpenGL orthographic (parallel) projection with the DOP (direction of projection) parallel to the n-axis, all in camera/eye coordinates:
• Our goal is to find the vertices of P1′, the projection of P1(u1, v1, n1) onto the view plane.
• What are the values of u′, v′, and n′?
• (Figure: P1(u1, v1, n1) and P2(u2, v2, n2) are carried along the DOP, by parallel projectors, to P1′ and P2′ on the view plane.)

CALCULATING THE EQUATIONS OF PROJECTION
• u′ = u1
• v′ = v1
• n′ = −near
• The vertex at P1′ is (u1, v1, −near), i.e., orthographic projection simply throws away the n values!

ORTHOGRAPHIC NORMALIZATION
• Done in "old" OpenGL as part of projection
• The view volume is mapped to a 2×2×2 cube (see the sketch below)
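A minimal sketch of that normalization (not from the slides; the ortho name is made up): it builds the matrix that maps the orthographic view volume, bounded by l, r, b, t and the near/far distances, onto the 2×2×2 cube. This is the same matrix glOrtho builds, and it appears symbolically as O on the PROJECTION TRANSFORM slide later on. Column-major layout as before.

```cpp
// Sketch: orthographic normalization matrix (the glOrtho matrix),
// stored column-major like the earlier examples.
#include <array>
#include <cstdio>

using Mat4 = std::array<float, 16>;             // column-major: m[col*4 + row]

Mat4 ortho(float l, float r, float b, float t, float n, float f) {
    Mat4 m{};
    m[0]  =  2.0f / (r - l);                    // scale u into [-1, 1]
    m[5]  =  2.0f / (t - b);                    // scale v into [-1, 1]
    m[10] = -2.0f / (f - n);                    // scale and flip n into [-1, 1]
    m[12] = -(r + l) / (r - l);                 // translate the volume's center to the origin
    m[13] = -(t + b) / (t - b);
    m[14] = -(f + n) / (f - n);
    m[15] =  1.0f;
    return m;
}

int main() {
    // A symmetric 2x2 window with near = 1, far = 3:
    Mat4 m = ortho(-1.0f, 1.0f, -1.0f, 1.0f, 1.0f, 3.0f);
    // The point (0, 0, -1) on the near plane maps to -1 in the cube,
    // and (0, 0, -3) on the far plane maps to +1.
    float nearZ = m[10] * -1.0f + m[14];        // = -1
    float farZ  = m[10] * -3.0f + m[14];        // = +1
    std::printf("%.1f %.1f\n", nearZ, farZ);
    return 0;
}
```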
PERSPECTIVE PROJECTION
• Sometimes called frustum projection
• The center of projection is at the camera location
• Projectors converge at the COP
• Objects closer to the view plane appear larger when projected than objects of the same size that are farther from the view point
• This is the projection used by "real" cameras

CALCULATING THE EQUATIONS OF PROJECTION
• Given a perspective projection with the COP at the origin of the camera/eye coordinate system:
• Our goal is to find the vertices of P1′, the projection of P1(u1, v1, n1) onto the view plane.
• We can do this using the parametric equation of the line or similar triangles.
• (Figure: projectors from the COP through P1(u1, v1, n1) and P2(u2, v2, n2) intersect the view plane at P1′ and P2′.)

CALCULATING THE EQUATIONS OF PROJECTION
• u′ = −near · u1 / n1
• v′ = −near · v1 / n1
• n′ = −near

HOMOGENEOUS COORDINATES
• 3D → 4D homogeneous: $[x, y, z]^T \Rightarrow [x, y, z, 1]^T$
• 4D homogeneous → 3D: $[x, y, z, w]^T \Rightarrow [x/w, y/w, z/w]^T$

CALCULATING THE EQUATIONS OF PROJECTION
• In homogeneous coordinates, the divide by the w component (which ends up proportional to n1) produces exactly these values:

$$ \begin{bmatrix} u' \\ v' \\ n' \end{bmatrix} = \begin{bmatrix} -\text{near} \cdot u_1 / n_1 \\ -\text{near} \cdot v_1 / n_1 \\ -\text{near} \end{bmatrix} $$

PERSPECTIVE NORMALIZATION
• (Figure.)

PROJECTION TRANSFORM
• Orthographic view:

$$ O = \begin{pmatrix} \frac{2}{r-l} & 0 & 0 & -\frac{r+l}{r-l} \\ 0 & \frac{2}{t-b} & 0 & -\frac{t+b}{t-b} \\ 0 & 0 & \frac{-2}{f-n} & -\frac{f+n}{f-n} \\ 0 & 0 & 0 & 1 \end{pmatrix} $$

• Perspective view:

$$ P = \begin{pmatrix} \frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\ 0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\ 0 & 0 & \frac{-(f+n)}{f-n} & \frac{-2fn}{f-n} \\ 0 & 0 & -1 & 0 \end{pmatrix} $$

• See http://www.songho.ca/opengl/gl_projectionmatrix.html for the derivation

ALTERNATE SPECIFICATION OF PERSPECTIVE PROJECTION
• Another way to construct the view volume for perspective projections:
• Field of view (FOV): the angle subtended from the center of projection to the top and bottom of the view plane
• Aspect ratio: width/height of the view plane
• The width angle is obtained from (aspect ratio × FOV)

DIFFERENT FOV
• (Figure.)

PERSPECTIVE NORMALIZATION
• Once again…

PERSPECTIVE CAMERAS IN OPENGL
• Again, all projections in OpenGL map the view volume to a 2 × 2 × 2 cube.
• For more information on deriving these projection matrices, see Hearn and Baker, Chapter 7.
• gluPerspective(fov, aspect, near, far) builds, with f = cot(fov/2):

$$ \begin{bmatrix} \frac{f}{\text{aspect}} & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & \frac{\text{zFar} + \text{zNear}}{\text{zNear} - \text{zFar}} & \frac{2 \cdot \text{zFar} \cdot \text{zNear}}{\text{zNear} - \text{zFar}} \\ 0 & 0 & -1 & 0 \end{bmatrix} $$

GRAPHICS PIPELINE
• MC → Modeling Transformation → WC → Viewing Transformation → VC → Projection Transformation → PC → Normalization Transformation and Clipping → NC → Viewport Transformation → DC

MODELVIEW MATRIX
• In fixed-function pipeline OpenGL there are two transformation matrices:
• Modelview: the model-to-world and world-to-camera transformations
• Projection: the camera-to-clip transformation

MODELVIEW MATRIX
• Transformation data will be uniform variables
• Constant for all vertices of an object
• Two approaches:
• The application sends raw data and the shader constructs the matrices
• The application constructs the matrices and sends them to the shader (the sketch at the end of this section builds a projection matrix this way)

GRAPHICS PIPELINE
• MC → Modeling Transformation → WC → Viewing Transformation → VC → Projection Transformation → PC → Normalization Transformation and Clipping → NC → Viewport Transformation → DC
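As a sketch of the "application constructs the matrices" approach, the code below (assumed names; it is not GLU source, just the matrix from the PERSPECTIVE CAMERAS IN OPENGL slide with f = cot(fov/2), stored column-major as in the earlier sketches) builds a gluPerspective-equivalent projection matrix, applies it to points on the near and far planes, and performs the homogeneous divide to confirm they land on the −1 and +1 faces of the 2×2×2 cube. In a shader-based program the resulting float array would be uploaded with glUniformMatrix4fv.

```cpp
// Sketch: gluPerspective-equivalent projection matrix, column-major float[16].
#include <array>
#include <cmath>
#include <cstdio>

using Mat4 = std::array<float, 16>;              // column-major: m[col*4 + row]
using Vec4 = std::array<float, 4>;

Mat4 perspective(float fovDegrees, float aspect, float zNear, float zFar) {
    float f = 1.0f / std::tan(fovDegrees * 3.14159265f / 360.0f);   // cot(fov/2)
    Mat4 m{};
    m[0]  = f / aspect;
    m[5]  = f;
    m[10] = (zFar + zNear) / (zNear - zFar);
    m[14] = (2.0f * zFar * zNear) / (zNear - zFar);
    m[11] = -1.0f;                               // copies -n into w for the perspective divide
    return m;
}

Vec4 mulVec(const Mat4& m, const Vec4& v) {      // post-multiply: m * v
    Vec4 r{};
    for (int row = 0; row < 4; ++row)
        for (int k = 0; k < 4; ++k)
            r[row] += m[k*4 + row] * v[k];
    return r;
}

int main() {
    Mat4 proj = perspective(60.0f, 1.0f, 1.0f, 100.0f);

    // Points on the centers of the near and far planes (looking down -n):
    Vec4 onNear = mulVec(proj, {0.0f, 0.0f, -1.0f,   1.0f});
    Vec4 onFar  = mulVec(proj, {0.0f, 0.0f, -100.0f, 1.0f});

    // After the homogeneous divide, n lands at -1 on the near plane and +1 on
    // the far plane of the canonical 2x2x2 cube.
    std::printf("near -> %.2f, far -> %.2f\n",
                onNear[2] / onNear[3], onFar[2] / onFar[3]);
    return 0;
}
```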