Old School Pipeline
Basic set of abstractions: triangles, z-buffer, vertex lighting, texture, pixel tests, …
Interactive graphics used this to do other things with hacks – the machinery made it go fast.

Pipeline stages:
- Application (primitives) – triangles
- Vertex stream
- Vertex processing: Transform, Lighting, Clipping (TCL)
- Triangle re-assembly
- Rasterization
- Per-pixel coloring
- Pixel/fragment tests (buffer reads)
- Memory writes (put pixel in frame buffer)

Vertices as a stream – caching processed vertices for sharing, other ways to feed.

Rasterization: triangle -> list of fragments
- Fragments (as opposed to pixels): a fragment has a place on screen (a pixel) – but might not make it there
- Many fragments can contribute to a single pixel
- Fragments have an x,y position (screen space); all other values are interpolated from the triangle's vertices
- Important: interpolating from vertices is the only way to get values to fragments
- The program cannot talk to fragments – it doesn't know what they are!

Fragment coloring: texture mapping goes here.

Tests (z-buffer, stencil buffer, …)
- Issue of late Z: a lot of work, potentially thrown away (same for other late tests)

The machinery allows for multipass – e.g., multiple textures via multiple passes.
History: multi-texture, texture combiners, …

Memory reads and writes get in the way: buffers, caches, queues.
Importance of stream independence – limited order dependence, so we can parallelize.

Why is it called a pipeline?
Performance: where is the bottleneck?
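The interpolation step described above can be sketched as a toy: the rasterizer assigns each fragment a value blended from the triangle's three vertices using barycentric weights. This is an illustrative Python sketch, not a real API – real hardware also does perspective correction, which is omitted here.

```python
# Toy sketch of attribute interpolation during rasterization.
# Each vertex is ((x, y), value); the fragment at (px, py) gets a
# barycentric blend of the three per-vertex values.

def edge(ax, ay, bx, by, px, py):
    """Signed area term (edge function) for point (px, py) vs. edge a->b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def interpolate(v0, v1, v2, px, py):
    """Return the per-vertex value interpolated at screen position (px, py)."""
    (x0, y0), c0 = v0
    (x1, y1), c1 = v1
    (x2, y2), c2 = v2
    area = edge(x0, y0, x1, y1, x2, y2)       # twice the triangle area
    w0 = edge(x1, y1, x2, y2, px, py) / area  # weight of vertex 0
    w1 = edge(x2, y2, x0, y0, px, py) / area  # weight of vertex 1
    w2 = 1.0 - w0 - w1                        # weights sum to 1
    return w0 * c0 + w1 * c1 + w2 * c2

v0 = ((0, 0), 0.0)
v1 = ((4, 0), 1.0)
v2 = ((0, 4), 1.0)
print(interpolate(v0, v1, v2, 0, 0))      # at v0 the weights are (1,0,0)
print(interpolate(v0, v1, v2, 4/3, 4/3))  # centroid: average of the values
```

Note that the application never sees these weights: it only supplies per-vertex values, which is exactly the "program cannot talk to fragments" point above.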
Candidate bottlenecks:
- Getting vertices to the pipeline
- Transformation / vertex computation
- Rasterization
- Per-pixel computations
- Texture memory reads
- Reads/writes to the buffer
Different systems have different bottlenecks.
Newer systems adapt based on needs (flexible resources).
Fancy memory systems, since that is increasingly where the bottlenecks are.

Fixed Function Pipeline
Vertex stage: transform, basic lighting
- Vertices are independent (no "triangle" operations)
Fragment stage: use interpolated values (color, texture coords)
- Look up and apply textures (with different modulators/combiners)
- Values are not associated with any one vertex (they come interpolated from all three)
Want more – but which tricks to put into hardware?
- Normal maps (of different flavors), different combiner ops, per-pixel lighting, …
Non-answer: every piece of hardware is different. Good luck (graphics circa 2000).
Answer: make it programmable!

Programmable Hardware
Key: it is still a pipeline! The pieces talk together in a fixed way.
Some pieces stay the same (at first).
Vertex unit:
- What comes in: vertices, plus constants
- What goes out: vertices
- What changes? Other aspects of the vertices are filled in:
  - Screen-space position
  - Computed color
  - Anything you want – it gets passed to the interpolators
Fragment unit:
- What comes in: fragments (with values interpolated from vertices), plus constants
- What goes out: fragments
- What changes?
  - Properties of the fragments:
    - Mainly color
    - Can change Z
    - Can't change X,Y (screen-space position)

Note what you can specify from your app:
- Some stuff is per vertex
- Some stuff applies to the whole set of vertices (constants)
  - Need to wait for everything to finish before re-loading constants
- How vertices connect to be assembled

Note what doesn't change through the pipeline:
- Vertices – can't make new ones, or put them together differently, …
- Fragments – can't change their position, or their interpolated input values, …
Notice how important interpolation and rasterization are.

Other Programmability
- Process the stream of processed vertices (add/remove vertices)
  - Geometry shaders – applied AFTER vertex shading; now commonplace
- Programmable rasterization
- Direct access to the computation units (for doing non-graphics stuff)

GLSL Basics
Why GLSL: the compiler is built into the driver / history.
Specify programs written in a C-like language.
Has nice features for graphics programming (vectors, matrices, …).
Implements the model as given by the pipeline: vertex and fragment shaders, which can only pass things via vertices and interpolation.
Some terminology:
- Uniform: constant over a group of primitives
- Attribute: features of vertices
- Varying: properties of vertices that get interpolated
  - Outputs of vertex shaders, inputs to fragment shaders
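The terminology above can be illustrated with a toy model of the data flow. Since GLSL itself needs a GL context to run, this is a Python sketch with made-up names: the uniform is constant across the whole draw, attributes feed the vertex shader per vertex, and varying outputs are interpolated before they reach the fragment shader.

```python
# Toy model of GLSL data flow (names are illustrative, not a real API):
# uniform   - constant over the group of primitives
# attribute - per-vertex input to the vertex shader
# varying   - vertex-shader output, interpolated into fragment-shader input

def vertex_shader(attribute, uniform):
    """Runs once per vertex; fills in position and varying outputs."""
    x, y = attribute["position"]
    s = uniform["scale"]
    return {"position": (x * s, y * s),           # required output
            "varying_color": attribute["color"]}  # goes to the interpolators

def fragment_shader(varying, uniform):
    """Runs once per fragment; sees only interpolated values + constants."""
    return varying["varying_color"] * uniform["brightness"]

uniform = {"scale": 2.0, "brightness": 0.5}   # same for every vertex/fragment
verts = [{"position": (0, 0), "color": 1.0},  # per-vertex attributes
         {"position": (1, 0), "color": 0.0},
         {"position": (0, 1), "color": 0.0}]

outs = [vertex_shader(v, uniform) for v in verts]

# Stand-in for the rasterizer: one fragment at the centroid, so each
# varying is the average of the three vertex-shader outputs.
w = (1/3, 1/3, 1/3)
varying = {"varying_color":
           sum(wi * o["varying_color"] for wi, o in zip(w, outs))}

print(fragment_shader(varying, uniform))  # interpolated color * brightness
```

Note how the structure enforces the pipeline's fixed connections: the fragment shader never sees the vertices themselves, only the interpolated varyings and the uniforms.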