Presentation Script: November 02, 2015

History
Ray tracing is an algorithm for rendering images by generating them the same way the eye forms an image from light rays. This produces very realistic images because the algorithm is modeled after our current understanding of the physics of light, optics, and vision.
The first formal attempt to understand vision came from Plato, who interpreted the eye as a lantern that emitted light. This light would meet with outside light to create vision.
Later on, Aristotle would change this way of thinking. He interpreted the eye as a darkened
chamber awaiting light. He also first described the camera obscura, a device which is the
basis for cameras and photography.
Kepler offered the first theory of the retinal image, seeing the eye as a camera obscura, and recorded the laws of geometric optics. Newton modeled light as traveling particles, believing these particles were emitted from shining objects. Gauss used this understanding to pioneer techniques for ray tracing with lenses.
Modern physics understands light as both a particle and a wave, in the form of photons delivered in quanta. This view is mainly useful for modeling the behavior of light at scales that are not intuitive to humans (either very large or very small), and it does not really change our day-to-day perception of images. Ray tracing does not account for these modern-physics effects.
History of the eye
https://web.stanford.edu/class/history13/earlysciencelab/body/eyespages/eye.html
History of ray tracing
https://graphics.stanford.edu/courses/cs348b-98/basics/history/slides/walk001.html
Gaussian Ray Tracing Technique
http://bolvan.ph.utexas.edu/~vadim/classes/10f/rays.pdf
Applications
Ray tracing in Video games
The main use for raytracing is to accurately simulate the transport of light. In 3D graphics, rendering a surface means determining how much of the scene's light will reach the viewer's eye. Even a basic understanding of the physics of light makes you realize that the properties of that surface alone are not enough for a correct and realistic render: you need to know the properties of the entire scene.
This is why raytracing is so powerful. Its innate understanding of light means that some effects are more efficiently computed with raytracing than with state-of-the-art raster techniques.
The examples below highlight some of the most common use cases for raytracing. While the primary application of raytracing is rendering, it can also be used for geometric queries such as collision detection and line of sight (a minimal ray-cast sketch follows this list):
* Shadows
* Reflections
* Refractions
* Ambient occlusion
* Global Illumination
* Collision Detection
* A.I. and line of sight
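As an illustration of the non-rendering uses, the sketch below checks line of sight by casting a ray from an observer toward a target and testing whether any obstacle blocks it before the target is reached. The obstacle representation (axis-aligned boxes) and all function names are assumptions chosen for this example, not part of any particular engine.

    def ray_hits_box(origin, direction, box_min, box_max, max_t):
        # Slab method: clip the ray parameter t against each pair of box planes.
        # Returns True if the ray hits the box somewhere in 0 <= t <= max_t.
        t_near, t_far = 0.0, max_t
        for axis in range(3):
            o, d = origin[axis], direction[axis]
            lo, hi = box_min[axis], box_max[axis]
            if abs(d) < 1e-12:
                if o < lo or o > hi:        # parallel to this slab and outside it
                    return False
            else:
                t0, t1 = (lo - o) / d, (hi - o) / d
                if t0 > t1:
                    t0, t1 = t1, t0
                t_near, t_far = max(t_near, t0), min(t_far, t1)
                if t_near > t_far:
                    return False
        return True

    def has_line_of_sight(eye, target, obstacles):
        # Cast a ray from eye toward target; sight is blocked if any obstacle
        # box is hit before the target distance is reached.
        direction = [t - e for e, t in zip(eye, target)]
        distance = sum(c * c for c in direction) ** 0.5
        direction = [c / distance for c in direction]
        return not any(ray_hits_box(eye, direction, lo, hi, distance)
                       for lo, hi in obstacles)

For example, has_line_of_sight((0, 0, 0), (10, 0, 0), [((4, -1, -1), (6, 1, 1))]) returns False because the box sits between the two points; a shadow test is the same ray cast aimed at a light source.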
Ray Tracing in Architecture
When presenting design proposals to clients, architects need realistic renderings accompanying their designs. In the past, architects relied on hand-done drawings created with ink or watercolors. However, this traditional method is rather tedious and does not easily produce realistic illumination. Even many computer-aided design (CAD) programs that accurately model objects are unable to model light.
Some modeling programs do allow architects to introduce light into a picture, specifying the light's location, orientation, color, and distribution. These features help designers create effects such as shadows and specular highlights; however, they still fail to model the physical reality of how objects interact with light. In order to accurately model the true behavior of light in a given environment, we must consider all of the light in that environment and acknowledge that real light is reflected, refracted, diffused, and absorbed. For this reason, backwards ray tracing, combined with radiosity techniques, is often the most widely used and most useful method for architectural rendering.
Ray Tracing in Animation
Advancements in computer graphics, including developments in ray tracing, have opened up a world of possibilities in the field. Traditionally, individual frames of animated works were drawn by hand, and movement was simulated through a complex series of frame adjustments. There is still a strong sense of nostalgia for traditional methods of animation; however, computer graphics are playing an ever stronger role in the process.
The first great example of a computer-animated movie dates back to 1995 with the release of Toy Story. The story was about a group of toys, mostly modeled with opaque, rounded geometry and simple color textures, and thus easy to render. Since then, filmmakers have gradually become more ambitious, powered by advances in computer technology, developing more complex stories, characters, and scenes, but still relying on rasterization methods.
More recently, although ray tracing was still expensive, animation houses such as DreamWorks and Pixar started to integrate it into their movies to approximate global illumination. In films such as Shrek 2 and Finding Nemo, they used pseudo ray tracing, in the sense that they generally rasterized the scenes unless there was a polygon that needed complex reflections and justified being ray traced. Currently, most digitally animated films combine multiple techniques, including both ray tracing and rasterization, to create the widest possible variety of effects as efficiently as possible. The first animated movie to use ray tracing extensively was Pixar/Disney's Cars, where the cars have very curved surfaces, are very shiny and reflective, and cast many shadows into a rather soft, natural environment.
Algorithm
Introduction
* Light is a flow of photons with wavelengths
* Light rays travel in straight lines
* Light rays can cross each other without interfering with each other
* Light rays obey the laws of reflection and refraction
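For reference, the law of reflection states that the angle of incidence equals the angle of reflection (theta_i = theta_r), and Snell's law of refraction states that n1 * sin(theta_1) = n2 * sin(theta_2), where n1 and n2 are the refractive indices of the two media (see the links below).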
Pseudo code
Pseudo code adapted from:
https://www.cl.cam.ac.uk/teaching/2003/AdvGraph/raytracing.html
Equations:
https://www.siggraph.org/education/materials/HyperGraph/raytrace/rtinter0.htm
Snell's law:
http://hyperphysics.phy-astr.gsu.edu/hbase/geoopt/refr.html
http://hyperphysics.phy-astr.gsu.edu/hbase/phyopt/fermat.html
// We have a scene that contains 3d objects
// We have a plane that represents our image
// The plane is located somewhere in 3d space
// Show picture to demonstrate
Script for algorithm
Explain the basics of the algorithm.
We have our scene:
Show scene
We have a point of projection and an image that we will draw the scene on:
Show that
The point of projection is our eye; it is where we will be looking at the scene from.
The image is the 2D plane we will be drawing the scene on to. This is the image that is
returned from the algorithm.
From the eye, we shoot out a ray that extends through a pixel:
Show the picture with a vector from the point of projection to pixel (1, 1)
What the vector hits will help us determine the color of that pixel.
In this example, the ray keeps extending to infinity, meaning it hits nothing.
Show picture of vector hitting something.
In this situation, we hit the sphere.
We then find the reflection vector and see how that contributes to the color of the object.
Show picture of reflection vector.
We then find the refraction vector if the object is refractive (like a glass cup).
Show picture of refraction vector.
If the object is refractive, we find out how the objects behind it contribute to the pixel’s color.
Then we find out how much the light source contributes to the color.
Show light vector.
If the light is obstructed before reaching the point, we have a shadow. Otherwise, we find out how much the light contributes to the color (a minimal sketch of this contribution follows the walkthrough).
We sum all the contributions to the color and plot the result on the image.
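The walkthrough leaves the light contribution abstract. A minimal sketch of one way to compute it, assuming a Lambertian (purely diffuse) surface lit by a point light; the function and parameter names are chosen for this example only:

    import math

    def lambert_contribution(point_I, normal_N, light_pos, light_color, surface_color):
        # Diffuse (Lambertian) contribution of one point light at a surface point.
        # Brightness is proportional to cos(theta) = N . L, where L is the unit
        # vector from the surface point toward the light; back-facing light gives 0.
        L = [lp - p for lp, p in zip(light_pos, point_I)]
        length = math.sqrt(sum(c * c for c in L))
        L = [c / length for c in L]
        cos_theta = max(0.0, sum(n * l for n, l in zip(normal_N, L)))
        return [lc * sc * cos_theta for lc, sc in zip(light_color, surface_color)]

In the pseudo code below, a function along these lines could back lightColorContribution.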
image function(scene, point_of_projection, image_width, image_height)
{
    point_P = point_of_projection
    image = new image(image_width, image_height)
    for each pixel (x, y) in image
    {
        // map the pixel to its point on the image plane in 3D space
        point_A = convertPixelToPoint(x, y)
        // unit direction from the eye through that point
        vector_D = getUnitVector(point_P, point_A)
        // trace the ray to find the pixel's color
        color = raytrace(point_P, vector_D, scene)
        image.plot(x, y, color)
    }
    return image
}
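convertPixelToPoint and getUnitVector are not defined in the pseudo code. A minimal sketch of how they might look, assuming an image plane spanning [-1, 1] x [-1, 1] at z = 0 with the eye behind it on the z-axis (the plane placement and names are assumptions for illustration):

    import math

    def convert_pixel_to_point(x, y, image_width, image_height):
        # Map pixel (x, y) to a 3D point on the assumed image plane at z = 0.
        px = (x + 0.5) / image_width * 2.0 - 1.0    # pixel center, mapped to [-1, 1]
        py = 1.0 - (y + 0.5) / image_height * 2.0   # flipped so +y points up
        return (px, py, 0.0)

    def get_unit_vector(point_from, point_to):
        # Unit direction vector from point_from to point_to.
        d = [b - a for a, b in zip(point_from, point_to)]
        length = math.sqrt(sum(c * c for c in d))
        return [c / length for c in d]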
color raytrace(point_of_projection, unit_direction_vector, scene)
{
    point_P = point_of_projection
    vector_D = unit_direction_vector
    nearest_t = INFINITY
    nearest_obj = NULL
    for each object in scene.objects
    {
        // ray equation: R(t) = point_P + vector_D * t
        // solve for the smallest t at which the ray intersects the object
        t = rayIntersectsObject(point_P, vector_D, object)
        if (isReal(t) && t >= 0)
        {
            // keep the closest intersection along the ray
            if (t < nearest_t)
            {
                nearest_t = t
                nearest_obj = object
            }
        }
    }
    color = black
    if (nearest_obj != NULL)
    {
        // point where the ray hits the nearest object
        point_I = point_P + vector_D * nearest_t
        // normal of the object at point I
        vector_N = normalVectorAtPoint(nearest_obj, point_I)
        if (nearest_obj.isReflective(point_I, vector_D))
        {
            reflection_vector = getReflectionVector(vector_D, vector_N, nearest_obj)
            reflected_color = raytrace(point_I, reflection_vector, scene)
            color += reflected_color * reflection_coeff
        }
        if (nearest_obj.isRefractive(point_I, vector_D))
        {
            refraction_vector = getRefractionVector(vector_D, vector_N, nearest_obj)
            refracted_color = raytrace(point_I, refraction_vector, scene)
            color += refracted_color * refraction_coeff
        }
        for each light in scene.lights
        {
            // only unblocked lights contribute; blocked lights leave a shadow
            if (!hasShadow(light, scene.objects, point_I))
                color += lightColorContribution(scene, light)
        }
    }
    return color
}
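rayIntersectsObject, getReflectionVector, and getRefractionVector are left undefined above. A minimal sketch of what they could look like, assuming spheres as the only object type and using the standard vector forms of the reflection law and Snell's law (all names and signatures here are illustrative, not taken from the pseudo code's source):

    import math

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def ray_intersects_sphere(point_P, vector_D, center, radius):
        # Substituting R(t) = point_P + t * vector_D into |X - center|^2 = radius^2
        # gives a quadratic a*t^2 + b*t + c = 0; return the smallest t >= 0, or None.
        oc = [p - c for p, c in zip(point_P, center)]
        a = dot(vector_D, vector_D)
        b = 2.0 * dot(vector_D, oc)
        c = dot(oc, oc) - radius * radius
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None                         # the ray misses the sphere
        sqrt_disc = math.sqrt(disc)
        for t in ((-b - sqrt_disc) / (2.0 * a), (-b + sqrt_disc) / (2.0 * a)):
            if t >= 0.0:
                return t                        # nearest hit in front of the origin
        return None

    def reflection_vector(vector_D, vector_N):
        # Mirror the incoming direction about the unit surface normal: R = D - 2(D.N)N.
        d_dot_n = dot(vector_D, vector_N)
        return [d - 2.0 * d_dot_n * n for d, n in zip(vector_D, vector_N)]

    def refraction_vector(vector_D, vector_N, n1, n2):
        # Bend the incoming unit direction using Snell's law (n1 sin t1 = n2 sin t2).
        # Returns None on total internal reflection.
        eta = n1 / n2
        cos_i = -dot(vector_D, vector_N)
        k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
        if k < 0.0:
            return None
        return [eta * d + (eta * cos_i - math.sqrt(k)) * n
                for d, n in zip(vector_D, vector_N)]

These sketches assume vector_D and vector_N are unit vectors and that vector_N points against the incoming ray; a full renderer would also handle other shapes and pick n1 and n2 from the materials on either side of the surface.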