Real-Time Raytracing in Computer Games

Andy Rowan-Robinson
NCCA

Abstract

Computer games generally incorporate scanline algorithms from APIs such as OpenGL for their rendering. This report presents a raytracer written for real-time use and explores whether it is viable to use such a system to produce high quality rendering within computer games.

1 Introduction

Rasterisation is currently the dominant approach for interactive 3D graphics, but its inability to directly access more than a single triangle at a time is increasingly limiting the advancement of interactive 3D graphics and content creation. Time must be spent working around its limitations in order to implement seemingly simple features such as shadows or reflections, features that ray tracing can implement naturally and efficiently [1].

This report explores current developments within the field of interactive and real-time ray tracing. It also presents a soft real-time ray tracer which is capable of adjusting image quality to maintain an average frame rate above a specified target. The report investigates the notion that such rendering systems have the potential to replace rasterisation techniques in computer games.

1.1 Uses

Real-time and interactive ray tracing is capable of opening up a whole new range of effects in computer games. Accurate, non-faked shadows, reflections and refractions can be produced naturally and efficiently, adding to the credibility of any game. Real-time systems in ray tracing can be used to ensure that the frame rate stays above a certain level independent of the power of the machine it is run on.

1.2 Results

This report introduces an accompanying soft real-time ray tracing system. The system renders simple spheres, incorporating a camera move around them, and adapts the image quality to maintain a frame rate above a specified target. Combined with research into ray tracing systems that produce interactive frame rates, the product is used as the basis of an exploration into the viability of ray tracing for computer games.

1.3 Organisation

This report is organised into seven sections. Section 2 presents the definition of a real-time system as I will be applying it. Section 3 introduces the current rasterisation method of rendering in computer games and explains its limitations. Section 4 introduces the ray tracing technique and some widely used optimisation techniques, and looks at whether these techniques are appropriate to the context we are working within. Section 5 looks at previous attempts to bring ray tracing into the interactive or real-time domain and, where appropriate, at the specific methods they employ to do so. Section 6 presents the soft real-time ray tracer that accompanies this report, detailing the techniques employed in its implementation and the overall effectiveness of the system. Finally, the report concludes by summarising the findings on whether real-time and interactive ray tracing systems are appropriate for game design.

2 Real-time definition

In computer graphics the phrase 'real-time' is often taken to mean 'fast' or 'interactive'. In the context of real-time systems this is inaccurate. Displaying objects moving across the screen at the same rate as they were intended to move in reality is favourable, but it is not the primary aim of the implementation presented in this report. This section gives an overview of real-time systems in terms of what is relevant to this system. An article compiled by Lehrbaum [2] includes the Institute of Electrical and Electronics Engineers' definition:

    a real time system is one whose correctness includes its response time as well as its functional correctness.
To a real-time system it is not enough that the answers are correct; it is also important that the answers are delivered on time. Just how important the timeliness of these answers is depends on the nature of the system. Consequently we can divide real-time systems into two main types: hard and soft.

Lehrbaum states that a successful hard real-time system has guaranteed 'worst case response times'. This makes them appropriate to situations where life could be put at risk, for example in a nuclear power plant. Hard real-time systems are therefore often built to cope with errors or faults and will adapt themselves. In the case of the system presented here, providing a guaranteed 'worst case response time' (such as 'will produce a 512 x 512 frame at least once every second') would require co-ordination with an operating system. No such co-ordination exists in the system presented.

In Lehrbaum's words a soft real-time system is one 'best described as "not hard"'. In these systems timeliness is required but is not critical. If run over a long period a soft real-time system will, on average, reach its time goal. However, it is worth noting that many systems that appear to run as soft real-time systems contain a hard real-time constraint. Lehrbaum uses the example of Voice-over-IP telephony. Here the system can cope with the odd packet not arriving on time, but if too many fail to reach their destination the communication falls apart. He states that a hard real-time constraint exists, like a floor, beneath which this seemingly soft real-time system fails. Such a system can be seen as a hard real-time system in that it is considered to fail if its worst-case timing constraints are not met.

The ray tracer presented here is modelled on a soft real-time system, and on average will meet the deadlines set, provided the deadlines are reasonable for the specific machine. There will always be a point where the system simply cannot output frames above a certain rate, and the user will need to be aware of this when setting the target. Should the target be beyond the capability of the machine, the system will simply output frames at a rate as close to the target as it can manage.

With this project it is important to distinguish between real-time as described here and the common interpretation of it in computer graphics as meaning 'fast' or 25 f.p.s. With this product, deadlines are set that the system aims to meet. These deadlines may well not make the frame rate appear fast or display images at 25 f.p.s. The deadline takes the form of an arbitrary target frame rate that the system attempts to maintain by adjusting the image quality of the output. This simulates a system that might be included in a computer game to ensure the frame rate stays above a certain value. Where this report refers to systems that claim to be 'real-time' but are actually non-real-time systems with frame rates typically above 5 frames per second, the report terms them 'interactive' or working at 'interactive frame rates'.

3 Current methods used in Games

As it stands today most games use either OpenGL or Direct3D to implement their rendering. Both incorporate a traditional graphics rendering pipeline with rasterisation techniques. This pipeline has been present since the earliest days of computer graphics, and has been enhanced and extended as hardware became more capable [3].
3.1 Rasterisation

The following summary of a rasterisation-based approach is quoted from a paper on OpenRT [4]:

    The majority of today's interactive rendering systems builds on rasterization-based techniques. These approaches process polygons independently of each other by projecting them onto the image plane. Once a triangle is issued, it is immediately transformed and lighted, clipped, rasterized, and z-buffered. After all these operations have been performed, the rasterizer can move to the next triangle.

The process benefits from being performed largely in the hardware pipeline and so is very fast. However, by operating on a stream of independent triangles it cannot efficiently and accurately render global effects such as shadows, reflections, and indirect illumination [4]. Each triangle is processed on its own, with only local information available when calculating shading. The shading method is therefore not aware of other triangles blocking the light source or appearing as reflections in the surface. Producing such effects requires additional rendering passes and manual tuning. For example, reflections need to be created by rendering out an image from the point of view of the object and then texturing it on.

Because ray tracing models the way light actually works, global effects occur naturally in the process. By shooting additional rays for shadows, refractions and reflections we obtain them on demand. Incorporating this into computer games has the potential for more realistic images and captivating titles. Figure 1 shows images from the same game environment rendered with rasterisation and ray tracing techniques. Notice the difference in the accuracy of the shadows produced.

Figure 1: An example of the difference between rasterisation and ray tracing techniques. The top image is from the rasterised Quake 3. The bottom image is from the OpenRT (discussed in section 5) implementation of the same game. Notice the difference in the accuracy of the shadows.

4 Introduction To Ray tracing

This section introduces the ray tracing algorithm and explores some possible ways of optimising it for interactive frame rates.

4.1 The Ray tracing algorithm

Ray tracing works by sending out rays from an eye position, through sample points on the viewplane (that represent pixels) and into the scene. The paths of these rays are checked against every object and the closest intersection is found. From the point of closest intersection more rays are created towards each light source. From these rays it is possible to find out whether the object is in shadow or illuminated, and to what extent. We can also create rays in the direction of reflection from the object and in the direction of refraction. From all these rays we can work out the contribution each makes to the shading, and so the colour of the pixel for that particular point. The algorithm is recursive, calling itself when producing reflected and refracted rays. Figure 2 shows the paths of a selection of rays fired, and the additional rays that are spawned on intersections.

Figure 2: Rays created by a ray tracer. The yellow rays are the initial rays spawned from the eye. These pass through the viewplane (the vertical black line) and into the scene. On intersection reflected rays are spawned (purple) and, if the surface attributes dictate, refracted rays are spawned (green). Rays are also launched towards light sources (blue) and used to determine the light the object receives at that point. If an object is occluding the light source this is taken into account in the calculations.

This ray tracing algorithm can be understood in terms of the pseudo code [5] given in Appendix A, which details a 'brute force' ray tracer. A brute force ray tracer is simple to understand and relatively simple to implement, but is incredibly inefficient.
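The recursive structure can also be sketched in C++. The following is a minimal illustration only, using hypothetical Ray, Hit, Scene and Colour types and helper functions; Appendix A gives the fuller pseudo code it is based on.

    // A minimal sketch of the recursive tracing step. Ray, Hit, Scene,
    // Colour and the helpers are hypothetical names; the logic follows
    // the brute force pseudo code in Appendix A.
    Colour trace(const Ray& ray, const Scene& scene, int depth)
    {
        Hit hit;
        if (!scene.closestIntersection(ray, hit))   // test every object
            return scene.backgroundColour();

        Colour result = scene.directLighting(hit);  // shadow rays to each light

        if (depth < MAX_DEPTH)                      // limit the recursion
        {
            result += hit.reflectance * trace(reflect(ray, hit), scene, depth + 1);
            if (hit.transmissive)
                result += hit.transmittance * trace(refract(ray, hit), scene, depth + 1);
        }
        return result;
    }

    // Primary rays: one per sample point on the viewplane.
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            image.set(x, y, trace(primaryRay(eye, x, y), scene, 0));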
4.2 Optimising the Raytracing Algorithm

There are some major modifications that can be made to the brute force algorithm to optimise it. These additional algorithms tend to work very well with scenes containing large numbers of primitives, but for simpler scenes the overhead does not always yield a worthwhile increase. The following sections are a summary of these techniques, adapted from a series of articles compiled by Jacco Bikker [6].

4.2.1 Hierarchical Bound Volumes

With a brute force ray tracer each ray is checked against every primitive. This is very expensive and often unnecessary, and means that as the number of primitives increases the rendering speed decreases linearly. By grouping objects and encapsulating each group in a larger object, known as a 'bounding volume', each ray can be tested against this larger object first. Only if an intersection is obtained is there a need to test against the individual primitives within it. This allows the system to test against a smaller number of objects initially. The idea can be extended by grouping multiple bounding volumes in an even larger volume, decreasing the initial testing even further. Figure 3 shows this system applied to a group of primitives.

Figure 3: Hierarchical Bound Volumes. The leftmost image shows the primitives without any bounding volumes. The central image shows how these can be grouped under bounding volumes. The rightmost image shows how bounding volumes can themselves be grouped to reduce the number of initial intersection tests further.

This optimisation is a simple technique that could well be appropriate to real-time and interactive ray tracing, as the overhead it entails is not high. However, in this project the number of primitives has been kept deliberately small to yield quick results, and implementing this with such a small number of primitives would not yield enough of a speed increase to make it worthwhile.
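To illustrate the idea, the following is a minimal sketch of a bounding-sphere pre-test, using the same hypothetical types as the earlier sketch. The test is deliberately conservative: it may report a possible hit for a ray pointing away from the group, which costs only a wasted descent.

    // Sketch: test a cheap bounding sphere before descending to children.
    bool BoundingSphere::mightHit(const Ray& r) const
    {
        Vec3 oc = centre - r.origin;
        float t  = dot(oc, r.direction);     // closest approach along the ray
        float d2 = dot(oc, oc) - t * t;      // squared distance at that point
        return d2 <= radius * radius;        // conservative, line-based test
    }

    bool Group::intersect(const Ray& r, Hit& hit) const
    {
        if (!bound.mightHit(r))
            return false;                    // reject every child with one test
        bool found = false;
        for (const Primitive* child : children)
            found |= child->intersect(r, hit);   // hit keeps the closest so far
        return found;
    }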
4.2.2 Using hardware acceleration

By drawing the scene with a 3D accelerator, using a unique colour for each primitive, it is possible to omit primary rays completely. For each pixel the ray tracer simply reads the colour value in the buffer to identify which primitive is closest, taking advantage of the hardware's efficient pipeline for hidden surface removal. The distance can then be obtained from the z-buffer in order to spawn new rays. This process can also be combined with bounding volumes for further optimisation.

The method can only be used on primary rays, but in situations where these make up a large proportion of the rays generated it yields very good results. As real-time and interactive ray tracers tend to limit the number of bounced secondary rays for performance, this process would lend itself well. The system would, however, require a version of each primitive in the hardware renderer's representation as well as in the mathematical space used by the ray tracer. In the case of the project presented with this report it would require additional careful implementation, as hardware rendering is already utilised to output the final image (this is detailed in section 6).
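A minimal sketch of the read-back step follows, assuming the scene has just been drawn with lighting disabled and each primitive flat-coloured with its index packed into the RGB channels. The shadeFromHit helper and the primitives array are hypothetical names, and the per-pixel reads are for clarity only; in practice the whole buffer would be read back in one call.

    // Sketch: identify the closest primitive for pixel (x, y) from an
    // 'item buffer' and recover the hit point from the z-buffer.
    void shadePixelFromItemBuffer(int x, int y)
    {
        GLdouble modelview[16], projection[16];
        GLint viewport[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
        glGetDoublev(GL_PROJECTION_MATRIX, projection);
        glGetIntegerv(GL_VIEWPORT, viewport);

        unsigned char rgb[3];
        float depth;
        glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, rgb);
        glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

        int id = rgb[0] | (rgb[1] << 8) | (rgb[2] << 16);  // unpack the index
        if (id == BACKGROUND_ID)
            return;                                        // no primitive here

        // Un-project the window-space depth to recover the hit point, then
        // spawn shadow and reflection rays from there directly.
        GLdouble hx, hy, hz;
        gluUnProject(x, y, depth, modelview, projection, viewport,
                     &hx, &hy, &hz);
        shadeFromHit(primitives[id], Vec3(hx, hy, hz));    // hypothetical helper
    }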
4.2.3 Simple Spatial Subdivisions

In this system the scene is subdivided with a regular 3D grid. Rays shot from the camera step through the volume voxel by voxel until a voxel that contains a primitive is found. In empty voxels no intersection tests are performed, although simple computations are done to move from voxel to voxel. The technique is visualised in Figure 4.

Figure 4: Spatial Subdivisions. Each ray is moved through each grid square and intersection tests are performed only if the square contains any objects. This ensures that primitives not near the ray's path are excluded from any intersection tests.

Only if a ray reaches a voxel that contains a primitive are intersection tests performed. This means that the ray is only checked for intersection with objects that are close to its path. Setting up this system is costly, and it does not guarantee that the ray will not come across a voxel containing a large number of primitives that must all be intersection tested (the 'teapot in a football field' scenario). Although it provides good speed increases as the number of primitives grows, at the number of primitives real-time or interactive ray tracers must limit themselves to today it is not worth the set-up and implementation overhead. The same costs are apparent in developments of the spatial subdivision algorithm, such as kd-trees or octrees, which partition space in more advanced ways and isolate empty areas.

4.3 Real-time Context

The techniques described above, in general, increase the speed at which we get results for each ray we fire. Although we want to calculate the shade for each pixel as quickly as possible from when the ray was fired, this is not the primary concern when designing a ray tracer in the real-time context. With real-time ray tracers what matters is how the system adapts to the time each ray takes, whether it is receiving results extremely fast or excessively slowly. The system presented here must adjust base factors in the ray tracing process, such as the number of rays fired, and this is independent of how fast the ray tracing system is at calculating results for each ray.

5 Previous Work On Real-time Raytracing

During research no reference was found to any systems that ray trace to the specification of a real-time system as defined above. However, there are a number of ray tracers that display at 'interactive' frame rates, of which two are investigated below. The research into these systems is aimed at finding the type of output that can be expected from such a system and the techniques used to obtain it. Also investigated is OpenRT, an OpenGL-style API for ray tracing, from which insight into the future of ray tracing can be gained.

5.1 Trezebees

This system was produced by Nils Desle [7] (available from http://home.tiscali.be/slinline/trezebees.html). At the heart of the system is a standard ray tracer, with a few minor optimisations, combined with a system that speeds up the drawing of the image using hardware acceleration.

The Trezebees system uses a process known as 'adaptive subsampling' to provide frame rates at an interactive level. The process relies on the fact that a large proportion of rendered pixels share the same colour, or a slight variation of it, with their neighbouring pixels. Instead of rendering pixels individually, it samples a group of pixels at the corners of a rectangle and analyses the colour difference between them. If there is only a subtle colour difference the system interpolates between them using an OpenGL polygon rendered at that point in the view. If the colours differ widely then the area is divided up and the process is repeated until the sample area is only 1 pixel by 1 pixel. In short, the system detects where more detail is required and ray traces further; if not, it simply fills the area in as a shortcut. Figure 5 shows an image produced with this method, and Figure 6 shows the areas that were actually ray traced (a sketch of the recursion is given at the end of this section).

Figure 5: An image produced by the Trezebees system using adaptive subsampling.

Figure 6: The actual samples that were ray traced to produce Figure 5.

By incorporating OpenGL the system takes advantage of the efficient rasterisation pipeline present and can also drastically reduce the number of rays fired. This does entail a degradation in image quality, and objects smaller in view than the sample size of the preliminary sample grid can easily disappear between frames.

The Trezebees system is also interesting as it includes some small, 'common sense' streamlining alterations. These are easy to include in a project aiming for frequent updates, without incurring large overheads or drastic changes to the structure:

- Precalculation. Any values that remain constant for the duration of the frame, or of the system, are precalculated to avoid unnecessary work within each individual ray computation.
- Shadow casting object caching. As shadows may well be cast not only on one pixel but on its adjacent pixels, it makes sense to hold the last shadow casting object and test it first, before checking the other objects.
- Inlining often used methods. This can make code less readable but reduces the overhead of function calls.
- Excluding objects that never cast shadows from the shadow check.
- Bypassing virtual functions where possible to limit overhead.

Trezebees updates relatively smoothly, especially at low resolutions, and is a good example of what can be done with ray tracers that display at interactive frame rates. Though not at the level of complex modern computer games, it seems feasible that it could be extended to incorporate a simple, low resolution, classic arcade style game. Although the system does not adapt to having processing power taken away from it, other than by a reduction in frame rate, it provides a base on which a soft real-time system built around a target frame rate could be constructed.
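The adaptive subsampling recursion described above can be sketched as follows. This is a minimal illustration under assumed helper names (tracePixel, maxColourDifference, drawInterpolatedQuad), not Desle's actual code; in a real implementation corner samples shared between adjacent rectangles would be cached rather than retraced.

    // Sketch of adaptive subsampling: trace the four corners of a block,
    // interpolate across it if they are similar, otherwise subdivide.
    void sampleBlock(int x0, int y0, int x1, int y1)
    {
        Colour c00 = tracePixel(x0, y0), c10 = tracePixel(x1, y0);
        Colour c01 = tracePixel(x0, y1), c11 = tracePixel(x1, y1);

        bool similar = maxColourDifference(c00, c10, c01, c11) < THRESHOLD;
        if (similar || (x1 - x0 <= 1 && y1 - y0 <= 1))
        {
            // One Gouraud-shaded OpenGL quad fills the interior, letting
            // the rasteriser interpolate between the corner colours.
            drawInterpolatedQuad(x0, y0, x1, y1, c00, c10, c01, c11);
        }
        else
        {
            int mx = (x0 + x1) / 2, my = (y0 + y1) / 2;
            sampleBlock(x0, y0, mx, my);  sampleBlock(mx, y0, x1, my);
            sampleBlock(x0, my, mx, y1);  sampleBlock(mx, my, x1, y1);
        }
    }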
5.2 Nik Chapman's 'RealTime' Raytracer

Chapman's system [8] (available from http://homepages.paradise.net.nz/nickamy/raytracer/raytracer.htm) is another ray tracer providing frame rates at an interactive level. It is slightly more advanced than Trezebees in that it is genuinely user interactive, allowing the user to move around the objects and increase the detail at run time. The system uses a similar process of 'adaptive subsampling' and so also benefits from hardware acceleration. However, instead of subdividing where there is a large colour difference, the system subdivides where the rays at each corner hit a different sequence of objects, or where the presence of shadow changes. This avoids comparing colour values with three components. Figure 7 shows an image produced with this system and the actual samples ray traced to produce it. The system also includes some features that demonstrate the scope of 'interactive' ray tracing:

- Metaballs
- Constructive solid geometry
- Water effects using an animated normal map over a plane

Figure 7: The top image was produced by Nik Chapman's ray tracer utilising adaptive subsampling. The lower image shows the samples ray traced to produce it.

This system is an example of what can be achieved with interactive ray tracing, and there seems no reason why it could not be adjusted to cope with real-time deadlines in terms of intervals between frames, or even extended to a classic arcade style game.

5.3 OpenRT

Where the previous systems researched could theoretically support simple games, OpenRT is an API that can support games of modern complexity. OpenRT is a real-time ray tracing engine with a corresponding programming API developed by the Computer Graphics Lab of Saarland University, Germany. From the university's website [9]:

    The goal of the OpenRT Real-Time Ray-Tracing Project is to develop ray tracing to the point where it offers an alternative to the current rasterization based approach for interactive 3D graphics. Therefore the project consists of several parts: a highly optimized ray-tracing core, the OpenRT-API which is similar to OpenGL and many applications ranging from dynamically animated massive models and global illumination, via high quality prototype visualization to computer games.

According to its developers, ray tracing has recently been developed to the point where it is becoming a possible alternative to the rasterisation approach in interactive 3D graphics. They point out that with the availability of a first prototype graphics board purely based on ray tracing, 'we have all the ingredients for a new generation of 3D graphics technology that could have significant consequences for computer gaming' [10].

Using OpenRT, a group of researchers at the university have developed two games. One is an adaptation of an existing game, Quake 3, to the OpenRT system. The other, Oasen, is a game designed from scratch to take advantage of ray tracing techniques. In reference to Quake 3, the developers stated that they 'were able to support all the traditional effects of Quake 3 while most effects were significantly simpler to implement', and went on to state that, looking at newer engines such as Unreal3, they saw no new features that could not easily be supported by ray tracing [10]. In the second, purpose-built game, the developers were able to reuse ray tracing methods in the physics engine, in acoustics and for collision detection. Used like a radar system, rays determine the distance to nearby objects, allowing the developer to avoid building special algorithms and data structures for such tasks [10].

Although both games were run on a powerful distributed system, the developers are confident that, when used in conjunction with specific hardware, ray tracing is set to be a viable alternative to rasterisation for computer games. Schmittler's paper on the SaarCOR prototype interactive ray tracing hardware demonstrates that 'ray tracing is at least as well suited for hardware implementation as the ubiquitous rasterization approach', achieving interactive performance of 20 to 60 frames per second on a wide variety of scenes [1]. With its work on producing fully functional ray tracing computer games, OpenRT has proved that it is possible to produce games using ray tracing and that there are many benefits in doing so. Some screen shots of the produced games are shown in Figure 8.

Figure 8: Screenshots from ray tracing games developed with OpenRT. The top is taken from a port of Quake 3 to OpenRT, and the lower two are taken from Oasen, a game purposely built to utilise ray tracing.
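The 'radar' use of rays mentioned above can be illustrated with a small sketch: before moving an object, a probe ray is fired through the same intersection code the renderer already uses, and the hit distance is compared with the intended move. This is a minimal illustration using the hypothetical types of the earlier sketches, not OpenRT's actual API.

    // Sketch: reusing the ray tracer for a simple collision query.
    bool canMove(const Vec3& position, const Vec3& velocity, float dt,
                 const Scene& scene)
    {
        float step = length(velocity) * dt;         // distance this frame
        Ray probe(position, normalise(velocity));   // 'radar' ray ahead
        Hit hit;
        if (scene.closestIntersection(probe, hit) && hit.distance < step)
            return false;                           // would pass through geometry
        return true;
    }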
6 Presented Soft Real-time Ray tracer

The system presented here is basically a brute force ray tracer with adaptive subsampling. On top of this is a system that adapts the quality of the image produced to achieve soft real-time deadlines set as a target frame rate.

6.1 Features

6.1.1 The Ray tracer

The ray tracer has the typical features of a brute force ray tracer. It implements Phong shading, shadows, multiple coloured point lights, reflections, materials per primitive and camera positioning. The building of the ray tracer section of the project was aided somewhat by a collection of articles written by Jacco Bikker [6]. In order to increase the speed of development I excluded refractions from the system, as my primary interest was shadows and reflections. Refractions can easily be added in the future if desired. In order to output the images the system uses OpenGL, plotting the colour values found by the ray tracer on an orthographic view. OpenGL is further used in the adaptive subsampling method to interpolate quickly between points where detail isn't required.

6.1.2 Adaptive Subsampling

This method is optimised slightly. Firstly, the system ray traces the scene as a grid of sample points, which is stored in a multidimensional array. The number of points in the grid initially traced depends on the level of image quality: a better quality image is obtained by tracing a tighter grid. Stored with the colour values for each sample is a number. This represents the identity of the object hit, with a value of 100 added if the sample references a shadow, and 1000 added if the sample references a successful reflection of another object. As the number of primitives is limited to 100, including light sources, a sample cannot receive the same value from two different combinations of object, shadow and reflection (see the sketch at the end of this section).

This value is then used, if the system determines there is time to allow it, to find areas where the grid square needs to be divided up and more detail ray traced. If the sample at each corner holds the same value then the system assumes that it is referencing the same object, with a similar level of shadow and reflection, and it can simply interpolate the area using OpenGL. If one or more of the samples references a different object then it knows there is an edge within the square and more detail is required. The recursive 'subdivide' function is then called with each of the four corners and a depth value. This function ray traces five additional points in order to divide the square into a further four. To avoid ray tracing the same point twice, which is likely as adjacent grid squares share corners, the values are added into the initial grid of samples. Before a ray tracing process is started by the subdivide function it checks whether there is already a value in the grid; if so, it simply uses the value already there. At the end of each frame the grid is refreshed.

After dividing the area, creating four smaller squares, the subdivide function analyses the corners of each square. If the system determines there is enough time, and a further subdivide is required, it recursively calls itself to subdivide further. An advantage of this system is that detail is generally added around the edges of objects, shadows and reflections, and it is these edges that the eye is especially sensitive to. By adding detail here we effectively create an image for the viewer that appears more detailed than it actually is. Figure 9 shows examples of the system at work.

Figure 9: Adaptive Subsampling. The top image shows a high quality image produced at a low frame rate. The central image was produced with an increased target frame rate; notice the quality decrease. The bottom image shows the areas where subdivision takes place. Shortly after this image was grabbed the technique was expanded to subdivide the edges of reflections.
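The tagging scheme can be sketched as follows; this is a minimal illustration with hypothetical field names, not the project's actual code.

    // Sketch of the sample tag: object IDs stay below 100, so adding 100
    // for shadow and 1000 for a reflection hit keeps every combination of
    // object, shadow and reflection state unique.
    int sampleTag(const Hit& hit)
    {
        int tag = hit.objectId;              // 0..99, light sources included
        if (hit.inShadow)      tag += 100;
        if (hit.sawReflection) tag += 1000;
        return tag;
    }

    // Four matching corner tags mean one object with a consistent shadow
    // and reflection state, so the square can be interpolated directly.
    bool needsSubdivision(int t00, int t10, int t01, int t11)
    {
        return !(t00 == t10 && t00 == t01 && t00 == t11);
    }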
6.1.3 Real-time system

The system works on the assumption that the next frame will be of a similar level of complexity to the last one. After every frame the system decides the image quality of the next by looking at the time it took to render the last, adjusting parameters that speed it up or slow it down. There is a target frame rate used for comparisons, and a degree of tolerance is built in above the target frame rate to prevent constant switching around it.

The attributes the system adjusts are the size of the grid initially ray traced and the level of subdivision depth allowed. The system is also capable of turning off shadows, reflections, specular shading and diffuse shading (leaving flat colours), but testing showed that the effect of making such changes was very slight compared to the effect of adjusting the grid size and subdivision level. Adjusting these features was unnecessary and distracting, as aspects would flicker on and off, spoiling the continuity of the process.

Adjusting the grid size and subdivision levels makes dramatic differences to image quality and speed. However, these large differences can cause obvious 'steps' in the states of the system. For example, setting a target frame rate of 10 f.p.s. at 512 x 512 could give low quality images being output at around 12 f.p.s., as this could be the next adjustment of the grid size above the 10 f.p.s. target. This means that power that could be spent on improving image quality is being wasted on a frame rate above what we want. The loss of accuracy that causes this becomes more apparent the higher the target frame rate: at higher frame rates, and lower levels of detail, accuracy in changes is difficult to implement and the system has less control.

In attempts to make the transitions between 'steps' smaller and more controllable, tests were performed with disabling and re-enabling features, such as shadows, between each change in grid size or subdivision level. This however produced awful, choppy sequences of images that changed constantly and irritatingly while not solving the problem satisfactorily. Testing was also carried out using colour values to subsample, rather than values based on object IDs, in order to provide an additional value that the system could control. In practice this gave a large loss in general image quality while failing to provide a value that could give accurate control at higher frame rates.

When tests were run in which two instances of the system were launched, image quality noted, and one instance closed while paying attention to the image quality change in the other, results were successful, though they suffered from the 'stepping' described above. Tests with readouts showed that at the accurate, high quality levels the average frame rate did remain constant, although at lower quality levels the frame rate did occasionally drop, though often only to a level closer to the target.

There are obvious limits to the system, however. Setting the target frame rate too high will make it unattainable and the system will be unable to maintain an accurate average frame rate. Tests on a machine at 512 x 512 showed that the system could maintain an average of around 25 f.p.s., though outputting very bad image quality. In reality there will always be a point where the system breaks down, and this is why the user needs to specify a 'reasonable' target deadline. Overall, at high image quality levels the system is accurate. The system can be classed as successful in that it can maintain a state above a target frame rate in the manner of a soft real-time system, provided the user is sensible with the target they define. Additional work needs to be channelled into providing more control within the system; by making the 'steps' smaller and more accurate the system will be more efficient with its resources. A sketch of the per-frame adjustment follows.
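This is a minimal illustration of the feedback step with hypothetical variable names, not the project's actual code; the real system also weighs whether a further subdivide fits in the remaining frame budget.

    // Sketch of the per-frame quality adjustment. The tolerance band above
    // the target stops the system flicking between two states when it sits
    // right on the boundary.
    void adjustQuality(float lastFrameSeconds)
    {
        float fps = 1.0f / lastFrameSeconds;
        if (fps < targetFps)
        {
            // Too slow: coarsen the initial sample grid first, then reduce
            // the subdivision depth allowed.
            if (gridStep < MAX_GRID_STEP)   ++gridStep;
            else if (subdivDepth > 0)       --subdivDepth;
        }
        else if (fps > targetFps + tolerance)
        {
            // Comfortably above target: spend the spare time on quality.
            if (subdivDepth < MAX_DEPTH)    ++subdivDepth;
            else if (gridStep > 1)          --gridStep;
        }
    }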
6.1.4 Animation System

The system also includes a simple animation system. Objects can be translated, and the camera rotates around a position, adjusting its height according to a sine curve. This is implemented with calls to an 'update scene' method which is passed the time elapsed since the last update; the method then works out the distance to move objects according to this time. Figure 10 shows a selection of grabs of the system at work.

Figure 10: A selection of grabs showing the animation system at work.

6.2 Further work

The ray tracer would benefit from the ability to make more accurate adjustments to the render-time parameters and from a higher level of optimisation, perhaps including hardware rendering to omit primary rays. This could possibly bring it up to a level where it could hold a simple classic arcade game.

6.3 Summary of the System Presented

Although the system is not as 'fast' as the systems researched, it is also largely un-optimised and has the potential to reach the speed of the ray tracers mentioned above. It does, however, stand out in that it is the only system that considers real-time deadlines and adapts itself to them. This is an area that would be of use in computer game development, and it has been shown to work to a reasonable, though improvable, extent.

7 Conclusions

It has been shown through the work on OpenRT that interactive ray tracing is possible, and further investigations have shown the feasibility of efficient hardware support for such systems. Investigations with OpenRT have also shown that there are benefits to be gained within the game engine by incorporating ray tracing. It is apparent that ray tracing has the potential to become an alternative to rasterisation, although this is dependent on hardware advances. Trezebees and Nik Chapman's systems are impressive but are not comparable to the interactive graphics that a hardware rasterisation pipeline currently provides. If an equivalent hardware system were widely available, such as the prototyped SaarCOR, then we would see significant advances in what could be achieved with interactive ray tracing.
References

[1] SCHMITTLER, J., WOOP, S., WAGNER, D., PAUL, W. J., SLUSALLEK, P., 2004. Realtime Ray Tracing of Dynamic Scenes on an FPGA Chip [online]. Saarbrucken: Saarland University. Available from: http://graphics.cs.uni-sb.de/~jofis/SaarCOR/DynRT/Schmittler_et_alRealtime_Ray_Tracing_of_Dynamic_Scenes_on_an_FPGA_Chip.pdf [Accessed 2 March 2005].

[2] LEHRBAUM, R., ed., 2000. Real-time Linux – what it is, why do you want it, how do you do it? [online]. NY: Ziff Davis Media. Available from: http://www.linuxdevices.com/articles/AT9837719278.html [Accessed 1 March 2005].

[3] THOMSON, R., 2001. Direct3d vs. OpenGL: Comparison [online]. Available from: http://www.xmission.com/~legalize/d3d-vs-opengl.html [Accessed 1 March 2005].

[4] DIETRICH, A., WALD, I., BENTHIN, C., SLUSALLEK, P., 2003. The OpenRT Application Programming Interface – Towards A Common API for Interactive Ray Tracing [online]. Saarbrucken: Saarland University. Available from: http://graphics.cs.uni-sb.de/Publications/2003/TheOpenRTAPI_OpenSG2003.pdf [Accessed 2 March 2005].

[5] CHAUDHURI, S., 2002. Raytracing Algorithm [online]. Fuzzyphoton.tripod.com. Available from: http://fuzzyphoton.tripod.com/rtalgo.htm [Accessed 1 March 2005].

[6] BIKKER, J., 2004. flipcode – Raytracing Topics & Techniques [online]. Flipcode.com, inc. Available from: http://www.flipcode.com/articles/article_raytrace01.shtml [Accessed 1 March 2005].

[7] DESLE, N., 2004. SlinLine Productions – Trezebees – The Engine [online]. Available from: http://home.tiscali.be/slinline/chapter4.html [Accessed 1 March 2005].

[8] CHAPMAN, N., 2004. Real Time Raytracer [online]. Available from: http://homepages.paradise.net.nz/nickamy/raytracer/raytracer.htm [Accessed 2 March 2005].

[9] WALD, I., 2004. OpenRT Realtime Ray Tracing: Project Home [online]. Saarbrucken: Saarland University. Available from: http://graphics.cs.uni-sb.de/RTRT/ [Accessed 2 March 2005].

[10] SCHMITTLER, J., et al., 2004. Realtime Ray Tracing for Current and Future Games [online]. Saarbrucken: Saarland University. Available from: http://graphics.cs.uni-sb.de/~jofis/Arbeit/GI04-MWZC-RayTracing.pdf [Accessed 2 March 2005].

Appendix A: Brute force ray tracer pseudo code

// Following pseudo code based on the example at
// http://fuzzyphoton.tripod.com/rtalgo.htm (Chaudhuri 2002)

For every line of the image,
    For every pixel of the line,
        Generate a ray from the eye position through the point on the
        viewplane corresponding to the line and pixel on the line.
        Call the Raytrace procedure with this ray.
        Colour the pixel the value returned from the Raytrace procedure.
    Next pixel on line.
Next line.

Raytrace procedure:
    Set distance to a large value for the maximum distance traced.
    Set object pointer to NULL.
    For every object in the scene,
        Check for intersections with the ray.
        If an intersection occurs,
            If the distance is less than the current distance,
                Update current distance to this distance.
                Set object pointer to this object.
            End if.
        End if.
    Next object.
    If object pointer != NULL,
        Colour = black.
        For each light source in the scene,
            For each object in the scene,
                If the object is between the point of intersection and the
                light source, calculate the intensity of received light from
                the transmittivity of the blocking object (e.g. if the object
                is solid, no light will be received).
            Next object.
            Calculate the perceived colour of the object at this pixel
            according to the intensity received from the current light source.
            Add this value to Colour.
        Next light source.
        If depth < a limit value,
            Generate two new rays, for reflection and refraction, starting at
            the point of intersection with this object.
            Call Raytrace with the reflected ray and depth + 1.
            Add the value returned to Colour.
            Call Raytrace with the refracted ray and depth + 1.
            Add the value returned to Colour.
        End if.
    Else
        Set Colour to the background colour (no object was intersected).
    End if.
    Return Colour.
End of Raytrace procedure.

Appendix B: Visual Development of the ray tracer presented here

[Appendix B consists of four pages of development images, summarised by the key below.]

Key:

Page 12: Preliminaries. Getting intersections working. Using simple shading and point light sources. Implementing material class.
Page 13: Implementing specular highlights, reflections and shadows. Infinite ground plane. The shadow effects appear odd due to the light intensity remaining constant no matter how far away you move from it, and the infinite ground plane providing little information about distance.

Page 14: Continued experiments.

Page 15: Implementation of adaptive subsampling, including some of the bugs that crept in.