Ray Tracing and Rasterization
The architecture of existing interactive 3D GPUs is based exclusively on ray casting or rasterization, which in turn presupposes a triangulated approximation of surfaces.
There are well-known drawbacks and limitations to this approach. Here are just a few:
- a formidable load on the GPU's memory resulting from frequent frame-buffer overwrites and intensive use of Z- and stencil buffers;
- distortion of analytically defined surfaces due to triangulation;
- the need for illumination interpolation (e.g. Gouraud or Phong shading) to mask triangulation artifacts by smoothing angular surfaces and approximating realistic shading;
- inability to realistically reproduce physically correct shading, reflection, and refraction;
- difficulties in synthesizing complex objects such as fog, clouds, smoke, and particle objects (e.g. dust, snow, rain), and especially their dynamics;
- limitations in volume rendering.
The quality of the synthesized image is improved by increasing the number of triangles that comprise it. As a result, the computational complexity grows to a point where interactivity at a normal frame rate is no longer possible.
The drawbacks and limitations mentioned above are resolved by ray tracing with an extended set of graphics primitives. Some of its most notable features are described below:
- synthesis of any analytic surface without prior triangulation, which is less demanding in terms of computational resources (a sketch of this follows the list);
- an expanded set of graphics primitives that allows construction of complex objects (e.g. terrain, water surfaces, clouds). The additional graphics primitives for ray tracing include planes (polygons) as well as higher-order surfaces;
- simplified shading and texture computations, which reduce the demand for processor throughput;
- the elimination of triangulation, which increases the effective throughput of the memory-GPU bus.
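As an illustration of intersecting a ray with an analytic surface directly, here is a minimal sketch for a sphere. The structure and names are our own illustration, not the RT2 design: the hit point is exact up to floating-point precision, and no triangulation is involved.

```cpp
#include <cmath>
#include <optional>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

struct Ray { Vec3 origin, dir; };            // dir assumed normalized
struct Sphere { Vec3 center; double radius; };

// Distance t along the ray to the nearest intersection, if any.
// Solving |o + t*d - c|^2 = r^2 reduces to a single quadratic equation,
// so the cost is a fixed, small number of arithmetic operations per ray.
std::optional<double> intersect(const Ray& ray, const Sphere& s) {
    Vec3 oc = ray.origin - s.center;
    double b = oc.dot(ray.dir);              // half of the usual 2b term
    double c = oc.dot(oc) - s.radius * s.radius;
    double disc = b * b - c;
    if (disc < 0.0) return std::nullopt;     // ray misses the sphere
    double t = -b - std::sqrt(disc);         // nearest root first
    if (t < 0.0) t = -b + std::sqrt(disc);   // origin inside the sphere
    return (t >= 0.0) ? std::optional<double>(t) : std::nullopt;
}
```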
Presently, researchers working on 3D graphics are convinced of the validity of this method. Real-time ray tracing is actively pursued by Philipp Slusallek, Jörg Schmittler, Andreas Dietrich, et al., at Saarland University, together with Ingo Wald at the Max Planck Institute for Computer Science, Saarbrücken, Germany, who have developed the SaarCOR architecture. Timothy J. Purcell et al., at Stanford University, have worked on implementing ray tracing on existing GPUs. Daniel Hall et al., at ART VPS Ltd., have developed the PURE and RenderDrive ray tracing hardware systems. Despite all these contributions, real-time ray tracing mathematical models, algorithms, and GPU architectures remain an active area of development.
Our Approach
Our approach involves a hardware implementation of ray tracing that supports interaction with the synthesized scene.
The main objective of this approach is to expand the capabilities of the graphics processor. Unlike existing GPUs, ours utilizes a wider range of graphics primitives to facilitate this expansion, rendering both triangulated and analytic surfaces. The design is backward-compatible with all existing 3D graphics. Moreover, our GPU supports volume rendering.
To build a GPU with such capabilities we had to redesign the algorithms that underlie standard ray tracing, including surface-to-ray intersection, surface transformation, shading and lighting, and texture mapping. The implementation of these algorithms is highly optimized and allows us to reach high levels of interactive realism. Beyond the physically valid processing of light rays inherent to ray tracing, another level of realism is added by the possibility of computing physically correct reflections, refractions, and diffusions on analytic surfaces as well as in participating media.
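As a concrete illustration, the sketch below shows the standard reflection and refraction direction formulas used in Whitted-style ray tracing, which this kind of processing builds on. The naming is ours; this is not the RT2 implementation.

```cpp
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };
static Vec3 scale(const Vec3& v, double s) { return {v.x * s, v.y * s, v.z * s}; }
static Vec3 add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirror reflection of direction d about the surface normal n
// (both vectors normalized).
Vec3 reflect(const Vec3& d, const Vec3& n) {
    return add(d, scale(n, -2.0 * dot(d, n)));
}

// Refraction by Snell's law; eta = n_incident / n_transmitted.
// Returns nullopt on total internal reflection.
std::optional<Vec3> refract(const Vec3& d, const Vec3& n, double eta) {
    double cosi = -dot(d, n);                       // n faces the incoming ray
    double k = 1.0 - eta * eta * (1.0 - cosi * cosi);
    if (k < 0.0) return std::nullopt;               // total internal reflection
    return add(scale(d, eta), scale(n, eta * cosi - std::sqrt(k)));
}
```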
Regarding processing speed, we note that the use of analytic surfaces simplifies the system's implementation and considerably reduces the GPU's memory requirements. The precision of geometric transformations, and consequently of visual detail, is determined by the angular resolution. Note that this parameter can naturally be used to relate the system's precision to that of the human visual system.
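A back-of-the-envelope check of this relation, with illustrative numbers of our own rather than figures from the paper (human visual acuity is roughly one arcminute):

```cpp
#include <cstdio>

int main() {
    const double fov_deg = 60.0;    // assumed horizontal field of view
    const double pixels  = 1920.0;  // assumed horizontal resolution
    double arcmin_per_pixel = fov_deg * 60.0 / pixels;
    std::printf("angular resolution: %.2f arcmin/pixel\n", arcmin_per_pixel);
    // ~1.88 arcmin/pixel here; matching ~1 arcmin acuity would need
    // roughly twice the horizontal resolution at this field of view.
    return 0;
}
```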
One of the major components of our system computes the intersection point of a light ray and a surface. This subsystem is easily parallelized and requires a constant number of machine cycles. To increase rendering speed we have developed a set of methods for scene preprocessing and a special object description that complements the standard one (e.g. geometric, textural, etc.); a sketch of the per-ray parallelism follows.
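The sketch below shows why per-ray intersection parallelizes trivially: each ray is independent, so rays can be distributed across threads (or hardware pipelines) with no shared mutable state. It uses a trivial ground-plane primitive and a standard C++17 parallel algorithm; this is our illustration, not the RT2 hardware design.

```cpp
#include <algorithm>
#include <execution>
#include <vector>

struct Ray { double ox, oy, oz, dx, dy, dz; };

// Distance t to the plane z = 0, or a negative value on a miss.
static double hit_ground_plane(const Ray& r) {
    return (r.dz != 0.0) ? -r.oz / r.dz : -1.0;
}

void trace_all(const std::vector<Ray>& rays, std::vector<double>& t_out) {
    t_out.resize(rays.size());
    std::transform(std::execution::par_unseq,
                   rays.begin(), rays.end(), t_out.begin(),
                   hit_ground_plane);   // one fixed-cost solve per ray
}
```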
To reduce the amount of memory needed to store raster and vector textures, we have developed a texture-compression algorithm. During decompression, texture aliasing is removed.
All of the above allows for realistic visualization of physical phenomena that arise from the diffusion of light by particles. These phenomena include fog, haze, dynamic clouds, smoke, as well as various weather conditions. We use the same approach for volume visualization.
As a result, all objects are processed similarly, be it a cloud, plane, terrain, or a CT/MR/ultrasound scan.
We want to stress that our approach can also be used with standard polygonal 3D data, since triangulated surfaces are simply a special case of our extended set of graphics primitives. A minimal structural sketch follows.
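One way to picture this, purely as our own illustration of the idea and not the RT2 interface: a triangle is just one primitive among others, each supplying its own analytic ray intersection, so legacy polygonal data remains a supported special case.

```cpp
#include <memory>
#include <vector>

struct Ray { /* origin and direction */ };

struct Primitive {
    virtual ~Primitive() = default;
    // Each primitive supplies its own analytic ray intersection.
    virtual bool intersect(const Ray& ray, double& t_hit) const = 0;
};

struct Triangle : Primitive {   // legacy polygonal data
    bool intersect(const Ray&, double&) const override { /* ... */ return false; }
};
struct Quadric : Primitive {    // second-order analytic surface
    bool intersect(const Ray&, double&) const override { /* ... */ return false; }
};

using Scene = std::vector<std::unique_ptr<Primitive>>;  // mixed primitives
```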
The RT2 architecture consists of two stages.
The primary stage classifies the scene on the central processor (CPU). The device driver, working in conjunction with the CPU, provides access to and loading of 3D data (e.g. geometry and texture) at the required frame rate. It also exposes a programming interface (API) for processing events coming from a mouse, keyboard, joystick, network, etc. The API allows control of geometric and textural data as well as of diffusion, reflection, and transparency.
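The paper does not specify the interface itself; purely as a hypothetical sketch of the responsibilities just listed (every name below is our assumption), it could look like:

```cpp
#include <functional>
#include <string>
#include <utility>

struct InputEvent {
    enum class Source { Mouse, Keyboard, Joystick, Network } source;
    /* event payload */
};

class Rt2Api {   // hypothetical name, not from the paper
public:
    void loadGeometry(const std::string& /*path*/) { /* stream geometry per frame */ }
    void loadTexture(const std::string& /*path*/)  { /* stream texture per frame */ }
    void onEvent(std::function<void(const InputEvent&)> handler) {
        handler_ = std::move(handler);             // register input callback
    }
    void setMaterial(int /*objectId*/, double /*diffusion*/,
                     double /*reflection*/, double /*transparency*/) {}
private:
    std::function<void(const InputEvent&)> handler_;
};
```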
The secondary stage of the RT2 pipeline processes the data stored in on-board memory. The processing algorithms are highly scalable, improving the system's performance on complex scenes.
Our unified approach to the visualization of 3D objects enables us to build a single-chip, general-purpose RT2 GPU, even at current chip densities, with a wide range of applications.
Practical Results
From a practical perspective, starting in the late 1980s we researched applications of ray tracing in visualization systems designed specifically for flight simulators.
The following results were achieved:
· In 1987-1989, we developed and manufactured a special-purpose processor that performed centro-projective geometric transformations of color images in real time. It was approved for the Future Development Testing System at the Central Aerohydrodynamic Institute (TsAGI), Zhukovsky, Russia.
· In 1991-1993, supported by a grant from the Civil Aviation Administration, we developed and manufactured a visualization system that enabled the training of flight maneuvers such as steering, take-off, and landing, both day and night, under various meteorological conditions. An experimental prototype of the system was installed in July 1993 on the TU-154M Full Flight Simulator of the Ukrainian Federal Certification Center of Civil Aviation, Kiev, Ukraine. It remains installed and operational and is currently being upgraded.
· As a result of research in 1994-1996, our visualization system was installed at the TL-39 training facility of the Kharkov Military Pilot Institute, where it was in use from February 1996 to 2004.
Current and Future Research
We currently conduct active research in the following areas:
· Synthesis of scenes of large-scale spaces (in the aerospace and outer-space sense).
· Refraction of light in dynamic media.
· Volume rendering optimization.
· Visualization of gas-dynamic environments.
· Development of API.
· Development of interfaces for software packages Bryce, 3DMax, and UG.
· Interactive global illumination.
Our immediate goal is the implementation of an FPGA-based RT2 prototype for high-fidelity interactive synthesis of 3D surfaces, volumes, and complex scenes. The final goal is a general-purpose single-chip RT2 GPU.