The shiny new hatchback you nudge in a street race dents slightly on the driver’s side door. Although you’re playing a PC game built from beaucoup equations, the bend looks almost real. The 3D renderer sculpts all those numbers into images, with help from the video API (application programming interface). Those images, however, can come from completely different rendering techniques, and the hardware and software industries are currently debating how best to use two of them: ray tracing and rasterization.
Rasterizing is widely used to render current 3D games because it strikes a compromise between real-time processing demands and pretty pictures. Its regular, predictable patterns are also suited to specialized massively parallel processors, such as GPUs. Essentially, the raster engine takes the thousands of triangles that make up a 3D scene, projects them into 2D, and determines which are visible from the current perspective. With that information, the engine analyzes the light sources and other environment details to light and color the pixels each triangle covers.
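To make the rasterizer’s core test concrete, here is a minimal C++ sketch: given one triangle already projected into 2D screen space, it decides which pixels the triangle covers and “shades” them as ASCII characters. The edge-function test, the triangle coordinates, and the text framebuffer are illustrative choices, not a description of any particular engine.

// Minimal sketch of the rasterization idea: test every pixel against a
// triangle already projected into 2D screen space, and shade the ones inside.
// All coordinates and sizes here are invented for illustration.
#include <cstdio>

struct Vec2 { float x, y; };

// Signed area of the parallelogram spanned by (b-a) and (c-a); its sign
// tells us which side of edge ab the point c falls on.
float edge(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

int main() {
    const int W = 24, H = 12;                // a tiny text "framebuffer"
    Vec2 v0{3, 2}, v1{20, 4}, v2{10, 10};    // one triangle in pixel space

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            Vec2 p{x + 0.5f, y + 0.5f};      // sample at the pixel center
            // Inside if the point sits on the same side of all three edges.
            bool inside = edge(v0, v1, p) >= 0 &&
                          edge(v1, v2, p) >= 0 &&
                          edge(v2, v0, p) >= 0;
            std::putchar(inside ? '#' : '.');  // "shade" covered pixels
        }
        std::putchar('\n');
    }
    return 0;
}

A real engine runs this inner test, heavily optimized and in parallel on the GPU, for every triangle in the scene, with depth buffering to decide which triangle wins each pixel.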
Ray tracing takes the opposite approach, borrowing from the way photons move in the real world. In nature, a light source creates countless photons (or rays) that bounce off objects, take on their color and properties, and eventually reach your eye.
Ray tracing reverses the process, firing rays outward from the camera’s perspective to assess which objects are in view. When a ray hits something, the engine knows what to draw at that pixel.
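A toy ray tracer shows how little machinery that loop needs. The sketch below fires one ray per pixel from an invented camera at a single sphere standing in for a whole scene, and marks the pixels whose rays strike it; hitSphere and every constant are assumptions for illustration.

// Minimal sketch of the ray-tracing idea: one ray per pixel, out of the
// camera, marking the pixels whose rays hit the scene's single object.
#include <cmath>
#include <cstdio>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Solve |o + t*d - c|^2 = r^2 for t; a hit is any real, positive root.
bool hitSphere(const Vec3& o, const Vec3& d, const Vec3& c, float r) {
    Vec3 oc = o - c;
    float a = d.dot(d);
    float b = 2.0f * oc.dot(d);
    float cc = oc.dot(oc) - r * r;
    float disc = b * b - 4.0f * a * cc;
    return disc >= 0 && (-b + std::sqrt(disc)) > 0;
}

int main() {
    const int W = 32, H = 16;
    Vec3 eye{0, 0, 0};                        // camera at the origin
    Vec3 sphere{0, 0, -3};                    // one object in front of it
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // Map the pixel onto a view plane at z = -1 and aim a ray at it.
            Vec3 dir{(x - W / 2) / float(W), (H / 2 - y) / float(W), -1};
            std::putchar(hitSphere(eye, dir, sphere, 1.0f) ? '#' : '.');
        }
        std::putchar('\n');
    }
    return 0;
}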
The Grey Area
These two techniques further diverge when adding shadows and other details to a scene. Rasterized graphics can use a few techniques to create light and dark, frequently relying on shadow maps. These guides are created by rasterizing from the perspective of a light source, seeing which objects are visible, and shading the camera perspective based on this blueprint. A ray tracer calculates shadows just by tracing more beams and seeing how they bounce. If a beam’s path leads back to a light source, its pixel is drawn brighter. If the beam ends without hitting a light, the engine knows to draw that pixel in shadow.
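In code, the difference comes down to a depth comparison versus one more intersection query. The hedged sketch below shows both: occluded casts a shadow ray from a shaded point toward the light and checks for a blocker, while inShadow compares a point’s distance from the light against the depth recorded in a shadow map. All of the names and scene values are invented for illustration.

// Two shadow tests, reduced to their essentials. Scene values are made up.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Ray-traced shadows: is anything between the shaded point and the light?
bool occluded(Vec3 point, Vec3 light, Vec3 blockerCenter, float blockerRadius) {
    Vec3 d = sub(light, point);          // shadow ray toward the light
    Vec3 oc = sub(point, blockerCenter);
    float a = dot(d, d);
    float b = 2.0f * dot(oc, d);
    float c = dot(oc, oc) - blockerRadius * blockerRadius;
    float disc = b * b - 4 * a * c;
    if (disc < 0) return false;          // the shadow ray misses the blocker
    float t = (-b - std::sqrt(disc)) / (2 * a);
    return t > 0 && t < 1;               // blocker sits between point and light
}

// Shadow-mapped shadows: compare the point's distance from the light with the
// depth the light "saw" when the scene was rasterized from its perspective.
bool inShadow(float depthFromLight, float shadowMapDepth, float bias = 0.005f) {
    return depthFromLight > shadowMapDepth + bias;
}

int main() {
    Vec3 p{0, 0, 0}, light{0, 5, 0}, blocker{0, 2.5f, 0};
    std::printf("ray traced: %s\n",
                occluded(p, light, blocker, 1.0f) ? "shadow" : "lit");
    std::printf("shadow map: %s\n",
                inShadow(4.9f, 2.4f) ? "shadow" : "lit");
    return 0;
}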
Ray tracing’s realism—and system burden—comes from the arbitrary point at which the engine stops calculating these bounces. Every time the beam ricochets off another object, more color, shadow, and reflection details can be added back to the first collision pixel. Fog effects can be especially taxing, requiring the beams to refract through a mist. The best-looking images can take billions of rays; that’s just too much number crunching for today’s CPUs and GPUs to handle in real time. And even if those chips could keep up, other bottlenecks couldn’t keep pace with a fully ray-traced real-time scene. “It’s just too hard in terms of memory bandwidth; it’s too hard in terms of silicon speed,” says David Kirk, chief scientist at Nvidia. “It’s just too hard. And I don’t think that’s the goal.”
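That cut-off is easy to see in a toy recursion. The sketch below is entirely invented: a beam bouncing inside a closed box of mirrors picks up a little light at each surface and keeps 80% of whatever the deeper bounces return. Raising the bounce cap adds detail at a steadily growing cost, with visibly diminishing returns.

// A toy model of the "arbitrary stopping point": a recursive trace that
// follows reflected beams until a fixed depth cap, then gives up.
#include <cstdio>

// Each surface hit emits a little light and reflects 80% of whatever the
// next, deeper bounce would have contributed.
float bounce(int depth, int maxDepth) {
    if (depth >= maxDepth) return 0.0f;   // the engine stops here; anything
                                          // deeper bounces would add is dropped
    float emitted = 0.05f;                // light picked up at this surface
    return emitted + 0.8f * bounce(depth + 1, maxDepth);
}

int main() {
    const int caps[] = {1, 2, 4, 8, 16};  // bounce limits to compare
    for (int cap : caps)                  // more bounces: more detail, more work
        std::printf("max bounces %2d -> brightness %.4f\n",
                    cap, bounce(0, cap));
    return 0;
}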
“Graphics in general is the grand art of cheating,” Kirk notes, regardless of technique. “We’re trying to approximate what nature does—tracing gazillions of photons around—by doing less work than that, because even the most sophisticated and powerful ray tracers don’t trace billions of rays per second.”
Tools For The Job
“This whole CPU versus GPU distinction is a little bit artificial,” says Bill Mark, senior researcher at Intel’s Corporate Technology Group. “Certainly you can build GPUs that have some CPU-like characteristics. Similarly, you can build CPUs that have GPU-like characteristics.” That said, ray tracing slightly favors current CPUs because those chips were designed for computations much like the ones physics-style ray engines perform.
Jerry Bautista, co-director of Intel’s Tera-scale computing research program, says, “There’s no computational difference between tracing the path of a bullet and tracing the path of a light ray.” That similarity could even lead to ray-tracing engines being recycled as a game’s physics engine, saving programming and processing power. Bautista also notes, “General compute engines like a CPU are pretty well suited to physics kinds of problems, whereas a GPU is more of a stream compute engine and probably a little better suited to… processing triangles at a high speed.”
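Bautista’s point can be illustrated with one shared routine: the same intersection query answers both the renderer’s question (what does this eye ray see?) and the physics engine’s (where does this bullet land?). The rayHit helper, the sphere target, and both callers below are invented for the sketch.

// One intersection routine serving both rendering and physics queries.
#include <cmath>
#include <cstdio>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Shared query: distance along the ray to a sphere, or -1 for a miss.
float rayHit(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
    Vec3 oc = origin - center;
    float a = dir.dot(dir), b = 2 * oc.dot(dir);
    float c = oc.dot(oc) - radius * radius;
    float disc = b * b - 4 * a * c;
    if (disc < 0) return -1;
    float t = (-b - std::sqrt(disc)) / (2 * a);
    return t > 0 ? t : -1;
}

int main() {
    Vec3 target{0, 0, -10};
    // Renderer's question: what does this pixel's eye ray strike?
    float tLight = rayHit({0, 0, 0}, {0, 0, -1}, target, 2);
    // Physics engine's question: where does this bullet land?
    float tBullet = rayHit({0, 1, 0}, {0, 0, -1}, target, 2);
    std::printf("light ray hits at t=%.2f, bullet hits at t=%.2f\n",
                tLight, tBullet);
    return 0;
}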
Ultimately, hardware companies want software developers to have access to the fastest parts, regardless of renderer. Intel is developing its massively scalable, multicore Larrabee architecture. Nvidia is offering ways for game developers to run their own rendering code directly on the video hardware, letting GPUs accelerate ray tracing as well.
According to Intel, hardware one to two generations away could render a complete, real-time scene with ray tracing. But nobody sees that as the goal. Nvidia’s David Kirk says, “If you could do all ray tracing, would you? I don’t think you would. There are many effects that you can do that involve diffuse kinds of lighting—that means softer, more inter-reflected kinds of lighting—that are horrendously [taxing]… to do with ray tracing.”
The hardware companies want to give software developers more opportunities to write their own renderers, mixing and matching methods even within a single scene. Much as many animated movies are produced today, a rasterizer could rough in a game scene while a ray tracer adds sharp reflections and fine details.
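Structurally, such a hybrid is just two passes feeding one pixel. In the sketch below, rasterBasePass and traceReflection are trivial stand-ins rather than real renderers; the point is only how a rasterized base color and a ray-traced touch-up could combine per pixel.

// A sketch of a hybrid pipeline: a rasterized base pass supplies each pixel's
// surface color, and a ray-traced pass adds reflections only where needed.
// Both passes are invented stand-ins, not real renderers.
#include <cstdio>

struct Pixel { float color; bool reflective; };

// Pass 1 stand-in: what a rasterizer might produce for this pixel.
Pixel rasterBasePass(int x) {
    return {0.2f + 0.01f * x, x % 4 == 0};   // every 4th pixel is "chrome"
}

// Pass 2 stand-in: what a reflection ray might bring back for this pixel.
float traceReflection(int x) {
    return 0.9f - 0.01f * x;                 // a sharp, ray-traced highlight
}

int main() {
    for (int x = 0; x < 8; ++x) {
        Pixel p = rasterBasePass(x);         // rasterizer sketches the scene
        float out = p.color;
        if (p.reflective)                    // ray tracer touches up details
            out = 0.5f * out + 0.5f * traceReflection(x);
        std::printf("pixel %d: %.3f%s\n", x, out,
                    p.reflective ? "  (ray-traced reflection mixed in)" : "");
    }
    return 0;
}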
This mix-and-match approach may seem at odds with a standardized API, but Microsoft has already been heading in this direction. DirectX even allows game developers to send programmable shaders directly to the graphics card, enabling open-ended acceleration regardless of the 3D engine. Chas Boyd, principal program manager for Windows Display and Graphics Technology, notes, “In future releases, we will continue to increase the generality of [Direct3D], and thus offer developers even more flexibility in their choice of rendering methods.”