White Paper: DirectX 11


You thought DX10 brought big changes? Get a load of DX11!

DirectX 10 marked a radical departure from DirectX 9: In order to be compatible, a graphics processor must feature a unified architecture in which each shader unit is capable of executing pixel-, vertex-, and geometry-shader instructions. The changes in DirectX 11 aren’t quite as fundamental, but they could have just as big an impact—and not only with games.

DirectX 11 is a superset of DirectX 10, so everything in DirectX 10 is included in the new collection of APIs. In addition, DX11 offers several new features and three additional stages to the Direct3D rendering pipeline: the Hull Shader, the Tessellator, and the Domain Shader. And in an effort to deliver cross-hardware support for general-purpose computing on graphics processors, Microsoft has come up with a new Compute Shader.

DirectX 11 will be compatible with both Vista and Windows 7, but many of its graphics features will be available on GPUs designed for previous iterations of Direct3D. Tapping into the Tessellator’s power, however, will require a GPU with transistors dedicated to the task (in this sense, DX11 marks a slight departure from DX10’s vision of a unified architecture). Let’s explore the concept of tessellation now.

Meet Tess

The three new pipeline stages we mentioned earlier are all related to tessellation. They reside in the geometry-processing stage, between the Vertex Shader and the Geometry Shader. Tessellation can rapidly create the many primitive elements that go into a complex three-dimensional object by repeatedly subdividing just a few. In this case, the primitives are called patches, which are defined by control points (visualize Photoshop’s pen tool, except that DX11’s control points manipulate a surface instead of a line). When tessellation is in use, patches replace the triangles that served as input primitives in previous versions of DirectX. Each round of subdivision creates more primitives, with each group smaller than the last, and increasing the number of primitives in a model makes the model look more realistic. The Tessellator can also reshape these primitives by adjusting the control points to form more complex geometry.

While it’s very easy for GPUs to produce coarse objects like cubes, they have a much harder time creating objects with smooth curves. By tessellating a coarse object (a cube, for example), a GPU can transform that object into something that does have smooth curves, such as a sphere. The kicker is that this process requires relatively little GPU horsepower and graphics memory.
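To make the cube-to-sphere idea concrete, here is a minimal HLSL sketch of the kind of code that could run in the new Domain Shader stage (its place in the pipeline is described in the walkthrough below). Every identifier in it, from the struct names to the gRadius constant, is our own illustrative assumption rather than anything mandated by DirectX 11; the point is simply that each generated point gets pushed out to a fixed radius, which rounds the coarse cube into a sphere.

```hlsl
// Hypothetical Domain Shader sketch: rounding a tessellated cube into a sphere.
// DomainIn, DomainOut, PatchTess, gWorldViewProj, and gRadius are illustrative
// names, not part of the DirectX 11 API.

cbuffer PerObject : register(b0)
{
    float4x4 gWorldViewProj;   // transform supplied by the application
    float    gRadius;          // radius of the sphere we want to end up with
};

struct DomainIn  { float3 PosL : POSITION; };    // control points from the Hull Shader
struct DomainOut { float4 PosH : SV_Position; };

struct PatchTess                                 // per-patch tessellation factors
{
    float EdgeTess[4]   : SV_TessFactor;
    float InsideTess[2] : SV_InsideTessFactor;
};

[domain("quad")]   // each face of the cube is treated as a four-control-point patch
DomainOut DS(PatchTess pt,
             float2 uv : SV_DomainLocation,
             const OutputPatch<DomainIn, 4> quadPatch)
{
    // Bilinearly interpolate the face's four control points to place this point...
    float3 p = lerp(lerp(quadPatch[0].PosL, quadPatch[1].PosL, uv.x),
                    lerp(quadPatch[2].PosL, quadPatch[3].PosL, uv.x), uv.y);

    // ...then push it out to a constant radius, turning each flat face into a bulge
    // and the cube as a whole into a sphere.
    p = normalize(p) * gRadius;

    DomainOut dout;
    dout.PosH = mul(float4(p, 1.0f), gWorldViewProj);
    return dout;
}
```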


Here’s a broad overview of how tessellation works: The Vertex Shader outputs patches, which then travel down the pipeline to the Hull Shader. The Hull Shader analyzes the patches’ control points to determine how the Tessellator should be configured (generating so-called “tessellation factors”) and then sends the patches on to the Tessellator. The Tessellator, in turn, subdivides the patches and feeds a stream of points to the Domain Shader. The Domain Shader manipulates these points to form the appropriate geometry and sends the resulting vertices to the Geometry Shader.
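The Hull Shader half of that hand-off can be sketched in a few lines of HLSL. The fragment below is a hypothetical example, not code from the DirectX 11 SDK: its patch-constant function runs once per patch and emits the tessellation factors mentioned above, while the main function runs once per control point and simply passes each one through to the Tessellator. A real shader would compute the factors dynamically, for instance from the patch’s distance to the camera, to get automatic level of detail.

```hlsl
// Hypothetical Hull Shader sketch for a four-control-point (quad) patch.
// HullIn, HullOut, PatchTess, and the fixed factor of 8 are illustrative choices.

struct HullIn  { float3 PosL : POSITION; };
struct HullOut { float3 PosL : POSITION; };

struct PatchTess
{
    float EdgeTess[4]   : SV_TessFactor;        // how finely to split each patch edge
    float InsideTess[2] : SV_InsideTessFactor;  // how finely to split the interior
};

// Runs once per patch: this is where the "tessellation factors" come from.
PatchTess ConstantHS(InputPatch<HullIn, 4> cp, uint patchID : SV_PrimitiveID)
{
    PatchTess pt;
    [unroll]
    for (int i = 0; i < 4; ++i)
        pt.EdgeTess[i] = 8.0f;                  // a constant factor keeps the sketch short
    pt.InsideTess[0] = 8.0f;
    pt.InsideTess[1] = 8.0f;
    return pt;
}

// Runs once per output control point: here it just passes the point along.
[domain("quad")]
[partitioning("integer")]
[outputtopology("triangle_cw")]
[outputcontrolpoints(4)]
[patchconstantfunc("ConstantHS")]
HullOut HS(InputPatch<HullIn, 4> cp, uint i : SV_OutputControlPointID)
{
    HullOut hout;
    hout.PosL = cp[i].PosL;
    return hout;
}
```

Paired with the Domain Shader sketch above, these two programmable shaders bracket the fixed-function Tessellator: the Hull Shader tells it how finely to subdivide, and the Domain Shader decides where the resulting points actually go.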

Hardware tessellation isn’t a new concept. Animators at Pixar have used tessellation to create their highly detailed characters since A Bug’s Life, and they’re still using it today. The GPU that AMD designed for Microsoft’s Xbox 360 gaming console features a tessellation unit, and AMD integrated something similar into its Radeon GPUs for the PC, beginning with the Radeon HD 2000 series. This led many to predict that Microsoft would expose tessellation in DirectX 10. But that didn’t happen, and DirectX 11 won’t be able to tap AMD’s tessellator, either, because AMD’s original implementation of the technology isn’t compatible with Microsoft’s.

I Compute, Therefore I Am

If you’ve followed the evolution of modern GPUs, you know that they’ve moved from being single-core processors designed for one specific purpose (processing graphics) to massively parallel devices with hundreds of processing cores. Modern GPUs are capable of performing more than a trillion floating-point operations per second, which has been a boon for the types of graphics processing and real-time animation needed for computer gaming. But this hardware can be tapped to perform other types of computations, too; the concept is known as GPGPU computing (short for general-purpose computing on graphics processing units). Most software applications, however, as well as the tools used to develop them, are designed for serial execution, not parallel.

GPGPU computing, therefore, requires brand-new tools, and AMD and Nvidia have invested significant time and effort to create them and to spur the development of GPGPU applications. AMD’s initiative is known as the Stream SDK (Software Development Kit), and Nvidia’s is called CUDA (Compute Unified Device Architecture). The growth of GPGPU computing, however, has been hindered by the fact that each company’s tools work only with that company’s GPUs. Microsoft hopes to change that with the addition of the Compute Shader to DirectX 11. The Compute Shader will enable developers to write GPGPU code that will run on any graphics processor, be it Nvidia’s GeForce platform, AMD’s Radeon, or Intel’s upcoming Larrabee.

Although the Compute Shader is integrated with DirectX 11, it’s not actually a stage in the Direct3D pipeline. It can, however, take data structures from the Pixel Shader stage, manipulate them using the GPU’s resources, and then apply them to the final image in a post-processing pass. Microsoft has identified a range of target applications, many of them graphics-related, that should improve games, including effects physics (particles, smoke, water, cloth, and so on), ray tracing, gameplay physics, and even AI.
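To give a sense of what Compute Shader code actually looks like, here is a minimal HLSL sketch of the sort of post-processing pass just described. It is a hypothetical example: SceneColor, Output, and the 8x8 thread-group size are our own assumptions, chosen only to keep it short. The shader reads the rendered frame, desaturates it, and writes the result to an output texture that the application then displays.

```hlsl
// Hypothetical Compute Shader sketch: a simple post-processing pass that
// desaturates the rendered frame. SceneColor and Output are illustrative names.

Texture2D<float4>   SceneColor : register(t0);   // frame produced by the graphics pipeline
RWTexture2D<float4> Output     : register(u0);   // writable texture the app will display

[numthreads(8, 8, 1)]   // each thread group handles an 8x8 block of pixels
void CSMain(uint3 id : SV_DispatchThreadID)
{
    float4 c = SceneColor[id.xy];

    // Stand-in for a real effect: collapse the color to its luminance.
    float g = dot(c.rgb, float3(0.299f, 0.587f, 0.114f));

    Output[id.xy] = float4(g, g, g, c.a);
}
```

The application launches this work with a single Dispatch call sized to cover the frame, and because the job is expressed as thousands of independent threads rather than vendor-specific code, it maps onto any DirectX 11 GPU.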

Analysts expect the first DirectX 11–compatible GPUs to reach the market in the fourth quarter; games that take advantage of DirectX 11 aren’t expected until sometime in 2010.
