Like many of you, the first real 3D accelerator I owned was a 3dfx Voodoo card. This was back in the mid-’90s, when Direct3D had yet to matter to gamers and OpenGL was used only for CAD and scientific rendering apps. In those primordial times, if a game developer wanted to harness the awesome rendering power of the Voodoo hardware, he had to write his game with Glide, 3dfx’s own application programming interface (API). This was all before the open standards movement became a powerful force in development circles, and Glide offered 3dfx a major competitive advantage: If a gamer wanted to see all the kick-ass 3D effects that Glide enabled, he had to play the game on 3dfx hardware—lest he suffer, Glideless, in a depressing, busted-up world of jaggy, unfiltered textures.
The 3dfx/Glide domination ended when id Software and other game developers started releasing titles that used the OpenGL API, which wasn’t dependent on 3dfx hardware (but worked with 3dfx chips through a Glide translation layer). OpenGL opened the door for other 3D chip companies to build competitive products, and thus ATI, S3, Matrox, and Nvidia entered the fray with hardware of their own.
With every new OpenGL or DirectX game released, Glide slowly transitioned from an advantage to a liability for 3dfx. As competitors like Nvidia embraced new technology and embarked on a period of incredibly rapid improvements, 3dfx remained tied to its Glide past, and, as a result, was slow to embrace new rendering enhancements, such as 32-bit color and antialiasing. Ultimately, this contributed to 3dfx’s demise, and embracing open standards allowed Nvidia and ATI to flourish.
Why are we talking about this today? Because Nvidia stands at a crossroads, with two closed, proprietary APIs that have mainstream potential: the general-purpose computing CUDA API, and the PhysX physics-acceleration API, which sits on top of CUDA. These are both promising technologies, but only owners of Nvidia hardware can harness their power. Meanwhile, two emerging open standards mirror what Nvidia is doing with its proprietary development: OpenCL 1.0 and a general-purpose GPU computing API that Microsoft will include in DirectX 11. Relatively few consumer applications use CUDA, PhysX, or OpenCL right now, but the possible applications for the tech are endless—grossly simplified, these APIs let graphics chips perform CPU-like functions. The question Nvidia needs to be asking is simple: Will developers write their general-purpose GPU computing apps using a proprietary API that works on only a subset of PCs—those stuffed with Nvidia hardware—or will they use an open API that works on every PC on the market?
Nvidia’s path is clear: It needs to stop trying to convince us that closed APIs are good, and instead embrace OpenCL and Microsoft’s yet-to-be-named solution. It needs to port PhysX to run on one of the open APIs, then use PhysX as a platform to advertise the kind of power Nvidia hardware delivers—aiming that message squarely at ATI diehards and anyone considering Intel’s forthcoming Larrabee GPU.
By focusing on what it’s always done well—building kick-ass hardware—instead of force-feeding us closed APIs, Nvidia will thrive. As for CUDA? It’s served its purpose, but its time has passed. It’s time to kill CUDA.