When Nvidia acquired the struggling Ageia, we were disappointed—but not surprised—to learn that Nvidia was interested only in the PhysX software. While it wouldn’t be accurate to say that Nvidia has orphaned the hardware, the company has no plans to continue developing the PhysX silicon. What’s more, there is absolutely no Ageia intellectual property to be found in the GTX 200-series silicon—the new GPU had already been taped out when the acquisition was finalized in February.
But Nvidia didn’t acquire Ageia just to put the company out of its misery. The company’s engineers quickly set about porting the PhysX software to Nvidia’s GeForce 8-, 9-, and 200-series GPUs. When Ageia first introduced the PhysX silicon, the company maintained that it was a superior solution to the CPU and GPU architectures, which weren’t specifically optimized for accelerating complex physics calculations. In reality, the PhysX architecture wasn’t as radically different from modern GPU architectures as we’d been told.
The first PhysX part, for example, had 30 parallel cores; the mobile version that ships in Dell’s XPS M1730 notebook PC has 40 cores. Nvidia tells us it took only three months to get PhysX software running on GeForce, and the software will soon be running on every CUDA platform. See the sidebar on this page for more information on the physics capabilities of the GeForce 200 series.
The screenshot above shows something of what’s possible with PhysX technology. The Unreal Tournament Tornado mod features a whirling vortex that tears the battlefield apart as the game progresses. The tornado can also suck in projectile weapons, such as rockets, adding an exciting new dynamic to the game.
Unfortunately for Ageia, mods such as this were too few and far between, and this chicken-or-the-egg conundrum ultimately killed the PhysX physics processing unit. By the time Nvidia acquired the company, Ageia had convinced just two manufacturers—Asus and BFG—to build add-in boards based on the PPU, and Dell was the only major notebook manufacturer to offer machines featuring the mobile version. Absent a large installed base of customers, few major game developers (aside from Epic and Ubisoft’s GRAW team) saw any reason to support the hardware.
Nvidia will have a much more persuasive argument: When it releases PhysX drivers for the GeForce 8-, 9-, and 200-series GPUs, the installed base will amount to 90 million units—a number expected to swell to 100 million by the end of 2008.
Even then, we predict PhysX will need a killer app if it’s to really take off. Nvidia will need to help foster the development of more PhysX-exclusive games, such as the Tornado and Lighthouse mods for Unreal Tournament 3, and the Ageia Island level in Ghost Recon: Advanced Warfighter.
Nvidia will also remedy one of Ageia’s key marketing mistakes: Consumers couldn’t run a PhysX application unless they had a PhysX processor, which meant they had no idea what they might be missing out on. Under Nvidia’s wing, PhysX applications will fall back to the host CPU in the absence of a CUDA-compatible processor. The app might run like a fly dipped in molasses, but the experience could fuel demand for Nvidia-based videocards.
Nvidia tells us it expects to have PhysX drivers for the GTX 200 series shortly after launch; drivers for GeForce 8- and 9-series parts will follow soon thereafter.
Both the GeForce GTX 280 and 260 have two SLI edge connectors, so they will support three-way SLI configurations. Nvidia wouldn’t comment on the possibility of a future single-board, dual-GPU product that would allow quad SLI, but reps did tell us they expect the current dual-GPU GeForce 9800 GX2 to fade away.
Nvidia’s reference-design board features two DVI ports and one analog video output on the mounting bracket, with HDMI support available via dongle. The somewhat kludgy solution of bringing digital audio to the board via SPDIF cable remains (we much prefer AMD’s over-the-bus solution). Add-in board partners can choose to offer DisplayPort SKUs for customers who want support for displays with 10-bit color and 120Hz refresh rates.
Nvidia tells us there’s more to the GeForce 200 series than just substantial increases in the numbers of stream processors and ROPs. The new GPUs, for example, are capable of managing three times as many threads in flight at a given time as the previous architecture. Improved dual-issue performance enables each stream processor to execute multiple instructions simultaneously, and the new processors have twice as many registers as the previous generation.
These performance-oriented improvements should allow for faster shader performance and increasingly complex shader effects, according to Nvidia. In a new demo called Medusa, a geometry shader enables the mythical creature to turn a warrior to stone with a single touch. This isn’t a simple texture change or skinning operation—the stone slowly creeps up the warrior’s leg, torso, and face until he is completely transformed. Medusa then knocks off his head with a flick of her tail for good measure.
Nvidia still perceives gaming as a critically important market for its GPUs, but the company is also looking well beyond that large but still niche market. Through its CUDA (Compute Unified Device Architecture) initiative, the company is taking on an increasing number of apps that have traditionally been the responsibility of the host CPU. Nvidia isn’t looking to replace the CPU with a GPU; it’s simply trying to convince consumers that GPU purchasing decisions and upgrades are more important than CPU purchasing decisions.
CUDA applications will run on any GeForce 8- or 9-series GPU, but the GeForce 200 series delivers an important advantage over those architectures: support for the IEEE-754R double-precision floating-point standard. This should make the new GPUs—and CUDA in general—even more attractive to users who develop or run applications that rely heavily on floating-point math. Such applications are common not only in the scientific, engineering, and financial markets, but also in the mainstream consumer marketplace (for everything from video transcoding to digital photo and video editing).
Nvidia has made great strides in reducing its GPUs’ power consumption, and the GeForce 200 series promises to be no exception. In addition to supporting Hybrid Power (a feature that can shut down a relatively power-thirsty add-in GPU when a more economical integrated GPU can handle the workload instead), these new chips will offer performance modes tailored to three scenarios: when Vista is idle or the host PC is running a 2D application, when the user is watching a movie on Blu-ray or DVD, and when full 3D performance is called for. Nvidia promises the GeForce device driver will switch between these modes based on GPU utilization in a fashion that’s entirely transparent to the user.