In the dark ages of PC gaming, the CPU took care of most of the graphics chores. The graphics chip did just the basics: some raster operations, dedicated text modes, and such seemingly quaint tasks as dithering colors down to 256 or 16 colors. As Windows took hold, the graphics equation began to shift a bit, with some Windows bitmap operations handled by “Windows accelerators.” Then along came hardware like the 3dfx Voodoo and the Rendition V1000, and accelerated 3D graphics on the PC took off.
Now it’s coming full circle. Today’s GPUs are fully capable of running massively parallel, double-precision floating-point calculations. GPU computing allows the 3D graphics chip inside your PC to take on other chores. The GPU isn’t just for graphics anymore.
Maybe it's the holiday season that has big corporations in a giving mood this month, but there's definitely something in the air spreading the love of open source software. Whatever it is, Hewlett-Packard caught wind of it last week and flipped the open source switch on webOS. Now Nvidia is announcing plans to provide the source code for its new LLVM-based CUDA compiler to academic researchers and software-tool vendors, making it easier to add GPU support to more programming languages and to run CUDA applications on alternative processor architectures.
Nvidia on Tuesday welcomed a new mid-range part to its family of Fermi graphics cards. Based on the company's 40nm GF116 GPU, the new GeForce GTX 550 Ti is priced below $150, making it Nvidia's most affordable 500 series desktop card thus far. Keep reading after the break for more detailed GTX 550 Ti specs.
NVIDIA on Wednesday unveiled its latest range of mobile graphics cards. Sandwiched between the graphics chip maker’s mainstream and enthusiast offerings, the new GeForce 500M family of GPUs is focused on performance.
The GPUs introduced yesterday are all fabricated on the 40nm process technology and feature up to 1.5GB of GDDR5 or DDR3 memory, with the GeForce GT 540M, GeForce GT 550M, and GeForce GT 555M offering four times the performance of integrated graphics and the GeForce GT 520M and GeForce GT 525M offering around twice as much. Of course, they are all designed to work with Intel’s new generation of Core processors.
NVIDIA also reminded us in the press release that GeForce 500M GPUs support DirectX 11, NVIDIA 3D Vision, PhysX physics engine, CUDA and NVIDIA 3DTV Play. The new range will be hitting the market later this month as part of laptops from the likes of Acer, Alienware, Asus, Dell, Fujitsu, Lenovo, MSI, Samsung, Sony, and Toshiba.
Nvidia isn't making a big deal about its GeForce GTX 460 SE videocard (hit the specifications tab), the latest addition to the GTX 460 line with the least amount of CUDA cores.
The SE version comes with 288 CUDA cores in all, compared to 336 on the standard GTX 460 in both 768MB and 1GB trim. It's also clocked a bit slower, at 650MHz core and 1700MHz memory, versus 675MHz core and 1800MHz memory on the two other models.
It does, however, sport the same 256-bit memory bus as the non-SE 1GB version, whereas the standard GTX 460 in 768MB form features a 192-bit bus. That gives the SE more memory bandwidth than the 768MB version (108.8GB/s versus 86.4GB/s), though still less than the non-SE 1GB version (115.2GB/s). And finally, the SE comes with 1GB of GDDR5 memory.
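Those bandwidth figures fall straight out of the specs: peak bandwidth is the bus width in bytes times the effective transfer rate. A quick sketch (the quoted GDDR5 "memory clock" here is the DDR figure, so the effective rate is twice that, e.g. 1700MHz quoted becomes 3400 MT/s):

```python
# Peak memory bandwidth = bus width (bytes) x effective transfer rate.
# The quoted GDDR5 memory clock is the DDR figure; effective transfers
# per second are twice that (1700MHz quoted -> 3400 MT/s).
def bandwidth_gbs(bus_bits, mem_clock_mhz):
    bytes_per_transfer = bus_bits / 8
    effective_mts = mem_clock_mhz * 2
    return bytes_per_transfer * effective_mts / 1000  # GB/s

print(bandwidth_gbs(256, 1700))  # GTX 460 SE    -> 108.8
print(bandwidth_gbs(192, 1800))  # GTX 460 768MB -> 86.4
print(bandwidth_gbs(256, 1800))  # GTX 460 1GB   -> 115.2
```

This is why the SE, despite its slower 1700MHz memory, still out-muscles the 768MB card: the wider 256-bit bus moves 32 bytes per transfer against the narrower card's 24.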
Its powerplant, NVIDIA's GT 430 GPU, features 96 CUDA cores, a 700MHz core clockspeed, 1GB of DDR3 clocked at 900MHz, a 128-bit memory bus, 4 ROPs, and 585 million transistors. The presence of NVIDIA's PureVideo engine means that it can even accelerate Blu-ray 3D media.
According to Asus, the card is exceptionally durable owing to “highest quality components, including ASUS Dust-Proof fan, GPU Guard, and Fuse Protection,” with the fan alone adding 25% extra to its lifespan. It is available now for $79.99.
I'm sitting here at Nvidia's GPU Technology Conference, and will liveblog Jen-Hsun Huang's keynote. I'd expect we'll hear lots about GPU-based computing applications, as well as some new hardware focused on GPU-based computing. Hit the jump to see the liveblog.
To those looking for another venue to get their very own supercomputer, you're in luck! Nvidia has announced that its CUDA-based Tesla C1060 GPU is available in Dell's Precision R5400, T5500, and T7500 workstations, effective immediately.
If you’re worried that just one of these GPUs isn’t enough to handle your hardcore needs, worry not – just one C1060 has enough power to control the main system of the European Extremely Large Telescope project (reportedly the world’s largest). According to Jeff Meisel with National Instruments, a workstation “equipped with a single Tesla C1060 can achieve near real-time control of the mirror simulation and controller, which before wouldn't be possible in a single machine without the computational density offered by GPUs."
Palit Microsystems began offering a custom-built GTX 285 with 2GB of memory back in February. On the face of it, Sparkle's entire staff must have been marooned on a remote island – or away on an intergalactic excursion – and therefore had no idea what was going on.
The GTX 285 runs at a core clock frequency of 648MHz. Sparkle has also promised its card will deliver “30% faster performance than competing single GPU graphic card solutions.” But the company is mum on pricing.
Nvidia stands at a crossroads, with two closed, proprietary APIs that have mainstream potential: the general-purpose CUDA computing API, and the PhysX physics-acceleration API, which sits on top of CUDA. Both are promising technologies, but only owners of Nvidia hardware can harness their power. Meanwhile, two emerging open standards mirror what Nvidia is doing with its proprietary development: one is OpenCL 1.0, and the other is the general-purpose GPU compute API (DirectCompute) that Microsoft will include in DirectX 11. Relatively few consumer applications use CUDA, PhysX, or OpenCL right now, but the possible applications for the tech are endless: grossly simplified, these APIs let graphics chips perform CPU-like functions.
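The shared idea behind CUDA, OpenCL, and DirectX 11 compute is the same programming model: you write a small "kernel" that handles one data element, and the API launches thousands of copies of it in parallel across the GPU's cores. A minimal sketch in plain Python (no GPU required; the function names here are purely illustrative, not any real API):

```python
# Illustrative only: GPU compute APIs express work as a per-element
# "kernel". Here each kernel invocation computes one element of a
# SAXPY (out[i] = a*x[i] + y[i]); on real hardware all n invocations
# would run concurrently, one per GPU thread.
def saxpy_kernel(i, a, x, y, out):
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # A GPU runtime schedules these across hundreds of cores;
    # this sketch just loops sequentially.
    for i in range(n):
        kernel(i, *args)

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * 4
launch(saxpy_kernel, 4, 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0, 48.0]
```

The proprietary-versus-open question is about which dialect developers write that kernel in, not about the model itself, which all of these APIs share.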
The question Nvidia needs to be asking is simple: Will developers write their general-purpose GPU computing apps using a proprietary API that works on only a subset of PCs—those stuffed with Nvidia hardware—or will they use an open API that will work on every PC on the market?