Nvidia stands at a crossroads, with two closed, proprietary APIs that have mainstream potential: the general-purpose computing CUDA API, and the PhysX physics-acceleration API, which sits on top of CUDA. These are both promising technologies, but only owners of Nvidia hardware can harness their power. Meanwhile, there are two emerging open standards that mirror what Nvidia is doing with its proprietary development. One is OpenCL 1.0, and the other is a general-purpose GPU computing API, which Microsoft will include in DirectX 11. There are a relatively small number of consumer applications that use CUDA, PhysX, or OpenCL right now, but the possible applications for the tech are endless—grossly simplified, these APIs let graphics chips perform CPU-like functions.
The question Nvidia needs to be asking is simple: Will developers write their general-purpose GPU computing apps using a proprietary API that works on only a subset of PCs—those stuffed with Nvidia hardware—or will they use an open API that will work on every PC on the market?
This week, we recorded a mostly zombie-free edition of the No BS podcast. While there was a little undead chat, we also talked about CUDA vs. OpenCL vs. DirectX 11 and using iTunes the Gordon Mah Ung way. And this time, we're pretty certain we even managed to post the right podcast (if you missed last week's, just redownload it; it's linking to the right one now). Join the podcast gang as we answer your tech questions, take a trip to the Lab, and get a chock-full-o'-rage edition of Gordon Mah Ung's Rant of the Week!
Do you have a tech question? A comment? A tale of technological triumph? Just need to get something off your chest? A secret to share? Email us at firstname.lastname@example.org or call our 24-hour No BS Podcast hotline at 877.404.1337 x1337--operators are standing by. For the love of all that's holy, people, if you don't start asking tech questions, we're going to change the name to the Nothing But Undead podcast...
The technique leverages the parallel processing power of Nvidia’s latest graphics cards to speed up the “password recovery” process by 10,000 per cent. Global Security Systems (GSS) has advised enterprises to deploy VPNs for safeguarding their WiFi networks.
We, too, can only advise you to secure your office WiFi network with VPN encryption before professional industrial sleuths start waging brute-force blitzkriegs using ordinary graphics cards.
Earlier this summer, both Nvidia and ATI hosted press events to unveil their new hardware—and the excitement about GPU-based encoding was palpable. We were promised that our videocards would make Photoshop faster and better and our GPUs would encode video 10 times faster than our CPUs. In fact, someone lacking tech savvy would have left these presentations thinking, "Wow, these GPU things can make common computing tasks run insanely fast, and there are a couple of games that work with them too." Of course, as is typical, the truly big promises (like 10x faster video encodes) were off in the future, when the software was "ready."
Well, the software's nearly ready. Elemental's Badaboom uses Nvidia's CUDA interface to do much of the grunt work of DVD ripping on the GPU instead of your musty old CPU. I've been in the Lab for the last few days putting this app through the wringer. Our test bed for this challenge is an Intel Q6600 quad-core, running at a stock 2.4GHz, with 4GB of memory and a GeForce GTX 280 reference board.
Two years ago, Nvidia unveiled its Quadro Plex range of visual computing systems at SIGGRAPH 2006. Now, at this year’s SIGGRAPH, it has announced desk-mounted visual supercomputers in the Quadro Plex range. The D series of Quadro Plex visual computing systems is claimed to leapfrog previous versions by more than 100 percent in terms of performance. The Nvidia Quadro Plex 2200 D2 VCS has two Quadro FX 5800 GPUs, four dual-link DVI channels, and 8GB of frame buffer memory, while its sibling, the Quadro Plex 2100 D4 VCS, has four GPUs, eight dual-link DVI channels, and a 4GB frame buffer.
The D series visual supercomputers are ideal for highly taxing 3D models, engineering designs, and other scientific visualizations. The hundreds of Nvidia CUDA parallel-processing cores pack copious parallel-computing capability, and the visual supercomputers can be easily hooked up to workstations or servers using PCI Express adapter cards. The D series is due in September, with prices starting at $10,750.
Here’s the second part of our exclusive QuakeCon interview with John Carmack. In the first part of our conversation, Carmack discussed his hopes for Quake Live and id Software’s new gaming direction in Rage. This time around, he gets more into the heady technical stuff with his thoughts on Nvidia’s CUDA, physics accelerators, general-purpose computing, and ATI’s rumored Fusion technology. Here’s a snippet:
John Carmack – I was well known as not being a supporter of the PhysX accelerators. It always felt like a gimmicky plan, with people setting up a company to be acquired. For years, the question has been: what do you do any time Intel delivers something more with its processors and more cores? It’s never really proven out right, and there are a lot of reasons for it.
For one thing, you can’t scale AI and physics in general with your gameplay, while with graphics, you can. You can’t design a game that requires fancy AI and then turn off the fancy AI for low-end systems, because practically that’s not possible. Similarly for physics: if it’s anything other than eye candy, you also can’t scale. If the building is going to fall down, you need to know whether you’re going to be able to get past it on both the high end and the low end.
As Intel gears up to sample Larrabee later this year, the chip maker continues to build hype over the architecture's x86 roots. Intel is quick to point out that developers will be able to program in C or C++ languages just as they're used to doing on x86 processors, giving them an easy way to port applications from other platforms over to Larrabee.
Meanwhile, Nvidia also wants to build hype, but over its competing CUDA architecture. DailyTech has posted Nvidia's comments on the issue, which read:
CUDA is a C-language compiler that is based on the PathScale C compiler. This open source compiler was originally developed for the x86 architecture. The NVIDIA computing architecture was specifically designed to support the C language - like any other processor architecture. Competitive comments that the GPU is only partially programmable are incorrect - all the processors in the NVIDIA GPU are programmable in the C language.
NVIDIA's approach to parallel computing has already proven to scale from 8 to 240 GPU cores. Also, NVIDIA is just about to release a multi-core CPU version of the CUDA compiler. This allows the developer to write an application once and run across multiple platforms. Larrabee's development environment is proprietary to Intel and, at least disclosed in marketing materials to date, is different than a multi-core CPU software environment.
Andrew Humber from Nvidia also went on to clarify that CUDA is a single brand name covering both the architecture and the C compiler, rather than two different things.
Anyone else feel chilly when Nvidia and Intel are in the same room?
John Carmack may be the face of id Software, but he’s definitely not the only person working on Rage or the next Doom. We spoke with Robert Duffy, id’s Programming Director, and Matt Hooper, Rage’s Lead Designer, about their upcoming shooter. The conversation delves into topics ranging from art design to multiplayer modes, and touches on the challenges of developing on both console and PC hardware. Here’s a snippet:
MaxPC: With the combination of driving and FPS gameplay, what’s fun and exciting that we should look forward to that we haven’t seen before in games? Matt Hooper: The thing you haven’t seen is really the mix. We’re still id Software and we’re still making this intense, action shooter game. Those moment-to-moment, finely crafted action sequences – running around with the coolest weapons and shooting guys – that’s still there. We invented that and we’re still going to do that really well. Around the office, everyone likes a lot of cool games. What we did was pull in these different elements that don’t detract from the action but add this little bit of flavor, and the vehicles are a part of that. The vehicles are almost an extension of your FPS avatar – you’re “running” around with a vehicle. It has armor on it, it carries a cool weapon, you fire that weapon, and the other car blows up in a cool, satisfying explosion. It’s not as far removed as you would probably initially think. It all feels really good together.
We interviewed John Carmack back during this year's E3, when id first announced a partnership with EA to publish its next shooter, Rage. We had a chance to sit with Carmack again at this past weekend's QuakeCon, where we followed up on our earlier discussion to squeeze more details out of the legendary game developer. Carmack dished out more details about the plans for Quake Live (including his high expectations), the technology powering Rage and the next Doom, the cancelled Darkness project, and his thoughts on the current modding community.
Take a seat, grab a Mountain Dew, and click through for the full interview. You'll even find out which aspects of id Tech 5 may not be as powerful as id Tech 4!
Last month, Nvidia said it planned to tweak its 9800GTX videocard with a die shrink and faster clock speeds, resulting in the 9800GTX+, and today the release becomes official with immediate availability. Along with the 9800GTX+, Nvidia fleshes out its GeForce 9-series line with two other videocards, the 9800GT and 9500GT.
All three cards are available now, and each one brings support for Nvidia's PhysX and CUDA technologies, two areas currently exclusive to Nvidia.
"The addition of the new 9800GTX+, 9800GT, and the 9500GT GPUs brings a new level of visual computing capability to additional mainstream market segments," said Ujesh Desai, general manager of desktop GPUs at Nvidia. "Nvidia GPUs deliver the best bang for the buck in each price category, and with support for CUDA, PhysX, and 3D stereoscopic technology, consumers can now experience the unique, innovative, and immersive computing experience that only Nvidia can deliver."
Claiming victory in the bang-for-buck war would have been a tough sell just weeks ago, but such claims become easier to swallow with the 9500GT taking up residence in the sub-$70 pricing tier. Both the 9800GT and GTX+ can be bought for under $200, with the latter going head to head against ATI's HD 4850 videocard. For you old-schoolers, it hasn't been this fun to shop for a GPU since the GeForce4 Ti 4200 days.