Nvidia’s secret war with Intel has evolved into a full-scale arms race for the atomic bomb of graphics technology: ray tracing. Using its forum at SIGGRAPH, Nvidia demonstrated an interactive ray tracing simulation running on four of the company's next-generation Quadro GPUs, housed in a Quadro Plex 2100 D4 Visual Computing System with an estimated street price of around $11,000. Not exactly your standard gaming rig, but it gets the point across. Either way, it appears Nvidia is finally taking a cue from Intel and focusing at least some of its effort on hardware capable of making this technique a reality for everyday users.

The demonstration featured an anti-aliased Bugatti Veyron model with over two million polygons, with performance scaling nearly linearly across the GPUs. It ran at 1920x1080 (1080p) and chugged along at an impressive 30 FPS, showing off image-based lighting, paint shaders, reflections and refractions, and ray traced shadows. Industry insiders called the demo an impressive undertaking, as it was one of the first interactive ray tracing demonstrations performed on a GPU. Intel has demonstrated ray tracing with Quake 3, but that demo ran on CPU power. Larrabee will be Intel’s counter in the consumer market, though it remains to be seen whether its CPU-style design can push out polygons as capably as Nvidia’s offerings.

Gamers are no doubt hoping the new race to master ray tracing will accelerate its development, but I have a feeling we will be playing Duke Nukem Forever long before we see consumer ray tracing solutions from either company. Still, the important first steps are now well underway.
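To appreciate what those four GPUs are grinding through, consider the core operation of any ray tracer: testing a ray against scene geometry. Here's a minimal, illustrative C sketch of a ray-sphere intersection test (our example, not Nvidia's code); an interactive tracer performs millions of these per frame, on top of shading, reflection, and shadow rays:

```c
#include <math.h>

/* Minimal sketch of the core ray tracing operation: testing a ray
   against a sphere. This is illustrative only, not the demo's code. */
typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 sub(Vec3 a, Vec3 b)   { Vec3 r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }

/* Returns the distance along the ray to the nearest hit, or -1 for a miss. */
double intersect_sphere(Vec3 origin, Vec3 dir, Vec3 center, double radius)
{
    Vec3 oc = sub(origin, center);
    double b = 2.0 * dot(oc, dir);             /* dir assumed normalized */
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * c;
    if (disc < 0.0) return -1.0;               /* ray misses the sphere */
    double t = (-b - sqrt(disc)) / 2.0;
    return (t > 0.0) ? t : -1.0;
}
```

Multiply that by every pixel at 1920x1080, every bounce, and every shadow ray, thirty times a second, and the appeal of throwing four Quadros at the problem becomes clear.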
Two years ago, Nvidia unveiled its Quadro Plex range of visual computing systems at SIGGRAPH 2006. Now, at this year’s SIGGRAPH, it has announced new deskside visual supercomputers in the Quadro Plex range. The D series of Quadro Plex visual computing systems is claimed to more than double the performance of previous versions. The NVIDIA Quadro Plex 2200 D2 VCS has two Quadro FX 5800 GPUs, 4 dual-link DVI channels, and 8 GB of frame buffer memory, while its sibling, the NVIDIA Quadro Plex 2100 D4 VCS, has four GPUs, 8 dual-link DVI channels, and a 4 GB frame buffer.
The D series visual supercomputers are aimed at highly taxing 3D models, engineering designs, and other scientific visualizations. Hundreds of Nvidia CUDA parallel processing cores deliver copious parallel computing capability, and the systems can be hooked up to workstations or servers using PCI Express adapter cards. The D series is due in September, with prices starting at $10,750.
The ink was hardly dry on the Khronos Group's August 11 announcement of the OpenGL 3.0 API specification when Nvidia released beta drivers supporting the standard. The new drivers implement the OpenGL 3.0 API and the GLSL 1.30 shading language on both Windows XP and Vista for selected GeForce and Quadro videocards. This isn’t totally unexpected, since Nvidia is a member of the Khronos Group.
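If you want to verify that an installed beta driver actually exposes the new API, one low-tech check is to query the version strings from an active OpenGL context. Here's a minimal sketch in C; it assumes a context has already been created by your windowing toolkit of choice:

```c
#include <stdio.h>
#include <GL/gl.h>

/* Older gl.h headers may not define this OpenGL 2.0+ token. */
#ifndef GL_SHADING_LANGUAGE_VERSION
#define GL_SHADING_LANGUAGE_VERSION 0x8B8C
#endif

/* Call with a current OpenGL context; prints what the driver exposes.
   An OpenGL 3.0 driver should report version "3.0" and GLSL "1.30". */
void report_gl_support(void)
{
    printf("Renderer: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("OpenGL:   %s\n", (const char *)glGetString(GL_VERSION));
    printf("GLSL:     %s\n", (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION));
}
```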
“OpenGL 3.0 is a significant advance for graphics standards, and we’re proud that NVIDIA has played a major role in developing it,” said Barthold Lichtenbelt, Manager, Core OpenGL Software at NVIDIA and chair of the OpenGL working group at Khronos. “OpenGL 3.0 will be a first-class API on both GeForce and Quadro boards. Shipping drivers two days after this new specification is released demonstrates our strong commitment to the OpenGL developer community and our partners who rely on the standard.”
There has been much speculation about how the OpenGL 3.0 API will compete with DirectX 10. Some truly great games were made with previous OpenGL API specs, like Far Cry, the Quake series, Starsiege: Tribes, and the original Half-Life. Those games are getting long in the tooth, though, and newer titles have been built on DirectX, including those running on Valve's Source engine.
We can look forward to developers putting out new games using this standard. Given all they accomplished with OpenGL 2.1, I’m pretty excited about what’s coming.
It's been a rough ride for Nvidia as of late: not only has the company had to contend with a suddenly competitive ATI, it's also battling a bad batch of mobile GPUs (which might turn out to be a bigger problem than initially stated). The struggles have turned financial, with the graphics chip maker reporting a second-quarter net loss of $120.9 million, or 22 cents a share. That's in stark contrast to a year ago, when the company posted a profit of $172.7 million, or 29 cents a share.
The quarter's results include a $196 million charge Nvidia took to cover warranty, repairs, and other costs associated with an "abnormal failure rate" among its mobile GPUs. Nvidia executives are hopeful for a somewhat better third quarter, saying they expect revenue to grow "slightly."
"We didn't lose any share, the market just got soft on us," said chief executive Jen-Hsun Huang. And while Huang admitted that the second quarter results are "disappointing," the company still saw its shares rise by 10 percent after announcing a $1 billion boost to its stock buyback program.
No one has been more critical of Nvidia than rumor and news outlet The Inquirer, which recently declared that all of the chipmaker's G84 and G86 parts are bad. The extent of the problem is still to be determined, but here's what's known so far:
- A batch of bad GPUs has found its way into the wild, causing an "abnormal failure rate" among certain laptop models.
- To deal with the problem, Nvidia said it is setting aside a one-time charge of $150 million to $200 million to cover warranty and repair costs associated with the faulty mobile parts.
- Both HP and Dell have released lists of notebook models potentially affected by the faulty GPUs and are encouraging owners to update their BIOS as a preventive measure (the newer BIOS kicks the cooling fan on earlier than it normally would). HP has also extended its warranty for the affected models.
Nvidia has since moved on to its 9M-series GPUs, presumably solving whatever problem affected the previous-generation parts, right? Not so fast, says The Inq. According to the rumor site, the fundamental flaw in the manufacturing process still exists, and now G92 and G94 parts are reportedly failing too. The Inq claims that no fewer than four partners are already seeing the new chips go bad at high rates, and believes that Nvidia "is simply stonewalling everyone" about the alleged problem.
If true, another batch of bad parts could be disastrous for the chip maker, which continues to lose graphics market share to Intel and has seen its stock price plummet in the wake of a disappointing 8-K filing.
Is the problem bigger than Nvidia's letting on, or will it be this latest rumor that ultimately turns out to be the dud?
Here’s the second part of our exclusive QuakeCon interview with John Carmack. In the first part of our conversation, Carmack discussed his hopes for Quake Live and id Software’s new direction with Rage. This time around, he gets into headier technical territory with his thoughts on Nvidia’s CUDA, physics accelerators, general-purpose computing, and ATI’s rumored Fusion technology. Here’s a snippet:
John Carmack – I was well known as not being a supporter of the PhysX accelerators. It’s always felt like a gimmicky plan, with people setting up a company to be acquired. For years, the tack has been: what do you do any time Intel delivers something more with processors and more cores? It’s never really proven out right, and there are a lot of reasons for it.
For one thing, you can’t scale AI and physics in general with your gameplay, while with graphics you can scale. Without scaling, you can’t design a game that requires fancy AI and then turn off the fancy AI for low-end systems, because practically that’s not possible. It’s similar for physics: if it’s anything other than eye candy, you also can’t scale it. If a building is going to fall down, you need to know whether the player can get past it on both the high end and the low end.
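To make that distinction concrete, here's a hypothetical sketch in C (our illustration, not Carmack's code) of why eye-candy physics can scale with hardware while gameplay-affecting physics cannot:

```c
/* Hypothetical illustration: cosmetic effects can scale with a detail
   setting, but simulation that decides gameplay outcomes must run
   identically on every machine, or players see different worlds. */

enum detail { DETAIL_LOW = 256, DETAIL_MED = 1024, DETAIL_HIGH = 4096 };

/* Eye candy: safe to scale. Fewer debris particles on a slow PC
   changes how the scene looks, not how the game plays. */
void simulate_debris_particles(enum detail level)
{
    int count = (int)level;
    /* ...integrate 'count' purely visual particles... */
    (void)count;
}

/* Gameplay physics: not safe to scale. Whether the collapsing wall
   blocks the doorway must be the same answer on every machine. */
void simulate_collapsing_wall(void)
{
    /* ...fixed rigid-body count, fixed timestep, deterministic... */
}
```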
Nvidia has licensed Transmeta’s power-conserving technology for a sum of $25 million. The deal covers Transmeta’s flagship power management technologies, LongRun and LongRun 2. Transmeta has quickly mastered its current business model of licensing IP to bigger companies, and its coffers are loaded with cash.
It shouldn’t surprise anyone that Nvidia has licensed Transmeta’s power management technology, as most chip manufacturers are concentrating on increasing power efficiency.
Matrox's TripleHead2Go Digital Edition, which enables you to drive up to three digital monitors from a single DVI port, has just received a significant upgrade.
We last encountered TripleHead2Go Digital Edition in our January 2008 review of the Hypersonic Sonic Boom OCX flight simulator PC. Hypersonic used it to drive three 1280x1024 digital monitors for a 3840x1024 panoramic view of the wild blue virtual yonder.
So, what's new with TripleHead2Go Digital Edition? You can now run up to three widescreen displays at 1680x1050 or 1440x900. Three monitors at 1680x1050 give you an eye-popping 5040x1050 desktop, while three at 1440x900 provide a slightly less stunning 4320x900 desktop (it also supports WXGA's 1366x768 resolution).
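The spanned-desktop math is simply three side-by-side widths at an unchanged height; here's a trivial C sketch of it:

```c
#include <stdio.h>

/* The spanned desktop is three monitors side by side: total width is
   3 x the per-monitor width, and the height is unchanged. */
int main(void)
{
    int widths[]  = { 1680, 1440, 1366 };
    int heights[] = { 1050,  900,  768 };
    for (int i = 0; i < 3; ++i)
        printf("3 x %dx%d -> %dx%d spanned desktop\n",
               widths[i], heights[i], 3 * widths[i], heights[i]);
    return 0;
}
```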
And the best news is that you don't need to buy a new version of the external box. If your graphics card has an ATI or NVIDIA DirectX 10 GPU with the latest graphics driver and a dual-link DVI connector, running on Windows XP or Vista, all you need to do is:
1. Upgrade your TripleHead2Go Digital Edition's firmware to version 6.52 or later.
2. Install the GXM software suite 2.03.02 or later.
3. Choose your monitors' resolution from the display settings.
If you're not sure you're ready for the upgrade, the upgrade page also offers a link to the GXM System Compatibility Tool.
Like the sound of TripleHead2Go Digital Edition? Already using one? Your chance to sound off comes after the jump.
CrunchGear reports that the 177.79 ForceWare driver release will activate PhysX on the GPU for GeForce 8000-, 9000-, and 200-series videocards. The estimated release date is August 12, although the drivers are available in beta here. I was not able to verify this in the release documentation; no mention is made of PhysX support. The CrunchGear story is based on a TechReport article offering a first look at GPU PhysX acceleration. Unfortunately, I am limping along on my 7600 GT, which is not yet supported for PhysX under CUDA.
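GPU PhysX rides on CUDA, so one quick way to see whether your card can even join the party is to ask the CUDA runtime for capable devices. This is only a hedged pre-flight sketch using the public CUDA runtime API, not the PhysX API itself:

```c
#include <stdio.h>
#include <cuda_runtime.h>

/* Hypothetical pre-flight check: GPU PhysX requires a CUDA-capable
   part (GeForce 8-series or later). This only queries the CUDA
   runtime; it does not touch PhysX. Build with nvcc or link cudart. */
int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable GPU found; PhysX stays on the CPU.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s (compute %d.%d)\n",
               i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```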
Have any brave souls jumped into the beta drivers with a GeForce 8000 or better video card to test the PhysX waters? Tell me what you think about it below!
Sporting almost the same configuration as the reference design we previewed last month, BFG’s GeForce GTX 280 delivers amazing performance with the second-generation DirectX 10 chipset from Nvidia. It soundly spanks ATI’s new 4870, as well as all but the dual-GPU graphics solutions from the previous generation—and even against those, the GTX 280 wins all but a few benchmarks. The real question we’re asking is, Do we need this much power?