Ahoy hoy! This week, Maximum PC is going to be reporting from Nvision 08, Nvidia's three-day visual computing festival in downtown San Jose. In addition to being a massive LAN party (bigger than the GeForce LANs of previous years), Nvision is also playing host to an epic gathering of demoscene developers, ready to show off their visual coding skills. We'll be there to sit in on the keynotes given by Nvidia's CEO and Battlestar Galactica's Tricia Helfer (seriously?), check out the various workshop tracks, and test drive the new hardware and software on display. Keep your eyes peeled for daily photo galleries and event reports. And if you're in the area and going to Nvision yourself, stop by the exhibit hall on Tuesday at 2:30pm to watch a presentation run by our own Will Smith. Personally, I can't wait for the Buzz Aldrin meet-and-greet session and a chance to heckle the too-kool-for-skool hosts of Diggnation during their live recording session. Hope to see some MaxPC readers there.
Nvidia continues to feel the pressure from a suddenly competitive ATI and will once again tweak one of its mainstream videocards. Back in June, Nvidia took its 9800GTX card based on the immensely popular G92 core, shrank the core from 65nm to 55nm, pushed the core, memory, and shader clockspeeds, and dubbed the resulting product the 9800GTX+. This time around, it's the GTX 260 that will undergo a revision.
Citing an unnamed source, Expreview reports that Nvidia will add another Texture Processing Cluster (TPC) to its GTX 260, bringing the total from 8 to 9. By doing so, the revised card will sport 216 shader processors instead of the 192 found in the original GTX 260. As far as Expreview knows, core, shader, and memory clockspeeds will remain the same.
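The shader counts above are internally consistent: they imply a fixed 24 stream processors per TPC on the GT200-based GTX 260. A quick back-of-the-envelope check (the 24-per-TPC figure is derived from the numbers in the report, not stated in it):

```python
# Original GTX 260: 192 shader processors spread across 8 TPCs.
shaders_per_tpc = 192 // 8
print(shaders_per_tpc)  # 24

# Revised card: one extra TPC, same shaders-per-TPC ratio.
revised_shaders = 9 * shaders_per_tpc
print(revised_shaders)  # 216 -- matches Expreview's figure
```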
If the report holds true, look for the updated card to arrive in September.
3DFX changed the gaming landscape forever when it brought 3D graphics to the masses, and in similar fashion, ray tracing looks to be the next big revolution on the horizon. The promise of photorealistic scenery has tantalized both developers and gamers, but is real-time ray tracing in games anywhere close to being a reality?
In an interview with Tom's Hardware, Intel's Daniel Pohl talked about the API Intel is using to showcase ray tracing demos and what he thinks needs to happen before the technology will be ready for commercial development.
"Creating higher image quality even faster. That requires smart anti-aliasing algorithms, a level of detail mechanism without switching artifacts, particle systems that also work in reflections, a fast soft shadowing algorithm, adoption to upcoming hardware architectures. We have some topics to keep us busy," said Pohl.
In the case of ray tracing, it's a matter of the hardware needing to catch up with the software. Pohl and his team of ray tracing researchers have been "targeting future architectures that consists out of tens, hundreds, and even thousands of cores," noting an almost linear scaling of frame rates with the number of processor cores.
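The near-linear scaling Pohl describes follows from ray tracing being embarrassingly parallel: every primary ray is computed independently of every other, so rows or tiles of the image can simply be divided among cores. A minimal sketch of that idea (the shading function here is a dummy placeholder, not real ray tracing):

```python
from multiprocessing import Pool

def trace_row(y, width=640):
    # Stand-in for a real ray tracer: each pixel's primary ray depends
    # only on its own (x, y) coordinates, never on neighboring pixels.
    return [(x * y) % 256 for x in range(width)]  # dummy shading values

def render(height=480, workers=4):
    # Because rows are independent work units, adding worker processes
    # raises throughput almost linearly (until memory bandwidth or
    # scheduling overhead gets in the way).
    with Pool(workers) as pool:
        return pool.map(trace_row, range(height))

if __name__ == "__main__":
    image = render()
    print(len(image), len(image[0]))  # 480 640
```

This independence is exactly why Pohl's team expects the technique to keep paying off on architectures with hundreds or thousands of cores.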
Intel isn't the only one looking to push ray tracing technology into the mainstream, with Nvidia putting on demonstrations of its own. Here's hoping the race to the finish line ends up resembling more of a sprint than a marathon.
Do you subscribe to Maximum PC magazine? If so, turn to page 11 in the recently released October issue (everyone else, scroll down to the order form for two free trial issues, or jump straight to the subscription page). In the sidebar, Tom Halfhill discusses how AMD isn't too big to fail, and should it fall, Intel would be left as the sole provider of x86 chips to the high-end consumer market. Even staunch Intel fans can recognize this to be a bad thing, and as Halfhill points out, "AMD's demise would [overnight] create a monopoly that's almost impossible for another company to break." Or would it?
According to one of the hotter rumors making the rounds on the web, Nvidia might be doing more than just looking to get into the x86 market; it might already be working on it. Preposterous? Maybe not. Few would consider Intel's and Nvidia's relationship a warm and fuzzy one, and as the divide between GPUs and CPUs looks to close, it's at least within the realm of possibility that Nvidia could be hashing out an x86 chip.
Nvidia’s secret war with Intel has evolved into a full-scale arms race for the atomic bomb of graphics technology: ray tracing. Using its forum at SIGGRAPH, Nvidia demonstrated an interactive ray tracing simulation using four of the company's next-generation Quadro GPUs. They were housed in a Quadro Plex 2100 D4 Visual Computing System with an estimated street price of around $11,000. Not exactly your standard gaming rig, but it gets the point across. Either way, it appears as though Nvidia is finally taking a cue from Intel and focusing at least some of its effort on developing hardware capable of making the technique a reality for everyday users.

The demonstration featured linear scaling of an anti-aliased Bugatti Veyron model with over two million polygons. It ran at a resolution of 1920x1080 (1080p) and chugged along at an impressive 30 FPS. The demo also featured image-based lighting paint shaders, reflections and refractions, and ray-traced shadows. Industry insiders noted that it was an impressive undertaking, since it was one of the first interactive ray tracing demonstrations run on a GPU. Intel has demonstrated ray tracing using Quake 3, but that was done using CPU power.

Larrabee will be Intel’s counter in the consumer market, but it remains to be seen whether its CPU-style design will be as capable of pushing out polygons as Nvidia’s offerings. Gamers are no doubt hoping the new race to master ray tracing will accelerate its development, but I have a feeling we will be playing Duke Nukem Forever long before we see consumer ray tracing solutions from either company. Still, the important first steps are now well underway.
Two years ago, Nvidia unveiled its Quadro Plex range of visual computing systems at SIGGRAPH 2006. Now, at this year’s SIGGRAPH, it has announced desk-mounted visual supercomputers in the Quadro Plex range. The D series of Quadro Plex visual computing systems is claimed to have leapfrogged previous versions by more than 100 percent in terms of performance. The NVIDIA Quadro Plex 2200 D2 VCS has two Quadro FX 5800 GPUs, 4 dual-link DVI channels, and 8 GB of frame buffer memory, while its sibling, the NVIDIA Quadro Plex 2100 D4 VCS, has four GPUs, 8 dual-link DVI channels, and a 4 GB frame buffer.
The D series visual supercomputers are ideal for highly taxing 3D models, engineering designs, and other scientific visualizations. The hundreds of Nvidia CUDA parallel processing cores pack copious parallel computing capability, and the visual supercomputers can be easily hooked up to workstations or servers using PCI Express adapter cards. The D series is due in September, with prices starting at $10,750.
The ink was hardly dry on the Khronos Group's August 11th announcement of the OpenGL 3.0 API specification when Nvidia released beta drivers supporting the standard. These new drivers implement the OpenGL 3.0 API and the GLSL 1.30 shading language for both Windows XP and Vista on selected GeForce and Quadro videocards. This isn’t totally unexpected, since Nvidia is a member of the Khronos Group.
“OpenGL 3.0 is a significant advance for graphics standard and we’re proud that NVIDIA has played a major role in developing it,” said Barthold Lichtenbelt, Manager, Core OpenGL Software at NVIDIA and chair of the OpenGL working group at Khronos. “OpenGL 3.0 will be a first-class API on both GeForce and Quadro boards. Shipping drivers two days after this new specification is released demonstrates our strong commitment to the OpenGL developer community and our partners who rely on the standard.”
There has been much speculation on how the OpenGL 3.0 API will compete with DirectX 10. Some truly great games were made with previous OpenGL specs, including Doom 3, any of the Quake series, Starsiege: Tribes, and the original Half-Life. Those games are getting long in the tooth, however, and most newer titles have been built on DirectX, including Valve's Source engine.
We can look forward to developers putting out new games using this standard in the future. Given all that was accomplished with OpenGL 2.1, I’m pretty excited about what’s coming.
It's been a rough ride for Nvidia as of late; the company not only has had to contend with a suddenly competitive ATI, but also finds itself battling a bad batch of mobile GPUs (which might turn out to be a bigger problem than initially stated). The struggles have turned financial, with the graphics chip maker reporting a net loss of $120.9 million for the second quarter, or 22 cents a share. This is in stark contrast to one year ago, when the company posted a profit of $172.7 million, or 29 cents a share.
The quarter's results include a $196 million charge Nvidia took to cover warranty, repairs, and other costs associated with an "abnormal failure rate" among its mobile GPUs. Nvidia executives are hopeful for a somewhat better third quarter, saying they expect revenue to grow "slightly."
"We didn't lose any share, the market just got soft on us," said chief executive Jen-Hsun Huang. And while Huang admitted that the second quarter results are "disappointing," the company still saw its shares rise by 10 percent after announcing a $1 billion boost to its stock buyback program.
No one has been more critical of Nvidia than rumor and news outlet The Inquirer, which recently declared that all of the chipmaker's G84 and G86 parts are bad. The extent of the problem is still to be determined, but here's what's known so far:
A batch of bad GPUs has found its way into the wild, causing an "abnormal failure rate" among certain laptop models.
To deal with the problem, Nvidia said it was setting aside a one-time charge of $150 to $200 million to cover warranty and repair costs associated with the faulty mobile parts.
Both HP and Dell have released lists of notebook models potentially affected by the faulty GPUs and are encouraging owners to update their BIOS as a preventive measure (the newer BIOS kicks on the cooling fan earlier than it normally would). HP has also extended its warranty for the affected models.
Nvidia has since moved on to its 9M series GPUs, and in the process has presumably solved whatever problem affected the previous-generation parts, right? Not so fast, says The Inq. According to the rumor site, the fundamental flaw in the manufacturing process still exists, and now G92 and G94 parts are reportedly failing. The Inq claims that no fewer than four partners are already seeing the new chips go bad at high rates, and believes that Nvidia "is simply stonewalling everyone" about the alleged problem.
If true, another batch of bad parts could be disastrous for the chip maker, which continues to lose graphics market share to Intel and has seen its stock price plummet in the wake of a disappointing 8-K filing.
Is the problem bigger than Nvidia's letting on, or will it be this latest rumor that ultimately turns out to be the dud?
Here’s the second part of our exclusive QuakeCon interview with John Carmack. In the first part of our conversation, Carmack discussed his hopes for Quake Live and id Software’s new gaming direction with Rage. This time around, he gets into the heady technical stuff, with his thoughts on Nvidia’s CUDA, physics accelerators, general-purpose computing, and ATI’s rumored Fusion technology. Here’s a snippet:
John Carmack – I was well known as not being a supporter of the PhysX accelerators. It’s always felt like a gimmicky plan, with people setting up a company to be acquired. For years, the tack has been: what do you do any time Intel delivers something more with processors and more cores? It’s never really proven out right, and there are a lot of reasons for it.
For one thing you can’t scale AI and physics in general with your gameplay, while with graphics, you could scale. Without scaling, you can’t design a game that requires fancy AI and then turn off the fancy AI for the low end systems because practically that’s not possible. Similarly for physics, if it’s anything other than eye candy, you also can’t scale. If the building is going to fall down you need to know whether you’re going to be able to get past it on the high end or the low end.