Nvidia has been largely silent on their upcoming Fermi GPUs. Now we’re hearing that the new line of consumer DirectX 11 graphics processors will be going into production around the third week of February. Apparently, “low quantities” will be available in mid-March.
Nvidia has fallen behind GPU rival AMD/ATI since the latter released its first DirectX 11 part last September. The Fermi chips were originally slated for a November release. The delays led some to speculate that Nvidia was shifting its business away from consumer-level desktop graphics cards and toward mobile and enterprise solutions.
When the new chip goes into production, yields are expected to be low. They are, however, expected to be higher than current Radeon 5800 yields, which hover around 4%. With all the delays, hopefully Nvidia can at least get it right and really knock our socks off in the performance department.
Nvidia is looking to assuage fears that it is falling behind rival AMD in the GPU race. Nvidia’s Michael Hara said the lead AMD currently has in DirectX 11 is “insignificant”. “To us, being out of sync with the API for a couple of months isn't as important as what we're trying to do in the big scheme of things for the next four or five years,” said Hara.
Nvidia’s next-generation Fermi is supposed to appear in the first quarter of 2010. However, few details are available beyond the apparent low production yields. Hara also stressed the importance of DirectX 11, as it will offer tessellation and support for multi-core processors. The new standard will also fully support DirectCompute, allowing parallel GPU processing in various applications.
So Nvidia must feel like they have a winner on their hands to be talking up DX11 so much. We can only hope.
The University of Antwerp gave everyone a chuckle last year when they built a quaint little supercomputer made out of four high-end Nvidia GPUs. Apparently, that was just a practice run. The same group has now constructed a 13-GPU monster of a supercomputer called Fastra II.
The rig contains six dual-GPU Nvidia GTX 295 cards and a single GTX 275. As you can imagine, there were a few issues getting the whole system up and running; motherboard manufacturers don’t usually anticipate someone needing to run 13 GPU cores. With a little persistence and a custom BIOS from ASUS, the tiny supercomputer came to life. The whole affair cost only 6,000 euros, and is capable of twelve teraflops.
The value per teraflop is high considering most conventional supercomputers cost millions of dollars to build and run. You can check out some possible applications and crazy benchmarks here.
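As a rough back-of-the-envelope sketch (using only the figures quoted above; the comparison cost for a conventional supercomputer is an illustrative assumption, not from the article):

```python
# Cost-per-teraflop estimate for Fastra II, using the article's figures.
fastra2_cost_eur = 6_000   # quoted build cost
fastra2_tflops = 12        # quoted peak throughput

cost_per_tflop = fastra2_cost_eur / fastra2_tflops
print(f"Fastra II: {cost_per_tflop:.0f} euros per teraflop")  # 500 euros/TFLOP

# Hypothetical conventional machine for scale: even a modest
# multi-million-euro system lands orders of magnitude higher.
conventional_cost_eur = 2_000_000  # assumed figure for illustration
conventional_tflops = 12
print(f"Conventional: {conventional_cost_eur / conventional_tflops:.0f} euros per teraflop")
```

At 500 euros per teraflop, the GPU cluster undercuts a hypothetical conventional machine of the same throughput by a factor of several hundred.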
At Microsoft’s Professional Developers Conference, the Redmond giant showed off an early build of Internet Explorer 9 complete with GPU acceleration. Not to be outdone, Mozilla has indicated it too is working on GPU acceleration for the popular Firefox browser. After the Microsoft demo, Mozilla Director of Developer Relations, Chris Blizzard tweeted, “Interesting that we're doing Direct2D support in Firefox as well - I'll bet we'll ship it first. :)"
Later, Firefox developer Bas Schouten wrote about the addition of Direct2D to the browser. He said the browser wouldn’t look much different, but rendering should be much improved. Schouten provided benchmark data for Direct2D rendering compared to the standard Windows Graphics Device Interface (GDI). While some sites showed little difference, several saw dramatic reductions in rendering times. Hopefully we’ll see this technology sooner rather than later. However, there are currently no ship dates for either product.
Matrox isn’t a name you hear a lot anymore. The graphics spotlight has been effectively taken over by Nvidia and AMD. Matrox isn’t letting that get it down, however, and has announced a new GPU, the Matrox M9188 PCIe x16 multi-display Octal.
The M9188 comes equipped with eight DisplayPort outputs and 2GB of RAM. Each of the DisplayPorts is capable of driving a monitor with a resolution of 2560x1600. They also throw in eight DisplayPort to DVI adapters in case you have eight DVI monitors lying around.
Further, the driver supports multiple cards in a single system. So with two of these monsters, you’d be capable of running 16 monitors with a total resolution of 20480x3200, in a 2x8 configuration. Good luck finding wallpaper for that.
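The math behind that wall of pixels checks out; a quick sketch (the grid dimensions and per-panel resolution come straight from the article):

```python
# Sanity-check the quoted 16-monitor desktop: a 2x8 grid of
# 2560x1600 panels, driven by two eight-output M9188 cards.
cols, rows = 8, 2
panel_w, panel_h = 2560, 1600

total_w = cols * panel_w   # 8 panels wide
total_h = rows * panel_h   # 2 panels tall
print(f"{total_w}x{total_h}")           # 20480x3200
print(f"{total_w * total_h:,} pixels")  # 65,536,000 pixels
```

That's over 65 megapixels of desktop, which goes some way toward explaining the wallpaper problem.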
The Radeon 5700 series cards will be built upon ATI’s new 40nm “Juniper” chip, which consists of 1.04 billion transistors on a 166mm2 die. The smaller chip makes it possible for ATI to offer the cards at lower prices than the current DirectX 11 capable Radeon 5800 series cards: the HD 5870 and HD 5850. Price for the HD 5770 is set at $159, with the HD 5750 going for $129. ATI will later release a 512MB version of the HD 5750 for $109.
Setting up and maintaining a liquid-cooling setup isn't for everyone, and it's this crowd BFG is targeting with a pair of maintenance-free, self-contained liquid-cooled GeForce graphics cards, the GTX 285 H2O+ and the GTX 295 H2OC.
Both new cards sport BFG's new ThermoIntelligence Advanced Cooling Solution, which, fancy title aside, means you can enjoy the benefits of water cooling your videocard(s) without all the fuss. According to BFG, the cards are easy to install right out of the box and never need refilling or additional components. The benefit, says BFG, is up to 30°C cooler temperatures under load when pitted against standard air-cooled models.
"We're very excited to be the first company to bring this type of professional grade advanced cooling solution to PC enthusiasts," said John Malley, senior director of marketing for BFG.
BFG's GTX 295 H2OC will sport a 675MHz core clockspeed, 2214MHz memory data rate, and 1458MHz shader clockspeed. The GTX 285 H2O+ will run at 691MHz, 2592MHz, and 1566MHz core, memory, and shader clockspeeds, respectively.
The GTX 295 H2OC will be available in limited quantities starting August 5th, while the GTX 285 H2O+ will also be available in limited quantities, starting August 12th. No word on price.
AMD's new ATI Graphics Scout is a visual wizard designed to help you find the "perfect" ATI GPU for your needs. Graphics Scout provides feature selections in four categories: video applications, pictures and photos, games, and office applications. Select the most important feature or features in some or all categories, and Graphics Scout (which resembles Star Wars' R2-D2 with a flat-panel upgrade) suggests a suitable match.
Earlier this week, The Inquirer complained that Graphics Scout was pushing out some questionable suggestions. Thankfully, as an update to the original story indicates, ATI's been making some changes, and in our tests today, it made recommendations that make sense:
When we selected video editing, photo editing, DirectX 10+ gaming, and Microsoft Office applications, it suggested the top-of-the-line HD 4890.
When we changed our mind and selected big-screen TV connections with Blu-ray support, photo viewing and editing, online gaming, and web browsing, Graphics Scout suggested the mid-line HD 4550.
The ability to move up and down the GPU line to see what upgrading or downgrading the recommended selection would get you is handy, as is the ability to compare any other card with the recommended one. For its intended UK audience, Graphics Scout is great, as it provides links to various UK dealers. For users in other countries, it's still useful, but you'll need to use a site such as Cnet's Shopper.com to find actual products for sale. Take Graphics Scout for a spin and join us after the jump to chime in on its recommendations.
Just in case you were worried that Intel wasn’t committed to its heavily delayed Larrabee platform, a $12 million investment in a new Visual Computing Institute should help convince you otherwise. Located at Saarland University in Saarbrücken, Germany, this is the largest joint project ever formed between Intel and a European university. The institute will help Intel explore advanced graphical computing technologies, which includes everything from more realistic gaming to advanced 3D user interfaces.
The primary focus of the research will be Intel’s tera-scale computing program. This will help Intel better understand how it can apply Larrabee’s unique multiple-x86-core architecture to achieve sustainable performance increases over modern-day GPUs. Larrabee has been delayed until some unknown date in 2010, presumably because it hasn’t yet achieved the kind of performance gains Intel was hoping for against Nvidia and AMD.
In addition to the tera-scale research, Intel will also work with other hardware design labs in Barcelona, Spain, and Braunschweig, Germany, to help optimize the Larrabee design. Z-buffering, clipping, and even ray tracing are all promises made by the Larrabee team, but clearly the software needed to make all this happen still requires some work.
Want more details? Click here to watch the press video.
So is Larrabee really the future? Or does this only prove Nvidia’s case that its promise is overhyped?
Palit Microsystems began offering a custom-built GTX 285 with 2GB of memory in February. On the face of it, Sparkle’s entire staff was probably marooned on a remote island – or away on an intergalactic excursion, and therefore had no idea what was going on.
The GTX 285 runs at a core clock frequency of 648MHz. Sparkle has also promised its card will deliver “30% faster performance than competing single-GPU graphics card solutions.” But the company is mum on pricing.