Nvidia's Tegra platform continues to woo big-name customers, most recently attracting Nintendo, which is reportedly in talks with Nvidia to provide some extra oomph for its next-gen DS handheld console.
The deal marks a win-win for both parties. For Nvidia's part, no other handheld console would put Tegra in more hands, courtesy of the DS's 68.3 percent worldwide market share. And for Nintendo, tapping into Tegra gives the company's console a power boost sure to be well received by consumers and developers alike.
Until more details are released, we can only speculate on what the next DS might be like, but it's at least feasible that on top of the added muscle, it will also sport backwards compatibility with the existing DS library, assuming Theo Valich's sources prove reliable.
What will also be interesting to watch is how this relationship between Nintendo and Nvidia plays out in the home console market. Might Nvidia replace ATI as the graphics vendor of choice in whatever supersedes the Wii? We'll have to wait to find out.
One surefire way to egg on the hacking community is to place ever-increasing restrictions on your product, essentially daring black-hat coders to find a back door. Nvidia is finding this out the hard way after the GPU maker modified its latest PhysX drivers to prevent PhysX from working alongside any non-Nvidia GPU, says news and rumor site The Inquirer.
And if that weren't enough, the latest version of PhysX also prevents physics processing unit (PPU) cards from working if it detects a non-Nvidia card in the system. That may have been the proverbial straw that broke the hacking community's back, and a hacker who goes by the handle GenL has put together some experimental code that stops Nvidia's drivers from shutting everything down when it detects a Radeon card.
We haven't tried it ourselves, but if you're feeling adventurous, rebellious, or both, you can grab the code here.
Not everyone is an early adopter of new technology, and one of the advantages of timing your next upgrade to coincide with new releases is that current parts tend to plummet in price. But if you're looking to score a deal on a GT200 series GPU once Nvidia launches its next-gen parts, you may find the opposite to be true.
According to sources in the retail channel, a shortage of graphics cards built on the 55nm process is expected to last through the holidays and into the first quarter of 2010. The shortage will affect both AMD and Nvidia, as the two companies divert their attention toward DirectX 11-based 40nm GPUs, DigiTimes reports.
The sources went on to specifically point out Nvidia's GT200 series GPU, saying the graphics chip maker does not plan to increase supply following the launch of Windows 7.
Just in case laying claim to the title of fastest single-GPU videocard on the planet wasn't enough of a selling point for AMD's recently released HD 5870, Asus has added a twist that it claims will boost performance by up to 38 percent: Voltage Tweak Technology.
According to Asus, owners of its newly launched EAH5870/2DIS/1GD5 and EAH5850/2DIS/1GD5 videocards will be able to crank up GPU voltages through its SmartDoctor application. On the HD 5870, end users can raise the voltage from 1.15V to 1.35V, boosting GPU and memory clockspeeds from 850MHz and 4800MHz (effective) to 1035MHz and 5200MHz. On the HD 5850, gamers can up the voltage from 1.088V to 1.4V, which is enough to overclock the GPU and memory from 725MHz and 4000MHz to 1050MHz and 5200MHz.
By doing so, Asus' own benchmarking noted a 17 percent performance gain in 3DMark Vantage Extreme on the 5870, and an impressive 38 percent jump on the 5850.
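The gap between those two results makes more sense once you work out how much headroom each card gets relative to its stock clocks. A quick back-of-the-envelope calculation in Python, using only the clockspeeds Asus quotes above:

```python
# Stock and Voltage-Tweaked clockspeeds (MHz) as quoted by Asus
cards = {
    "HD 5870": {"core": (850, 1035), "memory": (4800, 5200)},
    "HD 5850": {"core": (725, 1050), "memory": (4000, 5200)},
}

for name, clocks in cards.items():
    for domain, (stock, tweaked) in clocks.items():
        gain = (tweaked - stock) / stock * 100
        print(f"{name} {domain}: {stock} -> {tweaked} MHz (+{gain:.1f}%)")
```

The 5850's core clock climbs nearly 45 percent versus roughly 22 percent for the 5870, which lines up with the 38 and 17 percent benchmark gains Asus reports.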
During Nvidia's GPU Technology conference in San Jose, California on Wednesday, Jen-Hsun Huang, the company's outspoken chief executive, announced a next-generation graphics chip code-named "Fermi."
According to Huang, Fermi comes packed with 3 billion transistors and 512 parallel processors, twice as many as last year's chip. And while it will do wonders for gaming graphics, Fermi is also aimed at scientists performing complex simulations, such as global warming modeling and weather prediction, tasks typically associated with supercomputers.
Acknowledging that his company is behind AMD in releasing next-gen parts, Huang promised that we'll see Fermi-based products within a few short months.
"Nobody likes it when the competitor has a product and we don't," Huang said. "We have a different vision. I don't like to keep our enthusiasts waiting for our next-generation processor. But if we are behind a couple of months, it's not going to matter."
I'm sitting here at Nvidia's GPU Technology Conference, and will liveblog Jen-Hsun Huang's keynote. I'd expect we'll hear lots about GPU-based computing applications, as well as some new hardware focused on GPU-based computing. Hit the jump to see the liveblog.
DirectX 10 marked a radical departure from DirectX 9: In order to be compatible, a graphics processor must feature a unified architecture in which each shader unit is capable of executing pixel-, vertex-, and geometry-shader instructions. The changes in DirectX 11 aren’t quite as fundamental, but they could have just as big an impact—and not only with games.
DirectX 11 is a superset of DirectX 10, so everything in DirectX 10 is included in the new collection of APIs. In addition, DX11 offers several new features and three additional stages to the Direct3D rendering pipeline: the Hull Shader, the Tessellator, and the Domain Shader. And in an effort to deliver cross-hardware support for general-purpose computing on graphics processors, Microsoft has come up with a new Compute Shader.
DirectX 11 will be compatible with both Vista and Windows 7, but many of its graphics features will be available on GPUs designed for previous iterations of Direct3D. Tapping into the Tessellator’s power, however, will require a GPU with transistors dedicated to the task (in this sense, DX11 marks a slight departure from DX10’s vision of a unified architecture). Let’s explore the concept of tessellation now.
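In a nutshell, tessellation means taking coarse geometry and subdividing it into many smaller triangles, which later pipeline stages can then displace to add surface detail. In DX11 this work is done by the fixed-function Tessellator, driven by factors the Hull Shader emits and refined by the Domain Shader, all written in HLSL; purely as a conceptual illustration, here's a toy Python sketch of a single subdivision step, splitting one triangle into four at its edge midpoints:

```python
# Toy illustration of one tessellation step: split a triangle into four
# smaller triangles at its edge midpoints. The DX11 Tessellator performs
# this kind of subdivision in hardware; this sketch only conveys the idea.

def midpoint(a, b):
    return tuple((ai + bi) / 2 for ai, bi in zip(a, b))

def subdivide(tri):
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

# One coarse triangle becomes four; applying the step n times yields 4**n.
coarse = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))]
refined = [small for tri in coarse for small in subdivide(tri)]
print(len(refined))  # 4
```

Repeat that a few times and a handful of coarse patches balloons into thousands of triangles, which is exactly why offloading the work to dedicated silicon matters.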
Sources at graphics card makers expect AMD to unveil the ATI Radeon HD 5770 and 5750 in October, and the Radeon HD 5870 X2 and Radeon HD 5850 X2 a month later. The report adds that AMD will launch the single-GPU Radeon HD 5890 when the market seems to plead for it. That apparently is DigiTimes' way of saying that its sources have no idea when the Radeon HD 5890 will be released.
What's the first thing I did upon hearing the numbers for ATI's new Radeon HD 5870 graphics card? I scrambled for benchmarks, because that's the one thing an announcement and subsequent review of a smokin' new piece of hardware can do for a rabid enthusiast: inspire.
It's been a while since I've actually sat down and crunched the numbers for my killer custom PC (that's killer as in legendary, not NICs). I'm not lazy. Rather, I don't have access to the expensive system benchmarks that magazines and Web sites typically use to analyze all the new hardware that comes out. I don't have all-in-one benchmarks like PCMark Vantage, GPU-punishing titles like Crysis, or--worst of all--preconfigured demo runs for any number of titles that would help ensure the validity and repeatability of the delivered scores.
In short, I have nothing. You might not have nothing, but odds are good that you are similarly ill-equipped to benchmark your graphics card (and any tweaks or modifications you make) in the style of a professional review. Nothing... until now.
This week's freeware roundup will show you five different games that you can use to punish your poor graphics card into frames-per-second submission. They might cost a grand total of zero dollars, but these tests are repeatable and easy to use--the perfect combination of characteristics for aspiring benchmarkers who might not want to get their hands dirty but still want some way to determine exactly how powerful their graphics card really is.
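If you want hard numbers rather than eyeballed smoothness, pair any of these games with a frame-capture tool such as FRAPS, which can log per-frame times to a file. Here's a minimal sketch of turning such a log into benchmark-style numbers, assuming a plain text file with one frame time in milliseconds per line (adjust the parsing to whatever format your tool actually writes):

```python
# Summarize a frame-time log into benchmark-style numbers.
# Assumes "frametimes.txt" holds one frame time in milliseconds per line;
# adapt the parsing to your capture tool's actual output format.

def summarize(path):
    with open(path) as f:
        times_ms = [float(line) for line in f if line.strip()]
    total_s = sum(times_ms) / 1000.0
    avg_fps = len(times_ms) / total_s
    min_fps = 1000.0 / max(times_ms)   # slowest single frame
    max_fps = 1000.0 / min(times_ms)   # fastest single frame
    return avg_fps, min_fps, max_fps

avg, lo, hi = summarize("frametimes.txt")
print(f"Average: {avg:.1f} fps  Min: {lo:.1f} fps  Max: {hi:.1f} fps")
```

Run the same in-game sequence before and after each tweak and compare the results; repeatability is what makes the numbers mean anything.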
In Act I of the modern-day GPU wars, AMD lit up the scene by releasing the ATI Radeon HD 5870, the fastest single-GPU videocard money can buy. In Act II, AMD hopes to claim the dual-GPU crown as well with its upcoming HD 5870 X2.
The latest rumor pegs the beastly dual-GPU videocard for an October release, though AMD hasn't said anything official yet. Nevertheless, to satisfy power users with deep pockets who are chomping at the bit, leaked pics of the 5870 X2 have hit the web.
Not just one leaked pic either, but several of them, each one showing the 5870 X2 in its massive glory. The X2 appears to trump the 5870 in length; the single-GPU card already measures about 11 inches, and while it's hard to determine exactly how long the X2 will be, it looks to be about a half-inch longer.
Get your fill of fuzzy GPU porn here, then hit the jump and sound off!