ASUS HAS GOTTEN a lot of mileage out of its beefy DirectCU II GPU-cooling technology. The company's Matrix-branded GeForce GTX 580, for example, brought some serious overclocking chops to that GPU. The DirectCU II versions of the GeForce GTX 560 Ti and the Radeon HD 6870 also sport serious overclocks, and those cards perform well in their respective classes. Better still, the company doesn't charge much of a price premium for its best cooling tech on cards below the Matrix GTX 580.
We’re scratching our heads, however, over Asus’s decision to offer this GTX 570 card in a three-slot configuration similar to its Matrix GTX 580, but running at Nvidia’s reference clock speeds. The beefy cooler delivers plenty of DIY overclocking potential, but you must assume all the risk. Since we review cards based on out-of-the-box performance, we had to benchmark this one with its 742MHz core clock and 3,800MHz (effective) memory clock.
One good thing the new cooler does provide is fewer decibels. This card isn’t whisper-quiet under load, but it generates much less noise than many of the cards in its class—particularly the Radeon HD 6970, which can get fairly loud under heavy loads.
Now that the Nvidia GTX 680 has (finally) hit the streets, manufacturers are tripping over themselves to release cards that somehow stand out from the pack. A lot of the time, that means a custom cooling system; last week alone we saw new GTX 680s from Palit and Gainward covered in fans and heatsinks, respectively. Now, EVGA is getting in on the fun with the EVGA GeForce GTX 680 Hydro Copper, a card that comes equipped with a preinstalled waterblock and a big ole factory overclock.
When is a GTX 560 Ti not really a GTX 560 Ti? When it’s almost a GTX 570.
Nvidia’s latest GPU release, the GTX 560 Ti 448, is really the GTX 580’s chip (the GF110) with two functional blocks disabled, reducing its CUDA Core count from 512 to 448. The GTX 570 is a GF110 with one functional block disabled, endowing it with 480 CUDA Cores. The original GTX 560 Ti is a completely different chip, with different power requirements, but all 384 of its cores are fully functional.
Graphics card vendors have been busy with the onslaught of new PC titles heading into the holiday, forcing AMD to release its second out-of-cycle performance driver in less than two weeks. Catalyst 11.11b includes CrossFire performance scaling for Skyrim, similar multi-GPU support for Assassin’s Creed Revelations, and DirectX 11 tweaks for Batman: Arkham City.
Nvidia’s latest GPU release, the GF110, is essentially a re-engineered version of the original Fermi chip, with the addition of a few tweaks. By re-spinning the original, the full potential of Fermi is now realized, with all 512 compute cores active. (The original GeForce GTX 480 had the same number of compute cores, but 32 of them were deactivated.) Besides that, the GF110 features other enhancements, like improved FP16 texture performance, which boosts the frame rate in scenes using high dynamic range (HDR) rendering. The new chip also clocks higher; reference cards run at 772MHz core and 1,000MHz memory.
The first feature-reduced version of the GTX 580 arrives, rendering the GTX 480 obsolete and body-slamming the Radeon HD 5870.
This is the silly season for PR presents. Technology writers and product reviewers receive boxes in the mail, sometimes elaborately giftwrapped, from public relations people in the industry. Usually, what we find inside are fruit, chocolate, calendars with generic photographs and assorted pastries. So when we got a gift box from Nvidia, we naturally thought it was one of the usual holiday PR gimmicks.
We were wrong. When we got around to opening the box, we found this:
This is the follow-up to the GeForce GTX 580. Unsurprisingly, it’s called the GTX 570. As with the earlier GTX 470, it’s a cut-down version of the mother chip, offering 480 compute cores instead of the GTX 580’s 512 cores. Other features have been scaled back as well.
We gave the GTX 570 a spin with our full battery of benchmarks. Hit the jump to find out more!
Nvidia’s GeForce GTX 580 is what the original should have been: quieter, full-featured, faster and more efficient.
When Nvidia launched the GTX 480 -- code-named the GF100 -- early this year, the new GPU proved to be something of a mixed bag. It was undeniably fast, but also crippled -- every GTX 480 shipped with one full functional unit disabled. Whether that was because of yield or power issues wasn't clear. Power clearly was a problem -- Nvidia's flagship ran hot and loud.
Given the competition, Nvidia had to get Fermi out the door. Even before the original Fermi left the building, Nvidia’s engineers were heads-down, respinning and reengineering the GF100. The result is the GF110. The new GPU is, as Emperor Palpatine might put it, “fully operational”, with all functional units now enabled.
Hit the jump for a detailed analysis of the GTX 580.
It's the end of an era, folks. In the coming months, AMD will retire the ATI brand, letting the ATI name ride off into the sunset after a remarkable 25-year run, presumably never to be seen again. Don't mistake that to mean AMD is getting out of the graphics business -- it isn't -- but once the brand is dropped, you won't see the ATI name attached to any new Radeon, FirePro, or Eyefinity products.
The decision came after AMD sent out surveys to several thousand "discrete graphics aware" respondents spread out across the U.S., U.K., Germany, China, Japan, Brazil, and Russia. According to John Volkmann, AMD's VP of global corporate marketing, "the Radeon brand and the ATI brand are equally strong with respect to conveying our graphics processor offering." That might be so, but it doesn't tell the full story behind ATI and its 25-year tenure in the graphics business, one that includes witnessing the rise and fall of 3dfx, and continued participation in what's largely become a two-man battle in the discrete graphics space.
Join us as we take a look back at some of the most important periods and events in ATI's history, starting with when it was formed in 1985.
We know you're anxious to learn all about Apple's upcoming tablet, and you will, but not until tomorrow morning when Steve Jobs plans to announce "a major new product that we're really excited about." So even though it might be pretty poor timing on HP's part, there's a new video making the rounds on the Web in which Phil McKinney, CTO of HP's Personal Systems Group, answers a few questions about his company's upcoming HP Slate.
Most of the video deals with the Slate's background and history, and we learn that HP first began working on the tablet concept five years ago "around the concept of an e-reader platform." Based in part on user feedback requesting rich media content, the initial concept evolved into the Slate, McKinney says.
"What we predict is that users are looking for that consolidated device, that one device that they can use really as their ultimate content consumption experience," McKinney explains. "And also we saw this gap in the marketplace north of kind of what a smartphone was and smaller than the netbook and notebook. They wanted something thin and light, but again, allowing them to have that rich media experience."
According to McKinney, the Slate will be every bit as good as the current e-book readers on the market, but also capable of a whole lot more. What he didn't say, however, is what kind of hardware you can expect, though he did describe 2010 as the optimal year for the Slate because of a "perfect storm of innovation" consisting of a convergence of "low cost, low power processors, Win 7 with an operating system that is touch aware, the ability to create these kind of platforms with new kinds of touch technologies and hit that price point."