A Spanish website posted a bevy of detailed specs for Advanced Micro Devices' (AMD) upcoming Radeon HD 8000 Series graphics cards, and assuming they're accurate, there will be a dual-GPU version launching in the second quarter of 2013. That coincides nicely with a recent report that AMD was pushing the launch of its entire Radeon HD 8000 Series into next year, though we're inclined to take the launch dates with a grain of salt.
MSI’s GTX 660 is an all-around great card that includes a healthy dollop of overclocking and a side of Frozr to keep it cool. Its base clock speed is a decent 53MHz over stock at 1,033MHz, and when running at full load we saw its boost clock rise to 1,110MHz, a full 130MHz over the stock base clock and higher even than the stock boost-clock spec. The Twin Frozr III cooler sports three copper heat pipes, aluminum fins, and dual 8cm fans housed in a metal-alloy shroud to direct the airflow. Like the other GTX 660 cards, it uses just a single 6-pin power connector, but unlike the others it sports an extra-long 9-inch PCB (Gigabyte’s board is just 7.5 inches, though its cooler is actually 9 inches long).
Gigabyte’s GTX 660 is similar to MSI’s board in that it’s overclocked and has a cooler with a silly name—Windforce. The board is clocked at the same base and boost clock speeds as the MSI card, too, running at 1,033MHz and 1,098MHz, respectively. The cooler features four copper heat pipes, aluminum fins, and two large 10cm fans breathing down on the whole shebang. Even though the board sports a smallish 7.5-inch PCB, the cooling apparatus is so large that it’s 2 inches longer than the PCB and extends the length of the card to 9.5 inches. With a cooler this large you’d expect it to perform quite well, and it does. It kept the card absolutely silent even when the board was being tortured in the Lab, and allowed it to run at a moderately cool 63 C under full load.
Hello, gorgeous. That’s what we said when we first laid eyes on Nvidia’s reference design for the GeForce GTX 690, which combines two full 28nm GK104 GPUs into one PCB and covers them with the best-looking cooling shroud we’ve seen on any videocard. Our in-depth analysis of the reference card can be found in our August 2012 issue, but we can’t verdictize a reference card. If you’re wondering how this Asus GTX 690 differs from the reference card Nvidia sent us, wonder no more: It’s exactly the same, except the edges of the PCB are a slightly different color.
When Asus’s Zenbook UX31E debuted last year, it seemed to almost single-handedly put Ultrabooks on the map. Its intriguing mix of good looks, performance, and price convinced many a skeptic, us included, that PCs could compete with the likes of Apple’s vaunted MacBook Air—at a price that catered to common folk.
The UX32VD comes with a protective sleeve, as well as a small pouch for carrying two connector dongles: one USB-to-Ethernet, one Mini-VGA-to-VGA.
Advanced Micro Devices (AMD) promised performance gains of up to 15 percent with its Catalyst 12.11 driver, which the company announced in conjunction with its "Never Settle" game bundles. Our own evaluation of the driver update using a Radeon HD 7970 yielded mostly minor framerate bumps compared to Catalyst 12.8, though performance did improve in the majority of games and benchmarks we tested it with. If you're interested in grabbing the new driver package, it's now available to download, albeit in beta form.
Nvidia's relationship with the open source Linux community is sometimes strained, such as when Linus Torvalds flipped Nvidia the bird and dropped f-bombs at the GPU maker in frustration over the lack of Linux support. It is what it is, and slowly but surely, things are improving. Proof of that can be found in Nvidia's new 304.51 display driver for Linux, which addresses a whole bunch of issues and adds support for several graphics cards.
The $250 price point is where the hardcore and the serious gamer part ways. It’s not that hardcore gamers aren’t serious—it’s that they sometimes lose perspective, willing to throw vast, silly sums of money at shiny high-end GPUs. Serious gamers know that a good $250 graphics card will buy you high frame rates on standard, 1080p displays without requiring a second mortgage.
XFX’s “Ghost” fan shrouds are easy on the eyes, but they don’t vary much from card to card.
Every GPU generation has its flagship videocards: the ones with the top-of-the-line GPU with all cores enabled, loaded for bear. In this generation, those cards are Nvidia’s GTX 680 (with a full GK104 GPU inside) and AMD’s Radeon HD 7970 (with a full Tahiti GPU). These cards are monstrously fast, but they’re also expensive and tricky to manufacture, and not all parts come off the line fully functional. So a few months after each flagship GPU launch, the vendors come out with a slightly stripped-down version that uses binned top-end GPUs with a few parts disabled, or lower clock speeds. AMD’s Radeon HD 7950, for example, uses the same GPU as the 7970, but with 28 GCN units instead of 32, and with an 800MHz reference clock instead of 925MHz. These cheaper, lower-powered videocards appeal both to gamers with shallower pockets and to vendors, who clock those stripped-down, less expensive GPUs right back up to within spitting distance of their full-powered peers. Thus we arrive at the Asus GeForce GTX 670 DirectCU II TOP, a factory-overclocked GTX 670 with a custom cooling solution.
The DirectCU II cooler’s three direct-contact heat pipes keep the GPU cool.
The GTX 660 is the first 28nm Kepler board based on a new GPU dubbed GK106, and the final 6-series card to support high-performance features like GPU Boost and SLI. Compared to the GTX 660 Ti, the GTX 660 offers the same 2GB of GDDR5 memory, the same 192-bit memory interface, and the same number of ROP units, but it loses two SMX units, giving it just 960 CUDA cores versus 1,344 in the GTX 660 Ti (and 1,536 in the GTX 680). At $230 it’s our new favorite GPU in the price-to-performance category.