Build the Perfect PC! Step-by-Step Illustrated How-To Guide


Maker’s Mark is of course the name of a fine Kentucky bourbon whiskey, but the phrase also applies to the stamp that skilled artisans apply to their creations. When you’ve finished building your custom PC, we’d encourage you to stamp it with your own maker’s mark; after all, the one-of-a-kind creation you’ll have wrought will have nothing in common with the mass-produced rigs that mainstream manufacturers churn out by the millions.

That’s one of the most exciting aspects of our hobby. Automobile buffs can tune and customize their factory-built cars and trucks, but computer geeks like us get to build something new and unique almost entirely from whole cloth. And it’s so easy that you have to wonder why anyone would buy a preassembled PC in the first place.

Thanks to the relatively open architecture that IBM stumbled into oh so many years ago (and has likely regretted ever since), we can rebuild and retune our creations again and again, boosting their performance and postponing their obsolescence. We do hit a wall every now and again. Intel’s new Core i7 CPU is a good example. Because the new processor features an onboard memory controller—a first for Intel, although AMD’s procs have had the technology for years—the company had to design a new socket architecture to accommodate the additional pins. That blocks the upgrade path for anyone using an LGA775 motherboard.

Intel has AMD on the run on the CPU front, but AMD is poking Nvidia in the behind in the graphics processor market. The result: ever more powerful, ever less expensive videocards. The two companies have shipped so many new parts that we expect things will stabilize over the next quarter or so, so now’s the time to find a great deal whether you’re building a new rig or retrofitting an old one. And if you’ve never experienced the joy and pride of building your own PC, click through to read our in-depth, hands-on guide.

Jump to:



CPU Coolers & Cases

Build-It Guide

Overclocking Intel and AMD

It's a Good Year For CPUs

There’s a Dizzying Array of CPUs Available To Enthusiasts Today; But Choice is a Good Thing Whether Your Budget is $150 or $1,000

Stop. If you enjoy sitting and twiddling your thumbs as you watch the bar slowly creep toward 100 percent while creating your home video or converting RAW digital files to JPEG, don’t read this because it’s all about the CPU. That magical piece of silicon that still makes just about everything in your PC faster. Sure, the GPU guys continue to crow about their massively parallel parts, but to everyone in the know, it’s still the CPU that does the heavy lifting.

Fortunately, times have never been better for enthusiasts (and other people who don’t like slow things). Power-hungry users have three main families to pick from: Intel’s new Core i7, Intel’s older Core 2 Quad, and AMD’s Phenom X4.

If you want sheer performance, reach for Intel’s Core i7. This will probably insult Intel, but the new chip is like the brain of a Core 2 Quad combined with the plumbing of an AMD Athlon 64. The Core i7 features an improved design that makes it more efficient than the current 45nm Core 2 Quad and Extreme CPUs, too.

A new feature dubbed Turbo Mode, for example, will automatically overclock the chip. And in addition to the return of Hyper-Threading, these chips also feature an integrated tri-channel memory controller and a high-speed chip-to-chip interconnect.

Core i7 is the hot new thing and the new fastest chip in town. Intel has even made the pricing of its new chip somewhat attractive: From a high of $1,000 to a low of $300.

Intel’s Core 2 series still has some legs, too. They’re cheap, plentiful, and the ecosystem of motherboards is so common you can find them abandoned behind warehouses. Intel offers the chips in the quad-core Core 2 Quad/Extreme variations and the dual-core Core 2 Duo versions. As representatives of Power Users Local 187, we must advise you to stick with a quad-core—unless you’re handcuffed by a severe budget or just like to wait for things to happen. Quad-cores are cheap and far faster in multi-threaded applications than dual cores. Unlike the Core i7 chips, which are all based on the superior 45nm process technology, Intel has both 65nm and 45nm Core 2s available on the desktop. You can tell the difference by the first number of the processor number: sixes are older, while nines are newer. A 2.66GHz Core 2 Quad Q6700 is based on the 65nm process, while a 2.66GHz Core 2 Quad Q9450 uses the newer 45nm process. The 45nm versions are faster, cooler, and preferred over the older 65nm parts.

AMD’s Phenom X4 quad-core CPUs aren’t on quite the power curve of Intel’s CPUs, but they are dirt cheap and that matters to quite a few. The company’s fastest CPU, the Phenom X4 9950 Black Edition, clocks in at 2.6GHz and puts its older Athlon 64 chips out to pasture. All in all, these aren’t bad chips for budget buyers. AMD hopes to get back in the game at the end of this year when it finally releases its own 45nm-based quad cores.

If you’re confused by all this, just think of it this way: Core i7 gets the gold, Core 2 Quad gets the silver, and Phenom X4 gets the bronze. So what is it about Core i7 that makes it the gold-medal winner? Read on to dig into details of the chip and to see just how fast this puppy is.

Core i7 Cometh, and Kicketh Butt

Forget Moore’s Law and Amdahl’s Law; heck, just throw out the whole dang tech penal code. Ung’s Law dictates that the minute you buy something new, something better will come out the very next day. Well, that’s the story with Intel’s Core i7 CPUs at least. The sequel to Intel’s first 45nm CPU, Core i7 is a massive break from the past for Intel. Gone is the ancient front-side bus that tied all the CPUs together. Gone is the external memory controller. And gone is any possibility that Intel would turn back into its flabby old self and just coast now that AMD is falling apart at the seams. No, Intel’s Core i7 is a mean, lean chip that is here to chew bubble gum and kick ass. And, as Roddy Piper said, he’s all out of gum.

Read on for all the juicy details on the new chip, complete with the results of our hands-on benchmarks.

Nehalem Q & A

What’s the big deal with the integrated memory controller?

One of Core i7’s most significant changes is the inclusion of an integrated memory controller. Instead of memory accesses traveling from the CPU across a relatively slow front-side bus to the motherboard chipset and finally to the RAM, an IMC puts the controller on the CPU die itself, eliminating the need for a front-side bus and external memory controller. The result is dramatically lower latency than was found in the Core 2 and Pentium 4 CPUs.

Why can’t the memory controller on the motherboard simply be pushed to higher speeds to match an IMC? Remember, when you’re talking about a memory controller residing directly in the core, the signals have to travel mere millimeters across silicon that’s running at several gigahertz. With an external design, the signals have to travel out of the CPU to a memory controller in the chipset an inch or so away. It’s not just distance, either—the data is traveling across a PCB at far, far slower speeds than if it were just within the CPU. In essence, it’s like having to go from an interstate to a bumpy dirt road.

AMD loyalists reading this, of course, are probably bristling at the thought of Intel calling an IMC an innovation. After all, AMD did it first. So doesn’t that make AMD the pioneer? We asked Intel the same question. The company’s response: One: An IMC isn’t an AMD invention; in fact, Intel had both an IMC and an integrated graphics core planned for its never-released Timna CPU years before the Athlon 64. Two: If AMD’s IMC design was so great, why does the Core 2 so thoroughly trash it with an external controller design? In short, Intel’s message to the AMD fanboys is “nyah, nyah!”

Naturally, you’re probably wondering why Intel thinks it needs an IMC now. Intel says the more efficient, faster execution engine of the Core i7 chip benefits from the internal controller more than previous designs did. The new design demands boatloads of bandwidth and low latency to keep it from starving as it waits for data.

Is tri-channel better than dual-channel RAM? And what do I need to know about configuring it?

Yes, tri-channel is superior. First, Intel said it needed to add a third channel because the Core i7’s parallel design needs the additional bandwidth.

As for configuring it, just as you had to populate both independent channels in a dual-channel motherboard, you’ll need to run three memory DIMMs to give the chip the most bandwidth.

This does present some problems for board vendors, as standard consumer mobos have limited real estate. Most performance boards will feature six memory slots, but some will feature only four.

On these four-slot boards, you’ll plug in three sticks of RAM and use the fourth only if you absolutely have to, as populating the last slot will actually reduce the bandwidth of the system. Intel, in fact, recommends the fourth slot only for people who need extra RAM more than they need peak bandwidth. With three 2GB DIMMs, most enthusiast systems will feature 6GB of RAM as standard.

Although it may change, Core i7 will officially support DDR3/1066, with higher unofficial speeds supported through overclocking. Folks hoping to reuse DDR2 RAM with Intel’s budget chips next year should think again. Intel has no plans to support DDR2 with Core i7 at this point; and with DDR3 prices getting far friendlier to the wallet, we don’t expect the company to change its mind.
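
If you want to put rough numbers on what that extra channel buys you, the back-of-the-envelope math is simple: peak bandwidth is just the transfer rate times the 8-byte width of each DDR3 channel times the number of populated channels. Here’s a quick sketch of that arithmetic, assuming the official DDR3/1066 speed (the function itself is ours, purely for illustration):

```python
# Back-of-the-envelope DDR3 bandwidth: transfer rate x 8-byte channel width
# x number of populated channels. Assumes the official DDR3/1066 spec.

def peak_bandwidth_gbs(transfers_per_sec, channels, channel_bytes=8):
    """Theoretical peak memory bandwidth in GB/s."""
    return transfers_per_sec * channel_bytes * channels / 1e9

dual = peak_bandwidth_gbs(1066e6, channels=2)  # classic dual-channel: ~17.1 GB/s
tri = peak_bandwidth_gbs(1066e6, channels=3)   # Core i7 tri-channel:  ~25.6 GB/s

print(f"dual-channel DDR3/1066: {dual:.1f} GB/s")
print(f"tri-channel DDR3/1066:  {tri:.1f} GB/s")
```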

Didn’t Hyper-Threading suck on the Pentium 4? Why bring it back?

First, you should know that a CPU core can execute only one instruction thread at a time. Since that thread will touch on only some portions of the CPU, resources that are not used sit idle. To address that, Intel introduced consumers to Hyper-Threading beginning with its 3.06GHz Pentium 4 chip. Hyper-Threading, more accurately called simultaneous multi-threading, partitions a CPU’s resources so that multiple threads can be executed at the same time.

In essence, a single-core Pentium 4 appears as two CPUs to the OS. But since it is actually just one core dividing its resources, you don’t get the same performance boost you would receive from adding a second core. Hyper-Threading does, however, generally smooth out multitasking; and in applications that are optimized for multi-threading, you will see a modest performance advantage. The problem is that not all applications were coded for Hyper-Threading when it was released and performance could actually be hindered. Hyper-Threading went away with the Core 2 series of CPUs, but Intel has dusted off the concept for the new Core i7 series because the transistor cost is minimal and the performance benefits stand to be far better than what the Pentium 4 could ever achieve.

Intel toyed with the idea of redubbing the feature Hyper-Threading 2 but decided against it, as the essential technology is unchanged. So why should we expect Hyper-Threading to be more successful this go around? Intel says it’s due to Core’s huge advantage over the Pentium 4 in bandwidth, parallelism, cache sizes, and performance. Depending on the application, the company says you can expect from 10 to 30 percent more performance with Hyper-Threading enabled. Still, Intel doesn’t force it down your throat because it knows many people still have mixed feelings about the feature. The company recommends that you give it a spin with your apps. If you don’t like it, you can just switch it off in the BIOS. Intel’s pretty confident, however, that you’ll leave it on.

Our own tests of Hyper-Threading on a Core i7 show that it’s definitely worth it, provided you run multi-threaded applications that can exploit it.
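
If you want to confirm Hyper-Threading is actually doing its thing, the OS should report twice as many logical CPUs as physical cores. Here’s a minimal sketch of that check, assuming the third-party psutil module is installed (pip install psutil):

```python
# A quick way to confirm Hyper-Threading is active: the OS should report
# twice as many logical CPUs as physical cores.
import psutil

physical = psutil.cpu_count(logical=False)  # real cores
logical = psutil.cpu_count(logical=True)    # hardware threads the OS sees

print(f"{physical} physical cores, {logical} logical CPUs")
if logical == 2 * physical:
    print("Hyper-Threading (SMT) appears to be enabled")
else:
    print("Hyper-Threading appears to be disabled or unsupported")
```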

How significant is it that Intel is killing the front side bus?

As you know, Intel currently uses front-side-bus technology to tie its multiprocessor machines together. As you might imagine, problems arise when two quad-core CPUs have to share a single front-side bus. With so many cores churning so much data, the front-side bus can become gridlocked. Intel “fixed” this issue by building chipsets with two front-side buses. But what happens when you have a machine with four or eight CPUs? Since Intel couldn’t keep adding front-side buses, it took another page from AMD’s playbook by building in direct point-to-point connections over what it calls a Quick Path Interconnect. Server versions of the Core i7 feature two QPI connections (desktop versions get just one), each of which can move data at about 25GB/s, double what a 1,600MHz front-side bus can achieve.

For Intel, the move to QPI is long overdue as the front-side bus in multi-processor configurations was considered a serious handicap. It’s unlikely to impact people with machines with just one CPU today; but for servers, it means the world.
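
For the curious, the rough math behind that comparison looks like this. Note that the 6.4GT/s QPI transfer rate and the 2-byte link width are our assumptions based on Intel’s published QPI specs, not figures from this story:

```python
# Rough peak-bandwidth comparison of the old front-side bus and QPI.
# The QPI figures assume a 6.4GT/s link moving 2 bytes per direction,
# counted in both directions, which is our reading of Intel's specs.

fsb_gbs = 1600e6 * 8 / 1e9      # 1,600MT/s x 64-bit (8-byte) bus = 12.8 GB/s
qpi_gbs = 6.4e9 * 2 * 2 / 1e9   # 6.4GT/s x 2 bytes x 2 directions ≈ 25.6 GB/s

print(f"1,600MHz FSB: {fsb_gbs:.1f} GB/s")
print(f"QPI link:     {qpi_gbs:.1f} GB/s  ({qpi_gbs / fsb_gbs:.1f}x)")
```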

Tell me about Turbo Mode. It sounds like something out of Knight Rider.

Intel designed Core i7 to be very aggressive in power management. With the previous Core 2, power to the CPU could be lowered only so far before the chip would crash. That’s because while you can cut power to large sections of the execution core, the cache can tolerate only so much power reduction before it blows up. With the Core i7, Intel separates the power circuit, so the cache can be run independently. This lets Intel cut power consumption and thermal output even further than before. Furthermore, while the Core 2 CPUs require that all the cores be idle before reducing voltage, individual Core i7 cores can be turned off if they’re not in use.

Turbo Mode exploits the power savings by letting an individual core run at increased frequencies if needed. This again follows Intel’s mantra of improving performance on today’s applications. Since a majority of today’s applications are not threaded to take full advantage of a quad core with Hyper-Threading, Turbo Mode’s “overclocking” will make these applications run faster. In our tests, it worked well for speeding up single-threaded applications, but you will need additional cooling to exploit it to its fullest extent.

So Core i7 will work with my brand-new Socket LGA775 board, right?

Silly rabbit, remember Ung’s Law: If you just bought a $500 motherboard, the new chip won’t work with it. And it doesn’t. Core i7 requires a new LGA1366 motherboard. This isn’t done just to piss you off. The on-die memory controller and tri-channel DDR3 support alone required a new board. Of course, your heat sink won’t work either (at least not without a new bracket), as the new socket is a smidge wider than the old one, so toss that in the recycle bin too.

Just how fast is it?

Fast. We compared the top-end $999, 3.2GHz Core i7-965 Extreme Edition against the $1,399, 3.2GHz Core 2 Extreme QX9770 and AMD’s $175, 2.6GHz Phenom X4 9950 Black Edition and the Core i7 whopped ‘em both. First, AMD fanboys, please zip it. We know the Phenom is far cheaper but it’s also the absolute fastest CPU that AMD can muster right now. It’s not exactly Core i7’s fault that AMD won’t be able to kick it up a notch until the end of this year.

Second, we tested all the machines using matched components and drivers, but there’s one element that we biased in favor of the older chips: RAM. The Phenom X4 used 4GB of DDR2/800 and the Core 2 Extreme QX9770 used 4GB of DDR3/1333, while the Core i7-965 made do with 3GB of DDR3/1066.

That’s because the Core i7-965’s tri-channel controller requires that memory be matched in triads instead of pairs. The chip can run memory faster than DDR3/1066, but Intel officially rates it at that speed, so that’s how we ran it.

Even with the disadvantage, it cleaned both the Phenom X4’s and Core 2 Extreme’s clocks. Versus Phenom X4, we saw the Core i7 encoding, crunching, and generally running everywhere from 80 percent to 98 percent faster. Sure, there were some tests where the Phenom X4 got within striking distance (if you consider 30 percent striking distance) but for the most part, the Phenom X4 got handed its hat, soiled undies, and umbrella at the door.

The Core i7’s Core 2 sibling did better, but even the mighty (and expensive) QX9770 got beat bloody by the Core i7, with many tests running from 25 percent to 45 percent faster. The only test that saw the Core i7 lose by any serious margin was our FEAR test, but that was more than likely due to a bug in the benchmark (which is showing its age).

The upshot is that the Core i7 is a fast CPU. It instantly dispels any fears that Intel would phone it in this generation. Even with AMD on the ropes and selling off pints of blood to make ends meet, Intel is clearly intent on keeping up the pressure.

Read our complete Core i7 review here!

AMD’s Phenom X3

When Four Cores are too Many and Two are Not Enough

If the gods give you lemons, make lemonade. And if the fabs give you bunk quad-core procs, you make tri cores. At least AMD does.

The company’s 2.4GHz Phenom X3 8750, 2.3GHz Phenom X3 8650, and 2.1GHz Phenom X3 8450 tri-core chips all feature the same cache size as the quad-core Phenoms, but one fewer core. They’re targeted mainly at consumers on a dual-core budget who want a little more bang for their buck. The 8750 is street priced at $180, the 8650 goes for $150, and the 8450 fetches $100.

AMD’s main problem is that its fastest CPU, the quad 2.6GHz Phenom X4 9950, costs just $185. Intel is also a factor, having lowered the price of its older quad cores to compete against AMD. For example, Intel’s 2.4GHz Core 2 Quad Q6600, which outperforms any Phenom, is now selling for just $190. Fortunately, all three of AMD’s tri cores are free of the TLB bug that hurt performance in the original quad-core Phenom chips.

Click here for our full Phenom II overview, analysis, and benchmarks!

KIA: K10 In Action

What tales do the benchmarks tell about AMD’s Phenom?

Our comparison is based on an unlocked engineering-sample Phenom that we ran at 2.6GHz and 2.3GHz in Asus’s 790FX-based M3A32-MVP Deluxe board. We compared AMD’s CPU to the original Intel 2.66GHz Core 2 Extreme QX6700 CPU that we received from Intel more than a year ago. While it carries the Extreme tag, the QX6700 is identical to the Core 2 Quad Q6700 except that it’s unlocked. That let us run the chip at both 2.66GHz and 2.4GHz to simulate the performance of a Core 2 Quad Q6600. The board used for the Core 2 chip was EVGA’s 680i SLI. Both machines featured DDR2 RAM clocked at 1,066MHz. Memory timing was manually set on both platforms, and both machines used 150GB Western Digital Raptor hard drives and identically clocked GeForce 8800 GTX cards, as well as the same drivers.

Once we were finished with the Phenom testing, we dropped in an Athlon 64 X2 6400+ for comparison. All of these tests were done with the TLB bug disabled so no performance penalty was paid. Since AMD’s new 2.6GHz Phenom X4 9950 Black Edition is essentially the same chip as the Phenom 9900 without the TLB bug, the performance is identical. The newer Phenom X4 9950 BE, however, should offer better overclocking results.

With the speed of the 9900 topping out at 2.6GHz, we didn’t feel the need to break out Intel’s 3.2GHz Core 2 Extreme QX9770. In fact, we didn’t even bother to break out a new 45nm part for our comparison, just Intel’s ancient Core 2 Quad Q6700. And what about Intel’s new $300 2.66GHz Core i7-920? Safely tucked into bed asleep. There was simply no reason to wake it.

The Final Verdict

AMD tried to do too much with too little. While Intel kept it safe and built its first 65nm “quad cores” by gluing two dual-cores together, AMD thought it could build a native 65nm quad core. The result was chips with low yields and an inability to push clock speeds to the point of being even remotely competitive with Intel.

The sad part is that Phenom is really not a bad CPU. It’ll whip an Athlon 64 in most tasks (gaming may favor the A64, as few games are optimized for quad-core) and it is somewhat competitive with Intel’s Core 2 Quad Q6700 part. But again, the Q6700 is more than a year old. Newer 45nm parts that have largely replaced it are from 10 to 30 percent faster.

So, after all the trash talk of “true quad core” computing, AMD is left with a consumer CPU line that it has had to deeply discount just to make attractive.

At the end of this year, Phenom may get a shot in the arm when AMD pushes out a 45nm version; but for now, we can see only one reason to build on Phenom: You already have an AM2 board that supports the chip. Otherwise, all the performance action is on the other side of the fence.

How we tested: For the Core i7, we used an Intel DX58SO board based on the new X58 chipset with 3GB of DDR3/1066, a single Western Digital Raptor 150, Windows Vista 64-bit with SP1 installed, and a GeForce 8800 GTX. For the Core 2 Extreme, we used a Gigabyte X48T-DQ6, 4GB of DDR3/1333, a Western Digital Raptor 150, Windows Vista 64-bit with SP1 installed, and a GeForce 8800 GTX. For the Phenom X4, we used an MSI K9A2 Platinum board using the ATI 790FX chipset, 4GB of DDR2/800, a Western Digital Raptor 150, Windows Vista 64-bit with SP1 installed, and a GeForce 8800 GTX.

Motherboard Buyer's Guide

Early adopters be warned: Only a handful of X58 motherboards have been released so far, which means your options are limited (and pricey) if you want to build a Core i7 machine. If you’re willing to throw down the big bucks to go the Nehalem route, here are your choices for building a cutting-edge rig.

Asus P6T Deluxe

Hallelujah, the P6T Deluxe ushers in an era of graphics reunification

Asus’s P6T Deluxe isn’t the most over-the-top Core i7 board we’ve tested, but it certainly has a leg up on Intel’s bare-bones DX58SO. For one thing, it finally brings us graphics reunification by supporting both two-card SLI and CrossFire X configurations.

Click here for the rest of the review, benchmarks, and our verdict!

MSI Eclipse SLI

Alphabet soup heaven: Tri-SLI, i7, and X-Fi

An eclipse occurs when one celestial body obscures another. When MSI stuck its X58 motherboard with that moniker, we wondered just what it wanted to hide. Our guess is it’s the fact that the board supports ATI’s CrossFire X. Despite the Eclipse’s support for CrossFire X, MSI chose to change the name of the board at the last minute from simply Eclipse to Eclipse SLI. Regardless, the Eclipse SLI is jam-packed with features that would make any geek weep, including cross-platform GPU support, Core i7, six-slot DDR3, and onboard soft X-Fi audio.

We’ve now tested three X58 boards, and the Eclipse SLI has an edge over its closest competitor, the Asus P6T Deluxe, which we reviewed in January, as well as the stock Intel DX58SO board that we used for most of our Core i7 testing. The Eclipse SLI is technically able to run tri-SLI. We say technically because though you might be able to jam a GTX 280 into the third slot, you’ll probably have to saw off the end of the card to make it fit in your case—the card has to be seated in the bottom slot and hangs over the mobo by about an inch. We tested the Eclipse with a pair of EVGA GTX 280 cards but were unable to test it in tri, as our early board shipped without a bridge. MSI will include bridges with retail boards.

Right now, it’s difficult to compare the performance of the three X58-based boards we’ve tested, as it’s challenging to make sure the boards are all set to the same specs. We attribute most of the performance differences we’ve seen to how each vendor sets up the CPU, not to the performance differences with each board. One thing in the Eclipse’s favor: There’s no need to activate the X-Fi drivers on the board, which is necessary on the Asus boards that feature host-based X-Fi drivers.

So what board would we stick our Core i7 in? It’s hard to say at this point, but if we were forced to choose, the Eclipse SLI would just edge out the Asus P6T Deluxe. But to be honest, with BIOS updates coming out in near real time for the new CPU and new chipset, the answer to that question might be different next month.

Verdict: 9

Not ready to jump to Intel’s Core i7? No problem, the market has plenty of alternatives to offer

Sometimes it pays to wait. AMD’s and Intel’s older CPU lineups might not be the hot new tickets, but they are safe, stable, and endowed with terrific price/performance ratios.

Need more reasons to hold off on Core i7? We’ll give you four: First, every motherboard and chipset designed for this CPU is as new as it is, so there are bugs aplenty to be wrung out of them. Why pay top dollar to act as a beta tester?

Second, you’ll need to run three sticks of DDR3 memory in order to achieve the best performance.

Third, motherboards compatible with Intel’s new baby are only just now trickling out.

Fourth, your good buddy with the obsession for upgrading to the latest hardware (suckah!) just gave you one hell of a deal on his old Core 2 Quad (or Phenom X4). Besides all that, the land of Core 2 motherboards is target rich: Depending on your CPU, you could pick up something as ancient as an Intel 975X or 965 board.

Intel’s P35 and X38 lineups are next, and the latest Intel dynamic duo is the P45 and X48. We recommend the P45 for the best CPU compatibility, and the X48 if you want to light up CrossFire. Interested in SLI? Nvidia’s 680i is a classic, or there’s the updated 780i or 790i series. Nvidia also offers the economical 750i and 730i chipsets.

AMD’s Phenom doesn’t have quite the lineup that Intel does, but the pickin’s aren’t slim. AMD’s own 790FX is displacing Nvidia among multi-GPU users. AMD’s 790GX integrated-graphics chipset has much to offer, too, including the ability to combine the integrated GPU with a discrete part in a hybrid CrossFire mode.

Click to check out Intel Core-Logic Cagematch: Motherboard Roundup!

Asus Maximus II Formula

Nerds, start your engines!

People who buy motherboards with mainstream chipsets such as the P45 don’t want to pay for DDR3. At least, that’s the way it seems to us. Asus’s impressive Maximus II Formula is the third P45-based board we’ve tested, and not one of them sports DDR3 slots. But that doesn’t take anything away from the MIIF, the coolest P45 board we’ve encountered.

With its subdued heatsink, motherboard-based X-Fi support, and oversized start and reset buttons, the Maximus II Formula sports some slick features. It performs quite ably too. MSI’s more garish P45 Platinum outpaces the MIIF by a small margin in some benchmarks, but the MIIF led the MSI and a Gigabyte P45 board in RAM speeds by good margins. So, we’ll call it a wash.

Click here to check out the rest of the review!

Gigabyte GA-EP45-DQ6

Generous offerings make this P45 board unique

If you don’t just like Gigabit ports—you love them—Gigabyte’s GA-EP45-DQ6 is the motherboard for you. This mobo has four Gigabit ports that can be teamed together for one seriously fat-ass network connection.

The board is typical Gigabyte in other respects; it includes surface-mounted buttons and the most clearly marked USB and FireWire ports we’ve ever seen. If you nuke your USB drive because you plugged it into a FireWire header, it’s your own fault, brother.

Click here to check out the rest of the review!

MSI P45 Platinum

A budget mobo you can count on

We admit it, sexy chipsets such as Nvidia’s nForce 790i SLI Ultra and Intel’s X48 get all the ink; but in reality, most of the world runs on plain-vanilla chipsets such as Intel’s new P45. And the truth is, you don’t necessarily give up performance or features when you choose a middle-of-the-road board; in fact, the affordable MSI Platinum has just about everything you’d want in a motherboard.

Let’s start with the chipset: Intel’s new P45 gives you far more features than Intel’s X38 and X48 higher-end chipsets. The P45 Platinum adds PCI-E 2.0 to this mainstream chipset and is the first mobo to use the new ICH10 south bridge, which lets you shut off individual USB or SATA ports to prevent people from stealing your data. (The new south bridge was rumored to add 10Gb Ethernet, but that’s not the case.)

Click here to check out the rest of the review!

EVGA nForce 790i Ultra SLI

The seven series done right

We weren’t impressed with Nvidia’s follow-up to the popular 680i chipset. The 780i felt like a retread of the original and lacked support for Intel’s top proc: the 1,600MHz FSB Core 2 Extreme QX9770. Plus, PCI Express 2.0 was simply tacked on as an extra chip and DDR3 support was glaringly absent.

Nvidia heard our complaints and created the 790i chipset, represented here by EVGA’s Ultra SLI board. It has native PCI-E 2.0, 1,600MHz FSB support, and DDR3. This board even addresses another shortcoming of the 680i and 780i reference boards: lack of eSATA.

Click here to check out the rest of the review!

MSI P35 Combo Platinum

DDR2 or DDR3—it’s your choice!

You can change CPU sockets, dump PCI, and jettison legacy ports all day long, but nothing, absolutely nothing, pisses people off like moving to a new type of RAM. Luckily, there’s a fallback: dual-format RAM motherboards such as MSI’s P35 Combo Platinum board.

Based on Intel’s P35 chipset, the Combo Platinum will take up to four DDR2 modules or two DDR3 modules. But don’t think about running them simultaneously—it’s impossible. You’ll also have to run a pair of funky blank adapters to get the board running.

Click here to check out the rest of the review!

Gigabyte MA790GP-DS4H

Integrated graphics that don’t suck (much)

Pardon us, but crowing that your integrated graphics chip is better than your competitor’s integrated graphics chip is a bit like bragging that your D is better than your friend’s D-.

As sad as that is, it’s the tack AMD is taking with its 790GX chipset, which Gigabyte’s MA790GP-DS4H mobo is based on. While the chipset features DirectX 10 support and indeed might be faster than other integrated graphics solutions, it’s still slower than the ancient GeForce 7600 GS.

The 790GX does support a hybrid mode, which allows you to pair an equally weak Radeon HD 3400-class GPU with the board. By adding the subpar performance of the Radeon to the integrated graphics, you immediately realize you should have purchased a better videocard.

What’s interesting about the 790GX is that it scales from dirt-poor integrated, to illogical hybrid support, all the way up to full CrossFire. The MA790GP-DS4H takes full advantage of the CrossFire slots and lets you run two GPUs at full x16 PCI-E 2.0 data rates. However, Gigabyte makes a faux pas by pointing the SATA ports straight up. Running two double-wide GPUs in the board cuts off several SATA ports.

The real news is the inclusion of AMD’s new SB750 south-bridge chip, which adds RAID 5, additional SATA ports, and the ability to directly overclock the CPU further than you could before, theoretically. Our Phenom overclocks have been good but not stellar, and we didn’t seem to get much further with the new SB750.

Benchmark performance was all over the map, with particularly low hard drive scores. Only after installing a patch provided to the media (with the warning that it could result in data loss) did we see performance actually match that of boards based on the 790FX chipset. We imagine that final drivers will include the patch, but it’s obvious to us that the MA790GP-DS4H’s drivers weren’t fully baked for the release, so color us unimpressed.

Verdict: 6

Buying Your Next Motherboard

You’ve read the verdicts, now it’s time to decide

As much as we like Intel’s new Core i7, the results here demonstrate that there’s still plenty of life in the older platforms. We were a bit disappointed by the 790GX’s performance but it is a completely new chipset and there are bugs to be wrung out. We won’t totally count it out and plan to revisit the chipset in other boards.

On the SLI front, EVGA’s 790i Ultra SLI board is a darling. It’s based on Nvidia’s reference design, but that’s not bad—it means that BIOS updates come directly from Nvidia. Obviously, it’s also the only board here that supports Nvidia’s multi-card SLI feature, so that makes it the top pick for a gamer looking to run two or more Nvidia cards for high-resolution gaming. One note for multi-GPU gamers: The artificial partition between ATI’s CrossFireX and Nvidia’s SLI will dissolve with Core i7. Though Nvidia won’t offer a Core i7 chipset of its own, it has said that it will certify X58 motherboards to run SLI; that means you’ll eventually be able to buy one motherboard that can run either CrossFireX or SLI.

For most of us, though, Intel’s P35/P45 will be the workhorse. It does best as a single-GPU platform. It can run more than one x16 PCI-E card, but the chipset design prevents it from running both slots at full x16 speeds. That mostly hurts high-resolution gaming, as the bandwidth falls a bit short of two full x16 slots even at PCI-E 2.0 speeds. That limitation matters less than you might think, because the appeal of multi-card solutions is primarily high-resolution gaming in the first place; most of us will sport just one card anyway.

If we had to choose, the Asus Maximus II Formula would be our performance pick. We’re annoyed by the need to activate the X-Fi audio for the onboard sound chip, but at least it has the capability. MSI’s P45 Platinum board would be our budget pick. It’s quite affordable and doesn’t give up much in performance. It does, however, have one ugly-ass heat sink.

Our experience with the MSI P35 Combo Platinum was an eye opener. Previous dual memory-format motherboards we’ve touched bit us with their generally atrocious performance. The P35 Combo Platinum’s performance was surprisingly good for something that runs either DDR2 or DDR3. Despite this, we’d still recommend that you bypass it: Select either a board that runs DDR2 or one that uses DDR3; trying to straddle the two will only hurt you. The P35 Combo Platinum’s memory flexibility and RAM tweaking just aren’t as strong as those of boards that stick with one flavor of RAM. Frankly, the price of DDR3 is low enough today that it shouldn’t kill you to upgrade.

Best scores are bolded. DNT indicates we did not run a given benchmark on the board indicated. We used a Core 2 Quad Q9300, 2GB of DDR2/1066, a GeForce 8800 GTX, and Windows XP Pro SP2 with each motherboard.

Videocard Buyer's Guide

Blessed be the forces of competition. There’s never been a better time to shop for a new videocard.

By Will Smith and Michael Brown

Poor Nvidia. AMD is putting on the price squeeze from one side, while Intel is crimping its SLI style on the other.

It’s not that the two companies are colluding to pressure the once high-flying GPU manufacturer, it’s just that AMD’s Radeon HD 4000-series performs a whole lot better than anyone expected (and is much cheaper to manufacture than what Nvidia is building).

Meanwhile, Intel’s brand-new Core i7 CPU has earned so much positive buzz that Nvidia had to cry “Uncle!” and allow Intel to support SLI in its X58 chipset (the X58 supports CrossFire, too).

Nvidia’s top-shelf GPU, the GeForce GTX 280, isn’t a bad graphics processor, but AMD’s Radeon HD 4870 comes close enough that Nvidia was forced to slash its prices in order to remain competitive.

If you’re not building a gaming rig, there’s absolutely no reason why you shouldn’t use a cheaper videocard based on one of AMD’s GPUs. And if you are a hard-core gamer, you’ll want to take a look at Radeon HD 4870 X2 cards, which put two GPUs and two 1GB frame buffers on a single card. That’s enough to wipe the floor with any single-GPU solution Nvidia has yet fielded. And you can put two of those beasts in a CrossFire rig and harness the power of four GPUs.

So which videocard is right for your needs? Our buyers guide will help you decide.

Multi-GPU Support

If you want the ultimate in gaming performance, build your next PC (or upgrade your existing machine, if it’s compatible) so that you can tap the power of more than one videocard. AMD calls its solution CrossFire and Nvidia pitches its own as SLI (Scalable Link Interface). Not all motherboards are capable of supporting more than one videocard, so check your documentation before you buy anything. AMD is slightly more permissive with CrossFire than Nvidia is with its SLI solution: Many Intel chipsets support AMD’s solution; relatively few are compatible with SLI. Refer to the section on motherboards for more details.

As noted above, however, you don’t necessarily need two videocards to get dual-GPU performance. AMD’s Radeon 4870 X2 mounts two of that company’s best GPUs (and two matching frame buffers) on a single card to deliver a product that’s faster than any single GPU Nvidia currently has to offer—and it doesn’t matter which chipset you run it on. Nvidia offers a dual-GPU configuration on a single card, too (the GeForce 9800 GX2), but it doesn’t have such an animal based on its newest GPU, the GeForce GTX 280.

DirectX 10 Support

DirectX is a collection of APIs that Microsoft developed to make it easier for game developers to program for the latest PC hardware. DirectX 10, of which Shader Model 4.0 is a part, is supported on all Nvidia videocards beginning with the GeForce 8 series and on all AMD videocards beginning with the Radeon HD 2000 series. In other words, just about any card you might be considering upgrading to supports these important graphics standards.

AMD’s Radeon HD 3000- and 4000-series cards also support the most recent “point release” in DirectX technology: DirectX 10.1 and Shader Model 4.1. AMD gains little from this, however, because game developers have barely tapped what’s possible with DirectX 10 and have shown little interest in moving on to the incremental releases. What’s more, DirectX 10 is available only on the Vista operating system; if you’re still rolling with Windows XP, DX10 support is totally irrelevant.

DisplayPort, HDMI, and HDCP

DisplayPort and HDMI are both standards for connecting your videocard to a digital display; HDCP is a copy-protection scheme developed for high-definition movies. None of these is critical to gaming, unless you intend to connect your PC to a big-screen TV that doesn’t have a DVI port (an older type of digital-video connection).

Since most people—even hard-core gamers—use their PCs for more than just gaming, we think HDCP support is an important feature. You’ll need it if you ever want to watch a Blu-ray movie on your PC. Be aware that many cards offer HDMI via an adapter that plugs into a DVI port on the back of the card. This will add a couple of inches to your PC’s overall footprint, which can be a problem if you’re squeezing the box into an entertainment center. An HDMI port directly on the mounting bracket is a far superior solution, but surprisingly few cards offer this feature.

Frame Buffer

When it comes to memory, more is better if you’re going to play the most demanding new games at high resolutions. Even low-end cards come with 256MB, and higher-end cards come with 512MB or even 1GB onboard.
The type of memory is less important: Nvidia continues to use GDDR3, while AMD has moved on to GDDR4 and GDDR5 for many of its products.
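
To get a feel for why resolution and anti-aliasing chew through video memory, here’s a deliberately simplified sketch of how just the render targets scale. It ignores textures, geometry, and driver overhead entirely, so treat the numbers as a floor rather than real-world usage:

```python
# Very rough estimate of render-target memory at a given resolution.
# Assumes 4 bytes per pixel for color and 4 for depth/stencil, and that
# MSAA multiplies both buffers by the sample count. Textures, geometry,
# and driver overhead are ignored, so real usage is considerably higher.

def render_target_mb(width, height, msaa_samples=1):
    bytes_per_pixel = 4 + 4          # 32-bit color + 32-bit depth/stencil
    total = width * height * bytes_per_pixel * msaa_samples
    return total / (1024 ** 2)

for res in [(1280, 1024), (1920, 1200), (2560, 1600)]:
    print(res, f"{render_target_mb(*res):6.1f} MB no AA,",
          f"{render_target_mb(*res, msaa_samples=4):6.1f} MB with 4x MSAA")
```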

Physics Acceleration

Anyone who’s played Half-Life 2—and that’s just about everyone—knows how much fun real-time physics can add to a game. Intel certainly knows: The company snapped up physics middleware developer Havok in 2007.

AMD and Nvidia would like to see physics code running on the GPU, not the CPU. The two companies have blown an awful lot of hot air on this topic in the past two years but have little to show for it. Ageia, the only company to have developed a dedicated physics co-processor—the PhysX PPU—failed to gain any market share at all. Nvidia bought the flailing company earlier this year, but no one was surprised when Nvidia announced it was interested only in Ageia’s PhysX API—the PhysX coprocessor is officially kaput.

Theoretically, the PhysX software should be able to run on any processor—be it a CPU, GPU, or PPU—but Nvidia could decide to restrict PhysX acceleration to its own products. It’s all academic at this point. Until we encounter a great game that makes outstanding use of PhysX, we see little reason to recommend the technology over Havok or anything else.

The Ageia-powered Unreal Tournament III Tornado mod featured a whirling vortex that tore the battlefield apart as the game progressed. The tornado could also suck in projectile weapons, such as rockets, adding an exciting new dynamic to the game.

Unfortunately for Ageia, mods such as this were too few and far between, and this chicken-or-the-egg conundrum ultimately killed the PhysX physics processing unit. By the time Nvidia acquired the company, Ageia had convinced just two manufacturers—Asus and BFG—to build add-in boards based on the PPU, and Dell was the only major notebook manufacturer to offer machines featuring the mobile version. Absent a large installed base of customers, few major game developers (aside from Epic and Ubisoft’s GRAW team) saw any reason to support the hardware.

Nvidia has a much more persuasive argument: Effective with the release of its GeForce driver version 177.92, every videocard with a GeForce 8-, 9-, or GTX 200-series processor and 256MB of memory is capable of accelerating PhysX routines. That’s an installed base of 90 million units—a number Nvidia expects to swell to 100 million by the end of 2008.
Even then, we predict PhysX will need a killer app if it’s to really take off. Nvidia will need to help foster the development of more PhysX-exclusive games, such as the Tornado and Lighthouse mods for Unreal Tournament 3, and the Ageia Island level in Ghost Recon: Advanced Warfighter.

Nvidia will also remedy one of Ageia’s key marketing mistakes: Consumers couldn’t run a PhysX application unless they had a PhysX processor, which meant they had no idea what they might be missing out on. Under Nvidia’s wing, PhysX applications will fall back to the host CPU in the absence of a CUDA-compatible processor. The app might run like a fly dipped in molasses, but if gamers with PhysX cards boast of a killer gaming experience, it could fuel demand for PhysX-capable videocards. At the very least, it enables Nvidia to claim a differentiating feature against AMD.

Should You Do the Multi-GPU Tango?

A rig with two GPUs should render graphics twice as fast as an otherwise identical machine outfitted with just one videocard, right? Well, not necessarily. In fact, you can drop three Nvidia GPUs in a rig or four AMD graphics processors in the same box and still not see an appreciable performance boost with some games.

In many situations, a custom profile must be added to the videocard’s driver before a game will recognize and assign part of its rendering workload to second and subsequent GPUs. And even that might not be enough to make an appreciable difference in performance with games like Crysis when you crank up anti-aliasing and anisotropic filtering, because the GPUs might be left starved for memory.

Dropping two videocards with 512MB frame buffers each in a machine doesn’t suddenly endow that rig with a 1GB frame buffer; each videocard’s frame buffer is segregated. The same goes for videocards that have two GPUs onboard, such as the AMD Radeon HD 4870 X2. That card has a whopping 2GB of GDDR5 video memory onboard, but it’s divided into two discrete 1GB frame buffers; each GPU is limited to addressing 1GB of memory.

Moving beyond one modern Nvidia GPU or two modern AMD GPUs will also require a motherboard that supports the feature. Although things are getting better on that front with the launch of Intel’s Core i7 (boards with Intel’s X58 chipset will support both Nvidia’s SLI and AMD’s CrossFire architectures), current LGA775 motherboards can support only one or the other—and Socket AM2 motherboards support only CrossFire.

Nvidia's Next-Gen GPU

It wouldn’t be fair to say that Nvidia has jumped the shark, but the GTX 200 series isn’t nearly as impressive as many of this company’s previous product rollouts

Watching the ongoing race between AMD and Nvidia to build the ultimate graphics processor reminds us of the tale of the tortoise and the hare. AMD has played the hare, aggressively bounding ahead of Nvidia in terms of process size, number of stream processors, frame buffer size, memory interface, die size, and even memory type. Yet Nvidia always manages to snag the performance crown. The GeForce 200 series is but the latest example (although AMD’s RV770 is a helluva comeback).

Earlier this year, we convinced Nvidia to provide us with a rough-around-the-edges engineering sample of its high-end reference design (the GeForce GTX 280), with very immature drivers, for a first look at the GPU’s performance potential. At the time, the company was a full month away from shipping this product and its lesser cousin, the GeForce GTX 260, so we didn’t issue a formal verdict. We’ve since obtained and benchmarked a shipping unit.

As interesting as those benchmark numbers proved to be, the story behind this new architecture is even more fascinating. We’ll give you all the juicy details.

But first, let’s explain the new naming scheme: Nvidia has sowed a lot of brand confusion in the recent past, especially with the 512MB 8800 GTS. That card was based on a completely different GPU architecture than the 8800 GTS models with 320MB and 640MB frame buffers. The Green Team hopes to change that with this generation.

The letters GTX now represent Nvidia’s “performance” brand, and the three digits following those letters indicate the degree of performance scaling: The higher the number, the more performance you should expect.

Using 260 as a starting line should give the company plenty of headroom for future products (as well as leave a few slots open below for budget parts).

Manufacturing Process

AMD jumped ahead to a 55nm manufacturing process with the RV670 (the foundation for the company’s flagship Radeon HD 3870); it uses the same process to fabricate the RV770. Nvidia has stuck with the tried-and-true 65nm process for the GeForce 200 series.

Nvidia cites the new part’s long development cycle and sensible risk management as justification; but with the benefit of hindsight, we think Nvidia’s play-it-safe decision was a major strategic mistake.

The GTX 280 is an absolute beast of a GPU: Packing 1.4 billion transistors (the 8800 GTX got by with a mere 681 million, and a quad-core Penryn has 820 million), it’s capable of bringing a staggering 930 gigaFLOPs of processing power to any given application (a Radeon HD 3870 delivers 496 gigaFLOPs, and the quad-core Penryn just 96).

Considering the transistor count and the 65nm process size, the GeForce 200 die must be absolutely huge (and Nvidia’s manufacturing yields hideously low). Nvidia declined to provide numbers on either of those fronts.

Processor Cores

The GeForce GTX 280 has 240 stream processors onboard (Nvidia has taken to calling them “processing cores”). This being Nvidia’s second-generation unified architecture (a required feature for compatibility with DirectX 10), each core can handle vertex-shader, pixel-shader, or geometry-shader instructions as needed.

The cores can handle other types of highly parallel, data-intensive computations, too—including physics, a topic we’ll explore in more depth shortly. The less-expensive GeForce GTX 260 is equipped with 192 stream processors. Several months after launch, Nvidia introduced a second-generation GeForce GTX 260 that bumped the processor count to 216.

Although the GeForce 280 has nearly twice as many stream processors as Nvidia’s previous best GPU, it’s still 80 shy of the 320 in AMD’s Radeon HD 3870; what’s more, AMD’s Radeon HD 4870 boasts an astounding 800 stream processors.
Nvidia’s asymmetric clock trick, however, enables its stream processors to run at clock speeds more than double that of the core. And this speed trick has so far overcome AMD’s numerical advantage.

A significant increase in the number of raster-operation processors (ROPs) and the speed at which they operate likely contributes to the new chip’s impressive performance. The 8800 GTX has 24 ROPs and the 9800 GTX has 16, but if the resulting pixels need to be blended as they’re written to the frame buffer, those two GPUs require two clock cycles to complete the operation. The 9800 GTX, therefore, is capable of blending only eight pixels per clock cycle.

The GTX 280 not only has 32 ROPs but also is capable of blending pixels at full speed—so its 32 ROPs can blend 32 pixels per clock cycle. The GTX 260, which is also capable of full-speed blending, is outfitted with 28 ROPs. The absurdly named GTX 260 Core 216, as we mentioned earlier, has more processing cores than the standard GTX 260, but it has the same number of ROPs as the lesser part.

Memory and Clock Speeds

GeForce GTX 280 cards will feature a 1GB frame buffer, and the GPU will access that memory over an interface that’s a full 512 bits wide. AMD’s Radeon 2900 XT, you might recall, also had a 512-bit memory interface, but the company dialed back to a 256-bit interface for the Radeon 3800 series, claiming that the wider alternative didn’t offer much of a performance advantage. That was before Crysis hit the market.

Cards based on both versions of the GTX 260 have 896MB of memory with 448-bit interfaces. Despite the news that AMD had moved to GDDR5 for its next-generation GPUs, Nvidia is sticking with GDDR3, claiming that the technology “still has plenty of life in it.” Judging by the performance of the GTX 280 compared to the Radeon 3870 X2, which uses GDDR4 memory (albeit half as much and with an interface half as wide as the GTX 280’s), we’d have to agree. Nvidia is taking a similar approach to Direct3D 10.1 and Shader Model 4.1: The GTX 280 and GTX 260 don’t support either.

A stock GTX 280 runs its core at 602MHz while its stream processors hum along at 1.296GHz. Memory is clocked at 1.107GHz. Both versions of the GTX 260 have stock core, stream processor, and memory clock speeds of 576MHz, 1.242GHz, and 999MHz, respectively (what, they couldn’t squeeze out an extra MHz to reach an even gig?).
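
Those clocks let you sanity-check the theoretical peaks quoted earlier. The sketch below uses the standard back-of-the-envelope formulas, plus Nvidia’s convention of counting three FLOPs per stream processor per clock (a multiply-add plus a multiply); the formulas are ours, not anything from this story:

```python
# Sanity-checking GTX 280 theoretical peaks from the specs above.
# Assumes Nvidia's counting of 3 FLOPs per stream processor per clock
# and GDDR3's two transfers per clock.

sp_count, sp_clock = 240, 1.296e9        # stream processors and their clock
rops, core_clock = 32, 602e6             # ROPs run at the core clock
bus_bits, mem_clock = 512, 1.107e9       # memory interface width and clock

gflops = sp_count * sp_clock * 3 / 1e9             # ~933 GFLOPs
fill_rate = rops * core_clock / 1e9                 # ~19.3 Gpixels/s
bandwidth = (bus_bits / 8) * mem_clock * 2 / 1e9    # ~141.7 GB/s

print(f"shader throughput: {gflops:.0f} GFLOPs")
print(f"pixel fill rate:   {fill_rate:.1f} Gpixels/s")
print(f"memory bandwidth:  {bandwidth:.1f} GB/s")
```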

PhysX Connection

When Nvidia acquired the struggling Ageia, we were disappointed—but not surprised—to learn that Nvidia was interested only in the PhysX software. While it wouldn’t be accurate to say that Nvidia has orphaned the hardware, the company has no plans to continue developing the PhysX silicon. What’s more, there is absolutely no Ageia intellectual property to be found in the GTX 200 series silicon—the new GPU had already been taped out when the acquisition was finalized in February.

But Nvidia didn’t acquire Ageia just to put the company out of its misery. Nvidia’s engineers quickly set about porting the PhysX software to the GeForce 8-, 9-, and 200-series GPUs. When Ageia first introduced the PhysX silicon, the company maintained that it was a superior solution to existing CPUs and GPUs because those architectures weren’t specifically optimized for accelerating complex physics calculations. In reality, the PhysX architecture wasn’t as radically different from modern GPU architectures as we’d been told.

The first PhysX part, for example, had 30 parallel cores; the mobile version that ships in Dell’s XPS 1730 notebook PC has 40 cores. Nvidia tells us it took only three months to get PhysX software running on GeForce, and the software will soon be running on every CUDA platform.

SLI and Display Considerations

The GeForce GTX 280 and both versions of the GTX 260 have two SLI edge connectors, so they will support three-way SLI configurations. Nvidia hasn’t commented on the possibility of a future single-board, dual-GPU product that would allow quad SLI, but reps have told us they expect the current dual-GPU GeForce 9800 GX2 to fade away.

Nvidia’s reference-design board features two DVI ports and one analog video output on the mounting bracket, with HDMI support available via dongle. The somewhat kludgy solution of bringing digital audio to the board via SPDIF cable remains (we much prefer AMD’s over-the-bus solution). Add-in board partners can choose to offer DisplayPort SKUs for customers who want support for displays with 10-bit color and 120Hz refresh rates.

More Architectural Details

Nvidia’s new GPUs are capable of managing three times as many threads in flight at a given time as the previous architecture. Improved dual-issue performance enables each stream processor to execute multiple instructions simultaneously, and the new processors have twice as many registers as the previous generation.

These performance-oriented improvements allow for faster shader performance and increasingly complex shader effects, according to Nvidia. In the new Medusa demo, a geometry shader enables the mythical creature to turn a warrior to stone with a single touch. This isn’t a simple texture change or skinning operation—the stone slowly creeps up the warrior’s leg, torso, and face until he is completely transformed.

Nvidia still perceives gaming as a critically important market for its GPUs, but the company is also looking well beyond that large niche market. Through its CUDA (Compute Unified Device Architecture) initiative, the company is taking on an increasing number of apps that have traditionally been the responsibility of the host CPU. Nvidia isn’t looking to replace the CPU with a GPU; it’s just trying to convince consumers that the GPU is at least as important as the CPU.

CUDA applications will run on any GeForce 8- or 9-series GPU, but the GeForce 200 series delivers an important advantage over those architectures: support for the IEEE-754R double-precision floating-point standard.

This should make the new GPUs—and CUDA in general—even more attractive to users who develop or run applications that rely heavily on floating-point math. Such applications are common not only in the scientific, engineering, and financial markets, but also in the mainstream consumer marketplace (for everything from video transcoding to digital photo and video editing).
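
If you’ve never seen the gap between single and double precision in action, here’s a tiny sketch. It uses NumPy on the CPU purely for illustration and has nothing to do with CUDA itself:

```python
# Why IEEE-754 double precision matters for math-heavy workloads:
# float32 carries roughly 7 significant decimal digits, float64 about 15-16,
# so small increments can vanish entirely in single precision.
import numpy as np

print(np.float32(1.0) + np.float32(1e-8) == np.float32(1.0))  # True: increment lost
print(np.float64(1.0) + np.float64(1e-8) == np.float64(1.0))  # False: increment kept

# The same effect accumulates: sequentially summing 0.1 ten million times
values = np.full(10_000_000, 0.1)
print(float(np.cumsum(values.astype(np.float32))[-1]))  # drifts away from 1,000,000
print(float(np.cumsum(values.astype(np.float64))[-1]))  # stays essentially exact
```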

Power Considerations

Nvidia has made great strides in reducing the power consumption of its GPUs, and the GeForce 200 series promises to be no exception. In addition to supporting Hybrid Power (a feature that can shut down a relatively power-thirsty add-in GPU when a more economical integrated GPU can handle the workload instead), these new chips will have performance modes optimized for times when Vista is idle or the host PC is running a 2D application, when the user is watching a movie on Blu-ray or DVD, and when full 3D performance is called for. Nvidia promises the GeForce device driver will switch between these modes based on GPU utilization in a fashion that’s entirely transparent to the user.

One chip, 1.4 Billion Transistors

You could fit nearly six Penryns onto a single GeForce GTX 280 die, although a portion of the latter part’s massive size can be attributed to the fact that it’s manufactured using a 65nm process, compared to the Penryn’s more advanced 45nm process.

Nvidia packs 240 tiny processing cores into this space, plus 32 raster-operation processors, a host of memory controllers, and a set of texture processors. Thread schedulers, the host interface, and other components reside in the center of the die. With technologies like CUDA, Nvidia is increasingly targeting general-purpose computing as a primary application for its hardware, reducing its reliance on PC gaming as the raison d’être for such high-end GPUs.

ATI’s Excellent Adventure

How the graphics underdog regained its mojo and nearly ate Nvidia’s lunch

There are but a few great underdog stories in any era, and to the list of today’s finest, we add ATI’s RV770 series GPU. ATI, much like the Red Sox, had been in a bad funk—the only way the company could compete with archrival Nvidia was by cutting the price of wannabe high-end cards to the level of Nvidia’s midrange offerings. Clearly, this was not a good business model.

Then, on the eve of the GeForce GTX 280 launch, ATI unveiled a bombshell—a brand-new GPU architecture that utilizes better process technology and a more power-efficient design to outperform Nvidia’s gargantuan new GPU. ATI eschewed the huge, hot, monolithic GPU for a more compact but modular core. With the twin goals of decreased power consumption and more efficiency per die area, ATI now looks poised to dethrone Nvidia, and all without building a videocard that sports an aural footprint equivalent to that of a Dyson vacuum cleaner.

With this new GPU come three products: the $200 Radeon 4850, the $300 Radeon 4870, and the $600 4870 X2. While their prices vary wildly, the GPU for each is identical. Let’s find out what makes it tick.

ATI is kissing the giant GPU goodbye, preferring smaller, more efficient GPUs that can work in tandem on big workloads

We’ve walked this path before. When Intel’s NetBurst architecture reached the end of its life, we were seeing the largest, hottest, most power-hungry CPUs ever, but performance wasn’t scaling in line with the power and heat increases. In order to see a 10 percent performance boost, the new CPU would generate 30 percent more heat and require 30 percent more power.

This was an untenable situation, so Intel and AMD quickly moved away from monolithic cores to more efficient multicore designs. If your applications can take advantage of all the CPU cores in your system, you should see significantly better performance with a much slower, cooler multicore design than you would with a similar-size single-core design running at twice the speed.

The two main GPU manufacturers are at a similar crossroads, but each chose a different direction with its next-gen GPU. Nvidia has launched its GTX 280 boards, which sport a massive, monolithic GPU design. These are among the largest chips ever put into mass production—a single GTX 280 chip is 576mm2, features a 512-bit memory interface, and draws 236W when running full bore. By contrast, the RV770 chip that ATI is using in its new line of GPUs is just 260mm2, features a 256-bit memory bus, and draws about 170W when running at full power. Yet despite a much smaller die, a lower power draw, and a memory bus that’s half the width of the GTX 280’s, the Radeon 4870 delivers about 75 percent of the speed of the GTX 280 in most of our benchmarks.
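
If you want to put rough numbers on that efficiency claim, here's a quick back-of-the-envelope sketch in Python using only the figures quoted above (die area, board power, and the roughly 75 percent relative performance). The exact ratios will vary with the benchmark, so treat this as ballpark math, not a measurement.

```python
# Ballpark efficiency comparison from the die-size, power, and relative
# performance figures cited in the text (GTX 280 is the 1.0 baseline).
gpus = {
    "GeForce GTX 280": {"die_mm2": 576, "power_w": 236, "rel_perf": 1.00},
    "Radeon 4870":     {"die_mm2": 260, "power_w": 170, "rel_perf": 0.75},
}

for name, g in gpus.items():
    print(f'{name}: {g["rel_perf"] / g["power_w"]:.4f} perf/W, '
          f'{g["rel_perf"] / g["die_mm2"]:.4f} perf/mm^2')

# By this crude measure, the 4870 comes out slightly ahead on performance
# per watt and roughly 1.7x ahead on performance per square millimeter.
```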

The RV770 Unveiled

ATI packed the latency-sensitive silicon, such as the stream processors and the basic texturing units, in the center of the RV770 GPU. Surrounding it are the memory controllers and the L2 cache, and on the periphery of the chip rests the memory interface (GDDR5 on the 4870 and GDDR3 on the 4850), the PCI Express connection, the CrossFire controller, and the various display controllers for DVI, HDMI, VGA, and DisplayPort. And all of that is packed on a 260mm2 55nm die.

The GPU Core

With this generation of GPU, ATI's beginning to see the payoff from its early move to a 55nm process last generation. While Nvidia languishes at 65nm, ATI is packing more silicon into a smaller space and increasing efficiency at the same time. But that's not all ATI's done. The new RV770 series GPU features a complete redesign with an astounding 800 stream processors—the little silicon dynamos that handle everything from rendering soft shadows and bump maps to decoding H.264 video from Blu-ray movies.

By integrating 16KB of cache with bundles of 10 stream processors and a dedicated texture unit into so-called SIMD units (which combined make up the complete shader core), ATI has juiced much better shader performance out of the overall package. The stream processors in each SIMD unit can share information using their shared memory, which makes the new shader core more efficient than previous designs. And because the stream processors pump their output directly into a dedicated texture unit, very little time is lost writing the output to texture memory.

The SIMD units themselves are each integrated with four texture processors in modular units, which minimizes latency and improves the performance of the RV770 design. Each SIMD connects to its four dedicated texture processors with 480GB/s of bandwidth between them. This was absolutely crucial to maintain performance; otherwise, the texture processors, which render the actual pixels that are displayed, would remain a bottleneck.

The Memory

There are two basic ways to increase memory bandwidth. You can increase the clock speed of the memory or you can transfer more data with every clock cycle by increasing the width of the memory bus. Like ATI’s previous-generation GPUs, Nvidia’s GTX 280 uses a 512-bit-wide memory bus. The RV770 GPU utilizes a narrower 256-bit bus, but it also supports new GDDR5 memory, which is capable of twice as many transfers per clock cycle as GDDR3. This gives ATI’s GPU roughly the same memory bandwidth as the GTX 280 on a board with a less-expensive 256-bit bus and the ability to transfer more data at lower clock speeds.

What’s more, GDDR5 also uses fewer pins than GDDR3 to connect the memory to the board. This reduces board complexity, which is very important given the reduced space available with smaller process technology. By using a less-complex 256-bit bus and cranking the clocks up on the GDDR5 memory, ATI should be able to achieve decent memory performance without harming yields for the GPU, all while spending less per board.

While the high-end Nvidia graphics parts are running at a punishing 1100MHz and pushing an impressive 115GB/s of bandwidth, ATI’s 4870 ticks along at just 900MHz but delivers the same 115GB/s. The net result is that the ATI card’s memory draws less power and generates less heat while delivering the same level of performance as the more expensive card.
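
The arithmetic is easy to check: peak bandwidth is just the bus width (in bytes) times the memory clock times the number of transfers per clock. Here's a minimal sketch in Python using the clocks and bus widths quoted in this story—GDDR3 moves two transfers per clock, GDDR5 four.

```python
# Peak memory bandwidth = bus width (bytes) x memory clock x transfers per clock.
def bandwidth_gb_per_s(bus_bits, clock_mhz, transfers_per_clock):
    return bus_bits / 8 * clock_mhz * transfers_per_clock / 1000

print(bandwidth_gb_per_s(256, 900, 4))  # Radeon 4870, GDDR5: ~115 GB/s
print(bandwidth_gb_per_s(256, 993, 2))  # Radeon 4850, GDDR3: ~64 GB/s
```

That second line also explains the "almost double" bandwidth advantage you'll see the 4870 claim over the 4850 when we get to the cards themselves.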

Running GDDR5 memory at speeds lower than GDDR3 memory with the same bandwidth is great, but the current low-end and midrange ATI boards feature only 512MB of total card memory—half the amount Nvidia’s new cards offer (the GeForce GTX 260 ships with 896MB of memory on a 448-bit interface and the GTX 280 ships with a full gigabyte).

For the most part, performance doesn’t seem to suffer from this shortcoming, but that could change as graphically intensive games like Far Cry 2 and Fallout 3 are released later this year.

Video Playback and Encoding

Video decode acceleration is a crucial feature for modern GPUs. The new RV770-series GPU handles advanced features required by the Blu-ray spec, such as picture-in-picture, in hardware, which allows for much lower CPU utilization with supported players. In our testing, CPU utilization went up about 5 percent when we flipped on picture-in-picture playback, compared with about a 20 percent increase when using an older ATI card on the same system.

Like Nvidia, ATI has demonstrated GPU-accelerated video transcodes from MPEG-2 to H.264 video. While the demos run at an impressive clip, there’s no way for us to compare the performance of the two cards. Elemental’s BadaBoom encoder that Nvidia uses is not compatible with ATI cards, and the CyberLink PowerDirector 7 encoder used by ATI is not compatible with Nvidia cards. Nor are the two apps’ settings similar enough to elicit a meaningful comparison. This illustrates the fundamental problem with GPU-based computing today, which we’ll talk about next.

Stream Processing

GPU-based computing is expected to be the answer for tasks that entail massive numbers of parallel computations, and the early apps that take advantage of GPUs, such as the Folding@Home clients, make the prospect seem quite promising. The problem, however, is that there’s one GPU computing API for Nvidia’s cards and a separate one for ATI’s cards.

That means anyone who writes software to harness the power of GPUs needs to write not one but two programs—one for ATI and one for Nvidia. If the last 12 years of DirectX have taught us anything, it’s that in order for hardware-accelerated anything to succeed, you need a common API that allows developers to write code once that works on both platforms.

We don’t know whether ATI’s Stream or Nvidia’s CUDA is the better API. Because we’re not programmers, we don’t care. But we do know that the continuance of two competing standards will only hamper development of GPU-leveraged applications. ATI and Nvidia need to put aside their differences and work together to build a common API that works on all hardware. If the two companies need a place to start, Apple pitched OpenCL, which does just that.

The Cards

ATI has launched the Radeon 4850 and the Radeon 4870. Priced at $200 and $300, respectively, these cards compete squarely in the midrange.

The 4850 ships with 512MB of (less-expensive) GDDR3 running at 993MHz on a 256-bit bus. The board we tested runs a 625MHz core and sports the same 800 stream processors as the more expensive 4870. The card will sell for between $200 and $250, depending on configuration and specs.

The Radeon 4870’s core runs at 750MHz and the board’s 512MB of GDDR5 memory runs at 900MHz on a 256-bit bus. Remember, though, the GDDR5 memory transfers four chunks of data per clock, giving it an effective memory bandwidth that’s almost double that of the 4850. For $50 to $100 more, this is a good thing.

The Performance Story

Enough of this banter—let’s get to the benchmarks

Does ATI’s new GPU deliver 75 percent of a GeForce GTX 280’s power for a fraction of the price? We went into the Lab to find out. The short answer is yes. The Radeon 4870 runs nearly as fast as a GTX 280 in most benchmarks for about 60 percent of the cost. That’s pretty impressive. Running two 4870 boards together in CrossFire delivers performance that beats a single GTX 280 board for the same cash outlay. This might tempt you to pony up for a pair, but think before you leap.

Dual-card solutions are all well and good in theory, but before you make the jump to two cards, you need to be aware of the pitfalls. First, adding a second card to your rig completely negates the power and noise benefits the 4870 has over the GTX 280. Second, functionality that you may take for granted, such as multiple-monitor support, might not work with a dual-card solution. ATI only recently added dual-monitor support via its Catalyst 8.2 driver, and Nvidia’s SLI still does not support it. Third, new games frequently require a driver update or even a patch before they’ll properly take advantage of a second card. Multiple cards are great for power users, but you need to be aware of the sacrifices involved, preferably before you whip out your credit card.

During the course of our testing, we discovered that many of these new cards were CPU-bound on our test beds in all but the most demanding games. This means adding a second (or a faster) videocard to your system shows very little performance improvement because the CPU can’t handle its tasks fast enough to keep the GPUs occupied. We’ll be updating our test bed before the next round of GPU reviews to mitigate this factor. But if your current CPU is slower than an Intel Core 2 Extreme X6800—a 2.93GHz dual-core Conroe—you probably won’t see much benefit in games other than Crysis if you upgrade to more than one graphics card, whether it’s a GTX 280, a Radeon 4870, or even a Radeon 4850.

But we digress. The short, short verdict is that ATI’s new Radeon 4850 and Radeon 4870 deliver stunning performance at an extremely compelling price. If you’ve been waiting to upgrade to a DirectX 10-compatible graphics card, now is the time. For less than the price of an Xbox 360, you can upgrade your GPU and get kick-ass gaming performance on most modern PCs.

Visiontek Radeon HD 4850

A fairly competent DirectX 10 GPU for less than $200

We’ve spent a ton of time talking about the efficiency and overall pixel-pushing prowess of ATI’s new GPU, so we won’t waste much ink on it here. Suffice it to say that the 4850 delivers enough power to drive your sweet, new 22-inch monitor at its native resolution.

The card’s silicon is equivalent to that of previous-gen high-end cards. It’s equipped with 512MB of GDDR3 memory running at 993MHz.

Click here for the rest of the review, benchmarks, and our verdict!

EVGA GeForce GTX 260 Core 216 Superclocked

A midrange GT200-based card that Nvidia can hang its hat on

When Nvidia unveiled its GT200 GPU, we were immediately drawn to the shiny, speedy GeForce GTX 280. Why wouldn’t we be? With high core and memory clocks and 240 stream processors to churn through the toughest shaders, it was sexy and fast. We were less excited about the 260, which sported 192 stream processors and slower clock speeds but cost about $100 less than the 280 (at the time). Since then, ATI has released its RV770-based Radeon 4870, which outperforms the original 260 but costs the same amount.

And that’s where the Core 216 edition of the GTX 260 comes in. With the same stock clock speeds but 24 more shader processors than the original, the new version of the GTX 260 delivers comparable performance to the 4870 at a similar price. The speeds and feeds are about the same as the original 260, although EVGA clocked this card’s core at 626MHz (up from 576MHz stock) and includes 896MB of GDDR3 running on a 448-bit bus at 1053MHz (stock is 999MHz).

Click here for the rest of this review, benchmarks, and our verdict!

Visiontek Radeon HD 4870

Once a benchmark slayer, these boards have been eclipsed by Nvidia’s GeForce GTX 260 Core 216

It’s tough to be a videocard. With just a quick spin of a piece of silicon, you can go from state-of-the-art to obsolete. That’s nearly what happened to the Radeon HD 4870 when the first revision to the GeForce GTX 260 shipped. By upping the number of shader processors from 192 to 216, Nvidia managed to push the 4870 out of the highly contested $350 price category.

That’s not to say that the HD 4870 isn’t a fast card; in most of our benchmarks, the margins of victory for Nvidia’s new part are slim, at best. With 512MB of quad-pumped GDDR5 memory running at 900MHz, the ATI card has a memory bandwidth advantage over the GTX 260, which shows in high anti-aliasing/anisotropic filtering situations, but not anywhere else. The card’s core utilizes ATI’s now-mature 55nm process, with the entire core (including the card’s 800 shader processors) running at 750MHz.

Like other current-gen ATI cards, the RV770-based 4870 supports hardware decoding of common video formats, including MPEG-2, H.264, and VC-1. The card can even decode two video streams simultaneously, which minimizes CPU utilization and frame drops when you use the picture-in-picture feature common to many Blu-ray discs.

From a pure performance perspective, the 4870 delivers sufficient power to run most DirectX 9 games at resolutions up to 1920x1200. Even DirectX 10 games, such as Crysis, are playable at typical resolutions without anti-aliasing and anisotropic filtering. We’re reasonably certain that the GTX 260 outperforms the 4870 in Crysis with AA and aniso turned on simply because the ATI board lacks sufficient memory to store everything that the game demands at those settings. With more shader processors, the ATI card should have the edge in a shader-heavy game like Crysis, which could turn the advantage back to the 4870 when it’s installed on a board with a 1GB frame buffer.

For now, though, we’d rather spend a few more bucks and get one of the new GeForce GTX 260 Core 216 boards than an HD 4870.

Verdict: 8

BFG GeForce GTX 280 OC 1GB

The GeForce GTX 280 is hot stuff!

Sporting almost the same configuration as Nvidia’s reference design, BFG’s GeForce GTX 280 delivers amazing performance with the second-generation DirectX 10 GPU from Nvidia. It soundly spanks ATI’s new 4870, as well as all but the dual-GPU graphics solutions from the previous generation—and even against those, the GTX 280 wins all but a few benchmarks.

The real question we’re asking is, Do we need this much power? Luckily for Nvidia, the answer is yes. The company’s GT200 GPU, which forms the heart of the GeForce GTX 280 and 260 boards, is a great performer, despite its massive footprint and huge energy requirements. BFG overclocked the GPU core ever so slightly—it runs at 615MHz—while the GDDR3 memory ticks along at a stock 1107MHz. The GTX 280 features 240 stream processors running at 1350MHz—a touch more than double the GPU’s core speed.

Click here for the rest of the review, benchmarks, and our verdict!

Sapphire Radeon 4870 X2

The fastest videocard we’ve ever tested—for the most part

We’ve long considered videocards that sport two GPUs second-class citizens. They have all the problems of multi-card solutions—namely application incompatibilities and no multi-monitor support—but fail to perform as well as dual-card solutions, since multi-GPU cards usually use slower midrange GPUs.

That’s finally changed with the new RV770-powered Radeon 4870 X2, which mounts two of ATI’s fastest GPUs on a single card, without sacrificing power-user features like multi-mon support.

Click here for the rest of the review, benchmarks, and our verdict!

Nvidia GeForce GTX 295

We’ve made no secret of the fact that we love the pulse-pounding speed that ATI’s Radeon 4870 X2 boards deliver, but there’s a new speed king in town—the GeForce GTX 295. On paper, the two GPUs on the 295 fall somewhere between the GTX 260 and GTX 280, but this board delivers a crushing performance blow to ATI’s fastest part.

The GTX 295’s GPUs feature 896MB of GDDR3 memory and the full complement of 240 shader cores previously seen only on GTX 280 boards (current GTX 260 boards have just 216 shader units). However, the core and memory clocks are a touch below those of the single-GPU GTX 280 boards—576MHz and 999MHz respectively. Additionally, the new GPU is Nvidia’s first to step down from a 65nm to a more efficient 55nm process. The benefit? Mega-speed in one double-wide card. Even with the process-size shrink, the card requires a new mid-mounted cooler—that’s right, the heatsink/fan is sandwiched between two boards, each with its own GPU and memory.

Click here for the rest of the review, benchmarks, and our verdict!

CPU Cooler Buyers Guide

Overclocking? A great heatsink and fan will help you avoid a fried CPU

By David Murphy

You can’t just eyeball a cooler. The size of a fan doesn’t dictate its capacity for chilling your processor. We’ve tested huge models that do anything but, and we’ve put our hands on smaller devices that have blown the heat right out of our test rig. The only true way to evaluate a cooler’s prowess is to strap it over the top of your CPU and test it. But that’s easier said than done.

Here are a few tips to steer you in the right thermal direction. For starters, you should consider the total amount of heat dissipation a cooler has to offer, versus its actual ability to cool. What the heck does that mean?

A cooler with a great mass of fins will heat up more slowly than one equipped with less surface area for dissipating heat. But you need more than fins to prevent your processor from overheating; you must also consider the capacity of the cooler’s fan to draw air over those fins.

A big, bulky cooler with lots of fins trades size for noise reduction, because it can use a fan that spins at low RPM. A cooler with less surface area will fit in a smaller enclosure and is less likely to interfere with other components inside the case, but it will need a fan that spins much faster in order to pull in enough air. And a rapidly spinning fan can turn a quiet PC into a wind tunnel.

A cooling solution that combines lots of fins with a powerful fan will deliver the best of both worlds. You can easily quiet your PC by dialing back the fan speed in the BIOS, but you’ll never squeeze the best performance out of your CPU using an underwhelming cooler.

Alpine 7 Pro

Look no further for a gentle upgrade over your stock Intel cooler

Given its small size, we didn’t expect maximum cooling performance from Arctic Cooling’s Alpine 7 Pro. And while the Alpine 7 Pro doesn’t set any performance records, in some situations it does match the capabilities of our cooler of choice, Thermaltake’s DuOrb. Given the sheer size difference between this 9x9x3cm cooler and the, well, monstrous DuOrb, the Alpine 7’s performance was a pleasant surprise.

The Alpine 7 Pro does an exceptional job of cooling when your processor is idle—it even ran head to head with the DuOrb in this capacity. Both coolers dropped the temperature of all four cores of our Q6700 to 36 C, but when we cranked up our processor to 100 percent usage rates, the Alpine 7 faltered.

Click here to read the rest of the review!

Thermaltake DuOrb

This figure eight is great!

Zalman’s CNPS9700 has been the Godzilla of coolers and a Best of the Best champion for more than a year. But it’s finally facing its Megalon in Thermaltake’s DuOrb cooler.

Unlike the CNPS9700, which has an 11cm fan strapped to the side of its imposing copper and aluminum frame, the DuOrb’s heatsinks are stretched out horizontally. The extra-wide cooler, shaped in a 20-centimeter-wide figure eight, comes with two 8cm blue and red LED fans tucked inside two rings of copper fins.

Click here to read the rest of the review!

Zalman CNPS9300 AT

Big power in a tiny package

When we first got our hands on Zalman’s CNPS9300 AT, we assumed that the company had pulled a “Honey, I Shrunk the CPU Cooler” on its flagship product, the bulky CNPS9700. The CNPS9300 is 80 percent smaller, and its total thermal dissipation area has been more than halved, from 5,490cm2 to 2,583cm2.

Logic dictates that this cooler should perform far worse than the CNPS9700. But the built-for-silence CNPS9300 AT nearly matches its big brother’s performance—as well as that of our top cooler, Thermaltake’s DuOrb.

Click here to read the rest of the review!

Cooler Master V8

A healthy cup of cooling for your CPU

Cooler Master’s V8 CPU cooler makes up for a somewhat time-consuming installation process with near-record-setting performance for an air cooler. The sleek aluminum cooler’s 12cm fan sits between two heatsinks on the device, sparing fingers from the accidental nip of its 800rpm-to-1,800rpm variable fan.

The V8’s installation process is similar to that of most other CPU coolers, but with a few more screws involved. On an Intel platform, you start by attaching two retention plates to the cooler itself. You then remove the motherboard from your case and flip it upside-down, matching the ends of the cooler’s four retention screws with the mobo’s holes. You balance this contraption in your lap while using four large nuts to secure this beast of a device in place. You can also use an included backplate to mount the device, but we found the nut method to be far easier.

Click here to read the rest of the review!

Case Buyers Guide

These enclosures lend new meaning to the phrase “boom box”

By David Murphy

When we review a case, we take the time to build out a complete system: installing everything from the motherboard to the videocard and power supply. In our view, that’s the only valid means of assessing an enclosure’s merits and shortcomings.

Manufacturers invest big bucks designing the exteriors of their products, adding lights, motorized doors, and other (sometimes goofy) features in an effort to make their box stand apart on crowded store shelves. But if a manufacturer doesn’t dedicate just as much attention to crucial details—drive bays and mounts, the power-supply rack, airflow, front-panel connections (for eSATA, USB, FireWire, and audio), accommodations for air and liquid cooling, and other bread-and-butter features—you’ll curse every ounce of that bling while you’re building and every time you go back to tweak, upgrade, or plug in a component.

That’s not to say that an excellent case should be ugly; after all, it’s your rig’s most visible attribute. Indeed, the cases that earn Maximum PC’s coveted KickAss award are enclosures that manage to strike that delicate balance between form and function, beauty and utility, elegance and ruggedness.

Gigabyte Poseidon 310

Raise a trident in honor of this chassis

Instead of a god of the sea, Gigabyte’s midtower Poseidon 310 chassis is a petite prince. But that’s merely a reflection of this case’s size, not its prowess. It measures 7.75”x17”x20”—small enough to fit into that nook in your desk or the space under your bed.

Even given its small size, the Poseidon supports up to five 5.25-inch devices. We’re unsure why this case—or any case, for that matter—still bothers with multiple external 3.5-inch bays. You get two helpings of them on the Poseidon. We would have rather sacrificed these and an additional 5.25-inch bay in favor of more internal hard drive space.

Click here to read the rest of the review!

Cooler Master HAF

This is the new case to beat for air-cooling aficionados

Cooler Master’s newest HAF (High Air Flow) chassis is the company’s magnum opus, successfully unifying the best elements from various previous efforts. Even better, the HAF introduces a number of new features that raise the bar for case design.

Case cooling is the HAF’s centerpiece. Three 23cm fans at the top, front, and side of the HAF circulate air even when running at just 700rpm, balancing air flow with acceptable noise levels.

Click here to read the rest of the review!

Silverstone Kublai KL03

No stately pleasure dome this

The Kublai KL03 offers a hodgepodge of features that Silverstone never manages to get just right.

Take the giant horizontal retention bar, for example. It supports two 12cm fans (which aren’t included) and comes with a number of sliding bars for holding your PCI-based components in place. But the retention bar, which must be screwed down to stay in place, did little to support our test rig’s guts.

Click here to read the rest of the review!

In Win B2

Tower, this is Ghost Rider requesting a fly-by

In Win can’t resist building gimmicks into its chassis. The company’s B2 isn’t quite as ostentatious as some of its past efforts—unless you think the motorized front panel is over the top.

This midtower chassis does, however, take its B2 theme to extreme levels. The vent on the case’s snap-locking side panel looks like a Stealth bomber and the case’s exterior is peppered with aeronautical jargon. We love the look, but working in this case is a different story.

Click here to read the rest of the review!

Koolance PC4-1025BK

Awesome cooling isn’t enough to make this case awesome

Koolance’s PC4-1025BK seems like a perfect power-user box, until you realize that this water-cooling-enriched case is just too small for enthusiast hardware.

A water-cooling mechanism is integrated into the chassis: Koolance’s KIT-1000KB cooler, a tri-fan setup that comes with a front-mounted controller for the fans’ speeds. The whole getup is a tidy little package that cools monstrously even when using the quietest mode the PC4-1025BK offers.

Click here to read the rest of the review!

Build Your Own No-Compromises Mid-Range PC

With the right parts and a little bit of elbow grease, you can have yourself an affordable DIY rig that does it all!

Any fool can spec out the ultimate Dream Machine. Just open up your wallet, pull out the Visa card, and tell the web store to overnight its most expensive parts to you. Voila! You’ve got the makings of a badass rig.

It’s an entirely different story when you’re building a machine on a budget. You’ve got to carefully weigh your options, consider every possible combination of parts, and make choices you know you can live with—because, typically, you will make some sacrifices. To be honest, the whole process can be downright painful. The last time we built a mid-range PC (February 2007), we had to break so many of our own minimum spec guidelines that no one was truly satisfied with the results.

But this year is different. We found the job of building a rig in this price range actually pleasurable—with minimal trade-offs. In fact, we’re tickled pink with the quality of the hardware we got into this box. So much so, we can unabashedly say that this PC can stand up to damn near any task. Read on to find out what parts we picked, why we picked them, and why there’s never been a better time to assemble your own no-compromises PC on a budget.

Parts and Price List

Why We Chose these Parts

Intel’s 2.8GHz Core 2 Quad Q9550 CPU

We briefly toyed with the idea of using AMD’s new Phenom quad core for this year’s mid-range PC. After all, AMD has priced the new CPU quite attractively. There’s also an argument for the forward-looking upgradability of the AM2 platform.

In the end, however, we decided to go with Intel since its current performance roadmap is unquestionable.

We remain nervous about where AMD is headed. While its new Radeon GPUs are a hit, the CPU division seems to change direction each quarter; it’s just safer to go with Intel right now. And it doesn’t get any safer than the 2.83GHz Core 2 Quad Q9550.

Enough applications now take advantage of four cores that we felt compelled to go quad with this year’s box. Being able to squeeze a 45nm CPU into our mid-range box makes us doubly happy.

In a nutshell, we see Intel’s 45nm Penryn processors as the mainstream future for Intel. We’ll run the Q9550 for the next 12 to 18 months and then see how far Nehalem prices have fallen.

We’re not giving up on AMD, either. There’s always the chance that the company’s CPU team will take a page from their GPU colleagues and produce a part that knocks Intel back on its heels the way the Radeon 4800-series has knocked Nvidia for a loop.

We can dream, can’t we?

MSI’s P45 Platinum Motherboard

Once we decided to go with an Intel CPU, we had a hard time selecting a chipset/motherboard combination.

We considered rolling with an Nvidia chipset, which would have given us the option to run videocards in SLI down the road, but AMD’s Radeon 4800-series GPUs are such a great value that we decided we’d be satisfied with a CrossFire configuration if we decided to add a second videocard down the road.

We even considered (albeit briefly) a board using AMD’s 790GX chipset, which has integrated graphics and is capable of running in hybrid mode (switching over to a dedicated videocard when the workload warrants). The candidate was Gigabyte’s MA790GP-DS4H, but we quickly discovered that the 790GX’s hybrid mode limits you to a lowly Radeon HD 3400-series videocard.

In the end, we decided that Intel’s P45 platform made more sense for our needs. Intel chipsets are rock-solid—you don’t have to worry about any of the teething pains third-party chipsets often experience, especially with new CPUs.

So we selected MSI’s P45 Platinum. It’s not exactly cheap, but neither is it extravagant. It supports PCI Express 2.0, offers a host of memory-tweaking options, and handles up to 16GB of memory.

AMD’s Radeon HD 4870

Trying to configure a new PC can be a massive mind bender. You’ll have to not only figure out which CPU you want and what CPU you’re going to eventually upgrade to but also factor in the GPU choice and its potential upgrade path.

We’ve long chosen Nvidia-based videocards because they’ve been incredibly fast for the money. But Nvidia took its eye off the ball this last design cycle and AMD took supreme advantage, moving to a smaller process size that enabled it to build a killer GPU that delivers performance equivalent to Nvidia’s best but at a fraction of the cost.

We could have gone cheaper still and dropped a Radeon HD 4850 in our shopping cart, but we saved enough money on the rest of our components that we could splurge just a wee bit.

What about CrossFire? The appeal of running dual videocards is limited to ultra-high-resolution gaming. Even then, CrossFire frequently doesn’t run new games any faster until driver updates are released.

As sexy as it is to pack multiple cards, we think it makes more sense for mid-range buyers to buy one very fast card. When it’s time to upgrade again—typically in 18 months—your money is better spent buying the fastest next-generation card.

4GB Patriot DDR2/800 RAM

One of the glaring weaknesses with last year’s mid-range PC was RAM. With RAM prices through the roof at that time, all we could afford was 1GB of DDR2/800.

Well, what a difference a year makes. While 1GB cost $150 last year, we were able to buy 4GB of Patriot DDR2/800 RAM for much less than $100 this year.

Why Patriot RAM? At this price, it’s all about bang for the buck, and after surfing the online stores, we picked the Patriot modules because they offer slightly better-rated latency for about the same cost as the competition.

Our RAM configuration isn’t that simple, though. Although the board posts just fine with 4GB, running a 32-bit Windows OS doesn’t quite give you full access to all the RAM.

Check Windows XP and it’ll report only 3.25GB of usable memory. So has the other .75GB of memory been wasted? Not quite. It’s a complicated issue, but Microsoft argues that even if the applications cannot use all 4GB of RAM, the OS, and even the drivers, will, so the additional headroom does help.
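
For the curious, here's one plausible (and deliberately simplified) breakdown of where that missing memory goes. The exact reservations vary by motherboard and add-in cards, so the numbers below are illustrative assumptions, not gospel.

```python
# 32-bit Windows has 4,096MB of address space; devices that need memory-mapped
# I/O (most notably the videocard's frame buffer) carve their reservations out
# of that space, leaving less room for physical RAM. Figures are illustrative.
address_space_mb = 4096
gpu_frame_buffer_mb = 512   # our Radeon HD 4870's memory gets mapped here
other_mmio_mb = 256         # chipset, PCI devices, BIOS shadowing, and so on
usable_ram_mb = address_space_mb - gpu_frame_buffer_mb - other_mmio_mb
print(usable_ram_mb / 1024, "GB visible to a 32-bit OS")  # 3.25 GB
```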

All we know is that we’re happy to quadruple our RAM footprint for less money than we spent last year. Now that’s what we call positive technological progress.

Ready, Set, Build

Once you’ve collected all your parts, you’re just 13 steps away from a hand-built PC you can be proud of

1. Mount the Power Supply

Image A

Image B

We like a case that gives you options as to where you mount the PSU. This Cooler Master CM Stacker allows you to mount it in the traditional spot up top or place it below the motherboard. We’re sticking with tradition here, but the process is pretty much the same no matter where you mount yours. First, remove the back plate covering the PSU slot by removing the two screws that hold it in place. Then carefully slide the PSU in place, pushing the wire bundle from the PSU through the front of the cage (image A). Now attach the PSU bracket to the case (image B). The power supply will mount to this bracket.

2. Install the Optical Drive

Image A

Image B

You’ll need to remove both the left and right sides of the case in order to mount the optical drive. If you’ve chosen a case with a tool-less design, you’ll see a row of quick-release latches that hold the drives in place. Open the latches at the spot where you want to insert your drive. Now remove the bezel from the slot on the front of the case where your drive face will be. (While you’re at it, you should also remove the three lower bezels that block access to the hard-drive cage.)

Most modern cases use a quick-release rail system to mount optical drives. You’ll find two plastic rails (marked Right and Left) in the case’s parts box; attach each to the appropriate side of the optical drive (image A). The rails do not screw into the drive, so don’t bother trying. Now slide the drive into its slot in the case and flip the quick-release levers (image B).

3. Install the CPU

Image A

Image B

Installing the CPU is the most delicate operation you’ll perform when building your system. If you’re a ham-fisted tyro or you’ve got the coffee shakes really bad, get someone who’s more deft to install your proc rather than risk destroying your motherboard. First, remove the protective plastic shield covering the socket on your motherboard (image A). Set this cover aside in case you need to return the board (or store it). Many manufacturers will not take back a board without this plastic shield in place. Now unclip the metal arm alongside the socket and flip out the load plate (image B). To install the proc, match the two notches in the CPU with the tabs in the socket. Grasp the CPU with two fingers and carefully lower it straight into the socket while keeping it parallel to the plane of the socket. Do not drop one side first and do not slide the CPU around in the socket, as you could bend the pins and kill your mobo.

4. Add CPU Cooling

Image A

Image B

We chose a retail CPU package, which provides a stock Intel cooler. Although Intel ships different coolers with its CPUs, the Intel Core 2 Quad Q9550 comes with a fairly decent model. It’s quiet, efficient, and easy to install. A brand-spanking-new heatsink should have thermal paste pre-applied to it (image A). To install the heatsink, match the four legs with the holes in the motherboard and gently push them in. Each leg’s locking mechanism should be in its install position out of the box, with the arrows facing out. Lock each leg into place by pushing on the locking mechanism until it catches, working in a crisscross pattern (image B). To verify that you’ve done it correctly, flip the motherboard upside down. You should see the four legs protruding slightly through the bottom. If one is not locked in place, turn the locking mechanism counterclockwise—so the arrow faces in (you may need to use a slotted screwdriver to do this). Pull the leg straight up, turn it clockwise, and try again. Now plug the four-pin power cable into the four-pin mobo header labeled “CPU fan."

5. RAM it home

Image A

Image B

You’ll get the most performance out of your PC by installing RAM in dual-channel mode. The method for doing this varies among motherboard brands. On the MSI board, dual-channel mode requires that you put one stick in an orange slot and the second stick in a green slot.

Place the board on a flat, stable surface. Put your antistatic bag beneath the board if you don’t have a good static-free work area. Next, locate the notch on the RAM and match it with the notch in the slot (image A). Pop the arms open for the slot you’ll be filling, put your fingers on the ends of the module, and gently push the RAM in place until the arms lock into position (image B). If the RAM doesn’t go in, double-check that the notches line up and try again. When you’re done, make sure the arms that hold the RAM in are in their closed position. If they are extended, they could impede your GPU or even damage it.

6. Cage the Drive

Image A

Image B

Installing the hard drive is a straightforward process, but getting to the hard-drive cage in some enclosures is another matter. If your case uses a removable cage, unlock the quick-release arms on the left and right sides of the case. Pull the cage straight out. To get to the screws that you’ll use to mount the drive, pull the side plates off of the cage. They’re held there by simple friction and will come loose with just a slight amount of pressure.

Now, as you have done for the last 20 years, use four coarse screws to mount the hard drive in place (image B). Put both sides of the drive cage back on (along with the drive rails), and slide the cage back into the case. Push the bezels in place, lock the arms in place, and you’re done! With the hard drive, anyway.

7. Mount the Motherboard

Image A

Image B

Image C

Before you can mount the mobo, you must install the I/O shield—that little metal plate that frames all the inputs and outputs on the back of your case. The shield should have come with your mobo. If it didn’t, you’ll have to contact the maker for a replacement or mount the board commando. Use the butt of your screwdriver to pop out the shield that came with the case. Now take the shield that came with your board and pop out the squares for the necessary ports. Make sure the grounding arms for the Ethernet, eSATA, and PS/2 ports are bent in toward the case’s interior (image A). Otherwise, they’ll get tangled in the ports when you install the board.

Next, find the bag of brass motherboard standoffs that came with the case. Install these in the case, making sure you have one for each mounting hole in the motherboard. Use pliers to torque them down enough so they don’t back out should you need to remove the mobo (image B). Take note of how many mounts you installed. The typical number is nine. Now drop the board in and screw it down (image C). Use just enough force to keep the screws from backing out, but not enough force to crush the PCB.

8. Drop in the GPU

Image A

Image B

Our MSI board hosts two physical x16 PCI-E slots. We say “physical” because only one will operate at x16 data rates if both slots are occupied (in a dual-videocard CrossFire configuration, for instance). Before you install your videocard, remove the expansion slot cover from the rear of the case where the card’s ports will emerge. You can toss the cover or keep it as a back scratcher. When you put the card in, make sure it is firmly in place with all of the contacts securely in the PCI-E slot (image B).

A common error is to insert the card so the contacts sit just outside the slot. Another common error is for the contacts to not make a complete connection with the slot. This is usually the result of a bent case enclosure causing a gap between the card and the case. You can sometimes fix the problem by bending the case back in place. Now use the two thumbscrews that had held the expansion slot covers to screw the card in place.

9. Install the Soundcard

Image A

Image B

We didn’t have to forgo discrete audio this year, so we reached for a budget X-Fi card. There are two options at this price range, but only one is really worthy of being called an X-Fi: the XtremeGamer. Its cousin, the XtremeAudio, doesn’t actually feature a full X-Fi chip and does not support EAX 5. One thing Creative did right with the XtremeGamer is include a standard front-panel audio header on the card. This lets you use the front headphone and microphone jacks.

Grab the Poseidon cable labeled “AC97, HD Audio.” Insert the plug labeled “HD Audio” into the header on the card—it’s keyed and should fit only one way (image A). Now remove the expansion slot cover from the back of the case where the card’s ports rest and firmly insert the card into a PCI slot (image B). As with the GPU, make sure the card is firmly in place and that all of the contacts are connecting. Screw the card in place and you’re good to go.

10. Connect the Umbilical Cords

Image A

Image B

Connect the Gigabyte Poseidon 310’s two front-panel USB ports by connecting the case’s USB cables to the USB headers on the P45 Platinum. Each is keyed, so they cannot be installed incorrectly. Plug them into the JUSB1 header on the motherboard (image A). (Note: This motherboard supports more USB ports than this particular case does.) Now use the SATA cable that came with the mobo to attach your hard drive. Simply plug one end into an available purple port on the mobo and plug the other end into the drive (image B). One caveat: A SATA connector is delicate.

Once you have it in place, do not put pressure on it or you may snap it; this can be fatal, especially if you snap the connector on the drive. Two of the MSI’s internal SATA ports are inconveniently located where a long videocard might block them. Fortunately for storage mavens, there are six other SATA ports that are situated much more conveniently. If you really need SATA ports six and seven, you can get around the problem by purchasing right-angle SATA cables. There’s also an eSATA port on the back of the motherboard.

Now grab another SATA cable (several of these should have come with your mobo), attach one end to your optical drive and the other to a vacant SATA port.

11. Finish off the Front Panel

Image A

Image B

Remember in Kung Fu how Caine had to grab a pebble from the old man’s hand to prove how badass he was? Well, installing a computer’s front-panel connectors is kind of like that. Only after years of study and apprenticeship will you be able to plug in the front-panel connectors without getting it wrong at least once. Nevertheless, we’ll try to do it right here. Grab the rainbow cable with the Power SW, HDD LED, Reset SW, and Power LED connectors and take a close look at the front-panel header. You’ll notice that the yellow header is color-coded and also features a plus symbol on the positive connector (image A).

The motherboard manual will spell out where each cable goes, but since the board follows the Intel FP connector standard, we know that the power switch connects to black, the power LED connects to green, reset goes to blue, and the hard drive LED goes to red. The orientation of the power-on and reset switches doesn’t matter, but you’ll have to match the positive and negative with the LED indicators. If you do it wrong, don’t worry; there’s no risk of destroying anything. We got it right on the first shot, however, because our kung fu is that strong.

12. Add Power to all Parts

Image A

Image B

Image C

You’re in the final stretch. The penultimate chore is to power up all the components. One feature of our mid-range rig we’re particularly happy with is the PC Power & Cooling power supply. Usually, one of the first compromises you make when building a budget box is with the PSU. The Silencer 750 is relatively quiet and gives us reliable power output under all conditions. We can’t say that about other inexpensive power supplies, some of which have faded on us after 18 months of duty.

First, make sure the PSU is not connected to the wall socket. Now, begin by plugging the main power connector into the motherboard (image A). This 24-pin connector is keyed and cannot be inserted incorrectly. You should hear a soft click as it locks in place. Make sure it’s firmly in place—double-check by gently trying to pull out the connector. A common building error is to have the plug just slightly off kilter, which will cause booting problems or a failure to boot at all. Now grab the eight-pin ATX power connector. You’ll plug this into a socket on the mobo, southeast of the CPU (image B). Failure to plug this in is another common error. Next up is the GPU’s power.

The PSU includes two cables with six-pin connectors, one of which has a break-away two-pin connector that we won’t need (but that gives us the option to run an even more powerful videocard down the road).

The connectors will click into place in the back of the Radeon HD 4870 board (image C, although we used a single-socket Nvidia board as a stand-in). Now plug the power cables into the SATA drive and the optical drive. It’s time to fire this mutha up!

Overclock your PC

Increase the performance of your CPU—for free! We tell you everything you need to know to do it safely and effectively

If you’re running your CPU at stock speeds, you’re missing out on your PC’s true potential, because processors often harbor power beyond their official specs. Your proc, for example, might be rated to run at 3GHz but is actually capable of operating reliably at 3.3GHz. There are myriad reasons for the hidden headroom, ranging from natural variance among parts (even those made from the same batch), to the manufacturers’ practice of underclocking parts to meet market needs, to the improved capabilities of a part over the lifetime of its production.

The point is, you’re not a true power user if you leave a CPU’s hidden performance potential untapped. And the only way to release your proc’s inner speed demon is to overclock it. This story will tell you how. Even if you’ve dabbled in the practice in the past, you’ll want to read on. Because just as CPUs have changed over the years, so has the art of pushing them to their limits. Over the following pages we’ll tell you everything you need to know about overclocking today’s CPUs, be they AMD- or Intel-branded. We’ll explain what’s involved, how to determine what your hardware is capable of, and how to achieve optimal results. Most importantly, we’ll tell you how to overclock safely. Indeed, overclocking is serious business and should never be taken lightly.

When you tamper with the internal workings of your computer’s parts, you do so at your own risk. Overclocking can damage, or even destroy, your CPU, motherboard, RAM, or other system components, and it can void the warranty on those parts. So consider yourself warned about the potential hazards! That said, you’re unlikely to harm your hardware if you overclock with extreme caution and care. And following the advice and instructions we lay out here will help you. So let’s get started!

The Basics of CPU overclocking

Pushing your processor to new heights can be extremely rewarding—if all goes right. Take the time to understand the concepts of overclocking and the factors that can affect success

Determining a CPU’s Speed

There’s simple math that determines the clock speed of any CPU. Each CPU has a fixed internal number called the clock multiplier. That number multiplied by the reference clock of the front-side bus determines the stated clock speed of the processor. For example, an Intel 2.66GHz Core 2 Quad Q6700 has a clock multiplier of 10. The stock system bus speed for this processor is 1,066MHz. But wait, 1,066MHz multiplied by 10 equals more than 10GHz. What gives? Intel’s front-side bus is quad-pumped, so its actual reference clock is 266MHz (1,066MHz divided by four). That makes the clock speed of a Core 2 Quad Q6700 10 times 266MHz for 2,660MHz, or 2.66GHz.

This same math applies to AMD’s Athlon 64 CPUs, although, technically, they have no front-side bus; instead, a HyperTransport link connects the CPU to the chipset. A 2.6GHz Athlon 64 X2 5000+, for example, operates on a 13x multiplier using a 200MHz link—the actual HyperTransport link connection runs at 1GHz, as it operates on a 5x multiplier.
You can overclock both Intel and AMD CPUs by increasing the multiplier setting, increasing the “front-side bus,” or both. Depending on your situation, a combination of the two may give you the best balance of speed and stability, as your motherboard may simply not be up to running its bus at excessively high speeds.
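
To make the math concrete, here's a tiny Python sketch of the multiplier-times-reference-clock formula described above, using the two example chips from this section (the function name is just for illustration).

```python
# Stated clock speed = clock multiplier x reference clock. Intel's quad-pumped
# FSB means the reference clock is the advertised bus speed divided by four;
# the AMD chip below uses a 200MHz HyperTransport reference clock.
def core_clock_mhz(multiplier, ref_clock_mhz):
    return multiplier * ref_clock_mhz

print(core_clock_mhz(10, 1066 / 4))  # Core 2 Quad Q6700: ~2,665MHz, i.e., 2.66GHz
print(core_clock_mhz(13, 200))       # Athlon 64 X2 5000+: 2,600MHz, i.e., 2.6GHz
```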

Multiplier Locking

CPU manufacturers will take measures to ensure that a processor runs at its intended speed by locking the multiplier. This fixes the multiplier setting, so it cannot be changed in the BIOS. This is done primarily to keep CPU “re-markers” from selling cheaper parts as more expensive ones, but it also serves to thwart overclockers.

But not every chip is locked. Intel’s Extreme series of CPUs does not feature multiplier locking, nor does AMD’s FX series or some of its new Black Edition CPUs. This gives overclockers who pay the extra price of admission more flexibility in their adventures. A 2.66GHz Core 2 Extreme QX6700 CPU, for example, can be overclocked to 2.93GHz simply by increasing the multiplier from 10 to 11, without having to resort to front-side bus overclocking.

The Role of Core Voltage

When you overclock, you essentially run the CPU out of spec. Upping a CPU’s core voltage allows you to run a CPU way out of spec by further increasing your overclocking headroom. For example, a stock Intel Core 2 Duo E6600 running at 2.4GHz eats about 1.2 volts. To get the same CPU up past 5.6GHz, one overclocker increased the core voltage to 1.9 volts. As you can imagine, if AMD and Intel designed a CPU to operate at a certain voltage, running it higher will greatly decrease the life expectancy of the CPU. This is the most dangerous element of overclocking. The worst we’ve personally seen from overclocking a CPU via its multiplier or front-side bus is instability or a corrupted OS. But by adding a ton of voltage to a processor, you risk nuking it. Proceed with caution!

Have the Right Hardware

Your CPU isn’t the only part that matters in your quest for more speed. Here are the other components to care about


Motherboard

You get what you pay for and, generally, a cheap-ass motherboard will yield marginal overclocking results. The more expensive the motherboard, the more likely it is to have better components and better overclocking capabilities. That doesn’t mean all sub-$100 mobos are overclocking duds, but you’ll have to troll the forums and customer reviews on enthusiast sites to determine if a cheapo mobo can OC.

One additional tip: If you’re buying a motherboard for overclocking, you’ll likely have the best success with the latest “spin.” Mobo vendors update their boards with fixes and more recently built boards will usually overclock better.

Power Supply

We’ve long said that the PSU doesn’t get the attention it’s due, and that’s especially true when it comes to overclocking. The fact is, the need for clean, reliable power is of utmost importance if you’re pushing a CPU, RAM, and motherboard to the edge.
If you have a high-compression engine in your street racer, are you going to fill it with 85 octane fuel? A cheap power supply is the equivalent of questionable Kwik-E-Mart gasoline. For overclocking, you don’t need a 1,200-watt PSU, but you do need a name-brand unit. Generally, it’s safer to have a PSU that delivers a bit more power than you need. While it may not be the most power-efficient scenario, a 750-watt PSU running at 450 watts will probably live longer than a 500-watt PSU running at 450 watts.


Cooling

Excessive heat can cause system instability, so it’s essential to keep your overclocked CPU cool. To achieve extremely high overclocks, some hobbyists bathe their CPUs in liquid nitrogen. Others use phase-change units (essentially tiny freezers) to push 3GHz chips past the 5GHz mark. The point is, you can’t expect to push your 1.86GHz proc to a reliable 4GHz using a $12 heatsink. Know your overclocking goals and then choose your cooling accordingly. Air cooling is the most modest solution, followed by water cooling, peltier/liquid combinations, phase change, and exotic liquids, such as liquid nitrogen. Also remember that the extra heat produced by overclocking will warm up the rest of your machine, so you may have to upgrade your case’s cooling or the case itself if you experience overheating issues. For our CPU cooling recommendations, see the CPU Cooler Buyers Guide earlier in this feature.


RAM

In the old days the front-side bus’s speed was tied to the speed of the system RAM, so you had to overclock both. That’s no longer the case, but some folks still prefer to give their RAM some extra juice. This is the purpose of pricey, high-performance RAM. It’s been certified by the RAM manufacturer to operate at a given “overclocked” speed. We say overclocked because RAM speeds and timings are actually spec’d by an organization called JEDEC. The top standard speed of DDR2 today is 800MHz. Overclockable DDR2 RAM generally runs in the 1,066MHz range, with some pricier modules pushing 1,250MHz. While it’s not necessary to overclock your RAM to overclock your CPU, there are some instances when you will get improved performance if the FSB and RAM run at closely synced speeds. Some applications will also favor the increased bandwidth of overclocked RAM. Your research will help you determine if this applies to your parts.
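
If you’re wondering what that extra bandwidth actually amounts to, the math is straightforward: DDR2’s peak bandwidth is its effective data rate times 8 bytes (a 64-bit module), and dual-channel mode doubles it. Here’s a quick sketch using the standard and “overclocked” grades mentioned above.

```python
# DDR2 peak bandwidth = effective data rate (MT/s) x 8 bytes per transfer;
# running two matched modules in dual-channel mode doubles the figure.
def ddr2_bandwidth_gb_per_s(data_rate, channels=1):
    return data_rate * 8 * channels / 1000

for rate in (800, 1066, 1250):
    print(f"DDR2-{rate}: {ddr2_bandwidth_gb_per_s(rate):.1f} GB/s single-channel, "
          f"{ddr2_bandwidth_gb_per_s(rate, 2):.1f} GB/s dual-channel")
```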

Overclocking Intel's Core 2

The steps we take to push our Core 2 Extreme QX6700 beyond its 2.66GHz stock speed can be applied to any modern Intel processor

Step 1: Back Up Your Data

While the risk of hardware loss is generally very low, there’s always the possibility of OS corruption or data loss.

Step 2: Enter Your BIOS

Get into your BIOS by hitting the Del, F1, or F2 key during boot. The key will vary by motherboard, so check your documentation if you’re not sure what to press. Once in the BIOS, you will need to find the appropriate configuration screens for overclocking. The screens we refer to in our examples are specific to the EVGA 680i SLI motherboard—they will differ from BIOS to BIOS. Your mobo manual or an online search can provide guidance, but often you just need to dig around.

Step 3: Goose Your CPU’s Multiplier

One way to overclock your Intel CPU is to increase its multiplier—if it’s unlocked, which is true for any Extreme-class Intel processor. The downside to doing a multiplier-only overclock is that there is very little granularity. Taking a 2.66GHz Core 2 Extreme QX6700 from its stock 10x multiplier to 12x jumps you all the way to 3.2GHz. If you want to hit 3.1GHz, a multiplier overclock won’t let you do it. Try increasing your CPU’s multiplier just a notch or two (in our BIOS, the multiplier setting is in Advanced Chipset Features, System Clocks). Then reboot your system and see how it runs. If your system crashes or won’t start, see Step 7.

Step 4: Increase Your Front-Side Bus Speed

The other, more likely, way to overclock your Intel CPU is through the front-side bus. By bumping the FSB beyond its stock 800MHz or 1,066MHz, you increase your CPU’s clock speed. On the majority of CPUs, this will be the sole overclocking option, as only the most expensive Intel chips are unlocked. On our EVGA 680i board, we went into Advanced Chipset Features, FSB & Memory Config.

Here, we set the FSB Memory Clock Mode to Unlinked. This effectively separates the RAM clocks from the front-side bus. (If your chipset doesn’t allow you to unlink the RAM, you will need to choose an FSB-to-RAM speed ratio; make sure your choice keeps you within your RAM’s spec. See Step 6 for more info.) Increase your FSB in 20MHz increments, rebooting with each increase to see if your machine will boot (if your system crashes or fails to reboot, see Step 7). With the multiplier set at its stock 10x, we pushed our 2.66GHz Core 2 to 3GHz by increasing the FSB speed from its stock 1,066MHz to 1,200MHz.
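
The same multiplier-times-reference-clock math from earlier tells you where each FSB bump will land you. A quick sketch with Step 4’s numbers (stock 10x multiplier, quad-pumped FSB):

```python
# Core clock = multiplier x (quad-pumped FSB / 4), with the multiplier at stock 10x.
multiplier = 10
for fsb_mhz in (1066, 1200):
    print(f"{fsb_mhz}MHz FSB -> {round(multiplier * fsb_mhz / 4)}MHz core")
# 1,066MHz FSB -> ~2,665MHz (stock); 1,200MHz FSB -> 3,000MHz (our 3GHz overclock)
```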

Step 5: Add Some Voltage

We wanted to go beyond the 3GHz we achieved, but our attempts at pushing the FSB further made our system unstable. There’s still hope for more speed if we increase our CPU’s voltage. In our BIOS’s Advanced Chipset Features, System Voltages screen, we can increase the CPU voltage, the chipset voltage, and the memory voltage, in addition to the voltage of a few other parts. By pushing the CPU voltage of our early-rev 2.66GHz Core 2 Extreme QX6700 from 1.11 volts to 1.39 volts, we’re able to push the FSB up to 1,333MHz and achieve a stable 3.2GHz CPU speed. How much voltage is safe? It’s difficult to say, as the number differs among CPUs and motherboards. We recommend that you troll forums and overclocking databases to see how far people are going with individual chips. We can’t give general recommendations on voltage as each CPU has different specs and anything over stock could nuke your chip.

Step 6: To Overclock RAM or Not?

So you’re satisfied that the CPU is running far above its rated speed, but now you want to overclock the RAM. As we noted above, our nForce 680i board offers the option to run the RAM linked or unlinked. Linking the RAM sets the RAM speed as a ratio of the front-side bus’s clocked speed. The ratios are determined by the chipset, and in our case, we could choose from FSB:memclock ratios of 1:1, 5:4, 3:2, or Sync mode, which is effectively a 2:1 ratio. Picking any of the settings will change the RAM speed. For example, if you push your FSB to 1,066MHz and choose a 1:1 ratio, your RAM speed will hit 1,066MHz—if you’re using overclockable memory (see the RAM section above). If you’re not using overclockable RAM, your box will probably just hard lock. A 5:4 ratio would give you 853MHz, 3:2 generates 711MHz, and Sync gives you 533MHz. Which is better? Some overclockers report that linked RAM gives better performance than unlinked. But you’ll have to test your system by running apps you typically use to determine which setting is the most stable and provides the best performance for your needs.
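
Here’s the ratio math spelled out, using the 1,066MHz FSB example above (illustrative Python only):

# Linked RAM speed = FSB speed x (memclock part / FSB part of the ratio).
fsb = 1066
ratios = {"1:1": (1, 1), "5:4": (5, 4), "3:2": (3, 2), "Sync (2:1)": (2, 1)}
for name, (fsb_part, mem_part) in ratios.items():
    print(f"{name}: {fsb * mem_part / fsb_part:.0f}MHz")
# 1:1 -> 1,066MHz, 5:4 -> 853MHz, 3:2 -> 711MHz, Sync -> 533MHz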

Step 7: Beep! Beep!

No, your system isn’t asking you where the Dagobah system is. That constant beeping means your overclock failed. With some motherboards, simply powering down by unplugging the system from the wall or switching off the PSU for a few seconds will get you back into the BIOS. In some cases, you’ll need to reset the system’s CMOS by cutting power and then throwing the CMOS-clear jumper or removing and then reinserting the coin-cell battery.

Step 8: Test It

Just because you booted into the OS doesn’t mean you’re in the clear. You should now stress-test the system using Prime95 or another application that really stresses the CPU. You might be tempted to use 3DMark06, but it’s primarily a GPU test, and many overclocked systems that pass 3DMark06 burn-ins will actually fail under heavy CPU loads.

Click here for our Core i7 overclocking guide!

Overclocking AMD

AMD’s low- and midrange procs, such as the Athlon 64 X2 6000+ we’re using here, are great values made even better by pushing them to new heights. Got a Phenom? The steps are pretty much the same

Step 1: Back Up Your Data

Overclocking is inherently risky, so back up your data. We mean it.

Step 2: Enter Your BIOS

Get into your BIOS by hitting the Del, F1, or F2 key during boot. The key will vary by motherboard, so check your documentation if you’re not sure what to press. Once in the BIOS, you will need to find the appropriate configuration screens for overclocking. The screens we refer to in our examples are specific to the Asus M2N32-SLI motherboard, but they will differ from BIOS to BIOS. Your mobo manual or an online search can provide guidance, but often you just need to dig around.

Step 3: Increase Your CPU’s Multiplier

Your choices for overclocking are determined by your proc. AMD’s FX-grade CPUs, like Intel’s Extreme chips, are unlocked and let you alter their multiplier settings. AMD recently began unlocking its Black Edition procs as a concession to overclockers who have stuck with the platform. Increasing the multiplier makes for a no muss, no fuss overclock. On our Asus M2N32-SLI board, we go into Advanced JumperFree Configuration and find CPU Multiplier. Our Athlon 64 X2 6000+ is locked, and thus can’t exceed its stock setting of 15x. If your chip is unlocked, you can select a higher multiplier. To get an Athlon 64 FX-60 from 2.6GHz to 2.8GHz, you would need to increase the multiplier from 13x to 14x. Next, test your system for stability. If it crashes or won’t boot, see Step 8.

Step 4: Meet the HyperTransport Link

There’s an overclocking alternative to altering a chip’s multiplier setting. If this were an Intel platform, we’d turn our efforts to the front-side bus and be instantly overclocking, but AMD’s design is a little more complicated. You’ll need to futz with the HyperTransport (HT) speed before you overclock. This interface between the CPU and chipset buzzes along at about 1GHz and doesn’t like to get too far out of spec. Often, people who overclock without reducing the HT speed confuse HT instability with CPU instability. To lower the HT link on our M2N32-SLI board, we go into the BIOS and drill down through the Advanced and Chipset menus. There we see a setting for CPU<->NB HT Speed. Our choices are 1 through 5 and Auto. The default is 5x 200, or 1,000MHz. Since this value will increase during the overclock, knocking it back to 4x (800MHz) or even 3x (600MHz) shouldn’t hurt performance. Keep in mind that when you overclock the CPU frequency, you overclock the HT as well. If, for example, you overclock your CPU frequency to 220MHz and are running a 4x multiplier on your HyperTransport link, you’ll actually be running an 880MHz HT. Set it at a lower speed and prepare to overclock.
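
Because the effective HT speed is simply the reference clock times the HT multiplier, a little arithmetic shows why you drop the multiplier before raising the clock (again, an illustrative sketch, not a BIOS setting):

# Effective HyperTransport speed = reference clock x HT multiplier.
for ref_clock in (200, 220):
    for ht_mult in (5, 4):
        print(f"{ref_clock}MHz x {ht_mult} = {ref_clock * ht_mult}MHz HT")
# At a 220MHz reference clock, 5x lands at 1,100MHz (out of spec);
# dropping to 4x keeps the link at the 880MHz figure mentioned above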

Step 5: Boost Your Frequencies

Now it’s time to overclock that sucker. On our M2N32-SLI board, we go to Advanced, JumperFree Configuration, and open CPU Frequency. There, we’re greeted by settings of 200MHz and up. We can bump the frequency up to 210MHz, which, when multiplied by 15x (the CPU’s multiplier setting), gives us an overall speed of roughly 3.15GHz. Another bump up to 220MHz gives us 3.3GHz. We recommend you increase speeds in 10MHz increments, testing for stability after each jump. If your machine crashes or fails to reboot, see Step 8.
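
The core-speed arithmetic mirrors the Intel case, just with a 200MHz reference clock instead of a quad-pumped FSB. A quick illustrative sketch:

# AMD core speed = reference clock x CPU multiplier (15x on our locked 6000+).
cpu_multiplier = 15
for ref_clock in (200, 210, 220):
    print(f"{ref_clock}MHz x {cpu_multiplier} = {ref_clock * cpu_multiplier}MHz core")
# 200MHz is the stock 3GHz; 210MHz -> 3,150MHz; 220MHz -> 3,300MHz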

Step 6: Add Voltage

In some cases, boosting the voltage to your CPU can help stabilize an overclock that’s crashing. Unfortunately, this is one of the more dangerous aspects of CPU overclocking as overvolting a chip could kill it. On our M2N32-SLI board, we went into Advanced, JumperFree Configuration and changed the CPU voltage from Auto to 1.5 volts. That’s about a tenth of a volt out of spec, but unfortunately for us, it didn’t help us sustain a 3.3GHz clock speed, so we’re stuck at 3.28GHz.

Step 7: RAM Divisors

With AMD CPUs, the RAM is linked to the clock setting, and the Athlon 64’s on-die memory controller supports only whole numbers for memory divisors. So a 3GHz Athlon 64 X2 6000+ can use either a 7 or 8 divisor to generate a signal for the RAM. Unfortunately, 3,000 divided by 7 works out to DDR2/857 and 3,000 divided by 8 works out to DDR2/750. AMD errs on the side of caution, so this processor actually runs the DDR2/800 at 750MHz. But when you overclock, you may inadvertently overclock the RAM further than you suspect. The 3.28GHz we achieved on our M2N32-SLI board, for example, runs the DDR2/800 slightly out of spec at roughly 820MHz. That’s not something to worry about, but if you’re running your chip at much higher speeds than us, you’ll need to make sure the RAM isn’t running beyond what its maker guarantees. To do that, go into Advanced, CPU Configuration, DRAM Configuration, and then Memory Clock Frequency. You should select a conservative low speed for now and clock it up after you’ve reached the CPU’s highest speed.
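
The divisor math is easy to work through in a couple of lines of throwaway Python (the ddr2_rating helper is ours, purely for illustration):

# DDR2 rating = (CPU clock / whole-number divisor) x 2, since DDR is double-pumped.
def ddr2_rating(cpu_mhz, divisor):
    return round(cpu_mhz / divisor * 2)

print(ddr2_rating(3000, 7))  # 857 -- past the DDR2/800 spec
print(ddr2_rating(3000, 8))  # 750 -- the cautious setting AMD picks
print(ddr2_rating(3280, 8))  # 820 -- our overclock pushes the RAM slightly out of spec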

Step 8: It Won’t Boot

Don’t be bummed if your machine hard-locks—it’s the only way you’ll learn your CPU’s limits. To get out of the hole, shut off the PSU or pull the plug from the wall for five seconds. Plug it back in and power up the box. Some boards will automatically recover from a bad overclock and let you go back into the BIOS to aim a little lower. If this doesn’t work, you’ll have to power down again, unplug the PSU, and reset the CMOS via a jumper or button, or by pulling and reinserting the coin-cell battery. After five seconds, try booting it again—you should be able to access the BIOS.

Step 9: Test It

Getting into the OS is about 65 percent of the challenge. You’ll now need to test the machine by pushing the CPU with an intensive workload. We don’t recommend gaming as a test since games are typically GPU-bound. Try a video encode or run Prime95. And if you have a multi- or dual-core processor, run a multithreaded app.
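
Prime95 and video encodes are the real tests, but if you just want a quick, scriptable way to load every core at once, a crude Python loop like the following will peg them all (purely illustrative; it’s no substitute for a proper burn-in utility):

# Crude all-core burn loop: spin pointless floating-point math on every core.
import multiprocessing
import time

def burn(seconds):
    end = time.time() + seconds
    x = 1.0001
    while time.time() < end:
        x = x * x % 1e9  # meaningless work, just to keep the core pegged
    return x

if __name__ == "__main__":
    cores = multiprocessing.cpu_count()
    with multiprocessing.Pool(cores) as pool:
        pool.map(burn, [600] * cores)  # ten minutes of full load per core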

Keeping Your CPU Cool

Without adequate cooling, your overclocked rig is as good as toast

It’s hard to get much worse than a stock air cooler for your CPU. That’s not to say there’s anything outright wrong with the fan/heatsink combo that comes with a new CPU—the little guy will likely keep your stock-clocked processor running well within safe operating temperatures.

The minute you start overclocking your processor, however, you’ll be jacking up your thermals to levels a stock cooler can’t handle. Granted, when overclocking, temps will go up with even premium air coolers, but a solid aftermarket device will give you more room to work with. Your initial temperatures will be lower, and they won’t rise as quickly as they would with a stock solution.

Our current Lab champion, Zalman’s CNPS9700, has maintained the throne for nearly a year. It uses a copper and aluminum framework to absorb the warmth produced by both Intel and AMD CPUs. The cooler’s 2,800rpm fan emits a tornado-like whoosh when it’s cranked to the max, but it also allows the device to reach epic levels of heat reduction. In fact, we now use the Zalman as a benchmark for other coolers. On the last test we ran, the device took our processor down to 37.5 C during our CPU burn-in test and 22.5 C when idle—a savings of 16.5 C and 9.5 C, respectively, over stock.

Aftermarket air cooling is a fine way to manage CPU temperatures, but only to a point. Eventually, practicality and performance concerns render air coolers insufficient for OC’d machines. That’s why there’s liquid cooling. Not only can you reach lower temperatures when using a liquid-based setup as opposed to air, but you’ll also benefit from a lower sound profile.

Of course, there’s an obvious caveat: Liquids plus electronics can equal a serious monetary hit if you have to replace hardware that inadvertently gets wet. Installing a water-cooling kit in your rig is a delicate process, and the drama only increases if you’ve never done it before. Sure, you can go with a preassembled liquid-cooling kit, but in our experience, a majority of these units perform on par with—if not worse than—stock air coolers.

The best liquid cooler we’ve found is CoolIT’s Boreas unit. A fancier, fatter version of the company’s Eliminator, the Boreas uses 12 thermoelectric modules to rip the heat from your molten tubing into a giant heatsink. Two 12cm fans take care of the rest, allowing the Boreas to beat our FX-60 test bed’s stock cooler by 20 C in idle and 32 C during our burn-in test.

Overclock an OEM Machine

So, you’re ready to crack open your business-class Dell, HP, or Gateway and give it some gusto, eh? Fuggedaboutit. The overwhelming majority of OEM machines and notebook PCs prevent overclocking to reduce complaints from the chumps who OC recklessly and ruin their machines. Even motherboard brands known for overclocking may be neutered in an OEM machine. Got it? OK, now we’re going to contradict ourselves. Some OEM boxes do overclock. Dell’s XPS and Hewlett-Packard’s Blackbird PCs are designed to overclock. Still, for the most part, overclocking and OEM machines don’t mix.

Do RAM coolers help overclockers?

Plenty of folks have taken overclockable RAM to its limits without the aid of aftermarket cooling devices, so why buy one? RAM coolers help, but not in the direct and immediate way a better CPU heatsink does. Overclocked RAM is pushed way beyond standard voltage. For example, a DDR2/1250 module from Corsair is spec’d to run at 2.4 volts—a far cry from a standard 1.8-volt DDR2 part. With the extra voltage comes additional heat generation and a potentially shortened life. Adding a RAM cooler or simply increasing the airflow in your case (especially if the modules sit above hot graphics cards) can only help extend your RAM’s life.
