Knowledge is power, and when it comes to PCs and computer hardware that’s especially true. Only by knowing how your PC components’ specs actually affect performance can you get the maximum power you need for the type of computing you do—and avoid being seduced by features that sound impressive on the box but won’t do squat to improve your experience. Knowing your stuff has other benefits, too. An in-depth understanding of what makes all your parts tick enables you to better troubleshoot problems, upgrade in ways that make sense, and converse with other nerds in your own secret language. Continue reading to begin your crash course in PC spec-speak.
Just how many cores and how much cache do you need? We’ll help you answer those questions and others with cool confidence
There are two kinds of buyers: those who will never upgrade a CPU and those who actively plan for it. For the former, even a CPU welded to the motherboard won’t matter, but upgraders who want to use a system for years need to pay attention to the socket, as it’s one of the primary factors limiting your upgrade options. On Intel, there are three sockets to choose from: LGA2011, LGA1155, and the new LGA1150. Of the three, LGA1155 has the least life left in it, as it will be slowly phased out in favor of the new LGA1150 platform. We know from Intel roadmaps that LGA1150 and LGA2011 are good for at least another couple of years. On AMD, AM3+ offers a superb assortment, from budget dual-cores all the way to eight-core chips, with the company’s new Piledriver chips even slotting into this old socket. The company’s FM line isn’t quite as stable. FM1 didn’t go very far, but FM2 looks like it might have longer legs. The thing is, FM2 processors—or rather, APUs—aren’t aimed at the type of user who upgrades every year. We suspect that most FM2 buyers will use the platform for a couple of years and then buy a new system instead of upgrading. For long-haulers, we recommend AM3+, LGA2011, and LGA1150. If you don’t care about upgrading, go with whatever CPU you want.
Core count is the new clock speed. That’s because as consumers have been trained not to look at megahertz anymore as a defining factor, vendors have turned to core count as an emotional trigger. Two is better than one, four is better than two, and six is better than four.
Here’s the deal, though: More cores are indeed better—but only if you truly use them, and really only when compared within the same family of chips. For example, to assume that an eight-core AMD FX part is faster than a six-core Intel Core i7 part would be flat-out wrong. Likewise, to assume that a PC with a six-core Intel Core i7 will be faster at gaming than a quad-core Core i7 is also likely wrong. To make things more complicated, Intel uses a virtual CPU technology called Hyper-Threading to push its CPUs. Some chips have it, some don’t.
So, how do you figure out what you want? First, look at your workloads. If you’re primarily a gamer who also browses, does some photo editing, and handles word processing, we think the sweet spot is a quad-core chip. Those who encode video, model in 3D, run other multithreaded apps, or juggle many apps simultaneously should consider getting as many cores as possible, because for these workloads you can never have enough. A good bridge for folks who encode video only occasionally, though, is a quad-core chip with Hyper-Threading.
Your CPU choice should be based on your workload and not what you read about.
Remember the Megahertz Myth? It’s what we alluded to above. It arose from the realization that clock speed alone doesn’t determine performance: a 2GHz Pentium 4 was barely faster, if at all, than a 1.6GHz Athlon XP. Years later, that generally remains true. You really can’t say a 4.1GHz FX-8350 is going to smoke a 3.5GHz Core i7-3770K, because in a hell of a lot of workloads the 3.5GHz Core i7 is going to dominate. Nevertheless, we have issues when someone dismisses megahertz outright as a metric. It isn’t useful when comparing AMD vs. Intel, but when you’re looking within the same family, it’s very telling. A 3.5GHz Intel chip will indeed be faster than a 2.8GHz Intel chip, and the same applies among AMD chips. So, consider clock speeds wisely.
When vendors start looking for ways to separate your cash from your pocket, clock speed and core count are their first line of attack. If those features don’t get you, the amount of cache is the next spec dangled in your face. Choices these days run from 8MB down to 3MB or less. First, you should know that in many cases the chips themselves are the same. When validating chips, AMD and Intel weed out the defective ones. If a chip has, say, 8MB of L2 cache and a bit of it is bad, it’s sold as a chip with 6MB or 4MB of L2 cache. This isn’t always the case, though, as some chips have the cache turned off or removed to save on manufacturing costs.
Does cache matter to performance? Yes and no. Let’s just say that a large cache rarely hinders performance, but you quickly hit diminishing returns, so for many apps, a chip with 8MB of L2 could offer the same performance as one with 3MB of L2. We’ve seen cache matter most in some bandwidth-sensitive tasks such as media encoding or compression, but for the most part, don’t sweat the difference between a chip with 4MB of L2 and one with 3MB.
Integrated graphics are likely one of the biggest advances in CPUs in the last few years. Yes, for gamers, a discrete graphics card is going to be faster 105 percent of the time, but for budget machines, ultra-thin notebooks, and all-in-ones, integrated graphics are usually all you get, and there’s a world of difference between them. Generally, AMD’s integrated graphics lead the way over Intel’s older Ivy Bridge and Sandy Bridge generations. It’s like, well, AMD is the Intel of integrated graphics and Intel is the AMD. Intel’s latest Haswell chips make things far more interesting, though, as their graphics performance has increased greatly. Then again, AMD has also recently released its new APUs with Radeon HD 7000 graphics. The specs that matter most on integrated graphics are the number of graphics execution units and the clock speed. More EUs mean better performance, as do higher clocks.
Let’s get it out in the open: Stock CPU coolers really aren’t as bad as people make them out to be. Sure, we all scoff at them, but the truth is that Intel and AMD spend considerable money on the design and certify them to work with their CPUs in all types of environments. For the vast majority of people, the stock cooler is just fine.
The Cooler Master Hyper 212 Evo is a low-cost, worthy upgrade over stock—if you need it.
But you’re not the vast majority of people. Sadly, today, if you can even open up the case, you’re an enthusiast. Sure, there are applications for the stock cooler, such as an HTPC or a small box that won’t be overclocked, but we like to think of the stock cooler as the minimum spec you should run. It’s fine, but it can be greatly improved upon.
Obviously, if you’re an overclocker, a beefier heatsink is a foregone conclusion, as heat is one of the worst enemies of a successful overclock. Swapping out the stock cooler for an aftermarket model is almost guaranteed to net higher or more stable overclocks than you can hit with the stock cooler.
Even if you don’t overclock, an aftermarket cooler can be a worthwhile addition. Because these coolers dissipate more heat than a stock unit and typically use larger fans, they can spin at lower RPMs and thus run quieter.
Closed-loop liquid coolers are also a good option, as they require zero maintenance and the risk of a leak is extremely low. Liquid coolers are also quite affordable today and easily outstrip the vast majority of air coolers. One thing you’ll need to keep in mind is that closed-loop liquid coolers aren’t always the quietest option out there, though.
Click the next page to get more info on motherboards.
Knowing your way around a motherboard is a distinguishing characteristic of a PC nerd. Let us help orient you
The form factor of a motherboard is its physical dimensions. The most popular today is the 18-year-old ATX form factor. The two other popular sizes are the smaller microATX and Mini-ITX. Intel tried and failed to replace ATX with BTX. Two additional form factors are the wider Extended-ATX and XL-ATX. XL-ATX is not an official spec but generally denotes a longer board to support more expansion slots. For an enthusiast, ATX will cover about 90 percent of your needs. Besides offering the most flexibility in expansion, it’s also where you get the widest range of selection. You can get budget all the way to the kitchen sink in ATX. MicroATX is usually reserved for budget boards, but there are a few high-end boards in this form factor these days. Mini-ITX is exciting, but the limited board space makes for few high-end options in this mini size.
As we said in our CPU write-up, your motherboard’s socket dictates all that the board will ever be. If, for example, you buy a discontinued socket such as LGA1156, your choice of CPU is greatly limited. The most modern sockets today are LGA1155, LGA1150, and LGA2011 for Intel, and AM3+ and FM2 for AMD. For Intel, LGA2011 and LGA1150 have the longest legs. LGA1155 boards are still usable, but the sun is setting on them. AMD is actively supporting AM3+ and FM2, but there is talk of a new socket to replace FM2.
The chipset on a motherboard refers to the “core logic” and used to entail multiple chips doing several jobs. These days, the core-logic chipset is down to one or two chips, with much of the functionality moved into the CPU. Chipsets manage basic functions such as USB, PCIe, and SATA ports, and board makers throw on additional controllers to add even more functions. You should pay special attention to the chipset if you’re looking for certain functionality, some of which is only possible on newer chipsets. The P67 chipset, for example, did not support Intel’s SSD caching, but the Z68 did. Current high-end chipsets from Intel include the Z77, Z87, and X79; from AMD you have the A85X, 990X, and 990FX.
The vast majority of gamers never run more than one video card, but it’s always nice to know you have the option. AMD’s multicard solution is CrossFire for two cards, and CrossFireX for more than two. For its part, Nvidia has SLI for two-card setups, tri-SLI for three cards, and four-way SLI for four cards. We won’t judge the relative merits of each system, as this isn’t the place for it. Most boards that offer one also offer the other, but don’t assume a CrossFire board will support SLI. Read the specs ahead of time if you plan to run multiple cards.
One of the main differences between a high-end board and a low-end board is the ports. High-end boards tend to have ports galore, with FireWire, additional USB 3.0, digital audio, eSATA, and Thunderbolt added on to convince you that board B is better than board C. How many ports, and what type, do you need? That is something only you can answer. If you still run an older DV cam that needs FireWire, having the port on the board for “free” is always nice. Thunderbolt is also an incredibly cool, forward-looking feature, but is very pricey. If you never use it, you will have paid for nothing. These days, we say eSATA and FireWire aren’t needed. What we want, mostly, is a ton of USB 3.0 ports. The ultimate board today might be one with nothing but USB 3.0 ports, if you ask us.
If you see a board with tons of those long PCIe slots, don’t assume they’re all hot. PCIe slots can be physically x16 in length (that means 16 lanes) but only x8 or x4 electrically (which means the data is limited to x8 or x4 bandwidth, respectively). Cheaper boards may even disable some onboard devices when run in multi-GPU modes, while pricier boards use additional chips to spread the available bandwidth around and keep the devices running. AMD’s 990FX and Intel’s X79 don’t have the limited bandwidth of the Z77 or Z87 chipsets, so if you need lots of slots, you’ll want to opt for those chipsets. Unfortunately, Z77 and Z87 are where you find more PCIe 3.0 support. PCIe 3.0 doubles the effective bandwidth over PCIe 2.0, but it’s still not officially supported on X79, and only newer 990FX boards support it now. Confused? Our advice is that if you really need to run high-bandwidth add-in boards for video capture or RAID applications, ask the manufacturer what motherboards they have certified for it first.
There are degrees of enthusiast computing and motherboards to accommodate all scenarios.
This is a tiny segmented LED on the board that displays the POST code of the motherboard while booting. It may seem trivial, but POST LEDs are a godsend when things go sideways on a machine. If all other things were equal, we’d take a board with the POST LED over one without it.
A backup BIOS stores a duplicate BIOS on the motherboard that can be restored should the BIOS get corrupted. We think it’s a nice feature but a corrupt BIOS is pretty rare. Nevertheless, it’s probably better to have a backup BIOS and not need it than to need it and not have it.
Wireless, premium sound, fan controls, and headers galore are the special features board vendors use to hook you. You might dismiss them as unnecessary features, but so are the power windows and multi-speaker setup in your car. Certainly some extras aren’t needed, such as onboard Wi-Fi on a desktop box that will live on Ethernet, but fan control, such as Asus’s excellent FanXpert II, is worthwhile, as are premium audio circuits.
Click the next page to get in-depth information on hard drives and SSDs!
In a given chipset family—say, Z77—it’s easy to find a motherboard costing $110 as well one running $379. Both use the same chipset, so are they the same? It depends.
If you intend to socket in a non-overclocked Core i7-3770K, run one GPU, and a sound card, you’d probably be hard-pressed to tell the difference, but don’t assume that premium boards are just a gimmick to rip you off. High-end motherboards aren’t just anodized a different color and slapped with a higher price. The $110 board will be pretty much a strippo option, with no multicard support, minimal ports and slots, and a design that’s not made for high overclocks. Yes, you might be able to overclock the budget board, but the voltage regulator modules and chipset cooling are likely to limit you. High-end overclocking boards are truly designed for the sport, with direct voltage readout hard points. And yes, fancy new technology such as Thunderbolt, additional USB 3.0, and SATA controllers cost more money. Even the software suite on the budget board will be pretty stripped down.
Still, the truth is that most of us will neither be overclocking with liquid nitrogen nor going ultra-budget. That’s why board vendors offer a dizzying array of selections between the rock-bottom and high-end. We think the $175 range gets you a pretty decent board, generally.
SSDs have a lot of complicated technology inside their waifish 2.5-inch shells, so follow along as we demystify it for you
The controller is the brains of the SSD, and what governs performance for the most part (along with the type of NAND flash used). The controller uses parallel channels to read and write data to the NAND, and it also helps optimize the drive via the Trim command as well as routine garbage collection. Though some companies license a third-party controller, they pair it with custom firmware of their own that defines the drive’s performance, so two SSDs that use the same controller can still perform differently in different workload scenarios. While the SSD world used to be somewhat ruled by the LSI SandForce controller, those days have long passed, and we are now seeing the rise of in-house controllers from companies like Samsung.
Over-provisioning is a spec you will rarely see explicitly mentioned on a product box, but its presence, or lack thereof, is evident by a drive’s capacity. Over-provisioning is simply space taken out of the drive's general capacity and reserved for drive maintenance. So if you see a drive with 256GB of capacity, there’s no space reserved, but a drive listed as 240GB has 16GB reserved for over-provisioning. In exchange for that space you get increased endurance, as it gives the SSD controller a lot of NAND flash to use for drive optimization and management. The provisioned NAND can be compared to a swap file used by a mechanical hard drive and operating system, in that it is space reserved to manage the files on the SSD.
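The arithmetic behind over-provisioning is simple enough to sketch. Here's a minimal Python illustration (the helper name is our own invention, not any vendor tool) of how the advertised capacity reveals the reserved share:

```python
# Hypothetical helper: how much raw NAND is held back for over-provisioning.
def overprovision_pct(raw_gb, advertised_gb):
    """Percentage of raw NAND reserved for maintenance and wear leveling."""
    reserved = raw_gb - advertised_gb
    return 100.0 * reserved / raw_gb

# A 240GB drive built on 256GB of NAND reserves 16GB, or 6.25 percent.
print(overprovision_pct(256, 240))  # → 6.25
```

Drives with more aggressive over-provisioning (say, 120GB advertised on 128GB of NAND, plus extra reserve) trade a little capacity for endurance headroom.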
All SSDs use this type of memory, as it's non-volatile, meaning you can cut off power and the data remains in place (mid-data-transfer is another story, though). The opposite is DRAM, which is volatile, so once you shut down your PC, its contents are gone. There are several manufacturers of NAND flash, including Intel/Micron (IMFT), Samsung, Toshiba, and SanDisk, and all the SSD vendors buy from them. A Samsung SSD obviously uses Samsung NAND, while a vendor like Seagate, which doesn't own a NAND fab, sources its NAND from one of these manufacturers; Corsair SSDs use Toshiba NAND, and so forth. There's no answer to the question of "who makes the best NAND?" as they all have varying performance characteristics, and it's typically the controller and its firmware that play the biggest role in determining a drive's performance. Good NAND with a crap controller equals crap, so keep that in mind when shopping for an SSD.
All modern NAND flash is either SLC, MLC, or TLC, which stand for single-, multi-, and triple-level cell, indicating how many bits each cell can hold at one time. The most secure, and precise, is SLC, which holds a single bit in each cell. Obviously, this is a bit inefficient, but it's also very accurate and has high endurance, which makes SLC NAND ridiculously expensive and not for consumers (it's for enterprise). Next up is MLC, where each cell holds two bits at a time. MLC is used on the majority of SSDs you can buy, as it strikes a fine balance between cost and capacity. TLC flash holds—you guessed it—three bits per cell, giving it the lowest endurance of any drive available, with the caveat that it still allows years of usage. Only the Samsung 840 and Intel 335 use TLC NAND flash; the rest of the consumer SSDs available today use MLC NAND.
Here we see the main components of an SSD: NAND flash, controller chip, DRAM, printed circuit board, and SATA connectors.
Even though SSDs are the cool kids, we still need hard drives for our "multimedia" collections. Here are all the terms you need to know to sound like a pro
Spindle speed is the rotational velocity of the platters expressed in rotations per minute (rpm). Faster-spinning platters result in lower seek times and improved performance. The most common desktop drives spin at 7,200rpm, but there are also 5,400–5,900rpm desktop drives, which we recommend only for backup purposes given their reduced performance relative to a 7,200rpm drive. There are 10Krpm drives as well, but the rise of much-faster SSDs has largely made them irrelevant in today's market.
Every hard drive stores data on platters made of aluminum alloy or glass, with data retained on both sides and accessed by read and write heads hovering over each side of the platter. The number of platters relative to total capacity is something to pay attention to when shopping for a drive, as it dictates areal density, or how much data is stored per platter. Right now, 1TB is the maximum platter density available, and it offers improved performance compared to a 750GB platter, all other things being equal. Since the platter has more data on it, the read/write heads have to move around less to pick up data, so we've seen significantly improved performance from drives bearing these super-dense platters.
All hard drives have a bit of onboard memory referred to as cache, and the market has mostly settled on 64MB being the standard. The cache is used as a buffer, in that data is sent to it before being written to the disk. Whatever was last written or read will usually still be in the buffer should you need it again, so it improves performance by making recently accessed data available instantly. This practice of fetching data from the onboard cache is referred to as "bursting" in benchmarks, but in practice it rarely happens, so don't use this number to determine a drive's overall performance. Spindle speed is a much better indicator of hard drive performance compared to cache size.
This stands for Native Command Queuing and is technology that helps the drive prioritize data requests so that it can process them in an efficient fashion. For example, if a drive receives a command to go all the way out to the outer perimeter to fetch some data, but then receives a request for data that is closer to its current location, with NCQ enabled, it would fetch the data in the order of closest bits to furthest bits, resulting in faster data transfer. A drive without NCQ would simply fulfill the requests in the order received, which is highly inefficient. NCQ only shows significant gains in a heavily queued workload, however, which typically doesn't exist for home users, but does occur on a web server or some other high-traffic application.
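The reordering idea can be sketched in a few lines. This is a deliberately simplified toy model (real firmware also weighs rotational position, not just seek distance), but it shows the NCQ principle of nearest-first servicing:

```python
# Toy sketch of NCQ-style reordering (illustrative only): service pending
# requests nearest-first from the head's current position rather than in
# arrival order. Real drive firmware also accounts for rotational latency.
def ncq_order(current_lba, requests):
    """Sort outstanding requests by seek distance from the head."""
    return sorted(requests, key=lambda lba: abs(lba - current_lba))

# Head at LBA 500; FIFO would service 9000, 520, 8800, 100 in that order.
print(ncq_order(500, [9000, 520, 8800, 100]))  # → [520, 100, 8800, 9000]
```

With the head at LBA 500, a FIFO drive would seek nearly to the end of the disk, back, out again, and back once more; the reordered schedule sweeps the nearby requests first.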
A hard drive uses magnets (lower left) to move the read/write heads (the pointy things), which are both above and below the data platters.
We all want the speed of an SSD but with the price and capacity of a mechanical hard drive. Obviously that’s not possible. However, there is a middle ground, which is using a small SSD as a caching drive for a mechanical hard drive. This allows your most frequently used files (including your OS and boot files) to be cached to the SSD for fast access to them, while less frequently accessed files reside on your hard drive. This actually works quite well in our testing, and to set one up you’ll need to either run it off your existing motherboard with any SSD you have lying around, or buy a caching SSD and use the included software to set up the caching array. For Intel users, Z68 and Z77 boards include caching support natively via Intel Smart Response Technology, but users of other chipsets will need to BYO to the party.
Click the next page to get the inside scoop on graphics cards!
The world of GPUs can be a scary place fraught with big words, bigger numbers, and lots of confusing nomenclature. Allow us to un-confuse things a bit for you
The amount of memory a GPU has is also called its frame buffer (see below). Most cards these days come with 1GB to 3GB of memory, but some high-end cards like the GTX Titan have 6GB of memory. In the simplest terms, more memory lets you run higher resolutions, but read the Frame Buffer section below for more info.
GPUs nowadays include compartmentalized subsystems that have their own processing cores, called Stream Processors by AMD, and CUDA cores by Nvidia, but both perform the same task. Unlike a CPU, which is designed to handle a wide array of tasks, but only able to execute a handful of threads in parallel at a high clock speed, GPU cores are massively parallel and designed to handle specific tasks such as shader calculations. They can also be used for compute operations, but typically these features are heavily neutered in gaming cards, as the manufacturers want their most demanding clients paying top dollar for expensive workstation cards that offer full support for compute functionality. Since AMD and Nvidia's processor cores are built on different architectures, it's impossible to make direct comparisons between them, so just because one GPU has more cores than another does not automatically make it better.
The memory bus is a crucial pathway between the GPU itself and the card's onboard frame buffer, or memory. The width of the bus and the speed of the memory itself combine to give you a set amount of bandwidth, which equals how much data can be transferred across the bus, usually measured in gigabytes per second. In this respect, as with most things PC, more is better. As an example, a GTX 680 with its 6GHz memory (1,500MHz quad-pumped) and 256-bit interface is capable of transferring 192.2GB of data per second, whereas the GTX Titan with the same 6GHz memory but a wider 384-bit interface is capable of transferring 288.4GB per second. Since most modern gaming boards now use 6GHz memory, the width of the interface is the only spec that ever changes, and the wider the better. Lower-end cards like the HD 7790, for example, have a 128-bit memory bus, so as you spend more money you'll find cards with wider buses.
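That bandwidth figure is just the effective clock multiplied by the bus width in bytes. A quick Python sketch reproduces the numbers above (we use 6,008MHz, the exact effective clock behind the "6GHz" marketing figure, which is why the results land on the quoted specs):

```python
def memory_bandwidth_gbps(effective_clock_mhz, bus_width_bits):
    """Peak bandwidth in GB/s: transfers per second times bytes per transfer."""
    bytes_per_transfer = bus_width_bits / 8
    return effective_clock_mhz * 1e6 * bytes_per_transfer / 1e9

# GTX 680: 6,008MHz effective memory clock on a 256-bit bus.
print(round(memory_bandwidth_gbps(6008, 256), 1))  # → 192.3
# GTX Titan: same memory, but a 384-bit bus.
print(round(memory_bandwidth_gbps(6008, 384), 1))  # → 288.4
```

Hold the memory clock constant and bandwidth scales linearly with bus width, which is exactly why the 384-bit Titan moves half again as much data as the 256-bit GTX 680.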
This technology is available in high-end GPUs, and it allows the GPU to dynamically overclock itself when under load for increased performance. GPUs without this technology are locked at one core clock speed all the time.
The frame buffer is composed of DDR memory and is where all the computations are performed on the images before they are output to your display, so you'll need a bigger buffer to run higher resolutions, as the two are directly related to one another. Put simply, if you want to run higher resolutions—as in fill your screen with more pixels—you will need a frame buffer large enough to accommodate all those pixels. The same principle applies if you are running a standard resolution such as 1080p but want to enable super-sampling AA (see below): Since the scene is actually being rendered at a higher resolution and then down-sampled, you'll need a larger frame buffer to handle that higher internal resolution. In general, a 1GB or 2GB buffer is fine for 1080p, but you will need 2GB or 3GB for 2560x1600 at decent frame rates. This is why the GTX Titan has 6GB of memory, as it’s designed to run at the absolute highest resolutions possible, including across three displays at once. Most midrange cards now have 2GB, with 3GB and 4GB frame buffers now commonplace for high-end GPUs.
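To see why pixel count drives memory demand, here's a back-of-the-envelope Python sketch. Note this computes only the raw color buffer; real VRAM consumption is dominated by textures and geometry, so treat these as floor figures, not totals:

```python
# Back-of-the-envelope only: real VRAM use is dominated by textures and
# geometry, but the raw color buffer shows how pixel count scales demand.
def color_buffer_mb(width, height, ssaa_factor=1, bytes_per_pixel=4):
    """Size in MB of one 32-bit color buffer at the internal render resolution."""
    pixels = width * height * ssaa_factor
    return pixels * bytes_per_pixel / (1024 * 1024)

print(round(color_buffer_mb(1920, 1080), 1))                 # → 7.9
print(round(color_buffer_mb(2560, 1600, ssaa_factor=4), 1))  # → 62.5
```

A 2560x1600 scene with 4X super-sampling needs roughly eight times the buffer space of plain 1080p, and the GPU keeps several such buffers (depth, back buffer, render targets) in flight at once.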
High resolutions require a lot of RAM, which is embedded in the area around the GPU just like on this 6GB GTX Titan.
All modern GPUs use PCI Express power connectors, either of the 6-pin or 8-pin variety. Small cards require one 6-pin connector, bigger cards require two 6-pin, and the top-shelf cards require one 8-pin and one 6-pin. Flagship boards like the GTX 690 and HD 7990 need two 8-pin connectors. Most high-end cards will draw 100–200W of power under load, so you'll need around a 500–650W PSU for your entire system. Always give yourself somewhat of a buffer, so when a manufacturer says a 550W PSU is required, go for 650W.
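That rule of thumb is easy to encode. A hypothetical Python helper (the headroom figure and size list are our assumptions, reflecting the 550W-becomes-650W advice above):

```python
# Hypothetical sizing helper reflecting the rule of thumb above: pad the
# vendor's recommended wattage, then round up to a common retail PSU size.
def psu_recommendation(vendor_recommended_watts, headroom_watts=100):
    common_sizes = [450, 500, 550, 600, 650, 750, 850, 1000, 1200]
    needed = vendor_recommended_watts + headroom_watts
    for size in common_sizes:
        if size >= needed:
            return size
    return needed  # beyond the list, just return the padded figure

# A card whose maker asks for a 550W PSU lands on a 650W unit.
print(psu_recommendation(550))  # → 650
```

Running a PSU well below its rated ceiling also tends to keep it in its most efficient operating range, which is a nice side benefit of the headroom.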
These are what connect your GPU to your display, the most common being DVI, which comes in both single-link and dual-link. Dual-link is needed for resolutions up to 2560x1600, while single-link is fine for up to 1,200 pixels vertically. DisplayPort can go up to 2560x1600, as well. HDMI is another connector you will see: versions 1.0–1.2 support 1080p, 1.3 supports 2560x1600, while 1.4 supports 4K.
The latest generation of graphics cards from AMD and Nvidia are all PCIe 3.0, which theoretically allows for more bandwidth across the bus compared to PCIe 2.0, but actual in-game improvement will be slim-to-none in most cases, as PCIe 2.0 was never saturated to begin with. Your motherboard chipset and CPU must also support PCIe 3.0, but most Ivy Bridge and older boards do not support it in the chipset, even though the CPU may have the required lanes. In general, every GPU has PCIe 3.0 these days, but if your motherboard only supports version 2.0 you will not suffer a performance hit.
GPU coolers fall into several categories, including blower, centralized, and water-cooled. The blower type is seen on most "reference" designs, which is what AMD and Nvidia provide to their add-in board partners, typically as the most cost-effective solution. It pulls air in from inside the case, blows it along a heatsink running the length of the card, and exhausts it out the rear of your case. Centralized coolers have one or two fans in the middle that pull air in from around the card and exhaust it into the same region, creating a pocket of warm air below the card. Water-cooled cards are very rare, of course; they use liquid to absorb heat from the GPU and carry it to a radiator, which is cooled by a fan. Water cooling is usually the most effective (and quiet) way to cool a hot PC component, but its cost and complexity make it less common.
This is Nvidia technology baked into its last few generations of GPUs that allows for hardware-based rendering of physics in games that support it, most notably Borderlands 2, so instead of just a regular explosion, you will see an explosion with particles and volumetric fog and smoke. Typically, AMD card owners will see the PhysX option grayed out in the menus, but the games still look great, so we would not deem this technology a reason to go with Nvidia over AMD at this point in time.
Different GPUs offer different types of antialiasing (AA), which is the smoothing out of jaggies that appear on edges of surfaces in games. Let's look at the most common types:
Full Scene AA (FSAA, or AA):
The most basic type of AA, this is sometimes called super-sampling. It involves rendering a scene at higher resolutions and then down-sampling the final image for a smoother transition between pixels, which appears like softer edges on your screen. If you run 2X AA, the scene will be calculated at double the resolution, and 4X AA renders it at four times the resolution, hence a massive performance hit.
Multi-Sample AA (MSAA):
This is a more efficient form of FSAA, even though scenes are still rendered at higher resolutions, then down-sampled. It achieves this efficiency by only super-sampling pixels that are along edges; by sampling fewer pixels, you don't see as much of a hit as with FSAA.
Fast Approximate AA (FXAA):
This is a shader-based Nvidia creation designed to allow for decent AA with very little to no performance hit. It achieves this by smoothing every pixel onscreen, including those born from pixel shaders, which isn't possible with MSAA.
Temporal AA (TXAA):
This is specific to Kepler GPUs and combines MSAA with post-processing to achieve higher-quality antialiasing, but it's not as efficient as FXAA.
Morphological Antialiasing (MLAA):
This is AMD technology that uses GPU-accelerated compute functionality to apply AA as a post-processing effect as opposed to the super-sampling method.
Click the next page to learn more about wi-fi technology, RAM, and PSUs.
Though the basic functionality of Wi-Fi routers has remained relatively unchanged since the olden days, new features have been added that help boost performance and allow for easier management
The band that a router operates on is key to determining how much traffic you will have to compete with. You would never want to hop on a congested freeway every day, and the same logic applies here. Currently there are two bands in use: 2.4GHz and 5GHz. Everyone and their nana is on 2.4GHz, including people nuking pizzas in the microwave, helicopter parents monitoring their baby via remote radios, and all the people surfing the Internet in your vicinity, making it a crowded band, to say the least. However, within the 2.4GHz band you still have 11 channels to choose from (though only three of them don't overlap), which is how everyone is able to surf this band without issues (for the most part). But if everyone is using the same channel, you will see your bandwidth decrease. On the other hand, 5GHz is a no-man's-land at this time, so routers that can operate on it cost a pretty penny since it's the equivalent of using the diamond lane, and a great way to make sure your bandwidth remains unmolested.
This stands for multiple-input, multiple-output and it's the use of multiple transmitters and receivers to send/receive a Wi-Fi signal in order to improve performance, sort of like RAID for storage devices but with Wi-Fi. These devices are able to split a signal into several pieces and send it via multiple radio channels at once. This improves performance in a couple of ways. When only one signal is being sent, it has to bounce around before ending up at the receiver, and performance is degraded. When several signals are sent at the same time, however, spectral efficiency is improved as there is a greater chance of one hitting the receiver with minimal interference; it also improves performance with multiple streams of data being carried to the receiver at once.
Channel bonding is something that’s done by the router and the network adapter whereby parallel channels of data are "bonded" together much like stripes of data in a RAID. This technology is most prevalent in 802.11n networks, where channel bonding is required for a user to utilize the full amount of bandwidth available in the specification. The downside to channel bonding is that it increases the risk of interference from nearby networks, which can reduce speeds. Since each channel is 20MHz, "bonded mode" operates at 40MHz, so check your settings to see if you can enable this.
Every router adheres to a specific 802.11 standard, which governs its overall performance and features. In the old days there was 802.11a/b, then 802.11g, then 802.11n, which is the most widespread specification in use today, since it's been around for a few years and is relatively fast. Waiting in the wings is 802.11ac, which by default broadcasts on the uncongested 5GHz band but is also backward compatible with 2.4GHz. Whereas 802.11g had a peak throughput of 54Mb/s, 802.11n has a theoretical peak of 600Mb/s (450Mb/s on the typical three-stream routers you can actually buy), and 802.11ac more than doubles that to an unholy 1.3Gb/s. It achieves this speed increase by supporting up to eight spatial streams compared to 802.11n's four, and through increased channel width, using 80MHz and an optional 160MHz channel.
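Those headline numbers fall out of simple multiplication: per-stream rate times number of spatial streams. A rough sketch of the spec maxima (these are theoretical link rates at each standard's wide channel width, not real-world speeds you'll ever see in a speed test):

```python
# Back-of-the-envelope peak link rates for Wi-Fi standards.
# Per-stream figures: 802.11n at 40MHz is 150Mb/s per stream;
# wave-1 802.11ac at 80MHz is ~433Mb/s per stream.

def peak_mbps(per_stream_mbps, streams):
    return per_stream_mbps * streams

n_max    = peak_mbps(150, 4)    # 802.11n spec max: 4 streams  -> 600 Mb/s
n_common = peak_mbps(150, 3)    # typical 3-stream router      -> 450 Mb/s
ac_wave1 = peak_mbps(433, 3)    # wave-1 802.11ac: 3 streams   -> ~1,300 Mb/s
```

Real throughput is much lower once protocol overhead, distance, and interference get their cut, but the ratios between standards hold up.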
QoS is a common feature on today's routers, and it lets you dictate which programs get priority when it comes to network bandwidth. You could, for example, throttle uTorrent while giving Netflix, Skype, or Battlefield 3 more bandwidth. One crucial point is that QoS matters most for outgoing traffic such as torrent uploads, since your router can only shape traffic leaving your network; incoming traffic has already squeezed through your ISP's pipe by the time the router sees it.
High-end 802.11n routers are able to broadcast dual networks on both 2.4GHz and 5GHz bands, though the new 802.11ac standard uses the 5GHz band by default.
System RAM, or memory, seems like such a basic thing, but there’s still much to know about it
The clock speed of RAM is usually expressed in megahertz, so DDR3/1866 runs at an effective 1,866MHz (the actual clock is half that, since DDR transfers data on both edges of the clock), at a certain latency timing. The only problem is that modern CPUs pack so much cache and are so intelligent at managing data that very high-clocked RAM rarely impacts overall performance. Going from, say, DDR3/1600 to DDR3/1866 isn't going to net you much at all. Only certain bandwidth-intensive applications, such as video encoding, can benefit from higher-clocked RAM. The sweet spot for most users is 1,600 or 1,866. The exception to this is with integrated graphics. If the box will be running integrated graphics, reach for the highest-clocked RAM the board will support and you will see a direct benefit in most games, since the GPU shares that memory bandwidth.
Modern CPUs support everything from single-channel to quad-channel RAM. There isn’t really a difference between a dual-channel kit and a quad-channel kit except that the vendor has done the work to match them up. You can run, for example, two dual-channel kits just fine. The only time you may want a factory-matched kit is if you are running the maximum amount of RAM or at a very high clock speed.
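The math behind clock speed and channel count is straightforward: each DDR3 channel is 64 bits (8 bytes) wide, so theoretical bandwidth is transfer rate times 8 bytes times the number of channels. A quick sketch (these are spec peaks; real-world throughput is lower):

```python
# Theoretical peak memory bandwidth for DDR3 configurations.
# Each channel moves 8 bytes per transfer at the rated transfer rate.

def bandwidth_mb_s(transfer_rate_mt_s, channels):
    return transfer_rate_mt_s * 8 * channels

dual_1600 = bandwidth_mb_s(1600, 2)    # dual-channel DDR3/1600: 25,600 MB/s
dual_1866 = bandwidth_mb_s(1866, 2)    # dual-channel DDR3/1866: 29,856 MB/s
quad_1866 = bandwidth_mb_s(1866, 4)    # quad-channel DDR3/1866: 59,712 MB/s
```

Note the per-channel figure is where module names like PC3-12800 come from: 1,600MT/s times 8 bytes is 12,800MB/s. It also shows why adding a channel helps far more than a speed bump: going dual to quad doubles bandwidth, while 1600 to 1866 adds only about 17 percent.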
Voltage isn't a prominent marketing spec for RAM, but it's worth paying attention to, as many newer CPUs with integrated memory controllers need lower-voltage RAM to operate at high frequency. Older performance DDR3 was often rated at 1.65V to hit its top speeds, while newer memory controllers are specced for 1.5V or lower, so that older RAM could need more voltage than a newer CPU is capable of supporting.
Heat is bad for RAM, but we've never been able to get any vendor to tell us at what temperature failures are induced. Unless you're into extreme overclocking, if you have good airflow in your case, you're generally good. We've come to feel that heatspreaders, for the most part, are like hubcaps. They may not do much, but who the hell wants to drive a car with all four hubcaps missing?
It's pretty easy to understand capacity on RAM: 16GB is more than 8GB, and 4GB is more than 2GB. With unbuffered, nonregistered RAM, the highest capacity you can run with a consumer CPU is 8GB modules. Registered, or buffered, DIMMs carry extra chips, or "buffers," on the module to take some of the electrical load off the memory controller; that's useful in servers and workstations that pack in a buttload of RAM. ECC RAM stands for error-correcting code and adds an additional RAM chip that stores parity data, letting the system correct single-bit errors and detect multi-bit errors that can't be tolerated in certain high-precision workloads. If this sounds like something you want, make sure your CPU supports it. Intel usually disables ECC on its consumer CPUs, even those based on commercial parts. AMD, on the other hand, doesn't. For most users, though, ECC support is overkill.
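To see how extra parity bits let memory fix itself, here's a toy Hamming(7,4) code in Python. This is the same family of error-correcting codes ECC memory is built on, though real modules use wider codes (typically correcting single-bit and detecting double-bit errors across a 64-bit word); this is a teaching sketch, not how any DIMM literally works:

```python
# Toy Hamming(7,4): 4 data bits protected by 3 parity bits. Flip any
# single bit of the 7-bit codeword and the decoder finds and fixes it.

def encode(d):                        # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7

def correct(c):                       # c = 7-bit codeword, maybe corrupted
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # re-check positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]    # re-check positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]    # re-check positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3        # syndrome = 1-based error position
    if pos:
        c[pos - 1] ^= 1               # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]   # recover the data bits

word = [1, 0, 1, 1]
code = encode(word)
code[4] ^= 1                          # simulate a single-bit memory error
assert correct(code) == word          # the error is located and repaired
```

The cost is those extra parity bits, which is exactly why ECC DIMMs carry one more chip than their non-ECC counterparts.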
We’re not sure what RAM heatsinks do today except look cool.
The power supply doesn’t get all the attention of, say, the CPU or the video card, but disrespect the PSU at your own peril
The actual wattage of the PSU is the spec everyone pays attention to. That’s because 650 watts is 650 watts, right? Well, not always. One maker’s 650 watts might actually be more like 580 watts or lower at the actual temperature inside your case on a hot day. Despite all this, the wattage rating is still one of the more reliable specs you can use to judge a PSU. How much you need can only be answered by the rig you’re running. We will say that recent GPU improvements have caused us to back away from our must-have-1,000W-PSU mantra. These days, believe it or not, a hefty system can run on 750 watts or lower with a good-quality PSU.
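A simple way to sanity-check wattage is to sum your big component draws and pad for headroom (capacitor aging, hot ambient temps, load spikes). The part wattages below are illustrative, not measured figures:

```python
# Hypothetical PSU sizing check: total the major draws, add ~30% headroom.
# These wattages are rough example numbers, not real measurements.

parts = {"cpu": 125, "gpu": 250, "board_drives_fans": 100}
load_w = sum(parts.values())           # ~475W at full tilt
recommended_w = round(load_w * 1.3)    # ~30% headroom -> ~618W
# A quality 650-750W unit covers this rig with room to spare.
```

Run the same math on a single-GPU midrange box and you'll see why the 1,000W mantra has faded: most builds never come close.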
After wattage, efficiency is the next checkmark feature. PSU efficiency is basically how well the unit converts power from AC to DC; the lower the efficiency, the more power is wasted as heat. The lowest rating is plain 80 Plus, which means the unit is at least 80 percent efficient at 20 percent, 50 percent, and 100 percent load. From there it goes to Bronze, Silver, Gold, and Platinum, with the higher tiers indicating higher efficiency. Higher is better, but you do get diminishing returns on your investment as you approach the higher tiers. An 80 Plus Silver PSU hits 88 percent efficiency at a 50 percent load; an 80 Plus Platinum hits 92 percent. (Efficiencies for the higher tiers vary at different loads.) Is it worth paying 40 percent more for that? That's up to you.
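Here's the arithmetic behind those diminishing returns. Wall draw is DC load divided by efficiency, so the Silver-to-Platinum jump saves only a handful of watts. The load, daily usage, and electricity rate below are hypothetical:

```python
# Wall-draw math for PSU efficiency tiers (a sketch; real efficiency
# curves vary with load, temperature, and line voltage).

def wall_draw(dc_load_w, efficiency):
    """AC watts pulled from the outlet to deliver dc_load_w to the PC."""
    return dc_load_w / efficiency

load = 400                                # hypothetical gaming load, watts
silver = wall_draw(load, 0.88)            # 80 Plus Silver at 50% load: ~455W
platinum = wall_draw(load, 0.92)          # 80 Plus Platinum at 50% load: ~435W
saved_w = silver - platinum               # ~20W no longer wasted as heat

# Yearly savings at 4 hours/day and a hypothetical $0.12/kWh:
dollars = saved_w * 4 * 365 / 1000 * 0.12  # only a few bucks a year
```

If the Platinum unit costs $40 more, it would take years of gaming to earn that back in electricity, which is exactly the trade-off the tier pricing forces you to weigh.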
A single-rail PSU spits out all the power from a single “rail,” so all of the 12 volt power is combined into one source. A multi-rail splits it into different rails. Which is better? On a modern PSU, it doesn’t matter much. Much of the problems from multi-rail PSUs were in the early days of SLI and Pentium 4 processors. PSU designs that favored CPUs, combined with the siloing of power among rails, proved incapable of properly feeding a multi-GPU setup. Single-rail designs had no such issues. These days, multi-rail PSUs are designed with today’s configs in mind, so multi-GPUs are no longer a problem.
A "dumb" power supply is actually what 99 percent of us have: a PSU that supplies clean, reliable power. An "intelligent" PSU does the same but also reports telemetry to the OS via USB. Some smart PSUs even let you adjust the voltages on the rails from within the operating system (something you'd otherwise have to do manually on high-end units) and let you control fan speed intelligently, too. Do you need a smart PSU? To be frank, no. But for those who like seeing how efficient the PSU is or what the 5-volt rail is doing, it's pretty damned cool.
Modular PSUs are the rage and give you great flexibility by letting you swap in shorter cables, or cables of a different color, or to remove unused cables. The downside is that most high-end machines use all of the cables, so that last point in particular is moot—what’s more, we think it’s too easy to lose modular cables, which sucks.
Modular power supplies are the rage today—just don’t misplace the cables.
How to dole out system advice like a pro
Warning: As a PC expert, you will be called upon often by family and friends for system-buying advice. After all, purchasing a new PC retail can be a daunting task for the average consumer. Remember, you might know the difference between an AMD FX-8350 and FX-6100, but will Aunt Peg?
This machine is probably too much PC for Aunt Peg to handle.
No, Aunt Peg will walk into the local Big Box with the goal of spending $750 on a basic all-in-one and end up walking out with a $3,000 SLI rig. We’re not saying that Aunt Peg doesn’t like getting her frag on as much as the rest of us, but let’s face it, she needs some basic buying tips.
Peg, what level of CPU you require depends on your needs. If your idea of a good time is Bejeweled, email, and basic photo editing, a dual-core processor of any model except Atom is more than enough. If you’re looking for more performance, the good thing is that Intel and AMD’s model numbers can mostly be trusted to represent actual performance. A Core i5 is greater than a Core i3 and an A10 is faster than an A8. If you are doing home video editing, Peg, consider paying for a quad-core CPU or more.
There are three known levers pulled when convincing consumers to buy a new PC: CPU, storage size, and amount of RAM. You’ll often see systems with low-end processors loaded up with a ton of RAM, because someone with a Pentium is really in the market for a system with 16GB of RAM (not!). For most people on a budget, 4GB is adequate, with 8GB being the sweet spot today. If you have a choice between a Pentium with 16GB and a Core i3 with 8GB, get the Core i3 box.
Storage is pretty obvious to everyone now, and analogous to closet space: you can never have enough. What consumers should really look for is SSD caching support, or, better yet, they should pony up for a full SSD. An SSD (or SSD caching) so greatly improves the feel of a PC that only those on a very strict budget should pass on it. SSDs are probably one of the most significant advances in PCs in the last four years, so not having one is almost like not having a CPU. How large of an SSD do you need? The minimum these days for a primary drive is 120GB, with 240GB being more usable.
There's a sad statistic in the PC industry: Americans don't pay for discrete graphics. It's sad because a good GPU should be among the top four specs a person looks at in a new computer. Integrated graphics, usually really bad Intel integrated graphics, have long been a staple of American PCs. To be fair, that's actually changing, as Intel's new Haswell graphics greatly improve on previous generations, and for a casual gamer they may even finally be enough. Still, almost any discrete GPU is faster than integrated graphics these days. Aunt Peg might not play games, but her kids or grandkids might, and not having a GPU will give them a frowny face. A GeForce GTX 650 or Radeon HD 7770 is a good baseline for any machine that will touch games.