MSI Z97 XPower AC Motherboard Comes with Delid Die Guard for Naked Haswell Chips <!--paging_filter--><h3><img src="/files/u69/delid_die_guard.jpg" alt="Delid_Die_Guard" title="Delid_Die_Guard" width="228" height="225" style="float: right;" />This board is prepped and primed to break world records</h3> <p><strong>MSI is throwing extreme overclockers a mighty big bone in the form of a motherboard</strong>. The company's upcoming Z97 XPower AC mobo will feature a "Delid Die Guard" that's measured to specification and designed to protect the CPU core on processors that no longer have an integrated heat spreader (IHS). In case you're unfamiliar, the IHS is that giant metal slab that covers the top of your processor. It's there to pull heat off the CPU core, which is then transferred to a heatsink with a bit of thermal goo in between to fill in the microscopic nooks and crannies.</p> <p>Removing the IHS is tricky business and certainly not for the faint of heart. Extreme overclockers are typically about the only ones who ever attempt the procedure, as it gives them direct access to the CPU core for exotic cooling solutions. This in turn allows them to chase world records.</p> <p>MSI <a href=";stream_ref=10" target="_blank">teased the feature</a> on its MSI Europe Facebook page. If you scroll through the mobo maker's Facebook feed, you'll run into several other related posts regarding its upcoming board. The board and others like it will feature V-Check Point 2 with two additional ground connectors so that you can use up to three multimeters at the same time. 
Next-generation boards from MSI will also have an OC engine to boost the baseclock of upcoming Haswell chips by 30 percent.</p> <p><em>Follow Paul on <a href="" target="_blank">Google+</a>, <a href="!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="" target="_blank">Facebook</a></em></p> Build a PC Hardware haswell motherboard msi overclocking z97 xpower ac News Fri, 11 Apr 2014 16:40:17 +0000 Paul Lilly 27613 at Intel Haswell-E X99 Chipset Details Leak to the Web <!--paging_filter--><h3><img src="/files/u69/intel_x99.jpg" alt="Intel X99" title="Intel X99" width="228" height="180" style="float: right;" />The X99 chipset will support DDR4 memory</h3> <p>It's going to be an exciting summer for power users. Assuming all goes to plan and the leaked information turns out to be accurate, <strong>you can expect Intel to launch its Haswell-E hardware in June, along with its X99 "Wellsburg" chipset</strong>. Details of the forthcoming chipset, which among other things will support DDR4 memory, have found their way online ahead of its release.</p> <p>According to what's supposed to be a leaked slide from Intel <a href="" target="_blank">posted by <em></em></a>, Intel's X99 chipset will support up to 14 USB ports, including half a dozen USB 3.0 ports and eight USB 2.0 ports. It will also feature support for 10 SATA 6Gbps ports, eight PCI-E Gen2 ports, Intel's integrated clock for HEDT, and Intel Rapid Storage and RST Smart Response technology.</p> <p>The chipset will offer a maximum memory speed of 2133MHz per DIMM slot, a spec that seems to suggest this will be the maximum frequency of DDR4 memory. 
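That 2133MHz spec translates directly into peak theoretical bandwidth, since DDR memory moves 8 bytes per transfer over a 64-bit channel. Here is a minimal Python sketch of the arithmetic; the helper name is ours, and the quad-channel total assumes Haswell-E keeps a four-channel memory controller like its X79-era predecessor, which the leak does not confirm:

```python
# Peak theoretical DDR bandwidth: transfer rate (MT/s) x 8 bytes per
# transfer per 64-bit channel. Illustrative helper, not from the leak.
def ddr_bandwidth_gbs(mt_per_s, channels=1):
    """Return peak bandwidth in GB/s (decimal) for a DDR transfer rate."""
    return mt_per_s * 8 * channels / 1000

per_channel = ddr_bandwidth_gbs(2133)        # DDR4-2133, one channel
quad = ddr_bandwidth_gbs(2133, channels=4)   # assumed quad-channel Haswell-E
print(f"{per_channel:.1f} GB/s per channel, {quad:.1f} GB/s quad-channel")
```

That works out to roughly 17GB/s per channel, or about 68GB/s across an assumed four channels.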
Beyond that, you can expect things like Hyper-Threading, Turbo Boost 2.0, and Intel's Smart Cache technology.</p> Build a PC chipset ddr4 Hardware haswell-e intel motherboard wellsburg x99 News Mon, 17 Mar 2014 15:09:08 +0000 Paul Lilly 27454 at Gigabyte Unveils Single 2011 Socket Motherboard with 10GbE LAN <!--paging_filter--><h3><img src="/files/u69/gigabyte_ga-6pxsvt.jpg" alt="Gigabyte GA-6PXSVT" title="Gigabyte GA-6PXSVT" width="228" height="113" style="float: right;" />The only board in the world with a single LGA 2011 socket</h3> <p>Home networking demands seem to be increasing by the day -- 4K video streaming, anyone? -- which might explain why <strong>Gigabyte is launching a single LGA 2011 socket motherboard</strong> featuring an integrated 10 Gigabit Ethernet LAN controller. It's the world's first motherboard to sport just one LGA 2011 socket, a move we suppose could help drive the price down while still offering home users 10GbE.</p> <p>That said, this is still a workstation-class motherboard. The <a href="" target="_blank">GA-6PXSVT</a> supports Intel Xeon E5-1600 V2 and E5-2600 V2 processors and has eight RDIMM/UDIMM slots for up to 256GB of ECC memory. 
It also features two GbE LAN ports (in addition to the 10GbE port), 10 SATA 6Gbps ports, and four SATA 3Gbps ports.</p> <p>With the memory slots and PCI-Express slots all facing the same direction, <a href="" target="_blank">Gigabyte says</a> the GA-6PXSVT is designed to deliver optimal airflow performance, and is therefore suitable for both rack and tower integration.</p> <p>No word yet on price or availability.</p> 10gbe Build a PC ga-6pxsvt gigabyte Hardware motherboard server socket 2011 News Fri, 07 Mar 2014 17:27:41 +0000 Paul Lilly 27401 at Asus Unveils Two Motherboards Supporting AMD's AM1 Platform <!--paging_filter--><h3><img src="/files/u69/am1i_a.jpg" alt="Asus AM1I-A" title="Asus AM1I-A" width="228" height="194" style="float: right;" />The first AM1-socket-based SoC motherboards from Asus</h3> <p>AMD said there were several planned motherboard releases based on its recently announced AM1 platform, and true to form, they're starting to trickle out. Some of the first are from <strong>Asus, which just announced the AM1M-A and AM1I-A</strong>, a pair of small form factor (SFF) motherboards built to take advantage of AMD's AM1-socketed System-on-Chip (SoC) Athlon and Sempron series Accelerated Processing Units (APUs).</p> <p>The AM1M-A is a micro ATX board with two DDR3 DIMM slots supporting up to 32GB of single-channel RAM; a single PCI-E 2.0 x16 slot (at x4 speed); HDMI, DVI-D, and D-Sub (VGA) outputs; GbE LAN; 8-channel onboard audio; two SATA 6Gbps ports; four USB 3.0 ports (two front, two rear); eight USB 2.0 ports (four front, four rear); TPM header; LPT; and COM port.</p> <p>Asus's other board -- AM1I-A -- is a mini ITX board that also has two DDR3 DIMM slots with support for up to 32GB of single-channel RAM. 
It features similar specs, but has a PCI-E 2.0 x4 slot, half as many USB 3.0 ports, and two COM ports.</p> <p>Both boards sport a UEFI BIOS and are slated to arrive in early April. No word yet on price.</p> am1 am1I-a am1m-a amd asus Build a PC Hardware motherboard News Wed, 05 Mar 2014 19:15:17 +0000 Paul Lilly 27389 at PC Performance Tested <!--paging_filter--><h3><a class="thickbox" style="font-size: 10px; text-align: center;" href="/files/u152332/nvidia_geforce_gtx_780-top_small_0.jpg"><img src="/files/u152332/nvidia_geforce_gtx_780-top_small.jpg" alt="Nvidia’s new GK110-based GTX 780 takes on two ankle-biter GTX 660 Ti GPUs." title="Nvidia’s new GK110" width="250" height="225" style="float: right;" /></a>With our lab coats donned, our test benches primed, and our benchmarks at the ready, we look for answers to nine of the most burning performance-related questions</h3> <p>If there’s one thing that defines the Maximum PC ethos, it’s an obsession with Lab-testing. What better way to discern a product’s performance capabilities, or judge the value of an upgrade, or simply settle a heated office debate? This month, we focus our obsession on several of the major questions on the minds of enthusiasts. Is liquid cooling always more effective than air? Should serious gamers demand PCIe 3.0? When it comes to RAM, are higher clocks better? On the surface, the answers might seem obvious. But, as far as we’re concerned, nothing is for certain until it’s put to the test. We’re talking tests that isolate a subsystem and measure results using real-world workloads. Indeed, we not only want to know if a particular technology or piece of hardware is truly superior, but also by how much. After all, we’re spending our hard-earned skrilla on this gear, so we want our purchases to make real-world sense. 
Over the next several pages, we put some of the most pressing PC-related questions to the test. If you’re ready for the answers, read on.</p> <h4>Core i5-4670K vs. Core i5-3570K vs. FX-8350</h4> <p>People like to read about the $1,000 high-end parts, but the vast majority of enthusiasts don’t buy at that price range. In fact, they don’t even buy the $320 chips. No, the sweet spot for many budget enthusiasts is around $220. To find out which chip is the fastest midrange part, we ran Intel’s new <a title="4670k" href="" target="_blank">Haswell Core i5-4670K</a> against the current-champ <a title="i5 3570K" href="" target="_blank">Core i5-3570K</a> as well as AMD’s <a title="vishera fx-8350" href="" target="_blank">Vishera FX-8350</a>.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/fx_small_0.jpg"><img src="/files/u152332/fx_small.jpg" alt="AMD’s FX-8350 has two cores up on the competition, but does that matter?" width="620" height="607" /></a></p> <p style="text-align: center;"><strong>AMD’s FX-8350 has two cores up on the competition, but does that matter?</strong></p> <p><strong>The Test:</strong> For our test, we socketed the Core i5-4670K into an Asus Z87 Deluxe with 16GB of DDR3/1600, an OCZ Vertex 3, a GeForce GTX 580 card, and Windows 8. For the Core i5-3570K, we used the same hardware in an Asus P8Z77-V Premium board, and the FX-8350 was tested in an Asus CrossHair V Formula board. We ran the same set of benchmarks that we used in our original review of the FX-8350 published in the Holiday 2012 issue.</p> <p><strong>The Results:</strong> First, the most important factor in the budget category is the price. As we wrote this, the street price of the Core i5-4670K was $240, the older Core i5-3570K was in the $220 range, and AMD’s FX-8350 went for $200. 
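Percent differences like the ones that follow are simple ratios. A quick Python sketch using the street prices above and the Stitch.Efx 2.0 times from the benchmark chart (the helper name is ours; the figures are from this article):

```python
# Percent by which `larger` exceeds `smaller`. For time-based benchmarks
# (seconds, lower is better), feeding in slow/fast times yields "percent
# faster" for the quicker chip.
def pct_more(larger, smaller):
    return (larger / smaller - 1) * 100

price_premium = pct_more(240, 200)     # 4670K vs. FX-8350 street price
stitch_speedup = pct_more(1511, 836)   # FX-8350 vs. 4670K Stitch.Efx times
print(f"{price_premium:.0f}% more expensive, {stitch_speedup:.0f}% faster")
```

By this arithmetic the 4670K carries a 20 percent price premium over the FX-8350, but finishes Stitch.Efx roughly 80 percent sooner.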
The 4670K is definitely on the outer edge of the budget sweet spot while the AMD is cheaper by a bit.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/haswell_small_5.jpg"><img src="/files/u152332/haswell_small_4.jpg" alt="Intel’s Haswell Core i5-4670K slots right into the high end of the midrange." title="Haswell" width="620" height="620" /></a></p> <p style="text-align: center;"><strong>Intel’s Haswell Core i5-4670K slots right into the high end of the midrange.</strong></p> <p>One thing that’s not disputable is the performance edge the new Haswell i5 part has. It stepped away from its Ivy Bridge sibling in every test we ran by respectable double-digit margins. And while the FX-8350 actually pulled close enough to the Core i5-3570K in enough tests to go home with some multithreaded victories in its pocket, it was definitely kept humble by Haswell. The Core i5-4670K plain-and-simply trashed the FX-8350 in the vast majority of the tests that can’t push all eight cores of the FX-8350. Even worse, in the multithreaded tests where the FX-8350 squeezed past the Ivy Bridge Core i5-3570K, Haswell either handily beat or tied the chip with twice its cores.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/ivybridge_small_0.jpg"><img src="/files/u152332/ivybridge_small.jpg" alt="The Core i5-3570K was great in its day, but it needs more than that to stay on top." title="Core i5-3570K" width="620" height="622" /></a></p> <p style="text-align: center;"><strong>The Core i5-3570K was great in its day, but it needs more than that to stay on top.</strong></p> <p>Even folks concerned with bang-for-the-buck will find the Core i5-4670K makes a compelling argument. Yes, it’s 20 percent more expensive than the FX-8350, but in some of our benchmarks, it was easily that much faster or more. In Stitch.Efx 2.0, for example, the Haswell was 80 percent faster than the Vishera. Ouch.</p> <p>So where does this leave us? 
For first place, we’re proclaiming the Core i5-4670K the midrange king by a margin wider than Louie Anderson. Even the most ardent fanboys wearing green-tinted glasses or sporting an IVB4VR license plate can’t disagree.</p> <p>For second place, however, we’re going to get all controversial and call it for the FX-8350, by a narrow margin. Here’s why: the FX-8350 actually holds up against the Core i5-3570K in a lot of benchmarks, has an edge in multithreaded apps, and its AM3+ socket has a far longer roadmap than LGA1155, which is on the fast track to Palookaville.</p> <p>Granted, Ivy Bridge and LGA1155 are still a great option, especially when bought on a discounted combo deal, but it’s a dead man walking, and our general guidance for those who like to upgrade is to stick to sockets that still have a pulse. Let’s not even mention that LGA1155 is the only one here with a pathetic two SATA 6Gb/s ports. Don’t agree? Great, because we have an LGA1156 motherboard and CPU to sell you.</p> <div class="module orange-module article-module"><strong><span class="module-name">Benchmarks</span></strong></div> <div class="spec-table orange"> <table style="width: 627px; height: 270px;" border="0"> <thead> <tr style="text-align: left;"> <th class="head-empty"> </th> <th class="head-light">Core i5-4670K</th> <th>Core i5-3570K</th> <th>FX-8350</th> </tr> </thead> <tbody> <tr> <td class="item"><strong>POV Ray 3.7 RC3 (sec)</strong></td> <td class="item-dark"><strong>168.53</strong></td> <td> <p>227.75</p> </td> <td>184.8</td> </tr> <tr> <td><strong>Cinebench 10 Single-Core</strong></td> <td><strong>8,500</strong></td> <td>6,866</td> <td>4,483</td> </tr> <tr> <td class="item"><strong>Cinebench 11.5</strong></td> <td class="item-dark"><strong>6.95<br /></strong></td> <td>6.41</td> <td><strong>6.90</strong></td> </tr> <tr> <td><strong>7Zip 9.20</strong></td> <td>17,898</td> <td>17,504</td> <td><strong>23,728</strong></td> </tr> <tr> <td><strong>Fritz Chess</strong></td> 
<td><strong>13,305</strong></td> <td>11,468</td> <td>12,506</td> </tr> <tr> <td class="item"><strong>Premiere Pro CS6 (sec)</strong></td> <td class="item-dark"><strong>2,849</strong></td> <td>3,422</td> <td>5,220</td> </tr> <tr> <td class="item"><strong>HandBrake Blu-ray encode&nbsp; (sec)</strong></td> <td class="item-dark"><strong>9,042</strong></td> <td>9,539</td> <td><strong>8,400</strong></td> </tr> <tr> <td><strong>x264 5.01 Pass 1 (fps)</strong></td> <td><strong>66.3<br /></strong></td> <td>57.1</td> <td>61.3</td> </tr> <tr> <td><strong>x264 5.01 Pass 2 (fps)</strong></td> <td><strong>15.8</strong></td> <td>12.7</td> <td><strong>15</strong></td> </tr> <tr> <td><strong>Sandra (GB/s)</strong></td> <td><strong>21.6</strong></td> <td><strong>21.3</strong></td> <td>18.9</td> </tr> <tr> <td><strong>Stitch.Efx 2.0 (sec)</strong></td> <td><strong>836</strong></td> <td>971</td> <td>1,511</td> </tr> <tr> <td><strong>ProShow Producer 5 (sec)</strong></td> <td><strong>1,275</strong></td> <td>1,463</td> <td>1,695</td> </tr> <tr> <td><strong>STALKER: CoP low-res (fps)</strong></td> <td><strong>173.5</strong></td> <td>167.3</td> <td>132.1</td> </tr> <tr> <td><strong>3DMark 11 Physics</strong></td> <td><strong>7,938</strong></td> <td>7,263</td> <td>7,005</td> </tr> <tr> <td><strong>PC Mark 7 Overall</strong></td> <td><strong>6,428</strong></td> <td>5,582</td> <td>4,408</td> </tr> <tr> <td><strong>PC Mark 7 Storage</strong></td> <td>5,300</td> <td><strong>5,377</strong></td> <td>4,559</td> </tr> <tr> <td><strong>Valve Particle (fps)</strong></td> <td><strong>180</strong></td> <td>155</td> <td>119</td> </tr> <tr> <td><strong>Heaven 3.0 low-res (fps)</strong></td> <td><strong>139.4</strong></td> <td>138.3</td> <td>134.4</td> </tr> </tbody> </table> </div> <p><em>Best scores are bolded. Test bed described in text</em></p> <h4>Hyper-Threading vs. 
No Hyper-Threading<em>&nbsp;</em></h4> <p><a title="hyper threading" href="" target="_blank">Hyper-Threading</a> came out over a decade ago with the original 3.06GHz Pentium 4, and was mostly a dud. Few apps were multithreaded and even Windows’s own scheduler didn’t know how to deal with HT, making some apps actually slow down when the feature was enabled. But the tech overcame those early hurdles to grow into a worthwhile feature today. Still, builders are continually faced with choosing between procs with and without HT, so we wanted to know definitively how much it matters. <em>&nbsp;</em></p> <p><strong>The Test:</strong> Since we haven’t actually run numbers on HT in some time, we broke out a Core i7-4770K and ran tests with HT turned on and off. We used a variety of benchmarks with differing degrees of threadedness to test the technology’s strengths and weaknesses.</p> <p><strong>The Results:</strong> One look at our results and you can tell HT is well worth it if your applications can use the available threads. We saw benefits of 10–30 percent from HT in some apps. But if your app can’t use the threads, you gain nothing. And in rare instances, it appears to hurt performance slightly—as in Hitman: Absolution when run to stress the CPU rather than the GPU. Our verdict is that you should pay for HT, but only if your chores include 3D modeling, video encoding or transcoding, or other thread-heavy tasks. 
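Those 10–30 percent gains can be recomputed from the chart below; the only wrinkle is that time-based results (seconds, lower is better) invert the ratio relative to scores and frame rates. A minimal Python sketch (the helper name is ours; the numbers are from our chart):

```python
# HT gain in percent: off/on for seconds (lower is better), on/off for
# scores and fps (higher is better). Negative means HT hurt performance.
def ht_gain_pct(off, on, lower_is_better):
    ratio = off / on if lower_is_better else on / off
    return (ratio - 1) * 100

chart = {  # benchmark: (HT off, HT on, lower_is_better)
    "Cinebench 11.5": (6.95, 8.88, False),
    "Premiere Pro CS6 (sec)": (2950, 2522, True),
    "HandBrake 0.9.9 (sec)": (1200, 1068, True),
    "Hitman: Absolution (fps)": (92, 84, False),
}
for name, (off, on, lib) in chart.items():
    print(f"{name}: {ht_gain_pct(off, on, lib):+.1f}%")
```

Cinebench gains about 28 percent and HandBrake about 12 percent from HT, while Hitman loses roughly 9 percent, matching what the chart shows.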
Gamers who occasionally transcode videos, for example, would get more bang for their buck from a Core i5-4670K.</p> <div class="module orange-module article-module"><strong><span class="module-name">Benchmarks</span></strong></div> <div class="spec-table orange"> <table style="width: 627px; height: 270px;" border="0"> <thead> <tr style="text-align: left;"> <th class="head-empty"> </th> <th class="head-light">HT Off</th> <th>HT On</th> </tr> </thead> <tbody> <tr> <td class="item"><strong>PCMark 7 Overall</strong></td> <td class="item-dark">6,308</td> <td> <p><strong>6,348</strong></p> </td> </tr> <tr> <td><strong>Cinebench 11.5</strong></td> <td>6.95</td> <td><strong>8.88</strong></td> </tr> <tr> <td class="item"><strong>Stitch.EFx 2.0 (sec)</strong></td> <td class="item-dark">772</td> <td>772</td> </tr> <tr> <td><strong>ProShow Producer 5.0&nbsp; (sec)</strong></td> <td>1,317</td> <td><strong>1,314</strong></td> </tr> <tr> <td><strong>Premiere Pro CS6 (sec)</strong></td> <td>2,950</td> <td><strong>2,522</strong></td> </tr> <tr> <td class="item"><strong>HandBrake 0.9.9 (sec)</strong></td> <td class="item-dark">1,200</td> <td><strong>1,068</strong></td> </tr> <tr> <td class="item"><strong>3DMark 11 Overall</strong></td> <td class="item-dark">X2,210</td> <td>X2,209</td> </tr> <tr> <td><strong>Valve Particle Test (fps)</strong></td> <td>191</td> <td><strong>226</strong></td> </tr> <tr> <td><strong>Hitman: Absolution, low res (fps)</strong></td> <td><strong>92</strong></td> <td>84</td> </tr> <tr> <td><strong>Total War: Shogun 2 CPU Test (fps)</strong></td> <td><strong>42.4</strong></td> <td>41</td> </tr> </tbody> </table> </div> <p><em>Best scores are bolded. We used a Core i7-4770K on an Asus Z87 Deluxe, with a Neutron GTX 240 SSD, a GeForce GTX 580, 16GB of DDR3/1600, and 64-bit Windows 8</em></p> <p><em>Click the next page to read about air cooling vs. water cooling</em></p> <h3> <hr /></h3> <h3>Air Cooling vs. 
Water Cooling<em>&nbsp;</em></h3> <p>There are two main ways to chill your CPU: a heatsink with a fan on it, or a closed-loop liquid cooler (CLC). Unlike a custom loop, a CLC doesn't need to be periodically drained and flushed, or checked for leaks. The "closed" part means that it's sealed and integrated. This integration also reduces manufacturing costs and makes the setup much easier to install. If you want maximum overclocks, custom loops are the best way to go. But it’s a steep climb in cost for a modest improvement beyond what current closed loops can deliver. <em>&nbsp;</em></p> <p>But air coolers are not down for the count. They're still the easiest to install and the cheapest. However, the prices between air and water are so close now that it's worth taking a look at the field to determine what's best for your budget.<em>&nbsp;</em></p> <p><strong>The Test:</strong> To test the two cooling methods, we dropped them into a rig with a hex-core Intel Core i7-3960X overclocked to 4.25GHz on an Asus Rampage IV Extreme motherboard, inside a Corsair 900D. By design, it's kind of a beast and tough to keep cool.</p> <h4>The Budget Class<em>&nbsp;</em></h4> <p><strong>The Results:</strong> At this level, the Cooler Master 212 Evo is legend…ary. It runs cool and quiet, it's easy to pop in, it can adapt to a variety of sockets, it's durable, and it costs about 30 bucks. Despite the 3960X's heavy load, the 212 Evo averages about 70 degrees C across all six cores, with a room temperature of about 22 C, or 71.6 F. Things don’t tend to get iffy until 80 C, so there's room to go even higher. Not bad for a cooler with one 120mm fan on it.</p> <p>Entry-level water coolers cost substantially more, unless you're patient enough to wait for a fire sale. They require more materials, more manufacturing, and more complex engineering. The Cooler Master Seidon 120M is a good example of the kind of unit you'll find at this tier. 
It uses a standard 120mm fan attached to a standard 120mm radiator (or "rad") and currently has a street price of $60. But in our tests, its thermal performance was about the same as, or worse than, the 212 Evo. In order to meet an aggressive price target, you have to make some compromises. The pump is smaller than average, for example, and the copper block you install on top of the CPU is not as thick. The Seidon was moderately quieter, but we have to give the nod to the 212 Evo when it comes to raw performance.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/coolermaster_212evo_small_2.jpg"><img src="/files/u152332/coolermaster_212evo_small_1.jpg" alt="The Cooler Master 212 Evo has arguably the best price-performance ratio around." title="Cooler Master 212" width="620" height="715" /></a></p> <p style="text-align: center;"><strong>The Cooler Master 212 Evo has arguably the best price-performance ratio around.</strong></p> <h4>The Performance Class<em>&nbsp;</em></h4> <p><strong>The Results:</strong> While a CLC has trouble scaling its manufacturing costs down to the budget level, there's a lot more headroom when you hit the $100 mark. The NZXT Kraken X60 CLC is one of the best examples in this class; its dual–140mm fans and 280mm radiator can unload piles of heat without generating too much noise, and it has a larger pump and apparently larger tubes than the Seidon 120M. Our tests bear out the promise of the X60's design, with its "quiet" setting delivering a relatively chilly 66 C, or about 45 degrees above the ambient room temp.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/nzxt_krakenx60_small_3.jpg"><img src="/files/u152332/nzxt_krakenx60_small_1.jpg" alt="It may not look like much, but the Kraken X60 is the Ferrari of closed-loop coolers." 
title="Kraken X60" width="620" height="425" /></a></p> <p style="text-align: center;"><strong>It may not look like much, but the Kraken X60 is the Ferrari of closed-loop coolers.</strong></p> <p>Is there any air cooler that can keep up? Well, we grabbed a Phanteks TC14PE, which uses two heatsinks instead of one, dual–140mm fans, and retails at $85–$90. It performed only a little cooler than the 212 Evo, but it did so very quietly, like a ninja. At its quiet setting, it trailed behind the X60 by 5 C. It may not sound like much, but that extra 5 C of headroom means a higher potential overclock. So, water wins the high end.</p> <div class="module orange-module article-module"><strong><span class="module-name">Benchmarks</span></strong></div> <div class="spec-table orange"> <table style="width: 620px; height: 267px;" border="0"> <thead> <tr style="text-align: left;"> <th class="head-empty"> </th> <th class="head-light"><span style="font-family: times new roman,times;">Seidon 120M Quiet / Performance Mode</span></th> <th><span style="font-family: times new roman,times;">212 Evo<br />Quiet / Performance Mode</span></th> <th><span style="font-family: times new roman,times;">Kraken X60 Quiet / Performance Mode</span></th> <th><span style="font-family: times new roman,times;">TC14PE<br />Quiet / Performance Mode</span></th> </tr> </thead> <tbody> <tr> <td class="item"><strong>Ambient Air</strong></td> <td class="item-dark">22.1 / 22.2</td> <td> <p>20.5 / 20</p> </td> <td>20.9 / 20.7</td> <td>20 / 19.9</td> </tr> <tr> <td><strong>Idle Temperature</strong></td> <td>38 / 30.7</td> <td>35.5 / 30.5</td> <td>29.7 / 28.8</td> <td>32 / <strong>28.5</strong></td> </tr> <tr> <td class="item"><strong>Load Temperature</strong></td> <td class="item-dark">78.3 / 70.8</td> <td>70 / 67.3</td> <td>66 / 61.8</td> <td>70.3 / 68.6</td> </tr> <tr> <td><strong>Load - Ambient</strong></td> <td>56.2 / 48.6</td> <td>49.5 / 47.3</td> <td>45.1 / 41.1</td> <td>50.3/ 48.7</td> </tr> </tbody> </table> 
</div> <p><em>All temperatures in degrees Celsius. Best scores bolded.</em></p> <h4>Is High-Bandwidth RAM Worth It?<em>&nbsp;</em></h4> <p>Today, you can get everything from vanilla DDR3/1333 all the way to exotic-as-hell DDR3/3000. The question is: Is it actually worth paying for anything more than the garden-variety RAM? <em>&nbsp;</em></p> <p><strong>The Test:</strong> For our test, we mounted a Core i7-4770K into an Asus Z87 Deluxe board and fitted it with AData modules at DDR3/2400, DDR3/1600, and DDR3/1333. We then picked a variety of real-world (and one synthetic) tests to see how the three compared.</p> <p><strong>The Results:</strong> First, let us state that if you're running integrated graphics and you want better 3D performance, pay for higher-clocked RAM. With discrete graphics, though, the advantage isn't as clear. We had several apps that saw no benefit from going from 1,333MHz to 2,400MHz. In others, though, we saw a fairly healthy boost, 5–10 percent, by going from standard DDR3/1333 to DDR3/2400. The shocker came in Dirt 3, which we ran in low-quality modes so as not to be bottlenecked by the GPU. At low resolution and low image quality, we saw an astounding 18 percent boost. <em>&nbsp;</em></p> <p>To bring you back down to earth, you should know that cranking the resolution in the game all but erased the difference. 
To see any actual benefit, we think you’d really need a tri-SLI GeForce GTX 780 setup, and even then we expect that the vast majority of games won’t actually give you that scaling.<em>&nbsp;</em></p> <p>We think the sweet spot for price/performance is either DDR3/1600 or DDR3/1866.<em><br /></em></p> <div class="module orange-module article-module"><strong><span class="module-name">Benchmarks</span></strong></div> <div class="spec-table orange"> <table style="width: 620px; height: 267px;" border="0"> <thead> <tr style="text-align: left;"> <th class="head-empty"> </th> <th class="head-light"><span style="font-family: times new roman,times;">DDR3/1333</span></th> <th><span style="font-family: times new roman,times;">DDR3/1600</span></th> <th><span style="font-family: times new roman,times;">DDR3/2400</span></th> </tr> </thead> <tbody> <tr> <td class="item"><strong>Stitch.Efx 2.0 (sec)</strong></td> <td class="item-dark">776</td> <td> <p>773</p> </td> <td><strong>763</strong></td> </tr> <tr> <td><strong>PhotoMatix HDR (sec)</strong></td> <td>181</td> <td>180</td> <td>180</td> </tr> <tr> <td class="item"><strong>ProShow Producer 5.0 (sec) <br /></strong></td> <td class="item-dark">1,370</td> <td>1,337</td> <td><strong>1,302</strong></td> </tr> <tr> <td><strong>HandBrake 0.9.9 (sec)</strong></td> <td>1,142</td> <td>1,077</td> <td><strong>1,037</strong></td> </tr> <tr> <td><strong>3DMark Overall</strong></td> <td>2,211</td> <td>2,214</td> <td>2,215</td> </tr> <tr> <td><strong>Dirt 3 Low Quality (fps)</strong></td> <td>234</td> <td>247.6</td> <td><strong>272.7</strong></td> </tr> <tr> <td><strong>Price for two 4GB DIMMs (USD)</strong></td> <td>$70</td> <td>$73</td> <td>$99</td> </tr> </tbody> </table> </div> <p><em> 
Best scores bolded.</em></p> <p><em>Click the next page to see how two midrange graphics cards stack up against one high-end GPU!</em></p> <h3> <hr /></h3> <h3>One High-End GPU vs. Two Midrange GPUs<em>&nbsp;</em></h3> <p>One of the most common questions we get here at Maximum PC, aside from details about our lifting regimen, is whether to upgrade to a high-end GPU or run two less-expensive cards in SLI or CrossFire. It’s a good question, since high-end GPUs are expensive, and cards that are two rungs below them in the product stack cost about half the price, which naturally raises the question: Are two $300 cards faster than a single $600 card? Before we jump to the tests, note that dual-card setups suffer from a unique set of issues. First is the frame-pacing situation, where the cards are unable to deliver frames evenly, so even though the overall frame rate is high, there is still micro-stutter on the screen. Nvidia and AMD dual-GPU configs suffer from this, but Nvidia’s SLI has less of a problem than AMD’s CrossFire at this time. Both companies also need to offer drivers to allow games and benchmarks to see both GPUs, but they are equally good at delivering drivers the day games are released, so the days of waiting two weeks for a driver are largely over. <em>&nbsp;</em></p> <h4>2x Nvidia <a title="660 Ti" href="" target="_blank">GTX 660 Ti</a> vs. <a title="geforce gtx 780" href="" target="_blank">GTX 780</a><em>&nbsp;</em></h4> <p><strong>The Test:</strong> We considered using two $250 GTX 760 GPUs for this test, but Nvidia doesn't have a $500 GPU to test them against, and since this is Maximum PC, we rounded up one model from the "mainstream" to the $300 GTX 660 Ti. This video card was recently replaced by the GTX 760, causing its price to drop to a bit below $300, but since that’s its MSRP, we are using it for this comparison. 
We got two of them to go up against the GTX 780, which costs roughly $650, so it's not a totally fair fight, but we figured it's close enough for government work. We ran our standard graphics test suite in both single- and dual-card configurations. <em>&nbsp;</em></p> <p><strong>The Results:</strong> It looks like our test was conclusive—two cards in SLI provide a slightly better gaming experience than a single badass card, taking top marks in seven out of nine tests. And they cost less, to boot. Nvidia’s frame-pacing was virtually without issues, too, so we don’t have any problem recommending Nvidia SLI at this time. It is the superior cost/performance setup as our benchmarks show.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/nvidia_geforce_gtx_780-top_small_0.jpg"><img src="/files/u152332/nvidia_geforce_gtx_780-top_small.jpg" alt="Nvidia’s new GK110-based GTX 780 takes on two ankle-biter GTX 660 Ti GPUs." title="Nvidia’s new GK110" width="620" height="559" /></a></p> <p style="text-align: center;"><strong>Nvidia’s new GK110-based GTX 780 takes on two ankle-biter GTX 660 Ti GPUs.</strong></p> <h4>2x <a title="7790" href="" target="_blank">Radeon HD 7790</a> vs. <a title="7970" href="" target="_blank">Radeon HD 7970</a>&nbsp;GHz<em></em></h4> <p><strong>The Test:</strong> For our AMD comparison, we took two of the recently released HD 7790 cards, at $150 each, and threw them into the octagon with a $400 GPU, the PowerColor Radeon HD 7970 Vortex II, which isn't technically a "GHz" board, but is clocked at 1,100MHz, so we think it qualifies. We ran our standard graphics test suite in both single- and dual-card configurations.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/reviews-10649_small_0.jpg"><img src="/files/u152332/reviews-10649_small.jpg" alt="Two little knives of the HD 7790 ilk take on the big gun Radeon HD 7970. 
" title="HD 7790" width="620" height="663" /></a></p> <p style="text-align: center;"><strong>Two little knives of the HD 7790 ilk take on the big gun Radeon HD 7970.</strong></p> <p><strong>The Results:</strong> Our AMD tests resulted in a very close battle, with the dual-card setup taking the win by racking up higher scores in six out of nine tests, and the single HD 7970 card taking top spot in the other three tests. But what you can’t see in the chart is that the dual HD 7790 cards were totally silent while the HD 7970 card was loud as hell. Also, AMD has acknowledged the micro-stutter problem with CrossFire and promises a software fix for it, but unfortunately that fix is going to arrive right as we are going to press on July 31. Even without it, gameplay seemed smooth, and the duo is clearly faster, so it gets our vote as the superior solution, at least in this config.<em><br /></em></p> <div class="module orange-module article-module"><strong><span class="module-name">Benchmarks</span></strong></div> <div class="spec-table orange"> <table style="width: 620px; height: 267px;" border="0"> <thead> <tr style="text-align: left;"> <th class="head-empty"> </th> <th class="head-light"><span style="font-family: times new roman,times;">GTX 660 Ti SLI</span></th> <th><span style="font-family: times new roman,times;">GTX 780</span></th> <th><span style="font-family: times new roman,times;">Radeon HD 7790 CrossFire</span></th> <th><span style="font-family: times new roman,times;">Radeon HD 7970 GHz</span></th> </tr> </thead> <tbody> <tr> <td class="item"><strong>3DMark Fire Strike</strong></td> <td class="item-dark"><strong>8,858</strong></td> <td> <p>8,482</p> </td> <td><strong>8,842</strong></td> <td>7,329</td> </tr> <tr> <td><strong>Catzilla (Tiger) Beta</strong></td> <td><strong>7,682</strong></td> <td>6,933</td> <td><strong>6,184</strong></td> <td>4,889</td> </tr> <tr> <td class="item"><strong>Unigine Heaven 4.0 (fps)<br /></strong></td> <td 
class="item-dark">33</td> <td><strong>35<br /></strong></td> <td><strong>30</strong></td> <td>24</td> </tr> <tr> <td><strong>Crysis 3 (fps)</strong></td> <td><strong>26</strong></td> <td>24</td> <td>15</td> <td><strong>17</strong></td> </tr> <tr> <td><strong>Shogun 2 (fps)</strong></td> <td><strong>60</strong></td> <td>48</td> <td><strong>51</strong></td> <td>43</td> </tr> <tr> <td><strong>Far Cry 3 (fps)</strong></td> <td><strong>41</strong></td> <td>35</td> <td>21</td> <td><strong>33</strong></td> </tr> <tr> <td><strong>Metro: Last Light (fps)</strong></td> <td><strong>24</strong></td> <td>22</td> <td>13</td> <td><strong>14</strong></td> </tr> <tr> <td><strong>Tomb Raider (fps)</strong></td> <td>18</td> <td><strong>25</strong></td> <td><strong>24</strong></td> <td>20</td> </tr> <tr> <td><strong>Battlefield 3 (fps)</strong></td> <td><strong>56</strong></td> <td>53</td> <td><strong>57</strong></td> <td>41</td> </tr> </tbody> </table> </div> <p><em>Best scores are bolded. Our test bed is a 3.33GHz Core i7 3960X Extreme Edition in an Asus P9X79 motherboard with 16GB of DDR3/1600 and a Thermaltake ToughPower 1,050W PSU. The OS is 64-bit Windows 7 Ultimate. All tests, except for the 3DMark tests, are run at 2560x1600 with 4X AA.</em></p> <h3>PCI Express 2.0 vs. PCI Express 3.0<em></em></h3> <p>PCI Express is the specification that governs the amount of bandwidth available between the CPU and the PCI Express slots on your motherboard. We've recently made the jump from version 2.0 to version 3.0, and the PCI Express interface on all late-model video cards is now PCI Express 3.0, causing many frame-rate addicts to question the sanity of placing a PCIe 3.0 GPU into a PCIe 2.0 slot on their motherboard. The reason why is that PCIe 3.0 has quite a bit more theoretical bandwidth than PCIe 2.0. 
Specifically, one PCIe 2.0 lane can transmit 500MB/s in one direction, while a PCIe 3.0 lane can pump up to 985MB/s, almost double the bandwidth per lane. Multiply that by the 16 lanes a graphics card uses, and the difference is substantial. However, that extra bandwidth only matters if it’s actually needed, which is what we wanted to find out. <em></em></p> <p><strong>The Test:</strong> We plugged an Nvidia GTX Titan into our Asus P9X79 board and ran several of our gaming tests with the top PCI Express x16 slot alternately set to PCIe 3.0 and PCIe 2.0. On this particular board, you can switch the setting in the BIOS. <em></em></p> <p><strong>The Results:</strong> We had heard previously that there was very little difference between PCIe 2.0 and PCIe 3.0 on current systems, and our tests back that up. In every single test, Gen 3.0 was faster, but the difference is so small that it’s very hard for us to believe PCIe 2.0 is being saturated by our GPU. It’s also quite possible that one would see more pronounced results using two or more cards, but we wanted to “keep it real” and just use one card. 
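To make the per-lane arithmetic above concrete, here's a quick back-of-the-envelope calculation (a sketch, using the one-direction per-lane figures quoted in this section):

```python
# One-direction, per-lane bandwidth figures quoted above, in MB/s.
PCIE2_LANE_MBS = 500
PCIE3_LANE_MBS = 985

def slot_bandwidth_gbs(lane_mbs, lanes=16):
    """Total one-direction bandwidth of a slot, in GB/s."""
    return lane_mbs * lanes / 1000.0

# Across a full x16 slot: Gen 2.0 tops out at 8GB/s, Gen 3.0 at 15.76GB/s.
print(slot_bandwidth_gbs(PCIE2_LANE_MBS))   # 8.0
print(slot_bandwidth_gbs(PCIE3_LANE_MBS))   # 15.76
```

That's nearly double the ceiling on paper; whether a single GPU actually needs it is what the results above answer, and they suggest it doesn't.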
<em></em></p> <div class="module orange-module article-module"><strong><span class="module-name">Benchmarks</span></strong></div> <div class="spec-table orange"> <table style="width: 620px; height: 267px;" border="0"> <thead> <tr style="text-align: left;"> <th class="head-empty"> </th> <th class="head-light"><span style="font-family: times new roman,times;">GTX Titan PCIe 2.0</span></th> <th><span style="font-family: times new roman,times;">GTX Titan PCIe 3.0</span></th> </tr> </thead> <tbody> <tr> <td class="item"><strong>3DMark Fire Strike</strong></td> <td class="item-dark">9,363</td> <td> <p><strong>9,892</strong></p> </td> </tr> <tr> <td><strong>Unigine Heaven 4.0 (fps)</strong></td> <td>37</td> <td><strong>40</strong></td> </tr> <tr> <td class="item"><strong>Crysis 3 (fps)<br /></strong></td> <td class="item-dark">31</td> <td><strong>32<br /></strong></td> </tr> <tr> <td><strong>Shogun 2 (fps)</strong></td> <td>60</td> <td><strong>63</strong></td> </tr> <tr> <td><strong>Far Cry 3 (fps)</strong></td> <td>38</td> <td><strong>42</strong></td> </tr> <tr> <td><strong>Metro: Last Light (fps)</strong></td> <td>22</td> <td><strong>25</strong></td> </tr> <tr> <td><strong>Tomb Raider (fps)</strong></td> <td>22</td> <td><strong>25</strong></td> </tr> </tbody> </table> </div> <p><em>Best scores are bolded. Our test bed is a 3.33GHz Core i7 3960X Extreme Edition in an Asus P9X79 motherboard with 16GB of DDR3/1600 and a Thermaltake ToughPower 1,050W PSU. The OS is 64-bit Windows 7 Ultimate. All games are run at 2560x1600 with 4X AA except for the 3DMark tests.</em></p> <h3>PCIe x8 vs. PCIe x16</h3> <p>PCI Express expansion slots vary in both physical size and the amount of bandwidth they provide. The really long slots are called x16 slots, as they provide 16 lanes of PCIe bandwidth, and that’s where our video cards go, for obvious reasons. 
Almost all of the top slots in a motherboard (those closest to the CPU) are x16, but sometimes those 16 lanes are divided between two slots, so what might look like a x16 slot is actually a x8 slot. The tricky part is that sometimes the slots below the top slot only offer eight lanes of PCIe bandwidth, and sometimes people need to skip that top slot because their CPU cooler is in the way or water cooling tubes are coming out of a radiator in that location. Or you might be running a dual-card setup, and if you use a x8 slot for one card, it will force the x16 slot to run at x8 speeds. Here’s the question: Since a PCIe 3.0 x16 slot provides nearly 16GB/s of bandwidth in one direction, and a x8 slot pumps about half that, is your performance hobbled by running at x8?</p> <p><strong>The Test:</strong> We wedged a GTX Titan first into a x16 slot and then into a x8 slot on our Asus P9X79 motherboard and ran our gaming tests in order to compare the difference.</p> <p><strong>The Results:</strong> We were surprised by these results, which show x16 to be a clear winner. Sure, it seems obvious, but we didn’t think even current GPUs were saturating the x8 interface; apparently they are, so this is an easy win for x16.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/asus_p9x79_small_0.jpg"><img src="/files/u152332/asus_p9x79_small.jpg" alt="The Asus P9X79 offers two x16 slots (blue) and two x8 slots (white)."
title="Asus P9X79" width="620" height="727" /></a></p> <p style="text-align: center;"><strong>The Asus P9X79 offers two x16 slots (blue) and two x8 slots (white).</strong></p> <div class="module orange-module article-module"><strong><span class="module-name">Benchmarks</span></strong></div> <div class="spec-table orange"> <table style="width: 620px; height: 267px;" border="0"> <thead> <tr style="text-align: left;"> <th class="head-empty"> </th> <th class="head-light"><span style="font-family: times new roman,times;">GTX Titan PCIe x16</span></th> <th><span style="font-family: times new roman,times;">GTX Titan PCIe x8</span></th> </tr> </thead> <tbody> <tr> <td class="item"><strong>3DMark Fire Strike</strong></td> <td class="item-dark"><strong>9,471</strong></td> <td> <p>9,426</p> </td> </tr> <tr> <td><strong>Catzilla (Tiger) Beta</strong></td> <td><strong>7,921</strong></td> <td>7,095</td> </tr> <tr> <td class="item"><strong>Unigine Heaven 4.0 (fps)<br /></strong></td> <td class="item-dark"><strong>40</strong></td> <td>36</td> </tr> <tr> <td><strong>Crysis 3 (fps)</strong></td> <td>32</td> <td><strong>37</strong></td> </tr> <tr> <td><strong>Shogun 2 (fps)</strong></td> <td><strong>64</strong></td> <td>56</td> </tr> <tr> <td><strong>Far Cry 3 (fps)</strong></td> <td><strong>43</strong></td> <td>39</td> </tr> <tr> <td><strong>Metro: Last Light (fps)</strong></td> <td><strong>25</strong></td> <td>22</td> </tr> <tr> <td><strong>Tomb Raider (fps)</strong></td> <td><strong>25</strong></td> <td>23</td> </tr> <tr> <td><strong>Battlefield 3 (fps)</strong></td> <td><strong>57</strong></td> <td>50</td> </tr> </tbody> </table> </div> <p><em>Tests performed on an Asus P9X79 Deluxe motherboard. </em></p> <h3>IDE vs. AHCI<em></em></h3> <p>If you go into your BIOS and look at the options for your motherboard’s SATA controller, you usually have three options: IDE, AHCI, and RAID. 
RAID is for when you have more than one drive, so for running just a lone wolf storage device, you have AHCI and IDE. For ages we always just ran IDE, as it worked just fine. But now there’s AHCI too, which stands for Advanced Host Controller Interface, and it supports features IDE doesn’t, such as Native Command Queuing (NCQ) and hot swapping. Some people also claim that AHCI is faster than IDE due to NCQ and the fact that it's newer. Also, for SSD users, IDE does not support the Trim command, so AHCI is critical to an SSD's well-being over time, but is there a speed difference between IDE and AHCI for an SSD? We set out to find out. <em></em></p> <p><strong>The Test:</strong> We enabled IDE on our SATA controller in the BIOS, then installed our OS. Next, we added our Corsair test SSD and ran a suite of storage tests. We then enabled AHCI, reinstalled the OS, re-added the Corsair Neutron test SSD, and re-ran all the tests.<em></em></p> <p><strong>The Results:</strong> We haven’t used IDE in a while, but we assumed it would allow our SSD to run at full speed even if it couldn’t NCQ or hot-swap anything. And we were wrong. Dead wrong. Performance with the SATA controller set to IDE was abysmal, plain and simple. <em></em></p> <div class="module orange-module article-module"><strong><span class="module-name">Benchmarks</span></strong></div> <div class="spec-table orange"> <table style="width: 620px; height: 267px;" border="0"> <thead> <tr style="text-align: left;"> <th class="head-empty"> </th> <th class="head-light"><span style="font-family: times new roman,times;">Corsair Neutron GTX IDE</span></th> <th><span style="font-family: times new roman,times;">Corsair Neutron GTX AHCI</span></th> </tr> </thead> <tbody> <tr> <td class="item"><strong>CrystalDiskMark</strong></td> <td class="item-dark">&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td><strong>Avg. Sustained Read (MB/s)</strong></td> <td>224</td> <td><strong>443</strong></td> </tr> <tr> <td class="item"><strong>Avg. 
Sustained Write (MB/s)<br /></strong></td> <td class="item-dark">386</td> <td><strong>479</strong></td> </tr> <tr> <td><strong>AS SSD - Compressed Data</strong></td> <td>&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td><strong>Avg. Sustained Read (MB/s)</strong></td> <td>210</td> <td><strong>514</strong></td> </tr> <tr> <td><strong>Avg. Sustained Write (MB/s)</strong></td> <td>386</td> <td><strong>479</strong></td> </tr> <tr> <td><strong>ATTO</strong></td> <td>&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td><strong>64KB File Read (MB/s, 4QD)</strong></td> <td>151</td> <td><strong>351</strong></td> </tr> <tr> <td><strong>64KB File Write (MB/s, 4QD)</strong></td> <td>354</td> <td><strong>485</strong></td> </tr> <tr> <td><strong>Iometer</strong></td> <td>&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td><strong>4KB Random Write 32QD <br />(IOPS)</strong></td> <td>19,943</td> <td><strong>64,688</strong></td> </tr> <tr> <td><strong>PCMark Vantage x64 </strong></td> <td>6,252</td> <td><strong>41,787</strong></td> </tr> </tbody> </table> </div> <p><em>Best scores are bolded. All tests conducted on our hard drive test bench, which consists of a Gigabyte Z77X-UP4 motherboard, Intel Core i5-3470 3.2GHz CPU, 8GB of RAM, Intel 520 Series SSD, and a Cooler Master 450W power supply.</em></p> <p><em>Click the next page to read about SSD RAID vs. a single SSD!</em></p> <p><em></em></p> <hr /> <p>&nbsp;</p> <h3>SSD RAID vs. Single SSD</h3> <p>This test is somewhat analogous to the GPU comparison, as most people would assume that two small-capacity SSDs in RAID 0 would be able to outperform a single 256GB SSD. The little SSDs have a performance penalty out of the gate, though, as SSD performance usually improves as capacity increases because the controller is able to grab more data given the higher-capacity NAND dies—just like higher-density platters increase hard drive performance. 
This is not a universal truth, however; whether performance scales with an SSD’s capacity depends on the drive’s firmware, NAND flash, and other factors. In general, though, the higher the capacity of a drive, the better its performance. The question then is: Is the performance advantage of the single large drive enough to outpace two little drives in RAID 0?</p> <p>Before we jump into the numbers, we have to say a few things about SSD RAID. The first is that with the advent of SSDs, RAID setups are not quite as common as they were in the HDD days, at least when it comes to what we’re seeing from boutique system builders. The main reason is that it’s really not that necessary, since a stand-alone SSD is already extremely fast. Adding more speed to an already-fast equation isn’t a big priority for a lot of home users (this is not necessarily our audience, mind you). Even more importantly, the biggest single issue with SSD RAID is that the operating system is unable to pass the Trim command to the RAID controller in most configurations (Intel 7 and 8 series chipsets excluded). That means the OS can’t tell the drive how to keep itself optimized, which can degrade performance of the array in the long run, making the entire operation pointless. Now, it’s true that the drive’s controller will perform “routine garbage collection,” but how that differs from Trim is uncertain, and whether it’s able to manage the drive equally well is also unknown. Still, the lack of Trim support on RAID 0 is a scary thing for a lot of people, so it’s one of the reasons SSD RAID often gets avoided. Personally, we’ve never seen it cause any problems, so we are fine with it. We even ran it in our Dream Machine 2013, and it rocked the Labizzle. 
So, even though people will say SSD RAID is bad because there’s no Trim support, we’ve never been able to verify exactly what that “bad” means long-term.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/reviews-10645_small_0.jpg"><img src="/files/u152332/reviews-10645_small.jpg" alt="It’s David and Goliath all over again, as two puny SSDs take on a bigger, badder drive. " title="SSDs" width="620" height="398" /></a></p> <p style="text-align: center;"><strong>It’s David and Goliath all over again, as two puny SSDs take on a bigger, badder drive. </strong></p> <p><strong>The Test:</strong> We plugged in two Corsair Neutron SSDs, set the SATA controller to RAID, created our array with a 64K stripe size, and then ran all of our tests off an Intel 520 SSD boot drive. We used the same protocol for the single drive.</p> <p><strong>The Results:</strong> This test shows a pretty clear advantage for the RAIDed SSDs, as they were faster in six out of nine tests. That’s not surprising, however, as RAID 0 has always been able to benchmark well. That said, the single 256GB Corsair Neutron drive came damned close to the RAID in several tests, including CrystalDiskMark, ATTO at four queue depth, and AS SSD. It’s not completely an open-and-shut case, though, because the RAID scored poorly in the PCMark Vantage “real-world” benchmark, with just one-third of the score of the single drive. That’s cause for concern, but with these scripted tests it can be tough to tell exactly where things went wrong, since they just run and then spit out a score. Also, the big advantage of RAID is that it boosts sequential-read and -write speeds, since you have two drives working in parallel (conversely, you typically won’t see a big boost for the small random writes made by the OS). Yet the SSDs in RAID were actually slower than the single SSD in our Sony Vegas “real-world” 20GB file encode test, which is where they should have had a sizable advantage. 
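To see why RAID 0 helps sequential transfers but not small random writes, here's a minimal sketch of how striping maps logical offsets onto two drives (a toy model using a 64K stripe size like ours; not how any real controller is implemented):

```python
STRIPE_KB = 64   # stripe size, matching the 64K stripes in our array
DRIVES = 2       # two-drive RAID 0

def locate(logical_kb):
    """Map a logical offset (KB) to (drive index, offset on that drive in KB)."""
    stripe = logical_kb // STRIPE_KB            # which stripe holds this offset
    drive = stripe % DRIVES                     # stripes alternate across drives
    offset_on_drive = (stripe // DRIVES) * STRIPE_KB + logical_kb % STRIPE_KB
    return drive, offset_on_drive

# A 256KB sequential read touches four stripes that alternate between the two
# drives, so both can stream in parallel -- the sequential-speed win above.
print([locate(kb)[0] for kb in range(0, 256, 64)])   # [0, 1, 0, 1]

# A small 4KB write lands entirely inside one stripe on one drive, so random
# OS writes get little help from the second SSD.
print(locate(4))   # (0, 4)
```

That asymmetry is why RAID 0 shines in sustained-transfer benchmarks while "real-world" scripted tests full of small accesses can look much less flattering.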
For now, we’ll say this much: The RAID numbers look good, but more “real-world” investigation is required before we can tell you one is better than the other.</p> <div class="module orange-module article-module"><strong><span class="module-name">Benchmarks</span></strong></div> <div class="spec-table orange"> <table style="width: 620px; height: 267px;" border="0"> <thead> <tr style="text-align: left;"> <th class="head-empty"> </th> <th class="head-light"><span style="font-family: times new roman,times;">1x Corsair Neutron 256GB</span></th> <th><span style="font-family: times new roman,times;">2x Corsair Neutron 128GB RAID 0 </span></th> </tr> </thead> <tbody> <tr> <td class="item"><strong>CrystalDiskMark</strong></td> <td class="item-dark">&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td><strong>Avg. Sustained Read (MB/s)</strong></td> <td>512</td> <td><strong>593</strong></td> </tr> <tr> <td class="item"><strong>Avg. Sustained Write (MB/s)<br /></strong></td> <td class="item-dark">436</td> <td><strong>487</strong></td> </tr> <tr> <td><strong>AS SSD - Compressed Data</strong></td> <td>&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td><strong>Avg. Sustained Read (MB/s)</strong></td> <td>506</td> <td><strong>647</strong></td> </tr> <tr> <td><strong>Avg. 
Sustained Write (MB/s)</strong></td> <td>318</td> <td><strong>368</strong></td> </tr> <tr> <td><strong>ATTO</strong></td> <td>&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td><strong>64KB File Read (MB/s, 4QD)</strong></td> <td>436</td> <td><strong>934</strong></td> </tr> <tr> <td><strong>64KB File Write (MB/s, 4QD)</strong></td> <td><strong>516</strong></td> <td>501</td> </tr> <tr> <td><strong>Iometer</strong></td> <td>&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td><strong>4KB Random Write 32QD <br />(IOPS)</strong></td> <td>70,083</td> <td><strong>88,341</strong></td> </tr> <tr> <td><strong>PCMark Vantage x64 <br /></strong></td> <td><strong>70,083</strong></td> <td>23,431</td> </tr> <tr> <td><strong>Sony Vegas Pro 9 Write (sec)</strong></td> <td><strong>343</strong></td> <td>429</td> </tr> </tbody> </table> </div> <p><em>Best scores are bolded. All tests conducted on our hard-drive test bench, which consists of a Gigabyte Z77X-UP4 motherboard, Intel Core i5-3470 3.2GHz CPU, 8GB of RAM, Intel 520 Series SSD, and a Cooler Master 450W power supply.</em></p> <h3>Benchmarking: Synthetic vs. Real-World<em>&nbsp;</em></h3> <p>There’s a tendency for testers to dismiss “synthetic” benchmarks as having no value whatsoever, but that attitude is misplaced. Synthetics got their bad name in the 1990s, when they were the only game in town for testing hardware. Hardware makers soon started to optimize for them, and on occasion, those actions would actually hurt performance in real games and applications.<em>&nbsp;</em></p> <p>The 1990s are long behind us, though, and benchmarks and the benchmarking community have matured to the point that synthetics can offer very useful metrics when measuring the performance of a single component or system. At the same time, real-world benchmarks aren’t untouchable. If a developer receives funding or engineering support from a hardware maker to optimize a game or app, does that really make it neutral? 
There is the argument that it doesn’t matter because if there’s “cheating” to improve performance, that only benefits the users. Except that it only benefits those using a certain piece of hardware.</p> <p>In the end, it’s probably more important to understand the nuances of each benchmark and how to apply them when testing hardware. SiSoft Sandra, for example, is a popular synthetic benchmark with a slew of tests for various components. We use it for memory bandwidth testing, for which it is invaluable—as long as the results are put in the right context. A doubling of main system memory bandwidth, for example, doesn’t mean you get a doubling of performance in games and apps. Of course, the same caveats apply to real-world benchmarks, too.</p> <h3>Avoid the Benchmarking Pitfalls<em></em></h3> <p>Even seasoned veterans are tripped up by benchmarking pitfalls, so beginners should be especially wary of making mistakes. Here are a few tips to help you on your own testing journey.<em></em></p> <p>Put away your jump-to-conclusions mat. If you set condition A and see a massive boost—or no difference at all when you were expecting one—don’t immediately attribute it to the hardware. Quite often, it’s the tester introducing errors into the test conditions that causes the result. Double-check your settings and re-run your tests and then look for feedback from others who have tested similar hardware to use as sanity-check numbers.<em></em></p> <p>When trying to compare one platform with another (certainly not ideal)—say, a GPU in system A against a GPU in system B—be especially wary of the differences that can result simply from using two different PCs, and try to make them as similar as possible. From drivers to BIOS to CPU and heatsink—everything should match. You may even want to put the same GPU in both systems to make sure the results are consistent.<em></em></p> <p>Use the right benchmark for the hardware. 
Running Cinebench 11.5—a CPU-centric test—to review memory, for example, would be odd. A better fit would be applications that are more memory-bandwidth sensitive, such as encoding, compression, synthetic RAM tests, or gaming.<em></em></p> <p>Be honest. Sometimes, when you shell out for new hardware, you want it to be faster because no one wants to pay through the nose to see no difference. Make sure your own feelings toward the hardware aren’t coloring the results.<em><br /></em></p> 2013 air cooling benchmark cpu graphics card Hardware Hardware liquid cooling maximum pc motherboard pc speed test performance ssd tests October 2013 Motherboards Features Mon, 10 Feb 2014 22:46:41 +0000 Maximum PC staff 26909 at MSI Announces Its Intel Bay Trail J1800I Motherboard <!--paging_filter--><h3><img src="/files/u160391/msimotherboard.jpg" width="250" height="155" style="float: right;" />Details on MSI's version of new J1800-based motherboards</h3> <p>While several manufacturers have already put forth announcements about new J1800-based motherboards, <strong>MSI</strong> is now throwing its hat into the ring with the J1800I, a Mini-ITX board packing an Intel Celeron J1800 CPU. </p> <p>MSI's version of the J1800 motherboard has two DDR3 SODIMM slots to accommodate up to 8 GB of memory, as well as one PCIe slot, two SATA2 ports, and USB 2.0 headers. 
The Celeron J1800 is a dual-core Bay Trail processor with a base frequency of 2.41GHz and 1MB of L2 cache.</p> <p>As for rear I/O connectivity, there's a USB 3.0 port, a DVI port, an HDMI port, and two USB 2.0 ports, as well as HD audio jacks and Gigabit Ethernet.&nbsp;</p> <p>This news doesn't come coupled with pricing information or any concrete details beyond specs, but rumors are swirling that the boards may retail for around $60 when they finally release.</p> celeron Hardware intel mini-itx motherboard msi News Mon, 10 Feb 2014 01:48:32 +0000 Brittany Vincent 27223 at CES 2014: MSI Suite Tour [Video] <!--paging_filter--><h3><img src="/files/u69/msi_mobo.jpg" alt="MSI Motherboards" title="MSI Motherboards" width="228" height="149" style="float: right;" />A look at MSI's upcoming Intel and AMD motherboards</h3> <p>MSI is in a tight race with ASRock to become the <a href="">third largest motherboard</a> maker behind heavyweights Asus and Gigabyte. ASRock took a slight lead in 2013, but looking ahead, MSI is expected to leapfrog into the No. 3 spot with 8 million motherboard shipments. Either way, system builders will have plenty of <strong>MSI motherboards</strong> to sift through in 2014, several of which we had a chance to glimpse at CES.</p> <p>Gordon captured on video several Intel motherboards aimed at gamers. Included in the lineup of upcoming mobos is a mini-ITX board running the Z87 chipset with a ton of SATA ports. Smaller boards are fast becoming popular, especially with Steam Machines gaining momentum. Here's a closer look:</p> <p><iframe src="//" width="620" height="349" frameborder="0"></iframe></p> <p>Prefer AMD? Don't worry, MSI has you covered. Like the Intel lineup, MSI is readying AMD boards of varying sizes and feature-sets, including a mini-ITX version of an FM2 board. 
More here:</p> <p><iframe src="//" width="620" height="349" frameborder="0"></iframe></p> <p><em>Follow Paul on <a href="" target="_blank">Google+</a>, <a href="!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="" target="_blank">Facebook</a></em></p> amd Build a PC ces2014 Hardware motherboard msi News Wed, 08 Jan 2014 19:35:22 +0000 Paul Lilly and Gordon Mah Ung 27037 at The Ultimate Computer Hardware Guide <!--paging_filter--><h3><img src="/files/u152332/20060523corp_a_small_2.jpg" alt="Your CPU choice should be based on your workload and not what you read about." width="280" height="187" style="float: right;" /></h3> <h3>Things you need to know to become a PC hardware expert</h3> <p>Knowledge is power, and when it comes to PCs and <strong>computer hardware</strong> that’s especially true, because only by knowing how your PC components’ specs actually affect performance can you get the maximum power you need for the type of computing you do—and avoid being seduced by features that sound impressive on the box but won’t do squat to improve your experience. Knowing your stuff has other benefits, too. An in-depth understanding of what makes all your parts tick enables you to better troubleshoot problems, upgrade in ways that make sense, and converse with other nerds in your own secret language. Continue reading to begin your crash course in PC spec-speak.</p> <h3>CPU</h3> <p><strong>Just how many cores and how much cache do you need? We’ll help you answer those questions and others with cool confidence</strong></p> <h4>Socket</h4> <p>There are two kinds of buyers: Those who will never upgrade a <a title="cpu" href="" target="_blank">CPU</a> and those who actively plan for it. For the former, even a CPU welded to the motherboard won’t matter, but upgraders who want to use a system for years need to pay attention to the socket, as it’s one of the primary factors limiting your upgrade options. 
On <a title="intel" href="" target="_blank">Intel</a>, there are three sockets to choose from: <a title="lga2011" href="" target="_blank">LGA2011</a>, <a title="1155" href="" target="_blank">LGA1155</a>, and the new <a title="lga1150" href="" target="_blank">LGA1150</a>. Of the three, LGA1155 has the least amount of life left in it, as it will be slowly phased out in favor of the new LGA1150 platform. We know from Intel roadmaps that LGA1150 and LGA2011 are good for at least another couple of years. On <a title="amd" href="" target="_blank">AMD</a>, <a title="AM3+" href="" target="_blank">AM3+</a> offers a superb assortment, from budget dual-cores all the way to eight-core chips, with the company’s new <a title="piledriver" href="" target="_blank">Piledriver</a> chip even slotting into this old socket. The company’s FM line isn’t quite as stable. FM1 didn’t go very far, but the company’s FM2 looks like it might have longer legs. The thing is, <a title="fm2" href="" target="_blank">FM2</a> processors—or rather, APUs—aren’t aimed at the type of user who upgrades every year. We suspect that most FM2 buyers will use the platform for a couple years and then buy a new system instead of upgrading. For long-haulers, we recommend AM3+, LGA2011, and LGA1150. If you don’t care about doing an upgrade, go with whatever CPU you want.<strong>&nbsp;</strong></p> <h4>Core Count<strong>&nbsp;</strong></h4> <p>Core count is the new clock speed. That’s because as consumers have been trained not to look at megahertz anymore as a defining factor, vendors have turned to core count as an emotional trigger. Two is better than one, four is better than two, and six is better than four. <strong>&nbsp;</strong></p> <p>Here’s the deal, though: More cores are indeed better—but only if you truly use them, and really only when compared within the same family of chips. For example, to assume that an eight-core AMD FX part is faster than a six-core Intel Core i7 part would be flat-out wrong. 
Likewise, to assume that a PC with a six-core Intel Core i7 will be faster at gaming than a quad-core Core i7 is also likely wrong. To make things more complicated, Intel uses a virtual CPU technology called <a title="intel hyper threading" href="" target="_blank">Hyper-Threading</a> to push its CPUs. Some chips have it, some don’t.<strong>&nbsp;</strong></p> <p>So, how do you figure out what you want? First, look at your workloads. If you’re primarily a gamer who also browses, edits photos, and does some word processing, we think the sweet spot is a quad-core chip. Those who encode video, model 3D, or use other multithreaded apps, or even many apps simultaneously, should consider getting as many cores as possible because you can never have enough for these workloads. A good bridge for folks who encode video only occasionally, though, is a quad-core chip with Hyper-Threading.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/20060523corp_a_small_0.jpg"><img src="/files/u152332/20060523corp_a_small.jpg" alt="Your CPU choice should be based on your workload and not what you read about." title="LGA1150" width="620" height="413" /></a></p> <p style="text-align: center;"><strong>Your CPU choice should be based on your workload and not what you read about.</strong></p> <h4>Clock Speed</h4> <p>Remember the Megahertz Myth? It’s what we alluded to above. It arose from the understanding that clock speed didn’t matter, because a 2GHz Pentium 4 was barely faster, if at all, than a 1.6GHz Athlon XP. Years later, that generally remains true. You really can’t say a 4.1GHz FX-8350 is going to smoke a 3.5GHz Core i7-3770K because in a hell of a lot of workloads the 3.5GHz Core i7 is going to dominate. Nevertheless, we have issues when someone dismisses megahertz outright as an important metric. We don’t think it’s handy when looking at AMD vs. Intel, but when you’re looking within the same family, it’s very telling. 
A 3.5GHz Intel chip will indeed be faster than a 2.8GHz Intel chip. The same applies among AMD chips. So, consider clock speeds wisely.</p> <h4>Cache</h4> <p>When vendors start looking for ways to separate your cash from your pocket, clock speed and core count are their first line of attack. If those features don’t get you, we’ve noticed that the amount of cache is the next spec dangled in your face. Choices these days run from 8MB to 3MB or less. First, you should know that the chips themselves are often the same. When validating chips, AMD and Intel weed out partially defective parts. If a chip has, say, 8MB of L2 cache and a bit of it is bad, it’s sold as a chip with 6MB of L2 cache, or 4MB of L2 cache. This isn’t always true, as some chips have the cache turned off or removed to save on building costs.</p> <p>Does cache matter in performance? Yes and no. Let’s just say that a large cache rarely hinders performance, but you quickly get to diminishing returns, so for many apps, a chip with 8MB of L2 could offer the same performance as one with 3MB of L2. We’ve seen cache matter most in some bandwidth-sensitive tasks such as media encoding or compression, but for the most part, don’t sweat the difference between a chip with 4MB of L2 vs. one with 3MB of L2.</p> <h4>Integrated Graphics</h4> <p>Integrated graphics are likely one of the biggest advances in CPUs in the last few years. Yes, for gamers, a discrete graphics card is going to be faster 105 percent of the time, but for budget machines, ultra-thin notebooks, and all-in-ones, integrated graphics are usually all you get, and there’s a world of difference between them. Generally, AMD’s integrated graphics chips lead the way over Intel’s older generation of <a title="ivy bridge" href="" target="_blank">Ivy Bridge</a> and <a title="sandy bridge" href="" target="_blank">Sandy Bridge</a> chips. It’s like, well, AMD is the Intel of integrated graphics and Intel is the AMD. 
Intel’s latest <a title="haswell review" href="" target="_blank">Haswell</a> chips make it far more interesting, though, as the graphics performance has increased greatly. Then again, AMD has recently released its new APUs with Radeon HD 7000 graphics. The specs that matter most on integrated graphics are the number of graphics execution units and the clock speed. More EUs mean better performance, as do higher clock speeds.</p> <h3>When to Run Aftermarket Cooling</h3> <p>Let’s get it out in the open: Stock CPU coolers really aren’t as bad as people make them out to be. Sure, we all scoff at them, but the truth is that Intel and AMD spend considerable money on the design and certify them to work with their CPUs in all types of environments. For the vast majority of people, the stock cooler is just fine.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/hyper_212_evo-01_small_2.jpg"><img src="/files/u152332/hyper_212_evo-01_small_1.jpg" alt="The Cooler Master Hyper 212 Evo is a low-cost, worthy upgrade over stock—if you need it." title="Cooler Master Hyper 212 " width="620" height="465" /></a></p> <p style="text-align: center;"><strong>The Cooler Master Hyper 212 Evo is a low-cost, worthy upgrade over stock—if you need it.</strong></p> <p>But you’re not the vast majority of people. Sadly, today, if you can even open up the case, you’re an enthusiast. Sure, there are applications for the stock cooler, such as an <a title="htpc" href="" target="_blank">HTPC</a> or a small box that won’t be overclocked, but we like to think of the stock cooler as the minimum spec you should run. It’s fine, but it can be greatly improved upon.</p> <p>Obviously, if you’re an overclocker, a beefier heatsink is a foregone conclusion, as heat is one of the worst enemies of a successful overclock. 
Swapping out the stock cooler for an aftermarket model is almost guaranteed to net higher or more stable overclocks than you can hit otherwise.</p> <p>Even if you don’t overclock, an aftermarket cooler can be a worthwhile addition. Since they can dissipate more heat than a stock cooler, and their fans are typically larger, the fan RPMs are usually lower, and thus quieter.</p> <p>Closed-loop liquid coolers are also a good option, as they require zero maintenance and the risk of a leak is extremely low. <a title="water cooling" href="" target="_blank">Liquid coolers</a> are also quite affordable today and easily outstrip the vast majority of air coolers. One thing you’ll need to keep in mind, though, is that closed-loop liquid coolers aren’t always the quietest option out there.</p> <p><em>Click the next page to get more info on motherboards.</em></p> <p>&nbsp;</p> <hr /> <p>&nbsp;</p> <h3>Motherboard</h3> <p><strong>Knowing your way around a motherboard is a distinguishing characteristic of a PC nerd. Let us help orient you </strong></p> <h4>Form Factor</h4> <p>The form factor of a motherboard is its physical dimensions. The most popular today is the 18-year-old <a title="atx" href="" target="_blank">ATX</a> form factor. The two other popular sizes are the smaller <a title="microatx" href="" target="_blank">microATX</a> and <a title="mini-itx" href="" target="_blank">Mini-ITX</a>. Intel tried and failed to replace ATX with <a title="btx" href="" target="_blank">BTX</a>. Two additional form factors are the wider Extended-ATX and XL-ATX. XL-ATX is not an official spec but generally denotes a longer board to support more expansion slots. For an enthusiast, ATX will cover about 90 percent of your needs. Besides offering the most flexibility in expansion, it’s also where you get the widest range of selection. You can get budget all the way to the kitchen sink in ATX. 
MicroATX is usually reserved for budget boards, but there are a few high-end boards in this form factor these days. Mini-ITX is exciting, but the limited board space makes for few high-end options in this mini size.</p> <h4>Socket</h4> <p>As we said in our CPU write-up, your motherboard’s socket dictates all that the board will ever be. If, for example, you buy a discontinued socket such as LGA1156, your choice of CPU is greatly limited. The most modern sockets today are LGA1155, LGA1150, and LGA2011 for Intel, and AM3+ and FM2 for AMD. For Intel, LGA2011 and LGA1150 have the longest legs. Though still usable, the sun is now setting on LGA1155 boards. AMD is actively supporting AM3+ and FM2, but there is talk of a new socket to replace FM2.</p> <h4>Chipset</h4> <p>The chipset on a motherboard refers to the “core logic” and once entailed multiple chips doing several jobs. These days, the core-logic chipset is down to one or two chips, with much of the functionality moved into the CPU. Chipsets manage basic functions such as USB, PCIe, and SATA ports, and board makers throw on additional controllers to add even more functions. You should pay special attention to the chipset if you’re looking for certain functionality, some of which is only possible on newer chipsets. The <a title="p67" href="" target="_blank">P67</a> chipset, for example, did not support Intel’s SSD caching, but the <a title="z68" href="" target="_blank">Z68</a> did. Current high-end chipsets from Intel include the <a title="z77" href="" target="_blank">Z77</a>, <a title="z87" href="" target="_blank">Z87</a>, and <a title="x79" href="" target="_blank">X79</a>; from AMD you have the A85X, 990X, and 990FX.</p> <h4>SLI/CrossFire Support</h4> <p>The vast majority of gamers never run more than one video card, but it’s always nice to know you have the option. 
AMD’s multicard solution is <a title="crossfire" href="" target="_blank">CrossFire</a> for two cards, and CrossFireX for more than two. For its part, Nvidia has <a title="sli" href="" target="_blank">SLI</a> for two-card setups, tri-SLI for three cards, and four-way SLI for four cards. We won’t judge the relative merits of each system, as this isn’t the place for it. Most boards that offer one also offer the other, but don’t assume a CrossFire board will support SLI. Read the specs ahead of time if you plan to run multiple cards.</p> <h4>Ports</h4> <p>One of the main differences between a high-end board and a low-end board is the ports. High-end boards tend to have ports galore, with FireWire, additional USB 3.0, digital audio, eSATA, and Thunderbolt added on to convince you that board B is better than board C. How many ports, and what type, do you need? That is something only you can answer. If you still run an older DV cam that needs FireWire, having the port on the board for “free” is always nice. Thunderbolt is also an incredibly cool, forward-looking feature, but is very pricey. If you never use it, you will have paid for nothing. These days, we say eSATA and FireWire aren’t needed. What we want, mostly, is a ton of USB 3.0 ports. The ultimate board today might be one with nothing but USB 3.0 ports, if you ask us.</p> <h4>Slots</h4> <p>If you see a board with tons of those long PCIe slots, don’t assume they’re all hot. PCIe slots can be physically x16 in length (that means 16 lanes) but only x8 or x4 electrically (which means the data is limited to x8 or x4 bandwidth). Cheaper boards may even disable some onboard devices when run in multi-GPU modes, while pricier boards use additional chips to spread the available bandwidth around and keep the devices running. 
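</p> <p>If you want to put numbers on those lane counts, the published per-lane rates make the math easy: roughly 500MB/s for PCIe 2.0 and 985MB/s for PCIe 3.0 after encoding overhead. A quick sketch (the function name is ours):</p>

```python
# Rough one-way throughput per PCIe lane after encoding overhead, in MB/s.
PER_LANE_MBPS = {"2.0": 500, "3.0": 985}

def slot_bandwidth(gen, electrical_lanes):
    """Approximate one-way bandwidth of a PCIe slot in GB/s."""
    return PER_LANE_MBPS[gen] * electrical_lanes / 1000

# A physical x16 slot wired only x8 electrically moves half the data:
print(slot_bandwidth("2.0", 16))  # 8.0
print(slot_bandwidth("2.0", 8))   # 4.0
print(slot_bandwidth("3.0", 16))  # 15.76
```

<p>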
AMD’s 990FX and Intel’s X79 don’t have the limited bandwidth of the Z77 or Z87 chipsets, so if you need lots of slots, you’ll want to opt for those chipsets. Unfortunately, Z77 and Z87 are where you find more PCIe 3.0 support. PCIe 3.0 doubles the effective bandwidth over PCIe 2.0, but it’s still not officially supported on X79, and only newer 990FX boards support it now. Confused? Our advice is that if you really need to run high-bandwidth add-in boards for video capture or RAID applications, ask the manufacturer what motherboards they have certified for it first.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/z87x-oc-rev1-0_small_0.jpg"><img src="/files/u152332/z87x-oc-rev1-0_small.jpg" alt="There are degrees of enthusiast computing and motherboards to accommodate all scenarios. " width="620" height="733" /></a></p> <p style="text-align: center;"><strong>There are degrees of enthusiast computing and motherboards to accommodate all scenarios. </strong></p> <h4>POST LED</h4> <p>This is a tiny segmented LED on the board that displays the POST code of the motherboard while booting. It may seem trivial, but POST LEDs are a godsend when things go sideways on a machine. If all other things were equal, we’d take a board with the POST LED over one without it.</p> <h4>Backup BIOS</h4> <p>A backup BIOS stores a duplicate BIOS on the motherboard that can be restored should the BIOS get corrupted. We think it’s a nice feature but a corrupt BIOS is pretty rare. Nevertheless, it’s probably better to have a backup BIOS and not need it than to need it and not have it.</p> <h4>Extra Features</h4> <p>Wireless, premium sound, fan controls, and headers galore are the special features board vendors use to hook you. You might dismiss them as unnecessary features, but so are the power windows and multi-speaker setup in your car. 
Certainly some extras aren’t needed, such as onboard Wi-Fi on a desktop box that will live on Ethernet, but fan control, such as Asus’s excellent FanXpert II, is worthwhile, as are premium audio circuits.</p> <p><em>Click the next page to get in-depth information on hard drives and SSDs!</em></p> <hr /> <p>&nbsp;</p> <h3>Budget vs. Premium: Is It Worth It?</h3> <p>In a given chipset family—say, Z77—it’s easy to find a motherboard costing $110 as well as one running $379. Both use the same chipset, so are they the same? It depends.</p> <p>If you intend to socket in a non-overclocked Core i7-3770K, run one GPU, and a sound card, you’d probably be hard-pressed to tell the difference, but don’t assume that premium boards are just a gimmick to rip you off. High-end motherboards aren’t just anodized a different color and slapped with a higher price. The $110 board will be pretty much a strippo option, with no multicard support, minimal ports and slots, and a design that’s not made for high overclocks. Yes, you might be able to overclock the budget board, but the voltage regulator modules and chipset cooling are likely to limit you. High-end overclocking boards are truly designed for the sport, with direct voltage readout hard points. And yes, fancy new technology such as Thunderbolt, additional USB 3.0, and SATA controllers cost more money. Even the software suite on the budget board will be pretty stripped down.</p> <p>Still, the truth is that most of us will neither be overclocking with liquid nitrogen nor going ultra-budget. That’s why board vendors offer a dizzying array of selections between the rock-bottom and high-end. 
We think the $175 range gets you a pretty decent board, generally.</p> <h4>SSDs</h4> <p><strong>SSDs have a lot of complicated technology inside their waifish 2.5-inch shells, so follow along as we demystify it for you</strong></p> <h4>Controller</h4> <p>The controller is the brains of the <a title="ssd" href="" target="_blank">SSD</a>, and what governs performance for the most part (along with the type of NAND flash used). The controller uses parallel channels to read and write data to the NAND, and also helps optimize the drive via the Trim command, as well as performing routine garbage collection. Though some companies might license a third-party controller, they always pair it with custom firmware of their own that defines the drive's performance, so two SSDs that use the same controller will still have varying levels of performance in different workload scenarios. While the SSD world used to be somewhat ruled by the LSI SandForce controller, those days have long passed, and we are now seeing the rise of in-house controllers by companies like Samsung.</p> <h4>Over-provisioning</h4> <p>Over-provisioning is a spec you will rarely see explicitly mentioned on a product box, but its presence, or lack thereof, is evident in a drive’s capacity. Over-provisioning is simply space taken out of the drive's general capacity and reserved for drive maintenance. So if you see a drive with 256GB of capacity, there’s no space reserved, but a drive listed as 240GB has 16GB reserved for over-provisioning. In exchange for that space you get increased endurance, as it gives the SSD controller a lot of NAND flash to use for drive optimization and management. 
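</p> <p>The arithmetic is easy to check yourself; a quick sketch using the 256GB/240GB example (function name ours):</p>

```python
def overprovisioning(raw_gb, advertised_gb):
    """Return the reserved space and over-provisioning as a percent of raw NAND."""
    reserved = raw_gb - advertised_gb
    return reserved, 100 * reserved / raw_gb

reserved, pct = overprovisioning(256, 240)
print(f"{reserved}GB reserved ({pct:.2f}% of the raw NAND)")  # 16GB reserved (6.25% of the raw NAND)
```

<p>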
The provisioned NAND can be compared to a swap file used by a mechanical hard drive and operating system, in that it is space reserved to manage the files on the SSD.</p> <h4>NAND Flash</h4> <p>All SSDs use this type of memory, as it's non-volatile, meaning you can cut off power to it and the data remains in place (mid-data-transfer is another story, though). The opposite is <a title="dram" href="" target="_blank">DRAM</a>, which is volatile, so once you shut down your PC, its contents are lost. There are several manufacturers of NAND flash, including Intel/Micron (IMFT), Samsung, Toshiba, and SanDisk, and all the SSD vendors use them, so while a Samsung SSD obviously uses Samsung NAND, so does the new Seagate SSD, for example, since Seagate doesn't own a NAND fab. Corsair SSDs use Toshiba NAND, and so forth. There's no answer to the question of "who makes the best NAND?" as they all have varying performance characteristics, and it's typically the controller and its firmware that play the biggest role in determining a drive's performance. Good NAND with a crap controller equals crap, so keep that in mind when shopping for an SSD.</p> <h4>MLC, SLC, TLC NAND</h4> <p>All modern NAND flash is either SLC, MLC, or TLC, which stands for single-, multi-, and triple-level cell, and indicates how many bits each cell can hold at one time. The most secure, and precise, is SLC, which holds a single bit in each cell. Obviously, this is a bit inefficient, but also very accurate, and has high endurance, making SLC NAND ridiculously expensive, and not for consumers (it's for enterprise). Next up is MLC, which stands for multi-level cell, as each cell can hold two bits at a time. MLC is used on the majority of SSDs you can buy, as it strikes a fine balance between cost and capacity. TLC flash, which stands for triple-level cell, holds—you guessed it—three bits per cell, giving it the lowest endurance of any drive available, with the caveat that it still allows years of usage. 
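</p> <p>Put in numbers: each extra bit per cell doubles the number of voltage states the controller must tell apart, which is exactly why endurance drops as you climb the alphabet. A quick sketch:</p>

```python
# Bits per cell determine how many distinct voltage states each cell stores.
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    states = 2 ** bits
    print(f"{name}: {bits} bit(s)/cell -> {states} voltage states")
```

<p>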
Only the Samsung 840 uses TLC NAND flash; the rest of the consumer SSDs available today use MLC NAND.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/naked_ssd_take2_small_2.jpg"><img src="/files/u152332/naked_ssd_take2_small_1.jpg" alt="Here we see the main components of an SSD: NAND flash, controller chip, DRAM, printed circuit board, and SATA connectors. " title="NAND flash" width="620" height="770" /></a></p> <p style="text-align: center;"><strong>Here we see the main components of an SSD: NAND flash, controller chip, DRAM, printed circuit board, and SATA connectors. </strong></p> <h4>HDD</h4> <p><strong>Even though SSDs are the cool kids, we still need hard drives for our "multimedia" collections. Here are all the terms you need to know to sound like a pro</strong></p> <h4>Spindle Speed</h4> <p>Spindle speed is the rotational velocity of the platters expressed in rotations per minute (rpm). Faster spinning platters result in lower seek times and improved performance. The most common desktop drives spin at 7,200rpm, but there are also 5,400–5,900rpm desktop drives, which we recommend only for backup purposes given their reduced performance relative to a 7,200rpm drive. There are 10Krpm drives as well, but the rise of much-faster SSDs has largely made them irrelevant in today's market.</p> <h4>Platters</h4> <p>Every hard drive stores data on platters made of aluminum alloy or glass, with data retained on both sides and accessed by read and write heads hovering on each side of the platter. The number of platters is something to pay attention to when shopping for a drive, as it dictates areal density, or how much data is stored per platter. Right now, 1TB is the maximum platter density available, and it offers improved performance compared to a 750GB platter, all other things being equal. 
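</p> <p>Spindle speed, meanwhile, sets a hard floor on access time: on average, the drive waits half a revolution before the sector it wants swings under the head. A quick sketch of that average rotational latency:</p>

```python
def avg_rotational_latency_ms(rpm):
    """Average wait for a sector: half a revolution, in milliseconds."""
    ms_per_revolution = 60_000 / rpm
    return ms_per_revolution / 2

for rpm in (5400, 7200, 10000):
    print(f"{rpm}rpm: {avg_rotational_latency_ms(rpm):.2f}ms")
# 5400rpm: 5.56ms, 7200rpm: 4.17ms, 10000rpm: 3.00ms
```

<p>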
Since the platter has more data on it, the read/write heads have to move around less to pick up data, so we've seen significantly improved performance from drives bearing these super-dense platters.</p> <h4>Cache Size</h4> <p>All hard drives have a bit of onboard memory referred to as cache, and the market has mostly settled on 64MB being the standard. The cache is used as a buffer, in that data is sent to it before being written to the disk. Whatever was last written or read will usually still be in the buffer should you need it again, so it improves performance by making recently accessed data available instantly. This practice of fetching data from the onboard cache is referred to as "bursting" in benchmarks, but in practice it rarely happens, so don't use this number to determine a drive's overall performance. Spindle speed is a much better indicator of hard drive performance compared to cache size.</p> <h4>NCQ</h4> <p>This stands for Native Command Queuing and is technology that helps the drive prioritize data requests so that it can process them in an efficient fashion. For example, if a drive receives a command to go all the way out to the outer perimeter to fetch some data, but then receives a request for data that is closer to its current location, with NCQ enabled, it would fetch the data in the order of closest bits to furthest bits, resulting in faster data transfer. A drive without NCQ would simply fulfill the requests in the order received, which is highly inefficient. 
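</p> <p>A toy model of that reordering, assuming a greedy nearest-track-first pass (real drives also weigh rotational position, and the names here are ours):</p>

```python
def service_order(head_pos, requests, ncq=True):
    """Return the order tracks are visited. Without NCQ it's strict FIFO;
    with NCQ, a greedy nearest-first pass from the current head position."""
    if not ncq:
        return list(requests)
    pending, order = list(requests), []
    while pending:
        nearest = min(pending, key=lambda track: abs(track - head_pos))
        pending.remove(nearest)
        order.append(nearest)
        head_pos = nearest
    return order

reqs = [900, 50, 870, 60]  # track numbers, in arrival order
print(service_order(100, reqs, ncq=False))  # [900, 50, 870, 60]
print(service_order(100, reqs, ncq=True))   # [60, 50, 870, 900]
```

<p>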
NCQ only shows significant gains in a heavily queued workload, however, which typically doesn't exist for home users, but does occur on a web server or some other high-traffic application.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/barracuda_dyn_hi-res_small_0.jpg"><img src="/files/u152332/barracuda_dyn_hi-res_small.jpg" alt="A hard drive uses magnets (lower left) to move the read/write heads (the pointy things), which are both above and below the data platters." title="HDD" width="620" height="665" /></a></p> <p style="text-align: center;"><strong>A hard drive uses magnets (lower left) to move the read/write heads (the pointy things), which are both above and below the data platters.</strong></p> <h3>The Scoop on SSD Caching</h3> <p>We all want the speed of an SSD but with the price and capacity of a mechanical hard drive. Obviously that’s not possible. However, there is a middle ground, which is using a small SSD as a caching drive for a mechanical hard drive. This allows your most frequently used files (including your OS and boot files) to be cached to the SSD for fast access to them, while less frequently accessed files reside on your hard drive. This actually works quite well in our testing, and to set one up you’ll need to either run it off your existing motherboard with any SSD you have lying around, or buy a caching SSD and use the included software to set up the caching array. For Intel users, Z68 and Z77 boards include caching support natively via Intel Smart Response Technology, but users of other chipsets will need to BYO to the party.</p> <p><em>Click the next page to get the inside scoop on graphics cards!</em></p> <p>&nbsp;</p> <hr /> <p>&nbsp;</p> <h3>GPUs</h3> <p><strong>The world of GPUs can be a scary place fraught with big words, bigger numbers, and lots of confusing nomenclature. 
Allow us to un-confuse things a bit for you</strong></p> <h4>Memory</h4> <p>The amount of memory a GPU has is also called its frame buffer (see below). Most cards these days come with 1GB to 3GB of memory, but some high-end cards like the <a title="GTX titan" href="" target="_blank">GTX Titan</a> have 6GB of memory. In the simplest terms, more memory lets you run higher resolutions, but read the Frame Buffer section below for more info.</p> <h4>Cores/Processors</h4> <p>GPUs nowadays include compartmentalized subsystems that have their own processing cores, called Stream Processors by AMD, and <a title="cuda" href="" target="_blank">CUDA</a> cores by Nvidia, but both perform the same task. Unlike a CPU, which is designed to handle a wide array of tasks, but only able to execute a handful of threads in parallel at a high clock speed, GPU cores are massively parallel and designed to handle specific tasks such as shader calculations. They can also be used for compute operations, but typically these features are heavily neutered in gaming cards, as the manufacturers want their most demanding clients paying top dollar for expensive workstation cards that offer full support for compute functionality. Since AMD and Nvidia's processor cores are built on different architectures, it's impossible to make direct comparisons between them, so just because one GPU has more cores than another does not automatically make it better.</p> <h4>Memory Bus</h4> <p>The memory bus is a crucial pathway between the GPU itself and the card's onboard frame buffer, or memory. The width of the bus and the speed of the memory itself combine to give you a set amount of bandwidth, which equals how much data can be transferred across the bus, usually measured in gigabytes per second. In this respect, as with most things PC, more is better. 
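</p> <p>The bandwidth arithmetic is simply effective memory clock times bus width in bytes. A quick sketch (function name ours; quoted specs often come out a touch higher because actual clocks sit slightly above a round 6GHz):</p>

```python
def memory_bandwidth_gbs(effective_clock_ghz, bus_width_bits):
    """Peak memory bandwidth in GB/s: effective clock (GHz) x bus width (bytes)."""
    return effective_clock_ghz * (bus_width_bits / 8)

print(memory_bandwidth_gbs(6.0, 256))  # 192.0 for a 256-bit card
print(memory_bandwidth_gbs(6.0, 384))  # 288.0 for a 384-bit card
print(memory_bandwidth_gbs(6.0, 128))  # 96.0 for a budget 128-bit card
```

<p>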
As an example, a GTX 680 with its 6GHz memory (1,500MHz quad-pumped) and 256-bit interface is capable of transferring 192.2GB of data per second, whereas the GTX Titan with the same 6GHz memory but a wider 384-bit interface is capable of transferring 288.4GB per second. Since most modern gaming boards now use 6GHz memory, the width of the interface is the only spec that ever changes, and the wider the better. Lower-end cards like the HD 7790, for example, have a 128-bit memory bus, so as you spend more money you'll find cards with wider buses.</p> <h4>GPU Boost</h4> <p>This technology is available in high-end GPUs, and it allows the GPU to dynamically overclock itself when under load for increased performance. GPUs without this technology are locked at one core clock speed all the time.</p> <h4>Frame Buffer</h4> <p>The frame buffer is composed of DDR memory and is where all the computations are performed to the images before they are output to your display, so you'll need a bigger buffer to run higher resolutions, as the two are directly related to one another. Put simply, if you want to run higher resolutions—as in fill your screen with more pixels—you will need a frame buffer large enough to accommodate all those pixels. The same principle applies if you are running a standard resolution such as 1080p but want to enable super-sampling AA (see below): Since the scene is actually being rendered at a higher resolution and then down-sampled, you'll need a larger frame buffer to handle that higher internal resolution. In general, a 1GB or 2GB buffer is fine for 1080p, but you will need 2GB or 3GB for 2560x1600 at decent frame rates. This is why the GTX Titan has 6GB of memory, as it’s designed to run at the absolute highest resolutions possible, including across three displays at once. 
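</p> <p>You can estimate the raw pixel load yourself. Note this rough sketch counts only a single 32-bit render target; textures, geometry, and intermediate buffers also live in the frame buffer and usually dominate, which is why real cards carry gigabytes:</p>

```python
def render_target_mb(width, height, ss_aa=1, bytes_per_pixel=4):
    """Rough size of one 32-bit render target, with optional super-sampling factor."""
    return width * height * ss_aa * bytes_per_pixel / 1024**2

print(f"{render_target_mb(1920, 1080):.1f}MB")           # one 1080p target
print(f"{render_target_mb(2560, 1600):.1f}MB")           # one 2560x1600 target
print(f"{render_target_mb(1920, 1080, ss_aa=4):.1f}MB")  # 1080p with 4X super-sampling
```

<p>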
Most midrange cards now have 2GB, with 3GB and 4GB frame buffers now commonplace for high-end GPUs.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/geforcegtx_titan_front1_small_0.jpg"><img src="/files/u152332/geforcegtx_titan_front1_small.jpg" alt="High resolutions require a lot of RAM, which is embedded in the area around the GPU just like on this 6GB GTX Titan." width="620" height="306" /></a></p> <p style="text-align: center;"><strong>High resolutions require a lot of RAM, which is embedded in the area around the GPU just like on this 6GB GTX Titan.</strong></p> <h4>Power Requirements</h4> <p>All modern GPUs use PCI Express power connectors, either of the 6-pin or 8-pin variety. Small cards require one 6-pin connector, bigger cards require two 6-pin, and the top-shelf cards require one 8-pin and one 6-pin. Flagship boards like the GTX 690 and HD 7990 need two 8-pin connectors. Most high-end cards will draw between 100–200W of power under load, so you'll need around a 500–650W PSU for your entire system. Always give yourself somewhat of a buffer, so when a manufacturer says a 550W PSU is required, go for 650W.</p> <h4>Display Connectors</h4> <p>These are what connect your GPU to your display, the most common being DVI, which comes in both single-link and dual-link. Dual-link is needed for resolutions up to 2560x1600, while single-link is fine for up to 1,200 pixels vertically. DisplayPort can go up to 2560x1600, as well. HDMI is another connector you will see: versions 1.0–1.2 support 1080p, 1.3 supports 2560x1600, while 1.4 supports 4K.</p> <h4>PCI Express 3.0</h4> <p>The latest generation of graphics cards from AMD and Nvidia are all PCIe 3.0, which theoretically allows for more bandwidth across the bus compared to PCIe 2.0, but actual in-game improvement will be slim-to-none in most cases, as PCIe 2.0 was never saturated to begin with. 
Your CPU and motherboard must also support PCIe 3.0; most Ivy Bridge and older boards do not support it in the chipset, though the CPU may still supply the required PCIe 3.0 lanes to the graphics slots. In general, every GPU has PCIe 3.0 these days, but if your motherboard only supports version 2.0 you will not suffer a performance hit.</p> <h4>Cooling</h4> <p>GPU coolers fall into several different categories, including blower, centralized, and water-cooled. The blower type is seen on most "reference" designs, which is what AMD and Nvidia provide to their add-in board partners, typically as the most cost-effective solution. It sucks air in from the front of the chassis, then blows it along a heatsink through the back of the card to be exhausted out the rear of your case. Centralized coolers have one or two fans in the middle that suck air in from anywhere around the card and exhaust it into the same region, creating a pocket of warm air below the card. Water-cooled cards are very rare, of course; they use liquid to absorb heat from the GPU and carry it to a radiator, which is cooled by a fan. Water cooling is usually the most effective (and quiet) way to cool a hot PC component, but its cost and complexity make it less common.</p> <h4>PhysX</h4> <p>This is Nvidia technology baked into its last few generations of GPUs that allows for hardware-based rendering of physics in games that support it, most notably Borderlands 2, so instead of just a regular explosion, you will see an explosion with particles and volumetric fog and smoke. Typically, AMD card owners will see the <a title="physx" href="" target="_blank">PhysX</a> option grayed out in the menus, but the games still look great, so we would not deem this technology a reason to go with Nvidia over AMD at this point in time.</p> <h3>Antialiasing Explained</h3> <p>Different GPUs offer different types of antialiasing (AA), which is the smoothing out of jaggies that appear on edges of surfaces in games. 
Let's look at the most common types:</p> <p><strong>Full Scene AA (FSAA, or AA):</strong></p> <p>The most basic type of AA, this is sometimes called super-sampling. It involves rendering a scene at higher resolutions and then down-sampling the final image for a smoother transition between pixels, which appears as softer edges on your screen. If you run 2X AA, the scene will be calculated at double the resolution, and 4X AA renders it at four times the resolution, hence a massive performance hit.</p> <p><strong>Multi-Sample AA (MSAA):</strong></p> <p>This is a more efficient form of FSAA, even though scenes are still rendered at higher resolutions, then down-sampled. It achieves this efficiency by only super-sampling pixels that are along edges; by sampling fewer pixels, you don't see as much of a hit as with FSAA.</p> <p><strong>Fast Approximate AA (FXAA):</strong></p> <p>This is a shader-based Nvidia creation designed to allow for decent AA with very little to no performance hit. It achieves this by smoothing every pixel onscreen, including those born from pixel shaders, which isn't possible with MSAA.</p> <p><strong>TXAA:</strong></p> <p>This is specific to Kepler GPUs and combines MSAA with post-processing to achieve higher-quality antialiasing, but it's not as efficient as FXAA.</p> <p><strong>Morphological Antialiasing (MLAA):</strong></p> <p>This is AMD technology that uses GPU-accelerated compute functionality to apply AA as a post-processing effect as opposed to the super-sampling method.</p> <p><em>Click the next page to learn more about wi-fi technology, RAM, and PSUs.&nbsp;</em></p> <hr /> <p>&nbsp;</p> <h3>Wi-Fi Router</h3> <p><strong>Though the basic functionality of Wi-Fi routers has remained relatively unchanged since the olden days, new features have been added that help boost performance and allow for easier management</strong></p> <h4>Band</h4> <p>The band that a router operates on is key to determining how much traffic you will have to 
compete with. You would never want to hop on a congested freeway every day, and the same logic applies here. Currently there are two bands in use: 2.4GHz and 5GHz. Everyone and their nana is on 2.4GHz, including people nuking pizzas in the microwave, helicopter parents monitoring their baby via remote radios, and all the people surfing the Internet in your vicinity, making it a crowded band, to say the least. However, within the 2.4GHz band you still have 11 channels to choose from, which is how everyone is able to surf this band without issues (for the most part). But if everyone is using the same channel, you will see your bandwidth decrease. On the other hand, 5GHz is a no-man's-land at this time, so routers that can operate on it cost a pretty penny since it's the equivalent of using the diamond lane, and a great way to make sure your bandwidth remains unmolested. <strong><br /></strong></p> <h4>MIMO</h4> <p>This stands for multiple-input, multiple-output and it's the use of multiple transmitters and receivers to send/receive a Wi-Fi signal in order to improve performance, sort of like RAID for storage devices but with Wi-Fi. These devices are able to split a signal into several pieces and send it via multiple radio channels at once. This improves performance in a couple of ways. When only one signal is being sent, it has to bounce around before ending up at the receiver, and performance is degraded. When several signals are sent at the same time, however, spectral efficiency is improved as there is a greater chance of one hitting the receiver with minimal interference; it also improves performance with multiple streams of data being carried to the receiver at once.</p> <h4>Channel Bonding</h4> <p>Channel bonding is something that’s done by the router and the network adapter whereby parallel channels of data are "bonded" together much like stripes of data in a RAID. 
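</p> <p>Peak link rates scale with both channel width and spatial streams; here's a rough sketch using the published short-guard-interval per-stream PHY rates (function and table names are ours):</p>

```python
# Approximate peak PHY rate per spatial stream, in Mb/s.
PER_STREAM_MBPS = {
    ("802.11n", 20): 72.2,
    ("802.11n", 40): 150.0,
    ("802.11ac", 80): 433.3,
}

def peak_rate(standard, channel_mhz, streams):
    """Peak PHY rate: per-stream rate for that channel width times stream count."""
    return PER_STREAM_MBPS[(standard, channel_mhz)] * streams

print(f"{peak_rate('802.11n', 40, 3):.0f}Mb/s")   # 450Mb/s -- a three-stream n router
print(f"{peak_rate('802.11ac', 80, 3):.0f}Mb/s")  # 1300Mb/s, marketed as 1.3Gb/s
```

<p>Channel bonding supplies the width half of that multiplication. 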
This technology is most prevalent in 802.11n networks, where channel bonding is required for a user to utilize the full amount of bandwidth available in the specification. The downside to channel bonding is that it increases the risk of interference from nearby networks, which can reduce speeds. Since each channel is 20MHz, "bonded mode" operates at 40MHz, so check your settings to see if you can enable this.</p> <h4>802.11 Standards</h4> <p>Every router adheres to a specific 802.11 standard, which governs its overall performance and features. In the old days, there was 802.11a/b, then 802.11g, then 802.11n, which is the most widespread specification in use today since it's been around for a few years and is relatively fast. Waiting in the wings is 802.11ac, which by default broadcasts on the uncongested 5GHz band, but is also backward compatible with 2.4GHz. Whereas 802.11g had a peak throughput of 54Mb/s, 802.11n has a peak of roughly 450Mb/s, and 802.11ac nearly triples that to an unholy 1.3Gb/s. It achieves this speed increase by supporting up to eight spatial streams compared to 802.11n's four, and through increased channel width, using 80MHz and an optional 160MHz channel.</p> <h4>Quality of Service (QoS)</h4> <p>QoS is a common feature on today’s routers, and it lets you dictate which programs get priority when it comes to network bandwidth. You could theoretically slow down uTorrent while giving Netflix and Skype or Battlefield 3 more bandwidth. One crucial point is that the QoS setting is most important for outgoing traffic such as torrents, since incoming traffic is usually already prioritized by your ISP.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/asus_rtn66u_small_0.jpg"><img src="/files/u152332/asus_rtn66u_small.jpg" alt="High-end 802.11n routers are able to broadcast dual networks on both 2.4GHz and 5GHz bands, though the new 802.11ac standard uses the 5GHz band by default." 
title="Wi-Fi Router" width="620" height="523" /></a></p> <p style="text-align: center;"><strong>High-end 802.11n routers are able to broadcast dual networks on both 2.4GHz and 5GHz bands, though the new 802.11ac standard uses the 5GHz band by default.&nbsp; </strong></p> <h4 style="text-align: left;">RAM<strong>&nbsp;</strong></h4> <p style="text-align: left;"><strong>System RAM, or memory, seems like such a basic thing, but there’s still much to know about it</strong></p> <h4 style="text-align: left;">Clock Speed<strong>&nbsp;</strong></h4> <p style="text-align: left;">The clock speed of RAM is usually expressed in megahertz, so DDR3/1866 runs at 1,866MHz, at a certain latency timing. The only problem is that modern CPUs pack so much cache and are so intelligent in managing data that very high-clocked RAM rarely impacts overall performance. Going from, say, DDR3/1600 to DDR3/1866 isn’t going to net you very much at all. Only certain bandwidth-intensive applications such as video encoding can benefit from higher-clocked RAM. The sweet spot for most users is 1,600 or 1,866. The exception to this is with integrated graphics. If the box will be running integrated graphics, reach for the highest-clocked RAM the board will support and you will see a direct benefit in most games. <strong>&nbsp;</strong></p> <h4 style="text-align: left;">Channels</h4> <p style="text-align: left;">Modern CPUs support everything from single-channel to quad-channel RAM. There isn’t really a difference between a dual-channel kit and a quad-channel kit except that the vendor has done the work to match them up. You can run, for example, two dual-channel kits just fine. 
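The arithmetic behind channel count is straightforward: peak bandwidth is the effective transfer rate times 8 bytes per 64-bit channel, times the number of channels. A quick sketch of the theoretical peaks (real-world throughput always runs lower):

```python
# Theoretical peak DDR bandwidth: rate (MT/s) x 8 bytes per 64-bit channel
# x number of channels. These are spec ceilings, not real-world numbers.
def peak_bandwidth_gbs(transfers_mts, channels, bus_bytes=8):
    """Peak bandwidth in GB/s. transfers_mts is the effective rate,
    e.g. 1866 for DDR3/1866; each 64-bit channel moves 8 bytes per transfer."""
    return transfers_mts * bus_bytes * channels / 1000

print(peak_bandwidth_gbs(1866, channels=2))  # dual-channel DDR3/1866, ~29.9 GB/s
print(peak_bandwidth_gbs(1866, channels=4))  # quad-channel doubles it, ~59.7 GB/s
```

As the numbers show, going from dual- to quad-channel doubles peak bandwidth on paper, which is why matched kits are sold by channel count rather than by any difference in the sticks themselves.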
The only time you may want a factory-matched kit is if you are running the maximum amount of RAM or at a very high clock speed.</p> <h4 style="text-align: left;">Voltage</h4> <p style="text-align: left;">Voltage isn’t a prominent marketing spec for RAM but it’s worth paying attention to, as many newer CPUs with integrated memory controllers need lower-voltage RAM to operate at high frequency. Older DDR3, which may have been rated to run at high frequencies, could need higher voltage than newer CPUs are capable of supporting.</p> <h4 style="text-align: left;">Heatspreaders</h4> <p style="text-align: left;">Heat is bad for RAM, but we’ve never been able to get any vendor to tell us at what temperature failures are induced. Unless you’re into extreme overclocking, if you have good airflow in your case, you’re generally good. We’ve come to feel that heatspreaders, for the most part, are like hubcaps. They may not do much, but who the hell wants to drive a car with all four hubcaps missing?</p> <h4 style="text-align: left;">Capacity, Registered DIMMs, and Error Correction</h4> <p style="text-align: left;">It’s pretty easy to understand capacity on RAM—16GB is more than 8GB and 4GB is more than 2GB. With unbuffered, nonregistered RAM, the highest capacity you can get to run with a consumer CPU is 8GB modules. Registered DIMMs, or buffered DIMMs, carry extra chips, or “buffers,” on the module to help take some of the electrical load off the memory controller. This is useful when running servers or workstations that pack in a buttload of RAM. ECC RAM refers to error-correcting code and adds an additional RAM chip so that memory errors that can’t be tolerated in certain high-precision workloads can be detected and corrected. If this sounds like something you want, make sure your CPU supports it. Intel usually disables ECC on its consumer CPUs, even those based on the commercial ones. AMD, on the other hand, doesn’t.
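Real ECC DIMMs implement a SECDED code across 64 data bits in hardware, but the principle can be sketched with a toy Hamming(7,4) code: three parity bits pinpoint any single flipped bit so it can be corrected on the fly.

```python
# Toy single-bit error correction in the spirit of ECC RAM. Real ECC DIMMs
# use a SECDED code over 64 data bits; this minimal Hamming(7,4) sketch
# protects 4 data bits with 3 parity bits.
def hamming74_encode(data_bits):
    d1, d2, d3, d4 = data_bits
    p1 = d1 ^ d2 ^ d4           # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4           # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4           # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]  # codeword positions 1..7

def hamming74_correct(code):
    # Syndrome: XOR of the (1-based) positions of all set bits.
    syndrome = 0
    for pos, bit in enumerate(code, start=1):
        if bit:
            syndrome ^= pos
    if syndrome:                      # non-zero syndrome names the bad position
        code[syndrome - 1] ^= 1       # flip it back
    return [code[2], code[4], code[5], code[6]]  # recover d1..d4

word = [1, 0, 1, 1]
sent = hamming74_encode(word)
sent[4] ^= 1                          # simulate a single-bit memory error
print(hamming74_correct(sent) == word)  # True: error located and fixed
```

The extra chip on an ECC module plays the role of those parity bits, which is why ECC DIMMs are 72 bits wide instead of 64.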
For most, ECC support is a bit overkill, though.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/hyperx_red_hx_blu_red_2_hr_small_0.jpg"><img src="/files/u152332/hyperx_red_hx_blu_red_2_hr_small.jpg" alt="We’re not sure what RAM heatsinks do today except look cool." title="RAM" width="620" height="388" /></a></p> <p style="text-align: center;"><strong>We’re not sure what RAM heatsinks do today except look cool.</strong></p> <h3 style="text-align: left;">Power Supply Unit</h3> <p style="text-align: left;"><strong>The power supply doesn’t get all the attention of, say, the CPU or the video card, but disrespect the PSU at your own peril </strong></p> <h4 style="text-align: left;">Wattage</h4> <p style="text-align: left;">The actual wattage of the PSU is the spec everyone pays attention to. That’s because 650 watts is 650 watts, right? Well, not always. One maker’s 650 watts might actually be more like 580 watts or lower at the actual temperature inside your case on a hot day. Despite all this, the wattage rating is still one of the more reliable specs you can use to judge a PSU. How much you need can only be answered by the rig you’re running. We will say that recent GPU improvements have caused us to back away from our must-have-1,000W-PSU mantra. These days, believe it or not, a hefty system can run on 750 watts or lower with a good-quality PSU.</p> <h4 style="text-align: left;">Efficiency</h4> <p style="text-align: left;">After wattage, efficiency is the next checkmark feature. PSU efficiency is basically how well the unit converts the power from AC to DC. The lower the efficiency, the more power is wasted. The lowest efficiency rating is 80 Plus, which means at least 80 percent of the power is converted at loads of 20, 50, and 100 percent. From there it goes to Bronze, Silver, Gold, and Platinum, with the higher ratings indicating higher efficiency.
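To see what those ratings mean at the wall, the math is simply DC load divided by efficiency; everything beyond the load itself is shed as heat. A rough sketch (the tier percentages used here are the 50 percent load figures):

```python
# What an efficiency rating means at the wall: AC draw = DC load / efficiency,
# and the difference becomes waste heat. Percentages are the 50%-load figures
# for each 80 Plus tier.
def wall_draw_watts(dc_load_w, efficiency):
    """AC power pulled from the outlet for a given DC load."""
    return dc_load_w / efficiency

load = 400  # watts your components actually draw
for label, eff in [("80 Plus", 0.80), ("Silver", 0.88), ("Platinum", 0.92)]:
    wall = wall_draw_watts(load, eff)
    print(f"{label}: {wall:.0f}W from the wall, {wall - load:.0f}W lost as heat")
```

Run the numbers against your local electricity rate and hours of use to decide whether the pricier tier pays for itself.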
Higher is better, but you do get diminishing returns on your investment as you approach the higher tiers. An 80 Plus Silver PSU hits 88 percent efficiency with a 50 percent load. An 80 Plus Platinum hits 92 percent. (Efficiencies for the higher tiers vary at different loads.) Is it worth paying 40 percent more for that? That’s up to you.</p> <h4 style="text-align: left;">Single-rail vs. Multi-rail</h4> <p style="text-align: left;">A single-rail PSU spits out all the power from a single “rail,” so all of the 12-volt power is combined into one source. A multi-rail splits it into different rails. Which is better? On a modern PSU, it doesn’t matter much. Most of the problems with multi-rail PSUs arose in the early days of SLI and Pentium 4 processors. PSU designs that favored CPUs, combined with the siloing of power among rails, proved incapable of properly feeding a multi-GPU setup. Single-rail designs had no such issues. These days, multi-rail PSUs are designed with today’s configs in mind, so multi-GPUs are no longer a problem.</p> <h4 style="text-align: left;">Intelligent vs. Dumb</h4> <p style="text-align: left;">A “dumb” power supply is actually what 99 percent of us have: a PSU that supplies clean, reliable power. An “intelligent” PSU does the same but communicates telemetry to the OS via USB. Some smart PSUs even let you adjust the voltages on the rails in the operating system (something you’d have to do manually on high-end units) and let you control the fan speed intelligently, too. Do you need a smart PSU? To be frank, no. But for those who like seeing how efficient the PSU is or what the 5-volt rail is doing, it’s pretty damned cool.</p> <h4 style="text-align: left;">Modular vs. Non-modular</h4> <p style="text-align: left;">Modular PSUs are the rage and give you great flexibility by letting you swap in shorter cables, swap in cables of a different color, or remove unused cables.
The downside is that most high-end machines use all of the cables, so that last point in particular is moot—what’s more, we think it’s too easy to lose modular cables, which sucks.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/triathlor_eta650awt-m_small_0.jpg"><img src="/files/u152332/triathlor_eta650awt-m_small.jpg" alt="Modular power supplies are the rage today—just don’t misplace the cables." title="Power Supply" width="620" height="413" /></a></p> <p style="text-align: center;"><strong>Modular power supplies are the rage today—just don’t misplace the cables.</strong></p> <p style="text-align: left;"><em>Click the next page to read more about PC hardware buying tips.</em></p> <p style="text-align: center;"><strong>&nbsp;</strong></p> <hr /> <p>&nbsp;</p> <h4 style="text-align: left;">System Specs<strong>&nbsp;</strong></h4> <p style="text-align: left;"><strong>How to dole out system advice like a pro</strong></p> <p style="text-align: left;">Warning: As a PC expert, you will be called upon often by family and friends for system-buying advice. After all, purchasing a new PC retail can be a daunting task for the average consumer. Remember, you might know the difference between an AMD FX-8350 and FX-6100, but will Aunt Peg?</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/blkang_small_0.jpg"><img src="/files/u152332/blkang_small.jpg" alt="This machine is probably too much PC for Aunt Peg to handle." width="620" height="748" /></a></p> <p style="text-align: center;"><strong>This machine is probably too much PC for Aunt Peg to handle.</strong></p> <p style="text-align: left;">No, Aunt Peg will walk into the local Big Box with the goal of spending $750 on a basic all-in-one and end up walking out with a $3,000 SLI rig. 
We’re not saying that Aunt Peg doesn’t like getting her frag on as much as the rest of us, but let’s face it, she needs some basic buying tips.<strong>&nbsp;</strong></p> <h4 style="text-align: left;">CPU<strong>&nbsp;</strong></h4> <p style="text-align: left;">Peg, what level of CPU you require depends on your needs. If your idea of a good time is Bejeweled, email, and basic photo editing, a dual-core processor of any model except <a title="atom" href="" target="_blank">Atom</a> is more than enough. If you’re looking for more performance, the good thing is that Intel and AMD’s model numbers can mostly be trusted to represent actual performance. A Core i5 is greater than a Core i3 and an A10 is faster than an A8. If you are doing home video editing, Peg, consider paying for a quad-core CPU or more.<strong>&nbsp;</strong></p> <h4 style="text-align: left;">RAM<strong>&nbsp;</strong></h4> <p style="text-align: left;">There are three known levers pulled when convincing consumers to buy a new PC: CPU, storage size, and amount of RAM. You’ll often see systems with low-end processors loaded up with a ton of RAM, because someone with a Pentium is really in the market for a system with 16GB of RAM (not!).&nbsp; For most people on a budget, 4GB is adequate, with 8GB being the sweet spot today. If you have a choice between a Pentium with 16GB and a Core i3 with 8GB, get the Core i3 box.<strong>&nbsp;</strong></p> <h4 style="text-align: left;">Storage<strong>&nbsp;</strong></h4> <p style="text-align: left;">Storage is pretty obvious to everyone now, and analogous to closet space. You can never have enough. What consumers should really look for is SSD caching support or even pony up for an SSD. SSD caching or an SSD so greatly improves the feel of a PC that only those on a very strict budget should pass on this option. SSDs are probably one of the most significant advances to PCs in the last four years, so not having one is almost like not having a CPU. 
How large of an SSD do you need? The minimum these days for a primary drive is 120GB, with 240GB being more usable.<strong>&nbsp;</strong></p> <h4 style="text-align: left;">GPU</h4> <p>There’s a sad statistic in the PC industry: Americans don’t pay for discrete graphics. It’s sad because a good GPU should be among the top four specs a person looks at in a new computer. Integrated graphics, usually really bad Intel integrated graphics, have long been a staple of American PCs. To be fair, that’s actually changing, as Intel’s new Haswell graphics greatly improves over previous generations, and for a casual gamer, it may even finally be enough. Still, almost any discrete GPU is faster than integrated graphics these days. Aunt Peg might not play games, but her kids or grandkids might, and not having a GPU will give them a frowny face.&nbsp; A GeForce GTX 650 or Radeon HD 7770 is a good baseline for any machine that will touch games.</p> 2013 August 2013 computer computer hardware cpu desktop pc expert gpu graphics card info knowledge motherboard processor ram News Features How-Tos Thu, 19 Dec 2013 21:09:17 +0000 Gordon Mah Ung and Josh Norem 26598 at Asus Z87-Deluxe Review <!--paging_filter--><h3>Asus Z87-Deluxe review: finally, all SATA 6Gb/s on an Intel board!</h3> <p>Motherboard shopping used to be like buying a Model T—you could buy any color you wanted as long as it was black. Today, we have a serious Nerd World problem in the dizzying array of motherboard choices, with Asus offering no less than 10 Z87 boards just in its “standard” line, at prices that range from ultra-budget to luxury.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/3d-2_small_0.jpg"><img src="/files/u152332/3d-2_small.jpg" alt="With the exception of Goldmember, PC users may miss the old blue heatsinks."
title="Asus Z87-Deluxe" width="620" height="413" /></a></p> <p style="text-align: center;"><strong>With the exception of Goldmember, PC users may miss the old blue heatsinks.</strong></p> <p>The Z87 Deluxe comes in near the top of Asus’s consumer line, and immediately raises the question, “Is it worth it?” given its price tag of about $290 on the street. That’s hard to say, but Asus has ladled on enough features to make a convincing argument.</p> <p>There’s 802.11ac with a redesigned antenna, Bluetooth 4.0 for support of ultra-low-power devices such as the Fitbit, and multi-GPU support for up to two Nvidia cards in SLI or up to three AMD cards in CrossFireX. The wireless support may seem extraneous on a desktop board, but Asus has a few tricks to get you to use it. You can, for example, set the 802.11ac to access-point mode and use it to sync files with your phone. Another mode allows you to use your phone as a remote desktop session. Sounds nifty, but we couldn’t figure out how to zoom in on the Android app, making it useless. It would also be nice if there were a way to know that the mode is on, lest someone remotely control—or watch—our screen without our knowledge.</p> <p>Elsewhere on the board, Asus does a polish job on its already excellent UEFI implementation. For a long time, Asus’s UEFI has been our pick of the litter and only recently have competitors come close. New features include a notebook, favorites, and—the most handy—a list of what you just changed in the UEFI as you exit. Asus’s other strong suit has been its AI Suite software, which also leads the pack in usability. It too has been polished up.</p> <p>Other extras include dual NICs, with one using an Intel PHY, and the latest Realtek audio codec, the ALC1150.
We put on a set of analog gaming cans and did some close listening to lossless audio while hammering the USB 3.0 ports with data and couldn’t discern any of the electrical noise that can crop up with onboard audio.</p> <p>Moving to performance, we configured an Intel DZ87KLT-75K with the exact same components and then quickly watched the Z87-Deluxe mercilessly smack the Intel board around. Magic? Well, yes, if the magic is running the Core i7-4770K at higher boost clocks. The Z87-Deluxe consistently ran the Haswell at 4.2- or 4.3GHz when the Intel board wouldn’t go past 3.9GHz on Turbo Boost. That let the Asus board run away with most of the benchmarks. There was even a little overclocking room left. Using the board’s auto-overclock, it set a clock speed of 4.6GHz. That’s pretty audacious for a Haswell chip. The only place where the Intel board came out ahead was in storage, where it edged out the Z87-Deluxe in I/O across the native Intel SATA ports as well as the USB 3.0 ports. Overall, though, we give the “performance” nod to the Asus board because it pushes the chip far harder than Intel does with its own motherboard.</p> <p>So, is it worth it? Yes and no. We’ll be honest. Not everyone needs the fancy Wi-Fi/Bluetooth features or 10 SATA 6Gb/s ports. But then again, do you really need the leather seats in your car? Or the heated mirrors and HID lamps? No, but that doesn’t make it bad to have them, either.
It’s just a question of whether you want to be pampered or to drive around in a strippo.</p> <p><strong>$290,</strong> <a href=""></a></p> 2013 Asus Z87 Hardware motherboard October issues 2013 October 2013 Motherboards Reviews Wed, 11 Dec 2013 08:50:01 +0000 Gordon Mah Ung 26858 at MSI Details Mini ITX Gaming Motherboard and Graphics Card <!--paging_filter--><h3><img src="/files/u69/msi_itx_parts.jpg" alt="MSI ITX Parts" title="MSI ITX Parts" width="228" height="130" style="float: right;" />Finding out what's in a picture</h3> <p>Just before the holiday weekend in November, <a href="" target="_blank">MSI posted to its Facebook</a> account a teaser shot showing off a pair of mini ITX gaming products. One was a graphics card and the other a motherboard, but beyond what you could make out in the picture, mum was the word from MSI at the time. Well, MSI is now ready to reveal the full monty. Those of you who guessed the graphics card was a <strong>mini ITX</strong> <strong>GeForce GTX 760</strong>, you're awarded 760 geek cred points.</p> <p>The miniature gaming card is 30 percent shorter than reference. <a href="" target="_blank">MSI said</a> the challenge it faced with such a short card was to provide "great and silent cooling." 
The company found its solution in its new RADAX fan, which is a hybrid radial/axial fan that's supposed to combine the best of both worlds while reducing temps by as much as 30 percent.</p> <p>As for the Z87I motherboard, it features five SATA 6Gbps ports, a single eSATA 6Gbps port, DisplayPort, HDMI, DL-DVI-I, a single PCI-E 3.0 x16 slot, half a dozen USB 3.0 ports, four USB 2.0 ports, 8-channel audio, 802.11ac Wi-Fi, Bluetooth 4.0, Intel WiDi, and Killer E2205 GbE LAN.</p> <p>No word yet on when <a href="" target="_blank">these parts</a> will be available to purchase or for how much.</p> <p><em>Follow Paul on <a href="" target="_blank">Google+</a>, <a href="!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="" target="_blank">Facebook</a></em></p> geforce gtx 760 graphics card mini itx motherboard msi Video Card z87I News Wed, 04 Dec 2013 17:46:13 +0000 Paul Lilly 26819 at MSI Posts Teaser Shot of Upcoming Mini ITX Gaming Board and Graphics Card <!--paging_filter--><h3><img src="/files/u69/msi_mini-itx_parts.jpg" alt="MSI Mini ITX Parts" title="MSI Mini ITX Parts" width="228" height="151" style="float: right;" />A picture is worth 1,000 words</h3> <p><a href=""><strong>MSI</strong></a> is using Facebook correctly. While Facebook feeds have a tendency to be cluttered with pictures of food and political rants, MSI posted a photo of two unreleased mini ITX gaming products. One is a mini ITX motherboard and the other is a pint-sized graphics card. Both sport a red and black color scheme along with MSI's familiar dragon logo, plus some clues to the feature-sets.</p> <p>"We’re very close to expanding our gaming family with two new members, a mini-ITX gaming motherboard and mini-ITX gaming graphics card. 
Can you already recognize some of the features?" MSI Europe asks on its <a href=";set=a.129731527066467.11729.111253142247639&amp;type=1" target="_blank">Facebook post</a>.</p> <p>Looking over the pic, we count five SATA ports on the motherboard, along with two DDR3 DIMM slots, built-in Wi-Fi, some high-end-looking components, an Audio Boost chip, and probably a generous number of USB ports on the other side of the I/O panel.</p> <p>As for the graphics card, there's an 8-pin power connector and a dual-slot cooling solution with a single fan. The general consensus among the Facebook comments so far is that it's a GeForce GTX 760 graphics card in mini ITX trim.</p> <p><em>Follow Paul on <a href="" target="_blank">Google+</a>, <a href="!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="" target="_blank">Facebook</a></em></p> Build a PC graphics card Hardware mini itx motherboard msi News Thu, 21 Nov 2013 20:38:26 +0000 Paul Lilly 26741 at ASRock's Bitcoin Mining Motherboards Each Supports 6 Graphics Cards <!--paging_filter--><h3><img src="/files/u69/asrock_bitcoin.jpg" alt="ASRock Bitcoin" title="ASRock Bitcoin" width="228" height="149" style="float: right;" />Live out your dream of being a real-life miner</h3> <p>So here it is, black sheep, your shot at going against the grain and becoming a modern-day miner while the rest of your family pursues careers in the fields of medicine, law, science, or whatever else they're doing. <a href=""><strong>ASRock</strong></a> has your back with a <a href=";ID=1765" target="_blank">pair of motherboards</a> designed specifically with Bitcoin mining in mind. Each board supports half a dozen graphics cards, allowing you to mine for virtual currency like a madman, if that's your thing.</p> <p>Let's step back. 
If you haven't been following the whole Bitcoin craze, this short YouTube video (embedded below) will get you up to speed on what Bitcoins are and how they're mined:</p> <p><iframe src="//" width="620" height="349" frameborder="0"></iframe></p> <p>Today, a single Bitcoin is worth around $785. Just two days ago, they were worth around $500 a pop. Bitcoins are currently surging, though they're also prone to dramatic dips in value, such as when the FBI busted a website called Silk Road that was selling illegal drugs and dealing exclusively with Bitcoin currency.</p> <p>In any event, if you want to become a miner, ASRock has two boards of interest: <a href="" target="_blank">H61 Pro BTC</a> and <a href="" target="_blank">H81 Pro BTC</a>. Both sport six PCI-E slots and extra four-pin connectors. The primary difference between the two comes down to processor support; the H61 Pro BTC is an LGA1155 board with support for up to Ivy Bridge, and the H81 Pro BTC is an LGA1150 board for Haswell chips.</p> <p>Whether or not you're interested in either one, it's kind of neat to see ASRock think outside the box and come up with unique slabs of silicon. In addition to Bitcoin mining boards, ASRock recently announced a mobo with <a href="">22 SATA slots</a>.</p> <p><em>Follow Paul on <a href="" target="_blank">Google+</a>, <a href="!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="" target="_blank">Facebook</a></em></p> asrock bitcoin Build a PC h61 pro btc h81 pro btc Hardware motherboard News Tue, 19 Nov 2013 16:05:52 +0000 Paul Lilly 26721 at ASRock Z87 Extreme11/ac Mobo Sports 22 SATA and SAS-3 Connectors for Storage Junkies <!--paging_filter--><h3><img src="/files/u69/asrock_z87_extreme.jpg" alt="ASRock Z87 Extreme" title="ASRock Z87 Extreme" width="228" height="156" style="float: right;" />More storage ports than you can shake a stick at</h3> <p>Fancy yourself a digital packrat? 
If oodles of storage options float your boat, you're going to love what <a href=""><strong>ASRock</strong></a> has done with its new Z87 Extreme11/ac motherboard. This slice of silicon is, <a href=";ID=1768">according to ASRock</a>, "the most high-end Z87 motherboard on the face of the earth!" It's certainly one of the most storage-friendly, with 22 storage ports: six SATA3 ports by way of Intel's Z87 chipset and another 16 SAS-3 12Gb/s ports from the added LSI SAS 3008 controller plus a 3X24R expander.</p> <p>Storage isn't the only selling point. ASRock's newest board features support for 4-way CrossFireX and 4-way SLI with its four PCI-E 3.0 slots at x8/x8/x8/x8 mode. It also has two Thunderbolt connectors, four DDR3-2933+ (OC) DIMM slots, HDMI output, built-in dual-band 802.11ac Wi-Fi support, Bluetooth 4.0, dual GbE LAN ports, and various other bullet points.</p> <p style="text-align: center;"><img src="/files/u69/asrock_storage.jpg" alt="ASRock Storage" title="ASRock Storage" width="461" height="142" /></p> <p>Overclockers will appreciate ASRock's premium design decisions, such as gold capacitors, digital PWM, 12-phase power design, 8-layer PCB, multiple filter caps (MFCs), dual-stack MOSFETs, and more.</p> <p>No word on when the <a href="" target="_blank">Extreme11/ac</a> will be available or for how much.</p> <p><em>Follow Paul on <a href="" target="_blank">Google+</a>, <a href="!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="" target="_blank">Facebook</a></em></p> <p>&nbsp;</p> asrock Build a PC Hardware motherboard sata storage z87 extreme11/ac News Mon, 18 Nov 2013 19:47:05 +0000 Paul Lilly 26716 at Intel Rolls Out "Thunderbolt Ready" Program for Indecisive PC Builders <!--paging_filter--><h3><img src="/files/u69/thunderbolt_4.jpg" alt="Thunderbolt" title="Thunderbolt" width="228" height="179" style="float: right;" />Buy a board or system today, add Thunderbolt support later</h3> <p>Intel is obviously geeked about its <a 
href=""><strong>Thunderbolt</strong></a> interface; the question is, are you? Thunderbolt has made some strides since it was first introduced -- it's present on all Apple Mac systems, there are over 100 Thunderbolt devices available, and the first Thunderbolt 2 systems were unveiled last month -- but it's not as widely available as, say, USB. To further promote the interface, Intel came up with the idea of enabling PC makers to offer Thunderbolt-upgradeable motherboards in desktop and workstation systems.</p> <p>It's an initiative called "Thunderbolt ready" and it entails using a Thunderbolt card, which can be added to any motherboard that includes a GPIO header (general-purpose input/output header).</p> <p>"Even if your system doesn’t have Thunderbolt it is now possible to 'upgrade' to it. Users that are interested in adding Thunderbolt 2 technology to an existing Thunderbolt ready system can combine a Thunderbolt card with a growing number of enabled motherboards, all identified by the use of the 'Thunderbolt ready' moniker," <a href="" target="_blank">Intel explains</a>. "The Thunderbolt ready program makes it simple to identify which components work together to upgrade your PC with Thunderbolt 2 capability."</p> <p><img src="/files/u69/thunderbolt_card.jpg" alt="Thunderbolt Card" title="Thunderbolt Card" width="620" height="297" /></p> <p>For the end-user, the upgrade is pretty simple. 
Just plug the Thunderbolt card into the designated PCI-E slot, connect a cable to the GPIO header, and use an available DisplayPort out connector from the motherboard processor graphics, or an external graphics card (depending on the system).</p> <p><em>Follow Paul on <a href="" target="_blank">Google+</a>, <a href="!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="" target="_blank">Facebook</a></em></p> Hardware intel motherboard thunderbolt thunderbolt ready News Fri, 15 Nov 2013 16:51:38 +0000 Paul Lilly 26701 at Gigabyte Goes for the Kill with G1.Sniper Z87 Motherboard <!--paging_filter--><h3><img src="/files/u69/gigabyte_g1_sniper_z87.jpg" alt="Gigabyte G1.Sniper Z87" title="Gigabyte G1.Sniper Z87" width="228" height="143" style="float: right;" />A killer board for gamers and overclockers</h3> <p>Heading into the weekend, <a href=""><strong>Gigabyte</strong></a> announced the launch of its latest gaming motherboard, the G1.Sniper Z87. Gigabyte made the announcement at Blizzcon 2013, one of the biggest U.S. gaming events of the year. It's a fitting place to unveil the G1.Sniper Z87, which combines aggressive looks with high-end hardware like an AMP-UP audio feature and Killer E2200 networking.</p> <p>Gigabyte paid a lot of attention to its board's audio scheme. 
It has an upgradeable on-board OP-Amp, USB DAC-UP for clean, noise-free power delivery to any DAC, gain boost with onboard switches to select between 2.5x and 6x amplification, integrated Creative Sound Core3D audio processor, audio noise guard to protect against EMI, and a few other odds and ends.</p> <p><iframe src="//" width="620" height="349" frameborder="0"></iframe></p> <p>Outside of audio, the G1.Sniper Z87 sports four DDR3 DIMM slots with support for up to 32GB of memory, two PCI-E x16 slots, three PCI-E x1 slots, two regular PCI slots, half a dozen SATA 6Gbps ports with RAID support, 6 USB 3.0 ports, 7 USB 2.0 ports, and more.</p> <p>No word yet on when the <a href="" target="_blank">Gigabyte G1.Sniper Z87</a> will be available or for how much.</p> <p><em>Follow Paul on <a href="" target="_blank">Google+</a>, <a href="!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="" target="_blank">Facebook</a></em></p> Build a PC g1.sniper gigabyte Hardware motherboard z87 News Fri, 08 Nov 2013 16:22:50 +0000 Paul Lilly 26658 at