Maximum PC - Reviews http://www.maximumpc.com/articles/40/feed en Haswell-E Review http://www.maximumpc.com/haswell-e_review_2014 <!--paging_filter--><h3>Haswell-E: Meet Intel’s new eight-core, game-changing CPU</h3> <p>After three long years of going hungry with quad-cores, red meat is finally back on the menu for enthusiasts. And not just any gamey slab full of gristle with shared cores, either. With its new eight-core Haswell-E CPU, Intel may have served up the most mouth-watering, beautifully seared piece of red meat in a long time.</p> <p>And it’s a good thing, too, because enthusiasts’ stomachs have been growling. Devil’s Canyon? That puny quad-core was just an appetizer. And that dual-core, highly overclockable Pentium K CPU? It’s the mint you grab on your way out of the steak house.</p> <p><iframe src="//www.youtube.com/embed/_h9ggGZHFtU" width="620" height="349" frameborder="0"></iframe></p> <p>No, what enthusiasts have craved ever since Intel’s clock-blocking job on the original Sandy Bridge-E is a true, overclockable enthusiast chip with eight cores. So if you’re ready for a belt-loosening belly full of enthusiast-level prime rib, pass the horseradish, get that damned salad off our table, and read on to see if Intel’s Haswell-E is everything we hoped it would be.&nbsp;</p> <p><strong>Meet the Haswell-E parts</strong></p> <p><span style="color: #ff0000;"><img src="/files/u154082/haswell-e_comparison_chart.png" alt="haswell e comparison chart" title="haswell e comparison chart" width="620" height="241" /></span></p> <p>&nbsp;</p> <p><img src="/files/u154082/lga2011v3socket.jpg" alt="haswell e socket" title="haswell e socket" width="620" height="626" /></p> <p><strong>Despite its name, the LGA2011-v3 socket is not the same as the older LGA2011 socket. Fortunately, the cooling offsets are exactly the same, so almost all older coolers and accessories should work just fine.&nbsp;</strong></p> <p><img src="/files/u154082/lga2011socket1.jpg" alt="lga2011" title="lga2011" width="620" height="556" /></p> <p><strong>Though they look the same, LGA2011’s socket has arms that are actually arranged differently than those of the new LGA2011-v3 that replaces it. And no, you can’t drop a newer Haswell-E into this socket and make it work.</strong></p> <h4>Haswell-E</h4> <p><strong>The first consumer Intel eight-core arrives at last</strong></p> <p>Being a card-carrying member of the PC enthusiast class is not an easy path to follow. Sure, you get the most cores and priciest parts, but it also means you get to wait a hell of a long time between CPU upgrades. And with Intel’s cadence the last few years, it also means you get the leftovers. It’s been that way ever since Intel went with its two-socket strategy with the original LGA1366/LGA1156. Those who picked the big-boy socket and stuck to their guns on pure PC performance always got the shaft.&nbsp;</p> <p>The original Ivy Bridge in its LGA1155 socket, for example, hit the streets in April of 2012. And as if having the more efficient, faster CPU weren’t reward enough, Intel then handed the small-socket crowd Haswell in June of 2013. It wasn’t until September of 2013 that big-boy socket users finally got Ivy Bridge-E for their LGA2011s. But with Haswell already out and tearing up the benchmarks, who the hell cared?</p> <p>Well, the wait finally ends with Haswell-E, Intel’s first replacement for the aging LGA2011 platform since 2011. This time though, Intel isn’t just shuffling new parts into its old stack.
For the first time since the original Pentium 4 Extreme Edition, paying the price premium actually nets you more: namely, the company’s first consumer eight-core CPU.</p> <p><strong>Meet the T-Rex of consumer CPUs: The Core i7-5960X</strong></p> <p>We were actually a little leery of Haswell when it first launched last year. It was, after all, a chip seemingly tuned for the increasingly mobile/laptoppy world we were told was our post-PC apocalyptic future. Despite this, we recognized the chip as the CPU to have for new system builders. Clock for clock, its 22nm-process, tri-gate transistors put everything else to shame—even the six-core Core i7-3930K chip in many tasks. So it’s no surprise that when Intel took a quad-core Haswell, put it in the Xerox machine, and hit the copy x2 button, we were ecstatic. Eight cores are decidedly better than six cores or four cores when you need them.&nbsp;</p> <p>The cores don’t come without a cost though, and we don’t mean the usual painful price Intel asks for its highest-end CPUs. It’s no secret that more cores mean more heat, which means lower clock speeds. That’s one of the rationales Intel used with the original six-core Core i7-3960X. Although sold as a six-core, the original Sandy Bridge-E was built using an eight-core die on which Intel had permanently switched off two cores. Intel said it wanted to balance the needs of the many versus the needs of the few—that is, by turning off two of the cores, the part could hit higher clock speeds. Indeed, the Core i7-3960X had a base clock of 3.3GHz and a Turbo Boost of 3.9GHz, and most could overclock it to 5GHz. The same chip packaged as a Xeon with all eight cores working—the Xeon E5-2687W—was locked down at 3.1GHz and mostly buzzed along at 3.4GHz.</p> <p>With the new Core i7-5960X—the only eight-core of the bunch—the chip starts at a seemingly pedestrian 3GHz with a Turbo Boost of one core up to 3.5GHz. Those subsonic clock speeds won’t impress against the Core i7-4790K, which starts at 4GHz. You’ll find more on how well Haswell-E performs against Haswell in our performance section, but that’s the price to be paid, apparently, to get a chip with this many cores under the heat spreader. Regarding thermals, in fact, Intel has increased the TDP rating to 140 watts versus the 130 watts of Ivy Bridge-E and Sandy Bridge-E.&nbsp;</p> <p>If the low clocks annoy you, the good news is the part is fully unlocked, so the use of overclocking has been approved. For our test units, we had very early hardware and tight deadlines, so we didn’t get very far with our overclocking efforts. Talking with vendors, however, most seem very pleased with the clock speeds they were seeing. One vendor told us overclocks of all cores to 4.5GHz were already obtainable, and newer microcode updates were expected to improve that. With even the vaunted Devil’s Canyon Core i7-4790K topping out at 4.7GHz to 4.8GHz, 4.5GHz is actually a healthy overclock for an eight-core CPU.</p> <p><span style="white-space: pre;"> </span>When you dive down into the actual cores though, much is the same, of course. It’s based on a 22nm process. It has “3D” tri-gate transistors and integrated voltage regulation. Oh, and it’s also the first CPU to feature an integrated DDR4 memory controller.</p> <p><strong>Click the next page to read about DDR4</strong></p> <hr /> <p>&nbsp;</p> <h4>DDR4 details</h4> <p>If you think Haswell-E has been a long wait, just think about DDR3, which has served as main memory in systems since 2007. Yes, 2007.
The only component that has lasted seven years in most enthusiasts’ systems might be the PSU, but even then, it’s rare to find anyone still kicking around a 500-watt PSU from 2007 these days.&nbsp;</p> <p><span style="white-space: pre;"> </span>DDR4 has been in gestation seemingly as long, so why the delay? From what we can tell, resistance to yet another new memory standard during a time when people thought the desktop PC and the PC in general were dying has been at the root of the delay. It didn’t help that no one wanted to stick their head out first, either. RAM makers didn’t want to begin producing DDR4 in volume until AMD or Intel made chipsets for it, and AMD and Intel didn’t want to support it because of the costs it would add to PCs at a time when people were trying to lower costs. The stalemate finally ends with Haswell-E, which integrates a quad-channel DDR4 memory controller into its die.</p> <p>Initial launch speeds of DDR4 clock in at DDR4/2133. For those already running DDR3 at data rates of 3,000 or higher, a 2,133 data rate is a snooze, but you should realize that anything over 2,133 is overclocked RAM. With DDR4, JEDEC (the body that sets RAM standards) already has target data rates of 3,200 on the map. RAM vendors we’ve talked to are already shopping DIMMs near that speed.</p> <p>The best part of DDR4 may be its density story, though. For years, consumer DDR3 has topped out at 8GB on a DIMM. With DDR4, we should see 16GB DIMMs almost immediately, and stacking of chips is built into the standard, so it’s possible we’ll see 32GB DIMMs over its lifetime. On a quad-channel, eight-DIMM motherboard, you should expect to be able to build systems with 128GB of RAM using non-ECC DIMMs almost immediately. DDR4 also brings power savings and other improvements, but the main highlights enthusiasts should expect are higher densities and higher clocks. Oh, and higher prices. RAM prices haven’t been fun for anyone of late, but DDR4 will definitely be a premium part for some time. In fact, we couldn’t even get exact pricing from memory vendors as we were going to press, so we’re bracing for some really bad news.</p> <h4>PCIe lanes: now a feature to be blocked</h4> <p>Over the years, we’ve come to expect Intel to clock-block core counts, clock speeds, Hyper-Threading, and even cache for “market segmentation” purposes. What that means is Intel has to find ways to differentiate one CPU from another. Sometimes that’s by turning off Hyper-Threading (witness Core i5 and Core i7) and sometimes it’s by locking down clock speeds. With Haswell-E though, Intel has gone to new heights with its clock-blocking by actually turning off PCIe lanes on some Haswell-E parts to make them less desirable. At the top end, you have the 3GHz Core i7-5960X with eight cores. In the midrange you have the six-core 3.5GHz Core i7-5930K. And at the “low-end” you have the six-core 3.3GHz Core i7-5820K. The 5930K and the 5820K are virtually the same in specs except for one key difference: The PCIe lanes get blocked. Yes, while the Core i7-5960X and Core i7-5930K get 40 lanes of PCIe 3.0, the Core i7-5820K gets an odd 28 lanes of PCIe 3.0. That means those who had hoped to build “budget” Haswell-E boxes with multiple GPUs may have to think long and hard about using the lowest-end Haswell-E chip. The good news is that for most people, it won’t matter. Plenty of people run Haswell systems with SLI or CrossFire, and those CPUs are limited to 16 lanes.
Boards with PLX switches even support four-way GPU setups.</p> <p>Still, it’s a brain bender to think that when you populate an X99 board with the lowest-end Haswell-E, the PCIe configuration will change. The good news is that at least all the slots will work, just more slowly. Intel says it worked with board vendors to make sure all the slots will function with the budget Haswell-E part.&nbsp;</p> <p><img src="/files/u154082/mpc_haswell_front-back_1.jpg" alt="haswell e chip" title="haswell e chip" width="620" height="413" /></p> <p><strong>There have been clock-blocking rumors swirling around about Haswell-E being a 12-core Xeon with four cores turned off. That’s not true, and Intel says this die shot proves it.&nbsp;</strong></p> <p><img src="/files/u154082/ivbe.jpg" alt="ivy bridge e" title="ivy bridge e" width="620" height="550" /></p> <p><strong>Ivy Bridge-E’s main advantages over Sandy Bridge-E were a native six-core die and greatly reduced power consumption. Unfortunately, like its Ivy Bridge counterpart, overclocking yields on Ivy Bridge-E were also greatly reduced compared to its predecessor, with few chips hitting more than 4.7GHz at best.</strong></p> <p><img src="/files/u154082/snbe.jpg" alt="sandy bridge e" title="sandy bridge e" width="308" height="260" /></p> <p><strong>Sandy Bridge and Sandy Bridge-E will long be remembered for their friendliness to overclocking, and the -E part for having two of its working cores killed Red Wedding–style by Intel.</strong></p> <p><strong>Click the next page to read about X99.</strong></p> <hr /> <p>&nbsp;</p> <h4>X99&nbsp;</h4> <p><strong>High-end enthusiasts finally get the chipset they want, sort of</strong></p> <p><img src="/files/u154082/x99blockdiagram.jpg" alt="x99 block diagram" title="x99 block diagram" width="620" height="381" /></p> <p><strong>Intel overcompensated in SATA on X99 but oddly left SATA Express on the cutting-room floor.</strong></p> <p>You know what we won’t miss? The X79 chipset. No offense to X79 owners, but while the Core i7-4960X can stick around for a few more months, X79 can take its under-spec’ed butt out of our establishment. Think we’re being too harsh? We don’t.</p> <p>X79 has no native USB 3.0 support. And its SATA 6Gb/s ports? Only two. It almost reads like a feature set from the last decade to us. Fortunately, in a move we wholly endorse, Intel has gone hog wild in over-compensating for the weaknesses of X79.&nbsp;</p> <p>X99 has eight USB 2.0 ports and six USB 3.0 ports baked into its Platform Controller Hub. For SATA 6Gb/s, Intel adds 10 ports to X99. Yes, 10 ports of SATA 6Gb/s. That gazongo number of SATA ports, however, is balanced out by two glaring omissions in X99: none of the official SATA Express or M.2 support that came with Z97. Intel didn’t say why it left SATA Express and M.2 out of the chipset, but it did say motherboard vendors were free to implement them using techniques they gleaned from doing it on Z97 motherboards. If we had to hazard a guess, we’d say Intel’s conservative nature led it to leave the features off the chipset, as the company is a stickler for testing new interfaces before adding official support. At this point, SATA Express has been a no-show anyway: motherboards with SATA Express became available in May with Z97, yet we still have not seen any native SATA Express drives. We expect most motherboard vendors to simply add it through discrete controllers; even our early board sample had a SATA Express port.&nbsp;</p> <p>One potential weakness of X99 is Intel’s use of DMI 2.0.
That offers roughly 2.5GB/s of transfer speed between the CPU and the south bridge or PCH, but with the board hanging 10 SATA devices, USB 3.0, Gigabit Ethernet, and 8 PCIe Gen 2.0 lanes off that link, there is the potential for massive congestion—but only in a worst-case scenario. You’d really have to have a boatload of hardware lit up and sending and receiving data at once to cause the DMI 2.0 link to bottleneck. Besides, Intel says, you can just hang such devices off the plentiful PCIe Gen 3.0 lanes from the CPU.</p> <p>That does bring up our last point on X99: the PCIe lanes. As we mentioned earlier, there will be some confusion over the PCIe lane configuration on systems with Core i7-5820K parts. With only 28 PCIe lanes available from that chip, there’s concern that whole slots on the motherboard will be turned off. That won’t happen, Intel says. Instead, if you go with the low-rent ride, you simply lose bandwidth. Take an X99 mobo and plug in the Core i7-5930K and you get two slots at x16 PCIe, and one x8 slot. Remove that CPU and install the Core i7-5820K, and the slots will now be configured as one x16, one x8, and one x4. It’s still more bandwidth than you can get from a normal LGA1150-based Core i7-4770K, but it will be confusing nonetheless. We expect motherboard vendors to sort it out for their customers, though.</p> <p>Haswell-E does bring one more interesting PCIe configuration though: the ability to run five graphics cards in the PCIe slots at x8 speeds. Intel didn’t comment on the reasons for the option, but there are only a few apparent explanations. The first is mining configurations, where miners are already running six GPUs. Mining, however, doesn’t seem to need the bandwidth an x8 slot would provide. The other possibility is a five-way graphics card configuration being planned by Nvidia or AMD. At this point it’s just conjecture, but one thing we know is that X99 is a welcome upgrade. Good riddance, X79.&nbsp;</p> <h4>Top Procs Compared</h4> <p><span style="color: #ff0000;"><span style="white-space: pre;"><img src="/files/u154082/top_processors.png" alt="top processors compared" title="top processors compared" width="620" height="344" /></span></span></p> <h4>Core Competency&nbsp;</h4> <p><strong>How many cores do you really need?</strong></p> <p><img src="/files/u154082/haswelletaskamanger.png" alt="haswell task manager" title="haswell task manager" width="620" height="564" /></p> <p><strong>It is indeed a glorious thing to see a task manager with this many threads, but not everyone needs them.</strong></p> <p>Like the great technology philosopher Sir Mix-A-Lot said, we like big cores and we cannot lie. We want as many cores as are legally available. But we recognize that not everyone rolls as hard as we do with a posse of threads. With Intel’s first eight-core CPU, consumers can now pick from two cores all the way to eight on the Intel side of the aisle—and then there’s Hyper-Threading to confuse you even more. So, how many cores do you need? We’ll give you the quick-and-dirty lowdown.</p> <p><strong>Two cores</strong></p> <p>Normally, we’d completely skip dual-cores without Hyper-Threading, because those parts tend to be bottom-of-the-pool Celerons. Our asterisk is the new Intel Pentium G3258 Anniversary Edition, or “Pentium K,” which is a real hoot of a chip. It easily overclocks and is dead cheap.
It’s not the fastest in content creation by a long shot, but if we were building an ultra-budget gaming rig and needed to steal from the CPU budget for a faster GPU, we’d recommend this one. Otherwise, we see dual-cores as purely ultra-budget parts today.</p> <p><strong>Two cores with Hyper-Threading</strong></p> <p>For your parents who need a reliable, solid PC without overclocking (you really don’t want to explain how to back down the core voltage in the BIOS to grandma, do you?), the dual-core Core i3 parts fulfill the needs of most people who only do content creation on occasion. Hyper-Threading adds value in multi-threaded and multi-tasking tasks. You can almost think of these chips with Hyper-Threading as three-core CPUs.&nbsp;</p> <p><strong>Four cores</strong></p> <p>For anyone who does content creation such as video editing, encoding, or even photo editing with newer applications, a quad-core is usually our recommended part. Newer game consoles are also expected to push min specs for newer games to quad-cores or more as well, so for most people who carry an Enthusiast badge, a quad-core part is the place to start.</p> <p><strong>Four cores with Hyper-Threading</strong></p> <p>Hyper-Threading got a bad name early on from the Pentium 4 and existing software that actually saw it reduce performance when turned on. Those days are long behind us though, and Hyper-Threading offers a nice performance boost with its virtual cores. How much? &nbsp;A 3.5GHz Core i7 quad-core with Hyper-Threading generally offers the same performance on multi-threaded tasks as a Core i5 running at 4.5GHz. The Hyper-Threading helps with content creation and we’d say, if content creation is 30 percent or less of your time, this is the place to be and really the best fit for 90 percent of enthusiasts.</p> <p><strong>Six cores with Hyper-Threading</strong></p> <p>Once you pass the quad-core mark, you are moving pixels professionally in video editing, 3D modeling, or other tasks that necessitate the costs of a six-core chip or more. We still think that for 90 percent of folks, a four-core CPU is plenty, but if losing time rendering a video costs you money (or you’re just ADD), pay for a six-core or more CPU. How do you decide if you need six or eight cores? Read on.&nbsp;</p> <p><strong>Eight cores with Hyper-Threading</strong></p> <p>We recognize that not everyone needs an eight-core processor. In fact, one way to save cash is to buy the midrange six-core chip instead, but if time is money, an eight-core chip will pay for itself. For example, the eight-core Haswell-E is about 45 percent faster than the four-core Core i7-4790K chip. If your render job is three hours, that’s more time working on other paying projects. The gap gets smaller between the six-core and the eight-core of course, so it’s very much about how much your time is worth or how short your attention span is. But just to give you an idea, the 3.3GHz Core i7-5960X is about 20 percent faster than the Core i7-4960X running at 4GHz.</p> <p><strong>Click the next page to see how Haswell-E stacks up against Intel's other top CPUs.</strong></p> <hr /> <p>&nbsp;</p> <h4 style="font-size: 10px;">Intel’s Top Guns Compared</h4> <p><img src="/files/u154082/cpus17918.jpg" alt="haswell" title="haswell" width="620" height="413" /></p> <p><strong><strong>The LGA2011-based Core i7-4960X (left) and the LGA2011-v3-based Core i7-5960X (middle) dwarf the Core i7-4790K chip (right). 
Note the change in the heat spreader between the older 4960X and 5960X, which now has larger “wings” that make it easier to remove the CPU by hand. The breather hole, which allows for curing of the thermal interface material (solder in this case), has also been moved. Finally, while the chips are the same size, they are keyed differently to prevent you from installing a newer Haswell-E into an older Ivy Bridge-E board.</strong></strong></p> <h4>Benchmarks</h4> <p><strong>Performance junkies, rejoice! Haswell-E hits it out of the ballpark</strong></p> <p><img src="/files/u154082/x99-gaming_5-rev10.jpg" alt="x99 gigabyte" title="x99 gigabyte" width="620" height="734" /></p> <p><strong>We used a Gigabyte X99 motherboard (without the final heatsinks for the voltage-regulation modules) for our testing.</strong></p> <p>For our testing, we set up three identical systems with the fastest available CPUs for each platform. Each system used an Nvidia GeForce GTX 780 with the same 340.52 drivers, Corsair 240GB Neutron GTX SSDs, and 64-bit Windows 8.1 Enterprise. Since we’ve had issues with clock speeds varying on cards that physically look the same, we also verified the clock speeds of each GPU manually and also recorded the multiplier, bclock, and speeds the parts run at under single-threaded and multi-threaded loads. So you know, the 3GHz Core i7-5960X’s would run at 3.5GHz on single-threaded tasks but usually sat at 3.33GHz on multi-threaded tasks. The 3.6GHz Core i7-4960X ran everything at 4GHz, including multi-threading tasks. The 4GHz Core i7-4790K part sat at 4.4GHz on both single- and multi-threaded loads.</p> <p>For Z97, we used a Gigabyte Z97M-D3H mobo with a Core i7-4790K “Devil’s Canyon” chip aboard. &nbsp;An Asus Sabertooth X79 did the duty for our Core i7-4960X “Ivy Bridge-E” chip. Finally, for our Core i7-5960X chip, we obtained an early Gigabyte X99-Gaming 5 motherboard. The board was pretty early but we feel comfortable with our performance numbers as Intel has claimed the Core i7-5960X was “45 percent” faster than a quad-core chip, and that’s what we saw in some of our tests.&nbsp;</p> <p>One thing to note: The RAM capacities were different but in the grand scheme of things and the tests we run, it has no impact. The Sabertooth X79 &nbsp;had 16GB of DDR3/2133 in quad-channel mode, the Z97M-D3H had 16GB of DDR3/2133 in dual-channel mode. Finally, the X99-Gaming 5 board had 32GB of Corsair DDR4/2133. All three CPUs will overclock, but we tested at stock speeds to get a good baseline feel.&nbsp;</p> <p>For our benchmarks, we selected from a pile of real-world games, synthetic tests, as well as real-world applications across a wide gamut of disciplines. Our gaming tests were also run at very low resolutions and low-quality settings to take the graphics card out of the equation. We also acknowledge that people want to know what they can expect from the different CPUs at realistic settings and resolutions, so we also ran all of the games at their highest settings at 1920x1080 resolution, which is still the norm in PC gaming.&nbsp;</p> <p><strong>The results</strong></p> <p>We could get into a multi-sentence analysis of how it did and slowly break out with our verdict but in a society where people get impatient at the microwave, we’ll give you the goods up front: Holy Frakking Smokes, this chip is fast! 
The Core i7-5960X is simply everything high-end enthusiasts have been dreaming about.&nbsp;</p> <p>Just to give you an idea, we’ve been recording scores from $7,000 and $13,000 PCs in our custom Premiere Pro CS6 benchmark for a couple of years now. The fastest we’ve ever seen is the Digital Storm Aventum II that we reviewed in our January 2014 issue. The 3.3GHz Core i7-5960X was faster than the Aventum II’s Core i7-4960X running at 4.7GHz. Again, at stock speeds, the Haswell-E was faster than the fastest Ivy Bridge-E machine we’ve ever seen.</p> <p>It wasn’t just Premiere Pro CS6 we saw that spread in either. In most of our tests that stress multi-threading, we saw roughly a 45 percent to 50 percent improvement going from the Haswell to the Haswell-E part. The scaling gets tighter when you’re comparing the six-core Core i7-4960X but it’s still a nice, big number. We generally saw a 20 percent to 25 percent improvement in multi-threaded tasks.&nbsp;</p> <p>That’s not even factoring in the clock differences between the parts. The Core i7-4790K buzzes along at 4.4GHz—1.1GHz faster than the Core i7-5960X in multi-threaded tasks—yet it still got stomped by 45 to 50 percent. The Core i7-4960X had a nearly 700MHz clock advantage as well over the eight-core chip.</p> <p>The whole world isn’t multi-threaded, though. Once we get to workloads that don’t push all eight cores, the higher clock speeds of the other parts predictably take over. ProShow Producer 5.0, for example, has never pushed more than four threads and we saw the Core i7-5960X lose by 17 percent. The same happened in our custom Stitch.Efx 2.0 benchmark, too. In fact, in general, the Core i7-4790K will be faster thanks to its clock speed advantage. If you overclocked the Core i7-5960X to 4GHz or 4.4GHz on just four cores, the two should be on par in pure performance on light-duty workloads.</p> <p>In gaming, we saw some results from our tests that are a little bewildering to us. At low-resolution and low-quality settings, where the graphics card was not the bottleneck, the Core i7-4790K had the same 10 percent to 20 percent advantage. When we ran the same tests at ultra and 1080p resolution, the Core i7-5960X actually had a slight advantage in some of the runs against the Core i7-4790K chip. We think that may be from the bandwidth advantage the 5960X has. Remember, we ran all of the RAM at 2,133, so it’s not DDR4 vs. DDR3. It’s really quad-channel vs. dual-channel.</p> <p>We actually put a full breakdown of each of the benchmarks and detailed analysis on MaximumPC.com if you really want to nerd out on the performance.</p> <p><strong>What you should buy</strong></p> <p>Let’s say it again: The Core i7-5960X stands as the single fastest CPU we’ve seen to date. It’s simply a monster in performance in multi-threaded tasks and we think once you’ve overclocked it, it’ll be as fast as all the others in tasks that aren’t thread-heavy workloads.</p> <p>That, however, doesn’t mean everyone should start saving to buy a $1,000 CPU. No, for most people, the dynamic doesn’t change. For the 80 percent of you who fall into the average Joe or Jane nerd category, a four-core with Hyper-Threading still offers the best bang for the buck. It won’t be as fast as the eight-core, but unless you’re really working your rig for a living, made of money, or hate for your Handbrake encodes to take that extra 25 minutes, you can slum it with the Core i7-4790K chip. 
You don’t even have to heavily overclock it for the performance to be extremely peppy.</p> <p>For the remaining 20 percent who actually do a lot of encoding, rendering, professional photo editing, or heavy multi-tasking, the Core i7-5960X stands as the must-have CPU. It’s the chip you’ve been waiting for Intel to release. Just know that at purely stock speeds, you do give up performance to the Core i7-4790K part. But again, the good news is that with minor overclocking tweaks, it’ll be the equal of the quad-core chip, or better.</p> <p>What’s really nice here is that for the first time, Intel is giving its “Extreme” SKU buyers something truly extra for the $999 they spend. Previous Core i7 Extreme parts have always been good overclockers, but a lot of people bypassed them for the midrange chips such as the Core i7-4930K, which gave you the same core counts and overclocking to boot. The only true differentiation Extreme CPU buyers got was bragging rights. With Haswell-E, the Extreme buyers are the only ones with eight-core parts.</p> <p>Bang-for-the-buck buyers also get a treat from the six-core Core i7-5820K chip. At $389, it’s slightly more expensive than the chip it replaces—the $323 Core i7-4820K—but the extra price nets you two more cores. Yes, you lose PCIe bandwidth, but most people probably won’t notice the difference. We didn’t have a Core i7-5820K part to test, but based on our testing with the Core i7-5960X, we believe that minor overclocking on the cheap Haswell-E would easily make it the equal of Intel’s previous six-core chips, which could never be had for less than $580.</p> <p>And that, of course, brings us to the last point of discussion: Should you upgrade from your Core i7-4960X part? The easy answer is no. In pure CPU-on-CPU showdowns, the Core i7-4960X is about 20 percent slower in multi-threaded tasks, and in light-duty threads it’s about the same, thanks to the clock-speed advantage the Core i7-4960X has. There are two reasons we might want to toss aside the older chip, though. The first is X79’s pathetic pair of SATA 6Gb/s ports; frankly, a heavy-duty work machine needs more of them. The second is if you’re one of the folks for whom a 20 percent reduction in rendering time is actually worth paying.&nbsp;</p> <p><strong>Click the next page to check out our Haswell-E benchmarks.</strong></p> <hr /> <h4><span style="font-size: 1.17em;">Haswell-E Benchmarks</span></h4> <p><strong>Haswell-E benchmarks overview</strong></p> <p><span style="font-size: 1.17em;">&nbsp;</span><img src="/files/u154082/haswell_e_benchmarks.png" alt="haswell e benchmarks" title="haswell e benchmarks" width="541" height="968" /></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><strong>Benchmark Breakdown</strong></p> <p>We like to give you the goods in a nice table, but not everyone is familiar with what we use to test and what exactly the numbers mean, so let’s break down some of the more significant results for you.&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p><img src="/files/u154082/cinebenchsinglethreaded.png" alt="cinebench 15 single" title="cinebench 15 single" width="620" height="472" /></p> <p><strong>Cinebench 15 single-threaded performance</strong></p> <p><span style="color: #000000;">We used Maxon’s Cinebench 15 benchmark to see just how fast the trio of chips would run this 3D rendering test. Cinebench 15 allows you to run it on all of the cores or restrict it to just one core. For this test, we wanted to see how the Core i7-5960X “Haswell-E” would do against the others by measuring a single core.
The winner here is the Core i7-4790K “Devil’s Canyon” chip. That’s no surprise—it uses the same microarchitecture as the big-boy Haswell-E, but it has a ton more clock speed at default. The Haswell-E is about 21 percent slower running at 3.5GHz. The Devil’s Canyon part is running about 900MHz faster at 4.4GHz. Remember, at default, the Haswell-E only hits 3.5GHz on single-core loads. Despite its better microarchitecture, the Haswell-E also loses to the Core i7-4960X “Ivy Bridge-E,” but not by much, and that’s with the Ivy Bridge-E holding a 500MHz clock speed advantage. Still, the clear winner in single-threaded performance is the higher-clocked Devil’s Canyon chip.&nbsp;</span></p> <p><span style="color: #333399;">Winner: Core i7-4790K</span></p> <p><span style="color: #000000;"><img src="/files/u154082/cinebenchmulti.png" alt="cinebench 15 multi" title="cinebench 15 multi" width="620" height="428" /></span></p> <p><span style="color: #000000;"><strong>Cinebench 15 multi-threaded performance</strong></span></p> <p><span style="color: #000000;">You don’t buy an eight-core CPU and then throw only single-threaded workloads at it, so we took the handcuffs off of Cinebench 15 and let it render with all available threads. On the Haswell-E part, that’s 16 threads of fun; on Ivy Bridge-E, it’s 12 threads; and on Devil’s Canyon, we’re looking at eight threads. The winner by a clear margin is the Haswell-E part. It’s an astounding 49 percent faster than the Devil’s Canyon and about 22 percent faster than Ivy Bridge-E. We’ll just have to continue to remind you, too: this is with a severe clock penalty. That 49-percent-faster score is with all eight cores running at 3.3GHz vs. all four of the Devil’s Canyon cores buzzing along at 4.4GHz. That’s an 1,100MHz clock speed advantage. Ivy Bridge-E also has a nice 700MHz clock advantage over Haswell-E. Chalk this up as a big, huge win for Haswell-E.&nbsp;</span></p> <p><span style="color: #333399;">Winner: Core i7-5960X</span></p> <p><span style="color: #000000;"><img src="/files/u154082/povray.png" alt="pov-ray" title="pov-ray" width="620" height="491" /></span></p> <p><span style="color: #000000;"><strong>POV-Ray performance</strong></span></p> <p><span style="color: #000000;">We wanted a second opinion on rendering performance, so we ran POV-Ray, a freeware ray tracer with roots that reach back to the Amiga. Again, Haswell-E wins big-time with a 47 percent performance advantage over Devil’s Canyon and a 25 percent advantage over Ivy Bridge-E. Yeah, and all that stuff we said about the clock speed advantage the quad-core and six-core had, that applies here, too. Blah, blah, blah.</span></p> <p><span style="color: #333399;">Winner: Core i7-5960X</span></p> <p><span style="color: #000000;"><img src="/files/u154082/premierepro.png" alt="premiere pro" title="premiere pro" width="620" height="474" /></span></p> <p><span style="color: #000000;"><strong>Premiere Pro CS6 performance</strong></span></p> <p><span style="color: #000000;">One sanity check (benchmark results Intel produces to let you know what kind of performance to expect) said Haswell-E would outperform quad-core Intel parts by 45 percent in Premiere Pro Creative Cloud when working with 4K content. Our benchmark, however, doesn’t use 4K content yet, so we wondered if our results would be similar. For our test, we render out a 1080p-resolution file using source material we shot on a Canon EOS 5D Mk II, using multiple timelines and transitions. We restrict it to the CPU rather than using the GPU as well.
Our result? The 3.3GHz Haswell-E was about 45 percent faster than the 4.4GHz Devil’s Canyon chip. Bada-bing! The two extra cores also spit out the render about 19 percent faster than the six-core Ivy Bridge-E. That’s fairly consistent performance we’re seeing between the different workload disciplines of 3D rendering and video encoding so far, and again, big, big wins for the Haswell-E part.&nbsp;</span></p> <p><span style="color: #333399;">Winner: Core i7-5960X</span></p> <p><span style="color: #000000;"><img src="/files/u154082/handbrake.png" alt="handbrake" title="handbrake" width="620" height="407" /></span></p> <p><span style="color: #000000;"><strong>Handbrake Encoding performance</strong></span></p> <p><span style="color: #000000;">For our encoding test, we took a 1080p-resolution video file and used Handbrake 0.9.9 to transcode it into a file using the Android tablet profile. Handbrake is very multi-threaded and leverages the CPU for its encoding and transcoding. Our results were still fairly stellar, with the Haswell-E CPU performing about 38 percent faster than the Devil’s Canyon part. Things were uncomfortably close with the Ivy Bridge-E part though, with the eight-core chip coming in only about 13 percent faster than the six-core chip. Since the Ivy Bridge-E cores are slower than Haswell cores clock-for-clock, we were a bit surprised at how close they were. In the past, we have seen memory bandwidth play a role in encoding, but not necessarily in Handbrake. Interestingly, despite locking all three parts down at 2,133MHz, the Ivy Bridge-E does provide more bandwidth than the Haswell-E part. One other thing we should mention: Intel’s “sanity check” numbers to let the media know what to expect for Handbrake performance showed a tremendous advantage for the Haswell-E. Against a Devil’s Canyon chip, Haswell-E was 69 percent faster, and it was 34 percent faster than the Ivy Bridge-E chip. Why the difference? The workload. Intel uses a 4K-resolution file and transcodes it down to 1080p. We haven’t tried it at 4K, but we may, as Intel has provided the 4K-resolution sample files to the media. If true, and we have no reason to doubt it, it’s a good sign for those who actually work at Ultra HD resolutions that the eight cores can pay off. Overall, we’re declaring Haswell-E the winner here.</span></p> <p><span style="color: #333399;">Winner: Core i7-5960X</span></p> <p><span style="color: #000000;"><img src="/files/u154082/x264pass1.png" alt="x264 pass 1" title="x264 pass 1" width="620" height="496" /></span></p> <p><span style="color: #000000;"><strong>X264 HD 5.01 Pass 1 performance</strong></span></p> <p><span style="color: #000000;">We’ve been using TechArp.com’s X264 HD 5.0.1 benchmark to measure performance on new PCs. The test does two passes using the freeware x264 encoding library. The first pass is seemingly a little more sensitive to clock speeds and memory bandwidth than to pure core count; a higher frame rate is better. Because the first pass isn’t as core-sensitive, memory bandwidth and clock speed pay more dividends here. Haswell-E still gives you a nice 36 percent boost over the Devil’s Canyon, but that Ivy Bridge-E chip, despite its older core microarchitecture, is only beaten by 12 percent—too close for comfort. Of course, we’d throw in the usual caveat about the very large clock differences between the chips, but we’ve already said that three times. Oh, and yes, we did actually plagiarize by lifting two sentences from a previous CPU review for our description.
That’s OK, we gave ourselves permission.&nbsp;</span></p> <p><span style="color: #333399;">Winner: Core i7-5960X but not by much</span></p> <p><span style="color: #000000;"><img src="/files/u154082/x264pass2.png" alt="x264 pass 2" title="x264 pass 2" width="620" height="499" /></span></p> <p><span style="color: #000000;"><strong>X264 HD 5.01 Pass 2 performance</strong></span></p> <p><span style="color: #000000;">Pass two of the X264 HD 5.01 benchmark is more sensitive to core and thread counts, and we see the Haswell-E come in with a nice 46 percent performance advantage against the Devil’s Canyon chip. The Ivy Bridge-E, though, still represents well. The Haswell-E chip is “only” 22 percent faster than it. Still, this is a solid win for the Haswell-E chip. We also like how we’re seeing very similar scaling of roughly 45 percent in multiple encoding tests. With Intel saying it’s seeing 69 percent scaling with 4K-resolution content in Handbrake, we’re wondering if the Haswell-E would offer similar scaling if we just moved all of our tests up to 4K.</span></p> <p><span style="color: #333399;">Winner: Core i7-5960X</span></p> <p><strong>Click the next page for even more Haswell-E benchmarks.</strong></p> <hr /> <p>&nbsp;</p> <p><span style="color: #000000;"><img src="/files/u154082/stitch.png" alt="stitch" title="stitch" width="620" height="473" /></span></p> <p><span style="color: #000000;"><strong>Stitch.EFx 2.0 Performance&nbsp;</strong></span></p> <p><span style="color: #000000;">Again, we like to mix up our workloads to stress different tasks that aren’t always multi-threaded enough to take advantage of a 12-core Xeon chip. For this test, we shot about 200 images with a Canon EOS 7D using a GigaPan motorized head. That’s roughly 1.9GB of images that go into making our gigapixel image using Stitch.EFx 2.0. The first third of the render is single-threaded as it stitches together the images. The final third is multi-threaded as it does the blending, perspective correction, and other intensive image processing. It’s a good blend of single-threaded and multi-threaded work, so we expected the higher-clocked parts to take the lead. No surprise, the Devil’s Canyon chip’s 4.4GHz advantage puts it in front, and the Haswell-E comes in about 14 percent slower with its 1.1GHz clock disadvantage. The clock speed advantage of the 4GHz Ivy Bridge-E also pays dividends, and we see the Haswell-E losing by about 10 percent. The good news? A dual-core Pentium K running at 4.7GHz coughed up a score of 1,029 seconds (not represented on the chart) and is roughly 22 percent slower than the CPU that costs about 11 times more.&nbsp;</span></p> <p><span style="color: #333399;">Winner: Core i7-4790K</span></p> <p><span style="color: #000000;"><img src="/files/u154082/7zip.png" alt="7-zip" title="7-zip" width="620" height="477" /></span></p> <p><span style="color: #000000;"><strong>7-Zip Performance</strong></span></p> <p><span style="color: #000000;">The popular and free zip utility, 7-Zip, has a nifty built-in benchmark that tells you the theoretical file-compression performance of a CPU. You can pick the workload size and the number of threads. For our test, we maxed it out at 16 threads using an 8MB workload. That gives the Haswell-E a familiar performance advantage—about 45 percent—over the Devil’s Canyon part. Against that Ivy Bridge-E part though, it’s another uncomfortably close one at 8 percent.
Still, a win is a win, even if we have to say that if you have a shiny Core i7-4960X CPU in your system, you’re still doing fine.</span></p> <p><span style="color: #333399;">Winner: Core i7-5960X</span></p> <p><span style="color: #000000;"><img src="/files/u154082/sandra.png" alt="sisoft sandra" title="sisoft sandra" width="620" height="421" /></span></p> <p><span style="color: #000000;"><strong>Sisoft Sandra Memory Bandwidth (GB/s)</strong></span></p> <p>Since this is the first time we’re seeing DDR4 in a desktop part, we wanted to see how it stacked up in benchmarks. But, before you get too excited, remember that we set all three systems to 2,133 data rates. The Devil’s Canyon part is dual-channel, and the Ivy Bridge-E and Haswell-E are both quad-channel. With the memory set at 2,133, we expected Haswell-E to be on par with the Ivy Bridge-E chip, but oddly, it was slower, putting out about 40GB/s of bandwidth. It’s still more than the 27GB/s the dual-channel Devil’s Canyon could hit, but we expected the quad-channel Haswell-E to be closer to double that figure. For what it’s worth, we did double-check that we were operating in quad-channel mode and the clock speeds of our DIMMs. It’s possible this may change as the hardware we see becomes more final. We’ll also note that even at the same clock, DDR4 does suffer a latency penalty over DDR3. That would also be missing the point of DDR4, though. The new memory should give us larger modules and hit higher frequencies far more easily, too, which will nullify that latency issue. Still, the winner is Ivy Bridge-E.</p> <p><span style="color: #333399;">Winner: Core i7-4960X</span></p> <p><span style="color: #000000;"><img src="/files/u154082/3dmarkgpu.png" alt="3d mark" title="3d mark" width="620" height="457" /></span></p> <p><span style="color: #000000;"><strong>3DMark Firestrike Overall Performance</strong></span></p> <p><span style="color: #000000;">Even though 3DMark Firestrike is primarily a graphics benchmark, not having a 3DMark Firestrike score is like not having coffee in the morning. Basically, it’s a tie between all three chips, and 3DMark Firestrike is working exactly as you expect it to: as a GPU benchmark.</span></p> <p><span style="color: #333399;">Winner: Tie</span></p> <p><span style="color: #000000;"><img src="/files/u154082/3dmarkphysics.png" alt="3d mark physics" title="3d mark physics" width="620" height="477" /></span></p> <p><span style="color: #000000;"><strong>3DMark Firestrike Physics Performance</strong></span></p> <p><span style="color: #000000;">3DMark does factor CPU performance into its physics test. It’s certainly not as weighted toward high core counts as other tests are, but we see the Haswell-E with a decent 29 percent bump over the Devil’s Canyon chip. But, breathing down the neck of the Haswell-E is the Ivy Bridge-E chip. To us, that’s damned near a tie. Overall, the Haswell-E wins, but in gaming tasks—at stock clocks—paying for an eight-core monster is unnecessary except for those running multi-GPU setups.</span></p> <p><span style="color: #333399;">Winner: Core i7-5960X</span></p> <p><span style="color: #000000;"><img src="/files/u154082/valveparticle.png" alt="valve particle" title="valve particle" width="620" height="451" /></span></p> <p><span style="color: #000000;"><strong>Valve Particle Benchmark Performance</strong></span></p> <p><span style="color: #000000;">Valve’s Particle test was originally developed to show off quad-core performance to the world.
It uses the company’s own physics magic, so it should give some indication of how well a chip will handle game physics. We’ve long suspected the test is cache- and RAM-latency happy. That seems to be backed by the numbers, because despite the 1.1GHz advantage the Devil’s Canyon chip has, the Haswell-E is in front to the tune of 15 percent. The Ivy Bridge-E chip, though, with its large cache, lower-latency DDR3, and assloads of memory bandwidth, actually comes out on top by about 3 percent. We’ll again note the Ivy Bridge-E part has a 700MHz advantage, so this is a very nice showing for the Haswell-E part.&nbsp;</span></p> <p><span style="color: #333399;">Winner: Core i7-4960X</span></p> <p><span style="color: #000000;"><img src="/files/u154082/dirtlow.png" alt="dirt showdown low" title="dirt showdown low" width="620" height="438" /></span></p> <p><span style="color: #000000;"><strong>Dirt Showdown low-resolution performance</strong></span></p> <p><span style="color: #000000;">For our gaming tests, we decided to run the games at 1366x768 resolution and at very low settings to take the graphics card out of the equation. In one way, you can imagine this as what it would look like if you had infinitely powerful graphics cards in your system. As most games are not heavily multi-threaded and are perfectly fine with a quad-core with Hyper-Threading, we fully expected the parts with the highest clock speeds to win all of our low-resolution, low-quality tests. No surprise, the Devil’s Canyon part at 4.4GHz private schools the 3.3GHz Haswell-E chip. And, no surprise, the 4GHz Ivy Bridge-E also eats the Haswell-E’s lunch and drinks its milk, too.</span></p> <p><span style="color: #333399;">Winner: Core i7-4790K</span></p> <p><span style="color: #000000;"><img src="/files/u154082/dirtultra.png" alt="dirt showdown ultra performance" title="dirt showdown ultra performance" width="620" height="475" /></span></p> <p><span style="color: #000000;"><strong>Dirt Showdown 1080p, ultra performance</strong></span></p> <p><span style="color: #000000;">To make sure we put everything in the right context, we also ran Dirt Showdown at 1920x1080 resolution at Ultra settings. This puts most of the load on the single GeForce GTX 780 we used for our tests. Interestingly, we saw the Haswell-E with a slight edge over the Devil’s Canyon and Ivy Bridge-E parts. We’re not sure it’s a very significant difference, but it’s still technically a win for Haswell-E.</span></p> <p><span style="color: #333399;">Winner: Core i7-5960X</span></p> <p><span style="color: #000000;"><img src="/files/u154082/hitmanlow.png" alt="hitman low" title="hitman low" width="620" height="502" /></span></p> <p><span style="color: #000000;"><strong>Hitman: Absolution, low quality, low resolution&nbsp;</strong></span></p> <p><span style="color: #000000;">We did the same with Hitman: Absolution, running it at low resolution and its lowest settings. The Haswell-E came in about 12 percent slower than the Devil’s Canyon part and 13 percent slower than the Ivy Bridge-E.&nbsp;</span></p> <p><span style="color: #333399;">Winner: Core i7-4960X</span></p> <p><span style="color: #000000;"><img src="/files/u154082/hitmanultra.png" alt="hitman ultra" title="hitman ultra" width="620" height="479" /></span></p> <p><span style="color: #000000;"><strong>Hitman: Absolution, 1080p, ultra quality</strong></span></p> <p><span style="color: #000000;">Again, we tick the settings up to an actual resolution and quality at which people actually play.
Once we do that, the gap closes slightly, with the Haswell-E trailing the Devil’s Canyon by about 8 percent and the Ivy Bridge-E by 9 percent. Still, these are all very playable frame rates, and few could tell the difference.&nbsp;</span></p> <p><span style="color: #333399;">Winner: Core i7-4960X</span></p> <p><span style="color: #000000;"><img src="/files/u154082/tombraider.png" alt="tomb raider low" title="tomb raider low" width="620" height="465" /></span></p> <p><span style="color: #000000;"><strong>Tomb Raider, low quality, low resolution</strong></span></p> <p><span style="color: #000000;">We did the same low-quality, low-resolution trick with Tomb Raider, and while no one needs to see 500 frames per second, it’s pretty much a wash here.&nbsp;</span></p> <p><span style="color: #333399;">Winner: Tie</span></p> <p><span style="color: #000000;"><img src="/files/u154082/tomraiderulti.png" alt="tomb raider ultra" title="tomb raider ultra" width="620" height="472" /></span></p> <p><span style="color: #000000;"><strong>Tomb Raider, 1080p, Ultimate</strong></span></p> <p><span style="color: #000000;">At normal resolutions and settings we were a little surprised, as the Haswell-E actually had a 15 percent advantage over the Devil’s Canyon CPU. We’re not exactly sure why, as the only real advantages we can see are the memory bandwidth and large cache on the Haswell-E part. We seriously doubt it’s due to the number of CPU cores. The Haswell-E also has a very, very slight lead against the Ivy Bridge-E part, too. That’s not bad considering the clock penalty it’s running at.</span></p> <p><span style="color: #333399;">Winner: Core i7-5960X</span></p> <p><span style="color: #000000;"><img src="/files/u154082/metrolastlight.png" alt="metro last light low" title="metro last light low" width="620" height="503" /></span></p> <p><span style="color: #000000;"><strong>Metro Last Light, low resolution, low quality</strong></span></p> <p><span style="color: #000000;">In Metro Last Light at low settings, it’s a wash between all of them.</span></p> <p><span style="color: #333399;">Winner: Tie</span></p> <p><span style="color: #000000;"><img src="/files/u154082/metroveryhigh.png" alt="metro last light high" title="metro last light high" width="620" height="502" /></span></p> <p><span style="color: #000000;"><strong>Metro Last Light, 1080p, Very High quality</strong></span></p> <p><span style="color: #000000;">Metro at high-quality settings mirrors Hitman: Absolution, and we think it favors the parts with higher clock speeds. We should also note that none of the chips, even with the $500 graphics card, could run Metro smoothly at 1080p at high-quality settings. That is, of course, unless you consider 30 to 40 fps to be “smooth.” We don’t. Interestingly, the Core i7-4960X was the overall winner.</span></p> <p><span style="color: #333399;">Winner: Core i7-4960X</span></p> <p><strong>Conclusion:</strong> If you skipped to the very last page to read the conclusion, you’re in the wrong place. You need to go back to page 4 to read our conclusions and what you should buy.
And no, we didn’t do this to generate just one more click either though that would be very clever of us wouldn’t it?</p> http://www.maximumpc.com/haswell-e_review_2014#comments benchmarks cpu haswell e intel ivy bridge e maximum pc processor Review Specs News Reviews Features Fri, 29 Aug 2014 16:00:40 +0000 Gordon Mah Ung 28431 at http://www.maximumpc.com Civilization: Beyond Earth Hands-On http://www.maximumpc.com/civilization_beyond_earth_hands-on_2014 <!--paging_filter--><h3>We play through the first 100 turns of Firaxis' next Civ game</h3> <p>We're still a couple months away from the retail release of Civilization: Beyond Earth (C:BE), but publisher 2K Games couldn't hold back the horde any longer. We've been eager to try it out because it's Civ, but also because it feels like a spiritual sequel to Alpha Centauri, which itself dealt with a nagging question from earlier entries in the series: What happens when you win the game by launching an interstellar ship into space? Where do those people go? At first glance, C:BE looks like a sci-fi Civilization V with an exotic color palette, but a number of new layers unfolded during our time with it.</p> <p>Most Civ games begin with selecting your starting conditions (unless you like to live on the edge and randomize all your choices). Your options include the usual things like world size, continent shape, and faction leader characteristics. In the build that we played, we could choose from three randomly generated planets. We could also let the game randomly choose one of those three for us, or we could tell C:BE to roll the dice and generate three new worlds. If that's not your cup of tea, we could also go to the "Advanced Worlds" menu and choose from about ten worlds with scripted conditions. 82 Eridani e, for example, has no oceans and little water. Or we could choose Archipelago, which was basically the opposite. Eta Vulpeculae b, meanwhile, has one large continent and an abundance of resources and wildlife.</p> <p><img src="/files/u160416/screenshot_terrain_lush02.jpg" width="600" height="354" style="text-align: center;" /></p> <p>Six of the worlds that are accessible from this menu come from the Exoplanets Map Pack, which you get by pre-ordering the game before October 24th. Each of these planets will randomize its geography each time you play, leading to an additional layer of replayability. We were not able to dig up a menu that allowed us to fine-tune specific map or gameplay attributes (such as disabling neutral factions or hostile wildlife), but this was not a final build.</p> <p>Then you can also choose to begin the game with a soldier or worker unit, instead of an explorer. Or you could have a clinic installed in your first city automatically. This building improves the city health stat, which indicates population growth and the happiness of your citizens. You will also choose what ship type you want to use to arrive on the planet. This determines bonuses like starting with 100 energy (the currency of C:BE); the initial visibility of coast lines, alien nests, certain resources; and the size of the fog of war around your first city.</p> <p><img src="/files/u160416/screen_combat_satellitebombard.jpg" width="600" height="341" style="text-align: center;" /></p> <p>Then you choose your colonist type. For example, the Refugee type adds +2 food to every city, which promotes growth. Engineers give you +2 production in every city, which decreases the time it takes to construct buildings. 
Scientists, unsurprisingly, give you +2 science in every city, which increases the speed at which you research new technology. Lastly, you designate your sponsor, which determines who your faction leader is. There are no historical leaders this time, like George Washington or Gandhi. This new gang consists of fictional characters set in a speculative future. We had eight sponsors to choose from. Going with the African Union grants you +10% food in growing cities when their Health rating is 1 or greater. With the Pan-Asian Cooperative, you get a 10% production bonus for Wonders, and 25% faster workers.</p> <p>So after agonizing over all of those branching decisions, you can finally drop into the game. If you've played the last couple of Civ games, the interface should be pretty familiar. Your resources appear in the upper right-hand corner, with positive and negative numbers indicating gains or losses per turn. Hovering the cursor over each one gives you a detailed breakdown of where the resources are coming from, and how they're being consumed. Your lower right-hand corner is for notifications and for running through your list of available actions. The lower left-hand corner shows your selected unit (if any) and its abilities.</p> <p style="text-align: right;"><a href="http://www.maximumpc.com/civilization_beyond_earth_hands-on_2014?page=0,1" target="_blank"><strong>Page 2: Exploration, affinities, and virtues</strong></a></p> <hr /> <p>But while the UI should be familiar, this is definitely an exotic planet, with unfamiliar formations like canyons and craters, clouds of poisonous gas, alien critters used for resources, and other alien critters that are actively hostile. It's definitely dangerous terrain for a fledgling civilization. But you'll find resource pods dotted throughout the landscape, which usually contain caches of energy or satellites. Satellites are launched into orbit and extract energy from the planet's surface, though it's not clear how. They stay up for a limited time, though, so you'll need to keep finding them, or produce them on your own. You'll also encounter stations, which behave similarly to city-states in Civ V.</p> <p>And your explorer (scout) unit can excavate native ruins and giant animal bones to grant more bonuses, like free technology. He can only carry one excavation kit at a time, though, and he needs to return to a city to get more. It also takes five turns to excavate something. This slower pace maintains the unit's viability for a longer stretch than in previous games, and compels you to make more agonizing decisions. Competing factions also don't like it when you excavate something that's closer to their territory than to yours. So you have to balance your desire for discovery against your long-term political risks.</p> <p><img src="/files/u160416/screen_fielding_diplomacy.jpg" title="text-align: center;" width="600" height="341" /></p> <p>Meanwhile, you'll be conducting research on new buildings and units. Instead of going left to right and hitting up pretty much everything along the way, you begin from a central point on the research map and must choose between different branches, each of which contains "leaves," or individual research choices. Each branch has a theme, usually divided into cultural, military, and scientific categories. You can try focusing on one theme, or it might be better to balance as many as you can. Since we were limited to 100 turns, we weren't able to see which turned out to be the better strategy.
The things you encounter on the map, the things you build, and the tech you research will frequently trigger binary choices. At one point, the game made us choose between two stations to conduct business with. One station specialized in converting military equipment for civilian use, while another could increase our science score. Both choices have effects on your relationship with the planet's flora and fauna, and you have three affinities to balance: Harmony, Supremacy, and Purity.</p> <p>Each choice grants you a mix of experience points in each affinity, and enough points in one will move you up a level and grant you a bonus. Hovering your mouse over each affinity (located in the upper left-hand corner) tells you what different levels will do. Level 1 of Harmony, for example, reduces the aggression level of the native creatures. Eventually you'll actually gain health from the poison clouds (called "miasma"), and the highest level of your primary affinity grants a critical element for one of the five available victory conditions. At the same time, you'll eventually be at odds with the factions that have different affinities than yours. You can attempt to smooth over relations by establishing lucrative trading routes, engaging in joint military actions, and good old-fashioned bribery. Or you can attempt to wipe them off the map, if you're not into the whole diplomacy thing.</p> <p><img src="/files/u160416/screen_ui_virtues.jpg" width="600" height="341" style="text-align: center;" /></p> <p>And let's not forget about the Virtue system. These operate like Civ V's social policies, but this time there are four of them with nine tiers, so there's more focus and depth to your choices here. On top of that is a grid of synergies, designed to encourage the exploration of multiple virtues. Activating the first tier of each virtue, for example, gives you a bonus activation of your choosing.</p> <p>Eventually, the 2K staff gently ushered us out the door, and we were reluctant to leave. Beyond Earth has more layers of faction evolution and political intrigue than we're used to seeing in Civ, and we were eager to see the choices that the game would present us with next. We also wanted to build more stuff, of course, and establish more trade routes, explore more of the map, investigate the critters, and maybe start a war or two. Thankfully, we only have about eight more weeks until the game launches into orbit.</p> http://www.maximumpc.com/civilization_beyond_earth_hands-on_2014#comments alpha centauri beyond earth civiliation pc game PC gaming pre-review Sci-fi Sid Meier strategy Games Gaming News Features Web Exclusive Thu, 28 Aug 2014 18:43:23 +0000 Tom McNamara 28439 at http://www.maximumpc.com Broken Age Review http://www.maximumpc.com/broken_age_review_2014 <!--paging_filter--><h3>Two stories, tons of creativity, yummy ice cream, no grog</h3> <p>That’s fair advice for the half of you who will start out Broken Age in a miserable funk instead of a monster-filled fairy tale. At least, that’s how we felt when we initially began our trip through Tim Schafer’s imaginative title—the first half of a two-part, point-and-click adventure from the industry veteran whose previous credits stand well on their own within the genre: Day of the Tentacle, The Secret of Monkey Island, Full Throttle, et cetera.</p> <p>The game splits the two protagonists’ (seemingly) separate story lines right from the start.
We started our journey with the boy, Shay, but found the initial ramp-up to his adventure a bit too convincing.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/2_small_26.jpg"><img src="/files/u152332/2_small_25.jpg" alt="It’s no Mystery Science Theater movie warning, that’s for sure." title="Broken Age" width="620" height="388" /></a></p> <p style="text-align: center;"><strong>It’s no Mystery Science Theater movie warning, that’s for sure.</strong></p> <p>Without spoiling too much of the plot, Shay is trapped on a spaceship that goes above and beyond to protect him from the harshness of growing up. Shay could not be any more apathetic to the idea of daily life with his “mother,” a benevolent, computerized AI of sorts, who washes him, feeds him his daily cereal, and sends him on “adventures” that end in hugs, piles of ice cream, and, most likely, a bout of depression.</p> <p>The other protagonist of this half-game, Vella, presents a more compelling story line. In this case, you’re playing the classic damsel in distress. Rather than being eaten by a giant monster as part of her town’s sacrificial ritual to avoid destruction, she decides to go on a one-woman crusade to slay said monster herself.</p> <p>While Vella’s story line is a bit more action-packed—or at least, feels more so as a result of its classic slay-the-dragon-like premise—we actually found ourselves more proud of our experience in Shay’s adventure. Our favorite moment involved trying to find a way to “kill” our character, for lack of a better way to say it, in order to see if his daily monotony could be averted somehow. Spoiler: It can.</p> <p>That’s the most challenging example of the game’s puzzles that we could come up with, as Broken Age feels perfectly balanced between “breeze on by” and “consult game FAQs” for its overall difficulty. You get just enough quirky items to keep you thinking about what goes where without feeling overwhelmed with options—this isn’t a 20-item-inventory, combine-every-gizmo kind of adventure title.</p> <p>While Broken Age features no hint system, which might frustrate those looking for an extra boost or two in some head-scratching moments, you do have the option to switch between the two separate story lines at a moment’s notice. Think Day of the Tentacle, only your actions in the two stories don’t affect each other—a somewhat curious oversight that we hope developer Double Fine Productions changes up in the game’s second half.</p> <p>There’s no real point to spending much time talking about the game’s graphics, as you’ll fall in love with the beautiful visuals the moment you start adventuring. Kudos to Broken Age’s original orchestration as well—it’s the bread keeping the delicious presentation together. Sharp writing, endless wit, and excellent characterization (with similarly awesome voice talent) all work in tandem to deliver a welcome arrival to a genre whose blockbuster titles are not always at the forefront of gamers’ minds.</p> <p>You won’t forget Broken Age; in fact, we think you’ll be clamoring for quite a while to see how chapter one’s big cliffhanger ends up. More, Tim Schafer!
More!</p> <p><strong>$25,</strong> <a href="http://www.brokenagegame.com/">www.brokenagegame.com</a><strong><a href="http://www.brokenagegame.com/">,</a> ESRB: n/a</strong></p> http://www.maximumpc.com/broken_age_review_2014#comments Broken Age maximum pc May issues 2014 Software Software Reviews Wed, 20 Aug 2014 14:57:20 +0000 David Murphy 28383 at http://www.maximumpc.com OCZ Vertex 460 240GB Review http://www.maximumpc.com/ocz_vertex_460_240gb_review <!--paging_filter--><h3>Rumors of its death were greatly exaggerated</h3> <p>The last time we heard from OCZ was back before the end of 2013, when the company was in the grips of bankruptcy and nobody was sure what its future held. Fast forward to March 2014, and things are looking rather good for the formerly beleaguered company, much to everyone’s surprise. Rather than simply dissolve and fade away like we had feared, the company has been acquired by storage behemoth Toshiba, and is now operating as an independent subsidiary.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/vertex460_lrg_small_0.jpg"><img src="/files/u152332/vertex460_lrg_small.jpg" alt="OCZ’s new drive has a more subdued, corporate look to it, thanks to a takeover by “the man.”" title="OCZ Vertex 460 240GB" width="620" height="449" /></a></p> <p style="text-align: center;"><strong>OCZ’s new drive has a more subdued, corporate look to it, thanks to a takeover by “the man.”</strong></p> <p>The best news is OCZ’s NAND-acquisition troubles are seemingly a thing of the past, as Toshiba is one of the world’s largest manufacturers of NAND. So, it is no surprise that the first drive we’re seeing from the new venture is essentially a reborn Vector drive, only with Toshiba NAND flash. Dubbed the Vertex 460, this “new” drive blends the company’s proprietary Barefoot 3 controller found on its high-end Vector drives with Toshiba’s 19nm MLC NAND flash, so it’s ditching the Micron NAND it used previously. The result is basically a slight watering-down of its Vector 150 drive in order to make it more affordable and consumer-friendly. It also needed to bring its Barefoot 3 controller over to its mainstream line of Vertex-branded drives, so this drive accomplishes that feat, as well.</p> <p>In many ways, the Vertex 460 is very similar to the company’s recent Vector 150 drive, the only difference being the Vector has a five-year warranty and has a higher overall endurance rating to reflect its use of binned NAND flash. The Vertex 460 is no slouch, though, and is rated to handle up to 20GB of NAND writes per day for three years. The drive also utilizes over-provisioning, so 12 percent of the drive is reserved for NAND management by the Barefoot 3 controller. Though you lose some capacity, you gain longer endurance and better performance, so it’s a worthwhile trade-off. The Vertex 460 also offers hardware encryption support, which is very uncommon for a mainstream drive, and though we’d never use it, it’s nice to have options. Otherwise, its specs are par for the course in that it’s a 7mm drive and is available in 120GB, 240GB, and 480GB flavors. It’s also bundled with a 3.5-inch bay adapter as well as a copy of Acronis True Image, which is appreciated.</p> <p>When we strapped the Vertex to our test bench, we saw results that were consistently impressive. In every test, the Vertex 460 was very close to the fastest drives in its class, and in all scenarios it’s very close to saturating the SATA bus, so it’s not really possible for it to be any faster.
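</p> <p><em>As a quick aside, that endurance rating is easier to picture as a lifetime total. The short Python sketch below is our own back-of-the-envelope arithmetic based solely on the 20GB-per-day, three-year figure quoted above; it is illustrative only, not a calculation OCZ publishes in this form.</em></p> <pre>
# Back-of-the-envelope check of the Vertex 460's endurance rating:
# 20GB of NAND writes per day, every day, for three years.
GB_PER_DAY = 20
YEARS = 3
DAYS_PER_YEAR = 365

total_gb = GB_PER_DAY * DAYS_PER_YEAR * YEARS
print(f"Rated lifetime writes: {total_gb:,} GB (~{total_gb / 1000:.1f} TB)")
# -> Rated lifetime writes: 21,900 GB (~21.9 TB)
</pre> <p>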
It had no problem handling small queue depths of four commands in ATTO, and held its own with a 32 queue depth in Iometer, too. It was a minute slower than the Samsung 840 EVO in our Sony Vegas test, which writes a 20GB uncompressed AVI file to the drive, but also much faster than the Crucial M500 in the same test. Overall, there were no weak points whatsoever in its performance, but it is not faster than the Samsung 840 EVO, and its OCZ Toolbox software utility is extremely rudimentary compared to the Samsung app. Though the Vertex 460 is an overall very solid drive, it doesn’t exceed our expectations in any particular category. In other words, it’s a great SSD, but not quite Kick Ass.</p> <p><strong>$190,</strong> <a href="http://ocz.com/">www.ocz.com</a></p> http://www.maximumpc.com/ocz_vertex_460_240gb_review#comments Hard Drive Hardware HDD May issues 2014 OCZ Vertex 460 240GB Review solid state drive ssd Reviews Wed, 20 Aug 2014 14:16:12 +0000 Josh Norem 28382 at http://www.maximumpc.com Nvidia Shield Tablet Review http://www.maximumpc.com/nvidia_shield_tablet_review_2014 <!--paging_filter--><h3>Updated: Now with video review!&nbsp;</h3> <p>Despite its problems, we actually liked <a title="Nvidia Shield review" href="http://www.maximumpc.com/nvidia_shield_review_2013" target="_blank">Nvidia’s original Shield Android gaming handheld</a>. Our biggest issue with it was that it was bulky and heavy. With rumors swirling around about a Shield 2, we were hoping to see a slimmer, lighter design. So consider us initially disappointed when we learned that the next iteration of Shield would just be yet another Android tablet. Yawn, right? The fact of the matter is that the Shield Tablet may be playing in an oversaturated market, but it’s still great at what it sets out to be.</p> <p><iframe src="//www.youtube.com/embed/dGigsxi9-K4" width="620" height="349" frameborder="0"></iframe></p> <p style="text-align: center;"><strong>We've updated our review to include the video review above.</strong></p> <p>At eight inches, the Shield Tablet features a gorgeous 1900x1200 display, which shares the same resolution as Google’s flagship <a title="nexus 7 review" href="http://www.maximumpc.com/google_nexus_7_review_2013" target="_blank">Nexus 7</a> tablet. At 13.1 ounces, the Shield Tablet is about three ounces heavier than the Nexus 7 but still a lot lighter than the original’s 1 lb. 4.7 ounces.&nbsp;</p> <p>Part of the weight increase with the Shield Tablet over the Nexus 7 is due to the extra inch that you’re getting from the screen, but also because the Shield Tablet is passively cooled and has an extra thermal shield built inside to dissipate heat. It’s a little heavier than we like, but isn’t likely to cause any wrist problems. On the back of the Shield is an anti-slip surface and a 5MP camera, and on the front of the tablet is a front-facing 5MP camera and two front-facing speakers. While the speakers are not going to blow away dedicated Bluetooth speakers, they sound excellent for a tablet. In addition to the speakers, the Shield Tablet has a 3.5mm headphone jack up at the top. Other ports include Micro USB, Mini HDMI out, and a MicroSD card slot capable of taking up to 128GB cards. 
Buttons on the Shield include a volume rocker and a power button, which we found to be a little small and shallow for our liking.</p> <p style="text-align: center;"><img src="/files/u154082/shield_tablet_exploded_view_black_bckgr.jpg" alt="Nvidia Shield Tablet guts" title="Nvidia Shield Tablet guts" width="620" height="349" /></p> <p style="text-align: center;"><strong>The guts of the Nvidia Shield Tablet.</strong></p> <p>All of this is running on the latest version of Android KitKat (4.4). Nvidia says that it will update the tablet to Android L within a few weeks of Google’s official release. If Nvidia’s original Shield is any indication of how well the company keeps up with OS updates, you can expect to get the latest version of Android a couple of weeks, if not months, after release. Regardless, the Shield Tablet is running a pretty stock version of Android to begin with, the main difference being that Nvidia has pre-loaded the tablet with its Shield Hub, which is a 10-foot UI used to purchase, download, and launch games.</p> <p>Arguably, the real star of the tablet is Nvidia’s new Tegra K1 mobile superchip. The 2.2GHz quad-core A15 SOC features Nvidia’s Kepler GPU architecture and 192 CUDA cores along with 2GB of low-power DDR3. K1 supports many of the graphical features commonplace in GeForce graphics cards, including tessellation, HDR lighting, global illumination, subsurface scattering, and more.</p> <p>In our performance benchmarks, the K1 killed it. Up until now, the original Shield’s actively cooled Tegra 4 was arguably one of the most, if not <em>the</em> most, powerful Android SOCs on the market, but the K1 slaughters it across the board. In the Antutu and GeekBench benchmarks, we saw modest gains of 12 percent to 23 percent in Shield vs. Shield Tablet action. But in Passmark and GFX Bench’s T-Rex test, we saw nearly a 50 percent spread, and in 3DMark’s mobile Ice Storm Unlimited test, we saw an astounding 90 percent advantage for the Shield Tablet. This is incredible when you consider that the tablet has no fans and a two-watt TDP. Compared to the second-gen Nexus 7, the Shield Tablet benchmarks anywhere from 77 percent to 250 percent faster. This SOC is smoking fast.</p> <p>In terms of battery life, Nvidia claims you’ll get 10 hours watching/surfing the web and about five hours from gaming with its 19.75 Wh battery. This is up 3.75 Wh from Google’s Nexus 7 equivalent, and from our experiential tests, we found those figures to be fairly accurate if not a best-case scenario. It will pretty much last you all day, but you'll still want to let it sip juice every night.</p> <p style="text-align: center;"><img src="/files/u154082/shield_tablet_shield_controller_war_thunder.jpg" alt="Shield Tablet review" title="Shield Tablet review" width="620" height="343" /></p> <p style="text-align: center;"><strong>The new wireless controller uses Wi-Fi Direct instead of Bluetooth for lower latency.</strong></p> <p>Of course, if you’re going to game with it, you’re going to need Nvidia’s new wireless Shield Controller. Sold separately for $60, the 11.2-ounce Shield Controller maintains the same button layout as the original Shield controller, but feels a lot lighter and is more comfortable to hold. While most Android game controllers operate over Bluetooth, Nvidia opted to go with Wi-Fi Direct, stating that it offers 2x faster response time and more bandwidth.
The extra bandwidth allows you to plug a 3.5mm headphone into the controller and also allows you to link up to four controllers to the device, which is an appreciated feature when you hook up the tablet to your HDTV via the Shield Tablet’s <a title="shield console mode" href="http://www.maximumpc.com/nvidia_sweetens_shield_console_android_442_kitkat_price_drop_199_through_april" target="_blank">Console Mode</a>. Other unique features of the controller include capacitive-touch buttons for Android’s home, back, and play buttons. There’s also a big green Nvidia button that launches Shield Hub. The controller also has a small, triangle-shaped clickable touch pad which allows you to navigate your tablet from afar. One quibble with it is that we wish the trackpad was more square, to at least mimic the dimensions of the tablet; the triangle shape was a little awkward to interface with. Another problem that we initially had with the controller was that the + volume button stopped working after a while. We contacted Nvidia about this and the company sent us a new unit, which remedied the issue. One noticeable feature missing from the controller is rumble support. Nvidia said this was omitted on the original Shield to keep the weight down; its omission is a little more glaring this time around, however, since there's no screen attached to the device.</p> <p>The controller isn’t the only accessory that you’ll need to purchase separately if you want to tap into the full Shield Tablet experience. To effectively game with the tablet, you’ll need the Shield Tablet cover, which also acts as a stand. Like most tablets, a magnet in the cover shuts off the Shield Tablet when closed, but otherwise setting up the cover and getting it to act as a stand is initially pretty confusing. The cover currently only comes in black, and while we’re generally not big on marketing aesthetics, it would be nice to have an Nvidia green option to give the whole look a little more pop. We actually think the cover should just be thrown in gratis, especially considering that the cheapest 16GB model costs $300. On the upside though, you do get Nvidia’s new passive DirectStylus 2 that stows away nicely in the body of the Shield Tablet. Nvidia has pre-installed note-writing software and its own Nvidia Dabbler painting program. The nice thing about Dabbler is that it leverages the K1’s GPU acceleration so that you can virtually paint and blend colors in real time. There’s also a realistic mode where the “paint” slowly drips down the virtual canvas like it would in real life.&nbsp;</p> <p style="text-align: center;"><img src="/files/u154082/shield_tablet_shield_controller_trine2_0.jpg" alt="Shield tablet review" title="Shield tablet review" width="620" height="404" /></p> <p style="text-align: center;"><strong>The Shield Controller is a lot lighter and less blocky than the original Shield Portable.</strong></p> <p>But that’s probably not why you’re interested in the Shield Tablet. This device is first and foremost a gaming tablet and even comes with a free Android copy of Trine 2. Trine 2 was originally a PC game and it’s made a great transition to the Shield Tablet. While the game was never known to be a polygon pusher, it looks just as good as it ever did on its x86 debut.&nbsp;</p> <p>With gaming as the primary driver for Shield Tablet, you may wonder why Nvidia didn’t bundle its new controller. 
The company likely learned from Microsoft’s mistake with Kinect and the Xbox One: Gamers don’t like to spend money and getting the price as low as possible was likely on Nvidia’s mind. Of course, not everyone may even want a controller, with the general lack of support for them in games. Nvidia says there are now around 400 Android titles that support its controller, but that’s only a small percentage of Android games and the straight truth is that the overwhelming majority of these games are garbage.&nbsp;</p> <p>Nvidia is making a push for Android gaming, however. The company worked with Valve to port over Half-Life 2 and Portal to the Shield and they look surprisingly fantastic and are easily the two prettiest games on Android at the moment. Whether Android will ever become a legitimate platform for hardcore gaming is anyone’s guess, but at least the Shield Tablet will net you a great front seat if that time ever comes.</p> <p>Luckily, you won’t have to rely solely on the Google Play store to get your gaming fix. Emulators run just as well here as they did on the original Shield and this iteration of Shield is also compatible with Gamestream, which is Nvidia’s streaming technology that allows you to stream games from your PC to your Shield. Gamestream, in theory, lets you play your controller-enabled PC games on a Shield.</p> <p>At this point, Nvidia says Gamestream supports more than 100 games such as Batman: Arkham Origins and Titanfall from EA’s Origin and Valve’s Steam service. The problem, though, is that there are hundreds more games on Steam and Origin that support controllers—but not the Shield Tablet’s controller. For example, Final Fantasy VII, a game that we couldn’t get to work with the original Shield, still isn't supported even though it works with an Xbox controller on the PC. When Gamestream does work, however, it’s relatively lag-free and kind of wonderful. The one caveat here is that you’ll have to get a 5GHz dual-band router to effectively get it working.&nbsp;</p> <p style="text-align: center;"><iframe src="//www.youtube.com/embed/rh7fWdQT2eE" width="620" height="349" frameborder="0"></iframe></p> <p style="text-align: center;"><strong>Nvidia Shield Video demo.</strong></p> <p>Would we buy the Shield Tablet if we owned the original Shield (now renamed the Shield Portable)? Probably not. If we were looking for a new tablet and top-notch gaming performance was on the checklist, the Shield Tablet is easily the top contender today. We’d take it over the second-gen Nexus 7 in a heartbeat. While we understand why Nvidia decided to separate the cover and controller to keep the prices down and avoid the Kinect factor, we think a bundled package with a small price break as an alternative would have been nice. All things considered though, consider us surprised. The Shield Tablet is pretty dang cool.&nbsp;</p> <p><strong>$300</strong></p> <p><em><strong>Update:</strong> The original article incorrectly labeled the Shield Portable benchmarks with the Nexus 7 figures.
The issue has been resolved and both benchmark charts are listed below.&nbsp;</em></p> http://www.maximumpc.com/nvidia_shield_tablet_review_2014#comments android Google Hardware KitKat maximum pc nvidia portable Review shield tablet wireless controller News Reviews Tablets Mon, 18 Aug 2014 21:36:57 +0000 Jimmy Thang 28263 at http://www.maximumpc.com Best Cheap Graphics Card http://www.maximumpc.com/best_cheap_graphics_card_2014 <!--paging_filter--><h3>Six entry-level cards battle for budget-board bragging rights</h3> <p>The video-card game is a lot like Hollywood. Movies like My Left Foot and The Artist take home the Oscars every year, but movies like Grown Ups 2 and Transformers 3 pull in all the cash. It's the same with GPUs, in that everyone loves to talk about $1,000 cards, but the actual bread-and-butter of the market is made up of models that cost between $100 and $150. These are not GPUs for 4K gaming, obviously, but they can provide a surprisingly pleasant 1080p gaming experience, and run cool and quiet, too.</p> <p>This arena has been so hot that AMD and Nvidia have recently released no fewer than six cards aimed at budget buyers. Four of these cards are from AMD, and Nvidia launched two models care of its all-new Maxwell architecture, so we decided to pit them against one another in an old-fashioned GPU roundup. All of these cards use either a single six-pin PCIe connector or none at all, so you don't even need a burly power supply to run them, just a little bit of scratch and the desire to get your game on. Let's dive in and see who rules the roost!</p> <h3>Nvidia's Maxwell changes the game</h3> <p>Budget GPUs have always been low-power components, and usually need just a single six-pin PCIe power connector to run them. After all, a budget GPU goes into a budget build, and those PCs typically don't come with the 600W-or-higher power supplies that provide dual six- or eight-pin PCIe connectors. Since many budget PSUs don't have PCIe connectors, most of these cards come with Molex adapters in case you don't have one. The typical thermal design power (TDP) of these cards is around 110 watts or so, but that number fluctuates up and down according to spec. For comparison, the Radeon R9 290X has a TDP of roughly 300 watts, and Nvidia's flagship card, the GTX 780 Ti, has a TDP of 250W, so these budget cards don't have a lot of juice to work with. Therefore, efficiency is key, as the GPUs need to make the most out of the teeny, tiny bit of wattage they are allotted. During 2013, we saw AMD and Nvidia release GPUs based on all-new 28nm architectures named GCN and Kepler, respectively, and though Nvidia held a decisive advantage in the efficiency battle, it's taken things to the next level with its new ultra-low-power Maxwell GPUs that were released in February 2014.</p> <p>Beginning with the GTX 750 Ti and the GTX 750, Nvidia is embarking on a whole new course for its GPUs, centered around maximum power efficiency.
The goal with its former Kepler architecture was to have better performance per watt compared to the previous architecture named Fermi, and it succeeded, but it's taken that same philosophy even further with Maxwell, which had as its goal to be twice as efficient as Kepler while providing 25 percent more performance.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/maxwell_small_0.jpg"><img src="/files/u152332/maxwell_small.jpg" alt="Maxwell offers far greater power savings by using more granular clock gating, which allows it to shut down unused graphics units. " width="620" height="279" /></a></p> <p style="text-align: center;"><strong>Maxwell offers far greater power savings by using more granular clock gating, which allows it to shut down unused graphics units. </strong></p> <p>Achieving more performance for the same model or SKU from one generation to the next is a tough enough challenge, but to do so by cutting power consumption in half is an even trickier gambit, especially considering the Maxwell GPUs are being fabricated on the same 28nm process it used for Kepler. We always expect more performance for less power when moving from one process to the next, such as 32nm to 28nm or 22nm to 14nm, but to do so on the same process is an amazing achievement indeed. Though Nvidia used many technological advances to reduce power consumption, the main structural change was to how the individual CUDA cores inside the Graphics Processing Clusters (GPCs) are organized and controlled. In Kepler, each GPC contained individual processing units, named SMX units, and each unit featured a piece of control logic that handled scheduling for 192 CUDA cores, which was a major increase from the 32 cores in each block found in Fermi. In Maxwell, Nvidia has gone back to 32 CUDA cores per block, but is putting four blocks into each unit, which are now called SM units. If you're confused, the simple version is this—rather than one piece of logic controlling 192 cores, Maxwell has a piece of logic for each cluster of 32 cores, and there are four clusters per unit, for a total of 128 cores per block. Therefore, it's reduced the number of cores per block by 64, from 192 to 128, which helps save energy. Also, since each piece of control logic only has to pay attention to 32 cores instead of 192, it can run them more efficiently, which also saves energy.</p> <p>The benefit to all this energy-saving is the GTX 750 cards don't need external power, so they can be dropped into pretty much any PC on the market without upgrading the power supply. That makes it a great upgrade for any pre-built POS you have lying around the house.</p> <h4>Gigabyte GTX 750 Ti WindForce</h4> <p>Nvidia's new Maxwell cards run surprisingly cool and quiet in stock trim, and that's with a fan no larger than an oversized Ritz cracker, so you can guess what happens when you throw a mid-sized WindForce cooler onto one of them. Yep, it's so quiet and cool you have to check with your fingers to see if it's even running. This bad boy ran at 45 C under load, making it the coolest-running card we've ever tested, so kudos to Nvidia and Gigabyte on holding it down (the temps, that is). This board comes off the factory line with a very mild overclock of just 13MHz (why even bother, seriously), and its boost clock has been massaged up to 1,111MHz from 1,085MHz, but as always, this is just a starting point for your overclocking adventures. The memory is kept at reference speeds however, running at 5,400MHz. 
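</p> <p><em>Stepping back to the architecture for a moment, the cluster math described above is easier to follow laid out as a quick Python sketch. This is illustrative only: the 192- and 128-core figures come from the Kepler/Maxwell description above, while the per-card SM counts are our own inference from the 512- and 640-core totals in the spec table, not numbers Nvidia quotes here.</em></p> <pre>
# Kepler schedules 192 CUDA cores under one SMX; Maxwell splits each SM
# into four 32-core clusters, for 128 cores per SM (as described above).
KEPLER_CORES_PER_SMX = 192
MAXWELL_CORES_PER_SM = 4 * 32  # = 128

# Core totals from the spec table; SM counts below are inferred, not quoted.
maxwell_cards = {"GTX 750": 512, "GTX 750 Ti": 640}

print(f"Kepler, for comparison: {KEPLER_CORES_PER_SMX} cores per SMX")
for name, cores in maxwell_cards.items():
    sm_units = cores // MAXWELL_CORES_PER_SM
    print(f"{name}: {cores} cores = {sm_units} SMs x {MAXWELL_CORES_PER_SM} cores each")
# -> GTX 750: 512 cores = 4 SMs x 128 cores each
# -> GTX 750 Ti: 640 cores = 5 SMs x 128 cores each
</pre> <p>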
The board sports 2GB of GDDR5 memory, and uses a custom design for its blue-colored PCB. It features two 80mm fans and an 8mm copper heat pipe. Most interesting is the board requires a six-pin PCIe connector, unlike the reference design, which does not.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/9755_big_small_0.jpg"><img src="/files/u152332/9755_big_small.jpg" alt="The WindForce cooler is overkill, but we like it that way. " title="Gigabyte GTX 750 Ti WindForce" width="620" height="500" /></a></p> <p style="text-align: center;"><strong>The WindForce cooler is overkill, but we like it that way. </strong></p> <p>In testing, the GTX 750 Ti WindForce was neck-and-neck with the Nvidia reference design, proving that Nvidia did a pretty good job with this card, and that its cooling requirements don't really warrant such an outlandish cooler. Still, we'll take it, and we loved that it was totally silent at all times. Overclocking potential is higher, of course, but since the reference design overclocked to 1,270MHz or so, we don’t think you should expect moon-shot overclocking records. Still, this card was rock solid, whisper quiet, and extremely cool.</p> <p><strong>Gigabyte GTX 750 Ti WindForce</strong></p> <p><strong><br /></strong></p> <div class="module-content" style="margin-top: -20px;"> <div class="module-text full"> <div class="verdict"><img src="/sites/maximumpc.com/themes/maximumpc/i/mxpc_9.jpg" alt="score:9" title="score:9" width="210" height="80" /></div> </div> </div> <p><strong>$160(Street), <a href="http://www.gigabyte.us/ " target="_blank">www.gigabyte.us</a></strong></p> <h4>MSI GeForce GTX 750 Gaming</h4> <p>Much like Gigabyte's GTX 750 Ti WindForce card, the MSI GTX 750 Gaming is a low-power board with a massive Twin Frozr cooler attached to it for truly exceptional cooling performance. The only downside is the formerly waifish GPU has been transformed into a full-size card, measuring more than nine inches long. Unlike the Gigabyte card though, this GPU eschews the six-pin PCIe connector, as it's just a 55W board, and since the PCIe slot delivers up to 75W, it doesn't even need the juice. Despite this card's entry-level billing, MSI has fitted it with “military-class” components for better overclocking and improved stability. It uses twin heat pipes to dual 100mm fans to keep it cool, as well. It also includes a switch that lets you toggle between booting from an older BIOS in case you run into overclocking issues.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/msigtx750_small_0.jpg"><img src="/files/u152332/msigtx750_small.jpg" alt="MSI’s Twin Frozr cooling apparatus transforms this svelte GPU into a full-sized card." title="MSI GeForce GTX 750 Gaming" width="620" height="364" /></a></p> <p style="text-align: center;"><strong>MSI’s Twin Frozr cooling apparatus transforms this svelte GPU into a full-sized card.</strong></p> <p>Speaking of which, this board lives up to its name and has a beefy overclock right out of the box, running at 1,085MHz base clock and 1,163MHz boost clock. It features 1GB of GDDR5 RAM on a 128-bit interface.</p> <p>The Twin Frozr cooler handles the miniscule amount of heat coming out of this board with aplomb—we were able to press our finger forcibly on the heatsink under load and felt almost no warmth, sort of like when we give Gordon a hug when he arrives at the office. 
As the only GTX 750 in this test, it showed it could run our entire test suite at decent frame rates, but it traded barbs with the slightly less expensive Radeon R7 260X. On paper, both the GTX 750 and the R7 260X are about $119, but rising prices from either increased demand or low supply have placed both cards in the $150 range, making it a dead heat. Still, it's a very good option for those who want an Nvidia GPU and its ecosystem but can't afford the Ti model.</p> <p><strong>MSI GeForce GTX 750 Gaming</strong></p> <p><strong><br /></strong></p> <div class="module-content" style="margin-top: -20px;"> <div class="module-text full"> <div class="verdict"><img src="/sites/maximumpc.com/themes/maximumpc/i/mxpc_8.jpg" alt="score:8" title="score:8" width="210" height="80" /></div> </div> </div> <p><strong>$140, <a href="http://www.msi.com/ " target="_blank">www.msi.com</a></strong></p> <h4>Sapphire Radeon R7 265 Dual-X</h4> <p>The Sapphire Radeon R7 265 is the odds-on favorite in this roundup, due to its impressive specs and the fact that it consumes more than twice the power of the Nvidia cards. Sure, it's an unfair advantage, but hate the game, not the player. This board is essentially a rebadged Radeon HD 7850, which is a Pitcairn part, and it slides right in between the $120 R7 260X and the $180ish R7 270. This card actually has the same clock speeds as the R7 270, but features fewer streaming processors for reduced shader performance. It has the same 2GB of memory, same 925MHz boost clock, same 256-bit memory bus, and so on. At 150W, its TDP is very high—or at least it seems high, given that the GTX 750 Ti costs the exact same $150 and is sitting at just 60W. Unlike the lower-priced R7 260X Bonaire part, though, the R7 265 is older silicon and thus does not support TrueAudio and XDMA CrossFire (bridgeless CrossFire, basically). However, it will support the Mantle API, someday.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/sapphire_radeon_r7_265_dualx_2gb_small_0.jpg"><img src="/files/u152332/sapphire_radeon_r7_265_dualx_2gb_small.jpg" alt="Sapphire's R7 265 is the third card in this roundup to use a two-fan cooling apparatus. " title="Sapphire Radeon R7 265 Dual-X" width="620" height="473" /></a></p> <p style="text-align: center;"><strong>Sapphire's R7 265 is the third card in this roundup to use a two-fan cooling apparatus. </strong></p> <p>The Sapphire card delivered the goods in testing, boasting top scores in many benchmarks and coming in as the only GPU in this roundup to hit the magical 60fps in any test, which was a blistering turn in Call of Duty: Ghosts where it hit 67fps at 1080p on Ultra settings. That's damned impressive, as was its ability to run at 49fps in Battlefield 4, though the GTX 750 Ti was just a few frames behind it. Overall, though, this card cleaned up, taking first place in seven out of nine benchmarks. If that isn't a Kick Ass performance, we don't know what is. 
The Dual-X cooler also kept temps and noise in check, too, making this the go-to GPU for those with small boxes or small monitors.</p> <p><strong> Sapphire Radeon R7 265 Dual-X</strong></p> <p><strong><br /></strong></p> <div class="module-content" style="margin-top: -20px;"> <div class="module-text full"> <div class="verdict"><img src="/sites/maximumpc.com/themes/maximumpc/i/mxpc_9ka.jpg" alt="score:9ka" title="score:9ka" width="210" height="80" /></div> </div> </div> <p><strong>$150 (MSRP), <a href="http://www.sapphiretech.com/ " target="_blank">www.sapphiretech.com</a></strong></p> <h4>AMD Radeon R7 260X</h4> <p>The Radeon R7 260X was originally AMD's go-to card for 1080p gaming on a budget. It’s the only card in the company’s sub-$200 lineup that supports all the next-gen features that appeared in its Hawaii-based flagship boards, including support for TrueAudio, XDMA Crossfire, Mantle (as in, it worked at launch), and it has the ability to drive up to three displays —all from this tiny $120 GPU. Not bad. In its previous life, this GPU was known as the Radeon HD 7790, aka Bonaire, and it was our favorite "budget" GPU when pitted against the Nvidia GTX 650 Ti Boost due to its decent performance and amazing at-the-time game bundles. It features a 128-bit memory bus, 896 Stream Processors, 2GB of RAM (up from 1GB on the previous card), and a healthy boost clock of 1,100MHz. TDP is just 115W, so it slots right in between the Nvidia cards and the higher-end R7 265 board. Essentially, this is an HD 7790 card with 1GB more RAM, and support for TrueAudio, which we have yet to experience.</p> <p style="text-align: center;"><a class="thickbox" href="http://www.maximumpc.com/files/u152332/amdrad_r7_260x_1_small_0.jpg"><img src="http://www.maximumpc.com/files/u152332/amdrad_r7_260x_1_small.jpg" alt="This $120 card supports Mantle, TrueAudio, and CrossFire. " title="AMD Radeon R7 260X" width="620" height="667" /></a></p> <p style="text-align: center;"><strong>This $120 card supports Mantle, TrueAudio, and CrossFire. </strong></p> <p>In testing, the R7 260X delivered passable performance, staking out the middle ground between the faster R7 265 and the much slower R7 250 cards. It ran at about 30fps in tests like Crysis 3 and Tomb Raider, but hit 51fps on CoD: Ghosts and 40fps on Battlefield 4, so it's certainly got enough horsepower to run the latest games on max settings. The fact that it supports all the latest technology from AMD is what bolsters this card's credentials, though. And the fact that it can run Mantle with no problems is a big plus for Battlefield 4 players. We like this card a lot, just like we enjoyed the HD 7790. While it’s not the fastest card in the bunch, it’s certainly far from the slowest.</p> <p><strong>AMD Radeon R7 260X</strong></p> <p><strong><br /></strong></p> <div class="module-content" style="margin-top: -20px;"> <div class="module-text full"> <div class="verdict"><img src="/sites/maximumpc.com/themes/maximumpc/i/mxpc_8.jpg" alt="score:8" title="score:8" width="210" height="80" /></div> </div> </div> <p><strong>$120 <a href="http://www.amd.com/ " target="_blank">www.amd.com</a></strong></p> <h4> <hr />MSI Radeon R7 250 OC</h4> <p>In every competition, there must be one card that represents the lower end of the spectrum, and in this particular roundup, it’s this little guy from MSI. Sure, it's been overclocked a smidge and has a big-boy 2GB of memory, but this GPU is otherwise outgunned, plain and simple. 
For starters, it has just 384 Stream Processors, which is the lowest number we've ever seen on a modern GPU, so it's already severely handicapped right out of the gate. Board power is a decent 65W, but when looking at the specs of the Nvidia GTX 750, it is clearly outmatched. One other major problem, at least for those of us with big monitors, is we couldn't get it to run our desktop at 2560x1600 out of the box, as it only has one single-link DVI connector instead of dual-link. On the plus side, it doesn't require an auxiliary power connector and is just $100, so it's a very inexpensive board and would make a great upgrade from integrated graphics for someone on a strict budget.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/msi_radeon_r7_250_oc_2gb_small_0.jpg"><img src="/files/u152332/msi_radeon_r7_250_oc_2gb_small.jpg" alt="Some R7 250 cards include 1GB of RAM, but this MSI board sports 2GB." title="MSI Radeon R7 250 OC" width="620" height="498" /></a></p> <p style="text-align: center;"><strong>Some R7 250 cards include 1GB of RAM, but this MSI board sports 2GB.</strong></p> <p>That said, we actually felt bad for this card during testing. The sight of it limping along at 9 frames per second in Heaven 4.0 was tough to watch, and it didn't do much better on our other tests, either. Its best score was in Call of Duty: Ghosts, where it hit a barely playable 22fps. In all of our other tests, it was somewhere between 10 and 20 frames per second on high settings, which is simply not playable. We'd love to say something positive about the card though, so we'll note that it probably runs fine at medium settings and has a lot of great reviews on Newegg from people running at 1280x720 or 1680x1050 resolution.</p> <p><strong>MSI Radeon R7 250 OC</strong></p> <p><strong><br /></strong></p> <div class="module-content" style="margin-top: -20px;"> <div class="module-text full"> <div class="verdict"><img src="/sites/maximumpc.com/themes/maximumpc/i/mxpc_6.jpg" alt="score:6" title="score:6" width="210" height="80" /></div> </div> </div> <p><strong>$90 <a href=" http://us.msi.com/ " target="_blank">http://us.msi.com</a></strong></p> <h4>PowerColor Radeon R7 250X</h4> <p>The PowerColor Radeon R7 250X represents a mild bump in specs from the R7 250, as you would expect given its naming convention. It is outfitted with 1GB of RAM, however, and a decent 1,000MHz boost clock. It packs 640 Stream Processors, placing it above the regular R7 250 but about mid-pack in this group. Its 1GB of memory runs on the same 128-bit memory bus as other cards in this roundup, so it's a bit constrained in its memory bandwidth, and we saw the effects of it in our testing. It supports DirectX 11.2, though, and has a dual-link DVI connector. It even supports CrossFire with an APU, but not with another PCIe GPU—or at least that's our understanding of it, since it says it supports CrossFire but doesn't have a connector on top of the card.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/r7-250x-angle_small_0.jpg"><img src="/files/u152332/r7-250x-angle_small.jpg" alt="The R7 250X is a rebadged HD 7770, made for cash-strapped gamers. " title="PowerColor Radeon R7 250X " width="620" height="369" /></a></p> <p style="text-align: center;"><strong>The R7 250X is a rebadged HD 7770, made for cash-strapped gamers. </strong></p> <p>When we put the X-card to the test, it ended up faring a smidgen better than the non-X version, but just barely.
It was able to hit 27 and 28 frames per second in Battlefield 4 and CoD: Ghosts, and 34 fps in Batman: Arkham Origins, but in the rest of the games in our test suite, its performance was simply not what we would call playable. Much like the R7 250 from MSI, this card can't handle 1080p with all settings maxed out, so this GPU is bringing up the rear in this crowd. Since it's priced "under $100" we won't be too harsh on it, as it seems like a fairly solid option for those on a very tight budget, and we'd definitely take it over the vanilla R7 250. We weren't able to see "street" pricing for this card, as it had not been released at press time, but our guess is even though it's one of the slowest in this bunch, it will likely be the go-to card under $100.</p> <p><strong>PowerColor Radeon R7 250X</strong></p> <p><strong><br /></strong></p> <div class="module-content" style="margin-top: -20px;"> <div class="module-text full"> <div class="verdict"><img src="/sites/maximumpc.com/themes/maximumpc/i/mxpc_7.jpg" alt="score:7" title="score:7" width="210" height="80" /></div> </div> </div> <p><strong>$100, <a href="http://www.powercolor.com/ " target="_blank">www.powercolor.com</a></strong></p> <h3>Should you take the red pill or the green pill?</h3> <p><strong>Both companies offer proprietary technologies to lure you into their "ecosystems," so let’s take a look at what each has to offer</strong></p> <h4>Nvidia's Offerings</h4> <p><strong>G-Sync</strong></p> <p>Nvidia's G-Sync technology is arguably one of the strongest cards in Nvidia's hand, as it eliminates tearing in video games caused by the display's refresh rate being out of sync with the frame rate of the GPU. The silicon syncs the refresh rate with the cycle of frames rendered by the GPU, so movement onscreen looks buttery smooth at all times, even below 30fps. The only downside is you must have a G-Sync monitor, so that limits your selection quite a bit.</p> <p><strong>Regular driver releases</strong></p> <p>People love to say Nvidia has "better drivers" than AMD, and though the notion of "better" is debatable, it certainly releases them much more frequently than AMD. That's not to say AMD is a slouch—especially now that it releases a new "beta" build each month—but Nvidia seems to be paying more attention to driver support than AMD.</p> <p><strong>GeForce Experience and ShadowPlay</strong></p> <p>Nvidia's GeForce Experience software will automatically optimize any supported games you have installed, and also lets you stream to Twitch as well as capture in-game footage via ShadowPlay. It's a really slick piece of software, and though we don't need a software program to tell us "hey, max out all settings," we do love ShadowPlay.</p> <p><strong>PhysX</strong></p> <p>Nvidia's proprietary PhysX software allows game developers to include billowing smoke, exploding particles, cloth simulation, flowing liquids, and more, but there's just one problem—very few games utilize it. Even worse, the ones that do utilize it, do so in a way that is simply not that impressive, with one exception: Borderlands 2.</p> <h4>AMD's Offerings</h4> <p><strong>Mantle and TrueAudio</strong></p> <p>AMD is hoping that Mantle and TrueAudio become the must-have "killer technology" it offers over Nvidia, but at this early stage, it's difficult to say with certainty if that will ever happen. 
Mantle is a lower-level API that allows developers to optimize a game specifically targeted at AMD hardware, allowing for improved performance.</p> <p><strong>TressFX</strong></p> <p>This is proprietary physics technology similar to Nvidia's PhysX in that it only appears in certain games, and does very specific things. Thus far, we've only seen it used once—for Lara Croft's hair in Tomb Raider. Instead of a blocky ponytail, her mane is flowing and gets blown around by the wind. It looks cool but is by no means a must-have item on your shopping list, just like Nvidia's PhysX. <strong>&nbsp;</strong></p> <p><strong>Gaming Evolved by Raptr<br /></strong></p> <p>This software package is for Radeon users only, and does several things. First, it will automatically optimize supported games you have installed, and it also connects you to a huge community of gamers across all platforms, including PC and console. You can see who is playing what, track achievements, chat with friends, and also broadcast to Twitch.tv, too. AMD also has a "rewards" program that doles out points for using the software, and you can exchange those points for gear, games, swag, and more.</p> <p><strong>Currency mining</strong></p> <p>AMD cards are better for currency mining than Nvidia cards for several reasons, but their dominance is not in question. The most basic reason is the algorithms used in currency mining favor the GCN architecture, so much so that AMD cards are usually up to five times faster in performing these operations than their Nvidia equivalent. In fact, the mining craze has pushed the demand for these cards is so high that there's now a supply shortage.</p> <h3>All the cards, side by side</h3> <div class="module orange-module article-module"> <div class="module orange-module article-module"><span class="module-name">Benchmarks</span><br /> <div class="module-content"> <div class="module-text full"> <div class="spec-table orange"> <table style="width: 620px; height: 265px;" border="0"> <thead> </thead> <tbody> <tr> <td>&nbsp;</td> <td>MSI Geforce GTX 750 Gaming</td> <td>GigaByte GeForce GTX 750 Ti </td> <td>GeForce GTX 650 Ti Boost *</td> <td>GeForce GTX 660 *</td> <td>MSI Radeon R7 250</td> <td>PowerColor Radeon R7 250X</td> <td>AMD Radeon R7 260X</td> <td>Sapphire Radeon R7 265</td> </tr> <tr> <td class="item">Price</td> <td class="item-dark">$120 </td> <td>$150</td> <td>$160</td> <td>$210</td> <td>$90</td> <td>$100</td> <td>$120</td> <td>$150</td> </tr> <tr> <td>Code-name</td> <td>Maxwell</td> <td>Maxwell</td> <td>Kepler</td> <td>Kepler</td> <td>Oland</td> <td>Cape Verde</td> <td>Bonaire</td> <td>Curaco</td> </tr> <tr> <td class="item">Processing cores</td> <td class="item-dark">512</td> <td>640</td> <td>768</td> <td>960</td> <td>384</td> <td>640</td> <td>896</td> <td>1,024</td> </tr> <tr> <td>ROP units</td> <td>16</td> <td>16</td> <td>24</td> <td>24</td> <td>8</td> <td>16</td> <td>16</td> <td>32</td> </tr> <tr> <td>Texture units</td> <td>32</td> <td>40<strong><br /></strong></td> <td>64</td> <td>80</td> <td>24</td> <td>40</td> <td>56</td> <td>64</td> </tr> <tr> <td>Memory</td> <td>2GB</td> <td>2GB</td> <td>2GB</td> <td>2GB</td> <td>1GB</td> <td>1GB</td> <td>2GB</td> <td>2GB</td> </tr> <tr> <td>Memory speed</td> <td>1,350MHz</td> <td>1,350MHz</td> <td>1,500MHz</td> <td>1,500MHz</td> <td>1,500MHz</td> <td>1,125MHz</td> <td>1,500MHz</td> <td>1,400MHz</td> </tr> <tr> <td>Memory bus</td> <td>128-bit</td> <td>128-bit</td> <td>192-bit</td> <td>192-bit</td> <td>128-bit</td> <td>128-bit</td> <td>128-bit</td> 
<td>256-bit</td> </tr> <tr> <td>Base clock</td> <td>1,020MHz</td> <td>1,020MHz</td> <td>980MHz</td> <td>980MHz</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> </tr> <tr> <td>Boost clock</td> <td>1,085MHz</td> <td>1,085MHz</td> <td>1,033MHz</td> <td>1,033MHz</td> <td>1,050MHz</td> <td>1,000MHz</td> <td>1,000MHz</td> <td>925MHz</td> </tr> <tr> <td>PCI Express version</td> <td>3</td> <td>3</td> <td>3</td> <td>3</td> <td>3</td> <td>3</td> <td>3</td> <td>3</td> </tr> <tr> <td>Transistor count</td> <td>1.87 billion</td> <td>1.87 billion</td> <td>2.54 billion</td> <td>2.54 billion</td> <td>1.04 billion</td> <td>1.04 billion</td> <td>2.08 billion</td> <td>2.8 billion</td> </tr> <tr> <td>Power connectors</td> <td>N/A</td> <td>N/A</td> <td>1x six-pin</td> <td>1x six-pin</td> <td>N/A</td> <td>1x six-pin</td> <td>1x six-pin</td> <td>1x six-pin</td> </tr> <tr> <td>TDP</td> <td>54W</td> <td>60W</td> <td>134W</td> <td>140W</td> <td>65W</td> <td>80W</td> <td>115W</td> <td>150W</td> </tr> <tr> <td>Fab process</td> <td>28nm</td> <td>28nm</td> <td>28nm</td> <td>28nm</td> <td>28nm</td> <td>28nm</td> <td>28nm</td> <td>28nm</td> </tr> <tr> <td>Multi-card support</td> <td>No</td> <td>No</td> <td>Yes</td> <td>Yes</td> <td>No</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> </tr> <tr> <td>Outputs</td> <td>DVI, VGA, HDMI</td> <td>2x DVI, <br />2x HDMI</td> <td>2x DVI, HDMI, DisplayPort</td> <td>2x DVI, <br />HDMI, DisplayPort</td> <td>DVI-S, VGA, HDMI</td> <td>DVI, VGA, HDMI</td> <td>2x DVI, HDMI, DisplayPort</td> <td>2x DVI, HDMI, DisplayPort</td> </tr> </tbody> </table> <p><em>Provided for reference purposes.<br /></em></p> </div> </div> </div> </div> </div> <h3>How we tested</h3> <p><strong>We lowered our requirements, but not too much</strong></p> <p>We normally test all of our video cards on our standardized test bed, which has now been in operation for a year and a half, with only a few changes along the way. In fact, the only major change we've made to it in the last year was swapping the X79 motherboard and case. The motherboard had endured several hundred video-card insertions, which is well beyond the design specs. The case had also become bent to the point where the video cards were drooping slightly. Some, shall we say, "overzealous" overclocking also caused the motherboard to begin behaving unpredictably. Regardless, it's a top-tier rig with an Intel Core i7-3960X Extreme processor, 16GB of DDR3 memory, an Asus Rampage IV Extreme motherboard, Crucial M500 SSD, and Windows 8 64-bit Enterprise.</p> <p>For the AMD video cards, we loaded Catalyst driver 14.1 Beta 1.6, as that was the latest driver, and for the Nvidia cards, we used the 334.89 WHQL driver that was released just before testing began. We originally planned to run the cards at our normal "midrange GPU" settings, which is 1920x1080 resolution with maximum settings and 4X AA enabled, but after testing began, we realized we needed to back off those settings just a tad. Instead of dialing it down to medium settings, though, as that would run counter to everything we stand for as a magazine, we left the settings on "high" across the board, but disabled AA. 
These settings were a bit much for the lower-end cards, but rather than lower our settings once again, we decided to stand fast at 1080p with high settings, since we figured that's where you want to be gaming and you deserve to know if some of the less-expensive cards can handle that type of action.</p> <h3>Mantle Reviewed</h3> <p><strong>A word about AMD's Mantle API</strong></p> <p>AMD's Mantle API is a competitor to DirectX, optimized specifically for AMD's GCN hardware. In theory, it should allow for better performance since its code knows exactly what hardware it's talking to, as opposed to DirectX's "works with any card" approach. The Mantle API should be able to give all GCN 1.0 and later AMD cards quite a boost in games that support it. However, AMD points out that Mantle will only show benefits in scenarios that are CPU-bound, not GPU-bound, so if your GPU is already working as hard as it can, Mantle isn’t going to help it. However, if your GPU is always waiting around for instructions from an overloaded CPU, then Mantle can offer respectable gains.</p> <p>To test it out, we ran Battlefield 4 on an older Ivy Bridge quad-core, non-Hyper-Threaded Core i5-3470 test bench with the R7 260X GPU at 1920x1080 and 4X AA enabled. As of press time, there are only two games that support Mantle—Battlefield 4 and an RTS demo on Steam named Star Swarm. In Battlefield 4, we were able to achieve 36fps using DirectX, and 44fps using Mantle, which is a healthy increase and a very respectable showing for a $120 video card. The benefit was much smaller in Star Swarm, however, showing a negligible increase of just two frames per second.</p> <p><img src="/files/u152332/bf4_screen_swap_small.jpg" alt="Enabling Mantle in Battlefield 4 does provide performance boosts for most configs." title="Battlefield 4" width="620" height="207" /></p> <p>We then moved to a much beefier test bench running a six-core, Hyper-Threaded Core i7-3960X and a Radeon R9 290X, and we saw an increase in Star Swarm of 100 percent, going from 25fps with DirectX to 51fps using Mantle in a timed demo. We got a decent bump in Battlefield 4, too, going from 84 fps using DirectX to 98 fps in Mantle.</p> <p>Overall, Mantle is legit, but it’s kind of like PhysX or TressFX in that it’s nice to have when it’s supported, and does provide a boost, but it isn’t something we’d count on being available in most games.</p> <h3>Final Thoughts</h3> <h3>If cost is an issue, you've got options</h3> <p>Testing the cards for this feature was an enlightening experience. We don’t usually dabble in GPU waters that are this shallow, so we really had no idea what to expect from all the cards assembled. To be honest, if we were given a shot of sodium pentothal, we’d have to admit that given these cards’ price points, we had low expectations but thought they’d all at least be able to handle 1920x1080 gaming. As spoiled gamers used to running 2K or 4K resolution, 1080p seems like child’s play to us. But we found out that running that resolution at maximum settings is a bridge too far for any GPU that costs less than $120 or so. The $150 models are the sweet spot, though, and are able to game extremely well at 1080 resolution, meaning the barrier to entry for “sweet gaming” has been lowered by $100, thanks to these new GPUs from AMD and Nvidia.</p> <p>Therefore, the summary of our results is that if you have $150 to spend on a GPU, you should buy the Sapphire Radeon R7 265, as it’s the best card for gaming at this price point, end of discussion. 
OK, thanks for reading.</p> <p>Oh, are you still here? OK, here’s some more detail. In our testing, the Sapphire R7 265 was hands-down the fastest GPU at its price point—by a non-trivial margin in many tests—and is superior to the GTX 750 Ti from Nvidia. It was also the only GPU to come close to the magical 60fps we desire in every game, making it pretty much the only card in this crowd that came close to satisfying our sky-high demands. The Nvidia GTX 750 Ti card was a close second, though, and provides a totally passable experience at 1080p with all settings maxed. Nvidia’s trump card is that it consumes less than half the power of the R7 265 and runs 10 C cooler, but we doubt most gamers will care except in severely PSU-constrained systems.</p> <p>Moving down one notch to the $120 cards, the GTX 750 and R7 260X trade blows quite well, so there’s no clear winner. Pick your ecosystem and get your game on, because these cards are totally decent, and delivered playable frame rates in every test we ran.</p> <p>The bottom rung of cards, which consists of the R7 250(X) cards, were not playable at 1080p at max settings, so avoid them. They are probably good for 1680x1050 gaming at medium settings or something in that ballpark, but in our world, that is a no-man’s land filled with shattered dreams and sad pixels.</p> <div class="module orange-module article-module"> <div class="module orange-module article-module"><span class="module-name">Benchmarks</span><br /> <div class="module-content"> <div class="module-text full"> <div class="spec-table orange"> <table style="width: 620px; height: 265px;" border="0"> <thead> </thead> <tbody> <tr> <td>&nbsp;</td> <td>Nvidia GTX 750 Ti (reference)</td> <td>Gigabyte GTX 750 Ti</td> <td>MSI GTX 750 Gaming</td> <td>Sapphire Radeon R7 265</td> <td>AMD Radeon R7 260X</td> <td>PowerColor Radeon R7 250X</td> <td>MSI Radeon R7 250 OC</td> </tr> <tr> <td class="item">Driver</td> <td class="item-dark">334.89</td> <td>334.89</td> <td>334.89</td> <td>14.1 v1.6</td> <td>14.1 v1.6</td> <td>14.1 v1.6</td> <td>14.1 v1.6</td> </tr> <tr> <td>3DMark Fire Storm</td> <td>3,960</td> <td>3,974</td> <td>3,558</td> <td><strong>4,686</strong></td> <td>3,832</td> <td>2,806</td> <td>1,524</td> </tr> <tr> <td class="item">Unigine Heaven 4.0 (fps)</td> <td class="item-dark"><strong>30</strong></td> <td><strong>30<br /></strong></td> <td>25</td> <td>29</td> <td>23</td> <td>17</td> <td>9</td> </tr> <tr> <td>Crysis 3 (fps)</td> <td>27</td> <td>25</td> <td>21</td> <td><strong>32</strong></td> <td>26</td> <td>16</td> <td>10</td> </tr> <tr> <td>Far Cry 3 (fps)</td> <td><strong>40</strong></td> <td><strong>40<br /></strong></td> <td>34</td> <td><strong>40</strong></td> <td>34</td> <td>16</td> <td>14</td> </tr> <tr> <td>Tomb Raider (fps)</td> <td>30</td> <td>30</td> <td>26</td> <td><strong>36</strong></td> <td>31</td> <td>20</td> <td>12</td> </tr> <tr> <td>CoD: Ghosts (fps)</td> <td>51</td> <td>49</td> <td>42</td> <td><strong>67</strong></td> <td>51</td> <td>28</td> <td>22</td> </tr> <tr> <td>Battlefield 4 (fps)</td> <td>45</td> <td>45</td> <td>32</td> <td><strong>49</strong></td> <td>40</td> <td>27</td> <td>14</td> </tr> <tr> <td>Batman: Arkham Origins (fps)</td> <td><strong>74</strong></td> <td>71</td> <td>61</td> <td>55</td> <td>43</td> <td>34</td> <td>18</td> </tr> <tr> <td>Assassin's Creed: Black Flag (fps)</td> <td>33</td> <td>33</td> <td>29</td> <td><strong>39</strong></td> <td>21</td> <td>21</td> <td>14</td> </tr> </tbody> </table> <p><em>Best scores are bolded. 
Our test bed is a 3.33GHz Core i7-3960X Extreme Edition in an Asus Rampage IV Extreme motherboard with 16GB of DDR3/1600 and a Thermaltake ToughPower 1,050W PSU. The OS is 64-bit Windows 8. All games are run at 1920x1080 with no AA except for the 3DMark tests.<br /></em></p> </div> </div> </div> </div> </div> http://www.maximumpc.com/best_cheap_graphics_card_2014#comments 1080p affordable amd benchmarks budget cheap cheap graphics card gpu Hardware Hardware maximum pc may 2014 nvidia Video Card Features Tue, 12 Aug 2014 21:43:32 +0000 Josh Norem 28304 at http://www.maximumpc.com A Crash Course to Editing Images in Adobe Lightroom http://www.maximumpc.com/crash_course_editing_images_adobe_lightroom_2014 <!--paging_filter--><p><strong>When your images aren’t up to snuff, there’s always photo-editing software</strong></p> <p>Photography can be impenetrable, from the gear to actually shooting, and then the image editing software is a whole other uphill battle. Even with Adobe introducing Lightroom as a lightweight Photoshop alternative, it can be daunting to see a screen full of sliders as a complete novice. To help get you from serial Instagrammer to amateur photographer, here’s a crash course to making your images look great with just a few steps in Lightroom.</p> <p><img src="http://www.maximumpc.com/files/u170397/lightroom_crash_course_top.jpg" width="620" height="419" style="font-weight: bold;" /></p> <h3 dir="ltr">Why you should shoot in RAW</h3> <p>First off, before we get to editing any images, it’s super important to start shooting RAW format images if you haven’t already. Unlike JPEGs, RAW files are uncompressed digital negatives that carry much more information. This in turn makes them easier to work with in Lightroom or any image editor. Thanks to this full allotment of data packed into RAW files, you can fix more images otherwise destined for the trash heap, such as blue-tinged messes or almost completely black frames.</p> <p>If that wasn’t enough to sell you on shooting in RAW, this entire guide was done using the uncompressed format to show off and take advantage of the full image editing power of Lightroom.</p> <h3 dir="ltr">Getting started</h3> <p><img src="/files/u170397/image_import.jpg" width="620" height="324" /></p> <p>The first thing you’ll need to do in Lightroom is migrate your images, of course. Upon starting Adobe Lightroom, navigate your mouse up to File and select “Import Photos and Video” (Ctrl+Shift+I). Another shortcut users can take advantage of is that Lightroom will auto-detect any memory cards or cameras plugged into the computer.</p> <p>Lightroom will automatically drop images into dated folders. Unfortunately (or fortunately, for some) this is programmed into the software, but users can always rename their folders. More importantly, keywording your photos will be an indispensable tool to manage, search, and organize your images.</p> <h3 dir="ltr">Getting around inside Lightroom</h3> <p><img src="/files/u170397/main_screen.jpg" width="620" height="333" /></p> <p>Once your images are all loaded into the library, we can start editing by clicking over to the “Develop” screen (or hitting "D" on the keyboard). On the right edge of the screen, users will find a list of settings that allow them to tweak their images.</p> <p>There’s a lot to take in with Lightroom’s interface, but the most important thing to get familiar with is the filmstrip along the bottom, which you’ll use to move between images.
Clicking anywhere on the image displayed in the center window, meanwhile, will zoom into the frame.</p> <p>Just beneath the featured picture there’s also a box designated with “X|Y” that will allow you to view the original image next to its processed counterpart. The button to the left of this comparison toggle will return the window to normal, displaying only the final picture. Along the left side of the screen, users will find a history log of all the edits made so far to each individual photo. And speaking of image settings, they’re all stacked on the right side of the window. At the bottom of this list of editing options there's also a handy "Previous" button to let users undo one change, or "Reset" to start all over again.</p> <h3 dir="ltr">Fix your framing</h3> <p><img src="/files/u170397/image_rotate.jpg" width="620" height="333" /></p> <p>Sometimes, in the rush to capture that decisive moment, there isn’t enough time to line up a perfect composition. But as long as the subject in the photo is in focus and your camera has enough megapixels, there’s always the option to crop the image.</p> <p>The crop tool is located on the right, underneath the histogram, and is designated by the boxed grid icon closest to the left. Depending on the shot, it might be smart to cut away some of the background to isolate the subject. Alternatively, cropping could come in handy to remove a busy background or trim away boring empty space (otherwise known as negative space). Sticklers for completely level images can also bring their mouse cursor to the edge of the frame to rotate the picture.</p> <h3 dir="ltr">Red Eye Correction</h3> <p><img src="/files/u170397/red_eye_correction.jpg" width="620" height="364" /></p> <p>Red eyes and flash photography seem to be inseparable despite all our technological advances, but at least it has gotten incredibly easy to fix this niggling issue. Located just two icons to the right of the crop tool, Red Eye Correction will give you a new cursor that you use to click on any red eyes in the photo. From the point you select, Lightroom will auto-detect the red pupils.</p> <h3 dir="ltr">White balance</h3> <p><img src="/files/u170397/white_balance.jpg" width="620" height="418" /></p> <p>Lighting is one of the toughest things in photography, especially when there’s a mix of sunlight and a blue-hued lightbulb. Not only do the two different types of warm and cool light clash, they also completely throw off all the colors in your photos. With this in mind, shifting the white balance should be one of the very first stops on your image editing train. Lightroom comes with a series of preset white balance settings, just as cameras do, with options such as daylight, shade, tungsten, and flash, to name a few.</p> <p>There's also the option to have Lightroom figure it out all on its own, and most of the time it does an admirable job of picking out the right type of lighting. In case anything still looks a little off, there are also sliders that users can move around. Each slider is fairly self-explanatory: shifting the top knob leftwards will make the image take on a blue shade, while shifting it towards yellow will give your image a warmer cast. The one underneath splits the spectrum between green and magenta.</p> <p>For those wanting a bit more fine-tuned control with a point-and-click solution, there’s the eyedropper tool. Simply hover the dropper over a neutral gray or white area and click it; Lightroom will take its best guess at the white balance from that one spot.</p>
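<p><em>For the programming-inclined, the idea behind that gray-point eyedropper is simple enough to sketch out in a few lines. To be clear, this is not Lightroom's actual algorithm (Adobe works on the raw sensor data with far more sophistication); it's just a minimal illustration of gray-point white balancing, assuming a floating-point RGB image with values between 0 and 1 and the NumPy library.</em></p> <pre>
import numpy as np

def gray_point_white_balance(image, x, y):
    """Rescale the R, G, and B channels so the clicked pixel becomes neutral.

    image: float array of shape (height, width, 3), values in 0..1.
    (x, y): coordinates of a spot that should be neutral gray or white.
    """
    r, g, b = image[y, x]                  # the "should-be-gray" pixel you clicked
    target = (r + g + b) / 3.0             # its average brightness
    gains = target / np.array([r, g, b])   # per-channel correction factors
    balanced = image * gains               # apply the same gains to every pixel
    return np.clip(balanced, 0.0, 1.0)     # keep the result in range
</pre> <p><em>Boosting whichever channel the clicked spot is short on pulls the entire frame back toward neutral, which is exactly what you're doing by eye when you drag the temperature slider.</em></p>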
<p><em>Click on to the next page where we'll dive into more editing magic.<br /></em></p> <hr /> <h3 dir="ltr">Getting to the Meat</h3> <p>Now that we’ve color-corrected the image and fixed up the composition, it's time to adjust the exposure. But before we start, there’s no hard-and-fast rule for what makes the perfect image. It does not have to be a perfectly balanced image where everything in the frame is evenly illuminated. There’s nothing wrong with having harsh shadows or a blindingly bright spot; in fact, it can actually be the thematic part of the picture you want to accentuate.</p> <p>Without further ado, here are the main ways you can use Lightroom to manipulate your images.</p> <ul> <li> <h3 dir="ltr"><img src="/files/u170397/basic_settings.jpg" width="200" height="610" style="float: right; margin: 10px;" /></h3> <p>Exposure: In a nutshell, this lets users make the entire image brighter or darker.</p> </li> <li> <p>Contrast: Contrast changes the difference between the bright and dark parts of the image. Lowering the contrast evens out the exposure, which is helpful if the picture was caught with extremely dark and bright sections. As such, it can help to restore parts of the frame caught in shadows, but the trade-off is that this can also cause the entire picture to turn gray. On the flip side, making photos more contrasty will produce a harsher look and cause colors to intensify.</p> </li> <li> <p>Highlights: Similar to Exposure, this affects brightness, but it specifically targets the brightest parts of the frame. In most cases, pulling it down is useful for bringing back clouds lost in blinding sunlight. Alternatively, photographers will want to tweak the highlights when photographing anything with a backlit screen, or lights at night.</p> </li> <li> <p>Shadows: On the flip side of Highlights, changing the Shadows slider will brighten or darken any areas caught in shade.</p> </li> <li> <p>Whites: Despite the fact we’ve already adjusted the bright parts of the frame, changing the White level in the image appears to do the same thing. Appears. What the Whites slider really does is set the very lightest tones in the image (the white point), whereas Highlights controls the bright tones just below it.</p> </li> <li> <p>Blacks: At the opposite end of the spectrum, Blacks dictates how the darkest parts of the image look. This can be helpful to make sure dark colors aren't grayed out when you've already brightened up the shadows.</p> </li> <li> <p>Auto Tone: Aside from setting all the parameters manually, Lightroom also has a handy Auto Tone tool. As with auto white balance, Auto Tone automatically adjusts the picture to what the program thinks will look best.</p> </li> </ul> <h3 dir="ltr">Time to get technical</h3> <p>Aside from the mix of sliders and staring at the image preview, a much more technical way of editing is using the histogram, which appears at the very top of the right side panel. Essentially, it displays a graphical overview of the picture’s full tonal range, with darker pixels filling out the left side and brighter pixels stacking up towards the right. Every edit we just explained can be done by clicking on parts of this histogram and dragging them around. Either way works, so it's really up to your preference.</p>
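<p><em>If you're curious what those sliders are actually doing to the histogram, here's a rough sketch in the same NumPy style. This is not Adobe's math (Lightroom applies far fancier tone mapping to the raw data); it's just the basic idea behind the tonal histogram and the Exposure and Contrast controls, again assuming a floating-point RGB image with values between 0 and 1.</em></p> <pre>
import numpy as np

def luminance_histogram(image, bins=256):
    """The kind of tonal histogram Lightroom draws: darks on the left, brights on the right."""
    luma = image @ np.array([0.2126, 0.7152, 0.0722])   # weight R, G, B into perceived brightness
    counts, _ = np.histogram(luma, bins=bins, range=(0.0, 1.0))
    return counts

def exposure_contrast(image, exposure_ev=0.0, contrast=0.0):
    """Crude stand-ins for the Exposure and Contrast sliders."""
    out = image * (2.0 ** exposure_ev)           # +1 EV doubles brightness, -1 EV halves it
    out = (out - 0.5) * (1.0 + contrast) + 0.5   # pivot the tones around mid-gray
    return np.clip(out, 0.0, 1.0)
</pre> <p><em>Raising the contrast steepens the curve around mid-gray, which is why shadows get darker and highlights get brighter at the same time; lowering it flattens everything toward gray, just as described above.</em></p>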
<h3 dir="ltr">Making photos “pop”</h3> <p>The tonal curve isn’t all there is to editing images. Just underneath the exposure settings is a set of controls called Presence. Starting with Clarity, users can increase the sharpness of their images or give them a dreamy, hazy quality. Saturation intensifies colors in the photo, which can be useful for bringing back some color on gray and cloudy days.</p> <p>Vibrance does a similar job of intensifying colors, except in a slightly smarter fashion than Saturation. Rather than uniformly bumping up the hues in the frame, Vibrance increases the intensity of muted colors whilst leaving already bright colors alone.</p> <p><em>Next up: Sharpening, Noise Reduction, Lens Correction, and more.<br /></em></p> <hr /> <h3 dir="ltr">Detail control</h3> <p>Located in the "Detail" section below Lightroom’s "Basic" editing options, you’ll find controls to sharpen photos and reduce their noise.</p> <p dir="ltr"><strong>Sharpening</strong></p> <p dir="ltr"><strong><img src="/files/u170397/sharpening_mask.jpg" width="620" height="363" /><br /></strong></p> <p>First, to quell any misconceptions, Sharpening won’t fix images for soft focus, camera shake, or any mistakes made at the time of taking the shot. Rather, sharpening is a tool to accentuate details already in the photo. Just don’t overdo it, as oversharpening introduces a slew of new problems, including harsh edges, grainy noise, and smooth lines transforming into jagged zigzags.</p> <p>There are four parameters when it comes to sharpening images (plus one handy modifier key):</p> <ul> <li> <p><strong>The Alt key:</strong> Before we actually get started with any settings, know that holding down the Alt key is an invaluable tool that will give you a clearer, alternate view of what’s going on while you move the sliders around.</p> </li> <li> <p><strong>Amount:</strong> As you might have guessed, this increases the amount of sharpening you add. The value starts at zero, and as users push towards the high end, they will end up enhancing the noise in the image along with sharpening the details.</p> </li> <li> <p><strong>Radius:</strong> Image sharpening mainly refines edges, but the Radius can be extended by a few pixels. The radius number corresponds to how many pixels around each edge Lightroom will sharpen. A high radius number will intensify details with a thicker edge.</p> </li> <li> <p><strong>Detail:</strong> The Detail slider determines how many edges in the image get sharpened. With lower values, the image editor will only target large edges in the frame, while a value of 100 will include every small edge.</p> </li> <li> <p><strong>Masking:</strong> Although every other slider has been about incorporating more sharpening into the image, Masking does the opposite by telling Lightroom which areas should not be sharpened. Just keep in mind that masking works best on images with an isolated background. The sharpening mask's effectiveness is significantly more limited with busy images, where there are edges everywhere.</p> </li> </ul> <p dir="ltr"><strong>Noise Reduction</strong></p> <p dir="ltr"><strong><img src="/files/u170397/noise_reduction.jpg" width="620" height="364" /><br /></strong></p> <p>Noise is unavoidable, whether it’s due to shooting at higher ISOs or a result of bumping up the exposure in post. Luckily, there’s a way to save images from looking like sandpaper.</p> <ul> <li> <p><strong>Luminance:</strong> Our first stop towards reducing noise. Increasing this value will smooth over any stippling on the photo.
Take care not to raise this too high, as Lightroom will begin sacrificing detail and turn the picture into a soft mess.</p> </li> <li> <p><strong>Detail:</strong> In case users want to better preserve the sharp details in their image, they should increase the Detail slider.</p> </li> <li> <p><strong>Contrast:</strong> This is specifically used to tone down the amount of chromatic noise—typically green and red flecks that make their way into high ISO images. Unless there is colored noise in the image, it’s best to leave this set to 0.</p> </li> </ul> <h3 dir="ltr">Lens Correction</h3> <p>Moving on, we’re going to start correcting for imperfections in the lens by scrolling down the right sidebar to "Lens Corrections."</p> <p dir="ltr"><strong>Lens profiles</strong></p> <p dir="ltr"><strong><img src="/files/u170397/lens_correction.jpg" width="620" height="333" /><br /></strong></p> <p>Enter the round hole, square peg problem. No matter how well-engineered an expensive lens is, it will always produce some amount of distortion thanks to the nature of curved lenses filtering light onto flat sensors. The good news is this is the easiest thing to correct for. Simply click on "Enable Profile Corrections" on the "Basic" pane of Lens Corrections and Lightroom will do the work for you. Witness as your images are automatically corrected for barrel distortion and vignetting (dark corners). It's pretty much foolproof, unless, of course, Adobe has not made a lens profile for the lens you shot with. It also might not be necessary to always turn this option on, as some photos might look better with the vignetting and distortion left in.</p> <p dir="ltr"><strong>Color Fringing</strong></p> <p dir="ltr"><strong><img src="/files/u170397/fringing.jpg" width="620" height="333" /><br /></strong></p> <p>Fringing, for those who don’t know, appears as a purple or blue-green outline when an object is captured against a bright background—the most common example being a tree limb with the bright sky behind it. It can be a minor quibble with photos in most cases, but certain lenses fringe so badly it can make a scene look like it was outlined with a colored pencil.</p> <p>Luckily, getting rid of fringing in Lightroom can be as easy as spotting it and then clicking on it. To start, select the Color pane within Lens Corrections and use the eyedropper just as we did with white balance. Usually, fringing appears at points of high contrast, so bring the cursor over to dark edges that meet a bright background. It might take a little bit of sniffing around, but stay vigilant and you should be able to spot some misplaced purple or green-blue colors eventually. Some lenses are guilty of fringing terribly while others control it well, so it’s really up to you if the flaw is noticeable enough to merit correction.</p> <p dir="ltr"><strong>Chromatic Aberration</strong></p> <p>Since we’re here anyway, go ahead and click on the option to remove chromatic aberration—another type of color fringing, where different wavelengths of light fail to focus on the same point—since it’s as simple as turning the option on.</p> <h3 dir="ltr">You Can’t Save Them All</h3> <p><img src="/files/u170397/cannot_save.jpg" width="620" height="333" /></p> <p>Despite how extensive this guide might appear, there’s even more editing magic to mine from Lightroom—we haven’t even gotten to making black-and-white images, or split toning!
This is only a crash course to help you make your images look better; the only way to master photography is to keep on shooting and practicing.</p> <p>In the same breath, however, we would recommend that users not use Lightroom as a crutch. Although Lightroom can do a lot to salvage poorly shot images, it’s no excuse to just shoot half-assed and expect to fix things up afterwards. Otherwise, post-processing will end up eating up most of the shooter's time, and eventually they’ll realize that there are certain images even Lightroom can’t save (as evidenced by the one shown above). Image editing software can be a great help, but it’s no substitute for good old skilled photography.</p> http://www.maximumpc.com/crash_course_editing_images_adobe_lightroom_2014#comments Adobe image editing Lighroom Lighroom crash course Media Applications photoshop post processing Software Software Features Wed, 06 Aug 2014 17:43:10 +0000 Kevin Lee 28246 at http://www.maximumpc.com Xidax M6 Mining Rig Review http://www.maximumpc.com/xidax_m6_mining_rig_review_2014 <!--paging_filter--><h3>A gaming rig that pays for itself</h3> <p>Exotic car paint, multiple GPUs, and custom-built chassis be damned, boutique PC builder <a title="xidax" href="http://www.maximumpc.com/tags/Xidax" target="_blank">Xidax</a> thinks it has the sexiest sales pitch on the planet with its <strong>M6 Mining Rig</strong>: It pays for itself! Now, we can’t say this PC is basically “free” because it ain’t that, but Xidax says by using the box’s spare GPU cycles to mine for crypto-currency, this baby would be paid off in about four months. To be honest, it’s not something we’ve ever considered, as we’ve seen gaming rigs, and we’ve seen coining rigs, but never in the same box. It seems like a solid idea though, as the system can game during the day, then mine at night to help cover its cost.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/xidax_guts13979_small_0.jpg"><img src="/files/u152332/xidax_guts13979_small.jpg" alt="The Xidax M6 Mining Rig comes set up with everything you need to start mining crypto-currency almost right out of the box." title="Xidax M6 Mining Rig" width="620" height="676" /></a></p> <p style="text-align: center;"><strong>The Xidax M6 Mining Rig comes set up with everything you need to start mining crypto-currency almost right out of the box.</strong></p> <p>The system’s specs include a 3.4GHz Core i5-4670K with 16GB of RAM, a Corsair RM 850 PSU, closed-loop liquid cooler, 250GB Samsung 840 EVO SSD, 1TB WD Black, and a pair of Sapphire Radeon R9 290X cards. In application performance, it’s pretty pedestrian with its stock-clocked Core i5-4670K. Why not something more badass? Xidax says it weighed hardware choices carefully because the pricier the hardware, the longer it takes to pay off with crypto-coins. The Radeons are a wise choice, as they offer about twice the performance of Nvidia’s fastest GPUs in mining applications. Gaming is also quite excellent (obviously, for a two-card system), and its mining performance is impressive at 1.7 to 1.8 kilohashes per second. (Hashes of the kilo/mega/giga variety are the units of measurement for mining productivity.)</p> <p>Xidax ships the PC ready to start mining operations almost right out of the box, which is normally a daunting task. It also includes a Concierge (or should we say coincierge) service that has a Xidax rep remotely connect to the rig and do a final tune on the box for maximum mining performance. On this particular machine, it came ready to mine for Dogecoin and was forecast to make about $21.60 a day, or $670 a month, on a 24/7 schedule—including electricity costs.</p>
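<p>A little back-of-the-envelope math puts that forecast in perspective. The snippet below simply divides the system's $3,000 price by the quoted daily earnings; treat it as a rough estimate, since coin values (and therefore the payoff date) swing wildly.</p> <pre>
def days_to_break_even(system_price, earnings_per_day):
    """How long the rig has to mine before it has covered its own sticker price."""
    return system_price / earnings_per_day

# Figures quoted in this review; the $21.60/day forecast already accounts for electricity.
days = days_to_break_even(3000.00, 21.60)
print(f"{days:.0f} days, or about {days / 30:.1f} months")   # roughly 139 days, or 4.6 months
</pre> <p>That lands in the same ballpark as Xidax's roughly-four-month claim, provided the coin market cooperates.</p>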
<p>What’s the catch? There are a few. First, it’s loud when mining. In fact, it’s so loud that you won’t be able to stand being in the same room with it. Second, you can’t do anything with it while it’s mining because all GPU resources are pegged to the max. Third, crypto-currency can be volatile. Bitcoin saw its value see-saw from $130 to $1,242 and then back to $455 and $900 in just four months. It could all go kaput in a few months, or who knows—the government might even step in and ruin the fun.</p> <p>Considering its performance outside of mining, the M6 Mining Rig is pricey at $3,000. However, the price includes a lifetime warranty on parts and service except for the GPUs. Those carry a five-year warranty, which is still surprisingly good, considering that board vendors are already making noises that they don’t want to eat the cost of dead boards killed by mining. Xidax says it will cover them, though. And—again—it pays for itself, right?</p> <p>That’s ultimately the appeal of the M6 Mining Rig, but it has to be carefully considered by potential buyers. After all, anything that sounds too good to be true usually is, but then again, it is a powerful gaming PC that could theoretically pay for itself in a few months. And even if the market blew up, at least you’d still have a formidable gaming PC rather than just standing there with your RAM sticks in one hand. And if it works out, whoa baby, you just got a PC for free!</p> <p><strong>$3,000,</strong> <a href="http://www.xidax.com/">www.xidax.com</a></p> <p><img src="/files/u154082/xidax_benchmarks.png" alt="xidax benchmarks" title="xidax benchmarks" width="620" height="277" /></p> http://www.maximumpc.com/xidax_m6_mining_rig_review_2014#comments april issues 2014 bitcoin dogecoin Hardware maximum pc Review xidax m6 mining computer Reviews Systems Wed, 06 Aug 2014 16:42:51 +0000 Gordon Mah Ung 28234 at http://www.maximumpc.com Intel 730 Series SSD 480GB Review http://www.maximumpc.com/intel_730_series_ssd_480gb_review <!--paging_filter--><h3>An overclocked enterprise SSD, priced accordingly</h3> <p><a title="intel" href="http://www.maximumpc.com/tags/Intel_0" target="_blank">Intel</a> has largely been absent from the high-end SSD market for many years, which has been a real head-scratcher, considering the original X-25M’s dominance back in 2009. That all changes this month with the release of its all-new <strong>730 series SSD</strong>. It springs from the loins of its data center SSDs, which use validated NAND and Intel’s enterprise-level controller technology. To emphasize this heritage, Intel isn’t bragging about the drive’s overall speed, but instead notes the drive is rated to handle up to 70GB of writes per day, which is higher than any other SSD on the market by a huge margin. It features capacitors to protect data being written in case of a power outage, which is an unusual but not unprecedented feature on a consumer SSD. Intel also backs the drive with a five-year warranty.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/ww_13_18_small_0.jpg"><img src="/files/u152332/ww_13_18_small.jpg" alt="Intel’s new flagship SSD is validated for a whopping 70GB of writes per day."
title="Intel 730 Series SSD 480GB" width="620" height="437" /></a></p> <p style="text-align: center;"><strong>Intel’s new flagship SSD is validated for a whopping 70GB of writes per day.</strong></p> <p>To create the 730 Series, Intel has basically taken the NAND flash and controller from its data center–oriented S3700 SSD and bumped up the clock and interface speeds. If you recall the “SSD overclocking” demo Intel held at PAX last year, this is the result, though Intel decided against letting consumers overclock the drive. Instead, it did the overclocking at the factory so that the drives could be validated at those speeds. To drive home the point that this is an SSD made for enthusiasts, Intel has even adorned it with a sweet-looking Skulltrail badge.</p> <p>The drive is a 7mm unit, so it will fit inside an ultrabook, but is available only in 240GB and 480GB capacities. It’s odd that it’s not available in 750GB or higher capacities, but our guess is Intel is afraid of the sky-high sticker price that such a drive would require; the two capacities it’s offering are priced very high at $250 and $490, respectively. The drive features Intel’s 20nm MLC NAND and its own third-generation controller. It’s ditched SandForce, along with many of the other SSD makers in the business. One interesting note is that since this is an enterprise drive, it essentially doesn’t have a “low-power state,” so it’s not intended for mobile usage. Also, it consumes 5W under load, which is double the consumption of even a 7,200rpm mobile hard drive.</p> <p>When we strapped the 730 Series drive to our test bench, we saw results that were a bit slower overall than we expected. It topped the charts in AS SSD, which measures read and write speeds of incompressible data, but the Intel drive was only a smidge faster than most, and not by enough to make it stand out, as they are all very fast. It was a bit slower than average in straight-line sequential read speeds, topping out at 468MB/s for reads and 491MB/s for writes. While this is still plenty fast, it’s a bit short of the 550MB/s Intel claims the drive is capable of, which would totally saturate the SATA 6Gb/s interface.</p> <p>It was also oddly slow in the ATTO benchmark, which has a queue depth of four and is a “best case scenario” for most drives. It scored just 373MB/s for 64KB read speeds, compared to 524MB/s for the Samsung 840 Pro. We ran the test several times to verify, so it’s not an aberration. It placed mid-pack in PCMark Vantage, but was slower than its competition in our real-world Sony Vegas test, where we write a 20GB uncompressed AVI file to the drive.</p> <p>Overall, this drive is a bit of a conundrum. We have no doubt it’s reliable, as Intel has always been strong in that regard and this drive is full of safety-oriented features. But is it more reliable than a Samsung 840 Pro for the average consumer? We doubt it, and therefore the drive’s extra-high price tag doesn’t make much sense.
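<p>To be fair, the endurance rating really is in another league, and it's easy to quantify. The quick calculation below (a rough sketch using only the figures Intel quotes) shows what 70GB of writes per day adds up to over the five-year warranty.</p> <pre>
def lifetime_writes_tb(gb_per_day, warranty_years):
    """Total data written if you hit the rated daily workload every day of the warranty."""
    return gb_per_day * 365 * warranty_years / 1000.0   # convert GB to TB

# 70GB per day across the 730 Series' five-year warranty
print(f"{lifetime_writes_tb(70, 5):.0f} TB")   # prints 128 TB
</pre> <p>Few desktop users will ever come close to writing that much data, which brings us right back to the price question.</p>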
If Intel realizes it’s no longer the only game in town and adjusts the price a bit, it’ll be a much more competitive drive, but as it stands, we must give it a so-so verdict of 8.</p> <p><strong>$490,</strong> <a href="http://www.intel.sg/content/www/xa/en/homepage.html">www.intel.com</a></p> http://www.maximumpc.com/intel_730_series_ssd_480gb_review#comments Hardware Intel 730 Series SSD 480GB maximum pc May issues 2014 solid state drive Reviews SSD Wed, 06 Aug 2014 16:36:43 +0000 Josh Norem 28289 at http://www.maximumpc.com Gigabyte Radeon R9 290X OC Review http://www.maximumpc.com/gigabyte_radeon_r9_290x_oc_review <!--paging_filter--><h3>As good as it gets, if you can find one to buy</h3> <p>Aftermarket Radeon R9 290X GPUs are beginning to make the rounds, and this month we had a WindForce-cooled behemoth from <a title="gigabyte" href="http://www.maximumpc.com/tags/Gigabyte" target="_blank">Gigabyte</a> strutting its stuff in the lab. Unlike last month’s <a title="sapphire tri x r9 290x" href="http://www.maximumpc.com/sapphire_tri-x_radeon_r9_290x_review" target="_blank">Sapphire Tri-X R9 290X</a>, this board features a custom PCB in addition to the custom cooler, whereas the Sapphire slapped a huge cooler onto the reference design circuit board. Theoretically, this could allow for higher overclocks on the Gigabyte due to better-quality components, but more on that later.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/windforce14052_small_0.jpg"><img src="/files/u152332/windforce14052_small.jpg" alt="Unlike the reference design, Gigabyte’s R9 290X is cool, quiet, and overclockable." title="Gigabyte Radeon R9 290X OC" width="620" height="476" /></a></p> <p style="text-align: center;"><strong>Unlike the reference design, Gigabyte’s R9 290X is cool, quiet, and overclockable.</strong></p> <p>This is the overclocked version of the card, so it clocks up to 1,040MHz under load, which is a mere 40MHz over stock. These boards always have conservative overclocks out of the box, though, and that is by no means the final clock speed for this card. We’ve covered its WindForce cooler in past reviews, so we won’t go into all the details, but it’s a three-fan cooler that only takes up two PCIe slots and uses six heat pipes with inclined heatsinks to better dissipate the warm. It’s good for 450W of heat dispersal, according to Gigabyte, and since the R9 290X is roughly a 300W card (AMD has never given a TDP for this particular model for some reason), the WindForce cooler should be more than up to the job.</p> <p>Like all Radeon R9 290X boards, this sucker is big and long, measuring 11.5 inches. Gigabyte recommends you use at least a 600W power supply with it, and it sports two dual-link DVI ports for 2560x1600 gaming, as well as HDMI 1.4 and DisplayPort 1.2a if you want to run 4K. The card comes bundled with a free set of headphones. It used to include a free copy of Battlefield 4, but the company told us it was no longer offering the game bundle because it had run out of coupons. The MSRP of the board is $620, but some stores had it for $599 while others marked it up to $700.</p> <p>Once we had this Windy Bad Boy in the lab, we were very curious to compare it to the Sapphire Tri-X R9 290X we tested last month. 
Since both cards feature enormous aftermarket coolers, have the exact same specs and clocks, and are roughly the same price, we weren’t surprised to find that they performed identically for the most part.</p> <p>If you look at the benchmark chart, in every test the two cards are almost exactly the same—the only exception being Metro, but since that’s a PhysX game, AMD cards can get a bit wonky sometimes. In every other test, the two cards are within a few frames-per-second difference, making them interchangeable. Both cards also run in the mid–70 C zone under load, which is 20 C cooler than the reference design. We were able to overclock both cards to just a smidge over 1,100MHz, as well.</p> <p>“Okay,” you are saying to yourself. “I’m ready to buy!” Well, that’s where we run into a small problem. Gigabyte’s MSRP for this card is $620—the same as the Sapphire Tri-X card—but at press time, the cheapest we could find it for was $700 on Newegg. We can’t ding Gigabyte for Newegg’s pricing, but it’s a real shame these R9 290X cards are so damned expensive.</p> <p><strong>$620,</strong> <a href="http://www.gigabyte.us/">www.gigabyte.us</a></p> http://www.maximumpc.com/gigabyte_radeon_r9_290x_oc_review#comments Air Cooling amd april issues 2014 Gigabyte Radeon R9 290X OC gpu graphics card Hardware maximum pc Review Reviews Tue, 05 Aug 2014 19:52:42 +0000 Josh Norem 28227 at http://www.maximumpc.com