cpu http://www.maximumpc.com/taxonomy/term/291/ en Intel to Close Manufacturing Facility in Costa Rica http://www.maximumpc.com/intel_close_manufacturing_facility_costa_rica_2014 <!--paging_filter--><h3><img src="/files/u69/intel_costa_rica.jpg" alt="Intel Costa Rica" title="Intel Costa Rica" width="228" height="160" style="float: right;" />Closing up shop in Costa Rica is Intel's latest attempt to cut costs</h3> <p><strong>Intel, the world's largest supplier of semiconductors, is in the process of shutting down an assembly and test plant in Costa Rica</strong> as part of continued efforts to slash costs across the board. Closing the plant will result in around 1,500 layoffs, as well as take away one of Costa Rica's major exports. Intel issued a statement saying the closure is completely unrelated to the election of the new Costa Rica government.</p> <p>"It's being closed and consolidated into our other operations throughout the world," Intel spokesman Chuck Mulloy said in a <a href="http://www.reuters.com/article/2014/04/09/us-intel-costa-rica-idUSBREA371TJ20140409" target="_blank">statement to <em>Reuters</em></a>.</p> <p>Intel said it will still have more than 1,000 engineers, finance, and human resource employees in Costa Rica. The chip maker also plans to continue with some research and development projects in the area, which will entail the hiring of another 200 "high-value" positions.</p> <p>This all falls in line with Intel's announcement in January that it was planning to reduce its workforce of around 107,000 employees by about 5 percent this year.</p> <p><em>Follow Paul on <a href="https://plus.google.com/+PaulLilly?rel=author" target="_blank">Google+</a>, <a href="https://twitter.com/#!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="http://www.facebook.com/Paul.B.Lilly" target="_blank">Facebook</a></em></p> http://www.maximumpc.com/intel_close_manufacturing_facility_costa_rica_2014#comments costa rica cpu Hardware intel processor News Wed, 09 Apr 2014 14:38:35 +0000 Paul Lilly 27593 at http://www.maximumpc.com AMD Rolls Out Socketed Sempron and Athlon Chips with AM1 Platform to Retail http://www.maximumpc.com/amd_rolls_out_socketed_sempron_and_athlon_chips_am1_platform_retail_2014 <!--paging_filter--><h3><img src="/files/u69/athlon_sempron.jpg" alt="Athlon and Sempron" title="Athlon and Sempron" width="228" height="132" style="float: right;" />Socketed APUs have arrived</h3> <p>The answer is $34, which addresses the question of what price AMD's new socketed "Kabini" APUs will debut at. There's also the cost of the motherboard to factor in, so add another $25 to $35. As to when you'll be able to buy these new parts, <strong>AMD today announced the global availability of its AM1 platform</strong> featuring its quad-core and dual-core Sempron and Athlon APU lineup based on Kabini.</p> <p>To <a href="http://www.maximumpc.com/amd_introduces_am1_platform_socketed_kabini_apu_2014">quickly refresh</a> what it is we're talking about about, AMD's AM1 platform consists of pairing a socketed Kabini APU with a socket FS1b motherboard sporting a pin-grid-array (PGA). What's unique about this platform is that it's AMD's first "system in a socket" offering -- essentially an SoC (System-on-Chip). Socketed Kabini parts sport two or four Jaguar CPU cores along with an integrated memory controller, Radeon graphics with Graphics Core Next (GCN) architecture, and all the supported I/O functions (two SATA 6Gbps, two USB 3.0, eight USB 2.0, DisplayPort, HDMI, and VGA). 
With all this residing on the APU, it allows motherboard makers to deliver cheap boards.</p> <p>AMD is making available four APUs to kick off the platform. They include the following:</p> <ul> <li>Athlon 5350: 4 cores, 2.05GHz, 128 GCN Radeon cores, 600MHz GPU, 1600MHz memory, 2MB cache, 25W TDP, $59</li> <li>Athlon 5150: 4 cores, 1.60GHz, 128 GCN Radeon cores, 600MHz GPU, 1600MHz memory, 2MB cache, 25W TDP, $49</li> <li>Sempron 3850: 4 cores, 1.3GHz, 128 GCN Radeon cores, 450MHz GPU, 1600MHz memory, 2MB cache, 25W TDP, $39</li> <li>Sempron 2650: 2 cores, 1.45GHz, 128 GCN Radeon cores, 400MHz GPU, 1333MHz memory, 1MB cache, 25W TDP, $34</li> </ul> <p><img src="/files/u69/am1_bay_trail.jpg" alt="AMD AM1 versus Bay Trail" title="AMD AM1 versus Bay Trail" width="620" height="332" /></p> <p>One point AMD is really trying to drive home is that consumers now have access to quad-core computing for under $40. AMD's also pitching its AM1 platform as a superior solution to Intel's Bay Trail with faster performance in everything from encryption and web surfing to playing games, and more.</p> <p>Several motherboard makers have signed on to support AM1, including ASRock, Asus, Biostar, Gigabyte, MSI, and ECS. You should be able to find both boards and the above mentioned Athlon and Sempron parts today from Amazon, NCIX, Newegg, and TigerDirect.</p> <p><em>Follow Paul on <a href="https://plus.google.com/+PaulLilly?rel=author" target="_blank">Google+</a>, <a href="https://twitter.com/#!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="http://www.facebook.com/Paul.B.Lilly" target="_blank">Facebook</a></em></p> http://www.maximumpc.com/amd_rolls_out_socketed_sempron_and_athlon_chips_am1_platform_retail_2014#comments am1 amd Athlon athlon 3530 athlon 5150 Build a PC cpu Hardware kabini processor sempron sempron 2650 sempron 3850 News Wed, 09 Apr 2014 12:53:39 +0000 Paul Lilly 27591 at http://www.maximumpc.com AMD Tweaks Wafer Agreement with GlobalFoundries to Include GPUs http://www.maximumpc.com/amd_tweaks_wafer_agreement_globalfoundries_include_gpus_2014 <!--paging_filter--><h3><img src="/files/u69/amd_gpu.jpg" alt="AMD GPU" title="AMD GPU" width="228" height="223" style="float: right;" />Amended agreement includes $50 million in additional purchase commitments</h3> <p><strong>AMD bumped up its purchase commitments with GlobalFoundries</strong> in 2014 by about $50 million. Under terms of the amended Wafer Supply Agreement (WSA), AMD expects to pay $1.2 billion in all this year, though what's interesting is that the deal is no longer limited to traditional CPUs and APUs; it now includes GPUs and semi-custom game console chips, such as those found in the Xbox 360 and PlayStation 4.</p> <p>AMD has leaned on Taiwan Semiconductor Manufacturing Company (TSMC) to produce graphics chips, including more than 10 million Xbox One and PS4 chips. With console sales expected to keep climbing, AMD essentially ensured that a shortage of parts won't become an issue.</p> <p>"The successful close of our amended wafer supply agreement with GlobalFoundries demonstrates the continued commitment from our two companies to strengthen our business relationship as long-term strategic partners, and GlobalFoundries’ ability to execute in alignment with our product roadmap," <a href="http://www.amd.com/en-us/press-releases/Pages/amd-amends-wafer-2014apr1.aspx" target="_blank">said Rory Read</a>, president and chief executive officer, AMD. 
"This latest step in AMD’s continued transformation plays a critical role in our goals for 2014."</p> <p>While specific figures weren't disclosed, AMD and GlobalFoundries did establish fixed pricing and other commitments as part of the amended agreement.</p> <p><em>Follow Paul on <a href="https://plus.google.com/+PaulLilly?rel=author" target="_blank">Google+</a>, <a href="https://twitter.com/#!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="http://www.facebook.com/Paul.B.Lilly" target="_blank">Facebook</a></em></p> http://www.maximumpc.com/amd_tweaks_wafer_agreement_globalfoundries_include_gpus_2014#comments amd apu cpu globalfoundries gpu Hardware manufacturing wafers News Wed, 02 Apr 2014 16:13:14 +0000 Paul Lilly 27551 at http://www.maximumpc.com Intel Shows Love for the Desktop at GDC, Unveils 8-Core Haswell-E Processor http://www.maximumpc.com/intel_shows_love_desktop_gdc_unveils_8-core_haswell-e_processor_2014 <!--paging_filter--><h3><img src="/files/u69/intel_chips.jpg" alt="Intel Chips" title="Intel Chips" width="228" height="191" style="float: right;" />Power users needn't worry, the desktop is alive and thriving!</h3> <p>These days you can't flip on the Internet without being bombarded by tablet and smartphone announcements. Hey, we love mobile just as much as the next geek, but we're also stoked when major players take time to shower some love on the desktop, which is what Intel did at the Game Developer Conference (GDC) today. One thing <strong>Intel unveiled today is an 8-core Haswell-E CPU</strong>.</p> <p>Intel didn't go into detailed specifics on many of its announcements, though here's what we know about its Haswell-E launch. It will feature Intel's first 8-core desktop processor and it will be supported by the chip maker's X99 chipset, which is the first desktop platform to support DDR4 memory. Look for Haswell-E to make a grand entrance in the second half of the year.</p> <p>Arriving beforehand -- around the middle of the year -- will be an unlocked 4th Generation Intel Core processor line codenamed "Devil's Canyon." If you're into overclocking, these will be the Haswell parts you want to buy. In addition to an unlocked multiplier, they'll also sport an improved thermal interface material (TIM) for better cooling and updated packaging materials. These will be supported by Intel's 9 Series chipset.</p> <p>Around the same time, Intel will launch an Anniversary Edition Pentium processor. It too will be unlocked and work in both Intel 8 and 9 Series chipsets, and will also have Intel Quick Sync Video technology.</p> <p><img src="/files/u69/intel_desktop_roadmap.jpg" alt="Intel Desktop Roadmap" title="Intel Desktop Roadmap" width="620" height="343" /></p> <p>Finally, Intel is planning to launch an unlocked 5th Generation Core processor with Iris Pro graphics built on a 14nm manufacturing process. It will be the first time Iris will appear on desktop socketed unlocked processors. Intel didn't say when it's planning to release this part.</p> <p>All of these launches are an attempt by Intel to reinvent and reinvigorate the desktop PC market. 
In addition to new hardware, Intel teased its new Ready Mode technology, which will make your PC turn on instantly and always be connected while sipping on power.</p> <p><em>Follow Paul on <a href="https://plus.google.com/+PaulLilly?rel=author" target="_blank">Google+</a>, <a href="https://twitter.com/#!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="http://www.facebook.com/Paul.B.Lilly" target="_blank">Facebook</a></em></p> http://www.maximumpc.com/intel_shows_love_desktop_gdc_unveils_8-core_haswell-e_processor_2014#comments Build a PC cpu devil's canyon gdc Hardware haswell-e intel processor ready more News Thu, 20 Mar 2014 01:27:04 +0000 Paul Lilly 27471 at http://www.maximumpc.com Pre-Order Price List and Specs Pop Up for Refreshed Haswell CPUs http://www.maximumpc.com/pre-order_price_list_and_specs_pop_refreshed_haswell_cpus_2014 <!--paging_filter--><h3><img src="/files/u69/haswell_die_0.jpg" alt="Haswell Die" title="Haswell Die" width="228" height="166" style="float: right;" />Look for new Haswell chips to appear in Q2 2014</h3> <p>Intel's Haswell refresh for the desktop is presumably only weeks away at this point -- rumor has it the new parts will show up in retail in the second quarter of 2014 -- and while we'll have to wait until then for the full scoop, <strong>an online store is already posting pre-order prices and specs of 10 upcoming Haswell CPUs</strong>. Most of them boast minor speed bumps of 100MHz over their predecessors.</p> <p>For that reason, don't hold your breath in anticipation of any major performance improvements. What you can expect (assuming the price list is accurate) are slightly faster parts, both in terms of base and Turbo Boost frequencies. In any event, here's a look at the 10 CPUs as they appear on ShopBLT (with <a href="http://www.cpu-world.com/news_2014/2014030201_Pre-order_prices_of_Haswell_Refresh_desktop_CPUs.html" target="_blank">credit going to <em>CPU-World</em></a> for digging these up):</p> <ul> <li>Intel Celeron G1840: 2.8GHz, 2MB cache, $47.51</li> <li>Intel Celeron G1850: 2.9GHz, 2MB cache, $59.20</li> <li>Intel Pentium G3240: 3.1GHz, 3MB cache, $70.96</li> <li>Intel Pentium G3440: 3.3GHz, 3MB cache, $90.54</li> <li>Intel Core i3 4150: 3.5GHz, 3MB cache, $132.73</li> <li>Intel Core i3 4350: 3.6GHz, 4MB cache, $155.12</li> <li>Intel Core i3 4360: 3.7GHz, 4MB cache, $166.31</li> <li>Intel Core i5 4590: 3.7GHz, 6MB cache, $213.36</li> <li>Intel Core i5 4690: 3.9GHz, 6MB cache, $235.75</li> <li>Intel Core i7 4790: 4GHz, 8MB cache, $326.48</li> </ul> <p>Comparing the prices of the new Haswell parts to currently listed ones on ShopBLT's website, it appears these will cost about the same as the CPUs they're replacing, while the old parts are likely to see a price drop.</p> <p><em>Follow Paul on <a href="https://plus.google.com/+PaulLilly?rel=author" target="_blank">Google+</a>, <a href="https://twitter.com/#!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="http://www.facebook.com/Paul.B.Lilly" target="_blank">Facebook</a></em></p> http://www.maximumpc.com/pre-order_price_list_and_specs_pop_refreshed_haswell_cpus_2014#comments Build a PC celeron cpu Hardware haswell intel processor News Mon, 03 Mar 2014 17:41:52 +0000 Paul Lilly 27373 at http://www.maximumpc.com Intel Adds 64-bit Atom "Merrifield" and "Moorefield" Chips to Mobile Portfolio http://www.maximumpc.com/intel_adds_64-bit_atom_merrifield_and_moorefield_chips_mobile_portfolio_2014 <!--paging_filter--><h3><img src="/files/u69/intel_merrifield_die.jpg" alt="Intel Merrifield 
Die" title="Intel Merrifield Die" width="228" height="197" style="float: right;" />New SoCs give Intel a greater presence in the mobile sector</h3> <p>The mobile device category is dominated by ARM-based processors, and that's something that doesn't sit well with Intel. The Santa Clara chip maker is used to being on top of the semiconductor world, and in the mobile space, <strong>Intel will attempt to wrestle some share away from ARM with its new 64-bit Atom Z3480 processor</strong>, otherwise known as Merrifield, which is a quad-core part intended for Android devices.</p> <p>Intel's Z3480 SoC is built on a 22nm manufacturing process and is based on the company's Silvermont architecture. It has a maximum frequency of 2.13GHz and boasts a PowerVR G6400 GPU for up to 4x better graphics performance compared to Intel's Z2580 processor.</p> <p>"Sixty-four bit computing is moving from the desktop to the mobile device," <a href="http://newsroom.intel.com/community/intel_newsroom/blog/2014/02/24/intel-gaining-in-mobile-and-accelerating-internet-of-things" target="_blank">Intel president Renee James said</a>. "Intel knows 64-bit computing, and we're the only company currently shipping 64-bit processors supporting multiple operating systems today, and capable of supporting 64-bit Android when it is available."</p> <p>Intel also disclosed details about its Moorefield part (Z35XX), which it expects to be available in the second half of the year. Moorefield builds on Merrifield by adding two additional Intel Architecture (IA) cores for up to 2.3GHz of compute performance, an enhanced GPU (PowerVR 6430), faster memory support, and optimizations for Intel's 2014 LTE platform, the Intel XMM 7260.</p> <p>In addition to new parts, Intel said it inked mulit-year agreements with Lenovo, Asus, and Foxconn to expand the availability of Intel-based mobile devices.</p> <p><em>Follow Paul on <a href="https://plus.google.com/+PaulLilly?rel=author" target="_blank">Google+</a>, <a href="https://twitter.com/#!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="http://www.facebook.com/Paul.B.Lilly" target="_blank">Facebook</a></em></p> http://www.maximumpc.com/intel_adds_64-bit_atom_merrifield_and_moorefield_chips_mobile_portfolio_2014#comments 64-bit atom cpu Hardware intel merrifield mobile moorefield mwc2014 processor soc z3480 News Mon, 24 Feb 2014 18:19:02 +0000 Paul Lilly 27328 at http://www.maximumpc.com Intel's Xeon E7 v2 Family Targets Mission Critical Computing http://www.maximumpc.com/intels_xeon_e7_v2_family_targets_mission_critical_computing_2014 <!--paging_filter--><h3><img src="/files/u69/xeon_e7_v2.jpg" alt="Intel Xeon E7 v2" title="Intel Xeon E7 v2" width="228" height="178" style="float: right;" />New workstation processor is a data hog</h3> <p><strong>Intel today announced its Xeon E7 v2 line of processors</strong> featuring the industry's largest memory support (1.5TB per socket versus 1TB per socket delivered by alternative architectures), which enables the chips to rapidly analyze large data sets and deliver real-time insights based on a vast amount of diverse data. 
The processors are intended for mission critical computing chores.</p> <p>"Organizations that leverage data to accelerate business insights will have a tremendous edge in this economy," <a href="http://newsroom.intel.com/community/intel_newsroom/blog/2014/02/18/intel-advances-next-phase-of-big-data-intelligence-real-time-analytics" target="_blank">said Diane Bryant</a>, senior vice president and general manager of Intel's Data Center Group. "The advanced performance, memory capacity and reliability of the Intel Xeon processor E7 v2 family enable IT organizations to deliver real-time analysis of large data sets to spot and capitalize on trends, create new services and deliver business efficiency."</p> <p>A motivating factor for Intel in developing this new line of processors is the "immense amount of data" that's coming from a growing number of connected devices making up the "Internet of Things" (IOT). The Xeon E7 v2 line will make it possible for companies to analyze all that data and receive real-time results from large data sets. The embedded video below offers a look at how this can come in handy:</p> <p><iframe src="//www.youtube.com/embed/TZ_TyJqj9nc" width="620" height="349" frameborder="0"></iframe></p> <p>This is the first new version of Xeon in three years. It's designed to support up to 32-socket servers with configurations of up to 15 processing cores. Nearly two dozen hardware partners have already signed up to support the platform.</p> <p><em>Follow Paul on <a href="https://plus.google.com/+PaulLilly?rel=author" target="_blank">Google+</a>, <a href="https://twitter.com/#!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="http://www.facebook.com/Paul.B.Lilly" target="_blank">Facebook</a></em></p> http://www.maximumpc.com/intels_xeon_e7_v2_family_targets_mission_critical_computing_2014#comments cpu Hardware intel processor server workstation xeon e7 v2 News Tue, 18 Feb 2014 20:12:37 +0000 Paul Lilly 27286 at http://www.maximumpc.com Intel Rumored to Release Refreshed Haswell Processors in April http://www.maximumpc.com/intel_rumored_release_refreshed_haswell_processors_april_2014 <!--paging_filter--><h3><img src="/files/u69/intel_haswell_die.jpg" alt="Intel Haswell Die" title="Intel Haswell Die" width="228" height="128" style="float: right;" />New Haswell processors may arrive a month ahead of schedule</h3> <p>It seems like we hear something new everyday by hanging around the CPU rumor mill. Once again, Intel is at the center of speculation, though instead of talking about delays, rumor has it the Santa Clara chip maker is planning to launch its refreshed Haswell line a month early. That means new Haswell processors could appear just a few weeks from now, in April, rather than May as originally planned.</p> <p>News and rumor site <a href="http://www.digitimes.com/news/a20140214PD204.html?mod=3&amp;q=HASWELL" target="_blank"><em>Digitimes</em> says</a> it was made aware of Intel's change of plans by do-it-yourself motherboard makers in Taiwan. If this turns out to be true, it would add a bit of credence to an earlier <em>Digitimes</em> report <a href="http://www.maximumpc.com/intel_rumored_delay_broadwell_until_q4_2014">claiming</a> Intel is delaying Broadwell until the fourth quarter so that Haswell has more time to sell.</p> <p>While that may be the case, some vendors are worried that refreshed Haswell processors will make it more difficult to clear out their existing Haswell inventories. 
It could get even more difficult in May when motherboard makers release mobos built around Intel's 9-series chipset, however the new chipset will support existing Haswell CPUs.</p> <p><em>Follow Paul on <a href="https://plus.google.com/+PaulLilly?rel=author" target="_blank">Google+</a>, <a href="https://twitter.com/#!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="http://www.facebook.com/Paul.B.Lilly" target="_blank">Facebook</a></em></p> http://www.maximumpc.com/intel_rumored_release_refreshed_haswell_processors_april_2014#comments 9-series Build a PC cpu Hardware haswell intel processor News Mon, 17 Feb 2014 20:14:10 +0000 Paul Lilly 27279 at http://www.maximumpc.com Intel Rumored to Delay Broadwell Until Q4 2014 http://www.maximumpc.com/intel_rumored_delay_broadwell_until_q4_2014 <!--paging_filter--><h3><img src="/files/u69/broadwell_die.jpg" alt="Intel Broadwell Die" title="Intel Broadwell Die" width="228" height="108" style="float: right;" />A Broadwell delay isn't what the PC industry needs</h3> <p>It was <a href="http://www.maximumpc.com/manufacturing_defect_forces_intel_delay_broadwell_until_q1_2014">last October</a> when Intel CEO Brian Krzanich said a "defect density issue" was negatively affecting yields, prompting the Santa Clara chip maker to delay its 14nm Broadwell launch by a quarter. Production was to begin in the first quarter of 2014, though there's a rumor going around that <strong>Intel might postpone Broadwell's big debut</strong> to the fourth quarter of this year. Is that really the case?</p> <p>The rumor <a href="http://www.digitimes.com/news/a20140212PD209.html" target="_blank">originates from <em>Digitimes</em></a> and its "sources from the upstream supply chain." Sometimes <em>Digitimes</em> is spot on with its inside information, and other times it's dead wrong. In this case, it's difficult to figure out because Intel stopped short of outright denying there's another delay.</p> <p>"We continue to make progress with the industry's first 14nm manufacturing process and our second generation 3-D transistors. Broadwell, the first product on 14nm, is up-and-running as we demonstrated at the Intel Developer Forum in Q3 2013. We're now planning to begin production this quarter with shipments to customers later this year," <a href="http://www.extremetech.com/computing/176576-broadwell-bombshell-has-intel-delayed-14nm-deployments-until-q4-2014" target="_blank">Intel told <em>ExtremeTech</em></a> when asked about the rumored delay.</p> <p>On the surface, it sounds like everything is proceeding as planned, though Intel certainly left the door open to a delay by not outright saying everything is still on schedule and that the rumors are false. At the same time, this is one of the reasons why companies like Intel don't comment on rumors or speculation.</p> <p>According to <em>Digitimes</em> and its supply stream sources, it's not so much technical difficulties this time around, but "slow digestion of Haswell processor inventories" that's to blame. 
If that's true, this could be yet another blow to the PC market as a whole.</p> <p><em>Follow Paul on <a href="https://plus.google.com/+PaulLilly?rel=author" target="_blank">Google+</a>, <a href="https://twitter.com/#!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="http://www.facebook.com/Paul.B.Lilly" target="_blank">Facebook</a></em></p> http://www.maximumpc.com/intel_rumored_delay_broadwell_until_q4_2014#comments 14nm broadwell Build a PC cpu delay Hardware intel processor rumor News Thu, 13 Feb 2014 18:42:28 +0000 Paul Lilly 27260 at http://www.maximumpc.com Intel Discusses 15-core Ivytown Processor for Servers http://www.maximumpc.com/intel_discusses_15-core_ivytown_processor_servers2014 <!--paging_filter--><h3><img src="/files/u69/xeon_inside.jpg" alt="Intel Xeon Inside" title="Intel Xeon Inside" width="228" height="171" style="float: right;" />Ivytown will slip into Intel's Xeon E7 chip family</h3> <p>Intel's codenames for processors sound like directions someone might give you if you get lost in the country. Take a wrong turn off of I64 in West Virginia, for example, and you might be told that Ivytown is on the other side of Ivy Bridge, not to be confused with Sandy Bridge. In reality, <strong>Ivytown is Intel's codename for an upcoming 15-core Xeon processor</strong> based on Ivy Bridge and designed for high-end servers.</p> <p>Intel shared a few more details about Ivytown at the International Solid State Circuits Conference (ISSCC) in San Francisco this week. The Santa Clara chip maker said Ivytown will be part of its Xeon E7 lineup, which it's likely to formally introduce next week, <a href="http://www.pcworld.com/article/2096320/intels-15core-xeon-server-chip-has-431-billion-transistors.html#tk.rss_all" target="_blank"><em>PCWorld</em> reports</a>.</p> <p>The Ivytown part is packing 4.31 billion transistors and has the most cores of any Intel x86 server CPU. Intel says it will likely be its fastest performing CPU for servers with frequencies ranging from 1.4GHz to 3.8GHz, drawing anywhere from 40W to 150W of power. Each of the 15 cores supports multithreading so that each Ivytown chip can run 30 threads at the same time.</p> <p>Intel came to the odd core count by arranging the cores across three columns. It also has 40 PCI-Express 3.0 lanes.</p> <p><em>Follow Paul on <a href="https://plus.google.com/+PaulLilly?rel=author" target="_blank">Google+</a>, <a href="https://twitter.com/#!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="http://www.facebook.com/Paul.B.Lilly" target="_blank">Facebook</a></em></p> http://www.maximumpc.com/intel_discusses_15-core_ivytown_processor_servers2014#comments Build a PC cpu e7 Hardware intel ivytown processor server xeon News Tue, 11 Feb 2014 16:53:19 +0000 Paul Lilly 27238 at http://www.maximumpc.com PC Performance Tested http://www.maximumpc.com/pc_performance_tested_2014 <!--paging_filter--><h3><a class="thickbox" style="font-size: 10px; text-align: center;" href="/files/u152332/nvidia_geforce_gtx_780-top_small_0.jpg"><img src="/files/u152332/nvidia_geforce_gtx_780-top_small.jpg" alt="Nvidia’s new GK110-based GTX 780 takes on two ankle-biter GTX 660 Ti GPUs." title="Nvidia’s new GK110" width="250" height="225" style="float: right;" /></a>With our lab coats donned, our test benches primed, and our benchmarks at the ready, we look for answers to nine of the most burning performance-related questions</h3> <p>If there’s one thing that defines the Maximum PC ethos, it’s an obsession with Lab-testing. 
What better way to discern a product’s performance capabilities, or judge the value of an upgrade, or simply settle a heated office debate? This month, we focus our obsession on several of the major questions on the minds of enthusiasts. Is liquid cooling always more effective than air? Should serious gamers demand PCIe 3.0? When it comes to RAM, are higher clocks better? On the surface, the answers might seem obvious. But, as far as we’re concerned, nothing is for certain until it’s put to the test. We’re talking tests that isolate a subsystem and measure results using real-world workloads. Indeed, we not only want to know if a particular technology or piece of hardware is truly superior, but also by how much. After all, we’re spending our hard-earned skrilla on this gear, so we want our purchases to make real-world sense. Over the next several pages, we put some of the most pressing PC-related questions to the test. If you’re ready for the answers, read on.</p> <h4>Core i5-4670K vs. Core i5-3570K vs. FX-8350</h4> <p>People like to read about the $1,000 high-end parts, but the vast majority of enthusiasts don’t buy at that price range. In fact, they don’t even buy the $320 chips. No, the sweet spot for many budget enthusiasts is around $220. To find out which chip is the fastest midrange part, we ran Intel’s new <a title="4670k" href="http://ark.intel.com/products/75048/" target="_blank">Haswell Core i5-4670K</a> against the current-champ <a title="i5 3570K" href="http://ark.intel.com/products/65520" target="_blank">Core i5-3570K</a> as well as AMD’s <a title="vishera fx-8350" href="http://www.maximumpc.com/article/features/vishera_review" target="_blank">Vishera FX-8350</a>.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/fx_small_0.jpg"><img src="/files/u152332/fx_small.jpg" alt="AMD’s FX-8350 has two cores up on the competition, but does that matter?" width="620" height="607" /></a></p> <p style="text-align: center;"><strong>AMD’s FX-8350 has two cores up on the competition, but does that matter?</strong></p> <p><strong>The Test:</strong> For our test, we socketed the Core i5-4670K into an Asus Z87 Deluxe with 16GB of DDR3/1600, an OCZ Vertex 3, a GeForce GTX 580 card, and Windows 8. For the Core i5-3570K, we used the same hardware in an Asus P8Z77-V Premium board, and the FX-8350 was tested in an Asus CrossHair V Formula board. We ran the same set of benchmarks that we used in our original review of the FX-8350 published in the Holiday 2012 issue.</p> <p><strong>The Results:</strong> First, the most important factor in the budget category is the price. As we wrote this, the street price of the Core i5-4670K was $240, the older Core i5-3570K was in the $220 range, and AMD’s FX-8350 went for $200. The 4670K is definitely on the outer edge of the budget sweet spot while the AMD is cheaper by a bit.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/haswell_small_5.jpg"><img src="/files/u152332/haswell_small_4.jpg" alt="Intel’s Haswell Core i5-4670K slots right into the high end of the midrange." title="Haswell" width="620" height="620" /></a></p> <p style="text-align: center;"><strong>Intel’s Haswell Core i5-4670K slots right into the high end of the midrange.</strong></p> <p>One thing that’s not disputable is the performance edge the new Haswell i5 part has. It stepped away from its Ivy Bridge sibling in every test we ran by respectable double-digit margins. 
And while the FX-8350 actually pulled close enough to the Core i5-3570K in enough tests to go home with some multithreaded victories in its pocket, it was definitely kept humble by Haswell. The Core i5-4670K plain and simply trashed the FX-8350 in the vast majority of the tests that can’t push all eight of its cores. Even worse, in the multithreaded tests where the FX-8350 squeezed past the Ivy Bridge Core i5-3570K, Haswell either handily beat or tied the chip with twice its cores.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/ivybridge_small_0.jpg"><img src="/files/u152332/ivybridge_small.jpg" alt="The Core i5-3570K was great in its day, but it needs more than that to stay on top." title="Core i5-3570K" width="620" height="622" /></a></p> <p style="text-align: center;"><strong>The Core i5-3570K was great in its day, but it needs more than that to stay on top.</strong></p> <p>Even folks concerned with bang-for-the-buck will find the Core i5-4670K makes a compelling argument. Yes, it’s 20 percent more expensive than the FX-8350, but in some of our benchmarks, it was easily that much faster or more. In Stitch.Efx 2.0, for example, the Haswell was 80 percent faster than the Vishera. Ouch.</p> <p>So where does this leave us? For first place, we’re proclaiming the Core i5-4670K the midrange king by a margin wider than Louie Anderson. Even the most ardent fanboys wearing green-tinted glasses or sporting an IVB4VR license plate can’t disagree.</p> <p>For second place, however, we’re going to get all controversial and call it for the FX-8350, by a narrow margin. Here’s why: the FX-8350 actually holds up against the Core i5-3570K in a lot of benchmarks, has an edge in multithreaded apps, and its AM3+ socket has a far longer roadmap than LGA1155, which is on the fast track to Palookaville.</p> <p>Granted, Ivy Bridge and LGA1155 are still a great option, especially when bought on a discounted combo deal, but it’s a dead man walking, and our general guidance for those who like to upgrade is to stick to sockets that still have a pulse. Let’s not even mention that LGA1155 is the only one here with a pathetic two SATA 6Gb/s ports. Don’t agree?
Great, because we have an LGA1156 motherboard and CPU to sell you.</p> <div class="module orange-module article-module"><strong><span class="module-name">Benchmarks</span></strong></div> <div class="spec-table orange"> <table style="width: 627px; height: 270px;" border="0"> <thead> <tr style="text-align: left;"> <th class="head-empty"> </th> <th class="head-light">Core i5-4670K</th> <th>Core i5-3570K</th> <th>FX-8350</th> </tr> </thead> <tbody> <tr> <td class="item"><strong>POV Ray 3.7 RC3 (sec)</strong></td> <td class="item-dark"><strong>168.53</strong></td> <td> <p>227.75</p> </td> <td>184.8</td> </tr> <tr> <td><strong>Cinebench 10 Single-Core</strong></td> <td><strong>8,500</strong></td> <td>6,866</td> <td>4,483</td> </tr> <tr> <td class="item"><strong>Cinebench 11.5</strong></td> <td class="item-dark"><strong>6.95<br /></strong></td> <td>6.41</td> <td><strong>6.90</strong></td> </tr> <tr> <td><strong>7Zip 9.20</strong></td> <td>17,898</td> <td>17,504</td> <td><strong>23,728</strong></td> </tr> <tr> <td><strong>Fritz Chess</strong></td> <td><strong>13,305</strong></td> <td>11,468</td> <td>12,506</td> </tr> <tr> <td class="item"><strong>Premiere Pro CS6 (sec)</strong></td> <td class="item-dark"><strong>2,849</strong></td> <td>3,422</td> <td>5,220</td> </tr> <tr> <td class="item"><strong>HandBrake Blu-ray encode&nbsp; (sec)</strong></td> <td class="item-dark"><strong>9,042</strong></td> <td>9,539</td> <td><strong>8,400</strong></td> </tr> <tr> <td><strong>x264 5.01 Pass 1 (fps)</strong></td> <td><strong>66.3<br /></strong></td> <td>57.1</td> <td>61.3</td> </tr> <tr> <td><strong>x264 5.01 Pass 2 (fps)</strong></td> <td><strong>15.8</strong></td> <td>12.7</td> <td><strong>15</strong></td> </tr> <tr> <td><strong>Sandra (GB/s)</strong></td> <td><strong>21.6</strong></td> <td><strong>21.3</strong></td> <td>18.9</td> </tr> <tr> <td><strong>Stitch.Efx 2.0 (sec)</strong></td> <td><strong>836</strong></td> <td>971</td> <td>1,511</td> </tr> <tr> <td><strong>ProShow Producer 5 (sec)</strong></td> <td><strong>1,275</strong></td> <td>1,463</td> <td>1,695</td> </tr> <tr> <td><strong>STALKER: CoP low-res (fps)</strong></td> <td><strong>173.5</strong></td> <td>167.3</td> <td>132.1</td> </tr> <tr> <td><strong>3DMark 11 Physics</strong></td> <td><strong>7,938</strong></td> <td>7,263</td> <td>7,005</td> </tr> <tr> <td><strong>PC Mark 7 Overall</strong></td> <td><strong>6,428</strong></td> <td>5,582</td> <td>4,408</td> </tr> <tr> <td><strong>PC Mark 7 Storage</strong></td> <td>5,300</td> <td><strong>5,377</strong></td> <td>4,559</td> </tr> <tr> <td><strong>Valve Particle (fps)</strong></td> <td><strong>180</strong></td> <td>155</td> <td>119</td> </tr> <tr> <td><strong>Heaven 3.0 low-res (fps)</strong></td> <td><strong>139.4</strong></td> <td>138.3</td> <td>134.4</td> </tr> </tbody> </table> </div> <p><em>Best scores are bolded. Test bed described in text</em></p> <h4>Hyper-Threading vs. No Hyper-Threading<em>&nbsp;</em></h4> <p><a title="hyper threading" href="http://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html" target="_blank">Hyper-Threading</a> came out 13 years ago with the original 3.06GHz Pentium 4, and was mostly a dud. Few apps were multithreaded and even Windows’s own scheduler didn’t know how to deal with HT, making some apps actually slow down when the feature was enabled. But the tech overcame those early hurdles to grow into a worthwhile feature today. 
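</p> <p>To see what HT actually looks like to the operating system, remember that the OS schedules work on logical processors, while the physical core count is what's etched into the die. The few lines of Python below report both; it's an illustrative sketch only, and it assumes the third-party psutil package is available for the physical-core count (the comments note what a 4770K-class chip would report, not output captured from our test bench).</p> <pre>
# Sketch: report logical processors vs. physical cores -- HT is the gap between them.
# Assumes the third-party psutil package is installed (pip install psutil).
import os
import psutil

logical = os.cpu_count()                    # what the OS scheduler sees
physical = psutil.cpu_count(logical=False)  # actual cores on the die

print(f"Physical cores:     {physical}")
print(f"Logical processors: {logical}")
if logical and physical and logical > physical:
    print("Hyper-Threading (or SMT) appears to be enabled.")
else:
    print("One thread per core -- no extra hardware threads detected.")

# A Core i7-4770K with HT enabled reports 4 physical / 8 logical;
# a Core i5-4670K reports 4 and 4.
</pre> <p>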
Still, builders are continually faced with choosing between procs with and without HT, so we wanted to know definitively how much it matters. <em>&nbsp;</em></p> <p><strong>The Test:</strong> Since we haven’t actually run numbers on HT in some time, we broke out a Core i7-4770K and ran tests with HT turned on and off. We used a variety of benchmarks with differing degrees of threadedness to test the technology’s strengths and weaknesses.</p> <p><strong>The Results:</strong> One look at our results and you can tell HT is well worth it if your applications can use the available threads. We saw benefits of 10–30 percent from HT in some apps. But if your app can’t use the threads, you gain nothing. And in rare instances, it appears to hurt performance slightly—as in Hitman: Absolution when run to stress the CPU rather than the GPU. Our verdict is that you should pay for HT, but only if your chores include 3D modeling, video encoding or transcoding, or other thread-heavy tasks. Gamers who occasionally transcode videos, for example, would get more bang for their buck from a Core i5-4670K.</p> <div class="module orange-module article-module"><strong><span class="module-name">Benchmarks</span></strong></div> <div class="spec-table orange"> <table style="width: 627px; height: 270px;" border="0"> <thead> <tr style="text-align: left;"> <th class="head-empty"> </th> <th class="head-light">HT Off</th> <th>HT On</th> </tr> </thead> <tbody> <tr> <td class="item"><strong>PCMark 7 Overall</strong></td> <td class="item-dark">6,308</td> <td> <p><strong>6,348</strong></p> </td> </tr> <tr> <td><strong>Cinebench 11.5</strong></td> <td>6.95</td> <td><strong>8.88</strong></td> </tr> <tr> <td class="item"><strong>Stitch.EFx 2.0 (sec)</strong></td> <td class="item-dark">772</td> <td>772</td> </tr> <tr> <td><strong>ProShow Producer 5.0&nbsp; (sec)</strong></td> <td>1,317</td> <td><strong>1,314</strong></td> </tr> <tr> <td><strong>Premiere Pro CS6 (sec)</strong></td> <td>2,950</td> <td><strong>2,522</strong></td> </tr> <tr> <td class="item"><strong>HandBrake 0.9.9 (sec)</strong></td> <td class="item-dark">1,200</td> <td><strong>1,068</strong></td> </tr> <tr> <td class="item"><strong>3DMark 11 Overall</strong></td> <td class="item-dark">X2,210</td> <td>X2,209</td> </tr> <tr> <td><strong>Valve Particle Test (fps)</strong></td> <td>191</td> <td><strong>226</strong></td> </tr> <tr> <td><strong>Hitman: Absolution, low res (fps)</strong></td> <td><strong>92</strong></td> <td>84</td> </tr> <tr> <td><strong>Total War 2: Shogun CPU Test (fps)</strong></td> <td><strong>42.4</strong></td> <td>41</td> </tr> </tbody> </table> </div> <p><em>Best scores are bolded. We used a Core i7-4770K on a Asus Z87 Deluxe, with a Neutron GTX 240 SSD, a GeForce GTX 580, and 16GB of DDR3/1600 64-bit, with Windows 8</em></p> <p><em>Click the next page to read about air cooling vs water cooling</em></p> <h3> <hr /></h3> <h3>Air Cooling vs. Water Cooling<em>&nbsp;</em></h3> <p>There are two main ways to chill your CPU: a heatsink with a fan on it, or a closed-loop liquid cooler (CLC). Unlike a custom loop, you don't need to periodically drain and flush the system or check it for leaks. The "closed" part means that it's sealed and integrated. This integration also reduces manufacturing costs and makes the setup much easier to install. If you want maximum overclocks, custom loops are the best way to go. But it’s a steep climb in cost for a modest improvement beyond what current closed loops can deliver. 
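</p> <p>One note on how to read the cooling numbers that follow: because the ambient temperature in the Lab drifts a degree or two between runs, the fair way to compare coolers is the load temperature minus ambient, which is exactly what the "Load - Ambient" row in the results table later in this section reports. Here's that normalization as a trivial bit of Python, a sketch only, using two quiet-mode readings pulled from our results:</p> <pre>
# Normalize cooler results by ambient temperature so runs done on different
# days (or in a warmer room) can be compared apples to apples.
def delta_over_ambient(load_c: float, ambient_c: float) -> float:
    """Degrees C the CPU sits above room temperature under load."""
    return round(load_c - ambient_c, 1)

# Quiet-mode readings from the table later in this section:
print(delta_over_ambient(78.3, 22.1))  # Seidon 120M: 56.2 C over ambient
print(delta_over_ambient(66.0, 20.9))  # Kraken X60:  45.1 C over ambient
</pre> <p>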
<em>&nbsp;</em></p> <p>But air coolers are not down for the count. They're still the easiest to install and the cheapest. However, the prices between air and water are so close now that it's worth taking a look at the field to determine what's best for your budget.<em>&nbsp;</em></p> <p><strong>The Test:</strong> To test the two cooling methods, we dropped them into a rig with a hex-core Intel Core i7-3960X overclocked to 4.25GHz on an Asus Rampage IV Extreme motherboard, inside a Corsair 900D. By design, it's kind of a beast and tough to keep cool.</p> <h4>The Budget Class<em>&nbsp;</em></h4> <p><strong>The Results:</strong> At this level, the Cooler Master 212 Evo is legend…ary. It runs cool and quiet, it's easy to pop in, it can adapt to a variety of sockets, it's durable, and it costs about 30 bucks. Despite the 3960X's heavy load, the 212 Evo averages about 70 degrees C across all six cores, with a room temperature of about 22 C, or 71.6 F. Things don’t tend to get iffy until 80 C, so there's room to go even higher. Not bad for a cooler with one 120mm fan on it.</p> <p>Entry-level water coolers cost substantially more, unless you're patient enough to wait for a fire sale. They require more materials, more manufacturing, and more complex engineering. The Cooler Master Seidon 120M is a good example of the kind of unit you'll find at this tier. It uses a standard 120mm fan attached to a standard 120mm radiator (or "rad") and currently has a street price of $60. But in our tests, its thermal performance was about the same, or worse, than the 212 Evo. In order to meet an aggressive price target, you have to make some compromises. The pump is smaller than average, for example, and the copper block you install on top of the CPU is not as thick. The Seidon was moderately quieter, but we have to give the nod to the 212 Evo when it comes to raw performance.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/coolermaster_212evo_small_2.jpg"><img src="/files/u152332/coolermaster_212evo_small_1.jpg" alt="The Cooler Master 212 Evo has arguably the best price- performance ratio around." title="Cooler Master 212" width="620" height="715" /></a></p> <p style="text-align: center;"><strong>The Cooler Master 212 Evo has arguably the best price-performance ratio around.</strong></p> <h4>The Performance Class<em>&nbsp;</em></h4> <p><strong>The Results:</strong> While a CLC has trouble scaling its manufacturing costs down to the budget level, there's a lot more headroom when you hit the $100 mark. The NZXT Kraken X60 CLC is one of the best examples in this class; its dual–140mm fans and 280mm radiator can unload piles of heat without generating too much noise, and it has a larger pump and apparently larger tubes than the Seidon 120M. Our tests bear out the promise of the X60's design, with its "quiet" setting delivering a relatively chilly 66 C, or about 45 degrees above the ambient room temp.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/nzxt_krakenx60_small_3.jpg"><img src="/files/u152332/nzxt_krakenx60_small_1.jpg" alt="It may not look like much, but the Kraken X60 is the Ferrari of closed-loop coolers." title="Kraken X60" width="620" height="425" /></a></p> <p style="text-align: center;"><strong>It may not look like much, but the Kraken X60 is the Ferrari of closed-loop coolers.</strong></p> <p>Is there any air cooler that can keep up? 
Well, we grabbed a Phanteks TC14PE, which uses two heatsinks instead of one, dual–140mm fans, and retails at $85–$90. It performed only a little cooler than the 212 Evo, but it did so very quietly, like a ninja. At its quiet setting, it trailed behind the X60 by 5 C. It may not sound like much, but that extra 5 C of headroom means a higher potential overclock. So, water wins the high end.</p> <div class="module orange-module article-module"><strong><span class="module-name">Benchmarks</span></strong></div> <div class="spec-table orange"> <table style="width: 620px; height: 267px;" border="0"> <thead> <tr style="text-align: left;"> <th class="head-empty"> </th> <th class="head-light"><span style="font-family: times new roman,times;">Seidon 120M Quiet / Performance Mode</span></th> <th><span style="font-family: times new roman,times;">212 Evo<br />Quiet / Performance Mode</span></th> <th><span style="font-family: times new roman,times;">Kraken X60 Quiet / Performance Mode</span></th> <th><span style="font-family: times new roman,times;">TC14PE<br />Quiet / Performance Mode</span></th> </tr> </thead> <tbody> <tr> <td class="item"><strong>Ambient Air</strong></td> <td class="item-dark">22.1 / 22.2</td> <td> <p>20.5 / 20</p> </td> <td>20.9 / 20.7</td> <td>20 / 19.9</td> </tr> <tr> <td><strong>Idle Temperature</strong></td> <td>38 / 30.7</td> <td>35.5 / 30.5</td> <td>29.7 / 28.8</td> <td>32 / <strong>28.5</strong></td> </tr> <tr> <td class="item"><strong>Load Temperature</strong></td> <td class="item-dark">78.3 / 70.8</td> <td>70 / 67.3</td> <td>66 / 61.8</td> <td>70.3 / 68.6</td> </tr> <tr> <td><strong>Load - Ambient</strong></td> <td>56.2 / 48.6</td> <td>49.5 / 47.3</td> <td>45.1 / 41.1</td> <td>50.3/ 48.7</td> </tr> </tbody> </table> </div> <p><em>All temperatures in degrees Celsius. Best scores bolded.</em></p> <h4>Is High-Bandwidth RAM worth it?<em>&nbsp;</em></h4> <p>Today, you can get everything from vanilla DDR3/1333 all the way to exotic-as-hell DDR3/3000. The question is: Is it actually worth paying for anything more than the garden-variety RAM? <em>&nbsp;</em></p> <p><strong>The Test:</strong> For our test, we mounted a Core i7-4770K into an Asus Z87 Deluxe board and fitted it with AData modules at DDR3/2400, DDR3/1600, and DDR3/1333. We then picked a variety of real-world (and one synthetic) tests to see how the three compared.</p> <p><strong>The Results:</strong> First, let us state that if you’re running integrated graphics and you want better 3D performance, pay for higher-clocked RAM. With discrete graphics, though, the advantage isn’t as clear. We had several apps that saw no benefit from going from 1,333MHz to 2,400MHz. In others, though, we saw a fairly healthy boost, 5–10 percent, by going from standard DDR3/1333 to DDR3/2400. The shocker came in Dirt 3, which we ran in low-quality modes so as not to be bottlenecked by the GPU. At low resolution and low image quality, we saw an astounding 18 percent boost. <em>&nbsp;</em></p> <p>To keep you back on earth, you should know that cranking the resolution in the game all but erased the difference. 
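</p> <p>For context on what those module speeds mean in raw throughput, the theoretical peak for DDR3 is simply the transfer rate times 8 bytes per channel times the number of channels. The quick sketch below runs that math for a dual-channel setup like our test bed; these are ceiling figures, and real-world copy speeds land below them:</p> <pre>
# Theoretical peak bandwidth for DDR3: transfers/sec x 8 bytes x channels.
def ddr3_peak_gb_s(transfer_rate_mt_s: int, channels: int = 2) -> float:
    bytes_per_transfer = 8  # each DDR3 channel is 64 bits wide
    return transfer_rate_mt_s * bytes_per_transfer * channels / 1000.0

for speed in (1333, 1600, 2400):
    print(f"DDR3/{speed}: {ddr3_peak_gb_s(speed):.1f} GB/s peak, dual channel")

# DDR3/1333 ~21.3 GB/s, DDR3/1600 ~25.6 GB/s, DDR3/2400 ~38.4 GB/s --
# plenty of headroom, which is why only bandwidth-hungry workloads notice.
</pre> <p>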
To see any actual benefit, we think you’d really need a tri-SLI GeForce GTX 780 setup and expect that the vast majority of games won’t actually give you that scaling.<em>&nbsp;</em></p> <p>We think the sweet spot for price/performance is either DDR3/1600 or DDR3/1866.<em><br /></em></p> <div class="module orange-module article-module"><strong><span class="module-name">Benchmarks</span></strong></div> <div class="spec-table orange"> <table style="width: 620px; height: 267px;" border="0"> <thead> <tr style="text-align: left;"> <th class="head-empty"> </th> <th class="head-light"><span style="font-family: times new roman,times;">DDR3/1333</span></th> <th><span style="font-family: times new roman,times;">DDR3/1600</span></th> <th><span style="font-family: times new roman,times;">DDR3/2400</span></th> </tr> </thead> <tbody> <tr> <td class="item"><strong>Stitch.Efx 2.0 (sec)</strong></td> <td class="item-dark">776</td> <td> <p>773</p> </td> <td><strong>763</strong></td> </tr> <tr> <td><strong>PhotoMatix HDR (sec)</strong></td> <td>181</td> <td>180</td> <td>180</td> </tr> <tr> <td class="item"><strong>ProShow Producer 5.0 (sec) <br /></strong></td> <td class="item-dark">1,370</td> <td>1,337</td> <td><strong>1,302</strong></td> </tr> <tr> <td><strong>HandBrake 0.9.9 (sec)</strong></td> <td>1,142</td> <td>1,077</td> <td><strong>1,037</strong></td> </tr> <tr> <td><strong>3DMark Overall</strong></td> <td>2,211</td> <td>2,214</td> <td>2,215</td> </tr> <tr> <td><strong>Dirt 3 Low Quality (fps)</strong></td> <td>234</td> <td>247.6</td> <td><strong>272.7</strong></td> </tr> <tr> <td><strong>Price for two 4GB DIMMs (USD)</strong></td> <td>$70</td> <td>$73</td> <td>$99</td> </tr> </tbody> </table> </div> <p><em>All temperatures in degrees Celsius. Best scores bolded.</em></p> <p><em>Click the next page to see how two midrange graphics cards stack up against one high-end GPU!</em></p> <h3> <hr /></h3> <h3>One High-End GPU vs.Two Midrange GPUs<em>&nbsp;</em></h3> <p>One of the most common questions we get here at Maximum PC, aside from details about our lifting regimen, is whether to upgrade to a high-end GPU or run two less-expensive cards in SLI or CrossFire. It’s a good question, since high-end GPUs are expensive, and cards that are two rungs below them in the product stack cost about half the price, which naturally begs the question: Are two $300 cards faster than a single $600 card? Before we jump to the tests, dual-card setups suffer from a unique set of issues that need to be considered. First is the frame-pacing situation, where the cards are unable to deliver frames evenly, so even though the overall frames per second is high there is still micro-stutter on the screen. Nvidia and AMD dual-GPU configs suffer from this, but Nvidia’s SLI has less of a problem than AMD at this time. Both companies also need to offer drivers to allow games and benchmarks to see both GPUs, but they are equally good at delivering drivers the day games are released, so the days of waiting two weeks for a driver are largely over. <em>&nbsp;</em></p> <h4>2x Nvidia <a title="660 Ti" href="http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-660ti" target="_blank">GTX 660 Ti</a> vs. 
<a title="geforce gtx 780" href="http://www.maximumpc.com/article/news/geforce_gtx_780_benchmarks" target="_blank">GTX 780</a><em>&nbsp;</em></h4> <p><strong>The Test:</strong> We considered using two $250 GTX 760 GPUs for this test, but Nvidia doesn't have a $500 GPU to test them against, and since this is Maximum PC, we rounded up one model from the "mainstream" to the $300 GTX 660 Ti. This video card was recently replaced by the GTX 760, causing its price to drop down to a bit below $300, but since that’s its MSRP we are using it for this comparison. We got two of them to go up against the GTX 780, which costs roughly $650, so it's not a totally fair fight, but we figured it's close enough for government work. We ran our standard graphics test suite in both single- and dual-card configurations. <em>&nbsp;</em></p> <p><strong>The Results:</strong> It looks like our test was conclusive—two cards in SLI provide a slightly better gaming experience than a single badass card, taking top marks in seven out of nine tests. And they cost less, to boot. Nvidia’s frame-pacing was virtually without issues, too, so we don’t have any problem recommending Nvidia SLI at this time. It is the superior cost/performance setup as our benchmarks show.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/nvidia_geforce_gtx_780-top_small_0.jpg"><img src="/files/u152332/nvidia_geforce_gtx_780-top_small.jpg" alt="Nvidia’s new GK110-based GTX 780 takes on two ankle-biter GTX 660 Ti GPUs." title="Nvidia’s new GK110" width="620" height="559" /></a></p> <p style="text-align: center;"><strong>Nvidia’s new GK110-based GTX 780 takes on two ankle-biter GTX 660 Ti GPUs.</strong></p> <h4>2x <a title="7790" href="http://www.maximumpc.com/best_cheap_graphics_card_2013" target="_blank">Radeon HD 7790</a> vs.<a title="7970" href="http://www.maximumpc.com/article/features/Asus_680_7970" target="_blank">Radeon HD 7970</a>&nbsp;GHz<em></em></h4> <p><strong>The Test:</strong> For our AMD comparison, we took two of the recently released HD 7790 cards, at $150 each, and threw them into the octagon with a $400 GPU, the PowerColor Radeon HD 7970 Vortex II, which isn't technically a "GHz" board, but is clocked at 1,100MHz, so we think it qualifies. We ran our standard graphics test suite in both single-and-dual card configurations.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/reviews-10649_small_0.jpg"><img src="/files/u152332/reviews-10649_small.jpg" alt="Two little knives of the HD 7790 ilk take on the big gun Radeon HD 7970 . " title="HD 7790" width="620" height="663" /></a></p> <p style="text-align: center;"><strong>Two little knives of the HD 7790 ilk take on the big gun Radeon HD 7970 . </strong></p> <p><strong>The Results:</strong> Our AMD tests resulted in a very close battle, with the dual-card setup taking the win by racking up higher scores in six out of nine tests, and the single HD 7970 card taking top spot in the other three tests. But, what you can’t see in the chart is that the dual HD 7790 cards were totally silent while the HD 7970 card was loud as hell. Also, AMD has acknowledged the micro-stutter problem with CrossFire, and promises a software fix for it, but unfortunately that fix is going to arrive right as we are going to press on July 31. 
Even without it, gameplay seemed smooth, and the duo is clearly faster, so it gets our vote as the superior solution, at least in this config.<em><br /></em></p> <div class="module orange-module article-module"><strong><span class="module-name">Benchmarks</span></strong></div> <div class="spec-table orange"> <table style="width: 620px; height: 267px;" border="0"> <thead> <tr style="text-align: left;"> <th class="head-empty"> </th> <th class="head-light"><span style="font-family: times new roman,times;">GTX 660 Ti SLI</span></th> <th><span style="font-family: times new roman,times;">GTX 780</span></th> <th><span style="font-family: times new roman,times;">Radeon HD 7790 CrossFire</span></th> <th><span style="font-family: times new roman,times;">Radeon HD 7970 GHz</span></th> </tr> </thead> <tbody> <tr> <td class="item"><strong>3DMark Fire Strike</strong></td> <td class="item-dark"><strong>8,858</strong></td> <td> <p>8,482</p> </td> <td><strong>8,842</strong></td> <td>7,329</td> </tr> <tr> <td><strong>Catzilla (Tiger) Beta</strong></td> <td><strong>7,682</strong></td> <td>6,933</td> <td><strong>6,184</strong></td> <td>4,889</td> </tr> <tr> <td class="item"><strong>Unigine Heaven 4.0 (fps)<br /></strong></td> <td class="item-dark">33</td> <td><strong>35<br /></strong></td> <td><strong>30</strong></td> <td>24</td> </tr> <tr> <td><strong>Crysis 3 (fps)</strong></td> <td><strong>26</strong></td> <td>24</td> <td>15</td> <td><strong>17</strong></td> </tr> <tr> <td><strong>Shogun 2 (fps)</strong></td> <td><strong>60</strong></td> <td>48</td> <td><strong>51</strong></td> <td>43</td> </tr> <tr> <td><strong>Far Cry 3 (fps)</strong></td> <td><strong>41</strong></td> <td>35</td> <td>21</td> <td><strong>33</strong></td> </tr> <tr> <td><strong>Metro: Last Light (fps)</strong></td> <td><strong>24</strong></td> <td>22</td> <td>13</td> <td><strong>14</strong></td> </tr> <tr> <td><strong>Tomb Raider (fps)</strong></td> <td>18</td> <td><strong>25</strong></td> <td><strong>24</strong></td> <td>20</td> </tr> <tr> <td><strong>Battlefield 3 (fps)</strong></td> <td><strong>56</strong></td> <td>53</td> <td><strong>57</strong></td> <td>41</td> </tr> </tbody> </table> </div> <p><em>Best scores are bolded. Our test bed is a 3.33GHz Core i7 3960X Extreme Edition in an Asus P9X79 motherboard with 16GB of DDR3/1600 and a Thermaltake ToughPower 1,050W PSU. The OS is 64-bit Windows 7 Ultimate. All tests, except for the 3DMark tests, are run at 2560x1600 with 4X AA.</em></p> <h3>PCI Express 2.0 vs. PCI Express 3.0<em></em></h3> <p>PCI Express is the specification that governs the amount of bandwidth available between the CPU and the PCI Express slots on your motherboard. We've recently made the jump from version 2.0 to version 3.0, and the PCI Express interface on all late-model video cards is now PCI Express 3.0, causing many frame-rate addicts to question the sanity of placing a PCIe 3.0 GPU into a PCIe 2.0 slot on their motherboard. The reason is that PCIe 3.0 has quite a bit more theoretical bandwidth than PCIe 2.0. Specifically, one PCIe 2.0 lane can transmit 500MB/s in one direction, while a PCIe 3.0 lane can pump up to 985MB/s, nearly double the bandwidth per lane; multiply that by the 16 lanes a graphics card uses, and the difference is substantial. However, that extra bandwidth will only be important if it’s even needed, which is what we wanted to find out. 
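</p> <p>Here's that lane math spelled out, using the per-lane figures above, for the slot widths a graphics card actually uses. Treat it as a back-of-the-envelope sketch of theoretical one-way bandwidth; protocol overhead shaves a little off these numbers in practice:</p> <pre>
# Theoretical one-way bandwidth per lane, from the figures quoted above (MB/s).
PER_LANE_MB_S = {"PCIe 2.0": 500, "PCIe 3.0": 985}

def slot_bandwidth_gb_s(gen: str, lanes: int) -> float:
    return PER_LANE_MB_S[gen] * lanes / 1000.0

for gen in ("PCIe 2.0", "PCIe 3.0"):
    for lanes in (8, 16):
        print(f"{gen} x{lanes}: {slot_bandwidth_gb_s(gen, lanes):.1f} GB/s each way")

# PCIe 2.0: x8 = 4.0 GB/s, x16 = 8.0 GB/s
# PCIe 3.0: x8 = 7.9 GB/s, x16 = 15.8 GB/s
</pre> <p>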
<em></em></p> <p><strong>The Test:</strong> We plugged an Nvidia GTX Titan into our Asus P9X79 board and ran several of our gaming tests with the top PCI Express x16 slot alternately set to PCIe 3.0 and PCIe 2.0. On this particular board you can switch the setting in the BIOS. <em></em></p> <p><strong>The Results:</strong> We had heard previously that there was very little difference between PCIe 2.0 and PCIe 3.0 on current systems, and our tests back that up. In every single test, Gen 3.0 was faster, but the difference is so small it’s very hard for us to believe that PCIe 2.0 is being saturated by our GPU. It’s also quite possible that one would see more pronounced results using two or more cards, but we wanted to “keep it real” and just use one card. <em></em></p> <div class="module orange-module article-module"><strong><span class="module-name">Benchmarks</span></strong></div> <div class="spec-table orange"> <table style="width: 620px; height: 267px;" border="0"> <thead> <tr style="text-align: left;"> <th class="head-empty"> </th> <th class="head-light"><span style="font-family: times new roman,times;">GTX Titan PCIe 2.0</span></th> <th><span style="font-family: times new roman,times;">GTX Titan PCIe 3.0</span></th> </tr> </thead> <tbody> <tr> <td class="item"><strong>3DMark Fire Strike</strong></td> <td class="item-dark">9,363</td> <td> <p><strong>9,892</strong></p> </td> </tr> <tr> <td><strong>Unigine Heaven 4.0 (fps)</strong></td> <td>37</td> <td><strong>40</strong></td> </tr> <tr> <td class="item"><strong>Crysis 3 (fps)<br /></strong></td> <td class="item-dark">31</td> <td><strong>32<br /></strong></td> </tr> <tr> <td><strong>Shogun 2 (fps)</strong></td> <td>60</td> <td><strong>63</strong></td> </tr> <tr> <td><strong>Far Cry 3 (fps)</strong></td> <td>38</td> <td><strong>42</strong></td> </tr> <tr> <td><strong>Metro: Last Light (fps)</strong></td> <td>22</td> <td><strong>25</strong></td> </tr> <tr> <td><strong>Tomb Raider (fps)</strong></td> <td>22</td> <td><strong>25</strong></td> </tr> </tbody> </table> </div> <p><em>Best scores are bolded. Our test bed is a 3.33GHz Core i7 3960X Extreme Edition in an Asus P9X79 motherboard with 16GB of DDR3/1600 and a Thermaltake ToughPower 1,050W PSU. The OS is 64-bit Windows 7 Ultimate. All games are run at 2560x1600 with 4X AA except for the 3DMark tests.</em></p> <h3>PCIe x8 vs. PCIe x16</h3> <p>PCI Express expansion slots vary in both physical size and the amount of bandwidth they provide. The really long slots are called x16 slots, as they provide 16 lanes of PCIe bandwidth, and that’s where our video cards go, for obvious reasons. Almost all of the top slots in a motherboard (those closest to the CPU) are x16, but sometimes those 16 lanes are divided between two slots, so what might look like a x16 slot is actually a x8 slot. The tricky part is that sometimes the slots below the top slot only offer eight lanes of PCIe bandwidth, and sometimes people need to skip that top slot because their CPU cooler is in the way or water cooling tubes are coming out of a radiator in that location. Or you might be running a dual-card setup, and if you use a x8 slot for one card, it will force the x16 slot to run at x8 speeds. 
Here’s the question: Since a x16 slot provides double the one-way bandwidth of a x8 slot (on PCIe 3.0, that works out to roughly 15.8GB/s versus 7.9GB/s), is your performance hobbled by running at x8?</p> <p><strong>The Test:</strong> We wedged a GTX Titan into first a x16 slot and then a x8 slot on our Asus P9X79 motherboard and ran our gaming tests in order to compare the difference.</p> <p><strong>The Results:</strong> These results surprised us: x16 is the clear winner, taking every test except Crysis 3. It seems obvious in hindsight, but we didn’t think even current GPUs could saturate a x8 link; apparently they can, so this is an easy win for x16.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/asus_p9x79_small_0.jpg"><img src="/files/u152332/asus_p9x79_small.jpg" alt="The Asus P9X79 offers two x16 slots (blue) and two x8 slots (white)." title="Asus P9X79" width="620" height="727" /></a></p> <p style="text-align: center;"><strong>The Asus P9X79 offers two x16 slots (blue) and two x8 slots (white).</strong></p> <div class="module orange-module article-module"><strong><span class="module-name">Benchmarks</span></strong></div> <div class="spec-table orange"> <table style="width: 620px; height: 267px;" border="0"> <thead> <tr style="text-align: left;"> <th class="head-empty"> </th> <th class="head-light"><span style="font-family: times new roman,times;">GTX Titan PCIe x16</span></th> <th><span style="font-family: times new roman,times;">GTX Titan PCIe x8</span></th> </tr> </thead> <tbody> <tr> <td class="item"><strong>3DMark Fire Strike</strong></td> <td class="item-dark"><strong>9,471</strong></td> <td> <p>9,426</p> </td> </tr> <tr> <td><strong>Catzilla (Tiger) Beta</strong></td> <td><strong>7,921</strong></td> <td>7,095</td> </tr> <tr> <td class="item"><strong>Unigine Heaven 4.0 (fps)<br /></strong></td> <td class="item-dark"><strong>40</strong></td> <td>36</td> </tr> <tr> <td><strong>Crysis 3 (fps)</strong></td> <td>32</td> <td><strong>37</strong></td> </tr> <tr> <td><strong>Shogun 2 (fps)</strong></td> <td><strong>64</strong></td> <td>56</td> </tr> <tr> <td><strong>Far Cry 3 (fps)</strong></td> <td><strong>43</strong></td> <td>39</td> </tr> <tr> <td><strong>Metro: Last Light (fps)</strong></td> <td><strong>25</strong></td> <td>22</td> </tr> <tr> <td><strong>Tomb Raider (fps)</strong></td> <td><strong>25</strong></td> <td>23</td> </tr> <tr> <td><strong>Battlefield 3 (fps)</strong></td> <td><strong>57</strong></td> <td>50</td> </tr> </tbody> </table> </div> <p><em>Best scores are bolded. Tests performed on an Asus P9X79 Deluxe motherboard.</em></p> <h3>IDE vs. AHCI</h3> <p>If you go into your BIOS and look at the options for your motherboard’s SATA controller, you usually have three options: IDE, AHCI, and RAID. RAID is for when you have more than one drive, so for running just a lone wolf storage device, you have AHCI and IDE. For ages we always just ran IDE, as it worked just fine. But now there’s AHCI too, which stands for Advanced Host Controller Interface, and it supports features IDE doesn’t, such as Native Command Queuing (NCQ) and hot swapping. Some people also claim that AHCI is faster than IDE due to NCQ and the fact that it's newer. Also, for SSD users, IDE does not support the Trim command, so AHCI is critical to an SSD's well-being over time, but is there a speed difference between IDE and AHCI for an SSD? We set out to find out.</p> <p><strong>The Test:</strong> We enabled IDE on our SATA controller in the BIOS, then installed our OS.
Next, we added our Corsair test SSD and ran a suite of storage tests. We then enabled AHCI, reinstalled the OS, re-added the Corsair Neutron test SSD, and re-ran all the tests.</p> <p><strong>The Results:</strong> We haven’t used IDE in a while, but we assumed it would allow our SSD to run at full speed even if it couldn’t NCQ or hot-swap anything. And we were wrong. Dead wrong. Performance with the SATA controller set to IDE was abysmal, plain and simple.</p> <div class="module orange-module article-module"><strong><span class="module-name">Benchmarks</span></strong></div> <div class="spec-table orange"> <table style="width: 620px; height: 267px;" border="0"> <thead> <tr style="text-align: left;"> <th class="head-empty"> </th> <th class="head-light"><span style="font-family: times new roman,times;">Corsair Neutron GTX IDE</span></th> <th><span style="font-family: times new roman,times;">Corsair Neutron GTX AHCI</span></th> </tr> </thead> <tbody> <tr> <td class="item"><strong>CrystalDiskMark</strong></td> <td class="item-dark">&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td><strong>Avg. Sustained Read (MB/s)</strong></td> <td>224</td> <td><strong>443</strong></td> </tr> <tr> <td class="item"><strong>Avg. Sustained Write (MB/s)<br /></strong></td> <td class="item-dark">386</td> <td><strong>479</strong></td> </tr> <tr> <td><strong>AS SSD - Compressed Data</strong></td> <td>&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td><strong>Avg. Sustained Read (MB/s)</strong></td> <td>210</td> <td><strong>514</strong></td> </tr> <tr> <td><strong>Avg. Sustained Write (MB/s)</strong></td> <td>386</td> <td><strong>479</strong></td> </tr> <tr> <td><strong>ATTO</strong></td> <td>&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td><strong>64KB File Read (MB/s, 4QD)</strong></td> <td>151</td> <td><strong>351</strong></td> </tr> <tr> <td><strong>64KB File Write (MB/s, 4QD)</strong></td> <td>354</td> <td><strong>485</strong></td> </tr> <tr> <td><strong>Iometer</strong></td> <td>&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td><strong>4KB Random Write 32QD <br />(IOPS)</strong></td> <td>19,943</td> <td><strong>64,688</strong></td> </tr> <tr> <td><strong>PCMark Vantage x64 </strong></td> <td>6,252</td> <td><strong>41,787</strong></td> </tr> </tbody> </table> </div> <p><em>Best scores are bolded. All tests conducted on our hard drive test bench, which consists of a Gigabyte Z77X-UP4 motherboard, Intel Core i5-3470 3.2GHz CPU, 8GB of RAM, Intel 520 Series SSD, and a Cooler Master 450W power supply.</em></p> <hr /> <h3>SSD RAID vs. Single SSD</h3> <p>This test is somewhat analogous to the GPU comparison, as most people would assume that two small-capacity SSDs in RAID 0 would be able to outperform a single 256GB SSD. The little SSDs have a performance penalty out of the gate, though, as SSD performance usually improves as capacity increases because the controller can spread reads and writes across more NAND dies in parallel, just as higher-density platters increase hard drive performance. This is not a universal truth, however; whether performance scales with an SSD’s capacity depends on the drive’s firmware, NAND flash, and other factors. In general, though, the higher the capacity of a drive, the better its performance. (The toy model below illustrates the idea.)
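<p>Think of the drive as a bank of NAND dies the controller can work in parallel, up to whatever the controller and SATA interface can move. The model below uses made-up numbers, not actual Corsair Neutron specs, just to show the shape of the curve.</p>
<pre>
# Toy model of SSD throughput vs. capacity (illustrative numbers only,
# not real Corsair Neutron figures). Each extra NAND die adds
# parallelism until the controller/SATA interface becomes the ceiling.

PER_DIE_MBPS = 35        # assumed per-die streaming rate
INTERFACE_LIMIT = 550    # rough ceiling for a SATA 6Gbps controller

def drive_throughput(n_dies):
    # Interleave across dies until the interface caps the total.
    return min(n_dies * PER_DIE_MBPS, INTERFACE_LIMIT)

for capacity_gb, dies in ((64, 4), (128, 8), (256, 16)):
    print(f"{capacity_gb}GB drive ({dies} dies): ~{drive_throughput(dies)} MB/s")

# 64GB (4 dies):   ~140 MB/s
# 128GB (8 dies):  ~280 MB/s
# 256GB (16 dies): ~550 MB/s, bumping into the interface ceiling
</pre>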
The question then is: Is the performance advantage of the single large drive enough to outpace two little drives in RAID 0?</p> <p>Before we jump into the numbers, we have to say a few things about SSD RAID. The first is that with the advent of SSDs, RAID setups are not quite as common as they were in the HDD days, at least when it comes to what we’re seeing from boutique system builders. The main reason is that it’s really not that necessary since a stand-alone SSD is already extremely fast. Adding more speed to an already-fast equation isn’t a big priority for a lot of home users (this is not necessarily our audience, mind you). Even more importantly, the biggest single issue with SSD RAID is that the operating system is unable to pass the Trim command to the RAID controller in most configurations (Intel 7 and 8 series chipsets excluded), so the OS can’t tell the drive how to keep itself optimized, which can degrade performance of the array in the long run, making the entire operation pointless. Now, it’s true that the drive’s controller will perform “routine garbage collection,” but how that differs from Trim is uncertain, and whether it’s able to manage the drive equally well is also unknown. However, the lack of Trim support on RAID 0 is a scary thing for a lot of people, so it’s one of the reasons SSD RAID often gets avoided. Personally, we’ve never seen it cause any problems, so we are fine with it. We even ran it in our Dream Machine 2013, and it rocked the Labizzle. So, even though people will say SSD RAID is bad because there’s no Trim support, we’ve never been able to verify exactly what that “bad” means long-term.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/reviews-10645_small_0.jpg"><img src="/files/u152332/reviews-10645_small.jpg" alt="It’s David and Goliath all over again, as two puny SSDs take on a bigger, badder drive. " title="SSDs" width="620" height="398" /></a></p> <p style="text-align: center;"><strong>It’s David and Goliath all over again, as two puny SSDs take on a bigger, badder drive. </strong></p> <p><strong>The Test:</strong> We plugged in two Corsair Neutron SSDs, set the SATA controller to RAID, created our array with a 64K stripe size, and then ran all of our tests off an Intel 520 SSD boot drive. We used the same protocol for the single drive.</p> <p><strong>The Results:</strong> The results of this test show a pretty clear advantage for the RAIDed SSDs, as they were faster in seven out of nine tests. That’s not surprising, however, as RAID 0 has always been able to benchmark well. That said, the single 256 Corsair Neutron drive came damned close to the RAID in several tests, including CrystalDiskMark, ATTO at four queue depth, and AS SSD. It’s not completely an open-and-shut case, though, because the RAID scored poorly in the PC Mark Vantage “real-world” benchmark, with just one-third of the score of the single drive. That’s cause for concern, but with these scripted tests it can be tough to tell exactly where things went wrong, since they just run and then spit out a score. Also, the big advantage of RAID is that it boosts sequential-read and -write speeds since you have two drives working in parallel (conversely, you typically won’t see a big boost for the small random writes made by the OS). Yet the SSDs in RAID were actually slower than the single SSD in our Sony Vegas “real-world” 20GB file encode test, which is where they should have had a sizable advantage. 
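<p>That sequential-versus-random split falls straight out of how RAID 0 lays data across its members. The toy sketch below, ours and illustrative only, uses the same 64K stripe size we configured to show a long sequential transfer touching both drives while a lone 4KB write lands on just one.</p>
<pre>
# Toy RAID 0 mapping with a 64KB stripe across two members, to show why
# sequential transfers engage both drives but a single 4KB write only
# touches one. Illustrative only.
import random

STRIPE = 64 * 1024   # 64K stripe size, as used for our array
MEMBERS = 2

def member_for(offset):
    # Which drive a given byte offset lands on in a simple RAID 0 layout.
    return (offset // STRIPE) % MEMBERS

# A 1MB sequential read walks stripes on both members...
touched = {member_for(off) for off in range(0, 1024 * 1024, STRIPE)}
print("Sequential 1MB read hits drives:", sorted(touched))  # [0, 1]

# ...while a single 4KB random write lands on exactly one member.
offset = random.randrange(0, 8 * 1024**3, 4096)
print(f"4KB write at offset {offset} hits drive {member_for(offset)}")
</pre>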
For now, we’ll say this much: The RAID numbers look good, but more “real-world” investigation is required before we can tell you one is better than the other.</p> <div class="module orange-module article-module"><strong><span class="module-name">Benchmarks</span></strong></div> <div class="spec-table orange"> <table style="width: 620px; height: 267px;" border="0"> <thead> <tr style="text-align: left;"> <th class="head-empty"> </th> <th class="head-light"><span style="font-family: times new roman,times;">1x Corsair Neutron 256GB</span></th> <th><span style="font-family: times new roman,times;">2x Corsair Neutron 128GB RAID 0 </span></th> </tr> </thead> <tbody> <tr> <td class="item"><strong>CrystalDiskMark</strong></td> <td class="item-dark">&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td><strong>Avg. Sustained Read (MB/s)</strong></td> <td>512</td> <td><strong>593</strong></td> </tr> <tr> <td class="item"><strong>Avg. Sustained Write (MB/s)<br /></strong></td> <td class="item-dark">436</td> <td><strong>487</strong></td> </tr> <tr> <td><strong>AS SSD - Compressed Data</strong></td> <td>&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td><strong>Avg. Sustained Read (MB/s)</strong></td> <td>506</td> <td><strong>647</strong></td> </tr> <tr> <td><strong>Avg. Sustained Write (MB/s)</strong></td> <td>318</td> <td><strong>368</strong></td> </tr> <tr> <td><strong>ATTO</strong></td> <td>&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td><strong>64KB File Read (MB/s, 4QD)</strong></td> <td>436</td> <td><strong>934</strong></td> </tr> <tr> <td><strong>64KB File Write (MB/s, 4QD)</strong></td> <td><strong>516</strong></td> <td>501</td> </tr> <tr> <td><strong>Iometer</strong></td> <td>&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td><strong>4KB Random Write 32QD <br />(IOPS)</strong></td> <td>70,083</td> <td><strong>88,341</strong></td> </tr> <tr> <td><strong>PCMark Vantage x64 <br /></strong></td> <td><strong>70,083</strong></td> <td>23,431</td> </tr> <tr> <td><strong>Sony Vegas Pro 9 Write (sec, lower is better)</strong></td> <td><strong>343</strong></td> <td>429</td> </tr> </tbody> </table> </div> <p><em>Best scores are bolded. All tests conducted on our hard-drive test bench, which consists of a Gigabyte Z77X-UP4 motherboard, Intel Core i5-3470 3.2GHz CPU, 8GB of RAM, Intel 520 Series SSD, and a Cooler Master 450W power supply.</em></p> <h3>Benchmarking: Synthetic vs. Real-World</h3> <p>There’s a tendency for testers to dismiss “synthetic” benchmarks as having no value whatsoever, but that attitude is misplaced. Synthetics got their bad name in the 1990s, when they were the only game in town for testing hardware. Hardware makers soon started to optimize for them, and on occasion, those actions would actually hurt performance in real games and applications.</p> <p>The 1990s are long behind us, though, and benchmarks and the benchmarking community have matured to the point that synthetics can offer very useful metrics when measuring the performance of a single component or system. At the same time, real-world benchmarks aren’t untouchable. If a developer receives funding or engineering support from a hardware maker to optimize a game or app, does that really make it neutral? There is the argument that it doesn’t matter because if there’s “cheating” to improve performance, that only benefits the users. Except that it only benefits those using a certain piece of hardware.</p> <p>In the end, it’s probably more important to understand the nuances of each benchmark and how to apply them when testing hardware. (For a taste of what a synthetic actually measures, see the quick sketch below.)
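<p>If you've never looked under the hood of a synthetic, the idea is usually just to isolate one subsystem and hammer it. The snippet below is a toy memory-bandwidth test in that spirit; it is not how SiSoft Sandra or any commercial tool works, it simply times a big copy (assuming NumPy is installed) and reports the spread across several runs, the same repeat-and-sanity-check habit we preach further down.</p>
<pre>
# A toy "synthetic" memory-bandwidth test: time a large array copy a few
# times and report the spread. Only a sketch of the idea, not SiSoft
# Sandra's methodology; assumes NumPy is available.
import statistics
import time
import numpy as np

SIZE = 256 * 1024 * 1024          # 256MB source buffer
src = np.ones(SIZE, dtype=np.uint8)
dst = np.empty_like(src)

rates = []
for _ in range(5):
    start = time.perf_counter()
    np.copyto(dst, src)           # one big memory-to-memory copy
    elapsed = time.perf_counter() - start
    rates.append(2 * SIZE / elapsed / 1e9)   # count read + write traffic, GB/s

print(f"copy bandwidth: {statistics.mean(rates):.1f} GB/s "
      f"(+/- {statistics.stdev(rates):.1f} GB/s over {len(rates)} runs)")
</pre>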
SiSoft Sandra, for example, is a popular synthetic benchmark with a slew of tests for various components. We use it for memory bandwidth testing, for which it is invaluable—as long as the results are put in the right context. A doubling of main system memory bandwidth, for example, doesn’t mean you get a doubling of performance in games and apps. Of course, the same caveats apply to real-world benchmarks, too.</p> <h3>Avoid the Benchmarking Pitfalls<em></em></h3> <p>Even seasoned veterans are tripped up by benchmarking pitfalls, so beginners should be especially wary of making mistakes. Here are a few tips to help you on your own testing journey.<em></em></p> <p>Put away your jump-to-conclusions mat. If you set condition A and see a massive boost—or no difference at all when you were expecting one—don’t immediately attribute it to the hardware. Quite often, it’s the tester introducing errors into the test conditions that causes the result. Double-check your settings and re-run your tests and then look for feedback from others who have tested similar hardware to use as sanity-check numbers.<em></em></p> <p>When trying to compare one platform with another (certainly not ideal)—say, a GPU in system A against a GPU in system B—be especially wary of the differences that can result simply from using two different PCs, and try to make them as similar as possible. From drivers to BIOS to CPU and heatsink—everything should match. You may even want to put the same GPU in both systems to make sure the results are consistent.<em></em></p> <p>Use the right benchmark for the hardware. Running Cinebench 11.5—a CPU-centric test—to review memory, for example, would be odd. A better fit would be applications that are more memory-bandwidth sensitive, such as encoding, compression, synthetic RAM tests, or gaming.<em></em></p> <p>Be honest. Sometimes, when you shell out for new hardware, you want it to be faster because no one wants to pay through the nose to see no difference. Make sure your own feelings toward the hardware aren’t coloring the results.<em><br /></em></p> http://www.maximumpc.com/pc_performance_tested_2014#comments 2013 air cooling benchmark cpu graphics card Hardware Hardware liquid cooling maximum pc motherboard pc speed test performance ssd tests October 2013 Motherboards Features Mon, 10 Feb 2014 22:46:41 +0000 Maximum PC staff 26909 at http://www.maximumpc.com AMD Flexes Its First ARM Based Server SoC http://www.maximumpc.com/amd_flexes_its_first_arm_based_server_soc2014 <!--paging_filter--><h3><img src="/files/u69/amd_development_board_0.jpg" alt="AMD Development Board" title="AMD Development Board" width="228" height="173" style="float: right;" />AMD's foray into ARM-based server SoCs begins with the Opteron A Series</h3> <p>A milestone has been reached in Sunnyvale less than a month into 2014. Chip designer <strong>AMD formally introduced its first 64-bit ARM-based server system-on-chip (SoC)</strong> previously codenamed "Seattle" and now called Opteron A1100. The chip is fabricated using a 28-nanometer process technology and is the first of its kind from an established server vendor. Along with the new SoC, AMD also unveiled a new development platform intended to make software design on the Opteron A1100 Series quick and easy.</p> <p>AMD's new SoC supports 4-core or 8-core ARM Cortex A57 processors and has up to 4MB of shared L2 cache and 8MB of shared L3 cache. 
Other features include configurable dual DDR3 or DDR4 memory channels with ECC at up to 1866 MT/second; up to four SODIMM, UDIMM, or RDIMM modules; eight lanes of PCI-Express Gen 3; eight SATA III ports; two 10-Gigabit Ethernet ports; ARM TrustZone technology; and crypto and data compression co-processors.</p> <p>"The needs of the data center are changing. A one-size-fits-all approach typically limits efficiency and results in higher-cost solutions," <a href="http://www.amd.com/us/press-releases/Pages/amd-to-accelerate-2014jan28.aspx" target="_blank">said Suresh Gopalakrishnan</a>, corporate vice president and general manager of the AMD server business unit. "The new ARM-based AMD Opteron A-Series processor brings the experience and technology portfolio of an established server processor vendor to the ARM ecosystem and provides the ideal complement to our established AMD Opteron x86 server processors."</p> <p>AMD's new development kit is packaged in a micro-ATX form factor and includes an Opteron A1100 Series processor, four registered DIMM slots for up to 128GB of DDR3 RAM, PCI Express connectors configurable as single x8 or dual x4 ports, and eight SATA connectors. They're compatible with standard power supplies and can be used stand-alone or mounted in a standard rack-mount chassis.</p> <p><em>Follow Paul on <a href="https://plus.google.com/+PaulLilly?rel=author" target="_blank">Google+</a>, <a href="https://twitter.com/#!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="http://www.facebook.com/Paul.B.Lilly" target="_blank">Facebook</a></em></p> http://www.maximumpc.com/amd_flexes_its_first_arm_based_server_soc2014#comments amd ARM cpu enterprise Hardware opteron a1100 processor seattle Servers News Wed, 29 Jan 2014 15:58:54 +0000 Paul Lilly 27157 at http://www.maximumpc.com AMD Adds More 12-Core and 16-Core Processors to Opteron 6300 Series http://www.maximumpc.com/amd_adds_more_12-core_and_16-core_processors_opteron_6300_series <!--paging_filter--><h3><img src="/files/u69/amd_opteron_0.jpg" alt="AMD Opteron" title="AMD Opteron" width="228" height="152" style="float: right;" />New server chips starting at $377</h3> <p><strong>AMD just fleshed out its Opteron 6300 Series</strong> of server processors with a pair of new chips, one of which is a 12-core part and the other a 16-core offering. These additions to what AMD calls "Warsaw" are intended for enterprise applications and feature AMD's "Piledriver" core architecture. They're also fully socket and software compatible with the existing Opteron 6300 Series.</p> <p><a href="http://www.amd.com/us/press-releases/Pages/amd-offers-new-levels-2014jan22.aspx" target="_blank">According to AMD</a>, these new parts are in response to customers' requests. The new Opteron 6370P is a 16-core part with a 2.0GHz base frequency and 2.5GHz Turbo frequency. It supports quad-channel memory up to DDR3-1866, has 16MB of L3 cache, and carries a max TDP rating of 99W.</p> <p>The Opteron 6338P is similar except that it has 12 cores clocked at 2.3GHz base and 2.8GHz Turbo.
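<p>One spec these two server stories share is DDR3-1866 support: two memory channels on the Opteron A1100 and four on the Opteron 6300 parts. Peak theoretical bandwidth is simply the transfer rate times 8 bytes per 64-bit channel times the channel count, as the quick sketch below shows; these are spec-sheet peaks, and real-world throughput lands lower.</p>
<pre>
# Theoretical peak DRAM bandwidth: transfers/sec x 8 bytes per 64-bit
# channel x channel count. Quick sketch for the DDR3-1866 configurations
# mentioned in these stories (peaks only, not measured results).

def peak_gb_per_sec(megatransfers_per_sec, channels):
    return megatransfers_per_sec * 1e6 * 8 * channels / 1e9

print(f"DDR3-1866, 2 channels (Opteron A1100): {peak_gb_per_sec(1866, 2):.1f} GB/s")
print(f"DDR3-1866, 4 channels (Opteron 6300):  {peak_gb_per_sec(1866, 4):.1f} GB/s")
</pre>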
Both it and the 6370P are optimized to handle heavily virtualized workloads, including tasks like data analysis, xSQL, and traditional databases, all without breaking the bank.</p> <p>System integrators have already begun selling the Opteron 6338P and 6370P for $377 and $598, respectively.</p> <p><em>Follow Paul on <a href="https://plus.google.com/+PaulLilly?rel=author" target="_blank">Google+</a>, <a href="https://twitter.com/#!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="http://www.facebook.com/Paul.B.Lilly" target="_blank">Facebook</a></em></p> http://www.maximumpc.com/amd_adds_more_12-core_and_16-core_processors_opteron_6300_series#comments 6338p 6370p amd cpu Hardware opteron 6300 processor warsaw News Wed, 22 Jan 2014 18:03:46 +0000 Paul Lilly 27116 at http://www.maximumpc.com Intel Rumored to Launch Several New Haswell Processors in Q2 http://www.maximumpc.com/intel_launch_several_new_haswell_processors_q22014 <!--paging_filter--><h3><img src="/files/u69/intel_haswell_slide.jpg" alt="Intel Haswell Slide" title="Intel Haswell Slide" width="228" height="146" style="float: right;" />More Haswell options are on the way</h3> <p>The big news in processors today is the <a href="http://www.maximumpc.com/amd_launches_kaveri_apu_radeon_r7_graphics2014">official launch of AMD's Kaveri APUs</a> with Radeon R7 graphics, but if you'd rather wait to see what Intel has up its sleeve, you'll have to get cozy for a few months. Word on the web is that <strong>Intel is preparing to refresh its Haswell processor line</strong> with nearly two dozen new CPUs sometime in the second quarter of 2014, likely starting in May.</p> <p><a href="http://www.digitimes.com/news/a20140113PD216.html" target="_blank">According to <em>Digitimes</em></a>, May is when the new parts will hit the retail channel. That will include 20 new processors, including the Core i7 4790, Core i5 4690, Core i3 4360, Pentium G3450, and Celeron G1840. Intel will also launch some low power CPUs, such as the Core i7 4790S, Core i5 4590S, and Core i3 4150T.</p> <p>Slipping into retail a month ahead of time will be Intel's new Z97 and H97 chipsets. This will give board partners like Asus, Gigabyte, MSI, and ASRock time to build silicon around the new chipsets before the refreshed Haswell processors land on store shelves.</p> <p>Finally, <em>Digitimes</em> says Intel will follow this up by refreshing its Haswell K series and Haswell-E in the third quarter, while phasing out its Core i5 3350P, Core i3 3225, and Core i3 3210 processors in Q1. 
Beyond that, the news and rumor site didn't provide any information on pricing.</p> <p><em>Follow Paul on <a href="https://plus.google.com/+PaulLilly?rel=author" target="_blank">Google+</a>, <a href="https://twitter.com/#!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="http://www.facebook.com/Paul.B.Lilly" target="_blank">Facebook</a></em></p> http://www.maximumpc.com/intel_launch_several_new_haswell_processors_q22014#comments Build a PC cpu Hardware haswell intel processor News Tue, 14 Jan 2014 17:57:28 +0000 Paul Lilly 27065 at http://www.maximumpc.com AMD Launches "Kaveri" APU with Radeon R7 Graphics http://www.maximumpc.com/amd_launches_kaveri_apu_radeon_r7_graphics2014 <!--paging_filter--><h3><img src="/files/u69/kaveri_die.jpg" alt="AMD Kaveri Die" title="AMD Kaveri Die" width="228" height="191" style="float: right;" />Low power, high performance</h3> <p>The boys and girls at <strong>AMD officially launched the company's 2014 A-Series Accelerated Processing Units</strong> (APUs) with integrated Radeon R7 graphics. You know the parts by their codename "<strong>Kaveri</strong>," which AMD says is representative of a major architecture improvement. Kaveri sports completely redesigned cores, new Heterogeneous System Architecture (HSA) features, new accelerators, and enhanced power management on a new 28nm manufacturing process.</p> <p>Kaveri boasts up to 12 compute cores (4 "Steamroller" CPU cores + 8 GCN GPU cores) on a die measuring 245mm2 with 2.41 billion transistors. AMD made a concerted effort to improve the graphics in Kaveri, which include the latest technologies found in Hawaii -- Graphics Core Next (GCN), TrueAudio, Eyefinity, UVD, and VCE. One thing that's interesting with AMD's focus on graphics is that the chip designer is embracing its role as a preferred choice among virtual coin miners. Apparently, Kaveri parts will be good at digging up Bitcoins, Litecoins, and other virtual currencies.</p> <p>"AMD maintains our technology leadership with the 2014 AMD A-Series APUs, a revolutionary next generation APU that marks a new era of computing," said Bernd Lienhard, corporate vice president and general manager, Client Business Unit, AMD. "With world-class graphics and compute technology on a single chip, the AMD A-Series APU is an effective and efficient solution for our customers and enable industry-leading computing experiences."</p> <p><img src="/files/u69/kaveri_performance.jpg" alt="AMD Kaveri Performance Slide" title="AMD Kaveri Performance Slide" width="620" height="331" /></p> <p>The other big focus is on performance per watt. Kaveri will push better battery life on notebooks. AMD says Kaveri can scale up or down to other segments, such as embedded and server platforms, while bringing new features to small form factor (SFF) desktops.</p> <p style="text-align: center;"><img src="/files/u69/kaveri_specs.jpg" alt="Kaveri Specs" title="Kaveri Specs" width="620" height="200" /></p> <p>AMD is kicking off the Kaveri launch with its A10-7850K and A10-7700K APUs, both available this month and both coming bundled with Battlefield 4. Sometime later this quarter, AMD will release its lower-end A8-7600 APU. (For a rough sense of what those GPU cores are worth on paper, see the quick math below.)
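<p>About that "12 compute cores" phrasing: the GPU side of the top part is eight GCN compute units of 64 shaders each, and peak single-precision throughput is just shaders times two FLOPs per clock (for a fused multiply-add) times clock speed. The sketch below runs that math using an assumed 720MHz GPU clock and 512-shader count for the A10-7850K; neither figure appears in this story, so treat the result as a rough paper estimate.</p>
<pre>
# Rough peak single-precision throughput for Kaveri's integrated GPU:
# shaders x 2 FLOPs per clock (fused multiply-add) x GPU clock.
# The 8 CU / 64 shaders-per-CU / 720MHz figures for the A10-7850K are
# assumptions, not taken from this article.

def igpu_gflops(compute_units, shaders_per_cu, clock_ghz):
    return compute_units * shaders_per_cu * 2 * clock_ghz

print(f"A10-7850K iGPU (assumed 8 CUs @ 0.72GHz): "
      f"~{igpu_gflops(8, 64, 0.72):.0f} GFLOPS single precision")
</pre>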
As far as we know, the A8-7600 will not come with a game bundle.</p> <p><em>Follow Paul on <a href="https://plus.google.com/+PaulLilly?rel=author" target="_blank">Google+</a>, <a href="https://twitter.com/#!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="http://www.facebook.com/Paul.B.Lilly" target="_blank">Facebook</a></em></p> http://www.maximumpc.com/amd_launches_kaveri_apu_radeon_r7_graphics2014#comments a10-7700K a10-7850K A8-7600 amd apu Build a PC cpu Hardware kaveri processor News Tue, 14 Jan 2014 14:33:16 +0000 Paul Lilly 27061 at http://www.maximumpc.com