The onslaught of smartphones, tablets, and sundry cloud-based devices might give us ways to be “connected” in more places at more times, but they don’t lessen the wonders to behold in a full-fledged PC. Not by a long shot.
In fact, despite all the dire prognostications about the PC, our personal computers are poised to get a major boost in performance, thanks to all the new technologies and components coming to fruition next year. We’re going to give you the complete rundown on what to expect—can someone say fastest CPU ever?—so you can start plotting your next build now.
Oh, we’ll still see plenty of tablets, to be sure, and we’ll tell you how those happening slabs will change, but we’re also going to see a major push by Intel to make stylish, super-portable, super-affordable laptop PCs an even more compelling option.
Yes, there’s a lot to look forward to in 2012. And you can start peeping at what lies ahead right here!
True performance enthusiasts have had a very difficult choice this past year. Go for maximum core and thread count using an older core microarchitecture, or cheap out and get almost the same (or better) performance in most apps and games using the mainstream Sandy Bridge chip.
That, in a nutshell, has been the enthusiasts’ dilemma ever since Intel introduced the Sandy Bridge chip in January 2011. Well, those days are behind us now that Intel has finally, finally released its Sandy Bridge-E (for Enthusiast) chip. With one simple chip—the new 3.3GHz Core i7-3960X—Intel has neatly folded up all those worries and put them into a nice little blue box stamped with the Intel logo.
A TRUE ENTHUSIASTS' CPU
Boiled down to the simplest of terms, if the quad-core 3.4GHz Core i7-2600K (or its new sibling the 3.5GHz Core i7-2700K) was the best chip out there, the Core i7-3960X is now the bestest. That’s because the Core i7-3960X is simply a Core i7-2600K with two additional cores.
Actually, that’s not really accurate. As an enthusiast chip, there are no graphics cores in the Core i7-3960X. And while the Core i7-2600K is limited to just 16 PCIe 2.0 lanes, the Core i7-3960X sports 40. Even better, those 40 lanes of PCIe support are PCIe 3.0 compliant. Out the gate, however, Intel (or its lawyers, anyway) is reluctant to label them as PCIe 3.0 until it actually has enough PCIe 3.0 cards to test.
As to the cores, you already know about them. They’re Sandy Bridge cores and include AVX and AES-NI instruction-set goodness. Turbo Boost 2.0 on these models will take the top-end 3.3GHz Core i7-3960X to 3.9GHz. The cores are built using Intel’s 32nm process and, well, there are two more of them turned on.
Besides the added cores, enthusiasts will also be thrilled by the memory support: To keep those cores fed, Intel is using a new quad-channel memory controller. The memory controller seems significantly faster than previous iterations, too. While the tri-channel memory controller in the original LGA1366 didn’t blow our socks off (over a dual-channel configuration), the quad-channel controller in the Core i7-3960X has us stunned. In our tests, we found that it offered nearly 100 percent more memory bandwidth than the Core i7-990X’s triple-channel configuration.
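A quick back-of-the-envelope check shows where that near-doubling comes from. This sketch uses each platform’s officially supported channel count and transfer rate; real-world throughput always lands below these theoretical peaks:

```python
# Theoretical peak memory bandwidth = channels x transfer rate (MT/s) x 8 bytes
# per transfer (DDR3 channels are 64 bits wide).
def peak_bandwidth_gbs(channels, mts):
    """Return theoretical peak DDR3 bandwidth in GB/s."""
    return channels * mts * 8 / 1000  # MB/s -> GB/s

core_i7_990x = peak_bandwidth_gbs(3, 1066)   # tri-channel DDR3/1066
core_i7_3960x = peak_bandwidth_gbs(4, 1600)  # quad-channel DDR3/1600

print(f"Core i7-990X:  {core_i7_990x:.1f} GB/s")   # ~25.6 GB/s
print(f"Core i7-3960X: {core_i7_3960x:.1f} GB/s")  # ~51.2 GB/s
print(f"Gain: {core_i7_3960x / core_i7_990x - 1:.0%}")
```

The extra channel and the jump from DDR3/1066 to DDR3/1600 compound, which is why the on-paper gain works out to roughly 100 percent—right in line with what we measured.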
PSSST, IT'S REALLY EIGHT CORES INSIDE
Intel isn’t making the Core i7-3960X just to satiate the appetites of speed freaks. The chip is mostly intended to be sold as a Xeon workstation CPU. So it shouldn’t surprise you that the Core i7-3960X is actually an eight-core chip. Yup, that’s right; looking at the block map of the chip, you can see that the new CPU has two sections blocked out where cores seven and eight go. Why leave them off? Intel officially says the decision was based on its desire to balance clock speeds, thermals, and power needs. We suspect that it’s really because Intel doesn’t need those two extra cores at this point. Not to telegraph too much, but AMD hasn’t posed much performance competition yet. By leaving cores off now, Intel can always introduce octo-core chips later if it needs to be more competitive. There could also truly be a thermal concern, as unsubstantiated rumors (are there any other kind?) initially told of Intel’s new chip pushing an unheard‑of 180‑watt thermal rating.
Yeah, we know what you’re thinking already, because we asked the same thing ourselves: Can you unlock those two other cores? Negative, Ghostrider. Intel has laser-cut those cores off in the die, so unless someone has the smallest‑possible soldering gun, we’d bet a box of adamantium claws that it’s impossible.
MEET THE NEW PLATFORM
As is Intel’s modus operandi, the company has a new socket. While the switch from LGA1156 to LGA1155 certainly pissed off customers, the LGA1366 crowd can hardly complain. LGA1366 launched with the original Core i7-965 Extreme Edition way back in 2008. For Intel to even support a socket that long is almost unheard of. So, with Core i7-3960X, Intel is introducing its new LGA2011.
Why the extra pins? The additional pins in the socket support the quad-channel memory and the relocation of the PCIe lanes from the core-logic chipset to the CPU core (à la Sandy Bridge and Lynnfield). For the most part, enthusiasts will be tickled pink with the beastly new socket, the quad-channel memory, and PCIe 3.0. What they won’t be happy with is the SATA 6Gb/s situation. The new X79 chipset features a Serial Attached SCSI controller that can support up to 10 SATA 6Gb/s drives, but at the 11th hour, the feature was switched off due to compatibility concerns. Instead, we’re left with an X79 peripheral controller hub that’s pretty much a weak-sauce retread of the P67 and Z68’s PCH: two SATA 6Gb/s and four SATA 3Gb/s ports. You can certainly argue that you don’t need more than two SATA 6Gb/s ports since they’re only useful for SSDs, but we think it stinks, especially as we had been teased by thoughts of motherboards bursting with SATA 6Gb/s. We expect initial boards to be limited in SATA 6Gb/s ports due to the last-minute switch, but in a few months, board vendors will tack on additional ports using third-party controllers. If anything, the SATA 6Gb/s features on boards and how they’re implemented will separate the men from the boys in mobo land.
MEET THE SANDY BRIDGE-E FAMILY
For the LGA2011 platform, Intel is introducing three new chips. The top-end Core i7-3960X arrives at $990—yup, that’s $9 cheaper than the existing Core i7-990X chip (gee thanks, Intel!) that this Extreme chip is meant to replace. The mid-tier 3.2GHz Core i7-3930K will sell for $555; besides the lower stock clock, that chip sheds some of the L3 cache, for a total of 12MB. For the budget enthusiast, Intel plans to release a quad-core, Hyper-Threaded Sandy Bridge-E with 10MB of L3 cache early next year. Pricing for that Core i7-3820 hasn’t been released, but we’re pretty sure it’ll slot in at about $300. The part is “partially unlocked,” meaning it will have limited overclocking features, and is likely intended as a way to get entry-level enthusiasts into the X79 game.
The good news for enthusiasts is that Intel has no plans to step away from offering blistering‑fast chips with cutting-edge technology, despite all the focus on tablets and smartphones these days. Hallelujah.
| | Intel Core i7-2600K | Intel Core i7-990X | Intel Core i7-3960X | AMD Phenom II X6 1100T | AMD FX-8150 |
|---|---|---|---|---|---|
| Turbo Clock (Max) | 3.8GHz | 3.7GHz | 3.9GHz | 3.7GHz | 3.9GHz (4.2GHz) |
| TDP | 95 watts | 130 watts | 130 watts | 125 watts | 125 watts |
| Cores / Threads | 4 / 8 | 6 / 12 | 6 / 12 | 6 | 8 |
| Total L2 Cache | 1MB | 1.5MB | 1.5MB | 3MB | 8MB |
| Total L3 Cache | 8MB | 12MB | 15MB | 6MB | 8MB |
| Transistor Count | 995 million | 1.17 billion | 2.27 billion | 904 million | 2 billion |
| Socket | LGA1155 | LGA1366 | LGA2011 | Socket AM3 | Socket AM3+ |
| Memory Controller | Dual-Channel DDR3/1333 | Tri-Channel DDR3/1066 | Quad-Channel DDR3/1600 | Dual-Channel DDR3/1333 | Dual-Channel DDR3/1866 |
AMD’s newest CPU is perhaps the worst-kept secret in the industry. It seems like years ago that the company telegraphed the microarchitecture and garnered much attention. That’s no surprise, as the chip code-named Bulldozer is considered AMD’s first true CPU redesign since the original Athlon 64. Truth be told, there’s also a lot hanging on the new chip, as many are wondering if AMD still has the mojo to go toe-to-toe with Intel’s processors.
JUST WHAT IS A CORE?
The last year has seen a blurring of the lines regarding the definition of a core. Is it strictly x86? Do you count the integrated graphics portions? Adding to that Jack Daniel’s blurred-and-slurred line is AMD’s new Bulldozer. Officially named FX (in a throwback to the glory days of the Athlon 64 FX-51), the chip makes you wonder if what you thought was a core is still a core.
FX isn’t made up of cores, but rather modules. Each module is built using two monolithic “cores.” Each core has its own set of integer schedulers, pipelines, and L1 data cache. AMD says that compared to Intel’s Hyper-Threading, which splits the resources of a single core into two virtual cores, FX’s design won’t get as bogged down when it has to deal with multithreaded workloads. On an Intel chip with Hyper-Threading, there’s only one physical core’s worth of execution resources, so two threads must take turns running whenever their code calls on the same portion of the chip. That’s not the case with FX.
But AMD didn’t completely duplicate all the resources of a dual core in its module—a single floating-point unit services both of the module’s cores, while the integer resources are fully duplicated. Why skimp on floating point rather than integer? AMD says it believes integer workloads are where most of the performance is to be made today.
AMD also says the modular design lends itself to higher performance when, say, a single-threaded workload is thrown at a single module. That’s because the cores are so interconnected that if only one core is working, some of the second core’s resources can be put toward that single-threaded workload.
AMD will launch four FX chips (two eight-core, one six-core, and a quad-core) ranging from $115 to $245. The company’s top-end part is the FX-8150, which is made up of four dual-core modules on a single die. One potential performance issue AMD has already admitted could crop up on Windows 7 and older OSes is scheduler inefficiency. For the highest performance, the scheduler should know to spread four threads across four different modules, rather than packing them onto four cores in just two modules. Unfortunately, Windows 7 and older OSes aren’t capable of determining how to load an FX chip for the utmost performance returns, AMD says. That may not change until Windows 8 is released. Intel faced similar teething pains when Hyper-Threading was first released.
NEW MAX TURBO AND NEW INSTRUCTIONS
AMD first introduced Turbo modes with its hexa-core Thuban chips, aka Phenom II X6. With FX, the company has refined its Turbo even more with a new Max Turbo mode that, well, maxes out the overclock. On workloads that hit all cores, each one can be overclocked by 300MHz, to 3.9GHz. On lightly threaded workloads, half of the cores can go to sleep while the other half can clock up to 4.2GHz.
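The clock states described above can be laid out as a minimal sketch. The helper function and MHz representation are our own illustration; the base, all-core Turbo, and Max Turbo figures are AMD’s published numbers for the FX-8150:

```python
# FX-8150 published clock states: 3.6GHz base, 3.9GHz all-core Turbo,
# and 4.2GHz Max Turbo when half the cores (four of eight) are asleep.
BASE_MHZ = 3600

def fx8150_clock_mhz(active_cores):
    """Return the sustainable clock (MHz) for a given active-core count."""
    if active_cores <= 4:
        return BASE_MHZ + 600  # Max Turbo: lightly threaded, half the cores sleep
    return BASE_MHZ + 300      # all-core Turbo: every core overclocked by 300MHz

print(fx8150_clock_mhz(8))  # 3900
print(fx8150_clock_mhz(2))  # 4200
```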
Elsewhere in the chip, AMD has brought the new CPU to instruction parity with Intel. The FX processors will have AES instructions to support acceleration of encryption and decryption workloads, and Advanced Vector Extensions (AVX), which Intel introduced with its Sandy Bridge lineup, is now also present. The old instruction-set wars still run hot, though, as the FX will support the Fused Multiply Add 4, or FMA4, instruction set. Intel, unfortunately, is only supporting FMA3 in its upcoming Ivy Bridge CPUs and has apparently canceled plans to support FMA4. This little standoff could cause problems for developers as to which instruction set they support and how they support it. Who is at fault? Most observers say both companies are playing games. Our standard guidance is to not sweat new instruction sets, because by the time software support arrives, the first chips to support them are usually so old that it’s easier to just upgrade.
NEW CHIPS, SAME OLD SOCKET
There’s one thing AMD gets right year after year—the same old socket. While Intel has shuffled through five sockets, AMD has pretty much stuck with just one. The only change has been to the electrical underpinnings in the AM3+ spec, which could render some AM3 boards incompatible. Still, it’s expected that the AM3+ FX chips will drop into most late-model AM3 boards without incident (check with your motherboard vendor first, of course) and even better, the mounts for the coolers have remained the same, so you can reuse your exotic cooler.
In a nod to enthusiasts, AMD says all FX chips will be fully unlocked, giving overclockers an all-access backstage pass to faster speeds. With Intel’s chips, only the K versions and Extreme Editions are fully unlocked. AMD has made overclocking a bragging point, too, and helped fund a team of overclockers to push an FX to 8.429GHz using liquid helium.
So where does Bulldozer stand? On paper, it looks like AMD has finally caught up—to some of Intel’s Sandy Bridge chips, anyway. But what does it look like in benchmarks? For the answer to that, you’ll have to read on.
AMD’s E-350, aka Zacate, was the sleeper hit of the year and completely sold out several times. The company’s second Fusion part, Llano, is also forecast to sell well, with some reports saying the chip will equal 40 percent of AMD’s total sales.
The message? While AMD has been unable to outdo Intel in the performance category for some time, its APU approach to the mainstream seems to be gaining traction. Next year, AMD will introduce a new CPU‑cum‑GPU, code-named Trinity, which it hopes will continue that trend.
The top-end Trinity will use a derivative of the FX core modules to create an eight-core chip with a new DirectX 11 GPU. Like FX, the chip will be fabbed on a 32nm process by Global Foundries. What’s unfortunate is that AMD may pull an Intel and switch to a new socket called FM2, and it’s not clear whether Trinity will be compatible with existing FM1 motherboards.
Despite the excitement over Trinity, AMD won’t be giving up on the high end. The company has a very ambitious plan to increase the performance of FX chips over the next few years. In 2012, the company expects to release Piledriver. The year after that we’ll see Steamroller, and the year after that Excavator.
Each iteration is expected to bring at least a 10–15 percent performance increase, the company says. AMD won’t say exactly what the microarchitecture changes are, but says that most of the performance should come from clock-speed bumps and other changes under the hood.
Intel has a reputation as the master of the process, and that likely won’t change early next year when the company introduces its new Ivy Bridge chip. The most significant aspect of Ivy Bridge is the move to a new 22nm process using 3D tri-gate transistors, which stand the conducting channel up as a three-dimensional fin. If Intel’s bet pays off, they could offer very significant power reductions and higher performance on CPUs. Ivy Bridge itself is only considered a “tick”—Intel-speak for a die shrink of an existing microarchitecture rather than a major redesign. That describes the x86 side of the chip, which won’t offer huge changes. On the GPU side, Intel says it will introduce a major step forward in graphics performance and add DirectX 11 and OpenCL support. Ivy Bridge chips will also offer the ability for OEMs to dial thermal performance up or down (it’s not yet known whether end users can change this).
Ivy Bridge will also bring new FMA3 instructions, a new digital random number generator to enhance security, and improved power management. Also on tap is support for PCIe 3.0.
The really good news is that Ivy Bridge will be backward compatible with current LGA1155 motherboards. PCIe 3.0 won't be supported in all slots on all boards (although some vendors say they’re ready to go with PCIe 3.0) and will require a BIOS update to run. Paired with Ivy Bridge will be the new Panther Point, or 7-series, chipset that will finally bring native USB 3.0 to Intel CPUs.
For our testing, we built four machines to test the various chips and tried to balance them as closely as possible. Each test station had a stock-clocked GeForce GTX 580, the same public Nvidia drivers, 64-bit Windows 7 Professional, and matching WD Raptor drives. Why not SSDs? We’ve seen occasional SSD performance variance among chipsets, leading us to believe that an HDD is the more reliable option for comparison. (We have, however, tested several of our benchmarks with current-gen SSDs to ensure they weren’t being bottlenecked by disk I/O). One area where our configs diverged was in RAM, due to the channel differences. The dual-channel and quad-channel systems featured 8GB of DDR3/1600, while the tri-channel had 6GB of DDR3/1600. The difference is very unlikely to impact our benchmarks, as none cross the 6GB boundary.
For the Phenom II and FX-8150, we used an Asus Crosshair V Formula motherboard. With the FX-8150, we used a specific UEFI developed by Asus for testing, while a public UEFI was used with the Phenom II X6. The Core i7-2700K was tested with a Gigabyte GA-Z68X-UD3H-B3 board, the Core i7-990X with an Intel DX58SO2 Smackover 2 board, and the Core i7-3960X blew away everyone from the socket of a new Asus P9X79 Deluxe board.
And that’s really the upshot of all this. In the Intel three-way showdown, we figured the Core i7-3960X would give us only a slight boost over the Core i7-990X. Instead, the newer hexa-core demolished its older sibling in just about every test. The 3960X really shined in multithreaded tests, with encoding taking 20 percent less time. Using Sony Vegas 10, we saw the 3960X kick out a 27 percent faster encode. 3D rendering saw increases from 13 to 27 percent. And that pesky Core i7-2600K, which occasionally beats the pricier 990X? The 3960X put it in its place in most of our tests or ran dead even with it. In actual per-core performance, Sandy Bridge and Sandy Bridge-E were about the same, but the extra cache, additional memory bandwidth, and four more threads make the 3960X not just a winner, but a decisive champ today. The Core i7-3960X is simply the fastest CPU we’ve ever tested and puts red meat back on the menu for PC enthusiasts.
That doesn’t negate the value of the 2600K and new 2700K, though. Systems using those chips are far cheaper to build, offer more than enough performance, and have a solid upgrade path. So how do you pick? Generally, we’re sticking to our recommendation that if you do 3D rendering, video editing, or other workstation tasks for a living, the 3960X is a must-have CPU. If you also find yourself encoding a lot of media, those extra cores are well worth it. However, if you primarily game and don’t get paid by the hour to render video or perform other processing-intensive tasks that need the cores, the 2600K/2700K is still a killer value—for now. That’s likely to change when Intel releases its quad-core version of the Sandy Bridge-E. If the part is price competitive, it might simply make more sense to build on that chip, which has a better upgrade path for an enthusiast.
And what about AMD? To be fair, we don’t think the FX-8150 should be compared to the new 3960X or the 990X, as those chips cost four times as much. But what about the 2600K? Even there, the FX-8150 has a tough time and can get beaten pretty badly by Intel’s second-fiddle Sandy Bridge. AMD actually thinks the eight-core FX-8150 is a better match with Intel’s Core i5-2500/2500K parts (a 2600K with less cache and no Hyper-Threading). How meaningful that is really depends on how you view the glass. In one way, it’s great that AMD finally has a part that is at least competitive with some of Intel’s higher-tier Sandy Bridge CPUs. But seen differently, how good is it that after all this time and a major redesign, the best AMD can do with an octo-core CPU is compete with a cheaper Intel quad-core chip? We know that for people who only pay attention to core counts (like they did megahertz), the sound of eight cores is really appealing. But with GPU and CPU cores starting to blur, does it really matter how many “cores” you have? Just as we once had to keep in mind that a 2.13GHz Athlon XP could kick the crap out of a Pentium 4 clocked 1GHz faster, perhaps we have to stop looking at CPUs in terms of cores but instead look at, well, the model number.
It’s not all downer news for AMD, though. We saw several signs of great performance with the new chip. Up against the Phenom II X6, the FX-8150 offers a serious boost in performance in several encoding tests. In fact, in many encoding tests where the Phenom II X6 is road kill, the FX-8150 offers, umm, Sandy Bridge-like performance. In fact, in our MainConcept test where we only do one-pass rendering, the FX-8150 mangles the vaunted 2600K. In other tests, such as POV Ray, Bibble, and HandBrake, the FX-8150 pulls pretty damn close, too.
As sad as some AMD fans will be that Bulldozer doesn’t flatten Sandy Bridge, it’s probably as close as AMD has been in some years.
| | 3.6GHz FX-8150 | 3.3GHz Phenom II X6 1100T | 3.3GHz Core i7-3960X | 3.4GHz Core i7-2600K | 3.46GHz Core i7-990X |
|---|---|---|---|---|---|
| Cinebench 10 Single Core | 4,080 | 4,128 | 6,363 | 6,011 | 5,176 |
| Cinebench 10 Multi Core | 20,277 | 18,735 | 35,638 | 23,315 | 28,019 |
| POV Ray 4.7 (sec) | 213.08 | 227.25 | 143.1 | 218.93 | 163 |
| Fritz Chess Benchmark | 11,704 | 11,504 | 13,823 | 13,065 | 13,122 |
| Intel Burn Test (Gflops) | 63.1 | 29 | 152 | 89.7 | 72 |
| Sony Vegas Pro 10 (sec) | 3,193 | 4,382 | 1,900 | 2,752 | 2,429 |
| ProShow Producer (sec) | 1,171 | 1,610 | 856 | 1,043 | 1,037 |
| CyberLink Espresso 6.5 CPU (sec) | 429 | 420 | 293 | 379 | 316 |
| CyberLink Espresso 6.5 Discrete GPU (sec) | 412 | 414 | 287 | 329 | 333 |
| CyberLink Espresso 6.5 QuickSync (sec) | N/A | N/A | N/A | 311 | N/A |
| 7-Zip 12 Threads (MIPS) | 20,400 | 18,149 | 30,156 | 19,046 | 30,178 |
| 7-Zip Max Thread Load (MIPS) | 20,773 | 17,952 | 30,156 | 19,288 | 30,178 |
| Valve Particle Test (fps) | 108 | 119 | 299 | 179 | 243 |
| Dirt 2 (fps) | 120 | 70 | 187 | 189 | 188.4 |
| Far Cry 2 (fps) | 111.2 | 107.1 | 253 | 202.3 | 207.1 |
| Unigine 2.5 (fps) | 53.9 | 54.2 | 54.4 | 54.2 | 53.8 |
We wind down 2011 with the curtain falling on Intel’s X58 chipset and LGA1366 socket. Many folks will lament the end of LGA1366, but the socket is now three years old. That’s pretty good for Intel, which seems to measure its socket lifetimes in dog-to-human ratios. So, a 3-year-old socket for Intel is like a 9-year-old socket for AMD.
Replacing LGA1366 is Intel’s new LGA2011 socket and the accompanying X79 chipset. A beast of a socket, LGA2011 supports enough pin-outs for four channels of RAM as well as the PCIe 3.0 connections that are now in the CPU core (see our write‑up on Sandy Bridge-E for the full skinny on PCIe 3.0). Storage fiends are sure to be disappointed with X79 motherboards. The X79 chipset was originally to ship with a bountiful 10 SATA 6Gb/s connections—yes, 10!—but in the end, all we get is the same peripheral controller hub layout as the P67 and Z68—two SATA 6Gb/s ports plus four SATA 3Gb/s ports. Board vendors are making up for the deficit by adding third-party controllers for SATA 6Gb/s drives.
On the LGA1155 side, the good news is that your motherboard will survive the transition to Intel’s Ivy Bridge chip. The bad news is that you may want to upgrade anyway. That’s because early next year, Intel is expected to introduce its Panther Point chipset. Coupled with a new Ivy Bridge CPU, Panther Point will add an updated peripheral controller hub to improve performance and compatibility with some SSDs. On the “what took so frakking long?” front, Panther Point will also finally offer native USB 3.0 on some ports. USB 3.0 functionality should also improve when Microsoft’s Windows 8 ships with a native USB 3.0 stack.
On the AMD side of the aisle, there isn’t much news, except that the company will introduce a new FM2 socket with its Trinity APU. Trinity essentially updates the existing Llano chip with newer cores and an updated GPU. What’s not known is if FM2 CPUs will be compatible with existing FM1 motherboards.
There’s a lesson that everyone in the city of Los Angeles wishes its builders had taken to heart: You don’t wait for congestion before you lay down new infrastructure—you do it before the roads are clogged with cars.
That’s the mantra the PCI-SIG has followed, and the organization isn’t stopping. Before PCIe 3.0 motherboards and cards have even hit the road, the PCI-SIG is working on PCIe 4.0. The problem will be in finding clever ways to scale the popular interface without making it a cost burden. With the move from PCIe 2.0 to PCIe 3.0, the PCI-SIG was able to double the effective speed by reducing overhead in the protocol. So, while PCIe 2.0 transfers data at five gigatransfers per second (GT/s), PCIe 3.0’s transfer rate is 8GT/s, yet it doubled the bandwidth per lane from 500MB/s to 1,000MB/s.
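The overhead reduction in question was the move from PCIe 2.0’s 8b/10b encoding to a leaner 128b/130b scheme, and the per-lane numbers fall right out of that arithmetic. Here’s a sketch (figures are theoretical, per lane, per direction):

```python
# Per-lane PCIe bandwidth: line rate (GT/s) x encoding efficiency, in MB/s.
def lane_mbs(gtps, payload_bits, line_bits):
    """Theoretical per-lane, per-direction bandwidth after encoding overhead."""
    return gtps * 1e9 * (payload_bits / line_bits) / 8 / 1e6

pcie2 = lane_mbs(5, 8, 10)     # 8b/10b encoding burns 20% of the line rate
pcie3 = lane_mbs(8, 128, 130)  # 128b/130b burns only ~1.5%

print(round(pcie2))  # 500  -> PCIe 2.0's 500MB/s per lane
print(round(pcie3))  # 985  -> PCIe 3.0's ~1,000MB/s per lane
```

So even though the raw signaling rate only climbed 60 percent (5GT/s to 8GT/s), the usable bandwidth per lane roughly doubled.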
Unfortunately, it’s unlikely that PCIe 4.0 will see such a dramatic gain. The PCI-SIG hopes to keep the interface on copper instead of moving to optical, which is cost prohibitive today. PCI SIG is considering a pay-as-you-go model, where only high-end applications need implement the highest bandwidth. Don’t fret that your PCIe 3.0 board will be replaced immediately. The PCIe 4.0 spec won’t be finalized until 2012, and we’d expect it to take until 2013 or later for it to materialize on desktops.
The PCI-SIG is also continuing to move forward with a new external cabling spec for PCI Express. The new cable could feature from one to four lanes, each capable of transfer rates from 2.5GT/s to 8GT/s. That would let a cable potentially transfer 4GB/s of data. At that speed, an external PCIe cable could give even Thunderbolt a run for its money.
And it’s Thunderbolt that the PCI-SIG and others have their eyes on, too, as that spec continues to ease forward. Earlier this year at Computex, it was thought that the new X79 chipset would include native Thunderbolt support but it won’t (although board vendors are free to add it). Intel, meanwhile, plans on releasing two new T-bolt controllers next year: Eagle Ridge and Light Ridge. The company will also release a controller called Port Ridge that will likely go into devices at the end of a Thunderbolt chain.
We’re not sure if Thunderbolt is headed for victory or failure, but we do know that several motherboard vendors intend to integrate Thunderbolt controllers into next year’s crop of boards as an extra feature.
The USB Implementers Forum (USB-IF) is also active. The organization hasn’t laid out any concrete plans to compete with Thunderbolt and it doesn’t like to spar verbally with competing standards, but officials assure us that USB 3.0 and its future iterations can take on any competing standard in performance, and its ubiquity gives it the power to truly dominate.
That ubiquity could become even greater if the organization is successful at convincing notebook makers and other electronic device manufacturers to support a universal, USB-based charging system. The idea is to use standard power bricks and Micro USB plugs to charge everything from digital cameras to external hard drives, laptops, and monitors.
Consumers could then recycle power bricks instead of throwing them into the landfill. The USB-IF says it can deliver anywhere from 2.5 watts (enough to charge a phone) up to 100 watts (enough to power a monitor or laptop). The topology would even allow you to power your laptop or phone from a monitor.
In performance, USB officials say the spec has plenty of speed left. Today’s USB 3.0 operates at 5Gb/s and the protocol can easily surpass 25Gb/s. Again, we don’t know who will win this battle, but we know that the PCI-SIG and USB-IF don’t intend to simply roll over and let Thunderbolt take over.
With solid-state drives already pushing the limits of 6Gb/s SATA, and the price per gigabyte of SSDs dropping rapidly, how will mechanical drive vendors make traditional storage more appealing—aside from its still-killer price/capacity ratio? According to Seagate’s Joni Clark, we can expect to see more hybrid drives toward the end of 2011 and into 2012, starting with Seagate’s second-gen Momentus XT, with a 750GB capacity and 8GB of NAND cache. DIY hybrids, like the ones offered by Intel and OCZ, will continue as well, while other disk makers could introduce on-disk hybrids to compete with Seagate’s.
On the desktop side, expect 1TB/platter drives to become standard in 2012, enabling one-platter 1TB drives and four-platter 4TB drives early in 2012, with a few 5TB drives here and there. From there, drive sizes will stay the same for a while until the tech for higher areal density improves and drive makers focus on speed. Seagate already announced that it's dropping 5,400rpm and “green” drives from its lineup and focusing exclusively on 7,200rpm drives. Our sources indicate that desktop hybrid drives—perhaps 1TB drives with 16GB of NAND for caching—could arrive in 2012, as well.
It took years to hit the 3Gb/s SATA throughput limit, but SSDs are already close to saturating the 6Gb/s spec, which has barely reached mainstream adoption. So where do they go from here? Expect drives with second-gen SandForce controllers to continue shipping through 2012, joined by other 6Gb/s controllers. Samsung has already launched its 6Gb/s controller, and Indilinx (bought by OCZ in 2011) has released the Everest 6Gb/s SATA controller, which for now is mostly powering Ultrabook-style devices. Look for SSD vendors and controller companies to continue to refine 6Gb/s performance, especially in metrics like random reads and writes and sustained writes.
Speaking of Ultrabooks: Intel’s vision for ultraportable computing requires SSDs, and lots of ‘em, in new form factors. mSATA and the just-announced SATA µSSDs, as well as 7mm 2.5-inch drives, will become common, if not as common as 9.5mm 2.5-inch drives.
PCIe SSDs, like OCZ’s RevoDrive series, will continue to be a small part of the market, and will offer one way to bypass the 6Gb/s SATA bandwidth limit, but will eventually be supplanted by SATA Express.
SATA Express and SATA µSSD
In August 2011, the Serial ATA International Organization (SATA-IO) announced two new SATA specifications for tablet devices and desktops, respectively. SATA µSSD provides a connectorless SATA specification for tablets and ultraportables. Rather than use the familiar SATA connectors, a SATA µSSD is soldered directly to the motherboard and uses a ball‑grid array package to pass SATA commands to the SSD. This connectorless spec will allow smaller, faster SSDs to be used in tinier devices.
Concept drawings for SATA Express suggest that some connectors could accept either PCIe or SATA.
SATA Express, on the other hand, is designed to overcome the bandwidth limitations of 6Gb/s SATA without replacing 6Gb/s SATA with another spec. SATA Express uses the PCIe interface but with SATA software commands. The specification is still being worked on, but concepts we’ve seen show both PCIe/SATA combo connectors and dedicated SATA connectors that use the PCIe form factor and pin-outs. SATA Express will allow 8Gb/s and 16Gb/s connections.
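Those two speed grades line up neatly with one and two lanes of PCIe 3.0 signaling. A sketch, assuming the per-lane figures from the PCIe 3.0 discussion (the helper name is ours, not from the spec):

```python
# SATA Express borrows PCIe 3.0 lanes; each lane signals at 8Gb/s on the wire
# (roughly 985MB/s of payload after 128b/130b encoding overhead).
LANE_GBPS = 8  # PCIe 3.0 line rate per lane

def sata_express_gbps(lanes):
    """Raw link speed in Gb/s for a SATA Express link using `lanes` PCIe 3.0 lanes."""
    return lanes * LANE_GBPS

print(sata_express_gbps(1))  # 8  -> the 8Gb/s grade
print(sata_express_gbps(2))  # 16 -> the 16Gb/s grade
```

Either grade comfortably clears the ~600MB/s ceiling that today’s 6Gb/s SATA SSDs are bumping against.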
Both SATA Express and SATA µSSD are scheduled to appear in the second half of 2012, but since neither specification is due to be finished until the end of 2011, there’s a chance neither will appear in 2012 at all.
According to Intel, 40 percent of all consumer laptops sold next year will be Ultrabooks. In fact, the company has staked $300 million on it. That’s the figure behind Intel’s Capital Ultrabook Fund, which goes to companies that develop hardware and software technologies to enhance the Ultrabook experience.
So what is an Ultrabook? By Intel’s own definition, it’s a notebook that’s no more than 21mm (or .83 inches) thick; that features at least five hours of battery life, but preferably as much as eight hours (as measured by MobileMark 2007); that can resume power from hibernation in seven seconds or less using Intel’s Rapid Start technology; and that exposes hardware features for Intel’s Anti-Theft and Identity Protection technology in the BIOS/firmware.
The cornerstone of the Ultrabook experience, however, is an Intel CULV processor. The category will launch at the end of 2011 with second-gen Sandy Bridge CPUs, but it’s expected to really take off when Ivy Bridge CPUs are released in 2012. Intel believes that the combined power efficiency and performance capabilities of its low-voltage procs with integrated graphics—when packaged in a sleek, highly portable form factor—will make Ultrabooks a compelling alternative to tablets.
Of course, another very important factor to the Ultrabook equation is price. After all, no one is going to pass over a $200 (thanks, Amazon!)–$500 tablet in favor of a $1,300 thin-and-light, no matter how capable or energy efficient it might be. That’s why Intel is strongly encouraging its OEM partners to price their Ultrabooks below $1,000. At that price point, Intel believes, consumers can be swayed by value.
Price, however, is not a requirement for the Ultrabook designation. And indeed, at least a couple of the early models slated for 2011 release appear to be priced at over a grand. Lenovo’s U300s, for example, is expected to retail at $1,200 for the base model (1.6GHz Core i5-2467M and 128GB SSD) and $1,600 for the step up (1.6GHz Core i7-2657M and 256GB SSD). Pricing for Toshiba’s upcoming Z830 Ultrabook has yet to be announced. But with Acer’s Aspire S3 debuting at $900, we might see price pressure come to bear on future models.
As Intel sees it, a ramp up in production, the development of new, focused technologies (à la that $300 million fund), and the performance enhancements of Ivy Bridge could make Ultrabooks the must-have mainstream device of 2012.
In the span of nine months, 2011's tablet SoC of choice, the Nvidia 1GHz dual-core Tegra 2, has gone from blazing to blasé. Expect 2012 to be the year of the quad-core tablet. As early as February of this year, both Nvidia and Texas Instruments had announced the early stages of their quad-core mobile chip development.
Nvidia's Project Kal-El uses Variable Symmetric Multiprocessing (vSMP) and promises tantalizing benefits, such as lower power consumption, higher performance per watt, faster website load times, console-quality gaming, and faster multitasking. It packs four ARM Cortex-A9 CPU cores, a fifth companion core for low-power operation, and a 12-core GPU onto one chip. The CoreMark benchmark results showing roughly double the performance of Tegra 2 were run with Kal-El's cores clocked at 1GHz; we expect those cores to hit 1.5GHz when Kal-El reaches production smartphones and tablets.
Nvidia's quad-core Project Kal-El SoC adds a fifth "companion" CPU to handle low-power and background tasks, such as email syncs and social media updates.
TI's quad-core OMAP 5 touts similar benefits and will comprise two ARM Cortex-A15 cores (up to 2GHz) and two slower ARM Cortex-M4 cores.
Typical 10-inch Android tablet display resolutions of 1280x800 will no doubt ramp up next year. Fusion Garage CEO Chandra Rathakrishnan thinks the resolutions may double the present number of total pixels, and Ted Theocheung, VP of the PC division at Synaptics, believes tablets will gravitate to full 1080p resolution (1920x1080) as early as 2012 or as late as 2013.
However, too much added resolution increases the price of the display, power consumption, and processor load. Dr. Raymond Soneira of DisplayMate Technologies says, "A good technical and marketing compromise for tablet resolution is 200ppi. For the 10.1-inch Android tablets, 1792x1120 works out to 209ppi."
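Soneira's 209ppi figure checks out with simple Pythagorean math: pixel density is the diagonal pixel count divided by the diagonal screen size in inches. A quick sketch:

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch: diagonal pixel count over diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

# Today's typical 10.1-inch Android tablet panel:
print(round(ppi(1280, 800, 10.1)))   # ~149 ppi
# Soneira's suggested ~200ppi compromise:
print(round(ppi(1792, 1120, 10.1)))  # ~209 ppi
# Full 1080p on the same panel size:
print(round(ppi(1920, 1080, 10.1)))  # ~218 ppi
```

Note that full 1080p on a 10.1-inch panel lands only slightly above the 200ppi sweet spot, which helps explain why both predictions are plausible.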
Soneira adds that the first tablets with OLED displays should debut in 2012, although at high-end prices. Meanwhile, high-performance in-plane switching (IPS) LCDs like those in the iPad 2 and Kindle Fire will become the norm as manufacturers forsake the inferior TN LCDs seen on the Motorola Xoom and Acer Iconia.
Theocheung brings further good tidings, saying that tablets will become thinner. "The removal of a discrete touch sensor will help reduce the thickness," he says, "allowing the overall stack-up of the traditional touch sensor module to shrink down to as thin as 0.55mm." Theocheung's work with Microsoft also foretells new multitouch breakthroughs in Windows 8 that will allow slate computers to perform full-time finger tracking of up to 10 fingers, whereas most current models top out at five.
"The whole spectrum of hardware innovation that you see in the PC space will be carried over to the tablet world," Rathakrishnan predicts.
Prices on graphics cards have seen massive cuts in the past few months. While that’s great for gamers looking to upgrade, it’s also a sign that new GPUs are on the horizon. When are we likely to see next-generation GPUs and graphics cards?
That’s a tough question for several reasons. One of the key manufacturers of GPU chips, TSMC, is prepping its next-generation 28nm manufacturing process, which is likely online now. Demand has been extremely high, though, forcing the company to hike prices, and thus pushing back shipments of 28nm GPUs.
That said, AMD demonstrated its 28nm next-generation GPU at its Fusion 2011 event in Taiwan in October. And a leaked mobile GPU roadmap indicates that AMD products will ship in early 2012 for mobile applications. The AMD 7000-series desktop graphics cards will also likely make an early 2012 appearance.
What’s interesting about AMD’s new architecture is a greater emphasis on compute capabilities. While the 6000 series was no slouch on the compute front, that GPU design wasn’t as tuned for compute as Nvidia’s Fermi architecture. AMD’s goal is to share the same memory space as the CPU, making it easier for software developers to write apps that utilize the GPU—although the 7000 series may not quite hit that target. The other interesting tidbit leaked recently is that AMD might be using Rambus XDR2 memory, running at up to 8,000MHz effective throughput.
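To put that 8,000MHz effective figure in perspective, peak memory bandwidth is simply the data rate times the bus width. The bus widths below are illustrative assumptions on our part, not part of the leak:

```python
# Hypothetical memory-bandwidth math for the rumored XDR2 setup.
# Only the 8,000MHz effective data rate comes from the leak; the
# bus widths are illustrative assumptions.

def bandwidth_gb_s(effective_mhz, bus_bits):
    """Peak bandwidth: transfers/s times bytes moved per transfer."""
    return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

for bus in (256, 384):
    print(f"{bus}-bit bus: {bandwidth_gb_s(8000, bus):.0f} GB/s")
```

Even on a modest 256-bit bus, that data rate would yield 256GB/s of peak bandwidth, a hefty jump over current-generation GDDR5 cards.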
There is some confusion about the process technology. One leak showed early 7000-series GPUs being built on the existing 40nm technology. This isn’t unprecedented; the Radeon HD 6970 was originally targeted for 32nm, but ended up shipping on 40nm. Whatever process technology is used, the lowest-end card to show up on any roadmap is a Radeon HD 7950. More on that in a moment.
Nvidia is also busy as a bee, prepping its next-generation graphics cards based on its Kepler architecture. It’s looking like the first samples of Kepler GPUs are on track for a late 2011 delivery to Nvidia. That means a late winter or early spring 2012 ship date for retail cards, at best. Like AMD’s new GPUs, Kepler will be built on 28nm. Nvidia’s been fairly quiet about Kepler, unlike its openness regarding Fermi, so little is actually known about features. Kepler is likely to be a significant enhancement to Fermi, but not a complete architecture redesign.
So it’s likely we’ll see midrange and high-end graphics cards in the first quarter of 2012. Meanwhile, entry level graphics are going through a quiet revolution.
THE FATE OF ENTRY LEVEL
Part of that is AMD’s push for Fusion. We’ve already seen the first Fusion APUs from AMD, and while the traditional x86 CPU performance is somewhat anemic, the GPU is clearly stronger than any previous integrated GPU. Offering up to 400 Radeon GPU cores, the GPU inside the higher-end 3800 series APUs gives a tantalizing glimpse into the future of PC graphics.
For its part, Intel made gains with the Intel HD Graphics on Sandy Bridge, particularly on the video side. When Ivy Bridge debuts in early 2012, expect the CPU itself to be an incremental improvement, but graphics to be significantly improved, offering full DirectX 11 support, double the compute horsepower, and greater bandwidth. Ivy Bridge will likely be “good enough” for entry level gamers.
High-end gamers, of course, will still want to add a discrete graphics card, but mainstream users may not need additional GPU horsepower. Intel has hinted that Ivy Bridge’s GPU will run OpenCL and DirectCompute applications entirely on the GPU cores, which would be a significant departure for an Intel GPU.
PREP YOUR WALLETS
So, yes, if you’re looking to upgrade, prices are very attractive for current-generation cards, but it might pay to hold off until new midrange and high-end graphics cards from both AMD and Nvidia partners arrive, certainly by spring of 2012. After all, every time we think that GPUs have gotten fast enough, games like Deus Ex: Human Revolution and The Witcher 2 come along that crush current-card performance.
Now you know what the PC is likely to look like in 2012, but what kind of rig will enthusiasts be rocking in 2013? To figure it out, we gaze into our crystal ball. We suspect LGA2011 motherboards will remain the platform of choice. By then, Intel should have introduced Ivy Bridge-E. We suspect that the thermal savings from the switch to 22nm will help Intel get the eight-core version of Ivy Bridge within reasonable thermal limits.
By 2013, DDR4 will still be in gestation, so we don’t think our machines will yet sport the next-gen memory. We do, however, suspect that RAM prices will have come down drastically. So, while a fully loaded Sandy Bridge-E machine might pack 32GB of RAM (eight 4GB modules) for just $200, a 2013 Ivy Bridge-E rig might pack 64GB (eight 8GB modules) at proportionate pricing.
By 2013, PCIe 3.0 GPUs will have long been available, but we won’t even hazard a guess as to which brand—AMD or Nvidia—will be on top. SSDs will have grown, but will still be bottlenecked by SATA 6Gb/s. By then, though, it’s possible prices will have dropped enough that you can get a 480GB SSD for $250. HDD space will remain static at roughly 5TB, which it will hit in 2012.