At a time when the ranks of quad-core Android devices are swelling rapidly, Intel is trying to find its feet in this highly competitive market with its single-core “Medfield” Atom chip. But Mike Bell, GM of Intel's Mobile and Communications Group, does not view Medfield’s current lack of multiple CPU cores as a cause for concern.
If you’re a PC hipster who loves your quad-core CPU, we have bad news for you: Intel has officially declared that multi-core processors have gone mainstream. The announcement was made by Intel’s chief technology officer, Justin Rattner, to a captive audience at the company’s annual Intel Developer Forum in San Francisco last week.
Think of all the things you could do with a 100-core processor, or even a 1,000-core processor. Climb the ranks of Maximum PC's Folding@Home team! Encode videos like a boss! Run Crysis! Ah, if only it were as easy as piling on more cores for exponential performance gains. There's more to it than that, and a pair of researchers from North Carolina State University say they've developed two techniques that will help maximize the performance of multi-core processors by allowing them to retrieve data more efficiently.
On the surface, asking Intel to manufacture a 16-core Atom processor sounds like an odd request. But that's exactly what Microsoft has done. Not for Windows on the home desktop front, mind you, but for use in servers, Computerworld reports.
According to one Microsoft executive, low-power processors like Intel's Atom chip and AMD's Bobcat present a "huge opportunity" to tackle energy consumption woes. Even though these chips weren't really developed with server tasks in mind, they're more energy efficient at some server workloads than Xeon processors, says Dileep Bhandarkar, an engineer with Microsoft's Global Foundation Services.
"I think Intel is going to have to do it at some point. We're seeing more of the ARM guys going after the server market and just to compete on power performance per watt, Intel is going to have to rely on the Atom CPU," said Linley Gwennap, founder and principal analyst at The Linley Group.
While that's true for ARM, Microsoft is hesitant to move away from x86. Bhandarkar said Microsoft would consider using ARM-based servers "if ARM can show [Microsoft] enough value over an x86 solution...but there has to be a clear performance benefit."
Let's get a few things straight -- there aren't any 1,000-core processors on the horizon, there's no such chip on Intel's processor roadmap, and we've yet to really tap into the computing power of today's multi-core architectures. Got it? Great, let's move on.
Despite the above disclaimers, Intel engineer Timothy Mattson was more than willing to sit and talk with ZDNet about what it would take to build such a monstrous CPU. Here's some of what he had to say:
"The challenge this presents to those of us in parallel computing at Intel is, if our fabs could build a 1,000-core chip, do we have an architecture in hand that could scale that far? And if built, could that chip be effectively programmed?
"The architecture used on the 48-core chip could indeed fit that bill. I say that since we don't have cache coherency overhead. Message-passing applications tend to scale at worst as the diameter of the network, which runs roughly as the square root of the number of nodes on the network. So I can say with confidence we could scale the architecture used on the SCC to 1,000 cores."
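Mattson's square-root claim is easy to check with a back-of-the-envelope calculation: on a roughly square 2D mesh (the on-chip network the SCC uses), the worst-case hop count -- the diameter -- grows as the square root of the number of tiles. A minimal sketch, assuming a square-ish mesh of dual-core tiles like the SCC's (the exact tile arrangement here is our illustration, not Intel's floorplan):

```python
import math

def mesh_diameter(cores, cores_per_tile=1):
    """Diameter (worst-case hop count) of a roughly square 2D mesh of tiles."""
    tiles = math.ceil(cores / cores_per_tile)
    side = math.ceil(math.sqrt(tiles))     # columns in the mesh
    rows = math.ceil(tiles / side)         # rows needed to hold all tiles
    return (side - 1) + (rows - 1)         # farthest corner-to-corner route

# SCC-style: 48 cores as dual-core tiles -> diameter 8 hops
print(mesh_diameter(48, cores_per_tile=2))    # 8
# A hypothetical 1,000-core version of the same mesh
print(mesh_diameter(1000, cores_per_tile=2))  # 43
```

Going from 48 to 1,000 cores is roughly a 21x jump in core count, but the diameter only grows about 5x (8 to 43 hops) -- which is the scaling argument Mattson is making.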
Mattson went on to say that as far as programming goes, that too is feasible "as a cluster on a chip using a message-passing API."
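To make the "cluster on a chip" idea concrete, here is a hedged sketch using Python's multiprocessing module: each worker owns its own slice of the data and coordinates only through explicit messages, standing in for cores that share no coherent cache. (Intel's actual SCC programming library is C-based; this is just the same model in miniature, with names of our own invention.)

```python
from multiprocessing import Process, Queue

def worker(rank, inbox, outbox):
    # Each "core" owns its slice of the data; coordination happens via
    # explicit messages, not shared memory -- no cache coherency required.
    chunk = inbox.get()
    outbox.put((rank, sum(chunk)))

def cluster_sum(data, n_cores=4):
    """Scatter `data` across n_cores workers by message, gather partial sums."""
    results = Queue()
    inboxes = [Queue() for _ in range(n_cores)]
    procs = [Process(target=worker, args=(r, inboxes[r], results))
             for r in range(n_cores)]
    for p in procs:
        p.start()
    for r in range(n_cores):
        inboxes[r].put(data[r::n_cores])                  # scatter by message
    total = sum(results.get()[1] for _ in range(n_cores)) # gather replies
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(cluster_sum(list(range(100))))  # 4950, same as a serial sum
```

The appeal of this style at 1,000 cores is exactly what Mattson describes: because no worker ever peeks at another's memory, there is no coherency traffic to scale, only messages.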
AMD has had a tough time keeping up with Intel in terms of performance per core, but if all you’re looking at is price per core, the company has a clear advantage over the competition. Instead of focusing its efforts on driving up clock speeds, AMD is now putting more emphasis on increasing the number of cores it can fit on a single die.
In a blog post titled “Cores – More is Better”, AMD’s John Fruehe, Director of Product Marketing for Server/Workstation products, shared some shipment data, and according to Fruehe, people are obsessed with cores. "In looking through sales data for the first half of 2010, 12-core processors clearly outsold their 8-core counterparts – by a wide margin. I was expecting that there would be a slight bias towards the 12-core, but I figured there were plenty of applications where the extra clock speed of an 8-core might be popular," Fruehe wrote. "Apparently I was wrong, customers are voting with their budgets, and cores matter."
Software developers are starting to catch on to the trend by offering more and more multi-threaded applications, an approach that would clearly favor AMD’s strategy of increasing the core count above all else. The upcoming AMD Bulldozer architecture will feature chips with up to 16 cores, a product that will likely fill a valuable niche in the virtual server market.
What do you think will ultimately be more valuable, more speed per core or more cores per die?
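One way to frame that tradeoff is Amdahl's law: a faster clock speeds up everything, while extra cores only speed up the fraction of a workload that actually runs in parallel. A quick sketch -- the 90%-parallel workload and the 10% clock advantage below are invented for illustration, not benchmarks of any real chips:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Hypothetical matchup: 12 slower cores vs. 8 cores clocked 10% higher,
# on a workload that is 90% parallelizable.
twelve = amdahl_speedup(0.90, 12)        # ~5.71x over a single core
eight  = 1.10 * amdahl_speedup(0.90, 8)  # ~5.18x after the clock bump
print(twelve > eight)  # True: for highly parallel work, the extra cores win
```

Flip the workload to something mostly serial and the answer flips with it, which is why the "speed vs. cores" debate has no single winner.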
Intel recently scrapped plans to launch Larrabee-based discrete graphics products while hinting that the multi-core GPU technology still holds promise as far as high-performance computing goes. Today it unveiled plans to launch a new line of products, based on its Many Integrated Core (MIC) architecture, to cater to the needs of various HPC segments.
The announcement implies that all the time and effort spent on Larrabee hasn't gone down the drain since the MIC architecture is itself based on a bunch of Intel projects, including Larrabee and the Single-chip Cloud Computer.
It should be clear to anyone familiar with the Single-chip Cloud Computer (SCC), a research microprocessor containing 48 Intel Architecture cores, that a commercial product derived from it is almost bound to feature a ridiculous number of cores. Indeed, the first offering in the new line will feature 50 cores on a single chip. The chip, codenamed Knights Corner, will be made on a 22nm process.
"Intel's Xeon processors, and now our new Intel® Many Integrated Core architecture products, will further push the boundaries of science and discovery as Intel accelerates solutions to some of humanity's most challenging problems," said Kirk Skaugen, vice president and general manager of Intel's Data Center Group.
It’s a pretty good question, and we really don’t have a ton of ideas. There might not be a lot of real-world use cases for a 48-core setup, but maybe you could come up with a few if the price was right. Say, for instance, AMD gave you those 48 cores in exchange for a good one. Well, that’s just what the company is doing.
AMD wants people to submit essays, videos, or blog posts explaining how they’d use a monster 48-core server to “make the world a better, more interesting place”. The contest is seemingly meant to promote the upcoming Magny-Cours based Opteron CPUs AMD will be releasing this quarter. If you can come up with the best idea, AMD will provide you with four new AMD Opteron 6174 12-core CPUs, a TYAN S8812 motherboard, and a copy of Windows Server 2008.
So what'll it be? Super powered Folding@home box to cure cancer? Rendering farm for underprivileged, inner-city video producers? Check out the full rules here before you formulate any plans. Anyone planning on submitting an entry? Drop us a line if you win…
Intel’s graphics offerings have traditionally been a little lackluster, but that could be about to change. Intel has reportedly informed its corporate partners that the new Sandy Bridge CPUs will be available by year’s end, and will pack a significant graphics performance increase. Intel is claiming as much as a doubling of performance. A “doubling” compared to what is currently unclear, but one could assume Intel is referring to the current Nehalem architecture.
The Sandy Bridge parts will be based on a 32nm manufacturing process and will have an on die graphics processor. The CPU core will be capable of clocks up to 4GHz and some models will have eight cores. ATI and Nvidia plan to move to 28nm graphics cores, which would leave Intel the only purveyor of 32nm cores.
We’d all love to see a doubling of performance over the poor Intel HD graphics found in the current Nehalem line. Only time will tell if this is just more wild speculation.
Tilera today announced its new TILE-Gx line of processors, including the TILE-Gx100, the world's first 100-core CPU. According to Tilera, the 100-core part outperforms any other processor on the planet by at least a factor of four.
"The launch of the TILE-Gx family, including the world's first 100-core microprocessor, ushers in a new era of many-core processing. We believe this next generation of high-core count, ultra high-performance chips will open completely new computing possibilities," said Omid Tahernia, Tilera's CEO.
While the 100-core part is not meant to run Crysis (so please don't ask) or any other desktop application, it does offer 10 times the performance per watt of Intel's fastest Nehalem-based server chips. Assuming Tilera can convince customers to switch from Intel and Texas Instruments, the TILE-Gx100 will likely end up in data centers powering cell phone network equipment and cloud computing ventures.
Tilera says its 100-core chip will start shipping in Q4 of this year.