Fujitsu this week laid a humdinger on Intel by unveiling the world's fastest CPU. The new chip is thought to be about 2.5 times faster than anything Intel has in its lineup, while also consuming two-thirds less power.
You can put any grandiose ideas of picking one up and setting new benchmarking records to rest, as the 'Venus' chip, otherwise known as the SPARC64 VIIIfx, is designed for supercomputers. Fujitsu claims the new CPU can process a mind-boggling 128 billion computations per second, making Fujitsu the first Japanese firm in a decade to wear the raw CPU performance crown.
Built on a 45nm manufacturing process, Venus comes with eight cores and an integrated memory controller spread across two square centimeters. Fujitsu says it will take several years to come up with practical applications for the new chip, but that it could see use in pharmaceutical research, astronomy, weather prediction, scientific research, and Folding@Home while running Crysis (we may have added the last two on our own).
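That 128 billion figure is a theoretical peak: cores × clock rate × floating-point operations per cycle. As a rough sketch, the clock rate and per-cycle throughput below are assumed values (2GHz and 8 flops per cycle per core), chosen only to be consistent with the eight cores and the quoted total, not specs taken from the announcement:

```python
# Back-of-the-envelope peak throughput for an 8-core chip.
# CLOCK_HZ and FLOPS_PER_CYCLE are assumed figures, not quoted specs.
CORES = 8
CLOCK_HZ = 2.0e9          # assumed clock rate: 2GHz
FLOPS_PER_CYCLE = 8       # assumed per-core throughput

peak_flops = CORES * CLOCK_HZ * FLOPS_PER_CYCLE
print(f"{peak_flops / 1e9:.0f} billion computations per second")  # → 128
```

Any combination of clock and per-cycle throughput multiplying out to 16 billion per core would give the same headline number.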
For those looking for another way to get their very own supercomputer, you’re in luck! Nvidia has recently announced that their CUDA-based Tesla C1060 GPU is available in Dell’s Precision R5400, T5500 and T7500 workstations effective immediately.
If you’re worried that just one of these GPUs isn’t enough to handle your hardcore needs, worry not – just one C1060 has enough power to control the main system of the European Extremely Large Telescope project (reportedly the world’s largest). According to Jeff Meisel with National Instruments, a workstation “equipped with a single Tesla C1060 can achieve near real-time control of the mirror simulation and controller, which before wouldn't be possible in a single machine without the computational density offered by GPUs."
Nvidia recently announced the immediate availability of their ready-to-use Tesla GPU Preconfigured Cluster, aimed at the scientists, engineers and researchers of the world.
According to Nvidia, the Tesla Cluster will provide up to 30 times the performance of a CPU-only cluster while using only a fraction of the power. One example they provide to drive this point home is that of BNP Paribas’ (a French bank) Corporate and Investment Banking division, which recently replaced 500 CPUs that consumed 25kW of power with smaller CPU clusters and two Tesla S1070 1U systems that together consume just 2kW. And along with the lowered power expenditure, they received better performance.
According to Andy Keane, Nvidia’s Tesla General Manager, “There are 15 to 20 million engineers, scientists and researchers around the world struggling for time on supercomputers, which has led to a huge pent-up demand for computation. With the launch of the Tesla Preconfigured Cluster, every one of them can easily deploy a GPU-powered supercomputing cluster that dramatically reduces their power consumption while still advancing the pace of their work.”
With IBM having recently announced it was building a supercomputer with 1.6 million cores capable of 20 petaflops of computing power, it's hard to get too jazzed over a single petaflop. But for Europe, the petaflop barrier is one that hasn't yet been broken – though that's about to change.
IBM and German research center Forschungszentrum Juelich are collaborating to build a new Blue Gene/P System supercomputer for Europe. It will mark the first time that a supercomputer capable of delivering a petaflop of performance will be located outside of the U.S.
"With speeds over a Petaflop, this new Juelich-based supercomputer offers the processing ability of more than 200,000 laptop computers," explains Professor Thomas Lippert, lead scientist of the Juelich supercomputing center. "In addition to raw power, this new system will be among the most energy efficient in the world."
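Lippert's comparison implies a per-laptop figure you can back out yourself. A quick sketch (the 5-gigaflop result is simply what the quoted ratio implies, a plausible figure for a laptop of the era, not a spec from the article):

```python
# What "over a petaflop = more than 200,000 laptops" implies per laptop.
PETAFLOP = 1.0e15        # floating-point operations per second
LAPTOPS = 200_000

per_laptop = PETAFLOP / LAPTOPS
print(f"{per_laptop / 1e9:.0f} gigaflops per laptop")  # → 5
```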
The Blue Gene/P System will house 294,912 processors, 144TB of memory, and 6PB of hard drive storage contained within 72 server racks. Adding to the historical significance, it will also be IBM's first water-cooled supercomputer. IBM says the use of water cooling will result in a 91 percent reduction in the air conditioning units that would otherwise have been required to cool the data center.
We don't want to spook anyone wearing an aluminum foil deflector beanie, but pretty soon the U.S. government will be the owner of two more supercomputers from IBM, one of which will scale to 20 petaflops, enough power to probably be able to penetrate industrial strength aluminum to read minds.
It was less than a year ago that IBM became the first to break the petaflop performance mark, also for the government. The new IBM BlueGene-class systems will make their home at the Lawrence Livermore National Laboratory and will handle analysis of the U.S. nuclear stockpile (and spy on your thoughts). But the full 20 petaflops of computing power won't be available right away. The deal stipulates IBM will deliver one of its BlueGene/P systems capable of 500 teraflops by April, with a follow-up system called Sequoia to be delivered sometime in 2012.
"The Sequoia system will be 15 times faster than BlueGene/P with roughly the same footprint and a modest increase in power consumption," said Herb Schultz, manager in IBM's deep computing group.
BlueGene/P uses a modified PowerPC 450 processor clocked at 850MHz with four cores and up to 4,096 processors in a rack. The Sequoia system uses 45nm processors with as many as 16 cores per chip running "significantly faster." Sequoia will also have 1.6 petabytes of memory feeding its 1.6 million cores.
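The per-rack figure also lines up with the Juelich Blue Gene/P installation mentioned earlier. A quick sanity check, assuming all 72 of its racks are fully populated:

```python
# BlueGene/P rack arithmetic: 4,096 processors in a fully
# populated rack, 72 racks in the Juelich installation.
PROCS_PER_RACK = 4096
RACKS = 72

total_procs = PROCS_PER_RACK * RACKS
print(total_procs)  # → 294912, the Juelich system's processor count
```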
“Personal” and “supercomputer” aren’t words that would usually appear side by side, unless you’re a mastermind at Nvidia. With the announcement of their latest machine, the Tesla Personal Supercomputer, they’re looking to bring what was normally thought of as gigantic, to the small time.
The Tesla only costs 1/100th of what a normal supercomputer cluster would cost, and only takes up a small fraction of the space. Thanks to heterogeneous computing, the process of CPUs acting in tandem with GPUs, it all fits right into a desktop form factor.
It’s reported that the Tesla is based on Nvidia’s CUDA architecture, making it possible for the system to be programmed in the C language. Up to 960 cores can work side by side inside the system, and these machines are claimed to already be in use at MIT, Cambridge and other institutions.
How much will your own personal supercomputer run you? An admittedly reasonable 10 large. Hey, 960 cores is a bargain at that rate.
While Intel’s Atom processor is meant for low-power machines, such as netbooks, it’s found a new use in a not-so-likely candidate – a supercomputer.
Silicon Graphics (SGI) has started exhibiting a new concept for a supercomputer that could pack almost 10,000 Intel Atom processors into one rack. SGI is planning to name it the Molecule.
The Molecule could reportedly offer the horsepower and memory bandwidth of more than 750 high-end desktop PCs, and consume only half the power. It would also occupy a meager 1.4 percent of the physical space.
For many, supercomputing seems out of reach. At most, we’ll usually just contribute our spare processor cycles to a distributed computing project. But Purdue University is looking to change all that with their latest venture, Rack-A-Node.
Rack-A-Node is a Flash-based game that puts you in the role of network admin, setting up each rack so it holds a solid cluster of servers good at tackling a variety of tasks. From chemistry to physics, it’s up to you to figure out whether you’ll need more CPU power, more RAM or a wicked fast connection.
While the game isn’t meant to actually turn the average man into a supercomputing whiz, it is meant to let us get one step closer to it. “This is a dry and boring topic even for geeks,” claimed Gerry McCartney, the chief information officer at Purdue. “So, we wanted a way to get people excited about these things.”
Evidently they’ve been asked to create a more sophisticated version of the game that would be designed as a learning tool. “It is not stupid right now, but it’s way too simple,” Mr. McCartney said.
A machine’s ability to think is something that’s been questioned for nearly half a century, thanks to mathematician Alan Turing. Turing, who helped decipher German military codes during WWII, created a test that is designed to find out if a machine can think on its own. The test consists of a machine attempting to fool a judge into believing that it could be a human by having a text-based conversation on any subject. If the computer’s responses convince the judge that they are speaking with a human, then it has passed the Turing test, and is believed to be capable of thought.
This Sunday, six computer programs will be put through the Turing test in an attempt to win their creator not only an 18-carat gold medal and $100,000, but to prove that computers are capable of thought. The programs competing for the prize go by the names Alice, Brother Jerome, Elbot, Eugene Goostman, Jabberwacky and Ultra Hal. While those sound like the names of rejected VH1 reality show contestants, the programs are far more intelligent, and won’t be spitting on any of their opponents anytime soon.
Should the computers be found to have the ability to think, it’ll raise ethical questions as to how conscious a computer is, and if humans have the “right” to switch them off.
But the Turing test isn’t for everyone. "The test is misguided. Everyone thinks it's you pitting yourself against a computer and a human, but it's you pitting yourself against a computer and computer programmer,” criticizes Professor AC Grayling of Birkbeck College. “AI is an exciting subject, but the Turing test is pretty crude."
Do you think you’ve got what it takes to decipher whether or not you’re talking to a computer? Test your mental mettle after the jump.
Sure, you overclock your rig to the bleeding edge, direct deposit your paycheck to Newegg, and are on the utility company’s watch list because of the blackouts you’ve been known to cause. Yes, you’re a badass power user, but let’s face it, none of your home-built rigs can touch these 10 beasts. So what if half of these machines only exist in the minds of sci-fi writers – their computational prowess transcends the fiction/reality plane, putting our mighty petaflop age to shame. Peruse this list for inspiration and then get building – you’ve got some catching up to do before you can compete with the real big boys. We won’t settle until our rigs achieve sentience.