Austrian computing pioneer created "May Breeze" with 3,000 donated transistors
Every so often we learn about the passing of a pioneer in the field of computing, and this time it's Heinz Zemanek, creator of "May Breeze" (or Mailüfterl in German), the first computer in Europe to run solely on transistors instead of vacuum tubes. With the help of students at the Vienna University of Technology (TU Wien), the Austrian engineer and programmer built the machine from 3,000 transistors donated by Philips, along with 5,000 diodes, 1,000 assembly platelets, 100,000 solder joints, 15,000 resistors, 5,000 capacitors, and 20,000 meters of switching wire.
IBM has announced plans to invest $3 billion in semiconductor research and development over the next five years. The purpose of this investment is to develop smaller chips by designing smaller transistors.
Intel's doing a bang-up job of shrinking transistors and packing them in tighter than ever before, but let's face it: it's going to be hard to scale silicon down much further. That eventual wall is why engineers are pumped about the potential of graphene, a substance with more than 200 times the electron mobility of silicon. (Read: better potential performance.) Coaxing graphene transistors into switching off current to create the 1 and 0 signals we know and love has been tricky, however. Now Samsung says it's developed a solution that does just that, without limiting graphene's electron mobility.
Time to start firing the PR guys! As is the case with all technical products these days, AMD used a lot of lofty-sounding numbers and specs to make its new 8-core Bulldozer chips sound friggin’ awesome in the company’s press releases. Eight cores, four modules, a 315mm² die area, two billion transistors – actually, scratch that last one. Over the past weekend, AMD contacted several publications and said that, um, somebody screwed up. Eight-core Bulldozer chips actually have only 1.2 billion transistors. Oops.
Suddenly, like a plunging guillotine blade, Intel has severed any hope that competitors will match its chip-fabrication technology for years to come. Last month I observed that the rest of the industry was gaining a little ground on Intel by adopting high-k metal-gate (HKMG) transistors—only four years after Intel’s HKMG debut in 2007. But now comes Intel’s next big leap: tri-gate transistors.
File this one away for the future: graphene transistors. Graphene makes use of carbon rather than silicon, and transistors produced from it are capable of operating at 100 gigahertz, or about ten times faster than the fastest silicon transistors. And IBM has figured out a way to make production of these little beauties commercially feasible.
Graphene transistors aren’t new. But the methods for making them have been clumsy and inefficient. For example, sheets of graphene would be flaked away from graphite--a tricky process at best, and one that could only produce transistors with speeds up to 26 gigahertz.
IBM has devised a method for ‘growing’ graphene transistors on the surface of a two-inch silicon carbide wafer. The wafer is heated until the silicon evaporates, leaving behind a thin layer of epitaxial graphene, from which a transistor is produced. In addition, IBM improved the process by using better materials for parts of the transistor, such as the insulator.
Speedier transistors translate into speedier computing. Graphene transistors, therefore, hold promise for bumping up hardware potential on motherboards and add-in cards. (Not CPUs, though--because graphene transistors are so hard to switch fully off, graphene won’t work for CPUs.) Things will get speedier, but not right away for consumers: the first applications are projected to be in military devices, and only after that might graphene transistors work their way into consumer electronics.
Moore’s Law states that approximately every two years, the number of transistors that can be placed on an integrated circuit doubles. This has held true for the last 50 years. But there will come a point one day when physics puts a stop to that. Eventually the boundaries of atomic scale will limit transistor density. However, a new breakthrough in the field of quantum computing may provide hope for future advances. Until now, a quantum computing device had to be designed for one, and only one, operation. But scientists from the National Institute of Standards and Technology (NIST) have constructed the first programmable quantum processor.
Quantum processing units are fundamentally different in a number of ways. First, where a regular bit can be only 1 or 0, a quantum bit (or qubit) can exist in a superposition of both, settling on a definite 1 or 0 only when it is observed. Additionally, quantum computers aren’t bound by Boolean operators like ‘and’, ‘or’ and ‘not’. Finally, two qubits can be “entangled”, meaning that when observed they will always yield the same value, even if they are far apart.
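To make the superposition and entanglement ideas a little more concrete, here is a minimal, purely illustrative NumPy sketch. It isn’t NIST’s ion-trap hardware or any real quantum toolkit; it just samples measurement outcomes for a single qubit in superposition and for an entangled pair of qubits:

```python
import numpy as np

rng = np.random.default_rng()

# A qubit is a unit vector over the basis states |0> and |1>.
# Unlike a classical bit, it can sit in a superposition of both,
# and only yields a definite 0 or 1 when measured.
plus = np.array([1.0, 1.0]) / np.sqrt(2)       # equal superposition

def measure(state):
    """Collapse a single-qubit state to 0 or 1 with Born-rule probabilities."""
    p0 = abs(state[0]) ** 2
    return 0 if rng.random() < p0 else 1

print([measure(plus) for _ in range(10)])      # a random mix of 0s and 1s

# Two entangled qubits: the Bell state (|00> + |11>) / sqrt(2).
# Measuring both always yields matching values, however far apart they are.
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)             # amplitudes for |00> and |11>

def measure_pair(state):
    """Sample a joint outcome (qubit A, qubit B) from a two-qubit state."""
    probs = np.abs(state) ** 2
    outcome = rng.choice(4, p=probs)           # index into |00>,|01>,|10>,|11>
    return outcome >> 1, outcome & 1

print([measure_pair(bell) for _ in range(5)])  # always (0, 0) or (1, 1)
```

Run a few times, the single-qubit measurements come out as a roughly even mix of 0s and 1s, while the entangled pair always returns matching values, which is exactly the behavior described above.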
The NIST computer consists of two quantum gates: a single-qubit gate and an entangled two-qubit gate. The gates used two beryllium ions, stimulated with UV lasers, to carry out operations. The test programs that were run returned correct results 79 percent of the time. Certainly not perfect, but a huge step forward. You won’t be dropping one of these into a socket on your motherboard anytime soon, but maybe someday.
For those of you who have never met Francois Piednoel, he’s a member of the performance marketing team at Intel. It’s always entertaining to carry on a conversation with Francois. He was the guy at Intel who first steered me to the idea of building small systems around an X58 micro ATX motherboard and undervolting the CPU while maintaining the reference clock speed. This is sort of the inverse of overclocking, and it results in pretty high-performance systems that run cooler and quieter than the norm. What worries Piednoel, though, is this: what are desktop users ever going to do with six cores?
Moore’s Law means we get CPUs with more and more transistors, or smaller CPUs with the same number of transistors as past products. But what are the practical benefits for users?
Intel co-founder Gordon Moore once predicted that the number of transistors on an integrated circuit would double every 18 to 24 months, a prediction which has been famously dubbed Moore's Law. But according to market research firm iSuppli, the move to 18nm will signal the end of Moore's Law.
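To get a feel for what that doubling cadence implies, here’s a quick back-of-the-envelope sketch. It assumes the commonly cited starting point of the Intel 4004, roughly 2,300 transistors in 1971, a figure that isn’t part of the article itself:

```python
# Rough check of Moore's Law, assuming the commonly cited Intel 4004
# starting point (about 2,300 transistors in 1971) -- an assumption,
# not a figure from the article above.
START_YEAR, START_TRANSISTORS = 1971, 2_300
DOUBLING_PERIOD_YEARS = 2.0  # the "every 18 to 24 months" rule of thumb

def projected_transistors(year):
    """Transistor count implied by doubling every two years since 1971."""
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return START_TRANSISTORS * 2 ** doublings

# By 2011 the formula lands around 2.4 billion -- the same ballpark as the
# one-to-two-billion-transistor Bulldozer figures quoted earlier.
print(f"{projected_transistors(2011):,.0f} transistors projected for 2011")
```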
"The usable limit for semiconductor process technology will be reached when chip process geometries shrink to be smaller than 20nm, to 18nm nodes," said Len Jelinek, director and chief analyst, semiconductor manufacturing, for iSuppli. "At those nodes, the industry will start getting to the point where semiconductor manufacturing tools are too expensive to depreciate with volume production, i.e., their costs will be so high, that the value of their lifetime productivity can never justify it."
So when exactly will it happen? According to iSuppli, in the year 2014. In 2007, Gordon Moore said his prediction could be upheld for at least another decade. Five years from now, one of them is going to be wrong.