Some of the biggest breakthroughs in future tech revolve around some of the smallest materials on Earth. Even calling these technologies "micro" overstates their actual size by orders of magnitude. From the nano-scale heat transfer of nanowick cooling down to the single-atom realm of graphene and quantum computing, our white papers will help you wrap your head around the maximum potential of these minuscule technologies.
Heat is the enemy of modern electronics. As integrated circuits consume more electrical power and become ever smaller, with their constituent components packed closer and closer together, they generate more and more heat. If that thermal energy isn't effectively dissipated, it will damage and eventually destroy the circuitry.
Today's most popular cooling solutions utilize heatsinks and heat pipes, often augmented by powered fans. But that technology is rapidly reaching its practical limit and is threatening to impede the chip industry's progress. Enter nanowick cooling: While fundamentally based on the same mechanics as the heat pipe, a nanowick cooler is capable of dissipating 10 times more heat. We'll explain conventional cooling techniques, how nanowick cooling functions, and why it performs so much better.
Look inside your PC and you'll find passive heatsinks and/or heat pipes, typically fabricated from aluminum or copper, clinging to your motherboard chipset and maybe even your RAM. For components that generate even more heat—your CPU and videocard, for example—the coolers are usually augmented by fans. A heatsink simply uses thermal conductivity to draw heat from the point-of-contact to a cooler area at the opposite end of the metal. Segmenting that far end into a host of very thin fins increases the heatsink's total overall surface area, making it easier for the heat to pass into the air; adding a fan draws the heat away even faster.
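To see why fins matter, here's a rough Python sketch (all dimensions are hypothetical, chosen only for illustration) comparing the exposed surface area of a bare plate to the same plate with thin fins added:

```python
def finned_area(base_w, base_d, fin_count, fin_h):
    """Approximate exposed surface area (cm^2) of a plate with thin vertical fins.

    Ignores fin thickness and edge area -- each fin simply contributes
    its two large faces (2 * height * depth).
    """
    flat = base_w * base_d
    per_fin = 2 * fin_h * base_d
    return flat + fin_count * per_fin

bare = finned_area(4, 4, 0, 0)      # bare 4x4 cm plate: 16 cm^2
finned = finned_area(4, 4, 20, 3)   # add 20 fins, each 3 cm tall
print(bare, finned, finned / bare)  # 16 496 31.0
```

A roughly 30-fold jump in surface area is why even a modest fan over a finned heatsink moves far more heat than a flat slab ever could.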
Heat pipes, typically fabricated from copper, operate on a similar principle, and are often used in conjunction with a heatsink. The pipes contain a small amount of fluid—often water—and are sealed at low atmospheric pressure, which means the fluid boils at a relatively low temperature when it's in close proximity to the heat source. The resulting vapor carries the heat to the far end of the tube, where it condenses back into a liquid. Gravity and capillary forces then return the liquid to the heat source, and the cycle repeats.
A nanowick cooling system is based on the same physics; but as its name implies, it operates on a vastly smaller scale, with pipes and fins that are nearly as thin as cell membranes. A nanowick draws a liquid coolant toward the hot surface of the chip via capillary action, a phenomenon that moves fluids through small spaces based on molecular charges. Since capillary pressure increases as the channel through which the fluid moves narrows, nanowick pressure can be orders of magnitude greater than a conventional heat pipe.
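Capillary pressure in a circular channel is commonly estimated with the Young-Laplace relation, ΔP = 2γcos(θ)/r. This short sketch (the channel radii are hypothetical examples, and water is assumed to wet the surface perfectly) shows how shrinking the channel from micro- to nano-scale boosts the pressure by orders of magnitude:

```python
import math

def capillary_pressure(surface_tension, contact_angle_deg, radius):
    """Young-Laplace capillary pressure (Pa) in a cylindrical channel."""
    return 2 * surface_tension * math.cos(math.radians(contact_angle_deg)) / radius

# Water at room temperature: surface tension ~0.072 N/m, perfect wetting (0 deg)
gamma, theta = 0.072, 0.0

# A conventional heat-pipe wick pore (~100 micrometers) vs. a nanowick channel (~50 nm)
p_conventional = capillary_pressure(gamma, theta, 100e-6)
p_nanowick = capillary_pressure(gamma, theta, 50e-9)

print(f"conventional wick: {p_conventional:.0f} Pa")       # ~1,440 Pa
print(f"nanowick:          {p_nanowick:.2e} Pa")           # ~2.9 MPa
print(f"ratio:             {p_nanowick / p_conventional:.0f}x")
```

Narrowing the channel by a factor of 2,000 raises the capillary pressure by the same factor, which is the heart of the nanowick advantage.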
A nanowick cooler operates in a fashion very similar to a conventional heat pipe: Fluid in a sealed chamber boils and vaporizes, carrying heat away from the source as it rises. The vapor then condenses back into a fluid and returns to the plate that's in direct contact with the source of the heat and the cycle repeats.
Nanowicks are created through a sintering process in which tiny copper spheres are fused together to form a porous sponge. To make the pathways even smaller, carbon nanotubes with a diameter of about 50 nanometers are inserted into the mix. Since carbon repels liquid, the nanotubes are coated with another substance, often copper. The specific pattern and channel size affects the wicking speed. Nanowicks can even be designed to separate different fluids or to filter substances.
The ultimate nanowick design will be the perfect balance of material, surface area, and capillary channel size: A thick wick has a large contact patch that increases the area over which it can draw heat, but the corresponding downside is a reduced capillary effect. Researchers are still searching for the perfect balance.
The rest of a nanowick system echoes the design of a typical heat pipe. The heated liquid—often water—evaporates and travels to the opposite end of a sealed tube, where the liquid condenses. The nanowick then draws the fluid back to a plate—also known as a thermal ground plane—that's in direct contact with the component that's being cooled. And then the process repeats.
A conventional heat pipe is capable of absorbing roughly 50 watts of energy per square centimeter. Researchers at Purdue University's Birck Nanotechnology Center recently developed new nanowick materials that have proven capable of absorbing more than 550 watts per square centimeter without any occurrence of dryout, the point at which the coolant completely disappears from the loop and the system fails. This suggests that the researchers have only scratched the surface of nanowick technology's capacity for absorbing heat.
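A back-of-the-envelope sketch puts those figures in perspective: because nearly all of the heat is carried by evaporation, dividing the heat flux by water's latent heat of vaporization (~2.26 MJ/kg, ignoring sensible heating) gives the mass of coolant that must cycle every second:

```python
# Latent heat of vaporization of water, J/kg (near its boiling point)
L_VAP = 2.26e6

def evaporation_rate(heat_flux_w_per_cm2, area_cm2=1.0):
    """Mass of water (kg/s) that must evaporate to carry away the given heat flux."""
    return heat_flux_w_per_cm2 * area_cm2 / L_VAP

conventional = evaporation_rate(50)   # conventional heat pipe, W/cm^2
nanowick = evaporation_rate(550)      # Purdue nanowick result, W/cm^2

# Express as grams per hour over one square centimeter
print(f"conventional: {conventional * 1000 * 3600:.0f} g/h")  # ~80 g/h
print(f"nanowick:     {nanowick * 1000 * 3600:.0f} g/h")      # ~876 g/h
```

Moving nearly a kilogram of water per hour through a wick one square centimeter across, without ever running dry, is why the capillary pressure numbers above matter so much.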
The first nanowick cooling systems are being deployed in high-power electronic devices developed for the automobile and defense industries. In the auto industry, such applications include the switching transistors that drive the electric motors in hybrid and battery-powered cars. Military applications include the electronic components embedded in radar and laser devices used in vehicles and aircraft. The integrated circuits used in both applications can generate more than 300 watts per square centimeter—far more than conventional heat pipes are capable of dissipating.
Nanowick coolers for consumer-electronics devices will likely reach the market within the next two years, a development that could enable the design and manufacture of even faster CPUs, GPUs, and other chips—especially those designed for mobile applications, where cooling is always a challenge. One day, even your smartphone might harbor one of these small wonders.
The tip of your pencil contains the future of computing, touch-screen displays, solar cells, gas detection, and the strongest, lightest physical materials ever. Each scribble leaves layers of this recently isolated super-substance.
It's called graphene, and it's a one-atom-thick hexagonal-grid pattern of carbon atoms. It looks a little bit like chicken-wire—or the Settlers of Catan board—only 100 million times smaller.
In its sheet form, it's the first two-dimensional crystalline substance ever isolated. It can be rolled into tubes—carbon nanotubes—that behave as a one-dimensional material, and can even be formed into a zero-dimensional ball. These multidimensional properties allow for new research and experiments down to a quantum-physics level.
We'll explain the coming graphene boom, how the material is harvested, and why this space-age material could change everything from airplanes to mobile phones.
The 2010 Nobel Prize in Physics was awarded to Andre Geim and Konstantin Novoselov for their research isolating graphene. Prior to their discovery and 2004 paper, scientists thought graphene couldn't exist as a stable, single, one-atom-thick sheet.
In what Geim calls a "Friday night experiment"—a test on a whim at the end of the day—the scientists affixed adhesive tape to a chunk of graphite. Peeling it back, they tore off clusters of graphene more than 100 layers thick. But by sticking the tape back to itself, they cleaved off thinner and thinner layers.
In the end, they discovered single layers of graphene flakes by viewing the substance on top of silicon oxide. A slightly pink halo revealed the location around the virtually clear substance; about 98 percent of light passed through the layer. In subsequent experiments, other scientists reproduced their technique, setting off a boom in graphene experimentation.
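That faint halo corresponds to a well-known figure: a single graphene layer absorbs roughly 2.3 percent of visible light (a number from the broader literature, used here for illustration). Transmission through a stack then follows simple exponential attenuation:

```python
ABSORPTION_PER_LAYER = 0.023  # each graphene layer absorbs ~2.3% of visible light

def transmission(layers):
    """Fraction of light transmitted through a stack of graphene layers."""
    return (1 - ABSORPTION_PER_LAYER) ** layers

print(f"1 layer:    {transmission(1):.1%}")    # ~97.7% -- the faint pink halo
print(f"10 layers:  {transmission(10):.1%}")
print(f"100 layers: {transmission(100):.1%}")
```

A single layer is almost invisible, but absorption compounds quickly, which is why thicker flakes stand out while monolayers require the silicon-oxide trick to spot.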
Graphene can be produced in many ways in addition to this low-tech method. In 2009, scientists devised a means of growing graphene suited to larger commercial applications: Researchers heat a silicon carbide wafer to 1,300 degrees Celsius, at which point the silicon layer bakes off, leaving the carbon atoms, which realign into graphene. Graphene grown this way can be patterned or cut into shapes for microelectronics.
Graphene's many unique properties lead to a wide range of potential applications. Two hundred times stronger than steel, it's possibly the lightest, strongest material ever discovered, suitable for airplane parts and other high-stress, low-weight applications. It conducts electricity with extremely low resistance—electrons move through it faster than through silicon—making it suitable for many electronics applications.
Graphene is a 2D building material that, when isolated, can be wrapped into buckyballs, rolled into nanotubes, or stacked into graphite.
These traits, combined with graphene's transparency, could also make the material a key component in building more functional lightweight OLED, LCD, and touch-screen panels. And with its large surface-to-volume ratio, graphene in powder form could even improve batteries.
Graphene's electrical properties are leading to branching ideas about the future of computing. "You can try to do everything in a similar way but find a material that can maybe do it better [than silicon]," Dr. Roland Kawakami, an associate professor of physics and astronomy at the University of California, Riverside explains. "Maybe we can make a better transistor."
Following this logic, graphene could be built into tiny transistors that can move single electrons around with electromagnetic forces. An electron will come to an obstacle in its path—like a wedge—and have to move around it in one of two directions. This choice reproduces the binary basis for the rest of the computer. Theoretically, these transistors would be smaller, consume less power, and yield much higher speeds than current silicon. Heck, we might see 100GHz mobile phones based on the technology in coming decades.
The alternative to transistor replacement, according to Dr. Kawakami, is to "try to do computing in a different way. So…maybe you can have additional benefit since you're doing something fundamentally different," Kawakami says. His research relates to spin computing, and rethinking processing paradigms down to an atomic level.
Here's the logic: Electrons don't just have an atomic charge, they also have spin, behaving like tiny magnets with a north and south pole. Spin computers can take advantage of this polarity to process and store data; it's similar to the magnetic alignment of current hard disks. This spin can be oriented in many directions, easily accommodating the current binary concept as "up" or "down," while allowing for further expansion.
The problem is that when researchers try to inject spin into semiconductors, they have to cool them to cryogenic levels, such as 100 Kelvin. Even then, it works poorly. Graphene can maintain this spin much longer and do so at room temperatures. Kawakami has researched ways of extending the spin further by layering graphene with a thin insulator. Spin is injected through the insulator, and the extra material helps prevent it from leaking out immediately.
The spin can now last significantly longer than a nanosecond, with theoretical estimates suggesting it could last between a microsecond and a millisecond. While these times don't sound long, consider a processor that runs at 1GHz—a graphene-based spin computer could retain information for up to a million cycles.
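The arithmetic behind that claim is simple clock-cycle counting:

```python
def retained_cycles(clock_hz, coherence_s):
    """How many clock ticks fit inside a given spin-coherence time."""
    return clock_hz * coherence_s

CLOCK = 1e9  # a 1GHz processor

print(retained_cycles(CLOCK, 1e-9))  # 1 cycle at a nanosecond of coherence
print(retained_cycles(CLOCK, 1e-6))  # a thousand cycles at a microsecond
print(retained_cycles(CLOCK, 1e-3))  # a million cycles at a millisecond
```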
With so many uses and with the cost per yield continuously dropping, you can expect to see the first commercial uses of graphene in the next two to three years. More ambitious usage will, of course, take decades to develop. This said, some companies, such as Samsung, are already testing 30-inch graphene-based display prototypes.
Kawakami says, "There are certain things we can already do based on this last [research]." So, how long will it take until graphene computers make it to the market? "At the very optimistic end," Kawakami responded, "[it will take] at least 15 years."
Despite the misconception created by phrases such as "quantum leap," quanta are among the smallest known particles in the universe. If they weren't, quantum computing wouldn't be such a big deal.
At its core, quantum computing leverages the possible states associated with the quantum properties of a physical atom. The construction of a quantum computer involves the arrangement of entangled atoms. Quantum entanglement describes the state of a system containing two or more objects. The objects within such a system are associated in such a way that the quantum state of any one of them cannot be adequately described without full mention of the others—even if the objects are separated from each other.
If that's starting to sound a bit complex, you're probably an Einstein devotee. He and a few friends (Podolsky and Rosen, to name two), postulated that all physical objects have real values at all times. Unfortunately, thanks to the behavior of particles on the atomic level, that's not necessarily the case for quantum computing.
The core of a quantum computer starts with a quantum bit, or qubit as it's more often called. The qubit is the quantum equivalent of the digital computing "bit." However, while a bit must be either 1 or 0, a qubit's basis states are written |0> and |1> (the bracket notation indicates a ket, the vector that describes a quantum state)—and a qubit can also occupy a superposition of the two.
To visualize the possible states of a single qubit, we typically use a Bloch sphere. Within such a sphere, because of its on/off nature, a classical bit could sit only at the "north pole" or the "south pole," where |0> and |1> are positioned, respectively. The rest of the surface is inaccessible to a classical bit, but not to a qubit: A qubit's state can be represented by any point on the surface—any point. For example, the pure qubit state:

(|0> + i|1>) / √2

would lie on the equator of the sphere, on the positive y axis.
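For the curious, the mapping from a qubit's amplitudes to Bloch-sphere coordinates is standard: for a state a|0> + b|1>, x = 2Re(a*b), y = 2Im(a*b), z = |a|² − |b|². A short NumPy sketch confirms that the example state above, once normalized, lands on the positive y axis:

```python
import numpy as np

def bloch_coordinates(state):
    """Map a normalized single-qubit state (a, b) to (x, y, z) on the Bloch sphere."""
    a, b = state
    x = 2 * (a.conjugate() * b).real
    y = 2 * (a.conjugate() * b).imag
    z = abs(a) ** 2 - abs(b) ** 2
    return x, y, z

# The state (|0> + i|1>) / sqrt(2)
psi = np.array([1, 1j]) / np.sqrt(2)
print(bloch_coordinates(psi))  # ~(0, 1, 0): the equator, positive y axis

# For comparison, the two classical poles
print(bloch_coordinates(np.array([1, 0])))  # (0, 0, 1): north pole, |0>
print(bloch_coordinates(np.array([0, 1])))  # (0, 0, -1): south pole, |1>
```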
A quantum computation is performed by initializing this system of qubits with a quantum algorithm. "Initialization" here refers to some process that puts the system into an entangled state.
How to do that? In a natural state, subatomic particles decay into other particles. The decay follows the laws of conservation, and you can, therefore, generate pairs of particles that will be in certain predictable quantum states.
Purposefully initializing such a system typically entails one of the following methods: using spontaneous parametric down-conversion, where a nonlinear crystal is used to split incoming photons into pairs of photons of lower energy; using a fiber coupler to confine and mix photons; or using a quantum dot, a semiconductor whose excitons are bound within all three spatial dimensions, giving it properties that are somewhere between those of bulk semiconductors and those of discrete molecules, to trap electrons until decay occurs.
Typical computational gates use Boolean logic, but in quantum computing, these gates are represented by matrices, and can be thought of as rotations of the quantum state within a Bloch sphere (see the infographic below).
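As a concrete illustration (the Hadamard gate below is a standard textbook example, not one discussed in this article), here is the quantum NOT gate, plus a gate with no classical analog, applied as plain matrix multiplications:

```python
import numpy as np

# The NOT (Pauli-X) gate as a 2x2 matrix: a 180-degree rotation of the Bloch sphere
X = np.array([[0, 1],
              [1, 0]])

# The Hadamard gate: rotates a pole onto the equator, creating a superposition
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

ket0 = np.array([1, 0])  # |0>

print(X @ ket0)      # [0 1] -- NOT flips |0> to |1>
print(H @ ket0)      # ~[0.707 0.707] -- equal superposition of |0> and |1>
print(X @ X @ ket0)  # [1 0] -- two NOTs rotate back to |0>
```

Because each gate is a rotation, applying it twice in a row can undo it, which is exactly what the final line demonstrates.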
This Bloch sphere is the typical representation of a qubit and indicates its possible states. A typical bit would have states on the north and south poles of the sphere. A qubit's state can be represented by any point on the surface.
Manipulating these states presents the possibility of performing a mathematical operation on all of a qubit's states simultaneously. Because each qubit can represent 0 and 1 at the same time, two qubits can hold four values at once. Doubling that to four qubits pushes the possibility to 16 values, and so on: Every added qubit doubles the count, so processing power increases in an exponential fashion. It's akin to the way we started back in the dark ages with 4-bit, then 8-bit, then 16-bit processors until now we've reached 64-bit (on the desktop at least). Here as there, increasing the number of bits increases the data precision as well as the amount of data the CPU can handle in one fell swoop.
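The scaling described above is just powers of two—an n-qubit register holds 2^n simultaneous values:

```python
def amplitudes(num_qubits):
    """Number of simultaneous values an n-qubit register can hold."""
    return 2 ** num_qubits

for n in (1, 2, 4, 8, 64):
    print(f"{n:>2} qubits -> {amplitudes(n):,} values")
```

Two qubits give 4 values, four give 16, and a 64-qubit register spans more values than there are grains of sand on Earth, which is where the exponential advantage over classical word sizes comes from.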
While the first quantum processor was built back in 2009 by a team out of Yale University, a useful quantum computer is still at least that ubiquitous 10 years (if not further out to 50 years) away. Early quantum algorithms tried to exploit very simple quantum computing, using what's called "oracles." Like a Magic 8-Ball, they were designed to deliver yes or no answers. That's hardly adequate for even our most basic binary computer of today.
Beefing up a quantum computer is not simple. The overall goal is to stay small, but just the logic gates alone are a serious point of consideration. A 16-qubit computer can register a single "NOT" gate. Now imagine the possibilities beyond that.
We might be able to cut some of the clutter by borrowing from ternary computing, which stores data in "trits" offering three possible values apiece, rather than binary bits with two. It may be possible to transfer this concept to quantum computing with a roughly equivalent qutrit. That alone would reduce the number of gates significantly, possibly lowering a 50-gate construct down to one needing only nine gates.
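The general advantage is easy to see with digit counting: base-3 digits distinguish the same number of values with fewer symbols than base-2 digits do (the 50-gate-to-9-gate figure above is the article's own; this sketch only illustrates the underlying density argument):

```python
import math

def digits_needed(num_states, base):
    """Minimum number of base-`base` digits needed to distinguish num_states values."""
    return math.ceil(math.log(num_states, base))

# Distinguishing a million different values:
print(digits_needed(1_000_000, 2))  # 20 bits (or qubits)
print(digits_needed(1_000_000, 3))  # 13 trits (or qutrits)
```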
Coherence is another hurdle. Simply looking at a qubit (or in any other way letting it interact with the environment) will cause it to decohere, or dephase. Decoherence destroys superposition, which reduces the quantum computer's effectiveness—sometimes down to binary levels.
Still, once these impediments have been conquered, over whatever time period that might take, a quantum computer could tackle password and encryption problems, as well as simulations and design tasks, in a matter of heartbeats, where a conventional binary computer might require a lifetime. That's what makes them so magical.