Gigabit Ethernet may still outrun all but the most extreme SSD RAID configurations, but researchers can never rest on their laurels. Always hoping to invent the next big thing, scientists now have their sights set on Terabit Ethernet to help quell our insatiable hunger for bandwidth. A team of researchers from Australia, Denmark, and China has combined efforts to demonstrate terabit-per-second speeds using fiber optic cables, laser light, and an unusual glass called chalcogenide.
The group documented the results of its most recent trial in a paper published in the February 16, 2009, issue of Optics Express. Though the technology is promising, Ben Eggleton, research director for CUDOS (the Centre for Ultrahigh bandwidth Devices for Optical Systems), points out its current limitations: "The problem isn't injecting that much high speed data into an optical strand, called multiplexing, but retrieving data at such high rates." Conventional electronics are capable of injecting dozens of 10Gbps streams, but retrieving those streams any faster than 40Gbps is beyond our current capabilities.
The breakthrough here, however, isn’t in the speed itself but in proving the concept. Until the processing hardware catches up with our transmission capabilities, you won’t be finding this in routers anytime soon. Eggleton speculates that these concepts can be adapted to achieve slower, more manageable results, but the goal of this experiment was simply to prove that it was possible using fully photonic chips built with the same methods employed in current CMOS circuits. "It's years to complete," Eggleton said of turning these research efforts into a production technology, but these demonstrations "are starting to establish this is a serious proposition."
Can a computer exist without hardware? It can if it’s a virtual machine. A virtual machine is software that’s capable of executing programs as if it were a physical machine—it’s a computer within a computer. Virtual machines can be divided into two broad categories: process virtual machines and system virtual machines.
A process virtual machine is limited to running a single program. A system virtual machine, on the other hand, enables one computer to behave like two or more computers by sharing the host hardware’s resources. A system virtual machine consists entirely of software, but an operating system and the applications running on that OS see a CPU, memory, storage, a network interface card, and all the other components that would exist in a physical computer. For the remainder of this discussion, we’ll use the term “virtual machine” to refer to a system virtual machine.
Software running on a virtual machine is limited to the resources and abstract hardware that the virtual machine provides. And since a virtual machine can provide a complete instruction set architecture (ISA, a definition of all the data types, registers, addressing modes, external input/output, and other programming elements that a given collection of hardware is capable of working with), it can simulate hardware that might not even exist in the physical world.
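To make that idea concrete, here’s a minimal sketch, in Python, of a “virtual machine” for a four-instruction ISA we made up purely for illustration. No physical chip implements this instruction set, yet the loop below fetches, decodes, and executes it the same way hardware would; real system virtual machines do the same job at vastly greater scale for a full ISA such as x86.

```python
# A toy "virtual machine" for a made-up, four-instruction ISA.
# Purely illustrative: this instruction set exists only in this script,
# yet the loop below executes it much as physical hardware would.

PROGRAM = [
    ("PUSH", 6),     # put 6 on the stack
    ("PUSH", 7),     # put 7 on the stack
    ("MUL", None),   # pop two values, push their product
    ("PRINT", None), # pop and display the top of the stack
]

def run(program):
    stack = []                  # the VM's only "register file"
    for op, arg in program:     # fetch/decode/execute loop
        if op == "PUSH":
            stack.append(arg)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "PRINT":
            print(stack.pop())  # prints 42
        else:
            raise ValueError(f"unknown instruction: {op}")

if __name__ == "__main__":
    run(PROGRAM)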
Using virtual machines, a computer can run several iterations of an operating system—or even several different operating systems—with each OS isolated from and oblivious to the existence of the others. The only requirement is that each operating system must be capable of supporting the underlying hardware. And, of course, there must be enough resources (memory, hard disk space, CPU cycles, and so on) to support everything. You could use a virtual machine to run Linux on top of Windows, for instance, or you could run two versions of Windows and use one as a sandbox for testing software you wouldn’t trust on a “real” machine.
In one second, the nuclear fusion process taking place inside the sun produces enough energy to satisfy the needs of the earth’s population for nearly 500,000 years. Photovoltaic cells are capable of capturing some of that energy and converting it into usable electricity; unfortunately, today’s technology can’t do this very efficiently.
French physicist Edmond Becquerel first described the photovoltaic effect in 1839. He discovered that some materials were capable of producing small amounts of electricity when exposed to sunlight. The first photovoltaic cell, however, wasn’t created until 1883, and more than 70 years passed before the next major scientific advance took place, when researchers at Bell Labs developed the first crystalline silicon photovoltaic cell in 1954.
We invariably refer to the video memory in modern videocards as GDDR, differentiating it only by version (GDDR2, GDDR3, GDDR4, and now GDDR5), but the technology’s full acronym is actually GDDR SDRAM, which stands for Graphics Double Data Rate Synchronous Dynamic Random Access Memory.
“Double data rate” describes the memory’s capacity for double-pumping data: Transfers occur on both the rising and falling edges of the clock signal. This endows memory clocked at 800MHz with an effective data-transfer rate of 1.6GHz. “Synchronous” refers to the memory’s ability to operate in time with the computer’s system bus. This allows the memory to accept a new instruction without having to wait for a previous instruction to be processed, a practice known as instruction pipelining.
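To see how those numbers combine, here’s a quick back-of-the-envelope calculation in Python. The 800MHz clock is the example from above; the 256-bit bus width is an assumed figure chosen purely for illustration.

```python
# Back-of-the-envelope GDDR bandwidth math (example figures, not a spec).

clock_mhz = 800          # memory clock; the 800MHz example from the text
pumps_per_cycle = 2      # "double data rate": rising + falling clock edge
bus_width_bits = 256     # hypothetical 256-bit memory bus

effective_rate_mhz = clock_mhz * pumps_per_cycle            # 1600 "MHz effective"
bandwidth_gbs = effective_rate_mhz * 1e6 * bus_width_bits / 8 / 1e9

print(f"Effective data rate: {effective_rate_mhz} MT/s")     # 1600 MT/s
print(f"Peak bandwidth: {bandwidth_gbs:.1f} GB/s")           # 51.2 GB/s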
The shiny new hatchback you nudge in a street race dents slightly on the driver’s-side door. Although you’re playing a PC game built from beaucoup equations, the dent looks almost real. The 3D renderer sculpts all those numbers into images, with help from the video API (application programming interface). Those images, however, can be produced by completely different rendering techniques. Currently, the hardware and software industries are debating how best to utilize two graphics-rendering techniques: ray tracing and rasterization.
Hit the jump to see how 3D game rendering is changing with hardware advancements.
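As a rough picture of what ray tracing actually computes, here’s a minimal Python sketch of the ray-sphere intersection test a ray tracer repeats for every pixel; the scene values are made up for illustration. A rasterizer, by contrast, would project the car’s triangles onto the screen and shade whichever ones land in each pixel.

```python
import math

# A minimal ray-sphere intersection test -- the core operation a ray tracer
# repeats millions of times per frame. Scene values are made up for illustration.

def ray_hits_sphere(origin, direction, center, radius):
    """Return the distance to the nearest hit, or None if the ray misses."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c          # discriminant of the quadratic
    if disc < 0:
        return None                     # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0 else None         # ignore hits behind the camera

# Camera at the origin looking down -z toward a sphere two units away.
print(ray_hits_sphere((0, 0, 0), (0, 0, -1), (0, 0, -2), 0.5))  # ~1.5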