The Peripheral Component Interconnect Special Interest Group (more commonly known as the PCI-SIG) unveiled the PCI Express 2.0 specification in January 2007. If you’re surprised that it took motherboard and GPU manufacturers nearly a year to introduce products based on this technology, keep in mind that it took 14 years for the industry to get this far.
The PCI concept was unveiled way back in 1993, at a time when the PC was just beginning its evolution from glorified word processor and oversized calculator to the full-fledged entertainment and business hub it is today. The ISA (Industry Standard Architecture) bus used in the original IBM PC-compatible architecture was too slow and primitive to handle the new CPUs, videocards, and peripherals being introduced.
Two other bus architectures, MCA (IBM’s proprietary Micro Channel Architecture) and EISA (the open Extended Industry Standard Architecture championed in large part by Compaq), preceded PCI, but neither technology gained significant traction. The VESA bus (promulgated by the Video Electronics Standards Association as a means of enabling faster video performance) was introduced shortly after MCA and EISA. VESA’s popularity was also brief, but the concept of providing an expansion slot dedicated to bandwidth-hungry devices such as videocards lived on.
Intel spearheaded development of the PCI bus in order to provide a direct connection between add-in cards and system memory and a bridged connection between add-in cards and the CPU via the front-side bus. The bridge was necessary because the PCI bus was clocked at a much lower frequency than the CPU. PCI delivered much more bandwidth than ISA (133MB/s, compared to ISA’s paltry 5MB/s), and it delivered plug-and-play capabilities that eliminated the need for jumpers and DIP switches.
The PCI-SIG consistently improved PCI’s performance, and the bus remains viable today, but its usefulness is limited. Since all PCI devices must share the same bandwidth, the bus hobbled videocard performance. Intel also realized that PCI wasn’t fast enough for graphics, so the company introduced AGP (Accelerated Graphics Port) in 1997. AGP, as its name implies, is not a true bus, but it did provide direct access to system memory and the CPU via the front-side bus. It also offered twice the bandwidth of the PCI bus: 266MB/s. AGP’s bandwidth increased to 2.1GB/s with the 2002 rollout of AGP 8X, but since the architecture was limited to a single slot, early dual-videocard solutions such as 3Dfx’s Voodoo SLI (Scanline Interleave) remained dependent on PCI.
PCI Express hit the market in 2004. This architecture delivers the same type of point-to-point connection that AGP offers; unlike AGP, however, PCI Express supports multiple devices through a shared switch. Rather than having each device negotiate for the use of the bus, each PCI Express device is granted direct and exclusive access to the switch. And instead of dividing bandwidth between multiple devices, as the PCI bus does, each PCI Express device is provided its own dedicated pipeline. In this respect, PCI Express behaves much like a tiny network on the motherboard.
Data is transmitted serially in packets over lanes, each of which consists of two wire pairs: one pair for transmitting and one for receiving. A one-lane (x1, or by-one) PCI Express connection carries one bit per cycle in each direction, delivering bandwidth of 2Gb/s each way. Multiple lanes can be grouped together, so an eight-lane (x8, or by-eight) PCI Express slot delivers 2GB/s in each direction, and an x16 slot provides 4GB/s each way.
You might also see PCI Express bandwidth expressed in terms of GT/s (GT stands for gigatransfers). Data traveling over PCI Express has a clock embedded in it: Every eight bits of data are encoded into a 10-bit symbol (a scheme known as 8b/10b encoding) that is decoded when it reaches the receiver. So the bus must transfer 10 bits in order to deliver 8 bits of actual data. A single PCI Express 1.1 lane, for instance, is capable of a raw data rate of 2.5GT/s, but its effective data rate is 2.5 x (8/10), or 2Gb/s.
PCI Express 2.0 doubles the data rate of each lane to 5GT/s, or an effective data rate of 4Gb/s. That means an x16 PCI Express slot will be capable of delivering a staggering 8GB/s in usable bandwidth. Videocards will likely continue to be the biggest beneficiaries of the new standard, and not only from the perspective of data throughput. PCI-SIG is working on a new power spec that feeds these power-hungry beasts more wattage.
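All of these bandwidth figures reduce to simple arithmetic: raw transfer rate, times the 8b/10b encoding efficiency, times the number of lanes. Here's a minimal sketch in Python (the function name is our own illustration, not part of any spec) that reproduces the numbers above:

```python
# Raw rates and lane counts come from the article; 8b/10b encoding
# means only 8 of every 10 bits transferred carry actual data.

def effective_gbps(raw_gt_per_s: float, lanes: int = 1) -> float:
    """Usable bandwidth in Gb/s, per direction, after 8b/10b overhead."""
    return raw_gt_per_s * 8 / 10 * lanes

# PCI Express 1.1: 2.5GT/s per lane
assert effective_gbps(2.5) == 2.0           # x1 slot: 2Gb/s each way
assert effective_gbps(2.5, 16) / 8 == 4.0   # x16 slot: 4GB/s each way

# PCI Express 2.0 doubles the raw rate to 5GT/s per lane
assert effective_gbps(5.0) == 4.0           # x1 slot: 4Gb/s each way
assert effective_gbps(5.0, 16) / 8 == 8.0   # x16 slot: 8GB/s each way
```

These figures are per direction; because each lane transmits and receives simultaneously, traffic flows both ways at once.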
[Image caption: Point-to-point communication and a switch are key ingredients in the secret sauce that endows PCI Express with its awesome bandwidth. Each lane can acquire exclusive access to the switch, so none of them must compete for bus bandwidth. In this respect, PCI Express behaves much like a LAN.]
Even midrange videocards, such as those based on Nvidia’s new GeForce 8800 GT and AMD’s Radeon HD 3870, need more power than the 75 watts a PCI Express 1.1 slot can deliver. These cards are equipped with an additional 6-pin power socket that draws another 75 watts directly from the power supply; high-end cards have two such sockets, for a total draw of 225 watts (75 watts from the slot plus 150 from the auxiliary connectors). PCI-SIG president Al Yanes tells us that the consortium is working on a new power specification “that will support an increased power need to 225 or 300 watts. This spec is targeted for release in the first quarter of 2008.”
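The power budget works the same way at every tier: the slot's 75 watts plus 75 watts per auxiliary connector. A quick sketch of that arithmetic, with constants taken from the figures above and an illustrative function name:

```python
# Power figures from the article: a PCI Express 1.1 slot supplies up to
# 75W, and each auxiliary 6-pin connector adds another 75W.

SLOT_WATTS = 75
SIX_PIN_WATTS = 75

def card_power_budget(six_pin_sockets: int) -> int:
    """Maximum draw for a card with the given number of 6-pin sockets."""
    return SLOT_WATTS + SIX_PIN_WATTS * six_pin_sockets

assert card_power_budget(0) == 75    # slot power alone
assert card_power_budget(1) == 150   # midrange card with one socket
assert card_power_budget(2) == 225   # high-end card, per the article
```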
The new spec also offers dynamic link-speed management, so the speed of each lane can be increased or decreased on an as-needed basis. This should reduce power consumption, which would be particularly useful for battery-operated notebook PCs. A new link-bandwidth notification scheme will be able to notify software (such as the operating system or a device driver) of any changes to link speed and width. And new access-control services are available to help manage peer-to-peer transactions over the bus.
PCI Express 2.0 will be backward-compatible with PCI Express 1.1, so products designed for the older spec will continue to work in the new architecture. Obtaining the full benefit of the new technology, of course, will require that both the peripheral and the motherboard support the new standard. The fact that Intel just recently began shipping chipsets (the X38 Express) that support the new spec explains why only the very newest GPUs from AMD and Nvidia support PCI Express 2.0. AMD and Nvidia are lagging behind Intel on the chipset side. AMD announced that its new R790 chipset would support PCI Express 2.0 as we went to press. Nvidia is widely rumored to be adding support for the new spec to its unannounced nForce 7 series chipset.
Will you benefit from an early upgrade to PCI Express 2.0? “Initially, only the performance-centric solutions will move [to the new standard],” says Yanes. “But eventually, all will support it since it has functional enhancements that all new PCI-E solutions will want…. Graphics cards have been the solution demanding the most performance. We expect that to continue.” Other early adopter technologies will include enterprise-class storage products and high-speed networking.
Or you could wait for PCI Express 3.0. That standard is expected to increase the interconnect’s bitrate to 8GT/s per lane. That would enable an x16 slot to deliver bandwidth of 12.8GB/s—but products based on that spec aren’t expected to reach the market until 2010.
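Assuming PCI Express 3.0 keeps the same 8b/10b encoding as earlier generations (the 12.8GB/s projection above implies it does), that figure checks out in a few lines:

```python
# Back-of-the-envelope check of the projected PCI Express 3.0 bandwidth.
# Assumes 8b/10b encoding carries over from PCI Express 1.1 and 2.0.

def x16_gbytes_per_s(raw_gt_per_s: float) -> float:
    """Usable x16 bandwidth in GB/s, per direction, with 8b/10b encoding."""
    gb_per_lane = raw_gt_per_s * 8 / 10   # strip 2 of every 10 bits
    return gb_per_lane * 16 / 8           # 16 lanes, 8 bits per byte

assert x16_gbytes_per_s(8.0) == 12.8   # PCI Express 3.0 projection
assert x16_gbytes_per_s(5.0) == 8.0    # matches the PCI Express 2.0 figure
assert x16_gbytes_per_s(2.5) == 4.0    # matches the PCI Express 1.1 figure
```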