From Voodoo to GeForce: The Awesome History of 3D Graphics

Paul Lilly

Try to imagine where 3D gaming would be today if not for the graphics processing unit, or GPU. Without it, you wouldn't be trudging through the jungles of Crysis in all its visual splendor, nor would you be fending off endless hordes of fast-moving zombies at high resolutions. It takes a highly specialized chip designed for parallel processing to pull off the kinds of games you see today, games that simply wouldn't be possible on a CPU alone. Going forward, GPU makers will try to extend the reliance on videocards to also include physics processing, video encoding/decoding, and other tasks that were once handled by the CPU.

It's pretty amazing when you think about how far graphics technology has come. To help you do that, we're going to take a look back at every major GPU release since the infancy of 3D graphics. Join us as we travel back in time and relive releases like 3dfx's Voodoo3 and S3's ViRGE lineup. This is one nostalgic ride you don't want to miss!

S3 ViRGE

A virgin in the 3D graphics arena, S3 thrust itself into this new territory in 1995 with its ViRGE graphics series. Playing on the hype surrounding virtual reality a decade and a half ago, ViRGE stood for Virtual Reality Graphics Engine, and it was one of the first 3D GPUs to take aim at the mainstream consumer. While nothing compared to today's offerings, early 64-bit ViRGE cards came with up to 4MB of onboard memory and core and memory clockspeeds of up to 66MHz. The chip also supported such features as bilinear and trilinear texture filtering, MIP mapping, alpha blending, video texture mapping, Z-buffering, and other 3D texture mapping goodies.

Ironically, those same 'cutting edge' features took a toll on the ViRGE silicon, resulting in underwhelming 3D performance. In some cases, performance was so bad that users could obtain better results with the CPU alone, causing the ViRGE to be unaffectionately dubbed the first 3D decelerator. Ouch.

Fun Fact: Just how far have graphics cards come in the past 15 years? Enough that we've seen the S3 ViRGE selling for as little as $0.45 on the second-hand market.

(Image Credit: Palcalova Sbirka)

Model: ViRGE
Date Released: 1995
Interface: PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.35 micron
Core Clockspeed: 66MHz
Memory Bus: 64-bit

(Image Credit: pctuning.tyden.cz)

Model: ViRGE VX
Date Released: 1995
Interface: PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.5 micron
Core Clockspeed: 50MHz
Memory Bus: 64-bit


(Image Credit: Wikipedia Commons)

Model: ViRGE GX
Date Released: 1997
Interface: PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.35 micron
Core Clockspeed: 66MHz
Memory Bus: 64-bit


(Image Credit: promptweb.co.uk)

Model: ViRGE DX
Date Released: 1997
Interface: PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.35 micron
Core Clockspeed: 66MHz
Memory Bus: 64-bit


(Image Credit: freewebs.com)

Model: ViRGE GX2
Date Released: 1998
Interface: PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.35 micron
Core Clockspeed: 66MHz
Memory Bus: 64-bit

ATI Rage 3D and Rage II

Well before Radeon ever became synonymous with ATI, the Canadian graphics chip maker was best known for its 3D Rage line. Released in 1995, the original Rage 3D didn't have a whole lot going for it, saddled as it was with slow EDO RAM, a 32-bit memory bus, and a maximum of just 2MB of memory.

A year later ATI released the Rage II, and while the upgrades seemed minor on paper, performance was significantly improved. The new chipset traded in 2MB of EDO memory for up to 8MB of SDRAM and widened the bus to 64-bit, while also increasing the core clockspeed from 40MHz to 60MHz. Support for DVD playback was also added.


(Image Credit: ixbt.com)

Model: Rage II
Date Released: 1996
Interface: PCI
Shader Model: N/A
DirectX: 5
Manufacturing Process: 0.25 micron
Core Clockspeed: 25-60MHz
Memory Clockspeed: 66-83MHz
Memory Bus: 64-bit

Rendition Verite 1000

Headquartered in Mountain View, CA, Rendition emerged in the mid-1990s as a fabless semiconductor company whose goal was to compete in the high-end videocard market. During the company's tenure, Rendition managed to get a leg up on the competition by working with John Carmack to develop the first 3D-accelerated version of Quake (VQuake, or Verite-accelerated Quake).

VQuake was designed to take advantage of the Verite 1000 chipset, which launched in 1996. A year prior, Carmack stated, "Verite will be the premier platform for Quake." The card was capable of bilinear filtering and perspective correction, with a basic pipeline configuration of 1/1/1 (textures/pixels/Z).

Poor 2D performance proved problematic for the board, as did programming for the Verite. It was the latter that Carmack would later say led to id's decision to move away from proprietary APIs and toward OpenGL.


(Image Credit: Wikipedia)

Model: Verite 1000
Date Released: 1996
Interface: PCI
Shader Model: N/A
DirectX: 2
Manufacturing Process: 0.5 micron
Core Clockspeed: 25MHz
Memory Bus: 64-bit

3dfx Voodoo1

Like a modern-day Greek tragedy, the rapid rise and untimely demise of 3dfx can best be described as a wild roller-coaster ride that most enthusiasts wish had never ended. And in a way, it didn't, as 3dfx had a tremendous hand in shaping the 3D market as we know it today. But every good story needs a beginning, and this one starts with the original Voodoo card, otherwise known as the Voodoo1, released in 1996.

The Voodoo1 launched 3D gaming into the limelight, even if the add-in card's implementation was less than graceful. While other videocards fused both 2D and 3D functionality onto a single board, the Voodoo1 concentrated solely on 3D and lacked any 2D capabilities. This meant consumers still needed a 2D graphics card for day-to-day computing, which would be connected to the Voodoo1 via a VGA pass-through cable. Only when a compatible 3D videogame was detected would the Voodoo1 wake from its slumber and flex its gaming muscle.

It's hard to imagine such a design being successful today, but consumers were willing to cope with the costly inconvenience at the time because the Voodoo1 put every other available 3D card in a headlock and gave them a noogie.


Model: Voodoo1
Date Released: 1996
Interface: PCI
Shader Model: N/A
DirectX: 3
Manufacturing Process: 0.5 micron
Core Clockspeed: 50MHz
Memory Clockspeed: 50MHz
Memory Bus: 64-bit
Transistors: 1 million

Next up, Nvidia makes its debut.

Nvidia NV3

The first of Nvidia's GPUs to target the "performance segment of the volume PC graphics market," the NV3, or Riva 128, was designed with Microsoft's DirectX 5 API in mind. Nvidia packed 3.5 million transistors into its first performance part, along with a single pixel pipeline.

Modern for its time, the Riva 128 came configured with 4MB of memory, a 100MHz core clockspeed, 1.6GB/s of memory bandwidth, a 206MHz RAMDAC, and AGP 2X compatibility. It was also a 2D/3D combo card, whereas 3dfx's Voodoo line still required a separate 2D card, a costly proposition not all gamers were keen on. However, image quality was poor compared to the Voodoo line, at least early on, and some games at the time were embracing 3dfx's proprietary Glide API.



Model: Riva 128
Date Released: 1997

Interface: AGP/PCI
Shader Model: N/A
DirectX: 5
Manufacturing Process: 0.35 micron
Core Clockspeed: 100MHz
Memory Clockspeed: 100MHz
Memory Bus: 128-bit
Transistors: 3.5 million

ATI Rage Pro

The next iteration of the Rage came in 1997. ATI claimed it had been working closely with both Intel and Microsoft in developing the new part, leading to the first videocard to support AGP 2X (133MHz) mode. This gave the card a peak bandwidth in excess of 500MB/s, or twice the throughput of AGP 1X.
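
That peak figure is simple arithmetic: AGP moves 4 bytes (32 bits) per transfer on a roughly 66MHz base clock, and 2X mode performs two transfers per clock. Here's a quick back-of-the-envelope sketch in Python (the 66.6MHz base clock and 32-bit width are standard AGP figures, not something the Rage Pro spec sheet below spells out):

    # Rough AGP peak bandwidth: bytes per transfer x transfers per clock x base clock
    BASE_CLOCK_HZ = 66.6e6      # AGP base clock (an assumed standard figure)
    BYTES_PER_TRANSFER = 4      # 32-bit bus

    for mode in (1, 2):         # AGP 1X and 2X
        peak = BYTES_PER_TRANSFER * mode * BASE_CLOCK_HZ
        print(f"AGP {mode}X: ~{peak / 1e6:.0f} MB/s")

    # Prints roughly 266 MB/s for 1X and 533 MB/s for 2X, which lines up
    # with the 'in excess of 500MB/s' figure quoted for the Rage Pro.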

Everything about the Pro was improved compared to its predecessor. It came with an integrated floating-point set-up engine that could process up to 1.2 million triangles per second, improved DVD playback, and support for resolutions up to 1600x1200.

Fun Fact: Following disappointing sales, ATI added the word 'Turbo' to the card's moniker and released new drivers that were supposed to increase performance. In reality, the drivers only helped when benchmarking.

(Image Credit: recycledgoods.com)

Model: Rage Pro
Date Released: 1997
Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.35 micron
Core Clockspeed: 75MHz
Memory Clockspeed: 100MHz
Memory Bus: 64-bit

Rendition Verite 2000

Rendition's second and third attempts at a competitive GPU solution materialized in the form of the Verite 2100 and 2200, both of which shared the exact same architecture with one another and differed only in clockspeed. Both boards boasted a 4MB frame buffer, bilinear and trilinear filtering, support for texture animation, a complete on-chip triangle setup and a triangle engine capable of rendering triangles asynchronously, somewhat improved (but still lackluster) 2D performance, and hardware accelerated DVD playback.

Despite plans to continue producing 3D cards, the Verite 2x00 series was the last of the line for Rendition before the company was acquired by Micron.

Fun Fact: Had the company not been sold, Rendition would have released the Verite 3300. But delays and other setbacks left Micron uninterested in continuing its development, and the project was ultimately scrapped.


(Image Credit: Palcalova Sbirka)

Model: Verite 2100
Date Released: 1997
Interface: AGP/PCI
Shader Model: N/A
DirectX: 5
Manufacturing Process: 0.35 micron
Core Clockspeed: 50MHz
Memory Clockspeed: 100MHz
Memory Bus: 64-bit


(Image Credit: ixbt.com)

Model: Verite 2200
Date Released: 1997
Interface: AGP/PCI
Shader Model: N/A
DirectX: 5
Manufacturing Process: 0.35 micron
Core Clockspeed: 60MHz
Memory Clockspeed: 120MHz
Memory Bus: 64-bit

3dfx Rush

As good as the Voodoo1 was at the time, 3dfx found out not everyone was willing to invest in a two-card solution for 2D and 3D graphics. To remedy the Voodoo1's shortcoming, 3dfx released the Voodoo Rush in 1997, which added a 2D chip to the original graphics board, either as an integrated chip or a daughtercard. Gamers no longer had to fiddle with daisy-chaining multiple videocards, but at the expense of performance. A kludgy solution at best, the Rush took a 3D performance hit that some estimates put at up to 20 percent, a direct result of sharing bandwidth between chips. Making matters worse, the Rush suffered from poor 2D performance and instability, making it one of the few forgettable cards in 3dfx's storied history.


(Image Credit: Rage3d forum pahncrd)

Model: Voodoo Rush
Date Released: 1997
Interface: AGP/PCI
Shader Model: N/A
DirectX: 3
Manufacturing Process: 0.5 micron
Core Clockspeed: 50MHz
Memory Clockspeed: 50MHz
Memory Bus: 64-bit
Transistors: 1 million

S3 Savage 3D

Following the hype machine that was the ViRGE, which played on the popular virtual reality nomenclature of the time, S3 refocused its efforts on the hardware, and the result was the Savage 3D, released in 1998. It was the first 0.25 micron-based GPU ever released, as well as the first to use texture compression. Other standout features included single-cycle trilinear filtering, AGP 2X support, and up to a 125 million pixel/s fill rate.

But one thing the Savage 3D had in common with the ViRGE was that it was doomed to failure. While the hardware was up to par for its time, the driver support was not, and it was only after the Savage 3D aged into a budget card that driver releases seemed to improve. Poor-quality boards also proved problematic, resulting in low yields, and sub-standard SDRAM chips limited the chipset's clockspeed. Barely a year on the market, S3 was ready to forget all about the Savage 3D and put the GPU out to pasture.

Fun Fact: The modding community has continued to support the Savage 3D long after the last official driver update, with the most recent community release (2007) showing support for Vista.


(Image Credit: freewebs.com)


Model: Savage 3D
Date Released: 1998
Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 100MHz
Memory Clockspeed: 120MHz
Memory Bus: 64-bit


(Image Credit: hothardware.com)

Model: Savage 3D Supercharged
Date Released: 1998
Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 120MHz
Memory Clockspeed: 120MHz
Memory Bus: 64-bit

Matrox G100/G200

Canada-based Matrox first got its start producing graphics solutions in 1978, but it really became a major force in the 1990s. Barely a blip on the radar, the company's G100 chipset was quickly replaced by the G200, the first generation of Matrox cards that could truly be considered a gaming solution.

In addition to fast 2D performance, the G200 delivered 3D acceleration through a variety of videocards, most notably the Millennium and Mystique. All G200 cards were built on a 0.35-micron manufacturing process and came with an 85MHz reference clockspeed and a 64-bit memory bus. In addition, the G200 offered full hardware-accelerated DVD and MPEG-1/2 playback, up to 16MB of onboard memory, and TV-out. But perhaps its most notable feature was what Matrox called the 'DualBus.' The G200 consisted of a 128-bit core with dual unidirectional 64-bit buses, allowing for lower latencies.

As a gaming solution, the G200 offered competitive 3D performance, if somewhat slower than its rivals, and this was offset by the G200's superb image quality. However, poor OpenGL performance and, at least early on, buggy driver support held the G200 back from its potential.

Fun Fact: Ever wish you could upgrade your videocard's memory? With the Mystique, you could, thanks to a handy SODIMM slot integrated right on the card!


(Image Credit: Dell)

Model: Millennium G200
Date Released: 1998
Interface: AGP
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.35 micron
Core Clockspeed: 85MHz
Memory Clockspeed: 112MHz
Memory Bus: 64-bit


(Image Credit: duiops.net)

Model: Mystique G200
Date Released: 1998
Interface: AGP
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.35 micron
Core Clockspeed: 85MHz
Memory Clockspeed: 112MHz
Memory Bus: 64-bit

Matrox G220/G250

In 1997, Matrox increased the Mystique's RAMDAC from 200MHz to 220MHz, resulting in the G220. No other changes were made that would separate the new card from the old. For the G250, Matrox switched to a 0.25-micron manufacturing process, a move which allowed the company to increase production. The G250 sipped less power than its predecessor, ran cooler, and was primarily sold to OEMs.


(Image Credit: yjfy.com)

Model: G250
Date Released: 1998
Interface: AGP
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 85MHz
Memory Clockspeed: 112MHz
Memory Bus: 64-bit

Intel i740 and GMA

You're no doubt familiar with Intel's integrated GMA graphics that litter the low-cost landscape today, but did you know Intel also came out with a discrete 3D graphics chip? The year was 1998 and Intel had grand plans of competing in the 3D market, starting with the i740. Part of the reasoning behind the release was to help promote the AGP interface, and it was widely believed that Intel's financial status and manufacturing muscle would give the chip maker a substantial edge in competing with Nvidia and ATI.

Instead, poor sales and an underperforming product led Intel to abandon the discrete graphics market less than 18 months after it had entered, which also meant the i752 and i754 -- two followup GPUs -- would never see the light of day. And ten years after its launch, at least one site would look back at the i740 as one of "The Most Disappointing Graphics Chips in the Last Decade."

The original i740 design lives on, however, as it provided the basis for the much longer lasting GMA line, which still exists today. Moreover, Intel has on more than one occasion showed interest in re-entering the discrete graphics market, and its Larrabee architecture could see the light of day as early as this year.

Fun Fact: Sales of the i740 were so bad that some accused Intel of anticompetitive practices for allegedly selling its i740 graphics chips below cost to overseas videocard vendors in order to boost its market share.

Model: i740
Date Released: 1998
Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: ?
Core Clockspeed: 55MHz
Memory Clockspeed: 100MHz
Memory Bus: 64-bit
Transistors: 3.5 million

Next, 3dfx dominates 3D graphics with SLI

3dfx Voodoo2

"We do not train to be merciful here. Mercy is for the weak. Here, in the streets in competiton: A man confronts you, he is the enemy. An enemy deserves no mercy." - John Kreese, Cobra Kai Dojo, The Karate Kid

These words may have first been muttered by John Kreese in The Karate Kid, but they could just as easily have been attributed to 3dfx. After shellacking the competition with the Voodoo1, mercy was the furthest thing from 3dfx's mind when it released the Voodoo2 in 1998.

Like the original Voodoo, the Voodoo2 once again required a separate 2D videocard for non-3D gaming, only this time the image quality was improved, particularly at higher resolutions (1024x768) where the Voodoo1 struggled. Unlike the original Voodoo, the Voodoo2 added a third chip to the PCB, so there were two TexelFX chips and one PixelFX chip. This gave the new card support for multitexturing, resulting in up to four times better performance in the few games that supported this technique.

Fun Fact: It wasn't Nvidia, but 3dfx who first introduced SLI, albeit a different kind. Called Scan-Line Interleave (as opposed to Nvidia's Scalable Link Interface), each Voodoo2 card in a system would render separate scan lines.
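
As a toy illustration of the concept (a Python sketch only, with nothing to do with how 3dfx's drivers actually divided the work): with scan-line interleaving, one card handles the even lines of a frame while the other handles the odd lines.

    # Toy sketch of Scan-Line Interleave: two cards split a frame by
    # alternating scan lines (even lines to card 0, odd lines to card 1).
    def assign_scanlines(height, num_cards=2):
        assignments = {card: [] for card in range(num_cards)}
        for line in range(height):
            assignments[line % num_cards].append(line)
        return assignments

    split = assign_scanlines(height=8)
    print(split[0])  # card 0 renders lines [0, 2, 4, 6]
    print(split[1])  # card 1 renders lines [1, 3, 5, 7]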


(Image Credit: Wikimedia)

Model: Voodoo2
Date Released: 1998
Interface: AGP/PCI
Shader Model: N/A
DirectX: 3
Manufacturing Process: 0.35 micron
Core Clockspeed: 90MHz
Memory Clockspeed: 90MHz
Memory Bus: 64-bit
Transistors: 4 million

3dfx Voodoo Banshee

Much less menacing than its name implies, the Voodoo Banshee was more about 3dfx proving to the public it could design a single-chip videocard capable of both 2D and 3D rendering, just like the competition had been doing. With faster clockrates than the Voodoo2, the 128-bit Banshee was poised to be the fastest, most flexible videocard on the market, and that presented a problem for 3dfx, who feared the Banshee would cut into sales of the Voodoo2 released just weeks earlier. To prevent that from happening, 3dfx designed the Banshee with only one texturing unit, taking away its ability to support multitexturing.


(Image Credit: v3info.de)

Model: Voodoo Banshee
Date Released: 1998
Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.35 micron
Core Clockspeed: 100MHz
Memory Clockspeed: 100MHz
Memory Bus: 128-bit
Transistors: 4 million

Nvidia NV4

In 1998, Nvidia tweaked its existing architecture and released the NV4-based Riva TNT. At a glance, NV4 doesn't appear to be much of an upgrade over NV3. The maximum amount of memory was doubled to 16MB and clocked a smidgen faster at 110MHz; the core clockspeed, however, ran 10MHz slower at 90MHz. What really set NV4 apart was the addition of a second pixel pipeline, 32-bit truecolor support, and trilinear filtering.

As good as NV4 was, Nvidia found itself once again trailing behind 3dfx and its proprietary Glide API. Moreover, Voodoo2 owners could link two cards in SLI (as discussed previously) for an even bigger advantage. Plus, 3dfx arguably enjoyed better brand recognition at the time, making it harder to sell Nvidia-branded videocards, although this wouldn't be the case for much longer.


(Image Credit: Nvidia)

Model: Riva TNT
Date Released: 1998

Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.35 micron
Core Clockspeed: 90MHz
Memory Clockspeed: 110MHz
Memory Bus: 128-bit
Transistors: 7 million

ATI Rage 128

Just as the name implies, the Rage 128, released in 1998, touted a 128-bit memory bus, although ATI would release a lower-priced version that still maintained a 64-bit design. All of the previous goodies were present and accounted for, such as triangle setup and improved DVD playback, as well as some new tricks, like inverse discrete cosine transform (IDCT) acceleration for even better DVD handling.

For the first time, ATI had a card capable of dual-texturing, outputting two pixels per clock. ATI also implemented a new caching technique, which helped with 3D rendering. A fairly robust card, the Rage 128 supported a variety of features, like alpha blending, texture lighting, hidden surface removal, bump mapping, and plenty more.

ATI would later release the Rage 128 Pro, essentially a 'supercharged' version with improved texture filtering and better video decoding/encoding thanks to the inclusion of a Rage Theater chip, as well as a slower-clocked All-in-Wonder 128, the first of many AIW cards to come with a built-in TV tuner.

(Image Credit: do-dat.net)

Model: Rage 128
Date Released: 1998
Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 90-103MHz
Memory Clockspeed: 90-103MHz
Memory Bus: 128-bit
Transistors: 8 million

(Image Credit: ixbt.com)

Model: Rage 128 Pro
Date Released: 1999
Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 125MHz
Memory Clockspeed: 143MHz
Memory Bus: 128-bit
Transistors: 8 million

(Image Credit: firingsquad.com)

Model: All-in-Wonder 128
Date Released: 1999
Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 90MHz
Memory Clockspeed: 90MHz
Memory Bus: 128-bit
Transistors: 8 million

ATI Rage Fury MAXX

Just two months after the Rage 128 Pro came to market, ATI followed up with the Rage Fury MAXX, a graphics card with two processing engines. Similar in concept to 3dfx's Scan-Line Interleave (SLI) technology, the MAXX differed in that it was not only a single-card solution, but also took a slightly more efficient approach to rendering a scene, with each chip working on alternate frames.

OpenGL and Direct3D were both supported on the Fury MAXX, and the 64MB of onboard memory was overkill for the time, even when split between the two rendering chips. However, the card was hampered by a lack of hardware Transform & Lighting (T&L), which would become a bigger issue as time went on.

As mentioned earlier, ATI also released a version of the Rage 128 with an integrated TV tuner, the All-in-Wonder 128, the first of many AIW cards to come.

Fun Fact: Only during full-screen 3D gaming would the second processing engine kick in. Otherwise, it would lay dormant for both 2D and windowed 3D tasks.

(Image Credit: hothardware.com)

Model: Rage Fury MAXX
Date Released: 1999
Interface: AGP
Shader Model: N/A
DirectX: 7
Manufacturing Process: 0.25 micron
Core Clockspeed: 125MHz
Memory Clockspeed: 143MHz
Memory Bus: 128-bit
Transistors: 8 million (x2)

ATI Rage Mobility

ATI didn't just concentrate on the desktop; it also took aim at the mobile market. Just about every Rage chipset also came in mobile form, starting with the Rage II-based Rage LT in November 1996. This would continue up until late 1999, when ATI released the Rage Mobility M4, a mobile chip built around the Rage 128 Pro architecture and the last Rage-based mobile GPU. The Mobility moniker lives on to this day, culminating in the Mobility Radeon HD 4870.

Model: Rage Mobility M4
Date Released: 1999
Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 105MHz
Memory Clockspeed: 105MHz
Memory Bus: 128-bit

S3 Savage 4

The Savage 3D wasn't a bad design, just one that lacked polish, so S3 continued development of the core chipset, and several tweaks later, the Savage 4 was born in 1999. Single-pass multi-texturing became part of the package, as did both AGP 2X and 4X support. And in a nod toward forward thinking, some Savage 4 cards came with a DVI port to accommodate LCD panels, which had not yet become a real threat to CRTs.

Sketchy drivers continued to rear their ugly heads, and once again, performance lagged behind the competition, although not necessarily by accident. With the Savage 4, S3 took aim at the budget crowd where it could offer serviceable 2D and 3D performance for a lower price than the higher end cards of the time.


(Image Credit: ixbt.com)

Model: Savage 4
Date Released: 1999
Interface: AGP
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 110MHz
Memory Clockspeed: 125MHz
Memory Bus: 64-bit

(Image Credit: Palcalova Sbirka)

Model: Savage 4 Pro
Date Released: 1999
Interface: AGP
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 125MHz
Memory Clockspeed: 125MHz
Memory Bus: 64-bit


(Image Credit: IBM)

Model: Savage 4 Extreme
Date Released: 1999
Interface: AGP
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 166MHz
Memory Clockspeed: 166MHz
Memory Bus: 64-bit

S3 Savage 2000

Towards the end of 1999, S3 made one last push into what was fast becoming a competitive 3D graphics market. Facing competition from Matrox, ATI, Nvidia, and 3dfx, S3 acquired Diamond Multimedia as "part of an ongoing strategic plan to return S3 to profitability," and the collaboration gave birth to the Savage 2000.

Featuring 12 million transistors on a 0.18-micron manufacturing process, the Savage 2000 brought its A-game, at least on paper. Onboard hardware Transform & Lighting (T&L), single-pass quad texture blending, 500 million pixel/s fill rate, a 128-bit memory bus, and overclockable memory should have added up to a videocard to be reckoned with. But as was becoming an all too familiar scenario, poorly written drivers continued to hold S3's silicon back from its full potential.


(Image Credit: Tomshardware)


Model: Savage 2000
Date Released: 1999
Interface: AGP
Shader Model: N/A
DirectX: 7
Manufacturing Process: 0.18 micron
Core Clockspeed: 120MHz
Memory Clockspeed: 120MHz
Memory Bus: 128-bit

Next, Nvidia takes the lead with the explosive TNT2 and GeForce

Nvidia NV5

Now entering its fifth generation of videocards, Nvidia took its NV4 architecture and moved to a smaller 0.25-micron manufacturing process. This allowed Nvidia to push clockspeeds up to almost 70 percent faster than it had before. The maximum amount of onboard memory was again doubled, this time to 32MB, and AGP 4X support was also added to NV5.

Nvidia would attack all three market segments -- entry-level, mid-range, and high-end -- with the NV5-based TNT2, all of which were DirectX 6-capable videocards. At the lower end, however, Nvidia would slice the memory bus in half from 128-bit to 64-bit. These cards would still run faster than the older NV4-based TNT, making them very good values.


(Image Credit: eletech.com)

Model: Riva TNT 2 Vanta
Date Released: 1999

Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 100MHz
Memory Clockspeed: 125MHz
Memory Bus: 64-bit
Transistors: 15 million


(Image Credit: Photobucket mypapit)

Model: Riva TNT 2
Date Released: 1999

Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 125MHz
Memory Clockspeed: 150MHz
Memory Bus: 128-bit
Transistors: 15 million


(Image Credit: Nvidia)

Model: Riva TNT 2 Pro
Date Released: 1999

Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.22 micron
Core Clockspeed: 143MHz
Memory Clockspeed: 166MHz
Memory Bus: 128-bit
Transistors: 15 million


(Image Credit: ixbt.com)

Model: Riva TNT 2 Ultra
Date Released: 1999

Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 150MHz
Memory Clockspeed: 183MHz
Memory Bus: 128-bit
Transistors: 15 million

PowerVR Series 2

A year and a half late to market, the Neon 250 debuted in 1999 running at 125MHz and touting 32MB of high-speed SDRAM and a 250MHz RAMDAC. The Neon 250 could process up to 4 million polygons per second, offered advanced texturing (bilinear, trilinear, anisotropic, and bump mapping), D3D and OpenGL blend modes, and a host of other features.

But what really made the Neon 250 stand out was its unique (at the time) approach to 3D computing. Rather than display images by processing polygons one at a time -- through the CPU and the GPU -- the Neon 250 took a 'tile-based' approach. Each 3D scene would be broken up into separate small tiles and independently processed before being sent through the GPU. Hidden surfaces wouldn't be processed, saving Z-buffer memory and memory bandwidth, while also requiring less graphics processing power.
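
To make the idea concrete, here's a minimal Python sketch of the binning step (purely illustrative; the tile size and data layout are arbitrary and have nothing to do with PowerVR's actual hardware): the screen is carved into small tiles, each triangle is filed under every tile its bounding box touches, and each tile can then be shaded independently using only the triangles that might affect it.

    # Toy tile binning: sort triangles into the screen tiles their bounding
    # boxes overlap, so each tile can later be processed on its own.
    TILE = 32  # tile size in pixels (arbitrary for this sketch)

    def bin_triangles(triangles):
        """triangles: list of three (x, y) vertices; returns {tile: [triangle indices]}"""
        bins = {}
        for i, tri in enumerate(triangles):
            xs = [v[0] for v in tri]
            ys = [v[1] for v in tri]
            for ty in range(int(min(ys)) // TILE, int(max(ys)) // TILE + 1):
                for tx in range(int(min(xs)) // TILE, int(max(xs)) // TILE + 1):
                    bins.setdefault((tx, ty), []).append(i)
        return bins

    tris = [[(5, 5), (40, 10), (20, 60)], [(100, 100), (110, 100), (100, 120)]]
    print(bin_triangles(tris))
    # First triangle lands in tiles (0,0), (1,0), (0,1), (1,1); the second only in (3,3)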

A great card on its own merit, the Neon 250 would have been a bigger success had PowerVR been able to ship it on time. However, the delays proved costly in terms of going toe-to-toe with the competition, who had a leg up on the Neon 250 by the time it actually came to market.

Fun Fact: Old school console gamers will remember PowerVR as the company that supplied the graphics processor for Sega's Dreamcast console. In fact, the Series 2-based Neon 250 was essentially the much anticipated PC version of the same chip.


(Image Credit: tilebase.de)

Model: Neon 250
Date Released: 1999
Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 125MHz
Memory Clockspeed: 125MHz
Memory Bus: 128-bit

Trident TVGA and Blade

Pour yourself a cold one if you still own one of the earlier Trident TVGA videocards. First introduced over two decades ago in 1987, Trident's TVGA 8200LX kicked off an extensive line of TVGA videocards that would extend to the 1990s. Can you say ISA?

We can, but we'd rather say Blade, the name of what many consider to be Trident's first real foray into 3D acceleration on the Windows platform. The first Blade3D card packed 8MB of video memory and came in both PCI and AGP flavors. Later revisions would add faster clockspeeds. The end result was a card that offered playable framerates in Quake 2 and competed well against Intel's i740, but couldn't keep up with Nvidia's Riva TNT or the Matrox G200.


(Image Credit: anime.net)


Model: Blade3D
Date Released: 1999
Interface: AGP
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 110MHz
Memory Clockspeed: 120MHz
Memory Bus: 128-bit

3dfx Voodoo3

Codenamed 'Avenger,' the Voodoo3 combined the best 3dfx had to offer up to that point in a single videocard. Like the Banshee, the Avenger sported a single-chip, 2D/3D-capable design, but it also included multitexturing support. Other features included a 128-bit GDI accelerator, dual 32-bit pipelines, a 300MHz-350MHz RAMDAC, up to 16MB of memory, and support for resolutions up to 2048x1536.

There would be several versions of the Voodoo3, both in PCI and AGP form, with the higher model numbers representing faster clockspeeds. In addition, the Voodoo3 3500 TV would add an integrated TV-tuner to the mix capable of real-time MPEG-2 video and audio capture.


(Image Credit: pacificgeek.com)

Model: Voodoo3 1000
Date Released: 1999
Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 125MHz
Memory Clockspeed: 125MHz
Memory Bus: 128-bit
Transistors: 8.2 million


(Image Credit: freewebs.com)

Model: Voodoo3 2000
Date Released: 1999
Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 143MHz
Memory Clockspeed: 143MHz
Memory Bus: 128-bit
Transistors: 8.2 million


(Image Credit: mateusz.niste.free.fr)

Model: Voodoo3 3000
Date Released: 1999
Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 166MHz
Memory Clockspeed: 166MHz
Memory Bus: 128-bit
Transistors: 8.2 million


(Image Credit: rashly3dfx.com)

Model: Voodoo3 3500 TV
Date Released: 1999
Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 183MHz
Memory Clockspeed: 183MHz
Memory Bus: 128-bit
Transistors: 8.2 million

Nvidia GeForce 256 (NV10)

Today's Nvidia-branded desktop cards carry the GeForce nomenclature, a naming scheme that began a decade ago. Also known as NV10, the GeForce 256 architecture came into being in 1999 and offered significant speed gains over its predecessor, almost twice as fast in some cases, and allowed Nvidia to snatch the performance crown from 3dfx in dramatic fashion.

The GeForce 256's quad-pixel rendering pipeline could pump out 480 million texels per second, roughly 100-166 million more than other videocards on the market. It also came equipped with hardware T&L and a feature called cube environment mapping for creating real-time reflections.

It's worth noting at this point that Nvidia's decision to remain a fabless manufacturer was really starting to pay off. Third-party add-in board (AIB) partners flooded the market with Nvidia silicon, allowing the chip maker to make a steady profit selling its chips regardless of the price points at which each board partner sold its cards.

Fun Fact: Nvidia hailed the GeForce 256 as "the world's first GPU," a claim made possible by being the first to integrate a geometry transform engine, a dynamic lighting engine, a 4-pixel rendering pipeline, and DirectX 7 features onto the graphics chip.


(Image Credit: SharkyExtreme)

Model: GeForce 256
Date Released: 1999
Interface: AGP
Shader Model: N/A
DirectX: 7
Manufacturing Process: 0.22 micron
Core Clockspeed: 120MHz
Memory Clockspeed: 166MHz
Memory Bus: 128-bit
Transistors: 23 million

Next, did users bite with Matrox's innovative DualHead feature?

Matrox G400/G450

Moving up the Matrox line, the G400 picked up where the G200 left off and sported several improvements. What didn't change was the 0.25-micron manufacturing process, and Matrox ported its DualBus architecture to the new chipset, but everything else about the G400 was either bigger or better. The 64-bit bus had been upgraded to 128-bit, a second pixel pipeline and texture unit were added, core clockspeed was increased to 125MHz, and both the pixel and texture fill rates went up from 85 MP/s and 85 MT/s to 250 MP/s and 250 MT/s, respectively.

The G400 also introduced a feature called DualHead. This allowed end users to extend their desktops across two monitors, a capability that most power users take for granted today, but one that gave Matrox a marketing bullet its competition didn't have.

Following a familiar pattern, Matrox in 2000 released the G450 chipset, which introduced a shrunken die (0.18-micron) and integrated the G400's second RAMDAC onto the chip. Matrox also decided to cut the memory bus in half to 64-bit, hoping the switch to DDR memory would make up for the reduced bandwidth. It didn't, and as a result, the G400 proved a better gaming card in most situations.

Fun Fact: A special version of the G400 known as the Marvel G400-TV added a TV tuner, along with hardware MPEG video capture and editing capabilities.


(Image Credit: SharkyExtreme.com)

Model: G400
Date Released: 1999
Interface: AGP
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 125MHz
Memory Clockspeed: 166MHz
Memory Bus: 128-bit


(Image Credit: emertx-info.com)

Model: G450
Date Released: 2000
Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.18 micron
Core Clockspeed: 125MHz
Memory Clockspeed: 166MHz
Memory Bus: 64-bit

3dfx VSA-100 Series

By this time, 3dfx's storied run was coming to an end. The competition had caught up with, and surpassed, the Voodoo series, with Nvidia's GeForce 256 starting to steal the limelight. In March of 2000, 3dfx acquired chip maker GigaPixel Corp. for around $186 million, a move that would prove fatal. Shortly after the acquisition, 3dfx released its first Voodoo Scalable Architecture (VSA) videocards (Voodoo4 and Voodoo5), which were designed to support multiple chip configurations.

At the lower end, the Voodoo4 4500 came configured with a single VSA-100 chip, up to a 367 MPixel fill rate, and 32MB of memory. Higher up on the performance ladder, the Voodoo5 5500 came with two VSA-100 chips, up to a 773 MPixel fill rate, and a 64MB buffer. 3dfx also planned to release a Voodoo5 6000, but a manufacturing defect kept the card from ever making it to market.

These would be the last cards 3dfx would ever release, thanks to mounting debt and increasingly impatient creditors. In December 2000, rival Nvidia announced it had signed a definitive agreement to acquire 3dfx's intellectual graphics assets.


(Image Credit: ixbt.com)

Model: Voodoo4 4500
Date Released: 2000
Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.22 micron
Core Clockspeed: 183MHz
Memory Clockspeed: 183MHz
Memory Bus: 128-bit
Transistors: 14 million


(Image Credit: techreport.com)

Model: Voodoo5 5500
Date Released: 2000
Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.22 micron
Core Clockspeed: 166MHz
Memory Clockspeed: 166MHz
Memory Bus: 128-bit
Transistors: 14 million (x2)

Nvidia GeForce 2 (NV11, NV15, NV16)

How does 1.6 billion texels per second sound? In the year 2000, such a feat sounded pretty damn impressive, considering Nvidia's previous-generation product pushed out roughly a third of that. It wasn't that the GeForce 2 was a wholly different architecture; rather, adding a second texture map unit (TMU) to each of its 4 pixel pipelines helped boost performance, as did a much faster core clockspeed. An early form of pixel shaders, called the Nvidia Shading Rasterizer (NSR), was also introduced.
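
The texel math behind that headline number is straightforward: peak fill rate is roughly pixel pipelines x TMUs per pipeline x core clock. A quick Python sketch using figures already quoted in this article (four pipelines with one TMU each at 120MHz for the GeForce 256, four pipelines with two TMUs each at 200MHz for the GeForce 2 GTS); treat it as a rough approximation rather than a benchmark:

    # Rough peak texel fill rate: pipelines x TMUs per pipeline x core clock
    def texel_fill_rate(pipelines, tmus_per_pipe, core_mhz):
        return pipelines * tmus_per_pipe * core_mhz * 1e6  # texels per second

    geforce_256 = texel_fill_rate(4, 1, 120)   # ~480 million texels/s
    geforce2_gts = texel_fill_rate(4, 2, 200)  # ~1.6 billion texels/s
    print(geforce2_gts / geforce_256)          # roughly 3.3x the older card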

The GeForce 2 ran circles around the GeForce 256 in some games, however ATI and 3dfx offered stiff competition with a more efficient handling of memory bandwidth and RAM controllers.

From a sales perspective, the somewhat crippled GeForce 2 MX proved immensely popular for its low price and serviceable 3D performance. The MX had two fewer pixel pipelines and half the memory bandwidth, but retained hardware T&L. Eventually, the MX variant would provide the basis for Nvidia's integrated motherboard graphics.


(Image Credit: trustedreviews.com )

Model: GeForce 2 MX
Date Released: 2000
Interface: AGP
Shader Model: N/A
DirectX: 7
Manufacturing Process: 0.18 micron
Core Clockspeed: 176MHz
Memory Clockspeed: 166MHz
Memory Bus: 128-bit
Transistors: 20 million


(Image Credit: tomshardware.com )

Model: GeForce 2 GTS
Date Released: 2000
Interface: AGP
Shader Model: N/A
DirectX: 7
Manufacturing Process: 0.22 micron
Core Clockspeed: 200MHz
Memory Clockspeed: 333MHz
Memory Bus: 128-bit
Transistors: 25 million


(Image Credit: nvnews.net)

Model: GeForce 2 Ultra
Date Released: 2000
Interface: AGP
Shader Model: N/A
DirectX: 7
Manufacturing Process: 0.22 micron
Core Clockspeed: 260MHz
Memory Clockspeed: 460MHz
Memory Bus: 128-bit
Transistors: 25 million

ATI Radeon R100

Originally intended as the next generation of Rage cards, the R100 ditched its Rage 6 code-name and instead kicked off the long-standing Radeon line that remains today and into the foreseeable future. This first generation Radeon was released in 2000 with 32MB or 64MB of DDR memory. Later that year, ATI would release a pair of Radeons with SDR memory, although these would be slower and in the case of the Radeon LE, also severely gimped. Still, the LE proved popular among overclockers, who made up for the card's shortcomings by ramping up its core and memory clockspeeds.

ATI would build several R100-based videocards throughout the architecture's lifetime, including another All-in-Wonder version, the AIW 7500. Save for the Radeon 7000 VE, all of these cards boasted a 128-bit memory bus. Stock core clockspeeds ranged anywhere from 167MHz to 290MHz, and each iteration supported hardware T&L, save again for the 7000 VE.

Fun Fact: After the Radeon 8500 came to market, ATI went back and applied the Radeon 7200 nomenclature to all R100-based cards.

(Image Credit: ixbt.com)


Model: Radeon SDR (7200)
Date Released: 2000
Interface: AGP/PCI
Shader Model: N/A
DirectX: 7
Manufacturing Process: 0.18 micron
Core Clockspeed: 167-183MHz
Memory Clockspeed: 166MHz
Memory Bus: 128-bit
Transistors: 30 million

(Image Credit: techtree.com)

Model: Radeon 7500
Date Released: 2001
Interface: AGP/PCI
Shader Model: N/A
DirectX: 7
Manufacturing Process: 0.18 micron
Core Clockspeed: 290MHz
Memory Clockspeed: 230MHz
Memory Bus: 128-bit
Transistors: 30 million

(Image Credit: rage3d.com)

Model: Radeon 7500 All-in-Wonder
Date Released: 2001
Interface: AGP/PCI
Shader Model: N/A
DirectX: 7
Manufacturing Process: 0.18 micron
Core Clockspeed: 167-183MHz
Memory Clockspeed: 167-183MHz
Memory Bus: 128-bit
Transistors: 30 million

Next, programmable shaders arrive and change graphics forever

PowerVR Series 3 (Kyro)

In 2000, PowerVR released its first Kyro card, followed by the Kyro II in 2001. The future looked mighty bright for PowerVR, which by the end of 2001 had reportedly sold over 1 million Kyro 1/2-based videocards and was giving both Nvidia and ATI a run for their money.

Based on the company's STG 4000X architecture, the original Kyro was built on a 0.25-micron manufacturing process and had two ROPs and two texture units, versus one each on the Neon 250. In addition to sticking with its tile-based rendering scheme, the Kyro added 8-layer multitexturing, made easier thanks to the aforementioned tile-based rendering.

With the release of the STG 4500-based Kyro II in 2001, and later the Kyro II SE, performance got a much needed boost thanks to the new version's faster clockspeeds, which put it in competition with Nvidia's GeForce2 MX. Success was short-lived, however, as the Kyro II was still a lower-end performer, and the lack of hardware T&L didn't bode well for long-term success.

Fun Fact: PowerVR had shown the Linux community some love by announcing development of Linux drivers for its Kyro videocards.


(Image Credit: dansdata.com)

Model: Kyro
Date Released: 2001
Interface: AGP/PCI
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.25 micron
Core Clockspeed: 115MHz
Memory Clockspeed: 115MHz
Memory Bus: 128-bit
Transistors: 12 million


(Image Credit: overclockersonline.net)

Model: Kyro II
Date Released: 2001
Interface: AGP
Shader Model: N/A
DirectX: 7
Manufacturing Process: 0.18 micron
Core Clockspeed: 175MHz
Memory Clockspeed: 175MHz
Memory Bus: 128-bit
Transistors: 15 million


(Image Credit: hwsw.hu)

Model: Kyro II SE
Date Released: 2001
Interface: AGP
Shader Model: N/A
DirectX: 7
Manufacturing Process: 0.18 micron
Core Clockspeed: 200MHz
Memory Clockspeed: 200MHz
Memory Bus: 128-bit
Transistors: 15 million

Nvidia GeForce 3 (NV20)

By the end of 2000, main rival 3dfx found itself in dire straits following a series of bad business moves, and it was Nvidia who acquired the storied 3D chip maker that December. Three months later, Nvidia released the GeForce 3, a new architecture with programmable vertex and pixel shaders, a new and more efficient memory manager, multi-sampling anti-aliasing, and DirectX 8 support.

Performance without all the eye candy turned on was very good, but the GeForce 3 would really strut its stuff once AA was enabled. In the fall of 2001, Nvidia would release two revisions, the Ti200 and Ti500, neither of which had dual-monitor support. But they were both high-performance cards, with the latter boasting a max memory bandwidth of 8GB/s.


(Image Credit: slcentral.com )

Model: GeForce 3
Date Released: 2001
Interface: AGP
Shader Model: 1.1
DirectX: 8
Manufacturing Process: 0.15 micron
Core Clockspeed: 200MHz
Memory Clockspeed: 230MHz
Memory Bus: 128-bit
Transistors: 57 million


(Image Credit: nvnews.com )

Model: GeForce 3 Ti200
Date Released: 2001
Interface: AGP
Shader Model: 1.1
DirectX: 8
Manufacturing Process: 0.15 micron
Core Clockspeed: 175MHz
Memory Clockspeed: 200MHz
Memory Bus: 128-bit
Transistors: 57 million


(Image Credit: golem.de)

Model: GeForce 3 Ti500
Date Released: 2001
Interface: AGP
Shader Model: 1.1
DirectX: 8
Manufacturing Process: 0.15 micron
Core Clockspeed: 240MHz
Memory Clockspeed: 250MHz
Memory Bus: 128-bit
Transistors: 57 million

ATI Radeon R200

In late 2001, ATI unveiled its second generation Radeon along with its very first DirectX 8 compliant card, the Radeon 8500. Also for the first time, the R200 architecture propelled ATI from the also-ran crowd into direct competition with Nvidia, a rivalry we hope continues on for many more years to come.

This time around, ATI didn't waste time churning out SDR videocards, making the switch to DDR across the board. Almost every R200 card would also come equipped with a 128-bit memory bus, save for a few rogue SE-series cards aimed at the budget buyer. In addition, some later-model cards (the 9000 series) would make the jump from AGP 4X to 8X support.

Support for DirectX 8.1 helped the R200 stand side by side, and in some cases, in front of the competition, albeit after early driver issues were ironed out.

(Image Credit: ixbt.com)

Model: Radeon 8500
Date Released: 2001
Interface: AGP
Shader Model: 1.4
DirectX: 8.1
Manufacturing Process: 0.15 micron
Core Clockspeed: 275MHz
Memory Clockspeed: 275MHz
Memory Bus: 128-bit
Transistors: 60 million


(Image Credit: sharkyextreme.com)

Model: Radeon 9000 Pro AIW
Date Released: 2003
Interface: AGP
Shader Model: 1.4
DirectX: 8.1
Manufacturing Process: 0.15 micron
Core Clockspeed: 275MHz
Memory Clockspeed: 270MHz
Memory Bus: 128-bit
Transistors: 60 million

Nvidia GeForce 4

With 3dfx out of the picture and the GeForce 3 putting on a strong showing, Nvidia didn't feel the need to overhaul its graphics chipset, and so it didn't. GeForce 4 was all about refining the existing core, which Nvidia did by making it faster, tweaking the memory controller, adding another vertex shader, and improving DVD playback.

GeForce 4 was an instant hit, and Nvidia attacked every market segment with a barrage of videocards from the entry level on up to the high end. It was the GeForce 4 Ti4200 that stole the show, offering gamers exceptional performance for a fraction of the cost of higher-end GeForce 4 parts. This made the middling Ti4400 somewhat of an outcast, as it didn't offer enough of a performance boost to justify skipping the Ti4200, and gamers looking for more pixel-pushing oomph chose instead to jump straight to the Ti4600, and later the Ti4800.


(Image Credit: Dell)

Model: GeForce 4 MX 420
Date Released: 2002
Interface: AGP
Shader Model: 1.1
DirectX: 8
Manufacturing Process: 0.15 micron
Core Clockspeed: 250MHz
Memory Clockspeed: 166MHz
Memory Bus: 64-bit and 128-bit
Transistors: 57 million


(Image Credit: xbitlabs.com)

Model: GeForce 4 Ti4200
Date Released: 2002
Interface: AGP
Shader Model: 1.1
DirectX: 8
Manufacturing Process: 0.15 micron
Core Clockspeed: 250MHz
Memory Clockspeed: 500MHz
Memory Bus: 128-bit
Transistors: 57 million


(Image Credit: activewin.com)

Model: GeForce 4 Ti4600
Date Released: 2002
Interface: AGP
Shader Model: 1.1
DirectX: 8
Manufacturing Process: 0.15 micron
Core Clockspeed: 300MHz
Memory Clockspeed: 650MHz
Memory Bus: 128-bit
Transistors: 57 million


(Image Credit: pclab.pl)

Model: GeForce 4 Ti4800
Date Released: 2003
Interface: AGP
Shader Model: 1.1
DirectX: 8
Manufacturing Process: 0.15 micron
Core Clockspeed: 300MHz
Memory Clockspeed: 650MHz
Memory Bus: 128-bit
Transistors: 57 million

Next, surround gaming entices, and it's ATI's turn to be king.

Matrox Parhelia

Our condolences go out to you if you happened to be one of the unlucky gamers who spent far too much for a card that performed far too poorly. By all accounts, the Parhelia was supposed to carry Matrox on its shoulders back into the 3D gaming scene after the company essentially gave up following the G400, and the Parhelia's spec sheet looked hopeful. It had a 256-bit memory bus, was the first to feature a 512-bit ring bus, came clocked at 220MHz, and offered a feature called 'Surround Gaming' that made it possible to game across three monitors. It even supported DirectX 9, or did it?

Matrox would later admit that the Parhelia's vertex shaders were not DirectX 9-compliant, even though it was advertised as a DX9 videocard. Making matters worse, the Parhelia retailed for $400, which meant gamers were paying big bucks for a broken card. And if that weren't enough, cards released by the competition that cost half as much pummeled the Parhelia.


(Image Credit: Matrox)

Model: Parhelia
Date Released: 2002
Interface: AGP
Shader Model: 1.1
DirectX: 8
Manufacturing Process: 0.15 micron
Core Clockspeed: 220MHz
Memory Clockspeed: 275MHz
Memory Bus: 256-bit
Transistors: 80 million

ATI Radeon R300

After being leapfrogged by Nvidia, who stole back the performance crown with its GeForce Ti series, it was ATI's turn to jump back in front. The Radeon family entered its third generation in the summer of 2002, which introduced a completely reworked architecture. Even though the R300 was built on the same 0.15-micron manufacturing process, ATI managed to double the number of transistors, ramp up the clockspeeds, and lay claim to having the first fully Direct3D 9-capable desktop graphics card.

Part of the technological leap could be attributed to the use of flip-chip packaging. Flipping the die allowed the chip to be cooled more effectively, opening up additional frequency headroom that otherwise wouldn't have been attainable. But higher clockspeeds weren't the only thing the R300 had going for it. This was the first time a graphics chip maker put out a product that fully utilized a 256-bit bus (let's not even revisit the Parhelia). Along with an integrated crossbar memory controller, the R300 excelled at memory-intensive tasks.

ATI regained the top benchmark spots with its 9700 Pro, and later the 9800 Pro and XT. Several other videocards would flesh out the then-modern Radeon lineup.

Fun Fact: Budget buyers found a gem in the 9500 Pro, which some enthusiasts were able to mod into a 9700 non-Pro. Even when the mod didn't pan out, the 9500 Pro proved popular because of its 8 pixel pipelines and resulting fast performance.


(Image Credit: comresurs.ru)

Model: Radeon 9500 Pro
Date Released: 2002
Interface: AGP
Shader Model: 2.0
DirectX: 9
Manufacturing Process: 0.15 micron
Core Clockspeed: 275MHz
Memory Clockspeed: 270MHz
Memory Bus: 128-bit
Transistors: 107 million


(Image Credit: techspot.com)

Model: Radeon 9700 Pro
Date Released: 2002
Interface: AGP
Shader Model: 2.0
DirectX: 9
Manufacturing Process: 0.15 micron
Core Clockspeed: 325MHz
Memory Clockspeed: 310MHz
Memory Bus: 256-bit
Transistors: 107 million


(Image Credit: foroswebgratis.com)

Model: Radeon 9800 XT
Date Released: 2003
Interface: AGP
Shader Model: 2.0
DirectX: 9
Manufacturing Process: 0.15 micron
Core Clockspeed: 380MHz
Memory Clockspeed: 350MHz
Memory Bus: 256-bit
Transistors: 107 million

Nvidia GeForce FX Series

Whereas the GeForce 4 was, for all intents and purposes, a revised GeForce 3, Nvidia opted to change its naming scheme for its fifth generation of GeForce cards, calling it the FX Series. This also gave birth to the infamous Dawn fairy and the accompanying demo designed to show off what the new architecture was capable of.

One such feature was support for Shader Model 2.0, a requirement of the then-recently released DirectX 9 API. Nvidia's FX Series were the company's first videocards to support SM2.0. Depending on the model, FX cards made use of DDR, DDR2, or GDDR3 memory and a 0.13 micron manufacturing process.

Fun Fact: The FX5800's two-slot cooling solution drew heavy criticism over its excessive noise. It was so loud, many likened it to a dustbuster, and it didn't help that it looked a little bit like one.

Fun Fact 2: Soon after the Dawn demo was released, it was hacked by the online community to work with any ATI card that supported DirectX 9. The modified demo file also featured a naughty NSFW mode when users renamed the executable 3dmark03.exe or quake3.exe.


(Image Credit: Directron)

Model: GeForce FX 5200
Date Released: 2003
Interface: AGP/PCI
Shader Model: 2.1
DirectX: 9
Manufacturing Process: 0.15 micron
Core Clockspeed: 250MHz
Memory Clockspeed: 400MHz
Memory Bus: 64- and 128-bit
Transistors: 45 million


(Image Credit: AOpen)

Model: GeForce FX 5600
Date Released: 2003
Interface: AGP
Shader Model: 2.1
DirectX: 9
Manufacturing Process: 0.13 micron
Core Clockspeed: 350MHz
Memory Clockspeed: 700MHz
Memory Bus: 128-bit
Transistors: 80 million



Model: GeForce FX 5800
Date Released: 2003
Interface: AGP
Shader Model: 2.1
DirectX: 9
Manufacturing Process: 0.13 micron
Core Clockspeed: 400MHz
Memory Clockspeed: 800MHz
Memory Bus: 128-bit
Transistors: 125 million

ATI Radeon R420

Unlike the jump from R200 to R300, ATI didn't totally revamp the core architecture to come up with R420. What the chip maker did do was introduce a new design process, in which R420 cards were manufactured with 0.13-micron low-k dielectric technology. We won't bore you (or ourselves) with all the technical details - in short, ATI was able to produce higher frequency cards that consumed less power and ran cooler. That's a win-win proposition.

In order to maximize its manufacturing potential, the R420 arranged its pixel pipelines into groups of four. If a quad proved defective, ATI could disable it and still sell chipsets with 12, 8, or 4 pipelines for different market sectors.

Introduced with the new architecture was a new naming scheme, starting with the X800 XT Platinum, X800 Pro, and X800 SE. These three cards came configured with 16, 12, and 8 pixel pipelines, respectively, the latter two sporting one and two disabled quads.

Fun Fact: The first generation of CrossFire cards appeared in 2005, but it wasn't as elegant as it is today. Gamers couldn't just toss any two videocards into their system; instead, one had to be a 'Master' version, which would connect to the other card via a DVI Y- dongle.


(Image Credit: Gigabyte)

Model: Radeon X700
Date Released: 2004
Interface: AGP/PCI-E
Shader Model: 2.0
DirectX: 9
Manufacturing Process: 0.11 micron
Core Clockspeed: 400MHz
Memory Clockspeed: 350MHz
Memory Bus: 128-bit
Transistors: 120 million


(Image Credit: thg.ru)

Model: Radeon X800 XT Platinum
Date Released: 2004
Interface: AGP/PCI-E
Shader Model: 2.0
DirectX: 9
Manufacturing Process: 0.13 micron
Core Clockspeed: 520MHz
Memory Clockspeed: 540MHz
Memory Bus: 256-bit
Transistors: 120 million


(Image Credit: tecnacom.nl)

Model: Radeon X850 XT Platinum
Date Released: 2004
Interface: AGP/PCI-E
Shader Model: 2.0
DirectX: 9
Manufacturing Process: 0.13 micron
Core Clockspeed: 540MHz
Memory Clockspeed: 590MHz
Memory Bus: 256-bit
Transistors: 160 million

Next, like Star Trek movies, Nvidia's even-series cards deliver again.

Nvidia GeForce 6 Series

Between running loud (in the beginning) and struggling to remain competitive with ATI's 9800 Series, the GeForce 5 architecture served as somewhat of an anomaly, one which was redeemed with the release of the GeForce 6 Series. GeForce 6 ushered in Shader Model 3.0 support, Nvidia's PureVideo technology, and multi-videocard support with SLI.

Comparing blueprints, GeForce 6 consisted of a larger die, almost twice as many transistors, a pixel fill rate nearly three times as high as the GeForce 5, and a four-fold increase in pixel pipelines (16 total). Later Nvidia would release its GeForce 6 Series in PCI-E form.

A card that proved popular among overclockers was the original 6800, sometimes referred to as the 6800nu (non-Ultra). These cards were built with the exact same chip as the 6800GT and Ultra; however, 4 of the 16 pipelines came disabled. In some cases, these disabled pipelines were actually defective, but overclockers quickly found out this was not always the case. Using an overclocking program called RivaTuner, it was possible to unlock the dormant pipelines, essentially transforming a vanilla 6800 into a faster performing (and pricier) 6800GT. And it was a low-risk mod, too - if the card displayed artifacts after applying the software mod, the original settings could easily be restored.


(Image Credit: Nvidia)

Model: GeForce 6600
Date Released: 2004
Interface: AGP/PCI-E
Shader Model: 3.0
DirectX: 9
Manufacturing Process: 0.11 micron
Core Clockspeed: 300MHz
Memory Clockspeed: 500MHz
Memory Bus: 128-bit
Transistors: 146 million


(Image Credit: techimo.com)

Model: GeForce 6800
Date Released: 2004
Interface: PCI-E
Shader Model: 3.0
DirectX: 9
Manufacturing Process: 0.13 micron
Core Clockspeed: 325MHz
Memory Clockspeed: 600MHz
Memory Bus: 256-bit
Transistors: 222 million


(Image Credit: hardwarezone.com)

Model: GeForce 6800 Ultra Extreme
Date Released: 2004
Interface: PCI-E
Shader Model: 3.0
DirectX: 9
Manufacturing Process: 0.13 micron
Core Clockspeed: 450MHz
Memory Clockspeed: 1200MHz
Memory Bus: 256-bit
Transistors: 222 million

XGI / Trident XG40

When XGI Technology announced its Volari line of videocards in September of 2003, nobody knew what to expect. In fact, hardly anyone had even heard of XGI, which had emerged as a new graphics division just four months prior. But XGI wasn't truly a new kid on the block. XGI, or eXtreme Graphics Innovation (see what they did there?), had previously existed as the Multimedia Product Division of Silicon Integrated Systems (SiS). Not long after XGI branched off under its own moniker, the company went and acquired Trident's mobile graphics division, which was responsible for developing a small handful of Volari videocards.

On the higher end of the Volari spectrum sat the XG40 chipset, which provided the foundation for the Volari V8 Ultra and Volari Duo V8 Ultra. The V8 Ultra utilized a 0.13 micron manufacturing process, boasted 90 million transistors, and claimed compliance with DirectX 9's vast feature set. Even more impressive, at least by the standards of the day, the Duo added a second GPU to the mix. It shared the same 350MHz core clockspeed and 16 rendering pipelines as its single-GPU brethren, and was identical in every way, save for the extra GPU.

Alas, reviews of the Volari V8 Ultra/Duo cited sketchy drivers and poor image quality, the latter the result of driver shortcuts taken in an attempt to make the card(s) faster.


(Image Credit: tech.163.com)

Model: Volari V8
Date Released: 2004
Interface: AGP
Shader Model: 2.0
DirectX: 9
Manufacturing Process: 0.13 micron
Core Clockspeed: 300MHz
Memory Clockspeed: 300MHz
Memory Bus: 128-bit
Transistors: 80 million


(Image Credit: hardwareanalysis.com)

Model: Volari V8 Ultra
Date Released: 2004
Interface: AGP
Shader Model: 2.0
DirectX: 9
Manufacturing Process: 0.13 micron
Core Clockspeed: 350MHz
Memory Clockspeed: 350MHz
Memory Bus: 128-bit
Transistors: 80 million

XGI / Trident XG41

While the XG40 chipset targeted (unsuccessfully) the high end graphics market, the XG41 concentrated on the mainstream crowd. It did this via the Volari V5 Ultra, which also came in a dual-GPU Duo package, by cutting the number of rendering pipelines in half from 16 down to 8. Clockspeeds and memory bandwidth remained the same as the V8 series, but the reduced pipeline count meant significantly lower pixel and texture fill rates. As such, benchmarks weren't too impressive.


(Image Credit: pcstats.com)

Model: Volari V5
Date Released: 2004
Interface: AGP
Shader Model: 2.0
DirectX: 9
Manufacturing Process: 0.13 micron
Core Clockspeed: 350MHz
Memory Clockspeed: 350MHz
Memory Bus: 128-bit
Transistors: 80 million


(Image Credit: madboxpc.com)

Model: Volari V5 Ultra
Date Released: 2004
Interface: AGP
Shader Model: 2.0
DirectX: 9
Manufacturing Process: 0.13 micron
Core Clockspeed: 350MHz
Memory Clockspeed: 450MHz
Memory Bus: 128-bit
Transistors: 80 million (x2)

XGI / Trident XG47

The highest numbered graphics chipset in XGI's short-lived lineup, the XG47 also represented the company's lowest end part. Taking aim at the entry-level sector, the XG47-based Volari 8300 released in 2005 slashed the memory interface down to just 64-bit, making it better suited for home theater setups than for pushing gaming pixels.

But one thing the Volari 8300 boasted that the other Volari videocards didn't was a PCI-E interface. This gave the 8300 added appeal to those who had upgraded, or planned to upgrade, their motherboards and leave AGP behind. Still surprisingly current from a spec-sheet standpoint, the 8300 also brought a DVI port to the table and support for the then-upcoming Windows Vista operating system.

Low power consumption, an integrated TrueVideo engine, and an effective de-interlacing scheme made the 8300 a good all-around multimedia card, so long as you didn't expect much out of 3D games.


(Image Credit: pc-erfahrung.de)

Model: Volari 8300
Date Released: 2005
Interface: PCI-E
Shader Model: 2.0
DirectX: 9
Manufacturing Process: 0.13 micron
Core Clockspeed: 300MHz
Memory Clockspeed: 300MHz
Memory Bus: 64-bit
Transistors: 90 million

Matrox G550

As time went on, Matrox struggled to remain competitive with Nvidia and 3dfx in terms of gaming performance, and it seemed to all but concede the market with the release of the G550 in 2001. Rather than take aim at blistering 3D performance, the G550 focused on the budget and mainstream market with slower clockspeeds (125MHz core and 333MHz memory) than the GeForce2 Ultra (200MHz core and 460MHz memory). Moreover, Matrox again stayed with a 64-bit memory bus, a clear signal that hardcore gamers need not apply.

On the productivity front, the G550 held a bit more appeal, thanks in large part to its DualHead technology. End users could take advantage of two different monitors running separate resolutions, something neither ATI's HydraVision nor Nvidia's TwinView technology could do, and it worked under Windows 2000.

Matrox also introduced a new HeadCasting engine, which was "designed specifically to accelerate the 3D rendering of high-resolution human facial animations over the internet," paving the way for photo-realistic 3D talking heads without hogging bandwidth.

Fun Fact: The Millennium G550 PCI-E was the world's first PCI Express x1 videocard.


(Image Credit: Matrox)

Model: G550
Date Released: 2005
Interface: AGP/PCI-E
Shader Model: N/A
DirectX: 6
Manufacturing Process: 0.18 micron
Core Clockspeed: 125MHz
Memory Clockspeed: 166MHz
Memory Bus: 64-bit

ATI Radeon R520

ATI had come a long way since the days of 3D Rage, and the biggest shift was yet to come. ATI's engineers had gone back to the drawing board, and what they came up with was the R520, a completely new architecture that was unlike anything the company had done before. Serving as the backbone for the new design was what ATI called an "Ultra-Threading Dispatch Processor." Like a foreman, the UTDP was responsible for telling its workers what to do, and when to do it. In this case, the 'workers' were four groups of four pixel shaders, 16 in all. This technique proved highly efficient and allowed ATI to get away with fewer pixel shaders than the 24 employed by the competition.
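To picture the foreman-and-workers arrangement, here's a heavily simplified Python sketch (the class and method names are ours, not ATI's): a dispatcher keeps four quads of four shaders busy by always handing the next batch of pixel work to the least-loaded group.

    from collections import deque

    # Toy model of a dispatch processor feeding four groups of four pixel
    # shaders (16 in all); purely illustrative, not ATI's implementation.
    class UltraThreadingDispatcher:
        def __init__(self, groups=4, shaders_per_group=4):
            self.queues = [deque() for _ in range(groups)]
            self.shaders_per_group = shaders_per_group

        def submit(self, batch):
            # Hand work to the least-loaded quad so no shaders sit idle.
            min(self.queues, key=len).append(batch)

        def step(self):
            # Each quad retires one batch per 'cycle' in this toy model.
            return [q.popleft() for q in self.queues if q]

    utdp = UltraThreadingDispatcher()
    for i in range(8):
        utdp.submit(f"pixel batch {i}")
    print(utdp.step())  # four batches complete in parallel, one per quad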

The R520 also had a redesigned memory controller. The new controller used a weighting system responsible for prioritizing which clients needed access to data the quickest.
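A rough way to think about that weighting scheme is sketched below in Python (the client names and weights are invented for illustration): clients with higher weights simply win access to the memory bus more often over time.

    import random

    # Toy weighted arbiter, not ATI's actual memory controller logic.
    def arbitrate(clients):
        """clients: dict of client name -> weight. Returns this cycle's winner."""
        names = list(clients)
        return random.choices(names, weights=[clients[n] for n in names], k=1)[0]

    demands = {"texture fetch": 5, "ROP writes": 3, "display scan-out": 8}
    print(arbitrate(demands))  # scan-out wins most often; it can't afford to wait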

Several other advancements had been made, most of which focused on efficiency. Better image quality, support for full High Dynamic Range (HDR) lighting for the first time, and improved DVD decoding were among the refinements.

Fun Fact: At the extreme high end, some graphics partners implemented a self-contained water-cooling assembly on the X1950 XTX.


(Image Credit: Gigabyte)

Model: Radeon X1300
Date Released: 2005

Interface: AGP/PCI-E
Shader Model: 3.0
DirectX: 9
Manufacturing Process: 90nm
Core Clockspeed: 450MHz
Memory Clockspeed: 533MHz
Memory Bus: 128-bit (64-bit PCI)
Transistors: 105 million


(Image Credit: computershopper.com)

Model: Radeon X1950 XTX
Date Released: 2006

Interface: PCI-E
Shader Model: 3.0
DirectX: 9
Manufacturing Process: 90nm
Core Clockspeed: 650MHz
Memory Clockspeed: 1000MHz
Memory Bus: 256-bit
Transistors: 384 million

Next, the race to a TFLOP continues with only two dominant contenders.

Nvidia GeForce 7 Series

AGP made its last stand in the Nvidia camp with the GeForce 7 Series; every chipset that came after would only come in PCI-E form. And as had become the norm by this time, 7-series videocards would come in a wide variety of speeds and specifications, including integrated graphics.

At the higher end, the 7800 GTX would comprise 302 million transistors and a 333mm² die size. This was the first 7-series GPU Nvidia released and it would remain the flagship part until the 7900 GTX showed up almost a year later. The 7900 GTX carried a massive (for the time) 512MB frame buffer, 24 pixel shader units, 16 ROPs, and a whopping 54.4GB/s of memory bandwidth.
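As a quick sanity check on that bandwidth figure, peak memory bandwidth is simply the bus width (in bytes) multiplied by the effective memory data rate. Working backward from the 54.4GB/s quoted above and the card's 256-bit bus, that implies roughly a 1.7GT/s effective memory rate - a number we're inferring from the article's own figure, not pulling from Nvidia's spec sheet.

    # Peak memory bandwidth = (bus width in bytes) x (effective data rate).
    bus_width_bits = 256
    effective_rate_gtps = 1.7   # inferred from the 54.4GB/s figure above
    print(round((bus_width_bits / 8) * effective_rate_gtps, 1))  # 54.4 GB/s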

The 7-series also introduced dual-GPU videocards in the 7900 GX2 and 7950 GX2. While these would fit in a single PCI-E slot, they were both essentially two videocards sandwiched together. This proved a popular option for gamers who wanted a two-videocard solution but lacked an SLI-compatible motherboard.


(Image Credit: hwspirit.com)

Model: GeForce 7300 GS
Date Released: 2006
Interface: AGP/PCI-E
Shader Model: 3.0
DirectX: 9
Manufacturing Process: 90nm
Core Clockspeed: 550MHz
Memory Clockspeed: 810MHz
Memory Bus: 128, 256, and 512bit
Transistors: 90 million


(Image Credit: hwupgrade.it)

Model: GeForce 7500 LE
Date Released: 2006
Interface: PCI-E
Shader Model: 3.0
DirectX: 9
Manufacturing Process: 90nm
Core Clockspeed: 550MHz
Memory Clockspeed: 800MHz
Memory Bus: 256 and 512-bit
Transistors: 112 million


(Image Credit: fsplanet.com)

Model: GeForce 7800 GTX
Date Released: 2005
Interface: PCI-E
Shader Model: 3.0
DirectX: 9
Manufacturing Process: 90nm
Core Clockspeed: 430MHz
Memory Clockspeed: 1200MHz
Memory Bus: 256 and 512-bit
Transistors: 302 million


(Image Credit: techpowerup.com)

Model: GeForce 7900 GTO
Date Released: 2006
Interface: PCI-E
Shader Model: 3.0
DirectX: 9
Manufacturing Process: 90nm
Core Clockspeed: 550MHz
Memory Clockspeed: 800MHz
Memory Bus: 512-bit
Transistors: 278 million


(Image Credit: tech2.in.com)

Model: GeForce 7950 GX2
Date Released: 2006
Interface: PCI-E
Shader Model: 3.0
DirectX: 9
Manufacturing Process: 90nm
Core Clockspeed: 500MHz
Memory Clockspeed: 1200MHz
Memory Bus: 512-bit
Transistors: 278 million

ATI Radeon R600

For the R600, ATI drew inspiration from the Xenos GPU it had developed for the Xbox 360 console. Like the Xbox 360, the PC version consisted of a unified shader architecture rather than relying on separate processors for each task. This modern design also boasted DirectX 10, Shader Model 4.0, and full OpenGL 3.0 support.

ATI retained the Ultra-Threaded Dispatch Processor technology it used in the R520, as well as the same general memory controller design. However, bandwidth was almost doubled as a result of a bi-directional 512-bit memory bus on the HD 2900 Pro and XT. This led to very good performance overall, but it couldn't topple the best Nvidia had to offer.

This was again the case when ATI released the R670-based HD 3870. Performance was excellent for the price, but ATI had all but conceded the high end market to Nvidia. It came as little consolation that the new part added support for DirectX 10.1.

Fun Fact: ATI's dual-GPU HD 3870 X2 became the first single videocard to reach the 1 TFLOP milestone.
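For the curious, that milestone checks out on the back of an envelope. The sketch below assumes 320 stream processors per RV670 GPU (a figure that isn't in the spec tables below) and counts two FLOPs per processor per clock for a multiply-add, the usual way such theoretical numbers are derived; the 825MHz clock comes from the X2's spec table.

    # Back-of-the-envelope check on the 1 TFLOP claim (our math, not AMD's).
    stream_processors = 320   # assumed per-GPU count for RV670
    flops_per_clock = 2       # one multiply-add per stream processor per clock
    core_clock_ghz = 0.825    # from the HD 3870 X2 spec table below
    gpus = 2
    print(stream_processors * flops_per_clock * core_clock_ghz * gpus)  # ~1056 GFLOPs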


(Image Credit: vr-zone.com)

Model: Radeon HD 2900 XT
Date Released: 2007

Interface: PCI-E
Shader Model: 4.0
DirectX: 10
Manufacturing Process: 80nm
Core Clockspeed: 743MHz
Memory Clockspeed: 1000MHz
Memory Bus: 512-bit
Transistors: 720 million


(Image Credit: regmedia.co.uk)

Model: Radeon HD 3870
Date Released: 2007

Interface: PCI-E
Shader Model: 4.1
DirectX: 10.1
Manufacturing Process: 55nm
Core Clockspeed: 775MHz
Memory Clockspeed: 1125MHz
Memory Bus: 256-bit
Transistors: 666 million



(Image Credit: techspot.com)

Model: Radeon HD 3870 X2
Date Released: 2007

Interface: PCI-E
Shader Model: 4.1
DirectX: 10.1
Manufacturing Process: 55nm
Core Clockspeed: 825MHz
Memory Clockspeed: 900MHz
Memory Bus: 256-bit
Transistors: 666 million (x2)

Nvidia GeForce 8 Series

When the G80 was released in 2006, it became the first consumer GPU on the planet to support Windows Vista's Direct3D 10 (D3D10) component. It was a $400 million project in the making and the first brand new architecture from Nvidia in a long time. What made the 8-Series so different from anything Nvidia had done before was the introduction of unified shaders. Prior to this point, pixel shaders and vertex shaders existed as separate units.

The heavily threaded unified shader architecture proved more efficient than Nvidia's older designs, and it was also considered by some as the "biggest and most complex piece of mass-market silicon ever created." It owed this complexity in part to 681 million transistors at debut, along with support for Shader Model 4.0. The new architecture allowed Nvidia to clock its shader core at a different speed than the rest of the GPU. For example, at launch the 8800 GTX's shader clock ran at 1350MHz, more than twice as fast as the core clockspeed (575MHz).
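That decoupled shader clock is where the G80 got its muscle. As a rough illustration, assuming 128 stream processors for the 8800 GTX (a count the spec table below doesn't list) and Nvidia's convention of counting up to three FLOPs per processor per clock, the launch clocks quoted above work out to roughly half a teraFLOP of theoretical shader throughput.

    # Rough theoretical shader throughput for the 8800 GTX (our estimate).
    stream_processors = 128   # assumed count for the 8800 GTX
    flops_per_clock = 3       # Nvidia counted a MAD plus a MUL per clock
    shader_clock_ghz = 1.35   # the 1350MHz shader clock mentioned above
    print(round(stream_processors * flops_per_clock * shader_clock_ghz, 1))  # ~518 GFLOPs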

Once again, Nvidia would release several videocards based on the new series, only this time keeping track of them all would be more confusing than ever. Logic would tell you that performance from top to bottom on the 8800GTS would look like this:

  • 8800 GTS 640MB
  • 8800 GTS 512MB
  • 8800 GTS 320MB
In reality, the 512MB 8800 GTS was the best of the bunch. Huh? The reason is that the 320MB and 640MB versions were early releases based on the G80 core, whereas the newer 512MB card was built around the much improved G92 core. This made all the difference in the world, as even though the frame buffer was smaller, the G92-based 8800 GTS had 128 stream processors instead of the 96 found on the original design. In addition, a die shrink from 90nm to 65nm led to faster clockspeeds, faster than even the original 8800 GTX. In some cases the 8800 GTS 512MB outpaced the 8800 GTX, making it one of the most popular cards Nvidia ever released.

Fun Fact: At roughly 10.5 inches long and weighing 740g, the GeForce 8800 GTX was the largest consumer graphics card ever made. As a result (of the length), many gamers found out that their existing PC case wouldn't accommodate the longer videocard, while others had to get creative and either bust out the Dremel to mod drive cages, or remove a drive cage altogether.


(Image Credit: Foxconn)

Model: GeForce 8400 GS
Date Released: 2007
Interface: PCI/PCI-E
Shader Model: 4.0
DirectX: 10
Manufacturing Process: 80nm
Core Clockspeed: 450MHz
Memory Clockspeed: 800MHz
Memory Bus: 64-bit
Transistors: 210 million


(Image Credit: tech2.in.com)

Model: GeForce 8600 GT
Date Released: 2007
Interface: PCI/PCI-E
Shader Model: 4.0
DirectX: 10
Manufacturing Process: 80nm
Core Clockspeed: 540MHz
Memory Clockspeed: 800MHz (DDR2) or 1400MHz (GDDR3)
Memory Bus: 128-bit
Transistors: 289 million


(Image Credit: sysopt.com)

Model: GeForce 8800 GTS (G80)
Date Released: 2007
Interface: PCI-E
Shader Model: 4.0
DirectX: 10
Manufacturing Process: 90nm
Core Clockspeed: 513MHz
Memory Clockspeed: 1584MHz
Memory Bus: 320-bit
Transistors: 681 million


(Image Credit: 3dnews.ru)

Model: GeForce 8800 GTS (G92)
Date Released: 2007
Interface: PCI-E
Shader Model: 4.0
DirectX: 10
Manufacturing Process: 65nm
Core Clockspeed: 650MHz
Memory Clockspeed: 1940MHz
Memory Bus: 256-bit
Transistors: 754 million


(Image Credit: OCZ)

Model: GeForce 8800 GTX
Date Released: 2006
Interface: PCI-E
Shader Model: 4.0
DirectX: 10
Manufacturing Process: 90nm
Core Clockspeed: 575MHz
Memory Clockspeed: 1800MHz
Memory Bus: 384-bit
Transistors: 681 million

Next, catching up to the present with today's graphical beasts.

Nvidia GeForce 9 Series

A new series does not a new architecture make, and that's certainly the case with Nvidia's ninth generation of GeForce parts. In fact, the G92 core (notice the '9') had already found its way onto thousands of desktops before the 9-series was ever introduced. We're of course referring to the G92-based 8800 GTS 512MB discussed earlier. So what exactly was going on?

For the first time in a long time, nothing much was going on. AMD didn't seem interested in (or capable of) competing at the high end, leading to a lull in the graphics wars. Whether or not this influenced Nvidia's roadmap is debatable, but rather than release a new architecture, many considered the 9-Series little more than a rebranding of existing parts. And in many ways, it was. If we're to compare the new with the old, both the 9800 GTX and G92-based 8800 GTS contain 128 stream processors and 16 ROPs, are built on a 65nm manufacturing process, have a 256-bit memory bus, and pack 754 million transistors.

Kicking off the 9-Series, however, was the G94-based 9600 GT. Despite the higher number, the G94 has only half as many stream processors as the G92 (64 versus 128), and still fewer than the 112 found on the 8800 GT. It also has 33 percent fewer transistors than the G92.


(Image Credit: Nvidia)

Model: GeForce 9400 GT
Date Released: 2008
Interface: PCI-E
Shader Model: 4.0
DirectX: 10
Manufacturing Process: 55nm
Core Clockspeed: 550MHz
Memory Clockspeed: 800MHz
Memory Bus: 128-bit
Transistors: 314 million


(Image Credit: hardware.info)

Model: GeForce 9800 GTX+
Date Released: 2008
Interface: PCI-E
Shader Model: 4.0
DirectX: 10
Manufacturing Process: 55nm
Core Clockspeed: 738MHz
Memory Clockspeed: 2200MHz
Memory Bus: 256-bit
Transistors: 754 million

ATI Radeon R700

Announced in 2008, the R700 chipset made it possible for ATI to once again compete at the high end, when it had previously been focusing almost exclusively on the mid-range and entry-level markets. A few key architectural changes made this move up the performance ladder possible - or perhaps it would be better to call them architectural refinements, as the overall design remained similar to the R600/R670.

ATI reworked the tessellation unit in R700 so that it could now export data to the geometry shader, allowing the two to work together efficiently. The ROPs were completely rebuilt for speed and sport dedicated hardware-based multi-sample AA resolve, resulting in a much less dramatic fall-off in performance when AA is cranked up. And with the RV770 revision, ATI's GPUs became the first to support GDDR5 memory.

Related to the new architecture, ATI just recently released the HD 4890, the fastest R7xx-based single-GPU videocard in the company's lineup. The dual-GPU 4870 X2 remains the company's flagship model, and is much more readily available than Nvidia's GTX 295.

Fun Fact: At the end of March 2009, AMD made available a 392-page PDF reference guide on its R700-family Instruction Set Architecture. The 1.88MB document is intended for programmers writing apps and system software, and you can download it here.

Model: Radeon HD 4850 X2
Date Released: 2008

Interface: PCI-E
Shader Model: 4.1
DirectX: 10.1
Manufacturing Process: 55nm
Core Clockspeed: 650MHz
Memory Clockspeed: 993MHz
Memory Bus: 256-bit
Transistors: 956 million (x2)


(Image Credit: geeks3d.com)

Model: Radeon HD 4870 X2
Date Released: 2008

Interface: PCI-E
Shader Model: 4.1
DirectX: 10.1
Manufacturing Process: 55nm
Core Clockspeed: 750MHz
Memory Clockspeed: 900MHz
Memory Bus: 256-bit
Transistors: 956 million (x2)


(Image Credit: HotHardware)

Model: Radeon HD 4890
Date Released: 2009

Interface: PCI-E
Shader Model: 4.1
DirectX: 10.1
Manufacturing Process: 55nm
Core Clockspeed: 850MHz
Memory Clockspeed: 900MHz
Memory Bus: 256-bit
Transistors: 960 million


Nvidia GeForce G 100 Series

If you managed to miss the release of Nvidia's G 100 Series, don't sweat it. No big PR fanfare would accompany the 'new' cards, which aren't really new at all. Instead, the G 100 nomenclature was designed to replace the 9-Series in the OEM market. Minus a few clockspeed adjustments here and there, nothing separated one Series from the other.

Nvidia GeForce 200 Series

Continuing with the unified shader architecture that has served Nvidia well since the 8-Series, the latest chipset pushes the design envelope by stuffing 1.4 billion transistors into the GPU. Let's say that again: 1.4 BILLION transistors. This translates into 930 gigaFLOPs of processing power for the GTX 280, which launched alongside the GTX 260 when Nvidia introduced the updated architecture.

More than just a simple rehash of the 8- and 9-Series GPUs, the 200 Series added considerable muscle by outfitting the GTX 280 with 240 stream processors, nearly twice as many as the fastest 9800 videocard, and the GTX 260 with 192. Both were originally built on a 65nm manufacturing process, however Nvidia has since switched to 55nm and, as part of that transition, released a revised GTX 260. This newer revision is identifiable by its Core 216 designation, which refers to the number of shader processors, up from 192 on the original.
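The 930-gigaFLOP figure quoted earlier falls out of the same kind of arithmetic used for previous generations. The sketch below assumes a 1296MHz shader clock for the GTX 280 (the spec table below lists only the 602MHz core clock) and again counts three FLOPs per stream processor per clock.

    # Where the ~930 gigaFLOP figure for the GTX 280 comes from (our math).
    stream_processors = 240
    flops_per_clock = 3        # MAD plus MUL per clock, as Nvidia counts it
    shader_clock_ghz = 1.296   # assumed shader-domain clock, not in the table
    print(round(stream_processors * flops_per_clock * shader_clock_ghz, 1))  # ~933 GFLOPs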

While Nvidia didn't re-release the GTX 280, it did add the GTX 285, currently the fastest single-GPU card in the company's lineup. Serving as the flagship part, Nvidia's GTX 295 packs two GPUs into a single board, which, like the 7950 GX2, comprises two PCBs sandwiched together. As of this writing, the GTX 295 ranks as the fastest videocard in the universe with a mind-boggling 1,788 gigaFLOPs of processing power.

Fun Fact: While feuding with Intel over a Nehalem licensing agreement, Nvidia CEO Jen-Hsun Huang said the "the CPU has run its course and the soul of the PC is shifting quickly to the GPU," a notion which refers to general purpose computing on the GPU (GPGPU). He went on to call the CPU a "decaying" business.


(Image Credit: hothardware.com)

Model: GeForce GTS 250 (G92)
Date Released: 2009
Interface: PCI-E
Shader Model: 4.0
DirectX: 10
Manufacturing Process: 65nm and 55nm
Core Clockspeed: 738MHz
Memory Clockspeed: 2200MHz
Memory Bus: 256-bit
Transistors: 754 million


(Image Credit: gearlog.com)

Model: GeForce GTX 260
Date Released: 2008
Interface: PCI-E
Shader Model: 4.0
DirectX: 10
Manufacturing Process: 65nm and 55nm
Core Clockspeed: 576MHz
Memory Clockspeed: 1998MHz
Memory Bus: 448-bit
Transistors: 1400 million


(Image Credit: Nvidia)

Model: GeForce GTX 275
Date Released: 2009
Interface: PCI-E
Shader Model: 4.0
DirectX: 10
Manufacturing Process: 55nm
Core Clockspeed: 633MHz
Memory Clockspeed: 2268MHz
Memory Bus: 448-bit
Transistors: 1400 million


(Image Credit: gameguru.in)

Model: GeForce GTX 280
Date Released: 2008
Interface: PCI-E
Shader Model: 4.0
DirectX: 10
Manufacturing Process: 65nm
Core Clockspeed: 602MHz
Memory Clockspeed: 2214MHz
Memory Bus: 512-bit
Transistors: 1400 million


(Image Credit: bit-tech.net)

Model: GeForce GTX 285
Date Released: 2009
Interface: PCI-E
Shader Model: 4.0
DirectX: 10
Manufacturing Process: 55nm
Core Clockspeed: 648MHz
Memory Clockspeed: 2484MHz
Memory Bus: 512-bit
Transistors: 1400 million


(Image Credit: hardware.info)

Model: GeForce GTX 295
Date Released: 2009
Interface: PCI-E
Shader Model: 4.0
DirectX: 10
Manufacturing Process: 55nm
Core Clockspeed: 576MHz
Memory Clockspeed: 1998MHz
Memory Bus: 448-bit
Transistors: 1400 million
