AMD’s Radeon HD 3870 is a fine GPU for the money. It doesn’t outperform Nvidia’s GeForce 8800 GTX, and it lags far behind the extravagant 8800 Ultra, but it does deliver a phenom— er, make that a tremendous price/performance ratio.
So what happens when you put two of these parts—each with its own 512MB frame buffer—on a single board? You get a Radeon R3870 X2. The result isn’t as spectacular as you’d expect, but MSI’s implementation delivers plenty of bang for the buck. This card isn’t an Ultra killer by any means, but with a price tag of just $450, it doesn’t need to be.
There’s nothing mysterious about the R3870 X2—the two GPUs are exactly the same as those on a single-GPU card. Each one has 320 stream processors, a 256-bit memory interface, support for AMD’s Unified Video Decoder (for offloading HD DVD and Blu-ray video decoding from the host CPU), and dual-link DVI with HDCP on both links (to support the native resolution of 30-inch LCD panels).
If you care as much about high-definition video decoding as you do about gaming, you probably know that neither Nvidia’s 8800 GTX nor its 8800 Ultra supports those last two features. And while Nvidia’s newer GPUs do fully offload HD video decoding, they don’t support the incremental updates to DirectX that the R3870 X2 does: Direct3D 10.1 and Shader Model 4.1 (although we believe this support to be unimportant right now).
MSI set the GPUs’ cores to run at 828MHz and the memory at 955MHz, a fraction faster than AMD’s reference-design specs of 825MHz and 900MHz, respectively. As with AMD’s 3870 X2 reference design, MSI’s board has two 512MB frame buffers, one for each GPU. AMD’s reference design and MSI’s implementation both use GDDR3 memory, compared to the GDDR4 memory found on single-GPU 3870 cards. AMD tells us there’s nothing about the design that would prevent its board partners from using GDDR4 memory or from increasing the size of the frame buffers (although we suspect there wouldn’t be a tremendous difference in performance from either design change).
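As a quick sanity check on those factory clock bumps, here’s a back-of-the-envelope calculation (using only the clocks cited above) showing how modest the core overclock is next to the memory bump:

```python
# Factory overclock relative to AMD's reference specs, in MHz,
# using the figures cited in the review.
reference = {"core": 825, "memory": 900}
msi = {"core": 828, "memory": 955}

for part in ("core", "memory"):
    gain = (msi[part] - reference[part]) / reference[part] * 100
    print(f"{part}: {msi[part]}MHz vs {reference[part]}MHz reference (+{gain:.1f}%)")
# core comes out to roughly +0.4%, memory to roughly +6.1%
```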
A PCI Express 1.1 bridge chip sitting between the two processors effectively creates CrossFire on the card (with 16 bidirectional lanes for each GPU) without the need for a CrossFire chipset on the motherboard. There is, however, a single CrossFire interconnect that will allow you to pair two of these cards in a CrossFireX rig with four 3870 GPUs, but that configuration does require a CrossFire chipset. The board itself supports PCI Express 2.0, but AMD tells us that putting a PCI Express 2.0 bridge chip between the two GPUs would have delayed the product and wouldn’t have yielded much of a performance boost anyway.
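For a rough sense of what that bridge choice costs, here’s a sketch of the per-direction bandwidth math, assuming PCIe 1.1’s 2.5 GT/s per lane with 8b/10b encoding and PCIe 2.0’s doubled 5.0 GT/s signaling rate:

```python
def pcie_bandwidth_gbps(gt_per_s: float, lanes: int) -> float:
    """Usable bandwidth per direction in GB/s, assuming 8b/10b encoding
    (8 usable bits for every 10 bits transmitted)."""
    return gt_per_s * (8 / 10) * lanes / 8  # bits -> bytes

print(pcie_bandwidth_gbps(2.5, 16))  # PCIe 1.1 x16: 4.0 GB/s per direction
print(pcie_bandwidth_gbps(5.0, 16))  # PCIe 2.0 x16: 8.0 GB/s per direction
```

In other words, a 2.0 bridge would have doubled the raw link bandwidth between the GPUs, but AMD’s position is that the 1.1 link isn’t the bottleneck in practice.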
Having all the components on a single board strikes us as a much more elegant solution than sandwiching boards together, which Nvidia did with its since-discontinued 7950 GX2. It also allows AMD to use a single cooler, which is located at the very end of the board and exhausts outside the case, for both GPUs and both frame buffers.
*You’ll need both the six-pin and eight-pin power connectors if you intend to overclock a 3870 X2 board.*
Having a single fan not only renders the card nearly as quiet as a single-GPU configuration but also avoids the need for twice the electrical power. The R3870 X2 has two auxiliary power sockets onboard, one six-pin and one eight-pin, but only the six-pin socket is needed for normal operation. If you intend to overclock the board, you will need to send power to both of them. In our tests, our 3870 X2 test system consumed about 170 watts at idle and around 275 watts under load, compared to the 3870’s 117 watts at idle and 208 watts under load.
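Working out the deltas from the system-level measurements above makes the point: the second GPU adds far less than double a single 3870’s draw.

```python
# Measured total system power draw, in watts, from the review's tests.
x2 = {"idle": 170, "load": 275}
single_3870 = {"idle": 117, "load": 208}

for state in ("idle", "load"):
    delta = x2[state] - single_3870[state]
    print(f"{state}: +{delta}W for the dual-GPU card over the single-GPU system")
# idle works out to +53W, load to +67W
```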
We periodically update the games we use for videocard benchmarking, but we’ve stuck with the Shader Model 3.0 tests in the artificial benchmark 3DMark06 as a means of providing continuity. The results we’ve seen with the 3870 X2, however, indicate that the benchmark has finally outlived its usefulness: The 2x performance boost it delivers there doesn’t jibe with the frame rates we saw in actual games. In fact, there was virtually no performance scaling in Crysis at all with the 3870 X2 when compared to a single Radeon 3870.
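To make the scaling comparison concrete, the metric we’re describing is trivial: divide the dual-GPU frame rate by the single-GPU frame rate. The frame rates in this sketch are hypothetical placeholders, not our benchmark results:

```python
def scaling_factor(dual_gpu_fps: float, single_gpu_fps: float) -> float:
    """Multi-GPU scaling: 2.0 means perfect scaling, 1.0 means none."""
    return dual_gpu_fps / single_gpu_fps

# Hypothetical illustration only:
print(scaling_factor(90.0, 45.0))  # 3DMark-style result: 2.0, perfect scaling
print(scaling_factor(46.0, 45.0))  # Crysis-style result: ~1.0, no real scaling
```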
The 3870 X2 is a good solution, but it doesn’t solve the fundamental problem with dual-, tri-, and quad-GPU systems: Their performance doesn’t scale with every game—including high-profile titles like Crysis that you’d buy these cards for in the first place.
Relatively quiet and power efficient; offloads video-decoding chores from the host CPU.
Currently delivers no performance scaling with Crysis, the most demanding game on the market.
| Windows XP (DirectX 9) | MSI Radeon 3870 X2 | XFX GeForce 8800 Ultra |
|---|---|---|
| 3DMark06 Game 1 (FPS) | **47.6** | 35.2 |
| 3DMark06 Game 2 (FPS) | **42.1** | 31.7 |
| Crysis (DX9) (FPS) | 23.1 | **38.6** |
| Unreal Tournament 3 (FPS) | 86.5 | **97.6** |
| Windows Vista (DirectX 10) | MSI Radeon 3870 X2 | XFX GeForce 8800 Ultra |
|---|---|---|
| 3DMark06 Game 1 (FPS) | **46.4** | 34.3 |
| 3DMark06 Game 2 (FPS) | **41.9** | 30.9 |
| Crysis (DX10) (FPS) | 26.7 | **30.1** |
| Unreal Tournament 3 (FPS) | 68.4 | **75.4** |
*Best scores are bolded. AMD-based cards tested with an Intel D975XBX2 motherboard; Nvidia-based cards tested with an EVGA 680i SLI motherboard. Intel 2.93GHz Core 2 Extreme X6800 CPUs and 2GB of Corsair DDR2 RAM used in both scenarios. Benchmarks performed on ViewSonic VP2330wb monitors.*
| GPU | Radeon HD 3870 X2 |
|---|---|
| GPU Manufacturing Process | 55nm |
| Memory | 512MB GDDR3 (x2) |
| Form Factor | Dual slot |
| Display Interface | Dual-link DVI with HDCP on both links |
| PCI Express Support | 2.0 on the motherboard; 1.1 on the card |
| DirectX Support | Direct3D 10.1; Shader Model 4.1 |
| Power Sockets | One six-pin, one eight-pin |