With the GeForce GTX 780 Ti, Nvidia has snatched the single-GPU performance crown back from the clutches of the recently launched Radeon R9 290X, and not by a small margin, either, but by a landslide. In dethroning the R9 290X, Nvidia has also taken the GTX Titan to the woodshed, as the GTX 780 Ti is far and away the fastest single GPU we have ever tested. Read on to see how it fares against the GTX 780, the R9 290X, and the former champ, the GTX Titan.
Back when the GTX Titan launched we all proclaimed it to be "Big Kepler," or the full implementation of the Kepler architecture instead of the half-Kepler GK104 we got with the GTX 680. Of course, we all loved the GTX 680 at the time, but it was roughly half the size of the GK110 chip Nvidia had deployed to supercomputers worldwide. When Nvidia finally got around to stuffing the GK110 into a gaming GPU named Titan, we all rejoiced since we had finally acquired the real-deal Holyfield Big Kepler GPU.
It's hard to notice in this image, but the cooling shroud has a darker, smoked appearance to match the darker lettering.
However, even the Titan wasn't a full GK110 part, as it had one of its SMX units disabled. This raised the question: would Nvidia ever release a Titan Ultra with all SMX units intact? With the GTX 780 Ti we finally have that card. Not only does it have all 15 SMX units enabled, this bad mutha also has the fastest memory available on an Nvidia GPU with its 3GB of 7GHz GDDR5 RAM. Previously, this speed of memory was found only on the mid-range GTX 770. The bottom line is that Nvidia is pulling out all the stops with the GTX 780 Ti in an effort to shame the R9 290X and once again establish itself as the king of the single-GPU space. It should be noted that the GTX 780 Ti does not offer the GTX Titan's double-precision compute performance, so CUDA developers will still prefer that card. The GTX 780 Ti is made for gamers, not scientists. We should also point out that the GTX 780 Ti supports quad-SLI, just like the GTX Titan, while the GTX 780 does not.
Let's have a look at the specs of the GTX 780 Ti along with its closest competitors.
*The R9 290X's TDP isn't a quoted spec from AMD but rather one with air quotes around it. We believe it to be a bit higher than 250w.
On paper it's clear the GTX 780 Ti out-specs both of its competitors, not to mention the GTX 780, obviously. Although its memory bus isn't as wide as the R9 290X's, it has faster memory, so it's able to achieve higher overall memory bandwidth. The R9 290X is capable of pushing 320GB/s thanks to its slower 5GHz memory on a wider 512-bit bus, while the GTX 780 Ti's faster 7GHz memory can squeeze 336GB/s through its narrower 384-bit bus. The GTX 780 Ti has more processing cores as well, and thanks to Kepler's higher level of efficiency compared to AMD's GCN architecture, it's able to sustain much higher clock rates at all times. All that adds up to one ass-kicking GPU, as we'll see shortly. Like the GTX 780, the card measures 10.5 inches in length and requires a six-pin and an eight-pin power connector. TDP is unchanged at 250w.
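Those bandwidth figures follow directly from the effective memory data rate and the bus width; a quick sketch of the math:

```python
def bandwidth_gbs(data_rate_ghz, bus_width_bits):
    """Peak theoretical memory bandwidth in GB/s.

    data_rate_ghz: effective GDDR5 data rate (the quoted "memory clock")
    bus_width_bits: memory bus width; divide by 8 to convert bits to bytes
    """
    return data_rate_ghz * bus_width_bits / 8

print(bandwidth_gbs(5.0, 512))  # R9 290X: 320.0 GB/s
print(bandwidth_gbs(7.0, 384))  # GTX 780 Ti: 336.0 GB/s
```

In other words, the 780 Ti's 40 percent faster memory more than makes up for its 25 percent narrower bus.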
Since this board carries the GTX 780 moniker, let's look at how it differs from the GTX 780, because remember, this card costs $200 more than the original GTX 780 now that Nvidia has lowered its price. First, it has 25 percent more CUDA cores, going from 2,304 to 2,880, which is quite a jump. Second, it has faster GDDR5 memory, which has been bumped up a full 1GHz to 7GHz. Third, it has a new feature Nvidia calls Max OC that balances the power going to the card from its three sources: the six-pin and eight-pin rails, and the PCI Express bus. Nvidia claims the board usually does this on its own quite well, but when overclocking all bets are off, and too little power from any one source could limit the overclock. Nvidia claims this situation is rectified on the GTX 780 Ti, so you should be able to overclock this board higher than you could a GTX Titan or GTX 780. Finally, though it's not a new feature, this card also supports GPU Boost 2.0, like the other cards in the 700 series. However, with the arrival of the variable-clock-rate Radeon R9 290X, Nvidia is pointing out that it guarantees a base level of performance on all its 700 series cards, regardless of operating conditions. This is in contrast to the new Hawaii boards from AMD, which state a "max clock speed" but not the actual average clock speed under load, which tends to be a bit lower. We'll have more on that a bit later.
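Nvidia doesn't publish per-rail draw figures, but the ceilings on those three power sources come from the PCI Express spec: the slot itself supplies up to 75w, a six-pin connector another 75w, and an eight-pin connector 150w. A quick sanity check of the headroom that leaves over the card's 250w TDP:

```python
# Per-source power ceilings, in watts, per the PCI Express spec.
PCIE_SLOT = 75   # power delivered through the x16 slot
SIX_PIN = 75     # six-pin auxiliary connector
EIGHT_PIN = 150  # eight-pin auxiliary connector

total_available = PCIE_SLOT + SIX_PIN + EIGHT_PIN
tdp = 250
headroom = total_available - tdp  # watts left over for overclocking

print(total_available)  # 300
print(headroom)         # 50
```

That 50w of nominal headroom is only usable if it can be drawn from whichever rail needs it, which is presumably the point of the balancing feature.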
One of the most interesting features Nvidia has announced recently for its Kepler GPUs is G-Sync, technology built into upcoming LCDs that enables them to work hand-in-hand with the Kepler GPU to sync the display's refresh rate to the frames coming out of the GPU. It's essentially the end of V-sync as we know it, and since most hardcore gamers never use V-sync, we couldn't be more thrilled about this technology. By syncing the monitor's refresh rate with the actual frame rate coming out of the GPU, tearing and shearing are totally eliminated, resulting in a much smoother visual experience on-screen. There are some caveats, of course. First, we have not tested or witnessed G-Sync in action in our own lab, and have only seen an Nvidia-prepared demo of the tech, but what we've seen so far looks very good, and we have no reason to believe it won't fulfill its promises once it lands in the lab.
In order to experience Nvidia's G-Sync technology you'll need a G-Sync LCD. The first one from Asus is a $400 24" model.
However, since the monitors are not yet available, we'll have to wait to deliver a verdict on this particular piece of gear. Second, in order to get this technology you will first have to acquire a G-Sync display, or buy an actual PCB and mod your monitor somehow. We're not sure how that would work, or which monitors will allow it, so again, we'll have to wait and see. We don't believe most gamers will want to buy a new LCD just to get this technology, however. Still, kudos to Nvidia for taking on a problem that has existed for as long as we can remember. If it really is as good as John Carmack and Tim Sweeney say it is, it could revolutionize the gaming industry.
ShadowPlay is more efficient than Fraps, and doesn't consume your entire hard drive, either.
We covered this technology at the GTX Titan launch, and back then it was "coming soon." Now that it's finally out, albeit still in beta, it's Nvidia-exclusive technology that should factor into one's purchasing decision. In brief, it lets you capture gameplay footage with almost no performance penalty, according to Nvidia. The H.264 encoder built into the Kepler architecture compresses captured footage to reduce file size, and ShadowPlay can run in the background, always recording what you last did in the game, hence its name. We have been playing with it in the lab, so expect a writeup on our experience with it shortly.