Update: Leaked Intel Roadmap Reveals New Batch of Haswell Chips
Faster hardware shouldn’t be this somber. Yet we can’t help but furrow our brow in concern over Intel’s fourth-generation Core i7 CPU. Yes, in typical Intel fashion, it’s a tour de force of technical achievement and features that’s the envy of the free world. It’s also, by the way, quite fast.
How fast? *Spoiler alert:* Let’s just say that the new Core i7-4770K easily unseats the previous midrange sweetheart, the Core i7-3770K, as the best all-around performer, and even gives the high-end hexa-core part a hard time.
Even the most diehard fanboy can admit AMD’s not in the hunt against Intel’s top-end processors (well, the reasonable ones, anyway).
That doesn’t mean AMD can’t still give Intel a hard time. While AMD can’t compete with the Core i7-3970X or even the Core i7-4770K, the company’s rush to merge CPU and GPU into the APU has put more pressure on Intel than Intel would probably care to admit.
Nvidia delivers a juiced GK104 in the GeForce GTX 770
Today the embargo lifts on the second GeForce GTX 700 series GPU to be announced within a week: the Titan-cooled but GK104-powered GeForce GTX 770. Unlike the GTX 780 announced last week, this card does not use the monstrous GK110 GPU, but instead opts for a highly clocked version of the GK104 chip previously found in the GTX 680, GTX 670, and GTX 660 Ti. It's the highest-clocked part of all of those cards, though, and it also has 7Gb/s memory instead of the 6Gb/s variety found in all the previous Kepler cards, giving it a significant bump in memory bandwidth.
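The bandwidth bump from that faster memory is simple arithmetic: peak bandwidth is the per-pin data rate times the bus width. A quick sketch, assuming the GK104's 256-bit memory interface (the article states only the per-pin rates):

```python
def memory_bandwidth_gb_per_s(data_rate_gbps, bus_width_bits):
    """Peak memory bandwidth in GB/s: per-pin rate (Gb/s) * bus width / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

# 6Gb/s memory on a 256-bit bus (earlier Kepler cards like the GTX 680)
gtx_680_bw = memory_bandwidth_gb_per_s(6, 256)  # 192.0 GB/s

# 7Gb/s memory on the same 256-bit bus (GTX 770)
gtx_770_bw = memory_bandwidth_gb_per_s(7, 256)  # 224.0 GB/s
```

That works out to roughly a 17 percent increase in peak memory bandwidth from the faster memory alone.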
Today Nvidia pulls the wraps off its $650 GK110-based 700 series flagship card, the GeForce GTX 780. This board slides directly into the yawning chasm that exists between the $500 GK104-based GTX 680 and the $1,000 GK110-based GTX Titan, though despite its price it's actually much closer in specs and performance to the Titan than it is to the GTX 680.
Samsung's upcoming flagship smartphone bests the competition in Rightware's Browsermark benchmark.
It's expected Samsung will launch its highly anticipated Galaxy S IV smartphone at the Mobile Unpacked event in New York on March 14, but in the meantime, we have some benchmark scores to salivate over. Topping Rightware's Browsermark 2.0 benchmark is a listing for the Samsung GT-I9500, believed to be the model number for the Galaxy S IV, and it looks to be a scorching fast device.
View several screenshots from the Cloud Gate test in Futuremark's upcoming 3DMark tool.
The folks at Futuremark are putting the final touches on a new version of the popular 3DMark benchmark suite, one that promises cross-platform benchmarking, meaning you can compare scores from Windows, Windows RT, Android, and iOS devices. It's slated to launch in January 2013, but in the meantime, you can view a handful of screenshots from the Cloud Gate test in our gallery after the jump.
We test the latest Beta drivers to see who is the single-GPU champ
Earlier this year both AMD and Nvidia released all-new 28nm GPUs, resulting in AMD taking the single-GPU performance crown momentarily with its HD 7970 before Nvidia swiped it away a few months later with its GeForce GTX 680. It’s been a while since we’ve even thought about either of these cards, as we’ve been busy testing their binned counterparts for most of the year, but this past week AMD released a new Beta driver that it claims provides "significant" performance improvements for its already-potent HD 7000 series cards. Just one day later Nvidia pounced, releasing its own Beta driver that also claimed to boost performance in a wide variety of popular titles. This happens all the time: as soon as one manufacturer holds an advantage, the other strikes back to drag the performance crown back to its own camp, typically via an overclocked card, improved drivers, or both.
Can AMD make magic? Check out our in-depth Vishera benchmarks.
On paper, AMD’s Bulldozer microarchitecture always sounded like a mean, green machine. When it landed last year, though, in the form of the Zambezi processor (aka FX-8150), it actually went about as fast as a bulldozer.
AMD didn’t just give up and curl into a ball. The company went back to work polishing the FX chip into the new AMD FX-8350 “Vishera.” The chip might look like a Zambezi, but it features an improved branch predictor, an improved scheduler, a larger L1 translation lookaside buffer, new FMA3 and F16C instructions, and L2 improvements, among many other changes.
Vishera looks the same externally, and there's good news: it uses the same AM3+ socket, too.
We knew this day would come, but that doesn’t make it any less exciting. After all, we’ve been waiting since Saturday. Today Nvidia launches the just-announced GeForce GTX 690, which packs two full GK104 Kepler GPUs onto one video card—and what a card it is. (For an in-depth look at the GTX 680, the GK104 GPU, and the Kepler architecture, check out the feature story from our June issue!)
With premium magnesium-alloy casing, polycarbonate windows, and an LED-backlit logo, the $1,000 GeForce GTX 690 reference card looks as expensive as it is.
Two. Two GPUs.
The GTX 690 is 11 inches long—big for an Nvidia card, but still smaller than the 12.2-inch high-water mark established by the AMD Radeon 5970 a few generations ago. As you’d expect, the GTX 690 contains two of the same GPU found in the GTX 680, with a slightly lower base clock—915MHz with a boost clock of 985MHz, compared to the 1,006MHz base and 1,058MHz boost clock for a reference GTX 680. Nvidia says it has built in substantial headroom for overclocking, too, claiming you can get clocks of over 1,100MHz on the stock cooler.
Aside from the slightly lower clocks, the rest of the board’s specs are exactly what you’d expect from a true dual-680 configuration: 3,072 CUDA cores, 16 SMX units, 256 texture units, and 64 ROPs. Each GK104 GPU has 2GB of GDDR5 with four 64-bit memory channels per GPU, for a total of 4GB GDDR5 frame buffer for the whole card.
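Those totals are exactly double a single GTX 680's GK104. A quick sanity check of the math, assuming the GTX 680's published per-GPU configuration (1,536 CUDA cores, 8 SMX units, 128 texture units, 32 ROPs):

```python
# Per-GPU figures assumed from the GTX 680's GK104 configuration;
# the article lists only the whole-card totals.
GK104_SPECS = {"cuda_cores": 1536, "smx_units": 8,
               "texture_units": 128, "rops": 32, "memory_gb": 2}
NUM_GPUS = 2

# Doubling every per-GPU figure gives the GTX 690's card-wide totals.
gtx_690_totals = {spec: count * NUM_GPUS for spec, count in GK104_SPECS.items()}
# → 3,072 CUDA cores, 16 SMX units, 256 texture units, 64 ROPs, 4GB GDDR5

# Each GPU drives four 64-bit memory channels, i.e. a 256-bit bus per GPU.
bus_width_per_gpu_bits = 4 * 64  # 256
```

Note that the 4GB frame buffer is really 2GB per GPU: in SLI-style operation each GPU holds its own copy of the frame data, so the effective frame buffer is 2GB, not 4GB.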
Click "Read More" for the full specs, benchmarks, and more!