Intel's Atom platform has been such a resounding success that one has to wonder what the No. 1 chip maker has planned for a follow-up. You don't have to wonder anymore, as Intel this week officially unveiled 'Pine Trail', the codename for Atom's successor.
The CPU used in Pine Trail, called 'Pineview,' moves the memory controller and GPU onto the same die as the CPU. This means Pine Trail will be a two-chip solution, one less than Intel's current netbook platform. In theory, this should result in cost savings and lower power consumption.
Pineview is being built on a 45nm manufacturing process. Intel hasn't said what type of memory controller it will use, though previous speculation pointed to single-channel DDR2. But what's most interesting is how the war between Intel and Nvidia is shaping up. Like Pine Trail, Nvidia's Ion platform is also a two-chip solution and will have had time to mature by the time Pine Trail debuts later this year. Performance looks to be better on the 9400M-based Ion as well, but Intel's price structure for selling standalone Atoms could put Nvidia at a disadvantage. Moreover, what chips will Nvidia use once Intel makes the move to a CPU+GPU solution?
Try to imagine where 3D gaming would be today if not for the graphics processing unit, or GPU. Without it, you wouldn't be trudging through the jungles of Crysis in all its visual splendor, nor would you be fending off endless hordes of fast-moving zombies at high resolutions. It takes a highly specialized chip designed for parallel processing to pull off the kinds of games you see today, games that simply wouldn't be possible on a CPU alone. Going forward, GPU makers will try to extend our reliance on videocards to physics processing, video encoding/decoding, and other tasks that were once handled by the CPU.
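To get a feel for why graphics chips are so good at this kind of work, here's a minimal sketch in CUDA (a generic SAXPY example of our own, not code pulled from any product mentioned here) showing how a job a CPU would grind through one element at a time gets spread across thousands of GPU threads instead:

```c
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Each GPU thread handles exactly one array element, so a million
// elements are processed in parallel rather than in a serial CPU loop.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;                    // one million elements
    const size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Copy the data over to the videocard's memory.
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover every element.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);

    // Copy the result back and spot-check it.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);             // expect 5.0 (3 * 1 + 2)

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

That same data-parallel structure is exactly what makes physics simulation and video encoding such natural fits for the GPU.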
It's pretty amazing when you think about how far graphics technology has come. To help you do that, we're going to take a look back at every major GPU release since the infancy of 3D graphics. Join us as we travel back in time and relive releases like 3dfx's Voodoo3 and S3's ViRGE lineup. This is one nostalgic ride you don't want to miss!
Just in case you were worried that Intel wasn't committed to its heavily delayed Larrabee platform, a $12 million investment in a new Visual Computing Institute should help convince you otherwise. Located at Saarland University in Saarbrücken, Germany, it is the largest joint project ever formed between Intel and a European university. The institute will help Intel explore advanced graphical computing technologies, encompassing everything from more realistic gaming to advanced 3D user interfaces.
The primary focus of the research will be Intel's tera-scale computing program. This will help Intel better understand how Larrabee's many-core x86 architecture can deliver sustainable performance increases over modern-day GPUs. Larrabee has been delayed until some unknown date in 2010, presumably because it hasn't yet achieved the kind of performance gains Intel was hoping for against Nvidia and AMD.
In addition to tera-scale research, Intel will also work with other hardware design labs in Barcelona, Spain, and Braunschweig, Germany, to help optimize the Larrabee design. Z-buffering, clipping, and even ray tracing are all promises made by the Larrabee team, but clearly the software needed to make it all happen still requires some work.
So is Larrabee really the future? Or does this only prove Nvidia’s case that its promise is overhyped?
Intel's Larrabee project might rank as one of the most anticipated technology releases in a long while, and it looks like we'll have to wait just a bit longer than originally thought. It was expected that Intel would launch its many-cored cGPU sometime in late 2009; however, the chip maker now says it plans to launch Larrabee in 2010.
Not a whole lot of details are known about Larrabee, only that it's an x86-based discrete graphics solution built around second-generation Pentium processor technology with the P54C core. When Larrabee launches, it will come in several iterations, the lowest of which will comprise no fewer than eight cores. On the higher end, look for at least 32 cores and a 2GHz or faster clockspeed.
While it all sounds impressive, Intel's Joseph Schultz did say that it would be a "big challenge" to compete with products from Nvidia and AMD.
Much was made over the race to 1GHz on the CPU front, a race AMD won with its Athlon processor. Markedly less exciting (but still an impressive feat) has been the sprint to churn out the first factory-clocked 1GHz GPU, with AMD again claiming victory, this time over Nvidia instead of Intel.
"Throughout the 40-year history of AMD, we have continually focused on technology firsts that deliver superior value to the customer," said Rick Bergman, senior VP, Products Group, AMD. "The 1GHz ATI Radeon HD 4890 continues that tradition by increasing the performance and compute power of our flagship singleGPU solution, ensuring a great experience whether our customers are playing the latest DirectX 10.1 game or running GPU accelerated applications built with OpenCL."
At 1GHz, the HD 4890 is able to deliver 1.6 TeraFLOPs of computing power, or "50 percent more than that of the competition's best single-GPU solution." In terms of real-world performance, however, the HD 4890 trails slightly behind Nvidia's GTX 285 in most benchmarks, or at least it does at 900MHz (see review of Asus Radeon EAH4890 Top in the June 2009 issue of Maximum PC on page 74).
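If you're wondering where that 1.6 TFLOPs figure comes from, it's straightforward peak-throughput math, assuming the usual convention of counting each stream processor's multiply-add as two floating-point operations per clock across the chip's 800 stream processors:

\[
800~\text{stream processors} \times 2~\text{FLOPs/clock} \times 1.0~\text{GHz} = 1.6~\text{TFLOPs}
\]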
FPS jockeying aside, it's good to see AMD aggressively going after the top spot in the graphics market rather than conceding the high-end sector to Nvidia as it did with its last generation of GPUs.
Another first-quarter revenue report, and another loss, this one from Nvidia. According to the report, the GPU maker's revenue slid 42 percent from last year, and the company posted a net loss of $201.3 million, or 37 cents per share. That's a big swing from last year, when Nvidia posted a profit to the tune of $176.8 million, or 30 cents per share.
But hey, it seems everyone's numbers are down, and for Nvidia, analysts were anticipating even worse. While Nvidia's revenue of $664.2 million is a far cry from the $1.15 billion it posted last year, Wall Street had Nvidia pegged at just $534.4 million, meaning the company beat expectations by roughly $130 million.
To help cope with the recession, Nvidia has begun cutting back on its inventory, a strategy that seems to be working so far. Inventory was scaled back from 144 days to 64 days, CNet reports, and revenue grew 38 percent over the previous quarter.
"We made good progress managing expenses and significantly reducing inventory," said Jen-Hsun Huang, CEO of Nvidia.
Nvidia has just released a new WHQL-certified driver, version 185.85, for GeForce videocard and ION platform owners. The new driver adds official support for the recently released GTX 275 videocard, as well as support for CUDA 2.2, which Nvidia says will result in improved performance in GPU computing applications. Other performance claims include:
Up to 25 percent in The Chronicles of Riddick: Assault on Dark Athena
Up to 22 percent in Crysis: Warhead with antialiasing enabled
Up to 11 percent in Fallout 3 with antialiasing enabled
Up to 14 percent in Far Cry 2
Up to 30 percent in Half-Life 2 engine games with 3-way and 4-way SLI
Up to 45 percent in Mirror's Edge with antialiasing enabled
You read that right: that's up to a 45 percent boost in Mirror's Edge, according to Nvidia. In addition, 185.85 updates the PhysX software to version 9.09.0408 and offers "numerous bug fixes."
If you're looking for another venue to get your very own supercomputer, you're in luck! Nvidia has announced that its CUDA-based Tesla C1060 GPU is available in Dell's Precision R5400, T5500, and T7500 workstations effective immediately.
If you're worried that just one of these GPUs isn't enough to handle your hardcore needs, worry not: a single C1060 has enough power to drive the mirror-control simulation for the European Extremely Large Telescope project (reportedly the world's largest). According to Jeff Meisel with National Instruments, a workstation "equipped with a single Tesla C1060 can achieve near real-time control of the mirror simulation and controller, which before wouldn't be possible in a single machine without the computational density offered by GPUs."
Three years ago, AMD acquired graphics chip maker ATI for $5.4 billion, and ATI has been producing and selling videocards as a separate division ever since. Under a new reorganization plan, that will no longer be the case, as AMD will merge its CPU and graphics units into a single group.
"The next generation of innovation in the computing industry will be grounded in the fusion of microprocessor and graphics technologies," AMD CEO Dirk Meyer said in a statement. "With these changes, we are putting the right organization in place to help enable the future of computing."
The new products group will be one of four new groups, with the others focusing on technology, marketing, and customers. Senior VP Rick Bergman will lead the products group, which AMD says will be responsible for delivering all of AMD's platforms and products.
AMD also announced that Randy Allen, senior VP, Computing Solutions Group, has decided to step down. Allen had stepped into his role a year ago as part of another major reorganization.
Nvidia recently announced the immediate availability of its ready-to-use Tesla GPU Preconfigured Cluster, aimed at the scientists, engineers, and researchers of the world.
According to Nvidia, the Tesla Cluster will provide up to 30 times the performance of a CPU-only cluster while using only a fraction of the power. One example the company offers to drive this point home is the Corporate and Investment Banking division of BNP Paribas, a French bank, which recently replaced 500 CPUs that consumed 25kW of power with smaller CPU clusters and two Tesla S1070 1U systems that draw just 2kW. Along with the lower power bill, the bank also got better performance.
According to Andy Keane, Nvidia’s Tesla General Manager, “There are 15 to 20 million engineers, scientists and researchers around the world struggling for time on supercomputers, which has led to a huge pent-up demand for computation. With the launch of the Tesla Preconfigured Cluster, every one of them can easily deploy a GPU-powered supercomputing cluster that dramatically reduces their power consumption while still advancing the pace of their work.”