Nvidia Wants to Be Your CPU, and Intel Wants to Be Your GPU?

Justin Kerr

Followers of the ongoing soap opera between Intel and Nvidia know no love has been lost between the two tech titans over the years. When AMD and ATI merged back in July of 2006, the internet was abuzz with rumors that an Intel/Nvidia merger couldn't be far behind. As time pressed on and that possibility grew increasingly unlikely, a competitive culture took hold between the two companies. The saber rattling has reached deafening proportions of late, and a seemingly endless stream of jabs has dominated the headlines. Any merger pushed through now might require barbed wire to separate the water coolers. Both organizations seem determined to earn a slice of the other's market share, and for once they seem willing to do it the hard way: through innovation. As Intel pushes into accelerated graphics with its Larrabee platform, Nvidia wants us to believe the CUDA API for its graphics cards will allow video accelerators to dominate the CPU.

What Is CUDA?

CUDA, like any programming interface, is an API that lets developers create applications targeted specifically at execution on the GPU. Since much of the knowledge needed to program with the API already exists within the C programming community, Nvidia hopes the low barrier to entry will encourage developers to make use of it. Back when CUDA was first announced for the 8800 series in 2007, many pegged it as little more than a marketing ploy. But as Nvidia continues to update its SDK and early tests show impressive results, it seems clear the GPU may finally be powerful enough to provide more than just high gaming frame rates and the odd translucent window in Vista. Given that Nvidia's newest GeForce GTX 280 sports about 1.4 billion transistors, almost 70 percent more than Intel's latest Penryn processors, perhaps it's about time we put all that extra power to work.
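To give a sense of why C programmers find the barrier to entry low, here's a minimal sketch of what CUDA code looks like: a hypothetical vector-add kernel (not taken from Nvidia's SDK samples) where each GPU thread handles one element of the arrays.

```cuda
#include <cuda_runtime.h>

// Kernel: runs on the GPU; each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;               // one million elements
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;

    // Allocate buffers in GPU memory.
    cudaMalloc((void **)&a, bytes);
    cudaMalloc((void **)&b, bytes);
    cudaMalloc((void **)&c, bytes);

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();             // wait for the GPU to finish

    cudaFree(a);
    cudaFree(b);
    cudaFree(c);
    return 0;
}
```

Aside from the `__global__` qualifier and the `<<<blocks, threads>>>` launch syntax, it reads like ordinary C, which is exactly the pitch Nvidia is making to developers.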


Image Credit: Tom’s Hardware

Will CUDA Win The Day?

When established companies build new technologies into everyday components, people tend to take notice, because adoption is almost guaranteed. This completely bypasses the chicken-and-egg situation companies like Ageia faced prior to their acquisition by Nvidia. Ageia's dedicated physics hardware was doomed to fail because consumers won't buy hardware without an application, and software developers won't write code for nonexistent hardware. With CUDA-capable GPUs already seeded to more than 70 million machines worldwide, developers have an established audience to work with. The biggest drag on the technology's development will be the proprietary nature of the SDK. It should be no surprise that ATI is working on a competing offering of its own, and without standards, software developers are unlikely to exclude a large percentage of their customer base by focusing on hardware from a single vendor. Until that day comes, it seems unlikely we will reap the promise of APIs such as CUDA unless the OS manufacturers can impose some kind of standard. That may be more difficult than it sounds, with the glory days of Microsoft and its ability to impose standards long behind it. The future of CUDA certainly looks promising, but the road is paved with obstacles that may prove to be ultimately beyond Nvidia's control.
