Nvidia: It’s Time to Kill CUDA

19 Comments


kclessonm98

You shouldn't kill CUDA just because it isn't ideal for games, because it ISN'T DESIGNED TO MAKE GAMES. CUDA is for high-performance professional tasks like 3D rendering, visual effects, filmmaking, and 3D animation. Programs like Autodesk Maya, Pixar RenderMan, and Adobe Premiere wouldn't be as powerful as they are without CUDA.


gregsci

IMO, CUDA is a killer: it is a low-level, multi-threaded API, and that means *FAST*. You code almost directly on the graphics card, and it even has its own assembly language for efficiency nerds. If you look at some Direct3D structures and classes you will find a mess of methods and obscure, even inefficient stuff, thanks to the geniuses at MS, which is stupid because the whole idea of an API is to be a thin layer over the hardware/driver that delivers an interface at minimum cost; MS doesn't seem to understand that. CUDA is precisely that: a very thin layer over the graphics card with just the needed functionality. And if that's not enough, it is FREE, extremely well documented, and 100% supported by Nvidia.

This OpenCL, even if it is supported by Nvidia, is not the same thing: it could NEVER be as fast and efficient as a lower-level API like CUDA, which is optimized for its intended purpose: processing on Nvidia cards. The very idea of being a multi-purpose, multi-platform API will make OpenCL inefficient.

Besides, CUDA uses CUDA-C, a slight extension of the C language, so porting CUDA code to another API or solution compatible with C won't be much of a hassle at all. Thanks, Nvidia, for this decision! I don't need to learn some weird language to code for CUDA.
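
For anyone curious just how slight that extension is, here's a minimal sketch (illustrative names only, assuming the standard nvcc toolchain); apart from the __global__ qualifier and the <<< >>> launch syntax, it's plain C:

    // saxpy.cu -- build with: nvcc saxpy.cu -o saxpy
    #include <stdio.h>
    #include <stdlib.h>

    // The kernel: one thread per array element.
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        // Host side is ordinary C.
        float *hx = (float *)malloc(bytes);
        float *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        // CUDA runtime calls move the data to the card and back.
        float *dx, *dy;
        cudaMalloc(&dx, bytes);
        cudaMalloc(&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);  // launch on the GPU

        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", hy[0]);  // expect 4.0

        cudaFree(dx); cudaFree(dy); free(hx); free(hy);
        return 0;
    }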

The only problem is the hardware dependence, but as of Feb/2010 Nvidia owns almost 60% of the market and that share is continually growing, so I don't see it as much of a problem (of course, it WILL be a problem for game companies, but for other purposes, like a design app, it isn't much of a problem at all).

And DirectX 11, with its DirectCompute, has been on the market since 2009 and only about five games have used it. Again, it is a multi-platform API, which means it is not fast enough for a performance-critical domain like graphics programming.

I'll be supporting CUDA and hope it doesn't die anytime soon.


bobdude11

This may just be propaganda, but here is a pretty good article with information about CUDA vs. OpenCL:

http://www.appleinsider.com/articles/08/12/10/nvidia_pioneering_opencl_support_on_top_of_cuda.html

I found it rather interesting, and it looks like most of what OpenCL is designed to accomplish is already in play with NVIDIA GPUs that have CUDA support built in (the 8, 9, and 200 series chipsets).


eyeaethe

Sometimes it is worth it to take a step back and look at the big picture when you come across an issue like this. It really is quite obvious that the only reason CUDA is around is to promote the use of NVIDIA hardware for all applications. Fundamentally there is nothing wrong with this, but when you consider that the vast majority (90%+) of people using discrete GPUs do so for the sole purpose of playing games, it becomes clear that NVIDIA's insistence on pushing CUDA is not going to have a groovy effect on the gaming hardware/software industry.

The real value of CUDA, and indeed its originally intended purpose, is for GPGPU applications in the scientific and design realm. Pressing the technology into the gaming space creates a division in the market that is ultimately going to affect the actual game developers in a negative way. A decision will have to be made to implement either CUDA, which runs on a limited hardware set from one company only and limits the market for the game to people who already have that hardware, or something like OpenCL/DX11, which will run on all hardware platforms and not restrict the intended audience (OR develop for both and increase costs). By stacking PhysX on CUDA, NVIDIA makes the technology exclusive to NVIDIA products, which sells hardware. The PhysX API has been around for almost 3 years now, and for at least 2.5 of those years it existed as a FREE standalone API for developers. You don't need to run PhysX on CUDA to make it exclusive to NVIDIA cards.

Will is right: NVIDIA (and ATI, for that matter) needs to focus on creating killer GPU hardware more than on trying to force largely unnecessary, closed standards into the wrong markets.

Of course, opinionated editorials will be opinionated editorials. We can only hope that the companies move forward with everyone's best interests in mind.


bobdude11

I don't have any hard numbers, so I can't really dispute your claim about the percentages, but I do think you have overlooked the Tesla-core GPUs used in various capacities for science, architecture, etc.

CUDA was originally developed to leverage those types of cards and was backported to the retail GPUs that have recently come out.

It has only been with the introduction of those cards that we have really seen how good gaming can be. And let's not forget that some of the technologies are being ported to consoles (PhysX being one of those; Xbox, I believe).

I feel like NVIDIA and ATI are both focused on killer hardware; hence dual-GPU single-card options like the ATI 4870 X2 (monster FPS), alongside the GTX 285 and 295 (equally monster FPS). These are killer hardware, and the drivers just kick them up a notch.

I hope both continue to do what they have been doing; it's good for the market and REALLY good for the average gamer.


comp_builder

This used to be a great magazine; I even have a subscription. But you have become such a pack of AMD cheerleaders it's unbelievable. 20% of your magazine is giant full-page AMD ads and another 20% is "How to Clean Windows" tips... AGAIN... AMD, Intel, and Nvidia's card architectures will all be different, and having C libraries custom-tuned for each architecture will always be faster than one general language for all. Your solution is to be a jack of all trades and master of none, much like what your magazine has become. You're so afraid of being beholden to any company because they have their own standard that runs best on their specific hardware, but you bend over and take it when Microsoft crams Windows 7 down your throat. You don't get Microsoft's universal solution without buying a new OS. Tons of apps have been modified to use CUDA with astounding speed increases, and there is no way they would be as fast on a generalized GPU solution like OpenCL. Oh, and guess what it cost all those companies to use Nvidia's CUDA libraries and development kit: not a penny. So they're the bad guys...

Keep your tabloid rag; I will not be resubscribing. Check my profile: I am a subscriber and a 43-year-old industry professional who has seen firsthand how amazing Nvidia's CUDA code can be.


QUINTIX256

OK, it has been a long time since I used C. You use the same libraries, from math.h to stdio.h, for AMD and Intel. They have the same SSE instructions. In fact, they share the same x86 instruction set.

If you are talking about the GPUs made by the respective companies, C isn't a good language. There isn't a direct standardized equivalent to, say, float4 in C. All the primitive types in C are scalars. If you want a vector or a matrix, you have to create a struct.
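
Just to illustrate the point (a rough sketch with made-up names): in plain C you end up defining the vector type and every operation on it yourself, e.g.

    /* No built-in 4-component vector type in standard C, so you roll your own. */
    typedef struct {
        float x, y, z, w;
    } my_float4;

    /* Even a simple add has to be spelled out component by component. */
    static my_float4 my_float4_add(my_float4 a, my_float4 b)
    {
        my_float4 r = { a.x + b.x, a.y + b.y, a.z + b.z, a.w + b.w };
        return r;
    }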

As for "all these companies", we are talking about the hardware license, not the software license.

You can have your recession. I'm not participating.


gregsci

The language used by CUDA is CUDA-C, an extension to the C language compatible with some C++ features, and it includes NATIVE (not user-defined struct) data types like int2, float2, and float3.

And yes, C *IS* a good language. It's a shame they didn't go for C++ (though, like I said, some C++ features are in CUDA-C), but I don't see myself learning some weird, obscure syntax like that of Cocoa from Apple just to write code. (WTF, Apple? We devs don't have time to learn Objective-C and stuff like that; every one of us has already mastered C/C++ *sighs*...)
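
For what it's worth, here's a minimal sketch of those types in use (illustrative names only); float3 and the make_float3() helper ship with CUDA's headers, so there's nothing for the programmer to define:

    // Scale an array of 3-D points in place, one thread per point.
    __global__ void scale_points(float3 *points, float s, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            float3 p = points[i];
            points[i] = make_float3(p.x * s, p.y * s, p.z * s);
        }
    }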

 


comp_builder

 

Thank goodness there was a tool like you trolling around to show me the error of my grammatical ways. Nvidia cards are about 60% of the market, with an even higher adoption rate in the technical and scientific communities. So yes, it is "free" to the millions of people who already own their products. So all you and this rag can do is point fingers at a company that obviously spent a lot of time, money, and effort to make their products more valuable and useful, and then gave it away for free. That's really obnoxious; how dare they.

And when your parents lose their house because of the recession and you have to move out of the basement, you will be participating in this recession whether you like it or not...


QUINTIX256

You probably should read more than the subject line before you post. At least then you would have been able to point out my cute little grammatical error.

Troll or not, you are sounding more and more like one of those obnoxious Rambus investors with each post.

You can have your recession. I'm not participating.


vlestat

Hey Will,

I hate to nitpick, but... I find it a bit strange that you cite adoption of rendering technologies as one of the reasons for 3dfx's downfall, specifically antialiasing. Correct me if I'm wrong, but wasn't it 3dfx who originally pioneered antialiasing in their products before anyone else?


Vadi

Microsoft and "open", never mind "standard", don't mix as far as I know.


Avery

Any games out there now that take advantage of CUDA?

I believe Age of Conan, with its DX10 patch that is on the test server and about to hit the live servers in a few weeks, will be the first game to take advantage of it. It is built upon a custom CUDA platform.

I've been testing it out, and from the advantages it looks to me like it is going to be around for a while yet.


decapitor

In response to the high-level/low-level question from the first post: low level is right next to the hardware, and high level would be more like at the OS level, but apparently there are different schools of thought on the matter.

 

http://www.control.com/1026151382/index_html


AndyYankee17

Seems a little bit like Apple's demise, where you could buy one OS (Apple's) that ran on only one company's hardware (Apple's), or buy another OS (MS-DOS) and run it on lots of different hardware.


QUINTIX256

But there was an article claiming that NVidia was willing to license PhysX/CUDA to competitors for "pennies per GPU", making ATI seem stubborn.
 
I think ATI's refusal to take that offer has far more to do with NVidia's past behavior. For two generations of GPUs, NVidia refused to support DX10.1, and there is even a conspiracy theory surrounding Assassin's Creed regarding just that. Also, NVidia does not support programmable tessellation, a feature that has been available since ATI's first DX10 card.

This may sound like tit-for-tat, but "pennies per GPU" are NVidia's words; no one on the outside knows the gritty licensing details. Correct me if I am wrong, but Rambus made similar public claims about licensing costs in order to make the memory manufacturers seem stiff. On top of all this, I do not think anybody pays licensing fees to Microsoft for the DirectX spec.

Also, let us not forget Nvidia's FX series. I like how Wikipedia puts it: "Both Nvidia and ATI have optimized drivers for tests like this historically. However, Nvidia went to a new extreme with the FX series."

Put simply, NVidia has a history of being obnoxious. For that reason I do not think CUDA will go away any time soon.

You can have your recession. I'm not participating.


decapitor

While I would agree that open standards are for the best, and I am excited about OpenCL, it is worth noting that CUDA is a higher-level language than OpenCL, which makes a considerable difference in its intended uses. At the moment I would like to take advantage of the power of these GPGPU languages, but coding for them is still not simple, and I just don't have time to port my fluid dynamics models, which are written in C and use MPI for parallelization, to CUDA or OpenCL. What would truly be amazing is if a company released a compiler that could target OpenCL from existing code written in C, Fortran, etc., while keeping the MPI parallelization intact. If that ever happens, it would be a huge win, especially for the scientific community.
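
In the meantime, the usual hand-written pattern (a rough sketch only, with made-up names, assuming roughly one GPU per MPI rank) is to keep MPI for the domain decomposition and push only the local number-crunching onto the card:

    // mpi_cuda_sketch.cu -- compile with nvcc plus your MPI include/lib flags.
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdlib.h>

    // Toy Jacobi-style relaxation on this rank's local slice of a 1-D domain.
    __global__ void relax(const float *u_old, float *u_new, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i > 0 && i < n - 1)
            u_new[i] = 0.5f * (u_old[i - 1] + u_old[i + 1]);
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, ndev;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        cudaGetDeviceCount(&ndev);
        cudaSetDevice(rank % ndev);              // roughly one GPU per rank

        const int n = 1 << 20;                   // local slice, including halo cells
        size_t bytes = n * sizeof(float);
        float *h = (float *)calloc(n, sizeof(float));
        float *d_old, *d_new;
        cudaMalloc(&d_old, bytes);
        cudaMalloc(&d_new, bytes);
        cudaMemcpy(d_old, h, bytes, cudaMemcpyHostToDevice);

        // MPI still owns the decomposition: halo cells would be exchanged with
        // MPI_Sendrecv between neighbouring ranks each step; only the local
        // relaxation sweep moves to the GPU.
        relax<<<(n + 255) / 256, 256>>>(d_old, d_new, n);
        cudaMemcpy(h, d_new, bytes, cudaMemcpyDeviceToHost);

        cudaFree(d_old); cudaFree(d_new); free(h);
        MPI_Finalize();
        return 0;
    }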


billysundays

Are you sure you got that right about CUDA being a higher-level language than OpenCL? From what I read in this article:

http://arstechnica.com/news.ars/post/20081209-gpgpu-opens-up-with-opencl-1-0-spec-release.html

it would seem OpenCL gives programmers more direct access to the hardware.

 


dimonic

Low level implies greater access to the hardware.

High level implies hardware independence: not having to know the hardware details.
