Nvidia Comments on Recent Larrabee Buzz

jwalch.hawk

'Cause it's cold.  Get it? Heh, yeah, it was lame...  Anyways...

I'm inclined to take nVidia's side on this one.  Kinda.  From a pragmatic standpoint, they're in the right here.  Compilers are ultimately a translation tool between a higher-level language (i.e., C, Java, and so on) and a lower-level one (like machine code or assembly).  It's not like the x86 processor executes your C++ code; it executes the machine code your C++ code got translated into.  With that in mind, I see absolutely nothing preventing nVidia (well, besides maybe legal issues or something) from developing a special compiler that takes your C++ code and translates it into something their GPUs like rather than something an x86 likes.  It's just changing the translation target, right?
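To make that concrete, here's a rough sketch (my own illustration, not anything nVidia has actually shipped): the exact same C function can be aimed at different hardware just by pointing the compiler at a different backend.  The clang --target triples are real; the GPU backend at the end is pure assumption.

/* saxpy.c -- plain C, nothing target-specific in it */
void saxpy(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Same source, different machine code -- purely the compiler's choice:
 *   clang --target=x86_64-linux-gnu  -S saxpy.c   (x86-64 assembly)
 *   clang --target=aarch64-linux-gnu -S saxpy.c   (ARM64 assembly)
 * A hypothetical GPU-targeting compiler would just be a third entry on
 * that list, emitting GPU instructions instead of CPU instructions. */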

That said, Intel does make a valid point that x86 is an advantage: we already know how to make all the high-level languages everyone loves work on it.  That would let applications that don't hit the GPU hard today run more of their code on the GPU, because porting would be so easy.

But if nVidia says that CUDA is basically just a glorified compiler for the C that so many people already use... then what's the difference?
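For what it's worth, that "glorified compiler" framing isn't far off.  Here's a minimal sketch of the same kind of loop as a CUDA kernel; the function names are mine, and the pointers are assumed to already live in GPU memory (allocated with cudaMalloc):

#include <cuda_runtime.h>

/* Reads like C; __global__ and the thread-index math are the
 * CUDA-specific bits.  nvcc translates this to GPU machine code the
 * same way gcc translates C to x86 machine code. */
__global__ void saxpy_kernel(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

/* Host-side launch: 256 threads per block, enough blocks to cover n.
 * x and y must be device pointers. */
void saxpy_gpu(int n, float a, const float *x, float *y) {
    saxpy_kernel<<<(n + 255) / 256, 256>>>(n, a, x, y);
    cudaDeviceSynchronize();
}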

Are Intel and nVidia both ultimately just swapping out different links in the chain underneath essentially the same high-level language?

Damn, I just talked myself in a circle and got nowhere fast.
