Open, Closed, Proprietary ... Sorting out the Confusion

Over the past few months, we've seen plenty of confusion over the directions NVIDIA and AMD are taking with respect to GPU computing. This isn't helped by AMD or NVIDIA themselves, as both tend to tout the advantages of their own approach and the disadvantages of the other guy's take on it.

AMD and its supporters tend to claim that NVIDIA's CUDA is not optimal because it is not an open standard, and that AMD supports openness because its solution (Brook+) is open source. But Brook+ isn't an open standard either: it was developed at Stanford University and hasn't been standardized. While the source for the Brook+ compiler is available, it would take a large investment to retool it for NVIDIA hardware, and even then you'd need to build different versions of a program for AMD and NVIDIA platforms. The original GPGPU-era Brook is a different story, as it generated OpenGL code to do the GPGPU work, but modifying it to generate CAL code leaves it neither interoperable nor particularly open or standard, at least as those terms are used when talking about languages, APIs, and interoperability.
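
To make the distinction concrete, here is a minimal sketch of the Brook/Brook+ stream model (the saxpy kernel and run_saxpy wrapper are our own illustrative names, and error handling is omitted). Streams are declared with angle brackets and the kernel body runs once per stream element; crucially, it is the Brook compiler that decides which backend code (OpenGL for the original Brook, CAL for Brook+) gets generated, which is why the same source has to be rebuilt per vendor.

    // Brook-style kernel: executes once per element of the input streams.
    kernel void saxpy(float a, float x<>, float y<>, out float result<>) {
        result = a * x + y;
    }

    // Host-side sketch: data moves between plain C arrays and GPU streams.
    void run_saxpy(float a, float x_in[100], float y_in[100], float out[100]) {
        float x<100>, y<100>, r<100>;  // fixed-size streams, for illustration
        streamRead(x, x_in);           // copy host arrays into stream memory
        streamRead(y, y_in);
        saxpy(a, x, y, r);             // compiler-generated backend code runs here
        streamWrite(r, out);           // copy results back to host memory
    }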

NVIDIA isn't much better, though. They tend to act as if anything AMD does is a copy of NVIDIA's work and amounts to nothing, because CUDA for C is the gold standard for GPU computing and AMD doesn't have it. That just isn't the case. In fact, AMD started demonstrating concerted efforts to advance GPU computing before we saw anything from NVIDIA, and in much more interesting ways.

With R580, AMD (then ATI) actually published part of its ISA and called the initiative CTM (for Close to Metal). Before we had a beta version of CUDA, we had GPU-accelerated Folding@home on R520 and R580. Beyond that, CUDA for C has done really well in the HPC (high performance computing) space, but it hasn't caught on in the consumer space. Neither AMD nor NVIDIA has a viable consumer-oriented solution for GPU computing.

So NVIDIA has the HPC market with CUDA and has gotten some universities to start teaching data-parallel programming using CUDA for C. AMD could invest in the CUDA for C language and create its own compiler (nothing is stopping it), but then you'd still have the same interoperability problem you would have if NVIDIA implemented Brook+. If NVIDIA or AMD wanted to make its solution work with the other guy's hardware, it would need to write a wrapper to translate CAL to PTX or PTX to CAL. Or we could go a different direction and build an industry-standard virtual ISA for data-parallel architectures, though I doubt that effort would ever take off.

So the bottom line is that both AMD and NVIDIA support a proprietary solution (Brook+ and CUDA for C, respectively) and an open standard one (OpenCL). There are further differences between Brook+ and CUDA, but the important part is that these proprietary solutions are never going to produce one binary that runs on both AMD and NVIDIA hardware, both because of the approaches used and because AMD and NVIDIA aren't going to work closely enough to make something like that happen, at least in the foreseeable future.
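
For comparison, here is the same computation as the earlier Brook sketch, written against CUDA for C (again only a sketch, with our own function names and no error checking). NVIDIA's nvcc toolchain compiles the kernel to PTX, NVIDIA's virtual ISA, which is exactly why the resulting binary has no path onto AMD hardware.

    #include <cuda_runtime.h>

    // CUDA for C kernel: each thread computes one output element.
    __global__ void saxpy(float a, const float *x, const float *y,
                          float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = a * x[i] + y[i];
    }

    void run_saxpy(float a, const float *x_in, const float *y_in,
                   float *out, int n) {
        float *x, *y, *r;
        size_t bytes = n * sizeof(float);
        cudaMalloc((void **)&x, bytes);  // device-side allocations
        cudaMalloc((void **)&y, bytes);
        cudaMalloc((void **)&r, bytes);
        cudaMemcpy(x, x_in, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(y, y_in, bytes, cudaMemcpyHostToDevice);
        int block = 256;
        int grid = (n + block - 1) / block;
        saxpy<<<grid, block>>>(a, x, y, r, n);  // nvcc lowers this launch to PTX
        cudaMemcpy(out, r, bytes, cudaMemcpyDeviceToHost);
        cudaFree(x); cudaFree(y); cudaFree(r);
    }

The Brook and CUDA kernels express the same work, but each source file only makes sense to its own vendor's compiler, and each compiled result only runs on its own vendor's hardware.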

OpenCL, on the other hand, offers developers the ability to write an application once, compile it once, and expect it to run on all major GPU hardware, something that could never happen with either CUDA for C or Brook+.
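
The mechanism behind that portability is worth spelling out: an OpenCL application ships its kernels as plain source strings, and whichever vendor's OpenCL driver happens to be installed compiles them at runtime. Below is a minimal sketch of the same computation one more time (function names are our own, error handling and resource cleanup omitted); note that nothing in it mentions AMD or NVIDIA.

    #include <CL/cl.h>

    // The kernel travels with the application as source and is built at
    // runtime by the installed vendor's OpenCL compiler.
    static const char *src =
        "__kernel void saxpy(float a, __global const float *x,           \n"
        "                    __global const float *y, __global float *r) \n"
        "{                                                               \n"
        "    int i = get_global_id(0);                                   \n"
        "    r[i] = a * x[i] + y[i];                                     \n"
        "}                                                               \n";

    void run_saxpy(float a, const float *x_in, const float *y_in,
                   float *out, size_t n) {
        cl_platform_id plat;
        cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);  // first platform: AMD, NVIDIA, ...
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

        // Runtime compilation: the vendor's own compiler takes over here.
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "saxpy", NULL);

        size_t bytes = n * sizeof(float);
        cl_mem x = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                  bytes, (void *)x_in, NULL);
        cl_mem y = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                  bytes, (void *)y_in, NULL);
        cl_mem r = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, bytes, NULL, NULL);

        clSetKernelArg(k, 0, sizeof(float), &a);
        clSetKernelArg(k, 1, sizeof(cl_mem), &x);
        clSetKernelArg(k, 2, sizeof(cl_mem), &y);
        clSetKernelArg(k, 3, sizeof(cl_mem), &r);
        clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, r, CL_TRUE, 0, bytes, out, 0, NULL, NULL);
    }

The clCreateProgramWithSource/clBuildProgram pair is where vendor-specific compilation happens; the rest of the code is identical no matter whose GPU is in the machine.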

Comments

  • melgross - Thursday, January 1, 2009 - link

    It's interesting that while ATI and Nvidia are heavily mentioned along with their rapidly depreciating standards, Apple, which, after all, developed OpenCL, isn't mentioned even once, though it will also likely be the first to implement OpenCL, in 10.6 later this year, possibly by March. Even their logo isn't shown. Very strange!
  • Wwhat - Monday, January 5, 2009 - link

    By March they might (and should) not be the first; graphics card makers should have updated their drivers to support it already. After all, they were well aware of OpenCL long beforehand and had already announced they would support it, and nvidia said that porting to it would be easy. Plus, both ATI and nvidia have no problem at all releasing unstable software/drivers, none at all, as we have all experienced.
    Oh, and nvidia had an OpenGL 3 driver out about two days after the final spec, and ATI in a few weeks, so that makes you think they can put some steam behind their efforts if they want to.
  • dvinnen - Thursday, January 1, 2009 - link

    The logo picture was taken from their site.
  • rdbrown - Friday, January 2, 2009 - link

    On the Khronos website, right above the "Logos," Apple is the one who initially proposed the working group; Apple is also mentioned in the list of companies. They must not have posted Apple's logo, knowing that everyone who knows anything about OpenCL knows that it is Apple's technology. Heck, Apple even owns the trademark rights.
  • melgross - Thursday, January 1, 2009 - link

    At least they should have been mentioned in the article.
  • yyrkoon - Thursday, January 1, 2009 - link

    And to say what? That Apple, feeling left out in the cold, has made efforts to take the next obvious step and standardize GPU processing (very late in the game)? That is, assuming what you're saying is true.

    Gee, how very innovative of them.
  • hakime - Saturday, January 3, 2009 - link

    Shut up, you are trolling!! You don't know what you are talking about, period.

    The fact that there is no reference to Apple in the article is a serious drawback. Apple invented and designed OpenCL as much as SGI invented and designed OpenGL; ignoring that is simply wrong. Give credit where it is deserved, and Apple deserves the credit for inventing OpenCL; you have to admit it whether you like Apple or not.

    Apple has turned the HPC industry upside down with OpenCL: for the first time there is a single state-of-the-art API and environment for high-performance, multi-core, and GPU programming that is also OS- and hardware-independent. OpenCL goes well beyond DirectX, as the latter is not only limited in what you can do for GPGPU, it is also designed only for the GPU (Microsoft is very late to the world of GPGPU; Apple has been targeting the GPU for high-performance processing for a while now with Core Image and Core Video).

    OpenCL offers a single interface for both CPU and GPU, which in other words means it brings together different technologies like OpenMP and CUDA. This is unique in the industry, and Apple deserves the credit for having created this single interface.

    OpenCL is designed to target a large set of devices like CPUs, GPUs, Cell chips, and DSPs; DirectX can't do that. OpenCL targets small form factor devices like the iPhone; DirectX does not and cannot.

    Not only does the author of the article fail to recognize this unique aspect of OpenCL, he also fails to comment on the effort Apple made in creating OpenCL. Again, whether you like Apple or not does not matter; give credit where it is deserved and get the facts right.

    Please correct the article and make it more informative about what OpenCL is really for, not the general blah, blah that is written.

    Thanks.
  • ltcommanderdata - Thursday, January 1, 2009 - link

    Which part isn't true? That Apple developed OpenCL and then submitted it to Khronos? Even Khronos admits that is true.

    http://www.khronos.org/news/press/releases/khronos...

    "Apple has proposed the Open Computing Language (OpenCL) specification to enable any application to tap into the vast gigaflops of GPU and CPU resources through an approachable C-based language."

    Apple's Aaftab Munshi was also the chairman of the OpenCL working group.

    And how is OpenCL late in the game? I'm pretty sure that DirectX 11 is the only standardized GPGPU implementation across multiple vendors, and it's still in beta. In comparison, OpenCL has been ratified, in record time compared to OpenGL 3.0, probably due to Apple's pressure to get it ready for Snow Leopard. And nVidia has already released OpenCL drivers for Windows and Linux.

    http://developer.nvidia.com/object/opengl_3_driver...
  • yyrkoon - Thursday, January 1, 2009 - link

    Oh, and sorry, my original point was something like this: while the truly innovative companies are squabbling about whose product is superior, Apple sneaks up behind them and claims to have invented the internet. In other words, whether Apple participated or not, an open standard would have been made.
  • melgross - Friday, January 2, 2009 - link

    You're not very knowledgeable. You ARE very anti-Apple, apparently.

    And why do gamers have to be the parties that benefit most? What's so great about gaming? Besides, OpenCL will benefit them as well as parties that won't be benefited by DirectX. Is that a bad thing? To you, it seems to be.

    If MS had developed this, you would be jumping up and down and claiming that it was the next step beyond the now-old DirectX methodology, and far more useful.

    Like it or not, this IS a major innovation; otherwise, so many companies of note wouldn't be signing on so quickly.

    Whether Windows users benefit from this or are left out of it is up to MS, which seems only interested in destroying standards that don't result in MS's increasing dominance. Too bad for them! That doesn't work too well anymore.

    You know nothing about innovation at all. That's sad. Just go on being blinded by your prejudices; we all see it for what it is.
