AMD’s Plan of Attack

So how does AMD take back a larger share of the professional graphics market? The short answer is that there is no easy solution.

Traditionally AMD has underpriced NVIDIA on comparable hardware, and while that’s easy to do it only works to a certain extent, as showcased by AMD’s continuing sub-20% market share.

Professional graphics buyers, as it turns out, are not nearly as sensitive to price as consumers, particularly when it comes to something as relatively cheap as a video card. Depending on the job at hand a video card may be a fraction of the cost of lost time, so if buyers believe they’re getting something that is going to be more compatible or more reliable, then they can usually justify the additional cost for what’s roughly the same level of performance.

This is not to say that AMD’s products are unreliable or incompatible, but it means that for the professional graphics market perception is reality. And that perception is largely being driven by NVIDIA, who by all accounts is extremely good at product marketing and promotion and combining that marketing message with solid products. Even if AMD has a superior product they still need to counter NVIDIA’s marketing and their momentum, and that can’t be accomplished just by beating NVIDIA’s price/performance ratio.

As a result AMD has had to learn to play NVIDIA’s game. Realistically, AMD can’t match NVIDIA’s marketing muscle right away (this is where having all the profits confers an advantage), but they can match elements of NVIDIA’s winning strategy.

The first and most important part of that strategy is to continue to improve their validation and certification process. Most of their customers are buying professional video cards to run very specific and very important applications like AutoCAD, Creo Parametric, CINEMA 4D, and of course Adobe's Creative Suite. They aren't necessarily using AMD's products because they particularly like AMD, but rather because AMD provides the best tool for the job. As a result ISV certifications are essential here, which requires AMD to be proactive in reaching out to ISVs and quick to fix any bugs keeping them from being certified. The more software they can get certified, the wider the market they can sell to.

As AMD has learned however, being proactive doesn't just mean getting ISV certification, but also directly working with those ISVs. NVIDIA's work with Adobe on the Mercury Playback Engine for Adobe Premiere Pro CS5 not only earned a lot of press for NVIDIA, but it made their Quadro cards the product to get for serious Premiere users. A well-planned partnership benefits both partners, with the ISV gaining the expertise of the GPU vendor and the GPU vendor gaining sales from the users of that software.

To that end, AMD’s big partnership right now is with PTC, who is responsible for a number of CAD programs including Creo Parametric. Like most professional software ISVs, PTC has taken a very conservative stance towards adopting new technologies, meaning their software is slow to make use of new GPU features. In a move similar to NVIDIA’s Adobe partnership, AMD has formed a partnership with PTC to improve Creo Parametric in return for exclusive feature rights for a time.

AMD has implemented Vertex Buffer Object (VBO) support and Order Independent Transparency (OIT) support in Creo Parametric, greatly speeding up the software in some cases. In return AMD has a one-year exclusive on OIT functionality, which means that users who want to take advantage of it would need to use AMD video cards. Whether this partnership will significantly benefit AMD remains to be seen, but at the very least it's exactly the kind of thing they need to be doing to improve their standing and their sales in the professional graphics community.
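For readers unfamiliar with the feature, a VBO lets an application keep vertex data resident in GPU memory instead of re-sending it from the CPU every frame, which is why it matters for the large models typical of CAD work. The following is a minimal OpenGL sketch of the idea, not a fragment of Creo's actual code, and it assumes a valid OpenGL context and a `vertices` array already exist:

```c
/* Illustrative only: assumes a current OpenGL context, plus a
 * `vertices` array of vertex_count xyz float triples. */
GLuint vbo;
glGenBuffers(1, &vbo);                 /* allocate a buffer object */
glBindBuffer(GL_ARRAY_BUFFER, vbo);

/* Upload the model's vertices to GPU memory once at load time... */
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

/* ...then every frame draw from the resident copy, instead of
 * streaming each vertex across the bus with glVertex* calls. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexPointer(3, GL_FLOAT, 0, (void *)0);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_TRIANGLES, 0, vertex_count);
```

The win is that the per-frame cost becomes a handful of calls rather than one call per vertex, which is exactly the kind of speedup AMD is claiming for large assemblies.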

Of course ISVs are not the only partners a GPU vendor wants to have, as on the other end of the spectrum you have the hardware. Professional graphics customers buy hardware based on its performance and compatibility with the applications they use, but those same customers are generally looking to buy entire systems, not individual video cards. This makes OEM partnerships the other important partnership for AMD to work on.

AMD already regularly works with OEMs in order to integrate their consumer products, so this generally isn’t a matter of forging new partnerships, but rather making the best use of the partnerships they already have. Here AMD needs to approach OEMs in order to get their FirePro cards certified in the appropriate workstation models, and then actually land design wins for those products. For the W series AMD has already certified with and landed design wins with both Dell and HP. Specifically AMD has design wins with both companies for the W5000 and W7000, while they also have a design win with Dell on the W8000. Meanwhile the W9000 isn’t up for any design wins due to its very high price and low volume, but AMD has told us that they have the option of paying for certification themselves, at which point it would be offered as an optional upgrade for equally high-end workstations.

Bringing things to a close, AMD’s plan of attack has one final plank: compute & OpenCL evangelism. AMD likes to say that they’ve bet the company on OpenCL, and while that’s a bit exaggerated it isn’t too far from the truth. Compute performance is an important aspect of the professional graphics market (NVIDIA has proven that much), and because NVIDIA has their own proprietary compute API it’s not enough for AMD to just have good compute performance – they need to actually convince developers to use OpenCL as opposed to CUDA.

This has been an ongoing process for AMD, and unfortunately it’s one where it’s hard to gauge the results. Over the years AMD has introduced a number of new tools for OpenCL development, and though OpenCL is an open standard officially controlled by the Khronos consortium, in the public eye AMD is by far the most active proponent of OpenCL.

At this point in time AMD believes they’ve finally turned the corner on OpenCL, both in terms of general adoption and in eating into CUDA’s marketshare. Technically speaking OpenCL adoption is always increasing, but the lag time can be quite long between when an ISV announces they’ll be using it and when they actually release products that meaningfully use it. Only in the last year or so have products making meaningful use of it finally shipped, including the Adobe Creative Suite and various Autodesk products. This mirrors a general trend we’ve been seeing on the consumer desktop, where applications like WinZip and Handbrake are also finally making meaningful use of OpenCL.

As for stealing market share from NVIDIA's CUDA, that ends up being a bit more nebulous. Research by third-party firms such as Evans Data Corp has OpenCL already beating CUDA, but because we don't have access to the details of that research there's not a lot we can say there. Programming language & API usage has always been difficult to estimate since it requires developers to volunteer information, which is good enough for measuring general trends but is often not good enough for measuring specific numbers. AMD has certainly turned some CUDA developers over to OpenCL, but judging from their respective conferences (GTC and AFDS) and the presentations given, CUDA is still alive and well, and it's by no means clear whether OpenCL has overtaken CUDA. But much like the overall OpenCL adoption rate this is clearly improving.

Finally, it goes without saying that while software, tools, and marketing play a big part of AMD's OpenCL strategy, hardware is quite often the biggest part of the equation. To that end AMD will be the first to tell you that their previous VLIW architecture wasn't particularly well suited for compute tasks. VLIW could do very, very well in true brute force operations (e.g. password cracking), but more complex programs mapped poorly to the underlying architecture. So developers looking to write complex compute applications would often find themselves writing for NVIDIA hardware, at which point there's a strong incentive to write those programs in CUDA in the first place.

The solution to that problem was for AMD to turn to an entirely different architecture, which was introduced last year as Graphics Core Next. For AMD GCN provides the final piece of the puzzle, bringing together all of their other efforts with a high performance compute architecture. With GCN AMD can finally offer the hardware necessary to rival NVIDIA’s own compute performance, which in turn makes OpenCL more attractive to developers even if they aren’t necessarily intending to target AMD’s hardware right away, as it’s now a viable alternative.

Comments

  • johnthacker - Tuesday, August 14, 2012 - link

    The W7000 has some uses in specific situations, but that's because it's a single slot card. Single slot Radeon HD 7850s (much less 7870s), which also use Pitcairn, are difficult to find; there was one OEM that showed off a design IIRC. Other than that it's hard to see exactly when someone would want these cards.

    The same generally holds for NVIDIA (the Fermi Quadro cards are cut-down GF100 based, so they can be better at compute than their gaming numbers suggest, and the old Quadro 4000 is a single slot card.) Interesting that NVIDIA so far is trying to reserve GK110 for Tesla and Quadro only. We'll see if that works.
  • Dribble - Tuesday, August 14, 2012 - link

    AMD doesn't just need to be compatible, they need to be better.

    Bottom line is companies won't change gpu manufacturer. Nvidia works well, has traditionally worked better than AMD and they still have much better driver support (team is much bigger).

    There are no AMD fanboys rooting for the underdog, you have to provide a clear business reason to change, and "we're almost as good as Nvidia sometimes" isn't it.
  • CeriseCogburn - Wednesday, August 29, 2012 - link

    Compute baby ! amd compute ! compute ! GPGPU ! amd wins! amd wins!
    (that's all I've heard since 7000 series)
    Hey, winzip man.
  • Pixelpusher6 - Wednesday, August 15, 2012 - link

    I really have to question AMD's move here to kill off Firestream and have the FirePro line serve both markets. At the present time this is where they have an advantage on Nvidia. 1 TFLOPS of double precision performance is pretty good even though it's only 1/4 of single precision, even clocked low enough to be passively cooled it should still beat Nvidia's best compute card. K10 is not really a compute card at all and to me it seems like they just wanted to get something, anything out until K20. And K20 is by no means a certainty for Q4 2012, my guess is it will be delayed. I just don't have confidence in Nvidia's mastering of the 28nm process yet, especially given the enormous die size of this chip which I've heard presents some unique challenges. And when K20 does come out it will probably be more expensive than their current compute cards.

    If I were AMD I would re-brand the compute card, drop the Firestream name because of its association with VLIW, and come out with a new brand to highlight what really makes up GCN...Compute. Does anyone know if HPC clusters use actively cooled cards or only passively cooled? I was under the impression that compute cards generally were clocked a little lower but passively cooled. If that is the case then that rules out using the FirePro W9000 and W8000 in these server clusters. It seems like AMD just conceded this market completely when they finally have a competitive compute GPU to gain a foothold. As someone else noted this market will only be expanding. If AMD wants to only focus on professional graphics I sure hope their drivers will be better than the consumer counterparts.
  • dtolios - Wednesday, August 15, 2012 - link

    When will AMD start improving compatibility with VRay RT and other similar OpenCL apps? All this computational potential remains unused outside benchmarks - at least for the CG world.

    Radeons are vastly better in OpenGL than GeForce cards, so the switch to FirePros is way less "mandatory" for such apps. If those driver issues were solved, AMD would secure a huge increase of share in the professional CG market, which now uses nVidia (yes, mostly gaming cards) almost exclusively.
  • AG@FirePro - Monday, August 27, 2012 - link

    You might imagine that it's in AMD's best interest to work very closely with all the important ISVs in this space - and you would be right! :) Helping our technology partners and the broader software development community implement open-standards-based GPU acceleration in their applications is an area of heavy ongoing investment for us.

    Of course, not all apps are written the same. Some applications (especially those written in years past) are architected in a way that makes it challenging to enable the best performance across all the modern GPU options on the market. Proprietary or "hybrid" codebases often make full cross-compatibility quite difficult. I can assure you that neither ISVs nor end-users want their toolsets to be tied to a particular hardware vendor or proprietary technology base. Unfortunately, it's not always as easy as flipping a switch and sometimes this takes a while. That said, I think it's fair to say that our aim is that very soon, everybody will have the option to run the hardware of their choice in conjunction with their favorite realtime raytracer, physics solver or any other hardware-accelerated toolset.

    AMD FirePro cards fully support OpenCL in both hardware and software. Our devices offer certified and acknowledged compatibility and killer performance for a broad range of OpenCL-based applications. The same is true for tons of applications accelerated under OpenGL, DirectX and DirectCompute APIs. Compatibility and reliability are crucial. Nobody understands this more than us.

    To this end, we continue to be closely aligned with all the key ISVs in the M&E, DCC and CAD space to help them provide maximum flexibility, choice and value for their end-users. We also continue to refine and expand our range of developer tools (profilers, compilers, debuggers, etc.) while at the same time contributing heavily to the open-source community in the form of optimized libraries and other free developer resources.

    The OpenCL story gets better every day. Every day, there are new and better OpenCL libraries being written and shared. There are new compiler optimizations being made all the time which allow for faster and more flexible implementations. More and more software devs are liberating their code and their customers from proprietary APIs. While CUDA-bound apps still provide lots of value for many end-users, the writing is clearly on the wall. The age of proprietary GPU acceleration has begun to yield to a new reality of flexibility and choice for consumers.

    This is a good thing, no?

    *PS* You may have noticed an announcement about certain new server-side GCN-based FirePro GPU offerings today. Stay tuned. Things are about to get seriously fun up in here.

    Adam G.
    AMD FirePro Team
  • CeriseCogburn - Wednesday, August 29, 2012 - link

    It's not a good thing because it has not happened, and it doesn't appear it will even in the next decade.
    It's still proprietary, and is not cross card company compatible, so it's all crap talk.
    As we saw amd do after their years of PR bashing, WinZip PROPRIETARY.

    It's gonna be all seriously vendor specific up in there for a long, long time.
  • warpuck - Sunday, August 19, 2012 - link

    Does this mean I won't need 2 PCs? One for games and another for graphics. I did notice what appears to be a CrossFire connector. I know most companies would not go for a PC configuration like that, unless it was in the boss's office. I am one of those independents. I like taking a break when I feel like it. Not having 2 PCs would simplify things for me.
  • peevee - Friday, August 24, 2012 - link

    OK, "later this week"? In the review written 8/14. "This" week ended, then "next" week ends today...
  • Death666Angel - Tuesday, August 28, 2012 - link

    Hey!
    No problem about the inconsistent data, but maybe you can present it in a more accurate way? Currently the interval of the X axis is not to scale and the line through the data points makes it seem as though you know the way it progressed in between the data points. I'd rather make a simple bar chart with the intervals showing correctly. It would be a more honest and easy to read diagram. :)
    Great article though! :D
