Vega Frontier Edition’s Target Market: AI, Machine Learning, and other Professionals

As important as the Vega hardware itself is, for AMD the target market for that hardware is equally important, if not more so. Vega is the company’s first new high-end GPU in two years, and it arrives at a time when GPU sales are booming.

Advances in machine learning have made GPUs the hottest computational peripheral since the x87 floating-point co-processor, and unfortunately for AMD, they’ve largely missed the boat on this so far. Competitor NVIDIA has vastly grown their datacenter business over just the last year on the back of machine learning, thanks in large part to the task-optimized capabilities of the Pascal architecture. Most important of all, these machine learning accelerators have been highly profitable, fetching high margins even when the cards are readily available.

For AMD then, Vega is their chance to finally break into the machine learning market in a big way. The GPU isn’t just a high-end competitor; it also offers high-performance FP16 and INT8 modes that earlier AMD GPU architectures lacked, and those modes are in turn immensely beneficial to machine learning performance. As a result, for the Vega Frontier Edition launch, AMD is taking a page from the NVIDIA playbook: rather than starting off the Vega generation with consumer cards, they’re going to launch with professional cards for the workstation market.
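To put some numbers behind why those reduced-precision modes matter, below is a minimal, illustrative NumPy sketch (not anything from AMD’s software stack; the matrix size and quantization scheme here are arbitrary) of the basic trade-off that makes FP16 and INT8 attractive for machine learning inference: narrower datatypes mean less memory to move per parameter, and hardware that can pack multiple FP16 or INT8 operations into the slot of a single FP32 operation gets a corresponding boost in peak throughput, usually at an acceptable cost in accuracy.

```python
# Minimal, illustrative sketch of reduced-precision inference (NumPy only).
import numpy as np

rng = np.random.default_rng(0)
weights_fp32 = rng.standard_normal((1024, 1024)).astype(np.float32)
activations = rng.standard_normal(1024).astype(np.float32)

# FP16: half the bytes per weight; typically accurate enough for inference.
weights_fp16 = weights_fp32.astype(np.float16)

# INT8: quantize with a simple per-tensor scale, compute, then dequantize.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)

reference = weights_fp32 @ activations
dequantized = (weights_int8.astype(np.float32) * scale) @ activations

print("FP32: %.1f MB, FP16: %.1f MB, INT8: %.1f MB" % (
    weights_fp32.nbytes / 1e6, weights_fp16.nbytes / 1e6, weights_int8.nbytes / 1e6))
print("max abs error from INT8 quantization: %.4f" % np.abs(reference - dequantized).max())
```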

To be sure, the Radeon Vega Frontier Edition is not officially branded as a Pro or WX series card. But in terms of AMD’s target market, it’s unambiguously a professional card. The product page is hosted on the pro graphics section of AMD’s website, the marketing material is all about professional uses, and AMD even goes so far as to tell gamers to hold off for cheaper gaming cards later on in their official blog post. Consequently, the Vega FE is about the closest analogue AMD has to NVIDIA’s Titan series cards, which, although gaming-capable, have in the last generation become almost exclusively professionally focused.

AMD launching a new GPU architecture in the professional space first is a very big deal. Simply put, the company has never done it before. Fiji, Hawaii, Tahiti, Cayman, Cypress, and more all launched in consumer cards first. The conventional wisdom here is that launching in the consumer space first allows consumers to get their hands on the cards now, while professional products undergo further validation and refinement to meet the higher standards of professional users. Put another way, consumers serve as the final layer of debugging for a new GPU, offering mass testing unlike anything else. So for AMD to launch in the pro market first indicates that they have a great deal of faith in the product.

As for why AMD would want to do this, the following AMD slide says it all.

Simply put, professional cards sell for higher prices than consumer gaming cards, sometimes significantly so. As a result, it makes all the sense in the world for AMD to sell their first Vega cards to professional users who are willing to pay $1000+ for a compute card, as opposed to consumers who would like to pay half that. More than anything else, AMD’s overall lack of profitability has come from a lack of high-margin parts to help offset their ongoing operational costs, and launching Vega as a pro card is one of the steps AMD is taking to correct that.

Pro users with sufficiently deep pockets, then, will be the first to get a crack at AMD’s latest high-end video card/accelerator. AMD calls this line of cards the Frontier Edition, and while the name is clearly AMD being cheeky towards NVIDIA’s Founders Edition line, the analogy isn’t completely off-base. AMD’s target market is machine learning developers, game developers, and others who AMD believes need early access to the cards for future development. The advantage of this route is that, particularly in the case of machine learning, it allows developers to get a jump on testing a new architecture ahead of placing a large order for server cards. So in a sense, one of the roles of the Vega FE is to prime the pump for selling Radeon Instinct MI25 cards later in the year.

As for consumers, while this is as big a change for them as it is for AMD, it’s likely a sign of what to expect from future high-end GPU launches. For AMD gamers who have been holding out for Vega, it’s clear that they’ll have to hold out a bit longer. AMD is developing traditional consumer gaming cards as well, but by asking gamers to hold off a little while longer when the Vega FE itself isn’t launching until late June, AMD is signaling that we shouldn’t expect consumer cards until the second half of the year.

Wrapping things up, it’ll be very interesting to see how this strategy goes for AMD. NVIDIA has been very successful in the machine learning market over the last year, and if AMD can replicate NVIDIA’s success, not only will they make the machine learning market far more competitive for everyone, but they also stand a very good chance of finally turning the corner on both profitability and their overall share of the HPC market.

Comments

  • BurntMyBacon - Thursday, May 18, 2017 - link

    I suppose it's not as obvious to me as to you. The article doesn't present any benchmarks, much less gaming benchmarks. So I ask again, based on what premise?

    Theoretical (max) single/double/half precision TFLOPS, perhaps; a rough sketch of where those numbers come from is at the end of this comment. Historically these numbers haven't been very useful for comparing gaming performance between vendors.

    Memory bandwidth - suffers the same problem as above.

    Pixel fill rate - only a small part of the story that may be more or less useful depending on application.

    Outside benchmarks - please share. I'd love to see some actual gaming (or gaming related) performance numbers. Leaked or cherry-picked benchmarks are perhaps of questionable validity, but they're not by necessity incorrect. Don't let that keep you from sharing.

    I apologize if I sound obstinate to you. The truth is, I actually like your conclusion and I am inclined to agree with it based on a combination of specifications, theoretical performance numbers, historical tendencies, market forces, and "gut feeling". However, I have yet to see anywhere near enough evidence to conclude more than a very rough WAG on this one. Known changes to the architecture make historical tendencies a fuzzy approximation at best. We don't yet know how much (or how little) they will affect gaming performance.
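    For reference, here is a rough back-of-envelope sketch of where those theoretical numbers come from (assuming the announced 4096 stream processors, a ~1.6 GHz peak clock, double-rate packed FP16, and 1/16-rate FP64). They are pure ALU-count-times-clock ceilings, which is exactly why they say so little about delivered gaming performance:

    ```python
    # Peak throughput = lanes x FLOPs/clock x clock. A ceiling, not a benchmark.
    stream_processors = 4096        # 64 CUs x 64 stream processors
    peak_clock_ghz = 1.6            # approximate announced peak clock
    flops_per_clock = 2             # one fused multiply-add counts as 2 FLOPs

    fp32_tflops = stream_processors * flops_per_clock * peak_clock_ghz / 1000.0
    fp16_tflops = fp32_tflops * 2   # packed math doubles the FP16 rate
    fp64_tflops = fp32_tflops / 16  # Vega 10 runs FP64 at 1/16 rate

    print("FP32 ~%.1f, FP16 ~%.1f, FP64 ~%.2f TFLOPS"
          % (fp32_tflops, fp16_tflops, fp64_tflops))  # ~13.1 / ~26.2 / ~0.82
    ```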
  • Meteor2 - Wednesday, May 17, 2017 - link

    The software stack slide is telling; CUDA owns the acceleration space and OpenCL isn't very popular. AMD doesn't have an answer here and it's too late anyway because CUDA is established and it works. Nvidia are being bastards and not supporting OpenCL 2 either, locking the market in.
  • Yojimbo - Wednesday, May 17, 2017 - link

    It's not too late, but it's going to take a lot of hard work, determination, and resources. Koduri's claim that 10 or 20 engineers working for a few weeks on each framework is all that's necessary is not auspicious. Developers need assurance that AMD are going to actively support the ecosystem, something that they haven't been doing up to this point. That number of engineers for that amount of time is probably what it took them to be able to run one chosen benchmark well, one that matched up particularly well with their chosen architecture (my guess is that the high bandwidth cache is a prime candidate for an advantage to focus on). As far as I know, for general usage, the BLAS libraries in GPUOpen are significantly slower than NVIDIA's cuBLAS.

    There's a lot more to support in GPU computing than just machine learning, as well. If they only focus on machine learning they will lose a lot of opportunities from companies that want to do simulations and analytics along with machine learning, which is probably the majority of them. Each application has its own issues, and the people in those market segments are mostly not machine learning experts. AMD has 3,000 employees in its Radeon Technologies Group. NVIDIA has 10,000 employees, and they don't have thousands of them sitting around doing nothing.

    As far as OpenCL goes, even when NVIDIA's OpenCL support was more current, CUDA had the advantage because NVIDIA actively supported the space with high-performance libraries. If NVIDIA controls both the architecture and the programming model, they are able to bring features to market much faster and more efficiently, which is pretty important with the pace of innovation that's going on right now. My guess is that opening CUDA would probably be a more beneficial action for the community than supporting OpenCL at the moment, unless opening CUDA meant losing control of it.
  • BurntMyBacon - Wednesday, May 17, 2017 - link

    Yes, it would be more beneficial to the community to open CUDA up to other vendors. However, I think it is about as likely to happen as opening up PhysX or G-Sync. nVidia doesn't exactly have a reputation for opening up proprietary tech.
  • Yojimbo - Wednesday, May 17, 2017 - link

    Well, they are open sourcing their upcoming deep learning accelerator ASIC. They recently open sourced a lot of GameWorks, I think. They will open source things when it's beneficial to them; the same can be said of AMD and most tech companies. Considering their output with these initiatives and their market position, AMD open sourcing GPUOpen and FreeSync was beneficial to them.

    NVIDIA has a history of not waiting around for committees and forging on their own the things they need/want, such as with CUDA, NVLink, and G-Sync. They are trying to build high-value platforms. They spend the money and take the risks in doing that and their motivation is profitability.

    I also don't expect them to open up CUDA at the moment.
  • Meteor2 - Wednesday, May 17, 2017 - link

    I'd forgotten NVLink. I'm not seeing any answer to that at all from AMD and NVLink 2 is fundamental to accelerated HPCs which are vaguely programmable.
  • tuxRoller - Wednesday, May 17, 2017 - link

    OpenCAPI (phenomenal latency). Given the huge number of lanes that Naples has, it would be a good target for OpenCAPI.
    There's also the in-progress Gen-Z fabric, but that's not close to being standardized, and is more of an interconnect between nodes, I suppose.
  • Yojimbo - Wednesday, May 17, 2017 - link

    OpenCAPI is managed by the host processor (CPU). From the way I've seen it described, it is not meant for accelerators to use to link to each other without going through a host processor that supports OpenCAPI. AMD could presumably support OpenCAPI on their CPUs, but they can't count on their GPUs being used with their CPUs. Perhaps CCIX plans to allow accelerator-to-accelerator interconnects, I'm not sure.

    Regardless, both OpenCAPI and CCIX are not available yet, and certainly not enabled on the upcoming Vega products.
  • tuxRoller - Friday, May 19, 2017 - link

    CAPI is definitely managed by the processor. OpenCAPI is supposed to be a "ground-up" rethink of a high-speed, low-latency interconnect for various on-node resources.
    IBM appears to be calling theirs "BlueLink" (or something similar), and that's supposed to be released this year.
    I would be astonished if AMD hasn't done some work towards supporting one of these standards on Vega, even if not on this exact chip.
    Regardless, these are the options for AMD, and since they are "standards" the hope is that others will begin buying into the spec as well.
    If OpenCAPI remains a star network, then AMD can offer a competitive, even if not identical, solution instead of ceding the market to Nvidia.
  • BurntMyBacon - Thursday, May 18, 2017 - link

    @Yojimbo: "Well, they are open sourcing their upcoming deep learning accelerator ASIC."

    Good Point.

    @Yojimbo: "They recently open sourced a lot of GameWorks, I think."

    I didn't know about that, but I'm not sure how much this one matters, as GameWorks products already run on competitors' GPUs and we already know that GameWorks is optimized for nVidia GPUs (why wouldn't it be?).

    @Yojimbo: "They will open source things when it's beneficial to them. That's the same thing that can be said for AMD or most tech companies."

    I generally agree, but I would be remiss if I did not point out HyperTransport as an example of an AMD open-standard initiative that predated its proprietary Intel counterpart (QPI). It was introduced in 2001, a time when AMD was doing well due to the recent success of the Athlon processors, and later served as the system interconnect for their Athlon 64 architecture. It doesn't happen as often as I'd like, but sometimes tech companies will prioritize long-term interoperability and viability over short-term gain.

    @Yojimbo: "Considering their output with these initiatives and market position, AMD open sourcing GPUOpen and FreeSync was beneficial to them."

    I don't disagree. Historically, AMD has found themselves in this position far more often than nVidia.

    @Yojimbo: "NVIDIA has a history of not waiting around for committees and forging on their own the things they need/want, such as with CUDA, NVLink, and G-Sync.

    They don't need to wait on a committee to open up a standard. AMD forged ahead with Mantle until the industry was ready to work the issue. They also don't necessarily need to give up control of their standards to allow other vendors to use them. It would even be reasonable to demand a licensing fee for access to some of their technologies.

    @Yojimbo: "They are trying to build high-value platforms. They spend the money and take the risks in doing that and their motivation is profitability."

    Keeping proprietary tech proprietary is their right. When they are dominating a one- or two-player market, the risk is low, the returns are high, and the decision makes sense. If there were several major players or they weren't the dominant player, this would be far riskier and interoperability would be a bigger concern. Given the current market, I expect they'll keep their proprietary tech in house.
