Features & the Future Launch

Last but not least, along with today’s card announcements, AMD has also lifted the embargoes on the architectural elements of the Vega 10 GPU. While there are no new architecture features to discuss – it turns out AMD was rather thorough with their preview at the start of this year – there’s still quite a bit in the way of details to dig into here. In fact, it’s more than I can hope to cover in a single day, let alone in the few hours I have to write this article ahead of AMD’s embargo, so I want to hit the highlights.

First and foremost, AMD has clarified that they do not consider Vega 10 to be a high performance computing (HPC) chip, unlike Hawaii. In particular, Vega 10’s native FP64 rate is just 1/16 of its FP32 rate, like AMD’s other consumer chips. And the chip, while supporting HBM2’s native ECC mode, doesn’t have ECC on its internal pathways. This doesn’t mean that the chip can’t be placed in a server – its FP16 and INT16/INT8 performance is the cornerstone of its existence as part of the Radeon Instinct family – but that’s still a different market than traditional HPC. When we might see another HPC chip from AMD is anyone’s guess at this moment.
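
To put those rate ratios in concrete terms, below is a quick back-of-the-envelope sketch of Vega 10’s theoretical throughput at each precision. The 64 CU count is from the article; the 1.67GHz clock approximates the chip’s top boost clocks and is assumed here purely for illustration.

```python
# Back-of-the-envelope theoretical throughput for a full 64 CU Vega 10.
# The 1.67GHz boost clock is an assumption for illustration only.
CUS = 64
LANES_PER_CU = 64        # stream processors per CU
FLOPS_PER_LANE = 2       # one FMA counts as 2 floating point operations
BOOST_CLOCK_HZ = 1.67e9

fp32 = CUS * LANES_PER_CU * FLOPS_PER_LANE * BOOST_CLOCK_HZ
fp64 = fp32 / 16         # native 1/16 FP64 rate, per AMD
fp16 = fp32 * 2          # packed math: two FP16 ops per FP32 lane per clock

print(f"FP32: {fp32 / 1e12:.1f} TFLOPS")   # ~13.7
print(f"FP64: {fp64 / 1e12:.2f} TFLOPS")   # ~0.86
print(f"FP16: {fp16 / 1e12:.1f} TFLOPS")   # ~27.4
```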

Second, we have a formal die size and transistor count for Vega 10. The GPU is officially 486mm2 and contains 12.5B transistors. That amounts to 3.6B more transistors than Fiji – an especially apt comparison since Fiji is also a 64 CU/64 ROP design – all the while being 110mm2 smaller than Fiji despite the higher transistor count, thanks to the higher density of GloFo’s 14nm LPP process, which AMD is using to fab the chip.
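
For a sense of what the process change buys, here’s the density arithmetic, using the commonly cited Fiji figures (8.9B transistors on a 596mm2 die at TSMC 28nm) as the point of comparison:

```python
# Transistor density comparison; the Fiji figures are the commonly
# cited ones, not numbers from AMD's briefing for this article.
vega10_density = 12.5e9 / 486   # transistors per mm2, GloFo 14nm LPP
fiji_density = 8.9e9 / 596      # transistors per mm2, TSMC 28nm

print(f"Vega 10: {vega10_density / 1e6:.1f}M transistors/mm2")  # ~25.7
print(f"Fiji:    {fiji_density / 1e6:.1f}M transistors/mm2")    # ~14.9
print(f"Density gain: {vega10_density / fiji_density:.2f}x")    # ~1.72x
```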

Talking to AMD’s engineers, what especially surprised me is where the bulk of those transistors went: the single largest share of the additional transistors went into designing the chip to clock much higher than Fiji. Vega 10 can reach 1.7GHz, whereas Fiji couldn’t do much more than 1.05GHz. Additional transistors are needed to add pipeline stages at various points or to build in latency hiding mechanisms, as signals can only propagate so far in a single clock cycle; this is something we’ve seen in NVIDIA’s Pascal, not to mention countless CPU designs. Still, what it means is that those extra transistors are serving a very important performance purpose: allowing AMD to clock the card high enough to see significant performance gains over Fiji.

Speaking of Fiji, there’s been some question over whether the already shipping Vega FE cards have AMD’s Draw Stream Binning Rasterizer (DSBR) enabled, which is one of the Vega architecture’s new features. The short answer is no: the DSBR is not enabled in Vega FE’s current drivers. However, we have been told to expect it with the RX Vega launch. AMD is being careful not to make too many promises here – the performance and power impact of the DSBR varies wildly with the software used – but it means that RX Vega will have a bit more going on than the Vega FE did at launch.
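
For readers unfamiliar with the concept, the sketch below illustrates the general idea behind a binning rasterizer: primitives are sorted into screen-space tiles before shading, so each tile’s framebuffer traffic stays in on-chip caches and hidden surfaces can be culled before they’re shaded. To be clear, this is a toy model of the technique, not AMD’s actual DSBR implementation; the tile size and data structures are invented for the example.

```python
# Toy illustration of tile binning (NOT AMD's DSBR implementation).
from collections import defaultdict

TILE = 32  # tile size in pixels; chosen arbitrarily for the example

def bin_triangles(triangles):
    """Conservatively bin each triangle into every screen tile its
    bounding box touches, so tiles can later be shaded one at a time."""
    bins = defaultdict(list)
    for tri in triangles:
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        for ty in range(min(ys) // TILE, max(ys) // TILE + 1):
            for tx in range(min(xs) // TILE, max(xs) // TILE + 1):
                bins[(tx, ty)].append(tri)
    return bins

tris = [[(5, 5), (60, 10), (20, 70)]]
print(sorted(bin_triangles(tris).keys()))
# [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
```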

Next, AMD is tweaking their boost clock mechanism and how it's advertised. Up until now, the boost clock was the maximum clock a Radeon card would hit; for example, a stock Fury X topped out at 1.05GHz, and a stock RX 580 at 1340MHz. Starting with RX Vega, however, the company's definition of the boost clock is changing to the highest clockspeed a card is guaranteed to hit, rather than the absolute highest clockspeed it can reach under load. There are still some nuances I want to test once I have the hardware in hand, but at first glance this sounds closer to NVIDIA's definition, where each and every card is allowed to reach its natural limit (at the stock voltage).
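
A small sketch of the difference between the two definitions, using the air-cooled RX Vega 64’s advertised 1546MHz boost clock; the per-card peak clocks are made up for illustration:

```python
# Hypothetical per-card peak clocks (MHz) at stock voltage; these
# samples are invented purely to illustrate the two definitions.
card_peaks = [1630, 1655, 1647, 1682, 1671]

ADVERTISED_BOOST = 1546  # air-cooled RX Vega 64's advertised boost

# Old Radeon definition: the advertised boost is a hard ceiling.
old_style = [min(peak, ADVERTISED_BOOST) for peak in card_peaks]

# New definition: every card is guaranteed to reach the advertised
# boost, but each may opportunistically run past it under load.
assert all(peak >= ADVERTISED_BOOST for peak in card_peaks)

print(old_style)    # every card pinned at 1546
print(card_peaks)   # each card finds its own natural limit
```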

Finally, while AMD didn’t clarify exactly when this change was made, the company has tweaked their FreeSync Low Framerate Compensation (LFC) technology to work with monitors that have a 2x range between minimum and maximum refresh rates, as opposed to the 2.5x required at the technology’s launch. I’m told that 2.5x (or better) is still the ideal range, but LFC now works at 2x. This is an important development because the Samsung CF791 monitor that AMD is discounting as part of the Radeon Pack bundles has a 48Hz to 100Hz range, so it wouldn’t have been LFC-capable under the old rules.
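
A minimal sketch of that eligibility check, run against the CF791’s refresh window:

```python
def lfc_capable(min_hz, max_hz, required_ratio=2.0):
    """LFC requires max refresh / min refresh >= the ratio
    (2.5x at FreeSync's launch, now relaxed to 2.0x per AMD)."""
    return max_hz / min_hz >= required_ratio

# Samsung CF791: 48-100Hz variable refresh range (ratio ~2.08)
print(lfc_capable(48, 100, required_ratio=2.5))  # False: old rule
print(lfc_capable(48, 100, required_ratio=2.0))  # True: new rule
```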

Overall AMD has given us a lot to chew on – more than can be covered in a single afternoon – but we still haven’t seen the whole picture. Today’s announcement of the card specifications and prices sets a very important baseline for what we should expect on August 14th, but it offers no hard performance numbers by which to judge the final product. As I said at the start of this article, AMD’s marketing has done a very good job keeping Vega in the public’s mind, and today’s announcement is the next phase of that plan. But the end is approaching and RX Vega’s full launch is drawing near, so it will be very interesting to see how everything we’ve learned over the last several months translates into gaming performance, power efficiency, and where AMD’s new cards will stand in the realm of high-end video cards.

162 Comments

  • Sttm - Monday, July 31, 2017 - link

    295W to match the 180W GTX 1080... THEY ARE A GENERATION BEHIND!
  • tuxRoller - Monday, July 31, 2017 - link

    Wow.
    This is just awful.
    Something must've gone really wrong with either the design or implementation, or both.
    There's no good excuse for a card that is built on a two-generations-newer node AND has a similar power draw to ever deliver fewer fps.
    I really hope the story of wtf happened to Vega emerges.
  • abufrejoval - Monday, July 31, 2017 - link

    The R9 Nano pointed in the right direction for what HBM could do: Less power, noise and size, while beating the R9 290X on performance.

    For the VEGA I expected a generational improvement like I got from the GTX 1070 vs. the GTX 980ti: Same performance at half price and power.

    345 Watts for the equivalent of a 180 Watt/(R9 Nano level) GTX 1080? Seriously? Did someone swallow a shrink and make HBM2 more energy hungry than GDDR5X?

    Even the Infinity Fabric cannot rescue such a Watt/performance blunder...
  • CiccioB - Tuesday, August 1, 2017 - link

    BTW, Infinity Fabric is not a magic component, and for sure not an energy-saving one. On the contrary, it is an added component that needs extra energy to solve scaling problems.
    So, in the end, a possible MCM GPU will have a lower performance/energy ratio than a monolithic one.
  • 7beauties - Monday, July 31, 2017 - link

    I'm pleased as a bowl of rum punch that Dr. Lisa Su is aggressively competing against Intel with Ryzen and now against Nvidia with Vega, but to be an AMD fanboy is to suffer wait times that grow foot-long beards and add inches to the waistline. AMD is typically late in unveiling new hardware. I hope that their cadence of tick-tock-tock will at least bring noteworthy enhancements. Good luck AMD. It's great to see you spoil Intel's party, as I hope you do with Nvidia's, but it's been a long, long time coming.
  • spat55 - Tuesday, August 1, 2017 - link

    The biggest issue with them being this late will be nVidia with Volta; even if Vega is decent and slots in between a 1080 & 1080ti, it'll soon be beaten by Volta. Those of us who wanted to upgrade bought a 1070/1080/1080ti and will be waiting for Volta.
  • CiccioB - Tuesday, August 1, 2017 - link

    aggressively competing against Intel with Ryzen and now against Nvidia with Vega

    What many AMD fanboys have still not understood is that you do not compete with anyone if your product costs more (to produce) and is sold for less. Your product is simply underpriced to have some appeal, and THAT IS NOT COMPETITION!
    Vega, like the entire GCN architecture, is simply underperforming, and needs to be boosted in clocks well beyond its optimal energy-efficiency point while still not reaching decent performance against the lower-tier competing solution.
    This architecture has to be scrapped and a new one must take its place as soon as possible, or we will be waiting forever for the eternal savior (it was Tahiti at the beginning, which soon showed it was too big and too power hungry, then it was Fiji, then Pascal, then Vega... next is Navi...). Will we ever see an architecture performing better than the nvidia ones without using tons more silicon and watts, so being really competitive?
  • Outlander_04 - Tuesday, August 1, 2017 - link

    You must be one of the very few who have actually used a Vega card? No?
    So it's almost like you are just making stuff up?
    Weird!
  • CiccioB - Thursday, August 3, 2017 - link

    Hahahahah.
    With 490mm2, HBM2 and 300W it should leave GP102 in the dust, not trade blows with its lower-tier cousin, the GP104, released 14 months ago.
    Are you kidding when talking about how good GCN is? Have you still not understood how bad it is? What do you need to understand it? Well, possibly Volta will teach you how good GCN is, when the x80 series sells for $600.
    And, yes, if you expressly code for GCN (like DICE did) it will gain some points, as with any other architecture. Which does not cancel out the poor area*power/performance ratio GCN has.
  • fanofanand - Wednesday, August 2, 2017 - link

    Pascal was Nvidia, just an FYI. Otherwise nice rant.
