Features & the Future Launch

Last but not least, along with today’s card announcements, AMD has also lifted the embargoes on the architectural elements of the Vega 10 GPU. While there are no new architecture features to discuss – it turns out AMD was rather thorough with their preview at the start of this year – there is still quite a bit of detail to dig into. In fact, it's more than I can hope to cover in a single day, let alone the few hours I’m getting to write this article ahead of AMD’s embargo, so I want to hit the highlights.

First and foremost, AMD has clarified that they do not consider Vega 10 to be a high performance computing (HPC) chip, unlike Hawaii. In particular, Vega 10’s native FP64 rate is just 1/16, like AMD’s other consumer chips. And the chip, while supporting HBM2’s native ECC mode, doesn’t have ECC on its internal pathways. This doesn’t mean that the chip can’t be placed in a server – its FP16 and INT16/INT8 performance is the cornerstone of its existence as part of the Radeon Instinct family. But that’s still a different market than traditional HPC. When we might see another HPC chip from AMD is anyone’s guess at this moment.

Second of all, we have a formal die size and transistor count for Vega 10. The GPU is officially 486mm2 and contains 12.5B transistors. That amounts to 3.9B more transistors than Fiji – an especially apt comparison since Fiji is also a 64 CU/64 ROP design – all while being 112mm2 smaller than Fiji despite the higher transistor count, thanks to the higher density of GloFo’s 14nm LPP process, which AMD is using to fab the chip.
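To put those figures in perspective, here is a back-of-the-envelope density comparison using only the numbers quoted above (Vega 10's die size and transistor count, and the stated deltas versus Fiji). This is an illustrative sketch, not official AMD data:

```python
# Figures from the article: Vega 10 is 486mm2 with 12.5B transistors,
# 3.9B more transistors than Fiji while being 112mm2 smaller.
vega10_transistors = 12.5e9
vega10_area_mm2 = 486
fiji_transistors = vega10_transistors - 3.9e9  # implies 8.6B for Fiji
fiji_area_mm2 = vega10_area_mm2 + 112          # implies 598mm2 for Fiji

vega10_density = vega10_transistors / vega10_area_mm2  # ~25.7M transistors/mm2
fiji_density = fiji_transistors / fiji_area_mm2        # ~14.4M transistors/mm2

print(f"Vega 10: {vega10_density / 1e6:.1f}M transistors/mm2")
print(f"Fiji:    {fiji_density / 1e6:.1f}M transistors/mm2")
print(f"Density advantage for 14nm LPP: {vega10_density / fiji_density:.2f}x")
```

Working through the arithmetic, the move from TSMC's 28nm to GloFo's 14nm LPP comes out to roughly a 1.8x improvement in effective transistor density for this pair of chips.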

Talking to AMD’s engineers, what especially surprised me is where the bulk of those transistors went: the single largest share of the additional 3.9B transistors was spent on designing the chip to clock much higher than Fiji. Vega 10 can reach 1.7GHz, whereas Fiji couldn’t do much more than 1.05GHz. Additional transistors are needed to add pipeline stages at various points or to build in latency-hiding mechanisms, as signals can only travel so far in a single clock cycle; this is something we’ve seen in NVIDIA’s Pascal, not to mention countless CPU designs. Ultimately, those 3.9B transistors serve a very important performance purpose: allowing AMD to clock the card high enough to see significant performance gains over Fiji.

Speaking of Fiji, there’s been some question over whether the already-shipping Vega FE cards have AMD’s Draw Stream Binning Rasterizer enabled, which is one of the Vega architecture’s new features. The short answer is no: the DSBR is not enabled in Vega FE’s current drivers. However, we have been told to expect it with the RX Vega launch. AMD is being careful not to make too many promises here – the performance and power impact of the DSBR vary wildly with the software used – but it means that RX Vega will have a bit more going on than the Vega FE did at launch.

Fifth, AMD is tweaking their boost clock mechanism and how it's advertised. Up until now, the boost clock was the maximum clock a Radeon card would hit; for example, a stock Fury X topped out at 1.05GHz, and a stock RX 580 at 1340MHz. Starting with RX Vega, however, the company's definition of the boost clock is changing to the highest clockspeed a card is guaranteed to hit, rather than the absolute highest clockspeed it can reach under load. There are still some nuances I want to test once I have the hardware in hand, but at first glance this sounds closer to NVIDIA's definition, where each and every card is allowed to reach its natural limit (at the stock voltage).

Finally, while AMD didn’t clarify exactly when this change was made, the company has tweaked their FreeSync Low Framerate Compensation (LFC) technology to work with monitors that have a 2x range between their minimum and maximum refresh rates, as opposed to the 2.5x required at launch. I’m told that 2.5x (or better) is still the ideal range, but LFC now works at 2x. This is an important development because the Samsung CF791 monitor that AMD is discounting as part of the Radeon Pack bundles has a 48Hz to 100Hz range, so it wouldn’t have been LFC capable under the old rules.
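The LFC eligibility rule described above boils down to a simple ratio test. As a quick sketch (the function name and structure are my own, not anything from AMD's drivers):

```python
def lfc_capable(min_hz, max_hz, required_ratio=2.0):
    """Return True if a monitor's refresh range is wide enough for LFC.

    required_ratio is 2.0 under AMD's updated rule; it was 2.5 when
    LFC first launched (AMD hasn't said exactly when the change landed).
    """
    return max_hz / min_hz >= required_ratio

# Samsung CF791: 48-100Hz, roughly a 2.08x range
print(lfc_capable(48, 100))                      # True under the new 2x rule
print(lfc_capable(48, 100, required_ratio=2.5))  # False under the old 2.5x rule
```

With a 100/48 ≈ 2.08x range, the CF791 clears the new 2x bar with little room to spare, which is presumably why the rule was relaxed in the first place.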

Overall AMD has given us a lot to chew on – more than can be covered in a single afternoon – but we still haven’t seen the whole picture. Today’s announcement of the card specifications and prices sets a very important baseline for what we should expect on August 14th, but offers no hard performance numbers by which to judge the final product. As I said at the start of this article, AMD’s marketing has done a very good job keeping Vega in the public’s mind, and today’s announcement is the next phase of that plan. But the end is approaching and RX Vega’s full launch is drawing near, so it will be very interesting to see how everything we’ve learned over the last several months translates into gaming performance and power efficiency, and where AMD’s new cards will stand in the realm of high-end video cards.

Buying RX Vega: Prices & Bundles

  • CiccioB - Monday, July 31, 2017 - link

    Is there any particular school that teaches this rubbish about good and evil in economics?
    Or is it just mere frustration that creates all these idiotic theories that customers who pay for a product (which is also the best on the market) must feel robbed?

    I would say that fanboys who wait 14 months for a product that matches another over a year old should feel robbed of their time and missed opportunities.
    But they are fanboys; you can't expect them to understand
  • fanofanand - Wednesday, August 2, 2017 - link

    The victim mentality is not acquired via economics classes, these are individuals who believe that there is a place for GPUs within the "social justice" spectrum. You can pretty much just ignore them and move on to the next comment.
  • mapesdhs - Wednesday, August 2, 2017 - link

    "The victim mentality is not acquired via economics classes, ..."

    An SJW gfx card, now I've heard it all. :D
  • HAILHOFF - Monday, July 31, 2017 - link

    Have you seen the Volta predictions? AMD are unable to produce a card that competes with Nvidia in terms of pure brute force. AMD cards need optimisations in Vulkan and what not to be competitive. People talk about tick/tock and about how Nvidia abuses the market but seriously who cares? When I buy a GPU I want to buy the best, Feelings have no place when you're buying hardware. That being said Ryzen is amazing but Vega sucks.
  • vladx - Monday, July 31, 2017 - link

    Newsflash, people: companies are in it to make as much money as the market allows them. They may pretend they care about their customers, but the truth is neither AMD nor Nvidia does.
  • stockolicious - Monday, July 31, 2017 - link

    "Ryzen is amazing but Vega sucks"
    seems to be the consensus – I think the attach rate on Ryzen will be pretty high, though. You will be able to get a high-end PC for "much less" than an Intel/Nvidia PC. Same thing for AI etc.
    After Bulldozer (a failure), AMD was never able to test the attach rate, which is one of the advantages you should have if you produce both a CPU and a GPU.
  • Deathlokke - Saturday, August 5, 2017 - link

    This is flat out wrong, and has been disproven multiple times. In fact, AMD drops support for older cards much faster than NVIDIA does; they are still releasing updates for the 8400 GS, which came out 10 years ago.
  • nagi603 - Monday, July 31, 2017 - link

    Yeah, those packs are problematic for games as well. I already have one of the games, a 34" FreeSync display, and I want to upgrade to a Threadripper later, not a Ryzen 7....

    With that being said, distributors usually remove the game codes here, and hardware packs are quite unheard of, so probably we won't get anything, just more expensive separate pieces.
  • Ian Cutress - Monday, July 31, 2017 - link

    It's not a hardware bundle coming from AMD: you have to add both the monitor and the card pack to the basket for the discount to be applied. The retailer then has to deal with AMD / the distributor to work out what they get out of it.
  • superflux - Monday, July 31, 2017 - link

    Man... I thought there would be some mention of SR-IOV. I know that it is a 'Pro' graphics card feature, but it is very, very desired by people who run Linux with virtualized Windows. We don't need 8 to 64 different virtualized operating systems to share one graphics card, just 1.

    As a consumer, I really don't want to plug in two different graphics cards just so I can run Linux with Windows for games. I get that some people just boot into a different operating system, but that honestly makes no sense at all when we simply have the technology to move beyond that.
