The Vega Architecture: AMD’s Brightest Day

From an architectural standpoint, AMD’s engineers consider Vega to be their most sweeping change in five years. And looking over everything that has been added, it’s easy to see why. In terms of core graphics/compute features, Vega introduces more than any iteration of GCN before it.

Speaking of GCN, before getting too deep here, it’s interesting to note that at least publicly, AMD is shying away from the Graphics Core Next name. GCN doesn’t appear anywhere in AMD’s whitepaper, while in programmers’ documents such as the shader ISA, the name is still present. But at least for the purposes of public discussion, rather than using the term GCN 5, AMD is consistently calling it the Vega architecture. Though make no mistake, this is still very much GCN, so AMD’s basic GPU execution model remains.

So what does Vega bring to the table? Back in January we got what has turned out to be a fairly extensive high-level overview of Vega’s main architectural improvements. In a nutshell, Vega is:

  • Higher clocks
  • Double rate FP16 math (Rapid Packed Math)
  • HBM2
  • New memory page management for the high-bandwidth cache controller
  • Tiled rasterization (Draw Stream Binning Rasterizer)
  • Increased ROP efficiency via L2 cache
  • Improved geometry engine
  • Primitive shading for even faster triangle culling
  • Direct3D feature level 12_1 graphics features
  • Improved display controllers

The interesting thing is that even with this significant number of changes, the Vega ISA is not a complete departure from the GCN4 ISA. AMD has added a number of new instructions – mostly for FP16 operations – along with some additional instructions that they expect to improve performance for video processing and some 8-bit integer operations, but nothing that radically separates Vega from earlier ISAs. So on the compute side, Vega is still very comparable to Polaris and Fiji in how data moves through the GPU.

Consequently, the burning question I think many will ask is whether the effective compute IPC is significantly higher than Fiji’s, and the answer is no. AMD has actually taken significant pains to keep the throughput latency of a CU at 4 cycles (4 stages deep). However, strictly speaking, existing code isn’t going to run any faster on Vega than on earlier architectures. In order to wring the most out of Vega’s new CUs, you need to take advantage of the new compute features. Note that this doesn’t mean that compilers can’t take advantage of them on their own, but because the datatype matters so much, it’s important that code be designed around lower-precision datatypes to begin with.
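To make the datatype point concrete, here is a minimal sketch of the packed-FP16 principle. It uses CUDA’s half2 intrinsics purely as an illustrative stand-in (an assumption of convenience; this is not Vega’s ISA or AMD’s toolchain, and the kernel name and values are hypothetical): two 16-bit values share one 32-bit register, so a single packed instruction performs two operations, but only when the code is written against 16-bit types in the first place.

// Minimal sketch of packed FP16, assuming CUDA's half2 intrinsics as a
// stand-in for the general concept (compile with nvcc -arch=sm_53 or newer).
#include <cuda_fp16.h>
#include <cstdio>

// Each thread holds two FP16 values packed into one 32-bit half2 register,
// so a single __hfma2 performs two fused multiply-adds -- the packing
// principle behind "double rate" FP16.
__global__ void axpy_fp16x2(const half2* x, const half2* y, half2* out,
                            half2 a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = __hfma2(a, x[i], y[i]);  // two FP16 FMAs per instruction
    }
}

int main()
{
    const int n = 1024;  // 1024 half2 elements = 2048 FP16 values
    half2 *x, *y, *out;
    cudaMallocManaged(&x,   n * sizeof(half2));
    cudaMallocManaged(&y,   n * sizeof(half2));
    cudaMallocManaged(&out, n * sizeof(half2));

    for (int i = 0; i < n; ++i) {
        x[i] = __floats2half2_rn(1.0f, 2.0f);    // pack two floats as FP16x2
        y[i] = __floats2half2_rn(0.5f, 0.25f);
    }
    half2 a = __float2half2_rn(3.0f);            // broadcast the scalar to both lanes

    axpy_fp16x2<<<(n + 255) / 256, 256>>>(x, y, out, a, n);
    cudaDeviceSynchronize();

    float2 r = __half22float2(out[0]);           // unpack for printing
    printf("out[0] = (%.2f, %.2f)\n", r.x, r.y); // expect (3.50, 6.25)

    cudaFree(x); cudaFree(y); cudaFree(out);
    return 0;
}

The takeaway is the same regardless of vendor or API: if the data isn’t declared at 16-bit precision to begin with, the packed execution units have nothing to pair up, which is why Vega’s FP16 gains depend on code written with lower precision in mind.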


213 Comments


  • BOBOSTRUMF - Monday, August 14, 2017 - link

    Well, I was expecting lower performance compared to a GeForce 1080, so this is one of the few pluses. Now NVIDIA only has to bump the base clocks for the GeForce 1080 while still consuming less power. Competition is great, but this is not the best product from AMD; on 14nm the gains should be much higher. Fortunately AMD is great now on CPUs, and that will hopefully bring in income that should be invested in GPU research.
    Good luck AMD
  • mapesdhs - Monday, August 14, 2017 - link

    NV doesn't have to do anything as long as retail pricing has the 1080 so much cheaper. I look forward to seeing how the 56 fares.
  • webdoctors - Tuesday, August 15, 2017 - link

    It looks like the 1080 MSRP is actually less! Other sites are mentioning that the initial price included a $100 rebate, which has expired :( and the new MSRP has taken effect....

    https://pcgamesn.com/amd/amd-rx-vega-rebates
  • mdriftmeyer - Monday, August 14, 2017 - link

    Remember your last paragraph after the game engines adopt AMD's architecture and features, which they have committed themselves to doing and which is already partially in development. When that happens, I look forward to you asking what the hell went wrong at Nvidia.
  • Yojimbo - Monday, August 14, 2017 - link

    The whole "game engines will adopt AMD's architecture" thesis was made when the Xbox One and PS4 were released in 2013. Since then, AMD's market share among PC gamers has declined considerably and NVIDIA seems to be doing just fine in terms of features and performance in relevant game engines. The XBox One and PS4 architectures account for a significant percentage of total software sales. Vega architecture will account for a minuscule percentage. So why would the thesis hold true for Vega when it didn't hold true for Sea Islands?

    Besides, NVIDIA has had packed FP16 capability since 2015 with the Tegra X1. They also have it in their big GP100 and GV100 GPUs. They can relatively easily implement it in consumer GeForce GPUs whenever they feel it is appropriate. And within 3 months of doing so they will have more FP16-enabled gaming GPUs in the market than Vega will represent over its entire lifespan.
  • Yojimbo - Monday, August 14, 2017 - link

    That means the Nintendo Switch is FP16 capable, by the way.
  • mapesdhs - Monday, August 14, 2017 - link

    Good points, and an extra gazillion for reminding me of an awesome movie. 8)
  • stockolicious - Tuesday, August 15, 2017 - link

    "the Xbox One and PS4 were released in 2013. Since then, AMD's market share among PC gamers has declined considerably "

    The problem AMD had was that they could not play to their advantage - which was having both a CPU and a GPU. The CPU was so awful that nobody (or very few) used them to game. Now that Ryzen is here and successful, they will gain GPU share even though their top cards don't beat Nvidia. This is called "attach rate" - when a person buys a computer with an AMD CPU they get an AMD GPU 55% of the time, vs. 25% of the time with an Intel CPU. AMD had the same issue with their APUs - the CPU side was so bad that nobody cared to build designs around them, but now with Raven Ridge (Ryzen/Vega) coming they will do very well there as well.
  • Yojimbo - Tuesday, August 15, 2017 - link

    I wouldn't expect Bulldozer (or whatever their latest pre-Zen architecture was called) attach rates to hold true for Ryzen. There was probably a significant percentage of AMD fans accounting for Bulldozer sales. If Ryzen is a lot more successful (and by all accounts it looks like it will be), then only a small percentage of Ryzen sales will be to die-hard AMD fans. Most will be to people looking to get the best value. Then you can expect attach rates for AMD GPUs with Ryzen CPUs to be significantly lower than with Bulldozer.
  • nwarawa - Monday, August 14, 2017 - link

    *yawn* Wake me up when the prices return to normal levels. I've had my eye on a few nice'n'cheap FreeSync monitors for a while now, but missed my chance at an affordable RX 470/570.

    Make a Vega 48 3GB card (still enough RAM for 1080p for me, but it should shoo off the miners) for around $250, and I'll probably bite. And get that power consumption under control while you're at it. I'll undervolt it either way.
