The Polaris Architecture: In Brief

For today’s preview I’m going to quickly hit the highlights of the Polaris architecture.

In their announcement of the architecture this year, AMD laid out a basic overview of which components of the GPU would see major updates with Polaris. Polaris is not a complete overhaul of past AMD designs; rather, AMD has combined targeted performance upgrades with a chip-wide energy efficiency upgrade. As a result, Polaris is a mix of old and new, and a lot more efficient in the process.

At its heart, Polaris is based on AMD’s 4th generation Graphics Core Next architecture (GCN 4). GCN 4 is not significantly different from GCN 1.2 (Tonga/Fiji), and in fact GCN 4’s ISA is identical to that of GCN 1.2. So everything we see here today comes not from broad architectural changes, but from low-level microarchitectural changes that improve how instructions execute under the hood.

Overall AMD is claiming that GCN 4 (via RX 480) offers a 15% improvement in shader efficiency over GCN 1.1 (R9 290). This comes from two changes: instruction prefetching and a larger instruction buffer. In the case of the former, GCN 4 can, with the driver’s assistance, attempt to prefetch future instructions, something GCN 1.x could not do. When done correctly, this reduces or eliminates the need for a wave to stall to wait on an instruction fetch, keeping the CU fed and active more often. Meanwhile the per-wave instruction buffer (which is separate from the register file) has been increased from 12 DWORDs to 16 DWORDs, allowing more instructions to be buffered and, according to AMD, improving single-threaded performance.
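
To make these claims concrete, here is a toy Python model of a single wave's instruction buffer. Only the 12- and 16-DWORD buffer sizes come from AMD; the fetch latency, fetch width, and one-DWORD-per-cycle issue rate are invented for illustration, so the absolute numbers are meaningless, but the relative effect of prefetching and the larger buffer comes through:

```python
import itertools

FETCH_LATENCY = 8  # cycles per instruction fetch -- assumed, not an AMD figure
FETCH_WIDTH = 8    # DWORDs returned per fetch -- assumed, not an AMD figure

def cycles_to_issue(n_dwords, buffer_size, prefetch):
    """Cycles needed to issue n_dwords at 1 DWORD/cycle from the wave's buffer."""
    issued, buffered, in_flight, cycles = 0, buffer_size, 0, 0
    while issued < n_dwords:
        cycles += 1
        if in_flight:                    # a fetch is outstanding
            in_flight -= 1
            if in_flight == 0:
                buffered += FETCH_WIDTH  # fetch completes, refill the buffer
        if buffered:                     # issue if an instruction is ready
            buffered -= 1
            issued += 1
        # Without prefetch, a fetch starts only once the buffer runs dry;
        # with prefetch, it starts as soon as the result will fit.
        threshold = buffer_size - FETCH_WIDTH if prefetch else 0
        if in_flight == 0 and buffered <= threshold:
            in_flight = FETCH_LATENCY
    return cycles

for size, pf in itertools.product((12, 16), (False, True)):
    label = "prefetch" if pf else "no prefetch"
    print(f"{size}-DWORD buffer, {label}: {cycles_to_issue(256, size, pf)} cycles")
```

In this model the prefetching wave rarely (or never) drains its buffer, while the non-prefetching wave stalls for the full fetch latency every time it empties, matching the stall-avoidance behavior AMD describes.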

Outside of the shader cores themselves, AMD has also made enhancements to the graphics front-end for Polaris. AMD’s latest architecture integrates what AMD calls a Primitive Discard Accelerator. True to its name, the job of the discard accelerator is to remove (cull) triangles that are too small to be used, and to do so early enough in the rendering pipeline that the rest of the GPU is spared from having to deal with these unnecessary triangles. Degenerate triangles are culled before they even hit the vertex shader, while small triangles are culled a bit later, after the vertex shader but before they hit the rasterizer. There’s no visual quality impact to this (only triangles that can’t be seen/rendered are culled), and as AMD claims, the benefits of the discard accelerator increase with MSAA levels, as MSAA otherwise exacerbates the small triangle problem.
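
AMD hasn't published the accelerator's exact tests, but the two cases described above map onto simple geometric checks. The following Python sketch is illustrative only; the pixel-center sample positions and bounding-box shortcut are simplifications of real rasterization rules, not AMD's hardware logic:

```python
import math

def is_degenerate(v0, v1, v2, eps=1e-12):
    """Zero-area (collinear) triangle: culled before the vertex shader."""
    twice_area = abs((v1[0] - v0[0]) * (v2[1] - v0[1])
                     - (v2[0] - v0[0]) * (v1[1] - v0[1]))
    return twice_area < eps

def misses_all_samples(v0, v1, v2):
    """Post-vertex-shader test: cull a tiny screen-space triangle whose
    bounding box slips between pixel centers (samples at integer + 0.5)."""
    for axis in (0, 1):
        lo = min(v0[axis], v1[axis], v2[axis])
        hi = max(v0[axis], v1[axis], v2[axis])
        first_sample = math.ceil(lo - 0.5) + 0.5  # first pixel center >= lo
        if first_sample > hi:
            return True  # no sample point falls within the triangle's extent
    return False

triangles = [
    ((0.0, 0.0), (1.0, 0.0), (2.0, 0.0)),  # collinear -> degenerate
    ((0.6, 0.6), (0.9, 0.6), (0.7, 0.9)),  # slips between pixel centers
    ((0.0, 0.0), (4.0, 0.0), (0.0, 4.0)),  # ordinary triangle
]
for tri in triangles:
    if is_degenerate(*tri):
        print(tri, "-> culled early (degenerate)")
    elif misses_all_samples(*tri):
        print(tri, "-> culled after the vertex shader (covers no samples)")
    else:
        print(tri, "-> rasterized")
```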

Along these lines, Polaris also implements a new index cache, again meant to improve geometry performance. The index cache is designed specifically to accelerate geometry instancing performance, allowing small instanced geometry to stay close by in the cache, avoiding the power and bandwidth costs of shuffling this data around to other caches and VRAM.
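
A toy cache model shows why keeping instanced indices resident pays off. The capacity and eviction policy here are invented for the sketch; AMD hasn't detailed the index cache's actual organization:

```python
class IndexCache:
    """Tracks hits/misses for vertex-index fetches (illustrative only)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = set()
        self.hits = self.misses = 0

    def fetch(self, index):
        if index in self.resident:
            self.hits += 1
        else:
            self.misses += 1          # would cost an L2/VRAM round trip
            if len(self.resident) >= self.capacity:
                self.resident.pop()   # arbitrary eviction, fine for a sketch
            self.resident.add(index)

mesh_indices = list(range(96))        # a small instanced mesh: 32 triangles
cache = IndexCache(capacity=128)      # assumed size; the mesh fits entirely
for _ in range(1000):                 # draw 1,000 instances of the mesh
    for idx in mesh_indices:
        cache.fetch(idx)

hit_rate = cache.hits / (cache.hits + cache.misses)
print(f"hit rate: {hit_rate:.1%}")    # ~99.9%: only the first instance misses
```

Because the whole index list fits, every instance after the first is served from the cache rather than repeatedly pulling the same indices from L2 or VRAM.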

Finally, at the back-end of the GPU, the ROP/L2/memory controller partitions have also received their own updates. Chief among these is that Polaris implements the next generation of AMD’s delta color compression technology, which uses pattern matching to reduce the size, and resulting memory bandwidth needs, of frame buffers and render targets. This compression delivers a de facto increase in available memory bandwidth and a decrease in power consumption, at least so long as the buffer is compressible. With Polaris, AMD supports a larger pattern library to better compress more buffers more often, improving on GCN 1.2's color compression by around 17%.
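
The underlying idea is plain delta encoding: store one anchor value per tile plus small per-pixel differences, and fall back to raw data whenever the deltas don't fit, which is what keeps the scheme lossless. Below is a minimal sketch; the tile size, delta width, and encoding are arbitrary stand-ins for the fixed pattern library in hardware:

```python
def compress_tile(tile):
    """Encode a flat list of pixel values as (anchor, one-byte deltas).
    If any delta is too large, the tile is stored uncompressed."""
    anchor = tile[0]
    deltas = [p - anchor for p in tile]
    if all(-128 <= d <= 127 for d in deltas):
        return ("compressed", anchor, bytes(d & 0xFF for d in deltas))
    return ("raw", tile)

def decompress_tile(encoded):
    if encoded[0] == "raw":
        return encoded[1]
    _, anchor, deltas = encoded
    # Undo the two's-complement wrap applied by `d & 0xFF` above.
    return [anchor + (d - 256 if d > 127 else d) for d in deltas]

# A smooth gradient (typical of real render targets) compresses well:
gradient = [1000 + x for x in range(64)]          # one 8x8 tile
encoded = compress_tile(gradient)
assert decompress_tile(encoded) == gradient       # lossless round trip
print(encoded[0])  # "compressed": 64 one-byte deltas instead of full values
```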

Otherwise we’ve already covered the increased L2 cache size, which is now at 2MB. Paired with this is AMD’s latest generation memory controller, which can now officially run at 8Gbps, and even a bit more than that when overclocking.
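
For reference, the arithmetic behind that data rate, assuming the RX 480's 256-bit GDDR5 bus (the bus width isn't stated in this excerpt):

```python
data_rate_gbps = 8                     # per pin, as quoted above
bus_width_bits = 256                   # RX 480's GDDR5 bus width
bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
print(f"{bandwidth_gb_s:.0f} GB/s")    # 256 GB/s of raw memory bandwidth
```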

Comments
  • sonicmerlin - Friday, July 1, 2016 - link

    Job not well done. Doesn't come close to reaching AMD's advertised 2.8x performance/Watt improvement. The mass market $200 reference board is drawing power over PCIe outside of spec. If Nvidia comes out with an overclockable 1060 for $250, no one is going for the hot and dangerous 480 over the power-sipping 1060. And regardless of what some people claim, even 3 GB of VRAM is plenty for 1080p.
  • mickulty - Wednesday, June 29, 2016 - link

    >50% performance jump over 380X, I'll take that.
  • sonicmerlin - Friday, July 1, 2016 - link

    Even with the benefit of 2 node shrinks?
  • Geranium - Wednesday, June 29, 2016 - link

    By your logic, a 45% to 70% performance improvement over the previous generation is an F-up? LOL.
  • Geranium - Wednesday, June 29, 2016 - link

    Wrong reply. It was for the first comment.
  • MATHEOS - Wednesday, June 29, 2016 - link

    Agree
  • dustwalker13 - Wednesday, June 29, 2016 - link

    A massive F-up?
    Only if you compare a 200,- card to a 500,- or 700,- one and expect the same performance. This card sits right in the sweet spot of performance and efficiency for a really nice price.
    Granted, it is not for me, but I am one of those crazy people who shell out 500,- or more for a graphics card - this puts me in roughly the top 5% of gamers, I would suspect; the rest will buy cards below 300,-, and there the 480 delivers extremely well.

    No, it is not a high-end card, but then it was never supposed to be a 1080 killer.

    The interesting question now will be what the 1060 delivers in terms of price/performance/efficiency.
  • Byte - Wednesday, June 29, 2016 - link

    Ouch, AMD can't catch a break. They promised lower power consumption, but seem to be surpassing it, possibly blowing out the PCIe spec. Performance is about what was expected, but it won't turn heads. It looks like the GPUs are following the CPUs' succession of disappointments. Let's hope Vega will be a stunner and Nvidia won't have a 1080 Ti in time to rain on it like they did with Fury. We need to give AMD a surviving chance!
  • cocochanel - Thursday, June 30, 2016 - link

    With people like you, how could they get a break?
  • Frenetic Pony - Wednesday, June 29, 2016 - link

    And yet they'll make bank off of it. I've learned over the past few years that GPU quality and sales have little to do with each other. Nvidia made a ton of money off their last generation despite the fact that no desktop user should give much of a shit about TDP, but it worked anyway despite AMD beating them on price-for-performance in almost every category. Similarly, this card sucks while Pascal is quite impressive, but it has all the goodwill and PR in the world, so it will sell like hotcakes anyway.
