Bonaire’s Microarchitecture - What We’re Calling GCN 1.1

With our introduction out of the way, before looking at the cards and our performance results we'd like to dive into a technical discussion and a bit of nitpicking. Specifically, we'd like to spend some time talking about architectures and product naming, as it's going to be a bit confusing at first. As AMD has stated numerous times in the past, Graphics Core Next is a long-term architecture for the company. AMD intends to evolve GCN over the years, releasing multiple microarchitectures that improve the architecture and add features while remaining rooted in GCN's design principles. GCN is, after all, the other half of AMD's upcoming HSA-capable APUs, the culmination of years of AMD's efforts with HSA/Fusion.

So where does Bonaire fit in? Bonaire is of course a GCN part; it's a new microarchitecture that's technically distinct from Southern Islands, but one that remains extremely close to it in design. The new microarchitecture does bring some changes – among other things it implements some new instructions that will be useful for HSA, supports a larger number of compute work queues (also good for HSA), and introduces a new version of AMD's PowerTune technology (which we'll get to in a bit) – but otherwise the differences from Southern Islands are very few. There are no notable changes in shader/CU efficiency, ROP efficiency, graphics features, etc. Unless you're writing compute code for AMD GPUs, from what we know about this microarchitecture it's likely you'd never notice a difference.

Unfortunately AMD has chosen to more or less gloss over the microarchitectural differences altogether, which is not wholly surprising since they will be selling Bonaire and previous generation products side-by-side. Bonaire's microarchitecture has no official name (at least not one AMD wants to give us) and no version number. The Sea Islands name we've been seeing thrown around is not the microarchitecture name. Sea Islands is in fact the name for all of the GPUs in this wave – or perhaps it would be better to say all of the products created in this development cycle – including both Bonaire and its new microarchitecture, and Oland, AMD's other new GPU (primarily for mobile), which is purely Southern Islands in microarchitecture.

In fact, had AMD not released (and then retracted) an ISA document called “AMD Sea Islands Instruction Set Architecture” last month, we would likely know even less about Bonaire’s microarchitecture. The document was retracted at least in part due to the name (since AMD will not be calling the microarchitecture Sea Islands after all), so as a whole AMD isn’t particularly keen on talking about its microarchitecture at this time. At the same time, from a product standpoint this gives you an idea of how AMD intends to smoothly offer both Southern Islands and Bonaire microarchitecture parts together as one product family.

Anyhow, for the sake of our sanity and our discussions, in lieu of an official name from AMD we’re going to retroactively rename AMD’s GCN microarchitectures so we can quickly tell them apart. For the rest of this article and in future articles we will be referring to Southern Islands as GCN 1.0, while Bonaire’s microarchitecture will be GCN 1.1, reflecting the small changes between it and the first rendition of GCN.

Ultimately the differences between GCN 1.0 and GCN 1.1 are extremely minor, but they are real. Despite our general annoyance with how this has been handled, for consumers the difference between a GCN 1.0 card like the 7770 and a GCN 1.1 card like the 7790 should be limited to their innate performance differences, and of course PowerTune. GCN 1.1 or not, Bonaire fits nicely into AMD’s current product stack and is in a position where it’s reasonable for it to be lumped together with GCN 1.0 parts as a single family. It’s really only technical enthusiasts (like ourselves) and programmers who should have any significant reason to care about GCN 1.0 versus GCN 1.1. For everyone else this may as well be another Southern Islands part.

Comments

  • Spunjji - Friday, March 22, 2013 - link

    ...forgive my stupidity. Actual figures of the 7790 here:
    http://www.techpowerup.com/reviews/Sapphire/HD_779...

    Depends on whether we focus on Peak / Max figures to decide whether you or I am closer to the truth. :)
  • Ryan Smith - Friday, March 22, 2013 - link

    Typical Board Power, not Total. TBP is an average rather than a peak like TDP, which is why it's a lower number than TDP.
  • dbcoopernz - Friday, March 22, 2013 - link

    Any details on UVD module? Any changes?

    The Asus Direct Cu-II might make an interesting high power but quiet HTPC card. Any chance of a review?
  • Ryan Smith - Friday, March 22, 2013 - link

    There are no changes that we have been made aware of.
  • haplo602 - Friday, March 22, 2013 - link

    somebody please make this a single slot card and I am sold ... otherwise I'll wait for the 8k radeons ...
  • Shut up and drink - Friday, March 22, 2013 - link

    Has it occurred to anyone else that this is in all probability an OEM release of the "semi-custom" silicon that will find its way into Sony's Playstation 4 in the fall?

    Word has it that Sony has some form of GPU switching tech integrated into the PS4.

    - apologies for the link to something other than Anand but I don't think they ran anything on the story http://www.tomshardware.com/news/sony-ps4-patent-p...

    Initially I presumed this to be some "Optimus"-esque dynamic context switching power saving routine. However, the patent explicitly states, "This architecture lets a user run one or more GPUs in parallel, but only for the purpose of increasing performance, not to reduce power consumption."
    Which struck me as some kind of expansion on the nebulous "hybrid crossfire" tech that AMD has been playing w/since they birthed the 3000 series 780G igpu

    Based off of AMD's previous endeavors in this area on the PC side I would be skeptical of the benefits/merit of pairing the comparatively anemic iGPU's of Kabini w/a presumably Bonaire derived GPU.
    As an aside; since SLI/CFX work by issuing frames to the next GPU available, if one GPU is substantially faster than the other(s), frames get finished out-of-order and the IGP/slower-GPU's tardy frames simply get dropped which may make the final rendered video stuttery/choppy.

    Pairing an IGP with a disproportionately powerful discrete GPU simply does not work for realtime rendering.

    It is certainly possible that with the static nature of the console and perhaps especially the unified nature of the GDDR5 memory pool/bank that performance gains could be had

    However, my digression on the merits of the tech thus far is
    128 + 128 = 256 + 896 = Anand's own deduction of 1152sp's)
  • Shut up and drink - Friday, March 22, 2013 - link

    I pushed submit by mistake...damn...

    oh well...my last point of arithmetic was simply that 1 fully enabled 4 core Kabini's I'm suspecting would have a 128 shader count igpu. Factor in the much ballyhooed 8-core Cpu in the PS4 we would have two Kabini's (128+128=256) + a Bonaire derived 896sp GPU all on some kind of custom MCM style packaging "semi-custom APU" (rumor had it that the majority of Sony's R&D contributions were in the stacking/packaging dept.)

    Anyone concur?
  • Shut up and drink - Friday, March 22, 2013 - link

    ...which jives w/Anand's own piece that ran on the console's unveiling, "Sony claims the GPU features 18 compute units, which if this is GCN based we'd be looking at 1152 SPs and 72 texture units"

    http://www.anandtech.com/show/6770/sony-announces-...
  • A5 - Friday, March 22, 2013 - link

    Yeah, once this came in at 14 CUs with minor architecture changes, it seemed like a likely scenario to me.

    Obviously it isn't going to give you PS4 performance on ports with only 1GB of memory, though.
  • crimson117 - Friday, March 22, 2013 - link

    Good thought, but I sure hope Sony doesn't hamstring its PS4 with a 128-bit memory bus!
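
As an aside on the compute unit arithmetic in the thread above: the 896 SP and 1152 SP figures follow from GCN's organization of 64 stream processors (and 4 texture units) per compute unit. Below is a minimal sketch of that math in Python; the per-CU figures are the commonly cited GCN values and are assumptions when applied to any semi-custom derivative such as the PS4's GPU.

    # Commonly cited per-CU figures for GCN; treated here as assumptions
    # when extrapolating to semi-custom derivatives such as the PS4's GPU.
    SPS_PER_CU = 64     # stream processors per compute unit
    TMUS_PER_CU = 4     # texture units per compute unit

    def gcn_resources(compute_units: int) -> tuple[int, int]:
        """Return (stream processors, texture units) for a given CU count."""
        return compute_units * SPS_PER_CU, compute_units * TMUS_PER_CU

    for name, cus in [("Bonaire (HD 7790)", 14), ("PS4 GPU, as claimed", 18)]:
        sps, tmus = gcn_resources(cus)
        print(f"{name}: {cus} CUs -> {sps} SPs, {tmus} texture units")

    # Bonaire (HD 7790): 14 CUs -> 896 SPs, 56 texture units
    # PS4 GPU, as claimed: 18 CUs -> 1152 SPs, 72 texture units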
