A Quick Refresher: Graphics Core Next

One of the things we’ve seen as a result of the shift from pure graphics GPUs to mixed graphics and compute GPUs is how NVIDIA and AMD go about making their announcements and courting developers. With graphics GPUs there was no great need to discuss products or architectures ahead of time; a few choice developers would get engineering sample hardware a few months early, and everyone else would wait for the actual product launch. With the inclusion of compute capabilities however comes the need to approach launches in a different manner, a more CPU-like manner.

As a result both NVIDIA and AMD have begun revealing their architectures to developers roughly six months before the first products launch. This is very similar to how CPU launches are handled, where the basic principles of an architecture are publicly disclosed months in advance. All of this is necessary as the compute (and specifically, HPC) development pipeline is far more focused on optimizing code around a specific architecture in order to maximize performance; whereas graphics development is still fairly abstracted by APIs, compute developers want to get down and dirty, and to do that they need to know as much about new architectures as possible, as soon as possible.

It’s for these reasons that AMD announced Graphics Core Next, the fundamental architecture behind AMD’s new GPUs, back in June of this year at the AMD Fusion Developers Summit. There are some implementation and product specific details that we haven’t known until now, and of course very little was revealed about GCN’s graphics capabilities, but otherwise on the compute side AMD is delivering on exactly what they promised 6 months ago.

Since we’ve already covered the fundamentals of GCN in our GCN preview, and since the Radeon HD 7970 is primarily a gaming product, we’re not going to go over GCN in depth here, but I’d encourage you to read our preview to fully understand the intricacies of GCN. But if you’re not interested in that, here’s a quick refresher on GCN with details pertinent to the 7970.

As we’ve already seen in some depth with the Radeon HD 6970, VLIW architectures are very good for graphics work, but they’re poor for compute work. VLIW designs excel in high instruction level parallelism (ILP) use cases, which graphics falls under quite nicely thanks to the fact that with most operations, pixels and the color component channels of pixels are independently addressable data. In fact at the time of the Cayman launch AMD found that the average slot utilization factor for shader programs on their VLIW5 architecture was 3.4 out of 5, reflecting the fact that most shader operations were operating on pixels or other data types that could be scheduled together.
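To make the slot utilization idea concrete, here is a toy slot-packing model (purely illustrative; AMD's actual shader compiler is far more sophisticated) that greedily bundles mutually independent operations into VLIW5 issue slots and reports how full the slots end up:

```python
# Toy model of VLIW5 slot packing (illustrative only, not AMD's compiler):
# greedily bundle up to 5 mutually independent operations per issue cycle
# and report average slot utilization.
def pack_vliw(ops, width=5):
    """ops: list of (name, set_of_dependency_names) tuples."""
    bundles = []
    done = set()          # names of ops completed in earlier bundles
    pending = list(ops)
    while pending:
        bundle, finished = [], set()
        for op in list(pending):
            name, deps = op
            # An op joins the bundle only if all its dependencies were
            # completed in *earlier* bundles and a slot is still free.
            if deps <= done and len(bundle) < width:
                bundle.append(name)
                finished.add(name)
                pending.remove(op)
        bundles.append(bundle)
        done |= finished
    used = sum(len(b) for b in bundles)
    return bundles, used / (len(bundles) * width)

# A pixel's four color channels are independent, so they pack into one
# bundle; the dependent "blend" op forces a second, mostly empty bundle.
ops = [("r", set()), ("g", set()), ("b", set()), ("a", set()),
       ("blend", {"r", "g", "b", "a"})]
bundles, util = pack_vliw(ops)
print(bundles)  # [['r', 'g', 'b', 'a'], ['blend']]
print(util)     # 0.5 -> 5 of 10 slots filled
```

In this toy case the dependency chain drags utilization down to 2.5 of 5 slots on average; AMD's observed 3.4 of 5 reflects the same effect on real shader code.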

Meanwhile, at a hardware level VLIW is a unique design in that it’s the epitome of the “more is better” philosophy. AMD’s high stream processor counts with VLIW4 and VLIW5 are a result of VLIW being a very thin type of architecture that purposely uses many simple ALUs, as opposed to fewer complex units (e.g. Fermi). Furthermore all of the scheduling for VLIW is done in advance by the compiler, so VLIW designs are in effect very dense collections of simple ALUs and cache.

The hardware traits of VLIW mean that for a VLIW architecture to work, the workloads need to map well to the architecture. Complex operations that the simple ALUs can’t handle are bad for VLIW, as are instructions that aren’t trivial to schedule together due to dependencies or other conflicts. As we’ve seen graphics operations do map well to VLIW, which is why VLIW has been in use since the earliest pixel shader equipped GPUs. Yet even then graphics operations don’t achieve perfect utilization under VLIW, but that’s okay because VLIW designs are so dense that it’s not a big problem if they’re operating at under full efficiency.

When it comes to compute workloads however, the idiosyncrasies of VLIW start to become a problem. “Compute” covers a wide range of workloads and algorithms; graphics algorithms may be rigidly defined, but compute workloads can be virtually anything. On the one hand there are compute workloads such as password hashing that are every bit as embarrassingly parallel as graphics workloads are, meaning these map well to existing VLIW architectures. On the other hand there are tasks like texture decompression which are parallel but not embarrassingly so, which means they map poorly to VLIW architectures. At one extreme you have a highly parallel workload, and at the other you have an almost serial workload.
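That contrast can be sketched in a few lines of Python (an illustrative analogy, not GPU code): hashing a batch of passwords is independent per item and could fan out across any number of lanes, while a chained hash forces every step to wait on the previous one no matter how wide the machine is.

```python
import hashlib

# Embarrassingly parallel: every item is independent, so the work maps
# cleanly onto wide SIMD/VLIW hardware -- lanes never communicate.
def hash_many(passwords):
    return [hashlib.sha256(p.encode()).hexdigest() for p in passwords]

# Inherently serial: iteration i cannot start until i-1 finishes, so
# extra ALUs sit idle regardless of how many the hardware provides.
def hash_chain(seed, rounds):
    digest = seed.encode()
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

parallel = hash_many(["hunter2", "letmein", "password1"])  # 3 independent tasks
serial = hash_chain("hunter2", 1000)                       # 1000 dependent steps
```

A VLIW compiler can do nothing with `hash_chain`: there is only ever one ready instruction per cycle, which is exactly the kind of workload that leaves most of those dense, simple ALUs empty.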


Cayman, A VLIW4 Design

So long as you only want to handle the highly parallel workloads, VLIW is fine. But using VLIW as the basis of a compute architecture is going to limit what tasks your processor is good at. If you want to handle a wider spectrum of compute workloads you need a more general purpose architecture, and this is the situation AMD faced.

But why does AMD want to chase compute in the first place when they already have a successful graphics GPU business? In the long term GCN plays a big part in AMD’s Fusion plans, but in the short term there’s a much simpler answer: because they have to.

In Q3’2011 NVIDIA’s Professional Solutions Business (Quadro + Tesla) had an operating income of 95M on 230M in revenue. Their (consumer) GPU business had an operating income of 146M, but on a much larger 644M in revenue. Professional products have much higher profit margins and it’s a growing business, particularly the GPU computing side. As it stands NVIDIA and AMD may have relatively equal shares of the discrete GPU market, but it’s NVIDIA that makes all the money. For AMD’s GPU business it’s no longer enough to focus only on graphics, they need a larger piece of the professional product market to survive and thrive in the future. And thus we have GCN.
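Running the quoted figures through a quick back-of-the-envelope calculation makes the margin gap concrete (numbers in millions of dollars, taken from the paragraph above):

```python
# Operating margin = operating income / revenue, per NVIDIA's Q3'2011
# segment figures as quoted in the text.
quadro_tesla = {"income": 95, "revenue": 230}   # Professional Solutions
consumer_gpu = {"income": 146, "revenue": 644}  # consumer GPU business

def op_margin(segment):
    return segment["income"] / segment["revenue"]

print(f"Professional: {op_margin(quadro_tesla):.1%}")  # ~41.3%
print(f"Consumer:     {op_margin(consumer_gpu):.1%}")  # ~22.7%
```

Nearly double the operating margin on the professional side, which is the whole argument in one division.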


  • Zingam - Thursday, December 22, 2011 - link

    I think this card is kind of a fail. Well, maybe it is a driver issue and they'll up the performance 20-25% in the future, but it is still not fast enough for such a huge jump - 2 nodes down!!!
    It smells like a graphics Bulldozer for AMD. Good ideas on paper, but in practice something doesn't work quite right. Raw performance is all that counts (of course, raw performance/$).
    If NVIDIA does better than usual this time, AMD might be in trouble. Well, we'll wait and see.
    Hopefully they'll be able to release improved CPUs and GPUs soon because this generation does not seem to be very impressive.

    I expected at least triple the performance of the previous generation. Maybe the drivers are not that well optimized yet. After all, it is a huge architecture change.

    I don't really care that much about that GPU generation but I'm worried that they won't be able to put something impressively new in the next generation of consoles. I really hope that we are not stuck with obsolete CPU/GPU combination for the next 7-8 years again.

    Anyway: massively parallel computing sounds tasty!
  • B3an - Thursday, December 22, 2011 - link

    You don't seem to understand that all those extra transistors are mostly there for computing. That's mostly what this was designed for, not specifically for gaming performance. Computing is where this card will offer massive increases over the previous AMD generation.
    Look at Nvidia's Fermi: it had way more transistors than the previous generation but wasn't that much faster than AMD's cards at the time, because again all the extra transistors were mainly for computing.

    And come on LOL, expecting over triple the performance?? That has never happened once with any GPU release.
  • SlyNine - Friday, December 23, 2011 - link

    The 9700 Pro was up to 4x faster than the 4600 in certain situations. So yes, it has happened.
  • tzhu07 - Thursday, December 22, 2011 - link

    LOL, triple the performance?

    Do you also have a standard of dating only Victoria's Secret models?
  • eanazag - Thursday, December 22, 2011 - link

    I have a 3870 which I got in early 2007. It still does well for the main games I play: Dawn of War 2 and Starcraft 2 (25 fps has been fine for me here with settings mostly maxed). I have been eyeing a new card. I like the power usage and thermals here. I am not spending $500+ though. I am thinking they are using that price to compensate for the mediocre yields they are getting on 28nm, but either way the numbers look justified. I will be looking for the best card between $150-$250, maybe $300. I am counting on this card's price coming down, but I doubt it will hit under $350-$400 next year.

    No matter what, this looks like a successful soft launch of a video card. For me, anything smokes what I have in performance, but not so much on power usage. I really wouldn't mind the extra noise, as the thermals are better than my 3870's.

    I'm in the single card strategy camp.

    Monitor is a single 42" 1920x1200 60 Hz.
    Intel Core i5 760 at stock clocks. My first Intel since the P3 days.

    Great article.
  • Death666Angel - Thursday, December 22, 2011 - link

    Can someone explain the different heights in the die-size comparison picture? Does that reflect processing changes? I'm lost. :D Otherwise, good review. I don't see the HD7970 in Bench; am I blind, or is it just missing?
  • Ryan Smith - Thursday, December 22, 2011 - link

    The Y axis is the die size. The higher a GPU sits, the bigger it is (relative to the other GPUs from that company).
  • Death666Angel - Friday, December 23, 2011 - link

    Thanks! I thought the actual sizes were the sizes and the y-axis meant something else. Makes sense though how you did it! :-)
  • MonkeyPaw - Thursday, December 22, 2011 - link

    As a former owner of the 3870, mine had the short-lived GDDR4. That old card has a place in my nerd heart, as it played Bioshock wonderfully.
  • Peichen - Thursday, December 22, 2011 - link

    The improvement is simply not as impressive as I was led to believe. Rumor had it that a single 7970 would have the power of a 6990. In fact, if you crunch the numbers, it should be at least 50% faster than the 6970, which would put it close to the 6990. (A 63.25% increase in transistors, 40.37% in TFLOPs, and a 50% increase in memory bandwidth.)

    What we got is a first-gen Fermi with the price to match. Remember, this is not a half-node improvement in the manufacturing process, it is a full node, and we waited two years for this.

    In any case, I am just ranting because I am waiting for something to replace my current card before GTA 5 comes out. Nvidia's GK104 in Q1 2012 should be interesting - rumored to be slightly faster than the GTX 580 (slower than the 7970) but much cheaper. We'll see.
