
A Quick Refresher: Graphics Core Next

One of the things we’ve seen as a result of the shift from pure graphics GPUs to mixed graphics and compute GPUs is how NVIDIA and AMD go about making their announcements and courting developers. With graphics GPUs there was no great need to discuss products or architectures ahead of time; a few choice developers would get engineering sample hardware a few months early, and everyone else would wait for the actual product launch. With the inclusion of compute capabilities however comes the need to approach launches in a different manner, a more CPU-like manner.

As a result both NVIDIA and AMD have begun revealing their architectures to developers roughly six months before the first products launch. This is very similar to how CPU launches are handled, where the basic principles of an architecture are publicly disclosed months in advance. All of this is necessary as the compute (and specifically, HPC) development pipeline is far more focused on optimizing code around a specific architecture in order to maximize performance; whereas graphics development is still fairly abstracted by APIs, compute developers want to get down and dirty, and to do that they need to know as much about new architectures as possible, as soon as possible.

It’s for these reasons that AMD announced Graphics Core Next, the fundamental architecture behind AMD’s new GPUs, back in June of this year at the AMD Fusion Developers Summit. There are some implementation and product specific details that we haven’t known until now, and of course very little was revealed about GCN’s graphics capabilities, but otherwise on the compute side AMD is delivering on exactly what they promised 6 months ago.

Since we’ve already covered the fundamentals of GCN in our GCN preview, and since the Radeon HD 7970 is primarily a gaming product, we’re not going to go over GCN in depth here; I’d encourage you to read our preview to fully understand the intricacies of GCN. But if you’re not interested in that, here’s a quick refresher on GCN with details pertinent to the 7970.

As we’ve already seen in some depth with the Radeon HD 6970, VLIW architectures are very good for graphics work, but they’re poor for compute work. VLIW designs excel in high instruction level parallelism (ILP) use cases, which graphics falls under quite nicely thanks to the fact that with most operations, pixels and the color component channels of pixels are independently addressable data. In fact, at the time of the Cayman launch AMD found that the average slot utilization factor for shader programs on their VLIW5 architecture was 3.4 out of 5, reflecting the fact that most shader operations were operating on pixels or other data types that could be scheduled together.
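The compiler-scheduled slot packing behind that 3.4-out-of-5 figure can be sketched as a toy model (purely illustrative: the instruction names and dependency sets below are invented, and a real VLIW5 compiler is far more sophisticated):

```python
# Toy model of VLIW5 slot packing: the compiler bundles up to 5
# independent instructions per issue slot; an instruction whose
# inputs aren't yet computed must wait for a later bundle.
# Assumes an acyclic dependency graph (a sketch, not a real scheduler).
def pack_vliw(instructions, width=5):
    """Greedily pack (name, deps) pairs into VLIW bundles."""
    bundles, done = [], set()
    pending = list(instructions)
    while pending:
        bundle, still_pending = [], []
        for name, deps in pending:
            # Issue only if all inputs were computed in earlier bundles
            # and a slot is still free in this bundle.
            if deps <= done and len(bundle) < width:
                bundle.append(name)
            else:
                still_pending.append((name, deps))
        done.update(bundle)
        bundles.append(bundle)
        pending = still_pending
    return bundles

# A pixel's four color channels are independent, so they pack into
# one bundle; a dependent chain cannot be packed at all.
pixel = [("r", set()), ("g", set()), ("b_ch", set()), ("a", set())]
chain = [("a", set()), ("b", {"a"}), ("c", {"b"})]
print(pack_vliw(pixel))  # one bundle of 4: 4/5 slot utilization
print(pack_vliw(chain))  # three bundles of 1: 1/5 slot utilization
```

The pixel case is why graphics maps so well to VLIW, while the dependent chain is a miniature version of the serial compute workloads discussed below.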

Meanwhile, at a hardware level VLIW is a unique design in that it’s the epitome of the “more is better” philosophy. AMD’s high stream processor counts with VLIW4 and VLIW5 are a result of VLIW being a very thin type of architecture that purposely uses many simple ALUs, as opposed to fewer complex units (e.g. Fermi). Furthermore all of the scheduling for VLIW is done in advance by the compiler, so VLIW designs are in effect very dense collections of simple ALUs and cache.

The hardware traits of VLIW mean that for a VLIW architecture to work, the workloads need to map well to the architecture. Complex operations that the simple ALUs can’t handle are bad for VLIW, as are instructions that aren’t trivial to schedule together due to dependencies or other conflicts. As we’ve seen graphics operations do map well to VLIW, which is why VLIW has been in use since the earliest pixel shader equipped GPUs. Yet even then graphics operations don’t achieve perfect utilization under VLIW, but that’s okay because VLIW designs are so dense that it’s not a big problem if they’re operating at under full efficiency.

When it comes to compute workloads however, the idiosyncrasies of VLIW start to become a problem. “Compute” covers a wide range of workloads and algorithms; graphics algorithms may be rigidly defined, but compute workloads can be virtually anything. On the one hand there are compute workloads such as password hashing that are every bit as embarrassingly parallel as graphics workloads are, meaning these map well to existing VLIW architectures. On the other hand there are tasks like texture decompression which are parallel but not embarrassingly so, which means they map poorly to VLIW architectures. At one extreme you have a highly parallel workload, and at the other you have an almost serial workload.
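The contrast can be sketched in a few lines (a simplified illustration using Python's standard hashlib, run on the CPU here; the workloads are stand-ins for the classes of algorithms, not real GPU kernels):

```python
import hashlib

# Embarrassingly parallel: every item is independent, so thousands
# of GPU threads could each take one with no coordination.
passwords = ["hunter2", "letmein", "swordfish"]
hashes = [hashlib.sha256(p.encode()).hexdigest() for p in passwords]
# Each list element could be computed by a different thread.

# Nearly serial: each step consumes the previous step's output, so
# the work cannot be spread across threads no matter how many exist.
digest = b"seed"
for _ in range(1000):
    digest = hashlib.sha256(digest).digest()
```

The first loop is the kind of workload any wide architecture handles well; the second is the kind that exposes a VLIW design's reliance on the compiler finding independent work.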


Cayman, A VLIW4 Design

So long as you only want to handle the highly parallel workloads, VLIW is fine. But using VLIW as the basis of a compute architecture is going to limit what tasks your processor is good at. If you want to handle a wider spectrum of compute workloads you need a more general purpose architecture, and this is the situation AMD faced.

But why does AMD want to chase compute in the first place when they already have a successful graphics GPU business? In the long term GCN plays a big part in AMD’s Fusion plans, but in the short term there’s a much simpler answer: because they have to.

In Q3’2011 NVIDIA’s Professional Solutions Business (Quadro + Tesla) had an operating income of $95M on $230M in revenue. Their (consumer) GPU business had an operating income of $146M, but on a much larger $644M in revenue. Professional products have much higher profit margins and it’s a growing business, particularly the GPU computing side. As it stands NVIDIA and AMD may have relatively equal shares of the discrete GPU market, but it’s NVIDIA that makes all the money. For AMD’s GPU business it’s no longer enough to focus only on graphics; they need a larger piece of the professional product market to survive and thrive in the future. And thus we have GCN.
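Working through the quoted figures makes the margin gap concrete (a quick sanity check; figures in millions of dollars, as quoted above):

```python
# Operating margin = operating income / revenue, using the Q3'2011
# NVIDIA figures quoted above (in millions of dollars).
pro_income, pro_revenue = 95, 230    # Professional (Quadro + Tesla)
gpu_income, gpu_revenue = 146, 644   # Consumer GPU

pro_margin = pro_income / pro_revenue   # roughly 41%
gpu_margin = gpu_income / gpu_revenue   # roughly 23%
print(f"Professional: {pro_margin:.0%}, Consumer: {gpu_margin:.0%}")
```

Nearly twice the operating margin on about a third of the revenue is exactly the kind of business worth redesigning an architecture for.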


  • SSIV - Saturday, February 18, 2012 - link

Since there's a new driver out for their cards we can now regard these results with a grain of salt. Revise the benchmarks!
  • DaOGGuru - Thursday, March 01, 2012 - link

I don't know why people keep forgetting about the 560 Ti 2Win. Yes, I said 2Win = 2 560 Ti processors on one card. It still kills the 7970 numbers in BF3 by 20 FPS, and is the same price. It also beats the 580 and is cheaper. It's a single card with a 50-amp min. draw and it will smoke anything except the 590 and the 6990...

    http://www.guru3d.com/article/evga-geforce-gtx-560...
  • CeriseCogburn - Thursday, March 08, 2012 - link

    Oh, right, well this isn't an nvidia card review, so we won't hear from 50 posts about how some CF (would be SLI of course in this case) combo will whip the crap out of it in performance and price...
    You know ?
    That's how it goes...
Usually the article itself rages on about how some amd CF combo is really so much good and better and blah blah blah.... then the price perf, then the results - on and on and on ....
    ---
    The angry ankle biters are swarmed up on the under red dog radeon side...
    --
So you made a very good point, I'm just sorry it took 29 pages of reading to get to it, in its glorious singularity.... you shouldn't strike out in independent thought like that, it's dangerous.... not allowed unless the card being reviewed is an nvidia !!!!
  • DaOGGuru - Thursday, March 01, 2012 - link

oops... forgot to say: look at the previous post's links' BF3 rating for the 560 Ti 2Win and compare to this chart's 7970 FPS. The 2Win is pumping out ~20 more FPS and is $50.00 - $100.00 cheaper than the 7970... lame.. ATI is still behind Nvidia but proud of it! lol They are just now catching up to Nvidia's tessellation, and oh, AFTER they changed to a "cuda core copy" architecture and posting it as big news... EVGA's older 560 Ti 2Win still dusts it by 20FPS.. lame.
  • DaOGGuru - Thursday, March 01, 2012 - link

sorry, 10FPS not 20.. it's late.
  • DaOGGuru - Thursday, March 01, 2012 - link

I don't get what the hubbub is about the 7970.. sure, it's the fastest single GPU; BUT, for $50.00-$100.00 less you can get the 560 Ti 2Win (dual GPU) that smokes the 7970, and the 2Win PCB does have an SLI bridge and is capable of doing SLI to a second card, but it's currently locked by Nvidia (see paragraph 3).

Also, the 2Win draws a min of only 50 amps (way less than most SLI configurations), 1. has a considerably lower noise dBA, 2. runs cooler and with less power than almost all the high end cards and 3. will run 3 monitors in Nvidia 2D and 3D surround off a single card! 4. Will kill the GTX 580 by ~23-33% (depending on review) 5. Will beat the 590 in some sample testing for TDP. And finally 6. will kill the 7970 by 10-20FPS in BF3, including by 10FPS in 1920x1200 4AA-16AF Ultra high mode. So, why have people forgotten the 2Win? It's a single-card, multi-GPU, full 3D/2D surround without a second card in SLI, $500.00USD beast!

    OH and for those that say you can't SLI with a second 2win.... http://www.guru3d.com/article/evga-geforce-gtx-560... (this review states on conclusion page) > quote " you will have noticed there is a SLI connector on the PCB. Unfortunately you can not add a second card to go for quad-SLI mode. It's not a hardware limitation, yet a limitation set by NVIDIA, the GTX 560 Ti series is only allowed in 2-way SLI mode, which this card already is."

... So actually, the card is capable of 2-card SLI but Nvidia for some (gosh awful) reason won't let the dog off the chain. Probably because it would absolutely kill the need for a GTX 580, 570, or 560 Ti SLI configuration forever!

    Resources: (pay attention to BF3 FPS and compare to 7970 FPS in this article.)
    http://www.anandtech.com/show/5048/evgas-geforce-g...
    http://www.guru3d.com/article/evga-geforce-gtx-560...
    Peace...
  • CeriseCogburn - Thursday, March 08, 2012 - link

    Ummm.... I read you, I see your frustration with all the posts - just refer to my one above there - you really should not be dissing the new amd like that - they like are 1st and uhh... nvidia is evil... so no comparisons like that are allowed when the fanboy side content is like 100 to 1....
    Now next nvidia card review you will notice a hundred posts on how this or that CF beats the nvidia in price perf and overall perf, etc, and it will be memorized and screamed far and wide...
    Just like... your point "doesn't count", okay ?
    It's best to ignore you GREEN fanboy types... ( yes even if you point out gigantic savings, or rather especially when you do...)
    Thanks for waiting till page 30 - a wise choice.
  • CeriseCogburn - Sunday, March 11, 2012 - link

    Southern Islands is a whole generation late. AMD promised us this SI in the last generation 6000 series. Then right before that prior release, they told us they had changed everything and 6000 was not Southern Islands anymore. LOL
    Talk about late - it's what two years late ?
    Maybe it's three years....
    In every case here, Nvidia beat them to the core architecture by two years. Now amd is merely late to the party crashing copycats....
    That's late son, that's not original, that's not innovative, that's not superior, it's tag a long tu loo little sister style.
  • warmbit - Tuesday, April 10, 2012 - link

Here is a link to an interesting performance overview of the Radeon 7970 from 5 websites, compared against its competitors, the GTX 580 and 6970.

    Analysis of the results of the Radeon 7970 in 18 games and 6 resolutions:
    http://translate.google.pl/translate?hl=pl&sl=...

You will see the average performance ratios between these cards, and find out which graphics card is better at each game and resolution.
