VLIW4: Finding the Balance Between TLP, ILP, and Everything Else

To properly frame why AMD went with a VLIW4 design we’d have to first explain why AMD went with a VLIW5 design. And to do that we’d have to go back even further to the days of DirectX 9, and thus that is where we will start.

Back in the days of yore, when shading was new and pixel and vertex shaders were still separate entities, AMD (née ATI) settled on a VLIW5 design for their vertex shaders. Based on their data this was deemed the ideal configuration for a vertex shader block, as it allowed them to process a 4 component dot product (e.g. w, x, y, z) and a scalar component (e.g. lighting) at the same time.
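
To make that concrete, here is a minimal C sketch of the per-vertex math in question. This is our own illustration rather than AMD's actual shader code, but it shows why 5 slots fit DX9 vertex work so well: the four component multiplies of the dot product are mutually independent, and the scalar lighting term is independent of all of them.

#include <stdio.h>

/* Illustrative only: the kind of per-vertex work a DX9-era vertex shader
 * performs. The four component multiplies are mutually independent, and the
 * scalar attenuation term depends on none of them -- five operations a
 * VLIW5 compiler can pack into a single bundle (w/x/y/z SPs + t-unit). */
static float transform_and_light(const float pos[4], const float row[4],
                                 float intensity, float falloff)
{
    /* Four independent multiplies: these map onto the w/x/y/z SPs. */
    float pw = pos[0] * row[0];
    float px = pos[1] * row[1];
    float py = pos[2] * row[2];
    float pz = pos[3] * row[3];

    /* An independent scalar term: this is what the t-unit handles. */
    float atten = intensity * falloff;

    /* The additions that finish the dot product depend on the products
     * above, so they land in a later VLIW bundle. */
    return (pw + px + py + pz) * atten;
}

int main(void)
{
    const float pos[4] = {1.0f, 2.0f, 3.0f, 1.0f};
    const float row[4] = {0.5f, 0.5f, 0.5f, 0.5f};
    printf("%f\n", transform_and_light(pos, row, 1.0f, 0.8f));
    return 0;
}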

Fast forward to 2007 and the introduction of AMD’s Radeon HD 2000 series (R600), where AMD introduced their first unified shader architecture for the PC. AMD went with a VLIW5 design once more; even though it was their first DX10 product, it still made sense to build something that could optimally handle DX9 vertex shaders. This was also well before GPGPU had a significant impact on the market, as AMD had at best toyed around with the idea late in the X1K series’ lifetime (and well after R600 was started).

Now let us jump to 2008, when Cayman’s predecessors were being drawn up. GPGPU computing is still fairly new – NVIDIA is at the forefront of a market that only amounts to a few million dollars at best – and DX10 games are still relatively rare. With 2+ years to bring up a GPU, AMD has to be looking forward at where things will be in 2010. Their predictions are that GPGPU computing will finally become important, and that DX9 games will fade in importance to DX10/11 games. It’s time to reevaluate VLIW5.

This brings us to the present day and the launch of Cayman. GPGPU computing is taking off, and DX10 & DX11 alongside Windows 7 are gaining momentum while DX9 is well past its peak. AMD’s own internal database of games tells them an interesting story: the average slot utilization is 3.4, meaning that on average a 5th streaming processor is going unused in games. VLIW5, which made so much sense for DX9 vertex shaders, is now becoming too wide, while scalar and narrow workloads are increasing in number. The stage is set for a narrower Streaming Processor Unit; enter VLIW4.

As you may recall from a number of our discussions on AMD’s core architecture, AMD’s architecture is heavily invested in Instruction Level Parallelism (ILP), that is, having instructions in a single thread with no dependencies on each other that can be executed in parallel. With VLIW5 the best case scenario is that 5 instructions can be scheduled together on every SPU every clock, a scenario that rarely happens. We’ve already touched on how in games AMD is seeing an average of 3.4, which is actually pretty good but still only 68% of the theoretical peak of 5 slots. Ultimately extracting ILP from a workload is hard, leading to a wide delta between the best and worst case scenarios.
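
To see why, consider a hypothetical C sketch (ours, not real shader code) of the same amount of arithmetic expressed with high and low ILP; a VLIW compiler can only pack what the dependency chains allow.

#include <stdio.h>

/* High ILP: a, b, c and d have no dependencies on one another, so a VLIW
 * compiler could in principle issue all four operations in one bundle. */
static float high_ilp(float x, float y)
{
    float a = x * 2.0f;
    float b = y * 3.0f;
    float c = x * y;
    float d = x + y;
    return a + b + c + d;   /* the final reduction is dependent work */
}

/* Low ILP: every line consumes the previous line's result, so these
 * operations must issue in separate bundles, leaving the other slots of
 * each bundle empty unless the compiler finds unrelated work. */
static float low_ilp(float x)
{
    float a = x * 2.0f;
    float b = a * 3.0f;
    float c = b * b;
    return c + a;
}

int main(void)
{
    printf("%f %f\n", high_ilp(1.5f, 2.5f), low_ilp(1.5f));
    return 0;
}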

Meanwhile all of this is in stark contrast to Thread Level Parallelism (TLP), which looks for threads that can be run at the same time without any interdependencies. This is where NVIDIA has focused their energies at the high end, as GF100/GF110 are both scalar architectures that rely on TLP to achieve efficient operation.

Ultimately the realization is that AMD’s VLIW5 architecture is not the best architecture going forward. Up until now it has made sense as a high-efficiency, gaming-oriented design, and even today in a gaming part like the 6800 series it’s still a reasonable choice. But AMD needs a new architecture for the future, not only something that’s going to better fit their 3.4 slot average, but something that is better designed for compute workloads. AMD’s choice is an overhauled version of their existing architecture: overall it’s built on a solid foundation, but VLIW5 is too wide to meet their future goals.

The solution is to shrink their VLIW5 SPU to a VLIW4 SPU. Specifically, the solution is to remove the t-unit, the architecture’s 5th and largest SP, which is capable of both regular INT/FP operations and transcendental operations. For regular INT/FP operations this means an SPU is reduced from processing 5 operations at once to 4. For transcendentals, meanwhile, an SPU now ties together 3 of its SPs to process 1 transcendental in the same period of time, a much more severe reduction in theoretical performance: an SPU can only process 1 transcendental + 1 INT/FP operation per clock, as opposed to VLIW5’s 1 transcendental + 4 INT/FP operations (or other combinations).
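
To get a rough feel for that cost, here is a simplified bundle-counting model; this is our own approximation of the behavior described above, not an AMD-supplied formula, and the example instruction mix is arbitrary.

#include <stdio.h>

/* Count the VLIW bundles needed for a mix of transcendental and regular
 * FP/INT operations. Per bundle: VLIW5 issues 1 transcendental (t-unit)
 * alongside 4 regular ops, or 5 regular ops alone; VLIW4 issues 1
 * transcendental (3 ganged SPs) alongside 1 regular op, or 4 regular ops. */
static int bundles(int trans, int regular, int reg_with_trans, int reg_alone)
{
    int n = 0;
    while (trans > 0 || regular > 0) {
        if (trans > 0) {
            trans--;
            regular -= reg_with_trans;
        } else {
            regular -= reg_alone;
        }
        if (regular < 0) regular = 0;
        n++;
    }
    return n;
}

int main(void)
{
    /* Arbitrary example stream: 4 transcendentals mixed with 16 regular ops. */
    printf("VLIW5: %d bundles\n", bundles(4, 16, 4, 5)); /* 4 bundles */
    printf("VLIW4: %d bundles\n", bundles(4, 16, 1, 4)); /* 7 bundles */
    return 0;
}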

There are a number of advantages to this change. As far as compute is concerned, the biggest advantage is that much of the space previously allocated to the t-unit can now be scrounged up to build more SIMDs. Cypress had 20 SIMDs while Cayman has 24; on average Cayman’s shader block is 10% more efficient per mm2 than Cypress’s, taking into account the fact that Cayman’s SPs are a bit larger than Cypress’s to pick up the workload the t-unit would otherwise handle. The SIMD count is further tied to a number of attributes: the number of texture units, the number of threads that can be in flight at once, and the number of FP64 operations that can be completed per clock. The latter is particularly important for AMD’s compute efforts, as they can now retire FP64 FMA/MUL operations at 1/4th their FP32 rate, in the case of a full Cayman up to 384 per clock. Technically speaking they’re no faster per SPU, but with this layout change they have more SPUs to work with, improving their performance.


Fewer SPs per SIMD = More Space For More SIMDs
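
For those keeping score, the unit math behind those figures works out as follows; this is our own tally based on the counts above, with the 16 SPUs per SIMD that both designs share.

#include <stdio.h>

/* Our own tally of the unit counts discussed above; both Cypress and
 * Cayman put 16 SPUs in each SIMD. */
int main(void)
{
    int cypress_sps = 20 /* SIMDs */ * 16 /* SPUs */ * 5 /* SPs */; /* 1600 */
    int cayman_sps  = 24 /* SIMDs */ * 16 /* SPUs */ * 4 /* SPs */; /* 1536 */

    /* FP64 FMA/MUL retires at 1/4 the FP32 rate on Cayman. */
    int cayman_fp64 = cayman_sps / 4;                               /* 384 */

    printf("Cypress SPs: %d, Cayman SPs: %d\n", cypress_sps, cayman_sps);
    printf("Cayman FP64 FMA/MUL per clock: %d\n", cayman_fp64);
    return 0;
}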

There are even ancillary benefits within the individual SPUs. While the SP count changed, the register file did not, leading to less pressure on each SPU’s registers now that only 4 SPs vie for register space. Even scheduling is easier: there are fewer SPs to schedule, and since they’re all alike the scheduler no longer has to take into consideration the difference between the w/x/y/z units and the t-unit.

Meanwhile in terms of gaming the benefits are similar. Games that were already failing to fully utilize the VLIW5 design now have additional SIMDs to take advantage of, and as rendering is still an embarrassingly parallel operation as far as threading is concerned, it’s very easy to further divide the rendering workload into more threads to take advantage of this change. The extra SIMDs mean that Cayman has additional texturing horsepower over Cypress, and the overall compute:texture ratio has been reduced, a beneficial situation for any games that are texture/filtering bound more than they’re compute bound.

Of course any architectural change involves tradeoffs, so it’s not a pure improvement. For gaming the tradeoff is that Cayman isn’t going to be well suited to VLIW5-style vertex shaders; generally speaking games using such shaders already run incredibly fast, and even if they’re GPU-bound in the first place they’re not going to gain much from Cayman. The other big tradeoff is when transcendental operations are paired with vector operations, as Cypress could handle both in one clock while Cayman will take two. It’s AMD’s belief that these operations are rare enough that the loss of performance in this one situation is worth the gain in performance everywhere else.

It’s worth noting that AMD still considers VLIW4 to be a risky/experimental design, or at least this is their rationale for going with it first on Cayman while sticking to VLIW5 elsewhere. At this point we’d imagine the real experiment to already be over, as AMD would already be well in the middle of designing Cayman’s 28nm successor, so they undoubtedly know if they’ll be using VLIW4 in the future.

Finally, the switch to a new VLIW architecture means the AMD driver team has to do some relearning. While VLIW4 is quite similar to VLIW5, it’s not by any means identical, which is both good and bad for performance purposes. The bad news is that many of AMD’s VLIW5-centric shader compiler tricks are no longer valid; at the start, shader compiler performance is going to be worse while AMD learns how to better program a VLIW4 design. The good news is that in time they will learn, meaning there’s the potential for sizable performance increases throughout the lifetime of the 6900 series. That doesn’t mean they’re guaranteed, but we certainly expect at least some improvement in shader performance as the months wear on.

On that note, these VLIW changes do mean that some code is going to have to be rewritten to better deal with the reduction in VLIW width. AMD’s shader compiler goes through a number of steps to try to optimize code, but if kernels were written specifically to organize instructions to go through AMD’s shaders in a 5-wide fashion, then there’s only so much AMD’s compiler can do. Of course code doesn’t have to be written that way, but it is the best way to maximize ILP and hence shader performance.
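
As a hypothetical illustration of what "5-wide" code means (our own C sketch, not a production kernel): a loop hand-unrolled by 5 packs perfectly into VLIW5 bundles, but on VLIW4 the fifth operation spills into a second, mostly empty bundle; re-unrolling by 4 restores the fit.

/* Hypothetical kernels hand-unrolled to match a target VLIW width; tail
 * elements are omitted for brevity. On VLIW5 the five independent MULs
 * below pack into one bundle per iteration; on VLIW4 the fifth MUL spills
 * into a second, mostly-empty bundle. */
void scale_unrolled_by_5(float *out, const float *in, int n, float k)
{
    for (int i = 0; i + 5 <= n; i += 5) {
        out[i + 0] = in[i + 0] * k;
        out[i + 1] = in[i + 1] * k;
        out[i + 2] = in[i + 2] * k;
        out[i + 3] = in[i + 3] * k;
        out[i + 4] = in[i + 4] * k;   /* the odd one out on VLIW4 */
    }
}

/* The same kernel re-unrolled by 4 maps cleanly onto a VLIW4 SPU. */
void scale_unrolled_by_4(float *out, const float *in, int n, float k)
{
    for (int i = 0; i + 4 <= n; i += 4) {
        out[i + 0] = in[i + 0] * k;
        out[i + 1] = in[i + 1] * k;
        out[i + 2] = in[i + 2] * k;
        out[i + 3] = in[i + 3] * k;
    }
}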

VLIW5:

  • 4 32-bit FP MAD
  • Or 2 64-bit FP MUL or ADD
  • Or 1 64-bit FP MAD
  • Or 4 24-bit INT MUL or ADD
  • Plus 1 transcendental or 1 32-bit FP MAD

VLIW4:

  • 4 32-bit FP MAD/MUL/ADD
  • Or 2 64-bit FP ADD
  • Or 1 64-bit FP MAD/FMA/MUL
  • Or 4 24-bit INT MAD/MUL/ADD
  • Or 4 32-bit INT ADD/Bitwise
  • Or 1 32-bit INT MAD/MUL
  • Or 1 64-bit INT ADD
  • Or 1 transcendental plus 1 32-bit FP MAD
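
Putting those per-SPU rates together with the chip’s unit count gives the theoretical peaks. As a quick sanity check of our own, assuming a full 1536 SP Cayman at the 6970’s 880MHz core clock:

#include <stdio.h>

/* Theoretical peaks implied by the per-SPU rates above, for a full Cayman
 * (1536 SPs) at the Radeon HD 6970's 880MHz reference core clock. */
int main(void)
{
    const int    sps       = 1536;
    const double clock_ghz = 0.880;

    /* A MAD counts as 2 FLOPs, and every SP can issue 1 FP32 MAD per clock. */
    double fp32_tflops = sps * 2 * clock_ghz / 1000.0;   /* ~2.70 TFLOPS */

    /* FP64 MAD/FMA runs at 1/4 the FP32 rate. */
    double fp64_tflops = fp32_tflops / 4.0;              /* ~0.68 TFLOPS */

    printf("Peak FP32: %.2f TFLOPS\n", fp32_tflops);
    printf("Peak FP64: %.2f TFLOPS\n", fp64_tflops);
    return 0;
}
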
Comments

  • versesuvius - Friday, December 17, 2010 - link

    Ananke,

    I am not very knowledgeable about this, but I don't think a modern GPU can fit inside a CPU for now. A better idea would be a console on a card. The motherboards in consoles are not much bigger than the large graphics cards of today. A console card for $100 would be great. I am sure there are no technical obstacles that the average electronics wizard cannot overcome in doing that.

    Sure, there is a use for everything. I can imagine that every single human being on earth could find a use for a Ferrari, but the point is that even those who do have one do not use it as often as their other car (Toyota, VW or whatever). In fact, there is rarely a Ferrari that has more than 20,000 km on it, and even that is put on it by successive owners, not one. The average total an ordinary person can stand in a Ferrari is 5,000 km. (Disclaimer: I do not have one. I only read something to that effect somewhere.) Having said that, I do have a sense of the "need for speed". I can remember sitting in front of the university's 80286 waiting for the FE program to spit out the results, one node at a time, click, click, ... . You have millions of polygons; we can have billions of mesh nodes, and that does not even begin to model a running faucet. How's that for the need for speed? I do appreciate the current speeds. However, the CPU deal was and is a straight one. The graphics card deals, today, are not. To be clear, the "and" in "High End"s and "Fool"s is an inclusive one. "Someone will pay for it" was also initiated in the eighties of the last century. By the way, the big question "can it play Crysis" will no longer be. Crysis 2 is coming to the consoles.
  • Quidam67 - Friday, December 17, 2010 - link

    "But can it play Crysis" should be in the Urban dictionary as a satirical reference on graphics code that combines two potent attributes: 1) is way ahead of its time in terms of what current hardware can support 2) is so badly written and optimised that even hardware that should be able to run it still can't.

    In 1000 years' time, when Organic Graphics cards that you can plug into your head still can't run it smoothly @2560*1600 60fps, they will realise the joke was on us and that the code itself was written to run more and more needless loops in order to overwhelm any amount of compute resource thrown at it.
  • Iketh - Friday, December 24, 2010 - link

    LOL
  • marc1000 - Friday, December 17, 2010 - link

    I swear I've read ALL the comments to see if anyone already pointed this out... but no one did.

    I feel a bit disappointed with this launch too (I have a 5770 and wanted to get a 6950, but was hoping for a bigger increase percentage-wise). But one interesting thing is the number of Stream Processors in the new GPUs. By the "pure processor" count this number decreased from 1600 SPs on the 5870 to 1536 SPs on the 6970. But the size of the VLIW processors changed too: it was 5 SPs on the 5870 and is now 4 SPs.

    So we have:
    hd5870 = 1600 SPs / 5 = 320 "processors"
    hd6970 = 1536 SPs / 4 = 384 "processors"

    if we take that 384 and multiply by 5, we would have 1920 SPs in the new generation (on par with many rumors). this is 20% more shaders. and considering AMD is saying that the new VLIW4 is 10% faster than VLIW5, we should have more than a 20% increase in all situations. but this is only true in a minority of tests (like Crysis at 2560x1600 where it is 24%, but in the same game at 1680x1050 the increase is only 16%). and at the same time the minimum FPS got better, yet in other games the difference is smaller.

    but then again, I was expecting a little more. I believe the 6950 will be a worthy upgrade for me, but the expectations were so high that too many people ended up a little disappointed... myself included.
  • Sunburn74 - Tuesday, December 28, 2010 - link

    Well... at least they delivered on time and didn't make you wait 6 more months to simply deliver an equivalent, if not considerably worse, product.
  • Mr Perfect - Friday, December 17, 2010 - link

    Yes, the minimums are appreciated when they're included.

    It would be even better if the framerates were displayed as line graphs instead of bar graphs. That way readers could tell if an average consisted of a lot of high peaks and low valleys, or really was a nice smooth experience all the way through. Some other review sites use line graphs, and while I visit Anandtech for its timeliness, professionalism, industry insight and community involvement, I go to the other sites for the actual performance numbers.
  • Quidam67 - Friday, December 17, 2010 - link

    There is further rationale for splitting the article. Let's say someone is googling "HD 6970 architecture"; perhaps they will pick up this review, or perhaps they won't, but either way, if they see that it is actually a review of the cards, they might be inclined to bypass it in favour of a more focused piece.

    And again, there is no reason why the Architecture Article can't provide a hyperlink to the review, if the reader then decides they want to see how that architecture translates into performance on the current generation of cards supporting it.

    I really hope AT are reading this and giving it some consideration. As you say, they are a great site and no one is disputing that, but it's not a religion, so you should be allowed to question it without being accused of blasphemy :O)
  • dustcrusher - Friday, December 17, 2010 - link

    It really comes down to how important the mainstream market is. If they are a large enough segment of the market, one company using a simple, easy-to-grasp naming convention would likely grab some market share. Make it easy to buy your product and at least some people will be more likely to do so.

    If not, then it's fun to talk about but not terribly important. Tech-savvy folk will buy whatever meets their needs price/performance-wise after doing research, even if a card is named the Transylvania 6-9000 or the Wankermeister GTFO. Eager to please tech-naive folk are going to buy the largest model number they can get with the money they have, because "larger model numbers = bigger/better equipment" is a long-established consumer shorthand.

    I have a half-baked idea for a model numbering system that's based around the key specs of the card- it's a 5 digit system where the first digit is the hardware platform ID (like what we have now, mostly) and the other four would represent combinations of other specs (one digit could be the lowest memory clock speed and bus width would be 1, the next lowest memory clock speed and lowest bus width would be 2, etc).

    No idea if this could actually be implemented- there are probably too many variables with GPU/memory clock speeds, among other things.
  • Shinobi_III - Saturday, December 18, 2010 - link

    If you ever saw Nvidia 4xAA in action, you know it's not as smooth as the Radeon implementation (especially in motion), and z-buffer miscalculations have always been an Nvidia feature.

    Go up a hill in Fallout New Vegas and look at Vegas on the horizon; with Nvidia cards it always looks like a disco due to meshes overlapping. Now do the same on a Radeon.
  • TheUsual - Saturday, December 18, 2010 - link

    Right now, Newegg has a 6870 for $200 after rebate. Two of these make for an awesome value at $400. The top tier of cards doesn't give a corresponding increase in performance for the extra cost. Two 6950s cost 50% more but do not give you 50% more FPS. Two GTX 460 1GBs are also a great bang for the buck at $300.

    Neither of these lets you do triple SLI/XFIRE, however. That would be what you'd be paying extra for.

    My hope is that the price will drop on the 6950 by around February. By then the GTX 560 should be out and might drive prices down some. The benchmarks could change some with Sandy Bridge too, if they are currently CPU bound.
