A Quick Primer on ILP

NVIDIA throws ILP (instruction-level parallelism) out the window, while AMD tackles it head on.

ILP is parallelism that can be extracted from a single instruction stream. For instance, if I have a lot of math that doesn't depend on the results of previous instructions, it is perfectly reasonable to execute all of that math in parallel.

For this example, on my imaginary architecture, the instruction format is:

LineNumber INSTRUCTION dest-reg, source-reg-1, source-reg-2

This is compiled code for adding 8 numbers together. (i.e. A = B + C + D + E + F + G + H + I;)

1 ADD r2,r0,r1     ; r2 = B + C
2 ADD r5,r3,r4     ; r5 = D + E
3 ADD r8,r6,r7     ; r8 = F + G
4 ADD r11,r9,r10   ; r11 = H + I
5 ADD r12,r2,r5    ; r12 = (B+C) + (D+E)
6 ADD r13,r8,r11   ; r13 = (F+G) + (H+I)
7 ADD r14,r12,r13  ; r14 = A, the final sum
8 [some totally independent instruction]
...

Lines 1, 2, 3, and 4 could all be executed in parallel if hardware is available to handle it. Line 5 must wait for lines 1 and 2, line 6 must wait for lines 3 and 4, and line 7 can't execute until all the other computation is finished. Line 8 can execute whenever hardware is available.

For the above example, two-wide hardware can achieve optimal throughput (ignoring, or assuming full-speed handling of, read-after-write hazards, but that's a whole other issue). On AMD's five-wide hardware, we can't achieve optimal throughput unless the code that follows offers much more opportunity to extract ILP. Here's why:

From the above block, we can immediately execute five operations at once: lines 1, 2, 3, 4, and 8. Next, we can only execute two operations together: lines 5 and 6 (three execution units go unused). Finally, we must execute instruction 7 all by itself, leaving four execution units unused.
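Those cycle counts can be checked with a small greedy list-scheduling sketch (hypothetical Python, not any real compiler): each cycle we issue up to `width` instructions whose source registers are ready.

```python
# Greedy list-scheduling sketch for the 8-instruction example above.
# Each instruction: line number -> (dest, src1, src2).
# Line 8, the "totally independent" instruction, is modeled with
# made-up registers (r15-r17) that nothing else touches.
INSTRS = {
    1: ("r2", "r0", "r1"),
    2: ("r5", "r3", "r4"),
    3: ("r8", "r6", "r7"),
    4: ("r11", "r9", "r10"),
    5: ("r12", "r2", "r5"),
    6: ("r13", "r8", "r11"),
    7: ("r14", "r12", "r13"),
    8: ("r15", "r16", "r17"),
}

def schedule(width):
    """Return per-cycle issue groups for a machine issuing `width` ops/cycle."""
    # Registers not written by any pending instruction start out "ready".
    ready = {f"r{i}" for i in range(32)} - {d for d, _, _ in INSTRS.values()}
    pending = dict(INSTRS)
    cycles = []
    while pending:
        # Issue the lowest-numbered ready instructions, up to `width` of them.
        issue = [n for n, (_, a, b) in sorted(pending.items())
                 if a in ready and b in ready][:width]
        cycles.append(issue)
        for n in issue:
            ready.add(pending.pop(n)[0])  # result becomes available next cycle
    return cycles

# 2-wide: 4 cycles, every slot full (8 ops / 2 per cycle -- optimal).
# 5-wide: 3 cycles, but cycles 2 and 3 leave execution units idle.
```

Running `schedule(5)` reproduces the groups described in the text: (1, 2, 3, 4, 8), then (5, 6), then 7 alone.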

The limits on extracting ILP come from the program itself (the mix of independent and dependent instructions), the hardware resources (how much can be done at once from the same instruction stream), the compiler (how well it organizes basic blocks into something from which the hardware can best extract ILP), and the scheduler (the hardware that takes independent instructions and schedules them to run simultaneously).

Extracting ILP is one of the most heavily researched areas of computing and was the primary focus of CPU design until the advent of multicore hardware. But it is still an incredibly tough problem to solve, and the benefits vary based on the program being executed.

The instruction stream above is sent to an AMD and an NVIDIA SP. In the best case, the instruction stream going into AMD's SP should be 1/5th the length of the one going into NVIDIA's SP (as in, AMD should be executing 5 ops per SP vs. 1 per SP for NVIDIA), but as you can see in this example, the instruction stream is around half the height of the one in the NVIDIA column. The more ILP AMD can extract from the instruction stream, the better its hardware will do.

AMD's RV770 (and R6xx-based hardware) needs to schedule 5 operations per thread every clock to get the most out of its hardware. This certainly requires a bit of fancy compiler work and internal hardware scheduling, which NVIDIA doesn't need to bother with. We'll explain why in a second.

Instruction Issue Limitations and ILP vs TLP Extraction

Since a great deal of graphics code manipulates vectors like vertex positions (x,y,z,w) or colors (r,g,b,a), lots of things happen in parallel anyway. This is a fine and logical aspect of graphics to exploit, but when it comes down to it, the point of extracting parallelism is simply to maximize utilization of hardware (after all, everything in a scene needs to be rendered before it can be drawn) and hide latency. Of course, building a GPU is not all about extracting parallelism, as AMD and NVIDIA both need to worry about things like performance per square millimeter, performance per watt, and suitability to the code that will be running on it.
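For instance, a single vec4 operation is really four independent scalar operations, which is exactly the kind of parallelism a wide unit can soak up. A minimal sketch (plain Python, purely illustrative):

```python
# A single vec4 add -- e.g. offsetting a vertex position (x, y, z, w) --
# decomposes into four independent scalar adds with no dependencies
# between the lanes, so all four can execute in the same cycle.
def vec4_add(a, b):
    return tuple(ai + bi for ai, bi in zip(a, b))  # 4 independent lanes

vec4_add((1.0, 2.0, 3.0, 4.0), (0.5, 0.5, 0.5, 0.5))  # (1.5, 2.5, 3.5, 4.5)
```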

NVIDIA relies entirely on TLP (thread-level parallelism), while AMD exploits both TLP and ILP. Extracting TLP is much, much easier than extracting ILP, as the only time you need to worry about inter-thread conflicts is when sharing data (which happens far less frequently than dependent instructions occur within a single thread). In a graphics architecture, with the necessity of running millions of threads per frame, there are plenty of threads with which to fill the execution units of the hardware, and thus exploiting TLP to fill the width of the hardware is all NVIDIA needs to do to get good utilization.

There are ways in which AMD's architecture offers benefits though. Because AMD doesn't have to context switch wavefronts every chance it gets and is able to extract ILP, it can be less sensitive to the number of active threads running than NVIDIA hardware (however both do require a very large number of threads to be active to hide latency). For NVIDIA we know that to properly hide latency, we must issue 6 warps per SM on G80 (we are not sure of the number for GT200 right now), which would result in a requirement for over 3k threads to be running at a time in order to keep things busy. We don't have similar details from AMD, but if shader programs are sufficiently long and don't stall, AMD can serially execute code from a single program (which NVIDIA cannot do without reducing its throughput by its instruction latency). While AMD hardware can certainly handle a huge number of threads in flight at one time and having multiple threads running will help hide latency, the flexibility to do more efficient work on serial code could be an advantage in some situations.
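As a sanity check on that "over 3k" figure, here is our arithmetic, assuming G80's published configuration of 32-thread warps and 16 SMs (the warps-per-SM figure is the one quoted in the text):

```python
WARP_SIZE = 32     # threads per warp on G80
WARPS_PER_SM = 6   # warps in flight needed per SM to hide latency
SMS_G80 = 16       # streaming multiprocessors on G80

threads_needed = WARP_SIZE * WARPS_PER_SM * SMS_G80
print(threads_needed)  # 3072 -- hence "over 3k threads" running at a time
```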

ILP is completely ignored in NVIDIA's architecture, because only one operation per thread is performed at a time: there is no way to exploit ILP on a scalar single-issue (per context) architecture. Since all operations need to be completed anyway, using TLP to hide instruction and memory latency and to fill available execution units is a much less cumbersome way to go. We are all but guaranteed massive amounts of TLP when executing graphics code (there can be many thousands of vertices and millions of pixels to process per frame, and with many frames per second, that's a ton of threads available for execution). This makes the stark focus on TLP, at the expense of serial execution and ILP, not a crazy idea, but it is definitely a divergent approach.

Just from the angle of extracting parallelism, we see NVIDIA's architecture as the more elegant solution. How can we say that? The ratio of realizable to peak theoretical performance. Sure, Radeon HD 4870 has 1.2 TFLOPS of compute potential (800 execution units * 2 flops/unit (for a multiply-add) * 750MHz), but in the vast majority of cases we'll look at, NVIDIA's GeForce GTX 280 with 933.12 GFLOPS ((240 SPs * 2 flops/unit (for multiply-add) + 60 SFUs * 4 flops/unit (when doing 4 scalar muls paired with MADs run on SPs)) * 1296MHz) is the top performer.
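The peak numbers quoted above break down as straightforward arithmetic from the unit counts and clocks given in the text:

```python
# Radeon HD 4870: 800 execution units, 2 flops each (multiply-add),
# 750 MHz core clock.
rv770_gflops = 800 * 2 * 0.750  # 1200 GFLOPS, i.e. 1.2 TFLOPS

# GeForce GTX 280: 240 SPs at 2 flops each (multiply-add), plus 60 SFUs
# at 4 flops each (four scalar MULs paired with the SPs' MADs),
# at the 1296 MHz shader clock.
gt200_gflops = (240 * 2 + 60 * 4) * 1.296  # 933.12 GFLOPS
```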

But that doesn't mean NVIDIA's architecture is necessarily "better" than AMD's. There are a lot of factors that go into making something better, not the least of which is real-world performance and value. But before we get to that, there is another important point to consider: efficiency.

215 Comments
  • paydirt - Wednesday, June 25, 2008 - link

    This is a review site. This isn't a site to market/promote products.
  • formulav8 - Thursday, June 26, 2008 - link

    They do recommend hardware for different price points and such. So they do market in a way. Have you seen anands picks links? That is promoting products and does it through his referral links as well to get paid to do so. :)

    Anyways, mentioning something as a better buy up to a certain price point would be helpful to someone who is not really in the know.



    Jason
  • shadowteam - Wednesday, June 25, 2008 - link

    You've got excellent written skills buddy, and I can't help thinking you're actually better at reviews than your m8 (no offence Anand), but what I truly meant from my post above is what you summed up rather well in your conclusive lines, quote: "You can either look at it as AMD giving you a bargain or NVIDIA charging too much, either way it's healthy competition in the graphics industry once again (after far too long of a hiatus)"

    Either way? Why should anyone look the other way? NV is clearly shitting all over the place, and you can tell that from the email they send you (or Anand) a couple days back. So they ripped us off for 6 months, and now suddenly decide the 9800GTX is worth $200?

    Healthy competition? Could you please elaborate on this further?
    $199 4850 vs $399 GTX260.... yup! that's healthy

    GTX+ vs 4850?
    Does that mean the GTX260 is now completely irrelevant? In fact, the 2xx series is utterly pointless no matter how you look at it.

    To bash on AMD, the 4870 is obviously priced high. For $100 extra, all you get is an OC'ed 4850 w/ DDR5 support. I don't think anyone here cares about DDR5, all that matters is performance, and the extra bucks plainly not worth it. From a consumers' perspective, the 4850 is the best buy, the 4870 isn't.
  • mlambert890 - Sunday, July 13, 2008 - link

    "200 series is utterly pointless"

    Yep... pointless unless you want the fastest card (280), then it has a point.

    Pointless to YOU possibly because you're focusing on perf per dollar. Good for you. Nice of you to presume to force that view on the world.

    Absolute performance? GTX 280 seems near the top of every benchmark there bud. Both in single card and in SLI where, last I checked, it gives up maybe TWO instances to the 4870CF - Bioshock and CoD and in both cases framerates are north of 100 at 2560. The 4870, on the other hand, falls WELL short of playable at that res in CF in most other benches.

    High res + high perf = 200 series. Sorry if thats offensive to the egos of those who cant afford the cards.

    Theres a lot in life we can and cant afford. Should have ZERO impact on ABSOLUTE PERFORMANCE discussions.
  • FITCamaro - Wednesday, June 25, 2008 - link

    AMD/ATI has to make some money somewhere. And regardless, at $300, the 4870 is a hell of a deal compared to the competition. Yes the 4850 is probably the best value. But the 4870 is still right behind it if you want a decent amount of extra performance at a great price.

    Nvidia may have the fastest thing out there. But only the richest, most brain dead idiots who have not a care in the world about how they spend their (or their parents) money will buy it with cards like the 4850 and 4870 available.

    And its pretty sad when your new $650 high end card is routinely beat by two of your last generation cards (8800GT) that you can get for $150 each or less. It wouldn't be as big a deal if the new card was $300-350 but at $650, it should be stomping on it.

    I think Nvidia is in for a reality check for what people want. If their new chips are only going to cater to the top 1% of the market, they're going to find themselves quickly in trouble. Especially with the all the issues their chipsets have for 6 months after release. And their shoddy drivers. I mean this past Friday I decided to try and set up some profiles so that when I started up Age of Conan, it would apply an overclock to my GPU and unapply it after I exited, it ended up locking up my PC continuously. I had to restore my OS from a backup disc because not even completely uninstalling and reinstalling my nvidia chipset and video drivers fixed it. And in my anger, I didn't back up my "My Documents" folder so I lost 5 years worth of stuff, largely pictures.
  • mlambert890 - Sunday, July 13, 2008 - link

    "Nvidia may have the fastest thing out there. But only the richest, most brain dead idiots who have not a care in the world about how they spend their (or their parents) money will buy it with cards like the 4850 and 4870 available."

    You just summed it up in that first sentence there bud. NVidia has the fastest thing out there. The rest is just opinion, bitterness and noise.

    I notice that the tone of the "enthusiast" community seems to be laser focused on cost now. This is like car discussions. People want to pretend to be "Xtreme" but what they really want to see is validation of whatever it is THEY can afford.

    Have fun with the 4870 by all means, its a great card. But the GTX280 IS faster. Did NVidia price it too high? Dont know and dont care.

    These are PERFORMANCE forums to all of the people that dont get that. Maybe even the editors need to be reminded.

    If I want to see an obsession with "bang for the buck" Ill go to Consumer Reports.

    I mean seriously. How much of a loser are you when you're taking a shot like "your PARENTS money"? LOL...

    Personally, I treat the PC hobby as an expensive distraction. Ive been a technology pro for 15 years now and this is my vice. As an adult earning my own money, I can decide how I spend it and the difference between $500 and a grand isnt a big deal.

    The rhetoric on forums is really funny. People throw the "kid/parents" insult around a lot, but I think its more likely that the people who take prices beyond what they can afford as some kind of personal insult are more likely the kids here.
  • formulav8 - Thursday, June 26, 2008 - link

    "Nvidia may have the fastest thing out there. But only the richest, most brain dead idiots who have not a care in the world about how they spend their (or their parents) money will buy it with cards like the 4850 and 4870 available."


    Yuk Yuk Yuk :)



    Jason
  • drpepper128 - Wednesday, June 25, 2008 - link

    To be honest, while I was reading the article I felt as if the article seemed a little ATI biased, but I guess that goes to show you that two different people can get drastically different opinions from the same article.

    The real reason I’m posting this is I want to thank you guys for writing some of the best articles that Anandtech has ever written. I read every page and enjoyed the whole thing. Keep up the great work guys and I look forward to reading more (especially about Nehalem and anything relating to AMD’s future architecture).

    Also, is GDDR5 coming to the 4850 ever? If so, maybe it would be a drastically better buy.

    Thank you,
    drpepper128
  • Clauzii - Wednesday, June 25, 2008 - link

    Damn, You R pissed!! :O

    OK, get some sleep and wake up smiling tomorrow, knowing that It's ATI needing to raise prices - - - and go get that 4870 :))
  • Clauzii - Wednesday, June 25, 2008 - link

    OH, " ... that It's NOT ATI needing to ... "

    BTW: I actually read the review as pretty neutral, making a hint here and there that the further potential of the HD4870 is quite big :)
