Derek Gets Technical Again: Of Warps, Wavefronts and SPMD

From our GT200 review, we learned a little about thread organization and scheduling on NVIDIA hardware. In speaking with AMD we discovered that sometimes it just makes sense to approach the solution to a problem in similar ways. Like NVIDIA, AMD schedules threads in groups (called wavefronts by AMD) that execute over 4 cycles. Since RV770 has 16 5-wide SPs per SIMD core (each of which processes one "stream" or thread or whatever you want to call it) and each instruction issues over 4 cycles (and because AMD said so), we can conclude that AMD organizes 64 threads into one wavefront, all of which must execute in parallel. From GT200 we learned that NVIDIA further groups warps into thread blocks, and we have just learned that there are two more levels of organization in AMD hardware.
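
If you want to check that arithmetic yourself, here's a back-of-the-envelope sketch (our own illustration, not vendor code); the 8 SPs per GT200 SM come from our earlier GT200 coverage, and everything else comes straight from the paragraph above:

```cuda
// Back-of-the-envelope check of the grouping math above (our illustration,
// not vendor code). RV770: 16 SPs per SIMD core, instructions issued over
// 4 cycles. GT200: 8 SPs per SM, also issued over 4 cycles.
#include <cstdio>

int main() {
    const int issue_cycles       = 4;   // both vendors spread issue over 4 cycles
    const int rv770_sps_per_simd = 16;  // each SP is a 5-wide VLIW unit
    const int gt200_sps_per_sm   = 8;

    const int wavefront_size = rv770_sps_per_simd * issue_cycles;  // 16 x 4 = 64
    const int warp_size      = gt200_sps_per_sm   * issue_cycles;  //  8 x 4 = 32

    printf("AMD wavefront: %d threads, NVIDIA warp: %d threads\n",
           wavefront_size, warp_size);
    return 0;
}
```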

Like NVIDIA, AMD maintains context per wavefront: register space, instruction stream, global constants, and local store space are shared between all threads running in a wavefront, and data sharing and synchronization can be done within a thread block. The larger grouping of thread blocks enables global data sharing using the global data store, but we didn't actually get a name or a specification for this larger grouping. On RV770, one VLIW instruction (up to 5 operations) is broadcast to each of the SPs, each of which runs it on its own unique set of data and its own subset of the register file.
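
To picture what that 5-wide issue wants to be fed, here's a sketch of ours (written in CUDA-flavored C for consistency with the rest of our examples, not in AMD's ISA) contrasting five independent operations, which a VLIW compiler could bundle into one instruction word, with a dependent chain, which it can't:

```cuda
// Our illustration of VLIW packing, not AMD ISA or compiler output.
__device__ float five_independent_ops(float a, float b, float c, float d, float e)
{
    float r0 = a * b;   // none of these five multiplies depends on another,
    float r1 = b * c;   // so a VLIW compiler is free to bundle them into a
    float r2 = c * d;   // single 5-wide instruction word
    float r3 = d * e;
    float r4 = e * a;
    return r0 + r1 + r2 + r3 + r4;
}

__device__ float dependent_chain(float a, float b)
{
    float r = a * b;    // each step needs the previous result, so these
    r = r * a;          // serialize and leave VLIW slots empty no matter
    r = r * b;          // how wide the hardware is
    return r;
}
```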

To put it side by side with NVIDIA's architecture, we've put together a table with what we know about resources per SM / SIMD array.

Feature                               NVIDIA GT200          AMD RV770
Registers per SM/SIMD Core            16K x 32-bit          16K x 128-bit
Registers on Chip                     491,520 (1.875MB)     163,840 (2.5MB)
Local Store                           16KB                  16KB
Global Store                          None                  16KB
Max Threads on Chip                   30,720                16,384
Max Threads per SM/SIMD Core          1,024                 > 1,000
Max Threads per Warp/Wavefront        32                    64
Max Warps/Wavefronts on Chip          960                   We Have No Idea
Max Thread Blocks per SM/SIMD Core    8                     AMD Won't Tell Us

That's right, AMD has 2.5MB of register space.

We love that we have all this data, and both NVIDIA's CUDA programming guide and the documentation that comes with AMD's CAL SDK offer some great low-level info. The problem is that hardcore code tuners really need more information to properly tune their applications. To some extent, graphics takes care of itself, as there are a lot of different things that need to happen in different ways. It's the GPGPU crowd, the pioneers of GPU computing, that will need much more low-level detail on how resource allocation impacts thread issue rates and how to properly fetch and prefetch data to make the best use of external and internal memory bandwidth.
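
As a taste of the kind of resource math we mean, the table above is enough for a rough, and only rough, estimate of how register pressure limits resident threads. A minimal sketch, assuming the table's GT200 figures and deliberately ignoring vendor-specific allocation granularity:

```cuda
// Hedged illustration: with 16K 32-bit registers per GT200 SM (from the table)
// and a 1,024-thread ceiling, per-thread register use caps how many threads
// can be resident. Real hardware rounds allocations; this sketch does not.
#include <cstdio>

int main() {
    const int regs_per_sm    = 16 * 1024;   // registers per SM, from the table
    const int max_threads_sm = 1024;        // hardware ceiling per SM

    for (int regs_per_thread = 8; regs_per_thread <= 64; regs_per_thread *= 2) {
        int resident = regs_per_sm / regs_per_thread;
        if (resident > max_threads_sm)
            resident = max_threads_sm;
        printf("%2d regs/thread -> up to %4d resident threads per SM\n",
               regs_per_thread, resident);
    }
    return 0;
}
```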

But for now, these details are the ones we have, and we hope that programmers used to writing massively data-parallel code will be able to get under the hood and do something with these architectures even before we have an industry standard way to take advantage of heterogeneous computing on the desktop.

Which brings us to an interesting point.

NVIDIA wanted us to push some ridiculous acronym for their SM's architecture: SIMT (single instruction multiple thread). First off, this is a confusing descriptor based on the normal understanding of instructions and threads. But more to the point, there already exists a programming model that nicely fits what NVIDIA and AMD are both actually doing in hardware: SPMD, or single program multiple data. This description is most often attached to distributed memory systems and large scale clusters, but it really is what is going on here.

Modern graphics architectures process multiple data sets (such as a vertex or a pixel and its attributes) with single programs (a shader program in graphics or a kernel if we're talking GPU computing) that are run both independently on multiple "cores" and in groups within a "core". Functionally, we maintain one instruction stream (program) per context and apply it to multiple data sets, and on top of that, multiple contexts can be running the same program independently. As with distributed SPMD systems, not all copies of the program are running at the same time: multiple warps or wavefronts may be at different stages of execution within the same program, with barrier synchronization available to bring them back in step.
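
In CUDA terms, a minimal sketch of that model might look like the following (our own hypothetical kernel, not code from either vendor's documentation); the kernel name and launch parameters are ours:

```cuda
// One program, many data: each thread computes its global index and applies
// the same kernel body to its own element (hypothetical example).
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // which element is mine?
    if (i < n)
        y[i] = a * x[i] + y[i];                     // same instructions, different data
}

// Host side: launch enough 256-thread blocks (8 warps or 4 wavefronts each)
// to cover n elements.
//   saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);
```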

For more information on the SPMD programming model, Wikipedia has a good page on the subject, even though it doesn't yet talk about how GPUs fit into SPMD.

GPUs take advantage of a property of SPMD that distributed systems do not (explicitly anyway): fine-grained resource sharing with SIMD processing where data comes from multiple threads. Threads running the same code can actually physically share the same instruction and data caches and can have high speed access to each other's data through a local store. This is in contrast to larger systems, where each system gets a copy of everything to handle in its own way with its own data at its own pace (and in which messaging and communication become more asynchronous, critical and complex).
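
Here's a hedged sketch of what that fine-grained sharing looks like from the programmer's side, again in CUDA for consistency (a hypothetical example of ours; the 256-thread block size is an assumption):

```cuda
// Sketch of fine-grained sharing through the local store (CUDA calls it
// __shared__ memory): threads stage data, hit a barrier, then read values
// written by *other* threads in the same block. Assumes 256-thread blocks.
__global__ void neighbor_diff(const float *in, float *out, int n)
{
    __shared__ float tile[256];            // lives in the SM's local store
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    if (i < n)
        tile[threadIdx.x] = in[i];         // each thread writes its own slot
    __syncthreads();                       // barrier: every write is now visible

    if (i < n && threadIdx.x > 0)
        out[i] = tile[threadIdx.x] - tile[threadIdx.x - 1];  // read a neighbor's value
}
```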

AMD offers an advantage in the SPMD paradigm in that it maintains a global store (present since RV670) where all threads can share result data globally if they need to (this is something that NVIDIA does not support). This feature allows more flexibility in algorithm implementation and can offer performance benefits in some applications.

In short, the reality of GPGPU computing has been the implementation in hardware of the ideal machine to handle the SPMD programming model. Bits and pieces are borrowed from SIMD, SMT, TMT, and other micro-architectural features to build architectures that we submit should be classified as SPMD hardware in honor of the programming model they natively support. We've already got enough acronyms in the computing world, and it's high time we consolidate where it makes sense and stop making up new terms for the same things.

215 Comments

  • paydirt - Wednesday, June 25, 2008 - link

    This is a review site. This isn't a site to market/promote products.
  • formulav8 - Thursday, June 26, 2008 - link

    They do recommend hardware for different price points and such. So they do market in a way. Have you seen anands picks links? That is promoting products and does it through his referral links as well to get paid to do so. :)

    Anyways, mentioning something as a better buy up to a certain price point would be helpful to someone who is not really in the know.



    Jason
  • shadowteam - Wednesday, June 25, 2008 - link

    You've got excellent written skills buddy, and I can't help thinking you're actually better at reviews than your m8 (no offence Anand), but what I truly meant from my post above is what you summed up rather well in your conclusive lines, quote: "You can either look at it as AMD giving you a bargain or NVIDIA charging too much, either way it's healthy competition in the graphics industry once again (after far too long of a hiatus)"

    Either way? Why should anyone look the other way? NV is clearly shitting all over the place, and you can tell that from the email they send you (or Anand) a couple days back. So they ripped us off for 6 months, and now suddenly decide the 9800GTX is worth $200?

    Healthy competition? Could you please elaborate on this further?
    $199 4850 vs $399 GTX260.... yup! that's healthy

    GTX+ vs 4850?
    Does that mean the GTX260 is now completely irrelevant? In fact, the 2xx series is utterly pointless no matter how you look at it.

    To bash on AMD, the 4870 is obviously priced high. For $100 extra, all you get is an OC'ed 4850 w/ DDR5 support. I don't think anyone here cares about DDR5, all that matters is performance, and the extra bucks plainly not worth it. From a consumers' perspective, the 4850 is the best buy, the 4870 isn't.
  • mlambert890 - Sunday, July 13, 2008 - link

    "200 series is utterly pointless"

    Yep... pointless unless you want the fastest card (280), then it has a point.

    Pointless to YOU possibly because you're focusing on perf per dollar. Good for you. Nice of you to presume to force that view on the world.

    Absolute performance? GTX 280 seems near the top of every benchmark there bud. Both in single card and in SLI where, last I checked, it gives up maybe TWO instances to the 4870CF - Bioshock and CoD and in both cases framerates are north of 100 at 2560. The 4870, on the other hand, falls WELL short of playable at that res in CF in most other benches.

    High res + high perf = 200 series. Sorry if thats offensive to the egos of those who cant afford the cards.

    Theres a lot in life we can and cant afford. Should have ZERO impact on ABSOLUTE PERFORMANCE discussions.
  • FITCamaro - Wednesday, June 25, 2008 - link

    AMD/ATI has to make some money somewhere. And regardless, at $300, the 4870 is a hell of a deal compared to the competition. Yes the 4850 is probably the best value. But the 4870 is still right behind it if you want a decent amount of extra performance at a great price.

    Nvidia may have the fastest thing out there. But only the richest, most brain dead idiots who have not a care in the world about how they spend their (or their parents) money will buy it with cards like the 4850 and 4870 available.

    And its pretty sad when your new $650 high end card is routinely beat by two of your last generation cards (8800GT) that you can get for $150 each or less. It wouldn't be as big a deal if the new card was $300-350 but at $650, it should be stomping on it.

    I think Nvidia is in for a reality check for what people want. If their new chips are only going to cater to the top 1% of the market, they're going to find themselves quickly in trouble. Especially with the all the issues their chipsets have for 6 months after release. And their shoddy drivers. I mean this past Friday I decided to try and set up some profiles so that when I started up Age of Conan, it would apply an overclock to my GPU and unapply it after I exited, it ended up locking up my PC continuously. I had to restore my OS from a backup disc because not even completely uninstalling and reinstalling my nvidia chipset and video drivers fixed it. And in my anger, I didn't back up my "My Documents" folder so I lost 5 years worth of stuff, largely pictures.
  • mlambert890 - Sunday, July 13, 2008 - link

    "Nvidia may have the fastest thing out there. But only the richest, most brain dead idiots who have not a care in the world about how they spend their (or their parents) money will buy it with cards like the 4850 and 4870 available."

    You just summed it up in that first sentence there bud. NVidia has the fastest thing out there. The rest is just opinion, bitterness and noise.

    I notice that the tone of the "enthusiast" community seems to be laser focused on cost now. This is like car discussions. People want to pretend to be "Xtreme" but what they really want to see is validation of whatever it is THEY can afford.

    Have fun with the 4870 by all means, its a great card. But the GTX280 IS faster. Did NVidia price it too high? Dont know and dont care.

    These are PERFORMANCE forums to all of the people that dont get that. Maybe even the editors need to be reminded.

    If I want to see an obsession with "bang for the buck" Ill go to Consumer Reports.

    I mean seriously. How much of a loser are you when you're taking a shot like "your PARENTS money"? LOL...

    Personally, I treat the PC hobby as an expensive distraction. Ive been a technology pro for 15 years now and this is my vice. As an adult earning my own money, I can decide how I spend it and the difference between $500 and a grand isnt a big deal.

    The rhetoric on forums is really funny. People throw the "kid/parents" insult around a lot, but I think the people who take prices beyond what they can afford as some kind of personal insult are more likely the kids here.
  • formulav8 - Thursday, June 26, 2008 - link

    "Nvidia may have the fastest thing out there. But only the richest, most brain dead idiots who have not a care in the world about how they spend their (or their parents) money will buy it with cards like the 4850 and 4870 available."


    Yuk Yuk Yuk :)



    Jason
  • drpepper128 - Wednesday, June 25, 2008 - link

    To be honest, while I was reading the article I felt as if the article seemed a little ATI biased, but I guess that goes to show you that two different people can get drastically different opinions from the same article.

    The real reason I’m posting this is I want to thank you guys for writing some of the best articles that Anandtech has ever written. I read every page and enjoyed the whole thing. Keep up the great work guys and I look forward to reading more (especially about Nehalem and anything relating to AMD’s future architecture).

    Also, is GDDR5 coming to the 4850 ever? If so, maybe it would be a drastically better buy.

    Thank you,
    drpepper128
  • Clauzii - Wednesday, June 25, 2008 - link

    Damn, You R pissed!! :O

    OK, get some sleep and wake up smiling tomorrow, knowing that It's ATI needing to raise prices - - - and go get that 4870 :))
  • Clauzii - Wednesday, June 25, 2008 - link

    OH, " ... that It's NOT ATI needing to ... "

    BTW: I actually read the review as pretty neutral, making a hint here and there that the further potential of the HD4870 is quite big :)
