Derek Gets Technical Again: Of Warps, Wavefronts and SPMD

From our GT200 review, we learned a little about thread organization and scheduling on NVIDIA hardware. In speaking with AMD we discovered that sometimes it just makes sense to approach the solution to a problem in similar ways. Like NVIDIA, AMD schedules threads in groups (called wavefronts by AMD) that execute over 4 cycles. As RV770 has 16 5-wide SPs (each of which processes one "stream" or thread or whatever you want to call it) at a time (and because they said so), we can conclude that AMD organizes 64 threads into one wavefront, all of which must execute in parallel. After GT200, we learned that NVIDIA further groups warps into thread blocks, and we just learned that there are two more levels of organization in AMD hardware.
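The wavefront math falls straight out of those hardware numbers. A tiny sketch (the variable names are ours, not AMD's terminology):

```python
# Deriving the RV770 wavefront size from the figures above
# (our own arithmetic sketch; names are not AMD's).
SPS_PER_SIMD_CORE = 16   # 16 five-wide SPs per SIMD core
CYCLES_PER_ISSUE = 4     # each wavefront executes over 4 cycles

# 16 threads in flight per cycle, repeated over 4 cycles = one wavefront
wavefront_size = SPS_PER_SIMD_CORE * CYCLES_PER_ISSUE
print(wavefront_size)  # 64, matching what AMD confirmed
```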

Like NVIDIA, AMD maintains context per wavefront: register space, instruction stream, global constants, and local store space are shared between all threads running in a wavefront, and data sharing and synchronization can be done within a thread block. The larger grouping of thread blocks enables global data sharing using the global data store, but we didn't actually get a name or specification for it. On RV770, one VLIW instruction (up to 5 operations) is broadcast to each of the SPs, which runs on its own unique set of data and subset of the register file.
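One way to picture the VLIW broadcast is as a single bundle of operations issued to every SP, each of which applies it to its own private operands. This is purely our toy model, not AMD's actual ISA:

```python
# Toy model of VLIW broadcast on one RV770 SIMD core (our own sketch,
# not real hardware behavior): one bundle, sixteen SPs, per-SP data.
def issue_vliw(bundle, per_sp_data):
    """Apply the same bundle of up to 5 ops to each SP's private data."""
    assert len(bundle) <= 5, "an RV770 VLIW bundle holds at most 5 ops"
    results = []
    for data in per_sp_data:   # one entry per SP
        for op in bundle:      # every SP runs the same broadcast bundle...
            data = op(data)    # ...on its own unique data
        results.append(data)
    return results

# 16 SPs, each with its own operand; the same bundle goes to all of them
out = issue_vliw([lambda x: x + 1, lambda x: x * 2], list(range(16)))
print(out[:4])  # [2, 4, 6, 8]
```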

To put it side by side with NVIDIA's architecture, we've put together a table with what we know about resources per SM / SIMD array.

NVIDIA/AMD Feature                 | NVIDIA GT200      | AMD RV770
Registers per SM/SIMD Core         | 16K x 32-bit      | 16K x 128-bit
Registers on Chip                  | 491,520 (1.875MB) | 163,840 (2.5MB)
Local Store                        | 16KB              | 16KB
Global Store                       | None              | 16KB
Max Threads on Chip                | 30,720            | 16,384
Max Threads per SM/SIMD Core       | 1,024             | > 1,000
Max Threads per Warp/Wavefront     | 32                | 64
Max Warps/Wavefronts on Chip       | 960               | 256
Max Thread Blocks per SM/SIMD Core | 8                 | AMD Won't Tell Us
That's right, AMD has 2.5MB of register space
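The register-file totals in the table check out if you assume 30 SMs on GT200 and 10 SIMD cores on RV770 (our arithmetic, shown for the skeptical):

```python
# Verifying the "Registers on Chip" row (our own back-of-envelope math,
# assuming 30 SMs on GT200 and 10 SIMD cores on RV770).
GT200_SMS, RV770_SIMDS = 30, 10
regs_per_sm = 16 * 1024    # 16K x 32-bit registers per SM
regs_per_simd = 16 * 1024  # 16K x 128-bit registers per SIMD core

gt200_bytes = GT200_SMS * regs_per_sm * 4        # 32-bit = 4 bytes
rv770_bytes = RV770_SIMDS * regs_per_simd * 16   # 128-bit = 16 bytes

# 491,520 registers (1.875MB) vs. 163,840 registers (2.5MB)
print(gt200_bytes / 2**20, rv770_bytes / 2**20)  # 1.875 2.5
```

So despite holding a third as many registers, AMD's wider 128-bit entries give it the larger register file by capacity.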

We love that we have all this data, and both NVIDIA's CUDA programming guide and the documentation that comes with AMD's CAL SDK offer some great low level info. But the problem is that hard core tuners of code really need more information to properly tune their applications. To some extent, graphics takes care of itself, as there are a lot of different things that need to happen in different ways. It's the GPGPU crowd, the pioneers of GPU computing, that will need much more low level data on how resource allocation impacts thread issue rates and how to properly fetch and prefetch data to make the best use of external and internal memory bandwidth.

But for now, these details are the ones we have, and we hope that programmers used to programming massively data parallel code will be able to get under the hood and do something with these architectures even before we have an industry standard way to take advantage of heterogeneous computing on the desktop.

Which brings us to an interesting point.

NVIDIA wanted us to push some ridiculous acronym for their SM's architecture: SIMT (single instruction multiple thread). First off, this is a confusing descriptor based on the normal understanding of instructions and threads. But more to the point, there already exists a programming model that nicely fits what NVIDIA and AMD are both actually doing in hardware: SPMD, or single program multiple data. This description is most often attached to distributed memory systems and large scale clusters, but it really is actually what is going on here.

Modern graphics architectures process multiple data sets (such as a vertex or a pixel and its attributes) with single programs (a shader program in graphics or a kernel if we're talking GPU computing) that are run both independently on multiple "cores" and in groups within a "core". Functionally we maintain one instruction stream (program) per context and apply it to multiple data sets, layered with the fact that multiple contexts can be running the same program independently. As with distributed SPMD systems, not all copies of the program are running at the same time: multiple warps or wavefronts may be at different stages of execution within the same program, with barrier synchronization available to bring them back in step.
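The SPMD pattern is easy to sketch on a CPU: every thread runs the same kernel on its own slice of data, then meets at a barrier before one of them reduces the results. This is our illustration of the model, not GPU code:

```python
import threading

# Minimal SPMD sketch (our illustration of the programming model):
# one program, many data elements, a barrier for synchronization.
N = 8
data = list(range(N))
squared = [0] * N
total = [0]
barrier = threading.Barrier(N)

def kernel(tid):
    squared[tid] = data[tid] ** 2  # independent per-thread work
    barrier.wait()                 # all N threads sync here, like a
    if tid == 0:                   # barrier within a thread block
        total[0] = sum(squared)    # one thread performs the reduction

threads = [threading.Thread(target=kernel, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total[0])  # 140
```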

For more information on the SPMD programming model, Wikipedia has a good page on the subject even though it doesn't talk about how GPUs would fit into SPMD quite yet.

GPUs take advantage of a property of SPMD that distributed systems do not (explicitly anyway): fine grained resource sharing with SIMD processing where data comes from multiple threads. Threads running the same code can actually physically share the same instruction and data caches and can have high speed access to each others data through a local store. This is in contrast to larger systems where each system gets a copy of everything to handle in its own way with its own data at its own pace (and in which messaging and communication become more asynchronous, critical and complex).
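To make the local-store idea concrete, here's a toy model (entirely our own, no real GPU semantics) where each thread in a group deposits its result in a shared store and then reads a neighbor's slot, something a distributed node with a private copy of everything simply can't do at this granularity:

```python
# Toy model of local-store sharing within one thread group (our own
# sketch): threads write results to a shared store, then read each
# other's slots directly.
GROUP = 4
local_store = [None] * GROUP  # stand-in for the 16KB local store

def thread_body(tid, value):
    local_store[tid] = value * value  # write my result to my slot
    # (a real GPU would barrier here before any cross-thread reads)

def neighbor_read(tid):
    return local_store[(tid + 1) % GROUP]  # read the next thread's slot

for tid, v in enumerate([1, 2, 3, 4]):
    thread_body(tid, v)
print([neighbor_read(t) for t in range(GROUP)])  # [4, 9, 16, 1]
```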

AMD offers an advantage in the SPMD paradigm in that it maintains a global store (present since RV670) where all threads can share result data globally if they need to (this is something that NVIDIA does not support). This feature allows more flexibility in algorithm implementation and can offer performance benefits in some applications.

In short, the reality of GPGPU computing has been the implementation in hardware of the ideal machine to handle the SPMD programming model. Bits and pieces are borrowed from SIMD, SMT, TMT, and other micro-architectural features to build architectures that we submit should be classified as SPMD hardware in honor of the programming model they natively support. We've already got enough acronyms in the computing world, and it's high time we consolidate where it makes sense and stop making up new terms for the same things.


215 Comments


  • jALLAD - Wednesday, July 9, 2008 - link

    well I am looking forward to a single card setup. SLI or CF is beyond the reach of my pockets. :P

  • Grantman - Friday, July 4, 2008 - link

    Thank you very much for including the 8800gt sli figures in your benchmarks. I created an account especially so I could thank Anand Lal Shimpi & Derek Wilson as I have found no other review site including 8800gt sli info. It is very interesting to see the much cheaper 8800gt sli solution beating the gtx 280 on several occasions.
  • Grantman - Friday, July 4, 2008 - link

    When I mentioned "no other review site including 8800gt sli info" I naturally meant in comparison with the gtx280, gx2 4850 crossfire etc etc.

    Thanks again.
  • ohodownload - Wednesday, July 2, 2008 - link

    computer-hardware-zone.blogspot.com/2008/07/ati-radeon-hd4870-x2-specification.tml
  • DucBertus - Wednesday, July 2, 2008 - link

    Hi,

    Nice article. Could you please add the amount of graphics memory on the cards to the "The Test" page of the article. The amount of memory matters for the performance and (not unimportant) the price of the cards...

    Cheers, DucBertus.
  • hybrid2d4x4 - Sunday, June 29, 2008 - link

    Hello!
    Long-time reader here that finally decided to make an account. First off, thanks for the great review Anand and Derek, and hats off to you guys for following up to the comments on here.
    One thing that I was hoping to see mentioned in the power consumption section is if AMD has by any chance implemented their PowerXpress feature into this generation (where the discrete card can be turned off when not needed in favor of the more efficient on-board video- ie: HD3200)? I recall reading that the 780G was supposed to support this kind of functionality, but I guess it got overlooked. Have you guys heard if AMD intends to bring it back (maybe in their 780GX or other upcoming chipsets)? It'd be a shame if they didn't, seeing as how they were probably the first to bring it up and integrate it into their mobile solutions, and now even nVidia has their own version of it (Hybrid Power, as part of HybridSLI) on the desktop...
  • AcornArmy - Sunday, June 29, 2008 - link

    I honestly don't understand what Nvidia was thinking with the GTX 200 series, at least at their current prices. Several of Nvidia's own cards are better buys. Right now, you can find a 9800 GX2 at Pricewatch for almost $180 less than a GTX 280, and it'll perform as well as the 280 in almost all cases and occasionally beat the hell out of it. You can SLI two 8800 GTs for less than half the price and come close in performance.

    There really doesn't seem to be any point in even shipping the 280 or 260 at their current prices. The only people who'll buy them are those who don't do any research before they buy a video card, and if someone's that foolish they deserve to get screwed.
  • CJBTech - Sunday, June 29, 2008 - link

    Hey iamap, with the current release of HD 4870 cards, all of the manufacturers are using the reference ATI design, so they should all be pretty much identical. It boils down to individual manufacturer's warranty and support. Sapphire, VisionTek, and Powercolor have all been great for me over the years; VisionTek is offering a lifetime warranty on these cards. I've had poor experiences with HIS and Diamond, but probably wouldn't hesitate to get one of these from either of those manufacturers on this particular card (or the HD 4850) because they are the same card, ATI reference.
  • Paladin1211 - Saturday, June 28, 2008 - link

    Now that the large monolithic, underperforming chip is out, leaving AMD free to grab market share, I'm so excited to see what happens next. As nVidia's strategy goes, they're now scaling down the chip. But pardon me, cut the GTX 280 in half and then price it at $324.99? That sounds so crazy!

    Anyone remembers the shock treatment of AMD with codename "Thunder"? DAAMIT has just opened "a can of whoop ass" on nVidia!
  • helldrell666 - Friday, June 27, 2008 - link

    Anandtech, why didn't you use an AMD 790FX board to bench the Radeon cards instead of using an nvidia board for both nvidia and ATI cards? It would be more accurate to bench those cards on compatible boards.
    I think those cards would have worked better on an amd board based on the radeon express 790fx chipset.
