Derek Gets Technical Again: Of Warps, Wavefronts and SPMD

From our GT200 review, we learned a little about thread organization and scheduling on NVIDIA hardware. In speaking with AMD, we discovered that sometimes it just makes sense to approach a problem in similar ways. Like NVIDIA, AMD schedules threads in groups (called wavefronts by AMD) that execute over 4 cycles. Since each RV770 SIMD core has 16 5-wide SPs (each of which processes one "stream" or thread or whatever you want to call it at a time), and because AMD told us so, we can conclude that AMD organizes 64 threads into one wavefront, all of which must execute in parallel. After GT200, we learned that NVIDIA further groups warps into thread blocks, and we have just learned that there are two more levels of organization in AMD hardware as well.
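The arithmetic works out neatly; here's a quick sketch, using the RV770 figures above and GT200's 8 SPs per SM (the helper name is ours):

```python
# Sketch: deriving warp/wavefront size from SIMD width times issue cycles.
# RV770 figures are from the article; GT200 has 8 SPs per SM.

def group_size(simd_lanes, cycles):
    """Threads batched into one scheduling group."""
    return simd_lanes * cycles

wavefront = group_size(16, 4)  # RV770: 16 SPs per SIMD core over 4 cycles -> 64
warp = group_size(8, 4)        # GT200: 8 SPs per SM over 4 cycles -> 32
print(wavefront, warp)         # 64 32
```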

Like NVIDIA, AMD maintains context per wavefront: register space, instruction stream, global constants, and local store space are shared between all threads running in a wavefront, and data sharing and synchronization can be done within a thread block. The larger grouping of thread blocks enables global data sharing using the global data store, but we didn't actually get a name or specification for this level of organization. On RV770, one VLIW instruction (up to 5 operations) is broadcast to each of the SPs, which runs it on its own unique set of data and its own subset of the register file.
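As a toy illustration of that broadcast model, here is a sketch in which one made-up VLIW bundle of five scalar ops is issued once and applied by each of 16 "SPs" to its own operands. The op mix and data layout are invented for the example; real VLIW packing is far more constrained than this.

```python
# Toy illustration of VLIW broadcast: one instruction bundle (up to 5 ops)
# is issued once, and every SP applies it to its own data and register slice.
# The bundle contents and register layout here are invented for the sketch.
import operator

SIMD_WIDTH = 16   # RV770: 16 five-wide SPs per SIMD core

# One VLIW "instruction": up to five independent scalar ops
bundle = [operator.add, operator.mul, operator.sub, operator.add, operator.mul]

def run_sp(ops, a, b):
    # Each SP executes the same bundle on its own 5 lanes of operands
    return [op(x, y) for op, x, y in zip(ops, a, b)]

# Broadcast the same bundle to all 16 SPs, each with unique data
regs = [([i] * 5, [2] * 5) for i in range(SIMD_WIDTH)]
results = [run_sp(bundle, a, b) for a, b in regs]
print(results[3])  # SP 3: [3+2, 3*2, 3-2, 3+2, 3*2] -> [5, 6, 1, 5, 6]
```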

To put it side by side with NVIDIA's architecture, we've put together a table with what we know about resources per SM / SIMD array.

NVIDIA/AMD Feature                    | NVIDIA GT200      | AMD RV770
Registers per SM/SIMD Core            | 16K x 32-bit      | 16K x 128-bit
Registers on Chip                     | 491,520 (1.875MB) | 163,840 (2.5MB)
Local Store                           | 16KB              | 16KB
Global Store                          | None              | 16KB
Max Threads on Chip                   | 30,720            | 16,384
Max Threads per SM/SIMD Core          | 1,024             | > 1,000
Max Threads per Warp/Wavefront        | 32                | 64
Max Warps/Wavefronts on Chip          | 960               | 256 (with 64 reserved)
Max Warps/Wavefronts per SM/SIMD Core | 32                | We Have No Idea
Max Thread Blocks per SM/SIMD Core    | 8                 | AMD Won't Tell Us

That's right, AMD has 2.5MB of register space.
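The register-capacity rows can be sanity-checked with a little arithmetic. A quick sketch, bringing in the core counts the table doesn't list (30 SMs on GT200, 10 SIMD cores on RV770):

```python
# Sketch: reproducing the register numbers quoted above. Core counts
# (30 SMs on GT200, 10 SIMD cores on RV770) come from the GPUs themselves,
# not from the table; the helper function is ours.

def regfile(regs_per_core, reg_bytes, cores):
    total_regs = regs_per_core * cores
    return total_regs, total_regs * reg_bytes

gt200_regs, gt200_bytes = regfile(16 * 1024, 4, 30)    # 16K x 32-bit x 30 SMs
rv770_regs, rv770_bytes = regfile(16 * 1024, 16, 10)   # 16K x 128-bit x 10 cores

print(gt200_regs, gt200_bytes / 2**20)  # 491520 regs, 1.875 MB
print(rv770_regs, rv770_bytes / 2**20)  # 163840 regs, 2.5 MB
```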

We love that we have all this data, and both NVIDIA's CUDA programming guide and the documentation that comes with AMD's CAL SDK offer some great low-level info. The problem is that hardcore code tuners really need more information to properly tune their applications. To some extent, graphics takes care of itself, as there are a lot of different things that need to happen in different ways. It's the GPGPU crowd, the pioneers of GPU computing, that will need much more low-level data on how resource allocation impacts thread issue rates and how to properly fetch and prefetch data to make the best use of external and internal memory bandwidth.

But for now, these details are the ones we have, and we hope that programmers used to programming massively data parallel code will be able to get under the hood and do something with these architectures even before we have an industry standard way to take advantage of heterogeneous computing on the desktop.

Which brings us to an interesting point.

NVIDIA wanted us to push some ridiculous acronym for their SM architecture: SIMT (single instruction multiple thread). First off, this is a confusing descriptor given the normal understanding of instructions and threads. But more to the point, there already exists a programming model that nicely fits what NVIDIA and AMD are both actually doing in hardware: SPMD, or single program multiple data. This description is most often attached to distributed memory systems and large-scale clusters, but it really is what is going on here.

Modern graphics architectures process multiple data sets (such as a vertex or a pixel and its attributes) with single programs (a shader program in graphics, or a kernel if we're talking GPU computing) that are run both independently on multiple "cores" and in groups within a "core". Functionally, we maintain one instruction stream (program) per context and apply it to multiple data sets, layered with the fact that multiple contexts can be running the same program independently. As with distributed SPMD systems, not all copies of the program need to be at the same point at the same time: multiple warps or wavefronts may be at different stages of execution within the same program, with barrier synchronization available to bring them back in step.
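The run-independently-then-synchronize behavior can be sketched with ordinary CPU threads. This is a toy model of the pattern, not a description of how a GPU scheduler actually issues warps or wavefronts:

```python
# Toy sketch of the SPMD pattern: every "thread" runs the same program on
# its own data element, threads drift apart, and a barrier brings them back
# together before they read each other's results. Purely illustrative.
import threading

N = 8                      # one tiny "wavefront" of 8 threads
data = list(range(N))      # each thread's private input element
out = [0] * N
barrier = threading.Barrier(N)

def kernel(tid):
    # Phase 1: same program, unique data per thread
    data[tid] *= 2
    barrier.wait()         # no thread proceeds until all finish phase 1
    # Phase 2: now safe to read a neighbour's phase-1 result
    out[tid] = data[tid] + data[(tid + 1) % N]

threads = [threading.Thread(target=kernel, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(out)
```

Without the barrier, a fast thread could read its neighbour's slot before the neighbour had written it, which is exactly the hazard barrier synchronization exists to prevent.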

For more information on the SPMD programming model, Wikipedia has a good page on the subject, even though it doesn't talk about how GPUs fit into SPMD quite yet.

GPUs take advantage of a property of SPMD that distributed systems do not (explicitly anyway): fine-grained resource sharing with SIMD processing where data comes from multiple threads. Threads running the same code can actually physically share the same instruction and data caches and can have high-speed access to each other's data through a local store. This is in contrast to larger systems, where each system gets a copy of everything to handle in its own way, with its own data, at its own pace (and in which messaging and communication become more asynchronous, critical and complex).

AMD offers an advantage in the SPMD paradigm in that it maintains a global store (present since RV670) where all threads can share result data globally if they need to (this is something that NVIDIA does not support). This feature allows more flexibility in algorithm implementation and can offer performance benefits in some applications.

In short, the reality of GPGPU computing has been the implementation in hardware of the ideal machine to handle the SPMD programming model. Bits and pieces are borrowed from SIMD, SMT, TMT, and other micro-architectural features to build architectures that we submit should be classified as SPMD hardware in honor of the programming model they natively support. We've already got enough acronyms in the computing world, and it's high time we consolidate where it makes sense and stop making up new terms for the same things.

215 Comments

  • Amiga500 - Wednesday, June 25, 2008 - link

    Apple has passed over control of OpenCL to the Khronos Group, which manages open standards.

    To all intents and purposes, it is open source. :-)
  • emergancyexit - Wednesday, June 25, 2008 - link

    i hope you do 3x crossfire if the cards can do it. maybe a 4x 4850 vs 3x GTX 260 just to satisfy us readers for the moment would be lovely!
  • DerekWilson - Wednesday, June 25, 2008 - link

    i'm not sure if this is supported out of the box ... ill have to check it out ...
  • emergancyexit - Wednesday, June 25, 2008 - link

    i would really like to know what type of performance these cards could get in an MMO (and hopefully compare them to some cheaper cards). Games im interested in are some of the newer titles like Age of Conan (i hear its graphics are great and it's a workout for even an 8800 Ultra) and Eve-Online (their new graphics engine works cards pretty hard too).

    MMO graphics usually get pretty intensive, with some odd 200+ characters flying around shooting fireballs everywhere, missiles sailing through the air, and hundreds of monsters as far as the eye can see. it can get pretty demanding on a gaming computer, just as much (if not more) as a hit new title.

    for example, on my current rig i can get around 50FPS steady at 1440x900, but on Eve-Online i get 35 at the most at peaceful times and 20 or even 15 in a large fight with FEW graphics options selected.
  • MIP - Wednesday, June 25, 2008 - link

    Great review, the 4870 looks to be fantastic value. However, we're missing the 'heat and noise' part.
  • skiboysteve - Wednesday, June 25, 2008 - link

    Not only do these cards rock, but I wouldn't be surprised if AMD has an ace up its sleeve with the 4870x2... with that crossfire interconnect directly connected to the data hub that you showed on the chart. That and the fact that they have been looking forward to this crossfire strategy of attacking the high end for quite some time so they might have some tricky driver stuff coming with it.

    I have been disappointed with the heat and power consumption of these cards. But:
    1) Someone said PowerPlay is getting a driver tweak, and I can always clock them lower in 2D than 500/1000 (which is insane for 2D)
    2) That hardware site someone linked earlier showed a more than 50% reduction in temperatures with an aftermarket cooler! That's insane!!

    And finally, if I can get #1 and #2 fixed... I want to know how well these babies overclock. If I can get a 4850 running like a 4870 or better... yum. And in that case, how high will a 4870 OC? And I want to know this with a non-stock cooler, because apparently the stock ones suck. With a non-stock cooler, if the 4850 clocks up to 4870 level, but the 4870 clocks way up too... i'm gonna have to grab a 4870.

    So yeah, fix #1 and #2 and find me non-stock cooler OC #s and I'll go buy one (maybe two?) when nehalem comes out
  • Powered by AMD - Wednesday, June 25, 2008 - link

    Impressive review, Thanks :)
    A few glitches:
    It says "Power Consumption, Heat and Noise", but the graphs only show Power Consumption.
    In Page 17 (The Witcher), in second paragraph, it says 390X2 instead of 3870.

    Thanks again.
    Cheers from Argentina.
  • Conscript - Wednesday, June 25, 2008 - link

    at least that was the title of the second to last page... but I only see two power consumption graphs?
  • Proteusza - Wednesday, June 25, 2008 - link

    I quote one Kristopher Kubricki regarding whether the RV770 is inferior to the GT200:

    "It is. Even AMD isn't going to tell you otherwise. You can debate this all you want, but it's still a $200 video card."

    So, please tell me now why I should pay $650 for a GTX280. I'm struggling to see the logic here.

    Source: http://www.dailytech.com/Update+AMD+Preps+Radeon+4...">http://www.dailytech.com/Update+AMD+Pre...50+Launc...
    (near the bottom)
  • AbRASiON - Wednesday, June 25, 2008 - link

    I can live with a greedier card than my 8800GT but I refuse to put up with a noisy machine.

    Any comments on the heat and noise please? would be nice!
