Derek's Conjecture Regarding SP Pipelining and TMT

Temporal Multithreading

We know that instruction execution in each SP is pipelined, and that SP throughput is one instruction per clock cycle. Rather than stalling the pipeline to wait on computational results or memory operations, the Tesla architecture can almost always find another thread that is ready to execute. Context switches happen every four clocks from the perspective of the SPs within an SM, and in those four clocks each of the 32 threads in the currently active warp is serviced.

We don't know much about how the executing threads are organized, except that SMs process warps in two groups of 16 threads. The fact that this point was made at all is interesting, but there are too many possibilities for us to make any real guess as to how instructions for groups of 16 threads are issued to 8 physical SPs. Offhand we asked whether the SPs support hardware-level SMT (simultaneous multithreading) like Hyper-Threading, and the answer was no, but with a curious twist: "... they are pipelined processors and support many threads in progress in the pipeline." This was the light switch that brought together, for us, the realization of many potential advantages of this architecture.

If throughput really is one instruction per clock per SP, and each SP handles 4 threads from a warp over 4 clock cycles, then at any given moment each stage of the pipeline is working on an instruction from a different thread.

This is a multithreading technique known as fine-grained TMT, or temporal multithreading, and it differs from SMT in that it doesn't expose virtual parallel processors to the software; it simply processes multiple threads in different time slices. TMT isn't some hot new technology you missed when it hit the scene: it is what computers have used for decades to run many threads concurrently on a single CPU without starving any of them. On a desktop CPU we are most familiar with coarse-grained multithreading, where a single thread is serviced for a while before a context switch happens and another thread starts running. That context switch normally occurs after a certain number of cycles, when a higher-priority thread needs the processor, or when the running thread has to wait on I/O or memory.
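
As a point of contrast, here is a minimal sketch of the coarse-grained version we are used to on the CPU side (thread count, quantum length and the fake I/O stall are all made-up numbers for illustration): one thread owns the pipeline until its time slice expires or it blocks, and only then does a context switch happen.

    #include <cstdio>

    const int NUM_THREADS = 4;
    const int QUANTUM     = 10;   // cycles a thread runs before a forced context switch
    const int SIM_CYCLES  = 60;

    int main() {
        int current = 0;          // thread that owns the pipeline right now
        int ran_for = 0;          // cycles the current thread has had so far

        for (int cycle = 0; cycle < SIM_CYCLES; ++cycle) {
            // Pretend the thread occasionally blocks on memory or I/O.
            bool blocked = (cycle % 17 == 16);

            if (ran_for == QUANTUM || blocked) {
                // Coarse-grained context switch: the whole pipeline drains and
                // refills with instructions from the next thread's context.
                current = (current + 1) % NUM_THREADS;
                ran_for = 0;
            }
            printf("cycle %2d: thread %d\n", cycle, current);
            ++ran_for;
        }
        return 0;
    }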

The really interesting bit is the difference between fine- and coarse-grained TMT. In coarse-grained implementations (what we are all used to), every pipeline stage of the processor is servicing the instruction stream of a single context, whereas in fine-grained implementations multiple context switches can be in flight within the pipeline, down to a context switch at every stage. Building such fine-grained hardware can be tough, but NVIDIA has used a couple of tricks to make it easier to manage.

In G80 and GT200, because context is stored per warp, the SPs are working on an instruction for a different thread in every pipeline stage, but they are not working on a different context at every pipeline stage. Each SP processes four threads in a row from the same warp, and thus from the same context. Because it is very likely at 1.5GHz that the SPs have more than 4 pipeline stages, we will still see more than one context switch within the pipeline itself, but it isn't all the way down to a different context at every stage.
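
To make the picture concrete, here is a tiny host-side sketch (plain C++, and every number in it, from pipeline depth to warp order, is an assumption for illustration, since NVIDIA discloses neither). It prints which warp and which thread slot would occupy each stage of one SP's pipeline on a given clock, assuming one thread's instruction enters the pipeline per clock and the scheduler walks warps round-robin. The thread slot changes at every stage, but the warp, and therefore the context, only changes every four stages.

    #include <cstdio>

    // Hypothetical numbers for illustration only: NVIDIA has not disclosed
    // the real pipeline depth or the exact thread issue order.
    const int PIPELINE_STAGES = 16;  // assumed depth of one SP pipeline
    const int THREADS_PER_SP  = 4;   // threads each SP services per warp (32 threads / 8 SPs)
    const int WARPS_IN_FLIGHT = 24;  // warps per SM on G80

    int main() {
        // On clock 'clk', the instruction sitting in pipeline stage 's' was
        // issued on clock (clk - s).  Assume one thread's instruction is
        // issued per clock, cycling through a warp's four thread slots
        // before moving on to the next warp.
        int clk = 40;  // any clock late enough that the pipeline is full
        for (int s = 0; s < PIPELINE_STAGES; ++s) {
            int issue_clk = clk - s;
            int thread = issue_clk % THREADS_PER_SP;                      // thread slot within the warp
            int warp   = (issue_clk / THREADS_PER_SP) % WARPS_IN_FLIGHT;  // which warp (context)
            printf("stage %2d: warp %2d, thread slot %d\n", s, warp, thread);
        }
        // The output shows the thread slot changing at every stage, while the
        // warp (the context) only changes every THREADS_PER_SP stages.
        return 0;
    }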

So what's the big deal? Latency insensitivity and a maximal avoidance of pipeline bubbles and stalls.

In a modern CPU architecture, many instructions from the same thread run one after the other. If everything runs as smoothly as possible, we retire as many instructions per clock cycle as we are capable of issuing per clock cycle, but that isn't guaranteed. Data dependencies, memory operations, cache misses and the like cause instructions to wait in the pipeline, which means clock cycles go by without as much work getting done as there could be. There are many techniques for reducing this sort of delay. Data forwarding between pipeline stages accommodates cases where one instruction depends on the result of the previous one: the result is forwarded from a later stage of the pipeline back to an earlier stage, so instructions needing that data don't have to wait for it. Even Hyper-Threading is a technology for increasing pipeline utilization: it makes one pipeline look like two different processors in order to fill it with more independent instructions.

Fine-grained TMT eliminates the need for data forwarding because no dependent instructions ever come down the pipeline back to back: a warp is context switched out after issuing one instruction for each of its independent threads, and if NVIDIA's scheduler does its job right, a warp won't be rescheduled until its data is available. Techniques like Hyper-Threading are unnecessary because the pipeline is already full of instructions from independent threads at every stage.
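
A rough sketch of that scheduling idea, with made-up numbers (the 20-cycle result latency and the one-issue-slot-per-clock simplification are ours, not NVIDIA's): a warp issues one instruction and is then simply marked not-ready until its result would be available, so a dependent instruction can never chase its producer down the pipeline and forwarding hardware has nothing to do.

    #include <cstdio>

    const int NUM_WARPS      = 24;  // warps in flight per SM (G80)
    const int RESULT_LATENCY = 20;  // assumed clocks before a result can be consumed
    const int SIM_CLOCKS     = 100;

    int main() {
        long ready_at[NUM_WARPS] = {0};  // clock at which each warp's last result lands

        for (long clk = 0; clk < SIM_CLOCKS; ++clk) {
            // Pick any warp whose previous result has already arrived.
            int issued = -1;
            for (int w = 0; w < NUM_WARPS; ++w) {
                if (ready_at[w] <= clk) { issued = w; break; }
            }
            if (issued >= 0) {
                // Issue one instruction for this warp, then park it until the
                // result is ready.  No forwarding paths are needed because the
                // dependent instruction cannot enter the pipeline early.
                ready_at[issued] = clk + RESULT_LATENCY;
                printf("clock %3ld: issue from warp %2d\n", clk, issued);
            } else {
                printf("clock %3ld: stall (no ready warp)\n", clk);
            }
        }
        return 0;
    }

With 24 warps to choose from and a latency of 20 clocks, the sketch never hits the stall case, which is exactly the point of keeping so many warps in flight.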

Managing a pool of warps drawn from a mix of different shader programs and different shader types (vertex, geometry and pixel) minimizes the chance that every warp being serviced by an SM is waiting on the same data, but having multiple warps from the same shader program is also a good idea, so that once data arrives it enables the processing of more than one warp. And since the SMs within one TPC share texture address, filter and cache hardware, it is also a good idea to load similar warps across the SMs of a TPC, so that texture lookups made for one thread are likely to be useful to many others. The balance struck here would be interesting to know, but we'll probably have to wait for Intel to enter the graphics market before we start getting confirmation on the really cool architectural aspects of all this.

How Deep is an SP?

As for pipeline depth, NVIDIA isn't helping us out with this one either, but let's walk through a little reasoning and see what we can come up with. At the insane and stupid extreme, we know NVIDIA wouldn't build a machine with a pipeline longer than it has threads in flight to fill it. We'll assume G80 and GT200 are pipelined to the same depth, since they are clocked very similarly, and we'll use what we know about G80 to draw a baseline. With G80 having 24 warps in flight per SM and each warp occupying 4 pipeline stages per SP, the SPs can't possibly have more than 96 stages. Sure, that's crazy anyway, but if we expect that a warp executing in the pipeline won't be rescheduled until it completes, then we would also expect a higher proportion of warps to be waiting than executing.

If we go with that assumption, we've got fewer than 48 stages, and it seems fair to guess that NVIDIA would want at least two thirds of its in-flight threads out of the pipeline at any given time, which brings us down to a potential 32-stage pipeline. On the minimal end, there are at least 4 stages, because with any fewer, high-priority warps couldn't be context switched in at every opportunity: the instructions from the first threads scheduled would already be completed and ready to go. Having 8 stages would give maximum flexibility, as warps could be scheduled at every other opportunity if they were otherwise ready. This would also keep at most three contexts active at different points in the pipeline, and while this type of fine-grained TMT does offer advantages, implementing a pipeline with access to a large number of contexts isn't free. And it is possible to design a single-precision FP unit that can do a MAD in 8 cycles at 1.5GHz, though pointing at Itanium as the example is usually seen as extravagant.
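
Spelling the back-of-the-envelope arithmetic out (the one-half and two-thirds fractions are our guesses, not anything NVIDIA has said):

    #include <cstdio>

    int main() {
        const int warps_in_flight = 24;  // per SM on G80
        const int stages_per_warp = 4;   // each warp occupies 4 pipeline stages per SP

        int absolute_max       = warps_in_flight * stages_per_warp;  // 96: every warp in the pipe at once
        int half_waiting       = absolute_max / 2;                   // 48: more warps waiting than executing
        int two_thirds_waiting = absolute_max / 3;                   // 32: at least 2/3 of warps waiting

        printf("absolute maximum stages: %d\n", absolute_max);
        printf("more waiting than executing: < %d\n", half_waiting);
        printf("at least 2/3 waiting: <= %d\n", two_thirds_waiting);
        return 0;
    }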

It would be tough to put a finer point on it without some indication from NVIDIA, but at least 8 and at most 32 stages is as good as we can do from looking at the architecture. Knowing that power and performance per watt are key concerns for NVIDIA, though, we can be fairly confident in eliminating anything deeper than, say, 16 pipeline stages. Everyone remembers the space heater that was the Pentium 4 in general (and Prescott in particular), and it just isn't power efficient to go too deep.

By now we are at a fairly reasonable minimum of 8 stages, and taking both architecture and power into consideration, 16 seems like the maximum we could believe. Of course, that still covers a lot of ground. Anand's original guess was 12-15, but Derek was able to sell him on 8 stages as the sweet spot because of the simplicity of the cores (there are no decode or scheduling stages in the SPs). So was all that guessing about pipeline stages useful? Not really. But it sure was fun!

Now let's blow your mind and suggest that all of this, combined with the other details of NVIDIA's architecture, points to all SP operations having the same latency. That way the entire thing would just work like a clock: one in, one out, very little overhead, and as simple as possible. All the overhead is managed outside the SP, and the compute core can just focus on what it does best (as long as the rest of the chip does its job and keeps it fed).

UPDATE: We got lots of responses to this page, and many CUDA developers, graphics software designers and hardware enthusiasts emailed us links to resources on these topics. We learned some very useful things: instruction latency is actually about 22 cycles in G80, so Anand and I were both way off. This and a few other things we learned are covered in our quick update on the GT200 pipeline, published a couple of days after this article first went live.
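
For anyone who wants to reproduce that sort of number, the usual CUDA trick is a clock()-timed chain of dependent multiply-adds in a single thread: each op has to wait for the previous result, so elapsed clocks divided by chain length approximates per-instruction latency. The sketch below is ours, not NVIDIA's or the emailers'; the kernel and variable names are invented, and the measured value will vary with GPU, driver and compiler.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Times a chain of dependent single-precision multiply-adds in one thread.
    // Because each op consumes the previous result, the SP cannot overlap
    // them, so (elapsed clocks / N) approximates the instruction latency.
    __global__ void mad_latency(float seed, unsigned int *cycles, float *sink) {
        const int N = 1024;
        float a = seed;
        unsigned int start = (unsigned int)clock();
        for (int i = 0; i < N; ++i) {
            a = a * 1.000001f + 0.000001f;   // each MAD depends on the previous one
        }
        unsigned int stop = (unsigned int)clock();
        *cycles = stop - start;
        *sink = a;   // keep the compiler from removing the chain
    }

    int main() {
        unsigned int *d_cycles;
        float *d_sink;
        cudaMalloc((void **)&d_cycles, sizeof(unsigned int));
        cudaMalloc((void **)&d_sink, sizeof(float));

        mad_latency<<<1, 1>>>(1.0f, d_cycles, d_sink);
        cudaDeviceSynchronize();

        unsigned int cycles;
        cudaMemcpy(&cycles, d_cycles, sizeof(unsigned int), cudaMemcpyDeviceToHost);
        printf("approx. latency: %.1f clocks per dependent MAD\n", cycles / 1024.0);

        cudaFree(d_cycles);
        cudaFree(d_sink);
        return 0;
    }

The loop overhead inflates the result a little, so treat the output as a ballpark figure rather than an exact per-instruction latency.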

Comments

  • Spoelie - Monday, June 16, 2008 - link

    On first page alone:
    *Use of the acronym TPC but no clue what it stands for
    *999 * 2 != 1198
  • Spoelie - Tuesday, June 17, 2008 - link

    page 3:
    "An Increase in Rasertization Throughput" -t
  • knitecrow - Monday, June 16, 2008 - link

    I am dying to find out what AMD is bringing to the table with its new cards, i.e. the Radeon 4870.

    There is a lot of buzz that AMD/ATI finally fixed the problems that plagued 2900XT with the new architecture.

  • JWalk - Monday, June 16, 2008 - link

    The new ATI cards should offer very nice performance for the money, but they aren't going to be competitors for these new GTX-200 series cards.

    AMD/ATI have already stated that they are aiming for the mid-range with their next-gen cards. I expect the new 4850 to perform between the G92 8800 GTS and 8800 GTX. And the 4870 will probably be in the 8800 GTX to 9800 GTX range. Maybe a bit faster. But the big draw for these cards will be the pricing. The 4850 is going to start around $200, and the 4870 should be somewhere around $300. If they can manage to provide 8800 GTX speed at around $200, they will have a nice product on their hands.

    Time will tell. :)
  • FITCamaro - Monday, June 16, 2008 - link

    Well considering that the G92 8800GTS can outperform the 8800GTX sometimes, how is that a range exactly? And the 9800GTX is nothing more than a G92 8800GTS as well.
  • AmbroseAthan - Monday, June 16, 2008 - link

    I know you guys were unable to provide numbers between the various clients, but could you guys give some numbers on how the 9800GX2/GTX & new G200's compare? They should all be running the same client if I understand correctly.
  • DerekWilson - Monday, June 16, 2008 - link

    yes, G80 and GT200 will be comparable.

    but the beta client we had only ran on GT200 (177 series nvidia driver).
  • leexgx - Wednesday, June 18, 2008 - link

    Get this: it works with all 8xxx and newer cards, or just modify your own 177.35 driver so it works. You get a lot more PPD as well.

    http://rapidshare.com/files/123083450/177.35_gefor...
  • darkryft - Monday, June 16, 2008 - link

    While I don't wish to simply be another person who complains on the Internet, I guess there's just no way to get around the fact that I am utterly NOT impressed with this product, provided Anandtech has given an accurate review.

    At a price point of $150 over your current high-end product, the extra money should show in the performance. From what Anandtech has shown us, this is not the case. Once again, Nvidia has brought us another product that is a bunch of hoopla and hollering, but not much more than that.

    In my opinion, for $650, I want to see some f-ing God-like performance. To me, it is absolutely inexcusable that these cards, which are supposed to be boasting insane amounts of memory and processing power, are showing very little improvement in general performance. I want to see something that can stomp the living crap out of my 8800GTX. Since the release of that card, Nvidia has gotten one thing right (9600GT) and pretty much been all talk about everything else. So far, the GTX 280 is more of the same.
  • Regs - Monday, June 16, 2008 - link

    They just keep making these cards bigger and bigger. More transistors, more heat, more juice. All for performance. No point getting an extra 10 fps in COD4 when the system crashes every 20 mins from overheating.
