Multi-Threading Prowess

The gains from 2-way SMT (Hyper-Threading) on Intel processors are still relatively small (10-20%) in many applications. The reason is that the two threads have to share most of the critical resources, such as the L1 cache, the instruction TLB, the µop cache, and the instruction queue. The fact that IBM uses 8-way SMT and still claims significant performance gains piqued our interest. Is this just benchmarketing, or did IBM actually find a way to make 8-way SMT work?
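
For readers who want to check how many hardware threads share a physical core on their own machine, the Linux sysfs topology files are a quick way to find out. Below is a minimal sketch, assuming the standard Linux sysfs layout (the path is Linux-specific, and newer kernels also expose the same data under other file names):

```c
/* Minimal sketch: print which hardware threads share core 0 by reading
 * Linux sysfs. On a POWER8 box in SMT-8 mode this lists eight sibling
 * IDs; on a 2-way Hyper-Threaded Xeon, two. */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/devices/system/cpu/cpu0/topology/thread_siblings_list";
    FILE *f = fopen(path, "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    char buf[256];
    if (fgets(buf, sizeof buf, f) != NULL)
        printf("hardware threads sharing core 0: %s", buf);
    fclose(f);
    return 0;
}
```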

It is interesting to note that with 2-way SMT, a single thread still runs at about 80% of the performance it would have without SMT. IBM claims no less than a 60% performance increase from 2-way SMT, far beyond anything Intel has ever claimed (30%). The arithmetic behind the claim is straightforward: two threads each running at roughly 80% of full speed deliver about 160% of single-threaded throughput, i.e. the claimed 60% gain. That gain cannot simply be explained by the greater number of issue slots or the wider decode capabilities.

The real reason is a series of trade-offs and extra resource investments that IBM made. For example, the fetch buffer holds 64 instructions in ST (single-threaded) mode, but twice as many entries are available in SMT-2 mode, ensuring each thread still has a 64-instruction buffer. In SMT-4 mode, the fetch buffer for each thread is halved (32 instructions), and only in SMT-8 mode do things get a bit cramped, as the buffer is divided by four (16 instructions per thread).
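
As a back-of-the-envelope illustration of that partitioning, the per-thread fetch buffer sizes can be expressed in a few lines of C. The 128-entry aggregate in SMT modes is inferred from the text above ("twice as many entries"), and the function name is ours, so treat this as an illustration rather than a hardware specification:

```c
#include <stdio.h>

/* Entries available to each thread in the instruction fetch buffer,
 * per the description above (illustrative model, not IBM's spec). */
static int fetch_entries_per_thread(int smt_threads)
{
    if (smt_threads <= 1)
        return 64;              /* ST mode: one thread owns 64 entries */
    return 128 / smt_threads;   /* SMT-2: 64, SMT-4: 32, SMT-8: 16     */
}

int main(void)
{
    for (int t = 1; t <= 8; t *= 2)
        printf("%d thread(s): %d fetch-buffer entries each\n",
               t, fetch_entries_per_thread(t));
    return 0;
}
```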

The design philosophy of making sure that two threads do not hinder each other can be found further down the pipeline. The Unified Issue Queue (UniQueue) consists of two symmetric halves (UQ0 and UQ1), each with 32 entries for instructions waiting to be issued.

Each of these UQs can issue instructions to its own reserved Load/Store, Integer (FX), Load, and Vector units. A single thread can use both queues, but this setup is less flexible (and thus lower performing) than a single unified issue queue. However, once you run two threads on a core (SMT-2), the back-end acts as if it consists of two full-blown 5-way superscalar cores, each with its own set of physical registers. As a result, one thread cannot strangle the other by monopolizing or blocking shared resources. That is why IBM can claim that two threads will perform so much better than one.
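
To make that steering idea concrete, here is a purely conceptual toy model of the split issue queue: two 32-entry halves, both usable by a single thread, one half per thread under SMT. The data structure and the pick_half() helper are our own illustration of the text above, not anything derived from IBM's design documents:

```c
#include <stdio.h>

#define UQ_HALVES   2
#define UQ_ENTRIES 32   /* entries per half (UQ0 and UQ1) */

struct issue_queue_half {
    int used;           /* occupied entries (toy bookkeeping only) */
};

struct unified_issue_queue {
    struct issue_queue_half half[UQ_HALVES];
};

/* In ST mode a single thread may use either half; with SMT each thread
 * is steered to its own half, so neither can starve the other. */
static int pick_half(const struct unified_issue_queue *uq,
                     int thread_id, int smt_threads)
{
    if (smt_threads == 1)   /* one thread: take the emptier half */
        return uq->half[0].used <= uq->half[1].used ? 0 : 1;
    return thread_id % UQ_HALVES;   /* SMT: fixed half per thread */
}

int main(void)
{
    struct unified_issue_queue uq = { { { 10 }, { 4 } } };
    printf("ST: thread 0 -> UQ%d\n", pick_half(&uq, 0, 1));
    printf("SMT-2: thread 0 -> UQ%d, thread 1 -> UQ%d\n",
           pick_half(&uq, 0, 2), pick_half(&uq, 1, 2));
    return 0;
}
```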

It is somewhat similar to the "shared front-end, dual-core back-end" approach we saw in AMD's Bulldozer, but executed with (much) more finesse. For example, the data cache is not divided: the large, fast 64 KB D-cache is available to all threads and has four read ports, so two threads can each perform two loads at the same time. Another difference is that a single thread is not limited to one half of the back-end, but can actually use both, something that was not possible with Bulldozer.

Dividing those ample resources in two once more (SMT-4) should not pose a problem. The resources are sized to run most server applications fast, and one of the two threads sharing each half will regularly pause when a cache miss or other stall occurs. SMT-8 mode, however, can be a step too far for some applications, as four threads now divide up the resources of each issue queue. There are more signs that SMT-8 is rather cramped: instruction prefetching is disabled in SMT-8 mode for bandwidth reasons. So we suspect that SMT-8 is only a good fit for very low-IPC, "throughput is everything" server applications. In most applications, SMT-8 might increase the latency of individual threads while offering only a small increase in total throughput. But the flexibility is enormous: the POWER8 can work with two heavy threads, but can also transform itself into a lightweight-thread machine gun.
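
To get a feel for this latency-versus-throughput trade-off on real hardware, one can pin a varying number of busy threads to the hardware threads of a single core and compare aggregate progress. Below is a rough sketch, assuming Linux and assuming logical CPUs 0-7 map to one POWER8 core (verify the mapping with lscpu or ppc64_cpu first); the busy-loop kernel, the iteration count, and the file name smt_bench.c are arbitrary choices of ours:

```c
/* Compile with: gcc -O2 -pthread smt_bench.c -o smt_bench */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

#define ITERS 200000000UL   /* arbitrary amount of busy work per thread */

static void *worker(void *arg)
{
    int cpu = (int)(long)arg;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);     /* pin this thread to one hardware thread */
    pthread_setaffinity_np(pthread_self(), sizeof set, &set);

    volatile unsigned long acc = 0;   /* volatile keeps the loop alive */
    for (unsigned long i = 0; i < ITERS; i++)
        acc += i ^ (acc >> 3);
    return NULL;
}

int main(int argc, char **argv)
{
    int n = (argc > 1) ? atoi(argv[1]) : 2;   /* 1, 2, 4, or 8 threads */
    if (n < 1 || n > 8)
        n = 2;
    pthread_t tid[8];
    for (int i = 0; i < n; i++)
        pthread_create(&tid[i], NULL, worker, (void *)(long)i);
    for (int i = 0; i < n; i++)
        pthread_join(tid[i], NULL);
    /* Time externally, e.g. `time ./smt_bench 8`, and compare the
     * wall-clock time per unit of total work across SMT levels. */
    return 0;
}
```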

Comments

  • Michael Bay - Sunday, July 24, 2016

    Hardware does not exist for its own sake; it exists to run software. AT is entirely correct in their methodology.
  • jospoortvliet - Tuesday, July 26, 2016

    I'd argue it is the other way around: GCC might leave 5-10% performance on the table in some niche cases, but it does just fine most of the time. There's a reason Intel and IBM contribute to GCC - to make sure it doesn't fall too far behind, as they know very well that most of their customers use these compilers and not the proprietary ones.

    Of course, for scientific computing and other niches it makes all the difference and one can argue these heavy systems ARE for niche markets but I still think it was a sane choice to go with GCC.
  • abufrejoval - Thursday, August 4, 2016

    Actually exercising 90% of all transistors on a CPU die these days is very hard to do (next to impossible), and would only slow the clock to avoid overstepping the TDP.

    And I seriously doubt that GCC would underuse a CPU to the point of reaching only 10% of its computational capacity.

    Actually, from what I saw, GCC itself (compiling) was best at exploiting the full 8-thread potential of the POWER8. And since GCC is compiled with itself, that speaks to the quality of the machine code it can produce, if the source allows it. And that speaks to the quality of the GCC source code; ergo, prove you can do better before you rant.
  • abufrejoval - Thursday, August 4, 2016

    Well, this is part 1 and describes one scenario. What you want is another scenario, and of course it's a valid, if very distinct, one.

    Actually, distinct is the word here: you'd be using a vendor's compiler if your main job is a distinct workload, because you'd want to squeeze every bit of performance out of it.

    The problem with that, of course, is that any distinct workload makes it rather boring for the general public, because they cannot translate the benchmark to their own environment.

    AT aims to satisfy the broadest meaningful audience, and Johan has done a great, great job at that.

    I'm sure he'll also write a part 4711 for you specifically, if you make it economically attractive.

    Hell, even I'd do that given the proper incentive!
  • Zan Lynx - Sunday, July 24, 2016

    Using GCC as the compiler is also why (in my opinion) the Intel chips aren't using their full TDP. Large areas of Intel chips are dedicated to vector operations in SSE and AVX. If you don't issue those instructions, then half the chip isn't even being used.

    Some gamers who love their overclocked Intel chips have actually complained to game engine developers who add AVX to their engines, because it ruins their overclock even if the game runs much faster. They're then forced to clock down from 4.5 GHz to 3.7 GHz in order to avoid lockups or thermal throttling.
  • Kevin G - Sunday, July 24, 2016

    The Xeon E5 v3s had different clock speeds for AVX code: it consumed too much power and got too hot under full load.

    This holds true on the E5 v4s, but the AVX clock penalty is applied on a core-by-core basis, not across the entire chip. The result is improved performance in mixed workloads. This is a good thing, as AVX hasn't broken out much beyond the HPC market.
  • talonted - Monday, July 25, 2016

    For those interested in getting a POWER8 workstation, check out Talos:

    https://www.raptorengineering.com/TALOS/prerelease...
  • 137ben - Monday, July 25, 2016

    I made an account to say that this article (along with the subsequent stock-cooler comparison article) is why I really love Anandtech. A lot of the code I run/write for my research is CPU-bottlenecked. Still, until the last year or so, I didn't know very much about hardware. Now, reading Anandtech, I have learned so much more about the hardware I depend on from this website than from any other website. Most just repeat announcements or run meaningless cursory synthetic benchmarks. The fact that Johan De Gelas has written such a deep dive into the inner workings of something as complex as a server CPU architecture, and done it in a way that I can understand, is remarkable. Great job Anandtech, keep it up and I'll always come back.
  • JohanAnandtech - Thursday, July 28, 2016

    You made me a happy man, I achieved my goal :-)
  • alpha754293 - Wednesday, July 27, 2016

    Excellent work and review as always, Johan. I would have been interested to see how the two processors perform in floating-point-intensive benchmarks, though...
