Multi-Threading Prowess

The gains from 2-way SMT (Hyper-Threading) on Intel processors are still relatively small (10-20%) in many applications. The reason is that the two threads have to share most of the critical resources, such as the L1 cache, the instruction TLB, the µop cache, and the instruction queue. The fact that IBM uses 8-way SMT and still claims significant performance gains piqued our interest. Is this benchmarketing at best, or did IBM actually find a way to make 8-way SMT work?

It is interesting to note that with 2-way SMT, a single thread still runs at about 80% of the performance it achieves without SMT. IBM claims no less than a 60% performance increase from 2-way SMT, far beyond anything Intel has ever claimed (30%). The two figures are consistent: two threads each running at roughly 80% of full speed add up to about 1.6 times the single-threaded throughput. Such a gain cannot be explained simply by the larger number of issue slots or the wider decode.

The real reason is a series of trade-offs and extra resource investments that IBM made. For example, the fetch buffer holds 64 instructions in ST mode, but twice as many entries become available in 2-way SMT mode, so each thread still has a 64-instruction buffer. In SMT4 mode, the fetch buffer per thread is halved (32 instructions), and only in SMT8 mode do things get a bit cramped, as the buffer is divided by four (16 instructions per thread).
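To make that arithmetic concrete, here is a quick sketch; the 128-entry total once SMT is active follows from the description above and is not taken from IBM's documentation:

```python
# Per-thread fetch buffer entries on POWER8, as described above:
# 64 entries in single-thread mode, twice that once SMT is active.
def fetch_buffer_per_thread(threads: int) -> int:
    total = 64 if threads == 1 else 128   # ST keeps 64, SMT modes share 128
    return total // threads

for threads in (1, 2, 4, 8):              # ST, SMT2, SMT4, SMT8
    print(f"SMT{threads}: {fetch_buffer_per_thread(threads)} entries per thread")
# -> 64, 64, 32 and 16 entries respectively
```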

The design philosophy of making sure that 2 threads do not hinder each other can be found further down the pipeline. The Unified Issue Queue (UniQueue) consists of two symmetric halves (UQ0 and UQ1), each with 32 entries for instructions to be issued.

Each of these UQs can issue instructions to its own reserved Load/Store, Integer (FX), Load, and Vector units. A single thread can use both queues, but this setup is less flexible (and thus less performant) than a single large issue queue. However, once you run two threads on a core (SMT-2), the back-end behaves like two full-blown 5-way superscalar cores, each with its own set of physical registers. As a result, one thread cannot strangle the other by hogging or blocking resources. That is why IBM can claim that two threads perform so much better than one.
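Below is a rough sketch of how the issue-queue capacity gets divided as more threads are added; the way threads pair up with the two halves is our own illustrative assumption, not something IBM documents:

```python
# Toy model of how threads share the two UniQueue halves (UQ0/UQ1).
# A lone thread may issue to both halves; with SMT active, threads are
# spread over the halves (the exact pairing here is an assumption).
def uq_assignment(threads: int) -> dict:
    halves = {"UQ0": [], "UQ1": []}
    for t in range(threads):
        if threads == 1:
            halves["UQ0"].append(t)
            halves["UQ1"].append(t)       # single thread uses both halves
        else:
            halves["UQ0" if t % 2 == 0 else "UQ1"].append(t)
    return halves

for threads in (1, 2, 4, 8):              # ST, SMT2, SMT4, SMT8
    per_thread = (2 * 32) // threads      # two 32-entry halves shared fairly
    print(f"SMT{threads}: {uq_assignment(threads)} (~{per_thread} entries/thread)")
```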

It is somewhat similar to the "shared front-end, dual-core back-end" approach we saw in Bulldozer, but with (much) more finesse. For example, the data cache is not split in two: the large and fast 64 KB D-cache is available to all threads and has four read ports, so two threads can each perform two loads at the same time. Another difference is that a single thread is not limited to one half, but can actually use both, something that was not possible with Bulldozer.

Dividing those ample resources in two once more (SMT-4) should not pose a problem. All the resources are there to run most server applications quickly, and one of the threads will regularly pause when a cache miss or another stall occurs, freeing resources for the rest. SMT-8 mode can sometimes be a step too far for some applications, as four threads are now dividing up the resources of each issue queue. There are more signs that SMT-8 is rather cramped: instruction prefetching is disabled in SMT-8 mode for bandwidth reasons. So we suspect that SMT-8 is only a good fit for low-IPC, "throughput is everything" server applications. In most applications, SMT-8 may increase the latency of individual threads while offering only a small increase in total throughput. But the flexibility is enormous: the POWER8 can work with two heavy threads, but it can also transform itself into a lightweight thread machine gun.
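One way to see the throughput-versus-latency trade-off for yourself is a small scaling test that pins an increasing number of CPU-bound workers to the hardware threads of a single core. The sketch below is ours, not IBM's: it assumes a Linux system on which the sibling hardware threads of the first core are numbered 0 through 7 (as on a POWER8 core in SMT8 mode), and the workload is just a placeholder loop:

```python
import os
import time
from multiprocessing import Process

def spin(cpu: int, iterations: int = 5_000_000) -> None:
    """CPU-bound placeholder workload pinned to one hardware thread."""
    os.sched_setaffinity(0, {cpu})        # 0 = the calling process (Linux only)
    x = 0
    for i in range(iterations):
        x += i * i                        # keep the integer pipeline busy

if __name__ == "__main__":
    for nthreads in (1, 2, 4, 8):         # ST, SMT2, SMT4, SMT8
        workers = [Process(target=spin, args=(cpu,)) for cpu in range(nthreads)]
        start = time.perf_counter()
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        elapsed = time.perf_counter() - start
        # Every worker does the same amount of work, so aggregate throughput
        # scales with nthreads / elapsed, while elapsed approximates the
        # latency each individual thread experiences.
        print(f"{nthreads} thread(s): throughput {nthreads / elapsed:.2f} units/s, "
              f"latency {elapsed:.2f} s")
```

If SMT pays off, throughput keeps climbing from one to eight workers even as per-thread latency grows; the point at which latency rises faster than throughput is where an extra SMT level stops being worth it for a given workload.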

Comments

  • zodiacfml - Thursday, July 21, 2016 - link

    Like a good TV series, I can't wait for the next episode.
  • aryonoco - Friday, July 22, 2016 - link

    OK, this is literally why Anandtech is the best in the tech journalism industry.

    There is nowhere else on the net that you can find a head to head comparison between POWER and Xeon, and unless you work in the tech department of a Fortune 500 company, this information has just not been available, until now.

    Johan, thank you for your work on this article. I did give you beef in your previous article about using LE Ubuntu, but I concede your point. Very happy you are writing more for Anandtech these days.

    Xeons really need some competition. Whether that competition comes from POWER or ARM or Zen, I am happy to see some competition. IBM has big plans for POWER9. Hopefully this is just the start of things to come.
  • JohanAnandtech - Friday, July 22, 2016 - link

    Thanks! It is very exciting to perform benchmarks that nobody has published yet :-).

    In hindsight, I have to admit that the first article contained too few benchmarks that really mattered for POWER8. Most of our usual testing and scripting did not work, and so after a lot of tinkering, swearing and sweat I got some benchmarks working on this "exotic to me" platform. The contrast between what one would expect to see on POWER8 and me being proud of being able to somewhat "tame the beast" could not have been greater :-). In other words, there was a learning curve.
  • tipoo - Friday, July 22, 2016 - link

    I found it very interesting as well and would certainly not mind seeing more from this space, like maybe Xeon Phi and SPARC M7
  • jospoortvliet - Tuesday, July 26, 2016 - link

    Amen. But, to not ask too much, just the prospect of part 2 of the POWER benchmark is already super exciting. Yes, the Internetz need more of this!
  • Daniel Egger - Friday, July 22, 2016 - link

    Not quite sure what the endianness of a system adds to the competitive factor. Maybe someone could elaborate on why it is so important to run a system in LE?
  • ZeDestructor - Friday, July 22, 2016 - link

    Not much, really, with the compilers being good and all that.

    Really, it's quite clearly there just for some excellent alliteration.
  • JohanAnandtech - Friday, July 22, 2016 - link

    Basically, LE lowers the barrier to integrating an IBM server into an x86-dominated datacenter.

    see https://www.ibm.com/developerworks/community/blogs...

    Just a few reasons:

    "Numerous clients, software partners, and IBM’s own software developers have told us that porting their software to Power becomes simpler if the Linux environment on Power supports little endian mode, more closely matching the environment provided by Linux on x86. This new level of support will *** lower the barrier to entry for porting Linux on x86 software to Linux on Power **."

    "A system accelerator programmer (GPU or FPGA) who needs to share memory with applications running in the system processor must share data in an pre-determined endianness for correct application functionality."
  • Daniel Egger - Friday, July 22, 2016 - link

    While correct in theory, this hasn't been a problem for the last 20 years. People are used to using BE on PPC/POWER, and the software, the drivers and the infrastructure are very mature (as a matter of fact, it was my job 15 years ago to make sure they are). PPC/POWER actually has configurable endianness, so if someone had wanted to go LE earlier it would easily have been possible, but only a few ever attempted that stunt; so why have the big disruption now?
  • KAlmquist - Friday, July 22, 2016 - link

    I assume that this is about selling POWER boxes to companies that currently run all x86 servers and have a bunch of custom software that they might be willing to recompile. If the customer has to spend a bunch of time fixing endian dependencies in their software in order to get it to work on POWER, it will probably be less expensive for them to simply stick with x86.
