Conclusions: SMT On

I wasn’t too sure what we were going to see when I started this testing. I know the theory behind implementing SMT, what it means for two instruction streams sharing access to core resources, and how cores designed with SMT in mind from the start are built differently from cores that run one thread per core. But theory only gets you so far. Aside from all the forum messages over the years talking about performance gains/losses when a product has SMT enabled, and the few demonstrations of server processors running focused workloads with SMT disabled, it is worth testing real workloads to find out whether there is a difference at all.

Results Overview

In our testing, we covered three areas: Single Thread, Multi-Thread, and Gaming Performance.

In single-threaded workloads, where each thread has access to all of the resources in a single core, we saw no change in performance when SMT was enabled – all of our workloads were within ±1%.

In multi-threaded workloads, we saw an average uplift in performance of +22% when SMT was enabled. Most of our tests gained between +5% and +35%. A couple of workloads scored worse, mostly due to resource contention with so many threads in play – the limit here is memory bandwidth per thread. One workload, a computational test with little-to-no memory requirements, gained +60%; it scored even better in AVX2 mode, showing that there is still some bottleneck that gets alleviated with fewer instructions.
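
To make the scaling comparison concrete, below is a minimal sketch of how this kind of multi-threaded scaling can be probed: run the same compute-bound kernel once with one thread per physical core and once with one thread per logical core, then compare throughput. The kernel, the two-threads-per-core assumption, and the iteration counts are illustrative only – this is not the benchmark code used in the article.

    // Minimal sketch: compare throughput of a compute-bound kernel at one
    // thread per physical core versus one thread per logical core. The kernel
    // and the SMT2 assumption are illustrative, not the article's benchmarks.
    #include <atomic>
    #include <chrono>
    #include <cstdint>
    #include <iostream>
    #include <thread>
    #include <vector>

    // Purely arithmetic work with almost no memory traffic – the kind of
    // workload that tends to scale well with SMT.
    static uint64_t spin_kernel(uint64_t iters) {
        uint64_t x = 0x9E3779B97F4A7C15ull;
        for (uint64_t i = 0; i < iters; ++i) {
            x ^= x << 13; x ^= x >> 7; x ^= x << 17;  // xorshift-style mixing
        }
        return x;
    }

    // Run the kernel on 'threads' software threads; return iterations per second.
    static double run_with_threads(unsigned threads, uint64_t iters_per_thread) {
        std::vector<std::thread> pool;
        std::atomic<uint64_t> sink{0};  // keeps the work from being optimized away
        auto start = std::chrono::steady_clock::now();
        for (unsigned t = 0; t < threads; ++t)
            pool.emplace_back([&] { sink += spin_kernel(iters_per_thread); });
        for (auto& th : pool) th.join();
        std::chrono::duration<double> dt = std::chrono::steady_clock::now() - start;
        return double(threads) * double(iters_per_thread) / dt.count();
    }

    int main() {
        const unsigned logical  = std::thread::hardware_concurrency();
        const unsigned physical = logical / 2;   // assumes SMT2 (two threads per core)
        const uint64_t iters    = 200'000'000ull;
        double one_per_core = run_with_threads(physical, iters);
        double two_per_core = run_with_threads(logical,  iters);
        std::cout << "Throughput change with all logical threads loaded: "
                  << (two_per_core / one_per_core - 1.0) * 100.0 << " %\n";
    }

On an SMT-enabled system the scheduler will generally fill idle physical cores before loading their siblings, so the second run approximates putting two threads on every core; toggling SMT in firmware is the more rigorous way to make the comparison.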

On gaming, overall there was no difference between SMT On and SMT Off, although some games showed differences in CPU-limited scenarios: Deus Ex was down almost 10% when CPU limited, while Borderlands 3 was up almost 10%. As we moved to more GPU-limited scenarios, those discrepancies were neutralized, with a few games still showing single-digit percentage gains with SMT enabled.

For power and performance, we tested two examples where running two threads per core gave either no improvement (Agisoft) or a significant improvement (3DPMavx). In both cases, SMT Off mode (one thread per core) ran at higher temperatures and higher frequencies. For the benchmark where performance was about equal, the power consumed was a couple of percentage points lower when running one thread per core. For the benchmark where running two threads per core gave a big performance increase, the power in that mode was also lower, and there was a significant +91% performance-per-watt improvement from enabling SMT.
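
As a rough illustration of how the performance-per-watt comparison works, the sketch below computes the metric from a benchmark score and the average package power of a run; the numbers are placeholders, not the values measured in our testing.

    // Rough illustration of the performance-per-watt comparison. The scores
    // and power figures below are placeholders, not measured values.
    #include <iostream>

    struct Run {
        double score;      // benchmark score, higher is better
        double avg_watts;  // average package power during the run
    };

    static double perf_per_watt(const Run& r) { return r.score / r.avg_watts; }

    int main() {
        Run smt_off{100.0, 120.0};  // placeholder: one thread per core
        Run smt_on {130.0, 110.0};  // placeholder: two threads per core
        double gain = (perf_per_watt(smt_on) / perf_per_watt(smt_off) - 1.0) * 100.0;
        std::cout << "Performance per watt change with SMT on: " << gain << " %\n";
    }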

What Does This Mean?

I mentioned at the beginning of the article that SMT performance gains can be seen from two different viewpoints.

The first is that if SMT enables more performance, then it’s an easy switch to use, and some users consider that if you can get perfect scaling, then SMT is an effective design.

The second is that if SMT enables too much performance, then it’s indicative of a bad core design. If you can get perfect scaling with SMT2, then perhaps something is wrong with the design of the core and the bottleneck is quite bad.

Having poor SMT scaling doesn’t always mean that the SMT is badly implemented – it can also imply that the core design is very good. If an effective SMT design can be interpreted as a poor core design, then it’s quite easy to see that vendors can’t have it both ways. Every core design has deficiencies (that much is true), and both Intel and AMD will tell their users that SMT enables the system to pick up extra bits of performance where workloads can take advantage of it, and that for real-world use cases, there are very few downsides.

We’ve known for many years that having two threads per core is not the same as having two cores – in a worst case scenario, there is some performance regression as more threads try to fight for cache space, but those use cases seem to be highly specialized for HPC and supercomputer-like tasks. SMT in the real world fills in the gaps where gaps are available, and this occurs mostly in heavily multi-threaded applications with no cache contention. In the best case, SMT offers a sizeable performance-per-watt increase. But on average, there are modest (+22% on MT) gains to be had, and gaming performance isn’t disturbed, so it is worth keeping enabled on Zen 3.

 
Comments

  • quadibloc - Friday, December 4, 2020 - link

    On the one hand, if one program does a lot of integer calculations, and the other does a lot of floating-point calculations, putting them on the same core would seem to make sense because they're using different execution resources. On the other hand, if you use two threads from the same program on the same core, then you may have less contention for that core's cache memory.
  • linuxgeex - Thursday, December 3, 2020 - link

    One of the key ways in which SMT helps keep the execution resources of the core active is cache misses. When one thread is waiting 100+ clocks on reads from DRAM, the other thread can happily keep running out of cache. Of course, this is a two-edged sword: since the second thread is consuming cache, the first thread is more likely to have a cache miss. So for the very best per-thread latency you're best off disabling SMT. For the very best throughput, you're best off enabling SMT. It all depends on your workload. SMT gives you that flexibility. A core without SMT lacks that flexibility.
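
To put a concrete picture on the latency-hiding effect described in that comment, here is a minimal sketch that pairs a pointer-chasing thread (mostly stalled on DRAM) with a register-only arithmetic thread; if the two land on the sibling hardware threads of one core, the arithmetic thread can keep the execution units busy during the other thread's misses. Buffer size and iteration counts are illustrative assumptions, and no explicit core pinning is done.

    // Sketch: pair a memory-latency-bound thread with a compute-bound thread.
    // Buffer size and iteration counts are illustrative only; thread placement
    // is left to the OS scheduler (no affinity pinning is attempted).
    #include <chrono>
    #include <cstdint>
    #include <iostream>
    #include <numeric>
    #include <random>
    #include <thread>
    #include <utility>
    #include <vector>

    // Dependent loads through a buffer far larger than the caches: each step
    // waits on the previous one, so this thread mostly stalls on DRAM.
    static uint32_t pointer_chase(const std::vector<uint32_t>& next, uint64_t steps) {
        uint32_t i = 0;
        for (uint64_t s = 0; s < steps; ++s) i = next[i];
        return i;
    }

    // Register-only arithmetic with essentially no memory traffic.
    static uint64_t arithmetic(uint64_t iters) {
        uint64_t x = 1;
        for (uint64_t i = 0; i < iters; ++i)
            x = x * 6364136223846793005ull + 1442695040888963407ull;
        return x;
    }

    int main() {
        // 256 MiB of 4-byte indices arranged into a single random cycle
        // (Sattolo's algorithm), so the chase never settles into a short loop.
        const uint32_t n = 64u * 1024u * 1024u;
        std::vector<uint32_t> next(n);
        std::iota(next.begin(), next.end(), 0u);
        std::mt19937 rng{42};
        for (uint32_t i = n - 1; i > 0; --i) {
            std::uniform_int_distribution<uint32_t> pick(0, i - 1);
            std::swap(next[i], next[pick(rng)]);
        }

        uint32_t r1 = 0; uint64_t r2 = 0;
        auto start = std::chrono::steady_clock::now();
        std::thread miss_bound([&] { r1 = pointer_chase(next, 20'000'000ull); });
        std::thread compute   ([&] { r2 = arithmetic(1'000'000'000ull); });
        miss_bound.join();
        compute.join();
        std::chrono::duration<double> dt = std::chrono::steady_clock::now() - start;
        std::cout << "paired run: " << dt.count() << " s (sink " << (r1 ^ r2) << ")\n";
    }

Comparing the paired elapsed time against running each loop back-to-back on its own gives a feel for how much of the miss latency the second thread can soak up.
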
  • Dahak - Thursday, December 3, 2020 - link

    Now I only glanced through the article, but would it make more sense to use a lower core count CPU to see the benefits of SMT, as using a higher core count might mean it will use the real cores over the SMT cores?
  • Holliday75 - Thursday, December 3, 2020 - link

    Starting to think the entire point of the article was that this subject is so damn complicated and hard to quantify for the average user that there is no point in trying unless you are running workloads in a lab environment to find the best possible outcome for the work you plan on doing. Who is going to bother doing that unless you work for a company where it makes sense to do so?
  • 29a - Thursday, December 3, 2020 - link

    I wonder what the charts would look like with a dual core SMT processor. I think the game tests would have a good chance of changing. I'm a little surprised the author didn't think to test CPUs with more limited resources.
  • Klaas - Thursday, December 3, 2020 - link

    Nice article. And nice benchmarks.

    Too bad you could not include more real-life tests.
    I think that a processor with far fewer threads (like a 5600X)
    is easier to saturate with real-life software that actually uses threads.
    The ratio of cache size and memory channel bandwidth to cores is quite different
    for those 'smaller' processors. That will probably result in different benchmark results...
    So it would be interesting to see what those processors will do, SMT ON versus SMT OFF.
    I don't think the end result will be different, but it could even be a bigger victory for SMT ON.

    Another interesting area is virtualization.
    And as already mentioned in other comments, it is very important that the operating system
    assigns threads to the right core or SMT sibling combinations.
    That is even more important in virtualization situations...
  • MDD1963 - Thursday, December 3, 2020 - link

    Determining the usefulness of SMT with 16 cores on tap is not quite as relevant as when this experiment might be done with, say, a 5600X or 5800X; naturally 16 cores without SMT might still be plenty (as even 8 non-SMT cores on the 9700K proved).
  • thejaredhuang - Thursday, December 3, 2020 - link

    This would be a better test on a CPU that doesn't have 16 base cores. If you could try it on a 4C/8T part I think the difference would be more pronounced.
  • dfstar - Thursday, December 3, 2020 - link

    The basic benefit of SMT is that it allows the processor to hide the impact of long-latency instructions on average IPC, since it can switch to a new thread and execute its instructions. In this way it is similar to OOO (which leverages speculative execution to do the same) and also more flexible than fine-grained multi-threading. There is an overhead and cost (area/power) due to the duplicated structures in the core that will impact the perf/watt of pure single-threaded workloads, and I don't think disabling SMT removes all of this impact...
  • GreenReaper - Thursday, December 3, 2020 - link

    Perhaps not. But at the same time, it is likely that any non-SMT chip that has an SMT variant actually *is* an SMT chip, just with SMT disabled in firmware – either because it is broken on that chip, or because the non-SMT variant sold better.
