Power Consumption, Temperature

Two other arguments for having SMT enabled or disabled come down to power consumption and temperature.

With SMT enabled, core utilization is expected to be higher, with more instructions flowing through and being processed per cycle. This naturally increases the power requirements of the core, but might also reduce the core's frequency. The trade-off is meant to be that the extra work going through the core should more than make up for the extra power used, or for any drop in frequency. The lower frequency should also enable more efficient throughput, assuming the voltage is adjusted accordingly.
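
To illustrate why a lower frequency paired with a lower voltage can improve efficiency, below is a minimal sketch using the common dynamic-power approximation P ≈ C·V²·f. The capacitance, voltage, and frequency values are illustrative assumptions, not measured figures for these chips.

```python
# Minimal sketch of the dynamic-power approximation P ~ C * V^2 * f.
# The effective capacitance, voltages, and frequencies below are
# illustrative assumptions, not measured values for these CPUs.

def dynamic_power(c_eff_nf: float, voltage_v: float, freq_ghz: float) -> float:
    """Approximate dynamic power in watts from effective switched
    capacitance (nF), core voltage (V), and frequency (GHz)."""
    return c_eff_nf * voltage_v ** 2 * freq_ghz

# Hypothetical SMT Off operating point: higher frequency needs a higher voltage.
p_off = dynamic_power(c_eff_nf=10.0, voltage_v=1.30, freq_ghz=4.30)
# Hypothetical SMT On operating point: lower frequency allows a lower voltage.
p_on = dynamic_power(c_eff_nf=10.0, voltage_v=1.25, freq_ghz=4.15)

print(f"SMT Off: {p_off:.1f} W, SMT On: {p_on:.1f} W")
# Even a small voltage drop outweighs the frequency loss, so per-core
# throughput per watt can improve if SMT keeps the extra resources busy.
```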

This is perhaps where AMD and Intel differ slightly. Intel's turbo frequency range is hard-bound to specific frequency values based on core loading, regardless of how many threads are active or how many threads per core are active. The behaviour becomes a little more opportunistic once we reach steady-state power, although exactly how far down the line that happens will depend on the turbo power duration the motherboard has set. AMD's frequency is continually opportunistic from the moment load is applied: it obviously scales down as more cores are loaded, but it will balance up and down based on core load at all times. On the side of thermals, this will depend on the heat density being generated in each core, which also acts as a feedback loop into the turbo algorithm if the power limit has not been reached.
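
As a loose illustration of this "continually opportunistic" behaviour, here is a minimal sketch of a boost loop that raises frequency while power and thermal headroom remain and backs off otherwise. The limits, step sizes, and sensor stubs are assumptions for illustration, not Intel's or AMD's actual turbo algorithms.

```python
# Minimal sketch of an opportunistic boost loop with power/thermal feedback.
# Limits, step sizes, and the sensor stubs are illustrative assumptions,
# not the actual Intel or AMD turbo algorithms.
import random

POWER_LIMIT_W = 142.0      # hypothetical package power limit
TEMP_LIMIT_C = 90.0        # hypothetical thermal limit
F_MIN, F_MAX, F_STEP = 3600, 4700, 25   # MHz

def read_package_power() -> float:       # stand-in for a real sensor read
    return random.uniform(100.0, 150.0)

def read_core_temp() -> float:           # stand-in for a real sensor read
    return random.uniform(60.0, 95.0)

freq = F_MIN
for _ in range(20):                      # one evaluation per control interval
    if read_package_power() < POWER_LIMIT_W and read_core_temp() < TEMP_LIMIT_C:
        freq = min(freq + F_STEP, F_MAX)  # headroom available: boost up
    else:
        freq = max(freq - F_STEP, F_MIN)  # a limit was hit: back off
    print(f"target frequency: {freq} MHz")
```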

For our analysis here, we've picked two benchmarks: Agisoft, a variable-threaded test that performs practically the same with SMT on or off, and 3DPMavx, a pure MT test that gets the biggest gain from SMT.

Agisoft

Photoscan from Agisoft is a 2D image to 3D model creator, using dozens of high-quality 2D images to generate related point maps to form a 3D model, before finally texturing the model using the images provided. It is used in archiving artefacts, as well as converting 2D sculpture into 3D scenes. Our test analyses a standardized set of 85 x 18 megapixel photos, with a result measured in time to complete.

Simply looking at CPU temperatures while running our real-world Agisoft test, our current setup (MSI X570 Godlike with Noctua NH12S) shows that both SMT modes flutter around 74ºC sustained. Perhaps the more interesting element is at the beginning of the test, where CPU temperatures are higher in SMT Off mode. Looking into the data, during SMT Off the processor sits at 4300 MHz, compared to 4150 MHz when SMT is enabled. This would account for the difference.

Looking at power, we can see that for the bulk of the test, both modes show similar package power consumption, around 130 W. SMT Off draws more power during the first couple of minutes of the test, due to the higher frequency; clearly the thermal density in this part of the test, with only one thread per core active, is allowing for a higher turbo.

If we measure the total power consumed over the test, it's basically identical in any metric that matters. Nearer the end of the test, where the workload is more variably threaded, the SMT Off mode seems to draw slightly less power. The benchmark completion time is essentially the same in both modes due to the nature of the test, but SMT Off comes in at 2% lower power overall.
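
The "total power of the test" comparison is effectively an energy measurement. As a rough illustration of how that can be computed from logged package-power data, here is a minimal sketch; the sample traces and one-second logging interval are assumptions rather than our actual logs.

```python
# Minimal sketch: integrating logged package-power samples into total energy.
# The sample traces and one-second logging interval are illustrative
# assumptions, not the actual test logs.

def total_energy_wh(samples_w, interval_s=1.0):
    """Integrate power samples (watts) taken every interval_s seconds
    into total energy in watt-hours."""
    return sum(samples_w) * interval_s / 3600.0

# Hypothetical traces: SMT Off draws a bit more early on, a bit less later.
smt_on_trace  = [128.0] * 120 + [130.0] * 1080
smt_off_trace = [135.0] * 120 + [127.0] * 1080

e_on, e_off = total_energy_wh(smt_on_trace), total_energy_wh(smt_off_trace)
print(f"SMT On:  {e_on:.1f} Wh")
print(f"SMT Off: {e_off:.1f} Wh ({(e_off / e_on - 1) * 100:+.1f}%)")
```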

3DPMavx (3D Particle Movement)

Our 3DPM test is an algorithmic sequence of non-interactive random three-dimensional movement, designed to simulate molecular diffusive movement inside a gas or a fluid. The simulation is made non-interactive (i.e. no two molecules will collide) due to the original average movement of each particle taking collisions into account. Our test cycles through six movement algorithms at ten seconds apiece, followed by ten seconds of idle, with the whole loop being repeated six times, taking about 20 minutes, regardless of how fast or slow the processor is. The related performance figure is millions of particle movements per second. Each algorithm has been accelerated for AVX2.
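
For readers curious what such a workload looks like, below is a minimal sketch of non-interacting 3D random-walk particle movement in the spirit of the description above. This is not the actual 3DPM benchmark code; the particle count, step logic, and iteration count are assumptions for illustration.

```python
# Minimal sketch of non-interacting 3D random-walk particle movement,
# in the spirit of the 3DPM description. This is NOT the actual 3DPM
# benchmark code; particle count and step logic are assumptions.
import numpy as np

def move_particles(positions: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Move every particle one unit step in a uniformly random 3D direction.

    Particles are treated as non-interacting, so there are no collision checks."""
    steps = rng.normal(size=positions.shape)                 # random directions
    steps /= np.linalg.norm(steps, axis=1, keepdims=True)    # unit-length steps
    return positions + steps

rng = np.random.default_rng(42)
positions = np.zeros((1_000_000, 3))     # one million particles at the origin
for _ in range(10):                      # a few movement iterations
    positions = move_particles(positions, rng)
print("mean displacement:", np.linalg.norm(positions.mean(axis=0)))
```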

On the temperature side of things, it is clear that SMT Off mode again puts up a higher thermal profile. Temperatures this time peak at 66ºC, but the difference between the two modes is clear.

On the power side, we can see why SMT Off mode is warmer – the cores are drawing more power. Looking at the data, SMT Off mode is running ~4350 MHz, compared to SMT On which is running closer to 4000 MHz.

With the higher frequency in SMT Off mode, the estimated total power consumption is 6.8% higher. This difference appears to be fairly constant throughout the benchmark, which lasts about 20 minutes in total.

But let us add in the performance numbers. Because 3DPMavx can take advantage of SMT, the SMT On mode scores +77.5% by having two threads per core rather than one (a score of 10245 vs 5773). Combined, this makes SMT On mode +91% better in performance per watt on this benchmark.
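
As a sanity check on those figures, here is a small worked calculation combining the quoted scores with the ~6.8% power difference. It lands close to the +91% number above; the small gap is presumably down to rounding in the quoted percentages.

```python
# Worked check of the performance-per-watt claim using the numbers above.
score_smt_on, score_smt_off = 10245, 5773
power_ratio_off_vs_on = 1.068            # SMT Off draws ~6.8% more power

perf_gain = score_smt_on / score_smt_off - 1
print(f"Performance gain from SMT: {perf_gain:+.1%}")        # about +77.5%

# Perf/W ratio: (score_on / P) / (score_off / (1.068 * P)) = 1.775 * 1.068
ppw_gain = (score_smt_on / score_smt_off) * power_ratio_off_vs_on - 1
print(f"Perf-per-watt gain from SMT: {ppw_gain:+.1%}")       # about +90%
```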

Comments

  • MrSpadge - Thursday, December 3, 2020 - link

    > We’ve known for many years that having two threads per core is not the same as having two cores

    True, and I still read this as an argument against SMT in forums. IMO it should be pointed out clearly that the cost of implementing either also differs drastically: +100% core size for another core and ~5% for SMT.
  • WaltC - Thursday, December 3, 2020 - link

    Intel began its HT journey in order to pull more efficiency from each core--basically, as performance was being left on the table. Interestingly enough, after Athlon and A64, AMD roundly criticized Intel because the SMT thread was not done by a "real core"...and then proceeded to drop cores with two integer units--which AMD then labeled as "cores"...;) Intel's HT approach proved superior, obviously. IIRC. It's been awhile so the memories are vague...;) The only problem with this article is that it tries to make calls about SMT hardware design without really looking hard at the software, and the case for SMT is a case for SMT software. Games will not use more than 4-8 threads simultaneously so of course there is little difference between SMT on and off when running most games on a 5950. You would likely see near the same results on a 5600 in terms of gaming. SMT on or off when running these games leaves most of the CPU's resources untouched. Programs designed and written to utilize a lot of threads, however, show a robust, healthy scaling with SMT on versus no SMT. So--without a doubt--SMT CPU design is superior to no SMT from the standpoint of the hardware's performance. The outlier is the software--not the hardware. And of course the hardware should never, ever be judged strictly by the software one arbitrarily decides to run on it. We learn a lot more about the limits of the software tested here than we learn about SMT--which is a solid performance design in CPU hardware.
  • WarlockOfOz - Friday, December 4, 2020 - link

    Very valid point about how games won't see a difference between 16 and 32 threads when they only use 6. Do you know if this type of analysis has been done at the lower end of the market?
  • WaltC - Friday, December 4, 2020 - link

    It's been common knowledge established a few years ago when AMD started pushing 8 core (and greater) CPUs that games don't require that many cores and that 6 cores is optimal for gaming right now. And if you do more than game, occasionally, and need more than 6 threads then SMT is there for you. As the new consoles are 8-core CPU designs, over time the number of cores required for optimal game performance will increase.
  • Flying Aardvark - Friday, December 4, 2020 - link

    Consoles are 8-core now, with 2 reserved for the OS. Count on 6-cores being optimal for gaming for quite some time.
  • Kangal - Friday, December 4, 2020 - link

    Those Jaguar cores were more like a 4c/8t processor to be fair. And they weren't that much better than Intel's Atom cores, a far cry from Intel's Core-i SkyLake architecture. And current gen consoles were very light on the OS, so maybe using 1-full core (or 2-threads-shared) leaving only 3-cores for games, but much better than the 2-core optimised games from the PS3/360 era.

    The new gen consoles will be somewhat similar, using only 1-full core (2-threads) reserved for the OS. But this time we have an architecture that's on-par with Intel's Core-i SkyLake, with a modern full 8-core processor (SMT/HT optional). This time leaving a healthy 7-cores that's dedicated to games. Optimisations should come sooner than later, and we'll see the effects on PC ports by 2022. So we should see a widening gap between 4vs6-core, and to a lesser extent 6vs8-core in the future. I wouldn't future-proof my rig by going for a 5700x instead of a 5600x, I would do that for the next round (ie 2022 Zen4).
  • AntonErtl - Sunday, December 6, 2020 - link

    The 8 Jaguar cores are in no way like 4c/8t CPUs; if you use only half of them, you get half the performance (unless your application is memory/L2-bandwidth-limited). Their predecessor Bobcat is about twice as fast as a Bonnell core (Atom proper), and a little slower than Silvermont (the core that replaced Bonnell), about half as fast as Goldmont+ (all at the clock rates at which they were available in fanless mini-ITX boards), one third as fast as a 3.5GHz Excavator core, and one sixth as fast as a 4.2GHz Skylake.
  • Oxford Guy - Sunday, December 6, 2020 - link

    Worse IPC than Bulldozer as far as I know. Certainly worse than Piledriver.

    Really sad. The "consoles" should have used something better than Jaguar. It's bad enough that the "consoles" are a parasitic drain on PC gaming in the first place. It's worse when they not only drain life with their superfluous walled gardens but also by foisting such a low-grade CPU onto the art.
  • Kangal - Thursday, December 24, 2020 - link

    The Jaguar cores share a lot of DNA with Bulldozer, but they aren't the same. It's like Intel's Atom chips compared to Intel Core-i chips. With that said, 2015 Puma+ was a slight improvement over 2013 Jaguar, which was a modest improvement over the initial 2011 Bobcat lineup. All this started in 2006 with AMD choosing to evolve their earlier Phenom2 cores which are derivatives of the AMD Athlon-64.

    So just by their history, we can see they're inline with Intel's Atom architecture evolution, and basically a direct competitor. Where Intel had slightly less performance, but had much lower power-draw... making them the obvious winner. Leaving AMD to fill in the budget segments of the market.

    As for the core arrangement, they don't have full proper cores as people expect them. Like the Bulldozer architecture, each core had to share resources like the decoder and floating-point unit. So in many instances, one core would have to wait for the other core. This boosts multithreaded performance with simple calculations in orderly patterns. However, with more complex calculations and erratic/dynamic patterns (ie Regular PC use), it causes a hit to the single-thread performance and notable hiccups. So my statement was true. This is more like a 4c/8t chipset, and it is less like a Core-i and much more like an Atom. But don't take my word for it, take Dr Ian Cutress. He said the same thing during the deep dive into the Jaguar microarchitecture, and recently in the Chuwi Aerobox (Xbox One S) article.
    https://www.anandtech.com/show/16336/installing-wi...

    Now, there have been huge benefits to the Gaming PC industry, and game ports, due to the PS4/XB1. The first being the x86-64bit direct compatibility. Second was the cross-compatibility thanks to Vulkan and DirectX (moreso with PS4 Pro and XB1X). The third being that it forced game developers to innovate their game engines, so that they're less narrow and more multi-threaded. With PS5/XseX we now see a second huge push with this philosophy, and the improvements of fast single-thread performance and fast-flash storage access. So I think while we have legitimate reasons to groan about the architecture (especially in the PS4) upon release, we do have to recognize the conveniences that they also brought (especially in the XB1X). This is just to show that my stance wasn't about console bashing.
  • at_clucks - Monday, December 7, 2020 - link

    @Kangal, Jaguar APUs in consoles are definitely not "like a 4c/8t processor" because they don't use CMT. They are full 8 cores. Their IPC may be comparable with some newer Atoms although it's hard to benchmark how the later "Evolved Jaguar" cores in the mid generation console refresh compares against the regular Jaguar or Atom.
