The Windows and Multithreading Problem (A Must Read)

Unfortunately, not everything is as straightforward as installing Windows 10 and going off on a 128-thread adventure. Most home users run Windows 10 Home or Windows 10 Pro, both of which are fairly ubiquitous even among workstation users. The problem with these operating systems rears its ugly head when we go above 64 threads. To be clear, Microsoft never expected home (or even most workstation) systems to go above this count, and to a certain extent that was a fair assumption.

Whenever Windows sees more than 64 threads in a system, it separates those threads into processor groups. The way this is done is very rudimentary: of the enumerated cores and threads, the first 64 go into the first group, the second 64 go into the next group, and so on. This is most easily observed by going into Task Manager and trying to set the affinity of a particular program:

[Screenshot: Task Manager's processor affinity dialog, with the threads split across two processor groups]

With our 64 core processor, when simultaneous multithreading is enabled, we get a system with 128 threads. This is split into two groups, as shown above.
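For anyone who wants to check how their own machine has been carved up, the Win32 API exposes this directly. Below is a minimal sketch in plain C using standard Windows calls (available since Windows 7); on the 3990X with SMT enabled it should report two groups of 64.

/*
 * Minimal sketch: ask Windows how it has carved the logical processors
 * into processor groups. On a 3990X with SMT enabled this should
 * report two groups of 64 logical processors each.
 */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    WORD  groups = GetActiveProcessorGroupCount();
    DWORD total  = GetActiveProcessorCount(ALL_PROCESSOR_GROUPS);

    printf("Processor groups: %u, total logical processors: %lu\n",
           groups, total);

    for (WORD g = 0; g < groups; g++)
        printf("  Group %u: %lu logical processors\n",
               g, GetActiveProcessorCount(g));

    return 0;
}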

When the system is in this mode, it becomes very tricky for most software to operate properly. When a program is launched, it is placed into one of the processor groups based on load: if one group is busy, the program will be spawned in the other. Once the program is running inside a group, unless it is processor group aware, it can only access the other threads in that same group. This means that even if a multi-threaded program could use 128 threads, unless it is built with processor groups in mind, it might only spawn with access to 64.
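To show what "processor group aware" means in practice, here is a minimal sketch of the pattern such software has to follow. By default a new thread runs in its creator's group, so a program that wants all 128 threads must explicitly move workers onto the other group(s) with SetThreadGroupAffinity. The worker() function here is a hypothetical stand-in for real work, and error handling is trimmed for brevity.

#include <windows.h>

/* Hypothetical worker: real code would do actual computation here. */
static DWORD WINAPI worker(LPVOID arg)
{
    (void)arg;
    return 0;
}

/* Spawn one worker in every processor group, not just the current one. */
static void spawn_one_worker_per_group(void)
{
    WORD groups = GetActiveProcessorGroupCount();

    for (WORD g = 0; g < groups; g++) {
        /* Start suspended so the thread can be moved before it runs. */
        HANDLE h = CreateThread(NULL, 0, worker, NULL, CREATE_SUSPENDED, NULL);
        if (!h)
            continue;

        DWORD cpus = GetActiveProcessorCount(g);
        GROUP_AFFINITY affinity = { 0 };
        affinity.Group = g;
        affinity.Mask  = (cpus >= 64) ? ~(KAFFINITY)0
                                      : (((KAFFINITY)1 << cpus) - 1);

        SetThreadGroupAffinity(h, &affinity, NULL);
        ResumeThread(h);
        CloseHandle(h);  /* a real program would keep the handle and wait on it */
    }
}

int main(void)
{
    spawn_one_worker_per_group();
    Sleep(100);  /* crude: give the workers a moment instead of joining them */
    return 0;
}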

If this sounds somewhat familiar, then you may have heard of NUMA, or non-uniform memory access. This occurs when the CPU cores in a system have different latencies to main memory, such as in a dual-socket system: a core can quickly access the memory directly attached to it, but it can be a lot slower when it needs memory attached to the other physical CPU. Processor groups are one way around this, stopping threads from jumping from CPU to CPU. The only issue here is that despite having 128 threads, the 3990X is all one CPU!
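That mismatch shows up in the OS's own reporting, because NUMA nodes and processor groups are exposed through separate APIs. The quick sketch below queries both; it is only a diagnostic, and exactly what it prints on a 3990X will depend on the Windows build and BIOS memory settings, but the point is that the group count is driven by thread count rather than by memory topology.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    ULONG highestNode = 0;
    GetNumaHighestNodeNumber(&highestNode);   /* nodes are numbered 0..highest */

    printf("NUMA nodes reported:      %lu\n", highestNode + 1);
    printf("Processor groups created: %u\n",  GetActiveProcessorGroupCount());
    return 0;
}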

In Windows 10 Pro, this becomes a problem. We can look directly at Task Manager:

Here we see all 64 cores and 128 threads under an artificial load. The important number, though, is the socket count. The system thinks we have two sockets, simply because we have a high number of threads. This is a big pain, and the source of a lot of slowdowns in some benchmarks.

(Interestingly enough, Intel’s latest Xeon Phi chips with 72 lightweight cores and 4-way HT for 288 threads show up as five sockets. How’s that for pain!)
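The underlying enumeration APIs do distinguish between packages and groups. As an illustrative sketch, the code below counts the RelationProcessorPackage records returned by GetLogicalProcessorInformationEx, which is the OS's own notion of physical packages; comparing that figure with whatever a given tool chooses to display as "Sockets" shows where the confusion creeps in.

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* First call with a NULL buffer just reports the size we need. */
    DWORD len = 0;
    GetLogicalProcessorInformationEx(RelationProcessorPackage, NULL, &len);

    BYTE *buf = (BYTE *)malloc(len);
    if (!buf || !GetLogicalProcessorInformationEx(RelationProcessorPackage,
            (PSYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX)buf, &len)) {
        free(buf);
        return 1;
    }

    /* Each variable-length record describes one physical package. */
    int packages = 0;
    for (DWORD off = 0; off < len; ) {
        PSYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX rec =
            (PSYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX)(buf + off);
        packages++;
        off += rec->Size;
    }

    printf("Physical processor packages: %d\n", packages);
    free(buf);
    return 0;
}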

Of course, there is a simple solution to avoid all of this – disable simultaneous multithreading. This means we still have 64 cores but now there’s only one processor group.

We still retain most of the chip's performance (as we'll see later in the benchmarks). However, some of it has been lost, and if I only wanted 64 threads, I'd save some money and get the 32-core! There seems to be no easy way around this.

But then we remember that there are different versions of Windows 10.


[Table: core and socket limits by Windows 10 edition. Source: Wikipedia]

At retail, Microsoft sells Windows 10 Home, Windows 10 Pro, and Windows 10 Pro for Workstations, and keys for Windows 10 Enterprise can also be found for sale. Each of these, aside from the usual market-based feature limitations, also has limits on processor counts and sockets. In the table above, we can see that Windows 10 Home is limited to 64 cores (threads), whereas the Pro/Education versions go up to 128, and the Workstation/Enterprise versions to 256. There's also Windows Server.

Now the thing is, Workstation and Enterprise are built with multiple processor groups in mind, whereas Pro is not. This comes through scheduler adjustments that aren't immediately apparent without digging into the finer elements of the design, but we saw significant differences in performance.

In order to see the differences, we did the following comparisons:

  • 3990X with 64 C / 128 T (SMT On), Win10 Pro vs Win10 Ent
  • Win 10 Pro with 3990X, SMT On vs SMT Off

This isn't just a case of the effect SMT has on overall performance; the way the scheduler and the OS make cores available and distribute work is a big factor too.

3D Particle Movement v2.1

In 3DPM, with standard non-expert code, the difference between SMT on and off is 8.6%; moving to Enterprise (with SMT on) brings half of that back.

3D Particle Movement v2.1 (with AVX)

When we move to hand-tuned AVX code, the extra threads can be used, and each thread gets a 2x speed increase. Here the Enterprise version again holds a small lead over Pro.

DigiCortex 1.20 (32k Neuron, 1.8B Synapse)

DigiCortex is a more memory-bound benchmark, and we see here that disabling SMT yields a massive gain, as it frees up CPU-to-memory communication. Enterprise claws back half of that gain while keeping SMT enabled.

Agisoft Photoscan 1.3.3, Complex Test

Photoscan is a variably threaded test, but having SMT disabled gives the better performance, with each thread having more resources on tap. Again, W10 Enterprise splits the difference between SMT off and on.

NAMD 2.13 Molecular Dynamics (ApoA1)

Our biggest difference was in our new NAMD testing. Here the code is AVX2 accelerated, and the difference to watch is with SMT on: going from W10 Pro to W10 Enterprise is a massive 8.3x speed-up. In regular Pro, we noticed that when 128 threads were spawned, they would only sit on 16 actual cores or fewer, with the other cores going unused. In SMT-off mode, we saw more of the cores being used, but the score still ended up around the same as a 3950X. It wasn't until we moved to W10 Enterprise that all the threads were actually being used.

Corona 1.3 Benchmark

On the opposite end of the scale, Corona can actually take advantage of different processor groups. We see an improvement moving from SMT off to SMT on, and then another small jump moving to Enterprise.

Blender 2.79b bmw27_cpu Benchmark

Similarly in our Blender test, having processor groups was no problem, and Enterprise gets a small jump.

POV-Ray 3.7.1 Benchmark

POV-Ray benefits from having SMT disabled, regardless of OS version.

Handbrake 1.1.0 - 1080p60 HEVC 3500 kbps Fast

Handbrake, by contrast, gets a big uplift on Windows 10 Enterprise due to its AVX acceleration.

What’s The Verdict?

From our multi-threaded test data, there can only be two conclusions. One is to disable SMT, which gives performance uplifts in most benchmarks, given that most software doesn't understand what processor groups are. However, if you absolutely have to have SMT enabled, then don't use normal Windows 10 Pro: use Pro for Workstations (or Enterprise) instead. At the end of the day, this is the catch in using hardware that skirts the line of being enterprise-grade: it also skirts the line of triggering enterprise software licensing. Thankfully, workstation software that is outright licensed per core is still almost non-existent, unlike in the server realm.

Ultimately this puts us in a bit of a quandary for our CPU-to-CPU comparisons on the following pages. Normally we run our CPUs on W10 Pro with SMT enabled, but it's clear from these benchmarks that this configuration won't give the best result in multi-threaded scenarios. We may have to look at how we test processors with more than 16 cores in the future, and run them on Windows 10 Enterprise. Over the following pages, we'll include both W10 Pro and W10 Enterprise data for completeness.

Comments

  • velanapontinha - Friday, February 7, 2020 - link

    far too many to mention in a comment
  • andrewaggb - Friday, February 7, 2020 - link

    We use both. Windows Server is fine. Very long support, much longer than Ubuntu LTS, and it's stable.
  • reuthermonkey1 - Friday, February 7, 2020 - link

    I'm a Linux dude through and through, but most companies I've worked for use a lot of Windows for their backend systems. I think it's a bad idea, since dealing with Windows Server adds quite a bit to overall costs and complexity, but the financial folks demand it so they pay for it.
  • FunBunny2 - Friday, February 7, 2020 - link

    "the financial folks demand it so they pay for it."

    because they've been slaves to Office for decades. no other reason.
  • Ratman6161 - Friday, February 7, 2020 - link

    There is cost and then there is cost. What does that mean? Well, I'm in a highly regulated environment where if an auditor saw a system that wasn't under manufacturer's support, that's an automatic fail. So let's all please get the word "free" out of our vocabulary for this discussion. Linux is definitely not free. The companies that supply Linux distros just bill you for it using a very different licensing model than Microsoft does. To find the real costs, you have to figure the total cost of a system including all hardware and software and all costs associated with each. When you do that, a couple of things become obvious. 1) The cost of hardware is relatively trivial when compared with software licensing. 2) The cost of the operating system, regardless of what that OS is, is also relatively trivial, though generally speaking we find that fully supported Linux and Windows end up costing very close to the same. 3) The big costs are for the software that runs on top of the hardware and OS. So saving costs by using Linux is essentially a fantasy.

    RRINKER also had a great point...the vast majority of Windows servers these days are virtual. We tend to have large numbers of small Windows servers dedicated to a particular task. We don't really find this adding complexity.
  • Whiteknight2020 - Friday, February 7, 2020 - link

    Windows Server is way cheaper to administer; configuration with Group Policy, DSC etc. is way easier than messing with Ansible, Puppet etc. TCO is lower as Windows admins are cheaper, SA licensing is on par or cheaper than RHEL/Oracle UK, and Server Core is rock solid. I use and deploy both Linux and Windows, whichever the application runs best or most stably on.
  • dysonlu - Friday, February 7, 2020 - link

    It's cheaper the same way it is cheaper to outsource. The cost is hidden and usually comes later.
  • PeachNCream - Monday, February 10, 2020 - link

    This is the best approach. The underlying OS should be whatever is best suited to the task it is expected to perform within the limits of the costs running it incurs. Of course, figuring out the best balance between costs and performance can be tricky, and a lot of companies do not dedicate the resources to examine options, simply defaulting to something familiar while assuming it is the best choice.
  • dysonlu - Friday, February 7, 2020 - link

    Enterprises use Windows because they are pretty tech-illiterate and need Microsoft support. Nobody will get fired for selecting Microsoft and Windows, even if it'll cost more and your whole IT will be at the mercy of Microsoft.

    But, kickbacks help a lot in the decision making.
  • zmatt - Friday, February 7, 2020 - link

    Many. I would argue most, actually. There are certainly some areas where Linux really shines, but one place where it isn't just behind but completely non-existent is competing with Active Directory. Most offices still use AD domains, and for good reasons, and Linux doesn't have an answer to it.

    We have a few VM clusters that run redundant DCs. It's the only option because Active Directory is unique. It isn't perfect, but nobody offers a competing solution. Someone could develop an open source competitor, but nobody has.
