Fundamental Windows 10 Issues: Priority and Focus

Normally, software running on a computer expects all cores to be equal: any thread can be scheduled anywhere and see the same performance. As we’ve already discussed, the new Alder Lake design of performance cores and efficiency cores means that not everything is equal, and the system has to know where to put each workload for maximum effect.

To this end, Intel created Thread Director, which acts as the ultimate information depot for what is happening on the CPU. It knows which threads are where, what each of the cores can do, how compute-heavy or memory-heavy each thread is, and where the thermal hot spots and voltages sit. With that information, it sends data to the operating system about how the threads are behaving, with suggestions of actions to perform, or of which threads can be promoted or demoted when something new comes in. The operating system scheduler is then the ringmaster: it combines the Thread Director information with what it knows about the user – which software is in the foreground, which threads are tagged as low priority – and it is the operating system that actually orchestrates the whole process.
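As a rough sketch of that division of labour – a toy model only, not Intel’s actual algorithm, with every name and rule below invented purely for illustration – the hardware classifies, and the OS decides:

```python
# Toy model of the hybrid scheduling split described above.
# NOT Intel's algorithm: all names and rules here are invented
# purely to show which side holds which information.
from dataclasses import dataclass

@dataclass
class ToyThread:
    compute_heavy: bool    # heavy vector/integer work
    memory_bound: bool     # mostly waiting on memory
    in_foreground: bool    # window has user focus
    priority: int          # Windows base priority, 0-31 (Normal = 8)

def thread_director_hint(t: ToyThread) -> str:
    """Hardware side: classify the thread by what it is doing."""
    if t.compute_heavy:
        return "prefer_p_core"
    if t.memory_bound:
        return "e_core_is_fine"
    return "no_preference"

def os_schedule(t: ToyThread, hint: str) -> str:
    """OS side: combine the hardware hint with user context."""
    if t.priority <= 7:          # below-normal priority (see later)
        return "e_core"
    if not t.in_foreground:      # focus-based demotion (see later)
        return "e_core"
    return "p_core" if hint == "prefer_p_core" else "any_core"

# Example: a heavy render thread whose window is not in focus
render = ToyThread(True, False, in_foreground=False, priority=8)
print(os_schedule(render, thread_director_hint(render)))  # -> e_core
```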

Intel has said that Windows 11 does all of this. The only thing Windows 10 lacks is insight into the efficiency of the cores on the CPU. It assumes the efficiency is equal but the performance differs – so instead of ‘performance vs efficiency’ cores, Windows 10 sees it more as ‘high performance vs low performance’. Intel says the net result of this will be seen only in run-to-run variation: there’s more of a chance of a thread spending some time on the low-performance cores before being moved to the high-performance ones, so anyone benchmarking multiple runs will see more variation on Windows 10 than on Windows 11. Ultimately, though, the peak performance should be identical.
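That claim is straightforward to test yourself. Below is a minimal sketch for timing repeated runs; my_benchmark.exe is a placeholder for whatever workload you want to measure, and on Windows 10 you would expect a larger relative spread than on Windows 11:

```python
# Minimal sketch: quantify run-to-run variation of a benchmark.
# "my_benchmark.exe" is a placeholder for the workload under test.
import statistics
import subprocess
import time

runs = []
for _ in range(10):
    start = time.perf_counter()
    subprocess.run(["my_benchmark.exe"], check=True)
    runs.append(time.perf_counter() - start)

mean = statistics.mean(runs)
stdev = statistics.stdev(runs)
# A thread that starts life on the E-cores shows up here as a
# larger relative spread (stdev/mean) across the ten runs.
print(f"mean {mean:.2f}s, stdev {stdev:.2f}s ({100*stdev/mean:.1f}%)")
```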

However, there are a couple of flaws.

At Intel’s Innovation event last week, we learned that the operating system will de-emphasise any workload that is not in user focus. For an office workload, or a mobile workload, this makes sense – if you’re in Excel, for example, you want Excel on the performance cores, while those 60 Chrome tabs you have open are all treated as background tasks for the efficiency cores. The same goes for email, Netflix, or video games – what you are using there and then matters most, and everything else doesn’t really need the CPU.

However, this breaks down when it comes to more professional workflows. Intel gave the example of a content creator exporting a video, then going off to edit some images while the export runs. This puts the video export on the efficiency cores, while the image editor gets the performance cores. In my experience, the limiting factor in that scenario is the video export, not the image editor – what should take one unit of time on the P-cores now suddenly takes 2-3x as long on the E-cores while I’m doing something else. This extends to anyone who multi-tasks during a heavy workload, such as programmers waiting for the latest compile to finish. Under this philosophy, the user would have to keep the important window in focus at all times. Beyond this, any software that spawns heavy compute threads in the background, without the potential for focus, would also be placed on the E-cores.
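One quick way to see where Windows actually put a background export is to watch the per-core load split. A sketch using the third-party psutil package; the core numbering is an assumption based on the i9-12900K, where logical CPUs 0-15 are the P-core threads and 16-23 are the E-cores, so check your own topology first:

```python
# Sketch: watch whether a background workload has been pushed onto the
# E-cores. Needs the third-party psutil package (pip install psutil).
# Core numbering assumed for an i9-12900K: logical CPUs 0-15 are the
# eight P-cores (two threads each), 16-23 are the eight E-cores.
import psutil

P_CORES = range(0, 16)
E_CORES = range(16, 24)

for _ in range(30):  # sample once per second for ~30 seconds
    per_cpu = psutil.cpu_percent(interval=1.0, percpu=True)
    p_load = sum(per_cpu[i] for i in P_CORES) / len(P_CORES)
    e_load = sum(per_cpu[i] for i in E_CORES) / len(E_CORES)
    print(f"P-cores {p_load:5.1f}%  |  E-cores {e_load:5.1f}%")
```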

Personally, I think this is a crazy way to do things, especially on a desktop. Intel tells me there are three ways to stop this behaviour:

  1. Running dual monitors stops it.
  2. Changing the Windows power plan from Balanced to High Performance stops it (this can also be scripted – see the sketch after this list).
  3. Enabling an option in the BIOS allows Scroll Lock to disable/park the E-cores, meaning nothing will be scheduled on them while Scroll Lock is active.

(For those interested: because Alder Lake confuses some DRM packages like Denuvo, #3 can also be used in that instance to play older games.)
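For what it’s worth, workaround #2 is scriptable via Windows’ built-in powercfg tool. A minimal sketch, assuming the standard scheme_min alias for the High Performance plan – run powercfg /list to confirm which plans exist on your machine:

```python
# Sketch: switch the Windows power plan to High Performance from a
# script, using the built-in powercfg tool. "scheme_min" is the
# standard alias for the High Performance plan.
import subprocess

subprocess.run(["powercfg", "/setactive", "scheme_min"], check=True)
result = subprocess.run(["powercfg", "/getactivescheme"],
                        capture_output=True, text=True)
print(result.stdout)  # should now report the High Performance plan
```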

Users who only have one window open at a time, or who aren’t relying on a serious all-core, time-critical workload, won’t really be affected. For anyone else, it’s a bit of a problem. And the problems don’t stop there, at least for Windows 10.

Knowing my luck, by the time this review goes out it might be fixed, but:

Windows 10 also uses a thread’s in-OS priority as a guide for core scheduling. Any user who has played around with Task Manager will know there is an option to give a program a priority: Realtime, High, Above Normal, Normal, Below Normal, or Idle. The default is Normal. Behind the scenes this is actually a number from 0 to 31, where Normal is 8.
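Those priority classes can also be read and set from code. A minimal sketch with the third-party psutil package, using the current process as a harmless demo:

```python
# Sketch: read and set a process's priority class with the third-party
# psutil package. These are the same classes Task Manager shows; the
# 0-31 base priority is derived from them by Windows.
import psutil

proc = psutil.Process()  # the current process, as a harmless demo
print(proc.nice())       # on Windows: a priority class constant

# Drop to Below Normal, the level some heavy workloads self-select,
# which Windows 10 then steers onto the E-cores.
proc.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)

# And restore it to Normal.
proc.nice(psutil.NORMAL_PRIORITY_CLASS)
```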

Some software will naturally give itself a lower priority, usually a 7 (Below Normal), as an indication to the operating system of either ‘I’m not important’ or ‘I’m a heavy workload and I want the user to still have a responsive system’. That second case is a problem on Windows 10 with Alder Lake, because the OS will schedule the workload onto the E-cores. So even though it is a heavy workload, moving it to the E-cores slows it down compared to simply running across all cores at a lower priority. This happens regardless of whether the program is in focus or not.

Of the normal benchmarks we run, this issue flared up mainly with rendering tasks like Cinebench, Corona, and POV-Ray, but it also happened with y-cruncher and KeyShot (a visualization tool). In speaking to others, it appears that Chrome sometimes has a similar issue. The only way to fix these programs was to go into Task Manager and either (a) change the priority to Normal or higher, or (b) change the affinity to the P-cores only. Software such as Process Lasso can be used to make sure that every time these programs are loaded, the priority is bumped up to Normal.
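Both fixes are scriptable along the same lines as Process Lasso. A sketch with psutil, where the process names are examples only and the P-core range again assumes the 12900K’s enumeration:

```python
# Sketch: automate both workarounds with psutil. The process names are
# examples only, and the P-core range assumes the 12900K's enumeration
# (logical CPUs 0-15 = P-core threads, 16-23 = E-cores). Changing other
# users' processes may require running the script with admin rights.
import psutil

AFFECTED = {"cinebench.exe", "povray.exe", "corona.exe"}  # examples
P_CORE_CPUS = list(range(16))

for proc in psutil.process_iter(["name"]):
    if (proc.info["name"] or "").lower() in AFFECTED:
        proc.nice(psutil.NORMAL_PRIORITY_CLASS)  # (a) priority to Normal
        proc.cpu_affinity(P_CORE_CPUS)           # (b) pin to the P-cores
```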

Comments (474)

  • Kvaern1 - Sunday, November 7, 2021

    Because there are no games which are 'incompatible' with ADL.
  • eastcoast_pete - Sunday, November 7, 2021

    While AL is an interesting CPU (regardless of one's preference), I still think the star of AL is the Gracemont core (the E-cores), and I did some very simple-minded, back-of-a-napkin calculations. The top AL has 8 P-cores with multithreading = 16 threads, plus 8 E-core threads (no multithreading here), for a total of 24 threads. According to first die shots, one P-core requires the same die area as 4 E-cores. That leaves me wanting an all-E-core CPU with the same die size as the i9 AL, because that could fit 8x4 = 32, plus the existing 8 Gracemonts, for a total of 40. And the old problem of "Atoms can't do AVX and AVX2" is solved - because now they can! Yes, single-thread performance would be significantly lower, but any workload that can take advantage of many threads should be at least as fast as on the i9. Does anyone here know if Intel is considering that? It wouldn't be the choice for gaming, but for productivity it might give both the i9 and, possibly, the 5950X a run for their money.
  • mode_13h - Monday, November 8, 2021

    They currently make Atom-branded embedded server CPUs with up to 24 cores. This one launched last year, using Tremont cores:

    https://ark.intel.com/content/www/us/en/ark/produc...

    I think you can expect to see a Gracemont-based refresh, possibly with some new product lines expanding into non-embedded markets.
  • eastcoast_pete - Monday, November 8, 2021

    Yes, those Tremont-based CPUs are intended for and sold into 5G cell stations; I hope Intel doesn't just refresh those with Gracemont, but makes a 32-40 Gracemont-core CPU available for workstations and servers. The one thing that might prevent that is Intel's fear of cannibalizing their Sapphire Rapids sales. However, if I were in their shoes, I'd worry more about upcoming AMD and multi-core ARM server chips, and sell all the CPUs they can.
  • mode_13h - Tuesday, November 9, 2021

    Well, it's a start that Intel is already using these cores in *some* kind of server CPU, no? That suggests they already should have some server-grade RAS features built-in. So, it should be a fairly small step to use them in a high core count CPU to counter the Gravitons and Altras. I think they will, since it should be more competitive in terms of perf/W.

    As for workstations, I think you'll need to find a workstation board with a server CPU socket. I doubt they'll be pushing massive E-core-only CPUs specifically for workstations, since workstation users also tend to care about single-thread performance.
  • anemusek - Sunday, November 7, 2021

    Sorry, but performance isn't everything - being within a few percent in the real world will not restore confidence. Critical flaws, disabled functionality (DX12 on Haswell, for example), unstable instruction features, etc.
    I cannot afford to trust such a company.
  • Dolda2000 - Sunday, November 7, 2021

    I just wanted to add a big Kudos for this article. AnandTech's coverage of the 12900K was by a wide margin the best of any I read or watched, with regards to coverage of the various variables involved, and with the breadth and depth of testing. Thanks for keeping it up!
  • chantzeleong - Monday, November 8, 2021

    I run Power BI and TensorFlow with large datasets. Which Intel CPU do you recommend, and why?
  • mode_13h - Tuesday, November 9, 2021

    I don't know about Power BI, but TensorFlow should run best on GPUs. Which CPU to get then depends on how many GPUs you're going to use. If >= 3, then Threadripper. Otherwise, go for Alder Lake or the Ryzen 5000 series.

    You'll probably find the best advice among user communities for those specific apps.
  • velanapontinha - Monday, November 8, 2021

    We've seen this before. It is time to short AMD, unfortunately.
