Fundamental Windows 10 Issues: Priority and Focus

In a typical system, software runs on the assumption that all cores are equal: any thread can be scheduled anywhere and expect the same performance. As we've already discussed, Alder Lake's split into performance cores and efficiency cores means that this is no longer true, and the system has to know where to place each workload for maximum effect.

To this end, Intel created Thread Director, which acts as the ultimate information depot for what is happening on the CPU. It knows which threads are where, what each core can do, how compute-heavy or memory-heavy each thread is, and how the thermal hot spots and voltages factor in. With that information, it sends data to the operating system about how the threads are behaving, along with suggested actions, such as which threads should be promoted or demoted when new work arrives. The operating system scheduler is then the ringmaster: it combines the Thread Director data with what it knows about the user, such as which software is in the foreground and which threads are tagged as low priority, and it is ultimately the operating system that orchestrates the whole process.

Intel has said that Windows 11 does all of this. The only thing Windows 10 lacks is insight into the efficiency of the cores: it assumes all cores are equally efficient but differ in performance, so instead of 'performance vs efficiency' cores, Windows 10 sees 'high performance vs low performance' cores. Intel says the net result will be seen only in run-to-run variation: there is a greater chance of a thread spending some time on the low-performance cores before being moved to the high-performance ones, so anyone benchmarking multiple runs will see more variation on Windows 10 than on Windows 11. Ultimately, though, peak performance should be identical.

However, there are a couple of flaws.

At Intel’s Innovation event last week, we learned that the operating system will de-emphasise any workload that is not in user focus. For an office workload, or a mobile workload, this makes sense: if you’re in Excel, for example, you want Excel on the performance cores, while the 60 Chrome tabs you have open are all treated as background tasks for the efficiency cores. The same goes for email, Netflix, or video games: what you are using right now matters most, and everything else doesn’t really need the CPU.

However, this breaks down for more professional workflows. Intel gave the example of a content creator who starts a video export and then, while it processes, switches to editing some images. This puts the video export on the efficiency cores while the image editor gets the performance cores. In my experience, the limiting factor in that scenario is the video export, not the image editor: what should take one unit of time on the P-cores suddenly takes 2-3x as long on the E-cores while I’m doing something else. The same applies to anyone who multi-tasks during a heavy workload, such as a programmer waiting for the latest compile. Under this philosophy, the user would have to keep the important window in focus at all times. Beyond this, any software that spawns heavy compute threads in the background, with no window to bring into focus, would also be placed on the E-cores.
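As a rough illustration of the cost, here is a toy model of that content-creator scenario. The core counts mirror an 8P+8E part, and the 0.4x relative E-core throughput is an assumption chosen to line up with the 2-3x slowdown described above, not a measured figure for any real chip.

```python
# Toy model: how long a fixed-size video export takes depending on
# which cores the scheduler assigns it to. All numbers here are
# illustrative assumptions, not measurements.

def export_time(work_units, n_cores, per_core_throughput):
    """Time to finish `work_units` of perfectly parallel work."""
    return work_units / (n_cores * per_core_throughput)

P_CORES, E_CORES = 8, 8        # assumed 8P+8E topology
P_RATE, E_RATE = 1.0, 0.4      # assumed relative per-core throughput

work = 800.0  # arbitrary units of export work

in_focus = export_time(work, P_CORES, P_RATE)       # export kept in focus
demoted = export_time(work, E_CORES, E_RATE)        # export pushed to E-cores

print(f"in focus: {in_focus:.0f}  demoted: {demoted:.0f}  "
      f"slowdown: {demoted / in_focus:.1f}x")
```

Under these assumed numbers, losing focus turns a 100-unit export into a 250-unit one, squarely in the 2-3x range the example describes.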

Personally, I think this is a crazy way to do things, especially on a desktop. Intel tells me there are three ways to stop this behaviour:

  1. Running dual monitors stops it
  2. Changing Windows Power Plan from Balanced to High Performance stops it
  3. Enabling a BIOS option that lets Scroll Lock disable/park the E-cores, meaning nothing will be scheduled on them while Scroll Lock is active

(For those interested: Alder Lake confuses some DRM packages such as Denuvo, and #3 can also be used in that instance to play older games.)

Users who only have one window open at a time, or who aren’t relying on any serious all-core time-critical workload, won’t really be affected. For anyone else, it’s a bit of a problem. And the problems don’t stop there, at least for Windows 10.

Knowing my luck, this might be fixed by the time this review goes out, but:

Windows 10 also uses a thread's in-OS priority as a guide for core scheduling. Any user who has played around with Task Manager will have seen the option to give a program a priority: Realtime, High, Above Normal, Normal, Below Normal, or Idle. The default is Normal. Behind the scenes this is actually a number from 0 to 31, where Normal is 8.
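For reference, those six Task Manager labels map onto Windows process priority classes, each with a documented base priority on that 0-31 scale. A minimal lookup, with a helper reflecting the behaviour described here (the helper name and threshold logic are mine, not a Windows API):

```python
# Windows process priority classes and their documented base
# priorities on the 0-31 scale the scheduler uses (Normal = 8).
PRIORITY_CLASSES = {
    "Realtime":     24,
    "High":         13,
    "Above Normal": 10,
    "Normal":        8,
    "Below Normal":  6,
    "Idle":          4,
}

def is_background_hint(base_priority):
    """Anything below Normal (8) is the kind of self-demotion that
    Windows 10 on Alder Lake treats as a cue for the E-cores."""
    return base_priority < PRIORITY_CLASSES["Normal"]

print(is_background_hint(7))  # a priority-7 thread -> True
```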

Some software will voluntarily give itself a lower priority, usually 7 (just below Normal), as a signal to the operating system of either ‘I’m not important’ or ‘I’m a heavy workload and I want the user to still have a responsive system’. That second case is a problem on Windows 10, because on Alder Lake it will schedule the workload onto the E-cores. So even though it is a heavy workload, moving it to the E-cores slows it down compared to simply running across all cores at a lower priority. This happens regardless of whether the program is in focus or not.

Of the normal benchmarks we run, this issue flared up mainly in rendering tasks such as CineBench, Corona, and POV-Ray, but it also happened with yCruncher and KeyShot (a visualization tool). From speaking to others, it appears Chrome sometimes has a similar issue. The only way to fix these programs was to go into Task Manager and either (a) raise the priority to Normal or higher, or (b) set the affinity to only the P-cores. Software such as Process Lasso can be used to make sure that every time these programs are loaded, the priority is bumped up to Normal.
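Option (b) amounts to applying a CPU affinity mask covering only the P-core logical processors. A minimal sketch, assuming a 12900K-style enumeration in which the 8 hyperthreaded P-cores appear as logical CPUs 0-15 and the 8 E-cores as CPUs 16-23; that ordering is the common case, but worth verifying on your own system:

```python
# Build an affinity bitmask selecting only the P-core logical CPUs.
# Assumes P-cores (with hyperthreading) enumerate as logical CPUs
# 0..15 and E-cores as 16..23 on an 8P+8E part -- check your machine.

P_CORE_CPUS = range(0, 16)   # assumed logical CPU indices of P-cores

def affinity_mask(cpus):
    """Fold logical CPU indices into a bitmask, the format tools like
    Process Lasso and the Windows affinity APIs work with."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

print(hex(affinity_mask(P_CORE_CPUS)))  # 0xffff -> low 16 CPUs only
```

On Windows, a mask like this could be applied with the `SetProcessAffinityMask` API, or the equivalent index list passed to `psutil.Process.cpu_affinity()`; either takes effect without restarting the program.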

Comments

  • ButIDontWantAUsername - Wednesday, November 10, 2021 - link

    How's that validation with Denuvo going? Nothing like upgrading to Intel and having your games suddenly start crashing.
  • Iketh - Tuesday, November 30, 2021 - link

    please, no more comments from you
  • tuxRoller - Friday, November 5, 2021 - link

    Most desktops at enterprise companies could be replaced with terminals given that most of the people are really just performing data entry & retrieval. The network is the bit doing the work.
    For people who need old school workstations, then I agree, but that's a damn small (but high margin) market.
  • blanarahul - Thursday, November 4, 2021 - link

    Alder Lake is extremely efficient when gaming -

    Scroll down and you'll find a graph detailing total gaming power consumption (CPU + GPU) and CPU power consumed per fps. In both metrics, Alder Lake is doing better than Zen 3 and much better than Rocket Lake.

    PC World's review conveys that while the 12900K goes volcanic in Cinebench, it sips power in a real-world workload.

    It seems like Alder Lake for desktop has been clocked way beyond its performance/watt sweet spot. It should be very interesting to compare Alder Lake for laptops v/s Zen 3 for laptops.
  • blanarahul - Thursday, November 4, 2021 - link

    To give a short summary for (only) CPU power consumption v/s FPS when playing Horizon Zero Dawn

    11900K consumes 100 watts for 143 fps
    5950X consumes 95 watts for 145 fps
    5800X consumes 59 watts for 144 fps
    12900K consumes 52 watts for 146 fps
    12700K consumes 43 (!) watts for 145 fps

    Intel is very, very competitive with AMD. Considering that the 12700K has fewer E-cores and consumes less power, I am very curious how it would do with all E-cores disabled, running only on P-cores.
  • Netmsm - Thursday, November 4, 2021 - link

    Sounds like there is only the gaming world!
    In PCs it may not be considered an egregious blunder. You're right that Intel is now competitive with AMD, but if and only if we wink at Intel's guzzling power.

    Some examples from Tom's benches:
    12900k DDR5 consumes 197 watts whereas 5950x consumes 103 watts.

    12900k DDR5 consumes 224 watts whereas 5950x consumes 124 watts.

    blender bmw27
    12900k DDR5 consumes 205 watts whereas 5950x consumes 125 watts.

    Will you calculate power efficiency, please?
  • geoxile - Thursday, November 4, 2021 - link

    My 5950X uses 130-140W in y-cruncher. And @TweakPC on Twitter tested a lower PL1 and found the 12900K was only around 5% slower at 150W than at 218W. Alder Lake being power hungry is only because Intel is pushing 8 P-cores and 8 E-cores (collectively equal to around 4 P-cores, according to Intel) to the limit, to compete against 16 Zen 3 cores. You can argue that it's still not as good as the 5950X, but efficiency in this case is purely a question of how much power Intel allows by default.
  • flyingpants265 - Thursday, November 4, 2021 - link

    Because they need all that extra power to increase their performance a tiny bit. They're not just doing it for fun.
  • Netmsm - Saturday, November 6, 2021 - link

    Exactly 👍
  • Netmsm - Thursday, November 4, 2021 - link

    Even Ian has "accidentally" forgotten to put the nominal TDP for the 12900K in the results =))
    All CPUs in "CPU Benchmark Performance: Intel vs AMD" are listed with their nominal TDP except the 12900K.
    It sounds like there are some recommendations! How venal!
