Favored Core

With Broadwell-E, the last generation of Intel's HEDT platform, Intel introduced the term 'Favored Core', marketed under the title of Turbo Boost Max 3.0. The idea is that each piece of silicon that comes off the production line is different (and is binned to match a SKU), but within a single die the cores themselves will have different frequency and voltage characteristics. The one core determined to be the best is called the 'Favored Core', and when Intel's Windows 10 driver and software were in place, single-threaded workloads were moved to this favored core to run faster.

In theory, it was good: a step above the generic Turbo Boost 2.0, offering an extra 100-200 MHz for single-threaded applications. In practice, it was flawed: motherboard manufacturers didn't support it, or shipped with it disabled in the BIOS by default, and users had to install the driver and software as well. Without all of these pieces in place, the favored core feature didn't work at all.

Intel is upgrading the feature for Skylake-X and making it easier to use. The driver and software are now delivered through Windows Update, so users will get them automatically (anyone who doesn't want the feature has to disable it manually). In addition, instead of a single favored core, Skylake-X has two. As a result, two applications can run at the higher frequency, or one application that needs two cores can use both.
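The scheduling idea behind the feature can be sketched in a few lines. The per-core turbo bins below are hypothetical numbers, not real binning data, and the function is an illustrative model rather than Intel's driver logic: rank the cores by their individual maximum turbo frequency and steer single-threaded work to the top one (Broadwell-E) or two (Skylake-X).

```python
# Illustrative model of favored-core selection (hypothetical numbers).
# Each core on a die bins to a slightly different maximum turbo
# frequency; the driver steers lightly-threaded work to the best cores.

def favored_cores(core_max_mhz, count=2):
    """Return the indices of the `count` highest-binned cores."""
    ranked = sorted(range(len(core_max_mhz)),
                    key=lambda c: core_max_mhz[c], reverse=True)
    return ranked[:count]

# Hypothetical per-core maximum turbo bins for a 10-core die (MHz).
core_max_mhz = [4300, 4300, 4500, 4300, 4400, 4300, 4300, 4500, 4300, 4300]

print(favored_cores(core_max_mhz))  # → [2, 7], the two 4500 MHz cores
```

With `count=1` the same sketch models the Broadwell-E behavior, where only the single best core was favored.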

Speed Shift

In Skylake-S, the processor was designed such that, with the right commands, the OS can hand control of frequency and voltage back to the processor. Intel calls this technology 'Speed Shift'. We've discussed Speed Shift before in our Skylake architecture analysis, and it now comes to Skylake-X. Speed Shift requires operating system support to hand control of processor performance over to the CPU, and Intel had to work with Microsoft to get this functionality enabled in Windows 10.

Compared to Speed Step / P-state transitions, Speed Shift changes the game by having the operating system relinquish some or all control of the P-states and hand it to the processor. This has a couple of noticeable benefits. First, the processor can ramp frequency up and down much faster than the OS can. Second, the processor has much finer control over its states, allowing it to choose the optimal performance level for a given task and therefore use less energy. Individual frequency jumps drop from 20-30 ms under OS control to around 1 ms under Speed Shift's CPU control, and going from an efficient power state to maximum performance takes around 35 ms, compared to around 100 ms with the legacy implementation. As seen in the images below, neither technology can jump from low to high instantly: to maintain data coherency through frequency/voltage changes, there is an element of gradient as data is realigned.
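The arithmetic behind those totals can be sketched with a toy model. The step sizes and decision intervals below are assumptions chosen to match the approximate figures above, not measured values: the controller evaluates at a fixed interval and moves the clock up one step per decision, so faster and finer-grained control reaches maximum frequency sooner.

```python
# Toy model of P-state ramp latency. Step sizes and intervals are
# assumptions picked to line up with the article's approximate figures.

def time_to_max_ms(start_mhz, max_mhz, step_mhz, decision_interval_ms):
    """Total ramp time if each decision raises the clock by one step."""
    steps = -(-(max_mhz - start_mhz) // step_mhz)  # ceiling division
    return steps * decision_interval_ms

# Legacy OS control: ~25 ms per transition, coarse 800 MHz steps.
legacy = time_to_max_ms(800, 4000, 800, 25)        # → 100 ms
# Speed Shift CPU control: ~1 ms per transition, fine 100 MHz steps.
speed_shift = time_to_max_ms(800, 4000, 100, 1)    # → 32 ms

print(legacy, speed_shift)
```

The model reproduces the rough shape of the numbers reported above: around 100 ms to reach maximum performance with legacy OS control versus roughly 35 ms with hardware control, even though the hardware path takes many more (much smaller) steps.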

The ability to ramp up performance quickly increases the overall responsiveness of the system, rather than lingering at lower frequencies while the OS passes commands through a translation layer. Speed Shift cannot increase absolute maximum performance, but on short workloads that require a brief burst of performance, it can make a big difference in how quickly a task gets done. Ultimately, much of what we do falls into this category, such as web browsing or office work. As an example, web browsing is all about getting the page loaded quickly and then getting the processor back down to idle.

Again, Speed Shift needs to be enabled at every level: CPU, OS, driver, and motherboard BIOS. It has come to light that some motherboard manufacturers are disabling Speed Shift on desktops by default, negating the feature. In the BIOS it is labeled either as Speed Shift or Hardware P-States, and sometimes hides behind nondescript options. Unfortunately, a combination of this and other issues has led to a small problem on X299 motherboards.
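On the CPU side, support for Speed Shift (architecturally known as HWP, Hardware P-states) is advertised through CPUID: leaf 06H reports thermal and power-management capabilities, with EAX bit 7 indicating base HWP support and bits 8-11 covering optional HWP features. The decoder below works on a raw EAX value; the example value is made up for illustration, not read from real hardware.

```python
# Decode the HWP capability bits from CPUID leaf 06H, register EAX.
# The example EAX value is hypothetical, not read from a real CPU.

HWP_BITS = {
    7:  "HWP base support",
    8:  "HWP notification",
    9:  "HWP activity window",
    10: "HWP energy/performance preference",
    11: "HWP package-level request",
}

def decode_hwp(eax):
    """Return the set of HWP capabilities advertised in CPUID.06H:EAX."""
    return {name for bit, name in HWP_BITS.items() if eax & (1 << bit)}

example_eax = 0b1111_1000_0000  # hypothetical: bits 7-11 all set
print(sorted(decode_hwp(example_eax)))
```

Note that a CPU advertising HWP is only the first link in the chain: the OS still has to opt in, and as described above, the BIOS can prevent that from ever happening.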

X299 Motherboards

When we started testing for this review, the main instruction we were given was that when changing between Skylake-X and Kaby Lake-X processors, we should remove AC power and hold the reset-BIOS button for 30 seconds. This comes down to an issue with supporting both sets of CPUs at once: Skylake-X features a form of integrated voltage regulator (somewhat like the FIVR on Broadwell), whereas Kaby Lake-X is more motherboard controlled. As a result, some of the voltages going into the CPU, if configured incorrectly, can cause damage. This is where I admit I broke a CPU: our Kaby Lake-X Core i7 died on the test bed. We are told that in the future there should be a way to switch between the two without this issue, but there are other problems as well.

After speaking with a number of journalists in my close circle, it was clear that some of the GPU testing was not reflective of where the processors sat in the product stack. Some results were 25-50% worse than we expected for Skylake-X (Kaby Lake-X seemingly unaffected), scoring disastrously low frame rates. This was worrying.

Speaking with the motherboard manufacturers, it comes down to a few issues: managing the mesh frequency (and whether the mesh frequency has a turbo), controlling turbo modes, and controlling features like Speed Shift. 'Controlling' in this case can mean boosting voltages to better support a feature, overriding the default behavior for 'performance' (which works on some tests but not others), or disabling the feature completely.

We were still getting new BIOSes two days before launch, right when I needed to fly halfway across the world to cover other events. Even after retesting with the latest BIOS for each board, there still seems to be an underlying issue with either the games or the power management involved. This isn't necessarily a code optimization issue for the games themselves: the base microarchitecture of the CPU is the same aside from a slight cache adjustment, so if a Skylake-X starts performing below an old Sandy Bridge Core i3, it's not on the game.

We're still waiting on BIOS updates, or an explanation of why this is the case. Some games are affected a lot, others not at all. Any game we test that ends up being GPU limited is unaffected, showing that this is a CPU issue.


264 Comments


  • FreckledTrout - Monday, June 19, 2017 - link

    Missing the 7820x on the power draw graph.
  • Ian Cutress - Tuesday, June 20, 2017 - link

    The 7820X power numbers didn't look right when we tested it. I'm now on the road for two weeks, so we'll update the numbers when I get back.
  • chrysrobyn - Monday, June 19, 2017 - link

    In my head I'm still doing the math on every benchmark and dividing by watts and seeing Zen looking very different.
  • Old_Fogie_Late_Bloomer - Monday, June 19, 2017 - link

    I'm sure I'm wrong about this, but it makes more sense to me that the i9-7900X would be a (significantly) cut down HCC die instead of a perfect LCC. i9 vs i7, 44 vs 28 lanes, two AVX units instead of one?

    And yet the one source I've found so far says it's the smaller die. It's definitely the LCC die, then?
  • Ian Cutress - Tuesday, June 20, 2017 - link

    HCC isn't ready, basically. LCC is. Plus, having a 10C LCC die and not posting a top SKU would be wasteful of the smallest die of the set.

    Also, delidding a 10C SKU.
  • Old_Fogie_Late_Bloomer - Tuesday, June 20, 2017 - link

    Well, it wouldn't be a waste if Intel's yields weren't good enough to get fully functional dies. The fact that Intel is not just releasing fully functional LCC chips but announced that they would be the first ones available suggests that they have no trouble reliably producing them, which is pretty impressive (though they have had plenty of practice on this process by now).

    Thanks for the response; I thoroughly enjoyed the review and look forward to further coverage. Exciting times!
  • Despoiler - Monday, June 19, 2017 - link

    Considering Ryzen is in the desktop category and these Intel chips are HEDT, we need to wait to see what Threadripper brings. AMD won't have the clock advantage, but for multithreaded workloads I suspect they will have more cores at a cheaper price than Intel.
  • FreckledTrout - Monday, June 19, 2017 - link

    I wouldn't say AMD won't have a clock advantage once you get to the 14 and 16 core chips. They might not, but you saw the power numbers and thermals; Intel may well have to pull back the frequency as they scale up the cores more than AMD will.
  • FMinus - Thursday, June 22, 2017 - link

    Actually I think it's the other way around. AMD might have a clock advantage on higher core-count models thanks to not going with the monolithic approach. It's easier to cool those beasts, but power is still an issue.

    If you imagine four 1800Xs on one interposer, you can see them reaching 4 GHz on all of those dies. That said, the power consumption would be massive, but it would still be easier to cool than the Intel 16-core variant.
  • Lolimaster - Tuesday, June 20, 2017 - link

    The 1995X will have a stock 3.6 GHz for 16 cores, same as the 7900X with just 10.
