Favored Core

With Broadwell-E, the previous generation of Intel’s HEDT platform, Intel introduced the concept of a ‘Favored Core’, marketed under the name Turbo Boost Max 3.0. The idea is that every piece of silicon coming off the production line is different (and is binned to match a SKU), but even within a single die the cores have different frequency and voltage characteristics. The core determined to be the best is designated the ‘Favored Core’, and with Intel’s Windows 10 driver and software in place, single-threaded workloads were moved to this favored core to run faster.

In theory it was good – a step above generic Turbo Boost 2.0, offering an extra 100-200 MHz for single-threaded applications. In practice it was flawed: motherboard manufacturers either didn’t support it or shipped with it disabled in the BIOS by default. Users also had to install the driver and software – without all of these pieces in place, the favored core feature didn’t work at all.

Intel is changing the feature for Skylake-X, upgrading it and making it easier to use. The driver and software are now delivered through Windows Update, so users will get them automatically (anyone who doesn’t want the feature has to disable it manually). In addition, instead of a single favored core, Skylake-X designates two. As a result, two applications can run at the higher frequency, or a single application that needs two cores can use both.
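On Linux, per-core performance rankings of this kind are exposed through ACPI CPPC (for example in `/sys/devices/system/cpu/cpu*/acpi_cppc/highest_perf`), and the scheduler prefers the highest-ranked cores. As a rough sketch of the idea, here is how the two favored cores could be picked from such per-core rankings; the sample numbers are fabricated for illustration, not measurements:

```python
# Sketch: pick the two 'favored' cores from per-core performance rankings.
# On Linux, rankings like these are exposed (when supported) at
# /sys/devices/system/cpu/cpu*/acpi_cppc/highest_perf; the values below
# are hypothetical illustration data.

def favored_cores(highest_perf, count=2):
    """Return the `count` core IDs with the highest performance ranking."""
    ranked = sorted(highest_perf, key=highest_perf.get, reverse=True)
    return ranked[:count]

# Hypothetical per-core rankings for a 10-core part:
sample = {0: 36, 1: 36, 2: 38, 3: 36, 4: 37,
          5: 36, 6: 36, 7: 36, 8: 36, 9: 36}
print(favored_cores(sample))  # cores 2 and 4 rank highest
```

In practice the OS pins or steers single-threaded work onto the returned cores rather than merely reporting them.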

Speed Shift

In Skylake-S, the processor was designed such that, with the right commands, the OS can hand control of frequency and voltage back to the processor. Intel calls this technology 'Speed Shift'. We’ve discussed Speed Shift before in our Skylake architecture analysis, and it now comes to Skylake-X. Speed Shift requires operating system support to hand control of processor performance over to the CPU, and Intel had to work with Microsoft to get this functionality enabled in Windows 10.

Compared to Speed Step / P-state transitions, Speed Shift changes the game by having the operating system relinquish some or all control of the P-states, handing it off to the processor. This has a couple of noticeable benefits. First, the processor can ramp frequency up and down much faster than the OS can. Second, the processor has much finer control over its states, allowing it to choose the optimal performance level for a given task and therefore use less energy. Individual frequency transitions drop from 20-30 ms under OS control to around 1 ms under Speed Shift's CPU control, and going from an efficient power state to maximum performance takes around 35 ms, compared to around 100 ms with the legacy implementation. As seen in the images below, neither technology can jump from low to high instantly: to maintain data coherency through frequency/voltage changes, there is an element of gradient as data is realigned.
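Intel documents Speed Shift as Hardware-Controlled Performance states (HWP), and software can detect it through CPUID leaf 06H. As a minimal sketch, here is a decoder for the relevant EAX feature bits given a raw register value; the sample value at the end is fabricated for illustration:

```python
# Decode the HWP (Speed Shift) feature bits from CPUID leaf 06H, EAX.
# Bit positions per Intel's SDM (Thermal and Power Management leaf).
HWP_BITS = {
    7:  "HWP (base Speed Shift support)",
    8:  "HWP notification",
    9:  "HWP activity window",
    10: "HWP energy/performance preference",
    11: "HWP package-level request",
}

def decode_hwp(eax):
    """Return the names of the HWP capabilities set in a leaf-06H EAX value."""
    return [name for bit, name in HWP_BITS.items() if eax & (1 << bit)]

# Hypothetical EAX value with bits 7 and 10 set:
print(decode_hwp((1 << 7) | (1 << 10)))
```

On real hardware the EAX value would come from executing CPUID with EAX=06H (e.g. via a kernel driver or a library that exposes CPUID).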

The ability to ramp up performance quickly increases the overall responsiveness of the system, rather than lingering at lower frequencies while the OS passes commands through a translation layer. Speed Shift cannot increase absolute maximum performance, but on short workloads that require a brief burst of performance, it can make a big difference in how quickly a task gets done. Ultimately, much of what we do falls into this category, such as web browsing or office work. Web browsing, for example, is all about loading the page quickly and then getting the processor back down to idle.

Again, Speed Shift is something that needs to be enabled at every level: CPU, OS, driver, and motherboard BIOS. It has come to light that some motherboard manufacturers are disabling Speed Shift on desktops by default, negating the feature. In the BIOS it is labeled as either Speed Shift or Hardware P-States, and sometimes hides behind nondescript options. Unfortunately, a combination of this and other issues has led to a small problem on X299 motherboards.
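Regardless of what the BIOS calls it, on Linux you can check whether the CPU actually advertises Speed Shift: the kernel reports the HWP feature flags in the "flags" line of /proc/cpuinfo. A small sketch that parses those flags; the sample string below is an abbreviated, hypothetical cpuinfo excerpt:

```python
# Check for Speed Shift (HWP) support by parsing the CPU feature flags
# reported in the "flags" line of /proc/cpuinfo on Linux.

def hwp_flags(cpuinfo_text):
    """Return the hwp-related feature flags found in a cpuinfo excerpt."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return [f for f in line.split(":", 1)[1].split()
                    if f.startswith("hwp")]
    return []

# Abbreviated, hypothetical cpuinfo excerpt:
sample = ("model name : Intel CPU\n"
          "flags : fpu sse2 est hwp hwp_notify hwp_act_window hwp_epp")
print(hwp_flags(sample))
```

If the list comes back empty either the CPU lacks HWP or the BIOS has it disabled; reading the real file is just `hwp_flags(open("/proc/cpuinfo").read())`.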

X299 Motherboards

When we started testing for this review, the main instruction we were given was that when changing between Skylake-X and Kaby Lake-X processors, we should remove AC power and hold the reset BIOS button for 30 seconds. This comes down to an issue with supporting both sets of CPUs at once: Skylake-X features a form of integrated voltage regulator (somewhat like the FIVR on Broadwell), whereas Kaby Lake-X is more motherboard controlled. As a result, some of the voltages going into the CPU, if configured incorrectly, can cause damage. This is where I admit I broke a CPU: our Kaby Lake-X Core i7 died on the test bed. We are told that in the future there should be a way to switch between the two without this issue, but there are other problems as well.

After speaking with a number of journalists in my close circle, it was clear that some of the GPU testing was not reflective of where the processors sat in the product stack. Some results were 25-50% worse than we expected for Skylake-X (Kaby Lake-X seemingly unaffected), scoring disastrously low frame rates. This was worrying.

Speaking with the motherboard manufacturers, it's coming down to a few issues: managing the mesh frequency (and if the mesh frequency has a turbo), controlling turbo modes, and controlling features like Speed Shift. 'Controlling' in this case can mean boosting voltages to support it better, overriding the default behavior for 'performance' which works on some tests but not others, or disabling the feature completely.

We were still getting new BIOSes two days before launch, right when I needed to fly halfway across the world to cover other events. Even after retesting with the latest BIOS for each board, there still seems to be an underlying issue with either the games or the power management involved. This isn't necessarily a code optimization issue for the games themselves: the base microarchitecture of the CPU is the same aside from a slight cache adjustment, so if a Skylake-X chip starts performing below an old Sandy Bridge Core i3, it's not on the game.

We're still waiting on BIOS updates, or an explanation of why this is happening. Some games are affected a lot, others not at all. Any game we test that ends up GPU limited is unaffected, showing that this is a CPU issue.

Comments

  • geekman1024 - Monday, June 19, 2017 - link

    Zen is winning in one department: Price.
  • Lolimaster - Tuesday, June 20, 2017 - link

Ryzen has sick efficiency at lower clocks; that 65 W Ryzen 7 1700 can be undervolted even further, making it a 50 W 3 GHz monster.
  • sir_tech - Monday, June 19, 2017 - link

    Why there are no power consumption charts in the review? Also, you should have gone ahead and post the gaming performance charts also just like Ryzen reviews.

While the MSRPs are high, actual retail prices for Ryzen processors are much lower now:

    Ryzen 7 1800x - $439 (MSRP - $499)
    Ryzen 7 1700x - $349 (MSRP - $399)
    Ryzen 7 1700 - $299 (MSRP - $329)
    Ryzen 5 1600x - $229 (MSRP - $249)
  • Ryan Smith - Monday, June 19, 2017 - link

    "Why there are no power consumption charts in the review?"

Please refresh the conclusion. =)

    "Also, you should have gone ahead and post the gaming performance charts also just like Ryzen reviews."

    The BIOS updates have come so late that we don't even have a complete dataset for the new BIOSes. Ian had just enough time to make sure they were still screwy, and then was on a plane. We're going to need to sit down and completely redo all the Skylake-X chips once the platform stabilizes to the point where our results won't be immediately invalidated.
  • cheshirster - Monday, June 19, 2017 - link

    Your DDR4-2400 tests of 1800X and 1600X are already invalidated.
    And RoTR
    There was no problem with publishing bad gaming results for AMD.
    What's the problem with 2066?
  • Ryan Smith - Monday, June 19, 2017 - link

    If we had a complete, up-to-date dataset to publish, and time to write it up, we would have. If only to showcase why eager gamers should wait for the platform to mature a bit.
  • cheshirster - Monday, June 19, 2017 - link

    Sorry, with this text:
    "Our GTX1080 seems to be hit the hardest out of our four GPUs, as well as Civilization 6, the second Rise of the Tomb Raider test, and Rocket League on all GPUs. As a result, we only posted a minor selection of results, most of which show good parity at 4K"
+ Ryzen's bad full-HD results in RoTR and Rocket League were fully published.

You are going straight to the Hall of Fame of typical brand loyalists.
  • jospoortvliet - Thursday, June 22, 2017 - link

Well, the state of Ryzen wasn't as bad as this, and it's not like it wasn't pointed out in this review.

    Also I am sure other benchmarks were also affected making Intel look worse in benchmark databases thanks to their rush job...
  • bongey - Wednesday, August 2, 2017 - link

    Yep you bashed Ryzen in gaming in your review, quit lying.
    "Gaming Performance, particularly towards 240 Hz gaming, is being questioned,"
  • Gasaraki88 - Monday, June 19, 2017 - link

    Everything is on default, no overclocking.
