Comparing Skylake-S and Skylake-X/SP Performance Clock-for-Clock

If you’ve read through the full review up to this point (and kudos if so), three things should stick in the back of your mind about the new Skylake-SP core: cache, mesh, and AVX-512. These are the three main features that separate the consumer-grade Skylake-S core from this new core, and all three can have an impact on clock-for-clock performance. Even though Skylake-S and Skylake-SP do not compete in the same markets, it is still pertinent to gauge how much the changes affect the regular benchmark suite.

For this test, we took the Skylake-S based Core i5-6600 and the Skylake-SP based Core i9-7900X and ran them both with only four cores, no HyperThreading, and all cores fixed at 3.0 GHz with Turbo disabled. Both CPUs were run in the OS's high-performance mode to minimize time spent in idle states, so it is worth noting that we are not measuring power here. This is raw throughput only.

The two cores differ in supported DRAM frequencies, however: the i5-6600 lists DDR4-2133 as its maximum supported frequency, whereas the i9-7900X will run DDR4-2400 at two DIMMs per channel. I queried a few colleagues as to what I should do here – technically memory support is an extended element of the microarchitecture, and the caches/uncore/untile will be running at different frequencies, so how much of the system support should be chipped away for parity? The general consensus was to test at the supported frequencies, given that this is how the parts ship.

For this analysis, each test was broken down in two ways: what sort of benchmark (single thread, multi-thread, mixed) and what category of benchmark (web, office, encode).


For the single-threaded tests, results were generally positive. Kraken enjoyed the larger L2, and Dolphin emulation saw a good gain as well. The legacy tests did not fare as well: 3DPM v1 suffers from false sharing, which likely takes a hit from the increased L2 latency.
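False sharing occurs when threads write to independent variables that happen to sit on the same cache line, so the line ping-pongs between cores; a higher L2 latency makes each bounce more expensive. The sketch below is not 3DPM's actual code, just an illustration of the usual padding fix, using ctypes struct layouts to make the cache-line placement visible:

```python
import ctypes

CACHE_LINE = 64  # typical x86 cache-line size, in bytes

class SharedCounters(ctypes.Structure):
    """Two per-thread counters packed together: both land on the same
    64-byte cache line, so independent writes from two threads still
    bounce the line between cores (false sharing)."""
    _fields_ = [("a", ctypes.c_longlong),
                ("b", ctypes.c_longlong)]

# Pad each counter out to a full cache line so each thread's
# writes stay on a line of their own.
PAD = CACHE_LINE - ctypes.sizeof(ctypes.c_longlong)

class PaddedCounters(ctypes.Structure):
    """Same two counters, one cache line each: no false sharing."""
    _fields_ = [("a", ctypes.c_longlong), ("pad_a", ctypes.c_byte * PAD),
                ("b", ctypes.c_longlong), ("pad_b", ctypes.c_byte * PAD)]

print(SharedCounters.b.offset)   # 8  -> same cache line as 'a'
print(PaddedCounters.b.offset)   # 64 -> starts on the next cache line
```

The padding trades memory for isolation: each counter now occupies a full 64-byte line, which is the standard fix in threaded hot loops.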

On the multi-threaded tests, the big winner was Corona, a high-performance renderer for Autodesk 3ds Max, showing that the larger L2 does a good job with its code base. The step back was in Handbrake – our testing does not use any AVX-512 code, but the move from the inclusive L3 in Skylake-S to a non-inclusive victim L3 might be at play here.

The mixed results are surprising: these tests vary between single-threaded and multi-threaded phases in their computation, and some are cache sensitive as well. The big outlier is the compile test, indicating that Skylake-SP might not be a great compilation core clock-for-clock. This is a result we can trace back to the L3 again, now a smaller non-inclusive cache. Our results database shows a similar pattern: a Ryzen 7 1700X, an 8-core 95W CPU with 16MB of L3 victim cache, is easily beaten by a Core i7-7700T, a 4-core 35W part with 8MB of inclusive L3 cache.

If we treat each of these tests with equal weighting, the overall result is a +0.5% gain for the new Skylake-SP core, which is within the margin of error. That is nothing for most users to be concerned about (except perhaps people who compile all day), although again, these two cores are not in chips that directly compete. The 10-core SKL-SP chip still does the business on compiling:

Office: Chromium Compile (v56)
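As an aside, the equal-weighting figure above is just a mean of per-test performance ratios. A minimal sketch, using a geometric mean (the usual choice for averaging ratios) and illustrative placeholder numbers rather than our measured data:

```python
from math import prod

# Hypothetical per-test clock-for-clock ratios (SKL-SP / SKL-S).
# These are placeholder values for illustration, not the review's data.
ratios = [1.04, 1.02, 0.97, 1.01, 0.99, 1.00]

def geomean(xs):
    """Equal-weighted geometric mean of a list of performance ratios."""
    return prod(xs) ** (1.0 / len(xs))

overall = geomean(ratios)
print(f"overall gain: {(overall - 1) * 100:+.1f}%")  # overall gain: +0.5%
```

A geometric mean is preferred over an arithmetic mean here because it treats a 2x speedup and a 0.5x slowdown as cancelling out, which is what "equal weighting" of ratios should mean.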

If all these changes (minus AVX-512) offer only a +0.5% gain over the standard Skylake-S core, one question worth asking is: what was the point? The answer, I suspect, involves scaling (moving to chips with more cores), but it is also customer related. Intel’s big money comes from the enterprise, and no doubt some of Intel’s internal metrics (as well as customer requests) point to a sizeable chunk of enterprise compute being L2-size limited. I’ll be looking forward to Johan’s review on the enterprise side when the time comes.

Comments

  • Ian Cutress - Monday, June 19, 2017 - link

    Prime95
  • AnandTechReader2017 - Tuesday, June 20, 2017 - link

    Are you sure the numbers are correct as the i7 6950X on your graph here states less than the 135W on your original review of it under an all-core load.
  • Ian Cutress - Tuesday, June 20, 2017 - link

    We're running a new test suite, different OSes, updated BIOSes, with different metrics/data gathering (might even be a different CPU, as each one is slightly different). There's going to be some differences, unfortunately.
  • gerz1219 - Monday, June 19, 2017 - link

    Power draw isn't relevant in this space. High-end users who work from a home office can write off part of their electric bill as a business expense. Price/performance isn't even that much of an issue for many users in this space for the same reason -- if you're using the machine to earn a living, a faster machine pays for itself after a matter of weeks. The only thing that matters is performance. I don't understand why so many gamers read reviews for non-gamer parts and apply gamer complaints.
  • demMind - Monday, June 19, 2017 - link

    This kind of response keeps popping up and is highly short sighted. Price for performance matters to high end especially if you use it for your livelihood.

    If you go large-scale movie rendering studios will definitely be going with what can soften the blow to a large scale project. This is a fud response.
  • Spunjji - Tuesday, June 20, 2017 - link

    Power efficiency will matter again when Intel leads in it. I've been watching the same see-saw on the graphics side with nVidia: they lead in it now, so now it's the most important factor.

    Marketing works, folks.
  • JKflipflop98 - Thursday, June 22, 2017 - link

    Ah, AMD fanbots. Always with the insane conspiracy theories.
  • AnandTechReader2017 - Tuesday, June 20, 2017 - link

    Power draw is important as well as temps, it will allow you to push to higher clocks and cut costs.
    Say your work had to get 500 of these machines, if you can use a cheaper PSU, cheaper CPU and lower power use, the saving could be quite extreme. We're talking 95W vs 140W, a 50% increase versus the Ryzen. That's quite a bit in the long run.

    I run 4 high-end desktops in my household, so the power draw saving would be quite advantageous for me. It all depends on circumstances; information is king.

    Ian posted that everything is running at stock speeds, each version overclocked with power draw would also be interesting, also the difference different RAM clock speeds make (there was a huge fiasco with people claiming nice performance increases by using higher RAM clocks with the Ryzen CPU, how much is Intel's new line-up influenced? Can we cut costs and spend more on GPU/monitor/keyboard/pretty much anything else?)
  • psychok9 - Sunday, July 23, 2017 - link

    It's scandalous... not one graph about temperature!? I suspect that if it had been an AMD CPU we would have mass hysteria and daily news... >:(
    I'm looking at the i7-7820X and trying to figure out whether I can manage it with an AIO.
  • cknobman - Monday, June 19, 2017 - link

    Nope this CPU is a turd IMO.
    Intel cheaped out on thermal paste again and this chip heats up big time.
    Only 44PCIE lanes, shoddy performance, and a rushed launch.

    Only a sucker would buy now before seeing AMD Threadripper and that is exactly why, and who, Intel released these things so quickly for.
