CPU Performance: SPEC2006 at 2.2 GHz

Aside from power, the other question is whether the Cannon Lake microarchitecture is an efficient design. For most code paths it retains the same core design elements as Skylake and Kaby Lake, with additional optimizations for certain instructions, as we detailed earlier in this review. In order to make a direct IPC comparison, we are running SPEC2006 Speed on both of our comparison points at a fixed frequency of 2.2 GHz.

Getting a fixed frequency on our chips required adjusting the relevant registers to disable the turbo modes. There is no setting in the BIOS to do this, but thankfully the folks at AIDA64 have a tool for it, and it works great. Choosing two processors that both have a base frequency of 2.2 GHz makes this a lot easier.
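For readers who want to replicate a fixed-frequency setup without the AIDA64 utility, turbo can also be toggled on Linux through bit 38 of the IA32_MISC_ENABLE MSR (0x1A0). A minimal sketch, assuming root and the `msr` kernel module; this is an illustrative equivalent, not the tool used in this review:

```python
# Hedged sketch: pin CPU frequency to base clock by disabling Turbo Boost
# from Linux, via bit 38 of the IA32_MISC_ENABLE MSR (0x1A0).
import glob
import os
import struct

MSR_IA32_MISC_ENABLE = 0x1A0   # architectural MSR (Intel SDM Vol. 4)
TURBO_DISABLE_BIT = 1 << 38    # "IDA/Turbo disable" bit

def set_turbo(enabled: bool) -> None:
    """Set Turbo Boost on every logical CPU (needs root and `modprobe msr`)."""
    for path in glob.glob("/dev/cpu/*/msr"):
        fd = os.open(path, os.O_RDWR)
        try:
            # Read the current 64-bit MSR value at the register offset.
            (val,) = struct.unpack("<Q", os.pread(fd, 8, MSR_IA32_MISC_ENABLE))
            if enabled:
                val &= ~TURBO_DISABLE_BIT   # clear bit 38: turbo allowed
            else:
                val |= TURBO_DISABLE_BIT    # set bit 38: turbo disabled
            os.pwrite(fd, struct.pack("<Q", val), MSR_IA32_MISC_ENABLE)
        finally:
            os.close(fd)
```

On Windows, tools such as the AIDA64 utility mentioned above perform the equivalent register write.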

SPEC2006 is a series of industry standard tests designed to help differentiate performance levels between different architectures, microarchitectures, and compilers. All officially submitted results from OEMs and manufacturers are posted online for comparison, and many vendors try to get the best results. From our perspective, these workloads are very well known, which makes them a good benchmark for IPC analysis.

Credit for arranging the benchmarks goes completely to our resident Senior Mobile Editor, Andrei Frumusanu, who developed a suitable harness and framework to generate the relevant binaries for both mobile and PC. On PC, we run SPEC2006 through the Windows Subsystem for Linux – we still need to test for overhead (we’ll do it with SPEC2017 when Andrei is ready), but for the purposes of today's test, comparing like for like with both processors under WSL is valid. Andrei compiled SPEC2006 with Clang 8, targeting AVX2 instructions. We run SPEC2006 Speed, which runs one copy of each test on one thread: all of the integer tests, as well as the C++-based floating point tests.
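For reference, a SPEC CPU2006 build of this kind is driven by a config file passed to the SPEC tools. A minimal sketch of what such a config might look like – the exact flags Andrei used were not published, so these lines are assumptions for illustration only:

```
# Illustrative SPEC CPU2006 config fragment (assumed flags, not the
# review's actual configuration)
ext    = avx2-clang8
tune   = base
CC     = clang -m64
CXX    = clang++ -m64
COPTIMIZE   = -O3 -mavx2
CXXOPTIMIZE = -O3 -mavx2
```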

Here are our results:

SPEC2006 Speed (Estimated Results)*

Test                Intel Core i3-8121U   Intel Core i3-8130U
                    (10nm Cannon Lake)    (14nm Kaby Lake)

Integer Workloads
400.perlbench       24.8                  26.1
401.bzip2           16.6                  16.8
403.gcc             27.6                  27.3
429.mcf             25.9                  28.4
445.gobmk           19.0                  19.1
456.hmmer           23.5                  23.1
458.sjeng           22.2                  22.4
462.libquantum      70.5                  75.4
464.h264ref         39.7                  37.2
471.omnetpp         17.5                  18.2
473.astar           14.2                  14.1
483.xalancbmk       27.1                  28.4

Floating Point Workloads
433.milc            24.6                  23.8
444.namd            23.0                  23.0
450.soplex          39.1                  37.3
453.povray          34.1                  33.5
470.lbm             59.9                  68.4
482.sphinx3         43.2                  44.2

* SPEC rules dictate that any results not verified on the SPEC website must be labeled 'estimated results'.

By and large, we see parity between the two processors on almost all of the tests. The Kaby Lake processor holds a small advantage in libquantum and lbm, both SIMD-heavy workloads, which could be down to the memory latency difference shown on the previous page.
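To put a number on "parity", the per-test ratios and their geometric mean can be computed directly from the estimated scores in the table above. A quick sketch, using only the data from this page:

```python
# Per-test ratio (Cannon Lake i3-8121U / Kaby Lake i3-8130U) and the
# geometric mean, from the estimated SPEC2006 Speed scores above.
from math import prod

scores = {  # test: (i3-8121U, i3-8130U)
    "400.perlbench": (24.8, 26.1), "401.bzip2":      (16.6, 16.8),
    "403.gcc":       (27.6, 27.3), "429.mcf":        (25.9, 28.4),
    "445.gobmk":     (19.0, 19.1), "456.hmmer":      (23.5, 23.1),
    "458.sjeng":     (22.2, 22.4), "462.libquantum": (70.5, 75.4),
    "464.h264ref":   (39.7, 37.2), "471.omnetpp":    (17.5, 18.2),
    "473.astar":     (14.2, 14.1), "483.xalancbmk":  (27.1, 28.4),
    "433.milc":      (24.6, 23.8), "444.namd":       (23.0, 23.0),
    "450.soplex":    (39.1, 37.3), "453.povray":     (34.1, 33.5),
    "470.lbm":       (59.9, 68.4), "482.sphinx3":    (43.2, 44.2),
}

ratios = {test: cnl / kbl for test, (cnl, kbl) in scores.items()}
geomean = prod(ratios.values()) ** (1 / len(ratios))

# Print tests sorted from Cannon Lake's worst to best showing.
for test, r in sorted(ratios.items(), key=lambda kv: kv[1]):
    print(f"{test:<16} {r:.3f}")
print(f"geomean          {geomean:.3f}")
```

The two outliers at the bottom of that sorted list are indeed lbm and libquantum; everything else sits within a few percent of 1.0.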

Comments

  • eva02langley - Sunday, January 27, 2019 - link

    Even better...

    https://youtu.be/osSMJRyxG0k?t=1220
  • AntonErtl - Sunday, January 27, 2019 - link

    Great Article! The title is a bit misleading given that it is much more than just a review. I found the historical perspective of the Intel processes most interesting: Other reporting often just reports on whatever comes out of the PR department of some company, and leaves the readers to compare for themselves with other reports; better reporting highlights some of the contradictions; but rarely do we see such a comprehensive overview.

    The 8121U would be interesting to me to allow playing with AVX512, but the NUC is too expensive for me for that purpose, and I can wait until AMD or Intel provide it in a package with better value for money.
  • RamIt - Sunday, January 27, 2019 - link

    Need gaming benches. This would make a great cs:s laptop for my daughter to game with me on.
  • Byte - Monday, January 28, 2019 - link

    Cannonlake, 2019's Broadwell.
  • f4tali - Monday, January 28, 2019 - link

    I can't believe I read this whole review from start to finish...
    And all the comments...
    And let it sink in for over 24hrs...

    But somehow my main takeaway is that 10nm is Intel's biggest graphics snafu yet.

    (Well THAT and the fact you guys only have one Steam account!)
    ;)
  • NikosD - Monday, January 28, 2019 - link

    @Ian Cutress
    Great article, it's going to become all-time classic and kudos for mentioning semiaccurate and Charlie for his work and inside information (and guts)

    But really, how many days, weeks or even months did it take to finish it ?
  • bfonnes - Monday, January 28, 2019 - link

    RIP Intel
  • CharonPDX - Monday, January 28, 2019 - link

    Insane to think that there have been as many 14nm "generations" as there were "Core architecture" generations before 14nm.
  • ngazi - Tuesday, January 29, 2019 - link

    Windows is snappy because there is no graphics switching. Any machine with the integrated graphics completely off is snappier.
  • Catalina588 - Wednesday, January 30, 2019 - link

    @Ian, This was a valuable article and it is clipped to Evernote. Thanks!

    Without becoming Seeking Alpha, you could add another dimension or two to the history and future of 10nm: cost per transistor and amortizing R&D costs. At Intel's November 2013 investor meeting, William Holt strongly argued that Intel would deliver the lowest cost per transistor (slide 13). Then-CFO Stacey Smith and other execs also touted this line for many quarters. But as your article points out, poor yields and added processing steps make 10nm a more expensive product than the 14nm++ we see today. How will that get sold and can Intel improve the margins over the life of 10nm?

    Then there's amortizing the R&D costs. Intel has two independent design teams in Oregon and Israel. Each team in the good-old tick-tock days used to own a two-year process node and new microarchitecture. The costs for two teams over five-plus years without 10nm mainstream products yet is huge--likely hundreds of millions of dollars. My understanding is that Intel, under general accounting rules, has to write off the R&D expense over the useful life of the 10nm node, basically on a per chip basis. Did Intel start amortizing 10nm R&D with the "revenue" for Cannon Lake starting in 2017, or is all of the accrued R&D yet to hit the income statement? Wish I knew.

    Anyway, it sure looks to me like we'll be looking back at 10nm in the mid-2020s as a ten-year lifecycle. A big comedown from a two-year TickTock cycle.
