CPU Performance: SPEC2006

SPEC2006 has long been a natural goal for us as a keystone analysis benchmark: it is a respected industry standard that even silicon vendors use for architecture analysis and development. With SPEC2017 released last year, SPEC2006 is being officially retired on January 9th, an amusing coincidence given that we are only now starting to use it.

As Android SoCs improve in power efficiency and performance, it is becoming practical to run SPEC2006 on consumer smartphones. The main concerns in the past were memory usage for subtests such as MCF and, more importantly, the sheer test runtimes on battery-powered devices. Over the past couple of weeks I’ve been busy porting SPEC2006 over to a custom Android application harness.

The results are quite remarkable, showing both the generational performance and the efficiency improvements of the various Android SoC vendors. The Kirin 970 in particular closes in on the efficiency of the Snapdragon 835, leapfrogging the Kirin 960 and the Exynos SoCs. Absolute performance, however, has not improved: the Kirin 970 actually shows a slight regression compared to the Kirin 960, and all SoC vendors post only meagre gains over the previous generation.

Going Into The Details

Our new SPEC2006 harness is compiled using the official Android NDK. For this article the NDK version used was r16 RC1, with Clang/LLVM as the compiler and just the -Ofast optimization flag (alongside the applicable test portability flags). Clang was chosen over GCC because Google has deprecated GCC in the NDK toolchain and will be removing the compiler altogether in 2018, making it unlikely that we’ll revisit GCC results in the future. It should be noted that in my testing GCC 4.9 still produced faster code in some SPEC subtests when compared to Clang. Nevertheless, the choice of Clang should also facilitate better Androids-to-Apples comparisons down the line.

While there are arguments that SPEC scores should be published with the best compiler flags for each architecture, I wanted a more apples-to-apples approach using identical binaries (which is also what we expect to see distributed in real applications). As such, for this article I’ve chosen to pass the -mcpu=cortex-a53 flag to the compiler, as it gave the best overall average score across all tested CPUs. The only exception was the Exynos M2, which profited from an additional 14% performance boost in perlbench when compiled with its corresponding CPU architecture target flag.
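For reference, the following is only an illustrative sketch of how a single subtest binary could be built with these settings through an NDK standalone toolchain; the toolchain name, source paths and extra defines are assumptions on my part, not the harness’s actual build script.

```
aarch64-linux-android-clang -Ofast -mcpu=cortex-a53 -DSPEC_CPU -DNDEBUG \
    -fPIE -pie -o 401.bzip2 benchspec/CPU2006/401.bzip2/src/*.c
```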

As the following SPEC scores are not submitted to the SPEC website we have to disclose that they represent only estimated values and thus are not officially validated submissions.

Alongside the full CINT2006 suite we are also running the C/C++ subtests of CFP2006. Unfortunately, 10 of the 17 tests in the CFP2006 suite are written in Fortran, which can only be compiled with great difficulty using GCC on Android, while the NDK’s Clang lacks a Fortran front-end altogether.

As an overview of the subtests run, here are the application areas and descriptions as listed on the official SPEC website:

SPEC2006 C/C++ Benchmarks

SPECint2006 (complete suite)
  400.perlbench - Programming Language: Derived from Perl V5.8.7. The workload includes SpamAssassin, MHonArc (an email indexer), and specdiff (SPEC's tool that checks benchmark outputs).
  401.bzip2 - Compression: Julian Seward's bzip2 version 1.0.3, modified to do most work in memory, rather than doing I/O.
  403.gcc - C Compiler: Based on gcc version 3.2, generates code for Opteron.
  429.mcf - Combinatorial Optimization: Vehicle scheduling. Uses a network simplex algorithm (which is also used in commercial products) to schedule public transport.
  445.gobmk - Artificial Intelligence (Go): Plays the game of Go, a simply described but deeply complex game.
  456.hmmer - Search Gene Sequence: Protein sequence analysis using profile hidden Markov models (profile HMMs).
  458.sjeng - Artificial Intelligence (chess): A highly-ranked chess program that also plays several chess variants.
  462.libquantum - Physics / Quantum Computing: Simulates a quantum computer, running Shor's polynomial-time factorization algorithm.
  464.h264ref - Video Compression: A reference implementation of H.264/AVC that encodes a video stream using 2 parameter sets. The H.264/AVC standard is expected to replace MPEG2.
  471.omnetpp - Discrete Event Simulation: Uses the OMNet++ discrete event simulator to model a large Ethernet campus network.
  473.astar - Path-finding Algorithms: Pathfinding library for 2D maps, including the well-known A* algorithm.
  483.xalancbmk - XML Processing: A modified version of Xalan-C++, which transforms XML documents to other document types.

SPECfp2006 (C/C++ subtests)
  433.milc - Physics / Quantum Chromodynamics: A gauge field generating program for lattice gauge theory programs with dynamical quarks.
  444.namd - Biology / Molecular Dynamics: Simulates large biomolecular systems. The test case has 92,224 atoms of apolipoprotein A-I.
  447.dealII - Finite Element Analysis: deal.II is a C++ program library targeted at adaptive finite elements and error estimation. The testcase solves a Helmholtz-type equation with non-constant coefficients.
  450.soplex - Linear Programming / Optimization: Solves a linear program using a simplex algorithm and sparse linear algebra. Test cases include railroad planning and military airlift models.
  453.povray - Image Ray-tracing: Image rendering. The testcase is a 1280x1024 anti-aliased image of a landscape with some abstract objects with textures using a Perlin noise function.
  470.lbm - Fluid Dynamics: Implements the "Lattice-Boltzmann Method" to simulate incompressible fluids in 3D.
  482.sphinx3 - Speech Recognition: A widely-known speech recognition system from Carnegie Mellon University.

It’s important to note one distinguishing aspect of SPEC CPU versus other CPU benchmarks such as GeekBench: it’s not just a CPU benchmark, but rather a system benchmark. While benchmarks such as GeekBench serve as a good quick view of basic workloads, the vastly greater workload and codebase size of SPEC CPU stresses the memory subsystem to a much greater degree. To demonstrate this, we can look at the individual subtest performance differences when limiting only the memory controller frequency, in this case on the Mate 10 Pro with the Kirin 970.

An increase in main memory latency from just 80ns to 115ns (random access within the access window) can have dramatic effects on many of the more memory-access-sensitive tests in SPEC CPU. Meanwhile the same handicap has essentially no effect on the GeekBench 4 single-threaded scores and only a marginal effect on some subtests of the multi-threaded scores.
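To give an idea of what such a latency figure means in practice, below is a minimal pointer-chasing sketch in C of the kind of microbenchmark commonly used to estimate random-access latency within a fixed window. The window size, stride and hop count are illustrative assumptions, not the parameters behind the figures quoted above.

```c
/* Minimal pointer-chasing latency sketch: build a random cyclic chain of
 * cache-line-spaced pointers inside a large window and time fully dependent
 * loads through it. All parameters here are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define WINDOW  (128u * 1024 * 1024)  /* 128 MB window, far larger than the caches */
#define STRIDE  64                    /* one cache line per hop */
#define HOPS    (16u * 1024 * 1024)   /* number of dependent loads to time */

int main(void) {
    size_t lines = WINDOW / STRIDE;
    char *buf = malloc(WINDOW);
    size_t *order = malloc(lines * sizeof(size_t));
    if (!buf || !order) return 1;

    /* Shuffle the line order so each load lands at an unpredictable address
     * and the hardware prefetchers cannot hide the DRAM latency. */
    for (size_t i = 0; i < lines; i++) order[i] = i;
    srand(1);
    for (size_t i = lines - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    /* Link the lines into a single cycle: each slot stores the address of the next. */
    for (size_t i = 0; i < lines; i++)
        *(void **)(buf + order[i] * STRIDE) = buf + order[(i + 1) % lines] * STRIDE;

    /* Chase the chain and time the serialized loads. */
    void **p = (void **)(buf + order[0] * STRIDE);
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < HOPS; i++) p = (void **)*p;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    /* Print the final pointer so the chase loop cannot be optimized away. */
    printf("average load-to-load latency: %.1f ns (%p)\n", ns / HOPS, (void *)p);
    free(buf); free(order);
    return 0;
}
```

On a phone such a test would be cross-compiled with the same NDK toolchain described above and run while the memory controller is pinned to the frequency under test.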

In general the benchmarks can be grouped into three categories: memory-bound, balanced between memory and execution, and purely execution-bound benchmarks. From the memory latency sensitivity chart it’s relatively easy to tell which benchmarks belong to which category based on their performance degradation. The most memory-bound benchmarks include the infamous 429.mcf, alongside 433.milc, 450.soplex, 470.lbm and 482.sphinx3. The least affected are 400.perlbench, 445.gobmk, 456.hmmer, 464.h264ref, 444.namd and 453.povray, with 458.sjeng and 462.libquantum even increasing slightly in performance, pointing to very saturated execution units. The remaining benchmarks are more balanced and see a reduced performance impact. Of course this is an oversimplification and the results will differ between architectures and platforms, but it gives us a solid hint as to the separation between execution-bound and memory-access-bound tests.

As well as tracking performance (SPECspeed), I also included a power tracking mechanism which relies on the device’s fuel gauge for current measurements. The values published here represent only the active power of the platform, meaning idle power is subtracted from the total absolute load power during the workloads to compensate for platform components such as the display. Again I have to emphasize that the power and energy figures don't just represent the CPU, but the SoC system as a whole, including interconnects, memory controllers, DRAM, and PMIC overhead.
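For illustration, here is a hedged sketch in C of how such fuel-gauge sampling can work on Android: it reads the battery voltage and current from the Linux power_supply sysfs nodes, converts them to watts, and subtracts a separately measured idle baseline. The node paths, the microvolt/microamp units, the sign convention and the idle figure are assumptions that differ between devices; this is not the exact tool used for the article’s measurements.

```c
/* Sketch of fuel-gauge based power sampling: total power is derived from the
 * battery voltage and current, and an idle baseline is subtracted to yield
 * the "active" power attributed to the workload. Paths and units are assumed. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static long read_long(const char *path) {
    long v = 0;
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%ld", &v) != 1) v = 0;
        fclose(f);
    }
    return v;
}

int main(void) {
    const double idle_watts = 0.95;  /* baseline measured separately while idle (assumed value) */
    const int interval_ms = 500;     /* sampling period */
    const int samples = 120;         /* roughly one minute of sampling for the example */
    double energy_joules = 0.0;

    for (int i = 0; i < samples; i++) {
        long uv = read_long("/sys/class/power_supply/battery/voltage_now"); /* microvolts (assumed) */
        long ua = read_long("/sys/class/power_supply/battery/current_now"); /* microamps (assumed)  */
        double total = (uv / 1e6) * (labs(ua) / 1e6);  /* total platform power in watts */
        double active = total - idle_watts;            /* remove the idle/display floor */
        energy_joules += active * (interval_ms / 1000.0);
        printf("total %.2f W  active %.2f W  energy %.1f J\n", total, active, energy_joules);
        usleep(interval_ms * 1000);
    }
    return 0;
}
```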

Alongside the current-generation SoCs, I also included a few predecessors to track the progress made over the last two years in the Android space and across CPU microarchitecture generations. Because the total runtime of the benchmarks exceeds 5 hours even on the fastest devices, we actively cool the phones with an external fan to ensure consistent DVFS frequencies across all of the subtests and to avoid favouring the early tests.

Comments

  • Ratman6161 - Wednesday, January 24, 2018 - link

    Personally I think Samsung is in a great position...whether you consider them "truly vertically integrated" or not. One thing to remember is that most often, Samsung flagship devices come in two variants. It's mostly in the US where we get the Qualcomm variants, while most other regions tend to get Exynos. The dual sourcing is a great arrangement because every once in a while Qualcomm is going to turn out something problematic like the Snapdragon 810. When that happens Samsung has the option to use its own, which is what they did with the Galaxy S6/Note 5 generation, which was Exynos only.

    Another point is: what do you consider "truly vertically integrated"? The story cites Apple and Huawei but they don't actually manufacture their SoCs and neither does Qualcomm. I believe the Kirin SoCs are actually manufactured by TSMC, while Apple and Qualcomm SoCs have at various times been manufactured in Samsung fabs. As far as I know, Samsung is the only company that even has the capability to design and also manufacture their own SoC. So in a way, you could say that my Samsung Note 5 is about the most vertically integrated phone there is, along with non-US versions of the S7 and S8 generations. In those cases you have a Samsung SoC manufactured in a Samsung fab in a Samsung phone with a Samsung screen etc. Don't make the mistake of thinking the whole world is just like us...they aren't. Also many of the screens for other brands are of Samsung manufacture, so you have to keep in mind that there is a lot more to the device than the SoC.
  • fred666 - Monday, January 22, 2018 - link

    Huawei only uses HiSilicon SoCs? Nothing from Qualcomm?
  • Andrei Frumusanu - Monday, January 22, 2018 - link

    They've used Qualcomm chip-sets and still do use them in segments they can't fill with their own SoCs.
  • niva - Monday, January 22, 2018 - link

    So they still use QC chips, but unlike them, Samsung isn't vertically integrated because they use QC chips.

    Get out of here.
  • Dr. Swag - Monday, January 22, 2018 - link

    His point is Huawei only uses non-HiSilicon chips in price segments that they do not have SoCs for. Samsung, however, does sometimes use QC silicon even if they have SoCs that can fill that segment (e.g. Samsung uses the Snapdragon 835s even though they have the 8895).

    I'm not saying that I agree with Andrei's view, but there is a difference.
  • niva - Tuesday, January 23, 2018 - link

    I completely disagree with the assessment that Samsung is somehow not "as vertically integrated" as Huawei. Samsung is not just vertically integrated, it produces components for many other key players in the market. They have reasons why they CHOOSE not to use their SoCs in specific markets and areas. Some of the rationale behind those choices may be questioned, but it's a choice. I too think that the world would be a better place if they actually put their own chip designs into their phones and directly competed against Qualcomm. That of course might be the end of Qualcomm and a whole lot of other companies... Samsung can easily turn into a monopoly that suffocates the entire market, so it's not just vertical, but horizontal integration. What Huawei has accomplished in short order is impressive, but isn't Huawei just another branch of the Chinese government at this point? Sure yeah, their country is more vertically integrated. Maybe that's the line to take to justify the statement...
  • levizx - Monday, February 26, 2018 - link

    No, it's not INTEGRATED, because it doesn't prefer its own silicon over outsourcing. Samsung's Mobile department runs separately from its Semiconductor department, which acts as a contractor no different from Qualcomm.

    As for Huawei being a branch of the Chinese government, it's as true as Google being part of the US government. Stop spruiking conspiracy theories. I know for a fact that their employees almost fully own the company.
  • KarlKastor - Thursday, January 25, 2018 - link

    Well, that's not true. Huawei chose the Snapdragon 625 for the Nova. Why not use their own Kirin 600 series? It is the same market segment.

    Samsung only opts for Snapdragon where they have no SoCs of their own: all regions with CDMA2000 networks.
    In all other regions, Europe for example, they ship all smartphones from the J- and A-series to the S-series and Note with their Exynos SoCs.
  • yslee - Tuesday, January 30, 2018 - link

    You keep on repeating that line, but where I am we have no CDMA2000 networks and still get Snapdragon Samsungs.
  • levizx - Monday, February 26, 2018 - link

    That's also not true, Samsung uses Snapdragon where there's no CDMA2000 as well. Huawei used to use VIA's 55nm CBP8.2D over Snapdragon.

    Mid-tier is not so indicative compared to higher-end devices when it comes to, well, everything. They may even outsource the ENTIRE DESIGN to a third party, and it still proves nothing in particular. They might have chosen the S625 because of supply issues, which is completely reasonable. The same cannot be applied to Samsung, since there's no such thing as supply issues when it comes to Exynos and Snapdragon.
