Single Threaded Integer Performance: SPEC CPU2006

Even in the server market, where high core count CPUs rule the roost, high single threaded performance is still very desirable. It guarantees a certain level of performance in every situation, not just in throughput scenarios with "embarrassingly parallel" software. 

SPEC CPU2017 has finally launched, but it did so while our testing was already under way. So SPEC CPU2006 was still our best option to evaluate single threaded performance. Even though SPEC CPU2006 is more HPC and workstation oriented, it contains a good variety of integer workloads.

It is our conviction that we should try to mimic how performance critical software is compiled instead of trying to achieve the highest scores. To that end, we:

  • use 64-bit gcc: by far the most used compiler on Linux for integer workloads, and a good all-round compiler that does not try to "break" benchmarks (libquantum...) or favor a certain architecture
  • use gcc version 5.4: the standard compiler with Ubuntu 16.04 LTS (note that this is an upgrade from the 4.8.4 used in earlier articles)
  • use -Ofast -fno-strict-aliasing optimization: a good balance between performance and keeping things simple (a short illustration of the aliasing issue follows this list)
  • add "-std=gnu89" to the portability settings to resolve the issue that some tests will not compile with gcc 5.x
  • run one copy of the test
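
For illustration, here is a minimal sketch (our own toy example, not actual SPEC source) of the kind of legacy pointer type-punning that makes -fno-strict-aliasing a sensible companion to -Ofast. The "-std=gnu89" flag addresses a related legacy issue: several CPU2006 sources rely on the old gnu89 "inline" semantics rather than the gnu11 default of gcc 5.x, which can otherwise lead to build failures.

```c
/* alias_demo.c - a hypothetical illustration (not SPEC code) of why we keep
 * -fno-strict-aliasing even at -Ofast: a lot of older C code type-puns
 * through incompatible pointer types.
 *
 *   gcc -Ofast alias_demo.c && ./a.out                      -> may print 1
 *   gcc -Ofast -fno-strict-aliasing alias_demo.c && ./a.out -> prints 0
 */
#include <stdio.h>

/* Under strict aliasing rules the compiler may assume that the write
 * through 'f' cannot modify '*i' (different types), so it is free to
 * return the constant 1 without re-reading memory. */
__attribute__((noinline))
static int punned_read(int *i, float *f)
{
    *i = 1;
    *f = 0.0f;   /* aliases *i when both pointers target the same word */
    return *i;
}

int main(void)
{
    union { int i; float f; } u = { .i = 0 };
    printf("%d\n", punned_read(&u.i, &u.f));
    return 0;
}
```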

The ultimate objective is to measure performance in applications that are not aggressively optimized, where for some reason (as is frequently the case) a "multi-thread unfriendly" task keeps us waiting. 

First the single threaded results. It is important to note that thanks to modern turbo technology, all CPUs will run at higher clock speeds than their base clock speed. 

  • The Xeon E5-2690 ("Sandy Bridge") is capable of boosting up to 3.8 GHz
  • The Xeon E5-2690 v3 ("Haswell") is capable of boosting up to 3.5 GHz
  • The Xeon E5-2699 v4 ("Broadwell") is capable of boosting up to 3.6 GHz
  • The Xeon 8176 ("Skylake-SP") is capable of boosting up to 3.8 GHz
  • The EPYC 7601 ("Naples") is capable of boosting up to 3.2 GHz

First we look at the absolute numbers. 

Subtest          Application type        Xeon E5-2690    Xeon E5-2690 v3   Xeon E5-2699 v4   EPYC 7601   Xeon 8176
                                         @ 3.8 GHz       @ 3.5 GHz         @ 3.6 GHz         @ 3.2 GHz   @ 3.8 GHz
400.perlbench    Spam filter             35              41.6              43.4              31.1        50.1
401.bzip2        Compression             24.5            24.0              23.9              24.0        27.1
403.gcc          Compiling               33.8            35.5              23.7              35.1        24.5
429.mcf          Vehicle scheduling      43.5            42.1              44.6              40.1        43.3
445.gobmk        Game AI                 27.9            27.8              28.7              24.3        31.0
456.hmmer        Protein seq. analyses   26.5            28.0              32.3              27.9        35.4
458.sjeng        Chess                   28.9            31.0              33.0              23.8        33.6
462.libquantum   Quantum sim             55.5            65.0              97.3              69.2        102
464.h264ref      Video encoding          50.7            53.7              58.0              50.3        67.0
471.omnetpp      Network sim             23.3            31.3              44.5              23.0        40.8
473.astar        Pathfinding             25.3            25.1              26.1              19.5        27.4
483.xalancbmk    XML processing          41.8            46.1              64.9              35.4        67.3

As raw SPEC scores can be a bit much to deal with in a dense table, we've also broken out our scores on a percentage basis. Sandy Bridge-EP (Xeon E5 v1) is about five years old, and the servers based upon this CPU are going to be replaced by newer ones. So we've made single threaded Sandy Bridge-EP performance our reference (100%), and compare the single threaded performance of all other architectures accordingly.

Subtest          Application type        Xeon E5-2690    Xeon E5-2690 v3   Xeon E5-2699 v4   EPYC 7601   Xeon 8176
                                         @ 3.8 GHz       @ 3.5 GHz         @ 3.6 GHz         @ 3.2 GHz   @ 3.8 GHz
400.perlbench    Spam filter             100%            119%              124%              89%         143%
401.bzip2        Compression             100%            98%               98%               98%         111%
403.gcc          Compiling               100%            105%              70%               104%        72%
429.mcf          Vehicle scheduling      100%            97%               103%              92%         100%
445.gobmk        Game AI                 100%            100%              103%              87%         111%
456.hmmer        Protein seq. analyses   100%            106%              122%              105%        134%
458.sjeng        Chess                   100%            107%              114%              82%         116%
462.libquantum   Quantum sim             100%            117%              175%              125%        184%
464.h264ref      Video encoding          100%            106%              114%              99%         132%
471.omnetpp      Network sim             100%            134%              191%              99%         175%
473.astar        Pathfinding             100%            99%               103%              77%         108%
483.xalancbmk    XML processing          100%            110%              155%              85%         161%

SPEC CPU2006 analysis is complicated, and with only a few days spent with the EPYC server, we must admit that what follows is mostly educated guessing. 

First off, let's gauge the IPC efficiency of the different architectures. Considering that the EPYC core runs at an 11-16% lower clockspeed (3.2 GHz vs 3.6/3.8 GHz), getting 90+% of the performance of the Intel architectures can be considered a "strong" (IPC) showing for the AMD "Zen" architecture. 
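
As a rough sanity check on that per-clock claim, the back-of-the-envelope sketch below (our own arithmetic, not part of the SPEC methodology) divides the 445.gobmk scores from the table above by the maximum single-core turbo clocks listed earlier; it optimistically assumes those clocks are sustained for the entire run.

```c
/* perclock.c - back-of-the-envelope per-clock ("IPC") comparison sketch.
 * Scores are the 445.gobmk column from the table above; clocks are the
 * maximum single-core turbo frequencies listed earlier. */
#include <stdio.h>

struct result { const char *cpu; double score; double ghz; };

int main(void)
{
    struct result r[] = {
        { "Xeon E5-2690 (SNB)",    27.9, 3.8 },
        { "Xeon E5-2690 v3 (HSW)", 27.8, 3.5 },
        { "Xeon E5-2699 v4 (BDW)", 28.7, 3.6 },
        { "EPYC 7601 (Zen)",       24.3, 3.2 },
        { "Xeon 8176 (SKL-SP)",    31.0, 3.8 },
    };
    int n = sizeof(r) / sizeof(r[0]);
    double base = r[0].score / r[0].ghz;   /* Sandy Bridge per-GHz = 100% */

    for (int i = 0; i < n; i++) {
        double per_ghz = r[i].score / r[i].ghz;
        printf("%-24s %5.2f pts/GHz  (%3.0f%% of Sandy Bridge per clock)\n",
               r[i].cpu, per_ghz, 100.0 * per_ghz / base);
    }
    return 0;
}
```

By this crude metric the Zen core is actually slightly ahead of Sandy Bridge per clock in gobmk, and the gap to Broadwell and Skylake-SP shrinks considerably, which is consistent with the "strong IPC showing" above.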

As for Intel's latest CPU, pay attention to the effect of the much larger L2-cache of the Skylake-SP core (Xeon 8176) compared to the previous generation "Broadwell". Especially perlbench, gobmk, hmmer and h264ref (the instruction part) benefit. 

Meanwhile, with the new GCC 5.4 compiler, Intel's performance on the 403.gcc benchmark seems to have regressed on its newer architectures. While we previously saw the Xeon E5-2699 v4 perform at 83-95% of the "Sandy Bridge" Xeon E5-2690, this has now regressed further to 70%. The AMD Zen core, on the other hand, does exceptionally well when running GCC. A high percentage of (easy to predict) branches in the instruction mix, a relatively small footprint, and a heavy reliance on low-latency caches (mostly the L1, L2, and the 8 MB L3) seem to work well for it. The workloads where the impact of branch prediction is higher (a somewhat higher percentage of branch misses) - gobmk, sjeng, hmmer - also perform quite well on "Zen", which has a much lower branch misprediction penalty than AMD's previous generation architecture thanks to its µop cache. 

Otherwise, the pointer-chasing benchmarks (XML processing and pathfinding), which need a large L3 cache, are the worst performers on EPYC. 

Also notice that the low-IPC omnetpp ("network sim") runs slower on Skylake-SP than on Broadwell, but still much faster than on AMD's EPYC. Omnetpp is an application that benefited from the massive 55 MB L3 cache of Broadwell, which is why performance has declined on Skylake. Of course, this also means that the fractured 8x8 MB L3 of AMD's EPYC processor causes it to perform much slower than the latest Intel server CPUs. In the video encoding benchmark h264ref this plays a role too, but that benchmark relies much more on DRAM bandwidth. The fact that the EPYC core has more DRAM bandwidth available ensures that the AMD chip does not fall too far behind the latest Intel cores. 

All in all, we think we can conclude that the single threaded performance of the "Zen" architecture is excellent, but it is somewhat let down by the lower turbo clock and the "smaller" 8x8 MB L3 cache. 

Comments

  • tmbm50 - Wednesday, July 12, 2017

    Windows licensing is irrespective of virtualization.

    If you run a VM with a single vCPU on a server with 32 cores, you must license all 32 cores. KVM, ESXi... it doesn't matter.

    I'm sure most folks ignore that point in the license, but if you're an enterprise and get audited it's enforced.
  • nils_ - Wednesday, July 19, 2017

    Oracle does the same, and if your environment supports migration to other hosts you'd have to license those too (just in case). It's sort of criminal really.
  • pepoluan - Friday, July 28, 2017

    I wonder, though: how does AWS manage to offer per-instance Windows licensing for EC2?

    Because, by that logic, EVERY Windows instance needs to be licensed against ALL cores in an Availability Zone...
  • Rοb - Sunday, July 23, 2017

    From very brief research it looks like you're in for $6K per 16 Cores for the Datacenter Edition; trying to run the Software on a 4S 32 Core would cost 64x as much (excluding any Bulk Buy pricing you might be able to request).

    If you bought SM Fat Twins everything would be separated with less loss of density; whether the money saved on Licensing would make it pay off is the question.

    You want to conduct your business lawfully and can charge the customer what it costs plus profit - that's what it costs; if you want something different, the price will probably be different.

    Most Software that has per-Core Licenses costs a fair bit, and the vendors have thought it out so someone can't (lawfully) buy a single License and then run the Software on a much more powerful machine.

    Take a deep breath and consider that if you ran it on a Phi x200 in x86 Mode it would run slowly and you'd be charged for 256 Cores per CPU - so don't do that.

    I don't want to sound unsympathetic but if the Vendor didn't make money then they wouldn't have incentive to write the Software.

    Convince your customers to switch to free Software or for those prices write your own.

    What is the complaint exactly? Have a Rack Unit Fee, an Electricity Fee, a CPU Fee, a Software Fee, etc., and tell the customer that XYZ costs that much, but if they get WYZ it will only cost so much instead.

    Assuming everyone obeys the Law and pays the same for Electricity, Cooling, Electronics, Software and Labor then it's only the percentage of Profit where the difference in price lies - or in other words someone will always charge less (and not be 'audited' / as honest / as intelligent and hard working as your Team).

    Let the people you buy your Software from know your complaint and options; we can't be of much more help to you other than the years of service some of us devote to free and pay Software.
  • rocky12345 - Wednesday, July 12, 2017

    Great article as always. I found it very well written and there was a lot of information to take in. It was good to see the AMD chips doing this well. Bang for the buck seems to be in AMD's court in both the server and consumer markets now.

    To those saying oh in the real world big companies would not be upgrading their software to the latest because of money that may be lost: you guys have a solid point there. BUT these tests are not being done in a real world company that depends on their servers to be up 100% of the time. These are just in-house tests done to benchmark the new CPUs, so yes, the latest and greatest versions of the software can be & should be used. This shows exactly what the new CPUs can do when the software is updated to support the latest and greatest hardware. Do you actually think a huge company, when buying new server clusters, asks for software that is 5-10 years old? I am fairly sure they do not. They want the most up-to-date software that is optimized for the new hardware they are spending big bucks on. They want it to be 100% stable and they also want the latest and greatest because of the fact that they probably will never update the software again, or at least not for 5-7 years or more. So testing with old builds of software is very unrealistic, does not show the hardware at its best, and is also not what a company is looking for when buying new hardware.

    With that said this is still a great write up and deserves a lot of praise.
  • rahvin - Wednesday, July 12, 2017

    I think it's a great comparison article too; you know it's pretty unbiased when both the Intel and AMD fanbois are out in force criticizing the article for bias.

    My main comment is that Intel is crazy with those prices on the Platinum chips. Those prices are easily two times the previous generation. This is the result of AMD being absent from the server market; Intel has run processor prices up to what Sun, IBM and HP used to charge in the worst of the enterprise server days. $13k for a Xeon, you've got to be shitting me.

    Here's to hoping AMD mops the floor with them and causes prices to crater just like the last time Opteron was competitive. I remember the days when the highest end Xeon was less than $1000. These days the bottom end Xeons are pricing at $1000 and the high ends are 13X that much. Again, I pray AMD can get 25% market share and knock these prices back into reasonable territory. I also hope AMD makes a ton of money and can keep it up with competitive designs (even if it is doubtful because their management is garbage).
  • Rοb - Sunday, July 23, 2017

    Rahvin writes: "$13K for a Xeon ...".

    There's more to it than that; read the Fine Print. Intel has all kinds of expensive/inexpensive options (depending upon your point of view).

    See this Comparison: https://ark.intel.com/compare/120498,120499 .

    Which is "less expensive":

    Intel® Xeon® Platinum 8180M Processor (28 Cores) for $13,011.00

    or

    Intel® Xeon® Platinum 8156 Processor (4 Cores) for $7,007.00

    So which is "less expensive": $13K for 28 Cores, or $7K for 4?

    You can't just look at one number.

    There are other Technical Points AMD doesn't have: AVX-512, OmniPath, 8-way Motherboards, etc.

    If you MUST have what Intel offers then there's only one choice, if you can work around those things and get along with AMD then you're saving money.

    If you wanted bleeding edge performance then you'd be looking at SPARC or POWER; some complain that would deny the ability to play Crysis (and that due to their importance people stay up worrying about their issues).

    Which is "best" is often easy to say given a narrow definition, which is best in every possible circumstance can be more of a challenge.

    Disclaimer: I don't work at either place and intend to buy Epyc 7nm.
  • hahmed330 - Wednesday, July 12, 2017

    Jolly Good! AMD just smoked Intel's bacon!
    Impressive showing! Outstanding just outstanding!
  • Shankar1962 - Wednesday, July 12, 2017

    Yeah, that's why AMD is still making losses and Intel is making net profits of ~$11 billion plus each year.
    They are gaining share by trying to sell their so-called top products for cheap prices.
    Wondering who is getting smoked.
  • PixyMisa - Thursday, July 13, 2017

    Epyc has been out for three weeks.
