SPEC2006 & 2017: Industry Standard - ST Performance

One big talking point around the new Ryzen 3000 series is the improved single-threaded performance of the new Zen 2 core. In order to investigate the topic in a more controlled manner with better-documented workloads, we’ve fallen back on the industry-standard SPEC benchmark suite.

We’ll be investigating the previous-generation SPEC CPU2006 test suite, which gives us better context against past platforms, as well as introducing the new SPEC CPU2017 suite. We have to note that SPEC2006 has been deprecated in favour of 2017, and we must also mention that the scores posted today are marked as estimates, as they haven’t been officially submitted to the SPEC organisation.
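For context on what these estimated scores represent: each sub-test’s score is the ratio of a reference machine’s runtime to the measured runtime, and the overall suite score is the geometric mean of those per-benchmark ratios. A minimal sketch, with made-up ratios rather than actual measurements:

```python
from math import prod

def spec_score(ratios):
    # Each ratio is reference_runtime / measured_runtime for one sub-test;
    # the overall suite score is the geometric mean of all the ratios.
    return prod(ratios) ** (1.0 / len(ratios))

# Hypothetical per-benchmark ratios, purely for illustration:
print(spec_score([40.0, 55.0, 32.0, 48.0]))
```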

For SPEC2006, we’re still using the same setup as in our mobile suite, meaning all the C/C++ benchmarks, while for SPEC2017 I’ve also gone ahead and prepared all the Fortran tests for a near-complete suite for desktop systems. I say near-complete because, due to time constraints, we’re running the suite via WSL on Windows. I’ve checked that there are no noticeable performance differences compared to native Linux (we’re also compiling statically); however, one WSL bug is that it has a fixed stack size, so we’ll be missing 521.wrf_r from the SPECfp2017 collection.
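As an illustration of that WSL limitation: on native Linux the stack size is a per-process resource limit that can be queried and raised, which is what a stack-hungry workload like 521.wrf_r relies on. A quick sketch using Python’s standard resource module (Unix-only):

```python
import resource

# Query the current stack-size limits (values are in bytes,
# or resource.RLIM_INFINITY for "unlimited").
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
print(f"stack limits: soft={soft}, hard={hard}")

# An unprivileged process may raise its soft limit up to the hard limit.
# On WSL at the time of writing, the stack size was fixed instead,
# which is why 521.wrf_r could not be run.
resource.setrlimit(resource.RLIMIT_STACK, (hard, hard))
```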

In terms of compilers, I’ve opted to use LLVM for both the C/C++ and Fortran tests; for Fortran, we’re using the Flang compiler. The rationale for using LLVM over GCC is better cross-platform comparisons against platforms that only have LLVM support, as well as future articles where we’ll investigate this aspect more. We’re not considering closed-source compilers such as MSVC or ICC.

clang version 8.0.0-svn350067-1~exp1+0~20181226174230.701~1.gbp6019f2 (trunk)
clang version 7.0.1 (ssh://git@github.com/flang-compiler/flang-driver.git 
  24bd54da5c41af04838bbe7b68f830840d47fc03)

-Ofast -fomit-frame-pointer
-march=x86-64
-mtune=core-avx2 
-mfma -mavx -mavx2

Our compiler flags are straightforward, with a basic -Ofast and the relevant ISA switches to allow for AVX2 instructions.

The Ryzen 3900X system was run in the same way as in the rest of our article with DDR4-3200 CL16, same as with the i9-9900K, whilst the Ryzen 2700X had DDR4-2933 with similar CL16 16-16-16-38 timings.

SPECint2006 Speed Estimated Scores

In terms of the int2006 benchmarks, the improvements of the new Zen 2 based Ryzen 3900X are quite even across the board when compared to the Zen+ based Ryzen 2700X. We do note, however, somewhat larger performance increases in 403.gcc and 483.xalancbmk. It’s not immediately clear why, as these benchmarks don’t share one particular characteristic that would fit Zen 2’s design improvements, though I suspect it’s linked to the larger L3 cache.

445.gobmk in particular is a branch-heavy workload, and the 35% increase in performance here is better explained by Zen 2’s additional TAGE branch predictor, which is able to reduce overall branch misses.

It’s also interesting that although the Ryzen 3900X posted worse memory latency results than the 2700X, it’s still able to outperform the latter in memory-sensitive workloads such as 429.mcf, although the increase for 471.omnetpp is amongst the smallest in the suite.

However, we still see that AMD has an overall larger disadvantage against Intel in these memory-sensitive tests: the 9900K has a large advantage in 429.mcf and posts a large lead in the very memory-bandwidth-intensive 462.libquantum, the two tests that put the most pressure on the caches and memory subsystem.

SPECfp2006 (C/C++) Speed Estimated Scores

In the fp2006 benchmarks, we again see some larger jumps on the part of the Ryzen 3900X, particularly in 482.sphinx3. This test, along with 450.soplex, is characterized by higher data-cache misses, so Zen 2’s larger 16MB L3 cache should definitely be part of the reason we see such large jumps.

I found it interesting that we’re not seeing much improvement in 470.lbm even though this test is store-heavy, so I would have expected Zen 2’s additional store AGU to greatly benefit this workload. There must be some higher-level memory limitation bottlenecking the test.

453.povray isn’t data-heavy nor branch-heavy, as it’s one of the simpler workloads in the suite. Here the bottlenecks are mostly the execution back-end throughput and the front-end’s ability to feed it fast enough. So while the Ryzen 3900X provides a big boost over the 2700X, it still largely lags behind the 9900K, a characteristic we also see in the similarly execution-bottlenecked 456.hmmer of the integer suite.

SPEC2006 Speed Estimated Total

Overall, the 3900X is 25% faster in the integer and floating point tests of the SPEC2006 suite, which corresponds to a 17% IPC increase, above AMD's officially published figures for IPC increases.
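The arithmetic behind that normalisation is simple: divide the raw performance ratio by the frequency ratio to isolate the per-clock gain. The clock figures below are illustrative assumptions (roughly 4.6GHz effective single-core boost for the 3900X versus 4.3GHz for the 2700X), not measured values:

```python
def ipc_uplift(perf_ratio, freq_new_ghz, freq_old_ghz):
    # Divide out the clock-speed advantage to isolate per-clock (IPC) gains.
    return perf_ratio / (freq_new_ghz / freq_old_ghz) - 1.0

# ~25% overall SPEC2006 uplift, assumed boost clocks of 4.6 vs 4.3 GHz:
print(f"{ipc_uplift(1.25, 4.6, 4.3):.1%}")  # roughly 17%
```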

Moving on to the 2017 suite, we have to clarify that we’re using the Rate benchmark variations. The 2017 suite’s speed and rate benchmarks differ from each other in terms of workloads: the speed tests were designed for single-threaded testing and have large memory demands of up to 11GB, while the rate tests were meant for multi-process testing. We’re using the rate variations of the benchmarks because we don’t see any large differentiation between the two variations in terms of their characterisation, and thus the performance scaling between the two should be extremely similar. On top of that, the rate benchmarks take up to 5x less time (1+ hour vs 6+ hours), and we’re able to run them on more memory-limited platforms (which we plan to do in the future).

SPECint2017 Rate-1 Estimated Scores

In the int2017 suite, we’re seeing similar performance differences and improvements, although this time around there are a few workloads that are a bit more limited in terms of their performance boosts on the new Ryzen 3900X.

Unfortunately I’m not quite as familiar with the exact characteristics of these tests as I am with the 2006 suite, so a more detailed analysis should follow in the next few months as we delve deeper into microarchitectural counters.

SPECfp2017 Rate-1 Estimated Scores

In the fp2017 suite, things are also quite even. Interestingly enough, here in particular AMD is able to trade blows with Intel’s 9900K in a lot more workloads, sometimes winning in terms of absolute performance and sometimes losing.

SPEC2017 Rate-1 Estimated Total

As for the overall performance scores, the new Ryzen 3900X improves by 23% over the 2700X. Although it closes the gap almost completely, it’s just a hair’s width shy of actually beating the 9900K’s absolute single-threaded performance.

SPEC2017 Rate-1 Estimated Performance Per GHz

Normalising the scores for frequency, we see that AMD has achieved something that the company hasn’t been able to claim in over 15 years: it has beaten Intel in terms of overall IPC. Here the IPC improvement over Zen+ is 15%, a bit lower than the 17% figure for SPEC2006.

We already know about Intel’s upcoming Sunny Cove microarchitecture, which should undoubtedly be able to regain the IPC crown with relative ease. The question for Intel is whether it’ll also be able to maintain the absolute single-threaded performance crown and continue to hit 5GHz or similar clock speeds with the new core design.

Comments

  • MasterE - Wednesday, August 7, 2019 - link

    I considered going with the Ryzen 9 3900X chip and an X570 motherboard for a new rendering system, but since these chips aren't available for less than $820+ anywhere, I guess I'll be back to either the Threadripper or Intel 9000+ series. There is simply no way I'm paying that kind of price for a chip with a Manufacturer's Suggested Retail Price of $499.
  • gglaw - Friday, August 23, 2019 - link

    @Andrei - I was just digging through reviews again before biting the bullet on a 3900X, and one of the big questions that is not agreed upon in the tech community is gaming performance for PBO vs all-core overclock, yet you only run 2 benches on the overclocked settings. How can a review be complete with only 2 benches run, neither related to gaming? In a PURELY single-threaded scenario PBO gives a tiny 2.X percent increase in single-threaded Cinebench. This indicates to me that it is not sustaining the max 4.6 on a single core or it would have scaled better, so it may not really be comparing 4.6 vs 4.3 even for single-threaded performance. Almost all recent game engines can utilize at least 4 threads, so I feel your exact same test run through the gaming suite would have shown a consistent winner with 4.3 all-core OC vs PBO. And in heavily threaded scenarios the gap would keep growing larger, but specifically in today's GAMES, especially if you consider very few of us have 0 background activity, all-core OC would hands-down win is my guess, but we could have better evidence of this if you could run a complete benchmarking suite. (unless I'm blind and missed it, in which case my apologies :)

    I've been messing around with a 3700X, and even with a 14cm Noctua cooling it, it does not sustain max allowed boost on even a single core with PBO which is another thing I wish you touched on more. During your testing do you monitor the boost speeds and what percent of the time it can stay at the max boost over XX minutes?
  • Maxiking - Monday, August 26, 2019 - link

    Veni, vidi vici

    Yeah, I was right.

    I would like to thank my family for all the support I have received whilst fighting amd fanboys.

    It was difficult, sometimes I was seriously thinking about giving up but the truth can not be stopped!
    The AMD fraud has been confirmed.

    https://www.reddit.com/r/pcgaming/comments/cusn2t/...
  • Ninjawithagun - Thursday, October 10, 2019 - link

    Now all you have to do is have all these benchmarks run again after applying the 1.0.0.3 ABBA BIOS update ;-)
  • quadibloc - Tuesday, November 12, 2019 - link

    I am confused by the diagram of the current used by individual cores as the number of threads is increased. Since SMT doesn't double the performance of a core, on the 3900X, for example, shouldn't the number of cores in use increase to all 12 for the first 12 threads, one core for each thread, with all cores then remaining in use as the number of threads continues to increase to 24?

    Or is it just that this chart represents power consumption under a particular setting that minimizes the number of cores in use, and other settings that maximize performance are also possible?
  • SjLeonardo - Saturday, December 14, 2019 - link

    Core and uncore get supplied by different VRMs, right?
  • Parkab0y - Sunday, October 4, 2020 - link

    I really want to see something like this about zen3 5000
