Section by Andrei Frumusanu

CPU ST Performance: SPEC 2006, SPEC 2017

SPEC2017 and SPEC2006 are suites of standardized tests used to probe the overall performance of different systems, architectures, microarchitectures, and setups. The code has to be compiled by the user, and the results can then be submitted to an online database for comparison. Each suite covers a range of integer and floating point workloads, and can be heavily optimized for each CPU, so it is important to check how the benchmarks are being compiled and run.
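
For context, a SPEC "score" for each benchmark is the ratio of a fixed reference machine's runtime to the measured runtime, and the suite score is the geometric mean of those per-benchmark ratios. A minimal sketch of that arithmetic in C (the runtimes below are made-up placeholders, not measured results):

/* Sketch of how a SPEC suite score is derived: per-benchmark
 * ratio = reference runtime / measured runtime, and the suite
 * score is the geometric mean of the ratios. The runtimes here
 * are made-up placeholders, not measured results. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double ref[]      = { 9770.0, 9120.0, 10490.0 };  /* reference, seconds */
    double measured[] = {  190.0,  210.0,   175.0 };  /* our run, seconds   */
    int n = 3;

    double log_sum = 0.0;
    for (int i = 0; i < n; i++) {
        double ratio = ref[i] / measured[i];
        printf("benchmark %d ratio: %.2f\n", i, ratio);
        log_sum += log(ratio);
    }
    /* geometric mean = exp(arithmetic mean of the logs) */
    printf("suite score: %.2f\n", exp(log_sum / n));
    return 0;
}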

We run the tests in a harness built through Windows Subsystem for Linux, developed by our own Andrei Frumusanu. WSL has some odd quirks (one test does not run due to WSL's fixed stack size), but it is good enough for like-for-like testing. SPEC2006 has been deprecated in favor of SPEC2017, but it remains an interesting comparison point in our data. Because our scores aren't official submissions, per SPEC guidelines we have to declare them as internal estimates on our part.

For compilers, we use LLVM for both the C/C++ and Fortran tests; for Fortran we're using the Flang compiler. The rationale for using LLVM over GCC is to enable better cross-platform comparisons with platforms that only have LLVM support, as well as future articles where we'll investigate this aspect further. We're not considering closed-source compilers such as MSVC or ICC.

clang version 10.0.0
clang version 7.0.1 (ssh://git@github.com/flang-compiler/flang-driver.git 24bd54da5c41af04838bbe7b68f830840d47fc03)

-Ofast -fomit-frame-pointer
-march=x86-64
-mtune=core-avx2
-mfma -mavx -mavx2

Our compiler flags are straightforward, with a basic -Ofast and the relevant ISA switches to allow for AVX2 instructions.
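
To illustrate what those ISA switches actually buy (a hand-written sketch, not code from the SPEC suites): with -Ofast -mavx2 -mfma, clang can auto-vectorize a simple multiply-add loop into 256-bit FMA instructions, while the plain x86-64 baseline limits it to 128-bit SSE2 without fused multiply-adds.

/* Toy loop of the kind the compiler can auto-vectorize once AVX2
 * and FMA are enabled, e.g.:
 *   clang -Ofast -mavx2 -mfma -S saxpy.c
 * The multiply-add below then compiles to 256-bit vfmadd
 * instructions; under the plain x86-64 baseline it only gets
 * 128-bit SSE2 vectors and separate multiply and add steps. */
void saxpy(float *restrict y, const float *restrict x, float a, int n) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];   /* fuses into one FMA per element */
}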

To note, the SPEC licence requires that any benchmark results have to be labelled 'estimated' until they are verified on the SPEC website as a meaningful representation of the expected performance. This is most often done by the big companies and OEMs to showcase performance to customers; however, it is quite over the top for what we do as reviewers.

We start off with SPEC2006, a legacy benchmark by now, but one whose microarchitectural behaviour is very well understood, which helps us analyse the new Zen3 design:

SPECint2006 Speed Estimated Scores

In SPECint2006, we’re seeing healthy performance upticks across the board for many of the tests. Particularly standing out is the new 462.libquantum behaviour of the Ryzen 9 5950X which is posting more than double the performance of its predecessor, likely thanks to the new much larger cache, but also the overall higher load/store throughput of the new core as well as the memory improvements of the microarchitecture.

We’re also seeing very large performance increases for 429.mcf and 471.omnetpp which are memory latency sensitive: Although the new design doesn’t actually change the structural latency to DRAM all that much, the new core’s much improved and smarter handling of memory through new cache-line replacement algorithms, new prefetchers, seem to have a large impact on these workloads.

400.perlbench is interesting as it's not really a memory-heavy or L3-heavy workload, but instead has a lot of instruction pressure. I think that Zen3's large boost here might be due to the new optimised OP-cache handling, as that would make the most sense out of all the changes in the new design: it's one of the tests with a very high L1I cache miss rate.

A simpler test that's solely integer execution bound and sits almost entirely in the L1D is 456.hmmer, and here we're seeing only a minor performance uplift, roughly linear with the clock frequency increase of the new design, and only a 1% IPC uplift. Given that Zen3 doesn't actually change its integer execution width in terms of ALUs or overall machine width, it makes sense not to see much improvement here.

SPECfp2006(C/C++) Speed Estimated Scores

In SPECfp2006, we’re seeing more healthy boosts in performance across the board which is mostly due to the more memory intensive nature of the workloads, and we’re seeing large IPC uplifts in most tests due to the larger L3 as well as the better memory capabilities of the core. 433.milc sees a smaller uplift than the other benchmarks and that’s due to it being more DRAM memory bandwidth bound. 482.spinx is also seeing a smaller 9% IPC uplift due to it not being that memory intensive.

SPEC2006 Speed Estimated Total

In the overall SPEC2006 scores, the new Ryzen 5000 series parts are showcasing very large generational performance uplifts, with margins well beyond those of the previous generation as well as the nearest competition. Against the 3950X, the new 5950X is 36% faster in the integer workloads and 29% faster in the floating-point workloads, both massive uplifts. AMD is also leaving Intel behind here, with 17% and 25% performance advantages over the 10900K.

SPEC2006 Speed Estimated PPC

In the performance-per-clock uplifts, measured at peak performance, we're seeing a 20.87% median and 24.99% average improvement for the new Zen3 microarchitecture when compared to last year's Zen2 design. AMD is still quite a bit behind Apple's A13 and A14 (review coming soon), but that's natural given the almost doubled microarchitectural width of Apple's designs, which run at lower frequencies. It'll be interesting to get Apple Silicon Mac devices tested and compared against the new AMD parts.
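
As an aside on how these per-clock figures are derived: since the scores are measured at each part's peak frequency, the IPC uplift is simply the score ratio divided by the frequency ratio. A quick sketch with hypothetical placeholder numbers (not the charted values):

/* Sketch of the performance-per-clock derivation: divide the
 * frequency uplift out of the raw score uplift. The inputs are
 * hypothetical placeholders, not the measured chart values. */
#include <stdio.h>

int main(void) {
    double score_old = 50.0, freq_old = 4.6;   /* e.g. a Zen2 part, GHz */
    double score_new = 66.0, freq_new = 4.9;   /* e.g. a Zen3 part, GHz */

    double perf_ratio = score_new / score_old;          /* total uplift */
    double freq_ratio = freq_new / freq_old;            /* clock uplift */
    double ppc_uplift = perf_ratio / freq_ratio - 1.0;  /* per-clock    */

    printf("per-clock (IPC) uplift: %.1f%%\n", ppc_uplift * 100.0);
    return 0;
}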

SPECint2017 Rate-1 Estimated Scores

Moving on to the newer SPECint2017, we again see large improvements from Zen3, varying with the microarchitectural characteristics of the respective workloads. 500.perlbench_r again shows a massive 37% IPC uplift for the new architecture, very likely due to the new design and optimisations of Zen3's OP-cache.

520.omnetpp also shows a 42% IPC uplift thanks to the memory technologies employed in the new design. Execution-throughput-limited workloads such as 525.x264 are seeing smaller increases of 9.5% IPC, again due to the overall fewer changes to this aspect of the microarchitecture.

SPECfp2017 Rate-1 Estimated Scores

In SPECfp2017, we see a similar situation to the previous workloads. Execution-bound workloads such as 508.namd or 538.imagick are seeing smaller IPC increases in the 6-9% range. Similarly, DRAM-bandwidth-starved workloads such as 549.fotonik3d and 554.roms are also showcasing smaller IPC boosts of 2.7-8.6%.

The more hybrid workloads which make good use of the caches are seeing larger performance improvements across the board, peaking at a 35.6% IPC uplift for 519.lbm.

SPEC2017 Rate-1 Estimated Total

In the SPEC2017 suite total performance figures, the new Ryzen 5000 parts also shine thanks to their frequency and IPC uplifts. Generationally, across the int2017 and fp2017 suites, we're seeing 32% and 25% performance boosts over the 3950X, which are very impressive figures.

IPC-wise, looking at a histogram of all SPEC workloads, we're seeing a median of 18.86%, which is very near AMD's proclaimed 19% figure, and an average of 21.38%, although if we discount libquantum that average goes down to 19.12%. AMD's marketing numbers are thus pretty much validated, as they've essentially hit their proclaimed figure with the new Zen3 microarchitecture.

SPEC2017 Rate-1 Estimated PPC

On the competitive landscape, this now makes Zen3 the undisputed leader in the x86 space, leaving Intel's old Skylake designs far behind and also outpacing the newer Sunny Cove and Willow Cove cores.

Overall, the new Ryzen 5000 series and the Zen3 microarchitecture look like absolute winners, and there's no dispute about them taking the performance crown. AMD has achieved this through an uplift in frequency, as well as a notable 19% IPC uplift thanks to a smarter design.

What I hope to see from AMD in future designs is a more aggressive push towards a wider core design with even larger IPC jumps. In workloads that are more execution bound, Zen3 isn't all that big of an uplift. The move from a 16MB to a 32MB L3 cache isn't something that'll be repeated any time soon in terms of improvement magnitude, and it's also very doubtful we'll see significant frequency uplifts in coming generations. With Moore's Law slowing, going wider and smarter seems to be the only way forward for advancing performance.

Comments

  • TheinsanegamerN - Tuesday, November 10, 2020 - link

    However, AMD's boost algorithm is very temperature sensitive. Those coolers may work fine, but if they get into the 70°C range you're losing max performance to higher temperatures.
  • Andrew LB - Sunday, December 13, 2020 - link

    Blah blah....

    Ryzen 5800X @ 3.6-4.7GHz: 219W and 82°C.
    Ryzen 5800X @ 4.7GHz locked: 231W and 88°C.

    Fractal Celsius+ S28 Prisma 280mm AIO CPU cooler at full fan and pump speed
    https://www.kitguru.net/components/cpu/luke-hill/a...

    If you actually set your voltages on Intel chips they stay cool. My i7-10700K @ 5.0GHz all-core locked never goes above 70°C.
  • Count Rushmore - Friday, November 6, 2020 - link

    It took 3 days... finally the article load-up.
    AT seriously need to upgrade their server (or I need to stop using IE6).
  • name99 - Friday, November 6, 2020 - link

    "AMD wouldn’t exactly detail what this means but we suspect that this could allude to now two branch predictions per cycle instead of just one"

    So imagine you have wide OoO CPU. How do you design fetch? The current state of the art (and presumably AMD have aspects of this, though perhaps not the *entire* package) goes as follows:

    Instructions come as runs of sequential instructions separated by branches. At a branch you may HAVE to fetch instructions from a new address (think call, goto, return) or you may perhaps continue to the next address (think non-taken branch).
    So an intermediate-complexity fetch engine will bring in blobs of instructions, up to N (say 6 or 8), with the run of instructions terminating when:
    - I've scooped up N instructions, or
    - I've hit a branch, or
    - I've hit the end of a cache line.

    Basically every cycle should consist of pulling in the longest run of instructions possible subject to the above rules.

    The way really advanced fetch works is totally decoupled from the rest of the CPU. Every cycle the fetch engine predicts the next fetch address (from some hierarchy of : check the link stack, check the BTB, increment the PC), and fetches as much as possible from that address. These are stuck in a queue connected to decode, and ideally that queue would never run dry.

    BUT: on average there is about a branch every 6 instructions.
    Now suppose you want to sustain, let's say, 8-wide. That means that you might set N at 8, but most of the time you'll fetch only 6 or so instructions because you'll bail out on hitting a branch before you have a full 8 instructions in your scoop. So you're mostly unable to go beyond an IPC of 6, even if *everything* else is ideal.

    BUT most branches are conditional. And about half of those are not taken. This means that if you can generate TWO branch predictions per cycle then much of the time the first branch will not be taken, can be ignored, and fetch can continue in a straight line past it. Big win! Half the time you can pull in only 6 instructions, but the other half you could pull in maybe 12 instructions. Basically, if you want to sustain 8 wide, you'd probably want to pull in at least 10 or 12 instructions under best-case conditions, to help fill up the queue for the cases where you pull in fewer than 8 instructions (first branch is taken, or you reach the end of the cache line).
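
    A toy Monte Carlo of that arithmetic (my own sketch; the 1-in-6 branch rate, 50% taken rate, and 8-wide window are just the assumptions stated above):

    /* Toy Monte Carlo of the fetch-bandwidth argument: scoop up to
     * 8 instructions per cycle, ending the scoop at a taken branch
     * or when the per-cycle branch-prediction budget runs out.
     * Assumes a branch every 6th instruction and 50% taken; cache
     * line ends are ignored for simplicity. */
    #include <stdio.h>
    #include <stdlib.h>

    #define WINDOW 8
    #define CYCLES 1000000

    static int fetch_cycle(int predictions_per_cycle) {
        int fetched = 0, predictions_used = 0;
        while (fetched < WINDOW) {
            fetched++;
            if (rand() % 6 == 0) {          /* ~1-in-6 instructions is a branch */
                predictions_used++;
                int taken = rand() % 2;     /* ~50% of branches taken */
                if (taken || predictions_used >= predictions_per_cycle)
                    break;                  /* redirect, or out of prediction slots */
            }
        }
        return fetched;
    }

    int main(void) {
        for (int preds = 1; preds <= 2; preds++) {
            long total = 0;
            for (long c = 0; c < CYCLES; c++)
                total += fetch_cycle(preds);
            printf("%d prediction(s)/cycle: %.2f instructions/cycle\n",
                   preds, (double)total / CYCLES);
        }
        return 0;
    }

    On this toy model, one prediction per cycle tops out well below the 8-wide window, while the second prediction lets fetch continue past not-taken branches and noticeably closes the gap.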

    Now there are some technicalities here.
    One is "how does fetch know where the branches are, to know when to stop fetching". This is usually done via pre-decode bits living in the I-cache, and set by a kinda decode when the line is first pulled into the I-cache. (I think x86 also does this, but I have no idea how. It's obviously much easier for a sane ISA like ARM, POWER, even z.)
    Second, and more interesting, is that you're actually performing two DIFFERENT TYPES of prediction, which makes it somewhat easier from a bandwidth point of view. The prediction on the first branch is purely "taken/not taken", and all you care about is "not taken"; the prediction on the second branch is more sophisticated because if you predict taken you also have to predict the target, which means dealing with the BTB or link stack.

    But you don't have to predict TWO DIFFERENT "next fetch addresses" per cycle, which makes it somewhat easier.
    Note also that any CPU that uses two level branch prediction is, I think, already doing two branch predictions per cycle, even if it doesn't look like it. Think about it: how do you USE a large (but slow) second level pool of branch prediction information?
    You run the async fetch engine primarily from the first level; and this gives a constant stream of "runs of instructions, separated by branches" with zero delay cycles between runs. Great, zero cycle branches, we all want that. BUT for the predictors to generate a new result in a single cycle they can't be too large.
    So you also run a separate engine, delayed a cycle or two, based on the larger pool of second level branch data, checking the predictions of the async engine. If there's a disagreement you flush whatever was fetched past that point (which hopefully is still just in the fetch queue...) and resteer. This will give you a one (or three or four) cycle bubble in the fetch stream, which is not ideal, but
    - it doesn't happen that often
    - it's a lot better catching a bad prediction very early in fetch, rather than much later in execution
    - hopefully the fetch queue is full enough, and filled fast enough, that perhaps it's not even drained by the time decode has walked along it to the point at which the re-steer occurred...

    This second (checking) branch prediction doesn't ever get mentioned, but it is there behind the scenes, even when the CPU is ostensibly doing only a single prediction per cycle.
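
    To make the cost side of that trade concrete, here's a toy model (mine, with an assumed 3% disagreement rate and a 2-cycle check delay) of how little fetch bandwidth the delayed checker costs:

    /* Toy model of the delayed checker: the fast L1 predictor
     * steers fetch with zero bubbles; the larger L2 predictor
     * verifies each prediction CHECK_DELAY cycles later. On a
     * disagreement, the wrong-path groups fetched in between are
     * flushed and refetched, plus a one-cycle re-steer bubble.
     * The 3% disagreement rate is an assumed figure. */
    #include <stdio.h>
    #include <stdlib.h>

    #define GROUPS       1000000   /* correct-path fetch groups needed */
    #define CHECK_DELAY  2         /* cycles until the L2 check lands  */
    #define DISAGREE_PCT 3         /* how often L2 overrules L1        */

    int main(void) {
        long cycles = 0;
        for (long g = 0; g < GROUPS; g++) {
            cycles++;                          /* fetch one group */
            if (rand() % 100 < DISAGREE_PCT)   /* checker overrules L1 */
                cycles += CHECK_DELAY + 1;     /* refetch flushed groups
                                                  plus a re-steer bubble */
        }
        printf("effective fetch rate: %.2f groups/cycle\n",
               (double)GROUPS / cycles);
        return 0;
    }

    With these assumptions the re-steers cost under 10% of fetch throughput, which is cheap for the accuracy the larger tables buy.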

    There are other crazy things that happen in modern fetch engines (which are basically in themselves as complicated as a whole CPU from 20 years ago).

    One interesting idea is to use the same data that is informing the async fetch engine to inform prefetch. The idea is that you now have essentially two fetch engines running. One is as I described above; the second ONLY cares about the stream of TAKEN branches, and follows that stream as rapidly as possible, ensuring that each line referenced by this stream is being pulled into the I-cache. (You will recognize this as something like a very specialized form of run-ahead.)
    In principle this should be perfect -- the I prefetcher and branch-prediction are both trying to solve the *exact* same problem, so pooling their resources should be optimal! In practice, so far this hasn't yet been perfected; the best simulations using this idea are a very few percent behind the best simulations using a different I prefetch technology. But IMHO this is mostly a consequence of this being a fairly new idea that has so far been explored mainly by using pre-existing branch predictors, rather than designing a branch predictor store that's optimal for both tasks.
    The main difference is that what matters for prefetching is "far future" branches, branches somewhat beyond where I am now, so that there's plenty of time to pull in the line all the way from RAM. And existing branch predictors have had no incentive to hold onto that sort of far future prediction state. HOWEVER
    A second interesting idea is what IBM has been doing for two or three years now. They store branch prediction state in what they call L2 storage but, to avoid confusion, I'll call a cold cache. This is stale/far-future branch prediction data that goes unused for a while but, on triggering events, that cold-cache data will be swapped into the branch prediction storage so that the branch predictors are ready to go for the new context in which they find themselves.

    I don't believe IBM use this to drive their I-prefetcher, but obviously it is a great solution to the problem I described above and I suspect this will be where all the performance CPUs eventually find themselves over the next few years. (Apple and IBM probably first, because Apple is Apple, and IBM has the hard part of the solution already in place; then ARM because they're smart and trying hard; then AMD because they're also smart but their technology cycles are slower than ARM's; and finally Intel because, well, they're Intel and have been running on fumes for a few years now.)
    (Note of course this only solves I-prefetch, which is nice and important; but D-prefetch remains as a difficult and different problem.)
  • name99 - Friday, November 6, 2020 - link

    Oh, one more thing. I referred to "width" of the CPU above. This becomes an ever vaguer term every year. The basic points are two:

    - when OoO started, it seemed reasonable to scale every step of the pipeline together. Make the CPU 4-wide. So it can fetch up to 4 instructions/cycle. decode up to 4, issue up to 4, retire up to 4. BUT if you do this you're losing performance every step of the way. Every cycle that fetches only 3 instructions can never make that up; likewise every cycle that only issues 3 instructions.

    - so once you have enough transistors available for better designs, you need to ask yourself what's the RATE-LIMITING step? For x86 that's probably in fetch and decode, but let's consider sane ISAs like ARM. There the rate limiting step is probably register rename. So let's assume your max rename bandwidth is 6 instructions/cycle. You actually want to run the rest of your machinery at something like 7 or 8 wide because (by definition) you CAN do so (they are not rate limiting, so they can be grown). And by running them wider you can ensure that the inevitable hiccups along the way are mostly hidden by queues, and your rename machinery is running at full speed, 6-wide each and every cycle, rather than frequently running at 5 or 4 wide because of some unfortunate glitch upstream.
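
    A toy queue model of that argument (my own sketch; the 10% front-end hiccup rate and 32-entry queue are arbitrary assumptions):

    /* Rename is capped at 6/cycle; the front end occasionally
     * hiccups and delivers nothing. At 6-wide delivery, slots lost
     * to hiccups are gone for good; at 8-wide, the decode queue
     * banks the surplus and rename catches back up. The hiccup
     * rate and queue depth are arbitrary illustrative numbers. */
    #include <stdio.h>
    #include <stdlib.h>

    #define CYCLES     1000000
    #define RENAME_W   6
    #define QUEUE_CAP  32
    #define HICCUP_PCT 10

    static double run(int frontend_width) {
        long queue = 0, renamed = 0;
        for (long c = 0; c < CYCLES; c++) {
            if (rand() % 100 >= HICCUP_PCT) {   /* front end delivers */
                queue += frontend_width;
                if (queue > QUEUE_CAP) queue = QUEUE_CAP;
            }
            long r = queue < RENAME_W ? queue : RENAME_W;
            renamed += r;                        /* rename drains the queue */
            queue -= r;
        }
        return (double)renamed / CYCLES;
    }

    int main(void) {
        printf("6-wide front end: %.2f renames/cycle\n", run(6));
        printf("8-wide front end: %.2f renames/cycle\n", run(8));
        return 0;
    }

    In this model the 8-wide front end keeps rename essentially saturated at its full 6-wide rate despite the hiccups, while the matched 6-wide front end permanently loses every stalled slot.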
  • Spunjji - Monday, November 9, 2020 - link

    These were interesting posts. Thank you!
  • GeoffreyA - Monday, November 9, 2020 - link

    Yes, excellent posts. Thanks.

    Touching on width, I was expecting Zen 3 to add another decoder and take it up to 5-wide decode (like Skylake onwards). Zen 3's keeping it at 4 makes good sense though, considering their constraint of not raising power. Another decoder might have raised IPC but would have likely picked up power quite a bit.
  • ignizkrizalid - Saturday, November 7, 2020 - link

    Rip Intel no matter how hard you try squeezing Intel sometimes on top within your graphics! stupid site bias and unreliable if this site was to be truth why not do a live video comparison side by side using 3600 or 4000Mhz ram so we can see the actual numbers and be 100% assured the graphic table is not manipulated in any way, yea I know you will never do it! personally I don't trust these "reviews" that can be manipulated as desired, I respect live video comparison with nothing to hide to the public. Rip Intel Rip Intel.
  • Spunjji - Monday, November 9, 2020 - link

    I... don't think this makes an awful lot of sense, tbh.
  • MDD1963 - Saturday, November 7, 2020 - link

    It would be interesting to also see the various results of the 10900K the way most people actually run them on Z490 boards, i.e., with higher RAM clocks, MCE enabled, etc.; do the equivalent tuning with the 5000 series, which I'm sure will run faster than DDR4-3200, plus perhaps a small all-core overclock.
