SPEC2017 Multi-Threaded Results

Single-threaded performance is only one element of a multi-core processor's overall performance, so it's time to look at multi-threaded performance in SPEC2017. Our single-threaded SPEC2017 testing showed Zen 4 and Raptor Lake consistently at loggerheads, so let's see how the data shakes out in the Rate-N multi-threaded section.

SPECint2017 Rate-N Estimated Scores

Looking at the data in the first part of our SPEC2017 (int) nT testing, we're seeing similar trade-offs between the Raptor Lake (13th Gen) and Zen 4 (Ryzen 7000) platforms. While Raptor Lake won the 500.perlbench_r single-threaded test, Zen 4 leads by around 13% in multi-threaded performance, despite the Core i9-13900K having eight more physical cores (albeit efficiency cores).

One stand-out part of our SPECint2017 multi-threaded testing is just how far ahead the Core i9-13900K is of the previous Core i9-12900K. This comes thanks to more cores (2x the E-cores) and higher turbo frequencies. For example, in the 525.x264_r test, the Core i9-13900K is nearly 50% faster than the i9-12900K; the only test where Raptor Lake failed to outperform Alder Lake was 502.gcc_r.

SPECfp2017 Rate-N Estimated Scores

Moving on to the second half of our SPEC2017 multi-threaded results (Floating Point), the Core i9-13900K really does show itself to be a formidable force against Zen 4: in the majority of SPECfp2017 tests, the Core i9-13900K is ahead in multi-threaded performance. The improvements in overall performance from Rocket Lake (11th Gen) to Alder Lake were decent, but the improvement from Rocket Lake in Q1 2021 to Raptor Lake today – a more useful metric for the usual 2-4 year hardware upgrade cycle – is very impressive indeed.

Summarizing the SPEC2017 multi-threaded results: in some areas Zen 4 is the winner, and in others Raptor Lake (Raptor Cove + Gracemont) takes it. Quite a few of the tests are incredibly close, and without wanting to sound negative about the Zen 4 architecture, Intel has done a very good job closing that initial gap and making things competitive against AMD's best.
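
For readers tallying up the suite-level comparison themselves: SPEC2017 composite scores are the geometric mean of the per-benchmark ratios, which is why a single outlier result (like the 502.gcc_r anomaly discussed below) drags the overall score less than an arithmetic mean would. A minimal sketch, using illustrative ratios rather than our measured data:

```python
from math import prod

def spec_composite(ratios):
    """Geometric mean of per-benchmark SPEC ratios (the composite score)."""
    return prod(ratios) ** (1.0 / len(ratios))

# Illustrative ratios only, not measured results.
ratios = [110.0, 95.0, 102.0, 49.1, 120.0]
print(round(spec_composite(ratios), 1))
```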

Update: 07/18/23

Following on from our initial SPECint2017 Rate multi-threaded testing of the Core i9-13900K, we wanted to investigate the 502.gcc_r result, which we believed to be an anomaly of sorts. We observed a score of 49.1 on the Core i9-13900K, which, compared to the previous generation Core i9-12900K, is a regression in performance; the result was around 37% lower than the previous generation.
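
The "37% lower" figure is a straight relative difference between the two scores. As a quick sanity check (the i9-12900K figure of roughly 78 here is back-computed from the stated percentage, not a separately quoted result):

```python
def pct_change(new, old):
    """Relative change of `new` vs `old`, in percent (negative = regression)."""
    return (new - old) / old * 100.0

# 49.1 on the i9-13900K vs roughly 78 implied for the i9-12900K
print(round(pct_change(49.1, 78.0), 1))  # -37.1
```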

To investigate further, we re-tested the Core i9-13900K with SPEC2017 Rate to identify any issues, see whether we could replicate the anomaly, and, at the very least, provide a more up-to-date set of results.
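
For context on what a Rate-N score represents, and why re-runs mostly expose run-to-run timing variance: each benchmark's ratio is the number of concurrent copies multiplied by SPEC's reference time over the measured time. A sketch, with purely hypothetical times:

```python
def spec_rate_ratio(copies, ref_time_s, measured_time_s):
    """SPECrate per-benchmark ratio: copies * (reference time / measured time)."""
    return copies * ref_time_s / measured_time_s

# Illustrative: 32 copies with made-up reference and measured times.
print(round(spec_rate_ratio(32, 1400.0, 900.0), 1))  # 49.8
```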

SPECint2017 Rate-N Estimated Scores (i9-13900K)

Looking at our updated SPECint2017 results, we are comparing the original Core i9-13900K data against the new data. Although the results are very similar in many cases, we can see slight regressions in a few tests, which could be attributed to various factors, including Windows 11's scheduler, power budget, or simple run-to-run variance.

The biggest highlight of our re-test is the 502.gcc_r result, which appears to have been an anomaly in the original run. We've run SPEC2017 numerous times to confirm that the results above are exactly where they should be.

SPECfp2017 Rate-N Estimated Scores (i9-13900K)

Focusing on our latest SPECfp2017 Rate-N results, we see a similar story, with multi-threaded SPEC2017 performance very close to our original testing. In fact, a couple of tests yielded slightly higher results, which could simply be down to OS-level improvements such as scheduler maturity, or overall firmware maturity. These include 527.cam4_r, 549.fotonik3d_r, and 554.roms_r, which show better gains in our latest testing, especially compared to the Core i9-12900K, which this chip replaced in the market.

The biggest takeaway from our re-testing is the updated SPECint2017 Rate-N result for 502.gcc_r, which shows that our original results were nothing more than an anomaly, and we've been unable to replicate the issue.

Update: 07/22/23

We are aware of potential issues with memory capacity, so we are re-running the Core i9-13900K with higher-capacity DDR5 memory at JEDEC settings relevant to the platform. We have done a run with 64 GB instead of the regular 32 GB, which on the Core i9-13900K works out to 2 GB per thread (2 GB x 32 threads = 64 GB). Looking at preliminary results, we aren't seeing any major variance.
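
The capacity requirement scales with the copy count: a Rate run wants on the order of 2 GB per copy, and the i9-13900K exposes 32 threads (8 P-cores with SMT plus 16 E-cores), which is where the 64 GB figure comes from:

```python
def required_memory_gb(copies, gb_per_copy=2):
    """Total DRAM needed for a SPECrate run at a given per-copy footprint."""
    return copies * gb_per_copy

threads = 8 * 2 + 16  # 8 P-cores with SMT plus 16 E-cores = 32 threads
print(required_memory_gb(threads))  # 64
```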

169 Comments


  • m53 - Thursday, October 20, 2022 - link

    PCs are idle (or used for light browsing, reading news, watching YouTube or a movie, etc.) most of the time. Intel idles at around 12W due to E cores while AMD idles at around 45W, which makes the energy consumption 4x.
  • t.s - Thursday, October 20, 2022 - link

    Idles around 45W? Sources? My 5600G idles at 11W; others, around 7 to 17W.
  • titaniumrock - Thursday, October 20, 2022 - link

    here is the source link https://www.youtube.com/watch?v=UNmpVvTUkJE&li...
  • t.s - Friday, October 21, 2022 - link

    And where does it state the AMD vs Intel watt-for-watt comparison?
  • Wrs - Friday, October 21, 2022 - link

    A 5600G is a monolithic chip, just like the Intels. A 7600X or 7950X is a multi-chip module, though, with 2 or 3 dies, and the IOD's idle power is very substantial now with all the PCIe 5.0 lanes. Bottom line: Zen 4 is more efficient when doing major work, courtesy of being one process generation ahead, but Raptor Lake and Alder Lake idle lower. If you want low idle with Zen 4, wait for the monolithic SoC variants like your 5600G.
  • tygrus - Saturday, October 22, 2022 - link

    They don't constantly run at maximum power consumption in all workloads. They use less while gaming, or with more integer & less FP/AVX work. Usage is probably highest when they have a performance lead over the other. AMD can run at lower power limits & lose only a few % in many cases.
  • neblogai - Thursday, October 20, 2022 - link

    I was hoping for Ryzen 7000X iGPU benchmarks too. There are no proper comparisons of them vs Intel's 32EU iGPUs on the internet.
  • nandnandnand - Thursday, October 20, 2022 - link

    ETA Prime 7700X iGPU tests (no comparisons):
    https://www.youtube.com/watch?v=p4cwNn4kI6M (gaming)
    https://www.youtube.com/watch?v=MnSVPM78ZaQ (emulation)

    7600X vs. 12900 vs. 5700G
    https://arstechnica.com/gadgets/2022/09/ryzen-7600...

    All Zen 4 vs. 12900K vs. others
    https://www.techpowerup.com/review/amd-ryzen-7-770...

    It's similar to the UHD 770 in Alder Lake, sometimes a little better or worse. About half the performance of a 5700G which is impressive for 2 CUs.

    UHD 770 in Raptor Lake gets +100 MHz across the board, so that could make a slight difference.
  • neblogai - Thursday, October 20, 2022 - link

    Thanks. I liked the ones on TechPowerUp, as they include tests at 720p low and cover more than a few titles. Part of my interest is in comparing against the Tom's Hardware 7950 iGPU results, which looked suspiciously low for the specs, and are probably faulty: https://www.tomshardware.com/news/ryzen-7000-integ...
  • CiccioB - Thursday, October 20, 2022 - link

    About power consumption:
    I think it is pointless to measure it while running a torture benchmark that you then don't even use to compare relative performance against other CPUs.
    It would be far more useful to have measurements for some more common benchmarks, to understand how much the CPU consumes doing real work and, relative to its performance, how efficient it is.

    Just think what the results would be if the CPU were artificially limited (by BIOS/driver) in the Prime95 bench: you would measure a much lower consumption than you'd extrapolate to other tests, and you could conclude the CPU consumes a fraction of what it really does. It's the same with GPU torture benches: the max consumption in those tests is useless for understanding how much they really consume while gaming, and in fact most of them are artificially limited or simply hit the max TDP (which, again, is not a measure of power consumption).

    If you don't want to provide power consumption for most benches, at least use a bench that gives comparable performance, so that (at least for that test) one can compare efficiency.
