CPU ST Performance: Not Much Change from M1

Apple didn’t talk much about the core performance of the new M1 Pro and M1 Max, and that’s likely because it hasn’t really changed all that much compared to the M1. We’re still looking at the same Firestorm performance cores, and they’re still clocked at 3.23 GHz. The new chips have larger caches and more DRAM bandwidth, but under ST scenarios we’re not expecting large differences.

When we first tested the M1 last year, we had compiled SPEC under Apple’s Xcode compiler, and we lacked a Fortran compiler. For the numbers published here we’ve moved to a vanilla LLVM 11 toolchain and make use of GFortran (GCC 11), allowing for more apples-to-apples comparisons. The figures don’t change much for the C/C++ workloads, but we now get a more complete set of results for the suite thanks to the Fortran workloads. We keep the compiler flags very simple at just “-Ofast” and nothing else.
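
As a rough sketch of what this setup boils down to in practice (this is not the actual SPEC build harness, and the file names below are purely hypothetical), the C/C++ workloads go through plain clang while the Fortran workloads go through gfortran, with “-Ofast” as the only optimization flag:

    # Illustrative only -- not the SPEC CPU 2017 build system.
    def compile_command(sources, output):
        """Return the compiler invocation for one benchmark's source files."""
        # Fortran workloads (e.g. 503.bwaves_r) go through GFortran (GCC 11),
        # everything else through vanilla LLVM clang; the only flag is -Ofast.
        is_fortran = all(s.endswith((".f", ".f90", ".F90")) for s in sources)
        compiler = "gfortran" if is_fortran else "clang"
        return [compiler, "-Ofast", "-o", output, *sources]

    # Hypothetical file names for illustration.
    print(" ".join(compile_command(["bwaves_1.f90", "bwaves_2.f90"], "bwaves_r")))
    print(" ".join(compile_command(["x264_main.c", "x264_threads.c"], "x264_r")))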

[Graph: SPECint2017 Rate-1 Estimated Scores]

In SPECint2017, the differences from the M1 are small. 523.xalancbmk showcases a large performance improvement; however, I don’t think this is due to changes in the chip, but rather to a change in Apple’s memory allocator in macOS 12. Unfortunately, we no longer have an M1 device available to us, so those are still older figures from earlier in the year on macOS 11.

Against the competition, the M1 Max either has a significant performance lead, or is at least able to reach parity with the best AMD and Intel have to offer. The chip doesn’t change the single-threaded landscape all that much, however.

[Graph: SPECfp2017 Rate-1 Estimated Scores]

SPECfp2017 also doesn’t change dramatically. 549.fotonik3d does score quite a bit better than on the M1, which could be tied to the additional DRAM bandwidth available, as this workload puts extreme stress on the memory subsystem. Otherwise, the scores change very little compared to the M1, which on average remains well ahead of the laptop competition.

[Graph: SPEC2017 Rate-1 Estimated Total]

The M1 Max lands as the top-performing laptop chip in SPECint2017, just shy of being the best CPU overall, a title that still goes to the 5950X; in the FP suite, it is able to take the crown over from the M1 and hold onto it.
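
For context on how these totals are put together: roughly speaking, each sub-score is the ratio of SPEC’s reference time to the measured run time (with a single copy for rate-1), and the suite score is the geometric mean of those ratios. A minimal sketch of that aggregation, using made-up run times rather than our measured data:

    from math import prod

    def spec_ratio(reference_s, measured_s, copies=1):
        """Per-benchmark rate ratio: copies * reference time / measured time."""
        return copies * reference_s / measured_s

    def suite_score(ratios):
        """Overall estimated score: geometric mean of the per-benchmark ratios."""
        return prod(ratios) ** (1.0 / len(ratios))

    # Placeholder (reference, measured) times in seconds -- illustrative only,
    # not actual SPEC reference times or our measurements.
    runs = {
        "523.xalancbmk_r": (1000.0, 95.0),
        "525.x264_r":      (1700.0, 160.0),
        "531.deepsjeng_r": (1150.0, 120.0),
    }
    ratios = [spec_ratio(ref, t) for ref, t in runs.values()]
    print(f"Estimated suite score: {suite_score(ratios):.2f}")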

Overall, the new M1 Max doesn’t deliver any large surprises in single-threaded performance, nor did we expect it to.

Comments

  • taligentia - Monday, October 25, 2021 - link

    M1 destroyed its competitors in real world use. Performance/battery life was unlike anything we had seen.

    And companies like Davinci are saying that M1 Pro/Max continue this even further.

    So the idea that Apple is just doing this for benchmarks is laughable.
  • techconc - Monday, October 25, 2021 - link

    @goatfajitas Apple has never been caught cheating in benchmarks. You seem to confuse Apple with your typical Android OEM in that regard.
  • Lavkesh - Monday, October 25, 2021 - link

    Ignore him. Just a butt hurt troll with nothing else to do.
  • name99 - Monday, October 25, 2021 - link

    :eyeroll:
  • schujj07 - Monday, October 25, 2021 - link

    Basically the M1 is great in synthetic benchmarks, but once you have to run applications it falls behind. Apple made this big deal about how their GPU could compete with the mobile 3080 at 1/3 the power, all based on synthetic benchmarks. However, once the GPU is actually used, you see it is only 1/3 as fast as the mobile 3080 in real scenarios. I also do not like the use of SPEC at all. It is essentially a synthetic benchmark as well. Problem is there aren't a lot of benchmarks for the Apple ecosystem that aren't like Geekbench.
  • SarahKerrigan - Monday, October 25, 2021 - link

    SPEC isn't a synthetic - it's real workload traces.
  • schujj07 - Monday, October 25, 2021 - link

    More like it's "real world." OEMs spend hours tweaking their platforms to get the highest SPEC score possible. That really shows how SPEC sits on the border between real world and synthetic. I have been to many conferences, and never once have the decision makers at companies said they made their decision based on SPEC performance. It is essentially nothing more than a bragging right for OEMs.
  • The Garden Variety - Monday, October 25, 2021 - link

    I did some googling and could not find measurements to back up your statements. I'm interested in learning more about how the M1's real performance is dramatically below the measurements of people like Andre, et al. I've relied on Anandtech to provide a sort of quantitative middle ground, and I'm a little rocked to hear that I shouldn't. Could you point me in the right direction for articles or some kind of analysis? You don't have to do my homework for me, just let me know where I could read more.
  • schujj07 - Monday, October 25, 2021 - link

    That is the biggest problem with the Apple ecosystem. Typical benchmark suites aren't useful as many of the programs don't run on either ARM or OSX. Therefore you are left with things like Geekbench or SPEC. I will be interested in seeing what the M1 Max can do in things like Adobe. Puget Systems has their own Adobe Premiere benchmark suite, but the M1 Max hasn't been benchmarked in it; however, the M1 has. https://www.pugetsystems.com/labs/articles/Apple-M...
  • Ppietra - Monday, October 25, 2021 - link

    The Puget Premiere Pro benchmark is in the article, though I would never classify that as a CPU benchmark, nor is Premiere Pro particularly suitable for drawing general conclusions, considering that it isn't as optimised for macOS as it is for Windows.
