CPU ST Performance: Not Much Change from M1

Apple didn’t talk much about the core performance of the new M1 Pro and Max, and this is likely because it hasn’t really changed all that much compared to the M1. We’re still seeing the same Firestorm performance cores, and they’re still clocked at 3.23GHz. The new chips have larger caches and more DRAM bandwidth, but under ST scenarios we’re not expecting large differences.

When we first tested the M1 last year, we had compiled SPEC under Apple’s Xcode compiler, and we lacked a Fortran compiler. For the numbers published here we’ve moved to a vanilla LLVM11 toolchain and make use of GFortran (GCC11), allowing for more apples-to-apples comparisons. The figures don’t change much for the C/C++ workloads, but we now get a more complete set of figures for the suite thanks to the Fortran workloads. We keep flags very simple at just “-Ofast” and nothing else.
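In SPEC config-file terms, the toolchain setup boils down to something like the following. This is an illustrative fragment only, not our actual harness configuration; the variable layout is an assumption, but the compilers and the single optimization flag match what’s described above:

```
# Illustrative SPEC toolchain fragment -- flags kept to just -Ofast
CC      = clang    -Ofast    # vanilla LLVM 11, C workloads
CXX     = clang++  -Ofast    # vanilla LLVM 11, C++ workloads
FC      = gfortran -Ofast    # GFortran (GCC 11), Fortran workloads
```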

SPECint2017 Rate-1 Estimated Scores

In SPECint2017, the differences to the M1 are small. 523.xalancbmk shows a large performance improvement, however I don’t think this is due to changes in the chip, but rather to a change in Apple’s memory allocator in macOS 12. Unfortunately, we no longer have an M1 device available to us, so those are still older figures from earlier in the year on macOS 11.

Against the competition, the M1 Max either has a significant performance lead, or is at least able to reach parity with the best AMD and Intel have to offer. The chip doesn’t change the landscape all that much, however.

SPECfp2017 Rate-1 Estimated Scores

SPECfp2017 also doesn’t change dramatically. 549.fotonik3d does score quite a bit better than on the M1, which could be tied to the additional DRAM bandwidth available, as this workload puts extreme stress on the memory subsystem. Otherwise, the scores change very little compared to the M1, which on average remains well ahead of the laptop competition.

SPEC2017 Rate-1 Estimated Total

The M1 Max lands as the top-performing laptop chip in SPECint2017, just shy of being the best CPU overall, a title which still goes to the 5950X. In the FP suite, it takes over and maintains the crown the M1 already held.
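For reference, a suite-level “estimated total” like this is the geometric mean of the per-workload rate-1 scores, per SPEC’s aggregation convention. A minimal sketch of that calculation (the subscores below are made-up illustrative numbers, not our measured figures):

```python
import math

def spec_estimated_total(subscores):
    """Return the geometric mean of per-workload SPEC rate-1 scores,
    which is how suite-level SPEC totals are aggregated."""
    log_sum = sum(math.log(s) for s in subscores)
    return math.exp(log_sum / len(subscores))

# Made-up subscores for illustration only.
print(round(spec_estimated_total([8.0, 10.0, 12.5]), 3))  # → 10.0
```

The geometric mean is used so that a single outlier workload (such as the allocator-sensitive 523.xalancbmk) can’t dominate the overall figure the way it would in an arithmetic mean.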

Overall, the new M1 Max delivers no large surprises in single-threaded performance metrics, nor did we expect it to.

493 Comments


  • Speedfriend - Tuesday, October 26, 2021 - link

    This isn't their first attempt. They have been building laptop versions of the A-series chips for years now for testing, and there have been leaks about this for years. Assuming that the world's best SoC design team will make a significant advancement from here, after 10 years of progress on the A series, is hoping for a bit much.
  • robotManThingy - Tuesday, October 26, 2021 - link

    All of the games are x86 titles translated by Apple's Rosetta, which means they are meaningless when it comes to determining the speed of the M1 Max or any other M1 chip.
  • TheinsanegamerN - Tuesday, October 26, 2021 - link

    Real-world software isnt worthless.
  • AshlayW - Tuesday, October 26, 2021 - link

    "The M1X is slightly slower than the RTX-3080, at least on-paper and in synthetic benchmarks."
    Not quite: it matches the 3080 in mobile-focused synthetics, where Apple is focusing on pretending to have best-in-class performance, and then its true colours show in actual video gaming. This GPU is for content creators (where it's excellent), but you don't just out-muscle the decades of GPU IP optimisation for gaming, in hardware and software, that AMD/NVIDIA have. Furthermore, the M1 Max has significantly fewer GPU resources than the GA104 chip in the mobile 3080, which here is actually limited to quite low clock speeds; it is no surprise the 3080 is faster in actual games, by a lot.
  • TheinsanegamerN - Tuesday, October 26, 2021 - link

    Rarely do synthetics ever line up with real-world performance, especially in games. Matching mobile 3060 performance is already pretty good.
  • NPPraxis - Tuesday, October 26, 2021 - link

    Where are you seeing "actual gaming performance" benchmarks that you can compare? There are very few AAA games available for Mac to begin with; most of the ones that do exist are running under Rosetta 2 or not using Metal; and Windows games running in VMs or under WINE + Rosetta 2 have massive overhead.

    The number of actual games running is tiny and basically the only benchmark I've seen is Shadow of the Tomb Raider. I need a higher sample size to state anything definitively.

    That said, I wouldn't be shocked if you're right; Apple has always targeted workstation GPU buyers more than gaming GPU buyers.
  • GigaFlopped - Tuesday, October 26, 2021 - link

    The games tested were already ported over to the Metal API; it was only the CPU side that was emulated. We've seen emulated benchmarks before, and the M1 and Rosetta do a pretty decent job of it, and running the games at 4K would have pretty much removed any potential CPU bottleneck. So what you see is pretty much what you'll get in terms of real-world rasterization performance. They might squeeze an extra 5% or so out of it, but don't expect any miracles: it's an RTX 3060 Mobile competitor in terms of rasterization, which is certainly not to be sniffed at and a very good achievement. The fact that it can match the 3060 whilst consuming less power is a feat of its own, considering this is Apple's first real attempt at a desktop-performance-level GPU.
  • lilkwarrior - Friday, November 5, 2021 - link

    These M1 chips aren't appropriate for serious AAA gaming. They don't even have hardware-accelerated ray tracing and other core DX12U/Vulkan tech for current-gen games going forward. Want to preview that? Play Metro Exodus: Enhanced Edition.
  • OrphanSource - Thursday, May 26, 2022 - link

    you 'premium gaming' encephalitics are the scum of the GD earth. Oh, you can only play your AAA money pit cash grabs at 108 fps instead of 145fps at FOURTEEN FORTY PEE on HIGH QUALITY SETTING? OMG, IT"S AS BAD AS THE RTX 3060? THE OBJECTIVELY MOST COST/FRAME EFFECTIVE GRAPHICS CARD OF 2021??? WOW THAT SOUNDS FUCKING AMAZING!

    Wait, no I, misunderstood, you are saying that's a bad thing? Oh you poor, old, blind, incontinent man... well, at least I THINK you are blind if you need 2k resolution at well over 100fps across the most graphics intensive games of 2020/2021 to see what's going on clearly enough to EVEN REMOTELY enjoy the $75 drug you pay for (the incontinence I assume because you 1. clearly wouldn't give a sh*t about these top end, graphics obsessed metrics and 2. have literally nothing else to do except shell out enough money to feed a family a small family for a week with the cost of each of your cutting edge games UNLESS you were homebound in some way?)

    Maybe stop being the reason why the gaming industry only cares about improving their graphics at the cost of everything else. Maybe stop being the reason why graphics cards are so wildly expensive that scientific researchers can't get the tools they need to do the more complex processing needed to fold proteins and cure cancer, or use machine learning to push ahead in scientific problems that resist our conventional means of analysis

    KYS fool
  • BillBear - Monday, October 25, 2021 - link

    The performance numbers would look even nicer if we had numbers for that GE76 Raider when it's unplugged from the wall and has to throttle the CPU and GPU way the hell down.

    How about testing both on battery only?
