CPU MT Performance: A Real Monster

More interesting than ST performance is MT performance. With 8 performance cores and 2 efficiency cores, this is now the largest iteration of Apple Silicon we’ve seen.

As a prelude to the scores, I wanted to remark on some things about the previous, smaller M1 chip. The 4+4 setup on the M1 meant that a significant chunk of the MT performance was enabled by the E-cores, with the SPECint score in particular seeing a +33% performance boost versus just the 4 P-cores of the system. Because the new M1 Pro and Max have 2 fewer E-cores, assuming linear scaling, the theoretical peak of the M1 Pro/Max should be +62% over the M1. Of course, the new chips should behave better than linear, due to the better memory subsystem.
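That linear-scaling estimate can be sketched in a few lines. This is only a back-of-envelope check using the +33% E-core figure quoted above, and it assumes each P-core contributes equally:

```python
# Back-of-envelope check of the +62% linear-scaling claim, assuming
# (as stated in the text) that the M1's 4 E-cores add +33% on top of
# its 4 P-cores, and that per-core throughput scales linearly.

P_CORE = 1.0                       # one P-core's MT contribution (normalised)
E_CLUSTER_M1 = 0.33 * 4 * P_CORE   # 4 E-cores = +33% of the 4 P-core score

m1_total = 4 * P_CORE + E_CLUSTER_M1            # M1: 4P + 4E
m1_max_total = 8 * P_CORE + E_CLUSTER_M1 / 2    # M1 Pro/Max: 8P + 2E

uplift = m1_max_total / m1_total - 1
print(f"theoretical linear uplift: +{uplift:.1%}")  # roughly +62-63%
```

The result lands at roughly +63%, in line with the +62% quoted above; the small difference is just rounding in the +33% input figure.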

In the detailed scores I’m showcasing the full 8+2 scores of the new chips; later we’ll talk about the 8 P-core scores in context. I hadn’t run the MT scores of the new Fortran compiler set on the M1, so some numbers will be missing from the charts for that reason.

SPECint2017 Rate-N Estimated Scores

Looking at the data, there are very evident changes to Apple’s performance positioning with the new 10-core CPU. Yes, Apple does have 2 additional cores versus the 8-core 11980HK or the 5980HS, but the performance advantage of Apple’s silicon is far ahead of either competitor in most workloads. Again, to reiterate, we’re comparing the M1 Max against Intel’s best of the best, and also nearly AMD’s best (the 5980HX has a 45W TDP).

The one workload standing out to me the most is 502.gcc_r, where the M1 Max nearly doubles the M1 score and lands +69% ahead of the 11980HK. We’re seeing similar mind-boggling performance deltas in other workloads; memory-bound tests such as mcf and omnetpp are evidently Apple’s forte. A few of the workloads, mostly the more core-bound or L2-resident ones, show smaller advantages, or sometimes even fall behind AMD’s CPUs.

SPECfp2017 Rate-N Estimated Scores

The fp2017 suite has more workloads that are memory-bound, and it’s here that the M1 Max is absolutely absurd. The workloads that put the most pressure on memory and stress the DRAM the most, such as 503.bwaves, 519.lbm, 549.fotonik3d and 554.roms, all show performance advantages of multiple factors over the best Intel and AMD have to offer.

The performance differences here are just insane, and really showcase just how far ahead Apple’s memory subsystem is in its ability to allow the CPUs to scale to such degree in memory-bound workloads.

Even workloads which are more execution-bound, such as 511.povray or 538.imagick, are still very much clearly in favour of the M1 Max, albeit not as dramatically, achieving significantly better performance at drastically lower power.

We noted how the M1 Max CPUs are not able to fully take advantage of the DRAM bandwidth of the chip; as of writing we haven’t measured the M1 Pro, but we imagine that design wouldn’t score much lower than the M1 Max here. We can’t help but ask ourselves how much better the CPUs would score if the cluster and fabric allowed them to fully utilise the memory.
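As an aside, the kind of software-visible bandwidth probing behind such observations can be approximated, very crudely, with a timed buffer copy. The sketch below is illustrative only: a proper tool such as STREAM uses multiple threads and several access patterns, and the absolute number here depends heavily on the interpreter and allocator.

```python
# Crude single-threaded memory-bandwidth probe: time one large buffer
# copy and divide the bytes moved by the elapsed time. Illustrative
# only; real measurements need multi-threaded, multi-pattern tests.
import time

SIZE = 256 * 1024 * 1024           # 256 MiB source buffer
src = bytearray(SIZE)

start = time.perf_counter()
dst = bytes(src)                   # one full read pass + one write pass
elapsed = time.perf_counter() - start

# 2x SIZE: every byte is read once and written once
gb_per_s = (2 * SIZE) / elapsed / 1e9
print(f"copy bandwidth: {gb_per_s:.1f} GB/s")
```

A single thread like this typically saturates well below the chip’s theoretical DRAM bandwidth, which is exactly the gap between per-cluster limits and package limits discussed above.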

SPEC2017 Rate-N Estimated Total

In the aggregate scores there are two sides. On the SPECint suite, the M1 Max lies +37% ahead of the best competition; it’s a very clear win here, and given the power levels and TDPs, the performance-per-watt advantage is clear. The M1 Max is also able to outperform desktop chips such as the 11900K, or AMD’s 5800X.

In the SPECfp suite, the M1 Max is in its own category of silicon, with no comparison in the market. It completely demolishes any laptop contender, showcasing 2.2x the performance of the second-best laptop chip. The M1 Max even manages to outperform the 16-core 5950X, a chip whose package power alone is 142W, with the rest of the system drawing quite a bit on top of that. It’s an absolutely absurd comparison and a situation we haven’t seen the likes of.

We also ran the chip with just the 8 performance cores active. As expected, the scores are a little lower, at 7-9% below the full 8+2 configuration; the 2 E-cores here represent a much smaller percentage of the total MT performance than on the M1.
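Working those percentages through makes the shift concrete. This is a rough sketch using only the figures quoted in this article, taking 8% as the midpoint of the 7-9% range:

```python
# Sanity-checking the E-cores' shrinking share of MT performance:
# on the M1, 4 E-cores add +33% over the 4 P-cores; on the M1 Max,
# disabling the 2 E-cores costs 7-9% of the total score.

m1_e_share = 0.33 / 1.33        # 4 E-cores' share of the M1's MT total
m1max_e_share = 0.08            # midpoint of the quoted 7-9% drop

print(f"M1:     {m1_e_share:.0%} of MT total from 4 E-cores "
      f"(~{m1_e_share / 4:.1%} each)")
print(f"M1 Max: ~{m1max_e_share:.0%} of MT total from 2 E-cores "
      f"(~{m1max_e_share / 2:.1%} each)")
```

So the E-cores go from roughly a quarter of the M1’s MT throughput to well under a tenth on the M1 Max, simply because the P-core count doubled.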

Apple’s stark advantage in specific workloads here makes us ask how this translates to applications and use-cases. We’ve never seen such a design before, so it’s not exactly clear where things will land, but I think Apple has been rather clear that their focus with these designs is catering to the content creation crowd, the power users who use large productivity applications, be it in video editing, audio mastering, or code compiling. These are all areas where the microarchitectural characteristics of the M1 Pro/Max would shine, and where they would likely vastly outperform any other system out there.

493 Comments

  • OreoCookie - Friday, October 29, 2021 - link

    You shouldn't mix the M1 Pro and M1 Max: the article was about the Max. The Pro makes some concessions and it looks like there are some workloads where you can saturate its memory bandwidth … but only barely so. Even then, the M1 Pro would have much, much more memory bandwidth than any laptop CPU available today (and any x86 on the horizon).

    And I think you should include the L2 cache here, which is larger than the SL cache on the Pro, and still significant in the Max (28 MB vs. 48 MB).

    I still think you are nitpicking: memory bandwidth is a strength of the M1 Pro and Max, not a weakness. The extra cache in AMD's Zen 3D will not change the landscape in this respect either.
  • richardnpaul - Friday, October 29, 2021 - link

    The article does describe the differences between the two on the front page, and runs comparisons throughout the benchmarks. Whilst it's titled to be about the Max, I found that it really covered both chips; the focus was on what benefits, if any, the Max brings over the Pro. So I felt it natural to include what I now see is a confusing reference to 24MB, because you don't know what's going on in my head 😁

    From what I could tell the SL cache was not described like a typical L3 cache, but I guess you could think of it more like that. I was thinking of it as almost like an L4 cache (thus my comment about its placement in the die: it's next to the memory controllers and the GPU blocks, and quite far away from the CPU cores themselves, so there will be a larger access penalty vs. a typical L3, which would sit very close to the CPU core blocks). I've gone back and looked again and it's not as far away as I first thought, as I'd mistaken where the CPU cores were.

    Total cache is 72MB (76MB including the efficiency cores' L2, and anything in the GPU); the AMD desktop Zen 3 chip has 36MB and will be 100MB with the V-Cache, so it's certainly in the same ballpark, as in it's a lot currently (but I'm sure that we'll see the famed 1GB in the next decade). The M1 Max is crazy huge for a laptop, which is why I compare it to desktop Zen 3, and also because nothing else with 8 cores is really comparable.

    I don't think it's a weakness; it's pretty huge for a 10TF GPU and an 8-core CPU (plus whatever the NPU etc. pull through it). I'm just not a fan of the compromises involved, such as RAM that can't be upgraded; though a 512-bit interface would necessitate quite a few PCB layers to achieve with modular RAM.
  • Oxford Guy - Friday, October 29, 2021 - link

    Apple pioneered the disposable closed system with the original Mac.

    It was so extreme that Jobs used outright bait and switch fraud to sucker the tech press with speech synthesis. The only Mac to be sold at the time of the big unveiling had 128K and was not expandable. Jobs used a 512K prototype without informing the press so he could run speech synthesis — software that also did not come with the Mac (another deception).

    Non-expandable RAM isn’t a bug to Apple’s management; it’s a very highly-craved feature.
  • techconc - Thursday, October 28, 2021 - link

    You're exactly right. Here's what Affinity Photo has to say about it...

    "The #M1Max is the fastest GPU we have ever measured in the @affinitybyserif Photo benchmark. It outperforms the W6900X — a $6000, 300W desktop part — because it has immense compute performance, immense on-chip bandwidth and immediate transfer of data on and off the GPU (UMA)."
  • richardnpaul - Thursday, October 28, 2021 - link

    They're right, which is why you see SAM (Smart Access Memory, i.e. Resizable BAR) these days on the newer AMD stuff, and why Nvidia did the custom interface tech with IBM and are looking to do the same in servers with ARM, to leverage these kinds of performance gains. It's also the reason why AMD bought ATI in the first place all those years ago: the whole failed heterogeneous compute push (it must be galling for some at AMD that Apple have executed on this promise so well).
  • techconc - Thursday, October 28, 2021 - link

    You clearly don't understand what drives performance. You have a very limited view which looks only at the TFLOPs metric and not at the entire system. Performance comes from the following 3 things: High compute performance (TFLOPS), fast on-chip bandwidth and fast transfer on and off the GPU.

    As an example, Andy Somerfield, lead for Affinity Photo app had the following to say regarding the M1 Max with their application:
    "The #M1Max is the fastest GPU we have ever measured in the @affinitybyserif Photo benchmark. It outperforms the W6900X — a $6000, 300W desktop part — because it has immense compute performance, immense on-chip bandwidth and immediate transfer of data on and off the GPU (UMA)."

    This is comparing the M1 Max GPU to a $6000, 300W part and the M1 Max handily outperforms it. In terms of TFLOPS, the 6900XT has more than 2x the power. Yet, the high speed and efficient design of the share memory on the M1 Max allows it to outperform this more expensive part in actual practice. It does so while using just a fraction of the power. That does make the M1 Max pretty special.
  • richardnpaul - Thursday, October 28, 2021 - link

    Yes, TFLOPs is a very simple metric and doesn't directly tell you much about performance, but it's a general guide (Nvidia got more real-world performance out of their hardware than AMD at comparable TFLOPS figures, at least until the 6800 series). Please, tell me more about what I think and understand /s

    It's fastest for their scenario and for their implementation. It may be, and is very likely, that there's some specific bottleneck that they are hitting with the W6900X that isn't a problem with the implementation details of the M1 Pro/Max chips. Their issue seems to be interconnect bandwidth: they're constantly moving data back and forth between the CPU and GPU, and with the M1 chips they don't need to do that, saving huge amounts of time, because the PCIe bus adds a lot of latency from what I understand, so you really don't want to transfer back and forth over it. (And maybe you don't need to; maybe you can do something differently in the software implementation, or maybe you can't and it's just a problem that's much more efficiently solved on this kind of architecture. I don't know and wouldn't be able to comment, knowing nothing about the software or the problem it solves.) What I don't take at face value is one person/company saying "use our software, it's amazing on only this hardware"; I mean, à la Oracle, right?

    When it comes to gaming performance, it seems that the 6900XT or the RTX 3080 put this chip in its place, based on the benchmarks we saw (in fact, the mobile 3080 is basically just an RTX 3070, so even more so, which could be because of all sorts of issues already highlighted). You could say that the GPU isn't good as a GPU, but is great as a highly parallel co-processor for one piece of software; if that's the software you want to use, then great for you, but if you want to use the GPU for actual GPU tasks it might underwhelm. (Though in a laptop format, and for this little power draw of ~120W max, it's not going to do that for a few years, which is the point that you're making and I'm not disputing; Apple will obviously launch new replacements which will put this in the shade in time.)
  • Hrunga_Zmuda - Tuesday, October 26, 2021 - link

    From the developers of Affinity Photo:

    "The #M1Max is the fastest GPU we have ever measured in the @affinitybyserif Photo benchmark. It outperforms the W6900X — a $6000, 300W desktop part — because it has immense compute performance, immense on-chip bandwidth and immediate transfer of data on and off the GPU (UMA)."

    Ahem, a laptop that tops out at not much more than the top GPU. That is bananas!
  • buta8 - Wednesday, October 27, 2021 - link

    Please tell me how to monitor CPU bandwidth - intra-cacheline reads & writes?
