SPEC2017 - Multi-Core Performance

While we knew the Apple M1 would do extremely well in single-threaded performance, the design’s other strength is its power efficiency, which should translate directly into exceptionally good multi-threaded performance in power-limited designs. As noted, although Apple doesn’t publish an official TDP figure, we estimate that the M1 here in the Mac mini behaves like a 20-24W TDP chip.

We’re including Intel’s newest Tiger Lake system with an i7-1185G7 at 28W, an AMD Ryzen 7 4800U at 15W, and a Ryzen 9 4900HS at 35W as comparison points. Note that the actual power consumption of these devices will exceed their advertised TDPs, as those figures don’t account for DRAM or VRM power.
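
Although we don’t measure it directly here, the gap between an advertised TDP and what a wall meter sees is easy to reason about. Below is a minimal sketch of that estimate; the DRAM power and VRM efficiency values are placeholder assumptions, not measurements of any of the systems compared here:

    # Illustrative only: rough platform power estimate from an advertised TDP.
    # The DRAM power and VRM efficiency values are placeholder assumptions,
    # not measured figures for any of the chips discussed in this article.
    def platform_power_estimate(soc_tdp_w, dram_w=2.0, vrm_efficiency=0.90):
        # DRAM draws its own power, and the voltage regulators lose some
        # energy converting the input supply down to the SoC/DRAM rails.
        return (soc_tdp_w + dram_w) / vrm_efficiency

    print(round(platform_power_estimate(28), 1))  # ~33.3 W at the wall for a 28 W TDP part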

SPECint2017(C/C++) Rate-N Estimated Scores

In the SPECint2017 rate results, the Apple M1 trades blows with AMD’s chips, with the outcome depending on the workload: sometimes winning, sometimes losing.

SPECfp2017(C/C++) Rate-N Estimated Scores

The SPECfp2017 rate results paint a similar picture: the Apple M1 battles it out with AMD’s higher-end laptop chip, beats the lower-TDP part, and stays clearly ahead of Intel’s design.

SPEC2017(C/C++) Rate-N Estimated Total

In the overall multi-core scores, the Apple M1 is extremely impressive. AMD’s more recent Renoir-based designs still beat the M1, but only in the integer workloads, and they do so at a notably higher TDP and power consumption.
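
As a reminder of how the estimated totals in these charts are put together, SPEC composite scores are the geometric mean of the individual benchmark scores. A minimal sketch of that aggregation step, using made-up subtest scores rather than our actual data:

    from math import prod

    def spec_composite(subtest_scores):
        # SPEC overall scores are the geometric mean of the per-benchmark
        # scores (performance ratios against the reference machine).
        return prod(subtest_scores) ** (1.0 / len(subtest_scores))

    # Made-up subtest scores, for illustration only:
    print(round(spec_composite([28.1, 30.5, 22.7, 35.2]), 2))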

Apple’s lead over Intel’s Tiger Lake SoC at 28W is indisputable, and it shows why Apple chose to abandon its silicon partner of 15 years. The M1 not only beats the best Intel has to offer in this market segment, but does so at lower power.

I also included multi-threaded scores for the M1 with the system’s 4 efficiency cores ignored. Although it’s an “8-core” design, the heterogeneous nature of the CPUs means performance is lopsided towards the big cores. That doesn’t mean the efficiency cores are weak, however: using them still increases total throughput by 20-33% depending on the workload, with compute-heavy tasks benefiting the most.
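
To make that uplift figure concrete, here is a minimal sketch of the calculation with hypothetical rate scores for the 4-core (big cores only) and 4+4-core configurations; the numbers are examples, not our measured results:

    # Hypothetical rate scores; only the uplift calculation itself matters here.
    score_big_cores_only = 24.0   # 4 Firestorm cores, example value
    score_all_cores = 30.0        # 4 Firestorm + 4 Icestorm cores, example value

    uplift_pct = (score_all_cores / score_big_cores_only - 1.0) * 100.0
    print(f"Efficiency cores add {uplift_pct:.0f}% throughput")  # 25% with these example values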

Overall, Apple doesn’t just deliver a viable silicon alternative to AMD and Intel, but something that outright outperforms them both in absolute performance as well as power efficiency. Naturally, the M1 can’t keep up with higher-power, higher-core-count AMD and Intel designs, but that’s something Apple will likely want to address with subsequent designs in that category over the next 2 years.

Comments

  • andrewaggb - Tuesday, November 17, 2020 - link

    Pretty much. There's no reason to think the cores will be better on a chip with more of them. The only thing that is a possibility (certainly not a given) is that the clock speed will be substantially higher which should put Apple in the lead. That said, the previous review showed a very modest IPC improvement this time around even with huge reorder buffers and an 8-wide design. So I suspect apple's best course for improved performance is higher clocks but that always runs counter to power usage so we'll see. AMD and Intel will probably have to go wider to compete with Apple for single thread IPC in the long run.

    GPU-wise it's pretty decent for integrated graphics but if you want to play games you shouldn't be running Mac OS or using integrated graphics. It'll be interesting to see if Apple's market share jumps enough to pull in some game development.
  • Eric S - Tuesday, November 17, 2020 - link

    I don’t think any of these benchmarks are optimized for TBDR. Memory-bound operations could be significantly faster if optimized for the chip. Many render pipelines could run 4X faster. I’m curious to see iOS graphics benchmarks run on this that are more representative. Of course I hope we see apps and games optimized for TBDR as well.
  • Spunjji - Thursday, November 19, 2020 - link

    @andrewaggb - Agreed entirely. The cores themselves aren't going to magically improve, and it's not clear from the meagre scaling between A14 at 5-10W and M1 at 10-25W that they can make them a lot faster with clock speed increases. But a chip with 12 Firestorm cores and 4 Icestorm cores would be an interesting match for the 5900X, and if they beef the GPU up to 12 cores with a 192bit memory interface and/or LPDDR5 then they could have something that's actually pretty solid for the vast majority of workloads.

    I don't think games are going to be moving en-masse from Windows any time soon, but I guess we'll see as time goes on.
  • Stephen_L - Tuesday, November 17, 2020 - link

    I feel very lucky that I didn’t use your mindset when I decided to buy AMD R5-1600X instead of an Intel i5 for my pc.
  • Spunjji - Thursday, November 19, 2020 - link

    @YesYesNo - you responded to a comment about how they *will* be releasing faster chips by talking about how they haven't done so yet. This is known. You're kind of talking past the people you're replying to - nobody's asking you to reconsider how you feel about the M1 based on whatever comes next, but it doesn't make sense to assume this is the absolute best they can do, either.
  • andreltrn - Tuesday, November 17, 2020 - link

    This is not their high-end chip! This is a chip for low-end devices such as fan-less laptops. They attacked that market first because this is where they will make the most money. High-end Pro users won’t go for a new platform until it is proven and they are 100% sure that they will be able to port their workflow to it. They are starting with the low end and will probably follow up with a 10 or 12 core chip in the spring for the high-end laptops and the iMac.
  • vlad42 - Tuesday, November 17, 2020 - link

    I just do not see Apple using anything but a low-power mobile chip for consumer devices.

    Think about it: about half the time we did not see Apple release a tablet-optimized A#X chip for the iPad. In their recent earnings reports, the combined iPad and Mac revenue is still only half that of the iPhone. By using the same chip for the iPad and all Mac machines except the Mac Pro, maybe Apple will actually update the SoC every year.

    If Apple were to provide a higher-performing chip for consumer devices, then it would probably be updated only once every few years. Apple just does not make enough money from high-end laptops and the iMac to justify dedicated silicon for those products without pulling an Intel and reusing the SoC for far too many product cycles. Just look at the Mac Pros. The engineering resources needed to design the most recent x86 Mac Pro are a drop in the bucket compared to designing and taping out a new SoC. Despite this, Apple has only been updating the Mac Pro lineup once every 5-7 years!

    The problem is that by the time they are willing to update those theoretical high-end consumer chips, they will have long since been made obsolete. Who in their right mind would purchase a “high-end” laptop or an iMac if it is outperformed by an entry-level Air or an iPad, or if it is lacking important features (hardware codec support, the next stupid version of HDCP needed for movies/TV shows, etc.)? Even worse for Apple is if their customers buy a non-Apple product instead. Much of Apple’s current customer base does not actually need a Mac. They would be fine with any decent-quality high-end laptop or any all-in-one with a screen that is not hot garbage.
  • Eric S - Tuesday, November 17, 2020 - link

    They are working on updates for the high end. I expect they will be amazing. At least two higher end chips are in late design or early production.
  • Eric S - Tuesday, November 17, 2020 - link

    You are probably right in that they may only be updated every few years, but the same can be said of the Xeon which also skips generations.
  • vlad42 - Tuesday, November 17, 2020 - link

    But the Xeon chips are a bad example because Intel shot themselves in the foot through a combination of complacency, tying their next-gen products too tightly to the manufacturing process, and a shortage of 14nm capacity. We used to get new Xeons, if not every year, then at least every time there was an architecture update.

    A better, more recent comparison would be AMD, which has always updated the Threadripper lineup. Granted, we technically do not know if the Threadripper Pro lineup will be updated every year, but it very likely will be.
