Apple Shooting for the Stars: x86 Incumbents Beware

The previous pages were written ahead of Apple's official announcement of the new M1 chip. We've already seen the A14 perform outstandingly, outperforming the best that Intel has to offer. The new M1 should perform notably above that.

Let's come back to a few of Apple's slides from the presentation as to what to expect in terms of performance and efficiency. The performance/power curves in particular are the most detail that Apple is sharing at this moment in time:

In this graphic, Apple showcases the new M1 chip with CPU power consumption peaking at around 18W. The competing PC laptop chip here peaks in the 35-40W range, so these are certainly not single-threaded performance figures, but rather whole-chip multi-threaded performance. We don't know whether this compares the M1 to an AMD Renoir chip or an Intel ICL or TGL chip, but in either case the same general verdict applies:

Apple's use of a significantly more advanced microarchitecture with much higher IPC, enabling high performance at low core clocks, allows for substantial power-efficiency gains versus the incumbent x86 players. The graphic shows that, peak-to-peak, the M1 offers around a 40% performance uplift compared to the existing competing offering, all whilst doing so at just 40% of the power consumption.
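As a back-of-envelope sketch, those two ratios can be combined into an implied performance-per-watt advantage. Note that the 1.40x performance and 0.40x power figures below are simply read off Apple's graphic, not independently measured:

```python
# Illustrative arithmetic only: inputs are the relative figures quoted from
# Apple's performance/power slide, not measured data.
m1_relative_perf = 1.40   # ~40% higher peak performance vs. the x86 laptop chip
m1_relative_power = 0.40  # achieved at ~40% of the competing chip's power

# Perf/W scales as (relative performance) / (relative power).
perf_per_watt_gain = m1_relative_perf / m1_relative_power
print(f"Implied perf/W advantage: {perf_per_watt_gain:.1f}x")  # → 3.5x
```

In other words, if Apple's slide is taken at face value, the M1 would deliver roughly 3.5x the performance per watt of the compared x86 part at their respective peaks.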

Apple's comparison of arbitrary performance points is open to criticism; however, the 10W measurement point, where Apple claims 2.5x the performance, does make some sense, as this is the nominal TDP of the chip used in the Intel-based MacBook Air. Again, it's thanks to the power-efficiency characteristics Apple has achieved in the mobile space that the M1 is promised to showcase such large gains – it certainly matches our A14 data.

Don't forget about the GPU

Today we mostly covered the CPU side of things, as that's where the unprecedented industry shift is happening. However, we shouldn't forget about the GPU, as the M1 marks the first time Apple is bringing its custom GPU designs to the Mac.

Apple's performance and power-efficiency claims here really lack context, as we have no idea what the comparison point is. I won't try to theorise here, as there are just too many variables at play and we don't know enough details.

What we do know is that in the mobile space, Apple is absolutely leading the pack in terms of performance and power efficiency. When we last tested the A12Z, the design was more than able to compete with and beat integrated graphics designs. Since then, however, we've seen more significant jumps from both AMD and Intel.

Performance Leadership?

Apple claims the M1 to be the fastest CPU in the world. Given our data on the A14, which beats all of Intel's designs and falls just short of AMD's newest Zen 3 chips, and given the M1's higher-clocked Firestorm cores above 3GHz, 50% larger L2 cache, and unleashed TDP, we can certainly believe that Apple and the M1 are able to achieve that claim.

This moment has been brewing for years now, and the new Apple Silicon is both shocking and very much expected. In the coming weeks we'll be trying to get our hands on the new hardware to verify Apple's claims.

Intel has stagnated itself out of the market, and today it lost a major customer. AMD has shown lots of progress lately; however, it'll be incredibly hard to catch up to Apple's power efficiency. If Apple's performance trajectory continues at this pace, the x86 performance crown may never be regained.

Comments

  • SuperSunnyDay - Thursday, November 12, 2020 - link

    Big niche though, and could get a lot bigger
  • Spunjji - Thursday, November 12, 2020 - link

    "If you go to a finer process, obviously you are more power efficent than competitors"
    Then why does Intel's 10nm suck so hard for power efficiency? 🤔
    And what of Apple's measurably superior architectural efficiency? 🤔🤔

    Oh, Gondalf.
  • daveedvdv - Thursday, November 12, 2020 - link

    I'm surprised by that claim. Sure, it's currently probably more expensive than the A13, but that's not what it's replacing. It's replacing chips that Intel sells at quite a premium to Apple. I suspect the M1 saves Apple quite a bit, in fact. Maybe that contributes to the lowering of the Mac mini price?

    I think AMD and Intel are toast in the medium term. The problem is that Apple is showing it can be done. The Huaweis, Samsungs, and Qualcomms of this world are not going to just sit there: They're going to come for AMD & Intel's income streams, especially with the pressure that will come from other PC OEMs (Huawei and Samsung included).
  • vais - Friday, November 13, 2020 - link

    ARM server CPUs have existed for quite a while now but are still a very low percentage of total server CPUs used - I wonder why that is.
  • BedfordTim - Friday, November 13, 2020 - link

    The reason is that until recently they weren't good enough, and there was a chicken-and-egg situation with software.
    That changed with the Neoverse cores, and you could equally ask why Amazon and others are now offering ARM servers.
  • Silver5urfer - Sunday, November 15, 2020 - link

    People forget things, and people do not know what they are even talking about. In the Marvell ThunderX3 article right here on AT, it was purported to be challenging AMD and Intel. And then they abandoned the SKUs for the general-purpose market, meaning they won't be selling off-the-shelf parts. They will do a custom chip if any company wants one – like Graviton is custom for AWS, Marvell will do the same for company X.

    And going even further into the past: Qualcomm Centriq. Remember Cloudflare's advertisement of Centriq? They were over the top for that CPU infrastructure. And then what happened? It fizzled out. Qualcomm axed the whole engineering team forever – the team that made the 820 Kryo full-custom chip (why did Qualcomm go full custom? The 810 and 64-bit disaster). And now we have Nuvia and Ampere Altra, both of which have to prove themselves. The EPYC 7742 is the leading champion in the arena right now; Intel is trying so hard to contain the beast that is AMD, with still no viable product until their 10nm SF is ready.

    And here we have people reading this article – which mentions only Geekbench, Apple's graph, and then SPECint perf charts with single-core performance only – and deciding that Intel and AMD are dead along with x86. Hilarious. And even on Hacker News, people are discussing some BS Medium post on how Intel is going to get shunned...

    The world runs on Intel, and AMD is just 6.6%, yet we have people thinking Samsung, Qualcomm, and Huawei are going to come at Intel. An epic joke, really. The even funnier part is how Intel Macs are still on the website at a higher price tag. WHY WOULD APPLE STILL KEEP INTEL CPUs IF THIS M1 IS FASTER THAN A 5950X AND 10900K, GOD KNOWS !!!
  • Sailor23M - Friday, November 13, 2020 - link

    ^^This. I fully expect Samsung to launch their own ARM-based laptop chip and then grow from there, perhaps to servers as well.
  • duploxxx - Monday, December 7, 2020 - link

    Medium term?
    You mean people moving to the overly expensive ecosystem of Apple because they have a better device that is only compatible with 5-10% of the available SW?

    Moving away from x86? That won't happen either, except for the parts that are written for mobile devices. Back-end SW is not just shifting over. ARM servers are evolving but are far from the x86 portfolio. Only Amazon is showing some progress with their own platform, which they offer very cheaply to bring people over. Those who develop in the cloud look into future portability and complain. Much to discuss about that.
  • SarahKerrigan - Tuesday, November 10, 2020 - link

    The number of in-flight loads and stores the Firestorm core can sustain is just crazy.
  • lilmoe - Tuesday, November 10, 2020 - link

    Numbers are looking great for Zen 3. The ARM X1 should also give Firestorm a run for its money. That being said, the key advantage of the M1 platform will be its more advanced co-processors, which, coupled with mainstream adoption, should drive devs to build highly optimized apps that run circles around any CPU-bound workload... Sure, NVIDIA has CUDA, but it won't be nearly as widespread as the M1, and not nearly as efficient. If done right, you can say bye-bye to CPU-first compute.
