Apple Shooting for the Stars: x86 Incumbents Beware

The previous pages were written ahead of Apple officially announcing the new M1 chip. We already saw the A14 performing outstandingly and outperforming the best that Intel has to offer. The new M1 should perform notably above that.

We come back to a few of Apple’s slides from the presentation regarding what to expect in terms of performance and efficiency. The performance/power curves in particular are the most detailed data Apple is sharing at this moment in time:

In this graphic, Apple showcases the new M1 chip with a CPU power consumption peak of around 18W. The competing PC laptop chip here peaks in the 35-40W range, so these are certainly not single-threaded performance figures, but rather whole-chip multi-threaded performance. We don’t know whether this compares the M1 to an AMD Renoir chip or to an Intel ICL or TGL chip, but in either case the same general verdict applies:

Apple’s use of a significantly more advanced microarchitecture with far higher IPC, enabling high performance at low core clocks, allows for major power efficiency gains versus the incumbent x86 players. The graphic shows that, comparing peak to peak, the M1 offers around a 40% performance uplift over the existing competitive offering, whilst doing so at only 40% of the power consumption.
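Taken together, those two chart-derived multipliers imply a sizeable perf-per-watt advantage. A quick back-of-the-envelope calculation makes that explicit (a sketch only; the 1.4x and 0.4x figures are approximate values read off Apple’s slide, not measured data):

```python
# Approximate multipliers read off Apple's performance/power chart:
perf_ratio = 1.4   # M1 peak performance vs. the competing laptop chip
power_ratio = 0.4  # M1 peak power draw vs. the competing laptop chip

# Performance-per-watt advantage implied by the two figures combined.
perf_per_watt_gain = perf_ratio / power_ratio
print(f"Implied perf/W advantage: {perf_per_watt_gain:.1f}x")  # → 3.5x
```

In other words, if both chart readings hold, the M1 would deliver roughly 3.5x the performance per watt of the comparison chip at their respective peaks.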

Apple’s choice of seemingly arbitrary comparison points is to be criticised, however the 10W measurement point at which Apple claims 2.5x the performance does make some sense, as this is the nominal TDP of the chips used in the Intel-based MacBook Air. Again, it’s thanks to the power efficiency characteristics Apple has achieved in the mobile space that the M1 is promised to showcase such large gains – and it certainly matches our A14 data.

Don't forget about the GPU

Today we mostly covered the CPU side of things, as that’s where the unprecedented industry shift is happening. However, we shouldn’t forget about the GPU, as the M1 marks the first time Apple is bringing its custom GPU designs to the Mac.

Apple’s performance and power efficiency claims here are really lacking context, as we have no idea what their comparison point is. I won’t try to theorise here as there are just too many variables at play, and we don’t know enough details.

What we do know is that in the mobile space, Apple is absolutely leading the pack in terms of performance and power efficiency. When we last tested the A12Z, the design was more than able to compete with and beat integrated graphics designs. But since then we’ve seen more significant jumps from both AMD and Intel.

Performance Leadership?

Apple claims the M1 to be the fastest CPU in the world. Given our data on the A14, which beats all of Intel’s designs and falls just short of AMD’s newest Zen3 chips, the M1’s higher-clocked Firestorm cores above 3GHz, 50% larger L2 cache, and unleashed TDP make it entirely believable that Apple can achieve that claim.

This moment has been brewing for years, and the new Apple Silicon is both shocking and very much expected. In the coming weeks we’ll be trying to get our hands on the new hardware to verify Apple’s claims.

Intel has stagnated itself out of the market, and has lost a major customer today. AMD has shown lots of progress lately; however, it will be incredibly hard to catch up to Apple’s power efficiency. If Apple’s performance trajectory continues at this pace, the x86 performance crown may never be regained.

Comments

  • vais - Thursday, November 12, 2020 - link

    A great article up until the benchmarking and comparison to x86. Then it turned into something reeking of a paid promotion piece.
    Below are some quotes I want to focus the discussion on:

    "x86 CPUs today still only feature a 4-wide decoder designs (Intel is 1+4) that is seemingly limited from going wider at this point in time due to the ISA’s inherent variable instruction length nature, making designing decoders that are able to deal with aspect of the architecture more difficult compared to the ARM ISA’s fixed-length instructions"
    - This implies wider decoder is always a better thing, even when comparing not only different architectures, but architectures using different instruction sets. How was this conclusion reached?

    "On the ARM side of things, Samsung’s designs had been 6-wide from the M3 onwards, whilst Arm’s own Cortex cores had been steadily going wider with each generation, currently 4-wide in currently available silicon"
    - So Samsung’s Exynos is 6-wide - does that make it better than Snapdragon (which should be 4-wide)? Even better, does anyone in their right mind think it performs close to any modern x86 CPU, let alone an enthusiast grade desktop chip?

    "To not surprise, this is also again deeper than any other microarchitecture on the market. Interesting comparisons are AMD’s Zen3 at 44/64 loads & stores, and Intel’s Sunny Cove at 128/72. "
    - Again this assumes higher loads & stores is automagically better. Isn't Zen3 better than its Intel counterparts across the board, despite the significantly worse loads & stores?

    "AMD also wouldn’t be looking good if not for the recently released Zen3 design."
    - What is the logic here? The competition is lucky they released a better product before Apple? How unfair that Apple have to compete with the latest (Zen3) instead of the previous generation - then their amazing architecture would have really shone bright!

    "The fact that Apple is able to achieve this in a total device power consumption of 5W including the SoC, DRAM, and regulators, versus +21W (1185G7) and 49W (5950X) package power figures, without DRAM or regulation, is absolutely mind-blowing."
    - I am specifically interested where the 49W for 5950X come from. AMD's specs list the TDP at 105W, so where is this draw of only 49W, for an enthusiast desktop processor, coming from?
  • thunng8 - Thursday, November 12, 2020 - link

    It is obvious that the power figure comes from running the SPEC benchmark. SPEC is run single-threaded here, so the Ryzen package is using 49W when turbo boosting to 5.0GHz on a single core to achieve the score on the chart, while the A14, measured under the exact same criteria, uses 5W.
  • vais - Thursday, November 12, 2020 - link

    How is it obvious? Things like "this benchmark is single-threaded" must be stated clearly, not left for everyone looking at the benchmarks to already know. Same goes for the power figures.
  • thunng8 - Friday, November 13, 2020 - link

    The fact that it is single-threaded is in the text of the review.
  • name99 - Friday, November 13, 2020 - link

    If you don't know the nature of SPEC benchmarks, then perhaps you should be using your ears/eye more and your mouth less? You don't barge into a conversation you admit to knowing nothing about and start telling all the gathered experts that they are wrong!
  • mandirabl - Thursday, November 12, 2020 - link

    Pretty cool, I came from this video https://www.youtube.com/watch?v=xUkDku_Qt5c and the analogy is awesome.
  • atomek - Thursday, November 12, 2020 - link

    If Apple plays it well, this is the end of the x86 era. They'll just need to open up the M1 to OEMs/builders, so people could actually build gaming desktops on their platform. And that would be the end of AMD/Intel (or they will quickly, within 2-5 years, release an ARM CPU, which would be very problematic for them). I wouldn't mind moving away from x86, as long as Apple opens their ARM platform to enthusiasts/gamers and doesn't lock it to macOS.
  • dodoei - Thursday, November 12, 2020 - link

    The reason for the great performance could very well be that it’s locked to the MacOS
  • Zerrohero - Friday, November 13, 2020 - link

    Apple has spent billions to develop their own chips to differentiate from the others and to achieve iPad/iPhone like vertical integration with their own software.

    Why would they sell them to anyone?

    It seems that lots of people do not understand why Apple is doing this: to build better *Apple* products.

    There is nothing wrong with that, even if PC folks refuse to accept it. Every company strives to do better stuff.
  • corinthos - Thursday, November 12, 2020 - link

    Cheers to all of those who purchased Threadrippers and high-end Intel Extreme processors plus the latest 3080/3090 GPUs for video editing, only to be crushed by the M1 with its iGPU, thanks to its more current and superior hardware decoders.
