Conclusion & First Impressions

Today’s piece was less a review of the new Mac mini than a test of Apple’s new M1 chip. We’ve had very little time with the device, but we hopefully managed to showcase the key aspects of the new chip, and boy, it’s impressive.

For years now we’ve seen Apple’s custom CPU microarchitectures in the A-series phone SoCs post impressive and repeated performance jumps generation after generation, and today’s new Apple Silicon devices are essentially the culmination of the inevitable trajectory that Apple has been on.

In terms of power, the Apple M1 inside the new Mac mini fills out a thermal budget of around 20-24W on the SoC side. This is still clearly a low-power design, and Apple takes advantage of that to deploy it in machines such as the now fan-less MacBook Air. We haven’t had the opportunity to test that device yet, but we expect the same peak performance, albeit with heavier throttling once the SoC saturates that design’s heat-dissipation capacity.

In the new MacBook Pro, we expect the M1 to showcase similar, if not identical, performance to what we’ve seen on the new Mac mini. Frankly, I suspect Apple could have downsized the Mini, although we don’t know the exact internal layout of the unit, as we weren’t allowed to disassemble it.

The performance of the new M1 in this “maximum performance” design with a small fan is outstandingly good. The M1 undisputedly outperforms the core performance of everything Intel has to offer, and trades blows with AMD’s new Zen 3, winning some, losing some. And in the mobile space in particular, there doesn’t seem to be an equivalent in either ST or MT performance, at least not within the same power budgets.

What’s really important for the general public, and for Apple’s success, is the fact that the M1’s performance doesn’t feel any different than if you were using a very high-end Intel or AMD CPU. Apple achieving this in-house with its own design is a paradigm shift, and in the future it will allow a level of software-hardware vertical integration that just hasn’t been seen before and hasn’t yet been achieved by anybody else.

The software side of things already looks good on day one thanks to Apple’s Rosetta 2. While translated software doesn’t extract everything the hardware can offer, with time, as developers migrate their applications to native Apple Silicon support, the ecosystem will flourish. And in the meantime, the M1 is fast enough to absorb the performance hit from Rosetta 2 and still deliver solid performance for all but the most CPU-critical x86 applications.
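
For the curious, macOS exposes whether a given process is currently running under Rosetta 2 translation via the `sysctl.proc_translated` sysctl. Below is a minimal sketch of that check; the sysctl key is Apple’s documented one, while the file and helper names are our own:

```c
/* rosetta_check.c — a minimal sketch: ask the kernel whether this
 * process is running under Rosetta 2 translation. */
#include <stdio.h>
#include <sys/sysctl.h>

/* Returns 1 if translated (x86-64 under Rosetta 2), 0 if native,
 * -1 if unknown (the sysctl doesn't exist, e.g. on older macOS). */
static int is_translated(void) {
    int translated = 0;
    size_t size = sizeof(translated);
    if (sysctlbyname("sysctl.proc_translated", &translated, &size, NULL, 0) == -1)
        return -1;
    return translated;
}

int main(void) {
    int t = is_translated();
    if (t == 1)
        printf("Running as x86-64 under Rosetta 2 translation\n");
    else if (t == 0)
        printf("Running natively\n");
    else
        printf("Translation status unknown\n");
    return 0;
}
```

An x86-64 build of this should print the translation message when launched on an M1 machine; the same source compiled for arm64 reports native execution.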

For developers, the Apple Silicon Macs also represent the very first full-fledged Arm machines on the market that have few-to-no compromises. This is a massive boost not just for Apple, but for the larger Arm ecosystem and the growing Arm cloud-computing business.
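
As a rough illustration of how low the barrier to going native is, a single clang invocation on macOS can produce a universal (“fat”) binary containing both slices. A sketch, with the file name our own:

```c
/* universal.c — a sketch of a universal binary. Build both slices
 * at once with: clang -arch x86_64 -arch arm64 -o hello universal.c
 * macOS then picks the native arm64 slice on Apple Silicon. */
#include <stdio.h>

int main(void) {
#if defined(__arm64__)
    printf("Native arm64 slice\n");                      /* Apple Silicon */
#elif defined(__x86_64__)
    printf("x86-64 slice (would run under Rosetta 2 on M1)\n");
#else
    printf("Unknown architecture\n");
#endif
    return 0;
}
```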

Overall, Apple hit it out of the park with the M1.

Comments

  • andrewaggb - Tuesday, November 17, 2020 - link

    Pretty much. There's no reason to think the cores will be better on a chip with more of them. The only thing that is a possibility (certainly not a given) is that the clock speed will be substantially higher, which should put Apple in the lead. That said, the previous review showed a very modest IPC improvement this time around even with huge reorder buffers and an 8-wide design. So I suspect Apple's best course for improved performance is higher clocks, but that always runs counter to power usage, so we'll see. AMD and Intel will probably have to go wider to compete with Apple on single-thread IPC in the long run.

    GPU-wise it's pretty decent for integrated graphics, but if you want to play games you shouldn't be running macOS or using integrated graphics. It'll be interesting to see if Apple's market share jumps enough to pull in some game development.
  • Eric S - Tuesday, November 17, 2020 - link

    I don’t think any of these benchmarks are optimized for TBDR. Memory-bound operations could be significantly faster if optimized for the chip. Many render pipelines could run 4X faster. I’m curious to see iOS graphics benchmarks run on this that are more representative. Of course, I hope we see apps and games optimized for TBDR as well.
  • Spunjji - Thursday, November 19, 2020 - link

    @andrewaggb - Agreed entirely. The cores themselves aren't going to magically improve, and it's not clear from the meagre scaling between the A14 at 5-10W and the M1 at 10-25W that they can make them a lot faster with clock speed increases. But a chip with 12 Firestorm cores and 4 Icestorm cores would be an interesting match for the 5900X, and if they beef the GPU up to 12 cores with a 192-bit memory interface and/or LPDDR5, then they could have something that's actually pretty solid for the vast majority of workloads.

    I don't think games are going to be moving en-masse from Windows any time soon, but I guess we'll see as time goes on.
  • Stephen_L - Tuesday, November 17, 2020 - link

    I feel very lucky that I didn’t use your mindset when I decided to buy an AMD R5 1600X instead of an Intel i5 for my PC.
  • Spunjji - Thursday, November 19, 2020 - link

    @YesYesNo - you responded to a comment about how they *will* be releasing faster chips by talking about how they haven't done so yet. This is known. You're kind of talking past the people you're replying to - nobody's asking you to reconsider how you feel about the M1 based on whatever comes next, but it doesn't make sense to assume this is the absolute best they can do, either.
  • andreltrn - Tuesday, November 17, 2020 - link

    This is not their high-end chip! This is a chip for low-end devices such as fan-less laptops. They attacked that market first because this is where they will make the most money. High-end pros won't go for a new platform until it is proven and they are 100% sure they will be able to port their workflow to it. They are starting with the low end and will probably follow up with a 10- or 12-core chip in the spring for the high-end laptops and the iMac.
  • vlad42 - Tuesday, November 17, 2020 - link

    I just do not see Apple using anything but a low-power mobile chip for consumer devices.

    Think about it: roughly half the time, Apple did not release a tablet-optimized A#X chip for the iPad. In their recent earnings reports, the combined iPad and Mac revenue is still only half that of the iPhone. By using the same chip for the iPad and all Mac machines except the Mac Pro, maybe Apple will actually update the SoC every year.

    If Apple were to provide a higher-performing chip for consumer devices, then it would probably be updated only once every few years. Apple just does not make enough money from high-end laptops and the iMac to justify dedicated silicon for those products without pulling an Intel and reusing the SoC for far too many product cycles. Just look at the Mac Pros. The engineering resources needed to design the most recent x86 Mac Pro are a drop in the bucket compared to designing and taping out a new SoC. Despite this, Apple has only been updating the Mac Pro lineup once every 5-7 years!

    The problem is that by the time they are willing to update those theoretical high-end consumer chips, they will have long since been made obsolete. Who in their right mind would purchase a "high-end" laptop or an iMac if it is outperformed by an entry-level Air or an iPad, or if it is lacking important features (hardware codec support, the next stupid version of HDCP needed for movies/TV shows, etc.)? Even worse for Apple is if their customers buy a non-Apple product instead. Much of Apple's current customer base does not actually need a Mac. They would be fine with any decent-quality high-end laptop or any all-in-one with a screen that is not hot garbage.
  • Eric S - Tuesday, November 17, 2020 - link

    They are working on updates for the high end. I expect they will be amazing. At least two higher end chips are in late design or early production.
  • Eric S - Tuesday, November 17, 2020 - link

    You are probably right in that they may only be updated every few years, but the same can be said of the Xeon which also skips generations.
  • vlad42 - Tuesday, November 17, 2020 - link

    But the Xeon chips are a bad example, because Intel shot themselves in the foot through a combination of complacency, tying their next-gen products too tightly to the manufacturing process, and a shortage of 14nm capacity. We used to get new Xeons, if not every year, then at least every time there was an architecture update.

    A better more recent comparison would be with AMD which has always updated the Threadripper lineup. Granted, we technically do not know if the Threadripper Pro lineup will be updated every year, but it very likely will be.
