After many months of rumors and speculation, Apple confirmed this morning during their annual WWDC keynote that the company intends to transition away from using x86 processors at the heart of their Mac family of computers. Replacing the venerable ISA – and the exclusively-Intel chips that Apple has been using – will be Apple’s own Arm-based custom silicon, with the company taking their extensive experience in producing SoCs for iOS devices, and applying that to making SoCs for Macs. With the first consumer devices slated to ship by the end of this year, Apple expects to complete the transition in about two years.

The last (and certainly most anticipated) segment of the keynote, Apple’s announcement that they are moving to using their own SoCs for future Macs was very much a traditional Apple announcement. Which is to say that it offered just enough information to whet developers’ (and consumers’) appetites without offering too much in the way of details too early. So while Apple has answered some very important questions immediately, there’s also a whole lot more we don’t know at the moment, and likely won’t know until late this year when hardware finally starts shipping.

What we do know, for the moment, is that this is the ultimate power play for Apple, with the company intending to leverage the full benefits of vertical integration. This kind of top-to-bottom control over hardware and software has been a major factor in the success of the company’s iOS devices, both with regards to hard metrics like performance and soft metrics like the user experience. So given what it’s enabled Apple to do for iPhones, iPads, etc, it’s not at all surprising to see that they want to do the same thing for the Mac. Even though the OS itself isn’t changing (much), the ramifications of Apple building the underlying hardware down to the SoC means that they can have the OS make full use of any special features that Apple bakes into their A-series SoCs. Idle power, ISPs, video encode/decode blocks, and neural network inference are all subjects that are potentially on the table here.

Apple SoCs: Market Leading Performance & Efficiency

At the heart of this shift in the Mac ecosystem will be the transition to new SoCs built by Apple. Curiously, the company has carefully avoided using the word “Arm” anywhere in their announcement, but the latest macOS developer documentation makes it clear Apple is taking their future into their own hands with the Arm architecture. The company will be making a series of SoCs specifically for the Mac, and while I wouldn’t be too surprised if we see some iPad/Mac overlap, at the end of the day Apple will want SoCs more powerful than their current wares to replace the chips in their most powerful Mac desktops.

And it goes without saying that Apple’s pedigree in chip designs is nothing less than top-tier at this point. The company has continued to iterate on its CPU core designs year after year, making significant progress at a time when x86 partner Intel has stalled, allowing the company’s latest Lightning cores to exceed the IPC of Intel’s architectures, while overall performance has closed in on Intel’s best desktop chips.

Apple’s ability to outdo Intel’s wares is by no means guaranteed, especially when it comes to replacing the likes of the massive Xeon chips in the Mac Pro, but the company is coming into this with a seasoned design team that has done some amazing things with low-power phone and tablet SoCs. Now we’re going to get a chance to see what they can do when the last of the chains come off, and they are allowed to scale up their designs to full desktop and workstation-class chips. Apple believes they can deliver better performance at lower power than the current x86 chips they use, and we’re all excited to see just what they can do.

Though from an architecture standpoint, the timing of the transition is a bit of an odd one. As noted by our own Arm guru, Andrei Frumusanu, Arm is on the precipice of announcing the Arm v9 ISA, which will bring several notable additions such as Scalable Vector Extension 2 (SVE2). So either Arm is about to announce v9 and Apple’s A14 SoCs will be among the first to implement the new ISA, or Apple will be setting the baseline for macOS-on-Arm at v8.2 and its NEON extensions fairly late into the ISA’s lifecycle. This will be something worth keeping an eye on.

Selling x86 & Arm Side-by-Side: A Phased Transition

While for obvious reasons Apple’s messaging today is about where they want to be at the end of their two-year transition, their transition is just that: around two years long. As a result, Apple has confirmed that there will be an overlapping period where the company will be selling both x86 and Arm devices – and there will even be new x86 devices that the company has yet to launch.

In the near term, it will take Apple some time to build new devices around their in-house SoCs. So even if Apple doesn’t introduce any new device families or form factors over the next two years, the company will still need to refresh x86-based Macs with newer Intel processors to keep them current until their Arm-based successors are ready. And although Apple hasn’t offered any guidance on what devices will get replaced first, it’s as reasonable a bet as any that the earliest devices will be lower-end laptops and the like, while Apple’s pro gear such as the Mac Pro tower will be the last parts to transition, as those will require the most extensive silicon engineering.

This also means that Apple is still on the clock as far as x86 software support goes, and will continue to be so well after they complete their hardware transition. In part a practical move to avoid Osborning themselves and their current x86-based systems, Apple has confirmed that they will continue supporting x86 Macs for years to come. Just how long that will be remains to be seen, of course, but unless Apple accelerates the retirement of x86 Mac support, the company as of late has been supporting Macs with newer OSes and OS updates for several years after their initial launch.

x86 Compatibility: Rosetta 2 & Virtualization

Meanwhile, in order to bridge the gap between Apple’s current software ecosystem and where they want to be in a couple of years, Apple will once again be investing in a significant software compatibility layer in order to run current x86 applications on future Arm Macs. To be sure, Apple wants developers to recompile their applications to be native – and they are investing even more into the Xcode infrastructure to do just that – but some degree of x86 compatibility is still a necessity for now.

The cornerstone of this is the return of Rosetta, the PowerPC-to-x86 binary translation layer that Apple first used for the transition to x86 almost 15 years ago. Rosetta 2, as it’s called, is designed to do the same thing for x86-to-Arm, translating x86 macOS binaries so that they can run on Arm Macs.

Rosetta 2’s principal mode of operation will be to translate binaries at install time. I suspect that Apple is eyeing distributing pre-translated binaries via the App Store here (rather than making every Mac translate common binaries), but we’ll see what happens there. Meanwhile Rosetta 2 will also support dynamic translation, which is necessary for fast performance on x86 applications that do their own Just-in-Time compiling.

Overall Apple is touting Rosetta 2 as offering “fast performance”, and while their brief Maya demo is certainly impressive, it remains to be seen just how well the binary translation tech works. x86 to Arm translation has been a bit of a mixed bag, judging from Qualcomm & Microsoft’s efforts, though past efforts haven’t involved the kind of high-performance chips Apple is aiming for. At the same time, however, even with the vast speed advantage of x86 chips over PPC chips, running PPC applications under the original Rosetta was functional, but not fast.

As a result, Rosetta 2 is probably best thought of as a backstop to ensure program compatibility while devs get an Arm build working, rather than an ideal means of running x86 applications in the future. Especially since Rosetta 2 doesn’t support high-performance x86 vector instructions like AVX, meaning applications with dense, performance-critical code will need to fall back to slower code paths.

On which note, right now it’s not clear how long Apple will offer Rosetta 2 for macOS. The original Rosetta was retired relatively quickly, as Apple has always pushed its developers to move quickly to keep up with the platform. And with a desire to have a unified architecture across all of its products, Rosetta 2 may face a similarly short lifecycle.

Meanwhile, macOS Big Sur (11.0), the launch OS for this new Mac ecosystem, will also be introducing a new binary format called Universal 2. Apple has ample experience here with fat binaries, and Universal 2 will extend that to cover Arm binaries. Truth be told, Apple already has the process down so well that I don’t expect this to be much more than adding another architecture slice to the fat executables inside an application bundle.

Finally, rounding out the compatibility package is an Apple-developed virtualization technology to handle things such as Linux Docker containers. Information on this feature is pretty light – the company briefly showed it off as part of Parallels running Linux in the keynote – so it remains to be seen just what the tech can do. At a minimum, and fittingly for a developer conference, having a solution in place for Linux and Docker is a good feature to show off, as these tools are critical to WWDC’s software developer crowd.

But it leaves unanswered some huge questions about Windows support, and whether this tech can be used to run Windows 10 similar to how Parallels and other virtualization software can run Windows inside of macOS today. As well, Apple isn’t saying anything about Boot Camp support at this time, despite the fact that dual-booting macOS and Windows has long been a draw for Apple’s Mac machines.

Dev Kits: A12Z As A Taste of Things To Come

Finally, in order to prepare developers to launch native, Arm-compiled software later this year when the first Arm Macs ship, Apple has also put together a developer transition kit, which the company will be loaning out to registered developers. The DTK, as it’s called, was used in Apple’s keynote to demonstrate the features of macOS Big Sur. And while it’s essentially just an iPad in a Mac Mini’s body, it’ll be an important step in getting developers ready with native applications by giving them actual hardware to test and optimize against.

Overall, the DTK is based on Apple’s A12Z processor, and includes 16GB of RAM as well as a 512GB SSD. I wouldn’t be the least bit surprised if the machine is also clocked a bit higher than the iPads, thanks to the device’s larger form factor, but in an interesting twist of fate it’s still likely to be slower than the iPhone 11 series, which uses the newer A13 SoC. The upside, at least, is that the A12Z sets a rather high floor for performance, which in turn encourages developers to write efficient applications. So if developers can get their applications running well on an A12Z device, then they should have no problems whatsoever running those apps on future A14-derived silicon.

And although the A12Z SoC inside the DTKs is a known quantity at this point, like their other beta programs, Apple will be keeping a tight lid on performance. The DTK license agreement bans public benchmarking, and even though developers will pay $500 to take part in the program, the DTKs remain the property of Apple and must be returned. So while leaks will undoubtedly drip out over the coming months, it would seem that we’re not going to get the chance to do any kind of extensive, above-the-board performance testing of Mac-on-Arm hardware until the final consumer systems come out late this year.

Closing Thoughts

While the confluence of events that have led to Apple’s decision may have removed any surprise from today’s announcement, there is no downplaying the significance of what Apple has decided to do. To go vertically integrated – developing their own chips and controlling virtually every aspect of Mac hardware and software – is a big move on its own. But the fact that Apple will be doing this while simultaneously transitioning the macOS and the larger Mac software ecosystem to another Instruction Set Architecture makes it all the more monumental. A large number of things have to go right for the company in order to successfully make the transition, both at the hardware and the software level. Every group within the Mac division will be up to bat, as it were.

The good news is that, as a fixture of the personal computing industry since the very beginning, there are few companies more experienced in these sorts of transitions than Apple. The move to x86 almost 15 years ago more or less created the playbook for Apple’s move to Arm over the next two years, as many of the software and hardware challenges are the same. By not tying themselves down with legacy software – and by forcing developers to keep up or perish – Apple has remained nimble enough to pull off these kinds of transitions, and to do so in a handful of years instead of a decade or longer.

Overall, I’m incredibly excited to see what Apple can do for the Mac ecosystem by supplying their own chips, as they have accomplished some amazing things with their A-series silicon thus far. However, it’s also an announcement that brings mixed feelings. Apple’s original move to x86 finally unified the desktop computing market behind a single ISA, letting one system natively run macOS, Windows, and everything in between. But after 15 years of a software compatibility utopia, the PC market is about to become fractured once again.

240 Comments

  • Kurosaki - Thursday, June 25, 2020

    Oh, shit, that isn't official yet? Nevermind me. You don't know where I can find an edit function in this section?
  • Kurosaki - Thursday, June 25, 2020

    No, but I just presume they will when looking forward. The X1 could be seen as an 8-wide CPU in many use cases:
    "The fetch bandwidth out of the L1I has been bumped up 25% from 4 to 5 instructions with a corresponding increase in the decoder bandwidth, and the fetch and rename bandwidth out of the Mop-cache has seen a 33% increase from 6 to 8 instructions per cycle. In effect, the core can act as a 8-wide machine as long as it’s hitting the Mop cache."
    The A12 and 13 have been known for being extremely wide cores, the latter 7-wide, so don't expect them to lag behind in this sense. Only time will tell I guess!
  • Deicidium369 - Tuesday, June 23, 2020

    Was a niche product created for and with HP. Was NEVER going to be mainstream.
  • Korguz - Tuesday, June 23, 2020

    " Intel's product marketing and industry engagement efforts were substantial and achieved design wins with the majority of enterprise server OEM's including those based on RISC processors at the time, industry analysts predicted that IA-64 would dominate in servers, workstations, and high-end desktops, and eventually supplant RISC and complex instruction set computing (CISC) architectures for all general-purpose applications "

    source : https://en.wikipedia.org/wiki/IA-64

    " Beyond Kittson, there will be no more chips coming from the Itanium family, an Intel spokesman said in an email. That ends a tumultuous, 16-year journey for Itanium, which Intel once envisioned as a replacement for x86 chips in 64-bit PCs and servers." " Intel hedged its bets and planned for Itanium 64—also called IA-64—to ultimately go down the stack from servers to PCs. But that never happened, and the market shifted quickly after AMD introduced the first 64-bit x86 server chips in 2003. That gave AMD a competitive edge over Intel, which still was offering 32-bit x86 chips. " " The transition disrupted Intel’s vision of Itanium as an architecture of the future for 64-bit servers and PCs. Instead, x86 chips started moving up the stack into more powerful servers. "

    source : https://www.pcworld.com/article/3196080/intels-ita...

    " When Intel launched its first Itanium processor in 2001, it had very high hopes: the 64-bit chip was supposed to do nothing less than kill off the x86 architecture that had dominated PCs for over two decades. Things didn't quite pan out that way, however, and Intel is officially calling it quits. " " The news marks the quiet end to a tumultuous saga. Itanium was supposed to represent a clean break from x86 that put Intel firmly into the 64-bit era. It was first intended for high-end servers and workstations, but it was eventually supposed to find its way into home PCs. Needless to say, that's not how it worked out. "

    source : https://www.engadget.com/2017-05-13-intel-ships-la...

    it wasn't created just for HP, with HP; it WAS supposed to replace x86. Sorry Deicidium369 but you are not correct.
  • BenSkywalker - Monday, June 22, 2020

    Right now Intel chips still easily outperform ARM at the higher tiers; yes, ARM is far more power efficient. The reason ARM has gotten so much closer is that Intel has been stuck for a long time on one node, but let's pretend they stay stuck and don't do anything about it. Five years from now ARM *might* be able to match their performance if both are running native code, but there is ZERO chance ARM would be anywhere close under emulation, which would be required for the PC platform to migrate.

    To be clear, I'd love to see the PC market evolve into something where we could see more competition on the CPU side, but we already had decades where x86 was clearly inferior (MIPS, SPARC, Alpha, etc.) and still had an iron grip on the PC market.

    The GPU side is even less likely. Imagination has been making graphics chips for a very long time now. They've always done very well in lower power situations, and they've never been able to scale to the high end. We've seen Intel throw billions at the graphics side and utterly fail too; everyone else that dominated the market in the early days is gone now. We didn't ask for a duopoly in the graphics segment, we got there because no one else could compete.
  • Kurosaki - Monday, June 22, 2020

    I seriously believe we are in for a paradigm shift here. An older-gen ARM chip at 3W outperforms a 125W Intel chip single-core wise. Intel is going to have to prove their point with 7nm or whatever. AMD is already on the same node and the performance isn't stellar there either if we compare to the performance of ARM in general. Graphics-wise, it might be as bad, I'm afraid, or not afraid. Would be fun to be able to play some VR games with decent performance soon...
  • mattbe - Monday, June 22, 2020

    >AMD is already on the same node and the performance isn't stellar there either if we compare to the performance of ARM in general. Graphics-wise, it might be as bad, I'm afraid, or not afraid. Would be fun to be able to play some VR games with decent performance soon...

    This comment is so dumb I can't even tell if you are being serious or trolling. You seem to have this weird fantasy that arm is much better at EVERYTHING while ignoring context and what's actually out there.

    For example, the current AMD RDNA 7nm chips are not even close to efficient. Nvidia's 12nm offerings are just as, if not more, efficient. This is partly due to the clocks being pushed too high, past the power-efficient point, to get extra performance. Remember, this is hardware not designed for power saving.

    I also find it hilarious that you think AMD and Nvidia will somehow be "replaced" in 4 years. There is nothing to suggest that Apple is able to develop anything on the GPU side that comes close. Remember, Nvidia is no Intel; they are a market leader and are still constantly innovating. The gpu
  • mattbe - Monday, June 22, 2020

    >An older-gen ARM chip at 3W outperforms a 125W Intel chip single-core wise

    Did you actually look at the graph in this very article or are you just making things up?
  • vladx - Tuesday, June 23, 2020

    He's a delusional Apple fanboy, only someone like that could make such a laughable claim.
  • Kurosaki - Tuesday, June 23, 2020

    Have not owned an Apple product my whole life. What I'm seriously excited about is the raw performance at extremely low energy levels. :)
    Look at what the ARM X1 is doing, slap it on a motherboard, and turn the frequencies up a notch. Cool it with a Noctua good for 200W.
    I'm just tired of the stagnation in the PC industry, where higher prices for the same performance have become very common. ARM and Imagination are very much welcome to fix this, and if Apple is first out in this shift, good for them. I welcome the new PC master race, developed for mobile.
    The Apple GPU seems to deliver 5.7 fps per watt in 3DMark according to this site; that's friggin' amazing if we could get it to scale with the power envelope. Would gladly buy a 150W GPU with that type of performance.
