Rosetta2: x86-64 Translation Performance

Because the new Apple Silicon Macs are based on a new ISA, the hardware isn’t capable of running the existing x86-based software that has been developed over the past 15 years. At least, not without help.

Apple’s Rosetta2 is a new ahead-of-time binary translation system that translates existing x86-64 software to AArch64 and then runs that code on the new Apple Silicon CPUs.

So, what do you have to do to run Rosetta2 and x86 apps? The answer is pretty much nothing. As long as a given application has an x86-64 code path with at most SSE4.2 instructions, Rosetta2 and the new macOS Big Sur will take care of everything in the background, and beyond its performance you won’t notice any difference from a native application.
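Software can, however, ask the OS whether it is currently being translated: Apple documents a `sysctl.proc_translated` sysctl key for exactly this purpose. A minimal sketch in Python via `ctypes` (on non-macOS platforms, or on macOS versions without the key, this simply reports false):

```python
import ctypes
import sys

def running_under_rosetta() -> bool:
    """Return True when the current process is being translated by Rosetta 2.

    Queries the Apple-documented "sysctl.proc_translated" sysctl key.
    On non-macOS platforms this returns False without touching libc.
    """
    if sys.platform != "darwin":
        return False
    libc = ctypes.CDLL(None)
    translated = ctypes.c_int(0)
    size = ctypes.c_size_t(ctypes.sizeof(translated))
    # sysctlbyname returns 0 on success; the key is absent on older macOS
    ret = libc.sysctlbyname(b"sysctl.proc_translated",
                            ctypes.byref(translated), ctypes.byref(size),
                            None, 0)
    return ret == 0 and translated.value == 1

print(running_under_rosetta())
```

A native arm64 process and a process on an Intel Mac both report false; only an x86-64 process being translated on Apple Silicon reports true.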

Actually, Apple’s transparent handling of things is maybe a little too transparent, as currently there’s no way to even tell whether an application on the App Store actually supports the new Apple Silicon or not. Hopefully this is something that we’ll see improved in future updates, serving also as an incentive for developers to port their applications to native code. Of course, developers can now target both x86-64 and AArch64 via “universal binaries”, which are essentially just the respective architectures’ binaries glued together.
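The “universal binary” container is the long-standing Mach-O fat format: a big-endian header (magic `0xCAFEBABE`) followed by one `fat_arch` record per architecture slice. A rough sketch of how the slices can be enumerated, run here against a synthetic two-slice header rather than a real binary:

```python
import struct

FAT_MAGIC = 0xCAFEBABE  # big-endian magic of a universal (fat) Mach-O file
CPU_NAMES = {0x01000007: "x86_64", 0x0100000C: "arm64"}

def fat_archs(header: bytes):
    """Return the architecture names contained in a universal binary header."""
    magic, nfat = struct.unpack_from(">II", header, 0)
    if magic != FAT_MAGIC:
        return []  # thin binary: a single architecture, no fat header
    archs = []
    for i in range(nfat):
        # each fat_arch record: cputype, cpusubtype, offset, size, align (5 x uint32)
        cputype, = struct.unpack_from(">I", header, 8 + i * 20)
        archs.append(CPU_NAMES.get(cputype, hex(cputype)))
    return archs

# synthetic two-slice header: an x86-64 slice plus an arm64 slice
hdr = struct.pack(">II", FAT_MAGIC, 2)
hdr += struct.pack(">IIIII", 0x01000007, 3, 0x4000, 0x1000, 14)
hdr += struct.pack(">IIIII", 0x0100000C, 0, 0x8000, 0x1000, 14)
print(fat_archs(hdr))  # ['x86_64', 'arm64']
```

On a real Mac the same information comes from `lipo -archs` or `file` against the application binary.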

We didn’t have time to investigate which software runs well and which doesn’t (I’m sure other publications will do a much better job across a wider variety of workloads), but I did want to post some more concrete numbers on how performance scales across different kinds of workloads, by running SPEC both natively and in x86-64 binary form through Rosetta2:
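The per-workload percentages in the charts that follow are simply the Rosetta2 score expressed as a percentage of the native AArch64 score. A trivial sketch of that calculation; note the score values here are illustrative, not our measured SPEC numbers:

```python
def rosetta_scaling(native: dict, translated: dict) -> dict:
    """Rosetta2 score as a percentage of the native AArch64 score."""
    return {bench: round(100.0 * translated[bench] / native[bench], 2)
            for bench in native}

# illustrative (not measured) score pairs
native_scores     = {"502.gcc_r": 10.00, "525.x264_r": 10.00}
translated_scores = {"502.gcc_r":  4.987, "525.x264_r":  8.10}
print(rosetta_scaling(native_scores, translated_scores))
# {'502.gcc_r': 49.87, '525.x264_r': 81.0}
```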

SPECint2006 - Rosetta2 vs Native Score %

In SPECint2006, there’s a wide range of performance scaling depending on the workload, with some doing quite well while others not so much.

The workloads that fare best under Rosetta2 look to be primarily those with a larger memory footprint that interact more with memory, scaling even above 90% of the performance of the native AArch64 binaries.

The workloads that do the worst are execution- and compute-heavy workloads, with the absolute worst scaling in the L1-resident 456.hmmer test, followed by 464.h264ref.

SPECfp2006(C/C++) - Rosetta2 vs Native Score %

In the fp2006 workloads, things do relatively well, except for 470.lbm, which has a tight instruction loop.

SPECint2017(C/C++) - Rosetta2 vs Native Score %

In the int2017 tests, what stands out is the horrible performance of 502.gcc_r, which achieves only 49.87% of the native workload’s performance, probably due to high code complexity and overall uncommon code patterns.

SPECfp2017(C/C++) - Rosetta2 vs Native Score %

Finally, in fp2017, we’re again averaging in the 70-80% performance range, depending on the workload’s code.

Generally, all of these results should be considered outstanding given the feat Apple is achieving here in terms of code-translation technology. This is not a lacklustre emulator, but a full-fledged compatibility layer that, combined with the outstanding performance of the Apple M1, delivers very real and usable performance from the existing software repertoire in Apple’s macOS ecosystem.

682 Comments

  • Spunjji - Tuesday, November 17, 2020 - link

    Apples to Apples is comparing what's on offer to what's on offer. If AMD were on the verge of releasing Zen 4 I'd say hey, let's see what's in store, but you have to admit that right now Apple have leveraged 5nm to build a fundamentally solid design that has some significant competitive advantages.

    I suspect M1 will lose to AMD's offerings at the ~25W mark once Cezanne hits, which means good things for Zen 4 when it does indeed arrive, but this is a mighty good CPU as it stands at this time in 2020.
  • lilmoe - Tuesday, November 17, 2020 - link

    Beating Intel doesn't say much. Intel is a well-known problem in the tech industry, and Apple isn't the only one complaining. Mobile Ryzen goes neck and neck with Intel desktop; this is a well-known fact. Apple doesn't have any breakthrough here; AMD did earlier this year. TSMC has a great 7nm and a breakthrough 5nm process (Your move, Sammy).

    The M1 is Apple's A-Game in single thread. You won't see double digit improvements YoY.

    Apples to apples (pro review):
    - Consistent Charts.
    - M1+Rosetta2 VS Zen2(4800U): THAT's what's available today.
    - M1 Native vs Zen3 (Prediction/Analysis for Zen4): THAT is what M1/M2 Native will go up against.
    - M1 Chrome/Javascript VS 4800U Chrome, NOT Safari+M1(Native) VS Chrome/Edge+4800U. That's a browser benchmark, not a CPU benchmark. Chrome all the way, then an educated prediction into how much Native would improve that.
    - Actual popular games.

    I'm dismissing this entire review. Any monkey can install and run a couple of benchmark apps.
  • andreltrn - Tuesday, November 17, 2020 - link

    Why Chrome? What you people don't get is that people buying a Mac mini or a MacBook Air are buying a device, like you would buy a refrigerator or a Nest thermostat. They will use what is best for that DEVICE. They don't care about the processor. Even a processor (CPU) comparison is bulls... when comparing the M1 with the AMD laptop offering. The M1 is a SOC with way more built-in functionality, such as an ML processor and accelerator and much more. An (Intel or AMD) laptop CPU couldn't do what the M1 does in a properly coded app, e.g. Final Cut Pro or Logic, or Safari for that matter. A device is the sum of its parts, not only a CPU, especially in a laptop.
  • vlad42 - Tuesday, November 17, 2020 - link

    Why Chrome? Well, because it is available for both operating systems. Of course, another browser such as Firefox could also be used.

    We have seen time and time again that the web browser used can have an enormous impact on the results of browser-based benchmarks. As you can see in Anandtech's browser comparison from 9/10/2020, https://www.anandtech.com/show/16078/the-2020-brow... the Chromium version of Edge outperforms Firefox in Speedometer 2.0 by roughly 35%! Since this article is not trying to compare the performance of different web browsers, the browser used should be kept the same.

    In addition, since Speedometer 2.0 was made by Apple, it is highly likely that they put more weight on Safari updates improving the Speedometer score than, say, Google does with Chrome.
  • helpmeoutnow - Thursday, November 26, 2020 - link

    @andreltrn lol keep it real. we are talking benchmarks. you can only compare software that runs on both systems.
  • Spunjji - Tuesday, November 17, 2020 - link

    Also, Zen 3 is in some of those comparisons... It wins as you'd expect, but dropping to 5nm wouldn't magically bring it to M1 power levels.
  • vlad42 - Tuesday, November 17, 2020 - link

    5nm would help by reducing the voltage, and thus power draw, for the chip. The bigger thing to remember is that there are no mobile versions of Zen 3 yet. Consider that the 5950X is only ~37% faster than the 4800U in single-threaded Cinebench despite having a TDP 7 times higher. If the 5800U ends up having the same clocks as the 4800U, then the M1 would roughly have a 7% perf/W advantage. Granted, this assumes the 5800U’s score would be 19% faster than the 4800U's 1199.

    So, given the expected benefits that TSMC, Samsung, etc. have touted about 5nm, a die shrink from 7nm to 5nm would easily make up for this difference in power efficiency.
  • R3lay - Wednesday, November 18, 2020 - link

    You can't compare single-core performance and then compare that to the TDP. At single core the 5950X doesn't use 7x more power.
  • Kangal - Saturday, November 21, 2020 - link

    To be honest, a lot of comparisons of the Apple Silicon M1 are vague, misrepresentative or blatantly off. The best representative benchmarks I've seen are:

    Single Core, Geekbench v5, 5min run, Rosetta2
    2020 Macbook Air (10W M1): ~1300 score
    2019 MacBook Pro 16in (35W i9-9880H): ~1100 points
    AMD Zen2+ Laptop (35W r9-4900HS): ~920 points
    2019 Macbook Pro 13in (15W i5-8257U): ~900 points
    AMD Zen2+ Laptop (20W r7-4800U): ~750 points

    Multi-Thread, CineBench r23, 10min run, Rosetta2
    AMD Zen2+ Laptop (35W r9-4900HS): ~11,000 score
    AMD Zen2+ Laptop (20W r7-4800U): ~9,200 score
    2019 MacBook Pro 16in (35W i9-9880H): ~9,100 score
    2020 Macbook Air (10W M1): ~7,100 score
    2019 Macbook Pro 13in (15W i5-8257U): ~5,100 score

    Rendering Performance, Final Cut ProX, 10min clip
    AMD Zen2+ Laptop (35W r9-4900HS): error on ryzentosh
    AMD Zen2+ Laptop (20W r7-4800U): error on ryzentosh
    2019 MacBook Pro 16in (35W i9-9880H): ~360 seconds
    2020 Macbook Air (10W M1): ~410 seconds
    2019 Macbook Pro 13in (15W i5-8257U): ~1100 seconds

    GPU Performance, GFXBench v5 Aztec Ruins High, Rosetta2
    2019 MacBook Pro 16in (i9 5600M): ~79 fps
    2020 Macbook Air (M1 8CU): ~76 fps
    AMD Zen2+ Laptop (r9 Vega-8): ~39 fps
    AMD Zen2+ Laptop (r7 Vega-7): ~36 fps
    2019 Macbook Pro 13in (i5 Iris Pro): ~20 fps

    Gaming Performance, Rise of the Tomb Raider, 1080p High
    2019 MacBook Pro 16in (i9 5600M): ~70 fps
    2020 Macbook Air (M1 8CU): ~40 fps
    AMD Zen2+ Laptop (r9 Vega-8): ~23 fps
    AMD Zen2+ Laptop (r7 Vega-7): ~21 fps
    2019 Macbook Pro 13in (i5 Iris Pro): ~12 fps

    ....so I share the well-grounded outlook that Dave Lee (D2D) has on the matter, where Linus (LTT) was more pessimistic than Dave but I think his opinions are pretty neutral overall. I simply out-right reject the unprofessional and unrealistic look that Andrei (Anandtech) has displayed in the previous article. Nor am I fully on-board with the overly-optimistic perspective that Jonathan Morrison demonstrated.
  • Kangal - Saturday, November 21, 2020 - link

    More thoughts on the matter...

    I get there's the argument to be made that new, modern and more efficient apps are coming natively, that single-core is most important, low TDP is very important, and race to idle (or at least race to the small cores) is important. From that perspective, the M1 in the MacBook Air is the best by a HUGE margin. We're talking a 3x better overall experience than the best x86 devices in such comparisons.

    Then there's the alternate debate: that what you get is what you get. So legacy program performance is most important, single-core is no longer the be-all-end-all, multi-thread is relevant for actual "Pro" users, and sustained performance is just as important as TDP. When looking from that perspective, the Apple M1 in a MacBook Pro 16/13 is only equivalent to the very best x86 device performances. So basically a meh situation, not a paradigm shift.

    So what can we realistically postulate from this, and expect from Apple and the industry?
    Firstly, Apple disappointed us with the M1. In short, Apple played it safe and didn't really do their best. That means they purposely left performance on the table; it was artificial and it was deliberate. The why is simple: just so that they can incrementally introduce these increases, that way they can incentivise customers. In long, what they have now, the 4/8 setup, is somewhat reminiscent of the current high-end phablets, or the 4c/8t hyperthreading setup of Intel CPUs, or the older AMD Bulldozer setup. At these thicknesses, there's really no need for the medium cores; they should have killed them and stuck with an 8-large-core design instead. These large ARM cores aren't too different from x86 cores in size, so they could have afforded that silicon cost. As for operation, simply undervolt/underclock (1.5GHz) the whole stack, and ramp up 1-2 cores to high clocks/volts (3.5GHz) dynamically when necessary. That makes thread allocation simple, and here simple means more efficient software. And this means we could see a performance difference moving from an 11in passive-cooled device to a 17in active-cooled device. For example, 8 cores running at 4.0GHz, versus 2 cores running at 3.5GHz. And let's not forget the GPU, which is fine as an 8CU (~GTX 1050) on an "ultraportable" like an 11in Macbook Air. But we were expecting something more like 16CU (~GTX 1660) for the "regular laptop" 13in MacBook, and an even beefier 32CU (~RTX 2070) for a "large laptop" 17in MacBook Pro. On top of this, the new SoC demands a smaller size internally, so we should have seen much more compact Mac devices, and Apple didn't take advantage of this.

    Another place Apple dropped the ball is that they have fewer PCIe ports allocated. There is no dedicated GPU or eGPU option available. Their current iGPU is about on par with a GTX 1050, so impressive against AMD's and Intel's iGPUs... but it's still behind modern (low-profile) dedicated GPUs from Nvidia's Volta or AMD's RDNA2. There's no support for 32bit x86 programs. And lastly, there is no bootloader support, so that people can run another OS such as Android, a Linux distro, Windows10 ARM/Mobile (or perhaps even boot an x86 OS via a low-level translator).

    And here's what Apple got right.
    They released the Mac Mini Zero/Development device a year early to get developers primed. Their new Operating System, which is definitely NOT the same OS X (macOS), but is an "iOS-Pro OS", actually is stable. Their forwards-compatibility with iOS Apps runs without issues. Their backwards-compatibility for 64bit-macOS Apps actually runs very, very well (some code, such as the gpu-APIs, is actually processed natively). And we can only surmise that most current Apps will run (average 60%) almost as well as running natively (min 49% to max 94%), something Microsoft dropped the ball on with Windows 8/RT and have dragged their feet on since. Whilst in the near future (3-4 years), they will remove the actual hardware coprocessors that handle this x86-to-ARM translation, and they will use that "silicon budget" to add to the SoC, slightly improving native performance further. So with updating Applications, improving microarchitecture, improving lithography, increasing silicon budget, and thus extending it from an efficient design (4B+4s) to an (8 Big) performance design...... we will see performance literally 2x-4x in the coming 2-4 year timeframe (think Apple M2, M3, M4 in 2024). And I didn't even mention GPU improvements either. That's just too much pressure on the whole industry (Lenovo, HP, Dell, ASUS), and more specifically on Microsoft, AMD Zen, and Intel (lol) when it comes to their roadmap.

    Plus, the current setup of 4 big and 4 medium cores is adequate, but works wonders for low-impact tasks and thermally limited devices. And they have demonstrated that their software is mature in handling these hybrid systems. So the current setup means the Macbook Air (ultra thin/light) gets a phenomenal leap, and future iterations will benefit from this setup too. It also means less R&D time/effort/cost is necessary, as most of the work between the smallest iPhone Mini, the medium-sized iPad Mini, and the much larger Macs is closely related, as far as the SoC is concerned. And it's a brilliant move to keep the current x86 line and launch identical hardware with the M1 silicon, so all feedback will provide insight for future Silicon-M designs.

    I personally think they're going to move to a better-quality keyboard (bye crappy Butterfly) now that there is more internal space to play around with. And they will add new features to the Macs that are already included in iPhones, like a barometer, GPS, etc etc. Also, they will add Apple Pencil support (but no silo), probably with a magnetic holder. Lastly, I think they're going to evolve the design of the Macbooks... they will all have OLED HDR10+ displays, maybe in 4K-5K resolutions, have a proper touchscreen, and mimic the Lenovo Yoga style with a 360° hinge.
