The Move to 64-bit

Prior to the iPhone 5s launch, I heard a rumor that Apple would move to a 64-bit architecture with its A7 SoC. I initially discounted the rumor given the pain of moving to 64-bit from a validation standpoint and the upside not being worth it. Obviously, I was wrong.

In the PC world, most users are familiar with the 64-bit transition as something AMD started in the early 2000s. The primary motivation back then was to enable greater memory addressability by moving from 32-bit addresses (2^32, or 4GB) to 64-bit addresses (2^64, or 16EB). Supporting up to 16 exabytes of memory from the get-go seemed a little unnecessary, so AMD's x86-64 ISA only uses 48 bits for unique memory addresses (256TB of memory). Along with the move from x86 to x86-64 came some small performance enhancements thanks to more available general purpose registers in 64-bit mode.
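As a quick sanity check on those address-space figures, here's a minimal, hypothetical C snippet that just works out the powers of two mentioned above; nothing in it is specific to any particular CPU:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Bytes addressable with 32- and 48-bit virtual addresses. */
    uint64_t bytes32 = 1ULL << 32;   /* 4 GiB   */
    uint64_t bytes48 = 1ULL << 48;   /* 256 TiB */

    printf("32-bit addresses: %llu GiB\n", (unsigned long long)(bytes32 >> 30)); /* 4   */
    printf("48-bit addresses: %llu TiB\n", (unsigned long long)(bytes48 >> 40)); /* 256 */
    /* 2^64 bytes won't fit in a 64-bit integer, so express it as 2^(64-60) EiB. */
    printf("64-bit addresses: %llu EiB\n", (unsigned long long)(1ULL << 4));     /* 16  */
    return 0;
}
```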

In the ARM world, the move to 64-bit is motivated primarily by the same factor: a desire for more memory. Remember that ARM and its partners have high hopes of eating into Intel's high margin server business, and you really can't play there without 64-bit support. ARM has already announced its first two 64-bit CPU cores: the Cortex A57 and Cortex A53. The ISA itself is referred to as ARMv8, a logical successor to the present-day 32-bit ARMv7.

Unlike the 64-bit x86 transition, ARM’s move to 64-bit comes with a new ISA rather than an extension of the old one. The new instruction set is referred to as A64, while a largely backwards compatible 32-bit format is called A32. Both ISAs can be supported by a single microprocessor design, as ARMv8 features two architectural states: AArch32 and AArch64. Designs that implement both states can switch/interleave between the two states on exception boundaries. In other words, despite A64 being a new ISA you’ll still be able to run old code alongside it. As always, in order to support both you need an OS with support for A64. You can’t run A64 code on an A32 OS. It is also possible to do an A64/AArch64-only design, which is something some server players are considering where backwards compatibility isn’t such a big deal.

Cyclone is a full implementation of ARMv8 with both AArch32 and AArch64 states. Given Apple’s desire to maintain backwards compatibility with existing iOS apps and not unnecessarily fragment the ARM ecosystem, simply embracing ARMv8 makes a lot of sense.

The motivation for Apple to go 64-bit isn’t necessarily one of needing more address space immediately. A look at Apple’s historical scaling of memory capacity tells us everything we need to know:

At best Apple doubled memory capacity between generations, and at worst it took two generations before doubling. The iPhone 5s ships with 1GB of LPDDR3, keeping memory capacity the same as the iPhone 5, iPad 3 and iPad 4. It’s pretty safe to assume that Apple will go to 2GB with the iPhone 6 (and perhaps iPad 5), and then either stay there for the 6s or double again to 4GB. The soonest Apple would need 64-bit from a memory addressability standpoint in an iOS device would be 2015, and the latest would be 2016. Moving to 64-bit now preempts Apple’s hardware needs by 2 full years.

The more I think about it, the more the timing actually makes a lot of sense. The latest Xcode beta and LLVM compiler are both ARMv8 aware. Presumably all apps built starting with the official iOS 7 release and going forward could be built 64-bit aware. By the time 2015/2016 rolls around and Apple starts bumping into 32-bit addressability concerns, not only will it have navigated the OS transition but a huge number of apps will already be built for 64-bit. Apple tends to do well with these sorts of transitions, so starting early like this isn’t unusual. The rest of the ARM ecosystem is expected to begin moving to ARMv8 next year.

Apple isn't very focused on delivering a larger memory address space today, however. As A64 is a brand new ISA, there are other benefits that come along with the move. Similar to the x86-64 transition, the move to A64 comes with an increase in the number of general purpose registers. ARMv7 had 15 general purpose registers (plus one register for the program counter), while ARMv8/A64 has 31, each 64 bits wide. All 31 registers are accessible at all times. Increasing the number of architectural registers decreases register pressure and can directly impact performance. The doubling of the register space with x86-64 was responsible for up to a 10% increase in performance.

The original ARM architecture allowed nearly all instructions to be conditionally executed, which consumed a huge amount of the instruction encoding space. In ARMv8/A64 the set of instructions that can execute conditionally is far more limited.

The move to ARMv8 also doubles the number of FP/NEON registers (from 16 to 32) and widens all of them to 128 bits (up from 64 bits). Support for 128-bit registers can go a long way in improving SIMD performance. Whereas simply doubling register count can provide moderate increases in performance, doubling the size of each register can be far more significant given the right workload. There are also new advanced SIMD instructions that are part of ARMv8; double precision SIMD FP math is now supported, among other things.
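As a rough illustration of the double-precision SIMD support that only exists in the AArch64 NEON register file, here's a minimal sketch using ACLE intrinsics from <arm_neon.h>. The function name and loop are hypothetical; it assumes a compiler targeting arm64:

```c
#include <arm_neon.h>
#include <stddef.h>

/* y[i] += a * x[i], two doubles per iteration using 128-bit NEON registers.
   float64x2_t and the *_f64 intrinsics exist only in AArch64 state; ARMv7 NEON
   has no double-precision SIMD, so this loop would have to remain scalar there. */
void daxpy_neon(double a, const double *x, double *y, size_t n) {
    float64x2_t va = vdupq_n_f64(a);            /* broadcast a into both lanes */
    size_t i = 0;
    for (; i + 2 <= n; i += 2) {
        float64x2_t vx = vld1q_f64(&x[i]);      /* load two doubles */
        float64x2_t vy = vld1q_f64(&y[i]);
        vy = vfmaq_f64(vy, va, vx);             /* fused multiply-add: vy + va*vx */
        vst1q_f64(&y[i], vy);
    }
    for (; i < n; i++)                          /* scalar tail */
        y[i] += a * x[i];
}
```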

ARMv8 also adds some new cryptographic instructions for hardware acceleration of AES and SHA1/SHA256 algorithms. These hardware AES/SHA instructions have the potential for huge increases in performance, just like we saw with the introduction of AES-NI on Intel CPUs a few years back. Both the new advanced SIMD instructions and AES/SHA instructions are really designed to enable a new wave of iOS apps.
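To give a sense of how those instructions get used, here's a minimal sketch of a single AES round step built on the ARMv8 Crypto Extension intrinsics in <arm_neon.h>. It assumes a toolchain that defines __ARM_FEATURE_CRYPTO; in practice an iOS app would typically call a crypto library rather than hand-written intrinsics:

```c
#include <arm_neon.h>

#if defined(__ARM_FEATURE_CRYPTO)
/* One AES encryption round on a 128-bit block:
   vaeseq_u8  = AddRoundKey + SubBytes + ShiftRows (AESE instruction)
   vaesmcq_u8 = MixColumns                         (AESMC instruction)
   A full AES-128 encryption chains ten such rounds (the last without MixColumns). */
static inline uint8x16_t aes_round(uint8x16_t block, uint8x16_t round_key) {
    return vaesmcq_u8(vaeseq_u8(block, round_key));
}
#endif
```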

Many A64 instructions can also work with 32-bit operands, with properly implemented designs simply power gating the unused bits. The A32 instruction set in ARMv8 also adds some new instructions, so it's possible to compile AArch32 apps that won't run on older ARMv7 hardware. All existing ARMv7 and 32-bit Thumb code should work just fine, however.

On the software side, iOS 7 as well as all first party apps ship already compiled for AArch64 operation. In fact, at boot, there isn’t a single AArch32 process running on the iPhone 5s:

Safari, Mail, everything made the move to 64-bit right away. Given the popularity of these first party apps, it's not just the hardware that's 64-bit ready but much of the software as well. The industry often speaks about Apple's vertically integrated advantage, and this is quite possibly the best example of it. In many ways it reminds me of the Retina Display transition on OS X.

Running A32 and A64 applications in parallel is seamless. On the phone itself, it’s impossible to tell when you’re running in a mixed environment or when everything you’re running is 64-bit. It all just works.
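For developers who do want to know which state their process ended up in, a minimal sketch follows. The check itself is plain C; __LP64__ and __arm64__ are predefined compiler macros on Apple's 64-bit toolchain:

```c
#include <stdio.h>

int main(void) {
    /* Pointer size is the simplest runtime indicator:
       8 bytes under AArch64 (LP64), 4 bytes under AArch32. */
    printf("pointer size: %zu bytes\n", sizeof(void *));

#if defined(__LP64__) || defined(__arm64__)
    printf("built as a 64-bit (A64) binary\n");
#else
    printf("built as a 32-bit (A32) binary\n");
#endif
    return 0;
}
```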

I didn’t run into any backwards compatibility issues with existing 32-bit ARMv7 apps either. From an end user perspective, navigating the 64-bit transition is as simple as buying an iPhone 5s.

64-bit Performance Gains

Geekbench 3 was among the first apps to be updated with ARMv8 support. There are some minor changes between the new version (3.1) and its predecessor (3.0); however, the tests themselves (except for the memory benchmarks) haven't changed. What this allows us to do is look at the impact of the new ARMv8 A64 instructions and larger register space. We'll start with a look at integer performance:

Apple A7 - AArch64 vs. AArch32 Performance Comparison
| Benchmark | 32-bit (A32) | 64-bit (A64) | % Advantage |
|---|---|---|---|
| AES | 91.5 MB/s | 846.2 MB/s | 825% |
| AES MT | 180.2 MB/s | 1640.0 MB/s | 810% |
| Twofish | 59.9 MB/s | 55.6 MB/s | -8% |
| Twofish MT | 119.1 MB/s | 110.2 MB/s | -8% |
| SHA1 | 138.0 MB/s | 477.3 MB/s | 245% |
| SHA1 MT | 275.7 MB/s | 948.9 MB/s | 244% |
| SHA2 | 86.1 MB/s | 102.2 MB/s | 18% |
| SHA2 MT | 171.3 MB/s | 203.7 MB/s | 18% |
| BZip2 Compress | 4.36 MB/s | 4.52 MB/s | 3% |
| BZip2 Compress MT | 8.57 MB/s | 8.86 MB/s | 3% |
| BZip2 Decompress | 5.94 MB/s | 7.56 MB/s | 27% |
| BZip2 Decompress MT | 11.7 MB/s | 15.0 MB/s | 28% |
| JPEG Compress | 15.5 MPixels/s | 16.8 MPixels/s | 8% |
| JPEG Compress MT | 30.8 MPixels/s | 33.3 MPixels/s | 8% |
| JPEG Decompress | 36.0 MPixels/s | 40.3 MPixels/s | 11% |
| JPEG Decompress MT | 71.3 MPixels/s | 78.1 MPixels/s | 9% |
| PNG Compress | 0.84 MPixels/s | 1.14 MPixels/s | 35% |
| PNG Compress MT | 1.67 MPixels/s | 2.26 MPixels/s | 35% |
| PNG Decompress | 13.9 MPixels/s | 15.2 MPixels/s | 9% |
| PNG Decompress MT | 27.4 MPixels/s | 29.8 MPixels/s | 8% |
| Sobel | 59.3 MPixels/s | 58.0 MPixels/s | -3% |
| Sobel MT | 116.6 MPixels/s | 114.6 MPixels/s | -2% |
| Lua | 1.25 MB/s | 1.33 MB/s | 6% |
| Lua MT | 2.47 MB/s | 2.49 MB/s | 0% |
| Dijkstra | 5.35 MPairs/s | 4.05 MPairs/s | -25% |
| Dijkstra MT | 9.67 MPairs/s | 7.26 MPairs/s | -25% |

The AES and SHA1 gains are a direct result of the new cryptographic instructions that are a part of ARMv8. The AES test in particular shows nearly an order of magnitude performance improvement. This is similar to what we saw in the PC space with the introduction of Intel's AES-NI support in Westmere. The Dijkstra workload is the only real regression. That test in particular appears to be very pointer heavy, and the increase in pointer size from 32 to 64-bit increases cache pressure and causes the reduction in performance. The rest of the gains are much smaller, but still fairly significant if you take into account the fact that we're just looking at what you get from a recompile. Add these gains to the ones you're about to see over Apple's A6 SoC and A7 is looking really good from a performance standpoint.
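To illustrate why a pointer-heavy workload regresses, consider how a node in a linked data structure grows when pointers double in size. This is only a sketch with made-up field names; the sizes assume typical ILP32/LP64 alignment rules rather than anything specific to Geekbench's Dijkstra implementation:

```c
#include <stdio.h>

/* A pointer-heavy node like those a graph traversal might chase.
   Under ILP32 (A32) each pointer is 4 bytes; under LP64 (A64) each is 8 bytes,
   so the same node roughly doubles in size and fewer nodes fit per cache line. */
struct node {
    struct node *next;      /* 4 bytes on A32, 8 on A64 */
    struct node *prev;      /* 4 bytes on A32, 8 on A64 */
    struct edge *edges;     /* 4 bytes on A32, 8 on A64 */
    unsigned int distance;  /* 4 bytes on both          */
};

struct edge { struct node *to; unsigned int weight; };

int main(void) {
    /* Roughly 16 bytes when built for A32 vs. 32 bytes for A64 (with padding). */
    printf("sizeof(struct node) = %zu bytes\n", sizeof(struct node));
    return 0;
}
```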

If the integer results looked good, the FP results are even better:

Apple A7 - AArch64 vs. AArch32 Performance Comparison
| Benchmark | 32-bit (A32) | 64-bit (A64) | % Advantage |
|---|---|---|---|
| BlackScholes | 4.73 MNodes/s | 5.92 MNodes/s | 25% |
| BlackScholes MT | 9.57 MNodes/s | 12.0 MNodes/s | 25% |
| Mandelbrot | 930.2 MFLOPS | 929.9 MFLOPS | 0% |
| Mandelbrot MT | 1840 MFLOPS | 1850 MFLOPS | 0% |
| Sharpen Filter | 805.1 MFLOPS | 857 MFLOPS | 6% |
| Sharpen Filter MT | 1610 MFLOPS | 1710 MFLOPS | 6% |
| Blur Filter | 1.08 GFLOPS | 1.26 GFLOPS | 16% |
| Blur Filter MT | 2.15 GFLOPS | 2.47 GFLOPS | 14% |
| SGEMM | 3.09 GFLOPS | 3.34 GFLOPS | 8% |
| SGEMM MT | 6.08 GFLOPS | 6.56 GFLOPS | 7% |
| DGEMM | 0.56 GFLOPS | 1.66 GFLOPS | 195% |
| DGEMM MT | 1.11 GFLOPS | 3.24 GFLOPS | 191% |
| SFFT | 0.72 GFLOPS | 1.59 GFLOPS | 119% |
| SFFT MT | 1.44 GFLOPS | 3.17 GFLOPS | 120% |
| DFFT | 1.41 GFLOPS | 1.47 GFLOPS | 4% |
| DFFT MT | 2.78 GFLOPS | 2.91 GFLOPS | 4% |
| N-Body | 460.8 KPairs/s | 582.6 KPairs/s | 26% |
| N-Body MT | 917.6 KPairs/s | 1160.0 KPairs/s | 26% |
| Ray Trace | 1.52 MPixels/s | 2.31 MPixels/s | 51% |
| Ray Trace MT | 3.04 MPixels/s | 4.64 MPixels/s | 52% |

The DGEMM operations aren't vectorized under ARMv7, but they are under ARMv8 thanks to DP SIMD support, so you get huge speedups there from the recompile. The SFFT workload benefits handsomely from the increased register space, significantly reducing the number of loads and stores (there's something like a 30% reduction in instructions for the A64 codepath compared to the A32 codepath here). The conclusion? There are definitely reasons outside of needing more memory to go 64-bit.

A7 and OS X

Before I spent time with the A7 I assumed the only reason Apple would go 64-bit in mobile was to prepare for eventually deploying these chips in larger machines. A couple of years ago, when the Apple/Intel relationship was at its rockiest, I would've definitely said that's what was going on. Today, I'm far less convinced.

Apple continues to build its own SoCs and invest in them because honestly, no one else seems up to the job. Only recently do we have GPUs competitive with what Apple has been shipping, and with the A7 Apple nearly equals Intel's performance with Bay Trail on the CPU side. As far as Macs go though, there's still a big gap between the A7 and where Intel is at with Haswell. The deficiency that Intel had in the ultra mobile space simply doesn't translate to its position with the big Core chips. I don't see Apple bridging that gap anytime soon. On top of that, the Apple/Intel relationship is very good at this point.

Although Apple could conceivably keep innovating to the point where an A-series chip ends up powering a Mac, I don't think that's in the cards today.

Comments

  • Wilco1 - Thursday, September 19, 2013 - link

    The Geekbench results are indeed skewed by AES encryption. The author claimed AES was the only benchmark where they use hardware acceleration when available. There has been a debate about fixing the weighting or placing hardware accelerated benchmarks in a separate category to avoid skewing the results. So I'm hoping a future version will fix this.

    As for cross-platform benchmarking, Geekbench currently uses the default platform compiler (LLVM on iOS, GCC on Android, VC++ on Windows). So there will be compiler differences that skew results slightly. However this is also what you'd get if you built the same application for iOS and Android.
  • smartypnt4 - Thursday, September 19, 2013 - link

    A lot of the other stuff in Geekbench seems to be fairly representative, though. Except a few of the FP ones like the blur and sharpen tests...

    It surely can't be hard to have Geekbench omit those results. I think if they did that, you'd see that the A7 is roughly 50-60% faster than the A6 instead of 100% faster, but I'm not sure there. I'd have to go and do work to figure that out. Which is annoying :-)
  • name99 - Wednesday, September 18, 2013 - link

    I'd agree with the tweaks you suggest: (improved memory controller and prefetcher, doubling of L2, larger branch predictor tables).

    There is also scope for a wider CPU. Obviously the most simple-minded widening of a CPU substantially increases power, but there are ways to limit the extra power without compromising performance too much, if you are willing to spend the transistors. I think Apple is not just willing to spend the transistors, but will have them available to spend once they ditch 32-bit compatibility. At that point they can add a fourth decoder, use POWER style blocking of instructions to reduce retirement costs, and add whatever extra pipes make sense.
    The most useful improvement (in my experience) would be to up the L1 from being able to handle one load + one store per cycle to two loads + one store per cycle, but I don't know what the power cost of that is --- it may be too high.

    On the topic of minor tweaks, do we know what page size iOS uses? If they go from 4K to 16K and/or add support for large pages, they could get a 10% or so speed boost just from better TLB coverage.
    (And what's Android's story on this front? Do they stick with standard 4K pages, or do they utilize 16 or 64K pages and/or large pages?)
  • extide - Wednesday, September 18, 2013 - link

    Those are some pretty generous numbers you pulled out of your hat there. It's not as easy as just doing this and that and bam, you have something to compete with Intel's Core series stuff. No. I mean yeah, Apple has done a great job here and I wish someone else was making CPUs like this for Android phones, but oh well.
  • name99 - Wednesday, September 18, 2013 - link

    "Now, I will agree that this does prove that if Apple really wanted to, they could build something to compete with Haswell in terms of raw throughput."

    I agree with your point, but I think we should consider what an astonishing statement this is.
    Two years ago Apple wasn't selling its own CPU. They burst onto the scene and with their SECOND device they're at an IPC and a performance/watt that equals Intel! Equals THE competitor in this space, the guys who are using the best process on earth.

    If you don't consider that astonishing, you don't understand what has happened here.

    (And once again I'd make my pitch that THIS shows what Intel's fatal flaw is. The problem with x86 is not that it adds area to a design, or that it slows it down --- though it does both. The problem is that it makes design so damn complex that you're constantly lagging; and you're terrified of making large changes because you might screw up.
    Apple, saddled with only the much smaller ARM overhead, has been vastly more nimble than Intel.
    And it's only going to get worse if, as I expect, Apple ditches 32-bit ARM as soon as they can, in two years or so, giving them an even easier design target...)

    What's next for Apple?
    At the circuit level, I expect them to work hard to make their CPU as good at turboing as Intel. (Anand talked about this.)
    At the ISA level, I expect their next major target to be some form of hardware transactional memory --- it just makes life so much easier, and, even though they're at two cores today, they know as well as anyone that the future is more cores. You don't have to do TM the way Intel has done it; the solution IBM used for POWER8 is probably a better fit for ARM. And of course if Apple do this (using their own extensions, because as far as I know ARM doesn't yet even have a TM spec) it's just one more way in which they differentiate their world from the commodity ARM world.
  • smartypnt4 - Wednesday, September 18, 2013 - link

    @extide: agreed.

    @name99: It is very astonishing indeed. Then again, a high profile company like Apple has no problem attracting some of the best talent via compensation and prestige.

    They've still got quite a long way to go to match Haswell, in any case. But the throughput is technically there to rival Intel if they wanted to. I would hope that Haswell contains a much more advanced branch predictor and prefetcher than what Apple has, but you never know. My computer architecture professor always said that everything in computer architecture has already been discovered; the question now is when it becomes advantageous to spend the transistors to implement the most complicated designs.

    The next year is going to be very interesting, indeed.
  • Bob Todd - Wednesday, September 18, 2013 - link

    How many crows did you stuff down after claiming BT would be slower than A15 and even A12? Remember posting this about integer performance?

    "Silverthorne < A7 < A9 < A9R4 < Silvermont < A12 < Bobcat < A15 < Jaguar"

    Apple's A7 looks great, but you've made so many utterly ridiculous Intel performance bashing posts that it's pretty much impossible to take anything you say seriously.
  • Wilco1 - Wednesday, September 18, 2013 - link

    BT has indeed far lower IPC than A15 just like I posted - pretty much all benchmark results confirm that. On Geekbench 3 A15 is 23-25% faster clock for clock on integer and FP.

    The jury is still out on A12 vs BT as we've seen no performance results for A12 so far. So claiming I was wrong is not only premature but also incorrect as the fact is that Bay Trail is slower.
  • Wilco1 - Wednesday, September 18, 2013 - link

    Also new version with A7 and A57 now looks like this:

    Silverthorne < A7 < A9 < A9R4 < Silvermont < A12 < Bobcat < A15 < Jaguar < A57 < Apple A7
  • Bob Todd - Wednesday, September 18, 2013 - link

    Cherry picking a single benchmark which is notoriously inaccurate at comparisons across platforms/architectures doesn't make you "right", it just makes you look like more of a troll. Bay Trail has better integer performance than Jaguar (at near identical base clocks), so by your own ranking above it *has* to be faster than A12 and A15.

    You show up in every ARM article spouting the same drivel over and over again, yet you were mysteriously absent in the Bay Trail performance preview. Here's the link if you want to try to find a way to spin more FUD.

    http://anandtech.com/show/7314/intel-baytrail-prev...

    Apple's A7 looks great, and IT is still the powerhouse of mobile graphics. The A7 version in the iPad should be a beast. None of that makes most of your comments any less loony.
