An Update on Apple’s A7: It's Better Than I Thought

When I reviewed the iPhone 5s I didn’t have much time to go in and do the sort of in-depth investigation into Cyclone (Apple’s 64-bit custom ARMv8 core) that I did with Swift (Apple’s custom ARMv7 core from the A6) the year before. I had heard rumors that Cyclone was substantially wider than its predecessor, but I didn’t have any proof other than hearsay, so I left it out of the article. Instead I surmised in the 5s review that the A7 was likely an evolved Swift core rather than a brand new design. After all, what sense would it make to design a new CPU core and then do it all over again for the next one? It turns out I was quite wrong.

Armed with a bit of custom code and a bunch of low level tests I think I have a far better idea of what Apple’s A7 and Cyclone cores look like now than I did a month ago. I’m still toying with the idea of doing a much deeper investigation into A7, but I wanted to share some of my findings here.

The first task is to understand the width of the machine. With Swift I got lucky in that Apple had left a bunch of public LLVM documentation uncensored, referring to Swift’s 3-wide design. It turns out that although the design might be capable of decoding, issuing and retiring up to three instructions per clock, in most cases it behaved like a 2-wide machine. Mix FP and integer code and you’re looking at a machine that’s more like 1.5 instructions wide. Obviously Swift did very well in the market and its competitors at the time, including Qualcomm’s Krait 300, were similarly capable.

With Cyclone Apple is in a completely different league. As far as I can tell, peak issue width of Cyclone is 6 instructions. That’s at least 2x the width of Swift and Krait, and at best more than 3x the width depending on instruction mix. Limitations on co-issuing FP and integer math have also been lifted as you can run up to four integer adds and two FP adds in parallel. You can also perform up to two loads or stores per clock.
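None of this is documented by Apple, so figures like these have to be teased out with carefully constructed microbenchmarks. The sketch below shows the general shape of such a test - it's a simplified illustration rather than the actual code behind these numbers. It assumes an AArch64 compiler and clock_gettime support, and the 1.3GHz clock is an assumption you'd replace with your device's real frequency.

// A simplified sketch of an ALU issue-width test (not the actual code used for
// the numbers above). Time a long run of *independent* integer adds; with no
// data dependencies between them, throughput is limited only by how many ALU
// ops the core can issue and retire per clock.

#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define ITERS       10000000ULL
#define ADDS_PER_IT 16ULL            /* independent adds per loop iteration */

int main(void)
{
    uint64_t a = 1, b = 2, c = 3, d = 4;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (uint64_t i = 0; i < ITERS; i++) {
        /* Four independent chains, four adds each - 16 adds total. The asm
           block keeps the compiler from folding them into a single add. */
        __asm__ volatile(
            "add %0, %0, #1\n\t" "add %1, %1, #1\n\t"
            "add %2, %2, #1\n\t" "add %3, %3, #1\n\t"
            "add %0, %0, #1\n\t" "add %1, %1, #1\n\t"
            "add %2, %2, #1\n\t" "add %3, %3, #1\n\t"
            "add %0, %0, #1\n\t" "add %1, %1, #1\n\t"
            "add %2, %2, #1\n\t" "add %3, %3, #1\n\t"
            "add %0, %0, #1\n\t" "add %1, %1, #1\n\t"
            "add %2, %2, #1\n\t" "add %3, %3, #1\n\t"
            : "+r"(a), "+r"(b), "+r"(c), "+r"(d));
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs   = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double cycles = secs * 1.3e9;            /* assumed 1.3GHz clock */
    printf("apparent integer adds per cycle: %.2f\n",
           (double)(ITERS * ADDS_PER_IT) / cycles);
    return (int)(a + b + c + d);             /* keep the results live */
}

Swapping some of the integer chains for floating point adds is how you probe the FP/integer co-issue behavior described above, and chaining the adds back to back (each dependent on the last) gives you latency rather than throughput.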

I don’t yet have a good understanding of the number of execution ports and how they’re mapped, but Cyclone appears to be the widest ARM architecture we’ve ever seen at this point. I’m talking wider than Qualcomm’s Krait 400 and even ARM’s Cortex A15.

I did have some low level analysis in the 5s review, where I pointed out the A7’s significantly reduced memory latency and increased memory bandwidth. It turns out that I was missing a big part of the story back then as well…

A Large System Wide Cache

In our iPhone 5s review I pointed out that the A7 now featured more computational GPU power than the 4th generation iPad. For a device running at roughly a quarter the resolution of the iPad, the A7’s GPU either meant that Apple had an application that needed tons of GPU performance or that it planned on using the A7 in other, higher resolution devices. I speculated it would be the latter, and it turns out that’s indeed the case. For the first time since the iPad 2, Apple once again shares common silicon between the iPhone 5s, iPad Air and iPad mini with Retina Display.

As Brian found out in his investigation after the iPad event last week, all three devices use the exact same silicon with the exact same internal model number: S5L8960X. There are no extra cores, no change in GPU configuration and the biggest one: no increase in memory bandwidth.

Previously both the A5X and A6X featured a 128-bit wide memory interface, with half of it seemingly reserved exclusively for GPU use. The non-X parts by comparison only had a 64-bit wide memory interface. The assumption was that a move to such a high resolution display demanded a substantial increase in memory bandwidth. With the A7, Apple takes a step back in memory interface width - is the narrower interface enough to hamper the performance of the iPad Air with its 2048 x 1536 display?
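Some back-of-the-envelope math helps frame the question. The transfer rates in the sketch below are assumptions purely for illustration (they aren't measurements from this review), but they show the scale of the numbers involved:

#include <stdio.h>

/* peak bandwidth = bytes per transfer x transfers per second */
static double peak_gbps(int bus_width_bits, double transfers_per_sec)
{
    return (bus_width_bits / 8.0) * transfers_per_sec / 1e9;
}

int main(void)
{
    /* Assumed DRAM transfer rates - illustrative only, not the actual parts. */
    printf("64-bit bus  @ 1600 MT/s: %5.1f GB/s peak\n", peak_gbps(64, 1600e6));
    printf("128-bit bus @ 1600 MT/s: %5.1f GB/s peak\n", peak_gbps(128, 1600e6));

    /* Bandwidth needed just to scan out a 2048 x 1536, 32bpp panel at 60Hz,
       ignoring all rendering traffic. */
    double scanout_gbps = 2048.0 * 1536.0 * 4.0 * 60.0 / 1e9;
    printf("Retina iPad scan-out:    %5.2f GB/s\n", scanout_gbps);
    return 0;
}

Even a 64-bit interface offers well over an order of magnitude more bandwidth than the panel needs for scan-out alone; the real question is how much headroom is left for the GPU's rendering traffic, which is what the benchmarks answer.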

The benchmark numbers tell us the answer is no. In all available graphics benchmarks the iPad Air delivers better performance at its native resolution than the outgoing 4th generation iPad (as you'll soon see). Many of these benchmarks are bound more by GPU compute than by memory bandwidth, a side effect of the relative scarcity of memory bandwidth on modern mobile platforms. Across the board, though, I couldn’t find a situation where anything was smoother on the iPad 4 than on the iPad Air.

There’s another part of this story - something I missed in my original A7 analysis. When Chipworks posted a shot of the A7 die, many of you correctly identified what appeared to be a 4MB SRAM on the die itself. It's highlighted on the right in the floorplan diagram below:


A7 Floorplan, Courtesy Chipworks

While I originally assumed that this SRAM might be reserved for use by the ISP, it turns out that it can do a lot more than that. If we look at memory latency (from the perspective of a single CPU core) vs. transfer size on A7 we notice a very interesting phenomenon between 1MB and 4MB:
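For anyone who wants to reproduce that curve, the usual approach is a pointer chase: build a random cycle through a buffer of a given size and walk it, so that every load depends on the one before it. What follows is a simplified sketch of the idea rather than the actual test code used for these results:

// A rough sketch of the pointer-chasing approach behind latency vs. size curves.
// The average time per hop approximates load-to-use latency at that working-set
// size; as the buffer grows past each level (L1, L2, the 4MB SRAM, then DRAM)
// latency steps up. A production-quality test would also use cache-line-sized
// elements and take care to defeat prefetchers; this keeps only the core idea.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static volatile void *sink;   /* keeps the chase from being optimized away */

static double ns_per_load(size_t bytes, size_t hops)
{
    size_t n = bytes / sizeof(void *);
    void **buf = malloc(n * sizeof(void *));
    size_t *order = malloc(n * sizeof(size_t));

    /* Shuffle a visitation order, then link each element to the next one,
       closing the loop - this yields a single random cycle over the buffer. */
    for (size_t i = 0; i < n; i++) order[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (size_t i = 0; i < n; i++)
        buf[order[i]] = &buf[order[(i + 1) % n]];

    struct timespec t0, t1;
    void **p = buf;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < hops; i++)
        p = (void **)*p;                      /* serially dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    sink = p;

    free(order);
    free(buf);
    return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / (double)hops;
}

int main(void)
{
    /* Sweep working-set sizes from 32KB to 32MB. */
    for (size_t kb = 32; kb <= 32 * 1024; kb *= 2)
        printf("%6zu KB: %5.1f ns per load\n", kb, ns_per_load(kb * 1024, 5000000));
    return 0;
}

Plotting the output against working-set size is what produces the latency curve; the step between the 1MB and 4MB points is where that on-die SRAM shows up.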

That SRAM is indeed some sort of a cache before you get to main memory. It’s not the fastest thing in the world, but it’s appreciably quicker than going all the way out to main memory. Available bandwidth is also pretty good:

We’re only looking at bandwidth seen by a single CPU core, but even then we’re talking about 10GB/s. Lookups in this third level cache don’t happen in parallel with main memory requests, so the impact on worst-case memory latency is unfortunately additive (a tradeoff of speed vs. power).
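Bandwidth is measured in a similar spirit: stream through a buffer sized to live in the level of the hierarchy you care about and divide bytes read by elapsed time. Again, this is a simplified sketch rather than the actual test code - the multiple accumulators are there so the add dependency chain isn't the limit:

// A minimal single-core read-bandwidth sketch. A ~3MB working set should sit
// largely in the 4MB SRAM, while a 64MB one falls through to main memory.
// Compile with optimizations enabled.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <time.h>

static double read_gbps(size_t bytes, int passes)
{
    uint64_t *buf = malloc(bytes);
    memset(buf, 1, bytes);                     /* populate so pages are mapped */
    size_t n = bytes / sizeof(uint64_t);

    uint64_t s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int p = 0; p < passes; p++)
        for (size_t i = 0; i + 3 < n; i += 4) {
            /* independent accumulators; sequential 64-bit reads */
            s0 += buf[i];     s1 += buf[i + 1];
            s2 += buf[i + 2]; s3 += buf[i + 3];
        }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    if ((s0 + s1 + s2 + s3) == 0) puts("");    /* keep the sums live */
    free(buf);
    return (double)bytes * passes / secs / 1e9;
}

int main(void)
{
    printf("~3MB working set: %.1f GB/s\n", read_gbps(3u << 20, 2000));
    printf("64MB working set: %.1f GB/s\n", read_gbps(64u << 20, 100));
    return 0;
}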

I don’t yet have the tools needed to measure the impact of this on-die memory on GPU accesses, but in the worst case scenario it’ll help free up more of the memory interface for use by the GPU. It’s more likely that some graphics requests are cached here as well, with intelligent allocation of bandwidth depending on what type of application you’re running.

That’s the other aspect of what makes A7 so very interesting. This is the first Apple SoC that’s able to deliver good amounts of memory bandwidth to all of its consumers. A single CPU core can use up to 8GB/s of bandwidth. I’m still vetting other SoCs, but so far I haven’t come across anyone in the ARM camp that can compete with what Apple has built here. Only Intel is competitive.

 

Comments

  • over9k - Tuesday, October 29, 2013 - link

    Remember, Mavericks' big focus is RAM management. There's no reason to think iOS 7 also doesn't do some great RAM management. It's just that when talking mobile, that's not the "it" talk. Also, it's important to note that iOS devices have always used lower amounts of memory - they are just very well optimized.
  • tipoo - Tuesday, October 29, 2013 - link

    Mavericks borrowed how iOS already used memory, with background apps being able to be pushed out of memory if need be. iOS 7 isn't getting Mavericks' memory management; Mavericks is just getting the management iOS has had since its inception.
  • kirsch - Tuesday, October 29, 2013 - link

    I think over9k is referring to memory compression introduced in Mavericks. As far as I know this was never in iOS.
  • tipoo - Tuesday, October 29, 2013 - link

    The compression seems to work on idle apps too though, and iOS would run only one foreground app with most of the memory available to it. I just don't think it will help as much on a mostly single tasking OS.
  • NetMage - Wednesday, October 30, 2013 - link

    Actually it should help more, as the recent apps on iOS mostly aren't running at all, but are kept in memory to improve swapping speed - these could be easily compressed with no impact on user experience.
  • tipoo - Wednesday, October 30, 2013 - link

    People saying Mavericks-like compression will help: Anand measured how much RAM was in use under load. If it was compressing something, that would have been factored into how much was used. There is no holy grail here.
  • darwinosx - Wednesday, October 30, 2013 - link

    You think it needs 2 GB of ram why? It obviously doesn't. This isn't Windows or Android.
  • DarkXale - Wednesday, October 30, 2013 - link

    Because Safari is famous for resetting tabs (lost work), and many iOS apps are notorious for dropping undo states the very second you switch out of them. (Which you have to do, due to the inability to display more than one app at a time.)

    Both cause data loss, which is one of the most serious events that can occur in computer systems.

    Both are due to RAM constraints.
  • dugbug - Wednesday, October 30, 2013 - link

    I have not seen that in iOS 7; they rewrote that behavior significantly. Would it benefit from more RAM? Sure. Or they could just rewrite it to cache tabs better.
  • tipoo - Wednesday, October 30, 2013 - link

    Because even with 30% less RAM use on 32 bit iOS, Safari boots tabs out like crazy and has to reload a lot of apps from NAND when switching.
