After Swift Comes Cyclone Oscar

I was fortunate enough to receive a tip last time that pointed me at some LLVM documentation calling out Apple’s Swift core by name. Scrubbing through those same docs, it seems like my leak has been plugged. Fortunately, I came across a unique string while watching the iPhone 5s boot: a reference to something called Oscar.

I can’t find any other references to Oscar online, in LLVM documentation or anywhere else of value. I also didn’t see Oscar references on prior iPhones, only on the 5s. I’d heard that this new core wasn’t called Swift, a nod to just how different it is. Obviously Apple isn’t going to tell me what it’s called, so I’m going with Oscar unless someone tells me otherwise.

Oscar is a CPU core inside the M7 motion coprocessor; Cyclone is the name of the Swift replacement.

Cyclone likely resembles a beefier Swift core (or at least a Swift-inspired design) rather than a ground-up redesign. That means we’re likely talking about a 3-wide front end and somewhere in the range of 5 - 7 execution ports. The design is also likely capable of out-of-order execution, given the performance levels we’ve been seeing.

Cyclone is a 64-bit ARMv8 core, not some Apple-designed ISA. With Cyclone, Apple manages to beat not only every other smartphone maker to ARMv8 but also key ARM server partners. I’ll talk about the whole 64-bit aspect of this next, but needless to say, this is a big deal.

The move to ARMv8 comes with some of its own performance enhancements. More registers, a cleaner ISA, improved SIMD extensions/performance as well as cryptographic acceleration are all on the menu for the new core.
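
To give a concrete sense of what that cryptographic acceleration looks like from software, here’s a minimal sketch (my own illustration, not anything Apple ships) of a single AES round written with the ACLE intrinsics that map onto the new ARMv8 AES instructions. It assumes an AArch64 toolchain with the Crypto Extensions enabled (e.g. -march=armv8-a+crypto):

```c
/* Hypothetical sketch: one AES encryption round via the ARMv8 Crypto
 * Extensions. Not Apple code; just an illustration of the new instructions. */
#include <arm_neon.h>
#include <stdint.h>

/* Applies AddRoundKey + SubBytes + ShiftRows (AESE), then MixColumns (AESMC),
 * to one 16-byte block. */
void aes_one_round(uint8_t block[16], const uint8_t round_key[16])
{
    uint8x16_t state = vld1q_u8(block);      /* load the 128-bit state     */
    uint8x16_t key   = vld1q_u8(round_key);  /* load the 128-bit round key */

    state = vaeseq_u8(state, key);  /* AESE: AddRoundKey, SubBytes, ShiftRows */
    state = vaesmcq_u8(state);      /* AESMC: MixColumns */

    vst1q_u8(block, state);         /* store the transformed block */
}
```

A full AES-128 encryption chains rounds like this through the expanded key schedule (the final round skips MixColumns); the point is simply that what used to be a series of table lookups in software becomes a couple of dedicated instructions per round.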

Pipeline depth likely remains similar (maybe slightly longer) as frequencies haven’t gone up at all (1.3GHz). The A7 doesn’t support any thermally driven CPU (or GPU) frequency boost.

The most visible change to Apple’s first ARMv8 core is a doubling of the L1 cache size: from 32KB/32KB (instruction/data) to 64KB/64KB. Along with this larger L1 cache comes an increase in access latency (from 2 clocks to 3 clocks from what I can tell), but the increase in hit rate likely makes up for the added latency. Such large L1 caches are quite common with AMD architectures, but unheard of in ultra mobile cores. A larger L1 cache will do a good job keeping the machine fed, implying a larger/more capable core.

The L2 cache remains unchanged in size at 1MB, shared between both CPU cores. L2 access latency is improved tremendously with the new architecture. In some cases I measured L2 latency at half of what I saw with Swift.

The A7’s memory controller sees big improvements as well. I measured 20% lower main memory latency on the A7 compared to the A6. Branch prediction and memory prefetchers are both significantly better on the A7.
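
For context, latency numbers like these are typically measured with a pointer-chasing loop: build a randomly permuted chain of pointers inside a buffer of a given size so the prefetchers can’t guess the next address, then time a long run of dependent loads. Below is a minimal sketch of the technique (my own code, not the custom tools used for these measurements); the working-set sizes are arbitrary and simply chosen to land in L1, L2 and DRAM respectively:

```c
/* Minimal pointer-chase latency sketch (illustrative only). Each load's
 * address comes from the previous load, so average time per iteration
 * approximates load-to-use latency for the chosen working-set size. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double chase_ns(size_t bytes, size_t iters)
{
    size_t n = bytes / sizeof(void *);
    void **buf = malloc(n * sizeof(void *));
    size_t *idx = malloc(n * sizeof(size_t));

    /* Fisher-Yates shuffle, then link the buffer into one random cycle. */
    for (size_t i = 0; i < n; i++) idx[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < n; i++)
        buf[idx[i]] = &buf[idx[(i + 1) % n]];

    void **p = &buf[idx[0]];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++)
        p = (void **)*p;                     /* dependent load chain */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    if (p == NULL) puts("unreachable");      /* keep the chain live */
    free(idx);
    free(buf);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / iters;                       /* average ns per load */
}

int main(void)
{
    /* Working sets aimed at L1 (32KB), L2 (512KB) and DRAM (64MB). */
    size_t sizes[] = { 32 << 10, 512 << 10, 64 << 20 };
    for (int i = 0; i < 3; i++)
        printf("%6zu KB: %.1f ns/load\n", sizes[i] >> 10, chase_ns(sizes[i], 20000000));
    return 0;
}
```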

On top of all of this, I noticed large increases in peak memory bandwidth, which I confirmed using a combination of custom tools as well as publicly available benchmarks. A quick look at Geekbench 3 (prior to the ARMv8 patch) gives a conservative estimate of the memory bandwidth improvements:

Geekbench 3.0.0 Memory Bandwidth Comparison (1 thread)

                   Stream Copy   Stream Scale   Stream Add   Stream Triad
Apple A7 1.3GHz    5.24 GB/s     5.21 GB/s      5.74 GB/s    5.71 GB/s
Apple A6 1.3GHz    4.93 GB/s     3.77 GB/s      3.63 GB/s    3.62 GB/s
A7 Advantage       6%            38%            58%          57%

We see anywhere from a 6% improvement in memory bandwidth to nearly 60% running the same Stream code. I’m not entirely sure how Geekbench implemented Stream and whether or not we’re actually testing other execution paths in addition to (or instead of) memory bandwidth. One custom piece of code I used to measure memory bandwidth showed nearly a 2x increase in peak bandwidth. That may be overstating things a bit, but needless to say this new architecture has a vastly improved cache and memory interface.
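
For reference, the Stream kernels themselves are trivially simple; Triad, the heaviest of the four, is just a scaled vector add. Here’s a rough sketch of what a standalone Triad measurement might look like (not Geekbench’s implementation, which isn’t public), with arrays sized well past the 1MB L2 so the traffic actually hits main memory:

```c
/* Minimal STREAM Triad sketch (illustrative only, not Geekbench's code).
 * Bandwidth is counted as 3 x 8 bytes moved per element per pass: two reads
 * (b, c) and one write (a). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    (4 * 1024 * 1024)   /* 4M doubles per array = 32MB each */
#define REPS 10

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    const double scalar = 3.0;

    for (size_t i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.5; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < REPS; r++)
        for (size_t i = 0; i < N; i++)
            a[i] = b[i] + scalar * c[i];      /* Triad: a = b + s*c */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec    = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double gbytes = (double)REPS * 3.0 * N * sizeof(double) / 1e9;
    printf("Triad: %.2f GB/s (checksum %.1f)\n", gbytes / sec, a[N / 2]);

    free(a); free(b); free(c);
    return 0;
}
```

Copy, Scale and Add differ only in the kernel line (a straight copy, a scaled copy, and an unscaled add respectively), which is part of why it’s surprising to see such different scaling across the four results above.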

Looking at low level Geekbench 3 results (again, prior to the ARMv8 patch), we get a good feel for just how much the CPU cores have improved.

Geekbench 3.0.0 Compute Performance

                   Integer (ST)   Integer (MT)   FP (ST)   FP (MT)
Apple A7 1.3GHz    1065           2095           983       1955
Apple A6 1.3GHz    750            1472           588       1165
A7 Advantage       42%            42%            67%       67%

Integer performance is up 42%, while floating point performance is up by 67%. Again, this is without 64-bit or any of the other enhancements that go along with ARMv8. Memory bandwidth improves by 35% across all Geekbench tests. I confirmed with Apple that the A7 has a 64-bit wide memory interface, and we're likely talking about LPDDR3 memory this time around, so there's probably some frequency uplift there as well.
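
As a rough sanity check: if the A7 pairs that 64-bit interface with LPDDR3-1600 (an assumption on my part, Apple doesn’t disclose memory speed), theoretical peak bandwidth works out to 1600 MT/s × 8 bytes = 12.8 GB/s. The single-threaded Stream numbers above are well short of that, which is expected; a single core rarely saturates the memory interface.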

The result is something Apple refers to as desktop-class CPU performance. I’ll get to evaluating those claims in a moment, but first, let’s talk about the other big part of the A7 story: the move to a 64-bit ISA.

Comments

  • teiglin - Wednesday, September 18, 2013 - link

    There is absolutely no basis to compare the process tech between A7 and Bay Trail. We know what battery life the A7 affords the iPhone 5s, but know nothing about what sort of battery life Silvermont might provide in a smartphone form factor. If those Oscar cores are really as power-efficient as Silvermont, then yes, that'd be amazing evidence of A7's power-efficiency.
  • Wilco1 - Wednesday, September 18, 2013 - link

    Given a 2.4GHz Bay Trail in a development board already cannot keep up with the A7 at 1.3GHz (A7 beats it by a huge margin on Geekbench), there is no hope for BT-based phones. BT would need to be clocked far lower to fit in a total phone TDP of ~2W, which means it loses out even worse on performance against A7, Krait and Cortex-A15.

    So yes, the fact that Bay Trail is already beaten by a phone before it is even for sale is a sign of things to come. 2014 will be a hard year for Intel given 20nm TSMC will give further performance and power efficiency gains to their competitors. It all starts to sound a lot like a repeat of the old Atom...
  • vcfan - Wednesday, September 18, 2013 - link

    "Apple's designs are superior to Intel's. and then, Intel had better watch out."

    first of all, its arm vs x86. and second, it was "LOLz intel cant do low power chips,arm wins", now its "but but intel is 22nm" . hilarious.
  • ScienceNOW - Wednesday, September 18, 2013 - link

    We have plenty of time until 5nm; by that time, most likely something new will be in place to pick up where silicon left off
  • solipsism - Tuesday, September 17, 2013 - link

    Since when is a PS4 a desktop machine? And why only look at the GPU and not at the CPU that was clearly referenced?
  • Crono - Tuesday, September 17, 2013 - link

    Ha, I love the low-light image choice of subject. :D
    You have to admit, it seems like Apple learned some things from Nokia and HTC this round to improve their cameras, though the combination dual flash is pretty ingenious. I'm wondering if the other manufacturers will adopt it or stick to single LED and Xenon flashes.
  • StevoLincolnite - Tuesday, September 17, 2013 - link

    My Lumia has dual LED flashes, the Lumia 928 has dual Xenon flashes.
    So it's hardly anything new when the Lumia 920 has been on the shelf for almost a year.
  • whyso - Tuesday, September 17, 2013 - link

    My HTC raider from 2011 has dual LED flash.
  • Ewram - Wednesday, September 18, 2013 - link

    my HTC Desire HD (HTC Ace) has dual led, bought it november 2010.
  • melgross - Wednesday, September 18, 2013 - link

    I think you missed the explanation of what Apple did here. The dual flashes in the past, and in yours and other current devices, are just two flashes of exactly the same type. Apple's is one cold-temperature flash and one warm-temperature flash. The camera flashes before it takes the photo, then evaluates the picture quality based on color temperature. It then comes up with a combination of flash exposures that varies the amount of light each flash generates so as to give the proper color rendering, as well as the correct exposure.

    No other camera does that. Not even professional strobes can do that. I wonder if Apple patented the electronics and software used for the evaluation.
