When Apple announced the iPhone 5, Phil Schiller officially confirmed what had leaked several days earlier: the phone is powered by Apple's new A6 SoC.

As always, Apple didn't announce clock speeds, CPU microarchitecture, memory bandwidth or GPU details. It did, however, give us an indication of expected CPU performance: the keynote slide promised up to 2x the CPU performance of the A5.
 
 
Prior to the announcement we speculated the iPhone 5's SoC would simply be a higher clocked version of the 32nm A5r2 used in the iPad2,4. After all, Apple seems to like saving major architecture shifts for the iPad.
 
However, just prior to the announcement I received some information pointing to a move away from the ARM Cortex A9 used in the A5. Given Apple's reliance on fully licensed ARM cores in the past, the expected performance gains, and the unpublishable information that started all of this, I concluded that Apple's A6 SoC likely featured two ARM Cortex A15 cores.
 
It turns out I was wrong. But pleasantly surprised.
 
The A6 is the first Apple SoC to use Apple's own ARMv7 based processor design. The CPU core(s) aren't vanilla A9 or A15 designs licensed from ARM, but instead are something of Apple's own creation.
 

Hints in Xcode 4.5

 
The iPhone 5 will ship with, and only run, iOS 6.0. To coincide with the launch of iOS 6.0, Apple has seeded developers with a newer version of its development tools. Xcode 4.5 makes two major changes on the architecture front: it drops support for the ARMv6 ISA (used by the ARM11 core in the original iPhone and the iPhone 3G) while keeping support for ARMv7 (used by modern ARM cores), and it adds a new architecture target designed to support the new A6 SoC: armv7s.
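To make the new target a bit more concrete, here's a minimal sketch of what the armv7/armv7s split looks like from a developer's point of view. The file name is mine, and the __ARM_ARCH_7S__ / __ARM_ARCH_7A__ preprocessor macros are what I'd expect Apple's clang to define for the two slices (worth verifying against the Xcode 4.5 SDK); passing multiple -arch flags to build a fat binary is standard practice.

/* arch_probe.c -- hypothetical example: build one fat binary containing
 * both an armv7 and an armv7s slice, e.g.:
 *   xcrun clang -arch armv7 -arch armv7s arch_probe.c -o arch_probe
 * The OS loads whichever slice matches the device it runs on. */
#include <stdio.h>

int main(void)
{
#if defined(__ARM_ARCH_7S__)
    puts("armv7s slice: VFPv4-capable core (A6 class)");
#elif defined(__ARM_ARCH_7A__)
    puts("armv7 slice: Cortex A8/A9 class, VFPv3");
#else
    puts("not an ARMv7-A build");
#endif
    return 0;
}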
 

 
What's the main difference between the armv7 and armv7s architecture targets for the LLVM C compiler? The presence of VFPv4 support. The armv7s target supports it, the v7 target doesn't. Why does this matter?
 
Only the Cortex A5, A7 and A15 support the VFPv4 extensions to the ARMv7-A ISA. The Cortex A8 and A9 top out at VFPv3. If you want to get really specific, the Cortex A5 and A7 implement a 16 register VFPv4 FPU, while the A15 features a 32 register implementation. The point is, if your architecture supports VFPv4 then it isn't a Cortex A8 or A9.
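The practical payoff of VFPv4 is fused multiply-add (the VFMA/VFMS instructions), which performs a multiply and an add with a single rounding step. Below is a minimal sketch of where that shows up in ordinary C code; whether Apple's armv7s compiler actually folds fmaf() into a single VFMA instruction is my assumption, so check the generated assembly (clang -S) before relying on it.

/* fma_sketch.c -- the kind of code that benefits from VFPv4's fused
 * multiply-add. fmaf() is standard C99; on a VFPv4 target it can map
 * to one VFMA instruction, while a VFPv3 target has no fused form
 * (expect a libm call or a separate multiply + add there).
 *   xcrun clang -arch armv7s -O2 -S fma_sketch.c   (look for vfma.f32)
 *   xcrun clang -arch armv7  -O2 -S fma_sketch.c   (no vfma available) */
#include <math.h>
#include <stdio.h>

static float dot3(const float *a, const float *b)
{
    float acc = 0.0f;
    for (int i = 0; i < 3; i++)
        acc = fmaf(a[i], b[i], acc);   /* a[i]*b[i] + acc, rounded once */
    return acc;
}

int main(void)
{
    float a[3] = {1.0f, 2.0f, 3.0f};
    float b[3] = {4.0f, 5.0f, 6.0f};
    printf("dot3 = %f\n", dot3(a, b)); /* 32.000000 */
    return 0;
}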
 
It's pretty easy to dismiss the Cortex A5 and A7, as neither of those cores is significantly faster than the Cortex A9 used in Apple's A5 SoC. The obvious conclusion, then, is that Apple implemented a pair of A15s in its A6 SoC.
 
For unpublishable reasons, I knew the A6 SoC wasn't based on ARM's Cortex A9, but I immediately assumed that the only other option was the Cortex A15. I foolishly cast aside the other major possibility: an Apple developed ARMv7 processor core.
 

Balancing Battery Life and Performance

 
There are two types of ARM licensees: those who license a specific processor core (e.g. Cortex A8, A9, A15), and those who license an ARM instruction set architecture for custom implementation (e.g. the ARMv7 ISA). For a long time it's been known that Apple has both types of licenses. Qualcomm is in a similar situation; it licenses individual ARM cores for use in some SoCs (e.g. the MSM8x25/Snapdragon S4 Play uses ARM Cortex A5s) and it also licenses the ARM instruction set for use by its own processors (e.g. Scorpion/Krait implement the ARMv7 ISA).
 
For a while now I'd heard that Apple was working on its own ARM based CPU core, but last I heard Apple was having issues making it work. I assumed that it was too early for Apple's own design to be ready. It turns out that it's not. Based on a lot of digging over the past couple of days, and conversations with the right people, I've confirmed that Apple's A6 SoC is based on Apple's own ARM based CPU core and not the Cortex A15.
 
Implementing VFPv4 tells us that this isn't simply another Cortex A9 design targeted at higher clocks. If I had to guess, I would assume Apple did something similar to Qualcomm this generation: go wider without going substantially deeper. Remember Qualcomm moved from a dual-issue mostly in-order architecture to a three-wide out-of-order machine with Krait. ARM went from two-wide OoO to three-wide OoO but in the process also heavily pursued clock speed by dramatically increasing the depth of the machine.
 
The deeper machine plus much wider front end and execution engines drives both power and performance up. Rumor has it that the original design goal for ARM's Cortex A15 was servers, and it's only through big.LITTLE (or other clever techniques) that the A15 would be suitable for smartphones. Given Apple's intense focus on power consumption, skipping the A15 would make sense but performance still had to improve.

Why not just run the Cortex A9 cores from Apple's A5 at higher frequencies? It's tempting (after all, that's what many others in the space have done), but sub-optimal from a design perspective. As we learned during the Pentium 4 days, simply relying on frequency scaling to deliver generational performance improvements results in reduced power efficiency over the long run.
 
To push frequency you have to push voltage, which has an exponential impact on power consumption. Running your cores as close as possible to their minimum voltage is ideal for battery life. The right approach to scaling CPU performance is a combination of increasing architectural efficiency (instructions executed per clock goes up), multithreading and conservative frequency scaling. Remember that in 2005 Intel hit 3.73GHz with the Pentium Extreme Edition. Seven years later Intel's fastest client CPU only runs at 3.5GHz (3.9GHz with turbo) but has four times the cores and up to 3x the single threaded performance. Architecture, not just frequency, must improve over time.
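To put rough numbers behind that argument, dynamic CPU power scales roughly as P ≈ C·V²·f. The sketch below uses made-up voltages and frequencies purely for illustration (they are not A5 or A6 operating points), but it shows why a frequency push that also requires a voltage push is so expensive.

/* power_sketch.c -- back-of-the-envelope dynamic power scaling.
 * Dynamic power roughly follows P = C * V^2 * f; the operating points
 * below are invented for illustration, not measured A5/A6 values. */
#include <stdio.h>

static double dyn_power(double cap, double volts, double freq_ghz)
{
    return cap * volts * volts * freq_ghz;      /* P ~ C * V^2 * f */
}

int main(void)
{
    const double C = 1.0;                       /* arbitrary switched capacitance */
    double base   = dyn_power(C, 1.00, 1.0);    /* hypothetical 1.0 GHz @ 1.00 V */
    double pushed = dyn_power(C, 1.20, 1.5);    /* +50% clock, +20% voltage */

    /* 1.5 * 1.2^2 = 2.16x the dynamic power for 1.5x the peak performance:
     * perf per watt gets worse, which is why wider, smarter cores at
     * modest clocks tend to beat pure frequency scaling in a phone. */
    printf("relative dynamic power: %.2fx\n", pushed / base);
    return 0;
}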
 
At its keynote, Apple promised longer battery life and 2x better CPU performance. It's clear that the A6 moved to 32nm, but you can't extract 2x better performance from the same CPU architecture while also improving battery life on the back of a single process node shrink alone.
 
Despite all of this, had it not been for some external confirmation, I probably would've settled on a pair of higher clocked A9s as the likely option for the A6. In fact, higher clocked A9s are what we originally claimed would be in the iPhone 5 in our NFC post.
 
I should probably give Apple's CPU team more credit in the future.
 
The bad news is I have no details on the design of Apple's custom core. Despite Apple's willingness to spend on die area, I believe an A15/Krait class CPU core is the likely target: a slightly wider front end, more execution resources, a more flexible OoO execution engine, deeper buffers, bigger windows, etc. Support for VFPv4 alone guarantees a bigger core than the Cortex A9, so it only makes sense that Apple would push the envelope everywhere else as well. I'm particularly interested in frequency targets and whether there's any clever dynamic clock work happening. Someone needs to run Geekbench on an iPhone 5 pronto.
 
I also have no indication of how many cores there are. I'm assuming two, but unlike in the past Apple was careful not to disclose a core count. We'll get more details as we get our hands on devices in a week. I'm really interested to see what happens once Chipworks and UBM go to town on the A6.
Comments (163)

  • madmilk - Sunday, September 16, 2012 - link

    Sunspider is not multithreaded (actually it's a limitation of Javascript in browsers). Sometimes the engine can use multiple cores, but there's no way the Tegra is being fully utilized here. That said, the additional two cores are hardly used elsewhere either.
  • lilmoe - Sunday, September 16, 2012 - link

    Sadly, I also agree. First, on the CPU side, they tried to gain more popularity by providing false "exclusive" info, and now they're saying that it's "pleasant" and making it seem as if it's revolutionary (on the vertical integration side) when Qualcomm has been doing this for years now (and doing a better job on the CPU side at that).

    Second, about the GPU, they're still trying to speculate something TOTALLY off. Anand and crew know pretty well that Apple's claims of "2X" and "4X" are utter bull in real world performance (only in sub-benchmarks do we get anything close to their claims). Refer to the comparison of the Apple A5X and nVidia Tegra3 for more info.

    Sorry, but by the sounds of this chip, it would probably be comparable to the Exynos 4 Quad, but not nearly as powerful as the Snapdragon S4 Pro. Exynos 5 Dual will wipe the floor with all of the above...

    It's really disappointing. I thought we had a reliable source on low level tech. They can be fanboys of whatever they want, but I didn't expect it to affect the way they reported tech news.
  • doobydoo - Sunday, September 16, 2012 - link

    You question the GPU claims Apple made, yet it's common knowledge that the iPad has the fastest GPU of any tablet and the iPhone 4S, despite being a year old - still dominates the GPU benchmarks in America - beating the Samsung Galaxy S3, for example.

    It is virtually guaranteed that the iPhone 5 will have the fastest GPU in any smartphone when it's reviewed.
  • doobydoo - Sunday, September 16, 2012 - link

    Oh and your comments on the CPU are nothing short of totally unfounded guesswork.
  • Death666Angel - Sunday, September 16, 2012 - link

    Considering the size of the SoC in the iPad, it better beat everything else. :P

    And the SGS3 international version is faster than the iPhone4s, so I guess it sucks to be stuck with the Krait version.
  • lilmoe - Sunday, September 16, 2012 - link

    Fanboy much? Go back to the review that this site has provided for the Galaxy S3, the Mali400 GPU in the Exynos 4 Quad is 25% faster than PowerVR in the iPhone 4S.

    Benchmarks are cool, they give us a roughly good idea of what to expect out of a platform, but real world performance can differ slightly to significantly.
  • doobydoo - Sunday, September 16, 2012 - link

    The benchmarks I referred to are from this site, and they are facts - nothing opinionated and your fanboy statement is therefore a bit illogical.

    I specifically stated the GPU benchmarks in 'America' - the GPU in the iPhone 4s outperforms the American version of the SG3.

    The Mali 400 is only found in the international version of the SG3 - and it's only marginally faster.

    Given Apple's claim of 2x faster GPU you would expect the iPhone 5 to easily take the GPU crown once again.
  • Flunk - Sunday, September 16, 2012 - link

    It doesn't really matter, it just needs to be faster than the last iPhone because it's a closed system. People don't buy the iPhone for relative performance vs Android.
  • bill4 - Saturday, September 15, 2012 - link

    Apple has to spend a fraction of its 500 zillion in the bank on something. And it certainly isn't modernizing that piece of crap iOS...
  • EnzoFX - Saturday, September 15, 2012 - link

    They obviously don't think they need to. I'm inclined to agree. It's still simple enough for 99% of people. It's an OS. You have icons and you have functionality. The only thing it can do is evolve. People are quick to dismiss the new features and updates because the home screen still looks the same. Do you need live widgets on there? Well I'd argue most people don't. At least not in any way that Android does it. Perhaps something simpler the way WP8 or whatever does this. But that's risky with little payoff, and Apple's iOS is already well established. There are a ton of features added, mostly through apps, and that's an argument most will understand. That it's all about the apps.

    Do you yell at Windows 7 and wish it was more evolved? I'd say most people here are content with the basic start menu/icons that it offers lol. So again, it would have to be a whole new redesign, and again don't think that's warranted whatsoever. It's all about the apps that will round out the OS. Like the reason people STICK with Windows OS', because it supports all their favorite and wide range of apps. Apple is bringing the apps. Look at their focus in the last couple years, they're pushing their iLife stuff and buffing up specs, and empower devs to develop more and more. When the Smartphone needs to evolve, then iOS will prob then too. Until then, a smartphone is just a smartphone, there's not much reason to take the next big leap.
