We’ve talked about the hole in ARM’s product lineup for quite a while now. The Cortex A9 is too slow to compete with the likes of Intel’s Atom and Qualcomm’s Krait 200/300 based SoCs. The Cortex A15 on the other hand outperforms both of those solutions, but at considerably higher power and die area requirements. The slide below from Samsung illustrates my point clearly:


The comparison point here is the Cortex A15 and Cortex A7, but the latter should be quite performance competitive with the Cortex A9, so the comparison is still relevant. The Cortex A15 island in Samsung’s Exynos 5 Octa occupies 5x the die area of the A7 island and consumes nearly 6x the power. In exchange for 5x the area and 6x the power, the Cortex A15 offers under 4x the performance. It’s not exactly an area or power efficient solution, but it’s a great option for anyone looking to push the performance envelope.
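A quick back-of-the-envelope check makes the efficiency gap concrete. The 5x/6x/4x figures are Samsung's; the script below is just illustrative arithmetic, with the A7 island normalized to 1.0 on every axis:

```python
# Rough perf/area and perf/W ratios implied by Samsung's slide,
# with the Cortex A7 island normalized to 1.0 on every axis.
a15_area, a15_power, a15_perf = 5.0, 6.0, 4.0  # relative to A7

perf_per_area = a15_perf / a15_area   # A15 delivers 0.8x the A7's perf per mm^2
perf_per_watt = a15_perf / a15_power  # ...and roughly 0.67x the A7's perf per watt

print(f"A15 perf/area vs A7: {perf_per_area:.2f}x")
print(f"A15 perf/watt vs A7: {perf_per_watt:.2f}x")
```

On both metrics the A15 comes out behind the A7, which is exactly the gap a middle core would need to fill.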


Today, ARM is addressing that hole with the Cortex A12.


This announcement isn’t a deep architectural disclosure, but we do have some high level details to share. Like AMD’s Jaguar, Intel’s Silvermont and even ARM’s A9, the Cortex A12 is a dual-issue out-of-order architecture. Unlike the Cortex A9, the Cortex A12 is fully out-of-order including the NEON/FP units (NEON/FP was in-order on Cortex A9).


Pipeline length has increased a bit compared to the Cortex A9 (to 11 stages); however, ARM told me to expect frequencies similar to what we have with the Cortex A9.


The execution back end has also been improved, although I don’t have many details as to how. My guess is we should expect something a bit wider than Cortex A9 but not nearly as wide as Cortex A15.


Memory performance is also much improved compared to the Cortex A9, an area we’ve already demonstrated to be a significant weak point of the A9 architecture.


All of these architectural enhancements are supposed to provide up to a 40% increase in performance (IPC) over the Cortex A9 at the same frequency and process node. ARM isn’t talking power, other than to say that the A12 can complete the same workload at the same power as a Cortex A9. In other words, the Cortex A12 should draw more power than the Cortex A9 at its peak, but its faster execution reduces the total energy consumed. With a higher max power we’ll see more dynamic range in power consumption, though not nearly as much as with the Cortex A15.
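That "same workload at the same power" claim is really an energy argument: energy is power multiplied by time, so finishing the same task ~40% sooner at equal average power cuts the energy used. A minimal sketch, where the 1.4x speedup is ARM's IPC claim and the absolute power/time figures are hypothetical numbers for illustration:

```python
# Energy = average power * time. If the Cortex A12 finishes a fixed
# workload 1.4x faster than the A9 at the same average power, the
# task consumes ~29% less energy even though peak power is no lower.
a9_power_w = 0.5              # hypothetical average power (watts)
a9_time_s = 10.0              # hypothetical task duration (seconds)

a12_power_w = a9_power_w      # ARM: same power for the same workload
a12_time_s = a9_time_s / 1.4  # ~40% higher IPC at the same frequency

a9_energy = a9_power_w * a9_time_s
a12_energy = a12_power_w * a12_time_s
print(f"Energy saved per task: {1 - a12_energy / a9_energy:.0%}")
```

This is the same race-to-idle logic that makes a faster core attractive for battery life even when its peak power is higher.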


Cortex A12 also adds support for 40-bit physical memory addressing, an intermediate solution before we get to 64-bit ARMv8 based architectures. Finally, the Cortex A12 features the same ACE bus interface as the Cortex A7/A15 and can thus be used in big.LITTLE configurations with either core (though most likely exclusively with A7s). Given the lower power profile of the Cortex A12, I'm not sure the complexity of a big.LITTLE implementation will be worth it though.
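For context on what 40-bit addressing buys over the 32-bit baseline, the arithmetic is straightforward:

```python
# Address-space ceilings for 32-bit vs 40-bit physical addressing.
GiB = 2**30

addr_32bit = 2**32 // GiB  # 4 GiB: the classic 32-bit ceiling
addr_40bit = 2**40 // GiB  # 1024 GiB (1 TiB) with 40-bit addressing

print(f"32-bit: {addr_32bit} GiB")
print(f"40-bit: {addr_40bit} GiB")
```

Individual 32-bit processes are still bound to a 4GB virtual address space; the extra physical bits let the SoC as a whole see more than 4GB of memory.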


ARM expects the Cortex A12 to be used in mainstream smartphones and tablets where cost and power consumption are a bit more important. The design makes a lot of sense; the only downside is its launch timeframe. ARM expects to be sampling Cortex A12 in late 2014 with the first devices showing up in 2015. Update: ARM clarified that SoCs based on Cortex A12 would be shipping to device vendors in mid-2014, with devices shipping to consumers by late 2014 to early 2015. ARM has optimized Cortex A12 processor packs at both GlobalFoundries (28nm SLP) and TSMC (28nm HPM).

Comments

  • Wilco1 - Tuesday, June 4, 2013 - link

    A15 in Exynos Octa does most definitely run at 1.6/1.8GHz, not 1.2. If you get different results then there is something wrong with the benchmark - that is the only possible explanation.

    Anyway which comparison are you talking about exactly - link? I am looking at Geekbench results and those clearly show A15 running at 1.6GHz and beating K900 by a huge margin. Eg. http://browser.primatelabs.com/geekbench2/compare/...
  • wsw1982 - Tuesday, June 4, 2013 - link

    In Antutu, the S4 (Octa) is barely faster than the K900 and the S4 (Snapdragon). This could only happen if the A15 is dramatically underclocked compared to its counterparts.

    http://www.androidauthority.com/galaxy-s4-vs-ideap...
  • Wilco1 - Tuesday, June 4, 2013 - link

    That's not a CPU benchmark. Antutu does all sorts of stuff unrelated to CPU performance such as SD card read/write speed and GPU tests, and is Java rather than native compiled code. It happens to be one of the most cheated benchmarks (both by users and device makers adding special "optimizations" to show a better score). In short, Antutu scores mean absolutely nothing.
  • Krysto - Tuesday, June 4, 2013 - link

    Yeah, right. You're talking about mainly single-threaded tests, where A15 wouldn't have much of an advantage. In tests like Octane, A15 has a huge advantage over Atom. And Anand has already shown it with the Chromebook review, too.

    Wait until the quad core Atoms, if they ever even arrive in smartphones, and we'll see how they stand then. But prepare to not be so impressed by Intel.
  • kpal12 - Monday, June 3, 2013 - link

    The Cortex A12 is the successor to the Cortex A9, not something in between the A7 and A15.
    The Cortex A15 was not the successor to the A9; it used far more power. This uses less/the same amount of power and is much faster.
  • TheJian - Tuesday, June 4, 2013 - link

    But why waste deving a new chip, when by the time it comes out the power monger A15 will be on 20nm removing the powermonger feature? :) This chip is moot by 20nm A15 etc. I'd rather have a die shrunk A15's power in Q1 2015. This is way too late. We don't want to go backwards, I want them to die shrink their way forward....ROFL.
  • Wilco1 - Tuesday, June 4, 2013 - link

    Again A12 is not a replacement for A15 but for the old A9. Billions of Asians cannot afford a Galaxy S4 or iPhone, and current low-cost devices for these markets use Cortex-A7 on a 40nm process. A15 and Krait are too expensive, A9 is obsolete, so there is a huge gap. The A12 fills it. It's as simple as that.
  • wsw1982 - Tuesday, June 4, 2013 - link

    You are right, and as a result, they turn to Qualcomm or Intel...
