Introduction to Cortex A12 & The Front End

At a high level, ARM’s Cortex A12 is a dual-issue, out-of-order microarchitecture with an integrated L2 cache and multi-core capability.

The Cortex A12 team all previously worked on Cortex A9. ARM views the resulting design not as a derivative of Cortex A9, but as clearly inspired by it. At a high level, Cortex A12 features a 10-12 stage integer pipeline - a lengthening of Cortex A9’s 8-11 stage pipeline. The architecture is still 2-wide, but unlike Cortex A9 the new tweener is fully out-of-order, including load/store (within reason) and FP/NEON.

Cortex A12 retains feature and ISA compatibility with ARM’s Cortex A7 and A15, making it the new middle child in the updated microprocessor family. All three parts support 40-bit physical addressing, the same 128-bit AXI4 bus interface and the same 32-bit ARMv7-A instruction set (NEON is standard on Cortex A12). The Cortex A12 is so compatible with A7 and A15 that it’ll eventually be offered in a big.LITTLE configuration with a cluster of Cortex A7 cores (initial versions lack the coherent interface required for big.LITTLE).

In the Cortex A9, ARM had a decoupled L2 cache that required some OS awareness. The Cortex A12 design moves to a fully integrated L2, similar to the A7/A15. The L2 cache operates on its own voltage and frequency planes, although the latter can be in sync with the CPU cores if desired. The L2 cache is shared among up to four cores. Larger core count configurations are supported through replication of quad-core clusters.

The L1 instruction cache is 4-way set associative and configurable in size (32KB or 64KB). The cache line size in Cortex A12 was increased to 64 bytes (from 32 bytes in Cortex A9) to better align with DDR memory controllers as well as the Cortex A7 and A15 designs. Similar to Cortex A9, there’s a fully associative instruction micro-TLB and a unified main TLB, although I’m not sure if/how the sizes of those two structures have changed.
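To make the cache geometry concrete, here’s a minimal C sketch (illustrative only, derived from the published parameters rather than anything ARM has said about its implementation) of how a 32KB, 4-way set associative cache with 64-byte lines would carve an address into offset, index and tag bits:

    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative address decomposition for a 32KB, 4-way set-associative
       cache with 64-byte lines: 32KB / (4 ways * 64B) = 128 sets, giving
       6 offset bits, 7 index bits, and the remaining upper bits as tag. */
    #define CACHE_SIZE (32 * 1024)
    #define WAYS       4
    #define LINE_SIZE  64
    #define NUM_SETS   (CACHE_SIZE / (WAYS * LINE_SIZE)) /* = 128 */

    int main(void)
    {
        uint64_t addr   = 0x80123456ULL;                  /* arbitrary example address */
        uint64_t offset = addr % LINE_SIZE;               /* byte within the line */
        uint64_t index  = (addr / LINE_SIZE) % NUM_SETS;  /* which set to look in */
        uint64_t tag    = addr / (LINE_SIZE * NUM_SETS);  /* compared against all 4 ways */

        printf("offset=%llu index=%llu tag=0x%llx\n",
               (unsigned long long)offset,
               (unsigned long long)index,
               (unsigned long long)tag);
        return 0;
    }

In the 64KB configuration the set count doubles to 256, moving one bit from the tag into the index; the offset bits are what grew when the line size went from 32 to 64 bytes.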

The branch predictor was significantly improved over Cortex A9. Apparently in the design of the Cortex A12, ARM underestimated its overall performance and ended up speccing it out with too weak of a branch predictor. About three months ago ARM realized its mistake and was left with a difficult choice: ship a less efficient design, or quickly find a suitable branch predictor. The Cortex A12 team went through the company looking for an existing predictor design it could use, eventually finding one in the Cortex A53. The A53’s predictor got pulled into the Cortex A12 and, with some minor modifications, will be what the design ships with. Improved branch prediction obviously improves power efficiency as well as performance.

Comments

  • JDG1980 - Wednesday, July 17, 2013 - link

    So, roughly speaking, how does ARM IPC compare to x86? Obviously it's not going to be as high as on modern big-core desktop x86 parts like SB/IB/Haswell, but how does it compare to Atom (both the current generation and the new one in the pipeline) and Bobcat/Kabini?
  • nathanddrews - Wednesday, July 17, 2013 - link

    I think that was covered before:

    http://www.anandtech.com/show/6936/intels-silvermo...
  • coder543 - Wednesday, July 17, 2013 - link

    The performance expectations (which relate to IPC) were misguided though. Intel's compiler was, essentially, cheating by skipping entire sections of the benchmark. http://www.androidauthority.com/analyst-says-intel...
  • DanNeely - Wednesday, July 17, 2013 - link

    If I'm interpreting the labels on that graph correctly ("K900 at 1.0"), they've underclocked the Atom by 50% (from 2GHz to 1GHz) from what it normally operates at, which would flip the results back to Intel winning the majority of the tests.
  • Krysto - Wednesday, July 17, 2013 - link

    No, that's actually the *real* clock speed of the Atom, which Intel misleadingly calls a "2GHz" core. The 2GHz speed is the turbo-boost speed, just like how laptop Haswell will really be clocked at 1.3GHz (same performance level as 1.5GHz IVB) and go up to 2.3GHz, or whatever, with Turbo-Boost.

    The problem is that I think benchmarks do use the Turbo-Boost speed fully, which means Atom will do very well in benchmarks, while that may not be the case in real day-to-day life, where the phone might never activate the Turbo-Boost speed and just use the slower "real" clock speed all the time.
  • name99 - Wednesday, July 17, 2013 - link

    The fact of Turbo-Boost is a problem for lazy benchmarking; it is NOT a problem for Intel.

    Why do you want a high speed core in your phone? There is a population that wants to run aggressive games, or transcode video or whatever, and these people care about sustained performance. But they're in the dramatic minority. For most users, the value of a high speed core is that it makes the phone more zippy, meaning that operations are fast when they need to be fast, after which the phone can go back to sleeping. The user-level "speed" of a phone is measured by how fast it draws a (single) PDF page, or renders a (single) complex web page, or launches an app, not by how it performs over any task that takes longer than a second. In such a world, if Turbo-Boost allows the app to sprint for a second, then go back to low-power mode, the user is very happy with that behavior.

    The only "problem" with this strategy, for Intel, is that it is obvious and will be copied by everyone. Intel is there first and most aggressively for historical and process reasons, but there's no reason they will remain the only player.
    (It's also quite likely that competitors will adopt the ideas of Turbo-Boost, just never call it that. After all, the problem to be solved for phones is different from the REAL Turbo-Boost problem. Turbo-Boost comes from a world where you run the chip as hot as it can go, till it's just about to overheat. If an ARM core has no danger of actually overheating, then the design space is different. Now it's simply "we'll rate the core for 2GHz, but at that speed it uses up 10 nJ/op, so as far as possible we'll try to run it at 1GHz (using up 2 nJ/op) or better yet 100MHz (using up 10 pJ/op)" [all numbers made up, but you get the point].)
  • Wilco1 - Wednesday, July 17, 2013 - link

    All CPUs typically run at a far lower frequency than the maximum - I'm sure nobody believes e.g. Krait 800 runs all 4 cores at 2.3GHz all the time. So if you call a specific frequency the "base", then anything faster than that is automatically a turbo boost. In that sense Intel's turbo boost is largely marketing: a way to claim a low TDP by setting the base frequency arbitrarily low, while allowing the chip to go well over that TDP for a certain amount of time at a much higher boost frequency.
  • inighthawki - Wednesday, July 17, 2013 - link

    My 3.5/3.9GHz advertised i7-4770K runs at 800MHz at idle :)
  • TomWomack - Thursday, July 18, 2013 - link

    My desktop Haswell with Intel's retail cooler runs all four cores 24/7 turbo-boosted - I'm not quite sure to what; it reports 3401MHz in Linux, but that's because /proc/cpuinfo asks ACPI, which isn't fully compatible with turbo-boost. The machine draws 100W from the mains while doing so, which (given that it idles at 28W) is entirely consistent with the 84W TDP.

    And, indeed, it runs at 800MHz at idle, and I suspect often slower than that, but /proc/cpuinfo doesn't report C-states.
  • RicDavis - Friday, July 19, 2013 - link

    Intel's turbostat has proven very useful for getting good reporting of CPU clock speeds under Linux. With the -v option it also displays the maximum speeds the CPU will run at as the number of active cores varies. Recommended. i7z is another option, but I've seen it do a bad job of showing which cores are active when hyperthreading is enabled.
