Cortex A15 Architecture

I want to go deeper into ARM’s Cortex A15, but I’ll have to save that for another time. At a high level, you’re looking at a much deeper, much wider architecture than the Cortex A9. The integer pipeline is significantly deeper (15 stages vs. 9 stages), but branch prediction has been improved considerably to help offset the difference.
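To see how better prediction can offset a deeper pipeline, here is a toy CPI model. All rates and depths below are illustrative assumptions, not measured A9/A15 figures, and a real mispredict flush costs roughly the front-end refill depth rather than the full pipeline depth:

```python
# Toy model: CPI lost to branch mispredicts. We charge the full pipeline
# depth per mispredict for simplicity; real penalties are somewhat smaller.
def branch_penalty_cpi(mispredicts_per_instr: float, flush_depth: int) -> float:
    """Extra cycles per instruction lost to branch mispredicts."""
    return mispredicts_per_instr * flush_depth

# A 9-stage pipeline with a (made-up) 1% mispredict rate...
a9_penalty = branch_penalty_cpi(0.010, 9)
# ...vs a 15-stage pipeline whose improved predictor cuts the rate to 0.6%.
a15_penalty = branch_penalty_cpi(0.006, 15)
# With these assumed numbers, the better predictor exactly cancels the
# deeper pipeline's larger flush cost.
```

With these hypothetical rates the two penalties come out equal, which is the break-even the improved predictor has to hit.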

The front end is 50% wider and has double the instruction fetch bandwidth of the Cortex A9, which helps increase instruction level parallelism. In order to capitalize on the 3-wide machine, ARM dramatically increased the size of the reorder buffer and all associated data structures. While the Cortex A9 could keep around 32 - 40 decoded instructions in its reorder buffer, the Cortex A15 can hold 128 - an increase of up to 4x. The larger ROB alone gives you a good idea of the magnitude of the difference between the Cortex A9 and A15. While the former was a natural evolution of the Cortex A8, ARM’s Cortex A15 is really a leap forward in both performance and power consumption - clearly aimed at something much more than just smartphones.
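A quick way to build intuition for why the window size matters is a toy simulation of memory-level parallelism: a core that issues in order of age, up to three per cycle, but can only look as far ahead as its window allows. Everything here (the miss stride, latencies, issue width) is a made-up illustration, not a model of either core:

```python
def simulate_ipc(window: int, n_instr: int = 4096, width: int = 3) -> float:
    """Toy OoO-issue, in-order-retire core. Every 16th instruction is a
    long-latency 'cache miss' (64 cycles); everything else takes 1 cycle.
    All instructions are independent, so throughput is limited only by
    issue width and how far ahead the window lets the core look."""
    LAT_MISS, LAT_HIT, MISS_STRIDE = 64, 1, 16
    done = [0] * n_instr          # completion cycle; 0 = not yet issued
    head, cycle = 0, 0
    while head < n_instr:
        issued = 0
        # issue oldest-first, only from within the window
        for i in range(head, min(head + window, n_instr)):
            if done[i] == 0:
                lat = LAT_MISS if i % MISS_STRIDE == 0 else LAT_HIT
                done[i] = cycle + lat
                issued += 1
                if issued == width:
                    break
        cycle += 1
        while head < n_instr and 0 < done[head] <= cycle:
            head += 1             # retire in program order
    return n_instr / cycle
```

With an A9-sized window of ~40 entries the core can only overlap a couple of outstanding misses; a 128-entry window overlaps several times as many and ends up with a markedly higher IPC on the same instruction stream.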

Moving to the execution core, the A15 continues the trend of being considerably wider than the A9. There are more execution ports and more execution units, all of which help increase ILP and single threaded performance. ARM went with multiple, independent issue queues in order to keep frequencies high. Each issue queue can accept up to three instructions, and all issue queues can dispatch in parallel.
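The clustered issue scheme can be sketched as a handful of small queues that each accept a few instructions per cycle and dispatch in parallel. Queue count, depth, and the round-robin steering below are illustrative assumptions, not the A15's actual layout:

```python
from collections import deque

class IssueQueues:
    """Sketch of clustered issue: several small queues, each accepting up to
    `accept_per_queue` instructions per cycle, all dispatching in parallel."""
    def __init__(self, n_queues: int = 3, accept_per_queue: int = 3, depth: int = 8):
        self.queues = [deque(maxlen=depth) for _ in range(n_queues)]
        self.accept = accept_per_queue

    def enqueue(self, instrs) -> int:
        """Steer instructions round-robin into the queues; each queue takes
        at most `accept` new entries per cycle. Returns how many were accepted."""
        budget = [self.accept] * len(self.queues)
        taken = 0
        for k, ins in enumerate(instrs):
            q = k % len(self.queues)
            if budget[q] and len(self.queues[q]) < self.queues[q].maxlen:
                self.queues[q].append(ins)
                budget[q] -= 1
                taken += 1
        return taken

    def dispatch(self) -> list:
        """Every non-empty queue dispatches its oldest entry in parallel."""
        return [q.popleft() for q in self.queues if q]
```

The appeal of small independent queues over one large unified scheduler is that each picker examines only a few entries, which keeps the critical path short enough for high clock speeds.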

The A15 can execute instructions out of order like the A9, but its reordering abilities grow quite a bit. All FP/NEON instructions had to be executed in-order on the Cortex A9; they can now be executed out of order on the A15. Despite the beefier OoO execution engine, the Cortex A15 still can’t reorder all memory operations: independent loads can be executed out of order, but stores can’t be completed ahead of loads.
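The constraint as described - independent loads may swap, stores may not complete ahead of loads - can be written down as a tiny predicate. This is my simplification of the rule stated above, not the ARMv7 memory model, which has many more cases:

```python
# Simplified model of the reordering rule described above. An op is a
# (kind, address) pair, with kind "load" or "store".
def can_reorder(older, younger) -> bool:
    """True if `younger` may execute ahead of `older` under this sketch."""
    different_addr = older[1] != younger[1]
    both_loads = older[0] == "load" and younger[0] == "load"
    # Only independent loads reorder: same-address ops must keep program
    # order, and a store may never complete ahead of an older load.
    return different_addr and both_loads
```

So two loads to different addresses may swap, while any pair involving a store, or any same-address pair, stays in program order.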

The Cortex A15 moves back to an integrated L2 cache, rather than the separate IP block used by the Cortex A9. L1 and L2 cache latencies remain largely unchanged, although I believe the A15 does see a 1 - 2 cycle penalty over the A9 in a few cases. The level 2 TLB and other data structures grow considerably in size in order to feed the hungrier machine.

Although the L1 caches remain the same size as in NVIDIA’s Cortex A9 implementation (32KB I + 32KB D), the L2 cache grows to 2MB. The 2MB L2 is shared by all four cores (the companion core has its own private 512KB L2), and any individual core can occupy up to the entire 2MB on its own. Alternatively, all four cores can share the large L2 evenly.
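One way to see what the large L2 buys is a back-of-the-envelope average memory access time (AMAT) calculation. All latencies and hit rates below are illustrative assumptions, not measured Tegra 4 numbers:

```python
def amat(l1_lat: float, l1_hit: float,
         l2_lat: float, l2_hit: float, mem_lat: float) -> float:
    """Average memory access time in cycles; hit rates are local to each level."""
    return l1_lat + (1 - l1_hit) * (l2_lat + (1 - l2_hit) * mem_lat)

# Hypothetical figures: 4-cycle L1, 20-cycle L2, 150-cycle DRAM, 95% L1 hits.
small_l2 = amat(4, 0.95, 20, 0.70, 150)  # smaller L2 -> lower local hit rate
big_l2   = amat(4, 0.95, 20, 0.90, 150)  # 2MB L2 -> higher local hit rate
```

With these made-up numbers, raising the L2's local hit rate from 70% to 90% cuts the average access time noticeably even though the per-level latencies are identical - which is the whole point of growing the cache rather than chasing lower latencies.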

Comments Locked



  • TheJian - Monday, February 25, 2013 - link
    ipad4 scored 47 vs. 57 for T4 in Egypt HD offscreen 1080p. I'd say it's more than competitive with ipad4. T4 scores 2.5x iphone5 in Geekbench (4148 vs. 1640). So it's looking like it trumps A6 handily.
    T4 should beat 600 in AnTuTu and Browsermark. If S800 is just an upclocked CPU and Adreno 330, this is going to be a tight race in Browsermark and a total killing for NV in AnTuTu. 400mhz won't make up the gap: 22678 for HTC ONE vs. T4's 36489. It will fall far short in AnTuTu unless the gpu means a lot more than I think in that benchmark. I don't think S600 will beat T4 in anything. The HTC ONE only runs at 1.7ghz; the spec sheet at QCOM says it can go up to 1.9ghz, but that won't save it from the beating it took according to PCMag. They said this:
    "The first hint we've seen of Qualcomm's new generation comes in some benchmarks done on the HTC One, which uses Qualcomm's new 1.7-GHz Snapdragon 600 chipset - not the 800, but the next notch down. The Tegra 4 still destroys it."

    Iphone5 got destroyed too. Geekbench on T4=4148 vs. iphone5=1640. OUCH.

    Note samsung/qualcomm haven't let PCMag run their own benchmarks on Octa or S800. Nvidia is showing no signs of fear here. Does anyone have data on the cpu in Snapdragon 800? Is the Krait 400 in it just a Krait 300 clocked up 400mhz because of the process, or is it actually a different core? It kind of looks like this is just 400mhz more on the cpu with an Adreno 330 on top instead of the 320 in S600.
    "The Krait 300 provides new microarchitecture improvements that increase per-clock performance by 10–15% while pushing CPU speed from 1.5GHz to 1.7GHz. The Krait 400 extends the new microarchitecture to 2.3GHz by switching to TSMC's high-k metal gate (HKMG) process."

    Anyone have anything showing the cpu is MORE than just 400mhz faster on a new process? This sounds like no change in the chip itself. That article was Jan 23 and Gwennap is pretty knowledgeable. Admittedly I haven't done a lot of digging yet (can't find much on the 800's cpu specs; did most of my homework on S600 since it comes first).

    We need some Rogue 6 data now too :) Lots of posts on the G6100 in the last 18hrs... Still reading it all... ROFL (MWC is causing me to do a lot of reading today...). About 1/2 way through, and most of it seems to just brag about OpenGL ES3.0 and DX11.1 (not seeing much about perf). I'm guessing that's because NV doesn't have them on T4 :) They're not used yet, so I don't care, but that's how I'd attack T4 in the news ;) Try running something from DX11.1 on a soc and I think we'll see a slide show (think crysis3 on a soc... LOL). I'd almost say the same for all of ES3.0. NV was wise to save die space here and do a simpler chip that can undercut the others' prices. They're working on DX9_3 features in WinRT (hopefully MS will allow it). OpenGL ES3.0 & DX11.1 will be more important next xmas. Game devs won't be aiming at $600 phones for their games this xmas; they'll aim at the mass market for the most part, just like on the pc (where they aim at consoles' DX9, then we get ports... LOL). It's a rare game that's aimed at GTX680/7970GHz and up. Crysis3? Most devs shoot far lower.
    No perf bragging, just features... Odd, while everyone else brags vs. their own old versions or other chips.

    Qcom CMO goes all out:
    "Nvidia just launched their Tegra 4, not sure when those will be in the market on a commercial basis, but we believe our Snapdragon 600 outperforms Nvidia’s Tegra 4. And we believe our Snapdragon 800 completely outstrips it and puts a new benchmark in place.

    So, we clean Tegra 4's clock. There’s nothing in Tegra 4 that we looked at and that looks interesting. Tegra 4 frankly, looks a lot like what we already have in S4 Pro..."

    OOPS... I guess he needs to check the perf of Tegra 4 again. PCMag shows his 600 chip got "DESTROYED" and all other competition "crushed". Why is Imagination not bragging about perf of G6100? Is it all about features/APIs without much more power? Note that page from phonearena is having issues (their server is), as I had to get it out of Google cache just now. He's a marketing guy from Intel, so you know, a "blue crystals" kind of guy :) The CTO would be bragging about perf, I think, if he had it. Anand C. is a fluff marketing guy from Intel (he has a masters in engineering, but he's just marketing now, it appears, and NOT throwing around data, just "I believe" comments). One last note: Exynos Octa got kicked out of the Galaxy S4 because it overheated the phone, according to the same site. So Octa is tablet only, I guess? The Galaxy S4 is a superphone, and Octa didn't work in it if what they said is true (rumored 1.9ghz rather than the 1.7ghz HTC ONE version).
  • fteoath64 - Wednesday, February 27, 2013 - link

    @TheJian: "ipad4 scored 47 vs. 57 for T4 in egypt hd offscreen 1080p. I'd say it's more than competitive with ipad4. T4 scores 2.5x iphone5 in geekbench (4148 vs. 1640). So it's looking like it trumps A6 handily."

    Good reference! This shows T4 doing what it ought to in the tablet space, as Apple's CPU release cycle tends to be 12 to 18 months, giving Nvidia lots of breathing room. Besides, since Qualcomm just launched all their new ranges, the next cycle is going to be a while. However, Qualcomm has so many design wins on their Snapdragons, it leaves little room for Nvidia and others to play. Is this why TI went out of this market? So could Amazon be a candidate for T4i on their next tablet update?

    PS: The issue with Apple putting the quad PVR544 into the iPad was ensuring overall performance with retina is up to par with the non-retina version. Especially the Mini, which is among the fastest tablets out there considering it needs to push less than a million pixels while delivering a good 10 hours of use.
  • mayankleoboy1 - Tuesday, February 26, 2013 - link

    Hey AnandTech, you never told us what "Project Thor" is - the thing JHH let slip at CES.
  • CeriseCogburn - Thursday, February 28, 2013 - link

    This is how it goes for nVidia from - well, we know whom at this point; meaning, it appears, everyone here.

    " I have to give NVIDIA credit, back when it introduced Tegra 3 I assumed its 4+1 architecture was surely a gimmick and to be very short lived. I remember asking NVIDIA’s Phil Carmack point blank at MWC 2012 whether or not NVIDIA would standardize on four cores for future SoCs. While I expected a typical PR response, Phil surprised me with an astounding yes. NVIDIA was committed to quad-core designs going forward. I still didn’t believe it, but here we are in 2013 with NVIDIA’s high-end and mainstream roadmaps both exclusively featuring quad-core SoCs. NVIDIA remained true to its word, and the more I think about it, the more the approach makes sense."

    paraphrased: "They're lying to me, they lie, lie, lie, lie, lie. (pass a year or two or three) Oh my, it wasn't a lie."
    Rinse and repeat, often and in overlapping fashion.

    Love this place, and no one learns.
    Here's a clue: it's AMD that has been lying its yapper off to you for years on end.
  • Origin64 - Tuesday, March 12, 2013 - link

    Wow. 120Mbps LTE? I get a fifth of that through a cable at home.
