ARM told us to expect some of the first 64-bit ARMv8 based SoCs to ship in 2014, and it looks like we're seeing just that. Today Qualcomm is officially announcing its first 64-bit SoC: the Snapdragon 410 (MSM8916). 

Given that there's no 64-bit Android available at this point, most of the pressure to go 64-bit in the Android space is actually coming from the OEMs, who view 64-bit support as a necessary checkbox feature thanks to Apple's move with the A7. Combine that with the fact that ARM's most production-ready 64-bit IP is the Cortex A53 (successor to the Cortex A5/A7 line), and all of a sudden it makes sense why Qualcomm's first 64-bit mobile SoC is aimed at the mainstream market (Snapdragon 400 instead of 600/800).

I'll get to explaining ARM's Cortex A53 in a moment, but first let's look at the specs of the SoC:

Qualcomm Snapdragon 410
Internal Model Number     MSM8916
Manufacturing Process     28nm LP
CPU                       4 x ARM Cortex A53, 1.2GHz+
GPU                       Qualcomm Adreno 306
Memory Interface          1 x 64-bit LPDDR2/3
Integrated Modem          9x25 core, LTE Category 4, DC-HSPA+

At a high level we're talking about four ARM Cortex A53 cores, likely running at around 1.2 - 1.4GHz. Having four cores unfortunately still seems to be a requirement for OEMs in many emerging markets, although I'd personally much rather see two higher clocked A53s. Qualcomm said the following about 64-bit in its 410 press release:

"The Snapdragon 410 chipset will also be the first of many 64-bit capable processors as Qualcomm Technologies helps lead the transition of the mobile ecosystem to 64-bit processing."

Keep in mind that Qualcomm presently uses a mix of ARM and custom developed cores in its lineup. The Snapdragon 400 line already includes both ARM (Cortex A7) and Krait based designs, so the move to Cortex A53 in the Snapdragon 410 isn't unprecedented. It will be very interesting to see what happens in the higher-end SKUs. I don't expect Qualcomm will want a split between 32-bit and 64-bit designs, which means we'll either see a 64-bit Krait successor this year or we'll see more designs that leverage ARM IP in the interim.

As you'll see from my notes below however, ARM's Cortex A53 looks like a really good choice for Qualcomm. It's an extremely power efficient design that should be significantly faster than the Cortex A5/A7s we've seen Qualcomm use in this class of SoC in the past.

The Cortex A53 CPU cores are paired with an Adreno 306 GPU, a variant of the Adreno 305 used in Snapdragon 400 based SoCs (MSM8x28/8x26).

The Snapdragon 410 also features an updated ISP compared to previous 400 offerings, adding support for up to a 13MP primary camera (no word on max throughput however).

Snapdragon 410 also integrates a Qualcomm 9x25 based LTE modem block (also included in the Snapdragon 800/MSM8974), featuring support for LTE Category 4, DC-HSPA+ and the usual legacy 3G air interfaces.

All of these IP blocks sit behind a single-channel 64-bit LPDDR2/3 memory interface.

The SoC is built on a 28nm LP process and will be sampling in the first half of 2014, with devices shipping in the second half of 2014. Given its relatively aggressive schedule, the Snapdragon 410 may be one of the first (if not the first) Cortex A53 based SoCs in the market. 

A Brief Look at ARM's Cortex A53

ARM's Cortex A53 is a dual-issue in-order design, similar to the Cortex A7. Although the machine width is unchanged, the A53 is far more flexible in how instructions can be co-issued compared to the Cortex A7 (e.g. branch, data processing, load-store, & FP/NEON all dual-issue from both decode paths). 

The A53 is fully ISA compatible with the upcoming Cortex A57, making it the first ARMv8 LITTLE processor, intended for use in big.LITTLE configurations alongside an A57.

The overall pipeline depth hasn't changed compared to the Cortex A7. We're still dealing with an 8-stage pipeline (a 3-stage fetch pipeline plus 5 stages of decode/execute for integer, or 7 for NEON/FP). The vast majority of instructions execute in a single cycle, leaving branch prediction as a big lever for increasing performance. ARM significantly increased branch prediction accuracy with the Cortex A53, so much so that the work was actually leveraged in the dual-issue, out-of-order Cortex A12. ARM also improved the back end a bit, increasing datapath throughput.
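To get a feel for why branch prediction is such a big lever on a short in-order pipeline, here's a back-of-the-envelope CPI model. The branch fraction and misprediction rates below are illustrative assumptions, not ARM-published figures; only the 8-stage pipeline depth (used as a rough flush penalty) comes from the description above.

```python
# Rough sketch: how branch prediction accuracy moves effective CPI on a
# short in-order pipeline like the A53's. All inputs are illustrative.

def effective_cpi(base_cpi, branch_fraction, mispredict_rate, penalty_cycles):
    """CPI = base + (branches per instruction) * (mispredict rate) * (flush penalty)."""
    return base_cpi + branch_fraction * mispredict_rate * penalty_cycles

# Assumptions: ~20% of instructions are branches; a mispredict costs roughly
# the 8-stage pipeline depth in flushed work.
base, branches, penalty = 1.0, 0.20, 8

cpi_90 = effective_cpi(base, branches, mispredict_rate=0.10, penalty_cycles=penalty)
cpi_95 = effective_cpi(base, branches, mispredict_rate=0.05, penalty_cycles=penalty)

print(f"Effective CPI at 90% prediction accuracy: {cpi_90:.2f}")  # 1.16
print(f"Effective CPI at 95% prediction accuracy: {cpi_95:.2f}")  # 1.08
print(f"Improvement from better prediction: {cpi_90 / cpi_95 - 1:.1%}")
```

Even with these toy numbers, halving the misprediction rate buys several percent of throughput without widening the machine, which is why ARM spent its transistor budget there rather than on going out-of-order.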

The result of all of this is a dual-issue design that's pushed pretty much as far as you can go without going out-of-order. Below are some core-level performance numbers, all taken in AArch32 mode, comparing the Cortex A53 to its Cortex A5/A7 predecessors:

Core Level Performance Comparison (all cores running at 1.2GHz)

                      DMIPS   CoreMark   SPECint2000
ARM Cortex A5         1920    -          350
ARM Cortex A7         2280    3840       420
ARM Cortex A9 r4p1    -       -          468
ARM Cortex A53        2760    4440       600
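A quick sketch of what the table's numbers work out to on a per-clock basis, using only the figures above (the `None` entries mark the gaps in the table):

```python
# Per-clock metrics and relative speedups derived from the 1.2GHz table above.
scores = {
    "Cortex A5":      {"DMIPS": 1920, "CoreMark": None, "SPECint2000": 350},
    "Cortex A7":      {"DMIPS": 2280, "CoreMark": 3840, "SPECint2000": 420},
    "Cortex A9 r4p1": {"DMIPS": None, "CoreMark": None, "SPECint2000": 468},
    "Cortex A53":     {"DMIPS": 2760, "CoreMark": 4440, "SPECint2000": 600},
}

mhz = 1200
for core, s in scores.items():
    if s["DMIPS"] is not None:
        print(f"{core}: {s['DMIPS'] / mhz:.2f} DMIPS/MHz")

a53, a7 = scores["Cortex A53"], scores["Cortex A7"]
print(f"A53 over A7, SPECint2000: +{a53['SPECint2000'] / a7['SPECint2000'] - 1:.0%}")
print(f"A53 over A7, CoreMark:    +{a53['CoreMark'] / a7['CoreMark'] - 1:.0%}")
```

That's roughly 2.3 DMIPS/MHz for the A53 versus 1.9 for the A7, and a ~43% SPECint2000 uplift at the same clock.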

Even ignoring any uplift from new instructions or the move to 64-bit, the Cortex A53 is going to be substantially faster than its predecessors. I threw in hypothetical SPECint2000 numbers for a 1.2GHz Cortex A9 to put the A53's performance in even better perspective. You should expect better performance than a Cortex A9 r4 at the same frequency, although the A9 r4 is designed to hit much higher frequencies (e.g. 2.3GHz for the Cortex A9 r4p1 in NVIDIA's Tegra 4i).
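To put that clock-speed caveat in rough numbers: if we assume SPECint2000 scales linearly with clock (a simplification that ignores memory-bound phases, so treat these as upper bounds), the 2.3GHz Tegra 4i comparison looks like this:

```python
# Rough sketch: scale the 1.2GHz SPECint2000 scores linearly with clock.
# Linear scaling is an assumption; real scaling is sublinear because
# memory-bound phases don't speed up with the core clock.

a53_score, a9_score, base_ghz = 600, 468, 1.2

a9_at_2p3 = a9_score * (2.3 / base_ghz)
print(f"Cortex A9 r4p1 at 2.3GHz: ~{a9_at_2p3:.0f} SPECint2000")  # ~897

a53_match_ghz = base_ghz * (a9_at_2p3 / a53_score)
print(f"A53 clock needed to match: ~{a53_match_ghz:.2f}GHz")  # ~1.79
```

In other words, the A53's per-clock advantage doesn't close the gap to a 2.3GHz A9 r4 unless the A53 itself is clocked well above the 1.2GHz Qualcomm is targeting here.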

ARM included a number of power efficiency improvements and is targeting 130mW single-core power consumption at 28nm HPM (running SPECint 2000). I'd expect slightly higher power consumption at 28nm LP but we're still talking about an extremely low power design.

I'm really excited to see what ARM's Cortex A53 can do. It's a potent little architecture, one that I wish we'd see taken to higher clock speeds and perhaps even used in higher end devices. The most obvious fit for these cores however is something like the Moto G, which presently uses the 32-bit Cortex A7. Given Qualcomm's schedule, I wouldn't be surprised to see something like a Moto G update late next year with a Snapdragon 410 inside. Adding LTE and four Cortex A53s would really make that the value smartphone to beat.


  • FwFred - Wednesday, December 11, 2013 - link

    This graph ignores the benefit to sleeping on non-core resources... fabrics, IOs, etc. If your internal datapaths are scaled to meet higher performance cores, low powered cores running longer may lead to inefficiencies.

    It's obviously about balance in SoC design, and efficiency is not as simple as running small cores at low frequencies if you still want to allow scaling to higher performance.
  • Wilco1 - Wednesday, December 11, 2013 - link

    Mobile devices are never really fully sleeping, so while you could power off the screen, you still need to check the touchscreen, keep in contact with the base station, check for incoming calls etc.

    Yes, you definitely want the ability to scale to high performance rather than only using slow cores or low frequencies, but doing so increases power consumption and the total energy needed to perform a given task.
  • michael2k - Wednesday, December 11, 2013 - link

    Yes it does:
    If we extended the timeline for the iPhone 4 significantly beyond the end of its benchmark run we'd see the 4S eventually come out ahead in battery life as it was able to race to sleep quicker.
  • Wilco1 - Thursday, December 12, 2013 - link

    No. The 4S uses significantly more power than the iPhone 4 when actually running. You can clearly see that the power consumption above the idle level for the 4S is exactly twice that of the 4, but it is only 75% faster. That means it used about 15% more energy to complete the benchmark. So clearly "race to idle" uses more energy than running a bit slower. If you lowered the maximum clock frequency of the 4S then it would become as efficient as the 4.
  • Exophase - Tuesday, December 10, 2013 - link

    If race to sleep were always the best solution CPUs wouldn't have DVFS at all. You'd run at the highest clock always then power gate when idle. But that's not how things are done at all. Dynamic clocking schemes actively try to minimize the amount of time the CPU spends idle, at least until it hits some minimum clock speed.
  • michael2k - Tuesday, December 10, 2013 - link

    Who said it was always the best solution? It is just one solution, as is ramping clocks, increasing cores, increasing execution units, etc.
  • Wilco1 - Wednesday, December 11, 2013 - link

    The point is that "run to idle" is not a solution if you are trying to prolong battery life.
  • Exophase - Wednesday, December 11, 2013 - link

    It was said outright: "Race to sleep is the best fit we've yet come up with given our current technologies."
  • melgross - Wednesday, December 18, 2013 - link

    That's not even close to being true.
  • xenol - Tuesday, December 10, 2013 - link

    More registers are better because they're the fastest memory available to the CPU. The fewer you have, the more time the CPU will spend waiting on slower memory. Also, a RISC architecture has no concept of, say, "add RAM_location_1 to RAM_location_2"; RISC can only move data between RAM and registers and perform operations on registers. Therefore, having more registers is pretty much vital on a RISC system.
