The A6 GPU: PowerVR SGX 543MP3?

Apple made a similar "up to 2x" claim for GPU performance. It didn't share any benchmarks, but there are four options here:

1) PowerVR SGX 543MP2 (same as in A5) at 2x the clock speed
 
2) PowerVR SGX 543MP4 at the same clock as the MP2 in the A5
 
3) Marginally higher clocked PowerVR SGX 543MP3
 
4) Next-gen PowerVR Rogue GPU
 
It's too early for #4. The first option makes sense, but you run into the same issue as on the CPU side: ramping clocks typically requires higher voltages (though it's also possible that voltages drop in the move to the new process technology).
 
The second option trades voltage for die area, and based on the A5X, Apple is clearly willing to spend die area where necessary.
 
The third is sort of the best of both worlds: you don't take a huge die area penalty, you don't run at a significantly higher frequency, and you can still get to that same 2x value.
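The arithmetic behind the three SGX options can be sketched quickly. The clock multipliers below are illustrative assumptions (the MP3's ~1.33x in particular), not confirmed specs; the point is simply that all three paths reach the same "up to 2x" figure:

```python
# Idealized relative-throughput sketch for the three SGX options above.
# The A5's SGX543MP2 clock is normalized to 1.0; all clock multipliers
# here are assumptions for illustration, not confirmed Apple specs.
def gpu_throughput(cores, relative_clock):
    """Idealized throughput: scales linearly with core count and clock."""
    return cores * relative_clock

a5_mp2  = gpu_throughput(2, 1.0)         # baseline: SGX543MP2 at A5 clocks
option1 = gpu_throughput(2, 2.0)         # MP2 at 2x the clock
option2 = gpu_throughput(4, 1.0)         # MP4 at the same clock
option3 = gpu_throughput(3, 4.0 / 3.0)   # MP3 at ~1.33x the clock

# All three land at the same "up to 2x" figure relative to the A5
for option in (option1, option2, option3):
    assert abs(option / a5_mp2 - 2.0) < 1e-9
```

In practice scaling is never perfectly linear with either cores or clocks, but as a first-order model it shows why the three options are interchangeable from a marketing-number standpoint.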

The third option is the most elegant and likely what Apple chose here. Remember that overall die size is dictated by the amount of IO you have around the chip. The A5X had four 32-bit LPDDR2 memory controllers, which gave Apple a huge die area to work with.

The move to a smaller manufacturing process cuts down the total die area, which means Apple would either have to add a ton of compute (to fill empty space; there's no sense in shipping a big chip with a bunch of unused area) or narrow the memory interface to compensate. Pair that with the fact that the iPhone 5 doesn't have the same memory bandwidth requirements (a 0.7MP display vs. the iPad's 3.1MP) and it makes sense that Apple would go for a narrower memory interface on the A6 compared to the A5X.
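For reference, the 0.7MP and 3.1MP figures fall straight out of the shipping panel resolutions (iPhone 5 at 1136 x 640, third-generation iPad at 2048 x 1536):

```python
# Where the 0.7MP vs. 3.1MP figures come from: native display resolutions.
iphone5_pixels = 1136 * 640     # iPhone 5: 727,040 pixels, ~0.7MP
ipad3_pixels   = 2048 * 1536    # 3rd-gen iPad (A5X): 3,145,728 pixels, ~3.1MP

ratio = ipad3_pixels / iphone5_pixels
print(round(ratio, 1))  # the iPad pushes ~4.3x the pixels per frame
```

At equal frame rates the iPad has to fill more than four times as many pixels, which is why the A5X needed the wide memory interface and the iPhone 5 doesn't.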
 
How much narrower? Phil Schiller mentioned the A6 is 22% smaller than the A5. We can assume this comparison is against the 45nm A5 and not the 32nm A5r2, which would mean the A6 doesn't gain any memory channels over the A5. In other words, it's quite likely the A6 once again has a 2x32-bit LPDDR2 memory interface.
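A quick sketch of what that means for theoretical peak bandwidth. The LPDDR2-800 data rate below is an assumption for illustration (it's consistent with the wide A5X configuration's 12.8GB/s figure); Apple hadn't disclosed the A6's actual memory speed at the time of writing:

```python
# Theoretical peak bandwidth for the speculated memory configurations.
# LPDDR2-800 (800 MT/s) is assumed here for illustration only.
def peak_bandwidth_gbps(channels, channel_width_bits, transfers_per_sec):
    """Peak bandwidth in GB/s: total bus width in bytes times transfer rate."""
    bytes_per_transfer = channels * channel_width_bits / 8
    return bytes_per_transfer * transfers_per_sec / 1e9

a5x = peak_bandwidth_gbps(4, 32, 800e6)  # 4x32-bit LPDDR2-800 -> 12.8 GB/s
a6  = peak_bandwidth_gbps(2, 32, 800e6)  # 2x32-bit LPDDR2-800 -> 6.4 GB/s
```

Halving the interface halves the peak bandwidth, but against a display with roughly a quarter of the pixels, bandwidth per pixel actually goes up.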
 

Final Words

 
There's not much more to add for now. We'll have a device in a week and I suspect the first reviews will be out a day or two before then. Then the real work begins on finding out exactly what Apple has done inside the A6. If anyone has been dying to put together some good low level iOS benchmarks, now is the time to start.
 
This is a huge deal for Apple. It puts the company in another league when it comes to vertical integration. The risks are higher (ARM's own designs are tested and proven across tons of different devices/platforms) but the payoff is potentially much greater. As Qualcomm discovered, it's far easier to differentiate (and dominate?) if you're shipping IP that's truly unique from what everyone else has.
 
Now we get to see just how good Apple's CPU team really is.

Comments (162)


  • jjj - Saturday, September 15, 2012 - link

    Always expected A15 .
    A9 could have been an option; Apple quoted some perf numbers in some tests, and since they aren't all that honest it could have been higher clocks combined with faster RAM and much faster NAND.
    A custom core is far more interesting since they can build the silicon and the software together, and they do have the unit volumes to afford it.
    Now they've got to integrate the baseband soon, and more in the next few years.
  • Lucian Armasu - Saturday, September 15, 2012 - link

    I'm curious what the performance of this will be like. Even though you seem to think they've focused a lot on power consumption, power consumption has barely improved in the new iPhone, according to Apple's own numbers. So it remains to be seen if their CPU can compete with Qualcomm's S4 Pro and Cortex A15. My guess is it won't, because this is a first for Apple, so it's unlikely they made something better than Qualcomm or ARM in terms of architecture, and knowing Apple they probably kept the frequency low, too (probably around 1.2GHz per core).

    As for the GPU, if they went with PowerVR SGX543 again, that means they won't support OpenGL ES 3.0 in the iPhone for a year, until the next iPhone arrives, even though there will be phones supporting it as soon as this year.
  • WaltFrench - Saturday, September 15, 2012 - link

    I'm not sure that Apple needs to develop something “better than Qualcomm” to out-perform for its devices. It'd seem most high-performance, low-power designs would optimize for the Dalvik JIT instruction stream, which is very likely quite different from what Xcode generates.

    After all, the confirmation process seems to have started in Xcode; it's obvious that Apple is attempting to exploit its tight linkage between OS, development tools and silicon.

    Given the task-specific nature of CPU optimization, I'm not even sure what instruction mix would even be available for the unbiased analysis you seem to think would be out there.
  • tipoo - Saturday, September 15, 2012 - link

    Krait already outperforms Apple's 2x claim in CPU-only benchmarks; I don't think the A6 will beat Krait there. But once more Apple will have the fastest GPU by a long shot; other phones have only just started to get close to the 4S.

    I still wonder how, after all this time, no one else has picked up SGX. It's in the PS Vita, but what smartphones besides the iPhone use it?
  • Death666Angel - Sunday, September 16, 2012 - link

    Qualcomm has Adreno, so they will never use anything else. TI uses SGX graphics in its OMAP3/4 and upcoming OMAP5 SoCs; for example, all current OMAP44x0 SoCs have some form of SGX in them (the 4460 can be found in the Galaxy Nexus, the 4470 in the new Archos tablets, etc.). ST-Ericsson will use Rogue in its upcoming Nova SoC. Intel uses the SGX540, and Samsung used SGX in its old Hummingbird SoC.
  • Death666Angel - Sunday, September 16, 2012 - link

    Hit "post" too soon. :D
    If you are wondering why no one else uses the multicore variants of the SGX, that is most likely because no one wants to build an SoC that large in the Android world. It costs too much and doesn't bring enough benefit. Apple can do it easily: it has the margins, it can make good use of the graphics part because it writes its own OS, and having that graphics part across all its platforms means developers can exploit it easily as well. If you have an Android handset with great graphics, developers still need to code for all the others with mediocre graphics, so the good graphics might not make a game look or run any better.
  • tipoo - Sunday, September 16, 2012 - link

    It makes sense that no one else wants to build something with that die size, as it would be costly for a few reasons, but maybe with 28 and 32nm processes they could at least use the MP2, which Apple has been shipping at 45nm for a year.
  • Death666Angel - Sunday, September 16, 2012 - link

    Yeah, TI will use MP2 in OMAP5 (SGX544 at 532MHz).
    Samsung will likely stay with Mali GPUs; they are doing quite well. Nvidia will stay with their GeForce GPUs as well, and Qualcomm has Adreno. Many of the Chinese SoCs are also using Mali this time around. So three of the big high-end SoC manufacturers have no need for SGX multicore solutions in the future. And it seems Nvidia has the dedicated Android gaming market mostly in their hands. :-)
  • joelypolly - Saturday, September 15, 2012 - link

    I might give Apple a bit more credit, since they used to build their own CPUs and were part-owners of ARM itself in the past. Also, the purchase of both PA Semi and Intrinsity should to some degree put Apple on equal footing with Qualcomm.
  • cocoviper - Sunday, September 16, 2012 - link

    I think you're forgetting there's a larger display and an additional radio to drive. The fact that they held the line on power with those added, and no real battery increase, suggests tremendous power savings at the SoC level.
