A8’s GPU: Imagination Technologies’ PowerVR GX6450

Last but not least on our tour of the A8 SoC is Apple’s GPU of choice, Imagination’s PowerVR GX6450.

When Apple first announced the A8 SoC as part of their iPhone keynote, they told us to expect a nearly 50% increase in graphics performance. Based on that information and on the fact that Apple was moving to a denser 20nm process, we initially believed that Apple would be upgrading from A7’s 4-core PowerVR design to a 6-core design, especially in light of the higher resolution displays present on the iPhone 6 and iPhone 6 Plus.

Instead, our analysis with Chipworks found that only four GPU cores were present on A8, which ruled out a 6-core design and considerably narrowed the remaining possibilities. Based on that information and, more importantly, Apple’s Metal Programming Guide, we have been able to narrow our options down to a single GPU: the PowerVR GX6450.

The GX6450 is the immediate successor to the G6430 first used in the A7, and is based on Imagination’s PowerVR Series6XT architecture. Imagination first announced Series6XT to the public at CES 2014, and just eight months later we are seeing the first Series6XT hardware reach retail.

We have already covered the PowerVR Series6/Series6XT architecture in some detail earlier this year, so we won’t go through all of it again, but we would encourage anyone who is interested to take a look at our architectural analysis for additional information. Otherwise we will be spending the bulk of our time looking at how GX6450 differs from G6430 and why Apple would choose this specific GPU.

From a technical perspective Series6XT is a direct evolution of the previous Series6, and GX6450 is likewise a direct evolution of G6430. In a like-for-like 4-core configuration there are only a few places where GX6450 outright has more hardware than G6430 (e.g. additional ALUs); instead, Series6XT focuses on adding features and improving performance over Series6 through various tweaks and optimizations to the architecture. Series6 at this point is actually over two years old – it was first introduced to the public at CES 2012 – so a lot has happened in the mobile GPU landscape over the intervening years.

The closest thing to a marquee feature on Series6XT is support for Adaptive Scalable Texture Compression (ASTC), a next-generation texture compression technology that is slowly making its way into GPUs from a number of manufacturers. Standardized by Khronos, the consortium responsible for OpenGL ES, ASTC is designed to offer better texture compression (with finer-grained quality options) than existing texture compression formats while also being a universal format supported by all GPUs. In Apple’s case they have always used PowerVR GPUs – and hence all of their products support PVRTC and more recently PVRTC2 – but exposing ASTC allows them to take advantage of the quality improvements while also making game development and porting from other platforms easier.
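
For developers, this shows up directly in Metal’s texture formats. As a minimal sketch (using current Swift API spellings rather than the Objective-C of the era, and checking the iOS GPU family 2 feature set that corresponds to A8-class hardware), choosing ASTC with a PVRTC fallback might look like this:

```swift
import Metal

// Pick a compressed texture format based on what the GPU supports.
// ASTC requires an A8-class GPU (iOS GPU family 2); PVRTC works on
// every PowerVR-based iOS device.
func preferredCompressedFormat(for device: MTLDevice) -> MTLPixelFormat {
    if device.supportsFeatureSet(.iOS_GPUFamily2_v1) {
        return .astc_4x4_ldr      // 8 bits/pixel, ASTC's highest-quality block size
    } else {
        return .pvrtc_rgba_4bpp   // 4 bits/pixel PVRTC fallback
    }
}

let device = MTLCreateSystemDefaultDevice()!
let descriptor = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: preferredCompressedFormat(for: device),
    width: 256, height: 256, mipmapped: true)
let texture = device.makeTexture(descriptor: descriptor)
```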

Less visible to users but certainly important to Apple, Series6XT also includes new power management capabilities to reduce power consumption under idle and light workloads. Through finer-grained power gating technology that Imagination dubs “PowerGearing G6XT”, GX6450 can now power down its shading clusters (USCs) individually, firing up only as many of them as a workload requires. As Apple continues to min-max their designs, being able to idle at a lower power state can be used to improve battery life and/or to let the GPU spend more time in higher power states when it needs them, improving overall efficiency.


[Chart: Apple iPhone GPU Performance Estimate Over The Years]

And, perhaps most importantly, Series6XT includes a series of under-the-hood optimizations to improve overall performance. When it comes to the internals of PowerVR architectures we only have limited details from Imagination on how they operate, so in some areas we know quite a bit about what Imagination has been up to, while in others their architectures remain something akin to a black box. At any rate, Imagination’s goal for Series6XT was to improve performance by up to 50% – this appears to be where Apple’s 50% performance improvement claim comes from – though as we’ll see, the gains in real-world applications are not going to be quite as potent.

What we do know about Series6XT is that Imagination has made some changes to the structure of the USCs themselves. Series6XT still uses a 16-wide SIMD design, but in each pipeline Imagination has added another set of medium/half-precision (FP16) ALUs specifically to improve FP16 performance. Instead of two FP16 ALUs capable of 3 operations each (6 ops total), Series6XT moves to four FP16 ALUs capable of 2 operations each (8 ops total). This is the only outright increase in shader hardware from Series6 to Series6XT, and on paper it improves FP16 throughput by 33% at equivalent clock speeds.

The focus on FP16 is interesting, though for iOS it may be misplaced. Half-precision floating point operations are an excellent way to conserve bandwidth and power by not firing up the more expensive FP32 ALUs, but the tradeoff is that the numbers they work with are far less precise, so their use has to be carefully planned. In practice, while FP16 operations do see some use, they are by no means the predominant type of floating point operation on the GPU, so the extra ALUs only deliver their 33% improvement in cases where performance is actually constrained by FP16 throughput.
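
To illustrate the precision tradeoff, here is a small sketch using Swift’s Float16 type (a much later Swift addition, used here purely for illustration; the GPU’s FP16 ALUs obey the same IEEE half-precision limits):

```swift
// FP16 has an 11-bit significand, so consecutive integers are only
// representable up to 2048; beyond that, small additions are simply lost.
let big: Float16 = 2048
print(big + 1 == big)     // true: 2049 rounds back to 2048

// FP32 (Float) has no trouble at this magnitude.
let exact: Float = 2048
print(exact + 1 == exact) // false

// This is why FP16 suits normalized values such as colors in [0, 1],
// but is risky for large coordinates or long accumulations.
let color: Float16 = 0.75
print(color)              // 0.75 is exactly representable
```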

FP32 performance meanwhile remains unchanged. Each USC pipeline contains two FP32 ALUs, good for up to 4 FP32 FLOPs per clock; across GX6450’s four 16-wide USCs that works out to 128 MADs (Multiply-Adds) per clock, to use our typical metric.
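
As a quick sanity check, the back-of-the-envelope math behind that figure (and the table below) looks like this:

```swift
// Peak FP32 throughput for GX6450, from the unit counts above.
let clusters = 4.0         // USCs (GPU cores)
let pipelines = 16.0       // 16-wide SIMD per USC
let fp32ALUsPerPipe = 2.0  // FP32 MAD ALUs per pipeline

let madsPerClock = clusters * pipelines * fp32ALUsPerPipe  // 128 MADs/clock
let flopsPerClock = madsPerClock * 2                       // a MAD counts as 2 FLOPs
let gflopsAt300MHz = flopsPerClock * 0.3                   // 76.8 GFLOPS

// The FP16 change: 6 ops/pipeline/clock on Series6 vs. 8 on Series6XT.
let fp16Gain = 8.0 / 6.0                                   // ≈ 1.33, i.e. +33%
```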

The rest of Series6XT’s optimizations are found at the front and back ends, which handle geometry processing and pixel fill respectively. Imagination has not told us exactly what they have done here, but both areas have been targeted to improve sustained polygon rates and pixel fillrate performance. These broader optimizations stand to benefit a wider range of workloads, though by how much we cannot say.

One final optimization we want to point out for Series6XT is that Imagination has made additional under-the-hood changes to improve GPU compute performance. We have not talked about GPU compute on iOS devices thus far, as until now Apple has not exposed any APIs suitable for it (e.g. OpenCL is not available on iOS). With iOS 8 Apple is releasing their Metal API, which is robust enough to be used for both graphics and compute. How developers put this capability to use remains to be seen, but for compute workloads GX6450 should perform even better relative to G6430.
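
For context, dispatching compute work through Metal is fairly lightweight. Here is a minimal sketch (again with modern Swift spellings; “square” is a hypothetical kernel assumed to be compiled into the app’s default shader library):

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

// Build a compute pipeline from a hypothetical kernel named "square".
let library = device.makeDefaultLibrary()!
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "square")!)

// 1M floats, processed in place in a CPU-visible buffer.
let count = 1 << 20
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

let commands = queue.makeCommandBuffer()!
let encoder = commands.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)

// One thread per element, grouped by the hardware's execution width.
let width = pipeline.threadExecutionWidth
encoder.dispatchThreadgroups(
    MTLSize(width: (count + width - 1) / width, height: 1, depth: 1),
    threadsPerThreadgroup: MTLSize(width: width, height: 1, depth: 1))
encoder.endEncoding()

commands.commit()
commands.waitUntilCompleted()
```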

Mobile SoC GPU Comparison

| | PowerVR SGX 543MP2 | PowerVR SGX 543MP3 | PowerVR SGX 543MP4 | PowerVR SGX 554MP4 | PowerVR G6430 | PowerVR GX6450 |
|---|---|---|---|---|---|---|
| Used In | iPad 2/iPhone 4S | iPhone 5 | iPad 3 | iPad 4 | iPad Air/iPhone 5s | iPhone 6/iPhone 6 Plus |
| SIMD Name | USSE2 | USSE2 | USSE2 | USSE2 | USC | USC |
| # of SIMDs | 8 | 12 | 16 | 32 | 4 | 4 |
| MADs per SIMD | 4 | 4 | 4 | 4 | 32 | 32 |
| Total MADs | 32 | 48 | 64 | 128 | 128 | 128 |
| GFLOPS @ 300MHz | 19.2 | 28.8 | 38.4 | 76.8 | 76.8 | 76.8 |
| Pixels/Clock | N/A | N/A | N/A | N/A | 8 | 8 |
| Texels/Clock | N/A | N/A | N/A | N/A | 8 | 8 |

The one wildcard when talking about performance here is going to be clock speeds. Apple doesn’t expose these and they aren’t easy to test for (yet), though in the long term Metal offers some interesting possibilities for nailing that down, or at least getting a better idea of relative clock speeds.
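
One such possibility, sketched here under heavy assumptions: time an ALU-bound compute kernel (call it “madLoop”, a hypothetical shader in which each thread executes a long chain of FP32 MADs) and divide the achieved throughput by the architecture’s 128 MADs per clock. Dispatch overhead and compiler optimizations make this a rough lower bound at best:

```swift
import QuartzCore  // CACurrentMediaTime

let threads = 1 << 20
let madsPerThread = 4096.0

let start = CACurrentMediaTime()
// ...encode and dispatch the hypothetical "madLoop" kernel as in the
// earlier compute sketch, then block until the GPU finishes:
// commands.waitUntilCompleted()
let seconds = CACurrentMediaTime() - start

let achievedMADsPerSec = Double(threads) * madsPerThread / seconds
let estimatedClockHz = achievedMADsPerSec / 128.0  // GX6450: 128 MADs/clock
```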

In any case, we’ll take an in-depth look at our GPU benchmarks in a bit, but overall GPU performance compared to A7 and its G6430 is consistently better; the exact gain depends on the test at hand. Some tests come very close to the full 50% while others show just 15-20%. The deciding factor generally seems to be whether a test is ALU-bound: because the USC has not changed significantly from G6430 to GX6450 outside of the additional FP16 ALUs, tests that primarily hit the FP32 ALUs show the smallest improvements, while more balanced tests (or at least tests more defined by pixel fillrate) can show greater gains. On the whole we should be looking at a 30-35% performance improvement.

Why Four Cores?

One thing that admittedly surprised us about the revelation that A8 uses a 4-core PowerVR design is that we had figured a 6-core design would be a shoo-in for A8, especially since Apple was on the receiving end of the density improvements from TSMC’s 20nm process. But upon further reflection, an additional two cores is likely more than Apple needed or wanted.

The biggest factor here is that coming from G6430 in the A7, performance has seen a solid improvement despite sticking with four GPU cores. Between the architectural gains of Series6XT and whatever clock speed increases Apple has dialed in, A8 has quite a bit more GPU performance to play with. The increased resolution of the iPhone 6’s screen in turn requires more performance if Apple wants to keep native resolution performance from significantly regressing, and that is performance GX6450 is capable of delivering. Never mind the fact that G6430 also drove the iPad Air and its much larger 2048x1536 pixel display.
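
To put rough numbers on that resolution increase (the 6 Plus figure being its internal 2208x1242 render target rather than its 1920x1080 panel):

```latex
\text{iPhone 5s: } 1136 \times 640 \approx 0.73\,\text{MP} \qquad
\text{iPhone 6: } 1334 \times 750 \approx 1.00\,\text{MP} \;(+38\%) \qquad
\text{iPhone 6 Plus: } 2208 \times 1242 \approx 2.74\,\text{MP} \;(\approx 3.8\times)
```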

PowerVR Series6/6XT "Rogue"

| GPU | # of Clusters | FP32 Ops per Cluster | Total FP32 Ops | Optimization |
|---|---|---|---|---|
| G6200 | 2 | 64 | 128 | Area |
| G6230 | 2 | 64 | 128 | Performance |
| GX6240 | 2 | 64 | 128 | Area |
| GX6250 | 2 | 64 | 128 | Performance |
| G6400 | 4 | 64 | 256 | Area |
| G6430 | 4 | 64 | 256 | Performance |
| GX6450 | 4 | 64 | 256 | Performance |
| G6630 | 6 | 64 | 384 | Performance |
| GX6650 | 6 | 64 | 384 | Performance |

These performance improvements in Series6XT come at a cost as well, and that cost is suitably reflected in the estimated die sizes for each GPU. The G6430 was 22.1mm² on the 28nm A7, while the GX6450 is 19.1mm² on A8. Though GX6450 is smaller overall, it’s nowhere near the roughly 11.1mm² that a pure, perfect die shrink of G6430 would occupy. Limited area scaling aside, GX6450’s additional functionality and additional performance require more transistors, and at the end of the day Apple doesn’t end up with a significantly smaller GPU because of this. In other words, the upgrade from G6430 to GX6450 has delivered much of the performance (and consumed much of the die space) we initially expected to be allocated to a 6-core GPU.
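
For reference, the ~11.1mm² figure follows from assuming the full ~2x density improvement advertised for the 28nm-to-20nm transition; pure geometric scaling gives a similar answer:

```latex
\frac{22.1\,\text{mm}^2}{2} \approx 11.1\,\text{mm}^2,
\qquad
22.1\,\text{mm}^2 \times \left(\frac{20}{28}\right)^2 \approx 11.3\,\text{mm}^2
```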

Overall, the choice of GX6450 comes down to picking the best GPU for a phone, a role in which G6430 already proved effective in A7. As a step below Imagination’s 6-core PowerVR designs, GX6450 delivers a better balance between performance and power than a larger GPU would, which in turn is a clear benefit to Apple. On the other hand, this means A8 is not going to have the GPU performance to compete with the fastest SoCs designed specifically for tablets, though what this could mean for the obligatory iPad update remains to be seen.

Comments

  • Samus - Tuesday, September 30, 2014 - link

    Too many people define "good LCD" as having high resolution. In the real world, that just isn't true. There are so many terrible 1920x1080 panels on the market, and I'm not just talking mobile devices.

    Look at desktop LCDs. To get one properly calibrated you need to spend $500+ for a 24" HP DreamColor or NEC MultiSync PA SpectraView. Some Dells with Samsung LCDs are pretty good out of the box, but almost nothing is 100% sRGB without expensive calibration.

    So back to Apple. They're all about balance. They don't push the envelope in any direction like Samsung (and others). What bothers me lately about Apple is that they are so safe with their design language and technology that their products are actually becoming boring. As the company that pioneered mainstream unibody mobile devices and multitouch/gesture driven interfaces, it's interesting that the competition has essentially been perfecting it for them to just copy back.

    At least Apple isn't suing everybody anymore... seems like Steve's thermonuclear crusade is finally dying along with him.
  • Omega215D - Tuesday, September 30, 2014 - link

    I do have a problem with people constantly trying to push AMOLED as the ultimate display tech due to it being featured on DisplayMate with the Galaxy S5. People forget that you lose the efficiency of OLED once you start displaying a lot of white, as in web browsing. Also, turning down the brightness still has a tendency to give the display a "soggy" look, though this is much less pronounced on the Galaxy S5.

    The burn-in effect can be easily avoided unless the clock/dock locks up for whatever reason and leaves the display static.
  • darkich - Tuesday, September 30, 2014 - link

    The burn-in can be easily prevented as well.
    Download an utterly simple burn-in tool app and leave it running for a couple of hours every few weeks or so.
  • nevertell - Wednesday, October 1, 2014 - link

    This is exactly the kind of thing that people DO NOT WANT TO DO.
    It's a phone, it must just work, I shouldn't worry about any specific technical part of the phone, let alone do maintenance work on it to keep it functional.
  • elajt_1 - Friday, October 3, 2014 - link

    Nah, that's for bad image retention. If you've already got burn-in, you can't do anything about it as of now.
  • Toss3 - Wednesday, October 8, 2014 - link

    In my experience AMOLED displays look a lot better than their LCD counterparts with the brightness turned down. They also give you the option of even lower screen brightness, which is ideal if you want to use the phone in bed at night (night mode on Android looks great on the S4, but not so much on my Nexus 5).
  • hoelee - Thursday, December 4, 2014 - link

    The iPhone's white balance isn't accurate, so how have you proven your point that the iPhone has a good display? At least that's what I saw with my own eyes comparing my Z3 to the so-called iPhone... Besides, do the math yourself: the iPhone 6, the latest phone, has far fewer pixels to push compared to an Android FHD screen. Benchmarks always seem unfair from my point of view...
  • braveit - Friday, December 12, 2014 - link

    Apple hasn't really done much with its new iPhone 6 release. Source: http://berichinfo.com/reasons-iphone-6-sucks/
  • Caliko - Tuesday, October 6, 2015 - link

    Who else is innovating in design?

    All phones today are iKnockoffs.

    Steve's thermonuclear attack has just started, buddy!! Droid is making manufacturers lose billions. Can't wait to see who goes bankrupt first!!
  • akdj - Thursday, October 2, 2014 - link

    The 'little SD expansion slot' that reads and writes at about 6-8MB/second vs. the onboard NAND speeds tested within the review should be enough to tell you why Samsung and ....hmmm ....are offering the micro SD slot. Android is moving 'away' from accessibility to and from 'off board' storage. I've got a Note 3, and with each update I've been able to do less and less.
    Apparently you didn't bother to read the review ...at all. You also mention 'good LCD ...at least as good as the competitors.'
    Then you quote DisplayMate. The S5. And the single point difference between it and its LCD runner-up. From Apple. If you'd bothered to read this review ...or the S5 review on Anand's site when it dropped, you'd know A) how impressed the author was with the evolution of AMOLED, its continued refinement, accuracy, brightness and viewing angles. B) you'd notice that every measurement in the review is an 'objective measurement', and the 6 tops the display characteristics in each area that matters to someone who knows what a 'great display' entails. At 400+ PPI, neither you nor any of your friends will distinguish the difference between it and the new Note 4 or any other 2560 display. It's silly, it's a waste of energy, and it's detrimental to the fluency of the UI (with today's SoC GPUs and that many pixels) and the ever-important battery life.
    To be fair, NO OTHER vendor offers 128GB, & 64GB is extremely rare. I waited six weeks for a Note 3 w/64GB to show up at AT&T. Never happened, and I settled for 32GB. At the same price as the 64GB iPhone 6.
    And from what I've read, the 2014 model will have the same: 32GB, with a caveat. Buy a micro SD card, hope Google allows support in the future, and keep your fingers crossed. 1080p on my Note chews up space; 4K is quadruple that. And it will NOT shoot to even the fastest SD or microSD card on the market.
    As an ambidextrous owner and user since their inception, I enjoy both Android and iOS. But silly comments like yours show your ignorance. It was IN THE review! The actual testing (damn near identical to the methods DisplayMate's team uses) is the 'proof' that its 'LCD ....(IS) @ least as good as its competitors.'
