As always, our good friends over at Kishonti managed to get the first GPU performance results for the new 4th generation iPad. Although the new iPad retains its 2048 x 1536 "Retina" display, Apple claims a 2x improvement in GPU performance thanks to the A6X SoC. The previous generation chip, the A5X, paired two ARM Cortex A9 cores running at 1GHz with four PowerVR SGX 543 cores running at 250MHz. The SoC integrated four 32-bit LPDDR2 memory controllers, giving the A5X the widest memory interface of any shipping mobile SoC at the time of launch.
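As a rough sanity check on that memory interface claim, peak bandwidth is just bus width times effective data rate. A minimal sketch, assuming an LPDDR2-800 data rate (my assumption for illustration, not an Apple-published spec):

```python
def peak_bandwidth_gb_s(bus_width_bits, data_rate_mt_s):
    """Peak theoretical memory bandwidth in GB/s."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * data_rate_mt_s / 1000

# Assumed config: 4 x 32-bit LPDDR2 channels at 800 MT/s.
print(peak_bandwidth_gb_s(128, 800))  # -> 12.8 GB/s
```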

The A6X retains the 128-bit wide memory interface of the A5X (and, as on the A5X, the memory controller interface sits adjacent to the GPU cores rather than the CPU cores as in the A5/A6). It also integrates two of Apple's new Swift cores running at up to 1.4GHz, a slight increase from the 1.3GHz cores in the iPhone 5's A6. The big news today is what happens on the GPU side. A quick look at the GLBenchmark results for the new iPad 4 tells us all we need to know: the A6X moves to a newer GPU core, the PowerVR SGX 554.

Mobile SoC GPU Comparison
| | PowerVR SGX 543 | PowerVR SGX 543MP2 | PowerVR SGX 543MP3 | PowerVR SGX 543MP4 | PowerVR SGX 554 | PowerVR SGX 554MP2 | PowerVR SGX 554MP4 |
|---|---|---|---|---|---|---|---|
| Used In | - | iPad 2 | iPhone 5 | iPad 3 | - | - | iPad 4 |
| SIMD Name | USSE2 | USSE2 | USSE2 | USSE2 | USSE2 | USSE2 | USSE2 |
| # of SIMDs | 4 | 8 | 12 | 16 | 8 | 16 | 32 |
| MADs per SIMD | 4 | 4 | 4 | 4 | 4 | 4 | 4 |
| Total MADs | 16 | 32 | 48 | 64 | 32 | 64 | 128 |
| GFLOPS @ 300MHz | 9.6 | 19.2 | 28.8 | 38.4 | 19.2 | 38.4 | 76.8 |

As always, Imagination doesn't provide a ton of public information about the 554, but based on what I've seen internally it looks like the main difference between it and the 543 is a doubling of the ALU count per core (8 Vec4 ALUs per core vs. 4). Chipworks' analysis of the GPU cores supports this: "Each GPU core is sub-divided into 9 sub-cores (2 sets of 4 identical sub-cores plus a central core)."

I believe what we're looking at is the 8 Vec4 SIMDs (each one capable of executing 8+1 FLOPs per clock). The 9th "core" is simply the rest of the GPU, including the tiler front end and render backends. Based on the die shot and Apple's performance claims, it looks like there are four PowerVR SGX 554 cores on die, resulting in peak theoretical performance north of 70 GFLOPS (76.8 GFLOPS at the table's 300MHz reference clock).
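The GFLOPS figures in the table above (and the 554MP4's peak estimate) fall out of a simple formula: SIMDs x MADs per SIMD x 2 FLOPs per MAD x clock. A quick sketch of that arithmetic; the 300MHz clock is the table's reference point, not a confirmed shipping clock:

```python
def peak_gflops(num_simds, mads_per_simd, clock_ghz):
    """Peak FP throughput; each MAD counts as 2 FLOPs (multiply + add)."""
    return num_simds * mads_per_simd * 2 * clock_ghz

print(peak_gflops(16, 4, 0.300))  # SGX 543MP4 (iPad 3) -> 38.4 GFLOPS
print(peak_gflops(32, 4, 0.300))  # SGX 554MP4 (iPad 4) -> 76.8 GFLOPS
```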

There's no increase in TMU or ROP count per core; the main change between the 554 and 543 is the addition of more ALUs. There are some other low level tweaks that help explain the different core layout from previous designs, but nothing major.

With that out of the way, let's get to the early performance results. We'll start with low level fill rate and triangle throughput numbers:

GLBenchmark 2.5 - Fill Test

Fill rate goes up by around 15% compared to the 3rd generation iPad, which isn't enough to indicate a big increase in the number of texture units on the 554MP4 vs. the 543MP4. What we're likely seeing instead is the benefit of higher clocked GPU cores rather than additional texture units. If that's the case, the 554MP4 changes the texture to ALU ratio from what it was in the PowerVR SGX 543 (Update: this is confirmed). The data here points to a GPU clock at least 15% higher than the ~250MHz in the 3rd generation iPad.
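If the per-core TMU count really is unchanged, fill rate scales roughly linearly with GPU clock, so the fill rate gain implies a clock in the neighborhood of 250MHz x 1.15. A back-of-the-envelope sketch; the ~15% gain is read off the chart above, and the linear-scaling assumption is mine:

```python
def implied_gpu_clock_mhz(old_clock_mhz, fill_rate_gain):
    """Estimate the new GPU clock, assuming fill rate scales with clock alone."""
    return old_clock_mhz * (1 + fill_rate_gain)

print(implied_gpu_clock_mhz(250, 0.15))  # -> 287.5, i.e. a ~290MHz ballpark
```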

GLBenchmark 2.5 - Fill Test (Offscreen 1080p)

GLBenchmark 2.5 - Triangle Texture Test

Triangle throughput goes up by a hefty 65%, a huge gain over the previous generation iPad.

GLBenchmark 2.5 - Triangle Texture Test (Offscreen 1080p)

GLBenchmark 2.5 - Triangle Texture Test - Fragment Lit

The fragment lit triangle test starts showing us close to a doubling of performance at the iPad's native resolution.

GLBenchmark 2.5 - Triangle Texture Test - Fragment Lit (Offscreen 1080p)

GLBenchmark 2.5 - Triangle Texture Test - Vertex Lit

GLBenchmark 2.5 - Triangle Texture Test - Vertex Lit (Offscreen 1080p)

GLBenchmark 2.5 - Egypt HD

Throw in a more ALU heavy workload and we really start to see the advantage of the new GPU: almost double the performance in Egypt HD at 2048 x 1536. We also get performance well above 30 fps on the iPad at its native resolution for the first time.

GLBenchmark 2.5 - Egypt HD (Offscreen 1080p)

Normalize to the same resolution and we see that the new PowerVR graphics setup is 57% faster than even ARM's Mali-T604 in the Nexus 10. Once again we're seeing just about 2x the performance of the previous generation iPad.
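The normalization matters because the iPad pushes roughly 52% more pixels at its native resolution than the 1080p offscreen target. A quick illustration of pixel throughput; the frame rates below are hypothetical round numbers, not measured scores:

```python
def mpixels_per_s(fps, width, height):
    """Pixel throughput in megapixels per second at a given render resolution."""
    return fps * width * height / 1e6

print(2048 * 1536 / (1920 * 1080))    # -> ~1.52x the pixels of 1080p
print(mpixels_per_s(30, 2048, 1536))  # hypothetical: 30 fps native -> ~94 MP/s
print(mpixels_per_s(45, 1920, 1080))  # hypothetical: 45 fps @ 1080p -> ~93 MP/s
```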

GLBenchmark 2.5 - Egypt Classic
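To be clear about what "vsync bound" means here: onscreen results are clamped to the display's refresh rate (60Hz on the iPad), so a GPU capable of rendering faster still reports at most ~60 fps. A trivial sketch of the cap:

```python
def onscreen_fps(raw_fps, refresh_hz=60):
    """Vsync clamps the reported onscreen frame rate to the panel refresh rate."""
    return min(raw_fps, refresh_hz)

print(onscreen_fps(85))  # a GPU capable of 85 fps still reports 60 onscreen
```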

Vsync bound gaming performance obviously won't improve, but the offscreen classic test gives us an idea of how well the new SoC can handle lighter workloads:

GLBenchmark 2.5 - Egypt Classic (Offscreen 1080p)

For less compute bound workloads the new iPad still boasts a 53% performance boost over the previous generation.

Ultimately it looks like the A6X is the SoC that the iPad needed to really deliver good gaming performance at its native resolution. I would not be surprised to see more game developers default to 2048 x 1536 on the new iPad rather than picking a lower resolution and enabling anti-aliasing. The bar has been set for this generation and we've seen what ARM's latest GPU can do, now the question is whether or not NVIDIA will finally be able to challenge Imagination Technologies when it releases Wayne/Tegra 4 next year.

Comments

  • Urizane - Wednesday, November 7, 2012

    It's worth noting that PowerVR does not manufacture GPUs, they license IP to chip makers. Scarcity of virtual goods only serves to hurt PowerVR.
  • KoolAidMan1 - Friday, November 2, 2012

    You get what you pay for.

    Most tablets cut corners with the SoCs they use. A good GPU costs money. Apple manages both high performance and better profit margins at similar prices to the competition by making many times more of each individual tablet and phone model than other companies.

    If Android and WinRT devices were to use comparable GPUs you'd either see something even more expensive than an iPad, or they'd make less profit per device sold.
  • augiem - Friday, November 2, 2012

    Finally some Nexus 10 benchmarks. Mali T604 performance is utterly abysmal. I'm shocked, yet I'm not. Par for the course. And it's got to push more pixels than the iPad with a squirrel under the hood.

    All Android and Windows phone and tablet makers are absolutely negligent, not only for letting Apple's massive graphics lead go on for a 6th year, but for allowing the gap to become even wider than it's ever been.

    Nobody but Imagination knows squat about making a GPU. It's really frustrating as a hardware enthusiast to see this year after year.
  • WaltFrench - Friday, November 2, 2012

    Let's not overstate the case: nVidia has been a leader in graphics chips since its founding in 1993.

    But the economics are quite different here: aside from Samsung, virtually no Android OEM is able to charge a high enough price to justify putting in advanced GPUs, plus the extra battery, memory interface, etc., that high-performance graphics needs.

    Is this likely to change? Certainly, hard-core gamers and tech enthusiasts care. But somebody who just wants a tablet for casual browsing and email will be happy to have a lower-cost device instead. And Google's incentive is NOT to get lots of very happy gamers, but rather a huge number of eyeballs for its ads. If the equation gets so lopsided that sales suffer, then they'll aim for more fps.

    Or just wait for Moore's Law to catch up with high-performance game standards. As these tests show, iPads are now within spitting distance of “more than good enough,” the point at which you do NOT throw more money at a function.
  • augiem - Friday, November 2, 2012

    Yes that's true. For general purpose needs, you don't HAVE to have the fastest GPU on the market. But GPUs aren't only used for games. When you're talking about ~4 megapixel screens, the GPU is going to come into play throughout the entire usage experience. I believe the original purpose of Apple's focus on GPU power was not gaming so much as its push toward higher desktop resolutions. Ultimately, users will care if the usage experience is marred by stuttering. It's not a great formula for Google to put a higher res screen than the iPad's on a device and then skimp on the engine powering that display. And the average user does play games. There's a reason Nintendo's portable division, which was their bread and butter for the last 20 years, is seriously suffering, showing something like a 2/3 loss in revenue this year vs. last.

    Waiting for Moore's law to catch up doesn't do anything for the present. Apple pretty much has a monopoly on mobile graphics performance.
  • CeriseCogburn - Tuesday, November 6, 2012

    Isn't it also true that in these devices the CPU has a broad effect on screen stuttering? And in the case of the CPUs present, Apple has been losing.

    So you have the Apple A5X vs. the NVIDIA Tegra 3, and despite a huge GLBenchmark win for Apple, actual usage and games show the Tegra 3 equivalent or winning, thanks to the strong quad-core NVIDIA CPUs contained within.

    So it's not just the GPU in these devices that dictates the screen experience.
  • augiem - Friday, November 2, 2012

    >> iPads are now within spitting distance of “more than good enough,”

    I don't think that will ever be the case. People have been buying hardware on performance for 30+ years now and it never stops. People always want more. Look at CG in movies. By around 1999, CG was good enough to look really good if they spent enough money on it. Fast forward 13 years and everyone's still obsessed with it. Every blockbuster that comes out has more advanced CG than the last. And if it's not more advanced, it's just plain got more of it. It's only human nature to chase complexity. You can see it just by looking at the evolution of society. I don't think there really is such a thing as "good enough." We'll always get used to it and want something better.
  • ssiu - Friday, November 2, 2012

    At first I was excited to hear about this, as "new tech == better", right?

    But then I realized: in the beginning, when everyone thought "no GPU change", it implied the A6X GPU somehow managed to run at twice the frequency of the A5X's (not sure how that would be accomplished without killing battery life) to achieve "up to twice the GPU performance", but that would imply close to 2x performance on **almost every GPU operation**.

    Now with this new GPU core, it truly means **up to** twice the GPU performance, i.e. from close to 2x on some operations ("ALU heavy workloads") down to no improvement on others (or x% if there is an x% GPU frequency increase, for some small x).
  • ssiu - Friday, November 2, 2012

    P.S. Don't mean to bad-mouth the A6X GPU; yes, it is still king-of-the-hill (even the A5X is great in GPU terms). Just feeling a bit anti-climactic, as my subject says.
  • MykeM - Saturday, November 3, 2012

    The A5X is decent, but unlike the A6 or the A6X it still uses the older 45nm process. Apple even moved the iPad 2's A5 to the smaller and more efficient die (I believe the A5 used in the still-in-production iPhone 4S remains 45nm). The move to 32nm should explain why the iPad 4 retains pretty much the same if not better battery life despite clocking higher. I've never actually owned a past iPad, but with the 4 I'm rather surprised at how cool (as in temperature, not factor) the whole unit feels. I hear the iPad 3 runs rather warm.
