GPU Architecture

Dating back to the original iPhone, Apple has relied on GPU IP from Imagination Technologies. In recent years, the iPhone and iPad lines have pushed the limits of Img’s technology - integrating larger and higher-performing GPUs than any other Img partner. For whatever reason, Apple clearly attempted to obscure the A7’s underlying GPU architecture this time around.

Starting about a year ago I got a lot of tips saying that Apple would be integrating Imagination Technologies’ PowerVR Series 6 GPU this generation, but I needed more proof.

The first indication that this isn’t simply a Series 5XT part is the listed support for OpenGL ES 3.0. The only GPUs presently shipping with ES 3.0 support are Qualcomm’s Adreno 3xx (which is only integrated into Qualcomm silicon), ARM’s Mali-T6xx series and PowerVR Series 6. NVIDIA’s Tegra 4 GPU doesn’t support ES 3.0, and it’s too early for Logan/mobile Kepler. With Qualcomm out of the running, that leaves Mali and PowerVR Series 6.
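For the curious, confirming ES 3.0 support on a device yourself is straightforward. A minimal sketch (my code, not Apple's): the version string an ES context reports is defined to begin with "OpenGL ES <major>.<minor>", so parsing it works under both ES 2.0 and ES 3.0 contexts.

```c
#include <stdio.h>
#include <OpenGLES/ES3/gl.h>   /* iOS header; use <GLES3/gl3.h> on other platforms */

/* Returns the OpenGL ES major version of the current context (0 on failure).
 * Assumes a GL context is already current on this thread. */
static int es_major_version(void)
{
    const char *ver = (const char *)glGetString(GL_VERSION);
    int major = 0, minor = 0;
    if (ver && sscanf(ver, "OpenGL ES %d.%d", &major, &minor) == 2)
        return major;
    return 0;
}
```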

“All GPUs used in iOS devices use tile-based deferred rendering (TBDR).”

Apple’s developer documentation lists all of its SoCs as supporting Tile Based Deferred Rendering (TBDR). If you ask Imagination, they will tell you that they are the only ones with a true TBDR implementation. However, if you look at ARM’s Mali-T6xx documentation, ARM also claims its GPU is a TBDR.

The real hint comes with anti-aliasing support:

The last line in the screenshot above reads MAX_SAMPLES = 8. That’s a reference to 8x MSAA, a mode that isn’t supported by ARM’s Mali-T6xx hardware - only by PowerVR Series 6 (Mali-T6xx supports 4x and 16x AA modes).
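The query behind that figure is a single glGetIntegerv against GL_MAX_SAMPLES under an ES 3.0 context. A minimal sketch (my code, not Apple's documentation):

```c
#include <OpenGLES/ES3/gl.h>

/* Maximum MSAA sample count supported for multisampled renderbuffers.
 * Requires a current ES 3.0 context; Apple's docs list 8 for the A7. */
static GLint max_msaa_samples(void)
{
    GLint max_samples = 0;
    glGetIntegerv(GL_MAX_SAMPLES, &max_samples);
    return max_samples;
}
```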

There are some other hints here that Apple is talking about PowerVR Series 6 when it references the A7’s GPU:

“The A7 GPU processes all floating-point calculations using a scalar processor, even when those values are declared in a vector. Proper use of write masks and careful definitions of your calculations can improve the performance of your shaders. For more information, see “Perform Vector Calculations Lazily” in OpenGL ES Programming Guide for iOS.

“Medium- and low-precision floating-point shader values are computed identically, as 16-bit floating point values. This is a change from the PowerVR SGX hardware, which used 10-bit fixed-point format for low-precision values. If your shaders use low-precision floating point variables and you also support the PowerVR SGX hardware, you must test your shaders on both GPUs.”

As you’ll see below, both of the highlighted statements apply directly to PowerVR Series 6. With Series 6, Imagination moved to a scalar architecture, and ImgTec’s developer documentation confirms that the lowest precision mode supported is FP16.
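To make the two hints concrete, here’s a small illustration (the shader and its values are hypothetical - my sketch, written as ES 3.0 shader source embedded in a C string). On a scalar GPU like Rogue, masking a write to .xy issues two scalar operations instead of four; and because lowp and mediump are both computed as FP16, the two qualifiers below compile to identical math on the A7, unlike on SGX where lowp was 10-bit fixed point.

```c
static const char *frag_src =
    "#version 300 es\n"
    "precision mediump float;\n"
    "in vec4 v_color;\n"
    "out vec4 frag_color;\n"
    "void main() {\n"
    "    vec4 c = v_color;\n"
    "    c.xy = c.xy * 0.5;       // write mask: 2 scalar ops on Rogue, not 4\n"
    "    lowp    vec3 a = c.rgb;  // FP16 on Rogue; 10-bit fixed point on SGX\n"
    "    mediump vec3 b = c.rgb;  // FP16 on Rogue -- identical to lowp here\n"
    "    frag_color = vec4(a * b, c.a);\n"
    "}\n";
```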

All of this leads me to confirm what I heard would be the case a while ago: Apple’s A7 is the first shipping mobile silicon to integrate ImgTec’s PowerVR Series 6 GPU.

Now let’s talk about hardware.

The A7’s GPU Configuration: PowerVR G6430

Previously known by the codename Rogue, Series 6 has been announced in the following configurations:

PowerVR Series 6 "Rogue"

| GPU | # of Clusters | FP32 Ops per Cluster per Clock | Total FP32 Ops per Clock | Optimization |
|-------|---------------|--------------------------------|--------------------------|--------------|
| G6100 | 1 | 64 | 64 | Area |
| G6200 | 2 | 64 | 128 | Area |
| G6230 | 2 | 64 | 128 | Performance |
| G6400 | 4 | 64 | 256 | Area |
| G6430 | 4 | 64 | 256 | Performance |
| G6630 | 6 | 64 | 384 | Performance |

Based on the delivered performance, as well as some other products coming down the pipeline, I believe Apple’s A7 features a variant of the PowerVR G6430 - a 4-cluster Rogue design optimized for performance (vs. area).

Rogue is a significant departure from the Series 5XT architecture used in the iPhone 5, iPad mini and iPad 4. The biggest change? A switch to a fully scalar architecture, similar to present-day AMD and NVIDIA GPUs.

Whereas with 5XT designs we talked about multiple cores, the default replication unit in Rogue is a “cluster”. Each core in 5XT replicated all hardware, while each cluster in Rogue only replicates the shader ALUs and texture hardware. Rogue is still a unified architecture, but the front end no longer scales 1:1 with shading hardware. In many ways this approach is a lot more sensible, as it is typically how you build larger GPUs.

In 5XT, each core featured a number of USSE2 pipelines. Each pipeline was capable of a Vec4 multiply+add plus one additional FP operation that could be dual-issued under the right circumstances. Img never detailed the latter, so I always counted FLOPS by looking at the number of Vec4 MADs. Counting each MAD as two FP operations, that’s 8 FLOPS per USSE2 pipe per clock. Each USSE2 pipe was a SIMD, executing one instruction across all 4 slots rather than some combination of instructions. If you had 3 MADs and something else, the USSE2 pipe would act as a Vec3 unit instead. The same goes for 1 or 2 MADs.

With Rogue the USSE2 pipe is gone, replaced by the Unified Shading Cluster (USC). Each USC is a 16-wide scalar SIMD, with each slot capable of up to 4 FP32 ops per clock. Doing the math, a single USC implementation can do a total of 64 FP32 ops per clock - the equivalent of a PowerVR SGX 543MP2. Efficiency obviously goes up with a scalar design, so realizable performance will likely be higher on Rogue than on 5XT.

The A7 is a four cluster design, so that’s four USCs, or a total of 256 FP32 ops per clock. At 200MHz that would give the A7 twice the peak theoretical performance of the GPU in the iPhone 5. And from what I’ve heard, the G6430 is clocked much higher than that.
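If you want to sanity check that math, here’s a quick sketch. Note the assumptions: the ~266MHz clock for the iPhone 5’s SGX 543MP3 is the commonly reported figure rather than an Apple-published number, and 200MHz is just the floor mentioned above, not the A7’s real (unpublished) GPU clock.

```c
#include <stdio.h>

int main(void)
{
    /* SGX 543MP3 (iPhone 5): 12 USSE2 pipes, each a Vec4 MAD = 8 FP32 ops/clock */
    int sgx543mp3_ops = 12 * 8;     /* 96 FP32 ops per clock  */
    /* G6430 (A7): 4 USCs x 16 scalar lanes x 2 MADs = 4 FP32 ops/lane/clock */
    int g6430_ops = 4 * 16 * 4;     /* 256 FP32 ops per clock */

    printf("SGX 543MP3 @ 266MHz: %.1f GFLOPS\n", sgx543mp3_ops * 266e6 / 1e9);
    printf("G6430      @ 200MHz: %.1f GFLOPS\n", g6430_ops * 200e6 / 1e9);
    /* ~25.5 vs 51.2 GFLOPS: the "2x the iPhone 5" floor quoted above */
    return 0;
}
```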

There’s more graphics horsepower under the hood of the iPhone 5s than there is in the iPad 4. While I don’t doubt the iPad 5 will once again widen that gap, keep in mind that the iPhone 5s has less than 1/4 the number of pixels of the iPad 4. If I were a betting man, I’d say that the A7 was designed not only to drive the 5s’ 1136 x 640 display, but also a higher-res panel in another device. Perhaps an iPad mini with Retina Display? Memory bandwidth requirements would remain an open question, but the A7 surely has enough compute power to get there. There's also the fact that Apple has a prior history of delivering an SoC that wasn’t perfect for its display (e.g. iPad 3).

GPU Performance

As I mentioned earlier, the iPhone 5s is the first Apple device (and the first consumer device, period) to ship with a PowerVR Series 6 GPU. The G6430 inside the A7 is a 4-cluster configuration, with each cluster featuring a 16-wide scalar SIMD array. Whereas the 5XT generation of hardware used a 4-wide vector architecture (1 pixel per clock, all 4 color components per SIMD), Series 6 moves to a scalar design (think 16 pixels per clock, one color component per pixel). Each pipeline is capable of two FP32 MADs per clock, for a total of 64 FP32 operations per clock, per cluster. With the A7's 4-cluster GPU, that works out to the same throughput per clock as the 4th generation iPad.

Imagination claims its new scalar architecture is not only more computationally dense, but also far more efficient. With the transition to scalar GPU architectures in the PC space we generally saw efficiency go up, so I'm inclined to believe Imagination's claims here.

Apple claims up to a 2x increase in GPU performance compared to the iPhone 5, but just looking at the raw numbers in the table below, there's far more shading power under the hood of the A7 than only "2x" the A6.

Mobile SoC GPU Comparison

| | PowerVR SGX 543 | PowerVR SGX 543MP2 | PowerVR SGX 543MP3 | PowerVR SGX 543MP4 | PowerVR SGX 554 | PowerVR SGX 554MP2 | PowerVR SGX 554MP4 | PowerVR G6430 |
|---|---|---|---|---|---|---|---|---|
| Used In | - | iPad 2/iPhone 4S | iPhone 5 | iPad 3 | - | - | iPad 4 | iPhone 5s |
| SIMD Name | USSE2 | USSE2 | USSE2 | USSE2 | USSE2 | USSE2 | USSE2 | USC |
| # of SIMDs | 4 | 8 | 12 | 16 | 8 | 16 | 32 | 4 |
| MADs per SIMD | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 32 |
| Total MADs | 16 | 32 | 48 | 64 | 32 | 64 | 128 | 128 |
| GFLOPS @ 300MHz | 9.6 GFLOPS | 19.2 GFLOPS | 28.8 GFLOPS | 38.4 GFLOPS | 19.2 GFLOPS | 38.4 GFLOPS | 76.8 GFLOPS | 76.8 GFLOPS |

GFXBench 2.7

As always, we'll start with GFXBench (formerly GLBenchmark) 2.7 to get a feel for the theoretical performance of the new GPU. GFXBench 2.7 tends to be more computationally bound than most games as it is frequently used by silicon vendors to stress hardware, not by game developers as an actual performance target. Keep that in mind as we get to some of the actual game simulation results.

GLBenchmark 2.7 - Fill Test (Offscreen 1080p)

Twice the fill rate of the iPhone 5, and clearly higher than anything else we've tested. Rogue is off to a good start.

GLBenchmark 2.7 - Triangle Throughput (Offscreen 1080p)

What's this? A performance regression? Remember what I said earlier in the description of Rogue: whereas 5XT replicated nearly the entire GPU for "multi-core" versions, multi-cluster versions of Rogue only replicate the shader array. The result? We don't see the same sort of peak triangle setup scaling we did back on multi-core 5XT parts. I don't expect this to be a big issue in actual games (if anything, it's a better balance between triangle setup/rasterization and shading hardware), but it's worth pointing out.

GLBenchmark 2.7 - Triangle Throughput, Fragment Lit (Offscreen 1080p)

This is the worst-case regression we've seen from 5XT to Rogue. It's clear that per-core triangle rates are much higher on Rogue, but against a many-core implementation of 5XT there's just no competing. I suspect this change is part of how Img was able to increase the overall density of Rogue vs. 5XT. Now the question is whether this regression will actually appear in games. To find out we turn to the two game simulation tests in GFXBench 2.7, starting with the most stressful one: T-Rex HD.

As always, the onscreen tests run at a device's native resolution with v-sync enabled, while the offscreen tests run at 1080p with v-sync disabled.
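For what it's worth, on the EGL-based platforms most of our Android comparison data comes from, disabling v-sync for an offscreen run comes down to a single call. A minimal sketch (not GFXBench's actual code):

```c
#include <EGL/egl.h>

/* A swap interval of 0 tells EGL not to wait for the vertical refresh,
 * letting the frame rate run past the display's 60Hz. */
void disable_vsync(EGLDisplay dpy)
{
    eglSwapInterval(dpy, 0);
}
```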

GLBenchmark 2.7 - T-Rex HD

As expected, the G6430 in the iPhone 5s is more than twice the speed of the part in the iPhone 5. It is also the first device we've tested capable of breaking the 30 fps barrier in T-Rex HD at its native resolution. Given just how ridiculously intense this test is, I think it's safe to say that the iPhone 5s will probably have the longest shelf life from a gaming perspective of any iPhone to date.

GLBenchmark 2.7 - T-Rex HD (Offscreen 1080p)

The offscreen test helps put the G6430's performance in perspective. Here we see the 5s barely falling behind Qualcomm's Adreno 330 (Snapdragon 800 MDP/T). There are obvious thermal differences between the two platforms, but if we look at the G2's performance (another S800/A330 part) we get a better apples-to-apples comparison. Looking at the leaked Nexus 5 (also S800/A330) T-Rex HD scores confirms what we're seeing above. In a phone, it looks like the G6430 is a bit quicker than Qualcomm's Adreno 330.

The Egypt HD tests are much lighter and far closer to the workload of many games on the App Store today, although admittedly the test is starting to show its age.

GLBenchmark 2.7 - Egypt HD

Onscreen we're at v-sync already, something the iPhone 5 wasn't capable of. The 5s should have no issues running most games at 30 fps.

GLBenchmark 2.7 - Egypt HD (Offscreen 1080p)

Offscreen, even at 1080p, performance doesn't really change. Qualcomm's Adreno 330 is definitely faster, at least in the MDP/T. In the G2, its performance lags behind the G6430. I really want to measure power on these things.

3DMark

Futuremark finally released an iOS version of 3DMark, enabling us to run the 5s through yet another test. As we've discovered in the past, 3DMark is far more of a CPU test than GFXBench. While CPU load ranges from 6% to 25% during GFXBench, we'll see usage greater than 50% in 3DMark - even during the graphics tests. 3DMark is also heavily threaded, with its physics test taking advantage of quad-core CPUs.

With the iOS release of the benchmark comes a new offscreen rendering mode called Unlimited. The benchmark is the same, but it renders offscreen at 720p with the display only being updated once every 100 frames to somewhat get around v-sync. Because the test is new we don't have a ton of comparison data, so I've included whatever we've got at this point.
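Mechanically, an "Unlimited"-style mode is simple: render every frame into a fixed 720p offscreen framebuffer and only present to the display occasionally, so v-sync can't gate the result. The sketch below shows the general technique under those assumptions; render_scene and present_to_screen are hypothetical helpers, and this is not Futuremark's actual code.

```c
#include <OpenGLES/ES3/gl.h>

extern void render_scene(void);                 /* hypothetical: draws one frame  */
extern void present_to_screen(GLuint src_fbo);  /* hypothetical: blit + swap      */

void run_unlimited(GLuint offscreen_fbo, int total_frames)
{
    for (int frame = 0; frame < total_frames; frame++) {
        glBindFramebuffer(GL_FRAMEBUFFER, offscreen_fbo);
        glViewport(0, 0, 1280, 720);            /* fixed 720p render target       */
        render_scene();
        glFlush();                              /* submit work without swapping   */
        if (frame % 100 == 99)                  /* display updated every 100th    */
            present_to_screen(offscreen_fbo);
    }
}
```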

3DMark Unlimited - Ice Storm

3DMark ends up being more of a CPU and memory bandwidth test than a raw shader performance test like GFXBench and Basemark X. The 5s falls behind the Snapdragon 800/Adreno 330 based G2 in overall performance. To find out how much of that is down to GPU performance and how much to the lack of four cores, let's look at the subtests.

3DMark Unlimited - Graphics

The graphics test is more GPU bound than CPU bound, and here we see the G6430 based iPhone 5s pull ahead. Note how well the Moto X does thanks to its very highly clocked CPU cores rather than its GPU. Although this is a graphics test, it's still heavily influenced by CPU performance.

3DMark Unlimited - Graphics Test 1

3DMark Unlimited - Graphics Test 2

3DMark Unlimited - Physics

The physics test hits all four cores in a quad-core chip, which explains the G2 pulling ahead in overall performance. Note that I saw no improvement in this largely CPU-bound test, leading me to believe that we've hit some sort of bug with 3DMark and the new Cyclone core.

3DMark Unlimited - Physics Test

Basemark X

Basemark X is a new addition to our mobile GPU benchmark suite. There are no low-level tests here, just some game simulation tests run at both onscreen (device resolution) and offscreen (1080p, no v-sync) settings. The scene complexity is far closer to that of GLBenchmark 2.7 than to the new 3DMark Ice Storm benchmark, so frame rates are pretty low.

Unfortunately I ran into a bug with Basemark X under iOS 7 on the iPhone 5/5c/5s that prevented the offscreen test from completing, leaving me with only onscreen results at native resolution.

Basemark X - On Screen

Once again we're seeing greater than 2x scaling comparing the iPhone 5s to the 5.

Comments

  • ddriver - Wednesday, September 18, 2013

    My basis for this conclusion is how the article is structured, the careful picking of benchmarks and the selective comparisons. This is clearly visible and has nothing to do with the actual chip specifications. It has nothing to do with execution mode specific details. So no, I don't have a problem with facts, unlike you.

    Furthermore, that 30% number you were focused on is hardly impressive, nor is it proportional to the claims this article is making. In a workload that would take an hour, 30% is a noticeable improvement, but for typical phone applications this is not the case.
  • Dooderoo - Wednesday, September 18, 2013

    The structure of the article and the benchmarks are mostly the same as those used in most reviews, excluding some Android specific benchmarks. Where exactly do you see "careful picking of benchmarks and selective comparisons"? Put differently: what benchmarks should they include to convince you there is no "cunning deceit" at work?

    What claims in the article are not proportional to the 30% (actually more) performance gain?

    I won't even comment on the "not a noticeable improvement" bit.
  • andrewaggb - Wednesday, September 18, 2013

    My issue with all the benchmarks is that they are mostly synthetic. The most meaningful benchmarks are the applications you plan to use and the usage patterns you are targeting. Synthetics are fascinating, but I think it's generally a mistake to buy anything based on them.
  • notddriver - Thursday, September 19, 2013

    Um, so if a 30% improvement is hardly impressive and irrelevant to phones, then isn't the entire concept of reviewing phones on the basis of hardware performance also irrelevant? That would make your complaints about the biased-yet-insignificant review as vital as a debate over whether Harry Potter or Spiderman would be better at defending Metropolis.

    Incidentally, my iPhone 5 is powerful enough that I never notice any issues - as I'm sure the last generation of Android phones would be. But if you're going to go to town over a dozen or more comments about a topic, at least pretend that it matters a tiny bit. Just good form.
  • oRdchaos - Wednesday, September 18, 2013

    I've seen people all over the web get very worked up about people's phrasing with regard to 64-bit. Would you prefer the title of the section were "Performance gains from a 64-bit architecture and the new ARMv8 instruction set"? People keep arguing that 64-bit in a vacuum doesn't give much performance gain. But there is no vacuum.

    I think the article is very clear to point out where gains are from additional instructions, versus a doubling of the register bit width, versus improved memory subsystem/cache. I'm sure when they get chances to write more of their own tests, they'll be able to pinpoint things further.
  • sfaerew - Wednesday, September 18, 2013

    Engadget's figure is multi-threaded Geekbench performance: Tegra 4's four cores vs. the A7's two.
  • Spoony - Wednesday, September 18, 2013

    - You are correct, there are no native cross-platform benches used. Which ones do you suggest Anandtech use? We all know Geekbench is essentially meaningless across platforms.

    - If you are talking about this engadget review: http://www.engadget.com/2013/09/17/iphone-5s-revie... It appears that Nvidia SHIELD (Tegra 4) led in only one benchmark out of six. This makes your statement incorrect. LG G2 is more competitive. Need we repeat how inaccurate Geekbench is cross-platform. It is as apples-to-oranges as the JS tests.

    - I believe what Anandtech was attempting to show with the encryption was the difference ARMv8 ISA makes. In fact the title of that somewhat sensational chart is "AArch64 vs. AArch32 Performance Comparison". So while you are right, the encryption tests are handled in a fundamentally different way, that way is part of the ISA and is an advantage of AArch64, and thus is valid in the chart.

    - It will be interesting to see whether Qualcomm can deliver A7-like performance using ARMv7 with extended features. My position is no, which is the whole point of that entire page of the review. ARMv8 is actually enabling some additional performance due to ISA efficiency and more features.

    - I think the claim of noticeable memory footprint bloat from a 64-bit executable is completely ridiculous. But to see if I was right, I did some testing. It's getting a bit hard to find fat binaries to take apart these days, most things are x86_64 only. But I found a few. I computed for three separate applications, took the average, and it looks like about a 9% increase in executable size. Considering that executables themselves are a tiny part of any application's assets, I think it is completely insubstantial. If you calculate the increase in executable size versus the size of the whole application package, it averages to a less than 1% increase.

    I too am a bit sad that Apple didn't increase the RAM, and also equally sad that connectivity was left out this rev. I continue to be sad that there is not a more serious storage controller inside the phone. You make some valid points, but I think you also make some erroneous ones. The question with phone SoCs is: Is this a well balanced platform along the axes of performance (GPU and CPU), power consumption (thus heat), and features. I believe that the A7 is well calibrated. Obviously Qualcomm is also doing great work, and perhaps their SoCs are equally as well calibrated.
  • ddriver - Wednesday, September 18, 2013

    - This is entirely his decision, considering writing those reviews is his job, not mine. He can either use actual native benchmarks which reflect the performance of the actual hardware, or call it JS VM performance instead of CPU performance, because different JS implementations across platforms are entirely meaningless.

    - there is only one geekbench test at engadget. That is why I said "geekbench" - I did not imply it was faster in all tests in the engadget review, and I don't know why you insinuate I did. IIRC snapdragon 800 is actually a little slower in the CPU department than tegra 4, and only faster in the graphics department.

    - the boost in encryption is completely disproportionate to the other improvements and is due to hardware implementations, not 64bit execution mode. So, if anything, it should be a graph or chart on its own, instead of being used to bulk up the chart that is supposed to be indicative of integer performance improvements in 64bit mode.

    - maybe v7 chips with 128bit SIMD units will not deliver quite the performance of the A7, because there is more to the subject than the width of the registers (the number of registers doesn't really matter that much), like the supported instructions. At any rate, v7 chips are still quad core, which means 4x128bit SIMD units compared to the 2 on the A7, albeit with a few extra supported instructions. Until a native benchmark that guarantees saturation of the SIMD units pops up, it would be foolish to make a concrete statement on the subject. But boosting v7 SIMD units to 128bit width will at least make them competitive in number crunching scenarios, which use SIMD 99% of the time.

    - this is very relative, you can store the same data in three different containers and get a completely different footprint - a vector will only use a single pointer, since it is contiguous in memory, a forward list will use a pointer for every data element, a linked list will use two. Depending on the requirements, you may need faster arbitrary inserts and deletions, which will require a linked list, and in the case of a single byte datatype, a single element will be 12 bytes in 32bit mode because of padding and alignment, while in 64bit mode it will grow to 24 bytes, which is exactly double. Granted, this is the other extreme of the "less than 1%" you came up with; the truth is results will vary in between depending on the workload.

    As I said in the first post, the wise thing would be to reserve judgement until mass availability, mostly because I know corporate practices involving exclusive reviews prior to availability, which are a pronounced determining factor to the initial rate of sales. In short, apple is in the position to be greatly rewarded for imposing some cheating requirements on early exclusive reviewers. And at least in this aspect I think everyone will disagree, apple is not the kind of company to let such an opportunity go to waste.
  • ddriver - Wednesday, September 18, 2013

    *no one will disagree
  • Dug - Wednesday, September 18, 2013

    I will.
    "As I said in the first post, the wise thing would be to reserve judgement until mass availability, mostly because I know corporate practices involving exclusive reviews prior to availability, which are a pronounced determining factor to the initial rate of sales. In short, apple is in the position to be greatly rewarded for imposing some cheating requirements on early exclusive reviewers. And at least in this aspect I think everyone will disagree, apple is not the kind of company to let such an opportunity go to waste."

    Prove it and stop making assumptions.
