GPU Performance Using Unreal Engine 3

In our iPad 2 review I called the PowerVR SGX 543MP2 Apple's gift to game developers. Apple boasted a roughly 9x improvement in raw GPU compute power in the move from the A4 to the A5, an increase that came from both more execution resources and a higher GPU clock. The A5 in the iPhone 4S gets the same GPU, simply clocked lower than in the iPad 2. Apple claims the iPhone 4S can deliver up to 7x the GPU performance of the iPhone 4, down from 9x in the iPad 2 vs. iPad 1 comparison. Why the delta?

The iPad 2 has both a larger battery and a higher resolution display. Its 1024x768 panel has 28% more pixels than the iPhone 4S' 960x640 display, and 9x vs. 7x also works out to roughly a 28% difference. The lower clocked GPU accompanies the lower clocked CPU in the 4S' version of the A5, both to keep power consumption in check and because the 4S, with its lower resolution display, simply doesn't need as much GPU performance as the iPad 2.
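The math is easy to sanity check - a quick sketch in Python using the two panels' well-known resolutions (1024x768 for the iPad 2, 960x640 for the iPhone 4S):

```python
# Compare pixel counts against Apple's claimed GPU speedups.
ipad2_pixels = 1024 * 768    # iPad 2 display
iphone4s_pixels = 960 * 640  # iPhone 4S Retina display

pixel_ratio = ipad2_pixels / iphone4s_pixels  # ~1.28
claim_ratio = 9 / 7                           # 9x vs. 7x, ~1.29

print(f"iPad 2 pushes {pixel_ratio:.2f}x the pixels of the iPhone 4S")
print(f"Apple's 9x vs. 7x claims differ by {claim_ratio:.2f}x")
```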

Mobile SoC GPU Comparison
                     SIMD Name  # of SIMDs  MADs per SIMD  Total MADs  GFLOPS @ 200MHz  GFLOPS @ 300MHz
Adreno 225           -          8           4              32          12.8             19.2
PowerVR SGX 540      USSE       4           2              8           3.2              4.8
PowerVR SGX 543      USSE2      4           4              16          6.4              9.6
PowerVR SGX 543MP2   USSE2      8           4              32          12.8             19.2
Mali-400 MP4         Core       4 + 1       4 / 2          18          7.2              10.8
GeForce ULP          Core       8           1              8           3.2              4.8
Kal-El GeForce       Core       12          ?              ?           ?                ?
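
The GFLOPS columns follow directly from the MAD counts: a multiply-add counts as two floating point operations, so peak throughput is simply total MADs x 2 x clock speed. A quick sketch reproducing the table's 200MHz figures:

```python
# Peak shader throughput: each MAD (multiply-add) unit retires 2 FLOPs per clock.
def peak_gflops(total_mads: int, clock_mhz: float) -> float:
    return total_mads * 2 * clock_mhz / 1000

for name, mads in [("Adreno 225", 32), ("PowerVR SGX 540", 8),
                   ("PowerVR SGX 543", 16), ("PowerVR SGX 543MP2", 32),
                   ("Mali-400 MP4", 18), ("GeForce ULP", 8)]:
    print(f"{name}: {peak_gflops(mads, 200):.1f} GFLOPS @ 200MHz")
```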

GLBenchmark continues to be our go-to guy for GPU performance under iOS. While there are other reputable 3D benchmarks, GLBench remains the only good cross-platform (iOS and Android) solution we have today.

The performance gains live up to Apple's claims (Update: our original 4S results for Egypt/Pro were incorrect. We had two sets of graphs, one internal and one external - the latter had incorrect data. We have since updated the charts to reflect the 4S' actual performance. Sorry for the mixup!):

GLBenchmark 2.1 - Egypt - Offscreen

GLBenchmark 2.1 - Pro - Offscreen

GLBenchmark gets around vsync by rendering offscreen, so the 4S is allowed to run as fast as it can. Here we see a 6.46x higher frame rate compared to the iPhone 4.

It's obvious that GLBenchmark is designed first and foremost to be bound by shader performance rather than memory bandwidth; otherwise all of these performance increases would be capped at 2x, since that's the improvement in memory bandwidth from the 4 to the 4S. Even though the offscreen test pushes 50% more pixels than the 4S' native resolution, performance clearly isn't limited by memory bandwidth here. That situation is hardly realistic, however: most games won't be shader bound, and should instead be limited more by memory bandwidth.

At the iPhone 4S introduction Epic was on stage showing off Infinity Blade 2, which will have new visual enhancements present only on the 4S thanks to its faster GPU. Thus far Epic has been using GPU performance improvements to make its games look better rather than run faster (although they do that too), since the target is playability on all platforms. What I wanted, however, was a true apples-to-apples comparison using Epic's engine, as it is arguably the best looking engine to develop iOS games on today.

Epic offers Unreal Engine 3 free of charge to anyone who wants it for non-commercial use. If you want to sell your UE3 based iOS game, you don't have to pay a large sum up front to license Epic's engine. Instead you toss Epic $99 and pay royalties (25%) on any revenue beyond the first $50K. It's a great deal for aspiring game developers: you get access to one of the best 3D engines around without needing any additional startup capital. If your game is a hit Epic gets a cut, but you're still making money so all is good in the world.
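To make the terms concrete, here's a minimal sketch of the royalty math using Epic's published numbers ($99 up front, 25% beyond the first $50K); the $500K revenue figure is purely hypothetical:

```python
# Epic's UDK commercial terms: $99 up front, 25% royalty on revenue past the first $50K.
UPFRONT_FEE = 99
ROYALTY_RATE = 0.25
ROYALTY_FREE_REVENUE = 50_000

def epic_cut(revenue: float) -> float:
    return UPFRONT_FEE + ROYALTY_RATE * max(0, revenue - ROYALTY_FREE_REVENUE)

revenue = 500_000  # hypothetical hit title
cut = epic_cut(revenue)
print(f"Epic's cut: ${cut:,.0f}, developer keeps ${revenue - cut:,.0f}")
# Epic's cut: $112,599, developer keeps $387,401
```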

The process starts with UDK, the Unreal Development Kit. Epic offers a great deal of documentation on developing with UDK, making the whole process extremely easy. The freely available UDK can target Windows, Mac OS X and iOS; if you want Android support you'll unfortunately have to pay to license the dev kit. Given how successful Infinity Blade has been under iOS, I suspect this is a move partially designed to keep Apple happy. It's also possible the Android UE3 dev kit is simply not as far along as the iOS version.

Along with every UDK download, Epic now provides the full source code to its well known iOS Citadel demo. With access to Citadel's source code and Epic's excellent (and freely available) development tools I put together a real-world GPU test for iOS.


What's that? A frame counter in iOS? Huzzah!

The test shows us frame rate over the course of a flythrough of Epic's Citadel demo. This is simply the standard Citadel guided tour but with UE3's frame recording statistics enabled. Once again, UDK gave me the tools needed to accurately profile what was going on. For developers this would be helpful in tuning the performance of their apps, but for me it delivered the one thing I'd been hoping for: average frame rate in a UE3 game under iOS.

The raw data looks like this, a graph of frame render times:


iPhone 4S frame time

You're looking at frame render time in ms, so lower numbers mean better performance. Notice how the iPhone 4S graph remains mostly flat for the majority of the benchmark run? That's because it's limited by vsync. At 60Hz the frame render time is capped at 16.7ms, which is approximately where the 4S' curve flattens out. The 4S could likely run through this demo even quicker (or maintain the same speed with a heavier graphical workload) if we had a way to disable vsync in iOS.
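Frame rate is just the reciprocal of frame time, which is where the 16.7ms ceiling comes from. A minimal sketch (the sample frame times are made up for illustration):

```python
# 60Hz vsync caps how often a frame can be presented: 1000ms / 60 = ~16.7ms.
VSYNC_CAP_MS = 1000 / 60

def fps_from_frame_time(frame_time_ms: float) -> float:
    return 1000 / frame_time_ms

# Hypothetical samples: a vsync-pinned frame vs. two slower ones.
for ms in (16.7, 33.4, 50.0):
    note = " (at the vsync ceiling)" if ms <= VSYNC_CAP_MS + 0.1 else ""
    print(f"{ms:.1f}ms/frame -> {fps_from_frame_time(ms):.1f} fps{note}")
```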


iPhone 4 frame time

On the iPhone 4 however, frame times are significantly higher - more than 2x on average. You also see significant spikes in frame time, indicating periods where the frame rate drops significantly. Not only does the 4S offer better average performance here but its performance is far more consistent, hugging vsync rather than wildly bouncing around.

The chart below summarizes the two graphs above by looking at the average frames rendered per second throughout the benchmark:

AnandTech UE3 Performance Test

The iPhone 4S averages 2.3x the frame rate of the iPhone 4 throughout our test. I believe this is a more realistic value than the 6x we saw in GLBenchmark. A major cause of the difference is the vsync limitation present in all iOS apps that render to the screen. On top of that, while we're obviously not completely limited by memory bandwidth, it's clear that memory bandwidth plays a larger role here than it does in GLBenchmark.
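For the curious, the right way to derive an average like this from a frame time log is to divide total frames by total elapsed time, rather than averaging each frame's instantaneous frame rate (which would overweight the fast, vsync-capped frames). A minimal sketch with hypothetical values:

```python
# Average fps = total frames / total elapsed time, not the mean of per-frame fps.
def average_fps(frame_times_ms: list[float]) -> float:
    return len(frame_times_ms) / (sum(frame_times_ms) / 1000)

# Hypothetical log: mostly vsync-capped frames plus a couple of slow spikes.
frame_times = [16.7] * 58 + [100.0, 120.0]
print(f"Average: {average_fps(frame_times):.1f} fps")  # ~50.5 fps
```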

The Citadel demo by default increases rendering quality on the iPhone 4, but a quick look at the game's configuration files didn't show any new features enabled for the 4S. Chances are the version of Citadel included with the UDK was built prior to the 4S being available. In other words, the 4 and 4S should be rendering the same workload in our benchmark. To confirm I also grabbed a couple of screenshots to ensure the two devices were running at the same settings:


iPhone 4


iPhone 4S

This is actually the most stressful scene in the level; it causes even the 4S to drop below 30 fps. With the camera stationary in roughly the same position, I saw a 74% increase in performance on the 4S vs. the iPhone 4.

Most game developers still target the iPhone 3GS, but the 4S allows them to significantly ramp up image quality without any performance penalty. Because of the lower hardware target for most iOS games and the forced vsync, I wouldn't expect 2x increases in frame rate for the 4S over the 4 in most games out today or in the near future. You can, however, expect a smoother frame rate and better looking games if developers follow Epic's lead and simply enable more eye candy on the 4S.

Comments

  • tipoo - Monday, October 31, 2011 - link

    Anyone know if there is a reason this hasn't made it into any Android phone yet? Does Google specify compatible GPUs, or is it cost, or development time, etc.? Looks like it slaughters even the Mali 400, which is probably the next fastest.
  • zorxd - Monday, October 31, 2011 - link

    The only reason is that no one has used it yet. The TI OMAP 4470 will use the 544, which is probably a little faster.
    The SGS2 uses the slower Mali 400, but it was released 6 months ago. Yet it's not that bad, even beating the 4S in GLBenchmark Pro.
  • zorxd - Monday, October 31, 2011 - link

    I meant no SoC vendor is using it.
  • djboxbaba - Monday, October 31, 2011 - link

    The numbers were incorrect and have been updated; the 4S is ~2x faster than the GS2 in GLBenchmark Pro.
  • freezer - Thursday, November 3, 2011 - link

    But not when running at the phone's native resolution. That's what people will see while running games on their phones.

    The iPhone 4S has many more pixels for the GPU to draw despite having a much smaller screen. Not very optimal for gaming, right?

    http://glbenchmark.com/result.jsp?benchmark=glpro2...
  • djboxbaba - Thursday, November 3, 2011 - link

    Correct, but we're comparing the GPUs by standardizing the resolution. Of course at native resolution this will change.
  • thunng8 - Monday, October 31, 2011 - link

    I don't see any GLBenchmark test where the Mali 400 beats the 4S???
  • freezer - Thursday, November 3, 2011 - link

    That's because the AnandTech review shows only the 720p offscreen results.

    This gives very different numbers compared to running GLBenchmark Pro at the phone's native resolution.

    The iPhone 4S has about 60% more pixels than the Galaxy S2, so its GPU has to draw many more pixels every frame.

    Go to glbenchmark.com and dig through the database yourself.
  • Ryan Smith - Monday, October 31, 2011 - link

    The 544 should be identical to the 543 at the same clock and core configuration. It's effectively a 543 variant with full D3D feature level 9_3 support. The primary purpose of the 544 will be to build Windows devices, whereas for non-Windows devices the 543 would suffice. We don't have access to PowerVR's pricing, but it likely costs more due to the need to license additional technologies (e.g. DXTC) to achieve full 9_3 support.
  • Penti - Tuesday, November 1, 2011 - link

    Who will use it to support Windows Phone, though? Qualcomm uses its own AMD/ATI based Adreno GPU. I guess it's TI's attempt at getting Microsoft to support Windows Phone on their SoC, in order to supply partners of theirs like Nokia. Or it might just be a later purchase/contract date for the other SoC vendors: they got the IP blocks later, but many did opt for the Mali-400, so why wouldn't they opt for its successor too? It seems to have worked out well. Samsung is just one of the vendors that usually used PowerVR. I guess ST-E will use it to support Windows Phone on the Nova A9540 SoC too, while Android vendors might still opt for the older A9500.

    It's interesting to see how Nvidia lags in this field, though.
