Battery Life

Brian did some excellent sleuthing and came across battery capacities for both the iPhone 5s and 5c in Apple’s FCC disclosures. The iPhone 5 had a 3.8V, 5.45Wh battery, while the 5s boosts total capacity to 5.96Wh (an increase of roughly 9.4%). The move to a 28nm process doesn’t come with all of the benefits of a full node shrink, and it’s likely not enough to completely offset the higher potential power draw of a much beefier SoC. Apple claims the same or better battery life on the 5s compared to the iPhone 5; in practice the answer is a bit more complicated.
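
For reference, here's a quick back-of-the-envelope conversion of those figures. The only assumption is that the 3.8V nominal voltage applies to both packs:

```python
# Quick sanity check on the FCC battery figures (3.8V nominal assumed for both packs).
iphone5_wh, iphone5s_wh, nominal_v = 5.45, 5.96, 3.8

increase = (iphone5s_wh - iphone5_wh) / iphone5_wh * 100
print(f"Capacity increase: {increase:.1f}%")                   # ~9.4%, the figure quoted above

# Converting Wh to the more familiar mAh rating: mAh = Wh / V * 1000
print(f"iPhone 5:  {iphone5_wh / nominal_v * 1000:.0f} mAh")   # ~1434 mAh
print(f"iPhone 5s: {iphone5s_wh / nominal_v * 1000:.0f} mAh")  # ~1568 mAh
```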

This is also the first time an s-SKU has come with any sort of process shrink, even a half node one. Both the iPhone 3GS and iPhone 4S stayed on the same process node as their predecessors and simply drove up performance. In the case of the 3GS, the performance gains outweighed their power cost, while in the case of the iPhone 4S we generally saw a regression in battery life.

The iPhone 5s gains some power savings from the move to 28nm, but reinvests much of that savings in increased performance. The SoC also delivers a wider dynamic range of performance than we’ve ever seen from an Apple device: there’s as much CPU power here as in the first 11-inch MacBook Air, and more GPU power than in an iPad 4.

To find out where the balance between power savings and additional performance lands, I turned to our current battery life test suite, which we first introduced with the iPhone 5 review last year.

We'll start with our WiFi battery life test. As always, we regularly load web pages at a fixed interval until the battery dies (all displays are calibrated to 200 nits).
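
For those curious about the mechanics, the sketch below captures the pacing idea: pages are loaded on a fixed cadence rather than back to back, so the display and radio stay active but the SoC isn't pegged continuously. The URL list and interval here are placeholders, and load_page() is a stand-in for whatever actually drives the device's browser in the real on-device test:

```python
# Minimal sketch of the web browsing battery life test's pacing logic.
# Placeholder values throughout; the real test runs on-device at 200 nits.
import itertools
import time
import urllib.request

PAGE_SET = ["https://www.example.com/"]  # placeholder; the real suite cycles a fixed set of pages
INTERVAL_SECONDS = 30                    # assumed cadence; the actual interval isn't disclosed

def load_page(url: str) -> None:
    # Stand-in for pointing the device's browser at a page; here we just fetch it.
    with urllib.request.urlopen(url, timeout=20) as resp:
        resp.read()

def run_until_battery_dies() -> None:
    start = time.monotonic()
    for url in itertools.cycle(PAGE_SET):
        load_page(url)
        time.sleep(INTERVAL_SECONDS)
        # The last elapsed-time value logged before the phone shuts off is the reported result.
        print(f"{(time.monotonic() - start) / 3600:.2f} h elapsed")

if __name__ == "__main__":
    run_until_battery_dies()
```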

AT Smartphone Bench 2013: Web Browsing Battery Life (WiFi)

The iPhone 5s regresses a bit compared to the 5 in this test (~12% shorter runtime despite the larger battery). We're loading web pages very aggressively here, likely keeping the A7's cores in their most power-hungry state. Even the 5c sees a bit of a regression compared to the 5, which makes me wonder if we're seeing some of the effects of an early iOS 7 release here.
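
To put that regression in perspective: a ~12% shorter runtime on a ~9.4% larger battery implies noticeably higher average platform power during the test. This is only a rough estimate, and it assumes the display and workload are otherwise comparable between the two phones:

```python
# Rough estimate of the implied average power increase in the WiFi test.
capacity_ratio = 5.96 / 5.45   # 5s battery vs. iPhone 5 (from the FCC figures above)
runtime_ratio = 1 - 0.12       # ~12% shorter runtime on the 5s

# Runtime = capacity / average power, so average power = capacity / runtime.
power_ratio = capacity_ratio / runtime_ratio
print(f"Implied average platform power: ~{(power_ratio - 1) * 100:.0f}% higher on the 5s")  # ~24%
```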

The story on LTE is a bit different. Here we see a slight improvement in battery life compared to the iPhone 5, although the larger battery of the 5s doesn't seem to give it anything other than parity with the 5c:

AT Smartphone Bench 2013: Web Browsing Battery Life (4G LTE)

Our cellular talk time test is almost entirely display and SoC independent, turning it mostly into a battery capacity test:

Cellular Talk Time

You can see the close grouping of the smaller iPhones at the bottom of the chart. There's a definite improvement in call time compared to the iPhone 5. We're finally up above iPhone 4S levels there.
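
Since the talk time test boils down to battery capacity divided by average power draw during a call, a crude first-order estimate is easy to sketch. The ~0.55W figure below is purely an assumed value for illustration, not something we measured:

```python
# First-order talk time estimate: hours = battery capacity (Wh) / average call power (W).
capacity_wh = 5.96            # iPhone 5s, from the FCC disclosure above
assumed_call_power_w = 0.55   # assumed average draw during a call (illustrative only)

print(f"Estimated talk time: {capacity_wh / assumed_call_power_w:.1f} hours")  # ~10.8 h
```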

AT Smartphone Bench 2013: GLBenchmark 2.5.1 Battery Life

Our Egypt HD-based 3D battery life test gives us the first indication that Rogue, at least when running fairly light code, can be more power efficient than the outgoing 5XT. Obviously the G6430 implemented here can run at fairly high performance levels, so I'm fully expecting peak power consumption to be worse, but for more normal workloads there's no regression at all - a very good sign.

Comments

  • Dug - Wednesday, September 18, 2013 - link

    "maybe you should hire a developer to write native cross platform benchmark tools"
    WHY? It is not going to make any difference. Developers aren't writing native cross-platform programs. If they can take advantage of anything that's in the system, then show it off.
    That would be like telling car manufacturers to redesign a hybrid to be gas-only just so it can be compared with all the other gas-only cars.
  • ddriver - Wednesday, September 18, 2013 - link

    "Developers aren't writing native cross platform programs"

    Maybe it is about time you crawled out from under the rock you are living under... Any application even remotely concerned with performance and efficiency pretty much mandates a native implementation. It would be incredibly stupid not to do it, considering the "closest to native" language, Java, is like 2-3 times slower and uses 10-20 times as much memory.
  • Dug - Wednesday, September 18, 2013 - link

    Exactly my point! "Native cross platform" - each cross-platform solution can only support a subset of the functionality included in each native platform.

    It doesn't get you anywhere to produce a native cross-platform benchmark tool.

    Again you have to resort to name-calling and snide comments because you are wrong.
  • ddriver - Wednesday, September 18, 2013 - link

    What you're talking about is I/O, events and stuff like that. When it comes to pure number crunching, the same code can execute perfectly well on every platform it is compiled for. Actually, some modern frameworks go even further than that and provide ample abstractions. For example, the same GUI application can run on Windows, Linux, MacOS, iOS and Android, not to mention a few other minor platforms.
  • Anand Lal Shimpi - Wednesday, September 18, 2013 - link

    Ultimately the benchmarking problem is being fixed, just not on the time scale that we want it to be. I figured we'd be better off by now, and in many ways we are (WebXPRT and Browsermark are both steps in the right direction, and we have more native tools under Android now), but part of the problem is that there was a long period of uncertainty around which OSes would prevail. Now that question is finally being answered and we're seeing some real investment in benchmarks. Trust me, I tried to do a lot behind the scenes over the past 4 years (some of which Brian and I did recently), but this stuff takes time. I remember going through this in the early days of the PC industry too, though; I know how it all ends - it'll just take a little time to get there.

    Actually I think 128-bit registers might've been optional on v7.

    The only reason encryption results are in that table is because that's how Geekbench groups them. There's no nefarious purpose there (note that it's how we've always reported the Geekbench results, as they are reported in the test themselves).

    In my experience with the 5s I haven't noticed any performance regressions compared to the 5/5c. I'm not saying they don't exist and I'll continue to hunt, it's just that they aren't there now. I believe I established the reasoning for why you'd want to make this move early, and again we're talking about at most 12 months before they should start the move to 64-bit anyway. Apple tends to like its ISA transitions to be as quick and painless as possible, and moving early to ARMv8 makes a lot of sense in that light. Sure, they get the marketing benefit of having a feature that no one else does, but what company doesn't do that?

    I don't believe the move to 64-bit with Cyclone was driven first and foremost by marketing. Keep in mind that this architecture was designed when a certain bunch of ex-AMDers were over there too...

    Take care,
    Anand
  • BrooksT - Wednesday, September 18, 2013 - link

    Why would Anand write cross-platform benchmarks that have no connection to real-world usage? Especially when you then complain that the 64-bit coverage isn't real-world enough?
  • ddriver - Wednesday, September 18, 2013 - link

    For starters, putting the encryption results in their own graph, like in every review before this one, and a side-by-side comparison of Geekbench ST/MT scores for the A7 and competing v7 chips would be a good start toward a more objective and less biased article.

    And I know I am asking a lot, but an edit feature in the comment section is long overdue...
  • TheBretz - Wednesday, September 18, 2013 - link

    For what it's worth, this is NOT a case of LITERALLY comparing "Apples" and "Oranges" - it is a case of comparing "Apple" and many other manufacturers, but there was no fruit involved in the comparison, only smartphones and tablets.
  • ddriver - Wednesday, September 18, 2013 - link

    "Apples to oranges" is a figure of speech; it has nothing to do with the company Apple... It refers to comparing incomparable things, which is the case with the completely different JS implementations on iOS and Android.
  • Arbee - Wednesday, September 18, 2013 - link

    Please name any case in which AT's benchmarks and reviews have been proven to be biased or inaccurate. There's a reason the writers at other sites consider AT the gold standard for solid technical commentary (Engadget, Gizmodo, and The Verge all regularly credit AT on technical stories). As for bias, have you *heard* Brian cooing about practically wanting to marry the Nexus 5? ;-)

    I think what actually happened here is that apparently Apple engineers listen to the AT podcast, because aside from 802.11ac and the screen size the 5S is designed almost perfectly to AT's well-known and often-stated specifications. It hits all of Anand's chip architecture geekery hot buttons in a way that Samsung's mashups of off-the-shelf parts never will, and they used Brian's exact line "Bigger pixels means better pictures" in the presentation. And naturally, if someone gives you what you want, you're likely to be happy with it. This is why people have Amazon gift lists ;-)

    Krait's 128-bit SIMD definitely helps, but it won't match true v8 architecture designs. I've written commercially shipping ARM assembly, and there's a *lot* of cruft in the older ISA that v8 cleans right up. And it lets compilers generate *much* more favorable code. I'll be surprised if the next Snapdragons aren't at least 32-bit v8. Qualcomm has been pretty forward-looking aside from their refusal to cooperate with the open-source community (Freedreno FTW).

    As far as 64-bit on less than 4 GB of RAM goes, it enables applications to operate more freely on files in NAND without taking up huge amounts of RAM (via mmap(), which the Linux kernel in Android of course also has). Apps like Loopy HD and MultiTrack DAW (not to mention Apple's own iMovie and GarageBand) will definitely be able to take advantage.
