Battery Life

Brian did some excellent sleuthing and came across battery capacities for both the iPhone 5s and 5c in Apple's FCC disclosures. The iPhone 5 had a 3.8V 5.45Wh battery, while the 5s boosts total capacity to 5.96Wh (an increase of 9.35%). The move to a 28nm process doesn't come with all of the benefits of a full node shrink, and it's likely not enough to completely offset the higher potential power draw of a much beefier SoC. Apple claims the same or better battery life on the 5s compared to the iPhone 5; in practice, the answer is a bit more complicated.
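
As a quick sanity check on those FCC figures (and assuming the 5s keeps the iPhone 5's 3.8V nominal cell voltage, which isn't confirmed here), the math works out as follows:

```python
# Back-of-the-envelope check on the FCC capacity figures.
# Assumes the 5s stays at a 3.8V nominal cell voltage (an assumption).
iphone5_wh, iphone5s_wh = 5.45, 5.96

increase_pct = (iphone5s_wh - iphone5_wh) / iphone5_wh * 100
approx_mah_5s = iphone5s_wh / 3.8 * 1000

print(f"capacity increase: {increase_pct:.2f}%")          # ~9.36%, in line with the figure above
print(f"approximate 5s rating: {approx_mah_5s:.0f} mAh")  # ~1568 mAh
```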

This is also the first time an s-SKU has received even a half node shrink. Both the iPhone 3GS and iPhone 4S stayed on the same process node as their predecessors and simply drove up performance. In the case of the 3GS, the performance gains outweighed their power cost, while with the iPhone 4S we generally saw a regression.

The iPhone 5s reduces power consumption by going to 28nm, but turns much of that savings into increased performance. The SoC also delivers a wider dynamic range of performance than we've ever seen from an Apple device: there's as much CPU power here as in the first 11-inch MacBook Air, and more GPU power than in an iPad 4.

To find out how the balance of power savings vs. additional performance plays out, I turned to our current battery life test suite, which we first introduced with the iPhone 5 review last year.

We'll start with our WiFi battery life test. As always, we regularly load web pages at a fixed interval until the battery dies (all displays are calibrated to 200 nits).
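
The test itself is conceptually simple. The sketch below is an illustration of the loop, not our actual harness: the page set and interval are placeholders, and read_battery is a caller-supplied hook standing in for a device-specific charge readout.

```python
import time
import urllib.request

# Illustration only -- not our actual harness. The real test drives the phone's
# own browser with the display held at 200 nits; this sketch just shows the
# shape of the loop: load a page, wait a fixed interval, repeat until the
# battery gives out.
PAGES = ["http://example.com/a", "http://example.com/b"]  # placeholder page set
INTERVAL_S = 20                                           # placeholder interval

def run_web_battery_test(read_battery):
    """read_battery is a hypothetical callable returning remaining charge in percent."""
    start = time.time()
    loads = 0
    while read_battery() > 0:
        urllib.request.urlopen(PAGES[loads % len(PAGES)]).read()  # load the page
        loads += 1
        time.sleep(INTERVAL_S)                                    # fixed interval between loads
    hours = (time.time() - start) / 3600.0
    print(f"{loads} page loads, battery life: {hours:.2f} hours")
```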

AT Smartphone Bench 2013: Web Browsing Battery Life (WiFi)

The iPhone 5s regresses a bit compared to the 5 in this test (a ~12% reduction despite the larger battery). We're loading web pages very aggressively here, likely keeping the A7's CPU cores running in their most power-hungry state. Even the 5c sees a bit of a regression compared to the 5, which makes me wonder if we're seeing some of the effects of an early iOS 7 release here.

The story on LTE is a bit different. Here we see a slight improvement in battery life compared to the iPhone 5, although the larger battery of the 5s doesn't seem to give it anything other than parity with the 5c:

AT Smartphone Bench 2013: Web Browsing Battery Life (4G LTE)

Our cellular talk time test is almost entirely display and SoC independent, turning it mostly into a battery capacity test:
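
To a first approximation, talk time is just battery energy divided by the roughly constant power draw of the baseband and audio path during a call, which is why this chart tracks capacity so closely. A trivial sketch of that relationship; the power figure below is purely illustrative, not a measured value:

```python
# Rough model: talk time scales with battery energy if per-call power draw is
# roughly constant across devices. The 0.55W default is illustrative only.
def estimated_talk_time_hours(battery_wh, avg_call_power_w=0.55):
    return battery_wh / avg_call_power_w

print(f"{estimated_talk_time_hours(5.96):.1f} h")  # iPhone 5s pack, hypothetical draw
print(f"{estimated_talk_time_hours(5.45):.1f} h")  # iPhone 5 pack, same assumption
```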

Cellular Talk Time

You can see the close grouping of the smaller iPhones at the bottom of the chart. There's a definite improvement in call time compared to the iPhone 5. We're finally up above iPhone 4S levels there.

AT Smartphone Bench 2013: GLBenchmark 2.5.1 Battery Life

Our Egypt HD-based 3D battery life test gives us the first indication that Rogue, at least running fairly light code, can be more power-efficient than the outgoing SGX 5XT series. Obviously the G6430 implemented here can run at fairly high performance levels, so I'm fully expecting peak power consumption to be worse, but for more normal workloads there's no regression at all, which is a very good sign.

Comments

  • ClarkGoble - Wednesday, September 18, 2013 - link

    On OS X most apps are 64-bit. Developers I've talked with say you get a 20-30% speed increase by going 64-bit. Oddly, Apple's iWork apps are among the few on my system that are still 32-bit. (And that'll probably change next month.) With regards to iOS 7, I worry that they didn't increase the RAM but will, when multitasking, have to keep both the 32-bit and 64-bit frameworks in RAM at the same time. I assume they have a way to do this well, but extra memory would have made it less painful (although it perhaps would have hurt battery life).
  • DeciusStrabo - Wednesday, September 18, 2013 - link

    Now, now, that's not really true any more. Taking my Windows 8 machine here, about 2/3 of the programs and background processes currently running are 64-bit and 1/3 are 32-bit. On MacOS it's more like 90% 64-bit, 10% 32-bit.
  • name99 - Thursday, September 19, 2013 - link

    You would get more useful answers if you asked decent questions. What does "bloat your program by 25%" mean?
    - 25% larger CODE footprint?
    - 25% larger ACTIVE CODE footprint?
    - 25% larger DATA footprint?
    - 25% larger ACTIVE DATA footprint?
    - 25% larger shipped binary?
    The last (shipped binary) is what most people seem to mean when they talk about bloat. It's also the one for which the claim is closest to bullshit, because most of what takes up space in a binary is data assets: images, translated strings, that sort of thing. Even duplicating the code resources to include both 64-bit and 32-bit code will, for most commercial apps, add only negligible size to the shipping binary.
  • Devfarce - Tuesday, September 17, 2013 - link

    The performance of the A7 chip sounds amazing. Similar performance to the original 11" MBA is pretty incredible. It makes me realize that I have a 2007 Merom 1.8 GHz Core 2 Duo in my laptop, that it's running Win7 32-bit (again!!!!), and that it's within striking distance of the iPhone 5s. I don't even want to think about GPU or memory performance; I'm sure that ship sailed long ago with the GMA X3100.
  • tipoo - Tuesday, September 17, 2013 - link

    Closing in on or maybe surpassing Intel HD2500 now at least, I think. HD4000 is still a bit away, probably within striking range of A7X.
  • dylan522p - Tuesday, September 17, 2013 - link

    Hopefully HD6000 is really good. They are doing a big design change then.
  • Krysto - Wednesday, September 18, 2013 - link

    Intel will be focusing mostly on power consumption from now on, not performance, even on the GPU side. Although I'm sure they'll try to be misleading again, by showing off the "high-end PC version" of their new GPU, to make everyone think that's what they're getting in their laptops (even though they're not), just like they did with Haswell.
  • Mondozai - Wednesday, September 18, 2013 - link

    You have no clue, Krysto.
  • Devfarce - Wednesday, September 18, 2013 - link

    I wouldn't say Intel is misleading on performance; however, very few companies will demand the parts with the biggest GPU the way Apple does. People just don't ask for the big GPUs, although they should, which is why Intel currently sells mostly the HD 4400 in the Windows Haswell chips on the market.

    But back to the iPhone, this is truly incredible even if people don't want to believe it.
  • akdj - Thursday, September 19, 2013 - link

    Not sure you know what you're talking about. The HD 5000 & 51(2?)00 iGPUs are incredible, especially when you take into account the efficiency and performance gains of the Haswell architecture compared with the HD 4000 in Ivy Bridge. I think Apple's demand here is a big motivation for Intel to continue to innovate with their iGPUs...regardless of what the other 'ultrabook' OEMs are demanding. They just don't have the pull...or the 'balls' to stand up to Intel. I also think Intel has impressed themselves with the performance gains across the HD 3000 -> HD 4000 -> HD 4600/5000/5100 transitions. As they keep closing the gap for the normal consumer who enjoys gaming and video editing (not the GPU guru demanding the latest SLI nVidia setup), they'll enjoy a big win when compared directly with discrete cards. Ultrabook sales are already being subsidized by Intel...to the tune of $300,000,000.

    I think they're motivated, and Apple absolutely IS using the high-power GPUs, not the 4600 everyone else has chosen. The 5000s are already in the new MBA. The rMBP refresh is close, and my bet is they'll be using the high-end iGPU in the 13/15" rMBP updates, hopefully still maintaining the discrete option on the 15"...but as performance increases in the portable laptop sector, I'm not so sure most consumers wouldn't value all-day battery life over an extra 10fps in the latest FPS ;). The 13" MBA is already getting 10-12 hours of battery life on Haswell with the HD 5000, and it can play triple-A games at decent frame rates, albeit not on 'ultimate' settings with anti-aliasing. Those who care will augment their all-day laptop with a gaming console. I think the big beige desktop's days are limited. We'll see.

    While I don't disagree that Intel tends to embellish their performance numbers, in this case they're headed in the right direction. There's too much competition...including from the ultra-low-voltage SoC developers making such massive inroads (this review is all the proof you need).
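
To put name99's point about fat binaries (above) in concrete terms: in a universal Mach-O file, the 32-bit and 64-bit code live in separate slices described by a small header at the front of the file, and those slices are typically dwarfed by an app's assets. A minimal sketch, assuming Python 3 and the documented fat_header/fat_arch layout (this is not Apple tooling), that prints each slice's size for comparison against the bundle:

```python
import struct
import sys

# Sketch only: walk the fat_header/fat_arch structures at the start of a
# universal Mach-O file and print the size of each architecture slice.
# Comparing the 32-bit and arm64 slices against the size of the whole .app
# bundle shows how little the duplicated code costs relative to data assets.
FAT_MAGIC = 0xCAFEBABE                      # big-endian magic for fat binaries
CPU_NAMES = {0x0000000C: "arm (32-bit)", 0x0100000C: "arm64"}

def fat_slices(path):
    with open(path, "rb") as f:
        magic, nfat_arch = struct.unpack(">II", f.read(8))
        if magic != FAT_MAGIC:
            raise ValueError(f"{path} is not a fat Mach-O binary")
        for _ in range(nfat_arch):
            cputype, _sub, _off, size, _align = struct.unpack(">iiIII", f.read(20))
            yield CPU_NAMES.get(cputype, hex(cputype)), size

if __name__ == "__main__":
    for name, size in fat_slices(sys.argv[1]):
        print(f"{name}: {size / 1024:.0f} KiB")
```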
