Battery Life

Apple is generally quite conservative when quoting battery life, and both the iPad Pro 11 and the iPad Pro 12.9 are rated at up to 10 hours of web usage. The smaller model offers a 29.37 Wh battery, and the larger model offers 36.71 Wh of capacity. Both capacities are much lower than the Surface Pro 6’s 45 Wh, and well under a typical Ultrabook’s, which would be over 50 Wh.

Our battery tests are performed at 200 nits of brightness.

Web Browsing Battery Life

[Chart: Web Browsing Battery Life 2016 (WiFi)]

Our iPad achieved well over the rated ten hours, coming in at 12:13 on our web rundown test. This is a couple of hours longer than you’d get on an iPhone XS Max, and well ahead of the Surface Pro 6’s result on the same test. This is one area where the efficiencies of the SoC, coupled with the operating system, pay big dividends compared to the PC space.

Battery Life Movie Playback

[Chart: Battery Life Movie Playback]

Movie playback is a unique situation where the workload can be offloaded to fixed-function hardware in the media block, which is much more efficient than doing the work on the CPU. The iPad Pro achieved just over 15.5 hours of playback of a locally stored video. This is a couple of hours longer than you’d get on a Surface Pro with the same workload, despite the much smaller battery capacity.
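
To put rough numbers on that fixed-function advantage, you can back out the implied average power draw from capacity and runtime. A quick sketch in Python, assuming both results come from the 11-inch model's 29.37 Wh battery:

```python
# Implied average power draw, derived as battery capacity / runtime.
# Assumption: both results are from the 11-inch model's 29.37 Wh battery.

CAPACITY_WH = 29.37

for workload, runtime_h in [
    ("movie playback", 15.5),        # just over 15.5 hours, per the review
    ("web browsing", 12 + 13 / 60),  # the 12:13 web rundown result
]:
    print(f"{workload}: ~{CAPACITY_WH / runtime_h:.1f} W average draw")
```

Playback works out to roughly 1.9 W against about 2.4 W for web browsing, which is the media block's efficiency showing up directly in the numbers.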

Normalized Results

[Chart: Battery Life 2016 - Web - Normalized]

One thing we do in our PC reviews is to look at the efficiency of a device by removing battery capacity from the equation, and this shows the current gap between tablets and PCs. The Surface Pro 6 is one of the most efficient PCs around, offering over 12 minutes of runtime per Wh of battery capacity, yet the iPad roughly doubles that efficiency at almost 25 minutes per Wh. Put in other terms, the iPad was drawing an average of about 2.4 Watts during the web test, while the Surface Pro 6 was drawing about 5 Watts. Considering much of the Surface Pro’s draw is the display, it shows just how effective Apple has been at driving down every source of power drain.
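
A minimal sketch of that normalization, in Python; the figures are the ones quoted above, except the Surface Pro 6 runtime, which is back-calculated here from its roughly 12 min/Wh result rather than taken from a published number:

```python
# Normalizing battery life by capacity: minutes of runtime per Wh,
# plus the implied average power draw for each device.

def normalize(capacity_wh: float, runtime_min: float) -> tuple[float, float]:
    """Return (minutes per Wh, implied average draw in watts)."""
    return runtime_min / capacity_wh, capacity_wh / (runtime_min / 60)

devices = [
    ("iPad Pro 11", 29.37, 12 * 60 + 13),  # 12:13 web rundown
    ("Surface Pro 6", 45.0, 9 * 60),       # approx., from ~12 min/Wh
]

for name, capacity_wh, runtime_min in devices:
    min_per_wh, watts = normalize(capacity_wh, runtime_min)
    print(f"{name}: {min_per_wh:.1f} min/Wh, ~{watts:.1f} W average draw")
```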

Charge Time

The other end of the spectrum is charge time. Apple ships the iPad Pro with an 18-Watt USB-C power adapter. That is quite a bit lower than you’d see with a laptop; the MacBook, for example, ships with a 30-Watt adapter. That means the iPad’s charge time is quite long, despite the small battery capacity.
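
As a rough sanity check on that, dividing capacity by adapter output gives a lower bound on charge time; real charging tapers off near full, so the taper factor below is an assumption for illustration, not a measured value:

```python
# Naive charge-time estimate. Capacity / charger output is a lower bound,
# since charge power tapers well below peak as the cell fills up.

def estimate_charge_hours(capacity_wh: float, charger_w: float,
                          taper_factor: float = 1.3) -> float:
    """taper_factor is an assumed fudge for the constant-voltage phase."""
    return capacity_wh / charger_w * taper_factor

print(f"iPad Pro 11:   ~{estimate_charge_hours(29.37, 18):.1f} h")
print(f"iPad Pro 12.9: ~{estimate_charge_hours(36.71, 18):.1f} h")
```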

[Chart: Battery Charge Time]

In addition, Apple ships an almost comically short USB-C cable with the iPad Pro. At three feet long, it makes it nearly impossible to charge and use the iPad at the same time unless you happen to have an outlet right at your desk. At least with the move to USB-C, getting a longer cable is not an issue, but for such an expensive device, this is a bit silly.

Comments

  • jeremyshaw - Tuesday, December 4, 2018 - link

    Crikey, that's a fast chip.

    That claim about an Xbox One S-class GPU does raise questions, though. Why does the Xbox One S draw so much power?
  • axfelix - Tuesday, December 4, 2018 - link

    Because it's still using an AMD GPU architecture from 2013, and Apple's and Nvidia's architectures are >3x as powerful per watt at this point.
  • PeachNCream - Tuesday, December 4, 2018 - link

    Eh, NV is just as bad. Their current gen products (GT 1030 aside) generally need more than 75W of power and occupy space equal to two PCI-E slots.
  • Pyrate2142 - Tuesday, December 4, 2018 - link

    Yeah, but those 75W-and-above cards are operating at a significantly higher performance level. You cannot really compare them straight like that, because: 1- the NV cards are doing full FP32 compute compared to mixed FP16 and FP32 on the iPad, meaning it is inherently a more strenuous workload to begin with, and 2- performance scaling is not a linear function.

    In short, we can't really take those claims at face value, because A- we don't have a way to measure and compare performance in the first place (which brings me to the question of how Apple is actually comparing. Using TFLOP numbers? Because TFLOPs are not an accurate way of measuring GPU performance, as a GPU has to do more than just FLOPs. Take an RX 580 at almost 7 TFLOPs and a GTX 1060 6GB at 4.5 TFLOPs in FP32: the TFLOP difference suggests a huge performance gap, but they both perform similarly.) and B- again, NV doesn't really make cards that scale down to the power level the iPad operates at. Best case, it's truly an apples-to-oranges comparison, and I don't think you can directly compare the GPU in the A12X against AMD or NV, because it's just not the same comparison, both in power target and in how the performance is measured.
  • Spunjji - Wednesday, December 5, 2018 - link

    Just responding firstly to endorse your comment, and secondly to note that Nvidia do make something at that scale - the 256 CUDA-core Pascal GPU in Tegra X2 would be a solid point of comparison, were it not basically impossible to perform one.
  • olde94 - Wednesday, December 5, 2018 - link

    For power/performance, I have a few inputs.

    When looking at Nvidia Jetsons running the X2 and X1, most of the performance improvement is on the CPU side of things.

    Also, for a power reference: the Nvidia Shield is not a portable device, and the Nintendo Switch, running the older version of the 256 CUDA core SoC, has the GPU running at 764 MHz in docked mode and 324 MHz in handheld. The reason is a combination of the battery and the cooling solution, an active 30mm fan plus a modest heatsink. The charger is a 40W charger, and while it does charge the battery, I can assure you no more than 15W is used for this; based on charging time during full load, it's less than 10W. Note also that the screen is NOT on.

    An Nvidia TX2 is rated at ~20W if I recall, making it WAY more power hungry than the A12 chip.
  • PeachNCream - Thursday, December 6, 2018 - link

    Eh, the A12X puts a lot into perspective when it comes to compute performance. The big three players in the x86 CPU and GPU space are chasing performance at the cost of rising TDPs, while the phone and tablet competition is highly constrained by the power and thermal limits inherent to the platform. The result is that the technological improvements we see in those highly mobile products generally focus on both power and performance. It's a pity to see stupid dual-slot coolers on graphics cards that have to cope with TDPs ranging from 75 to an absolutely irrational 200+ watts, and processors that blow their TDP budget by 50% under load. I had a Packard Bell 386 PC that was happy with a 60W internal power supply. Computers in 2018 are stupid. They shouldn't even need cooling fans at this point, or heatsinks. That old Packard Bell ran a bare IC without even so much as a piece of metal glued atop it, and under load you could rest your thumb on the CPU and it would feel warm, but not hot to the touch.
  • Oliseo - Thursday, January 2, 2020 - link

    That old Packard Bell was orders of magnitude slower than modern CPUs/GPUs, and orders of magnitude less efficient than modern CPUs/GPUs.

    Even if you normalised for cooling requirements.

    This doesn't make modern CPU's/GPU's stupid, you know what it does make stupid tho....
  • tipoo - Tuesday, December 4, 2018 - link

    It's several fabrication node shrinks back (28nm vs 7nm) and on a 2013 architecture.

    You could probably get something close-ish to XBO performance in a handheld Xbox on 7nm, that would be an interesting product if it had full compatibility...
  • axfelix - Tuesday, December 4, 2018 - link

    The Xbox One S (which I think is the comparison here) is actually on 16nm, though it's still that 2013 architecture. I think Apple gets about 2/3 of the advantage from the architecture and 1/3 from the process, and it does work out still to >3x efficiency.
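
As a footnote to the TFLOPS discussion above: the peak figure vendors quote is typically just shader count × 2 operations per clock (one fused multiply-add) × boost clock, which is part of why it says so little about delivered performance. A minimal sketch using public spec-sheet shader counts and boost clocks:

```python
# Peak FP32 throughput as vendors quote it: shaders x 2 (FMA) x clock.
# This says nothing about real-world utilization, which is the point
# raised in the thread above.

def peak_tflops(shaders: int, boost_ghz: float) -> float:
    return shaders * 2 * boost_ghz / 1000  # GFLOPS -> TFLOPS

print(f"RX 580:       {peak_tflops(2304, 1.34):.1f} TFLOPS")  # ~6.2
print(f"GTX 1060 6GB: {peak_tflops(1280, 1.71):.1f} TFLOPS")  # ~4.4
```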
