Battery Life

Both the Exynos and Snapdragon versions of the Note 4 sport the same 3220mAh 3.85V battery. The difference lies in the internal components: the Snapdragon model runs on Qualcomm's Snapdragon 805 platform, which includes Qualcomm's own PMIC, audio solution, and modem IC, while Samsung likewise provides its own PMIC but relies on an audio solution from Wolfson Microelectronics and a modem from Ericsson.

Web Browsing Battery Life (WiFi)

Unfortunately the battery disadvantage for the Exynos version is quite large in our web browsing benchmark, where it comes out 79 minutes shorter than the Qualcomm version. I commented on the A57 page how poor Samsung's software power management is on the Exynos 5430 and 5433, which run a quite outdated version of GTS to control big.LITTLE migration in the Linux kernel scheduler. I see this result as a direct consequence of an inferior configuration of the CPUIdle drivers and a lack of big.LITTLE optimization. I mentioned in the A53 power analysis that the kernel is missing several power management features such as "Small Task Packing".

In the time between starting this article and publishing it, I was able to confirm this and improve power efficiency by applying a handful of patches to the stock kernel. I'm hoping to do a more in-depth article on user customization of device firmware in the future, especially on the topics of overclocking and undervolting in the mobile space.

BaseMark OS II Battery Life

BaseMark OS II Battery Score

The BaseMark OS II Battery benchmark is a CPU run-down test, and here we're seeing the devices limited by their thermal drivers and power allocation mechanisms. The Exynos and Qualcomm versions are neck-and-neck in this benchmark, both representing around 3.85W of total device TDP. The Exynos is able to do more work in the same period, awarding it a slightly higher battery score.

Since we are adding PCMark to our 2015 suite, I decided it would also make a good test candidate for overall device power consumption, so I measured power draw during its various subtests. These figures portray total device power, meaning the screen is included in the measured numbers. The video test in particular will be interesting, as we have stopped publishing video playback battery life now that modern devices routinely exceed 12 hours of runtime.

Although we can't do much with the device power figures without having comparison devices, we can see the power differences between running the device in normal versus power-savings mode. The power savings mode limits the A57 cluster to 1.4GHz and disables all boost logic in the scheduler, meaning the threads are no longer aggressively scheduled on the big cluster due to lowered migration thresholds.

It's interesting to see the perf/W go up in battery savings mode. I've commented in the A57 section how Samsung's default scheduler settings are much too aggressive in normal usage and may have detrimental effects on power consumption. The figures presented by PCMark corroborate my suspicions as real-world loads are negatively impacted in power efficiency.

I hope to continue using PCMark as a power consumption test in future devices whenever it's possible so that we can paint a better picture of general power consumption of devices.

GFXBench 3.0 Battery Life

On the GFXBench battery rundown the Exynos leads the Snapdragon by half an hour. It's worth mentioning that I ran this several times while Josh ran his review unit under different ambient temperature conditions; we found the score can fluctuate a lot depending on how much heat the phone is allowed to dissipate.

GFXBench 3.0 Performance Degradation

The performance degradation metric is exceptionally bad on the Exynos version. While the Snapdragon also has its thermal throttling issues, it seems to degrade much more gracefully than the Exynos. The Mali T760 is here limited to half its frequency during most of the benchmark run, spiking back to full frequency momentarily before throttling again. I'm disappointed to see such throttling behavior; it would've made much more sense to scale down to some of the higher frequencies first rather than dropping immediately to 350MHz.

The Exynos rendered 202K frames while the Snapdragon achieved 252K, or 24.5% more. On the other hand, the Exynos ran 18% longer. I have a feeling that if both devices had similar throttling policies we would be seeing much closer scores in the end, taking screen power overhead into account.
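Taking the rounded frame counts quoted above at face value, the gap can be checked with a quick calculation; the result lands near, but not exactly on, the 24.5% in the text because the published counts are rounded to the nearest thousand.

```python
# Back-of-envelope check of the GFXBench rundown figures quoted above.
# The rounded frame counts are taken at face value.

exynos_frames = 202_000      # total frames rendered over the rundown
snapdragon_frames = 252_000

gap = snapdragon_frames / exynos_frames - 1
print(f"Snapdragon rendered {gap:.1%} more frames")  # ~24.8% with rounded counts
```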

All in all it seems the Qualcomm version has superior power management and is able to outpace the Exynos in day-to-day usage. I think this difference is not a matter of hardware efficiency differences but rather software oversights. The question is whether Samsung is willing to invest the time and effort into fixing their own SoC software, or if we'll see this pattern continue on for the foreseeable future.

Display Power

Until now we never really had the opportunity to take an in-depth look at a mobile phone's display power. This is particularly a problem in trying to characterize the power consumption and battery life of emissive screen technologies such as OLED. I went ahead and measured the device's power consumption at all its available brightness points and a few APL and grey-scale levels totalling 191 data points.

The x-axis represents the luminance measured at 100% APL throughout the device's 65 hardware brightness points, but the sunlight boost mode with auto brightness couldn't be included with this test. At 100% APL, meaning a full white screen, we see the power consumption rise up to 1.7W. The phone has a base power consumption of around 440mW when displaying a black screen, meaning the screen emission power to display white at 336 cd/m² of this particular device comes in at about 1.25W. The measurements were done in the sRGB accurate "Movie" screen mode; more saturated modes should increase the power usage due to the higher intensities on the OLEDs.
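The emission estimate above is a simple subtraction, sketched below with the figures measured on this particular unit; the 440mW black-screen figure covers SoC, radios, and display logic, so any error there carries straight into the emission number.

```python
# Estimating screen emission power at maximum brightness (100% APL, full white),
# using the measurements quoted above for this particular Note 4 unit.

total_power_w = 1.70   # total device power, full-white screen at 336 cd/m²
base_power_w = 0.44    # device power with a black screen (SoC, radios, etc.)

emission_power_w = total_power_w - base_power_w
print(f"Screen emission power: {emission_power_w:.2f} W")  # 1.26 W, ~1.25W as quoted
```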

The power consumption seems to scale pretty linearly with luminance when using the device's brightness settings. What is extremely interesting, however, is how power scales with APL (Average Picture Level) and grey-scale level. Theoretically, an OLED screen's power should scale identically with the two, since a 70% grey-scale image has the same APL as an artificial 70% APL image. In our measurements we use APL samples consisting of pure white and pure black areas sized to achieve a target APL over the whole screen area.
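The exact test images used for the measurements aren't published, but the idea can be sketched: APL is simply the mean pixel level over the frame, and a white/black area pattern hits a target APL by sizing the white region. The toy "screen" below is an illustration, not the actual test content.

```python
import math

def apl(pixels):
    """Average Picture Level: mean pixel level (0.0 black .. 1.0 white) over the frame."""
    return sum(pixels) / len(pixels)

# Pattern A: 70% of the screen area pure white, the rest pure black.
pattern = [1.0] * 700 + [0.0] * 300

# Pattern B: homogeneous 70% grey over the whole screen.
grey = [0.7] * 1000

# Identical APL by construction - yet, as measured, very different power draw.
assert math.isclose(apl(pattern), apl(grey))
```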

Instead of our theoretical scenario, we see a huge discrepancy between a 70% APL pattern and a 70% grey-scale level image. This is due to how primary brightness and secondary luminance of color components are controlled in Samsung's AMOLED screens. I mentioned that power scaled more or less linearly when luminance is altered via the device's brightness controls. This is due to the panel increasing brightness by increasing the PWM cycle of all pixels on the panel.

In fact, if one had a high-speed camera or DSLR with fast enough shutter speed one could see and capture the rolling-shutter-like effect of how the panel is lit up: multiple horizontal stripes of pixels rolling from the top of the device towards the bottom in a similar fashion to how a CRT's electron beam behaves when moving in one axis. The brightness depends on the thickness of the stripes, or in other words how much active area is powered up vs inactive pixels. When looking from a single pixel's perspective, this results in nothing but a typical PWM pulse.
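The PWM scheme described above can be modeled numerically: if brightness is set purely by duty cycle, luminance and (to first order) emission power both scale linearly with it, which matches the near-linear brightness curve measured earlier. This is a toy model under that assumption, reusing the full-white figures from this unit.

```python
# Toy model of PWM-based brightness control: each pixel is either fully on or
# fully off, and perceived luminance is set by the fraction of time it is on.

max_luminance_nits = 336.0   # full-white luminance of this unit, from the text
max_emission_w = 1.25        # estimated emission power at full white, from the text

def pwm_output(duty_cycle):
    """Luminance and emission power for a given PWM duty cycle (0.0-1.0)."""
    return max_luminance_nits * duty_cycle, max_emission_w * duty_cycle

# Halving the duty cycle halves both luminance and (ideally) emission power:
lum, power = pwm_output(0.5)
print(f"{lum:.0f} nits at {power:.3f} W")  # 168 nits at 0.625 W
```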

What remains to be answered is why, at equal APL, the grey-scale power figures differ so much from those of the APL pattern images. Although the primary brightness is controlled by PWM, the actual color output of each OLED is controlled via voltage. In past research I've managed to retrieve the display driver IC's programmed voltages to see how the sub-pixels, and thus color, are actually controlled. Although I haven't looked into the Note 4's driver yet (it's an extremely time-consuming procedure), I had saved the table from my old Galaxy S4:

If the Note 4's voltage curve behaves anywhere near the same, it's clear why an image containing white consumes more power than an equal homogeneous grey-scale picture with the same APL. There's a steep voltage increase towards the last few levels for each color component (white being 255 on all three RGB channels).
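The effect can be illustrated with a toy per-pixel power curve: roughly linear over most of the code range, with a steep rise over the last few levels, as the S4 voltage table suggests. The curve below is invented for illustration only; the real one would come from the driver IC's programmed voltages.

```python
# Toy illustration of why full-white costs more than equal-APL grey: OLED
# drive power rises steeply over the last few code levels.

def pixel_power(level):
    """Hypothetical sub-pixel emission power (arbitrary units) for a level 0-255.
    Roughly linear, with a superlinear bump near full drive - invented curve."""
    x = level / 255
    return x + 2.0 * x ** 8   # steep rise toward level 255

# 70% APL as a white/black area pattern vs. a homogeneous 70% grey screen:
area_pattern = 0.7 * pixel_power(255) + 0.3 * pixel_power(0)
homogeneous = pixel_power(round(0.7 * 255))

# The white/black pattern draws noticeably more despite identical APL.
assert area_pattern > homogeneous
```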

I am holding off on any comparisons with LCD screens and past AMOLED generations, as I still lack enough data points to make an educated statement on the power efficiency of the different technologies and panel generations. What can be said for certain is that increasingly high APL values, and especially the white interfaces of Android, are AMOLED's Achilles' heel. There's no doubt that we could see some very large power efficiency gains if UX design tried to minimize the use of colors nearing full intensity, which leaves some justification for the "grayscale mode" used in extreme power saving modes.

Charge Time

While Josh already tested the charge time on the Qualcomm version of the device, I wanted to add a charge graph comparing the two charge modes and add a little comment in regards to fast-charging.

The Exynos model supports the same fast-charging capabilities as the Qualcomm version. In fact, many people have been calling this a feature enabled by Qualcomm's Snapdragon platform, or "Quick Charge", but that is only half correct. Both Note 4 variants' fast-charging mechanisms are enabled by Maxim Integrated's MAX77823 charger IC. The IC has been around for a while but has so far seen limited usage, only in Japanese models of the Galaxy S5.

Samsung has not used Qualcomm's charger ICs in any of their recent products and always relied on Maxim solutions, even in combination with Snapdragon SoCs. What Qualcomm markets as Quick Charge is actually the proprietary signalling between the charger and a dedicated voltage switcher IC on the phone. What seems to be happening here is that although the Qualcomm device is advertised as QC2.0 compatible, it doesn't use it. Instead what appears to be implemented in the stock charger is Qnovo's Adaptive Fast Charging technology, which provides a similar solution.

The phone is able to detect when a 9V charger is plugged in and enables this charge mode if it's turned on in the battery settings screen; the cable must be re-plugged for the new mode to become active. The included charger supplies up to 1.67A at 9V, or 15W. Otherwise, normal charging takes place at 2A on 5V, or 10W.

We see in the fast-charging graph that the phone is delivered just short of 11W up until the 75% capacity mark, after which the charger IC vastly limits the input power and gradually switches over to trickle charging. The phone charges to 75% in just 50 minutes, which is quite amazing; a mere 15 minutes on the fast charger replenishes up to 25% of the battery. The total charging time in this mode is 138 minutes.
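A quick sanity check shows these figures hang together: the energy needed to fill 75% of the 3220mAh 3.85V battery in 50 minutes implies an average power right around the input power seen in the graph. This deliberately ignores charger and conversion losses, so it is only a ballpark consistency check.

```python
# Back-of-envelope consistency check of the fast-charge figures quoted above,
# ignoring charger/conversion losses.

battery_wh = 3.220 * 3.85          # 3220mAh at 3.85V nominal ≈ 12.4 Wh
fast_charge_input_w = 9 * 1.67     # 9V at 1.67A ≈ 15W max from the charger
normal_charge_input_w = 5 * 2      # 5V at 2A = 10W

# Energy for the first 75%, and the average power implied by a 50-minute charge:
energy_75pct_wh = 0.75 * battery_wh
implied_avg_power_w = energy_75pct_wh / (50 / 60)

print(f"{implied_avg_power_w:.1f} W")  # ~11.2 W, in the ballpark of the ~11W input
```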

In the "slow" charging mode over a normal 5V charger, the device is delivered around 7W up until the ~85% mark, where again the charger IC gradually scales down the power until the device reaches true 100% capacity. You will notice that the fuel gauge reports 100% capacity well before the input power reaches zero; charging actually continues for another 30 minutes after that, as modern fuel gauges fake the top few percent of a battery's capacity. Here it takes about 50 minutes to reach the 50% mark, and a total of 171 minutes to truly fill up the battery.


  • ddriver - Tuesday, February 10, 2015 - link

    I'd like to see A57 performance without being so crippled by a ram bottleneck.
  • blanarahul - Wednesday, February 11, 2015 - link

    Loved this article. Only thing missing was gaming fps and power consumption comparison b/w LITTLE cluster only, big cluster only and big.LITTLE modes.
  • ddriver - Thursday, February 12, 2015 - link

    Also in true 64bit mode, cuz a lot of the perf improvements in v8 are not available in legacy 32bit mode.

    It is a shame really, samsung decided the uArch improvements would be enough to barely pass this chip as "incremental", they didn't bother to feed a higher throughput chip with a wider memory bus. As much as it pains me, apple did better in that aspect by not crippling their A7 chip, even if only because they needed it for a "wow factor" after so many generations of mediocre hardware, especially given the many exclusive initial shipment deals they secured to stay relevant.
  • thegeneral2010 - Wednesday, February 18, 2015 - link

    i like wat u say and i really like to see note 4 running on 64bit this would give samsung processors a great push forward and trust of consumers.
  • bigstrudel - Tuesday, February 10, 2015 - link

    If it wasn't completely obvious already:

    Apple A Series stands alone years ahead of the rest of the pack.
  • Flunk - Tuesday, February 10, 2015 - link

    But if they don't sell it to anyone else, it doesn't really matter does it?

    Apple doesn't compete with Samsung or Qualcomm when it comes to selling SoCs because they don't sell SoCs to other companies. A slight lead in CPU performance is not going to get people to buy an iPhone over and Android, if that's what they're set on buying.
  • xype - Tuesday, February 10, 2015 - link

    It does matter insofar as to be a benchmark of what is possible (as long as they are ahead). And let’s not pretend Apple’s CPUs sucking wouldn’t invite the same kind of comments—just like every situation where 2 competing technologies are compared.

    Platform/fanboy trolling aside, that’s something Android users benefit from as well. Apple being "stubborn" about 2 core CPUs, for example, is a nice counterweight to the 8 cores and 8 mini-cores and 8 quasi-cores trend that some CPU vendors seem to have a hard-on for, and it gives a nice real-world example of how such an approach to mobile CPU design works out, no?

    If Apple stays ahead in the mobile CPU game, the people using non-Apple phones will always have a target to point to and demand equality with. Otherwise they’d just have to live with whatever Qualcomm et al feed them.
  • bigstrudel - Tuesday, February 10, 2015 - link

    My comment isn't fanboy jingo-ism. Its fact.

    There's not a single Android ARM core on the market that can even match the power of the Apple A7's Cyclone cores much less A8's 2nd gen design.

    Were still waiting for anything custom to come out of the Android camp aside from the frankensteinish design of Nvidia's Denver core.

    I really shouldn't need to explain why to people on Anandtech.
  • ergo98 - Tuesday, February 10, 2015 - link

    The Tegra K1 64 bit is faster, core per core, versus the A8 (you do realize that the K1-64 has only 2 cores, right? I'm going to have to guess no, or you just are completely unable to read a chart). The A8x offers marginal per core performance advantages over the A8, and the primary benefit is the third core. The K1 64 is a A57 derivative, *exactly like the A8*.

    Your comments can only be construed as trolling. Can't match the A7? Give me a break.
  • tipoo - Tuesday, February 10, 2015 - link

    Ergo, you're completely off. The Denver K1 is a VLIW code morphing architecture - it has nothing to do with the Cortex A57, nor does the Apple Cyclone, they're both custom architectures.

    The K1 offers better performance in benchmarks, but as a result of code morphing, it can be hit or miss in real world, causing jank.
