Camera2 API

Android's Camera2 API is one of Lollipop's additions aimed at improving the camera experience on Android. While many users simply take photos with a highly automatic, single-button mode, the advancement of smartphone sensors and optics has created interest in greater control over options like ISO, white balance, and overall exposure. Unfortunately, Android's camera API has been quite lacking in many regards, with something as simple as tap to focus not arriving until Ice Cream Sandwich. To include more features and improve the camera experience, OEMs and chip makers produced their own undocumented camera APIs. This worked for users of the stock camera application, but it gave developers no way to create their own camera applications with manual controls. Josh did an article on Camera2 earlier this year after its announcement at Google IO, and rather than restating all of it I'll simply link it here so those interested can take a look.

Unfortunately, Google's camera application doesn't use many of the features in Camera2. Google is clearly going with a more automatic approach that caters to general users, leaving it up to developers to create applications with manual camera controls. All users really need to know is that Camera2 gives developers the ability to include full manual camera controls, RAW/DNG output, and even custom white balance and exposure algorithms. Unfortunately, I haven't been able to find many third party applications that take advantage of everything Camera2 has to offer, so this will be something to look forward to in the future. There is an application currently in development called L Camera, and anyone who wants to try out the new manual controls in Android Lollipop can check it out on its GitHub page. I can say that Google's camera application does still benefit from the new API. For example, there's a considerable improvement in the capture time for HDR+ photos compared to KitKat.
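As a taste of what those manual controls look like for developers, here's a minimal sketch of a still capture with exposure set by hand through Camera2's capture request keys. This is my own illustrative example rather than code from any shipping app, and it assumes the camera device, capture session, and output surface have already been set up.

    import android.hardware.camera2.*;
    import android.view.Surface;

    // Hypothetical helper: fire a single still capture with manual exposure.
    // The specific ISO, shutter time, and focus values are arbitrary examples.
    void captureManual(CameraDevice camera, CameraCaptureSession session,
                       Surface target) throws CameraAccessException {
        CaptureRequest.Builder builder =
                camera.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
        builder.addTarget(target);

        // Disable the automatic modes so the manual values below take effect.
        builder.set(CaptureRequest.CONTROL_AE_MODE, CameraMetadata.CONTROL_AE_MODE_OFF);
        builder.set(CaptureRequest.CONTROL_AF_MODE, CameraMetadata.CONTROL_AF_MODE_OFF);

        builder.set(CaptureRequest.SENSOR_SENSITIVITY, 400);         // ISO 400
        builder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, 10000000L); // 10ms, in nanoseconds
        builder.set(CaptureRequest.LENS_FOCUS_DISTANCE, 0.0f);       // diopters; 0 = infinity

        session.capture(builder.build(), null, null);
    }

The point is that every parameter the automatic modes normally own, like ISO, shutter time, and focus distance, is now just another key a developer can set.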

ART

The move to Android Runtime, or ART, as the only Java runtime on Android Lollipop was one of the big announcements at Google IO. Google had actually included ART as an option that users and developers could enable on Android KitKat, but the older Dalvik runtime remained the default. For users who don't know, Android applications are primarily programmed in Java, and an application's bytecode has to be translated into native instructions for the hardware in a device. As of Android 2.2, Dalvik used a just-in-time (JIT) compiler to compile code as it was run, primarily due to the limited RAM and NAND included in Android devices in the past. ART instead uses an ahead-of-time (AOT) compiler to compile bytecode into native code when an application is installed. Although this requires applications to use more space, as the fully translated application has to be stored, it means the conversion from bytecode to native code happens only once, at install time, rather than every time the application runs.
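For the curious, it's possible to check which runtime an application actually ended up on. The small sketch below (the helper method is my own) relies on the java.vm.version system property, which reports 2.0.0 or higher under ART and a 1.x version under Dalvik.

    // Minimal sketch: distinguish ART from Dalvik by the VM version property.
    static boolean isRunningOnArt() {
        String vmVersion = System.getProperty("java.vm.version");
        if (vmVersion == null) {
            return false;
        }
        try {
            // ART reports a major version of 2 or higher; Dalvik reports 1.x.
            return Integer.parseInt(vmVersion.split("\\.")[0]) >= 2;
        } catch (NumberFormatException e) {
            return false; // unexpected format, assume Dalvik
        }
    }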

This shift from JIT to AOT compilation has large implications for application performance. By giving the compiler a view of the entire codebase, ART can perform deeper optimizations than a JIT compiler, which can only optimize the chunks of code it is working with at any given moment. At IO, Google claimed that benchmark applications see a performance increase of up to 2x when moving from Dalvik to ART, and certain cases like Chessbench see improvements of up to 3x.

Improvements to memory management also contribute to the performance gains. Being written in Java, Android applications rely on automatic memory management, which has unfortunately put them at the mercy of Dalvik's garbage collection routines, historically a large contributor to visual stutters, or "jank". For a very in-depth look at the new garbage collection in Lollipop, as well as the improvements in ART as a whole, I recommend taking a look at the article Andrei wrote on the subject earlier this year.
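As a contrived illustration of my own showing why the collector matters for smoothness: a custom view that allocates on every frame hands the garbage collector a steady stream of work, and under Dalvik a poorly timed collection pause could stall rendering long enough to drop frames.

    import android.content.Context;
    import android.graphics.Canvas;
    import android.graphics.Paint;
    import android.view.View;

    // Contrived sketch: per-frame allocation creates garbage collector pressure.
    class ScoreView extends View {
        private final Paint paint = new Paint(); // good: allocated once, reused
        private int score = 0;

        public ScoreView(Context context) {
            super(context);
            paint.setTextSize(48f);
        }

        @Override
        protected void onDraw(Canvas canvas) {
            // Bad: a brand new String every frame is avoidable garbage, and
            // enough of it invites a GC pause in the middle of an animation.
            String label = "Score: " + score;
            canvas.drawText(label, 0f, getHeight() / 2f, paint);
        }
    }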

Performance: Interface

Most of the people reading this review are likely familiar with jank, but for those who aren't, it can basically be described as the drops in frame rate that have existed on Android since its inception. It falls into two categories. The first is an outright stutter, where the frame rate of an animation effectively goes to zero for a moment as frames are dropped. This has been particularly pronounced during scrolling, and it can be reproduced simply by scrolling in the Google Play application. The cause usually relates to applications overloading the UI thread, or the Java runtime's garbage collector causing a visible pause in the application. The second is a frame rate that stays below 60fps but is still high enough to display an animation, which is what makes animations in the interface feel sluggish rather than as smooth as they could be. There are numerous reasons why applications can have sluggish animations, ranging from insufficient CPU or GPU power to badly programmed applications.
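For those who want to catch dropped frames in code rather than on a high-speed camera, here's a rough sketch of mine built on Android's Choreographer (available since Jelly Bean): it logs whenever two vsync callbacks arrive more than one frame interval apart. The class name and the 17ms threshold, which assumes a 60Hz display, are my own choices.

    import android.util.Log;
    import android.view.Choreographer;

    // Rough sketch: flag frames that took longer than one 60Hz interval (~16.7ms).
    // Must be started on a thread with a Looper, typically the main thread.
    class JankMonitor implements Choreographer.FrameCallback {
        private long lastFrameTimeNanos;

        public void start() {
            Choreographer.getInstance().postFrameCallback(this);
        }

        @Override
        public void doFrame(long frameTimeNanos) {
            if (lastFrameTimeNanos != 0) {
                long deltaMs = (frameTimeNanos - lastFrameTimeNanos) / 1000000;
                if (deltaMs > 17) {
                    Log.w("JankMonitor", deltaMs + "ms between vsyncs, frame(s) dropped");
                }
            }
            lastFrameTimeNanos = frameTimeNanos;
            Choreographer.getInstance().postFrameCallback(this); // re-arm for next frame
        }
    }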

With every release of Android, Google claims to have improved performance by reducing jank. In hindsight, initiatives like Project Butter obviously did bring improvements, but Google clearly oversold them by saying the interface was brought to a consistent 60fps. If it had actually done that, we wouldn't still be getting the same promise with every subsequent release. Jank has simply been a trade-off of running Android, and while Google has worked to minimize it through better software, and device makers through faster hardware, it has always been there to some extent.

With Android Lollipop, I'm actually confident in saying that many areas where jank was still a problem now run at a consistent 60fps. There is a caveat, which is that I don't know how many devices this will end up applying to. For whatever reason, the Nexus 6 has performance issues in areas where the Nexus 5 has none. During the writing of this review, I looked over several 240fps videos of animations on the Nexus 5 and Nexus 6, both running Android Lollipop, and analyzed the time between distinct frames shown on the display. It was clear that in any area where the Nexus 5 still has issues, the Nexus 6 performs even worse. For that reason, I'm going to limit my impressions to Android Lollipop on the Nexus 5, as I don't think it's fair to judge Lollipop based on whatever poor implementations were made on the Nexus 6.

What's very exciting is that there's actually not much to say: Lollipop is really the first release where basically everywhere on my Nexus 5 feels smooth, or "buttery" as it was supposed to be with Jelly Bean. It almost feels uncanny for an Android device. Using Android's on-screen GPU profiling confirms that there are few, if any, stutters. There are a couple of problematic areas, like scrolling in the new Calendar app and in the eternally poorly performing Google Play app, but these are rare exceptions in an extremely fast interface. Additionally, I have no reason to believe that the applications that currently have performance issues can't be improved with future updates. Overall, I think this puts an end to any discussion of Android being slow or laggy, except in some strange situations like the performance of the Nexus 6, which I am at a loss to explain.

Comments

  • kron123456789 - Saturday, December 6, 2014 - link

    Are you sure about that? The SoC labeled as Exynos 5433 was found in the Galaxy Note 4, but later Samsung claimed that the Galaxy Note 4 has the Exynos 7 Octa.
    http://www.samsung.com/global/business/semiconduct...
    Did they change the SoC through an OTA update? :)
  • OreoCookie - Monday, December 1, 2014 - link

    Have a look at Ars Technica's review of the Nexus 9 (http://arstechnica.com/gadgets/2014/11/nexus-9-rev...): you see that the Nexus 9 fares better in some benchmarks in 64-bit mode, while in others it's slower. The difference is always quite small (about 1~2.5%), so small that it's barely above the threshold of statistical error, I bet.

    That was surprising to some, because they expected the Denver-based K1 to get a 20~30% boost similar to what iOS devices see when comparing 32- and 64-bit modes. At this stage nobody knows whether the basically flat results are due to the unusual architecture of the Denver cores (maybe the result of the code morphing is microcode that is optimized to about the same level either way) or whether ART does not yet make good use of the architecture. Given that in 64-bit mode an ARM processor can address more registers and so on, I would guess it's the former, the unusual architecture of Denver. I really hope Anandtech subjects it to an architectural deep dive.
  • Solandri - Monday, December 1, 2014 - link

    The 20%-30% speed boosts I've seen in iOS benchmarks from going 64-bit were grossly exaggerated by using the mean. You can't use the mean, because by definition it weighs larger values more heavily. If there's a playground with a half dozen 8-year-old kids romping around, a single 90-year-old sitting on a bench will raise the mean age to 20. In the same way, a single large benchmark improvement can disproportionately skew the mean when the vast majority of benchmarks showed little to no improvement.

    You have to use the median in these cases. The median benchmark speedup I've seen has been about 5%-9%, and if you remove the benchmarks which improved due to specialized hardware being added, the median improvement drops below 5%. That is in line with the speedup Windows experienced when transitioning to 64-bit CPUs.

    Really, the only places where going to 64-bit can speed things up are flat memory addressing, which isn't a factor because none of the iOS devices have more than 4 GB; calculations using long long ints (64-bit ints), which almost nobody uses; and double floats, which outside of certain benchmarks are mostly used in scientific programming. Even most 3D game engines still use 32-bit floats, because they're "good enough" for most cases (doubles don't become necessary until you start dealing with extremely large distances, as in space simulation games). Most of the speed increase from Apple's 64-bit SoC comes from increasing the number of registers and from new specialized hardware speeding up things like AES by over 200%, both of which have nothing to do with 64-bit-ness. (Memory bandwidth is a bigger issue, due to light only being able to travel 12 cm in a single 2.5 GHz clock cycle. I'm not sure what memory controller ARM uses, but most devices have long since adopted 128-bit or larger memory controllers to get around this physics-imposed limit.)
  • The Hardcard - Monday, December 1, 2014 - link

    The added registers and other resources are tied to 64-bit-ness in the sense that ARM decided to make architectural improvements with the move to 64-bit. True, they could have added those to 32-bit ARM, but there wasn't a point in doing that.

    A possible explanation for the Denver results may not be in the code morphing in and of itself. It could just as well be that Denver uses the full register set and all the other resources for both 32-bit and 64-bit ARM emulation. It's not that the 64-bit results are so bad, but that the 32-bit results are so good; you could just be seeing what it would be like if ARM had added the registers and instructions to 32-bit.
  • OreoCookie - Monday, December 1, 2014 - link

    First of all, how you obtain an average from the raw data is part of the interpretation of that data. You argue for the median instead of the expectation value, but a more realistic average would weigh according to the instruction mix -- which varies. If encryption algorithms are used constantly (and they are used a fair bit on smartphones, with all the encrypted traffic), then a 250~800% speed advantage is not an outlier, but significant in applications. Moreover, many benchmarks such as these (http://www.anandtech.com/show/7335/the-iphone-5s-r...) are not necessarily meant to tell you how much faster the device will be in typical use (where most mobile SoCs are idling anyway), but rather probe specific aspects of the architecture in order to understand how it has evolved. Certainly I won't be the one arguing that simulating SDEs such as the Black-Scholes equation is a common real-world workload for SoCs ;-) And speaking of a 20~30% speedup on average (even if you exclude outliers) seems quite sensible given the benchmarks. If you want to argue about how to properly weight these benchmarks (e.g. that FP benchmark results which show more of an improvement are less important), that's a different discussion.

    You're also wrong when you claim that flat memory addressing is the only place where going 64-bit can speed things up: ARMv8 has twice the number of registers of ARMv7 (similar to going from x86 to x64), and while that has nothing to do with 64-bit itself, it has everything to do with being part of the new ARM ISA, which also happens to be 64-bit. There are also changes to the Objective-C runtime to take advantage of the 64-bit-ness; e.g. creation and destruction of objects was sped up by a factor of 2 (https://mikeash.com/pyblog/friday-qa-2013-09-27-ar...), which can cumulatively become significant in iOS apps where you constantly manipulate objects.

    I don't think the attempt to separate improvements which have nothing to do with 64-bit itself, but are rather part of the 64-bit ARMv8 ISA, is going to be practically useful, because we cannot decouple the different components. Ditto for hardware encryption logic which replaces software routines: it's a real-world advantage, and the only question is by how much.
  • name99 - Wednesday, December 3, 2014 - link

    It's a little ridiculous to claim that speedups from AES are "unfair" on the same blog that has been complaining about how dramatically Android's whole-disk encryption slows down the OS...
  • tuxRoller - Monday, December 1, 2014 - link

    Denver is a different enough arch (internally) that there may be other reasons why it isn't faster.
  • jnemesh - Tuesday, December 2, 2014 - link

    Did you see any performance benefit moving from 32-bit Windows to 64-bit Windows? Nope, not one little bit, and you won't see any here either. As with desktop PCs, the main benefit of 64-bit is that it allows you to address more than 4GB of memory. As most phones ship with 3GB or less, you won't see any benefit to a 64-bit environment unless and until apps start demanding more RAM.
  • Maleficum - Friday, December 26, 2014 - link

    You don't have to struggle to prove your ignorance.
    After the Itanium fiasco, Intel abandoned IA-64 and licensed the half-baked x86_64, where a larger address space is practically the only benefit.

    AArch64 is a full-fledged 64-bit architecture with a completely rewritten ISA, similar to what Intel dreamed of with the Itanium. A larger address space is just one of many side benefits of AArch64.
  • kspirit - Monday, December 1, 2014 - link

    I don't know why people refuse to accept that the Lollipop UI, with its Metro-like appearance of flat tiles and bold colours, is a blatant copy of Windows Phone. I'm not hating, I actually like Android a lot, but come on! It's wrong to be in denial like that.

    On my Nexus 4, the recent contacts in the dialler app are actually coloured squares arranged in a grid! I mean, seriously? Might as well have polished up Holo. At least it was unique to Android.
