Final Words

I think Google really hit the nail on the head with Android Lollipop. It evokes the same sort of excitement that the release of iOS 7 did, without some of the negative experiences that followed. A new interface is always exciting, as it can dramatically change how it feels to use your phone. Moving from KitKat to Lollipop still provides a familiar Android experience, but between the redesigned UI and the big improvements to performance, it almost feels like getting a new phone. Unlike the upgrade to iOS 7, however, Lollipop hasn't plagued my devices with application crashes and other bugs. In fact, I haven't noticed any significant bugs at all since upgrading, which says a great deal about the testing Google has done to make sure things are stable.

Material Design impresses me, and I think it's going to be around for many years to come. That's reassuring, because Google has a track record of redesigning large portions of Android with every major release. The past designs never really felt like they were going to stick around for long, and they never did. Material Design feels like Google has finally gotten Android to where they want it to be, with an interface that doesn't need any OEM overlays to be presentable. Of course, some OEMs will never change their policy of applying their own skins, but that's not something Google can fix. While we may see iterative changes to Lollipop's interface in future updates, I don't expect them to go beyond things like the placement of buttons or the color of icons. Material Design also extends far beyond your Android device. It will eventually apply to all of Google's services on all platforms, so that your web browser, your tablet, your smartphone, and even your watch will all look and behave similarly.

The performance increases are also greatly appreciated. Android hardware has advanced rapidly, and the move to a new application runtime is overdue but warmly welcomed. The improvements it brings can genuinely make a device feel new again. Interface performance on Android still isn't perfect, but Android is hardly alone in this regard; I can name areas of every major smartphone OS that are susceptible to frame rate drops, because it's simply not possible to write perfect software. What can be said is that overall, Android is now at roughly the same level as Windows Phone and iOS for animation smoothness and general performance. There are still exceptions in certain poorly written applications, which are up to developers to fix, and some of these even come from Google themselves. But I'm confident we're moving toward a point where these remaining issues get fixed simply because they aren't acceptable anymore.

Of course, the last thing to discuss about an Android update is whether or not you're going to get it. Unfortunately, I still can't answer this question for most users. Android's nature means that Google has no influence over when users receive their updates, except for those with Nexus and Google Play Edition devices. Although I can't guarantee you an update, I can say that the situation is looking better than it has in the past. We've seen updates ship in record time from companies like LG, NVIDIA, and Motorola, and they should be commended for putting in the effort to get updates out to users quickly. Other companies like HTC have promised to update their flagship devices from this year and last year to Lollipop within a 90-day time frame. While this doesn't cover every Android user in the world, it covers more users than we've ever seen before.

Going into the future, there are some improvements Google should make. Continuing to work on the performance of problematic applications is definitely necessary, as they stand out more than ever alongside a library of extremely well-performing apps. It would also be worth creating dedicated landscape layouts for applications, and introducing more features that take advantage of larger displays. Google has entered the phablet market with the Nexus 6, and they need software that provides a reason for having such a large device. Beyond that, they need to keep innovating and improving, which they've been able to do time and time again. We haven't yet seen everything Android Lollipop has to offer, as developers are only beginning to take advantage of the new APIs and features it brings. But with a great new interface, new applications, a new runtime, and new users adopting Android every day, the future of Android certainly looks bright.

Comments

  • kron123456789 - Saturday, December 6, 2014 - link

    Are you sure about that? The SoC labeled as Exynos 5433 was found in the Galaxy Note 4, but Samsung later claimed that the Galaxy Note 4 has an Exynos 7 Octa.
    http://www.samsung.com/global/business/semiconduct...
    Did they change the SoC through an OTA update? :)
  • OreoCookie - Monday, December 1, 2014 - link

    Have a look at Ars Technica's review of the Nexus 9 (http://arstechnica.com/gadgets/2014/11/nexus-9-rev...): you see that the Nexus 9 fares better in some benchmarks in 64-bit mode while in others it's slower. The difference is always quite small (about 1~2.5 %), so small that it's barely above the threshold of statistical error, I bet.

    That was surprising to some, because they expected the Denver-based K1 to get a 20~30 % boost similar to what iOS devices saw when comparing 32-bit and 64-bit modes. At this stage nobody knows whether the basically flat results are due to the unusual architecture of the Denver cores (maybe the code morphing produces microcode that is already optimized to about the same level) or whether ART does not yet make good use of the architecture. Given that in 64-bit mode ARM processors can address more registers etc., I would guess it's the former, the unusual architecture of Denver. I really hope Anandtech subjects it to an architectural deep dive.
  • Solandri - Monday, December 1, 2014 - link

    The 20%-30% speed boosts I've seen in iOS benchmarks from going 64-bit were grossly exaggerated by using the mean. You can't use the mean because it by definition weights larger values more heavily. If there's a playground with a half dozen 8-year-old kids romping around, a single 90-year-old sitting on a bench will raise the mean age to 20. So a single large benchmark improvement can disproportionately skew the mean when the vast majority of benchmarks showed little to no improvement.

    You have to use the median in these cases. The median benchmark speedup I've seen has been about 5%-9%. If you remove the benchmarks which improved due to specialized hardware being added, the median improvement drops below 5%, which is in line with the speedup Windows experienced when transitioning to 64-bit CPUs.

    Really, the only places where going 64-bit can speed things up are flat memory addressing, which isn't a factor because no iOS device has more than 4 GB of memory; calculations using long long ints (64-bit ints), which almost nobody uses; and double floats, which outside of certain benchmarks are mostly used in scientific programming. Even most 3D game engines still use 32-bit floats because they're "good enough" for most cases (doubles don't become necessary until you start dealing with extremely large distances, like space simulation games). Most of the speed increase from Apple's 64-bit SoC comes from increasing the number of registers and from new specialized hardware speeding up things like AES by over 200%, both of which have nothing to do with 64-bit-ness. (Memory bandwidth is a bigger issue, since light can only travel 12 cm in a single 2.5 GHz clock cycle. I'm not sure what memory controller ARM uses, but most devices have long since adopted 128-bit or wider memory controllers to get around this physics-imposed limit.)
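    To illustrate the mean-vs-median point with a minimal sketch (hypothetical speedup figures chosen for illustration, not actual benchmark data): a single hardware-accelerated outlier drags the mean up sharply while barely moving the median.

    ```python
    # Hypothetical per-benchmark speedups (1.0 = no change); the last value
    # stands in for a hardware-accelerated outlier such as AES.
    from statistics import mean, median

    speedups = [1.02, 1.03, 1.04, 1.05, 1.07, 1.08, 3.00]

    print(f"mean:   {mean(speedups):.2f}x")    # ~1.33x -- looks like a big win
    print(f"median: {median(speedups):.2f}x")  # 1.05x -- closer to the typical case
    ```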
  • The Hardcard - Monday, December 1, 2014 - link

    The added registers and other resources are tied to 64-bit-ness in the sense that ARM decided to make architectural improvements with the move to 64-bit. True, they could have added those to 32-bit ARM, but there wasn't much point in doing that.

    A possible explanation for the Denver results may not lie in the code morphing in and of itself. It could just as well be that it uses the full register set and all the other resources for both 32-bit and 64-bit ARM emulation. Because it's not that the 64-bit results are so bad, but that the 32-bit results are so good. You could just be seeing what it would be like if ARM had added the registers and instructions to 32-bit.
  • OreoCookie - Monday, December 1, 2014 - link

    First of all, how you obtain an average from the raw data is itself part of interpreting the data. You argue for the median instead of the expectation value, but a more realistic average would weight according to the instruction mix -- which varies. If encryption algorithms are used constantly (and they are used a fair bit on smartphones with all the encrypted traffic), then a 250~800 % speed advantage is not an outlier, but significant in applications. Moreover, many of the benchmarks such as these (http://www.anandtech.com/show/7335/the-iphone-5s-r...) are not necessarily meant to tell you how much faster the device will be in typical use (where most mobile SoCs are idling anyway), but rather probe specific aspects of the architecture in order to understand how it has evolved. Certainly I won't be the one arguing that simulating SDEs such as the Black-Scholes equation is a common real-world application on SoCs ;-) And speaking of a 20~30 % average speedup (even if you exclude outliers) seems quite sensible given the benchmarks. If you want to argue about how to properly weight these benchmarks (e. g. that FP benchmark results, which show more of an improvement, are less important), that's a different discussion.

    You're also wrong when you claim that flat memory addressing is the only place where going 64-bit can speed things up: ARMv8 has twice the number of registers compared to ARMv7 (similar to going from x86 to x64) -- and while this has nothing to do with 64-bit itself, it has everything to do with being part of the new ARM ISA, which also happens to be 64-bit. There are also changes to the Objective-C runtime to take advantage of the 64-bit-ness, e. g. creation and destruction of objects was sped up by a factor of 2 (https://mikeash.com/pyblog/friday-qa-2013-09-27-ar...), which can cumulatively become significant in iOS apps where you constantly manipulate objects.

    I don't think trying to separate improvements that have nothing to do with 64-bit itself, but are rather part of the 64-bit ARMv8 ISA, is going to be practically useful, because we cannot decouple the different components. Ditto for hardware encryption logic which replaces software routines: it's a real-world advantage, and the only question is by how much.
  • name99 - Wednesday, December 3, 2014 - link

    It's a little ridiculous to claim that speedups from AES are "unfair" on the same blog that has been complaining about how dramatically Android's whole-disk encryption slows down the OS...
  • tuxRoller - Monday, December 1, 2014 - link

    Denver is a different enough arch (internally) that there may be other reasons why it isn't faster.
  • jnemesh - Tuesday, December 2, 2014 - link

    Did you see any performance benefit moving from Windows 32-bit to Windows 64-bit? Nope. Not one little bit. You won't see any here either. As with desktop PCs, the main benefit of 64-bit is that it allows you to address more than 4GB of memory. As most phones are shipping with 3GB or less, you won't see any benefit from a 64-bit environment unless and until apps start demanding more RAM.
  • Maleficum - Friday, December 26, 2014 - link

    You don't have to struggle to prove your ignorance.
    After the Itanium fiasco, Intel abandoned IA64 and licensed the half-baked x86_64, where a larger addressing space is practically the only benefit.

    AArch64 is a full-fledged 64-bit architecture with a completely rewritten ISA, similar to what Intel dreamed of with the Itanium. A larger addressing space is just one of its many side benefits.
  • kspirit - Monday, December 1, 2014 - link

    I don't know why people refuse to accept that the Lollipop UI, with its Metro-like appearance of flat tiles and bold colours, is a blatant copy of Windows Phone. I'm not hating, I actually like Android a lot, but come on! It's wrong to be in denial like that.

    On my Nexus 4, the recent contacts in the dialler app are actually coloured squares arranged in a grid! I mean, seriously? Might as well have polished up Holo. At least it was unique to Android.
