At the latest I/O conference, Google finally publicly announced its plans for its new runtime on Android. The Android RunTime, ART, is the successor to and replacement for Dalvik, the virtual machine on which Android Java code is executed. We’ve had traces and previews of it available on KitKat devices since last fall, but there wasn’t much information on the technical details or the direction Google was heading with it.

Unlike other mobile platforms such as iOS, Windows, or Tizen, which run software compiled natively for their specific hardware architecture, the majority of Android software is built around a generic byte-code format that is translated into native instructions for the hardware on the device itself.

From the earliest Android versions, Dalvik started out as a simple VM with little complexity. Over time, however, Google needed to address performance concerns and keep up with the industry's hardware advances. Google eventually added a JIT compiler to Dalvik with Android 2.2, added multi-threading capabilities, and generally tried to improve it piece by piece.

Over the last few years, however, the ecosystem had been outpacing Dalvik's development, so Google sought to build something new to serve as a solid foundation for the future: a runtime that could scale with the performance of today's and tomorrow's 8-core devices, large storage capacities, and large working memories.

Thus ART was born.

Architecture

First, ART is designed to be fully compatible with Dalvik’s existing byte-code format, “dex” (Dalvik executable). As such, from a developer’s perspective there are no changes at all: applications are written the same way for either runtime, with no compatibility concerns.

The big paradigm shift that ART brings is that instead of compiling application code Just-in-Time (JIT), it now compiles it Ahead-of-Time (AOT). The runtime goes from having to compile from bytecode to native code each time an application runs to doing it only once; every subsequent execution from that point forward runs the existing compiled native code.
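The compile-once model can be sketched in a few lines of Java. Everything here is invented for illustration (the class, the placeholder "compilation" step); it only shows the relationship between launches and translation work under the two models, not anything ART actually does internally:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: compile() stands in for dex-to-native
// translation; the class and method names are invented, not ART APIs.
public class AotCacheSketch {
    private final Map<String, String> nativeCodeCache = new HashMap<>();
    private int compilations = 0;

    // JIT-style: translate on every launch (simplified to once per call).
    String runWithJit(String dexBytecode) {
        compilations++;
        return compile(dexBytecode);
    }

    // AOT-style: translate once (e.g. at install time), reuse afterwards.
    String runWithAot(String dexBytecode) {
        return nativeCodeCache.computeIfAbsent(dexBytecode, dex -> {
            compilations++;
            return compile(dex);
        });
    }

    private String compile(String dex) {
        return "native:" + dex; // placeholder for real code generation
    }

    int compilations() { return compilations; }

    public static void main(String[] args) {
        AotCacheSketch runtime = new AotCacheSketch();
        for (int launch = 0; launch < 3; launch++) {
            runtime.runWithAot("app.dex");
        }
        // Three launches, but only one translation pass.
        System.out.println("compilations=" + runtime.compilations());
    }
}
```

The point of the sketch: under AOT the translation cost is paid once and amortized over every later launch, which is exactly the trade ART makes.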

Of course, these native translations of applications take up space, and this new methodology has only become feasible thanks to the vast increases in available storage on today's devices, a big shift from the early days of Android hardware.

This shift opens up a large number of optimizations that were not possible in the past: because code is optimized and compiled only once, it is worth optimizing it really well that one time. Google claims it can now apply higher-level optimizations across the whole of an application's code-base, as the compiler has a view of the totality of the code, whereas the current JIT compiler only optimizes in local/method-sized chunks. Overhead such as exception checks in code is largely removed, and method and interface calls are vastly sped up. The process that does this is the new “dex2oat” component, which replaces Dalvik's “dexopt”. Odex files (optimized dex) also disappear in ART, replaced by ELF files.
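As a hedged illustration of one such whole-program optimization, consider devirtualization: if a compiler can see every implementation of an interface and finds exactly one, it can resolve the virtual call at compile time and inline it. The "devirtualized" method below is hand-written to mirror what such a compiler could derive; `Shape` and `Square` are made-up names for illustration, not ART internals:

```java
// Hedged illustration of whole-program devirtualization; the
// "devirtualized" path is written by hand to mirror what an AOT
// compiler like dex2oat could derive, not actual compiler output.
interface Shape {
    int area();
}

final class Square implements Shape {
    final int side;
    Square(int side) { this.side = side; }
    public int area() { return side * side; }
}

public class DevirtualizeSketch {
    // As written by the developer: a virtual dispatch through Shape.
    static int areaVirtual(Shape s) {
        return s.area();
    }

    // What a compiler that sees the whole program can emit once it
    // proves Square is the only implementation: a direct, inlined call.
    static int areaDevirtualized(Square s) {
        return s.side * s.side;
    }

    public static void main(String[] args) {
        Square sq = new Square(7);
        System.out.println(areaVirtual(sq));        // 49
        System.out.println(areaDevirtualized(sq));  // 49
    }
}
```

A JIT can attempt the same trick speculatively, but it must guard against classes loaded later; an AOT pass over the whole code-base can commit to it outright.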

Because ART compiles to an ELF executable, the kernel can now handle paging of the code pages - this promises much better memory management, and lower memory usage too. I’m curious what effect KSM (kernel same-page merging) has on ART; it’s definitely something to keep an eye on.
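The mechanism at work is ordinary demand paging of a memory-mapped, read-only file: clean pages live in the kernel's page cache, can be shared between processes, and can be dropped and re-read rather than swapped. The Java sketch below shows the general idea using the standard `FileChannel.map` API; the "fake-oat" file is an invented stand-in, and this is not ART's actual loader:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch of the general mechanism, not ART itself: mapping a file
// read-only lets the kernel page it in on demand and share the clean
// pages between processes, which is what an ELF oat file gets for free.
public class MmapSketch {
    static String readMapped(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            byte[] bytes = new byte[(int) ch.size()];
            map.get(bytes);
            return new String(bytes, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("fake-oat", ".bin");
        Files.write(file, "compiled-code".getBytes(StandardCharsets.UTF_8));
        // The pages backing the mapping stay in the kernel page cache.
        System.out.println(readMapped(file));
        Files.deleteIfExists(file);
    }
}
```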

The implications for battery life are also significant: since there is no more interpretation or JIT work to be done while an app runs, CPU cycles are saved directly, and with them power.

The only downside to all of this is that the one-time compilation takes more time to complete. A device’s first boot and an application’s first start-up will take noticeably longer than on an equivalent Dalvik system. Google claims this is not too dramatic, as it expects the finished shipping runtime to be equivalent to or even faster than Dalvik in these aspects.

The performance gains over Dalvik are significant, as pictured above: roughly a 2x improvement in speed for code running on the VM. Google claimed that applications such as Chessbench, which shows an almost 3x increase, are a more representative projection of the real-world gains that can be expected once the final release of Android L is made available.

Garbage Collection: Theory and Practice

  • darkich - Wednesday, July 02, 2014 - link

    The "iOS browser smoothness" you are talking about is actually nowhere near the smoothness I experience with Dolphin on Jetpack on my Note 3.
    End of story
  • Flunk - Wednesday, July 02, 2014 - link

    Dolphin is a good browser but scrolling is still a lot choppier than Chrome on either my Nexus 10 or Nexus 5. Smoothness is not exactly its best feature.
  • darkich - Wednesday, July 02, 2014 - link

    Lol, that is a flat out lie!
    Have you used Dolphin Jetpack?
  • henrybravo - Wednesday, July 02, 2014 - link

    @darkich As difficult as it may be for you to believe, Dolphin 11.x with Jetpack enabled is not a consistently smooth experience across all devices. My HTC One M8 is such an example. A "heavy" website such as the newly redesigned androidcentral.com is buttery smooth on Chrome and the built-in HTC Internet app, but on Dolphin it loads slower and is a bit jittery when scrolling through the page. I realize that optimizing an app such as a web browser to be smooth across a large number of devices is difficult, but when you call people flat out liars because their experience differs from yours (on a different device nonetheless) just shows you don't know what you're talking about.
  • darkich - Wednesday, July 02, 2014 - link

    I know precisely what I am talking about.
    He was referring to scrolling performance specifically, and in general, scrolling on Dolphin Jetpack is by far the fastest and most fluid out of any browser.
    Yes, I can also confirm that Dolphin has issues on some pages, but that doesn't change the overall picture when we talk about performance and fluidity.
    Show me a browser that handles every page flawlessly and then I will give you a point
  • darkich - Wednesday, July 02, 2014 - link

    As for Androidcentral, well I just tested it on Chrome and Dolphin.
    A single swipe on Dolphin scrolls through the entire front page in desktop mode (in typical Dolphin Jetpack fashion).
    Chrome? Gets only about halfway through!
    And Safari is even far worse.
    There is just no comparison
  • phoenix_rizzen - Wednesday, July 02, 2014 - link

    "Number of swipes to reach end of page" is not the same metric as "scrolling is buttery smooth at all times". In fact, they aren't even remotely related. The two of you are talking about completely different things, almost orthogonal to each other.
  • darkich - Thursday, July 03, 2014 - link

    That was completely nonsensical.
    By the most basic and obvious logic, the speed of scrolling is the very first metric of its smoothness.
    If you have two wheels and spin them with the same amount of force, and one spins twice as long as the other - which one would you regard as the "smoother" one?
  • sonicmerlin - Wednesday, July 02, 2014 - link

    In the past I would have singled you out as being stupid, but I've seen a number of android users make the exact same utterly bizarre connection between scrolling speed and smoothness. Has it ever occurred to you that high velocity is used to hide jank and stuttering?
  • darkich - Thursday, July 03, 2014 - link

    Read the above comment.

    Are you saying the high scrolling velocity is unnecessary and has no practical benefit!?!?!

    If that's indeed what you think, then you definitely won the stupid mark.
