ARM Compatibility: Binary Translation

Similar to Apple's move from PowerPC to x86, Intel finds itself in a difficult position in bringing Atom to Android. The OS isn't an issue: it has already been ported to x86, and all future releases will be available in both ARM and x86 flavors. The bigger problem is application compatibility.

There's already support for targeting both ARM and x86 architectures in the Android NDK, so anything developed going forward should be fine as long as the developer remembers to build for x86.
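In practice, targeting both architectures is a one-line change in an NDK project's Application.mk. The fragment below is illustrative rather than taken from any particular app; x86 ABI support arrived in NDK r6:

```makefile
# Application.mk -- build the app's native libraries for ARM and x86
# "armeabi" / "armeabi-v7a" are the ARM ABIs; "x86" was added in NDK r6
APP_ABI := armeabi armeabi-v7a x86
```

With that in place, ndk-build produces one copy of each native library per ABI, and the installer picks the right one for the device.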

Obviously the first party apps already work on x86, but what about those in the Market?

By default all Android apps run in a VM and are thus processor-architecture agnostic. As long as an app only calls Android libraries that contain no native ARM code, there's once again no problem. Where Intel will run into trouble is with apps that call native ARM libraries, or apps that are themselves ARM native (e.g. virtually anything CPU intensive, like a 3D game).

Intel believes that roughly 75% of all Android apps in the Market contain no native ARM code. The remaining 25% are the issue. The presumption is that this will eventually be a non-issue (for the reasons described above), but what do users of the first x86 Android phones do in the meantime? Two words: binary translation.

Intel isn't disclosing much about the solution, but by intercepting ARM binaries and translating the ARM code to x86 on the fly during execution, Intel hopes to achieve ~90% app compatibility at launch. Binary translation is typically noticeably slower than running native code, although Intel is unsurprisingly optimistic about the experience on Android. I'm still very skeptical about the overall experience, but we'll have to wait and see for ourselves.
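Intel hasn't said how its translator works, but the general technique is well established: translate each basic block of guest code into host-native code the first time it executes, cache the result, and run from the cache thereafter. The sketch below illustrates only that shape; the three-instruction "guest ISA" is invented for the example and is not real ARM code.

```python
# Toy dynamic binary translator: illustrates the translate-once, cache,
# re-execute pattern. The guest ISA here is invented, not real ARM.

TRANSLATION_CACHE = {}  # guest block id -> translated host callable

def translate_block(block_id, guest_instrs):
    """Translate one guest basic block into a host-native callable."""
    ops = []
    for op, dst, src in guest_instrs:  # e.g. ("MOV", "r0", 3)
        if op == "MOV":    # load an immediate into a register
            ops.append(lambda regs, d=dst, s=src: regs.__setitem__(d, s))
        elif op == "ADD":  # dst = dst + src (src is a register name)
            ops.append(lambda regs, d=dst, s=src:
                       regs.__setitem__(d, regs[d] + regs[s]))
        elif op == "MUL":  # dst = dst * src
            ops.append(lambda regs, d=dst, s=src:
                       regs.__setitem__(d, regs[d] * regs[s]))
        else:
            raise ValueError(f"unknown guest op {op!r}")

    def host_block(regs):
        for fn in ops:
            fn(regs)

    TRANSLATION_CACHE[block_id] = host_block
    return host_block

def execute(block_id, guest_instrs, regs):
    """Run a guest block: translate on first sight, reuse the cache after."""
    host_block = TRANSLATION_CACHE.get(block_id)
    if host_block is None:   # cold path: pay the translation cost once
        host_block = translate_block(block_id, guest_instrs)
    host_block(regs)         # warm path: near-native dispatch
    return regs

regs = {"r0": 0, "r1": 0}
program = [("MOV", "r0", 3), ("MOV", "r1", 4), ("ADD", "r0", "r1")]
print(execute("block0", program, regs))  # {'r0': 7, 'r1': 4}
```

The translation cost is paid once per block, which is why hot loops eventually approach native speed while cold, rarely-executed code bears most of the overhead; a real translator also has to handle self-modifying code, indirect branches, and memory-model differences, none of which the toy above attempts.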

 

164 Comments

  • mrtanner70 - Wednesday, January 11, 2012 - link

    I am a long-time fan of the site and still think your actual product testing is the best. However, over the last year or so there has been a noticeable pattern of being a bit too enthusiastic toward Intel PR and spin. I would like to see a little more intelligent skepticism when dealing with Intel reference designs and slide decks.
  • Lucian Armasu - Thursday, January 12, 2012 - link

    I have to agree about Intel spinning stuff more and more lately, too. I've noticed it since they announced the "3D" chips. Everyone reacted as if the 3D chips were 10x better than SNB or whatever, when in fact that's not true. The 3D chips offer only +37% performance at the same power level, OR -50% power consumption at the same performance level. That's hardly as newsworthy as they made it sound. It puts them at most one extra year ahead of AMD, on top of the ~6-month lead they already have.

    So I don't know how everyone concluded from that that they'll now be able to compete with ARM, which at the same performance has much better power efficiency and also costs several times less.
  • Hector2 - Thursday, January 12, 2012 - link

    What you said is about right for "active" power. A couple of things not mentioned are that the 22nm chip will be quite a bit smaller than the 32nm one (meaning even lower cost) and that the FinFET transistors cut "standby" power to 10-20x below even the 32nm version.

    As was said somewhere, Medfield gets Intel's foot in the door with a competitive product --- and I think most here will agree that it's at least competitive. Certainly, Intel's new partners must think so. The coming of the 22nm upgrade in 2013 is where it really gets interesting.

    It's taken Intel a while to carve out a viable single-chip design that's product-worthy. Now that they have one, I expect them to march with the same tick-tock cadence that they've driven the PC market with -- and they already have working 3D FinFET 14nm chips in the lab.
  • Finally - Thursday, January 12, 2012 - link

    Mark my words:
    The day that AnandTech welcomes a revolutionary AMD product (think e.g. Brazos II) with the words "It's finally here" will be the very day hell freezes over.
  • Hector2 - Thursday, January 19, 2012 - link

    Hmmm. When do you think AMD might be entering the smartphone market?
  • thunng8 - Wednesday, January 11, 2012 - link

    That Intel graph for graphics performance is way off and misleading.

    If one of the smartphones tested is the iPhone 4S, then that phone should have a >2x graphics performance advantage over the Medfield reference platform.
  • solipsism - Wednesday, January 11, 2012 - link

    If Medfield is as good as it looks today, and Intel can keep moving along as Cortex-A15 arrives, tablets and smartphones could go Intel-based and Apple could be in serious trouble by sticking with ARM. It could be the whole PPC issue all over again, where Apple lingers for way too long before switching architectures. That's a worst-case perfect storm, of course, and a long way off, but it's still something that can't be good for vertically integrated companies with heavy ARM investment.
  • markit - Wednesday, January 11, 2012 - link

    In fact a two-year-old Cortex-A9 (even single-core) outperforms it per MHz.

    E.g., if I scale Medfield's results down linearly (which is IMHO quite accurate for these benches), I get:

    @1GHz:
    SunSpider 2130ms and BrowserMark 72766 points

    Compare that against my old Tegra 2-based Toshiba AC100 (running Ubuntu and Chrome 13).

    With one core running @1GHz and the second core turned off, I get:
    SunSpider 1962ms (9% faster per MHz)
    BrowserMark 111872 (54% faster per MHz than Medfield)

    So on BrowserMark the AC100 in fact nearly matches the 1.6GHz results of this Medfield reference platform while running on only one 1GHz core...

    BTW, with both cores turned on @1GHz:
    SunSpider 1941ms
    BrowserMark 114523

    So hmm, I think I can guess why Intel didn't choose dual-core-friendly benchmarks...

    And remember, the AC100 hit the stores in June 2010!
  • FunBunny2 - Wednesday, January 11, 2012 - link

    I'm still puzzled by this. It's been at least a decade since an Intel chip executed x86 code directly in hardware; the chip is a RISC-like design fronted by a decoder. Or has Intel gone back to x86 instructions in hardware for these SoCs?
  • dealcorn - Thursday, January 12, 2012 - link

    Was I deceived? I read about FinFET and 3D transistors and was impressed, because it is credible technology that will permit Intel to do good things before others. That colored my perspective on mobile Atom, leading to statements such as "Show time starts at 22nm." I felt that, absent the 3D benefits, Intel would find it tough to go head to head against ARM. While Intel focused attention on 3D, the implication was they are dead in the water until 22nm arrives.

    That is wrong. Mobile Atom is never going to be built using stock 22nm transistors; it will be built using 22nm LP transistors. Furthermore, it appears that 32nm LP transistors get the job done just fine (even if plain 32nm is worthless in this context). Is 32nm LP the secret sauce that lets it all work? Should the headline have been "Intel's process-advantage muscle works almost as well at 32nm LP as it was supposed to at 22nm"?
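Incidentally, the per-MHz scaling arithmetic in markit's comment above checks out; a few lines reproduce it (the Medfield figures are that comment's own linear extrapolation from the 1.6GHz reference results, and the AC100 figures are as reported there):

```python
# Reproduce the per-MHz comparison from markit's comment.
# Medfield numbers are the commenter's linear scaling of the 1.6GHz
# reference results down to 1GHz; AC100 numbers are as reported.

medfield_1ghz = {"sunspider_ms": 2130, "browsermark": 72766}
ac100_1ghz    = {"sunspider_ms": 1962, "browsermark": 111872}

# SunSpider is a time (lower is better); BrowserMark is a score (higher is better)
sunspider_gain = medfield_1ghz["sunspider_ms"] / ac100_1ghz["sunspider_ms"] - 1
browsermark_gain = ac100_1ghz["browsermark"] / medfield_1ghz["browsermark"] - 1

print(f"AC100 vs Medfield @1GHz: SunSpider {sunspider_gain:+.0%}, "
      f"BrowserMark {browsermark_gain:+.0%}")
# AC100 vs Medfield @1GHz: SunSpider +9%, BrowserMark +54%
```

Whether linear scaling with clock is a fair assumption for these browser benchmarks is debatable (memory latency doesn't scale with core clock), but the ratios quoted in the comment do follow from its own numbers.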
