ARM Compatibility: Binary Translation

Similar to Apple's move from PowerPC to x86, Intel finds itself in a difficult position in bringing Atom to Android. The OS itself isn't an issue: Android has already been ported to x86, and all future releases will be available in both ARM and x86 flavors. The bigger problem is application compatibility.

There's already support for targeting both ARM and x86 architectures in the Android NDK, so anything developed going forward should be fine so long as the developer builds for x86 as well.
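
For what it's worth, with the NDK releases of this era (r6 and later) building the same native code for both architectures is roughly a one-line change to a project's build configuration. The snippet below assumes the standard ndk-build setup and is only meant as an illustration:

    # Application.mk -- build native libraries for both ARM and x86 ABIs
    APP_ABI := armeabi armeabi-v7a x86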

Obviously the first-party apps already work on x86, but what about those in the Market?

By default all Android apps run in a VM and are thus processor-architecture agnostic. As long as an app only calls Android libraries that aren't ARM native, there once again shouldn't be a problem. Where Intel will have a problem is with apps that call native ARM libraries, or apps that are themselves ARM native (e.g. virtually anything CPU intensive, like a 3D game).
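
As a rough illustration of that split (this is just a sketch, not anything Intel or Google ships): an APK is an ordinary ZIP archive, and any bundled native code lives under lib/<abi>/, so listing those directories is enough to tell a bytecode-only app from one that ships ARM-only libraries.

    # check_abis.py -- list the native ABIs an APK ships libraries for (illustrative sketch)
    import sys
    import zipfile

    def native_abis(apk_path):
        """Return the set of ABIs (e.g. 'armeabi', 'armeabi-v7a', 'x86') with .so files in the APK."""
        abis = set()
        with zipfile.ZipFile(apk_path) as apk:
            for name in apk.namelist():
                parts = name.split("/")
                if parts[0] == "lib" and len(parts) >= 3 and name.endswith(".so"):
                    abis.add(parts[1])
        return abis

    if __name__ == "__main__":
        abis = native_abis(sys.argv[1])
        if not abis:
            print("No native code found: bytecode only, architecture agnostic")
        elif "x86" in abis:
            print("Ships x86 native libraries:", sorted(abis))
        else:
            print("ARM-only native libraries:", sorted(abis), "- would need binary translation on x86")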

Intel believes that roughly 75% of all Android apps in the Market don't feature any native ARM code; the remaining 25% are the issue. The presumption is that this will eventually become a non-issue as developers rebuild for x86 (as described above), but what do users of the first x86 Android phones do in the meantime? Two words: binary translation.

Intel isn't disclosing much about the solution, but by intercepting ARM binaries and translating ARM code to x86 code on the fly during execution, Intel is hoping to achieve ~90% app compatibility at launch. Binary translation is typically noticeably slower than running native code, although Intel is unsurprisingly optimistic about the experience on Android. I'm still very skeptical about the overall experience, but we'll have to wait and see for ourselves.
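
Intel hasn't published details of its translator, but the general shape of dynamic binary translation is well understood: decode each guest (ARM) instruction the first time it's executed, emit equivalent host (x86) code into a translation cache, and reuse the cached translation on every subsequent pass. The sketch below is purely illustrative of that structure; it uses a made-up tuple "ISA" and compiled Python in place of real ARM and x86 machine code.

    # toy_dbt.py -- minimal translate-once, cache, re-execute loop (illustrative only)
    GUEST_PROGRAM = [            # stand-ins for ARM instructions
        ("MOV", "r0", 5),        # r0 = 5
        ("ADD", "r0", "r0", 3),  # r0 = r0 + 3
        ("MOV", "r1", 2),        # r1 = 2
        ("MUL", "r0", "r0", "r1"),
    ]

    translation_cache = {}       # guest PC -> compiled "host" code

    def translate(instr):
        """Decode one guest instruction and emit equivalent host code (compiled Python here)."""
        op, dst, *srcs = instr
        args = [str(s) if isinstance(s, int) else f"regs['{s}']" for s in srcs]
        exprs = {"MOV": "{0}", "ADD": "{0} + {1}", "MUL": "{0} * {1}"}
        host_src = f"regs['{dst}'] = " + exprs[op].format(*args)
        return compile(host_src, f"<translated {op}>", "exec")

    def run(program):
        regs = {"r0": 0, "r1": 0}
        for pc, instr in enumerate(program):
            if pc not in translation_cache:               # first visit: translate
                translation_cache[pc] = translate(instr)
            exec(translation_cache[pc], {"regs": regs})   # later visits hit the cache
        return regs

    print(run(GUEST_PROGRAM))    # {'r0': 16, 'r1': 2}

A real translator additionally has to deal with self-modifying code, chaining between translated blocks, and the boundary between translated code and the rest of the system, which is where the performance and compatibility costs tend to come from.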

 

Comments

  • french toast - Wednesday, January 18, 2012 - link

    Yeah, don't tell us you've never heard of Intel's anti-competitive practices? Hell, they've already been fined billions of dollars for it.
  • ltcommanderdata - Tuesday, January 10, 2012 - link

    So Intel has switched from the DirectX compliant SGX535 to the OpenGL ES only SGX540? Does this mean they have no plans to support Windows Phone or Windows with Medfield?

    In regard to the memory interface, many Cortex A9 implementations include a 64-bit memory controller just like Medfield. If Intel is saying the Cortex A9 is still memory bandwidth limited, does that mean that ARM memory controllers are currently inefficient? Would increasing the L2 cache from the current 512KB per core in Cortex A9 implementations be an effective way to mitigate this?
  • guilmon19 - Tuesday, January 10, 2012 - link

    " Atom can support far more outstanding misses in L2 than the Cortex A9, which chokes bandwidth to the processor for anything not already in the L2 cache."

    It looks like the cache is the problem, and it's more of a controller issue than the size of the cache itself. Increasing the size of the cache would help, but it wouldn't be the most efficient solution.
  • wumpus - Wednesday, January 11, 2012 - link

    The article implies that the core somehow handles it. Claiming that an in-order CPU can handle cache misses better than an out-of-order one has to be wrong. I wouldn't be surprised if the Intel cache/memory controller is sufficiently better to cause these results.
  • Exophase - Wednesday, January 11, 2012 - link

    Those in-flight memory requests that miss L2 wouldn't be coming from the CPU instructions themselves but from the hardware prefetcher, so being in-order doesn't stop it from making requests. Plus it has SMT.

    It wouldn't surprise me if Atom's auto prefetcher is better than Cortex-A9's; Intel has a lot more experience with them, and this is the first one ARM has done. Atom's also prefetches directly into the L1 cache, while Cortex-A9's only prefetches into L2 (the core gives prefetch hints to the L2 controller), although the A9 can load into L1 directly with manual prefetch instructions.

    You can see some comparisons here:

    http://www.7-cpu.com/cpu/Cortex-A9.html
    http://www.7-cpu.com/cpu/Atom.html

    L2 latency is higher on the A9 due to its L2 being less tightly coupled and shared between two cores, somewhat mitigated by the A9 being OoO and (usually) having more of it. L2 bandwidth is comparable, and other latencies are also comparable. Effective read bandwidth is a lot higher on Atom, while effective write bandwidth is higher on this A9. I'm sure the former highlights the differences in in-flight L2 misses that Intel is talking about, while the latter highlights differences in store queue depth.

    I doubt bandwidth is going to be a key player in most benchmarks, or you'd see Exynos and OMAP4 have a big advantage over Tegra 2 (they don't). That's not to say it doesn't matter for GPU performance.
  • milli - Tuesday, January 10, 2012 - link

    SGX535 = DX 9.0c
    SGX540 = DX 10.1

    A CPU still needs to be able to take advantage of the available memory bandwidth (through technologies like prefetching, ...). A good example can be found in the desktop space between Intel and AMD, where Intel CPUs achieve much higher effective memory bandwidth (while both have similar theoretical bandwidth).
    While increasing the L2 cache on an A9 SoC would mitigate this to some extent, don't expect wonders. It's also not very realistic at the moment to have more than 1MB of cache in a mobile SoC.
  • ltcommanderdata - Tuesday, January 10, 2012 - link

    The SGX540 does not have DirectX support. In the Series5/5XT line, the DX compliant cores are:

    SGX535: DX9.0c
    SGX544/554: DX9 level 3
    SGX545: DX10.1

    The SGX520/530/531/540/543 only support OpenGL ES 2.0 and not full DX compliance.
  • milli - Tuesday, January 10, 2012 - link

    It seems you're right. Wikipedia is wrong about this.
  • Penti - Wednesday, January 11, 2012 - link

    And it doesn't matter, since the SoCs, or rather the CPUs, aimed at Windows x86/64 tablets and Windows appliances do have the SGX545. Windows 8 and Windows Phone (CE based) are two totally different OSes anyway, and Windows Phone is having a hard time just supporting the Qualcomm Snapdragon S1 and S2. I don't want to run Windows 8 Ribbon/MFC/WPF software on a phone platform, and neither do you. Microsoft won't support Windows Phone on x86, and Microsoft won't support Windows 8 on this.

    Microsoft will support Windows on Cedar Trail-M, if PowerVR and Intel (which has to ship the drivers) ever get their poor Windows drivers working. PowerVR/ImgTec aren't known for their Windows driver quality.

    In a tablet, and even a tablet PC (which Microsoft is still going for), it's mostly the screen that uses power. It doesn't matter if the CPU and chipset use a 5W TDP plus a 2.1W TDP; it's still more power efficient than anything else running Windows (NT). It's just a few watts, and the screen will use just as much power, if not more. In a phone, on the other hand, you can't have massive batteries and screens.

    Intel is aiming this SoC at Android handsets and tablets, i.e. pads, not tablet PCs. They don't list DX support or even Windows Embedded support, and it doesn't support more than 1GB of RAM. It's built to interface with a modem (baseband), LPDDR2, HDMI, MIPI-DSI, a USB PHY, eMMC, and camera modules, not with the ordinary PC hardware topology of DDR3, PCIe, LVDS/eDP, and a southbridge chipset providing basic I/O (USB, Ethernet, SATA and whatnot). Memory will come included in the package too. It's simply another platform.

    It's not for powering Office 2010 and Visual Studio 2012; look to other chips for that.
