Late last month, Intel dropped by my office with a power engineer for a rare demonstration of its competitive position versus NVIDIA's Tegra 3 when it came to power consumption. Like most companies in the mobile space, Intel doesn't rely on device-level power testing alone to determine battery life. To ensure that the CPU, GPU, memory controller and even NAND are all as power efficient as possible, these companies measure power consumption directly on a tablet or smartphone motherboard.

The process would be a piece of cake if you had measurement points already prepared on the board, but in most cases Intel (and its competitors) must take apart a retail device and hunt for a way to measure CPU or GPU power. I described how it's done in the original article:

Measuring power at the battery gives you an idea of total platform power consumption including display, SoC, memory, network stack and everything else on the motherboard. This approach is useful for understanding how long a device will last on a single charge, but if you're a component vendor you typically care a little more about the specific power consumption of your competitors' components.

What follows is a good mixture of art and science. Intel's power engineers will take apart a competing device and probe whatever looks to be a power delivery or filtering circuit while running various workloads on the device itself. By correlating the type of workload to spikes in voltage in these circuits, you can figure out what components on a smartphone or tablet motherboard are likely responsible for delivering power to individual blocks of an SoC. Despite the high level of integration in modern mobile SoCs, the major players on the chip (e.g. CPU and GPU) tend to operate on their own independent voltage planes.
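
To make that correlation step concrete, here's a minimal sketch in Python. Everything in it is hypothetical (the rail trace, sample rate, and workload window are synthesized for illustration; this isn't Intel's actual tooling): you toggle a known workload on and off and look for a rail whose power tracks it.

```python
import numpy as np

# Hypothetical example: a 10 s power trace for one candidate rail,
# sampled at 1 kHz, while a GPU workload runs from t = 3 s to t = 7 s.
fs = 1000                                # sample rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)       # time axis (s)
rng = np.random.default_rng(0)

# Synthesize a trace: ~0.1 W idle floor, plus ~0.8 W while the workload runs.
workload_on = (t >= 3.0) & (t < 7.0)
rail_power = 0.1 + 0.8 * workload_on + rng.normal(0.0, 0.02, t.size)

# Compare mean power inside vs. outside the workload window.
active_w = rail_power[workload_on].mean()
idle_w = rail_power[~workload_on].mean()
print(f"idle: {idle_w:.3f} W  active: {active_w:.3f} W  "
      f"delta: {active_w - idle_w:.3f} W")

# A large delta that appears and disappears with the workload's start/stop
# times suggests this rail feeds the block the workload exercises.
```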


A basic LC filter

What usually happens is you'll find a standard LC filter (inductor + capacitor) supplying power to a block on the SoC. Once the right LC filter has been identified, all you need to do is lift the inductor, insert a very small resistor (2 - 20 mΩ) and measure the voltage drop across the resistor. With voltage and resistance values known, you can determine current and power. Using good external instruments (e.g. an NI USB-6289 DAQ) you can plot power over time and get a good idea of the power consumption of individual IP blocks within an SoC.


Basic LC filter modified with an inline resistor
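
The arithmetic once the shunt is in place is just Ohm's law. Here's a minimal sketch of the post-processing, again with hypothetical readings rather than output from any specific DAQ driver:

```python
import numpy as np

R_SHUNT = 0.010  # hypothetical 10 mΩ shunt, within the 2 - 20 mΩ range above

# Hypothetical sampled readings from two DAQ channels:
v_shunt = np.array([0.0021, 0.0035, 0.0042, 0.0038])  # drop across shunt (V)
v_rail = np.array([1.05, 1.04, 1.03, 1.04])           # rail voltage at SoC (V)

current_a = v_shunt / R_SHUNT   # Ohm's law: I = V / R  (amps)
power_w = v_rail * current_a    # P = V * I  (watts), one value per sample

print(f"mean current: {current_a.mean():.3f} A")
print(f"mean power:   {power_w.mean():.3f} W")
```

Plot power_w against the sample timebase and you get the per-rail power-over-time curves described above; integrate it over a workload and you get energy.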

The previous article focused on an admittedly not-too-interesting comparison: Intel's Atom Z2760 (Clover Trail) versus NVIDIA's Tegra 3. After much pleading, Intel returned with two more tablets: a Dell XPS 10 using Qualcomm's APQ8060A SoC (dual-core 28nm Krait) and a Nexus 10 using Samsung's Exynos 5 Dual (dual-core 32nm Cortex A15). What was a walk in the park for Atom all of a sudden became much more challenging. Both of these SoCs are built on very modern, low power manufacturing processes, and Intel no longer has a performance advantage compared to the Exynos 5.

Just like last time, I calibrated all displays to our usual 200 nits setting and kept the software and configurations as close to equal as possible. Both tablets were purchased at retail by Intel, but I verified their performance against our own samples/data and noticed no meaningful deviation. Since I don't have a Dell XPS 10 of my own, I compared performance to the Samsung ATIV Tab and confirmed that things were at least performing as they should.

We'll start with the Qualcomm based Dell XPS 10...

Comments

  • kyuu - Friday, January 4, 2013

    You're the one stepping into the past with the CISC vs. RISC debate. x86 is not going to go away anytime soon. Keep dreaming, though.
  • iwod - Saturday, January 5, 2013

    Nothing about architectures in this comment, but by the time the ARM Cortex A57 is out, Intel's ValleyView, which doubles the performance, will be out as well. The A57 is expected to give, in the best-case scenario, a 30 - 50% increase in performance. And all of a sudden that looks very similar to 2x Atom performance.

    It will take only one, just ONE, mistake by ARM for Intel to possibly wipe them off the map.

    Looking 3 - 5 years ahead, though, it will be a bloody battle.
  • Cold Fussion - Friday, January 4, 2013

    Why weren't there any charts showing performance per watt, or energy consumption vs. performance, in the GPU area? If the Mali chip is using twice the energy but giving 3x the performance, then that is a very significant point that's being misrepresented.
  • mrdude - Friday, January 4, 2013

    I was thinking the same thing.

    If I can game at native resolution on a Nexus 10 at better frame rates than on the Atom or Snapdragon SoC and the battery capacity is larger and the price of the device is equal, then do I really care about the battery life?

    Although it's nice to see Intel getting x86 down to a competitive level with ARM, the most astonishing thing I took away from that review was just how amazing that Mali-T604 GPU is. All that performance at only that power draw? Yesplz :P
  • parkpy - Friday, January 4, 2013

    I've learned so much from AT's reviews of the iPhone 5, Galaxy S III, and Nexus 4, and from this article, that it makes me wish AT could produce MORE reviews of mobile devices.

    All of this information is crack! I can't get enough of it. Keep up the good work! And Intel, I can't wait for you to get your baseband processor situation sorted out!

    I was already tempted to get a Razr I, but it looks like before the end of the year consumers will have some very awesome technology in their phones that won't require as much time on the battery charger!
  • This Guy - Friday, January 4, 2013

    What if Rosepoint is software defined instead of fixed function?
  • ddriver - Friday, January 4, 2013

    I am confused here - this review shows the Atom to be somewhat faster than the A15, while the review at Phoronix shows the A15 destroying the Atom, despite the fact that Intel's compiler is incredibly good at optimizations and incomparably more mature.

    So I am in a dilemma over whom to trust - a website that is known to be generously sponsored by Intel, or a website that is heavily focused on open source.

    What do you think?
  • kyuu - Friday, January 4, 2013

    Uh, did we read the same article? Where does it show the Atom being "somewhat faster than A15"? The article showed that the A15 is faster than Atom, but at a large power premium.
  • ddriver - Friday, January 4, 2013

    On the charts I see the blue line ending its task first and taking less time; blue is Atom, right?
  • jwcalla - Friday, January 4, 2013

    A couple things:

    1) The Phoronix benchmarks were for different Atoms than the one used in this article. I don't know how they compare, but they're probably older models.

    2) The Phoronix benchmarks used GCC 4.6 across the board. Yes, in general GCC will have better optimizations for x86, but we don't know anything (unless I missed it) about which compilers were used here. If this was an Intel sample sent to Anand, I'm sure they compiled the software stack with one of their own proprietary Intel compilers. Or perhaps it is the MS compiler, which no doubt has decades of x86 optimizations built in and probably less ARM work than GCC (for the RT comparison).

    Don't take the benchmarks too seriously, especially since even the software isn't held constant here like it was in the Phoronix benchmarks. It's all ballpark information. Atom is competitive with ARMv7 architectures -- that's the takeaway.
