Late last month, Intel dropped by my office with a power engineer for a rare demonstration of its competitive position versus NVIDIA's Tegra 3 when it came to power consumption. Like most companies in the mobile space, Intel doesn't rely solely on device-level power testing to determine battery life. To ensure that the CPU, GPU, memory controller and even NAND are all as power efficient as possible, vendors measure power consumption directly on a tablet or smartphone motherboard.

The process would be a piece of cake if you had measurement points already prepared on the board, but in most cases Intel (and its competitors) must take apart a retail device and hunt for a way to measure CPU or GPU power. I described how it's done in the original article:

Measuring power at the battery gives you an idea of total platform power consumption including display, SoC, memory, network stack and everything else on the motherboard. This approach is useful for understanding how long a device will last on a single charge, but if you're a component vendor you typically care a little more about the specific power consumption of your competitors' components.

What follows is a good mixture of art and science. Intel's power engineers will take apart a competing device and probe whatever looks to be a power delivery or filtering circuit while running various workloads on the device itself. By correlating the type of workload to spikes in voltage in these circuits, you can figure out what components on a smartphone or tablet motherboard are likely responsible for delivering power to individual blocks of an SoC. Despite the high level of integration in modern mobile SoCs, the major players on the chip (e.g. CPU and GPU) tend to operate on their own independent voltage planes.
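To make that correlation step concrete, here is a minimal sketch of the comparison involved, assuming you've already logged voltage samples from a candidate rail at idle and again under a known workload (say, a GPU benchmark). The function names, sample values and threshold below are hypothetical illustrations, not Intel's actual tooling.

```python
# Hedged sketch: does a probed rail respond to a known workload?
# All values here are illustrative, not real measurements.

def mean(samples):
    return sum(samples) / len(samples)

def rail_tracks_workload(idle_mv, loaded_mv, threshold_mv=5.0):
    """Return True if the rail's average level rises noticeably under load."""
    return mean(loaded_mv) - mean(idle_mv) > threshold_mv

# A rail that only jumps while the GPU benchmark runs is a good candidate
# for the GPU's voltage plane.
idle_samples = [1.2, 0.9, 1.1, 1.0]    # mV at the probe point, device idle
gpu_samples = [9.8, 10.4, 9.1, 10.9]   # mV while a GPU workload runs
print(rail_tracks_workload(idle_samples, gpu_samples))  # True
```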


A basic LC filter

What usually happens is you'll find a standard LC filter (inductor + capacitor) supplying power to a block on the SoC. Once the right LC filter has been identified, all you need to do is lift the inductor, insert a very small resistor (2 - 20 mΩ) and measure the voltage drop across the resistor. With voltage and resistance known, you can determine current and power. Using good external instruments (NI USB-6289) you can plot power over time and get a good idea of the power consumption of individual IP blocks within an SoC.


Basic LC filter modified with an inline resistor
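The arithmetic behind the sense-resistor step is just Ohm's law. Here's a minimal Python sketch of the calculation; the resistor value and sample arrays are hypothetical stand-ins for what the DAQ (e.g. an NI USB-6289) would actually stream back.

```python
# Minimal sketch of the sense-resistor math: I = V_drop / R, P = V_rail * I.
# R_SENSE and the samples are hypothetical; a real setup streams
# synchronized voltage readings from the data acquisition hardware.

R_SENSE = 0.010  # 10 mOhm resistor inserted in place of the lifted inductor

def rail_power(v_drop_samples, v_rail_samples, r_sense=R_SENSE):
    """Per-sample power (watts) delivered through one SoC voltage rail."""
    return [v_rail * (v_drop / r_sense)
            for v_drop, v_rail in zip(v_drop_samples, v_rail_samples)]

# Example: a 2 mV drop across 10 mOhm on a 1.1 V rail
# -> I = 0.2 A, P = 0.22 W
print(rail_power([0.002], [1.1]))  # ~[0.22]
```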

The previous article focused on an admittedly not too interesting comparison: Intel's Atom Z2760 (Clover Trail) versus NVIDIA's Tegra 3. After much pleading, Intel returned with two more tablets: a Dell XPS 10 using Qualcomm's APQ8060A SoC (dual-core 28nm Krait) and a Nexus 10 using Samsung's Exynos 5 Dual (dual-core 32nm Cortex A15). What was a walk in the park for Atom all of a sudden became much more challenging. Both of these SoCs are built on very modern, low power manufacturing processes and Intel no longer has a performance advantage compared to Exynos 5.

Just like last time, I ensured all displays were calibrated to our usual 200-nit setting and that the software and configurations were as close to equal as possible. Both tablets were purchased at retail by Intel, but I verified their performance against our own samples/data and noticed no meaningful deviation. Since I don't have a Dell XPS 10 of my own, I compared performance to the Samsung ATIV Tab and confirmed that things were at least performing as they should.

We'll start with the Qualcomm based Dell XPS 10...

140 Comments


  • StrangerGuy - Sunday, March 24, 2013 - link

    You really think having a slightly better chip would make Samsung risk everything by getting locked into a chip with an ISA owned, designed and manufactured by a single supplier? And when that supplier has historically shown all sorts of monopolistic abuses?

    And when a quad-core A7 can already scroll desktop sites in Android capped at 60 fps, additional performance provides very little real-world advantage for most users. I'll even say most users would be annoyed by I/O bottlenecks like LTE speeds long before calling 2012+ class ARM CPUs too slow.
  • duploxxx - Monday, January 7, 2013 - link

    Now it is clear that when Intel provides this much material and resources, they know they are at least OK in the comparison against ARM on CPU and power... else they wouldn't make such a fuss.

    But what about the GPU? Is there any decent benchmark or testing available on GPU performance?

    I played with an HP Envy x2 in December, and after some questioning they installed a few light games which ran "OK", but I wonder how good the GPU in the Atom really is. Power consumption looks OK, but I'd prefer a performing GPU at a bit higher power over a non-performing one.
  • memosk - Tuesday, January 8, 2013 - link

    It looks like the old problem of PowerPC vs. PC.
    The PowerPC had a faster RISC processor and the PC had a slower x86-style processor.

    The end result was that the classical PC won this battle, because of tradition and, more importantly, platform knowledge among users, producers, programmers...

    And you should think about economic things like mortgages and the whole environment known as logistics.

    The same problem was Tesla vs. Edison. Tesla had better ideas and Edison was a businessman. Who won? :)
  • memosk - Tuesday, January 8, 2013 - link

    Nokia seriously tried to sell Windows 8 phones without SD cards, and they said it was because of Microsoft.

    How can you then compete against Android with SD cards? But if you copy Apple, you think it makes sense.

    You generally need a complete, logical, OWN, CONSISTENT and ORIGINAL strategy.

    If you copy something, there is a danger that your strategy will be inconsistent, leaky, "two-way", vague, and full of tragic errors like incompatibility, or like the legendary Siemens phones: one crash per day. :D
  • apozean - Friday, January 11, 2013 - link

    I studied the setup and it appears that Intel just wants to take on NVIDIA's Tegra 3. Here are a few differences that I think are not highlighted appropriately:

    1. They used an Android tablet for Atom, an Android tablet for Krait, but a Windows RT tablet (Surface) for Tegra 3. It must have been very difficult to fund a Google Nexus 7. Keeping the same OS across the devices would have controlled for a lot of other system variables. Wouldn't it?

    2. Tegra 3 is the only quad core chip among chips being compared. Atom and Krait are dual-core. If all four cores are running, wouldn't it make a difference to the idle power?

    3. Tegra 3 is built on 40nm and is one of the first A9 SoCs. In contrast, Atom is 32nm and Krait is 28nm.

    How does Tegra 3 fit in this setup?
  • some_guy - Wednesday, January 16, 2013 - link

    I think this may be the beginning of Intel being commoditized and the end of the juicy margins on most of their sales.

    I was just reading an article about how hedge funds love Intel. I don't see it, but that doesn't mean that the hedge funds won't make money. Perhaps they know the earnings report that is coming out soon, maybe tomorrow, will be good. http://www.insidermonkey.com/blog/top-semiconducto...
  • raptorious - Wednesday, February 20, 2013 - link

    But Anand has no clue what the rails might actually be powering. How do we know that the "GPU rail" is in fact powering just the GPU and not the entire uncore of the SoC? This article is completely biased towards Intel and lacks true engineering rigor.
  • EtTch - Tuesday, April 2, 2013 - link

    My take on all of this is that ARM and x86 are not comparable at this point when it comes to comparing the instruction set architectures, due to the different lithography sizes and the new 3D transistors. Only when an ARM-based SoC finally has all the physical features of the x86 parts will they be truly comparable. Right now x86 is likely to have lower power consumption than ARM-based processors built on a larger lithography size than its own. (I really don't know what it's called, but I'll go out on a limb and call it lithography size even though I know I am most likely wrong.)
