Power Consumption: Better than Atom

Power efficiency was a big draw of Atom, but does AMD sacrifice any of that in order to deliver the performance it does with the E-350? To be blunt: no, not at all.

I don’t have any pico PSUs or anything super efficient readily available, so don’t expect any of the numbers to be particularly impressive; what they are is comparable to one another. I hooked up each of the systems I’d been using to the same PSU and measured power in three conditions: idle, full CPU load (Cinebench 11.5), and while playing a 1080p H.264 video.

Pine Trail and the old ION platform consume just about the same amount of power at idle. The Athlon II system obviously draws more, in this case an increase of 17%. The E-350 uses less than 70% of the power of the Atom D510 system at idle.
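The relative figures above are simple ratios of wall power. A minimal sketch of the arithmetic (the wattage values below are hypothetical placeholders for illustration, not the review's measured readings):

```python
def relative_power(system_watts: float, baseline_watts: float) -> float:
    """Return a system's power draw as a fraction of the baseline system's draw."""
    return system_watts / baseline_watts

# Hypothetical idle wall readings, for illustration only
atom_d510_idle = 30.0   # W (placeholder baseline)
athlon_ii_idle = 35.1   # W (placeholder, ~17% above Atom)
e350_idle = 21.0        # W (placeholder, ~70% of Atom)

print(round(relative_power(athlon_ii_idle, atom_d510_idle), 2))  # ~1.17 (a 17% increase)
print(round(relative_power(e350_idle, atom_d510_idle), 2))       # ~0.7 (70% of Atom)
```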

Load Power Consumption - 1080p H.264 Video Decode

Under load the Brazos advantage shrinks a bit, but it’s still much lower power than Atom. While playing a 1080p H.264 video you’re looking at ~83% of the power of an ION system, and 85% under full CPU load.

Load Power Consumption - Cinebench 11.5

Say what you will about Intel’s manufacturing process advantage, it’s simply not put to use here with Atom. AMD’s E-350 is higher performing and uses less power than Intel’s 45nm Atom D510. Did I mention it’s built on a smaller die as well?

I wanted to isolate the CP...err APU and look at its power draw exclusively. I ran the same three tests but this time I’m not measuring power at the wall, but rather just power over the ATX12V connector directly to the CPU.

At idle the E-350 APU only requires around 3W of power. That’s actually not as low as I’d expect, especially given that Sandy Bridge is typically down at 4W when fully idle. AMD is apparently not being too aggressive with stopping clocks and gating when fully idle, at least on the desktop Brazos parts.

Power Consumption Comparison

ATX12V Power Draw      | Idle | 1080p H.264 Decode | Cinebench 11.5
AMD E-350              | 3W   | 8W                 | 9W
AMD Athlon II X2 255   | 7W   | 12W                | 47W

Under load, whether the CPU cores are fully taxed or the video decode engine is in use, APU power consumption is around 8 - 9W. By comparison, an Athlon II X2 255 will use 12W when decoding video (and that’s with the UVD engine in the 890GX chipset doing most of the heavy lifting). The more interesting comparison is what happens when the CPU cores are fully loaded: the E-350 uses 9W running Cinebench 11.5 compared to 47W for the Athlon II X2.
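The gap in the table can be quantified directly. A quick sketch using the measured ATX12V figures above:

```python
# Measured ATX12V power draw from the table above (watts)
power = {
    "AMD E-350":            {"idle": 3, "h264": 8, "cinebench": 9},
    "AMD Athlon II X2 255": {"idle": 7, "h264": 12, "cinebench": 47},
}

# How many times more power the Athlon II draws in each scenario
for scenario in ("idle", "h264", "cinebench"):
    ratio = power["AMD Athlon II X2 255"][scenario] / power["AMD E-350"][scenario]
    print(f"{scenario}: {ratio:.1f}x")
# cinebench: 47W / 9W ≈ 5.2x
```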

Comments

  • ocre - Friday, January 28, 2011 - link

    hahaha, You need to educate yourself a little more on the advantages/disadvantages of both the x86 and ARM architectures before you go around posting ignorant statements like these. If we assumed your statement "E-350 has roughly 5 times the performance of Tegra 2" is true, it would still be nothing to brag about, because that same Tegra 2 uses 7-9 times less energy than the E-350. That makes a mere 5x performance advantage a joke.

    But on top of that, there is no good, reasonable way to conclude how these architectures rate against one another. The software is radically different. There is not enough to go on for any useful conclusion, and what little we do have is limited to very special cases where software happens to produce comparable results. The entire process is different, and while we know x86 is pretty well optimized, any ARM-based counterpart software is in its beginnings. x86 has the luxury of mature optimizations; ARM will only get better as software engineers find better ways to utilize the system. And that is assuming there even is software similar enough to reasonably attempt to measure x86 vs ARM performance; for the most part the software on each architecture is radically different. ARM is extremely good at some things and not so good at others, and that doesn't mean it can't be good; in some cases the software just hasn't matured yet. All of this matters little, because at the end of the day everyone can see the exact opposite of your statement: the superiority of ARM is simply unmatched by x86 when it comes to performance per watt. This is undisputed, and x86 has a long, long way to go to catch up with ARM (and many think it never will). All ARM has to do is actually build 18W CPUs; they would be 3 to 4 times more powerful than the E-350 based on the current ARM architecture.
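Taking the comment's own numbers at face value, the performance-per-watt arithmetic works out as follows (a sketch; the 5x performance and ~8x energy figures are the comment's claims, not measured data):

```python
def perf_per_watt_ratio(perf_a: float, power_a: float,
                        perf_b: float, power_b: float) -> float:
    """Ratio of system A's performance-per-watt to system B's."""
    return (perf_a / power_a) / (perf_b / power_b)

# Claimed: E-350 has ~5x Tegra 2's performance, but Tegra 2 uses
# 7-9x less energy (take 8x as a midpoint). Normalize Tegra 2 to 1/1.
ratio = perf_per_watt_ratio(5.0, 8.0, 1.0, 1.0)
print(f"E-350 perf/watt vs Tegra 2: {ratio:.3f}x")  # 0.625x, i.e. Tegra 2 ahead
```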
  • jollyjugg - Friday, January 28, 2011 - link

    Well, your last statement is simply laughable, because all we have from ARM right now is chips that run in smartphones and tablets. ARM-based processors are not exactly known for their performance as much as their power. While it is true that performance/watt is great in ARM today, it is also true, as any modern microprocessor designer would tell you, that as you go up the performance chain (by throwing more hardware at the problem, chasing more gigahertz, and adding tweaks like wider issue and out-of-order execution), performance improves but performance/watt drops. There are no free lunches here. I doubt a high-performance ARM architecture would look a whole lot different from an x86 architecture. Low power is an Achilles' heel for x86 just as performance is an Achilles' heel for ARM. You mentioned software maturity. It is laughable to even mention this in such a cut-throat industry. Intel showed its superiority over AMD by tweaking the free x86 compiler it gave away to developers to favor its own architecture over its rival's, and users got cheated until the European Commission exposed Intel. So don't even talk about software maturity. The incumbent always has the advantage. ARM first has to overcome Intel's OEM muscle and marketing muscle before it can start dominating. And even if it did both, there is something it can never do: match Intel's manufacturing muscle. Intel is by far better than even the largest contract manufacturer, TSMC, whose only task is to manufacture.
  • Shadowmaster625 - Monday, January 31, 2011 - link

    Nonsense. ARM code is more optimized than x86 code. x86 code is always sloppy, because it has always been written without having to deal with RAM, ROM, and clock constraints. When you code for an ARM device, you are presented with limits that most software engineers writing x86 code have never even faced. When writing software for Windows, 99.9% of developers will tell you they never even think about the amount of RAM they are using. For ARM it was probably 80% ten years ago; today it is probably less than 20% of ARM software engineers who would tell you they run into RAM and ROM limitations. With all the smartphone development going on today, ARM software is getting sloppier, but it is still nowhere near as bad as x86.
  • Shadowmaster625 - Monday, January 31, 2011 - link

    Best buy is still littered with them. Literally. Littered.
  • e36Jeff - Thursday, January 27, 2011 - link

    what review were you reading? The only bug that is actually mentioned is the issue with Flash, which AMD and Adobe are both aware of and which should be fixed in the next iteration of Flash. Stop seeing anything from AMD as bad and Intel as good. For where AMD wants this product to compete, this is a fantastic product that Intel has very little to compete with now that they locked Nvidia out of another Ion platform.
  • codedivine - Thursday, January 27, 2011 - link

    Ok one last question. Is it possible to run your VS2008 benchmark on it? Will be appreciated, thanks.
  • Anand Lal Shimpi - Thursday, January 27, 2011 - link

    Running it now, will update with the results :)

    Take care,
    Anand
  • Malih - Sunday, January 30, 2011 - link

    I'm with you on this.

    I'm thinking about buying a netbook and maybe a couple of nettops with the E-350, which will mostly be used to code websites, and maybe some other development that requires an IDE (Eclipse, Visual Studio and so on).
  • micksh - Thursday, January 27, 2011 - link

    how can it be that "1080i60 works just fine" when it failed all deinterlacing tests?
  • Anand Lal Shimpi - Thursday, January 27, 2011 - link

    It failed the quality tests but it can physically decode the video at full frame rate :)

    Take care,
    Anand
