Power Consumption

With Vishera, AMD was in a difficult position: it had to drive performance up without blowing through its 125W TDP. The Piledriver cores were designed to do just that, and Vishera benefits as a result. Remember that Piledriver was predominantly built to take this architecture into mobile. I went through the details of what makes Piledriver different from its predecessor (Bulldozer), but as far as power consumption is concerned, AMD moved to a different type of flip-flop in Piledriver that increased complexity on the design/timing end but decreased active power considerably. In other words, it made more work for AMD, but resulted in a more power efficient chip without moving to a dramatically different architecture or a new process node.
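As a rough, textbook-level framing of where those savings go (a generic first-order model, not anything AMD has published), dynamic power scales approximately as

    P_dynamic ≈ α · C_eff · V_dd² · f

where α is the switching activity factor, C_eff the effective switched capacitance, V_dd the supply voltage, and f the clock frequency. Lower-power flip-flops effectively shrink the α·C_eff term; within a fixed 125W budget, that headroom can instead be spent on a higher f (and whatever voltage is needed to reach it).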

In mobile, AMD used these power savings to put Piledriver into its APUs, a place where Bulldozer never went. We saw this with Trinity, which surprisingly enough managed to outperform the previous Llano generation of APUs while improving battery life. On the desktop however, AMD used the power savings offered by Piledriver to drive clock speeds (and thus performance) up without increasing power consumption. Since peak power didn't go up while performance did, overall power efficiency actually improves with Vishera over Zambezi. The chart below plots total system power consumption while running both passes of the x264 HD (5.0.1) benchmark to illustrate the point:

In the first pass Vishera actually draws a little less power, but once we get to the heavier second encode pass the two curves are mostly indistinguishable (Vishera still drops below Zambezi regularly). Vishera uses its extra frequency and IPC tweaks to complete the task sooner and drop down to idle power levels earlier, saving energy overall. The picture doesn't look as good, though, once we toss Ivy Bridge into the mix. Intel's 77W Core i5 3570K is the chip AMD targets as the FX-8350's natural competitor. The 8350 is priced lower and actually outperforms the 3570K in this test, but it draws significantly more power:

The platforms aren't entirely comparable, but Intel maintains a huge power advantage over AMD. With the move to 22nm, Intel dropped power consumption relative to an already more power efficient 32nm Sandy Bridge. In other words, while Intel drove power consumption lower, AMD kept it constant and drove performance higher. Even if we look at the FX-8320 and toss Sandy Bridge into the mix, the situation doesn't change dramatically:

Sandy Bridge obviously consumes more than Ivy Bridge, but the gap between Vishera and either Intel platform is significant. As I mentioned earlier, this particular test does run quicker on Vishera; however, the test would have to be much longer for that time advantage to really give AMD the overall efficiency advantage.
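To make that energy argument concrete, here is a minimal back-of-the-envelope sketch in Python. The load/idle power and completion-time figures are made-up round numbers for illustration, not the measured values behind these charts:

```python
# Back-of-the-envelope energy comparison for a fixed-length encode job.
# All power and time figures below are illustrative placeholders, not the
# measured values from the charts in this review.

def task_energy_wh(load_w, idle_w, task_s, window_s):
    """Energy (Wh) over a measurement window: run at load power until the
    task finishes, then sit at idle power for the rest of the window."""
    assert task_s <= window_s
    joules = load_w * task_s + idle_w * (window_s - task_s)
    return joules / 3600.0

window = 300  # 5-minute measurement window, in seconds

# Hypothetical FX-8350-style system: higher load power, finishes sooner.
amd_wh = task_energy_wh(load_w=200, idle_w=75, task_s=220, window_s=window)

# Hypothetical i5-3570K-style system: lower load power, finishes later.
intel_wh = task_energy_wh(load_w=120, idle_w=60, task_s=240, window_s=window)

print(f"AMD-style system:   {amd_wh:.1f} Wh")   # ~13.9 Wh
print(f"Intel-style system: {intel_wh:.1f} Wh") # ~9.0 Wh
```

With roughly double the load power, a modest time advantage isn't enough; the higher-power system would need to finish in far less than half the time (or spend far more of the window at idle) to come out ahead on total energy.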

If we look at average power over the course of the two x264 encode passes, the results back up what we've seen above:

Power Consumption - Load (x264 HD 5.0.1)

As more client PCs move towards smaller form factors, power consumption may become just as important as the single threaded performance gap. For those building in large cases this shouldn't be a problem, but for small form factor systems you'll want to go Ivy Bridge.

Note that idle power consumption can be competitive, but will obviously vary depending on the motherboard used (the Crosshair V Formula is hardly the lowest power AM3+ board available):

Power Consumption - Idle

Comments

  • CeriseCogburn - Tuesday, October 30, 2012 - link

    So did you buy the i5 3470, or the FX 6200 ?

    According to you and your 1st chart, that's what "most of us bought". Okay, since we know that's total BS, what you said is also total BS.

    " because most of us will buy the most performant processor per dollar "

    LOL - okay, so there's a big problem bub - OC the 2500K and it skyrockets off the top of your 1st chart straight up.

    So, did you buy the 2500K, like "most of us did" if we "used your declared knowledge about us all" and added 2 watts of common sense into the mix ?

    Why must you people torture us so ?
  • Idiot10 - Tuesday, May 7, 2013 - link

    Hey Mr. ChariseHogburn, why don't you take your 2500K with you and leave us all to our musings? You seem to know everything about processors, so why don't you let others do what they want to do? You big piece of Intel mercenary shit! SOB!!!!
  • Mathos - Tuesday, October 23, 2012 - link

    It does give a reason and an upgrade path to finally move up from my aging P2 1090T. One of the main workloads I do when I use my PC heavily is indeed easy h.264 encoding for game and other types of video. Always nice to be able to knock a video file down from 2.5GB to 200-500MB. As for the board used for the benchmarks: I've personally always used MSI or ASRock boards myself, with some Asus boards when I can catch the price right.

    I noticed there are overclocking numbers that do look decent. Some things I'm curious about: how do they take to undervolting? My luck with previous AMD generations has been pretty good when it came to that, at least when I felt like tinkering. I used to be able to run the old 9600BE and 9850BE considerably below stock voltages at stock speeds, for example, and sometimes even with mild overclocks on the NB. I've noticed that AMD tends to be fairly conservative there.

    And since they appear to still be using the same IMC/L3 speed linked to the north bridge HyperTransport speed: how does upping the actual speed of the NB IMC/L3 affect the performance and stability of the platform? I know back in the day of the 9600BE/9850BE I could generally get them close to the same performance level as a Core 2 Quad at the same clock speeds through that kind of tweaking.

    And on a final note, it's a nice performance increase overall, even in single threaded apps, over the Bulldozer cores. But you'd think they would have implemented a way to gang the integer cores and make them act as a single core for single threaded performance. That's all it would really take to pick up a bit of the slack, I think.
  • jensend - Tuesday, October 23, 2012 - link

    Why the heck are you starting your power consumption charts at 50W rather than at zero?

    That's *extremely* misleading, wildly exaggerating AMD's disadvantage. AMD has roughly 2x the power consumption of IVB at load and 1.25x the power consumption at idle- but by starting your chart at 50W you're exaggerating that into over 3x at load *and at idle*.

    *Please* get yourself a copy of "The Visual Display of Quantitative Information" and read the section talking about the "lie factor" of a graph or chart.
  • Spunjji - Tuesday, October 23, 2012 - link

    I think they are anticipating their readership noticing that the graph starts at 50W, just as you did.
  • kevith - Tuesday, October 23, 2012 - link

    They probably do. But that's not the point. A GRAPH is meant to show a string of figures as a drawing.

    When a graph starts at anything but zero, it will not show a true picture.

    Take two things to compare, where both lie in the area between, say, 90 and 91 of some kind of value.

    If you then make a graph that goes from 89 to 92 in tenths, you will get a graph that shows a very uneven curve, going up and down all the time, with seemingly big differences in values.

    But if it started at zero, like it's supposed to, you would see an almost straight line, reflecting the true picture: these two things are practically alike in this specific area.

    IF you don't make a graph like that ALL THE TIME, there's no need to make a graph at all; you could just write the values as figures.
  • CeriseCogburn - Tuesday, October 30, 2012 - link

    No spooge, it's called amd fanboy advantage, that is what should be always anticipated, and is actually always provided.
  • Pythias - Tuesday, October 23, 2012 - link

    Why was the i3 dropped from some of the charts?
  • CeriseCogburn - Tuesday, October 30, 2012 - link

    Because it kicked so much amd pileofcrap.
  • redwarrior - Tuesday, October 23, 2012 - link

    Anand's testing was the usual lazily designed testing with poor planning. Why run SYSmark, which everyone knows uses testing methods that tend to ignore multi-threading? Keep the test on applications only and make sure your gaming apps are representative. I saw better testing done on several other websites, where the usual poorly designed and coded trash was balanced with other games that did employ some level of multi-threading. The FX-8350 did immensely better in that gaming selection. Most gamers are not shoot-em-up fascist gamers. There is no reason for Anand to stack the game selections in the single-threaded direction only. I believe Anand is a shill for Intel and chose the stupid SYSmark tests and the games in such a fashion as to downplay the vast performance improvements that are possible from the FX-8350 CPU. That is one reason I do NOT spend much time on this site any more.
    There is nothing I detest more than intellectual dishonesty. Check out Tom's Hardware; their review was done more scientifically and had a balanced selection of tests. The Vishera FX-8350 clearly bested the i5 3570 in most tests and was the best performance for the buck by far. A better, objectively designed test. No axes to grind. To hell with Anand, unofficial Intel shill and intellectually LAZY.
