Power Consumption

With Vishera, AMD was in a difficult position: it had to drive performance up without blowing through its 125W TDP. As the Piledriver cores were designed to do just that, Vishera benefited. Remember that Piledriver was predominantly built to take this architecture into mobile. I went through the details of what makes Piledriver different from its predecessor (Bulldozer), but as far as power consumption is concerned, AMD moved to a different type of flip-flop in Piledriver that increased complexity on the design/timing end but decreased active power considerably. Basically, it made more work for AMD but resulted in a more power efficient chip without moving to a dramatically different architecture or a new process node.

In mobile, AMD used these power saving gains to put Piledriver in mobile APUs, somewhere Bulldozer never went. We saw this with Trinity, which managed to outperform the previous Llano generation of APUs while improving battery life. On desktops, however, AMD used the power savings offered by Piledriver to drive clock speeds up, increasing performance without increasing power consumption. Since peak power didn't go up, overall power efficiency actually improves with Vishera over Zambezi. The chart below plots total system power consumption while running both passes of the x264 HD (5.0.1) benchmark:

In the first pass Vishera actually draws a little less power, but once we get to the heavier second encode pass the two curves are mostly indistinguishable (Vishera still drops below Zambezi regularly). Vishera uses its extra frequency and IPC tweaks to complete the task sooner, and drive down to idle power levels, thus saving energy overall. The picture doesn't look as good though if we toss Ivy Bridge into the mix. Intel's 77W Core i5 3570K is targeted by AMD as the FX-8350's natural competitor. The 8350 is priced lower and actually outperforms the 3570K in this test, but it draws significantly more power:

The platforms aren't entirely comparable, but Intel maintains a huge power advantage over AMD. With the move to 22nm, Intel dropped power consumption over an already more power efficient Sandy Bridge CPU at 32nm. While Intel drove power consumption lower, AMD kept it constant and drove performance higher. Even if we look at the FX-8320 and toss Sandy Bridge into the mix, the situation doesn't change dramatically:

Sandy Bridge obviously consumes more than Ivy Bridge, but the gap between Vishera and either Intel platform is significant. As I mentioned earlier, this particular test completes quicker on Vishera, but the test would have to be much longer for that head start to give AMD the overall efficiency advantage.
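The tradeoff the last few paragraphs describe comes down to energy being power integrated over time: a chip can draw more watts yet consume less energy if it finishes enough sooner. A minimal sketch of that arithmetic, using made-up wattages and durations rather than the measured figures from the charts:

```python
def energy_wh(avg_power_w, duration_s):
    """Total energy in watt-hours: average power multiplied by time."""
    return avg_power_w * duration_s / 3600.0

# Hypothetical encode run: the chip that draws more power but finishes
# sooner can still come out ahead on total energy consumed.
fast_hungry = energy_wh(avg_power_w=200, duration_s=300)  # ~16.7 Wh
slow_frugal = energy_wh(avg_power_w=150, duration_s=450)  # 18.75 Wh
print(fast_hungry < slow_frugal)  # True: faster chip uses less energy here
```

The runtime gap has to be large relative to the power gap for this to flip in the higher-draw chip's favor, which is why a short test can't show it.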

If we look at average power over the course of the two x264 encode passes, the results back up what we've seen above:

Power Consumption - Load (x264 HD 5.0.1)

As more client PCs move toward smaller form factors, power consumption may become just as important as the single-threaded performance gap. For those building in large cases this shouldn't be a problem, but for small form factor systems you'll want to go Ivy Bridge.

Note that idle power consumption can be competitive, but will obviously vary depending on the motherboard used (the Crosshair Formula V is hardly the lowest power AM3+ board available):

Power Consumption - Idle


  • Samus - Tuesday, October 23, 2012 - link

    It's safe to say that all programs/games going forward will take advantage of four cores or more. Battlefield 3, released just last year, basically requires 4 cores in order to be GPU-limited (the game is CPU-limited with just about any video card unless you have 4 cores). Reply
  • c0d1f1ed - Tuesday, October 23, 2012 - link

    Prognosis for the near future is that having that many threads will still not be a whole lot of use for gaming. See Amdahl's law for why.

    Amdahl's Law is not a reason. There is plenty of task parallelism to exploit. The real issue is ROI, and there are two aspects to that. One is that multi-threaded development is freakishly hard. Unlike single-threaded development, you cannot know exactly what each thread is doing at any given time. You need to synchronize to make certain actions deterministic, but even then you can end up with race conditions if you're not careful. The current synchronization methods are just very primitive. Intel will fix that with Haswell: its TSX technology enables hardware lock elision and hardware transactional memory. Both will make the developer's life a lot easier, and also make synchronization more efficient.

    The second aspect isn't about the costs but about the gains. It has taken quite a while for more than two cores to become the norm, so it just wasn't worth it for developers to go through all the pain of scalable, fine-grained multi-threaded development while the average CPU was still only a dual-core. Haswell's TSX technology will arrive just in time, as quad-core becomes mainstream. Also, Haswell will have phenomenal Hyper-Threading performance thanks to two nearly symmetrical sets of two integer execution units.

    AMD needs to implement TSX and AVX2 sooner rather than later to stay in the market.
    Reply
  • CeriseCogburn - Tuesday, October 30, 2012 - link

    Nice post. Appreciate it.

    And ouch for amd once again.
    Reply
  • surt - Tuesday, October 23, 2012 - link

    No, gaming won't need that many threads in the near future either. Nobody is going to make a game that demands more than 4 threads, because that's what common gamer systems support. Reply
  • AnnihilatorX - Wednesday, October 24, 2012 - link

    I disagree. Say we have a hypothetical game that supports 8 threads. The overhead of over-threading on a quad-core system is, frankly, not very much, while it may provide improvements for people with octo-core CPUs or Intel processors with Hyper-Threading. Reply
  • AnnihilatorX - Wednesday, October 24, 2012 - link

    In fact, many games nowadays split their workload into many threads: economic simulation, background AI planning during the user's phase, physics, audio, a graphics subthread, network management, preloading and resource management. It is just that even with that parallelism, there are bound to be single-threaded bottlenecks, such that an 8-core may not benefit at all compared to 4 cores.

    So I disagree: it is not about people declining to spend resources on parallelism or not supporting it. The nature of the workload is the determining factor.
    Reply
  • CeriseCogburn - Sunday, December 09, 2012 - link

    LOL- amd sucks period, did you look at the gaming page ?
    these visheras got literally stomped to death

    AMD fanboy = the imaginary, non existent, and never to exist future looks glorious for de furhor amd!
    Reply
  • redwarrior - Wednesday, October 24, 2012 - link

    What a one-dimensional computer enthusiast you are. You spend hundreds to play games on a computer when you could do the same on a console for less? I use my computer to gain knowledge, impart knowledge, do organizing work to liberate the working class from wage slavery, and write leaflets and documents. I occasionally play strategy games that are usually multi-threaded, like Galactic Civilizations II. There is no greater value on the planet than the FX processors for what I do. They save me time over the Intel processor in the $200 price class for the work I do. Time and money are what's important; frame rates of 120 are useless except to the over-privileged who buy 120 Hz monitors for their gaming. What a waste of money and resources that could be used for the advancement of humankind. Reply
  • bennyg - Thursday, October 25, 2012 - link

    "Value" is more than just perf per purchase dollar, running costs also need to be included.

    E.g., a basic calculation based on the charts above: the FX CPU I've saved $50 on would cost 2c extra per hour in power at full load. So 2500 hours at load would be my break-even point. That's 7 hours a day at full load over a year, a heavy-use scenario but quite possible.

    Multithreaded games are such a vast exception to the rule (that once you have "enough" CPU power you gain infinitesimal fps from more) that they are not worth even mentioning.
    Reply
  • redwarrior - Thursday, October 25, 2012 - link

    You know NOT what you speak. Battlefield 3 is multithreaded, and look at the AMD FX-8350 on Battlefield 3: right up near the top, better than the i5 3570 and close to the i7 3770. You guys are ignoring the facts and ignoring the trends in software. The move to parallelism is unstoppable and will accelerate. Multithreading is a growing presence, and ONLY BAD programmers and software designers ignore it. The turning point will come when Steamroller ships in a year; it will compete nicely with Hasbeen. At 28nm it will be almost as efficient as Hasbeen, and performance-wise it will be as good.
    Reply
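The Amdahl's Law argument traded back and forth in the comments above is easy to make concrete. A minimal sketch; the 60% parallel fraction is an illustrative assumption, not a measurement of any real game:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: speedup is capped by the serial fraction of the
    work, no matter how many cores are added."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# A workload that is 60% parallelizable gains little past four cores:
for n in (2, 4, 8, 1_000_000):
    print(n, round(amdahl_speedup(0.6, n), 2))
# speedup approaches 1 / (1 - 0.6) = 2.5x but can never exceed it
```

This is why both sides of the thread can be right: more threads do help, but only until the serial portion dominates.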
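The race-condition hazard and "primitive" lock-based synchronization c0d1f1ed describes can be sketched with ordinary threads and a mutex (illustrative only; TSX's hardware lock elision is not modeled here):

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    """Increment the shared counter n times under a lock. Without the
    lock, the read-modify-write in `counter += 1` can interleave
    between threads and silently lose updates."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=add, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000: deterministic only because every update is locked
```

The cost of taking that lock on every update is exactly the overhead that lock elision and transactional memory aim to remove.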
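bennyg's break-even arithmetic generalizes to a one-liner; the sketch below just re-runs the comment's own numbers ($50 purchase saving, 2c extra per hour of full-load power):

```python
def break_even_hours(purchase_savings, extra_cost_per_hour):
    """Hours of full load at which the cheaper-to-buy chip stops being
    cheaper overall, given its higher running cost."""
    return purchase_savings / extra_cost_per_hour

hours = break_even_hours(50.00, 0.02)
print(hours)             # 2500.0 hours at full load
print(round(hours / 7))  # about 357 days at 7 hours of load per day
```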
