Power Management and Real Turbo Core

Like Llano, Bulldozer incorporates significant clock and power gating throughout its design. Power gating allows individual idle cores to be almost completely powered down, opening up thermal headroom that lets the remaining active cores run above their base operating frequency. Intel calls this dynamic clock speed adjustment Turbo Boost, while AMD refers to it as Turbo Core.

The Phenom II X6 featured a rudimentary version of Turbo Core without any power gating. As a result, Turbo Core was rarely active on those processors, and when it did engage it didn't stay active for long.

Bulldozer's Turbo Core is far more robust. While it still uses Llano's digital estimation method of determining power consumption (e.g., the CPU knows that ALU operation x consumes y watts), the results should be far more tangible than what we've seen from any high-end AMD processor in the past.
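To give a rough sense of how digital estimation works, consider the sketch below: the chip tallies micro-architectural events over each sampling interval and weights them by calibrated per-event energy costs. The event names and energy values here are purely illustrative assumptions on our part, not AMD's actual calibration tables:

```python
# Conceptual sketch of digital power estimation. The chip counts
# micro-architectural events and multiplies each by a calibrated
# energy cost. All event names and numbers below are illustrative
# assumptions, not AMD's real calibration data.

# Hypothetical energy cost per event, in nanojoules
ENERGY_PER_EVENT_NJ = {
    "alu_op":    0.4,
    "fpu_op":    1.1,
    "l2_access": 2.0,
}

def estimate_power_watts(event_counts: dict, interval_s: float) -> float:
    """Estimate average power over one sampling interval from event counts."""
    total_nj = sum(ENERGY_PER_EVENT_NJ[e] * n for e, n in event_counts.items())
    return total_nj * 1e-9 / interval_s  # nJ -> J, then J per second = W

# Example: event counts sampled over a 1ms interval
print(estimate_power_watts(
    {"alu_op": 3_000_000, "fpu_op": 500_000, "l2_access": 200_000}, 0.001))
```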

Turbo Core's granularity hasn't changed with the move to Bulldozer, however. If half (or fewer) of the processor cores are active, the maximum turbo frequency is allowed. If any more cores are active, only a lower intermediate turbo frequency can be selected. Those are the only two frequencies available above the base frequency.
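In code form, the two-step policy reduces to a trivial rule. Here's a minimal sketch using the FX-8150's clocks (3.6GHz base, 3.9GHz intermediate, 4.2GHz max, detailed below); note that it deliberately ignores the power headroom check that gates whether turbo engages at all:

```python
def turbo_frequency_ghz(active_cores: int, total_cores: int = 8) -> float:
    """Pick the target clock for a given number of active cores, per the
    two-step Turbo Core policy described above (FX-8150 clocks assumed).
    Ignores the power/thermal headroom check that gates real turbo."""
    HALF_TURBO, MAX_TURBO = 3.9, 4.2
    if active_cores <= total_cores // 2:
        return MAX_TURBO   # half or fewer cores active: max turbo allowed
    return HALF_TURBO      # more than half active: intermediate turbo only

# 4 active cores -> 4.2GHz, 5 active cores -> 3.9GHz
print(turbo_frequency_ghz(4), turbo_frequency_ghz(5))
```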

AMD doesn't currently have a Turbo Core monitoring utility, so we turned to Core Temp to record CPU frequency while running various workloads, measuring the impact of Turbo Core on Bulldozer compared to the Phenom II X6 and Sandy Bridge.
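Core Temp simply samples each core's clock at a fixed interval, and we average the samples afterwards. For reference, here's a hedged sketch of the same approach using the third-party psutil Python package, assuming a platform where it exposes live per-core frequencies (the data in this article came from Core Temp's own logging):

```python
import time

import psutil  # third-party: pip install psutil


def log_core0_freq(duration_s: float, interval_s: float = 1.0):
    """Poll Core 0's reported clock (MHz) for duration_s seconds and
    return the samples plus their average. Assumes psutil.cpu_freq()
    reports live per-core frequencies on this platform."""
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        samples.append(psutil.cpu_freq(percpu=True)[0].current)
        time.sleep(interval_s)
    return samples, sum(samples) / len(samples)


samples, avg_mhz = log_core0_freq(duration_s=30)
print(f"average Core 0 clock: {avg_mhz / 1000:.2f}GHz")
```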

First, let's pick a heavily threaded workload: our x264 HD benchmark. Each run of our x264 test is composed of two passes: a lightly threaded first pass that analyzes the video, and a heavily threaded second pass that performs the actual encode. Our test runs four times before outputting a result. I measured the frequency of Core 0 over the duration of the test.

Let's start with the Phenom II X6 1100T. By default the 1100T should run at 3.3GHz, but with half or fewer cores active it can turbo up to 3.7GHz. If Turbo Core works as intended, I'd expect to see some jumps up to 3.7GHz during the lightly threaded passes of our x264 test:

Unfortunately, we see nothing of the sort. Turbo Core is pretty much non-functional on the Phenom II X6, at least running this workload. Average clock speed is a meager 3.31GHz, just barely above stock and likely only due to ASUS being aggressive with its clocking.

Now let's look at the FX-8150 with Turbo Core. The base clock here is 3.6GHz, max turbo is 4.2GHz and the intermediate turbo is 3.9GHz:

Ah, that's more like it. While the average is only 3.69GHz (+2.5% over the stock 3.6GHz), we're actually seeing some movement here. This workload in particular is hard on any processor, as you'll see from Intel's 2500K below:

The 2500K runs at 3.3GHz by default, but thanks to turbo it averages 3.41GHz for the duration of this test. We even see a couple of jumps to 3.5 and 3.6GHz. Intel's turbo is a bit more consistent than AMD's, but average clock increase is quite similar at 3%.

Now let's look at the best case scenario for turbo: a heavy single threaded application. A single demanding thread, even one that runs only briefly, is where these turbo modes truly shine. Turbo helps launch applications quicker, makes windows appear faster, and makes short work of bursty workloads.

We turn to our usual favorite, Cinebench 11.5, as it has an excellent single-threaded benchmark built in. Once again we start with the Phenom II X6 1100T:

Turbo Core actually works on the Phenom II X6 here, albeit for very short durations. We see a couple of blips up to 3.7GHz, but the rest of the time the chip remains at 3.3GHz. Average clock speed is, once again, 3.31GHz.

Bulldozer does far better:

Here we see blips up to 4.2GHz and pretty consistent performance at 3.9GHz, exactly what you'd expect. Average clock speed is 3.93GHz, a full 9% above the 3.6GHz base clock of the FX-8150.

Intel's turbo fluctuates much more frequently here, moving between 3.4GHz and 3.6GHz as it runs into TDP limits. The average clock speed is 3.5GHz, a 6% increase over the 3.3GHz base. For the first time, AMD actually does a better job of scaling frequency via turbo than Intel. While I would like to see more granular turbo options, it's clear that Turbo Core is a real feature in Bulldozer and not the half-hearted attempt we got with the Phenom II X6. I measured the performance gains due to Turbo Core across a number of our benchmarks:

Average performance increased by just under 5% across our tests. It's nothing earth shattering, but it's a start. Don't forget how unassuming the first implementations of Turbo Boost were on Intel architectures. I hope we'll see even more significant gains from Turbo Core in future generations of Bulldozer derivatives.
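For those curious how that average is derived: each benchmark is run with Turbo Core enabled and disabled, a per-test speedup is computed, and the speedups are averaged. A minimal sketch follows; the scores in it are hypothetical placeholders, not our measured results:

```python
# Hypothetical turbo-off/turbo-on score pairs (higher is better);
# placeholders for illustration, not the article's measured data.
scores = {
    "x264 HD pass 2 (fps)":    (20.0, 20.8),
    "Cinebench 11.5 1T (pts)": (1.00, 1.06),
    "File compression (MIPS)": (18000, 18700),
}

# Per-test percentage gain from enabling Turbo Core
gains = [(on / off - 1) * 100 for off, on in scores.values()]
print(f"average gain: {sum(gains) / len(gains):.1f}%")  # ~4.6%
```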

Independent Clock Frequencies

When AMD introduced the original Phenom processor, it promised more energy-efficient execution by clocking each core independently. You could have a heavy workload running on Core 0 at 2.6GHz, while Core 3 ran a lighter thread at 1.6GHz. In practice, we felt Phenom's asynchronous clocking was a burden, as the CPU/OS scheduler combination would sometimes take too long to ramp up a core to a higher frequency when needed. The result, at least back then, was significantly lower performance in workloads that shuffled threads from one core to the next. The problem was so bad that AMD abandoned asynchronous clocking altogether in Phenom II.

The feature is back in Bulldozer, and this time AMD believes it will be problem free. The first major change comes with Windows 7: core parking should keep threads from haphazardly dancing around all available cores. The second change is that Bulldozer can ramp frequencies up and down much quicker than the original Phenom ever could. Chalk that up as a side benefit of Turbo Core being a major part of the architecture this time around.

Asynchronous clocking in Bulldozer hasn't proven to be a burden in any of our tests thus far; however, I'm reluctant to embrace it as an advantage just yet, at least not until we have more experience with the feature under our belts.

Comments

  • medi01 - Thursday, October 13, 2011

    The slightest "problem" imaginable with AMD GPUs would make it into the headlines.

    An nVidia article would go with comparing a cherry-picked overclocked board vs. a standard one from AMD, with laughable "explanations" of "oh, nVidia marketing asked us to do it, we kinda refused, but then we thought that since we've already kinda refused, we might still do what they've asked".

    "Objectively", are you kidding me?
  • JKflipflop98 - Thursday, October 13, 2011

    Anand runs the test, then writes down the number. Then he runs the test on the other PC, and writes down the number.

    If your number is lower, then it's physics "badmouthing" your precious, and not the site.
  • actionjksn - Wednesday, October 12, 2011

    @medi01 Considering the results, I think Anand was more than kind to AMD.
  • medi01 - Thursday, October 13, 2011

    I recall low-power AMD CPUs being tested with 1000W PSUs on this very site. How normal was that, cough? Or iPhones "forgotten in pocket" (the author's comment) in comparison photos where they would look unfavourable.

    The thing with tests is, you have games that favour one manufacturer, then other games that favour another. Choose the "right" set of games, and voilà...

    The move with a 1000W PSU on a 35W TDP CPU is TOO DAMN LOW and should never happen.

    On top of it, the absolute majority of games are more GPU-sensitive than CPU-sensitive. Now, one could reduce the resolution to ridiculously low levels so that the CPU becomes the bottleneck. But then, who on earth would care whether you get 150 or 194 frames per second at a resolution you'll never use?
  • Stas - Thursday, October 13, 2011

    Not sure what the deal is with PSUs or what article you're referring to. I'm assuming it made AMD power consumption look worse than it was because a 1kW PSU was running at 10% load, thus way out of its efficiency range. But w/e. My comment is mostly on CPU performance in games. Just because you don't run a game on a top-end CPU with $800 in multi-GPU tandem at the lowest settings doesn't mean that setup shouldn't be used to determine CPU performance. By making the CPU the bottleneck, you make it do as much as it can while the GPU spits out frames, whistling tunes and picking its fingernails. There is more load on the CPU than the GPU. Whichever CPU is faster will provide more FPS. Simple as that.
    Sure, no one will see a 20%-30% performance difference using more appropriate resolution and quality settings. But we're enthusiasts; we want to see peak performance differences and extreme loads. Most synthetic tests are irrelevant in everyday use, but performance has been measured that way for decades.
  • jleach1 - Friday, October 14, 2011

    I haven't seen one single sentence that was questionable in an AMD graphics review. In fact I'm glad to say that I'm a big fan of Intel CPU and AMD GPU combos, and have never seen even so much as a hint of bias.

    At the risk of over-exaggeration: in an age where we're all stuffing multiple cards in our systems, AMD cards are efficient, reliable, powerful, and they run cool. Yes, the drivers have sucked in the past, but they don't really anymore.

    (emphasis on the word seem)

    nVidia cards have just seemed clunky and hot as hell since the 400 series. I don't feel like gaming next to a space heater. And I definitely don't want to pay 40 percent more for ten percent more performance just to have a space heater and bragging rights.

    It's like AMD graphics are similar to Intel's CPU lineup: they're great performance-per-dollar parts, and they're efficient. But nVidia and Intel graphics are like AMD CPUs: they're either inefficient, or they're good at only a few things.

    The moral? What the *$&*, AMD... you might as well write off the whole desktop business if the competition is fifty percent faster and gaining ground. That 15 percent you're promising next year had better be closer to 50, or I'm going to forget about your processors altogether.
  • jleach1 - Friday, October 14, 2011

    Intel CPU and AMD combos*... sorry for the bad grammar. Writing on a tablet with Swype.
  • CeriseCogburn - Wednesday, March 21, 2012

    40% more cost and 10% more performance?
    You said that's across the board.
    I'm certainly glad you aren't the reviewer here on anything. I mean, really, that was over the top.
  • CeriseCogburn - Friday, June 8, 2012

    They went full-blown in favor of the Bullsnoozer by using the GPU-limited AMD HD 5870 to make the stupid AMD CPU look good.

    Thank your lucky stars they did that much for you.
  • MJEvans - Thursday, October 13, 2011

    I think your later point is exactly why the FPU support isn't as strong: (most) tasks that use the FPU appear to operate on large matrices of data. The sequential-processing side seems to be a good design idea (even if the implementation is a little immature and a little early), but it has slower-latency L1/L2 cache access. I hope that's an area that will be addressed by the next iteration.
