Power Management in Windows Server 2008 SP2

Enabling the C-states in ESXi 5 might give the Opteron 6276 an improved performance/watt ratio. The question is whether the lower power consumption at light loads will offset the performance impact. Although the "C-state enable" tweak does lower power consumption, the effect is not spectacular: 10% lower energy consumption at idle will probably not give the Opteron 6276 an amazing performance/watt ratio in ESXi. This tweak will make a difference in our EWL testing, not in the "full speed ahead" benchmarks. Also, our vApus FOS EWL testing showed that the Xeon consumed 25% less energy, so it will remain ahead.

As the virtualization benchmarks require more time to run, we will have to postpone that investigation to a later article. But what about Windows Server 2008 R2? The idle power of the Opteron 6276 was excellent there. So which power policy should be chosen? We compared the Opteron 6276's performance with the power policy set to "High Performance" against its performance with the policy set to "Balanced".

                                        Opteron 6276          Opteron 6276          Xeon X5670
                                        "High Performance"    "High Performance"    "High Performance"
                                        vs. "Balanced"        + C6 enabled          vs. "Balanced"
Cinebench single-threaded               +16%                  +18%                  +1%
Cinebench multi-threaded                +5%                   +5%                   +1%
Blender                                 +4%                   +13%                  +1%
Encryption/Decryption AES               +43% / +42%           +43% / +44%           +28% / +28%
Encryption/Decryption Twofish/Serpent   +8% / +8%             +8% / +8%             +0% / +0%
Compression/Decompression               +9% / +4%             +9% / +4%             +0% / +2%

If we combine our idle power consumption measurements with these numbers, things get a lot clearer. The "Balanced" power policy disables Turbo. Therefore, the maximum performance boost from enabling "High Performance" should be about 13%. The TrueCrypt AES benchmarks show much larger increases, which we honestly don't understand. A performance boost of more than 40% is only possible if the CPU boosts to 3.2GHz, but that is not supposed to happen. First, the TrueCrypt software is well threaded and uses all clusters (32 threads). Second, we disabled C6, so normally the CPU is not able to boost to 3.2GHz. Third, our monitoring clearly indicated a 2.6GHz clock, as expected.
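
For reference, here is the back-of-the-envelope arithmetic behind those two numbers, using the Opteron 6276's 2.3GHz base clock and its 2.6GHz all-core Turbo:

```latex
\frac{2.6\,\mathrm{GHz}}{2.3\,\mathrm{GHz}} \approx 1.13
  \quad\Rightarrow\quad \text{expected maximum gain} \approx 13\%
\\[4pt]
2.3\,\mathrm{GHz} \times 1.40 \approx 3.2\,\mathrm{GHz}
  \quad\Rightarrow\quad \text{a 40\%+ gain would require the 3.2GHz Turbo ceiling}
```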

We also did a quick x264 4.0 benchmark (1st pass), which is lightly threaded, and it showed a similarly large performance increase (46%!) simply from switching from "Balanced" to "High Performance" (Turbo limited to 2.6GHz, no C6). The Xeon only got a 13% increase in performance.

Closer monitoring reveals that "Balanced" frequently reduces the cores to 1.4GHz. So we have a situation similar to the one where we found power management problems on the AMD "Istanbul" Opteron when the power policy was set to "Balanced".

Basically "Balanced" brings the clock speed down to a low P-state even when a thread is demanding the maximum processing power. Or in other words, the power manager is too eager to bring the clock speed down instead of looking ahead: the polling is blind for the very near future. The result is that quite often the workload gets processed at 1.4GHz (for a short time).

In contrast, the "High Performance" setting does not make use of frequency scaling besides Turbo. So the CPU runs at 2.3GHz at the very minimum and frequently reaches 2.6GHz. If you buy an Opteron 6200 server, it is therefore strongly advised to choose the "High Performance" setting. Under light load, the balanced power manager saves a few percent of power, but in our opinion that is not worth the large performance degradation. Notice also that the Xeon hardly suffers from the same problem, with the exception of the AES-NI enabled TrueCrypt bench, and even then the performance impact is significantly lower.
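
Switching the policy is a one-line affair. As a minimal sketch, the snippet below shells out to the powercfg utility that ships with Windows; the GUID is the stock "High Performance" plan on Windows Server 2008 R2/Windows 7 (powercfg can of course also be run directly from a command prompt):

```python
# Minimal sketch: activate the stock Windows "High Performance" power plan
# via the built-in powercfg utility.
import subprocess

HIGH_PERFORMANCE = "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"  # default High Performance scheme GUID

def set_high_performance():
    # List the installed power schemes (the active one is marked with an asterisk).
    subprocess.run(["powercfg", "/list"], check=True)
    # Make "High Performance" the active scheme.
    subprocess.run(["powercfg", "/setactive", HIGH_PERFORMANCE], check=True)

if __name__ == "__main__":
    set_high_performance()
```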

In a nutshell: the "Balanced" power policy strongly favors the Xeon, as the performance impact there is non-existent or much smaller. Let us now look at some raw performance numbers.

Comments

  • DigitalFreak - Tuesday, November 15, 2011 - link

    Good to see that CPU-Z correctly reports the 6276 as 8 core, 16 thread, instead of falling for AMD's marketing BS.
  • N4g4rok - Tuesday, November 15, 2011 - link

    If each module possesses two integer cores and a shared floating-point core, what's to say that it can't be considered a practical 16-core?
  • phoenix_rizzen - Tuesday, November 15, 2011 - link

    Each module includes 2x integer cores, correct. But the floating point core is "shared-separate", meaning it can be used as two separate 128-bit FPUs or as a single 256-bit FPU.

    Thus, each Bulldozer module can run either 3 or 4 threads simultaneously:
    - 2x integer + 2x 128-bit FP threads, or
    - 2x integer + 1x 256-bit FP threads

    It's definitely a dual-core module. It's just that the number of threads it can run is flexible.

    The thing to remember, though, is that these are separate hardware pipelines, not mickey-moused hyperthreaded pipelines.
  • JohanAnandtech - Tuesday, November 15, 2011 - link

    You can get into a long discussion about that. The way I see it, part of the core is "logical/virtual" and the other part is real in Bulldozer. What is the difference between an SMT thread and a CMT thread when they enter the fetch-decode stages? Nothing AFAIK; both instructions are interleaved, and they both have a "thread tag".

    The difference comes when they are scheduled: in the CMT Bulldozer, the instructions enter a real core with only one context. With SMT, the instructions enter a real core which still interleaves two logical contexts. So the core still consists of two logical cores.

    It gets even more complicated when you look at the FP "cores". AFAIK, the FP cores of Interlagos are nothing more than 8 SMT-enabled cores.
  • alpha754293 - Tuesday, November 15, 2011 - link

    I think that Johan is partially correct.

    The way I see it, the FPU on the Interlagos is this:

    It's really a 256-bit wide FPU.

    It can't really QUITE separate the ONE physical FPU into two 128-bit wide FPUs; more probably, in reality, it interleaves them (which is really just code for "FPU-starved").

    Intel's original HTT had this as a MAJOR problem, because the tests back then could range from a -30% to +30% performance difference. Floating-point intensive benchmarks have ALWAYS suffered, mostly because of this: suppose you're writing a calculator using ONLY 8-byte (64-bit) double precision.

    NORMALLY, that should mean that you should be able to crunch through four doubles at the same time. And that's kinda/sorta true.

    Now, if you are running two programs, really...I don't think that the CPU, the compiler (well..maybe), the OS, or the program knows that it needs to compile for 128-bit-wide FPUs if you're going to run two instances or two (different) calculators.

    So it's resource-starved when trying to do both calculation processes at the same time.

    For non-FPU-heavy workloads, you can get away with that. For pretty much the entire scientific/math/engineering (SME) community, it's an 8-core processor or a highly crippled 16-core processor.

    Intel's latest HTT seems to have addressed a lot of that, and in practical terms, you can see upwards of 30% performance advantage even with FPU-heavy workloads.

    So in some cases, the definition of a core depends on what you're going to be doing with it. For SME/HPC it's good cuz it can do 12 actual cores' worth of work with 8 FPUs (33% more efficient), but it sucks because, unless they come out with a 32-thread/16-core monolithic die, as stated, it's only marginally better than the last generation. It's just cheaper. And it's going to get incrementally faster with higher clock speeds.
  • alpha754293 - Tuesday, November 15, 2011 - link

    P.S. Also, as in Anand's article about nVidia Optimus:

    Context switching, even at the CPU level, while faster, is still costly. Maybe not nearly as costly as shuffling data around, but it's still pretty costly.
  • Samus - Wednesday, November 16, 2011 - link

    Ouch, this is going to be AMD's Itanium. That is, it has architecture adoption problems that people simply won't build around. Maybe less substantial than IA64, but still a huge performance loss because of underutilized integer units.
  • leexgx - Wednesday, November 16, 2011 - link

    I think the way CPU-Z reports it for BD CPUs is correct: each core has 2 FPs, so 8 cores and 16 threads is correct.

    Too bad Windows does not understand how to spread the load correctly on an AMD CPU (Windows 7 with Intel HT CPUs works fine and spreads the load correctly; SP1 improves that further, but only for Intel CPUs).

    Windows 7 SP1 makes bigger use of core parking and gives better CPU usage on Intel CPUs. As I have been seeing on 3 systems, most workloads now stay on the first 2 cores and the other 2 stay parked. On the AMD side it's still broken with Cool'n'Quiet enabled.
  • Stuka87 - Tuesday, November 15, 2011 - link

    So, what is your definition of a core?

    Bulldozer does not utilize Hyper-Threading, which takes a single integer core and can at times put two threads into that single integer core. A Bulldozer module has actual hardware to run two threads at the same time. This would suggest there are two physical cores.

    Does it perform like an Intel 16-core (if there were such a thing)? No. But that does not mean it is not in fact a 16-core device, as the hardware is there. Yes, they share an FPU, but that doesn't mean they are not cores.
  • Filiprino - Tuesday, November 15, 2011 - link

    Actually, Bulldozer is 16 cores. Each module has two dedicated integer units and a floating-point unit which can act as two 128-bit units or as one 256-bit unit for AVX. So you can have 2 and 2 per module.
    Bulldozer does not use hyperthreading.
