The x86 rack server space is very crowded, but it is still possible to rise above the crowd. Quite a few data centers have "gaping holes" in their racks because they have exceeded their power or cooling capacity and can no longer add servers. One way to distinguish your server from the masses is to make it a very low power server. The x86 rack server market is also very cost sensitive, so any innovation that seriously cuts the cost of buying and managing a server will draw attention. This low power, cost sensitive part of the market does not get nearly the attention it deserves compared to the high performance servers, but it is a huge market. According to AMD, sales of their low power (HE and EE) Opterons account for up to 25% of their total server CPU sales, while the performance oriented SE parts only amount to 5% or less. Granted, AMD's presence in the performance oriented market is not that strong right now, but it is a fact that low power servers are getting more popular by the day.

The low power market is very diverse. The people running "cloudy" data centers are - with good reason - completely power obsessed, as expanding a data center is a very costly affair, to be avoided at all costs. These people tend to almost automatically buy servers with low power CPUs. Then there is the large group of people, probably working in small and medium enterprises (SMEs), who know they have many applications where performance is not the first priority. These people want to fill their rented rack space without paying the hosting provider a premium for extra current. It used to be rather simple: give heavy applications the (high performance) server they need, and go for the simplest, smallest, cheapest, and lowest power server for applications that peak at 15% CPU, like file servers and domain controllers. Virtualization made the server choices a lot more interesting: more performance per server does not necessarily go to waste; it can mean having to buy fewer servers, so prepare to face some interesting choices.

Do you go for a server that tries to minimize power and rack space by using low power CPUs and a dense form factor such as a Twin or blade server? In that case, you may end up with more servers than originally planned. Alternatively, do you use standard 1U/2U servers with standard CPUs? In that case, you may miss the chance to lower your monthly colocation and/or energy bill. While we won't be able to give you a perfectly tailored answer, we will give you some of the fundamental information you need to make an educated decision. In this article we measure how much power low power CPUs, such as the Opteron 2377 EE and the Xeon L5520, save compared to their "normal" siblings. We use our own virtualization benchmarks to make things a bit more realistic than SPECpower. Maybe the most important question: is the performance/watt ratio of a more expensive, low power server CPU really better?
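To make that last question concrete, here is a minimal back-of-the-envelope sketch of how you might weigh performance/watt against the price premium of a low power SKU. All numbers in it (benchmark scores, power draws, the premium of 150, the 0.13/kWh electricity rate) are hypothetical placeholders for illustration, not measurements from this article.

```python
def perf_per_watt(throughput, avg_power_w):
    """Benchmark throughput divided by average power draw at the wall."""
    return throughput / avg_power_w

def payback_years(price_premium, power_saved_w, price_per_kwh=0.13, hours_per_year=24 * 365):
    """Years needed for the energy savings of a lower power CPU to recoup its
    price premium, assuming the server runs 24/7 at that power delta.
    price_premium is expressed in the same currency as price_per_kwh."""
    kwh_saved_per_year = power_saved_w * hours_per_year / 1000.0
    return price_premium / (kwh_saved_per_year * price_per_kwh)

# Hypothetical example: a low power SKU that costs 150 more, scores 5% lower,
# but draws 30W less under load.
standard = {"score": 100.0, "power_w": 260.0}
low_power = {"score": 95.0, "power_w": 230.0}

print(perf_per_watt(standard["score"], standard["power_w"]))    # ~0.38
print(perf_per_watt(low_power["score"], low_power["power_w"]))  # ~0.41
print(payback_years(150.0, standard["power_w"] - low_power["power_w"]))  # ~4.4 years
```

With these made-up numbers the low power chip wins on performance/watt, but the premium only pays for itself after several years of 24/7 operation; the real measurements later in the article are what should feed such a calculation.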

Focusing on the CPUs alone would be a missed opportunity. Whether you need 10 or 8 servers to consolidate your applications does not depend solely on CPU power, but also on the amount of memory you can place inside the server, the number of expansion slots, and so on. In addition, power consumption does not depend solely on the CPU but also on how cleverly the server engineers design the chassis and power supply. We assembled four different servers from three different manufacturers; every server represents a different take on how to reduce power and CAPEX. Let us see how much sense it makes to invest in low power servers with low power CPUs in a virtualized environment.
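As a rough illustration of why memory capacity matters as much as CPU power when counting servers, here is a minimal consolidation sizing sketch. The VM mix and server specs below are made-up assumptions for illustration only, not the configurations we actually tested.

```python
import math

def servers_needed(vms, server, cpu_target=0.70):
    """Estimate how many servers a VM mix requires.

    The count is driven by whichever resource runs out first: memory, or
    CPU once average utilization is capped at cpu_target (e.g. 70%) to
    leave headroom for peaks.
    """
    total_mem_gb = sum(vm["mem_gb"] for vm in vms)
    total_cpu_ghz = sum(vm["avg_cpu_ghz"] for vm in vms)

    usable_cpu_ghz = server["cores"] * server["ghz"] * cpu_target
    by_cpu = math.ceil(total_cpu_ghz / usable_cpu_ghz)
    by_mem = math.ceil(total_mem_gb / server["mem_gb"])
    return max(by_cpu, by_mem)

# Hypothetical mix: 60 light VMs (4GB, ~0.3GHz average load each) on a
# dual quad-core 2.5GHz server with 48GB of RAM.
vms = [{"mem_gb": 4, "avg_cpu_ghz": 0.3}] * 60
server = {"cores": 8, "ghz": 2.5, "mem_gb": 48}

print(servers_needed(vms, server))  # 5: memory, not CPU, is the bottleneck here
```

In this made-up example the CPUs could carry the load with two servers, but the RAM ceiling forces you to buy five; that is exactly why the rest of the platform deserves as much attention as the processor.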

Does that mean this article is only for server administrators and CIOs? Well, we feel that hardware enthusiasts will find some interesting info too. We will test seven different CPUs, so this article complements our six-core Opteron "Istanbul" and quad-core Xeon "Nehalem" reviews. How do lower end Intel "Nehalem" Xeons compare with the high end quad-core Opterons? What is the difference between a lower clocked six-core and a higher clocked quad-core? How much processing power do you have to trade away when moving from a 95W TDP Xeon to a 60W TDP chip? What happens when moving from a 75W ACP (105W TDP) six-core Opteron to a 40W ACP (55W TDP) quad-core Opteron? Answering these questions is not the ultimate goal of this article, but it should shed some light on these topics for the interested reader.

Making Sense of AMD ACP/TDP and Intel TDP
Comments

  • Doby - Thursday, July 23, 2009 - link

    I don't understand why virtualization benchmarking is done with 16 or fewer VMs. With the CPU power of the newer CPUs you can consolidate far more than that. Why aren't the benchmarks done with VMs with varying workloads, around 5% or less utilization, to see how many VMs a particular server can handle? It would be far more real world.

    I have customers running over 150 VMs on a 4 CPU box; the performance comparison of which CPU can handle 16 VMs better is completely bogus. It's all about how many VMs I can get without overloading the server (60-80% utilization).
  • JohanAnandtech - Thursday, July 23, 2009 - link

    As explained in the article, we were limited by the amount of DDR3 we had available. We had a total of 48GB of DDR3 and had to test up to four servers. It should not be too hard to figure out what the power consumption would have been with twice or even four times more memory: just add 5W per DIMM.

    BTW, 150 VMs on one box is not extremely rare in the real world. Are those VDI VMs?

    "the performance comparison of which CPU can handle 16 VMs better is completely bogus"

    On a dual socket machine it is not. Why would it be "bogus"? I agree that in a perfect world we would have loaded each machine up to 48GB per server (that is a fortune of 192GB of RAM) and run something like 20-30 VMs per server. A little bit of understanding for the limitations we have to face would make my day...

  • uf - Thursday, July 23, 2009 - link

    What is the power consumption for a lightly loaded server (not idle!), say at 10% and 30% average CPU utilization per core?
  • MODEL3 - Wednesday, July 22, 2009 - link

    in your comment:
    If AMD would apply the methodology of Intel to determine TDP they would end up somewhere between ACP and the current "AMD TDP"

    Are you referring exclusively to the server CPUs?
    Because if not, the above statement is false and unprofessional.

    I don't have access to server CPUs, but my experience with mainstream consumer CPUs tells me the exact opposite:

    65nm dual core (same performance level, 65W max TDP):
    both the 6420 (2.13GHz) and the 4600 (2.4GHz) have a lower* actual TDP than the 5600 (2.9GHz)

    45nm dual core (same performance level, 65W max TDP):
    both the 7200 (2.53GHz) and the 6300 (2.8GHz) have a lower* actual TDP than the Athlon 250 (3.0GHz)

    45nm quad core (same performance level, 65W max TDP):
    the Q8200S (2.33GHz) has a lower* actual TDP than the Phenom II 905e (2.5GHz)

    I don't even need to give details of the system configurations; everyone knows these facts.

    * Not by much, but nevertheless lower (so from that point to the point of "AMD's actual TDP is somewhere between AMD's ACP and Intel's TDP" there is a huge gap).
  • JohanAnandtech - Wednesday, July 22, 2009 - link

    Correct. I only checked for server CPUs (see the pdf I linked).
  • JarredWalton - Wednesday, July 22, 2009 - link

    There are several issues at work, particularly with desktop processors. For one, AMD and Intel both have a range of voltages on desktop parts, so (just throwing out numbers) one CPU might run at 1.2V and another sample of the same part might run at 1.225V - it's a small difference, but it can show up.

    Next, Intel and AMD both seem to put out numbers that are a theoretical worst case, and clock speed and voltage of a given chip help determine where the CPUs actually fall. The stated TDP on a part might be 65W, and with some 65W chips you can get very close to that while with others you might never get above 50W, but they'll both still state 65W.

    The main point is that AMD's ACP ends up lower than what is realistic and their TDP ends up as essentially the worst-case scenario. (AMD parts are marketed with the ACP number, not TDP.) Meanwhile, Intel's TDP is higher than AMD's ACP but isn't quite the worst-case scenario of AMD's TDP.

    I believe that's the way it all works out: Intel reports TDP that is lower than the absolute maximum but is typically higher than most users will see. AMD reports ACP that is more like an "average power" instead of a realistic maximum, but their TDP is pretty accurate. Even with this being the general case, processors are still released in families and individual chips can have much lower power requirements than the stated ACP/TDP - basically they should always come in equal to or lower than the ACP/TDP, but one might be 2W lower and another might be 15W lower and there's no easy way to say which it is without testing.
  • MODEL3 - Wednesday, July 22, 2009 - link

    I mostly agree with what you're saying, except for two things:

    1. AMD's TDP ends up as essentially the worst-case scenario (not true in all cases, e.g. the Phenom X4 9350e has an actual TDP higher than 65W).

    2. In all the examples I gave, Intel & AMD had the same "official" TDP (and more or less the same performance and the same manufacturing process), so by your logic AMD should have a lower actual TDP than Intel, which is not true.

    I live in Greece; here we pay 0.13€ (inc. VAT) per kWh, so...

    On another topic, did you see the new prices for the AMD Athlon II X2 245 ($66) & 240 ($60)? (while the Intel 5300 costs $64 & the 5400 $74)

    They should have priced them at $69 & $78.

    No wonder AMD is losing so much money; they have to immediately fire the idiots who did this (it reminds me of the days before the K8 when AMD used these methods).
  • JPForums - Wednesday, July 22, 2009 - link

    I'm having a hard time correlating your chart and your assessment.

    "Notice how adding a second L5520 CPU and three DIMMs of DDR3-1066 to our Chenbro server only adds 9W."
    Found that one. However, on the previous page you make this statement:
    "So adding a Xeon X5570 adds about 58W (248W - 175W - three DIMMs of 5W), while adding an Opteron 2435 2.6GHz adds about 47W (243 - 181 - three DIMMs of 5W)."
    This implies to me that just adding the 3 DIMMs should have raised the power by 15W.

    "Add an Opteron EE to our AMD server and you add 22W."
    Check. Did you add the 3 DIMMs here as well?

    "The result is that the best AMD platform consumes about 20W more than the best Intel platform when running idle."
    Can't find this one. There is a 3W difference between the Xeon L5520 and the Opteron 2377 EE, and a 16W difference for the dual CPU counterparts (closer). All the other comparisons leave the Intel platform consuming more power than the AMD counterpart. Is this supposed to be a comparison of the platform without the CPU? It is unclear to me given the words chosen. I was under the impression that the CPU is generally considered part of the platform.

    "Intel's power gating is the decisive advantage here: it can turn the inactive cores completely off. Another indication is that the dual Opteron 2435 consumes about 156W when we turn off dynamic power management, which is higher than the Xeon X5570 (150W)."
    An explanation of dynamic power management would be helpful. It sounds like you're saying that Intel's power management techniques are clearly better because when you turn both their power management and AMD's power management off, the Intel platform works better. The only way your statements make sense is if the dynamic power management you are talking about isn't a CPU level feature like clock gating. In any case, power management techniques are worthless if you can't use them.

    As a side question, when the power management support issue with the Xeon X5570 is addressed and AMD has a new lower power platform, where do you predict the power numbers will end up? I'd still expect the "Nehalem" Xeons to win in performance/power, though.
  • JohanAnandtech - Wednesday, July 22, 2009 - link

    Part 2 :-)

    "The result is that the best AMD platform consumes about 20W more than the best Intel platform when running idle."
    Can't find this one."

    135W - 119W = 16W. I made a small error there (spreadsheet error).

    "It sounds like you're saying that Intel's power management techniques are clearly better because when you turn both their power management and AMD's power management off, the Intel platform works better. "

    More or less. There are two ways the CPU can save power: 1) lower the voltage and clock speed, or 2) shut down the cores that you don't need. The Intel part is better at shutting down the cores it doesn't need: they are simply shut off completely and consume close to 0W. In the case of AMD, each core still consumes a few watts.

    So if you turn SpeedStep and PowerNow! off, you can see the effect of the second way to save power. It confirms our suspicion of why the Opteron EE is not able to beat the L5520 when running idle.




  • JohanAnandtech - Wednesday, July 22, 2009 - link

    I'll chop my answers up to keep these comments readable.

    quote:
    "Notice how adding a second L5520 CPU and three DIMMs of DDR3-1066 to our Chenbro server only adds 9W."
    Found that one. However, on the previous page you make this statement:
    "So adding a Xeon X5570 adds about 58W (248W - 175W - three DIMMs of 5W), while adding an Opteron 2435 2.6GHz adds about 47W (243 - 181 - three DIMMs of 5W)."
    This implies to me that just adding the 3 DIMMs should have raised the power by 15W."

    No, because the 9W was measured at idle. It is too small to measure accurately, but DIMMs do not consume 5W per DIMM at idle; probably more like 1W or so.

