Conclusion

Intel's integrated power gating allows the newest Xeons to shut down cores completely when running idle, and this works transparently to the software. Thanks to this technology and the generally excellent per-clock performance, the Xeon L5520 is the best solution in most situations where performance/watt matters. The motherboard manufacturers are still struggling with the new platform, as SpeedStep is not properly supported in the latest version of ESX. However, this is a temporary situation: both ASUS and Supermicro will solve this problem in the coming weeks.

When it comes to low power, the AMD platform is getting a little old, but it is also mature. We did not have any trouble enabling PowerNow! in ESX 4.0. AMD's CPUs lack the advanced power gating features of the newest Intel generation, and the result is that AMD CPUs consume up to 20% more power running idle. That makes the AMD CPUs less attractive in applications that run mostly idle. The AMD Opteron EE is the CPU that consumes the least at full load, which will make it interesting for power-limited data centers. AMD's low power CPUs will become more attractive when the new power-optimized Fiorano "Kroner" platform arrives.

Discussing "low power" requires much more than just looking at CPUs.  The server nodes in ASUS RS700D server consume even less than the Willowbrook based server of Intel and CHenbro. ASUS is shaping up in the server world. However, a clever server design can save extra watts as the Supermicro Twin2 demonstrates. All in all, the Twin2 with Xeon L5520 CPUs is the best platform for those seeking an affordable server with an excellent performance/watt ratio at an affordable price. On the other hand, if performance/price is the most important criterion followed by performance/watt, we would probably opt for the six-core Opteron version of the Twin2. Supermicro has "a blade killer" avialable with the Twin², especially for those people who like to keep the hardware costs low. HP seems to have noticed this hole in the market too, this can get interesting ...

Comments

  • Doby - Thursday, July 23, 2009

    I don't understand why virtualization benchmarking is done with 16 or fewer VMs. With the CPU power of the newer CPUs you can consolidate far more onto them. Why aren't the benchmarks done with VMs running varying workloads, around 5% or less utilization each, to see how many VMs a particular server can handle? It would be far more real world.

    I have customers running over 150 VMs on a 4-CPU box; the performance comparison of which CPU can handle 16 VMs better is completely bogus. It's all about how many VMs I can get on a server without overloading it (60-80% utilization); a rough version of that estimate is sketched below.
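
    A back-of-envelope version of that estimate; the core count and per-VM load below are assumed example numbers, not measurements:

    # Rough consolidation estimate: how many low-utilization VMs fit on a host
    # before it hits a target CPU ceiling. All numbers are illustrative assumptions.
    def max_vms(host_cores, per_vm_load, target_utilization=0.70):
        # per_vm_load = average fraction of one core that a single VM keeps busy
        usable_capacity = host_cores * target_utilization
        return round(usable_capacity / per_vm_load)

    # e.g. a 16-core box, VMs averaging 5% of a core, a 70% utilization ceiling
    print(max_vms(host_cores=16, per_vm_load=0.05))  # ~224 VMs
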
  • JohanAnandtech - Thursday, July 23, 2009

    As explained in the article, we were limited by the amount of DDR3 we had available. We had a total of 48GB of DDR3 and had to divide it among the servers we tested. It should not be too hard to figure out what the power consumption could have been with twice or even four times more memory: just add 5W per DIMM (see the sketch at the end of this comment).

    BTW, 150 VMs on one box is not extremely rare in the real world. Are those VDI VMs?

    "the performance comparison of which CPU can handle 16 VMs better is completely bogus"

    On a dual-socket machine it is not. Why would it be "bogus"? I agree that in a perfect world we would have loaded the machines up to 48GB per server (that is a fortune of 192GB of RAM) and run something like 20-30 VMs per server. A little bit of understanding for the limitations we have to face would make my day...
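
    A quick sketch of that extrapolation; only the 5W-per-DIMM rule of thumb comes from the article, the 250W baseline and DIMM counts are made-up placeholders:

    # Extrapolate full-load power to bigger memory configurations using the
    # ~5W-per-DIMM (under load) rule of thumb; the 250W baseline is a placeholder.
    WATTS_PER_DIMM_UNDER_LOAD = 5

    def estimated_power(measured_watts, measured_dimms, target_dimms):
        extra_dimms = target_dimms - measured_dimms
        return measured_watts + extra_dimms * WATTS_PER_DIMM_UNDER_LOAD

    # a server measured at 250W with 6 DIMMs, extrapolated to 12 and 24 DIMMs
    for dimms in (6, 12, 24):
        print(dimms, "DIMMs -> about", estimated_power(250, 6, dimms), "W")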

  • uf - Thursday, July 23, 2009

    What is the power consumption for a lightly loaded server (not idle!), say at 10% and 30% average CPU utilization per core?
  • MODEL3 - Wednesday, July 22, 2009

    In your comment:
    If AMD would apply the methodology of Intel to determine TDP they would end up somewhere between ACP and the current "AMD TDP"

    Are you referring exclusively to the server CPUs?
    Because if not, the above statement is false and unprofessional.

    I don't have access to server CPUs, but my experience with mainstream consumer CPUs tells me the exact opposite:

    65nm dual-core (same performance level), 65W max TDP:
    both the 6420 (2.13GHz) & the 4600 (2.4GHz) have a lower* actual TDP than the 5600 (2.9GHz)

    45nm dual-core (same performance level), 65W max TDP:
    both the 7200 (2.53GHz) & the 6300 (2.8GHz) have a lower* actual TDP than the Athlon 250 (3.0GHz)

    45nm quad-core (same performance level), 65W max TDP:
    the Q8200S (2.33GHz) has a lower* actual TDP than the Phenom II 905e (2.5GHz)

    I don't even need to give details of the system configurations; everyone knows these facts.

    * not by much, but nevertheless lower (so from that point to the claim that "AMD has an actual TDP somewhere between AMD's ACP and Intel's TDP" there is a huge gap)
  • JohanAnandtech - Wednesday, July 22, 2009

    Correct. I only checked for server CPUs (see the PDF I linked).
  • JarredWalton - Wednesday, July 22, 2009

    There are several issues at work, particularly with desktop processors. For one, AMD and Intel both have a range of voltages on desktop parts, so (just throwing out numbers) one CPU might run at 1.2V and another sample of the same part might run at 1.225V - it's a small difference, but it can show up.

    Next, Intel and AMD both seem to put out numbers that are a theoretical worst case, and clock speed and voltage of a given chip help determine where the CPUs actually fall. The stated TDP on a part might be 65W, and with some 65W chips you can get very close to that while with others you might never get above 50W, but they'll both still state 65W.

    The main point is that AMD's ACP ends up lower than what is realistic and their TDP ends up as essentially the worst-case scenario. (AMD parts are marketed with the ACP number, not TDP.) Meanwhile, Intel's TDP is higher than AMD's ACP but isn't quite the worst-case scenario of AMD's TDP.

    I believe that's the way it all works out: Intel reports TDP that is lower than the absolute maximum but is typically higher than most users will see. AMD reports ACP that is more like an "average power" instead of a realistic maximum, but their TDP is pretty accurate. Even with this being the general case, processors are still released in families and individual chips can have much lower power requirements than the stated ACP/TDP - basically they should always come in equal to or lower than the ACP/TDP, but one might be 2W lower and another might be 15W lower and there's no easy way to say which it is without testing.
  • MODEL3 - Wednesday, July 22, 2009

    I mostly agree with what you're saying, except for 2 things:

    1. "AMD's TDP ends up as essentially the worst-case scenario" is not true in all cases, e.g. the Phenom X4 9350e (it has an actual TDP higher than 65W).

    2. In all the examples I gave, Intel & AMD had the same "official" TDP (and also more or less the same performance & the same manufacturing process), so by your logic AMD should have a lower actual TDP than Intel, which is not true.

    I live in Greece; here we pay €0.13 (inc. VAT) per kWh, so...

    On another topic, did you see the new prices for the AMD Athlon II X2 245 ($66) & 240 ($60)? (while the Intel 5300 costs $64 & the 5400 $74)

    They should have priced them at $69 & $78.

    No wonder AMD is losing so much money; they have to immediately fire those idiots who did it (it reminds me of the days before K8 when AMD used these methods).
  • JPForums - Wednesday, July 22, 2009

    I'm having a hard time correlating your chart and your assessment.

    "Notice how adding a second L5520 CPU and three DIMMs of DDR3-1066 to our Chenbro server only adds 9W."
    Found that one. However, on the previous page you make this statement:
    "So adding a Xeon X5570 adds about 58W (248W - 175W - three DIMMs of 5W), while adding an Opteron 2435 2.6GHz adds about 47W (243 - 181 - three DIMMs of 5W)."
    This implies to me that just adding the 3 DIMMs should have raised the power by 15W.

    "Add an Opteron EE to our AMD server and you add 22W."
    Check. Did you add the 3 DIMMs here as well?

    "The result is that the best AMD platform consumes about 20W more than the best Intel platform when running idle."
    Can't find this one. There is a 3W difference between the Xeon L5520 and the Opteron 2377 EE, and a 16W difference for the dual-CPU counterparts (closer). All the other comparisons leave the Intel platform consuming more power than the AMD counterpart. Is this supposed to be a comparison of the platform without the CPU? It is unclear to me given the words chosen. I was under the impression that the CPU is generally considered part of the platform.

    "Intel's power gating is the decisive advantage here: it can turn the inactive cores completely off. Another indication is that the dual Opteron 2435 consumes about 156W when we turn off dynamic power management, which is higher than the Xeon X5570 (150W)."
    An explanation of dynamic power management would be helpful. It sounds like you're saying that Intel's power management techniques are clearly better because when you turn both their power management and AMD's power management off, the Intel platform works better. The only way your statements make sense is if the dynamic power management you are talking about isn't a CPU level feature like clock gating. In any case, power management techniques are worthless if you can't use them.

    As a side question, when the power management support issue with the Xeon X5570 is addressed and AMD has a new lower-power platform, where do you predict the power numbers will end up? I'd still expect the "Nehalem" Xeons to win in performance/power, though.
  • JohanAnandtech - Wednesday, July 22, 2009

    Part 2 :-)

    "The result is that the best AMD platform consumes about 20W more than the best Intel platform when running idle."
    Can't find this one.

    135W - 119W = 16W. I made a small error there (spreadsheet error).

    "It sounds like you're saying that Intel's power management techniques are clearly better because when you turn both their power management and AMD's power management off, the Intel platform works better. "

    More or less. There are two ways the CPU can save power: 1) lower the voltage and clock speed, or 2) shut down the cores that you don't need. The Intel part is better at shutting down the cores it doesn't need: they are simply shut off completely and consume close to 0W. In the case of AMD, each core still consumes a few watts.

    So if you turn SpeedStep and PowerNow! off, you can see the effect of the second way of saving power. It confirms our suspicion about why the Opteron EE is not able to beat the L5520 when running idle (a toy model of the two mechanisms is sketched below).
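
    A toy model of those two mechanisms; all wattages below are assumptions picked for illustration, not measured values:

    # Toy idle-power model: cores that are only voltage/frequency scaled still
    # draw a few watts each, while fully power-gated cores drop to almost 0W.
    def idle_power(cores, uncore_watts, idle_core_watts, power_gating):
        per_core = 0.5 if power_gating else idle_core_watts  # gated cores draw ~0.5W
        return uncore_watts + cores * per_core

    # e.g. 4 idle cores, 20W of uncore/platform power, ~4W per idle-but-powered core
    print("with power gating:   ", idle_power(4, 20, 4, power_gating=True), "W")
    print("without power gating:", idle_power(4, 20, 4, power_gating=False), "W")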

  • JohanAnandtech - Wednesday, July 22, 2009

    I'll chop my answers up to keep these comments readable.

    quote:
    "Notice how adding a second L5520 CPU and three DIMMs of DDR3-1066 to our Chenbro server only adds 9W."
    Found that one. However, on the previous page you make this statement:
    "So adding a Xeon X5570 adds about 58W (248W - 175W - three DIMMs of 5W), while adding an Opteron 2435 2.6GHz adds about 47W (243 - 181 - three DIMMs of 5W)."
    This implies to me that just adding the 3 DIMMs should have raised the power by 15W.

    No, because the 9W is measured at idle. It is too small to measure accurately, but DIMMs do not consume 5W per DIMM at idle; probably more like 1W or so (a quick breakdown along those lines is sketched below).
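
    A quick breakdown along those lines; the ~1W idle figure per DIMM is an assumption, only the 9W total comes from the measurement:

    # Split the measured 9W idle increase (second L5520 + three DIMMs) into a
    # DIMM share and a CPU share, assuming roughly 1W per DIMM at idle.
    measured_idle_increase_watts = 9   # from the Chenbro idle measurement
    assumed_dimm_idle_watts = 1        # rough guess, not a measurement
    dimms_added = 3

    cpu_share = measured_idle_increase_watts - dimms_added * assumed_dimm_idle_watts
    print("second CPU idle draw: roughly", cpu_share, "W")  # -> roughly 6 W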

