Power Extremes: Idle and Full Load

Idle and full load power measurements are hardly representative of virtualized environments, but they offer some interesting data points. In the first test we report the power consumption while running vApus Mark II, which means the servers are working at 85-98% CPU load. We measured with four tiles, but we also tested the Xeon with five tiles (E7-4870 5T).

We test with redundant power supplies active, so the Dell R815 uses 1+1 1100W PSUs and the QSSC-4R uses 2+2 850W PSUs. Remember that the QSSC server has cold redundant PSUs, so the redundant PSUs consume almost nothing. There's more to the story than just the PSUs and performance, of course: differences in RAS features, chassis options, PSUs, and other aspects of the overall platform can definitely impact power use, and that's unfortunately something we can't easily eliminate from the equation.

vApus Mark II Full Power

The quad Xeon E7-4870 saves about 7.5% power (894W vs. 966W) compared to its older brother, the X7560. In absolute terms, the power consumption numbers look very bad next to the AMD system. However, with five tiles the quad Xeon E7 delivers 63% higher performance while consuming 57% more power. We can conclude that at very high loads, the Xeon E7's performance/watt ratio is quite competitive.
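A quick back-of-the-envelope check of that conclusion; note that the baseline values below are arbitrary placeholders, as only the 63%/57% ratios come from our measurements:

```python
# Rough performance-per-watt comparison. Only the ratios (63% higher
# performance, 57% more power for the five-tile Xeon E7 vs. the quad
# Opteron) come from the measurements; the baseline values are arbitrary.

opteron_perf = 100.0        # baseline vApus score (arbitrary units)
opteron_power = 600.0       # placeholder full-load draw in watts

xeon_perf = opteron_perf * 1.63    # 63% higher performance
xeon_power = opteron_power * 1.57  # 57% more power

opteron_ppw = opteron_perf / opteron_power
xeon_ppw = xeon_perf / xeon_power

# The ratio is independent of the placeholder baselines: 1.63 / 1.57
print(f"Xeon E7 perf/watt relative to Opteron: {xeon_ppw / opteron_ppw:.3f}x")
```

The roughly 4% perf/watt edge is why "57% more power" is not as damning as the absolute wattage suggests.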

For idle testing, we tested with both Samsung 1333MHz 4GB 1.35V (low power) DDR3 registered DIMMs (M393B5273DH0) and 1.5V DIMMs (M393B5170FH0-H9).

vApus Mark II Idle Power

Although the Xeon X7560 does not officially support low power DIMMs, they saved about 5% of the total power. The Xeon E7's more advanced memory controller was able to reduce power by 8%. The picture is clear: if your servers run idle for long periods of time, the quad Opteron server can run at very low power, while the quad Xeon server needs at least 600W if you do not outfit it with low power 1.35V DIMMs. How much of the power difference comes from the platform in general and how much is specific to the servers being tested is unfortunately impossible to say without testing additional servers.

As we have stated before, maximum and minimum power are not very realistic, so let's check out our real-world scenario.



Comments

  • Casper42 - Thursday, May 19, 2011 - link

    HP makes the BL620c G7 blade server, which is a 2P Nehalem-EX (soon to offer Westmere-EX).
    And believe it or not, the massive HP DL980 G7 (8-processor Nehalem/Westmere-EX) is actually running four pairs of EX CPUs. HP has a custom ASIC bridge chip that ties them all together. This design MIGHT actually support running the 2P models, as each pair goes through the bridge chip.

    Dell makes the R810, and while it's a 4P server, the memory subsystem actually runs best when it's run as a 2P server. That would be a great platform for the 2P CPUs as well.
  • mczak - Friday, May 20, 2011 - link

    2P E7 looks like a product for a very, very small niche to me. In terms of pure performance, a 2P Westmere-EP pretty much makes up for the deficit in cores with its higher clock; sure, it also has less cache, but for the vast majority of cases it is probably far more cost effective (not to mention at least idle power consumption will be quite a bit lower). That relegates the 2P E7 to cases where you don't need more performance than 2P Westmere-EP, but depend on some of the extra features (like higher memory capacity, RAS, whatever) the E7 offers.
  • DanNeely - Thursday, May 19, 2011 - link

    If anyone wants an explanation of what changed between these types of memory, simmtester.com has a decent writeup with illustrations. Basically, each LR-DIMM has a private link to the buffer chip, instead of each DIMM having a very high speed buffer daisy-chained to the next DIMM on the channel.

  • jcandle - Saturday, May 21, 2011 - link

    I love the link but your comment is a bit misleading.

    The main difference isn't the removal of the point-to-point connections but the reversion to a parallel configuration similar to classic RDIMMs. The issues with FB-DIMMs stemmed from their absurdly clocked serial bus, which required an operating frequency 4x greater than the actual DRAM clock.

    So in dumb terms... FBDIMM = serial, LRDIMM = parallel
  • ckryan - Thursday, May 19, 2011 - link

    It must be very, very difficult to generate a review for server equipment. Once you get into this class of hardware it seems as though there aren't really any ways to test it unless you actually deploy the server. Anyway, kudos for the effort in trying to quantify something of this caliber.
  • Shadowmaster625 - Thursday, May 19, 2011 - link

    Correct me if I'm wrong, but isn't an Opteron 6174 just $1000? And it is beating the crap out of this "flagship" Intel chip by a factor of 3:1 in performance per dollar, and beats it in performance per watt too? And this is the OLD AMD architecture? This means Interlagos could pummel Intel by something like 5:1. At what point does any of this start to matter?

    You know it only costs $10,000 for a quad opteron 6174 server with 128GB of RAM?
  • alpha754293 - Thursday, May 19, 2011 - link

    That's only true IF the programs/applications that you're running on it aren't licensed by socket/processor/core count.

    How many of those Opteron systems would it take to match the performance? And the cost of the systems? And the cost of the software, if they're licensed on a per core basis?
  • tech6 - Thursday, May 19, 2011 - link

    Software licensing is part of the overall picture (particularly if you have to deal with Oracle), but the point is well taken that AMD delivers much better bang for the buck than Intel. An analysis of performance/$ would be an interesting addition to this article.
  • erple2 - Thursday, May 19, 2011 - link

    The analysis isn't too hard. If you're licensing things at a per-core cost (hello, Oracle, I'm staring straight at you), then how much does the licensing cost have to be per core before you've made up that $20k difference in price (assuming AMD = $10k, Intel = $30k)? Well, it's simple: $20k / 8 more cores per server for the AMD = $2500 per core. Now, if you factor in that on a per-core basis the Intel server is between 50 and 60% faster, things get worse for AMD. Assuming you could buy an AMD server that was 50% more powerful (by linearly increasing core count), that would be 50% more server, but remember each server has 20% more cores, so it's really about 60% more cores. Now you're talking about an approximately 76.8-core server. That's 36 more cores than Intel. So what does the licensing cost have to be before AMD isn't worth it at this performance level? Well, $20k / 36 = $555 per core.

    OK, fair enough. Maybe things are licensed per socket instead. You still need 50% more sockets to get equivalent performance, so that's two more sockets (give or take) for the AMD to equal the Intel in performance. Assuming price scales linearly, that AMD "server" will cost roughly $15k. Licensing costs now have to be more than $7.5k per socket ($15k difference in price between the AMD and Intel servers, divided by two extra sockets) to make the Intel the "better deal" per performance. Do you know how much an Oracle suite of products costs? I'll give you a hint: $7.5k isn't that far off the mark.
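The break-even arithmetic above can be sketched in a few lines; all prices, core counts, and the 60% scaling factor are the commenter's stated assumptions, not measured figures:

```python
# Break-even licensing cost, using the comment's assumptions:
# AMD server ~$10k, Intel ~$30k, and AMD needing ~60% more cores
# than its 48 to match the Intel box's performance.

price_gap = 30_000 - 10_000        # Intel hardware premium (assumed)
intel_cores = 4 * 10               # quad 10-core Xeon E7
amd_cores = 4 * 12                 # quad 12-core Opteron 6174

amd_cores_needed = amd_cores * 1.6                   # ~76.8 cores for equal perf
extra_cores = int(amd_cores_needed - intel_cores)    # ~36, as in the comment

per_core_breakeven = price_gap // extra_cores        # ~$555 per core

# Per-socket variant: two extra AMD sockets, ~$15k hardware gap.
per_socket_breakeven = (30_000 - 15_000) // 2        # $7500 per socket

print(per_core_breakeven, per_socket_breakeven)
```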
  • JarredWalton - Thursday, May 19, 2011 - link

    There are so many factors in the cost of a server that it's difficult to compare on just price and performance. RAS is a huge one -- the Intel server targets that market far more than the AMD does. Show me two servers identical in all areas other than CPU type and socket count, and then compare the pricing. For example, here are two similar HP ProLiant setups:

    HP ProLiant DL580 G7
    2 x Xeon E7-4830
    64GB RAM
    64 DIMM slots
    Advanced ECC
    Online Spare
    Mirrored Memory
    Memory Lock Step Mode
    DIMM Isolation
    Total: $13,679
    +$4000 for 2x E7-4830

    HP ProLiant DL585 G7
    4 x Opteron 6174
    64GB RAM
    32 DIMM slots
    Advanced ECC
    Online Spare
    Total: $14,889

    Now I'm sure there are other factors I'm missing, but seriously, those are comparable servers, and the Intel setup is only about $3000 more than the AMD equivalent once you add in the two extra CPUs. I'm not quite sure how much better/worse the AMD is relative to the Intel, but when people throw out numbers like "OMG, it costs $20K more for the Intel server" by comparing an ultra-high-end Intel setup to a basic low-end AMD setup, it's misguided at best. By my estimates, for relatively equal configurations it's more like $3000 extra for Intel, which is less than a 20% increase in price, and that's not even factoring in software costs.
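The price gap is easy to verify from the HP list prices quoted above:

```python
# Check the comparable-config price gap from the quoted HP list prices.
intel_total = 13_679 + 4_000   # DL580 G7 base plus two extra E7-4830 CPUs
amd_total = 14_889             # DL585 G7 with four Opteron 6174

delta = intel_total - amd_total
pct_increase = 100 * delta / amd_total

print(f"Intel premium: ${delta} ({pct_increase:.1f}%)")
```

That works out to a $2790 premium, under the 20% figure cited in the comment.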
