Conclusion

Performance and RAS features took a giant leap forward when Intel replaced the Xeon 7400 with the Xeon 7500. The memory subsystem went from a high latency, completely bandwidth-choked loser (barely 10GB/s for 24 cores) to a low latency, very high bandwidth champion (up to 70GB/s). The Xeon E7 builds on that excellent platform and adds up to 35% higher performance.

We now have a proven platform with excellent RAS features that needs slightly less power while providing a decent performance boost. That's excellent, but the Xeon E7 still has a few weaknesses. One is the relatively high power consumption at idle. Compared to the high-end Power 7 servers, however, this kind of power consumption is very reasonable. The Power 7 CPUs are in the 100 to 170W TDP range, while the Xeon E7s are in the 95 to 130W TDP range. A quad 3.3GHz Power 755 with 256GB of RAM consumes 1650W according to IBM (slide 24), while our first measurements show that our 2.4GHz E7-4870 server consumes about 1300W in similar circumstances.

Considering that the 3.3GHz Power 7 and 2.4GHz E7-4870 perform at the same level, we'll go out on a limb and assume that the new Xeon wins the performance/watt race. AMD might take advantage of this "weakness", but availability of quad 16-core "Bulldozer" servers is still months away and we don't yet know what their power consumption will be.
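As a rough sanity check, here is the back-of-the-envelope math behind that claim, assuming the two servers really do deliver equal performance (the wattage figures are the ones quoted above):

```python
# Rough performance/watt comparison, assuming equal performance between
# the quad 3.3GHz Power 755 and our quad 2.4GHz E7-4870 test server.
power755_watts = 1650   # IBM's figure for a quad Power 755 with 256GB RAM
e7_4870_watts = 1300    # our first measurement for the quad E7-4870 server

advantage = power755_watts / e7_4870_watts - 1
print(f"Xeon E7 performance/watt advantage: ~{advantage:.0%}")  # ~27%
```

In other words, at equal performance the Xeon E7 server delivers roughly a quarter more work per watt, which is why we feel comfortable going out on that limb.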

The 10-core Xeons are pretty expensive ($3000-4600 per CPU), but many of these systems are bought to run software that costs ten times more. In a nutshell, Intel's Xeon E7 moves up the server CPU food chain. It closes the performance gap with the best RISC CPUs (see the SAP benchmarks), offers lower power consumption and cost, and relegates the rest of the x86 competition to the low end of the quad-socket x86 market.

For those looking for a virtualization platform, there is no x86 server that is able to offer such low response times at such high consolidation ratios. However, in order to get a good performance/watt ratio, you need to make sure that your quad Xeon E7 servers are working under high CPU loads. The quad Xeon E7 server is a good platform for consolidating CPU intensive applications. For less intensive VMs, it makes a lot more sense to check out the dual Xeon and quad Opteron offerings.

I would also like to thank Tijl Deneut for his invaluable assistance.

Comments

  • Casper42 - Thursday, May 19, 2011 - link

    HP makes the BL620c G7 blade server, which is a 2P Nehalem-EX (soon to offer Westmere-EX).
    And believe it or not, the massive HP DL980 G7 (8-processor Nehalem/Westmere-EX) is actually running four pairs of EX CPUs. HP has a custom ASIC bridge chip that ties them all together. This design MIGHT actually support running the 2P models, since each pair goes through the bridge chip.

    Dell makes the R810, and while it's a 4P server, the memory subsystem actually runs best when it's run as a 2P server. That would be a great platform for the 2P CPUs as well.
    Reply
  • mczak - Friday, May 20, 2011 - link

    2P E7 looks like a product for a very, very small niche to me. In terms of pure performance, a 2P Westmere-EP pretty much makes up for the deficit in cores with the higher clock - sure it also has less cache, but for the vast majority of cases it is probably far more cost effective (not to mention at least idle power consumption will be quite a bit lower). Which relegates the 2P E7 to cases where you don't need more performance than 2P Westmere-EP, but depend on some of the extra features (like more memory possible, RAS, whatever) the E7 offers. Reply
  • DanNeely - Thursday, May 19, 2011 - link

    If anyone wants an explanation of what changed between these types of memory, simmtester.com has a decent writeup and illustrations. Basically each LR-DIMM has a private link to the buffer chip, instead of each dimm having a very high speed buffer daisychained to the next dimm on the channel.

    http://www.simmtester.com/page/news/showpubnews.as...
    Reply
  • jcandle - Saturday, May 21, 2011 - link

    I love the link but your comment is a bit misleading.

    The main difference isn't the removal of the point-to-point connections but the reversion to a parallel configuration similar to classic RDIMMs. The issues with FBDIMMs stemmed from their absurdly clocked serial bus that required 4x greater operating frequency over the actual DRAM clock.

    So in dumb terms... FBDIMM = serial, LRDIMM = parallel
    Reply
  • ckryan - Thursday, May 19, 2011 - link

    It must be very, very difficult to generate a review for server equipment. Once you get into this class of hardware it seems as though there aren't really any ways to test it unless you actually deploy the server. Anyway, kudos for the effort in trying to quantify something of this caliber. Reply
  • Shadowmaster625 - Thursday, May 19, 2011 - link

    Correct me if I'm wrong, but isn't an Opteron 6174 just $1000? And it is beating the crap out of this "flagship" Intel chip by a factor of 3:1 in performance per dollar, and beats it in performance per watt too? And this is the OLD AMD architecture? This means that Interlagos could pummel Intel by something like 5:1. At what point does any of this start to matter?

    You know it only costs $10,000 for a quad opteron 6174 server with 128GB of RAM?
    Reply
  • alpha754293 - Thursday, May 19, 2011 - link

    That's only true IF the programs/applications you're running on it aren't licensed by socket/processor/core count.

    How many of those Opteron systems would it take to match the performance? And the cost of the systems? And the cost of the software, if they're licensed on a per core basis?
    Reply
  • tech6 - Thursday, May 19, 2011 - link

    Software licensing is a part of the overall picture (particularly if you have to deal with Oracle) but the point is well taken that AMD delivers much better bang for the buck than Intel. An analysis of performance/$ would be an interesting addition to this article. Reply
  • erple2 - Thursday, May 19, 2011 - link

    The analysis isn't too hard. If you're licensing things on a per-core basis (hello, Oracle, I'm staring straight at you), then how much does the licensing cost have to be per core before you've made up that 20k difference in price (assuming AMD = 10k, Intel = 30k)? Well, it's simple: 20k divided by the 8 extra cores in the AMD server = $2500 per core. Now, if you factor in that on a per-core basis the Intel server is between 50 and 60% faster, things get worse for AMD. Assuming you could buy an AMD server that was 50% more powerful (via linearly increasing core count), that would be 50% more server, but remember each server has 20% more cores, so it's really about 60% more cores. Now you're talking about an approximately 76.8-core server. That's 36 more cores than Intel. So what does the licensing cost have to be before AMD isn't worth it at this performance level? Well, 20k/36 = $555 per core.

    OK, fair enough. Maybe things are licensed per socket instead. You still need 50% more sockets to get equivalent performance, so that's two more sockets (give or take) for the AMD to equal the Intel. Assuming things scale linearly with price, that "server" will cost roughly 15k for AMD. Licensing costs now have to be more than 7.5k higher per socket (the 15k difference in price between the AMD and Intel servers divided by two extra sockets) to make the Intel the "better deal" per performance. Do you know how much an Oracle suite of products costs? I'll give you a hint: 7.5k isn't that far off the mark.
    Reply
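The break-even arithmetic in the comment above can be sketched directly; the prices and core counts are the thread's assumptions (quad 12-core Opteron 6174 at $10k vs quad 10-core Xeon E7 at $30k), not measured figures:

```python
# Licensing break-even sketch using the thread's assumed figures:
# quad Opteron 6174 (4 x 12 cores) at $10k vs quad Xeon E7 (4 x 10 cores) at $30k.
amd_price, intel_price = 10_000, 30_000
amd_cores, intel_cores = 4 * 12, 4 * 10
price_gap = intel_price - amd_price                 # $20,000

# Equal hardware: break-even license cost per extra AMD core
per_core_equal_hw = price_gap / (amd_cores - intel_cores)
print(f"Equal hardware: ${per_core_equal_hw:,.0f} per core")   # $2,500

# Equal performance: AMD needs ~60% more cores (50% more performance,
# on top of 20% more cores per socket), i.e. ~76.8 cores, 36 more than Intel
extra_cores = int(amd_cores * 1.6 - intel_cores)    # 36, truncated as in the comment
per_core_equal_perf = price_gap // extra_cores
print(f"Equal performance: ${per_core_equal_perf:,} per core")  # $555
```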
  • JarredWalton - Thursday, May 19, 2011 - link

    There are so many factors in the cost of a server that it's difficult to compare on just price and performance. RAS is a huge one -- the Intel server targets that market far more than the AMD does. Show me two identical servers in all areas other than CPU type and socket count, and then compare the pricing. For example, here are two similar HP ProLiant setups:

    HP ProLiant DL580 G7
    http://h10010.www1.hp.com/wwpc/us/en/sm/WF06b/1535...
    2 x Xeon E7-4830
    64GB RAM
    64 DIMM slots
    Advanced ECC
    Online Spare
    Mirrored Memory
    Memory Lock Step Mode
    DIMM Isolation
    Total: $13,679
    +$4000 for 2x E7-4830


    HP ProLiant DL585 G7
    http://h10010.www1.hp.com/wwpc/us/en/sm/WF06b/1535...
    4 x Opteron 6174
    64GB RAM
    32 DIMM slots
    Advanced ECC
    Online Spare
    Total: $14,889

    Now I'm sure there are other factors I'm missing, but seriously, those are comparable servers, and the Intel setup is only about $3000 more than the AMD equivalent once you add in the two extra CPUs. I'm not quite sure how much better/worse the AMD is relative to the Intel, but when people throw out numbers like "OMG it costs $20K more for the Intel server" by comparing an ultra-high-end Intel setup to a basic low-end AMD setup, it's misguided at best. By my estimates, for relatively equal configurations it's more like $3000 extra for Intel, which is less than a 20% increase in price, and that's not even factoring in software costs.
    Reply
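The totals in the comment above can be tallied directly from the list prices quoted there:

```python
# Totaling the two HP ProLiant configurations quoted above (list prices).
dl580_base = 13_679        # DL580 G7: 2 x Xeon E7-4830, 64GB, RAS options
dl580_extra_cpus = 4_000   # adding 2 more E7-4830s to fill all four sockets
dl585_total = 14_889       # DL585 G7: 4 x Opteron 6174, 64GB

intel_total = dl580_base + dl580_extra_cpus   # $17,679
premium = intel_total - dl585_total           # $2,790, i.e. "about $3000"
premium_pct = 100 * premium / dl585_total     # ~18.7%, under a 20% increase
print(f"Intel premium: ${premium:,} ({premium_pct:.1f}%)")
```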
