Performance and RAS features took a giant leap forward when Intel replaced the Xeon 7400 with the Xeon 7500. The memory subsystem went from a high latency, severely bandwidth-starved design (barely 10GB/s for 24 cores) to a low latency, very high bandwidth champion (up to 70GB/s). The Xeon E7 builds further on that excellent platform and adds up to 35% higher performance.

We now have a proven platform with excellent RAS features that needs slightly less power while providing a decent performance boost. That's excellent, but the Xeon E7 still has a few weaknesses. One is the relatively high power consumption at idle. Compared to the high-end Power 7 servers, this kind of power consumption is probably very reasonable: the Power 7 CPUs are in the 100 to 170W TDP range, while the Xeon E7s are in the 95 to 130W TDP range. A quad 3.3GHz Power 755 server (with 256GB RAM) consumes 1650W according to IBM (slide 24), while our first measurements show that our 2.4GHz E7-4870 server will consume about 1300W in those circumstances.

Considering that the 3.3GHz Power 7 and 2.4GHz E7-4870 perform at the same level, we'll go out on a limb and assume that the new Xeon wins the performance/watt race. AMD might take advantage of this "weakness", but availability of quad 16-core "Bulldozer" servers is still months away and we don't know what the power use will be yet.

The 10-core Xeons are pretty expensive ($3000-4600 per CPU), but many of these systems are bought to run software that costs 10 times more. In a nutshell, Intel's Xeon E7 moves up the server CPU food chain. The Xeon E7 closes the performance gap with the best RISC CPUs (see the SAP benchmarks) while offering lower power and cost, and the rest of the x86 competition is relegated to the low end of the quad x86 market.

For those looking for a virtualization platform, there is no x86 server that is able to offer such low response times at such high consolidation ratios. However, in order to get a good performance/watt ratio, you need to make sure that your quad Xeon E7 servers are working under high CPU loads. The quad Xeon E7 server is a good platform for consolidating CPU intensive applications. For less intensive VMs, it makes a lot more sense to check out the dual Xeon and quad Opteron offerings.

I would also like to thank Tijl Deneut for his invaluable assistance.

Comments

  • extide - Monday, June 6, 2011 - link

    When you spend $100,000+ on the S/W running on it, the HW costs don't matter. Recently I was in a board meeting for launching a new website that the company I work for is going to be running. These guys don't know/care about these detailed specs/etc. They simply said, "Cost doesn't matter, just get whatever is the fastest."
  • alpha754293 - Thursday, May 19, 2011 - link

    Can you run the Fluent and LS-DYNA benchmarks on the system please? Thanks.
  • mosu - Thursday, May 19, 2011 - link

    A good presentation with honest conclusions, I like this one.
  • ProDigit - Thursday, May 19, 2011 - link

    What if you compared it to 2x Core i7 desktops running Linux and free server software? What keeps companies from doing that?
  • Orwell - Thursday, May 19, 2011 - link

    Most probably the lack of support for more than about 48GiB of RAM, the lack of ECC in the case of Intel, and the lack of multi-socket support, just to name a few.
  • ganjha - Thursday, May 19, 2011 - link

    There is always the option of clusters...
  • L. - Friday, May 20, 2011 - link

    Err... like you're going to go cheap for the CPU and then put everything on InfiniBand --
  • DanNeely - Thursday, May 19, 2011 - link

    Many of the uses for this class of server involve software that won't scale across multiple boxes due to network latency or monolithic design. The VM farm test was one example that would, but the lack of features like ECC support would preclude it from consideration by 99% of the buyers of godbox servers.
  • erple2 - Thursday, May 19, 2011 - link

    I think that more and more people are realizing that the issue is more about the lack of linear scaling than anything like ECC. Buying a bulletproof server is turning out to cost way too much money (I mean ACTUALLY bulletproof, not "so far, this server has been rock solid for me").

    I read an interesting article about "design for failure" (note, NOT the same thing as "design to fail") by Jeff Atwood the other day, and it really opened my eyes. Each extra 9 in 99.99% uptime starts costing exponentially more money. That raises the question: should you be investing more money into a server that shouldn't fail, or should you be investigating why your software is so fragile that it can't accommodate a hardware failure?

    I dunno. Designing and developing software that can work around hardware failures is a very difficult thing to do.
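The "each extra 9" point above is easy to quantify on the downtime side: every additional nine cuts the allowed annual downtime tenfold, while the cost of eliminating that last sliver of downtime grows far faster. A minimal sketch of the arithmetic (illustrative only, not from the article):

```python
# Allowed downtime per year for a given number of "nines" of availability.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes(nines: int) -> float:
    """Annual downtime budget (in minutes) at e.g. 99.9% for nines=3."""
    availability = 1 - 10 ** -nines
    return MINUTES_PER_YEAR * (1 - availability)

for n in range(2, 6):
    print(f"{n} nines: {downtime_minutes(n):8.2f} min/year")
```

Going from four nines (~53 minutes/year) to five nines (~5 minutes/year) means a single reboot can blow the whole annual budget, which is exactly where hardware-only approaches get exponentially expensive.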
  • L. - Thursday, May 19, 2011 - link

    Well ./ obvious.

    Who has a fton of servers ? Google
    How do they manage availability ?
    So much redundancy that resilience is implicit and "reduced service" isn't even all that reduced.

    And no, designing/developing s/w that works around h/w failures is not that hard at all; it is in fact quite common (load balancing, active/passive setups, virtualization helps too, etc.).
