vApus Mark II Response Time

Each tile in vApus Mark II demands 18 virtual CPUs: four for the Oracle OLTP test, eight for the MS SQL Server OLAP test, and six for the three web application VMs (two vCPUs each). A four-tile test therefore requires 72 virtual CPUs. A quad Xeon E7-4870 offers 40 cores and 80 threads with Hyper-Threading enabled, so a test that puts 72 virtual CPUs to work cannot measure the total throughput of the quad Xeon E7. In fact, some of those 72 virtual CPUs are not working at 100% all of the time; the CPU load caused by the web VMs, for example, shows a lot of spikes. Thus, we cannot interpret the throughput numbers without also looking at the response times.
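
As a quick sanity check on those numbers, here is a minimal Python sketch of the vCPU arithmetic; the per-tile breakdown and the E7-4870 core/thread counts are the ones given above:

```python
# Per-tile demand from the text: Oracle OLTP + SQL Server OLAP + 3 web VMs
VCPUS_PER_TILE = 4 + 8 + 3 * 2   # = 18 vCPUs per tile

PHYSICAL_CORES  = 40  # quad Xeon E7-4870
LOGICAL_THREADS = 80  # with Hyper-Threading enabled

for tiles in (4, 5):
    vcpus = tiles * VCPUS_PER_TILE
    print(f"{tiles} tiles: {vcpus} vCPUs "
          f"({vcpus / PHYSICAL_CORES:.0%} of cores, "
          f"{vcpus / LOGICAL_THREADS:.0%} of threads)")
# 4 tiles: 72 vCPUs (180% of cores, 90% of threads)
# 5 tiles: 90 vCPUs (225% of cores, 112% of threads)
```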

[Chart: vApus Mark II response time]

Back to our throughput scores. Ideally, we would measure throughput at exactly the same response times on every system. With our current stress-testing software, however, keeping the response times identical would be an extremely time-consuming process.
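
If we did want to compare systems at a fixed response-time target, one simple offline approximation is to interpolate the throughput score between two measured tile counts. A minimal sketch, with purely hypothetical measurements:

```python
def throughput_at_target(points, target_rt):
    """Linearly interpolate the throughput score at a fixed response-time target.

    points: (response_time_ms, throughput_score) pairs, sorted by response time.
    """
    for (rt_lo, tp_lo), (rt_hi, tp_hi) in zip(points, points[1:]):
        if rt_lo <= target_rt <= rt_hi:
            frac = (target_rt - rt_lo) / (rt_hi - rt_lo)
            return tp_lo + frac * (tp_hi - tp_lo)
    raise ValueError("target response time is outside the measured range")

# Hypothetical numbers for illustration only: a 4-tile run scoring 170 at
# 100 ms and a 5-tile run scoring 190 at 109 ms interpolate to ~181 at a
# 105 ms response-time target.
print(throughput_at_target([(100, 170), (109, 190)], 105))  # ~181
```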

[Chart: vApus Mark II score revisited]

Since the quad Opteron shows a 40% increase in response time going from four to five tiles (from 20 to 25 VMs), we believe the four-tile score (149) is more representative of its "real performance". The extra throughput that the five-tile test delivers comes at too high a response time price.

The response time of the quad Xeon X7560 increases by 9% when we try to load it with five extra VMs. In this case, the "real and fair" throughput score is a bit harder to determine: it lies somewhere between the four-tile and five-tile scores, probably around 180.

In the case of the quad Xeon E7, however, things are crystal clear. Running 20 or 25 VMs makes no difference: the response times stay in the same league. Here we take the highest score as the real one.

So if we take response times into account, the quad E7-4870 is about 35% faster than its predecessor (243 vs. 180) and about 63% faster than the AMD system in our test (243 vs. 149). AMD's fastest processor is now the 2.5GHz Opteron 6180 SE. That CPU is clocked about 13% higher and should thus be able to reach a score of around 168, which means the Xeon E7-4870 should still hold a 44% (or more) advantage over its nearest but much cheaper competitor in this particular workload.
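
For readers who want to verify those percentages, the arithmetic works out as follows; note that the 6180 SE figure is a clock-speed extrapolation from the measured Opteron score, not a measurement:

```python
e7_score      = 243  # quad E7-4870: 5-tile score (response times stay flat)
x7560_score   = 180  # quad X7560: estimated between its 4- and 5-tile scores
opteron_score = 149  # quad Opteron: 4-tile score

print(f"E7-4870 vs. X7560:   +{e7_score / x7560_score - 1:.0%}")    # +35%
print(f"E7-4870 vs. Opteron: +{e7_score / opteron_score - 1:.0%}")  # +63%

# Scale the measured Opteron score by clock speed alone: the 2.5GHz 6180 SE
# vs. the 2.2GHz part tested here (the ~13% clock bump mentioned in the text).
# This is an optimistic, best-case extrapolation.
estimated_6180se = opteron_score * 2.5 / 2.2                            # ~169
print(f"E7-4870 vs. 6180 SE: +{e7_score / estimated_6180se - 1:.0%}")  # +44%
```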

Comments

  • extide - Monday, June 6, 2011 - link

    When you spend $100,000+ on the S/W running on it, the HW costs don't matter. Recently I was in a board meeting for launching a new website that the company I work for is going to be running. These guys don't know/care about these detailed specs/etc. They simply said, "Cost doesn't matter, just get whatever is the fastest."
  • alpha754293 - Thursday, May 19, 2011 - link

    Can you run the Fluent and LS-DYNA benchmarks on the system please? Thanks.
  • mosu - Thursday, May 19, 2011 - link

    A good presentation with honest conclusions, I like this one.
  • ProDigit - Thursday, May 19, 2011 - link

    What if you compared it to two Core i7 desktops running Linux and free server software? What keeps companies from doing that?
  • Orwell - Thursday, May 19, 2011 - link

    Most probably the lack of support for more than about 48GiB of RAM, the lack of ECC in Intel's case, and the lack of multi-socket support, just to name a few.
  • ganjha - Thursday, May 19, 2011 - link

    There is always the option of clusters...
  • L. - Friday, May 20, 2011 - link

    Err... like you're going to go cheap on the CPU and then put everything on InfiniBand --
  • DanNeely - Thursday, May 19, 2011 - link

    Many of the uses for this class of server involve software that won't scale across multiple boxes, whether due to network latency or monolithic design. The VM farm test was one example that would scale; but the lack of features like ECC support would preclude it from consideration by 99% of the buyers of godbox servers.
  • erple2 - Thursday, May 19, 2011 - link

    I think more and more people are realizing that the issue is the lack of linear scaling rather than anything like ECC. Buying a bulletproof server is turning out to cost way too much money (I mean ACTUALLY bulletproof, not "so far, this server has been rock solid for me").

    I read an interesting article about "design for failure" (note, NOT the same thing as "design to fail") by Jeff Atwood the other day, and it really opened my eyes. Each extra 9 in 99.99% uptime starts costing exponentially more money. That raises the question: should you be investing more money into a server that shouldn't fail, or should you be investigating why your software is so fragile that it can't accommodate a hardware failure?

    I dunno. Designing and developing software that can work around hardware failures is a very difficult thing to do.
  • L. - Thursday, May 19, 2011 - link

    Well... obvious.

    Who has a fton of servers? Google.
    How do they manage availability?
    So much redundancy that resilience is implicit and "reduced service" isn't even all that reduced.

    And no, designing/developing software that works around hardware failures is not that hard at all; in fact it's quite common (load balancing, active/passive setups, virtualization helps too, etc.).
