vApus Mark II

vApus Mark II is our newest benchmark suite, which tests how well servers cope with virtualizing "heavy duty" applications. We explained the benchmark methodology here. We used vSphere 4.1 Update 1, based on the 64-bit ESX 4.1.0 b348481 hypervisor.

[Chart: vApus Mark II score (* with 128GB of RAM)]

Benchmarks cannot be interpreted easily, and virtualization adds another layer of complexity. As always, we need to explain quite a few details and nuances.

First of all, we tested most servers with 64GB of RAM. However, the memory subsystem of the Quad Xeon needs 32 DIMMs before it can deliver maximum bandwidth. As some of these server systems will get those 32 DIMMs while others will not, we tested both with 16 DIMMs (64GB) and 32 DIMMs (128GB). Our vApus Mark II test requires only 11GB per tile: 4GB for the OLTP database, 4GB for the OLAP database, and 1GB for each of the three web applications (3GB in total). So even a five-tile test demands only 55GB. Thus, in this particular benchmark there is no real advantage to having 128GB of RAM other than the bandwidth advantage for the quad Xeon platform. That is why we do not test the Quad Opteron with more than 64GB: it makes no difference and would only make the graph more complex.
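To make that sizing arithmetic explicit, here is a minimal sketch; the per-VM figures come straight from the paragraph above, and the variable names are just ours for illustration:

```python
# Per-tile memory footprint of vApus Mark II, as described above.
OLTP_GB = 4     # OLTP database VM
OLAP_GB = 4     # OLAP database VM
WEB_GB = 1      # each of the three web application VMs
WEB_VMS = 3

tile_gb = OLTP_GB + OLAP_GB + WEB_VMS * WEB_GB   # 11GB per tile

for tiles in (4, 5):
    print(f"{tiles} tiles -> {tiles * tile_gb}GB of VM memory")
# 4 tiles -> 44GB, 5 tiles -> 55GB: both fit comfortably in 64GB,
# so the 32 DIMM (128GB) configuration only matters for bandwidth.
```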

Then there's the problem that every virtualization benchmark encounters: the number of tiles (a tile is a group of VMs). With VMmark, the benchmark folks add tiles until the total throughput begins to decline. The problem with this approach is that it favors throughput over response time, while in the real world response time is more important than throughput. We test with both four tiles (20 VMs, 72 vCPUs) and five tiles (25 VMs, 90 vCPUs). Which configuration gives you the most accurate number for a given system? Let us delve a little deeper and take response time into account.
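Before we do, note that the VM and vCPU counts simply scale linearly with the tile count. The short sketch below derives the per-tile numbers from the four- and five-tile figures quoted above; the helper name is ours, purely for illustration:

```python
# Each vApus Mark II tile is a group of 5 VMs (OLTP, OLAP and three web apps).
# From the 4-tile (72 vCPU) and 5-tile (90 vCPU) configurations above,
# a tile accounts for 18 vCPUs in total.
VMS_PER_TILE = 5
VCPUS_PER_TILE = 18

def tile_config(tiles: int) -> tuple[int, int]:
    """Return (total VMs, total vCPUs) for a given tile count."""
    return tiles * VMS_PER_TILE, tiles * VCPUS_PER_TILE

for tiles in (4, 5):
    vms, vcpus = tile_config(tiles)
    print(f"{tiles} tiles: {vms} VMs, {vcpus} vCPUs")
# 4 tiles: 20 VMs, 72 vCPUs
# 5 tiles: 25 VMs, 90 vCPUs
```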

Comments

  • extide - Monday, June 6, 2011 - link

    When you spend $100,000+ on the software running on it, the hardware costs don't matter. Recently I was in a board meeting for launching a new website that the company I work for is going to be running. These guys don't know or care about detailed specs; they simply said, "Cost doesn't matter, just get whatever is the fastest."
  • alpha754293 - Thursday, May 19, 2011 - link

    Can you run the Fluent and LS-DYNA benchmarks on the system please? Thanks.
  • mosu - Thursday, May 19, 2011 - link

    A good presentation with honest conclusions; I like this one.
  • ProDigit - Thursday, May 19, 2011 - link

    What if you compared it to 2x Core i7 desktops running Linux and free server software? What keeps companies from doing that?
  • Orwell - Thursday, May 19, 2011 - link

    Most probably the lack of support for more than about 48GiB of RAM, the lack of ECC in Intel's case, and the lack of multi-socket support, just to name a few.
  • ganjha - Thursday, May 19, 2011 - link

    There is always the option of clusters...
  • L. - Friday, May 20, 2011 - link

    Err... like you're going to go cheap on the CPU and then put everything on InfiniBand --
  • DanNeely - Thursday, May 19, 2011 - link

    Many of the uses for this class of server involve software that won't scale across multiple boxes due to network latency or monolithic design. The VM farm test was one example that would, but the lack of features like ECC support would preclude it from consideration by 99% of the buyers of "godbox" servers.
  • erple2 - Thursday, May 19, 2011 - link

    I think more and more people are realizing that the issue is the lack of linear scaling rather than anything like ECC. Buying a bulletproof server is turning out to cost way too much money (I mean ACTUALLY bulletproof, not "so far, this server has been rock solid for me").

    I read an interesting article about "design for failure" (note: NOT the same thing as "design to fail") by Jeff Atwood the other day, and it really opened my eyes. Each extra 9 in 99.99% uptime starts costing exponentially more money. That raises the question: should you be investing more money in a server that shouldn't fail, or should you be investigating why your software is so fragile that it cannot accommodate a hardware failure?

    I dunno. Designing and developing software that can work around hardware failures is a very difficult thing to do.
  • L. - Thursday, May 19, 2011 - link

    Well, it's obvious.

    Who has a ton of servers? Google.
    How do they manage availability?
    So much redundancy that resilience is implicit and "reduced service" isn't even all that reduced.

    And no, designing/developing software that works around hardware failures is not that hard at all; in fact it is quite common (load balancing, active/passive setups, virtualization helps too, etc.).
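The "extra nine" point raised above maps onto a concrete downtime budget. As a quick, purely illustrative sketch of that arithmetic (the exponentially growing cost side is not modeled here):

```python
# Yearly downtime budget for each additional "nine" of availability.
MINUTES_PER_YEAR = 365 * 24 * 60

for nines in range(2, 6):                      # 99% .. 99.999%
    availability = 1 - 10 ** -nines
    downtime_min = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.{nines}%} uptime -> "
          f"{downtime_min:,.1f} minutes of downtime per year")
# 99%     -> ~5,256 min (about 3.7 days)
# 99.9%   -> ~526 min
# 99.99%  -> ~53 min
# 99.999% -> ~5.3 min
```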
