vApus Mark II

vApus Mark II is our newest benchmark suite, testing how well servers cope with virtualizing "heavy duty" applications; we have previously explained the benchmark methodology. However, we made a few changes to make the benchmark suitable for a cloud environment. The OLTP test, the freely available "Calling Circle" test from the Oracle Swingbench suite, was not included: it requires SSDs or a large number of SAS drives, which would make it costly to run on rented hardware. As a result, our scores are not directly comparable to those of other servers we have tested in the past, but the chart below uses the same test setup for all servers.

[Chart: vApus Mark II benchmark results]

It is little surprise that our reference server offers the best performance. Our four VMs request the power of 14 virtual CPUs, and the server has ample resources to satisfy that request. As a result, 14 physical cores at 2.26GHz are allocated, good for 31.6GHz of CPU power. This is our upper limit.
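To make the CPU budget explicit, here is a minimal sketch in Python of the arithmetic behind that upper limit; it is purely illustrative and simply reuses the core count and clock speed quoted above.

```python
# Back-of-the-envelope check of the reference server's CPU budget.
# The core count and clock speed are the values quoted in the text above.
physical_cores = 14      # cores allocated to the four VMs
clock_ghz = 2.26         # per-core clock speed in GHz

upper_limit_ghz = physical_cores * clock_ghz
print(f"Upper limit: {upper_limit_ghz:.1f} GHz")   # prints "Upper limit: 31.6 GHz"
```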

Next is the Terremark cluster in burst mode. We reserved only 10GHz, but the Terremark cluster delivers an extra 80% of CPU power on average. The result is about 70% of the throughput of the "in house" server. That is pretty good: we pay for only 10GHz most of the time, and although the extra 80% comes at a premium, we pay for it only when we actually need it.
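As a rough sanity check, the sketch below (again illustrative, using only the numbers quoted in this article) compares the average burst-mode CPU budget with the reference server's capacity. Note that the ~70% relative throughput is a measured benchmark result, not something derived from this simple GHz ratio.

```python
# Rough comparison of the burst-mode CPU budget with the reference server.
reserved_ghz = 10.0        # the resource pool we actually pay for
avg_burst_factor = 1.8     # reserved capacity plus ~80% extra on average
reference_ghz = 31.6       # in-house server: 14 cores x 2.26GHz

effective_ghz = reserved_ghz * avg_burst_factor
print(f"Average effective budget: {effective_ghz:.0f} GHz, "
      f"or {effective_ghz / reference_ghz:.0%} of the reference capacity")
# The measured throughput (about 70% of the reference server) is a benchmark
# result; it is not derived from this GHz ratio.
```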

Finally, let us compare the two similar setups: the "native server" with a 10GHz resource pool and the Terremark servers with a similar limitation. Once again, the Terremark virtual servers achieve about 70% of the throughput. That is not superb, but it is not bad either. Even if Terremark ensures that every 10GHz of allocated CPU power is backed by real physical processing power, the Terremark cluster has to manage many more virtual machines, so its overhead is higher than on our test machine, which only has to manage our own test virtual machines.

Comments

  • TRodgers - Thursday, June 2, 2011

    I like the way you have broken this subject into small, succinct snippets of valuable information. I work in a place where many of our physical resources are being converted into virtual ones, and it is so often difficult to break down the process, the reasoning, the benefit trees, etc. for the many different audiences we have.
  • johnsom - Friday, June 3, 2011

    You said:
    Renting 5GB of RAM is pretty straightforward: it means that our applications should be able to use up to 5GB of RAM space.

    However, this is not always the case with IaaS. vSphere allows memory overcommitting, which lets you allocate more memory across the virtual machines than the physical hardware makes available. If physical RAM is exhausted, your VM gets swap file space instead, tanking your VM's memory performance and likely killing performance when you need it most: at peak memory usage.
  • GullLars - Friday, June 3, 2011

    If the pools are well dimensioned, this should almost never happen.
    If the pagefile is on something like an ioDrive, performance wouldn't tank, but it would be noticeably slower. If the pagefile is on spinning disks, performance would be horrible for any memory-intensive task.
  • duploxxx - Sunday, June 5, 2011

    That is a matter of designing the resource pools; if a service company is that idiotic, it will run out of business.

    Although swapping to SSD (certainly on next-gen vSphere) is another way to avoid the slow performance as much as possible, it is still slower and adds hypervisor overhead.

    RAM is cheap, and well-chosen servers have enough memory for their allocations.
  • ckryan - Friday, June 3, 2011

    I'm quite pleased with the easy, informative way the article has been presented; I for one would like to see more, and I'm sure more articles are on the way. Keep it up, I think it's fascinating.
  • JohanAnandtech - Sunday, June 5, 2011

    Thank you for taking the time to let us know that you liked the article. Such readers have kept me going for the past 13 years (I started in 1998 at Ace's) :-).
  • HMTK - Monday, June 6, 2011

    Yes, you're old :p The main reason I read Anand's these days is exactly for your articles. I liked them at Ace's, and I like them even more now. Nevertheless, nostalgia sometimes strikes when I think of Ace's and the high quality of the articles and forums there.
  • bobbozzo - Friday, June 3, 2011

    Hi, please include the costs of the systems benchmarked: for the cloud, in $/hour or $/month, and for the server, a purchase price and a lease price would be ideal.

    Thanks for all the articles!
  • bobbozzo - Friday, June 3, 2011

    Oh, and include the server's power consumption.
  • krazyderek - Friday, June 3, 2011

    I agree; showing a simple cost comparison would have really rounded out this article. It was mentioned several times that "you pay for bursting", but how much? Put it in perspective for us, and relate it to over-purchasing hardware for your own data center.
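
As a footnote to johnsom's memory overcommit point above, here is a minimal, purely illustrative sketch of what overcommitting means for a host. The RAM size and VM count are made up for the example, and in practice vSphere reclaims memory (ballooning, page sharing) before falling back to host swap.

```python
# Toy illustration of memory overcommit (numbers are made up for the example).
physical_ram_gb = 64.0
vm_allocations_gb = [5.0] * 16        # sixteen VMs, each sold 5GB of RAM

allocated_gb = sum(vm_allocations_gb)
overcommit_ratio = allocated_gb / physical_ram_gb
worst_case_swap_gb = max(0.0, allocated_gb - physical_ram_gb)

print(f"Allocated {allocated_gb:.0f} GB on {physical_ram_gb:.0f} GB of physical RAM "
      f"(overcommit ratio {overcommit_ratio:.2f})")
print(f"If every VM used its full allocation, {worst_case_swap_gb:.0f} GB "
      f"would have to come from swap")
```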
