Virtualization Performance: Linux VMs on ESXi

We introduced our new vApus FOS (For Open Source) server workloads in our review of the Facebook "Open Compute" servers. In a nutshell, it is a mix of four VMs running open source workloads: two PhpBB websites (Apache2, MySQL), one OLAP MySQL "Community Server 5.1.37" database, and one VM with VMware's open source groupware Zimbra 7.1.0. Zimbra is quite a complex application, as it contains the following components:

  • Jetty, the web application server
  • Postfix, an open source mail transfer agent
  • OpenLDAP, for user authentication
  • MySQL, the database
  • Lucene, a full-featured text and search engine
  • ClamAV, an anti-virus scanner
  • SpamAssassin, a mail filter
  • James/Sieve, mail filtering

All VMs are based on a minimal CentOS 6 setup with VMware Tools installed. All our current virtualization testing is done on top of the hypervisor we know best: ESXi 5.0. We have changed two things in our vApus FOS setup: we upgraded the guest OS from CentOS 5.6 to 6.0 and increased the number of vCPUs of the OLAP VM from two to four. This small upgrade means that our latest results should not be compared to the results in our older articles.

We (Tijl Deneut and I) tested with four tiles (one tile = four VMs). Each tile needs nine vCPUs, so the test requires 36 vCPUs.
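For readers who want to sanity-check the sizing, below is a minimal Python sketch of the tile arithmetic. The per-tile vCPU count and the number of tiles come from the text above; the host size is a hypothetical example (a dual-socket machine with 8 cores / 16 threads per socket), not a statement about our actual test configuration.

    # Sketch of the vApus FOS tile sizing; host size below is an assumption for illustration.
    VCPUS_PER_TILE = 9      # one tile = four VMs = nine vCPUs (from the article)
    TILES = 4               # four tiles were run in this test

    # Hypothetical dual-socket host: 2 sockets x 8 cores x 2 threads.
    logical_cpus = 2 * 8 * 2

    required_vcpus = VCPUS_PER_TILE * TILES
    print(f"vCPUs required:  {required_vcpus}")                    # 36
    print(f"logical CPUs:    {logical_cpus}")                      # 32
    print(f"vCPU:pCPU ratio: {required_vcpus / logical_cpus:.2f}") # 1.12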

vApusMark FOS

The benchmark above measures throughput. As for response times, let's take a look at the table below, which gives you the average response time per VM:

vApus FOS Average Response Times (ms), lower is better

CPU                          PhpBB 1   PhpBB 2   MySQL OLAP   Zimbra
AMD Opteron 6276 2.3GHz        671       514        1410        758
AMD Opteron 6174 2.2GHz        674       524        1210        861
Intel Xeon E5-2660 2.2GHz      645       394         160        631
Intel Xeon E5-2690 2.9GHz      362       288          40        483
Intel Xeon X5650 2.66GHz       745       569         821        866
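One way to summarize the table at a glance (our own illustration, not how vApus FOS reports its scores) is to take the geometric mean of the per-VM response times, which dampens the outsized effect of the OLAP outliers. A minimal Python sketch using the numbers above:

    from statistics import geometric_mean

    # Average response times (ms) per VM, copied from the table above; lower is better.
    results = {
        "Opteron 6276 2.3GHz": {"PhpBB 1": 671, "PhpBB 2": 514, "MySQL OLAP": 1410, "Zimbra": 758},
        "Opteron 6174 2.2GHz": {"PhpBB 1": 674, "PhpBB 2": 524, "MySQL OLAP": 1210, "Zimbra": 861},
        "Xeon E5-2660 2.2GHz": {"PhpBB 1": 645, "PhpBB 2": 394, "MySQL OLAP": 160,  "Zimbra": 631},
        "Xeon E5-2690 2.9GHz": {"PhpBB 1": 362, "PhpBB 2": 288, "MySQL OLAP": 40,   "Zimbra": 483},
        "Xeon X5650 2.66GHz":  {"PhpBB 1": 745, "PhpBB 2": 569, "MySQL OLAP": 821,  "Zimbra": 866},
    }

    for cpu, times in results.items():
        print(f"{cpu:22s} geometric mean: {geometric_mean(times.values()):6.0f} ms")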

Since we can safely assume that the Xeon E5-2690 consumes considerably more power than the E5-2660, the Xeon E5-2660 looks like the new virtualization champion. Let's check out the power consumption numbers under a realistic load.

Comments

  • alpha754293 - Tuesday, March 6, 2012 - link

    Thanks for running those.

    Are those results with HTT or without?

    If you can write a little more about the run settings that you used (with/without HTT, number of processes), that would be great.

    Very interesting results, though.

    It would have been interesting to see what the power consumption and total energy consumption numbers would be for these runs (to see if having the faster processor would really be that beneficial).

    Thanks!
  • alpha754293 - Tuesday, March 6, 2012 - link

    I should work with you more to get you running some Fluent benchmarks as well.

    But, yes, HPC simulations DO take a VERY long time. And we beat the crap out of our systems on a regular basis.
  • jhh - Tuesday, March 6, 2012 - link

    This is the most interesting part to me, as someone interested in high network I/O. With the packets going directly into cache, as long as they get processed before they get pushed out by subsequent packets, the packet processing code doesn't have to stall waiting for the packet to be pulled from RAM into cache. Potentially, the packet never needs to be written to RAM at all, avoiding using that memory capacity. In the other direction, web servers and the like can produce their output without ever putting the results into RAM.
  • meloz - Tuesday, March 6, 2012 - link

    I wonder if this Data Direct I/O Technology has any relevance to audio engineering? I know that latency is a big deal for those guys. In the past I have read some discussion on latency at gearslutz, but the exact science is beyond me.

    Perhaps future versions of Pro Tools and other professional DAWs will make use of Data Direct I/O Technology.
  • Samus - Tuesday, March 6, 2012 - link

    Wow. 20MB of on-die cache. That's ridiculous.
  • PwnBroker2 - Tuesday, March 6, 2012 - link

    Don't know about the others, but not ATT. Still using AMD even on the new workstation upgrades, but then again IBM does our IT support, so who knows for the future.

    The new Xeon processors are beasts anyway; just wondering what the server price point will be.
  • tipoo - Tuesday, March 6, 2012 - link

    "AMD's engineers probably the dumbest engineers in the world because any data in AMD processor is not processed but only transferred to the chipset."

    ...What?
  • tipoo - Tuesday, March 6, 2012 - link

    Think you've repeated that enough for one article?
  • tipoo - Wednesday, March 7, 2012 - link

    Like the Ivy bridge comments, just for future readers note that this was a reply to a deleted troll and no longer applies.
  • IntelUser2000 - Tuesday, March 6, 2012 - link

    Johan, you got the percentage numbers for LS-Dyna wrong.

    You said for the first one: the Xeon E5-2660 offers 20% better performance, the 2690 is 31% faster. It is interesting to note that LS-Dyna does not scale well with clockspeed: the 32% higher clockspeed of the Xeon E5-2690 results in only a 14% speed increase.

    E5-2690 vs Opteron 6276: +46%(621/426)
    E5-2660 vs Opteron 6276: +26%(621/492)
    E5-2690 vs E5-2660: +15%(492/426)

    In the conclusion you said the E5 2660 is "56% faster than X5650, 21% faster than 6276, and 6C is 8% faster than 6276"

    Actually...

    LS Dyna Neon-

    E5-2660 vs X5650: +77%(872/492)
    E5-2660 vs 6276: +26%(621/492)
    E5-2660 6C vs 6276: +9%(621/570)

    LS Dyna TVC-

    E5-2660 vs X5650: +78%(10833/6072)
    E5-2660 vs 6276: +35%(8181/6072)
    E5-2660 6C vs 6276: +13%(8181/7228)

    It's funny how you got the % numbers for your conclusions. It's merely the ratio of lower number vs higher number multiplied by 100.
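To make the arithmetic in the comment above explicit: assuming the LS-Dyna figures are run times in seconds (lower is better), the "+X%" speedups are the ratio of the slower time to the faster time, minus one. A small Python sketch reusing a few of the numbers quoted above:

    def speedup_pct(slower_time_s: float, faster_time_s: float) -> float:
        """Percentage by which the faster system beats the slower one (run times, lower is better)."""
        return (slower_time_s / faster_time_s - 1.0) * 100.0

    print(f"E5-2690 vs Opteron 6276 (Neon): +{speedup_pct(621, 426):.0f}%")    # +46%
    print(f"E5-2660 vs Opteron 6276 (Neon): +{speedup_pct(621, 492):.0f}%")    # +26%
    print(f"E5-2660 vs X5650 (TVC):         +{speedup_pct(10833, 6072):.0f}%") # +78%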
