If you've been keeping up with our articles for a while, you might have picked up on vApus Mark I: the virtualized stress test we created for internal use at the Sizing Servers testlab.

As detailed in Johan's article, this bench consists of three separate applications, all of which we are very familiar with thanks to extensive optimization and stress-testing efforts. Although we believe the results published with this bench speak for themselves, the problem remained that it was impossible for anyone outside our lab to verify them, seeing as two of the three applications are owned by private companies and were entrusted to our lab under rather strict conditions (which, sadly, do not allow us to distribute them to the rest of the world).

Secondly, since vApus Mark I is a bench that focuses on fairly heavy VMs, we felt the need to create another point of reference: one that backs up the results of the original, but with a completely different mix of VMs.

Thus began the process of creating vApus For Open Source, or vApus FOS, as we like to call it in the lab.

The idea behind vApus FOS is that the VMs can be freely distributed to any vendor that wishes to verify our results, while our lab provides a version of the actual in-house-developed vApus benching software to generate the load.

I am happy to say that the preliminary 1-tile testing for this new benchmark has just been completed, and so far everything has been running quite smoothly. The results are reproducible and the VMs stable... it looks like our 4-tile (16 VMs in total) testing can begin!

The fun part is that we owe a lot of the ideas incorporated into the new setup to you, our readers! Thanks to the feedback we got on vApus Mark I, we were able to combine several new workloads into an interesting mix:

As it stands, one tile consists of four VMs, all of which run a basic, minimal CentOS 5.4 install; a rough sketch of the complete tile layout follows the individual VM descriptions below.

VM1 runs an Apache web server and a MySQL database hosting a phpBB3 forum. The VM is given 2 vCPUs and 1GB of RAM.

VM2 runs the same setup as VM1, but is only given 1 vCPU.

VM3 runs a fully configured mail server using Postfix, Courier, and a SquirrelMail frontend. This VM is assigned 2 vCPUs and 1GB of RAM.

VM4 runs a separate MySQL OLAP database using InnoDB as its storage engine. This machine is also assigned 2 vCPUs and 1GB of RAM.
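
Here is that sketch: a minimal description of one tile as plain data in Python. The structure and field names are purely our own illustration (not the actual vApus configuration format), and VM2's memory is assumed to match VM1's, as described above.

    # A minimal sketch of one vApus FOS tile as described above. The structure and
    # field names are our own illustration, not the actual vApus configuration.
    TILE = [
        {"vm": "VM1", "role": "Apache + MySQL, phpBB3 forum", "vcpus": 2, "ram_gb": 1},
        {"vm": "VM2", "role": "Apache + MySQL, phpBB3 forum", "vcpus": 1, "ram_gb": 1},  # RAM assumed equal to VM1
        {"vm": "VM3", "role": "Postfix/Courier/SquirrelMail mail server", "vcpus": 2, "ram_gb": 1},
        {"vm": "VM4", "role": "MySQL OLAP database on InnoDB", "vcpus": 2, "ram_gb": 1},
    ]

    vcpus_per_tile = sum(vm["vcpus"] for vm in TILE)  # 2 + 1 + 2 + 2 = 7
    ram_per_tile = sum(vm["ram_gb"] for vm in TILE)   # 4GB

    print(f"One tile: {len(TILE)} VMs, {vcpus_per_tile} vCPUs, {ram_per_tile}GB RAM")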

The goal is currently to get a 4-tile test going on a 16-core machine, meaning that the hypervisor will have to account for 28 vCPUs in total. This should prove to be a very interesting exercise for the scheduler. Of course, this VM setup can also be made to work perfectly fine in an OpenVZ environment, meaning we can finally do some real-world testing on alternative Linux-based virtualization solutions as well.
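
For reference, the arithmetic behind those vCPU numbers (our own back-of-the-envelope check, not part of vApus itself):

    # Rough check of the 4-tile target on a 16-core machine (our own arithmetic).
    VCPUS_PER_TILE = 2 + 1 + 2 + 2   # VM1 through VM4
    TILES = 4
    PHYSICAL_CORES = 16

    total_vcpus = VCPUS_PER_TILE * TILES             # 28 vCPUs
    oversubscription = total_vcpus / PHYSICAL_CORES  # 1.75 vCPUs per physical core

    print(f"{total_vcpus} vCPUs on {PHYSICAL_CORES} cores -> "
          f"{oversubscription:.2f}x CPU oversubscription")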

We thought we'd keep you updated on the progress of our research. As any experienced IT professional will know, well-thought-out server technology testing takes time, and it's important to realize how many steps are required to produce results that can immediately be applied in the real world.

Stay tuned for our first test results; they should be rolling in very soon!


Comments

  • Nimiz99 - Wednesday, November 18, 2009 - link

    keep up the good work
  • yknott - Tuesday, November 17, 2009 - link

    Liz,

    Two quick comments...

    1. The image at the end of your post is broken :)

    2. These VMs seem RAM-constrained. Is this on purpose? I don't think I'd build any of these servers in real life with only 1GB of RAM. Is this to reduce the overall score by testing the storage subsystem, since many of the processes would be swapping to disk?
  • JarredWalton - Wednesday, November 18, 2009 - link

    I would think the RAM limitations are more because they plan on 4-tile (and maybe even more) configs down the road, plus a desire to avoid other bottlenecks. Rather than reducing the overall scores, if the specific tests all run within RAM they can avoid storage access altogether - and we know how slow storage is.
  • DirkMo - Wednesday, November 18, 2009 - link

    A 4-tile setup would need only 16GB of RAM for the VMs. The only reason to keep RAM this low is to ensure it's a stress test. How much that really tells you about the hypervisor and the complete setup is another matter.

    If hypervisor A behaves better than B when RAM is very limited, that might be interesting to note, but it's also completely irrelevant for day-to-day use, because if you ran into this in a day-to-day situation you'd just add RAM, as you're likely to have some free.

    I have a few challenging months coming up myself, during which we're going to test some performance-sensitive workloads in a virtualised environment. Yet I'm sure the test won't be about finding the best hypervisor, but about finding the best RAM/vCPU/... distribution.
  • yknott - Wednesday, November 18, 2009 - link

    I guess the question is: what criteria do you use to determine whether one hypervisor is better than another?

    I'm not sure there's a clear and explicit answer to this question because it depends on your use case. There might be someone who wants to know which hypervisor works best in a RAM-starved state. That makes a lot less sense to me because, as the saying goes, RAM is cheap.

    For me, I want to know what happens if I take a group (or tile) of servers that *individually have enough resources* and place them on top of a hypervisor. Which hypervisor is more efficient at using the shared underlying CPU/RAM/Storage/Network resources? By ensuring that the individual servers have enough resources, we are taking the lack of RAM away as a possible variable so that we can get a glimpse at true hypervisor efficiency and not false positives due to resource constraints.
  • JarredWalton - Wednesday, November 18, 2009 - link

    I agree with the idea of multiple testing scenarios. I think it would be interesting for Johan, Liz, and their team to have multiple test scenarios in vApus FOS where they stress other areas in differing amounts. For example:

    1) Current setup (no storage interaction; not as heavily influenced by RAM)
    2) More light virtual clients with contention for RAM, storage, etc.
    3) Several clients that may not be so heavy on RAM but that do lots of disk access to large databases (i.e. not enough RAM to cache all the DBs).

    We could go on making up scenarios, and I'm sure that's part of what Liz and Johan are doing. The question then is how important other scenarios are, considering the use of VMmark and other benchmarks. Maybe VMmark already addresses scenario 2, so they can skip that. Maybe you should build a server setup so that scenario 3 is avoided as much as possible.

    What I know for sure is that creating and running lots of different scenarios is hard enough without worrying about virtualized environments, so I sit back and watch for the most part. :)
  • JohanAnandtech - Thursday, November 19, 2009 - link

    Hey Jarred, you're pretty good at reading our minds. :-)

    The first goal of vApus FOS is simply to offer a somewhat CPU-limited test like vApus Mark I (Windows), but one that third parties can repeat and verify.

    Of course, we agree that this would not make a good hypervisor test on its own. A hypervisor comparison should check NIC I/O (support for VMDq), storage I/O (CPU overhead and latency), and how smartly the hypervisor juggles RAM and CPU scheduling (think page sharing and SMT scheduling, for example). That is also the reason why this is taking so long :-).