vApus Mark I: the choices we made

vApus Mark I uses only Windows guest OS VMs, but we are also preparing a mixed Linux and Windows scenario. vApus Mark I uses four VMs running four server applications:

  • The Nieuws.be OLAP database, based on SQL Server 2008 x64 running on Windows 2008 64-bit, stress tested by our in-house developed vApus test.
  • Two MCS eFMS portals running PHP and IIS on Windows 2003 R2, stress tested by our in-house developed vApus test.
  • One OLTP database, based on the Oracle 10g "Calling Circle" benchmark by Dominic Giles.

We took great care to make sure that the benchmarks start, run under full load, and stop at the same moment. vApus is capable of breaking off a test when another is finished, or repeating a stress test until the others have finished.
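
We do not have access to vApus' internals here, but conceptually this coordination boils down to a synchronized start plus a "repeat (or break off) until the reference test is done" loop. The Python sketch below is purely illustrative; the run_coordinated() helper, the workload callables, and the dummy timings are assumptions, not vApus code.

```python
# Illustrative sketch only: this is NOT vApus code. It mimics the behaviour
# described above: all tests start at the same moment, and the shorter tests
# are repeated (or broken off) until the longest one has finished.
import threading
import time

def run_coordinated(reference_test, companion_tests):
    """Start all tests at the same moment; repeat the companion tests until
    the reference test has finished, then stop everything."""
    start_gate = threading.Barrier(1 + len(companion_tests))  # synchronized start
    done = threading.Event()                                  # "reference finished"

    def run_reference():
        start_gate.wait()
        reference_test(done)
        done.set()                       # signal the other tests to wind down

    def run_companion(test):
        start_gate.wait()
        while not done.is_set():         # repeat until the reference is done
            test(done)

    threads = [threading.Thread(target=run_reference)]
    threads += [threading.Thread(target=run_companion, args=(t,))
                for t in companion_tests]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

def make_dummy(seconds):
    """Stand-in for a real stress test; polls `done` so it can be broken off."""
    def workload(done):
        deadline = time.time() + seconds
        while time.time() < deadline and not done.is_set():
            time.sleep(0.1)              # stand-in for firing real requests
    return workload

if __name__ == "__main__":
    run_coordinated(make_dummy(10), [make_dummy(3), make_dummy(4), make_dummy(2)])
```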


The OLAP VM is based on the Microsoft SQL Server database of the Flemish/Dutch Nieuws.be site, a web 2.0 site launched in 2008. Nieuws.be uses a 64-bit SQL Server 2008 x64 database on top of Windows 2008 Enterprise RTM (64-bit). It is a typical OLAP database, with more than 100GB of data spread over a few hundred separate tables. 99% of the load on the database consists of selects, and about 5% of these are stored procedures. Network traffic averages 6.5MB/s with peaks of 14MB/s, so our Gigabit connection still has plenty of headroom. DQL (Disk Queue Length) is at 2.0 in the first round of tests, but we only record the results of the subsequent rounds, where the database is in a steady state. We measured a DQL close to 0 during these rounds, so there is no tangible impact from the storage system. The database is warmed up with 50 to 150 users; the results are recorded while 250 to 700 users hit the database.
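
The DQL figures above come from the standard Windows performance counters. As a rough illustration of how such a counter can be sampled during the measurement window, the sketch below shells out to the stock typeperf tool from Python; the "_Total" instance is an assumption and should be replaced by the volume that actually holds the database files.

```python
# Sketch: sample "Avg. Disk Queue Length", the same perfmon counter referred
# to above, during the measurement window. The "_Total" instance is an
# assumption; point it at the volume holding the database files instead.
import statistics
import subprocess

COUNTER = r"\PhysicalDisk(_Total)\Avg. Disk Queue Length"

def sample_dql(samples=60, interval_s=5):
    """Collect `samples` readings, `interval_s` seconds apart, via typeperf."""
    out = subprocess.run(
        ["typeperf", COUNTER, "-si", str(interval_s), "-sc", str(samples)],
        capture_output=True, text=True, check=True,
    ).stdout
    values = []
    for line in out.splitlines():
        parts = line.strip().strip('"').split('","')
        if len(parts) == 2:
            try:
                values.append(float(parts[1]))   # second CSV column is the value
            except ValueError:
                pass                             # skip the CSV header row
    return values

if __name__ == "__main__":
    dql = sample_dql()
    print(f"mean DQL: {statistics.mean(dql):.2f}, max: {max(dql):.2f}")
```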

The MCS eFMS portal, a real-world facility management web application, has been discussed in detail here. It is a complex IIS, PHP, and FastCGI site running on top of Windows 2003 R2 32-bit. Note that these two VMs run in a 32-bit guest OS, which impacts the VM monitor mode.

Since OLTP testing with our own flexible stress testing software is still in beta, our fourth VM uses a freely available test: "Calling Circle" from the Oracle Swingbench suite. Swingbench is a free load generator designed by Dominic Giles to stress test Oracle databases. We tested the same way as we have before, with one difference: we use an OLTP database that is only 2.7GB (instead of 9.5GB). The 9.5GB database was needed to make sure that locking contention didn't kill scaling on systems with up to 16 logical CPUs; in this case, 2.7GB is enough, as we deploy the database on a 4 vCPU VM. Keeping the database relatively small allows us to shrink the SGA size (the Oracle buffer in RAM) to 3GB (normally it's 10GB) and the PGA size to 350MB (normally it's 1.6GB). Shrinking the database ensures that our VM is content with 4GB of RAM; remember that we want to keep the amount of memory needed low so we can perform these tests without needing the most expensive RAM modules on the market. A Calling Circle test consists of 83% selects, 7% inserts, and 10% updates. The OLTP test runs on Oracle 10g Release 2 (10.2) 64-bit on top of Windows 2008 Enterprise RTM (64-bit).
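
Shrinking the SGA and PGA to the values mentioned above comes down to changing two instance parameters. As a hedged illustration (the connection details below are placeholders, and the values simply mirror the 3GB / 350MB figures in the text), this could be scripted with the cx_Oracle driver:

```python
# Illustration only: credentials and DSN are placeholders; the values mirror
# the 3GB SGA / 350MB PGA figures quoted above so that the 2.7GB Calling
# Circle database fits comfortably in a 4GB VM.
import cx_Oracle

def shrink_memory_targets(dsn="oltp-vm/orcl", user="sys", password="change_me"):
    conn = cx_Oracle.connect(user, password, dsn, mode=cx_Oracle.SYSDBA)
    try:
        cur = conn.cursor()
        cur.execute("ALTER SYSTEM SET sga_target = 3G SCOPE = SPFILE")
        cur.execute("ALTER SYSTEM SET pga_aggregate_target = 350M SCOPE = SPFILE")
        # The instance has to be restarted before the smaller SGA takes effect.
    finally:
        conn.close()

if __name__ == "__main__":
    shrink_memory_targets()
```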

Below is a small table with the "native" characteristics that matter for virtualization in each test. (Page management is still being researched.) By "native" we mean the characteristics measured with perfmon while running on the native OS (Windows Server 2003 and 2008).

Native Performance Characteristics
Native Application / VM        Kernel Time   Typical CPU Load   Interrupts/s   Network    Disk I/O   DQL
Nieuws.be / VM1                0.65%         90-100%            3000           1.6MB/s    0.9MB/s    0.07
MCS eFMS / VM2 & 3             8%            50-100%            4000           3MB/s      0.01MB/s   0
Oracle Calling Circle / VM4    17%           95-100%            11900          1.6MB/s    3.2MB/s    0.07

Our OLAP database ("Nieuws.be") is clearly CPU intensive and performs very little I/O besides a bit of network traffic. In contrast, the OLTP test causes an avalanche of interrupts. How much time an application spends in the native kernel gives a first rough indication of how much work the hypervisor will have to do. It is not the only determining factor: we noticed a lot of page activity going on in the MCS eFMS application, which makes it even more "hypervisor intensive" than the OLTP VM. From the data we gathered, we suspect that the Nieuws.be VM will stress the hypervisor mostly by demanding "time slices," as the VM can absorb all the CPU power it gets. The same is true for the fourth "OLTP VM," but this one will also cause a lot of extra "world switches" (from the VM to the hypervisor and back) due to the number of interrupts.

The two web portal VMs, which do not always demand all available CPU power (4 cores per VM, 8 cores in total), will allow the hypervisor to make room for the other two VMs. However, the web portal (MCS eFMS) will give the hypervisor a lot of work if hardware-assisted paging (RVI, NPT, EPT) is not available. If EPT or RVI is available, the TLBs (Translation Lookaside Buffers) of the CPUs will be stressed quite a bit, and TLB misses will be costly.
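
Whether hardware-assisted paging is actually available depends on the CPU (EPT on Intel, NPT/RVI on AMD) and on the hypervisor exposing it. As a small, hedged illustration of the first half of that check, the sketch below looks for the relevant feature flags on a Linux host; on ESX you would rely on the hypervisor's own tooling instead.

```python
# Sketch for a Linux host only: look for the hardware-assisted paging flags
# (Intel EPT, AMD NPT a.k.a. RVI) in /proc/cpuinfo. On an ESX host this check
# is done through the hypervisor's own tooling instead.
def hap_support(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.lower().startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {"EPT (Intel)": "ept" in flags,
                        "NPT/RVI (AMD)": "npt" in flags}
    return {}

if __name__ == "__main__":
    for feature, present in hap_support().items():
        print(f"{feature}: {'yes' if present else 'no'}")
```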

As the SGA buffer is larger than the database, very little disk activity is measured. It helps, of course, that the storage system consists of two extremely fast Intel X25-E SSDs. We only measure performance when all VMs are in a steady state; there is a warm-up period of about 20 minutes before we actually start recording measurements.

Comments

  • tshen83 - Thursday, May 21, 2009 - link

    Jarred:

    Let's not fool each other. Johan's AMD bias is disgusting.

    My assertion that HardOCP killed the GPU market is simply trying to show you the effect of invalidating industry standard benchmarks. Architecturally, Nvidia's bigger monolithic GPU cores are far more advanced than ATI's cores right now. In GPGPU applications, it is not even close. The problem with gaming FPS benchmarks, as I have said, is that developers are typically happy once the FPS reaches parity. It does not show architectural superiority.

    vApus? There are a ton of questions unanswered.
    1. Who wrote the software? (I assume European.)
    2. Does the software scale linearly? And does the software scale on both AMD and Intel architectures?
    3. Why benchmark 4-core virtual machines when we know that VMware itself doesn't really scale that well in SMP setups?
    4. Seriously? The Nieuws.be OLAP database? How many real-world people run Nieuws.be?

    I usually don't respond to AnandTech articles unless the article is disgustingly stupid. I also don't understand why you guys can't accept the fact that Nehalem offers in fact a 100% performance/watt improvement over the previous generation of Xeons. It is backed by data from more than one industry standard benchmark.

    Is AMD worth a look today? No, absolutely not. If you are still considering anything AMD today, you are an idiot. (The world is full of idiots) AMD's only chance is if they can release the G34 socket platform within a TDP range that is acceptable before they run out of cash.

    Before you call me a troll, remind yourself of this: usually the troll is smarter than the people he/she is trolling. So ask yourself this question: did Johan deserve the negative criticism?
  • JarredWalton - Thursday, May 21, 2009 - link

    You criticize every one of his articles, often because I'm not sure your reading comprehension is up to snuff. His "AMD bias" is not disgusting, though I'm quite sure your Intel bias is far worse than his AMD bias. The reason 3DMark has been largely invalidated is that it doesn't show realistic performance - though some of the latest versions scale similarly to some games, at best 3DMark measures 3DMark performance. Similarly, VMmark measures VMmark performance. Unless your workload is the same as VMmark, it doesn't really tell you much.

    1 - Who wrote the software? According to the article, "vApus or Virtual Application Unique Stresstest is a stress test developed by Dieter Vandroemme, lead developer of the Sizing Server Lab at the University College of West-Flanders." His being European has nothing to do with anything at all, unless you're a racist, bigoted fool.

    2 - 2-tile and 3-tile testing is in the works. It will take time.

    3 - Perhaps because there are companies looking for exactly that sort of solution. I guess we should only test situations where VMware performs optimally?

    4 - The source of the database is not so critical as the fact that it is a real-world database. Whether Johan uses a DB from Nieuws.be, AnandTech.com, Cnet.com, or some other source isn't particularly meaningful. It is a real setup used outside of benchmarking, and he had access to the site.

    I usually don't respond to trolls unless they are disgustingly stupid as well. I don't understand why you can't accept the fact that Nehalem isn't a panacea that fixes all the world's woes. That is backed by the world around us, which continues to have all sorts of problems, and a "greener" CPU isn't going to save the environment any more than unplugging millions of cell phone chargers that each consume 0.5W of power or less.

    AMD is certainly worth a *look* today. Will you actually end up purchasing AMD? That depends largely on your intended use. I have old Athlon 64/X2 systems that do everything that they need to do. For a small investment, you can build a much better AMD HTPC than Intel - mostly because the cheap Intel platform boards are garbage. I'd take a lesser CPU with a better motherboard any day over a top-end CPU with a crappy motherboard. If you want a system for less than $300, the motherboards alone would make me tend towards AMD.

    Of course, that completely misses the point that this isn't even remotely related to that market. Servers are in another realm, and features and support are critical. If you have a choice between AMD quad socket and Intel dual socket, and the price is the same, you might want the AMD solution. If you have existing hardware that can be upgraded to Shanghai without changing anything other than the CPU, you might want AMD. If you're buying new, you'd want to look at as much data as possible.

    Xeon X5570 still surpasses AMD in the initial tests by over 30%, which is not insignificant. If that extends to 50% or more in 2-tile and 3-tile setups, it's even more in Intel's favor. However, a 30% advantage is hardly out of line with the rest of the computing world. SYSmark 2007 shows the i7 965 beating the Phenom II 955 by 26.6%. Photoshop CS4 shows a 48.7% difference. DivX is 35.3%, xVid is 15.9% pass1 and 65.4% pass2, and WME9 is 25%. 3dsmax is 55.8%, CINEBENCH is 42%, and POV-ray is 65.3%.

    Which of those tests is a best indication of true potential for Core i7? Well, ALL OF THEM ARE! What's the best virtualization performance metric out there? Or the best server benchmark out there? They're ALL important and useful. vApus is just one more item to look at, and it still shows a good lead for Intel.

    Where is the 100% perf/watt boost compared to last generation? Well, it's in an application where i7 can stretch its eight threaded muscles. Compared to AMD, the performance/watt benefit for an entire system is more like 40% on servers. For QX9770, i7 965 is 32% more perf/watt in Cinebench, or 37.6% in Xvid. I doubt you can find a 100% increase in performance/watt without cherry-picking the benchmark and CPUs in question, but that's what you're already determined to do. That, my friend, is true bias - when you can't even admit that anything from the competition might be noteworthy, you are obviously wearing blinders.
  • Zstream - Thursday, May 21, 2009 - link

    Umm based on your two rants this means you have ZERO knowledge working with virtual desktops/terminal servers/virtual applications.

    I feel I need to make two corrections.

    One: ATI's die size is roughly 75% of Nvidia's, how do you conclude that Nvidia is better? Well honestly you can not because if you scale the performance and had the same die size of Nvidia, then ATI would be killing them.

    Second: Majority of enterprise's run AMD and Intel, in fact not till Neh. did Intel really come into the virtualization market.
  • tshen83 - Thursday, May 21, 2009 - link

    "Umm based on your two rants this means you have ZERO knowledge working with virtual desktops/terminal servers/virtual applications. "

    Really? Just how did you come up with this revelation?

    "One: ATI's die size is roughly 75% of Nvidia's, how do you conclude that Nvidia is better? Well honestly you can not because if you scale the performance and had the same die size of Nvidia, then ATI would be killing them. "

    You don't know shit about GPUs.

    "Second: Majority of enterprise's run AMD and Intel, in fact not till Neh. did Intel really come into the virtualization market. "

    True. That's what I am saying too, if you listened. I said, "no one should be considering AMD today because Nehalem is here".
  • Zstream - Thursday, May 21, 2009 - link

    I came to that conclusion based on your incoherent rants.

    Why would you say I do not know shit about GPUs? I provided you a fact; your illogical thinking does not change the matter. It comes down to die size, and ATI wins on performance per die. If you would like to argue that claim, then please do so.

    Who would consider Nehalem in today's market? Very few, unless you are a self-proclaimed millionaire who spends crazily or needs the extra performance boost in some applications like Exchange.
  • Viditor - Thursday, May 21, 2009 - link

    Guys, it's tshen...nobody over the age of 12 listens to his rants anyway, so don't feed the troll (or ban him if you can...).
  • leexgx - Thursday, May 21, 2009 - link

    LOL nice rant

    3DMark can't be used any more, as it's no longer just a 3D mark; it's more like a 3D GPU/CPU mark, where the CPU can sway the total result.

    AMD CPUs have been using a dedicated bus that talks to each other CPU socket and has direct access to the RAM. Also, AMD has had AMD-V on all AMD64 AM2 CPUs as well as Opterons (barring Sempron).
  • Makaveli - Thursday, May 21, 2009 - link

    Ya, what is this post all about?

    HardOCP killed the GPU market? I don't know about you, but I never bought a video card because of its 3DMark score. It's one benchmark that both companies cater to, but it is of little importance. HardOCP's review method has much more valuable data for me than one benchmark.

    Let me ask you this: when you are buying a car or anything of significant value, do you not do your homework? Is one review, either positive or negative, enough to drop your hard-earned cash?

    If so Bestbuy is that way!

    As for the rest of your post, the personal attacks and childish language clearly show you're not even worth taking seriously. It sounds more like the ramblings of a high-school child who is trying to get attention.

    Good day to you sir,

    Godspeed
  • Zstream - Thursday, May 21, 2009 - link

    You have no idea what you are talking about. The benchmark software can be downloaded. It is not our fault you are too poor to pay for a product.

    The rest I have to say "LOL".
  • DeepThought86 - Thursday, May 21, 2009 - link

    Wow, just wow.
