Conclusion

Optimizing a workload for virtualization is no easy task, and it often requires IT admins to dive deep into the inner workings of their applications, since these can significantly affect performance under ESX. On the one hand, this makes the job of today's IT admins a lot more interesting than it was 10 years ago; on the other hand, it makes keeping all that knowledge up to date all the more important. Solid optimization practices not only squeeze out of a platform those extra few percent that make the difference, but also provide a strategic advantage in the harshness of today's job climate.

The objective of this article was not to provide a tuning solution for every problem, but to share some of the pitfalls Anandtech IT and the Sizing Servers Lab have encountered in their experiences with VMware’s ESX, along with some solid advice provided by VMware themselves at VMworld Europe 2009.

At this very moment, the team is working on similar in-depth research into Hyper-V and Xen, learning more as we move along and pitting these solutions against each other, using our vApus Mark I workloads to test both the strengths and weaknesses of each platform. We hope you are looking forward to our hypervisor comparison; we definitely are.

13 Comments

  • vorgusa - Monday, June 29, 2009 - link

    Just out of curiosity will you guys be adding KVM to the list?
  • JohanAnandtech - Wednesday, July 1, 2009 - link

In our upcoming hypervisor comparison, we look at Hyper-V, Xen (Citrix and Novell) and ESX. So far KVM has gotten a lot of press (in the OS community), but I have yet to see KVM in a production environment. We are open to suggestions, but it seems that we should give priority to the 3 hypervisors mentioned and look at KVM later.

It is only now, June 2009, that Red Hat has announced a "beta-virtualization" product based on KVM. When running many VMs on a hypervisor, robustness and reliability are by far the most important criteria, and it seems to us that KVM is not there yet. Opinions (based on some good observations, not purely opinions :-) ?
  • Grudin - Monday, June 29, 2009 - link

Something that is becoming more important as higher-I/O systems are virtualized is disk alignment. Make sure your guest OSes' partitions are aligned with the SAN blocks.
  • yknott - Monday, June 29, 2009 - link

    I'd like to second this point. Mis-alignment of physical blocks with virtual blocks can result in two or more physical disk operations for a single VM operation. It's a quick way to kill I/O performance!
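The cost yknott describes comes down to simple sector arithmetic. Here is a minimal shell sketch (the function name and the stripe/offset values are illustrative, not from the article): older installers started the first partition at sector 63, which cannot line up with a 64 KB (128-sector) stripe, while the modern 1 MiB offset at sector 2048 does.

```shell
# Check whether a partition's starting sector is a multiple of the
# SAN stripe size. A misaligned start means a guest block can straddle
# two physical stripes, turning one guest I/O into two back-end I/Os.
check_alignment() {
  start_sector=$1     # partition start, in 512-byte sectors
  stripe_sectors=$2   # SAN stripe size, in sectors (64 KB = 128 sectors)
  if [ $((start_sector % stripe_sectors)) -eq 0 ]; then
    echo "aligned"
  else
    echo "MISALIGNED"
  fi
}

check_alignment 63 128    # legacy DOS-era default start sector
check_alignment 2048 128  # modern 1 MiB offset
```

On a real guest you would read the start sector from `fdisk -lu` output and the stripe size from your SAN vendor's documentation.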
  • thornburg - Monday, June 29, 2009 - link

    Actually, I'd like to see an in-depth article on SANs. It seems like a technology space that has been evolving rapidly over the past several years, but doesn't get a lot of coverage.
  • JohanAnandtech - Wednesday, July 1, 2009 - link

We are definitely working on that. Currently Dell and EMC have shown interest. Right now we are trying to finish off the low-power server (and server CPU) comparison and the quad-socket comparison. After the summer break (mid-August) we'll focus on a SAN comparison.

I personally have not seen any test on SANs. Most sites that cover them seem to repeat press releases... but I may have missed some. It is of course a pretty hard thing to do, as some of this equipment costs $40K and more. We'll focus on the more affordable SANs :-).
  • thornburg - Monday, June 29, 2009 - link

    Some linux systems using the 2.6 kernel make 10x as many interrupts as Windows?

    Can you be more specific? Does it matter which specific 2.6 kernel you're using? Does it matter what filesystem you're using? Why do they do that? Can they be configured to behave differently?

    The way you've said it, it's like a blanket FUD statement that you shouldn't use Linux. I'm used to higher standards than that on Anandtech.
  • LizVD - Monday, June 29, 2009 - link

    As yknott already clarified, this is not in any way meant to be a jab at Linux, but is in fact a real problem caused by the gradual evolution of the Linux kernel. Sure enough, fixes have been implemented by now, and I will make sure to have that clarified in the article.

If white papers aren't your thing, you could have a look at http://communities.vmware.com/docs/DOC-3580 for more info on this issue.
  • thornburg - Monday, June 29, 2009 - link

    Thanks, both of you.
  • thornburg - Monday, June 29, 2009 - link

    Now that I've read the whitepaper, and looked at the kernel revisions in question, it seems that only people who don't update their kernel should worry about this.

Based on a little searching and a Wikipedia entry, it appears that of the major distros, only Red Hat is still on the older kernel version.
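For anyone still wondering where a "10x" figure like the one thornburg quotes could come from: it is the guest kernel's timer frequency. Early 2.6 kernels ticked at HZ=1000 where 2.4 kernels used HZ=100, and the hypervisor must inject every one of those virtual timer interrupts, per vCPU. A back-of-envelope sketch (the HZ values are the historical defaults; later tickless kernels make this largely moot):

```shell
# Virtual timer interrupts the hypervisor must inject per guest, per second.
ticks_per_second() {
  hz=$1     # guest kernel timer frequency (CONFIG_HZ)
  vcpus=$2  # number of virtual CPUs in the guest
  echo $((hz * vcpus))
}

echo "2.6 kernel (HZ=1000), 4 vCPUs: $(ticks_per_second 1000 4) interrupts/s"
echo "2.4 kernel (HZ=100),  4 vCPUs: $(ticks_per_second 100 4) interrupts/s"
```

Multiply that across dozens of idle VMs on one host and the scheduling overhead becomes visible even when the guests are doing no useful work.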
