Preface

We ended the first part of this article by talking about jumbo frames, having discussed some CPU considerations and general best practices for various situations.

In the meantime, the lab got the opportunity to sit down with VMware’s Scott Drummonds, who was able to provide us with some more interesting information on this subject, and a couple of our readers pointed out some of the problems they’ve been experiencing, along with the solutions. We are really happy with the overwhelmingly positive reactions we have received to Part 1, and hope Part 2 will continue to help out the people who need to work with ESX on a regular basis to get the most out of the product.

Before we dive into the more “structured” part of the article, we would like to mention a possible issue brought up by one of our readers, yknott. Apparently, IRQ sharing on certain platforms can cause a rather large performance hit when the same interrupt line is used alternately by ESX’s service console and the VMkernel. The service console is the console an administrator logs into to run tools such as esxtop, and it can take control of certain devices to perform its tasks. The problem seems to occur when both the VMkernel and the service console have control over the same device, which can be checked by displaying the /proc/vmware/interrupts file, as documented in this article of the VMware knowledge base.
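As a quick sketch of what to look for (the sample table below is illustrative only; the exact column layout of /proc/vmware/interrupts varies by ESX version, and on a real host you would simply `cat` the file itself), a shared interrupt shows up as a line that names both the VMkernel and the COS (service console) as handlers:

```shell
# Illustrative stand-in for /proc/vmware/interrupts; on a real ESX host:
#   cat /proc/vmware/interrupts
sample_table='Vector Count      Handlers
0x61   12345678   vmkernel:vmnic0
0x69   87654321   vmkernel:vmhba1, COS irq 9
0x71   11111111   COS irq 10'

# A line naming both "vmkernel" and "COS" indicates an interrupt line
# shared between the VMkernel and the service console -- the case that
# can trigger the performance hit described above.
echo "$sample_table" | grep 'vmkernel' | grep 'COS'
```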

Diving into storage
Comments

  • vorgusa - Monday, June 29, 2009 - link

    Just out of curiosity, will you guys be adding KVM to the list?
  • JohanAnandtech - Wednesday, July 01, 2009 - link

    In our upcoming hypervisor comparison, we look at Hyper-V, Xen (Citrix and Novell), and ESX. So far KVM has gotten a lot of press (in the OS community), but I have yet to see KVM in a production environment. We are open to suggestions, but it seems we should give priority to the three hypervisors mentioned and look at KVM later.

    It is only now, June 2009, that Red Hat has announced a "beta-virtualization" product based on KVM. When running many VMs on a hypervisor, robustness and reliability are by far the most important criteria, and it seems to us that KVM is not there yet. Opinions (based on some good observations, not purely opinions :-))?
  • Grudin - Monday, June 29, 2009 - link

    Something that is becoming more important as higher-I/O systems are virtualized is disk alignment. Make sure your guest OSes are aligned with the SAN blocks.
  • yknott - Monday, June 29, 2009 - link

    I'd like to second this point. Misalignment of physical blocks with virtual blocks can result in two or more physical disk operations for a single VM operation. It's a quick way to kill I/O performance!
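To make the alignment arithmetic concrete (a minimal sketch; the 64 KB stripe size is just a common default, and the 63-sector start is the classic MBR partitioning default — check your own array's block size and your actual partition offsets): a partition is aligned when its starting byte offset is an exact multiple of the SAN block/stripe size.

```shell
# Classic misaligned case: old MBR tools start the first partition at
# sector 63, i.e. 63 * 512 = 32256 bytes -- not a multiple of a typical
# 64 KB (65536-byte) stripe, so a single guest I/O can straddle two
# physical blocks.
stripe=$((64 * 1024))

offset_bad=$((63 * 512))     # 32256 bytes
offset_good=$((128 * 512))   # 65536 bytes, exactly one stripe

# Aligned when offset modulo stripe size is zero
echo "sector 63:  $((offset_bad  % stripe))"   # non-zero remainder -> misaligned
echo "sector 128: $((offset_good % stripe))"   # zero remainder -> aligned
```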
  • thornburg - Monday, June 29, 2009 - link

    Actually, I'd like to see an in-depth article on SANs. It seems like a technology space that has been evolving rapidly over the past several years, but doesn't get a lot of coverage.
  • JohanAnandtech - Wednesday, July 01, 2009 - link

    We are definitely working on that. Currently Dell and EMC have shown interest. Right now we are trying to finish off the low-power server (and server CPU) comparison and the quad-socket comparison. After the summer break (mid-August) we'll focus on a SAN comparison.

    I personally have not seen any test on SANs. Most sites that cover them seem to repeat press releases... but I may have missed some. It is of course a pretty hard thing to do, as some of this gear costs $40K and more. We'll focus on the more affordable SANs :-).
  • thornburg - Monday, June 29, 2009 - link

    Some Linux systems using the 2.6 kernel generate 10x as many interrupts as Windows?

    Can you be more specific? Does it matter which specific 2.6 kernel you're using? Does it matter what filesystem you're using? Why do they do that? Can they be configured to behave differently?

    The way you've said it, it's like a blanket FUD statement that you shouldn't use Linux. I'm used to higher standards than that on Anandtech.
  • LizVD - Monday, June 29, 2009 - link

    As yknott already clarified, this is not in any way meant to be a jab at Linux, but is in fact a real problem caused by the gradual evolution of the Linux kernel. Sure enough, fixes have been implemented by now, and I will make sure to have that clarified in the article.

    If white papers aren't your thing, you could have a look at http://communities.vmware.com/docs/DOC-3580 for more info on this issue.
  • thornburg - Monday, June 29, 2009 - link

    Thanks, both of you.
  • thornburg - Monday, June 29, 2009 - link

    Now that I've read the whitepaper, and looked at the kernel revisions in question, it seems that only people who don't update their kernel should worry about this.

    Based on a little search and a Wikipedia entry, it appears that, of the major distros, only Red Hat is still on the older kernel version.
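For readers who want to check their own guests (a sketch under assumptions: the config-file path and the sample content below are illustrative, and which distros backported the timer fixes varies), the timer frequency a 2.6 kernel was built with is recorded in its build config as CONFIG_HZ — a kernel built with CONFIG_HZ=1000 fires ten times as many timer interrupts as one built with CONFIG_HZ=100:

```shell
# Illustrative kernel-config fragment; on a real guest you would run:
#   grep '^CONFIG_HZ' /boot/config-$(uname -r)
sample_config='CONFIG_HZ_1000=y
CONFIG_HZ=1000'

# Extract the effective timer frequency from the build config
echo "$sample_config" | grep '^CONFIG_HZ='
```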
