"Magic" memory

If you are already running ESX, you may be aware that it is, to this day, the only hypervisor-based solution that allows for memory overcommitment. To make this work, VMware has implemented three ways of reclaiming memory from a running VM: page sharing, ballooning and swapping. To let ESX get the most out of these mechanisms, a final consideration when virtualizing is to place VMs with similar workloads and operating systems together on the same physical machine. When running only a handful of VMs next to each other, this might not be a viable option, nor seem to improve the situation much, but in heavy-duty environments, grouping similar VMs together allows the hypervisor to free up a large amount of memory by sharing identical pages across the different systems. ESX scans through all pages at runtime to consolidate the static parts of memory that the VMs have in common, allowing, for example, five idling Windows systems to take up little more RAM than a single one. This effect is made clear in the image below, in which VMware demonstrates the amount of memory saved across four VMs thanks to aggressive "page sharing", as VMware calls its technology.
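
The idea behind this scanning is easier to picture with a small sketch. The snippet below is not ESX code; it is a minimal, hypothetical illustration of content-based page sharing, and names such as PAGE_SIZE and share_pages are invented for the example. Page contents are hashed, identical pages collapse onto a single backing copy, and the savings are simply the duplicate pages that no longer need their own physical memory. A real implementation would also do a full byte-for-byte comparison before sharing (to rule out hash collisions) and mark shared pages copy-on-write.

```python
# Minimal, hypothetical sketch of content-based page sharing.
# This is NOT ESX code; it only illustrates the concept: identical
# pages are detected by hashing their contents, and all copies are
# then backed by a single shared physical page.

import hashlib

PAGE_SIZE = 4096  # typical x86 page size


def share_pages(vm_memory_images):
    """vm_memory_images: dict of vm_name -> bytes (the VM's memory image).
    Returns a page table mapping (vm, page index) -> shared page id,
    plus the number of pages saved by sharing."""
    shared_store = {}   # content hash -> shared page id
    page_table = {}     # (vm, page index) -> shared page id
    total_pages = 0

    for vm, image in vm_memory_images.items():
        for offset in range(0, len(image), PAGE_SIZE):
            page = image[offset:offset + PAGE_SIZE]
            digest = hashlib.sha1(page).hexdigest()
            # The first VM holding this content "donates" the physical
            # page; every later identical page just points at it.
            shared_store.setdefault(digest, len(shared_store))
            page_table[(vm, offset // PAGE_SIZE)] = shared_store[digest]
            total_pages += 1

    saved = total_pages - len(shared_store)
    return page_table, saved


# Example: five "idling" VMs whose memory is mostly identical (here,
# all-zero pages), mimicking the five-idle-Windows-systems scenario.
vms = {f"win_vm_{i}": bytes(PAGE_SIZE * 100) for i in range(5)}
_, saved_pages = share_pages(vms)
print(f"Pages saved by sharing: {saved_pages} of {5 * 100}")
```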


Page sharing is just one of the three technologies ESX uses to reclaim memory from its VMs; the other two are ballooning and swapping. You can check the effect page sharing has on your VMs in esxtop's memory screen.


The SHRD counter denotes the amount of memory in the VM that is shared with other VMs; this includes zeroed-out pages.

The ZERO counter denotes the amount of memory that has been zeroed out. These pages also count as shared, since every "zeroed" page refers to the same zeroed physical page.

SHRDSVD is the estimated amount of memory saved for that VM thanks to the page-sharing mechanism.

Also interesting are the MCTL columns, which deal with the VM's balloon driver. MCTLSZ is the amount of memory that has been reclaimed through the balloon driver installed with VMware Tools. This "balloon" is actually no more than a process that claims free memory inside the VM to artificially increase memory pressure on the VM's OS. This way, the OS is forced to run as memory-efficiently as possible, allowing as much memory as possible to be reclaimed for other VMs should the need arise.
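
To make the ballooning mechanism a bit more concrete, here is a rough, hypothetical sketch; it is not the actual VMware Tools driver, and the GuestVM, Hypervisor, inflate_balloon and deflate_balloon names are invented for the example. The point it illustrates is simply that the balloon pins pages inside the guest, which pressures the guest OS into freeing memory on its own, and lets the hypervisor hand the backing physical pages to other VMs until the balloon is deflated again.

```python
# Hypothetical sketch of the ballooning idea -- NOT the VMware Tools
# driver. A "balloon" inside the guest pins pages it never touches,
# which pressures the guest OS into freeing caches or swapping, and
# lets the hypervisor reclaim the physical memory behind those pages.

class GuestVM:
    def __init__(self, total_pages):
        self.total_pages = total_pages
        self.balloon_pages = set()        # pages pinned by the balloon

    def free_pages(self):
        return self.total_pages - len(self.balloon_pages)


class Hypervisor:
    def __init__(self):
        self.reclaimed = 0                # pages available to other VMs

    def inflate_balloon(self, vm, target):
        """Ask the in-guest balloon to pin `target` pages."""
        grabbed = min(target, vm.free_pages())
        start = len(vm.balloon_pages)
        vm.balloon_pages.update(range(start, start + grabbed))
        self.reclaimed += grabbed         # host reuses the backing memory

    def deflate_balloon(self, vm, count):
        """Return pages to the guest when host memory pressure drops."""
        released = min(count, len(vm.balloon_pages))
        for _ in range(released):
            vm.balloon_pages.pop()
        self.reclaimed -= released


# Usage: reclaim a quarter of an idle VM's memory, then give it back.
vm = GuestVM(total_pages=1024)
esx = Hypervisor()
esx.inflate_balloon(vm, target=256)
print(vm.free_pages(), esx.reclaimed)     # 768 256
esx.deflate_balloon(vm, count=256)
print(vm.free_pages(), esx.reclaimed)     # 1024 0
```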


13 Comments


  • vorgusa - Monday, June 29, 2009 - link

    Just out of curiosity will you guys be adding KVM to the list?
  • JohanAnandtech - Wednesday, July 1, 2009 - link

    In our upcoming hypervisor comparison, we look at Hyper-V, Xen (Citrix and Novell) and ESX. So far KVM has gotten a lot of press (in the OS community), but I have yet to see any KVM in a production environment. We are open to suggestions, but it seems that we should give priority to the three hypervisors mentioned and look at KVM later.

    It is only now, June 2009, that Red Hat has announced a "beta" virtualization product based on KVM. When running many VMs on a hypervisor, robustness and reliability are by far the most important criteria, and it seems to us that KVM is not there yet. Opinions (based on some good observations, not purely opinions :-) ?
  • Grudin - Monday, June 29, 2009 - link

    Something that is becoming more important as higher-I/O systems are virtualized is disk alignment. Make sure your guest OSes are aligned with the SAN blocks.
  • yknott - Monday, June 29, 2009 - link

    I'd like to second this point. Mis-alignment of physical blocks with virtual blocks can result in two or more physical disk operations for a single VM operation. It's a quick way to kill I/O performance!
  • thornburg - Monday, June 29, 2009 - link

    Actually, I'd like to see an in-depth article on SANs. It seems like a technology space that has been evolving rapidly over the past several years, but doesn't get a lot of coverage.
  • JohanAnandtech - Wednesday, July 1, 2009 - link

    We are definitely working on that. Currently Dell and EMC have shown interest. Right now we are trying to finish off the low-power server (and server CPU) comparison and the quad-socket comparison. After the summer break (mid-August) we'll focus on a SAN comparison.

    I personally have not seen any tests on SANs. Most sites that cover them seem to repeat press releases... but I may have missed some. It is of course a pretty hard thing to do, as some of this stuff costs $40k and more. We'll focus on the more affordable SANs :-).
  • thornburg - Monday, June 29, 2009 - link

    Some Linux systems using the 2.6 kernel make 10x as many interrupts as Windows?

    Can you be more specific? Does it matter which specific 2.6 kernel you're using? Does it matter what filesystem you're using? Why do they do that? Can they be configured to behave differently?

    The way you've said it, it's like a blanket FUD statement that you shouldn't use Linux. I'm used to higher standards than that on Anandtech.
  • LizVD - Monday, June 29, 2009 - link

    As yknott already clarified, this is not in any way meant to be a jab at Linux, but is in fact a real problem caused by the gradual evolution of the Linux kernel. Sure enough, fixes have been implemented by now, and I will make sure to have that clarified in the article.

    If white papers aren't your thing, you could have a look at http://communities.vmware.com/docs/DOC-3580 for more info on this issue.
  • thornburg - Monday, June 29, 2009 - link

    Thanks, both of you.
  • thornburg - Monday, June 29, 2009 - link

    Now that I've read the whitepaper, and looked at the kernel revisions in question, it seems that only people who don't update their kernel should worry about this.

    Based on a little search and a Wikipedia entry, it appears that only Red Hat (of the major distros) is still on the older kernel version.
