Our Ask the Experts series continues with another round of questions.

A couple of months ago we ran a webcast with Intel Fellow Rich Uhlig, VMware Chief Platform Architect Rich Brunner, and myself. The goal was to talk about the past, present and future of virtualization. In preparation for the webcast we solicited questions from all of you; unfortunately, we only had an hour during the webcast to address them. Rich Uhlig from Intel, Rich Brunner from VMware and our own Johan de Gelas all agreed to answer some of your questions in a six-part series we're calling Ask the Experts. Each week we'll showcase three questions you guys asked about virtualization and provide answers from our panel of three experts. These responses haven't been edited and come straight from the experts.

If you'd like to see your question answered here leave it in the comments. While we can't guarantee we'll get to everything, we'll try to pick a few from the comments to answer as the weeks go on.

Question #1 by AnandTech user mpsii

Is an Opteron any better than a Phenom II? Is a Core i7 that much better than a Core i5? Core i7 vs. Xeon?

Answer #1 by Johan de Gelas, AnandTech Senior IT Editor

Yes, a Xeon or Opteron is much better than its desktop brothers. The reason is simple: they can use ECC RAM. According to Google research, 8% of the DIMMs in a server see one correctable error per year. That is not a real issue in a desktop: you probably don't use your desktop 24 hours a day, and most of the time your applications only use 30-50% of your RAM. In a virtualized server, however, you are probably using 80-90% of your RAM, and your server is probably running 24 hours a day, 7 days a week. On top of that, a typical desktop has 3 DIMMs nowadays, while a server probably has at least twice as many.

So it is easy to see that soft errors pose a much higher risk to a virtualized server than to a typical desktop. Therefore, ECC RAM is a must. And a single-socket Xeon or Opteron does not cost much more than a Core i7 or Phenom II CPU: both can be bought for a few hundred dollars.
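
To put a rough number on that risk, here is a quick back-of-the-envelope sketch in Python. It assumes the 8% per-DIMM annual error rate from the Google study cited above, treats DIMMs as failing independently, and uses illustrative DIMM counts:

```python
# Back-of-the-envelope odds of seeing at least one correctable
# memory error per year, assuming each DIMM independently has an
# 8% chance per year (the Google study figure cited above).
# The DIMM counts below are illustrative, not measured.
P_DIMM_ERROR = 0.08

def p_any_error(num_dimms):
    """Probability that at least one DIMM sees an error in a year."""
    return 1 - (1 - P_DIMM_ERROR) ** num_dimms

for label, dimms in (("desktop", 3), ("virtualized server", 12)):
    print("%-18s (%2d DIMMs): %3.0f%% chance per year"
          % (label, dimms, 100 * p_any_error(dimms)))
```

With 3 DIMMs the yearly odds work out to roughly 22%; with 12 DIMMs they climb to roughly 63%, which is why ECC stops being optional on a loaded, always-on server.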

Question #2 by Michael M.

Licensing restrictions continue to plague our virtualization attempts. In what ways is the industry attempting to solve these problems, such as yearly fees, metered purchasing, or other inventive solutions that allow unrestricted access while still compensating the software creators?

Answer #2 by Rich Brunner, VMware Chief Platform Architect

VMware can only control its own pricing and licensing practices. VMware is always looking at ways to simplify licensing and map more closely to usage models. VMware vSphere continues to be sold on a "per socket" basis, while other SKUs need to map more closely to usage: aligning to the average of daily high-water marks of concurrently powered-on VMs, and encapsulating features/benefits for various segment usages (e.g. Enterprise licensing through ELAs).
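
As a rough illustration of that usage metric, here is a short Python sketch of averaging the daily high-water marks of concurrently powered-on VMs. The sample data and names are hypothetical; this is not VMware's actual metering code:

```python
# Illustrative only: the average of daily high-water marks of
# concurrently powered-on VMs, the usage measure described above.
# Sample counts are made up; real metering would poll the hypervisor.

# Each inner list holds hourly counts of powered-on VMs for one day.
daily_poweron_counts = [
    [40, 55, 62, 58, 41],  # day 1
    [38, 49, 70, 66, 44],  # day 2
    [35, 52, 61, 59, 40],  # day 3
]

high_water_marks = [max(day) for day in daily_poweron_counts]
billable_vms = sum(high_water_marks) / len(high_water_marks)

print("daily high-water marks:", high_water_marks)
print("average (billable VM count): %.1f" % billable_vms)
```

The appeal of a measure like this is that a customer pays for sustained concurrent usage rather than for brief power-on spikes or for idle, powered-off VMs.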

Question #3 by Mike K.

With Nehalem EX now out as well as Westmere, it seems that we can now build very large virtualization platforms that can run a ridiculous number of VMs. My question is: how do we provide the networking and SAN I/O bandwidth to keep all of these VMs happy? What changes in the other areas of IT infrastructure are necessary to fully utilize the power that we now have available in the x86-64 space? e.g., 10GbE, 8Gb SAN, iSCSI, ... ???

Answer #3 by Rich Uhlig, Intel Fellow

The transition from 1 GbE to 10 GbE will go a long way toward achieving a better balance of I/O delivered per VM, both by delivering more bandwidth to the platform and by reducing port counts into the physical server. Converging storage protocols onto Ethernet, e.g. with iSCSI and FCoE (Fibre Channel over Ethernet), offers some additional efficiencies as well.

Some of the key challenges are to provide quality of service (QoS) guarantees across different VMs sharing the same physical network connection, and also to reduce packet processing and switching functions in software through the hypervisor. Intel has been adding new features to our 10 GbE network adaptors to address some of these problems, including VMDq, which provides a Layer 2 packet sorter and classifier together with send/receive queues that can be associated with different guest OSes. This support helps to reduce data copies in the hypervisor and improve QoS to the different VMs.
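
To make the idea concrete, here is a toy Python sketch of that kind of Layer 2 sorting. It is purely illustrative: the MAC addresses and names are hypothetical, and real VMDq classification happens in NIC hardware, not in software:

```python
# Toy model of VMDq-style Layer 2 sorting: steer each incoming
# frame to the receive queue of the guest that owns its destination
# MAC address, so the hypervisor no longer demultiplexes in software.
# Purely illustrative; the real classifier lives in NIC hardware.
from collections import defaultdict

# Hypothetical mapping of guest MAC addresses to per-VM RX queues.
mac_to_vm = {
    "00:50:56:aa:00:01": "vm-web",
    "00:50:56:aa:00:02": "vm-db",
}
rx_queues = defaultdict(list)

def classify(frame):
    """Queue a frame for its VM; unknown MACs fall to the default queue."""
    vm = mac_to_vm.get(frame["dst_mac"], "default")
    rx_queues[vm].append(frame)

classify({"dst_mac": "00:50:56:aa:00:01", "payload": b"GET /"})
classify({"dst_mac": "ff:ff:ff:ff:ff:ff", "payload": b"ARP"})
print({vm: len(q) for vm, q in rx_queues.items()})
```

Because each guest's traffic lands on its own queue, the hypervisor avoids an extra copy-and-sort pass, and per-queue scheduling gives it a handle for enforcing QoS between VMs.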

Another development to watch is the set of new standards for PCIe device virtualization developed in the PCI-SIG, specifically the SR-IOV (Single-Root I/O Virtualization) specification. The SR-IOV standard defines a way for a PCIe endpoint device to expose a "virtual function" (VF) that can be directly assigned by a hypervisor to a guest OS, as a way to further reduce the overheads of the hypervisor in network processing. As we discussed in the webcast, there is a tension between getting the benefits of direct I/O assignment to a VM and retaining the advantages of VM migration, but this is another way to improve the I/O bandwidth seen by each VM while keeping CPU utilization down. Leading technologies built on SR-IOV that address this tension by providing higher performance with fewer limitations will begin appearing on the market very soon.
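
For a feel of what SR-IOV exposes, here is a conceptual Python sketch of a physical function (PF) carving out virtual functions (VFs) for direct guest assignment. The class and method names are hypothetical; real VF enumeration and assignment happen through PCI configuration space and the hypervisor:

```python
# Conceptual sketch of SR-IOV: one physical function (PF) exposes
# lightweight virtual functions (VFs) that a hypervisor can assign
# directly to guest OSes. Names are illustrative, not a real PCI API.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VirtualFunction:
    vf_index: int
    assigned_guest: Optional[str] = None  # None: still owned by the PF

@dataclass
class PhysicalFunction:
    pci_address: str
    vfs: List[VirtualFunction] = field(default_factory=list)

    def enable_sriov(self, num_vfs):
        """Carve out num_vfs virtual functions on this device."""
        self.vfs = [VirtualFunction(i) for i in range(num_vfs)]

    def assign_vf(self, guest):
        """Hand the next free VF to a guest for direct I/O."""
        vf = next(v for v in self.vfs if v.assigned_guest is None)
        vf.assigned_guest = guest  # guest I/O now bypasses the hypervisor
        return vf

pf = PhysicalFunction("0000:03:00.0")
pf.enable_sriov(num_vfs=4)
print(pf.assign_vf("guest-1"))
```

The migration tension mentioned above follows directly from this model: once a guest talks to a VF without the hypervisor in the path, the hypervisor can no longer transparently swap the device out from under it.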

Comments

  • Stas - Thursday, September 30, 2010 - link

    chances* not changes
  • Stuka87 - Thursday, September 30, 2010 - link

    We use VMs for testing the software that we develop. We are able to run full environments (multiple machines that interact) in an easy to deploy and manage setup.

    We run several hundred machines on our cluster at any given time. We don't have the rack space (even with 11 racks) to handle this many machines. But we do have space for a big VMware cluster. It also means that if a machine blows up because of a bug in dev code, we can just deploy a new machine (which we have scripted).

    So for us, VMs are a HUGE help. Overall it has saved us quite a large chunk of money in hardware, even after we factor in the price of the Dell R910s and SunFire X4600s that we use for the clusters.
  • TeXWiller - Wednesday, September 29, 2010 - link

    The support for ECC is dependent on the BIOS for the Phenom and on the chipset for the Westmere-based i3 and i5 series of processors. My personal machine is phenomenally (pun intended) driving 8 GB of ECC memory as I write this. Perhaps Johan was really thinking about the extended capacity brought by registered or buffered memory, necessary for bigger configurations?
  • Stuka87 - Thursday, September 30, 2010 - link

    ECC can be supported by some of those chipsets, but buffered memory is not. Typically you want buffered ECC memory for a VM server.
  • andersenep - Friday, October 1, 2010 - link

    I could be completely wrong, but my impression was that the memory controller in K10 CPUs was the same, that they all support ECC, and that (Un)buffered or (un)registered ECC support was dependent on MB/chipset/BIOS.

    I am not certain why anyone would buy a server MB, and drop a consumer/desktop CPU in it, but isn't this possible, even with buffered/registered RAM?

    My understanding was also that Intel did not offer ECC support of any kind in its desktop CPUs, and reserved this support solely for Xeons. Has this changed? Could I run unbuffered ECC ram with my Core i7?

    Am I completely wrong here?
  • TeXWiller - Friday, October 1, 2010 - link

    No ECC for i7. 3xxx series Xeons with the 1156 socket can drive both registered and unregistered ECC DRAM depending on the BIOS and the chipset. The 1366 socket based 3xxx Xeons can drive unregistered ECC/non-ECC only, irrespective of the chipset used. Of the non-Xeon processors, the Westmere-based i3 and i5 do work with unbuffered ECC DRAM with the 3xxx chipsets and a proper BIOS, while the Lynnfield-based i5 and i7 processors don't.
    Recent 1156 server boards can apparently take a Westmere i3/i5 with the unbuffered memory. When you need more capacity you can switch to a Lynnfield-based Xeon and drive four quad or six double rank registered ECC modules, depending on the board configuration.
  • andersenep - Friday, October 1, 2010 - link

    When you say the Westmere i5's and i3's will run unregistered ECC RAM depending on BIOS and chipset, do you mean that it will support ECC scrubbing, or will it just "work"?

    I have heard this has been an issue with some AMD MBs. Manufacturers claim ECC support, but ECC scrubbing is not supported; it just works like non-ECC RAM, which defeats the whole purpose.

    Given that unbuffered ECC DIMMs cost pretty much the same as non-ECC DIMMs, I don't see why Intel and MB manufacturers are fighting ECC support in desktop/consumer CPUs and MBs. Why is there some artificial line being drawn between server and desktop components in regard to ECC support (registered/buffered or not)?

    ECC support (even though it's unbuffered) was a key consideration for me in selecting a CPU/MB. I went with an Opteron 1352 because it was cheap enough and powerful enough for my needs, but had I gone with a Phenom II or any other consumer AM2+ CPU, I should still have that same support.

    Answer #1 seems to imply that this assumption is wrong.

    Thanks for the reply.
  • TeXWiller - Saturday, October 2, 2010 - link

    It's difficult to say much about the ECC support options for the Intel server boards. They should provide chipkill-like error correction for the x8 type of memory for at least the 3xxx Xeons. I'm assuming similar support is provided for the i3 and Pentium with unbuffered memory, as most boards seem to support only i3 and Pentium processors even though Intel claims equal support for the i5 in the datasheet. What is interesting is that scrubbing is mentioned only in the datasheets of the 5000 and 7000 series Xeons.

    The limits of ECC support are probably caused by the typical use cases of consumer "gear", such as "performance at any cost", and the rarity of cases such as "the home server". The rest is probably implementation cost and greedy market segmentation.

    I have personally bumped into a consumer board which was claimed to support ECC memory, only to discover that the support was limited to booting with the said memory. Yellow liquid was oozing from my general direction after that discovery. This was an AM2+ board. Now I'm using an Asus AM2+ board with configurable ECC support, with adjustable scrubbing and chipkill, under a Phenom 9750. High-end Gigabyte boards seem to have proper ECC support as well.
  • brundlefly77 - Wednesday, September 29, 2010 - link

    Licensing, licensing, licensing: yes, a HUGE issue for consumer desktop virtualization.

    I realized this week that I needed TWO Windows 7 licenses to run Windows 7 Pro Boot Camp under Fusion on Mac. I can't afford that for the amount of time I use it.

    I understand that VMware can only address its own licensing issues, but I would suggest that investing more resources, time, money, and lawyers into negotiating with Microsoft for ways to avoid double-licensing is critical to VMware's future in selling consumer desktop virtualization solutions.

    I won't even get into Apple's position, which is to basically not allow MacOS to run on anything but bare Apple hardware.
  • miteethor - Wednesday, September 29, 2010 - link

    We already have this solved on the server side. Microsoft offers Windows 2008 Datacenter edition, which allows an unlimited number of virtual machines to be installed per processor. Even though we use VMware to do the virtualization, we purchase this one-shot license for each VMware server and we are covered.

    I realize that doesn't help you on the desktop side, but there is a solution for datacenters that are using this technology.
