Our Ask the Experts series continues with another round of questions.

A couple of months ago we ran a webcast with Intel Fellow Rich Uhlig, VMware Chief Platform Architect Rich Brunner, and myself. The goal was to talk about the past, present and future of virtualization. In preparation for the webcast we solicited questions from all of you; unfortunately, we only had an hour during the webcast to address them. Rich Uhlig from Intel, Rich Brunner from VMware and our own Johan de Gelas all agreed to answer some of your questions in a 6-part series we're calling Ask the Experts. Each week we'll showcase three questions you guys asked about virtualization and provide answers from our panel of three experts. These responses haven't been edited and come straight from the experts.

If you'd like to see your question answered here leave it in the comments. While we can't guarantee we'll get to everything, we'll try to pick a few from the comments to answer as the weeks go on.

Question #1 by AnandTech user mpsii

Is an Opteron any better than a Phenom II? Is a Core i7 that much better than a Core i5? Core i7 vs. Xeon?

Answer #1 by Johan de Gelas, AnandTech Senior IT Editor

Yes, a Xeon or Opteron is much better than its desktop brothers. The reason is simple: they can use ECC RAM. According to Google research, 8% of the DIMMs in a server see one correctable error per year. That is not a real issue in a desktop: you probably don't use your desktop 24 hours a day, and most of the time your applications only use 30-50% of your RAM. In a virtualized server, however, you are probably using 80-90% of your RAM, and your server is probably running 24 hours a day, 7 days a week. On top of that, a typical desktop has 3 DIMMs nowadays, while a server probably has at least twice as many.

So it is easy to see that soft errors pose a much higher risk to a virtualized server than to a typical desktop. Therefore, ECC RAM is a must. And a single socket Xeon or Opteron does not cost much more than a Core i7 or Phenom II CPU. Both can be bought for a few hundred dollars.
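To put rough numbers on that comparison, here is a back-of-the-envelope sketch. The 8% per-DIMM annual rate comes from the Google study mentioned above; the DIMM counts and the independence assumption are purely illustrative:

```python
# Probability that at least one DIMM in a machine sees a correctable
# error in a year, assuming DIMMs fail independently at the ~8%
# per-DIMM annual rate reported by the Google study.
def p_any_error(p_per_dimm, dimm_count):
    return 1 - (1 - p_per_dimm) ** dimm_count

desktop = p_any_error(0.08, 3)    # typical desktop: 3 DIMMs
server  = p_any_error(0.08, 12)   # dual-socket server: 12 DIMMs

print(f"desktop: {desktop:.1%}")  # ~22%
print(f"server:  {server:.1%}")   # ~63%
```

With the server also running around the clock and keeping most of its RAM in active use, the exposure gap in practice is even wider than these raw probabilities suggest.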

Question #2 by Michael M.

Licensing restrictions continue to plague our virtualization attempts. In what ways are the industry attempting to solve these problems, such as yearly fees, metered purchasing, or other inventive solutions to allow unrestricted access while still compensating the software creators?

Answer #2 by Rich Brunner, VMware Chief Platform Architect

VMware can only control its own pricing and licensing practices. VMware is always looking at ways to simplify licensing and map it more closely to usage models. VMware vSphere continues to be sold on a per-socket basis, while other SKUs need to map more closely to actual usage: aligning to the average of daily high-water marks of concurrently powered-on VMs, and encapsulating features/benefits for various segment usage (e.g. Enterprise licensing through ELAs).

Question #3 by Mike K.

With Nehalem EX now out as well as Westmere, it seems that we can now build very large virtualization platforms that can run ridiculously many vm's - my question is - how do we provide the networking and SAN I/O bandwidth to keep all of these VM's happy? What changes in the other areas of IT infrastructure are necessary to fully utilize the power that we now have available in the x86-64 space? e.g., 10GbE, 8Gb SAN, iSCSI, ... ???

Answer #3 by Rich Uhlig, Intel Fellow

The transition from 1 GbE to 10 GbE will go a long way to achieve a better balance of I/O delivered per VM, both by delivering more bandwidth to the platform and by reducing port counts into the physical server. Converging storage protocols onto Ethernet, e.g. with iSCSI and FCoE (Fibre Channel over Ethernet), offers some additional efficiencies as well.

Some of the key challenges are to provide quality of service (QoS) guarantees across different VMs sharing the same physical network connection, and also to reduce packet processing and switching functions in software through the hypervisor. Intel has been adding new features to our 10 GbE network adaptors to address some of these problems, including VMDq, which provides a Layer 2 packet sorter and classifier together with send/receive queues that can be associated with different guest OSes. This support helps to reduce data copies in the hypervisor and improve QoS to the different VMs.
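To make the idea concrete, here is a highly simplified software model of what a Layer 2 sorter like VMDq does in hardware: incoming frames are classified by destination MAC address into per-guest receive queues, so the hypervisor no longer has to demultiplex and copy each frame itself. All class and method names here are illustrative, not Intel's actual interfaces:

```python
from collections import deque

class L2Sorter:
    """Toy model of a per-VM receive-queue classifier (illustrative only)."""
    def __init__(self):
        self.queues = {}              # guest MAC address -> per-guest receive queue
        self.default_queue = deque()  # unclassified traffic falls back to the hypervisor

    def assign_queue(self, guest_mac):
        """Associate a dedicated receive queue with a guest's MAC address."""
        self.queues[guest_mac] = deque()
        return self.queues[guest_mac]

    def receive(self, frame):
        """Sort an incoming frame by destination MAC (the Layer 2 classify step)."""
        dst = frame["dst_mac"]
        self.queues.get(dst, self.default_queue).append(frame)

sorter = L2Sorter()
q_vm1 = sorter.assign_queue("00:16:3e:00:00:01")
q_vm2 = sorter.assign_queue("00:16:3e:00:00:02")

sorter.receive({"dst_mac": "00:16:3e:00:00:01", "payload": b"to vm1"})
sorter.receive({"dst_mac": "00:16:3e:00:00:02", "payload": b"to vm2"})
sorter.receive({"dst_mac": "ff:ff:ff:ff:ff:ff", "payload": b"broadcast"})

print(len(q_vm1), len(q_vm2), len(sorter.default_queue))  # 1 1 1
```

In the real adaptor this sorting happens in NIC hardware before the hypervisor ever touches the packet, which is where the data-copy savings and QoS benefits come from.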

Another development to watch is the set of new standards for PCIe device virtualization developed in the PCI-SIG, specifically the SR-IOV (Single-Root I/O Virtualization) specification. The SR-IOV standard defines a way for a PCIe endpoint device to expose a “virtual function” (VF) that can be directly assigned by a hypervisor to a guest OS as a way to further reduce the overheads of the hypervisor in network processing. As we discussed in the webcast, there is a tension between getting the benefits of direct I/O assignment to a VM and retaining the advantages of VM migration, but this is another way to improve the I/O bandwidth seen by each VM while keeping CPU utilization down. Leading technologies built on SR-IOV that address this tension by providing higher performance with fewer limitations will begin appearing on the market very soon.
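As a concrete illustration, on a reasonably recent Linux kernel (newer than the ones current when this article ran), creating VFs on an SR-IOV-capable adapter looks roughly like the configuration sketch below; the interface name `eth0` and the VF count are placeholders for whatever your system actually has:

```shell
# Ask how many virtual functions the adapter can expose
# (requires an SR-IOV-capable NIC and kernel support)
cat /sys/class/net/eth0/device/sriov_totalvfs

# Create 4 virtual functions; each VF appears as its own PCIe
# device that the hypervisor can assign directly to a guest OS
echo 4 | sudo tee /sys/class/net/eth0/device/sriov_numvfs

# The VFs are now visible as separate PCI devices
lspci | grep -i "virtual function"
```

Each VF the guest receives behaves like a lightweight NIC of its own, which is what lets packets bypass the hypervisor's software switching path.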


  • HMTK - Thursday, September 30, 2010 - link

    Still, Windows 2008 Datacenter is only useful when you run a lot of VM's. For a small outfit, Enterprise (which can be virtualized 4x per license) can be more interesting. YMMV. An SMB could for example have 3 low-end dual socket pizza boxes (like a ProLiant DL160 or the cheapest DL360) and vSphere Essentials Plus. Such a cluster would probably be serious overkill hardware-wise to run a dozen VM's but would be worth it for failover. However, in such a case Windows 2008 Datacenter for 6 sockets would be incredibly expensive.

    I'm happy though that Microsoft changed its licensing for VDI solutions. It's still expensive - and will probably be so until MS has a decent VDI solution itself - but one of my customers was happy to pay roughly €8,000 less PER YEAR for his VMware View solution.
    Reply
  • redisnidma - Thursday, September 30, 2010 - link

    Why can't you guys get AMD involved in this debate/article?
    That would be a plus to the discussion.
    Reply
  • HMTK - Thursday, September 30, 2010 - link

    There is something VMware can do about licensing. With the current model, licensing for vSphere is not only per socket but also by number of cores. You can use either 6 or 12 cores per socket, which makes using current AMD CPU's with 8 or 12 cores rather expensive, as you have to buy vSphere Advanced or Enterprise Plus. It would be nice if this limitation were removed, especially because mainstream Intel parts will get more than 6 cores as well in the future. The strange thing is that Hyper-Threading does not count here: 6 cores with a total of 12 threads is possible on the cheaper vSphere versions, but 8 cores with a total of 8 threads requires Advanced/Enterprise Plus.

    OTOH, vSphere 4.1 became a lot more interesting for SMB's, since the Essentials Plus package (3 servers with 2 CPU's each) now also gets vMotion and High Availability, which makes this product far more interesting than version 4.0.
    Reply
  • Nehemoth - Thursday, September 30, 2010 - link

    Indeed, I have spent days wondering about the same question. 6- and 8-core CPUs are practically everywhere in a datacenter; VMware should lower the cost of licensing those, which would be a great plus.

    Next year, with Sandy Bridge and Bulldozer, it will only get worse as the core counts increase. Let's hope VMware is listening to their customers.

    Also, something I would like to see would be a special development-host license. Say, for example, I go the virtualization route and decide to consolidate my 60 to 100 servers: I put all those in 2 or 4 hosts for FT and HA, but I would also like an extra host so the IT guys can play with upcoming trends in technology. The license for that host should be cheaper, say because you won't need that host to be up 24/7, or you won't need HA or FT. After all, HW is way cheaper than SW.
    Reply
  • HMTK - Friday, October 01, 2010 - link

    ESXi is and will remain free (although only 6 cores/CPU) and Standard isn't all that expensive, while you could manage it with your existing vCenter server. Or you could get Essentials, which gives you a vCenter Foundation (3 hosts) and vSphere licenses for 3 dual socket machines. In fact, if you do not need vMotion and HA, Essentials is ridiculously cheap, and Essentials Plus gives you both those features for a fairly modest price. Still, you're always stuck with 6 cores per CPU.
    Reply
  • Budwise - Friday, October 01, 2010 - link

    There are other options out there aside from VMware. Let's get some XenServer feedback. Explain to everyone bare metal installs vs. OS-on-OS, what paravirtualized drivers are, how I/O works, how to manage it, NIC bonding options, switch requirements, etc.
    Reply
