Our Ask the Experts series continues with another round of questions.

A couple of months ago we ran a webcast with Intel Fellow Rich Uhlig, VMware Chief Platform Architect Rich Brunner, and myself. The goal was to talk about the past, present, and future of virtualization. In preparation for the webcast we solicited questions from all of you; unfortunately, we only had an hour during the webcast to address them. Rich Uhlig from Intel, Rich Brunner from VMware, and our own Johan de Gelas all agreed to answer some of your questions in a six-part series we're calling Ask the Experts. Each week we'll showcase three questions you guys asked about virtualization and provide answers from our panel of three experts. These responses haven't been edited and come straight from the experts.

If you'd like to see your question answered here leave it in the comments. While we can't guarantee we'll get to everything, we'll try to pick a few from the comments to answer as the weeks go on.

Question #1 by AnandTech user mpsii

Is an Opteron any better than a Phenom II? Is a Core i7 that much better than a Core i5? Core i7 vs. Xeon?

Answer #1 by Johan de Gelas, AnandTech Senior IT Editor

Yes, a Xeon or an Opteron is much better than its desktop brothers. The reason is simple: they can use ECC RAM. According to Google research, 8% of the DIMMs in a server see one correctable error per year. That is not a real issue in a desktop: you probably don't use your desktop 24 hours a day, and most of the time your applications only use 30-50% of your RAM. In a virtualized server, however, you are probably using 80-90% of your RAM, and your server is probably running 24 hours a day, 7 days a week. On top of that, a typical desktop has 3 DIMMs nowadays, while a server probably has at least twice as many.
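
To put rough numbers on that, here is a back-of-the-envelope sketch in Python. It assumes the 8% per-DIMM, per-year figure from the Google study applies independently to each DIMM; the DIMM counts are only illustrative.

```python
# Back-of-the-envelope odds that at least one DIMM in a machine sees a
# correctable error within a year, assuming the 8% per-DIMM annual rate
# applies independently to each DIMM (DIMM counts are illustrative).
P_DIMM_ERROR_PER_YEAR = 0.08

def p_any_error(dimm_count: int) -> float:
    """Probability that at least one of `dimm_count` DIMMs errors in a year."""
    return 1.0 - (1.0 - P_DIMM_ERROR_PER_YEAR) ** dimm_count

for label, dimms in [("desktop", 3), ("virtualization host", 12)]:
    print(f"{label} with {dimms} DIMMs: {p_any_error(dimms):.0%} per year")

# desktop with 3 DIMMs: 22% per year
# virtualization host with 12 DIMMs: 63% per year
```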

So it is easy to see that soft errors pose a much higher risk to a virtualized server than to a typical desktop. Therefore, ECC RAM is a must. And a single socket Xeon or Opteron does not cost much more than a Core i7 or Phenom II CPU. Both can be bought for a few hundred dollars.

Question #2 by Michael M.

Licensing restrictions continue to plague our virtualization attempts. In what ways is the industry attempting to solve these problems, such as yearly fees, metered purchasing, or other inventive solutions that allow unrestricted access while still compensating the software creators?

Answer #2 by Rich Brunner, VMware Chief Platform Architect

VMware can only control its own pricing and licensing practices. VMware is always looking at ways to simplify licensing and map it more closely to usage models. VMware vSphere continues to be sold on a per-socket basis, while other SKUs need to map more closely to usage: aligning to the average of daily high-water marks of concurrently powered-on VMs, and encapsulating features/benefits for various segment usage (e.g., Enterprise licensing through ELAs).
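
As a rough illustration of the "average of daily high-water marks" idea, here is a minimal sketch of the metric itself. The sample data and structure are hypothetical and not tied to any VMware API; it only shows how the number would be computed from powered-on VM counts.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical samples of how many VMs were powered on at various times;
# not a real VMware API, just the shape of the metric being described.
samples = [
    (datetime(2010, 9, 1, 8, 0), 40),
    (datetime(2010, 9, 1, 13, 0), 55),   # Sept 1 peak
    (datetime(2010, 9, 2, 9, 0), 48),
    (datetime(2010, 9, 2, 17, 0), 61),   # Sept 2 peak
]

def avg_daily_high_water(samples):
    """Average of each day's peak count of concurrently powered-on VMs."""
    daily_peak = defaultdict(int)
    for when, running in samples:
        daily_peak[when.date()] = max(daily_peak[when.date()], running)
    return sum(daily_peak.values()) / len(daily_peak)

print(avg_daily_high_water(samples))   # (55 + 61) / 2 = 58.0
```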

Question #3 by Mike K.

With Nehalem EX now out as well as Westmere, it seems that we can now build very large virtualization platforms that can run ridiculously many VMs. My question is: how do we provide the networking and SAN I/O bandwidth to keep all of these VMs happy? What changes in the other areas of IT infrastructure are necessary to fully utilize the power that we now have available in the x86-64 space? E.g., 10GbE, 8Gb SAN, iSCSI, ...???

Answer #3 by Rich Uhlig, Intel Fellow

The transition from 1 GbE to 10 GbE will go a long way toward achieving a better balance of I/O delivered per VM, both by delivering more bandwidth to the platform and by reducing port counts into the physical server. Converging storage protocols onto Ethernet, e.g. with iSCSI and FCoE (Fibre Channel over Ethernet), offers some additional efficiencies as well.

Some of the key challenges are to provide quality of service (QoS) guarantees across different VMs sharing the same physical network connection, and to reduce the packet processing and switching work done in software by the hypervisor. Intel has been adding new features to our 10 GbE network adapters to address some of these problems, including VMDq, which provides a Layer 2 packet sorter and classifier together with send/receive queues that can be associated with different guest OSes. This support helps to reduce data copies in the hypervisor and improve QoS for the different VMs.
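
Purely as a mental model of what that Layer 2 sorter and classifier does, here is a toy sketch in software. VMDq itself is implemented in the NIC hardware; the MAC-to-VM mapping, queue layout, and frames below are invented for illustration.

```python
from collections import defaultdict, deque

# Toy software model of a Layer 2 sorter: each incoming frame is steered
# into a per-VM receive queue based on its destination MAC address, so the
# hypervisor doesn't have to inspect and copy every frame itself.
mac_to_vm = {
    "00:50:56:aa:00:01": "vm-web",
    "00:50:56:aa:00:02": "vm-mail",
}

rx_queues = defaultdict(deque)  # one receive queue per VM (plus a default)

def classify(frame: dict) -> None:
    """Append the frame to the receive queue of the VM owning its dest MAC."""
    vm = mac_to_vm.get(frame["dst_mac"], "default-queue")
    rx_queues[vm].append(frame)

classify({"dst_mac": "00:50:56:aa:00:01", "payload": b"GET / HTTP/1.1"})
classify({"dst_mac": "00:50:56:aa:00:02", "payload": b"EHLO example.com"})
print({vm: len(q) for vm, q in rx_queues.items()})
# {'vm-web': 1, 'vm-mail': 1}
```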

Another development to watch is the set of new standards for PCIe device virtualization developed in the PCI-SIG, specifically the SR-IOV (Single-Root I/O Virtualization) specification. The SR-IOV standard defines a way for a PCIe endpoint device to expose a "virtual function" (VF) that can be directly assigned by a hypervisor to a guest OS as a way to further reduce the hypervisor's overhead in network processing. As we discussed in the webcast, there is a tension between getting the benefits of direct I/O assignment to a VM and retaining the advantages of VM migration, but this is another way to improve the I/O bandwidth seen by each VM while keeping CPU utilization down. Leading technologies built on SR-IOV that address this tension by providing higher performance with fewer limitations will begin appearing on the market very soon.
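
For a sense of how those virtual functions get surfaced on the host side, here is a minimal sketch against the sysfs interface that newer Linux kernels expose for SR-IOV. The PCI address is hypothetical, and the exact steps vary by kernel, driver, and hypervisor; some stacks use driver-specific module parameters instead.

```python
from pathlib import Path

# Hypothetical PCI address of an SR-IOV-capable NIC; find the real one with lspci.
PF_ADDR = "0000:03:00.0"
DEVICE = Path("/sys/bus/pci/devices") / PF_ADDR

def enable_vfs(count: int) -> None:
    """Ask the physical function's driver to create `count` virtual functions."""
    total = int((DEVICE / "sriov_totalvfs").read_text())
    if count > total:
        raise ValueError(f"device supports at most {total} VFs")
    (DEVICE / "sriov_numvfs").write_text("0")        # clear any existing VFs first
    (DEVICE / "sriov_numvfs").write_text(str(count))

# enable_vfs(4)  # each resulting VF can then be passed through to a guest OS
```

Once the VFs exist, the hypervisor can assign each one to a guest much like any other PCI device.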

Comments

  • GoodBytes - Wednesday, September 29, 2010 - link

    I never understood this virtualization thing. So OK, I can run a different OS on my desktop computer to run a specific program. That is nice... well, not really, as it takes forever to start up, but I guess it's better than nothing.
    Why have this on a server? Assuming all your software is up to date and works with your server OS, what's the benefit? What does it allow me to do that I can't do without it?
  • Link23 - Wednesday, September 29, 2010 - link

    Virtualization is very useful in a large environment. Case in point: I have 160 virtual servers running on 5 hosts (5 physical servers). For my customer this is very useful, since he doesn't need the space to store 160 servers or have to worry about powering them. It is also very useful for testing environments if you have developers that need to test client-server applications.
  • TheHolyLancer - Wednesday, September 29, 2010 - link

    Consolidation: you have a file server, an email server, a web server, a ----- server. Each one on its own does not need that much processing power or I/O, so it is cheaper to buy one or two really good and reliable servers and virtualize them.

    Less electricity use, easier to manage and back up, and it allows for other nifty tricks like pausing the OS while it is running mid-way to move the server physically, etc.

    Some things that need all the power and I/O they can get are obviously not the best candidates for consolidation onto VMs, but then you can buy several servers and have all of them run the same server at the same time, so if one server dies, another takes over without downtime (read: VMware Fault Tolerance or active/active server failover). This is common in the RISC/mainframe space to provide near-100% uptime (three or four 9s), and it can now be done on el cheapo x86 stuff.
  • GoodBytes - Wednesday, September 29, 2010 - link

    Thanks to both!

    I don't know much about server side things, so please excuse my ignorance.
    Why can't you have a file server that is at the same time an e-mail server and a web server? What I am trying to say is that I already made my home desktop computer an FTP and HTTP server temporarily (for experimenting), and I was still able to use my computer as if it did not have the FTP and HTTP servers running, so why do you need separate servers? I guess that if one crashes, not everything falls apart when you use separate computers, but a virtual environment leads to the same situation: if the computer that runs them crashes, they all crash. So why not merge every server into one, like installing and using several applications on one computer? (I know that each process runs in its own virtual space on the computer, but you know what I mean: install/configure the server to be an e-mail, file, and web server all at once.)
  • solgae1784 - Wednesday, September 29, 2010 - link

    The reason to separate those roles is to provide isolation, so that one role will not affect the others. There are many technical (e.g., conflicts due to different OS settings required by different applications), business (e.g., uptime requirements), and political (e.g., separation of duties required by department rules) reasons that necessitate isolation. In the physical world, this meant additional physical servers whose purchase and maintenance costs quickly accumulate. With virtualization, there are far fewer physical servers to maintain, even if the number of servers and the labor required stay the same, which translates into less hardware space, lower cooling requirements, lower electricity costs, and more.

    Also, just know that putting all your business-critical servers in one machine is simply asking for trouble: if that single machine goes down, virtual or not, then you're out of business. You really need redundancy and disaster recovery plans to make sure your business-critical roles stay operative. Redundancy is even more important in a virtual environment, where one physical machine hosts multiple servers.

    Fortunately, many virtual environments provide options such as VMware HA (restarts the VMs from a failed host on another one) and FT (maintains two copies of a VM and immediately switches to the secondary VM if the primary goes down, without incurring any downtime) to protect against hardware failure. Microsoft's Windows Clustering is also there to protect against application failure if needed. Disaster recovery in the virtual world is also much simpler to implement, such as SAN replication to another site, since the servers are now represented by a set of files stored on the SAN, and products such as VMware Site Recovery Manager can automate the recovery of the VMs with the click of a button - as long as, of course, you have planned your disaster recovery well in the first place.
  • justaviking - Wednesday, September 29, 2010 - link

    Simple example:

    What if you need to reboot your email server? Maybe because you installed a patch.

    In a VM situation, you simply reboot the email VM.

    If you have everything running on one traditional computer, when you reboot your email server you also reboot your web server, your video server, your database... everything.

    Another example:

    Or what if your email software is not yet compatible with a certain patch, but everything else is? You have to run without the new patch while you wait for total compatibility across your entire software suite. In a VM environment, you can patch each "machine" independently if needed.
  • justaviking - Wednesday, September 29, 2010 - link

    Also, most crashes are "software crashes." At least I think so.

    So the odds of your underlying server crashing your entire VM system are very low.

    This way if one of your VMs crashes, due to some software lockup, the other VMs continue to run.

    On a traditional system, a Blue Screen of Death (BSOD) would wipe out all your services.
  • justaviking - Wednesday, September 29, 2010 - link

    One more answer to "Why use a VM?" then I'll quit replying to myself...

    Portability.

    If you want to move your email server to a new, faster piece of hardware, it's a lot of work. Installations. Licenses. Etc.

    With a VM, it's sort of "pick up the suitcase and go."

    This might not be the case in every situation, but the time I actually used a VM, it was great. I just put the VM onto my work laptop and was up and running in about half an hour. The alternative would have been to spend two DAYS installing and configuring a database, a web server, my company's software, ensuring license keys were correct, and on and on. So a VM was a great way to clone a training/demo system and make it portable.
  • GoodBytes - Wednesday, September 29, 2010 - link

    Wow, thank you very much for your time.
    I have learned a lot! Now I really see the importance of using virtual machines.

    +++ rep, if that was possible :)
  • Stas - Thursday, September 30, 2010 - link

    While we're at it, an example from today.
    A client has one server running AD, Exchange, a spam filter, a DB, and it also acts as a storage server. Somehow, the server got hacked - Exchange was sending out loads of spam. So much spam that the whole machine came to a halt. Result: business software was down because their DB wasn't responding, no one could access their files on the shares, all that in addition to no email. They couldn't run transactions and lost some money because they had to wait for me to bring the server back to life. Had they spent the money on a somewhat more powerful system and split it up into 3-4 virtual servers with specific tasks, they would have only suffered a loss of email functionality for a couple of hours. Not to mention that, chances are, the vulnerability is due to so many applications requiring so many open ports, relaxed security measures, etc.
