Our Ask the Experts series continues with another round of questions.

A couple of months ago we ran a webcast with Intel Fellow Rich Uhlig, VMware Chief Platform Architect Rich Brunner, and myself. The goal was to talk about the past, present and future of virtualization. In preparation for the webcast we solicited questions from all of you, but unfortunately we only had an hour during the webcast to address them. Rich Uhlig from Intel, Rich Brunner from VMware and our own Johan de Gelas all agreed to answer some of your questions in a six-part series we're calling Ask the Experts. Each week we'll showcase three questions you guys asked about virtualization and provide answers from our panel of three experts. These responses haven't been edited and come straight from the experts.

If you'd like to see your question answered here, leave it in the comments. While we can't guarantee we'll get to everything, we'll try to pick a few from the comments to answer as the weeks go on.

Question #1 by Eric A.

What types of computing will (likely) never benefit from virtualization?

Answer #1 by Johan de Gelas, AnandTech Senior IT Editor

Quite a few HPC applications that scale easily over multiple cores and can easily gobble up all the resources a physical host has. I don't see graphically intensive applications being virtualized quickly either. And of course web servers that need to scale out (use multiple nodes) don't have much use for virtualization either.

Question #2 by Alexander H.

GPGPU is becoming very important these days. When can we expect virtual machines to tap this resource unobstructed?

Answer #2 by Rich Brunner, VMware Chief Platform Architect

Let me speak from the bare metal hypervisor (server) POV. If you directly expose a GPGPU to a VM (virtual machine), you make VMware VMotion of the VM to a different system too difficult, fragile, and costly to attempt. There is no guarantee that the target system of a VMware VMotion even has any graphics controller beyond the simple 2D VGA capability living in the BMC of the server, and few server customers want to waste the limited PCIe slots of a server on a graphics card. Even if you could claim some high performance graphics controller in each server today, which we do not see our SMB and enterprise customers rushing toward right now, there is still no guarantee of compatibility even at the GPGPU instruction set level (OpenCL vs CUDA vs DirectCompute). This incompatibility breaks live migration. Attempting to address the compatibility requirements by emulating the GPGPU instruction set on systems that do not have it also leads to unacceptable performance. As a result, I do not expect anyone to seriously expose GPGPUs in a commercial enterprise hypervisor scenario for at least a few more years. But desktop hypervisors, which have fewer requirements for live migration, could get this to work sooner and paper over some of the incompatibility issues.
I think a better first step is for the hypervisor and VM to share common graphics rendering commands and primitives so that the hypervisor does not have to convert one graphics command set to another in order to render on the VM's behalf. A native driver in the hypervisor can then tweak the commands and take advantage of any hardware acceleration that a high-performance graphics card could provide if present. (This is being done today for "hosted" hypervisors such as VMware's Fusion product on the Mac with regards to OpenGL.) A GPGPU-capable graphics card offers the possibility of further "offline" (or asynchronous) acceleration of rendering and other hypervisor tasks that are invisible to the VMs on the server.
Having said that, it is clear that the microprocessor vendors are slowly integrating more capable graphics devices with their CPUs into the same processor package, at least for some market segments. If they ever decide to make this capability available in server processors, then more direct exposure of the GPGPU to the VM that does not break VMware VMotion may become possible due to the resulting widespread availability and commonality of integrated GPGPUs.
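
To make the "shared rendering commands" idea above a bit more concrete, here is a minimal sketch in C of what a paravirtual graphics path could look like. Everything in it (the gfx_cmd and gfx_ring structures, the guest_submit and host_drain functions) is hypothetical and for illustration only; it is not VMware's interface. The idea it shows: the guest's display driver queues device-independent primitives into a shared ring, and a native driver on the host side drains the ring and maps each primitive onto whatever acceleration (or software fallback) the physical machine actually has.

```c
/* Illustrative sketch of a paravirtual graphics command ring.
 * All names (gfx_cmd, gfx_ring, guest_submit, host_drain) are
 * hypothetical; this is not VMware's actual interface. */
#include <stdint.h>
#include <stdbool.h>

typedef enum { GFX_CLEAR, GFX_DRAW_TRIANGLES, GFX_PRESENT } gfx_op;

typedef struct {
    gfx_op   op;       /* device-independent rendering primitive       */
    uint64_t buf_gpa;  /* guest-physical address of vertex/pixel data  */
    uint32_t count;    /* number of primitives                         */
} gfx_cmd;

#define RING_SLOTS 256

typedef struct {
    gfx_cmd           slots[RING_SLOTS];
    volatile uint32_t head;   /* written by the guest driver */
    volatile uint32_t tail;   /* written by the host backend */
} gfx_ring;

/* Guest-side paravirtual driver: enqueue a command, no translation. */
static bool guest_submit(gfx_ring *r, gfx_cmd c)
{
    uint32_t next = (r->head + 1) % RING_SLOTS;
    if (next == r->tail)
        return false;          /* ring full: back-pressure the guest */
    r->slots[r->head] = c;
    r->head = next;
    return true;               /* a real driver would now notify the host */
}

/* Host-side native driver: drain the ring and map each primitive onto
 * whatever hardware acceleration is present, or a software fallback. */
static void host_drain(gfx_ring *r)
{
    while (r->tail != r->head) {
        gfx_cmd c = r->slots[r->tail];
        switch (c.op) {
        case GFX_CLEAR:          /* e.g. glClear or a software fill        */ break;
        case GFX_DRAW_TRIANGLES: /* e.g. glDrawArrays or CPU rasterization */ break;
        case GFX_PRESENT:        /* copy the result into the VM's screen   */ break;
        }
        r->tail = (r->tail + 1) % RING_SLOTS;
    }
}

int main(void)
{
    gfx_ring ring = { .head = 0, .tail = 0 };
    gfx_cmd  clear = { .op = GFX_CLEAR, .buf_gpa = 0, .count = 0 };
    guest_submit(&ring, clear);  /* guest queues a primitive...            */
    host_drain(&ring);           /* ...host renders it with native drivers */
    return 0;
}
```

The point of keeping the guest-visible interface at this level is that nothing the VM sees depends on the physical GPU underneath, which is exactly the property that keeps live migration from breaking.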

Question #3 by Craig R.

What is the roadmap for breakthrough security features and their implementation?

Answer #3 by Rich Uhlig, Intel Fellow

Going back to the early days of the definition of Intel VT, we actually had security in mind from the beginning, and so security is sort of already built into our existing VT feature roadmap. VMs provide a fundamentally stronger form of isolation between bodies of code because that isolation extends down to the OS kernel and device drivers running in ring 0. Our goal has been to help VMM software further strengthen the security boundaries between VMs through hardware support. As an example, VT includes hardware mechanisms for remapping and blocking device DMA accesses to system memory, so that even a privileged ring-0 device driver running in one VM can’t access the memory belonging to another VM; that’s something that can’t be done without new hardware support. VT also simplifies the implementation of a VMM by reducing the amount of code needed to work around virtualization problems – that reduces the overall size of the trusted computing base and therefore the “attack surface” for malicious software to exploit. More recently, we’ve been adding hardware support to compute a cryptographic hash of the VMM kernel image that is loaded into the machine as it boots. This cryptographic measurement of the VMM can help to ensure that the VMM binary has not been tampered with before it begins to run. We call this “Trusted Execution Technology”.
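
The DMA-remapping mechanism mentioned above (shipped by Intel as VT-d) is the easiest of these to picture in code. The C sketch below is a deliberately simplified model, not Intel's actual page-table format: it assumes a hypothetical flat, per-device translation table (iommu_domain) that the VMM programs when it assigns memory to a device's VM, so that any DMA request either lands in that VM's own memory or is rejected.

```c
/* Simplified model of IOMMU-style DMA remapping. The flat table and
 * field names are hypothetical; this is not Intel's VT-d structure. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define IOVA_PAGES 16u
#define PAGE_SHIFT 12u

typedef struct {
    bool     present;          /* mapping installed by the VMM?  */
    uint64_t host_pfn;         /* host-physical frame it maps to */
} iommu_pte;

typedef struct {
    const char *owner_vm;      /* VM the device is assigned to   */
    iommu_pte   pt[IOVA_PAGES];/* per-device I/O address space   */
} iommu_domain;

/* The VMM installs a mapping when it gives the device's VM some memory. */
static void vmm_map(iommu_domain *d, uint64_t iova_pfn, uint64_t host_pfn)
{
    d->pt[iova_pfn].present  = true;
    d->pt[iova_pfn].host_pfn = host_pfn;
}

/* Every device DMA goes through this check: addresses the VMM never
 * mapped for this device -- e.g. another VM's memory -- are rejected. */
static bool dma_translate(const iommu_domain *d, uint64_t iova,
                          uint64_t *host_pa)
{
    uint64_t pfn = iova >> PAGE_SHIFT;
    if (pfn >= IOVA_PAGES || !d->pt[pfn].present) {
        printf("blocked DMA from %s's device to unmapped 0x%llx\n",
               d->owner_vm, (unsigned long long)iova);
        return false;          /* real hardware would raise a DMA fault */
    }
    *host_pa = (d->pt[pfn].host_pfn << PAGE_SHIFT) | (iova & 0xFFF);
    return true;
}

int main(void)
{
    iommu_domain vm1 = { .owner_vm = "VM1" };
    vmm_map(&vm1, 0, 0x80000); /* VM1's device may DMA to its own page 0 */

    uint64_t pa;
    dma_translate(&vm1, 0x0000, &pa); /* allowed: lands in VM1's memory      */
    dma_translate(&vm1, 0x5000, &pa); /* blocked: never mapped for this device */
    return 0;
}
```

Real VT-d hardware does the equivalent lookup with multi-level tables selected by the device's PCI requester ID and raises a fault instead of printing a message, but the security property this sketch illustrates is the same: a device owned by one VM cannot reach another VM's memory.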

Comments

  • GeorgeH - Wednesday, July 28, 2010 - link

    On my desktop machine at home? Nothing.

    On a server running in my basement serving multiple fully functional (including gaming) VMs to thin clients all over my house? Easily $200-$500, maybe more.
  • HMTK - Thursday, July 29, 2010 - link

    Well, I think you'd be a very rare individual indeed. Rare enough for software vendors not to be interested. If you don't need the video performance and 6 cores is enough, ESXi works great and is free. More cores will cost you - a lot - which is stupid of VMware IMO as 8 and 12 core CPUs have become common.
  • gorgamin - Sunday, August 1, 2010 - link

    You're right, this article does not really concern my interests, and yes, I'm but one guy with these needs... but having tinkered with VMware and reading the rumours, they are gearing up for GPU hardware access. This is also true for hardware sound devices.

    Think of the downtime that could be saved just in a media/game dev company... running all your dev applications in VMs, and you just auto-backup that virtual image every night. Any viruses/malware and you just load the backup virtual image. Done. Forget about the infection, forget about AV software that sits in your RAM. It's the next step, and it just makes sense.
  • alinhan - Saturday, July 31, 2010 - link

    When I first heard of virtualization, I imagined the following would be possible: to have a computer run multiple operating systems simultaneously, with no problems except for a small performance hit, of course. And as far as I know this is possible, but with a big exception regarding the GPU: you cannot run hardware accelerated games in a guest at close to the same performance you would get running them on the host (minus, let's say, 20%).

    Will this ever be possible?

    My ideal setup would be: a powerful machine with a Linux host, where I would do most of my work, and a Windows guest for games, since games still are Windows-centric on the PC.
