NVIDIA GTC 2010 Wrapup

by Ryan Smith on October 10, 2010 12:56 AM EST

Microsoft’s Next RDP Moves Rendering Back to the Server

As the architects & overseers of DirectX, we mention Microsoft and GPUs together quite often when talking about the consumer aspects of their technology. But the company was also at GTC to discuss how GPUs factor into their server plans. Their major session on this theme covered the next version of Remote Desktop, version 7.1, which is going to have significant ramifications for how the protocol works and, when it comes to GPUs, where the load is going to be.

For a quick bit of background, RDP was originally designed to be light on bandwidth usage – it accomplished this by having the server send rendering commands to the client rather than sending full frames. This worked very well in the 2D GDI days, when RDP’s underpinnings matched up rather well with how early video cards worked: a screen was composed of only a handful of high-level commands, and the CPU (and later, GDI video cards) took care of rendering all of this. However this design hasn’t kept pace with the times; simple video cards became complex GPUs capable of quickly and effectively doing things like desktop transparency, video playback, and of course rendering many complex 3D items at once.
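To put some rough numbers on that tradeoff, here is a back-of-envelope sketch in Python. The packed "fill rectangle" message is our own invention for illustration, not the actual RDP wire format, but it captures why command-based remoting was so cheap in the GDI era compared to shipping raw pixels:

```python
import struct

# A hypothetical GDI-style "fill rectangle" command: opcode, x, y,
# width, height, and an RGB color. Packed naively, it is a handful
# of bytes on the wire.
fill_rect = struct.pack("<BHHHHBBB", 0x01, 100, 100, 640, 480, 255, 255, 255)
print(f"draw command: {len(fill_rect)} bytes")  # 12 bytes

# The same screen update sent as raw pixels (640x480 region, 24-bit color):
raw_region = 640 * 480 * 3
print(f"raw pixels:   {raw_region:,} bytes")  # 921,600 bytes

# A full 1024x768 desktop frame at 24bpp, as a dumb framebuffer
# protocol would have to send it before any compression:
full_frame = 1024 * 768 * 3
print(f"full frame:   {full_frame:,} bytes")  # 2,359,296 bytes
```

A dozen bytes versus nearly a megabyte per update is the gap RDP was built to exploit – and it is exactly the gap that disappears once the desktop is composited and rendered on the GPU rather than described by a short list of commands.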

Ultimately, in the 14 years since Windows NT 4.0 came out, passing data at a high level has become more tedious, and it has required clients to have GPUs whose capabilities are similar to whatever the remote desktop session wishes to accomplish. Meanwhile the remote desktop concept (which is not in any way new) has seen a significant surge in popularity in recent years as x86 virtualization has matured, enticing companies to move to remotely hosted sessions running on virtualized hardware in order to improve reliability and ease of client management. This in turn is a push towards a true thin client desktop, where all the heavy lifting is done by the server and the client is a dumb box (or tablet, or phone, or…) that displays things.

Because of these changes in the market, Microsoft’s chosen path of progression has been through what amounts to regression. RDP is a “smart” protocol intended to go light on bandwidth, but it requires a smart client to do the rendering. The solution Microsoft is pursuing is to go dumb: instead of sending high-level commands, just stream the desktop outright. This nullifies RDP’s bandwidth savings, but it allows for truly thin clients that don’t need to be able to do anything other than decode a video stream, and in the process it handily solves the problems with trying to use high-level commands to remotely render videos and complex 3D imagery. To use an analogy, this would be much closer to how VNC (an open source remote desktop system) or OnLive (game streaming) work, as compared to RDP as it exists today.

The resulting desktop streaming software that Microsoft is developing is called RemoteFX, and it will be introduced as part of RDP 7.1 with Windows Server 2008 R2 SP1, which is due in early 2011. Microsoft has been developing it since 2008, when they acquired Calista Technologies, which originally developed the technology.

By now a few of you are wondering what this has to do with GPUs, and it ultimately comes down to one thing: where the GPU needs to be. RemoteFX goes hand-in-hand with fully embracing a GPU-rendered desktop for remote users, which means the servers hosting these remote sessions – which previously never needed a real GPU – will now need GPUs to render all of these desktops. It’s a migration of sorts, as the clients will no longer use GPUs (at least not for anything more than video decoding) but the servers will. And this is the angle for GTC attendees, as business users looking to take advantage of RemoteFX now need to consider equipping their server farms with GPUs in order to handle the technology.

On the backend of things Microsoft is requiring DirectX 10-class GPUs, while the rendered desktop will be using DirectX 9. This is a bit of an odd combination, but it comes as a result of how RemoteFX works. Fundamentally RemoteFX will be hosting a graphical context for each remote desktop, rather than outright virtualizing the GPU in the traditional sense. In turn the virtualized copy of Windows will be running a video driver that emulates a DX9 GPU, the frontend of which exists as the graphical context running on the host. This is distinctly different from truly virtualizing the GPU, as the virtualized instances of Windows do not have direct access to the GPU (ed: So no Folding@Home, guys), whereas a virtualized GPU would afford direct access while sharing the hardware.

Overall this ties in well with DirectX 10 and the associated driver changes, which were strongly focused on GPU context switching and scheduling so that multiple applications could more easily share a GPU. But for many of these same reasons (specifically: DX10 + WDDM is complex to emulate) the remote desktops themselves will be limited to DirectX 9 functionality, as that’s the capability Microsoft inherited when they purchased the software, and for the time being DX9 is much easier to accomplish while Microsoft works out the complexities of a virtualized DX10 for future versions of the software.

As for the streaming video codec, Microsoft is making it quite clear that they’re not using an existing codec such as H.264 for this. Rather it will be based on a new codec optimized for very low latency encoding/decoding and for high quality text reproduction, with the goal being lossless quality when possible. Furthermore they want to move all of this into dedicated hardware to minimize power usage: clients would have this codec added to the capabilities of their existing video decode blocks, and servers would be given dedicated ASICs for encoding. The benefits of using dedicated hardware are clear, as we’ve seen with Intel’s Sandy Bridge Media Engine – which achieves GPU-like encode performance in a 3mm² piece of silicon – and Microsoft believes they have the clout to influence hardware designers into including their codec in their products’ decode blocks. The decoder itself is royalty-free to sweeten the deal, and will likely be Microsoft’s strongest leverage for getting it supported in mobile products such as tablets and cell phones.
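The bandwidth math here is worth sketching out. The compression ratios below are our own rough estimates, not Microsoft’s figures, but they show why desktop streaming sits comfortably on a LAN and struggles over typical 2010-era broadband:

```python
def stream_mbps(width, height, fps, bits_per_pixel, compression_ratio):
    """Bandwidth in megabits/sec for a compressed video stream."""
    raw_bps = width * height * bits_per_pixel * fps
    return raw_bps / compression_ratio / 1e6

# A 1080p desktop at 24bpp and 30fps is ~1.5Gbps uncompressed...
print(f"uncompressed:    {stream_mbps(1920, 1080, 30, 24, 1):.0f} Mbps")

# ...so even a modest (assumed) 50:1 ratio, plausible for a largely
# static desktop, still needs ~30Mbps: fine on a 100Mb LAN, hopeless
# on a 3-4Mbps DSL line.
print(f"50:1 compressed: {stream_mbps(1920, 1080, 30, 24, 50):.0f} Mbps")
```

This is also why a purpose-built low-latency codec matters: a general-purpose codec like H.264 buys its efficiency with motion compensation and deep frame buffering, which adds exactly the latency an interactive desktop session cannot tolerate.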

Finally, with Windows Server 2008 R2 SP1 shipping next year, Microsoft showed off a demo of RemoteFX in action. Focusing on where RDP fails, they played an HD video and then an IE9 GPU test over an RDP session, which in turn was run over a 100Mb network to a host a few feet away; in both cases RDP wasn’t capable of keeping up, and rendered things in terms of seconds per frame. Running the same demo over RemoteFX, the session was able to keep up, although we hesitate to call it perfectly smooth.

Ultimately we can’t help but notice that Microsoft has come full circle on the role of clients and servers. Microsoft was founded at the start of the microcomputer revolution, when computing was just moving away from mainframes and proto-servers, and now with RemoteFX they’re ensuring that all of that can be pushed back to the server. Over the years the balance between clients and servers has constantly swung back and forth, and it looks like it’s swinging back to the server side in the near future (after which it will no doubt swing back the other way). For NVIDIA and AMD this means the companies will have a new focus on server infrastructure, as the importance of a client GPU gives way to the importance of a server GPU.


19 Comments


  • dtdw - Sunday, October 10, 2010 - link

    we had a chance to Adobe, Microsoft, Cyberlink, and others about where they see GPU computing going in the next couple of years.

    shouldnt you add 'the' before adobe ?

    and adding 'is' after computing ?
  • tipoo - Sunday, October 10, 2010 - link

    " we had a chance to Adobe, Microsoft, Cyberlink, and others about where they see GPU computing going "

    Great article, but I think you accidentally the whole sentence :-P
  • Deanjo - Sunday, October 10, 2010 - link

    "While NVIDIA has VDPAU and also has parties like S3 use it, AMD and Intel are backing the rival Video Acceleration API (VA API)."

    Ummm wrong, AMD is using XvBA for its video acceleration API. VAAPI provides a wrapper library to XvBA much like there is a VAAPI wrapper for VDPAU. Also VDPAU is not proprietary, it is part of Freedesktop and the open source library package contains a wrapper library and a debugging library allowing other manufacturers to implement VDPAU support into their device drivers. In short every device manufacturer out there is free to include VDPAU support and it is up to the driver developer to add that support to a free and truly open API.
  • Ryan Smith - Sunday, October 10, 2010 - link

    AMD is using XvBA, but it's mostly an issue of semantics. They already had the XvBA backend written, so they merely wrote a shim for VA API to get it in there. In practice XvBA appears to be dead, and developers should use VA API and let AMD and the others work on the backend. So in that sense, AMD are backing VA API.

    As for NVIDIA, proprietary or not doesn't really come in to play. NVIDIA is not going to give up VDPAU (or write a VA API shim) and AMD/Intel don't want to settle on using VDPAU. That's the stalemate that's been going on for a couple of years now, and it doesn't look like there's any incentive on either side to come together.

    It's software developers that lose out; they're the ones that have to write in support for both APIs in their products.
  • electroju - Monday, October 11, 2010 - link

    Deanjo, that is incorrect. VA API is not a wrapper. It is the main API from freedesktop.org. It was created by Intel, unfortunately, but they helped extend the stalled XvMC project into a more flexible API. VDPAU and XvBA came later to provide their own ways of doing about the same thing. They also include backward compatibility with VA API. VDPAU is not open source. It just provides structs to be able to use VDPAU, which means VDPAU cannot be changed by the open source community to implement new features.
  • AmdInside - Sunday, October 10, 2010 - link

    Good coverage. Always good to read new info. Often looking at graphics card reviews can get boring as I tend to sometimes just glance at the graphs and that is it. I sure wish Adobe would use GPU more for photography software. Lightroom is one software that works alright on desktops but too slow for my taste on laptops.
  • AnnonymousCoward - Monday, October 11, 2010 - link

    Holodeck? Cmon. It's a 3D display. You can't create a couch and then lay on it.
  • Guspaz - Tuesday, October 12, 2010 - link

    I'm sort of disappointed with RemoteFX. It sounds like it won't be usable remotely by consumers or small businesses who are on broadband-class connections; with these types of connections, you can probably count on half a megabit of throughput, and that's probably not enough to be streaming full-screen MJPEG (or whatever they end up using) over the net.

    So, sure, works great over a LAN, but as soon as you try to, say, telecommute to your office PC via a VPN, that's not going to fly.

    Even if you're working for a company with a fat pipe, many consumers (around here, at least) are on DSL lines that will get them 3 or 4 megabits per second; that might be enough for lossy motion-compensated compression like h.264, but is that enough for whatever Microsoft is planning? You lose a lot of efficiency by throwing away iframes and mocomp.
  • ABR - Tuesday, October 19, 2010 - link

    Yeah, it also makes no sense from an economic perspective. Now you have to buy a farm of GPUs to go with your servers? And the video capability now and soon being built in to every Intel CPU just goes for nothing? More great ideas from Microsoft.
