NVIDIA GTC 2010 Wrapup

by Ryan Smith on October 10, 2010 12:56 AM EST

Microsoft’s Next RDP Moves Rendering Back to the Server

As the architects & overseers of DirectX, we mention Microsoft and GPUs together quite often when talking about the consumer aspects of their technology. But the company was also at GTC to discuss how GPUs factor into their server plans. The centerpiece of that discussion was the next version of Remote Desktop, version 7.1, which is going to have significant ramifications for how the protocol works and, when it comes to GPUs, where the load is going to sit.

For a quick bit of background, RDP was originally designed to be light on bandwidth usage – it accomplished this by having the server send rendering commands to the client rather than sending full frames. This worked very well in the 2D GDI days, when RDP’s underpinnings matched up rather well with how early video cards worked: a screen was really only composed of a handful of high-level commands, and the CPU (and later, GDI-accelerating video cards) took care of rendering all of this. However this design hasn’t kept pace with the times; simple video cards became complex GPUs capable of quickly and effectively handling things like desktop transparency, video playback, and of course rendering many complex 3D items at once.

Ultimately, in the 14 years since Windows NT 4.0 came out, passing data at this high level has become more tedious, and it has required that clients have GPUs whose capabilities are similar to whatever the remote desktop session wishes to accomplish. Meanwhile the remote desktop concept (which is not in any way new) has seen a significant surge in popularity in recent years as x86 virtualization has matured, enticing companies to move to remotely hosted sessions running on virtualized hardware in order to improve reliability and ease client management. This in turn is a push towards a true thin client desktop, where all the heavy lifting is done by the server and the client is a dumb box (or tablet, or phone, or…) that displays things.

Because of these changes in the market, Microsoft’s chosen path of progression has been through what amounts to regression. RDP is a “smart” protocol intended to go light on bandwidth, but it requires a smart client to do the rendering. The solution Microsoft is pursuing is to go dumb: instead of sending high-level commands, just stream the desktop outright. This nullifies RDP’s bandwidth savings, but it allows for truly thin clients that don’t need to do anything other than decode a video stream, and in the process it handily solves the problems with trying to use high-level commands to remotely render videos and complex 3D imagery. To use an analogy, this would be much closer to how VNC (an open source remote desktop system) or OnLive (a game streaming service) work, as compared to RDP as it exists today.
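
To make the contrast concrete, here is a rough, purely illustrative sketch of the two approaches: the classic command-based path, where the server ships a few high-level drawing commands and trusts the client to render them, and the streaming path, where the server renders everything and ships finished pixels. Every class, function, and number below is invented for the example; nothing reflects the actual RDP wire format.

```python
# Toy contrast between "smart" command-based remoting (classic RDP/GDI style)
# and "dumb" framebuffer streaming (the RemoteFX/VNC-style approach described
# above). All names and figures are hypothetical.

class CommandRemoting:
    """Server sends high-level drawing commands; the client must know how to render them."""
    def frame(self, scene):
        # A whole desktop frame may compress to a handful of small commands...
        return [("fill_rect", 0, 0, 1920, 1080, "grey"),
                ("draw_text", 100, 100, "Hello"),
                # ...but 3D and video have no compact command representation,
                # so they end up shipping bitmaps anyway.
                ]

class FramebufferStreaming:
    """Server renders everything itself and streams the finished pixels."""
    def frame(self, scene):
        rendered = render_on_server_gpu(scene)   # hypothetical helper
        return encode_video_frame(rendered)       # hypothetical codec stand-in

def render_on_server_gpu(scene):
    # Stand-in for the server-side GPU doing all of the compositing/3D work.
    return bytes(1920 * 1080 * 4)                 # a dummy 1080p RGBA frame

def encode_video_frame(pixels):
    # Stand-in for the low-latency codec; real bandwidth depends entirely on it.
    return pixels[: len(pixels) // 50]            # pretend ~50:1 compression

if __name__ == "__main__":
    cmds = CommandRemoting().frame(scene=None)
    stream = FramebufferStreaming().frame(scene=None)
    print(f"command path: {len(cmds)} drawing commands")
    print(f"streaming path: {len(stream)} bytes per frame")
```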

The resulting desktop streaming software that Microsoft is developing is called RemoteFX, and it will be introduced as part of RDP 7.1 with Windows Server 2008 R2 SP1, which is due in early 2011. Microsoft has been developing it since 2008, when they acquired Calista Technologies, the company that originally developed the technology.

By now a few of you are wondering what this has to do with GPUs, and it ultimately comes down to one thing: where the GPU needs to be. RemoteFX goes hand-in-hand with fully embracing a GPU-rendered desktop for remote users, which means the servers hosting these remote sessions – which previously never needed a real GPU – will now need GPUs to render all of these desktops. It’s a migration of sorts, as the clients will no longer use GPUs (at least not for anything more than video decoding) but the servers will. And this is the angle for GTC attendees, as business users looking to take advantage of RemoteFX now need to consider equipping their server farms with GPUs in order to handle the technology.

On the backend Microsoft is requiring DirectX 10-class GPUs, while the rendered desktop will be using DirectX 9. This is a bit of an odd combination, but it comes as a result of how RemoteFX works. Fundamentally RemoteFX hosts a graphical context for each remote desktop, rather than outright virtualizing the GPU in the traditional sense. In turn the virtualized copy of Windows runs a video driver that emulates a DX9 GPU, the frontend of which exists as the graphical context running on the host. This is distinctly different from truly virtualizing the GPU, as the virtualized instances of Windows do not have direct access to the GPU (ed: So no Folding@Home, guys), whereas a virtualized GPU would afford direct access while sharing the hardware.
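
As a rough mental model of that arrangement, the sketch below shows a host-owned GPU exposing one graphics context per VM, with each guest talking only to an emulated DX9-level device rather than the physical hardware. The class and method names are hypothetical and do not describe Microsoft's actual driver architecture.

```python
# Conceptual sketch (names hypothetical): RemoteFX-style host-owned contexts
# versus true GPU passthrough, where a guest would touch the hardware directly.

class HostGpu:
    """The physical DX10-class GPU; only the host touches it."""
    def __init__(self):
        self.contexts = {}

    def create_context(self, vm_id):
        # One scheduled graphics context per remote desktop session.
        self.contexts[vm_id] = []

    def execute(self, vm_id, commands):
        # The host schedules each VM's work onto the shared GPU.
        self.contexts[vm_id].extend(commands)

class EmulatedDx9Driver:
    """What the guest sees: a DX9-level device, not the real hardware."""
    def __init__(self, host_gpu, vm_id):
        self.host = host_gpu
        self.vm_id = vm_id
        host_gpu.create_context(vm_id)

    def draw(self, call):
        # Guest-issued draw calls are forwarded to the host-side context;
        # the guest never gets raw access to the GPU (hence no Folding@Home).
        self.host.execute(self.vm_id, [call])

if __name__ == "__main__":
    gpu = HostGpu()
    desktop_a = EmulatedDx9Driver(gpu, "vm-a")
    desktop_b = EmulatedDx9Driver(gpu, "vm-b")
    desktop_a.draw("DrawPrimitive(triangles)")
    desktop_b.draw("Clear(render_target)")
    print({vm: len(cmds) for vm, cmds in gpu.contexts.items()})
```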

Overall this ties in well with DirectX 10 and the associated driver changes, which were strongly focused on GPU context switching and scheduling so that multiple applications could more easily share a GPU. But for many of these same reasons (specifically: DX10 + WDDM is complex to emulate) the remote desktops themselves will be limited to DirectX 9 functionality, as that’s the capability Microsoft inherited when they purchased the software, and for the time being DX9 is much easier to accomplish while Microsoft works out the complexities of a virtualized DX10 for future versions of the software.

As for the streaming video codec, Microsoft is making it quite clear that they’re not using an existing codec such as H.264 for this. Rather it will be based on a new codec optimized for very low latency encoding/decoding and for high quality text reproduction, with the goal being lossless quality when possible. Furthermore they want to move all of this into dedicated hardware to minimize power usage: clients would have this codec added to the capabilities of their existing video decode blocks, and servers would be given dedicated ASICs for encoding. The benefits of using dedicated hardware are clear, as we’ve seen with Intel’s Sandy Bridge Media Engine – which achieves GPU-like encode performance in a 3mm² piece of silicon – and Microsoft believes they have the clout to influence hardware designers into including their codec in their products’ decode blocks. The decoder itself is royalty-free to sweeten the deal, and will likely be Microsoft’s strongest leverage for getting support for it into mobile products such as tablets and cell phones.
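
Microsoft hasn't published details of the codec, but the goals it describes – very low latency and lossless text where possible – are characteristic of desktop-oriented codecs that encode only the screen regions that changed, rather than spending their latency budget on the motion search a general-purpose codec like H.264 performs. The sketch below is a toy illustration of that idea only; the tile size, the zlib stand-in for compression, and all function names are assumptions made for the example.

```python
# Toy dirty-tile screen encoder: compress only the regions of the desktop that
# changed since the last frame, losslessly. Purely illustrative of the general
# technique, not of Microsoft's (unpublished) RemoteFX codec.

import zlib

TILE = 64  # hypothetical tile edge in pixels

def dirty_tiles(prev, curr, width, height, bpp=4):
    """Yield (x, y, raw_bytes) for each tile that changed since the previous frame."""
    stride = width * bpp
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            tile_prev = bytearray()
            tile_curr = bytearray()
            for row in range(ty, min(ty + TILE, height)):
                off = row * stride + tx * bpp
                end = off + min(TILE, width - tx) * bpp
                tile_prev += prev[off:end]
                tile_curr += curr[off:end]
            if tile_prev != tile_curr:
                yield tx, ty, bytes(tile_curr)

def encode_frame(prev, curr, width, height):
    """Losslessly compress only the changed tiles - no inter-frame motion search."""
    return [(x, y, zlib.compress(data))
            for x, y, data in dirty_tiles(prev, curr, width, height)]

if __name__ == "__main__":
    w, h = 256, 256
    frame0 = bytearray(w * h * 4)
    frame1 = bytearray(frame0)
    frame1[0:256] = b"\xff" * 256          # simulate a small on-screen update
    update = encode_frame(bytes(frame0), bytes(frame1), w, h)
    print(f"{len(update)} dirty tile(s), "
          f"{sum(len(d) for _, _, d in update)} compressed bytes")
```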

Finally, with Windows Server 2008 R2 SP1 shipping next year, Microsoft showed off a demo of RemoteFX in action. With a focus on where RDP fails, they played an HD video and then an IE9 GPU test over an RDP session, which in turn was being run over a 100Mb network to a host a few feet away; in both cases RDP wasn’t capable of keeping up, and rendered things in terms of seconds per frame. Running the same demo over RemoteFX, it was able to keep up, although we hesitate to call it perfectly smooth.

Ultimately we can’t help but notice that Microsoft has come full circle on the role of clients and servers. Microsoft was founded at the start of the microcomputer revolution, when computing was just moving away from mainframes and proto-servers, and now with RemoteFX they’re ensuring that all of that work can be pushed back to the server. Over the years the roles of clients and servers have constantly swung between the two, and it looks like the pendulum is swinging back to the server side in the near future (after which it will no doubt swing back the other way). For NVIDIA and AMD this means a new focus on server infrastructure, as the importance of a client GPU gives way to the importance of a server GPU.


19 Comments


  • adonn78 - Sunday, October 10, 2010 - link

    This is pretty boring stuff. I mean the projectors ont eh curved screens were cool but what about gaming? anything about Nvidia's next gen? they are really falling far behind and are not really competing when it comes to price. I for one cannot wait for the debut of AMD's 6000 series. CUDA and PhysX are stupid proprietary BS.
  • iwodo - Sunday, October 10, 2010 - link

    What? This is GTC, it is all about the Workstation and HPC side of things. Gaming is not the focus of this conference.
  • bumble12 - Sunday, October 10, 2010 - link

    Sounds like you don't understand what CUDA is, by a long mile.
  • B3an - Sunday, October 10, 2010 - link

    "teh pr0ject0rz are kool but i dun understand anyting else lolz"

    Stupid kid.
  • iwodo - Sunday, October 10, 2010 - link

    I was about to post that rendering on the server is fundamentally flawed, but the more I think about it the more it makes sense.

    However defining a codec takes months, actually refining and implementing a codec takes YEARS.

    I wonder what the client would consist of. Do we need a CPU to do any work at all? Or would EVERYTHING be done on the server other than booting up and acquiring an IP?

    If that is the case may be an ARM A9 SoC would be enough to do the job.
  • iwodo - Sunday, October 10, 2010 - link

    Just started digging around. LG has a Network Monitor that lets you use RemoteFX with just an Ethernet cable!

    http://networkmonitor.lge.com/us/index.jsp

    And x264 can already encode at sub 10ms latency!. I can imagine IT management would be like trillion times easier with centrally managed VM like RemoteFX. No longer upgrade every clients computer. Stuff a few HDSL Revo Drive and let everyone enjoy the benefit of SSD.

    I have question of how it will scale, with over 500 machines you have effectively used up all your bandwidth...
  • Per Hansson - Sunday, October 10, 2010 - link

    I've been looking forward to this technology since I heard about it some time ago.
    Will be interesting to test how well it works with the CAD/CAM software I use, most of which is proprietary machine builder specific software...
    There was no mention of OpenGL in this article but from what I've read that is what it is supposed to support (OpenGL rendering offload)
    Atleast that's what like 100% of the CAD/CAM software out there use so it better be if MS wants it to be successful :)
  • Ryan Smith - Sunday, October 10, 2010 - link

    Someone asked about OpenGL during the presentation and I'm kicking myself for not writing down the answer, but I seem to recall that OpenGL would not be supported. Don't hold me to that, though.
  • Per Hansson - Monday, October 11, 2010 - link

    Well I hope OpenGL will be supported, otherwise this is pretty much a dead tech as far as enterprise industries are concerned.

    This article has a reply by the author Brian Madden in the comments regarding support for OpenGL; http://www.brianmadden.com/blogs/brianmadden/archi...

    "For support for apps that require OpenGL, they're supporting apps that use OpenGL v1.4 and below to work in the VM, but they don't expect that apps that use a higher version of OpenGL will work (unless of course they have a DirectX or CPU fallback mode)."
  • Sebec - Sunday, October 10, 2010 - link

    Page 5 -"... and the two companies are current the titans of GPU computing in consumer applications."

    Current the titans?

    "Tom believes that ultimately the company will ultimately end up using..."
