NVIDIA GTC 2010 Wrapup

by Ryan Smith on October 10, 2010 12:56 AM EST

Memory: The Compatriot of the GPU

While GTC was a show built around NVIDIA’s GPUs, it’s more than just a GPU that makes up a video card. Both Samsung and Hynix had a presence at the show, the two of them being the principal suppliers of GDDR5 at the moment (while there are other manufacturers, every single card we have uses Samsung or Hynix memory). Both companies recently completed transitions to new 40nm manufacturing nodes, and both were at the show to promote the denser GDDR5 memory they are now bringing up.

First off was Samsung, who held a green-themed session about their low power GDDR5 and LPDDR2 products, primarily geared towards ODMs and OEMs responsible for designing and building finished products. The adoption rate on both of these product lines has been slow so far.

Starting with GDDR5: the GDDR5 specification normally calls for a 1.5v operating voltage, but Samsung also offers a line of “Green Power” GDDR5 which operates at 1.35v. Going by Samsung’s own numbers, dropping from 1.5v to 1.35v reduces GDDR5 power consumption by 33%. The catch, of course, is that Green Power GDDR5 isn’t capable of running at the same speeds as full power GDDR5, with speeds topping out at around 4Gbps. This makes it unsuitable for cards such as the Radeon 5800 and 5700 series, which use 5Gbps GDDR5, but it would be a good fit for AMD’s lower-end cards and all of NVIDIA’s cards, the latter of which never exceed 4Gbps. There are tradeoffs to consider, though; we don’t know what Samsung is doing with respect to bank grouping on the Green Power parts to hit 4Gbps, and bank grouping is the enemy of low latencies. And of course there’s price, as Samsung charges a premium for what we believe are basically binned GDDR5 dies.
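As a quick back-of-envelope illustration (our own math, not Samsung’s methodology), dynamic power scales roughly with the square of voltage, which on its own accounts for a bit under 20% of savings; Samsung’s quoted 33% presumably also reflects the move to 40nm and other design changes:

```python
# Back-of-envelope check (our own illustration, not Samsung's methodology):
# dynamic CMOS power scales roughly with V^2 at a fixed clock.

v_standard = 1.50   # volts, standard GDDR5
v_green    = 1.35   # volts, Samsung "Green Power" GDDR5

scaling = (v_green / v_standard) ** 2
print(f"Pure V^2 scaling: {(1 - scaling) * 100:.0f}% lower power")
# Prints roughly 19%; Samsung's quoted 33% therefore implies savings
# beyond voltage alone (the 40nm process, binning, design changes).
```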

The other product Samsung quickly discussed was LPDDR2. We won’t dwell on this too much, but Samsung is very interested in moving the industry away from LPDDR1 (and, even worse, DDR3L), as LPDDR2 consumes less power than either. Samsung believes the sweet spot for memory pricing and performance will shift to LPDDR2 next year.

Finally, we had a chance to talk to Samsung about the supply of their recently announced 2Gbit GDDR5. 2Gbit GDDR5 will allow cards using a traditional 8 memory chip configuration to move from 1GB to 2GB of memory, or for existing cards such as the Radeon 5870 2GB, to move from 16 chips to 8 chips and save a few dozen watts in the process. The big question right now with regards to 2Gbit GDDR5 is supply, as tight supplies of 1Gbit GDDR5 were fingered as one of the causes of the limited supply and higher prices of Radeon 5800 series cards last year. Samsung tells us that their 2Gbit GDDR5 is in mass production and shipping; however, supply will be constrained through the end of the year.
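The chip-count math itself is straightforward; here is a quick sketch (our own illustration) of why 2Gbit parts halve the chip count on a 256-bit card like the 5870:

```python
# Capacity arithmetic (our own illustration) for a 256-bit card such as
# the Radeon 5870, built from standard 32-bit-wide GDDR5 devices.

bus_width_bits  = 256
chip_width_bits = 32                                  # GDDR5 device interface
chips_per_rank  = bus_width_bits // chip_width_bits   # 8 chips

for density_gbit in (1, 2):
    capacity_gb = chips_per_rank * density_gbit / 8   # gigabits -> gigabytes
    print(f"{density_gbit}Gbit x {chips_per_rank} chips = {capacity_gb:.0f}GB")

# 1Gbit x 8 chips = 1GB, 2Gbit x 8 chips = 2GB. Reaching 2GB with 1Gbit
# parts instead takes 16 chips run two to a channel (clamshell mode).
```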

Moving on, as we stated earlier Hynix was also at GTC. Unlike Samsung they weren’t doing a presentation, but they did have a small staffed booth in the exhibition hall touting their products. Like Samsung’s, their 2Gbit GDDR5 is in full production and officially available right now. Currently they’re offering 2Gbit and 1Gbit GDDR5 at speeds up to 6Gbps (albeit at 1.6v, 0.1v over spec), which should give you an idea of where future video cards may go. Like Samsung, it sounds like they have as much demand as they can handle for their 2Gbit parts at the moment, so supply of high speed 2Gbit parts may be tight throughout the industry for the rest of the year.
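To put 6Gbps parts in perspective, a rough peak-bandwidth calculation (our own sketch, not a Hynix figure) shows what they would mean on common bus widths:

```python
# Rough peak-bandwidth arithmetic (our own illustration, not a Hynix figure).
# GDDR5 data rates are quoted per pin, so peak bandwidth = rate * bus width / 8.

def peak_bandwidth_gbs(data_rate_gbps, bus_width_bits):
    """Peak memory bandwidth in GB/s for a given per-pin data rate."""
    return data_rate_gbps * bus_width_bits / 8

for bus in (128, 256, 384):
    print(f"{bus}-bit bus @ 6Gbps: {peak_bandwidth_gbs(6, bus):.0f} GB/s")

# 96, 192, and 288 GB/s respectively; for comparison, the Radeon 5870's
# 4.8Gbps memory on a 256-bit bus works out to about 154 GB/s today.
```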

There was one other thing on the brochure they were handing out that we’d like to touch on: products reaching End of Life. In spite of the fact that the AMD 5xxx series and NVIDIA 4xx series cards all use GDDR5 or plain DDR3, GDDR3 is still alive and well at Hynix. While they’re discontinuing their 4th gen GDDR3 products this month, their 5th gen products live on to fill video game consoles and any remaining production of last-generation video cards. What is being EOL’d, however, is plain DDR2, which meets its fate this month. DDR3 prices have already dropped below DDR2 prices, and it looks like DDR2 is now entering the next phase of its life, where prices will continue to creep up as it’s consumed by upgrades for older systems.

Scaleform on Making a UI for 3D Vision

One of the sessions we were particularly interested in seeing ahead of time was one by Scaleform, a middleware provider that specializes in UIs. Their session was about what they’ve learned in making UIs for 3D games and applications, a particularly interesting subject given the recent push by NVIDIA and the consumer electronics industry for 3D. Producing 3D material is still more of a dark art than a science, and for the many of you who have used 3D Vision with a game, it’s clear that there are some kinks left to work out.

The problem right now is that traditional design choices for UIs are built around 2D, which leads designers to make the implicit assumption that the user can see the UI just as well as the game/application, since everything is on the same focal plane. 3D and the creation of multiple focal planes usually results in the UI being on one focal plane, and often quite far away from the action at that, which is why so many games (even those labeled as 3D Vision Ready) are cumbersome to play in 3D today. As a result a good 3D UI needs to take this into account, which means breaking design rules and making new ones.

Scaleform’s presentation focused on 3D UIs for both applications and gaming. For applications many of their suggestions were straightforward, but they are elements that require a conscious effort on the developer’s part, such as not putting pop-out elements at the edge of the screen where they can get cut off, and rendering the cursor at the same depth as the item it’s hovering over. They also highlighted other pitfalls that don’t have an immediate solution right now, such as maintaining the high quality of fonts when scaling them in a 3D environment.

As for gaming, their suggestions were often those we’ve heard from NVIDIA in the past. The biggest suggestion (and biggest nuisance in gaming right now) had to do with where to put the HUD: painting a HUD at screen depth doesn’t work; it needs to be rendered at depth with the objects the user is looking at. Barring that, it should be tilted inwards to lead the eye rather than presenting an abrupt change. They mentioned Crysis 2 as an example of this, as it has implemented its UI in this manner. Unfortunately for 2D gamers, it also looks completely ridiculous on a 2D screen, so just as 2D UIs aren’t great for 3D, 3D UIs aren’t great for 2D.


Crysis 2's UI as an example of a good 3D UI

Their other gaming suggestions focused on how the user needs to interact with the world. The crosshair is a dead concept since it’s 2D and rendered at screen depth. Instead, taking a page from the real world, laser sights (i.e. the red dot) should be used. Or for something that isn’t first-person, like an RTS, selection marquees need to map to 3D objects: whether an item is selected should be based on how the user sees it rather than on its absolute position.
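To make the “render at depth” idea concrete, here is a simplified sketch (our own model, not Scaleform’s code) of how a reticle or cursor can be given the same stereo parallax as the object under it, so it fuses at the object’s distance instead of floating at screen depth:

```python
# Simplified off-axis stereo sketch (our own illustration, not Scaleform's API).
# A UI element drawn with this parallax appears at the depth of the 3D object
# it sits on, rather than at screen depth.

def reticle_positions(x, y, hit_depth, eye_sep=0.065, screen_dist=0.7):
    """Per-eye positions for a reticle placed at hit_depth.

    x, y        -- cursor position on the screen plane (metres)
    hit_depth   -- distance to the object under the cursor (metres)
    eye_sep     -- interocular distance (metres)
    screen_dist -- viewer-to-screen (convergence) distance (metres)
    """
    # Parallax is zero at screen depth, positive behind it, negative in front.
    parallax = eye_sep * (hit_depth - screen_dist) / hit_depth
    return (x - parallax / 2, y), (x + parallax / 2, y)

# An object twice as far away as the screen: the two views are separated by
# about 3cm of positive parallax, so the reticle fuses at the object's depth.
print(reticle_positions(0.0, 0.0, hit_depth=1.4))
```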


19 Comments


  • dtdw - Sunday, October 10, 2010 - link

    we had a chance to Adobe, Microsoft, Cyberlink, and others about where they see GPU computing going in the next couple of years.

    shouldnt you add 'the' before adobe ?

    and adding 'is' after computing ?
  • tipoo - Sunday, October 10, 2010 - link

    " we had a chance to Adobe, Microsoft, Cyberlink, and others about where they see GPU computing going "

    Great article, but I think you accidentally the whole sentence :-P
  • Deanjo - Sunday, October 10, 2010 - link

    "While NVIDIA has VDPAU and also has parties like S3 use it, AMD and Intel are backing the rival Video Acceleration API (VA API)."

    Ummm wrong, AMD is using XvBA for it's video acceleration API. VAAPI provides a wrapper library to XvBA much like there is VAAPI wrapper for VDPAU. Also VDPAU is not proprietary, it is part of Freedesktop and the open source library package contains a wrapper library and a debugging library allowing other manufacturers to implement VDPAU support into their device drivers. In short every device manufacturer out there is free to include VDPAU support and it is up to the driver developer to add that support to a free and truly open API.
  • Ryan Smith - Sunday, October 10, 2010 - link

    AMD is using XvBA, but it's mostly an issue of semantics. They already had the XvBA backend written, so they merely wrote a shim for VA API to get it in there. In practice XvBA appears to be dead, and developers should use VA API and let AMD and the others work on the backend. So in that sense, AMD are backing VA API.

    As for NVIDIA, proprietary or not doesn't really come into play. NVIDIA is not going to give up VDPAU (or write a VA API shim) and AMD/Intel don't want to settle on using VDPAU. That's the stalemate that's been going on for a couple of years now, and it doesn't look like there's any incentive on either side to come together.

    It's software developers that lose out; they're the ones that have to write in support for both APIs in their products.
  • electroju - Monday, October 11, 2010 - link

    Deanjo, that is incorrect. VA API is not a wrapper. It is the main API from freedesktop.org. It was created by Intel unfortunately, but they helped extend the stalled XvMC project into a more flexible API. VDPAU and XvBA came later to provide their own way to do about the same thing. They also include backward compatibility to VA API. VDPAU is not open source. It just provides structs to be able to use VDPAU, so this means VDPAU can not be changed by the open source community to implement new features.
  • AmdInside - Sunday, October 10, 2010 - link

    Good coverage. Always good to read new info. Often looking at graphics card reviews can get boring as I tend to sometimes just glance at the graphs and that is it. I sure wish Adobe would use GPU more for photography software. Lightroom is one software that works alright on desktops but too slow for my taste on laptops.
  • AnnonymousCoward - Monday, October 11, 2010 - link

    Holodeck? Cmon. It's a 3D display. You can't create a couch and then lay on it.
  • Guspaz - Tuesday, October 12, 2010 - link

    I'm sort of disappointed with RemoteFX. It sounds like it won't be usable remotely by consumers or small businesses who are on broadband-class connections; with these types of connections, you can probably count on half a megabit of throughput, and that's probably not enough to be streaming full-screen MJPEG (or whatever they end up using) over the net.

    So, sure, works great over a LAN, but as soon as you try to, say, telecommute to your office PC via a VPN, that's not going to fly.

    Even if you're working for a company with a fat pipe, many consumers (around here, at least) are on DSL lines that will get them 3 or 4 megabits per second; that might be enough for lossy motion-compensated compression like h.264, but is that enough for whatever Microsoft is planning? You lose a lot of efficiency by throwing away iframes and mocomp.
  • ABR - Tuesday, October 19, 2010 - link

    Yeah, it also makes no sense from an economic perspective. Now you have to buy a farm of GPUs to go with your servers? And the video capability now and soon being built in to every Intel CPU just goes for nothing? More great ideas from Microsoft.
