Word comes from NVIDIA this afternoon that they are rolling out a beta update to their GRID game streaming service. Starting today, the service is adding 1080p60 streaming to its existing 720p60 streaming option, with the option initially going out to members of the SHIELD HUB beta group.

Today’s announcement from NVIDIA comes as the company is ramping up for the launch of the SHIELD Android TV and its accompanying commercial GRID service. The new SHIELD console is scheduled to ship this month, while the commercialization of the GRID service is expected to take place in June, with the current free GRID service for existing SHIELD portable/tablet users listed as running through June 30th. Given NVIDIA’s ambitions to begin charging for the service, it was only a matter of time until the company began offering a 1080p option, especially as the SHIELD Android TV will be hooked up to much larger screens where the limits of 720p would be more easily noticed.

In any case, from a technical perspective NVIDIA has long had the tools necessary to support 1080p streaming – NVIDIA’s video cards already support 1080p60 streaming to SHIELD devices via GameStream – so the big news here is that NVIDIA has finally flipped the switch on their servers and clients. Though given that 1080p is 2.25x as many pixels as 720p, I’m curious whether part of this process has involved NVIDIA adding some faster GRID K520 cards (GK104) to their server clusters, as the lower-end GRID K340 cards (GK107) don’t offer quite the throughput or VRAM traditionally needed for 1080p at 60fps.
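
To put the resolution jump in concrete terms, here is the quick arithmetic behind that 2.25x figure and what it implies for per-second pixel throughput at 60fps (my own back-of-the-envelope numbers, not anything from NVIDIA):

```python
# Back-of-the-envelope arithmetic for the 720p60 -> 1080p60 jump (illustrative only).
RES_720P = (1280, 720)
RES_1080P = (1920, 1080)
FPS = 60

pixels_720p = RES_720P[0] * RES_720P[1]      # 921,600 pixels per frame
pixels_1080p = RES_1080P[0] * RES_1080P[1]   # 2,073,600 pixels per frame

ratio = pixels_1080p / pixels_720p           # 2.25x as many pixels per frame
throughput_720p60 = pixels_720p * FPS        # ~55.3 million pixels per second
throughput_1080p60 = pixels_1080p * FPS      # ~124.4 million pixels per second

print(f"1080p/720p pixel ratio: {ratio:.2f}x")
print(f"720p60 throughput:  {throughput_720p60 / 1e6:.1f} Mpixels/s")
print(f"1080p60 throughput: {throughput_1080p60 / 1e6:.1f} Mpixels/s")
```

Every frame now carries 2.25x the pixels within the same ~16.7ms budget, which is why the question of K340 vs. K520 throughput matters.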

But the truly difficult part of this rollout is on the bandwidth side. With SHIELD 720p streaming already requiring 5-10Mbps of bandwidth and NVIDIA opting for quality over efficiency on the 1080p service, the client bandwidth requirements for the 1080p service are enormous. 1080p GRID will require a 30Mbps connection, with NVIDIA recommending users have a 50Mbps connection to keep other devices on the network from compromising the game stream. To put this in perspective, no mainstream video streaming service comes close to 30Mbps, and in fact Blu-ray itself tops out at 48Mbps for audio + video. NVIDIA in turn needs to run at a fairly high bitrate to make up for the fact that they have to do all of this encoding in real time with low latency (as opposed to highly optimized offline encoding), hence the significant bandwidth requirement. Meanwhile 50Mbps+ service in North America is still fairly rare – these requirements all but limit the feature to cable and fiber customers – so at least for now only a limited number of people will have the means to take advantage of the higher resolution.
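
For a sense of scale, the snippet below translates those bitrates into per-hour data consumption and average encoded bits per pixel; the Blu-ray row treats the 48Mbps audio + video ceiling as if it were all 1080p24 video, so it is only a rough yardstick (again my own arithmetic, not NVIDIA’s figures):

```python
# Rough bandwidth comparisons for the figures quoted above (illustrative only).

def gb_per_hour(mbps: float) -> float:
    """Data consumed by one hour of streaming at a constant bitrate, in gigabytes."""
    return mbps * 1e6 * 3600 / 8 / 1e9

def bits_per_pixel(mbps: float, width: int, height: int, fps: float) -> float:
    """Average encoded bits available per pixel per frame."""
    return mbps * 1e6 / (width * height * fps)

print(f"GRID 720p60  @ 10Mbps:  {gb_per_hour(10):.1f} GB/hour, "
      f"{bits_per_pixel(10, 1280, 720, 60):.2f} bits/pixel")
print(f"GRID 1080p60 @ 30Mbps:  {gb_per_hour(30):.1f} GB/hour, "
      f"{bits_per_pixel(30, 1920, 1080, 60):.2f} bits/pixel")
# Blu-ray's 48Mbps cap covers audio + video; treated here as pure 1080p24 video for comparison.
print(f"Blu-ray 1080p24 @ 48Mbps: {gb_per_hour(48):.1f} GB/hour, "
      f"{bits_per_pixel(48, 1920, 1080, 24):.2f} bits/pixel")
```

At 30Mbps a single evening of play adds up to tens of gigabytes, which is worth keeping in mind for anyone on a capped connection.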

NVIDIA GRID System Requirements
                          720p60                               1080p60
Minimum Bandwidth         10Mbps                               30Mbps
Recommended Bandwidth     N/A                                  50Mbps
Device                    Any SHIELD, Native or Console Mode   Any SHIELD, Console Mode only (no 1080p60 to the tablet's screen)
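
Expressed as logic, the table boils down to two gates: a bandwidth floor that depends on the resolution, and a console-mode requirement for 1080p60. The sketch below is purely hypothetical – the function and parameter names are mine, not anything from NVIDIA’s client – but it illustrates how a client could pick a stream quality from those requirements:

```python
# Hypothetical illustration of the requirements table above; not NVIDIA's actual client logic.

MIN_MBPS = {"720p60": 10, "1080p60": 30}
RECOMMENDED_MBPS_1080P = 50  # headroom so other devices on the network don't disrupt the stream

def best_grid_quality(measured_mbps: float, console_mode: bool) -> str:
    """Pick the highest stream quality the requirements table would allow."""
    if console_mode and measured_mbps >= MIN_MBPS["1080p60"]:
        if measured_mbps < RECOMMENDED_MBPS_1080P:
            print("Note: below the recommended 50Mbps; other traffic may degrade the stream.")
        return "1080p60"
    if measured_mbps >= MIN_MBPS["720p60"]:
        return "720p60"
    return "unsupported"

print(best_grid_quality(measured_mbps=35, console_mode=True))   # 1080p60, with a headroom warning
print(best_grid_quality(measured_mbps=35, console_mode=False))  # 720p60 (no 1080p60 on the tablet's own screen)
```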

As for the games that support 1080p streaming, most, but not all, GRID games support it at this time. NVIDIA’s announcement says that 35 games support 1080p, out of a library of more than 50 games. Meanwhile I’m curious just what kind of graphics settings NVIDIA is using for some of these games. With NVIDIA’s top GRID card being the equivalent of an underclocked GTX 680, older games shouldn’t be an issue, but more cutting-edge games almost certainly require tradeoffs to maintain framerates near 60fps. So I don’t imagine NVIDIA is able to run every last game with all of its settings turned up to maximum.

Finally, NVIDIA’s press release also notes that the company has brought additional datacenters online, again presumably in anticipation of the commercial service launch. A Southwest US datacenter is now available, and a datacenter in Central Europe is said to be available later this month. This brings NVIDIA’s total datacenter count up to six: USA Northwest, USA Southwest, USA East Coast, Northern Europe, Central Europe, and Asia Pacific.

Source: NVIDIA


  • D. Lister - Wednesday, May 13, 2015

    @close

    What if Intel decides to disable some feature on their CPUs or the software support as soon as it detects an AMD card? How can that be considered fair practice?

    An Intel CPU with an AMD GPU isn't the same as two different GPUs in the same system, sharing the same OS, with their individual drivers competing for the same resources. Would Intel provide support for a 3rd party dual-CPU board which had one socket for Intel and the other for AMD? Now if an Nvidia GPU was not doing CUDA with an AMD CPU, that would be a different matter altogether.

    I am demanding, however, to be allowed to run the BMW as I bought it, regardless of the fact that I also want to own another, different brand. I bought a complete set of components + features and I want to use them all as long as promised.

    Let me reiterate, a car is a standalone system. A GPU is a component of a system; you can't compare apples with oranges.

    It seems like you don't want to own two systems (two cars) and run them side by side; what your analogy suggests you really want to do is take a car, mod its engine with unauthorized parts, and then expect the OEM to let their engine timing software run in such a system, and your argument is that since some people managed to do that without the car catching fire, the software shouldn't be locking anyone out. From an individual's point of view, that seems to make sense, since it is your money spent and you should be able to do whatever you please with it. From a business's point of view, though, things get a lot more complicated with liability thrown into the mix.

    Also, if it was really that risk-free for GPUs from the two companies to work in a CUDA environment, you can bet that AMD would have sued Nvidia's bollocks off for anti-competitive business practices and won. If they still haven't, then it means that Nvidia may just have a legally sound excuse for doing what they're doing.
  • close - Wednesday, May 13, 2015

    @D. Lister: You forget that Intel also has a GPU on the die. That would be a good enough reason to disable stuff on that die when detecting an AMD or Nvidia GPU, or to disable support for QuickSync, or Optimus, or whatever. Because. This should be a good enough analogy even for a completely non-IT person. Like someone who thinks the graphics subSYSTEM (sub because it's part of an even larger system) is not actually a SYSTEM, like a car... you know... ;) They just decided to call it that for fun.

    Regardless, I think you're just trolling, because it's impossible to "not get it" after so many tries. I paid for a card with certain features. As long as there's no clear statement on the box that those features will be arbitrarily disabled if any competing product is found in the system, then this is just crappy conduct put in place and supported by people with little to no respect for their customers. Just like the 970 4GB RAM issue, which was neither disclosed to buyers nor fully owned up to by the company.
    There is no excuse for crippling a product that I paid for in full just because I decided to also use a competing product, or a product that looks like a competing one.
    AMD has no reason to sue Nvidia because they are not entitled to use that proprietary tech. Also, I don't care about AMD; you can replace this with any other driver, as other guys commented (like a USB monitor). The customers have a right to sue because they paid for something and it's been disabled. I paid for a card and it's not behaving as advertised. It's as simple as that. The problem is that it's too much of a hassle for a consumer to sue a company like Nvidia over this. I can't use my card's features because the driver finds traces of other drivers in my system, so it artificially shuts down the whole thing.

    And related to the cars vs. GPUs comparison, yet again you fail to realize that I AM talking about a system. It's composed of a GPU, graphics RAM, drivers, and supporting software. It's called a graphics subsystem, for God's sake; isn't that a good enough clue that it's a system? And in that GPU you have multiple blocks doing different things. One talks to the memory. One schedules instructions. Etc. It's a system, but since it's integrated (hence integrated circuits), people unrelated to the domain tend to think it's "just there". They forget about all the parts that make it work. The software just arbitrarily disables a feature present in the hardware after a simple system check: "IF non-Nvidia driver detected THEN disable X".
  • yannigr2 - Wednesday, May 13, 2015

    No no no. Either YOU DON'T GET IT, or you are deliberately trying to make it look completely different. In any case you are wrong.
  • Crunchy005 - Wednesday, May 13, 2015

    Hmmm, Nvidia hacking other computers on the LAN to see if any non-Nvidia cards are in use and turning off PhysX and CUDA on the Nvidia computer. I can see that happening.

    No one here is asking Nvidia tech to run on AMD hardware, just pointing out that Nvidia disables their proprietary stuff if it detects anything other than Nvidia in the system. I wonder if this happens if you're running an AMD APU with an Nvidia card, where the graphics on the APU aren't being used at all but are detected. Bye bye PhysX and CUDA, because that makes sense.
  • yannigr2 - Wednesday, May 13, 2015

    You did NOT get it straight. Let me enlighten you.

    We have Nvidia GPUs AND AMD GPUs. The box of our Nvidia GPUs said "PhysX support, CUDA support". Nvidia is DENYING a feature that it sells on the box because we made a huge mistake: we put the Nvidia GPU as secondary. In all Nvidia's arrogance, having an AMD GPU as primary and an Nvidia GPU as secondary is a big insult. So as Nvidia customers we have to be punished, by having support for PhysX and CUDA removed.

    In case you start talking about compatibility and stability problems, let me enlighten you some more. Nvidia made a mistake in the past and the 258 beta driver went out to the public WITHOUT a lock. So they added the lock later, not while building the driver. A couple of patches out there gave the option to use a GeForce card as a PhysX card with an AMD GPU as primary. I never got a BSOD with that patch in Batman, Alice, Borderlands, or other games. Even if there were problems, Nvidia could offer the option with a big, huge pop-up window in the user's face saying that they cannot guarantee performance or stability with AMD cards. They don't. They LOCK IT away from their OWN customers, even while advertising it as a feature on the box.

    One last question. What part of the "USB monitor driver" didn't you understand?
  • ravyne - Wednesday, May 13, 2015

    What? "Cuda Cores" is just marketting speak for their Cuda-capable graphics micro-architectures, of which they have several -- Cuda isn't a hardware instruction set, its a high-level language. Cuda source code could be compiled for AMD's GPU microarchitectures just as easily, or even your CPU (it would just run a lot more slowly). Remember, nVidia didn't invent PhysX either -- that started off as a CPU library and PhysX's own physics accelerator cards. Reply
  • Einy0 - Wednesday, May 13, 2015

    That's not what they are saying at all. What they are saying is that nVidia disables CUDA and PhysX whenever there is another video driver functioning in the system. Theoretically you could get a Radeon 290X for the video chores and dedicate a lower-end nVidia-based card to processing PhysX. You can do this if you have two nVidia cards, but not if you mix flavors. If you remember correctly, PhysX existed way before CUDA. If nVidia wanted, they could make it an open standard and allow AMD, Intel, etc. to create their own processing engines for PhysX, or even CUDA for that matter. They are using shader engines to do floating point calculations for things other than pixels, vertices, or geometry. Personally I got an nVidia card because my last card was AMD/ATI. I try to rotate brands every other generation or so if the value is similar. I personally haven't had any driver-related issues with either vendor in the past 5+ years.
  • rtho782 - Wednesday, May 13, 2015

    PhysX and CUDA work for me with a 2nd monitor on my IGP?
  • yannigr2 - Wednesday, May 13, 2015

    If the IGP is not powering the primary screen, I THINK yes. The primary monitor would have to be on the Nvidia card, so your IGP would be used only to show a picture on the secondary monitor, nothing else.
  • extide - Wednesday, May 13, 2015

    They make an exception for IGPs because otherwise Optimus wouldn't work at all.
