Software: GeForce Experience, Out of Beta

Along with the launch of the GTX 780 hardware, NVIDIA is also using this opportunity to announce and roll out new software. Though they are (and always will be) fundamentally a hardware company, NVIDIA has been finding that software is increasingly important to the sales of their products. As a result the company has taken on several software initiatives over the years, both on the consumer side and the business side. To that end, the software launching today is essentially the spearhead of a larger NVIDIA software ecosystem.

The first item on the list is GeForce Experience, NVIDIA’s game settings advisor. You may remember GeForce Experience from the launch of the GTX 690, when it was first announced. The actual rollout of GeForce Experience has been slower than NVIDIA projected, going from announcement to final release in just over a year. Nevertheless, there is light at the end of the tunnel: with version 1.5, GeForce Experience is finally out of beta and NVIDIA is qualifying it as release quality.

So what is GeForce Experience? GFE is, in a nutshell, NVIDIA’s game settings advisor. The concept itself is not new: games have long auto-detected hardware and tried to pick appropriate settings, and NVIDIA themselves have toyed with the idea before with their Optimal Playable Settings (OPS) service. The difference between those implementations and GFE comes down to who’s doing the work of figuring this out, and how much work is being done.

With OPS NVIDIA was essentially writing out recommended settings by hand based on human play testing. That process is of course slow, making it hard to cover a wide range of hardware and to get settings out for new games in a timely manner. Meanwhile with auto-detection built into games, the quality of the recommendations is not a particular issue, but most games base their automatic settings on a list of hardware profiles, which means those built-in routines are easily fouled up by newer hardware. Simply put, it doesn’t do NVIDIA any good if a graphical showcase game like Crysis 3 selects the lowest quality settings because it doesn’t know what a GTX 780 is.

NVIDIA’s solution of choice is to take on most of this work themselves, and then move virtually all of it to automation. From a business perspective this makes great sense for NVIDIA, as they already have the critical component for such a service: the hardware. NVIDIA already operates large GPU farms in order to test drivers, a process that isn’t all that different from what they would need to automate the search for optimal settings. Rather than regression testing and looking for errors, NVIDIA’s GPU farms can iterate through various settings on various GPUs in order to find the best combination that reaches a playable level of performance.

By iterating through the massive matrix of settings most games offer, NVIDIA’s GPU farms can do most of the work required. What’s left for humans is writing test cases for new games, something that’s already necessary for driver regression testing, and identifying which settings are more desirable from a quality perspective so that those can be weighted and scored in the benchmarking process. This means it’s not an entirely human-free process, but having a handful of engineers writing test cases and assigning weights is a much more productive use of time than having humans test everything by hand, as was the case with OPS.
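
To make that division of labor a bit more concrete, below is a minimal sketch of the kind of search being described: exhaustively benchmark a matrix of settings on a given GPU, discard any combination that misses a playable framerate target, and keep the survivor with the highest engineer-assigned quality score. This is purely illustrative; the setting names, weights, framerate target, and toy benchmark stub are our own placeholders, not anything from NVIDIA’s actual pipeline.

```python
# Hypothetical sketch of an automated settings search; not NVIDIA's pipeline.
from itertools import product

# Candidate values for each setting, ordered from lowest to highest quality.
SETTINGS_SPACE = {
    "resolution":   ["1600x900", "1920x1080", "2560x1440"],
    "texture":      ["medium", "high", "ultra"],
    "shadows":      ["low", "medium", "high"],
    "antialiasing": ["off", "FXAA", "4xMSAA"],
}

# Per-setting quality weights -- the part assigned by engineers.
QUALITY_WEIGHTS = {"resolution": 3.0, "texture": 2.0, "shadows": 1.5, "antialiasing": 1.0}

TARGET_FPS = 40.0  # minimum framerate considered "playable" in this example


def run_benchmark(gpu: str, settings: dict) -> float:
    """Stand-in for an automated benchmark pass on a GPU farm. A real system
    would launch the game's scripted test case and measure the framerate;
    this toy model just makes fps fall as quality rises."""
    cost = sum(SETTINGS_SPACE[k].index(v) for k, v in settings.items())
    return 90.0 - 8.0 * cost


def quality_score(settings: dict) -> float:
    """Weighted sum of how far up each setting's quality ladder a combination sits."""
    return sum(QUALITY_WEIGHTS[k] * SETTINGS_SPACE[k].index(v) for k, v in settings.items())


def find_optimal_settings(gpu: str) -> dict:
    """Iterate the full settings matrix and keep the highest-scoring
    combination that still clears the playable-framerate bar."""
    best, best_score = None, float("-inf")
    for combo in product(*SETTINGS_SPACE.values()):
        settings = dict(zip(SETTINGS_SPACE, combo))
        if run_benchmark(gpu, settings) < TARGET_FPS:
            continue
        score = quality_score(settings)
        if score > best_score:
            best, best_score = settings, score
    return best


if __name__ == "__main__":
    print(find_optimal_settings("GTX 780"))
```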

Moving on, all of this feeds into NVIDIA’s GFE backend service, which in turn feeds the frontend in the form of the GFE client. The GFE client has a number of features (which we’ll get into in a moment), but its primary role is to find games on a user’s computer, pull optimal settings from NVIDIA, and then apply those settings as necessary. All of this is done through a relatively straightforward UI, which lists the detected games, each game’s current settings, and NVIDIA’s suggested settings.
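
For illustration only, that client-side flow might look conceptually like the sketch below: scan likely install locations for known games, ask a backend service for a recommendation keyed to the detected game and hardware, and write the result out. The real GFE client, its detection logic, and NVIDIA’s backend API are not public, so every name, path, and endpoint here is a hypothetical placeholder.

```python
# Purely illustrative sketch of a settings-advisor client; nothing here
# reflects the actual GFE client or NVIDIA's backend API.
import json
from pathlib import Path

# Hypothetical table mapping known executables to game names.
KNOWN_GAMES = {"crysis3.exe": "Crysis 3", "bf3.exe": "Battlefield 3"}


def detect_games(search_dirs):
    """Walk likely install locations and match executables against the known list."""
    found = {}
    for root in search_dirs:
        for exe in Path(root).rglob("*.exe"):
            if exe.name.lower() in KNOWN_GAMES:
                found[KNOWN_GAMES[exe.name.lower()]] = exe.parent
    return found


def fetch_recommendation(game: str, gpu: str, resolution: str) -> dict:
    """Stand-in for a query to a backend service, keyed by game and hardware.
    A real client would make a network request here; we return canned values."""
    return {"texture": "high", "shadows": "medium", "antialiasing": "FXAA"}


def apply_settings(install_dir: Path, settings: dict) -> None:
    """Persist the recommended settings. A real client would rewrite the game's
    own config file format; this sketch just drops a JSON file alongside it."""
    (install_dir / "recommended_settings.json").write_text(json.dumps(settings, indent=2))


for game, path in detect_games([r"C:\Program Files (x86)", r"C:\Program Files"]).items():
    apply_settings(path, fetch_recommendation(game, gpu="GTX 780", resolution="1920x1080"))
```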

The big question of course is whether GFE’s settings are any good, and in short the answer is yes. NVIDIA’s settings are overall reasonable, and more often than not have closely matched the settings we use for benchmarking. I’ve noticed that they do have a preference for FXAA and other pseudo-AA modes over real AA modes like MSAA, but at this point that’s probably a losing battle on my part given the performance hit of MSAA.

For casual users NVIDIA is expecting this to be a one-stop solution. They will simply let GFE apply whatever it thinks are the best settings, and as long as NVIDIA has done their profiling right, they will get the best mix of quality at an appropriate framerate. For power users on the other hand, the expectation isn’t necessarily that they will stick with GFE’s recommended settings, but rather that GFE will provide a solid baseline to work from. Rather than diving into a new game blindly, power users can start with GFE’s recommended settings and then turn things down if performance isn’t quite high enough, or trade some settings for others if they favor a different balance of quality. On a personal note this exactly matches what I’ve been using GFE for since the earlier betas landed in our hands, so it seems NVIDIA is on the mark when it comes to power users.

With all of that said, GeForce Experience isn’t going to be a stand-alone game optimization product but rather the start of a larger software suite for consumers. GeForce Experience has already absorbed the NVIDIA Update functionality that previously existed as a small optional install in NVIDIA’s drivers. It’s from here that NVIDIA is going to be building further software products for GeForce users.

The first of these expansions will be for SHIELD, NVIDIA’s handheld game console launching next month. One of SHIELD’s major features is the ability to stream PC games to the console, which in turn requires a utility running on the host PC to provide the SHIELD interface, control mapping, and of course video encoding and streaming. Rather than roll that out as a separate utility, that functionality will be built into future versions of GeForce Experience.

To that end, with the next release of drivers for the GTX 780, GeForce Experience will be bundled with NVIDIA’s drivers, similar to how NVIDIA Update is today. Like NVIDIA Update it will be an optional-but-default item, so users can opt out of it, but if adoption is anything like NVIDIA Update’s, the expectation is that most users will end up installing GFE.

It would be remiss of us to not point out the potential for bloat here, but we’ll have to see how this plays out. In terms of file size GeForce Experience is rather tiny at 11MB (versus 169MB for the 320.14 driver package), so after installer overhead is accounted for it should add very little to the size of the GeForce driver package. Similarly it doesn’t seem to have any real appetite for system resources, but this is the wildcard since it’s subject to change as NVIDIA adds more functionality to the client.

Comments

  • mac2j - Thursday, May 23, 2013 - link

    The problem with $650 vs $500 for this price point is this:

    I can get 2 x 7950s for <$600 - that's a setup that destroys a 780 for less money.

    Even if you're single-GPU limited, $250 is a lot of extra cash for a relatively small performance gain.
  • Ytterbium - Thursday, May 23, 2013 - link

    I'm disappointed they decided to cut compute to 1/24 vs 1/3 on Titan; AMD is much better value for compute tasks.
  • BiffaZ - Friday, May 24, 2013 - link

    Except most consumer (@home type) compute is SP, not DP, so it won't make much difference. The 780's SP performance is around equal to or higher than AMD's.
  • Nighyal - Thursday, May 23, 2013 - link

    I don't know if this is possible, but it would be great to see a benchmark that showed power, noise and temperature at a standard workload. We can get an inferred idea of performance-per-watt, but when you're measuring a whole system other factors come into play (you mentioned CPU loads scaling with increased GPU performance).

    My interest in this comes from living in a hot climate (Australia), where a computer can throw out a very noticeable amount of heat. The large majority of my usage is light gaming (LoL), but I occasionally play quite demanding single player titles which stretch the legs of my GPU. The amount of heat thrown out is directly proportional to power draw, so being able to clearly see how many fewer watts a system requires for a controlled workload would be a handy comparison for me.

    TL;DR - Please also measure temperature, noise and power at a controlled workload to isolate performance-per-watt.
  • BiggieShady - Friday, May 24, 2013 - link

    Kudos on the FCAT and the delta percentage metrics. So 32.2% for the 7990 means that on average one frame is present 32.2% longer than the next. Still, it is only an average. A great extra bit of info would be the same metric averaged only over deltas higher than a threshold, displayed on a graph with varying thresholds.
  • flexy - Friday, May 24, 2013 - link

    NV releases a card at a ridiculous price point of $1000. Then they castrate the exact same card and give it a new name, making it look like it's a "new card", and sell it cheaper than their way overpriced high end card. Which, of course, is a "big deal" (sarcasm) given the crazy price of Titan. Either way, I don't like what NV is doing, in the slightest.

    Ages ago, people could buy *real* top of the line cards which always cost about $400-$500; today you pay $600 for "trash cards" built from sub-par chips that didn't make the cut for Titan. Nvidia: "Hey, let's just make up a new card and sell those chips too, lol."

    Please AMD, help us!!
  • bds71 - Friday, May 24, 2013 - link

    For what it's worth, I would have liked to have seen the 780 *truly* fill the gap between the 680 and Titan by offering not only the gaming performance but ALSO the compute performance - if they had gone with 1/6 or even 1/12 to better fill the gap and round out the performance all around, I would HAPPILY pay $650 for this card. As it is, I already have a 690, so I will simply get another for 4K gaming - but a comparison between 3x 780s and 2x 690s (both very close to $2K) at 8Mpixels+ resolution would be extremely interesting. Note: 3x 30" monitors could easily be configured for 4800x2560 resolution via NVIDIA Surround or Eyefinity - and I, for one, would love to see THAT review!!
  • flexy - Friday, May 24, 2013 - link

    Well, compute performance is the other thing, along with their questionable GPU throttling aka "boost" (yeah right) technology. Paying a premium for such a card and getting weak compute performance in exchange, compared to older gen cards or the AMD offerings... Seriously, there is a lot not to like about Kepler, at least from an enthusiast point of view. I hope NV doesn't continue down that route, with their cards becoming less attractive while prices go up.
  • EJS1980 - Wednesday, May 29, 2013 - link

    Cynical much?
  • ChefJeff789 - Friday, May 24, 2013 - link

    Glad to see the significant upgrade. I just hope that AMD forces the prices back down again soon, and that their release "at the end of the year" is closer to September than December. It'll be interesting to see how they stack up. BTW, I have shied away from AMD cards ever since I owned an X800 and had SERIOUS issues with the Catalyst drivers (constant blue-screens, and I had to do a clean Windows install to even get the card working for longer than a few minutes). I know this was a long time ago, and I've heard from numerous people that they're better now. Is this true?
