Software, Cont: ShadowPlay and "Reason Flags"

Along with providing the game optimization service and SHIELD’s PC client, GeForce Experience has another service that’s scheduled to be added this summer. That service is called ShadowPlay, and not unlike SHIELD it’s intended to serve as a novel software implementation of some of the hardware functionality present in NVIDIA’s latest hardware.

ShadowPlay will be NVIDIA’s take on video recording, the novel aspect of it coming from the fact that NVIDIA is basing the utility around Kepler’s hardware H.264 encoder. To be straightforward, video recording software is nothing new; FRAPS, Afterburner, Precision X, and other utilities all do basically the same thing. However, all of those utilities work entirely in software, fetching frames from the GPU and then encoding them on the CPU. The overhead from this is not insignificant, especially due to the CPU time required for video encoding.

With ShadowPlay NVIDIA is looking to spur on software developers by getting into video recording themselves, and to provide superior performance by using hardware encoding. Notably this isn’t something that was impossible prior to ShadowPlay, but for some reason recording utilities that use NVIDIA’s hardware H.264 encoder have been few and far between. Regardless, the end result should be that most of the overhead is removed by relying on the hardware encoder, minimally affecting the GPU while freeing up the CPU, reducing the amount of time spent on data transit back to the CPU, and producing much smaller recordings all at the same time.

ShadowPlay will feature multiple modes. Its manual mode will be analogous to FRAPS, recording whenever the user desires it. The second mode, shadow mode, is perhaps the more peculiar mode. Because the overhead of recording with the hardware H.264 encoder is so low, NVIDIA wants to simply record everything in a very DVR-like fashion. In shadow mode the utility keeps a rolling window of the last 20 minutes of footage, with the goal being that should something happen that the user decides they want to record after the fact, they can simply pull it out of the ShadowPlay buffer and save it. It’s perhaps a bit odd from the perspective of someone who doesn’t regularly record their gaming sessions, but it’s definitely a novel use of NVIDIA’s hardware H.264 encoder.
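The DVR-style shadow mode described above amounts to a rolling window over encoded footage: new chunks are appended, and anything older than the window is discarded. A minimal sketch of that idea, assuming a 20-minute window and opaque pre-encoded chunks (this is an illustration of the buffering concept, not NVIDIA's actual implementation):

```python
from collections import deque
import time


class ShadowBuffer:
    """Rolling DVR-style buffer: keeps only the last `window_s` seconds
    of encoded video chunks (hypothetical sketch of shadow mode)."""

    def __init__(self, window_s=20 * 60):
        self.window_s = window_s
        self.chunks = deque()  # (timestamp, encoded_chunk) pairs

    def push(self, chunk, now=None):
        """Append a freshly encoded chunk and evict anything too old."""
        now = time.time() if now is None else now
        self.chunks.append((now, chunk))
        while self.chunks and now - self.chunks[0][0] > self.window_s:
            self.chunks.popleft()

    def save(self):
        """'Pull it out of the buffer': return buffered chunks in order."""
        return [chunk for _, chunk in self.chunks]
```

Because only the most recent window is retained, memory use stays bounded no matter how long the session runs, which is what makes always-on recording practical.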

NVIDIA hasn’t begun external beta testing of ShadowPlay yet, so for the moment all we have to work from is screenshots and descriptions. The big question right now is what the resulting quality will be like. NVIDIA’s hardware encoder does have some limitations that are necessary for real-time encoding, so as we’ve seen in the past with qualitative looks at NVIDIA’s encoder and offline H.264 encoders like x264, there is a quality tradeoff if everything has to be done in hardware in real time. As such ShadowPlay may not be the best tool for reference quality productions, but for the YouTube/Twitch.tv generation it should be more than enough.

Anyhow, ShadowPlay is expected to be released sometime this summer. But since 95% of the software ShadowPlay requires is also required for the SHIELD client, we wouldn’t be surprised if ShadowPlay was released shortly after a release quality version of the SHIELD client is pushed out, which may come as early as June alongside the SHIELD release.

Reasons: Why NVIDIA Cards Throttle

The final software announcement from NVIDIA to coincide with the launch of the GTX 780 isn’t a software product in and of itself, but rather an expansion of NVIDIA’s 3rd party hardware monitoring API.

One of the common questions/complaints about GPU Boost that NVIDIA has received over the last year is about why a card isn’t boosting as high as it should be, or why it suddenly drops down a boost bin or two for no apparent reason. For technically minded users who know the various cards’ throttle points and specifications this isn’t too complex – just look at the power consumption, GPU load, and temperature – but that’s a bit much to ask of most users. So starting with the recently released 320.14 drivers, NVIDIA is exposing a selection of flags through their API that indicate which limit is throttling or otherwise holding back the card’s clockspeed. There isn’t an official name for these flags, but “reasons” is as good as anything else, so that’s what we’re going with.

The reasons flags are a simple set of 5 binary flags that NVIDIA’s driver uses to indicate why it isn’t increasing the clockspeed of the card further. These flags are:

  • Temperature Limit – The card is at its temperature throttle point
  • Power Limit – The card is at its global power/TDP limit
  • Voltage Limit – The card is at its highest boost bin
  • Overvoltage Max Limit – The card’s absolute maximum voltage limit (“if this were to occur, you’d be at risk of frying your GPU”)
  • Utilization Limit – The current workload is not high enough that boosting is necessary
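Since the reasons are exposed as simple binary flags, a monitoring utility's job is just to test each bit and report the matching name. A minimal sketch of that decoding step follows; the bit positions and the `decode_reasons` helper are hypothetical, as the article doesn't document the actual values NVIDIA's API uses:

```python
# Hypothetical bit assignments for the five reason flags; the real
# values come from NVIDIA's monitoring API and may differ.
REASONS = {
    1 << 0: "Temperature Limit",
    1 << 1: "Power Limit",
    1 << 2: "Voltage Limit",
    1 << 3: "Overvoltage Max Limit",
    1 << 4: "Utilization Limit",
}


def decode_reasons(flags):
    """Return the names of every reason flag set in `flags`."""
    return [name for bit, name in REASONS.items() if flags & bit]
```

A utility like Precision X would poll this value alongside the rest of its hardware monitoring data and log which reasons were active at each sample.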

As these are simple flags, it’s up to 3rd party utilities to decide how they want to present them. EVGA’s Precision X, which is NVIDIA’s utility of choice for showcasing new features to the press, simply records the flags like it does the rest of the hardware monitoring data, and this is likely what most programs will do.

NVIDIA is hoping that the reason flags will help users better understand why their card isn’t boosting as high as they’d like. At the same time, the prevalence of GPU Boost 2.0 and its much heavier reliance on temperature makes exposing this data all the more helpful, especially for overclockers who would like to know which attribute they need to turn up to unlock more performance.
