Software, Cont: ShadowPlay and "Reason Flags"

Along with providing the game optimization service and SHIELD’s PC client, GeForce Experience has another service scheduled to be added this summer. That service is called ShadowPlay, and like the SHIELD client it’s intended to serve as a novel software use of some of the dedicated hardware present in NVIDIA’s latest GPUs.

ShadowPlay will be NVIDIA’s take on video recording, the novel aspect of it coming from the fact that NVIDIA is basing the utility around Kepler’s hardware H.264 encoder. To be clear, video recording software is nothing new; FRAPS, Afterburner, Precision X, and other utilities all do basically the same thing. However all of those utilities work entirely in software, fetching frames from the GPU and then encoding them on the CPU. The overhead from this is not insignificant, especially the CPU time required for video encoding.

With ShadowPlay NVIDIA is looking to spur on software developers by getting into video recording themselves, and to provide superior performance by using hardware encoding. Notably, hardware-accelerated recording wasn’t impossible prior to ShadowPlay, but for whatever reason recording utilities that use NVIDIA’s hardware H.264 encoder have been few and far between. Regardless, the end result should be that relying on the hardware encoder removes most of the overhead: the GPU is minimally affected, the CPU is freed up, less time is spent shuttling frame data back to the CPU, and the resulting recordings are much smaller, all at the same time.

ShadowPlay will feature multiple modes. Its manual mode will be analogous to FRAPS, recording whenever the user desires it. The second mode, shadow mode, is perhaps the more peculiar mode. Because the overhead of recording with the hardware H.264 encoder is so low, NVIDIA wants to simply record everything in a very DVR-like fashion. In shadow mode the utility keeps a rolling window of the last 20 minutes of footage, with the goal being that should something happen that the user decides they want to record after the fact, they can simply pull it out of the ShadowPlay buffer and save it. It’s perhaps a bit odd from the perspective of someone who doesn’t regularly record their gaming sessions, but it’s definitely a novel use of NVIDIA’s hardware H.264 encoder.
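NVIDIA hasn’t detailed how shadow mode is implemented, but the rolling-window idea itself is a classic ring buffer. The following is a minimal sketch of the concept, with the window sizes, chunking scheme, and class names being our own illustrative assumptions rather than anything from NVIDIA:

```python
from collections import deque

class ReplayBuffer:
    """Keeps only the most recent `window_seconds` of encoded video chunks,
    discarding older footage as new chunks arrive (DVR-style)."""

    def __init__(self, window_seconds=20 * 60, chunk_seconds=2):
        max_chunks = window_seconds // chunk_seconds
        # deque with maxlen silently drops the oldest chunk once full
        self.chunks = deque(maxlen=max_chunks)

    def push(self, encoded_chunk):
        # Called as the hardware encoder emits each finished chunk
        self.chunks.append(encoded_chunk)

    def save(self):
        # "Shadow" capture: dump whatever is currently buffered to disk
        return list(self.chunks)

# Tiny 10-second window for illustration (5 chunks of 2 seconds each)
buf = ReplayBuffer(window_seconds=10, chunk_seconds=2)
for i in range(8):
    buf.push(f"chunk-{i}")
print(buf.save())  # only the newest 5 chunks survive: chunk-3 .. chunk-7
```

The appeal of this structure is that memory use is bounded regardless of how long the session runs, which is what makes an always-on recorder practical in the first place.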

NVIDIA hasn’t begun external beta testing of ShadowPlay yet, so for the moment all we have to work from is screenshots and descriptions. The big question right now is what the resulting quality will be like. NVIDIA’s hardware encoder has some limitations that are necessary for real-time encoding, and as past qualitative comparisons between NVIDIA’s encoder and offline H.264 encoders like x264 have shown, there is a quality tradeoff when everything has to be done in hardware in real time. As such ShadowPlay may not be the best tool for reference quality productions, but for the YouTube/Twitch.tv generation it should be more than enough.

Anyhow, ShadowPlay is expected to be released sometime this summer. But since 95% of the software ShadowPlay requires is also required for the SHIELD client, we wouldn’t be surprised if ShadowPlay was released shortly after a release quality version of the SHIELD client is pushed out, which may come as early as June alongside the SHIELD release.

Reasons: Why NVIDIA Cards Throttle

The final software announcement from NVIDIA to coincide with the launch of the GTX 780 isn’t a software product in and of itself, but rather an expansion of NVIDIA’s 3rd party hardware monitoring API.

One of the common questions/complaints about GPU Boost that NVIDIA has received over the last year is about why a card isn’t boosting as high as it should be, or why it suddenly drops down a boost bin or two for no apparent reason. For technically minded users who know the various cards’ throttle points and specifications this isn’t too complex – just look at the power consumption, GPU load, and temperature – but that’s a bit much to ask of most users. So starting with the recently released 320.14 drivers, NVIDIA is exposing a selection of flags through their API that indicate what throttle point is causing throttling or otherwise holding back the card’s clockspeed. There isn’t an official name for these flags, but “reasons” is as good as anything else, so that’s what we’re going with.

The reasons flags are a simple set of 5 binary flags that NVIDIA’s driver uses to indicate why it isn’t increasing the clockspeed of the card further. These flags are:

  • Temperature Limit – the card is at its temperature throttle point
  • Power Limit – The card is at its global power/TDP limit
  • Voltage Limit – The card is at its highest boost bin
  • Overvoltage Max Limit – The card’s absolute maximum voltage limit (“if this were to occur, you’d be at risk of frying your GPU”)
  • Utilization Limit – The current workload is not high enough that boosting is necessary
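Since these are binary flags, a monitoring utility presumably receives them as a bitfield and maps each set bit to a name. The flag names below come from NVIDIA's list above, but the bit positions and the decoding function are our own illustrative assumptions, not NVIDIA's actual API:

```python
# Hypothetical bit assignments; NVIDIA's real API may order these differently.
REASONS = {
    0x01: "Temperature Limit",
    0x02: "Power Limit",
    0x04: "Voltage Limit",
    0x08: "Overvoltage Max Limit",
    0x10: "Utilization Limit",
}

def decode_reasons(flags: int) -> list[str]:
    """Return the human-readable name of every flag set in the bitfield."""
    return [name for bit, name in REASONS.items() if flags & bit]

# Example: a card sitting at both its temperature and power limits
print(decode_reasons(0x01 | 0x02))  # ['Temperature Limit', 'Power Limit']
```

Note that multiple flags can be set at once, which is exactly the kind of detail a simple "why am I throttling?" readout needs to convey.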

As these are simple flags, it’s up to 3rd party utilities to decide how they want to present them. EVGA’s Precision X, NVIDIA’s utility of choice for showcasing new features to the press, simply logs the flags alongside the rest of the hardware monitoring data, and this is likely what most programs will do.

With the reason flags NVIDIA is hoping to help users better understand why their cards aren’t boosting as high as they’d like. At the same time, the prevalence of GPU Boost 2.0 and its much heavier reliance on temperature makes exposing this data all the more helpful, especially for overclockers who would like to know which limit they need to raise to unlock more performance.


155 Comments


  • varad - Thursday, May 23, 2013 - link

    You do realize that a GPU like Titan has almost 5 times the number of transistors compared to Intel's biggest Core i7 CPU, right? There are 7.1 billion transistors in Titan vs 1.4 billion in Core i7 3770k. So, it means they cannot match the price of "a good CPU" unless they decide to become a non-profit organization :)
  • AssBall - Thursday, May 23, 2013 - link

    Well if all you needed was a single Titan to run your OS, computations, games, and nothing else, then no problem.
  • krutou - Sunday, May 26, 2013 - link

    Two problems with your logic

    22 nm fabrication is more expensive (price per transistor)

    CPUs are more difficult to design
  • An00bis - Friday, May 31, 2013 - link

    it's not like you can just shove your hand in a jar full of transistors and just slap it on a chip and consider it a cpu, a cpu is required to do a gpu's task (integrated gpu) AND be good at everything a gpu can't do, which is... well lots of things actually. A gpu is much simpler, hence why the manufacturing + designing cost is probably more expensive than a big ass card that has to include memory+a pcb+a gpu
  • chizow - Thursday, May 23, 2013 - link

    Great card, but a year late. This is what GTX 600 series should've been but we all know how that went.

    I think Nvidia made some pretty big mistakes with how they handled the entire Kepler generation after Tahiti's launch price debacle. I know their financial statements and stockholders don't agree but they've managed to piss off their core consumers at every performance segment.

    Titan owners have to feel absolutely gutted at this point having paid $1000 for a part that is only ~10-15% faster than the GTX 780. End result of this generation is we are effectively paying 50-100% more for the same class of card than previous generations. While the 780 is a great card and a relatively good value compared to Titan, we're still paying $650 for what is effectively Kepler's version of the GTX 470.
  • Crisium - Thursday, May 23, 2013 - link

    People who bought a Titan knew what they were getting into. If you have regrets, you were in no position to buy a $1000 GPU to begin with and made a grievous financial error.

    $650 isn't horrible for this price, but you are still paying the Nvidia Tax.
  • chizow - Thursday, May 23, 2013 - link

    I don't think so, if you polled GTX Titan owners if they would've paid $1000 knowing 2-3 months later there would be a part that performed similarly at 35% less price, I think you would hear most of them would've waited to buy not 1, but 2 for just a bit more. Or instead of buying 2 Titans, buying 3x780s.

    Also, it really has nothing to do with being in a financial position or not, it's funny when Titan released I made the comment anyone interested in Titan would be better served by simply investing that money into Nvidia stock, letting that money grow on Titan's fat margins, and then buying 2x780s when they released. All according to plan, for my initial investment of 1 Titan I can buy 2x780s.

    But I won't. Nvidia blew it this generation, I'll wait for Maxwell.
  • IanCutress - Thursday, May 23, 2013 - link

    Titan was a compute card with optional gaming, rather than a gaming card with optional FP64 compute. That's why the price difference exists. If you bought a Titan card for gaming, then you would/should have been smart enough to know that a similar card without compute was around the corner.
  • chizow - Thursday, May 23, 2013 - link

    Unfortunately, that was never how *GTX* Titan was marketed, straight from the horse's mouth:
    "With the DNA of the world’s fastest supercomputer and the soul of NVIDIA® Kepler™ architecture, GeForce® GTX TITAN GPU is a revolution in PC gaming performance."

    Not to mention the fact Titan is a horrible compute card and value outside of CUDA workloads, and even there it suffers as a serious compute card due to the lack of ECC. It's an overpriced gaming card, plain and simple.

    At the time, it was still uncertain whether or not Nvidia would launch more SKUs based on GK110 ASIC, but informed consumers knew Nvidia had to do something with all the chips that didn't make the TDP cut as Tesla parts.
  • mayankleoboy1 - Thursday, May 23, 2013 - link

    Really ? Apart from a few apps, Titan is poor compared to a 7970. It has bad OpenGL performance, which 90% of industry renderfarms use.
    Titan is really an overpriced gaming card.
