The Tools of the Trade: FRAPS & GPUView

Now that we have a basic understanding of the rendering pipeline and just what stuttering is, it’s time to talk about the tools that are commonly used to measure these issues. We’ll start with FRAPS, both because FRAPS is well understood by many of our readers and because FRAPS is what brought stuttering to the forefront of review sites in the first place.

AMD, quite bluntly, has a problem with how FRAPS is being used in some cases. To be clear here FRAPS is a wonderful tool, and without it we would be unable to include a number of different games in our hardware reviews. AMD’s problem with FRAPS is not its existence, what it does, or even how it does things. AMD’s problem with FRAPS comes down to how it’s interpreted.

To get to that problem, we’re going to have to take a look at how FRAPS measures framerates. Going back to our diagram of the rendering pipeline, FRAPS hooks into the pipeline very early, at the application stage.

By injecting its DLL into the application, FRAPS intercepts the Direct3D Present call as it’s being made. From here FRAPS can delay the call for a split second to insert the draw commands for its overlay, or FRAPS can simply move on. When it comes to measuring framerates and frametimes, what FRAPS is measuring is the Present calls. Every time it sees a new Present call get pushed out, it counts that as a new frame, does any necessary logging, and then passes that Present call on to Direct3D.
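For illustration only (FRAPS’ actual implementation is closed source, and the function and variable names below are ours), a hooked Present of this sort boils down to something like the following sketch; the vtable patching itself is omitted:

```cpp
#include <windows.h>
#include <dxgi.h>
#include <cstdio>

// Pointer to the real Present, saved when the swap chain's vtable entry is
// patched at injection time (the patching itself is omitted here).
typedef HRESULT (STDMETHODCALLTYPE *PresentFn)(IDXGISwapChain*, UINT, UINT);
static PresentFn     g_realPresent = nullptr;
static LARGE_INTEGER g_frequency   = {};
static LARGE_INTEGER g_lastPresent = {};

// This wrapper runs in place of IDXGISwapChain::Present: it timestamps the
// call, could draw an overlay, and then forwards the call on to Direct3D.
HRESULT STDMETHODCALLTYPE HookedPresent(IDXGISwapChain* swapChain,
                                        UINT syncInterval, UINT flags)
{
    if (g_frequency.QuadPart == 0)
        QueryPerformanceFrequency(&g_frequency);

    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    if (g_lastPresent.QuadPart != 0) {
        // The "frametime" a FRAPS-style tool logs: the gap between two
        // Present submissions at the head of the pipeline, not the gap
        // between two frames actually appearing on screen.
        double ms = 1000.0 * (now.QuadPart - g_lastPresent.QuadPart)
                           / g_frequency.QuadPart;
        std::printf("Present-to-Present: %.2f ms\n", ms);
    }
    g_lastPresent = now;

    // (An overlay would insert its own draw calls here before presenting.)

    return g_realPresent(swapChain, syncInterval, flags);
}
```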

This method is easy to accomplish and works with almost any application, which is what makes FRAPS so versatile. When it comes to measuring the average FPS over a benchmark run, for example, FRAPS is great because every Present call it sees will eventually end up triggering a frame to be displayed. The average framerate is merely the number of Present calls FRAPS sees, divided by how long FRAPS was running.
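Since that average only needs a count and a clock, it can be computed from nothing more than the Present timestamps such a tool already collects. A minimal sketch (our own code, not FRAPS’):

```cpp
#include <vector>

// Average FPS over a run, computed as described above: the number of Present
// calls seen, divided by how long the run lasted. No per-frame detail is
// required, which is why this figure holds up even though it is measured at
// the head of the pipeline.
double AverageFps(const std::vector<double>& presentTimestampsSec)
{
    if (presentTimestampsSec.size() < 2)
        return 0.0;
    double runSeconds = presentTimestampsSec.back() - presentTimestampsSec.front();
    return (presentTimestampsSec.size() - 1) / runSeconds;  // e.g. 5999 intervals / 60 s ~ 100 fps
}
```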

The problem is not in using FRAPS to measure average framerates over the run of a benchmark, but rather in using FRAPS to measure individual frames. FRAPS is at the very start of the rendering pipeline; it’s before the GPU, it’s before the drivers, it’s even before Direct3D and the context queue. As such FRAPS can tell you all about what goes into the rendering pipeline, but FRAPS cannot tell you what comes out of the rendering pipeline.

So using FRAPS in this manner as a way of measuring frame intervals is problematic. Because the application can only pass off a new frame when the context queue is ready for it, what FRAPS is actually measuring is the very start of the rendering pipeline, which, not unlike a true pipe, is limited by what comes after it. If the pipeline is backed up for whatever reason (context queue, drivers, etc.), then FRAPS is essentially reporting on what the pipeline is doing, and not the frame interval of the final displayed frames. Simply put, FRAPS cannot tell you the frame interval at the end of the pipeline; it can only infer it from what it’s seeing.

AMD’s problem then is twofold. Going back to our definitions of latency versus frame intervals, FRAPS cannot measure “latency”. The context queue in particular will throw off any attempt to measure true frame latency. The amount of time between Present calls is not the amount of time it took a frame to move through the pipeline, especially if the next Present call was delayed for any reason.

AMD’s second problem then is that even when FRAPS is being used to measure frame intervals, due to the issues we’ve mentioned earlier its data is simply not an accurate representation of what the user is seeing. Not only can FRAPS sometimes encounter anomalies that don’t translate to the end of the rendering pipeline, but FRAPS is going to see stuttering that the user cannot. It’s this last bit that is of particular concern to AMD. If FRAPS is saying that AMD cards are having more stuttering – even if the user cannot see it – then are AMD cards worse?
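To make that concrete, here is a toy model (our own, with made-up numbers, and not representative of any real driver) of how Present calls, a three-deep context queue, and frame completion interact. A single slow CPU frame produces a 40ms spike in the Present-to-Present intervals a FRAPS-style tool would log, while the frames already buffered in the queue keep the display running at a steady 20ms cadence:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main()
{
    // Hypothetical per-frame CPU submission times (ms): a steady 5 ms with
    // one 40 ms hitch, and a GPU that renders every frame in a steady 20 ms.
    std::vector<double> cpuMs = {5, 5, 5, 5, 40, 5, 5, 5, 5, 5};
    const double gpuMs = 20.0;
    const int queueLimit = 3;   // the standard context queue depth

    const size_t n = cpuMs.size();
    std::vector<double> present(n), finish(n);

    for (size_t i = 0; i < n; ++i) {
        // Present fires once the CPU has finished the frame, but only if
        // fewer than queueLimit frames are still in flight; otherwise the
        // application blocks until the GPU retires the oldest one.
        double cpuReady   = (i == 0) ? 0.0 : present[i - 1] + cpuMs[i];
        double queueReady = (i >= (size_t)queueLimit) ? finish[i - queueLimit] : 0.0;
        present[i] = std::max(cpuReady, queueReady);

        // The GPU starts a frame once it has been presented and the GPU is
        // free; when it finishes is (roughly) when the user sees the frame.
        double gpuStart = (i == 0) ? present[i] : std::max(present[i], finish[i - 1]);
        finish[i] = gpuStart + gpuMs;
    }

    std::printf("frame  Present interval (FRAPS)  display interval (user)\n");
    for (size_t i = 1; i < n; ++i)
        std::printf("%5zu  %24.1f  %23.1f\n",
                    i, present[i] - present[i - 1], finish[i] - finish[i - 1]);
    // In this model FRAPS reports a ~40 ms spike at the hitch, while the
    // display keeps receiving a new frame every 20 ms: the queue absorbed it.
    return 0;
}
```

Flip the model around so the GPU is the one that hiccups and the spike shows up at both ends of the pipeline, which is part of why FRAPS still catches severe problems; it is the finer, queue-masked behavior that it cannot resolve.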

To be clear here the goal is to minimize stuttering throughout, and in a bit we’ll see how AMD is doing that and why it was a problem for them in the first place. But AMD is concerned about FRAPS being used in this manner because it can present data that makes stuttering look worse than it is. And in what’s a very human reaction, people pay more attention to bad news than good news; bad data more than good data. Or more simply put, it’s very easy to look at the data FRAPS produces and to see a problem that does not exist. FRAPS doesn’t just lack a complete view of the rendering pipeline; FRAPS data alone also doesn’t provide the context needed to decide which data matters and which does not.

Ultimately, due to how it works, FRAPS is too coarse-grained. It doesn’t have a complete picture of the rendering pipeline, and it’s taking readings from the wrong point in the rendering pipeline. In an ideal world we would like to be able to watch a frame in flight from start to end; to see what millisecond of a game simulation a frame is from, and to compare that against the frame intervals of successive frames. Barring that, we would at least like to see the frame interval at the end of the rendering pipeline where the user is seeing the results, and unfortunately FRAPS can’t do that either.

Adding weight to the whole matter is the fact that FRAPS is one of the few things both AMD and NVIDIA can agree on. In our talks with NVIDIA and in past statements made to the press, NVIDIA dislikes FRAPS being used in this manner for roughly the same reasons. The fact that it’s measuring Present calls instead of the time a frame is actually shown to the user impacts them just as much, and muddles the picture when it comes to trying to differentiate themselves from AMD. Again, this is not to say that NVIDIA thinks FRAPS is a bad tool, but there seems to be a general agreement with AMD’s stance that beyond a certain point it’s the wrong tool for measuring stuttering.

For our part, when we first went into our meeting with AMD we were expecting something a little more standoffish on the matter of FRAPS. Instead what we found was that we were in agreement on the same issues for the same reasons. As you, our readers, are quick to point out, we do not currently do frame interval measurements. We do not do that because we do not currently have any meaningful tools to do so beyond FRAPS, whose workings and limitations we have known about for years now. There are tools in development that will change this, and this is something we’re hopefully going to be able to talk about soon. But in the meantime what we will tell you is the same thing AMD and NVIDIA will tell you: FRAPS is not the best way to measure frame intervals. There is a better way.

Finally, though we’ve just spent a great deal of time talking about FRAPS’ shortfalls when it comes to measuring frame intervals, we’re not going to dismiss it entirely. FRAPS may be a coarse tool, but even a coarse tool is going to catch big problems. And this is exactly what Scott Wasson and other reviewers have seen. At the very start of this odyssey AMD’s single-GPU frame interval problem was so bad that even FRAPS could see it. FRAPS did in fact “bust” AMD, as it were, and for that AMD is even grateful. But as AMD resolves those problems and moves on to finer-grained issues, the tools need to become finer-grained too. And FRAPS as it currently stands cannot make that jump.

GPUView

While we’ve spent most of our discussion of tools on FRAPS and why both AMD and NVIDIA find it insufficient, there are other tools out there. AMD and NVIDIA of course have access to far better tools than we do, and people with the knowledge to use them. This includes their internal tools, tools that are part of their respective SDKs, and other third-party tools.

AMD’s tool of choice here actually comes from Microsoft, and it’s called GPUView.

GPUView is a GPU performance profiling tool, and it gives a very nearly top-to-bottom overview of the rendering pipeline. GPUView can see the command buffers, the Present calls, the context queue, the CPU utilization of various threads, the drivers, and more. In fact, short of being able to tell us the simulation time, GPUView offers the kind of massive data dump any GPU developer, programmer, or even reviewer could ever want.

The only problem with GPUView is that it’s incredibly complex. We’ve tried to use it before and were simply overwhelmed by the data it provides. Furthermore it still doesn’t show us when a GPU buffer swap actually takes place and the user sees a new frame, and that remains the basis of any kind of fine-grained look into stuttering. Ultimately GPUView is a tool meant for seasoned professionals, and it shows.

So why bring up GPUView at all? First and foremost, it’s one of the same tools AMD is using. Understanding something about the tool they use will bring us closer to understanding how they are (or are not) identifying problems in order to fix them. The second reason is that GPUView can show us in practice what up until now we’ve discussed only in theory: where some of the bottlenecks are in the GPU rendering process that lead to stuttering.

AMD’s presentation to us included two slides on GPUView, which in turn we’re including in this article. The first slide is of Crysis 3, and in it we can see a number of frames in flight. Notably, we can also see periods where several CPU threads sit idle, showing us that there is some GPU bottlenecking going on.

The second slide is of GPUView with Unigine Heaven, presenting us with a textbook situation where the GPU is the bottleneck, as Heaven is designed from the start to be a GPU benchmark and has limited CPU usage as a result. Of note, we can see the behavior of Heaven as it waits for the context queue to open up to take another frame. Heaven runs with the standard context queue limit of 3, and we can clearly see the 3 Presents, representing the 3 frames in the queue.
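That queue limit of 3 is simply the Direct3D/DXGI default rather than anything unique to Heaven. For reference (a minimal sketch using the public DXGI interface, with error handling omitted and the function name ours), an application that wants a shallower queue can request one:

```cpp
#include <d3d11.h>
#include <dxgi.h>

// Adjust how many frames the application may queue ahead of the GPU. The
// DXGI default is 3, matching the 3 Presents visible in the Heaven capture;
// a value of 1 reduces latency at the cost of throughput.
void SetContextQueueDepth(ID3D11Device* device, UINT maxQueuedFrames)
{
    IDXGIDevice1* dxgiDevice = nullptr;
    if (SUCCEEDED(device->QueryInterface(__uuidof(IDXGIDevice1),
                                         reinterpret_cast<void**>(&dxgiDevice))))
    {
        dxgiDevice->SetMaximumFrameLatency(maxQueuedFrames);
        dxgiDevice->Release();
    }
}
```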

Ultimately GPUView is just one of many tools, but it does give us a better idea of what’s occurring in the middle of the rendering pipeline. And in AMD’s case it’s one of the better ways to break down the rendering pipeline and track down the issues that have led to their stuttering problems.

Comments

  • JPForums - Tuesday, March 26, 2013 - link

    Their stance wasn't "Everyone else stutters so why should we bother with it." It was closer to "Everyone else stutters and we have no more control over it than they do ... Wait, you're saying they don't stutter as bad as we do ... and fixing our stutter would've actually helped our performance. Aw, nuts." I agree with Spoelie, it was ineptitude, not ill-will.
  • extide - Wednesday, March 27, 2013 - link

    You make absolutely no sense.
  • Galidou - Saturday, March 30, 2013 - link

    Lots of people seem to think AMD makes so many mistakes because every time it happens the world speaks about it more intensely, since they actually admit it instead of trying to HIDE things; they even let sites like AnandTech review their problems. Right now, at home, I have two very comparable systems based on overclocked SNB. One with a 7950, and the other with a GTX 660 Ti. Both play games extremely well, but I have more trouble with my 660 Ti. I get lots of "Display adapter has stopped responding" errors while surfing the internet, when waking up from long idle states, and when playing League of Legends. I switched drivers, did a clean install, and I can't get rid of it totally. No, my card is not overclocked, and I can run FurMark on it all day long without a problem.

    I decided to play on my girlfriend's computer (which has the 7950) when I play League of Legends, because you just can't be interrupted in this kind of game. It even froze in LoL a couple of times; at first I thought it was my SSD, but after reading the dump files I found out it was the 660 Ti. But hey, Nvidia is perfect (we just don't see them speaking of their problems, that's it)... 4 months of it now, thanks a lot... I wrote on the Nvidia forums, did my research, did everything they told me. Good thing it's only with LoL, the internet, and long idles; everything else runs flawlessly.
  • HisDivineOrder - Tuesday, March 26, 2013 - link

    Actually, I think you should reread the article. It's true that fixing these stuttering issues has given them some frame rate improvements, but the article seemed relatively clear on the point that they were focused on frame rates and not frame latency or frame interval or whatever they're choosing to call it today.

    They had all their focus on one aspect of driver development and that actually cost them in that area because they weren't considering a more well-rounded approach.

    I'll grant you AMD was pretty inept though. This has been a problem they've had for years and it's taken them this long to suss it out...
  • piroroadkill - Tuesday, March 26, 2013 - link

    Hey, what about PC Perspective: http://www.pcper.com/reviews/Graphics-Cards/Frame-...

    Looks to me like the best way to measure it, as it uses a DVI capture card to actually capture what the user sees, forgoing any overhead software may have.
  • Rick83 - Tuesday, March 26, 2013 - link

    Yes, that is the correct way to measure frame time.
  • KikassAssassin - Tuesday, March 26, 2013 - link

    Yeah, their method looks like the best of both worlds. It gets around the limitations of FRAPS without the complexity and difficulty of using GPUView.
  • Anand Lal Shimpi - Tuesday, March 26, 2013 - link

    The ideal method would be something that gives us timestamps at both ends of the pipeline, but that's a tall order. The PCPer method is very interesting indeed... ;)
  • Mopar63 - Tuesday, March 26, 2013 - link

    First, very informative article. The issue at hand is that this so-called concern is based on an individual's perception. Remember, we are not talking about stuttering that was so bad as to be noticeable to all gamers. Scott basically had to make a video specifically designed to point out the issue for others to see it originally.

    Because there is no real way to quantify the personal experience, we have an issue in the fact that we now have a measurement craze that is being treated as fact when, in the end, the final result is based on subjective perception.

    Having access to various levels of AMD and NVidia based machines, I can tell you that my gaming experience across them has been pretty uniform in most cases. The cases where I had a bad experience are probably a wash, as they occur on both platforms.

    I think the biggest issue is we sometimes get too caught up in the technology. We let benchmarks and measurement programs dictate to us what we will get the most enjoyment from in our gaming experience. A game is not the frame rates but the play that matters. While frame rates might play a role, it is not the measurement of them that makes that part of the fun.

    At the end of the day the single best test of a video card is not a benchmark suite or tool to measure frame rendering time. The best tool is to play the games you want and see if you get the game experience you desire. Turn off the benchmark and turn on the game, that is the ONLY true test of what is best.
  • HisDivineOrder - Tuesday, March 26, 2013 - link

    Speak for yourself. I've noticed this problem between nVidia and AMD for years. For many years, gamers have said that nVidia cards are "smoother." People didn't listen because they didn't want to hear the truth or because they were likely stuck with one high end from AMD and a low end or medium end from nVidia.

    But comparing equivalent cards, I can tell you my experience has always led inescapably to the "feeling" that the nVidia card is smoother at the same or even slightly lower frame rate.

    This just proves what I "felt" was the case was in fact really the case. If you didn't see it, then that's a fail on the part of your visual acuity or perhaps you had a bias you wanted to see, so you saw less than everything present.

    But the stutter was always there. Now even AMD admits it.
