In the last year, stuttering, micro-stuttering, and frame interval benchmarking have become a very big deal in the world of GPUs, and for good reason. Through the hard work of the Tech Report’s Scott Wasson and others, significant stuttering issues were uncovered involving AMD’s video cards, breaking long-standing perceptions about stuttering, where the issues lie, and which GPU manufacturer (if any) does a better job of handling the problem. The end result of these investigations has seen AMD embarrassed, and rightfully so, as it turned out their cards were stuttering far worse than they thought, and more importantly far worse than NVIDIA’s.

The story does not stop there, however. As AMD has worked on fixing their stuttering issues, the methodologies pioneered by Scott have gone on to gain wide acceptance across the reviewing landscape. This has the benefit of putting more eyes on the problem and helping AMD find more of their stuttering issues, but as it turns out it has also created some problems. As we laid out in detail yesterday in a conversation with AMD, the current methodologies rely on coarse tools that don’t have a holistic view of the entire rendering pipeline. As such, while these tools can see the big problems that started this wave of interest, their ability to see small problems and to distinguish stuttering from other issues is very limited. Too limited.

In their conversation AMD laid out their argument for a change in benchmarking. A rationale for why benchmarking should move from using tools like FRAPS that can see the start of the rendering pipeline, and towards other tools and methods that can see the end of the rendering pipeline. And AMD was not alone in this; NVIDIA too has shown concern about tools like FRAPS, and has wanted to see testing methodologies evolve.

That brings us to this week. Often evolution is best left to occur naturally. But other times evolution needs a swift kick in the pants. This week NVIDIA has decided to give evolution that swift kick in the pants. This week NVIDIA is introducing FCAT.

FCAT, the Frame Capture Analysis Tool, is NVIDIA’s take on what the evolution of frame interval benchmarking should look like. By moving the measurements of frame intervals from the start of the rendering pipeline to the end of the pipeline, FCAT evolves the state of benchmarking by giving reviewers and consumers alike a new way to measure frame intervals. A year and a half ago the use of FRAPS brought a revolution to the 3D game benchmarking scene, and today NVIDIA seeks to bring about that revolution all over again.

FCAT is a powerful, insightful, and perhaps above all else labor intensive tool. For these reasons we are going to be splitting up our coverage on FCAT into two parts. Between trade shows and product launches we simply have not had enough time to put together a complete and proper dataset for FCAT, so rather than do this poorly, we’re going to hold back our results until we’ve had a chance to run all of the FCAT tests and scenarios that we want to run.

In part one of our series on FCAT, today we will be taking a high-level overview of FCAT: how it works, why it’s different from FRAPS, and why we are so excited about this tool. Meanwhile next week will see the release of part two of our series, in which we’ll dive into our results, utilizing FCAT to its full extent to look at where it sees stuttering and under what conditions. So with that in mind, let’s dive into FCAT.

Reprise: When FRAPS Isn’t Enough

Since we covered the subject of FRAPS in great detail yesterday, we’re not going to completely rehash it. But for those of you who have not had the time to read yesterday’s article, here’s a quick rundown of how FRAPS measures frame intervals, and why at times this can be insufficient.

Direct3D (and OpenGL) uses a complex rendering pipeline that spans several different mechanisms and stages. When a frame is generated by an application, it must travel through the pipeline to Direct3D, the video drivers, a frame queue (the context queue), a GPU scheduler, the video drivers again, the GPU, and finally after that a frame can be displayed. The pipeline analogy is used here because that’s exactly what it is, with the added complexity of the context queue sitting in the middle of that pipeline.

FRAPS for its part exists at almost the very beginning of this pipeline. It interfaces with individual applications and intercepts the Present calls made to Direct3D that mark the end of each frame. By counting Present calls FRAPS can easily tell how many frames have gone into the pipeline, making it a simple and effective tool for measuring average framerates.
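As a rough sketch (the timestamps below are made up for illustration, and FRAPS’ actual hooking internals aren’t shown), counting Present calls against elapsed time is all it takes to derive an average framerate:

```python
# Hypothetical timestamps (in seconds) at which an application called
# Present(); a FRAPS-style frontend only needs these to compute avg FPS.
present_timestamps = [0.000, 0.017, 0.033, 0.050, 0.067, 0.083, 0.100]

frames = len(present_timestamps) - 1                      # intervals, not calls
elapsed = present_timestamps[-1] - present_timestamps[0]  # total wall time
avg_fps = frames / elapsed

print(f"average framerate: {avg_fps:.1f} fps")  # → average framerate: 60.0 fps
```

This is why average framerates from FRAPS are trustworthy: every frame that will ever be displayed must first pass through Present, so over a long enough run the count at the front of the pipeline matches the count at the back.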

The problem with FRAPS, as it were, is that while it can also be used to measure the intervals between frames, it can only do so at the start of the rendering pipeline, by counting the time between Present calls. This, while better than nothing, is far removed from the end of the pipeline where the actual buffer swaps take place, and is ultimately equally removed from the end-user experience. Furthermore, because FRAPS sits so far up the rendering pipeline, it’s insulated from what’s going on elsewhere; the context queue in particular can hold up to 3 frames, which means the rate of flow into the context queue can at times be very different from the rate of flow out of it.
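A toy model, with entirely hypothetical numbers, shows how this insulation plays out: if an application Presents frames at a perfectly steady cadence while the GPU takes alternating long and short times to actually render them, a FRAPS-style measurement at the front of the pipeline sees uniform intervals even as the display sees pronounced micro-stutter.

```python
# Toy model: frames enter the pipeline at a steady cadence (what FRAPS
# sees at Present time), but the GPU takes alternating long/short times
# to render them (what the display sees). All times in milliseconds.
SUBMIT_INTERVAL = 16.7        # steady Present() cadence at the pipeline start
RENDER_TIMES = [10, 23] * 8   # alternating per-frame GPU render times

submit_times = [i * SUBMIT_INTERVAL for i in range(len(RENDER_TIMES))]

display_times = []
gpu_free_at = 0.0
for submit, render in zip(submit_times, RENDER_TIMES):
    start = max(submit, gpu_free_at)   # GPU takes a frame when both are ready
    gpu_free_at = start + render
    display_times.append(gpu_free_at)  # frame reaches the screen here

frontend = [round(b - a, 1) for a, b in zip(submit_times, submit_times[1:])]
backend = [round(b - a, 1) for a, b in zip(display_times, display_times[1:])]

print("FRAPS-style intervals:", frontend[:6])  # all 16.7
print("Display intervals:    ", backend[:6])   # alternating ~10/23 ms
```

The frontend measurement reports a flawless 16.7ms per frame, while the intervals at the display swing between roughly 10ms and 23ms, which is exactly the kind of oscillation a viewer perceives as micro-stutter.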

As a result, FRAPS is best described as a coarse tool. It can see particularly egregious stuttering situations – like what AMD has been experiencing as of late – but it cannot see everything. It cannot see stuttering issues the context queue hides, and it’s particularly blind to what’s going on in multi-GPU scenarios.

Enter FCAT

  • arbiter9605 - Wednesday, April 17, 2013 - link

    As Ryan Shrout has talked about, frame issues with AMD cards happen in games where the GPU is the limiting factor in fps. In games that lean on the CPU more than the GPU, the issue doesn't show up. So the reason AMD CPUs show the problem less than Intel CPUs is most likely that the AMD CPU is maxing out well before the GPU gets to its max. It's pretty common that per core and per watt, Intel is faster.
  • Unoquepasaba - Tuesday, April 9, 2013 - link

    This tool is fucking unnecessary.

    They should be busy fixing the bugs that make some games, like Darksiders 2 or Bioshock Infinite, disgusting to play.

    We can see that AMD is doing a lot of damage; the best proof is that the new consoles carry their hardware, while Nvidia says it isn't interested in that market yet throws out a console of its own, lol.

    Sorry for my English, it is Google :)
  • ARealAnand - Tuesday, April 9, 2013 - link

    I do not believe your statement on patents is completely accurate. Patents do cover a specific implementation of an idea, although they also cover independent discoveries of that same implementation and clean-room reverse engineering of said implementation. What I mean by a specific implementation is that, taking the RSA cryptosystem as an example, the patent did not cover all possible implementations of a public key cryptosystem, although it might have been the only way to implement a public key system at the time. I'm reasonably certain that if Diffie-Hellman had been known at the time, people could have used it without infringing any patents, just as it was used once it was discovered. Similarly, the LZW patents didn't cover generic data compression but instead a compression system that used less processing than LZ77/LZ78. People working on patent-free audio and video codecs have been finding different ways to avoid various algorithms for some time now. I am not a lawyer or a trained solicitor. I have not taken any formal classes in practicing law and this post should not be taken as offering professional legal advice. I appreciate your comment and just felt it could use some clarification.
  • ARealAnand - Tuesday, April 9, 2013 - link

    I apologize. I meant this comment to be a reply to zanon's comment above.
  • 529th - Tuesday, April 9, 2013 - link

    Where can this tool be downloaded from?
  • marraco - Sunday, April 14, 2013 - link

    I wonder how 3DMark will react to this evolution, because it gives scores based on FPS, but this kind of analysis requires an add-on card to capture video.
    Will 3DMark ignore frame-by-frame analysis?
    Will it require more hardware (a capture card)?
    Will it resort to a software solution (questionable)?
    Will it give the user a choice of methodology (giving different, non-comparable results)?

    I hope that AMD also gives us a similar tool. Nvidia and AMD have both tried to cheat with their software before, and if NVIDIA is the only one providing tools…

    I’m happy with this evolution. I ranted for a long time about noticeable freezes in benchmarks and games that reported great FPS (on single cards, not SLI stuttering), but I felt frustrated by benchmark review sites.
    Now I’m worried about not having the same hardware as reviewers, and not being able to verify websites’ claims/results.
    Frame-by-frame (FbF) analysis is a step in the right direction, but it makes us consumers vulnerable to companies corrupting websites to get better, non-reader-verifiable results.
  • arbiter9605 - Wednesday, April 17, 2013 - link

    Um, not sure there is anything AMD can add to this. The DX overlay adds colors to each frame before it gets to the subsystems, and the colors are analyzed to confirm they appear in a certain order. If a color is missing, then a frame was dropped. I wouldn't say AMD is trying to cheat, but they surely have an issue with frames being dropped or being so small they don't improve the gaming experience. When you remove those tiny and dropped frames from the fps numbers, it paints a different picture of which card is faster in SLI/CF setups. With a single-card setup, which card is better is a toss-up.
  • dreamer77dd - Wednesday, April 24, 2013 - link

    The first time I read about this was on PC Perspective.
    http://www.pcper.com/reviews/Graphics-Cards/Frame-...

    This article you have written is a good read. It's all a new direction :)
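The overlay analysis that several comments above describe – each frame tagged with a color from a repeating palette, a skipped color indicating a dropped frame, and a very thin color band indicating a runt – can be sketched roughly as follows. The 16-color palette and 20-scanline runt threshold here are assumptions for illustration, not FCAT’s actual parameters, and the real tool extracts the color bands from captured video rather than taking them as input:

```python
# Sketch of a color-sequence check: frames carry colors from a repeating
# palette, so a skipped color means a dropped frame and a band only a few
# scanlines tall means a runt frame.
PALETTE_SIZE = 16  # assumed number of repeating overlay colors
RUNT_SCANLINES = 20  # assumed threshold below which a band counts as a runt

def analyze(observed):
    """observed: list of (color_index, scanline_height) per displayed band."""
    dropped, runts = [], []
    expected = observed[0][0]
    for color, height in observed:
        while color != expected:  # skipped palette entries = dropped frames
            dropped.append(expected)
            expected = (expected + 1) % PALETTE_SIZE
        if height < RUNT_SCANLINES:  # tiny band = runt frame
            runts.append(color)
        expected = (expected + 1) % PALETTE_SIZE
    return dropped, runts

# Color 2 never appears (dropped); color 1 spans only 5 scanlines (runt).
print(analyze([(0, 360), (1, 5), (3, 715)]))  # → ([2], [1])
```

Once drops and runts are identified this way, they can be excluded from the framerate math, which is what changes the picture in multi-GPU comparisons.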
