The Test

For the purposes of our testing we’ll be looking at the six games we’ve adopted for use with FCAT due to their proven reliability: Total War: Shogun 2, Hitman: Absolution, Sleeping Dogs, Battlefield 3, BioShock Infinite, and Crysis 3. Unless otherwise noted, all of our results use Catalyst 13.8 Beta 1 for the AMD cards and NVIDIA’s 326.19 beta drivers for the GeForce cards.

Our metric of choice for measuring frame times and frame pacing is something we’re calling delta percentages. To calculate it, we collect the deltas (differences) between consecutive frame times, average them, and then divide that average delta by the average frame time of the entire run. The end result is a measure of whether sequential frames are rendering in roughly the same amount of time, while controlling for performance differences by looking at the data relative to the average frame time (rather than as absolute time). This gives us the average frame-to-frame time difference as a percentage.
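
For reference, below is a minimal sketch of that calculation, assuming frame times in milliseconds (e.g. as parsed from a FRAPS/FCAT frame time dump); the function name and structure are ours, not part of FCAT itself.

```python
# A minimal sketch of the delta percentage calculation described above.
# We assume frame times in milliseconds; the function name is ours.

def delta_percentage(frame_times_ms):
    """Average frame-to-frame time difference, expressed as a percentage
    of the average frame time for the entire run."""
    if len(frame_times_ms) < 2:
        raise ValueError("need at least two frames")

    # Deltas: absolute differences between consecutive frame times
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]

    avg_delta = sum(deltas) / len(deltas)
    avg_frame_time = sum(frame_times_ms) / len(frame_times_ms)
    return 100.0 * avg_delta / avg_frame_time

# Perfectly even pacing scores 0%; alternating 10 ms / 30 ms frames score 100%.
print(delta_percentage([20.0, 20.0, 20.0, 20.0]))  # 0.0
print(delta_percentage([10.0, 30.0, 10.0, 30.0]))  # 100.0
```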

[Chart: BioShock Infinite - Delta Percentages - 2560x1440 - Ultra Quality + DDoF]

In general, a properly behaving single-GPU card should have a delta average of under 3%, with the specific value depending in part on how variable the workload is throughout any given game benchmark. 3% may sound small, but because it’s an average weighted across the entire run, the higher the percentage, the more unevenly frames are arriving. For a multi-GPU setup we’d ideally like to see delta percentages equal to our single-GPU setups, but for the most part that’s unreasonable. There is no hard number for what is or isn’t right here, but based on play testing we’d say 15%-20% is a reasonable threshold for acceptable variance, with anything under 10% being very good for a multi-GPU setup.
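
Expressed as a rough rule of thumb, the thresholds work out to something like the sketch below; the cutoffs come from our play testing and are judgment calls, not hard rules.

```python
# A rough classifier reflecting the thresholds discussed above. The exact
# cutoffs are judgment calls from our play testing, not hard rules.

def rate_frame_pacing(delta_pct, multi_gpu):
    if not multi_gpu:
        return "good" if delta_pct < 3.0 else "suspect"
    if delta_pct < 10.0:
        return "very good"
    if delta_pct <= 20.0:
        return "acceptable"
    return "poor"

print(rate_frame_pacing(2.1, multi_gpu=False))   # good
print(rate_frame_pacing(14.0, multi_gpu=True))   # acceptable
```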

Finally, in our testing we encountered an issue with Catalyst 13.8 that required making some slight adjustments to FCAT to compensate, so we need to make note of it. For reasons we can’t sufficiently explain at this time, but which AMD has confirmed, in some cases in CrossFire mode AMD’s latest drivers periodically draw small slices of old frame buffers at the top of the screen. The gameplay impact is minimal-to-nonexistent, but the problem throws off FCAT badly.

To quickly demonstrate the problem, below we have two consecutive frames from one of our Battlefield 3 runs. The correct FCAT color order here is dark blue, green, light blue, and olive. The frames corresponding to dark blue and green occur on frame one, and light blue and olive on frame two. Yet looking at frame two, we see a small 6-pixel-high stripe of dark blue at the very top of the image. At this point the dark blue frame should have already been discarded, as the cards have moved on to the green and later light blue frames. Instead we’re getting a very small slice of a frame that is essentially two frames old.

The gameplay impact from this is trivial to none; the issue never exceeds a 6-pixel slice, only occurs at the top of the frame (which is generally skybox territory), and is periodic to the point where it occurs at most a few times per minute. Based on our experience it primarily occurs when a buffer swap should be happening during or right after the start of a new refresh cycle, which is why it’s so periodic.

However the larger issue is that FCAT detects this as a frame drop, believing that over a dozen frames have been dropped. This isn’t actually possible of course – the context queue isn’t large enough to hold that many frames – and analysis shows that it’s actually part of the old frame as we’ve explained earlier. As such we’ve had to modify FCAT to ignore this issue so that it doesn’t find these slices and count them as dropped frames. The issue is real enough (this isn’t a capture error) and AMD will be fixing it, but it’s not evidence of a dropped frame as the stock implementation of FCAT would assume.
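
To give a rough sense of the kind of adjustment involved, here is a minimal sketch, not our actual FCAT patch: tiny runs of stale overlay color at the very top of a captured frame are discarded before drop detection runs. The run representation, names, and exact cutoff below are assumptions of ours.

```python
# A minimal sketch, in the spirit of our FCAT adjustment, of filtering the
# stale slices out of a captured frame before drop detection. We assume the
# FCAT overlay for each captured frame has already been reduced to a list of
# (color, height_in_scanlines) runs; the names and the cutoff are ours.

STALE_SLICE_MAX_HEIGHT = 8  # the artifact never exceeded a ~6 scanline slice

def filter_stale_slices(color_runs):
    """Drop a tiny leading run at the very top of a captured frame.

    Such runs are the two-frame-old slices described above, not genuinely
    dropped frames, so they should not be passed on to drop detection.
    """
    if len(color_runs) >= 2 and color_runs[0][1] <= STALE_SLICE_MAX_HEIGHT:
        return color_runs[1:]
    return color_runs

# Example: a 6-scanline stale dark blue slice atop an olive frame
print(filter_stale_slices([("dark_blue", 6), ("olive", 1434)]))  # [('olive', 1434)]
```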

Ultimately our best guess is that AMD is somehow mistiming their buffer swaps, as the two-frame-old nature of the issue correlates nicely with the fact that the dark blue and light blue frames would both be generated by the same GPU in a two-GPU setup.

CPU: Intel Core i7-3960X @ 4.3GHz
Motherboard: EVGA X79 SLI
Power Supply: Antec True Power Quattro 1200
Hard Disk: Samsung 470 (256GB)
Memory: G.Skill Ripjaws DDR3-1867 4 x 4GB (8-10-9-26)
Case: Thermaltake Spedo Advance
Monitor: Samsung 305T
Video Cards: AMD Radeon HD 6990
AMD Radeon HD 7970GE
AMD Radeon HD 7990
NVIDIA GeForce GTX 590
NVIDIA GeForce GTX 680
NVIDIA GeForce GTX 690
Video Drivers: NVIDIA ForceWare 326.19
AMD Catalyst 13.5 Beta 2
AMD Catalyst 13.6 Beta 2
AMD Catalyst 13.8 Beta 1
OS: Windows 8 Pro

 

Comments

  • chizow - Wednesday, August 7, 2013 - link

    There were discussions of microstutter associated with multi-GPU on various forums, but PCGH was the first site to publish its findings in detail, with both video evidence and hard data. From what I remember, they were the first to develop the methodology of using FRAPS frame times and graphing the subsequent results to illustrate microstutter.
  • BrightCandle - Friday, August 2, 2013 - link

    One of the most shocking revelations to me is that AMD's quality assurance did not include checking the output of their cards frame by frame. I had always assumed that both NVIDIA and AMD had HDMI/DVI/VGA recorders that allowed them to capture the output of their cards so they could check them pixel by pixel, frame by frame, and presumably verify they were correct automatically.

    Such a technology would clearly have shown the problem immediately. I am stunned that these companies don't do that. Even FCAT is a blatantly blunt tool, as it doesn't say anything about the contents of the frames. We still don't have any way to measure end-to-end latency for comparison either. All in all there is much left to do, and I am not confident that either company is testing these products well; it's just that I couldn't believe AMD wasn't testing theirs for consistency at all (it was obvious when you played that something was wrong).
  • krutou - Friday, August 2, 2013 - link

    AMD is in the business of being the best performance per price entry in every market segment. Technology and quality come second.

    How often does AMD introduce and/or develop technologies for their graphics cards? The only two that come to mind are Eyefinity and TressFX (100 times more overhyped than PhysX).
  • Death666Angel - Saturday, August 3, 2013 - link

    I think ATI had tessellation in their old DX8 chips. NVIDIA bought PhysX, so that shouldn't count. But I don't really see how having exclusive technology usable by a single GPU vendor is a good thing. We need standardization and everybody having access to the same technologies (albeit with different performance deltas). Look at the gimmicky state of PhysX and imagine what it could be if NVIDIA allowed it to be fully utilized by CPUs and AMD GPUs.
  • krutou - Saturday, August 3, 2013 - link

    Because OpenCL and TressFX are doing so well, right?
  • bigboxes - Sunday, August 4, 2013 - link

    March on, fanboi.
  • JamesWoods - Sunday, August 4, 2013 - link

    If you think that is all AMD/ATI has ever done for graphics, then you, sir, are ignorant. I was going to use a more degrading word there and thought better of it.
  • Will Robinson - Friday, August 2, 2013 - link

    LOL...what a load of tosh.
    "NVDA had to take them by the hand"?
    You and Wreckage ought to post in green text.
  • chizow - Friday, August 2, 2013 - link

    Agree with pretty much all of this, although I would direct a lot of the blame at AMD's most loyal, enthusiastic supporters as well. Every time microstutter was mentioned and identified as being worst with AMD solutions, AMD's biggest fans would get hyperdefensive about it. If those most likely to have a problem were too busy denying any problem existed, it really should be no surprise it was never fixed.

    And this is the result. Years of denial and broken CF, finally fixed as a result of the scrutiny from the press and laughter of Nvidia fans which brought this to a head and forced AMD to take a closer look and formulate a solution.
  • EJS1980 - Friday, August 2, 2013 - link

    "Truth favors not one side."
