Revisiting the Radeon HD 7990 & Frame Pacing

Before we jump into our full benchmark suite, the launch of a new AMD dual-GPU card makes this an opportune time to revisit the state of frame pacing on AMD’s cards and to reflect on the Radeon HD 7990, so we’d like to take a moment to do just that.

For AMD the 7990 launched last year at an unfortunate time, just as the subject of frame pacing was finally coming to a head. With the coincidental release of NVIDIA’s FCAT tool it became possible to systematically and objectively measure frame pacing, and those measurements showed that AMD’s frame pacing algorithms were significantly lagging NVIDIA’s. AMD Crossfire setups, including the 7990, were doing little if anything to mete out frames in an even manner, resulting in outcomes ranging from badly paced frames to frames being dropped altogether. Worse, the problem was especially prevalent on multi-display Eyefinity setups, including the pseudo-multi-display methods that are still used today to drive 4K monitors at 60Hz.

The issue of frame pacing had been brewing for some time, and AMD was quick to respond to these concerns and agree that the problem needed to be addressed, but its complex nature meant that it would take some time to fully resolve. AMD’s efforts ultimately took the form of a series of phased Crossfire frame pacing improvements. Phase 1 was released in August, 4 months after the launch of the 7990, and implemented better Crossfire frame pacing for games operating at or below 2560x1600.

The 2560x1600 limitation was a significant one, as it essentially limited AMD’s fixes to single-display setups and excluded Eyefinity and 4K setups. This limit was in turn directly related to the technical underpinnings of AMD’s GCN 1.0 (and earlier) GPUs, which used the Crossfire Bridge Interconnect (CFBI) to share data when running in Crossfire. The CFBI offered just 900MB/sec of bandwidth, which was enough for 2560x1600 but nothing more. To move larger frames between GCN 1.0 GPUs, AMD has to undertake a much trickier process involving the PCI-Express bus.
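To put that 900MB/sec figure in perspective, a quick back-of-envelope calculation makes the limit clear. The sketch below is our own rough estimate, assuming 32bpp frame buffers and alternate frame rendering (where only the second GPU's frames, roughly half of them, need to cross the bridge); real transfers carry additional overhead.

def bridge_bandwidth_mb_s(width, height, fps, bytes_per_pixel=4):
    # Bandwidth needed to move the secondary GPU's finished frames across a
    # bridge, assuming AFR (only every other displayed frame is transferred).
    frame_size_mb = width * height * bytes_per_pixel / 1e6
    return frame_size_mb * (fps / 2)

# 2560x1600 @ 60fps: ~490MB/sec, comfortably within the CFBI's ~900MB/sec
print(bridge_bandwidth_mb_s(2560, 1600, 60))

# 3840x2160 (4K) @ 60fps: ~995MB/sec, more than the CFBI can carry
print(bridge_bandwidth_mb_s(3840, 2160, 60))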

In the intervening period between then and now AMD has released their GCN 1.1 GPUs, which implement the XDMA block specifically to handle frame transfers between GPUs efficiently. The end result is that Hawaii based products – including the R9 295X2 – have no trouble with frame pacing. This in turn makes the R9 295X2 all the more important for AMD, as it’s the company’s first dual-GPU video card to utilize this feature. Otherwise for the 7990 and other GCN 1.0 products, utilizing Crossfire at high resolutions involves a great deal more effort under the hood.

It was only in February of this year that AMD finally rolled out their Phase 2 driver, which implemented their high resolution frame pacing solution for pre-GCN 1.1 video cards. But since that same driver also launched support for AMD’s Mantle API and their Heterogeneous System Architecture, we haven’t had a chance to reevaluate AMD’s frame pacing situation until now. In our full benchmark section we’ll include a complete breakdown of frame pacing performance for both AMD and NVIDIA setups, but first we wanted to stop and take a look at frame pacing for pre-GCN 1.1 cards in particular.

So we’ve set out to answer the following question: now that AMD is supporting high resolution frame pacing on cards such as the 7990, has the 7990 been fully fixed?

The short answer, unfortunately, is that it’s a mixed bag. AMD has made significant improvements since we last evaluated frame pacing on the 7990 back at the 290X launch, which at the time saw the 7990 dropping frames left and right. But AMD has still not come far enough to truly fix the issue, as we’ll see.

We’ll start off with our delta percentage data, which is the average difference in frame times expressed as a percentage. In an ideal world this number would be 0, indicating that every frame was delivered in exactly as much time as the previous one, which would give us a perfectly smooth experience. In practice this is impossible to achieve even in a single-GPU setup, let alone a multi-GPU setup. So for multi-GPU setups our cutoff is 20%: if a GPU can deliver a frame with a variance of no more than 20% of the time the previous frame took, then frame delivery is consistent enough that gameplay should be reasonably smooth and the remaining variance below the bounds of human perception, even if it’s not perfect.
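For those who want to follow along with their own frame time logs, the basic calculation is straightforward. The following is a minimal sketch of how a delta percentage of this sort can be computed from a list of frame times; the exact normalization and filtering in our methodology may differ in the details.

def delta_percentage(frame_times_ms):
    # Average frame-to-frame difference, expressed as a percentage of the
    # average frame time. 0% would mean perfectly even frame delivery.
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    mean_frame_time = sum(frame_times_ms) / len(frame_times_ms)
    return 100 * (sum(deltas) / len(deltas)) / mean_frame_time

# A card alternating between 15ms and 30ms frames scores roughly 67%...
print(delta_percentage([15, 30] * 50))
# ...while mild jitter around 16-18ms stays well under our 20% cutoff (~6%)
print(delta_percentage([16, 17, 16, 18, 17, 16] * 20))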

Radeon HD 7990 Delta Percentages (Catalyst 14.4 Beta)

The end result, as we saw back in August with Catalyst 13.8, is that AMD’s frame pacing situation has been brought under control in single-display (2560x1440 and lower) resolutions, as AMD was able to continue using the CFBI and merely changed their algorithms to better handle frame pacing.

However the state of frame pacing for high resolution Crossfire, which requires invoking the PCIe bus, is still fundamentally broken. Of the games we have that scale with multiple GPUs, the best result is Bioshock: Infinite with a 48.5% delta, 3x the variance seen at 2560x1440. It gets worse from there, going as high as 70% for Thief. To be clear, this is a significant improvement over the 7990 that was dropping frames before AMD’s latest fix, but the deltas are still more than twice what we believe the cutoff should be.

Radeon R9 295X2 Delta Percentages (Catalyst 14.4 Beta)

The Radeon R9 295X2 by comparison fares much better. Not only are AMD’s deltas below 20% on everything but Crysis 3 (where it’s essentially skirting that value), but in most of our games the variance drops with the increased resolution, rather than massively increasing as it does with the 7990. This is the kind of chart we’d like to see for the 7990 as well, and not just the R9 295X2.

Our final graph is a plot of frame times on both cards on the main menu of Thief, showcasing how the cards compare and giving us a visual for just what’s going on. As our delta percentages picked up on, the 7990’s frame times are all over the place, with the card frequently cycling between 15ms frame times and 30ms frame times. This is as opposed to the R9 295X2, which is relatively consistent throughout.

What makes this all the more interesting though – and is something we’ve seen on other charts – is that the 7990’s variance drops towards the end. It’s still unquestionably worse than the R9 295X2 and exceeds our 20% threshold, but compared to the worst point on the chart it has come close to being halved.

This data indicates that for pre-GCN 1.1 cards AMD is relying on some kind of long term adaptive timing mechanism that takes quite some time (at least a minute) to kick in, and only after that point do AMD’s frame pacing mechanisms exert enough control to better regulate frame timings. We’ve known since the launch of the 290X that AMD is using some kind of short term adaptive timer for the 290X and for single-display resolutions on pre-GCN 1.1 cards, but this is the first time we’ve seen a long term adaptive timer in use.
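One way to spot this behavior in raw frame time logs is to compute the same delta percentage over a sliding window of frames, so that the point where the pacing logic finally engages shows up as a downward trend. The sketch below is our own illustration; the 120-frame window is an arbitrary choice.

def rolling_delta_percentage(frame_times_ms, window=120):
    # Delta percentage over a sliding window; a long term adaptive pacing
    # mechanism appears as this value falling over the course of a run.
    results = []
    for i in range(len(frame_times_ms) - window + 1):
        chunk = frame_times_ms[i:i + window]
        deltas = [abs(b - a) for a, b in zip(chunk, chunk[1:])]
        mean_ft = sum(chunk) / len(chunk)
        results.append(100 * (sum(deltas) / len(deltas)) / mean_ft)
    return results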

The end result is that in extended play sessions the frame pacing situation on the 7990 should be better than what we’re seeing with our relatively short benchmarks. However it also means that the initial frame pacing situation will be very bad, and that even in the best case scenario, after a few minutes the frame time variance is still well above the 20% threshold needed to make frame times reasonably consistent.

To that end, though the 7990 is much improved, it’s hard to say that the frame pacing situation has been fixed. To be sure, single-display performance is fine and has been fine since August, but even with AMD’s most recent changes the 7990 (and presumably other pre-GCN 1.1 cards) is still struggling to deliver frames at an even pace. For a card originally touted as the perfect card for 4K, that is not a great outcome.

Comments

  • CiccioB - Tuesday, April 8, 2014 - link

    Well, no, not exactly. It's one thing not to be PCI compliant, and that's something I can understand. It's another thing to go beyond the connectors' electrical power specifications. If they had put 3 connectors on it I would not have had any problem. But as it is they are pushing past component specifications, not just guidelines on maximum size and power draw.
  • meowmanjack - Tuesday, April 8, 2014 - link

    If you look at the datasheet for the power connector (I'm guessing on the part number but the Molex part linked below should at least be similar enough), each pin is rated for 23 A and the housing can support a full load on each pin. Even if only 3 pairs are passing current, the connector can deliver over 800W at 12V.

    The limiting factor for how much power can be drawn from that connector is going to be the copper width and thickness on the PCB. If AMD designed the board to carry ~20 A off each connector (which they presumably have), it won't cause a problem.
  • meowmanjack - Tuesday, April 8, 2014 - link

    Oops, forgot the datasheet
    http://www.molex.com/molex/products/datasheet.jsp?...
  • behrouz - Tuesday, April 8, 2014 - link

    Thanks for the link, my doubts are finally resolved.
  • Ian Cutress - Tuesday, April 8, 2014 - link

    Most of the power will be coming from the PCIe power connectors, not the lane itself. If you have 5/6/7 in a single system, then yes you might start to see issues without the appropriate motherboard power connectors.
  • dishayu - Tuesday, April 8, 2014 - link

    I'm yet to read the review but FIVE HUNDRED WATTS? WOW!
  • Pbryanw - Tuesday, April 8, 2014 - link

    I'd be more impressed if it drew 1.21 Jigawatts!! :)
  • krazyfrog - Tuesday, April 8, 2014 - link

    On the second to last page, the second to last chart is of load GPU temperature when it should be load noise levels.
  • piroroadkill - Tuesday, April 8, 2014 - link

    Reasonable load noise and temps, high performance. Nice.

    You'll want to get the most efficient PSU you can get your mitts on, though.

    Also, I would seriously consider a system that is kicking out 600 Watts of heat to be something you wouldn't want in the same room as you. Your AC will work overtime, or you'll be sweating your ass off.

    A GPU for Siberia! But then, that's not really a downside as such, just a side effect of having a ridiculous amount of power pushing at the edges of this process node.
  • Mondozai - Tuesday, April 8, 2014 - link

    "Reasonable noise and temps"? It is shockingly quiet during load for a dual GPU card. And it has incredibly low GPU temps, too.

    As for heat, not really, only if you have a badly ventilated room in general or live in a warm climate.
