
Catalyst 13.8 Results in Summary, Cont

Up next, let’s take a quick look at how the 7990 with frame pacing compares to NVIDIA’s GTX 690. NVIDIA’s frame pacing has been the gold standard thus far, so let’s see how close AMD has come to NVIDIA on their first shot.

Delta Percentages: AMD Radeon HD 7990 vs. GeForce GTX 690

Frankly the results aren’t flattering for AMD here, although to keep things in perspective they’re not terrible either. In every last game the GTX 690 has much lower frame time variability than the 7990; NVIDIA has been working on this problem a lot longer than AMD has, and it shows. While it’s true this is an absolute metric when it comes to comparing results – AMD experiences more than two times the frame time variation in 5 of the 6 games – keep in mind we’re looking at the variance in frame times rather than the frame times themselves, effectively a first-order derivative. What this means is that AMD clearly still has room for improvement, but AMD’s approximately 20% delta percentages are not a poor showing in this metric; for every individual there exists a point below which frame time variations cease to be perceptible.
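For reference, the delta percentage metric itself is straightforward to compute. The sketch below is a minimal illustration (the function name and sample frame times are our own, not part of any FCAT tooling): it takes per-frame render times in milliseconds and reports the average frame-to-frame change as a percentage of the average frame time.

```python
# Sketch of the delta percentage metric: the mean absolute change
# between consecutive frame times, as a percentage of the mean
# frame time. Illustrative only; the data below is made up.

def delta_percentage(frame_times_ms):
    """Mean |difference| between consecutive frame times,
    expressed as a percentage of the mean frame time."""
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    mean_delta = sum(deltas) / len(deltas)
    mean_frame_time = sum(frame_times_ms) / len(frame_times_ms)
    return 100.0 * mean_delta / mean_frame_time

# A well-paced run: frame times hover near 16.7 ms (60 fps).
paced = [16.7, 16.9, 16.5, 16.8, 16.6, 16.7]
# An unpaced run: alternating short/long frames, the classic
# microstutter signature frame pacing is meant to suppress.
unpaced = [8.0, 25.0, 9.0, 24.0, 8.5, 25.5]

print(f"paced:   {delta_percentage(paced):.1f}%")
print(f"unpaced: {delta_percentage(unpaced):.1f}%")
```

Both runs average about 60 fps, yet the paced run scores a low single-digit percentage while the alternating run scores near 100% – which is exactly why the delta percentage catches stutter that a plain average frame rate hides.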

While we’re on the matter of this comparison, it’s very much worth pointing out that while AMD can’t match NVIDIA’s delta percentages at this time, the same cannot be said for runt and dropped frames. Throughout our tests on Catalyst 13.8 AMD delivered 0 runt frames and dropped 0 frames. This is a massive improvement over Catalyst 13.6, which regularly delivered runt frames and at times dropped frames as well. In fact even NVIDIA can’t do this well; the GTX 690 doesn’t drop any frames, but it does deliver a small number of runt frames (particularly towards the start of certain benchmarks). So on their very first attempt AMD is already beating NVIDIA on runt frames, a metric NVIDIA pioneered in the first place.
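Since runt and dropped frames come up here, a quick sketch of how an FCAT-style analysis classifies them may help. This is an illustration only: the 21-scanline runt threshold is the commonly cited FCAT convention, and the function and sample capture are hypothetical.

```python
# Hedged sketch of FCAT-style frame classification from per-frame
# on-screen scanline counts (how many scanlines of each rendered
# frame actually reached the display). A frame with zero scanlines
# was never shown (dropped); one below the threshold is a runt.

RUNT_THRESHOLD = 21  # scanlines; the commonly used FCAT convention

def classify_frames(scanlines_per_frame):
    dropped = sum(1 for s in scanlines_per_frame if s == 0)
    runts = sum(1 for s in scanlines_per_frame if 0 < s < RUNT_THRESHOLD)
    full = len(scanlines_per_frame) - dropped - runts
    return {"dropped": dropped, "runt": runts, "full": full}

# Example capture: mostly full frames, one 4-scanline runt, one drop.
sample = [540, 530, 4, 545, 0, 538]
print(classify_frames(sample))  # {'dropped': 1, 'runt': 1, 'full': 4}
```

The key point of the 13.8 results is that on this classification AMD's counts are simply zero across the board.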

We’ve also posted the FCAT graphs for the 7990 versus the GTX 690 below. We can clearly see the higher variation of the 7990, while we also see a few more instances of late frames on the GTX 690 than we do on the 7990.

Moving on, we wanted to quickly compare D3D9 to D3D11 performance on the 7990. As a reminder AMD’s frame pacing mechanism isn’t enabled for D3D9, so this gives us a quick chance to look at the difference. The only title in our collection that is D3D9 capable is Total War: Shogun 2, so we’ll use that.

And there you go. Frame pacing is not available on D3D9, leading to much more variable results for the 7990 when using the D3D9 path, even though that path is otherwise faster due to its simpler effects. AMD will ultimately address D3D9 in a later phase, but in the meantime this reinforces the need for a switch to turn off CrossFire on dual-GPU cards like the 7990. NVIDIA allows this, and AMD lets you do it on multi-card setups, but with the 6990 and 7990 you are unfortunately locked into CrossFire mode at all times.

Finally, while it’s not something we can properly measure, we did want to touch upon the matter of input lag. AMD’s earlier position that frame pacing comes at some cost in input lag was not wrong. At some level, adding frame pacing is going to increase input lag, since frames are being held back. The question is to what extent, and whether that is acceptable.

The short answer is that while we can’t give the issue the full attention it deserves without a high speed camera (something we don’t have), subjective testing still tells us quite a bit. If there is a difference in input lag from enabling frame pacing, it’s not something we’re able to perceive. Despite AMD’s concerns about input lag, based on the usage testing we’ve done we have no problem saying that enabling frame pacing by default was the right move. In our experience there’s simply no reason not to enable it.

102 Comments

  • chizow - Friday, August 02, 2013 - link

    That makes sense, but I guess the bigger concern from the outset was how AMD's allowance of runt frames/microstutter in an "all out performance" mentality might have overstated their performance. You found in your review that AMD performance typically dropped 5-10% as a result of this fix; that should certainly be considered, especially if AMD isn't doing a good job of making sure they implement this frame time fix across all their drivers, games, APIs etc.

    Also, any word whether this is a driver-level fix or a game-specific profile optimization (like CF, SLI, AA profiles)?
    Reply
  • Ryan Smith - Friday, August 02, 2013 - link

    The performance aspect is a bit weird. To be honest I'm not sure why performance was up with Cat 13.6 in the first place. For a mature platform like Tahiti it's unusual.

    As for the fix, AMD has always presented it as being a driver level fix. Now there are still individual game-level optimizations - AMD is currently trying to do something about Far Cry 3's generally terrible consistency, for example (an act I'm convinced is equivalent to parting the Red Sea) - but the basic frame pacing mechanism is universal.
    Reply
  • Thanny - Thursday, August 01, 2013 - link

    Perhaps this will be the end of the ludicrous "runt" frame concept.

    All frames with vsync disabled are runts, since they are never completely displayed. With a sufficiently fast graphics card and/or a sufficiently undemanding game, every frame will be a runt even by the arbitrary definitions you find at sites like this.

    And all the while, nothing at all is ever said about the most hideous artifact of all - screen tearing.
    Reply
  • Asik - Thursday, August 01, 2013 - link

    There is a simple and definite fix for tearing artifacts and you mention it yourself - vsync. If screen tearing bothers you, and I think it should bother most people, you should keep vsync on at all times.
    Reply
  • chizow - Thursday, August 01, 2013 - link

    Vsync or frame limiters are certainly workarounds, but it also introduces input lag and largely negates the benefit of having multiple powerful GPUs to begin with. A 120Hz monitor would increase the headroom for Vsync, but also by nature reduces the need for Vsync (there's much less tearing).
    Reply
  • krutou - Friday, August 02, 2013 - link

    Triple buffering solves tearing without introducing significant input lag. VSync is essentially triple buffering + frame limiter + timing funny business.

    I have a feeling that Nvidia's implementation of VSync might actually not have input lag due to their frame metering technology.

    Relevant: http://www.anandtech.com/show/2794/3
    Reply
  • chizow - Saturday, August 03, 2013 - link

    Yes this is certainly true; when I was on 60Hz I would always enable Triple Buffering when available. However, TB isn't the norm and few games implemented it natively. Even fewer implemented it correctly; most use a 3-frame render-ahead queue, similar to the Nvidia driver forcing it, which is essentially a driver hack for DX.

    Having said all that, TB does still have some input lag even at 120Hz even with Nvidia Vsync compared to 120Hz without Vsync (my preferred method of gaming now when not using 3D).
    Reply
  • vegemeister - Monday, August 05, 2013 - link

    The amount of tearing is independent of the refresh rate of your monitor. If you have vsync off, every frame rendered creates a tear line. If you are drawing frames at 80Hz without vsync, you are going to see a tear every 1/80 of a second no matter what the refresh rate of your screen is. The only difference is that a 60Hz screen would occasionally have two tear lines on screen at once.
    Reply
  • chizow - Thursday, August 01, 2013 - link

    Sorry, not even remotely close to true. Runt frames were literally tiny shreds of frames followed by full frames, unlike normal screen tearing with Vsync off that results in 1/3 or more of the frame being updated at a time, consistently.

    The difference is, one method does provide the impression of fluidity and change from one frame to the next (with palpable tearing) whereas runt frames are literally worthless unless you think 3-4 rows worth of image followed by full images provides any meaningful sense of motion.

    I do love the term "runt frame" though, an anachronism in the tech world born of AMD's ineptitude with regard to CrossFire. I for one will miss it.
    Reply
  • Thanny - Thursday, August 01, 2013 - link

    You're not making sense. All frames with vsync off are partial. The frame buffer is replaced in the middle of screen updates, so no rendered frame is ever displayed completely.

    A sense of motion is achieved by displaying different frames in a time sequence. It has nothing to do with showing parts of different frames in the same screen refresh.

    And vsync adds a maximum latency of the inverse of the screen refresh (16.67ms for a 60Hz display). On average, it will be half that. If you have a very laggy monitor (Overdrive-TN, PVA, or MVA panel types), that tiny bump from vsync might push the display lag to noticeability. For plain TN and IPS panels (not to mention CRT), there will be no detectable display lag with vsync on.
    Reply
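The latency figures in the comment above are easy to verify: with vsync on, a completed frame waits for the next refresh boundary, so the added delay ranges from zero up to one full refresh interval, averaging half of it. A minimal sketch of that arithmetic (the function name is ours):

```python
# Worst-case and average extra wait imposed by vsync: a frame that
# finishes just after a refresh waits nearly a full interval; one that
# finishes just before waits almost nothing, averaging half an interval.

def vsync_wait_ms(refresh_hz):
    period = 1000.0 / refresh_hz   # one refresh interval in ms
    return period, period / 2.0    # (worst case, average)

for hz in (60, 120):
    worst, avg = vsync_wait_ms(hz)
    print(f"{hz} Hz: worst {worst:.2f} ms, average {avg:.2f} ms")
```

At 60Hz this gives the 16.67ms worst case cited above, and doubling the refresh rate to 120Hz halves both figures.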
