Discussing Percentiles and Minimum Frame Rates

Up until this point we have only discussed average frame rates, which are easy numbers to generate from a benchmark run. Discussing minimum frame rates is trickier, because it could be argued that the minimum should simply be the time taken to render the single worst frame. All it then takes is one infrequent bad GPU request (such as a misaligned texture cache access) to skew the data. To this end, thanks to the logging functionality of the benchmark, we are able to report the frame rate profile of each run along with percentile numbers.
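To illustrate why a single bad frame is a poor basis for a "minimum" metric, here is a short sketch using invented frame times (not data from our benchmark runs): one 100 ms hitch out of a hundred frames drags the worst-frame minimum down to 10 FPS, while the average barely notices.

```python
# Illustrative sketch with invented frame times (not benchmark data):
# 99 smooth ~60 FPS frames plus one bad 100 ms frame.
frame_times_ms = [16.7] * 99 + [100.0]

# The "minimum frame rate" defined by the single worst frame:
min_fps = 1000.0 / max(frame_times_ms)

# The average frame rate over the whole run:
avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))

print(f"minimum FPS: {min_fps:.1f}")   # 10.0 -- dominated by one outlier
print(f"average FPS: {avg_fps:.1f}")   # ~57.0 -- barely affected
```

This is exactly the skew described above: the outlier owns the minimum outright, which is why percentile reporting is the more robust option.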

For the GTX 980 Ti and AMD Fury X, we pulled the 90th, 95th and 99th percentile data out of the outputs, as well as plotting full graphs. The 90th percentile represents the frame rate (we’ll stick to reporting frame rates to simplify the matter) that a game achieves or exceeds in 90% of its frames. Similar logic applies to the 95th and 99th percentile data; these sit closer to the absolute minimum but should be more consistent between runs.
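As a sketch of how such numbers can be pulled from a frame-time log (the function name, the nearest-rank method, and the sample log below are our own illustration, not the benchmark's actual code):

```python
def percentile_fps(frame_times_ms, pct):
    """Frame rate achieved or exceeded in pct% of frames.

    Nearest-rank sketch: pick the pct-th percentile frame *time*
    from the sorted log, then invert it into a frame rate.
    """
    ordered = sorted(frame_times_ms)  # fastest frames first
    idx = min(len(ordered) - 1, round(pct / 100 * (len(ordered) - 1)))
    return 1000.0 / ordered[idx]

# Hypothetical log: 90 frames at 10 ms plus a slow tail of 10 frames at 20 ms.
log = [10.0] * 90 + [20.0] * 10
print(percentile_fps(log, 90))  # 100.0 -- 90% of frames hit 100 FPS
print(percentile_fps(log, 99))  # 50.0  -- the slow tail drags this down
```

Note how the 99th percentile is pulled toward the worst frames while the 90th ignores them, which is why the higher percentiles are the ones to watch for stutter.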

This page (and the next) is going to be data heavy, but our analysis will discuss the effect of CPU scaling on the percentile data for both GPUs at all three resolutions using all three CPUs. Starting with the GTX 980 Ti:

Fable Legends Beta: GTX 980 Ti Percentiles (3840x2160)

All three arrangements at 3840x2160 perform similarly, though there are slight regressions moving from the i3 to the i7 along most of the range, perhaps suggesting that having an excess of thread data has some issues. The Core i7 arrangement seems to have the upper hand at the low percentile (2%-4%) numbers as well.

Fable Legends Beta: GTX 980 Ti Percentiles (1920x1080)

At 1080p, the Core i7 gives better results when the frame rate is above the average, and we see some scaling effects when the scenes are simple (giving high frame rates). But for whatever reason, when the going gets tough the i7 seems to bottom out as we go beyond the 80th percentile.

Fable Legends Beta: GTX 980 Ti Percentiles (1280x720)

If we ever wanted to see a good representation of CPU scaling, the 720p graph is practically there – all except for the 85th percentile and up, which makes the data points pulled out in that region perhaps unrepresentative of the whole. The same issue may be at play in the 1080p results as well.

Comments

  • Alexvrb - Friday, September 25, 2015 - link

    Ship them both to the East Coast and set up a Review Office / Beach Resort, complete with community events!
  • zimanodenea - Thursday, September 24, 2015 - link

    My Asus m5a97 has an option to do this.
  • mdriftmeyer - Thursday, September 24, 2015 - link

    Time to develop a test harness of equal merits and scope across the globe for the reviewers. To do less is unprofessional. The whole point of a test harness is not to duct-tape simulations together but to cover all bases.
  • Spunjji - Friday, September 25, 2015 - link

    Well said. This isn't some tinpot organisation, is it? ;)
  • Drumsticks - Thursday, September 24, 2015 - link

    That's a shame. I'd really like to see that comparison. With the improvements Zen should, in theory, bring, it could really give AMD its best chance in years to get some wind under its sails.
  • beck2050 - Thursday, September 24, 2015 - link

    A little too early to worry about. Hopefully both companies will improve when DX12 becomes standard issue.
  • DrKlahn - Thursday, September 24, 2015 - link

    Epic has always worked closely with Nvidia and used their hardware, so the only thing that surprises me is that the gap doesn't favor Nvidia more. It's very early to make any predictions, but there are some interesting conversations on other forums about how both architectures behave in different situations. Nvidia's architecture does appear to have issues in some asynchronous workloads. What little evidence we have says this may be an issue in some games.

    My own opinion is that with Nvidia's market dominance we will see most developers try to avoid situations where problems occur. As an AMD owner my main hope is that we see DX12 squeeze out proprietary code and level the playing field more. I'm also happy that the latest Unreal engine appears to run well on both vendors' hardware.
  • jiao lu - Thursday, September 24, 2015 - link

    It's not only a close working relationship: Unreal 3/4 use Nvidia's PhysX SDK outright. The Epic engine is terribly optimized for consoles right now; basically it is a PC engine, churning out PC demos now and then. Far fewer AAA studios use Unreal 4 now than used Unreal 3 in the PS3/Xbox 360 era. I am very suspicious that Unreal 5's rendering will not be multi-threaded enough, and will use DX12 the way it used DX11 before.
  • Midwayman - Thursday, September 24, 2015 - link

    Well, the Xbox One is using AMD hardware and DX12. That's probably a bigger reason to keep it neutral than Nvidia's larger share on the PC.
  • Spunjji - Friday, September 25, 2015 - link

    The PS4 is also using the same AMD GCN 1.0 architecture for its CPU and GPU.
