Discussing Percentiles and Minimum Frame Rates

Up until this point we have only discussed average frame rates, which are easy numbers to generate from a benchmark run. Minimum frame rates are a little trickier: it could be argued that the single slowest frame in a run should define the minimum, but all it takes is one bad GPU request (such as a misaligned texture cache access) that happens infrequently to skew that number badly. To this end, thanks to the logging functionality of the benchmark, we are able to report the frame rate profile and percentile numbers for each run.

For the GTX 980 Ti and AMD Fury X, we pulled out the 90th, 95th and 99th percentile data from the outputs, as well as plotting full graphs. The 90th percentile represents the frame rate (we'll stick to reporting frame rates to simplify the matter) that the game achieves or exceeds in 90% of frames. Similar logic applies to the 95th and 99th percentile data: these sit closer to the worst-case minimum, but are far more consistent between runs than the single slowest frame.
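The percentile approach described above can be sketched in a few lines. The following is a hypothetical illustration (not the benchmark's own logging code), assuming per-frame render times logged in milliseconds; the function name and data are invented for the example:

```python
# Minimal sketch: pulling percentile frame rates from a log of per-frame
# render times (milliseconds), using the simple nearest-rank method.

def percentile_fps(frame_times_ms, pct):
    """Return the frame rate at the given percentile (0-100).

    Frames are sorted from fastest to slowest, so higher percentiles
    correspond to the slower frames: the 99th percentile sits near the
    worst case, but is less sensitive to a lone outlier frame than the
    raw minimum is.
    """
    fps = sorted((1000.0 / t for t in frame_times_ms), reverse=True)
    idx = min(len(fps) - 1, int(len(fps) * pct / 100))
    return fps[idx]

# 99 frames at 10 ms (100 FPS) plus one pathological 100 ms frame.
run = [10.0] * 99 + [100.0]
print(min(1000.0 / t for t in run))   # 10.0 -- the raw minimum collapses
print(percentile_fps(run, 95))        # 100.0 -- the percentile shrugs it off
```

This is why a single 100 ms hiccup drags the "minimum frame rate" down to 10 FPS, while the 95th percentile still reports 100 FPS: percentile numbers describe the run as a whole rather than its worst moment.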

This page (and the next) is going to be data heavy, but our analysis will discuss the effect of CPU scaling on percentile data for both GPUs at all three resolutions using all three CPUs. Starting with the GTX 980 Ti:

Fable Legends Beta: GTX 980 Ti Percentiles (3840x2160)

All three arrangements at 3840x2160 perform similarly, though there are slight regressions moving from the i3 to the i7 along most of the range, perhaps suggesting that an excess of threads introduces some overhead. The Core i7 arrangement does have the upper hand at the low-percentile (2%-4%) end, however.

Fable Legends Beta: GTX 980 Ti Percentiles (1920x1080)

At 1080p, the Core i7 gives better results when the frame rate is above the average, and we see some CPU scaling when the scenes are simple (giving high frame rates). But for whatever reason, when the going gets tough, the i7 seems to bottom out beyond the 80th percentile.

Fable Legends Beta: GTX 980 Ti Percentiles (1280x720)

If we ever wanted to see a good representation of CPU scaling, the 720p graph is practically it – all except for the 85th percentile and up, which makes data points pulled from that region perhaps unrepresentative of the whole. The same issue may be at play in the 1080p results as well.

141 Comments

  • lprates - Sunday, October 18, 2015 - link

    I totally Agree
  • anubis44 - Friday, October 30, 2015 - link

    It's not nonsense. AMD Radeon cards have a hardware based scheduler. These tests don't make any use of asynchronous shaders, but it IS a DX12 feature, and one which will hit the Maxwells hard, since they don't have a hardware based scheduler. nVidia left it out to get the power consumption down. Too bad it'll be needed in many upcoming DX12 titles.
  • Bleakwise - Tuesday, December 29, 2015 - link

    You think?

    They didn't even benchmark the 300 series cards, and look at this: at 1080p a 290X is about 11% faster than a 970, and a 285 is 20% faster than a 960.

    I mean holy shit.

    Also, why didn't Anandtech use the last gen AMD cards instead of the 300 series cards (no, they aren't just rebrands)? Why didn't they do 1440p benchmarks? What the hell?
  • Bleakwise - Tuesday, December 29, 2015 - link

    Also, "the driver got here late" my ass.

    What a bullshit excuse. It takes a couple hours to benchmark a card. The review couldn't wait one more day? Really? A review that's obsolete before it's even posted is better than posting a relevant review a day later?
  • Uxi - Thursday, September 24, 2015 - link

    Picture/Graph nr. 2 on the CPU scaling page seems to be the wrong one. Should be 980 Ti, not Fury X.
  • Brett Howse - Thursday, September 24, 2015 - link

    Fixed tyvm!
  • Drumsticks - Thursday, September 24, 2015 - link

    Any chance of having some AMD cpus tacked onto this? DX 12 is supposed to help them out after all, so it would be interesting to see if they've made any gains here.
  • Ian Cutress - Thursday, September 24, 2015 - link

    It's something we've thought of. A main issue is that the reviewer with all the GPUs, Ryan, is on the West Coast and the one with all the CPUs, me, is in Europe. So if Ryan does a piece it'll have lots of GPUs (and take time out of other things as he is Editor in Chief) and light on CPU. If I do it, it'll be limited to the GPU stack (R9 290X, R9 285, GTX 980, GTX 770, some of these non-reference) I have. We did this with Star Swarm and got a select group of angry emails claiming we were biased some way or another for not doing a full matrix intersection and claimed we were being paid off.

    That aside, when we get closer to launch of this game and others with DX12, I'll update the tests on our CPU test bed for 2016, and maybe get a new GPU or two with whatever is available at the time.
  • britjh22 - Thursday, September 24, 2015 - link

    Sounds like we need to get Ryan an FX-8320 and 990FX board, can you partially disable the FX processors in the same way to replicate the 6 and 4 series like you can with the i7?
  • R0H1T - Thursday, September 24, 2015 - link

    A better idea would be to ship Ian to the States, also since import duties are lower there than in Europe (:

    j/k
