Discussing Percentiles and Minimum Frame Rates

Up until this point we have only discussed average frame rates, which are easy numbers to generate from a benchmark run. Minimum frame rates are a little trickier, because it could be argued that the time taken to render the single worst frame should define the minimum. In that case, all it takes is one bad GPU request (a misaligned texture cache access, say) that happens infrequently to skew the data. To this end, thanks to the logging functionality of the benchmark, we are able to report the frame rate profile of each run along with percentile numbers.
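
As a quick illustration (the frame times below are made up for the example, not taken from the benchmark), a minimal Python sketch shows how one isolated slow frame barely moves the average but completely dictates the absolute minimum:

```python
# Purely illustrative, synthetic frame times in milliseconds: a run that is
# steady at ~60 FPS except for one isolated bad frame.
frame_times_ms = [16.7] * 999 + [100.0]

fps_per_frame = [1000.0 / t for t in frame_times_ms]

average_fps = sum(fps_per_frame) / len(fps_per_frame)   # ~59.8 FPS
absolute_min_fps = min(fps_per_frame)                    # 10.0 FPS, set by a single frame

print(f"Average FPS: {average_fps:.1f}")
print(f"Absolute minimum FPS: {absolute_min_fps:.1f}")
```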

For the GTX 980 Ti and AMD Fury X, we pulled out the 90th, 95th and 99th percentile data from the outputs, as well as plotting the full graphs. For each of these data points, the 90th percentile should represent the frame rate (we'll stick to reporting frame rates to simplify the matter) that the game achieves or exceeds in 90% of its frames. Similar logic applies to the 95th and 99th percentile data: these sit closer to the absolute minimum, but should be more consistent between runs than a single worst frame.
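
As a rough sketch of how such figures can be pulled from a list of per-frame times (the function name and the frame times here are our own illustration, not the benchmark's actual log format): sort the per-frame rates from fastest to slowest and read off the rate that 90%, 95% or 99% of frames meet or exceed.

```python
import math

def percentile_fps(frame_times_ms, pct):
    """Frame rate met or exceeded by `pct` percent of the frames."""
    # Sort per-frame rates from fastest to slowest, then take the slowest
    # frame that still falls inside the fastest pct% of all frames.
    rates = sorted((1000.0 / t for t in frame_times_ms), reverse=True)
    index = max(math.ceil(len(rates) * pct / 100.0) - 1, 0)
    return rates[index]

# Synthetic example: 1000 frames, mostly fast with a slower tail.
frame_times_ms = [16.7] * 900 + [25.0] * 90 + [40.0] * 10

for pct in (90, 95, 99):
    print(f"{pct}th percentile: {percentile_fps(frame_times_ms, pct):.1f} FPS")
```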

This page (and the next) is going to be data-heavy, but our analysis will discuss the effect of CPU scaling on the percentile data for both GPUs at all three resolutions using all three CPUs. Starting with the GTX 980 Ti:

Fable Legends Beta: GTX 980 Ti Percentiles (3840x2160)

All three arrangements at 3840x2160 perform similarly, though there are slight regressions moving from the i3 to the i7 along most of the range, perhaps suggesting that having an excess of threads in play introduces some overhead. The Core i7 arrangement does, however, seem to have the upper hand in the low percentile (2%-4%) numbers.

Fable Legends Beta: GTX 980 Ti Percentiles (1920x1080)

At 1080p, the Core i7 delivers better results when the frame rate is above the average, and we see some CPU scaling when the scenes are simple (giving high frame rates). But for whatever reason, when the going gets tough the i7 seems to bottom out as we move beyond the 80th percentile.

Fable Legends Beta: GTX 980 Ti Percentiles (1280x720)

If we ever wanted to see a good representation of CPU scaling, the 720p graph practically provides it, all except for the 85th percentile and up, which makes the data points pulled from that region perhaps unrepresentative of the run as a whole. The same issue may be at play in the 1080p results as well.

141 Comments

  • Gotpaidmuch - Thursday, September 24, 2015 - link

    Sad day for all of us when even the small wins that AMD gets are omitted from the benchmarks.
  • Oxford Guy - Thursday, September 24, 2015 - link

    "we are waiting for a better time to test the Ashes of the Singularity benchmark"
  • ZipSpeed - Thursday, September 24, 2015 - link

    The 7970 sure has legs. Turn the quality down one notch from ultra to high, and the card is still viable for 1080p gaming.
  • looncraz - Thursday, September 24, 2015 - link

    As a long-time multi-CPU/threaded software developer, I'd say AMD's results show one thing quite clearly: they have some unwanted lock contention in their current driver.

    As soon as that is resolved, we should see a decent improvement for AMD.

    On another note, am I the only one that noticed how much the 290X jumped compared to the rest of the lineup?!

    Does that put the 390X on par with the 980 for Direct X 12? That would be an interesting development.
  • mr_tawan - Thursday, September 24, 2015 - link

    Well, even if UE4 uses DX12, it would probably be just a straight port from DX11 (rather than from the XBONE or other consoles). The approach it uses may not favour AMD as much as Nvidia, who knows?

    Also, I think the Nvidia people would have been involved with the engine development more than AMD (due to the size of its developer relations team, I guess). Oxide Games also mentioned that they got this kind of involvement as well (even if their game is an AMD title).
  • tipoo - Thursday, September 24, 2015 - link

    Nice article. Looks like i3s are only going to get *more* feasible for gaming rigs under DX12. There's still the odd title that suffers without quads, but most console ports at least should do fine.
  • ThomasS31 - Thursday, September 24, 2015 - link

    Still not a game performance test... nor a CPU one.

    There is no AI... and I guess a lot more is missing that would make a difference on the CPU side as well.

    Though yeah... kinda funny that an i3 is "faster" than an i5/7 here. :)
  • Traciatim - Thursday, September 24, 2015 - link

    This is what I was thinking too. I thought that DX12 might shake up the old rule of thumb of 'i5 for gaming and i7 for working', but it seems this still holds true. In some cases it might even make more sense, budget-wise, to go for a high-end i3 and sink as much as possible into your video card rather than go for an i5, depending on your budget and expected configuration.

    More CPU benchmarking and DX12 benchmarks are needed of course, but it still looks like the design of machines isn't going to change all that much.
  • Margalus - Friday, September 25, 2015 - link

    this test shows absolutely nothing about "gaming". It is strictly rendering. When it comes to "gaming" I believe your i3 is going to drop like a rock once it has to start dealing with AI and other "gaming" features. Try playing something like StarCraft or Civilization on your i3. I don't think it's going to cut the mustard in the real world.
  • joex4444 - Thursday, September 24, 2015 - link

    As far as using X79 as the test platform here goes, I'm mildly curious what sort of effect the quad channel RAM had. Particularly with Core i3, most people pair that with 2x4GB of cheap DDR3 and won't be getting even half the memory bandwidth your test platform had available.

    Also fun would be to switch to X99 and test the Core i7-5960X, or to drop an E5-2687W into the X79 platform (hey, it *is* supported after all).
