Discussing Percentiles and Minimum Frame Rates

Continuing from the previous page, we performed a similar analysis on AMD's Fury X graphics card. The same rules apply: all three resolution/setting combinations using all three system configurations. Results are given as frame rate profiles across the percentile range, with the 90th, 95th and 99th percentile values picked out as an indication of minimum frame rates.
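
To make the methodology concrete, here is a minimal sketch of how percentile-based "minimum" frame rates can be derived from raw frame times. The function and the data are ours for illustration, not the benchmark's own output; the key point is that a frame time of t milliseconds corresponds to 1000/t frames per second.

```python
# Minimal sketch: deriving percentile "minimum" frame rates from raw
# frame times. Function name and data are hypothetical, for illustration.

def percentile_fps(frame_times_ms, pct):
    """Frame rate at the pct-th percentile of frame times.

    pct = 99 means "only 1% of frames rendered slower than this".
    """
    ordered = sorted(frame_times_ms)                  # fastest to slowest
    index = min(len(ordered) - 1, len(ordered) * pct // 100)
    return 1000.0 / ordered[index]                    # ms -> fps

# Made-up run: mostly ~16.7 ms (60 fps) frames plus a few slow spikes.
times = [16.7] * 95 + [25.0, 28.0, 30.0, 33.0, 40.0]
for p in (90, 95, 99):
    print(f"{p}th percentile: {percentile_fps(times, p):.1f} fps")
```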

Fable Legends Beta: AMD Fury X Percentiles

Moving on to the Fury X at 4K, we see all three processor lineups performing similarly, an indication that we are more GPU limited here. There is a slight wrinkle with the Core i7, though: it gives slightly lower frame rates in easier scenes but better frame rates when the going gets tough, beyond the 95th percentile.

Fable Legends Beta: AMD Fury X Percentiles

For 1080p, the results take a twist. It almost seems as if we have some form of reverse scaling, whereby more cores do more damage to the results. Have a look at the breakdown provided by the in-game benchmark (given in milliseconds, so lower is better):

Fable Legends Beta: AMD Fury X at 1080p Render Sub-Results

Three areas stand out as benefiting from fewer cores: Transparency and Effects, GBuffer Rendering, and Dynamic Lighting. All three are related to illumination and how the illumination interacts with its surroundings. One possible reason springs to mind: with large core counts, too many threads issuing work to the graphics card could cause thread contention in the cache, or give the thread scheduler a hard time, depending on what comes in as high priority.
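
As a rough illustration of that hypothesis, consider the sketch below. It is our own hypothetical code, not the engine's or the driver's: a fixed amount of "submission" work is funnelled through a single shared lock, so splitting it across more threads does not speed it up, and the extra contention can actually make it finish later.

```python
# Hypothetical sketch of the contention hypothesis: a fixed amount of
# "command submission" work serialised behind one shared lock. More
# threads fighting over the lock can finish later, not sooner.
import threading
import time

submit_lock = threading.Lock()   # stand-in for a contended driver resource

def worker(n_calls):
    for _ in range(n_calls):
        with submit_lock:        # every "draw call" serialises here
            pass                 # real code would build/submit a command

def run(n_threads, total_calls=200_000):
    threads = [threading.Thread(target=worker, args=(total_calls // n_threads,))
               for _ in range(n_threads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

for n in (1, 2, 4, 8):
    print(f"{n} threads: {run(n):.3f}s")
```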

Nevertheless, the situation changes when we move down again to 720p:

Fable Legends Beta: AMD Fury X Percentiles

Here the Core i3 takes a nosedive as we become CPU limited in pushing out frames.

Comments

  • Gotpaidmuch - Thursday, September 24, 2015 - link

    Sad day for all of us when even the small wins that AMD gets are omitted from the benchmarks.
  • Oxford Guy - Thursday, September 24, 2015 - link

    "we are waiting for a better time to test the Ashes of the Singularity benchmark"
  • ZipSpeed - Thursday, September 24, 2015 - link

    The 7970 sure has legs. Turn the quality down one notch from ultra to high, and the card is still viable for 1080p gaming.
  • looncraz - Thursday, September 24, 2015 - link

    As a long-time multi-CPU/threaded software developer, I'd say AMD's results show one thing quite clearly: they have some unwanted lock contention in their current driver.

    As soon as that is resolved, we should see a decent improvement for AMD.

    On another note, am I the only one that noticed how much the 290X jumped compared to the rest of the lineup?!

    Does that put the 390X on par with the 980 for Direct X 12? That would be an interesting development.
  • mr_tawan - Thursday, September 24, 2015 - link

    Well, even if UE4 uses DX12, it would probably be just a straight port from DX11 (rather than from XBONE or another console). The approach it uses may not favour AMD as much as NVIDIA, who knows?

    Also, I think the NVIDIA people would have been more involved with the engine development than AMD (due to the size of their developer relations team, I guess). Oxide Games also mentioned that they got this kind of involvement as well (even though the game is an AMD title).
  • tipoo - Thursday, September 24, 2015 - link

    Nice article. Looks like i3s are only going to get *more* feasible for gaming rigs under DX12. There's still the odd title that suffers without a quad core, but most console ports at least should do fine.
  • ThomasS31 - Thursday, September 24, 2015 - link

    Still not a game performance test... nor a CPU one.

    There is no AI... and I guess a lot more is missing that would make a difference on the CPU side as well.

    Though yeah... kinda funny that an i3 is "faster" than an i5/i7 here. :)
  • Traciatim - Thursday, September 24, 2015 - link

    This is what I was thinking too. I thought that DX12 might shake up the old rule of thumb of 'i5 for gaming and i7 for working', but it seems this still holds true. In some cases it might even make more sense budget-wise to go for a high-end i3 and sink as much into your video card as possible rather than go for an i5, depending on your budget and expected configuration.

    More CPU benchmarking and DX12 benchmarks are needed of course, but it still looks like the design of machines isn't going to change all that much.
  • Margalus - Friday, September 25, 2015 - link

    this test shows absolutely nothing about "gaming". It is strictly rendering. When it comes to "gaming", I believe your i3 is going to drop like a rock once it has to start dealing with AI and other "gaming" features. Try playing something like StarCraft or Civilization on your i3. I don't think it's going to cut the mustard in the real world.
  • joex4444 - Thursday, September 24, 2015 - link

    As far as using X79 as the test platform here goes, I'm mildly curious what sort of effect the quad channel RAM had. Particularly with Core i3, most people pair that with 2x4GB of cheap DDR3 and won't be getting even half the memory bandwidth your test platform had available.

    Also fun would be to switch to X99 and test the Core i7-5960X, or to drop an E5-2687W into the X79 platform (hey, it *is* supported after all).
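
As a quick check of joex4444's bandwidth point above, the theoretical numbers are easy to work out. This back-of-the-envelope sketch is ours and assumes DDR3-1866 modules on a standard 64-bit (8-byte) channel; swap in your own module speed.

```python
# Back-of-the-envelope peak memory bandwidth. Assumes DDR3-1866 and a
# standard 64-bit (8-byte) channel; numbers are theoretical maximums.
def peak_bandwidth_gbs(mt_per_s, channels, bytes_per_transfer=8):
    return mt_per_s * bytes_per_transfer * channels / 1000.0

print(peak_bandwidth_gbs(1866, 4))  # X79 quad channel:     ~59.7 GB/s
print(peak_bandwidth_gbs(1866, 2))  # typical dual channel: ~29.9 GB/s
```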
