CPU Scaling

When it comes to how well a game scales across processors, DirectX 12 is something of a mixed bag, for two reasons. On one hand, it allows GPU commands to be issued from every CPU core, removing the single-core performance limit that hindered a number of DX11 titles and helping configurations with fewer cores or lower clock speeds. On the other hand, because every thread in the system can issue commands, the engine can pile on the work during heavy scenes, pushing the performance cliff for high-powered cards further out or enabling very impressive visual effects at the high end – which is perhaps something a benchmark like this won’t capture.
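To illustrate the submission model at play, here is a minimal C++ sketch of per-core command list recording followed by a single submission. The CommandList type and recordCommands function are hypothetical stand-ins for D3D12's ID3D12GraphicsCommandList recording and ID3D12CommandQueue::ExecuteCommandLists, not real API calls.

```cpp
// Minimal sketch of the DX12 submission model described above: one
// command list recorded per hardware thread, then a single submission.
// CommandList and recordCommands are hypothetical stand-ins for
// ID3D12GraphicsCommandList recording and
// ID3D12CommandQueue::ExecuteCommandLists; this is not real D3D12 code.
#include <algorithm>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct CommandList {
    int commandCount = 0;  // stand-in for recorded draw/dispatch calls
};

// Each worker records its own list independently. Under DX11 this work
// was effectively funnelled through a single thread.
void recordCommands(CommandList& list) {
    for (int i = 0; i < 1000; ++i)
        ++list.commandCount;
}

int main() {
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<CommandList> lists(workers);
    std::vector<std::thread> threads;

    for (unsigned i = 0; i < workers; ++i)  // one recorder per CPU thread
        threads.emplace_back(recordCommands, std::ref(lists[i]));
    for (auto& t : threads) t.join();

    int total = 0;  // a single submit of all lists, a la ExecuteCommandLists
    for (const auto& l : lists) total += l.commandCount;
    std::printf("%u threads recorded %d commands\n", workers, total);
}
```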

For our CPU scaling tests, we took the two high-end cards tested and placed them in each of our Core i7 (6C/12T), Core i5 (4C/4T), and Core i3 (2C/4T) environments at three different resolution/setting configurations, similar to the previous page, and recorded the results.
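As a note on reading the "Scaling %" charts that follow: each CPU's average frame rate is normalized against a baseline. Below is a minimal sketch of that arithmetic, assuming the Core i7 result is the 100% mark; the frame-rate values are illustrative placeholders, not measured data from this article.

```cpp
// Sketch of the normalization behind a "Scaling %" chart. Assumes the
// Core i7 result is the 100% baseline; the frame rates are illustrative
// placeholders, not measured data from this article.
#include <cstdio>

int main() {
    const char* cpus[] = { "Core i7 (6C/12T)", "Core i5 (4C/4T)", "Core i3 (2C/4T)" };
    const double fps[] = { 90.0, 88.5, 71.0 };  // hypothetical 720p results
    const double base  = fps[0];                // i7 taken as the 100% mark

    for (int i = 0; i < 3; ++i)
        std::printf("%-17s %5.1f fps -> %5.1f%%\n",
                    cpus[i], fps[i], 100.0 * fps[i] / base);
}
```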

Fable Legends Beta: GTX 980 Ti Scaling

Fable Legends Beta: GTX 980 Ti Scaling %

Looking solely at the GTX 980 Ti to begin with, we see that for now the Fable benchmark only scales with the CPU at the lowest resolution and graphics quality. Moving up to 1080p or 4K sees similar performance no matter the processor – perhaps even a slight decrease at 4K, but this is well within a 2% variation.

Fable Legends Beta: AMD Fury X Scaling

Fable Legends Beta: AMD Fury X Scaling %

On the Fury X, the tale is similar yet stranger. The Fable benchmark is canned, so it should be running the same data each time – but in all three configurations the Core i7 trails the Core i5. Perhaps in this instance too many threads on the processor are contending for memory bandwidth, creating some slight cache pressure (one wonders if some eDRAM might help). But again we see no real scaling improvement moving from the Core i3 to the Core i7 at 1920x1080 and 3840x2160.

Fable Legends Beta: Other CPU Scaling %, 720p

As we’ve seen in previous reviews, the effects of CPU scaling with regard to resolution depend on both the CPU architecture and the GPU architecture: each GPU manufacturer performs differently, and two different models in the same silicon family can also differ in their scaling results. To that end, we actually see a boost at 1280x720 with both the AMD 7970 and the GTX 680 when moving from the Core i3 to the Core i7.

If we look at the rendering time breakdown between GPUs on high-end configurations, we get the following data. Numbers here are listed in milliseconds, so lower is better:

Fable Legends Beta: Render Sub-Results (Core i7, 3840x2160)

Looking at the 980 Ti and Fury X, we see that NVIDIA is significantly faster at GBuffer rendering, Dynamic Global Illumination, and Compute Shader Simulation & Culling. Meanwhile, AMD holds narrower leads in every other category, including the ambiguous 'other'.

Fable Legends Beta: Render Sub-Results (Core i7, 3840x2160)

Dropping down a couple of tiers to the GTX 970 and R9 290X, we see some minor variations. The R9 290X has good leads in Dynamic Lighting and 'other', with smaller leads in Compute Shader Simulation & Culling and Post Processing. The GTX 970 benefits significantly on Dynamic Global Illumination.

What do these numbers mean? Overall, it appears that NVIDIA has a strong hold on deferred rendering and global illumination, while AMD has the advantage in dynamic lighting and compute.
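As a footnote on reading the sub-results above: since the per-pass numbers are frame times in milliseconds, they sum to a total frame time, and frame rate is simply 1000 divided by that total. The sketch below uses placeholder values, not the measured results; only the pass names follow the charts.

```cpp
// Sketch of how per-pass frame times combine: the sub-results sum to a
// total frame time in milliseconds, and FPS = 1000 / total. Pass names
// follow the charts above; the values are placeholders, not measurements.
#include <cstdio>

int main() {
    struct Pass { const char* name; double ms; };
    const Pass passes[] = {
        { "GBuffer Rendering",                   5.0 },
        { "Dynamic Global Illumination",         8.0 },
        { "Compute Shader Simulation & Culling", 4.0 },
        { "Dynamic Lighting",                    6.0 },
        { "Post Processing",                     3.0 },
        { "Other",                               7.0 },
    };

    double total = 0.0;
    for (const Pass& p : passes) {
        std::printf("%-36s %5.1f ms\n", p.name, p.ms);
        total += p.ms;
    }
    // Lower per-pass times mean a lower total, and thus a higher frame rate.
    std::printf("Total: %.1f ms -> %.1f fps\n", total, 1000.0 / total);
}
```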

Comments
  • lprates - Sunday, October 18, 2015

    I totally agree.
  • anubis44 - Friday, October 30, 2015

    It's not nonsense. AMD Radeon cards have a hardware based scheduler. These tests don't make any use of asynchronous shaders, but it IS a DX12 feature, and one which will hit the Maxwells hard, since they don't have a hardware based scheduler. nVidia left it out to get the power consumption down. Too bad it'll be needed in many upcoming DX12 titles.
  • Bleakwise - Tuesday, December 29, 2015

    You think?

    They didn't even benchmark the 300 series cards, and look at this: at 1080p a 290X is about 11% faster than a 970, and a 285 is 20% faster than a 960.

    I mean holy shit.

    Also, why didn't Anandtech use the 300 series cards instead of the last gen AMD cards (no, they aren't just rebrands)? Why didn't they do 1440p benchmarks? What the hell?
  • Bleakwise - Tuesday, December 29, 2015

    Also, "the driver got here late" my ass.

    What a bullshit excuse. It takes a couple hours to benchmark a card. The review couldn't wait one more day? Really? A review that's obsolete before it's even posted is better than posting a relevant review a day later?
  • Uxi - Thursday, September 24, 2015

    Picture/Graph nr. 2 on the CPU scaling page seems to be wrong. It should be 980 Ti, not Fury X.
  • Brett Howse - Thursday, September 24, 2015

    Fixed tyvm!
  • Drumsticks - Thursday, September 24, 2015

    Any chance of having some AMD CPUs tacked onto this? DX12 is supposed to help them out after all, so it would be interesting to see if they've made any gains here.
  • Ian Cutress - Thursday, September 24, 2015

    It's something we've thought of. A main issue is that the reviewer with all the GPUs, Ryan, is on the West Coast, and the one with all the CPUs, me, is in Europe. So if Ryan does a piece it'll have lots of GPUs (and take time out of other things, as he is Editor in Chief) but be light on CPUs. If I do it, it'll be limited to the GPU stack I have (R9 290X, R9 285, GTX 980, GTX 770, some of these non-reference). We did this with Star Swarm and got a select group of angry emails claiming we were biased one way or another for not doing a full matrix intersection, and that we were being paid off.

    That aside, when we get closer to launch of this game and others with DX12, I'll update the tests on our CPU test bed for 2016, and maybe get a new GPU or two with whatever is available at the time.
  • britjh22 - Thursday, September 24, 2015

    Sounds like we need to get Ryan an FX-8320 and a 990FX board. Can you partially disable the FX processors to replicate the 6 and 4 series, the same way you can with the i7?
  • R0H1T - Thursday, September 24, 2015

    A better idea would be to ship Ian to the States, also since import duties are lower than in Europe (:

    j/k
