CPU Scaling

When it comes to how well a game scales with the processor, DirectX 12 is something of a mixed bag, for two reasons. On one hand, it allows GPU commands to be issued from every CPU core, removing the single-core performance limit that hindered a number of DX11 titles and helping configurations with fewer cores or lower clock speeds. On the other hand, because every thread in the system can issue commands, a heavy scene can pile on the work, pushing the cliff edge for high-powered cards further out or enabling very impressive visual effects at the high end, which is something a canned benchmark like this may not capture.
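
To illustrate the first point, below is a minimal C++ sketch of the per-thread command list recording pattern Direct3D 12 enables. This is not Fable's code, just the general shape of the API: device and pipeline setup are omitted, and RecordDrawsForChunk is a hypothetical stand-in for recording one slice of the frame's draw calls.

```cpp
// Sketch only: how D3D12 lets each CPU core record its own command list in
// parallel, then submit them together on one queue. Error handling omitted.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

// Hypothetical helper: records one slice of the scene's draw calls.
void RecordDrawsForChunk(ID3D12GraphicsCommandList* list, unsigned chunk)
{
    /* draws/dispatches for this chunk would be recorded here */
}

void SubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue,
                 unsigned workerCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);
    std::vector<std::thread>                       workers;

    // One allocator and command list per worker thread.
    for (unsigned i = 0; i < workerCount; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
    }

    // Each thread records its slice independently. This is the part that
    // DX11 effectively serialized onto a single core.
    for (unsigned i = 0; i < workerCount; ++i) {
        workers.emplace_back([&, i] {
            RecordDrawsForChunk(lists[i].Get(), i);
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    // Submission itself is cheap: hand all recorded lists to the queue at once.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```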

For our CPU scaling tests, we took the two high-end cards tested and placed them in each of our Core i7 (6C/12T), Core i5 (4C/4T), and Core i3 (2C/4T) environments at three different resolution/setting configurations, similar to the previous page, and recorded the results.
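
The "Scaling %" charts below normalize each CPU's result against a baseline. A minimal sketch of that arithmetic, assuming (purely for illustration) a Core i7 baseline and hypothetical frame rates:

```cpp
// Minimal sketch of the arithmetic behind a "Scaling %" chart. The baseline
// choice (the Core i7 here) and the frame rates are illustrative assumptions.
#include <cstdio>

int main()
{
    const double fpsCoreI7 = 90.0;  // 6C/12T
    const double fpsCoreI5 = 88.0;  // 4C/4T
    const double fpsCoreI3 = 75.0;  // 2C/4T

    // Express each CPU's average frame rate as a percentage of the baseline.
    std::printf("Core i7: %5.1f%%\n", 100.0 * fpsCoreI7 / fpsCoreI7);
    std::printf("Core i5: %5.1f%%\n", 100.0 * fpsCoreI5 / fpsCoreI7);
    std::printf("Core i3: %5.1f%%\n", 100.0 * fpsCoreI3 / fpsCoreI7);
    return 0;
}
```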

Fable Legends Beta: GTX 980 Ti Scaling

Fable Legends Beta: GTX 980 Ti Scaling %

Looking solely at the GTX 980 Ti to begin with, we see that for now the Fable benchmark only scales at the low resolution and graphics quality settings. Moving up to 1080p or 4K gives similar performance no matter the processor, with perhaps even a slight decrease at 4K, but this is well within a 2% variation.

Fable Legends Beta: AMD Fury X Scaling

Fable Legends Beta: AMD Fury X Scaling %

On the Fury X, the tale is similar and yet stranger. The Fable benchmark is canned, so it should be running the same workload each time, but in all three circumstances the Core i7 trails the Core i5. Perhaps in this instance too many threads on the processor are contending for memory bandwidth, creating some slight cache pressure (one wonders if some eDRAM might help). But again, we see no real scaling improvement moving from the Core i3 to the Core i7 at 1920x1080 and 3840x2160.

Fable Legends Beta: Other CPU Scaling %, 720p

As we’ve seen in previous reviews, the effects of CPU scaling with regard to resolution depend on both the CPU architecture and the GPU architecture: each GPU manufacturer performs differently, and two models in the same silicon family can also differ in their scaling results. To that end, we actually see a boost at 1280x720 with both the AMD 7970 and the GTX 680 when moving from the Core i3 to the Core i7.

If we look at the rendering time breakdown between GPUs on high-end configurations, we get the following data. Numbers here are listed in milliseconds, so lower is better:

Fable Legends Beta: Render Sub-Results (Core i7, 3840x2160)

Looking at the 980 Ti and Fury X, we see that NVIDIA is significantly faster at GBuffer rendering, Dynamic Global Illumination, and Compute Shader Simulation & Culling. Meanwhile, AMD pulls out narrower leads in every other category, including the ambiguous 'other'.

Fable Legends Beta: Render Sub-Results (Core i7, 3840x2160)

Dropping down a couple of tiers to the GTX 970 and R9 290X, we see some minor variations. The R9 290X holds good leads in Dynamic Lighting and 'other', with smaller leads in Compute Shader Simulation & Culling and Post Processing, while the GTX 970 benefits significantly in Dynamic Global Illumination.

What do these numbers mean? Overall, it appears that NVIDIA has a strong hold on deferred rendering and global illumination, while AMD has the advantage in dynamic lighting and compute.
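
For those curious how per-pass figures like these are obtained, GPU time is normally measured with timestamp queries on the GPU itself rather than CPU timers. Below is a minimal D3D12 sketch of the idea; whether this benchmark uses exactly this mechanism is an assumption, and the helper names are ours.

```cpp
// Sketch: measuring per-pass GPU time in D3D12 with timestamp queries.
// Query heap creation, readback buffer setup, and fencing are omitted.
#include <d3d12.h>

// Brackets a pass (e.g. GBuffer or GI) with a begin/end timestamp pair.
// Timestamps use EndQuery only; there is no BeginQuery for this query type.
void TimePass(ID3D12GraphicsCommandList* list, ID3D12QueryHeap* heap,
              UINT pairIndex)
{
    list->EndQuery(heap, D3D12_QUERY_TYPE_TIMESTAMP, pairIndex * 2);
    /* ...record the pass's draws/dispatches here... */
    list->EndQuery(heap, D3D12_QUERY_TYPE_TIMESTAMP, pairIndex * 2 + 1);
    // The ticks are later copied to a readback buffer with
    // ID3D12GraphicsCommandList::ResolveQueryData.
}

// Converts a tick delta to milliseconds. gpuFrequency comes from
// ID3D12CommandQueue::GetTimestampFrequency.
double TicksToMs(UINT64 begin, UINT64 end, UINT64 gpuFrequency)
{
    return 1000.0 * static_cast<double>(end - begin)
                  / static_cast<double>(gpuFrequency);
}
```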

141 Comments

  • anubis44 - Friday, October 30, 2015 - link

    The point is not whether you use DP, the point is that the circuitry is now missing, and that's why Maxwell uses less power. If I leave stuff out of a car, it'll be lighter, too. Hey look! No back seats anymore, and now it's LIGHTER! I'm a genius. It's not because nVidia whipped up a can of whoop-ass, or because they have magic powers, it's because they threw everything out of the airplane to make it lighter.
  • anubis44 - Friday, October 30, 2015 - link

    And left out the hardware based scheduler, which will bite them in the ass for a lot of DX12 games that will need this. No WAIT! nVidia isn't screwed! They'll just sell ANOTHER card to the nVidiots who JUST bought one that was obsolete, 'cause nVidia is ALWAYS better!
  • Alexvrb - Thursday, September 24, 2015 - link

    Not every game uses every DX12 feature, and knowing that their game is going to run on a lot of Nvidia hardware makes developers conservative in their use of new features that hurt performance on Nvidia cards. For example, as long as developers are careful with async compute and you've got plenty of CPU cycles, I think everything will be fine.

    Now, look at the 720p results. Why the change in the pecking order? Why do AMD cards increase their lead as CPU power falls? Is it a driver overhead issue - possibly related to async shader concerns? We don't know. Either way it might not matter; an early benchmark isn't even necessarily representative of the final thing, let alone a real-world experience.

    In the end it will depend on the individual game. I don't think most developers are going to push features really hard that kill performance on a large portion of cards... well not unless they get free middleware tools and marketing cash or something. ;)
  • cityuser - Sunday, September 27, 2015 - link

    Quite sure it's nvidia again doing some nasty work with the game company to degrade the performance of AMD cards!!!
    Look at where nvidia cannot corrupt: Futuremark's benchmark tells another story!!!
  • Drumsticks - Thursday, September 24, 2015 - link

    As always, it's only one data point. It was too early to declare AMD a winner then, but it's still too early to say they aren't actually going to benefit more from DX12 than Nvidia. We need more data to say for sure either way.
  • geniekid - Thursday, September 24, 2015 - link

    That's crazy talk.
  • Beararam - Thursday, September 24, 2015 - link

    Maybe not "vastly superior", but the gains in the 390X seem to be greater than those realized in the 980. Time will tell.

    https://youtu.be/_AH6pU36RUg?t=6m29s
  • justniz - Thursday, September 24, 2015 - link

    Such a large gain only on AMD just from DX12 (i.e. accessing the GPU at a lower level and bypassing AMD driver's DX11 implementation) is yet more evidence that AMD's DX11 drivers are much more of a bottleneck than nVidia's.
  • Gigaplex - Thursday, September 24, 2015 - link

    That part was pretty obvious. The current question is, how much of a bottleneck. Will DX12 be enough to put AMD in the lead (once final code starts shipping), or just catch up?
  • lefty2 - Thursday, September 24, 2015 - link

    I wonder if they were pressured not to release any benchmark that would make Nvidia look bad, similar to the way they did with Ashes of the Singularity.
