CPU Scaling

When it comes to how well a game scales with the processor, DirectX 12 is somewhat of a mixed bag, for two reasons. On one hand, it allows GPU commands to be issued from every CPU core, removing the single-core performance limit that hindered a number of DX11 titles and helping configurations with fewer cores or lower clock speeds. On the other hand, because all the threads in a system can issue commands, a heavy scene can pile on the work, pushing the cliff edge for high-powered cards further down the line or making the visual effects at the high end very impressive, which is perhaps something benchmarking like this won’t capture.

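To make that difference concrete, the sketch below shows the general shape of multi-threaded command submission under Direct3D 12 in C++. It is a minimal illustration only, assuming a valid ID3D12Device and ID3D12CommandQueue already exist, and the function and variable names are ours rather than anything from Fable Legends or Unreal Engine; under DX11, the equivalent recording work would largely funnel through a single immediate context.

    // Minimal sketch: per-thread command list recording in Direct3D 12.
    // Assumes a valid device and queue are created elsewhere.
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <thread>
    #include <vector>

    using Microsoft::WRL::ComPtr;

    void RecordFrameInParallel(ID3D12Device* device, ID3D12CommandQueue* queue,
                               unsigned workerCount)
    {
        std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(workerCount);
        std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);
        std::vector<std::thread>                       workers;

        for (unsigned i = 0; i < workerCount; ++i)
        {
            // Allocators and command lists are not thread-safe, so each
            // worker thread gets its own pair.
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                           IID_PPV_ARGS(&allocators[i]));
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                      allocators[i].Get(), nullptr,
                                      IID_PPV_ARGS(&lists[i]));

            workers.emplace_back([&lists, i] {
                // Real draw/dispatch recording for this slice of the frame
                // would happen here, entirely on this worker thread.
                lists[i]->Close();
            });
        }
        for (auto& t : workers) t.join();

        // Only the submission itself is serialized onto the queue; the
        // recording above ran in parallel across the CPU cores.
        std::vector<ID3D12CommandList*> submit;
        for (auto& list : lists) submit.push_back(list.Get());
        queue->ExecuteCommandLists(static_cast<UINT>(submit.size()), submit.data());
    }

The key point is that each worker thread records into its own command list, so the CPU-side cost of building a frame spreads across the available cores, with only the final ExecuteCommandLists call serialized onto the queue.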
For our CPU scaling tests, we took the two high-end cards tested and placed them in each of our Core i7 (6C/12T), Core i5 (4C/4T), and Core i3 (2C/4T) environments, at three different resolution/setting configurations similar to the previous page, and recorded the results.

Fable Legends Beta: GTX 980 Ti Scaling

Fable Legends Beta: GTX 980 Ti Scaling %

Looking solely at the GTX 980 Ti to begin with, we see that for now the Fable benchmark only scales at the low resolution and graphics quality settings. Moving up to 1080p or 4K sees similar performance no matter what the processor – perhaps even a slight decrease at 4K, but this is well within a 2% variation.

Fable Legends Beta: AMD Fury X Scaling

Fable Legends Beta: AMD Fury X Scaling %

On the Fury X, the tale is similar and yet stranger. The Fable benchmark is canned, so it should be running the same data each time – but in all three circumstances the Core i7 trails the Core i5. Perhaps in this instance there are too many threads on the processor contending for bandwidth, giving some slight cache pressure (one wonders if some eDRAM might help). But again we see no real scaling improvement moving from the Core i3 to the Core i7 in our 1920x1080 and 3840x2160 tests.

Fable Legends Beta: Other CPU Scaling %, 720p

As we’ve seen in previous reviews, the effects of CPU scaling at a given resolution depend on both the CPU architecture and the GPU architecture: each GPU manufacturer performs differently, and two different models in the same silicon family can also differ in their scaling results. To that end, we actually see a boost at 1280x720 with both the AMD 7970 and the GTX 680 when moving from the Core i3 to the Core i7.

If we look at the rendering time breakdown between GPUs in the high-end configurations, we get the following data. Numbers here are listed in milliseconds, so lower is better:

Fable Legends Beta: Render Sub-Results (Core i7, 3840x2160)

Looking at the 980 Ti and Fury X, we see that NVIDIA is significantly faster at GBuffer rendering, Dynamic Global Illumination, and Compute Shader Simulation & Culling. Meanwhile AMD pulls out narrower leads in every other category, including the ambiguous 'other'.

Fable Legends Beta: Render Sub-Results (Core i7, 3840x2160)

Dropping down a couple of tiers to the GTX 970 and R9 290X, we see some minor variations. The R9 290X has good leads in dynamic lighting and 'other', with smaller leads in Compute Shader Simulation & Culling and Post Processing. The GTX 970 benefits significantly on dynamic global illumination.

What do these numbers mean? Overall, it appears that NVIDIA has a strong hold on deferred rendering and global illumination, while AMD has the advantage in dynamic lighting and compute.

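To put the millisecond figures in context: the sub-results for a frame add up to the total render time, and dividing 1000 by that total gives the frame rate. The short sketch below walks through the conversion using hypothetical pass timings, not the measured Fable Legends numbers.

    // Converting per-pass render times (ms) into a frame rate.
    // The pass timings below are hypothetical, for illustration only.
    #include <cstdio>
    #include <numeric>
    #include <vector>

    int main()
    {
        std::vector<double> passMs = { 6.0, 4.5, 3.0, 2.5, 2.0, 7.0 };

        double frameMs = std::accumulate(passMs.begin(), passMs.end(), 0.0); // 25.0 ms
        double fps     = 1000.0 / frameMs;                                   // 40 FPS

        std::printf("Frame time: %.1f ms -> %.1f FPS\n", frameMs, fps);
        return 0;
    }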
Comments

  • Alexvrb - Friday, September 25, 2015 - link

    Ship them both to the East Coast and set up a Review Office / Beach Resort, complete with community events!
  • zimanodenea - Thursday, September 24, 2015 - link

    My Asus m5a97 has an option to do this.
  • mdriftmeyer - Thursday, September 24, 2015 - link

    Time to develop a test harness of equal merit and scope across the globe for the reviewers. To do less is unprofessional. The whole point of a test harness is not to duct-tape simulations together but to cover all bases.
  • Spunjji - Friday, September 25, 2015 - link

    Well said. This isn't some tinpot organisation, is it? ;)
  • Drumsticks - Thursday, September 24, 2015 - link

    That's a shame. I'd really like to see that comparison. With the improvements Zen should, in theory, bring, it could really give AMD its best chance in years to get some wind in its sails.
  • beck2050 - Thursday, September 24, 2015 - link

    A little too early to worry about. Hopefully both companies will improve when DX12 becomes standard issue.
  • DrKlahn - Thursday, September 24, 2015 - link

    Epic has always worked closely with Nvidia and used their hardware, so the only thing that surprises me is that the gap doesn't favor Nvidia more. It's very early to make any predictions, but there are some interesting conversations on other forums about how both architectures behave in different situations. Nvidia's architecture does appear to have issues in some asynchronous workloads. What little evidence we have says this may be an issue in some games.

    My own opinion is that with Nvidia's market dominance we will see most developers try to avoid situations where problems occur. As an AMD owner, my main hope is that we see DX12 squeeze out proprietary code and level the playing field more. I'm also happy that the latest Unreal engine appears to run well on both vendors' hardware.
  • jiao lu - Thursday, September 24, 2015 - link

    It's not only the close working relationship – Unreal 3/4 use NVIDIA's PhysX SDK outright. The Epic engine is terribly optimized for consoles right now; it is basically a PC engine, churning out PC demos now and then. Far fewer AAA studios use Unreal 4 now than used Unreal 3 in the PS3/Xbox 360 era. I am very much suspicious that Unreal 5 will not have sufficiently multi-threaded rendering, and will use DX12 the way it did DX11 before.
  • Midwayman - Thursday, September 24, 2015 - link

    Well, the Xbox One is using AMD hardware and DX12. That's probably a bigger reason to keep it neutral than Nvidia's larger share on the PC.
  • Spunjji - Friday, September 25, 2015 - link

    The PS4 is also using the same AMD CPU cores and GCN 1.0 GPU architecture.
