Fable Legends Early Preview: DirectX 12 Benchmark Analysis
by Ryan Smith, Ian Cutress & Daniel Williams on September 24, 2015 9:00 AM EST

CPU Scaling
When it comes to how well a game scales with a processor, DirectX 12 is somewhat of a mixed bag, for two reasons. On one hand, it allows GPU commands to be issued from every CPU core, removing the single-core performance limit that hindered a number of DX11 titles and helping configurations with fewer cores or lower clock speeds. On the other hand, because every thread in the system can now issue commands, heavy scenes can pile on the work, pushing the cliff edge for high-powered cards further down the line or making the visual effects at the high end very impressive, which is perhaps something benchmarking like this won't capture.
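To make that mechanism concrete, below is a minimal, hypothetical sketch of how a DirectX 12 title can record command lists on several worker threads and then submit them from a single queue. This is not code from the Fable Legends benchmark; error handling is omitted and names such as kWorkerThreads are purely illustrative.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

int main()
{
    // Create the device and a direct command queue (link against d3d12.lib).
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // One command allocator and command list per worker thread.
    const int kWorkerThreads = 4;                 // illustrative thread count
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(kWorkerThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kWorkerThreads);
    std::vector<std::thread> workers;

    for (int i = 0; i < kWorkerThreads; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        // Each worker records its own slice of the frame in parallel.
        workers.emplace_back([&lists, i] {
            // ... SetPipelineState, ResourceBarrier, DrawInstanced, etc. ...
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    // The recorded lists are handed to the GPU in a single submission.
    std::vector<ID3D12CommandList*> submit;
    for (auto& l : lists) submit.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(submit.size()), submit.data());
    return 0;
}
```

Under DX11 the equivalent work would funnel through a single immediate context on one thread, which is precisely the bottleneck that the multithreaded submission above removes.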
For our CPU scaling tests, we took the two high-end cards tested and placed them in each of our Core i7 (6C/12T), Core i5 (4C/4T), and Core i3 (2C/4T) environments, at three different resolution/setting configurations similar to the previous page, and recorded the results.
Looking solely at the GTX 980 Ti to begin with, we see that for now the Fable benchmark only scales at the low resolution and graphics quality setting. Moving up to 1080p or 4K sees similar performance no matter the processor, perhaps even a slight decrease at 4K, but this is well within a 2% variation.
On the Fury X, the tale is similar and yet stranger. The Fable benchmark is canned, so it should be running the same data each time, but in all three circumstances the Core i7 trails behind the Core i5. Perhaps in this instance there are too many threads on the processor contesting for bandwidth, giving some slight cache pressure (one wonders if some eDRAM might help). But again we see no real scaling improvement moving from the Core i3 to the Core i7 at 1920x1080 and 3840x2160.
As we’ve seen in previous reviews, the effects of CPU scaling with regard to resolution depend on both the CPU architecture and the GPU architecture, with each GPU manufacturer performing differently and two different models in the same silicon family also differing in scaling results. To that end, we actually see a boost at 1280x720 with the AMD 7970 and the GTX 680 when moving from the Core i3 to the Core i7.
If we look at the rendering time breakdown between GPUs on high-end configurations, we get the following data. Numbers here are listed in milliseconds, so lower is better:
Looking at the 980 Ti and Fury X, we see that NVIDIA is significantly faster at GBuffer rendering, Dynamic Global Illumination, and Compute Shader Simulation & Culling. Meanwhile AMD pulls narrower leads in every other category, including the ambiguous 'other'.
Dropping down a couple of tiers to the GTX 970 and R9 290X, we see some minor variations. The R9 290X has good leads in dynamic lighting and 'other', with smaller leads in Compute Shader Simulation & Culling and Post Processing. The GTX 970 benefits significantly on dynamic global illumination.
What do these numbers mean? Overall, it appears that NVIDIA has a strong hold on deferred rendering and global illumination, while AMD has benefits with dynamic lighting and compute.
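As for where per-pass figures like these come from: we don't have visibility into the benchmark's internal instrumentation, but the usual approach in D3D12 is to bracket each pass with GPU timestamp queries and convert the tick delta using the queue's timestamp frequency. The following is a rough, hypothetical sketch; query heap and readback buffer creation, fencing, and the passes themselves are omitted, and all names are illustrative.

```cpp
#include <d3d12.h>
#include <cstring>

// Brackets one rendering pass with two GPU timestamps. queryHeap is assumed to
// be a D3D12_QUERY_HEAP_TYPE_TIMESTAMP heap with at least two slots, and
// readbackBuffer a READBACK-heap resource large enough for two UINT64 values.
void RecordPassTimestamps(ID3D12GraphicsCommandList* cmdList,
                          ID3D12QueryHeap* queryHeap,
                          ID3D12Resource* readbackBuffer)
{
    cmdList->EndQuery(queryHeap, D3D12_QUERY_TYPE_TIMESTAMP, 0);
    // ... record the pass being measured here (e.g. dynamic lighting) ...
    cmdList->EndQuery(queryHeap, D3D12_QUERY_TYPE_TIMESTAMP, 1);

    // Copy both timestamps into the CPU-readable buffer.
    cmdList->ResolveQueryData(queryHeap, D3D12_QUERY_TYPE_TIMESTAMP,
                              0, 2, readbackBuffer, 0);
}

// Once the command list has executed and a fence confirms GPU completion,
// convert the two raw timestamps into an elapsed time in milliseconds.
double ReadPassMilliseconds(ID3D12CommandQueue* queue,
                            ID3D12Resource* readbackBuffer)
{
    UINT64 ticks[2] = {};
    void* mapped = nullptr;
    D3D12_RANGE readRange = { 0, sizeof(ticks) };
    readbackBuffer->Map(0, &readRange, &mapped);
    std::memcpy(ticks, mapped, sizeof(ticks));
    readbackBuffer->Unmap(0, nullptr);

    UINT64 frequency = 0;                  // GPU timestamp ticks per second
    queue->GetTimestampFrequency(&frequency);
    return (ticks[1] - ticks[0]) * 1000.0 / static_cast<double>(frequency);
}
```

Whatever the benchmark's actual harness does internally, deltas of this kind are what end up expressed as the millisecond breakdown discussed above.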
141 Comments
Traciatim - Thursday, September 24, 2015
RAM generally has very little to no impact on gaming except for a few strange cases (like F1). Though the machine still has its full cache available, so the i3 test isn't quite the same thing as a real i3, it should be close enough that you wouldn't notice the difference.
Mr Perfect - Thursday, September 24, 2015
In the future, could you please include/simulate a 4 core/8 thread CPU? That's probably what most of us have.

Oxford Guy - Thursday, September 24, 2015
How about Ashes running on a Fury and a 4.5 GHz FX CPU.

Oxford Guy - Thursday, September 24, 2015
and a 290X, of course, paired against a 980

vision33r - Thursday, September 24, 2015
Just because a game supports DX12 doesn't mean it uses all DX12 features. It looks like they have DX12 as a check box but aren't really utilizing the complete set of DX12 features. We have to see more DX12 implementations to know for sure how each card stacks up.

Wolfpup - Thursday, September 24, 2015
I'd be curious about a DirectX 12 vs 11 test at some point.

Regarding Fable Legends, WOW am I disappointed by what it is. I shouldn't be in a sense, I mean I'm not complaining that Mario Baseball isn't a Mario game, but still, a "free" to play deathmatch type game isn't what I want and isn't what I think of with Fable (even if, again, this could really be good for people who want it, and not a bad use of the license).
Just please don't make a sequel to New Vegas or Mass Effect or Bioshock that's deathmatch LOL
toyotabedzrock - Thursday, September 24, 2015
You should have used the new driver given you were told it was related to this specific game preview.

Shellshocked - Thursday, September 24, 2015
Does this benchmark use Async compute?

Spencer Andersen - Thursday, September 24, 2015
Negative, Unreal Engine does NOT use Async compute except on Xbox One. Considering that is one of the main features of the newer APIs, what does that tell you? Nvidia + Unreal Engine = BFF. But I don't see it as a big deal considering that Frostbite and likely other engines already have most if not all DX12 features built in, including Async compute.

Great article guys, looking forward to more DX12 benchmarks. It's an interesting time in gaming to say the least!
oyabun - Thursday, September 24, 2015
There is something wrong with the webpages of the article: an ad by Samsung seems to cover the entire page and messes up all the rendering. Furthermore, wherever I click a new tab opens at www.space.com! I had to reload several times just to be able to post this!