Final Words

Non-final benchmarks are tough to assess. On one hand, they do not show the full range of performance and graphical enhancements, and they may still contain critical rendering paths that cause performance issues. On the other hand, they are near-final representations of the developers' aspirations, built on an engine that is almost settled. To say that a preview benchmark is somewhere from 50% to 90% representative of the final product is not much of a bold statement in these circumstances, but between those two numbers can be a world of difference.

Fable Legends, developed by Lionhead Studios and published by Microsoft, uses EPIC’s Unreal 4 engine. All the elements of that previous sentence have gravitas in the gaming industry: Fable is a well-known franchise, Lionhead is a successful game developer, Microsoft is Microsoft, and EPIC’s Unreal engines have powered triple-A gaming titles for the best part of two decades. With the right ingredients, therein lies the potential for that melt-in-the-mouth cake as long as the oven is set just right.

Convoluted cake metaphors aside, this article set out to test the new Fable Legends benchmark in DirectX 12. As it stands, the software build we received indicated that the benchmark and game are still in 'early access preview' mode, so improvements may happen down the line. Users are interested in how DX12 games will both perform and scale on different hardware and different settings, and we aimed to fill in some of those blanks today. We used several AMD and NVIDIA GPUs, mainly focusing on NVIDIA's GTX 980 Ti and AMD's Fury X, with Core i7-X (six cores with HyperThreading), Core i5 (quad core, no HT), and Core i3 (two cores, HT) system configurations. These two GPUs were also tested at 3840x2160 (4K) with Ultra settings, 1920x1080 with Ultra settings, and 1280x720 with low settings.

On pure average frame rate numbers, NVIDIA's GTX 980 Ti led by just under 10% in all configurations except at 1280x720, where the Fury X took a substantial (10%+ on the i5 and i3) lead. As for CPU scaling, meaningful scaling only really occurred at 1280x720 anyway, with both AMD and NVIDIA showing a 20-25% gain moving from a Core i3 to a Core i7. Some of the older cards showed a smaller 7% improvement over the same test.
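Comparisons like these reduce to simple relative differences between average frame rates. The sketch below shows the calculation; the frame rate values are hypothetical placeholders for illustration, not the article's measured numbers.

```python
# Hypothetical average frame rates (fps) -- illustrative only,
# not the measured values from this article.
results = {
    "GTX 980 Ti": {"4K Ultra": 30.0, "1080p Ultra": 70.0, "720p Low": 115.0},
    "Fury X":     {"4K Ultra": 27.5, "1080p Ultra": 64.0, "720p Low": 128.0},
}

def pct_lead(a: float, b: float) -> float:
    """Percentage lead of frame rate a over frame rate b."""
    return (a / b - 1.0) * 100.0

for setting in results["GTX 980 Ti"]:
    nv = results["GTX 980 Ti"][setting]
    amd = results["Fury X"][setting]
    # Whichever card has the higher average fps is the leader at this setting.
    leader, trailer = ("GTX 980 Ti", "Fury X") if nv > amd else ("Fury X", "GTX 980 Ti")
    lead = pct_lead(max(nv, amd), min(nv, amd))
    print(f"{setting}: {leader} leads {trailer} by {lead:.1f}%")
```

The same `pct_lead` arithmetic applies to the CPU scaling figures: a Core i3-to-i7 gain of 20-25% is just the i7 average divided by the i3 average, minus one.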

Looking through the frame rate profile data, specifically the minimum/percentile numbers, we saw an interesting pattern: on the most complex frames, the Core i7 (six cores, HT) platform was beaten by the Core i5 and even the Core i3 setups, even though the Core i7 performed better on the frames that were easier to compute. In our graphs this produced a tilted axis, akin to a seesaw:
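The percentile analysis behind those graphs can be sketched as follows: record per-frame render times, then read off the median versus the high percentiles, where the high percentiles capture the complex, worst-case frames. The frame times below are made-up illustrative values, not benchmark data.

```python
import numpy as np

# Hypothetical per-frame render times in milliseconds (illustrative only).
frame_times_ms = np.array([14.2, 15.1, 13.9, 16.8, 33.5, 14.6, 15.0, 41.2, 14.1, 15.3])

# Instantaneous frame rate for each frame.
fps = 1000.0 / frame_times_ms

# The median describes the typical frame; the 99th percentile of frame
# *time* describes the slowest ~1% of frames -- the "complex frames"
# where the article saw the Core i7 fall behind the i5 and i3.
median_time = np.percentile(frame_times_ms, 50)
p99_time = np.percentile(frame_times_ms, 99)

print(f"median frame time: {median_time:.1f} ms ({1000.0/median_time:.1f} fps)")
print(f"99th percentile frame time: {p99_time:.1f} ms ({1000.0/p99_time:.1f} fps)")
```

A "seesaw" between two platforms then shows up as one platform having the lower median frame time but the higher 99th-percentile frame time.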

Comparing the separate compute profile time data provided by the benchmark showed that the Core i7 was taking longer on a few of the lighting techniques, perhaps pointing to cache or scheduling issues at either the CPU or GPU end that were alleviated with fewer cores in the mix. This may come down to the memory controller not being bombarded with higher-priority requests that cause a shuffle in the data request queue.

Fable Legends Beta: Render Sub-Results (Core i7, 3840x2160)

When we do a direct comparison between AMD's Fury X and NVIDIA's GTX 980 Ti in the render sub-category results for 4K using a Core i7, both AMD and NVIDIA have their strong points in this benchmark. NVIDIA favors illumination, compute shader work, and GBuffer rendering, whereas AMD favors post processing, transparency, and dynamic lighting.

DirectX 12 is coming in with new effects to make games look better and new features to allow developers to extract more performance from our hardware. Fable Legends uses EPIC's Unreal Engine 4 with added effects and represents a multi-year effort to develop the engine around DX12's feature set and ultimately improve performance over DX11. With this benchmark we have begun to peek a little into what actual graphics performance in games might be like, and whether DX12 benefits users on low-powered CPUs or high-end GPUs more. That being said, there is a good chance that the performance we've seen today will change by release due to driver updates and/or optimized game code. Nevertheless, at this point it does appear that a reasonably strong card such as the 290X or GTX 970 is needed to get a smooth 1080p experience (at Ultra settings) with this demo.

141 Comments
  • anubis44 - Friday, October 30, 2015 - link

    The point is not whether you use DP, the point is that the circuitry is now missing, and that's why Maxwell uses less power. If I leave stuff out of a car, it'll be lighter, too. Hey look! No back seats anymore, and now it's LIGHTER! I'm a genius. It's not because nVidia whipped up a can of whoop-ass, or because they have magic powers, it's because they threw everything out of the airplane to make it lighter.
  • anubis44 - Friday, October 30, 2015 - link

    And left out the hardware based scheduler, which will bite them in the ass for a lot of DX12 games that will need this. No WAIT! nVidia isn't screwed! They'll just sell ANOTHER card to the nVidiots who JUST bought one that was obsolete, 'cause nVidia is ALWAYS better!
  • Alexvrb - Thursday, September 24, 2015 - link

    Not every game uses every DX12 feature, and knowing that their game is going to run on a lot of Nvidia hardware makes developers conservative in their use of new features that hurt performance on Nvidia cards. For example, as long as developers are careful with async compute and you've got plenty of CPU cycles, I think everything will be fine.

    Now, look at the 720p results. Why the change in the pecking order? Why do AMD cards increase their lead as CPU power falls? Is it a driver overhead issue - possibly related to async shader concerns? We don't know. Either way it might not matter, an early benchmark isn't even necessarily representative of the final thing, let alone a real-world experience.

    In the end it will depend on the individual game. I don't think most developers are going to push features really hard that kill performance on a large portion of cards... well not unless they get free middleware tools and marketing cash or something. ;)
  • cityuser - Sunday, September 27, 2015 - link

    I'm quite sure it's NVIDIA again doing some nasty work with the game company to degrade the performance of AMD cards!!!
    Look where NVIDIA cannot interfere: Futuremark's benchmark tells another story!!!
  • Drumsticks - Thursday, September 24, 2015 - link

    As always, it's only one data point. It was too early to declare AMD a winner then, but it's still too early to say they aren't actually going to benefit more from DX12 than Nvidia. We need more data to say for sure either way.
  • geniekid - Thursday, September 24, 2015 - link

    That's crazy talk.
  • Beararam - Thursday, September 24, 2015 - link

    Maybe not ''vastly superior'', but the gains in the 390x seem to be greater than those realized in the 980. Time will tell.

    https://youtu.be/_AH6pU36RUg?t=6m29s
  • justniz - Thursday, September 24, 2015 - link

    Such a large gain only on AMD just from DX12 (i.e. accessing the GPU at a lower level and bypassing AMD driver's DX11 implementation) is yet more evidence that AMD's DX11 drivers are much more of a bottleneck than nVidia's.
  • Gigaplex - Thursday, September 24, 2015 - link

    That part was pretty obvious. The current question is, how much of a bottleneck. Will DX12 be enough to put AMD in the lead (once final code starts shipping), or just catch up?
  • lefty2 - Thursday, September 24, 2015 - link

    I wonder if they were pressured not to release any benchmark that would make Nvidia look bad, similar to the way they did in Ashes of the Singularity
