Final Words

Non-final benchmarks are a tough element to pin down. On one hand, they do not show the full range of performance and graphical enhancements, and can be subject to critical rendering paths that cause performance issues. On the other hand, they are near-final representations of the developers' aspirations, with the game engine almost settled. To say that a preview benchmark is somewhere from 50% to 90% representative of the final product is not a bold statement in these circumstances, but between those two numbers can lie a world of difference.

Fable Legends, developed by Lionhead Studios and published by Microsoft, uses EPIC's Unreal Engine 4. Every element of that sentence carries gravitas in the gaming industry: Fable is a well-known franchise, Lionhead is a successful game developer, Microsoft is Microsoft, and EPIC's Unreal engines have powered triple-A gaming titles for the better part of two decades. With the right ingredients, therein lies the potential for that melt-in-the-mouth cake, as long as the oven is set just right.

Convoluted cake metaphors aside, this article set out to test the new Fable Legends benchmark in DirectX 12. As it stands, the software build we received indicated that the benchmark and game are still in 'early access preview' mode, so improvements may happen down the line. Users are interested in how DX12 games will both perform and scale on different hardware and at different settings, and we aimed to fill in some of those blanks today. We used several AMD and NVIDIA GPUs, mainly focusing on NVIDIA's GTX 980 Ti and AMD's Fury X, with Core i7-X (six cores with HyperThreading), Core i5 (quad core, no HT), and Core i3 (two cores, HT) system configurations. These two GPUs were tested at 3840x2160 (4K) with Ultra settings, 1920x1080 with Ultra settings, and 1280x720 with Low settings.

On pure average frame rate numbers, we saw NVIDIA's GTX 980 Ti ahead by just under 10% in all configurations except at 1280x720, where the Fury X took a substantial (10%+ on the i5 and i3) lead. Looking at CPU scaling, meaningful scaling only really occurred at 1280x720 anyway, with both AMD and NVIDIA showing a 20-25% gain moving from a Core i3 to a Core i7. Some of the older cards showed a smaller 7% improvement over the same test.
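
As a quick back-of-the-envelope check on how such scaling figures fall out, here is a minimal sketch in Python; the FPS values are placeholders for illustration, not our measured results:

```python
# Illustration of the CPU-scaling arithmetic used above.
# The average FPS values below are placeholders, not measured data.

avg_fps = {
    "Core i3": 52.0,  # hypothetical 1280x720 average
    "Core i5": 60.0,
    "Core i7": 64.0,
}

baseline = avg_fps["Core i3"]
for cpu, fps in avg_fps.items():
    gain = (fps / baseline - 1.0) * 100.0
    print(f"{cpu}: {fps:.1f} FPS ({gain:+.1f}% vs Core i3)")
```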

Looking through the frame rate profile data, specifically at the minimum percentile numbers, we saw an interesting pattern: on the Core i7 (six cores, HT) platform, frame rates on the most complex frames were beaten by the Core i5 and even the Core i3 setups, despite the Core i7 performing better on the easier-to-compute frames. In our graphs, this produced a tilted profile akin to a seesaw.
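
For those curious how a percentile profile like this is built from raw frame data, a small sketch follows; the frame times are synthetic, and the simple index-based percentile is only an approximation, not the method the benchmark tool itself uses:

```python
# Sketch: derive percentile frame times (and effective FPS) from raw
# per-frame render times. The data here is synthetic, for illustration.
import random

random.seed(0)
frame_times_ms = [max(1.0, random.gauss(16.7, 4.0)) for _ in range(5000)]

def percentile(values, pct):
    """Simple index-based percentile; the hardest frames sit at the top end."""
    ordered = sorted(values)
    idx = min(len(ordered) - 1, int(len(ordered) * pct / 100.0))
    return ordered[idx]

for pct in (50, 95, 99):
    t = percentile(frame_times_ms, pct)
    print(f"{pct}th percentile frame time: {t:.2f} ms ({1000.0 / t:.1f} FPS)")
```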

Comparing the separate compute profile time data provided by the benchmark showed that the Core i7 was taking longer on a few of the lighting techniques, perhaps pointing to cache or scheduling issues, at either the CPU or the GPU end, that are alleviated with fewer cores in the mix. This may come down to the memory controller not being bombarded with higher-priority requests, causing a shuffle in the data request queue.
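
The kind of per-pass comparison involved can be sketched as follows; the pass names and millisecond timings here are hypothetical, not the benchmark's actual sub-result output:

```python
# Sketch: flag render passes where one CPU configuration takes longer
# than another. Pass names and timings are hypothetical.

pass_times_ms = {
    # pass name:          (Core i7 ms, Core i5 ms)
    "Dynamic lighting":    (4.10, 3.85),
    "Compute shader work": (3.20, 3.25),
    "Post processing":     (2.40, 2.42),
}

for name, (i7, i5) in pass_times_ms.items():
    delta = (i7 / i5 - 1.0) * 100.0
    note = "  <- slower on the i7" if delta > 1.0 else ""
    print(f"{name:20s} i7 {i7:.2f} ms vs i5 {i5:.2f} ms ({delta:+.1f}%){note}")
```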

Fable Legends Beta: Render Sub-Results (Core i7, 3840x2160)

When we do a direct comparison between AMD's Fury X and NVIDIA's GTX 980 Ti in the render sub-category results at 4K using a Core i7, both AMD and NVIDIA have their strong points in this benchmark. NVIDIA favors illumination, compute shader work, and GBuffer rendering, while AMD favors post-processing, transparency, and dynamic lighting.
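
As an illustration of how such a sub-category comparison can be tallied, here is a short sketch; the category names mirror those above, but the millisecond figures are invented for the example:

```python
# Sketch: report which card leads each render sub-category.
# Lower per-pass time is better. Timings are invented examples.

categories_ms = {
    # category:          (GTX 980 Ti ms, Fury X ms)
    "Illumination":       (5.1, 5.6),
    "Compute shaders":    (3.0, 3.3),
    "GBuffer rendering":  (2.2, 2.5),
    "Post processing":    (2.6, 2.3),
    "Transparency":       (1.9, 1.7),
    "Dynamic lighting":   (4.4, 4.0),
}

for name, (nv, amd) in categories_ms.items():
    leader = "GTX 980 Ti" if nv < amd else "Fury X"
    margin = abs(nv - amd) / max(nv, amd) * 100.0
    print(f"{name:18s} {leader} leads by {margin:.0f}%")
```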

DirectX 12 arrives with new effects to make games look better and new features to allow developers to extract more performance from our hardware. Fable Legends uses EPIC's Unreal Engine 4 with added effects and represents a multi-year effort to develop the engine around DX12's feature set and ultimately improve performance over DX11. With this benchmark we have begun to peek a little into what actual graphics performance in games might be like, and whether DX12 benefits users on low-powered CPUs or high-end GPUs more. That being said, there is a good chance that the performance we've seen today will change by release due to driver updates and/or optimized game code. Nevertheless, at this point it does appear that a reasonably strong card such as the 290X or GTX 970 is needed to get a smooth 1080p experience (at Ultra settings) with this demo.

Comments

  • HotBBQ - Friday, September 25, 2015 - link

    Please avoid using green and red together for plots. It's nigh impossible to distinguish if you are red-green colorblind (the most common).
  • Crunchy005 - Friday, September 25, 2015 - link

    So we have a 680, 970, and 980 Ti. Why is the 980 missing, and why no 700-series cards from NVIDIA? The 700s were the original cards to go against things like the 7970, 290, and 290X. It would be nice to see whether those cards are still relevant, although their absence from benchmarks says otherwise. Also, the missing 980 is a bit concerning.
  • Daniel Williams - Friday, September 25, 2015 - link

    It's mostly time constraints that limit our GPU selection. So many GPUs with not so many hours in the day. In this case Ryan was told about this benchmark just two days before leaving for the Samsung SSD Global Summit and only had time to bench the cards and hand the numbers to the rest of us for the write-up.

    It surely would be great if we had time to test everything. :-)
  • Oxford Guy - Friday, September 25, 2015 - link

    Didn't Ashes come out first?
  • Daniel Williams - Friday, September 25, 2015 - link

    Yes, but our schedule didn't work out. We will likely look at it at a later time, closer to launch.
  • Oxford Guy - Saturday, September 26, 2015 - link

    So the benchmark that favors AMD is brushed to the side, but this one fits right into the schedule.

    This is the sort of stuff that makes people wonder about this site's neutrality.
  • Brett Howse - Saturday, September 26, 2015 - link

    I think you are fishing a bit here. We didn't even have a chance to test Ashes because of when it came out (right at Intel's Developer Forum), so how would we even know it favored AMD? Regardless, now that Daniel has been hired on, hopefully this will alleviate the backlog on things like this. Unfortunately we are a very small team, so we can't test everything we would like to, but that doesn't mean we don't want to test it.
  • Oxford Guy - Sunday, September 27, 2015 - link

    Ashes came out before this benchmark, right? So, how does it make sense that this one was tested first? I guess you'd know by reading the ArsTechnica article that showed up to a 70% increase in performance for the 290X over DX11 as well as much better minimum frame rates.
  • Ananke - Friday, September 25, 2015 - link

    Hmm, this review is kinda pathetic... NVidia has NO async scheduler in the GPU; the scheduler is in the driver, aka it needs CPU cycles. Also, the async processors are one per compute cluster instead of one per compute unit, i.e. a smaller number.
    So, once you run a DX12 game with all the AI inside, a heavy scene will be CPU-constrained and the GPU driver will not have enough resources to schedule, so performance will drop significantly. Unless you somehow manage to prioritize the GPU driver, aka give it a dedicated CPU thread/core in the game engine... which is exactly what DX12 was supposed to avoid: a higher level of virtualization. That abstraction layer of DX11 is not there anymore.
    So, yeah, NVidia is great at geometry calculations, it always has been, and this review confirms it again.
  • The_Countess - Friday, September 25, 2015 - link

    Often the Fury X GAINS FPS as the CPU goes down from an i7 to an i5 and i3.

    3 FPS gained in the Ultra benchmark going from the i7 to the i3, and 7 FPS in the Low benchmark between the i7 and the i5.
