Update 2016/03/07: Well, so much for that. Fable Legends has been canceled, so another game will ultimately get to claim the title of the first Unreal Engine 4-based DX12 game.

DirectX 12 is now out in the wild as part of Windows 10 and the updated WDDM 2.0 driver model that comes with it. Unlike DX11, there were no major gaming titles at launch - we are still waiting for games to take advantage of DX12 and to see what difference it makes to the gaming experience. One of the main focal points of DX12 is draw calls: leveraging multiple processor cores to dispatch GPU workloads, rather than the previous model where a single core did most of the work. DX12 brings a lot of changes aimed at increasing performance and offering an even more immersive experience, but it also shifts some support requirements, such as multi-GPU handling (SLI or CrossFire), onto the engine developers. We tackled two synthetic tests earlier this year, Star Swarm and 3DMark, but due to timing and other industry events we are holding off on the Ashes of the Singularity benchmark until the game nears completion. In the meantime, a PR team got in contact with us regarding the upcoming Fable Legends title, built on Unreal Engine 4, along with an early-access preview benchmark. Here are our results so far.
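
To make the draw call point concrete, here is a minimal sketch of the multi-threaded submission model described above. The variable names and the per-thread scene split are our own invention for illustration, but the interfaces involved (ID3D12GraphicsCommandList, ExecuteCommandLists) are the standard D3D12 API; fencing and error handling are omitted.

```cpp
// Sketch of DX12-style multi-threaded command recording (illustrative only).
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void RecordAndSubmit(ID3D12CommandQueue* queue,
                     std::vector<ComPtr<ID3D12CommandAllocator>>& allocators,
                     std::vector<ComPtr<ID3D12GraphicsCommandList>>& lists)
{
    // Under DX11, one thread fed the immediate context. Under DX12, each
    // worker thread records draw calls into its own command list, backed
    // by its own allocator, so recording scales across CPU cores.
    std::vector<std::thread> workers;
    for (size_t i = 0; i < lists.size(); ++i) {
        workers.emplace_back([&, i] {
            lists[i]->Reset(allocators[i].Get(), nullptr);
            // ... record this thread's slice of the scene's draw calls ...
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    // Submission itself is still a single call from one thread.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```

Only the final ExecuteCommandLists call is serialized; the recording itself requires no driver-side lock, which is what lets the extra CPU cores actually contribute.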

Fable Legends

Fable Legends is an Xbox One/Windows 10 exclusive free to play title built by Lionhead Studios in Unreal Engine 4. The game, styled as a ‘cooperative action RPG’, consists of asymmetrical multiplayer matches with attackers trying to raid a base and the defender playing more of a tower defense position.

The benchmark provided is more of a graphics showpiece than a representation of the gameplay, designed to show off the capabilities of the engine and the DX12 implementation. As a result we unfortunately did not get to see any of the actual gameplay, which seems to focus more on combat. This is one of the first DirectX 12 benchmarks available - Ashes of the Singularity by Stardock was released just before IDF, but due to scheduling we have not had a chance to dig into that one yet - so this will be our first look at a DirectX 12 game engine with a game attached.


Official Trailer

This benchmark pans through several outdoor scenes in a fashion similar to the Unigine Valley benchmark, focusing on landscapes, distance drawing and tessellation rather than an up-front first-person perspective. Graphical effects such as dynamic global illumination are computed on the fly, creating subtle differences in the lighting, and the benchmark shows the day/night cycle being accelerated, similar to the large Grand Theft Auto benchmark. The engine itself draws on DX12 explicit features such as ‘asynchronous compute, manual resource barrier tracking, and explicit memory management’, which allow the application to better take advantage of the available hardware and give developers finer control over multi-threaded work and GPU memory resources. The updated engine has had several additions to implement these visual effects, and the use of DirectX 12 is promised to improve both the experience and performance.
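
As an illustration of one of those explicit features, the sketch below shows what manual resource barrier tracking looks like in D3D12. The resource and command list names are hypothetical, but the structure and call are the standard API; the point is that under DX12 the engine, not the driver, must declare these state transitions.

```cpp
// Illustrative sketch of a manual resource barrier in D3D12.
// 'shadowMap' and 'cmdList' are hypothetical names; the API is standard.
#include <d3d12.h>

void TransitionShadowMap(ID3D12GraphicsCommandList* cmdList,
                         ID3D12Resource* shadowMap)
{
    // Declare that the shadow map is done being written as a depth target
    // and will next be read by pixel shaders. DX11 drivers tracked this
    // automatically; DX12 engines must issue it explicitly.
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = shadowMap;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_DEPTH_WRITE;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);
}
```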

The Test

The software provided to us is a prerelease version of Fable Legends, running on early drivers, so the performance at this point is most likely not representative of the game at launch and should improve before release. What we will see here is more of a broad picture of how different GPUs scale when DX12 features are thrown into the mix. In fact, AMD sent us a note that a new driver is available specifically for this benchmark which should improve scores on the Fury X, although it arrived too late for this pre-release look at Fable Legends (Ryan did the testing but is covering Samsung’s 950 Pro launch in Korea at this time). This underscores just how early in the game and driver development cycle DirectX 12 is for all players. As with most important titles, we expect drivers and software updates to continue to drive performance forward as developers and engineers come to understand how the new version of DirectX works.

With that being said, there do not appear to be any stability issues with the benchmark as it stands, and we have had time to test graphics cards going back a few generations for both AMD and NVIDIA. Our pre-release package came with three test standards at 1280x720, 1920x1080 and 4K. We also tested a number of these combinations under multiple CPU core and thread count simulations in order to emulate a number of popular CPUs on the market.

CPU: Intel Core i7-4960X in 3 modes:
    'Core i7' - 6 Cores, 12 Threads at 4.2 GHz
    'Core i5' - 4 Cores, 4 Threads at 3.8 GHz
    'Core i3' - 2 Cores, 4 Threads at 3.8 GHz
Motherboard: ASRock Fatal1ty X79 Professional
Power Supply: Corsair AX1200i
Hard Disk: Samsung SSD 840 EVO (750GB)
Memory: G.Skill RipjawZ DDR3-1866 4 x 8GB (9-10-9-26)
Case: NZXT Phantom 630 Windowed Edition
Monitor: Asus PQ321
Video Cards:
    AMD Radeon R9 Fury X
    AMD Radeon R9 290X
    AMD Radeon R9 285
    AMD Radeon HD 7970
    NVIDIA GeForce GTX 980 Ti
    NVIDIA GeForce GTX 970 (EVGA)
    NVIDIA GeForce GTX 960
    NVIDIA GeForce GTX 680
    NVIDIA GeForce GTX 750 Ti
Video Drivers: NVIDIA Release 355.82
    AMD Catalyst 15.201.1102
OS: Windows 10

This Test

All the results in this piece are from discrete GPUs. The benchmark outputs a score, which is merely the average frame rate multiplied by one hundred, but it also dumps an extensive data log where it tracks over 186 different elements of the system every frame, such as the compute time for various effects in each frame. Our testing takes on three roles - a direct GPU comparison of average frame rates at 1080p and 720p in our i7-4960X mode, CPU scaling at each resolution with the GTX 980 Ti and the AMD Fury X, and then a deep analysis of the percentile data for these two graphics cards at each resolution and each CPU configuration.
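
For readers curious how a log like that reduces to the numbers we quote, the sketch below shows the arithmetic. The function names and the assumption that frame times arrive in milliseconds are ours, not Lionhead's tooling, but the score formula (average FPS times one hundred) and the percentile approach match what is described above.

```cpp
// Hedged sketch of reducing a per-frame log to a score and percentiles.
// Assumes a non-empty vector of frame times in milliseconds.
#include <algorithm>
#include <numeric>
#include <vector>

// Score = average frame rate (FPS) x 100, as the benchmark reports it.
double BenchmarkScore(const std::vector<double>& frameTimesMs)
{
    double totalMs = std::accumulate(frameTimesMs.begin(),
                                     frameTimesMs.end(), 0.0);
    double avgFps  = 1000.0 * frameTimesMs.size() / totalMs;
    return avgFps * 100.0;
}

// Nth-percentile frame time, e.g. p = 0.99 gives the frame time that 99%
// of frames beat - a better stutter indicator than the average alone.
double PercentileFrameTimeMs(std::vector<double> frameTimesMs, double p)
{
    std::sort(frameTimesMs.begin(), frameTimesMs.end());
    size_t idx = static_cast<size_t>(p * (frameTimesMs.size() - 1));
    return frameTimesMs[idx];
}
```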

141 Comments

  • HotBBQ - Friday, September 25, 2015 - link

    Please avoid using green and red together for plots. It's nigh impossible to distinguish if you are red-green colorblind (the most common).
  • Crunchy005 - Friday, September 25, 2015 - link

    So we have a 680, 970, 980 Ti. Why is there a 980 missing and no 700 series cards from NVIDIA? The 700s were the original cards to go against things like the 7970, 290, 290X. Would be nice to see whether those cards are still relevant, although the lack of them showing in benchmarks says otherwise. Also the 980 missing is a bit concerning.
  • Daniel Williams - Friday, September 25, 2015 - link

    It's mostly time constraints that limit our GPU selection. So many GPUs with not so many hours in the day. In this case Ryan got told about this benchmark just two days before leaving for the Samsung SSD global summit and just had time to bench the cards and hand the numbers to the rest of us for the writeup.

    It surely would be great if we had time to test everything. :-)
  • Oxford Guy - Friday, September 25, 2015 - link

    Didn't Ashes come out first?
  • Daniel Williams - Friday, September 25, 2015 - link

    Yes but our schedule didn't work out. We will likely look at it at a later time, closer to launch.
  • Oxford Guy - Saturday, September 26, 2015 - link

    So the benchmark that favors AMD is brushed to the side but this one fits right into the schedule.

    This is the sort of stuff that makes people wonder about this site's neutrality.
  • Brett Howse - Saturday, September 26, 2015 - link

    I think you are fishing a bit here. We didn't even have a chance to test Ashes because of when it came out (right at Intel's Developer Forum) so how would we even know it favored AMD? Regardless, now that Daniel is hired on hopefully this will alleviate the backlog on things like this. Unfortunately we are a very small team so we can't test everything we would like to, but that doesn't mean we don't want to test it.
  • Oxford Guy - Sunday, September 27, 2015 - link

    Ashes came out before this benchmark, right? So, how does it make sense that this one was tested first? I guess you'd know by reading the ArsTechnica article that showed up to a 70% increase in performance for the 290X over DX11 as well as much better minimum frame rates.
  • Ananke - Friday, September 25, 2015 - link

    Hmm, this review is kinda pathetic... NVidia has NO async scheduler in the GPU, the scheduler is in the driver, aka it needs CPU cycles. Then, async processors are one per compute cluster instead of one per compute unit, i.e. a smaller number.
    So, once you run a DX12 game with all the AI inside, in a heavy scene it will be CPU constrained and the GPU driver will not have enough resources to schedule, and performance will drop significantly. Unless you somehow manage to prioritize the GPU driver, aka have a dedicated CPU thread/core for it in the game engine... which is exactly what DX12 was supposed to avoid - that abstraction layer of DX11 is not there anymore.
    So, yeah, NVidia is great in geometry calculations, it's always been, this review confirms it again.
  • The_Countess - Friday, September 25, 2015 - link

    Often the Fury X GAINS FPS as the CPU speed goes down from i7 to i5 and i3.

    3 FPS gained in the ultra benchmark going from the i7 to the i3, and 7 in the low benchmark between the i7 and the i5.
