Final Fantasy XV (DX11)

Upon arriving on PC earlier this year, Final Fantasy XV: Windows Edition was given a graphical overhaul as it was ported over from console, the fruits of Square Enix's successful partnership with NVIDIA, with hardly any hint of the troubles of Final Fantasy XV's original production and development.

In preparation for the launch, Square Enix opted to release a standalone benchmark, which they have since updated. The Final Fantasy XV standalone benchmark gives us a lengthy standardized sequence on which to utilize OCAT. Upon release, the benchmark received criticism for performance issues and general bugginess, as well as for its confusing graphical presets and measurement of performance by 'score'. In its original iteration, graphical settings could not be adjusted, leaving the user to presets that were tied to resolution, along with hidden settings such as GameWorks features.

Since then, Square Enix has patched the benchmark with custom graphics settings and bugfixes, making it more accurate in profiling in-game performance and graphical options, though the 'score' measurement remains. For our testing, we enable or adjust all settings to their highest values, except for NVIDIA-specific features and 'Model LOD', the latter of which is left at standard. Final Fantasy XV also supports HDR, and it will support DLSS at some later date.

Final Fantasy XV - 2560x1440 - Ultra Quality

Final Fantasy XV - 1920x1080 - Ultra Quality

Final Fantasy XV - 99th Percentile - 2560x1440 - Ultra Quality

Final Fantasy XV - 99th Percentile - 1920x1080 - Ultra Quality

Final Fantasy XV is another strong title for NVIDIA across the board, and the GTX 1660 Ti comes very close to the RX Vega 64, let alone surpassing the RX 590 and RX Vega 56.

The GTX 960 is clearly out of its element, and given the 99th percentiles it's fair to say that the 2GB framebuffer shoulders a good amount of the blame. By comparison, this makes the GTX 1660 Ti look exceedingly good at offering basically triple the performance (and amusingly, triple the VRAM).


  • Rudde - Friday, February 22, 2019 - link

    Never mind, the second page explains this well. (Parallel execution of fp16, fp32 and int32)
  • CiccioB - Saturday, February 23, 2019 - link

    Not only that.
    With Turing you also get mesh shading and better support for thread switching, which is an awful technique used on GCN to improve its terrible efficiency, as it has lots of "bubbles" in the pipelines.
    That's the reason you see that previously AMD-optimized games which didn't run too well on Pascal work much better on Turing, as the highly threaded technique (the famous AC, which is a bit overused in engines created for console HW) is not going to constantly stall the SMs with useless work such as frequent task switching.
  • AciMars - Saturday, February 23, 2019 - link

    "Worse yet, the space used per SM has gotten worse." Not true. You know, Turing has separate CUDA cores for int and fp. It means that when Turing has 1536 CUDA cores, it really has 1536 int + 1536 fp cores. So in terms of die size, Turing actually has 2x the CUDA cores compared to Pascal.
  • CiccioB - Monday, February 25, 2019 - link

    Not exactly; the number of CUDA cores is the same, it's just that a new independent ALU has been added.
    A CUDA core is not only an execution unit; it also includes registers, memory (cache), buses (memory access) and other special execution units (load/store).
    By adding a new integer ALU you don't automatically get double the capacity, as you would by really doubling the number of complete CUDA cores.
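    To make the int/fp mix the commenters are debating concrete, here is a hypothetical sketch in C of a shader-style inner loop (the function name and values are illustrative, not from the article): every floating-point multiply-add is accompanied by integer address arithmetic, which is the kind of work Turing's separate INT32 datapath can issue alongside the FP32 pipe, whereas on Pascal both op types competed for the same issue slots.

    ```c
    #include <stddef.h>

    /* Hypothetical shader-style inner loop: for every FP32 multiply-add
     * there is integer work (index/address math). Turing's independent
     * INT32 ALU lets that integer work overlap with the FP32 work. */
    float weighted_sum(const float *data, size_t n, size_t stride)
    {
        float acc = 0.0f;
        for (size_t i = 0; i < n; i++) {
            size_t idx = i * stride;  /* integer multiply: address math */
            acc += data[idx] * 0.5f;  /* FP32 multiply-add */
        }
        return acc;
    }
    ```

    The point of the sketch is simply that the integer instructions here are overhead in service of the FP math, which is why a separate integer datapath helps throughput without the core count itself doubling.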
  • ballsystemlord - Friday, February 22, 2019 - link

    Here are some spelling and grammar corrections.

    This has proven to be one of NVIDIA's bigger advantages over AMD, an continues to allow them to get away with less memory bandwidth than we'd otherwise expect some of their GPUs to need.
    Missing d as in "and":
    This has proven to be one of NVIDIA's bigger advantages over AMD, and continues to allow them to get away with less memory bandwidth than we'd otherwise expect some of their GPUs to need.
    so we've only seen a handful of games implement (such as Wolfenstein II) implement it thus far.
    Double implement, 1 before ()s and 1 after:
    so we've only seen a handful of games (such as Wolfenstein II) implement it thus far.

    For our games, these results is actually the closest the RX 590 can get to the GTX 1660 Ti,
    Use "are" not "is":
    For our games, these results are actually the closest the RX 590 can get to the GTX 1660 Ti,

    This test offers a slew of additional tests - many of which use behind the scenes or in our earlier architectural analysis - but for now we'll stick to simple pixel and texel fillrates.
    Missing "we" (I suspect that the sentence should be reconstructed without the "-"s, but I'm not that good.):
    This test offers a slew of additional tests - many of which we use behind the scenes or in our earlier architectural analysis - but for now we'll stick to simple pixel and texel fillrates.

    "Looking at temperatures, there are no big surprises here. EVGA seems to have tuned their card for cooling, and as a result the large, 2.75-slot card reports some of the lowest numbers in our charts, including a 67C under FurMark when the card is capped at the reference spec GTX 1660 Ti's 120W limit."
    I think this could be clarified, as there are 2 EVGA cards in the charts and the one at 67C is not explicitly labeled as EVGA.

    Thanks
  • Ryan Smith - Saturday, February 23, 2019 - link

    Thanks!
  • boozed - Friday, February 22, 2019 - link

    The model numbers have become quite confusing
  • Yojimbo - Saturday, February 23, 2019 - link

    I don't think they are confusing: 16 is between 10 and 20, plus the RTX is extra differentiation. In fact, if NVIDIA had some cards in the 20 series with RTX capability and some cards in the 20 series without RTX capability, even if some were 'GTX' and some were 'RTX', that would be far more confusing. Putting the non-RTX Turing cards in their own series is a way of avoiding confusion. But if they actually come out with an "1180", as some rumors floating around say, that would be very confusing.
  • haukionkannel - Saturday, February 23, 2019 - link

    It will be interesting to see next year.
    RTX 3050 and GTX 2650 Ti for the weaker versions, if we get a new RTX family of cards... Hmm... that could work if they keep the naming. 2021: RTX 3040 and GTX 2640 Ti...
  • CiccioB - Thursday, February 28, 2019 - link

    Next generation, all cards will have enough RT and tensor cores enabled.
