Final Fantasy XV (DX11)

Upon arriving on PC earlier this year, Final Fantasy XV: Windows Edition received a graphical overhaul as it was ported over from console, the fruits of Square Enix's partnership with NVIDIA, with hardly any hint of the troubles of Final Fantasy XV's original production and development.

In preparation for the launch, Square Enix opted to release a standalone benchmark that they have since updated. The Final Fantasy XV standalone benchmark gives us a lengthy standardized sequence on which to utilize OCAT. Upon release, the benchmark drew criticism for performance issues and general bugginess, as well as for its confusing graphical presets and its 'score'-based performance measurement. In its original iteration, the graphical settings could not be adjusted individually, leaving the user with presets tied to resolution and to hidden settings such as GameWorks features.

Since then, Square Enix has patched the benchmark with custom graphics settings and bugfixes, making it much more accurate in profiling in-game performance and graphical options, though the 'score' measurement remains. For our testing, we set all settings to their highest values except for NVIDIA-specific features, which are disabled, and 'Model LOD', which is left at standard. Final Fantasy XV also supports HDR, and it will support DLSS at a later date.
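
For readers looking to reproduce these figures, below is a minimal sketch of how the average FPS and 99th percentile numbers can be derived from an OCAT frametime capture. It assumes OCAT's PresentMon-style CSV output with per-frame times in a 'MsBetweenPresents' column; the filename and the simple nearest-rank percentile are illustrative rather than a description of our exact pipeline.

```python
import csv
import statistics

def summarize_ocat_capture(csv_path):
    """Compute average FPS and 99th-percentile framerate from an OCAT/PresentMon
    capture, using the per-frame times in the MsBetweenPresents column."""
    frametimes_ms = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            frametimes_ms.append(float(row["MsBetweenPresents"]))

    avg_fps = 1000.0 / statistics.mean(frametimes_ms)

    # 99th-percentile frametime (nearest rank): only ~1% of frames are slower,
    # then converted to an equivalent framerate for reporting.
    p99_ms = sorted(frametimes_ms)[int(0.99 * (len(frametimes_ms) - 1))]
    p99_fps = 1000.0 / p99_ms

    return avg_fps, p99_fps

if __name__ == "__main__":
    # Hypothetical capture filename from a single benchmark run.
    avg, p99 = summarize_ocat_capture("ffxv_benchmark_capture.csv")
    print(f"Average: {avg:.1f} fps, 99th percentile: {p99:.1f} fps")
```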

[Benchmark charts: Final Fantasy XV at 1920x1080, 2560x1440, and 3840x2160, showing Average FPS and 99th Percentile framerates]

NVIDIA, of course, is working closely with Square Enix, and the game is naturally expected to run well on NVIDIA cards in general, but the 1080 Ti truly lives up to its gaming flagship reputation in matching the RTX 2080. With Final Fantasy XV, the Founders Edition power and clocks again prove useful, with the 2080 FE pipping the 1080 Ti, while the 2080 Ti FE makes it across the psychological 60fps mark at 4K.

337 Comments

  • Holliday75 - Friday, September 21, 2018 - link

    Good thing there are cops around to keep me honest. If they weren't, I'd go on a murder spree and blame them for it.
  • Yojimbo - Wednesday, September 19, 2018 - link

    It's NVIDIA making a conscious decision to spend its engineering resources on innovating and implementing new technologies that will shift the future of gaming, instead of spending that energy and die space on increasing performance as much as it can in today's titles. If NVIDIA had left out the RT cores and other new technologies, they could have easily increased performance 50 or 60% in legacy technologies by building chips bigger than Pascal but smaller than Turing, while increasing prices only moderately. Then everyone would be happy getting a card that would lead them into a gaming torpor. In a few years, when everyone is capable of running at 4K and over 60 fps, they'd get bored and wonder why the industry was going nowhere.
  • NikosD - Wednesday, September 19, 2018 - link

    NVIDIA has done the same thing in the past, introducing new technologies and platforms like tessellation, PhysX, HairWorks, GameWorks, GPP, etc.
    All of these proved to be just tricks to kill the competition, as always, which nowadays means killing AMD.
    Pseudo-raytracing is not an innovation or something mandatory for gaming.
    It's just another premature technology that the competition doesn't have, so that NVIDIA can be unique again, at a huge cost to the consumer and with a performance regression.

    I repeat.

    Skip that Turing fraud.
  • maximumGPU - Thursday, September 20, 2018 - link

    I don't think it's fair to compare ray tracing to HairWorks...
    Ray tracing is a superior way to render graphics compared to rasterisation; there's no question about this.
  • Lolimaster - Saturday, September 22, 2018 - link

    But with what? NVIDIA RTX only does it on a small part of a frame, in selected scenes, on tensor cores repurposed for that.

    You will need tensor cores in the hundreds to make the NVIDIA implementation more "wowish", and in the thousands to actually talk about ray tracing being a thing.

    Consoles dictate gaming progress, and AMD holds that market.
  • Lolimaster - Saturday, September 22, 2018 - link

    Exactly. To start talking about actual ray tracing, or at least ray tracing most parts of a scene, we need 10-100x the current GPU performance.
  • Yojimbo - Saturday, September 22, 2018 - link

    GPP was a partner promotion program. HairWorks is part of GameWorks. PhysX is part of GameWorks. GameWorks is not a trick, and neither is the PhysX part of it. But neither of them compares to ray tracing. Maybe you should look up what the word "pseudo" means, because you're using it wrong.

    In a year or a year and a half, AMD will have their own ray tracing acceleration hardware, and then you'll be all in on it.

    As for killing AMD, NVIDIA are not interested in it. It wouldn't be good for them, anyway. NVIDIA are, however, interested in building their platform and market dominance.
  • Eris_Floralia - Thursday, September 20, 2018 - link

    I've read all your comments and still struggle to find any consistent logic.
  • eva02langley - Thursday, September 20, 2018 - link

    NVIDIA is shoving Ray Tracing development down gamers' throats. We are paying for something that we didn't even want in the first place.

    You didn't even know about Ray Tracing and DLSS before they were announced. You are just drinking the Kool-Aid, unlike many of us who stand up and rage against these INDECENT prices.
