Gaming Tests: Final Fantasy XIV

Despite carrying a lower number than Final Fantasy XV, Final Fantasy XIV is a massively-multiplayer online title, which means yearly expansion packages that also bring opportunities for graphical updates. In 2019, FFXIV launched its Shadowbringers expansion, and an official standalone benchmark was released alongside it so that users could gauge what level of performance to expect. Much like the FF15 benchmark we’ve been using for a while, this test is a long 7-minute scene of simulated gameplay within the title. There are a number of interesting graphical features, and it certainly looks more like a 2019 title than a 2010 release, which is when FF14 first came out.

With this being a standalone benchmark, we do not have to worry about game updates changing the workload, and the idea behind these sorts of tests for end-users is to keep the code base consistent. For our testing suite, we are using the following settings:

  • 768p Minimum, 1440p Minimum, 4K Minimum, 1080p Maximum

As with the other benchmarks, we repeat runs until 10 minutes have passed per resolution/setting combination, and then take the average. Realistically, given the length of this test, this equates to two runs per setting.
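The time-budgeted averaging described above can be sketched as a short loop. This is an illustrative sketch only, not AnandTech's actual harness; `run_benchmark` is a hypothetical callable standing in for one pass of the FFXIV benchmark that returns its average FPS.

```python
import time

def average_fps(run_benchmark, budget_s=600):
    """Start benchmark passes until the time budget (default 10 minutes)
    is exhausted, then return the mean of the per-pass FPS results.

    A new pass begins whenever budget remains, so a ~7-minute test
    typically ends up running twice, as noted in the text."""
    results = []
    start = time.monotonic()
    # Always do at least one pass; keep starting passes while budget remains.
    while not results or time.monotonic() - start < budget_s:
        results.append(run_benchmark())  # one full pass -> average FPS
    return sum(results) / len(results)
```

With a 600-second budget and a roughly 420-second pass, the loop starts a second pass at the 7-minute mark, matching the two-runs-per-setting behaviour described above.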

[Charts: average FPS at Low Resolution (Low Quality), Medium Resolution (Low Quality), High Resolution (Low Quality), and Medium Resolution (Max Quality)]

As the resolution increases, the Core i9-11900K pulls ahead on average frame rate, but once the quality settings are raised it falls back down again, landing behind the older Intel CPUs.

All of our benchmark results can also be found in our benchmark engine, Bench.

279 Comments

  • mitox0815 - Tuesday, April 13, 2021 - link

    "Just abandon"...those clocks you dream of might have been possible on certain CPUs, but definitely not on a broader line-up. The XPs ran hot enough as it was; squeezing more out of them would've made no sense. THAT they tried with the 9590...and failed miserably. Not to mention people could OC the Northwoods too, beyond 3.6 or 3.7 GHz in fact...negating that point entirely. As was said...Northwood, especially the FSB800 ones with HT, were the top dogs until the A64 came around and showed them the door. Prescott was...ambitious, to put it nicely.
  • TheinsanegamerN - Wednesday, March 31, 2021 - link

    Netburst was built for both high clock speeds and predictable workloads, such as video editing, where it did quite well. Obviously it royally sucked for unpredictable workloads like gaming, but you could see where intel was heading with the idea.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    'you could see where intel was heading with the idea'

    Creating the phrase 'MHz myth' in the public consciousness.
  • GeoffreyA - Friday, April 2, 2021 - link

    "MHz myth in the public consciousness"

    And it largely worked, even in the K8 era with the non-enthusiast public. Only when Core 2 Duo dropped to lower clocks was it accepted overnight that, yes, lower clocks are now all right because Intel says so.
  • Prosthetic Head - Tuesday, March 30, 2021 - link

    Your point still stands; however, P4 was also a VERY low bar to measure IPC improvements against.
  • Hifihedgehog - Tuesday, March 30, 2021 - link

    Well, Bulldozer was too and look what AMD did with Ryzen...
  • Oxford Guy - Saturday, April 3, 2021 - link

    AMD had a long time. 2011 is stamped onto the spreader of Piledriver and that was only a small incremental change from Bulldozer, which is even older.
  • Oxford Guy - Saturday, April 3, 2021 - link

    And Bulldozer had worse IPC than Phenom. So AMD had basically an eternity, in tech terms, to improve on the IPC of what it was offering. It made Zen 1 seem a lot more revolutionary.
  • GeoffreyA - Saturday, April 3, 2021 - link

    "It made Zen 1 seem a lot more revolutionary"

    You're right; and if one compares against Haswell or Skylake, one will see that the Intel and AMD designs are crudely the same from a bird's-eye point of view, except for AMD's split-scheduler inherited from the Athlon. I think that goes to show there's pretty much only one way to make an efficient x86 CPU (notice departures are disastrous: Netburst/Bulldozer). Having said that, I'm glad AMD went through the BD era: taught them a great deal. Also forced them to start from scratch, which took their design further than revising K10 would have done.
