Final Fantasy XV (DX11)

Upon arriving on PC earlier this year, Final Fantasy XV: Windows Edition was given a graphical overhaul in its port from console, the fruit of Square Enix's successful partnership with NVIDIA, with hardly any hint of the troubles that marked Final Fantasy XV's original production and development.

In preparation for the launch, Square Enix opted to release a standalone benchmark, which they have since updated. The Final Fantasy XV standalone benchmark gives us a lengthy standardized sequence on which to utilize OCAT. Upon release, the benchmark received criticism for performance issues and general bugginess, as well as for confusing graphical presets and for measuring performance by an opaque 'score'. In its original iteration, the graphical settings could not be adjusted, leaving the user with presets that were tied to resolution and with hidden settings such as GameWorks features.

Since then, Square Enix has patched the benchmark with custom graphics settings and bugfixes, making it much more accurate in profiling in-game performance and graphical options, though the 'score' measurement remains. For our testing, we enable or adjust all settings to their highest values except for NVIDIA-specific features and 'Model LOD', the latter of which is left at standard. Final Fantasy XV also supports HDR, and it will support DLSS at a later date.
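Both of the metrics charted below can be reproduced from a raw frametime log. As a rough sketch, assuming an OCAT/PresentMon-style CSV with a 'MsBetweenPresents' column (the filename here is hypothetical, and your capture's layout may differ), average FPS and the 99th-percentile figure can be computed like so:

```python
# Sketch: derive average FPS and 99th-percentile FPS from per-frame times.
# Assumes an OCAT/PresentMon-style CSV with a 'MsBetweenPresents' column;
# adjust the column name if your capture differs.
import csv
import statistics

def summarize(csv_path):
    with open(csv_path, newline="") as f:
        frametimes_ms = [float(row["MsBetweenPresents"])
                         for row in csv.DictReader(f)]

    # Average FPS: total frames over total elapsed time, not the mean of
    # instantaneous FPS values (which would overweight fast frames).
    avg_fps = 1000.0 * len(frametimes_ms) / sum(frametimes_ms)

    # 99th percentile: the frametime that 99% of frames beat, expressed
    # as FPS -- a proxy for worst-case smoothness rather than raw speed.
    p99_frametime_ms = statistics.quantiles(frametimes_ms, n=100)[98]
    p99_fps = 1000.0 / p99_frametime_ms
    return avg_fps, p99_fps

if __name__ == "__main__":
    avg, p99 = summarize("ocat_capture.csv")
    print(f"Average: {avg:.1f} fps   99th percentile: {p99:.1f} fps")
```

The 99th-percentile number will always sit below the average; the wider the gap, the more stutter hides behind a healthy-looking average.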

[Charts: Final Fantasy XV — Average FPS and 99th Percentile at 1920x1080, 2560x1440, and 3840x2160]

NVIDIA, of course, is working closely with Square Enix, and the game is naturally expected to run well on NVIDIA cards in general, but the 1080 Ti truly lives up to its gaming flagship reputation by matching the RTX 2080. With Final Fantasy XV, the Founders Edition power and clock advantages again prove highly useful, with the 2080 FE pipping the 1080 Ti, while the 2080 Ti FE makes it across the psychological 60fps mark at 4K.


337 Comments


  • Bp_968 - Sunday, December 2, 2018 - link

    Even though the review is older and this comment is a few months old, I just wanted to jump in and say "hah, look, eddman was right!" now that the Titan RTX leaks are showing up. Lol. They didn't even wait for supply to stabilize on the 2080 Ti before dropping the Titan.

    Plus, if the 2080 replaced the 1080 Ti, then why is it more expensive and no faster? That would be a first even for Nvidia.
  • PeachNCream - Thursday, September 20, 2018 - link

    The model numbers aren't that significant. NVIDIA could just as easily have released a 2080, a 2070, and a 2060 by putting different labels on the boxes of the 2080 Ti, the 2080, and the 2070, for instance. The Ti, the Titan, all of those are long-standing marketing identities that buyers now automatically associate with a certain relative scale of performance among other GPUs of the same generation. NVIDIA can play upon buyer expectations by releasing various products to fill those expectations in the way that best advances the company's interest. Any company with enough brand recognition can easily do the same; consider Intel's long-running i-series CPU numbering. The fact that something labeled as a Ti came out at a certain time isn't an example of technological development, but a way of meeting customer expectations in reflection of the MSRP. We would have balked much more at $1200 for the exact same product if it was labeled as a plain vanilla 2080 and the current vanilla 2080 was branded as a 2070. Instead, we say, "Well, the 2080 Ti is really expensive, but at least it's a Ti, so that makes it a little bit more reasonable."
  • eddman - Thursday, September 20, 2018 - link

    Model numbers are significant in that they point out the models in the same successive lineup. That's the entire point of them.

    I, and a lot of people, are not in this "we" you talk about. Again, nvidia themselves compare it to the 1080 Ti every chance they get, so I do not see why I should in any way think its price is "reasonable".

    That's not how past generational leaps worked, even for the 8800 GTX. We got massive performance gains AND usually new rendering features at similar MSRPs, or maybe a bit higher. The difference this time is that AMD has left the building, for now.
  • PeachNCream - Thursday, September 20, 2018 - link

    Don't misunderstand me. I'm not implying that the price is okay or that anyone should find it reasonable to stomach a $1200 MSRP for a mere graphics card. I also agree that part of the pricing problem is due to an absence of credible competition from AMD. I'm just arguing that the people in the NVIDIA marketing department may justify the price in part by slapping a Ti label on the box so consumers are less likely to balk during checkout. The reality is that we're getting a step sideways in performance for a noteworthy increase in TDP due to the addition of capabilities that may or may not actually add much value, because said features are too demanding to play nicely at high resolutions and because there are no indications that the software side will move to take advantage of said features. At best, the addition of the hardware won't be very compelling until the next generation of GPUs after Turing, when it's likely that performance will pick up a bit.

    Then again, who am I to talk? I play PC games on a laptop with an HD 4000 infrequently and end up mostly gaming on my ancient dual-core KitKat-era phone that I've been keeping as a cheap wireless mini tablet. To me, PC gaming became an overly pricey sink of my limited, single-parent free time. I'd rather bank my spare money in something that yields interest over time than throw it into gaming hardware that's obsolete in a matter of a few years. That, and my kids need me to be both of their parents these days since that worthless ex of mine schlepped off to marry some woman in Canada. *grumble*
  • tamalero - Thursday, September 20, 2018 - link

    More like they are pricing their high-end cards as if they were flagship cards.
    The 2080 Founders Edition seems identical in price to a 1080 Ti. That is unacceptable, especially when they are almost identical in performance (the 2080 going slower in most games by a few small points).

    They (Nvidia) just want to clear the huge build-up of Pascal cards by charging insanity to those who are willing to claim to be "gamers" with money. Period.
  • tamalero - Thursday, September 20, 2018 - link

    "You, like so many others don't get it. nVidia has re-worked their product lines. Didn't you notice how the Ti came out at the same time as the 2080?"
    What the hell does this have to do with anything? Nothing for the consumer, again.
  • tamalero - Thursday, September 20, 2018 - link

    "Die size is not irrelevant to consumers because increased die size means increased cost to manufacture. Increased cost to manufacture means a pressure for higher prices. The question is what you get in return for those higher prices.
    "
    You're repeating the same.
    Die size means NOTHING to a consumer. It means something for the manufacturer because it costs THEM.
    If the die doesnt benefit anything at all (Fermi) compared to smaller dies that offer almost the same performance (Pascal). Why would the consumer have to pay MORE for LESS?

    New tech is nothing if there is nothing to show, and there is NOTHING to show right now.
    By the time raytracing becomes really viable, the new generation of cards will be out.
  • Spunjji - Friday, September 21, 2018 - link

    This x1000. These cards are a necessary step towards getting the technology out there, but I'm thoroughly unconvinced that it's a good idea for anyone to buy them. The sacrifice in die area was too great, for far too little benefit. Given the strong indications that 1080p at ~45fps is where real-time raytracing will be right now, I just don't care. They sold me on high resolution and high framerate because those actually affect how much enjoyment I get from my games. I'm not interested in having that rug pulled from under my feet *and* paying damn near double the price for the privilege.
  • Morawka - Wednesday, September 19, 2018 - link

    Doesn't TSMC charge their customers by the wafer nowadays?
  • PopinFRESH007 - Wednesday, September 19, 2018 - link

    How does that matter? Are you suggesting that magically makes the die size irrelevant? If you have a 300mm wafer and you double the die size, you also halve the number of dies per wafer. And that ignores yield: a larger die is more costly to produce because you get fewer dies per wafer and increase the probability of having a defect within any given die.
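    To put rough numbers on that intuition, here is a back-of-the-envelope Python sketch using a common dies-per-wafer approximation and a simple Poisson yield model; the die areas and the defect density are illustrative assumptions, not foundry figures.

```python
# Back-of-the-envelope wafer math: doubling die area roughly halves the
# dies per wafer AND lowers the fraction of those dies that work.
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Common approximation: gross area divided by die area,
    minus an edge-loss correction for partial dies at the rim."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yield_rate(die_area_mm2, defects_per_mm2):
    """Poisson model: probability a die contains zero defects."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

# Illustrative numbers only: a ~300 mm^2 die vs. a doubled ~600 mm^2 die
# on a 300 mm wafer, with an assumed defect density of 0.001 per mm^2.
for area in (300.0, 600.0):
    candidates = dies_per_wafer(300.0, area)
    good = candidates * yield_rate(area, 0.001)
    print(f"{area:.0f} mm^2 die: {candidates} candidates, ~{good:.0f} good dies")
```

    Under these assumed numbers, doubling the die area cuts candidate dies by more than half (edge loss hits big dies harder) and drops yield from roughly 74% to 55%, so the cost per good die more than doubles even at a fixed price per wafer.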
