The 2018 GPU Benchmark Suite & the Test

Another year marks another update to our GPU benchmark suite. This time around, however, it is more of a maintenance update than a complete overhaul. Although we've done some extended compute and deep learning benchmarking in the past year, and even some HDR gaming impressions, our compute and synthetic lineup remains largely the same. But before getting into the details, let's start with the bulk of the benchmarking, and the biggest reason for these cards anyhow: games.

Joining the 2018 game list are Far Cry 5, Wolfenstein II, Final Fantasy XV, and Middle-earth: Shadow of War. We are also bringing in F1 2018 and Total War: Warhammer II. Returning from last year are Battlefield 1, Ashes of the Singularity: Escalation, and Grand Theft Auto V. All in all, these games span multiple genres, differing graphics workloads, and contemporary APIs, with a nod towards modern and relatively intensive titles.

AnandTech GPU Bench 2018 Game List

Game                                 | Genre             | Release Date | API(s)
Battlefield 1                        | FPS               | Oct. 2016    | DX11 (DX12)
Far Cry 5                            | FPS               | Mar. 2018    | DX11
Ashes of the Singularity: Escalation | RTS               | Mar. 2016    | DX12 (DX11, Vulkan)
Wolfenstein II: The New Colossus     | FPS               | Oct. 2017    | Vulkan
Final Fantasy XV: Windows Edition    | JRPG              | Mar. 2018    | DX11
Grand Theft Auto V                   | Action/Open World | Apr. 2015    | DX11
Middle-earth: Shadow of War          | Action/RPG        | Sep. 2017    | DX11
F1 2018                              | Racing            | Aug. 2018    | DX11
Total War: Warhammer II              | RTS               | Sep. 2017    | DX11 (DX12)

That said, Ashes as a DX12 trailblazer is no longer as hot and fresh as it once was, especially considering that the pace of DX12 and Vulkan adoption in new games has waned. The circumstances are worth an investigation of their own, but for now the learning curve of the modern low-level APIs may simply not be paying off in a convincing way. More generally, developers and publishers no longer advertise or document DX12 support as prominently as they used to, nor is it clearly labelled in game specifications, where DX11 is often the unmentioned default.

Particularly for NVIDIA and GeForce RTX, pushing DXR and raytracing means pushing DX12, of which DXR is a component. The API has a backstop in the form of the Xbox consoles and Windows 10, and if multi-GPU is to make a comeback, whether via compatible workloads (VR), flexible usage (ray tracing workload topologies), or just the plain old inevitability of Moore's Law slowing down, it will almost certainly come through the explicit multi-adapter capabilities of low-level APIs. So this is less likely to be the slow end of DX12.

In terms of data collection, measurements were gathered either with built-in benchmark tools or with AMD's open-source Open Capture and Analytics Tool (OCAT), which is itself powered by Intel's PresentMon. 99th percentile figures were likewise taken from the built-in tools where available or computed from OCAT frametime logs, which report 99th percentiles natively. In general, we prefer 99th percentiles over minimums, as they more accurately represent the gaming experience and filter out artificial outliers.
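
For readers who want to reproduce the math, the Python sketch below shows one way an average and 99th percentile framerate can be derived from a PresentMon/OCAT frametime log. It assumes a CSV with the standard 'MsBetweenPresents' column; the file name is purely illustrative, and this is not the exact script behind our charts.

    # Minimal sketch: summarizing a PresentMon/OCAT frametime log.
    import csv

    def summarize(path):
        with open(path, newline="") as f:
            frametimes = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]
        frametimes.sort()

        # Average framerate from total frames over total time.
        avg_fps = 1000.0 * len(frametimes) / sum(frametimes)

        # Nearest-rank 99th percentile frametime (the boundary of the slowest 1%),
        # reported as an FPS figure so it can sit next to the average on one chart.
        rank = max(0, int(round(0.99 * (len(frametimes) - 1))))
        p99_fps = 1000.0 / frametimes[rank]
        return avg_fps, p99_fps

    # Example usage (hypothetical log file name):
    # avg, p99 = summarize("ocat_bf1_4k_ultra.csv")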

We've also swapped out Blenchmark, which appears to have been abandoned in terms of updates, in favor of a BMW render from the Blender Institute Cycles Benchmark, and a more recent scene from a Cycles benchmark developer on Blenderartists.org. There were concerns with Blenchmark's small tile size, which is poorly suited to GPUs, and on the usability front we also ran into GPU detection errors that were traced back to faulty Blenchmark Python code.
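
As a rough illustration of how such a render can be scripted and timed, the sketch below runs a single-frame Cycles render headlessly; it assumes Blender is on the PATH and uses 'bmw27_gpu.blend' as a stand-in name for the GPU variant of the BMW scene, not our exact invocation.

    # Hedged sketch: time a headless single-frame render of a .blend scene.
    import subprocess, time

    def time_render(blend_file="bmw27_gpu.blend"):
        # Render frame 1 in background mode and return the wall-clock time.
        start = time.perf_counter()
        subprocess.run(
            ["blender", "--background", blend_file, "--render-frame", "1"],
            check=True,
        )
        return time.perf_counter() - start

    # Example: print("Render time: %.1f s" % time_render())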

Otherwise, we are also keeping an eye on a few trends and upcoming developments:

  • MLPerf machine learning benchmark suite
  • Blender Benchmark
  • Futuremark's 3DMark DirectX Raytracing benchmark
  • DXR and Vulkan raytracing extension support in games

Another point is that we do not yet have a permanent HDR monitor for our testbed, which would be necessary to incorporate HDR game testing in the near future; five games on our list actually support HDR. And as we look at technologies that enhance or alter image quality (e.g. HDR, Turing's DLSS), we will want to find a better way of comparing differences. This is particularly tricky with HDR, as screenshots are inapplicable and even accurate photographs will most likely be viewed on an SDR screen. With DLSS there is a built-in reference quality based on 64x supersampling, which in deep learning terms is the 'ground truth'; an intuitive solution would be a neural-network-based method of analyzing quality differences, but that is likely beyond our scope.
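
To give a sense of the simpler, non-neural-network comparison we have in mind, the sketch below scores a DLSS frame against a 64x supersampled 'ground truth' capture using PSNR and SSIM via scikit-image. The file names are hypothetical, it assumes aligned SDR screenshots of identical resolution, and such metrics only approximate perceived quality.

    # Hedged sketch: objective image-quality comparison against a reference capture.
    import numpy as np
    from PIL import Image
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    # Hypothetical, pre-aligned captures of the same frame at the same resolution.
    reference = np.asarray(Image.open("ffxv_64x_supersampled.png").convert("RGB"))
    dlss = np.asarray(Image.open("ffxv_dlss.png").convert("RGB"))

    print("PSNR:", peak_signal_noise_ratio(reference, dlss))
    # channel_axis=-1 treats the last axis as RGB channels (scikit-image 0.19+).
    print("SSIM:", structural_similarity(reference, dlss, channel_axis=-1))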

The following tech demos and test applications were provided via NVIDIA:

  • Star Wars 'Reflections' Demo (includes real time ray tracing and DLSS support)
  • Final Fantasy XV Official Benchmark (includes DLSS support)
  • Asteroids Demo (features mesh shading and variable LOD)
  • Epic Infiltrator Demo (features DLSS)

The Testbed

Because NVIDIA is not producing any reference-clocked GeForce RTX 2080 Ti or 2080 cards besides the Founders Editions, which ship above reference specifications, we've gone ahead and emulated the true reference specifications with a 90MHz downclock and a TDP reduction of roughly 10W. This keeps comparisons standardized and apples-to-apples, as we always compare reference-to-reference results.
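
For the curious, the sketch below shows one way the TDP half of that emulation could be applied programmatically through NVML's board power limit, using the pynvml bindings. It needs administrator rights, the 10W figure is only our approximation, the clock offset itself still has to come from a vendor overclocking tool rather than NVML, and this is an illustration rather than the exact method we used.

    # Hedged sketch: lower the board power limit by ~10 W via NVML (pynvml bindings).
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

    # Current board power limit in milliwatts; drop it by roughly 10 W.
    current_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)
    target_mw = current_mw - 10_000
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)  # requires admin rights

    print("Power limit: %.0f W -> %.0f W" % (current_mw / 1000, target_mw / 1000))
    pynvml.nvmlShutdown()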

In a classic case of Murphy's Law, our usual PSU started malfunctioning around the time of the review, but given the time constraints we couldn't do a 1:1 replacement in time. As it is a digital PSU, we had begun using it for PCIe power readings to augment system measurements, but for now we will have to stick with power draw measured at the wall. For the time being, we've swapped it out with another high-quality, high-wattage PSU.

CPU: Intel Core i7-7820X @ 4.3GHz
Motherboard: Gigabyte X299 AORUS Gaming 7 (F9g)
Power Supply: Corsair AX860i
              EVGA 1000 G3
Hard Disk: OCZ Toshiba RD400 (1TB)
Memory: G.Skill TridentZ DDR4-3200 4 x 8GB (16-18-18-38)
Case: NZXT Phantom 630 Windowed Edition
Monitor: LG 27UD68P-B
Video Cards: AMD Radeon RX Vega 64 (Air Cooled)
             NVIDIA GeForce RTX 2080 Ti Founders Edition
             NVIDIA GeForce RTX 2080 Founders Edition
             NVIDIA GeForce GTX 1080 Ti Founders Edition
             NVIDIA GeForce GTX 1080 Founders Edition
             NVIDIA GeForce GTX 980 Ti
             NVIDIA GeForce GTX 980
Video Drivers: NVIDIA Release 411.51 Press
               AMD Radeon Software Adrenalin Edition 18.9.1
OS: Windows 10 Pro (April 2018 Update)
Spectre/Meltdown Mitigations: Yes, both

Comments

  • V900 - Thursday, September 20, 2018 - link

    That’s plain false.

    Tomb Raider is a title out now with RTX enabled in the game.

    Battlefield 5 is out in a month or two (though you can play it right now) and will also utilize RTX.

    Sorry to destroy your narrative with the fact that one of the biggest titles this year is supporting RTX.

    And that’s of course just one out of a handful of titles that will do so, just in the next few months.

    Developer support seems to be the last thing that RTX2080 owners need to worry about, considering that there are dozens of titles, many of them big AAA games, scheduled for release just in the first half of 2019.
  • Skiddywinks - Friday, September 21, 2018 - link

    Unless I'm mistaken, TR does not support RTX yet. Obviously, otherwise it would be showing up in reviews everywhere. There is a reason every single reviewer is only benchmarking traditional games; that's all there is right now.
  • Writer's Block - Monday, October 1, 2018 - link

    Exactly.
    Is supporting or enabled.
    However, neither actually has it now to see, to experience.
  • eva02langley - Thursday, September 20, 2018 - link

    These cards are nothing more than a cheap magic trick show. Nvidia knew the performance was lackluster, and based their marketing on gimmicks to outmaneuver the competition, claiming that these are the future of gaming and you will be missing out without them.

    Literally, they basically tried to create a need... and if you are defending Nvidia over this, you have just been drinking the Kool-Aid at this point.

    Quote me on this: this will be the next GameWorks feature that devs won't bother touching. Why? Because devs develop games on consoles and port them to PC. The extra development time doesn't bring in any additional profit.
  • Skiddywinks - Friday, September 21, 2018 - link

    Here's the thing though, I don't think the performance is that lacklustre; the issue is we have this huge die and half of it does not do what most people want: give us more frames. If they had made the same size die with nothing but traditional CUDA cores, the 2080 Ti would be an absolute beast. And I'd imagine it would be a lot cheaper as well.

    But nVidia (maybe not mistakenly) have decided to push the raytracing path, and those of us who just want maximum performance for the price (me) and were waiting for the next 1080 Ti are basically left thinking "... oh well, skip".
  • eva02langley - Friday, September 21, 2018 - link

    Don't get me wrong, these cards are a normal generational performance upgrade; however, they are not the second coming that Nvidia is marketing them as.

    The problem here is that Nvidia wants to corner AMD, and the tactic they chose is RTX. However, RTX is nothing more than a FEATURE. The gamble could cost them a lot.

    If AMD's gaming and 7nm strategy pays off, devs will develop on AMD hardware and port to PC, leaving them no incentive to put in the extra work for a FEATURE.

    The extra cost of the bigger die should have gone towards gaming performance, but Nvidia's strategy is to disrupt the competition and further their position as a monopoly as much as they can.

    PhysX didn't work, HairWorks didn't work, and this will not work. As cool as it is, this should have been a feature for pro cards only, not consumers.
  • mapesdhs - Thursday, September 27, 2018 - link

    That's the thing though, they aren't a "normal" upgrade performance jump, because the prices make no sense.
  • AnnoyedGrunt - Thursday, September 20, 2018 - link

    This reminds me quite a bit of the original GeForce 256 launch. Not sure how many of you were following Anandtech back then, but it was my go-to site then just as it is now. Here are links to some of the original reviews:

    GeForce256 SDR: https://www.anandtech.com/show/391
    GeForce256 DDR: https://www.anandtech.com/show/429

    Similar to the 20XX series, the GeForce256 was Nvidia's attempt to change the graphics card paradigm, adding hardware transformation and lighting to the graphics card (and relieving the CPU of those tasks). The card was faster than contemporary cards, but also much more expensive, making the value questionable for many.

    At the time I was a young mechanical engineer, and I remember feeling that Nvidia was brilliant for creating this card. It let me run Pro/E R18 on my $1000 home computer, about as fast as I could on my $20,000 HP workstation. That card basically destroyed the market of workstation-centric companies like SGI and Sun, as people could now run CAD packages on a windows PC.

    The 20XX series gives me a similar feeling, but with less obvious benefit to the user. The cards are as fast or faster than the previous generation, but are also much more expensive. The usefulness is likely there for developers and some professionals like industrial designers who would love to have an almost-real-time, high quality, rendered image. For gamers, the value seems to be a stretch.

    While I was extremely excited about the launch of the original GeForce256, I am a bit "meh" about the 20XX series. I am looking to build a new computer and replace my GTX 680/i5-3570K, but this release has not changed the value equation at all.

    If I look at Wolfenstein, then a strong argument could be made for the 2080 being more future proof, but pretty much all other games are a wash. The high price of the 20XX series means that the 1080 prices aren't dropping, and I doubt the 2070 will change things much since it looks like it would be competing with the vanilla 1080, but costing $100 more.

    Looks like I will wait a bit more to see how that price/performance ends up, but I don't see the ray-tracing capabilities bringing immediate value to the general public, so paying extra for it doesn't seem to make a lot of sense. Maybe driver updates will improve performance in today's games, making the 20XX series look better than it does now, but I think like many, I was hoping for a bit more than an actual reduction in the performance/price ratio.

    -AG
  • eddman - Thursday, September 20, 2018 - link

    How much was a 256 at launch? I couldn't find any concrete pricing info, but let's go with $500 to be safe. That's just $750 in today's dollars for what is arguably the most revolutionary nvidia video card.
  • Ananke - Thursday, September 20, 2018 - link

    Yep, and it also wasn't selling well among "gamers" as a novelty; it only became popular after falling under $100 a pop years later. Same here: financial analysts say the expected revenue from gaming products will drop in the near future, and Wall Street has already dropped NVidia. The product is good, but expensive; it is not going to sell in volume, and their revenue will drop in the coming quarters.
    Apple's XS phone was the same, but Apple started a buy-one-get-one campaign the very next day, plus upfront discounts and a solid iPhone buyback program. Yet it is not clear whether they will achieve volume and revenue growth within the priced-in expectations.
    These are public companies - they make money from Wall Street, and they /NVidia/ can lose much more and much faster on the capital markets than what they would gain in profitability from lower-volume, high-end boutique products. This was a relatively sh**y launch - NVidia actually didn't want to launch anything, they wanted to sell off their glut of GTX inventory first, but they already had silicon ordered and made at TSMC, and couldn't just sit on it waiting...
