The Test

For the launch of the RTX 2080 Super, NVIDIA has rolled out a new set of drivers to enable the card: 431.56. These drivers don’t offer any performance improvements over the 431.15 drivers in our games, so the results are fully comparable.

Meanwhile, I've also tossed the Radeon RX 5700 XT into our graphs. While it's aimed at a distinctly lower market segment with its $399 price tag, it has the potential to be a very strong spoiler here, especially for 1440p gaming.

CPU: Intel Core i9-9900K @ 5.0GHz
Motherboard: ASRock Z390 Taichi
Power Supply: Corsair AX1200i
Hard Disk: Phison E12 PCIe NVMe SSD (960GB)
Memory: G.Skill Trident Z RGB DDR4-3600 2 x 16GB (17-18-18-38)
Case: NZXT Phantom 630 Windowed Edition
Monitor: Asus PQ321
Video Cards: NVIDIA GeForce RTX 2080 Super Founders Edition
NVIDIA GeForce RTX 2080 Ti Founders Edition
NVIDIA GeForce RTX 2080 Founders Edition
NVIDIA GeForce RTX 2070 Super Founders Edition
NVIDIA GeForce GTX 1080
NVIDIA GeForce GTX 980
AMD Radeon VII
AMD Radeon RX 5700 XT
AMD Radeon RX Vega 64
AMD Radeon R9 390X
Video Drivers: NVIDIA Release 431.15
NVIDIA Release 431.56
AMD Radeon Software Adrenalin 2019 Edition 19.7.1
OS: Windows 10 Pro (1903)

111 Comments

  • eastcoast_pete - Tuesday, July 23, 2019 - link

    Thanks Ryan! I know this is a bit niche, but could you add a short test and paragraph or so on NVENC performance when you review NVIDIA cards?
  • Ryan Smith - Tuesday, July 23, 2019 - link

    Is there a specific test you'd like to see? NVENC is a fixed function block, so it's not something I normally look at in individual video card reviews.
  • eastcoast_pete - Wednesday, July 24, 2019 - link

    Thanks Ryan, appreciate you considering it! I am particularly interested in NVENC performance when encoding or transcoding 2160p video to HEVC using ffmpeg with high-quality (so, not NVENC default) settings, ideally at 10-bit HDR. The clip currently used for the Handbrake CPU tests might serve for initial testing if it's captured in HDR. For gamers, it might also be of interest to test this function for capturing 1440p or 1080p gameplay with settings appropriate for streaming. I haven't done that myself (yet), but I believe some people here could provide suggestions.
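
To make the request concrete, here is a minimal sketch of the kind of run being described, assuming an ffmpeg build with NVENC support; the input clip, output path, and quality target are hypothetical placeholders, and exact NVENC option names can vary between ffmpeg and driver/SDK versions:

    # Minimal sketch: time a high-quality 10-bit HEVC (hevc_nvenc) transcode of a
    # 2160p HDR clip. The input clip name and quality settings are placeholders.
    import subprocess
    import time

    SOURCE = "source_2160p_hdr.mkv"   # hypothetical 2160p HDR input clip
    OUTPUT = "out_hevc_nvenc.mkv"     # hypothetical output path

    cmd = [
        "ffmpeg", "-y",
        "-i", SOURCE,
        "-c:v", "hevc_nvenc",      # NVENC HEVC encoder
        "-preset", "slow",         # higher-quality preset, not the NVENC default
        "-rc", "vbr",              # variable-bitrate rate control
        "-cq", "20",               # constant-quality target (lower = higher quality)
        "-profile:v", "main10",    # 10-bit HEVC profile
        "-pix_fmt", "p010le",      # 10-bit pixel format for NVENC
        "-c:a", "copy",            # pass audio through untouched
        OUTPUT,
    ]

    start = time.monotonic()
    subprocess.run(cmd, check=True)
    print(f"Encode finished in {time.monotonic() - start:.1f} s")

Preserving HDR metadata end to end generally requires additional color and mastering-metadata options on top of this, which is part of why a standardized HDR test clip would matter.
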
  • eastcoast_pete - Wednesday, July 24, 2019 - link

    Forgot: some key questions are what difference in NVENC performance, if any, exists between cards, and (for gaming) whether and how using NVENC affects game performance. I am mostly interested in the first. Thanks!
  • ballsystemlord - Saturday, July 27, 2019 - link

    I've been curious about GPU encode performance using ffmpeg for some time. ffmpeg can also use CUDA and OpenCL.
  • ballsystemlord - Saturday, July 27, 2019 - link

    If I were testing this I'd go for 10-bit HEVC as suggested, but I'd test both the 8-bit H.264 and 10-bit HEVC encodes. The resolution of the video should be selected to allow many cards to compete; I'm guessing this would be 1440p for HEVC and 1080p for H.264.
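
A rough sketch of that test matrix, again assuming an NVENC-capable ffmpeg build, with the input clip, target resolutions, and quality settings as placeholders:

    # Rough sketch: time the 8-bit H.264 (h264_nvenc) and 10-bit HEVC (hevc_nvenc)
    # encoders at 1080p and 1440p respectively, using placeholder settings.
    import subprocess
    import time

    SOURCE = "source_clip.mkv"  # hypothetical input clip

    runs = [
        # (label, encoder, scale filter, pixel format)
        ("h264_1080p", "h264_nvenc", "scale=-2:1080", "yuv420p"),
        ("hevc_1440p", "hevc_nvenc", "scale=-2:1440", "p010le"),
    ]

    for label, encoder, vf, pix_fmt in runs:
        cmd = [
            "ffmpeg", "-y",
            "-i", SOURCE,
            "-vf", vf,             # downscale to the target resolution
            "-c:v", encoder,
            "-preset", "slow",     # higher-quality preset
            "-cq", "20",           # constant-quality target
            "-pix_fmt", pix_fmt,
            "-an",                 # drop audio; only video encode speed matters here
            f"{label}.mkv",
        ]
        start = time.monotonic()
        subprocess.run(cmd, check=True)
        print(f"{label}: {time.monotonic() - start:.1f} s")

Comparing several cards would then just mean repeating the same runs on each GPU and comparing the elapsed times (and, ideally, output quality at matched bitrates).
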
  • GreenReaper - Wednesday, July 24, 2019 - link

    Fixed-function, but is it fixed-speed, or does it vary based on one or other of GPU/memory speed(s)?
  • Rudde - Thursday, July 25, 2019 - link

    Fixed-function logic is basically an ASIC (application-specific IC). It can only do one task, but it does that task well. It uses die area, and therefore increases manufacturing costs, but it shouldn't affect GPU performance in other tasks.
  • Dark42 - Tuesday, July 23, 2019 - link

    Considering the 99th percentile fps values at 1440p:
    Tomb Raider: 42, Metro: 35, Division: 43
    I wouldn't necessarily call the 2080 Super overpowered for 1440p.
  • irsmurf - Tuesday, July 23, 2019 - link

    No GTX 1080 Ti? 1080 Ti / 2080 / 2080 Super / 2080 Ti are the only comparisons I wanted to see. I see it's not available in Bench, either. :( Please add it.
