Grand Theft Auto V

The highly anticipated fifth iteration of the Grand Theft Auto franchise hit the shelves on April 14th, 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA V doesn’t provide graphical presets, instead opening all of the options up to the user, and it pushes even the most capable systems to the limit using Rockstar’s Advanced Game Engine under DirectX 11. Whether the player is flying high over the mountains with long draw distances or dealing with assorted trash in the city, the game produces stunning visuals when cranked up to maximum, but also hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark. The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only that final part of the benchmark, which combines a flight scene in a jet, an inner-city drive-by through several intersections, and finally ramming a tanker that explodes, setting off other cars as well. This is a mix of distance rendering followed by a detailed near-rendering action sequence, and the title thankfully spits out frame time data.
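For the curious, automating repeated passes is the easy part. Below is a minimal Python sketch of the idea; the executable path, the -benchmark launch flag, and the CSV log location are assumptions for illustration rather than Rockstar's documented interface.

    import shutil
    import subprocess
    from pathlib import Path

    GAME_EXE = Path(r"C:\Games\GTAV\GTA5.exe")   # hypothetical install path
    LOG_DIR = Path(r"C:\Games\GTAV\Benchmarks")  # hypothetical log directory
    RUNS = 4                                     # we average four runs

    def run_benchmark(run_id: int) -> None:
        # Launch the game straight into its benchmark and wait for it to exit.
        subprocess.run([str(GAME_EXE), "-benchmark"], check=True)
        # Stash the newest frame time log under a per-run name for analysis.
        latest = max(LOG_DIR.glob("*.csv"), key=lambda p: p.stat().st_mtime)
        shutil.copy(latest, LOG_DIR / f"run_{run_id}.csv")

    for i in range(RUNS):
        run_benchmark(i)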


There are no presets for the graphics options in GTA V: the user adjusts options such as population density and distance scaling on sliders, while others, such as texture/shadow/shader/water quality, are set on a scale from Low to Very High. Other options include MSAA, soft shadows, post effects, shadow resolution, and extended draw distance. There is a handy indicator at the top which shows how much video memory the chosen options are expected to consume, with obvious repercussions if a user requests more video memory than is present on the card (although there’s no obvious indication if you have a low-end GPU with lots of video memory, like an R7 240 4GB).
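Conceptually, that indicator is just a running sum of per-setting costs checked against the card's VRAM. The sketch below mimics the idea with made-up numbers; the per-setting costs are hypothetical, not the engine's real accounting, which also varies with resolution.

    # Hypothetical per-setting VRAM costs, in MB; the real engine computes
    # these internally.
    SETTING_COST_MB = {
        "texture_quality:very_high": 2048,
        "shadow_quality:very_high": 512,
        "msaa:4x": 1024,
        "extended_draw_distance": 768,
    }

    CARD_VRAM_MB = 4096  # e.g. an R7 240 4GB reports plenty of memory

    requested = sum(SETTING_COST_MB.values())
    print(f"Requested {requested} MB of {CARD_VRAM_MB} MB available")
    if requested > CARD_VRAM_MB:
        print("Warning: settings exceed video memory; expect paging stutter")
    # Note: fitting in VRAM says nothing about whether a slow GPU can
    # actually sustain playable frame rates, which is the caveat above.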

To that end, we run the benchmark at 1920x1080 with settings averaging Very High, and also at 4K with most settings on High. We take the average results of four runs, reporting frame rate averages, 99th percentiles, and our 'time under' analysis.
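For transparency, here is a minimal Python sketch of how a frame time log reduces to the numbers we report; the one-column CSV format is an assumption, and the 60 FPS threshold for the 'time under' metric is just an example.

    import numpy as np

    def summarize(frame_times_ms: np.ndarray, threshold_fps: float = 60.0):
        """Reduce per-frame times (ms) to average FPS, 99th percentile FPS,
        and the percentage of wall time spent below a frame rate threshold."""
        total_s = frame_times_ms.sum() / 1000.0
        avg_fps = len(frame_times_ms) / total_s
        # 99th percentile frame time, expressed as its FPS equivalent.
        p99_fps = 1000.0 / np.percentile(frame_times_ms, 99)
        # Wall time spent on frames slower than the threshold.
        slow = frame_times_ms[frame_times_ms > 1000.0 / threshold_fps]
        time_under_pct = slow.sum() / 1000.0 / total_s * 100.0
        return avg_fps, p99_fps, time_under_pct

    times = np.loadtxt("run_0.csv")  # assumed one-column frame time log
    avg, p99, under = summarize(times)
    print(f"Average: {avg:.1f} FPS, 99th percentile: {p99:.1f} FPS, "
          f"time under 60 FPS: {under:.1f}%")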

All of our benchmark results can also be found in our benchmark engine, Bench.

[Graphs: MSI GTX 1080 Gaming 8G performance, 1080p and 4K]

[Graphs: ASUS GTX 1060 Strix 6G performance, 1080p and 4K]

[Graphs: Sapphire Nitro R9 Fury 4G performance, 1080p and 4K]

[Graphs: Sapphire Nitro RX 480 8G performance, 1080p and 4K]

Comments

  • MrSpadge - Thursday, August 17, 2017 - link

    It's definitely good that reviewers test the game mode and the others, so that we know what to expect from them. If they only tested creator mode, the internets would be full of people shouting foul play to bash AMD.
  • deathBOB - Thursday, August 17, 2017 - link

    Ian - why not just enable NUMA and leave SMT on?
  • Ian Cutress - Thursday, August 17, 2017 - link

    The fourth corner of testing :)
  • lelitu - Thursday, August 17, 2017 - link

    I'm looking at setting up something as a home VM host and Linux development workstation, which makes NUMA with SMT the most useful set of benchmarks for my use case.

    I'm particularly interested in TR, because it's brought the price of entry low enough that I can actually consider building such a system.
  • Ratman6161 - Friday, August 18, 2017 - link

    ThreadRipper is big bucks for your purposes, if I'm reading this correctly. For a home lab sort of environment a lot of cores helps, as does a lot of RAM, but you don't necessarily need a boatload of CPU power. For example, in my home ESXi system I've got an FX8350, which VMWare sees as an 8-core CPU. I've also given it 32 GB of DDR3 RAM (purchased when that was cheap). The 990FX motherboards work great for this since they have plenty of PCIe lanes available. In my case, those are used for an ancient ATI video card I happened to have in a drawer, an LSI x8 RAID card, and an x4 Intel dual-port gigabit NIC. The RAID card has four 1 TB desktop drives hooked up to it in RAID 5.

    All of the above can be had pretty cheap these days. I'm thinking of upgrading my storage to 4x2 TB SAS drives - available for $35 each on Amazon...brand new (but old models). The system is running 6 to 7 VMs (Windows Servers mostly) at any given time. But with only two users, I don't run into many cases where more than two VMs are actually doing anything at the same time. Example: a web server and SQL Server serving up a web app.

    For this environment, having a storage setup where the VMs are not contending for the disks, and also having plenty of RAM, seems to make a lot more difference than the CPU.

    Of course, if you have the bucks and just want to, ThreadRipper would be terrific for this - just way too expensive and overkill for me.
  • lelitu - Monday, August 21, 2017 - link

    That depends a lot on what you want the VMs for. Unfortunately, for the sort of performance testing and development I do, a VM toaster isn't actually good enough. Each VM needs at least 4 uncontended cores and 10 GB of uncontended RAM. Two VMs is the absolute minimum; three would be better.

    That's not going to fit into anything less than a Ryzen 7 at minimum, and a Threadripper, *if* it performs as I expect in SMT + NUMA mode, would be almost perfect. Unfortunately, you're right, it's a *lot* of coin to drop on something I don't know will actually do what I need well enough.

    Thus, I wish there were SMT+NUMA workstation and VM benchmarks here.
  • JasonMZW20 - Thursday, August 17, 2017 - link

    Seems like Game Mode should have bumped up the base clocks to 1800X levels, especially for Nvidia cards using a software scheduler that seems to scale with CPU frequency. AMD's hardware scheduler is apparent in overall FPS stability and being mostly CPU agnostic.

    Matching base clocks with 1800X or even 1900X (3.8GHz) might be better on TR for gaming in Game Mode.
  • lordken - Friday, August 18, 2017 - link

    Also, for some weird reason, the 1800X is much faster, with higher fps, in Civilization and Tomb Raider?
  • peevee - Thursday, August 17, 2017 - link

    "because the 1920X has fewer cores per CCX, it actually falls behind the 1950X in Game Mode and the 1800X despite having more cores. "

    Sorry, but when 12 cores with twice the memory bandwidth compile slower than 8, you are doing something wrong. Yes, Anandtech, you. I'd seriously investigate - for example, whether the maximum number of threads was set at 24 or something.
  • Ian Cutress - Thursday, August 17, 2017 - link

    When you have a bank of cores that communicate with each other, and replace it with more cores but uneven communication latencies, it makes a difference and can affect code paths.
