Grand Theft Auto

The highly anticipated iteration of the Grand Theft Auto franchise hit the shelves on April 14th, 2015, with both AMD and NVIDIA on hand to help optimize the title. GTA doesn't provide graphical presets, instead opening its options up to the user, and Rockstar's Advanced Game Engine under DirectX 11 can push even the strongest systems to their limits. Whether the player is flying high over the mountains with long draw distances or dealing with assorted trash in the city, maximum settings produce stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark, which consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence lasting around 90 seconds. We use only the final part of the benchmark: a flight scene in a jet, followed by an inner-city drive-by through several intersections, and finally ramming a tanker that explodes and sets off other cars in turn. This mixes long-distance rendering with a detailed close-up action sequence, and the title helpfully spits out frame time data.


There are no presets for the graphics options in GTA. Instead, the user can adjust some options, such as population density and distance scaling, on sliders, and others, such as texture/shadow/shader/water quality, on a scale from Low to Very High. Further options include MSAA, soft shadows, post effects, shadow resolution, and extended draw distance. A handy readout at the top shows how much video memory the chosen options are expected to consume, with obvious repercussions if a user requests more video memory than is present on the card (although there's no obvious warning for the opposite mismatch, such as a low-end GPU with lots of video memory, like an R7 240 4GB).
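The video memory readout amounts to a simple budget check: sum an estimated cost per selected option and compare it against the card's VRAM. A minimal sketch of the idea is below; the cost table and setting names are invented for illustration, not the game's actual figures.

```python
# Hypothetical per-setting VRAM cost table (MB). The real game computes
# these estimates internally; these numbers are made up for the example.
SETTING_COST_MB = {
    ("texture_quality", "very_high"): 1536,
    ("shadow_quality", "very_high"): 512,
    ("msaa", "4x"): 768,
    ("extended_draw_distance", "on"): 640,
}

def estimated_vram_mb(settings):
    """Sum the estimated cost of each selected (option, value) pair."""
    return sum(SETTING_COST_MB.get(item, 0) for item in settings.items())

def over_budget(settings, card_vram_mb):
    """True when the selected options exceed the card's video memory."""
    return estimated_vram_mb(settings) > card_vram_mb
```

This only flags requesting more memory than the card has; as noted above, it says nothing about the opposite case of a weak GPU with a large frame buffer.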

To that end, we run the benchmark at 1920x1080 using an average of Very High settings, and also at 4K using High on most of them. We take the average of four runs, reporting frame rate averages, 99th percentiles, and our time-under analysis.
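Since the title logs per-frame render times, all three reported metrics can be derived from that one data stream. The sketch below shows one plausible way to compute them; the log format, function names, and 60 FPS threshold are assumptions for illustration, not our actual tooling.

```python
def load_frame_times(path):
    """Read per-frame render times (ms) from a plain-text log,
    one value per line (assumed format)."""
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

def summarize(frame_times_ms, threshold_fps=60.0):
    """Return average FPS, 99th-percentile FPS, and time spent
    under the threshold frame rate."""
    # Average FPS over the whole run: total frames / total seconds.
    avg_fps = len(frame_times_ms) * 1000.0 / sum(frame_times_ms)
    # 99th-percentile frame rate: the FPS value that 99% of frames
    # exceed, i.e. the slowest 1% of instantaneous frame rates.
    fps = sorted(1000.0 / t for t in frame_times_ms)
    p99_fps = fps[max(0, int(len(fps) * 0.01) - 1)]
    # "Time under" analysis: total milliseconds spent rendering frames
    # slower than the threshold frame rate.
    limit_ms = 1000.0 / threshold_fps
    time_under_ms = sum(t for t in frame_times_ms if t > limit_ms)
    return avg_fps, p99_fps, time_under_ms
```

Averaging four runs then just means calling `summarize` on each log and averaging the resulting metrics.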

All of our benchmark results can also be found in our benchmark engine, Bench.

MSI GTX 1080 Gaming 8G Performance (1080p and 4K results charts)

ASUS GTX 1060 Strix 6G Performance (1080p and 4K results charts)

Sapphire Nitro R9 Fury 4G Performance (1080p and 4K results charts)

Sapphire Nitro RX 480 8G Performance (1080p and 4K results charts)

Depending on the GPU, Threadripper for the most part performs close to Ryzen or just below it.

347 Comments

  • Kjella - Thursday, August 10, 2017 - link

In the not-so-distant past - like last year - you'd have to pay Intel some seriously overpriced HEDT money for 6+ cores. Ryzen gave us 8 cores and most games can't even use that. ThreadRipper is a kick-ass processor in the workstation market. Why anyone would consider it for gaming I have no idea. It's giving you tons of PCIe lanes just as AMD is downplaying CF with Vega, nVidia has officially dropped 3-way/4-way support, and even 2-way CF/SLI has been a hit-and-miss experience. I went from a dual card setup to a single 1080 Ti, don't think I'll ever do multi-GPU again.
  • tamalero - Thursday, August 10, 2017 - link

    Probably their target is for those systems that have tons of cards with SATA RAID ports or PCI-E accelerators like AMD's or Nvidia's?
  • mapesdhs - Thursday, August 10, 2017 - link

    And then there's GPU acceleration for rendering (eg. CUDA) where the SLI/CF modes are not needed at all. Here's my old X79 CUDA box with quad 900MHz GTX 580 3GB:

    http://www.sgidepot.co.uk/misc/3930K_quad580_13.jp...

    I recall someone who does quantum chemistry saying they make significant use of multiple GPUs, and check out the OctaneBench CUDA test, the top spot has eleven 1080 Tis. :D (PCIe splitter boxes)
  • GreenMeters - Thursday, August 10, 2017 - link

    There is no such thing as SHED. Ryzen is a traditional desktop part. That it raises the bar in that segment compared to Intel's offering is a good thing--a significant performance and feature boost that we haven't seen in years. Threadripper is a HEDT part. That it raises the bar in that segment compared to Intel's offering is a good thing--a significant performance and feature boost that we haven't seen in years.
  • Ian Cutress - Thursday, August 10, 2017 - link

    Ryzen 7 was set as a HEDT directly against Intel's HEDT competition. This is a new socket and a new set over and above that, and not to mention that Intel will be offering its HCC die on a consumer platform for the first time, increasing the consumer core count by 8 in one generation which has never happened before. If what used to be HEDT is still HEDT, then this is a step above.

    Plus, AMD call it something like UHED internally. I prefer SHED.
  • FreckledTrout - Thursday, August 10, 2017 - link

I think AMD has the better division of what is and isn't HEDT. Going forward Intel really should follow suit and make it 8+ cores to get into the HEDT lineup, as what they have done this go around is just confusing and a bit goofy.
  • ajoy39 - Thursday, August 10, 2017 - link

    Small nitpick but

    "AMD could easily make those two ‘dead’ silicon packages into ‘real’ silicon packages, and offer 32 cores"

That's exactly what the already-announced EPYC parts are doing, is it not?

    Great review otherwise, these parts are intriguing but I don't personally have a workload that would suit them. Excited to see what sort of innovation this brings about though, about time Intel had some competition at this end of the market.
  • Dr. Swag - Thursday, August 10, 2017 - link

    I assume they're referring to putting 32 cores on TR4
  • mapesdhs - Thursday, August 10, 2017 - link

    Presumably a relevant difference being that such a 32c TR would have the use of all of its I/O connections, instead of some of them used to connect to other EPYC units. OTOH, with a 32c TR, how the heck could mbd vendors cram enough RAM slots on a board to feed the 8 channels? Either that or stick with 8 slots and just fiddle around somehow so that the channel connections match the core count in a suitable manner, eg. one per channel for 32c, 2 per channel for 16c, etc.

Who knows whether AMD would ever release a full 32c TR for the TR4 socket, but at least the option is there, I suppose, if enough people who'd buy it would happily go for a 32c part (depends on the task).
  • smilingcrow - Thursday, August 10, 2017 - link

Considering the TDP with just a 16C chip, going to 32C would hit the clock speeds badly, unless they were able to keep the turbo speeds when only 16 or fewer of the cores are loaded.
    The 32C server parts seemingly have much lower max turbo speeds even when lightly loaded.
