Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise hit the shelves on April 14th, 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA V doesn't provide graphical presets, but instead opens its options up to users and pushes even the hardiest systems to the limit using Rockstar's Advanced Game Engine under DirectX 11. Whether the user is flying high over the mountains with long draw distances or dealing with assorted trash in the city, cranking the settings to maximum creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark. The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet, an inner-city drive-by through several intersections, and the ramming of a tanker that explodes, setting off other cars as well. This is a mix of distance rendering and a detailed near-rendering action sequence, and the title thankfully spits out frame time data.
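The frame time dump is what makes this kind of scripted testing possible. As a minimal sketch (the file name and format here are assumptions, one frame time in milliseconds per line; the game's actual log format may differ), loading such a dump and converting it to per-frame FPS looks like this:

```python
# Hypothetical loader for a frame time dump: one frame time (ms) per line.
def load_frame_times(path):
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

# Convert per-frame times (ms) to instantaneous frames per second.
def to_fps(frame_times_ms):
    return [1000.0 / t for t in frame_times_ms]
```

From here, any statistic can be computed offline without relying on an in-game FPS counter.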

With no presets on offer, the user adjusts options such as population density and distance scaling via sliders, while others such as texture/shadow/shader/water quality scale from Low to Very High. Other options include MSAA, soft shadows, post effects, shadow resolution, and extended draw distance. A handy readout at the top shows how much video memory the chosen options are expected to consume, with obvious repercussions if a user requests more video memory than is present on the card (although there's no obvious indication if you have a low-end GPU with lots of video memory, like an R7 240 4GB).

To that end, we run the benchmark at 1920x1080 using an average of Very High on the settings, and also at 4K using High on most of them. We take the average results of four runs, reporting frame rate averages, 99th percentiles, and our time under analysis.
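The reported numbers can all be derived from the raw frame time data. A minimal sketch (assuming a plain list of per-frame times in milliseconds and a simple nearest-rank percentile; our internal tooling may compute these differently) of average FPS, the 99th percentile, and a "time under" threshold analysis:

```python
# Average FPS over a run: total frames divided by total seconds.
def average_fps(frame_times_ms):
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

# 99th-percentile frame time, expressed as FPS: the frame rate that
# all but the slowest 1% of frames exceed (nearest-rank method).
def percentile_fps(frame_times_ms, pct=99):
    ordered = sorted(frame_times_ms)
    idx = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return 1000.0 / ordered[idx]

# Fraction of total run time spent below an FPS threshold.
def time_under(frame_times_ms, fps_threshold=60.0):
    limit_ms = 1000.0 / fps_threshold
    slow = sum(t for t in frame_times_ms if t > limit_ms)
    return slow / sum(frame_times_ms)
```

The percentile and time-under views matter because a run can post a healthy average while still stuttering; weighting by frame time rather than frame count is what keeps a handful of long frames from being hidden.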

All of our benchmark results can also be found in our benchmark engine, Bench.

ASRock RX 580 Performance

Grand Theft Auto (1080p, VHigh)


111 Comments


  • mkaibear - Tuesday, June 12, 2018 - link

    "Total flop"

    I suggest benchmarking the CPU in your phone against this CPU and try again.
  • SanX - Tuesday, June 12, 2018 - link

    They mostly serve different purposes and apps and have different TDPs. But if you restrict the power consumption of Intel processors to that of mobile processors, then it's not clear in advance which one will win in the same apps.

    Time for ARM to look at the server and supercomputers markets.
  • iranterres - Monday, June 11, 2018 - link

    HAHA. Intel once again trying to fool some people and appeasing the fanboys with something worthless and expensive.
  • xchaotic - Tuesday, June 12, 2018 - link

    So are the regular i7-8600Ks unable to run all cores at 5GHz? If so, what's the max stable freq for a non-binned i7-8600K? Personally I went for an even lower/cheaper i5-8400 CPU, but I see why some people prefer to be running at max speed all the time...
  • Rudde - Tuesday, June 12, 2018 - link

    I assume you mean the i7-8700k.
    There is a phenomenon called 'the silicon lottery.' Basically, when you buy an i7-8700k, you can't know the max stable frequency. It could max out at 5.2GHz or it could only reach 4.7GHz before going unstable. The thing is, you can't know what you'll end up with.
    This brings us to the i7-8086k. The i7-8086k is pretty much guaranteed to have a max stable frequency above 5GHz. Of course, this matters only when overclocking.
  • Bradyb00 - Tuesday, June 12, 2018 - link

    Is it a lower temp than an 8700k for a given multiplier though? i.e. with both the 8700k and 8086k at 46x, which is cooler? The 8700k obviously has to be averaged, as not everyone is lucky in the silicon lottery.
    Presumption is the 8086k will run cooler on average due to the better binning.

    In which case I'm happy to pay more to save some degrees in my wee itx build
  • Lolimaster - Tuesday, June 12, 2018 - link

    Why not simply pick the Ryzen 5 2600, same thing with actually lower temps thanks to its high-quality solder...

    $189
  • TheinsanegamerN - Monday, June 18, 2018 - link

    Depends on the use case. For pure gaming, I'd stick with intel, which is a bit faster now and, if history is any indication, will hold up a LOT better for gaming in 5 years than the AMD chip will.

    Especially if you run games or emulators dependent on IPC (like PCSX2), the intel chip will perform a lot better than the AMD chip.

    There is also the memory controller. Ryzen 2000 improved, but intel's controller is still superior, and that matters for things like RTS games that consume memory bandwidth like black holes consume stars.
  • Stuka87 - Tuesday, June 12, 2018 - link

    Props to Asrock for providing the system so that you could get us stuff so quickly Ian. Not sure why everybody is complaining about the system and cooling that was used. The system was loaned to you so that you could get us numbers fast, which personally I am happy about. Thanks for your hard work Ian!
  • El Sama - Tuesday, June 12, 2018 - link

    This is quite the premium for a small increase in frequency that should be close to what you get from an OCed 8700k, but it's an interesting offering regardless.
