Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise finally hit PC shelves on April 14th, 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn’t provide graphical presets, instead exposing the full range of options to users, and Rockstar’s Advanced Game Engine can push even the hardiest systems to their limits. Whether the user is flying high over the mountains with long draw distances or dealing with assorted trash in the city, the game cranked up to maximum produces stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark, relying only on the final part, which combines a flight scene with an in-city drive-by followed by a tanker explosion. For low-end systems we test at 720p on the lowest settings, whereas mid-range and high-end graphics cards play at 1080p with very high settings across the board. We record both the average frame rate and the percentage of frames under 60 FPS (i.e. frames taking longer than 16.6 ms to render).
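As an illustration of the two reported metrics, here is a minimal sketch of how they fall out of a per-frame log. The frame times are made up for the example; the article does not specify its capture tooling.

```python
# Hypothetical per-frame render times in milliseconds, as a capture
# tool (FRAPS, PresentMon, etc.) might log them during the benchmark run.
frame_times_ms = [12.1, 15.8, 16.9, 14.2, 22.5, 15.0, 16.7, 13.3]

# Average frame rate: total frames divided by total elapsed time.
total_time_s = sum(frame_times_ms) / 1000.0
avg_fps = len(frame_times_ms) / total_time_s

# A frame "drops under 60 FPS" when it takes longer than 16.6 ms.
slow = sum(1 for t in frame_times_ms if t > 16.6)
pct_under_60 = 100.0 * slow / len(frame_times_ms)
```

Note that the two numbers answer different questions: the average summarizes throughput over the whole run, while the under-60 percentage captures how often the experience dipped below the smoothness target.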

Grand Theft Auto V on ASUS GTX 980 Strix 4GB ($560)
Grand Theft Auto V on MSI R9 290X Gaming LE 4GB ($380)
Grand Theft Auto V on MSI GTX 770 Lightning 2GB ($245)
Grand Theft Auto V on MSI R9 285 Gaming 2GB ($240)
Grand Theft Auto V on ASUS R7 240 DDR3 2GB ($70)
Grand Theft Auto V on Integrated Graphics

125 Comments

  • Toss3 - Tuesday, January 3, 2017 - link

    TBH they shouldn't just post mins, but decent FCAT analyses like the ones over on Guru3d.com
  • User.Name - Tuesday, January 3, 2017 - link

    Well that involves a lot more hardware and time to record/analyze the results, which is why I suggested looking at minimum framerates. But you're right that would be a good improvement too.
  • edzieba - Tuesday, January 3, 2017 - link

    "For one thing, average framerates are meaningless when doing CPU tests. You need to be looking at minimum framerates."

    Framerate needs to be dropped entirely. Instead, frame render times (specifically range and variance) give a better picture of perceived 'responsiveness', as well as render times being convertible to an FPS value (though not vice versa).
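A minimal sketch of the statistics edzieba is describing, on made-up frame times. It also shows the one-way conversion he mentions: frame times yield an FPS figure, but an average FPS figure cannot recover the individual frame times.

```python
import statistics

# Hypothetical frame render times in ms from a capture run; one
# 33 ms hitch amid otherwise steady ~16.5 ms frames.
frame_times_ms = [16.2, 16.5, 16.4, 33.1, 16.3, 16.6, 16.4, 16.5]

spread = max(frame_times_ms) - min(frame_times_ms)  # range: the worst hitch
variance = statistics.pvariance(frame_times_ms)     # consistency of pacing

# Render times convert to an FPS value...
avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
# ...but not vice versa: avg_fps alone cannot tell you the 33 ms
# spike ever happened, which is exactly the perceived stutter.
```

This is why two runs with identical average FPS can feel very different: the range and variance expose frame-pacing hitches that the average hides.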
  • Alexvrb - Tuesday, January 3, 2017 - link

    Agreed, when reviewing CPUs it would stand to reason that you'd want to use games that tax the CPU.
  • Notmyusualid - Friday, January 6, 2017 - link

    Your second link there was especially interesting - that's why I went for more cores than four.

    My 14C/28T Xeon has to feed 2x 1070 FTWs. I don't think quad core & multi-gpu are that great together, in my experience.

    For all the talk about games not using more than 'x' cores, I see my cores / threads nicely loaded up for many games. Even MW3 shows activity across 12 threads, however small, and that's old now.

    I just got a 6950X for a song, and the scouser seller backed out on me AFTER I paid. So I get to wait 5 to 7 working days for my money back (thanks PayPal), and I won't get to see how much frequency would have affected my everyday computing. I won't be paying north of 1400 GBP for one, that I can tell you.
  • Mondozai - Tuesday, January 3, 2017 - link

    Who the fuck is testing with a GTX 980@1080p? It should be a GTX 1080@1080p, because as games' visual demands go up progressively, it will show how the processor ages. This review is useless in that regard.

    Go to Sweclockers or any other website for a real review. AT has fallen so fucking much it's hilarious.
  • Gasaraki88 - Tuesday, January 3, 2017 - link

    Yeah... surprisingly Tom's Hardware has really in-depth reviews nowadays, just like in the olden times. Considering that Microsoft has said that OSes lower than Windows 10 will not be supported on Kaby Lake, I'm surprised they are still using Windows 7 for their tests.
  • Shadowmaster625 - Tuesday, January 3, 2017 - link

    There are lots of people who use a 980 with something like a 2500K or 2600K and might be wondering what a new CPU would do for them.
  • dakishimesan - Tuesday, January 3, 2017 - link

    Also using the same testing setup allows the results to be directly comparable to previous chips. They already mentioned in one of the articles regarding Kaby Lake (I think it was the i5 review) that they will be rolling out a new testbed and testing suite in February.
  • BrokenCrayons - Tuesday, January 3, 2017 - link

    From the Test Bed and Setup page:

    "This is also typically run at JEDEC subtimings where possible."

    -and-

    "Our testing methodology is ‘out-of-the-box’, with the latest public BIOS installed and XMP enabled, and thus subject to the whims of this feature."

    After reading those two lines, I really don't know what Anandtech's memory settings were like for this article.
