Grand Theft Auto V

The highly anticipated latest iteration of the Grand Theft Auto franchise finally hit the shelves on April 14th, 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA V doesn’t provide graphical presets, but instead opens up the options to users and extends the boundaries by pushing even the most capable systems to their limits using Rockstar’s Advanced Game Engine. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, cranking the settings up to maximum creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark, relying only on the final part, which combines a flight scene with an in-city drive-by followed by a tanker explosion. For low-end systems we test at 720p on the lowest settings, whereas mid-range and high-end graphics cards play at 1080p with very high settings across the board. We record both the average frame rate and the percentage of frames below 60 FPS, i.e. frames that take longer than 16.6 ms to render.
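For readers who want to reproduce that second metric, the sketch below shows how both numbers can be derived from a per-frame frame-time log. This is only an illustrative Python example, not our actual capture pipeline; the log file name and its one-millisecond-value-per-line format are assumptions.

```python
# Illustrative only: derive the two metrics reported on this page -- average
# frame rate and the percentage of frames under 60 FPS (frame time > 16.6 ms) --
# from a hypothetical log containing one frame time in milliseconds per line.

def summarize_frametimes(frametimes_ms):
    """Return (average_fps, percent_under_60fps) for a list of frame times in ms."""
    if not frametimes_ms:
        raise ValueError("no frame times recorded")
    total_ms = sum(frametimes_ms)
    average_fps = 1000.0 * len(frametimes_ms) / total_ms
    # A frame is "under 60 FPS" if it took longer than 1000/60 ≈ 16.6 ms to render.
    slow_frames = sum(1 for t in frametimes_ms if t > 1000.0 / 60.0)
    percent_under_60 = 100.0 * slow_frames / len(frametimes_ms)
    return average_fps, percent_under_60

if __name__ == "__main__":
    # "gta5_frametimes.txt" is a placeholder name for a captured frame-time log.
    with open("gta5_frametimes.txt") as f:
        times = [float(line) for line in f if line.strip()]
    avg_fps, pct_slow = summarize_frametimes(times)
    print(f"Average frame rate: {avg_fps:.2f} FPS")
    print(f"Frames under 60 FPS: {pct_slow:.1f}%")
```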

Grand Theft Auto V on ASUS GTX 980 Strix 4GB ($560)
Grand Theft Auto V on MSI R9 290X Gaming LE 4GB ($380)
Grand Theft Auto V on MSI GTX 770 Lightning 2GB ($245)
Grand Theft Auto V on MSI R9 285 Gaming 2GB ($240)
Grand Theft Auto V on ASUS R7 240 DDR3 2GB ($70)
Grand Theft Auto V on Integrated Graphics


70 Comments


  • Magichands8 - Tuesday, January 3, 2017 - link

    This Optane looks completely useless, but Optane DRAM sounds like it could be interesting, depending on how much slower it is.
  • lopri - Wednesday, January 4, 2017 - link

    I immediately thought of "AMD Memory", which AMD launched after the Bulldozer flop. But then again, Intel has been going after this storage-caching scheme for years now, and I do not think it has taken them anywhere.
  • smilingcrow - Tuesday, January 3, 2017 - link

    Game frame rates to two decimal places add nothing but make it harder to scan the numbers quickly. Enough already.
  • 1_rick - Tuesday, January 3, 2017 - link

    One place where the extra threads of an i7 are useful is if you're using VMs (maybe to run a database server or something). I've found that an i5 with 8GB can get bogged down pretty drastically just from running a VM.
  • Meteor2 - Tuesday, January 3, 2017 - link

    I kinda think that if you're running db VMs on an i7, you're doing it wrong.
  • t.s - Wednesday, January 4, 2017 - link

    or if you're an Android developer using Android Studio.
  • lopri - Wednesday, January 4, 2017 - link

    That is true, but as others have implied, you can get Xeons with 6 or 8 real cores for less than these new Kaby Lake chips if VMs are what you need the CPU performance for.
  • ddhelmet - Tuesday, January 3, 2017 - link

    Well, I am even more glad now that I got a Skylake. Even if I had waited for this, the performance increase would not be worth it.
  • Kaihekoa - Tuesday, January 3, 2017 - link

    Y'all need to revise your game benchmarking analysis. At least use some current generation GPUs, post the minimum framerates, and test at 1440p. The rest of your review is exceptional, but the gaming part needs some modernization, please.
  • lopri - Wednesday, January 4, 2017 - link

    I thank the author for a clear yet thorough review. A lot of ground is covered, and the big picture of the chip's performance and features is well communicated. I agree with the author's recommendation at the end as well. I have not felt that I am missing out on anything compared to an i7 while running a 2500K in my gaming system, and unless you know for certain that you can take advantage of Hyper-Threading, spending the difference in dollars on an SSD or a graphics card is a wiser expenditure that will provide a better computing experience.

    Having said that, I am still not compelled to upgrade my 2500K, which has been running at 4.8 GHz for years. (It does 5.0 GHz no problem, but I run it at 4.8 to leave some "headroom".) While I think the 7600K is barely a worthy upgrade (finally!) in its own right, the added cost cannot be overlooked. A new motherboard, new memory, and potentially a new heatsink will quickly add to the budget, and I am not sure it is going to be worth all the expenses that will follow.

    Of course all that could be worthwhile if overclocking were fun, but Intel has pretty much killed overclocking and the overclocking community. Intentionally, if I might add. Today overclocking does not give one a sense of discovery or accomplishment. Competition between friendly enthusiasts or rival motherboard/memory vendors has disappeared. Naturally there is no accumulation or exchange of knowledge in the community, and conversations have become frustrating and vain due to the lack of overclocking expertise. Only brute-force overclocking with dedicated cooling has some following, and the occasional "overclocking" topics in the forums are really braggadocio in disguise, where the competition underneath is really about who spent the most on their rig with the latest blingy stuff. Needless to say, those are not as exciting or illuminating as the real overclocking of yore, and in my opinion there are better ways to spend money on such self-gratification without the complications that often accompany overclocking, which in the end fails to impress.

    Intel might be having second thoughts about its overclocking policies now, but like many things Intel has done in recent years, it is too little, too late. And their chips have no headroom anyway. My work system is due for an upgrade, and I am probably going to pick up a couple of E5-2670s, which will give me 16 real cores for less than $200. Why bother with the new stuff when the IPC gain is meager and there is no fun in overclocking? And contribute to Intel's revenue? Thank you, but no thank you.

    P.S. Sorry, I meant to commend the author for the excellent (albeit redundant) review but ended up ranting about something else. Oh well, carry on.
