Generational Tests on the i7-6700K: Gaming Benchmarks on High End GPUs

Alien: Isolation

If first-person survival mixed with horror is your sort of thing, then Alien: Isolation, based on the Alien franchise, should be an interesting title. Developed by The Creative Assembly and released in October 2014, Alien: Isolation has won numerous awards, from Game of the Year to several top 10/25 lists and Best Horror titles, racking up over a million sales by February 2015. Alien: Isolation uses a custom-built engine that includes dynamic sound effects and should be fully multi-core enabled.

For low-end graphics we test at 720p with Ultra settings, whereas for mid-range and high-end graphics we bump this up to 1080p, taking the average frame rate as our metric from a scripted version of the built-in benchmark.
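
For those curious what a 'scripted version' of the benchmark looks like in practice, below is a minimal sketch of that kind of wrapper: it launches the game's benchmark mode a few times and averages the frame rate reported in its log. The executable path, command-line flag and log format are hypothetical placeholders for illustration, not Alien: Isolation's actual interface.

    # Minimal sketch of a scripted benchmark wrapper (hypothetical path, flag and log format).
    import re
    import statistics
    import subprocess

    GAME_EXE = r"C:\Games\AlienIsolation\AI.exe"   # hypothetical install path
    BENCH_FLAG = "-benchmark"                      # hypothetical flag; real games differ
    LOG_FILE = "benchmark_results.txt"             # hypothetical log written after each run
    RUNS = 3                                       # repeat runs to smooth out run-to-run variance

    def run_once() -> float:
        """Launch one benchmark pass and return the average FPS it reports."""
        subprocess.run([GAME_EXE, BENCH_FLAG], check=True)
        with open(LOG_FILE) as log:
            # Assume the log contains a line such as "Average FPS: 87.4".
            match = re.search(r"Average FPS:\s*([\d.]+)", log.read())
        if match is None:
            raise RuntimeError("benchmark log did not contain an average FPS line")
        return float(match.group(1))

    if __name__ == "__main__":
        per_run = [run_once() for _ in range(RUNS)]
        print(f"Mean of {RUNS} runs: {statistics.mean(per_run):.1f} FPS")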

Alien Isolation on MSI R9 290X Gaming LE 4GB ($380)

Alien Isolation on ASUS GTX 980 Strix 4GB ($560)

Total War: Attila

The Total War franchise moves on to Attila, another title from The Creative Assembly, and is a stand-alone strategy game set in 395 AD in which the main storyline lets the gamer take control of the leader of the Huns in order to conquer parts of the world. Graphically, the game can render hundreds or thousands of units on screen at once, each with its own individual actions, and it can put some of the big cards to task.

For low-end graphics we test at 720p with the performance settings, recording the average frame rate. With mid-range and high-end graphics, we test at 1080p with the quality settings. In both cases, unlimited video memory is enabled and the in-game scripted benchmark is used.

Total War: Attila on MSI R9 290X Gaming LE 4GB ($380)

Total War: Attila on ASUS GTX 980 Strix 4GB ($560)

Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise finally hit the shelves on April 14th, 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA V doesn't provide graphical presets, but instead opens up the options to users and extends the boundaries, pushing even the toughest systems to the limit with Rockstar's Advanced Game Engine. Whether the player is flying high in the mountains with long draw distances or dealing with assorted trash in the city, the game at maximum settings creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark, using only the final part, which combines a flight scene with an in-city drive-by followed by a tanker explosion. For low-end systems we test at 720p on the lowest settings, whereas mid-range and high-end graphics play at 1080p with Very High settings across the board. We record both the average frame rate and the percentage of frames under 60 FPS (i.e., frame times over 16.6 ms).
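
The 'frames under 60 FPS' figure is simply the share of frames whose frame time exceeds 1000/60 ≈ 16.7 ms. As a rough illustration (not Rockstar's actual log format), the snippet below derives both of our reported metrics from a list of per-frame times; the sample values are made up.

    # Sketch: average FPS and share of frames slower than 60 FPS from per-frame times (ms).
    # The sample frame times below are invented purely for illustration.
    frame_times_ms = [12.1, 15.8, 17.3, 14.2, 22.5, 16.0, 13.4, 18.9]

    avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
    threshold_ms = 1000.0 / 60.0                     # ~16.67 ms per frame at 60 FPS
    slow = sum(1 for t in frame_times_ms if t > threshold_ms)
    pct_under_60 = 100.0 * slow / len(frame_times_ms)

    print(f"Average FPS: {avg_fps:.1f}")
    print(f"Frames under 60 FPS: {pct_under_60:.1f}%")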

Grand Theft Auto V on MSI R9 290X Gaming LE 4GB ($380)

Grand Theft Auto V on ASUS GTX 980 Strix 4GB ($560)

GRID: Autosport

No set of graphics tests is complete without some input from Codemasters and the EGO engine, which means for this round of testing we point towards GRID: Autosport, the next iteration in the GRID racing franchise. As with our previous racing tests, each update to the engine aims to add effects, reflections, detail and realism, with Codemasters making 'authenticity' a main focal point for this version.

GRID's benchmark mode is very flexible, and as a result we created a test race using a shortened version of the Red Bull Ring with twelve cars doing two laps. The car in focus starts last and is quite fast, but usually finishes second or third. For low-end graphics we test at 1080p with medium settings, whereas mid-range and high-end graphics get the full 1080p maximum. Both the average and minimum frame rates are recorded.
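
Benchmarks differ in how they define a 'minimum' frame rate, so as an illustration only, here is a sketch that computes the average alongside two common flavours of minimum (single worst frame, and worst short window) from an invented frame-time trace. It is not a description of how GRID's own reporting works.

    # Sketch: average and two styles of minimum frame rate from per-frame times (ms).
    # The trace below is invented; GRID's benchmark reports its own numbers.
    frame_times_ms = [11.5, 12.2, 25.0, 13.1, 12.8, 30.4, 12.0, 11.9]

    avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
    min_fps_single = 1000.0 / max(frame_times_ms)    # slowest individual frame

    # Gentler alternative: worst average over a sliding window of N consecutive frames.
    N = 4
    window_means = [sum(frame_times_ms[i:i + N]) / N
                    for i in range(len(frame_times_ms) - N + 1)]
    min_fps_windowed = 1000.0 / max(window_means)

    print(f"Average: {avg_fps:.1f} FPS")
    print(f"Minimum (single frame): {min_fps_single:.1f} FPS")
    print(f"Minimum ({N}-frame window): {min_fps_windowed:.1f} FPS")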

GRID: Autosport on MSI R9 290X Gaming LE 4GB ($380)

GRID: Autosport on ASUS GTX 980 Strix 4GB ($560)

Middle-Earth: Shadow of Mordor

The final title in our testing is another test of system performance: the open-world action-adventure title Shadow of Mordor. Produced by Monolith using the LithTech Jupiter EX engine and numerous detail add-ons, SoM goes for detail and complexity to a large extent, despite having been cut down from the original plans. The main story was written by the same writer as Red Dead Redemption, and the game received Zero Punctuation's Game of the Year for 2014.

For testing purposes, SoM offers a dynamic screen resolution setting, allowing us to render at high resolutions that are then scaled down to the monitor. As a result, we run several tests using the in-game benchmark. For low-end graphics we test at 720p with low settings, whereas mid-range and high-end graphics get 1080p Ultra. The top graphics test is also repeated at 3840x2160 with Ultra settings, and we test two cards at 4K where possible.
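
To put those resolution steps into perspective, the quick arithmetic below shows the raw pixel counts involved: 4K pushes exactly four times as many pixels as 1080p, which is why the dual-card configurations are interesting at that setting. The numbers are just resolution math, nothing game-specific.

    # Quick arithmetic: pixel counts for the render resolutions used in this test.
    resolutions = {"720p": (1280, 720), "1080p": (1920, 1080), "4K": (3840, 2160)}
    base = 1920 * 1080                               # 1080p as the reference load

    for name, (w, h) in resolutions.items():
        pixels = w * h
        print(f"{name}: {pixels:,} pixels ({pixels / base:.2f}x the 1080p load)")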

Shadow of Mordor on MSI R9 290X Gaming LE 4GB ($380)

Shadow of Mordor on 2x MSI R9 290X Gaming LE 4GB ($380)

Shadow of Mordor on ASUS GTX 980 Strix 4GB ($560)

Shadow of Mordor on 2x ASUS GTX 980 Strix 4GB ($560)

Comments (477)

  • Chaser - Thursday, August 6, 2015 - link

    Now even more pleased with my 5820K rig I bought two months ago.
  • Artas1984 - Thursday, August 6, 2015 - link

    I disagree with the statement that Skylake should now be a definitive replacement for Sandy Bridge.

    It's like saying that your game runs at 200 FPS slowly, so now you have to upgrade to get 250 FPS. Of course I am not talking about games directly, it's a metaphor, but you get the point.

    Also, with how fast computer electronics develop, people are "forced" to up their quality of life at the expense of buying more important things in this short fu+kin life. Just because there are things manufactured does not mean you have to live someone else's life! I for one don't give a shit about smart phones and will never use them anyway, and I will never use 3D goggles or monitors for movies or gaming just because they exist.

    On top of that:

    AMD's chips have not yet reached the performance levels of Sandy Bridge. The piece of crap FX 9590 falls behind the 2600K in every multi-threaded bench and gets beaten by the 2500K in every game!
  • Oxford Guy - Friday, August 7, 2015 - link

    Take a look at this: http://www.techspot.com/review/1006-the-witcher-3-...
  • Oxford Guy - Friday, August 7, 2015 - link

    It seems there's a reason why Anandtech never puts the FX chips into its charts and instead chooses the weak APUs... Is it because the FX is holding its own nicely now that games like Witcher 3 are finally using all of its threads?
  • Oxford Guy - Friday, August 7, 2015 - link

    A 2012 chip priced as low as $100 (8320E) with a $40 motherboard discount (combo with UD3P set me back a total of $133.75 with tax from Microcenter a few months ago) is holding its own with i7 chips when overclocked, and at least i5s. Too bad for AMD that they released that chip so many years before the gaming industry would catch up. Imagine if it was on 14nm right now instead of 32.
  • boeush - Friday, August 7, 2015 - link

    Oh yeah, real impressive: the FX 9590 @ 4.7 GHz is a whole 1% faster than the 4-year-old i5 2500K @ 3.3 GHz. I'm blown away... Particularly since the 9590 overclocks to maybe 5 GHz if you are lucky, at 200 W with water cooling, while the 2500K overclocks to 4.5 GHz on air. And it's not as if that game isn't GPU limited like most of the others...

    Fanboi, please.
  • Oxford Guy - Friday, August 7, 2015 - link

    You're missing the point completely, but that's OK. Anyone who looks at the charts can figure it out for themselves, as the reviewer noted. Also, if you had taken the time to look at that page before spouting off nonsense, you would have noticed that a high clock rate is not necessary for that chip to have decent performance -- negating the entire argument that extreme overclocking is needed. The game clearly does a better job of load balancing across the 8 threads than prior games have, resulting in a much more competitive situation for the FX (especially the 8-thread FX).

    As for being a fanboy: a fanboy is someone who won't put in an FX and instead just puts in a bunch of weaker APUs, which is the same thing that has been happening in multiple reviews. Name-calling is not a substitute for actually looking at the data I cited and responding to it accurately.
  • Markstar - Friday, August 7, 2015 - link

    I totally agree - looking at the numbers, it is very obvious to me that upgrading is not worth it unless you are heavily into video encoding. Especially for gaming, spending the money on a better graphics card is clearly the better investment, as the difference is usually 1-3%.

    My i5-2500K is "only" at 4.5GHz and I don't see myself upgrading anytime soon, though I have put some money aside for exactly that purpose.
  • sonny73n - Friday, August 7, 2015 - link

    I don't agree with your bold statement: "Sandy Bridge, your time is up". Why do you even compare the Skylake and SB K series at their stock speeds? I have my i5-2500K at 4.2GHz now with a Prime95 stress test max temp of 64C on air cooling. I can easily clock it to 4.8GHz, and I have done so, but never felt the need for clocks that high. With a ~25% overall system improvement in benchmarks and only 3 to 5% in games, this upgrade doesn't justify the cost of a new MB, DDR4 and CPU. I'm sure a few people can utilize this ~25% improvement, but I doubt it would make any difference for my daily usage. Secondly, a Skylake system alone can't run games. Why upgrade my SB when, with my EVGA 780, it can run all the games I want it to? For gamers, wouldn't it be a lot wiser and cheaper to spend on another 780 instead of on a new system? And all that upgrade cost is just for a 3 to 5% improvement in games? Sorry, I'll pass.
  • MrSpadge - Friday, August 7, 2015 - link

    Ian, when testing memory scaling or comparing DDR3 and DDR4, you shouldn't underclock the CPUs. Fixing their frequency is good, but not reducing it. The reason: at lower clock speeds the throughput is reduced, which in turn reduces the need for memory bandwidth. At 3 vs. 4 GHz we're already talking about approximately 75% of the bandwidth requirement that a real user would experience. In this case memory latency still matters, of course, but the advantage of higher-bandwidth memory is significantly reduced.
