Gaming Benchmarks

One of the more important things to test in our gaming benchmarks this time around is the effect of the Core i7-5820K having 28 PCIe 3.0 lanes rather than the 40 of the i7-5930K and i7-5960X. This means the 5820K is limited to x16/x8 operation in SLI, rather than x16/x16.
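As an aside, the lane split a card actually negotiates is easy to verify on a Linux box. Below is a minimal sketch (a hypothetical helper, not part of our test suite) that scrapes `lspci -vv` for the link status of each VGA device; x8 versus x16 shows up in the Width field. It assumes pciutils is installed and root privileges, since the LnkSta lines are otherwise hidden.

```python
import re
import subprocess

def gpu_link_widths():
    """Parse `lspci -vv` for the negotiated PCIe link of each VGA adapter.

    Assumes Linux with pciutils installed; run as root so the LnkSta
    capability lines are populated.
    """
    out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout
    widths = {}
    device = None
    for line in out.splitlines():
        if "VGA compatible controller" in line:
            device = line.strip()  # e.g. "01:00.0 VGA compatible controller: ..."
        elif device and "LnkSta:" in line:
            # Tolerate both "Speed 8GT/s, Width x16" and "Speed 8GT/s (ok), Width x16 (ok)"
            m = re.search(r"Speed ([\d.]+\s*GT/s).*?Width (x\d+)", line)
            if m:
                widths[device] = (m.group(1), m.group(2))
            device = None
    return widths

if __name__ == "__main__":
    for dev, (speed, width) in gpu_link_widths().items():
        print(f"{dev}: {speed} {width}")
```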

F1 2013

First up is F1 2013 by Codemasters. I am a big Formula 1 fan in my spare time, and nothing makes me happier than carving up the field in a Caterham, waving to the Red Bulls as I drive by (because I play on easy and take shortcuts). F1 2013 uses the EGO Engine, and like other Codemasters titles it runs very well on older hardware. To beef up the test a bit, we devised the following scenario for the benchmark mode: one lap of Spa-Francorchamps in the heavy wet, with the benchmark following Jenson Button in the McLaren, who starts on the grid in 22nd place behind a field of 11 Williams cars, 5 Marussias and 5 Caterhams in that order. This puts the emphasis on the CPU to handle the AI in the wet, and allows for a good amount of overtaking during the automated benchmark. We test at 1920x1080 on Ultra graphical settings.

[Graph: F1 2013 SLI, Average FPS]


Nothing here shows any real advantage for Haswell-E over Ivy Bridge-E, although the ~10% gaps to the i7-990X in minimum frame rates offer some perspective.

Bioshock Infinite

Bioshock Infinite was Zero Punctuation’s Game of the Year for 2013, uses the Unreal Engine 3, and is designed to scale with both cores and graphical prowess. We test the benchmark using the Adrenaline benchmark tool and the Xtreme (1920x1080, Maximum) performance setting, noting down the average frame rates and the minimum frame rates.
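For clarity on what "average" and "minimum" mean in these charts: average FPS is total frames over total time, and minimum FPS is derived from the single slowest frame. A minimal sketch of the arithmetic, assuming the benchmark tool can export per-frame render times in milliseconds (a hypothetical export; the tool's actual format may differ):

```python
def fps_stats(frame_times_ms):
    """Average and minimum FPS from a list of per-frame render times (ms).

    Average FPS is total frames over total time, not the mean of the
    instantaneous FPS values; minimum FPS comes from the slowest frame.
    """
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s
    min_fps = 1000.0 / max(frame_times_ms)
    return avg_fps, min_fps

# Example: four frames at 10 ms each, plus one hitch at 25 ms
avg, mn = fps_stats([10, 10, 10, 10, 25])
print(f"avg {avg:.1f} FPS, min {mn:.1f} FPS")  # avg 76.9 FPS, min 40.0 FPS
```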

[Graph: Bioshock Infinite SLI, Average FPS]


Bioshock Infinite likes a mixture of cores and frequency, especially when it comes to SLI.

Tomb Raider

Next up is Tomb Raider, an AMD-optimized game lauded for its use of TressFX, which creates dynamic hair to increase in-game immersion. Tomb Raider uses a modified version of the Crystal Engine and enjoys raw horsepower. We test the benchmark using the Adrenaline benchmark tool and the Xtreme (1920x1080, Maximum) performance setting, noting down the average frame rates and the minimum frame rates.

[Graph: Tomb Raider SLI, Average FPS]


Tomb Raider is blissfully CPU-agnostic, it would seem.

Sleeping Dogs

Sleeping Dogs is a benchmarking wet dream – a highly complex benchmark that can bring the toughest setup down into single figures at high resolutions. Having an extreme SSAO setting can do that, but at the right settings Sleeping Dogs is highly playable and enjoyable. We run the basic benchmark program laid out in the Adrenaline benchmark tool at the Xtreme (1920x1080, Maximum) performance setting, noting down the average frame rates and the minimum frame rates.

[Graph: Sleeping Dogs SLI, Average FPS]


The biggest sign of CPU performance differences is in the minimum frame rates while in SLI - the 5960X reaches 67.4 FPS minimum, with only the xx60X CPUs of each generation moving above 60 FPS. That being said, all the Intel CPUs in our test stay above 55 FPS, though it would seem that the 60X processors have some more headroom.

Battlefield 4

The EA/DICE series that has taken countless hours of my life away is back for another iteration, using the Frostbite 3 engine. AMD is also piling its resources into BF4 with the new Mantle API for developers, designed to cut the time required for the CPU to dispatch commands to the graphical subsystem. For our test we use the in-game benchmarking tools and record the frame times for the first ~70 seconds of the Tashgar single player mission, which is an on-rails sequence of object and texture generation and rendering. We test at 1920x1080 at Ultra settings.
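As a rough sketch of how such a capture can be processed: assuming the logger writes one cumulative millisecond timestamp per row (a hypothetical format; adjust to whatever your frame-time tool actually writes), the helper below keeps only the frames inside the first 70 seconds and converts them to per-frame durations, which can then feed the same average/minimum arithmetic shown earlier.

```python
import csv

WINDOW_MS = 70_000  # first ~70 seconds of the Tashgar run

def load_window(path, window_ms=WINDOW_MS):
    """Per-frame durations (ms) for frames inside the test window.

    Assumes one cumulative millisecond timestamp per row, in the first
    column; a hypothetical log format, adjust to your logger's output.
    """
    stamps = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            try:
                t = float(row[0])
            except (ValueError, IndexError):
                continue  # skip headers and blank lines
            if t > window_ms:
                break
            stamps.append(t)
    # Durations are the deltas between consecutive timestamps.
    return [b - a for a, b in zip(stamps, stamps[1:])]

# Hypothetical capture file from the Tashgar run:
# frame_times = load_window("bf4_tashgar_frametimes.csv")
```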

[Graph: Battlefield 4 SLI, Average FPS]


Battlefield 4 is the only benchmark where we see the 5820K, with its 28 PCIe lanes, down by any appreciable margin against the other two 5xxx processors, and even then the deficit is around 5% in SLI. Not many users will notice the difference between 105 FPS and 110 FPS, and minimum frame rates are still 75 FPS+ on all the Intel processors.

Comments

  • jabber - Saturday, August 30, 2014 - link

    At the end of the day the Xeons are just bug-fixed, lower-power i7 chips anyway. But one way that Xeons come into their own is on the second-hand market. I'll be picking up ex-corporate dual-CPU Xeon workstations for peanuts compared to the domestic versions. I have a 7-year-old 8-core Xeon workstation that still WPrimes in 7 seconds. Not bad for a $100 box.
  • mapesdhs - Saturday, August 30, 2014 - link

    All correct, though it concerns me that the max RAM of X99 may only be 64GB much of the time. After adding two cores and moving up to working with 4K material, that's not going to be enough.

    Performance-wise, good for a new build, but sadly probably not good enough as an upgrade over a 3930K @ 4.7+ or anything that follows. The better storage options might be an incentive for some to upgrade though, depending on their RAID setups & suchlike.

    Ian.
  • leminlyme - Tuesday, September 2, 2014 - link

    They are applicable to different crowds, and computing doesn't exclude gaming, whereas Xeons to a degree do. (Though I'm sure for most of them you'd be fine; I for one like those PCIe lanes, and the per-core performance on the desktop processors is typically better. Plus form factor and all that.) These fill a glorious niche that I am indeed excited about. They're pretty damn cheap for their quality too. I guess the RAM totally circumvents that benefit though.
  • Mithan - Friday, August 29, 2014 - link

    I am into gaming, and nothing is worth upgrading to over the 2500K if you have it. For you it's different of course :)
  • AnnihilatorX - Saturday, August 30, 2014 - link

    I am thinking of upgrading my 2500K actually, because I got a dud CPU which won't even overclock to 4.2 GHz.
  • mindbomb - Friday, August 29, 2014 - link

    That's the fault of the software. Seems unfair to blame the chip for that. DX12 should change that anyway.
  • CaedenV - Friday, August 29, 2014 - link

    How exactly will DX12 help? DX12 is good for helping wimpy hardware move from horrible settings to acceptable settings, but for the high end it will not help much at all. Beyond that, it helps the GPU be more efficient and will have little effect on the CPU. Even if it did help the CPU at all, take a look at those charts; pretty much every mid to high end CPU on the market can already saturate a GPU. If the GPU is already the bottleneck then improving the CPU does not help at all.
  • iLovefloss - Friday, August 29, 2014 - link

    DirectX 12 promises to make more efficient use of multicore processors. AnandTech has already done a piece on Intel's demonstration of its benefits.
  • bwat47 - Sunday, August 31, 2014 - link

    I'm sick of hearing this nonsense. Even with reasonably high-end hardware, Mantle and DX12 can help minimum framerates and framerate consistency considerably. I have a 2500K and a 280X, and when I use Mantle I get a big boost in minimum framerate.
  • The3D - Friday, September 12, 2014 - link

    Given the yet-to-be-released DirectX 12 and the overall tendency toward less CPU-intensive graphics APIs (Mantle), I guess the days in which we needed extra-powerful CPUs to run graphics-intensive games are coming to an end.
