Gaming Benchmarks

One of the important things to test in our gaming benchmarks this time around is the effect of the Core i7-5820K having 28 PCIe 3.0 lanes rather than the normal 40. This means that the CPU is limited to x16/x8 operation in SLI, rather than x16/x16.
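As a rough yardstick for what that lane cut means in bandwidth terms, here is a minimal sketch, assuming PCIe 3.0's roughly 985 MB/s of usable bandwidth per lane after 128b/130b encoding (the constant and helper below are illustrative assumptions, not figures from our test setup):

```python
# Rough usable bandwidth of a PCIe 3.0 slot, assuming ~985 MB/s per lane
# (8 GT/s with 128b/130b encoding); real-world throughput will vary.
PCIE3_MB_PER_LANE = 985

def slot_bandwidth_gb_s(lanes: int) -> float:
    """Approximate usable PCIe 3.0 slot bandwidth in GB/s."""
    return lanes * PCIE3_MB_PER_LANE / 1000

# A 40-lane CPU can feed two cards at x16/x16; the 28-lane 5820K
# drops the second card to x8 in two-way SLI.
for label, lanes in (("x16", 16), ("x8", 8)):
    print(f"{label}: ~{slot_bandwidth_gb_s(lanes):.1f} GB/s")
# x16: ~15.8 GB/s, x8: ~7.9 GB/s
```

Whether halving the second card's link actually costs frames is what the results below should show.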

F1 2013

First up is F1 2013 by Codemasters. I am a big Formula 1 fan in my spare time, and nothing makes me happier than carving up the field in a Caterham, waving to the Red Bulls as I drive by (because I play on easy and take shortcuts). F1 2013 uses the EGO Engine, and like other Codemasters games it ends up very playable even on old hardware. In order to beef up the benchmark a bit, we devised the following scenario for the benchmark mode: one lap of Spa-Francorchamps in the heavy wet, following Jenson Button in the McLaren, who starts on the grid in 22nd place, with the field made up of 11 Williams cars, 5 Marussias and 5 Caterhams in that order. This puts the emphasis on the CPU to handle the AI in the wet, and allows for a good amount of overtaking during the automated benchmark. We test at 1920x1080 on Ultra graphical settings.

[Graph: F1 2013 SLI, Average FPS]


Nothing here really shows any advantage of Haswell-E over Ivy Bridge-E, although the 10% gaps to the 990X for minimum frame rates offer some perspective.

Bioshock Infinite

Bioshock Infinite was Zero Punctuation’s Game of the Year for 2013, uses the Unreal Engine 3, and is designed to scale with both cores and graphical prowess. We test the benchmark using the Adrenaline benchmark tool and the Xtreme (1920x1080, Maximum) performance setting, noting down the average frame rates and the minimum frame rates.

[Graph: Bioshock Infinite SLI, Average FPS]


Bioshock Infinite likes a mixture of cores and frequency, especially when it comes to SLI.

Tomb Raider

The next benchmark in our test is Tomb Raider, an AMD-optimized game lauded for its use of TressFX to create dynamic hair and increase in-game immersion. Tomb Raider uses a modified version of the Crystal Engine, and enjoys raw horsepower. We test the benchmark using the Adrenaline benchmark tool and the Xtreme (1920x1080, Maximum) performance setting, noting down the average frame rates and the minimum frame rates.

[Graph: Tomb Raider SLI, Average FPS]


Tomb Raider is blissfully CPU agnostic, it would seem.

Sleeping Dogs

Sleeping Dogs is a benchmarking wet dream – a highly complex benchmark that can bring the toughest setup and high resolutions down into single figures. Having an extreme SSAO setting can do that, but at the right settings Sleeping Dogs is highly playable and enjoyable. We run the basic benchmark program laid out in the Adrenaline benchmark tool at the Xtreme (1920x1080, Maximum) performance setting, noting down the average frame rates and the minimum frame rates.

[Graph: Sleeping Dogs SLI, Average FPS]


The biggest spread in CPU performance comes in the minimum frame rates while in SLI - the 5960X reaches 67.4 FPS minimum, with only the xx60X CPUs of each generation moving above 60 FPS. That being said, all the Intel CPUs in our test are above 55 FPS, though it would seem that the xx60X processors have some more headroom.

Battlefield 4

The EA/DICE series that has taken countless hours of my life away is back for another iteration, using the Frostbite 3 engine. AMD is also piling its resources into BF4 with the new Mantle API for developers, designed to cut the time required for the CPU to dispatch commands to the graphical sub-system. For our test we use the in-game benchmarking tools and record the frame time for the first ~70 seconds of the Tashgar single player mission, which is an on-rails sequence involving the generation and rendering of objects and textures. We test at 1920x1080 at Ultra settings.
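Since this test logs per-frame times rather than an FPS counter, here is a minimal sketch of how a frame-time log turns into the average and minimum FPS figures we report (the sample frame times and function names are hypothetical, purely for illustration):

```python
# Convert a log of per-frame render times (in milliseconds) into
# average and minimum FPS. The sample data below is made up; the real
# run logs roughly 70 seconds of frames.
frame_times_ms = [8.9, 9.1, 9.4, 10.2, 13.8, 9.0, 9.3]

def average_fps(times_ms):
    # Total frames over total elapsed time, not the mean of per-frame FPS.
    return 1000 * len(times_ms) / sum(times_ms)

def minimum_fps(times_ms):
    # The single slowest frame sets the minimum instantaneous FPS.
    return 1000 / max(times_ms)

print(f"Average: {average_fps(frame_times_ms):.1f} FPS")
print(f"Minimum: {minimum_fps(frame_times_ms):.1f} FPS")
```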

[Graph: Battlefield 4 SLI, Average FPS]


Battlefield 4 is the only benchmark where we see the 5820K with its 28 PCIe lanes down by any reasonable margin against the other two 5xxx processors, and even then this is around 5% when in SLI. Not many users will notice the difference between 105 FPS and 110 FPS, and minimum frame rates are still 75 FPS+ on all Intel processors.
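To put that gap in frame-time terms, a quick back-of-the-envelope calculation using the figures quoted above:

```python
# Per-frame cost of dropping from ~110 FPS to ~105 FPS.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

delta_ms = frame_time_ms(105) - frame_time_ms(110)
pct = 100 * (110 - 105) / 110
print(f"{frame_time_ms(110):.2f} ms vs {frame_time_ms(105):.2f} ms per frame "
      f"(+{delta_ms:.2f} ms, ~{pct:.1f}% slower)")
# 9.09 ms vs 9.52 ms per frame (+0.43 ms, ~4.5% slower)
```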

Comments

  • Michael REMY - Friday, August 29, 2014 - link

    again, in your table of extreme core i7 cpus, you forgot the last 4-core Nehalem which is : the i7-975X at 3.3GHz .
    No, the 965X is not the latest 4-core extreme !
  • Death666Angel - Friday, August 29, 2014 - link

    Considering this would have cost me ~340€ over my i7-4770K (which I have @ 4.5GHz and delidded), because of the price difference in CPU and the fact that I had a 1150 socket mainboard from my retired mining rig, I'm not too salty about it. At least it is 6 core at the low end, that is encouraging. I've been mostly fine with my i7-860 so I guess the i7-4770k will serve me a while.
  • Death666Angel - Saturday, August 30, 2014 - link

    "With ASUS motherboards, they have implemented a new onboard button which tells 2x/3x GPU users which slots to go in with LEDs on the motherboard to avoid confusion."
    Because looking stuff up in the manual is way too complicated!
  • anactoraaron - Friday, August 29, 2014 - link

    The 5820 can be had for $299 at micro center and they will also discount a compatible motherboard by $40. Jus' sayin'. IDK if there's some kind of ad agreement, etc for listing Newegg's price... Anyone shopping for anything should always shop around.
  • tuxRoller - Friday, August 29, 2014 - link

    "Very few PC games lose out due to having PCIe 3.0 x8 over PCIe 3.0 x16"

    Any? Even BF4 might be more due to other factors. It might be more useful to determine these bottlenecks with uhd.
  • Ian Cutress - Monday, September 1, 2014 - link

    I want to try with UHD. Need the monitors though.
  • Mr Perfect - Friday, August 29, 2014 - link

    The 28 lanes of the i7-5820K has almost no effect on SLI gaming at 1080p.


    I realize you where trying to CPU limit the benchmarks by using such a low resolution, but does this still hold up when running, say, three 1440p monitors? Wouldn't that be the time when the GPUs are maxed out and start shuttling large amounts of data between themselves?
  • Ian Cutress - Monday, September 1, 2014 - link

    I want to test with higher resolutions in the near future, although my monitor situation is not as fruitful as I would hope. There is no big AnandTech warehouse, we all work in our corner of the world so shipping around this HW is difficult.
  • KAlmquist - Friday, August 29, 2014 - link

    "The move to DDR4 2133 C15 would seem to have latency benefits over previous DDR3-1866 and DDR3-1600 implementations as well."

    If my math is correct, this is wrong. With DDR4 2133 timings of 15-15-15, each of those 15's corresponds to 14.1 nanoseconds. (Divide 2133 by two to get the actual frequency, then divide the clock count by the frequency.) With DDR3 1600 and the common 9-9-9 timings, each time is only 11.25 nanoseconds. With DDR3, the actual transfer of the data takes four clock cycles (there are eight transfers, but "DDR" stands for "double data rate" meaning that there are two transfers per clock cycle). That translates to 5 nanonseconds on DDR3 1600. DDR4 transfers twice as much data at a time, so with DDR4 2133 a transfer takes eight clock cycles or 7.5 nanoseconds. So DDR3 1600 has lower latency than the DDR4 2133 memory.

    So why does Sandra report a memory latency of around 28.75 nanoseconds (92 clock cycles at 3.2 Ghz) as shown in the chart on page 2 of this review? If a bank does not have an open page, then the memory latency should be 15+15+8 clock cycles, or 35.6 nanoseconds, not counting the latency internal to the processor. So the Sandra benchmark result seems implausible to me. As far as I can tell, the source code for the Sandra benchmark is not available so there is no way to tell exactly what it is measuring.
  • JumpingJack - Monday, September 1, 2014 - link

    Good points.
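For readers who want to check the CAS latency arithmetic in KAlmquist's comment, a minimal sketch using the timings quoted in that comment (the helper below is illustrative only):

```python
# Absolute CAS latency in nanoseconds: CL cycles divided by the memory
# clock, which is half the DDR transfer rate (two transfers per clock).
def cas_latency_ns(transfer_rate_mt_s: float, cl: int) -> float:
    clock_mhz = transfer_rate_mt_s / 2
    return cl / clock_mhz * 1000

print(f"DDR3-1600 CL9:  {cas_latency_ns(1600, 9):.2f} ns")   # ~11.25 ns
print(f"DDR4-2133 CL15: {cas_latency_ns(2133, 15):.2f} ns")  # ~14.07 ns
```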
