The Intel Haswell-E CPU Review: Core i7-5960X, i7-5930K and i7-5820K Tested
by Ian Cutress on August 29, 2014 12:00 PM EST

Gaming Benchmarks
One of the important things to test in our gaming benchmarks this time around is the effect of the Core i7-5820K having 28 PCIe 3.0 lanes rather than the normal 40. This means that the CPU is limited to x16/x8 operation in SLI, rather than x16/x16.
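For a sense of scale, PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so an x8 link offers roughly half the theoretical bandwidth of an x16 link. A quick back-of-the-envelope sketch (illustrative Python using standard PCIe figures, not anything measured in this review):

```python
# Rough theoretical bandwidth for the two SLI lane configurations.
# Figures are standard PCIe 3.0 numbers, not measurements from this review.
GT_PER_S = 8.0            # PCIe 3.0 raw signalling rate per lane
ENCODING = 128.0 / 130.0  # 128b/130b line encoding efficiency

def slot_bandwidth_gbs(lanes: int) -> float:
    """One-direction theoretical bandwidth in GB/s for a slot with `lanes` lanes."""
    return lanes * GT_PER_S * ENCODING / 8.0  # Gbit/s -> GB/s

for label, config in (("40-lane CPUs", (16, 16)), ("28-lane i7-5820K", (16, 8))):
    slots = " + ".join(f"x{n} = {slot_bandwidth_gbs(n):.1f} GB/s" for n in config)
    print(f"{label}: {slots}")
# 40-lane CPUs: x16 = 15.8 GB/s + x16 = 15.8 GB/s
# 28-lane i7-5820K: x16 = 15.8 GB/s + x8 = 7.9 GB/s
```

Whether that halved bandwidth to the second card matters in practice is exactly what the benchmarks below set out to test.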
F1 2013
First up is F1 2013 by Codemasters. I am a big Formula 1 fan in my spare time, and nothing makes me happier than carving up the field in a Caterham, waving to the Red Bulls as I drive by (because I play on easy and take shortcuts). F1 2013 uses the EGO Engine, and like other Codemasters games it ends up being very playable even on older hardware. In order to beef up the benchmark a bit, we devised the following scenario for the benchmark mode: one lap of Spa-Francorchamps in the heavy wet. The benchmark follows Jenson Button in the McLaren, who starts on the grid in 22nd place, with the field made up of 11 Williams cars, 5 Marussias and 5 Caterhams in that order. This puts the emphasis on the CPU to handle the AI in the wet, and allows for a good amount of overtaking during the automated benchmark. We test at 1920x1080 on Ultra graphical settings.
Nothing here really shows any advantage of Haswell-E over Ivy Bridge-E, although the 10% gaps to the 990X for minimum frame rates offer some perspective.
Bioshock Infinite
Bioshock Infinite was Zero Punctuation’s Game of the Year for 2013, uses the Unreal Engine 3, and is designed to scale with both cores and graphical prowess. We test the benchmark using the Adrenaline benchmark tool and the Xtreme (1920x1080, Maximum) performance setting, noting down the average frame rates and the minimum frame rates.
Bioshock Infinite likes a mixture of cores and frequency, especially when it comes to SLI.
Tomb Raider
The next benchmark in our test is Tomb Raider. Tomb Raider is an AMD optimized game, lauded for its use of TressFX creating dynamic hair to increase the immersion in game. Tomb Raider uses a modified version of the Crystal Engine, and enjoys raw horsepower. We test the benchmark using the Adrenaline benchmark tool and the Xtreme (1920x1080, Maximum) performance setting, noting down the average frame rates and the minimum frame rates.
Tomb Raider, it would seem, is blissfully CPU agnostic.
Sleeping Dogs
Sleeping Dogs is a benchmarking wet dream – a highly complex benchmark that can bring the toughest setup and high resolutions down into single figures. Having an extreme SSAO setting can do that, but at the right settings Sleeping Dogs is highly playable and enjoyable. We run the basic benchmark program laid out in the Adrenaline benchmark tool, and the Xtreme (1920x1080, Maximum) performance setting, noting down the average frame rates and the minimum frame rates.
The biggest indicator of CPU performance differences is the minimum frame rate in SLI - the 5960X reaches a 67.4 FPS minimum, with only the xx60X CPUs of each generation moving above 60 FPS. That being said, all the Intel CPUs in our test stay above 55 FPS, though it would seem that the 60X processors have some extra headroom.
Battlefield 4
The EA/DICE series that has taken countless hours of my life away is back for another iteration, using the Frostbite 3 engine. AMD is also piling its resources into BF4 with the new Mantle API for developers, designed to cut the time required for the CPU to dispatch commands to the graphical sub-system. For our test we use the in-game benchmarking tools and record the frame time for the first ~70 seconds of the Tashgar single player mission, which is an on-rails sequence that generates and renders objects and textures. We test at 1920x1080 at Ultra settings.
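As a side note on methodology, both the average and the minimum frame rates fall directly out of a frame-time recording like this. A minimal sketch, assuming a simple list of per-frame times in milliseconds (the real log format is not documented here):

```python
# Minimal sketch of turning a frame-time log into the average and minimum
# frame rates reported in these charts. The list below is placeholder data;
# the actual format of the in-game benchmark log is not shown in this article.
frame_times_ms = [8.7, 9.1, 12.4, 9.0, 15.2, 9.3]

avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)  # overall throughput
min_fps = 1000.0 / max(frame_times_ms)  # the slowest frame defines the minimum

print(f"average: {avg_fps:.1f} FPS, minimum: {min_fps:.1f} FPS")
```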
Battlefield 4 is the only benchmark where we see the 5820K with its 28 PCIe lanes down by any reasonable margin against the other two 5xxx processors, and even then this is around 5% when in SLI. Not many users will notice the difference between 105 FPS and 110 FPS, and minimum frame rates are still 75 FPS+ on all Intel processors.
203 Comments
schmak01 - Friday, January 16, 2015 - link
I thought the same thing, but it probably depends on the game. I got the MSI XPower AC X99S board with the 5930K after running a 2500K at 4.5 GHz for years. I play a lot of FFXIV, which is DX9 and therefore CPU bound, and I noticed a marked improvement. It's a multithreaded game, so that helps, but on my trusty Sandy Bridge I was always at 100% across all cores while playing; now it's rarely above 15-20%. Areas where network traffic picks up, like high-population zones, show an even bigger improvement as I am not running out of CPU cycles. Lastly, turn-based games like GalCiv III and Civ5 on absurdly large maps with many AIs run much faster. Loading an old Civ5 game where turns took 3-4 minutes, turns now take a few seconds. There is also the fact that when Broadwell-E is out in 2016 it will still use the LGA 2011-3 socket and X99 chipset, so I figured it was a good time to upgrade for 'future proofing' my box for a while.
Flunk - Friday, August 29, 2014 - link
Right, for rendering, video encoding, server applications, and only if there is no GPU-accelerated version for the task at hand. You have to admit that embarrassingly parallel workloads are both rare and quite often better off handed to the GPU. Also, you're neglecting overclocking; if you take that into account, the lowest-end Haswell-E only has a 20%-30% advantage. And I'm not sure about you, but I normally use Xeons for my servers.
Haswell-E has a point, but it's extremely niche and, dare I say, extremely overpriced. An 8-core at $600 would be a little more palatable to me, especially with these low clocks and uninspiring single-thread performance.
wireframed - Friday, August 29, 2014 - link
The 5960X is half the price of the equivalent Xeon. Sure, if your budget is unlimited, $1k or $2k per CPU doesn't matter, but how often is that realistic? For content creation, CPU performance is still very much relevant. GPU acceleration just isn't up to scratch in many areas: too little RAM, not flexible enough. When you're waiting days or weeks for renderings, every bit counts.
CaedenV - Friday, August 29, 2014 - link
Improvements are relative. For gaming... not so much. Most games still only use 4 cores (or fewer!), and rely more on clock rate and the GPU than on specific CPU technologies and advantages, so a newer 8-core really does not bring much more to the table for most games compared to an older quad core... and those Sandy Bridge parts could OC to the moon; even my locked part hits 4.2 GHz without throwing a fuss. Even for things like HD video editing, basic 3D content creation, etc., you are looking at minor improvements that are never going to be noticed by the end user. Move into 4K editing and larger 3D work... then you see substantial improvements moving to these new chips... but then again you should probably be on a dual Xeon setup for those kinds of high-end workloads. These chips are for gamers with too much money (a class that I hope to join some day!), or professionals trying to pinch a few pennies... they simply are not practical in their benefits for either camp.
ArtShapiro - Friday, August 29, 2014 - link
Same here. I think the cost of operation is of concern in these days of escalating energy rates. I run the 2500K in a little Antec mini-ITX case with something like a 150 or 160 watt built-in power supply. It idles in the low 20s, if I recall, meaning I can leave it on all day without California needing to build more nuclear power plants. I can only cringe at talk of 1500 watt power supplies.
wireframed - Friday, August 29, 2014 - link
Performance per watt is what's important. If the CPU is twice as fast and uses 60% more power, you still come out ahead. The idle draw is actually pretty good for Haswell-E; it's only when you start overclocking that it gets really crazy. DDR4's main selling point is reduced power draw, so that helps as well.
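Putting the comment's hypothetical numbers into a trivial calculation: twice the speed at 1.6x the power still works out to a 25% gain in performance per watt.

```python
# Illustrating the perf-per-watt arithmetic with the comment's
# hypothetical figures (not measured values).
speedup = 2.0      # new CPU finishes the work twice as fast
power_ratio = 1.6  # ...while drawing 60% more power

print(f"{speedup / power_ratio:.2f}x performance per watt")  # 1.25x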
actionjksn - Saturday, August 30, 2014 - link
If you have a 1500 watt power supply, it doesn't mean you're actually using 1500 watts. It will only put out what the system demands at whatever workload you're putting on it at the time. If you replaced your system with one of these big new ones, your bill might go up 5 to 8 dollars per month if you're a pretty heavy user who is really hammering that system frequently and hard. The only exception I can think of would be if you were mining Bitcoin 24/7 or something like that, and even then it would be your graphics cards hitting you hard on the electric bill. It may be a little higher in California, since you guys get overcharged for pretty much everything.
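As a rough sanity check on that $5-8 figure (every input below is an assumption chosen for illustration, not taken from the comment):

```python
# Back-of-the-envelope monthly cost of a system's extra power draw.
# All inputs are assumptions for illustration only.
extra_watts = 250     # additional draw under load vs. the old system
hours_per_day = 8     # hours of heavy use per day
rate_per_kwh = 0.12   # electricity price in USD per kWh

monthly_kwh = extra_watts * hours_per_day * 30 / 1000.0  # 60 kWh
print(f"~${monthly_kwh * rate_per_kwh:.2f}/month")       # ~$7.20
```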
Flashman024 - Friday, May 8, 2015 - link
Just out of curiosity, what do you pay for electricity? Because I pay less here than I did when I lived in IA. We're at $0.10 to $0.16 per kWh (Tier 3, based on 1000 kWh+ usage). I heard these tired blanket statements before we moved, and was pleased to find out they're mostly BS.
CaedenV - Friday, August 29, 2014 - link
Agreed, my little i7 2600 still keeps up just fine, and I am not really tempted to upgrade my system yet... maybe a new GPU, but the system itself is still just fine. Let's see some more focus on better single-thread performance, refine DDR4 support a bit more, and give PCIe SSDs a chance to catch on, and then I will look into upgrading. Still, this is the first real step forward on the CPU side that we have seen in a good long time, and I am really excited to finally see some Intel consumer 8-core parts hit the market.
twtech - Friday, August 29, 2014 - link
The overclocking results are definitely a positive relative to the last generation, but really the pull-the-trigger point for me would have been the 5930K coming with 8 cores. It looks like I'll be waiting another generation as well. I'm currently running an OCed 3930K, and given the cost of this platform, the performance increase just doesn't justify the upgrade.