Evolution in Performance

The underlying architecture in Haswell-E is nothing new. Haswell desktop processors were first released in July 2013 to replace Ivy Bridge, and at the time we suggested a 3-17% performance increase, especially in floating-point heavy benchmarks. Users moving from Sandy Bridge should expect a ~20% increase all around, with Nehalem users in the 40% range. Since the extreme platforms essentially add more cores to the same architecture, we could assume those recommendations carry over to Haswell-E versus IVB-E and the others, but we tested afresh for this review in order to verify those assumptions.

For our test, we took our previous CPU review samples from as far back as Nehalem. This means the i7-990X, i7-3960X, i7-4960X and the Haswell-E i7-5960X.

Each of the processors was set to 3.2 GHz on all cores, configured as four cores with HyperThreading disabled.

Memory was set to the CPU-supported frequency at JEDEC settings, meaning that if Intel has significantly adjusted memory controller performance between these platforms, it should show here as well. For detailed explanations of these tests, refer to the main results section of this review.

Averaged across our tests, the results show a 17% jump from Nehalem to SNB-E, 7% from SNB-E to IVB-E, and a final 6% from IVB-E to Haswell-E. This makes for a 31% (rounded) overall gain across three generations.
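As a sanity check, generational gains compound multiplicatively rather than adding up. A minimal sketch in Python, using the rounded per-generation percentages from above:

```python
# Compound the rounded per-generation clock-for-clock gains:
# Nehalem -> SNB-E (+17%), SNB-E -> IVB-E (+7%), IVB-E -> Haswell-E (+6%).
gains = [0.17, 0.07, 0.06]

total = 1.0
for g in gains:
    total *= 1.0 + g

overall = total - 1.0
print(f"Overall gain: {overall:.1%}")  # ~32.7% from the rounded inputs
```

Compounding the rounded inputs gives roughly 33%, slightly above the 31% quoted, because each per-generation figure is itself rounded from the raw averages.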

Web benchmarks are a difficult domain to parallelize, although HTML5 offers some ways to use as many cores in the system as possible. The biggest jump was in SunSpider, and overall there is a 34% gain from Nehalem to Haswell-E here, split as 14% from Nehalem to SNB-E, 6% from SNB-E to IVB-E and 12% from IVB-E to Haswell-E.

Purchasing managers often look to PCMark and SYSmark data to inform decisions, and the important number here is that Haswell-E takes a 7% average jump in scores over Ivy Bridge-E. This translates to a 24% gain since Nehalem.

Some of the more common synthetic benchmarks in multithreaded mode showed an average 8% jump over Ivy Bridge-E, with a 29% gain overall. Nehalem to Sandy Bridge-E was the biggest single jump, at 14% on average.

In the single-threaded tests, a smaller overall improvement of 23% was seen from the i7-990X, with 6% coming in this final generation.

The take-home message from these results, if there is one, is that:

Haswell-E has an 8% clock-for-clock performance improvement over Ivy Bridge-E for pure CPU-based workloads.

This also means a 13% overall jump from Sandy Bridge-E to Haswell-E.
From Nehalem, we have a total 28% gain in clock-for-clock performance.

Looking at gaming workloads, the difference shrinks. Unfortunately, our Nehalem system stopped working while we were collecting this data, but we can still see some generational improvements. First up, a GTX 770 at 1080p maximum settings:

The only title that sees much improvement is F1 2013, which uses the EGO engine and is the most amenable to better hardware under the hood. The rise in minimum frame rates is quite impressive.

For SLI performance:

All of our titles except Tomb Raider see at least a small improvement in our clock-for-clock testing, with BioShock this time also getting in on the action in both average and minimum frame rates.

If we were to go on clock-for-clock testing alone, these numbers do not show a particular benefit to upgrading from a Sandy Bridge system, except in F1 2013. However, our numbers later in the review at stock and overclocked speeds might change that.

Memory Latency and CPU Architecture

Haswell is a tock, Intel's second crack at 22nm. Anand went for a deep dive into the details previously, but in brief, Haswell brought better branch prediction, two new execution ports and larger buffers to feed a wider set of parallel execution resources. Haswell also adds support for AVX2, which includes an FMA operation to increase floating-point performance; to feed those units, Intel doubled the L1 cache bandwidth. While TSX was part of the instruction set as well, it has since been disabled due to a fundamental silicon flaw and will not be fixed in this generation.
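To put the FMA support in perspective, peak floating-point throughput can be estimated from core count, clock speed, SIMD width, and the number of FMA issue ports. A rough back-of-the-envelope sketch, assuming Haswell's two FMA ports and 256-bit AVX vectors (the clock figure below is illustrative, not a measured value):

```python
# Rough peak FP32 throughput estimate for a Haswell-class chip.
# An FMA counts as 2 FLOPs (multiply + add) per SIMD lane.
cores = 8            # e.g. an i7-5960X
clock_hz = 3.0e9     # 3.0 GHz, illustrative clock
fp32_lanes = 8       # 256-bit AVX = 8 single-precision lanes
fma_ports = 2        # Haswell can issue two FMAs per cycle

flops_per_cycle = fp32_lanes * 2 * fma_ports        # 32 FLOPs/cycle/core
peak_gflops = cores * clock_hz * flops_per_cycle / 1e9
print(f"Peak FP32: {peak_gflops:.0f} GFLOPS")  # 768 GFLOPS
```

This is a theoretical ceiling; real workloads only approach it when they are dominated by well-vectorized multiply-add chains.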

The increase in L3 cache size for the highest-end CPU comes from the increased core count, extending the lower-latency portion of the cache hierarchy to larger data sets. The move to DDR4-2133 C15, however, carries a small raw latency penalty compared to previous DDR3-1866 and DDR3-1600 implementations, which could show up in memory-sensitive tests as well.
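The first-word latency of these memory configurations is easy to estimate: CAS latency in nanoseconds is the CL cycle count divided by the I/O clock, which is half the DDR data rate. A quick sketch, assuming typical CL9 timings for the DDR3 kits (an assumption, not a measured value):

```python
# First-word CAS latency in nanoseconds: CL cycles / IO clock,
# where the IO clock (MHz) is half the DDR data rate (MT/s).
def cas_latency_ns(data_rate_mts: float, cl: int) -> float:
    io_clock_mhz = data_rate_mts / 2
    return cl / io_clock_mhz * 1000  # cycles / MHz -> ns

for name, rate, cl in [("DDR4-2133", 2133, 15),   # Haswell-E JEDEC
                       ("DDR3-1866", 1866, 9),    # assumed typical CL
                       ("DDR3-1600", 1600, 9)]:   # assumed typical CL
    print(f"{name} CL{cl}: {cas_latency_ns(rate, cl):.1f} ns")
```

By this arithmetic, DDR4-2133 C15 works out to roughly 14 ns to first access versus roughly 10-11 ns for the assumed DDR3 timings, so DDR4's advantages at launch lie in bandwidth and capacity rather than raw latency.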

