Evolution in Performance

The underlying architecture in Haswell-E is nothing new. Haswell desktop processors first launched in July 2013 to replace Ivy Bridge, and at the time we noted an expected 3-17% increase, especially in floating-point-heavy benchmarks. Users moving from Sandy Bridge should expect a ~20% increase all around, with Nehalem users in the 40% range. Because the extreme platform largely just adds more cores, we could assume that the recommendations for Haswell-E over IVB-E and the others would be similar, but we tested afresh for this review in order to verify those assumptions.

For our test, we took our previous CPU review samples from as far back as Nehalem. This means the i7-990X, i7-3960X, i7-4960X and the Haswell-E i7-5960X.

Each processor was set to 3.2 GHz on all cores, limited to four cores, and had HyperThreading disabled.

Memory was set to each CPU's supported frequency at JEDEC settings, meaning that if Intel has significantly adjusted the performance of the memory controllers between these platforms, this would show as well. For detailed explanations of these tests, refer to the main results section of this review.

Average results show a 17% jump from Nehalem to SNB-E, 7% from SNB-E to IVB-E, and a final 6% from IVB-E to Haswell-E. This makes for a 31% (rounded) overall gain across three generations.
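Generational gains like these compound multiplicatively rather than adding up. A quick sketch using the rounded per-generation averages quoted above (so the cumulative figure lands slightly above the article's rounded 31%):

```python
# Per-generation clock-for-clock gains from this review (rounded averages).
gains = [
    ("Nehalem -> SNB-E", 0.17),
    ("SNB-E -> IVB-E", 0.07),
    ("IVB-E -> Haswell-E", 0.06),
]

total = 1.0
for step, gain in gains:
    total *= 1.0 + gain  # gains compound, they do not simply add
    print(f"{step}: +{gain:.0%} (cumulative +{total - 1:.1%})")
```

Compounding the rounded steps gives roughly +33%; the article's 31% figure comes from averaging the unrounded per-test results.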

Web benchmarks are a difficult domain to parallelize, although HTML5 offers some help in using as many cores in the system as possible. The biggest jump was in SunSpider, and overall there is a 34% gain from Nehalem to Haswell-E here, split as 14% from Nehalem to SNB-E, 6% from SNB-E to IVB-E, and 12% from IVB-E to Haswell-E.

Purchasing managers often look to the PCMark and SYSmark data to clarify decisions and the important number here is that Haswell-E took a 7% average jump in scores over Ivy Bridge-E. This translates to a 24% jump since Nehalem.

Some of the more common synthetic benchmarks in multithreaded mode showed an average 8% jump from Ivy Bridge-E, with a 29% jump overall. Nehalem to Sandy Bridge-E was a bigger single jump, giving 14% average.

In the single-threaded tests, a smaller overall 23% improvement was seen over the i7-990X, with 6% coming in this final generation.

The take-home message from these results, if there is one:

Haswell-E has an 8% clock-for-clock performance improvement over Ivy Bridge-E for pure CPU workloads.

This also means an overall 13% jump from Sandy Bridge-E to Haswell-E.
From Nehalem, we have a total 28% rise in clock-for-clock performance.

Looking at gaming workloads, the difference shrinks. Unfortunately our Nehalem system decided to stop working while taking this data, but we can still see some generational improvements. First up, a GTX 770 at 1080p Max settings:

The only title that sees much improvement is F1 2013, which uses the EGO engine and is most amenable to better hardware under the hood. The rise in minimum frame rates is quite impressive.

For SLI performance:

All of our titles except Tomb Raider get at least a small improvement in our clock-for-clock testing, with BioShock this time also getting in on the action in both average and minimum frame rates.

If we were to go on clock-for-clock testing alone, these numbers do not show a particular benefit to upgrading from a Sandy Bridge system, except in F1 2013. However, our numbers later in the review at stock and overclocked speeds might change that.

Memory Latency and CPU Architecture

Haswell is a tock, meaning the second crack at 22nm. Anand went for a deep dive into the details previously, but in brief, Haswell brought better branch prediction, two new execution ports, and increased buffers to feed an expanded set of parallel execution resources. Haswell also adds support for AVX2, which includes FMA operations to increase floating-point throughput; as a result, Intel doubled the L1 cache bandwidth to keep the units fed. While TSX was part of the instruction set as well, it has since been disabled due to a silicon erratum and will not be fixed in this generation.
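The FMA units are why AVX2 matters for peak throughput: each fused multiply-add counts as two floating-point operations, and Haswell can issue two 256-bit FMAs per cycle per core. A back-of-envelope sketch, assuming the i7-5960X's 3.0 GHz base clock (turbo and AVX clock behavior are ignored here):

```python
# Rough peak FP throughput for a Haswell core: 2 FMA ports per cycle,
# each FMA = 2 FLOPs, with 8 SP (or 4 DP) lanes per 256-bit vector.
def peak_gflops(cores, ghz, vector_lanes, fma_ports=2, flops_per_fma=2):
    flops_per_cycle = fma_ports * vector_lanes * flops_per_fma
    return cores * ghz * flops_per_cycle

sp = peak_gflops(cores=8, ghz=3.0, vector_lanes=8)  # single precision
dp = peak_gflops(cores=8, ghz=3.0, vector_lanes=4)  # double precision
print(f"Peak SP: {sp:.0f} GFLOPS, peak DP: {dp:.0f} GFLOPS")
```

Real code rarely sustains this, but it shows why doubling L1 bandwidth was necessary: without it, the wider FMA units would simply starve.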

The increase in L3 cache size for the highest-end CPU comes from the increased core count, extending the lower-latency portion of the L3 to larger data accesses. The move to DDR4-2133 C15 keeps absolute latency roughly in line with previous DDR3-1866 and DDR3-1600 implementations while adding bandwidth.
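Higher CAS numbers on DDR4 look alarming until converted to nanoseconds, since the clock rises alongside them. A quick comparison sketch; the DDR3 CAS values below are typical JEDEC timings, not necessarily the exact kits used in this review:

```python
# First-word CAS latency in nanoseconds: CAS cycles divided by the
# memory clock, which is half the transfer rate for DDR memory.
def cas_ns(transfer_mts, cas):
    memory_clock_mhz = transfer_mts / 2
    return cas / memory_clock_mhz * 1000

for name, mts, cas in [("DDR3-1600 C11", 1600, 11),
                       ("DDR3-1866 C13", 1866, 13),
                       ("DDR4-2133 C15", 2133, 15)]:
    print(f"{name}: {cas_ns(mts, cas):.2f} ns")
```

All three land within a few tenths of a nanosecond of 14 ns, which is why the DDR4 transition is mostly a bandwidth and capacity story rather than a latency one.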


203 Comments


  • jabber - Saturday, August 30, 2014 - link

    At the end of the day the Xeons are just bug-fixed, lower-power i7 chips anyway. But one way that Xeons come into their own is on the second-hand market. I'll be picking up ex-corporate dual-CPU Xeon workstations for peanuts compared to the domestic versions. I have a 7-year-old 8-core Xeon workstation that still runs wPrime in 7 seconds. Not bad for a $100 box.
  • mapesdhs - Saturday, August 30, 2014 - link

    All correct, though it concerns me that the max RAM of X99 may only be 64GB much of the time. After adding two cores and moving up to working with 4K material, that's not going to be enough.

    Performance-wise, good for a new build, but sadly probably not good enough as an upgrade over a 3930K @ 4.7+ or anything that follows. The better storage options might be an incentive for some to upgrade though, depending on their RAID setups & suchlike.

    Ian.
  • leminlyme - Tuesday, September 02, 2014 - link

    They are applicable to different crowds, and computing doesn't exclude gaming, whereas Xeons to a degree do (though I'm sure for most of them you'd be fine). I for one like those PCIe lanes, and the per-core performance on the desktop processors is just typically better. Plus form factor and all that. These fill a glorious niche that I am indeed excited about. They're pretty damn cheap for their quality too. I guess the RAM totally circumvents that benefit though.
  • Mithan - Friday, August 29, 2014 - link

    I am into gaming and nothing is worth upgrading over the 2500 if you have it. For you it's different of course :)
  • AnnihilatorX - Saturday, August 30, 2014 - link

    I am thinking of upgrading my 2500K actually, because I got a dud CPU which won't even overclock to 4.2GHz
  • mindbomb - Friday, August 29, 2014 - link

    That's the fault of the software. Seems unfair to blame the chip for that. DX12 should change that anyway.
  • CaedenV - Friday, August 29, 2014 - link

    How exactly will DX12 help? DX12 is good for helping wimpy hardware move from horrible settings to acceptable settings, but for the high end it will not help much at all. Beyond that, it helps the GPU be more efficient and will have little effect on the CPU. Even if it did help the CPU at all, take a look at those charts; pretty much every mid to high end CPU on the market can already saturate a GPU. If the GPU is already the bottleneck then improving the CPU does not help at all.
  • iLovefloss - Friday, August 29, 2014 - link

    DirectX 12 promises to make more efficient use of multicore processors. AnandTech already did a piece on Intel's demonstration of its benefits.
  • bwat47 - Sunday, August 31, 2014 - link

    I'm sick of hearing this nonsense. Even with reasonably high-end hardware, Mantle and DX12 can help minimum framerates and framerate consistency considerably. I have a 2500K and a 280X, and when I use Mantle I get a big boost in minimum framerate.
  • The3D - Friday, September 12, 2014 - link

    Given the yet-to-be-released DirectX 12 and the overall tendency toward less CPU-intensive graphics APIs (Mantle), I guess the days in which we needed extra-powerful CPUs to run graphics-intensive games are coming to an end.
