Load Delta Power Consumption

Power consumption was tested with the system in a single MSI GTX 770 Lightning GPU configuration, with a wall meter connected to the OCZ 1250W power supply. This power supply is 80 PLUS Gold rated and, on the UK's 230-240 V mains, delivers roughly 75% efficiency below 50 W and 90%+ efficiency at 250 W, making it suitable for both idle and multi-GPU loads. Measuring at the wall lets us compare how the UEFI and the board manage power delivery to components under load, and it includes typical PSU losses due to efficiency.

We take the power delta between idle and load as our tested value, giving an indication of the power increase from the CPU when placed under stress. Unfortunately, we were not in a position to test power consumption for the two six-core CPUs due to the timing of testing.
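The delta method itself is simple arithmetic: subtract the idle wall reading from the loaded wall reading. A minimal sketch, using made-up wattages rather than our measured figures:

```python
# Delta method: loaded wall reading minus idle wall reading.
# The wattages below are illustrative placeholders, not measured results.
idle_watts = 72.0    # wall reading at an idle desktop
load_watts = 214.0   # wall reading under a sustained AVX load

delta = load_watts - idle_watts
print(f"Power delta (idle -> load): {delta:.0f} W")
```

The delta largely cancels out the fixed draw of the motherboard, drives, and idle GPU, isolating the increase attributable to the CPU under stress.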

Power Consumption Delta: Idle to AVX

Because not all processors of the same designation leave Intel's fabs with the same stock voltage, there can be mild variation, and the TDP printed on each CPU is understandably an absolute stock limit. Due to power supply efficiency losses, our wall readings come in higher than TDP, but the comparisons are the more interesting result: the 5960X comes across as more efficient than Sandy Bridge-E and Ivy Bridge-E, including the 130 W Ivy Bridge-E Xeon.
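To see why wall readings overshoot TDP: the meter measures AC draw, which includes the PSU's conversion losses. Multiplying a wall delta by the PSU's efficiency at that load point gives a rough DC-side estimate. A sketch with assumed figures:

```python
# Wall (AC) power includes PSU conversion losses, so it overshoots the
# DC-side draw: DC out = AC in * efficiency. Figures are assumptions
# for illustration, not our measured results.
wall_delta = 160.0   # measured wall delta in watts (hypothetical)
efficiency = 0.90    # ~90% for a Gold unit around the 250 W load point

dc_estimate = wall_delta * efficiency
print(f"Estimated DC-side delta: {dc_estimate:.0f} W")
```

This is only approximate, since PSU efficiency varies with load and the exact curve differs per unit, which is another reason the cross-generation comparisons matter more than the absolute numbers.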

Test Setup

Processor          Intel Core i7-5820K   Intel Core i7-5930K   Intel Core i7-5960X
                   6C/12T                6C/12T                8C/16T
                   3.3 GHz / 3.6 GHz     3.5 GHz / 3.7 GHz     3.0 GHz / 3.5 GHz
Motherboards       ASUS X99 Deluxe, ASRock X99 Extreme4
Cooling            Corsair H80i, Cooler Master Nepton 140XL
Power Supplies     OCZ ZX Series 1250W (80 PLUS Gold), Corsair AX1200i 1200W (80 PLUS Platinum)
Memory             Corsair 4x8 GB DDR4-2133 15-15-15 1.2 V, G.Skill Ripjaws4 DDR4-2133 15-15-15 1.2 V
Memory Settings    JEDEC
Video Cards        MSI GTX 770 Lightning 2GB (1150/1202 Boost)
Video Drivers      NVIDIA Drivers 337.88
Hard Drive         OCZ Vertex 3
Optical Drive      LG GH22NS50
Case               Open Test Bed
Operating System   Windows 7 64-bit SP1
USB 2/3 Testing    OCZ Vertex 3 240GB with SATA->USB Adaptor

Many thanks to...

We must thank the following companies for kindly providing hardware for our test bed:

Thank you to OCZ for providing us with PSUs and SSDs.
Thank you to G.Skill for providing us with memory.
Thank you to Corsair for providing us with an AX1200i PSU and a Corsair H80i CLC.
Thank you to MSI for providing us with the NVIDIA GTX 770 Lightning GPUs.
Thank you to Rosewill for providing us with PSUs and RK-9100 keyboards.
Thank you to ASRock for providing us with some IO testing kit.
Thank you to Cooler Master for providing us with Nepton 140XL CLCs and JAS minis.

A quick word of thanks to the manufacturers who sent us extra testing kit for review, including G.Skill's Ripjaws 4 DDR4-2133 CL15 modules, Corsair for similar modules, and Cooler Master for the Nepton 140XL CLCs. We will be reviewing the DDR4 modules in due course, including Corsair's new extreme DDR4-3200 kit, and we have already tested the Nepton 140XL in a big 14-way CLC roundup.

203 Comments

  • schmak01 - Friday, January 16, 2015 - link

    I thought the same thing, but it probably depends on the game. I got the MSI XPower AC X99S board with the 5930K after running a 2500K at 4.5 GHz for years. I play a lot of FFXIV, which is DX9 and therefore CPU-strapped, and I noticed a marked improvement. It's a multithreaded game, so that helps, but on my trusty Sandy Bridge I was always at 100% across all cores while playing; now it's rarely above 15-20%. Areas where Ethernet traffic picks up, such as high-population areas, show a much bigger improvement, as I am no longer running out of CPU cycles. Lastly, turn-based games like GalCiv III and Civ5 on absurdly large maps with many AIs run much faster. Loading an old Civ5 game, turns that took 3-4 minutes now take a few seconds.

    There is also the fact that when Broadwell-E's are out in 2016 they will still use the LGA 2011-3 socket and X99 chipset, I figured it was a good time to upgrade for 'future proofing' my box for a while.
    Reply
  • Flunk - Friday, August 29, 2014 - link

    Right, for rendering, video encoding, server applications and only if there is no GPU-accelerated version for the task at hand. You have to admit that embarrassingly parallel workloads are both rare and quite often better off handed to the GPU.

    Also, you're neglecting overclocking. If you take that into account the lowest-end Haswell-E only has a 20%-30% advantage. Also, I'm not sure about you but I normally use Xeons for my servers.

    Haswell-E has a point, but it's extremely niche and dare I say extremely overpriced? 8-core at $600 would be a little more palatable to me, especially with these low clocks and uninspiring single thread performance.
    Reply
  • wireframed - Friday, August 29, 2014 - link

    The 5960X is half the price of the equivalent Xeon. Sure, if your budget is unlimited, 1k or 2k per CPU doesn't matter, but how often is that realistic?

    For content creation, CPU performance is still very much relevant. GPU acceleration just isn't up to scratch in many areas. Too little RAM, not flexible enough. When you're waiting days or weeks for renderings, every bit counts.
    Reply
  • CaedenV - Friday, August 29, 2014 - link

    Improvements are relative. For gaming... not so much. Most games still only use four cores (or fewer!), and rely more on clock rate and the GPU than on specific CPU technologies and advantages, so a newer 8-core really does not bring much more to the table for most games compared to an older quad core... and those Sandy Bridge parts could OC to the moon; even my locked part hits 4.2 GHz without throwing a fuss.
    Even for things like HD video editing and basic 3D content creation, you are looking at minor improvements that are never going to be noticed by the end user. Move into 4K editing and larger 3D work... then you see substantial improvements moving to these new chips... but then again, you should probably be on a dual-Xeon setup for those kinds of high-end workloads. These chips are for gamers with too much money (a class I hope to join some day!) or professionals trying to pinch a few pennies; they simply are not practical in their benefits for either camp.
    Reply
  • ArtShapiro - Friday, August 29, 2014 - link

    Same here. I think the cost of operation is of concern in these days of escalating energy rates. I run the 2500K in a little Antec Mini-ITX case with something like a 150 or 160 W built-in power supply. It idles in the low 20s, if I recall, meaning I can leave it on all day without California needing to build more nuclear power plants. I can only cringe at talk about 1500 W power supplies.
    Reply
  • wireframed - Friday, August 29, 2014 - link

    Performance per watt is what's important. If the CPU is twice as fast and uses 60% more power, you still come out ahead. The idle draw is actually pretty good for Haswell-E; it's only when you start overclocking that it gets really crazy.

    DDR4's main selling point is reduced power draw, so that helps as well.
    Reply
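The trade-off described above can be put into numbers: energy per task is power times time, so a chip that is twice as fast while drawing 60% more power still finishes each job on less energy. A rough sketch with assumed figures:

```python
# Energy to complete a fixed task = power draw * time taken.
# All figures are assumptions for illustration only.
old_power, old_time = 100.0, 60.0   # watts, seconds per task
new_power, new_time = 160.0, 30.0   # 60% more power, twice as fast

old_energy = old_power * old_time   # 6000 J
new_energy = new_power * new_time   # 4800 J
print(f"Energy ratio (new/old): {new_energy / old_energy:.2f}")
```

In this scenario the faster chip uses 20% less energy per task, despite the higher instantaneous draw.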
  • actionjksn - Saturday, August 30, 2014 - link

    If you have a 1500 watt power supply, it doesn't mean you're actually using 1500 watts. It will only put out what the system demands at whatever workload you're running at the time. If you replaced your system with one of these big new ones, your monthly bill might go up 5 to 8 dollars per month if you are a pretty heavy user who really hammers the system frequently and hard. The only exception I can think of would be mining Bitcoin 24/7 or something like that, and even then it would be your graphics cards hitting you hard on the electric bill. It may be a little higher in California, since you guys get overcharged for pretty much everything.
    Reply
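A ballpark figure like the "5 to 8 dollars per month" above comes from straightforward arithmetic: extra kWh used times the electricity rate. A sketch where every input is an assumption for illustration:

```python
# Monthly running-cost estimate: extra kWh consumed * rate per kWh.
# All inputs are assumptions chosen for illustration only.
extra_watts = 250.0    # additional average draw of the bigger system
hours_per_day = 6.0    # heavy daily use
rate_per_kwh = 0.15    # dollars per kWh

kwh_per_month = extra_watts / 1000.0 * hours_per_day * 30.0
cost = kwh_per_month * rate_per_kwh
print(f"Extra cost: ${cost:.2f}/month")
```

With these inputs the extra cost lands in the middle of that 5-8 dollar range; a different rate or duty cycle scales the result linearly.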
  • Flashman024 - Friday, May 08, 2015 - link

    Just out of curiosity, what do you pay for electricity? Because I pay less here than I did when I lived in Iowa. We're at $0.10 to $0.16 per kWh (Tier 3, based on 1000 kWh+ usage). I heard these tired blanket statements before we moved and was pleased to find out they're mostly BS.
    Reply
  • CaedenV - Friday, August 29, 2014 - link

    Agreed, my little i7 2600 still keeps up just fine, and I am not really tempted to upgrade my system yet... maybe a new GPU, but the system itself is still just fine.

    Let's see some more focus on better single-thread performance, refine DDR4 support a bit more, and give PCIe SSDs a chance to catch on; then I will look into upgrading. Still, this is the first real step forward on the CPU side that we have seen in a good long time, and I am really excited to finally see some Intel consumer 8-core parts hit the market.
    Reply
  • twtech - Friday, August 29, 2014 - link

    The overclocking results are definitely a positive relative to the last generation, but really the pull-the-trigger point for me would have been the 5930K coming with 8 cores.

    It looks like I'll be waiting another generation as well. I'm currently running an OCed 3930K, and given the cost of this platform, the performance increase for the cost just doesn't justify the upgrade.
    Reply
