Intel Core i9-7980XE and Core i9-7960X Conclusion

In the 2000s, we had the frequency wars. Trying to pump all the MHz into a single core ended up mega-hurting one company in particular, until the push was made to multi-core and efficient systems. Now in the 2010s, we have the Core Wars. You need to have at least 16 cores to play, and be prepared to burn power like never before. With today’s launch, Intel has kicked their high-end desktop offerings up a notch, both in performance and power consumption.

Pun density aside, both Intel and AMD are pursuing a many-core strategy for the high-end desktop. Both manufacturers have the same story in their pockets: users need more multi-threaded performance, either for intense compute or for a heavy scoop of compute-based multi-tasking, including streaming, encoding, rendering, and high-intensity Minesweeper VR all at the same time. With so many cores, single-threaded performance under load can be lower than usual, so both companies focus on throughput rather than responsiveness.

The Core i9-7980XE is Intel’s new top-of-the-line HEDT processor. With 18 cores and a 165W TDP, this processor can reach 4.4 GHz at top turbo, or 3.4 GHz on all-core turbo. At $1999, it becomes arguably the most expensive consumer-focused processor on the market. It is priced at this level for two reasons: first, so that it does not cannibalize Intel’s higher-margin enterprise sales; and second, because Intel’s product stack now fills several price points from $300 all the way up to $2000. With this being Intel’s best consumer processor, the company is hoping it will still sell like hot cakes.

Our performance numbers show that Intel’s best multi-core consumer processor is deserving of that title. In most of our multi-core tests, Intel has a clear lead over AMD: a combination of more cores and higher single-threaded performance compensates for any frequency difference. For anyone with hardcore compute, Intel gets you to the finish line first in almost every scenario. AMD does win on a few benchmarks, something we also saw when Johan tested the enterprise versions of Intel's and AMD's CPUs in his review, where he cited AMD’s FP unit as the leading cause of the difference.

There are going to be three cautionary flags to this tale based on our testing today, for anyone looking at Skylake-X, and they all start with the letter P: Power, Platform, and Price.

Power: In our testing, Intel’s cores can consume between 7.5W and 20W per core, which is a large range. When all cores are at load, along with the mesh and DRAM controllers, the Core i9-7980XE draws over 190W, well above the TDP rating of 165W. This will concern users who take the TDP value at face value for power consumption, and any users thinking of overclocking might also want to invest in a custom cooling loop. The processors from AMD consume ~177W at load, which, with two fewer cores, is in the same ballpark.
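To put those figures in context, the per-core numbers can be reproduced with simple arithmetic. A minimal sketch, assuming a hypothetical ~40W for the mesh and DRAM controllers at full load (that uncore figure is an illustrative assumption, not a measurement from this review):

```python
# Rough power arithmetic for the Core i9-7980XE figures quoted above.
# ASSUMPTION: the "uncore" (mesh + DRAM controllers) draws on the
# order of 40W under all-core load -- an estimate for illustration.

TDP_W = 165.0           # Intel's rated TDP
PACKAGE_LOAD_W = 190.0  # measured full-load package draw from the review
CORES = 18
UNCORE_W = 40.0         # hypothetical uncore estimate

per_core_w = (PACKAGE_LOAD_W - UNCORE_W) / CORES
over_tdp_pct = (PACKAGE_LOAD_W - TDP_W) / TDP_W * 100

print(f"~{per_core_w:.1f} W per core at all-core load")  # ~8.3 W per core
print(f"~{over_tdp_pct:.0f}% above the rated TDP")       # ~15% above TDP
```

Under that assumption, the all-core result lands toward the low end of the 7.5W-to-20W per-core range, which is consistent with cores running at the lower all-core turbo frequency rather than peak turbo.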

Platform: X299 motherboards are designed to handle Kaby Lake-X, Skylake-X LCC, and Skylake-X HCC processors. Almost all the motherboards should be geared towards the high-end processors, which makes platform compatibility a bit of a non-issue; however, testing by others recommends some form of active cooling on the power delivery. When investing $1999 in a processor, especially if overclocking is on the table, a good motherboard is needed, not just a bargain-basement model. Some users will point to the competition, where AMD's processors offer enough lanes for three x16 GPUs and three PCIe 3.0 x4 storage devices from the processor at the same time, rather than reduced bandwidth for 3-way GPU setups and storage routed through the chipset.

Price: $1999 is a new record for consumer processors. Intel is charging this much because it can: this processor does take the absolute workstation performance crown. For high performance, that is usually enough; the sort of users interested in this level of performance are not overly concerned with performance per dollar, especially if a software license is nearer $10k. However, for everyone else, unless you can take advantage of TSX or AVX-512, the price is exorbitant, and all arrows point towards AMD instead. Half the price is hard to ignore.

Users looking at the new processors for workstation use should consider the three Ps. It’s not an easy task, and the answer will depend heavily on the user's specific workflow. The recommendations ultimately come down to three suggestions:

  • If a user needs the absolute best workstation processor and does not need ECC, then get Skylake-X.
  • If a user needs ECC or 512GB of DRAM, Xeon-W looks like a better bet.
  • If a user has a strict budget or wants another GPU for compute workloads, look at Threadripper.

For those weighing the Core i9-7960X against the Core i9-7980XE, part of me wants to say ‘if you’re going for cores and prepared to spend this much, then go all the way’. If the main purpose is throughput, for the benchmarks that matter, the 7980XE is likely to provide a benefit. For multi-taskers, the benefits are less clear, and it would be interesting to get the Core i9-7940X and Core i9-7920X in for testing.


152 Comments


  • ddriver - Monday, September 25, 2017 - link

    You are living in a world of mainstream TV functional BS.

    Quantum computing will never replace computers as we know and use them. QC is very good at a very few tasks, which classical computers are notoriously bad at. The same goes vice versa: QC sucks at regular computing tasks.

    Which is OK, because we already have enough single thread performance. And all the truly demanding tasks that require more performance due to their time-consuming nature scale very well, often perfectly, with additional cores, or even nodes in a cluster.

    There might be some wiggle room in terms of process and materials, but I am not overly optimistic, seeing how we are already hitting the limits of silicon and there is no actual progress on superior alternatives. Are they just gonna wait until they hit the wall to make something happen?

    At any rate, in 30 years, we'd be far more concerned with surviving war, drought and starvation than with computing. A problem that "solves itself" ;)
  • SharpEars - Monday, September 25, 2017 - link

    You are absolutely correct regarding quantum computing and it is photonic computing that we should be looking towards.
  • Notmyusualid - Monday, September 25, 2017 - link

    @ SharpEars

    Yes, as alluded to by IEEE. But I've not looked at it in a couple of years or so, and I think they were still struggling with an optical DRAM of sorts.
  • Gothmoth - Monday, September 25, 2017 - link

    And what have they done for the past 6 years?

    I am glad that I get more cores instead of 5-10% performance gains per generation.
  • Krysto - Monday, September 25, 2017 - link

    They would if they could. Improvements in IPC have been negligible since Ivy Bridge.
  • kuruk - Monday, September 25, 2017 - link

    Can you add Monero(Cryptonight) performance? Since Cryptonight requires at least 2MB of L3 cache per core for best performance, it would be nice to see how these compare to Threadripper.
  • evilpaul666 - Monday, September 25, 2017 - link

    I'd really like it if Enthusiast ECC RAM was a thing.

    I used to always run ECC on Athlons back in the Pentium III/4 days. Now with 32-128x more memory that's running 30x faster, it doesn't seem like it would be a bad thing to have...
  • someonesomewherelse - Saturday, October 14, 2017 - link

    It is. Buy AMD.
  • IGTrading - Monday, September 25, 2017 - link

    I think we're being too kind to Intel.

    Despite the article clearly mentioning it in a proper and professional way, the calm tone of the conclusion seems to legitimize, and make acceptable, that Intel basically deceives its customers and ships a CPU that consumes almost 16% more power than its stated TDP.

    THIS IS UNACCEPTABLE and UNPROFESSIONAL from Intel.

    I'm not "shouting" this :) , but I'm trying to underline this fact by putting it in caps.

    People could burn their systems if they design workstations and use cooling solutions for 165W TDP.

    If AMD had done anything remotely similar, we would have seen titles like "AMD's CPU can fry eggs / system killer / motherboard breaker" and so on ...

    On the other hand, when Intel does this, it is silently, calmly and professionally deemed acceptable.

    It is my view that such a thing is not acceptable and these products should be banned from the market UNTIL Intel corrects its documentation or the power consumption.

    The i9-7960X fits perfectly within its TDP of 165W, so how come the i9-7980XE is allowed to run wild and consume 16% more ?!

    This is similar to the way people accepted every crappy design and driver failure from nVIDIA, even DEAD GPUs, while complaining about AMD's "bad drivers" that never destroyed a video card like nVIDIA did. See link : https://www.youtube.com/watch?v=dE-YM_3YBm0

    This is not cutting Intel "some slack"; this is accepting shit, lies, and mockery, and paying 2000 USD for it.

    For $2000 I expect the CPU to run like a Bentley for life, not like a modded Mustang that will blow up if you expect it to work as reliably as a stock model.
  • whatevs - Monday, September 25, 2017 - link

    What a load of ignorance. Intel's TDP is *average* power at *base* clocks; the chip uses more power at all-core turbo clocks, as seen here. Disable turbo if that's too much power for you.
