Intel Core i9-7980XE and Core i9-7960X Conclusion

In the 2000s, we had the frequency wars. Trying to pump all the MHz into a single core ended up mega-hurting one company in particular, until the push was made to multi-core and efficient systems. Now in the 2010s, we have the Core Wars. You need to have at least 16 cores to play, and be prepared to burn power like never before. With today’s launch, Intel has kicked their high-end desktop offerings up a notch, both in performance and power consumption.

Pun density aside, both Intel and AMD are pursuing a many-core strategy for the high-end desktop. Both manufacturers have the same story in their pockets: users need more multi-threaded performance, either for intense compute or for a heavy scoop of compute-based multi-tasking, including streaming, encoding, rendering, and high-intensity minesweeper VR all at the same time. With so many cores, single-threaded performance under load can be lower than usual, so both companies focus on throughput rather than responsiveness.

The Core i9-7980XE is Intel’s new top-of-the-line HEDT processor. Coming in at 18 cores and drawing 165W, this processor can reach 4.4 GHz at top turbo or 3.4 GHz on all-core turbo. At $1999, it becomes the most expensive consumer-focused processor on the market. It is priced at this level for two reasons: first, so that it doesn’t cannibalize Intel’s higher-margin enterprise sales; second, because Intel’s product stack now fills several price points from $300 all the way to $2000, and with this being Intel’s best consumer processor, the company is hoping it will still sell like hot cakes.

Our performance numbers show that Intel’s best multi-core consumer processor is deserving of that title. In most of our multi-core tests, Intel has a clear lead over AMD: a combination of more cores and higher single-threaded performance compensates for any frequency difference. For anyone with hardcore compute workloads, Intel gets you to the finish line first in almost every scenario. AMD does win on a few benchmarks, something we also saw when Johan tested the enterprise versions of Intel's and AMD's CPUs in his review, where he cited AMD’s FP unit as the leading cause of AMD's advantage.

There are going to be three cautionary flags to this tale based on our testing today, for anyone looking at Skylake-X, and they all start with the letter P: Power, Platform, and Price.

Power: In our testing, Intel’s cores can consume between 7.5W and 20W per core, which is a large range. When all cores are at load, as well as the mesh and DRAM controllers, the Core i9-7980XE draws over 190W, well above its TDP rating of 165W. This will cause concern for users who take the TDP value as a hard ceiling for power consumption, and any users thinking of overclocking might also want to invest in a custom cooling loop. AMD’s processors consume ~177W at load, which, with two fewer cores, is in the same ballpark.
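The power figures above are easy to sanity-check. A quick sketch using only the numbers quoted in this paragraph (the per-core range, the 190W measured draw, and the 165W TDP):

```python
# Rough power arithmetic using the figures quoted above:
# 7.5-20 W per core, 190 W measured package draw, 165 W TDP.
CORES = 18
TDP_W = 165
MEASURED_W = 190
PER_CORE_MIN_W, PER_CORE_MAX_W = 7.5, 20.0

# Even at the low end of the per-core range, 18 cores alone come
# close to the TDP before the mesh and DRAM controllers are counted.
cores_only_min = CORES * PER_CORE_MIN_W
overage_pct = (MEASURED_W - TDP_W) / TDP_W * 100

print(f"cores alone (min): {cores_only_min} W")            # 135.0 W
print(f"measured draw exceeds TDP by {overage_pct:.1f}%")  # 15.2%
```

In other words, the chip exceeds its own TDP rating by roughly 15% under a full all-core load, which is why the cooling advice above is not optional.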

Platform: X299 motherboards are designed to handle Kaby Lake-X, Skylake-X LCC, and Skylake-X HCC processors. Almost all of the motherboards should be geared towards the high-end processors, which makes platform compatibility a bit of a non-issue; however, testing by others suggests some form of active cooling on the power delivery is worth having. When investing $1999 in a processor, especially if a user is considering overclocking, a good motherboard is needed, not just the bargain-basement model. Some users will point to the competition, where AMD's processors offer enough lanes for three x16 GPUs and three PCIe 3.0 x4 storage devices from the processor at the same time, rather than reduced bandwidth for 3-way and requiring storage to go through the chipset.

Price: $1999 is a new record for consumer processors. Intel is charging this much because it can – this processor does take the absolute workstation performance crown. For high performance, that is usually enough – the sort of users interested in this level of performance are not overly interested in performance per dollar, especially if a software license is nearer $10k. However, for everyone else, unless you can take advantage of TSX or AVX-512, the price is exorbitant, and all arrows point towards AMD instead. Half the price is hard to ignore.
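The "half the price" argument can be made concrete with a perf-per-dollar sketch. The prices below come from this review; the throughput "scores" are made-up placeholders purely for illustration, assuming the cheaper part trails by around 10% in raw throughput:

```python
# Hypothetical perf-per-dollar sketch. Prices are real list prices;
# the normalized scores are illustrative assumptions, not benchmark data.
parts = {
    "Core i9-7980XE":     (1999, 100.0),  # (price USD, assumed score)
    "Threadripper 1950X": (999,   90.0),  # assumed to trail by ~10%
}

perf_per_kusd = {name: score / price * 1000
                 for name, (price, score) in parts.items()}

for name, value in perf_per_kusd.items():
    print(f"{name}: {value:.1f} points per $1000")
```

Even spotting the flagship a 10% throughput lead, the half-price part ends up with nearly double the perf-per-dollar figure, which is exactly the dynamic the graphs on the analysis page show outside the extreme ends of the stack.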

Users looking at the new processors for workstation use should consider the three Ps. It’s not an easy task, and it will depend highly on the user's specific workflow. The recommendations ultimately come down to three suggestions:

  • If a user needs the absolute best workstation processor without ECC, then get Skylake-X.
  • If a user needs ECC or 512GB of DRAM, Xeon-W looks like a better bet.
  • If a user has a strict budget or wants another GPU for compute workloads, look at Threadripper.
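The three suggestions above can be sketched as a toy decision helper. The thresholds here are illustrative assumptions (128GB as the non-registered DRAM ceiling, $1999 as the budget cut-off), not figures from the review:

```python
def recommend(needs_ecc: bool, dram_gb: int, budget_usd: int,
              extra_gpu: bool = False) -> str:
    """Toy decision helper mirroring the three suggestions above.

    Thresholds are illustrative assumptions, not from the review.
    """
    # ECC or very large memory pushes the buyer to the workstation line.
    if needs_ecc or dram_gb > 128:
        return "Xeon-W"
    # A strict budget, or wanting an extra GPU for compute, favors AMD.
    if budget_usd < 1999 or extra_gpu:
        return "Threadripper"
    # Otherwise: absolute performance, no ECC.
    return "Skylake-X"

print(recommend(needs_ecc=False, dram_gb=64, budget_usd=2500))
```

The point of writing it out this way is that the decision really is workflow-driven: two of the three branches have nothing to do with raw performance.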

For those weighing the Core i9-7960X against the Core i9-7980XE, part of me wants to say ‘if you’re going for cores and prepared to spend this much, then go all the way’. If the main purpose is throughput, for the benchmarks that matter, the 7980XE is likely to provide a benefit. For multi-taskers, the benefits are less clear, and it would be interesting to get the Core i9-7940X and Core i9-7920X in for testing.


152 Comments


  • mapesdhs - Monday, September 25, 2017 - link

    Ian, thanks for the great review! Very much appreciate the initial focus on productivity tasks, encoding, rendering, etc., instead of games. One thing though, something that's almost always missing from reviews like this (ditto here), how do these CPUs behave for platform stability with max RAM, especially when oc'd?

    When I started building oc'd X79 systems for prosumers on a budget, they often wanted the max 64GB. This turned out to be more complicated than I'd expected, as reviews and certainly most oc forum "clubs" achieved their wonderful results with only modest amounts of RAM, in the case of X79 typically 16GB. Mbd vendors told me published expectations were never with max RAM in mind, and it was "normal" for a mbd to launch without stable BIOS support for a max RAM config at all (blimey).

    With 64GB installed (I used two GSkill TridentX/2400 4x8GB kits), it was much harder to achieve what was normally considered a typical oc for a 3930K (mbd was the ASUS P9X79 WS, basically an R4E but with PLEX chips and some pro features), especially if one wanted the RAM running at 2133 or 2400. Talking to ASUS, they were very helpful and advised on some BIOS tweaks not mentioned in their usual oc guides to specifically help in cases where all RAM slots were occupied and the density was high, especially a max RAM config. Eventually I was able to get 4.8GHz with 64GB @ 2133.

    However, with the help of an AE expert (this relates to the lack of ECC I reckon), I was also able to determine that although the system could pass every benchmark I could throw at it (all of toms' CPU tests for that era, all 3DMark, CB, etc.), a large AE render (gobbles 40GB RAM) would result in pixel artefacts in the final render which someone like myself (not an AE user) would never notice, but the AE guy spotted them instantly. This was very interesting to me and not something I've ever seen mentioned in any article, ie. an oc'd consumer PC can be "stable" (benchmarks, Prime95 and all the rest of it), but not correct, ie. the memory is sending back incorrect data, but not in a manner that causes a crash. Dropping the clock to 4.7 resolved the issue. Tests like P95 and 3DMark only test parts of a system; a large AE render hammered the whole lot (storage, CPU, RAM and three GTX 580s).

    Thus, could you or will you be able at some point to test how these CPUs/mbds behave with the max 128GB fitted? I suspect you'd find it a very different experience compared to just having 32GB installed, especially under oc'd conditions. It stresses the IMCs so much more.

    I note the Gigabyte specs page says the mbd supports up to 512GB with Registered DIMMs; any chance a memory corp could help you test that? Mind you, I suspect that without ECC, the kind of user who would want that much RAM would probably not be interested in such a system anyway (XEON or EPYC much more sensible).

    Ian.
  • peevee - Monday, September 25, 2017 - link

    "256 KB per core to 1 MB per core. To compensate for the increase in die area, Intel reduced the size of the size of the L3 from 2.5 MB per core to 1.375 MB per core, keeping the overall L2+L3 constant"

    You might want to check your calculator.
  • tygrus - Monday, September 25, 2017 - link

    Maybe Intel saw the AMD TR numbers and had to add 10-15% to their expected freqs. Sure, some of the power that goes to the CPU ends up in RAM et al., but these are expensive room heaters. Intel marketing bunnies thought 165W looked better than 180W to fool the customers.
  • eddieobscurant - Monday, September 25, 2017 - link

    Wow! Another Intel pro review. I was expecting this, but having graphs displaying Intel's perf/$ advantage, just wow, you've really outdone yourselves this time.

    Of course I wanted to see how long you're gonna keep delaying the gaming benchmarks of Intel's Core i9 due to the mesh arrangement's horrid performance. I guess you're expecting game developers to fix what can be fixed. It's been several months already, but with Ryzen you were displaying its few issues since day 1.

    You tested AMD with 2400 MHz RAM, when you know that performance is affected by anything below 3200 MHz.

    Several different Intel CPUs come and go in your graphs, only to show that a different Intel CPU is better whenever the Core i9 lacks in performance and an AMD CPU is ahead.

    Didn't even mention the negligible performance difference between the 7960X and 7980XE. Just take a look at the Phoronix review.

    Can this site even get any lower? Anands name is the only thing keeping it afloat.
  • mkaibear - Tuesday, September 26, 2017 - link

    Erm, there are five graphs on the performance-per-dollar page, and three of them show AMD with a clear perf/$ advantage in everything except the very top end and the very bottom end (and one of the other two is pretty much a tie).

    ...how can you possibly call that a pro-Intel review?
  • wolfemane - Tuesday, September 26, 2017 - link

    And why the heck would you want game reviews on these CPUs anyway? By now we KNOW what the results are gonna be, and they won't be astonishing. More than likely they will be under a 7700K. Game benchmarks are utterly worthless for these CPUs, and any kind of surprise by the reader at their lack of overall performance in games is the reader's fault for not paying attention to previous reviews.
  • Notmyusualid - Tuesday, September 26, 2017 - link

    Sorry to distract gents (and ladies?), and even though I am not a fan of liquid nitrogen, here:

    http://www.pcgamer.com/overclocked-core-i9-7980xe-...
  • gagegfg - Tuesday, September 26, 2017 - link

    EPYC 7551P vs Core i9-7980XE

    That is the true comparison, or not?
    $2000 vs $2000
  • IGTrading - Tuesday, September 26, 2017 - link

    That's a perfectly valid comparison, except that Intel's X299 platform will look completely handicapped next to an AMD EPYC-based solution, and it will have just half the computational power.
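As an aside on the cache figures peevee quoted earlier in the thread: the arithmetic is straightforward to verify, and the quoted per-core L2+L3 total is indeed not constant across the two generations. A quick check:

```python
# Per-core cache sizes as quoted in the thread (MB per core):
# Broadwell-E era: 256 KB L2 + 2.5 MB L3; Skylake-X: 1 MB L2 + 1.375 MB L3.
old_l2, old_l3 = 0.25, 2.5
new_l2, new_l3 = 1.0, 1.375

print(old_l2 + old_l3)  # 2.75 MB per core
print(new_l2 + new_l3)  # 2.375 MB per core -- not constant
```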
