Power & Efficiency - 10nm Gains

Power efficiency in the server world translates directly into performance: the more efficient a CPU is, the more compute can be extracted from a given TDP. Ice Lake is extremely interesting in this regard, as it's Intel's first 10nm server design and in theory should represent a major leap forward for the new 3rd Gen Xeon line-up.

The comparison here is a bit rough this time around, as we're dealing with somewhat of an apples-and-oranges situation between the generational SKUs, particularly the 40-core 270W Xeon 8380 and the 28-core 205W Xeon 8280. Fortunately, we were also able to source a Xeon 6330 from a third vendor, a 28-core 205W Ice Lake SP part that should make generational comparisons more interesting and fairer, although still not quite optimal as we'll see.

Package Idle Power

Starting off with idle package power: this was something I had made note of in our coverage of AMD's Milan CPUs a few weeks ago, where the new AMD chip had regressed in terms of apparent IOD power, eating into the power envelope of the socket and resulting in some compute performance regressions.

It's to be noted that we're not exactly comparing apples to apples here, as AMD's designs are full SoCs, while the Intel CPUs require an external chipset (Lewisburg Refresh) which by itself uses about 18W, essentially moving that power requirement off-socket. Intel offers multiple versions of the chipset, depending on compression/encryption offload requirements, with TDPs of up to 28.6 W:

Ice Lake Xeon Chipsets
AnandTech   SKU      Compression   Encryption   RSA        TDP
C621A       LBG-1G   None          None         None       18.0 W
C627A       LBG-T    65 Gbps       100 Gbps     100K OPS   28.6 W
C629A       LBG-C    80 Gbps       100 Gbps     None       28.6 W

Intel's new Ice Lake SP system, similarly to the predecessor Cascade Lake SP system, appears to be very efficient at full system idle, reaching only around 27W per socket. It's to be noted that these figures are only valid when both sockets are idle: if one socket is under load, the second socket's power consumption will grow in tandem even though it's idle. We've seen idle figures of up to 70W when the other socket is under full load, and even 90W when one socket is boosting frequencies very high. I suspect this is due to voltages and the shared power delivery of the 2-socket system. Generally it's not of concern in the real world, but it's an interesting titbit to make note of.

The more interesting efficiency data is the actual power and energy consumption under load, and the corresponding performance between the generations. Again, we're in a bit of a difficult situation here, as the comparison isn't as straightforward as with the AMD Milan figures from a few weeks ago, where we were comparing equal core-count and equal-TDP SKUs.

The new Xeon 8380 flagship Ice Lake SP CPU comes in at a default TDP of 270W, which is 65W higher than its direct predecessor, the 8280, while also featuring many more cores. Alongside the 270W default setting, I also measured this part with a 205W power limit to add an extra data point.
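The 205W figure was applied as a RAPL package power limit. For readers wanting to reproduce this kind of power capping, below is a minimal sketch of how such a limit can be set through the Linux powercap (intel_rapl) sysfs interface; the domain path and constraint index are assumptions that vary by platform, and this is not necessarily the exact method used for the review.

    # Minimal sketch (assumption: Linux powercap / intel_rapl, run as root).
    # Sets the long-term package power limit of socket 0 to 205 W.
    from pathlib import Path

    RAPL_PKG0 = Path("/sys/class/powercap/intel-rapl:0")  # package-0 domain (assumed index)
    LIMIT_UW = 205_000_000                                # 205 W expressed in microwatts

    def set_package_limit(domain: Path, limit_uw: int) -> None:
        # constraint_0 is typically the long-term (PL1) package limit
        (domain / "constraint_0_power_limit_uw").write_text(str(limit_uw))

    def read_package_limit(domain: Path) -> int:
        return int((domain / "constraint_0_power_limit_uw").read_text())

    set_package_limit(RAPL_PKG0, LIMIT_UW)
    print(f"Package power limit now {read_package_limit(RAPL_PKG0) / 1e6:.0f} W")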

The Xeon 6330 seems a direct match to the Xeon 8280 (which in turn is identical to a Xeon 6258R); however, this ICX part comes in at only $1894 versus the $3950 price point of the 6258R, a price that might be indicative of the silicon bin quality of this SKU, a point I'll return to in just a bit.

Intel doesn't make core-only power metrics available on its recent server chips, so we fall back to total package energy measurements. I'm including the total socket energy consumption for the duration of all workloads, as well as performance and energy figures on a per-thread basis, as we're dealing with different core-count designs here.
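As an illustration of the methodology, package energy on Linux can be read from the RAPL counters exposed through powercap; the following is a minimal sketch assuming the intel_rapl driver is loaded. The domain path is an assumption and the counter's wrap-around range is platform-dependent.

    # Minimal sketch (assumption: Linux powercap / intel_rapl exposed under sysfs).
    # Reads the package energy counter before and after a workload and reports
    # total energy plus average package power.
    import subprocess
    import time
    from pathlib import Path

    RAPL_PKG0 = Path("/sys/class/powercap/intel-rapl:0")  # package-0 domain (assumed index)

    def read_energy_uj() -> int:
        return int((RAPL_PKG0 / "energy_uj").read_text())

    def measure(cmd: list[str]) -> tuple[float, float]:
        max_range = int((RAPL_PKG0 / "max_energy_range_uj").read_text())
        e0, t0 = read_energy_uj(), time.time()
        subprocess.run(cmd, check=True)
        e1, t1 = read_energy_uj(), time.time()
        joules = ((e1 - e0) % max_range) / 1e6   # modulo handles counter wrap-around
        return joules, joules / (t1 - t0)        # total energy (J), average power (W)

    energy_j, avg_w = measure(["sleep", "10"])   # placeholder workload
    print(f"{energy_j / 1000:.2f} kJ consumed at {avg_w:.1f} W average")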

Ice Lake-SP vs Cascade Lake-SP
Power & Energy Efficiency Estimates

SKU                 Xeon 8380         Xeon 8380         Xeon 6330         Xeon 8280
                    (Ice Lake-SP)     (Ice Lake-SP)     (Ice Lake-SP)     (Cascade Lake-SP)
TDP Setting         270W              205W (RAPL)       205W              205W
Threads             80                80                56                56
                    Perf    PKG (W)   Perf    PKG (W)   Perf    PKG (W)   Perf    PKG (W)
500.perlbench_r     190     268       165     204       123     204       119     204
502.gcc_r           167     266       152     204       121     204       105     203
505.mcf_r           117     263       112     204       92      205       71      201
520.omnetpp_r       99      264       94      204       71      204       69      204
523.xalancbmk_r     136     256       124     204       94      203       91      196
525.x264_r          362     268       309     204       226     204       242     204
531.deepsjeng_r     163     268       140     204       101     204       107     205
541.leela_r         166     268       146     204       101     205       107     204
548.exchange2_r     290     269       248     204       178     205       170     205
557.xz_r            120     264       105     204       79      204       86      204
SPECint2017 est.    167.6   265       149.1   204       111.5   204       108.4   203
kJ Total            1937              1662              1552              1612
Score / W           0.632             0.731             0.546             0.534
Score per Thread    2.09              1.86              1.99              1.94
kJ per Thread       24.21             20.78             27.72             28.78

503.bwaves_r        358     247       357     204       324     205       249     188
507.cactuBSSN_r     182     268       163     204       127     204       116     204
508.namd_r          194     268       164     204       122     204       127     205
510.parest_r        102     267       99      204       85      204       63      191
511.povray_r        242     269       203     203       157     204       152     205
519.lbm_r           38      236       38      204       34      199       26      173
526.blender_r       234     268       201     204       153     204       143     204
527.cam4_r          244     268       220     204       173     204       161     204
538.imagick_r       284     266       249     204       175     204       193     205
544.nab_r           177     269       151     204       109     204       109     205
549.fotonik3d_r     110     244       110     204       99      201       78      154
554.roms_r          78      261       78      204       68      205       50      173
SPECfp2017 est.     160.7   255       147.4   204       118.7   205       104.8   184
kJ Total            3877              3258              2714              2958
Score / W           0.631             0.722             0.546             0.570
Score per Thread    2.01              1.84              2.12              1.87
kJ per Thread       48.47             40.73             48.46             52.82
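To make the derived rows in the table explicit, the efficiency figures are simple ratios of the measured values; a quick sketch using the integer-suite results of the 270W Xeon 8380 as the worked example:

    # How the derived rows in the table are computed, using the Xeon 8380 (270W)
    # SPECint2017 figures as the worked example.
    spec_score = 167.6   # SPECint2017 estimated score
    avg_pkg_w  = 265     # average package power across the suite (W)
    total_kj   = 1937    # total package energy across the suite (kJ)
    threads    = 80      # 40 cores with SMT2

    score_per_watt   = spec_score / avg_pkg_w   # ≈ 0.632
    score_per_thread = spec_score / threads     # ≈ 2.09
    kj_per_thread    = total_kj / threads       # ≈ 24.21

    print(score_per_watt, score_per_thread, kj_per_thread)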

Starting off with the new flagship CPU, the Xeon 8380 indeed has little trouble significantly outperforming the Xeon 8280, beating it by around 54% in both the integer and floating-point SPEC suites. This comes as no surprise, as the new SKU has both a higher TDP and many more cores.

Reducing the Xeon 8380 to 205W gives us at least a rough ISO-power comparison point. Here, the Xeon 8380 still outperforms the 8280 by 37-41%. The actual measured perf/W comes in at +37% for the integer suite and +27% for the FP suite.

As per-thread performance is roughly similar between the two parts here, we can also make an energy-per-workload comparison, with the Ice Lake SP SKU using roughly 23-28% less energy to complete the same tasks.
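For reference, here is how those generational deltas fall out of the 205W columns of the table above; a short sketch, nothing more than the ratios of the per-watt and per-thread figures:

    # Generational deltas quoted in the text, taken from the 205W columns above
    # (Xeon 8380 @ 205W RAPL limit vs Xeon 8280 @ 205W).
    icx_spw_int, icx_spw_fp = 0.731, 0.722    # Ice Lake score per watt
    clx_spw_int, clx_spw_fp = 0.534, 0.570    # Cascade Lake score per watt

    perfw_gain_int = icx_spw_int / clx_spw_int - 1   # ≈ +37%
    perfw_gain_fp  = icx_spw_fp  / clx_spw_fp  - 1   # ≈ +27%

    icx_kjt_int, icx_kjt_fp = 20.78, 40.73    # Ice Lake kJ per thread
    clx_kjt_int, clx_kjt_fp = 28.78, 52.82    # Cascade Lake kJ per thread

    energy_saved_int = 1 - icx_kjt_int / clx_kjt_int   # ≈ 28% less energy
    energy_saved_fp  = 1 - icx_kjt_fp  / clx_kjt_fp    # ≈ 23% less energy

    print(perfw_gain_int, perfw_gain_fp, energy_saved_int, energy_saved_fp)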

Looking at the Xeon 6330 at its default settings, the figures are quite a bit less impressive. At +2.8% and +13.2%, the new design posts rather lack-lustre performance boosts, and its power efficiency and energy consumption figures are also extremely close to those of the 8280.

It's to be noted that Intel also has the Xeon 6348 in its line-up, which is likewise a 28-core part, but with a 235W TDP. The results of the 6330 really aren't too fantastic, even accounting for it being a weakly binned SKU that comes in at a much cheaper price than its predecessor. This points to a possibly wide range in silicon quality between the new Ice Lake SKUs, and indicates that a badly binned Ice Lake SKU isn't notably better than a well binned Cascade Lake part.

Comments

  • TomWomack - Wednesday, April 7, 2021 - link

    Is it known whether there will be an IceLake-X this time round? The list of single-Xeon motherboard launches suggests possibly not; it would obviously be appealing to have a 24-core HEDT without paying the Xeon premium.
  • EthiaW - Wednesday, April 7, 2021 - link

    Boeings and Airbuses are never actually sold at their nominal prices, they cost far less, a non-disclosed number, for big buyers after gruesome haggling, sometimes less than half the “catalogue” price.
    I think this is exactly what's intel doing now: set the catalogue price high to avoid losing face, and give huge discount to avoid losing market share.
  • duploxxx - Wednesday, April 7, 2021 - link

    well easy conclusion.

    EPYC 75F3 is the clear winner SKU and the must have for most of the workloads.
    This is based on price - performance - cores and its related 3rd party sw licensing...

    I wonder when Intel will be able to convince VMware to move from a 32core licensing schema to a 40core :)
    They used to get all the dev favor when PAT was still in the house, I had several senior engineers in escalation calls stating that the hypervisor was optimised for Intel ...guess what even under optimised looking for a VM farm in 2020-2021-....you are way better off with an AMD build.
  • WaltC - Wednesday, April 7, 2021 - link

    If you can't beat the competition, then what? Ian seems to be impressed that Intel was finally able to launch a Xeon that's a little faster than its previous Xeon, but not fast enough to justify the price tag in relation to what AMD has been offering for a while. So here we are congratulating Intel on burning through wads more cash to produce yet-another-non-competitive result. It really seems as if Intel *requires* AMD to set its goals and to tell it where it needs to go--and that is sad. It all began with x86-64 and SDRAM from AMD beating out Itanium and RDRAM years ago. And when you look at what Intel has done since it's just not all that impressive. Well, at least we can dispense with the notion that "Intel's 10nm is TSMC's 7nm" as that clearly is not the case.
  • JayNor - Wednesday, April 7, 2021 - link

    What about the networking applications of this new chip? Dan Rodriguez's presentation showed gains of 1.4x to 1.8x for various networking benchmarks. Intel's entry into 5G infrastructure, NFV, vRAN, ORAN, hybrid cloud is growing faster than they originally predicted. They are able to bundle Optane, SmartNICs, FPGAs, eASIC chips, XeonD, P5900 family Atom chips... I don't believe they have a competitor that can provide that level of solution.
  • Bagheera - Thursday, April 8, 2021 - link

    Patr!ck Patr!ck Partr!ck?
  • evilpaul666 - Saturday, April 10, 2021 - link

    It only works in front of a mirror. Donning a hoodie helps, too.
  • Oxford Guy - Wednesday, April 7, 2021 - link

    There is some faulty logic at work in many of the comments, with claims like it's cheating to use a more optimized compiler.

    It's not cheating unless:

    • the compiler produces code that's so much more unstable/buggy that it's quite a bit more untrustworthy than the less-optimized compiler

    • you don't make it clear to readers that the compiler may make the architecture look more performant simply because the other architectures may not have had compiler optimizations on the same level

    • you use the same compiler for different architectures when using a different compiler for one or more other architectures will produce more optimized code for those architectures as well

    • the compiler sabotages the competition, via things like 'genuine Intel'

    Fact is that if a CPU can accomplish a certain amount of work in a certain amount of time, using a certain amount of watts under a certain level of cooling — that is the part's actual performance capability.

    If that means writing machine code directly (not even assembly) to get to that performance level, so what? That's an entirely different matter, which is how practical/economical/profitable/effortful it is to get enough code to measure all of the different aspects of the part's maximum performance capability. The only time one can really cite that as a deal-breaker is if one has hard data to demonstrate that by the time the hand-tuned/optimized code is written changes to the architecture (and/or support chips/hardware) will obsolete the advantage — making the effort utterly fruitless, beyond intellectual curiosity concerning the part's ability. For instance, if one knows that Intel, for instance, is going to integrate new instructions (very soon) that will make various types of hand-tuned assembly obsolete in short order, it can be argued that it's not worth the effort to write the code. People made this argument with some of AMD's Bulldozer/Piledriver instructions, on the basis that enough industry adoption wasn't going to happen. But, frankly... if you're going to make claims about the part's performance, you really should do what you can to find out what it is.
  • Oxford Guy - Wednesday, April 7, 2021 - link

    One can, though, of course... include a disclaimer that 'it seems clear enough that, regardless of how much hand-tuned code is done, the CPU isn't going to deliver enough to beat the competition, if the competition's code is similarly hand-tuned' — if that's the case. Even if a certain task is tuned to run twice as fast, is it going to be twice as fast as tuned code for the competition's stuff? Is its performance per watt deficit going to be erased? Will its pricing no longer be a drag on its perceived competitiveness?

    For example, one could have wrung every last drop of performance out of Bulldozer but it wasn't going to beat Sandy Bridge E — a chip with the same number of transistors. Piledriver could beat at least the desktop version of Sandy in certain workloads when clocked well outside of the optimal (for the node's performance per watt) range but that's where it's very helpful to have tests at the same clock. It was discovered, for instance, that the Fury X and Vega had basically identical performance at the same clock. Since desktop Sandy could easily clock at the same 4.0 GHz Piledriver initially shipped with it could be tested at that rate, too.

    Ideally, CPU makers would release benchmarks that demonstrate every facet of their chip's maximum performance. The concern about those being best-case and synthetic is less of a problem in that scenario because all aspects of the chip's performance would be tested and published. That makes cherry-picking impossible.
  • mode_13h - Thursday, April 8, 2021 - link

    The faulty logic I see is that you seem to believe it's the review's job to showcase the product in the best possible light. No, that's Intel's job, and you can find plenty of that material at intel.com, if that's what you want.

    Articles like this should focus on representing the performance of the CPUs as the bulk of readers are likely to experience it. So, even if using some vendor-supplied compiler with trick settings might not fit your definition of "cheating", that doesn't mean it's a service to the readers.

    I think it could be appropriate to do that sort of thing, in articles that specifically analyze some narrow aspect of a CPU, for instance to determine the hardware's true capabilities or if it was just over-hyped. But, not in these sort of overall reviews.
