Energy Consumption

A large part of the server market is very sensitive to performance-per-watt; that includes the cloud vendors. For a smaller part of the market, top performance matters more than the performance/watt ratio. For financial trading, big data analytics, database, and some simulation servers, performance is the top priority. Energy consumption should not be outrageous, but it is not the primary concern.

We measured energy consumption over a one-minute period in several situations. The first is the point where the tested server performs best in MySQL: the highest throughput just before response time rises significantly. Next we look at the point of absolute highest throughput, regardless of response time; this is the situation where the CPU is fully loaded. And lastly we compare with a situation where the floating point units are working hard (C-ray).

SKU               TDP (spec)  Idle (W)  MySQL best throughput       MySQL max        Peak vs    Transactions  C-ray (W)
                                        at lowest resp. time (W)    throughput (W)   idle (W)   per watt
Xeon D-1557       45 W        54        99                          100              46         73            99
Xeon D-1581       65 W        59        123                         125              66         97            124
Xeon E5-2640 v4   90 W        76        135                         143              67         71            138
ThunderX          120 W       141       204                         223              82         46            190
Xeon E5-2690 v3   135 W       84        249                         254              170        47            241
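The "transactions per watt" column is derived from the measured MySQL throughput and the average power draw at the wall during the measurement window. A minimal sketch of that calculation, using made-up sample readings rather than our actual logs:

```python
# Hypothetical sketch: derive average wall power and transactions-per-watt
# from per-second power-meter readings and the measured throughput.
# All numbers below are illustrative, not the data from the table above.

def transactions_per_watt(watt_samples, transactions_per_second):
    """Average the wall-power samples, then divide throughput by that average."""
    avg_power = sum(watt_samples) / len(watt_samples)
    return transactions_per_second / avg_power

# 60 one-second samples from a power meter (fabricated, roughly flat load)
samples = [148.0] * 30 + [152.0] * 30
print(transactions_per_watt(samples, 5000))  # ~33 transactions per watt
```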

Intel allowed the Xeon "Haswell" E5 v3 to consume quite a bit of power when turbo boost was on. There is a 170W difference between idle and max throughput, and if you assume that the CPU consumes 15W at idle, you arrive at roughly 185W for the CPU under load. Some of that power has to be attributed to PSU losses, memory activity (not much), and higher fan speeds. Still, we think Intel allowed the Xeon E5 "Haswell" to consume more than its specified TDP. We have noticed the same behavior on the Xeon E5-2699 v3 and E5-2667 v3: Haswell-EP consumes little at low load, but is relatively power hungry at peak load.

The 90W TDP Xeon E5-2640 v4 consumes 67W more at peak than at idle. Even if you add 15W to that number, you get only 82W. Considering that the 67W is measured at the wall, it is clear that Intel has been quite conservative with the "Broadwell" parts. We got the same impression when we tried out the Xeon E5-2699 v4. This confirms our suspicion that with Broadwell-EP, Intel prioritized performance per watt over throughput and single-threaded performance. The Xeon D, as a result, is simply the performance-per-watt champion.
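As a back-of-the-envelope check, the estimates above are simply the wall-power delta plus an assumed idle CPU draw. A small sketch of that arithmetic; the 15W idle CPU figure is our assumption, and PSU losses, fans, and memory activity are ignored, so the result overestimates the CPU alone:

```python
# Rough estimate of CPU power under load from wall measurements:
# (peak wall power - idle wall power) + assumed idle CPU power.
# The 15 W idle CPU draw is an assumption, not a measured value.

ASSUMED_IDLE_CPU_W = 15

def estimated_cpu_load_power(idle_wall_w, peak_wall_w):
    return (peak_wall_w - idle_wall_w) + ASSUMED_IDLE_CPU_W

print(estimated_cpu_load_power(84, 254))   # Xeon E5-2690 v3: ~185 W vs. a 135 W TDP
print(estimated_cpu_load_power(76, 143))   # Xeon E5-2640 v4: ~82 W vs. a 90 W TDP
```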

The Cavium ThunderX does pretty badly here, and one of the reasons is that power management either did not work or at least did not work very well. Changing the power governor was not possible: the cpufreq driver was not recognized. The difference between peak and idle (±80W) makes us suspect that the chip consumes between 40 and 50W at idle, as measured at the wall. Whether this is just a matter of software support or a real lack of good hardware power management is not clear. It is quite possibly both.
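For reference, on a Linux system with a working cpufreq driver the active governor is exposed through sysfs; a minimal sketch of the kind of check we attempted is below (the paths are the standard cpufreq sysfs locations; on the ThunderX they were simply absent because no driver was recognized):

```python
# Minimal sketch: inspect (and optionally set) the cpufreq governor via sysfs.
# On the ThunderX test system these files did not exist, so the check
# below would report "not available".
from pathlib import Path

CPU0 = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def current_governor():
    gov_file = CPU0 / "scaling_governor"
    if not gov_file.exists():
        return None  # no cpufreq driver loaded for this CPU
    return gov_file.read_text().strip()

def set_governor(name):
    # Writing requires root; typical governors: performance, powersave, ondemand
    (CPU0 / "scaling_governor").write_text(name)

gov = current_governor()
print("cpufreq governor:", gov if gov else "not available")
```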

We would also advise Gigabyte to use a better-performing heatsink for the fastest ThunderX SKUs. At full load, the reported CPU temperature is 83°C, which leaves little thermal headroom (90°C is critical). When we stopped our CRAC cooling, the Gigabyte R120-T30 server forced a full shutdown after only a few minutes, while the Xeon D systems were still humming along.

Comments

  • silverblue - Thursday, June 16, 2016 - link

    I'm not sure how this is relevant. Johan doesn't review graphics cards, other people at Anandtech do. I bet Guru3D has a much bigger team for that, and I imagine that they have a much narrower scope (i.e. no server stuff).

    I don't think I've looked at a review recently that hasn't had the comments section polluted with "where is the review for x".
  • UrQuan3 - Wednesday, June 15, 2016 - link

    Intel allows their Xeons to sometimes pull double their TDP? No wonder our new machines trip breakers long before I thought they would. I need to test instead of assuming accurate documentation.

    I can see why you chose C-Ray, I'm just sorry a more general ray tracer was not chosen. Still, not its intended market, though I am suddenly very interested. Ray-tracing and video encoding are my top two tasks.
  • Meteor2 - Thursday, June 16, 2016 - link

    The 'T' in 'TDP' is for thermal. It's a measure of the maximum waste heat which needs to be removed over a certain period of time.
  • UrQuan3 - Wednesday, June 22, 2016 - link

    Yes, it stands for thermal, but power consumed doesn't just disappear. Convert it to light, convert it to motion, convert it to heat, etc. In this case there is a small amount of motion (electrons) and the rest has to be heat. I expect much higher instantaneous pulls, but this was sustained power. Anyway, I will track down the AVX documentation mentioned below.

    I saw the h264ref. I'll be curious about x264 (Handbrake), as the authors have seemed interested in ARM in the last few years. Unsurprisingly, it is far less optimized than on x64. I benchmarked Handbrake on the Pi 2, Pandaboard, and CI-20 last year, just to see what it would do.
  • JohanAnandtech - Thursday, June 16, 2016 - link

    C-Ray was just a placeholder to measure FPU energy consumption. I'll look into bringing a more potent raytracer into our benchmark suite (POV-Ray).

    Video encoding was in the review though, somewhat (h264ref).
  • patrickjp93 - Friday, June 17, 2016 - link

    ARM chips with vector extensions allow it as well. Intel provides separate documentation for AVX-workload TDPs.
  • Antony Newman - Wednesday, June 15, 2016 - link

    Fascinating article.

    Why would Cavium not try and use 54 x A73s in their next chip?

    If ARM are not in the business of making silicon, and ARM think the '1.2W Ares' will help them break into the server market... then why do we think ARM isn't working with the likes of Cavium to get a server SoC that rocks the Intel boat?

    Typos from memory: send -> sent. Through -> thought. There were a few others.

    AJ
  • name99 - Thursday, June 16, 2016 - link

    How do you know ARM aren't working with such a vendor?
    ARM has always said that they expect ARM server CPUs to only be marginally competitive (for very limited situations) in 2017, and to only be really competitive in 2020.

    That suggests, among other things, that if they are working with partners, they have a target launch between those two dates, and they regard all launches before 2017 as nice for PR and for building up the ecosystem, but essentially irrelevant for commercial purposes.
  • rahvin - Thursday, June 16, 2016 - link

    The problem, as pointed out early in this article, is that ARM keeps targeting Intel's current products, not the ones that will be out when they get their own products out. We've had almost a dozen vendors get to the point of releasing a chip and then drop it because it is simply not competitive with Intel. Most of these ARM products were undertaken when Intel was targeting performance without regard to performance/watt. Now that Intel targets the latter metric, ARM server chips haven't been competitive with them.

    Fact is, Intel could decimate and totally take over all the markets ARM chips occupy, but to do it they'd have to cannibalize their existing high-profit sales. This is why they keep canceling Atom chips: the chips turned out so good that they were worried they'd cannibalize much more expensive products. This is the reason Avoton is highly restricted in what products and price segments it's allowed into. If Intel opened the floodgates on Avoton, they would risk cannibalizing their own server profits.
  • junky77 - Wednesday, June 15, 2016 - link

    So, they did what AMD couldn't for years? I'm trying to figure it out... their offering seems a lot more interesting than AMD's current stuff.
