Energy Consumption

We tested the energy consumption of our servers over a one-minute period in several scenarios. The first scenario is the point at which the server under test performs best in MySQL: the highest throughput just before the response time rises significantly.
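
As a rough illustration of that procedure, the sketch below averages wall-power readings over a one-minute window. It is a minimal example, not our actual test harness, and read_power_watts() is a hypothetical stand-in for whatever interface your power meter exposes.

    import random
    import time

    def read_power_watts():
        # Hypothetical stand-in: query your wall-power meter here.
        # A simulated reading is returned so the sketch runs end to end.
        return 320.0 + random.uniform(-5, 5)

    def average_power(duration_s=60, interval_s=1.0):
        """Sample wall power once per interval and return the mean in watts."""
        samples = []
        end = time.monotonic() + duration_s
        while time.monotonic() < end:
            samples.append(read_power_watts())
            time.sleep(interval_s)
        return sum(samples) / len(samples)

    if __name__ == "__main__":
        print(f"average wall power: {average_power():.0f} W")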

To test the power usage of the FPU, we measured the power consumption while POV-Ray was using all available threads.
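
A minimal sketch of that workload, assuming POV-Ray 3.7's Unix binary is on the PATH (its --benchmark mode renders the standard benchmark scene, and +WTn sets the worker-thread count):

    import os
    import subprocess

    # Saturate every logical CPU with POV-Ray's built-in benchmark scene
    # while the wall-power meter is logging.
    threads = os.cpu_count()
    subprocess.run(["povray", "--benchmark", f"+WT{threads}"], check=True)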

SKU                       TDP Spec       Idle Server   MySQL Best Throughput at      POV-Ray 100%
                          (on paper)     (W)           Lowest Resp. Time (*) (W)     CPU Load (W)
Dual Xeon E5-2699 v4      2x 145 W       106           412                           425
Dual Xeon 8176            2x 165 W       190           300                           453
Dual EPYC 7601            2x 180 W       151           321                           327
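
Since each scenario was measured over a one-minute window, the table's wattages convert directly to energy; a quick back-of-the-envelope sketch using the numbers above:

    # One-minute energy per scenario, straight from the table (W x 60 s -> kJ).
    power_w = {
        "Dual Xeon E5-2699 v4": {"idle": 106, "mysql": 412, "povray": 425},
        "Dual Xeon 8176":       {"idle": 190, "mysql": 300, "povray": 453},
        "Dual EPYC 7601":       {"idle": 151, "mysql": 321, "povray": 327},
    }
    for sku, scenarios in power_w.items():
        energy_kj = {name: w * 60 / 1000 for name, w in scenarios.items()}
        print(sku, {name: f"{kj:.1f} kJ" for name, kj in energy_kj.items()})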

Both the Xeon 8176 and the dual EPYC server had a few additional components (a separate 10 GbE card, for example) compared to the dual Xeon E5-2699 v4 system, but that does not fully explain why idle power is so much higher, especially on the dual Xeon 8176. We lacked the time to investigate this fully, and the relatively new firmware on the last two systems may be a factor.

The only conclusion we can draw so far is that the EPYC 7601 is likely to draw more power when running integer applications, while the rather wide FP units of the Intel CPUs are real power hogs even when they are not running heavy AVX applications. To be continued...

Comments

  • oldlaptop - Thursday, July 13, 2017

    Why on earth is gcc -Ofast being used to mimic "real-world", non-"aggressively optimized"(!) conditions? This is in fact the *most* aggressive optimization setting available; it is very sensitive to the exact program being compiled at best, and generates bloated (low priority on code size) and/or buggy code at worst (possibly even harming performance if the generated code is so big as to harm cache coherency). Most real-world software will be built with -O2 or possibly -Os. I can't help but wonder why questions weren't asked when SPEC complained about this unwisely aggressive optimization setting...
  • peevee - Thursday, July 13, 2017

    "added a second full-blown 512 bit AVX-512 unit. "

    Do you mean "added a second 256-bit ALU, which in combination with the first one implements a full 512-bit AVX-512 unit"?
  • peevee - Thursday, July 13, 2017

    "getting data from the right top node to the bottom left node – should demand around 13 cycles. And before you get too concerned with that number, keep in mind that it compares very favorably with any off die communication that has to happen between different dies in (AMD's) Multi Chip Module (MCM), with the Skylake-SP's latency being around one-tenth of EPYC's."

    1/10th? Requesting data from the L3 on the die next to it would take 130 (or even 65, if they are talking about averages) cycles? That does not sound realistic; you can already request data from RAM at similar latencies.
  • AmericasCup - Friday, July 14, 2017

    'For enterprises with a small infrastructure crew and server hardware on premise, spending time on hardware tuning is not an option most of the time.'

    Conversely, our small-crew shop has been tuning AMD (selected for scalar floating-point performance) for years. The experience and familiarity make switching less attractive.

    Also, you did all this in one week for AMD and two weeks for Intel? Did you ever sleep? KUDOS!
  • JohanAnandtech - Friday, July 21, 2017

    Thanks for appreciating the effort. Luckily, I got some help from Ian on Tuesday. :-)
  • AntonErtl - Friday, July 14, 2017

    According to http://www.anandtech.com/show/10158/the-intel-xeon... if you execute just one AVX256 instruction on one core, this slows down the clocks of all E5v4 cores on the same socket for at least 1ms. Somewhere I read that newer Xeons only slow down the core that executes the AVX256 instruction. I expect that it works the same way for AVX512, and yes, this means that if you don't have a load with a heavy proportion of SIMD instructions, you are better off with AVX128 or SSE. The AMD variant of having only 128-bit FPUs and no clock slowdown looks better balanced to me. It might not win Linpack benchmark competitions, but for that one uses GPUs anyway these days.
  • wagoo - Sunday, July 16, 2017

    Typo on the CLOSING THOUGHTS page: "dual Silver Xeon solutions" (dual socket)

    Great read though, thanks! Can finally replace my dual socket shanghai opteron home server soon :)
  • Chaser - Sunday, July 16, 2017

    AMD's CPU future is looking very promising!
  • bongey - Tuesday, July 18, 2017

    EPYC power consumption is just wrong. Somehow you are 50W over what everyone else is getting at idle. https://www.servethehome.com/amd-epyc-7601-dual-so...
  • Nenad - Thursday, July 20, 2017

    Interesting SPECint2006 results:
    - Intel in their slide #9 claims that Intel 8160 is 2% faster than EPYC 7601
    - AnandTech's tests in this article show the EPYC 7601 as 42% faster than the Intel 8176

    Those two are quite different, even if we ignore that the 8176 should be faster than the 8160. In other words, those Intel test results look very suspicious.
