Energy Consumption

We measured the energy consumption of our servers over a one-minute period in several scenarios. The first scenario is the point where the server under test performs best in MySQL: the highest throughput just before the response time rises significantly.
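
The sketch below is purely illustrative of how such an operating point can be picked from a throughput/latency sweep; the helper function, the 1.5x latency threshold, and the numbers are hypothetical and not part of our actual test setup.

```python
# Illustrative only: choose the highest-throughput point whose response time
# has not yet degraded past a (hypothetical) threshold relative to baseline.
def best_operating_point(sweep, max_latency_ratio=1.5):
    """sweep: (concurrency, throughput_tps, latency_ms) tuples,
    ordered by increasing concurrency."""
    baseline_latency = sweep[0][2]          # latency at the lightest load
    best = sweep[0]
    for concurrency, tps, latency in sweep:
        if latency <= baseline_latency * max_latency_ratio and tps > best[1]:
            best = (concurrency, tps, latency)
    return best

# Hypothetical sysbench-style numbers, not measured data:
sweep = [(8, 4200, 3.1), (16, 7900, 3.4), (32, 13800, 4.0),
         (64, 18500, 4.4), (128, 19100, 9.8)]
print(best_operating_point(sweep))          # -> (64, 18500, 4.4)
```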

To test the power usage of the FPU, we measured the power consumption while POV-Ray was using all available threads.
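
For readers who want to reproduce this kind of measurement, here is a minimal sketch that samples wall power once per second for one minute and averages the readings. It assumes a BMC that supports the standard DCMI power reading command (ipmitool dcmi power reading); this is not necessarily how the numbers in the table below were gathered, and the one-second sampling interval is our assumption.

```python
# Minimal power-sampling sketch; assumes "ipmitool dcmi power reading" works
# against the local BMC and reports "Instantaneous power reading: NNN Watts".
import re
import subprocess
import time

def sample_power_watts():
    out = subprocess.run(["ipmitool", "dcmi", "power", "reading"],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"Instantaneous power reading:\s*(\d+)\s*Watts", out)
    if match is None:
        raise RuntimeError("unexpected ipmitool output")
    return int(match.group(1))

def average_power(duration_s=60, interval_s=1.0):
    samples = []
    deadline = time.time() + duration_s
    while time.time() < deadline:
        samples.append(sample_power_watts())
        time.sleep(interval_s)
    return sum(samples) / len(samples)

# Start the workload first (e.g. POV-Ray rendering on all threads), then:
# print(f"Average wall power: {average_power():.0f} W")
```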

SKU                     TDP (on paper)   Idle (W)   MySQL Best Throughput       POV-Ray 100% CPU
                                                    at Lowest Resp. Time (W)    Load (W)
Dual Xeon E5-2699 v4    2x 145 W         106        412                         425
Dual Xeon 8176          2x 165 W         190        300                         453
Dual EPYC 7601          2x 180 W         151        321                         327
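
Because idle power differs so much between the three systems, the load-minus-idle deltas are worth computing explicitly. A quick sketch using the wall-power numbers from the table above (the dictionary layout is ours; PSU efficiency differences are ignored):

```python
# Active power (load minus idle) per system, from the table above.
systems = {
    "Dual Xeon E5-2699 v4": {"idle": 106, "mysql": 412, "povray": 425},
    "Dual Xeon 8176":       {"idle": 190, "mysql": 300, "povray": 453},
    "Dual EPYC 7601":       {"idle": 151, "mysql": 321, "povray": 327},
}
for name, w in systems.items():
    print(f"{name}: MySQL +{w['mysql'] - w['idle']} W, "
          f"POV-Ray +{w['povray'] - w['idle']} W over idle")
```

On that basis the EPYC 7601 adds roughly 170 W over idle under MySQL versus roughly 110 W for the Xeon 8176, while under POV-Ray the picture reverses (about 176 W versus 263 W).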

Both the Xeon 8176 and the EPYC 7601 servers had a few additional components (a separate 10 GbE card, for example) that the Dual Xeon E5-2699 v4 system lacked, but that does not fully explain why idle power is so much higher, especially on the Dual Xeon 8176. We lacked the time to investigate this fully; note that the latter two systems also run relatively new firmware.

The only conclusion we can draw so far is that the EPYC 7601 is likely to draw more power when running integer applications, while the rather wide FP units of the Intel CPUs are real power hogs even when they are not running heavy AVX code. To be continued...

Comments

  • CajunArson - Tuesday, July 11, 2017

    Would a high-end server that was built in 2014 necessarily update? Maybe not.

    Should a high-end server with a brand new microarchitecture use the most recent version of the software if it has any expectation of seeing a real benefit? Absolutely.

    If this were a GPU review and Anandtech used 2-year-old drivers on a new GPU (assuming they even worked at all), we wouldn't even be having this conversation.
  • BrokenCrayons - Tuesday, July 11, 2017

    Home users playing video games are in a different environment than you find in a business datacenter. There's a lot less money to be lost when a driver update causes a performance regression or eliminates a feature. Conversely, needlessly updating software in the aforementioned datacenter can result in the loss of many millions if something goes wrong.
  • wallysb01 - Tuesday, July 11, 2017

    Conversely, having stuff working, but unnecessarily slowly, costs money as well. It's a balance, and if you're spending hundreds of thousands or even millions on a cluster/data center/what have you, you'd probably want to spend at least a little bit of time optimizing it, right?
  • Icehawk - Tuesday, July 11, 2017

    Most of the businesses I have worked for, ranging from 10 people to 50k, use severely outdated software and the barest minimum of patching. Optimization? HA!

    For example, I currently work for a manufacturer & retailer; our POS system was last patched by the vendor in 2012 and has since been superseded by at least two newer versions. We have XP machines in each of our stores, as that is the only OS that can run the software.

    The above is very typical. The 50k company I worked for had software so old and deeply entrenched that modernizing it is virtually impossible. My current company is working on getting to a new product... that was new in 2012 and has also been replaced with a newer version. Whee!
  • Icehawk - Tuesday, July 11, 2017

    One other thing - maybe the big shops actually do test and size their systems, but none of the places I have worked at or been involved with do any testing, benchmarking, etc. They just buy whatever their preferred vendor gives them that meets the budget and they *think* will work. My coworker is in charge (lol) of selecting servers for a new office... he has no clue what anything in this article is. He has never read a single review, overview, or test of a processor. I could keep going on like this :(
  • 0ldman79 - Wednesday, July 12, 2017

    Icehawk's comments are so accurate it is scary.

    I can't tell you how many businesses are running custom *nix software in a VM on a Windows server.

    They're not all about speed. Reliability is the single most important factor, speed is somewhere down the line. The people that make those decisions and the people that drink coffee while they're waiting on the machines are very different.

    Neither understand that it could all be done so much better and almost all of them are utterly terrified at the concept of speeding up the process if it means *any* changes are made.
  • JohanAnandtech - Friday, July 21, 2017

    We did test with NAMD 2.12 (Dec 2016).
  • sutamatamasu - Tuesday, July 11, 2017

    Glad AMD has made it back to this segment; now we can only wait and see what Raja can do for the server market with Radeon Instinct.
  • Kaotika - Tuesday, July 11, 2017

    So this confirms that the previous information regarding Skylake-X core configurations was wrong, and the 12-core variant is in fact using the HCC core instead of the LCC core?
  • Ian Cutress - Tuesday, July 11, 2017

    We corrected that in our Skylake-X review.
