Benchmark Configuration

Since Supermicro claims that these servers can operate at inlet temperatures of up to 47°C (117°F) while supporting Xeons with 135W TDPs, we tested with two processors at opposite extremes. The first is the Xeon E5-2650L v2 at 1.7GHz with a low 70W TDP but a very low Tcase of 65°C: it draws little power, yet it is highly sensitive to high temperatures. The second is the fastest Xeon E5 available, the Xeon E5-2697 v2: 12 cores at 2.7GHz with a 130W TDP and a Tcase of 86°C. This CPU needs a lot of power, but it is also resistant to high temperatures.
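
The Tcase gap is what makes this pairing interesting. As a rough illustration (a back-of-the-envelope sketch of our own, not part of the article's methodology, assuming headroom is simply Tcase minus inlet temperature and ignoring the temperature rise between the chassis inlet and the CPU heatsink):

```python
# Rough thermal-headroom sketch: Tcase minus inlet temperature.
# This ignores the inlet-to-heatsink temperature rise inside the chassis,
# so real margins are smaller; it only illustrates why the low-power part
# is the more temperature-sensitive of the two.

INLET_C = 47  # Supermicro's claimed maximum inlet temperature (°C)

cpus = {
    "Xeon E5-2650L v2 (70W)": 65,   # Tcase in °C
    "Xeon E5-2697 v2 (130W)": 86,   # Tcase in °C
}

for name, tcase in cpus.items():
    headroom = tcase - INLET_C
    print(f"{name}: Tcase {tcase}°C -> {headroom}°C of nominal headroom at {INLET_C}°C inlet")
```

Run as-is, this prints 18°C of nominal headroom for the E5-2650L v2 versus 39°C for the E5-2697 v2, which is why the low-power part is the temperature-sensitive one in this test.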

Supermicro 6027R-73DARF (2U Chassis)

CPU: Two Intel Xeon E5-2697 v2 (2.7GHz, 12 cores, 30MB L3, 130W)
     or two Intel Xeon E5-2650L v2 (1.7GHz, 10 cores, 25MB L3, 70W)
RAM: 64GB (8x 8GB) DDR3-1600 Samsung M393B1K70DH0-CK0
Internal Disks: 8GB flash disk to boot up; 1GbE link to iSCSI SAN
Motherboard: Supermicro X9DRD-7LN4F
Chipset: Intel C602J
BIOS version: R 3.0a (December 6, 2013)
PSU: Supermicro 740W PWS-741P-1R (80 Plus Platinum)

All C-states are enabled in both the BIOS and ESXi.
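
For readers who want to check the equivalent setting on a Linux box, the cpuidle driver exposes the available C-states through sysfs. The snippet below is a generic Linux sketch, not the procedure used on the ESXi hosts in this article (ESXi manages C-states through its own power policy):

```python
# Minimal sketch: list the C-states the Linux cpuidle driver exposes for CPU 0
# and how often each has been entered. Generic Linux check only; it is not how
# the ESXi hosts in this test setup were verified.
from pathlib import Path

cpuidle = Path("/sys/devices/system/cpu/cpu0/cpuidle")
for state in sorted(cpuidle.glob("state*")):
    name = (state / "name").read_text().strip()
    usage = (state / "usage").read_text().strip()      # times this state was entered
    latency = (state / "latency").read_text().strip()  # exit latency in microseconds
    print(f"{state.name}: {name:10s} entered {usage} times, exit latency {latency} us")
```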

How We Tested Temperature vs. CPU Load
Comments (48)

  • extide - Tuesday, February 11, 2014 - link

    Yeah, there is a lot of movement in this area these days, but the hard part is that at the low voltages used in servers (<=24V) you need a massive amount of current to feed several racks of servers, so you need massive power bars, and of course you can lose a lot of efficiency on that side as well.
  • drexnx - Tuesday, February 11, 2014 - link

    afaik, the Delta DC stuff is all 48v, so a lot of the old telecom CO stuff is already tailor-made for use there.

    but yes, you get to see some pretty amazing buswork as a result!
  • Ikefu - Tuesday, February 11, 2014 - link

    Microsoft is building a massive data center in my home state just outside Cheyenne, WY. I wonder why more companies haven't done this yet? It's very dry and days above 90°F are few and far between in the summer. Seems like an easy cooling solution versus all the data centers in places like Dallas.
  • rrinker - Tuesday, February 11, 2014 - link

    Building in cooler climes is great - but you also need the networking infrastructure to support said big data center. Heck, for free cooling, build the data centers in the far frozen reaches of Northern Canada, or in Antarctica. Only, how will you get the data to the data center?
  • Ikefu - Tuesday, February 11, 2014 - link

    It's actually right along the I-80 corridor that connects Chicago and San Francisco. Several major backbones run along that route, and it's why many mega data centers in Iowa are also built along I-80. Microsoft and the NCAR Yellowstone supercomputer are there, so the large pipe is definitely accessible.
  • darking - Tuesday, February 11, 2014 - link

    We've used free cooling in our small datacenter since 2007. It's very effective from September to April here in Denmark.
  • beginner99 - Tuesday, February 11, 2014 - link

    That map of Europe is certainly plain wrong. Spain especially, but also Greece and Italy, easily have some days above 35°C. It also happens a couple of days per year where I live, a lot further north than any of those.
  • ShieTar - Thursday, February 13, 2014 - link

    Do you really get 35°C, in the shade, outside, for more than 260 hours a year? I'm sure it happens for a few hours a day in the two hottest months, but the map does cap out at 8500 out of 8760 hours.
  • juhatus - Tuesday, February 11, 2014 - link

    What about wear and tear from running the equipment at hotter temperatures? I remember seeing a chart where higher temperature = shorter lifespan. I would imagine the OEMs have engineered a bit of margin for this, and warranties aside, it should be basic physics?
  • zodiacfml - Wednesday, February 12, 2014 - link

    You just need a constant temperature and equipment that works at that temperature. Wear and tear happens mostly when the temperature changes.
