How We Tested

To determine the optimal balance between data center temperature and system cooling performance, we created a controlled temperature testing environment called the "HotBox". Basically, we placed a server inside an insulated box. The box consists of two main layers: at the bottom is the air inlet, where a heating element is placed. The hot air is blown into the box and is then sucked into the front of the server on the second layer. This way we can simulate inlet air coming from below, as in most data centers. The inlet and outlet are separated and insulated from each other, simulating the hot and cold aisles. Two thermistors measure the inlet temperature, one on the right and one on the left, just behind the front panel.

Just behind the motherboard, close to the back of the server, a pair of thermistors monitors the outlet temperature. We would like to thank Wannes De Smet, who designed the HotBox!
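
For those wondering how a thermistor reading ends up as a temperature: the usual approach is to read the thermistor through a voltage divider with an ADC and apply the beta-parameter form of the Steinhart-Hart equation. The sketch below illustrates that idea; the component values (a 10 kOhm NTC with a beta of 3950, a 10 kOhm series resistor, a 12-bit ADC) are assumptions for the example, not necessarily what our home-made measurement PCB uses.

    import math

    # Assumed components: a 10 kOhm NTC thermistor (beta = 3950 K) in a
    # voltage divider with a 10 kOhm series resistor, read by a 12-bit ADC.
    SERIES_R = 10_000.0   # series resistor (ohms)
    NOMINAL_R = 10_000.0  # thermistor resistance at 25 degrees C
    NOMINAL_T = 298.15    # 25 degrees C in kelvin
    BETA = 3950.0         # beta coefficient of the thermistor
    ADC_MAX = 4095        # full scale of a 12-bit ADC

    def adc_to_celsius(adc_value: int) -> float:
        """Convert a raw ADC reading into degrees Celsius (beta-parameter model)."""
        # Recover the thermistor resistance from the voltage-divider ratio.
        ratio = adc_value / ADC_MAX
        resistance = SERIES_R * ratio / (1.0 - ratio)
        # Beta-parameter form of the Steinhart-Hart equation.
        inv_t = 1.0 / NOMINAL_T + math.log(resistance / NOMINAL_R) / BETA
        return 1.0 / inv_t - 273.15

    # Example: average the two inlet thermistors, as in the HotBox setup.
    inlet = [adc_to_celsius(raw) for raw in (1620, 1600)]
    print(f"inlet temperature: {sum(inlet) / len(inlet):.1f} C")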

The server is fed by a standard European 230V (16 Amps max.) power line. We use the Racktivity ES1008 Energy Switch PDU to measure power consumption. The measurement circuits of most PDUs assume that the incoming AC is a perfect sine wave, but it never is. The Racktivity PDU, however, measures true RMS current and voltage at a very high sample rate: up to 20,000 measurements per second for the complete PDU.
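
Why does true RMS matter? A server power supply draws a distorted, non-sinusoidal current, so real power has to be computed from the instantaneous voltage and current samples rather than from a textbook sine-wave formula. The sketch below shows the difference on a synthetic 230V waveform; the sampling rate, harmonic content, and amplitudes are invented for the illustration and say nothing about Racktivity's internal implementation.

    import math

    def true_rms(samples):
        """True RMS of a sampled waveform: square root of the mean of the squares."""
        return math.sqrt(sum(s * s for s in samples) / len(samples))

    def real_power(voltage_samples, current_samples):
        """Real (active) power: the mean of the instantaneous v*i products."""
        return sum(v * i for v, i in zip(voltage_samples, current_samples)) / len(voltage_samples)

    # One 50 Hz cycle sampled at 20 kHz (the order of magnitude the PDU works at).
    fs, f, n = 20_000, 50, 400
    v = [230 * math.sqrt(2) * math.sin(2 * math.pi * f * k / fs) for k in range(n)]
    # A non-sinusoidal current, as drawn by a switch-mode PSU (3rd harmonic added).
    i = [2.0 * math.sin(2 * math.pi * f * k / fs)
         + 0.6 * math.sin(2 * math.pi * 3 * f * k / fs) for k in range(n)]

    p = real_power(v, i)            # watts actually consumed
    s = true_rms(v) * true_rms(i)   # apparent power (VA)
    print(f"real power {p:.0f} W, apparent power {s:.0f} VA, power factor {p / s:.2f}")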

Datamining on Hardware

Building the "HotBox" was one thing; getting all the necessary data is another serious challenge. A home-made PCB collects the data from the thermistors. Our vApus stress testing software interfaces with ESXi to collect hardware usage counters and temperatures; fan speeds are collected from the BMC; and power numbers come from the Racktivity PDU. This is all done while placing a realistic load on the ESXi virtual machines. The excellent programming work of Dieter of the Sizing Servers Lab resulted in a large amount of data in our Excel sheets.
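
We obviously cannot publish the lab's code here, but conceptually the collection pipeline boils down to polling each source on a fixed interval and writing one row per timestamp. The sketch below illustrates the idea; the reader functions are hypothetical stand-ins for the PCB, ESXi, BMC, and PDU interfaces, and the values they return are placeholders.

    import csv
    import time

    # Hypothetical stand-ins for the real data sources.
    def read_thermistors():    # inlet/outlet temperatures from the home-made PCB
        return {"inlet_c": 35.2, "outlet_c": 48.7}

    def read_esxi_counters():  # usage counters and temperatures via ESXi
        return {"cpu_util_pct": 62.0, "cpu_temp_c": 71.0}

    def read_bmc_fans():       # fan speeds from the BMC
        return {"fan_rpm": 8400}

    def read_pdu_power():      # true-RMS power from the PDU
        return {"power_w": 412.5}

    def collect(outfile: str, interval_s: float = 5.0, samples: int = 720):
        """Poll every source on a fixed interval and append one CSV row per sample."""
        with open(outfile, "w", newline="") as f:
            writer = None
            for _ in range(samples):
                row = {"timestamp": time.time()}
                for source in (read_thermistors, read_esxi_counters,
                               read_bmc_fans, read_pdu_power):
                    row.update(source())
                if writer is None:
                    writer = csv.DictWriter(f, fieldnames=row.keys())
                    writer.writeheader()
                writer.writerow(row)
                time.sleep(interval_s)

    # collect("hotbox_run.csv")  # one hour of data at a 5-second interval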

To put a realistic load on the machine we use our own real-life load generator called vApus. With vApus we capture real user interaction with a website, add some parameters that can be randomized, and then replay that log a number of times.
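
vApus itself is an in-house tool, but the capture-and-replay idea is simple enough to sketch: take a logged request template, randomize the parameters that were marked as variable, and fire it off a number of times. Everything in the snippet below (the URL, parameter names, and request counts) is invented for the illustration and is not vApus code.

    import random
    import urllib.parse
    import urllib.request

    # A captured log entry: one request template plus the parameters
    # that were marked as randomizable (all names made up for this sketch).
    captured_request = {
        "url": "http://testsite.example/forum/viewtopic.php",
        "params": ["topic_id", "page"],
    }

    def randomize(template):
        """Fill the captured request's variable parameters with random values."""
        params = {name: random.randint(1, 5000) for name in template["params"]}
        return template["url"] + "?" + urllib.parse.urlencode(params)

    def replay(template, times=100):
        """Replay the captured request a number of times, as a load generator would."""
        for _ in range(times):
            url = randomize(template)
            with urllib.request.urlopen(url, timeout=10) as resp:
                resp.read()  # a real tool would also record the response time

    # replay(captured_request, times=1000)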

The workload consists of four VMs:

  • Drupal LAMP VM running sizingservers.be website
  • Zimbra 8 VM
  • phpBB LAMP VM running clone of real website
  • OLAP (news aggregator database)

The Drupal site gets regular site visitors mixed with the posting of new blog entries and the sending of email, resulting in a moderate system load. The Zimbra load is disk-intensive, consisting of users creating and sending emails, replying, and creating appointments, tasks, and contacts. The phpBB workload has a moderate CPU and network load, viewing and creating forum threads with rich content. Finally, the OLAP workload is based on queries from a news aggregator and is mostly CPU bound. These four VMs form one tile (similar to VMmark "tiles"). We ran two tiles in each test, resulting in a load of 10% to 80%.

Comments

  • extide - Tuesday, February 11, 2014 - link

    Yeah, there is a lot of movement in this area these days, but the hard part is that at the low voltages used in servers (<=24V) you need a massive amount of current to feed several racks of servers, so you need massive power bars, and of course you can lose a lot of efficiency on that side as well.
  • drexnx - Tuesday, February 11, 2014 - link

    afaik, the Delta DC stuff is all 48v, so a lot of the old telecom CO stuff is already tailor-made for use there.

    but yes, you get to see some pretty amazing buswork as a result!
  • Ikefu - Tuesday, February 11, 2014 - link

    Microsoft is building a massive data center in my home state just outside Cheyenne, WY. I wonder why more companies haven't done this yet? It's very dry and days above 90F are few and far between in the summer. Seems like an easy cooling solution versus all the data centers in places like Dallas.
  • rrinker - Tuesday, February 11, 2014 - link

    Building in the cooler climes is great - but you also need the networking infrastructure to support said big data center. Heck for free cooling, build the data centers in the far frozen reaches of Northern Canada, or in Antarctica. Only, how will you get the data to the data center?
  • Ikefu - Tuesday, February 11, 2014 - link

    It's actually right along the I-80 corridor that connects Chicago and San Francisco. Several major backbones run along that route, and it's why many mega data centers in Iowa are also built along I-80. Microsoft and the NCAR Yellowstone supercomputer are there, so the large pipe is definitely accessible.
  • darking - Tuesday, February 11, 2014 - link

    We've used free cooling in our small datacenter since 2007. It's very effective from September to April here in Denmark.
  • beginner99 - Tuesday, February 11, 2014 - link

    That map of Europe is certainly plain wrong. Spain especially, but also Greece and Italy, easily have some days above 35. It also happens a couple of days per year where I live, a lot further north than any of those.
  • ShieTar - Thursday, February 13, 2014 - link

    Do you really get 35°C, in the shade, outside, for more than 260 hours a year? I'm sure it happens for a few hours a day in the two hottest months, but the map does cap out at 8500 out of 8760 hours.
  • juhatus - Tuesday, February 11, 2014 - link

    What about wear and tear from running the equipment at hotter temperatures? I remember seeing a chart where higher temperature = shorter lifespan. I would imagine the OEMs have engineered a bit of margin for this, and warranties aside, it should be basic physics?
  • zodiacfml - Wednesday, February 12, 2014 - link

    You just need a constant temperature and equipment that works at that temperature. Wear and tear happens mostly with temperature changes.
