Loading the Server

The server first gets a few warm-up runs, and then we measure over a period of about 1000 seconds. The blue lines represent the measurements taken with the Xeon E5-2650L; the orange/red lines represent the Xeon E5-2697 v2. We test with three settings:

  • No heating. Inlet temperature is about 20-21°C, regulated by the CRAC
  • Moderate heating. We regulate until the inlet temperature is about 35°C
  • Heavy heating. We regulate until the inlet temperature is about 40°C

First we run a stress test: what kind of CPU load do we attain? Our objective is to test a realistic load for a virtualized host, between 20% and 80% CPU load. Peaks above 80% are acceptable, but long periods of 100% CPU load are not.
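Those criteria can be expressed as a simple check. The function below is a hypothetical sketch (not from the article's test harness): it takes a trace of one-per-second CPU-load samples and verifies the average stays in the 20-80% band while allowing only brief bursts of 100% load. The `max_full_load_run` threshold is an assumed parameter.

```python
# Hypothetical sketch: validate a CPU-load trace (one sample per second)
# against the criteria above: average load between 20% and 80%, peaks
# above 80% allowed, but no long stretch of sustained 100% load.
def load_is_realistic(samples, max_full_load_run=30):
    """samples: iterable of CPU-load percentages, one per second."""
    samples = list(samples)
    avg = sum(samples) / len(samples)
    # Find the longest consecutive run at 100% load.
    longest_run = run = 0
    for s in samples:
        run = run + 1 if s >= 100 else 0
        longest_run = max(longest_run, run)
    return 20 <= avg <= 80 and longest_run <= max_full_load_run

print(load_is_realistic([50] * 600 + [100] * 10))  # brief burst -> True
print(load_is_realistic([100] * 600))              # sustained 100% -> False
```

A real harness would read the samples from `sar` or a monitoring agent rather than a list, but the acceptance logic is the same.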

There are some small variations between the different tests, but the load curve is very similar on the same CPU. The 2.4GHz 12-core Xeon E5-2697 v2 has a CPU load between 1% and 78%. During peak load, the load is between 40% and 80%.

The 8-core 1.8GHz Xeon E5-2650L is not as powerful and has a peak load of 50% to 94%. Let's check out the temperatures. The challenge is to keep the CPU temperature below the specified Tcase.

The low-power Xeon stays well below its specified Tcase. Even though it starts at 55°C when the inlet is set to 40°C, the CPU never reaches 60°C.

The results on our 12-core monster are a different matter. With an inlet temperature up to 35°C, the server is capable of keeping the CPU below 75°C (see red line). When we increase the inlet temperature to 40°C, the CPU starts at 61°C and quickly rises to 80°C. Peaks of 85°C are measured, which is very close to the specified 86°C maximum temperature. Those values are acceptable, but at first sight it seems that there is little headroom left.

The most extreme case would be to fill up all disk bays and DIMM slots and to set inlet temperature to 45°C. Our heating element is not capable of sustaining an inlet of 45°C, but we can get an idea of what would happen by measuring how hard the fans are spinning.

48 Comments

  • ShieTar - Tuesday, February 11, 2014 - link

    I think you oversimplify if you just judge the efficiency of the cooling method by the heat capacity of the medium. The medium is not a heat-battery that only absorbs the heat, it is also moved in order to transport energy. And moving air is much easier and much more efficient than moving water.

    So I think in the case of Finland the driving fact is that they will get Air temperatures of up to 30°C in some summers, but the water temperature at the bottom regions of the gulf of Finland stays below 4°C throughout the year. If you would consider a data center near the river Nile, which is usually just 5°C below air temperature, and frequently warmer than the air at night, then your efficiency equation would look entirely different.

    Naturally, building the center in Finland instead of Egypt in the first place is a pretty good decision considering cooling efficiency.
    Reply
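The tradeoff ShieTar describes can be made concrete with rough textbook figures (the densities and specific heats below are standard assumed values, not numbers from the article): per unit volume, water absorbs vastly more heat than air, which is exactly why the pumping-cost side of the equation matters so much.

```python
# Back-of-the-envelope comparison using textbook values (assumed):
# heat absorbed per cubic metre of medium per kelvin of temperature rise.
air_density = 1.2        # kg/m^3 at ~20°C
air_cp = 1005            # J/(kg*K), specific heat of air
water_density = 998      # kg/m^3
water_cp = 4186          # J/(kg*K), specific heat of water

air_vol_capacity = air_density * air_cp        # J/(m^3*K)
water_vol_capacity = water_density * water_cp  # J/(m^3*K)

ratio = water_vol_capacity / air_vol_capacity
print(f"water stores ~{ratio:.0f}x more heat per m^3 per K than air")
```

The roughly three-and-a-half-thousandfold gap per unit volume is why water cooling can use such small flow rates, while the comment's point stands: moving the denser, more viscous medium costs more per unit volume moved.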
  • icrf - Tuesday, February 11, 2014 - link

Isn't moving water significantly more efficient than moving air, because a significant amount of energy when moving air goes into compressing it rather than moving it, whereas water is largely incompressible? Reply
  • ShieTar - Thursday, February 13, 2014 - link

For the initial acceleration this might be an effect, though energy used for compression isn't necessarily lost, as the pressure difference will decay via motion of the air again (though maybe not in the preferred direction). But if you look at the entire equation for a cooling system, the hard part is not getting the medium accelerated, but keeping it moving against the resistance of the coolers, tubes, and radiators. And water has much stronger interactions with any reasonably used material (metal, mostly) than air. And you usually run water through smaller and longer tubes than air, which can quickly be moved from the electronics case to a large air vent. Also, the viscosity of water itself is significantly higher than that of air, especially if we are talking about cool water not too far above the freezing point, i.e. 5°C to 10°C. Reply
  • easp - Saturday, February 15, 2014 - link

Below Mach 0.3, air flows can be treated as incompressible. I doubt bulk movement of air in datacenters hits 200+ mph. Reply
  • juhatus - Tuesday, February 11, 2014 - link

Sir, I can assure you the Nordic Sea hits ~20°C in some summers. But still, that temperature is good enough for cooling.

In Helsinki they are now collecting the excess heat from a data center to warm up houses in the city area, so that should be considered too. I think many countries could use some "free" heating.
    Reply
  • Penti - Tuesday, February 11, 2014 - link

Surface temperature does, but below the surface it's cooler, even in small lakes and rivers; otherwise our drinking water would be unusable and come out of the tap at 25°C. You would get legionella and the like. In Sweden, water is not considered usable at over 20 degrees at the inlet, or out of the tap for that matter. Lakes, rivers, and oceans can stay at 2-15°C at the inlet year-round here in Scandinavia if the inlet is appropriately placed. Certainly good enough if you allow temps over the old 20-22°C. Reply
  • Guspaz - Tuesday, February 11, 2014 - link

    OVH's datacentre here in Montreal cools using a centralized watercooling system and relies on convection to remove the heat from the server stacks, IIRC. They claim a PUE of 1.09 Reply
  • iwod - Tuesday, February 11, 2014 - link

Exactly what I was about to post. Why haven't Facebook, Microsoft, and even Google managed to outpace them? A PUE of 1.09 is still, as far as I know, an industry record. Correct me if I'm wrong.

I wonder if they could get it down to 1.05.
    Reply
  • Flunk - Tuesday, February 11, 2014 - link

    This entire idea seems so obvious it's surprising they haven't been doing this the whole time. Oh well, it's hard to beat an idea that cheap and efficient. Reply
  • drexnx - Tuesday, February 11, 2014 - link

There's a lot of work being done on the UPS side of the power-consumption coin too. FB uses Delta DC UPSes that power their equipment directly at DC from the batteries, instead of the wasteful path of inverting to 480 VAC three-phase and then rectifying again at the server PSU, as well as Eaton equipment with ESS that bypasses the UPS until there's an actual power loss (for about a 10% efficiency pickup when running on mains power). Reply
