The Supermicro "PUE-Optimized" Server

We tested the Supermicro SuperServer 6027R-73DARF. We chose this particular server for two main reasons: first, it is a 2U rackmount server (larger fans, better airflow), and second, it was the only PUE-optimized server with 16 DIMM slots. Many applications are limited by memory capacity rather than by the CPU, so a 16 DIMM server is more desirable to most of our readers than an 8 DIMM server.

On the outside, it looks like most other Supermicro servers, with the exception that the upper third of the front is left open for better airflow. This is in contrast to some Supermicro servers where the upper third is filled with disk bays.

This SuperServer has a few features to ensure that it can cope with higher temperatures without a huge increase in energy consumption. First of all, it has an 80 Plus Platinum power supply. A Platinum PSU is not exceptional anymore: almost every server vendor offers at least the slightly less efficient 80 Plus Gold PSUs. Platinum PSUs are the standard for new servers, and Dell and Supermicro have even started offering 80 Plus Titanium PSUs (230V).

Nevertheless, these Platinum PSUs are pretty impressive: they offer better than 92% efficiency from 20% to 100% load.
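To put that efficiency figure in perspective, here is a quick back-of-the-envelope sketch of what PSU efficiency means at the wall. The 92% value follows the Platinum claim above; the 88% comparison figure, the 300 W DC load, and the wall_power helper are illustrative assumptions, not measurements of this particular PSU.

```python
# Rough PSU efficiency comparison (illustrative numbers only).
# 92% matches the Platinum claim in the text; 88% is an assumed
# "less efficient PSU" figure for comparison, not a measured value.

def wall_power(dc_load_w: float, efficiency: float) -> float:
    """AC power drawn at the wall for a given DC load and PSU efficiency."""
    return dc_load_w / efficiency

dc_load = 300.0  # hypothetical server DC load in watts

for label, eff in [("Platinum (~92%)", 0.92), ("Gold-class (~88%)", 0.88)]:
    ac = wall_power(dc_load, eff)
    print(f"{label}: {ac:.0f} W at the wall, {ac - dc_load:.0f} W lost as heat")
```

In this hypothetical case the Platinum unit draws roughly 326 W at the wall versus about 341 W for the less efficient one. A handful of watts per power supply may look trivial, but across a full rack running 24/7 the saved energy, and the reduced heat the cooling has to remove, adds up.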

Secondly, it uses a spreadcore design: the CPU heatsinks do not obstruct each other, so the airflow passes over them in parallel.

Three heavy duty fans blow over a relatively simple motherboard design. Notice that the heatsink on the 8W Intel PCH (602J chipset) is also placed in parallel with the CPU heatsinks, so it too gets an unhindered airflow. Last but not least, these servers come with specially designed air shrouds for maximum cooling.

There is some room for improvement though. It would be great to have a model with 2.5-inch drive bays. Supermicro offers a 2.5-inch HDD conversion tray (MCP-220-00043-0N), but a native 2.5-inch drive bay model would give even better airflow and serviceability.

We would also like an easier way to replace the CPUs, as the heatsink screws tend to wear out quickly. But that is mostly a problem for a lab that is constantly testing servers, less so for a real enterprise.

Comments

  • bobbozzo - Tuesday, February 11, 2014 - link

    "The main energy gobblers are the CRACs"

    Actually, the IT equipment (servers & networking) use more power than the cooling equipment.
    ref: http://www.electronics-cooling.com/2010/12/energy-...
    "The IT equipment usually consumes about 45-55% of the total electricity, and total cooling energy consumption is roughly 30-40% of the total energy use"

    Thanks for the article though.
  • JohanAnandtech - Wednesday, February 12, 2014 - link

    That is the whole point, isn't it? IT equipment uses power to be productive, everything else is supporting the IT equipment and thus overhead that you have to minimize. From the facility power, CRACs are the most important power gobblers.
  • bobbozzo - Tuesday, February 11, 2014 - link

    So, who is volunteering to work in a datacenter with 35-40C cool aisles and 40-45C hot aisles?
  • Thud2 - Wednesday, February 12, 2014 - link

    80,000, that sounds like a lot.
  • CharonPDX - Monday, February 17, 2014 - link

    See also Intel's long-term research into it, at their New Mexico data center: http://www.intel.com/content/www/us/en/data-center...
  • puffpio - Tuesday, February 18, 2014 - link

    On the first page you mention "The "single-tenant" data centers of Facebook, Google, Microsoft and Yahoo that use "free cooling" to its full potential are able to achieve an astonishing PUE of 1.15-1."

    This article says that Facebook has achieved a PUE of 1.07 (https://www.facebook.com/note.php?note_id=10150148...
  • lwatcdr - Thursday, February 20, 2014 - link

    So I wonder when Google will build a data center in say North Dakota. Combine the ample wind power with cold and it looks like a perfect place for a green data center.
  • Kranthi Ranadheer - Monday, April 17, 2017 - link

    Hi Guys,

    Does anyone by chance have a recorded data of Temperature and processor's speed in a server room? Or can someone give me the information about the high-end and low-end values measured in any of the server rooms respectively, considering the equation temperature v/s processor's speed?
