Benchmark Configuration

Since Supermicro claims that these servers can operate at inlet temperatures of up to 47°C (117°F) while supporting Xeons with TDPs of up to 135W, we tested with two extreme processors. The first is the Xeon E5-2650L v2 at 1.7GHz, with a low 70W TDP and a very low Tcase of 65°C: it draws little power but is highly sensitive to high temperatures. The second is the fastest Xeon E5 available, the Xeon E5-2697 v2, with a 130W TDP for 12 cores at 2.7GHz and a Tcase of 86°C: it needs a lot of power but tolerates high temperatures well.
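To put those two Tcase numbers in perspective, here is a minimal back-of-the-envelope sketch (Python) of the thermal headroom each chip has at Supermicro's claimed 47°C inlet. The "headroom = Tcase - inlet" comparison is our own illustration, not a vendor specification:

    # Back-of-the-envelope thermal headroom at the claimed 47°C inlet.
    # Tcase values are the ones quoted above; the headroom metric itself
    # is only an illustration, not a vendor specification.

    INLET_C = 47  # Supermicro's claimed maximum inlet temperature

    cpus = {
        "Xeon E5-2650L v2 (70W TDP)": 65,  # Tcase in °C
        "Xeon E5-2697 v2 (130W TDP)": 86,  # Tcase in °C
    }

    for name, tcase in cpus.items():
        print(f"{name}: Tcase {tcase}°C, headroom above inlet = {tcase - INLET_C}°C")

In other words, the low-power chip is the one living closest to its limit: only 18°C of headroom at the maximum inlet temperature, versus 39°C for the 130W part.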

Supermicro 6027R-73DARF (2U Chassis)

    CPU             Two Intel Xeon E5-2697 v2 (2.7GHz, 12c, 30MB L3, 130W)
                    or two Intel Xeon E5-2650L v2 (1.7GHz, 10c, 25MB L3, 70W)
    RAM             64GB (8x8GB) DDR3-1600 Samsung M393B1K70DH0-CK0
    Internal Disks  8GB flash disk to boot; 1GbE link to iSCSI SAN
    Motherboard     Supermicro X9DRD-7LN4F
    Chipset         Intel C602J
    BIOS Version    R 3.0a (December 6, 2013)
    PSU             Supermicro 740W PWS-741P-1R (80+ Platinum)

All C-states are enabled in both the BIOS and ESXi.
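As a sanity check on the C-state side, below is a minimal sketch that lists the idle states a host exposes and whether each is enabled. It uses the standard Linux cpuidle sysfs interface; ESXi does not expose this path, so treat it as an illustration for a comparable bare-metal Linux install, not a recipe for our ESXi hosts:

    # List the C-states the Linux kernel exposes for CPU 0 and whether each
    # is enabled. Uses the standard cpuidle sysfs tree; note that ESXi does
    # not expose this interface -- this applies to a bare-metal Linux host.

    import glob
    import os

    def read(path):
        with open(path) as f:
            return f.read().strip()

    for state in sorted(glob.glob("/sys/devices/system/cpu/cpu0/cpuidle/state*")):
        name = read(os.path.join(state, "name"))
        disabled = read(os.path.join(state, "disable")) == "1"  # "0" = enabled
        latency = read(os.path.join(state, "latency"))  # exit latency in µs
        print(f"{name}: {'disabled' if disabled else 'enabled'}, "
              f"exit latency {latency} µs")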

Comments (48)

  • bobbozzo - Tuesday, February 11, 2014 - link

    "The main energy gobblers are the CRACs"

    Actually, the IT equipment (servers & networking) use more power than the cooling equipment.
    ref: http://www.electronics-cooling.com/2010/12/energy-...
    "The IT equipment usually consumes about 45-55% of the total electricity, and total cooling energy consumption is roughly 30-40% of the total energy use"

    Thanks for the article though.
  • JohanAnandtech - Wednesday, February 12, 2014 - link

That is the whole point, isn't it? The IT equipment uses power to be productive; everything else supports the IT equipment and is thus overhead that you have to minimize. Of that facility overhead, the CRACs are the biggest power gobblers.
  • bobbozzo - Tuesday, February 11, 2014 - link

So, who is volunteering to work in a datacenter with 35-40°C cool aisles and 40-45°C hot aisles?
  • Thud2 - Wednesday, February 12, 2014 - link

80,0000, that sounds like a lot.
  • CharonPDX - Monday, February 17, 2014 - link

    See also Intel's long-term research into it, at their New Mexico data center: http://www.intel.com/content/www/us/en/data-center...
  • puffpio - Tuesday, February 18, 2014 - link

    On the first page you mention "The "single-tenant" data centers of Facebook, Google, Microsoft and Yahoo that use "free cooling" to its full potential are able to achieve an astonishing PUE of 1.15-1."

This article says that Facebook has achieved a PUE of 1.07 (https://www.facebook.com/note.php?note_id=10150148...)
  • lwatcdr - Thursday, February 20, 2014 - link

So I wonder when Google will build a data center in, say, North Dakota. Combine the ample wind power with the cold and it looks like a perfect place for a green data center.
  • Kranthi Ranadheer - Monday, April 17, 2017 - link

    Hi Guys,

Does anyone by chance have recorded data of temperature and processor speed in a server room? Or can someone give me the high-end and low-end values measured in any server room, considering the relationship between temperature and processor speed?
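As a closing note on the PUE figures traded in the comments above: PUE is total facility power divided by IT equipment power, so the arithmetic behind the quoted numbers looks like this (the 10 MW facility draw is a made-up figure purely for illustration):

    # PUE = total facility power / IT equipment power.
    # The 10 MW facility figure is made up purely for the arithmetic.

    total_facility_kw = 10_000  # hypothetical facility draw

    for pue in (1.07, 1.15, 2.0):
        it_kw = total_facility_kw / pue
        overhead_kw = total_facility_kw - it_kw
        print(f"PUE {pue}: IT load {it_kw:.0f} kW, overhead {overhead_kw:.0f} kW "
              f"({overhead_kw / total_facility_kw:.0%} of total)")

At Facebook's claimed 1.07, only about 7% of facility power is overhead; at the PUE of roughly 2 implied by the "45-55% IT equipment" figure quoted earlier, half of the power never reaches the IT gear at all.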
