vApus FOS results

In the first test, we use the vApus FOS test, which pushes the servers to 90-100% CPU load. This performance test is not all that important, as these kinds of server/application combinations are not supposed to run at such high CPU loads. Also remember that this is not an AMD versus Intel graph. The AMD-based Open Compute server is used as a memcached server, a role that typically hogs RAM but does not stress the CPU; the Intel Open Compute server is built to be a CPU-intensive web application server. Thus, you should not compare them directly.

The real comparison is between the HP DL380 G7 and the Facebook Open Compute Xeon server, which both use the same platform: the same CPUs, the same amount of RAM, and so on. The big question we want to answer is whether Facebook's server, built specifically for low power use and cloud applications, can offer a better performance/watt ratio than one of the best and most popular "general purpose" servers.

vApus FOS Performance

At the highest performance levels, the HP DL380 G7 is about 11% faster than the Open Compute alternative. We suspect that the Open Compute server is configured to prefer certain lower power, lower performance ACPI settings. However, as this server is not meant to be an HPC server, this matters little: a web server or even a virtualized server should not be run at 95-100% CPU load anyway. Let us take a look at the corresponding power consumption.

vApus FOS Power Consumption

To deliver that 11% higher performance, the HP server has to consume about 22% more power, so the Open Compute server delivers a higher performance/watt even at high performance levels. The advantage is small, but again, these servers are not meant to operate at 95%+ CPU load.
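
To put those two numbers together, here is a minimal sketch of the performance-per-watt arithmetic, normalizing both servers to the Open Compute figures. The two percentages are the only inputs; no absolute scores or wattages are assumed.

```python
# Relative performance/watt, normalized to the Open Compute server
# (= 1.0 for both performance and power draw).
hp_perf, hp_power = 1.11, 1.22   # HP DL380 G7: ~11% faster, ~22% more power
oc_perf, oc_power = 1.00, 1.00   # Open Compute Xeon server (baseline)

hp_ppw = hp_perf / hp_power      # ~0.91
oc_ppw = oc_perf / oc_power      # 1.00

advantage = (oc_ppw / hp_ppw - 1) * 100
print(f"Open Compute performance/watt advantage: ~{advantage:.0f}%")  # ~10%
```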

We also checked the power consumption at idle.

Idle Power

The results are amazing: the Open Compute server needs only 74% of the power of the HP, saving a solid 42W when running idle. Also remember that the HP DL380 G7 is already one of the best servers on the market from a power consumption point of view.
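
Those two figures also pin down the implied absolute idle numbers. A minimal sketch, assuming the quoted 74% and 42W values are exact:

```python
# Implied absolute idle figures from the two quoted numbers:
# the Open Compute server draws 74% of the HP's idle power, a 42 W gap.
saving_w = 42.0
oc_fraction = 0.74

hp_idle = saving_w / (1 - oc_fraction)   # 42 / 0.26 ≈ 162 W
oc_idle = hp_idle * oc_fraction          # ≈ 120 W
print(f"HP DL380 G7 idle: ~{hp_idle:.0f} W")   # ~162 W
print(f"Open Compute idle: ~{oc_idle:.0f} W")  # ~120 W
```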

Let us see what happens if we go for a real-world scenario.

Comments

  • iwod - Thursday, November 3, 2011

    And I am guessing Facebook has at least 10 times more than what is shown in that image.
  • DanNeely - Thursday, November 3, 2011

    Hundreds or thousands of times more is more likely. FB's grown to the point of building its own data centers instead of leasing space in other people's. Large data centers consume multiple megawatts of power. At ~100W/box, that's 5-10k servers per MW (depending on cooling costs); so that's tens of thousands of servers per data center, with data centers scattered globally to minimize latency and traffic over long-haul trunks.
  • pandemonium - Friday, November 4, 2011

    I'm so glad there are other people out there - other than myself - who see the big picture of where these 'minuscule savings' go. :)
  • npp - Thursday, November 3, 2011

    What you're talking about is how efficient the power factor correction circuits of those PSUs are, not how power efficient the units themselves are... The title is a bit misleading.
  • NCM - Thursday, November 3, 2011

    "Only" 10-20% power savings from the custom power distribution????

    When you've got thousands of these things in a building, consuming untold MW, you'd kill your own grandmother for half that savings. And water cooling doesn't save any energy at all—it's simply an expensive and more complicated way of moving heat from one place to another.

    For those unfamiliar with it, 480 VAC three-phase is a widely used commercial/industrial voltage in US power systems, yielding 277 VAC line-to-ground from each of its phases. I'd bet that even the light fixtures in the data center photo are off-the-shelf 277V fluorescents of the kind typically used in manufacturing facilities with 480V power. So this isn't a custom power system in the larger sense (although the server-level PSUs are custom) but rather some very creative leverage of existing practice.

    Remember also that there's a double saving from reduced power losses: first from the electricity you don't have to buy, and then from the power you don't have to use for cooling those losses.
  • npp - Thursday, November 3, 2011

    I don't remember arguing that 10% power savings are minor :) Maybe you should've posted your thoughts as a regular post, and not a reply.
  • JohanAnandtech - Thursday, November 3, 2011

    Good post but probably meant to be a reply to erwinerwinerwin ;-)
  • NCM - Thursday, November 3, 2011

    Johan writes: "Good post but probably meant to be a reply to erwinerwinerwin ;-)"

    Exactly.
  • tiro_uspsss - Thursday, November 3, 2011

    Is it just me, or does placing the Xeons *right* next to each other seem like a bad idea in regards to heat dissipation? :-/

    I realise the aim is performance/watt but, ah, is there any advantage, power usage-wise, if you were to place the CPUs further apart?
  • JohanAnandtech - Thursday, November 3, 2011

    No. The most important rule is that the warm air of one heatsink should not enter the stream of cold air of the other. So placing them next to each other is the best way to do it, and placing them serially the worst.

    Placing them further apart will not accomplish much IMHO. Most of the heat is drawn away to the back of the server; the heatsinks do not get very hot. You would also lower the airspeed between the heatsinks.
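
Two quick sketches of the arithmetic that comes up in the comments above. First, DanNeely's servers-per-megawatt estimate, treating cooling and distribution overhead as a PUE-style multiplier; the 1.0-2.0 range here is our assumption for illustration, not a figure from his comment:

```python
# DanNeely's estimate: ~100 W per server and 5-10k servers per MW,
# depending on how much overhead cooling adds (modeled here as PUE).
WATTS_PER_SERVER = 100
FACILITY_WATTS = 1_000_000  # 1 MW of total facility power

for pue in (1.0, 1.5, 2.0):  # assumed overhead factors, for illustration
    servers = FACILITY_WATTS / (WATTS_PER_SERVER * pue)
    print(f"PUE {pue}: ~{servers:,.0f} servers per MW")
# PUE 1.0: ~10,000 servers per MW
# PUE 2.0: ~5,000 servers per MW
```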
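
Second, the 480 V / 277 V relationship NCM describes is the standard three-phase conversion from line-to-line to line-to-neutral voltage:

```python
import math

# Line-to-neutral (phase) voltage of a 480 VAC three-phase wye system:
# V_phase = V_line / sqrt(3), which is where the 277 V figure comes from.
v_line = 480.0
v_phase = v_line / math.sqrt(3)
print(f"{v_phase:.1f} V")  # ≈ 277.1 V
```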
