Facebook Technology Overview

Facebook had 22 million active users in mid-2007; fast forward to 2011 and the site now has 800 million active users, 400 million of whom log in every day. Facebook has grown exponentially, to say the least! Coping with that kind of exceptional growth while offering a reliable and cost-effective service requires out-of-the-box thinking. Typical high-end, brute-force, ultra-redundant software and hardware platforms (for example, Oracle RAC databases running on top of a few IBM Power 795 systems) won't do: they're too complicated, too power hungry, and, most importantly, far too expensive for such extreme scaling.

Facebook first focused on thoroughly optimizing its software architecture, which we will cover briefly. Next, Facebook's engineers decided to build their own servers to minimize the power and cost of their server infrastructure. Facebook Engineering then open sourced these designs to the community; you can download the specifications and mechanical CAD designs from the Open Compute site.

The Facebook Open Compute server design is ambitious: “The result is a data center full of vanity free servers which is 38% more efficient and 24% less expensive to build and run than other state-of-the-art data centers.” Even better, Facebook Engineering sent two of these Open Compute servers to our lab for testing, allowing us to see how these servers compare to other solutions on the market.

As a competing solution we have an HP DL380 G7 in the lab. Recall from our last server clash that the HP DL380 G7 was one of the most power-efficient servers of 2010. Is a server "targeted at the cloud" and designed by Facebook Engineering able to beat one of the best and most popular general purpose servers? That is the question we'll answer in this article.

Cloud Computing = x86 and Open Source

62 Comments


  • npp - Thursday, November 03, 2011 - link

    What you're talking about is how efficient the power factor correction circuits of those PSUs are, not how power efficient the units themselves are... The title is a bit misleading.
  • NCM - Thursday, November 03, 2011 - link

    "Only" 10-20% power savings from the custom power distribution????

    When you've got thousands of these things in a building, consuming untold MW, you'd kill your own grandmother for half that savings. And water cooling doesn't save any energy at all—it's simply an expensive and more complicated way of moving heat from one place to another.

    For those unfamiliar with it, 480 VAC three-phase is a widely used commercial/industrial voltage in USA power systems, yielding 277 VAC line-to-ground from each of its phases. I'd bet that even those light fixtures in the data center photo are also off-the-shelf 277V fluorescents of the kind typically used in manufacturing facilities with 480V power. So this isn't a custom power system in the larger sense (although the server level PSUs are custom) but rather some very creative leverage of existing practice.

    Remember also that there's a double saving from reduced power losses: first from the electricity you don't have to buy, and then from the power you don't have to use for cooling those losses.
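For readers unfamiliar with three-phase arithmetic, the 480 V / 277 V relationship NCM mentions is simply the line-to-line voltage divided by √3. The snippet below is only a quick check of that textbook relation, not a figure taken from the article:

```python
import math

# Line-to-neutral (phase) voltage of a 480 V line-to-line three-phase system.
v_line_to_line = 480.0
v_line_to_neutral = v_line_to_line / math.sqrt(3)

print(f"{v_line_to_line:.0f} V line-to-line -> {v_line_to_neutral:.0f} V line-to-neutral")
# 480 V line-to-line -> 277 V line-to-neutral
```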
  • npp - Thursday, November 03, 2011 - link

    I don't remember arguing that 10% power savings are minor :) Maybe you should've posted your thoughts as a regular post, and not a reply.
  • JohanAnandtech - Thursday, November 03, 2011 - link

    Good post but probably meant to be a reply to erwinerwinerwin ;-)
  • NCM - Thursday, November 03, 2011 - link

    Johan writes: "Good post but probably meant to be a reply to erwinerwinerwin ;-)"

    Exactly.
  • tiro_uspsss - Thursday, November 03, 2011 - link

    Is it just me, or does placing the Xeons *right* next to each other seem like a bad idea in regards to heat dissipation? :-/

    I realise the aim is performance/watt but, ah, is there any advantage, power usage-wise, if you were to place the CPUs further apart?
  • JohanAnandtech - Thursday, November 03, 2011 - link

    No. The most important rule is that the warm air of one heatsink should not enter the stream of cold air of the other. So placing them next to each other is the best way to do it; placing them serially is the worst.

    Placing them further apart will not accomplish much IMHO. Most of the heat is drawn away to the back of the server, and the heatsinks do not get very hot. You would also lower the airspeed between the heatsinks.
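To illustrate why serial placement is the worst case: the downstream heatsink inhales air that the upstream CPU has already heated, and the preheating is roughly ΔT = P / (mass flow × specific heat of air). The wattage, airflow, and inlet temperature below are illustrative assumptions, not measurements from these servers; this is only a back-of-the-envelope sketch of the effect Johan describes.

```python
# Rough sketch: why serial CPU placement is worse than side-by-side.
# All numbers are illustrative assumptions, not measured values.

AIR_DENSITY = 1.2   # kg/m^3
AIR_CP = 1005.0     # J/(kg*K), specific heat of air

def exhaust_temp_rise(cpu_power_w, airflow_m3s):
    """Temperature rise of the air after absorbing cpu_power_w of heat."""
    mass_flow = AIR_DENSITY * airflow_m3s      # kg/s
    return cpu_power_w / (mass_flow * AIR_CP)  # Kelvin

cpu_power = 95.0   # W, typical Xeon TDP (assumption)
airflow = 0.005    # m^3/s of air through one heatsink (~10.6 CFM, assumption)
inlet = 25.0       # degrees C, cold-aisle air (assumption)

rise = exhaust_temp_rise(cpu_power, airflow)
print(f"Side by side: both CPUs see {inlet:.0f} C inlet air")
print(f"In series:    the downstream CPU sees about {inlet + rise:.0f} C "
      f"(+{rise:.1f} K of preheating)")
```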
  • harrkev - Thursday, November 03, 2011 - link

    You should look again at the sine-wave plots. Power factor has more to do with the phase of the current than with how closely the waveform resembles a sine wave.

    As an example, a purely capacitive or purely inductive load will draw a perfect sine-wave current (completely out of phase with the voltage) yet have a power factor very close to zero...

    So, those graphs do not really tell us much unless you actually crank the numbers to calculate the real power factor.

    http://en.wikipedia.org/wiki/Power_factor#Non-line...
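harrkev's point is that true power factor is real power divided by apparent power, so it can be near zero even when the current is a clean sine wave. Here is a minimal sketch of that calculation on sampled waveforms, using generic illustrative values rather than the article's measurements:

```python
import math

# True power factor = real power / apparent power, from sampled waveforms.
def power_factor(voltage_samples, current_samples):
    n = len(voltage_samples)
    real_power = sum(v * i for v, i in zip(voltage_samples, current_samples)) / n
    v_rms = math.sqrt(sum(v * v for v in voltage_samples) / n)
    i_rms = math.sqrt(sum(i * i for i in current_samples) / n)
    return real_power / (v_rms * i_rms)

# One cycle of a 230 V RMS sine wave, sampled 1000 times (illustrative values).
n = 1000
volts = [230 * math.sqrt(2) * math.sin(2 * math.pi * k / n) for k in range(n)]

for phase_deg in (0, 45, 90):
    phase = math.radians(phase_deg)
    amps = [5 * math.sqrt(2) * math.sin(2 * math.pi * k / n - phase) for k in range(n)]
    print(f"current lagging {phase_deg:2d} deg -> power factor {power_factor(volts, amps):.2f}")
# 0 deg -> 1.00, 45 deg -> 0.71, 90 deg -> 0.00 (perfect sine wave, near-zero PF)
```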
  • ezekiel68 - Thursday, November 03, 2011 - link

    On page 2:

    "The next piece in the Facebook puzzle is that the Open Source tools are Memcached."

    In fact, the tools are not memcached. Instead, software objects from the PHP/C++ stack, programmed by the engineers, are stored in Memcached. Side note: those in the know pronounce it "mem-cache-dee," emphasizing with the last syllable that it is a network daemon (similar to how the DNS server "bind" is pronounced "bin-dee"). So the next piece is Memcached, but the tools are not 'memcached'.
  • JohanAnandtech - Thursday, November 03, 2011 - link

    That is something that went wrong in the final editing by Jarred. Sorry about that, and I feel bad about dragging Jarred into this, but unfortunately that is what happened. As you can see further on, "Facebook mostly uses memcached to alleviate database load"; I was not under the impression that the "Open Source tools are Memcached." :-)
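To make the "memcached alleviates database load" point concrete, here is a minimal cache-aside sketch of the pattern being discussed: objects produced by the application tier are stored in memcached so that repeated reads skip the database. It uses the common python-memcached client; the `query_database` helper is hypothetical, and none of this is Facebook's actual code.

```python
import memcache  # python-memcached client

mc = memcache.Client(["127.0.0.1:11211"])

def query_database(user_id):
    """Hypothetical stand-in for an expensive database query."""
    return {"id": user_id, "name": "example"}

def get_user(user_id):
    """Cache-aside read: try memcached first, fall back to the database."""
    key = f"user:{user_id}"
    user = mc.get(key)
    if user is None:
        user = query_database(user_id)  # cache miss: hit the database
        mc.set(key, user, time=300)     # store the object for five minutes
    return user
```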
