Facebook's "Open Compute" Server tested
by Johan De Gelas on November 3, 2011 12:00 AM EST
Benchmark Configuration
HP ProLiant DL380 G7
CPU | Two Intel Xeon X5650 at 2.66 GHz
RAM | 6 x 4GB Kingston DDR3-1333 FB372D3D4P13C9ED1
Motherboard | HP proprietary
Chipset | Intel 5520
BIOS version | P67
PSU | 2 x HP PS-2461-1C-LF 460W HE
We have three servers to test. The first is our own standard off-the-shelf server, an HP DL380 G7. This server is the natural challenger for the Facebook design, as it is one of the most popular and efficient general-purpose servers on the market.
Since this server targets a very broad audience, it cannot be as lean and mean as the Open Compute servers.
Facebook's Open Compute Xeon version
CPU | Two Intel Xeon X5650 at 2.66 GHz
RAM | 6 x 4GB Kingston DDR3-1333 FB372D3D4P13C9ED1
Motherboard | Quanta Xeon Open Compute 1.0
Chipset | Intel 5500 Rev 22
BIOS version | F02_3A16
PSU | Power-One SPAFCBK-01G 450W
The Open Compute Xeon server is configured as close to our HP DL380 G7 as possible.
Facebook's Open Compute AMD version
CPU | Two AMD Opteron "Magny-Cours" 6128 HE at 2.0 GHz
RAM | 6 x 4GB Kingston DDR3-1333 FB372D3D4P13C9ED1
Motherboard | Quanta AMD Open Compute 1.0
Chipset |
BIOS version | F01_3A07
PSU | Power-One SPAFCBK-01G 450W
The benchmark numbers of the AMD Open Compute server are included for information only; no direct comparison with the other two systems is possible. The AMD system is better equipped than the Intel version, as it has more DIMM slots and uses low-power HE CPUs.
Common Storage system
Each server has an Adaptec 5085 PCIe x8 RAID controller (driver aacraid v1.1-5.1[2459]) connected to six Seagate Cheetah 300GB 15,000 RPM SAS disks in a Promise JBOD J300s enclosure.
Software configuration
VMware ESXi 5.0.0 (build 469512 - VMkernel SMP build 348481, Jan-12-2011, x86_64). All VMDKs use thick provisioning and are set to independent, persistent mode. The host power policy is Balanced Power.
Other notes
Both servers were fed by a standard European 230V (16 Amps max) power line. The room temperature was monitored and kept at 23°C.
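For context, the headroom of such a feed is easy to compute. A minimal sketch; the 0.95 power factor and the 250 W average per-node draw are hypothetical assumptions for illustration, not figures from the article:

```python
# European lab feed: 230 V, 16 A max.
volts, amps = 230, 16
apparent_power_va = volts * amps          # 3680 VA ceiling on the line

# Usable real power, assuming a power factor of 0.95 (hypothetical):
real_power_w = apparent_power_va * 0.95   # 3496.0 W

# Nodes one feed could carry at a hypothetical 250 W average draw each:
nodes_per_feed = int(real_power_w // 250)
print(apparent_power_va, nodes_per_feed)  # → 3680 13
```

In practice a datacenter would derate the circuit further (typically to 80% of its rating) before planning node counts.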
67 Comments
mrsprewell - Thursday, November 3, 2011 - link
This review claims the Facebook server is more efficient than HP's, but I see no proof. They only compare the power supplies' power factor performance. But what about efficiency? I guess the lab has no 277Vac input (which most datacenters don't have either) and they can only power the servers at 208/230Vac. As a result, they can't compare the servers' efficiency. Also, they didn't describe at what load condition the test was done... I am sure the HP server has better efficiency than the Facebook one at 230Vac input. The only good thing about the Facebook one is that it might not need a UPS. But the consequence is that you have to use the battery rack from Facebook, which is not standard and can be costly. Also, it is nice to know that the Power-One power supply will overheat when using DC input for more than 10 min... hahahah... that's a smart way to cut costs on the power supply...
marc1000 - Thursday, November 3, 2011 - link
what does this "noSQL" mean??? they don't use any relational database at all? how does facebook store information? plain files?
erple2 - Thursday, November 3, 2011 - link
Google doesn't use relational databases to store and retrieve its information either. Neither does the high-performance data warehouse that was developed on a program I worked on a few years ago - we migrated away from Oracle for cost and performance reasons. I think that the days of the relational database are numbered. The mainstay of the relational database (stored procedures) is quickly showing its age, given a complete inability to debug issues with them outside of expensive specialized tools. We've been replacing them as much as we can with an abstraction layer.
But we still have goofy constructs to deal with (joins just don't make sense from an OO perspective).
I think that relational databases' days are numbered.
FunBunny2 - Saturday, November 5, 2011 - link
RDBMS is numbered only for those who've no idea that what they think is "new" is just their grandpappy's olde COBOL crap. Just because you're so young, so inexperienced, and so stupid that you can't model data intelligently doesn't mean you've got it right. But if you're in love with getting paid by LOC metrics, then Back to the Future is what you want. Remember, Facebook and Twitter and such are just toys; they ain't serious.
Ceencee - Wednesday, November 9, 2011 - link
This is completely false; RDBMS have their place but are also an extremely inefficient way to access large amounts of data, as ACID compliance hamstrings many of the operations. As someone who would consider himself an Oracle expert, I would say that NoSQL databases like Cassandra and HBase are really exciting technology for the future.
Starfireaw11 - Thursday, November 3, 2011 - link
I can see how the OpenCompute compares well to a DL380G7 in terms of performance vs power consumption, and it may compare well in price (those details aren't readily available), but the things the OpenCompute has going for it are that it has been stripped of unneeded components, fitted with efficient fans, and matched to efficient power supplies. From what I have seen and done in and around datacenters, these are exactly the objectives of a blade-based system, where you can have large, efficient power supplies, large fans, and missing or shared devices that are non-critical. I would like to see this article modified to include a comparison against a blade-based solution of equivalent specification to see how that stacks up - if you can swing it, use a fully populated blade chassis and average out the results against the number of blades. The blades also have the advantage of allowing approximately 14 servers in a 9 RU space - approximately 70 servers per 45 RU rack, vs the 30-odd of the OpenCompute.
Whenever I need to put equipment into a datacenter, the important specifications are performance, cost price, power efficiency, size, weight, and heat. Whenever a large number of servers is required, blades always stack up well, possibly with the exception of weight where there are limitations on floor loading in a datacenter, but they do compare well on weight against equivalent-performing non-blade servers (such as 28 RU of DL380G7s).
Doby - Saturday, November 5, 2011 - link
Blades could be favorable, at least if you take into account the infrastructure reduction, such as networking ports. Thing is, if you look at the HP products that are available, there are better alternatives. HP, as the specific example, has a product called the SL6500. It's a second-generation product specifically designed for these types of environments, and meant to compete with exactly the type of system that Facebook created. A comparative use case would be an 8-node configuration, which would take up 4U of rack space and could run off of 2-4 PSUs shared between the nodes. Additionally, it has a shared redundant fan configuration that uses larger, more efficient fans to cool the chassis. It's like blades, but doesn't have any shared networking, is made specifically to be lighter and cheaper, and has options for lower-cost nodes.
The DL380 has a few things working against it in this comparison: hot-swappable drives, enterprise-class onboard management (iLO, not just a basic BMC), redundant fans, scalable power infrastructure, 6 PCI-E slots, an onboard high-performance RAID controller, 4 NICs, simplified single-tool and/or tool-less serviceability, and even a display for component failures.
The SL6500 would be able to have very basic nodes, with non-hot-swap SATA drives, basic SATA RAID functionality, dual NICs, and features much more in line with the Facebook system. Sure, it wouldn't be as specific to Facebook's needs, but it would be a more interesting comparison, as it would at least compare two systems designed for similar roles: not a general enterprise compute node against a purpose-built scale-out system, but two scale-out platforms.
Ceencee - Wednesday, November 9, 2011 - link
The SL6500 chassis with SL160s G6 servers seems to be a good solution for storage-level nodes. Wonder if Facebook will release a storage node spec next?
Penti - Saturday, November 5, 2011 - link
You have different cooling requirements as well. Obviously Google's or Facebook's approach isn't about maximum density per rack, but they are also not using a traditional hot aisle/cold aisle setup. Nor will all datacenters be able to handle your 20-30kW rack, in terms of both cooling and power.
The idle power chart shows HP 160W, Open Compute 118W. That's a 42W savings, not 32W.
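The commenter's arithmetic checks out; a quick sketch using the chart's idle figures (the $0.10/kWh electricity rate is a hypothetical assumption, not a figure from the article):

```python
# Idle power figures from the review's chart (watts).
hp_idle = 160
open_compute_idle = 118

savings_w = hp_idle - open_compute_idle   # 42 W, as the commenter notes

# Annualized per-server energy saved at idle, at a hypothetical
# electricity rate of $0.10/kWh (not from the article):
kwh_per_year = savings_w * 24 * 365 / 1000
cost_per_year = kwh_per_year * 0.10
print(savings_w, round(kwh_per_year, 1), round(cost_per_year, 2))  # → 42 367.9 36.79
```

At Facebook's scale, a 42 W per-server idle difference compounds quickly across tens of thousands of nodes.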