Facebook's "Open Compute" Server tested
by Johan De Gelas on November 3, 2011 12:00 AM EST
Introducing Our Open Virtualization Benchmark
vApus Mark II is our own virtualization benchmark suite that tests how well servers cope with virtualizing "heavy duty" applications. We explained the benchmark methodology here. The beauty of vApus Mark II is that:
- We test with real-world applications used in enterprises all over the world
- We can measure response times
- It can scale from 8-thread to 80-thread servers
- It is lightweight on the client side: one humble client is enough to bring the most massive server to its knees. For a virtualized server or cluster, you only need a few clients.
There is one big disadvantage, however: the OLAP and web applications are the intellectual property of several software vendors, so we can't let third parties verify our tests. To deal with this, the Sizing Servers Lab developed a new benchmark, called vApus For Open Source workloads, in short vApus FOS.
vApus FOS uses a methodology similar to that of vApus Mark II, with "tiles". The exact software configuration may still change a bit, as we tested with the 0.9 version. One vApus FOS 0.9 tile uses four different VMs, consisting of:
- A PhpBB (Apache2, MySQL) website with one virtual CPU and 1GB RAM. The website uses about 8GB of disk space. We simulate up to 50 concurrent users who press keys every 0.6 to 2.4 s.
- The same VM but with two vCPUs.
- An OLAP MySQL database that is used by an online webshop. The VM gets two vCPUs and 1GB RAM. The database is about 1GB, with up to 500 connections active.
- Last but not least: the Zimbra VM. VMware's open source groupware offering is by far the most I/O intensive VM. This VM gets two vCPUs and 2GB RAM, with up to 100 concurrent users active.
All VMs are based on a minimal CentOS 5.6 install with VMware Tools added. vApus FOS can also be run on different hypervisors: we already tried KVM, but encountered a lot of KVM-specific problems.
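The tile described above can be summarized as a simple data structure. This is only an illustrative sketch of the resource allocations listed in this article; the field names are hypothetical and not the benchmark's actual configuration format:

```python
# Sketch of one vApus FOS 0.9 tile, per the VM descriptions above.
# Field names are illustrative, not the benchmark's real config format.
TILE = [
    {"vm": "phpbb-1", "app": "PhpBB (Apache2, MySQL)", "vcpus": 1, "ram_gb": 1,
     "load": "up to 50 concurrent users, keypress every 0.6-2.4 s"},
    {"vm": "phpbb-2", "app": "PhpBB (Apache2, MySQL)", "vcpus": 2, "ram_gb": 1,
     "load": "up to 50 concurrent users, keypress every 0.6-2.4 s"},
    {"vm": "olap",    "app": "MySQL OLAP (webshop)",   "vcpus": 2, "ram_gb": 1,
     "load": "up to 500 active connections"},
    {"vm": "zimbra",  "app": "Zimbra groupware",       "vcpus": 2, "ram_gb": 2,
     "load": "up to 100 concurrent users, most I/O intensive"},
]

# Per-tile totals: 7 vCPUs and 5GB RAM, which is what lets the suite
# scale by simply running more tiles on bigger servers.
total_vcpus = sum(vm["vcpus"] for vm in TILE)
total_ram_gb = sum(vm["ram_gb"] for vm in TILE)
print(total_vcpus, total_ram_gb)
```

Adding up the allocations shows why the tile approach scales: each tile is a fixed, modest bundle of resources, so a larger server is simply loaded with more tiles.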
iwod - Thursday, November 3, 2011 - link
And I am guessing Facebook has at least 10 times more than what is shown on that image.
DanNeely - Thursday, November 3, 2011 - link
Hundreds or thousands of times more is more likely. FB has grown to the point of building its own data centers instead of leasing space in other people's. Large data centers consume multiple megawatts of power. At ~100W/box, that's 5-10k servers per MW (depending on cooling costs); so that's tens of thousands of servers per data center, and data centers scattered globally to minimize latency and traffic over long-haul trunks.
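The back-of-the-envelope range in this comment checks out. A quick sketch, using the comment's ~100 W/box figure and assuming a PUE (power usage effectiveness) between 1.0 and 2.0 to represent cooling overhead:

```python
# Rough servers-per-megawatt estimate from the comment's figures.
watts_per_server = 100        # ~100 W/box, per the comment above
facility_power_w = 1_000_000  # 1 MW of facility power

# PUE approximates cooling/distribution overhead: 1.0 means no overhead,
# 2.0 means cooling and distribution cost as much as the IT load itself.
for pue in (1.0, 2.0):
    it_power_w = facility_power_w / pue
    servers = int(it_power_w / watts_per_server)
    print(f"PUE {pue}: ~{servers} servers per MW")
# Yields 10,000 servers/MW with no overhead and 5,000 at PUE 2.0,
# matching the 5-10k range in the comment.
```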
pandemonium - Friday, November 4, 2011 - link
I'm so glad there are other people out there, other than myself, who see the big picture of where these 'minuscule savings' go. :)
npp - Thursday, November 3, 2011 - link
What you're talking about is how efficient the power factor correction circuits of those PSUs are, not how power efficient the units themselves are... The title is a bit misleading.
NCM - Thursday, November 3, 2011 - link
"Only" 10-20% power savings from the custom power distribution????
When you've got thousands of these things in a building, consuming untold MW, you'd kill your own grandmother for half that savings. And water cooling doesn't save any energy at all—it's simply an expensive and more complicated way of moving heat from one place to another.
For those unfamiliar with it, 480 VAC three-phase is a widely used commercial/industrial voltage in USA power systems, yielding 277 VAC line-to-ground from each of its phases. I'd bet that even those light fixtures in the data center photo are also off-the-shelf 277V fluorescents of the kind typically used in manufacturing facilities with 480V power. So this isn't a custom power system in the larger sense (although the server level PSUs are custom) but rather some very creative leverage of existing practice.
Remember also that there's a double saving from reduced power losses: first from the electricity you don't have to buy, and then from the power you don't have to use for cooling those losses.
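The 277 V figure NCM cites follows directly from three-phase arithmetic: line-to-neutral voltage is line-to-line voltage divided by √3. A quick check:

```python
import math

# In a three-phase wye system, line-to-neutral voltage equals the
# line-to-line voltage divided by the square root of 3.
line_to_line_v = 480.0
line_to_neutral_v = line_to_line_v / math.sqrt(3)
print(f"{line_to_neutral_v:.1f} V")  # 277.1 V
```

This is why a 480 V three-phase distribution system can feed 277 V loads (such as the server PSUs and light fixtures mentioned above) without any extra transformation stage.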
npp - Thursday, November 3, 2011 - link
I don't remember arguing that 10% power savings are minor :) Maybe you should've posted your thoughts as a regular post, and not a reply.
JohanAnandtech - Thursday, November 3, 2011 - link
Good post but probably meant to be a reply to erwinerwinerwin ;-)
NCM - Thursday, November 3, 2011 - link
Johan writes: "Good post but probably meant to be a reply to erwinerwinerwin ;-)"
tiro_uspsss - Thursday, November 3, 2011 - link
Is it just me, or does placing the Xeons *right* next to each other seem like a bad idea in regards to heat dissipation? :-/
I realise the aim is performance/watt but, ah, is there any advantage, power usage-wise, if you were to place the CPUs further apart?
JohanAnandtech - Thursday, November 3, 2011 - link
No. The most important rule is that the warm air of one heatsink should not enter the stream of cold air of the other. So placing them next to each other is the best way to do it, and placing them serially is the worst.
Placing them further apart will not accomplish much IMHO. Most of the heat is drawn away to the back of the server, and the heatsinks do not get very hot. You would also lower the airspeed between the heatsinks.