Blade servers: server nirvana?

The idea behind blade servers is brilliant. Imagine that you need about twelve 1U servers, and you need them to be highly reliable, attached to a KVM switch, networked, and managed out of band. The result is that you have 24 power supplies, 12 KVM cables and at least 24 Ethernet cables (one management and one network link per server), and we are not even counting the cables to external storage or other devices.

What if you could put all 12 of these servers in one 6-7U chassis with 3 (2+1) or 4 (2+2) very big power supplies instead of 24 small ones, and let them share a network switch, KVM switch and management module? That is exactly what a blade server is: a 6U, 7U or sometimes 10U chassis that can hold about 8 to 14 hot-swappable, vertically placed "mini-servers" called blades.
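To put some rough numbers on that consolidation, the sketch below is a purely illustrative back-of-the-envelope in Python; the counts are assumptions that will vary by vendor and configuration, not figures from any particular product.

```python
# Back-of-the-envelope comparison: twelve discrete 1U servers versus one
# blade chassis with shared infrastructure. The counts are illustrative
# assumptions; real numbers depend on vendor and configuration.

SERVERS = 12

rack_of_1u_servers = {
    "power supplies":  SERVERS * 2,  # redundant pair in every server
    "KVM cables":      SERVERS,      # one per server
    "Ethernet cables": SERVERS * 2,  # one management + one network link each
}

blade_chassis = {
    "power supplies":  4,  # shared 2+2 redundant supplies for the whole chassis
    "KVM cables":      1,  # single uplink from the shared KVM switch
    "Ethernet cables": 2,  # uplinks from the shared Ethernet switch
}

for component, count in rack_of_1u_servers.items():
    print(f"{component}: {count} -> {blade_chassis[component]}")
```

The exact ratios differ from chassis to chassis, but the direction is always the same: per-server components become per-chassis components.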

A server blade is a complete server with one or two processors and associated memory, disk storage and network controllers. Each blade within a system chassis slides into a blade bay, much like hot-swappable hard disks in a storage rack. Sliding a blade into its bay also connects it to a shared backplane, which links it to the power supplies, DVD-ROM, Ethernet and/or Fibre Channel switches, KVM switch and so on.

[Image: an individual blade and a blade chassis]

It doesn't take a genius to see that a blade server chassis full of blades can be a more interesting option than a pile of 1U servers. The blade server should be easier to manage, offer more processing power in less space, and cost less, as many components can be shared instead of being replicated in each 1U server. Who needs 12 DVD players, 12 separate remote management modules, and 24 power supplies?

According to the four biggest players in the server world - namely Intel, IBM, HP and Dell - blade servers are the way to the new enlightenment, to Server Nirvana. And there is no doubt about it, blade servers are hot: blade server sales increased quite spectacularly last year, by 40% or more. They are certainly a very promising part of the server market... for some server applications.

The big four see the following advantages:
  • Reductions in cable complexity
  • Operational cost savings
  • Data center space savings
  • Lower acquisition costs
  • Improved high availability
  • More efficient power usage
The promises of reduced cable complexity, easier management (in most cases), space savings, and more processing power in the same rack space have all materialized. But as always, it is important not to be swept away by all the hype.

At the end of 2003, one year after the introduction of the blade server, IDC predicted that "the market share for blade servers will grow to 27% of all server units shipped in 2007" [2]. Current IDC estimates put blades at 5 to 7% of server unit shipments, so you can't help but wonder how IDC ever arrived at 27% for 2007. That doesn't stop IDC from predicting again: by 2010, blade servers will supposedly conquer 25% of the server market. The truth is that blade servers are not always the best solution, and they have quite a long way to go before they can completely replace rack servers.

Comments

  • AtaStrumf - Sunday, October 22, 2006 - link

    Interesting stuff! Keep up the good work!
  • LoneWolf15 - Thursday, October 19, 2006 - link

    I'm guessing this is possible, but I've never tried it...

    Wouldn't it be possible to use a blade server and just have the OS on each blade, but have a large, high-bandwidth (read: gigabit Ethernet) NAS box? That way, each blade would have, say, two small hard disks in RAID-1 with the boot OS for ensuring uptime, but any file storage would be redirected to RAID-5 volumes created on the NAS box(es). Sounds like the best of both worlds to me.
  • dropadrop - Friday, December 22, 2006 - link

    This is what we've had in all of the places I've worked at during the last 5-6 years. The term used is SAN, not NAS, and servers have traditionally been connected to it via fiber optics. It's not exactly cheap storage; actually, it's really damn expensive.

    To give you a picture, we just got a 22TB SAN at my new employer, and it cost way over $100,000. If you start counting the price per gigabyte, it's not cheap at all. Of course, this does not take into consideration the price of the fiber connections (cards on the servers, fiber switches, cables, etc.). Now a growing trend is to use iSCSI instead of fiber. iSCSI is SCSI over Ethernet and ends up being a lot cheaper (though not quite as fast).

    Apart from having central storage with higher redundancy, one advantage is performance: a SAN can stripe the data over all the disks in it. For example, we have a RAID stripe consisting of over 70 disks...
  • LoneWolf15 - Thursday, October 19, 2006 - link

    (Since I can't edit)

    I forgot to add that it even looks like Dell has some boxes like these that can be attached directly to their servers with cables (I don't remember, but it might be a SAS setup). Support for a large number of drives, and multiple RAID volumes if necessary.
  • Pandamonium - Thursday, October 19, 2006 - link

    I decided to give myself the project of creating a server for use in my apartment, and this article (along with its subsequent editions) should help me greatly in this endeavor. Thanks AT!
  • Chaotic42 - Sunday, August 20, 2006 - link

    This is a really interesting article. I just started working in a fairly large data center a couple of months ago, and this stuff really interests me. Power is indeed expensive for these places, but given the cost of the equipment and maintenance, it's not too bad. Cooling is a big issue though, as we have pockets of hot and cold air throughout the DC.

    I still can't get over just how expensive 9GB WORM media is and how insanely expensive good tape drives are. It's a whole different world of computing, and even our 8 CPU Sun system is too damned slow. ;)
  • at80eighty - Sunday, August 20, 2006 - link

    Target Reader here - SMB owner contemplating my options in the server route

    again - thank you

    you guys fucking \m/
  • peternelson - Friday, August 18, 2006 - link


    Blades are expensive, but not so bad on eBay (as is regular server gear, which is affordable second-hand).

    Blades can mix architectures; e.g. IBM Cell processor blades could mix with Pentium or maybe Opteron blades.

    How important U size is depends on whether it's YOUR rack or a datacentre rack. Cost per square foot is higher in a datacentre.

    Power is not just the cents per kWh paid to the utility supplier.

    It is also the cost of cabling and PDUs,
    the cost (and efficiency overhead) of the UPS,
    the cost of remote boot (APC MasterSwitch),
    the cost of a transfer switch to let you swap out UPS batteries,
    and the cost of having generator power waiting just in case.

    Some of these scale with capacity, so they cost more if you use more.

    Yes, virtualisation is important.

    IBM has been advertising server consolidation (i.e. no more invasion of beige boxes).

    But also look at STORAGE consolidation, e.g. an EMC array on a SAN. You get virtual storage across all platforms, adding disks as needed or moving free space virtually onto a different volume as needed. Unused data can migrate to slower drives or tape.
  • Tujan - Friday, August 18, 2006 - link

    "[(o)]/..\[(o)]"
  • Zaitsev - Thursday, August 17, 2006 - link

    quote:

    We are well aware that we don't have the monopoly on wisdom, so feel free to sent us feedback.


    Fourth paragraph of intro.

    Haven't finished the article yet, but I'm looking forward to the series.
