Lower acquisition costs?

In theory, the purchase price of a blade server should be lower than that of an equivalent number of rack servers, thanks to the reduction in duplicate components (DVD drives, power supplies, etc.) and the savings on KVM and Ethernet cabling. However, there is a complete lack of standardization between IBM, HP and Dell; each has a proprietary blade architecture. With no common standard, it is hard for other players to enter this market, which allows the big three to charge a substantial premium for their blade servers.

Sure, many studies - mostly sponsored by one of the big players [2] - show considerable savings, but they compare a fully populated blade chassis with the most expensive rack servers of the same vendor. If the blade chassis is only populated gradually over time, and you consider that competition in the rack server market is far more aggressive, you get a different picture. Most blade server offerings are considerably more expensive than their rack server alternatives: a blade chassis easily costs between $4,000 and $8,000, and individual blades are rarely, if at all, less expensive than their 1U counterparts.
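
As a rough illustration of why gradual population matters, consider the back-of-the-envelope sketch below (Python). Only the $4,000-$8,000 chassis range comes from the text above; the per-server prices are purely hypothetical assumptions, not vendor pricing.

    # Back-of-the-envelope comparison, not vendor pricing: the chassis price
    # range comes from the article; the per-server prices are assumptions.

    chassis_cost = 6000   # assumed mid-range blade chassis price ($4,000-$8,000)
    blade_cost = 3500     # hypothetical price per blade
    rack_1u_cost = 3500   # hypothetical price per comparable 1U rack server

    def blade_total(n_servers):
        """Acquisition cost when the chassis is only partially populated."""
        return chassis_cost + n_servers * blade_cost

    def rack_total(n_servers):
        return n_servers * rack_1u_cost

    for n in (2, 4, 8, 14):
        print(f"{n:2d} servers: blades ${blade_total(n):7,} vs rack ${rack_total(n):7,}")

    # With equal per-server prices the blade option always carries the chassis
    # premium; it only breaks even if cheaper blades, or the savings on cabling,
    # KVM and power supplies, offset the cost of the enclosure.

In other words, the blade chassis is an up-front investment that only pays for itself once the enclosure fills up and the shared infrastructure actually replaces per-server components.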

In the rack server space, the big OEMs have to compete with many well-established players such as Supermicro, Tyan and Rackable Systems. At the same time, large Taiwanese hardware players such as MSI, ASUS and Gigabyte have also entered this market, putting considerable pressure on the price of a typical rack server.

That kind of competition is barely a blip on the radar in the blade market, and it is certainly one reason why HP and IBM are putting so much emphasis on their blade offerings. Considering that acquisition costs still easily make up 40-50% of the total TCO, the market clearly needs more open standards to open the door to more competition. Supermicro plans to enter the market at the end of the year, and Tyan has made a first attempt with its Typhoon series, which is more of an HPC solution. It will be interesting to see how flexible these solutions are compared to those of the two biggest players, HP and IBM.

Flexibility

Most blade servers are not as flexible as rack servers. Quickly installing an extra RAID card to attach a storage rack for a high-performance database application is not possible. And what if you need a database server with a huge amount of RAM and you don't want to use a clustered database? Rack servers with 16 DIMM slots are easy to find, while blades are mostly limited to 4 or 8 slots. Blades cannot offer that kind of flexibility, or at best they offer it at a very high price.

In most cases, blades use 2.5-inch hard disks, which are more expensive and offer lower performance than their 3.5-inch counterparts. That is not really surprising, as blades have been built with a focus on density, trying to fit as much processing power into a given amount of rack space as possible. A typical blade today has at most about 140 GB of raw disk capacity, while quite a few 1U rack servers can offer 2 TB (4 x 500 GB).

Finally, of course, there is the lack of standardization, which prevents you from mixing the best solutions of different vendors in one chassis. Once you buy a chassis from a certain vendor, server blades and option blades must be bought from that same vendor.

Hybrid blades

The ideas behind blades - shared power, networking, KVM and management - are excellent, and it would be superb if they could be combined with the flexibility that current rack servers offer. Rackable Systems seems to be taking the first steps in that direction by enabling customers to use 3.5-inch hard disks and standard ATX motherboards in its "Scale Out" chassis, which makes it a lot more flexible and most likely less expensive too.

The alternative: the heavy rack server with "virtual blades"

One possible solution that could be a serious competitor for blade servers is a heavy-duty rack server running VMware's ESX Server. We'll explore virtualization in more detail in a coming article, but for now remember that ESX Server has very little overhead, in contrast to Microsoft's Virtual Server and VMware Server (GSX Server). Our first measurements show about a 5% performance decrease, which can easily be ignored.


Two physical CPUs, but 8 VMs with 1 CPU each

Using a powerful server with a lot of redundancy and running many virtual machines is in theory an excellent solution. Compared to a blade server, the CPU and RAM resources will be utilized much better. For example, suppose you have 8 CPU cores and 10 applications that you want to run in 10 different virtual machines. You can give one demanding application 4 cores, while the other 9 applications get only 2 cores each. The number of cores you assign to a certain VM is only the maximum amount of CPU power it will be able to use; the hypervisor shares the physical cores among the VMs as they actually need them, which is why the assigned total can exceed the physical core count.
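
To make that concrete, here is a minimal sketch in Python; the VM names and vCPU counts are hypothetical, taken from the example above, and this is not an actual VMware configuration. It simply shows that the assigned vCPUs add up to far more than the 8 physical cores, because each assignment is a ceiling, not a reservation.

    # Hypothetical illustration of vCPU overcommit on an 8-core host.
    # VM names and vCPU counts are assumptions based on the example above.

    physical_cores = 8

    # One demanding VM gets 4 vCPUs, nine lighter VMs get 2 vCPUs each.
    vms = {"db-vm": 4}
    vms.update({f"app-vm-{i}": 2 for i in range(1, 10)})

    total_vcpus = sum(vms.values())
    print(f"Total vCPUs assigned: {total_vcpus}")                    # 22
    print(f"Overcommit ratio:     {total_vcpus / physical_cores}:1") # 2.75:1

    # If every VM were busy at the same time, each would get roughly its
    # proportional share of the physical cores.
    for name, vcpus in sorted(vms.items()):
        share = physical_cores * vcpus / total_vcpus
        print(f"{name}: up to {vcpus} cores, ~{share:.2f} cores under full contention")

Under low load, the demanding VM can still burst to all four of its cores, which is exactly the kind of resource utilization a fixed hardware blade cannot offer.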

As stated earlier, we'll look into virtualization in much more depth and report our findings in a later installment of our Server Guide series of articles.


Conclusion

In this first part, we explored what makes a server different and focused on the various server chassis form factors out there. The ideal server chassis form factor is not yet on the market. Rack servers offer great flexibility, but a whole rack of them contains more cables, power supplies, DVD drives and other components than necessary.

Blade servers have the potential to push rack servers completely out of the market, but they lack flexibility, as the big OEMs do not want to standardize on a common blade chassis: that would open the market to stiff competition and erode their high profit margins. Until then, blade servers offer good value mainly to datacenters running High Performance Computing (HPC) applications, telecommunication applications and massive web hosting operations.

Hybrid blade servers and big rack servers running virtual machines are steps in the right direction, combining very good use of resources with the flexibility to adapt the machine to the needs of different server applications. We'll investigate this further with practical examples and benchmarks in upcoming articles.

Special thanks to Angela Rosario (Supermicro), Michael Kalodrich (Supermicro), Geert Kuijken (HP Belgium), and Erwin vanluchene (HP Belgium).


References:

[1] "TCO Study Ranks Rackable #1 For Large Scale Server Deployments", Rackable Systems. http://www.rackable.com/ra_secure/Rackable_TCO_CStudy.pdf

[2] John Humphreys, Lucinda Borovick, Randy Perry, "Making the Business Case for Blade Servers", sponsored by IBM Corporation. http://www-03.ibm.com/servers/eserver/bladecenter/pdf/IBM_nortel_wp.pdf

Comments

  • AtaStrumf - Sunday, October 22, 2006 - link

    Interesting stuff! Keep up the good work!
  • LoneWolf15 - Thursday, October 19, 2006 - link

    I'm guessing this is possible, but I've never tried it...

    Wouldn't it be possible to use a blade server, and just have the OS on each blade, but have a large, high-bandwidth (read: gig ethernet) NAS box? That way, each blade would have, say (for example), two small hard disks in RAID-1 with the boot OS for ensuring uptime, but any file storage would be redirected to RAID-5 volumes created on the NAS box(es). Sounds like the best of both worlds to me.
  • dropadrop - Friday, December 22, 2006 - link

    This is what we've had in all of the places I've been working at during the last 5-6 years. The term used is SAN, not NAS, and servers have traditionally been connected to it via fiber optics. It's not exactly cheap storage; actually it's really damn expensive.

    To give you a picture, we just got a 22TB SAN at my new employer, and it cost way over $100,000. If you start counting the price per gigabyte, it's not cheap at all. Of course this does not take into consideration the price of fiber connections (cards on the server, fiber switches, cables, etc.). Now a growing trend is to use iSCSI instead of fiber. iSCSI is SCSI over Ethernet and ends up being a lot cheaper (though not quite as fast).

    Apart from having central storage with higher redundancy, one advantage is performance. A SAN can stripe the data over all the disks in it; for example, we have a RAID stripe consisting of over 70 disks...
  • LoneWolf15 - Thursday, October 19, 2006 - link

    (Since I can't edit)

    I forgot to add that it even looks like Dell has some boxes like these that can be attached directly to their servers with cables (I don't remember, but it might be a SAS setup). Support for a large number of drives, and multiple RAID volumes if necessary.
  • Pandamonium - Thursday, October 19, 2006 - link

    I decided to give myself the project of creating a server for use in my apartment, and this article (along with its subsequent editions) should help me greatly in this endeavor. Thanks AT!
  • Chaotic42 - Sunday, August 20, 2006 - link

    This is a really interesting article. I just started working in a fairly large data center a couple of months ago, and this stuff really interests me. Power is indeed expensive for these places, but given the cost of the equipment and maintenance, it's not too bad. Cooling is a big issue though, as we have pockets of hot and cold air throughout the DC.

    I still can't get over just how expensive 9GB WORM media is and how insanely expensive good tape drives are. It's a whole different world of computing, and even our 8 CPU Sun system is too damned slow. ;)
  • at80eighty - Sunday, August 20, 2006 - link

    Target Reader here - SMB owner contemplating my options in the server route

    again - thank you

    you guys fucking \m/
  • peternelson - Friday, August 18, 2006 - link


    Blades are expensive but not so bad on eBay (as is regular server gear, which is affordable second-hand).

    Blades can mix architectures, e.g. IBM Cell processor blades could mix with Pentium or maybe Opteron blades.

    How important U size is depends on whether it's YOUR rack or a datacentre rack. Cost/sq ft is higher in a datacentre.

    Power is not just cents per kWh paid to the utility supplier.

    It is cost of cabling and PDU.
    Cost (and efficiency overhead) of UPS
    Cost of remote boot (APC Masterswitch)
    Cost of transfer switch to let you swap out UPS batteries
    Cost of having generator power waiting just in case.

    Some of these scale with capacity so cost more if you use more.

    Yes virtualisation is important.

    IBM have been advertising server consolidation (i.e. no invasion of beige boxes).

    But also see STORAGE consolidation, e.g. an EMC array on a SAN. You have virtual storage across all platforms, adding disks as needed or moving the free space virtually onto a different volume as needed. Unused data can migrate to slower drives or tape.
  • Tujan - Friday, August 18, 2006 - link

    "[(o)]/..\[(o)]"
  • Zaitsev - Thursday, August 17, 2006 - link

    quote:

    We are well aware that we don't have the monopoly on wisdom, so feel free to sent us feedback.


    Fourth paragraph of intro.

    Haven't finished the article yet, but I'm looking forward to the series.
