Lower acquisition costs?

In theory, the purchase price of a blade server setup should be lower than that of an equivalent number of rack servers, thanks to the reduction in duplicate components (DVD drives, power supplies, etc.) and the savings on KVM and Ethernet cabling. However, there is a complete lack of standardization between IBM, HP and Dell; each has its own proprietary blade architecture. With no standards, it is very hard for other players to enter this market, which allows the big players to charge a hefty premium for their blade servers.

Sure, many studies - mostly sponsored by one of the big players - show considerable savings, but they compare a fully populated blade chassis with the most expensive rack servers of the same vendor. If the blade chassis only gets populated gradually over time, and you consider that competition in the rack server market is much more aggressive, you get a different picture. Most blade server offerings are considerably more expensive than their rack server alternatives. A blade chassis easily costs between $4,000 and $8,000, and the blades themselves are hardly, if at all, less expensive than their 1U counterparts.
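
As a back-of-the-envelope illustration of that effect, the sketch below spreads an assumed chassis price over the number of blades actually installed and compares the result with a comparable 1U rack server. Every figure is an illustrative assumption, chosen only to match the rough price ranges mentioned above, not a vendor quote.

```python
# Rough cost comparison: blades in a gradually populated chassis vs. 1U rack servers.
# All prices are illustrative assumptions, not vendor pricing.

CHASSIS_PRICE = 6000   # assumed chassis price, mid-range of the $4,000-$8,000 cited above
BLADE_PRICE = 4000     # assumed price of a single server blade
RACK_1U_PRICE = 4000   # assumed price of a comparable 1U rack server
RACK_EXTRAS = 300      # assumed per-server cabling/KVM/switch costs that a chassis avoids
CHASSIS_SLOTS = 10     # assumed number of blade bays in the chassis

def effective_blade_cost(blades_installed: int) -> float:
    """Per-blade cost when the chassis price is spread over the installed blades."""
    return BLADE_PRICE + CHASSIS_PRICE / blades_installed

rack_total = RACK_1U_PRICE + RACK_EXTRAS
for n in (2, 4, 6, 8, CHASSIS_SLOTS):
    blade_total = effective_blade_cost(n)
    cheaper = "blade" if blade_total < rack_total else "rack"
    print(f"{n:2d} blades installed: ${blade_total:,.0f}/blade vs ${rack_total:,}/1U -> {cheaper} is cheaper")
```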

In the rack server space, the big OEMs have to compete with many well-established players such as Supermicro, Tyan and Rackable Systems. At the same time, big Taiwanese hardware players such as MSI, ASUS and Gigabyte have also entered this market, putting great pressure on the price of a typical rack server.

That kind of competition is only a very small blip on the radar in the blade market, and it is definitely one reason why HP and IBM are putting so much emphasis on their blade offerings. Considering that acquisition costs still easily make up about 40-50% of the TCO, it is clear that the market needs more open standards to open the door to more competition. Supermicro plans to enter the market at the end of the year, and Tyan has made a first attempt with its Typhoon series, which is more of an HPC solution. It will be interesting to see how flexible these solutions will be compared to those of the two biggest players, HP and IBM.
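
To put that 40-50% share in perspective, here is a minimal three-year TCO sketch. All of the inputs (purchase price, power draw, electricity price, administration cost) are illustrative assumptions, and a real TCO model would include far more factors such as facilities, software licensing and downtime.

```python
# Minimal three-year TCO sketch for a single server.
# Every input is an illustrative assumption, not measured data.

acquisition = 5000       # purchase price of the server (assumed)
power_watts = 500        # average draw, including a share of cooling (assumed)
kwh_price = 0.10         # electricity price in $/kWh (assumed)
admin_per_year = 1500    # per-server share of administration costs (assumed)
years = 3

power_cost = power_watts / 1000 * 24 * 365 * years * kwh_price
admin_cost = admin_per_year * years
tco = acquisition + power_cost + admin_cost

print(f"Acquisition: ${acquisition:,}")
print(f"Power:       ${power_cost:,.0f}")
print(f"Admin:       ${admin_cost:,.0f}")
print(f"Acquisition share of TCO: {acquisition / tco:.0%}")
```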

Flexibility

Most blade servers are not as flexible as rack servers. Quickly installing an extra RAID card to attach a storage rack for a high performance database application is not possible. And what if you need a database server with a huge amount of RAM, and you don't want to use a clustered database? Rack servers with 16 DIMM slots are easy to find, while blades are mostly limited to 4 or 8 slots. Blades cannot offer that kind of flexibility, or at best they offer it only at a very high price.

In most cases, blades use 2.5 inch hard disks, which are more expensive and offer lower performance than their 3.5 inch counterparts. That is not really surprising, as blades have been built with a focus on density, trying to fit as much processing power into a given amount of rack space as possible. A typical blade today has at most about 140 GB of raw disk capacity, while quite a few 1U rack servers can offer 2 TB (4 x 500 GB).

Finally, of course, there is the lack of standardization, which prevents you from mixing the best solutions of different vendors together in one chassis. Once you buy a chassis from a certain vendor, server blades and option blades must be bought from the same vendor.

Hybrid blades

The ideas behind blades - shared power, networking, KVM and management - are excellent, and it would be superb if they could be combined with the flexibility that current rack servers offer. Rackable Systems seems to be taking the first steps in that direction, enabling customers to use 3.5 inch hard disks and normal ATX motherboards in its "Scale Out" chassis, which makes it a lot more flexible and most likely less expensive too.

The alternative: the heavy rack server with "virtual blades"

One possible solution that could be a serious competitor for blade servers is a heavy duty rack server running VMware's ESX Server. We'll explore virtualization in more detail in an upcoming article, but for now remember that ESX Server has very little overhead, contrary to Microsoft's Virtual Server and VMware Server (GSX Server). Our first measurements show about a 5% performance decrease, which can easily be ignored.


Two physical CPUs, but 8 VMs with 1 virtual CPU each

Using a powerful server with a lot of redundancy and running many virtual machines on it is, in theory, an excellent solution. Compared to a blade server, the CPU and RAM resources will be utilized much better. For example, suppose you have 8 CPU cores and 10 applications that you want to run in 10 different virtual machines. You can give one demanding application 4 cores, while the other 9 applications get only 2 cores each. The number of cores you assign to a certain VM is only the maximum amount of CPU power it will be able to use.
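
The sketch below illustrates that idea of overcommitting virtual CPUs. The 8-core host and the VM list are hypothetical examples, not an actual ESX configuration; real hypervisors express the same idea through their own resource controls.

```python
# Sketch of CPU overcommitment on a virtualized host.
# The 8-core host and the VM list below are hypothetical, not a real ESX setup.

PHYSICAL_CORES = 8

# Each VM gets a number of virtual CPUs; this is only an upper bound on the
# CPU power that the VM can use, not a hard reservation.
vms = {"database": 4}                               # the demanding application
vms.update({f"app{i}": 2 for i in range(1, 10)})    # nine lighter applications

total_vcpus = sum(vms.values())
print(f"Physical cores:   {PHYSICAL_CORES}")
print(f"Assigned vCPUs:   {total_vcpus}")
print(f"Overcommit ratio: {total_vcpus / PHYSICAL_CORES:.2f}x")

# The hypervisor time-slices the physical cores between the VMs, so this works
# well as long as the VMs rarely demand their full vCPU allocation at once.
for name, vcpus in vms.items():
    print(f"{name:9s} can use at most {vcpus} cores ({vcpus / PHYSICAL_CORES:.0%} of the host)")
```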

As stated earlier, we'll look into virtualization in much more depth and report our findings in a later installment of our Server Guide series.


Conclusion

In this first part we explored what makes a server different from a desktop PC, and we focused on the different server chassis formats out there. The ideal server chassis form factor is not yet on the market. Rack servers offer great flexibility, but a whole rack of them contains more cables, power supplies, DVD drives and other components than necessary.

Blade servers have the potential to push rack servers completely off the market, but they lack flexibility because the big OEMs do not want to standardize on a blade chassis: doing so would open the market to stiff competition and lower their high profit margins. For now, blade servers offer good value mainly to datacenters running High Performance Computing (HPC) or telecommunication applications and to massive web hosting companies.

Hybrid blade servers and big rack servers running virtual machines are steps in the right direction, combining very good use of resources with the flexibility to adapt the machine to the needs of different server applications. We'll investigate this further with practical examples and benchmarks in upcoming articles.

Special thanks to Angela Rosario (Supermicro), Michael Kalodrich (Supermicro), Geert Kuijken (HP Belgium), and Erwin vanluchene (HP Belgium).


References:

[1] "TCO Study Ranks Rackable #1 For Large Scale Server Deployments"
http://www.rackable.com/ra_secure/Rackable_TCO_CStudy.pdf

[2] John Humphreys, Lucinda Borovick, Randy Perry, "Making the Business Case for Blade Servers", sponsored by IBM Corporation
http://www-03.ibm.com/servers/eserver/bladecenter/pdf/IBM_nortel_wp.pdf

Comments

  • Whohangs - Thursday, August 17, 2006 - link

    Yes, but multiply that by multiple cpus per server, multiple servers per rack, and multiple racks per server room (not to mention the extra cooling of the server room needed for that extra heat) and your costs quickly add up.
  • JarredWalton - Thursday, August 17, 2006 - link

    Multiple servers all consume roughly the same power and have the same cost, so you double your servers (say, spend $10000 for two $5000 servers) and your power costs double as well. That doesn't mean that the power catches up to the initial server cost faster. AC costs will also add to the electricity cost, but in a large datacenter your AC costs don't fluctuate *that* much in my experience.

    Just for reference, I worked in a datacenter for a large corporation for 3.5 years. Power costs for the entire building? About $40,000-$70,000 per month (this was a 1.5 million square foot warehouse). Cost of the datacenter construction? About $10 million. Cost of the servers? Well over $2 million (thanks to IBM's eServers). I don't think the power draw from the computer room was more than $1000 per month, but it might have been $2000-$3000 or so. The cost of over 100,000 500W halogen lights (not to mention the 1.5 million BTU heaters in the winter) was far more than the cost of running 20 or so servers.

    Obviously, a place like Novell or another company that specifically runs servers and doesn't have tons of cubicle/storage/warehouse space will be different, but I would imagine places with $100K per month electrical bills probably hold hundreds of millions of dollars of equipment. If someone has actual numbers for electrical bills from such an environment, please feel free to enlighten.
  • Viditor - Friday, August 18, 2006 - link

    It's the cooling (air treatment) that is more important...not just the expense of running the equipment, but the real estate required to place the AC equipment. As datacenters expand, some quickly run out of room for all of the air treatment systems on the roof. By reducing heating and power costs inside the datacenter, you increase the value for each sq ft you pay...
  • TaichiCC - Thursday, August 17, 2006 - link

    Great article. I believe the article also needs to include the impact of software when choosing hardware. If you look at some bleeding edge software infrastructure employed by companies like Google, Yahoo, and Microsoft, RAID and PCI-X are no longer important. Thanks to software, a down server or even a down data center means nothing. They have disk failures every day and the service is not affected by these mishaps. Remember how one of Google's data centers caught fire and there was no impact to the service? Software has allowed cheap hardware that doesn't have RAID, SATA, and/or PCI-X, etc. to function well with no downtime. That also means TCO is mad low, since the hardware is cheap and maintenance is even lower since software has automated everything from replication to failovers.
  • Calin - Friday, August 18, 2006 - link

    I don't think Google or Microsoft runs their financial software on a big farm of small, inexpensive computers.
    While "software-based redundancy" is a great solution for some problems, other problems are totally incompatible with it.
  • yyrkoon - Friday, August 18, 2006 - link

    Virtualization is the way of the future. Server admins have been implementing this for years, and if you know what you're doing, it's very effective. You can in effect segregate all your different types of servers (DNS, HTTP, etc.) into separate VMs, and keep multiple snapshots just in case something does get hacked or otherwise goes down (not to mention you can even have redundant servers in software to kick in when this does happen). While VMware may be very good compared to VPC, Xen is probably equally good by comparison to VMware; the performance difference, last I checked, was pretty large.

    Anyhow, I'm looking forward to AnandTech's virtualization part of the article, perhaps we all will learn something :)
  • JohanAnandtech - Thursday, August 17, 2006 - link

    Our focus is mostly on SMBs, not Google :-). Are you talking about cluster failover? I am still exploring that field, as it is quite expensive to build in the lab :-). I would be interested to know which technique is the most interesting: a router which simply switches to another server, or a heartbeat system, where one server monitors the other.

    I don't think the TCO of implementing that kind of software or solution is that low, nor that the hardware is incredibly cheap. You are right when you are talking about "Google datacenter scale". But for a few racks? I am not sure. Working with budgets of 20,000 Euro and less, I'll have to disagree :-).

    Basically what I am trying to do with this server guide is give the beginning server administrators with tight budgets an overview of their options. Too many times SMBs are led to believe they need a certain overhyped solution.
  • yyrkoon - Friday, August 18, 2006 - link

    Well, if the server is in house, it's no biggie, but if that server is across the country (or world), then perhaps paying extra for that 'overhyped solution' so you can remotely access your BIOS may come in handy ;) In house, a lot of people actually use inexpensive motherboards such as those offered by ASRock, paired with a Celeron/Sempron CPU. Now, if you're going to run more than a couple of VMs on this machine, then obviously you're going to have to spend more anyhow for multiple CPU sockets and 8-16 memory slots. Blade servers, IMO, are never an option. $4,000 seems awfully low for a blade server also.
  • schmidtl - Thursday, August 17, 2006 - link

    The S in RAS stands for serviceability. Meaning when the server requires maintenance, repair, or upgrades, what is the impact? Does the server need to be completely shut down (like a PC), or can you replace parts while it's running (hot-pluggable)?
  • JarredWalton - Thursday, August 17, 2006 - link

    Thanks for the correction - can't say I'm a server buff, so I took the definitions at face value. The text on page 3 has been updated.
