Chassis format: why rack servers have taken over the market

About five years ago, two thirds of the servers shipped were still towers, and only one third were rack servers. Today, seven out of ten servers shipped are rack servers, a bit less than one in ten are blade servers, and the remaining ~20% are towers.

That doesn't mean that towers are completely out of the picture; most companies expect towers to live on. The reason is that if your company only needs a few small servers, rack servers are simply a bit more expensive, and so are the necessary rack switch and other rack-mounted gear. Towers thus remain an interesting option for a small office server that runs the domain controller, a version of MS Small Business Server and so on, all on one machine. After all, a good way to keep TCO low is to run everything on one solid machine.

However, from the moment you need more than a few servers, you will want a KVM (Keyboard, Video, Mouse) switch to save some space and to switch quickly between your servers while configuring them and installing software. You also want to be able to reboot those servers when something goes wrong while you are not at the office, so your servers are equipped with remote management controllers. To make them accessible via the internet, you install a gateway and firewall, on which you run VPN software. (Virtual Private Network software allows you to access your LAN via a secure connection over the internet.)

With "normal LAN" and remote management cables hooked up to your Ethernet switch, KVM cables going to your KVM switch, and two power cables per server (for redundant power), cable management quickly becomes a concern. You also need a better place to put your servers than on dusty desks. Therefore you mount your rack servers in a rack cabinet. Cable management, upgrading, and repairing servers become a lot easier thanks to special cable management arms and rack rails, which significantly lowers the cost of maintenance and operation. Rack servers also take up much less space than towers, so facility management costs go down too. The only disadvantage is that you have to buy everything in the 19-inch wide format: switches, routers, KVM switch, and so on.
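To get a feel for how quickly the cabling adds up, here is a minimal sketch (Python; the per-server cable counts simply mirror the setup described above and are assumptions, not figures from the article):

```python
def cables_per_rack(servers, lan=1, mgmt=1, kvm=1, power=2):
    """Count the cables arriving at a rack, assuming per server:
    one LAN link, one remote-management link, one KVM cable, and
    two power cords (redundant supplies)."""
    per_server = lan + mgmt + kvm + power
    return servers * per_server

# A modest rack of 20 servers already needs 100 cables routed.
print(cables_per_rack(20))  # -> 100
```

Even at this modest scale, cable management arms start to pay for themselves.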


Cable management arms and rack rails make upgrading a server pretty easy

Racks can normally contain 42 "units". A unit (U) is 1.75 inches (4.45 cm) high. Rack servers are usually 1 to 4 units (1U to 4U) high.
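The rack arithmetic above can be sketched as follows (Python; the 42U cabinet and the 1U/4U server heights are the typical figures mentioned in the text):

```python
RACK_UNIT_CM = 4.445  # one rack unit ("U") = 1.75 inches

def usable_height_cm(units=42):
    """Internal height of a standard 42U rack cabinet."""
    return units * RACK_UNIT_CM

def max_servers(rack_units=42, server_u=1):
    """How many servers of a given height fit in one rack."""
    return rack_units // server_u

print(round(usable_height_cm(), 2))  # -> 186.69 (roughly 1.87 m)
print(max_servers(42, 1))            # -> 42 "pizza box" servers
print(max_servers(42, 4))            # -> 10 4U servers
```

In practice some of those units go to switches, the KVM switch, and power distribution, so the usable server count is a bit lower.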


HP DL145, a 1U solution

1U servers (the "pizza box" servers) focus on density: processing power per U. Some 1U models offer up to four processor sockets and eight cores, such as Supermicro's SC818S+-1000 and Iwill's H4103 server. These servers are excellent for HPC (High Performance Computing) applications, but storage-intensive applications require an extra investment in external storage. The primary disadvantage of 1U servers is the limited expansion possibilities: you'll have one or two horizontal PCI-X/PCI-e slots (via a riser card) at most. The very flat but powerful power supplies are of course more expensive than normal power supplies, and the number of hard drives is limited to two to four. 1U servers also have to use very small 7000-10,000 rpm fans.


Sun T2000, a 2U server

2U servers can use more "normal" power supplies and fans, and therefore 2U barebones tend to be a bit cheaper than 1U ones. Some 2U servers, such as the Sun T2000, use only half-height vertical expansion slots, which might limit your options for third-party PCI-X/e cards.


The 4U HP DL585

3U and 4U servers have the advantage that the PCI-X/e cards can be placed vertically, which allows many more expansion slots. Disks can also be mounted vertically, which gives you a decent number of local disks: space for eight, and sometimes more, is possible.

32 Comments


  • Whohangs - Thursday, August 17, 2006 - link

    Yes, but multiply that by multiple cpus per server, multiple servers per rack, and multiple racks per server room (not to mention the extra cooling of the server room needed for that extra heat) and your costs quickly add up.
  • JarredWalton - Thursday, August 17, 2006 - link

    Multiple servers of the same type all consume roughly the same power and cost roughly the same, so if you double your servers (say, spend $10,000 on two $5,000 servers), your power costs double as well. That doesn't mean the power cost catches up with the initial server cost any faster. AC will also add to the electricity cost, but in a large datacenter your AC costs don't fluctuate *that* much in my experience.

    Just for reference, I worked in a datacenter for a large corporation for 3.5 years. Power costs for the entire building? About $40,000-$70,000 per month (this was a 1.5 million square foot warehouse). Cost of the datacenter construction? About $10 million. Cost of the servers? Well over $2 million (thanks to IBM's eServers). I don't think the power draw from the computer room was more than $1000 per month, but it might have been $2000-$3000 or so. The cost of over 100,000 500W halogen lights (not to mention the 1.5 million BTU heaters in the winter) was far more than the cost of running 20 or so servers.

    Obviously, a place like Novell or another company that specifically runs servers and doesn't have tons of cubicle/storage/warehouse space will be different, but I would imagine places with a $100K-per-month electrical bill probably hold hundreds of millions of dollars of equipment. If someone has actual numbers for electrical bills from such an environment, please feel free to enlighten.
  • Viditor - Friday, August 18, 2006 - link

    It's the cooling (air treatment) that is more important...not just the expense of running the equipment, but the real estate required to place the AC equipment. As datacenters expand, some quickly run out of room for all of the air treatment systems on the roof. By reducing heating and power costs inside the datacenter, you increase the value for each sq ft you pay...
  • TaichiCC - Thursday, August 17, 2006 - link

    Great article. I believe the article also needs to include the impact of software when choosing hardware. If you look at some bleeding-edge software infrastructure employed by companies like Google, Yahoo, and Microsoft, RAID and PCI-X are no longer important. Thanks to software, a down server or even a down datacenter means nothing. They have disk failures every day and the service is not affected by these mishaps. Remember how one of Google's datacenters caught fire and there was no impact on the service? Software has allowed cheap hardware that doesn't have RAID, SATA, and/or PCI-X, etc. to function well with no downtime. That also means TCO is very low, since the hardware is cheap and maintenance is even cheaper, since software has automated everything from replication to failover.
  • Calin - Friday, August 18, 2006 - link

    I don't think Google or Microsoft runs their financial software on a big farm of small, inexpensive computers.
    While "software-based redundancy" is a great solution for some problems, other problems are totally incompatible with it.
  • yyrkoon - Friday, August 18, 2006 - link

    Virtualization is the way of the future. Server admins have been implementing it for years, and if you know what you're doing, it's very effective. You can in effect segregate all your different types of servers (DNS, HTTP, etc.) into separate VMs, and keep multiple snapshots in case something does get hacked or otherwise goes down (not to mention you can even have redundant servers in software kick in when this does happen). While VMware may be very good compared to VPC, and Xen is probably equally good compared to VMware, the performance difference last I checked was pretty large.

    Anyhow, I'm looking forward to AnandTech's virtualization part of the article; perhaps we all will learn something :)
  • JohanAnandtech - Thursday, August 17, 2006 - link

    Our focus is mostly on SMBs, not Google :-). Are you talking about cluster failover? I am still exploring that field, as it is quite expensive to build in the lab :-). I would be interested in which technique is the most interesting: a router which simply switches to another server, or a heartbeat system, where one server monitors the other.

    I don't think the TCO is that low for implementing that kind of software or solution, or that the hardware is incredibly cheap. You are right when you are talking about "Google datacenter scale". But for a few racks? I am not sure. Working with budgets of 20,000 Euro and less, I'll have to disagree :-).

    Basically, what I am trying to do with this server guide is give beginning server administrators with tight budgets an overview of their options. Too many times SMBs are led to believe they need a certain overhyped solution.
  • yyrkoon - Friday, August 18, 2006 - link

    Well, if the server is in house, it's no biggie, but if that server is across the country (or the world), then paying extra for that 'overhyped solution' so you can remotely access your BIOS may come in handy ;) In house, a lot of people actually use inexpensive motherboards such as those offered by ASRock, paired with a Celeron/Sempron CPU. Now, if you're going to run more than a couple of VMs on this machine, then obviously you're going to have to spend more anyhow for multiple CPU sockets and 8-16 memory slots. Blade servers, IMO, are never an option. 4,000 also seems awfully low for a blade server.
  • schmidtl - Thursday, August 17, 2006 - link

    The S in RAS stands for serviceability, meaning: when the server requires maintenance, repair, or upgrades, what is the impact? Does the server need to be completely shut down (like a PC), or can you replace parts while it's running (hot-pluggable)?
  • JarredWalton - Thursday, August 17, 2006 - link

    Thanks for the correction - can't say I'm a server buff, so I took the definitions at face value. The text on page 3 has been updated.
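As a back-of-the-envelope check on the power-versus-purchase-cost discussion in the comments above, here is a small sketch (Python; the 500 W draw, $0.10/kWh rate, and $5,000 server price are illustrative assumptions, not figures from the article):

```python
def monthly_power_cost(watts, price_per_kwh=0.10, hours=24 * 30):
    """Electricity cost of running one machine flat-out for a month."""
    return watts / 1000 * hours * price_per_kwh

# Assumed figures: a 500 W server at $0.10/kWh.
one_server = monthly_power_cost(500)
print(round(one_server, 2))  # -> 36.0 (dollars per month)

# Doubling the servers doubles both the purchase price and the power
# bill, so the ratio of power cost to hardware cost stays the same.
server_price = 5000  # assumed
print(round(server_price / one_server, 1))  # -> 138.9 (months to match)
```

Under these assumptions, the electricity bill takes over a decade to equal the purchase price, which supports the point that power alone rarely dominates TCO; cooling and floor space shift that balance.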
