Chassis format: why rack servers have taken over the market

About five years ago, two thirds of the servers shipped were still towers and only one third were rack servers. Today, seven out of ten servers shipped are rack servers, a bit less than one out of ten are blade servers, and the remaining ~20% are towers.

That doesn't mean towers are completely out of the picture; most companies expect towers to live on. The reason is that if you only need a few small servers, rack servers are simply a bit more expensive, and so are the necessary rack switch and other rack-mounted equipment. A tower might still be an interesting option for a small office server that runs the domain controller, a version of MS Small Business Server and so on, all on one machine. After all, a good way to keep TCO low is to run everything from one solid machine.

However, from the moment you need more than a few servers, you will want a KVM (Keyboard, Video, Mouse) switch to save some space and to be able to quickly switch between your servers while configuring them and installing software. You also want to be able to reboot those servers if something goes wrong while you are not at the office, so your servers are equipped with remote management controllers. To make them accessible via the internet, you install a gateway and firewall running VPN (Virtual Private Networking) software, which allows you to access your LAN over a secure connection on the internet.
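Remote reboots of this kind are typically scripted against the servers' management controllers. As a minimal sketch, the Python snippet below drives the ipmitool CLI to power-cycle a hung machine over its management interface; the hostname and credentials are placeholders, and your servers' remote management cards may speak a different protocol entirely.

```python
# Sketch: power-cycle a hung server through its remote management (BMC/IPMI)
# interface, e.g. over the VPN described above. Hostname and credentials are
# placeholders -- substitute your own environment's values.
import subprocess

MGMT_HOST = "mgmt-server01.example.local"  # management NIC of the server
IPMI_USER = "admin"
IPMI_PASS = "secret"

def power_cycle(host: str) -> None:
    """Ask the server's management controller to cycle chassis power."""
    subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host,
         "-U", IPMI_USER, "-P", IPMI_PASS,
         "chassis", "power", "cycle"],
        check=True,
    )

if __name__ == "__main__":
    power_cycle(MGMT_HOST)
```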

With "normal LAN" and remote management cables hooked up to your Ethernet switch, KVM cables going to your KVM switch, and two power cables per server (redundant power), cable management quickly becomes a concern. You also need a better way to store your servers than on dusty desks. Therefore you build your rack servers into a rack cabinet. Cable management, upgrading and repairing servers is a lot easier thanks to special cable management arms and the rack rails. This significantly lowers the costs of maintenance and operation. Rack servers also take much less space than towers, so the facility management costs go down too. The only disadvantage is that you have to buy everything in 19inch wide format: switches, routers, KVM switch and so on.


Cable management arms and rack rails make upgrading a server pretty easy

Racks can normally contain 42 "units". A unit (U) is 1.75 inches (4.45 cm) high, and rack servers are usually 1 to 4 units (1U to 4U) high.
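The arithmetic is simple, but a quick sketch makes it concrete; the figures below assume a standard full-height 42U cabinet with all units usable.

```python
# Back-of-the-envelope: how many servers of a given height fit in a 42U rack?
RACK_UNITS = 42          # standard full-height cabinet
UNIT_HEIGHT_CM = 4.445   # 1U = 1.75 inches

print(f"Usable height: {RACK_UNITS * UNIT_HEIGHT_CM:.0f} cm")
for height_u in (1, 2, 3, 4):
    print(f"{height_u}U servers: {RACK_UNITS // height_u} per rack")
```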


HP DL145, a 1U solution

1U servers (or "pizza box" servers) focus on density: processing power per U. Some 1U models offer up to four processor sockets and eight cores, such as Supermicro's SC818S+-1000 and Iwill's H4103. These servers are excellent for HPC (High Performance Computing) applications, but for "storage intensive" applications they require an extra investment in external storage. The primary disadvantage of 1U servers is the limited expansion possibilities: you'll have one or two horizontal PCI-X/PCI-e slots (via a riser card) at most. The very flat but powerful power supplies are of course more expensive than normal power supplies, and the number of hard drives is limited to two to four. 1U servers also have to use very small 7,000-10,000 rpm fans.
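The density argument is easy to quantify. The comparison below is a sketch with illustrative configurations: the four-socket 1U figure matches the HPC-oriented models mentioned above, while the others are typical examples rather than specific products.

```python
# Illustrative only: CPU-socket density per 42U rack for common form factors.
RACK_UNITS = 42

configs = {
    "1U, 4 sockets": (1, 4),
    "2U, 2 sockets": (2, 2),
    "4U, 4 sockets": (4, 4),
}

for name, (height_u, sockets) in configs.items():
    servers = RACK_UNITS // height_u
    print(f"{name}: {servers} servers, {servers * sockets} sockets per rack")
```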


Sun T2000, a 2U server

2U servers can use more "normal" power supplies and fans, so 2U barebones tend to be a bit cheaper than their 1U counterparts. Some 2U servers, such as the Sun T2000, offer only half-height vertical expansion slots, which might limit your options for third-party PCI-X/e cards.


The 4U HP DL585

3U and 4U servers have the advantage that the PCI-X/e cards can be placed vertically, which allows many more expansion slots. Disks can also be mounted vertically, which gives you a decent number of local disks: space for eight, and sometimes more, is possible.
