Chassis format: why rack servers have taken over the market

About 5 years ago, two thirds of the servers shipped were still towers, and only one third were rack servers. Today, roughly 7 out of 10 servers shipped are rack servers, a bit less than 1 out of 10 are blade servers, and the remaining ~20% are towers.

That doesn't mean that towers are completely out of the picture; most companies expect towers to live on. The reason is that if your company only needs a few small servers, rack servers are simply a bit more expensive, and so are the necessary rack switch and other rack-mount gear. So towers might still be an interesting option for a small office server that runs the domain controller, a version of MS Small Business Server and so on, all on one machine. After all, a good way to keep TCO low is running everything from one solid machine.

However, from the moment you need more than a few servers, you will want a KVM (Keyboard, Video, Mouse) switch to save some space and to quickly switch between your servers while configuring and installing software. You also want to be able to reboot those servers in case something happens when you are not at the office, so your servers are equipped with remote management hardware. To make them accessible via the internet, you install a gateway and firewall, on which you install the VPN software. (Virtual Private Networking software allows you to access your LAN via a secure connection over the internet.)

With "normal LAN" and remote management cables hooked up to your Ethernet switch, KVM cables going to your KVM switch, and two power cables per server (redundant power), cable management quickly becomes a concern. You also need a better way to store your servers than on dusty desks. Therefore you mount your rack servers in a rack cabinet. Cable management, upgrading and repairing servers is a lot easier thanks to special cable management arms and the rack rails. This significantly lowers the costs of maintenance and operation. Rack servers also take much less space than towers, so the facility management costs go down too. The only disadvantage is that you have to buy everything in 19-inch wide format: switches, routers, KVM switch and so on.
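To see why cabling gets out of hand so quickly, here is a rough tally in Python. The per-server cable counts are illustrative assumptions based on the setup described above (one LAN drop, one management drop, one KVM cable, two power cords), not figures from the article:

```python
# Illustrative cable tally for the rack setup described above.
# Assumed per-server cabling: 1 LAN, 1 remote management, 1 KVM, 2 power.
CABLES_PER_SERVER = {"lan": 1, "management": 1, "kvm": 1, "power": 2}

def cables_for(servers):
    """Total number of cables for a given number of rack servers."""
    return servers * sum(CABLES_PER_SERVER.values())

print(cables_for(10))  # 50 cables for a modest 10-server rack
```

Even a small 10-server rack ends up with dozens of cables, which is exactly what cable management arms are meant to tame.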


Cable management arms and rack rails make upgrading a server pretty easy

Racks normally contain 42 "units". A unit (U) is 1.75 inches (4.45 cm) high. Rack servers are usually 1 to 4 units (1U to 4U) high.
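The rack-unit arithmetic above is simple enough to sketch in a few lines of Python (the function names are ours, for illustration only):

```python
# Rack-unit (U) arithmetic: 1U = 1.75 inches, a full rack is 42U.
U_INCHES = 1.75

def rack_height_inches(units):
    """Height in inches of a chassis (or rack) that is `units` U tall."""
    return units * U_INCHES

def servers_per_rack(rack_units=42, server_units=1):
    """How many servers of a given U height fit in one rack."""
    return rack_units // server_units

print(rack_height_inches(42))   # 73.5 inches of usable height in a 42U rack
print(servers_per_rack(42, 2))  # 21 2U servers per rack
```

This is why density matters so much in the 1U segment: the same 42U cabinet holds 42 1U "pizza boxes" but only ten 4U machines.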


HP DL145, a 1U solution

1U servers (or "pizza box" servers) focus on density: processing power per U. Some 1U models offer up to four processor sockets and eight cores, such as Supermicro's SC818S+-1000 and Iwill's H4103 server. These servers are excellent for HPC (High Performance Computing) applications, but require an extra investment in external storage for "storage intensive" applications. The primary disadvantage of 1U servers is the limited expansion possibilities: you'll have one or two horizontal PCI-X/PCI-e slots (via a riser card) at most. The very flat but powerful power supplies are of course more expensive than normal power supplies, and the number of hard drives is limited to two to four. 1U servers also use very small 7,000-10,000 rpm fans.


Sun T2000, a 2U server

2U servers can use more "normal" power supplies and fans, and therefore 2U barebones tend to be a bit cheaper than 1U ones. Some 2U servers, such as the Sun T2000, use only half-height vertical expansion slots, which might limit your options for third-party PCI-X/e cards.


The 4U HP DL585

3U and 4U servers have the advantage that they can place the PCI-X/e cards vertically, which allows many more expansion slots. Disks can also be placed vertically, which gives you a decent number of local disks: space for eight disks, and sometimes more, is possible.

Comments (32)

  • JarredWalton - Thursday, August 17, 2006 - link

    Fixed.
  • Whohangs - Thursday, August 17, 2006 - link

    Great stuff, definitely looking forward to more in depth articles in this arena!
  • saiku - Thursday, August 17, 2006 - link

    This article kind of reminds me of THG's recent series of articles on how computer graphics cards work.

    For us techies who don't get to peep into our server rooms much, this is a great intro. Especially for guys like me who work in small companies where all we have are some dusty Windows 2000 servers stuck in a small server "room".

    Thanks for this cool info.
  • JohanAnandtech - Friday, August 18, 2006 - link

    Thanks! Been in the same situation as you. Then I got a very small budget for upgrading our server room (about $20,000) at the university I work for, and I found out that there is quite a bit of information about servers, but all fragmented and mostly coming from non-independent sources.
  • splines - Thursday, August 17, 2006 - link

    Excellent work pointing out the benefits and drawbacks of Blades. They are mighty cool, but not this second coming of the server christ that IBM et al would have you believe.

    Good work all round. It looks to be a great primer for those new to the administration side of the business.
  • WackyDan - Thursday, August 17, 2006 - link

    Having worked with blades quite a bit, I can tell you that they are quite a significant innovation.

    I'll disagree with the author of the article that there is no standard. Intel co-designed the IBM BladeCenter and licensed its manufacture to other OEMs. Together, IBM and Intel have/had over 50% share in the blade space. That share, along with Intel's collaboration, is by default considered the standard in the industry.

    Blades, done properly, have huge advantages over their rack counterparts, i.e. far less cables. In the IBM design, the mid-plane replaces all the individual network and optical cables, as the networking modules (copper and fibre) are internal and you can get several flavors... Plus I only need one cable drop to manage 14 servers...

    And if you've never seen 14 blades in 7U of space, fully redundant, you are missing out. As for VMware, I've seen it running on blades with the same advantages as its rack-mount peers... and FYI, blades are still considered rack-mount as well... No, you are not going to have any 16/32-ways as of yet... but still, blades really could replace 80%+ of all traditional rack-mount servers.
  • splines - Friday, August 18, 2006 - link

    I don't disagree with you on any one point there. Our business is in the process of moving to multiple blade clusters and attached SANs for our excessively large fileservers.

    But I do think that virtualisation does provide a great stepping-stone for businesses not quite ready to clear out the racks and invest in a fairly expensive replacement. We can afford to make this change, but many cannot. Even though the likes of IBM are pushing for blades left, right and centre, I wouldn't discount the old racks quite yet.

    And no, I haven't had the opportunity to see such a 7U blade setup. Sounds like fun :)
  • yyrkoon - Friday, August 18, 2006 - link

    Wouldn't you push a single system that can run into the tens of thousands, possibly hundreds of thousands, for a single blade? I know I would ;)
  • Mysoggy - Thursday, August 17, 2006 - link

    I am pretty amazed that they did not mention the cost of power in the TCO section.

    The cost of powering a server in a datacenter can be even greater than the TCA over its lifetime.

    I love the people that say... oh, I got a great deal on this Dell server... it was $400 off the list price. Then they eat through the savings in a few months with shoddy PSUs and hardware that consume more power.

  • JarredWalton - Thursday, August 17, 2006 - link

    Page 3:

    "Facility management: the space it takes in your datacenter and the electricity it consumes"

    Don't overhype power, though. There is no way even a $5,000 server is going to use more than that in power costs over its expected life. Let's just say that's 5 years for kicks. From this page (http://www.anandtech.com/IT/showdoc.aspx?i=2772...), the Dell Irwindale 3.6 GHz with 8GB of RAM maxed out at 374W. Let's say $0.10 per kWHr for electricity as a start:

    24 * 374 = 8976 WHr/Day
    8976 * 365.25 = 3278484 WHr/Year
    3278484 * 5 = 16392420 WHr over 5 years
    16392420 / 1000 = 16392.42 kWHr total

    Cost for electricity (at full load, 24/7, for 5 years): $1639.24

    Even if you double that (which is unreasonable in my experience, but maybe there are places that charge $0.20 per kWHr), you're still only at $3278.48. I'd actually guess that a lot of businesses pay less for energy, due to corporate discounts - can't say for sure, though.

    Put another way, you need a $5000 server that uses 1140 Watts in order to potentially use $5000 of electricity in 5 years. (Or you need to pay $0.30 per kWHr.) There are servers that can use that much power, but they are far more likely to cost $100,000 or more than to cost anywhere near $5000. And of course, power demands with Woodcrest and other chips are lower than that Irwindale setup by a pretty significant amount. :)

    Now if you're talking about a $400 discount to get an old Irwindale over a new Woodcrest or something, then the power costs can easily eat up those savings. That's a bit different, though.
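The cost arithmetic in the comment above can be sketched as a small Python function. The assumed inputs (374 W draw, $0.10/kWh, 24/7 load for 5 years) come from the comment itself; the function name is ours:

```python
# Sketch of the electricity-cost estimate from the comment above.
# Assumes constant full-load draw, 24/7 operation, flat per-kWh pricing.
def electricity_cost(watts, usd_per_kwh=0.10, years=5, hours_per_day=24):
    """Total electricity cost in USD over the server's service life."""
    kwh = watts * hours_per_day * 365.25 * years / 1000
    return kwh * usd_per_kwh

cost = electricity_cost(374)
print(f"${cost:.2f}")  # $1639.24, matching the figure above
```

Doubling the rate to $0.20/kWh doubles the result to $3,278.48, and solving for the wattage that reaches $5,000 over 5 years gives the ~1,140 W figure quoted in the comment.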
