
  • AtaStrumf - Sunday, October 22, 2006 - link

Interesting stuff! Keep up the good work!
  • LoneWolf15 - Thursday, October 19, 2006 - link

    I'm guessing this is possible, but I've never tried it...

Wouldn't it be possible to use a blade server, and just have the OS on each blade, but have a large, high-bandwidth (read: gigabit Ethernet) NAS box? That way, each blade would have, say (for example), two small hard disks in RAID-1 with the boot OS for ensuring uptime, but any file storage would be redirected to RAID-5 volumes created on the NAS box(es). Sounds like the best of both worlds to me.
  • dropadrop - Friday, December 22, 2006 - link

This is what we've had in all of the places I've been working at during the last 5-6 years. The term used is SAN, not NAS, and servers have traditionally been connected to it via fiber optics. It's not exactly cheap storage; actually, it's really damn expensive.

To give you a picture, we just got a 22TB SAN at my new employer, and it cost way over $100,000. If you start counting price per gigabyte, it's not cheap at all. Of course this does not take into consideration the price of fiber connections (cards on the server, fiber switches, cables, etc.). Now a growing trend is to use iSCSI instead of fiber. iSCSI is SCSI over Ethernet and ends up being a lot cheaper (though not quite as fast).
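A quick back-of-envelope check on that price-per-gigabyte point (a sketch only; the $100,000 figure is the comment's lower bound and excludes the fiber HBAs, switches, and cabling mentioned):

```python
def cost_per_gb(total_cost_usd, capacity_tb):
    """Rough $/GB, treating 1 TB as 1,000 GB."""
    return total_cost_usd / (capacity_tb * 1000)

# 22 TB SAN at (at least) $100,000, as described above
print(round(cost_per_gb(100_000, 22), 2))  # about 4.55 $/GB, before the fiber gear
```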

    Apart from having central storage with higher redundancy, one advantage is performance. A SAN can stripe the data over all the disks in it, for example we have a RAID stripe consisting of over 70 disks...
  • LoneWolf15 - Thursday, October 19, 2006 - link

    (Since I can't edit)

I forgot to add that it even looks like Dell has some boxes like these that can be attached directly to their servers with cables (I don't remember, but it might be a SAS setup). Support for a large number of drives, and multiple RAID volumes if necessary.
  • Pandamonium - Thursday, October 19, 2006 - link

I decided to give myself the project of creating a server for use in my apartment, and this article (along with its subsequent editions) should help me greatly in this endeavor. Thanks AT!
  • Chaotic42 - Sunday, August 20, 2006 - link

This is a really interesting article. I just started working in a fairly large data center a couple of months ago, and this stuff really interests me. Power is indeed expensive for these places, but given the cost of the equipment and maintenance, it's not too bad. Cooling is a big issue though, as we have pockets of hot and cold air throughout the DC.

    I still can't get over just how expensive 9GB WORM media is and how insanely expensive good tape drives are. It's a whole different world of computing, and even our 8 CPU Sun system is too damned slow. ;)
  • at80eighty - Sunday, August 20, 2006 - link

    Target Reader here - SMB owner contemplating my options in the server route

    again - thank you

    you guys fucking \m/
  • peternelson - Friday, August 18, 2006 - link

Blades are expensive, but not so bad on eBay (as is regular server stuff bought second-hand).

Blades can mix architectures, e.g. IBM Cell-processor blades could mix with Pentium or maybe Opteron blades.

How important U size is depends on whether it's YOUR rack or a datacentre rack. Cost/sq ft is higher in a datacentre.

Power is not just cents per kWh paid to the utility supplier.

It is also the cost of cabling and PDUs,
the cost (and efficiency overhead) of a UPS,
the cost of remote boot (e.g. an APC MasterSwitch),
the cost of a transfer switch to let you swap out UPS batteries,
and the cost of having generator power waiting just in case.

    Some of these scale with capacity so cost more if you use more.

    Yes virtualisation is important.

IBM have been advertising server consolidation (i.e. no more invasion of beige boxes).

But also see STORAGE consolidation, e.g. an EMC array on a SAN. You have virtual storage across all platforms, adding disks as needed or moving the free space virtually onto a different volume as needed. Unused data can migrate to slower drives or tape.
  • Tujan - Friday, August 18, 2006 - link

"[(o)]/..\[(o)]"
  • Zaitsev - Thursday, August 17, 2006 - link


    We are well aware that we don't have the monopoly on wisdom, so feel free to sent us feedback.

    Fourth paragraph of intro.

    Haven't finished the article yet, but I'm looking forward to the series.
  • JarredWalton - Thursday, August 17, 2006 - link

Fixed.
  • Whohangs - Thursday, August 17, 2006 - link

Great stuff, definitely looking forward to more in-depth articles in this arena!
  • saiku - Thursday, August 17, 2006 - link

This article kind of reminds me of THG's recent series of articles on how computer graphics cards work.

    For us techies who don't get to peep into our server rooms much, this is a great intro. Especially for guys like me who work in small companies where all we have are some dusty Windows 2000 servers stuck in a small server "room".

    Thanks for this cool info.
  • JohanAnandtech - Friday, August 18, 2006 - link

Thanks! I've been in the same situation as you. Then I got a very small budget for upgrading our server room (about $20,000) at the university I work for, and I found out that there is quite a bit of information about servers, but all fragmented, and mostly coming from non-independent sources.
  • splines - Thursday, August 17, 2006 - link

Excellent work pointing out the benefits and drawbacks of blades. They are mighty cool, but not the second coming of the server christ that IBM et al. would have you believe.

    Good work all round. It looks to be a great primer for those new to the administration side of the business.
  • WackyDan - Thursday, August 17, 2006 - link

    Having worked with blades quite a bit, I can tell you that they are quite a significant innovation.

I'll disagree with the author of the article that there is no standard. Intel co-designed the IBM BladeCenter and licensed its manufacture to other OEMs. Together, IBM and Intel have/had over 50% share in the blade space. That share, along with Intel's collaboration, is by default considered the standard in the industry.

Blades, done properly, have huge advantages over their rack counterparts, e.g. far fewer cables. In IBM's, the midplane replaces all the individual network and optical cables, as the networking modules (copper and fibre) are internal and you can get several flavors... Plus I only need one cable drop to manage 14 servers....

And if you've never seen 14 blades in 7U of space, fully redundant, you are missing out. As for VMware, I've seen it running on blades with the same advantages as its rack-mount peers... and FYI, blades are still considered rack mount as well... No, you are not going to have any 16/32-ways as of yet... but still, blades really could replace 80%+ of all traditional rack-mount servers.
  • splines - Friday, August 18, 2006 - link

    I don't disagree with you on any one point there. Our business is in the process of moving to multiple blade clusters and attached SANs for our excessively large fileservers.

But I do think that virtualisation provides a great stepping-stone for businesses not quite ready to clear out the racks and invest in a fairly expensive replacement. We can afford to make this change, but many cannot. Even though the likes of IBM are pushing blades left, right and centre, I wouldn't discount the old racks quite yet.

And no, I haven't had the opportunity to see such a 7U blade setup. Sounds like fun :)
  • yyrkoon - Friday, August 18, 2006 - link

Wouldn't you push a single system that can run into the tens of thousands, possibly hundreds of thousands, for a single blade? I know I would ;)
  • Mysoggy - Thursday, August 17, 2006 - link

    I am pretty amazed that they did not mention the cost of power in the TCO section.

The cost of powering a server in a datacenter can be even greater than the TCA over its lifetime.

I love the people that say... oh, I got a great deal on this Dell, it was $400 off of the list price. Then they eat through the savings in a few months with shoddy PSUs and hardware that consumes more power.

  • JarredWalton - Thursday, August 17, 2006 - link

    Page 3:

    "Facility management: the space it takes in your datacenter and the electricity it consumes"

Don't overhype power, though. There is no way even a $5,000 server is going to use more than that in power costs over its expected life. Let's just say that's 5 years for kicks. From this page, the Dell Irwindale 3.6 GHz with 8GB of RAM maxed out at 374W. Let's say $0.10 per kWHr for electricity as a start:

    24 * 374 = 8976 WHr/Day
    8976 * 365.25 = 3278484 WHr/Year
    3278484 * 5 = 16392420 WHr over 5 years
    16392420 / 1000 = 16392.42 kWHr total

    Cost for electricity (at full load, 24/7, for 5 years): $1639.24

    Even if you double that (which is unreasonable in my experience, but maybe there are places that charge $0.20 per kWHr), you're still only at $3278.48. I'd actually guess that a lot of businesses pay less for energy, due to corporate discounts - can't say for sure, though.

    Put another way, you need a $5000 server that uses 1140 Watts in order to potentially use $5000 of electricity in 5 years. (Or you need to pay $0.30 per kWHr.) There are servers that can use that much power, but they are far more likely to cost $100,000 or more than to cost anywhere near $5000. And of course, power demands with Woodcrest and other chips are lower than that Irwindale setup by a pretty significant amount. :)

Now if you're talking about a $400 discount to get an old Irwindale over a new Woodcrest or something, then the power costs can easily eat up those savings. That's a bit different, though.
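The arithmetic above can be wrapped into a small sketch (a hypothetical helper, using the comment's own figures of 374 W, 5 years, and $0.10/kWh):

```python
def lifetime_energy_cost(watts, years=5.0, usd_per_kwh=0.10):
    """Electricity cost of a constant draw, running 24/7, over the server's life."""
    hours = 24 * 365.25 * years
    return watts * hours / 1000 * usd_per_kwh  # Wh -> kWh -> dollars

# Dell Irwindale example: 374 W for 5 years at $0.10/kWh
print(round(lifetime_energy_cost(374), 2))  # roughly 1639.24

# Break-even wattage: draw needed to burn $5,000 of electricity in 5 years,
# matching the ~1140 W figure cited above
print(round(5000 / lifetime_energy_cost(1)))
```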
  • Whohangs - Thursday, August 17, 2006 - link

Yes, but multiply that by multiple CPUs per server, multiple servers per rack, and multiple racks per server room (not to mention the extra cooling of the server room needed for that extra heat) and your costs quickly add up.
  • JarredWalton - Thursday, August 17, 2006 - link

Multiple servers all consume roughly the same power and have the same cost, so if you double your servers (say, spend $10,000 for two $5,000 servers), your power costs double as well. That doesn't mean that the power catches up to the initial server cost any faster. AC costs will also add to the electricity cost, but in a large datacenter your AC costs don't fluctuate *that* much in my experience.

Just for reference, I worked in a datacenter for a large corporation for 3.5 years. Power costs for the entire building? About $40,000-$70,000 per month (this was a 1.5 million square foot warehouse). Cost of the datacenter construction? About $10 million. Cost of the servers? Well over $2 million (thanks to IBM's eServers). I don't think the power draw from the computer room was more than $1000 per month, but it might have been $2000-$3000 or so. The cost of over 100,000 500W halogen lights (not to mention the 1.5 million BTU heaters in the winter) was far more than the cost of running 20 or so servers.

Obviously, a place like Novell or another company that specifically runs servers and doesn't have tons of cubicle/storage/warehouse space will be different, but I would imagine places with $100K-per-month electrical bills probably hold hundreds of millions of dollars of equipment. If someone has actual numbers for electrical bills from such an environment, please feel free to enlighten.
  • Viditor - Friday, August 18, 2006 - link

It's the cooling (air treatment) that is more important... not just the expense of running the equipment, but the real estate required to place the AC equipment. As datacenters expand, some quickly run out of room for all of the air treatment systems on the roof. By reducing heating and power costs inside the datacenter, you increase the value for each sq ft you pay...
  • TaichiCC - Thursday, August 17, 2006 - link

Great article. I believe the article also needs to include the impact of software when choosing hardware. If you look at some bleeding-edge software infrastructure employed by companies like Google, Yahoo, and Microsoft, RAID and PCI-X are no longer important. Thanks to software, a down server or even a down data center means nothing. They have disk failures every day and the service is not affected by these mishaps. Remember how one of Google's data centers caught fire and there was no impact to the service? Software has allowed cheap hardware that doesn't have RAID, SATA, and/or PCI-X, etc. to function well with no downtime. That also means TCO is mad low, since the hardware is cheap, and maintenance is even lower, since software has automated everything from replication to failovers.
  • Calin - Friday, August 18, 2006 - link

I don't think Google or Microsoft runs their financial software on a big farm of small, inexpensive computers.
While "software-based redundancy" is a great solution for some problems, other problems are totally incompatible with it.
  • yyrkoon - Friday, August 18, 2006 - link

Virtualization is the way of the future. Server admins have been implementing this for years, and if you know what you're doing, it's very effective. You can in effect segregate all your different types of servers (DNS, HTTP, etc.) into separate VMs, and keep multiple snapshots in case something does get hacked or otherwise goes down (not to mention you can even have redundant servers in software to kick in when that happens). While VMware may be very good compared to VPC, Xen is probably equally good by comparison with VMware; the performance difference, last I checked, was pretty large.

Anyhow, I'm looking forward to AnandTech's virtualization part of the article; perhaps we all will learn something :)
  • JohanAnandtech - Thursday, August 17, 2006 - link

Our focus is mostly on SMBs, not Google :-). Are you talking about cluster failover? I am still exploring that field, as it is quite expensive to build in the lab :-). I would be interested in which technique would be most interesting: a router which simply switches to another server, or a heartbeat system, where one server monitors the other.

I don't think the TCO is that low for implementing that kind of software or solution, or that the hardware is incredibly cheap. You are right when you are talking about "Google datacenter scale". But for a few racks? I am not sure. Working with budgets of 20,000 Euro and less, I'll have to disagree :-).

    Basically what I am trying to do with this server guide is give the beginning server administrators with tight budgets an overview of their options. Too many times SMBs are led to believe they need a certain overhyped solution.
  • yyrkoon - Friday, August 18, 2006 - link

Well, if the server is in-house, it's no biggie, but if that server is across the country (or world), then perhaps paying extra for that 'overhyped solution' so you can remotely access your BIOS may come in handy ;) In-house, a lot of people actually use inexpensive motherboards such as those offered by ASRock, paired with a Celeron/Sempron CPU. Now, if you're going to run more than a couple of VMs on this machine, then obviously you're going to have to spend more anyhow for multiple CPU sockets and 8-16 memory slots. Blade servers, IMO, are never an option. $4,000 seems awfully low for a blade server also.
  • schmidtl - Thursday, August 17, 2006 - link

The S in RAS stands for serviceability. Meaning, when the server requires maintenance, repair, or upgrades, what is the impact? Does the server need to be completely shut down (like a PC), or can you replace parts while it's running (hot-pluggable)?
  • JarredWalton - Thursday, August 17, 2006 - link

Thanks for the correction - can't say I'm a server buff, so I took the definitions at face value. The text on page 3 has been updated.
  • schmidtl - Thursday, August 17, 2006 - link

Looks good. A little history of the progression on the S of RAS: disk drives were the first, and the industry sees a large proliferation of RAID configurations with hot-swappable drives without any system performance degradation. High-end servers have redundant/hot-swappable power supplies (Dell brought that en masse to Intel servers). Recently, even CPUs have become hot-swappable, something that's been around for a few years on IBM's zSeries mainframes and now pSeries servers (POWER5+).
  • stevenestes - Tuesday, March 17, 2015 - link

I posted a video talking about server basics and an in-depth intro to servers; check it out if you'd like.
