Original Link: https://www.anandtech.com/show/2279



Introduction

Despite numerous attempts to kill it, it is still alive and kicking. It is "fat", some say, and it hogs up lots of energy and money. To others it is like a mosquito bearing malaria: nothing more than a transmitter of viruses and other parasites. This "source of all evil" in the IT infrastructure is also known as the business desktop PC. Back at the end of the nineties, Larry Ellison (Oracle) wanted to see the PC die, and proposed a thin client device as a replacement, dubbed the NC (Network Computer). Unfortunately for Oracle, the only thing that died was the NC, as the desktop PC quickly adapted and became a more tamable beast.

When we entered the 21st century, it became clear that the thin client was back. Server Based Computing (SBC), the prime example being Citrix MetaFrame Presentation Server, has become quite popular, and it has helped to reduce the costs of traditional desktop PC computing. What's more, you definitely don't need a full blown desktop client to connect to Citrix servers, so a thin client should be a more cost-friendly alternative. When Microsoft Windows Server 2003 came out with a decent Terminal Server, SBC became even more popular for light office work. However, the good old PC hung on. First, as interfaces and websites became more graphically intensive, the extra power found in typical PCs made thin clients feel slow. Second, the easily upgradeable PC offered better specs for the same price as the inflexible thin client. Third and most importantly, many applications were not - and still are not - compatible with SBC.

That all could change in 2007, and this time the attempt on the PC's life is much more serious. In fact, the murder is planned by none other than the "parents" of the PC. Father IBM is involved, and so is mother Compaq (now part of HP). Yes, two of the most important companies in the history of the PC are ready to slowly kill the 25-year-old. Will these super heavyweights finally offer a more cost-friendly alternative to the desktop PC? Let's find out.



The Rise of Thin Clients

Every IT professional has heard the various reasons why desktop PCs are not very efficient devices in a professional environment. It's pretty simple: there's nothing "personal" about the data that you process on your PC at work. In many cases the data represents a lot of work and is worth a lot of money, so it should never be saved on a local hard disk that could crash or be wiped out. Also, as users try to personalize their PCs, they sometimes configure software badly, introduce malware, perhaps crack open the case on occasion, and so on. All this means that PCs require quite a bit of repair and maintenance time from the helpdesk people. As desktops have become more powerful, power requirements have also increased quite a lot. There is nothing new about these complaints: as early as 1987 the Gartner Group drew attention to the Total Cost of Ownership (TCO) associated with (badly) managed business desktops.

That is the reason SBC is becoming so popular, whether in the form of Windows Terminal Server or Citrix Metaframe. 50 to 100 users can connect via thin clients to one central server that runs one instance of Windows Server 2003. That means that you only need to manage one copy of Windows Server instead of all those copies of Windows 2000/XP, which all need to be configured and updated on a regular basis. The thin client has no moving parts: hard disks are absent and the 6W to 9W CPUs require only passive cooling. All user profile information is stored on a central server, making it possible to quickly replace a faulty client.

However, the number of applications that will run properly on a thin client with SBC is limited. If you need to develop a new application or report to your management using heavy data mining, the typical VIA Eden 800 MHz or AMD Geode NX 1 GHz processors found in most thin clients won't get you very far. If you need to perform some heavy CAD or 3D animation work, you are definitely out of luck. That is where the business desktop still makes a lot of sense.

So what is the alternative that HP and IBM are proposing? As IBM and HP account for 80% of the very profitable blade market, it's no surprise that the new PC alternative has taken the shape of a blade. HP and IBM came up with two solutions: the blade PC and the workstation blade. IBM only offers the workstation blade, and for the blade PC you have to go to Lenovo. HP offers everything, and calls this solution CCI or Consolidated Client Infrastructure. Before we discuss these solutions in more detail, we need to investigate the hardware that is the foundation of this concept.

Both HP and IBM use the same basic configuration as you can see below.


A thin client and blade PC should replace the business desktop PC, according to HP and IBM

The basic idea is that a stateless thin client accesses a blade PC and that all valuable data is stored on a shared storage device.

The advantages are:
  • The PC user cannot store any valuable documents on the client, thus data is kept central and is always backed up
  • A thin client can be replaced in a matter of minutes instead of hours
  • There is less heat generated in the office: a thin client needs about 15 to 30W instead of the 50W to 200W typical of a business desktop PC
  • The electricity bill should be lower, as even a blade PC plus thin client combination consumes less power than a typical desktop PC (according to HP)


The Rise of Thin Clients, Cont'd

To verify these claims, we'll skip the woolly language and instead we'll check out the hardware specs, starting with the thin client.


HP's t5720 thin client
  • 6W AMD Geode NX 1500 1 GHz
  • 256 or 512MB DDR SDRAM (16MB UMA video)
  • SiS741GX integrated UMA video 16MB
  • 4 USB ports (rear)
  • 0.5 or 1GB Flash drive
The Geode NX processor can be considered a low power version of the 3-way superscalar Athlon "Thunderbird" architecture. Just like the Thunderbird, it has a total of 384 KB of cache (128 KB L1 + 256 KB L2). The Geode should be more than powerful enough to display the RDP information (about 30 Kbit/s) it gets from the blade PC. The SiS741GX is a very humble 3D graphics chip (low end GeForce 3 performance), but that doesn't matter as it only has to display 2D graphics. It takes 16MB of "central" DDR RAM and is capable of displaying a resolution of 2048x1536, which is quite important as this thin client can also be used as a thin workstation (see below). The thin client has two internal USB connectors, which enable you to add a wireless USB key and, if necessary, 2GB of USB storage space.
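As a quick sanity check of those specs - our own back-of-the-envelope math, not HP's - the snippet below verifies that a full 2048x1536 desktop at 32-bit color indeed fits in the 16MB of shared memory the SiS741GX reserves.

```python
# Back-of-the-envelope check: does a 2048x1536, 32-bit desktop fit in the
# 16MB of system RAM that the SiS741GX sets aside as a UMA frame buffer?

width, height = 2048, 1536   # maximum resolution of the t5720
bytes_per_pixel = 4          # 32-bit color

framebuffer_mb = width * height * bytes_per_pixel / (1024 ** 2)
print(f"Frame buffer: {framebuffer_mb:.0f} MB of the 16 MB UMA allocation")
# -> exactly 12 MB, so a single 2D desktop fits comfortably; there is no room
#    for double-buffered 3D at that resolution, but a thin client never needs it.
```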

Note that the thin client has absolutely no moving parts: no fans, and no hard disk. The 1GB flash drive is loaded with Windows XP Embedded (XPe), a version of Windows XP with a much smaller footprint but with full Win32 API support. The client is thus compatible with all Windows XP applications. Windows XPe can work in about 40-64MB of RAM.



Blade PCs

Next up is the blade PC. The challenge is to make sure that the blade PC and the thin client together consume less energy than a typical business desktop. The big advantage of a blade PC is that all blades draw from the same power supply, which allows for the use of a more efficient power supply. HP has designed an enclosure that can house up to twenty blade PCs in a single 3U rack mount chassis, all of which are powered by the same redundant 600W power supply. In theory, that means each blade PC has to consume less than 30W.

The HP BladeSystem bc2000 and bc2500 consist of:
  • Low voltage AMD Athlon 64 X2 Dual-Core 3000+ processor (1.60 GHz 2x512K L2) (bc2500) or AMD Athlon 64 2100+ (1.2 GHz 512K L2) (bc2000)
  • ATI RS690T/SB600 w/integrated DirectX 9 compliant graphics
  • Up to 2x2GB 667 MHz DDR2 SDRAM
  • 2.5" 80GB 5400 rpm SATA 1.5 Gb/s
  • 100 Mbit Ethernet (Broadcom 5906M)
When looking at the CPU and HP's PSU specs, something isn't right. The AMD Athlon 64 2100+ is indeed a very low voltage version of the single core Athlon 64 and probably consumes something like 10-15W. However, the Athlon 64 X2 LV at 1.6 GHz is specced as a 35W CPU. Yes, the CPU will probably never need more than 30W in practice, but a 600W PSU is still never going to cope with 20 blades when the CPU alone can consume that much. Each blade can also be equipped with two DIMMs and a mobile SATA hard disk, which is easily another 15W. Still, most press releases and documents talk about a 600W PSU. Delving a bit deeper, we found a PDF with a short chapter that describes a 1000W PSU.

If you only need a 1.2 GHz Athlon 64, it becomes less clear why you would allocate a blade PC to each user. You might be better off with VDI, for example (see below). It might seem like nitpicking, but if you want to add a full rack of relatively modern PCs with dual core CPUs, the system administrator of the data center has to cope with another 12-14 kW (bc2500). If you choose the bc2000 blade, you are looking at only 7.6 kW per rack. Clearly there is nothing magical about blade PCs: a 50W blade PC and a 15-30W thin client are not going to save much power compared to a desktop with the same (low voltage) dual core CPU.
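To put some numbers behind that skepticism, here is a rough, worst-case power budget for a fully populated bc2500 enclosure and for a 42U rack of them. The per-component wattages and the fourteen-enclosures-per-rack assumption are our own estimates, not HP figures.

```python
# Rough worst-case power budget for a fully populated HP bc2500 enclosure.
# All per-component wattages below are our own estimates, not official HP data.

blades_per_enclosure = 20
cpu_tdp        = 35   # Athlon 64 X2 LV 3000+ is specced as a 35W part
dimms_and_disk = 15   # two SO-DIMMs plus a 2.5" 5400 rpm SATA hard disk
chipset_misc   = 10   # RS690T/SB600, NIC, VRM losses (assumed)

per_blade = cpu_tdp + dimms_and_disk + chipset_misc
enclosure = per_blade * blades_per_enclosure
print(f"Worst case per blade:     {per_blade} W")
print(f"Worst case per enclosure: {enclosure} W (versus a 600 W PSU)")

# Assuming fourteen 3U enclosures in a 42U rack:
enclosures_per_rack = 42 // 3
print(f"Worst case per rack:      {enclosure * enclosures_per_rack / 1000:.1f} kW")
# HP's own 12-14 kW per rack figure for the bc2500 works out to roughly 45-50 W
# per blade - still well above what a single 600 W PSU could feed to twenty of
# them, which is presumably why the 1000 W PSU option exists.
```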

Let's take a closer look at the blade PC.


The HP blade PC bc2000

It is a pretty elegant and slim design with a small heatsink on top of the CPU and two SO-DIMM slots for memory expansion. If you don't want your blade PC to be headless, you can add an old Lynx EM4+ video chip, which supports an 800x600 resolution when you access the blade PC directly.


Adding a video chip to the blade PC

The video chip plugs into the black "IDE-like" connectors. This second generation of blade PCs still has room for improvement: why use a hard disk at all if your data is going to be saved on centralized network storage? Solid state drives were probably still too expensive when this blade PC was designed, but they are the obvious next step.



Consolidated Client Infrastructure (CCI)

The blade chassis in which these blade PCs find a home is pretty simple. It consists of two redundant power supplies, a few redundant fans, and a BladeSystem switch.


Inside the PC blade chassis

The HP BladeSystem PC blade switch - the thin "drawer" beneath the fan module - is a Layer 2 switch that links the 20 blade PCs (100 Mbit) to four Gigabit Ethernet uplink ports. The goal is to have two 1 Gbit/s uplinks with full failover to the other ports. You can also use the four fiber-optic Ethernet SFP slots if you buy the optional HP SX SFP transceivers. Unfortunately, using fiber optic networking also means that the copper Ethernet ports are disabled.


The blade switch is quite capable, supporting up to 256 VLANs, Spanning Tree Protocol, link aggregation, QoS, and trunking.

So now that we know what components are used, how does this all work? You could simply assign one thin client to one blade PC - the static CCI model. But since moving over to CCI is all about lowering TCO, there is a better way to do it. A user logs into any thin client. The thin client connects and authenticates to an Active Directory server, which works together with the Session Allocation Manager (SAM). Based on its database, SAM determines whether the user already has a desktop session running. If so, SAM reconnects the user to that session. If not, SAM establishes a session, connects the thin client to an appropriate blade PC, and tells the blade PC where it can find the user's documents, which are stored on the central storage server (NAS or SAN).

HP calls this the dynamic model of CCI. To make this work you need the following:
  • As many thin PCs as users connected to an Ethernet LAN
  • As many blade PCs as the maximum number of concurrent users, plus (at least) one spare for failover purposes, connected via the PC blade switch to the same Ethernet LAN
  • One SAM and Active Directory server connected to the same LAN
  • A central storage server connected via FC, iSCSI (SAN) or the LAN (NAS) to the SAM server
Dynamic CCI comes with several advantages. A bad blade is not a problem: SAM will simply not use it and will assign another blade PC to the next user that logs in, and you can hot swap the bad blade PC for a new one. HP also offers Rapid Deployment software, essentially a layer on top of the Altiris Deployment software, which makes it easy and flexible to send a drive image to the right blade PC.
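To make the dynamic allocation flow a bit more concrete, the sketch below mimics the decision logic described above: reconnect a user to an existing session if there is one, otherwise hand out a free blade and remember the mapping. This is purely illustrative logic of our own; it is not HP SAM code, and all names are invented.

```python
# Illustrative sketch of the dynamic CCI allocation logic described above.
# This is NOT HP SAM code; the class and field names are invented for the example.

class SessionAllocationManager:
    def __init__(self, blade_pool):
        self.free_blades = set(blade_pool)   # healthy, unassigned blade PCs
        self.sessions = {}                   # user -> blade running that user's session

    def connect(self, user):
        """Return the blade PC the thin client should be pointed at."""
        if user in self.sessions:            # existing session: reconnect to it
            return self.sessions[user]
        if not self.free_blades:             # no spare blades left
            raise RuntimeError("No free blade PCs available")
        blade = self.free_blades.pop()       # otherwise allocate a spare blade
        self.sessions[user] = blade
        return blade

    def mark_faulty(self, blade):
        """A bad blade is simply removed from the pool; new users get another one."""
        self.free_blades.discard(blade)

    def logoff(self, user):
        self.free_blades.add(self.sessions.pop(user))


sam = SessionAllocationManager(["bc2000-01", "bc2000-02", "bc2000-03"])
print(sam.connect("alice"))   # new session: gets a free blade
print(sam.connect("alice"))   # reconnect: same blade as before
```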



Workstation Blades

The IT experts among our readers are probably protesting: CCI was launched in 2005 and blade PCs have been on the market since 2004-2005. So what's new? Besides the fact that the technology has matured and been upgraded, it is now able to meet the needs of more demanding applications like those used by CAD users, data miners, financial traders, and even 3D artists. In other words, the blades now also offer higher graphics and CPU performance. The "magic" that makes this work is replacing the normal remote protocols such as ICA (Citrix) or RDP (Microsoft) with a proprietary lossless compression and encryption protocol. IBM has not given its protocol a name yet to our knowledge; HP calls its implementation Remote Graphics Software, or RGS.

The blade workstation performs all 2D and 3D calculations and compresses and encrypts each frame before it's sent to the thin client. This kind of network transmission of 3D and 2D graphics at high "refresh" rates requires between 2 and 4 Mbit/s of bandwidth on average. The goal is to make it feel like the manipulation of 3D CAD is actually being done on the thin client. For this you need at least 50-60 frames per second and a response time of less than 20ms. With an excellent broadband internet connection, a leased line, or a LAN connection, it should be possible to get good performance even if your thin client is 2000 miles away from the blade PC or workstation blade. As long as your network connection doesn't add more than 10-15 ms to your frame time, it feels like you are working on a normal workstation. RGS works on both Red Hat Linux and Windows XP. You can also license RGS for blade PCs: HP's blade PCs contain a mobile version of AMD's 690G chipset.
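The bandwidth and frame rate figures above translate into a simple per-frame budget. The arithmetic below is our own, based on the numbers HP and IBM quote; it is not vendor data.

```python
# What does "2-4 Mbit/s at 50-60 fps" actually mean per frame?
# Our own arithmetic based on the figures quoted above, not vendor data.

for mbit_per_s in (2, 4):
    for fps in (50, 60):
        kb_per_frame = mbit_per_s * 1_000_000 / 8 / fps / 1024
        print(f"{mbit_per_s} Mbit/s at {fps} fps -> "
              f"~{kb_per_frame:.1f} KB per compressed (often partial) frame update")

# Latency side of the budget: the network round trip may add no more than about
# 10-15 ms to the frame time before the "local workstation" illusion breaks down,
# which effectively limits you to a LAN, a leased line, or very good broadband.
```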

IBM's HC10 and HP's XW460c

Both the IBM and HP blades are based on the Intel 5000P chipset platform. The fastest supported CPU is a dual core Xeon 5150 at 2.66 GHz. The HP blade supports 16GB of RAM, the IBM blade 8GB. However, the IBM has the faster graphics card: it can be equipped with up to the modern Quadro FX1600M (256-bit interface), while the HP is limited to the relatively low-end, older FX540M 128MB (128-bit memory interface). Notice that both IBM and HP are using lower clocked mobile versions, a consequence of the cramped blade chassis. Both IBM and HP make quite a bit of noise about their workstation blades requiring much less power than a typical workstation, around 150-200W versus more than 300W for a typical PC workstation. Although the workstation blades are slightly more efficient thanks to several blades sharing one or two big PSUs, the biggest gains come from using slower performing mobile video chips.



CCI, PC, or Workstation Blades: Does It Make Sense?

There is no question that both HP and IBM offer much more than hardware, and they focus on well-rounded solutions. Service and support, network and storage infrastructure, software deployment, and very low maintenance management: it's all there. That can save a lot of money. However, the most important question is: when does it make sense?

From a Total Cost of Acquisition (TCA) point of view, blade PCs and workstation blades are quite a bit more expensive than traditional desktops. We will ignore the cost of the central storage server, the software, and (most likely) the database server, as these are necessary in both models. Note that with traditional PCs you can also set up "roaming profiles" that allow your PCs to be stateless as well, which helps to make sure that all (or at least most) data is saved to a central storage server. Let's first look at the blade PCs:
  • 20 blade PCs cost slightly less than $20,000 (if you choose the HP bc2000 blade PC)
  • One chassis (with switch) costs about $7000
  • A t5720 thin client costs about $500-$600
So in total you are looking at a cost of almost $2000 per seat. Included in that price is a $3000 switch; CCI requires a switch for the blade PCs and one for the thin clients, while traditional desktop PCs only require one switch. An HP Compaq dc5700 SFF business PC with similar specs to our blade PC costs about $600 to $900, and we are sure that some of our readers could find better deals. So the TCA of blade PCs seems to be up to three times greater than that of a traditional business desktop.
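The quick and dirty math behind those per-seat numbers, spelled out using the rough 2007 list prices quoted above:

```python
# Quick and dirty TCA per seat for the dynamic CCI model with bc2000 blade PCs,
# using the rough list prices quoted above. Adjust the numbers to your own quotes.

blades              = 20
blade_cost          = 20000 / blades     # ~$20,000 for 20 bc2000 blade PCs
chassis_with_switch = 7000               # enclosure plus PC blade switch
thin_client         = 550                # t5720, $500-$600

cci_per_seat = blade_cost + chassis_with_switch / blades + thin_client
print(f"CCI: ~${cci_per_seat:,.0f} per seat")                 # roughly $1,900

for desktop in (600, 900):                                    # comparable dc5700 SFF
    print(f"vs. a ${desktop} desktop: {cci_per_seat / desktop:.1f}x more expensive")
```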

Next let's look at workstation blades. As IBM has the most interesting workstation blade for CAD engineers, we'll look at it.
  • An IBM HC10 with Core 2 Duo E6700 (2.66GHz 4MB L2), 2GB RAM, and NVIDIA FX1600M is a decent workstation and costs about $3000
  • Workstation blades are bigger, so both IBM and HP require a full blown blade chassis. The chassis and switch cost about $7000 and provide housing for 14 blades
  • A thin client costs about $500-$600
So here we are looking at a cost of more than $4000 per workstation, while a similar workstation PC costs about $2000. In other words, CCI is roughly twice as expensive.
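The same back-of-the-envelope exercise for the workstation blade:

```python
# Same quick and dirty exercise for the IBM HC10 workstation blade quoted above.

hc10          = 3000        # Core 2 Duo E6700, 2GB RAM, NVIDIA FX1600M
chassis_share = 7000 / 14   # full blade chassis + switch, housing 14 blades
thin_client   = 550         # t5720 class thin client

per_seat = hc10 + chassis_share + thin_client
print(f"Workstation blade: ~${per_seat:,.0f} per seat "
      f"vs. ~$2,000 for a comparable workstation PC")
# -> roughly $4,050 per seat, about twice the price of the traditional workstation
```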

TCO must save the day for CCI, of course. Both HP and IBM/Lenovo refer to the IDC TCO study "The Tangible Benefits of Blade Clients", which describes the cost of each solution per year over a period of four years.


The big problems with this IDC study are:
  • It is sponsored by a vendor of blade PCs (Clearcube)
  • It is based on the feedback of customers that have already implemented blade PCs
Now, that is a recipe for disaster. A CIO that took the risk of investing in a new technology is a questionable source of objective financial numbers. After all, he had to convince the CFO that the new technology would pay for itself in a few years. The acquisition cost immediately draws our attention: according to the TCO study it is only 50% higher than that of a traditional desktop PC setup, which is very low considering our "quick and dirty" calculations above. While the break/fix numbers seem realistic, the system administration and software deployment costs seem very high for the desktop PCs: you can use deployment software like Altiris on desktop PCs too, after all.

The savings reported here are probably the result of rethinking the system administration and software deployment processes rather than solely the virtue of blade PC technology. In other words, even if these companies had kept their desktop PCs, considerable savings would have appeared once the smarter system administration and software deployment processes were implemented.

Also note that the power consumption savings are nothing to write home about, even though this was recorded at a time when the power hogging Pentium 4 reigned over the business desktop. So now that we have some insight into the hard numbers, we can analyze things further.



Making sense of CCI

CCI and blade PCs claim to offer three big advantages:
  1. Less power consumption, especially in the office space
  2. Data is centralized and better protected
  3. Less administration costs, less downtime
Less power consumption is very unlikely as:
  • Current business desktops can use much more efficient CPUs and PSUs; low voltage desktop CPUs that consume less than 35W are available
  • Many business desktops are replaced by laptops which consume less than 40W in total
  • In the CCI model you have two devices instead of one, which makes it much harder to consume less power than a single PC
Centralizing data doesn't require CCI. It is enough to use roaming profiles that are only allowed to write to network drives. So the big gains must come from the third point: less administration, less downtime. Indeed, since the thin client doesn't have any moving parts, it breaks down less often. Because it runs a rather basic operating system instead of a bloated one, it requires a lot less attention. And with all of the software running on the blade, it is somewhat easier to manage.

The biggest window of opportunity for CCI is in enterprises with a lot of geographically dispersed sales offices, such as a bank. You don't want a system administrator traveling around to visit all those sales offices, and you don't want to see your sales people standing by doing nothing while their PCs are being repaired or reconfigured. You just keep one or two (or more) thin clients in reserve, and if one breaks down, you immediately replace it with a new one. But even then, CCI might be caught between a rock and a hard place, as there are a lot of competing technologies that try to lower the TCO of the desktop PC.



Between VDI and SBC

Meet the rock - VDI - and the hard place: SBC. Server Based Computing (SBC) might not be the most exciting technology, but it is exciting from a cost point of view. It requires one large server, which means that you have to maintain only one server OS, and that server can support - depending on the amount of RAM you put in the machine - up to 100 users or more. As sessions don't require that much memory and you have only one kernel running, the memory demands are generally low too. The bandwidth demands are ridiculously low: one user can work over an ISDN line. Best of all, it is much cheaper to let 100 users access one server than to pay for 100 OS licenses. Of course, SBC can only be used for "CPU light" software, and for software that is compatible with Terminal Services or Citrix.
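As an illustration of why SBC is so cheap to scale, here is a rough sizing sketch for one Terminal Server hosting 100 light office users. The per-session RAM figure is our own rule-of-thumb assumption, not a vendor recommendation; the bandwidth figure is the 0.02-0.03 Mbit/s per user mentioned elsewhere in this article.

```python
# Rough SBC sizing sketch for 100 light office users on a single Terminal Server.
# The per-session RAM figure is our own rule-of-thumb assumption, not vendor data.

users              = 100
ram_per_session_mb = 75       # assumed: light office session (mail, Office, browser)
base_os_mb         = 1024     # one Windows Server 2003 instance plus services

total_ram_gb = (base_os_mb + users * ram_per_session_mb) / 1024
print(f"RAM: ~{total_ram_gb:.1f} GB for {users} concurrent sessions "
      f"(one reason a 64-bit OS with more than 4GB of RAM helps)")

rdp_kbit_per_user = 30        # the 0.02-0.03 Mbit/s per user quoted in this article
print(f"Bandwidth: ~{users * rdp_kbit_per_user / 1000:.0f} Mbit/s aggregate; "
      f"a single user even fits on a 64-128 kbit/s ISDN line")
```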

VDI doesn't have those limitations on CPU usage and application compatibility. Basically, you run a lot of desktops in separate virtual machines on one or more servers. Compatibility is not a problem, as each application gets its own OS running in its own virtual machine. In a sense VDI offers the same thing as CCI, but on virtual instead of physical machines. Our own research shows that if you attach one virtual machine to one CPU core, the performance loss caused by the virtual machine monitor is negligible: between 1% (CPU intensive applications) and 8% (memory intensive applications). Once you use more virtual machines than CPU cores, this quickly rises to 15% and more in some cases. That is still acceptable, but if you absolutely want the same performance as CCI, you need one core per desktop.
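What does "one core per desktop" mean in practice for a 2007-era server? The sketch below extrapolates from the measurements above; the two-VMs-per-core scenario is our own illustrative assumption.

```python
# Consequence of the "one core per desktop for full performance" finding above,
# applied to a dual socket quad core (Xeon 53xx "Clovertown") server.
# The 2 VMs-per-core scenario is our own illustrative assumption.

def vdi_desktops(sockets, cores_per_socket, vms_per_core=1.0):
    """How many desktop VMs fit on a server at a given VM-per-core ratio."""
    return int(sockets * cores_per_socket * vms_per_core)

print(f"Full performance (1 VM per core): {vdi_desktops(2, 4)} desktops per server")
print(f"Accepting ~15% overhead (2 VMs per core): "
      f"{vdi_desktops(2, 4, vms_per_core=2.0)} desktops per server")
# Compare that with SBC, where the same class of server hosts on the order of
# 100 light sessions - which is exactly the licensing problem discussed next.
```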

The disadvantage is that VMware ESX licenses are not cheap unless you run a lot of virtual machines per socket. It is clear that quad core CPUs are the way to go here, which makes the Intel Xeon 53xx ("Clovertown") very attractive. VDI is certainly not a replacement for Terminal Services (TS) either: with TS you have to manage one OS for maybe 100 users; with VDI you have to manage 100 OSes for 100 users. For those of you who are relatively new to SBC, VDI, and blade PCs, the table below gives a brief overview.

SBC Overview

| Feature | Server Based Computing | VDI | Blade PC | Workstation Blade | Traditional Fat Client |
|---|---|---|---|---|---|
| Client | Terminal PC | Thin PC | Thin PC | Thin PC | Desktop PC |
| Task of the terminal server | Server processes application data for many clients | One virtual PC processes data for one thin client | One blade PC processes data for one thin client | One workstation blade does it all, sending a compressed and encrypted graphics stream to one thin client | No terminal server |
| Task of the client | Displaying GUI | Displaying GUI | Displaying GUI | Displaying the graphics stream from the workstation blade | Displaying GUI & processing business logic |
| Client to terminal server relation | n to 1 | 1 to 1 | 1 to 1 | 1 to 1 | N/A |
| 3D graphics? | No | No | No | Yes | Yes |
| Typical tasks | Light office work | Software that is not too intensive and that doesn't work with SBC | Data mining, application development | CAD | Light office work to heavy CAD |
| Impossible tasks | CPU or graphics intensive apps | Graphics intensive apps | Graphics intensive apps | High-end CAD applications | None |
| Protocol | ICA (Citrix), RDP (MS) | RDP | RDP | RGS (HP), IBM proprietary protocol | N/A |
| Bandwidth | 0.02 to 0.03 Mbit/s | 0.03 Mbit/s | 0.03 Mbit/s | 2-4 Mbit/s | N/A |

So where does CCI fit? Below is how HP positions blade PCs relative to SBC and VDI: the X-axis represents the increasing complexity of the user's required software, the Y-axis the level of performance needed.


HP sees a lot of overlap between blade PCs and SBC; we don't see that much room. With quad core CPUs, 64-bit Windows 2003 (and Linux), and decent Gigabit NICs, performance shouldn't be a problem for SBC. We mention the 64-bit OS because it allows servers to take advantage of large swap spaces and more than 4GB of physical RAM. It is hard to see any reason to go with a blade PC when your application is compatible with Terminal Services.



Conclusion

In many scenarios the blade PC is a rather complex way of solving the TCO problem of the traditional desktop PC. It seems to be a case of throwing out the baby with the bathwater. There is a lot that can be done to lower the TCO of the PC:
  • Use energy efficient components; PCs that use one or more mobile components are already available
  • Avoid moving parts such as CPU fans and hard disks; this will soon be possible, as solid state drives get cheaper and low voltage CPUs are already available
  • Make the PC a stateless device by using roaming profiles
  • Use (semi) automated management of images (Altiris or similar)
  • Store all data and profile information on network storage
Basically you could call this thin PC a mix of mobile and thin client technology. This seems to beat blade PCs:
  • Downtime is minimized: you can swap one PC for another one
  • Good 3D graphics performance does not require large amounts of continuous bandwidth
  • As you have only one computer instead of two (thin client + blade PC), energy bills will be lower
  • TCA is much lower
  • Most of the administration and management costs are reduced, making the difference with a blade PC very small
So it seems that for heavier office and CAD tasks a well managed PC is still the better deal. The relatively high price that HP and others ask for a low end PC in "Small Blade Form Factor" makes it hard to get a good return on investment.

Right now, VDI is not really a competitor, as you can only run a limited number of desktops on one server if you want the same guaranteed performance and RAM as with blade PCs. As a result, the licensing cost of the virtualization software per desktop quickly increases. That is why current implementations of VDI mostly target light loads that are not compatible with SBC. Even when you run 20 or more desktops on one server, the licensing costs (VMware ESX/Infrastructure, for example) remain significant.

CCI and blade PCs do have a window of opportunity. As long as VDI licensing costs are relatively high and most business PCs are not (fully) optimized for TCO, blade PCs make a lot of sense in an enterprise with many small remote offices. The blades keep the actual computing infrastructure in the data center, close to where the system administrators are, so that when something like a software deployment goes wrong, it can be fixed locally instead of at a remote office.

The business PC is not going away (at least not yet), but as virtualization gets more mature and flexible (and as graphics cards get virtualized too), and as competition heats up, the most promising technology is VDI; it might even prove to be a full alternative to blade PCs. For now, SBC and Terminal Services are the only alternatives to desktop PCs with a proven track record of lowering TCO. At least that is our analysis; what do you think? Let us know!

We'd like to thank Koen Slechten, De Witte Pierre (HP Belgium) and Carine Goris (IBM Belgium) for their help.
