Public versus private cloud
Just a few years ago, getting an application or IT service up and running took far too long. Once the person in charge of the project got permission to invest in a new server, the IT infrastructure team ordered one, and it typically took a few weeks for the server to arrive. If all went well, the infrastructure team then installed the OS and handed the person in charge of the software deployment a static IP to remotely install all the necessary software.

In the virtualization age, the person in charge of the application project calls the infrastructure people, and a bit later a virtual machine is provisioned and ready for installation.

So if you have already invested heavily in a virtualized infrastructure and virtualization experts, it is only logical to want the flexibility of the public cloud in house. The Distributed Resource Scheduler (DRS), as VMware calls it, is the first step. A cluster scheduler that shuts down unnecessary servers, boots them up when needed, and places VMs on the best server is a step forward toward a "private cloud". According to VMware, about 70 to 80% of ESX-based datacenters use DRS. Indeed, virtualization managers such as VMware vCenter, Citrix XenCenter, ConVirt and Microsoft System Center VMM have made virtualized infrastructure a lot more flexible.

Some people feel that "private cloud" is an oxymoron: unless you are the size of Amazon or Google, a private cloud can never be as elastic as the "real" clouds, cannot leverage the same economies of scale, and does not eliminate CAPEX.

But that is theoretical nitpicking. Public clouds can indeed scale automatically (see Amazon's Auto Scaling) with much greater elasticity, but you will probably only use that capability with the brakes on: scaling automatically to meet the traffic of a DoS attack could get pretty expensive. Terremark Enterprise Cloud lets virtual machines "burst" for a while to absorb peak traffic, but the burst is capped, for example at 1 GHz of extra CPU power or 1 GB of memory. It won't let you triple your VM resources in a few minutes, which also protects you from a sky-high bill afterwards.
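The "brakes on" idea boils down to scaling with demand up to a hard ceiling. The sketch below is a minimal illustration of that rule; the function name and numbers are made up for the example and are not Amazon's actual Auto Scaling API:

```python
import math

def desired_instances(requests_per_sec, capacity_per_instance,
                      min_instances, max_instances):
    """Scale out with the load, but never past a hard cap (the "brakes")."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(needed, max_instances))

# Normal traffic: the fleet follows demand.
print(desired_instances(450, 100, 2, 10))    # 5 instances
# DoS-level traffic: the cap keeps the bill bounded.
print(desired_instances(50000, 100, 2, 10))  # 10 instances, not 500
```

Without the `max_instances` cap, the second call would ask for 500 instances, which is exactly the runaway bill the article warns about.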

And the CAPEX elimination? Just go back one page and look at the Amazon EC2 pricing. Running your server 24/7 on the "pay only for what you use" pricing will cost you way too much. You will prefer to reserve your instances/virtual machines and pay a "set up" or one-time fee: a capital investment that lowers the cost of renting a VM.
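A quick back-of-the-envelope calculation shows when that one-time fee pays for itself. The rates below are hypothetical placeholders, not EC2's actual prices:

```python
def break_even_hours(on_demand_rate, reserved_upfront, reserved_rate):
    """Hours of runtime after which a reserved instance becomes cheaper
    than pure pay-per-use."""
    return reserved_upfront / (on_demand_rate - reserved_rate)

# Hypothetical rates: $0.12/h on demand versus a $227 one-time fee
# plus $0.04/h for the reserved instance.
hours = break_even_hours(0.12, 227.0, 0.04)
print(round(hours))  # 2838 hours, i.e. under four months of 24/7 use
```

For any server that runs around the clock, the upfront fee amortizes quickly, which is why the "pay only what you use" model loses its appeal for steady workloads.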

The Cloud Killer feature

The real reason why cloud computing is attractive is not elasticity or economies of scale. If things work out well, those are bonuses, but not the real "killer feature". The killer is instantaneous self-service IT consumption, or in plain human language: the fact that you can simply log in and get what you need in a few minutes. This is the feature that most virtualization managers lacked until recently.

OpenQRM, an open source infrastructure management solution, is a good example of how the line between a public and a private cloud is getting blurrier. This datacenter manager no longer needs a hypervisor pre-installed: it manages physical machines and installs several different hypervisors (Xen, ESX and KVM) on bare metal in the same datacenter.

The Cloud Plugin and Visual Cloud Designer make it possible to create virtual machines on the fly and attach a "pay as you use" accounting system to them. OpenQRM is more than a virtualization manager: it allows you to build real private clouds.

So the real difference between a flexible, intelligent cluster and a private cloud is a simple interface that allows the "IT consumer" to get the resources he or she needs in minutes. And that is exactly what VMware's new vCloud Director does: it adds a self-service portal that lets users get the resources they need quickly, all within the boundaries set by IT policies.
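The core of such a portal is a quota check: grant the request only if it stays inside the IT policy. The sketch below uses a hypothetical quota model; it is not vCloud Director's actual interface:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    """Resource ceiling set by the IT department for one consumer."""
    max_vcpus: int
    max_memory_gb: int

def approve_request(vcpus, memory_gb, used_vcpus, used_memory_gb, policy):
    """Grant a self-service VM request only if current usage plus the
    request stays within the policy's limits."""
    return (used_vcpus + vcpus <= policy.max_vcpus and
            used_memory_gb + memory_gb <= policy.max_memory_gb)

policy = Policy(max_vcpus=32, max_memory_gb=64)
print(approve_request(4, 8, 24, 40, policy))   # True: still inside the quota
print(approve_request(16, 8, 24, 40, policy))  # False: would exceed 32 vCPUs
```

The user gets an instant yes or no instead of a ticket queue, while IT keeps control of the total resource envelope.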

So private clouds do have their place. A private cloud is just a public cloud that happens to be operated by an internal infrastructure staff rather than an external one; or a public cloud is a private cloud that is outsourced. Both are accessible over the internet and on the corporate LAN, and some private clouds might even be larger than some public ones.

Maybe one day we will laugh at the "Cloud Computing" name, but Infrastructure as a quick Service (IaaS) is here to stay. We want it all, and we want it now.


26 Comments


  • pjkenned - Monday, October 18, 2010 - link

    Stuff is still new but is pretty wow in real life. Clients are based on Android and make that Mitel stuff look like 1990s tech.
  • Gilbert Osmond - Monday, October 18, 2010 - link

    I enjoy and benefit from Anandtech's articles on the larger-picture network & structural aspects of contemporary IT services. I wonder if, as Anandtech's readership age-cohort "grows up" and matures into higher management- and executive-level IT job positions, the demand for articles with this kind of content & focus will increase. I hope so.
  • AstroGuardian - Tuesday, October 19, 2010 - link

    FYI it does to some extent... :) "You can't stop the progress" right?
  • JohanAnandtech - Tuesday, October 19, 2010 - link

    While we get fewer comments on our enterprise articles, they do pretty well. For example, the Server Clash article was in the same league as the latest GeForce and SSD reviews. We can't beat Sandy Bridge previews of course :-).

    And while in the beginning of the IT section we got a lot of AMD vs Intel flames, nowadays we get some very solid discussions, right on target.
  • HMTK - Tuesday, October 19, 2010 - link

    Like back then at Ace's? ;-)
  • rbarone69 - Tuesday, October 19, 2010 - link

    You couldn't have said it better! As an IT Director, I find the information this site gives invaluable to my decision making. Articles like this give me a jumping-off point for thinking outside the box or adding tech I had never heard of to our existing infrastructure.

    What's amazing is that we put very little into new equipment and are able to do what cost millions just 10 years ago. We can now offer 99.999% normal availability, with only a maximum of 30 minutes of downtime during a full datacenter switch from Toronto to Chicago!

    The combination of fast multi-core processors, virtualization tech and cheaper bandwidth has made this type of service available to companies of all sizes. Very exciting times!
  • FunBunny2 - Monday, October 18, 2010 - link

    The problem with Clouding is that systems are built on the lowest-common-denominator (which is to say, cheap) hardware. The cutting edge is SSD storage, and it's not likely that public clouds are going to spend the money.
  • Mattbreitbach - Monday, October 18, 2010 - link

    I actually see this going forward. I would put money on public cloud hosts offering different storage options, and pricing brackets to match. I also do not believe that many of the emerging cloud environments are being built with the cheapest hardware available. I would be more inclined to think that some of the providers out there are going for high-end clients who are willing to shell out the cash for performance.
  • mlambert - Monday, October 18, 2010 - link

    3PAR, HDS (VSPs) and soon EMC will all have some form of block/page/region-level dynamic optimization for auto-tiering between SSD/FC-SAS/SATA. When the majority of your storage is 2TB SATA drives but you still keep the hot 3-5% on SSD, the costs really come down.

    HDS and 3PAR both do it very well right now... with HDS firmly in the lead come next April...

    The problem I see is the 100-120km dark fiber sync limitation. Once someone figures out how to be sync with 20-40ms latency (or the internets somehow figure out how reduce latency) we will have some pretty cool "clouds".
  • rd_nest - Monday, October 18, 2010 - link

    Not willing to start another vendor war here :)

    Wanted to make a minor correction: EMC already has dynamic sub-LUN block optimization, also called FAST (fully automated storage tiering) like you mentioned. This is in both CLARiiON and V-Max; the implementation is different, but it works almost the same.

    Don't you feel 20-40ms is a bit too much? Most database applications, or any popular MS applications, don't like that amount of latency. Though it's quite subjective, I tend to believe 10-20ms is what most people want.

    Well, I am sure if it is reduced to 10-20ms, people will start asking for 5ms :)
