Public versus private cloud
Just a few years ago, getting an application or IT service up and running took far too long. Once the person in charge of the project got permission to invest in a new server, the IT infrastructure people ordered one, and it typically took a few weeks before the server arrived. Then, if all went well, they installed the OS and gave the person in charge of the software deployment a static IP to remotely install all the necessary software.

In the virtualization age, the person in charge of the application project calls up the infrastructure people, and a bit later a virtual machine is provisioned and ready for the software installation.

So if you have already invested a lot in a virtualized infrastructure and virtualization experts, it is only logical that you want the flexibility of the public cloud in house. Distributed Resource Scheduling (DRS), as VMware calls it, is the first step: a cluster scheduler that shuts down unnecessary servers, boots them up when needed and places VMs on the best server is a step forward towards a "private cloud". According to VMware, about 70 to 80% of ESX-based datacenters use DRS. Indeed, virtualization managers such as VMware vCenter, Citrix XenCenter, ConVirt and Microsoft System Center VMM have made virtualized infrastructure a lot more flexible.
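
To make the "cluster scheduler" idea concrete, here is a minimal toy sketch of what such a scheduler does, assuming a simple greedy "most free capacity wins" placement rule. The host names, capacities and scoring are invented for illustration; VMware's actual DRS and power-management algorithms are far more sophisticated.

```python
# Toy illustration of DRS-style placement and power-down candidates.
# All names and numbers are invented; this is not VMware's algorithm.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_free_mhz: int
    ram_free_mb: int
    vms: list = field(default_factory=list)

def place_vm(hosts, vm_name, cpu_mhz, ram_mb):
    """Greedy placement: pick the host with the most free capacity."""
    candidates = [h for h in hosts
                  if h.cpu_free_mhz >= cpu_mhz and h.ram_free_mb >= ram_mb]
    if not candidates:
        raise RuntimeError("No host has enough headroom; power on another server.")
    best = max(candidates, key=lambda h: (h.cpu_free_mhz, h.ram_free_mb))
    best.cpu_free_mhz -= cpu_mhz
    best.ram_free_mb -= ram_mb
    best.vms.append(vm_name)
    return best.name

def idle_hosts(hosts):
    """Hosts without VMs are candidates for being powered down."""
    return [h.name for h in hosts if not h.vms]

hosts = [Host("esx01", 8000, 32768), Host("esx02", 12000, 65536)]
print(place_vm(hosts, "web01", 2000, 4096))  # lands on esx02, the host with most headroom
print(idle_hosts(hosts))                     # ['esx01'] could be shut down to save power
```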

Some people feel that a "private cloud" is an oxymoron: unless you are the size of Amazon or Google, it can never be as elastic as the "real" clouds, cannot leverage the same economies of scale and does not eliminate CAPEX.

But that is theoretical nitpicking: public clouds can indeed scale automatically (see Amazon's Auto Scaling) with much greater elasticity, but you will probably only use that with the brakes on. Scaling automatically to meet the traffic requirements of a DoS attack could get pretty expensive. Terremark's Enterprise Cloud allows virtual machines to "burst" for a while to respond to peak traffic, but the burst is limited to, for example, 1 GHz of extra CPU power or 1 GB of memory. It won't let you triple your VM resources in a few minutes, which avoids a sky-high bill afterwards.
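
As an illustration of those "brakes", here is a minimal sketch of capping Amazon's Auto Scaling, written with today's boto3 SDK rather than the 2010-era API tools; the group name, launch configuration and size limits are made up and assumed to exist.

```python
# Sketch only: caps an Auto Scaling group so a traffic spike (or DoS attack)
# can never grow the bill beyond a fixed number of instances.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# MaxSize is the "brake": the group never grows beyond 6 instances.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier",          # hypothetical names
    LaunchConfigurationName="web-tier-lc",    # assumed to exist already
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# Scale out one instance at a time, with a cooldown so a short burst
# does not immediately ratchet capacity up to the cap.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier",
    PolicyName="scale-out-on-load",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)
```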

And the CAPEX elimination? Just go back one page and look at the Amazon EC2 pricing. If you want to run your server 24/7, the "pay only what you use" pricing will cost you way too much. You will prefer to reserve your instances/virtual machines and pay a "set up" or one-time fee: a capital investment that lowers the cost of renting a VM.
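
A quick back-of-the-envelope calculation shows why; the rates below are purely hypothetical placeholders, so plug in the actual EC2 numbers from the previous page.

```python
# Hypothetical prices for illustration only; substitute the real EC2 rates.
HOURS_PER_YEAR = 24 * 365                  # 8760 hours for a 24/7 server

on_demand_rate = 0.10                      # $/hour, pay-as-you-go
reserved_upfront = 230.00                  # one-time reservation fee
reserved_rate = 0.03                       # $/hour after reserving

on_demand_total = on_demand_rate * HOURS_PER_YEAR
reserved_total = reserved_upfront + reserved_rate * HOURS_PER_YEAR

print(f"On-demand, 24/7 for a year: ${on_demand_total:,.2f}")  # $876.00
print(f"Reserved,  24/7 for a year: ${reserved_total:,.2f}")   # $492.80
```

With these made-up numbers the reservation pays for itself well within the year, which is exactly the capital investment that the "pay only what you use" promise glosses over.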

The cloud killer feature

The real reason why cloud computing is attractive is not elasticity or economies of scale. If things work out well, those are bonuses, but they are not the real "killer feature". The killer feature is instantaneous self-service IT consumption, or in plain human language: the fact that you can simply log in and get what you need in a few minutes. This is the feature that most virtualization managers lacked until recently.

OpenQRM, an open source infrastructure management solution, is a good example of how the line between a public and a private cloud is getting more blurred. This datacenter manager no longer needs a hypervisor to be pre-installed: it manages physical machines and installs several different hypervisors (Xen, ESX and KVM) on bare metal in the same datacenter.

The Cloud Plugin and Visual Cloud Designer make it possible to create virtual machines on the fly and attach a "pay as you use" accounting system to them. OpenQRM is more than a virtualization manager: it allows you to build real private clouds.
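
To give an idea of what "pay as you use" accounting boils down to, here is a conceptual chargeback sketch; it is not OpenQRM's actual Cloud Plugin, and the rates and usage records are invented.

```python
# Conceptual chargeback calculation; rates and records are made up.
usage_records = [
    # (vm_name, hours_running, vcpus, ram_gb)
    ("build-server", 40, 2, 4),
    ("test-db",      12, 4, 8),
]

RATE_PER_VCPU_HOUR = 0.02   # hypothetical internal rates
RATE_PER_GB_HOUR   = 0.01

def chargeback(records):
    """Bill each VM for the CPU and memory hours it actually consumed."""
    bill = {}
    for vm, hours, vcpus, ram_gb in records:
        bill[vm] = hours * (vcpus * RATE_PER_VCPU_HOUR + ram_gb * RATE_PER_GB_HOUR)
    return bill

for vm, cost in chargeback(usage_records).items():
    print(f"{vm}: ${cost:.2f}")   # build-server: $3.20, test-db: $1.92
```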

So the real difference between a flexible, intelligent cluster and a private cloud is a simple interface that allows the "IT consumer" to get the resources he/she needs in minutes. And that is exactly what VMware's new vCloud Director does: it adds a self-service portal that lets users get the resources they need quickly, all within the boundaries set by the IT policies.
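
As a rough illustration of "self-service within the boundaries set by the IT policies", the sketch below accepts a VM request only if it fits inside a department quota. This is not vCloud Director's API; the names and limits are invented.

```python
# Toy self-service portal logic: requests are granted only within quota.
department_quota = {"engineering": {"vcpus": 64, "ram_gb": 256}}
department_usage = {"engineering": {"vcpus": 48, "ram_gb": 180}}

def request_vm(department, vcpus, ram_gb):
    quota = department_quota[department]
    used = department_usage[department]
    if (used["vcpus"] + vcpus > quota["vcpus"]
            or used["ram_gb"] + ram_gb > quota["ram_gb"]):
        return "Denied: request exceeds the department's resource quota."
    used["vcpus"] += vcpus
    used["ram_gb"] += ram_gb
    return f"Provisioning a {vcpus}-vCPU / {ram_gb} GB VM for {department}..."

print(request_vm("engineering", 8, 32))   # fits: provisioned in minutes
print(request_vm("engineering", 16, 64))  # would exceed the vCPU quota: denied
```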

So private clouds do have their place. A private cloud is just a public cloud that happens to be operated by an internal infrastructure staff rather than an external one; or, seen the other way around, a public cloud is a private cloud that is outsourced. Both can be accessible over the internet and the corporate LAN, and some private clouds might even be larger than some public ones.

Maybe one day we will laugh at the "Cloud Computing" name, but Infrastructure as a (quick) Service (IaaS) is here to stay. We want it all and we want it now.

Comments

  • HMTK - Tuesday, October 19, 2010 - link

    Well you don't NEED cutting edge storage for a lot of things. In many cases more, cheaper machines can be more interesting than fewer, more expensive machines. A lower-specced machine going down in a large cluster of such machines might have less of an impact than a high-end server in a cluster of only a few machines. For my customers (SMBs) I prefer more but less powerful machines. As usual YMMV.
  • Exelius - Wednesday, October 20, 2010 - link

    Very rarely do people actually need cutting edge. Even if you think you do, more often than not a good SAN operating across a few dozen spindles is much faster than you think. Storage on a small scale is tricky and expensive; storage on a large scale is easy, fast and cheap. You'd be surprised how fast a good SAN can be (even on iSCSI) if you have the right arrays, HBAs and switches.

    And "enterprise" cloud providers like SunGard and IBM will sell you storage, and deliver minimum levels of IOPS and/or throughput. They've done this for at least 5 years (which is the last time I priced one of them out.) It's expensive, but so is building your own environment. And remember to take into account labor costs over the life of the equipment; if your IT guy quits after 2 years you'll have to train someone, hire someone pre-trained, or (most likely,) hire a consultant at $250/hr every time you need anything changed.

    Cloud is cheaper because you only need to train your IT staff on the cloud, not on whatever brand of server, HBAs, SAN, switches, disks, virtualization software, etc... For most companies, operating IT infrastructure is not a core competency, so outsource it already. You outsource your payroll to ADP, so why not your IT infrastructure to Amazon or Google?
  • Murloc - Tuesday, October 19, 2010 - link

    I love these articles about IT stuff in large enterprises.
    They are so clear even for noobs. I don't know anything about this stuff but thanks to anandtech I get to know about these exciting developments.
  • dustcrusher - Tuesday, October 19, 2010 - link

    "It won't let you tripple your VM resources in a few minutes, avoiding a sky high bill afterwards."

    Triple has an extra "p."

    "If it works out well, those are bonusses,"

    Extra "s" in bonuses.
  • HMTK - Wednesday, October 20, 2010 - link

    I believe Johan was thinking of http://en.wikipedia.org/wiki/Tripel
  • JohanAnandtech - Sunday, October 24, 2010 - link

    Scary how you can read my mind. Cheers :-)
  • iwodo - Tuesday, October 19, 2010 - link

    I admit first that I am no expert in this field. But Rackspace Cloud Hosting seems much cheaper than Amazon. And I could never understand why you would use EC2 at all; what advantage does it give compared to something like Rackspace Cloud?

    What alerted me was the cost you posted, which surprised me.
  • iwodo - Tuesday, October 19, 2010 - link

    Arh.. somehow posted without knowing it.

    And even with the cheaper prices of Rackspace, I still consider the cloud thing expensive.

    For a small to medium size web site, hosting still seems to be the best value.
  • JonnyDough - Tuesday, October 19, 2010 - link

    ...and we don't want to "be there". I want control of my data thank you. :)
  • pablo906 - Tuesday, October 19, 2010 - link

    Metro clusters aren't new, and you can already run active-active metro clusters on 10MB links with a fair amount of success. NetApp does a pretty good job of this with XenServer. Is it scalable to extreme levels? Certainly it's not as scalable as Fibre Channel on a 1GB link. This is interesting tech and has promise in 5 years. American bandwidth is still archaically priced and telcos really bend you over for fiber. I spend over $500k/yr on telco-side network expenses already, and that's using a slew of 1MB links with a fiber backbone.

    1GB links simply aren't even available in many places. I personally don't want my DR site 100km away from my main site. I'd like one on each coast if I was designing this system. It's definitely a good first step.

    Having worked for ISPs, I think they may be the only people in the world who will find this reasonable to implement quickly. ISPs generally have low-latency, multi-gigabit fiber rings that meshing a storage fabric into wouldn't be difficult. The crazy part is that it needs nearly the theoretical limit of the 1GB link to operate, so it really requires additional infrastructure costs. And if a tornado, hurricane, or earthquake hits, your datacenter 100km away will likely also be feeling the effects. It is still nice to replicate data to, though, in hopes that you don't completely lose all your physical equipment in both.

    How long-lasting is FC anyway? It seems there is a ton of emphasis still on FC when 10Gb is showing scalability and ease of use that's really nice. It's an interesting crossroads for storage manufacturers. I've spoken to insiders at a couple of the BIG players who question the future of FC. I can't be the only person out there thinking that leveraging FC seems to be a losing proposition right now. iSCSI over 10Gb is very fast and you have things like link aggregation, MPIO, and VLANs that really help scale those solutions and allow you to engineer some very interesting configurations. NFS over 10Gb is another great technology that makes management extremely simple. You have VHD files and you move them around as needed.

    Virtualization is a game changer in the Corporate IT world and we're starting to see some really cool ideas coming out of the big players.
