The best of both worlds

Most established companies have already invested quite a bit of money and time in deploying their own infrastructure and building up expertise. Thinking creatively about that in-house infrastructure also pays off in most cases. And lastly, few people will place data tied to their sensitive intellectual property in an external datacenter.

It will be no surprise that the "hybrid cloud" is the ideal model for most companies out there. Just like in the business world, you outsource some of your processes (HR, facility management, etc.), but things related to your core business stay in house. If you are an engineering company, your engineering data should stay inside the walls of your own datacenter.


[Figure: vSphere 4.1 and vCloud Director, one of the possible building blocks of a hybrid cloud]

The hybrid cloud model means you should be able to move VMs from your own datacenter to a public cloud and back. The reality is that it is not that simple to upload a VM to a public cloud service, and that it is pretty hard to import the work that you have done in a public cloud back into your own datacenter. If you want to get an idea of what it really involves, look here and here.

Many public cloud vendors, formerly hosting providers, are now adding upload and download capabilities to their self-service portals. Being able to quickly download and upload virtual machines between your own infrastructure and that of a hosting provider is the first step towards the "hybrid cloud". Let it be clear: the fully automated hybrid cloud, where you manage all your VMs through one interface and move them easily and quickly from your private to a public cloud, is not here yet.
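
To make that manual workflow concrete, here is a minimal sketch of both directions of the trip, driving VMware's ovftool command-line utility from Python. It assumes ovftool is installed; the vCenter address, inventory path, host name and file names are hypothetical placeholders, not a definitive recipe.

```python
# A sketch of the manual VM "round trip" described above, driving VMware's
# ovftool from Python. Hypothetical placeholders throughout: the vCenter
# address, inventory path, and file names are examples, not a real setup.
import subprocess

VCENTER = "vcenter.example.local"
VM_PATH = "Datacenter/vm/web-frontend-01"   # inventory path of the VM
OVA_OUT = "/tmp/web-frontend-01.ova"        # portable single-file package

def export_vm(user: str, password: str) -> None:
    """Export the VM to an OVA (ovftool must be installed and on PATH)."""
    source = f"vi://{user}:{password}@{VCENTER}/{VM_PATH}"
    subprocess.run(["ovftool", "--overwrite", source, OVA_OUT], check=True)

def import_vm(user: str, password: str, ova: str, esx_host: str) -> None:
    """The trip back: deploy a downloaded OVA onto a standalone ESX host.
    A vCenter target would need a fuller vi:// locator (datacenter/host)."""
    target = f"vi://{user}:{password}@{esx_host}"
    subprocess.run(["ovftool", ova, target], check=True)
```

The OVA produced by the export is the portable package you would then upload through a provider's self-service portal; the import function covers the trip back into your own datacenter.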

So what do we need besides management software such as vCloud Director? You have probably guessed it already: a storage and networking bridge between datacenters.

Comments (26)

  • HMTK - Tuesday, October 19, 2010 - link

    Well, you don't NEED cutting-edge storage for a lot of things. In many cases more, cheaper machines can be more interesting than fewer, more expensive machines. A lower-specced machine going down in a large cluster of such machines might have less of an impact than a high-end server going down in a cluster of only a few machines. For my customers (SMBs) I prefer more but less powerful machines. As usual, YMMV.
  • Exelius - Wednesday, October 20, 2010 - link

    Very rarely do people actually need cutting edge. Even if you think you do, more often than not a good SAN operating across a few dozen spindles is much faster than you think. Storage on a small scale is tricky and expensive; storage on a large scale is easy, fast and cheap. You'd be surprised how fast a good SAN can be (even on iSCSI) if you have the right arrays, HBAs and switches.

    And "enterprise" cloud providers like SunGard and IBM will sell you storage, and deliver minimum levels of IOPS and/or throughput. They've done this for at least 5 years (which is the last time I priced one of them out.) It's expensive, but so is building your own environment. And remember to take into account labor costs over the life of the equipment; if your IT guy quits after 2 years you'll have to train someone, hire someone pre-trained, or (most likely,) hire a consultant at $250/hr every time you need anything changed.

    Cloud is cheaper because you only need to train your IT staff on the cloud, not on whatever brand of server, HBAs, SAN, switches, disks, virtualization software, etc... For most companies, operating IT infrastructure is not a core competency, so outsource it already. You outsource your payroll to ADP, so why not your IT infrastructure to Amazon or Google?
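
A purely illustrative back-of-the-envelope version of the labor-cost reasoning in the comment above; every figure below is a hypothetical placeholder, not a quoted price.

```python
# Purely illustrative numbers; none of these figures come from the article.
YEARS = 5

# Self-built: hardware amortized over its life, plus the labor the comment
# warns about (retraining, or a consultant at $250/hr when people leave).
hardware = 120_000                    # servers, SAN, switches, HBAs
retraining = 15_000 * 2               # train a replacement admin twice
consultant = 250 * 40 * YEARS         # ~40 consultant-hours per year
self_built = hardware + retraining + consultant

# Cloud: no hardware, a monthly bill, and one platform to train people on.
cloud = 2_500 * 12 * YEARS + 10_000   # monthly fee plus one-time training

print(f"self-built over {YEARS} years: ${self_built:,}")
print(f"cloud over {YEARS} years:      ${cloud:,}")
```

The point is not the exact totals but that the labor terms dominate once hardware is amortized, which is exactly the argument the comment makes.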
  • Murloc - Tuesday, October 19, 2010 - link

    I love these articles about IT stuff in large enterprises.
    They are so clear even for noobs. I don't know anything about this stuff but thanks to anandtech I get to know about these exciting developments.
  • dustcrusher - Tuesday, October 19, 2010 - link

    "It won't let you tripple your VM resources in a few minutes, avoiding a sky high bill afterwards."

    Triple has an extra "p."

    "If it works out well, those are bonusses,"

    Extra "s" in bonuses.
  • HMTK - Wednesday, October 20, 2010 - link

    I believe Johan was thinking of http://en.wikipedia.org/wiki/Tripel
  • JohanAnandtech - Sunday, October 24, 2010 - link

    Scary how you can read my mind. Cheers :-)
  • iwodo - Tuesday, October 19, 2010 - link

    I admit first that I am no expert in this field. But Rackspace Cloud hosting seems much cheaper than Amazon, and I could never understand why anyone uses EC2 at all; what advantage does it offer compared to something like Rackspace Cloud?

    What alerted me was the cost you posted, which surprised me.
  • iwodo - Tuesday, October 19, 2010 - link

    Argh... somehow posted without knowing it.

    And even with the cheaper prices of Rackspace, I still consider the cloud thing expensive.

    For small to medium-sized web sites, traditional hosting still seems to be the best value.
  • JonnyDough - Tuesday, October 19, 2010 - link

    ...and we don't want to "be there". I want control of my data thank you. :)
  • pablo906 - Tuesday, October 19, 2010 - link

    Metro clusters aren't new, and you can already run active-active metro clusters on 10Mb links with a fair amount of success. NetApp does a pretty good job of this with XenServer. Is it scalable to extreme levels? Certainly it's not as scalable as Fibre Channel on a 1Gb link. This is interesting tech and has promise in 5 years. American bandwidth is still archaically priced and telcos really bend you over for fiber. I spend over $500k/yr on telco-side network expenses already, and that's using a slew of 1Mb links with a fiber backbone.

    1Gb links simply aren't even available in many places. I personally don't want my DR site only 100km away from my main site; I'd like one on each coast if I were designing this system. Still, it's definitely a good first step.

    Having worked for ISPs, I think they may be the only people in the world that will find this reasonable to implement quickly. ISPs generally have low-latency, multi-Gb fiber rings that meshing a storage fabric into wouldn't be difficult. The crazy part is that it needs nearly the theoretical limit of the 1Gb link to operate, so it really requires additional infrastructure costs (a rough calculation follows at the end of this comment). If a tornado, hurricane, or earthquake hits, your datacenter 100km away will likely also be feeling the effects. It is still nice to replicate data to, though, in the hope that you don't completely lose all your physical equipment in both sites.

    How long-lasting is FC anyway? There is still a ton of emphasis on FC when 10Gb Ethernet is showing really nice scalability and ease of use. It's an interesting crossroads for storage manufacturers. I've spoken to insiders at a couple of the BIG players who question the future of FC. I can't be the only person out there thinking that leveraging FC seems to be a losing proposition right now. iSCSI over 10Gb is very fast, and you have things like link aggregation, MPIO, and VLANs that really help scale those solutions and allow you to engineer some very interesting configurations. NFS over 10Gb is another great technology that makes management extremely simple: you have VHD files and you move them around as needed.

    Virtualization is a game changer in the Corporate IT world and we're starting to see some really cool ideas coming out of the big players.
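
As a rough sanity check on the bandwidth point raised in the comment above, here is a small sketch of what a dedicated 1Gb inter-site link can actually sustain for storage replication. The dataset size, daily change rate and 80% efficiency figure are hypothetical assumptions, not measurements.

```python
# Quick sanity check: what a dedicated 1 Gbps inter-site link can carry
# for storage replication. All workload figures are hypothetical examples.

LINK_GBPS = 1.0
EFFICIENCY = 0.8          # assume ~80% usable after protocol overhead

usable_bytes_per_s = LINK_GBPS * 1e9 / 8 * EFFICIENCY   # ~100 MB/s

dataset_tb = 20
initial_sync_s = dataset_tb * 1e12 / usable_bytes_per_s
print(f"initial sync of {dataset_tb} TB: {initial_sync_s / 3600:.1f} hours")

# For synchronous mirroring, the sustained write rate of the whole
# cluster must stay below what the link can carry.
daily_change_tb = 2
required_mb_s = daily_change_tb * 1e12 / 86_400 / 1e6
print(f"average change rate: {required_mb_s:.0f} MB/s of the "
      f"{usable_bytes_per_s / 1e6:.0f} MB/s available")
```

Note that synchronous mirroring also adds the round-trip latency of the link to every write, which is the other reason distances like 100km matter, independent of raw bandwidth.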
