So what is VMware releasing?

While vSphere 4 is technically an upgrade to the existing VMware Infrastructure (the latest version being ESX 3.5), VMware is now opening up the product to allow management of both internal and external clouds. This lets users extend their existing company infrastructure with that of IT service providers all over the world. (Here is a list of providers that are currently ready to offer these services.)

VMware is looking to merge these two worlds with vSphere, while still acknowledging the key differences between them: internal company clouds should be deployed in a non-disruptive, evolutionary way. Any data center that has already been virtualized should be able to move to the cloud without a hitch, and it should remain as trusted, reliable, and secure as expected from an internal infrastructure.

External clouds are to be put to work for extra capacity, and quite possibly for high availability and disaster recovery as well. Migrating VMs from the internal to the external cloud should work seamlessly through Storage VMotion (we believe this should work, provided that the VM is on a LUN by itself). Think of this environment as a large-scale VPN, independent of your own infrastructure, which is paid for not in terms of infrastructure costs, but per unit of actual work completed. This environment can be as large or as small as is required at the time, and should add a lot of flexibility to existing data centers in an efficient way.
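The pay-per-work billing model described above can be sketched roughly as follows. All rates, figures, and function names here are hypothetical, purely to illustrate why bursty workloads favor per-unit billing:

```python
# Hypothetical comparison of fixed internal-infrastructure cost versus
# per-unit-of-work external-cloud billing. All numbers are made up
# for illustration; real provider pricing will differ.

def internal_cost(monthly_fixed: float, months: int) -> float:
    """Internal capacity is paid for whether it is used or not."""
    return monthly_fixed * months

def external_cost(rate_per_unit: float, units_of_work: float) -> float:
    """External cloud capacity is billed only per unit of completed work."""
    return rate_per_unit * units_of_work

# A bursty workload: capacity sits mostly idle all year, but the external
# cloud is only billed for the work actually completed during the burst.
fixed = internal_cost(monthly_fixed=10_000.0, months=12)
burst = external_cost(rate_per_unit=0.05, units_of_work=200_000)
print(f"internal: ${fixed:,.0f}/yr  external burst: ${burst:,.0f}")
```

The point of the model is the asymmetry: the internal figure is constant regardless of utilization, while the external figure scales down to zero when no work is submitted.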

vSphere is truly meant to be the very first "Cloud OS": a system that can break separate hardware platforms down into the resources they offer, and use these resources as building blocks for what is, essentially, a supercomputer that encompasses the entire virtualized data center and beyond. VMware aims to make vSphere compatible with any existing or future applications and to maintain strict security, even in the external cloud, while keeping the technology as non-intrusive as possible. Ideally, there should be no lock-in to specific service providers, and no irreversible decisions. As long as a company does not intend to make its entire infrastructure irreversibly dependent on the external cloud, it should always be able to fall back on its internal infrastructure.
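The "building blocks" idea above can be sketched as a toy model: individual hosts are reduced to the resources they offer, and those resources are pooled into one large logical machine. The class and field names below are invented for illustration and do not reflect VMware's actual APIs:

```python
# Toy model of the "Cloud OS" abstraction: hosts contribute resources,
# and the pool presents their sum as a single logical computer.
# Names and capacities are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_ghz: float  # aggregate CPU capacity of the host
    ram_gb: int

@dataclass
class ResourcePool:
    hosts: list = field(default_factory=list)

    def add(self, host: Host) -> None:
        self.hosts.append(host)

    @property
    def total_cpu_ghz(self) -> float:
        return sum(h.cpu_ghz for h in self.hosts)

    @property
    def total_ram_gb(self) -> int:
        return sum(h.ram_gb for h in self.hosts)

pool = ResourcePool()
pool.add(Host("esx01", cpu_ghz=21.28, ram_gb=64))  # e.g. 2 quad-core CPUs
pool.add(Host("esx02", cpu_ghz=21.28, ram_gb=64))
print(pool.total_cpu_ghz, pool.total_ram_gb)  # one big logical machine
```

The real scheduler obviously has to respect host boundaries, locality, and failure domains; the sketch only captures the aggregation idea.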




  • Bandoleer - Wednesday, May 27, 2009 - link

    VMware has always stated that logical CPUs (HT) are not considered cores as far as licensing is concerned.
  • pcfxer - Sunday, April 26, 2009 - link

    E-mail, internet browsing, sure, maybe even something like Google Docs, but running centralized applications is NOT nearly what management/marketing would have you believe it is.

    Any support technician who works with local and centralized application infrastructure (the likes of Citrix/TSP) would agree with me immediately when I state: TSP is a bugger to debug and diagnose, and the issues are complicated by odd application errors that don't relate to what is really happening. What really happens is that the NTUSER.DAT file becomes corrupt. Why? (Bad resource/data forking?) Not sure myself; it's closed source on both the TSP end and the centralized app (MS Word, etc.) end.

    Centralized applications also require LOW LATENCY and HIGH THROUGHPUT to work efficiently. Consider it like this: your computer stores the application, so you optimize the HD latency, memory latency, cache latency, etc. TSP, however, relies on the network. For thin clients it's easy; just optimize the NIC and TCP/IP stack. Beyond that, it is BRUTAL! Start asking vendors how many simultaneous connections they support, and what the sustained latency/throughput is with X number of clients, and you'll see the VERY puzzled looks on their faces.

    Do it, Anand; trust me, Cisco sent me to engineering to get the answers I wanted. Some more expensive switches end up being SLOWER than others because of the latency incurred when using TSP with a larger number of clients.

    Main applications shouldn't be centralized; niche ones, maybe, but the most reliable deployment is a bunch of workstations running local applications. IT ALWAYS WILL BE! When was a data stream between you and your hard drive ever dropped? Does it happen? Yes, rarely. When do packets that are just as important get dropped? OFTEN.
  • has407 - Monday, April 27, 2009 - link

    Just because applications as architected and implemented today don't work well with a centralized/cloud model doesn't mean there's something wrong with the model, or a fundamental reason why applications--as in code that produces the same result--can't or won't behave reasonably with that model.

    We've been through a couple of cycles of this, going back and forth between centralized and distributed. We had "cloud computing" (aka "service bureaus") decades ago, and they worked fine, because systems and applications were architected and implemented to work in that model. OK, so they were green screens and not GUIs. So what? Then came the PC/client-server model, and now we appear to be moving back to the centralized/cloud model.

    The problems cloud/centralized models face today are a mirror image of the past (i.e., what happened in the '90s during the change from centralized to distributed and client-server). The problems are transitional, and are largely the result of attempting to make applications built for one model behave properly in this (*cough*) "new" (*cough*) model. Been there, done that, and we'll do it again. *yawn*

    The only thing that seemingly hasn't changed in decades is people who insist "X" won't work because, by darn, "X" is what they know and because they're narrowly focused on one part of the picture.
  • Milleman - Sunday, April 26, 2009 - link

    Why aren't they supporting the Ubuntu 8.04 LTS version? Very strange, since data centers and providers really want to stay on a supported OS version as long as possible.
  • has407 - Sunday, April 26, 2009 - link

    They support 8.04, 7.04, and quite a few others.
  • has407 - Wednesday, April 22, 2009 - link

    > Another question left unanswered is whether VMware would allow for quad-socket machines in the Essentials packages...

    They're pretty clear about that, per the VMware vSphere 4 Pricing, Packaging and Licensing Overview: "[Essentials and Essentials Plus] include licenses for three physical servers (up to two processors each)...". However, it's unclear whether they'll allow a max of 6 processors in any combination across up to 3 boxes, e.g., 4+2 or 4+1+1, but few people are likely to be looking for those combinations with Essentials (that 4-socket box is an expensive outlier).

    > For the Essentials Plus, Advanced, and Enterprise Plus packages, the maximum number of physical cores per CPU is upped to 12...

    You don't get that with Essentials Plus; the 6-core/proc limit still holds.
  • LizVD - Thursday, April 23, 2009 - link

    After some digging around on VMware's website, I ran into this:

    "Each physical server may contain up to two physical processors with up to 6 cores per processor."

    So I guess that's cleared up now. :)
  • LizVD - Thursday, April 23, 2009 - link

    Thanks for the heads-up on this. We based our questions on the documents we received from VMware prior to the press release; as it turns out, those were not very clear on certain numbers and outright wrong on others (as noted in the picture on page 4).

    Now that the information has been released publicly, I'll get right to correcting these things.
  • has407 - Wednesday, April 22, 2009 - link

    > ...otherwise roughly translate to a slightly less than full-featured Standard version (only works on ESXi)...

    Not sure what you mean by "only works on ESXi"? The choice of ESX or ESXi is the same for all editions and is a "deployment-time choice". Or are you referring to specific features that will only work with ESXi? If so, can you be more specific? Thanks.
  • blyndy - Wednesday, April 22, 2009 - link

    "...a fracture of their capacity..." ...? Reply
