More Terremark Enterprise Cloud Details

Another typical IaaS feature is "bursting"; you can see the "disable/enable burst" button in the interface. If you enable bursting, you allow your virtual machines to use more than the purchased GHz or RAM. Of course you pay a premium for the extra resources, but only for the time you really need that extra power. Terremark guarantees a 20% surplus in all circumstances. If you do not limit your burst capability, you can get whatever is left in the vSphere resource pool. In our case, we got up to 24GHz, up from the original 5GHz (reserved) and 10GHz (limit).
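To make the burst mechanics concrete, below is a minimal sketch (in Python) of how the reserved, limit, and burst ceilings interact. The GHz figures are the ones from our environment; the price constants and the billing function are purely hypothetical illustrations, not Terremark's actual rates or API.

```python
# Minimal sketch of a burst-enabled resource pool.
# The GHz figures match our test environment; the prices are made up
# purely for illustration and are NOT Terremark's actual rates.

RESERVED_GHZ = 5.0        # guaranteed to our environment at all times
LIMIT_GHZ = 10.0          # the capacity we purchased up front
POOL_LEFTOVER_GHZ = 24.0  # what the underlying vSphere pool had left (observed)

BASE_PRICE_PER_HOUR = 1.00       # hypothetical flat fee for the purchased 10GHz
BURST_PRICE_PER_GHZ_HOUR = 0.25  # hypothetical premium per GHz above the limit

def hourly_cost(demand_ghz: float, burst_enabled: bool) -> float:
    """Cost of one hour at a given CPU demand."""
    ceiling = POOL_LEFTOVER_GHZ if burst_enabled else LIMIT_GHZ
    used = min(demand_ghz, ceiling)
    burst_ghz = max(0.0, used - LIMIT_GHZ)
    # The premium applies only to the hours in which you actually burst.
    return BASE_PRICE_PER_HOUR + burst_ghz * BURST_PRICE_PER_GHZ_HOUR

print(hourly_cost(8, burst_enabled=False))   # 1.00 -> stays within the purchased limit
print(hourly_cost(24, burst_enabled=True))   # 4.50 -> 14GHz of burst this hour
```

With bursting disabled, the virtual machines are capped at the 10GHz limit; with bursting enabled and no cap of our own, the ceiling becomes whatever the shared resource pool can spare.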

The networking overview shows the internal, external, and public IP addresses; a basic firewall is available as well.

The final tab in the Environment section is Network. You can connect to the console of each virtual machine via the Cisco AnyConnect SSL VPN client, and site-to-site VPNs are also possible. Depending on where you live, logging in to digitalOps connects you to one of Terremark's European or American data centers. Our virtual servers were located in the Amsterdam (the Netherlands) data center; US customers will typically connect to the Miami, Washington DC, Dallas, or Santa Clara data centers.

The Terremark Enterprise Cloud became available in the US at the end of 2008; in the first half of 2009, it was also made available to European customers.

29 Comments

  • benwilber - Friday, June 3, 2011 - link

    this is a joke, right?

    there is not one bit of useful information in this article. if I wanted to read a Terremark brochure, I'd call our sales rep.

    speaking as an avid reader for more than 12 years, it's my opinion that all these braindead IT virtualization articles are poorly conceived and not worthy of AnandTech.
  • krazyderek - Friday, June 3, 2011 - link

    submit a better one then
  • DigitalFreak - Friday, June 3, 2011 - link

    I guess it's a good thing then that your opinion doesn't matter.
  • HMTK - Monday, June 6, 2011 - link

    Yeah, I also prefer yet another vidcard benchmark fest.

    Not.
  • Shadowmaster625 - Friday, June 3, 2011 - link

    Still waiting for that $100 tablet that can provide me a remote desktop so responsive that you can't even tell it is a remote desktop. I want it to be able to stream video at 480p. With good compression, this only requires a 1Mbps connection. I don't think this is too much to ask for $100. I don't care that much about HD. Streaming a desktop at 30fps should only require a small fraction of my bandwidth.
  • tech6 - Friday, June 3, 2011 - link

    As you mentioned, Terremark cloud benchmarks vary greatly depending on the underlying hardware. We did some tests on their Miami cloud last year and found the old AMD infrastructure to be a disappointing performer. The software is very clever but, like all clouds, some benchmarking and asking the right questions are essential before making a choice.
  • duploxxx - Sunday, June 5, 2011 - link

    as usual, this is very debatable information you provide. How did you benchmark, and on what storage platform? What are you comparing, a 2008 versus a 2010 setup? What kind of application did you benchmark? SPECint? :) As AnandTech has shown very well in the past, application performance can be influenced by the type of CPU (look at the web results within the vApp, which clearly show it likes a faster cache architecture and to a certain extent influences the final vApp result too much). You need to look at the whole environment and the applications running in it, and that requires decent tools to benchmark your total platform. (We have more code written by our devs to automatically test every functional and performance aspect than there is in the applications themselves.) Everything in the virtual layer can influence the final performance.

    Our company has, from 2005 till now, always compared the Intel and AMD platforms for virtualization on every 2- and 4-socket machine. Currently approximately 3000 AMD servers are online, all on VMware private clusters spanning many generations. They are doing more than fine. The only timeframe when Intel was faster and the better choice was for a few months right after the launch of the Nehalem Xeon. Of course, one also needs to look at the use case; for example, the latest Xeon EX is very interesting with huge numbers of small VMs, but it requires way more infrastructure to handle, for example, load and the failure of a server. (Not to mention license costs from some third-party vendors like Oracle...)
  • lenghui - Friday, June 3, 2011 - link

    A very well thought-out comparison between the in-house and IaaS environments. Even those who have the in-house resources would need to spend a lot of research time to reach a conclusion. In that sense, your review is invaluable -- saving your readers hundreds of hours of otherwise guesswork. You could probably include a price analysis, as other readers have suggested.

    Thanks, Johan, for the great article.
  • brian2p98 - Friday, June 3, 2011 - link

    This is, imo, the biggest unknown with cloud computing--and the most critical. Poor performance here could mean a slowdown of several orders of magnitude. Website hosting, otoh, is rather straightforward. Who cares if 5GHz of cloud CPU power is equivalent to only 1GHz of local, so long as buying 25GHz still makes economic sense?
  • duploxxx - Sunday, June 5, 2011 - link

    depends on how well your app can scale with CPU cores...

    if it doesn't and you need more VMs to handle the same load, you also need other systems to spread the load between apps.
