Finding a Home for Your Website

Building and deploying a heavy-duty web service from the ground up is a long and costly process. At the IT section of AnandTech, we mostly focus on the fun part of the process: choosing and buying a server. However, there is much more to it. Designing the software and taking care of cooling, networking, security, availability, patching, and performance is a lot of work. Add all this time investment to the CAPEX investment in your server and it is clear that doing everything yourself is a huge financial risk.

These days, almost everybody outsources a part of this process. The most basic form is colocation: you rely on a hosting provider for the internet bandwidth and access, the electricity, and the rack space; you take control of the rest of the process. A few steps higher are unmanaged dedicated hosting services. The hosting provider takes care of all the hardware and networking. You get full administrative access to the server (for example, root access for Linux), which means the client is responsible for the security and maintenance of their own dedicated box.

The next step is to outsource that part too. With managed hosting services you won’t get full control, but the hosting provider takes care of almost everything: you only have to worry about the look and content of your web service. The Service Level Agreement (SLA) guarantees the quality of service that you get.

The problem with managed and unmanaged hosting services is that they are in many cases too restrictive and don't offer enough control. If performance is lacking, for example, the hosting provider often points to the software configuration while the customer feels that the hardware and network might be the problem. It is also quite expensive to enable the web server to scale to handle peak loads, and high availability may come at a premium.

Cloud Hosting

Enter cloud hosting. Many feel that cloud computing is just old wine in new bottles, but cloud hosting is an interesting evolution. A good cloud hosting service starts by building on a clustered hosting solution: instead of relying on one server, we get the high availability and load balancing capabilities of a complete virtualized cluster.

Virtualization allows the management software to carve up the cluster any way the customers like--choose the number of CPUs, RAM and storage that you want and make your own customized server; if you need more resources for a brief period, the cluster can provide this in a few seconds and you only pay for the time that you actually use this extra capacity. Best of all, cloud hosting allows you to set up a new server in less than an hour. Cloud hosting, or Infrastructure as a Service (IaaS), is definitely something new. Technically it is evolutionary, but from the customer point of view it offers a kind of flexibility that is revolutionary.
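The pay-per-use model for burst capacity can be sketched in a few lines. Everything below is an illustrative assumption: the function name, the billing granularity (per vCPU-hour), and the rate are made up for the example and do not reflect Terremark's or Amazon's actual pricing.

```python
# Hypothetical pay-per-use billing for burst capacity on an IaaS cloud.
# The rate and per-vCPU-hour granularity are illustrative assumptions,
# not any provider's actual pricing.

def burst_cost(base_cpus, burst_cpus, burst_hours, rate_cents_per_cpu_hour):
    """Cost (in cents) of temporarily scaling from base_cpus to burst_cpus.

    The extra vCPUs are billed only for the hours they are actually used,
    which is the key difference from provisioning a bigger dedicated box.
    """
    extra_cpus = burst_cpus - base_cpus
    return extra_cpus * burst_hours * rate_cents_per_cpu_hour

# Example: burst from 2 to 8 vCPUs for 3 hours at 5 cents per vCPU-hour.
print(burst_cost(2, 8, 3, 5))  # -> 90 cents
```

Compare that to a dedicated server sized for the peak: there you pay for the 8 vCPUs around the clock, whether the traffic spike materializes or not.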

There is a downside to the whole cloud IaaS solution: most of the information about the subject is so vague and fluffy that it is nearly useless. What exactly are you getting when you start up an Amazon Instance or your own cloud at the Terremark Enterprise Cloud?

As always, we don’t care much about the marketing fluff; we're more interested in benchmarking in true AnandTech style. We want to know what kind of performance we get when we buy a certain amount of resources. Renting 5GB of RAM is pretty straightforward: it means that our applications should be able to use up to 5GB of RAM space. But what about 5GHz--what does that mean? Is that 5GHz of nostalgic Pentium goodness; or is it 5GHz of the newest complex, out-of-order, integrated memory controller, 1 billion transistor CPU monsters? We hope to provide some answers with our investigations.

Terremark Enterprise Cloud

28 Comments


  • benwilber - Friday, June 03, 2011 - link

    this is a joke, right?

    there is not one bit of useful information in this article. if i wanted to read a Terremark brochure, i'd call our sales rep.

    speaking as an avid reader for more than 12 years, it's my opinion that all these braindead IT virtualization articles are poorly conceived and not worthy of anandtech.
  • krazyderek - Friday, June 03, 2011 - link

    submit a better one then
  • DigitalFreak - Friday, June 03, 2011 - link

    I guess it's a good thing then that your opinion doesn't matter.
  • HMTK - Monday, June 06, 2011 - link

    Yeah, I also prefer yet another vidcard benchmark fest.

    Not.
  • Shadowmaster625 - Friday, June 03, 2011 - link

    Still waiting for that $100 tablet that can provide me a remote desktop so responsive you can't even tell it is a remote desktop. I want it to be able to stream video at 480p. With good compression, this only requires a 1 Mbps connection. I don't think this is too much to ask for $100. I don't care that much about HD. Streaming a desktop at 30 fps should only require a small fraction of my bandwidth.
  • tech6 - Friday, June 03, 2011 - link

    As you mentioned, Terremark cloud benchmarks vary greatly depending on the underlying hardware. We did some tests on their Miami cloud last year and found the old AMD infrastructure to be a disappointing performer. The software is very clever but, like all clouds, some benchmarking and asking the right questions is essential before making a choice.
  • duploxxx - Sunday, June 05, 2011 - link

    as usual this is very debatable information you provide. How did you bench, and on what storage platform? What is your comparison, a 2008 vs a 2010 machine? What kind of application did you bench? SPECint? :) Just like AnandTech has shown in the past, application performance can be influenced by the type of CPU (look at the web results within the vApp, which clearly show it likes a faster cache architecture and to a certain extent influences the final vApp result too much). You need to look at the whole environment and the applications running in it; this requires decent tools to benchmark your total platform. (We have more code written by devs to automatically test every functional and performance aspect than in the applications themselves.) Everything in a virtual layer can influence the final performance.

    Our company has, from 2005 till now, always verified the platforms between Intel and AMD on virtualization for every 2- and 4-socket machine. Currently approx 3000 AMD servers online, all on VMware private clusters from many generations. They are doing more than fine. The only timeframe when Intel was faster and the better choice was just after the launch of the Nehalem Xeon, for a few months. Of course one also needs to look at the use case; for example, the latest Xeon EX is very interesting with a huge number of small VMs, but requires way more infrastructure to handle, for example, load and the failure of a server. (Not to mention license costs from some third-party vendors like Oracle.....)
  • lenghui - Friday, June 03, 2011 - link

    A very well thought-out comparison between the in-house and IaaS environments. Even those who have the in-house resources would need to spend a lot of research time to reach a conclusion. In that sense, your review is invaluable -- saving hundreds of hours of otherwise guesswork for your readers. You could probably include a price analysis, as other readers have suggested.

    Thanks, Johan, for the great article.
  • brian2p98 - Friday, June 03, 2011 - link

    This is, imo, the biggest unknown with cloud computing--and the most critical. Poor performance here could result in degradation on the scale of several orders of magnitude. Website hosting, otoh, is rather straightforward. Who cares if 5GHz of cloud CPU power is equivalent to only 1GHz of local, so long as buying 25GHz still makes economic sense?
  • duploxxx - Sunday, June 05, 2011 - link

    depends on how good or bad your app can scale with cpu cores.....

    if it doesn't and you need more vm's to handle the same load you also need other systems to spread the load between apps.
