26 Comments

  • GoodBytes - Wednesday, September 29, 2010 - link

    I never understood this virtualisation thing. So OK, I can run a different OS on my desktop computer to run a specific program. That is nice... well, not really, as it takes forever to start up, but I guess it's better than nothing.
    Why have this on a server? Assuming all your software is up to date and all works with your server OS, what's the benefit? What does it allow me to do that I can't do without it?
    Reply
  • Link23 - Wednesday, September 29, 2010 - link

    Virtualization is very useful in a large environment. Case in point: I currently have 160 virtual servers running on 5 hosts (5 physical servers). For my customer this is very useful, since he doesn't need the space to store 160 servers or have to worry about powering them. It is also very useful for testing environments if you have developers who need to test client-server applications. Reply
  • TheHolyLancer - Wednesday, September 29, 2010 - link

    Consolidation: you have a file server, an email server, a web server, a ----- server. Each on its own does not need that much processing power or I/O, so it is cheaper to buy one or two really good, reliable servers and virtualize them.

    Less electricity use, easier to manage and back up, and it allows for other nifty tricks, like pausing the OS mid-run so you can physically move the server, etc.

    Some things that need all the power and I/O they can get are obviously not the best candidates for consolidation, but then you can buy several servers and have all of them run the same workload at the same time, so if one server dies, another takes over without downtime (read: VMware Fault Tolerance, or active/active server failover). This is common in the RISC/mainframe space to provide near-100% uptime (three or four 9s), and can now be done on el cheapo x86 stuff.
    Reply
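The consolidation argument above comes down to simple packing arithmetic. A toy sketch, with invented utilization figures (not from any real deployment):

```python
# First-fit consolidation sketch: pack lightly loaded servers onto hosts.
# All load figures below are hypothetical illustrations.

def hosts_needed(loads, host_capacity):
    """Greedy first-fit packing of per-server CPU loads onto hosts."""
    hosts = []  # remaining capacity per host
    for load in sorted(loads, reverse=True):
        for i, free in enumerate(hosts):
            if load <= free:
                hosts[i] = free - load
                break
        else:
            hosts.append(host_capacity - load)
    return len(hosts)

# Eight servers, each averaging 10-20% of one box, fit on a single
# virtualized host with headroom, instead of eight mostly idle machines.
loads = [0.15, 0.10, 0.20, 0.10, 0.15, 0.10, 0.10, 0.10]
print(hosts_needed(loads, host_capacity=1.5))  # 1
```

With these made-up numbers, one host replaces eight, which is exactly the electricity and management win described above.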
  • GoodBytes - Wednesday, September 29, 2010 - link

    Thanks to both!

    I don't know much about server side things, so please excuse my ignorance.
    Why can't you have a file server that is at the same time an e-mail server and a web server? What I am trying to say is, I already made my desktop home computer an FTP server and an HTTP server temporarily (for experimenting), and I was still able to use my computer as if I did not have the FTP and HTTP servers running. So why do you need separate servers?
    I guess that if one crashes, not everything falls apart when you use separate computers. But a virtual environment leads to the same situation: if the computer that runs them crashes, they all crash. So why not merge every server into one, like installing and using several applications on one computer? (I know that each process runs in its own virtual space, but you know what I mean: where you install/configure the server to be an e-mail, file, and web server all at once.)
    Reply
  • solgae1784 - Wednesday, September 29, 2010 - link

    The reason to separate those roles is to provide isolation, so that one role cannot affect the others. There are many technical (e.g. conflicts between OS settings required by different applications), business (e.g. uptime requirements), and political (e.g. separation of duties required by department rules) reasons that necessitate isolation. In the physical world, this meant additional physical servers whose purchase and maintenance costs quickly accumulate. With virtualization, there are far fewer physical servers to maintain, even if the number of servers and the labor required stay the same, which translates into lower space, cooling, and electricity costs, and more.

    Also, just know that putting all your business-critical servers on one machine is simply asking for trouble: if that single machine goes down, virtual or not, you're out of business. You really need redundancy and disaster recovery plans to make sure your business-critical roles stay operational. Redundancy is even more important in a virtual environment, where one physical machine hosts multiple servers.

    Fortunately, many virtual environments provide options to protect against hardware failure, such as VMware HA (restarts the VMs from a failed host on another one) and FT (maintains two copies of a VM and immediately switches to the secondary if the primary goes down, without incurring any downtime). Microsoft's Windows Clustering is also there to protect against application failure if needed. Disaster recovery is also much simpler to implement in the virtual world: since the servers are now represented by a set of files stored on a SAN, you can replicate the SAN to another site, and products such as VMware Site Recovery Manager can automate recovery of the VMs with a click of a button - as long as, of course, you planned your disaster recovery well in the first place.
    Reply
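The HA-versus-FT distinction described above can be caricatured in a few lines. This is a toy model of the two concepts, not VMware's actual implementation:

```python
# Toy model of the HA vs. FT distinction:
# HA restarts a failed host's VMs elsewhere (downtime ~= one boot),
# while FT keeps a lockstep shadow copy running (no downtime).

def downtime_after_host_failure(mode, boot_seconds=120):
    """Rough downtime a VM sees when its host dies, by protection mode."""
    if mode == "FT":   # secondary VM is already running; takeover is immediate
        return 0
    if mode == "HA":   # VM must be restarted on a surviving host
        return boot_seconds
    raise ValueError("unprotected VM: manual recovery required")

print(downtime_after_host_failure("FT"))  # 0
print(downtime_after_host_failure("HA"))  # 120
```

The design trade-off is that FT's zero downtime costs a second, continuously running copy of every protected VM, which is why HA is the default and FT is reserved for the most critical workloads.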
  • justaviking - Wednesday, September 29, 2010 - link

    Simple example:

    What if you need to reboot your email server? Maybe because you installed a patch.

    In a VM situation, you simply reboot the email VM.

    If you have everything running on one traditional computer, when you reboot your email server you also reboot your web server, your video server, your database... everything.

    Another example:

    What if your email software is not yet compatible with a certain patch, but everything else is? You would have to run without the new patch while you wait for compatibility across your entire software suite. In a VM environment, you can patch each "machine" independently if needed.
    Reply
  • justaviking - Wednesday, September 29, 2010 - link

    Also, most crashes are "software crashes." At least I think so.

    So the odds of your underlying server crashing your entire VM system are very low.

    This way if one of your VMs crashes, due to some software lockup, the other VMs continue to run.

    On a traditional system, a Blue Screen of Death (BSOD) would wipe out all your services.
    Reply
  • justaviking - Wednesday, September 29, 2010 - link

    One more answer to "Why use a VM?" then I'll quit replying to myself...

    Portability.

    If you want to move your email server to a new, faster piece of hardware, it's a lot of work. Installations. Licenses. Etc.

    With a VM, it's sort of "pick up the suitcase and go."

    This might not be the case in every situation, but the one time I actually used a VM, it was great. I just put the VM onto my work laptop and was up and running in about half an hour. The alternative would have been to spend 2 DAYS installing and configuring a database, a web server, my company's software, ensuring license keys were correct, and on and on. So a VM was a great way to clone a training/demo system and make it portable.
    Reply
  • GoodBytes - Wednesday, September 29, 2010 - link

    Wow, thank you very much for your time.
    I have learned a lot! Now I really see the importance of using virtual machines.

    +++ rep, if that was possible :)
    Reply
  • Stas - Thursday, September 30, 2010 - link

    While we're at it, an example from today.
    A client has one server running AD, Exchange, a spam filter, a DB, and it also acts as a storage server. Somehow the server got hacked - Exchange was sending out loads of spam. So much spam that the whole machine came to a halt. Result: their business software was down because the DB wasn't responding, no one could access their files on the shares, all that in addition to no email. They couldn't run transactions and lost money because they had to wait for me to bring the server back to life. Had they spent the money on a somewhat more powerful system and split it into 3-4 virtual servers with specific tasks, they would only have suffered a loss of email functionality for a couple of hours. Let alone the fact that chances are the vulnerability was due to so many applications requiring so many open ports, relaxed security measures, etc.
    Reply
  • Stuka87 - Thursday, September 30, 2010 - link

    We use VMs for testing the software that we develop. We are able to run full environments (multiple machines that interact) in an easy-to-deploy-and-manage setup.

    We run several hundred machines on our cluster at any given time. We don't have the rack space (even with 11 racks) to handle that many machines, but we do have space for a big VMware cluster. It also means that if a machine blows up because of a bug in dev code, we can just deploy a new machine (which we have scripted).

    So for us, VMs are a HUGE help. Overall it has saved us quite a large chunk of money in hardware, even after counting in the price of the Dell R910s and SunFire X4600s that we use for the clusters.
    Reply
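A "deploy a new machine" script like the one mentioned above might, in spirit, look like this. The CLI name, template name, and flags are hypothetical placeholders, since the comment doesn't say which tooling is used; the sketch only assembles the command line rather than executing anything:

```python
# Sketch of a scripted VM redeploy. The "vm-clone" tool and the template
# name are hypothetical placeholders; a real setup might use PowerCLI,
# a vendor CLI, or an API. We only build the argv here, never execute it.

def redeploy_command(vm_name, template="dev-test-template"):
    """Return the argv for cloning a fresh test VM from a golden template."""
    return [
        "vm-clone",                # placeholder tool name, not a real CLI
        "--template", template,
        "--name", vm_name,
        "--power-on",
    ]

print(redeploy_command("build-agent-07"))
```

The point is the workflow, not the tool: a broken dev machine is never repaired, just replaced from a known-good template.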
  • TeXWiller - Wednesday, September 29, 2010 - link

    The support for ECC is dependent on the BIOS for the Phenom and on the chipset for the Westmere based i3 and i5 series of processors. My personal machine is phenomenally (pun intended) driving 8 GB of ECC memory as I write this. Perhaps Johan was really thinking about the extended capacity brought by registered or buffered memory necessary for bigger configurations? Reply
  • Stuka87 - Thursday, September 30, 2010 - link

    ECC can be supported by some of those chipsets, but buffered memory is not. Typically you want Buffered ECC memory for a VM server. Reply
  • andersenep - Friday, October 01, 2010 - link

    I could be completely wrong, but my impression was that the memory controller in K10 CPUs was the same, that they all support ECC, and that (Un)buffered or (un)registered ECC support was dependent on MB/chipset/BIOS.

    I am not certain why anyone would buy a server MB, and drop a consumer/desktop CPU in it, but isn't this possible, even with buffered/registered RAM?

    My understanding was also that Intel did not offer ECC support of any kind in its desktop CPUs, and reserved this support solely for Xeons. Has this changed? Could I run unbuffered ECC ram with my Core i7?

    Am I completely wrong here?
    Reply
  • TeXWiller - Friday, October 01, 2010 - link

    No ECC for the i7. The 3xxx-series Xeons for socket 1156 can drive both registered and unregistered ECC DRAM, depending on the BIOS and the chipset. The socket 1366 based 3xxx Xeons can drive unregistered ECC/non-ECC only, irrespective of the chipset used. Of the non-Xeon processors, the Westmere-based i3 and i5 do work with unbuffered ECC DRAM with the 3xxx chipsets and a proper BIOS, while the Lynnfield-based i5 and i7 processors don't.
    Recent 1156 server boards can apparently take a Westmere i3/i5 with unbuffered memory. When you need more capacity, you can switch to a Lynnfield-based Xeon and drive four quad-rank or six double-rank registered ECC modules, depending on the board configuration.
    Reply
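The compatibility rundown above is dense enough that it helps to write it down as data. The table below transcribes only the claims made in this comment; it is not an authoritative Intel specification:

```python
# The ECC claims from the comment above, encoded as a lookup table.
# Keys: (CPU family, socket). Values: ECC DIMM types claimed to work.
ecc_support = {
    ("Xeon 3xxx", "1156"): {"registered ECC", "unbuffered ECC"},  # BIOS/chipset permitting
    ("Xeon 3xxx", "1366"): {"unbuffered ECC"},                    # regardless of chipset
    ("Westmere i3/i5", "1156"): {"unbuffered ECC"},               # 3xxx chipset + proper BIOS
    ("Lynnfield i5/i7", "1156"): set(),                           # no ECC
    ("Core i7", "1366"): set(),                                   # no ECC
}

def supports(cpu, socket, dimm_type):
    """True if the comment claims this CPU/socket pair drives the DIMM type."""
    return dimm_type in ecc_support.get((cpu, socket), set())

print(supports("Westmere i3/i5", "1156", "unbuffered ECC"))  # True
print(supports("Core i7", "1366", "unbuffered ECC"))         # False
```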
  • andersenep - Friday, October 01, 2010 - link

    When you say the Westmere i5s and i3s will run unregistered ECC RAM depending on BIOS and chipset, do you mean that they will support ECC scrubbing, or will it just "work"?

    I have heard this has been an issue with some AMD MBs: manufacturers claim ECC support, but ECC scrubbing is not supported, so it just works like non-ECC RAM, which defeats the whole purpose.

    Given that unbuffered ECC DIMMs cost pretty much the same as non-ECC DIMMs, I don't see why Intel and MB manufacturers are fighting against supporting ECC in desktop/consumer CPUs and MBs. Why is there an artificial line drawn between server and desktop components in regard to ECC support (registered/buffered or not)?

    ECC support (even though it's unbuffered) was a key consideration for me in selecting a CPU/MB. I went with an Opteron 1352 because it was cheap enough and powerful enough for my needs, but had I gone with a Phenom II or any other consumer AM2+ CPU, I should still have that same support.

    Answer #1 seems to imply that this assumption is wrong.

    Thanks for the reply.
    Reply
  • TeXWiller - Saturday, October 02, 2010 - link

    It's difficult to say much about the ECC support options for the Intel server boards. They should provide chipkill-like error correction for the x8 type of memory, at least for the 3xxx Xeons. I'm assuming similar support is provided for the i3 and Pentium with unbuffered memory, as most boards seem to support only i3 and Pentium processors, even though Intel claims equal support for the i5 in its datasheet. What is interesting is that scrubbing is mentioned only in the datasheets of the 5000- and 7000-series Xeons.

    The limits of ECC support are probably caused by the typical use cases of consumer "gear", such as "performance at any cost", and the rarity of cases such as "the home server". The rest is probably implementation cost and greedy market segmentation.

    I have personally bumped into a consumer board that claimed to support ECC memory, only to discover that the support was limited to booting with said memory. Yellow liquid was oozing from my general direction after that discovery. That was an AM2+ board. Now I'm using an Asus AM2+ board with configurable ECC support, adjustable scrubbing, and chipkill under a Phenom 9750. High-end Gigabyte boards seem to have proper ECC support as well.
    Reply
  • brundlefly77 - Wednesday, September 29, 2010 - link

    Licensing, licensing, licensing: yes, a HUGE issue for consumer desktop virtualization.

    I realized this week that I needed TWO Windows 7 licenses to run Windows 7 Pro Boot Camp under Fusion on Mac. I can't afford that for the amount of time I use it.

    I understand that VMware can only address its own licensing issues, but I would suggest that investing more resources, time, money, and lawyers into negotiating ways around double licensing with Microsoft is critical to VMware's future in selling consumer desktop virtualization solutions.

    I won't even get into Apple's position, which is to basically not allow MacOS to run on anything but bare Apple hardware.
    Reply
  • miteethor - Wednesday, September 29, 2010 - link

    We already have this solved on the server side. Microsoft offers Windows Server 2008 Datacenter edition, which allows an unlimited number of virtual machines per licensed processor. Even though we use VMware to do the virtualization, we purchase this one-shot license for each VMware server and we are covered.

    I realize that doesn't help you on the desktop side but there is a solution for datacenters who are using this technology.
    Reply
  • HMTK - Thursday, September 30, 2010 - link

    Still, Windows 2008 Datacenter is only useful when you run a lot of VMs. For a small outfit, Enterprise (which covers 4 virtualized instances per license) can be more interesting. YMMV. An SMB could, for example, have 3 low-end dual-socket pizza boxes (like a ProLiant DL160 or the cheapest DL360) and vSphere Essentials Plus. Such a cluster would probably be serious overkill hardware-wise for running a dozen VMs, but would be worth it for failover. However, in such a case, Windows 2008 Datacenter for 6 sockets would be incredibly expensive.

    I'm happy, though, that Microsoft changed its licensing for VDI solutions. It's still expensive - and will probably remain so until MS has a decent VDI solution itself - but one of my customers was happy to pay about €8,000 less PER YEAR for his VMware View solution.
    Reply
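The Datacenter-versus-Enterprise trade-off above is simple break-even arithmetic. A sketch with invented placeholder prices (real pricing varies by agreement; only the 4-VMs-per-Enterprise-license rule comes from the comment):

```python
import math

# Break-even sketch: Enterprise covers 4 VM instances per license,
# Datacenter is licensed per socket with unlimited VMs.
# Both prices below are invented placeholders for illustration.
ENTERPRISE_PER_LICENSE = 2300   # hypothetical, covers 4 VMs
DATACENTER_PER_SOCKET = 3000    # hypothetical, unlimited VMs

def cheapest_option(num_vms, num_sockets):
    enterprise = math.ceil(num_vms / 4) * ENTERPRISE_PER_LICENSE
    datacenter = num_sockets * DATACENTER_PER_SOCKET
    return "Enterprise" if enterprise <= datacenter else "Datacenter"

print(cheapest_option(num_vms=12, num_sockets=6))   # small shop: Enterprise
print(cheapest_option(num_vms=100, num_sockets=6))  # dense cluster: Datacenter
```

Whatever the actual prices, the shape of the answer is the same: Datacenter only pays off once VM density per socket is high.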
  • redisnidma - Thursday, September 30, 2010 - link

    Why can't you guys get AMD involved in this debate/article?
    That would be a plus to the discussion.
    Reply
  • HMTK - Thursday, September 30, 2010 - link

    There is something VMware can do about licensing. With the current model, vSphere is licensed not only per socket but also by number of cores: each socket may use either 6 or 12 cores depending on the edition, which makes current AMD CPUs with 8 or 12 cores rather expensive, as you have to buy vSphere Advanced or Enterprise Plus. It would be nice if this limitation were removed, especially because mainstream Intel parts will get more than 6 cores as well in the future. The strange thing is that Hyper-Threading does not count here: 6 cores with a total of 12 threads is possible on the cheaper vSphere versions, but 8 cores with a total of 8 threads requires Advanced/Enterprise Plus.

    OTOH, vSphere 4.1 became a lot more interesting for SMBs, since the Essentials Plus package (3 servers with 2 CPUs each) now also gets vMotion and High Availability, which makes the product infinitely more interesting than version 4.0.
    Reply
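The per-socket core limits described above (6 cores on the cheaper editions, 12 with Advanced/Enterprise Plus, hyperthreads not counted) amount to a small lookup. This encodes only what the comment says about vSphere 4.x licensing:

```python
# vSphere 4.x per-socket core limits as described in the comment above:
# the cheaper editions license up to 6 physical cores per socket,
# Advanced/Enterprise Plus up to 12. Hyperthreads don't count, so only
# physical cores matter here.

def minimum_edition(physical_cores_per_socket):
    if physical_cores_per_socket <= 6:
        return "Standard (or higher)"
    if physical_cores_per_socket <= 12:
        return "Advanced/Enterprise Plus"
    return "exceeds the 4.x limits described here"

print(minimum_edition(6))   # 6 cores / 12 threads: cheaper editions suffice
print(minimum_edition(8))   # 8-core AMD part: Advanced/Enterprise Plus
```

This makes the oddity in the comment visible: a 6-core/12-thread Intel chip is cheaper to license than an 8-core/8-thread AMD chip, despite offering more threads.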
  • Nehemoth - Thursday, September 30, 2010 - link

    Indeed, I have spent days wondering about the same question. 6- and 8-core CPUs are practically everywhere in a datacenter; VMware should lower the cost of those licenses. That would be a great plus.

    Next year, with Sandy Bridge and Bulldozer, it will get worse as core counts increase. Let's hope VMware is listening to their customers.

    Also, something I would like to see would be a special development-host license. Say, for example, I go the virtualization route and decide to consolidate my 60 to 100 servers: I put all those on 2 or 4 hosts for FT and HA, but I would also like an extra host so the IT guys can play with upcoming technology. The license for that host should be cheaper - say, because you won't need that host to be up 24/7, or you won't need HA or FT. After all, hardware is way cheaper than software.
    Reply
  • HMTK - Friday, October 01, 2010 - link

    ESXi is and will remain free (although only 6 cores/CPU), and Standard isn't all that expensive, while you could manage it with your existing vCenter server. Or you could get Essentials, which gives you vCenter Foundation (3 hosts) and vSphere licenses for 3 dual-socket machines. In fact, if you do not need vMotion and HA, Essentials is ridiculously cheap, and Essentials Plus gives you both those features for a fairly modest price. Still, you're always stuck with 6 cores/CPU. Reply
  • Budwise - Friday, October 01, 2010 - link

    There are other options out there aside from VMware. Let's get some XenServer feedback: explain to everyone bare-metal installs vs. OS-on-OS, what paravirtualized drivers are, how I/O works, how to manage it, NIC bonding options, switch requirements, etc. Reply
