If you read our last article, it is clear that once your applications are virtualized, you have many more options for building your server infrastructure. Let us know how you would build your "dynamic datacenter" and why!


  • crafty79 - Friday, October 9, 2009 - link

A few pizza boxes, workgroup-level NFS or iSCSI storage, and probably a 1-gigabit backbone should suffice.
  • KingGheedora - Friday, October 9, 2009 - link

I'm a developer, so I don't handle VM builds or any of the hardware stuff. Our IT staff has always built the machines and VMs for us, so I have no way of knowing if they did something wrong. But so far every virtual server I've used has had horrible, horrible disk performance. I would use VMs for things that barely touch disk, like web servers or app servers, depending on what the apps do. But definitely not for databases, or for some of our custom apps that write to disk a lot.

Anyway, I was wondering why the disk performance overhead is so high on VMs? I suspect the VMs may not have been configured so that each VM has its own spindle(s). What else can be done? Would it be faster if disk space that exists outside the virtual machine (i.e., not part of the file allocated for the VM on the virtual server host) were mounted from within the VM? Disk space on something fast like a SAN?
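    In Linux-guest terms, I'm picturing something like the sketch below, using open-iscsi (the portal address and target IQN are made-up placeholders):

        import subprocess

        PORTAL = "192.168.10.50"                    # placeholder SAN portal address
        TARGET = "iqn.2009-10.com.example:vmstore"  # placeholder target IQN

        # Ask the SAN what targets it exposes (requires open-iscsi in the guest).
        subprocess.check_call(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

        # Log in; the LUN then appears as a plain block device (e.g. /dev/sdb)
        # that the guest can format and mount like a local disk.
        subprocess.check_call(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])

    That would bypass the host's virtual disk file entirely, which is why I'm wondering whether it would be faster.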
  • caliche - Monday, October 12, 2009 - link

If you stack all the disk access on common storage devices in a basic setup, every VM has to line up and share. A good RAID setup that groups systems onto separate storage sets/spindles for more throughput is the minimum for any x86 virtualization setup with real disk traffic. For a budget setup, spreading VMs across more spindles would help, though you may need more disk controllers as well. And of course good backups or central code storage is critical: more spindles and heat mean more points of potential failure.

Bigger setups put the disks on a good SAN or other shared storage. RAID/stripe on the array, let it use the central array cache, and let the storage buffering protocols smooth out the I/O issues. You can create other problems that way and have to manage a whole new layer of hardware, but at least you can manage things centrally and adjust/expand if needed.
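    If you want to see how bad the sharing penalty is before you redesign anything, a crude synced-write latency test from inside the guest tells you a lot. A minimal sketch in Python; the two mount points are hypothetical, so point one at a VMDK-backed filesystem and one at in-guest-mounted SAN storage to compare:

        import os, time

        def write_latency_ms(path, block_kb=4, count=500):
            """Average time for synchronous 4KB writes on the given mount point."""
            data = os.urandom(block_kb * 1024)
            fname = os.path.join(path, "latency_probe.tmp")
            fd = os.open(fname, os.O_WRONLY | os.O_CREAT | os.O_SYNC)
            start = time.time()
            try:
                for _ in range(count):
                    os.write(fd, data)
            finally:
                os.close(fd)
                os.remove(fname)
            return (time.time() - start) / count * 1000.0

        # Hypothetical mount points: one inside a VMDK, one mounted in-guest from the SAN.
        for mount in ("/var/vmdk_backed", "/mnt/san_lun"):
            print("%s: %.2f ms per synced 4KB write" % (mount, write_latency_ms(mount)))

    If the in-guest number is dramatically better, the bottleneck is the shared datastore, not the hypervisor.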

  • crafty79 - Friday, October 9, 2009 - link

You hit the nail on the head. It was a design decision on their part not to give the VMs a lot of available IOPS.
  • TheCollective - Thursday, October 8, 2009 - link

6x Dell R710s with Xeon X5570s and 144GB of RAM (max).
  • joekraska - Sunday, October 11, 2009 - link

The 144GB of RAM would require 8GB DIMMs. The cost-delta calculation, even with the extra VMware licenses, would suggest that 12x Dell R710s with 72GB of RAM each would be better, wouldn't it? I haven't checked 8GB DIMM costs in a month or two...
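    Back of the envelope, the comparison looks something like the sketch below. Every price in it is a placeholder guess (the R710 has 18 DIMM slots, so 144GB is 18x8GB and 72GB is 18x4GB):

        # Rough cost-delta sketch: 6 big-memory hosts vs. 12 smaller ones.
        # All prices are placeholder guesses, not quotes.
        PRICE_8GB_DIMM = 1200.0   # assumed price per 8GB DIMM
        PRICE_4GB_DIMM = 150.0    # assumed price per 4GB DIMM
        BASE_R710 = 4000.0        # assumed chassis + 2x X5570, no RAM
        VMWARE_PER_HOST = 3000.0  # assumed license cost per host

        def cluster_cost(hosts, dimms_per_host, dimm_price):
            hardware = hosts * (BASE_R710 + dimms_per_host * dimm_price)
            return hardware + hosts * VMWARE_PER_HOST

        big = cluster_cost(6, 18, PRICE_8GB_DIMM)     # 6 hosts x 144GB
        small = cluster_cost(12, 18, PRICE_4GB_DIMM)  # 12 hosts x 72GB
        print("6x144GB: $%.0f  12x72GB: $%.0f  delta: $%.0f" % (big, small, big - small))

    With 2009-era 8GB DIMM pricing, the big-memory config carries a hefty premium even after doubling the VMware licenses; the crossover point moves as DIMM prices fall.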
  • NewBlackDak - Tuesday, October 13, 2009 - link

That totally depends on power requirements. We're in a remote data center, and the memory price is made up within 4 months through rent/power/cooling savings.

    We're doing something similar with Sun 4170s.
After several configs we landed on a 1U server with as much RAM as you can stuff in it, a couple of 10GbE NICs, and USB, flash-card, or SSD boot.
All datastores are NFS, backed by NetApp storage with PAM cards.
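    The payback math is just the hardware premium divided by the monthly opex you save; a sketch with placeholder rates (your colo's rent and power numbers drive the result entirely, and ours are steep enough that it works out to about 4 months):

        # Payback on the denser config: hardware premium / monthly opex saved.
        # Every rate below is a placeholder; plug in your own colo quotes.
        HARDWARE_PREMIUM = 55000.0  # assumed extra cost of the big-DIMM cluster
        HOSTS_ELIMINATED = 6        # 1U servers you no longer rack
        RENT_PER_U_MONTH = 150.0    # assumed $ per rack unit per month
        WATTS_PER_HOST = 400.0      # assumed draw per eliminated host
        KWH_PRICE = 0.12            # assumed $ per kWh
        COOLING_FACTOR = 1.5        # assumed cooling overhead multiplier

        rent_saved = HOSTS_ELIMINATED * RENT_PER_U_MONTH
        power_saved = (HOSTS_ELIMINATED * WATTS_PER_HOST / 1000.0) * 720 * KWH_PRICE * COOLING_FACTOR
        print("Payback: %.1f months" % (HARDWARE_PREMIUM / (rent_saved + power_saved)))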

  • xeopherith - Thursday, October 8, 2009 - link

I recently virtualized 7 of our "servers" using ESXi.

The reason I put it in quotes is that we have 4 Cisco phone servers on their own VLAN, one database server with very little data, our PDC that does almost everything, and lastly a proxy server running DansGuardian for internet filtering.

I built this server in a 4U Chenbro rackmount case: a Tyan dual-socket Opteron motherboard with 16GB of RAM and a RAID 10 array housing 2TB of storage, with Opteron 2378s and an Adaptec 5405 if I remember correctly. There is room to upgrade the RAM further, but I don't think I'll need to anytime soon. Right now I have 12GB committed, but hardly any of it is actually consumed.

It is running great so far; the only thing left is to add some more networking via the secondary PCI Express slot.
  • Lord 666 - Tuesday, October 13, 2009 - link

What Cisco phone apps are you running virtualized? I'm assuming CallManager and Unity? Currently I'm running MeetingPlace Express 2.1 virtualized without issue, have a second failover node of Unity on the project plan, and am debating CallManager and UCCX. Realistically, I'm going to wait until UCCX goes to Linux next year.
  • xeopherith - Wednesday, October 14, 2009 - link

I'm currently running CallManager and Emergency Responder, but Unity seemed to be slow and eventually stopped working, so I'm using a physical server there. I don't think it would be an issue if I created and installed the machine from scratch; it's just one thing that doesn't convert well from physical to virtual.
