
  • extide - Thursday, November 29, 2012 - link

It's not very clear whether you are actually using the 1TB RevoDrive Hybrid as just a plain 1TB HDD, or whether you are using it with the acceleration software so the SSD caches the 1TB HDD.

    So, how is that bit set up?

Honestly, if I were you guys, I would set things up a bit differently. Since all the VMs are almost identical, you could save a lot of space and make much better use of that 100GB of SSD cache. Use differencing VMDK files: instead of having 13 copies of a 64GB VMDK, you have one copy of the 64GB base image, along with 13 VMDKs that store only the "differences". This way you could probably fit everything into 100GB and either store it on the SSD natively or use the SSD as an accelerator for the 1TB HDD; either way, pretty much everything the VMs need/use would live on the SSD. Exactly how you set this up varies by VM app, but I know it is possible with VMware and Oracle VirtualBox (which is free!).

What do you think? You could also apply this concept to the rest of the VMs and condense the storage down significantly: use one big SSD for the base file, and then several other SSDs for the difference files, perhaps four difference files per 64GB SSD.
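For a rough sense of the savings, here's a back-of-the-envelope sketch; the 2 GB per-VM delta is a hypothetical figure for illustration, not something measured on this testbed:

```python
# Storage comparison: 13 full 64GB VM images vs. one shared base
# image plus 13 differencing disks. The 2 GB unique-data-per-VM
# figure is an assumption for illustration only.
NUM_VMS = 13
BASE_GB = 64   # size of the shared base image
DELTA_GB = 2   # assumed unique data written by each VM

full_clones_gb = NUM_VMS * BASE_GB              # every VM is a full copy
differencing_gb = BASE_GB + NUM_VMS * DELTA_GB  # one base + 13 diffs

print(full_clones_gb)   # 832
print(differencing_gb)  # 90, comfortably inside a 100GB SSD
```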
  • ganeshts - Thursday, November 29, 2012 - link

    extide, Thanks for the comments.

    No, we aren't using the acceleration software with the RevoDrive Hybrid, as it works only for boot disks.

I am reading up on Hyper-V differencing disks, and it definitely looks like a better way to go about the process. I will experiment with the differencing method and see whether it simplifies the storage requirements while retaining ease of use.
  • yupsay - Friday, November 30, 2012 - link

I've been using differencing disks like that, saving space and improving performance a bit. One downside to note: because differencing disks are dynamic, under Windows 2008 R2 they grow in 2 MB chunks at a time, while under Windows 2008 it was 512 KB. The problem starts when you've got multiple machines running and expanding their VHD footprints. Look out for fragmentation.
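To put those chunk sizes in perspective, a quick sketch of how many separate allocation extents a growing disk accumulates (the 1 GB growth figure is just an illustrative assumption):

```python
# Number of allocation extents added when a dynamic VHD grows by 1 GB,
# at the two chunk sizes mentioned above. When several VHDs grow at
# once, their extents interleave on the physical disk and fragment it.
GROWTH_MB = 1024  # assumed growth per VHD, for illustration

chunks_win2008r2 = GROWTH_MB // 2           # 2 MB chunks on Win 2008 R2
chunks_win2008 = (GROWTH_MB * 1024) // 512  # 512 KB chunks on Win 2008

print(chunks_win2008r2)  # 512
print(chunks_win2008)    # 2048
```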
  • eanazag - Thursday, November 29, 2012 - link

I'd like to know when Netgear is going to support 10 GbE over Cat 6 copper Ethernet. I have some new Intel X540-T2 copper 10 GbE NICs, and the switch market for these is incredibly weak. Fiber is getting decent attention, but copper isn't. I'd love to see even low-port-count switches (~8 ports). I don't care if it takes a whole 1U to pull off. I don't care about LACP today. Give me even a dumb switch. VLANs would be nice. I just need a switch. I have them direct-connected at the moment, and I am really losing out on usage scenarios.

All the switches (Dell and some other vendors) that support 10GbE over copper Cat 6/6a cable are $10,000+ for 24 ports.

I have the NICs set up between ESXi and Nexenta iSCSI. I am trying to push the NAS with low numbers of data streams (i.e., a low number of VMs) to take advantage of the caching and RAID capabilities of Nexenta.
  • pablo906 - Friday, November 30, 2012 - link

A few weeks ago the only 10GbE-over-copper switches slated for the near future were Cisco's. This may have changed, but I doubt it. Whenever Motorola comes up with an integrated design incorporating the feature, you'll see a ton of other vendors suddenly supporting it.
  • jhh - Friday, November 30, 2012 - link

One of the problems is that no one is making 10G switch chips with only 4-8 ports. Broadcom's smallest switch chip has 24 10G ports. Some of their older chips had 24 1G ports and 4 10G ports, but no one has built those into switches with 10GBASE-T ports. Broadcom does have some 4x 10GBASE-T PHY chips coming in 1Q13, which should help, but I doubt the prices will be extremely low. The 24-port 10-gigabit switches had a cost of close to $2000, so we aren't in the $100-switch range. Then, once one adds warranty expense, R&D recovery, and room for discounts for big customers, the price is quite often 2x or more of the cost, especially for these high-end items.
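Putting rough numbers on that cost-to-price chain (the figures are taken from the comment above, so treat the result as an order-of-magnitude estimate only):

```python
# From BOM cost to street price for a hypothetical 24-port 10GBASE-T
# switch, using the ~2x markup for warranty, R&D recovery, and
# discount headroom described above.
BOM_COST_USD = 2000   # approximate cost of a 24-port 10G switch
MARKUP = 2            # "quite often 2x or more of the cost"
PORTS = 24

street_price = BOM_COST_USD * MARKUP
per_port = street_price / PORTS

print(street_price)     # 4000
print(round(per_port))  # 167 per 10G port, at the optimistic end
```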

The other option is to use SFP+ and direct-attach cables, but that doesn't help with the X540.
  • d3v1on - Thursday, November 29, 2012 - link

Hi there, I have the same motherboard and was just wondering if the holes line up on the RV03 case. As I understand it, ASUS has included 3 proprietary mounting holes, so despite having an EEB-compatible case, those 3 holes wouldn't line up.

    Was this an issue when completing this build?
  • ganeshts - Thursday, November 29, 2012 - link

It was not much of an issue. I remember some holes didn't line up, but the locations were such that it didn't cause any problems related to the stability of the motherboard inside the chassis.
  • d3v1on - Thursday, November 29, 2012 - link

Thanks heaps Ganesh. Appreciate the quick reply.
  • Andrew911tt - Thursday, November 29, 2012 - link

From what I understand, the OCZ RevoDrive Hybrid is being used just as a PCIe-to-SATA converter; is that correct?

    I understand the changes that you made on the external network setup, but my question is why did you make this change?
  • ganeshts - Thursday, November 29, 2012 - link

1. Yes, and we also got 100 GB of NAND as a new drive for the host OS to access.

    2. Our previous external network setup (ZyXel switch) had only 24 ports. With 12 VMs, we had plenty of spare ports for the management port and for the NAS units. When moving to 25 VMs, we ran out of ports in the switch. The second reason is that we are planning to evaluate 10 GbE NAS units in the future and it is important to have a switch capable of 10 GbE for that purpose.
  • Andrew911tt - Thursday, November 29, 2012 - link

I understand what you did, but why did you create the separate subnets and isolate them from the internet as in the first setup?
  • ganeshts - Friday, November 30, 2012 - link

We wanted to eliminate unnecessary / unintended traffic from the machines on the live network (192.168.1.x) to the NAS or even the VMs themselves.
  • SunLord - Friday, November 30, 2012 - link

Why are you using a stupid Revo? You should have gotten a SAS HBA and used 5.25" to 4x 2.5" bay adapters; then you could have put in up to 20 2.5" SSDs and an optical drive.
  • SunLord - Friday, November 30, 2012 - link

Something like this is what I meant for the 4x 2.5" adapter.
  • Flunk - Friday, November 30, 2012 - link

Or simply hang extra bays from the roof of the case.
  • Plifzig - Friday, November 30, 2012 - link

    So, were all the SATA ports occupied? Or were they just all taken? Sounds like they were occupied.

    And also taken.
  • KranZ - Friday, November 30, 2012 - link

Were you using the default 1500-byte MTU, or did you bump the interfaces and VMs up to 9000-byte MTUs?
  • kenyee - Friday, November 30, 2012 - link

Could you guys please test these things for noise/heat with more drives when you test cases?
E.g., the recent Nanoxia Deep Silence review: it looks like it'd be perfect for something like your SOHO NAS, but it was tested with an SSD and no hard drives :-P
The case in this review had a hard drive cage.
If you have that many slots, why would you not load it up?
And if you're using a camera like the D800 with 50MB RAW files and trying to do video with terabytes of raw footage, you're going to load it up with hard drives...
  • GullLars - Saturday, December 01, 2012 - link

This was a very interesting read, but why the RevoDrive Hybrid? Given the cost of the entire system, why not just go for a RevoDrive 3 X2 960GB? That would massively reduce VM boot times, and eliminate, or at least push out, any I/O bottlenecks the 13 VMs sharing the drive may encounter.

This once again reminds me that the industry has been way too slow to make 10GbE available to the masses, or even to power users and enthusiasts. I've been running SSD RAIDs for years now, and I'd like to move my HDD RAID to a file server, but the GbE bottleneck has kept me from it. It would also be awesome for LANs, even if the switch only had 1-2 10GbE ports.
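That bottleneck is easy to quantify; a minimal sketch of theoretical line-rate ceilings, ignoring Ethernet/TCP overhead:

```python
# Peak payload throughput of a network link, ignoring protocol
# overhead: this is why a GbE link caps even a modest RAID.
def link_mb_per_s(gbits: float) -> float:
    """Theoretical throughput in MB/s for a link speed in Gb/s."""
    return gbits * 1000 / 8

print(link_mb_per_s(1))   # 125.0, below a single modern SATA SSD
print(link_mb_per_s(10))  # 1250.0, enough headroom for an SSD RAID
```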
  • Hrel - Friday, December 14, 2012 - link

Some reviews of those newer ARM-based NAS units would be GREAT! I'm quite skeptical of how well that could work. But then again my current NAS is running a Pentium 4 540, I think? 3GHz, Hyper-Threaded. Works, but not the fastest thing; the CPU is clearly the bottleneck.
