Updating the Testbed - External Infrastructure

Between the setup of the original testbed and the beginning of the update process, some NAS vendors approached us to evaluate rackmount units with 10 GbE capability. This meant that the ZyXel GS2200-24 switch we had been using in our testbed would no longer pass muster. Netgear graciously accepted our request to participate in the testbed upgrade process by providing the ProSafe GSM7352S-200, a 48-port Gigabit L3 managed switch with built-in 10 GbE.

In the first version of the testbed, we had let the ZyXel GS2200-24 act as a DHCP relay and configured the main router (Buffalo AirStation WZR-D1800H) to provide DHCP addresses to all the NAS units, machines and VMs connected to the switch. In essence, it was a live network, and the VMs and the NAS under test could also access the Internet. With the GSM7352S, we decided to isolate the NAS testbed completely.

The first port of the Netgear ProSafe GSM7352S was connected to the ZyXel switch and acted as the management port. The switch acted as a DHCP client and obtained a management IP address from the Buffalo router. We configured ports 1 through 12 to remain part of the default VLAN. Clients connected to these ports obtain their IP addresses (of the form 192.168.1.x) via relay from the main router. Ports 13 through 50 were made members of a second VLAN, and a DHCP server issuing addresses of the form 192.168.2.x was associated with that VLAN. No routes were set up between the 192.168.1.x and 192.168.2.x subnets.
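The addressing plan above can be sketched with Python's standard `ipaddress` module; the subnet objects and the sample NAS address below are illustrative assumptions, not pulled from the switch's actual configuration:

```python
import ipaddress

# Hypothetical sketch of the two-VLAN addressing plan described above.
default_vlan = ipaddress.ip_network("192.168.1.0/24")  # ports 1-12, relayed from the main router
test_vlan = ipaddress.ip_network("192.168.2.0/24")     # ports 13-50, served by the switch's DHCP server

# With no routes configured between the subnets, isolation amounts to the
# two networks not overlapping and a host in one not being a member of the other.
assert not default_vlan.overlaps(test_vlan)

nas_ip = ipaddress.ip_address("192.168.2.50")  # assumed address for a NAS under test
print(nas_ip in test_vlan)     # True  -> reachable by the benchmark VMs
print(nas_ip in default_vlan)  # False -> invisible to the management LAN
```

Since neither /24 contains the other and no inter-VLAN routes exist, traffic generated during benchmarking cannot leak onto the live 192.168.1.x network.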

The GbE port associated with the host OS of our testbed workstation was connected to Port 2 of the ProSafe GSM7352S. Therefore, we were able to log into the workstation via Remote Desktop from our main network. The NAS under test was connected to ports 47 and 48, which were then set up for aggregation via the switch's web UI. In the case of NAS units with 10 GbE ports, the plan is to connect them to ports 49 and 50 and aggregate them in a similar way.
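With ports 47 and 48 aggregated, the switch must decide which member link carries each flow. The sketch below mimics a common 802.3ad "layer 2" transmit-hash policy (XOR of the source and destination MAC addresses, modulo the number of member links), as used for example by the Linux bonding driver; the GSM7352S's actual hash algorithm may differ, and the MAC addresses are made up for illustration:

```python
# Hedged sketch of per-flow link selection on an 802.3ad (LACP) aggregate.
def layer2_hash(src_mac: str, dst_mac: str, n_links: int = 2) -> int:
    """Map a MAC pair to a member link index (layer-2 style hash)."""
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    # The same MAC pair always maps to the same link, so a single flow
    # is never reordered across the aggregate.
    return (src ^ dst) % n_links

nas = "00:11:22:33:44:55"  # hypothetical NAS MAC
print(layer2_hash("aa:bb:cc:dd:ee:01", nas))  # client 1 -> link 0
print(layer2_hash("aa:bb:cc:dd:ee:02", nas))  # client 2 -> link 1
```

This is why a single client rarely sees more than 1 Gbps from a 2 x GbE aggregate, while several clients hitting the NAS concurrently can spread across both links.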

All the VMs and the NAS itself are under the same subnet and can talk to each other while being isolated from the external network. Since the host OS also has an internal network for management (each VM is connected to the internal network in the 10.0.0.x subnet and also to the switch in the 192.168.2.x subnet), we were able to run all the benchmarks within the isolated network from the Remote Desktop session in the host OS.
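The dual-homed arrangement can be sketched as a simple directly-connected-route lookup; the interface names and addresses below are hypothetical, chosen only to mirror the two subnets described above:

```python
import ipaddress

# Assumed view from one VM: one NIC on the internal management network,
# one NIC on the isolated benchmark VLAN. Names are illustrative.
interfaces = {
    "mgmt0": ipaddress.ip_network("10.0.0.0/24"),     # internal host/VM management
    "test0": ipaddress.ip_network("192.168.2.0/24"),  # isolated benchmark VLAN
}

def pick_interface(dst):
    """Return the directly connected interface for dst, or None if unreachable."""
    addr = ipaddress.ip_address(dst)
    for name, net in interfaces.items():
        if addr in net:
            return name
    return None  # no route: e.g. the external 192.168.1.x LAN

print(pick_interface("192.168.2.50"))  # test0 -> NAS traffic stays on the isolated VLAN
print(pick_interface("192.168.1.1"))   # None  -> no path to the main network
```

Benchmark traffic to the NAS can only leave via the isolated VLAN, while the Remote Desktop session rides the management path, so the two never mix.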

23 Comments

  • GullLars - Saturday, December 1, 2012

    This was a very interesting read, but why the RevoDrive Hybrid? With the cost of the entire system, why not just go for a RevoDrive 3 X2 960GB? That would massively reduce VM boot times, and eliminate or push out any I/O bottlenecks the 13 VMs sharing the drive may encounter.

    This once again reminds me that the industry has been way too slow to make 10GbE available to the masses, or even power users and enthusiasts. I've been running SSD RAIDs for years now, and I'd like to move my HDD RAID to a fileserver, but the GbE bottleneck has kept me from it. It would also be awesome for LANs, even if the switch only had 1-2 10GbE ports.
  • Hrel - Friday, December 14, 2012

    Some reviews of those newer NAS units based on ARM would be GREAT! I'm quite cautious about how well those could work. But then again, my current NAS is running a Pentium 4 540, I think? 3 GHz, Hyper-Threaded. Works, but not the fastest thing. The CPU is clearly the bottleneck.
