The Asus Z9PE-D8 WS has two native GbE ports. However, with twelve VMs trying to stress the NAS under test, sharing two GbE links was never going to be a suitable solution. The testbed used in earlier reviews had an Intel ESA-I340 quad-port GbE network adapter. Based on Intel's 82580 Gigabit Ethernet controller, it uses a PCIe 2.0 x4 connection, which provides enough bandwidth for maximum bidirectional throughput on all four GbE ports.
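
As a quick sanity check, the back-of-the-envelope numbers below compare the nominal PCIe 2.0 x4 bandwidth against what four GbE ports can demand. These are raw line rates, not measured throughput:

```python
# Rough sanity check: can a PCIe 2.0 x4 link feed four GbE ports at line rate?
# Nominal figures only; real-world throughput is lower due to protocol overhead.

PCIE2_LANE_GTPS = 5.0        # PCIe 2.0 signalling rate per lane (GT/s)
PCIE2_ENCODING = 8 / 10      # 8b/10b encoding leaves 80% usable
LANES = 4

GBE_PORT_GBPS = 1.0          # GbE line rate per port, per direction
PORTS = 4

pcie_gbps = PCIE2_LANE_GTPS * PCIE2_ENCODING * LANES   # ~16 Gb/s each way
gbe_gbps = GBE_PORT_GBPS * PORTS                       # 4 Gb/s each way

print(f"PCIe 2.0 x4 usable bandwidth: ~{pcie_gbps:.0f} Gb/s per direction")
print(f"Four GbE ports at line rate:   {gbe_gbps:.0f} Gb/s per direction")
```

Even after protocol overhead, the x4 link has roughly four times the headroom needed to keep all four ports saturated in both directions.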

The Z9PE-D8 WS has seven PCIe slots, and we decided to fill three of them with Intel ESA-I340 cards. They provided a total of twelve physical GbE ports, one for each VM. The ESA-I340 is a full-height card, but it also comes with a low-profile bracket. One of the interesting aspects of the I340 server adapter is VMDq (Virtual Machine Device Queues). It reduces I/O overhead in the hypervisor (and lowers CPU usage) by maintaining hardware queues into which incoming packets are sorted by destination before being handed to the hypervisor for forwarding to the appropriate VM. Even though we had VMDq enabled in the settings for all the adapters, it probably had a negligible effect on CPU usage in our workloads because each VM had one port reserved exclusively for its own use. The host machine itself connected to the network through one of the native GbE ports on the motherboard.
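
To make the VMDq behavior concrete, here is a purely conceptual sketch of the sorting the NIC performs in hardware. The MAC addresses and queue assignments are hypothetical, and this is not a representation of Intel's driver or silicon:

```python
# Conceptual illustration of VMDq: incoming frames are sorted into per-VM
# queues by destination MAC, so the hypervisor does not have to inspect and
# route every frame itself. MACs and queue IDs below are made up.

from collections import defaultdict

# Hypothetical mapping of VM virtual-NIC MACs to hardware queues
vm_queue_for_mac = {
    "00:15:5d:01:02:01": 0,   # VM 1
    "00:15:5d:01:02:02": 1,   # VM 2
    "00:15:5d:01:02:03": 2,   # VM 3
}

def sort_frames(frames):
    """Place each incoming frame into the queue of its destination VM."""
    queues = defaultdict(list)
    for dst_mac, payload in frames:
        queue_id = vm_queue_for_mac.get(dst_mac, "default")  # unknown MACs go to a default queue
        queues[queue_id].append((dst_mac, payload))
    return dict(queues)

frames = [("00:15:5d:01:02:02", b"IOMeter traffic"),
          ("00:15:5d:01:02:01", b"NASPT traffic")]
print(sort_frames(frames))
```

With one physical port dedicated to each VM in our setup, most traffic already arrives on the "right" adapter, which is why the feature made little practical difference here.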

Between the NAS testbed and the NAS under test, we need a switch capable of standing up to the stressful network traffic. Our switch of choice was one of ZyXEL's 24-port managed enterprise models, the GS2200-24. Even though it is advertised as a 24-port model, it actually has four extra dual-personality ports (i.e., each can be used as either copper or optical SFP).

The switch has all the necessary features for an L2 managed model, including VLANs, CoS/QoS, port trunking and bandwidth control. Port trunking / 802.3ad LACP aggregation is essential because NASes with dual GbE ports offer it as an option to increase performance. However, link aggregation support on the NAS is of no use if the switch it is connected to can't aggregate the corresponding ports.
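
For readers unfamiliar with how aggregation distributes load, the sketch below shows the general idea behind a typical 802.3ad hash policy: each flow (here keyed by source and destination MAC) is pinned to one member link, so a single client never exceeds one GbE link while multiple clients spread across both. The hash function, MACs and port names are illustrative and not the GS2200-24's actual implementation:

```python
# Illustrative 802.3ad/LACP member-link selection: a hash over the frame's
# address fields picks one port per flow. Real switches use vendor-specific
# hash policies; this is only a sketch of the concept.

import hashlib

MEMBER_PORTS = ["nas-port-1", "nas-port-2"]   # two aggregated GbE ports on the NAS side

def pick_member(src_mac: str, dst_mac: str) -> str:
    """Pin a flow to one member link based on a hash of its MAC pair."""
    digest = hashlib.sha256(f"{src_mac}->{dst_mac}".encode()).digest()
    return MEMBER_PORTS[digest[0] % len(MEMBER_PORTS)]

nas_mac = "00:11:32:aa:bb:cc"                              # hypothetical NAS MAC
clients = [f"00:1e:8c:00:00:{i:02x}" for i in range(4)]    # hypothetical client MACs
for mac in clients:
    print(f"{mac} -> {pick_member(mac, nas_mac)}")
```

This is also why link aggregation helps multi-client workloads like ours far more than it helps a single client copying one large file.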

Comments

  • ganeshts - Thursday, September 6, 2012 - link

    Thanks for unearthing that one. Fixed now.
  • ypsylon - Thursday, September 6, 2012 - link

    14 SSDs. I know it is only to simulate separate clients, but to be honest this whole test is ultimately meaningless. No reasonable business (not talking about a 'man with a laptop' kind of company) will entrust crucial data to SSDs (in particular non-industry-standard SSDs). Those disks are far too unreliable and HDDs trounce them in that category every time. Whether you like it or not, HDDs are still here and I'm absolutely certain that they will outlive SSDs by a fair margin. Running a business myself, and thank you very much, HDDs are the only choice: RAID 10, 6 or 60 depending on the job. Bloody SSDs, hate those to the core (tested). Good for laptops or for geeks benching systems 24/7, not for serious work.
  • ypsylon - Thursday, September 6, 2012 - link

    Dang, 12 not 14. Ha, ha.
  • mtoma - Thursday, September 6, 2012 - link

    If you love the reliability of HDDs so much, I must ask: which SSD brand has failed you? Intel? Samsung? You know, there are statistics showing that Intel and Samsung SSDs are much more reliable in 24/7 use than many enterprise HDDs. I mean, on paper the enterprise HDDs look great, but in reality they fail more than they should (in a large RAID array, vibration is a major concern). After all, the same basic technology applies to regular HDDs. On top of that, some (if not all) server manufacturers put refurbished HDDs in new servers (I have seen IBM doing that and I was terrified). Perhaps this is not a widespread practice, but it is truly terrifying.
    So, pardon me if I say: to hell with regular HDDs. Buy enterprise grade SSDs, you get the same 5 year warranty.
  • extide - Thursday, September 6, 2012 - link

    Dude, you missed the point ENTIRELY. The machine they built is to TEST NASes. They DID NOT BUILD A NAS.
  • Wardrop - Saturday, September 8, 2012 - link

    I can't work out whether this guy is trolling or not. A very provocative post without really any detail.
  • AmdInside - Thursday, September 6, 2012 - link

    Isn't Win7 x64 Ultimate a little too much for a VM? Would be nice to see videos.
  • ganeshts - Thursday, September 6, 2012 - link

    We wanted an OS which would support both IOMeter and Intel NASPT. Yes, we could have gone with Windows XP, but the Win 7 installer USB drives were on the top of the heap :)
  • AmdInside - Thursday, September 6, 2012 - link

    Thanks
  • zzing123 - Thursday, September 6, 2012 - link

    Hi Ganesh - Thanks for taking my post a few articles back to heart regarding the NAS performance when fully loaded, as it begins to provide some really meaningful results.

    I have to agree with some of the other posters' comments about the workload though. Playing a movie on one machine, copying files on another, running a VM from a third, and working on docs through an SMB share on a fourth would probably be a more meaningful workload in a prosumer's home.

    In light of this, might it be an idea to add a new benchmark to AnandTech's Storage Bench that measures all these factors?

    In terms of your setup, there's a balance to be struck. I really like the concept of using 12 VMs to replicate a realistic environment in the way you can. However, when an office has 12 clients, they're probably using a proper file server or multiple NASes. 3-4 clients is probably the most typical SOHO/home setup.

    10GbE testing is missing, and a lot of NASes are beginning to ship with 10GbE. With switches like the Cisco SG500X-24 also supporting 10GbE and slowly becoming more affordable, 10GbE is slowly but surely becoming more relevant. 1 SSD and 1 GbE connection isn't going to saturate it - 10 will, and that is certainly meaningful in a multi-user context, but this is AnandTech. What about absolute performance?

    How about adding a 13th VM that lashes together all 12 SSDs and aggregates all 12 I340 links to provide a beast of RAIDed SSDs and 12 GbE connectivity? (The 2 extra connections should smoke out network adapters that aren't performing to spec as well.)
