The Asus Z9PE-D8 WS has two native GbE ports, but with twelve VMs trying to stress the NAS under test, sharing two GbE links was never going to be a suitable solution. The testbed used in earlier reviews had an Intel ESA-I340 quad-port GbE network adapter. Based on Intel's 82580 Gigabit Ethernet controller, it uses a PCIe 2.0 x4 connection, enough to sustain maximum bidirectional throughput on all four GbE ports.
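To put numbers on that claim, a quick back-of-the-envelope calculation (sketched in Python below; the 500 MB/s-per-lane figure is the standard PCIe 2.0 rate after 8b/10b encoding overhead) shows the x4 link has roughly 4x headroom over what four GbE ports can demand:

```python
# Back-of-the-envelope: PCIe 2.0 x4 vs. four full-duplex GbE ports.
PCIE2_MBPS_PER_LANE = 500   # 5 GT/s, 8b/10b encoding -> ~500 MB/s per lane, per direction
GBE_MBPS_PER_PORT = 125     # 1 Gb/s = 125 MB/s per direction

pcie_bw = 4 * PCIE2_MBPS_PER_LANE   # x4 slot: 2000 MB/s each direction
nic_bw = 4 * GBE_MBPS_PER_PORT      # quad-port GbE: 500 MB/s each direction

print(f"PCIe 2.0 x4 : {pcie_bw} MB/s per direction")
print(f"4x GbE      : {nic_bw} MB/s per direction")
print(f"Headroom    : {pcie_bw / nic_bw:.0f}x")
```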

The Z9PE-D8 WS has seven PCIe slots, and we decided to fill three of them with Intel ESA-I340 cards, giving us a total of twelve physical GbE ports, one for each VM. The ESA-I340 is a full-height card, but it also ships with a low-profile bracket. One of the more interesting aspects of the I340 server adapter is VMDq (Virtual Machine Device Queues), which reduces I/O overhead in the hypervisor (and lowers CPU usage) by sorting incoming packets into hardware queues by destination VM, leaving the hypervisor to simply forward each queue's contents to the appropriate VM. Even though we had VMDq enabled on all the adapters, it probably had a negligible effect on CPU usage in our workloads because each VM had one physical port reserved exclusively for its own use. The host machine itself connected to the network through one of the native GbE ports on the motherboard.
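To make the VMDq mechanism concrete, here is a minimal conceptual sketch (plain Python; the names and data structures are ours for illustration, not Intel's driver API). The NIC bins incoming frames into per-VM hardware queues by destination MAC, so the hypervisor's job shrinks to draining queues that are already sorted:

```python
from collections import defaultdict

# Hypothetical MAC -> VM assignment table (illustrative addresses).
VM_MAC_TABLE = {
    "00:1b:21:aa:00:01": "vm01",
    "00:1b:21:aa:00:02": "vm02",
}

def nic_sort(frames):
    """What VMDq does in hardware: one receive queue per destination VM."""
    queues = defaultdict(list)
    for frame in frames:
        vm = VM_MAC_TABLE.get(frame["dst_mac"], "default")
        queues[vm].append(frame)
    return queues

def deliver_to_vm(vm, frame):
    print(f"{vm} <- {len(frame['payload'])} bytes")

def hypervisor_forward(queues):
    """Without VMDq, the hypervisor would also have to do the sorting above."""
    for vm, frames in queues.items():
        for frame in frames:
            deliver_to_vm(vm, frame)

frames = [
    {"dst_mac": "00:1b:21:aa:00:01", "payload": b"x" * 1500},
    {"dst_mac": "00:1b:21:aa:00:02", "payload": b"y" * 600},
]
hypervisor_forward(nic_sort(frames))
```

In our setup the sorting step is trivial anyway, since each physical port only ever carries one VM's traffic.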

Between the NAS testbed and the NAS under test, it is essential that we have a switch capable of standing up to the stressful network traffic. Our switch of choice was one of ZyXel's 24-port managed enterprise models, the GS2200-24. Even though it is advertised as a 24-port model, it actually has four extra dual-personality ports (i.e., each can be used as either copper or optical SFP).

The switch has all the necessary features for an L2 managed model, including VLANs, CoS/QoS, port trunking, and bandwidth control. Port trunking (802.3ad LACP aggregation) is essential because NASes with dual GbE ports offer it as an option to increase performance. Even if a NAS supports link aggregation, it is of no use if the switch it is connected to cannot aggregate the corresponding ports.
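One caveat worth keeping in mind: 802.3ad balances traffic per flow, not per packet, so a single client conversation never runs faster than one member link. The sketch below (assuming a Linux-bonding-style layer-2 transmit hash; real implementations vary in the exact hash inputs) shows how different clients get spread across a dual-GbE NAS's aggregated links:

```python
NUM_LINKS = 2  # a dual-GbE NAS in a two-port LACP group

def mac_to_int(mac: str) -> int:
    return int(mac.replace(":", ""), 16)

def select_link(src_mac: str, dst_mac: str) -> int:
    # Layer-2 style hash: XOR the MACs, take the result modulo the link count.
    # Every frame of a given src/dst pair lands on the same link.
    return (mac_to_int(src_mac) ^ mac_to_int(dst_mac)) % NUM_LINKS

nas_mac = "00:08:9b:aa:bb:01"  # hypothetical NAS MAC
clients = [f"00:1b:21:aa:00:{i:02x}" for i in range(1, 5)]
for client in clients:
    print(f"{client} -> link {select_link(client, nas_mac)}")
```

With twelve testbed VMs on distinct MACs, the hash spreads the flows across both links, which is exactly the multi-client scenario where aggregation pays off.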

Comments

  • xTRICKYxx - Wednesday, September 5, 2012 - link

May I ask why you guys need such high requirements? And why 12 VMs? I just think this is overkill. But it doesn't matter anyway... If I had a budget like this, I would totally build an awesome NAS like you guys have and follow this guide. Great job!
  • xTRICKYxx - Wednesday, September 5, 2012 - link

I should clarify that I am looking at this NAS as a household commodity, not something that 10+ computers will be heavily accessing.
  • mfed3 - Wednesday, September 5, 2012 - link

    still didn't read...this is hopeless..
  • extide - Thursday, September 6, 2012 - link

    Dude they are NOT BUILDING A NAS!!!

They are building a system to TEST other NASes
  • thomas-hrb - Thursday, September 6, 2012 - link

It would also be nice to test some of the other features, for example iSCSI. Since the Thecus N4800 supports iSCSI, I would like to see that test redone with a slightly different build/deployment.

Create a single iSCSI LUN, then mount that LUN in a hypervisor like ESXi, create some VMs (20GB per server should be enough for Server 2K8R2), and test it that way.

I don't know who would use NAS over SAN in an enterprise shop, but some of the small guys who can't afford an enterprise storage solution (less than 25 clients) might want to know how effectively a small NAS can handle VMs with advanced features like vMotion and fault tolerance. In fact, if you try some of those HP ML110 G7s (3 of them with a VMware Essentials Plus kit), you can get 12 CPU cores with 48GB RAM, with licensing, for about 10K. This setup will give you a decent amount of reliability, and if the NAS can support data replication, you could get a small setup with enterprise features (even if not enterprise performance) for less than the cost of 1 tray of FC-SAN storage.
  • Wixman666 - Wednesday, September 5, 2012 - link

    It's because they want to be able to really hammer the storage system.
  • The0ne - Wednesday, September 5, 2012 - link

    "The guest OS on each of the VMs is Windows 7 Ultimate x64. The intention of the build is to determine how the performance of the NAS under test degrades when multiple clients begin to access it. This degradation might be in terms of increased response time or decrease in available bandwidth."

12 is a good size, if not too small, for a medium-size company.
  • MGSsancho - Wednesday, September 5, 2012 - link

12 is also a good size for a large workgroup. Alternatively, this is a good benchmark for students in dorms: sure, there might be 4-5 people, but when you factor in computers using torrents, plus game consoles and TVs streaming Netflix, it could be interesting. Granted, all of this is streaming except for the torrents and their random I/O, though most torrent clients cache as many of the writes as possible. With the current AnandTech bench setup with VMs, this can be replicated.
  • DanNeely - Wednesday, September 5, 2012 - link

The same reason they need 8-threaded benchmark apps to fully test a quad-core HT CPU. They're testing NASes designed to have more than 2 or 3 clients attached at once; simulating a dozen of them drives the load on the NASes up, although judging by the results from the Thecus N4800, they probably fell short of maxing it out.
  • theprodigalrebel - Wednesday, September 5, 2012 - link

    Well, this IS Anandtech and the article is filed under IT Computing... ;)
