Memory

The eight DIMM slots in the Z9PE-D8 WS allow up to 64GB of DRAM. Intel NASPT (one of our anticipated VM workloads) requires a minimum of 1GB of DRAM and doesn't behave well with 4GB, as the extra memory introduces caching effects that lead to anomalous benchmarking results. Even low-end clients in SMB environments come with a minimum of 2GB of DRAM nowadays, and hence, we decided to allocate 2GB of DRAM to each VM. As SMB NAS speeds approach 200 MBps, it is sometimes necessary to have data sources and sinks capable of handling such speeds for file transfer tests. One option would be a really fast SSD, or several SSDs in RAID 0. However, that introduces unnecessary extra variables into the mix. A RAM disk makes for a better solution and, in our build, also acts as a common storage resource for multiple VMs.
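As a rough illustration of the kind of file transfer test involved (a sketch only; the RAM disk drive letter and mapped NAS share path below are hypothetical placeholders), a timed sequential copy looks like this:

```python
import os
import shutil
import time

# Hypothetical paths: R: is the RAM disk holding the source file,
# and Z: is a NAS share mapped as a network drive.
SRC = r"R:\testfile.bin"
DST = r"Z:\testfile.bin"

size_mb = os.path.getsize(SRC) / (1024 * 1024)

start = time.perf_counter()
shutil.copyfile(SRC, DST)  # sequential write to the NAS share
elapsed = time.perf_counter() - start

# Note: OS-level write caching can inflate this figure for small files,
# the same caching effect that skews NASPT results with oversized RAM.
print(f"{size_mb:.0f} MB in {elapsed:.1f} s -> {size_mb / elapsed:.1f} MBps")
```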

Keeping the above factors in mind, we decided to max out the capabilities of the Z9PE-D8 WS by installing 64GB of DRAM. We used G.Skill's RipjawsZ F3-12800CL10Q2-64GBZL (8x8GB) kit. This quad-channel DDR3 kit is rated for operation at 1.5V and 1600 MHz with 10-10-10-30 timings. No overclocking was necessary for our application. The Z9PE-D8 WS BIOS actually set the memory to 1333 MHz by default, and performance at that setting was good enough for our workloads, but we decided to run the DIMMs at their rated 1600 MHz in the final configuration. Even though G.Skill targets this kit at the X79 platform, we had no trouble using it with the Z9PE-D8 WS. The combination of high capacity and trouble-free operation made the G.Skill RipjawsZ a good choice for our testbed.
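For reference, the memory budget behind these choices works out as follows (a back-of-the-envelope sketch; the 12-VM count comes from the build described below, and the exact host/RAM disk split is illustrative):

```python
# Back-of-the-envelope memory budget for the 64GB build.
TOTAL_GB = 64
VM_COUNT = 12        # one VM per simulated NAS client
PER_VM_GB = 2        # NASPT-friendly allocation per guest

vm_total = VM_COUNT * PER_VM_GB    # 24 GB for the guests
leftover = TOTAL_GB - vm_total     # 40 GB left for host OS + RAM disk
print(f"Guests: {vm_total} GB, host OS + RAM disk: {leftover} GB")
```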

Storage

The storage subsystem is one of the most important aspects of a build meant to host multiple VMs concurrently. In our earlier NAS testbed, we ran the VMs off a 2TB Seagate hard drive, with the host OS and the VMs in separate partitions. However, that is not a suitable solution for running multiple VMs concurrently, so we decided to devote one physical disk to each VM. Fortunately, the Z9PE-D8 WS has 14 SATA ports.

Our planned workload doesn't involve storing massive media files or any other data that would call for hard disk drives in the testbed. The only exception is the robocopy test, in which we transfer a Blu-ray folder structure (10.7GB in size) to the NAS and back. That is easily handled with a RAM disk, and hence, we decided to go with an SSD-only build.
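A minimal sketch of how such a robocopy run can be timed (the folder and share paths are placeholders; the flags shown are standard robocopy options):

```python
import subprocess
import time

# Placeholder paths: the Blu-ray folder staged on the RAM disk,
# and a share on the NAS under test.
SRC = r"R:\bluray_folder"
DST = r"\\nas-under-test\share\bluray_folder"
SIZE_MB = 10.7 * 1024  # folder size from the article, in MB

start = time.perf_counter()
# /E copies subdirectories (including empty ones); /NP, /NFL and /NDL
# suppress per-file console output so logging doesn't distort the timing.
result = subprocess.run(["robocopy", SRC, DST, "/E", "/NP", "/NFL", "/NDL"])
elapsed = time.perf_counter() - start

# robocopy exit codes below 8 indicate success.
if result.returncode < 8:
    print(f"{SIZE_MB / 1024:.1f} GB in {elapsed:.1f} s "
          f"-> {SIZE_MB / elapsed:.1f} MBps")
```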

We decided to equip the testbed with a 128GB OCZ Vertex 4 SSD for the host OS (Windows Server 2008 R2) and devote one 64GB OCZ Vertex 4 SSD to each VM. During the initial installation, we found that Windows Server 2008 R2 recommended at least 80GB of disk space for the primary partition. With temporary files for 12 VMs needing storage at the same time, we would have run the host OS SSD very close to full capacity. To resolve this, we installed another 128GB SSD to store the VM data and other necessary software.

The 128GB OCZ Vertex 4 provides up to 560 MBps / 430 MBps sequential read / write performance and 90K / 120K IOPS for 4KB random reads / writes. Power consumption is 1.3W at idle and shoots up to 2.5W when fully active. The power numbers are the same for the 64GB OCZ Vertex 4, but its sequential read / write performance drops to 460 MBps / 220 MBps, and its IOPS for 4KB random reads / writes come in at 70K / 85K. Our aim in going with an SSD-only build was to make sure that the storage subsystem didn't end up being a bottleneck for our VMs. The much lower power consumption (compared to an equivalent number of hard disk drives) ends up being an added bonus.
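A quick check against the rated figures above shows why even the slower 64GB drives leave headroom over the ~200 MBps NAS speeds mentioned earlier (a sketch; it assumes the one-disk-per-VM mapping described above, so per-drive throughput is what matters):

```python
# Rated sequential throughput (MBps) from the spec figures quoted above.
VERTEX4_64GB = {"read": 460, "write": 220}
NAS_CEILING_MBPS = 200  # approximate top end of current SMB NAS speeds

# Each VM has its own dedicated SSD, so per-drive headroom is what matters.
for op, mbps in VERTEX4_64GB.items():
    print(f"{op}: {mbps} MBps rated -> {mbps / NAS_CEILING_MBPS:.1f}x headroom")
```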

 

Comments

  • xTRICKYxx - Wednesday, September 5, 2012

    May I ask why you guys need such high requirements? And why 12 VMs? I just think this is overkill. But it doesn't matter anyway... If I had a budget like this, I would totally build an awesome NAS like you guys have and follow this guide. Great job!
  • xTRICKYxx - Wednesday, September 5, 2012

    I should clarify I am looking at this NAS as a household commodity, not something where 10+ computers will be heavily accessing it.
  • mfed3 - Wednesday, September 5, 2012

    still didn't read...this is hopeless..
  • extide - Thursday, September 6, 2012

    Dude they are NOT BUILDING A NAS!!!

    They are building a system to TEST other NASes
  • thomas-hrb - Thursday, September 6, 2012

    It would also be nice to test some of the other features, for example iSCSI. Since the Thecus N4800 supports iSCSI, I would like to see that test redone with a slightly different build/deployment.

    Create a single LUN over iSCSI, then mount that LUN in a hypervisor like ESXi, create some VMs (20GB per server should be enough for Server 2K8R2), and test it that way.

    I don't know who would use NAS over SAN in an enterprise shop, but some of the small guys who can't afford an enterprise storage solution (fewer than 25 clients) might want to know how effectively a small NAS can handle VMs with advanced features like vMotion and fault tolerance. In fact, if you try some of those HP ML110 G7s (3 of them with a VMware Essentials Plus kit), you can get 12 CPU cores with 48GB RAM, with licensing, for about 10K. This setup will give you a decent amount of reliability, and if the NAS can support data replication, you could get a small setup with enterprise features (even if not enterprise performance) for less than the cost of 1 tray of FC-SAN storage.
  • Wixman666 - Wednesday, September 5, 2012

    It's because they want to be able to really hammer the storage system.
  • The0ne - Wednesday, September 5, 2012

    "The guest OS on each of the VMs is Windows 7 Ultimate x64. The intention of the build is to determine how the performance of the NAS under test degrades when multiple clients begin to access it. This degradation might be in terms of increased response time or decrease in available bandwidth."

    12 is a good size, if not too small for a medium size company.
  • MGSsancho - Wednesday, September 5, 2012

    12 is also a good size for a large workgroup. Alternatively, this is a good benchmark for students in dorms. Sure, there might be 4-5 people, but when you factor in computers using torrents, game consoles streaming Netflix, along with TVs, it could be interesting. Granted, all of this is streaming except for the torrents and their random I/O. However, most torrent clients cache as much of the writes as possible. With the current AnandTech bench setup with VMs, this can be replicated.
  • DanNeely - Wednesday, September 5, 2012

    The same reason they need 8 threaded benchmark apps to fully test a quad-core HT CPU. They're testing NASes designed to have more than 2 or 3 clients attached at once; simulating a dozen of them pushes the load on the NASes up, although judging by the results shown by the Thecus N4800, they probably fell short of maxing it out.
  • theprodigalrebel - Wednesday, September 5, 2012

    Well, this IS Anandtech and the article is filed under IT Computing... ;)
