Memory

The eight DIMM slots in the Z9PE-D8 WS allow up to 64GB of DRAM. Intel NASPT (one of our anticipated VM workloads) requires a minimum of 1GB of DRAM and doesn't work well with 4GB or more (excess memory introduces caching effects that lead to anomalous benchmarking results). Even low-end clients in SMB environments now ship with at least 2GB of DRAM, so we decided to allocate 2GB of DRAM to each VM. As SMB NAS speeds approach 200 MBps, it is sometimes necessary to have data sources and sinks capable of handling such speeds for file transfer tests. One option would be a really fast SSD, or SSDs in RAID 0, but that introduces unnecessary extra variables into the mix. A RAM disk is a better solution, and in our build it also acts as a common storage resource for multiple VMs.
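As a sanity check on any candidate data source or sink, a quick sequential-write probe confirms whether it can sustain the required rate. A minimal Python sketch of such a probe (the probe path, file size, and block size are our own illustrative assumptions, not part of the original test suite):

```python
import os
import time

def sequential_throughput(path, size_mb=256, block_kb=1024):
    """Write size_mb of data to `path` in block_kb chunks and return
    the sustained write throughput in MBps."""
    block = os.urandom(block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())   # make sure data actually hit the device
    elapsed = time.perf_counter() - start
    os.remove(path)            # clean up the probe file
    return size_mb / elapsed

# Pointing `path` at a file on the RAM disk (the R: drive letter is
# hypothetical) shows whether it sustains the ~200 MBps we need:
# sequential_throughput(r"R:\probe.bin")
```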

Keeping the above factors in mind, we decided to max out the capabilities of the Z9PE-D8 WS by installing 64GB of DRAM. We used G.Skill's RipjawsZ F3-12800CL10Q2-64GBZL (8GBx8) modules. This quad-channel DDR3 kit is rated for operation at 1.5V and 1600 MHz with 10-10-10-30 timings. No overclocking was necessary for our application. The Z9PE-D8 WS BIOS actually defaulted the DIMMs to 1333 MHz; performance at that setting was good enough for our workloads, but we ran them at the rated 1600 MHz in the final configuration. Even though G.Skill targets this kit at the X79 platform, we had no trouble using it with the Z9PE-D8 WS. The combination of high capacity and efficiency made the G.Skill RipjawsZ a good choice for our testbed.
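The memory budget works out comfortably; the 4GB host OS reserve below is our own assumption for illustration, with the remainder available for the shared RAM disk:

```python
TOTAL_GB = 64       # eight 8GB RipjawsZ modules
VM_COUNT = 12
GB_PER_VM = 2

vm_total = VM_COUNT * GB_PER_VM                   # 24 GB committed to guests
host_reserve = 4                                  # assumed host OS footprint
ram_disk_gb = TOTAL_GB - vm_total - host_reserve  # left over for the RAM disk
print(vm_total, ram_disk_gb)                      # 24 36
```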

Storage

The storage subsystem is one of the most important aspects of a build meant to host multiple VMs concurrently. In our earlier NAS testbed, we ran our VMs off a Seagate 2TB hard drive, with the host OS and the VMs in separate partitions. However, this is not a suitable solution for running multiple VMs concurrently, so we decided to devote one physical disk to each VM. Fortunately, the Z9PE-D8 WS has 14 SATA ports.

Our planned workload doesn't involve the storage of massive media files or any other data that would call for hard disk drives in the testbed. The only exception is the robocopy test, where we transfer a Blu-ray folder structure (10.7GB in size) to the NAS and back. A RAM disk handles that quite easily, and hence we decided to go with an SSD-only build.
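The robocopy test boils down to a timed folder transfer. A rough Python equivalent of what it measures (the real test uses robocopy on Windows against the NAS share; this sketch and its paths are placeholders):

```python
import pathlib
import shutil
import time

def timed_copy(src, dst):
    """Copy a directory tree and return the average transfer rate in
    MBps, roughly what the robocopy transfer test measures."""
    size = sum(p.stat().st_size
               for p in pathlib.Path(src).rglob("*") if p.is_file())
    start = time.perf_counter()
    shutil.copytree(src, dst)      # dst must not already exist
    elapsed = time.perf_counter() - start
    return size / (1024 * 1024) / elapsed

# e.g. timed_copy(r"R:\bluray_folder", r"\\NAS\share\bluray_folder")
```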

We decided to equip the testbed with a 128GB OCZ Vertex 4 SSD for the host OS (Windows Server 2008 R2) and devote one 64GB OCZ Vertex 4 SSD to each VM. During the initial installation, we found that Windows Server 2008 R2 recommended at least 80GB of disk space for the primary partition. With temporary files for 12 VMs needing to be stored at the same time, we would have run the host OS SSD very close to full capacity. To resolve this, we installed another 128GB SSD to store the VM data and other necessary software.
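The arithmetic behind the second 128GB drive is straightforward; the per-VM scratch figure below is an assumption for illustration:

```python
SSD_GB = 128
HOST_MIN_GB = 80        # recommended minimum for the Server 2008 R2 partition
VM_COUNT = 12
SCRATCH_PER_VM_GB = 3   # assumed temporary-file footprint per VM

scratch_total = VM_COUNT * SCRATCH_PER_VM_GB   # 36 GB of scratch space
free_after_host = SSD_GB - HOST_MIN_GB         # only 48 GB left on the drive
# 36 GB of scratch against 48 GB free leaves little headroom, hence the
# dedicated second 128GB SSD for VM data and tools.
print(scratch_total, free_after_host)          # 36 48
```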

The 128GB OCZ Vertex 4 provides up to 560 MBps / 430 MBps sequential read / write performance, and 90K / 120K IOPS for 4KB random reads and writes. At idle, power consumption is 1.3W, and it rises to 2.5W when fully active. These power numbers hold for the 64GB OCZ Vertex 4 as well. However, its sequential read / write performance drops to 460 MBps / 220 MBps, and the IOPS for 4KB random reads and writes come in at 70K / 85K. Our aim in going with an SSD-only build was to make sure that the system's storage subsystem didn't end up being a bottleneck for our VMs. The much lower power consumption (compared to a dozen distinct hard disk drives) is an added bonus.
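Since each VM reaches the NAS over a gigabit Ethernet link and has its own dedicated SSD, a quick per-link comparison shows why even the smaller Vertex 4 is not the bottleneck (the 95% wire-efficiency factor is an assumed figure):

```python
GBE_MBPS = 1000 / 8 * 0.95   # ~119 MBps practical ceiling per gigabit link
V4_64_SEQ_READ = 460         # MBps, 64GB Vertex 4 spec
V4_64_SEQ_WRITE = 220        # MBps

# Each VM has a dedicated SSD, so only the single-link rate matters:
storage_is_bottleneck = min(V4_64_SEQ_READ, V4_64_SEQ_WRITE) < GBE_MBPS
print(storage_is_bottleneck)  # False
```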

 

Comments

  • Zarquan - Thursday, September 6, 2012 - link

    I might be missing something really obvious here .. but if the highest power consumption was 146.7 W (IOMeter 100% Seq 100% Reads [ 12 VMs ]), then why did you need an 850W power supply ?

    Either the system is using a lot more than the 146.7 W you quoted in your power consumption figures, or the power supply is way over specified.
    http://www.anandtech.com/show/6241/building-the-20...
  • ganeshts - Thursday, September 6, 2012 - link

    This is not the only workload we plan to run on the machine.

    We were ready to put up with some inefficiency just to make sure we didn't have to open up the machine and put in a more powerful PSU down the road. The 850W PSU should serve the testbed well for future workloads which might be more stressful.
  • ydafff - Thursday, September 6, 2012 - link

    I’m a VCP 5/4, MCSE, and MCITP: VA/EA.
    This setup is way overkill for 12 VMs.
    The best fit for this testbed would be VMware vSphere Hypervisor (free ESXi) – much better memory, vCPU, and storage management – or the free Microsoft Hyper-V Server 2008 R2, which needs much less disk space and compute resources.
    Regarding VM density, you could easily run all 12 VMs (1-2GB of memory each) from a single Sandy Bridge-E CPU or an LGA 1155 Xeon (i7-class) CPU with really good performance. For storage, 2x Intel 320 Series 600GB SSDs in RAID 1 (you will need redundancy) with thin provisioning will do the trick.
  • ganeshts - Thursday, September 6, 2012 - link

    ydafff, thanks for the input.

    We are working towards increasing the VM density in the current testbed itself. As another reader pointed out, 12 VMs were not enough to stress the Thecus N4800.

    I decided not to go with the free Hyper-V Server 2008 R2 because I needed to run some programs / scripts in the host OS, and the Z9PE-D8 WS had drivers specifically for Windows Server 2008 R2.
  • eanazag - Thursday, September 6, 2012 - link

    Seems like a lot of people are talking about it being over the top. I agree with the route Anandtech took - they could have even gone farther. How far can the NASes be pushed is my question? I want to see when they start smoking NAS's. The article and concept are great. I like to know how the site sets up its test scenarios and equipment. It lets me know whether my use case is higher or lower and what the device being reviewed can do. I look at your testing methods to decide if your data is worth considering. I continue to be an avid reader here because of the effort put in. If you had one PC with one NIC, anyone in their house could test it like that. Why even write reviews about NAS's if that is how far you are going to test? Great job, Anandtech.

    I have some applications at work I would like to create repeatable tests for. An article about how to automate applications for testing would be helpful. I saw that we got a little in this article. I would also like to see more enterprise equipment being tested if you can swing it.
  • KingHerod - Friday, September 7, 2012 - link

    NAS devices are convenient and generally low-power, but it would be nice to see a comparison to some real metal with a real server OS like Server 2k8R2. Maybe a repurposed older computer with a couple drives mirrored and an actual, low end server with some SAS drives.
  • dbarth1409 - Friday, September 7, 2012 - link

    Ganesh,

    Good work. I'm looking forward to seeing some future test results.
  • dijuremo - Monday, September 10, 2012 - link

    This Asus motherboard is not truly ACPI compliant; ASUS knows it and they do not want to fix it. Their tech support has given stupid excuses in response to posts from users trying to run Windows 8 and Server 2012 on it.

    If you boot either Windows 8 or 2012 server RTM on it, it blue screens with error:
    0xA5: ACPI_BIOS_ERROR

    You just need to check the reviews at the egg to confirm.

    http://www.newegg.com/Product/Product.aspx?Item=N8...
  • ganeshts - Monday, September 10, 2012 - link

    Looks like Asus has updated support files for Windows 8.
  • VTArbyP - Monday, September 10, 2012 - link

    I wonder what would happen if you used Linux for the host and VM OSes? I suppose that would become a test of Linux vs. Windows! Heh.
    More seriously, why not add at least one VM of "the current popular distro" of Linux and a Mac OS X machine? Use them with NTFS drivers and / or reformat a NAS partition to native ext# and another to HFS+. The point being: how does the NAS react to mixed client loads that are not all SMB, as someone commented above? The other test this beast seems ideal for is a comparison of several non-local storage solutions - someone mentioned iSCSI, and I can imagine trying some types of SANs (maybe adding an InfiniBand adapter) being of interest. The point of that would simply be to see which form of non-local storage is fastest, best value, easiest to maintain, etc., for us mortals who want to connect 6 - 12 machines. We being the folks who DON'T run LANs for a living and are not up to speed on what IT people already know.
