Memory

The eight DIMM slots in the Z9PE-D8 WS allow up to 64GB of DRAM. Intel NASPT (one of our anticipated VM workloads) requires a minimum of 1GB of DRAM and doesn't really like anything more than 2GB (extra memory introduces caching effects and leads to anomalous benchmarking results). Even low-end clients in SMB environments come with a minimum of 2GB of DRAM nowadays, and hence, we decided to allocate 2GB of DRAM to each VM. As SMB NAS speeds approach 200 MBps, it is sometimes necessary to have data sources and sinks capable of handling such speeds for file transfer tests. One option would be a really fast SSD or SSDs in RAID0. However, this introduces unnecessary extra variables into the mix. A RAM disk makes for a better solution and, in our build, also acts as a common storage resource for multiple VMs.
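
As a sanity check on the memory budget, here is a quick back-of-the-envelope sketch of how the 64GB splits across the guests, the host, and the RAM disk. The host OS reservation below is an assumed figure for illustration; only the 2GB-per-VM and 64GB totals come from the build itself.

```python
# Back-of-the-envelope memory budget for the testbed (all figures in GB).
# The 2 GB/VM and 64 GB totals are from the build; the host reservation is assumed.
TOTAL_DRAM = 64
VM_COUNT = 12
DRAM_PER_VM = 2

vm_total = VM_COUNT * DRAM_PER_VM        # 24 GB committed to the guest VMs
host_reserve = 4                         # assumed headroom for the host OS
ram_disk = TOTAL_DRAM - vm_total - host_reserve

print(f"Guest VMs: {vm_total} GB")
print(f"Host OS:   {host_reserve} GB (assumed)")
print(f"RAM disk:  {ram_disk} GB available as a shared data source/sink")
```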

Keeping the above factors in mind, we decided to max out the capabilities of the Z9PE-D8 WS by installing 64GB of DRAM. We used G.Skill's RipjawsZ F3-12800CL10Q2-64GBZL (8Gx8) modules. This quad-channel DDR3 kit is rated for operation at 1.5V and 1600 MHz with 10-10-10-30 timings. No overclocking was necessary for our application. The Z9PE-D8 WS BIOS actually set the DIMMs to 1333 MHz by default, and while performance at that setting was good enough for our workloads, we decided to run them at their rated 1600 MHz in the final configuration. Even though G.Skill targets this kit at the X79 platform, we had no trouble using it with the Z9PE-D8 WS. The combination of high capacity and efficiency made the G.Skill RipjawsZ a good choice for our testbed.
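
For context, the gap between the default 1333 MHz setting and the rated 1600 MHz setting can be sketched with simple peak-bandwidth arithmetic per quad-channel bank (one bank per CPU socket). These are theoretical ceilings, not measured numbers.

```python
# Peak theoretical DDR3 bandwidth per quad-channel bank (one bank per CPU socket).
# bandwidth = transfer rate (MT/s) x 8 bytes per transfer x number of channels
def peak_bandwidth_gb_s(mt_per_s: int, channels: int = 4) -> float:
    return mt_per_s * 8 * channels / 1000.0

for speed in (1333, 1600):
    print(f"DDR3-{speed}: {peak_bandwidth_gb_s(speed):.1f} GB/s theoretical peak")
# DDR3-1333: ~42.7 GB/s; DDR3-1600: ~51.2 GB/s per socket
```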

Storage

The storage subsystem is one of the most important aspects of a build meant to host multiple VMs concurrently. In our earlier NAS testbed, we ran the VMs off a 2TB Seagate hard drive, with the host OS and the VMs in separate partitions. However, that is not a suitable solution for running multiple VMs concurrently, so we decided to dedicate one physical disk to each VM. Fortunately, the Z9PE-D8 WS provides 14 SATA ports.

Our planned workload doesn't involve storing massive media files or any other data that would call for hard disk drives in the testbed. The only exception is the robocopy test, where we transfer a Blu-ray folder structure (10.7GB in size) to the NAS and back. That is easily handled with a RAM disk, and hence, we decided to go with an SSD-only build.
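
A minimal sketch of how such a transfer could be scripted and timed from one of the VMs is shown below. The drive letters, share name, and folder names are placeholders rather than the actual testbed paths, only standard robocopy switches are used, and the throughput figure is a simple average.

```python
# Hedged sketch: time a robocopy run of the ~10.7GB Blu-ray folder structure
# from the RAM disk to a NAS share and report the average throughput.
import subprocess
import time

SOURCE = r"R:\bluray_test"                    # placeholder: RAM disk copy of the test folder
DEST = r"\\nas-under-test\share\bluray_test"  # placeholder: share exported by the NAS
SIZE_GB = 10.7

start = time.time()
# /E copies subdirectories (including empty ones); /NP suppresses per-file progress.
# robocopy uses non-zero exit codes even on success, so we don't raise on them.
subprocess.run(["robocopy", SOURCE, DEST, "/E", "/NP"], check=False)
elapsed = time.time() - start

print(f"{SIZE_GB} GB in {elapsed:.1f} s -> {SIZE_GB * 1024 / elapsed:.1f} MBps average")
```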

We decided to equip the testbed with a 128GB OCZ Vertex 4 SSD for the host OS (Windows Server 2008 R2) and devote one 64GB OCZ Vertex 4 SSD to each VM. During the initial installation, we found that Windows Server 2008 R2 recommended at least 80GB of disk space for the primary partition. With temporary files for 12 VMs needing to be stored at the same time, we would have run the host OS SSD very close to full capacity. To resolve this, we installed another 128GB SSD to store the VM data and other necessary software.
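
To see why the single 128GB host SSD would get tight, a rough capacity sketch follows. The per-VM temporary-file footprint used here is purely an illustrative assumption, not a measured value.

```python
# Rough capacity check for the host OS SSD (all figures in GB).
# The per-VM temporary-file footprint is an assumption, not a measured value.
HOST_SSD = 128
OS_PARTITION = 80      # recommended minimum for Windows Server 2008 R2
VM_COUNT = 12
TEMP_PER_VM = 3        # assumed temporary/working files per VM

temp_total = VM_COUNT * TEMP_PER_VM
remaining = HOST_SSD - OS_PARTITION - temp_total
print(f"Temp files for {VM_COUNT} VMs: {temp_total} GB, "
      f"leaving only ~{remaining} GB free on the host SSD")
```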

The 128GB OCZ Vertex 4 provides up to 560 MBps / 430 MBps sequential read / write performance, and 90K / 120K IOPS for 4KB random reads / writes. At idle, the power consumption is 1.3W, and it shoots up to 2.5W when fully active. The power figures remain the same for the 64GB OCZ Vertex 4, but its sequential read / write performance drops to 460 MBps / 220 MBps and its 4KB random read / write IOPS come in at 70K / 85K. Our aim in going with an SSD-only build was to make sure that the system's storage subsystem didn't end up being a bottleneck for our VMs. The much lower power consumption (compared to an equivalent number of distinct hard disk drives) ends up being an added bonus.
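
A quick check, under the assumption that each VM client talks to the NAS over a single GbE link, shows why even the slower 64GB Vertex 4 keeps the storage subsystem out of the way:

```python
# Compare the 64GB Vertex 4's rated sequential speeds against a single GbE link,
# which is roughly the pipe each VM client has to the NAS under test.
GBE_LINK_MBPS = 1000 / 8          # ~125 MBps theoretical line rate
VERTEX4_64GB = {"sequential read": 460, "sequential write": 220}  # vendor ratings, MBps

for op, mbps in VERTEX4_64GB.items():
    print(f"{op}: {mbps} MBps, about {mbps / GBE_LINK_MBPS:.1f}x a GbE link")
```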

 

Comments

  • ganeshts - Thursday, September 6, 2012

    Thanks for unearthing that one.. Fixed now.
  • ypsylon - Thursday, September 6, 2012

    14 SSDs. I know it is only to simulate separate clients, but to be honest this whole test is ultimately meaningless. No reasonable business (not talking about 'man with a laptop' kind of company) will entrust crucial data to SSD(s) (in particular non-enterprise-class SSDs). Those disks are far too unreliable and HDDs trounce them in that category every time. Whether you like it or not, HDDs are still here and I'm absolutely certain that they will outlive SSDs by a fair margin. Running a business myself and thank you very much, HDDs are the only choice: RAID 10, 6 or 60 depending on the job. Bloody SSDs, hate those to the core (tested). Good for laptops or for geeks who bench systems 24/7, not for serious work.
  • ypsylon - Thursday, September 6, 2012

    Dang, 12 not 14, ha ha.
  • mtoma - Thursday, September 6, 2012

    If you love the reliability of HDDs so much, I must ask you: which SSD brand has failed you? Intel? Samsung? You know, there are statistics that show Intel and Samsung SSDs are much more reliable 24/7 than many enterprise HDDs. I mean, on paper, the enterprise HDDs look great, but in reality they fail more than they should (in a large RAID array, vibration is a main concern). After all, the same basic technology applies to regular HDDs. On top of that, some (if not all) server manufacturers put refurbished HDDs in new servers (I have seen IBM doing that and I was terrified). Perhaps this is not a widespread practice, but it is truly terrifying.
    So, pardon me if I say: to hell with regular HDDs. Buy enterprise-grade SSDs; you get the same 5-year warranty.
  • extide - Thursday, September 6, 2012

    Dude, you missed the point ENTIRELY; the machine they built is to TEST NASes. They DID NOT BUILD A NAS.
  • Wardrop - Saturday, September 8, 2012

    I can't work out whether this guy is trolling or not. A very provocative post without really any detail.
  • AmdInside - Thursday, September 6, 2012

    Isn't Win7 x64 Ultimate a little too much for a VM? Would be nice to see videos.
  • ganeshts - Thursday, September 6, 2012

    We wanted an OS which would support both IOMeter and Intel NASPT. Yes, we could have gone with Windows XP, but the Win 7 installer USB drives were on top of the heap :)
  • AmdInside - Thursday, September 6, 2012

    Thanks
  • zzing123 - Thursday, September 6, 2012

    Hi Ganesh - Thanks for taking my post a few articles back to heart regarding the NAS performance when fully loaded, as it begins to provide some really meaningful results.

    I have to agree with some of the other posters' comments about the workload, though. Playing a movie on one, copying on another, running a VM from a third, and working on docs through an SMB share on a fourth would probably be a more meaningful workload in a prosumer's home.

    In light of this, might it be an idea to add a new benchmark to AnandTech's Storage Bench that measures all these factors?

    In terms of your setup, there's a balance to be struck. I really like the concept of using 12 VMs to replicate a realistic environment the way you do. However, when an office has 12 clients, they're probably using a proper file server or multiple NASes. 3-4 clients is probably the most typical setup in a SOHO/home environment.

    10GbE testing is missing, and a lot of NASes are beginning to ship with 10GbE. With switches like the Cisco SG500X-24 also supporting 10GbE and slowly becoming more affordable, 10GbE is slowly but surely becoming more relevant. 1 SSD and 1 GbE connection isn't going to saturate it - 10 will, and that is certainly meaningful in a multi-user context, but this is AnandTech. What about absolute performance?

    How about adding a 13th VM that lashes together all 12 SSDs and aggregates all 12 I340 links to provide a beast of RAIDed SSDs and 12Gb of connectivity (the 2 extra connections should smoke out net adapters that aren't performing to spec as well)?
