Building the 2012 AnandTech SMB / SOHO NAS Testbed
by Ganesh T S on September 5, 2012 6:00 PM EST
Concluding Remarks
The preceding two sections presented results from the newly added test components running on the new testbed. On their own, they tell only a small part of the story. In future reviews, we will plot results from multiple NAS units on a single graph (obviously, we won't be pitting the ARM/PowerPC based units against the Atom based ones) to get an idea of the efficiency and effectiveness of each NAS and its operating system.
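To give an idea of how such comparison graphs could be put together, here is a minimal plotting sketch. The NAS names and throughput numbers below are purely illustrative placeholders, not measured results:

```python
# Minimal sketch: overlaying multi-client throughput for several NAS units
# on one graph. Unit names and MBps values are made up for illustration.
import matplotlib.pyplot as plt

clients = list(range(1, 13))  # 1 through 12 simultaneous IOMeter clients

throughput_mbps = {
    "NAS A (Atom, hypothetical)": [105, 112, 115, 116, 116, 115,
                                   114, 113, 112, 111, 110, 109],
    "NAS B (Atom, hypothetical)": [90, 96, 99, 100, 99, 98,
                                   97, 96, 95, 94, 93, 92],
}

for name, values in throughput_mbps.items():
    plt.plot(clients, values, marker="o", label=name)

plt.xlabel("Number of simultaneous clients")
plt.ylabel("Total throughput (MBps)")
plt.title("100% Sequential Reads")
plt.legend()
plt.grid(True)
plt.savefig("nas_multi_client_comparison.png")
```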
Green computing was one of our main goals when building the testbed. The table below presents the power consumption numbers for the machine under various conditions.
2012 AnandTech NAS Testbed Power Consumption

| Scenario | Power Draw |
|----------|------------|
| Idle | 118.9 W |
| 32GB RAM Disk + 12 VMs Idle | 122.3 W |
| IOMeter 100% Seq 100% Reads [12 VMs] | 146.7 W |
| IOMeter 60% Random 65% Reads [12 VMs] | 128.0 W |
| IOMeter 100% Seq 50% Reads [12 VMs] | 142.8 W |
| IOMeter 100% Random 8K 70% Reads [12 VMs] | 131.2 W |
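As a quick sanity check, the per-client figure cited below falls straight out of the worst-case row in the table above:

```python
# Per-client power under the heaviest measured load
# (IOMeter 100% Seq 100% Reads across 12 VMs).
worst_case_w = 146.7  # total wall draw in watts, from the table above
clients = 12          # number of Windows VMs generating I/O
print(f"{worst_case_w / clients:.1f} W per client")  # -> 12.2 W per client
```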
Note that we were able to subject the NAS to simultaneous access from twelve different Windows clients at less than 13 W per client. This sort of power efficiency is simply not attainable in a non-virtualized environment. We conclude the piece with a table summarizing the build.
2012 AnandTech NAS Testbed Configuration

| Component | Details |
|-----------|---------|
| Motherboard | Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB |
| CPUs | 2 x Intel Xeon E5-2630L |
| Coolers | 2 x Dynatron R17 |
| Memory | G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x8GB) CAS 10-10-10-30 |
| OS Drive | OCZ Technology Vertex 4 128GB |
| Secondary Drive | OCZ Technology Vertex 4 128GB |
| Other Drives | 12 x OCZ Technology Vertex 4 64GB (offline in the host OS) |
| Network Cards | 3 x Intel ESA I-340 Quad-GbE Port Network Adapter |
| Chassis | SilverStoneTek Raven RV03 |
| PSU | SilverStoneTek Strider Plus Gold Evolution 850W |
| OS | Windows Server 2008 R2 |
Thank You!
We thank the following companies for making our NAS testbed build a reality:
- Thanks to Intel for the Xeon E5-2630L CPUs and the ESA I-340 quad port network adapters
- Thanks to Asus for the Z9PE-D8 WS dual LGA 2011 workstation motherboard
- Thanks to Dynatron for the R17 coolers
- Thanks to G.Skill for the RipjawsZ 64GB DDR3 DRAM kit
- Thanks to OCZ Technology for the two 128GB Vertex 4 SSDs and twelve 64GB Vertex 4 SSDs
- Thanks to SilverStone for the Raven RV03 chassis and the 850W Strider Gold Evolution PSU
What are readers looking for in terms of multi-client scenario testing in NAS reviews? We are open to feedback as we look to expand our coverage in this rapidly growing market segment.
Comments
Tor-ErikL - Thursday, September 6, 2012
As always, a great article and a sensible testbench, which can be scaled to test everything from small setups to larger ones. Good choice! However, I would also like some type of test that is less geared towards technical performance and more towards real-world scenarios.
So, to help out, here is my real-world scenario:
Family of two adults and two teenagers...
Equipment in my house is:
4 laptops running on the WiFi network
1 workstation for work
1 media center running XBMC
1 Synology NAS
The laptops stream music/movies from my NAS - usually, I would guess, no more than two of them at the same time.
The media center also streams music/movies from the same NAS at the same time.
In addition, some of the laptops browse the family pictures stored on the NAS and do light file copying to and from it.
The NAS itself downloads movies/music/TV shows and does unpacking and internal file transfers.
My guess is that in a typical home-use scenario there is not much intensive file copying going on - usually only light transfers, mainly over WiFi or 100Mb links.
I think the key factor is that there are usually multiple clients connecting and streaming different content - 4-5 clients at most.
Also, as mentioned, more detail on the differences between sharing protocols like SMB/CIFS would be interesting to see.
Looking forward to the next chapters in your testbench series :)
Jeff7181 - Thursday, September 6, 2012
I'd be very curious to see tests involving deduplication. I know deduplication is found more on enterprise-class storage systems, but WHS used SIS, and FreeNAS uses ZFS, which supports deduplication.
_Ryan_ - Thursday, September 6, 2012
It would be great if you guys could post results for the Drobo FS.
Pixelpusher6 - Thursday, September 6, 2012
Quick correction - on the last page, under the specs for the memory, do you mean 10-10-10-30 instead of 19-10-10-30?
I was also wondering about the CPU setup for this machine. If each of the 12 VMs uses 1 dedicated real CPU core, then what is the host OS running on? With 2 Xeon E5-2630Ls, that would be 12 real CPU cores.
I'm also curious about how hyper-threading works in a situation like this. Does each VM have 1 physical thread and 1 HT thread for a total of 2 threads per VM? Is it possible to run a VM on a single HT core without any performance degradation? If the answer is yes then I'm assuming it would be possible to scale this system up to run 24 VMs at once.
ganeshts - Thursday, September 6, 2012
Thanks for the note about the typo in the CAS timings. Fixed it now.
We took a punt on the fact that I/O generation doesn't take up much CPU. So the host OS definitely shares CPU resources with the VMs, but that sharing is handled transparently. When I mentioned that one CPU core is dedicated to each VM, I meant that the Hyper-V settings for each VM specified 1 vCPU instead of the allowed 2, 3, or 4 vCPUs.
Each VM runs only 1 thread. I am still trying to figure out how to increase the VM density in the current setup. But, yes, it looks like we might be able to hit 24 VMs, because the CPU requirements of the IOMeter workloads are not extreme.
dtgoodwin - Thursday, September 6, 2012
Kudos on an excellent choice of hardware for power efficiency. 2 CPUs, 14 network ports, 8 sticks of RAM, and a total of 14 SSDs idling at just over 100 watts is very impressive.
casteve - Thursday, September 6, 2012
Thanks for the build walkthrough, Ganesh. I was wondering why you used an 850W PSU when worst-case DC power use is in the 220W range. Instead of the $180 SilverStone Gold rated unit, you could have gone with a lower-power 80+ Gold or Platinum PSU for less money and better efficiency at your given loads.
ganeshts - Thursday, September 6, 2012
Just a hedge against future workloads :)
haxter - Thursday, September 6, 2012
Guys, yank those NICs and get a dual 10GbE card in place. SOHO is 10GbE these days. What gives? How are you supposed to test a SOHO NAS with each VM so crippled?
extide - Thursday, September 6, 2012
10GbE is certainly not SOHO.