Building the 2012 AnandTech SMB / SOHO NAS Testbed
by Ganesh T S on September 5, 2012 6:00 PM EST
Concluding Remarks
The preceding two sections presented results from the newly added test components on the new testbed. On their own, they tell only part of the story. In future reviews, we will plot results from multiple NAS units on a single graph (obviously, we won't pit the ARM/PowerPC-based units against the Atom-based ones) to get an idea of the efficiency and effectiveness of each NAS and its operating system.
Green computing was one of our main goals when building the testbed. The table below presents the power consumption numbers for the machine under various conditions.
**2012 AnandTech NAS Testbed Power Consumption**

| Scenario | Power Draw |
|----------|------------|
| Idle | 118.9 W |
| 32GB RAM Disk + 12 VMs Idle | 122.3 W |
| IOMeter 100% Seq 100% Reads [12 VMs] | 146.7 W |
| IOMeter 60% Random 65% Reads [12 VMs] | 128.0 W |
| IOMeter 100% Seq 50% Reads [12 VMs] | 142.8 W |
| IOMeter 100% Random 8K 70% Reads [12 VMs] | 131.2 W |
Note that we were able to subject the NAS to access from twelve different Windows clients at less than 13 W per client. This sort of power efficiency is simply not attainable in a non-virtualized environment. We conclude the piece with a table summarizing the build.
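The per-client figure follows directly from the measured numbers above. A quick sketch of the arithmetic (dividing the total load power evenly across the twelve VMs is our own simplification, since the host's idle draw is shared):

```python
# Per-client power estimate from the measured wall-socket numbers above.
# Dividing total draw by 12 gives a conservative upper bound per client,
# since the host's idle consumption (118.9 W) is baked into every figure.
NUM_CLIENTS = 12

workloads = {
    "100% Seq 100% Reads": 146.7,
    "60% Random 65% Reads": 128.0,
    "100% Seq 50% Reads": 142.8,
    "100% Random 8K 70% Reads": 131.2,
}

for name, watts in workloads.items():
    per_client = watts / NUM_CLIENTS  # twelve Windows VMs acting as clients
    print(f"{name}: {per_client:.1f} W per client")
```

Even the heaviest workload (146.7 W total) works out to about 12.2 W per client, which is where the sub-13 W figure comes from.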
**2012 AnandTech NAS Testbed Configuration**

| Component | Part |
|-----------|------|
| Motherboard | Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB |
| CPU | 2 x Intel Xeon E5-2630L |
| Coolers | 2 x Dynatron R17 |
| Memory | G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8 x 8GB) CAS 10-10-10-30 |
| OS Drive | OCZ Technology Vertex 4 128GB |
| Secondary Drive | OCZ Technology Vertex 4 128GB |
| Other Drives | 12 x OCZ Technology Vertex 4 64GB (offline in the host OS) |
| Network Cards | 3 x Intel ESA I-340 Quad-GbE Port Network Adapter |
| Chassis | SilverStoneTek Raven RV03 |
| PSU | SilverStoneTek Strider Plus Gold Evolution 850W |
| OS | Windows Server 2008 R2 |
Thank You!
We thank the following companies for making our NAS testbed build a reality:
- Thanks to Intel for the Xeon E5-2630L CPUs and the ESA I-340 quad port network adapters
- Thanks to Asus for the Z9PE-D8 WS dual LGA 2011 workstation motherboard
- Thanks to Dynatron for the R17 coolers
- Thanks to G.Skill for the RipjawsZ 64GB DDR3 DRAM kit
- Thanks to OCZ Technology for the two 128GB Vertex 4 SSDs and twelve 64GB Vertex 4 SSDs
- Thanks to SilverStone for the Raven RV03 chassis and the 850W Strider Gold Evolution PSU
What are readers looking for in terms of multi-client scenario testing in NAS reviews? We are open to feedback as we look to expand our coverage in this rapidly growing market segment.
74 Comments
ganeshts - Thursday, September 6, 2012 - link
Thanks for unearthing that one. Fixed now.

ypsylon - Thursday, September 6, 2012 - link
14 SSDs. I know they are only there to simulate separate clients, but to be honest this whole test is ultimately meaningless. No reasonable business (not talking about a 'man with a laptop' kind of company) will entrust crucial data to SSDs (in particular, non-industry-standard SSDs). Those disks are far too unreliable, and HDDs trounce them in that category every time. Whether you like it or not, HDDs are still here and I'm absolutely certain that they will outlive SSDs by a fair margin. Running a business myself, and thank you very much, HDDs are the only choice: RAID 10, 6, or 60 depending on the job. Bloody SSDs, hate those to the core (tested). Good for laptops or for geeks benching systems 24/7, not for serious work.

ypsylon - Thursday, September 6, 2012 - link
Dang, 12 not 14, ha, ha.

mtoma - Thursday, September 6, 2012 - link
If you love the reliability of HDDs so much, I must ask you: what SSD brand has failed you? Intel? Samsung? You know, there are statistics that show Intel and Samsung SSDs are much more reliable 24/7 than many enterprise HDDs. I mean, on paper the enterprise HDDs look great, but in reality they fail more than they should (in a large RAID array, vibration is a main concern). After all, the same basic technology applies as in regular HDDs. On top of that, some (if not all) server manufacturers put refurbished HDDs in new servers (I have seen IBM doing that and I was terrified). Perhaps this is not a widespread practice, but it is truly terrifying. So, pardon me if I say: to hell with regular HDDs. Buy enterprise-grade SSDs; you get the same 5-year warranty.
extide - Thursday, September 6, 2012 - link
Dude, you missed the point ENTIRELY. The machine they built is to TEST NASes. They DID NOT BUILD A NAS.

Wardrop - Saturday, September 8, 2012 - link
I can't work out whether this guy is trolling or not. A very provocative post without really any detail.

AmdInside - Thursday, September 6, 2012 - link
Isn't Win7 x64 Ultimate a little too much for a VM? Would be nice to see videos.

ganeshts - Thursday, September 6, 2012 - link
We wanted an OS which would support both IOMeter and Intel NASPT. Yes, we could have gone with Windows XP, but the Win 7 installer USB drives were on the top of the heap :)

AmdInside - Thursday, September 6, 2012 - link
Thanks

zzing123 - Thursday, September 6, 2012 - link
Hi Ganesh - Thanks for taking my post from a few articles back to heart regarding NAS performance when fully loaded, as it begins to provide some really meaningful results. I have to agree with some of the other posters' comments about the workload, though. Playing a movie on one client, copying on another, running a VM from a third, and working on docs through an SMB share on a fourth would probably be a more meaningful workload in a prosumer's home.
In light of this, might it be an idea to add a new benchmark to AnandTech's Storage Bench that measures all these factors?
In terms of your setup, there's a balance to be struck. I really like the concept of using 12 VMs to replicate a realistic environment the way you can. However, when an office has 12 clients, they're probably using a proper file server or multiple NAS units. 3-4 clients is probably the most typical setup in a SOHO/home environment.
10GbE testing is missing, and a lot of NAS units are beginning to ship with 10GbE. With switches like the Cisco SG500X-24 also supporting 10GbE and slowly becoming more affordable, 10GbE is surely becoming more relevant. 1 SSD and 1 GbE connection isn't going to saturate it - 10 will, and that is certainly meaningful in a multi-user context, but this is AnandTech. What about absolute performance?
How about adding a 13th VM that leashes together all 12 SSDs and aggregates all 12 I340 links to provide a beast of RAIDed SSDs and 12 GbE worth of connectivity (the 2 extra connections should smoke out net adapters that aren't performing to spec as well)?