Introduction and Testing Methodology

The increasing role of computers and electronic communication has resulted in the need to manage large amounts of data. This need is felt across a wide spectrum of consumers, ranging from home users to big enterprises. In the home / small business space, one option is a network attached storage unit that is ready to use out of the box. Sometimes, it is also necessary to have a server with added flexibility to store the data. This is where storage servers based on Windows or Linux distributions come into play. These servers can either be bought as an appliance or assembled in a DIY fashion. Today, we will be looking at a system based on the former approach - the Advatronix Nimbus 400.

Advatronix is a US-based vendor of computing server solutions. We reviewed their Intel Xeon-based Cirrus 1200 solution last year. The Nimbus 400 is a server solution targeting a completely different market segment. Based on the Intel Avoton (Atom) platform, the lineup targets scenarios where the primary need is to act as a file server and the general computing workload is not expected to be taxing. That said, the Nimbus 400 ticks the feature checkboxes that consumers usually expect from enterprise-level file server platforms:

  • ECC RAM
  • IPMI 2.0 with iKVM for headless operation and full management control over the network
  • TPM (Trusted Platform Module) provision
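The IPMI 2.0 support is worth a quick illustration: it lets the server be administered entirely over the network, even when the OS is down or not yet installed. The sketch below uses the stock ipmitool client; the BMC address and credentials are placeholder assumptions, not Advatronix defaults.

```shell
# Out-of-band management over the dedicated IPMI LAN port using ipmitool.
# BMC_HOST and BMC_USER are placeholder assumptions, not vendor defaults.
BMC_HOST=192.168.1.100
BMC_USER=admin

# List sensor readings (temperatures, fan speeds, voltages) from the BMC.
query_sensors() {
    ipmitool -I lanplus -H "$BMC_HOST" -U "$BMC_USER" -P "$1" sdr list
}

# Power-cycle a hung server without physical access.
power_cycle() {
    ipmitool -I lanplus -H "$BMC_HOST" -U "$BMC_USER" -P "$1" chassis power cycle
}
```

Serial-over-LAN (`ipmitool ... sol activate`) and the browser-based iKVM console cover the rest of the headless-operation story.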

The Nimbus 400 provides plenty of customization options.

  • Choice of OS: Diskless w/o OS, CentOS 7.1, FreeNAS, Windows Home Server 2011, Windows Server 2012 R2 Essentials and Windows Server 2012 R2 Standard
  • SoC: Intel Atom C2550 (4C/4T @ 14W TDP) or Intel Atom C2750 (8C/8T @ 20W TDP)
  • DRAM: 2GB to 64GB of DDR3 ECC RAM @ 1600 MHz
  • PSU: 250W or 660W Modular Gold Rated
  • Fixed / OS Drives: 1x or 2x RAID-1 160GB SSD
  • Data Drives: 1x - 4x 3.5" NAS Drives (2, 3 or 4TB) or 4x 3.5" Enterprise Drives (2 or 4TB)

The table below compares our review configuration against other systems that we have evaluated using the same methodology (and hence serve as comparison points).

Comparative File Server Configurations

Aspect                      | Advatronix Nimbus 400                                                                     | ASRock Rack C2750D4I + U-NAS NSC-800
Form Factor                 | 4-bay mini-tower / mITX motherboard                                                       | 8-bay mini-tower / mITX motherboard
Platform                    | Intel Avoton C2550                                                                        | Intel Avoton C2750
CPU Configuration           | 4C/4T Silvermont x86 cores, 2 MB L2, 14W TDP, 2.4 GHz (Turbo: 2.6 GHz)                    | 8C/8T Silvermont x86 cores, 4 MB L2, 20W TDP, 2.4 GHz (Turbo: 2.6 GHz)
SoC SATA Ports              | 2x SATA III (for two hot-swap bays); 4x SATA II (none used)                               | 2x SATA III (for two hot-swap bays); 4x SATA II (for one OS drive)
Additional SATA Ports       | Marvell SE9172 (2x); Marvell SE9230 (4x) (for two fixed OS drives and two hot-swap bays)  | Marvell SE9172 (2x) (for two hot-swap bays); Marvell SE9230 (4x) (for four hot-swap bays)
I/O Ports                   | 4x USB 3.0, 2x USB 2.0, 1x D-Sub, 2x RJ-45 GbE LAN, 1x RJ-45 IPMI LAN, 1x COM1 serial port | 3x USB 2.0, 1x D-Sub, 2x RJ-45 GbE LAN, 1x RJ-45 IPMI LAN, 1x COM1 serial port
Expansion Slots             | 1x PCIe 2.0 x8 (used by Renesas uPD720201 USB 3.0 host controller)                        | 1x PCIe 2.0 x8 (unused)
Memory                      | 4x 4GB DDR3-1333 ECC UDIMM (Kingston KVR13E9/8HM, 9-9-9-24)                               | 2x 4GB DDR3-1333 ECC UDIMM (Samsung M391B5273DH0-YH9)
Data Drives                 | 4x OCZ Vector 128 GB                                                                      | 8x OCZ Vector 128 GB
Chassis Dimensions          | 270mm x 200mm x 302mm                                                                     | 316mm x 254mm x 180mm
Power Supply                | 250W internal PSU                                                                         | 400W internal PSU
Diskless Price (when built) | USD 767                                                                                   | USD 845

Evaluation Methodology

A file server can be used for multiple purposes, unlike a dedicated NAS. Evaluating a file server with our standard NAS testing methodology wouldn't do justice to the eventual use-cases and would tell the reader only part of the story. Hence, we adopt a hybrid approach in which the evaluation is divided into two parts: one as a standalone computing system, and the other as a storage device on a network.

In order to get an idea of the performance of the file server as a standalone computing system, we boot up the unit with a USB key containing an Ubuntu-on-the-go installation. The drives in the bays are configured in an mdadm RAID-5 array. Selected benchmarks from the Phoronix Test Suite (i.e., those relevant to the usage of a system as a file server) are processed after ensuring that any test utilizing local storage (disk benchmarks, in particular) points to the mdadm RAID-5 array. Usage of the Phoronix Test Suite allows readers to have comparison points for the file server against multiple systems (even those that haven't been benchmarked by us).
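As a rough sketch of that preparation step, the commands below build the array and point Phoronix Test Suite disk tests at it. The device names, mount point, and choice of pts/fio as the representative disk benchmark are assumptions for illustration, not our exact procedure.

```shell
# Sketch: configure the four bay drives as an mdadm RAID-5 array and point
# Phoronix Test Suite disk benchmarks at it. Device names and the mount
# point are assumptions for illustration.
create_array() {
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.ext4 /dev/md0
    mkdir -p /mnt/md0 && mount /dev/md0 /mnt/md0
}

run_disk_benchmark() {
    # Overriding the test-install directory makes disk tests exercise the
    # RAID-5 volume rather than the Ubuntu boot key.
    PTS_TEST_INSTALL_ROOT_PATH=/mnt/md0/ phoronix-test-suite benchmark pts/fio
}

# RAID-5 sacrifices one drive's worth of capacity to parity:
usable_gb=$(( (4 - 1) * 128 ))   # 4x 128 GB drives
echo "Usable capacity: ${usable_gb} GB"
```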

As a storage device on a network, there are multiple ways to determine performance. One option would be to repeat all our NAS benchmarks on the system, but that would take too much time for a system that we are already testing as a standalone computer. On the other hand, it is also important to look beyond numbers from artificial benchmarks and see how a system performs in terms of business metrics. SPEC SFS 2014 (https://www.spec.org/sfs2014/) comes to our help here. The benchmark tool is best suited to the evaluation of SANs, but it also helps us see the effectiveness of the file server as a storage node on a network. SPEC SFS 2014 was developed by the IOzone folks and covers evaluation of the filer in specific application scenarios: the number of virtual machines that can be run off the filer, the number of simultaneous databases, the number of video streams that can be simultaneously recorded, and the number of simultaneous software builds that can be processed.

Our SPEC SFS 2014 setup consists of an SMB share on the file server under test, connected over an Ethernet network to our NAS evaluation testbed outlined below. Further details about the SPEC SFS 2014 workloads will be provided in the appropriate section.
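For readers unfamiliar with the tool, a SPEC SFS 2014 run is driven by a plain-text rc file that names the workload and the load points to sweep. The fragment below is an illustrative sketch only: the share name, paths, and load values are assumptions, not our actual configuration.

```
# Illustrative sfs_rc fragment for a VDA (video recording) workload run
# against an SMB share; all values here are placeholders.
BENCHMARK=VDA
LOAD=1                 # starting number of concurrent video streams
INCR_LOAD=1            # load-point step size
NUM_RUNS=10            # number of load points to process
CLIENT_MOUNTPOINTS=client1:\\nimbus400\testshare
EXEC_PATH=C:\SPECsfs2014\binaries\windows\netmist.exe
USER=DOMAIN\sfsuser
```

The tool steps through the load points and reports the highest load at which the filer still meets the workload's latency criteria, which is what makes it a business-metric benchmark rather than a raw-throughput one.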

AnandTech NAS Testbed Configuration

Motherboard: Asus Z9PE-D8 WS (dual LGA2011, SSI-EEB)
CPUs: 2x Intel Xeon E5-2630L
Coolers: 2x Dynatron R17
Memory: G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x 8GB), CAS 10-10-10-30
OS Drive: OCZ Technology Vertex 4 128GB
Secondary Drive: OCZ Technology Vertex 4 128GB
Tertiary Drive: OCZ Z-Drive R4 CM88 (1.6TB PCIe SSD)
Other Drives: 12x OCZ Technology Vertex 4 64GB (offline in the host OS)
Network Cards: 6x Intel ESA I-340 quad-GbE port network adapters
Chassis: SilverStoneTek Raven RV03
PSU: SilverStoneTek Strider Plus Gold Evolution 850W
OS: Windows Server 2008 R2
Network Switch: Netgear ProSafe GSM7352S-200

The above testbed runs 10 Windows 7 VMs simultaneously, each with a dedicated 1 Gbps network interface. This simulates a real-life workload of up to 10 clients for the NAS being evaluated. All the VMs connect to the network switch to which the NAS is also connected (with link aggregation, as applicable). The VMs generate the NAS traffic for performance evaluation.

Thank You!

We thank the companies that helped us out with components for our NAS testbed.

18 Comments

  • lwatcdr - Wednesday, August 12, 2015 - link

    I would really like to see some data using FreeNAS and Windows as well as Ubuntu. With the cost of drives so low, both of the NAS systems offer a huge amount of storage for home or business.
  • wintermute000 - Thursday, August 13, 2015 - link

    Nowhere near enough free RAM for FreeNAS. 1GB per TB is the recommendation. With modern drives, even at RAIDZ1, you do the math.
  • wintermute000 - Thursday, August 13, 2015 - link

    Sorry, no idea why, but I read it as 4GB not 4x4GB, my bad.
  • Brutalizer - Sunday, August 16, 2015 - link

    For ZFS, it is recommended to use 1GB RAM per 1TB of disk space - but only if you use deduplication. If not, 4GB in total is enough. ZFS has a very efficient disk cache; if you only have 2GB RAM in your server you will not have any disk cache, which is no big deal actually. I myself used a 1GB RAM server with Solaris and ZFS for a year without problems. There is a lot of ignorance about ZFS. Try it out yourself with a 2-4GB RAM server and see that it will work fine.
  • DanNeely - Wednesday, August 12, 2015 - link

    Aside from the front panel having USB3, this case looks identical to one I bought from Chenbro a few years ago for my DIY NAS. I'd be a bit concerned about the quality. The plastic locking half of the handle on one of the drive sleds popped when I pulled it out a month or two ago to add an additional drive to my setup. The metal half was still usable to pull the drive out and it appears to be held in place securely from the rear; but the normal latch mechanism is obviously not working any more.
  • Anonymous Blowhard - Wednesday, August 12, 2015 - link

    I'm concerned about the presence of a Marvell SATA controller + FreeBSD-based OS like FreeNAS, since there have been many reports of drives performing poorly or dropping out of ZFS pools under high I/O.
  • bobbozzo - Friday, August 14, 2015 - link

    Remove USB card and insert IBM 1015 RAID card. Hope cabling is compatible.
  • SirGCal - Wednesday, August 12, 2015 - link

    I personally have two 8-drive arrays, one RAID6 and one RAIDZ2, both running Ubuntu. Both of them also run swifter than this. Curious.
  • Ratman6161 - Wednesday, August 12, 2015 - link

    Data is a little skewed because the ASRock is using an 8C C2750 vs the 4C in the Advatronix - so anything CPU-sensitive is not really fair, particularly since the Advatronix is available with the 8C CPU.
    That said, I sort of doubt many people will be running DBs on this sort of machine. And the other tests seem to indicate that the faster CPU doesn't really buy you anything.

    And... 4 SSDs in a RAID 5? The cost per GB for doing things that way is very high compared to spinning disks, and if it's being used in a home setting the performance of the SSDs is not needed. Comparing prices online, I could get 4x WD Black 750 GB drives for almost $100 cheaper than the 4x 128 GB Vectors. Take a look at the read performance of the two units. Theoretically the ASRock with an 8-drive array should get better read speed than the Nimbus 400 with only 4, but it doesn't, leading me to believe that a lot of the SSDs' performance is wasted. Spinning disks are probably the most cost-effective way to go with these.
  • lwatcdr - Wednesday, August 12, 2015 - link

    Encryption uses up a good amount of CPU time.
