Hardware and Setup Impressions

The 4-bay variant of the Seagate Business Storage Rackmount series can be purchased diskless, but the 8-bay variant comes with 4TB enterprise-class SATA disks (Constellation ES.3) pre-installed. Available drive configurations include 8TB (4x2TB), 12TB (4x3TB), 16TB (4x4TB) and 24TB (8x3TB). To accommodate eight hot-swappable drives, the unit adopts a sliding-tray design, with the front part of the top cover able to be propped open. The eight bays are laid out flat in two rows, and the hard drives are mounted on caddies designed specifically for this configuration. Due to the nature of the chassis, and the bundled hard drives in certain configurations, the storage density offered by the Seagate Business Storage 8-Bay unit is simply unparalleled.

In terms of hardware design, the unit is top-notch. The sliding-rail design and the tool-less cooling system access, as well as the front USB port and sound-suppression buttons, make it easy to access and service the unit when necessary. If we had to find scope for improvement, it would be in the caddy design: the layout of the bays makes caddy handling less than intuitive, and the caddies could do with some vibration-dampening mechanism. While hot-swapping disks, we could feel the vibration from the other spinning disks through the base of the chassis frame, and that did leave us a bit worried. However, considering that the Constellation ES.3 disks have RV (Rotational Vibration) tolerance built-in, it should not be much of a concern.

In terms of platform design, we can see that six of the bays are serviced by on-board SATA ports, while the remaining two are connected via cables from a riser card attached to the PCIe slot. The motherboard is a custom Asus design, and without SSH access, we were unable to determine the chipset used on the ECC-enabled Ivy Bridge board.

The OS on the unit (Seagate NAS OS) is an evolved version of the LaCie NAS OS that we evaluated in the 5big NAS Pro review. The UI is more streamlined, but some features (such as encryption support and volume expansion above 12 TB) have been cut. Starting the unit in diskless mode involves booting it with the rescue USB key inserted. The unit then obtains a DHCP address, after which the setup process can be completed via the web interface.

Most of the OS features are similar to the LaCie NAS OS. The unit comes with support for the secure Wuala Cloud Storage. We have already covered the capabilities of Wuala's Hybrid Cloud in our LaCie 5big NAS Pro review, so we will not address that aspect further in this review.

A quick overview of the available options in the web interface is provided in the gallery below.

Users can be added (along with an optional e-mail address for Wuala / hybrid cloud access). Shares can be set up with restricted access protocols; for example, a share can be configured to be accessible only over NFS and not SMB. The OS also features a built-in download client which supports BitTorrent as well as direct HTTP / FTP downloads. Backup jobs can be configured through a front-end for rsync. It is also possible to set up the NAS to act as an rsync destination for other compatible clients (the backups go to a default Net Backup directory). One of the nice features in NAS OS is the ability to restrict access protocols to particular network links. Seagate also provides a dynamic DNS service for accessing the unit over the Internet; it requires forwarding of ports 80 and 443 for HTTP and HTTPS respectively, and is available only on the primary LAN port. Power management (including scheduled power on and off times as well as hard disk sleep configuration) and monitoring features (CPU, chassis fans, S.M.A.R.T., etc.) are present in the OS.
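
As a rough illustration of the rsync-destination feature, a client could push a backup to the unit along the lines of the minimal sketch below, assuming the destination is reachable as a standard rsync daemon module. The NAS address, the module name corresponding to the Net Backup share and the source directory are placeholder assumptions, not values taken from the unit's documentation.

```python
import subprocess

# Placeholder values -- substitute the NAS's actual IP address, the rsync
# module corresponding to its backup share, and the directory to back up.
NAS_HOST = "192.168.1.50"     # assumed address of the Seagate unit
NAS_MODULE = "NetBackup"      # assumed module name for the default Net Backup share
SOURCE_DIR = "/home/user/projects/"

def push_backup(src: str, host: str, module: str) -> None:
    """Mirror a local directory to the NAS acting as an rsync destination."""
    # -a preserves permissions and timestamps, -z compresses in transit,
    # --delete keeps the destination an exact mirror of the source.
    cmd = ["rsync", "-az", "--delete", src, f"rsync://{host}/{module}/"]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    push_backup(SOURCE_DIR, NAS_HOST, NAS_MODULE)
```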

Our usual review methodology for rackmount units uses SSDs, but, considering that Seagate promotes storage density with this solution, our benchmarks were processed with the bundled Seagate Constellation ES.3 4TB drives. The testing usually starts in the diskless mode, with disks being added one-by-one to test out the RAID migration and expansion capabilities. This process went fine for the first four drives. We were able to successfully migrate from a JBOD 4TB volume to a 12 TB RAID-5 volume with four disks. Unfortunately, when adding the fifth disk and trying to expand the existing volume, some OS limitations were exposed.
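
NAS OS does not expose a shell, but it is built on standard Linux md RAID and ext4 underneath, so the expansion step that tripped up the unit corresponds roughly to the sketch below on a generic Linux box. This is purely illustrative of what the appliance automates behind its web UI; the device names are assumptions, and the commands must run as root.

```python
import subprocess

# Assumed device names, for illustration only. On a real system, check
# /proc/mdstat and lsblk first -- these commands modify the array.
ARRAY = "/dev/md0"      # the existing 4-disk RAID-5 data volume
NEW_DISK = "/dev/sde"   # the fifth disk being hot-added

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Add the new disk to the array, then reshape RAID-5 across five members.
run(["mdadm", "--add", ARRAY, NEW_DISK])
run(["mdadm", "--grow", ARRAY, "--raid-devices=5"])

# The reshape proceeds in the background (progress in /proc/mdstat); only
# after it finishes can the filesystem on top be grown to the new size.
```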

With the version that we evaluated (NAS OS 3.2), a volume cannot be expanded by more than 8TB per expansion step. In addition, a volume cannot be expanded to exceed 16TB (despite support for the creation of volumes bigger than 16TB). The limitations are due to the e2fs component in charge of the volume format. Seagate assured us that an upcoming firmware update would resolve this issue. In any case, we moved directly to creating an 8-disk RAID-5 volume for benchmarking. There were no issues in the RAID-5 rebuild process when we replaced one of the eight disks.
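
The behaviour matches a well-known ext4 constraint: filesystems created without the 64bit feature flag (the default with older e2fsprogs releases) cannot be grown past 16TiB, no matter how large the underlying array is. As a hedged illustration, the sketch below checks a volume for that flag on a system where the disks are accessible; the device name is an assumption.

```python
import subprocess

DEVICE = "/dev/md0"  # assumed block device holding the ext4 data volume

# dumpe2fs -h prints the superblock summary, which includes the feature list.
output = subprocess.run(["dumpe2fs", "-h", DEVICE],
                        capture_output=True, text=True, check=True).stdout

for line in output.splitlines():
    if line.startswith("Filesystem features:"):
        features = line.split(":", 1)[1].split()
        if "64bit" in features:
            print("64bit feature present: the volume can be grown past 16TiB.")
        else:
            print("No 64bit feature: the volume cannot be grown beyond 16TiB.")
        break
```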

Comments

  • phoenix_rizzen - Friday, March 14, 2014 - link

    9U for 50 2.5" drives? Something's not right with that.

    You can get 24 2.5" drives into a single 2U chassis (all on the front, slotted vertically). So, if you go to 4U, you can get 48 2.5" drives into the front of the chassis, with room on the back for even more.

    Supermicro's SC417 4U chassis holds 72 2.5" drives (with motherboard) or 88 (without motherboard).

    http://www.supermicro.com/products/chassis/4U/?chs...

    Shoot, you can get 45 full-sized 3.5" drives into a 4U chassis from SuperMicro using the SC416 chassis. 9U for 50 mini-drives is insane!
  • jasonelmore - Saturday, March 15, 2014 - link

    all HDDs have helium
  • ddriver - Friday, March 14, 2014 - link

    LOL, it is almost as fast as a single mechanical drive. At that price - a giant joke. Even if you need that much space, with such slow access this doesn't even qualify for indie professional workstations, much less for the enterprise. With 8 drives in RAID 5 you'd think it would perform at least twice as well as it does.
  • FunBunny2 - Friday, March 14, 2014 - link

    Well, as a short-stroked RAID 10 device, you might be able to get 4TB of SSD speed. With drives of decent reliability, not necessarily the Seagates, you get more TB/$/time than some enterprise SSD. Someone could do the arithmetic?
  • shodanshok - Friday, March 14, 2014 - link

    Mmm, no, SSD speeds are too far away.

    Even considering only rotational delay and entirely discarding seek time (e.g. an extremely short-stroked disk), disk access time remains much higher than an SSD's. A 15K enterprise-class drive needs ~4ms to complete a platter rotation, giving an average rotational delay of ~2ms. Considering that you cannot really eliminate seek time, the resulting access latency of even a short-stroked disk is surely above 5ms.

    And 15K drives cost much more than consumer drives.

    A simple consumer-level MLC disk (e.g. Crucial M500) has a read access latency way lower than 0.05 ms. Write access latency is surely higher, but still way better than a hard disk's.

    So: SSDs completely eclipse HDDs on the performance front. Moreover, with high-capacity (~1TB) higher-grade consumer-level / entry-level enterprise-class SSDs with power-failure protection (e.g. Crucial M500, Intel DC S3500), you can build a powerful array at reasonable cost.
  • ddriver - Sunday, March 16, 2014 - link

    I think he means sequential speed. You need big storage for backups or highly sequential data like raw audio/video/whatever; you will not put random read/write data on such storage. That much capacity needs high sequential speeds. Even if you store databases on that storage, the frequently accessed sets will be cached, and overall access will be buffered.

    SSD sequential performance today is pretty much limited by the controller speed to about ~530 MB/sec. A 1TB WD Raptor drive does over 200 MB/sec in its fastest region, so I imagine that 4 of those would be able to hit SSD speed at tremendously higher capacity and an even better capacity-to-price ratio.
  • shodanshok - Friday, March 14, 2014 - link

    This thing seems too expensive to me. I mean, if the custom Linux-based OS has the limitations explained in the (very nice!) article, it is better to use a general-purpose distro and simply manage everything via LVM. Or even use a storage-centric distribution (e.g. FreeNAS, unRAID) and simply buy a general-purpose PC/server with many disks...
  • M/2 - Friday, March 14, 2014 - link

    $5100 ??? I could buy a Mac mini or a Mac Pro and a Promise2 RAID for less than that! ....and have Gigabit speeds
  • azazel1024 - Friday, March 14, 2014 - link

    I have a hard time wrapping my head around the price.

    Other than the ECC RAM, that is VERY close to my server setup (same CPU for example). Except mine also has a couple of USB3 ports, twice the USB 2 ports, a third GbE NIC (the onboard) and double the RAM.

    Well...it can't take 8 drives without an add on card, as it only has 6 ports...but that isn't too expensive.

    Total cost of building...less than $300.

    I can't fathom basically $300 of equipment being upsold for 10x the price! Even an upsell on the drives in it doesn't seem to justify getting it into that price range.

    Heck, you could put a REALLY nice RAID card into my system, do 7 drives in RAID 5/6 for redundancy using commercial 4TB drives with an SSD as a cache drive, and you'd probably come out at less than half the price, probably with better performance.

    I get that building your own is almost always cheaper, but a $3000 discount is just a wee bit cheaper on a $5000 hardware price tag, official support or no official support.
  • azazel1024 - Friday, March 14, 2014 - link

    I might also add, looking at the power consumption figures: with my system being near identical (other than lacking ECC memory, but with more RAM, more networking connectivity, and WITH disks in it), mine consumes 14W less at idle (21W idle). The RAID rebuild figures for 1-2 disks and 2-3 disks are also a fair amount lower on my server, by more than a 10W difference (mine has 2x2TB RAID 0 right now and a 60GB SSD as the boot drive).

    Also WAY more networking performance. I don't know if the OS doesn't support SMB 3.0, or if AnandTech isn't running any network testing with SMB 3.0 utilized, but with Windows 8 on my server, I am pushing 2x1GbE to the max, or at least I was when my desktop RAID array was less full (I need a new array; my desktop is 80% utilized right now as it is only 2x1TB RAID 0).

    Even looking at some of the below GbE saturation benchmarks, I am pushing a fair amount more data over my links than the Seagate NAS here is.

    With better disks in my server and desktop I could easily patch in the 3rd GbE NIC in the machine to push up over 240MB/sec over the links to the limit of what the drives can do. I realize a lot of SOHO/SMB implementations are about concurrent users and less about maximum throughput, but the beauty of SMB3.0 and SMB Multichannel is...it does both. No limits on per link speed, you can saturate all of the links for a single user or push multiple users through too.

    I've done RAM disk testing with 3 links enabled and SMB Multichannel enabled and saw duplex 332MB/sec Rx AND Tx between my server and desktop. I just don't have the current array to support that, so I leave only the Intel NICs enabled and leave the on-board NICs on the machines disabled.
