Hardware Platform & Setup Impressions

The industrial design of the Seagate NAS 4-bay is strikingly different from that of the 2013 Business Storage NAS units. The glossy, fingerprint-magnet surface has been discarded. In its place, we have an almost metallic-looking chassis (which turns out to be completely plastic on closer inspection). The hot-swap bays are easily accessible at the front of the unit. The drive caddies themselves are tool-less, as long as 3.5" drives are used; there are screw slots on the caddies for 2.5" drives. Apart from the main unit, the package consists of the following:

  • 2 m Cat 5e Ethernet cable
  • 90 W (19V @ 4.7A) external power supply with US power cord
  • Getting started guide / warranty card
  • Screws for 2.5" drive installation

Non-diskless configurations come with the drives preconfigured in SimplyRAID (essentially RAID-5), using Seagate NAS HDDs. In terms of chassis I/O, we have a USB 3.0 port on the front (beneath the power and backup buttons) and another on the rear. The rear face also has two network ports, a power inlet, and a recessed reset button.

The unit was also quite straightforward to take apart (except for the presence of some Torx screws on the rear side). On the inside, we were quite happy to see that Seagate had decided to continue LaCie's tradition of installing a Noctua fan (NF-P12). It keeps an excellent thermal profile while remaining on the quiet side. Getting further inside the chassis, we find that Seagate's code name for the NAS platform is 'Dart'. The ARMADA 370 SoC itself sits under a heat sink, but the ASMedia ASM1042A, Marvell 88SE9170, and two Marvell 88E1518 chips are readily visible on the board.

Platform Analysis

The ARMADA 370 block diagram is provided below. The lspci command in the SSH window provides some interesting information, which is corroborated by the teardown pictures in the above gallery.

[admin@Seagate-D4 /]$ lspci -q
00:01.0 PCI bridge: Marvell Technology Group Ltd. Device 7846
00:02.0 PCI bridge: Marvell Technology Group Ltd. Device 7846
01:00.0 IDE interface: Marvell Technology Group Ltd. 88SE9170 PCIe SATA 6Gb/s Controller (rev 12)
02:00.0 USB controller: ASMedia Technology Inc. ASM1042A USB 3.0 Host Controller


The Marvell 88SE9170 takes up one PCIe 2.0 lane and provides support for two SATA ports. The ASM1042A takes up the other PCIe lane and provides support for the two USB 3.0 ports in the system. The 88E1518 chips are Alaska parts which provide 10/100/1000BASE-T PHYs with RGMII support that interface to the two GbE ports in the SoC. This leads us to believe that the ARMADA 370 part being used in the NAS 4-bay is actually the 88F6710, which supports 2x GbE, 2x PCIe 2.0 x1 and 2x SATA simultaneously.
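The lane accounting above can be sanity-checked programmatically. Below is a short, purely illustrative Python sketch (not part of NAS OS; `parse_lspci` is a hypothetical helper) that parses the lspci output quoted earlier and separates the root bridges from the endpoints sitting behind them:

```python
import re

# lspci output as captured on the NAS 4-bay (see above).
LSPCI_OUTPUT = """\
00:01.0 PCI bridge: Marvell Technology Group Ltd. Device 7846
00:02.0 PCI bridge: Marvell Technology Group Ltd. Device 7846
01:00.0 IDE interface: Marvell Technology Group Ltd. 88SE9170 PCIe SATA 6Gb/s Controller (rev 12)
02:00.0 USB controller: ASMedia Technology Inc. ASM1042A USB 3.0 Host Controller
"""

def parse_lspci(text):
    """Return a list of (bus_address, device_class, description) tuples."""
    devices = []
    for line in text.strip().splitlines():
        m = re.match(r"(\S+)\s+([^:]+):\s+(.*)", line)
        if m:
            devices.append(m.groups())
    return devices

devices = parse_lspci(LSPCI_OUTPUT)
# Devices not on bus 00 sit behind a bridge; each occupies one PCIe 2.0 lane here.
endpoints = [d for d in devices if not d[0].startswith("00:")]
for addr, dev_class, desc in endpoints:
    print(addr, "->", desc)
```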

Setup & Usage

After connecting to the network, the unit obtains a DHCP address (even in diskless mode) and can be set up using the web UI at http://<NAS-IP>. The Seagate Network Assistant desktop software is also able to track the available Seagate NAS units (initialized or not) on the network, in case one is not able to determine the unit's DHCP-assigned IP from the router side. Setup proceeds after insertion of at least one disk into the unit. If the unit can reach the Internet, it checks for, downloads, and installs the latest firmware update. Letting the process run its course soon lands us in the minimal UI (as shown in the gallery below).
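
If the Seagate Network Assistant is not handy, a quick TCP probe of the local subnet for the web UI port works in a pinch. A minimal sketch (assuming the NAS answers on port 80; the subnet prefix and timeout are placeholders you would adjust for your own LAN):

```python
import socket

def has_web_ui(host, port=80, timeout=0.5):
    """Return True if `host` accepts a TCP connection on `port`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_subnet(prefix):
    """Probe a /24 subnet (e.g. '192.168.1.') for hosts serving a web UI."""
    return ["http://" + prefix + str(i)
            for i in range(1, 255)
            if has_web_ui(prefix + str(i))]
```

Calling `scan_subnet("192.168.1.")` returns candidate URLs; any hit that shows the NAS OS login page is the unit.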

The Device Manager option takes us back to the familiar NAS OS UI. Even in this minimal interface, we have support for multitasking. The drop-down menu at the top right shows the currently running apps and also provides a way to close them, if necessary. The App Manager lets users install various apps such as BitTorrent Sync, ElephantDrive, WordPress, etc. The selection right now is quite minimal, but that should improve over time.

The Device Manager options are quite similar to what we saw in the Seagate Business Storage 8-bay Rackmount review. The interesting aspects are that SSH access is now available, and we have an option to install SDrive from the UI. These are all welcome changes that can help Seagate achieve feature parity with its competitors.

We also have some screenshots of the storage and volume handling aspects of NAS OS. Our full test process (starting with one drive, migrating to RAID-1, adding another drive to migrate to RAID-5, and adding yet another to expand the RAID-5 volume, using a total of four 4 TB WD Re drives) completed successfully with no issues whatsoever.
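
For reference, the usable capacity at each stage of that migration follows from the standard RAID formulas. A sketch (SimplyRAID's actual figures will differ slightly due to metadata overhead and decimal/binary rounding):

```python
def usable_capacity_tb(drive_tb, n_drives, level):
    """Approximate usable capacity for equal-size drives,
    ignoring filesystem and RAID metadata overhead."""
    if level == "single":
        return drive_tb
    if level == "raid1":
        return drive_tb                    # mirror: capacity of one drive
    if level == "raid5":
        return drive_tb * (n_drives - 1)   # one drive's worth of parity
    raise ValueError("unsupported level: " + level)

# The migration path used in the review, with 4 TB drives:
for level, n in [("single", 1), ("raid1", 2), ("raid5", 3), ("raid5", 4)]:
    print(level, n, "drive(s):", usable_capacity_tb(4, n, level), "TB usable")
```

So each step of the review's migration added a full drive's worth of capacity once the array reached RAID-5.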

We simulated a drive failure by yanking out one of the drives during a data transfer. Operations from the client didn't face any hiccups, and the NAS UI immediately reported the trouble (alerts can be configured). Inserting a new drive allowed the volume to rebuild. The various steps are shown in the gallery above.
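
Rebuilds on units like this are typically bound by drive bandwidth rather than CPU, so the duration is easy to estimate. A back-of-the-envelope sketch (the 100 MB/s sustained rate is an assumption; real rebuilds rarely hold a steady rate under concurrent client load):

```python
def rebuild_hours(drive_bytes, mb_per_s):
    """Hours to re-read/re-write one drive's worth of data
    at a sustained rate of `mb_per_s` megabytes per second."""
    seconds = drive_bytes / (mb_per_s * 1e6)
    return seconds / 3600

# A 4 TB member rebuilt at an assumed sustained 100 MB/s:
print(round(rebuild_hours(4e12, 100), 1), "hours")  # prints 11.1 hours
```

That figure scales linearly with drive size, which is why rebuild windows (and the exposure to a second drive failure) keep growing as capacities climb.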

19 Comments


  • MadMan007 - Thursday, July 24, 2014 - link

    Whoa, you mean a motherboard that costs more than this entire NAS performs better? SHOCKING!
  • JeffFlanagan - Thursday, July 24, 2014 - link

    Thanks Ganesh and others who commented on the cooling.
  • harshw - Thursday, July 24, 2014 - link

    Ganesh, I can attest to the fact that LaCie's fan design on their 5Big boxes for example - isn't really good for thermals. Like I commented earlier, I had a LaCie 5Big NAS Pro consistently overheat here in California where the ambient room temps can easily rise to 80F. I replaced the 120mm Noctua with a 150mm Thermaltake and also cut out the fan grill. It worked but the LaCie RAID config was already broken and no amount of recovery would help.
  • ganeshts - Thursday, July 24, 2014 - link

    That is disappointing to hear. I have the 5big Pro running without issues, but I have it perched on a shelf open on all four sides (basically a garage rack converted to hold stuff) where there is plenty of air-flow. One issue with the 5big box is that you have to make sure the underside is pretty clear (air intake is through that).

    That said, I do have more confidence in the NAS 4-bay, as the air is being pulled in through the front side of the unit. But, then, again, the NAS 4-bay doesn't have a metal body, so some heat loss through conduction will be missed.
  • beginner99 - Friday, July 25, 2014 - link

    With these huge RAID-Rebuild times greatly increasing the risk of a second drive failure, wouldn't it make sense to add a dedicated chip that speeds this up? This would even help during normal operation?
  • Zan Lynx - Friday, July 25, 2014 - link

    As far as I know, no current hardware is limited by the CPU when doing RAID rebuild. It is always the amount of time that it takes to read and write so many terabytes of data.

    Now maybe if the drive bays were fitted with SSDs it would be a problem.
  • jabber - Friday, July 25, 2014 - link

    From looking at recent tests and from my own kit at home it really looks like you need around 500MHz of ARM CPU power per 20-22MBps of data pumped down the cable.
  • Zan Lynx - Friday, July 25, 2014 - link

    Maybe the NAS builders should start using AMD Bobcat. My little home server built on a E-350 can fill a gigabit Ethernet link.
  • wintermute000 - Sunday, July 27, 2014 - link

    I agree. heck even Bill us the cost difference, what 50 more or whatever for a real cpu like a celeron
