Unboxing and Setup Impressions

The Synology DS1812+ package weighs in at 14 lbs (approx. 6.3 kg), with the diskless unit itself coming in at 11.5 lbs (5.1 kg). The unit has a built-in PSU and a desktop form factor with dimensions of 6.2" x 13.4" x 9.2". Along with the main unit, the package consists of the following components:

  • Quick Install guide
  • Software / Utilities CD
  • AC power cable
  • Two Cat5 cables (3ft each)
  • Two keys for the hard drive bays
  • Screws for mounting the hard drives to the bays

The front face of the unit has 8 vertical bays arranged side-by-side with a power button above the center. Status and alert LEDs are to the left of the power button, while the two network status indicators are to the right. On the rear side, two 120mm fans take up most of the space. To the right of the fans, we have four USB 2.0 ports, one eSATA port and the two RJ-45 GbE ports. To the left, we have one more eSATA port and two USB 3.0 ports along with the power plug receptacle. We would have liked some of the ports on the front side of the chassis for easier access. SD card slots are unfortunately relegated to the entry-level models only, and none is available on the DS1812+ (perfectly acceptable, given the target market).

On the software side, the Disk Station Manager (DSM 4.2) is a joy to use. A lot of flexibility is provided, along with a number of interesting features; the downside is that it might be overwhelming for some consumers. SSH access is also available, which provides an avenue to gather information about the unit without a full-length teardown.
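
For the curious, the sketch below shows one way that SSH access can be used to pull basic hardware and volume information from the unit. This is purely illustrative and not part of the review's procedure: SSH has to be enabled in DSM first, and the hostname, account name and choice of commands here are assumptions rather than anything specific to the DS1812+.

```python
# Illustrative only: query a DSM unit over SSH for basic hardware/volume info.
# The hostname and account below are placeholders; adjust for your own setup.
import subprocess

HOST = "admin@diskstation"          # hypothetical address of the NAS
COMMANDS = [
    "uname -a",                     # kernel version and architecture
    "cat /proc/cpuinfo",            # CPU details
    "cat /proc/meminfo",            # installed memory
    "cat /proc/mdstat",             # md (software RAID) status of the volumes
]

for cmd in COMMANDS:
    print(f"### {cmd}")
    # BatchMode makes ssh fail fast instead of prompting when key auth is absent
    result = subprocess.run(
        ["ssh", "-o", "BatchMode=yes", HOST, cmd],
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr)
```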

Synology provides support for Disk Groups (a collection of hard drives in the NAS). Each Disk Group can be configured in either SHR (Synology Hybrid RAID - an automatic RAID management feature providing for single or dual disk failure resiliency) or with a manual RAID level. The available RAID levels for manual configuration are dependent on the number of disks in the disk group. Multiple volumes can be created in a given disk group, but users are forced to use the same RAID level for all the volumes.
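
As a rough illustration of how the RAID level chosen for a disk group affects usable space, the small Python sketch below estimates capacity for equal-sized drives in an 8-bay group. It is not Synology's calculator: filesystem overhead and SHR's handling of mixed drive sizes are ignored.

```python
# Rough usable-capacity estimates for an 8-bay disk group with equal-sized
# drives. Not Synology's algorithm; filesystem overhead and SHR's mixed-size
# handling are ignored.
def usable_tb(num_disks: int, disk_tb: float, level: str) -> float:
    if level == "RAID 0":
        return num_disks * disk_tb          # striping, no redundancy
    if level == "RAID 1":
        return disk_tb                      # all drives mirror one another
    if level == "RAID 5":
        return (num_disks - 1) * disk_tb    # one drive's worth of parity
    if level == "RAID 6":
        return (num_disks - 2) * disk_tb    # two drives' worth of parity
    if level == "RAID 10":
        return (num_disks // 2) * disk_tb   # mirrored pairs, striped
    raise ValueError(f"unknown RAID level: {level}")

# Example: eight 3 TB drives in a single disk group
for level in ("RAID 5", "RAID 6", "RAID 10"):
    print(level, usable_tb(8, 3.0, level), "TB")
```

With eight 3 TB drives, that works out to roughly 21 TB for RAID 5, 18 TB for RAID 6 and 12 TB for RAID 10, before formatting overhead.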

An overview of the various setup options and other available features in the firmware is provided in the gallery below.

Comments

  • saiyan - Sunday, June 16, 2013

    A single hard drive is also a failure waiting to happen, enterprise class or not. When a drive does fail, you don't even get the benefit of 24/7 uptime provided by RAID-5 even when the array is degraded. You don't even have the chance to rebuild your RAID array.

    Seriously, RAID is NOT a backup.
  • SirGCal - Monday, June 17, 2013

    I don't think anyone here ever claimed it was... If they did, I missed it. It's all about keeping data during a repair. Drives won't last forever and 38 hours is a long time to beat on the array to rebuild. On old drives, odds of a second failure go up drastically.
  • Duckhunt2 - Saturday, February 15, 2014

    You building something yourself and someone else buying it ain't the best comparison. You have to set up so many things. Time is money. Who has time to do that?
  • SirGCal - Thursday, June 13, 2013

    Sorry, can't edit comments... But ya, performance on this is weak. One of mine, which cost the same empty but supports Raid 6, can handle transfers much faster including 400M writes, 600M reads, etc. and that's using 5400 RPM consumer grade drives... 700/900M using performance based hardware or more. Mine is a media share server only needing to serve the house, so 4-6 Pure HD sources (all legal, sorry, I do not agree with piracy) at the same time is plenty and this is way more than enough. But this is actually the 'slowest' way I could build it... I went for green since I didn't need any speed in this setup... speed in a real Raid is very easy. Writing is a bit slower, especially in Raid 6 due to the complicated error bit calculations... Reading is butter.
  • santiagoanders - Friday, June 14, 2013

    You have a 10G network to run media sharing? Overkill much?
  • SirGCal - Friday, June 14, 2013

    For short distance, Cat-6 works fine. My whole house is wired Cat-6 for < $800 minus the electrician who was also a friend of mine. So complain all ya like... Just cause you wanna sit there and do wi-fi isn't my fault.
  • santiagoanders - Monday, June 17, 2013

    And how much did you pay for the 10Gbe adapters and switch?
  • Guspaz - Thursday, June 13, 2013

    Is it just me, or is the price of this thing not listed anywhere in the article? Benchmarks are meaningless without a price to give them context.
  • DigitalFreak - Thursday, June 13, 2013

    The 1812+ runs around $999, and the 1813+ is $1099.
  • SirGCal - Friday, June 14, 2013

    To me, that's just too much. I can build the core box itself, FAR more powerful, albeit a bit larger, BUT capable of far more than just sitting there. Can serve as a Subsonic or Plex server, MEDIA stream, Media extender server to Xbox, etc. Even do its own data workload (handbrake/etc. while running OS X or Windows or even Linux. Anything I choose.). It doesn't have to be a dummy box. And I have two of these running 24/7 and they use VERY little power while doing file server duties. If I load up the CPU to do other tasks, then they'll obviously load up a bit more but...

    Anyhow, I can make, right now, say an A6 5400K (3.6G dual-core APU) with 16G 1866 CAS10, a Seasonic 620 modular, Fractal Design insulated (silent) tower to hold 8 fast-swappable bays and a boot drive, an A75 USB3 board, AND the Areca ARC-1223, 6G Raid 6 card. (SAS cards break down to control SATA drives for those thinking about that...) all for $944.94 right now. And that comes with one gigabit NIC already. Add more if ya want, or more whatever... That's the point. Plus these cases are dead silent. I even have the one with windows and you can't hear anything from them. They are a bit more expensive and you could save $50 going with cheaper options though but I was being frivolous. Here's a screenshot of one I just did for a core for a small one at work: http://www.sirgcal.com/images/misc/raid6coreexampl...

    * The whole point is: I don't understand these 'boxes'. They use nonstandard raid for one. Synology Raid. Which also means if it fails you can't put it on a regular RAID controller to retrieve your data. At least that's how they used to be. Perhaps not anymore.

    * But their price is SO high it doesn't make sense. You can build one yourself, better capabilities all the way around in every way, cheaper. And if you ONLY want raid 5, you can knock about $300 off the price tag. Raid 6 is the bulk of that cost... But honestly IMHO necessary with those sizes, and that many drives in the array...

    If you actually have no clue how to build a PC, perhaps... But find your neighborhood nerd to help ya. Still, without RAID 6, these just don't serve a purpose. Get two smaller arrays instead. 4 drives or less for RAID 5. Can these even do hot-spares? At least that would be something... It would be a live drive waiting to take over in case of a failure. Not quite RAID 6, but sorta kinda a bit more helpful, at least for safety. They didn't mention it.
