Original Link: http://www.anandtech.com/show/8366/iosafe-1513-review-a-disasterresistant-synology-ds1513
ioSafe 1513+ Review: A Disaster-Resistant Synology DS1513+
by Ganesh T S on August 13, 2014 7:30 AM EST
Introduction and Testbed Setup
The emergence of the digital economy has brought to the fore the importance of safeguarding electronic data. The 3-2-1 backup strategy involves keeping three copies of all essential data, spread over at least two different devices, with at least one copy off-site or disaster-resistant in some way. However, it is almost impossible to keep copies of large, frequently updated data sets current with a purely off-site backup strategy. This is where companies like ioSafe come in with their lineup of fire- and waterproof storage devices. We have already reviewed the ioSafe SoloPRO (an external hard drive in a disaster-resistant housing) as well as the ioSafe N2 (a 2-bay Marvell-based NAS with similar disaster protection).
External hard drives are good enough for daily backups, but entirely unsuitable for large, frequently updated data sets. The latter scenario calls for a network attached storage unit that provides high availability over the local network. The SoloPRO's chassis and hard drive integration strategy made it impossible for end users to replace the hard disk while retaining the disaster-resistance characteristics. A disaster-resistant RAID-1 NAS with hot-swap capability was needed, and the ioSafe N2 / 214 was launched to address these issues. However, with growing data storage requirements amongst SMB and enterprise users, ioSafe found a market need for disaster-resistant NAS units that support expansion capabilities in addition to a large number of drive bays. The ioSafe 1513+ serves to fulfill those requirements.
ioSafe partnered with Synology for the N2 NAS (which was later rebranded as the ioSafe 214). The partnership continues for the ioSafe 1513+, a disaster-resistant version of the Synology DS1513+. The main unit has five bays, but up to two ioSafe N513X expansion chassis can be connected to make 15 bays available in total. Obviously, the N513X chassis is also disaster-resistant. We got our initial look at the ioSafe 1513+ at CES earlier this year. As a recap, the specifications of the unit are provided in the table below.
|ioSafe 1513+ Specifications|
|Processor||Intel Atom D2700 (2C/4T @ 2.13 GHz)|
|RAM||2 GB DDR3 RAM|
|Drive Bays||5x 3.5"/2.5" SATA 6 Gbps HDD / SSD (Hot-Swappable)|
|Network Links||4x 1 GbE|
|External I/O Peripherals||4x USB 2.0, 2x USB 3.0, 2x eSATA|
|VGA / Display Out||None|
|Full Specifications Link||ioSafe 1513+ Specifications|
The ioSafe 1513+ review unit came in a 70 lb. package. Apart from the main unit (which has the PSU built in), we had an Allen key with a magnetic holder for it, a U.S. power cord and a single 6 ft. network cable.
Interesting aspects to note are the hot-swappable fans, the rubber gasket around the waterproof door for the drive chamber and the faceplate on the underside that allows a SO-DIMM module to be added to augment the DRAM in the unit. The drive caddies also have holes for mounting 2.5" drives, addressing a minor complaint that we had in the ioSafe N2 review. The fanless motherboard is mounted at the base of the unit in a separate compartment under the fire-/waterproof chamber for the drives.
Testbed Setup and Testing Methodology
The ioSafe 1513+ can take up to five drives. Users can opt for JBOD, RAID 0, RAID 1, RAID 5, RAID 6 or RAID 10 configurations. We benchmarked the unit in RAID 5 with five Western Digital WD4000FYYZ Re drives as the test disks. Even though our review unit came with five Toshiba MG03ACA200 2 TB enterprise drives, we opted to benchmark with the WD Re drives to keep the numbers consistent when comparing against previously evaluated NAS units. The four ports of the ioSafe 1513+ were link-aggregated using 802.3ad LACP to create a 4 Gbps link. The jumbo frames setting, however, was left at the default 1500 bytes. Our testbed configuration is outlined below.
|AnandTech NAS Testbed Configuration|
|Motherboard||Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB|
|CPU||2 x Intel Xeon E5-2630L|
|Coolers||2 x Dynatron R17|
|Memory||G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x8GB) CAS 10-10-10-30|
|OS Drive||OCZ Technology Vertex 4 128GB|
|Secondary Drive||OCZ Technology Vertex 4 128GB|
|Tertiary Drive||OCZ Z-Drive R4 CM88 (1.6TB PCIe SSD)|
|Other Drives||12 x OCZ Technology Vertex 4 64GB (Offline in the Host OS)|
|Network Cards||6 x Intel ESA I-340 Quad-GbE Port Network Adapter|
|Chassis||SilverStoneTek Raven RV03|
|PSU||SilverStoneTek Strider Plus Gold Evolution 850W|
|OS||Windows Server 2008 R2|
|Network Switch||Netgear ProSafe GSM7352S-200|
We thank the following companies for helping us out with our NAS testbed:
- Thanks to Intel for the Xeon E5-2630L CPUs and the ESA I-340 quad port network adapters
- Thanks to Asus for the Z9PE-D8 WS dual LGA 2011 workstation motherboard
- Thanks to Dynatron for the R17 coolers
- Thanks to G.Skill for the RipjawsZ 64GB DDR3 DRAM kit
- Thanks to OCZ Technology for the two 128GB Vertex 4 SSDs, twelve 64GB Vertex 4 SSDs and the OCZ Z-Drive R4 CM88
- Thanks to SilverStone for the Raven RV03 chassis and the 850W Strider Gold Evolution PSU
- Thanks to Netgear for the ProSafe GSM7352S-200 L3 48-port Gigabit Switch with 10 GbE capabilities.
- Thanks to Western Digital for the five WD Re hard drives (WD4000FYYZ) to use in the NAS under test.
Chassis Design and Hardware Platform
The Synology DS1513+ has been around for more than a year now. ioSafe announced the disaster-resistant version back at CES and tentatively set the shipment date for March. However, it wasn't until late July that the design was finally perfected. Compared to the SoloPRO and the ioSafe 214 platforms, the 1513+ is quite different when it comes to power consumption and thermal requirements. Tackling the heat dissipation was one of the main challenges faced by ioSafe in the product development process.
ioSafe uses three main patented technologies in disaster-proofing the 1513+:
- HydroSafe: Waterproofing by placement of the drives in a metal cage sealed with rubber gaskets.
- DataCast: Fireproofing by surrounding the drive cage with a super-saturated gypsum structure.
- FloSafe: Vents in the gypsum structure that allow for cooling during normal operation.
ioSafe has a technology brief explaining how these are applied in the 1513+. It is reproduced below:
In traditional NAS units / storage arrays, the fans are arranged so that air flows across the surface of the drives in order to cool them down. This is not directly possible in the ioSafe NAS designs because the hard drives sit inside a sealed waterproof chamber. Ambient air is not meant to enter the waterproof chamber; instead, it flows through the fireproof door and over the outside of the extrusion. For cooling purposes, the design relies on a combination of conduction (from the drives to the extrusion) and convection (from the extrusion to the ambient air flowing over it). Note that the waterproof front door of the drive cage as well as the fireproof front face are essential parts of the cooling mechanism. Without these, the airflow across the serrated drive cage might not be enough for the fans to draw the heat away.
On the software front, the ioSafe 1513+'s Synology DSM is based on a Linux kernel (v 3.2.40). The interesting aspects of the hardware platform can be gleaned after accessing the unit over SSH.
Note that the unit has four Intel I210 GbE NICs connected via PCIe. The USB 3.0 ports come from an Etron EJ168 PCIe x1 to 2x USB 3.0 bridge, while a SiI 3132 PCIe x1 to 2x eSATA host controller with port multiplier support enables the two eSATA ports (to which the expansion chassis units get attached). Since the system utilizes the ICH10 I/O controller hub (this is the standard Cedarview storage platform that Intel promoted a few generations back), all the SATA ports for the drive bays come off the hub without the need for any bridge chips.
Our review unit came with the drives pre-initialized in a SHR volume (1-disk fault tolerance). ioSafe has a special wallpaper for the web UI, but, other than that, there is no customization - all DSM features that one might get after initializing the unit from a diskless configuration are available in the pre-installed version as well.
We have covered DSM 5.0's setup and usage impressions in our recent DS214play and DS414j reviews. There is not much point in rehashing the same excellent setup and usage experience. That said, each of those reviews concentrated on a particular DSM aspect, and this review will be no different. After the sections presenting the performance numbers, we will take a detailed look at the iSCSI features of DSM 5.0.
Single Client Performance - CIFS & iSCSI On Windows
The single client CIFS and iSCSI performance of the ioSafe 1513+ was evaluated on the Windows platform using Intel NASPT and our standard robocopy benchmark. This was run from one of the virtual machines in our NAS testbed. All data for the robocopy benchmark on the client side was put in a RAM disk (created using OSFMount) to ensure that the client's storage system shortcomings wouldn't affect the benchmark results. It must be noted that all the shares / iSCSI LUNs are created in a RAID-5 volume.
The market doesn't have too many 5-bay NAS units. In fact, the only other 5-bay NAS unit that we have evaluated before is the LaCie 5big NAS Pro from early 2013. That unit was also based on the Intel Atom D2700, but carried only two GbE links instead of the four in the ioSafe 1513+. In terms of encryption support, vendors take one of two approaches - encrypting a particular shared folder or encrypting the full volume. Synology only supports folder-level encryption in DSM. The graph below shows the single client CIFS performance for standard as well as encrypted shares on Windows.
We created a 250 GB iSCSI target and mapped it to a Windows VM in our testbed. The same NASPT benchmarks were run and the results are presented below. Note that we also present numbers for the 'Single LUN on RAID' mode, which is supposed to provide the best access performance. It does indeed perform better with write workloads, but loses out to the standard file-based LUNs on read workloads.
Single Client Performance - CIFS & NFS on Linux
A CentOS 6.2 virtual machine was used to evaluate NFS and CIFS performance of the NAS when accessed from a Linux client. We chose IOZone as the benchmark for this case. In order to standardize the testing across multiple NAS units, we mount the CIFS and NFS shares during startup with the following /etc/fstab entries.
//<NAS_IP>/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER cifs rw,username=guest,password= 0 0
<NAS_IP>:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER nfs rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=<NAS_IP>,mountvers=3,mountproto=udp,local_lock=none,addr=<NAS_IP> 0 0
The following IOZone command was used to benchmark the CIFS share:
IOZone -aczR -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_EXCEL_BIN.xls > <NAS_NAME>_CIFS_CSV.csv
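Per the IOZone documentation, the flags in the command above break down as shown in the annotated sketch below; the paths remain placeholders and the command string is only reconstructed here, not executed against a real share.

```shell
# Annotated reconstruction of the IOZone invocation (a sketch):
#   -a          full automatic mode: sweep file sizes and record sizes
#   -c          include close() in the timings (relevant for CIFS / NFS)
#   -z          with -a, also test all record sizes for small files
#   -R          generate an Excel-compatible report
#   -g 2097152  cap the maximum file size at 2 GB (the value is in KB)
#   -U <mnt>    unmount and remount the mount point between tests
#   -f <file>   scratch file on the share to test against
#   -b <xls>    write the binary spreadsheet output to this file
MNT=/PATH_TO_LOCAL_CIFS_MOUNT
CMD="iozone -aczR -g $((2 * 1024 * 1024)) -U $MNT -f $MNT/testfile -b nas_CIFS_EXCEL_BIN.xls"
echo "$CMD"
```

Even with -U remounting the share between tests, records that fit in the client's page cache within a single test can still return cached data, which is why some of the numbers discussed below are skewed.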
IOZone provides benchmark numbers for a multitude of access scenarios with varying file sizes and record lengths. Some of these are very susceptible to caching effects on the client side. This is evident in some of the graphs in the gallery below.
Readers interested in the hard numbers can refer to the CSV program output here.
The NFS share was also benchmarked in a similar manner with the following command:
IOZone -aczR -g 2097152 -U /nfs_test_mount/ -f /nfs_test_mount/testfile -b <NAS_NAME>_NFS_EXCEL_BIN.xls > <NAS_NAME>_NFS_CSV.csv
The IOZone CSV output can be found here for those interested in the exact numbers.
A summary of the bandwidth numbers for various tests, averaged across all file and record sizes, is provided in the table below. As noted previously, some of these numbers are skewed by caching effects. A glance at the actual CSV outputs linked above makes the affected entries obvious.
|ioSafe 1513+ - Linux Client Performance (MBps)|
|*: Number skewed due to caching effect|
Multi-Client Performance - CIFS
We put the ioSafe 1513+ through some IOMeter tests with a CIFS share being accessed from up to 25 VMs simultaneously. The following four graphs show the total available bandwidth and the average response time while the NAS was subjected to different types of workloads through IOMeter. The tool also reports various other metrics of interest such as maximum response time, read and write IOPS, separate read and write bandwidth figures, etc. Some of the interesting aspects from our IOMeter benchmarking run can be found here. The only other device in the graphs below is the LaCie 5big NAS Pro, but note that any direct comparison is rendered moot by the presence of only two network links in that device compared to four in the ioSafe 1513+.
Beyond 12 or so clients, the performance for sequential workloads saturates. However, the powerful nature of the platform is exposed by the fact that even with 25 clients simultaneously stressing the NAS, the response times remain excellent. In addition, the bandwidth numbers for random workloads don't show any signs of degradation even with all clients simultaneously active.
DSM 5.0: iSCSI Features
Synology's DSM is quite feature rich, and it is impossible to do it justice in a single review. Starting with the DS214play, we decided to use each review to focus on one particular aspect of the DSM / firmware ecosystem. The DS214play review dealt with the media aspects and associated apps, and the DS414j review explored the backup and synchronization infrastructure. With its Atom-based platform, comparatively large amount of RAM and four network links, we decided that the ioSafe 1513+ was the perfect candidate to explore the iSCSI features of Synology DSM 5.0.
Readers dabbling in virtualization would be quite familiar with iSCSI and can conveniently skip this sub-section to move on to the discussion of Synology's iSCSI features further down. In layman's terms, iSCSI is a protocol which allows storage on a networked device to be presented as a physical device on another machine. Readers wanting to go into further detail on the various aspects of iSCSI can refer to Wikipedia.
iSCSI has been attracting a lot of interest from NAS vendors in the recent past due to the rising popularity of virtualized infrastructure. Since iSCSI devices appear as physical drives on the machines in which they are mapped, they are ideal for virtual machine hosts. The mapped iSCSI drives can be used as physical disks for the guest machines or simply as a store for the virtual hard disk files mapped to the guests.
In order to understand iSCSI implementations, it is important to be aware of two concepts: the LUN and the target. The LUN (Logical Unit Number) refers to a device that can be addressed via SCSI protocols and supports read/write operations. On the other hand, a target is an endpoint in the SCSI system to which an initiator (or master) connects. A target without a LUN is essentially useless, as there is nothing that can be read or written. A target can have one or more LUNs associated with it.
In simple setups, each target is associated with a single LUN. If there are multiple LUNs associated with a target, each of them will appear as a separate physical disk for the initiator connecting to the target. In the rest of this coverage, we will be proceeding with the single LUN on single target configuration.
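From a Linux client's perspective, the single-LUN-on-single-target flow looks like the sketch below, using the standard open-iscsi tools. The NAS IP and the IQN are placeholders, and the commands are echoed rather than executed so the sequence can be read (and dry-run) without a live target.

```shell
# Hypothetical initiator-side sequence with open-iscsi; RUN=echo makes this a
# dry run - clear it to execute against a real NAS.
NAS_IP=192.168.1.50
TARGET=iqn.2000-01.com.synology:NAS.Target-1   # placeholder IQN
RUN=echo

# 1. Ask the NAS which targets it advertises:
$RUN iscsiadm -m discovery -t sendtargets -p "$NAS_IP"
# 2. Log in to the target; its LUN then shows up as a new /dev/sdX device:
$RUN iscsiadm -m node -T "$TARGET" -p "$NAS_IP" --login
# 3. The new block device can now be partitioned and formatted like any
#    locally attached disk.
```

If the target carried multiple LUNs, step 2 would surface one /dev/sdX device per LUN, which is exactly the multi-LUN case described above.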
Synology's iSCSI Implementation
Synology has a step-by-step guide to get iSCSI services up and running on DSM. In this section, we share our experience and configuration steps with the ioSafe 1513+. After completing all the benchmarks in the previous section, we were left with a SHR volume (1-disk fault tolerance) containing a few shared folders. Upon proceeding to create an iSCSI LUN, we were presented with three choices, two of which were grayed out. The only available option was to create an iSCSI LUN (Regular Files).
Synology implements iSCSI LUN (Regular Files) as files under the path "/volume1/@iSCSITrg/". As seen in the above screenshot, Synology touts dynamic capacity management using Thin Provisioning as the most advantageous aspect. Creating such a LUN accordingly exposes an option for 'Thin Provisioning'. Enabling it allows for capacity expansion at a later stage.
The ioSafe 1513+ carries virtualization logos in its marketing collateral ("VMware READY", "CITRIX ready" and "Windows Server 2012 Certified"). Synology NAS units carrying these logos have an option for 'Advanced LUN Features' that can be turned on or off. If enabled, VMware VAAI - vStorage APIs for Array Integration - support is reported back to the initiator. This offloads storage tasks such as thin provisioning and cloning of virtual disks from the VM host to the NAS itself. On the Windows side, the advanced LUN features include ODX - Offloaded Data Transfers - which keeps data transfers from clogging up the network where possible (e.g., data movement from one part of the NAS volume to another), as well as LUN snapshotting and cloning.
Instead of file-based LUNs, users can also create block-based ones. Synology sums up the implementation difference succinctly in this graphic:
The trick to enable creation of block-based LUNs is to avoid the auto-creation of a SHR volume (or any other RAID volume) when the NAS is initialized. If such volumes exist, it is necessary to remove them. Once all volumes are removed, the two previously grayed-out options become enabled.
There are two ways to proceed with either of these two options. One is to follow the directions given in the LUN creation wizard and make it automatically create a Disk Group in a particular RAID configuration. The other is to create a disk group beforehand and use it while initializing the new LUN. Note that SHR is not available for such groups.
Choosing the Single LUN on RAID option dedicates the full capacity of all the disks in the Disk Group to a single target with one LUN. Such a LUN could potentially be mapped on a VM host, and multiple virtual hard disk files could be created on it. Synology indicates that this configuration provides the best performance.
On the other hand, the Multiple LUNs on RAID option allows the LUN size to be specified during creation. Synology claims this provides optimized performance. The advantage is that multiple LUNs can be created and mapped as separate physical drives for multiple VMs. The left-over space can be used for the creation of a standard volume on which one can have the usual CIFS / NFS shares. An important aspect to note is that the VAAI and ODX capabilities are not available for block-level LUNs, but only for the regular file-based ones.
In the next section, we deal with our benchmarking methodology and performance numbers for various iSCSI configurations in the ioSafe 1513+.
DSM 5.0: Evaluating iSCSI Performance
We have already taken a look at the various iSCSI options available in DSM 5.0 for virtualization-ready NAS units. This section presents the benchmarks for various types of iSCSI LUNs on the ioSafe 1513+. It is divided into three parts, one dealing with our benchmarking setup, the second providing the actual performance numbers and the final one providing some notes on our experience with the iSCSI features as well as some analysis of the numbers.
Hardware-wise, the NAS testbed used for multi-client CIFS evaluation was utilized here too. The Windows Server 2008 R2 + Hyper-V setup can run up to 25 Windows 7 virtual machines concurrently. The four LAN ports of the ioSafe 1513+ were bonded together in LACP mode (802.3ad link aggregation) for a 4 Gbps link. Jumbo frame settings were left at default (1500 bytes) and all the LUN / target configurations were left at default too (unless explicitly noted here).
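For reference, the client-side counterpart of such an 802.3ad bond on a Linux box with sysconfig-style networking might look like the sketch below; the interface names, file locations and IP address are all hypothetical, and the switch ports must be configured for LACP as well.

```shell
# Sketch of a client-side 802.3ad (LACP) bond over four GbE ports. A temp
# directory stands in for /etc/sysconfig/network-scripts so the sketch is
# side-effect free.
CFG=$(mktemp -d)

cat > "$CFG/ifcfg-bond0" <<'EOF'
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
# mode=802.3ad selects LACP; miimon polls link state every 100 ms;
# layer3+4 hashing spreads flows across the member links
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
IPADDR=192.168.1.100
PREFIX=24
ONBOOT=yes
EOF

# Enslave the four physical ports to the bond
for i in 0 1 2 3; do
  cat > "$CFG/ifcfg-eth$i" <<EOF
DEVICE=eth$i
MASTER=bond0
SLAVE=yes
ONBOOT=yes
EOF
done

grep BONDING_OPTS "$CFG/ifcfg-bond0"
```

Note that 802.3ad distributes traffic per flow, so a single client connection still tops out at roughly 1 Gbps; the 4 Gbps aggregate only comes into play with many simultaneous clients, which is exactly what the 25-VM testbed provides.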
Synology provides three different ways to create iSCSI LUNs, and we benchmarked each of them separately. For the file-based LUNs configuration, we created 25 different LUNs and mapped them on to 25 different targets. Each of the 25 VMs in our testbed connected to one target/LUN combination. The standard IOMeter benchmarks that we use for multi-client CIFS evaluation were utilized for iSCSI evaluation also. The main difference to note is that the CIFS evaluation was performed on a mounted network share, while the iSCSI evaluation was done on a 'clean physical disk' (from the viewpoint of the virtual machine). A similar scheme was used for the block-level Multiple LUNs on RAID configuration also.
For the Single LUN on RAID configuration, we had only one target/LUN combination. Synology has an option to enable multiple initiators to map an iSCSI target (for cluster-aware operating systems), and we enabled that. This allowed the same target to map on to all the 25 VMs in our testbed. For this LUN configuration alone, the IOMeter benchmark scripts were slightly modified to change the starting sector on the 'physical disk' for each machine. This allowed each VM to have its own allocated space on which the IOMeter traces could be played out.
The four IOMeter traces were run on the physical disk manifested by mapping the iSCSI target on each VM. The benchmarking started with one VM accessing the NAS. The number of VMs simultaneously playing out the trace was incremented one by one till we had all 25 VMs in the fray. Detailed listings of the IOMeter benchmark numbers (including IOPS and maximum response times) for each configuration are linked below:
Synology's claim of 'Single LUN on RAID' providing the best access performance holds true for large sequential reads. In other access patterns, the regular file-based LUNs perform quite well. However, the surprising aspect is that none of the configurations can actually saturate the network links to the extent that the multi-client CIFS accesses did. In fact, the best number that we saw (in the Single LUN on RAID case) was around 220 MBps, compared to the 300+ MBps that we obtained in our CIFS benchmarks.
The more worrisome fact was that our unit completely locked up while processing the 25-client regular file-based LUNs benchmark routine. On the VMs' side, we found that the target simply couldn't be accessed. The NAS itself was unresponsive to access over SSH or HTTP. Pressing the front power button resulted in a blinking blue light, but the unit wouldn't shut down. There was no alternative but to yank out the power cord in order to shut down the unit. By default, the PowerShell script for iSCSI benchmarking starts with one active VM, processes the IOMeter traces, adds one more VM to the mix and repeats the process - this is done in a loop until all 25 VMs are active and have run the four IOMeter traces. After restarting the ioSafe 1513+, we reran the PowerShell script with the 25-client access alone enabled, and the benchmark completed without any problems. Strangely, this issue happened only for the file-based LUNs; the two sets of block-based iSCSI LUN benchmarks completed without any problems. I searched online and found at least one other person reporting a similar issue, albeit with a more complicated setup using MPIO (multi-path I/O) - a feature we didn't test out here.
Vendors in this market space usually offer only file-based LUNs to tick the iSCSI marketing checkbox. Some vendors reserve block-level LUNs for their high-end models only. So, Synology must be appreciated for making block-based LUNs available on almost all its products. In our limited evaluation, we found that stability could be improved for file-based LUNs. Performance could also do with some improvement, considering that the 4 Gbps aggregated link could not be saturated. With a maximum of around 220 MBps, it is difficult to see how a LUN store on the ioSafe / Synology 1513+ could withstand a 'VM boot storm' (a situation where a large number of virtual machines using LUNs on the same NAS as the boot disk try to start up simultaneously). That said, the unit should be able to handle two or three such VMs / LUNs quite easily.
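A quick back-of-the-envelope calculation, assuming the observed ~220 MBps ceiling is shared evenly across clients, illustrates the boot-storm concern:

```shell
# Rough per-VM bandwidth if N VMs boot off LUNs on the NAS simultaneously,
# given the ~220 MBps ceiling we measured (even sharing assumed).
TOTAL_MBPS=220
PER_VM_25=$((TOTAL_MBPS / 25))   # all 25 testbed VMs at once
PER_VM_3=$((TOTAL_MBPS / 3))     # a more modest three-VM setup
echo "25 VMs: ${PER_VM_25} MBps each; 3 VMs: ${PER_VM_3} MBps each"
```

Under 10 MBps per VM would make for a painful boot storm, while roughly 70 MBps each for two or three VMs is quite workable.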
From our coverage perspective, we talked about Synology DSM's iSCSI features because they are among the more comprehensive offerings in this market space. If readers are interested, we can run our multi-VM iSCSI evaluation on other SMB-targeted NAS units too. It may reveal where each vendor stands when it comes to supporting virtualization scenarios. Feel free to sound off in the comments.
Miscellaneous Aspects and Concluding Remarks
We expect most users to configure the ioSafe 1513+ in RAID-5 for an optimal balance of redundancy and capacity (reflected in ioSafe's decision to ship the units pre-configured with SHR 1-disk fault tolerance). Hence, we performed all our expansion / rebuild testing as well as power consumption evaluation with the unit configured in RAID-5. The disks used for benchmarking (Western Digital WD4000FYYZ) were also used in this section. The table below presents the average power consumption of the unit as well as the time taken for various RAID-related activities.
|ioSafe 1513+ RAID Expansion and Rebuild / Power Consumption|
|Activity||Duration (HH:MM:SS)||Avg. Power (W)|
|Single Disk Init||-||37.9 W|
|JBOD to RAID-1 Migration||11:40:48||49.59 W|
|RAID-1 (2D) to RAID-5 (3D) Migration||38:34:47||59.46 W|
|RAID-5 (3D) to RAID-5 (4D) Expansion||31:33:19||69.95 W|
|RAID-5 (4D) to RAID-5 (5D) Expansion||33:46:59||81.31 W|
|RAID-5 (5D) Rebuild||22:57:12||78.89 W|
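The rebuild row translates into an effective per-drive rate as follows (a rough estimate, assuming decimal terabytes and that the rebuild is bounded by rewriting the single replaced 4 TB member):

```shell
# RAID-5 (5D) rebuild: one replaced 4 TB (4e6 MB, decimal) drive is rewritten
# in 22:57:12 while the other four members are read.
SECS=$((22 * 3600 + 57 * 60 + 12))                     # 82632 seconds
RATE=$(awk -v s="$SECS" 'BEGIN { printf "%.0f", 4e6 / s }')
echo "rebuild: $SECS s => ~$RATE MB/s per drive"
```

Around 48 MB/s is well short of what a 7200 RPM enterprise drive can sustain sequentially, suggesting the rebuild is limited by the platform rather than the disks.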
One of the issues that we would like Synology to address is the duration of RAID expansion / migration / rebuild operations. Though we don't have the full corresponding data from similar (read: 5-bay) competing units, the expansion durations with QNAP NAS units and the rebuild durations with Seagate NAS units are much shorter than the ones in the table above.
Coming to the business end of the review, there are two different aspects of the ioSafe 1513+ to comment upon. The first relates to the software platform from Synology. DSM 5.0 is arguably one of the most full-featured COTS NAS operating systems around. Its popularity is even reflected in the fact that specific viruses have been created for the platform (though this is also an indication of the security weaknesses that Synology has been actively patching in the recent past). The mobile apps and NAS packages extend the functionality of the appliance to provide a comprehensive private cloud experience. SMB features such as virtualization certifications / iSCSI support further enhance the appeal of the ioSafe 1513+ for enterprise users. All the plus points of the Synology 1513+ (including the performance, capacity expansion, high availability, hot-swappable fans, etc.) translate as-is to the ioSafe 1513+.
The second is obviously related to the chassis design that makes the ioSafe 1513+ one of the most unique products we have evaluated. ioSafe continues to impress us by scaling its disaster-proofing techniques to handle more and more complicated scenarios every year. The ioSafe 1513+ is an awesome piece of engineering aimed at solving the very relevant issue of protecting data from disasters. Fire protection is rated for 30 minutes at 1550°F (ASTM E-119), and the unit's drives are kept safe even under 10 ft. of water for 3 days. ioSafe provides the option to purchase a Data Recovery Service (DRS) plan along with the unit. The DRS period can be extended at a simple rate of $2.99/TB/month. The only points that consumers might complain about are the limited 'qualified hard disks' list, the fan noise and the cost of the units. From our evaluation, we believe that the unit is best operated in an air-conditioned server room where fan noise should not be an issue. Some of the qualified hard disks are suitable for usage only at ambient temperatures below 30°C, but neither that nor the cost is likely to be a factor for the SMBs and SMEs that constitute the target market of the ioSafe 1513+.