Single Client Performance - CIFS and iSCSI on Windows

The single client CIFS performance of the Synology RS10613xs+ was evaluated on the Windows platform using Intel NASPT and our standard robocopy benchmark. This was run from one of the virtual machines in our NAS testbed. All data for the robocopy benchmark on the client side was placed in a RAM disk (created using OSFMount) to ensure that shortcomings of the client's storage subsystem wouldn't affect the benchmark results. It must be noted that all the shares / iSCSI LUNs are created in a RAID-5 volume.
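
As a point of reference, the robocopy throughput figures reduce to a simple bytes-transferred-over-elapsed-time calculation. The sketch below mimics that measurement with a plain Python copy from the RAM disk to the mapped CIFS share; it is illustrative only (not the actual robocopy / NASPT harness), and the R:\ and Z:\ paths are hypothetical placeholders.

```python
# Illustrative throughput measurement: time a bulk copy from the OSFMount
# RAM disk to the mapped CIFS share and report MB/s. Paths are placeholders.
import shutil
import time
from pathlib import Path

SRC = Path(r"R:\testset")          # RAM-disk folder holding the test files
DST = Path(r"Z:\robocopy_target")  # CIFS share on the NAS, mapped to Z:

def dir_size_bytes(root: Path) -> int:
    """Total size of all files under root."""
    return sum(p.stat().st_size for p in root.rglob("*") if p.is_file())

total = dir_size_bytes(SRC)
start = time.perf_counter()
shutil.copytree(SRC, DST / SRC.name, dirs_exist_ok=True)
elapsed = time.perf_counter() - start

print(f"Copied {total / 1e6:.0f} MB in {elapsed:.1f} s "
      f"-> {total / 1e6 / elapsed:.1f} MB/s")
```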

Synology RS10613xs+ CIFS Performance - Windows

We created a 250 GB iSCSI target and mapped it on the Windows VM. The same benchmarks were run and the results are presented below.

Synology RS10613xs+ iSCSI Performance - Windows

Encryption Support Evaluation:

Consumers looking for encryption capabilities can opt to encrypt an iSCSI LUN with TrueCrypt or a built-in encryption mechanism in the client OS. However, if requirements dictate that the data must be shared across multiple users / computers, relying on encryption in the NAS itself is the best way forward. Most NAS vendors use the industry-standard 256-bit AES encryption algorithm. One approach is to encrypt only a particular shared folder, while the other is to encrypt the full volume. Some NAS vendors support both approaches in their firmware, but Synology opts only for the former. Details of Synology's encryption strategy can be found in this tutorial.

On the hardware side, encryption support can come in the form of specialized hardware blocks in the SoC (common in ARM / PowerPC based NAS units). In x86-based systems, accelerated encryption support depends on whether the AES-NI instruction set is available on the host CPU (not considering units based on the Intel Berryville platform). Fortunately, the Xeon CPU used in the Synology RS10613xs+ does support AES-NI, so we can expect the performance loss from enabling encryption to be minimal.
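
Since DSM is Linux-based, one quick way to confirm that the CPU actually exposes AES-NI is to look for the "aes" flag in /proc/cpuinfo. The snippet below is a minimal sketch, assuming shell (e.g. SSH) access to the NAS or any other Linux host with the same CPU.

```python
# Check whether the CPU advertises AES-NI via the "aes" flag in /proc/cpuinfo.
# Assumes a Linux environment (e.g. an SSH session into DSM).
def has_aes_ni(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "aes" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    print("AES-NI available" if has_aes_ni() else "AES-NI not reported")
```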

We enabled encryption on a CIFS share and repeated our Intel NASPT / robocopy benchmarks. The results are presented in the graph below (with the unencrypted folder numbers included for comparison).

Synology RS10613xs+ Encryption Performance - Windows

As expected, encryption carries almost no performance hit. In a couple of cases, the numbers even seem to favour the encrypted case. This goes to show that the bottleneck in those cases lies on the disk or network side, rather than in the RAID and encryption-related computation on the NAS CPU.

Comments

  • iAPX - Thursday, December 26, 2013 - link

    2000+ MB/s ethernet interface (2x10Gb/s), 10 hard-drives able to deliver at least 500MB/s EACH (grand total of 5000MB/s), Xeon quad-core CPU, and tested with ONE client, it delivers less than 120MB/s?!?
    That's what I expect from a USB 3 2.5" external hard-drive, not a SAN of this price, it's totally deceptive!
  • Ammaross - Thursday, December 26, 2013 - link

    Actually, 120MB/s is remarkably exactly what I would expect from a fully-saturated 1Gbps link (120MB/s * 8 bits = 960Mbps). Odd how that works out.
  • xxsk8er101xx - Friday, December 27, 2013 - link

    That's because the PC only has a gigabit NIC. That's actually what you should expect.
  • BrentfromZulu - Thursday, December 26, 2013 - link

    For the few who know, I am the Brent that brought up Raid 5 on the Mike Tech Show (saying how it is not the way to go in any case)

    Raid 10 is the performance king, Raid 1 is great for cheap redundancy, and Raid 10, or OBR10, should be what everyone uses in big sets. If you need all the disk capacity, use Raid 6 instead of Raid 5, because if a drive fails during a Raid 5 rebuild you lose everything; Raid 6 can survive losing a drive during the rebuild. Rebuilding is a scary process with Raid 5, but with Raid 1 or 10 it is literally copying data from one disk to another.

    Raid 1 and Raid 10 FTW!
  • xdrol - Thursday, December 26, 2013 - link

    From the drives' perspective, rebuilding a RAID 5 array is exactly the same as rebuilding a RAID 1 or 10 array: read the whole disk(s) (or to be more exact, the sectors with data) once, and write the whole target disk once. It is only different for the controller. I fail to see why one is scarier than the other.

    If your drive fails while rebuilding a RAID 1 array, you are exactly as screwed. The only reason R5 is worse here is that you have n-1 disks unprotected while rebuilding, not just one, giving you approximately (negligibly smaller than) n-1 times the chance of data loss.
  • BrentfromZulu - Friday, December 27, 2013 - link

    Rebuilding a Raid 5 requires reading data from all of the other disks, whereas Raid 10 requires reading data from only one other drive. Raid 1 and Raid 10 rebuilds are not complex; Raid 5/6 rebuilding is complex, requires activity from all the other disks, and because of that complexity has a higher chance of failure.
  • xxsk8er101xx - Friday, December 27, 2013 - link

    You take a big hit on performance with RAID 6.
  • Ajaxnz - Thursday, December 26, 2013 - link

    I've got one of these with 3 extra shelves of disks and 1TB of SSD cache.
    There's a limit of 3 shelves in a single volume, but 120TB (3 shelves of 12 x 4TB disks, RAID 5 on each shelf) with the SSD cache performs pretty well.
    For reference, NFS performance is substantially better than CIFS or iSCSI.

    It copes fine with the 150 virtual machines that support a 20-person development team.

    So much cheaper than a NetApp or similar - but I haven't had a chance to test the multi-NAS failover to see if you truly get enterprise-quality resilience.
  • jasonelmore - Friday, December 27, 2013 - link

    well at least half a dozen morons got schooled on the different types of RAID arrays. gg, always glad to see the experts put the "less informed" (okay i'm getting nicer) ppl in their place.
  • Marquis42 - Friday, December 27, 2013 - link

    I'd be interested in knowing greater detail on the link aggregation setup. There's no mention of the load balancing configuration in particular. The reason I ask is that it's probably *not* a good idea to bond 1Gbps links with 10Gbps links in the same bundle unless you have access to more advanced algorithms (and even then I wouldn't recommend it). The likelihood of limiting a single stream to ~1Gbps is fairly good, which may limit overall throughput depending on the number of clients. It's even possible (though admittedly statistically unlikely) that you could limit the entirety of the system's network performance to saturating a single 1GbE connection.
