Single Client Performance - CIFS and iSCSI on Windows

The single-client CIFS performance of the Synology RS10613xs+ was evaluated on the Windows platform using Intel NASPT and our standard robocopy benchmark. These tests were run from one of the virtual machines in our NAS testbed. All data for the robocopy benchmark on the client side was placed in a RAM disk (created using OSFMount) to ensure that the client's storage subsystem did not skew the benchmark results. Note that all the shares / iSCSI LUNs are created in a RAID-5 volume.
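For readers who want to approximate what the robocopy test measures, the sketch below (in Python) times a copy of a directory tree from a RAM-disk path to a mapped network share and reports the effective throughput. The drive letters and folder names are hypothetical placeholders, not the exact dataset used in our testing, and shutil performs a single-threaded copy whereas robocopy itself is multi-threaded.

    import os
    import shutil
    import time

    # Hypothetical paths: R: is the OSFMount RAM disk, Z: is the mapped CIFS share.
    SRC = r"R:\testsuite"
    DST = r"Z:\robocopy_write_test"

    def dir_size(path):
        """Total size of all files under path, in bytes."""
        total = 0
        for root, _, files in os.walk(path):
            for name in files:
                total += os.path.getsize(os.path.join(root, name))
        return total

    start = time.time()
    shutil.copytree(SRC, DST)   # single-threaded copy; robocopy uses multiple threads
    elapsed = time.time() - start

    mb = dir_size(SRC) / (1024 * 1024)
    print(f"Copied {mb:.1f} MB in {elapsed:.1f} s -> {mb / elapsed:.1f} MB/s")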

Synology RS10613xs+ CIFS Performance - Windows

We created a 250 GB iSCSI target and mapped it on the Windows VM. The same benchmarks were run and the results are presented below.

Synology RS10613xs+ iSCSI Performance - Windows

Encryption Support Evaluation:

Consumers looking for encryption capabilities can opt to encrypt an iSCSI LUN with TrueCrypt or one of the encryption mechanisms built into the client OS. However, if requirements dictate that the data must be shared across multiple users / computers, relying on encryption in the NAS itself is the best way to move forward. Most NAS vendors use the industry-standard 256-bit AES encryption algorithm. One approach is to encrypt only a particular shared folder, while the other is to encrypt the full volume. Some NAS vendors support both approaches in their firmware, but Synology opts only for the former. Details of Synology's encryption strategy can be found in this tutorial.

On the hardware side, encryption support can come in the form of specialized hardware blocks in the SoC (common in ARM / PowerPC based NAS units). In x86-based systems, accelerated encryption support depends on whether the AES-NI instruction set is available on the host CPU (not considering units based on the Intel Berryville platform). Fortunately, the Xeon CPU used in the Synology RS10613xs+ does support AES-NI. So, we can expect the performance loss from enabling encryption to be minimal.
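As a quick sanity check, the availability of AES-NI can be read straight out of /proc/cpuinfo. This is a minimal sketch, assuming a Linux shell on the host in question (for example, an SSH session into the Linux-based DSM environment on the NAS):

    # Check for the AES-NI instruction set flag on a Linux host.
    def has_aes_ni(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    # The flag list follows the colon; "aes" indicates AES-NI support.
                    return "aes" in line.split(":", 1)[1].split()
        return False

    print("AES-NI available:", has_aes_ni())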

We enabled encryption on a CIFS share and repeated our Intel NASPT / robocopy benchmarks. The results are presented in the graph below (with the unencrypted folder numbers included for comparison).

Synology RS10613xs+ Encryption Performance - Windows

As expected, encryption carries almost no performance hit. In a couple of cases, the numbers even seem to favour the encrypted case. This goes to show that, for those cases, the bottleneck lies on the disk or network side rather than in the RAID and encryption-related computation on the NAS CPU.
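To get a feel for why the CPU has headroom to spare, the sketch below (assuming the third-party Python 'cryptography' package is installed) times AES-256 encryption of a buffer in memory. GCM mode is used purely for illustration and is not necessarily the mode DSM's folder encryption employs; on an AES-NI capable core the measured rate typically comes out well above the roughly 1 GB/s that a single 10GbE link can deliver.

    import os
    import time
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # third-party 'cryptography' package

    key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
    aead = AESGCM(key)
    chunk = os.urandom(16 * 1024 * 1024)        # 16 MB of random data
    iterations = 16                             # 256 MB encrypted in total

    start = time.time()
    for _ in range(iterations):
        aead.encrypt(os.urandom(12), chunk, None)   # fresh 96-bit nonce per call
    elapsed = time.time() - start

    mb = iterations * len(chunk) / (1024 * 1024)
    print(f"AES-256-GCM throughput on this CPU: {mb / elapsed:.0f} MB/s")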

Comments

  • mfenn - Friday, December 27, 2013 - link

    The 802.3ad testing in this article is fundamentally flawed. 802.3ad does NOT, repeat NOT, create a virtual link whose throughput is the sum of its components. What it does is provide a mechanism for automatically selecting which link in a set (bundle) to use for a particular packet based on its source and destination. The definition of "source and destination" depends on the particular hashing algorithm you choose, but the common algorithms will all hash a network file system client / server pair to the same link.

    In a 4 x 1 Gb/s + 2 x 10 Gb/s 802.3ad link aggregation group, you would expect two-thirds of the clients to get hashed to the 1 Gb/s links and one-third to get hashed to the 10 Gb/s links. In a situation where all clients are running in lock-step (i.e. everyone must complete their tests before moving on to the next), you would expect the 10 Gb/s clients to be limited by the 1 Gb/s ones, thus providing a ~6 Gb/s line rate (~600 MB/s of user data).

    Since 2 * 10 Gb/s > 6 * 1 Gb/s, I recommend retesting with only the two 10 Gb/s links in the 802.3ad aggregation group.
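To illustrate the hashing behaviour mfenn describes, the sketch below (hypothetical MAC addresses, simplified hash) mimics how a layer-2 hash policy assigns traffic to the members of an aggregation group: a given client/server MAC pair always hashes to the same member link, so a single flow can never exceed the speed of one physical port.

    # Simplified illustration of 802.3ad link selection with a layer-2 (MAC-based) hash policy.
    # Real switches and bonding drivers use their own hash functions; this only shows the principle,
    # similar in spirit to the Linux bonding driver's layer2 policy.

    links = ["10GbE-1", "10GbE-2", "1GbE-1", "1GbE-2", "1GbE-3", "1GbE-4"]

    def select_link(src_mac: str, dst_mac: str) -> str:
        # XOR the two MAC addresses (as integers) and take the result
        # modulo the number of member links.
        src = int(src_mac.replace(":", ""), 16)
        dst = int(dst_mac.replace(":", ""), 16)
        return links[(src ^ dst) % len(links)]

    nas_mac = "00:11:32:aa:bb:01"                  # hypothetical NAS MAC
    for i in range(6):
        client_mac = f"00:25:90:00:00:{i:02x}"     # hypothetical client MACs
        print(client_mac, "->", select_link(client_mac, nas_mac))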
  • Marquis42 - Friday, December 27, 2013 - link

    Indeed, that's what I was going to get at when I asked more about the particulars of the setup in question. Thanks for just laying it out, saved me some time. ;)
  • ganeshts - Friday, December 27, 2013 - link

    mfenn / Marquis42,

    Thanks for the note. I indeed realized this issue after processing the data for the Synology unit. Our subsequent 10GbE reviews which are slated to go out over the next week or so (the QNAP TS-470 and the Netgear ReadyNAS RN-716) have been evaluated with only the 10GbE links in aggregated mode (and the 1 GbE links disconnected).

    I will repeat the Synology multi-client benchmark with RAID-5 / 2 x 10Gb 802.3ad and update the article tomorrow.
  • ganeshts - Saturday, December 28, 2013 - link

    I have updated the piece with the graphs obtained by just using the 2 x 10G links in 802.3ad dynamic link aggregation. I believe the numbers don't change too much compared to teaming all the 6 ports together.

    Just for more information on our LACP setup:

    We are using the GSM7352S's SFP+ ports teamed with link trap and STP mode enabled. Obviously, dynamic link aggregation mode. The Hash Mode is set to 'Src/Dest MAC, VLAN, EType, Incoming Port'.

    I did face problems in evaluating other units where having the 1 Gb links active and connected to the same switch while the 10G ports were link-aggregated would bring down the benchmark numbers. I have since resolved that by completely disconnecting the 1G links in multi-client mode for the 10G-enabled NAS units.
  • shodanshok - Saturday, December 28, 2013 - link

    Hi all,
    while I understand that RAID5 surely has its place, RAID10 is generally a far better choice, both for redundancy and performance.

    The RAID5 read-modify-write penalty presents itself in a pretty heavy way with anything doing many small writes, such as databases and virtual machines. So, the only case where I would create a RAID5 array is when it will be used as a storage archive (e.g. a fileserver).

    On the other hand, many, many sysadmins create RAID5 arrays "by default" and then try to consolidate many virtual machines onto them. Unless you have a very high-end RAID controller (with 512+ MB of NV cache), they will suffer badly from RAID5 and alignment issues, which are basically non-existent on RAID10.

    One exception can be made for SSD arrays: in that case, a parity-based scheme (RAID5 or, better, RAID6) can do its work very well, as SSDs have no seek latency and tend to be of lower capacity than mechanical disks. However, alignment issues remain significant, and need to be taken into account when creating both the array and the virtual machines on top of it.

    Regards.
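A back-of-the-envelope illustration of the read-modify-write penalty described above: a small random write that touches a single chunk forces RAID5 to read the old data and old parity and then write new data and new parity (four disk I/Os), while RAID10 simply writes the data to both mirror members (two disk I/Os). The sketch below turns that into a rough random-write IOPS estimate for a hypothetical array; the disk count and per-disk IOPS figures are illustrative assumptions.

    # Rough random-write IOPS estimate for RAID5 vs RAID10 (small, non-full-stripe writes).
    def raid_write_iops(disks: int, iops_per_disk: float, penalty: int) -> float:
        # penalty = back-end disk I/Os generated per front-end write
        return disks * iops_per_disk / penalty

    DISKS = 10            # hypothetical array size
    DISK_IOPS = 150       # typical 7200 rpm SATA drive

    print("RAID5  random-write IOPS:", raid_write_iops(DISKS, DISK_IOPS, penalty=4))
    print("RAID10 random-write IOPS:", raid_write_iops(DISKS, DISK_IOPS, penalty=2))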
  • sebsta - Saturday, December 28, 2013 - link

    Since the introduction of 4k sector size disks things have changed a lot,
    at least in the ZFS world. Everyone who is thinking about building their
    storage system with ZFS and RaidZ should see this Video.

    http://zfsday.com/zfsday/y4k/

    Starting at 17:00 comes the bad stuff for RaidZ users.
    Here one of the co-creators of ZFS basically tells you:

    Stay away from RaidZ if you are using 4k sector disks.
  • hydromike - Sunday, December 29, 2013 - link

    Whether this is a current problem depends on the OS / implementation. Many of the commercial ZFS vendors have had this fixed for a while (18 to 24 months). FreeNAS has fixed this issue in its latest release, 9.2.0. ZFS has always been a command-line-heavy affair where you really need to understand drive setup in order to tune it for the best speed.
  • sebsta - Sunday, December 29, 2013 - link

    I don't know much about FreeNAS but, like FreeBSD, they get their ZFS from Illumos.
    The Illumos ZFS implementation has no fix. What is ZFS supposed to do if you write 8k to a RaidZ with 4 data disks when the sector size of a disk is 4k?

    The video explains what happens on Illumos. You will end up with something like this:

    1st 4k data -> disk1
    2nd 4k data -> disk2
    1st 4k data -> disk3
    2nd 4k data -> disk4
    Parity -> disk5

    So you have written the same data twice, plus parity. Much like mirroring, with the additional overhead of calculating and writing the parity. Has FreeNAS changed the ZFS implementation in that regard?
  • sebsta - Sunday, December 29, 2013 - link

    I did a quick search, and at least as of January this year FreeBSD had the same issues.
    See here: https://forums.freebsd.org/viewtopic.php?&t=37...
  • shodanshok - Monday, December 30, 2013 - link

    Yes, this is true, but for this very same reason many enterprise disks remain at 512 bytes per sector.
    Take the "enterprise drives" from WD:
    - the low-cost WD SE drives are Advanced Format ones (4K sector size)
    - the somewhat pricey WD RE drives have classical 512B sectors
    - the top-of-the-line WD XE drives have 512B sectors

    So, the 4K-formatted disks are positioned for storage-archiving duties, while for VMs and DBs the 512B disks remain the norm.

    Regards.