Encryption Support Evaluation

Consumers looking for encryption capabilities can opt to encrypt an iSCSI share with TrueCrypt or a built-in encryption mechanism in the client OS. However, if requirements dictate that the data must be shared across multiple users or computers, relying on encryption in the NAS itself is the best way forward. Most NAS vendors use the industry-standard 256-bit AES encryption algorithm. One approach is to encrypt only a particular shared folder, while the other is to encrypt the full volume. Netgear supports only volume-level encryption. In addition, a USB drive (to which the key is written at volume-creation time) must be attached to the unit for the encrypted volume to be mounted and remain accessible.

On the hardware side, encryption support can come in the form of specialized hardware blocks in the SoC (common in ARM / PowerPC based NAS units). In x86-based systems, accelerated encryption support depends on whether the AES-NI instruction set is available on the host CPU. The Annapurna Labs SoC has hardware crypto engines that keep the performance hit with encrypted volumes minimal.
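On Linux, AES-NI support is advertised via the CPU flags in `/proc/cpuinfo`. As a rough sketch (the `has_aes_ni` helper is our own, not part of any NAS firmware, and only covers x86 Linux; ARM kernels report crypto extensions under a `Features` line instead):

```python
import platform

def has_aes_ni() -> bool:
    """Return True if the CPU advertises the AES-NI instruction set.

    Reads /proc/cpuinfo on Linux x86; on other platforms this sketch
    conservatively returns False.
    """
    if platform.system() != "Linux":
        return False
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    # The x86 flag for AES-NI is simply "aes".
                    return "aes" in line.split(":", 1)[1].split()
    except OSError:
        pass
    return False

print("AES-NI available:", has_aes_ni())
```

Software AES on a CPU without this flag falls back to table-based implementations, which is why encrypted-volume numbers drop so sharply on the non-AES-NI x86 units below.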

[Benchmark graphs, all run over an encrypted CIFS share: HD Video Playback, 2x HD Playback, 4x HD Playback, HD Video Record, HD Playback and Record, Content Creation, Office Productivity, File Copy to NAS, File Copy from NAS, Dir Copy to NAS, Dir Copy from NAS, Photo Album, robocopy (Write to NAS), robocopy (Read from NAS)]

The performance of the encrypted volume is easily the best among all the ARM-based NAS units that have been evaluated so far. The x86-based units in the list above don't have AES-NI, and hence the ARM-based RN202 easily manages to win out. There is definitely a performance hit compared to unencrypted volumes, and this can only be resolved by going to higher-performance platforms.

Comments

  • Duncan Macdonald - Friday, September 25, 2015 - link

    Any NAS system that is limited to GbE or lower speed will give poor performance compared to even budget SSDs. (A GbE link can transfer about 100MB/sec after allowing for overheads - even low performance SSDs can do much better.) To beat locally mounted SSDs requires 10GbE or faster links. NAS systems are only useful for sharing files (slowly) to multiple computers or providing a backup far enough away to be unlikely to be affected by a common disaster (eg a house fire).
    As for NAS systems with 100Mb/sec links - AVOID (A USB 2.0 stick can be faster!!!)
  • BillyONeal - Friday, September 25, 2015 - link

    But most of the NASes here are well below saturating GigE. A USB 2.0 stick can be faster in extremely limited scenarios but in most cases USB protocol overhead per transfer will make it worse for these kinds of workloads.
  • Metaluna - Friday, September 25, 2015 - link

    Where in the article did anyone suggest using a NAS as a performance alternative to locally attached SSDs? And as for NAS only being useful for sharing files to multiple computers, yeah, that's kind of the whole point for why local area networks and file servers were developed in the first place. That's like saying "A GPU is really only useful for displaying images on your screen"
  • colinstu - Friday, September 25, 2015 - link

    don't know what 'overheads' you're talking about, but my Synology NAS and GbE network regularly transfer at 115MB/s (114-116). Still not the theoretical max of 125MB/s, but closer to the max than '100'
  • azazel1024 - Saturday, September 26, 2015 - link

    No, the max theoretical is not 125MB/sec. That is the raw data rate, but you can't actually transfer 125MB/sec of usable data over a 1GbE link. The SMB max rate is about 117.5MB/sec using 9k jumbo frames and about 115MB/sec using a standard 1500-byte MTU. That covers TCP/IP overhead as well as SMB overhead. Smaller files will reduce the max by a bit no matter how fast the host and server are, because of the additional SMB overhead involved in "opening" and "closing" each file transfer.

    NAS are just fine, at least newer moderately fast ones. But I do have to say, if running Windows-based clients...a Windows-based server, if you can't/don't want to move to 10GbE, can be significantly higher performing than a NAS, even in "undemanding" file transfers. My G1610-based server manages 235MB/sec between it and my desktop, both running Windows 8.1. Dual GbE NICs combined with SMB Multichannel is a beautiful thing.
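The ~115-118MB/sec ceiling quoted in the comment above follows directly from per-frame overhead on the wire. A back-of-the-envelope sketch (ignoring SMB's own framing, which shaves off a few more MB/sec):

```python
# Theoretical usable TCP throughput of a 1GbE link for bulk transfers.
# Each Ethernet frame carries 38 bytes of wire overhead on top of the
# MTU payload: 7B preamble + 1B SFD + 14B header + 4B FCS + 12B
# inter-frame gap.

LINE_RATE = 125_000_000  # 1 Gb/s expressed in bytes per second

def tcp_payload_rate(mtu: int) -> float:
    """Bytes/sec of TCP payload for a given MTU on a saturated GbE link."""
    wire_bytes = mtu + 38     # bytes each frame actually occupies on the wire
    payload = mtu - 20 - 20   # MTU minus IPv4 and TCP headers
    return LINE_RATE * payload / wire_bytes

print(f"1500 MTU: {tcp_payload_rate(1500) / 1e6:.1f} MB/s")  # ~118.7 MB/s
print(f"9000 MTU: {tcp_payload_rate(9000) / 1e6:.1f} MB/s")  # ~123.9 MB/s
```

SMB adds its own per-request overhead on top of this, which is consistent with the ~115MB/sec (1500 MTU) and ~117.5MB/sec (jumbo frames) real-world figures cited above.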
  • UtilityMax - Sunday, September 27, 2015 - link

    NAS storage is slower than a directly attached storage! Shocking stuff! News at 11.

    GiE is actually pretty acceptable for most applications, except a few specialist tasks. 10GbE can still be pretty expensive and power-hungry.
  • UtilityMax - Sunday, September 27, 2015 - link

    Sorry mean 10GbE instead of GiE
  • Wixman666 - Sunday, September 27, 2015 - link

    So you decided that comparing apples and walruses is ok? A SSD and a 2 bay NAS have nothing in common for function, capacity, or price. Troll on, dude.
  • johnny_boy - Thursday, October 1, 2015 - link

    Any SSD system that is limited to SATA or even PCIE will give poor performance compared to even budget RAM disks. (A SATA link can transfer about blah MB/sec after allowing for overheads - even low performance RAM disks can do much better.) To beat locally mounted RAM disks requires bleek GbE or faster links. SSDs are only useful for reading and writing data.
    As for SSDs with blomps Mb/sec links - AVOID (A USB 3.0 stick can be faster!!!)
  • Wardrop - Friday, September 25, 2015 - link

    Do the btrfs snapshots show up in Windows under the "Previous versions" tab?
