RAPID: PCIe-like Performance from a SATA SSD

The software story around Samsung's SSD 840 EVO is quite possibly the strongest we've ever seen from an SSD manufacturer. Samsung's SSD Magician got a major update not too long ago, giving it a downright awesome UI. Magician gives you access to SMART details about your drive and provides decent visualization of things like total host writes. I'd love to see total NAND writes reported somewhere as well, since host writes alone don't account for write amplification and can give a false sense of security to users deploying drives in very write-intensive environments. There's a prominent drive health indicator that is tied to NAND wear and should draw a lot of attention to itself should things get bad. Samsung's SSD Magician also includes a built-in benchmark, controllable overprovisioning and secure erase functionality.

Samsung sent us a beta of the next version of its Magician software (4.2), which includes support for RAPID mode (Real-time Accelerated Processing of I/O Data). RAPID is a feature exclusive to the EVO (for now) and comes courtesy of Samsung's NVELO acquisition from last year. As NVELO focused on NAND caching software, you shouldn't be too surprised by RAPID's role in improving storage performance. Unlike traditional SSD caches, however, which use NAND to accelerate mechanical storage, RAPID is designed to further improve the performance of an SSD rather than make a HDD more SSD-like. RAPID uses some of your system memory and CPU resources to cache hot data, serving it out of DRAM rather than off the SSD.

The architecture is rather simple to understand. Enabling RAPID installs a filter driver on your Windows machine that keeps track of all reads/writes to a single EVO (RAPID only supports caching a single drive today). The filter driver looks at both file types/sizes and LBAs, but it fundamentally caches at the block level (it simply gets hints from the filesystem to determine what to cache). File types that are meaningless to cache are automatically excluded (think very large media files), but things like Outlook PST files are prime targets for caching. Since RAPID works at the block level you can cache frequently used parts of a file, rather than having to worry about a file being too big for the cache.
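To make the block-level-plus-hints idea concrete, here's a toy sketch of such a cache. Samsung hasn't published RAPID's internals, so the LRU policy, 4KB granularity and the specific excluded extensions are all illustrative assumptions, not the actual driver logic:

```python
from collections import OrderedDict

BLOCK_SIZE = 4096                         # assumed caching granularity
EXCLUDED_EXTS = {".mkv", ".mp4", ".avi"}  # illustrative "large media" exclusions

class BlockCache:
    """Toy LRU block cache: caches by LBA, with filesystem hints
    used only to exclude file types that aren't worth caching."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()       # lba -> data

    def read(self, lba, ext, backing):
        if ext in EXCLUDED_EXTS:
            return backing[lba]           # bypass: hint says don't cache
        if lba in self.blocks:
            self.blocks.move_to_end(lba)  # cache hit, refresh LRU position
            return self.blocks[lba]
        data = backing[lba]               # miss: fetch from the SSD
        self.blocks[lba] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least-recently-used block
        return data

backing = {i: f"block{i}" for i in range(10)}   # stand-in for the SSD
cache = BlockCache(capacity_blocks=4)
cache.read(0, ".pst", backing)   # miss: pulled into the DRAM cache
cache.read(0, ".pst", backing)   # hit: served from DRAM
cache.read(1, ".mkv", backing)   # excluded by the file-type hint
print(len(cache.blocks))         # -> 1
```

Because admission is per block rather than per file, a hot 4KB region of a multi-GB file can live in the cache while the rest of the file stays out.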

The cache resides in main memory and is allocated out of non-paged kernel memory. In fact, that's the easiest way to determine whether or not RAPID is actually working - you'll see non-paged kernel memory jump in size after about a minute of idle time on your machine.

Presently RAPID will use no more than 25% of system memory or 1GB, whichever is smaller. Both reads and writes are cached, but in different ways. The read cache works as you'd expect, while for writes RAPID more accurately does something like buffering/combining. Reads are simple to cache (just look at which addresses are frequently accessed and pull those into the cache), but writes pose a different set of challenges. If you write to DRAM first and write back to the SSD later, you run the risk of losing a ton of data in the event of a crash or power failure. Although RAPID obeys flush commands, there's always the risk that anything pending could be lost in a system crash. Recognizing this, Samsung tells me that RAPID instead focuses on combining low queue depth writes into much larger bundles of data that can be written as large transfers across many NAND die. To test this theory I ran our 4KB random write IOmeter test at a queue depth of 1 with RAPID enabled and disabled:

Samsung SSD 840 EVO 250GB - 4K Random Write, QD1, 8GB LBA Space

                 IOPS       Bandwidth     Avg Latency   Max Latency   CPU Utilization
RAPID Disabled   22769.31   93.26 MB/s    0.0435 ms     0.7512 ms     13.81%
RAPID Enabled    73466.28   300.92 MB/s   0.0135 ms     31.4259 ms    31.18%

Write coalescing seems to work extremely well here. With RAPID enabled, the system sees even better random write performance than it would at a queue depth of 32. Average latency drops, although the maximum observed latency is definitely higher. I've seen max latency peaks as high as 10ms on the EVO, so the increase in max latency is a bit less severe than the data here indicates (but it's still large).
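The coalescing behavior Samsung describes can be pictured as a small buffer that soaks up 4KB QD1 writes and issues them to the drive as one large transfer. The 1MB bundle size below is my assumption for illustration; Samsung doesn't disclose the real figure:

```python
class WriteCoalescer:
    """Toy model of RAPID-style write combining: small QD1 writes are
    buffered in DRAM and issued to the SSD as one large transfer."""
    def __init__(self, bundle_bytes=1 << 20):   # assumed 1MB bundle size
        self.bundle_bytes = bundle_bytes
        self.pending = []        # (lba, data) pairs awaiting a big write
        self.pending_bytes = 0
        self.device_writes = 0   # large transfers actually sent to the SSD

    def write(self, lba, data):
        self.pending.append((lba, data))
        self.pending_bytes += len(data)
        if self.pending_bytes >= self.bundle_bytes:
            self.flush()

    def flush(self):             # also invoked on an OS flush command
        if self.pending:
            self.device_writes += 1   # one large burst across many NAND die
            self.pending.clear()
            self.pending_bytes = 0

wc = WriteCoalescer()
for i in range(256):             # 256 x 4KB = 1MB of small random writes
    wc.write(i, b"\x00" * 4096)
print(wc.device_writes)          # -> 1: coalesced into a single burst
```

The payoff is exactly what the table above shows: the drive sees one large, NAND-friendly transfer instead of 256 tiny ones, at the cost of data sitting in DRAM until a flush.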

My test system uses a quad-core Sandy Bridge, so we're looking at an additional 60 - 70% load on a single core when running an unconstrained IO workload. In real world scenarios I'd expect the impact to be much lower, but there's no getting around the fact that you're spending extra cycles on this DRAM caching. RAPID will revert to a pass-through mode if the CPU is already tied up doing other things; the technology is really designed to make use of excess CPU and DRAM in modern day PCs.
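The two resource rules boil down to a couple of lines. The 25%/1GB cap is Samsung's stated policy; the 80% CPU threshold for pass-through is purely my placeholder, since Samsung doesn't publish the actual trigger:

```python
GB = 1 << 30

def rapid_cache_cap(total_ram_bytes):
    # Stated policy: no more than 25% of system memory or 1GB,
    # whichever is smaller.
    return min(total_ram_bytes // 4, 1 * GB)

def should_cache(cpu_load, busy_threshold=0.80):
    # Hypothetical pass-through rule: skip DRAM caching when the CPU
    # is already tied up (the real threshold is undisclosed).
    return cpu_load < busy_threshold

print(rapid_cache_cap(2 * GB) // (1 << 20))   # 2GB system  -> 512 (MB)
print(rapid_cache_cap(16 * GB) // (1 << 20))  # 16GB system -> 1024 (MB cap)
print(should_cache(0.95))                     # -> False: pass-through
```

In other words, only machines with 4GB of RAM or more ever see the full 1GB cache.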

The potential performance upside is tremendous. While the EVO is ultimately limited by the performance of 6Gbps SATA, any requests serviced out of main memory are limited by the speed of your DRAM. In practice I never saw more than 4 - 5GB/s out of the cache, but that's still an order of magnitude better than what you'd get from the SSD itself. I ran a couple of tests with and without RAPID enabled to further characterize the performance gains:

Samsung SSD 840 EVO 250GB

                 PCMark 7 Secondary   ATSB Heavy 2011    ATSB Heavy 2011      ATSB Light 2011    ATSB Light 2011
                 Storage Score        (Avg Data Rate)    (Avg Service Time)   (Avg Data Rate)    (Avg Service Time)
RAPID Disabled   5414                 229.6 MB/s         1101.0 µs            338.3 MB/s         331.4 µs
RAPID Enabled    5977                 307.7 MB/s         247.0 µs             597.7 MB/s         145.4 µs
% Increase       10.4%                34.0%              -                    75.0%              -

The gains in these tests range from only 10% in PCMark 7 to as much as 75% in our Light 2011 workload. I'm in the process of running a RAPID-enabled drive through our Destroyer benchmark to see how it fares there. In our two storage bench tests the impact is almost entirely on the write side; average read performance actually regresses slightly in both cases. I'm not entirely sure why, other than that both of these tests were designed to be a bit more write-intensive than normal in order to really stress the weaknesses of SSDs at the time. To make sure that reads could indeed be cached, I ran ATTO at a couple of different test sizes, starting with our standard 2GB test.

ATTO makes for a great test because we can see the impact transfer size has on RAPID's caching algorithms. Here we see pretty much no improvement until transfers get larger than 32KB, indicating an optimization for caching large block sequential reads. Note that even though ATTO's test file is 2GB in size (and RAPID's cache is limited to 1GB) we're still able to see some increase in performance. At best RAPID boosts sequential read performance by 34%, driving the 250GB EVO beyond 700MB/s. Since the test file is larger than the maximum size of the cache we're ultimately limited by the performance of the EVO itself.
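That last point can be sanity-checked with a back-of-the-envelope estimate. The numbers here are assumptions drawn from elsewhere in this review (roughly 4GB/s out of the DRAM cache, roughly 530MB/s from the EVO itself), not measurements of RAPID's internals:

```python
def blended_read_mbps(hit_fraction, dram_mbps=4000.0, ssd_mbps=530.0):
    """Average throughput when hit_fraction of the data is served from
    DRAM and the remainder from the SSD (time-weighted harmonic blend)."""
    time_per_mb = hit_fraction / dram_mbps + (1 - hit_fraction) / ssd_mbps
    return 1.0 / time_per_mb

# 2GB ATTO file, 1GB cache: at most half the reads can hit DRAM.
print(round(blended_read_mbps(0.5)))   # -> 936, a theoretical upper bound
# The measured ~700MB/s implies an effective hit rate closer to 30%.
print(round(blended_read_mbps(0.3)))   # -> 716
```

Even a perfectly hot 1GB cache can't do better than roughly doubling throughput on a 2GB working set, which is why the 512MB test below behaves so differently.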

Writes show a different optimization point. Here we see a big uplift above 4KB transfer sizes but more or less unchanged performance once we move to large block sequential transfers. Again, this makes sense: Samsung wants to coalesce small writes into large blocks it can burst across many NAND die, while caching large sequential transfers would just risk data loss in the event of a crash or unexpected power loss. Here the potential uplift is even larger - nearly 60% over the RAPID-disabled configuration.

To see what would happen if the entire workload could fit within a 1GB cache I reduced the size of ATTO's test set to 512MB and re-ran the tests:

Oh man. Here performance just shoots through the roof. Max sequential read performance tops out at 3.8GB/s. Note that once again we don't see RAPID attempting to cache any smaller transfers; only large sequential transfers are of interest. Towards the end of the curve performance appears to regress once the transfer size exceeds 1MB. What's actually happening is that RAPID's performance exceeds the variable ATTO uses to store its instantaneous results: we're watching a 32-bit integer wrap around.
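The wrap is easy to demonstrate. Assuming ATTO keeps the instantaneous rate in an unsigned 32-bit integer (which is consistent with what we observed, though ATTO doesn't document it), anything past ~4.29GB/s rolls over:

```python
def atto_reported(bytes_per_sec):
    # Model of an unsigned 32-bit counter: values past 2^32 - 1 wrap.
    return bytes_per_sec % (1 << 32)

actual = 5 * 10**9                 # a true 5GB/s burst from the DRAM cache
print(atto_reported(actual))       # -> 705032704, i.e. reported as ~0.7GB/s
```

So a faster result can chart as a slower one, which is exactly the "regression" visible past the 1MB transfer size.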

Writes see similarly insane increases in performance; here the best result is north of 4GB/s. When the entire workload fits in the cache, Samsung unfortunately appears to relax its policy of not caching large transfers. The focus extends beyond small writes, and we see nearly 4GB/s when transferring 8MB of data at a time. We're likely also seeing the same issue where RAPID's performance is so high that it overflows the 32-bit integer ATTO uses to report it.

While I appreciate the tremendous increase in both read and write performance, part of me wishes that Samsung would be more conservative in buffering writes. Although the cache map is stored on the C: drive and persists across boots, any crash or power loss with uncommitted (non-flushed) writes in the DRAM cache risks those writes never making it to disk. Samsung is quick to point out that Windows issues flush commands regularly, so the risk should be as low as possible, but you're still risking more than you would without another DRAM cache in the path. If you've got a stable system connected to a UPS (or a notebook on a battery) this will sound like paranoia, but it's still a concern.
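For applications that can't tolerate that window, the mitigation is the same one RAPID itself honors: an explicit flush. In code that's an fsync (which maps to FlushFileBuffers on Windows); a write-back cache that obeys flush commands must have committed the data to the SSD by the time the call returns:

```python
import os
import tempfile

# Write a record and force it past every software cache to stable storage.
fd, path = tempfile.mkstemp()
os.write(fd, b"critical record\n")
os.fsync(fd)                 # flush: data must now reach the device
os.close(fd)

with open(path, "rb") as f:
    data = f.read()          # read back what actually landed on disk
os.remove(path)
print(data)                  # -> b'critical record\n'
```

Databases and mail clients already do this aggressively, which is part of why Samsung can claim the added risk is small in practice.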

If, however, you want to get PCIe-like SSD speeds without shelling out the money for a PCIe SSD, Samsung's RAPID is the closest you'll get.


  • eamon - Thursday, August 1, 2013 - link

    Unless you want to run some kind of continual I/O server, I suspect performance will be fast enough not to matter; I'd only look at pricing if I were you...
  • Busverpasser - Thursday, August 8, 2013 - link

    Hi there, great review, thanks a lot. Actually I do have a question... The article says "The performance story is really good (particularly with the larger capacities), performance consistency out of the box is ok (and gets better if you can leave more free space on the drive)..."

    Does leaving more free space mean that this space is supposed to be unpartitioned or just not filled with data? When I bought my Intel Postville SSD some time ago, I left some space unpartitioned but never really knew whether that was the right thing to do :D. Can someone give me a hint here?
  • xchaotic - Wednesday, August 14, 2013 - link

    @Busverpasser just leave more space free, it doesn't have to be unpartitioned.
    Worst case if you need that extra space for a while, you'll get lower performance, but more storage whenever you need it.
  • speculatrix - Saturday, August 17, 2013 - link

    the table titled "Samsung SSD 840 EVO TurboWrite Buffer Size vs. Capacity" should be titled "Capacity vs Usage vs Endurance"
  • rdugar - Friday, August 23, 2013 - link

    Am in the market for an SSD finally to replace an HDD on a Windows 7 laptop. Was almost set on the 128GB Samsung 840 Pro, but saw the comment on poor performance at almost full capacities.

    Price, reliability and endurance being the most important to me, which one should I go for?

    128Gb Samsung 840 Pro? approx $119 after coupons, etc.
    120 GB Samsung 840 EVO? probably $99 or so
    256 GB Samsung 840 EVO? probably $165 or so
    Other brand and model?

    If I have to spend $120 odd, may as well spend another $50 and get double the capacity....
  • tfop - Saturday, August 24, 2013 - link

    I have a question regarding to the NAND Comparison table.
    How do this Page and Block sizes affect the right Clustersize and Alignment of the Partition?
    If i am getting this right, the SSD 840 EVO would need a 8 KiB Clustersize and a 2 MiB Alignment.
  • Gnomer87 - Wednesday, August 28, 2013 - link

    I have a couple of questions:

    First, how much data is typically written to the average consumer HDD on a daily basis these days? I am thinking it's nowhere close to 50GiB. I guess what I am really interested in knowing, is how much data the operating system(windows 7) writes to the drive for various maintenance uses(if there are any beside defragmenting). In my mind, simply booting up the computer shouldn't mean any writes to the drive at all. Ergo, given my typical use, a 120GB SSD of that caliber, should last a lifetime. Am I right in thinking this? I mean, reading doesn't affect the durability right?

    Secondly: I've been considering getting an SSD for use as a OS drive for a long time, reason of course was to speed up boot time. However, I've long wondered WHY windows boots so slowly from HDDs in the first place. After all, the amount of data loaded during boot up isn't large. In my case the processes post-boot take up around 200 MBs, Assuming the actual amount of data loaded from the drive is about the same, it really shouldn't take that long. My HDD is capable of reading up to 120 MBs in optimal situations, so it's obvious the boot up process isn't optimal by a long shot.

    But why this slow? It can take over a minute before she(my computer) is done loading and starting all processes. Last semester I took course in Operating system at the local university. I must confess I was a horrible student, I didn't show up much. But I do remember a few key elements, namely the scheduler and how this scheduler continually does context switches, letting each process use the CPU, and thus creating parallelism. Now what was really interesting was resource management. It's the scheduler that decides which process is currently running on the cpu, and the scheduler process is run in between each context switch, effectively letting each user process run and have access to resources, such as the hard drive. Now, what happens if all the processes want data from the drive at the same time? Would each process continually interrupt the other processes loading of data, and thus causing the HDD to seek constantly?

    Could that explain why booting takes such idiotic amounts of time? An extremely inefficient resource management that basically ignores the inherent seek-time related weaknesses of an HDD? SSDs, as we know, barely have seek-time, and thus the performance loss from context switching should be negligible.

    I know my cousins SSD powered computer boots near instantly, once it's done with the usual BIOS stuff, the OS is booted and ready for use in mere seconds. And yes, we are talking a completely cold boot here, no sleep or anything like that.
  • abhilashjain30 - Friday, September 20, 2013 - link

    I purchased Samsung 120GB EVO 3 days back from OnlySSD ( http://goo.gl/HqgjId )and Drive performance is too good compare to 120GB 840 Basic Series.
  • abhilashjain30 - Friday, September 20, 2013 - link

    Available at OnlySSD dot com
  • abhilashjain30 - Wednesday, October 2, 2013 - link

    Samsung Evo Series now available online in India. You can check on OnlySSD dot com or PrimeABGB dot com
