Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger; hence the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo-randomly generated data for each write and fully random data to show you both the maximum and minimum performance offered by SandForce-based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
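
The exact Iometer configuration isn't reproduced here, but the sketch below approximates the workload in plain Python for anyone who wants to experiment: 4KB writes at random 4KB-aligned offsets across an 8GB span, three worker threads standing in for the three concurrent IOs, a three-minute run, and average MB/s at the end. The scratch path is hypothetical, it assumes a POSIX system (os.pwrite), and without direct/unbuffered IO the page cache will inflate the numbers, so treat it as an illustration of the access pattern rather than a substitute for Iometer.

    import os, random, time, threading

    PATH = "iometer_scratch.bin"      # hypothetical scratch file (grows to ~8GB)
    SPAN = 8 * 1024**3                # 8GB test span, as in the review
    BLOCK = 4096                      # 4KB transfers
    DURATION = 180                    # 3 minutes
    WORKERS = 3                       # three concurrent IOs (~QD3)

    totals = [0] * WORKERS            # bytes written per worker

    def worker(i, fd, deadline):
        buf = os.urandom(BLOCK)                               # pseudo-random payload
        while time.monotonic() < deadline:
            offset = random.randrange(SPAN // BLOCK) * BLOCK  # 4KB-aligned offset
            totals[i] += os.pwrite(fd, buf, offset)

    fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o600)
    deadline = time.monotonic() + DURATION
    threads = [threading.Thread(target=worker, args=(i, fd, deadline))
               for i in range(WORKERS)]
    for t in threads: t.start()
    for t in threads: t.join()
    os.close(fd)
    print(f"{sum(totals) / DURATION / 1024**2:.1f} MB/s average")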

Desktop Iometer - 4KB Random Read

Random read speed is very close to that of the 840 Pro. The EVO doesn't look like a mainstream drive here at all.

Desktop Iometer - 4KB Random Write

Even peak random write performance is dangerously close to the 840 Pro. Only the 120GB drive falls behind the pack. I should add that I'll have to redo the way we test 4KB random writes given how optimized current firmware and controller architectures have become. The data here is interesting, but honestly the performance consistency data from earlier is a better look at what happens to 4KB random write performance over time.

Desktop Iometer - 4KB Random Write (QD=32)

The relatively small difference between QD3 and QD32 random write performance shows you just how good a job Samsung's controller is doing at write combining. At high queue depths the EVO is just as fast as the 840 Pro here. So much for TLC being slow.

Sequential Read/Write Speed

To measure sequential performance I run a one-minute 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
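
Again as a rough approximation only (not the actual Iometer setup), the sketch below issues 128KB sequential reads one at a time (queue depth 1) for a minute against a hypothetical target and reports average MB/s; the same page-cache caveat as above applies.

    import os, time

    PATH = "iometer_scratch.bin"      # hypothetical target, e.g. the scratch file above
    BLOCK = 128 * 1024                # 128KB transfers
    DURATION = 60                     # 1 minute

    fd = os.open(PATH, os.O_RDONLY)
    size = os.fstat(fd).st_size
    offset = total = 0
    deadline = time.monotonic() + DURATION
    while time.monotonic() < deadline:           # QD1: one IO outstanding at a time
        total += len(os.pread(fd, BLOCK, offset))
        offset += BLOCK
        if offset + BLOCK > size:                # wrap before running off the end
            offset = 0
    os.close(fd)
    print(f"{total / DURATION / 1024**2:.1f} MB/s average")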

Sequential read and write performance, even at low queue depths, is very good on the EVO. You may notice lower M500 numbers here than elsewhere; the explanation is pretty simple. We run all of our read tests after valid data has been written to the drive. Unfortunately the M500 aggressively garbage collects data on the drive, so even though we fill the drive and then immediately start reading back, the M500 is still working in the background, which reduces overall performance here.

Desktop Iometer - 128KB Sequential Read

Desktop Iometer - 128KB Sequential Write

AS-SSD Incompressible Sequential Read/Write Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce-based drives.
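
To make the distinction concrete: a controller that compresses writes (as SandForce does) only has to commit whatever is left after compression, so data that won't compress has to be written in full. The short sketch below simply contrasts how a repetitive buffer and a random buffer compress with zlib; it's an illustration of the data, not of the controller's actual compression engine.

    import os, zlib

    block = 128 * 1024
    samples = {
        "repetitive": b"\x00" * block,     # highly compressible filler
        "random": os.urandom(block),       # incompressible, like AS-SSD's transfers
    }
    for name, buf in samples.items():
        ratio = len(zlib.compress(buf)) / len(buf)
        print(f"{name}: compresses to {ratio:.0%} of original size")
    # The repetitive buffer shrinks to well under 1% of its size; the random
    # buffer stays at ~100%, so every byte of it has to reach the NAND.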

Incompressible Sequential Read Performance - AS-SSD

Incompressible Sequential Write Performance - AS-SSD

 

Comments

  • ervinshiznit - Thursday, July 25, 2013 - link

    Typo? On the TurboWrite page you say "For most light use cases I can see TurboWrite being a great way to deliver more of an MLC experience but on a TLC drive."
    It should be "deliver more of an SLC experience but on a TLC drive."
  • ciri - Sunday, July 28, 2013 - link

    SLC>MLC>TLC
  • Guspaz - Thursday, July 25, 2013 - link

    The fact that RAPID sees any performance improvement at all illustrates to me a failure of the operating system's disk caching subsystem. That's all RAPID really is, after all: a replacement for the Windows disk cache.

    I'd be curious to see the performance results of RAPID compared to the disk caching subsystems on other platforms, such as Linux and ZFS (which even on Linux has its own cache, the "ARC"). Are the large improvements because Windows disk caching is particularly bad, or because RAPID is a better implementation than anybody else's?
  • themelon - Thursday, July 25, 2013 - link

    Windows is absolutely horrible at filesystem caching and I don't think it does any sort of block caching. It seems to use more of a FIFO algorithm that has no sequential write bypass no matter what you do. ZFS and the two block device caches that were recently integrated into the Linux kernel, bcache and dm-cache, use more of an LRU method. All of them have at least basic sequential bypass detection as well. bcache in particular is tunable to your load in almost all aspects of performance. Of course these only cache at the block level and currently have no filesystem-specific knowledge.

    There is some interesting work going on to track hot spots that will eventually allow for preemptive cache warming and/or hot relocation. Right now it is BTRFS-specific, but it is being integrated below the filesystem layer so any filesystem will eventually be able to take advantage of it.

    ZFS on Linux is a waste of time in my opinion. ZFS's L2ARC and SLOG are great but limited by what I feel are architectural flaws in ZFS itself. I used to love ZFS, but the Linux kernel block stack has caught up to it in features and still offers all of the flexibility that it always has.
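
    For anyone unfamiliar with the distinction being drawn here, the toy sketch below shows an LRU cache with a crude sequential-bypass check; it is purely illustrative of the policy (the class name and threshold are made up) and bears no relation to the actual bcache, dm-cache or Windows implementations.

        from collections import OrderedDict

        class LRUBlockCache:
            """Toy LRU block cache that skips caching long sequential runs."""
            def __init__(self, capacity, seq_bypass_len=8):
                self.cache = OrderedDict()     # block number -> data, in LRU order
                self.capacity = capacity
                self.seq_bypass_len = seq_bypass_len
                self.last_block = None
                self.run_length = 0

            def _sequential(self, block):
                # Count consecutive block numbers; a long run looks like streaming IO.
                if self.last_block is not None and block == self.last_block + 1:
                    self.run_length += 1
                else:
                    self.run_length = 1
                self.last_block = block
                return self.run_length >= self.seq_bypass_len

            def read(self, block, read_backend):
                if block in self.cache:            # hit: refresh recency
                    self.cache.move_to_end(block)
                    return self.cache[block]
                data = read_backend(block)         # miss: fetch from the device
                if self._sequential(block):        # streaming read: don't pollute the cache
                    return data
                self.cache[block] = data
                if len(self.cache) > self.capacity:
                    self.cache.popitem(last=False) # evict the least recently used block
                return data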
  • aicom - Friday, July 26, 2013 - link

    Windows' cache system is better than you give it credit for. It does support sequential bypass (see the FILE_FLAG_SEQUENTIAL_SCAN flag). It works with filesystem drivers through the Cc* APIs in the kernel. It also supports caching files over a network, even with other clients modifying the files. It does standard read-ahead and write-behind and is supplemented by an adaptive prefetcher (SuperFetch).

    The reason we're seeing such huge gains is because the programs being tested explicitly ask NOT to be cached. The whole point is to test the drive, so they pass FILE_FLAG_NO_BUFFERING to disable caching on the files being accessed.
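
    For illustration, this is roughly what that opt-out looks like from Python via ctypes (Windows-only; the path is hypothetical and error handling is omitted):

        import ctypes
        from ctypes import wintypes

        GENERIC_READ              = 0x80000000
        FILE_SHARE_READ           = 0x00000001
        OPEN_EXISTING             = 3
        FILE_FLAG_NO_BUFFERING    = 0x20000000   # bypass the Windows cache manager
        FILE_FLAG_SEQUENTIAL_SCAN = 0x08000000   # (contrast: hint for aggressive read-ahead)

        kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
        kernel32.CreateFileW.argtypes = [wintypes.LPCWSTR, wintypes.DWORD, wintypes.DWORD,
                                         wintypes.LPVOID, wintypes.DWORD, wintypes.DWORD,
                                         wintypes.HANDLE]
        kernel32.CreateFileW.restype = wintypes.HANDLE

        handle = kernel32.CreateFileW(r"D:\scratch\testfile.bin", GENERIC_READ,
                                      FILE_SHARE_READ, None, OPEN_EXISTING,
                                      FILE_FLAG_NO_BUFFERING, None)
        # With FILE_FLAG_NO_BUFFERING every read must use sector-aligned offsets,
        # lengths and buffers, and the cache manager never sees the IO.
        kernel32.CloseHandle(handle)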
  • MrSpadge - Saturday, July 27, 2013 - link

    Excellent post!
  • Timur Born - Sunday, July 28, 2013 - link

    The question still arises: why does the AnandTech Storage Bench benefit from RAPID? Is it because ASB also asks for the Windows cache to be bypassed, is it because the Windows cache flushes part of its pages every second, or does RAPID communicate with the drive (firmware) at a more fundamental level that allows further optimizations?
  • watersb - Friday, July 26, 2013 - link

    Excellent points. I stick with ZFS because I trust it (after many hardware failures but no data loss) and because it is cross-platform.

    Mac HFS does "hot relocation", I believe. And NTFS has always tried to keep hot files in the middle of the disk in order to reduce hard disk seek times. So maybe I don't understand what is meant by hot relocation.
  • piroroadkill - Thursday, July 25, 2013 - link

    I agree. I'm pretty sure Windows' own disk caching is terrible. It's pretty poor even on the server side. They really need to work on that shit.
  • tincmulc - Thursday, July 25, 2013 - link

    How is RAPID any better than SuperCache or FancyCache? Not only do they do the same thing, but they can also be configured to use more RAM or OS-invisible memory (a 32-bit OS with more than 3GB of RAM), and they work with any drive, even HDDs.
