Random Read/Write Speed

The four corners of SSD performance are random read, random write, sequential read, and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger; hence the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access you'd see on an OS drive (even this is more stressful than a normal desktop user would generate). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo-randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce-based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
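To make the access pattern concrete, here's a minimal Python sketch of an equivalent workload. It targets a preallocated test file (testfile.bin, a hypothetical stand-in for the drive) rather than a raw device, and it omits the O_DIRECT handling a real benchmark needs to bypass the OS page cache, so treat it as an illustration of the test parameters rather than a substitute for Iometer.

```python
# Sketch of the 4KB random-write workload described above: 4KB writes at
# random 4KB-aligned offsets over an 8GB span, three concurrent workers
# (~QD3), for 3 minutes, reporting average MB/s. File name and structure
# are illustrative, not AnandTech's actual Iometer configuration.
import os
import random
import threading
import time

PATH = "testfile.bin"   # hypothetical test file standing in for the drive
SPAN = 8 * 1024**3      # 8GB LBA span
BLOCK = 4096            # 4KB transfer size
WORKERS = 3             # three concurrent IOs
DURATION = 180          # 3 minutes

def worker(fd, deadline, counts, idx):
    buf = os.urandom(BLOCK)            # fully random (incompressible) payload
    rng = random.Random(idx)
    while time.monotonic() < deadline:
        offset = rng.randrange(SPAN // BLOCK) * BLOCK   # 4KB-aligned offset
        os.pwrite(fd, buf, offset)
        counts[idx] += 1

fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, SPAN)                 # preallocate the 8GB span
deadline = time.monotonic() + DURATION
counts = [0] * WORKERS
threads = [threading.Thread(target=worker, args=(fd, deadline, counts, i))
           for i in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
os.close(fd)

print(f"avg: {sum(counts) * BLOCK / DURATION / 1024**2:.1f} MB/s")
```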

Desktop Iometer - 4KB Random Read

Random read speed is very close to that of the 840 Pro. The EVO doesn't look like a mainstream drive here at all.

Desktop Iometer - 4KB Random Write

Even peak random write performance is dangerously close to that of the 840 Pro. Only the 120GB drive falls behind the pack. I should add that I'll have to redo the way we test 4KB random writes given how optimized current firmware and architectures have become. The data here is interesting, but honestly the performance consistency data from earlier is a better look at what happens to 4KB random write performance over time.

Desktop Iometer - 4KB Random Write (QD=32)

The relatively small difference between QD3 and QD32 random write performance shows just how good a job Samsung's controller does at write combining. At high queue depths the EVO is just as fast as the 840 Pro here. So much for TLC being slow.
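For intuition on what write combining buys, here's a toy Python illustration (the page size and the one-program-per-IO naive case are made up for illustration, not Samsung's actual geometry): when the controller coalesces queued 4KB logical writes into full NAND pages, physical program operations track total bytes rather than host IO count.

```python
# Toy model: coalescing queued 4KB host writes into hypothetical 16KB NAND
# pages. With combining, program operations scale with total bytes rather
# than with the number of host IOs.
NAND_PAGE = 16 * 1024   # hypothetical NAND page size
IO_SIZE = 4 * 1024      # 4KB host writes

def page_programs(queued_ios, combine):
    if combine:
        total = queued_ios * IO_SIZE
        return -(-total // NAND_PAGE)   # ceil: pack writes into full pages
    return queued_ios                   # naive: one program per host write

for qd in (3, 32):
    print(f"QD{qd}: {page_programs(qd, True)} programs combined, "
          f"{page_programs(qd, False)} naive")
```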

Sequential Read/Write Speed

To measure sequential performance, I run a one-minute 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
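A matching sketch of the sequential pass, under the same caveats as the random-write sketch above (test file instead of a raw device, page cache not bypassed):

```python
# Sketch of the 128KB QD1 sequential-read pass: stream through the span
# front to back in 128KB chunks for one minute and report average MB/s.
import os
import time

PATH = "testfile.bin"   # hypothetical target, same file as the write sketch
CHUNK = 128 * 1024      # 128KB transfer size
DURATION = 60           # 1 minute

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size
deadline = time.monotonic() + DURATION
offset = done = 0
while time.monotonic() < deadline:
    done += len(os.pread(fd, CHUNK, offset))
    offset += CHUNK
    if offset >= size:
        offset = 0      # wrap around to keep streaming for the full minute
os.close(fd)

print(f"avg: {done / DURATION / 1024**2:.1f} MB/s")
```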

Sequential read and write performance, even at low queue depths, is very good on the EVO. You may notice lower M500 numbers here than elsewhere; the explanation is pretty simple. We run all of our read tests after valid data has been written to the drive. Unfortunately, the M500 aggressively garbage collects data in the background, so even though we fill the drive and then immediately start reading back, the M500 is still busy cleaning up, which reduces overall performance here.

Desktop Iometer - 128KB Sequential Read

Desktop Iometer - 128KB Sequential Write

AS-SSD Incompressible Sequential Read/Write Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce-based controllers.
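To see what "incompressible" means in practice, here's a small Python demonstration using zlib as a rough stand-in for the transparent compression a SandForce controller performs in hardware (buffer names and sizes are arbitrary):

```python
# A compressing controller writes less NAND data for compressible payloads;
# random data gives it nothing to squeeze, so write speed falls back to raw
# NAND throughput. zlib here is only an analogy for the hardware compressor.
import os
import zlib

buffers = {
    "repeating": b"A" * 1_048_576,     # trivially compressible 1MB buffer
    "random": os.urandom(1_048_576),   # incompressible 1MB buffer
}

for name, buf in buffers.items():
    ratio = len(zlib.compress(buf)) / len(buf)
    print(f"{name}: compresses to {ratio:.1%} of original size")
```

The random buffer stays essentially full size, which is exactly the case AS-SSD stresses; drives that don't rely on compression post similar numbers with either data type.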

Incompressible Sequential Read Performance - AS-SSD

Incompressible Sequential Write Performance - AS-SSD

 

Comments

  • yut345 - Thursday, December 12, 2013

    That would depend on how large your files are and how much space of the drive you will be using up for storage. I would fill up a 250GB drive almost immediately and certainly slow it down, even though I store most of my files on an external drive. For me, a 1TB would perform better.
  • Romberry - Saturday, July 27, 2013

    Well...that sort of depends, doesn't it? The first 2.5-3GB or so go at close to 400MB/s before depleting the TurboWrite buffer and dropping down to around 110-120MB/s, and 2-3GB covers a lot of average files. Even a relatively small video fits. And as soon as the TurboWrite cache is flushed, you can burst again. All in all, long (very large file) steady-state transfer on the 120GB version is average, but more typical small and mid file sizes (below the 3GB TurboWrite limit) relatively scream; a rough timing sketch after the comments puts numbers on this. Seems to me that real-world performance is going to feel a lot quicker than those large-file steady-state numbers might suggest. The 120GB version won't be the first pick for ginormous video and graphics file work, but outside of that...3GB will fit a LOT of stuff.
  • MrSpadge - Saturday, July 27, 2013

    Agreed! And if you're blowing past the 3GB cache you'll need some other SSD or RAID to actually supply your data any faster than the 120GB 840 EVO can write. Not even Gigabit LAN can do this.
  • nathanddrews - Thursday, July 25, 2013

    RAPID seems intended for devices with built-in UPS - notebooks and tablets. Likewise, I wouldn't use it on my desktop without a UPS. Seems wicked cool, though.
  • ItsMrNick - Thursday, July 25, 2013

    I don't know if I'm as extreme as you. The fact is your OS already keeps some unflushed data in RAM anyway - oftentimes "some" means "a lot". If RAPID obeys flush commands from the OS (and from Anand's article, it seems that it does) then the chances of data corruption should be minimal - and no different than the chances of data corruption without RAPID.
  • Sivar - Thursday, July 25, 2013

    You can always mount your drives in synchronous mode and avoid any caching of data in RAM.
    I wouldn't, though. :)
  • nathanddrews - Thursday, July 25, 2013

    soo00 XTr3M3!!1 Sorry, I just found that humorous. I've actually been meaning to get a UPS for my main rig anyway, it never hurts.
  • MrSpadge - Saturday, July 27, 2013

    It does hurt your purse, though.
  • sheh - Thursday, July 25, 2013

    I wonder how it's any different from the OS caching. Seemingly, that's something that the OS should do the best it can, regardless of which drive it writes to, and with configurability to let the user choose the right balance between quick/unreliable and slower/reliable.
  • Death666Angel - Friday, July 26, 2013

    That was my thought as well. The OS should know what files it uses most and what to cache in RAM. Many people always try to have the most free RAM possible; I'd rather have most of my RAM used as a cache.
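To put numbers on Romberry's burst-versus-steady-state argument above, here is a rough timing model in Python using the commenters' approximate figures (~3GB TurboWrite buffer at ~400MB/s, ~115MB/s steady-state on the 120GB EVO); these constants are forum estimates, not measured values.

```python
# Rough model: a large write lands in the ~3GB TurboWrite SLC buffer at the
# fast rate, and any remainder is written at the slower TLC steady-state
# rate. All constants are the commenters' approximations.
BUFFER_GB = 3     # approximate TurboWrite buffer size, in GB
FAST = 400        # MB/s into the buffer
SLOW = 115        # MB/s steady-state beyond the buffer

def write_seconds(size_gb):
    fast = min(size_gb, BUFFER_GB) * 1024 / FAST
    slow = max(size_gb - BUFFER_GB, 0) * 1024 / SLOW
    return fast + slow

for size_gb in (1, 3, 10):
    t = write_seconds(size_gb)
    print(f"{size_gb}GB file: {t:5.1f}s "
          f"({size_gb * 1024 / t:.0f} MB/s effective)")
```

Under this model a 1GB or 3GB file goes at the full ~400MB/s burst rate, while a 10GB file averages out to roughly 146MB/s, which is the gap between "feels fast" everyday writes and the large-file steady-state numbers.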
