Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger, hence the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo-randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce-based drives in these tests. The average performance of SF drives will likely fall somewhere between the two values you see in the graphs for each drive. For an understanding of why this matters, read our original SandForce article.
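For readers who want a rough sense of what this workload looks like outside of Iometer, here is a minimal Python sketch (an illustration, not our actual Iometer configuration). It issues 4KB writes at random 4K-aligned offsets within an 8GB span and reports average MB/s. Note that it runs a single synchronous stream with buffered I/O, whereas the Iometer run keeps 3 IOs outstanding and bypasses the page cache, so absolute numbers will differ.

    import os, random, time

    def random_write_test(path, span_bytes=8 << 30, block=4096, seconds=180):
        """Issue 4KB writes at random 4K-aligned offsets within an 8GB span."""
        buf = os.urandom(block)                    # pseudo-random (incompressible) data
        blocks = span_bytes // block               # number of 4KB slots in the span
        fd = os.open(path, os.O_WRONLY | os.O_CREAT)
        written = 0
        start = time.monotonic()
        while time.monotonic() - start < seconds:
            offset = random.randrange(blocks) * block   # 4K-aligned random offset
            os.pwrite(fd, buf, offset)
            written += block
        os.fsync(fd)
        os.close(fd)
        return written / (time.monotonic() - start) / 1e6   # average MB/s

    # Example (hypothetical path): random_write_test("/mnt/testdrive/iometer_file")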

Desktop Iometer - 4KB Random Read (4K Aligned)

Random read performance starts out quite nicely. There's a good improvement over the old m4, and the M500 lineup finds itself hot on the heels of the Samsung SSD 840. There's not much variance between the various capacities here.

Desktop Iometer - 4KB Random Write (4K Aligned) - 8GB LBA Space

It's with the random write performance that we get some insight into how write parallelism works on the M500. The 480GB and 960GB drives deliver roughly the same performance, so all you really need to saturate the 9187 is 32 NAND die. The 240GB sees a slight drop in performance, but the 120GB version with only 8 NAND die sees the biggest performance drop. This is exactly why we don't see a 64GB M500 at launch using 128Gbit die.
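To put the die math in concrete terms, the short sketch below works out approximate die counts from the 128Gbit (16GB) die size. The raw capacities used here (128/256/512/1024GB) are round-number assumptions for illustration; actual raw capacity also covers spare area.

    # Hypothetical die-count arithmetic for the M500 lineup, assuming
    # 128Gbit (16GB) die and round raw capacities of 128/256/512/1024GB.
    DIE_GB = 128 / 8   # 16GB of raw flash per 128Gbit die

    raw_capacities = {"120GB": 128, "240GB": 256, "480GB": 512, "960GB": 1024}
    for model, raw_gb in raw_capacities.items():
        print(model, "->", int(raw_gb / DIE_GB), "NAND die")

    # 120GB -> 8, 240GB -> 16, 480GB -> 32, 960GB -> 64. With roughly 32 die
    # needed to saturate the controller, only the two largest capacities have
    # enough parallelism, and a 64GB model would be down to just 4 die.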

Desktop Iometer - 4KB Random Write (8GB LBA Space QD=32)

Ramping up queue depth causes some extra scaling on the 32/64 die drives, but the 240GB and 120GB parts are already at their limits. There physically aren't enough NAND die to see any tangible gains in performance between high and low queue depths on the smaller drives. This is a problem that everyone will have to deal with eventually; the M500 just encounters it first.

Sequential Read/Write Speed

To measure sequential performance I ran a one-minute 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
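As with the random write test, here is a hedged Python sketch of what this sequential workload looks like: back-to-back 128KB reads at a queue depth of 1 over a pre-filled test file. It is an illustration rather than our methodology; buffered reads of a file that fits in RAM will mostly measure the page cache, so a real run needs direct I/O or a span larger than system memory.

    import os, time

    def sequential_read_test(path, block=128 * 1024, seconds=60):
        """Read 128KB blocks back-to-back at QD=1 and report average MB/s."""
        fd = os.open(path, os.O_RDONLY)
        size = os.lseek(fd, 0, os.SEEK_END)     # span = whole file; must be pre-filled
        offset = 0
        read_bytes = 0
        start = time.monotonic()
        while time.monotonic() - start < seconds:
            data = os.pread(fd, block, offset)
            read_bytes += len(data)
            offset = (offset + block) % size    # wrap around at the end of the file
        os.close(fd)
        return read_bytes / (time.monotonic() - start) / 1e6

    # Example (hypothetical path): sequential_read_test("/mnt/testdrive/iometer_file")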

Desktop Iometer - 128KB Sequential Read (4K Aligned)

Low queue depth sequential read performance looks OK, but the M500 is definitely not class-leading here.

Desktop Iometer - 128KB Sequential Write (4K Aligned)

It's pretty much the same story with sequential writes, although once again the 120GB M500 shows its limits very openly. The 840 and M500 deliver similar performance at the same capacity point, but the M500 is significantly behind the higher-end offerings, as you'd expect.

AS-SSD Incompressible Sequential Read/Write Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce-based controllers.

Incompressible Sequential Read Performance - AS-SSD

Ramping up queue depth, we see a substantial increase in sequential read performance, but there's still a big delta between the M500 and all of the earlier drives.

Incompressible Sequential Write Performance - AS-SSD

The high-queue depth sequential write story is a bit better for the M500. It's tangibly quicker than the 840 here.

Comments

  • NCM - Tuesday, April 9, 2013 - link

    TRIM support is built into OS X, but disabled by default for non-Apple drives. As others have pointed out, the freeware utility 'TRIM Enabler' easily takes care of that. The only other thing to know is that some OS X updates may reset TRIM to 'off', so it's worth checking after any update and re-enabling it if necessary.

    I take care of an office full of Macs, including Mac Pros, iMacs, Minis and MacBook Pros, the majority of which have SSDs that I installed. I'm typing this on my 2010 MBP with a 512GB Plextor M3P.

    With the price of SSDs now this is a very worthwhile upgrade, and particularly one that offers a new lease on life for older computers.
  • Bkord123 - Tuesday, April 9, 2013 - link

    All of these comments are going to make my wife mad when I buy yet another gadget! I'm not as worried now about the TRIM issue. Btw, does this site have a page that ranks hard drives? I did look and didn't see anything here.
  • jamyryals - Tuesday, April 9, 2013 - link

    Anand has a Bench utility you can use to compare devices. Here are two popular, reliable drives:
    http://www.anandtech.com/bench/Product/792?vs=743
  • glugglug - Tuesday, April 9, 2013 - link

    With most SSDs no longer using 4KB pages, does it make sense to have 8KB and 16KB random write tests?

    Also, does application performance improve if the drives are formatted with an 8KB or 16KB cluster size?
  • Kristian Vättö - Tuesday, April 9, 2013 - link

    Most real world IOs are 4KB.
  • glugglug - Tuesday, April 9, 2013 - link

    Not true, even with the default 4KB cluster size the drives get formatted with. If you format with 16KB clusters, *none* of the IOs will be 4KB.
  • Kristian Vättö - Tuesday, April 9, 2013 - link

    Based on the workloads we've traced (using default cluster size), 4KB is the most common IO size, although it obviously varies and some workloads may consist of larger IO sizes. Do you have something that backs up your statement? It would be interesting to see that.
  • glugglug - Tuesday, April 9, 2013 - link

    According to the table in the article, for the Anandtech 2011 Heavy Workload, 28% of the IOs are 4KB, not "most".

    I am thinking that what must happen for a 4KB IO on a drive with 16KB pages is that it has to read the current contents of the 16KB page so that the 4KB being rewritten can be merged into it, then write out a full 16KB page, so each write really ends up being a read + write operation, not just the write by itself.

    Worse, when TRIM is used, if the TRIM operation covers only 4KB of the 16KB page, the page can't really be trimmed, because the other 12KB might still be in use and the drive firmware can't know for certain. So having a cluster size that matches or exceeds the drive's page size might result in better steady-state performance over time, because TRIM doesn't lose track of partial pages.
  • Tjalve - Wednesday, April 10, 2013 - link

    I think there is some caching involved when dealing with writes that are smaller than the page size of the NAND. I would guess that the M500 caches in DRAM. There are other vendors that use the onboard flash for caching, like SanDisk nCache for example.
  • glugglug - Wednesday, April 10, 2013 - link

    For some SSDs that is definitely the case. I'm pretty sure SandForce needed to do it, for example, both because the compression makes the size of the flash writes unpredictable, and because if you look at the cluster sizes the chipset supports to go with various obscure controllers it's kind of nuts.

    I don't think that is the case here though, because if you multiply the marketed 4KB random write numbers by 4KB, you pretty much get exactly the sequential write speed, and write-back caching to deal with the smaller writes would result in much better sequential performance.
