Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger, and thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a typical desktop user would generate). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo-randomly generated data for each write and fully random data to show you both the maximum and minimum performance offered by SandForce-based drives in these tests. The average performance of SF drives will likely fall somewhere between the two values you see for each drive in the graphs. For an understanding of why this matters, read our original SandForce article.

Iometer - 4KB Random Write, 8GB LBA Space, QD=3
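For readers who want to get a feel for this access pattern, here is a minimal Python sketch. To be clear, this is not Iometer and not our methodology verbatim: the test file path, the buffered writes, and the thread-per-IO approximation of queue depth are all simplifying assumptions (Iometer issues unbuffered IO directly against the volume).

    # Rough sketch of the 4KB random-write pattern described above - illustrative only.
    import os, random, threading, time

    PATH = "testfile.bin"       # hypothetical pre-created test file, >= 8GB
    SPAN = 8 * 1024**3          # 8GB LBA space
    BLOCK = 4 * 1024            # 4KB transfers
    QD = 3                      # three concurrent IOs
    DURATION = 180              # 3 minutes
    buf = os.urandom(BLOCK)     # incompressible ("fully random") data
    done = [0] * QD

    def worker(i):
        fd = os.open(PATH, os.O_WRONLY)
        deadline = time.time() + DURATION
        while time.time() < deadline:
            # write one 4KB block at a random 4KB-aligned offset in the 8GB span
            os.pwrite(fd, buf, random.randrange(SPAN // BLOCK) * BLOCK)
            done[i] += BLOCK
        os.close(fd)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(QD)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("average: %.1f MB/s" % (sum(done) / DURATION / 1e6))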

Here you can see the cap on 4KB random writes alive and well. As I've mentioned in previous articles, current drives are finally fast enough at 4KB random writes for today's desktop workloads - so despite the cap, you won't see any real-world impact of it in our tests.

Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 - 5, higher depths are possible in heavy I/O (and multi-user) workloads:

Iometer - 4KB Random Write, 8GB LBA Space, QD=32
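Relative to the sketch above, the only change for this chart is the queue depth. Note that with plain threads this merely approximates 32 outstanding IOs, whereas Iometer keeps exactly 32 in flight:

    QD = 32                     # thirty-two concurrent IOs instead of three
    done = [0] * QD
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(QD)]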

Iometer - 4KB Random Read, QD=3

Sequential Read/Write Speed

To measure sequential performance I ran a one-minute 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.

Iometer - 128KB Sequential Write

Iometer - 128KB Sequential Read
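Again as a rough sketch rather than our actual tooling: the sequential version replaces the random offset with a steadily advancing one at a queue depth of 1. The same placeholder-path and buffering caveats from the random-write sketch apply.

    # Rough sketch of the 128KB sequential-write pattern - illustrative only.
    import os, time

    PATH = "testfile.bin"            # hypothetical test file spanning the drive
    BLOCK = 128 * 1024               # 128KB transfers
    DURATION = 60                    # 1 minute
    buf = os.urandom(BLOCK)

    fd = os.open(PATH, os.O_WRONLY)
    span = (os.path.getsize(PATH) // BLOCK) * BLOCK   # file assumed >> one block
    offset = done = 0
    deadline = time.time() + DURATION
    while time.time() < deadline:    # queue depth of 1: one IO outstanding
        os.pwrite(fd, buf, offset)
        done += BLOCK
        offset = (offset + BLOCK) % span   # advance sequentially, wrap at the end
    os.close(fd)
    print("average: %.1f MB/s" % (done / DURATION / 1e6))

The read test is the same loop with os.O_RDONLY and os.pread(fd, BLOCK, offset) in place of the write call.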


44 Comments


  • Anand Lal Shimpi - Thursday, May 5, 2011 - link

    Those drivers were only used on the X58 platform; I use Intel's RST10 on the SNB platform for all of the newer tests/results. :)

    Take care,
    Anand
  • iwod - Thursday, May 5, 2011 - link

    I've lost count of how many times I've posted this in this series. Anyway, the people who worship 4K random read/write have now seen the truth: sequential read/write is much more important than you think.

    Since the test is essentially two identical pieces of hardware, one with a random-write cap, the results show no real-world advantage for the uncapped drive. We need more sequential performance!

    Interestingly, we aren't limited by the controller or the NAND itself, but by the interface: SATA 6Gbps. We need to start using PCI-Express 4x slots, as Intel has shown in the leaked roadmap. Going to PCIe 3.0 would give us ~4GB/s from a 4x slot, which leaves plenty of room for improvement. ONFI 3.0 next year should let us reach 2GB/s+ sequential read/write easily.
  • krumme - Thursday, May 5, 2011 - link

    I think Anand listened too much to Intel's voice in this SSD story.
    The 4K random madness was Intel G2 business,
    and it all went in the wrong direction.
    Anand was - and is - the SSD review site.
  • Anand Lal Shimpi - Thursday, May 5, 2011 - link

    The fact of the matter is that both random and sequential performance is important. It's Amdahl's law at its best - if you simply increase the sequential read/write speed of these drives without touching random performance, you'll eventually be limited by random performance. Today I don't believe we are limited by random performance but it's still something that has to keep improving in order for us to continue to see overall gains across the board.

    Take care,
    Anand
  • Hrel - Thursday, May 5, 2011 - link

    Damn! 200 dollars too expensive for the 120GB. Stopped reading.
  • snuuggles - Thursday, May 5, 2011 - link

    Good lord, every single article that discusses OWC seems to include some sort of odd-ball tangent or half-baked excuse for some crazy s**t they are pulling.

    Hey, I know they have the fastest stuff around, but there's just something so lame about these guys, I have to say on principle: "never, ever, will I buy from OWC"
  • nish0323 - Thursday, August 11, 2011 - link

    What crazy s**t are they pulling? I've got 5 drives from them, all SSDs, and all perform great. The 6G ones have a 5-year warranty, 2 years longer than any other SSD manufacturer offers right now.
  • neotiger - Thursday, May 5, 2011 - link

    A lot of people and hosting companies use consumer SSDs for server workloads such as MySQL and Solr.

    Can you benchmark these SSDs' performance on server workloads?
  • Anand Lal Shimpi - Thursday, May 5, 2011 - link

    It's on our roadmap to do just that... :)

    Take care,
    Anand
  • rasmussb - Saturday, May 7, 2011 - link

    Perhaps you have answered this elsewhere, or it will be answered in your future tests. If so, please forgive me.

    As you point out, the drive's performance is based in large part on the compressibility of the source data: relatively incompressible data results in slower speeds. What happens when you put a pair (or more) of these in a RAID 0 array? Since units of data alternate between drives, how does the SF compression work then? Does previously compressible data get less compressible because any given drive is only seeing, at best (in a 2-drive array), half of the original data?

    Conversely, does incompressible data happen to get more compressible when you're splitting it amongst two or more drives in an array?

    Server workload on a single drive versus, say, a RAID 5 array would be an interesting comparison. I'm sure your tech-savvy minds are already all over this in your roadmap; I'm just asking in case it isn't.
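Anand's Amdahl's law point a few comments up is easy to make concrete. The numbers below are purely illustrative assumptions (a workload spending 70% of its time on sequential transfers and 30% on random IOs), not measurements from any drive:

    # Amdahl's law applied to a hypothetical 70/30 sequential/random time split.
    seq_time, rand_time = 0.70, 0.30    # assumed fractions, purely illustrative
    for speedup in (2, 4, 8, 1000):
        total = seq_time / speedup + rand_time
        print("sequential %4dx faster -> workload %.2fx faster overall" % (speedup, 1 / total))
    # Even an infinitely fast sequential path caps the overall gain at
    # 1 / 0.30 ~= 3.3x, which is why random performance must keep improving too.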
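On rasmussb's RAID 0 question directly above, a quick experiment gives a feel for how striping interacts with compression. This uses zlib purely as a stand-in - SandForce's engine is proprietary and works on its own internal block sizes, and the stripe size here is an assumption - so treat the ratios as illustrative only:

    # Does splitting data across two striped "drives" hurt compressibility?
    import zlib

    data = b"GET /index.html HTTP/1.1 200 ...\n" * 300000  # compressible sample
    STRIPE = 64 * 1024                                     # assumed stripe size
    chunks = [data[i:i + STRIPE] for i in range(0, len(data), STRIPE)]
    drive0 = b"".join(chunks[0::2])   # RAID 0: even stripes to drive 0
    drive1 = b"".join(chunks[1::2])   # odd stripes to drive 1

    whole = len(zlib.compress(data)) / len(data)
    split = (len(zlib.compress(drive0)) + len(zlib.compress(drive1))) / len(data)
    print("compressed ratio - whole: %.3f  striped: %.3f" % (whole, split))

For redundant data each drive's half compresses about as well as the whole, since every stripe still carries plenty of repeating context; truly incompressible data stays incompressible no matter how it is split.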
