Random Read/Write Speed

The four corners of SSD performance are random read, random write, sequential read, and sequential write speed. Random accesses are generally small, while sequential accesses tend to be larger; hence the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo-randomly generated data for each write and fully random data to show you the maximum and minimum performance offered by SandForce-based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
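
If you want to picture the workload, here's a rough Python sketch of what the test does. This is illustrative only, not our methodology: the file name and parameters are placeholders, and Iometer actually issues unbuffered IOs against the raw drive rather than writing to a file through the OS.

```python
# A minimal sketch of the 4KB random-write test described above.
# Approximation, not Iometer: a regular file stands in for the raw drive,
# and O_SYNC (POSIX) stands in for unbuffered/direct IO.
import os, random, threading, time

PATH        = "testfile.bin"   # hypothetical target; Iometer hits the raw drive
SPAN        = 8 * 1024**3      # 8GB LBA space
BLOCK       = 4 * 1024         # 4KB transfers
QUEUE_DEPTH = 3                # 3 concurrent IOs, as in the article
DURATION    = 180              # 3 minutes

def worker(fd, deadline, counts):
    buf = os.urandom(BLOCK)    # random (incompressible) data
    done = 0
    while time.time() < deadline:
        off = random.randrange(0, SPAN // BLOCK) * BLOCK  # 4KB-aligned offset
        os.pwrite(fd, buf, off)
        done += 1
    counts.append(done)

fd = os.open(PATH, os.O_RDWR | os.O_CREAT | os.O_SYNC)
os.ftruncate(fd, SPAN)         # preallocate the 8GB span
deadline = time.time() + DURATION
counts = []
threads = [threading.Thread(target=worker, args=(fd, deadline, counts))
           for _ in range(QUEUE_DEPTH)]
for t in threads: t.start()
for t in threads: t.join()
os.close(fd)

print(f"average: {sum(counts) * BLOCK / DURATION / 1024**2:.1f} MB/s")
```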

Iometer - 4KB Random Write, 8GB LBA Space, QD=3

Peak performance on the 120GB Vertex 3 is just as impressive as on the 240GB pre-production sample and the m4 we just tested. Write incompressible data and you'll see the downside of having fewer active die: the 120GB drive delivers 84% of the performance of the 240GB drive. In 3Gbps mode the 240GB and 120GB drives perform identically.

Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 - 5, higher depths are possible in heavy I/O (and multi-user) workloads:
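
In terms of the sketch above, the only thing that changes is the worker count (again, purely illustrative; at this depth a real tool issues overlapped/async IOs rather than spawning threads):

```python
# Same hypothetical sketch as above, with 32 outstanding IOs instead of 3.
# 32 Python threads are a crude stand-in for true async IO at high depth.
QUEUE_DEPTH = 32
```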

Iometer - 4KB Random Write, 8GB LBA Space, QD=32

At high queue depths the gap between the 120 and 240GB Vertex 3s grows a little bit when we're looking at incompressible data.

Iometer - 4KB Random Read, QD=3

Random read performance is what suffered the most with the transition from 240GB to 120GB. The 120GB Vertex 3 is slower than the 120GB Corsair Force F120 (SF-1200, similar to the Vertex 2) in our random read test. The Vertex 3 is actually about the same speed as the old Indilinx based Nova V128 here. I'm curious to see how this plays out in our real world tests.

Sequential Read/Write Speed

To measure sequential performance I run a one-minute 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
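
In the spirit of the random-write sketch earlier, the sequential test looks roughly like this in Python (same caveats: a buffered file stands in for the raw drive, and all names and sizes are placeholders):

```python
# A minimal sketch of the 128KB sequential write test.
import os, time

PATH     = "testfile.bin"     # hypothetical target
BLOCK    = 128 * 1024         # 128KB transfers
DURATION = 60                 # 1 minute

fd = os.open(PATH, os.O_RDWR | os.O_CREAT | os.O_SYNC)
buf = os.urandom(BLOCK)
off, written = 0, 0
deadline = time.time() + DURATION
while time.time() < deadline: # queue depth of 1: one IO in flight at a time
    os.pwrite(fd, buf, off)
    off += BLOCK              # strictly sequential offsets
    written += BLOCK
os.close(fd)
print(f"average: {written / DURATION / 1024**2:.1f} MB/s")
```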

Iometer - 128KB Sequential Write

Highly compressible sequential write speed is identical to the 240GB drive, but use incompressible data and the picture changes dramatically. The 120GB drive has far fewer NAND die to write to in parallel and in this case manages just 76% of the performance of the 240GB drive.

Iometer - 128KB Sequential Read

Sequential read speed is also lower than the 240GB drive. Compared to the SF-1200 drives there's still a big improvement as long as you've got a 6Gbps controller.

Comments

  • dagamer34 - Wednesday, April 6, 2011

    Any idea when these are going to ship out into the wild? I've got a 120GB Vertex 2 in my 2011 MacBook Pro that I'd love to stick into my Windows 7 HTPC so it's more responsive.
  • Ethaniel - Wednesday, April 6, 2011

    I just love how Anand puts OCZ on the grill here. It seems they'll just have to step it up. I was expecting some huge numbers coming from the Vertex 3. So far, meh.
  • softdrinkviking - Wednesday, April 6, 2011

    "OCZ insists that there's no difference between the Spectek stuff and standard Micron 25nm NAND"

    Except for the fact that Spectek is 34nm, I'm assuming?
    Surely there must be some significant difference in performance between 25nm and 34nm, right?
  • softdrinkviking - Wednesday, April 6, 2011

    Sorry, I think that wasn't clear.
    What I mean is that it seems like you're saying the difference in process nodes is purely related to capacity, but isn't there some performance advantage to going smaller as well?
  • softdrinkviking - Wednesday, April 6, 2011

    Okay, forget it. I looked back through and found the part where you write about the 25nm NAND being slower.

    That's weird and backwards. I wonder why it gets slower as it gets smaller, when CPUs are supposedly going to get faster as the process gets smaller?

    Are there any semiconductor engineers reading this article who know?
    Are the fabs making some obvious choice that trades away performance at a reduced node for cost benefits, in an attempt to increase die capacities and lower end-user costs?
  • lunan - Thursday, April 7, 2011

    I think it's because the chips get larger but the IO interface to the controller stays the same (the inner RAID). Instead of addressing 4GB of NAND, one die may now consist of 8GB or 16GB of NAND.

    In the case of 8 interfaces:
    4x8GB = 32GB of NAND, but 8x8GB = 64GB of NAND and 8x16GB = 128GB of NAND.

    The smaller the process, the bigger each NAND die, but I think there are still only 8 IO interfaces to the controller, hence the time taken also increases with every shrink.

    A CPU or GPU is quite different because they implement different IO controllers; the base architecture actually changes to accommodate a process shrink.

    They would have to change the base architecture with every NAND generation if they wish to achieve the same throughput, or add a second controller...

    I think... I may not be right >_<
  • lunan - Thursday, April 7, 2011

    For example, the Vertex 3 has 8GB NAND die with 16 connections (8 front and 8 back) to the controller. Now imagine if the NAND die were 16GB or 32GB and the interface were still only 16 connections with one controller.

    Maybe the CPU approach could be applied to this problem: if you want to double performance and storage, you go dual core (one CPU core beside the other)...

    Again... maybe...
  • softdrinkviking - Friday, April 8, 2011

    Thanks for your reply. When I read it, I didn't realize that those figures were referring to the capacity of the die.

    As soon as I re-read it, I had the same reaction about redesigning the controller; it seems the obvious thing to do,
    so I can't believe the controller manufacturers haven't thought of it.
    There must be something holding them back, probably $$.
    The major SSD players all appear to be trying to pull down the cost of drives to encourage widespread adoption.

    Perhaps this is being done at the expense of obvious performance increases?
  • Ammaross - Thursday, April 7, 2011

    I think if you re-reread (yes, twice), you'll note that with the die shrink the page size was upped from 4KB to 8KB. That's twice the space to be programmed or erased per write. This is where the performance disappears, regardless of the number of die in the drive.
  • Anand Lal Shimpi - Wednesday, April 6, 2011

    Sorry I meant Micron 34nm NAND. Corrected :)

    Take care,
    Anand
