Random Read/Write Speed

The four corners of SSD performance are random read, random write, sequential read, and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be much larger; hence the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access you'd see on an OS drive (even this is more stressful than a typical desktop user would generate). I perform three concurrent IOs and run the test for 3 minutes; the results reported are the average MB/s over the entire run. We use both standard pseudo-randomly generated (compressible) data for each write and fully random (incompressible) data to show you both the maximum and minimum performance SandForce based drives offer in these tests. The average performance of SF drives will likely fall somewhere between the two values you see in the graphs for each drive. For an understanding of why this matters, read our original SandForce article.
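If you're curious what this workload actually looks like, the Python sketch below illustrates the access pattern. To be clear, this is an illustration rather than how Iometer works internally: the target file, the use of threads to approximate outstanding IOs, and buffered writes are all simplifying assumptions of mine.

```python
# Rough sketch of a 4KB random-write workload (QD=3) over an 8GB span.
# Assumptions not from the review: the target path, Python threads standing
# in for Iometer's outstanding IOs, and buffered (page-cached) writes.
import os
import random
import threading
import time

PATH = "target.bin"        # hypothetical pre-allocated 8GB test file
SPAN = 8 * 1024**3         # 8GB LBA space
BLOCK = 4096               # 4KB transfers
QUEUE_DEPTH = 3            # three concurrent IOs
DURATION = 180             # 3 minutes

written = 0
lock = threading.Lock()
stop = threading.Event()

def worker():
    global written
    fd = os.open(PATH, os.O_WRONLY)
    # os.urandom() gives fully random (incompressible) data; a constant
    # buffer would model the easily compressible case instead.
    buf = os.urandom(BLOCK)
    while not stop.is_set():
        # Pick a random 4KB-aligned offset inside the 8GB span.
        offset = random.randrange(SPAN // BLOCK) * BLOCK
        os.pwrite(fd, buf, offset)
        with lock:
            written += BLOCK
    os.close(fd)

threads = [threading.Thread(target=worker) for _ in range(QUEUE_DEPTH)]
start = time.time()
for t in threads:
    t.start()
time.sleep(DURATION)
stop.set()
for t in threads:
    t.join()
print(f"average: {written / (time.time() - start) / 1e6:.1f} MB/s")
```

Raising QUEUE_DEPTH is likewise all it takes to mimic the QD=32 version of the test further down the page; an actual measurement still calls for Iometer or a comparable tool run against the raw drive.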

Iometer - 4KB Random Write, 8GB LBA Space, QD=3

Peak performance on the 120GB Vertex 3 is just as impressive as on the 240GB pre-production sample and the m4 we just tested. Write incompressible data and you'll see the downside to having fewer active die: the 120GB drive delivers just 84% of the performance of the 240GB drive. In 3Gbps mode the 240GB and 120GB drives are identical.

Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0-5, higher depths are possible in heavy I/O (and multi-user) workloads:

Iometer - 4KB Random Write, 8GB LBA Space, QD=32

At high queue depths the gap between the 120 and 240GB Vertex 3s grows a little bit when we're looking at incompressible data.

Iometer - 4KB Random Read, QD=3

Random read performance is what suffered the most with the transition from 240GB to 120GB. The 120GB Vertex 3 is slower than the 120GB Corsair Force F120 (SF-1200, similar to the Vertex 2) in our random read test. The Vertex 3 is actually about the same speed as the old Indilinx based Nova V128 here. I'm curious to see how this plays out in our real world tests.

Sequential Read/Write Speed

To measure sequential performance I ran a one-minute 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are the average MB/s over the entire test.
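As with the random write test, a rough sketch helps show the shape of this workload. The device path and buffered reads below are my assumptions; Iometer addresses the raw drive without the OS cache in the way.

```python
# Rough sketch of a 128KB sequential-read pass at QD=1.
# Assumptions not from the review: the device path and buffered reads
# (a real benchmark bypasses the OS cache).
import os
import time

DEVICE = "/dev/sdX"        # hypothetical raw device node
BLOCK = 128 * 1024         # 128KB transfers
DURATION = 60              # 1 minute

fd = os.open(DEVICE, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)   # span the entire drive
offset = 0
total = 0
start = time.time()
while time.time() - start < DURATION:
    data = os.pread(fd, BLOCK, offset)
    total += len(data)
    offset = (offset + BLOCK) % size  # wrap around at the end of the drive
os.close(fd)
print(f"average: {total / (time.time() - start) / 1e6:.1f} MB/s")
```

The write version of the sketch would simply swap os.pread for os.pwrite with a prepared 128KB buffer.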

Iometer - 128KB Sequential Write

Highly compressible sequential write speed is identical to the 240GB drive's, but use incompressible data and the picture changes dramatically. The 120GB drive has far fewer NAND die to write to in parallel and in this case manages just 76% of the performance of the 240GB drive.

Iometer - 128KB Sequential Read

Sequential read speed is also lower than the 240GB drive's. Compared to the SF-1200 drives there's still a big improvement, as long as you've got a 6Gbps controller.

Comments

  • Xcellere - Wednesday, April 6, 2011 - link

    It's too bad the lower capacity drives aren't performing as well as the 240 GB version. I don't have a need for a single high capacity drive so the expenditure in added space is unnecessary for me. Oh well, that's what you get for wanting bleeding-edge tech all the time.
  • Kepe - Wednesday, April 6, 2011 - link

    If I've understood correctly, they're using 1/2 of the NAND devices to cut drive capacity from 240 GB to 120 GB.
    My question is: why don't they use the same amount of NAND devices with 1/2 the capacity instead? Again, if I have understood correctly, that way the performance would be identical compared to the higher capacity model.
    Is NAND produced in packages of only one capacity, or is there some other reason not to use NAND devices of differing capacities?
  • dagamer34 - Wednesday, April 6, 2011 - link

    Because price scaling makes it more cost-effective to use fewer, denser chips than many smaller, less dense ones; the more of a given chip that gets made, the cheaper it eventually becomes.

    Like Anand said, this is why you can't just ask for a 90nm CPU today; it's too old and not worth making anymore. It's also why older memory gets more expensive once it's no longer mass-produced.
  • Kepe - Wednesday, April 6, 2011 - link

    But couldn't they just make smaller dies? Just like there are different sized CPU/GPU dies for different amounts of performance. Cut the die size in half, fit 2x the dies per wafer, sell for 50% less per die than the large dies (i.e. get the same amount of money per wafer).
  • A5 - Wednesday, April 6, 2011 - link

    No reason for IMFT to make smaller dies - they sell all of the large dies coming out of the fab (whether to themselves or 3rd parties), so why bother making a smaller one?
  • vol7ron - Wednesday, April 6, 2011 - link

    You're missing the point on economies of scale.

    Having one size means you don't have leftover parts, or have to pay for a completely different process (which includes quality control).

    These things are already expensive, and adding the logistical complexity would only drive the prices up, especially since there are noticeable differences in the manufacturing process.

    I guess they could take the poorer performing silicon and re-market it, like how Anand mentioned they take poorer performing GPUs and sell them at a lower clock rate/memory capacity. But it could be that NAND production is more refined and doesn't show that large a variance.

    Regardless, I think you mentioned the big point: internal RAID-style parallelism improves performance. Why 8 chips, why not more? Perhaps heat has something to do with it, and (of course) power would be the other reason, but it would be nice to see higher performing, more power-hungry SSDs. There may also be a performance benefit to larger chips, sort of like DRAM where 1x2GB may perform better than 2x1GB (not interleaved).

    I'm still waiting for the manufacturers to get fancy, perhaps with multiple controllers and speedier DRAM. Where's the Vertex 3 Colossus?
  • marraco - Tuesday, April 12, 2011 - link

    Smaller dies would improve yields, and since they could still run at full speed, the drives would be more competitive.

    A flaw in a bigger chip may invalidate the whole die, but if it were divided into two smaller chips, part of it could be recovered.

    On the other hand, yields are probably not as big a problem, since the controller can remap bad sectors to good ones.
  • Kepe - Wednesday, April 6, 2011 - link

    Anand, I'd like to thank you on behalf of pretty much every single person on the planet. You're doing an amazing job of making companies actually care about their customers and do what is right.
    Thank you so much, and keep up the amazing work.

    - Kepe
  • dustofnations - Wednesday, April 6, 2011 - link

    Thank God for a consumer advocate with enough clout for someone important to listen to them.

    All too often, valid and important complaints fall at the first hurdle due to dumb PR/CS people who filter out useful information. Maybe this is because they assume their customers are idiots, or that it is too much hassle, or perhaps they don't have the requisite technical knowledge to act sensibly on complex complaints.
  • Kepe - Wednesday, April 6, 2011 - link

    I'd say the reason is usually that when a company has sold you its product, they suddenly lose all interest in you until they come up with a new product to sell. Apple used to be a very good example with its battery policy. "So, your battery died? We don't sell new or replace dead batteries, but you can always buy the new, better iPod."
    It's this kind of ignorance toward consumers that is absolutely appalling, and Anand is doing a great job of fighting for consumers' rights. He should get some sort of an award for all he has done.
