Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger; hence the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access you'd see on an OS drive (even this is more stressful than what a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo-randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely fall somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
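For illustration, the shape of this workload can be sketched in a few lines of Python. This is a hypothetical sketch of the access pattern and payloads described above, not the actual Iometer configuration:

```python
import os
import random

BLOCK = 4 * 1024     # 4KB transfer size
SPAN = 8 * 1024**3   # 8GB LBA space the test is confined to

def random_offsets(n, seed=0):
    """Return n block-aligned offsets drawn uniformly over the 8GB span,
    mimicking the test's fully random access pattern."""
    rng = random.Random(seed)
    blocks = SPAN // BLOCK
    return [rng.randrange(blocks) * BLOCK for _ in range(n)]

# Highly compressible payload: repeating bytes a SandForce controller
# can compress/dedupe, giving the drive's best-case numbers.
compressible = bytes(BLOCK)
# Incompressible payload: random bytes defeat the compression engine,
# giving the worst-case numbers.
incompressible = os.urandom(BLOCK)
```

Issuing these writes three at a time for three minutes and averaging the throughput is what produces the QD=3 figure reported in the graphs.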

Iometer - 4KB Random Write, 8GB LBA Space, QD=3

Peak performance on the 120GB Vertex 3 is just as impressive as what we saw from the 240GB pre-production sample as well as the m4 we just tested. Write incompressible data, however, and you'll see the downside of having fewer active die: the 120GB drive delivers only 84% of the performance of the 240GB drive. In 3Gbps mode the 240 and 120GB drives are identical.

Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 - 5, higher depths are possible in heavy I/O (and multi-user) workloads:
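Queue depth is simply the number of IOs kept in flight at once. A minimal sketch of the idea (hypothetical, using a thread pool in place of a real asynchronous IO engine and a sleep in place of a real device write):

```python
import concurrent.futures
import time

def fake_io(offset):
    """Stand-in for one 4KB random write; a real benchmark would hit the
    raw device instead of sleeping."""
    time.sleep(0.001)
    return offset

def run_at_queue_depth(offsets, qd):
    """Keep up to `qd` IOs outstanding at once. Higher depths let the
    controller spread work across more NAND die in parallel, which is
    why QD=32 results can be much higher than QD=3."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=qd) as pool:
        return list(pool.map(fake_io, offsets))
```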

Iometer - 4KB Random Write, 8GB LBA Space, QD=32

At high queue depths the gap between the 120 and 240GB Vertex 3s grows a little bit when we're looking at incompressible data.

Iometer - 4KB Random Read, QD=3

Random read performance is what suffered the most with the transition from 240GB to 120GB. The 120GB Vertex 3 is slower than the 120GB Corsair Force F120 (SF-1200, similar to the Vertex 2) in our random read test. The Vertex 3 is actually about the same speed as the old Indilinx based Nova V128 here. I'm curious to see how this plays out in our real world tests.

Sequential Read/Write Speed

To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
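The sequential test differs from the random tests only in how offsets are chosen: each transfer begins where the previous one ended. A rough sketch follows; the decimal-megabyte conversion in the throughput helper is an assumption for illustration, not Iometer's exact unit definition:

```python
BLOCK = 128 * 1024  # 128KB sequential transfer size

def sequential_offsets(n, start=0):
    """Consecutive 128KB offsets: each IO begins where the last ended,
    letting the drive stream at full bandwidth even at QD=1."""
    return [start + i * BLOCK for i in range(n)]

def avg_mbps(total_bytes, elapsed_s):
    """Average throughput over the whole run, using decimal megabytes."""
    return total_bytes / elapsed_s / 1_000_000
```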

Iometer - 128KB Sequential Write

Highly compressible sequential write speed is identical to the 240GB drive's, but use incompressible data and the picture changes dramatically. The 120GB drive has far fewer NAND die to write to in parallel and in this case manages only 76% of the performance of the 240GB drive.

Iometer - 128KB Sequential Read

Sequential read speed is also lower than the 240GB drive's. Compared to the SF-1200 drives there's still a big improvement, as long as you've got a 6Gbps controller.

Comments

  • kensiko - Thursday, April 7, 2011 - link

    It's true, I've never seen any big company let customers have so much impact on it. The forum is really the big thing here.
  • lukechip - Wednesday, April 6, 2011 - link

    I've just bought an 80GB Vertex 2. OCZ state that only "E" parts are affected, but at StorageReview, they show that they had a non "E" part which contained 25nm NAND. Also, OCZ say that the only parts affected are the 60 GB and 120 GB models.

    I've just purchased an 80 GB model, and have no idea what is inside it, nor whether I'd prefer it to be an 'old' one or a 'new' one.

    The new SKUs that Anand listed indicate that moving forwards, all 80, 160 and 200 GB Vertex 2 units will be 25nm only, and all 60, 120 and 240 GB Vertex 2 units will be 34nm only. I can't imagine they can keep this up for long: once the 34nm supply runs out they'll have to move the 60, 120 and 240 GB models to 25nm.

    What I suspect is that prior to 25 nm NAND becoming available, all 80 GB units used the Hynix 32 nm NAND. Based on Anand's tests, I suspect this means they were the worst performing units in the line up. 80 GB units built using the new 25 nm NAND would actually perform better than those built with Hynix 32 nm NAND.

    So whereas 60 GB and 120 GB customers really want to have a unit based on 34 nm NAND, 80 GB customers like me really want to have a drive based on 25 nm NAND. Hence OCZ are not offering replacements for 80 GB units. A new 80 GB unit is better than an old 80 GB unit, even though it is not as good as an old 60 GB unit

    So my questions are:

    1/ Is what I am suggesting above true?
    2/ How can I tell what NAND I've got? I updated the firmware on my 80 GB unit soon after buying it, so the approach of using the firmware version to determine NAND type doesn't seem too reliable to me.

    Personally, I find my unit plenty fast enough. And I understand that OCZ and other SSD vendors must accommodate what their suppliers present them with. However, the lack of transparency, and the "lucky dip" approach that we have to take when buying an SSD from OCZ, lead me to conclude that they

    1/ don't respect their customers and/or
    2/ are very naive and stupid to expect that customers won't notice them pulling a 'bait and switch'
  • B3an - Thursday, April 7, 2011 - link

    Anand... you seem to have forgotten something in your conclusion. You say it's best to go for the 240GB if torn between that and the 120GB. But since two 120GB Vertex 3s are only very slightly more expensive than the 240GB version, wouldn't it make more sense to just get two 120GBs for RAID 0? You'd get considerably better performance than the 240GB, considering how well SSDs scale in RAID 0.

    Really great and interesting review BTW.
  • Alopex - Thursday, April 7, 2011 - link

    I'd really like to see this question addressed as well. According to several tests, SSDs scale in pretty much all categories after a minimal queue depth. It seems like random reads are the 120GB model's Achilles' heel here, but given the linearity of the scaling, it might be safe-ish to assume that 2x 120GB in RAID 0 will equal 1x 240GB. For nearly the same price, you'd get the same storage capacity, fix the discrepancy between the two models, and hopefully see significant performance gains in the other categories like sequential read/write.

    I'm building a new computer at the moment, and in light of this article I'm still planning to go with 2x 120GB Vertex 3s in RAID 0, unless someone can provide a convincing argument to do otherwise. At the moment, the only thing that really makes me hesitate is waiting to see what the other vendors have planned for "next-gen" SSD performance. Then again, if I had that attitude I'd be waiting forever ;-)

    Many thanks for the article, though!
  • casteve - Thursday, April 7, 2011 - link

    No TRIM available in RAID.
  • B3an - Thursday, April 7, 2011 - link

    Not a big problem. I've had 3 different SSD sets in RAID 0 over the years, and I've not needed TRIM. And a certain crappy OS with a fruity theme doesn't even support TRIM without a hack job.
  • ComputerNovice22 - Thursday, April 7, 2011 - link

    You wrote: "In the worst case comparison the F120 we have here is 30% faster than your 34nm Hynix Vertex 2."

    I believe you meant 32nm Hynix. I'm not sure whether I'm right, and I'm not trying to be one of those people who just likes to be right; just wanted to let you know in case.

    On another note, though, I LOVE the article. I bought a Vertex 2 recently and was very angry with OCZ after I hooked it up and realized it was a 25nm SSD... I ended up just buying a 120GB 510 (Elm Crest).
  • Lux88 - Thursday, April 7, 2011 - link

    1. Thank you for investigating NAND performance so thoroughly.
    2. Thank you for benching drives with "common" capacities.
    3. Thank you for protecting consumer interests.

    Great article. Great site. Fantastic Anand.
  • sor - Thursday, April 7, 2011 - link

    I worked at a Micron test facility years ago. I can only speak for DRAM, but I imagine NAND is much the same. Whenever someone drops a tray of chips and they go sprawling all over the floor... SpecTek. Whenever a machine explodes and starts crunching chips... SpecTek. I used to laugh when I saw PNY memory in Best Buy with a SpecTek mark on its chips selling for 2x what good RAM at Newegg would cost.

    Basically anything that's dropped, damaged, or doesn't meet spec somehow, gets put into SpecTek and re-binned according to what it's now capable of. It's a brand that allows Micron to make money off of otherwise garbage parts, without diluting their own brand. On the good end the part may have just had some bent leads that needed to be fixed, on the bad end the memory can be sold and run at much slower specs or smaller capacity (blowing fuses in the chip to disable bad parts), or simply scrapped altogether.
  • sleepeeg3 - Thursday, April 7, 2011 - link

    Thanks for the info, but IMO the bottom line is if it works reliably and it allows them to deliver something at a lower price, I am all for it. If it backfires on them and they get massive failure rates, consumers will respond by buying someone else's product. That's the beauty of capitalism.
