TRIM Performance

In practice, SandForce-based drives running a desktop workload do very well and typically boast an average write amplification below 1 (less data written to NAND than the host actually sends to the drive). My personal SF-1200 drive had a write amplification of around 0.6 after several months of use. However, if subjected to a workload composed entirely of incompressible writes (e.g. tons of compressed images, videos and music), you can back the controller into a corner.
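
As a rough illustration of how a compressing controller can end up with write amplification below 1, here is a minimal Python sketch. The zlib call is only a stand-in for SandForce's proprietary transparent compression, and the two 4KB payloads are invented examples:

```python
import os
import zlib

def write_amplification(nand_bytes, host_bytes):
    """Write amplification = data actually written to NAND / data the host asked to write."""
    return nand_bytes / host_bytes

# Hypothetical 4KB host writes: one easily compressible, one incompressible.
payloads = {
    "compressible (patterned)": b"A" * 4096,
    "incompressible (random, like compressed media)": os.urandom(4096),
}

for label, data in payloads.items():
    nand = len(zlib.compress(data))  # stand-in for the controller's transparent compression
    print(f"{label}: host wrote {len(data)} B, NAND stores ~{nand} B, "
          f"WA ~ {write_amplification(nand, len(data)):.2f}")
```

The patterned payload lands well below 1, while the random payload stays at (or slightly above) 1 - which is exactly why an all-incompressible workload takes away the controller's advantage.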

To simulate this, I filled the drive with incompressible data, ran a 4KB random write test (100% LBA space, QD32) with incompressible data for 20 minutes, and then ran AS-SSD (another incompressible data test) to see how low performance could get:
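
If you want to approximate the torture phase on your own hardware, a minimal sketch of the random-write portion might look like the following. It is a single-threaded (QD1) stand-in for the QD32 workload, it writes to an ordinary test file rather than the raw device across the full LBA space, and the file name, span and duration are placeholders:

```python
import os
import random
import time

TEST_FILE = "torture.bin"   # placeholder: a large file on the drive under test
SPAN = 8 * 1024**3          # placeholder: 8GB span instead of 100% of the LBA space
BLOCK = 4096                # 4KB writes, as in the test described above
DURATION = 20 * 60          # 20 minutes of random writes

fd = os.open(TEST_FILE, os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, SPAN)

deadline = time.time() + DURATION
blocks = SPAN // BLOCK
while time.time() < deadline:
    offset = random.randrange(blocks) * BLOCK   # random 4KB-aligned offset
    os.pwrite(fd, os.urandom(BLOCK), offset)    # random bytes = incompressible payload

os.close(fd)
```

A real run would also want unbuffered/direct I/O and a higher queue depth; the point here is just the access pattern and the incompressible data.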

OWC Mercury Extreme Pro 6G 120GB - Resiliency - AS SSD Sequential Write Speed - 6Gbps

                       Clean        After Torture   After TRIM
OWC ME Pro 6G 120GB    163.6 MB/s   62.7 MB/s       104.9 MB/s
OCZ Vertex 3 120GB     162.1 MB/s   38.3 MB/s       101.5 MB/s

You'll notice that the OWC drive's After Torture number is better than the 120GB Vertex 3's. Remember that the OWC drive's 4KB random write performance is capped, so it simply doesn't get as fragmented in 20 minutes as the Vertex 3 does - the behavior of the two drives would be the same if we wrote the same physical amount of data to each. Neither drive is quite as resilient as the 240GB version. If you use your SSD primarily to store and manipulate incompressible data (images/videos/music), you're probably better off with an Intel SSD 510 or Crucial m4.

Power Consumption

Power consumption is a bit higher on the OWC drive than on the Vertex 3; perhaps the firmware differences extend beyond just 4KB random write performance:

Idle Power - Idle at Desktop

Load Power - 128KB Sequential Write

Load Power - 4KB Random Write, QD=32

Final Words

With the exception of the (temporary?) 4KB random write cap and slightly higher power consumption, OWC's Mercury Extreme Pro 6G is a dead ringer for OCZ's Vertex 3 - at least at the 120GB capacity. The two come with comparable warranties, which brings the decision down to pricing, where OCZ currently has a $20 advantage.

That's about as open and shut as you can get. My preference is still 240GB for anything SF-2200 based given the sizable increase in performance, but that added performance and capacity does come at a high price. Speaking of which, OCZ's retail 240GB Vertex 3 is next on my hit list...

Comments

  • Anand Lal Shimpi - Thursday, May 5, 2011 - link

    Those drivers were only used on the X58 platform; I use Intel's RST10 on the SNB platform for all of the newer tests/results. :)

    Take care,
    Anand
  • iwod - Thursday, May 5, 2011 - link

    I've lost count of how many times I've posted this in this series. Anyway, the people who continue to worship 4K random read/write have now seen the truth. Sequential read/write is much more important than you think.

    Since the test is essentially two identical pieces of hardware, one of them with a random write cap, the results show that the cap makes no real-world difference. We need more sequential performance!

    Interestingly, we aren't limited by the controller or the NAND itself, but by the connection method, SATA 6Gbps. We need to start using a PCI-Express 4x slot, as Intel has shown in the leaked roadmap. Going to PCI-E 3.0 would give us 4GB/s with a 4x slot. That should leave plenty of room for improvement. ONFI 3.0 next year should let us reach 2GB+/s sequential read/write easily.
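
As a quick sanity check on the bandwidth figures in the comment above, the interface math works out roughly as follows (the line rates and encoding overheads are the published SATA 6Gbps and PCIe 3.0 numbers; the comparison is just an illustration):

```python
# SATA 6Gbps uses 8b/10b encoding: 6.0 Gb/s * 8/10 = 4.8 Gb/s, or about 600 MB/s of payload.
sata_6g_MBps = 6.0e9 * (8 / 10) / 8 / 1e6
print(f"SATA 6Gbps:  ~{sata_6g_MBps:.0f} MB/s")

# PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding; four lanes is roughly 4 GB/s.
pcie3_x4_GBps = 8e9 * (128 / 130) * 4 / 8 / 1e9
print(f"PCIe 3.0 x4: ~{pcie3_x4_GBps:.2f} GB/s")
```
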
  • krumme - Thursday, May 5, 2011 - link

    I think Anand listened too much to Intel's voice in this SSD story.
    The 4K random madness was Intel G2 business,
    and it all went in the wrong direction.
    Anand was - and is - the SSD review site.
  • Anand Lal Shimpi - Thursday, May 5, 2011 - link

    The fact of the matter is that both random and sequential performance is important. It's Amdahl's law at its best - if you simply increase the sequential read/write speed of these drives without touching random performance, you'll eventually be limited by random performance. Today I don't believe we are limited by random performance but it's still something that has to keep improving in order for us to continue to see overall gains across the board.

    Take care,
    Anand
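
To put rough numbers on the Amdahl's law point in the comment above, here is a minimal sketch; the 70/30 sequential/random time split is an invented workload mix, purely for illustration:

```python
def overall_speedup(seq_fraction, seq_speedup, rand_speedup=1.0):
    """Amdahl's law: new_time = seq_fraction/seq_speedup + (1 - seq_fraction)/rand_speedup."""
    return 1.0 / (seq_fraction / seq_speedup + (1.0 - seq_fraction) / rand_speedup)

# Hypothetical workload where 70% of I/O time is sequential and 30% is random.
for s in (2, 4, 8, 1000):
    print(f"sequential {s:>4}x faster, random unchanged: {overall_speedup(0.70, s):.2f}x overall")
```

However fast sequential transfers get, the overall gain here tops out around 3.3x because the untouched random 30% eventually dominates - which is the point about both metrics needing to keep improving.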
  • Hrel - Thursday, May 5, 2011 - link

    Damn! 200 dollars too expensive for the 120GB. Stopped reading.
  • snuuggles - Thursday, May 5, 2011 - link

    Good lord, every single article that discusses OWC seems to include some sort of odd-ball tangent or half-baked excuse for some crazy s**t they are pulling.

    Hey, I know they have the fastest stuff around, but there's just something so lame about these guys, I have to say on principle: "never, ever, will I buy from OWC"
  • nish0323 - Thursday, August 11, 2011 - link

    What crazy s**t are they pulling? I've got 5 drives from them, all SSDs, and they all perform great. The 6G ones have a 5-year warranty, 2 years longer than any other SSD manufacturer offers right now.
  • neotiger - Thursday, May 5, 2011 - link

    A lot of people and hosting companies use consumer SSDs for server workloads such as MySQL and Solr.

    Can you benchmark these SSDs' performance on server workloads?
  • Anand Lal Shimpi - Thursday, May 5, 2011 - link

    It's on our roadmap to do just that... :)

    Take care,
    Anand
  • rasmussb - Saturday, May 7, 2011 - link

    Perhaps you have answered this elsewhere, or it will be answered in your future tests. If so, please forgive me.

    As you point out, the drive performance is based in large part upon the compressibility of the source data. Relatively incompressible data results in slower speeds. What happens when you put a pair (or more) of these in a RAID 0 array? Since units of data are alternating between drives, how does the SF compression work then? Does previously compressible data get less compressible because any given drive is only getting, at best (2-drive array) half of the original data?

    Conversely, does incompressible data happen to get more compressible when you're splitting it amongst two or more drives in an array?

    Server workload on a single drive versus in say a RAID 5 array would be an interesting comparison. I'm sure your tech savvy minds are already over this in your roadmap. I'm just asking in the event it isn't.
