Missing TRIM - Does it Matter?

Clearly the performance of two X25-Vs in RAID 0 is great, but you do lose TRIM - isn't that a dealbreaker? Honestly, it depends. For sequential accesses, TRIM isn't necessary on the Intel drives. The X25 controller does a good job of aggressively cleaning and recycling NAND blocks and you'll pretty consistently write at peak performance if your workload is almost all sequential.

The more random your access pattern is, the more you'll miss TRIM. Thankfully desktops don't spend too much of their time randomly writing data across the drive, but I'd say a good 30% of most desktop writes are at least somewhat random. Over time, these random writes will build up and bring down the overall performance of your RAID array until you either secure erase the drives or write sequentially to all available free space.

There is one other option for curbing the performance degradation before it happens. Remember the relationship between spare area and write amplification:

The more random your workload, the higher your write amplification (and thus the lower your performance and the shorter your NAND lifespan). Increasing spare area can go a long way toward reducing write amplification. While it can't eliminate it, it can definitely make a dent.
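As a rough illustration (not a measurement from these drives), write amplification is just the ratio of data the controller physically writes to NAND versus data the host asked it to write; more spare area gives the controller room to consolidate blocks so that ratio stays closer to 1:

```python
# Illustrative only: write amplification is NAND writes divided by host writes.
# The numbers below are hypothetical, not measured X25-V figures.

def write_amplification(host_gb_written: float, nand_gb_written: float) -> float:
    return nand_gb_written / host_gb_written

# A very random workload with little spare area forces lots of background
# block cleaning, so the controller writes far more than the host requested:
print(write_amplification(host_gb_written=100, nand_gb_written=300))  # 3.0

# The same host workload with more spare area needs less block cleaning:
print(write_amplification(host_gb_written=100, nand_gb_written=150))  # 1.5
```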

If you're looking to keep performance as high as possible with a pair of X25-Vs in RAID, you can always allocate more NAND as spare area. Secure erase each drive, create your RAID array, and then create a partition that's smaller than the array's maximum capacity (try 10 - 20% smaller). The unpartitioned space should automatically be used by the controller as spare area. To test the effectiveness of this approach I took an X25-V, filled it with garbage data, and then wrote random data across the drive as fast as possible for 20 minutes. I then ran HD Tach to get a visualization of write latency (expressed by sudden drops in bandwidth) vs. LBA:

A standard 80GB X25-M wouldn't be this bad off; the X25-V gets penalized further by having such a limited capacity to begin with. You can see that the drive is attempting to write at full speed but gets brought down to nearly 0MB/s as it has to constantly clean dirty blocks. Constant TRIMing would never let the drive get into this state. It's worth mentioning that a typical desktop usage pattern shouldn't let things get this bad either. Another set of sequential writes will clean up most of this though:

Intel's controller is very resilient. Even without TRIM, as long as your access pattern has some amount of a sequential component you'll be able to eventually recover performance.

Now look at what happens if we only use 60GB of the 74.5GB RAID 0 array upon creation and run the same test:

Performance isn't nearly as bad. That added spare area really comes in handy. Of course another pass corrects nearly everything:

If you don't need the added space, using a smaller partition is a great way to ensure high performance for as long as possible. The effectiveness of this approach is a difficult thing to benchmark, given that it takes months of normal use before enough random writes accumulate to become a problem. The good news is that even if you bombard the X25-Vs with random writes, the drives can quickly recover as soon as they're hit with some sequential data.
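As a quick sketch of the sizing math, assuming the 74.5GB two-drive array used in this test and the upper end (20%) of the range suggested above:

```python
# Sketch: how big to make the partition if you want to leave extra spare area
# on a RAID 0 array (sizes here match the two X25-V array in this article).

array_size_gb = 74.5           # total capacity of the RAID 0 array
overprovision = 0.20           # leave 20% of the array unpartitioned

partition_gb = array_size_gb * (1 - overprovision)
spare_gb = array_size_gb - partition_gb

print(f"Partition {partition_gb:.1f}GB, leave {spare_gb:.1f}GB unpartitioned as spare area")
# -> Partition 59.6GB, leave 14.9GB unpartitioned as spare area
```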

Comments

  • rhvarona - Tuesday, March 30, 2010 - link

    Some Adaptec Series 2, Series 5 and Series 5Z RAID controller cards allow you to add one or more SSDs as a cache for your array.

    So, for example, you can have 4x1TB SATA disks in RAID 10, and 1 32GB Intel SLC SSD as a transparent cache for frequently accessed data.

    The feature is called MaxIQ. One card that has it is the Adaptec 2405 which retails for about $250 shipped.

    The kit is the Adaptec MaxIQ SSD Cache Performance Kit, but it ain't cheap! Retails for about $1,200. Works great for database and web servers though.
  • GDM - Tuesday, March 30, 2010 - link

    Hi, I was under the impression that Intel has new RAID drivers that can pass through the TRIM command. Can you please rerun the test if that is true? Also, can you test the 160GB drives in RAID?

    And although benchmarks are nice, do you really notice it during normal use?

    Regards,
  • Makaveli - Tuesday, March 30, 2010 - link

    You cannot pass TRIM to an SSD RAID even with the new Intel drivers.

    The drivers will allow you to pass TRIM to a single SSD alongside an HD RAID setup.

  • Roomraider - Wednesday, March 31, 2010 - link

    Wrong, Wrong, Wrong!!!!!!!
    The new drivers do in fact pass TRIM to RAID 0 in Windows 7. My two 160GB G2s striped in RAID 0 now have TRIM running on the array (verified via the Windows 7 TRIM command). According to Intel, this works with any TRIM-enabled SSD. No RAID 5 support yet.
  • jed22281 - Friday, April 2, 2010 - link

    What, so Anand is wrong when he speaks to Intel engineers directly?
    I've seen several other threads where this claim has since been quashed.
  • WC Annihilus - Tuesday, March 30, 2010 - link

    Well this is definitely a test I was looking for. I just bought 3 of the Kingston drives off Amazon cheap and was trying to decide whether to RAID them or use them separately for OS/apps and games. Would a partition of 97.5GB (so about 14GB unpartitioned) be good enough for a wear-leveling buffer?
  • GullLars - Tuesday, March 30, 2010 - link

    Yes, it should be. You can consider making it 90GiB (gibibytes, 90*2^30 bytes), if you anticipate a lot of random writes and not a lot of larger files going in and out regularly.

    You will likely get about 550MB/s sequential read, and enough IOPS for anything you may do (unless you start doing databases, VMware and the like). 120MB/s sustained and consistent write should also keep you content.

    Tip: use a small stripe size; even a 16KB stripe will work without fuss on these controllers.
  • WC Annihilus - Tuesday, March 30, 2010 - link

    Main reason I want to go with a 97.5GB partition is because that's the size of my current OS/apps/games partition. It's got about 21GB free, which I wanted to keep in case I wanted to install more games.

    In regards to stripe size, most of the posts I've seen suggest 64KB or 128KB are the best choices. What difference does this make? Why do you suggest smaller stripe sizes?

    Plans are for the SSDs to be OS/apps/games, with general data going on a pair of 1.5TB hard drives. Usage is mainly gaming, browsing, and watching videos, with some programming and the occasional fiddling with DVDs and video editing.
  • GullLars - Tuesday, March 30, 2010 - link

    Then you should be fine with a 97.5GB partition.
    The reason smaller is better when it comes to stripe size on SSD RAIDs has to do with the nature of the storage medium combined with the mechanisms of RAID. I will explain in short here, and you can read up more for yourself if you are curious.

    Intel SSDs can do 90-100% of their sequential bandwidth with 16-32KB blocks @ QD 1, and at higher queue depths they can reach it with 8KB blocks. Hard disks, on the other hand, reach their maximum bandwidth around 64-128KB sequential blocks, and do not benefit noticeably from increasing the queue depth.

    When you RAID 0, files that are larger than the stripe size get split up into chunks equal in size to the stripe size and distributed among the units in the RAID. Say you have a 128KB file (or want to read a 128KB chunk of a larger file): this will get divided into 8 pieces when the stripe size is 16KB, and with 3 SSDs in the RAID this means 3 chunks for 2 of the SSDs, and 2 chunks for the third. When you read this file, you will read 16KB blocks from all 3 SSDs at queue depths of 2 and 3. If you check out ATTO, you will see that 2x 16KB @ QD 3 + 1x 16KB @ QD 2 adds up to higher bandwidth than 1x 128KB @ QD 1.

    The bandwidth when reading or writing files equal to or smaller than the stripe size will not be affected by the RAID. The sequential bandwidth of blocks of 1MB or larger will also be the same with any stripe size, since the data is striped across all the drives in blocks that are either large enough, or numerous enough, for each SSD to reach its maximum bandwidth.

    So to summarize, the benefits and drawbacks of using a small stripe size:
    + Higher performance for files/blocks above the stripe size while still relatively small (<1MB)
    - Additional computational overhead from managing more blocks in-flight, although this is negligible for RAID 0.
    The added performance for small-to-medium files/blocks from a small stripe size can make a difference for OS/apps, and can be measured in PCMark Vantage.
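To make the striping arithmetic above concrete, here's a minimal sketch of how a single request splits across a RAID 0 array; the 128KB request, 16KB stripe and 3 drives are the same example numbers used in the comment above:

```python
# Sketch: round-robin distribution of a request across a RAID 0 array.

def stripe_distribution(request_kb: int, stripe_kb: int, num_drives: int) -> list[int]:
    chunks = request_kb // stripe_kb       # stripe-sized pieces in the request
    per_drive = [0] * num_drives
    for i in range(chunks):
        per_drive[i % num_drives] += 1     # chunks alternate across the drives
    return per_drive

# A 128KB read with a 16KB stripe on 3 SSDs: two drives get 3 chunks (QD 3),
# one gets 2 chunks (QD 2).
print(stripe_distribution(128, 16, 3))     # [3, 3, 2]

# With a 128KB stripe the same read lands entirely on one drive.
print(stripe_distribution(128, 128, 3))    # [1, 0, 0]
```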
  • WC Annihilus - Tuesday, March 30, 2010 - link

    Many thanks for the explanation. I may just go ahead and fiddle with various configurations and choose which feels best to me.
