The Trim Command: Coming Soon to a Drive Near You

We run into these problems primarily because the drive doesn’t know when a file is deleted, only when one is overwritten. Thus we sacrifice write performance on new files in order to maintain lightning-quick deletion speeds. The latter doesn’t really matter though, now does it?

There’s a command you may have heard of called TRIM. It requires proper support from both the OS and the drive, but with it the OS can effectively tell the SSD which pages hold deleted data so they can be cleaned before they ever need to be overwritten.

The process works like this:

First, a TRIM-supporting OS (e.g. Windows 7 will support TRIM at some point) queries the drive for its rotational speed. If the drive reports that it doesn’t rotate, the OS knows it’s an SSD, turns off features like defrag, and enables the use of the TRIM command.
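As an aside, you can see the same detection idea at work on Linux, which exposes a drive’s rotational flag through sysfs. Below is a minimal sketch that reads it; the device names are just examples, and this is only an illustration of the detection step, not how Windows 7 implements it.

```python
# Minimal sketch: on Linux the kernel publishes whether a block device
# reports itself as rotational. A value of "0" means the device doesn't
# spin, i.e. it's an SSD. Device names here are only examples.
from pathlib import Path

def is_ssd(device: str) -> bool:
    flag = Path(f"/sys/block/{device}/queue/rotational").read_text().strip()
    return flag == "0"

if __name__ == "__main__":
    for dev in ("sda", "sdb"):
        try:
            print(dev, "-> SSD" if is_ssd(dev) else "-> rotational disk")
        except FileNotFoundError:
            print(dev, "-> not present on this system")
```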

When you delete a file, the OS sends a TRIM command for the LBAs covered by the file to the SSD controller. The controller then copies the affected block into its cache, erases the block, and writes back only the still-valid data, leaving the deleted pages freshly cleaned.

Now when you go to write a file to that block, you have empty pages waiting for you, and your write performance will be much closer to what it should be.
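To make the mechanics concrete, here’s a heavily simplified toy model of a single flash block in Python. It’s a sketch built on assumptions: the SimpleSSD class, the 4KB page size, the 5-page block and the method names are all made up for this illustration, and a real controller manages thousands of blocks, a logical-to-physical mapping table, spare area and wear leveling. The walkthrough below steps through the same scenario in prose, and a short run of this model follows it.

```python
# Toy model of ONE flash block, just to illustrate what a TRIM-aware
# controller does when the OS reports a deletion. Everything here is
# hypothetical and simplified; it is not any vendor's actual firmware.

PAGE_SIZE_KB = 4        # smallest unit that can be written
PAGES_PER_BLOCK = 5     # smallest unit that can be erased is the whole block


class SimpleSSD:
    def __init__(self):
        # Each slot is None (clean), a file name (live data), or "stale"
        # (data whose file was deleted and reported to us via TRIM).
        self.pages = [None] * PAGES_PER_BLOCK

    def write(self, name, size_kb):
        """Place a file into clean pages; clean the block first if needed."""
        needed = -(-size_kb // PAGE_SIZE_KB)      # ceiling division
        if sum(p is None for p in self.pages) < needed:
            self._clean_block()                   # slow path: cleanup during the write
        clean = [i for i, p in enumerate(self.pages) if p is None]
        for i in clean[:needed]:
            self.pages[i] = name

    def trim(self, name):
        """The OS says this file's pages are garbage: clean up now, after
        the delete, instead of during some future write."""
        self.pages = [("stale" if p == name else p) for p in self.pages]
        self._clean_block()

    def _clean_block(self):
        """Read-erase-rewrite: copy out live pages, erase the block, then
        write the live pages back so everything else comes up clean."""
        live = [p for p in self.pages if p not in (None, "stale")]
        self.pages = live + [None] * (PAGES_PER_BLOCK - len(live))

    def used_percent(self):
        live = sum(p not in (None, "stale") for p in self.pages)
        return 100 * live // PAGES_PER_BLOCK
```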

In our example from earlier, here’s what would happen if our OS and drive supported TRIM:

Our user saves his 4KB text file, which gets put in a new page on a fresh drive. No differences here.

Next was an 8KB JPEG. Two pages allocated; again, no differences.

The third step was deleting the original 4KB text file. Since our drive now supports TRIM, when this deletion request comes down the drive will actually read the entire block, drop the pages belonging to the deleted file, and write the cleaned block back to the flash.


The TRIM command forces the block to be cleaned before our final write. There's additional overhead but it happens after a delete and not during a critical write.

Our drive is now at 40% capacity, just like the OS thinks it is. When our user goes to save his 12KB JPEG, the write goes at full speed. Problem solved. Well, sorta.
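For the curious, here’s the same walkthrough replayed with the toy SimpleSSD sketch from above (purely illustrative, and it assumes that class is defined as shown earlier):

```python
# Replaying the example with the SimpleSSD sketch above
# (one 5-page block of 4KB pages, 20KB total -- hypothetical numbers).
ssd = SimpleSSD()
ssd.write("text.txt", 4)     # 4KB text file  -> 1 page
ssd.write("photo.jpg", 8)    # 8KB JPEG       -> 2 pages
ssd.trim("text.txt")         # delete arrives as a TRIM; block is cleaned now
print(ssd.used_percent())    # 40 -- the drive agrees with the OS
ssd.write("new.jpg", 12)     # 12KB JPEG lands in already-clean pages, full speed
print(ssd.used_percent())    # 100
```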

While the TRIM command will alleviate the problem, it won’t eliminate it. TRIM can’t help when you’re simply overwriting a file, for example when you save changes to a document: no pages are freed, so there’s nothing for the OS to trim, and in those situations you’ll still pay the performance penalty.

Every controller manufacturer I’ve talked to intends to support TRIM whenever there’s an OS that takes advantage of it. The big unknown is whether current drives can be firmware-upgraded to support TRIM, as no manufacturer has a clear firmware upgrade strategy at this point.

I expect that whenever Windows 7 supports TRIM we’ll see a new generation of drives with support for the command. Whether or not existing drives will be upgraded remains to be seen, but I’d highly encourage it.

To the manufacturers making these drives: your customers buying them today at exorbitant prices deserve your utmost support. If it’s possible to enable TRIM on existing hardware, you owe it to them to offer the upgrade. Their gratitude would most likely be expressed by continuing to purchase SSDs and encouraging others to do so as well. Upset them, and you’ll simply be delaying the migration to solid state storage.

Comments

  • Luddite - Friday, March 20, 2009 - link

    So even with the TRIM command, when working with large files, say, in Photoshop and saving multiple layers, the performance will still drop off?
  • proviewIT - Thursday, March 19, 2009 - link

    I bought a Vertex 120GB and it is NOT working on my Nvidia chipset motherboard. Has anyone met the same problem? I tried an Intel chipset motherboard and it seems OK.
    I used HDTach to test the read/write performance 4 days ago, wow, it was amazing. 160MB/s in write. But today I felt it was slower and used HDTach to test again; it's down to single-digit MB per second. Can I recover it or do I need to return it?
  • kmmatney - Thursday, March 19, 2009 - link

    Based on the results and price, I would say that the OCZ Vertex deserves an Editor's Choice of some sort (Gold, Silver)...
  • Tattered87 - Thursday, March 19, 2009 - link

    While I must admit I skipped over some of the more technical bits where SSD was explained in detail, I read the summaries and I've gotta admit this article was extremely helpful. I've been wanting to get one of these for a long time now but they've seemed too infantile in technological terms to put such a hefty investment in, until now.

    After reading about OCZ's response to you and how they've stepped it up and are willing to cut unimportant statistics in favor of lower latencies, I actually decided to purchase one myself. Figured I might as well show my appreciation to OCZ by grabbing up a 60GB SSD, not to mention it looks like it's by far the best purchase I can make SSD-wise for $200.

    Thanks for the awesome article, was a fun read, that's for sure.
  • bsoft16384 - Thursday, March 19, 2009 - link

    Anand, I don't want to sound too negative in my comments. While I wouldn't call them unusable, there's no doubt that the random write performance of the JMicron SSDs sucks. I'm glad that you're actually running random I/O tests when so many other websites just run HDTune and call it a day.

    That X25-M for $340 is looking mighty tempting, though.
  • MrSpadge - Thursday, March 19, 2009 - link

    Hi,

    first: great article, thanks to Anand and OCZ!

    Something crossed my mind when I saw the firmware-based trade-off between random writes and sequential transfer rates: couldn't that be adjusted dynamically to get the best of both worlds? Default to the current behaviour but switch into something resembling the old one when extensive sequential transfers are detected?

    Of course this necessitates that the processor be able to handle the additional load and that the firmware changes don't involve permanent changes in the organization of the data.

    Maybe the OCZ-Team already thought about this and maybe nobody's going to read this post, buried deep within the comments..

    MrS
  • Per Hansson - Thursday, March 19, 2009 - link

    Great work on the review Anand
    I really enjoyed reading it and learning from it
    Will there be any tests of the old timers like Mtron etc?
  • tomoyo - Thursday, March 19, 2009 - link

    That was kind of strange to me too. But I assume Anand really means the desktop market, not the server storage/business market. Since it's highly doubtful that the general consumer will spend many times as much money for 15k SAS drives.
  • Gary Key - Thursday, March 19, 2009 - link

    The intent was based on it being the fastest consumer desktop drive; the text has been updated to reflect that fact.
  • tomoyo - Thursday, March 19, 2009 - link

    I've always been someone who wants real clarity and truth in the information on the internet. That's a problem because probably 90% of it isn't. But Anand is one man I feel a lot of trust for because of great and complete articles such as this. This is truly the first time that I feel like I really understand what goes into SSD performance and why it can be good or bad. Thank you so much for being the most insightful voice in the hardware community. And keep fighting those damn manufacturers who are scared of the facts getting in the way of their 200MB/s marketing BS.
