Free Space to the Rescue

There’s not much we can do about the scenario I just described; NAND-flash simply can’t be erased one page at a time, only a whole block at once. There are some things we can do to soften the blow, though.

The most common approach is to over-provision the drive: ship it with more flash than the user ever sees. Let’s say we only exposed 20KB of space to the end user, but we actually had 24KB of flash on the drive. The remaining 4KB could be used by our controller; how, you say?

In the scenario from the last page we had to write 12KB of data to our drive, but we only had 8KB in free pages and a 4KB invalid page. In order to write the 12KB we had to perform a read-modify-write, which took over twice as long as a straight 12KB write should take.
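To see why the read-modify-write path hurts so much, here’s a minimal sketch in Python; the per-page timings and the six-page block size are made-up numbers for illustration, not figures from any real drive:

```python
# Hypothetical timings -- illustrative only, not from any datasheet.
PAGE_KB = 4               # one flash page
WRITE_MS_PER_PAGE = 1.0   # program one 4KB page
READ_MS_PER_PAGE = 0.1    # read one 4KB page
ERASE_MS_PER_BLOCK = 2.0  # erase an entire block

def direct_write_ms(kb):
    """Write straight to free pages: just program them."""
    return (kb // PAGE_KB) * WRITE_MS_PER_PAGE

def read_modify_write_ms(block_pages=6):
    """No free pages left: read the whole block out, erase it,
    then program it all back (old valid data plus new data)."""
    read = block_pages * READ_MS_PER_PAGE
    erase = ERASE_MS_PER_BLOCK
    rewrite = block_pages * WRITE_MS_PER_PAGE
    return read + erase + rewrite

print(direct_write_ms(12))      # 3.0 -- three pages, programmed directly
print(read_modify_write_ms())   # 8.6 -- well over twice as long
```

Note that the read-modify-write cost doesn’t even depend on how much new data you’re writing; you pay for the whole block regardless.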

If we had an extra 4KB of space our 12KB write from earlier could’ve proceeded without a problem. Take a look at how it would’ve worked:

We’d write 8KB to the user-facing flash, and then the remaining 4KB would get written to the overflow flash. Our write speed would still be 12KB/s and everything would be right in the world.
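Here’s a toy sketch of that behaviour; the `ToyDrive` class and its page bookkeeping are invented for illustration and are far simpler than a real controller’s flash translation layer:

```python
PAGE_KB = 4  # one flash page

class ToyDrive:
    """Toy model of spare-area behaviour; names and sizes are
    illustrative, not a real controller's FTL."""

    def __init__(self, free_user_kb, spare_kb):
        # The controller can program any free page, user-visible or spare.
        self.free_pages = (free_user_kb + spare_kb) // PAGE_KB

    def write(self, kb):
        pages = kb // PAGE_KB
        if pages <= self.free_pages:
            self.free_pages -= pages
            return "fast: programmed free pages"
        # Out of free pages: an erase cycle (read-modify-write) is needed.
        return "slow: read-modify-write"

    def delete(self, kb):
        # A delete only marks pages invalid; the flash isn't erased yet,
        # so no free pages come back in this simple model.
        pass

d = ToyDrive(free_user_kb=8, spare_kb=4)
print(d.write(12))  # fast: the 4KB of spare flash absorbs the overflow
d.delete(4)
print(d.write(4))   # slow: no free pages left, the problem was only delayed
```

The first 12KB write succeeds at full speed only because the controller could dip into the spare flash; the very next write hits the wall again.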

If we then deleted some data and tried to write another 4KB, however, we’d run into the same problem again. Shipping our drive with an extra 4KB of space simply delays the inevitable.

The more spare area we ship with, the longer our performance will remain at its peak level. But again, you have to pay the piper at some point.

Intel ships its X25-M with 7.5 - 8% more area than is actually reported to the OS. The more expensive enterprise version ships with the same amount of flash, but even more spare area. Random writes all over the drive are more likely in a server environment so Intel keeps more of the flash on the X25-E as spare area. You’re able to do this yourself if you own an X25-M; simply perform a secure erase and immediately partition the drive smaller than its actual capacity. The controller will use the unpartitioned space as spare area.
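If you do shrink your partition, the amount of spare area you gain is easy to work out. Here’s a small sketch; the `spare_pct` helper is hypothetical, and it quotes spare area relative to the exposed capacity, which is one of a couple of conventions in use:

```python
def spare_pct(physical_kb, exposed_kb):
    """Percentage of flash kept as spare area,
    measured relative to what the OS can see."""
    return 100.0 * (physical_kb - exposed_kb) / exposed_kb

# Our toy drive: 24KB of flash with 20KB exposed -> 20% spare.
print(spare_pct(24, 20))               # 20.0
# DIY example: partition a drive to 90% of capacity -> ~11% spare.
print(round(spare_pct(100, 90), 1))    # 11.1
```

The same arithmetic works whatever units you use, since only the ratio matters.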


235 Comments


  • Gasaraki88 - Friday, March 20, 2009 - link

    This truly was a GREAT article. I enjoyed reading it, and it was very informative. Thank you so much. That's why Anandtech is the best site out there.
  • davidlants - Friday, March 20, 2009 - link

    This is one of the best tech articles I have ever read; I created an account just to post this comment. I've been a fan of Anandtech for years, and articles like this (and the RV700 article from a while back) show the truly unique perspective and access that Anand has that simply no other tech site can match. GREAT WORK!!!
  • Zak - Friday, March 20, 2009 - link

    I just got the Apex. I'd probably cough up more dough for the Vertex after reading this. However, I've run it for two days as my system disk in my Mac Pro and haven't noticed any issues; it's really fast. But I guess I'll get a Vertex for my Windows 7 build.

    Z.
  • Nemokrad - Friday, March 20, 2009 - link

    What I find intriguing about this article is that these smaller manufacturers do not do real-world internal testing for these things. They should not need 3rd parties like you to figure this shit out for them. Maybe now OCZ will learn what they need to do for the future.
  • JonasR - Friday, March 20, 2009 - link


    Thanks for an excellent article. I have one question: does anyone know which controller is being used in the new Patriot 256GB V.3 SSD?
  • tgwgordon - Friday, March 20, 2009 - link

    Anyone know if the Vertex Anand used had 32M or 64M cache?
  • Dennis Travis - Friday, March 20, 2009 - link

    Excellent and informative article as always, Anand. Thanks so much for posting the truth!!
  • IsLNdbOi - Friday, March 20, 2009 - link

    Can't remember what page it was, but you showed some charts on the performance of SSDs at their lowest possible performance levels.

    At their lowest possible performance levels are they still faster than the 300GB Raptor?
  • Edgemeal - Friday, March 20, 2009 - link

    It's too bad Windows and applications don't let you select where frequently updated data gets stored. If that were an option, an SSD could be used just to load data (EXE files and support files) while an HDD stored files that are updated frequently. A web browser, for example, is constantly caching files; from the sound of this article, that would kill the performance of an SSD in no time.

    Great article, I'll stick to HDDs for now.
  • Luddite - Friday, March 20, 2009 - link

    So even with the TRIM command, when working with large files (say, saving multiple layers in Photoshop), will the performance still drop off?
