The Blind SSD

Modern OSes talk to hard drives using logical block addressing. Although hard drives are rotational media, logical block addressing organizes the drive's sectors linearly. When you save a file, Windows simply issues a write command for that file to a specific logical block address, say LBA 15 for example.

Your OS knows which LBAs are available and which are occupied. When you delete a file, the LBAs that point to that file's data on your hard disk are marked as available. The data you've deleted hasn't actually been removed; it doesn't get wiped until those sectors on the drive are actually overwritten.
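As a toy sketch of this behavior (the class and method names here are hypothetical, not any real filesystem API), deletion only touches the allocation map; the sector contents survive until something overwrites them:

```python
# Toy model of a filesystem's view of LBAs: deleting a file frees its
# LBAs in the allocation map, but the on-disk contents are untouched.

class ToyFilesystem:
    def __init__(self, num_lbas):
        self.sectors = [b""] * num_lbas      # raw on-disk contents per LBA
        self.free = set(range(num_lbas))     # LBAs the OS considers available
        self.files = {}                      # filename -> list of LBAs

    def write_file(self, name, chunks):
        lbas = [self.free.pop() for _ in chunks]
        for lba, chunk in zip(lbas, chunks):
            self.sectors[lba] = chunk        # actual write to the medium
        self.files[name] = lbas

    def delete_file(self, name):
        # Only the allocation map changes; the data stays until overwritten.
        self.free.update(self.files.pop(name))

fs = ToyFilesystem(8)
fs.write_file("a.txt", [b"hello"])
lba = fs.files["a.txt"][0]
fs.delete_file("a.txt")
print(fs.sectors[lba])  # b'hello' -- still there, which is why undelete tools work
```

This is exactly the property file recovery programs exploit: the mapping is gone, but the bytes are not.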

Believe it or not, SSDs actually work the same way.

The flash translation layer in an SSD controller maps LBAs to pages on the drive. The table below shows what happens to the data on the drive depending on the action in the OS:

Action in the OS | Reaction on a HDD                 | Reaction on an SSD
File Create      | Write to a Sector                 | Write to a Page
File Overwrite   | Write new data to the same Sector | Write to a Different Page if possible, else Erase Block and Write to the Same Page
File Delete      | Nothing                           | Nothing

When you delete a file in your OS, there is no reaction from either a hard drive or an SSD. It isn't until you overwrite the sector (on a hard drive) or page (on an SSD) that you actually lose the data. File recovery programs use this property to their advantage, and that's how they help you recover deleted files.

The key distinction between HDDs and SSDs, however, is what happens when you overwrite a file. While a HDD can simply write the new data to the same sector, an SSD will allocate a new (or previously used) page for the overwritten data. The page that contains the now invalid data will simply be marked as invalid, and at some point it'll get erased.
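The overwrite path in the table above can be sketched roughly like this (a minimal toy model with made-up names, not any vendor's actual FTL, and ignoring the erase-block granularity a real drive must manage):

```python
# Toy flash translation layer: an overwrite goes to a fresh page
# (out-of-place write); the page holding the stale copy is only marked
# invalid and is reclaimed later, when its block is erased.

class ToyFTL:
    def __init__(self, num_pages):
        self.pages = [None] * num_pages     # physical page contents
        self.mapping = {}                   # LBA -> physical page
        self.free_pages = list(range(num_pages))
        self.invalid = set()                # stale pages awaiting block erase

    def write(self, lba, data):
        old = self.mapping.get(lba)
        page = self.free_pages.pop(0)       # allocate a fresh page
        self.pages[page] = data
        self.mapping[lba] = page            # remap the LBA to the new page
        if old is not None:
            self.invalid.add(old)           # old copy marked invalid, not erased

ftl = ToyFTL(4)
ftl.write(15, b"v1")
ftl.write(15, b"v2")                        # overwrite of LBA 15
print(ftl.pages[ftl.mapping[15]])           # b'v2' -- new page holds live data
print(ftl.invalid)                          # {0} -- old page flagged for cleanup
```

Note that the drive runs out of fresh pages as invalid ones pile up, which is where the erase-before-write penalty described in this article comes from.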


235 Comments


  • siliq - Wednesday, April 01, 2009 - link

    With Anand's excellent article, it's clear that sequential read/write throughput doesn't matter so much - all SSDs, even the notorious JMicron series, can do a good job on that metric. What is relevant to our daily use is the random write rate. Latencies and IOs/second are the most important metrics in the realm of SSDs.

    Based on that, I would suggest that Anand (and other tech reporters) include a real-world test evaluating Random Write performance for SSDs. Current real-world tests (booting Windows, loading games, rendering 3D, etc.) focus on random reads. However, measuring how long it takes to install Windows, Microsoft Visual Studio, or a 4 GB PC game would thoroughly test Random Write / Latency performance. I think this would be a good complement to the current testing methodology
    Reply
  • Sabresiberian - Tuesday, March 31, 2009 - link

    Just wanted to add my thanks to Anand for this article in particular and for the quality work he has done over the years; I am so grateful for Anandtech's quality and information and the fact that it has been maintained! Reply
  • Sabresiberian - Tuesday, March 31, 2009 - link

    Oops didn't proof, sorry about the misspell Anand! Reply
  • hongmingc - Saturday, March 28, 2009 - link

    Anand, this is a great article and a good story too.
    The OCZ story caught my attention: a quick firmware upgrade made a big improvement. From my understanding, SSD system designers try to trade off Space, Speed, and Durability (also SSD :)) due to the nature of NAND flash.
    We can clearly see the trade-off of Space and Speed in that the fuller an SSD gets, the slower it becomes (this is due to out-of-place writes and the block reclaim routine). However, Speed is also sacrificed to achieve Durability (by doing wear leveling). Remember, SLC NAND's lifetime is about 100K writes, while MLC NAND has only about 10K writes. Without wear leveling to improve the life cycle of the SSD, the firmware could be much simpler, which would improve write speed quite a bit.
    I echo you that the performance test should reflect users' daily usage, which may mean many small file writes and a drive that isn't 80% full.
    However, users may be more concerned about Durability, the life cycle of the SSD.
    Is there such a test? How long will the black box OCZ Vertex live?
    How long will the regular OCZ Vertex live? and How long will the X25 live?
    Reply
  • antcasq - Sunday, April 05, 2009 - link

    This article was excellent, explaining several issues regarding performance.

    It would be great if the next article about SSDs addresses durability and reliability.

    My main concern is the swap partition (Linux) or virtual memory file (Windows). I found a post on another website saying that this is not an issue. Is it true? I find it hard to believe. Maybe in a real-world test/scenario the problem will arise.
    http://robert.penz.name/137/no-swap-partition-jour...

    I hope AnandTech can take my concerns into consideration.

    Best regards
    Reply
  • stilz - Friday, March 27, 2009 - link

    This is the first hardware review I've read from start to finish, and the time spent was well worth the information you've provided.

    Thank you for your honest, professional and knowledgeable work. Also kudos to OCZ, I'll definitely consider the Vertex while making purchases.
    Reply
  • Bytales - Friday, March 27, 2009 - link

    As I read the article, I'm thinking of ways to slow down the degradation process. Intel is going to ship the 320GB X25-M this year. If I buy this drive and use it as an OS drive, I obviously won't need the whole 320GB. Say I need only 40 to 50GB. I could do a secure erase (if the drive isn't new), make a partition of 50GB, and leave the remaining space unpartitioned. Would that solve the problem in any way?
    Another way to solve the problem would be a method inside the OS. The OS could use a user-controlled percentage of RAM as a cache for those small 4KB files. Since RAM reads and writes are way faster, I think that would also help. Say you've got 8GB of RAM and use 2GB for this purpose; the OS would then have only 6GB of RAM for its own use, while 2GB is used for these smaller files. That would also increase the lifespan of the SSD. Could this be possible?
    Reply
  • Hellfire26 - Thursday, March 26, 2009 - link

    In reference to SSDs, I have read a lot of articles and comments about improved firmware and operating system support. I hope manufacturers don't forget about the on-board RAID controller.

    From the articles and comments made by users around the web who have tested SSDs in a RAID 0 configuration, I believe that two Intel X25-M SSDs in RAID 0 would more than saturate current on-board RAID controllers.

    Intel is doing a die shrink of the NAND memory that is going into their SSDs come this fall. I would expect these new Intel SSDs to show faster read and write times. Other manufacturers will also find ways to increase the speed of their SSDs.

    SSDs scale well in a RAID configuration. It would be a shame if the on-board RAID controller limited our throughput. The alternative would be very expensive add-in RAID cards.
    Reply
  • FlaTEr1C - Wednesday, March 25, 2009 - link

    Anand, once again you've written an article that no one else could have written. This is why I've been reading this site since 2004 and always will. Your articles and reviews are without exception unique and a must-read. Thank you for this thorough background, analysis and review of SSDs.

    I was looking a long time for a solution to make my desktop experience faster, and I think I'll order a 60GB Vertex. €200 (Germany) is still a lot of money, but it will be worth it.

    Once again, great work Anand!
    Reply
  • blackburried - Wednesday, March 25, 2009 - link

    It's referred to as "discard" in the kernel functions.

    It works very well with SSDs that support TRIM, like Fusion-io's drives.
    Reply
