SSD Aging: Read Speed is Largely Unaffected

Given the nature of the SSD performance-over-time “problem,” you’d expect to pay the performance penalty only when writing files, not when reading. And for once, I don’t have any weird exceptions to talk about - this is generally the case.

The table below shows sequential read performance for 2MB blocks on new vs. “used” SSDs. I even included data for a couple of the hard drives in the "Used" column; for those numbers I'm simply measuring transfer rates from the slowest parts of the platter:

2MB Sequential Read Speed        New          "Used"
Intel X25-E                      -            240.1 MB/s
Intel X25-M                      264.1 MB/s   230.2 MB/s
JMicron JMF602B MLC              134.7 MB/s   134.7 MB/s
JMicron JMF602Bx2 MLC            164.1 MB/s   164.1 MB/s
OCZ Summit                       248.6 MB/s   208.6 MB/s
OCZ Vertex                       257.8 MB/s   250.1 MB/s
Samsung SLC                      -            101.4 MB/s
Seagate Momentus 5400.6          77.9 MB/s    -
Western Digital Caviar SE16      104.6 MB/s   54.3 MB/s
Western Digital VelociRaptor     118.0 MB/s   79.2 MB/s

The best SSDs still transfer data at over 2x the rate of the VelociRaptor.
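
If you want to sanity-check numbers like these on your own hardware, a minimal sketch along the following lines will do. This is not the setup used for the table above, just an illustrative stand-in; it assumes Linux, Python 3 and a hypothetical scratch device path like /dev/sdb, and it uses O_DIRECT so you measure the drive rather than the OS page cache (reading a raw block device generally requires root):

#!/usr/bin/env python3
"""Sketch: 2MB-block sequential read throughput on a raw block device.

Assumptions (not from the article): Linux, Python 3.6+, root access, and
a device path such as /dev/sdb.
"""
import mmap
import os
import sys
import time

BLOCK_SIZE = 2 * 1024 * 1024      # 2MB blocks, matching the test above
TOTAL_BYTES = 1024 * 1024 * 1024  # read 1GB worth of blocks

def sequential_read_mbps(device_path):
    fd = os.open(device_path, os.O_RDONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, BLOCK_SIZE)   # anonymous mmap = page-aligned, as O_DIRECT requires
    try:
        start = time.perf_counter()
        total = 0
        while total < TOTAL_BYTES:
            n = os.readv(fd, [buf])   # read one 2MB block into the aligned buffer
            if n == 0:                # reached the end of the device
                break
            total += n
        elapsed = time.perf_counter() - start
    finally:
        buf.close()
        os.close(fd)
    return (total / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdb"
    print(f"{dev}: {sequential_read_mbps(dev):.1f} MB/s sequential read (2MB blocks)")

Point it at the device itself rather than a file on a mounted filesystem, so filesystem caching and fragmentation don't color the result.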

Read latency is also extremely good on these worn SSDs:

I left the conventional hard drives out of the chart simply because they completely screw up the scale. The VelociRaptor has a latency of 7.2ms in this Iometer test with a queue depth of 3 IOs; that's an order of magnitude slower than the slowest SSD here.
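
For readers who want to approximate that kind of latency test, here's a rough sketch in the same spirit - again, not the actual Iometer configuration, just an assumed stand-in that issues 4KB random reads from three threads to mimic a queue depth of 3 against a hypothetical /dev/sdb:

#!/usr/bin/env python3
"""Sketch: 4KB random read latency at an effective queue depth of 3.

Assumptions (not from the article): Linux, Python 3.7+, root access, and
a device path such as /dev/sdb. Three threads each keep one positioned
read outstanding at a time.
"""
import mmap
import os
import random
import statistics
import sys
import threading
import time

READ_SIZE = 4096          # 4KB random reads
READS_PER_THREAD = 2000
QUEUE_DEPTH = 3           # three worker threads ~= queue depth of 3

def worker(fd, device_bytes, samples):
    buf = mmap.mmap(-1, READ_SIZE)          # page-aligned buffer for O_DIRECT
    blocks = device_bytes // READ_SIZE
    for _ in range(READS_PER_THREAD):
        offset = random.randrange(blocks) * READ_SIZE
        t0 = time.perf_counter()
        os.preadv(fd, [buf], offset)        # positioned read, bypasses the page cache
        samples.append(time.perf_counter() - t0)
    buf.close()

def main(device_path):
    fd = os.open(device_path, os.O_RDONLY | os.O_DIRECT)
    device_bytes = os.lseek(fd, 0, os.SEEK_END)
    samples = []
    threads = [threading.Thread(target=worker, args=(fd, device_bytes, samples))
               for _ in range(QUEUE_DEPTH)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    os.close(fd)
    print(f"{device_path}: mean 4KB random read latency "
          f"{statistics.mean(samples) * 1000:.2f} ms over {len(samples)} reads")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "/dev/sdb")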

Since you only pay the overhead penalty when you go to write to a previously-written block, the performance degradation only really occurs when you’re writing - not when you’re reading.

Your OS is always writing to your drive, which is why we see a performance impact even when you’re just launching applications and opening files - but the penalty is far less tangible when it comes to read performance.
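
For reference, the “used” state in these tests corresponds to a drive whose blocks have all been written to at least once, so that any subsequent write hits a previously-written block. If you want to force that state on a spare drive before re-running the read tests yourself, a minimal (and destructive) sketch looks something like this - again an assumption-laden illustration for Linux/Python and a hypothetical /dev/sdb, not the exact tooling used here:

#!/usr/bin/env python3
"""Sketch: write every block on a scratch device once to put it in a
"used" state. WARNING: this destroys all data on the target device."""
import mmap
import os
import sys

BLOCK_SIZE = 2 * 1024 * 1024   # write in 2MB chunks

def fill_device(device_path):
    fd = os.open(device_path, os.O_WRONLY | os.O_DIRECT)
    size = os.lseek(fd, 0, os.SEEK_END)
    os.lseek(fd, 0, os.SEEK_SET)
    buf = mmap.mmap(-1, BLOCK_SIZE)      # page-aligned buffer for O_DIRECT
    buf.write(os.urandom(BLOCK_SIZE))    # non-zero data, written repeatedly
    written = 0
    # Ignores any partial block at the very end of the device.
    while written + BLOCK_SIZE <= size:
        written += os.writev(fd, [buf])
    os.close(fd)
    print(f"Wrote {written // (1024 * 1024)} MB to {device_path}")

if __name__ == "__main__":
    fill_device(sys.argv[1] if len(sys.argv) > 1 else "/dev/sdb")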

Comments (250)

  • sotoa - Friday, April 3, 2009

    Long time reader, first time post.
    I really liked the background story and appreciate how Anand delves deep into the SSDs (as well as other products in other articles).

    Thanks for looking out for the little guy!
    Keep up the great work!
  • siliq - Wednesday, April 1, 2009

    With Anand's excellent article, it's clear that sequential read/write throughput doesn't matter so much - all SSDs, even the notorious JMicron series, do a good job on that metric. What matters for daily use is random write performance; latencies and IOs per second are the most important metrics in the realm of SSDs.

    Based on that, I would suggest that Anand (and other tech reporters) include a real-world test of random write performance for SSDs. Current real-world tests - booting Windows, loading games, rendering 3D, etc. - focus on random reads. Measuring how long it takes to install Windows, Microsoft Visual Studio, or a 4GB PC game would thoroughly test random write and latency performance. I think this would be a good complement to the current testing methodology.
  • Sabresiberian - Tuesday, March 31, 2009

    Just wanted to add my thanks to Anand for this article in particular and for the quality work he has done over the years; I am so grateful for Anandtech's quality and information and the fact that it has been maintained!
  • Sabresiberian - Tuesday, March 31, 2009

    Oops didn't proof, sorry about the misspell Anand!
  • hongmingc - Saturday, March 28, 2009

    Anand, this is a great article and a good story too.
    The OCZ story caught my attention - a quick firmware upgrade made a big improvement. From my understanding, SSD system designers have to trade off Space, Speed, and Durability (also SSD :)) due to the nature of NAND flash.
    We can clearly see the trade-off of Space and Speed: the fuller the SSD gets, the slower it becomes (this is due to out-of-place writes used to speed up write operations, plus a block reclaim routine). However, Speed is also sacrificed to achieve Durability (by doing wear leveling). Remember that SLC NAND's lifetime is about 100K writes, while MLC NAND has only about 10K writes. Without wear leveling to improve the life cycle of the SSD, the firmware could be much simpler, which would improve write speed quite a bit.
    I echo your point that the performance test should reflect users' daily usage, which may mean small-file writes on a drive that isn't 80% full.
    However, users may be more concerned about Durability - the life cycle of the SSD.
    Is there such a test? How long will the black box OCZ Vertex live?
    How long will the regular OCZ Vertex live? And how long will the X25 live?
  • antcasq - Sunday, April 5, 2009

    This article was excellent, explaining several issues regarding performance.

    It would be great if the next article about SSDs addressed durability and reliability.

    My main concern is the swap partition (Linux) or virtual memory file (Windows). I found a post on another website saying that this is not an issue. Is it true? I find it hard to believe. Maybe in a real-world test/scenario the problem will arise.
    http://robert.penz.name/137/no-swap-partition-jour...

    I hope AnandTech can take my concerns into consideration.

    Best regards
  • stilz - Friday, March 27, 2009

    This is the first hardware review I've read from start to finish, and the time was well worth it for the information you've provided.

    Thank you for your honest, professional and knowledgeable work. Also kudos to OCZ, I'll definitely consider the Vertex while making purchases.
  • Bytales - Friday, March 27, 2009

    As I read the article, I'm thinking of ways to slow down the degrading process. Intel is going to ship a 320GB X25-M this year. If I buy this drive and use it as an OS drive, I obviously won't need the whole 320GB. Say I would need only 40 to 50GB. I could do a secure erase (if the drive isn't new), make a 50GB partition, and leave the remaining space unpartitioned. Will that solve the problem in any way?
    Another way to solve the problem would be a method inside the OS. The OS could use a user-controlled percentage of RAM as a cache for those small 4KB files. Since RAM reads and writes are way faster, I think it would also help. Say you have 8GB of RAM and use 2GB for this purpose; the OS would only have 6GB for its own use, while 2GB is used for these smaller files. That would also increase the lifespan of the SSD. Is this possible?
  • Hellfire26 - Thursday, March 26, 2009

    In reference to SSDs, I have read a lot of articles and comments about improved firmware and operating system support. I hope manufacturers don't forget about the on-board RAID controller.

    From the articles and comments made by users around the web who have tested SSDs in a RAID 0 configuration, I believe that two Intel X25-M SSDs in RAID 0 would more than saturate current on-board RAID controllers.

    Intel is doing a die shrink of the NAND memory going into their SSDs this fall. I would expect these new Intel SSDs to show faster read and write times. Other manufacturers will also find ways to increase the speed of their SSDs.

    SSDs scale well in a RAID configuration. It would be a shame if the on-board RAID controller limited our throughput. The alternative would be very expensive add-in RAID cards.
  • FlaTEr1C - Wednesday, March 25, 2009

    Anand, once again you wrote an article that no one else could have written. This is why I've been reading this site since 2004 and always will. Your articles and reviews are without exception unique and a must-read. Thank you for this thorough background, analysis and review of SSDs.

    I've been looking for a long time for a way to make my desktop experience faster, and I think I'll order a 60GB Vertex. €200 (in Germany) is still a lot of money, but it will be worth it.

    Once again, great work Anand!
