New vs Used SSD Performance

We begin our look at how the overhead of managing pages impacts SSD performance with Iometer. The table below shows Iometer random write performance; there are two results for each drive, one for "new" performance after a secure erase and one for "used" performance after the drive has been well used.

4KB Random Write Speed          New          "Used"
Intel X25-E                     31.7 MB/s    -
Intel X25-M                     39.3 MB/s    23.1 MB/s
JMicron JMF602B MLC             0.02 MB/s    0.02 MB/s
JMicron JMF602Bx2 MLC           0.03 MB/s    0.03 MB/s
OCZ Summit                      12.8 MB/s    0.77 MB/s
OCZ Vertex                      8.2 MB/s     2.41 MB/s
Samsung SLC                     2.61 MB/s    0.53 MB/s
Seagate Momentus 5400.6         0.81 MB/s    -
Western Digital Caviar SE16     1.26 MB/s    -
Western Digital VelociRaptor    1.63 MB/s    -


Note that the “used” performance should be the slowest you’ll ever see the drive get. In theory, all of the pages are filled with some sort of data at this point.
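A minimal sketch can show why a fully used drive writes slower: on a fresh drive a 4KB write simply programs one empty page, while on a full drive the controller may have to read the surviving pages of an erase block, erase the whole block, and rewrite everything plus the new page. All timings and sizes below are illustrative assumptions, not measured figures for any of these drives.

```python
# Hypothetical cost model for a 4KB write on a new vs. a full SSD.
# Page/block geometry and per-operation times are made-up but plausible.

PAGES_PER_BLOCK = 128  # e.g. 4KB pages in a 512KB erase block

def write_page_new(t_write=0.25):
    """Fresh drive: every page is empty, so just program one page (ms)."""
    return t_write

def write_page_used(t_read=0.1, t_erase=1.5, t_write=0.25):
    """Full drive, worst case: read the block's other pages, erase the
    whole block, then rewrite all of them plus the new page (ms)."""
    return (PAGES_PER_BLOCK - 1) * t_read + t_erase + PAGES_PER_BLOCK * t_write

print(write_page_new())   # 0.25 ms
print(write_page_used())  # roughly 46 ms in this worst-case model
```

Real controllers amortize this with spare area and background cleanup, which is why the measured "used" penalty is large but nowhere near this naive worst case.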

All of the drives, with the exception of the JMicron based SSDs, went down in performance in the "used" state. And the only reason the JMicron drives didn't get any slower is that they are already bottlenecked elsewhere; you can't get much slower than 0.03MB/s in this test.

These are pretty serious performance drops; the OCZ Vertex runs at less than a third of its new speed after it's been used, and Intel's X25-M can only crunch through about 60% of the IOs per second that it did when brand new.

So are SSDs doomed? Is performance going to tank over time and make these things worthless?

"Used" SSD performance vs. conventional hard drives.

Pay close attention to the average write latency in the graph above. While Intel's X25-M pulls an extremely fast sub-0.3ms write latency normally, it levels off at 0.51ms in its used state. The OCZ Vertex manages 1.43ms new and 4.86ms used. There's additional overhead for every write, but a well designed SSD will still manage extremely low write latencies. To put things in perspective, look at these drives at their worst compared to Western Digital's VelociRaptor. The degraded X25-M still completes write requests in around 1/8 the time of the VelociRaptor, and its transfer speeds are still 8x higher as well.
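As a back-of-the-envelope check, latency translates directly into single-request throughput: at a queue depth of one, IOPS is roughly 1000 divided by the latency in milliseconds. A quick sketch using the latencies quoted above (the one-outstanding-request assumption is mine, not part of the test methodology):

```python
# Approximate IOPS implied by average write latency, assuming exactly
# one outstanding request: IOPS ~= 1000 / latency_ms.
drives = [
    ("X25-M new",   0.3),
    ("X25-M used",  0.51),
    ("Vertex new",  1.43),
    ("Vertex used", 4.86),
]

for name, latency_ms in drives:
    print(f"{name}: ~{1000 / latency_ms:.0f} IOPS")
```

Even the "used" X25-M still implies nearly 2,000 small writes per second, an order of magnitude beyond what a seeking mechanical drive can sustain.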

Note that not all SSDs see their performance drop gracefully. The two Samsung-based drives perform more like hard drives here, but I'll explain that tradeoff much later in this article.

How does this all translate into real world performance? I ran PCMark Vantage on the drives in their new and used states to see how performance changed.

PCMark Overall Score            New      "Used"   % Drop
Intel X25-M                     11902    11536    3%
OCZ Summit                      10972    9916     9.6%
OCZ Vertex                      11253    9836     14.4%
Samsung SLC                     10143    9118     10.1%
Seagate Momentus 5400.6         6817     -        -
Western Digital VelociRaptor    7500     -        -


The real world performance hit varies from about 3 - 14% depending on the drive. While the drives are still faster than a regular hard drive, performance does drop in the real world by a noticeable amount. The TRIM command would keep the drive's performance closer to its peak for longer, but it would not have prevented this from happening.
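To make the TRIM discussion concrete, here's a toy sketch (my own illustration, not any vendor's actual firmware) of what the command changes: without TRIM, the controller only learns that a deleted file's pages are dead when the OS eventually overwrites those LBAs; with TRIM, the filesystem says so at delete time, letting garbage collection reclaim space before a later write has to stall.

```python
# Toy flash-translation-layer model showing what TRIM enables.
# All names and sizes are hypothetical illustrations.

class Controller:
    def __init__(self, total_pages=1024):
        self.valid = {}                          # lba -> physical page
        self.free_pages = set(range(total_pages))

    def write(self, lba):
        """Map an LBA to a fresh page; when free_pages runs dry, a real
        drive must stall here to garbage-collect before writing."""
        page = self.free_pages.pop()
        self.valid[lba] = page
        return page

    def trim(self, lba):
        """OS hint: the data at this LBA was deleted. The controller can
        reclaim the page now instead of waiting for an overwrite."""
        page = self.valid.pop(lba, None)
        if page is not None:
            self.free_pages.add(page)  # erased in the background, reusable
```

A real controller erases at block granularity and does this in the background; the point is only that TRIM moves invalidation from write time to delete time, keeping the pool of clean pages from shrinking to nothing.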

PCMark Vantage HDD Test         New      "Used"   % Drop
Intel X25-M                     29879    23252    22%
JMicron JMF602Bx2 MLC           11613    11283    3%
OCZ Summit                      25754    16624    36%
OCZ Vertex                      20753    17854    14%
Samsung SLC                     17406    12392    29%
Seagate Momentus 5400.6         3525     -        -
Western Digital VelociRaptor    6313     -        -


The HDD specific tests show much more severe drops, ranging from roughly 14 - 36% depending on the drive (the already-bottlenecked JMicron drive loses just 3%). Despite the performance drop, these drives are still much faster than even the fastest hard drives.



Comments

  • MagicPants - Wednesday, March 18, 2009 - link

    Don't they ever try using their own devices? One second of latency should slap any user in the face. It should be very easy for a manufacturer to build a system with their new technology, put it in front of people, and see what happens, but apparently they're not doing this.

    They wait for reviewers to do the work for them and then get upset when they find a problem.

    What the manufacturers should be taking away from this article is:

    1) Try your competitor's products
    2) Try your own products
    3) Try them in real life as opposed to synthetic tests
    4) Compare everything you've tried and market the performance that matters
  • 7Enigma - Thursday, March 19, 2009 - link

    But that would make sense... and we know marketing rarely does.
  • paulinus - Wednesday, March 18, 2009 - link

    That article is great. Finally someone has done SSD tests right and said out loud what we, the customers, actually get for those hefty price tags.
    I had assumed the only real choices were Intel and the new OCZs. Now I know, and big kudos for that.
    Just need a bit more $$ for an X25-M, it'll be ideal for heavy workstation use, and the biggest Vertex will replace the WD Black in my aging 6910p :)
  • punjabiplaya - Wednesday, March 18, 2009 - link

    Great info. I'm looking to get an SSD but was put off by all these setbacks. Why should I put away my HDDs and get something a million times more expensive that stutters?
    This article is why I visit AT first.
  • Hellfire26 - Wednesday, March 18, 2009 - link

    Anand, when you filled up the drives to simulate a full drive, did you also write to the extended area that is reserved? If you didn't, wouldn't the Intel SLC drive (as an example) not show as much of a performance drop, versus the MLC drive? As you stated, Intel has reserved more flash memory on the SLC drive, above the stated SSD capacity.

    I also agree with GourdFreeMan, that the physical block size needs to be reduced. Due to the constant erasing of blocks, the Trim command is going to reduce the life of the drive. Of course, drive makers could increase the size of the cache and delay using the Trim command until the number of blocks to be erased equals the cache available. This would more efficiently rearrange the valid data still present in the blocks that are being erased (less writes). Microsoft would have to design the Trim command so it would know how much cache was available on the drive, and drive makers would have to specifically reserve a portion of their cache for use by the Trim command.

    I also like Basilisk's comment about increasing the cluster size, although if you increase it too much, you are likely to waste space and increase overhead. Surely, even if MS only doubles the cluster size for NTFS partitions to 8KB, write cycles to SSDs would be reduced. There is also the difference between 32-bit and 64-bit operating systems to consider. However, I don't have the knowledge to say whether Microsoft can make these changes without running into serious problems with other aspects of the operating system.
  • Anand Lal Shimpi - Wednesday, March 18, 2009 - link

    I only wrote to the LBAs reported to the OS. So on the 80GB Intel drive that's from 0 - 74.5GB.

    I didn't test the X25-E as extensively as the rest of the drives so I didn't look at performance degradation as closely just because I was running out of time and the X25-E is sooo much more expensive. I may do a standalone look at it in the near future.

    Take care,
  • gss4w - Wednesday, March 18, 2009 - link

    Has anyone at Anandtech talked to Microsoft about when the "Trim" command will be supported in Windows 7? Also, it would be great if you could include some numbers from the Windows 7 beta when you do a follow-up.

    One reason I ask is that I searched for "Windows 7 ssd trim" and I saw a presentation from WinHEC that made it sound like support for the trim command would be a requirement for SSD drives to meet the Windows 7 logo requirements. I would think if this were the case then Windows 7 would have support for trim. However, this article made it sound like support for Trim might not be included when Windows 7 is initially released, but would be added later.

  • ryedizzel - Thursday, March 19, 2009 - link

    I think it is obvious that Windows 7 will support TRIM. The bigger question this article points out is whether or not the current SSDs will be upgradeable via firmware, which is more important for consumers wanting to buy one now.
  • Martimus - Wednesday, March 18, 2009 - link

    It took me an hour to read the whole thing, but I really enjoyed it. It reminded me of the time I spent testing circuitry and doing root cause analysis.
  • alpha754293 - Wednesday, March 18, 2009 - link

    I think it would be interesting if you were able to test the drives on the "desktop/laptop/consumer" front by writing an 8 GiB file using 4 KiB block sizes for the desktop pattern, and then test the drives with larger file and block sizes for the server/workstation pattern as well.

    You present some very very good arguments and points, and I found your article to be thoroughly researched and well put.

    So I do have to commend you on that. You did an excellent job. It is thoroughly enjoyable to read.

    I'm currently looking at a 4x 256 GB Samsung MLC on Solaris 10/ZFS for apps/OS (for PXE boot), and this does a lot of the testing; but I would be interested to see how it would handle more server-type workloads.
