The Performance Degradation Problem

When Intel first released the X25-M, Allyn Malventano discovered a nasty corner case where the drive would no longer be able to run at its full potential. You basically had to hammer on the drive with tons of random writes for at least 20 minutes, but eventually the drive would reach a point of no return. Performance would remain low until you secure erased the drive.

Although it shouldn't appear in real world use, the worry was that over time a similar set of conditions could align, resulting in the X25-M performing slower than it should. Intel, having had plenty of experience with similar types of problems (e.g. FDIV, the 1.13GHz Pentium III), immediately began working on a fix and released it a couple of months after launch. The fix was non-destructive, although you saw much better performance if you secure erased your drive first.

SandForce has a similar problem, and I have you all and bit-tech to thank for pointing it out. In bit-tech's SandForce SSD reviews they test TRIM functionality by filling a drive with actual data (from a 500GB source including a Windows install, pictures, movies, documents, etc.). The drive is then TRIMed and performance is measured.

If you look at bit-tech's charts you'll notice that after going through this process, the SandForce drives no longer recover their performance after TRIM. They are stuck in a lower performance state, making the drives much slower when writing incompressible data.

You can actually duplicate the bit-tech results without going through all of that trouble. All you need to do is write incompressible data to all pages of a SandForce drive (user accessible LBAs + spare area), TRIM the drive and then measure performance. You'll get virtually the same results as bit-tech:

AS-SSD Incompressible Write Speed
                            Clean        Dirty (All Blocks + Spare Area Filled)    After TRIM
SandForce SF-1200 (120GB)   131.7 MB/s   70.3 MB/s                                 71 MB/s
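
If you want to approximate this torture test yourself, the idea is simply to stream non-repeating (and therefore incompressible) data at the drive until it's full, delete it, and let the OS TRIM the freed LBAs. Below is a minimal sketch of that procedure in Python; the mount point is hypothetical, it assumes a Linux system with fstrim available, and keep in mind that filling a filesystem only touches user-addressable LBAs, so fully dirtying the spare area generally takes repeated passes:

```python
import os
import subprocess

MOUNT_POINT = "/mnt/ssd"   # hypothetical mount point of the SSD under test
CHUNK = 8 * 1024 * 1024    # write in 8MB chunks

def fill_with_incompressible_data(path):
    """Stream random (incompressible) data until the volume is full."""
    fname = os.path.join(path, "fill.bin")
    with open(fname, "wb", buffering=0) as f:
        try:
            while True:
                f.write(os.urandom(CHUNK))  # random bytes defeat compression/dedup
        except OSError:
            pass                            # ENOSPC: the volume is full
    return fname

fill_file = fill_with_incompressible_data(MOUNT_POINT)
os.sync()                    # flush the page cache so the data actually reaches the NAND
os.remove(fill_file)         # free the LBAs at the filesystem level
subprocess.run(["fstrim", "-v", MOUNT_POINT], check=True)  # pass the freed LBAs to the drive as TRIMs
```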

The question is why.

I spoke with SandForce about the issue late last year. To understand the cause, we need to remember how SSDs work. When you go to write to an SSD, the controller must first determine where to write. When a drive is completely empty, this decision is pretty easy to make. When a drive is not completely full to the end user but all NAND pages are occupied (e.g. in a very well-used state), the controller must first supply a clean/empty block for you to write to.

When you fill an SF drive with incompressible data, you're filling all user-addressable LBAs as well as all of the drive's spare area. When the SF controller gets a request to overwrite one of these LBAs, the drive first has to clean a block and then write to it. It's this block recycling path that causes the aforementioned problem.

In the SF-1200, SandForce can only clean/recycle blocks at a rate of around 80MB/s. Typically this isn't an issue because you won't be in a situation where you're writing to a completely full drive (all user LBAs + spare area occupied with incompressible data). However, if you do create an environment where all blocks have data in them (which can happen over time) and then attempt to write incompressible data, the SF-1200 will be limited by its block recycling path.
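
A toy model makes the bottleneck easier to see: as long as clean blocks are available, writes run at full NAND speed; once every block holds data, each write has to wait on the recycling path, so throughput falls to the recycle rate. The numbers below are only illustrative (the NAND figure is just the SF-1200's clean write speed from the table above):

```python
# Illustrative model of a simplified controller's effective write speed.
NAND_WRITE_SPEED = 131.7  # MB/s - roughly the SF-1200's clean incompressible write speed
RECYCLE_SPEED = 80.0      # MB/s - the block recycling rate quoted for the SF-1200

def effective_write_speed(clean_block_available: bool) -> float:
    if clean_block_available:
        # A clean block is ready, so the write goes straight to NAND.
        return NAND_WRITE_SPEED
    # Every block holds data: a block must be recycled before the write
    # can land, so the recycling path becomes the ceiling.
    return min(NAND_WRITE_SPEED, RECYCLE_SPEED)

print(effective_write_speed(True))   # 131.7 -> the "Clean" column
print(effective_write_speed(False))  # 80.0  -> roughly the "Dirty" column
```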

So why doesn't TRIMing the entire drive restore performance?

Remember what TRIM does. The TRIM command simply tells the controller what LBAs are no longer needed by the OS. It doesn't physically remove data from the SSD, it just tells the controller that it can remove the aforementioned data at its own convenience and in accordance with its own algorithms.
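
A conceptual sketch of that distinction is below: TRIM only updates the controller's logical-to-physical map and marks pages as reclaimable, while actually erasing blocks is a separate step the controller takes whenever its own garbage collection decides to. This is a toy model, not SandForce's firmware:

```python
class ToyFTL:
    """Toy flash translation layer: maps logical block addresses to NAND pages."""

    def __init__(self):
        self.lba_to_page = {}       # LBA -> physical page currently holding its data
        self.invalid_pages = set()  # pages whose contents can be erased later

    def trim(self, lbas):
        # TRIM only invalidates mappings; no NAND block is erased here.
        for lba in lbas:
            page = self.lba_to_page.pop(lba, None)
            if page is not None:
                self.invalid_pages.add(page)

    def garbage_collect(self):
        # Erasing happens later, whenever the controller's own algorithm
        # decides the write amplification cost is worth paying.
        reclaimed = self.invalid_pages
        self.invalid_pages = set()
        return reclaimed
```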

The best drives clean dirty blocks as late as possible without impacting performance. Aggressive garbage collection only increases write amplification and wear on the NAND, and we've already established that SandForce doesn't collect garbage aggressively. Pair a conservative garbage collection/block recycling algorithm with an attempt to write tons of incompressible data to an already full drive, and you'll back yourself into a corner where the SF-1200 continues to be bottlenecked by its block recycling path. The only way to restore performance at this point is to secure erase the drive.

This is a real world performance issue on SF-1200 drives. Over time you'll find that when you go to copy a highly compressed file (e.g. H.264 video), your performance will drop to around 80MB/s. The rest of your performance, however, will remain as high as always. This issue only impacts data that can't be further compressed/deduped by the SF controller. While SandForce has attempted to alleviate it in the SF-1200, I haven't seen any real improvements with the latest firmware updates. If you're using your SSD primarily to copy and store highly compressed files, you'll want to consider another drive.
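
If you want to check whether your own drive has backed itself into this corner, timing the copy of an already-compressed file is enough; a result hovering around 80MB/s on an SF-1200 drive is the telltale sign. A minimal sketch (the file paths are hypothetical):

```python
import os
import time

SRC = "/data/movie.mkv"          # hypothetical: an already-compressed (H.264) source file
DST = "/mnt/ssd/movie_copy.mkv"  # hypothetical: destination on the SandForce drive
CHUNK = 8 * 1024 * 1024

start = time.time()
with open(SRC, "rb") as src, open(DST, "wb") as dst:
    while True:
        buf = src.read(CHUNK)
        if not buf:
            break
        dst.write(buf)
    dst.flush()
    os.fsync(dst.fileno())       # make sure the data actually reached the drive
elapsed = time.time() - start

size_mb = os.path.getsize(SRC) / (1024 * 1024)
print(f"{size_mb / elapsed:.1f} MB/s")  # ~80 MB/s on an SF-1200 points to the recycle-path bottleneck
```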

Luckily for SandForce, the SF-2500 controller alleviates the problem. Here I'm running the same test as above: filling all blocks of the Vertex 3 Pro with incompressible data and then measuring sequential write speed. There's a performance drop, but it's nowhere near as significant as what we saw with the SF-1200:

AS-SSD Incompressible Write Speed
                            Clean        Dirty (All Blocks + Spare Area Filled)    After TRIM
SandForce SF-1200 (120GB)   131.7 MB/s   70.3 MB/s                                 71 MB/s
SandForce SF-2500 (200GB)   229.5 MB/s   230.0 MB/s                                198.2 MB/s

It looks like SandForce has increased the speed of its block recycling engine, among other things, resulting in a much more respectable worst-case scenario of ~200MB/s.

Verifying the Fix

I was concerned that perhaps SandForce simply optimized for the manner in which AS-SSD and Iometer write incompressible data. In order to verify the results I took a 6.6GB 720p H.264 movie and copied it from an Intel X25-M G2 SSD to one of two SF drives. The first was a SF-1200 based Corsair Force F120, and the second was an OCZ Vertex 3 Pro (SF-2500).

I measured both clean performance and performance after I'd filled all blocks on the drive. The results are below:

6.6GB 720p H.264 File Copy (X25-M G2 Source to Destination)
                            Clean        Dirty (All Blocks + Spare Area Filled)    After TRIM
SandForce SF-1200 (120GB)   138.6 MB/s   78.5 MB/s                                 81.7 MB/s
SandForce SF-2500 (200GB)   157.5 MB/s   158.2 MB/s                                157.8 MB/s

As expected, the SF-1200 drive drops from 138MB/s down to 81MB/s. The drive is bottlenecked by its block recycling path and performance never climbs back beyond 81MB/s.

The SF-2000, however, doesn't drop in performance. Brand new performance is 157MB/s and post-torture it's still 157MB/s. What's interesting is that the incompressible file copy performance here is lower than what Iometer and AS-SSD would have you believe. Iometer warns that even its fully random data pattern can be defeated by drives with good data deduplication algorithms. Unless there's another bottleneck at work here, it looks like the SF-2000 is still reducing the data that Iometer writes to the drive. The AS-SSD comparison actually makes a bit more sense: AS-SSD runs at a queue depth of 32, while this simple file copy is mostly at a queue depth of 1. Higher queue depths make better use of parallel NAND channels and result in better performance.
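
If you want to see the queue depth effect for yourself, running the same sequential write at a queue depth of 1 and then 32 makes the gap obvious. A sketch using fio is below; it assumes fio is installed and that /mnt/ssd/testfile is a scratch file on the drive under test (both are my assumptions, not part of the testing above):

```python
import subprocess

def seq_write(iodepth: int) -> str:
    """Run a short sequential write test at the given queue depth with fio."""
    result = subprocess.run(
        [
            "fio",
            "--name=seqwrite",
            "--filename=/mnt/ssd/testfile",  # hypothetical scratch file on the drive under test
            "--rw=write",
            "--bs=128k",
            "--size=4g",
            "--direct=1",                    # bypass the page cache
            "--ioengine=libaio",
            f"--iodepth={iodepth}",
        ],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout                     # fio reports the achieved bandwidth

print(seq_write(1))    # queue depth 1: closer to the simple file copy numbers
print(seq_write(32))   # queue depth 32: closer to the AS-SSD numbers
```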

Comments

  • Out of Box Experience - Tuesday, February 22, 2011 - link

    Thanks for answering my question

    and you are right

    with over 50% of all PCs still running XP, it would indeed be stupid for the major SSD companies to overlook this important segment of the market

    with their new SSDs ready to launch for Windows 7 machines, they should be releasing plug and play replacements for all the XP machines out there any day now..................NOT!

    Are they stupid or what??

    no conspiracy here folks
    just the facts
  • Kjella - Thursday, February 24, 2011 - link

    Fact: Most computers end their life with the same hardware they started with. Only a small DIY market actually upgrades their hard disk and migrates their OS/data. So what if 50% runs XP? 49% of those won't replace their HDD with an SSD anyway. They might get a new machine with an SSD though, and almost all new machines get Windows 7 now.
  • Cow86 - Thursday, February 17, 2011 - link

    Very interesting indeed....good article too. One has to wonder though - looking at what is currently happening with 25 nm NAND in vertex 2 drives, which have lower performance and reliability than their 34 nm brethren ánd are sold at the same price without any indication - how the normal Vertex 3 will fare...Hoping they'll be as good in that regard as the original vertex 2's, and I may well indeed jump on the SSD bandwagon this year :) Been holding off for lower price (and higher performance, if I can get it without a big price hike); I want 160 GB to be able to have all my games and OS on there.
  • lecaf - Thursday, February 17, 2011 - link

    Vertex 3 with 25nm NAND will also suffer a performance loss.

    It is not the NAND itself having the issue but the number of chips. You get the same capacity with half the chips, so the controller has less opportunity to write in parallel.

    This is the same reason why, with Crucial's C300, the larger (256GB) drive is faster than the smaller (128GB) one.

    Speed will drop for smaller drives, but if the price goes down this will be counterbalanced by larger capacity, faster drives.

    The "if" is very questionable of course, considering that OCZ replaced the NAND on current Vertex 2 drives with no price cut (not even a change in part number; you just discover you got a slower drive after you mount it)
  • InsaneScientist - Thursday, February 17, 2011 - link

    Except that there are already twice as many chips as there are channels (8 channels, 16 NAND chips - see pg 3 of the article), so halving the number of chips simply brings the channel to chip ratio down to 1:1, which is hardly a problem.
    It's when you have unused channels that things slow down.
  • lecaf - Thursday, February 17, 2011 - link

    1:1 can be a problem... depending who is the bottleneck.

    If NAND speed saturates the channel bandwidth then I agree there is no issue, but if the channel has available bandwidth, it could use it to feed an extra NAND and speed up things.

    But that's theory ... check benchmarks here:
    http://www.storagereview.com/ocz_vertex_2_25nm_rev...
  • Chloiber - Thursday, February 17, 2011 - link

    It's possible to use 25nm chips with the same capacity, as OCZ is trying to do right now with the 25nm replacements of the Vertex 2.
  • Nentor - Thursday, February 17, 2011 - link

    Why are they making these flash chips smaller if it brings lower performance and reliability problems?

    What is wrong with 34nm?

    I can understand that with CPUs there are benefits like less heat and such, but with flash chips?
  • Zshazz - Thursday, February 17, 2011 - link

    It's cheaper to produce: less material is used and you get more chips out of each wafer.
  • semo - Thursday, February 17, 2011 - link

    OCZ should spend less time sending out drives with no housing and work on correctly marketing and naming their 25nm Vertex 2 drives.

    http://forums.anandtech.com/showthread.php?t=21433...

    How OCZ can get away with calling a 55GB drive "60GB" and then trying to bamboozle everyone with technicalities and SandForce marketing words and abbreviations is beyond me.

    It wasn't too long ago that they were in hot water over their JMicron-based Core drives, and now they're doing this?
