Still Resilient After Truly Random Writes

In our Agility 2 review I did what you all asked: used a newer build of Iometer to not only write data in a random pattern, but write data composed of truly random bits in an effort to defeat SandForce’s data deduplication/compression algorithms. What we saw was a dramatic reduction in performance:

Iometer Performance Comparison - 4K Aligned, 4KB Random Write Speed
                                      Normal Data    Random Data    % of Max Perf
Corsair Force 100GB (SF-1200 MLC)     164.6 MB/s     122.5 MB/s     74.4%
OCZ Agility 2 100GB (SF-1200 MLC)     44.2 MB/s      46.3 MB/s      105%

Iometer Performance Comparison - Corsair Force 100GB (SF-1200 MLC)
                         Normal Data    Random Data    % of Max Perf
4KB Random Read          52.1 MB/s      42.8 MB/s      82.1%
2MB Sequential Read      265.2 MB/s     212.4 MB/s     80.1%
2MB Sequential Write     251.7 MB/s     144.4 MB/s     57.4%
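The “% of Max Perf” column in both tables is simply random-data throughput divided by normal-data throughput. As a quick sanity check on the figures above (a throwaway sketch, using numbers taken straight from the tables):

```python
# "% of Max Perf" = throughput with incompressible data divided by
# throughput with easily compressible data, from the tables above.
results = {
    "Corsair Force 4KB random write": (164.6, 122.5),
    "Corsair Force 2MB sequential write": (251.7, 144.4),
}
for name, (normal_mbs, random_mbs) in results.items():
    print(f"{name}: {random_mbs / normal_mbs:.1%}")
```

122.5/164.6 works out to 74.4% and 144.4/251.7 to 57.4%, matching the tables.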

While I don’t believe that’s representative of what most desktop users would see, it does give us a range of performance we can expect from these drives. It also gave me another idea.
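SandForce’s actual dedupe/compression pipeline is proprietary, but the effect Iometer’s truly random data has on it can be illustrated with any general-purpose compressor. Here’s a minimal Python sketch with zlib standing in for the controller’s (undisclosed) algorithm:

```python
import os
import zlib

BLOCK = 4096  # matches the 4KB Iometer transfer size

# "Normal" data: a repetitive fill pattern, the easy case for the controller
repetitive = bytes(BLOCK)

# "Random" data: every bit pulled from the OS entropy pool
random_bits = os.urandom(BLOCK)

# zlib is only a stand-in here; SandForce does not disclose its algorithm
for name, buf in (("repetitive", repetitive), ("random", random_bits)):
    out = zlib.compress(buf)
    print(f"{name}: {len(buf)} B in, {len(out)} B out")
```

The repetitive block shrinks to a handful of bytes while the random block actually grows slightly; a controller that normally writes less to NAND than the host sends it loses that advantage entirely on this kind of data.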

To test the effectiveness and operation of TRIM, I usually write a large amount of data to random LBAs on the drive for a long period of time. I then perform a sequential write across the entire drive and measure performance, TRIM the entire drive, and measure performance again. In the case of SandForce drives, if the applications I’m using to write randomly and sequentially produce data that’s easily compressible, the test isn’t that valuable. Luckily, with our new build of Iometer I had a way to really test how much of a performance reduction we can expect from a SandForce drive over time.

I used Iometer to write randomly generated 4KB data to random locations across the entire LBA range of the Vertex 2 for 20 minutes. I then used Iometer to sequentially write randomly generated data over the entire LBA range of the drive. At this point every LBA had been touched, both as far as the user is concerned and as far as the NAND is concerned; because none of the data was compressible, the drive had to physically write at least as much to NAND as we sent it.
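The two-phase fill described above can be sketched in a few lines of Python. A small scratch file stands in for the drive’s raw LBA space here; the path and sizes are illustrative, and Iometer issues its I/O against the block device directly rather than through a file:

```python
import os
import random

BLOCK = 4096                 # 4KB transfer size, as in the Iometer run
DRIVE_SIZE = 64 * BLOCK      # tiny stand-in for the drive's 100GB LBA range
PATH = "fake_drive.bin"      # hypothetical scratch file, not a real block device

with open(PATH, "wb") as f:
    f.truncate(DRIVE_SIZE)

    # Phase 1: randomly generated 4KB blocks written to random aligned LBAs
    for _ in range(256):
        f.seek(random.randrange(DRIVE_SIZE // BLOCK) * BLOCK)
        f.write(os.urandom(BLOCK))

    # Phase 2: a sequential pass of random data over every LBA, so the
    # whole drive is dirty from both the host's and the NAND's perspective
    f.seek(0)
    for _ in range(DRIVE_SIZE // BLOCK):
        f.write(os.urandom(BLOCK))
```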

Using HDTach, I measured performance across the entire drive:

The sequential read test is reading back the highly random data we wrote all over the drive, and you’ll note it takes a definite performance hit.

Performance is still respectably high and if you look at write speed, there are no painful blips that would result in a pause or stutter during normal usage. In fact, despite the unrealistic workload, the drive proves to be quite resilient.

TRIMing all LBAs restores performance to new:

The takeaway? While SandForce’s controllers aren’t immune to performance degradation over time, we’re still talking about speeds over 100MB/s even in the worst-case scenario, and with TRIM the drive bounces back immediately.

I’m quickly gaining confidence in these drives. It’s just a matter of whether or not they hold up over time at this point.

The Test

With the differences out of the way, the rest of the story is pretty well known by now. The Vertex 2 gives you a definite edge in small file random write performance, and maintains the already high standards of SandForce drives everywhere else.

The real world impact of the high small file random write performance is negligible for a desktop user. I’d go so far as to argue that we’ve reached the point of diminishing returns in boosting small file random write speed for the majority of desktop users. It won’t be long before we have to start thinking of new workloads to really stress these drives.

I've trimmed down some of our charts, but as always if you want a full rundown of how these SSDs compare against one another be sure to use our performance comparison tool: Bench.

CPU: Intel Core i7 965 running at 3.2GHz (Turbo & EIST Disabled)
Motherboard: Intel DX58SO (Intel X58)
Chipset: Intel X58 + Marvell SATA 6Gbps PCIe
Chipset Drivers: Intel 9.1.1.1015 + Intel IMSM 8.9
Memory: Qimonda DDR3-1333 4 x 1GB (7-7-7-20)
Video Card: eVGA GeForce GTX 285
Video Drivers: NVIDIA ForceWare 190.38 64-bit
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64

44 Comments


  • carleeto - Thursday, April 29, 2010

    I don't think people are going to use an SSD for music and movies any time soon. At least, not until the price per GB falls within 200% of a normal hard drive. Where I could see this kind of thing being used a lot on an SSD is with a Truecrypt partition that is used to store source code, documents, mail etc. That's a lot of small writes and reads, and the result, because of the encryption layer, is really quite random. So I'd actually disagree with Anand here - it is something that is going to be quite relevant to a security conscious user, and that is quite a large market when you factor in enterprises.
  • NandFlashGuy - Wednesday, April 28, 2010

    At my workplace, all PCs have PGP software installed. That should make the data in all writes to disk look like random data, meaning less than optimal performance.

    Anand, can you measure performance under the normal benchmarks with PGP installed? It's a realistic use case for anyone in the corporate world.
  • Squuiid - Wednesday, April 28, 2010

    Anand, any news on how your replacement Crucial RealSSD C300 is holding up? Did Crucial fix the performance deterioration bug you last talked about?
    Can you recommend the Crucial over the Vertex 2, or vice versa?
  • Grit - Wednesday, April 28, 2010

    I'd like to second that request. The Crucial drive manages impressive speeds in most benchmarks and does so without the loss in space. I can live with a 256GB SSD, but a 200GB SSD is cutting it a bit too close.
  • DesktopMan - Wednesday, April 28, 2010

    Will there be any tests on the AES features? Since this is a feature not present in most SSDs, an article on how it works and performs would be very interesting.
  • vol7ron - Wednesday, April 28, 2010

    All these comments, so little time :)

    Looks good.
  • diamondsw - Wednesday, April 28, 2010

    As much ink as has been spilled about SandForce, I still haven't seen anything that would indicate it's a better choice than the Crucial RealSSD C300, which has better performance at a (slightly) better price. Am I missing something important?
  • arehaas - Wednesday, April 28, 2010

    The Crucial C300 has a problem with its firmware that Crucial hasn't solved yet. Performance degrades significantly. Anand found this problem in "Crucial's RealSSD C300: An Update on My Drive" from March 25. Crucial is currently promising to release the new firmware in mid-May, but they have already pushed this deadline back twice, and there is no guarantee they will manage to do it in May. Major reviewers do not recommend buying the C300 yet.
  • xiphmont - Wednesday, April 28, 2010

    I expect Crucial will fix their firmware issue just as it appears that Sandforce has fixed theirs.

    The Sandforce's redundancy (silent correction and reprovisioning around bit errors and failed flash cells) is what sells me on the Sandforce. If the promises are true, these drives will last longer and throw unrecoverable errors far less often as the NAND ages. Performance is a nice extra.

    It terrifies me that the other mass production SSDs appear to offer no redundancy or error detection/correction of stored bits at all.
  • jimhsu - Thursday, April 29, 2010

    Sandforce attempts to scare you with this in their marketing literature. ALL SSDs (even crappy first gen JMicron ones) do a substantial amount of error correction (the raw error rate for flash is something ridiculously bad, like 10^-7 to 10^-8). I think even camera flash memory has embedded error correction (don't take my word for it though). Sandforce just does "more" than its competitors.
