Performance Consistency

In our Intel SSD DC S3700 review I introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to show that. The reason we don't have consistent IO latency with SSDs is because inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying that can result in higher peak performance at the expense of much lower worst case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as our steady state tests, but long enough to get a good look at drive behavior once all spare area is used up.

I recorded instantaneous IOPS every second for the duration of the test. I then plotted IOPS vs. time and generated the scatter plots below. All graphs within a set share the same scale. The first two sets use a log scale for easy comparison, while the last set of graphs uses a linear scale that tops out at 50K IOPS for better visualization of differences between drives.
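The measurement itself is easy to reproduce in miniature. Below is a deliberately simplified, single-threaded Python sketch of the idea (the actual test was run with a benchmarking tool at QD32 against the raw device; the file path, sizes, and function name here are purely illustrative):

```python
import os
import random
import time

def random_write_iops(path, size_bytes, duration_s, block=4096):
    """Issue 4KB random writes across a file and record instantaneous
    IOPS once per second. Single-threaded (effective queue depth 1),
    so absolute numbers won't match a QD32 run; it only illustrates
    the per-second logging used to build the scatter plots."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, size_bytes)
    blocks = size_bytes // block
    samples = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        tick, ops = time.monotonic(), 0
        while time.monotonic() - tick < 1.0 and time.monotonic() < end:
            # incompressible data at a random 4KB-aligned offset
            os.pwrite(fd, os.urandom(block), random.randrange(blocks) * block)
            ops += 1
        samples.append(ops)  # instantaneous IOPS for this one-second window
    os.close(fd)
    return samples
```

Plotting the returned list against elapsed seconds gives exactly the kind of IOPS-vs-time scatter used here.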

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, I varied the percentage of the drive that I filled/tested depending on the amount of spare area I was trying to simulate. The buttons are labeled with the user capacity the drive would have been advertised at had the SSD vendor decided to use that specific amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives I've tested here, but not all controllers may behave the same way.
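The spare-area arithmetic behind those button labels is straightforward. A quick sketch (the 256GiB-raw/240GB-class pairing in the test is the usual industry convention, not a quoted vendor figure):

```python
def advertised_capacity_gb(raw_gib, spare_fraction):
    """Decimal-GB user capacity left after reserving a fraction of the
    raw (binary GiB) NAND as spare area."""
    return raw_gib * 2**30 * (1 - spare_fraction) / 1e9

def partition_size_gb(user_capacity_gb, extra_spare_fraction):
    """How large a partition to create so that the unpartitioned
    (and TRIMmed) remainder serves as additional spare area."""
    return user_capacity_gb * (1 - extra_spare_fraction)
```

For example, `partition_size_gb(256, 0.25)` returns 192.0: to give a 256GB drive 25% extra spare area, partition only 192GB of it and leave the rest untouched.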

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing there is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
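That transition from clean writes to read-modify-write can be reproduced with a toy model. The sketch below is a deliberately simplified page-mapped FTL with greedy garbage collection (the block counts, spare fraction, and greedy victim policy are assumptions for illustration; none of the drives here necessarily work this way). Once the free blocks run out, every host write drags relocated pages along with it and write amplification climbs above 1:

```python
import random

def simulate_write_amp(blocks=64, pages_per_block=64, spare_fraction=0.07,
                      random_writes=200_000, seed=0):
    """Toy page-mapped FTL with greedy garbage collection: fill the LBA
    space sequentially, then hammer it with random overwrites and report
    write amplification (NAND page writes / host page writes)."""
    rng = random.Random(seed)
    user_pages = int(blocks * pages_per_block * (1 - spare_fraction))
    held = [set() for _ in range(blocks)]   # valid LBAs per block
    where = {}                              # LBA -> block holding its copy
    free = list(range(blocks))
    active, fill = free.pop(), 0
    nand = 0

    def host_write(lba):
        nonlocal active, fill, nand
        if lba in where:
            held[where[lba]].discard(lba)   # old copy becomes invalid
        if fill == pages_per_block:         # active block is full
            if free:
                active, fill = free.pop(), 0
            else:                           # GC: erase the emptiest block,
                active = min(range(blocks), key=lambda b: len(held[b]))
                nand += len(held[active])   # relocating its valid pages
                fill = len(held[active])
        held[active].add(lba)
        where[lba] = active
        fill += 1
        nand += 1

    for lba in range(user_pages):           # precondition: sequential fill
        host_write(lba)
    base = nand
    for _ in range(random_writes):          # steady-state random overwrites
        host_write(rng.randrange(user_pages))
    return (nand - base) / random_writes
```

Lowering `spare_fraction` leaves the garbage collector fewer nearly-empty victims, so write amplification (and with it, latency) rises: the same lever the spare-area comparisons in this test explore.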

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

Impact of Spare Area
[Interactive graph: Intel SSD DC S3700 200GB · Corsair Neutron 240GB · OCZ Vector 256GB · Samsung SSD 840 Pro 256GB · SanDisk Ultra Plus 256GB, switchable between default and 25% spare area]

The Ultra Plus' performance consistency, at least in the default configuration, looks a bit better than Samsung's SSD 840 Pro. The 840 Pro is by no means the gold standard here so that's not saying too much. What is interesting however is that the 840 Pro does much better with an additional 25% spare area compared to the Ultra Plus. SanDisk definitely benefits from more spare area, just not as much as we're used to seeing.

The next set of charts looks at the steady state (for most drives) portion of the curve. Here we'll get some better visibility into how everyone will perform over the long run.

Impact of Spare Area
[Interactive graph: Intel SSD DC S3700 200GB · Corsair Neutron 240GB · OCZ Vector 256GB · Samsung SSD 840 Pro 256GB · SanDisk Ultra Plus 256GB, switchable between default and 25% spare area]

The final set of graphs abandons the log scale entirely and just looks at a linear scale that tops out at 50K IOPS. We're also only looking at steady state (or close to it) performance here:

Impact of Spare Area
[Interactive graph: Intel SSD DC S3700 200GB · Corsair Neutron 240GB · OCZ Vector 256GB · Samsung SSD 840 Pro 256GB · SanDisk Ultra Plus 256GB, switchable between default and 25% spare area]

Here we see just how far performance degrades. Performance clusters around 5K IOPS, which is hardly good for a modern SSD. Increasing spare area helps considerably, but generally speaking this isn't a drive that you're going to want to fill.



Comments

  • name99 - Monday, January 07, 2013

    If you want to buy flash, buy flash.
    If you want to buy an iPod, buy an iPod.
    But DON'T whine that an iPod (or iPhone, or iPad) costs "too much" because the flash in it is cheap.

    It's not 2003. We all know Apple costs more (sometimes a little more, sometimes a lot more) than the competition. We also all know that Apple provides more (sometimes a little bit more, sometimes a lot more) than the competition. What we DON'T need is MORE fscking half-assed bitching about whether Apple does or does not "cost too much".

    You want to do that, go read the comments on any damn tech site for any year from about 1976 onwards. Just don't waste our time bringing up the issue on modern threads, especially threads that have NOTHING to do with the cost of Apple's products.
  • Death666Angel - Monday, January 07, 2013

    It's not only Apple's products; this applies to all tablets/smartphones. Google charges $50/€50 for an 8/16GB upgrade. Also, if you want to whine about people whining about Apple, please don't.
  • mayankleoboy1 - Monday, January 07, 2013

    And if you want to whine about people whining about people whining about Apple, please don't.
  • runeks - Monday, August 05, 2013

    I really have to say I agree with this.

    This is not a choice between an Apple (or any other company's) device costing either "$n for a 16GB version and $n+200 for the 64GB version" or "$n for a 16GB version and $n+50 for the 64GB version". It's the choice between the former and a "$n+150 16GB version and a $n+200 64GB version". See? You're happy now, because the difference in price reflects the cost of the more expensive part. Except no one will ever buy the 16GB version. Why would they? Only $50 more for 64GB. At the same time, they would shut out the part of the market that just wants an iPhone at $n, if they were required to sell larger-capacity devices at only the added cost of the part in question.

    Pricing is so much more than just the cost of a chip. It's the way the company makes its money back on an investment. They wouldn't be able to sell the larger capacity phone at +$50, because that would raise the price of the low capacity phone and shut out a part of the market that wants that $n phone. And these people would say: "Oh, see, Samsung phones only cost $n for a 16GB version, why does an iPhone have to cost $n+150?" It's a never-ending loop where some segment of your customer base is never quite satisfied. The people who really like the 64GB version are dissatisfied with paying $200 more than the 16GB version, but the people who only need 16GB are glad to be able to get it for $200 less than the 64GB version. Change the rules and you just make a different customer segment unhappy, or simply unwilling to buy your product.
  • SanX - Wednesday, January 09, 2013

    4k read/write random/seq - 600MB/s
    8k read/write random/seq - 600MB/s
    128k read/write random/seq - 600MB/s

    8GB SDRAM power off safe cache
  • zyk - Wednesday, January 09, 2013

    It's nice to see Anand finally get some different brands to test, but unfortunately he probably missed the party on some key drives. There had been excellent prices on SanDisk's Extreme line of drives, and it would have been nice to see how SanDisk's combination of high density / low die count Toggle Mode NAND stacked up against the likes of the Patriot Wildfire or OCZ Vertex 3 Max IOPS, which was less of a channel limitation than this Marvell-based drive. Also missing from the tests were drives from the likes of Mushkin Enhanced, whose Chronos Deluxe was frequently the cheapest Toggle Mode + second-gen SandForce drive on Newegg. Since all these other brands are missing from Anand's testing, it would be nice if these charts showed a bit more detail about controller and NAND type rather than just link speed and occasionally firmware version, so they don't have to be constantly looked up when making decisions.
  • Kristian Vättö - Wednesday, January 09, 2013

    I finally got a Chronos Deluxe from Mushkin. I originally asked for it over 6 months ago (due to readers' requests) but there were some issues with getting in contact with Mushkin.

    I think Anand also has a SanDisk Extreme.

    To be honest, there aren't big differences between SandForce-based SSDs. The Corsair Force GS is an example of an SF-2281 drive with Toggle Mode NAND and FW 5.x.x; all similar SSDs I've tested (reviews coming soon!) perform very similarly.
  • fritz33 - Friday, July 12, 2013

    I just bought a Lenovo X201 Tablet. Can I use this drive with Windows 7 BitLocker activated? Will BitLocker slow down this drive and wear it down noticeably? Or does this drive have hardware encryption like the Crucial M500 from your review? If so, how would I use that instead of BitLocker? I'm sorry, but I don't have any experience with encrypted drives.
