Performance Consistency

In our Intel SSD DC S3700 review I introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to show that. The reason we don't have consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying cleanup can result in higher peak performance at the expense of much lower worst case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as our steady state tests, but long enough to give me a good look at drive behavior once all of the spare area filled up.

I recorded instantaneous IOPS every second for the duration of the test. I then plotted IOPS vs. time and generated the scatter plots below. Each set of graphs uses the same scale across all drives. The first two sets use a log scale for easy comparison, while the last set of graphs uses a linear scale that tops out at 50K IOPS to better show the differences between drives.
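The shape those scatter plots take can be sketched with a toy model. To be clear, this is not the review's actual test harness and every figure in it is made up; it just reproduces the pattern visible in the charts: a burst of high IOPS while free blocks last, then a noisy plateau once the spare area is exhausted.

```python
# A toy model, NOT the review's test harness: all figures here are made up.
# It reproduces the shape of the scatter plots: a burst of high IOPS while
# free blocks last, then a noisy plateau once spare area is exhausted.
import random

def simulate_iops_trace(duration_s=2000, free_blocks=20000,
                        peak_iops=75000, steady_iops=12000):
    """Return one (second, instantaneous IOPS) sample per second of the run."""
    trace = []
    free = free_blocks
    for t in range(duration_s):
        if free > 0:
            iops = peak_iops
            free -= peak_iops // 10  # pretend 1 in 10 writes consumes a free block
        else:
            # steady state: read-modify-write cycles add jitter around a lower mean
            iops = steady_iops + random.randint(-1000, 1000)
        trace.append((t, iops))
    return trace

trace = simulate_iops_trace()
print(trace[0])  # peak performance at the start of the run
```

Plotting that trace as a scatter of IOPS vs. time would give the same cliff-then-plateau silhouette the real drives show below.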

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, I did vary the percentage of the drive that I filled/tested depending on the amount of spare area I was trying to simulate. The buttons are labeled with the advertised user capacity had the SSD vendor decided to use that specific amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here, but not all controllers may behave the same way.
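As a rough illustration of the arithmetic involved (the capacities below are hypothetical examples, not vendor specs, and GB/GiB rounding is ignored for simplicity), the effective spare area you get from partitioning down can be computed as:

```python
# Hedged sketch of the spare area arithmetic; capacities are illustrative
# examples, not vendor specs, and GB/GiB rounding is ignored for simplicity.
def spare_area_pct(raw_nand_gb, host_visible_gb):
    """Percent of raw NAND the host never addresses (factory OP + manual OP)."""
    return 100.0 * (raw_nand_gb - host_visible_gb) / raw_nand_gb

# e.g. a drive with 512 GB of raw NAND, partitioned down to 400 GB
print(round(spare_area_pct(512, 400), 1))  # -> 21.9
```

Shrinking the partition simply grows the numerator: any LBA you never write to (and have TRIMmed) is free for the controller to treat as spare area.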

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
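The read-modify-write penalty can be captured with a back-of-the-envelope model; this is an illustrative assumption, not the S3500's actual garbage collection algorithm. The idea: more spare area means the blocks chosen for garbage collection are emptier on average, so fewer valid pages get copied forward per erase and write amplification stays low.

```python
# A simplified steady-state model, an illustrative assumption only and NOT
# Intel's actual GC math: if the average block chosen for garbage collection
# is still `valid_fraction` full, its valid pages must be copied forward
# before the block can be erased, so each host page write costs
# 1 / (1 - valid_fraction) total flash writes.
def write_amplification(valid_fraction):
    """Flash writes per host write when GC victims are valid_fraction full."""
    return 1.0 / (1.0 - valid_fraction)

for v in (0.0, 0.5, 0.8):
    print(f"victim blocks {v:.0%} valid -> WA = {write_amplification(v):.1f}")
```

Under this model a drive whose GC victims are 80% full does 5x the flash writes of one with completely empty victims, which is why performance falls off a cliff once the free block pool runs dry.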

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

[Interactive graph: 4KB random write IOPS vs. time over the full run, log scale. Drive buttons: Intel SSD DC S3500 480GB, Intel SSD DC S3700 200GB, Seagate 600 Pro 400GB; spare area: Default]

While it's not quite as pretty a curve as what we saw with the S3700, the two drives are clearly related. Intel's SSD DC S3500 delivers incredibly predictable performance. The biggest takeaway is that Intel is still able to deliver good performance consistency even with much less NAND spare area than on the S3700. It's the architecture that enables what we see here, not just spare area.

[Interactive graph: steady state zoom (t=1400s onward), log scale. Drive buttons: Intel SSD DC S3500 480GB, Intel SSD DC S3700 200GB, Seagate 600 Pro 400GB; spare area: Default]

Remember this predictable little pattern? The periodic dips in performance are Intel's defrag/GC routines operating at regular (and frequent) intervals. You'll also notice the slight upward trend here; the S3500 is looking to improve its performance over time, even under heavy load, without sacrificing consistency.

[Interactive graph: steady state zoom (t=1400s onward), linear scale. Drive buttons: Intel SSD DC S3500 480GB, Intel SSD DC S3700 200GB, Seagate 600 Pro 400GB; spare area: Default]

This zoomed in view really gives us great perspective on what's going on. I included comparable data from other drives in the S3700 review, but the size and scale of those charts made them impractical to include here.

Comments (54)

  • nathanddrews - Wednesday, June 12, 2013 - link

The difference is that the S3500 comes overprovisioned and the others don't. While you and I have the knowledge and skill to do it ourselves, most people - even IT staff - would have zero clue or interest in how to do something like that.
  • zanon - Wednesday, June 12, 2013 - link

Give me a break, "most people" aren't interested in an S3500 period, or even a prosumer drive; their primary focus would be capacity and cost (since at that level any modern SSD at all will be great). By definition, anyone interested in this or other such drives isn't "most people". "IT staff" or prosumers can perfectly well format/partition a drive, an easy GUI for it comes with every OS they'd use, so it's hardly the kind of technical operation that'd make it a rare case. And since it only ever needs to be done once and then can be ignored forever, it can even be set up by someone else.

    Anand has considered it important enough to spend significant time on and test in all other recent reviews, and I think that speaks for itself. It's of direct relevance.
  • cheeselover - Wednesday, June 12, 2013 - link

Does increasing overprovisioning on the Intel drive change the performance much? This article compares the S3500 to the 600 Pro, but overprovisioning is much higher on the Seagate drive (512GB of flash to get 400GB of storage). The Intel drive is listed as 264GB of flash for 240GB, and that translates to 512GB of flash for 480GB.

Also wondering how the pricing works out, considering that for the same amount of flash the Seagate drives get 20% less storage space.
  • sallgeud - Wednesday, June 12, 2013 - link

    As of right now it's been nearly 6 weeks since the last retailer and wholesaler received their shipments of S3700s. The word from most of them is that we're at least 6 more weeks away from the next expected deliveries. For those of us in the server world, it would be great if they could just produce and ship what they already make... and thus far throwing money at my monitor has done nothing.
  • mtoma - Wednesday, June 12, 2013 - link

Regarding the testing methodology: on page 3, Mr. Shimpi said (as usual) the following: "To generate the data below I took a freshly secure erased SSD and filled it with sequential data". Ok, so how EXACTLY did he do that? I mean, secure erasing the Intel SSD. I was in a couple of very frustrating positions when I tried to secure erase Intel and Samsung SSDs, following the kind (read DUMB) suggestions of Samsung SSD Magician and Intel SSD Toolbox. On the Samsung drive I finally did it, I secure erased the drive. On Intel, no way. Intel SSD Toolbox kept saying that I must power down the drive, and then power on. But that didn't work. I noticed a lot of angry users of Intel SSDs who could not secure erase their drives.
    So allow me to repeat the question: HOW DID MR. SHIMPI SECURE ERASE THE DRIVE? Thanks!
  • alainiala - Wednesday, June 12, 2013 - link

    Interesting, the comment about the high idle power usage making this drive not ideal for consumer use... Our channel partner was recommending this as a replacement for the 320 Series for our laptops.
  • mjz - Thursday, June 13, 2013 - link

Why would you even have to upgrade the SSDs in the laptops? I think your channel partner is just trying to make some money. The Intel 320 SSD, when used in a laptop, is good for 98% of tasks.
  • neodan - Thursday, June 13, 2013 - link

Unrelated question, but if you guys had a choice between having the Crucial M500 480GB or the Samsung 830 512GB for the same price, which would you pick overall?
  • Wolfpup - Thursday, June 13, 2013 - link

I continue to be a firm believer in Micron/Crucial and Intel's drives: quality, reliability, and non-flakiness over (sometimes) better performance. ANY decent SSD for years now has provided crazy performance. As far as I'm concerned, that's now a moot point, save for drives that dip super low weirdly.

What I care about is reliability and the testing these two companies do compared to other companies. I mean whoopdedo if one company makes an SSD that's 400 bajillion MB/s and another does 400 bajillion + 20 MB/s if the latter is going to corrupt my data after six months.

I've currently got two Intel drives and a Crucial in active use (one in my Playstation 3) and all of them have run great with zero issues. Thrilled that Intel's using their own controllers again and not the "we spent an entire year fixing Sandforce's gigantic bugs and it still has gigantic bugs" Sandforce stuff.

    Hmm, I guess actually I have a Samsung in my Macbook which has been okay too.
  • Juddog - Thursday, June 13, 2013 - link

    Excellent job Anand! I just hope Intel can keep up with supplying these things; I tried to get my hands on an S3700 after they came out and they were all completely sold out everywhere.
