Performance Consistency

In our Intel SSD DC S3700 review I introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result required some additional testing to demonstrate it. The reason we don't see consistent IO latency from SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while deferring that work can result in higher peak performance at the expense of much lower worst case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as our steady state tests run, but long enough to give me a good look at drive behavior once all of the spare area had filled up.
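If you want to approximate this workload yourself, the sketch below shows one way to drive it with fio from Python. This is only a rough illustration, not the exact tooling used for this article: the device path is a placeholder, and the flag choices (libaio, direct IO, a 2000 second runtime, per-second IOPS logging) are my assumptions about a reasonable equivalent setup.

```python
# Rough sketch of the preconditioning + torture test using fio (must be installed),
# driven from Python. Everything here will destroy the data on the target device.
import subprocess

DEV = "/dev/sdX"  # placeholder: the SSD under test

# 1. Sequential fill so every user-accessible LBA holds data.
subprocess.run([
    "fio", "--name=prefill", f"--filename={DEV}",
    "--rw=write", "--bs=128k", "--iodepth=32",
    "--ioengine=libaio", "--direct=1",
], check=True)

# 2. 4KB random writes at QD32 with incompressible (random) buffers,
#    logging averaged IOPS once per second for ~2000 seconds.
subprocess.run([
    "fio", "--name=randwrite", f"--filename={DEV}",
    "--rw=randwrite", "--bs=4k", "--iodepth=32",
    "--ioengine=libaio", "--direct=1", "--refill_buffers",
    "--time_based", "--runtime=2000",
    "--log_avg_msec=1000", "--write_iops_log=consistency",
], check=True)
```

Run it against a freshly secure erased drive, or against a partition sized down to simulate extra spare area as described further down.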

I recorded instantaneous IOPS every second for the duration of the test. I then plotted IOPS vs. time and generated the scatter plots below. All of the graphs within a set share the same scale for easy comparison. The first two sets use a log scale, while the last set of graphs uses a linear scale that tops out at 50K IOPS for better visualization of the differences between drives.
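As a rough sketch of how per-second IOPS data turns into scatter plots like the ones here, the snippet below parses an IOPS log and plots it with matplotlib. The log file name and the assumption that the first two comma-separated fields are a millisecond timestamp and an IOPS value follow from the hypothetical fio invocation above; adjust both for whatever logging tool you actually use.

```python
# Minimal sketch: plot per-second IOPS as a scatter chart with a log y-axis.
import matplotlib.pyplot as plt

times, iops = [], []
with open("consistency_iops.1.log") as f:       # assumed fio log file name
    for line in f:
        fields = line.split(",")
        times.append(int(fields[0]) / 1000.0)   # msec -> seconds
        iops.append(int(fields[1]))             # averaged IOPS for that interval

plt.scatter(times, iops, s=4)
plt.yscale("log")                               # switch to linear for the third set of graphs
plt.xlabel("Time (seconds)")
plt.ylabel("4KB random write IOPS (QD32)")
plt.savefig("consistency_scatter.png", dpi=150)
```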

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, I did vary the percentage of the drive that I filled/tested depending on the amount of spare area I was trying to simulate. The buttons are labeled with the advertised user capacity the drive would have had if the vendor had shipped it with that amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, and then delete the partition. Finally, this method of creating spare area works on the drives we've tested here, but not all controllers necessarily behave the same way.
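To make the spare area arithmetic concrete, here is a tiny hypothetical helper (not something from the review) that sizes the partition for a given amount of over-provisioning. It assumes the quoted spare area percentage is taken out of the drive's raw NAND capacity; vendors and reviews sometimes use slightly different conventions, so treat it as an illustration only.

```python
def partition_size(raw_nand_gib: float, spare_fraction: float) -> float:
    """Size the user partition so the remaining raw NAND acts as spare area.

    raw_nand_gib:   total physical NAND on the drive, in GiB
    spare_fraction: fraction of raw NAND to leave unpartitioned, e.g. 0.25
    """
    return raw_nand_gib * (1.0 - spare_fraction)

# Example: a drive built from 256GiB of NAND configured with 25% spare area
# leaves roughly 192GiB for the user partition.
print(partition_size(256, 0.25))   # 192.0
```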

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
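To see why things fall apart once the spare blocks are exhausted, consider the deliberately simplified, hypothetical model below: a page-mapped flash translation layer with greedy garbage collection, hammered with random overwrites. None of the block counts or the spare fraction correspond to a real drive, and real controllers are far more sophisticated, but the toy model shows the basic mechanism: once every free block is consumed, each new host write forces valid data to be relocated first, so write amplification climbs and effective performance drops.

```python
# Toy page-mapped FTL with greedy garbage collection (illustration only).
import random

PAGES_PER_BLOCK = 128
NUM_BLOCKS = 256
SPARE_FRACTION = 0.07            # try 0.25 to mimic the extra over-provisioning configs

TOTAL_PAGES = NUM_BLOCKS * PAGES_PER_BLOCK
USER_PAGES = int(TOTAL_PAGES * (1 - SPARE_FRACTION))

mapping = [None] * USER_PAGES    # logical page -> block it currently lives in
valid = [0] * NUM_BLOCKS         # count of valid pages per block
free_blocks = list(range(1, NUM_BLOCKS))
open_block, open_used = 0, 0
host_writes = flash_writes = 0

def new_open_block():
    """Grab a free block, or greedily reclaim the closed block with the fewest valid pages."""
    global open_block, open_used, flash_writes
    if free_blocks:
        open_block, open_used = free_blocks.pop(), 0
        return
    victim = min((b for b in range(NUM_BLOCKS) if b != open_block), key=lambda b: valid[b])
    flash_writes += valid[victim]            # relocating valid data = write amplification
    open_block, open_used = victim, valid[victim]

def host_write(lpn):
    """One 4KB host write: place the page in the open block, invalidating any old copy."""
    global open_used, host_writes, flash_writes
    if open_used == PAGES_PER_BLOCK:
        new_open_block()
    if mapping[lpn] is not None:
        valid[mapping[lpn]] -= 1             # old copy becomes stale
    mapping[lpn] = open_block
    valid[open_block] += 1
    open_used += 1
    host_writes += 1
    flash_writes += 1

# Sequentially fill every user LBA, then hit the drive with random overwrites.
for lpn in range(USER_PAGES):
    host_write(lpn)
for _ in range(4 * USER_PAGES):
    host_write(random.randrange(USER_PAGES))

print(f"write amplification ~ {flash_writes / host_writes:.2f}")
```

Raising SPARE_FRACTION in this toy model lowers the reported write amplification, which is the same effect the 25% spare area configurations are meant to demonstrate on the real drives.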

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

[Graph: 4KB random write IOPS vs. time over the full 2000 second run, log scale. Drives: Corsair Neutron 240GB, Crucial M500 960GB, Samsung SSD 840 Pro 256GB, Seagate 600 480GB, Seagate 600 Pro 400GB. Toggles: Default / 25% Spare Area.]

Now this is a bit surprising. I expected a tightly clustered group of IOs like we got with the LAMD based Corsair Neutron, but instead we see something entirely different. There's a clustering of IOs around the absolute minimum performance, but it looks like the controller is constantly striving for better performance. If there's any indication that Seagate's firmware differs from what Corsair uses, this is it. Looking at the 400GB Seagate 600 Pro gives us a good feel for what happens with additional over-provisioning. The 400GB Pro maintains consistently high performance for longer than the 480GB 600, and when it falls off, the minimums are also higher, as you'd expect.

[Graph: 4KB random write IOPS vs. time from t=1400s onward (steady state), log scale. Drives: Corsair Neutron 240GB, Crucial M500 960GB, Samsung SSD 840 Pro 256GB, Seagate 600 480GB, Seagate 600 Pro 400GB. Toggles: Default / 25% Spare Area.]

Zooming in, the Seagate 600 definitely doesn't look bad - it's far better than the Samsung or Crucial offerings, but still obviously short of Corsair's Neutron. I almost wonder if Seagate prioritized peak performance a bit here in order to be more competitive in most client benchmarks.

[Graph: 4KB random write IOPS vs. time from t=1400s onward (steady state), linear scale up to 50K IOPS. Drives: Corsair Neutron 240GB, Crucial M500 960GB, Samsung SSD 840 Pro 256GB, Seagate 600 480GB, Seagate 600 Pro 400GB. Toggles: Default / 25% Spare Area.]

The situation here looks worse than it really is. The 600's performance isn't very consistent, but there's a clear floor just above 5000 IOPS, which is quite respectable. Compared to the Crucial and Samsung drives, the 600/600 Pro offer much better performance consistency. I do wish Seagate had managed to deliver even more consistent performance given what we know the controller is capable of. For client usage I suspect this won't matter, but in random write heavy enterprise workloads with large RAID arrays it isn't desirable behavior.

Comments

  • StealthGhost - Wednesday, May 8, 2013 - link

    To me it was always just that HDDs are vastly different from SSDs. Making SSDs when you make HDDs is almost like starting from scratch. That is why the most random companies are making SSDs: because they made flash storage before. Corsair, OCZ, Crucial hard drives? I've owned RAM from all three but never a hard drive, but it makes sense for them to sidestep from RAM to SSD, not so much for WD to go all the way down and then back up over to SSD.

    I hope WD becomes a big name in SSDs though. I have 5 WD hard drives that I can think of off the top of my head, and one is from 2003. As you can tell, they're my favorite HDD manufacturer.
  • phillyry - Sunday, May 12, 2013 - link

    Cactusdog, "I don't understand why Seagate and WD were so slow in the SSD market."

    Because they didn't want to destroy their reputations with the shenanigans that went on in the first couple of generations of SSDs. They wisely waited until the tech was mature so that their multibillion dollar reputations wouldn't go down the drain.
  • Tams80 - Tuesday, May 7, 2013 - link

    Same. I think around 500GB is the minimum I'd be prepared to go with (for a laptop/mobile computer). They are still a bit too pricey, and in my experience the hybrid drives, while good, aren't really worth it. 1TB would be great, but that will probably require waiting a few years.

    The 840 Pro looks to still be the best, but yes, it's still far too expensive for me. =(
  • klmccaughey - Tuesday, May 7, 2013 - link

    The 240GB non-Pro 840 is pretty good unless you are doing a lot of writing - very well priced.

    I have a 2TB HDD and a 256GB Steam drive. With Steam Tool or caching software that is plenty. My C drive is 2 x 128GB Vertex 3s in RAID 0.

    Don't wait to switch! Just get what you can and add more when you can - you will never look back :)
  • MrSpadge - Tuesday, May 7, 2013 - link

    Yeah... just make smart use of the space you've got and you should be able to get by with much smaller SSDs than 500 GB. Personally I'm using 64 GB to cache my 3 TB HDD - fast enough for me :)
  • phillyry - Sunday, May 12, 2013 - link

    The OP is talking about a mobile computer (laptop), not a desktop solution where you can have additional hard drives.
  • creed3020 - Tuesday, May 7, 2013 - link

    Seagate may be late to the game but wow what an entrance! Going with the LM87800 almost guaranteed a strong performer as we already know from the Corsair Neutron's history. Their own special sauce added to the firmware shows that they are taking this market seriously.

    The HDD manufacturers, glorious duopoly and all, need to see the writing on the wall and get some products into this market vertical. There will be a need for spinning platters for years to come, as 4TB SSDs are still a good ways out.

    I'm currently on the fence about a Samsung 840 500GB, but with this announcement I need to wait and see how the reliability on these drives pans out, as this may be the better choice.
  • Oxford Guy - Wednesday, May 8, 2013 - link

    MLC drives are generally going to be more reliable than TLC drives, unless you're dealing with firmware bugs (like the horribly buggy 1st generation SandForce controller in the Vertex 2e).
  • name99 - Wednesday, May 8, 2013 - link

    "Seagate may be late to the game but wow what an entrance!"

    To me this is the interesting point. Presumably Seagate are interested in surviving for more than the next five years, which means they have to be in the broadly defined storage business, not just the HD business. Which in turn raises the question: presumably they want to be the equivalent in the flash business of their role in the HD business?

    What would that take? If they were doing it seriously, it would take
    (a) own the controller. It seems they already own the firmware. Perhaps they don't care much about the LAMD/Hynix link because the next step is to design their own controller?
    (b) fab the flash. Until they do that, as has been said, they're just one of a dozen assemblers. Of course fabbing flash is not a completely trivial business to get into... So --- buy Hynix (or someone else)? Or not the whole company, but at least the flash division? I suspect we will see something like this.

    If they DO own the firmware, the chip, and the flash, they are at least in a rather better position.
    They can start to apply real engineering to these devices in a way we haven't yet seen, most obviously in much better power performance, both idle power and peak random-write power. There may also be scope for other innovations once you own the entire pipeline, for example you can tweak the flash being fabbed for a more precise set of specs, or you can drive it to tend to certain (known) failure modes which your firmware is set up to work around.
  • JellyRoll - Tuesday, May 7, 2013 - link

    Corsair recently released new versions of the Neutrons with a die shrink; are the Neutrons compared in this article using the new NAND?
