Performance Consistency

In our Intel SSD DC S3700 review I introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result demanded additional testing to demonstrate it. The reason we don't see consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying that work can deliver higher peak performance at the expense of much worse worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as our steady state tests but enough to get a good look at drive behavior once all spare area fills up.
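If you want to approximate this workload without a commercial tool, here's a minimal Python sketch of the idea. The device path, test span, and run length are placeholders I've assumed, 32 writer threads stand in for a true queue depth of 32, and it's Linux-flavored (O_DSYNC); running it against a real device will destroy its contents:

```python
import os
import random
import threading

DEV_PATH = "/dev/sdX"        # assumption: placeholder device; writes are destructive
BLOCK = 4096                 # 4KB random writes
SPAN = 64 * 1024**3          # assumption: span of LBAs to exercise
WRITES_PER_THREAD = 100_000  # assumption: run length

def writer(fd: int) -> None:
    for _ in range(WRITES_PER_THREAD):
        payload = os.urandom(BLOCK)  # fresh random data defeats compression/dedup
        offset = random.randrange(SPAN // BLOCK) * BLOCK  # 4KB-aligned offset
        os.pwrite(fd, payload, offset)  # positioned write; safe across threads

# O_DSYNC keeps the page cache from absorbing the writes
fd = os.open(DEV_PATH, os.O_WRONLY | os.O_DSYNC)
threads = [threading.Thread(target=writer, args=(fd,)) for _ in range(32)]
for t in threads:
    t.start()
for t in threads:
    t.join()
os.close(fd)
```

In practice a dedicated tool like fio or Iometer is the better choice; the sketch is just to make "full-span 4KB random writes, QD32, incompressible data" concrete.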

I recorded instantaneous IOPS every second for the duration of the test, then plotted IOPS vs. time to generate the scatter plots below. All graphs within a set share the same scale. The first two sets use a log scale for easy comparison, while the last set uses a linear scale that tops out at 50K IOPS for better visualization of the differences between drives.
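For the curious, producing a plot like these is straightforward. A sketch, assuming the per-second samples have already been logged to a two-column CSV (the file name and format are my assumptions):

```python
import csv

import matplotlib.pyplot as plt

# Hypothetical input: one sample per second, "elapsed_s,iops"
times, iops = [], []
with open("iops_log.csv") as f:
    for elapsed_s, sample in csv.reader(f):
        times.append(float(elapsed_s))
        iops.append(float(sample))

plt.scatter(times, iops, s=2)
plt.yscale("log")  # the first two graph sets use a log scale
plt.xlabel("Time (s)")
plt.ylabel("4KB Random Write IOPS (QD32)")
plt.show()
```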

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, I varied the percentage of the drive I filled/tested depending on the amount of spare area I wanted to simulate. Each button is labeled with the user capacity the vendor would have advertised had it shipped the drive with that amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives I've tested here, but not all controllers are guaranteed to behave the same way.
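To put numbers on it, the relationship between partition size and effective spare area is simple arithmetic. A quick sketch (the 256GB/240GB/192GB figures mirror the drives tested here; NAND capacities are nominal):

```python
def spare_area_pct(nand_gb: float, partition_gb: float) -> float:
    """Fraction of raw NAND kept outside the user-visible partition."""
    return (nand_gb - partition_gb) / nand_gb * 100

print(spare_area_pct(256, 240))  # 6.25%: a typical consumer default
print(spare_area_pct(256, 192))  # 25.0%: the extra spare area config tested here
```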

The first set of graphs shows the performance data over the entire 2,000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
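A crude way to see why performance falls off a cliff: once every block holds data, each new write forces the controller to reclaim a block and copy its still-valid pages first. The toy model below assumes random writes spread valid data evenly across blocks; it's an illustration of the trend, not any vendor's actual garbage collection:

```python
def naive_write_amp(user_gb: float, nand_gb: float) -> float:
    # Average block utilization in steady state under uniform random writes.
    util = user_gb / nand_gb
    # A reclaimed block is ~util valid; those pages get copied before the
    # erase, so only (1 - util) of each block's writes are new user data.
    return 1.0 / (1.0 - util)

print(naive_write_amp(240, 256))  # 16.0: stock ~6% spare area
print(naive_write_amp(192, 256))  #  4.0: 25% spare area
```

Real controllers with smarter block selection do considerably better than this, but the qualitative point stands: spare area is the lever that keeps steady state write amplification, and thus latency, in check.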

The second set of graphs zooms in on the beginning of steady state operation for the drive (t=1400s). The third set looks at the same region but on a linear performance scale. Click the buttons below each graph to switch source data.

[Interactive graph: Impact of Spare Area. Drives: Intel SSD DC S3700 200GB, Intel SSD 335 240GB, Intel SSD 525 240GB, Corsair Neutron 240GB, OCZ Vector 256GB, Samsung SSD 840 Pro 256GB. Toggles: Default, 25% Spare Area (not available for every drive).]

As promised, I re-ran our consistency tests on the 525 and came up with somewhat different, but still slightly odd, results, at least compared to the 335. There's a clear degradation in consistency over time; however, both the pre-fill and the 4KB random writes use incompressible data, which is a bit unrealistic here. Between your OS and installed applications, there's bound to be more "free" space on any full SF-2281 drive thanks to the inherently compressible nature of a lot of software. The 25% spare area (192GB) toggle shows what happens to IO consistency if you either only use 192GB of the 256GB of NAND, or if you use the entire drive but some of the data on it is fully compressible. The effect isn't anywhere near as pronounced as on other drives. The SF-2281 controller is reasonably well behaved to begin with, but the fact remains that with incompressible data the controller has to do a lot more work than it was banking on, and that causes large variance in IO latency. Minimum performance is still quite good though, especially if you compare the 525 in its default configuration to Samsung's SSD 840 Pro, for example. The 525 just doesn't respond as well to additional spare area as conventional SSDs do.
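The compressibility point is easy to demonstrate. Using zlib as a stand-in (SandForce's actual algorithm isn't public), a page of repetitive data shrinks to almost nothing while random data doesn't shrink at all, which is why this test sits close to the SF-2281's worst case:

```python
import os
import zlib

compressible = b"\x00" * 4096      # e.g. zeroed or repetitive filesystem data
incompressible = os.urandom(4096)  # the kind of data this test writes

print(len(zlib.compress(compressible)))    # a few dozen bytes
print(len(zlib.compress(incompressible)))  # ~4100 bytes: no savings at all
```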

The next set of charts looks at the steady state (for most drives) portion of the curve. Here we get better visibility into how these drives will perform over the long run.

[Interactive graph: Impact of Spare Area. Drives: Intel SSD DC S3700 200GB, Intel SSD 335 240GB, Intel SSD 525 240GB, Corsair Neutron 240GB, OCZ Vector 256GB, Samsung SSD 840 Pro 256GB. Toggles: Default, 25% Spare Area (not available for every drive).]

The final set of graphs abandons the log scale entirely and just looks at a linear scale that tops out at 50K IOPS. We're also only looking at steady state (or close to it) performance here:

[Interactive graph: Impact of Spare Area. Drives: Intel SSD DC S3700 200GB, Intel SSD 335 240GB, Intel SSD 525 240GB, Corsair Neutron 240GB, OCZ Vector 256GB, Samsung SSD 840 Pro 256GB. Toggles: Default, 25% Spare Area (not available for every drive).]

IO consistency isn't that great for the SF-2281 controller, although minimum performance remains very good despite the wide distribution of IO latencies. Throwing more spare area at the problem (or just having some compressible data on your drive) does help get rid of the really unusual dips in performance, but the overall distribution remains loosely clustered.


26 Comments


  • Shadowmaster625 - Monday, February 4, 2013 - link

    So Intel is serious about ultrabooks? Their actions seem to prove otherwise.

    That SSD controller should be integrated into all their i-series CPUs. Windows is slow and unwieldy in large part due to Intel not providing an efficient nonvolatile storage platform. If every Windows installation had guaranteed access to even a small amount of fast nonvolatile memory, then maybe we would see fewer CPU cycles wasted moving data around from HDD to RAM to another part of RAM, back to HDD, and back to RAM 50 million times.
  • FunBunny2 - Monday, February 4, 2013 - link

    Here's what Linus said back in 2007:
    ... but Flash-based storage has such a different performance profile from rotating media, that I suspect that it will end up having a large impact on filesystem design. Right now, most filesystems tend to be designed with the latencies of rotating media in mind.
  • melgross - Monday, February 4, 2013 - link

    I don't understand why mSATA drives seem to cost more than regular drives. Most drives come in an aluminum case, which isn't cheap to make, plus the assembly that's then required. I would think that these, being just chips on a circuit board, without even the soldered-on connectors a regular drive has, would be a good 10% less, not more.
  • mcnabney - Monday, February 4, 2013 - link

    Manufacturing cost frequently has little to do with MSRP.

    An obvious example of cheaper to produce media selling for more would be compact discs and DVDs versus their magnetic tape predecessors. The digital media carried a significant upcharge despite production costs being significantly lower.
  • DanNeely - Monday, February 4, 2013 - link

    Probably lower sales volumes. Even if the total sales volumes (and marginal development cost per unit) are similar, relatively few consumers are buying mSATA vs 2.5" models. This results in less competition between retailers to push margins down, and higher carrying costs for the same units (the longer the gap between when retailers buy and sell the devices, the longer their purchase costs have to be covered by borrowing).
  • melgross - Monday, February 4, 2013 - link

    Manufacturers will be the main customers for these drives. But price also determines sales volume; price them too high and volume drops. There couldn't have been much extra R&D on these, as they are basically the same as drives inside a case. The small circuit boards cost little to design and manufacture, and Intel is paying the same for the chips.
  • RU482 - Monday, February 4, 2013 - link

    Good to see that Newegg is off to some early profiteering, with the 120GB version on sale at $169.99.
  • philipma1957 - Monday, February 4, 2013 - link

    Simple: make a mobo that runs them at SATA III speeds. I have yet to find one.

    A small form factor ITX build with a 480GB Crucial M500 running at full SATA III would be a reason to buy mSATA.

    I'm really quite frustrated by this roadblock in my build in the Cooler Master Elite 120 case. The case allows:

    an i7 3770K

    a full sized PSU

    a full sized Blu-ray drive

    a full sized HD 7970

    The better ITX mobos have an mSATA slot that boots, but they are SATA II.

    It's a real shame, as this case allows for high overclock speeds with easy mods.

    You could have an almost perfect one-card machine with the mSATA SSD.

    You can fit a 2.5 inch SATA SSD for a very powerful machine, but the power wire, the SATA wire, and the drive itself do make the case more crowded.
  • Jaseemxx91 - Monday, February 4, 2013 - link

    Can anybody tell me if this will fit inside my UX31E Ultrabook?

    Anand has said that it has a different port than the MacBook Air's here:

    www.anandtech.com/show/5854/asus-zenbook-ssd-and-apples-macbook-air-ssd-are-not-compatible

    My Zenbook SSD is dead, and I don't know which one to get that's compatible with my system.

    Could somebody help me with this?

    I would really love an answer from Anand himself, as he has already seen and inspected the connector. But as I'm not so lucky, I don't really think Anand will even read this :/

    Btw, congrats on this extremely informative site of yours. Keep up the great work. :)
  • Death666Angel - Monday, February 4, 2013 - link

    Asus uses a proprietary connector as well, so any off-the-shelf mSATA drive will not work. Look for UX31-specific drives. Email ASUS; they should be able to provide you with information and maybe even a replacement (for free or for money). If you want to reach Anand directly, try Twitter. :)
