Performance Consistency

In our Intel SSD DC S3700 review I introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to show it. The reason we don't get consistent IO latency from SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while deferring it can deliver higher peak performance at the expense of much worse worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near the length of our steady state tests but long enough to get a good look at drive behavior once all of the spare area fills up.
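We use a dedicated IO generator for the real test, but the workload itself is simple to describe. Below is a minimal, synchronous Python sketch of it for Linux: the device path is hypothetical, and reaching a true queue depth of 32 would require an asynchronous IO engine rather than this single blocking loop.

    import mmap
    import os
    import random
    import time

    DEV = "/dev/sdX"   # hypothetical device node for the pre-filled drive under test
    BLOCK = 4096       # 4KB transfers

    # O_DIRECT needs a page-aligned buffer; an anonymous mmap provides one.
    # Random bytes keep the writes incompressible.
    buf = mmap.mmap(-1, BLOCK)
    buf.write(os.urandom(BLOCK))

    fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT)
    blocks = os.lseek(fd, 0, os.SEEK_END) // BLOCK   # span all user accessible LBAs

    start = time.monotonic()
    while time.monotonic() - start < 2000:           # ~2000 second run
        offset = random.randrange(blocks) * BLOCK    # random 4KB-aligned LBA
        os.pwrite(fd, buf, offset)
    os.close(fd)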

I recorded instantaneous IOPS every second for the duration of the test, then plotted IOPS vs. time to generate the scatter plots below. All of the graphs within a set share the same scale. The first two sets use a log scale for easy comparison, while the last set uses a linear scale that tops out at 40K IOPS to better visualize the differences between drives.
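Turning a per-second log into one of these scatter plots is straightforward. A minimal matplotlib sketch, with synthetic placeholder data standing in for a real trace:

    import matplotlib.pyplot as plt

    # One (elapsed_second, iops) sample per second; the synthetic data below
    # mimics the typical shape: a high initial burst, then a sharp dropoff.
    iops_log = [(t, 40000 if t < 300 else 4000 + (t % 7) * 500) for t in range(2000)]

    times, iops = zip(*iops_log)
    plt.scatter(times, iops, s=2)
    plt.yscale("log")               # the first two sets of graphs use a log scale
    plt.xlabel("Time (seconds)")
    plt.ylabel("4KB Random Write IOPS (QD32)")
    plt.show()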

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, I varied the percentage of the drive that I filled/tested depending on the amount of spare area I was trying to simulate. The buttons are labeled with the user capacity the drive would have advertised had the SSD vendor decided to use that specific amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives I've tested here, but not all controllers will necessarily behave the same way.
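The arithmetic behind the button labels is simple. A quick sketch (capacities in GB; the 25% figure matches the toggle in the charts below):

    def partition_size(total_gb, spare_fraction):
        """Partition size that leaves spare_fraction of the drive
        unpartitioned (and TRIMed), simulating extra spare area."""
        return total_gb * (1 - spare_fraction)

    # Treating a 256GB drive as if the vendor had reserved 25% as spare area:
    print(partition_size(256, 0.25))   # 192.0 -> the 192GB label in the charts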

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
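The link between free blocks and performance can be made concrete with a toy garbage collection model (an illustration only, not any specific controller's algorithm): reclaiming a block means copying its still-valid pages somewhere else first, and only the freed remainder services host writes.

    def write_amplification(pages_per_block, valid_at_gc):
        """Toy GC model: erasing a block requires copying its still-valid
        pages first (the read-modify-write), so the NAND absorbs a full
        block of page writes while only the freed pages serve host data."""
        host_writes = pages_per_block - valid_at_gc
        return pages_per_block / host_writes

    # More spare area means blocks picked for cleanup hold fewer valid pages:
    for valid in (32, 64, 96, 112):
        print(f"{valid}/128 valid at GC -> WA = {write_amplification(128, valid):.1f}")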

The second set of graphs zooms in on the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation, but on a linear performance scale. Click the buttons below each graph to switch source data.

[Interactive chart: Impact of Spare Area. Drives: Intel SSD DC S3700 200GB, Intel SSD 335 240GB, Intel SSD 525 240GB, Corsair Neutron 240GB, OCZ Vector 256GB, Samsung SSD 840 Pro 256GB. Toggles: Default, 25% Spare Area.]

As promised, I re-ran our consistency tests on the 525 and came up with somewhat different, but still slightly odd, results compared to the 335. There's a clear degradation in consistency over time; however, both the pre-fill and the 4KB random writes use incompressible data, which could be a bit unrealistic here. Between your OS and installed applications, there's bound to be more "free" space on any full SF-2281 drive thanks to the inherently compressible nature of a lot of software. The 25% spare area (192GB) toggle shows us what happens to IO consistency if you either use only 192GB of the 256GB of NAND, or if you use the entire drive but keep some fully compressible data on it. The result isn't anywhere near as impactful as what we see on other drives. The SF-2281 controller is reasonably well behaved to begin with, but the fact remains that with incompressible data the controller has to do a lot more work than it was banking on, which causes large variance in IO latency. Minimum performance is still quite good though, especially if you compare the 525 in its default configuration to Samsung's SSD 840 Pro, for example. The 525 just doesn't respond as well to additional spare area as conventional SSDs.
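A rough back-of-the-envelope sketch of that effect (the compression ratio here is purely illustrative; SandForce doesn't publish real-world figures):

    def effective_spare_fraction(nand_gb, logical_data_gb, compression_ratio):
        """On a compressing controller, data that compresses occupies less
        NAND than its logical size, leaving more free blocks. The ratio is
        physical bytes / logical bytes (1.0 = incompressible)."""
        physical_gb = logical_data_gb * compression_ratio
        return 1 - physical_gb / nand_gb

    # A full 240GB SF-2281 drive built on 256GB of NAND:
    print(effective_spare_fraction(256, 240, 1.00))   # ~0.06 with incompressible data
    print(effective_spare_fraction(256, 240, 0.75))   # ~0.30 if data compresses by 25%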

The next set of charts looks at the steady state (for most drives) portion of the curve. Here we'll get some better visibility into how these drives will perform over the long run.

[Interactive chart: Impact of Spare Area. Drives: Intel SSD DC S3700 200GB, Intel SSD 335 240GB, Intel SSD 525 240GB, Corsair Neutron 240GB, OCZ Vector 256GB, Samsung SSD 840 Pro 256GB. Toggles: Default, 25% Spare Area.]

The final set of graphs abandons the log scale entirely and just looks at a linear scale that tops out at 40K IOPS. We're also only looking at steady state (or close to it) performance here:

[Interactive chart: Impact of Spare Area. Drives: Intel SSD DC S3700 200GB, Intel SSD 335 240GB, Intel SSD 525 240GB, Corsair Neutron 240GB, OCZ Vector 256GB, Samsung SSD 840 Pro 256GB. Toggles: Default, 25% Spare Area.]

IO consistency isn't that great for the SF-2281 controller, although minimum performance remains very good despite the wide distribution of IO latencies. Throwing more spare area at the problem (or just having some compressible data on your drive) does help get rid of the really unusual dips in performance, but the overall distribution remains loosely clustered.


Comments


  • PolarisOrbit - Monday, February 4, 2013

    The comparison table on the first page indicates a 90GB drive and then it's never mentioned again in the rest of the article.
  • Denithor - Monday, February 4, 2013

    Addressed in the first paragraph:

    "At the time we only presented performance for a single 240GB drive, however Intel decided to break the mold and send us nearly every capacity in the 525 lineup."

    I would imagine that the 90GB model is the one they didn't provide.
  • philipma1957 - Monday, February 4, 2013

    First question: what mobo can I buy that will run mSATA at SATA III speeds?

    I have 3 mobos with mSATA and all 3 run at SATA II speeds:

    an Intel H77 ITX,

    an ASRock Z77 ITX,

    and an ASUS Z77 mATX.

    Second question: when will the Crucial M500 480GB drop?

    Third question, which has nothing to do with mSATA, just SATA: when does SATA IV come out?

    I get a bit bored with SSD reviews since speed is pretty much capped at 550MB/s read/write.

    IOPS aren't going to get you to 550MB/s at random 4K.

    So SATA IV would mean all new speeds to drool over.

    A bigger mSATA drive like the Crucial M500 would be nice.

    And an ITX board that ran mSATA at SATA III speeds would be nice.

    Not knocking Intel, but SSDs seem to have become more of the same.

    As Jimmy Fallon would say: bigger, harder, faster, stronger.
    Oh, one last thing: the 5 year warranty is nice.
  • Kristian Vättö - Monday, February 4, 2013

    Crucial M500 is Q2'13; that's all we know for now. I'm very interested in the drive as well, hopefully we can get samples soon.

    SATA Express is the future of SATA; we likely won't see SATA IV (12Gbps) for a few years, if we ever see it at all. We may see some SATAe based SSDs/mobos in H2'13, but I haven't heard any specific time frame.
  • DanNeely - Monday, February 4, 2013

    That's disappointing; I was hoping they'd be available by Q2'13 to go along with the Haswell launch, since there will be an uptick in enthusiast system building then. Based on prior history my Haswell box will last at least until Skylake launches, and probably until the tick following it four years from now.
  • ssj3gohan - Monday, February 4, 2013

    ... because I just care that much :D

    But seriously, AnandTech, let's talk logic here. Intel has specification sheets with detailed power consumption numbers. For the entire SSD 525 series they say: 250mW typical idle, 300mW under MobileMark 2007.

    How can you look at your own power consumption numbers and say 'well, close enough, let's just publish it even though it's 100% too high and clearly not correct'?

    Here's how to fix your SSD power consumption errors: measure power consumption going directly into the drive on all rails, and do it in a system that has device initiated power management (DIPM) enabled, which covers basically all operating systems nowadays.
  • Anand Lal Shimpi - Monday, February 4, 2013

    As I mentioned in the review, the mSATA adapter we test with only supplies 5V to the drive. To address this going forward I need to modify a board with a native mSATA connector and measure 3.3V on the board itself. The results here at least allow you to compare the various capacities of the 525.

    MobileMark 2007 is mostly an idle test, which is why none of our loaded numbers have ever come close to any spec sheet that reports it. This is the same reason we don't use it in our notebook reviews.

    We will be switching to DIPM-only testing in our 2013 storage suite update, which will unfortunately break backwards comparability with our older results.

    Take care,
    Anand
  • RU482 - Monday, February 4, 2013

    Anand,
    What mSATA adapter board are you using?
    Just a thought: you should be able to lift a lead on the 5-to-3.3V regulator on the board and measure the output current with a multimeter.
  • extide - Monday, February 4, 2013

    Also note that those mSATA to SATA converters use a little linear regulator (like a 7833) to convert the 5V to 3.3V. Linear regulators are very inefficient, as they essentially burn off the "extra" voltage as heat.

    If you are simply measuring the power going into the adapter, then you are not getting a very good look at the actual mSATA device's power draw.

    HOWEVER, if someone else is using the adapter in their system, they will also be going through that same linear regulator, and will thus see the same power usage shown in the article here.
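Putting rough numbers on the regulator losses extide describes (the load current below is purely illustrative):

    V_IN, V_OUT = 5.0, 3.3   # adapter input rail and the mSATA drive's rail

    def linear_reg_loss(load_current_a):
        """A linear regulator passes the load current while dropping
        V_IN - V_OUT across itself, dissipating that product as heat."""
        return (V_IN - V_OUT) * load_current_a

    # A drive pulling 0.5A at 3.3V draws 1.65W itself, but the adapter's
    # 5V input sees 2.5W: measuring there overstates the drive by ~50%.
    print(linear_reg_loss(0.5))    # 0.85 W lost as heat in the regulator
    print(V_OUT / V_IN)            # best-case efficiency: 66%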
