Performance Consistency

In our Intel SSD DC S3700 review, Anand introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed additional testing to demonstrate it. The reason SSDs don't deliver consistent IO latency is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying it can deliver higher peak performance at the expense of much lower worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below, we take a freshly secure erased SSD and fill it with sequential data. This ensures that all user-accessible LBAs have data associated with them. Next, we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. We run the test for just over half an hour: nowhere near as long as our steady-state tests, but long enough to give a good look at drive behavior once all spare area fills up.
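The article doesn't say which tool generates this workload, but the parameters map cleanly onto fio, so here is a hedged sketch that builds an equivalent fio invocation. The device path /dev/sdX and the log name are placeholders, and the specific flag set is an assumption, not the tooling actually used in the review:

```python
def torture_cmd(dev: str, secs: int = 2000) -> list[str]:
    """Build an fio command line approximating the torture run described
    above: whole-drive 4KB random writes, QD32, incompressible data.
    Sketch only -- running this against a real device is destructive."""
    return [
        "fio", "--name=torture",
        f"--filename={dev}",          # raw device, spans all user LBAs
        "--direct=1",                 # bypass the page cache
        "--rw=randwrite", "--bs=4k",  # 4KB random writes
        "--iodepth=32",               # queue depth of 32
        "--ioengine=libaio",
        "--refill_buffers=1",         # fresh random data => incompressible
        f"--runtime={secs}", "--time_based=1",
        "--write_iops_log=run",       # per-interval IOPS samples
        "--log_avg_msec=1000",        # one sample per second
    ]

cmd = torture_cmd("/dev/sdX")
```

A sequential fill pass (fio's `--rw=write --bs=128k` against the same device) would precede this, matching the secure-erase-then-fill preconditioning described above.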

We record instantaneous IOPS every second for the duration of the test, then plot IOPS vs. time to generate the scatter plots below. Within each set, the graphs share the same scale for easy comparison. The first two sets use a log scale, while the last set uses a linear scale that tops out at 40K IOPS for better visualization of the differences between drives.
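As a sketch of the bookkeeping involved, the per-second samples can be parsed into (time, IOPS) points ready for a scatter plot. The log format here (time in ms, value, direction, block size) follows fio's IOPS logs, which is an assumption since the article doesn't name its tooling, and the sample values are invented:

```python
import csv
import io

# Hypothetical excerpt of a per-second IOPS log: "time_ms, iops, dir, bs"
SAMPLE_LOG = """\
1000, 38000, 1, 4096
2000, 36500, 1, 4096
3000, 9000, 1, 4096
4000, 4200, 1, 4096
"""

def load_iops(log_text: str) -> list[tuple[float, int]]:
    """Return (seconds, IOPS) pairs for an IOPS-vs-time scatter plot."""
    rows = csv.reader(io.StringIO(log_text))
    return [(int(t) / 1000, int(v)) for t, v, *_ in rows]

points = load_iops(SAMPLE_LOG)
```

The resulting points can be handed straight to a plotting call such as matplotlib's `pyplot.scatter` to produce IOPS-vs-time charts like the ones below.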

The high-level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, we vary the percentage of the drive that gets filled/tested depending on the amount of spare area we're trying to simulate. The buttons are labeled with the user capacity the drive would have advertised had the SSD vendor decided to use that specific amount of spare area.

If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here, but not all controllers are guaranteed to behave the same way.
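The arithmetic behind sizing such a partition is simple. A minimal sketch, assuming a 240GB-class drive built on 256GiB of raw NAND (typical for this generation, but not stated in the article):

```python
GiB, GB = 1024**3, 1000**3

def spare_fraction(raw_bytes: int, user_bytes: int) -> float:
    """Fraction of raw NAND held back as spare area."""
    return (raw_bytes - user_bytes) / raw_bytes

RAW = 256 * GiB   # assumed raw NAND capacity for a 240GB-class drive

# Out of the box, a 240GB drive on 256GiB of NAND reserves roughly 12.7%:
default_op = spare_fraction(RAW, 240 * GB)

# To simulate 25% total spare area, partition only 75% of the raw capacity:
partition_size = int(RAW * (1 - 0.25))
```

The same calculation works backwards: given any target over-provisioning percentage, the partition size is just raw capacity times one minus that fraction.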

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
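The shape of that dropoff can be reasoned about with a toy model: once no free blocks remain, each host write drags along extra garbage-collection writes, so sustained IOPS fall roughly in inverse proportion to write amplification. The numbers below are illustrative, not measured:

```python
def steady_state_iops(clean_iops: float, write_amplification: float) -> float:
    """Toy model: once every host write forces a read-modify-write cycle,
    the NAND performs write_amplification page writes per host page, so
    sustained host IOPS scale as clean IOPS / write amplification."""
    return clean_iops / write_amplification

# e.g. a drive that bursts to 40K IOPS but suffers a WA of 8 in steady state:
sustained = steady_state_iops(40_000, 8)
```

This is why more spare area generally helps: with more free blocks to draw on, garbage collection moves less valid data per erase, write amplification stays lower, and the steady-state floor sits higher.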

The second set of graphs zooms in to the beginning of steady-state operation for the drive (t=1400s). The third set also looks at the beginning of steady-state operation, but on a linear performance scale.

[Graph: 4KB random write IOPS vs. time, full 2000 second run (log scale). Drives: Intel SSD 530 240GB, Intel SSD 335 240GB, Corsair Neutron 240GB, Samsung SSD 840 EVO 250GB, Samsung SSD 840 Pro 256GB. Views: Default, 25% OP]

Even though the SF-2281 is over two and a half years old, its performance consistency is still impressive. Compared to the SSD 335 there has been a significant improvement: the SSD 530 takes nearly twice as long to enter steady state. Increasing the over-provisioning doesn't seem to have a major impact on performance, which is odd. On one hand that's a good thing, as you can fill the SSD 530 without worrying that its performance will degrade; on the other hand, the steady-state performance could be better. For example, the Corsair Neutron beats the SSD 530 by a fairly big margin with 25% over-provisioning.

[Graph: 4KB random write IOPS vs. time, steady-state zoom (log scale). Drives: Intel SSD 530 240GB, Intel SSD 335 240GB, Corsair Neutron 240GB, Samsung SSD 840 EVO 250GB, Samsung SSD 840 Pro 256GB. Views: Default, 25% OP]

[Graph: 4KB random write IOPS vs. time, steady-state zoom (linear scale, 0-40K IOPS). Same five drives; Default and 25% OP views]

TRIM Validation

To test TRIM, I filled the drive with incompressible sequential data and then subjected it to 60 minutes of incompressible 4KB random writes at a queue depth of 32. I measured performance after the torture as well as after a single TRIM pass, using AS-SSD since it uses incompressible data and hence suits this purpose.
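One way to put a number on how well TRIM worked is the fraction of the torture-induced speed loss that a single TRIM pass recovers. A quick sketch using the AS-SSD figures from the table in this section:

```python
def trim_recovery(clean: float, tortured: float, after_trim: float) -> float:
    """Fraction of torture-induced speed loss recovered by one TRIM pass
    (1.0 would mean a full return to clean-drive performance)."""
    return (after_trim - tortured) / (clean - tortured)

# AS-SSD sequential write figures for the SSD 530 (MB/s), from the table:
recovered = trim_recovery(315.1, 183.3, 193.3)   # roughly 0.08
```

That only about 8% of the lost speed comes back is what "TRIM has never been fully functional" looks like in numbers.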

Intel SSD 530 Resiliency - AS-SSD Incompressible Sequential Write
Drive                  Clean        After Torture (60 min)   After TRIM
Intel SSD 530 240GB    315.1 MB/s   183.3 MB/s                193.3 MB/s

SandForce's TRIM has never been fully functional when the drive is pushed into a corner with incompressible writes, and the SSD 530 doesn't change that. This is a real problem with SandForce drives if you're going to store lots of incompressible data (such as MP3s, H.264 videos and other highly compressed formats), because sequential speeds may suffer even more in the long run. As an OS drive the SSD 530 will do just fine since it won't be full of incompressible data, but I would recommend buying something non-SandForce if the main use will be storing incompressible data. Hopefully SandForce's third-generation controller will bring a fix for this.


60 Comments


  • AnnonymousCoward - Monday, November 18, 2013 - link

    I enjoyed the review, but why can't you have a single real-world benchmark? You compare CPUs based on the time it takes to encode/decode, and fps in games. That tells readers the quantified difference. Your SSD data tells the reader nothing about Windows startup time, file copy time, and program load time. This has been an oversight on AnandTech from day 1. I've brought this up multiple times in these comments, but you guys somehow don't get it.
  • dhisumdhisum - Tuesday, November 19, 2013 - link

    Debroah, will you marry me? I don't work, I am a bum.
  • dac7nco - Tuesday, November 19, 2013 - link

    Greatest reply ever.
  • Bullwinkle J Moose - Saturday, November 23, 2013 - link

    Technically, that was a proposal...
    The reply has not yet been given
  • Tjalve - Wednesday, November 20, 2013 - link

    I have actually done that kind of testing, but I use 20 min of idle time.
    http://www.nordichardware.se/SSD-Recensioner/svens...

    The text is in Swedish, so use Google Translate to translate it to English. Scroll down and click on the links, and check the difference between test 6 and 7 in the graphs.
  • nicolaim - Wednesday, November 27, 2013 - link

    MyDigitalSSD sells M.2 SSDs at retail, so saying M.2 SSDs are OEM-only is incorrect.
  • mi1stormilst - Friday, December 06, 2013 - link

    The Intel 530 is $169.99 on Newegg today ... tack on the 10% discount code floating around (NAFSAVETENDEC6W) for Newegg and you have a bargain at $155.98 shipped!!!
  • PKR - Sunday, December 08, 2013 - link

    With my MacBook Pro (Mid 2010) and an Intel 530 240GB with DC12 firmware, I think this SSD is slow: I am only getting about 200 MB/s write and 260 MB/s read. Very disappointed, as the reviews online pointed to speeds in the range of 500 MB/s.

    I tried the installation two ways: one by cloning the system partition using Carbon Copy Cloner, and another using a fresh install from the SuperDrive and then updating. In both cases the speed didn't change.

    If it matters, I have 4 partitions on the drive. The system partition is 100GB in size, with about 40GB of free space after migrating my content.
  • Wolfpup - Monday, December 16, 2013 - link

    I switched from Intel to Micron/Crucial after Intel switched to SandForce controllers... I'd still pick this over OTHER SandForce drives, but I'm still picking an M500 over this...
