Performance Consistency

In our Intel SSD DC S3700 review Anand introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed additional testing to demonstrate it. The reason we don't see consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying that work can result in higher peak performance at the expense of much lower worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below we take a freshly secure erased SSD and fill it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. We run the test for just over half an hour; that's nowhere near as long as our steady-state tests, but it's enough to give a good look at drive behavior once all of the spare area fills up.
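If you want a rough feel for the workload itself, the access pattern is easy to reproduce. The sketch below is only an approximation of what our benchmark tool does (it issues one write at a time rather than keeping 32 outstanding, and the device path and duration are placeholders), but it captures the 4KB-aligned random offsets and incompressible data:

    # Rough sketch of the torture workload: 4KB-aligned random writes across
    # every LBA with incompressible (random) data. Device path, duration and
    # the single outstanding write are placeholders/simplifications, not the
    # actual benchmark configuration used for the review.
    import os
    import random
    import time

    DEVICE = "/dev/sdX"      # example only - writing here destroys all data on the drive
    BLOCK = 4096             # 4KB writes
    DURATION = 2000          # seconds, matching the test window

    fd = os.open(DEVICE, os.O_WRONLY)
    size = os.lseek(fd, 0, os.SEEK_END)           # total capacity in bytes
    blocks = size // BLOCK

    deadline = time.time() + DURATION
    while time.time() < deadline:
        offset = random.randrange(blocks) * BLOCK   # random 4KB-aligned LBA
        os.pwrite(fd, os.urandom(BLOCK), offset)    # incompressible payload
    os.close(fd)

A real QD32 run needs asynchronous I/O to keep 32 writes in flight (a tool like fio or Iometer is the practical way to do that); the loop above only illustrates the access pattern.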

We record instantaneous IOPS every second for the duration of the test and then plot IOPS vs. time to generate the scatter plots below. All of the graphs within a set share the same scale so the drives can be compared directly. The first two sets use a log scale, while the last set of graphs uses a linear scale that tops out at 40K IOPS for better visualization of the differences between drives.
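For those curious how the per-second samples turn into the charts below, something along these lines works; the log file name and column layout here are assumptions rather than our actual tooling:

    # Sketch: turn per-second IOPS samples into a scatter plot like the ones below.
    # The log file name and column names are assumptions, not our actual format.
    import csv
    import matplotlib.pyplot as plt

    times, iops = [], []
    with open("ssd530_randwrite_qd32.csv") as f:    # hypothetical per-second log
        for row in csv.DictReader(f):
            times.append(float(row["time_s"]))
            iops.append(float(row["iops"]))

    plt.scatter(times, iops, s=2)
    plt.yscale("log")                 # the first two sets of graphs use a log scale
    plt.xlim(0, 2000)
    plt.xlabel("Time (s)")
    plt.ylabel("4KB Random Write IOPS (QD32)")
    plt.savefig("consistency_scatter.png", dpi=150)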

The high-level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, we vary the percentage of the drive that gets filled/tested depending on the amount of spare area we're trying to simulate. The buttons are labeled with the user capacity the drive would have been advertised with had the SSD vendor decided to use that specific amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's best to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here, but not all controllers are guaranteed to behave the same way.
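The arithmetic for sizing the partition is trivial; the snippet below is just a worked example with illustrative numbers (exact over-provisioning percentages depend on how you count the drive's built-in spare area, so treat it as a guideline rather than a formula for the graph labels):

    # Worked example: how much of the drive to leave unpartitioned to simulate
    # extra spare area. Capacities and the 25% target are illustrative only.
    GB = 1000 ** 3

    advertised_capacity = 240 * GB    # user capacity of the drive as sold
    extra_spare_fraction = 0.25       # portion of that capacity to set aside

    partition_size = advertised_capacity * (1 - extra_spare_fraction)
    left_unused = advertised_capacity - partition_size

    print(f"Create a {partition_size / GB:.0f} GB partition, "
          f"leave {left_unused / GB:.0f} GB unpartitioned")
    # -> Create a 180 GB partition, leave 60 GB unpartitioned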

The first set of graphs shows the performance data over the entire 2000-second test period. In these charts you'll notice an early period of very high performance followed by a sharp drop-off. What you're seeing is the drive allocating new blocks from its spare area, then eventually using up all of its free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
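To see why that drop-off is so steep, consider a deliberately simplified model of garbage collection; this is a toy illustration with made-up block geometry, not how any specific controller's firmware behaves:

    # Toy model of why writes slow down once free blocks run out. Not a model of
    # any particular controller; the block geometry is an illustrative example.
    PAGES_PER_BLOCK = 256

    def write_amplification(valid_pages):
        # Reclaiming a block that still holds `valid_pages` live pages means
        # rewriting those pages elsewhere before the block can be erased, so only
        # (PAGES_PER_BLOCK - valid_pages) pages of new host data fit per erase.
        freed = PAGES_PER_BLOCK - valid_pages
        return PAGES_PER_BLOCK / freed   # NAND pages written per page of host data

    print(write_amplification(128))   # 2.0  - plenty of spare area, mostly stale blocks
    print(write_amplification(248))   # 32.0 - nearly full blocks, performance collapses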

The second set of graphs zooms in on the beginning of steady-state operation for the drive (t=1400s). The third set also looks at the beginning of steady-state operation, but on a linear performance scale. Click the buttons below each graph to switch source data.

[Performance consistency graphs: 4KB random write IOPS vs. time over the full 2000-second run, log scale. Drives: Intel SSD 530 240GB, Intel SSD 335 240GB, Corsair Neutron 240GB, Samsung SSD 840 EVO 250GB, Samsung SSD 840 Pro 256GB. Configurations: Default, 25% OP]

Even though the SF-2281 is over two and a half years old, its performance consistency is still impressive. Compared to the SSD 335 there has been a significant improvement, as it takes the SSD 530 nearly twice as long to enter steady state. Oddly, increasing the over-provisioning doesn't seem to have a major impact on performance. On one hand that's a good thing, since you can fill the SSD 530 without worrying that its performance will degrade; on the other hand, the steady-state performance could be better. With 25% over-provisioning, for example, the Corsair Neutron beats the SSD 530 by a fairly big margin.

[Performance consistency graphs: steady-state close-up (t=1400s onward), log scale. Same drives and Default/25% OP configurations as above]

 

[Performance consistency graphs: steady-state close-up (t=1400s onward), linear scale capped at 40K IOPS. Same drives and Default/25% OP configurations as above]

TRIM Validation

To test TRIM, I filled the drive with incompressible sequential data and then ran 60 minutes of incompressible 4KB random writes at a queue depth of 32. I measured performance after the torture as well as after a single TRIM pass using AS-SSD, since it uses incompressible data and is therefore well suited to this purpose.

Intel SSD 530 Resiliency - AS-SSD Incompressible Sequential Write
                      Clean        After Torture (60 min)    After TRIM
Intel SSD 530 240GB   315.1 MB/s   183.3 MB/s                193.3 MB/s

SandForce's TRIM has never been fully functional once the drive has been pushed into a corner with incompressible writes, and the SSD 530 doesn't change that. This is a real problem with SandForce drives if you're going to store lots of incompressible data (such as MP3s, H.264 videos and other highly compressed formats), because sequential speeds may degrade even further in the long run. As an OS drive the SSD 530 will do just fine since it won't be full of incompressible data, but I would recommend something non-SandForce if the main use will be storing incompressible data. Hopefully SandForce's third-generation controller will bring a fix for this.
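To put the table above in perspective, a quick calculation with the measured figures shows just how little of the clean-state performance comes back after TRIM:

    # Quick check using the AS-SSD figures from the table above: how much of the
    # clean-state sequential write speed does the SSD 530 get back?
    clean, tortured, after_trim = 315.1, 183.3, 193.3   # MB/s

    print(f"After torture: {tortured / clean:.0%} of clean performance")    # ~58%
    print(f"After TRIM:    {after_trim / clean:.0%} of clean performance")  # ~61%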

Comments

  • spacecadet34 - Friday, November 15, 2013 - link

    Given that this very drive is today's Newegg Canada's ShellShocker deal, I'd say this review is quite timely!
  • ExodusC - Friday, November 15, 2013 - link

    I picked up a 180GB Intel 530 recently after doing a lot of searching for a cheap SSD for my OS and some programs. It replaced my old first generation 60GB OCZ Vertex. I was hesitant about using a SandForce controller drive, since many people apparently still have issues with certain drives, but I decided to jump on the 530.

    I'm pleased with the performance and price, and I was blown away by Intel's software, allowing you to flash the drive to the latest firmware while it's running with your OS on it. That's a huge leap above the pains of trying to get my old OCZ drive to flash to the latest firmware (which is sometimes a destructive flash).
  • Samus - Friday, November 15, 2013 - link

    Amazingly, I haven't ever had an issue with an Intel SandForce drive. I had some quirkiness (not detecting upon reboot/resume from hibernation) with a 330 at launch, but they fixed it almost immediately with a firmware update.

    I can't say the same for OCZ. I've owned 3 of their drives and 2 failed, including the RMAs, in under 6 months. One failed in 3 days; it just wouldn't detect in the BIOS, even on different machines or with a USB SATA cable. Ironically, the Vertex 2 240GB I have has been solid for over 2 years in my media center running 24/7, so there is no rhyme or reason to it.

    If only Intel's networking division was as on-the-ball with software updates as their storage division. My Intel 7260 AC wifi card occasionally doesn't detect any networks and it is a very common problem. At least they sorted the Bluetooth issues.
  • ExodusC - Friday, November 15, 2013 - link

    The 60GB OCZ Vertex I replaced actually was not my first. I RMA'd my original drive after I think I screwed up a firmware flash (it seemed to be my fault and not the drive's). That's another reason I'm happy with my Intel drive: the firmware updating is so incredibly painless and low-risk.

    If you're familiar with Anand's SSD anthology and the history behind the Vertex, you might remember that the first generation OCZ Vertex with production firmware was the first consumer SSD that didn't suffer from awful stuttering issues (due in large part to Anand's communication with OCZ on the issue). Other companies followed suit and prioritized consistent performance over maximum throughput. At the time, the Vertex was a no-brainer (this was in the pre-Intel X25-M days).

    You're right that OCZ seems to have some QC issues nowadays. On the plus side, I can definitely say that OCZ's customer support is top notch. They were extremely fast in qualifying me to RMA my drive after the failed firmware flash.
  • 'nar - Monday, November 18, 2013 - link

    You are complaining with no details to back it up. You said your Vertex 2 works fine, but you failed to mention the model OCZ drives that failed.

    I have used Vertex (Limited, 2, 3) and now Vector drives and have not had a bad experience yet. But I looked into the hardware and never considered the Solid or Agility series in the first place. I have replaced another guy's SSD three times. I finally told him to give up on RMA's and buy a quality drive. Solid and Agility are not quality, they are cheap. That's why OCZ finally dropped them.

    I've installed dozens of SSDs, mostly Intel/SandForce models, and have never had an unexplained failure. I did have one failure, but that system killed a hard drive a month even before I installed the SSD, so it is just a quirk of that system.

    I have OCZ in all of my own systems (9) because they eke out a bit more performance, but the Intel Toolbox is a winner for the systems I set up for others, where I cannot be there for support.
  • jonjonjonj - Thursday, November 21, 2013 - link

    Looks like you made an OCZ fanboy mad. OCZ deserves the terrible reputation they have, and after all the bad drives they sold I wouldn't touch one.
  • Bullwinkle J Moose - Saturday, November 23, 2013 - link

    You did not screw up the firmware flash!!!
    The number one failure mechanism for OCZ drives is a firmware update, as can easily be verified by the complaints on OCZ's forum and in Newegg customer reviews.

    I have torture tested OCZ SSDs (Vertex 1 and 2) by killing power, not aligning partitions, defragging and several other methods not recommended by OCZ.

    Nothing would damage the drives until the firmware was updated as per OCZ's instructions, as can be seen from the thousands of customer complaints.

    Anyone commenting otherwise is a LIAR and did not research this topic thoroughly or honestly!
  • Cellar Door - Friday, November 15, 2013 - link

    My Intel failed after just a year and a half - so don't think they are immune to it.
  • Sivar - Saturday, November 16, 2013 - link

    This is true. Nothing is immune to manufacturing defects.
    I had an opportunity for a few years to see actual return rates for many hard drive and SSD manufacturers. Intel SSDs consistently had the lowest failure rates in the industry, at least through the 520. I don't have the most current data, but I would be surprised if the numbers have suddenly changed since then.
  • Sivar - Saturday, November 16, 2013 - link

    Note that the OCZ Vertex 3 and later have been pretty solid. The previous generations were so alarmingly bad that I am a little surprised they are still in business.
