Performance Consistency

In our Intel SSD DC S3700 review Anand introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to show that. The reason we don't have consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying it can result in higher peak performance at the expense of much lower worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below we take a freshly secure erased SSD and fill it with sequential data. This ensures that all user-accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. We run the test for just over half an hour, nowhere near as long as our steady state tests, but long enough to give a good look at drive behavior once all spare area fills up.
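For those who want to reproduce the workload on Linux, fio can generate it; below is a hypothetical job file matching the description above. The device name is a placeholder, and you would precondition the drive with a separate sequential fill job first.

```ini
; Hypothetical fio job approximating the consistency workload:
; 4KB random writes across all LBAs, QD32, for 2000 seconds.
; fio's default random buffers are effectively incompressible.
[consistency]
filename=/dev/sdX      ; placeholder -- writing here destroys data
ioengine=libaio
direct=1
rw=randwrite
bs=4k
iodepth=32
time_based=1
runtime=2000
norandommap=1
```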

We record instantaneous IOPS every second for the duration of the test, then plot IOPS vs. time to generate the scatter plots below. Within each set, all graphs share the same scale. The first two sets use a log scale for easy comparison, while the last set uses a linear scale that tops out at 40K IOPS to better visualize the differences between drives.
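Reducing the raw per-second log to headline numbers is trivial; here is a minimal sketch of the kind of reduction behind these plots (the function and its defaults are ours for illustration, not the actual test harness):

```python
def consistency_stats(iops, start=1400):
    """Worst-case and average IOPS once the drive reaches steady state.

    One IOPS sample is logged per second, so the list index doubles as
    a timestamp; `start` mirrors the t=1400s point where the zoomed
    charts begin.
    """
    window = iops[start:]
    return min(window), sum(window) / len(window)

# Toy series: 1400s of high-speed writes, then steady state samples.
worst, average = consistency_stats([10_000] * 1400 + [500, 700, 600])
```

A drive with consistent IO will show `worst` sitting close to `average`; a drive that defers cleanup will show a much larger gap.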

The high-level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, we vary the percentage of the drive that gets filled/tested depending on the amount of spare area we're trying to simulate. The buttons are labeled with the user capacity the vendor would have advertised had it shipped the SSD with that specific amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here, but not all controllers are guaranteed to behave the same way.
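The arithmetic for sizing that partition is straightforward; a sketch, with the caveat that vendors quote over-provisioning against different baselines (spare as a share of raw NAND here, which is only one of several conventions):

```python
GIB = 2**30

def partition_bytes_for_op(raw_gib, op_fraction):
    """Partition size that leaves `op_fraction` of the raw NAND as spare.

    `op_fraction` is spare area as a share of raw capacity; OP is
    sometimes quoted as spare/user instead, which gives larger
    percentages for the same partition size.
    """
    raw_bytes = raw_gib * GIB
    return int(raw_bytes * (1 - op_fraction))

# A drive with 256GiB of raw NAND, partitioned for 25% spare area:
size = partition_bytes_for_op(256, 0.25)   # roughly a 206GB partition
```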

The first set of graphs shows the performance data over the entire 2000-second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing there is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for every subsequent write (write amplification goes up, performance goes down).
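That transition shows up as a cliff in the IOPS series, and it's easy to locate programmatically; a sketch (the 50% threshold is an illustrative choice of ours, not part of the methodology):

```python
def find_cliff(iops, fraction=0.5):
    """Index (in seconds) of the first sample below `fraction` of the
    opening rate.

    Uses the first sample as the baseline for simplicity; a median of
    the opening seconds would be more robust against jitter.
    """
    baseline = iops[0]
    for t, rate in enumerate(iops):
        if rate < baseline * fraction:
            return t
    return None  # never dropped below the threshold

# Toy series: fast while spare blocks last, then read-modify-write sets in.
cliff = find_cliff([90_000] * 120 + [25_000] * 120)  # cliff at t=120s
```

A later cliff for the same capacity generally means more spare area (or more efficient use of it) before the drive runs out of free blocks.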

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

  OCZ Vector 150 240GB OCZ Vector 256GB Corsair Neutron 240GB SanDisk Extreme II 480GB Samsung SSD 840 Pro 256GB
Default
25% OP

Performance consistency is simply outstanding. OCZ told us that they focused heavily on IO consistency in the Vector 150, and the results speak for themselves. Obviously the added over-provisioning helps, but if you compare it to the Corsair Neutron with the same ~12% over-provisioning, the Vector 150 wins. The Neutron and other LAMD-based SSDs have been among the most consistent SSDs to date, so the Vector 150 beating the Neutron is certainly a notable milestone for OCZ. However, if you increase the over-provisioning to 25%, the Vector 150's advantage doesn't scale. In fact, the original Vector is slightly more consistent with 25% over-provisioning than the Vector 150, but both are definitely among the most consistent drives we've tested.

  OCZ Vector 150 240GB OCZ Vector 256GB Corsair Neutron 240GB SanDisk Extreme II 480GB Samsung SSD 840 Pro 256GB
Default
25% OP

 

  OCZ Vector 150 240GB OCZ Vector 256GB Corsair Neutron 240GB SanDisk Extreme II 480GB Samsung SSD 840 Pro 256GB
Default
25% OP

 

TRIM Validation

Above is an HD Tach graph run on a secure erased drive to get the baseline performance. The graph below is from a pass run after our performance consistency test (the drive was first filled with sequential data and then hammered with 4KB random writes at a queue depth of 32 for 2000 seconds):

As always, performance degrades, although the Vector 150 does a pretty good job of recovering performance if you write sequential data to the drive. Finally, I TRIM'ed the entire volume and reran HD Tach to make sure TRIM is functional.

It is. You can also see the impact of OCZ's "performance mode" in the graphs. Once 50% of the LBAs have been filled, the drive reorganizes its data, which causes the performance degradation. If you leave the drive idling after filling over half of it, performance will return to close to brand-new state within minutes. Our internal tests with the original Vector showed that the data reorganization takes less than 10 minutes, so it's nothing to be concerned about. The HD Tach graphs paint a much worse picture of the situation than it really is.

59 Comments

  • Kristian Vättö - Friday, November 8, 2013 - link

    "The reason they haven't been bought is that the barefoot is just a rebadged marvell controller.they have no ip."

    The Octane and Vertex 4 used Marvell-based silicon, but the Barefoot 3 is OCZ's/Indilinx's own silicon. Unless you have some proof that states otherwise.
  • JellyRoll - Friday, November 8, 2013 - link

    The 'proof' is common sense. First, they have lied in the past, saying that the first Barefoots were from them, until a document was leaked, remember that?
    Second, every single company producing a good controller that is not owned by a fab or other large enterprise company has been bought in the last few years. The exception is OCZ. If they actually had a controller, someone else would have bought them. Doesn't it strike you as funny that this Barefoot also has the same "Aragon Co-Processor" as the controller that they admitted they were lying about, which was Marvel? If they lied once, what makes you think they wouldn't lie again? Wasn't it Anand that broke that story in the first place? The only difference is the leaker was found and fired.
  • blanarahul - Friday, November 29, 2013 - link

    " Doesn't it strike you as funny that this Barefoot also has the same "Aragon Co-Processor" as the controller that they admitted they were lying about, which was Marvel? ".

    No. It doesn't. Vector was the first drive that used their so-called Aragon Processor.

    Or maybe you can find me a document/webpage as proof.

    Anyway, I hope the Indilinx and PLX guys go to Toshiba. The rest of OCZ should go to Corsair.
  • JellyRoll - Thursday, November 7, 2013 - link

    OCZ stock is worth 45 cents a share, having lost 3x its value this week alone. By all accounts from numerous sources in the market, OCZ is in a state of collapse. Readers should be warned: they will not be there to honor that RMA.
  • Shadowmaster625 - Thursday, November 7, 2013 - link

    The irony of a dead OCZ review sample. Screw this company. I just had my 2nd to last OCZ drive die on me this week. It started with a failure to update Java, which led to a failure of the Windows installer. Finally it just died and the BIOS wouldn't even see the drive anymore. I have one more OCZ drive left, and I am sure it will die soon. Yes, every single OCZ drive that I have ever bought has died.
  • Stefanfj - Thursday, November 7, 2013 - link

    That is really sad and unfortunate. I have an Agility3 60GB which I bought in June 2011 (IIRC), still working perfectly in a friend's PC. I also have two Vertex4 128GB drives (separate computers), as well as having sold two Vertex4 64GB drives to two other friends - they're all working fine... Guess it really is just a big lottery maybe...
  • clarkn0va - Thursday, November 7, 2013 - link

    In my experience this depends entirely on which line you are buying. I have and use four or five Vertex and Agility series drives and I've sold dozens of others. I have yet to see a single one of them fail.

    By contrast, I bought and sold a handful of lower end OCZ drives, including the Petrol, Solid and Fuel, and the majority of these died in short time. I even had a RMA replacement die after about a month of light use. The last Petrol to fail was replaced by an Agility because thankfully OCZ had stopped shipping the Petrol.

    So yeah, I stick to their top shelf SSDs and happily pocket the savings over the lower performing and more expensive Intel drives.
  • zodiacsoulmate - Thursday, November 7, 2013 - link

    My Vector 256GB died in 3 months, and now I'm on my third replacement 256GB drive...
  • colonelclaw - Thursday, November 7, 2013 - link

    I don't like kicking anyone when they're down, but I've heard rumours that OCZ have burnt through all their cash and are dangerously close to going bust. Does anyone know if this is true?
  • marc1000 - Thursday, November 7, 2013 - link

    Kristian, I was expecting to see the Intel SSD 530 in the comparison list. Do you have an ETA about testing it someday? Or is the performance too similar to SSD 335 to warrant a review?

    thanks
