AnandTech Storage Bench - The Destroyer

The Destroyer has been an essential part of our SSD test suite for nearly two years now. It was crafted to provide a benchmark for very IO-intensive workloads, which is where you most often notice the difference between drives. It's not necessarily the most relevant test for the average user, but for anyone with a heavier IO workload The Destroyer should do a good job of characterizing performance.

AnandTech Storage Bench - The Destroyer
Workload | Description | Applications Used
Photo Sync/Editing | Import images, edit, export | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox
Gaming | Download/install games, play games | Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite
Virtualization | Run/manage VM, use general apps inside VM | VirtualBox
General Productivity | Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware
Video Playback | Copy and watch movies | Windows 8
Application Development | Compile projects, check out code, download code samples | Visual Studio 2012

The table above describes the workloads of The Destroyer in a bit more detail. Most of the workloads are run independently in the trace, but there are obviously various operations (such as backups) running in the background.

AnandTech Storage Bench - The Destroyer - Specs
Reads: 38.83 million
Writes: 10.98 million
Total IO Operations: 49.8 million
Total GB Read: 1583.02 GB
Total GB Written: 875.62 GB
Average Queue Depth: ~5.5
Focus: Worst case multitasking, IO consistency

The Destroyer gets its name from the sheer size of the trace: nearly 50 million IO operations. That's enough to effectively put the drive into steady-state and give an idea of performance in worst-case multitasking scenarios. About 67% of the IOs are sequential in nature, with the rest ranging from pseudo-random to fully random.
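
As a quick sanity check on the totals in the spec table, the averages can be worked out with a few lines of arithmetic. The sketch below is my own illustration and assumes the GB figures are decimal gigabytes:

    # Back-of-the-envelope averages from the spec table above.
    # Assumption: the GB figures are decimal gigabytes (10^9 bytes).
    reads, writes = 38.83e6, 10.98e6
    bytes_read, bytes_written = 1583.02e9, 875.62e9

    avg_read_kib = bytes_read / reads / 1024       # ~39.8 KiB per read
    avg_write_kib = bytes_written / writes / 1024  # ~77.9 KiB per write

    print(f"average read size:  {avg_read_kib:.1f} KiB")
    print(f"average write size: {avg_write_kib:.1f} KiB")

The roughly 40KB average read and 80KB average write are consistent with the IO size breakdown below, where 64KB and 128KB transfers carry most of the volume.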

AnandTech Storage Bench - The Destroyer - IO Breakdown
IO Size    | <4KB | 4KB   | 8KB  | 16KB | 32KB | 64KB  | 128KB
% of Total | 6.0% | 26.2% | 3.1% | 2.4% | 1.7% | 38.4% | 18.0%

I've included a breakdown of the IOs in the table above, which accounts for 95.8% of the total IOs in the trace. The remaining IOs fall into relatively rare in-between sizes, none of which has a significant (>1%) share on its own. Over half of the transfers are large IOs (64KB and 128KB), while roughly a quarter are 4KB in size.
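
To see how the table translates into data volume rather than IO count, the bucket shares can be weighted by their transfer sizes. The sketch below is my own illustration; it assumes the '<4KB' bucket averages 2KB and ignores the ~4.2% of IOs not listed:

    # Approximate share of total data moved per IO size, from the breakdown above.
    # Assumptions: "<4KB" IOs are treated as 2KB; the unlisted ~4.2% of IOs are ignored.
    size_kb      = [2,    4,    8,    16,   32,   64,   128]
    share_of_ios = [6.0,  26.2, 3.1,  2.4,  1.7,  38.4, 18.0]   # percent of all IOs

    data_moved = [kb * pct for kb, pct in zip(size_kb, share_of_ios)]
    total = sum(data_moved)
    for kb, moved in zip(size_kb, data_moved):
        print(f"{kb:>3} KB IOs move {100 * moved / total:5.1f}% of the data")

Under those assumptions the 64KB and 128KB transfers account for roughly 95% of all data moved, which is worth keeping in mind when interpreting the data rate results below.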

AnandTech Storage Bench - The Destroyer - QD Breakdown
Queue Depth | 1     | 2     | 3    | 4-5  | 6-10 | 11-20 | 21-32 | >32
% of Total  | 50.0% | 21.9% | 4.1% | 5.7% | 8.8% | 6.0%  | 2.1%  | 1.4%

Despite the average queue depth of 5.5, half of the IOs happen at a queue depth of one, and scenarios where the queue depth is higher than 10 are rather infrequent.
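
For context, the buckets above simply count how many IOs are already in flight when each new IO is issued. A minimal sketch of that bookkeeping, assuming a trace of (issue_time, completion_time) pairs (my own field names, not the actual test tooling), could look like this:

    from bisect import insort

    def qd_histogram(ios):
        """ios: (issue_time, completion_time) pairs, sorted by issue time.
        Returns bucket label -> percentage of IOs issued at that queue depth."""
        buckets = {"1": 0, "2": 0, "3": 0, "4-5": 0,
                   "6-10": 0, "11-20": 0, "21-32": 0, ">32": 0}
        outstanding = []  # completion times of in-flight IOs, kept sorted
        for issue, complete in ios:
            while outstanding and outstanding[0] <= issue:
                outstanding.pop(0)          # retire IOs that finished already
            insort(outstanding, complete)   # this IO is now in flight
            qd = len(outstanding)
            if   qd == 1:  buckets["1"] += 1
            elif qd == 2:  buckets["2"] += 1
            elif qd == 3:  buckets["3"] += 1
            elif qd <= 5:  buckets["4-5"] += 1
            elif qd <= 10: buckets["6-10"] += 1
            elif qd <= 20: buckets["11-20"] += 1
            elif qd <= 32: buckets["21-32"] += 1
            else:          buckets[">32"] += 1
        return {k: 100.0 * v / len(ios) for k, v in buckets.items()}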

The two key metrics I'm reporting haven't changed: I'll continue to report both data rate and latency because the two have slightly different focuses. Data rate measures the speed of the data transfer, so it emphasizes large IOs, which simply account for a much larger share of the total amount of data. Latency, on the other hand, ignores IO size, so all IOs are given the same weight in the calculation. Both metrics are useful, although in terms of system responsiveness I think latency is the more critical one. As a result, I'm also reporting two new stats that give very good insight into high-latency IOs: the share of >10ms and >100ms IOs as a percentage of the total.
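
To make the difference between the metrics concrete, here is a minimal sketch of how the four numbers could be pulled out of a list of per-IO records; the IORecord type and field names are my own illustration, not the actual test tooling:

    from dataclasses import dataclass

    @dataclass
    class IORecord:
        size_bytes: int     # transfer size of the IO
        latency_ms: float   # completion latency of the IO

    def summarize(ios, busy_time_s):
        """ios: completed IOs; busy_time_s: time spent servicing the trace (seconds)."""
        total_bytes = sum(io.size_bytes for io in ios)
        data_rate_mbs  = total_bytes / busy_time_s / 1e6              # dominated by large IOs
        avg_latency_ms = sum(io.latency_ms for io in ios) / len(ios)  # every IO weighted equally
        pct_over_10ms  = 100.0 * sum(io.latency_ms > 10 for io in ios) / len(ios)
        pct_over_100ms = 100.0 * sum(io.latency_ms > 100 for io in ios) / len(ios)
        return data_rate_mbs, avg_latency_ms, pct_over_10ms, pct_over_100ms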

I'm also reporting the total power consumed during the trace, which gives good insight into the drive's power consumption under different workloads. It's better than average power consumption in the sense that it also takes performance into account: a faster completion time results in fewer watt-hours consumed. Since the idle times of the trace have been truncated for faster playback, the number doesn't fully capture the impact of idle power consumption, but the metric is nevertheless valuable when it comes to active power consumption.
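
In other words, the reported figure is power integrated over the playback time, so a drive that finishes the trace sooner racks up fewer watt-hours even at the same instantaneous draw. A minimal sketch, assuming evenly spaced power readings in watts (my own illustration):

    def energy_wh(power_samples_w, sample_interval_s):
        """Total energy in watt-hours from evenly spaced power readings."""
        total_joules = sum(power_samples_w) * sample_interval_s  # W * s = J
        return total_joules / 3600.0                             # 1 Wh = 3600 J

    # Example: 2.5 W sustained for 40 minutes is ~1.67 Wh, while the same 2.5 W
    # draw from a drive that finishes the trace in 30 minutes is only ~1.25 Wh.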

[Chart: AnandTech Storage Bench - The Destroyer (Data Rate)]

For a high-end drive, the Vector 180 delivers only average data rates in our heaviest 'The Destroyer' trace. At 480GB and 960GB it's able to keep up with the Extreme Pro, but the 240GB model doesn't fare as well against the competition.

[Chart: AnandTech Storage Bench - The Destroyer (Latency)]

The same story continues when looking at average latency, although I have to say that the differences between drives are quite marginal. What's notable is how consistent the Vector 180 is regardless of the capacity.

[Chart: AnandTech Storage Bench - The Destroyer (Latency)]

On the positive side, the Vector 180 has very few high-latency IOs and actually leads the pack across all capacities.

[Chart: AnandTech Storage Bench - The Destroyer (Latency)]

The Vector 180 also appears to be very power efficient under load and manages to beat every other SSD I've run through the test so far. It's too bad there is no support for slumber power modes, because otherwise the Barefoot 3 seems to excel when it comes to power.

[Chart: AnandTech Storage Bench - The Destroyer (Power)]

Comments

  • nils_ - Wednesday, March 25, 2015 - link

    It's an interesting concept (especially when the Datacenter uses a DC Distribution instead of AC), but I don't know if I would be comfortable with batteries in everything. A capacitor holds less of a charge but doesn't deteriorate over time and the only component that really needs to stay on is the drive (or RAID controller if you're into that).
  • nils_ - Wednesday, March 25, 2015 - link

    "I don't think it has been significant enough to warrant physical power loss protection for all client SSDs."

    If a drive reports a flush as complete, the operating system must be able to trust that the data has actually been written to the underlying device. Any drive that doesn't deliver this is quite simply defective by design. Back in the day this was already a problem with some IDE and SATA drives: they reported a write operation as complete once the data hit the drive cache. Just because something is rated as consumer grade does not mean that they should ship defective devices.

    Even worse is that instead of losing the last few writes you'll potentially lose all the data stored on the drive.

    If I don't care whether the data makes it to the drive I can solve that in software.
  • shodanshok - Wednesday, March 25, 2015 - link

    If a drive receives an ATA FLUSH command, it _will_ write to stable storage (HDD platters or NAND chips) before returning. For unimportant writes (the ones not marked with FUA or encapsulated in an ATA FLUSH), the drive is allowed to store the data in its cache and return _before_ the data hits the actual permanent storage.

    SSDs add another problem: by the very nature of MLC and TLC cells, data at rest (already committed to stable storage) is at risk from the partial page write effect. So PFM+ and Crucial's consumer-drive Power Loss Protection are _required_ for reliable use of the drive. Drives that don't have at least partial power loss protection should use a write-through (read-only cache) approach, at least for the NAND mapping table, or flush the mapping table very frequently (eg: SanDisk).
  • mapesdhs - Wednesday, March 25, 2015 - link


    How do the 850 EVO & Pro deal with this scenario atm?

    Ian.
  • Oxford Guy - Wednesday, March 25, 2015 - link

    "That said, while drive bricking due to mapping table corruption has always been a concern, I don't think it has been significant enough to warrant physical power loss protection for all client SSDs."

    I see you never owned 240 GB Vertex 2 drives with 25nm NAND.
  • prasun - Wednesday, March 25, 2015 - link

    "PFM+ will protect data that has already been written to the NAND"

    They should be able to do this by scanning NAND. The capacitor probably makes life easier, but with better firmware design this should not be necessary.

    With the capacitor, the steady-state performance should be consistent, as they won't need to flush the mapping table to NAND regularly.

    Since this is also not the case, it points to bad firmware design.
  • marraco - Wednesday, March 25, 2015 - link

    I have a bricked Vertex 2 resting a meter away. It was so expensive that I cannot resign myself to throwing it in the trash.

    I will never buy another OCZ product, ever.

    OCZ refused to release the software needed to unbrick it. It's just a software problem. OCZ got my money, but refuses to make it work.

    Do NOT EVER buy anything from OCZ.
  • ocztosh - Wednesday, March 25, 2015 - link

    Hello Marraco, thank you for your feedback and sorry to hear that you had an issue with the Vertex 2. That particular drive was Sandforce based and there was no software to unbrick it unfortunately, nor did the previous organization have the source code for firmware. This was actually one of the reasons that drove the company to push to develop in-house controllers and firmware, so we could control these elements which ultimately impacts product design and support.

    Please do contact our support team and reference this thread. Even though this is a legacy product we would be more than happy to help and provide support. Thank you again for your comments and we look forward to supporting you.
  • mapesdhs - Wednesday, March 25, 2015 - link

    Indeed, the Vertex4 and Vector series are massively more reliable, but the OCZ haters
    ignore them entirely, focusing on the old Vertex2 series, etc. OCZ could have handled
    some of the support issues back then better, but the later products were more reliable
    anyway so it was much less of an issue. With the newer warranty structure, Toshiba
    ownership & NAND, etc., it's a very different company.

    Irony is, I have over two dozen Vertex2E units and they're all working fine (most are
    120s, with a sprinkling of 60s and 240s). One of them is an early 3.5" V2E 120GB,
    used in an SGI Fuel for several years, never a problem (recently replaced with a
    2.5" V2E 240GB).

    Btw ocztosh, I've been talking to some OCZ people recently about why certain models
    force a 3gbit SAS controller to negotiate only a 1.5gbit link when connected to a SATA3
    SSD. This occurs with the Vertex3/4, Vector, etc., whereas connecting the SATA2 V2E
    correctly results in a 3Gbit link. Note I've observed similar behaviour with other brands,
    ditto other SATA2 SSDs (eg. SF-based Corsair F60, 3Gbit link selected ok). The OCZ
    people I talked to said there's nothing they can do to fix whatever the issue might be,
    but what I'm interested in is why it happens; if I can find that out then maybe I can
    figure a workaround. I'm using LSI 1030-based PCIe cards, eg. SAS3442, SAS3800,
    SAS3041, etc. I'd welcome your thoughts on the issue. Would be nice to get a Vertex4
    running with a 3Gbit link in a Fuel, Tezro or Origin/Onyx.

    Note I've been using the Vertex4 as a replacement for ancient 1GB SCSI disks in
    Stoll/SIRIX systems used by textile manufacturers, works rather well. Despite the
    low bandwidth limit of FastSCSI2 (10MB/sec), it still cut the time for a full backup
    from 30 mins to just 6 mins (tens of thousands of small pattern files). Alas, with
    the Vertex4 no longer available, I switched to the Crucial M550 (since it does have
    proper PLP). I'd been hoping to use the V180 instead, but its lack of full PLP is an issue.

    Ian.
  • alacard - Wednesday, March 25, 2015 - link

    In my view the performance consistency results basically blow the lid off of OCZ and the reliability of their Barefoot controller. Despite what most outlets have reported, drives based on this technology have suffered massive failure rates due to sudden power loss for years now. Here we have definitive evidence of those flaws and the lengths OCZ is going to in order to work around them (note, I didn't say 'fix' them).

    The fact that they were willing to go to the extra cost of adding the power loss module, in addition to crippling the sustained performance of their flagship drive in order to flush the cache out of DRAM, speaks VOLUMES about how bad their reliability was before. You don't take such extreme, potentially kiss-of-death measures without a good boot up your ass pushing you headlong toward them. In this case said boot was constructed purely out of OCZ's fear that releasing yet ANOTHER poorly constructed drive would finally put their reputation out of its misery for good and kill any chance of future sales.

    OCZ has cornered themselves in a no win scenario:

    1) They don't bother making the drive reliable, saving the cost of the power loss module and keeping the sustained speed of the Vector 180 high. The drive reviews well with no craters in performance, and the few customers OCZ has left buy another doomed Barefoot SSD that's practically guaranteed to brick on them within a few months. As a result they lose those customers for good, along with their company.

    or

    2) They go to the cost of adding the power loss module and cripple the drive's performance to ensure that the drive is reliable. The drive reviews horribly and no one buys it.

    This is their position. Kiss of death indeed.

    Ultimately, I think it speaks to how complicated controller development is, and that if you don't have a huge company with millions in R&D funding at your disposal, it's probably best not to throw your hat into that ring. It's a shame, but it seems to be the way high tech works. (Global oligopoly, here we come.)

    All things considered, it's nice that this is finally all out in the open.
