AnandTech Storage Bench - The Destroyer

The Destroyer has been an essential part of our SSD test suite for nearly two years now. It was crafted to provide a benchmark for very IO-intensive workloads, which is where you most often notice the difference between drives. It's not necessarily the most relevant test for the average user, but for anyone with a heavier IO workload The Destroyer should do a good job of characterizing performance.

AnandTech Storage Bench - The Destroyer
Workload                | Description                                | Applications Used
Photo Sync/Editing      | Import images, edit, export                | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox
Gaming                  | Download/install games, play games         | Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite
Virtualization          | Run/manage VM, use general apps inside VM  | VirtualBox
General Productivity    | Browse the web, manage local email, copy files, encrypt/decrypt files, back up system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, Ad-Aware
Video Playback          | Copy and watch movies                      | Windows 8
Application Development | Compile projects, check out code, download code samples | Visual Studio 2012

The table above describes the workloads of The Destroyer in a bit more detail. Most of the workloads run independently in the trace, though some operations (such as backups) run in the background.

AnandTech Storage Bench - The Destroyer - Specs
Reads               | 38.83 million
Writes              | 10.98 million
Total IO Operations | 49.8 million
Total GB Read       | 1583.02 GB
Total GB Written    | 875.62 GB
Average Queue Depth | ~5.5
Focus               | Worst-case multitasking, IO consistency

The Destroyer earns its name from the sheer size of the trace: nearly 50 million IO operations. That's enough to effectively put the drive into steady state and characterize performance in worst-case multitasking scenarios. About 67% of the IOs are sequential in nature, with the rest ranging from pseudo-random to fully random.
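As a rough illustration of how a trace's sequential share can be measured, here is a minimal sketch that counts an IO as sequential when it starts at the LBA where the previous IO ended. This is a common heuristic, not necessarily the exact definition behind the 67% figure above, and the `(start_LBA, size_in_blocks)` trace format is an assumption for illustration.

```python
# Sketch: classifying trace IOs as sequential vs. random, assuming a simple
# trace format of (start_LBA, size_in_blocks) tuples. The rule used here
# (an IO is "sequential" if it starts where the previous one ended) is a
# common heuristic, not necessarily the trace tool's exact definition.

def sequential_share(trace):
    """Return the fraction of IOs whose start LBA follows the previous IO."""
    sequential = 0
    prev_end = None
    for lba, size in trace:
        if prev_end is not None and lba == prev_end:
            sequential += 1
        prev_end = lba + size
    return sequential / len(trace)

# Example: three back-to-back IOs followed by a random seek.
trace = [(0, 8), (8, 8), (16, 8), (1000, 8)]
print(sequential_share(trace))  # 0.5 (two of four IOs continue the previous one)
```

A real trace analyzer would also tolerate small gaps between IOs ("pseudo-random" access), but the basic bookkeeping looks like this.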

AnandTech Storage Bench - The Destroyer - IO Breakdown
IO Size    | <4KB | 4KB   | 8KB  | 16KB | 32KB | 64KB  | 128KB
% of Total | 6.0% | 26.2% | 3.1% | 2.4% | 1.7% | 38.4% | 18.0%

I've included a breakdown of the IOs in the table above, which accounts for 95.8% of the total IOs in the trace. The remaining IOs fall into intermediate sizes, none of which has a significant (>1%) share on its own. Over half of the transfers are large (64KB or 128KB) IOs, with about a quarter being 4KB in size.
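The large buckets matter even more than the IO-count shares suggest. The sketch below converts the count shares from the table into shares of total bytes transferred; treating the <4KB bucket as roughly 2KB is an assumption for illustration.

```python
# Sketch: converting the IO-count shares from the table above into shares of
# total bytes transferred, which shows why large IOs dominate data rate.
# The <4KB bucket is approximated as 2KB; that midpoint is an assumption.

count_share = {2: 6.0, 4: 26.2, 8: 3.1, 16: 2.4, 32: 1.7, 64: 38.4, 128: 18.0}  # size_KB: %

total_kb = sum(size * pct for size, pct in count_share.items())
byte_share = {size: 100 * size * pct / total_kb for size, pct in count_share.items()}

for size, pct in byte_share.items():
    print(f"{size}KB: {pct:.1f}% of bytes")
```

Under these assumptions the 64KB and 128KB buckets alone account for roughly 95% of all bytes moved, even though they are only about 56% of the IOs.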

AnandTech Storage Bench - The Destroyer - QD Breakdown
Queue Depth | 1     | 2     | 3    | 4-5  | 6-10 | 11-20 | 21-32 | >32
% of Total  | 50.0% | 21.9% | 4.1% | 5.7% | 8.8% | 6.0%  | 2.1%  | 1.4%

Despite the average queue depth of 5.5, half of the IOs happen at a queue depth of one, and scenarios where the queue depth is higher than 10 are rather infrequent.
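A quick sanity check on the distribution above: taking a weighted average of the buckets using their midpoints (and assuming roughly 40 for the open-ended >32 bucket, an arbitrary choice) lands near 4 rather than the reported ~5.5, because the coarse buckets smooth away the high-QD tail. It still illustrates how heavily the low queue depths dominate.

```python
# Sketch: weighted average queue depth from the bucketed shares above.
# Bucket midpoints (4.5 for 4-5, 8 for 6-10, 15.5 for 11-20, 26.5 for 21-32)
# and the value 40 for the >32 bucket are assumptions; the reported average
# of ~5.5 was computed from the raw trace, not from these buckets.

qd_share = {1: 50.0, 2: 21.9, 3: 4.1, 4.5: 5.7, 8: 8.8, 15.5: 6.0, 26.5: 2.1, 40: 1.4}

avg_qd = sum(mid * pct for mid, pct in qd_share.items()) / 100
print(round(avg_qd, 2))  # 4.07
```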

The two key metrics I'm reporting haven't changed: I'll continue to report both data rate and latency because the two have slightly different focuses. Data rate measures the speed of the data transfer, so it emphasizes large IOs, which simply account for a much larger share of the total amount of data. Latency, on the other hand, ignores the IO size, so all IOs are given the same weight in the calculation. Both metrics are useful, although in terms of system responsiveness I think latency is the more critical one. As a result, I'm also reporting two new stats that give very good insight into high-latency IOs: the shares of >10ms and >100ms IOs as a percentage of the total.
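To make the distinction concrete, here is a minimal sketch of all four metrics computed from hypothetical per-IO records of `(bytes, latency_seconds)`. The record format, the serial-playback assumption, and the sample values are all made up for illustration.

```python
# Sketch: data rate, mean latency, and the two high-latency shares, computed
# from hypothetical (bytes, latency_seconds) records. The trace format and
# the assumption that IOs play back serially (total time = sum of latencies,
# no idle gaps) are simplifications for illustration.

def trace_metrics(ios):
    """Compute headline metrics from (bytes, latency_seconds) records."""
    n = len(ios)
    total_bytes = sum(b for b, _ in ios)
    total_time = sum(lat for _, lat in ios)
    return {
        "data_rate_MBps": total_bytes / total_time / 1e6,
        "mean_latency_ms": 1000 * total_time / n,
        "pct_over_10ms": 100 * sum(lat > 0.010 for _, lat in ios) / n,
        "pct_over_100ms": 100 * sum(lat > 0.100 for _, lat in ios) / n,
    }

# Three hypothetical IOs: two fast ones and a slow 50ms outlier.
print(trace_metrics([(4096, 0.0002), (131072, 0.0015), (4096, 0.0500)]))
```

Note how the single large IO dominates the data-rate numerator while each IO counts equally toward mean latency and the >10ms share, which is exactly the difference in emphasis described above.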

I'm also reporting the total energy consumed during the trace, which gives us good insight into the drive's power behavior under load. It's a better metric than average power consumption because it also takes performance into account: a faster completion time results in fewer watt-hours consumed. Since the idle times of the trace have been truncated for faster playback, the number doesn't fully capture the impact of idle power consumption, but the metric is nevertheless valuable when it comes to active power consumption.
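The performance-aware nature of the energy metric is easy to see in a small sketch: integrate power samples over the playback time, and the drive that finishes sooner consumes fewer watt-hours even at identical average power. The sample interval and power values below are made up.

```python
# Sketch: total energy (watt-hours) from power samples taken during trace
# playback. A faster drive finishes sooner, so even at the same average
# power it consumes fewer Wh; the values here are hypothetical.

def watt_hours(power_samples_w, interval_s):
    """Integrate power samples (rectangle rule) into watt-hours."""
    return sum(power_samples_w) * interval_s / 3600

# Two hypothetical drives, both averaging 3 W, sampled once per second:
fast = [3.0] * 600    # finishes the trace in 600 s
slow = [3.0] * 900    # needs 900 s for the same work
print(watt_hours(fast, 1.0), watt_hours(slow, 1.0))  # 0.5 Wh vs 0.75 Wh
```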

AnandTech Storage Bench - The Destroyer (Data Rate)

For a high-end drive, the Vector 180 posts only an average data rate in our heaviest 'The Destroyer' trace. At 480GB and 960GB it's able to keep up with the Extreme Pro, but the 240GB model doesn't fare as well against the competition.

AnandTech Storage Bench - The Destroyer (Latency)

The same story continues when looking at average latency, although I have to say that the differences between drives are quite marginal. What's notable is how consistent the Vector 180 is regardless of the capacity.

AnandTech Storage Bench - The Destroyer (Latency)

On the positive side, the Vector 180 produces very few high-latency IOs and actually leads the pack across all capacities.

AnandTech Storage Bench - The Destroyer (Power)

The Vector 180 also appears to be very power efficient under load and manages to beat every other SSD I've run through the test so far. Too bad there is no support for slumber power modes because the Barefoot 3 seems to excel otherwise when it comes to power.


89 Comments

  • Shark321 - Wednesday, March 25, 2015 - link

    Tosh, it's a pity PFM does not work on the internal cache of the drive. You can still get file system damage during a power loss.
  • AVN6293 - Sunday, December 20, 2015 - link

    Does this drive support Opal 2.0 eDrive (FIPS/HIPAA compliance)?
  • AVN6293 - Sunday, December 20, 2015 - link

    ...And can the over provisioning be increased by the user?
  • ats - Wednesday, March 25, 2015 - link

    Actually, all consumer drives need power loss protection and they realistically need it much more than drives targeted at the actual enterprise side of the market. It comes down to simple probabilities. The average enterprise SSD is going to be backed by at least 1 additional layer of power loss prevention (UPS et al), have a robust backup infrastructure, and likely mirroring (offsite) on top.

    In contrast, consumer drives are unlikely to have any power loss prevention, unlikely to have anything approaching a backup infrastructure, and highly unlikely to have robust data resiliency(offsite mirroring et al).

    So like many others, Anandtech gets it exactly wrong wrt PLP and SSDs. The fact that manufacturers have been able to get away without providing PLP on consumer SSDs is almost criminal. The fact that review sites accept this as perfectly OK is pretty much criminal on their part.

    And what should pretty much be a rage storm for consumers is the actual cost of providing PLP on an SSD is literally a couple of $ in capacitors. Not to mention many consumer drives without PLP have enterprise drives using the exact same PCB with PLP. That we as consumers have allowed companies to have PLP as a point of differentiation is to our great detriment, esp when the actual cost of PLP is in the noise even for cheap low capacity SSDs.

    If a drive cannot survive a power loss with data integrity then it certainly shouldn't get a recommendation nor should any consumer even consider it.
  • Shiitaki - Wednesday, March 25, 2015 - link

    You do raise some very good points. I think the enterprise still needs it because they want as many ways to protect the data as they can get; after all, it's only a couple of bucks. The consumer would benefit to a greater degree, since the caps in the SSD would likely be all the protection they have. However, the consumer is their own worst enemy: a couple of bucks makes a difference for most consumers.

    I've had no issues, and until I read this article, gave no thought to pulling power on a system using an SSD! And I've done it a lot! Not a single bad block yet! And that is with 6 SSDs in various machines from 4 manufacturers and 8 product lines. Though none of them with Windows; all Linux and OS X.

    Sometimes I wonder just how widespread issues really are. On the internet it's hard to tell, since it's the angry people doing most of the posting.

    In the end, though, whether you are a company or an individual, if it isn't backed up, you really don't need it.
  • trparky - Wednesday, April 29, 2015 - link

    I do an image of my system SSD every week and my computer is always plugged into a UPS, and yes, that's my home setup. The power in my area is known to be dirty; not complete drop-outs, but if you measured the voltage output it would make most electrical engineers shake their heads and smack their foreheads.
  • zodiacsoulmate - Tuesday, March 24, 2015 - link

    Also, the Acronis 2013 is basically useless since it only runs on Windows 7....
  • ocztosh - Tuesday, March 24, 2015 - link

    Hi Zodiassoulmate, just wanted to confirm that the Vector 180 drives are shipping with Acronis 2014.
  • DanNeely - Tuesday, March 24, 2015 - link

    That's a step in the right direction, but it is still last year's product. Acronis 2015 is already out. Am I overly cynical for thinking Acronis offered the 2014 version at a discount, hoping to make it up by convincing some of the SSD buyers to upgrade to the new version after installing?
  • Samus - Tuesday, March 24, 2015 - link

    Acronis TrueImage 2015 is complete shit. Check the Acronis forums: most people (like myself, a paying annual customer since 2010) have gone back to 2014. The most recent update (October) still did not fix issues with image compatibility, GPT partition compatibility (added for 2015) and UEFI boot mapping. Aside from the lingering compatibility, reliability and stability issues, the interface is terrible. They've basically turned it into a backup product for single PCs instead of an imaging product. Even the USB bootable ISO I typically boot off a flash drive for imaging/cloning is inherently unstable and occasionally even corrupts the destination. Nobody has confirmed the "Universal Restore" works for Windows 7, yet another broken feature that worked FINE in 2014.

    Acronis lost me as a long-time customer to Miray because 2015 was SO botched, and after waiting months for them to fix it, I gave up and had to find a product that could adequately clone UEFI OSes installed on GPT partitions. I use this product almost daily to upgrade PCs to SSDs. Unfortunately Miray's boot environment is a little slower, even with the verification disabled and "fast copy" turned on, likely because it runs a different USB stack.

    I don't blame OCZ for sticking with 2014 like every other Acronis licensee has, including Crucial and Intel. 2014 is mature and stable, but it is not the modern solution, especially with Windows 10 around the corner. Acronis will forfeit this market to Miray or in-house solutions like Samsung's Clonemaster if they don't get their act together. It's just astonishing how well Acronis was doing until 2015.
