The OCZ Vector 180 (240GB, 480GB & 960GB) SSD Review
by Kristian Vättö on March 24, 2015 2:00 PM EST - Posted in
- Storage
- SSDs
- OCZ
- Barefoot 3
- Vector 180
AnandTech Storage Bench - The Destroyer
The Destroyer has been an essential part of our SSD test suite for nearly two years now. It was crafted to provide a benchmark for very IO-intensive workloads, which is where you most often notice the difference between drives. It's not necessarily the most relevant test for the average user, but for anyone with a heavier IO workload The Destroyer should do a good job of characterizing performance.
AnandTech Storage Bench - The Destroyer
Workload | Description | Applications Used
Photo Sync/Editing | Import images, edit, export | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox
Gaming | Download/install games, play games | Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite
Virtualization | Run/manage VM, use general apps inside VM | VirtualBox
General Productivity | Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware
Video Playback | Copy and watch movies | Windows 8
Application Development | Compile projects, check out code, download code samples | Visual Studio 2012
The table above describes the workloads of The Destroyer in a bit more detail. Most of the workloads are run independently in the trace, but obviously there are various operations (such as backups) in the background.
AnandTech Storage Bench - The Destroyer - Specs
Reads | 38.83 million
Writes | 10.98 million
Total IO Operations | 49.8 million
Total GB Read | 1583.02 GB
Total GB Written | 875.62 GB
Average Queue Depth | ~5.5
Focus | Worst case multitasking, IO consistency
The name Destroyer comes from the sheer size of the trace: it contains nearly 50 million IO operations. That's enough to effectively put the drive into steady-state and give an idea of performance in worst-case multitasking scenarios. About 67% of the IOs are sequential in nature, with the rest ranging from pseudo-random to fully random.
AnandTech Storage Bench - The Destroyer - IO Breakdown
IO Size | <4KB | 4KB | 8KB | 16KB | 32KB | 64KB | 128KB
% of Total | 6.0% | 26.2% | 3.1% | 2.4% | 1.7% | 38.4% | 18.0%
I've included a breakdown of the IOs in the table above, which accounts for 95.8% of the total IOs in the trace. The remainder consists of relatively rare in-between sizes that don't have a significant (>1%) share on their own. Over half of the transfers are large IOs (64KB and 128KB), with about a quarter being 4KB in size.
AnandTech Storage Bench - The Destroyer - QD Breakdown
Queue Depth | 1 | 2 | 3 | 4-5 | 6-10 | 11-20 | 21-32 | >32
% of Total | 50.0% | 21.9% | 4.1% | 5.7% | 8.8% | 6.0% | 2.1% | 1.4%
Despite the average queue depth of 5.5, half of the IOs happen at a queue depth of one, and scenarios where the queue depth is higher than 10 are rather infrequent.
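For context on how breakdowns like the two tables above can be derived, here is a minimal sketch that bins per-IO trace records by transfer size and queue depth. The record format (size in bytes, queue depth at issue) is an assumption for illustration, not the actual trace format used by the bench.

```python
from collections import Counter

def size_bucket(size_bytes):
    # Bucket an IO into the size classes used in the table above;
    # anything in between lands in "other" (the ~4% not shown).
    kb = size_bytes / 1024
    if kb < 4:
        return "<4KB"
    if kb in (4, 8, 16, 32, 64, 128):
        return f"{int(kb)}KB"
    return "other"

def qd_bucket(qd):
    # Bucket the queue depth at issue time into the ranges used above.
    if qd <= 3:
        return str(qd)
    if qd <= 5:
        return "4-5"
    if qd <= 10:
        return "6-10"
    if qd <= 20:
        return "11-20"
    if qd <= 32:
        return "21-32"
    return ">32"

def breakdown(records):
    # records: list of (size_in_bytes, queue_depth) tuples.
    n = len(records)
    size_counts = Counter(size_bucket(s) for s, _ in records)
    qd_counts = Counter(qd_bucket(q) for _, q in records)
    size_pct = {k: 100 * v / n for k, v in size_counts.items()}
    qd_pct = {k: 100 * v / n for k, v in qd_counts.items()}
    return size_pct, qd_pct
```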
The two key metrics I'm reporting haven't changed: I'll continue to report both data rate and latency because the two have slightly different focuses. Data rate measures the speed of the data transfer, so it emphasizes large IOs, which simply account for a much larger share of the total amount of data. Latency, on the other hand, ignores the IO size, so all IOs are given the same weight in the calculation. Both metrics are useful, although in terms of system responsiveness I think latency is the more critical one. As a result, I'm also reporting two new stats that provide good insight into high-latency IOs: the share of >10ms and >100ms IOs as a percentage of the total.
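To make the difference in weighting concrete, below is a simple sketch of how these four metrics could be computed from a list of completed IOs. The per-IO record format is again an assumption for illustration, not the bench's own code.

```python
def summarize(ios, wall_clock_seconds):
    # ios: list of (size_in_bytes, latency_in_seconds) per completed IO.
    total_ios = len(ios)
    total_bytes = sum(size for size, _ in ios)

    # Data rate weights IOs by size, so large transfers dominate the result.
    data_rate_mbps = total_bytes / wall_clock_seconds / 1e6

    # Average latency gives every IO the same weight regardless of size.
    avg_latency_ms = sum(lat for _, lat in ios) / total_ios * 1000

    # Share of high-latency IOs -- the responsiveness-oriented stats.
    pct_over_10ms = 100 * sum(1 for _, lat in ios if lat > 0.010) / total_ios
    pct_over_100ms = 100 * sum(1 for _, lat in ios if lat > 0.100) / total_ios

    return data_rate_mbps, avg_latency_ms, pct_over_10ms, pct_over_100ms
```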
I'm also reporting the total power consumed during the trace, which gives good insight into the drive's power consumption under different workloads. It's better than average power consumption in the sense that it also takes performance into account: a faster completion time results in fewer watt-hours consumed. Since the idle times of the trace have been truncated for faster playback, the number doesn't fully capture the impact of idle power consumption, but the metric is nevertheless valuable when it comes to active power consumption.
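The power figure follows the same idea: energy is power integrated over the playback time, so a drive that finishes the trace sooner accumulates fewer watt-hours even at a similar average draw. A rough sketch, assuming evenly spaced samples from the power meter rather than the logger's actual output format:

```python
def energy_watt_hours(power_samples_watts, sample_interval_seconds):
    # Sum of (power x interval) approximates the integral of power over time.
    total_joules = sum(power_samples_watts) * sample_interval_seconds
    return total_joules / 3600.0  # 1 Wh = 3600 J
```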
For a high-end drive, the Vector 180 delivers only an average data rate in our heaviest 'The Destroyer' trace. At 480GB and 960GB it's able to keep up with the Extreme Pro, but the 240GB model doesn't fare as well against the competition.
The same story continues when looking at average latency, although I have to say that the differences between drives are quite marginal. What's notable is how consistent the Vector 180 is regardless of the capacity.
On the positive side, the Vector 180 has very few high-latency IOs and actually leads the pack across all capacities.
The Vector 180 also appears to be very power efficient under load and manages to beat every other SSD I've run through the test so far. It's too bad there is no support for slumber power modes, because otherwise the Barefoot 3 seems to excel when it comes to power.
89 Comments
nathanddrews - Tuesday, March 24, 2015 - link
This exactly. LOL
Samus - Wednesday, March 25, 2015 - link
Isn't it a crime to put Samsung and support in the same sentence? That company's Achilles heel is a complete lack of support. Look at all the people with Galaxy S3s and smart TVs that were left out to dry the moment next-gen models came out. At the polar opposite end of the spectrum is Apple, who still supports the nearly four-year-old iPhone 4S. I'm no Apple fan, but that is commendable and something all companies should pay attention to. Customer support pays off.
Oxford Guy - Wednesday, March 25, 2015 - link
Apple did a shit job with the white Core Duo iMacs, which all develop bad pixel lines. We had fourteen in a lab and all of them developed the problem. Apple also dropped the ball on people with the 8600 GT and similar Nvidia GPUs in their MacBook Pros by refusing to replace the defective GPUs with anything other than new defective GPUs. Both, as far as I know, caused class-action lawsuits.
Oxford Guy - Wednesday, March 25, 2015 - link
I forgot to mention that not only did Apple not actually fix the problem with those bad GPUs, but customers also had to jump through a bunch of hoops, like bringing their machines to an Apple Store so someone there could decide whether or not they qualified for a replacement defective GPU.
matt.vanmater - Tuesday, March 24, 2015 - link
I am curious, does the drive return a write IO as complete as soon as it is stored in the DRAM? If so, this drive could be fantastic to use as a ZFS ZIL.
Think of it this way: you partition it so the size does not exceed the DRAM size (e.g. 512MB), and use that partition as ZIL. The small partition size guarantees that any writes to the drive fit in DRAM, and the PFM guarantees there is no loss. This is similar in concept to short-stroking hard drives with a spinning platter.
For those of you that don't know, ZFS performance is significantly enhanced by the existence of a ZIL device with very low latency (and DRAM on board this drive should fit that bill). A fast ZIL is particularly important for people who use NFS as a datastore for VMWare. This is because VMWare forces NFS to Sync write IOs, even if your ZFS config is to not require sync. This device may or may not perform as well as a DDRDRIVE (ddrdrive.com) but it comes in at about 1/20th the price so it is a very promising idea!
ocztosh -- has your team considered the use of this device as a ZFS array ZIL device like I describe above?
Kristian Vättö - Tuesday, March 24, 2015 - link
PFM+ is limited to protecting the NAND mapping table, so any user data will still be lost in case of a sudden power loss. Hence the Vector 180 isn't really suitable for the scenario you described.
matt.vanmater - Wednesday, March 25, 2015 - link
OK, good to know. To be honest though, what matters more in this scenario (for me) is whether the device returns a write IO as successful immediately when it is stored in DRAM, or if it waits until it is stored in flash.
As nils_ mentions below, a UPS is another way of partially mitigating a power failure. In my case, the battery backup is a nice-to-have rather than a must-have.
matt.vanmater - Tuesday, March 24, 2015 - link
One minor addition... OCZ was clearly thinking about ZFS ZIL devices when they announced prototype devices called "Aeon" about 2 years ago. They even blogged about this use case: http://eblog.ocz.com/ssd-powered-clouds-times-chan...
Unfortunately, OCZ never brought these drives to market (I wish they had!), so we're stuck waiting for a consumer DRAM device that isn't 10+ year old technology or $2k+ in price.
nils_ - Wednesday, March 25, 2015 - link
Something like the PMC Flashtec devices? Those are boards with 4-16GiB of DRAM backed by the same amount of flash chips and capacitors, with an NVMe interface. If the system loses power, the DRAM is flushed to flash and restored when the power is back on. This is great for things like the ZIL, journals, the doublewrite buffer (like in MySQL/MariaDB), Ceph journals, etc.
And before it comes up, a UPS can fail too (I've seen it happen more often than I'd like to count).
matt.vanmater - Wednesday, March 25, 2015 - link
I saw those PMC Flashtec devices as well and they look promising, but I don't see any for sale yet. Hopefully they don't become vaporware like the OCZ Aeon drives.
Also, in my opinion I prefer a SATA III or SAS interface over PCI-e, because (in theory) a SATA/SAS device will work in almost any motherboard on any operating system without special drivers, whereas PCI-e devices need special device drivers for each OS. Obviously, waiting for drivers to be created limits which systems a device can be used in.
True, PCI-e will definitely have greater throughput than SATA/SAS, but the ZFS ZIL use case needs very low latency and NOT necessarily high throughput. I haven't seen any data indicating that PCI-e is any better/worse on IO latency than SATA/SAS.