Random Read Performance

One of the major changes in our 2015 test suite is the set of synthetic Iometer tests we run. In the past we tested just one or two queue depths, but real-world workloads always contain a mix of queue depths, as shown by our Storage Bench traces. To capture the full scope of performance, I'm now testing various queue depths, starting from one and going all the way up to 32. I'm not testing every single queue depth, but merely how the throughput scales with the queue depth. I'm using exponential scaling, meaning that the tested queue depths increase in powers of two (i.e. 1, 2, 4, 8...).

Read tests are conducted on a full drive because that is the only way to ensure that the results are valid (testing an empty drive can substantially inflate the results, and in reality the data you are reading is actual valid data rather than zeros). Each queue depth is tested for three minutes and there is no idle time between the tests.
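
To make the methodology concrete, the sketch below lays out the structure of the read sweep in Python. It is only an illustration of the test plan described above (hypothetical field names, not the actual Iometer configuration).

```python
# Minimal sketch of the queue depth sweep described above. This only builds
# the test plan; it is not the actual Iometer configuration.

TESTED_QUEUE_DEPTHS = [2 ** i for i in range(6)]  # 1, 2, 4, 8, 16, 32
DURATION_SECONDS = 3 * 60                         # three minutes per queue depth
IDLE_SECONDS = 0                                  # no idle time between runs

read_test_plan = [
    {
        "workload": "4KB random read",
        "span": "full drive",           # reads run against a 100% full drive
        "queue_depth": qd,
        "duration_s": DURATION_SECONDS,
        "idle_after_s": IDLE_SECONDS,
    }
    for qd in TESTED_QUEUE_DEPTHS
]

for step in read_test_plan:
    print(step)
```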

I'm also reporting two metrics now. For the bar graph, I've taken the average of QD1, QD2 and QD4 data rates, which are the most relevant queue depths for client workloads. This allows for easy and quick comparison between drives. In addition to the bar graph, I'm including a line graph, which shows the performance scaling across all queue depths. To keep the line graphs readable, each drive has its own graph, which can be selected from the drop-down menu.
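
As a quick illustration of how the bar-graph figure is derived, the snippet below averages the QD1, QD2 and QD4 data rates (the throughput numbers are made up):

```python
# The bar-graph value is the mean of the QD1, QD2 and QD4 data rates.
# The throughput figures below are invented, purely for illustration.
throughput_mbps = {1: 120.0, 2: 210.0, 4: 350.0, 8: 480.0, 16: 520.0, 32: 530.0}

client_qds = (1, 2, 4)  # the queue depths most relevant to client workloads
bar_graph_value = sum(throughput_mbps[qd] for qd in client_qds) / len(client_qds)
print(f"Bar graph (QD1-QD4 average): {bar_graph_value:.1f} MB/s")  # 226.7 MB/s
```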

I'm also plotting power for SATA drives and will do the same for PCIe drives as soon as I have that system set up properly. Our data-logging multimeter logs power consumption every second, so I report the average for each queue depth to see how power scales with queue depth and performance.
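
In practice that just means grouping the one-second power samples by the queue depth that was active and averaging each group, roughly as in the sketch below (the wattage samples are invented):

```python
# Rough sketch: the multimeter logs one power sample per second, and the
# reported figure is the average over each queue depth's run.
# The (queue_depth, watts) samples below are invented for illustration.
from statistics import mean

samples = [
    (1, 1.02), (1, 1.05), (1, 0.98),
    (2, 1.21), (2, 1.19), (2, 1.24),
    (4, 1.55), (4, 1.58), (4, 1.52),
]

per_qd_watts = {}
for qd, watts in samples:
    per_qd_watts.setdefault(qd, []).append(watts)

for qd in sorted(per_qd_watts):
    print(f"QD{qd}: {mean(per_qd_watts[qd]):.2f} W average")
```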

Iometer - 4KB Random Read

Random read performance at small queue depths has never been an area where the Vector 180 excels. Given that small-queue-depth random reads are among the most common client IOs, it's an area where I would like to see improvement from OCZ.

Iometer - 4KB Random Read (Power)

Power consumption, on the other hand, is excellent, which is partially explained by the lower performance. 

4KB Random Read - Performance vs. Queue Depth (drive selectable from the drop-down menu; shown: Samsung SM951 512GB)

A closer look at the performance data across all queue depths reveals the reason for the Vector 180's poor random read performance. For some reason, performance only starts to scale properly after a queue depth of 4, and even then the scaling isn't as aggressive as on some other drives.

Random Write Performance

Write performance is tested in the same way as read performance, except that the drive is in a secure erased state and the LBA span is limited to 16GB. We already test performance consistency separately, so a secure erased drive and a limited LBA span ensure that the results here represent peak performance rather than sustained performance.
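
For context, limiting the LBA span to 16GB just means the writes are confined to the first 16GB worth of sectors; with 512-byte sectors that works out as in the back-of-the-envelope sketch below (treating 16GB as 16GiB for simplicity).

```python
# Back-of-the-envelope sketch of what a 16GB LBA span means with 512-byte
# sectors (16GB is treated as 16GiB here for simplicity).
SECTOR_SIZE_BYTES = 512
SPAN_BYTES = 16 * 1024 ** 3  # 16 GiB

span_in_sectors = SPAN_BYTES // SECTOR_SIZE_BYTES
print(f"Writes are confined to LBAs 0 through {span_in_sectors - 1:,}")
print(f"({span_in_sectors:,} sectors)")
```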

Iometer - 4KB Random Write

In random write performance the Vector 180 does considerably better, although it's still not the fastest drive around. 

Iometer - 4KB Random Write (Power)

Even though the random write performance doesn't scale at all with capacity, the power consumption does. Still, the Vector 180 is quite power efficient compared to other drives.

4KB Random Write - Performance vs. Queue Depth (drive selectable from the drop-down menu; shown: Samsung SM951 512GB)

The Vector 180 scales smoothly across all queue depths, but it could scale a bit more aggressively, as the QD4 score in particular is a bit low. On the positive side, the Vector 180 does very well at QD1.

89 Comments


  • nathanddrews - Tuesday, March 24, 2015 - link

    This exactly. LOL
  • Samus - Wednesday, March 25, 2015 - link

    Isn't it a crime to put Samsung and support in the same sentence? That company's Achilles' heel is a complete lack of support. Look at all the people with Galaxy S3s and smart TVs who were left out to dry the moment the next-gen models came out. And on the polar opposite end of the spectrum is Apple, which still supports the nearly 4-year-old iPhone 4S. I'm no Apple fan, but that is commendable and something all companies should pay attention to. Customer support pays off.
  • Oxford Guy - Wednesday, March 25, 2015 - link

    Apple did a shit job with the white Core Duo iMacs, which all developed bad pixel lines. We had fourteen in a lab and all of them developed the problem. Apple also dropped the ball on people with the 8600 GT and similar Nvidia GPUs in their MacBook Pros by refusing to replace the defective GPUs with anything other than new defective GPUs. Both, as far as I know, led to class-action lawsuits.
  • Oxford Guy - Wednesday, March 25, 2015 - link

    I forgot to mention that not only did Apple not actually fix the problem with those bad GPUs, customers also had to jump through a bunch of hoops, like bringing their machines to an Apple Store so someone there could decide whether or not they qualified for a replacement defective GPU.
  • matt.vanmater - Tuesday, March 24, 2015 - link

    I am curious, does the drive return a write IO as complete as soon as it is stored in the DRAM?

    If so, this drive could be fantastic to use as a ZFS ZIL.

    Think of it this way: you partition it so the size does not exceed the DRAM size (e.g. 512MB), and use that partition as ZIL. The small partition size guarantees that any writes to the drive fit in DRAM, and the PFM guarantees there is no loss. This is similar in concept to short-stroking hard drives with a spinning platter.

    For those of you that don't know, ZFS performance is significantly enhanced by the existence of a ZIL device with very low latency (and the DRAM on board this drive should fit that bill). A fast ZIL is particularly important for people who use NFS as a datastore for VMware, because VMware forces NFS write IOs to be synchronous even if your ZFS config doesn't require sync. This device may or may not perform as well as a DDRdrive (ddrdrive.com), but it comes in at about 1/20th the price, so it is a very promising idea!

    ocztosh -- has your team considered the use of this device as a ZFS array ZIL device like I describe above?
  • Kristian Vättö - Tuesday, March 24, 2015 - link

    PFM+ is limited to protecting the NAND mapping table, so any user data will still be lost in case of a sudden power loss. Hence the Vector 180 isn't really suitable for the scenario you described.
  • matt.vanmater - Wednesday, March 25, 2015 - link

    OK, good to know. To be honest though, what matters more in this scenario (for me) is whether the device returns a write IO as successful as soon as it is stored in DRAM, or waits until it is stored in flash.

    As nils_ mentions below, a UPS is another way of partially mitigating a power failure. In my case, the battery backup is a nice to have rather than a must have.
  • matt.vanmater - Tuesday, March 24, 2015 - link

    One minor addition... OCZ was clearly thinking about ZFS ZIL devices when they announced prototype devices called "Aeon" about 2 years ago. They even blogged about this use case:
    http://eblog.ocz.com/ssd-powered-clouds-times-chan...

    Unfortunately OCZ never brought these drives to market (I wish they had!), so we're stuck waiting for a consumer DRAM device that isn't 10+ year old technology or $2k+ in price.
  • nils_ - Wednesday, March 25, 2015 - link

    Something like the PMC Flashtec devices? Those are boards with 4-16GiB of DRAM backed by the same amount of flash and capacitors, with an NVMe interface. If the system loses power, the DRAM is flushed to flash and restored when the power comes back on. This is great for things like the ZIL, journals, the doublewrite buffer (as in MySQL/MariaDB), Ceph journals, etc.

    And before it comes up, a UPS can fail too (I've seen it happen more often than I'd like to count).
  • matt.vanmater - Wednesday, March 25, 2015 - link

    I saw those PMC Flashtec devices as well and they look promising, but I don't see any for sale yet. Hopefully they don't become vaporware like OCZ Aeon drives.

    Also, I prefer a SATA III or SAS interface over PCI-e, because (in theory) a SATA/SAS device will work in almost any motherboard on any operating system without special drivers, whereas PCI-e devices need special device drivers for each OS. Obviously, waiting for drivers to be created limits which systems a device can be used in.

    True, PCI-e will definitely have greater throughput than SATA/SAS, but the ZFS ZIL use case needs very low latency and not necessarily high throughput. I haven't seen any data indicating that PCI-e is any better or worse than SATA/SAS on IO latency.
