Mixed Random Read/Write Performance

Mixed read/write tests are also a new addition to our test suite. In real-world applications a significant portion of workloads is mixed, meaning it contains both read and write IOs. Our Storage Bench benchmarks already capture mixed workloads because they are based on actual real-world IO traces, but until now we haven't had a proper synthetic way to measure mixed performance.

The benchmark is divided into two tests. The first one measures mixed performance with 4KB random IOs at six different read/write distributions, starting at 100% reads and shifting the mix toward writes by 20% in each phase. Because we are dealing with a mixed workload that contains reads, the drive is first filled with 128KB sequential data to ensure valid results. Similarly, because the IO pattern is random, I've limited the LBA span to 16GB to ensure that the results aren't affected by IO consistency. The queue depth for the 4KB random test is three.
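
For readers who want to reproduce something similar on their own hardware, below is a minimal sketch of how the six phases could be scripted with fio on Linux. This is not the Iometer configuration used in the review; the device path, runtimes and preconditioning pass are assumptions, and running it will destroy the data on the target drive.

```python
# Sketch only: approximates the mixed 4KB random test with fio (assumed to be installed).
# Not the review's Iometer setup; device path and runtime are illustrative assumptions.
import subprocess

DEVICE = "/dev/nvme0n1"                  # hypothetical target drive (data will be destroyed)
READ_SHARES = [100, 80, 60, 40, 20, 0]   # six distributions, 20% more writes each phase

# Precondition: fill the drive once with 128KB sequential writes so reads hit valid data.
subprocess.run([
    "fio", "--name=precondition", f"--filename={DEVICE}",
    "--rw=write", "--bs=128k", "--iodepth=32",
    "--direct=1", "--ioengine=libaio",
], check=True)

# Run each read/write distribution over a 16GB span at queue depth 3.
for read_pct in READ_SHARES:
    subprocess.run([
        "fio", f"--name=mixed_{read_pct}r", f"--filename={DEVICE}",
        "--rw=randrw", f"--rwmixread={read_pct}",
        "--bs=4k", "--iodepth=3", "--size=16g",
        "--direct=1", "--ioengine=libaio",
        "--time_based", "--runtime=60",
    ], check=True)
```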

Again, for the sake of readability, I provide both an average-based bar graph and a line graph with the full data. The bar graph represents the average data rate across all six read/write distributions for quick comparison, whereas the line graph includes a separate data point for each tested distribution.
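
To make the relationship between the two graphs concrete, here is a small sketch of how the bar-graph figure is derived from the line-graph data; the throughput numbers are placeholders, not measured results.

```python
# Sketch: the bar graph is simply the average of the six per-distribution results.
# The throughput values below are placeholders, not measured data from the review.
read_shares = [100, 80, 60, 40, 20, 0]                    # % reads in each phase
throughput_mbps = [300.0, 150.0, 80.0, 50.0, 40.0, 40.0]  # hypothetical MB/s per phase

bar_value = sum(throughput_mbps) / len(throughput_mbps)
print(f"Bar graph (average): {bar_value:.1f} MB/s")

for share, mbps in zip(read_shares, throughput_mbps):
    print(f"{share}% reads / {100 - share}% writes: {mbps:.1f} MB/s")  # line-graph points
```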

Iometer - Mixed 4KB Random Read/Write

Quite surprisingly, the SM951, and Samsung drives in general, don't do very well with mixed data.

Samsung SM951 512GB (line graph: data rate at each read/write distribution)

The reason lies in the fact that the performance of Samsung drives plummets as the share of writes increases. At an 80/20 read/write distribution the Samsung drives still manage pretty well, but after that performance declines to about 40MB/s. What's odd is that performance is also poor at 100% writes, whereas with other drives we usually see a spike there. I'm guessing there's some garbage collection going on that causes the degradation.

Mixed Sequential Read/Write Performance

The mixed sequential workload is also tested with a full drive, but I haven't limited the LBA range since that isn't necessary with sequential access patterns. The queue depth for these tests is one.
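
As above, a hedged sketch of how the sequential variant could be approximated with fio; the full LBA span is used and the queue depth is one, with the same assumed device path as before.

```python
# Sketch: mixed 128KB sequential test over the full drive at queue depth 1.
# Same assumptions as the random sketch above (fio on Linux, hypothetical device path).
import subprocess

DEVICE = "/dev/nvme0n1"
READ_SHARES = [100, 80, 60, 40, 20, 0]

for read_pct in READ_SHARES:
    subprocess.run([
        "fio", f"--name=seq_mixed_{read_pct}r", f"--filename={DEVICE}",
        "--rw=rw", f"--rwmixread={read_pct}",   # "rw" = sequential mixed reads and writes
        "--bs=128k", "--iodepth=1",             # 128KB transfers at queue depth 1
        "--direct=1", "--ioengine=libaio",
        "--time_based", "--runtime=60",
    ], check=True)
```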

Iometer - Mixed 128KB Sequential Read/Write

With 128KB sequential data, however, the SM951 is king of the hill. There's a clear difference between PCIe and SATA-based drives, although it's worth noting that the difference mostly comes from the PCIe drives having much higher throughput at 100% reads and 100% writes (i.e. the infamous bathtub curve).

Samsung SM951 512GB (line graph: data rate at each read/write distribution)
Comments

  • Kevin G - Tuesday, February 24, 2015 - link

    "I also verified that the SM951 is bootable in tower Mac Pros (2012 and earlier)."

    Excellent. The old 2010/2012 towers continue to show that being expandable provides long-term benefit. I'm glad that I picked up my tower Mac Pro when I did.

    Now to find a carrier that'll convert the 4x PCIe 3.0 link of the M.2 connector to an 8x PCIe 2.0 link for a Mac Pro. (Or two M.2s to a single 16x PCIe 2.0 link.)
  • extide - Tuesday, February 24, 2015 - link

    You will need a PLX chip to do that; you can't just put two x4 devices into an x8 slot...
  • jimjamjamie - Wednesday, February 25, 2015 - link

    It's pretty hilarious how many people drink the shiny plastic trash bin kool-aid.
  • Tunnah - Tuesday, February 24, 2015 - link

    I'm not super knowledgeable on the whole thing, but isn't NVMe really only a big deal for enterprise, as it's more of a benefit for multi-drive setups?
  • Kristian Vättö - Tuesday, February 24, 2015 - link

    It's of course a bigger deal for enterprise because the need for performance is higher. However, NVMe isn't just a buzzword for the client space, because it reduces protocol latency, which in turn results in higher performance at the low queue depths that are common in client workloads.
  • knweiss - Sunday, March 1, 2015 - link

    Kristian, did you ever test how much influence the filesystem has? I would love to see a filesystem comparison on the various platforms with NVMe drivers (Windows, Linux, FreeBSD, etc).
  • The_Assimilator - Tuesday, February 24, 2015 - link

    Hopefully NVMe will be standard on SSDs by the time Skylake and 100-series chipsets arrive.
  • sna1970 - Tuesday, February 24, 2015 - link

    What is the point of this expensive drive when you can get the same numbers using two SSDs in RAID 0?

    And please, no one tell me about the risk of data loss... SSDs are not mechanical, and the chance of losing one SSD is the same as with two of them.
  • Kristian Vättö - Tuesday, February 24, 2015 - link

    RAID only tends to help with high-QD and large-IO transfers, where the IO load can easily be distributed between two or more drives. Low-QD performance at small IO sizes can actually be worse due to the additional overhead from the RAID drivers.
  • dzezik - Tuesday, February 24, 2015 - link

    Hi sna1970. You're missing Bernoulli's principle of "the maximum product of the probabilities of a system of concurrent errors". It's quite old (1782), but it is still valid. Have you ever been to school? Did you have mathematics classes?
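
For readers following the RAID 0 exchange above, a minimal back-of-the-envelope sketch of the failure-probability arithmetic; p is an assumed, illustrative per-drive failure probability, not a measured figure.

```python
# Sketch: chance of data loss for one SSD vs. two SSDs striped in RAID 0.
# p is an assumed per-drive failure probability over some period, purely illustrative.
p = 0.02

single_drive_loss = p
raid0_loss = 1 - (1 - p) ** 2   # the array loses data if either of the two drives fails

print(f"Single drive:        {single_drive_loss:.4f}")
print(f"Two drives (RAID 0): {raid0_loss:.4f}  (roughly 2x for small p)")
```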
