Mixed Random Read/Write Performance

Mixed read/write tests are also a new addition to our test suite. In real-world applications a significant portion of workloads is mixed, meaning they contain both read and write IOs. Our Storage Bench benchmarks already illustrate mixed workloads because they are based on actual real-world IO traces, but until now we haven't had a proper synthetic way to measure mixed performance.

The benchmark is divided into two tests. The first tests mixed performance with 4KB random IOs at six different read/write distributions, starting at 100% reads and adding 20% writes in each subsequent phase. Because we are dealing with a mixed workload that contains reads, the drive is first filled with 128KB sequential data to ensure valid results. Similarly, because the IO pattern is random, I've limited the LBA span to 16GB so that the results aren't affected by IO consistency. The queue depth for the 4KB random test is three.
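As a rough illustration of the test's structure, here is a minimal Python sketch. Everything here is a stand-in: the real benchmark is Iometer running against the raw drive over a 16GB LBA span at queue depth three, whereas this sketch issues synchronous IOs against a small temporary file.

```python
import os
import random
import tempfile

BLOCK = 4096                     # 4KB IOs
SPAN = 16 * 1024 * 1024          # 16MB sketch file; the real test spans 16GB of LBAs
READ_SHARES = [100, 80, 60, 40, 20, 0]  # six distributions, 20% more writes each phase

def run_phase(fd, read_pct, ios=256, seed=0):
    """Issue `ios` 4KB IOs at random 4KB-aligned offsets; read_pct% are reads."""
    rng = random.Random(seed)
    buf = b"\x00" * BLOCK
    reads = writes = 0
    for _ in range(ios):
        offset = rng.randrange(SPAN // BLOCK) * BLOCK  # random aligned offset
        if rng.randrange(100) < read_pct:
            os.pread(fd, BLOCK, offset)
            reads += 1
        else:
            os.pwrite(fd, buf, offset)
            writes += 1
    return reads, writes

# Pre-fill the target first, mirroring the sequential fill before the real test
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\xff" * SPAN)
    path = f.name

fd = os.open(path, os.O_RDWR)
for pct in READ_SHARES:
    print(pct, run_phase(fd, pct))
os.close(fd)
os.unlink(path)
```

Note that this sketch is effectively queue depth one; Iometer keeps three IOs in flight, which a simple synchronous loop can't model.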

Again, for the sake of readability, I provide both an average-based bar graph and a line graph with the full data. The bar graph shows the average of the data rates across all six read/write distributions for quick comparison, whereas the line graph includes a separate data point for each tested distribution.
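In other words, the bar-graph value is just the arithmetic mean of the six per-distribution data rates. The numbers below are made up purely to illustrate the calculation, not measured results:

```python
# Hypothetical MB/s figures, for illustration only (not measured results)
throughput = {100: 900.0, 80: 300.0, 60: 120.0, 40: 80.0, 20: 60.0, 0: 40.0}

# Bar-graph value: plain average across the six read/write distributions
average = sum(throughput.values()) / len(throughput)
print(average)  # 250.0
```

Because it is an unweighted mean, a drive with a pronounced bathtub curve (fast only at 100% reads and 100% writes) can still post a middling average.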

Iometer - Mixed 4KB Random Read/Write

Quite surprisingly, the SM951, and Samsung drives in general, don't do very well with mixed data.

Samsung SM951 512GB

The reason is that the performance of Samsung drives plummets as the share of writes increases. At an 80/20 read/write distribution the Samsung drives still manage pretty well, but after that performance declines to about 40MB/s. What's odd is that performance is also poor at 100% writes, where with other drives we usually see a spike. I'm guessing there's some garbage collection going on that causes the performance degradation.

Mixed Sequential Read/Write Performance

The mixed sequential workloads are also tested with a full drive, but I haven't limited the LBA range, as that isn't necessary with sequential data patterns. The queue depth for these tests is one.
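The sequential variant can be sketched the same way, under the same caveats (Python stand-in for Iometer, a tiny file instead of the full drive). The key differences from the random test: offsets advance strictly in order, and each IO completes before the next is issued, i.e. queue depth one.

```python
import os
import random
import tempfile

BLOCK = 128 * 1024          # 128KB sequential IOs
FILE_SIZE = 32 * BLOCK      # 4MB sketch file; the real test walks the whole drive

def sequential_mixed(fd, read_pct, ios=32, seed=1):
    """One synchronous IO at a time (QD1), offsets strictly increasing."""
    rng = random.Random(seed)
    buf = b"\x00" * BLOCK
    offset = 0
    reads = writes = 0
    for _ in range(ios):
        if rng.randrange(100) < read_pct:
            os.pread(fd, BLOCK, offset)
            reads += 1
        else:
            os.pwrite(fd, buf, offset)
            writes += 1
        offset = (offset + BLOCK) % FILE_SIZE  # advance sequentially, wrap at end
    return reads, writes

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * FILE_SIZE)
    path = f.name

fd = os.open(path, os.O_RDWR)
for pct in (100, 80, 60, 40, 20, 0):
    print(pct, sequential_mixed(fd, pct))
os.close(fd)
os.unlink(path)
```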

Iometer - Mixed 128KB Sequential Read/Write

With 128KB sequential data, however, the SM951 is the king of the hill. There's a clear difference between PCIe- and SATA-based drives, although it's worth noting that the difference stems mostly from the PCIe drives' much higher throughput at 100% reads and 100% writes (i.e. the infamous bathtub curve).

Samsung SM951 512GB
128 Comments

  • DanNeely - Tuesday, February 24, 2015 - link

    "In any case, I strongly recommend having a decent amount of airflow inside the case. My system only has two case fans (one front and one rear) and I run it with the side panels off for faster accessibility, so mine isn't an ideal setup for maximum airflow."

    With the space between a pair of PCIe x16 slots appearing to have become the most popular spot to put M.2 slots I worry that thermal throttling might end up being worse for a lot of end user systems than on your testbench because it'll be getting broiled by GPUs. OTOH even with a GPU looming overhead, it should be possible to slap an aftermarket heatsink on using thermal tape. My parts box has a few I think would work that I've salvaged from random hardware (single wide GPUs???) over the years; if you've got anything similar lying around I'd be curious if it'd be able to fix the throttling problem.
  • Kristian Vättö - Tuesday, February 24, 2015 - link

    I have a couple Plextor M6e Black Edition drives, which are basically M.2 adapters with an M.2 SSD and a quite massive heatsink. I currently have my hands full because of upcoming NDAs, but I can certainly try to test the SM951 with a heatsink and the case fully assembled before it starts to ship.
  • DanNeely - Tuesday, February 24, 2015 - link

    Ok, I'd definitely be interested in seeing an update when you've got the time. Thanks.
  • Railgun - Tuesday, February 24, 2015 - link

    While I can see it's a case of something is better than nothing, given the mounting options of an M.2 drive, a couple of chips will not get any direct cooling benefit. In fact, they're sitting in a space where virtually zero airflow will be happening.

    The Plextor solution, and any like it, is all well and good, but those that utilize a native M.2 port on any given mobo are kind of out of luck. As it turns out, I also have a GPU blocking just above mine for any decent sized passive cooling; 8cm at best. Maybe that's enough, but the two chips on the other side have the potential to simply cook.
  • DanNeely - Tuesday, February 24, 2015 - link

    Depends if it's the flash chips or the ram/controller that're overheating. I think the latter two are on top and heat sinkable.
  • jhoff80 - Tuesday, February 24, 2015 - link

    It'd be even worse too for many of the mini-ITX boards that are putting the M.2 slot underneath the board.

    I mean, something like M.2 is ideal for these smaller cases where cabling can become an issue, so having the slot on the bottom of the board combined with a drive needing airflow sounds like grounds for a disaster.
  • extide - Tuesday, February 24, 2015 - link

    Yeah I bet it's the controller that is being throttled, because IT is overheating, not the actual NAND chips.
  • ZeDestructor - Tuesday, February 24, 2015 - link

    I second this motion. Preferably as a separate article so I don't miss it (I only get to AT via RSS nowadays)
  • rpg1966 - Tuesday, February 24, 2015 - link

    Maybe a dumb question, but: the 512GB drive has 4 NAND packages (two on the front, two on the back), therefore each package stores 128GB. If the NAND dies are 64Gbit (8GB), that means there are 16 dies in each package - is that right?
  • Kristian Vättö - Tuesday, February 24, 2015 - link

    That is correct. Samsung has been using 16-die packages for quite some time now in various products.
