AnandTech Storage Bench - Light

The Light trace is designed to be an accurate illustration of basic usage. It's essentially a subset of the Heavy trace, with some workloads left out to reduce the writes and make it more read-intensive overall. 

AnandTech Storage Bench - Light - Specs
Reads 372,630
Writes 459,709
Total IO Operations 832,339
Total GB Read 17.97 GB
Total GB Written 23.25 GB
Average Queue Depth ~4.6
Focus Basic, light IO usage

The Light trace still has more writes than reads, although a truly light workload would be even more read-centric (think web browsing, document editing, etc.). It includes about 23GB of writes, which corresponds to roughly two to three days of average usage (i.e. 7-11GB per day). 
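
As a quick sanity check on that estimate, the arithmetic is simply the trace's total write volume divided by an assumed daily write rate. Here is a minimal sketch in Python; the 7-11GB/day range is the article's average-usage assumption, not a measurement:

```python
# Rough sanity check of the "two to three days" figure above.
# total_gb_written comes from the specs table; the 7-11 GB/day range
# is the assumed average daily write volume from the text.
total_gb_written = 23.25

for daily_gb in (7, 11):
    days = total_gb_written / daily_gb
    print(f"At {daily_gb} GB/day: {days:.1f} days of writes")
# At 7 GB/day:  3.3 days
# At 11 GB/day: 2.1 days
```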

AnandTech Storage Bench - Light - IO Breakdown
IO Size       <4KB    4KB     8KB    16KB   32KB   64KB   128KB
% of Total    6.2%    27.6%   2.4%   8.0%   6.5%   4.8%   26.4%

The IO size distribution of the Light trace is very similar to that of the Heavy trace, with slightly more 128KB IOs. About 70% of the IOs are sequential, however, which is a major difference compared to the Heavy trace.
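
For readers curious how a breakdown like the table above is produced, here is a minimal sketch that bins trace IOs by transfer size. The CSV layout, the `size_bytes` column, and the `light_trace.csv` file name are hypothetical placeholders, not the actual trace format used in our benchmark:

```python
import csv
from collections import Counter

# Bucket edges mirror the IO size table above. Each labeled size is
# assumed to cover the range [size, next size), so e.g. a 6KB transfer
# lands in the "4KB" bucket. This bucketing is an assumption.
BUCKETS = [("<4KB", 4096), ("4KB", 8192), ("8KB", 16384), ("16KB", 32768),
           ("32KB", 65536), ("64KB", 131072), ("128KB", float("inf"))]

def bucket(size_bytes):
    # Return the first bucket whose upper bound the IO size is below.
    for label, upper in BUCKETS:
        if size_bytes < upper:
            return label

counts = Counter()
with open("light_trace.csv") as f:                # hypothetical trace file
    for row in csv.DictReader(f):                 # assumed column: size_bytes
        counts[bucket(int(row["size_bytes"]))] += 1

total = sum(counts.values())
for label, _ in BUCKETS:
    print(f"{label:>6}: {100 * counts[label] / total:5.1f}%")
```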

AnandTech Storage Bench - Light - QD Breakdown
Queue Depth   1       2       3      4-5    6-10   11-20   21-32   >32
% of Total    73.4%   16.8%   2.6%   2.3%   3.1%   1.5%    0.2%    0.2%

Over 90% of the IOs occur at a queue depth of one or two, which further underscores the importance of low queue depth performance. 
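
One way to see why low queue depths matter so much is Little's law: delivered IOPS are roughly the queue depth divided by the average per-IO latency, so at QD1-2 there is no parallelism to hide latency behind. A small illustrative calculation follows; the latency figures are made-up placeholders, not results from this review:

```python
# Little's law applied to storage: IOPS ~= queue_depth / average_latency.
# At low queue depth there is little outstanding-IO parallelism to hide
# latency behind, so cutting per-IO latency raises throughput directly.
# The linear scaling only holds until the drive itself saturates.
# Latency values below are illustrative placeholders, not review data.
def iops(queue_depth: int, avg_latency_s: float) -> float:
    return queue_depth / avg_latency_s

for latency_us in (100, 50):        # e.g. a drive vs. one with half the latency
    for qd in (1, 2):
        rate = iops(qd, latency_us * 1e-6)
        print(f"{latency_us:>3} us/IO at QD{qd}: {rate:,.0f} IOPS")
```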

AnandTech Storage Bench - Light (Data Rate)

The SM951 yet again provides roughly twice the data rate of the XP941, and with a full drive the difference is even more significant. 

AnandTech Storage Bench - Light (Latency)

The same goes for average latency, where the SM951's score is about one third of the XP941's. The SM951 can without a doubt boost performance under lighter IO loads as well, although in very light workloads the bottleneck tends to be the speed of user input (think document creation, for instance).

Comments

  • Kevin G - Tuesday, February 24, 2015 - link

    "I also verified that the SM951 is bootable in tower Mac Pros (2012 and earlier)."

    Excellent. The old 2010/2012 towers continue to show that being expandable provides long term benefit. I'm glad that I picked up my tower Mac Pro when I did.

    Now to find a carrier that'll convert the 4x PCIe 3.0 link of the M.2 connector to an 8x PCIe 2.0 link for a Mac Pro. (Two M.2s to a single 16x PCIe 2.0 link.)
  • extide - Tuesday, February 24, 2015 - link

    You will need a PLX chip to do that; you can't just put two x4 devices into an x8 slot...
  • jimjamjamie - Wednesday, February 25, 2015 - link

    It's pretty hilarious how many people drink the shiny plastic trash bin kool-aid.
  • Tunnah - Tuesday, February 24, 2015 - link

    I'm not super knowledgeable on the whole thing, but isn't NVMe really only a big deal for enterprise, since it mostly benefits multi-drive setups?
  • Kristian Vättö - Tuesday, February 24, 2015 - link

    It's of course a bigger deal for enterprises because the need for performance is higher. However, NVMe isn't just a buzzword for the client space, because it reduces protocol latency, which in turn results in higher performance at the low queue depths that are common in client workloads.
  • knweiss - Sunday, March 1, 2015 - link

    Kristian, did you ever test how much influence the filesystem has? I would love to see a filesystem comparison on the various platforms with NVMe drivers (Windows, Linux, FreeBSD, etc).
  • The_Assimilator - Tuesday, February 24, 2015 - link

    Hopefully NVMe will be standard on SSDs by the time Skylake and 100-series chipsets arrive.
  • sna1970 - Tuesday, February 24, 2015 - link

    What is the point of this expensive drive when you can get the same numbers using two SSDs in RAID 0?

    And please, no one tell me about the risk of data loss... SSDs are not mechanical, and the chance of losing one SSD is the same as with two of them.
  • Kristian Vättö - Tuesday, February 24, 2015 - link

    RAID only tends to improve high-QD and large IO transfers, where the IO load can easily be distributed between two or more drives. Low QD performance at small IO sizes can actually be worse due to the additional overhead from the RAID drivers.
  • dzezik - Tuesday, February 24, 2015 - link

    Hi sna1970. You're missing Bernoulli's principle of "the maximum product of the probabilities of a system of concurrent errors". It is quite old (1782), but it is still valid. Have you ever been to school? Did you have mathematics classes?
