AnandTech Storage Bench - Light

Our Light storage test has relatively more sequential accesses and lower queue depths than The Destroyer or the Heavy test, and it's by far the shortest test overall. It's based largely on applications that aren't highly dependent on storage performance, so this is a test more of application launch times and file load times. This test can be seen as the sum of all the little delays in daily usage, but with the idle times trimmed to 25ms it takes less than half an hour to run. Details of the Light test can be found here.
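To make the methodology a little more concrete, the sketch below shows roughly how a trace-driven test like this can cap the idle time between recorded operations. The `replay` function, the `(idle_seconds, op)` trace format, and the `do_io` callback are illustrative assumptions, not AnandTech's actual test harness.

```python
import time

IDLE_CAP = 0.025  # the Light test trims idle time between operations to 25 ms


def replay(trace, do_io):
    """Replay an I/O trace in order, capping the recorded think time.

    `trace` is assumed to be an iterable of (idle_seconds, op) tuples and
    `do_io` a callable that issues one operation; both are hypothetical
    stand-ins used only to illustrate the idle-time trimming.
    """
    for idle_seconds, op in trace:
        time.sleep(min(idle_seconds, IDLE_CAP))  # keep short pauses, trim long ones
        do_io(op)                                # issue the recorded read or write
```

Capping idle time this way keeps the mix and ordering of I/O intact while compressing hours of light real-world usage into a run of under half an hour.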

AnandTech Storage Bench - Light (Data Rate)

On the Light test we finally see the 600p pull ahead of SATA SSDs, albeit not when the drive is full. This test shows what the 600p can do before it gets overwhelmed by sustained writes, and it's also the first time where the PCIe 2.0 x2 connection is a significant bottleneck.
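As a rough sanity check on that claim, the back-of-the-envelope figures below use standard PCIe numbers (not drawn from the review; the ~80% packet efficiency is an assumption) to show why a PCIe 2.0 x2 link tops out around 800 MB/s of usable bandwidth:

```python
lanes = 2
transfer_rate_gt = 5.0        # PCIe 2.0: 5 GT/s per lane
encoding = 8 / 10             # 8b/10b line coding overhead
packet_efficiency = 0.80      # assumed TLP/flow-control overhead, roughly

per_lane_mb_s = transfer_rate_gt * encoding * 1000 / 8   # 500 MB/s per lane
link_mb_s = per_lane_mb_s * lanes                         # 1000 MB/s raw
usable_mb_s = link_mb_s * packet_efficiency               # ~800 MB/s in practice
print(f"~{usable_mb_s:.0f} MB/s usable over PCIe 2.0 x2")
```

By comparison, PCIe 3.0 x4 NVMe drives have roughly four times that link bandwidth, which is why the interface rather than the flash can become the limiting factor here.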

AnandTech Storage Bench - Light (Latency)

The average service times of the 600p rank about where they should: worse than the other NVMe SSDs, but better than the SATA drives can manage. When the 600p is full, its latency is significantly worse and falls slightly behind Samsung's SATA SSDs, but it is still nothing to complain about.

AnandTech Storage Bench - Light (Latency)

Aside from the usual caveat that it suffers acutely when full, the 600p meets expectations for the number of latency outliers.

AnandTech Storage Bench - Light (Power)

The 600p manages to pull ahead of the OCZ RD400 in power consumption and is close to Samsung's NVMe SSDs in efficiency, but the SATA drives are all significantly more efficient.

63 Comments

  • ddriver - Tuesday, November 22, 2016 - link

    A fool can dream James, a fool can dream...

    He also wants to live in a really big house made of cards and bathe in dry water, so his hair don't get wet :D
  • Kevin G - Wednesday, November 23, 2016 - link

    Conceptually a PCIe bridge/NVMe RAID controller could implement additional PCIe lanes on the drive side for RAID5/6 purposes. For example, 16 lanes to the bridge and six 4 lane slots on the other end. There is still the niche in the server space where reliability is king and having removable and redundant media is important. Granted, this niche is likely served better by U.2 for hot swap bays than M.2 but they'd use the same conceptual bridge/RAID chip proposed here.
  • vFunct - Wednesday, November 23, 2016 - link

    > However WHY would you want to do that when you could just go get an Intel P3520 2TB drive or for higher speed a P3700 2TB drive.

    Those are geared towards database applications (and great for it, as I use them), not media stores.

    Media stores are far more cost sensitive.
  • jjj - Tuesday, November 22, 2016 - link

    And this is why SSD makers should be forced to list QD1 perf numbers, it's getting ridiculous.
  • powerarmour - Tuesday, November 22, 2016 - link

    I hate TLC.
  • Notmyusualid - Tuesday, November 22, 2016 - link

    I'll second that.
  • ddriver - Tuesday, November 22, 2016 - link

    Then you will love QLC
  • BrokenCrayons - Wednesday, November 23, 2016 - link

    I'm not a huge fan either, but I was also reluctant to buy into MLC over much more durable SLC despite the cost and capacity implications. At this point, I'd like to see some of these newer, much more durable solid state memory technologies that are lurking in labs find their way into the wider world. Until then, TLC is cheap and "good enough" for relatively disposable consumer electronics, though I do keep a backup of my family photos and the books I've written...well, several backups since I'd hate to lose those things.
  • bug77 - Tuesday, November 22, 2016 - link

    The only thing that comes to mind is: why, intel, why?
  • milli - Tuesday, November 22, 2016 - link

    Did you test the MX300 with the original firmware or the new firmware?
