Sequential Read Performance

The sequential read test requests 128kB blocks and tests queue depths ranging from 1 to 32. The queue depth is doubled every three minutes, for a total test duration of 18 minutes. The test spans the entire drive, and the drive is filled before the test begins. The primary score we report is an average of the performance at queue depths 1, 2 and 4, as client usage typically consists mostly of low queue depth operations.
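
As a point of reference, the queue depth schedule described above can be sketched as follows (an illustration only, not the actual Iometer configuration):

```python
# Sketch of the queue depth schedule described above (illustrative, not the Iometer config).
STEP_MINUTES = 3
queue_depths = [1 << i for i in range(6)]                 # 1, 2, 4, 8, 16, 32
schedule = [(qd, STEP_MINUTES) for qd in queue_depths]    # six 3-minute phases
print(schedule)
print(sum(minutes for _, minutes in schedule), "minutes total")  # 18
```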

Iometer - 128KB Sequential Read

Even when limited to PCIe 2.0 x2, the 600p manages slightly higher sequential read speeds than SATA drives can deliver, but given more PCIe bandwidth it doesn't catch up to the more expensive NVMe drives.

Iometer - 128KB Sequential Read (Power)

The 600p actually manages to surpass the power efficiency of several SATA SSDs, but it can't compete with the other NVMe drives that deliver twice the data rate.

The 600p starts at just under 400MB/s and hits its read speed limit of around 1150MB/s at QD4. The other PCIe SSDs perform at least that well at QD1 and scale up from there.

Sequential Write Performance

The sequential write test writes 128kB blocks and tests queue depths ranging from 1 to 32. The queue depth is doubled every three minutes, for a total test duration of 18 minutes. The test spans the entire drive, and the drive is filled before the test begins. The primary score we report is an average of the performance at queue depths 1, 2 and 4, as client usage typically consists mostly of low queue depth operations.
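
To make the scoring concrete, here is a minimal sketch of how the reported figure is derived; the per-queue-depth numbers are made up for illustration, not our measurements:

```python
# Hypothetical per-queue-depth averages in MB/s (made-up values, not measured data).
seq_write_mbps = {1: 450, 2: 480, 4: 500, 8: 500, 16: 500, 32: 500}

# The reported score averages the low queue depths that dominate client workloads.
primary_score = sum(seq_write_mbps[qd] for qd in (1, 2, 4)) / 3
print(f"Primary sequential write score: {primary_score:.0f} MB/s")
```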

Iometer - 128KB Sequential Write

It is a surprise to see the Intel 600p performing better in the motherboard's M.2 slot than in the PCIe 3.0 adapter, but in both cases the sustained write speeds are so slow that the interface is not a limitation.

Iometer - 128KB Sequential Write (Power)

The power consumption of the 600p when it's in the PCIe 3.0 adapter is high enough that temperature may be a factor in this test, and the 600p may have performed better in the motherboard's M.2 slot simply due to better positioning and orientation in the case.

It is a familiar pattern for PCIe SSDs: the highest write speeds come at the very beginning of the test, followed by a completely flat line as thermal limits kick in. We're just used to seeing that flat portion near the top of the graph instead of at the bottom.

Sequential Write (second-by-second): PCIe 3.0 x4 adapter vs. motherboard M.2 PCIe 2.0 x2

A comparison of second-by-second performance during the sequential write test shows that the 600p reaches a steady state with the same kind of inconsistency we saw for random writes. In the PCIe 3.0 adapter, performance is reduced across the board and the worst drops come much closer to zero.
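
For readers who want to run a similar consistency check on their own hardware, here is a minimal sketch that summarizes a per-second throughput log (the file name and one-value-per-line format are assumptions for illustration, not the tooling used in this review):

```python
# Summarize a per-second throughput trace: mean, approximate 1st percentile, and minimum (MB/s).
def summarize(path: str) -> None:
    with open(path) as f:
        samples = sorted(float(line) for line in f if line.strip())
    n = len(samples)
    mean = sum(samples) / n
    p1 = samples[max(0, int(n * 0.01) - 1)]  # roughly the worst 1% of seconds
    print(f"mean {mean:.0f} MB/s, p1 {p1:.0f} MB/s, min {samples[0]:.0f} MB/s")

summarize("seq_write_per_second.log")  # hypothetical log file name
```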

Comments

  • ddriver - Tuesday, November 22, 2016 - link

    A fool can dream James, a fool can dream...

    He also wants to live in a really big house made of cards and bathe in dry water, so his hair doesn't get wet :D
  • Kevin G - Wednesday, November 23, 2016 - link

    Conceptually a PCIe bridge/NVMe RAID controller could implement additional PCIe lanes on the drive side for RAID5/6 purposes. For example, 16 lanes to the bridge and six 4 lane slots on the other end. There is still the niche in the server space where reliability is king and having removable and redundant media is important. Granted, this niche is likely served better by U.2 for hot swap bays than M.2 but they'd use the same conceptual bridge/RAID chip proposed here.
  • vFunct - Wednesday, November 23, 2016 - link

    > However WHY would you want to do that when you could just go get an Intel P3520 2TB drive or for higher speed a P3700 2TB drive.

    Those are geared towards database applications (and great for it, as I use them), not media stores.

    Media stores are far more cost sensitive.
  • jjj - Tuesday, November 22, 2016 - link

    And this is why SSD makers should be forced to list QD1 performance numbers; it's getting ridiculous.
  • powerarmour - Tuesday, November 22, 2016 - link

    I hate TLC.
  • Notmyusualid - Tuesday, November 22, 2016 - link

    I'll second that.
  • ddriver - Tuesday, November 22, 2016 - link

    Then you will love QLC
  • BrokenCrayons - Wednesday, November 23, 2016 - link

    I'm not a huge fan either, but I was also reluctant to buy into MLC over much more durable SLC despite the cost and capacity implications. At this point, I'd like to see some of these newer, much more durable solid state memory technologies that are lurking in labs find their way into the wider world. Until then, TLC is cheap and "good enough" for relatively disposable consumer electronics, though I do keep a backup of my family photos and the books I've written...well, several backups since I'd hate to lose those things.
  • bug77 - Tuesday, November 22, 2016 - link

    The only thing that comes to mind is: why, intel, why?
  • milli - Tuesday, November 22, 2016 - link

    Did you test the MX300 with the original firmware or the new firmware?
