Random Read Performance

One of the major changes in our 2015 test suite is the synthetic Iometer tests we run. In the past we used to test just one or two queue depths, but real world workloads always contain a mix of different queue depths, as shown by our Storage Bench traces. To capture the full scope of performance, I'm now testing various queue depths, starting from one and going all the way up to 32. I'm not testing every single queue depth, but rather how the throughput scales with the queue depth. I'm using exponential scaling, meaning that the tested queue depths increase in powers of two (i.e. 1, 2, 4, 8...).
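
To make the sweep concrete, here is a minimal sketch of that queue depth loop, using fio as a stand-in for our Iometer configuration; the device path, job parameters and output parsing are illustrative assumptions rather than the actual test scripts.

    # Hypothetical sketch of the queue depth sweep described above, using fio as a
    # stand-in for the Iometer configuration. Device path, job parameters and
    # output parsing are illustrative assumptions, not the actual test scripts.
    # The drive is assumed to have been filled with data before the read sweep.
    import json
    import subprocess

    DEVICE = "/dev/nvme0n1"              # assumed test device
    QUEUE_DEPTHS = [1, 2, 4, 8, 16, 32]  # powers of two, as described above

    def run_random_read(qd):
        """Run a three-minute 4KB random read pass at the given queue depth, return IOPS."""
        cmd = [
            "fio", "--name=qd_sweep", "--filename=" + DEVICE,
            "--rw=randread", "--bs=4k", "--direct=1", "--ioengine=libaio",
            "--iodepth=" + str(qd), "--runtime=180", "--time_based",
            "--output-format=json",
        ]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return json.loads(out.stdout)["jobs"][0]["read"]["iops"]

    if __name__ == "__main__":
        for qd in QUEUE_DEPTHS:
            print("QD%d: %.0f IOPS" % (qd, run_random_read(qd)))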

Read tests are conducted on a full drive because that is the only way to ensure that the results are valid (testing with an empty drive can substantially inflate the results, and in the real world the data you read is always actual data rather than zeros). Each queue depth is tested for three minutes and there is no idle time between the tests.

I'm also reporting two metrics now. For the bar graph, I've taken the average of QD1, QD2 and QD4 data rates, which are the most relevant queue depths for client workloads. This allows for easy and quick comparison between drives. In addition to the bar graph, I'm including a line graph, which shows the performance scaling across all queue depths. To keep the line graphs readable, each drive has its own graph, which can be selected from the drop-down menu.
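
The bar-graph metric itself is simple arithmetic; a minimal sketch, assuming a dictionary of per-QD data rates in MB/s (the numbers below are hypothetical, not measured results):

    # Minimal sketch of the bar-graph metric: the mean of the QD1, QD2 and QD4
    # data rates. The example numbers are hypothetical, not measured results.
    def low_qd_average(data_rates_mbps):
        """Average the low queue depth results most relevant to client workloads."""
        return sum(data_rates_mbps[qd] for qd in (1, 2, 4)) / 3.0

    print(low_qd_average({1: 40.0, 2: 78.0, 4: 150.0, 8: 280.0}))  # -> 89.33...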

I'm also plotting power for SATA drives and will be doing the same for PCIe drives as soon as I have the system set up properly. Our datalogging multimeter logs power consumption every second, so I report the average for every queue depth to see how the power scales with the queue depth and performance.
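
As a rough illustration, here is one way the per-second samples could be reduced to a single average per queue depth; the log format, column name and fixed three-minute windows are assumptions rather than the actual logger output.

    # Hedged sketch of the power reporting: the multimeter logs one sample per
    # second, so each three-minute queue depth window is averaged into one value.
    # The CSV layout, column name and fixed 180-second windows are assumptions.
    import csv
    from statistics import mean

    def average_power_per_qd(log_path, qd_order=(1, 2, 4, 8, 16, 32), seconds_per_qd=180):
        """Split a per-second power log into consecutive QD windows and average each."""
        with open(log_path, newline="") as f:
            samples = [float(row["watts"]) for row in csv.DictReader(f)]
        averages = {}
        for i, qd in enumerate(qd_order):
            window = samples[i * seconds_per_qd:(i + 1) * seconds_per_qd]
            if window:                    # tolerate a truncated log
                averages[qd] = mean(window)
        return averages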

Iometer - 4KB Random Read

Despite having NVMe, the SSD 750 doesn't bring any improvements to low queue depth random read performance. Theoretically NVMe should be able to improve low QD random read performance because it adds less overhead than the AHCI software stack, but ultimately it's NAND performance that's the bottleneck, although 3D NAND should improve that a bit.

[Line graph: Intel SSD 750 1.2TB (PCIe 3.0 x4 - NVMe) - 4KB random read performance scaling across queue depths]

The performance does scale nicely, though, and at a queue depth of 32 the SSD 750 is able to hit over 200K IOPS. It's capable of delivering even more than that because unlike AHCI, NVMe can support more than 32 commands in the queue, but since client workloads rarely go above QD32, I see no point in testing higher queue depths just for the sake of high numbers.


Random Write Performance

Write performance is tested in the same way as read performance, except that the drive is in a secure erased state and the LBA span is limited to 16GB. We already test performance consistency separately, so a secure erased drive and limited LBA span ensures that the results here represent peak performance rather than sustained performance.
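
Relative to the read sweep sketched earlier, only two things change; a minimal sketch of the write job follows, where expressing the 16GB span through fio's size parameter is an assumption about how to reproduce the setup rather than the original Iometer configuration.

    # Sketch of the write-side differences only: the drive starts secure erased and
    # the random writes are confined to a 16GB span. Expressing the span limit via
    # fio's --size flag is an assumption about how to reproduce the setup.
    def random_write_cmd(device, qd):
        """Build a 4KB random write job at the given queue depth over a 16GB span."""
        return [
            "fio", "--name=qd_write", "--filename=" + device,
            "--rw=randwrite", "--bs=4k", "--direct=1", "--ioengine=libaio",
            "--iodepth=" + str(qd), "--size=16g",   # limit the LBA span to 16GB
            "--runtime=180", "--time_based", "--output-format=json",
        ]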

Iometer - 4KB Random Write

In random write performance the SSD 750 dominates the other drives. It seems Intel's random IO optimization really shows up here because the SM951 doesn't even come close. Obviously the lower latency of NVMe helps tremendously and since the SSD 750 features full power loss protection it can also cache more data in DRAM without the risk of data loss, which yields substantial performance gains. 

[Line graph: Intel SSD 750 1.2TB (PCIe 3.0 x4 - NVMe) - 4KB random write performance scaling across queue depths]

The SSD 750 also scales very efficiently and doesn't stop scaling until a queue depth of 8. Note how big the difference is at queue depths of 1 and 2: for any random write-centric workload the SSD 750 is an absolute killer.

Comments

  • Kristian Vättö - Thursday, April 2, 2015 - link

    That's up to the motherboard manufacturers. If they provide BIOS with NVMe support then yes, but I wouldn't get my hopes up as the motherboard OEMs don't usually do updates for old boards.
  • vailr - Thursday, April 2, 2015 - link

    If Z97 board bioses from Asus, Gigabyte, etc. are going to be upgradeable to support Broadwell for all desktop (socket 1150) motherboards, wouldn't they also want to include NVMe support? I'm assuming such support is at least within the realm of possibility, for both Z87 and Z97 boards.
  • TheRealPD - Thursday, April 2, 2015 - link

    Has anyone worked out exactly what the limitation is/why the bios needs upgrading yet?

    Simply that I had the idea that the P3700 had its own nvme orom, nominally akin to a raid card... ...& that people have had issues with the updated mobo bioses replacing intel's one with a generic one...

    ...which kind of suggests that the bios update could conceivably not be a requirement for some nvme drives.
  • vailr - Friday, April 3, 2015 - link

    A motherboard bios update would be required to provide bootability. Without that update, an NVMe drive could only function as a secondary storage drive. As stated elsewhere, each device model needs specific support added to the motherboard bios. Samsung's SM941 (an M.2 SSD form factor device) is a prime example of this conundrum, and why it's not generally available as a retail device. Although it can be found for sale at Newegg or on eBay.
  • TheRealPD - Friday, April 3, 2015 - link

    Ummmm... Well, for example, looking at http://www.thessdreview.com/Forums/ssd-discussion/... then the P3700 could be used as a boot drive on a Z87 board in July 2014 - so clearly that wasn't using a mobo bios with an added nvme orom as ami hadn't even released their generic nvme orom that's being added to the Z97 boards.

    (& from recollection, on Z97 boards, in Windows the P3700 is detected as an intel nvme device without the bios update... ...& an ami nvme one with the update)

    This appears to be effectively the same as, say, an lsi sas raid card loading its own orom during the boot process & the drives on it becoming bootable - as obviously, as new raid cards with new feature sets are introduced, you don't have to have updates for every mobo bios.

    Now, whilst I can clearly appreciate that *if* an nvme drive didn't have its own orom then there would be issues, it really doesn't seem to be the case with drives that do... ...so is there some other issue with the nvme feature set or...?

    Now, obviously this review is about another intel nvme pcie ssd - so it might be reasonable to imagine that it could well also have its own orom - but, more generally, I'm questioning the assumption that just because it's an nvme drive you can *only* fully utilise it with a board with an updated bios...

    ...& that if it's the case that some nvme ssds will & some won't have their own orom (& it doesn't affect the feature set), it would be a handy thing to see talked about in the reviews as it means that people with older machines are neither put off buying nor buy an inappropriate ssd when more consumer orientated ones are released.
  • TheRealPD - Saturday, April 4, 2015 - link

    I think I've kind of found the answer via a few different sources - it's not that nvme drives necessarily won't work properly with booting & whatnot on older boards... it's that there's no stated consistency as to what will & won't work...

    So apparently they can simply not work on some boards d.t. a bios conflict & there can separately be address space issues... So the ami nvme orom & uefi bios updates are about compatibility - *not* that an nvme ssd with its own orom will or won't necessarily work without them on any particular setup.

    it would be very useful if there was some extra info about this though...

    - well, it's conceivable that at least part of the problem is akin to the issues on much older boards with the free bios capacity for oroms & multiple raid configurations... ...where if you attempted to both enable all of the onboard controllers for raid (as this alters the bios behaviour to load them) &/or had too many additional controllers then one or more of them simply wouldn't operate d.t. the bios limitation; whereas they'd all work both individually & with smaller no's enabled/installed... ...so people with older machines who haven't seen this issue previously simply because they've never used cards with their own oroms or the ssd is the extra thing where they're hitting the limit, are now seeing what some of us experienced years ago.

    - or, similarly, that there's a min uefi version that's needed - I know that intel's recommending 2.3.1 or later for compatibility but clearly they were working on some boards prior to that...
  • pesho00 - Thursday, April 2, 2015 - link

    Why did they omit M.2? I really think this is a mistake, missing the whole mobile market, while the SM951 will penetrate both!
  • Kristian Vättö - Thursday, April 2, 2015 - link

    Because M.2 would melt with that beast of a controller.
  • metayoshi - Thursday, April 2, 2015 - link

    The idle power spec of this drive is 4W, while the SM951 is at 50mW, with L1.2 power consumption at 2mW. Your notebook's battery life will suffer greatly with a drive this power hungry.
  • jwilliams4200 - Thursday, April 2, 2015 - link

    Even though you could not run the performance tests with additional overprovisioning on the 750, you should still show the comparison SSDs with additional overprovisioning.

    The fair comparison is NOT the Intel 750 with no OP versus other SSDs with no OP. The comparison you should be showing is similar capacity vs. similar capacity. So, for example, a 512GB Samsung 850 Pro with OP to leave it with 400GB usable, versus an Intel 750 with 400GB usable.

    I also think it would be good testing policy to test ALL SSDs twice, once with no OP, and once with 50% overprovisioning, running them through all the tests with 0% and 50% OP. The point is not that 50% OP is typical, but rather that it will reveal the best and worst case performance that the SSD is capable of. The reason I say 50% rather than 20% or 25% is that the optimal OP varies from SSD to SSD, especially among models that already come with significant OP. So, to be sure that you OP enough that you reach optimal performance, and to provide historical comparison tests, it is best just to arbitrarily choose 50% OP since that should be more than enough to achieve optimal sustained performance on any SSD.
