Random Read Performance

One of the major changes in our 2015 test suite is the synthetic Iometer tests we run. In the past we tested just one or two queue depths, but real world workloads always contain a mix of queue depths, as shown by our Storage Bench traces. To capture the full scope of performance, I'm now testing various queue depths, starting from one and going all the way up to 32. I'm not testing every single queue depth, but rather how throughput scales with queue depth. I'm using exponential scaling, meaning that the tested queue depths increase in powers of two (i.e. 1, 2, 4, 8 and so on).

Read tests are conducted on a full drive because that is the only way to ensure that the results are valid (testing an empty drive can substantially inflate the results, and in the real world the data you read is always valid data rather than zeros). Each queue depth is tested for three minutes and there is no idle time between the tests.
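
To make the test procedure concrete, here's a minimal sketch of the sweep logic in Python. This is not our actual Iometer setup; run_random_read is a hypothetical stand-in for a single Iometer pass.

    TEST_SECONDS = 180                         # three minutes per queue depth
    QUEUE_DEPTHS = [1 << n for n in range(6)]  # 1, 2, 4, 8, 16, 32

    def run_random_read(queue_depth, seconds):
        """Stand-in for one 4KB random read pass over the full, pre-filled
        drive; should return the average data rate in MB/s."""
        return 0.0  # replace with the actual benchmark call

    results = {}
    for qd in QUEUE_DEPTHS:
        # No idle time between passes: each queue depth starts
        # immediately after the previous one finishes.
        results[qd] = run_random_read(qd, TEST_SECONDS)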

I'm also reporting two metrics now. For the bar graph, I've taken the average of QD1, QD2 and QD4 data rates, which are the most relevant queue depths for client workloads. This allows for easy and quick comparison between drives. In addition to the bar graph, I'm including a line graph, which shows the performance scaling across all queue depths. To keep the line graphs readable, each drive has its own graph, which can be selected from the drop-down menu.

I'm also plotting power for SATA drives and will do the same for PCIe drives as soon as I have the system set up properly. Our datalogging multimeter logs power consumption every second, so I report the average for every queue depth to see how power scales with queue depth and performance.
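
Both reported figures reduce to simple averages over the logged samples. Here's a sketch in Python, where results is keyed by queue depth as in the sketch above and power_log (a hypothetical name) maps each queue depth to its once-per-second power samples in watts:

    def bar_graph_score(results):
        """Average of the QD1, QD2 and QD4 data rates, the queue depths
        most relevant to client workloads; this is the bar graph value."""
        return sum(results[qd] for qd in (1, 2, 4)) / 3

    def average_power(power_log):
        """Mean power per queue depth, plotted against performance to show
        how power scales with queue depth."""
        return {qd: sum(samples) / len(samples)
                for qd, samples in power_log.items()}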

Iometer - 4KB Random Read

Despite having NVMe, the SSD 750 doesn't bring any improvement to low queue depth random read performance. In theory NVMe should help here because its software stack adds less overhead than AHCI's, but ultimately NAND performance is the bottleneck, although 3D NAND will improve that a bit.

Intel SSD 750 1.2TB (PCIe 3.0 x4 - NVMe)

The performance does scale nicely, though, and at a queue depth of 32 the SSD 750 is able to hit over 200K IOPS. It's capable of delivering even more than that because, unlike AHCI, NVMe can support more than 32 commands in the queue, but since client workloads rarely go above QD32, I see no point in testing higher queue depths just for the sake of high numbers.
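
For context, 200K IOPS at a 4KB transfer size works out to roughly 800 MB/s. A quick back-of-the-envelope check:

    # 200K IOPS at 4KB (4096-byte) transfers, expressed in MB/s
    iops = 200_000
    transfer_bytes = 4096
    print(iops * transfer_bytes / 1e6)  # 819.2 MB/s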


Random Write Performance

Write performance is tested in the same way as read performance, except that the drive is in a secure erased state and the LBA span is limited to 16GB. We already test performance consistency separately, so a secure erased drive and a limited LBA span ensure that the results here represent peak performance rather than sustained performance.
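
As a sketch of what the limited span means in practice (again a hypothetical harness, not our actual Iometer setup): every random write lands inside the same 16GB window, so the drive stays in its fresh, peak-performance state for the duration of the test.

    import random

    SPAN_BYTES = 16 * 1024**3  # writes are confined to a 16GB LBA span
    BLOCK = 4096               # 4KB transfers

    def random_write_offset():
        """Pick a 4KB-aligned offset inside the 16GB test window
        (the drive itself is secure erased before the run)."""
        return random.randrange(0, SPAN_BYTES, BLOCK)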

Iometer - 4KB Random Write

In random write performance the SSD 750 dominates the other drives. Intel's random IO optimization really shows here, as the SM951 doesn't even come close. Obviously the lower latency of NVMe helps tremendously, and since the SSD 750 features full power loss protection, it can also cache more data in DRAM without the risk of data loss, which yields substantial performance gains.

Intel SSD 750 1.2TB (PCIe 3.0 x4 - NVMe)

The SSD 750 also scales very efficiently and doesn't stop scaling until a queue depth of 8. Note how big the difference is at queue depths of 1 and 2: for any random write centric workload the SSD 750 is an absolute killer.

Comments

  • oddbjorn - Tuesday, April 14, 2015

    I just received my 750 yesterday and soon found myself slightly bummed out by the lack of NVMe BIOS support in my ASUS P8Z77-V motherboard. I managed to get the drive working (albeit non-bootable) by placing it in the black PCIe 2.0 slot of the mainboard, but this is hardly a long term solution. I posted a question to the https://pcdiy.asus.com/ website regarding possible future support for these motherboards, and this morning they had published a poll to gauge interest in BIOS/UEFI support for NVMe drives. Please vote here if you (like me) would like to see this implemented! https://pcdiy.asus.com/2015/04/asus-nvme-support-p...
  • Elchi - Wednesday, April 15, 2015

    If you are a happy owner of an older ASUS MB (Z77, X79, Z87), please vote for NVMe support!

    http://pcdiy.asus.com/2015/04/asus-nvme-support-po...
  • iliketoprogrammeoo99 - Monday, April 20, 2015

    Hey, this drive is now on preorder at Amazon!

    http://amzn.to/1DDKwoI

    Only $449 on Amazon.
  • vventurelli74 - Monday, May 4, 2015

    Let's say I had an Intel 5520 chipset based computer that has multiple PCIe 2.0 slots. I would be able to get almost the maximum read performance (since PCIe 2.0 is 500MB/s per lane, x4 = 2000MB/s), which is exciting on an older computer. I am curious whether this would be a bootable solution on my desktop, though. With 12 cores and 24 threads, this computer is far from under-powered, and it would be nice to breathe life into this machine, but the BIOS has no NVMe support that I can think of. I know it has Intel SSD support, but this is from a different era. I wish someone could confirm whether this will or will not be bootable on non-NVMe mobos. I am getting conflicting answers.
  • vventurelli74 - Monday, May 4, 2015

    Nevermind, I finally found the requirements: this drive will not be bootable on non-NVMe machines. What's more, even using it as a 'secondary' drive apparently requires UEFI. My computer wouldn't be able to use this card at all? That would suck.
  • xyvyx2 - Friday, May 8, 2015

    Great review!

    Kristian, any chance you have two of these drives in the same machine and could test RAID0 performance? I'm running into some slow read performance when using two Samsung PCIe drives in a Dell server w/ a RAID1 or RAID0 config. It's not like regular bottlenecking where you hit a performance cap; instead, the transfer rate drops down to ~1/5th the speed.

    I thought this was just a Storage Spaces problem, but the same holds true w/ regular Windows software RAID. I got up to about 4,200 MB/sec, then it tanked. I then ran two simultaneous ATTO tests on two of the drives and they both behaved normally & peaked at 2,700 MB/sec... so I don't think I'm hitting a PCIe bus limitation... I think it's all software.

    I posted more detail on Technet here:
    https://social.technet.microsoft.com/Forums/en-US/...
  • shadowfang - Saturday, September 26, 2015

    How does the PCIe card perform on a system without NVMe?
