AnandTech Storage Bench - Light

Our Light storage test has relatively more sequential accesses and lower queue depths than The Destroyer or the Heavy test, and it's by far the shortest test overall. It's based largely on applications that aren't highly dependent on storage performance, so this is more a test of application launch times and file load times. The test can be seen as the sum of all the little delays in daily usage, but with the idle times trimmed to 25ms it takes less than half an hour to run. Details of the Light test can be found here. As with the ATSB Heavy test, this test is run both with the drive freshly erased and empty, and after filling the drive with sequential writes.
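The idle-time trimming mentioned above is what keeps a full day's worth of little delays down to a half-hour run. As a rough illustration only (not AnandTech's actual replay tool), a trace replayer can cap every idle gap between recorded I/Os at 25ms; the event format below is a hypothetical simplification:

```python
# Hypothetical sketch of capping idle gaps in an I/O trace, the way the
# ATSB Light test trims idle times to 25 ms before replay.
def trim_idle(events, cap_s=0.025):
    """events: list of (timestamp_s, op) tuples sorted by time.
    Returns the events rescheduled so that no idle gap exceeds cap_s."""
    trimmed = []
    t = 0.0
    prev_ts = None
    for ts, op in events:
        if prev_ts is not None:
            t += min(ts - prev_ts, cap_s)  # cap the idle gap at 25 ms
        prev_ts = ts
        trimmed.append((t, op))
    return trimmed

# A 10-second think-time gap between the first two reads collapses to 25 ms,
# while the 1 ms gap before the write is replayed unchanged.
events = [(0.0, "read"), (10.0, "read"), (10.001, "write")]
compressed = trim_idle(events)
```

The I/O ordering and the short gaps that actually stress the drive are preserved; only the long stretches of host think time are compressed.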

ATSB - Light (Data Rate)

Both capacities of the Samsung PM981 offer great average data rates on the Light test. Their performance when full or empty is improved over the Samsung 960 EVO and comes close to the 960 PRO.

ATSB - Light (Average Latency)
ATSB - Light (99th Percentile Latency)

The average and 99th percentile latency scores of the PM981s aren't much of an improvement over Samsung's last generation, but they still set a new record for flash-based SSDs, even though the PM981 is using TLC NAND.

ATSB - Light (Average Read Latency)
ATSB - Light (Average Write Latency)

The average write latency of the PM981s is great whether the test is run on a full or empty drive, but the average read latency is slightly worse than the 960 PRO when the test is run on a full drive.

ATSB - Light (99th Percentile Read Latency)
ATSB - Light (99th Percentile Write Latency)

The 99th percentile read latency of the PM981s is record-setting when the Light test is run on an empty drive, but only the 1TB sets a record when the test is run on a full drive. The 99th percentile write latency is excellent on both drives in either test scenario.
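The distinction the charts above draw between average and 99th-percentile latency is worth making concrete: the tail metric exposes slow outliers that an average can hide. A small sketch with made-up latency samples (not data from this review):

```python
import math

# Illustrative only: per-I/O completion latencies in microseconds,
# mostly fast with a small population of slow outliers.
def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples."""
    s = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(s)))
    return s[rank - 1]

latencies_us = [80] * 985 + [2000] * 15  # 1.5% of I/Os are slow

avg = sum(latencies_us) / len(latencies_us)  # barely moved by outliers
p99 = percentile(latencies_us, 99)           # lands squarely on the outliers
```

Here the average stays near the fast-path latency while the 99th percentile jumps to the outlier value, which is why the ATSB results report both.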


53 Comments


  • mapesdhs - Thursday, November 30, 2017 - link

    And Drazick, what do you mean by 2.5" drives? If you're referring to SATA, well then no, it's already at its limit of 550MB/sec, and producing something akin to SATA4 would be pointless when it's also hobbled by the old AHCI protocol.

    Also, "don't like" is an emotional response; what's your evidence and argument that they're a bad product somehow? Have you used them?
    Reply
  • WithoutWeakness - Thursday, November 30, 2017 - link

    By 2.5" drives I'm sure he means the same form factor as standard SATA 2.5" SSDs, except using a newer, faster connection just like the U.2 connectors that Dan mentioned. We've definitely hit the limit of what SATA 3 can deliver, and it would be nice to have a new standard that can leverage PCIe NVMe SSDs in a form factor that allows us to use cables to put drives elsewhere in a case for better layouts and airflow. U.2 was supposed to be that connector, but there are basically no drives that support the standard and very few boards with more than 1 U.2 port. There are a few adapters on the market that allow you to install an M.2 drive into a 2.5" enclosure with a U.2 connector on it, but until motherboards have more than 1 U.2 port it won't be a real replacement for the ubiquity of SATA. Reply
  • msabercr - Friday, December 01, 2017 - link

    Actually there are M.2 to U.2 adapters readily available from most MB vendors, and 7mm U.2 datacenter drives are starting to become a thing. See Intel SSD DC P4501. I wouldn't be surprised if AIC disappears before too long. Limiting the power draw would be the major hurdle in creating such drives, but it's not impossible. EDSFF is going to pave the way for many high-density compact form factors for NVMe moving forward. Reply
  • sleeplessclassics - Thursday, November 30, 2017 - link

    One more thing that I think will be different when these drives are launched as retail devices is the driver support for the Phoenix controller. While it is always difficult to pinpoint the exact bottlenecks in such bleeding-edge technology, I think a driver that is better optimized for the Phoenix controller will definitely produce better results (ceteris paribus).

    Also, there have been rumors of QLC NAND. If those are true, that could be the differentiator between the EVO and PRO series.
    Reply
  • romrunning - Thursday, November 30, 2017 - link

    Yes - QLC... more latency, lower endurance, slower writes - what's not to like? :-S Reply
  • Spunjji - Thursday, November 30, 2017 - link

    Lower price..? Higher densities and increased production? That's what it's all about.

    If 3D QLC performs like 2D TLC then it'll do just fine for mass storage.
    Reply
  • mapesdhs - Thursday, November 30, 2017 - link

    Good point, given the way most products seem to be able to tolerate far more writes than they're officially rated for, in which case it's likely most users will want something newer long before a QLC product's endurance has been reached. If one is doing something that will drain the endurance a lot faster, then one should be using something more suitable anyway. Reply
  • romrunning - Thursday, November 30, 2017 - link

    Sure, but QLC is just like TLC - once you force it on enough people and you say it's "good enough", then the higher-performing but costlier flash (like SLC/MLC) is slowly removed from the product portfolio. I'm not in favor of these race-to-the-bottom "advances", which reduce the price a bit for the consumer but more for the mfg. You may get a slight bump in capacity, but for me, the performance/endurance trade-off with a slight reduction in price isn't worth it.

    Now, I suppose it doesn't matter anymore to me since I'll still be buying the 960 Pro until the Optane 900p reaches better pricing. But the slippery slope you encounter is that new product "advances" are usually judged against the "current" state of tech. If the current standard is QLC, then the new "improvement" might only be raising it to levels that SLC/MLC were at previously. So the possibility is that it may not be that much of an improvement.
    Reply
  • bcronce - Thursday, November 30, 2017 - link

    For read heavy mass storage drives, slower writes is fine. SSDs are getting fast enough that the IO or CPU is the bottleneck. Higher read latency for small queues will hurt performance, but not by a whole lot.

    The endurance is only an issue if you re-write your data a lot, like a paging file or a game drive that sees a lot of updates. A relatively static mass-media drive will probably be just fine.
    Reply
  • sleeplessclassics - Thursday, November 30, 2017 - link

    Latency can (to some extent) be handled with a bigger DRAM buffer. Also, controllers are the key here, not the NAND type. Today, even TLC can perform better than the MLC/SLC of just 2-3 generations ago, thanks to better controllers.

    A couple of years ago, and even last year, a 500GB SSD was around $80. If the prices were sane, 64-layer 3D TLC would be below $50 for sure.
    And 96-layer QLC can give real competition to the HDDs.

    As for lower endurance, that can be handled by slightly higher over-provisioning and slower writes... well, they would be okay for 95% of mainstream users.
    Enthusiasts have Optane and Z-NAND.
    Reply
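The endurance trade-off the commenters are debating comes down to simple arithmetic: host writes a drive can absorb scale with capacity and the NAND's program/erase cycle budget, and shrink with write amplification, which extra over-provisioning helps reduce. A back-of-the-envelope sketch with purely illustrative numbers (not the specs of any real drive):

```python
# Illustrative endurance arithmetic; all figures are assumptions,
# not specifications of the PM981 or any shipping QLC drive.
def rated_tbw(capacity_gb, pe_cycles, write_amplification):
    """Approximate total host writes (in TB) before the NAND's
    P/E cycle budget is exhausted."""
    return capacity_gb * pe_cycles / write_amplification / 1000

# Hypothetical 1 TB QLC drive rated for ~1000 P/E cycles.
# More over-provisioning gives the controller spare blocks to work with,
# lowering write amplification and stretching the same cycle budget.
tbw_tight = rated_tbw(1000, 1000, 3.0)  # modest spare area
tbw_roomy = rated_tbw(1000, 1000, 1.5)  # generous over-provisioning
```

Under these assumed numbers, halving the write amplification doubles the usable endurance, which is why over-provisioning is a plausible lever for making lower-endurance QLC acceptable in mass-storage roles.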
