Conclusion

We knew from tests last year of Western Digital's SATA drives and the Toshiba XG5 that the SanDisk/Toshiba 64-layer 3D TLC was a huge improvement over their planar NAND, and possibly was even the fastest and most power efficient TLC NAND yet. It is now clear that those drives weren't even making the best possible use of that flash. With Western Digital's new in-house controller, the BiCS 3 TLC really shines. The new WD Black and SanDisk Extreme PRO are unquestionably high-end NVMe SSDs that match the Samsung 960 EVO and sometimes even beat the 960 PRO.

There are very few disappointing results from the WD Black. Even when it isn't tied for first or second place, it performs well above the low-end NVMe drives. The two biggest problems appear to be a poor start to the sequential read test, and another round of NVMe idle power management bugs to puzzle through. Almost all NVMe drives have at least some quirks when it comes to idle power management, in stark contrast to the nearly universal and flawless support among SATA drives for at least the slumber state and usually also DevSleep (which cannot be used on desktops). The power efficiency of the WD Black under load is excellent, so it is clear that the Western Digital NVMe controller isn't inherently a power hog. Whatever incompatibility the WD Black's power management currently has with our testbed won't matter to other desktop users, and hopefully isn't representative of today's notebooks. The bigger surprises from the WD Black are when it performs much better than expected, especially during the mixed sequential I/O test where nothing comes close.

Samsung established an early lead in the NVMe SSD race and has held on to their top spot as many brands have tried and failed to introduce high-end NVMe SSDs with either planar NAND or the lackluster first-generation Intel/Micron 3D NAND. None of those SSDs was a more obvious underachiever than the original WD Black NVMe SSD from last year, which used 15nm planar TLC and could barely outperform a decent SATA drive. The first WD Black SSD didn't deserve Western Digital's high-performance branding. This new WD Black is everything last year's model should have been, and it should be able to stay relevant throughout this year even when Samsung gets around to releasing the successors to the 960 PRO and 960 EVO—which they really need to do soon.

NVMe SSD Price Comparison

| Drive | 120-128GB | 240-256GB | 400-512GB | 960-1200GB |
|-------|-----------|-----------|-----------|------------|
| WD Black (3D NAND) / SanDisk Extreme PRO | | $119.99 (48¢/GB) | $226.75 (45¢/GB) | $449.99 (45¢/GB) |
| Intel SSD 760p | $88.32 (69¢/GB) | $122.25 (48¢/GB) | $223.26 (44¢/GB) | $471.52 (46¢/GB) |
| Samsung 960 PRO | | | $327.99 (64¢/GB) | $608.70 (59¢/GB) |
| Samsung 960 EVO | | $119.99 (48¢/GB) | $199.99 (40¢/GB) | $449.99 (45¢/GB) |
| WD Black (2D NAND) | | $104.28 (41¢/GB) | $182.00 (36¢/GB) | |
| Plextor M9Pe | | $119.99 (47¢/GB) | $213.43 (42¢/GB) | $408.26 (40¢/GB) |
| MyDigitalSSD SBX | $59.99 (47¢/GB) | $99.99 (39¢/GB) | $159.99 (31¢/GB) | $339.99 (33¢/GB) |
| Toshiba OCZ RD400 | $109.99 (86¢/GB) | $114.99 (45¢/GB) | $309.99 (61¢/GB) | $466.45 (46¢/GB) |
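The ¢/GB figures in the table are simply MSRP divided by marketed capacity, rounded to the nearest whole cent. A minimal sketch of that arithmetic, assuming 250 GB, 500 GB, and 1000 GB marketed capacities for the WD Black tiers (the capacities are our assumption, not stated in the table):

```python
# Reproduce the table's cents-per-GB figures from MSRP and marketed capacity.
# The 250/500/1000 GB capacities for the WD Black tiers are assumptions here.
def cents_per_gb(price_usd, capacity_gb):
    """Price in US cents per marketed gigabyte, rounded to a whole cent."""
    return round(price_usd * 100 / capacity_gb)

wd_black_msrp = {250: 119.99, 500: 226.75, 1000: 449.99}
for capacity, price in wd_black_msrp.items():
    print(f"{capacity} GB: ${price:.2f} -> {cents_per_gb(price, capacity)}¢/GB")
```

Running this reproduces the 48¢/45¢/45¢ figures shown for the WD Black row above.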

The MSRPs for the WD Black roughly match current street prices for the Samsung 960 EVO, which is exactly what the WD Black should be competing against. Neither drive has a clear overall performance advantage at the 1TB capacity analyzed in this review, though the WD Black holds a modest power efficiency advantage (our idle power problems notwithstanding). Since its release, the Intel 760p has also climbed into this price range, where it doesn't belong.

The Plextor M9Pe is finally available for purchase after a paper launch early this year. It uses Toshiba's 64L TLC and a Marvell controller, so it closely represents what this year's WD Black would have been without Western Digital's new in-house controller. We will have performance results for the M9Pe soon.

Western Digital's long years spent developing 3D NAND and their new NVMe controller have paid off. They're once again a credible contender in the high-end space, and their latest SATA SSDs are doing pretty well, too. This year's SSD market now has serious competition in almost every price bracket.

70 Comments

  • Chaitanya - Thursday, April 5, 2018 - link

    Nice to see some good competition to Samsung products in SSD space. Would like to see durability testing on these drives.
  • HStewart - Thursday, April 5, 2018 - link

    Yes, it's nice to have competition in this area, and the important thing to notice here is that a long-time disk drive manufacturer is changing its technology to meet changes in storage technology.
  • Samus - Thursday, April 5, 2018 - link

    Looks like WD's purchase of SanDisk is showing some payoff. If only Toshiba had taken advantage of OCZ's in-house talent (OCZ purchased Indilinx). The Barefoot controller showed a lot of promise and could easily have been updated to support low power states and TLC NAND. But they shelved it. I don't really know why Toshiba bought OCZ.
  • haukionkannel - Friday, April 6, 2018 - link

    Indeed! Samsung held the performance supremacy for too long, and that let the company push up prices (a natural development, though).
    Hopefully this better situation helps us customers within a reasonable time frame. There has been too much bad news for consumers in recent years where prices are concerned.
  • XabanakFanatik - Thursday, April 5, 2018 - link

    Whatever happened to performance consistency testing?
  • Billy Tallis - Thursday, April 5, 2018 - link

    The steady state QD32 random write test doesn't say anything meaningful about how modern SSDs will behave on real client workloads. It used to be a half-decent test before everything was TLC with SLC caching and the potential for thermal throttling on M.2 NVMe drives. Now, it's impossible to run a sustained workload for an hour and claim that it tells you something about how your drive will handle a bursty real world workload. The only purpose that benchmark can serve today is to tell you how suitable a consumer drive is for (ab)use as an enterprise drive.
  • iter - Thursday, April 5, 2018 - link

    Most of the tests don't say anything meaningful about "how modern SSDs will behave on real client workloads". You can spend 400% more money on storage that will only get you 4% of performance improvement in real world tasks.

    So why not omit synthetic tests altogether while you are at it?
  • Billy Tallis - Thursday, April 5, 2018 - link

    You're alluding to the difference between storage performance and whole system/application performance. A storage benchmark doesn't necessarily give you a direct measurement of whole system or application performance, but done properly it will tell you about how the choice of an SSD will affect the portion of your workload that is storage-dependent. Much like Amdahl's law, speeding up storage doesn't affect the non-storage bottlenecks in your workload.

    That's not the problem with the steady-state random write test. The problem with the steady state random write test is that real world usage doesn't put the drive in steady state, and the steady state behavior is completely different from the behavior when writing in bursts to the SLC cache. So that benchmark isn't even applicable to the 5% or 1% of your desktop usage that is spent waiting on storage.

    On the other hand, I have tried to ensure that the synthetic benchmarks I include actually are representative of real-world client storage workloads, by focusing primarily on low queue depths and limiting the benchmark duration to realistic quantities of data transferred and giving the drive idle time instead of running everything back to back. Synthetic benchmarks don't have to be the misleading marketing tests designed to produce the biggest numbers possible.
  • MrSpadge - Thursday, April 5, 2018 - link

    Good answer, Billy. It won't please everyone here, but that's impossible anyway.
  • iter - Thursday, April 5, 2018 - link

    People do want to see how much time it takes before cache gives out. Don't presume to know what all people do with their systems.

    As I mentioned 99% of the tests are already useless when it comes to indicating overall system performance. 99% of the people don't need anything above mainstream SATA SSD. So your point on excluding that one test is rather moot.

    All in all, it seems you are intentionally hiding the weakness of certain products. Not cool. Run the tests, post the numbers; that's what you get paid for, and I don't think it is unreasonable to expect that you do your job. Two people pointed out the absence of that test, which is two more than those who explicitly stated they don't care about it, much less have anything against it. Statistically speaking, the test is of interest, and I highly doubt it will kill you to include it.
