Conclusion

When Memblaze updated their PBlaze5 SSDs with newer 64-layer 3D TLC NAND, they could have left everything else more or less the same and the result would likely have been a new generation of drives with improved performance and power efficiency across the board. Instead, Memblaze decided to rebalance the product line a bit, using the improved performance of Micron's second-generation 3D NAND as an opportunity to rein in the rather high overprovisioning ratios that the PBlaze5 initially used. The refreshed PBlaze5 models offer more usable capacity for the same amount of raw flash memory on the drive, without sacrificing much performance. The difference really adds up for high-capacity drives: with the OP ratio used on the first generation, our 6.4TB PBlaze5 C916 would have instead had a usable capacity of only 5.3TB.
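
To put the overprovisioning change in concrete terms, here is a minimal Python sketch of how usable capacity falls out of raw flash and a reserved-space ratio. The 8192 GiB raw capacity and the two reserve percentages are our own back-of-the-envelope assumptions chosen to reproduce the 6.4TB and 5.3TB figures above, not specifications published by Memblaze.

```python
def usable_tb(raw_gib: float, reserved_fraction: float) -> float:
    """Usable capacity in decimal TB after setting aside part of the raw flash."""
    raw_tb = raw_gib * 2**30 / 1e12        # binary GiB of NAND -> decimal TB
    return raw_tb * (1 - reserved_fraction)

RAW_GIB = 8192  # assumed raw NAND for a drive in this class; illustrative only

print(round(usable_tb(RAW_GIB, 0.27), 1))  # ~6.4 TB with roughly 27% reserved (new ratio)
print(round(usable_tb(RAW_GIB, 0.40), 1))  # ~5.3 TB with roughly 40% reserved (old ratio)
```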

The PBlaze5 C916 also retains the same 3 drive writes per day (DWPD) endurance rating as the older PBlaze5 C900, which puts these drives in one of the highest endurance tiers that still uses mainstream high-capacity TLC NAND flash memory.
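
For readers who prefer the endurance rating in absolute terms, a DWPD figure converts to total rated writes with simple arithmetic. The sketch below assumes the usual five-year enterprise warranty window, which is an assumption on our part rather than something restated in this section.

```python
def lifetime_writes_pb(capacity_tb: float, dwpd: float, years: float = 5) -> float:
    """Total rated host writes (in PB) implied by a drive-writes-per-day rating."""
    return capacity_tb * dwpd * 365 * years / 1000

# Assuming a 5-year warranty period (typical for this class, but an assumption here):
print(round(lifetime_writes_pb(6.4, 3)))   # ~35 PB of rated writes for the 6.4TB C916
```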

Cutting down on spare area reserved for the drive's internal use usually has a big impact on steady-state write speeds. For the PBlaze5 this impact is reduced by the switch to faster flash memory, but the newer C916 still loses some write performance in most of our tests. Even when it does not match the performance of its predecessor, the PBlaze5 C916 clearly offers a higher class of write performance than competing drives in the 1 DWPD market segment.
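
The link between spare area and steady-state write speed comes down to write amplification. As a deliberately crude illustration: if every block reclaimed by garbage collection held the drive-wide average share of valid data, the flash would absorb roughly one divided by the spare fraction writes for every host write. Real drives with greedy garbage collection pick emptier victim blocks and do noticeably better, and the spare fractions below reuse the assumed 8192 GiB raw capacity from the earlier sketch, so treat these numbers as illustrative rather than measured.

```python
def write_amplification(raw_tb: float, usable_tb: float) -> float:
    """Crude steady-state write amplification estimate: 1 / spare fraction.

    Assumes reclaimed blocks hold the drive-wide average share of valid data;
    real garbage collection chooses emptier victims and amplifies writes less.
    """
    spare_fraction = (raw_tb - usable_tb) / raw_tb
    return 1 / spare_fraction

RAW_TB = 8192 * 2**30 / 1e12   # assumed raw flash, about 8.8 TB (illustrative)

print(round(write_amplification(RAW_TB, 6.4), 1))  # ~3.7 with the new, smaller reserve
print(round(write_amplification(RAW_TB, 5.3), 1))  # ~2.5 with the old, larger reserve
```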

The switch to newer 3D NAND flash memory allows the C916 to use much less power than the C900, which helps offset the relatively high baseline power consumption of the massive 16-channel SSD controller Memblaze uses. The Intel and Samsung drives we compared against use smaller controllers that give them an advantage in power consumption, but now that Memblaze is using similar NAND, the PBlaze5 can come out ahead in power efficiency whenever the workload is heavy enough to make use of the higher NAND channel count and wider PCIe interface.

On top of the more efficient NAND, the new PBlaze5 comes with richer power management capabilities than we have encountered in any other datacenter SSD, with a power limit that can be adjusted from the default 25W down to 10W in 1W increments. Our tests of the PBlaze5 C916 in its 10W power state brought it down to similar peak power levels as the competing drives with smaller controllers. This throttling didn't affect every test; at low queue depths and for very read-oriented workloads the C916 was already comfortably below 10W. Write speed is severely constrained by the reduced power limit, but even at 10W the C916 still generally offers better steady-state write performance than the drives in the 1 DWPD market segment.

The adjustable power limit doesn't make a lot of sense for a big add-in card SSD like the PBlaze5 C916, but it may be a useful feature for the U.2 versions. Memblaze seems to have done a good job of implementing this capability without unduly sacrificing overall performance or QoS. High-density deployments that cannot provide enough airflow to handle 15-20W per U.2 drive can still get most of the performance the PBlaze5 has to offer.

The biggest improvement the new PBlaze5 brings over its predecessors is one we are unfortunately not in a position to accurately quantify. The PBlaze5 C916 with 64L TLC costs much less than the PBlaze5 C900 with 32L NAND. The higher density NAND is cheaper to produce, prices have crashed over the past year due to excellent supply conditions across the market, and the newer PBlaze5 generation has higher usable capacities for the same raw capacity of flash. All told, this means the volume price of our 6.4TB C916 is probably significantly lower than what our 4TB C900 was going for; but of course, those prices are almost never made public for enterprise and datacenter drives that aren't sold through retailers.

There's no doubt that the newer Memblaze PBlaze5 is a much better overall value than the previous generation. It isn't better in every way, but it makes smart tradeoffs and stays in the same market segment. We'd have to test some more direct competitors with comparable endurance ratings and the same PCIe x8 interface to know whether the PBlaze5 C916 is the best high-end TLC drive currently available, but there are only a few other products out there that aspire to offer this combination of performance, capacity and endurance. The PBlaze5 C916 clearly stands above the more mainstream product segments and should be taken seriously as a competitor at the very high end of the SSD market.

13 Comments

  • Samus - Wednesday, March 13, 2019 - link

    That. Capacitor.
  • Billy Tallis - Wednesday, March 13, 2019 - link

    Yes, sometimes "power loss protection capacitor" doesn't need to be plural. 1800µF 35V Nichicon, BTW, since my photos didn't catch the label.
  • willis936 - Wednesday, March 13, 2019 - link

    That’s 3.78W for one minute if they’re running at the maximum voltage rating (which they shouldn’t and probably don’t), if anyone’s curious.
  • DominionSeraph - Wednesday, March 13, 2019 - link

    It's cute, isn't it?

    https://www.amazon.com/BOSS-Audio-CPBK2-2-Capacito...
  • takeshi7 - Wednesday, March 13, 2019 - link

    I wish companies made consumer PCIe x8 SSDs. It would be good since many motherboards can split the PCIe lanes x8/x8 and SLI is falling out of favor anyways.
  • surt - Wednesday, March 13, 2019 - link

    I bet 90% of motherboard buyers would prefer 2 x16 slots vs any other configuration so they can run 1 GPU and 1 very fast SSD. I really don't understand why the market hasn't moved in this direction.
  • MFinn3333 - Wednesday, March 13, 2019 - link

    Because SSD's have a hard time saturating 4x PCIe slots, 16x would just take up space for no real purpose.
  • Midwayman - Wednesday, March 13, 2019 - link

    Maybe, but it sucks that your GPU gets moved to 8x. 16/4 would be an easier split to live with.
  • bananaforscale - Thursday, March 14, 2019 - link

    Not really, GPUs are typically bottlenecked by local memory (VRAM), not PCIe.
  • Opencg - Wednesday, March 13, 2019 - link

    performance would not be very noticeable. and even in the few cases it would be, it would require more expensive cpus and mobos thus mitigating the attractiveness to very few consumers. and fewer consumers means even higher prices. we will get higher throughput but its much more likely with pci 4.0/5.0 than 2 16x
