How to Enable NVMe Zoned Namespaces

Hardware changes for ZNS

At a high level, enabling ZNS on most drives already on the market requires only a firmware update. ZNS doesn't impose any new requirements on SSD controllers or other hardware components; the feature can be implemented for existing drives with firmware changes alone.

The more interesting hardware implications arise when an SSD is designed to support only zoned namespaces. First and foremost, a ZNS-only SSD doesn't need anywhere near as much overprovisioning as a traditional enterprise SSD. ZNS SSDs are still responsible for performing wear leveling, but this no longer requires a large spare area for the garbage collection process. Used properly, ZNS allows the host software to avoid almost all of the circumstances that would lead to write amplification inside the SSD. Enterprise SSDs commonly use overprovisioning ratios up to 28% (800GB usable per 1024GB of flash on typical 3 DWPD models), and ZNS SSDs can expose almost all of that capacity to the host system without compromising the ability to deliver high sustained write performance. ZNS SSDs still need some reserve capacity (for example, to cope with failures that crop up in flash memory as it wears out), but Western Digital says we can expect ZNS to allow roughly a factor of 10 reduction in overprovisioning ratios.
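As a rough back-of-the-envelope check on those numbers, the arithmetic looks like this. The Python below is purely illustrative; the ZNS figure simply applies Western Digital's projected factor-of-10 reduction rather than describing any shipping drive:

```python
# Overprovisioning (OP) math using the figures quoted above.
# The factor-of-10 ZNS reduction is Western Digital's projection, not a spec.

def overprovisioning_ratio(raw_gb: float, usable_gb: float) -> float:
    """OP as conventionally quoted: spare capacity relative to usable capacity."""
    return (raw_gb - usable_gb) / usable_gb

# Typical 3 DWPD enterprise drive: 1024GB of flash exposing 800GB to the host.
conventional_op = overprovisioning_ratio(raw_gb=1024, usable_gb=800)
print(f"Conventional enterprise SSD OP: {conventional_op:.0%}")   # 28%

# Hypothetical ZNS drive built from the same 1024GB of flash with ~10x less OP.
zns_op = conventional_op / 10                                      # ~2.8%
zns_usable = 1024 / (1 + zns_op)
print(f"ZNS SSD at {zns_op:.1%} OP could expose roughly {zns_usable:.0f}GB")
```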

Better control over write amplification also means QLC NAND is a more viable option for use cases that would otherwise require TLC NAND. Enterprise storage workloads often lead to write amplification factors of 2-5x. With ZNS, the SSD itself causes virtually no write amplification and well-behaved host software can avoid adding much of its own, so the overall effect is a boost to drive lifespan that offsets the lower endurance of QLC (and whatever comes after it) relative to TLC. Even in a ZNS SSD, QLC NAND is still fundamentally slower than TLC, but that same near-elimination of background data management within the SSD means a QLC-based ZNS SSD can probably compete with TLC-based traditional SSDs on QoS metrics even if the total throughput is lower.
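For a sense of scale, the sketch below compares total host writes before wear-out for a conventional TLC drive against a hypothetical QLC ZNS drive. The P/E cycle counts and write amplification factors here are illustrative assumptions, not vendor specifications:

```python
# Rough model of how write amplification trades against NAND endurance.
# P/E cycle counts below are illustrative assumptions, not vendor specs.

def total_host_writes_tb(raw_tb: float, pe_cycles: int, waf: float) -> float:
    """Host data (in TB) that can be written before the flash wears out."""
    return raw_tb * pe_cycles / waf

raw_tb = 1.0
conventional_tlc = total_host_writes_tb(raw_tb, pe_cycles=3000, waf=3.0)  # mid-range of the 2-5x above
zns_qlc          = total_host_writes_tb(raw_tb, pe_cycles=1500, waf=1.1)  # near-unity WAF with ZNS

print(f"Conventional TLC, WAF 3.0: ~{conventional_tlc:.0f} TB of host writes")
print(f"ZNS QLC, WAF 1.1:          ~{zns_qlc:.0f} TB of host writes")
```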


The other major hardware change enabled by ZNS is a drastic reduction in DRAM requirements. The Flash Translation Layer (FTL) in traditional block-based SSDs requires about 1GB of DRAM for every 1TB of NAND flash. This is used to store the address mapping or indirection tables that record the physical NAND flash memory address that is currently storing each Logical Block Address (LBA). The 1GB per 1TB ratio is a consequence of the FTL managing the flash with a granularity of 4kB. Right off the bat, ZNS gets rid of that requirement by letting the SSD manage whole zones that are hundreds of MB each. Tracking which physical NAND erase blocks comprise each zone now requires so little memory that it could be done with on-controller SRAM even for SSDs with tens of TB of flash. ZNS doesn't completely eliminate the need for SSDs to include DRAM, because the metadata that the drive needs to store about each zone is larger than what a traditional FTL needs to store for each LBA, and drives are likely to also use some DRAM for caching writes - more on this later.
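To put numbers on that, here is a quick sketch of the mapping-table arithmetic. The 4-byte map entry and 64-byte per-zone metadata sizes are assumptions chosen for illustration, and the 256MB zone size matches the Western Digital prototype quoted in the comments below:

```python
# Mapping-table sizes: conventional page-level FTL vs. zone-level tracking.
# Entry sizes (4 bytes per page mapping, 64 bytes per zone) are assumptions.

TB, GB, MB, KB = 1024**4, 1024**3, 1024**2, 1024

flash_capacity = 1 * TB

# Conventional FTL: one entry per 4kB logical block.
page_entries = flash_capacity // (4 * KB)
page_table_bytes = page_entries * 4
print(f"Page-level FTL: {page_entries:,} entries, ~{page_table_bytes / GB:.0f}GB of DRAM per TB")

# Zone-level tracking: one (larger) entry per 256MB zone.
zone_entries = flash_capacity // (256 * MB)
zone_table_bytes = zone_entries * 64
print(f"Zone tracking:  {zone_entries:,} zones, ~{zone_table_bytes / KB:.0f}kB - small enough for on-controller SRAM")
```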

45 Comments

  • WorBlux - Wednesday, December 22, 2021 - link

    > Kinda seems like they introduced a whole new problem, there?

    Sort of. Many of these drives are meant to be used with the APIs directly accessible to your application, which means the application now has to solve problems that the OS used to hand-wave away.

    If your application uses the filesystem API, only the filesystem has to worry about this. But if you want the application to leverage the determinism and parallelism available in ZNS drives, then it should be able to use the zone append command (which is the big advance over the ZBC API of SMR hard drives).
  • Steven Wells - Saturday, August 8, 2020 - link

    So as a DRAM cost play this might save a low single percentage point of the parts cost, which seems like not enough motivation on its own. So clearly most of the savings comes from reduced overprovisioning of the flash needed to get a similar write amplification, trading that against the extra lift required by the host. Curious if anyone has shared TCO studies on this to validate a clear cost savings for all the heavy lifting required by both the drive and the data center customer.
  • matias.bjorling - Monday, August 10, 2020 - link

    Thanks for the comprehensive write-up, Billy. It's great to see you writing about ZNS on Anandtech - I've followed the site for 20 years! I never thought that my work on creating Open-Channel SSDs and Zoned Namespaces would one day be featured on Anandtech. Thanks for making it mainstream!
  • umeshm - Monday, August 24, 2020 - link

    This is the best explanation and analysis I have found on ZNS. Thank you, Billy!

    I have a question about how 4KB LBAs are persisted when the flash page size is 16KB and when 4 pages (QLC) are stored on each wordline.

    You mentioned that a 4KB LBA is partially programmed into a flash page, but is vulnerable to read disturb. My understanding so far, though, is that only SLC supports partial-page programming. So a 4KB LBA would need to be buffered in NVRAM within the SSD until there is a full page (16KB) worth of data to write to a page. Even then, the wordline is only partially programmed, because the other 3 pages have not been programmed yet, so the wordline still needs to be protected through additional caching or buffering in NVRAM.

    Could you or someone else please either confirm or correct my understanding?

    Again, I really appreciate the effort and thinking that has gone into this article.

    Umesh
  • weilinzhu - Monday, June 20, 2022 - link

    Very helpful article, thanks a lot! As you wrote: "A recent Western Digital demo with a 512GB ZNS prototype SSD showed the drive using a zone size of 256MB (for 2047 zones total) but also supporting 2GB zones." Could you please tell me where this demo was published? Thanks in advance!
