At Flash Memory Summit, Samsung announced their fourth generation of 3D NAND and several of the more obvious SSD upgrades it enables. Taking a page from Intel and Micron's strategy book, they also announced a new memory type and corresponding SSD product while saying essentially nothing about what the new memory actually is.

The fourth generation 3D NAND bumps the layer count up to 64, compared to the 48-layer design used by the third generation V-NAND that was announced last fall and has been slowly rolling out to their SSD products over the course of this year. So far Samsung has talked about a 512Gb TLC part, and at least initially the MLC parts will probably be made from the same die and thus have two thirds the capacity. (Samsung's second generation 3D NAND was initially available as 128Gb TLC or 86Gb MLC, with 128Gb MLC parts introduced later.) The new NAND also supports an increased interface speed of 800Mbps, which is key to reducing the performance penalty that comes from consolidating more flash onto fewer independent chips.
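
As a quick sanity check on that two-thirds figure (a back-of-the-envelope sketch, not anything Samsung has published): the same physical cell array stores three bits per cell as TLC and two bits per cell as MLC, so the ratio falls straight out of the bit counts.

```python
# Same die, different bits per cell: TLC stores 3 bits/cell, MLC 2 bits/cell.
tlc_capacity_gbit = 512
cells_gbit = tlc_capacity_gbit / 3    # ~170.7 billion cells on the die
mlc_capacity_gbit = cells_gbit * 2    # ~341 Gbit, i.e. two-thirds of 512

print(f"MLC capacity of the same die: ~{mlc_capacity_gbit:.0f} Gbit")
```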

With a per-die capacity of 512Gb (64GB), Samsung can now put 1TB of TLC flash in a single package. This means most product lines will be seeing an increase in capacity at the high end of the range. Their BGA SSD products will be offering 1TB capacity even in the 11.5mm by 13mm form factor. The 16TB PM1633a SAS SSD will be eclipsed by the new 32TB PM1643. Likely further out, the PM1725 PCIe add-in card SSD will be succeeded by the PM1735 with a PCIe 4.0 x8 host interface.
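
The package-level math is straightforward. A minimal sketch, assuming Samsung's usual 16-die stack (the stack height is an assumption here, not something Samsung confirmed for this part):

```python
die_gbit = 512
die_gbyte = die_gbit / 8                       # 64 GB per die
dies_per_package = 16                          # assumed 16-die stack
package_gbyte = die_gbyte * dies_per_package   # 1024 GB, i.e. 1TB per package

print(f"{dies_per_package} dies x {die_gbyte:.0f} GB = {package_gbyte:.0f} GB per package")
```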

Complementing the NAND update will be a new non-standard oversized M.2 form factor 32mm wide and 114mm long, compared to the typical enterprise M.2 size of 22mm by 110mm. A little extra room can go a long way, and Samsung will be using it to produce 8TB drives. These will be enterprise SSDs and Samsung showed a diagram of these enabling 256TB of flash in a 1U server. Samsung will also be producing 4TB drives in standard M.2 sizing.
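
The numbers behind that density claim work out cleanly. A rough sketch; the 32-drive count is inferred from the capacities, not stated by Samsung:

```python
# Board area gained by the oversized form factor, and drives needed for 256TB.
oversized_mm2 = 32 * 114    # 3648 mm^2
standard_mm2 = 22 * 110     # 2420 mm^2
extra = oversized_mm2 / standard_mm2 - 1   # ~51% more board area

drives_per_1u = 256 // 8    # 32 of the 8TB drives reach 256TB

print(f"~{extra:.0%} more board area; {drives_per_1u} drives per 1U")
```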

In what is likely a bid to steal some thunder from 3D XPoint memory before it can ship, Samsung announced Z-NAND memory technology and a Z-SSD product based around Z-NAND and a new SSD controller. They said nothing about the operating principles of Z-NAND, but they did talk about their plans for the Z-SSD products.

Samsung Z-SSD is being marketed as addressing the performance gap between DRAM and SSDs. Samsung's slides during their keynote showed some performance comparisons against the PM963 NVMe TLC SSD and against an unnamed "PRAM based" solution. The logical point of comparison would be against 3D XPoint NVMe drives, but Samsung can't have real performance and power numbers on those when they're still under development by Intel and Micron. Thus the PRAM based solution Samsung refers to is probably one of their own earlier R&D efforts that didn't make it to market. The Z-SSD ties or comes out ahead on every benchmark Samsung showed, but NVMe NAND flash SSDs were missing from the power consumption comparison.

The slides stated that there will be a 1TB Z-SSD this year and 2TB and 4TB Z-SSDs next year, while the press release issued later states more generally that the Z-SSD is expected to be released next year. The press release also states that Z-NAND "shares the fundamental structure of V-NAND and has a unique circuit design and controller that can maximize performance". Given that description, the launch timeframe, and capacities only a little lower than those of NAND flash SSDs, it seems that Z-NAND isn't drastically different from existing memory technologies; it may even be little more than SLC flash in disguise, trying for a comeback.
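
If that guess is right, the capacity math is at least self-consistent. A speculative back-of-the-envelope sketch, assuming the new 512Gb TLC die run at one bit per cell; none of these figures come from Samsung:

```python
# Hypothetical: Z-NAND as the 512Gb TLC V-NAND die run at 1 bit per cell.
tlc_die_gbit = 512
slc_die_gbit = tlc_die_gbit / 3           # ~171 Gbit (~21 GB) per die in SLC mode
dies_for_1tb = 1000 * 8 / slc_die_gbit    # ~47 dies to build a 1TB Z-SSD

print(f"SLC-mode die: ~{slc_die_gbit:.0f} Gbit; ~{dies_for_1tb:.0f} dies per 1 TB")
```

Forty-odd dies is an unremarkable count for an enterprise drive, which would square with the 1TB launch capacity.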

Comments

  • ats - Thursday, August 11, 2016 - link

    DWPD tells you mostly jack about the underlying storage's endurance. DWPD is heavily influenced by over provisioning and write amplification. Without knowing the OP and WA of the drives in question, using DWPD is purely meaningless. Hell I can take just about any consumer drive and get it to 10k DWPD pretty easily, send me the worst TLC drive out there, and I can make it into a 100k DWPD drive using simple command lines.
  • Kristian Vättö - Thursday, August 11, 2016 - link

    100,000 DWPD? That's not even possible. With a 128GB drive, you would have to write 148GB/s to fill the drive 100,000 times in one day.

    DWPD is what counts in the real world. Sure, it's influenced by OP and WA, but you can't get rid of OP and WA when dealing with an actual drive. The user, be that a consumer or an IT architect, only cares about how much they can write to the drive, not how many P/E cycles the underlying memory technology has.
  • fanofanand - Thursday, August 11, 2016 - link

    Stop with the facts, his emotional rant was far more interesting.
  • MrSpadge - Thursday, August 11, 2016 - link

    He was surely referring to increasing OP manually, which makes the amount of writes needed for a "full drive write" progressively smaller and helps with WA. Pushing this to the extreme doesn't make sense - but if a manufacturer chose to do it, they could claim a huge DWPD number.
  • fanofanand - Thursday, August 11, 2016 - link

    Which would still be physically impossible to do 100,000 times per day.
  • thetuna - Thursday, August 11, 2016 - link

    Imagine a 1B drive with 1TB of over provisioning.
    Obviously ridiculous, but not physically impossible.
  • ats - Thursday, August 11, 2016 - link

    Take a 128GB drive. Provision it down to 1.28GB. 100k DWPD done. (This is in reality how a lot of actual enterprise drives are made: massive over-provisioning.)

    The point being that DWPD isn't an endurance specification but a warranty specification and there are numerous ways to shift the number.
  • MrSpadge - Thursday, August 11, 2016 - link

    Please read the article you just linked. Micron provides fine answers to the performance and endurance question. In short: approximately 1000x is what the cell can do, whereas 25 DWPD and the teased performance is what the current product is engineered for.
  • JoeyJoJo123 - Thursday, August 11, 2016 - link

    WTF is longavity?
  • ddriver - Thursday, August 11, 2016 - link

    No need to wonder, the benefit will be zero. Mainstream applications break free from the storage bottleneck by means of a single SATA SSD, adding faster storage does next to nothing, since the bottleneck is now the CPU. It is actually a good thing those will be pointless for consumers and prosumers, since they will be too expensive to afford anyway, so people won't be that bummed about not being able to afford them.
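
The DWPD arithmetic in the ats/Kristian Vättö exchange above is easy to reproduce. A rough sketch, using only figures from the comments: at the drive's full capacity the implied write rate is physically impossible, but shrinking the user-visible capacity through over-provisioning makes the same DWPD number trivially attainable, without changing the wear on the underlying flash at all.

```python
SECONDS_PER_DAY = 86400

def write_rate_gbps(user_capacity_gb, dwpd):
    """Sustained write rate (GB/s) needed to hit a given DWPD."""
    return user_capacity_gb * dwpd / SECONDS_PER_DAY

# Kristian's point: a full 128 GB drive at 100,000 DWPD needs ~148 GB/s.
print(f"{write_rate_gbps(128, 100_000):.0f} GB/s")    # -> 148

# ats's point: provision the same 128 GB of flash down to 1.28 GB of user
# capacity and the required rate drops to ~1.5 GB/s, attainable, even
# though the physical flash absorbs exactly the same wear either way.
print(f"{write_rate_gbps(1.28, 100_000):.2f} GB/s")   # -> 1.48
```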
