3D NAND: Hitting The Reset Button on Scaling

Now that we understand how 3D NAND works, it is time to see what it actually delivers. The problem with 2D NAND is the shrinking cell size and the growing proximity of the cells, which degrade reliability and endurance. 3D NAND must solve these two issues, but it must also remain scalable to be economical. So how does it do that? This is where the third dimension comes into play.

The cost of a semiconductor is proportional to the die size. If you shrink the die, you get more dies from a single wafer, which lowers the cost per die. Alternatively, you can add more functionality (i.e. transistors) to each die. In the case of NAND, that means building a higher capacity die while keeping the die size the same, which yields more gigabits per wafer and thus reduces cost. If you cannot shrink the die, you have hit a dead end because the cost will not scale. That is what has happened with 2D NAND: the shrinks on the X and Y axes have run out of gas.
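The economics above can be sketched with some back-of-the-envelope arithmetic. The wafer cost, die size, and die capacity below are hypothetical placeholders (not figures from the article), and the dies-per-wafer formula is a common rough approximation that ignores yield, scribe lines, and edge exclusion:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Rough estimate of gross dies per wafer (illustrative only).

    Uses a common approximation that subtracts an edge-loss term;
    real numbers depend on defect density, scribe lines, etc.
    """
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

# Hypothetical inputs: a 300mm wafer, a 128Gbit die of ~90 mm^2,
# and a fixed per-wafer processing cost.
wafer_cost_usd = 3000.0    # assumed, not from the article
die_area = 90.0            # assumed die size in mm^2
die_capacity_gbit = 128

n = dies_per_wafer(300, die_area)
print(f"gross dies per wafer: {n}")
print(f"cost per gigabit: ${wafer_cost_usd / (n * die_capacity_gbit):.4f}")
```

The key point is visible in the last line: with wafer cost roughly fixed, cost per gigabit only falls if you fit more dies on the wafer (smaller dies) or more gigabits in each die (denser dies). When neither is possible, cost stops scaling.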

What 3D NAND does is add a Z-axis to the game. Because it stacks cells vertically, it is no longer as dependent on the X and Y axes, since the die size can be reduced by adding more layers instead. As a result, Samsung's V-NAND takes a more relaxed position on the X and Y axes by going back to a 40nm process node, which increases the cell size and leaves more room between individual cells, eliminating the major issues 2D NAND has. The large number of layers compensates for the much larger process node, resulting in a die with the same size and capacity as state-of-the-art 2D NAND dies but without the caveats.
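To see why the layer count can compensate for the larger node, consider a crude scaling argument. If we assume a cell's planar footprint scales with the square of the feature size (a simplification that ignores real layout overheads), a 40nm cell occupies about 6.25x the area of a 16nm cell, so the 3D part needs at least seven layers just to break even on bit density; every layer beyond that is a density win:

```python
import math

# Illustrative back-of-the-envelope math, not figures from the article:
# how many vertical layers does a 40nm 3D process need to match the bit
# density of a 16nm 2D process? Assumes cell footprint scales with the
# square of the feature size, ignoring real layout overheads.
node_2d_nm = 16
node_3d_nm = 40

footprint_ratio = (node_3d_nm / node_2d_nm) ** 2  # each 40nm cell is 6.25x larger
layers_to_match = math.ceil(footprint_ratio)      # layers needed to break even
print(layers_to_match)  # prints 7
```

Under these assumptions, any stack substantially taller than seven layers delivers a smaller die at the same capacity, which is exactly the trade V-NAND makes.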

The above graph gives some guidance as to how big each cell in V-NAND really is. On the next page, I will go through how cell size is actually calculated and how V-NAND compares with Micron's 16nm NAND, but the above gives a good picture of the benefit of 3D NAND. Obviously, when each cell is larger and the distance between individual cells is greater, there are more electrons to play with (i.e. more room for voltage state changes) and cell-to-cell interference decreases substantially. Those two factors are the main reasons why V-NAND is capable of achieving up to ten times the endurance of 2D NAND.

Moreover, scaling in the vertical dimension does not face the same limitations as scaling in the X and Y axes does. Because the cost of a semiconductor is still mostly determined by the die area and not by the height, there is no need to cram the cells very close to each other. As a result, there is very little interference between the cells even in the vertical direction. Also, the use of high-K dielectrics means that the control gate does not have to wrap around the charge trap. The result is a hefty barrier of silicon dioxide (an insulator) between each cell, which is far more insulating than the rather thin ONO layer in 2D NAND. Unfortunately, I do not know the exact distance between cells in the vertical dimension, but it is safe to assume that it is noticeably more than the ~20nm in 2D NAND since there is no need for aggressive vertical scaling.

As for how far Samsung believes V-NAND can scale, their roadmap shows a 1Tbit die planned for 2017. That is very aggressive because it essentially implies that die capacity will double every year (256Gbit next year, 512Gbit in 2016, and finally 1Tbit in 2017). The most interesting part is that Samsung is confident it can do this simply by increasing the layer count, meaning that the process node will stay at 40nm.
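The arithmetic behind that roadmap is straightforward: with the node fixed at 40nm and the die size constant, doubling capacity each year means doubling the layer count each year. The starting layer count below is an assumed placeholder, not Samsung's published figure:

```python
# Sketch of the roadmap implied above: die capacity doubling each year
# purely by adding layers, with the process node fixed at 40nm.
# The starting layer count is a hypothetical placeholder.
capacity_gbit = 128  # current-generation die
layers = 32          # assumed starting point

for year in (2015, 2016, 2017):
    capacity_gbit *= 2
    layers *= 2
    print(year, f"{capacity_gbit}Gbit", f"{layers} layers @ 40nm")
```

The loop lands on 1024Gbit (1Tbit) in 2017, matching the roadmap, but it also shows why the plan is aggressive: exponential capacity growth demands exponential layer growth if nothing else changes.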

Comments

  • TrackSmart - Tuesday, July 1, 2014 - link

    I second this. The AnandTech SSD tests were designed so that we could tell the difference between drives that are all so fast that there is no way to tell them apart in ordinary usage scenarios. I see the value of testing the theoretical performance of drives as manufacturers push the technological limits.

    That said, at the end of the day user-experience is what matters. I agree with emn13 that the "light workload" test is already more strenuous than anything the average user is likely to do, and looking at the chart, we see that almost every drive is within a range of ~280 to ~380 MB/s. I'm guessing that the range in performance gets even narrower for "real world" workloads.

    So keep up the innovative SSD testing, but be sure to put these theoretical performance gains into a real-world context when you get to the Conclusions section of these articles. Not everyone will benefit from these theoretical increases in performance.
  • hojnikb - Tuesday, July 1, 2014 - link

    Is Samsung planning on doing TLC based V-NAND anytime soon ?
    It would be great for a mainstream drive, since endurance would be higher (due to the older node) and speeds would probably also go up (so no need for gimmicks like TurboWrite).
    Or is it not mature enough to scale down to TLC ?
  • artifex - Tuesday, July 1, 2014 - link

    You had me at 10 years warranty. I don't mind the slight premium if I'm not buying another one midway through the cycle. Sure, it will be obsolete well before it dies, but that term signals Samsung is really confident about their reliability.
  • Gigaplex - Tuesday, July 1, 2014 - link

    Since it's twice the price of competition like the MX100, you're better off replacing mid way through the cycle.
  • Arnulf - Tuesday, July 1, 2014 - link

    I must have missed this in the article - are the V-NAND cells as used in 850 Pro drives 2 or 3 bits per cell? I got the "larger lithography improves endurance" part, I'm just wondering whether they opted for the more conservative option (MLC) there as well.
  • extide - Tuesday, July 1, 2014 - link

    These are MLC, or 2 bit per cell.

    It would be interesting if the non pro 850 comes out with TLC V-NAND!
  • himem.sys - Tuesday, July 1, 2014 - link

    Heh, we are waiting for tests of the 850 Pro vs 840 Pro, because there are no big differences "on paper".
  • sirvival - Tuesday, July 1, 2014 - link

    Hi,
    one question:
    In the review the idle power consumption for e.g. the 850 128GB is 35 mW.
    I wanted to compare that to my Samsung 470 so I went to Bench and selected the drives for comparison.
    There it says that the 850 uses 0.29 Watt.
    So how come there is a difference?
  • KAlmquist - Tuesday, July 1, 2014 - link

    Anandtech Bench has four SSD power numbers:
    SSD Slumber Power (HIPM+DIPM)
    Drive Power Consumption - Idle
    Drive Power Consumption - Sequential Write
    Drive Power Consumption - Random Write

    The confusing things are that (1) the review only listed slumber power, not idle power, and (2) Bench lists both numbers but doesn't place the slumber power next to the other power values.
  • mutantmagnet - Tuesday, July 1, 2014 - link

    I also find the lack of power-loss protection to be a big negative for this drive. Until ReFS on Windows has all the features you would get in Linux, this is going to be an important feature for anyone who values data integrity. Even after that happens it still might be very important.
