3D NAND: Hitting The Reset Button on Scaling

Now that we understand how 3D NAND works, it is time to see why it matters. As we now know, the problem with 2D NAND is the shrinking cell size and the growing proximity of the cells, which degrade reliability and endurance. 3D NAND must solve these two issues, but it must also remain scalable to be economical. So how does it do that? This is where the third dimension comes into play.

The cost of a semiconductor is proportional to the die size. If you shrink the die, you effectively get more dies from a single wafer, resulting in a lower cost per die. Alternatively, you can add more functionality (i.e. transistors) to each die. In the case of NAND, that means you can build a higher capacity die while keeping the die size the same, which gives more gigabits per wafer and thus reduces cost. If you cannot shrink the die, then you have hit a dead-end because the cost will no longer scale. That is what has happened with 2D NAND: the shrinks on the X and Y axes have run out of gas.
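To make that arithmetic concrete, here is a minimal Python sketch of the dies-per-wafer and cost-per-gigabit relationship. All of the numbers in it (wafer cost, die area, die capacity) are made-up placeholders rather than real fab figures, and yield and edge losses are ignored; only the relationship between die size, capacity, and cost is the point.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Rough gross die count: wafer area divided by die area.

    Ignores edge losses, scribe lines and yield, so it overestimates slightly.
    """
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

def cost_per_gigabit(wafer_cost_usd, die_area_mm2, die_capacity_gbit):
    """Wafer cost spread across all the gigabits that wafer yields."""
    return wafer_cost_usd / (dies_per_wafer(die_area_mm2) * die_capacity_gbit)

# Hypothetical $5,000 wafer and ~90mm^2 die: doubling the bits per die
# (without growing the die) halves the cost per gigabit.
print(cost_per_gigabit(5000, 90, 128))   # ~$0.050 per Gbit
print(cost_per_gigabit(5000, 90, 256))   # ~$0.025 per Gbit

# Shrinking the die has the same effect: half the area, twice the dies.
print(cost_per_gigabit(5000, 45, 128))   # ~$0.025 per Gbit
```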

What 3D NAND does is add a Z-axis to the game. Because it stacks cells vertically, it is no longer as dependent on the X and Y axes, since density can be gained by adding more layers instead of shrinking the cells. As a result, Samsung's V-NAND takes a more relaxed position on the X and Y axes by going back to a 40nm process node, which increases the cell size and leaves more room between individual cells, eliminating the major issues 2D NAND has. The large number of layers compensates for the much larger process node, resulting in a die that is the same size and capacity as state of the art 2D NAND dies but without the caveats.
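As a rough, back-of-the-envelope illustration of that trade-off, the sketch below estimates how many layers a 40nm stack needs just to match a 16nm planar die in bits per unit area, under the naive assumption that cell footprint scales with the square of the feature size and that the layouts are otherwise identical. Neither assumption holds exactly (the next page goes through the real cell size math), so treat the numbers as showing the trend, not actual V-NAND density.

```python
def layers_to_match(planar_node_nm, relaxed_node_nm):
    """Layers a relaxed-node stack needs to match a planar node's bits per area,
    assuming cell footprint scales with the square of the feature size."""
    return (relaxed_node_nm / planar_node_nm) ** 2

# A 40nm cell occupies (40/16)^2 = 6.25x the footprint of a 16nm cell,
# so roughly 7 layers are needed just to break even on density.
break_even = layers_to_match(planar_node_nm=16, relaxed_node_nm=40)
print(break_even)  # 6.25

# Every layer beyond that is headroom: it can go toward bigger cells,
# wider spacing between them, or simply higher capacity per die.
print(32 / break_even)  # ~5.1x the naive density with a 32-layer stack
```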

The above graph gives some guidance as to how big each cell in V-NAND really is. On the next page, I will go through how cell size is actually calculated and how V-NAND compares with Micron's 16nm NAND, but the above gives a good picture of the benefit that 3D NAND has. Obviously, when each cell is larger and the distance between individual cells is greater, there are more electrons to play with (i.e. more room for voltage state changes) and cell-to-cell interference decreases substantially. Those two are the main reasons why V-NAND is capable of achieving up to ten times the endurance of 2D NAND.

Moreover, scaling in the vertical dimension does not have the same limitations as scaling in the X and Y axes does. Because the cost of a semiconductor is still mostly determined by the die area and not by its height, there is no need to cram cells very close to each other, so there is very little interference between the cells even in the vertical direction. Also, the use of high-K dielectrics means that the control gate does not have to wrap around the charge trap. The result is a hefty barrier of silicon dioxide (an insulator) between each cell, which is far more insulating than the rather thin ONO layer in 2D NAND. Unfortunately, I do not know the exact distance between cells in the vertical dimension, but it is safe to assume that it is noticeably more than the ~20nm in 2D NAND since there is no need for aggressive vertical scaling.

As for how far Samsung believes V-NAND can scale, their roadmap shows a 1Tbit die planned for 2017. That is very aggressive because it essentially implies that die capacity will double every year (256Gbit next year, 512Gbit in 2016, and finally 1Tbit in 2017). The most interesting part is that Samsung is confident that they can do this simply by increasing the layer count, meaning the process node will stay at 40nm.
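A quick sanity check of that roadmap: if the die area and the 40nm node both stay fixed, doubling the die capacity every year means roughly doubling the layer count every year. The starting layer count of 32 in the sketch below is my own assumption for illustration, not a number from Samsung's roadmap.

```python
# Roadmap capacities implied by the article: 128Gbit today, doubling yearly.
capacities_gbit = {2014: 128, 2015: 256, 2016: 512, 2017: 1024}

base_year = 2014
base_layers = 32  # assumed starting point, purely for illustration

for year, gbit in capacities_gbit.items():
    # With die area and the 40nm node fixed, layers must scale with capacity.
    layers = base_layers * gbit // capacities_gbit[base_year]
    print(f"{year}: {gbit:4d} Gbit -> ~{layers} layers")
```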

Comments

  • beginner99 - Tuesday, July 1, 2014 - link

    Sounds great until you see the price. Even for an enthusiast, a Crucial MX100 is probably the more reasonable choice. It's half the price...
  • juhatus - Tuesday, July 1, 2014 - link

    Yeah, the most important factor of a SATA SSD is the price, not how it will saturate the 550MB/s limit in a scenario that will never happen for most consumers. Not even a mention on the first page, eh? Please put a little pressure on the manufacturers to move on to M.2 and NVMe.

    **In the Midnight hour, She cried more, more, more!!**
  • boogerlad - Tuesday, July 1, 2014 - link

    Are you guys going to review the SM1715? Really curious to see how it stacks up against the Intel, especially in client workloads.
  • pesho00 - Tuesday, July 1, 2014 - link

    I really hoped to see 2, 3, 4 TB drives with this technology :(
    But we will wait ;)
    Nice drive, not so nice price ;)
  • MrSpadge - Tuesday, July 1, 2014 - link

    Not much point in offering them now if you consider the price of the 1 TB version.
  • Samus - Tuesday, July 1, 2014 - link

    Those Koreans...they're just killer engineers.
  • trumanhw - Tuesday, July 1, 2014 - link

    I don't get it. I get about 505 read and 495 write on my EVO. This goes up by TEN percent in performance and then all but DOUBLES the price?

    I'm confident someone here has an understanding of this that I'm missing--please reply and just point me to the parts that change my view.

    Thanks
  • hojnikb - Tuesday, July 1, 2014 - link

    Your EVO has worse endurance, lower IOPS, lower consistency and most of all lower write speeds (the 495MB/s write is due to a trick called TurboWrite).
    Sequential speeds are only a part of the story.
  • emn13 - Tuesday, July 1, 2014 - link

    I always wonder who this kind of drive is supposed to be aimed at. Yes, it's fast; but the competition is fast enough that I really can't imagine anyone *noticing* the difference outside of artificial, extremely heavy non-stop I/O. And I don't mean light-load non-stop, I mean I/O maxed out non-stop so the drives can't take time to garbage collect.

    So... a power user is unlikely to notice much difference outside of short bursts of high-I/O apps, and as the rather heavy 2011 Light workload demonstrates, the drives are already maxing out there too. I just can't think of a real-world load where a human being would notice the performance difference and care about it; that would imply a performance difference of around a factor of 2.

    So we're left with a super fast drive (good for bragging rights, but what else?), but with some apparently intentionally missing features like power-loss protection. Why would even a heavy user choose this over, say, Samsung's own 840 EVO or Crucial's MX100/M500?

    I just don't see the value here. To me this looks like micro-optimization and losing sight of the bigger picture.

    Price matters. Features matter. Performance - only until you're fast enough.
  • hojnikb - Tuesday, July 1, 2014 - link

    Kinda agree with this. If you REALLY need every bit of performance, you won't be looking at SATA drives at all.
    This probably has a niche.
    But mainstream drives are really fast enough these days for most people.
