3D NAND: Hitting The Reset Button on Scaling

Now that we understand how 3D NAND works, it is time to see what it is all about. As we now know, the problem with 2D NAND is the shrinking cell size and the growing proximity of the cells, which result in degraded reliability and endurance. 3D NAND must solve these two issues, but it must also remain scalable to be economical. So how does it do that? This is where the third dimension comes into play.

The cost of a semiconductor is proportional to the die size. If you shrink the die, you effectively get more dies from a single wafer, resulting in a lower cost per die. Alternatively, you can add more functionality (i.e. transistors) to each die. In the case of NAND, that means you can build a higher capacity die while keeping the die size the same, which yields more gigabits per wafer and thus reduces cost. If you cannot shrink the die, then you have hit a dead-end because the cost will no longer scale. That is what has happened with 2D NAND: the shrinks on the X and Y axes have run out of gas.
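To make the economics concrete, here is a quick back-of-the-envelope sketch. The wafer cost, die area and capacities below are made-up illustrative numbers (not Samsung's or anyone else's actual figures); the point is simply that, for a fixed die size, more gigabits per die means a lower cost per gigabit.

```python
# Back-of-the-envelope sketch of why die size and density drive NAND cost.
# All numbers are illustrative assumptions, not actual industry figures.
import math

WAFER_DIAMETER_MM = 300   # standard 300mm wafer
WAFER_COST_USD = 3000     # assumed cost of a fully processed wafer

def gross_dies_per_wafer(die_area_mm2):
    """Rough gross die count (ignores edge loss and defect yield)."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return int(wafer_area / die_area_mm2)

def cost_per_gigabit(die_area_mm2, die_capacity_gbit):
    dies = gross_dies_per_wafer(die_area_mm2)
    return WAFER_COST_USD / (dies * die_capacity_gbit)

# Same die size, but doubling the capacity per die halves the cost per gigabit.
print(cost_per_gigabit(die_area_mm2=100, die_capacity_gbit=64))   # ~$0.066/Gbit
print(cost_per_gigabit(die_area_mm2=100, die_capacity_gbit=128))  # ~$0.033/Gbit
```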

What 3D NAND does is add a Z-axis to the game. Because it stacks cells vertically, it is no longer as dependent on the X and Y axes: capacity can be added by stacking more layers instead of shrinking the cells. As a result, Samsung's V-NAND takes a more relaxed position on the X and Y axes by going back to a 40nm process node, which increases the cell size and leaves more room between individual cells, eliminating the major issues 2D NAND has. The high layer count compensates for the much larger process node, resulting in a die that is the same size and capacity as state-of-the-art 2D NAND dies but without the caveats.
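To get a feel for how the layer count compensates for the relaxed lithography, here is a naive density comparison. It is only a first-order sketch: it assumes a 32-layer stack, treats areal density as layers divided by the feature size squared, and ignores array efficiency and peripheral circuitry, so real-world densities will differ.

```python
# Naive relative-density model: bits per unit area ~ layers / feature_size^2.
# Ignores array efficiency and peripheral circuitry; illustration only.

def relative_density(feature_size_nm, layers=1):
    return layers / (feature_size_nm ** 2)

planar_16nm = relative_density(16)             # state-of-the-art 2D NAND
vnand_40nm = relative_density(40, layers=32)   # assumed 32-layer V-NAND at 40nm

print(vnand_40nm / planar_16nm)  # ~5.1x the planar density in this naive model
```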

The above graph gives some guidance as to how big each cell in V-NAND really is. On the next page, I will go through how cell size is actually calculated and how V-NAND compares with Micron's 16nm NAND, but the graph already gives a good picture of the benefit 3D NAND has. Obviously, when each cell is larger and the distance between individual cells is greater, there are more electrons to play with (i.e. more room for voltage state changes) and cell-to-cell interference decreases substantially. Those two factors are the main reasons why V-NAND is capable of achieving up to ten times the endurance of 2D NAND.

Moreover, scaling in the vertical dimension does not have the same limitations as scaling in the X and Y axes does. Because the cost of a semiconductor is still mostly determined by the die area and not by its height, there is no need to cram the cells very close to each other, so there is very little interference between the cells even in the vertical direction. Also, the use of high-K dielectrics means that the control gate does not have to wrap around the charge trap. The result is a hefty barrier of silicon dioxide (an insulator) between each cell, which is far more insulating than the rather thin ONO layer in 2D NAND. Unfortunately, I do not know the exact distance between cells in the vertical dimension, but it is safe to assume that it is noticeably more than the ~20nm in 2D NAND since there is no need for aggressive vertical scaling.

As for how far Samsung believes V-NAND can scale, their roadmap shows a 1Tbit die planned for 2017. That is very aggressive because it essentially implies that the die capacity will double every year (256Gbit next year, 512Gbit in 2016 and finally 1Tbit in 2017). The most interesting part is that Samsung is confident it can do this simply by increasing the layer count, meaning that the process node will stay at 40nm.
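If the process node indeed stays at 40nm and all of the scaling comes from stacking, doubling the die capacity roughly means doubling the layer count each year. The sketch below assumes, purely for illustration, a 128Gbit, 32-layer starting point; Samsung has not published the exact layer counts behind the roadmap.

```python
# Hypothetical layer-count progression if capacity doubles yearly at a fixed
# 40nm node. The 128Gbit / 32-layer baseline is an assumption for illustration.

capacity_gbit, layers = 128, 32
for year in (2015, 2016, 2017):
    capacity_gbit *= 2
    layers *= 2
    print(year, f"{capacity_gbit}Gbit", f"~{layers} layers")
# 2015 256Gbit ~64 layers
# 2016 512Gbit ~128 layers
# 2017 1024Gbit ~256 layers
```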

Comments

  • YazX_ - Monday, July 7, 2014 - link

    Prices are not going down. Good thing we have Crucial, who have the best bang for the buck; of course performance-wise it is not comparable to SanDisk or Samsung, but it's still a very fast SSD. For normal users and gamers, the MX100 is the best drive you can get for its price.
  • soldier4343 - Thursday, July 17, 2014 - link

    My next upgrade will be the 850 Pro 512GB version, over my OCZ 4 256GB.
  • bj_murphy - Friday, July 18, 2014 - link

    Thanks Kristian for such an amazing, in-depth review. I especially loved the detailed explanation of current 2D NAND vs 3D NAND, how it all works, and why it's all so important. Possibly one of my favourite AnandTech articles to date!
  • DPOverLord - Wednesday, July 23, 2014 - link

    Looking at this, it does not seem to be a HUGE difference compared to two Samsung 840 Pro 512GB drives in RAID 0 (1TB total).

    Upgrading at this point does not make the most sense.
  • Nickolai - Wednesday, July 23, 2014 - link

    How are you implementing over-provisioning?
  • joochung - Tuesday, July 29, 2014 - link

    I don't see this mentioned anywhere, but were the tests performed with RAPID enabled or disabled? I understand that some of the tests could not run with RAPID enabled, but for the other tests which do run on a formatted partition (i.e. not on the raw disk), it's not clear whether RAPID was enabled or disabled. Therefore it's not clear how RAPID affects the results in each test.
  • Rekonn - Wednesday, July 30, 2014 - link

    Anyone know if you can use the 850 Pro SSDs on a Dell PERC H700 RAID controller? Per the documentation, the controller only supports 3Gb/s SATA.
  • janos666 - Thursday, August 14, 2014 - link

    I always wondered if there is any practical and notable difference between dynamic and static over-provisioning.
    I mean... since TRIM should blank out the empty LBAs anyway, I don't see the point in leaving unpartitioned space for static over-provisioning for home users. From a general user standpoint, having as much usable space available as possible (even if we try to restrain ourselves from ever utilizing it all) seems far more practical than keeping random write performance (somewhat, though never perfectly) constant, as long as the drive remains usable at an acceptable speed, even if it is notably slower.

    So, I always create a system partition as big as possible (I do the partitioning manually: a minimal-size EFI boot partition + everything else in one piece) without leaving unpartitioned space for over-provisioning, and I try to leave as much space empty as possible.

    However, one time, after I filled my 840 Pro up to ~95% and kept it like that for 1-2 days, it never "recovered". Even after I manually ran "defrag c: /O" to make sure the freed-up space was TRIMed, sequential write speeds were really slow and random write speeds were awful. I had to create a backup image with dd, fill the drive with zeros a few times and finally run an ATA Secure Erase before restoring the backup image.

    Even though I was never gentle with the drive (I don't do stupid things like disabling swapping and caching just to reduce its wear; I bought it to use it...) and I did something which is not recommended (filled almost all the user-accessible space with data and kept using it like that for a few days as a system disk), this wasn't something I expected from this SSD. (Even though this is what I usually get from Samsung: it always looks really nice, but later on something turns up which reduces its value from good or best to average or worse.) This was supposed to be a "Pro" version.
  • stevesy - Friday, September 12, 2014 - link

    I don't normally go out of my way to comment on a product, but I felt this product deserved the effort. I've been using personal computers since they first came out. I fully expected my upgrade from an old 50GB SSD to be a nightmare.

    I installed the new 500GB Evo 850 as a secondary drive, cloned, switched it to primary and had it booting in about 15 minutes. No problems, no issues, super fast, WOW. Glad Samsung got it figured out. I'll be a lot less concerned about my next upgrade and won't be waiting until I'm down to my last few megabytes before upgrading again.
  • basil.bourque - Friday, September 26, 2014 - link

    I must disagree with the conclusion, "there is not a single thing missing in the 850 Pro". Power-loss protection is a *huge* omission, especially for a "Pro" product.
