3D NAND in Numbers: Is it Economical?

As with all new semiconductor technologies, the one big question is whether it is economical. There are a ton of memory technologies that have better characteristics than 2D NAND (MRAM and ReRAM to name a couple) but none of them is currently scalable enough to challenge NAND in cost. IMFT's 16nm node is the smallest node shipping in volume, so let's compare Samsung's second generation V-NAND to that to see how it stacks up.

I am basing my analysis on Andrew Walker's blog post from 3DInCites. He is a respected analyst in 3D technologies and his blog post is based on Samsung's and Micron's presentations at the 2014 International Solid-State Circuits Conference, so the data should be as accurate as it can be.

Update 7/8: I was able to find some more accurate data regarding the die size, so I've updated this section with the new data.

                  Samsung 2nd Gen V-NAND    Micron 16nm NAND
Process Node      40nm                      16nm
# of Layers       32                        -
Die Capacity      86Gbit                    128Gbit
Die Size          95.4mm2                   173mm2
Cell Size         40,300nm2                 1,312nm2

The only downside of Walker's data is that it covers just Samsung's first generation V-NAND, which was a 24-layer design with a die size of 133mm2 and a capacity of 128Gbit. Fortunately, the second generation V-NAND uses the same 40nm node, so the only substantial difference between the two generations is the number of layers. The 86Gbit die capacity is certainly a bit odd, but it actually makes sense: it is simply the first generation die cut in half with the added layers taken into account [(128Gbit / 2) * (32 / 24) ≈ 85.3Gbit, rounded to 86Gbit]. As such, it is relatively safe to assume that the other variables (cell size etc.) are the same as in the first generation, and that is what the table above is based on.
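
That derivation is simple enough to verify. Below is a minimal Python sketch of it; the variable names are mine, and only the 128Gbit, 24-layer and 32-layer figures come from the article.

    # Sketch: estimating the 2nd gen V-NAND die capacity from the 1st gen figures,
    # assuming (as above) a die cut in half with 32 layers instead of 24.
    first_gen_capacity_gbit = 128
    first_gen_layers = 24
    second_gen_layers = 32

    second_gen_capacity_gbit = (first_gen_capacity_gbit / 2) * (second_gen_layers / first_gen_layers)
    print(f"{second_gen_capacity_gbit:.1f} Gbit")  # ~85.3 Gbit, which Samsung markets as 86Gbit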

Update 7/8: The peripheral circuitry does not scale with the memory array, meaning that the die size is actually quite a bit larger than I originally estimated.

Before we go deeper into the density analysis, I want to explain how cell size is calculated. As you can see, the cell size figures are far too large to be derived directly from the 40nm and 16nm process figures; no matter how you multiply those numbers, you will not arrive at the cell sizes in the table above.

Oftentimes when cell size is discussed, only the actual size of the cell is taken into account, which leaves the distance between cells out of the picture. However, the so-called empty areas (they are not really empty, as you saw in the X-ray a couple of pages back) take up die area just like the cells do, so they cannot be excluded. The appropriate way to measure cell size is from the inner corner of a cell to the outer corners of the neighbouring cells along both the X and Y axes, which is what the black square in the graph above demonstrates.

With the proper measurements, this is how V-NAND compares to 16nm NAND when just looking at cell size alone (i.e. excluding how the layers impact density). Doesn't look too good, huh?

If you are wondering where the 16nm figure comes from, it is mostly just marketing. 16nm refers to the smallest length (or resolution, as it is often called) in the die, meaning the finest line that can be printed. In the case of NAND, that can be either the wordline or the bitline, or the distance between them. With Micron's 16nm NAND, it is likely the width of a wordline and the spacing between two wordlines, as the two measure 32nm combined (i.e. 16nm each).

Actual cell size does not do justice to V-NAND, because the whole idea behind the move to 3D NAND is to increase the cell size and the distances between cells in order to get rid of the issues that plague 2D NAND. In the relative cell size graph, I took the number of layers into account, because you essentially need 32 2D NAND cells to match the density of 32-layer V-NAND, and that changes the game completely.

The math behind that graph is just a couple of simple arithmetic operations. The actual cell area is 40,300nm2 (155nm * 260nm), and the relative cell area is simply the actual cell area divided by the number of layers, which gives us 1,259nm2 (40,300nm2 / 32). To get the relative dimensions, the actual dimensions are divided by the square root of the number of layers (e.g. 155nm / √32 ≈ 27nm).
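
The same arithmetic as a quick Python sketch (only the 155nm, 260nm, 32-layer and 1,312nm2 figures come from the article; the comparison at the end is my own back-of-the-envelope check):

    import math

    # Relative cell size math for 32-layer V-NAND.
    cell_x_nm, cell_y_nm, layers = 155, 260, 32

    actual_cell_area_nm2 = cell_x_nm * cell_y_nm            # 40,300 nm^2
    relative_cell_area_nm2 = actual_cell_area_nm2 / layers  # ~1,259 nm^2
    relative_x_nm = cell_x_nm / math.sqrt(layers)           # ~27 nm
    relative_y_nm = cell_y_nm / math.sqrt(layers)           # ~46 nm

    micron_cell_area_nm2 = 1312                             # Micron 16nm, from the table above
    print(micron_cell_area_nm2 / relative_cell_area_nm2)    # ~1.04, i.e. a ~4% difference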

[Graph: NAND Cell Size - Relative]

In the end, the relative cell size turns out to be smaller than that of Micron's 16nm NAND. The difference is not huge (~4%), but once the performance, power consumption and endurance advantages are taken into account, V-NAND is a clear winner.

Bit Density

Another way to look at cost efficiency is bit density. While cell size is a good measure, it does not take peripheral circuitry and ECC/spare bytes into account, which take up a portion of the die as well. Bit density is simply die capacity divided by die size, and it gives us a single number for quick and easy comparison.

In this comparison, V-NAND is the leader without any doubt. The bit density is as much as 73% higher, resulting in a much more cost efficient design. The reason the gap between the cell size and bit density comparisons is so large is that 2D NAND requires a lot more die area for ECC bytes because it is more error prone, whereas V-NAND needs less ECC thanks to its higher reliability. In addition, Micron's peripheral circuitry design consumes a relatively large share of the die, so I wonder how Toshiba's/SanDisk's 15nm stacks up against V-NAND, as they claim to have a more efficient peripheral circuitry design.

Update 7/8: V-NAND is still denser than the latest 2D NAND nodes but the difference is not overwhelming. 
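
To put a rough number on that update, here is a minimal sketch (my own recalculation, not Walker's) of the bit densities using the updated figures from the table above:

    # Bit density = die capacity / die size, with the updated table figures.
    vnand_capacity_gbit, vnand_die_mm2 = 86, 95.4
    micron_capacity_gbit, micron_die_mm2 = 128, 173

    vnand_density = vnand_capacity_gbit / vnand_die_mm2     # ~0.90 Gbit/mm^2
    micron_density = micron_capacity_gbit / micron_die_mm2  # ~0.74 Gbit/mm^2
    print(vnand_density / micron_density)                   # ~1.22

With the larger 95.4mm2 die, V-NAND's bit density advantage works out to roughly 22% rather than the 73% quoted above, which was based on the pre-update die size estimate.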

All in all, there is a lot more to cost efficiency than just cell area and bit density. Yield and equipment cost are two major factors, and both currently favor 2D NAND because it is a well known technology and some of the machinery can be reused when moving from one node to another.
