3D NAND in Numbers: Is it Economical?

As with all new semiconductor technologies, the one big question is whether it is economical. There are a ton of memory technologies with better characteristics than 2D NAND (MRAM and ReRAM to name a couple), but none of them is currently scalable enough to challenge NAND in cost. IMFT's 16nm node is the smallest node shipping in volume, so let's compare Samsung's second generation V-NAND against it to see how V-NAND stacks up.

I am basing my analysis on Andrew Walker's blog post from 3DInCites. He is a respected analyst in 3D technologies and his blog post is based on Samsung's and Micron's presentations at the 2014 International Solid-State Circuits Conference, so the data should be as accurate as it can be.

Update 7/8: I was able to find some more accurate data regarding the die size, so I've updated this section with the new data.

| | Samsung 2nd Gen V-NAND | Micron 16nm NAND |
|---|---|---|
| Process Node | 40nm | 16nm |
| # of Layers | 32 | - |
| Die Capacity | 86Gbit | 128Gbit |
| Die Size | 95.4mm² | 173mm² |
| Cell Size | 40,300nm² | 1,312nm² |

The only downside of Walker's data is that it only covers Samsung's first generation V-NAND, which was a 24-layer design with a die size of 133mm² and a capacity of 128Gbit. Fortunately, the second generation V-NAND uses the same 40nm node, so the only substantial difference between the two generations is the number of layers. The 86Gbit die capacity certainly looks odd at first, but it makes sense: it is simply the first generation die cut in half, with the added layers making up the difference [(128Gbit / 2) * (32 / 24) ≈ 86Gbit]. As such, it is relatively safe to assume that the other variables (cell size etc.) are the same as in the first generation, and that is what the table above is based on.
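To make that arithmetic concrete, here it is as a quick Python sketch (all the numbers come straight from the table and the paragraph above; nothing else is assumed):

```python
# Sanity check of the V-NAND die capacity arithmetic described above.
# All figures come from the table and text; none of this is measured data.

gen1_capacity_gbit = 128  # 24-layer first-generation V-NAND, 133mm^2 die
gen1_layers = 24
gen2_layers = 32

# Second generation: the first-generation die cut in half, with 32 layers
# instead of 24 -> (128 / 2) * (32 / 24) = 85.3, marketed as 86Gbit
gen2_capacity_gbit = (gen1_capacity_gbit / 2) * (gen2_layers / gen1_layers)
print(f"2nd gen die capacity: {gen2_capacity_gbit:.1f} Gbit")  # ~85.3
```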

Update 7/8: The peripheral circuitry does not scale with the memory array, meaning that the die size is quite a bit larger than I originally thought.

Before we go deeper into the density analysis, I want to explain how cell size is calculated. As you can see, the cell size figures are far too large to be derived from the process nodes alone: no matter how you multiply 40nm or 16nm, you will not arrive at the areas in the table.

Oftentimes when cell size is discussed, only the actual size of the cell is taken into account, which leaves the distance between cells out of the equation. However, the so-called empty parts (they are not really empty, as you saw in the X-ray a couple of pages back) occupy die area just like the cells themselves, so they cannot be excluded. The appropriate way to measure cell size is from the inner corner of a cell to the outer corners of the neighbouring cells along both the X and Y axes. This is demonstrated by the black square on the above graph.

With the proper measurements in hand, this is how V-NAND compares to 16nm NAND on cell size alone (i.e. excluding how the layers impact density). Doesn't look too good, huh?

If you are wondering where the 16nm comes from, it is mostly just marketing. 16nm refers to the smallest feature (or resolution, as it is often called) on the die, meaning the finest line that can be printed. In the case of NAND, that can be either the wordline or bitline, or the distance between them. With Micron's 16nm NAND, it is likely the width of the wordline and the distance between two wordlines, as the two are 32nm combined (i.e. 16nm each).

The actual cell size does not do justice to V-NAND, because the whole idea behind the move to 3D NAND is to increase the cell size and the distances between cells to get rid of the issues 2D NAND has. In the graph below, I took the number of layers into account, because you essentially need 32 2D NAND cells to achieve the same density as with 32-layer V-NAND, and that totally changes the game.

The math behind that graph is just a couple of simple arithmetic operations. The actual cell area is 40,300nm² (155nm × 260nm), and the relative cell area is simply the actual cell area divided by the number of layers, which gives us 1,259nm² (40,300nm² / 32). To get the relative dimensions, the actual dimensions are divided by the square root of the number of layers (e.g. 155nm / √32 ≈ 27nm).
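For reference, here are the same calculations in Python (the 155nm × 260nm cell pitch and 32-layer count are taken from the table and text above):

```python
import math

# Relative (per-bit) cell size of 32-layer V-NAND, following the text above.
x_pitch_nm = 155
y_pitch_nm = 260
layers = 32

actual_area = x_pitch_nm * y_pitch_nm        # 40,300 nm^2
relative_area = actual_area / layers         # ~1,259 nm^2 per layer
relative_x = x_pitch_nm / math.sqrt(layers)  # ~27 nm
relative_y = y_pitch_nm / math.sqrt(layers)  # ~46 nm

print(f"actual cell area:    {actual_area:,} nm^2")
print(f"relative cell area:  {relative_area:,.0f} nm^2 (16nm NAND: 1,312 nm^2)")
print(f"relative dimensions: {relative_x:.0f} nm x {relative_y:.0f} nm")
```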

[Graph: NAND Cell Size - Relative]

In the end, V-NAND's relative cell size turns out to be smaller than that of Micron's 16nm NAND. The difference is not huge (~4%), but when the performance, power consumption and endurance advantages are taken into account, V-NAND is a clear winner.

Bit Density

Another way to look at cost efficiency is bit density. While cell size is a good measure, it does not take peripheral circuitry and ECC/spare bytes into account, which occupy a part of the die as well. Bit density is simply die capacity divided by die size, and it gives us a single number for quick and easy comparison.

In this comparison, V-NAND is the leader without a doubt. The bit density is as much as 73% higher, resulting in a much more cost efficient design. The reason the difference between cell size and bit density is so large is that 2D NAND requires much more die area for ECC bytes because it is more error prone, whereas V-NAND needs less ECC thanks to its higher reliability. In addition, Micron's peripheral circuitry design consumes a relatively large share of the die, so I wonder how Toshiba's/SanDisk's 15nm stacks up against V-NAND, as they claim to have a more efficient peripheral circuitry design.

Update 7/8: V-NAND is still denser than the latest 2D NAND nodes but the difference is not overwhelming. 
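To put a number on that update, here is the bit density formula applied to the corrected figures from the table above (with the larger 95.4mm² die size, the advantage works out to roughly 22%, in line with the update):

```python
# Bit density = die capacity / die size, using the updated table figures.
vnand_density = 86 / 95.4   # Gbit/mm^2 -> ~0.90
micron_density = 128 / 173  # Gbit/mm^2 -> ~0.74

advantage = vnand_density / micron_density - 1
print(f"V-NAND:      {vnand_density:.2f} Gbit/mm^2")
print(f"Micron 16nm: {micron_density:.2f} Gbit/mm^2")
print(f"V-NAND advantage: {advantage:.0%}")  # ~22% with the corrected die size
```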

All in all, there is a lot more to cost efficiency than just cell area and bit density. Yield and equipment cost are two major factors, and both currently favor 2D NAND, as it is a well-known technology and some of the machinery can be reused when moving from one node to another.



Comments

  • beginner99 - Tuesday, July 1, 2014

    Sounds great until you see the price. Even for an enthusiast, a Crucial MX100 is probably the more reasonable choice. It's half the price...
  • juhatus - Tuesday, July 1, 2014

    Yeah, the most important factor of a SATA SSD is the price, not how it will saturate the 550MB/s limit in a scenario that will never happen for most consumers. Not even a mention on the first page, eh? Please put a little pressure on the manufacturers to move on to M.2 and NVMe.

    **In the Midnight hour, She cried more, more, more!!**
  • boogerlad - Tuesday, July 1, 2014

    Are you guys going to review the SM1715? Really curious to see how it stacks up against the Intel, especially in client workloads.
  • pesho00 - Tuesday, July 1, 2014

    I really hoped to see 2, 3, 4 TB drives with this technology :(
    But we will wait ;)
    Nice drive, not so nice price ;)
  • MrSpadge - Tuesday, July 1, 2014

    Not much point in offering them now if you consider the price of the 1 TB version.
  • Samus - Tuesday, July 1, 2014

    Those Koreans...they're just killer engineers.
  • trumanhw - Tuesday, July 1, 2014

    I don't get it. I get about 505MB/s read and 495MB/s write on my EVO. This goes up by TEN percent in performance and then all but DOUBLES the price?

    I'm confident someone here has an understanding of this that I'm missing; please reply and point me to the parts that change my view.

  • hojnikb - Tuesday, July 1, 2014

    Your EVO has worse endurance, lower IOPS, lower consistency and, most of all, lower write speeds (the 495MB/s write is due to a trick called TurboWrite).
    Sequential speeds are only a part of the story.
  • emn13 - Tuesday, July 1, 2014

    I always wonder who this kind of drive is supposed to be aimed at. Yes, it's fast; but the competition is fast enough that I really can't imagine anyone *noticing* the difference outside of artificial, extremely heavy non-stop I/O. And I don't mean light-load non-stop, I mean I/O maxed out non-stop, so the drives can't take time to garbage collect.

    So... a power user is unlikely to notice much difference outside of short bursts of high-I/O apps, and as the rather heavy 2011 Light workload demonstrates, the drives are already maxing out there too. I just can't think of a real-world load where a human being would notice the performance difference and care about it; that would imply a performance difference of around a factor of 2.

    So we're left with a super fast drive (good for bragging rights, but what else?) with some apparently intentionally missing features like power-loss protection. Why would even a heavy user choose this over, say, Samsung's own 840 EVO or Crucial's MX100/M500?

    I just don't see the value here. To me this looks like micro-optimization and losing sight of the bigger picture.

    Price matters. Features matter. Performance - only until you're fast enough.
  • hojnikb - Tuesday, July 1, 2014

    Kinda agree with this. If you REALLY need every bit of performance, you won't be looking at SATA drives at all.
    This probably has a niche.
    But mainstream drives are really fast enough these days for most people.
