SSD pricing has come down tremendously since Intel’s X25-M hit the scene in 2008. Back then we were talking about 80GB for around $600, while today Micron and Samsung will sell you a 1TB SSD for the same price. Moore’s Law drove this scaling in capacity. Smaller transistors paved the way for higher density NAND in the same die area, and increasing volumes helped to keep the fabs full.

Although scaling transistor size helped address the cost-per-bit problem, it created a new one: decreasing endurance and performance. Just looking at what’s happened to IMFT NAND over the past few generations gives us an idea of what we’re dealing with:

Micron NAND Evolution

                              50nm    34nm     25nm     20nm     20nm     16nm
Single Die Max Capacity       16Gbit  32Gbit   64Gbit   64Gbit   128Gbit  128Gbit
Program Page (typical)        900 µs  1200 µs  1300 µs  1300 µs  1600 µs  ?
Rated Program/Erase Cycles    10000   5000     3000     3000     3000     ~3000

Both endurance and performance trend downwards. Modern SSDs deal with the endurance problem by implementing sophisticated error checking/correction, and the performance problem is masked by exploiting parallelism as best as possible.
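To see why parallelism matters, here's a rough sketch of aggregate write throughput when page programs are interleaved across multiple dies. The 16KB page size is a hypothetical figure; the 1300 µs program time is taken from the table above.

```python
# Sketch: effective write throughput when pages are programmed
# concurrently across NAND dies. PAGE_SIZE_KB is hypothetical; T_PROG_US
# is the typical 25/20nm program time from the table above.
PAGE_SIZE_KB = 16
T_PROG_US = 1300

def write_throughput_mb_s(num_dies):
    """Aggregate MB/s when num_dies each program one page at a time."""
    pages_per_second = num_dies * 1_000_000 / T_PROG_US
    return pages_per_second * PAGE_SIZE_KB / 1024

print(round(write_throughput_mb_s(1), 1))   # a single die: ~12 MB/s
print(round(write_throughput_mb_s(16), 1))  # a 16-die array: ~192 MB/s
```

A single die programming 16KB pages at 1300 µs each is painfully slow; a controller talking to 16 dies at once hides most of that latency, which is exactly the trick modern SSDs rely on.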

The root cause is basic physics. To store 2 bits per cell in present-day MLC NAND, we're talking about counting a couple dozen electrons per level. Go to 3 bits per cell (TLC NAND) and the difference between levels shrinks further. Scale down to 14/15/16nm NAND and everything gets more difficult. We're heading towards a situation where the difference between levels in a NAND cell is a matter of a handful of electrons.
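A quick back-of-the-envelope illustration of the squeeze - the 100-electron budget here is purely an assumed number for a deeply scaled cell, not a measured figure:

```python
# Sketch: how many electrons separate adjacent charge levels in a cell.
# TOTAL_ELECTRONS is an illustrative assumption for a deeply scaled cell.
TOTAL_ELECTRONS = 100

def electrons_per_level_gap(bits_per_cell):
    """Storing b bits needs 2**b distinct levels, i.e. 2**b - 1 gaps."""
    levels = 2 ** bits_per_cell
    return TOTAL_ELECTRONS // (levels - 1)

for bits, name in [(1, "SLC"), (2, "MLC"), (3, "TLC")]:
    print(f"{name}: ~{electrons_per_level_gap(bits)} electrons between levels")
```

With the same charge budget, going from 2 to 3 bits per cell more than halves the margin between adjacent levels, and every process shrink reduces the budget itself.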

Measuring such subtle differences in stored charge is difficult, especially when NAND cells continue to shrink and are packed very tightly together. Interference between the cells becomes a much bigger concern below 20nm. Back at 50nm these sorts of issues were easy to mitigate with decent controllers and good quality NAND. Now we’re seeing more sophisticated ECC and DSP-like work being done by the controller in order to get accurate data in/out of the drive.

Scaling of traditional NAND via smaller manufacturing processes will come to an end. I’ve heard estimates saying we’ll see one or two more shrinks before it becomes unfeasible to scale cell size any further. The question then becomes: what’s next?

Last week Samsung gave their answer: V-NAND, a 3D NAND leveraging charge trap flash technology.

How it Works

Rather than increasing density by shrinking cell size, Samsung’s V-NAND takes a few steps back in process technology and instead stacks multiple layers of NAND cells on top of one another.

The change isn’t so simple. I’ve used the floating gate transistor diagram in many of our big SSD articles in the past, but that cell design doesn’t apply to V-NAND. In the floating gate MOSFET, electrons are stored on the gate itself - a conductor. Defects in the transistor (e.g. from repeated writes) can cause a short between the gate and channel, depleting any stored charge in the gate. If the gate is no longer able to reliably store a charge, then the cell is bad and can no longer be written to. Ultimately this is what happens when you wear out an SSD.

With V-NAND, Samsung abandons the floating gate MOSFET and instead turns to its own Charge Trap Flash (CTF) design. An individual cell looks quite similar, but charge is stored on an insulating layer instead of a conductor. This seemingly small change comes with a bunch of benefits, including higher endurance and a reduction in overall cell size. That’s just part of the story though.

V-NAND takes this CTF architecture and reorganizes it into a non-planar design: the insulating layer wraps around the channel, and the control gate in turn wraps around the insulator. The 3D/non-planar design increases the physical area that can hold a charge, which improves both performance and endurance.
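A simple geometry sketch shows why wrapping the storage layer around the channel buys area. All dimensions below are made up for illustration; they aren't Samsung's actual cell dimensions:

```python
import math

# Sketch: charge-storage area of a planar cell vs. a cylindrical
# gate-all-around cell. Dimensions are illustrative assumptions only.
feature_nm = 30          # hypothetical cell width / gate length
channel_radius_nm = 15   # hypothetical vertical-channel radius

planar_area = feature_nm * feature_nm                          # flat layer
cylinder_area = 2 * math.pi * channel_radius_nm * feature_nm   # wraps channel

print(f"planar:   {planar_area} nm^2")
print(f"cylinder: {cylinder_area:.0f} nm^2 ({cylinder_area / planar_area:.1f}x)")
```

With the radius set to half the cell width, the wrap-around layer has pi times the area of the flat one for the same footprint - more area to hold charge means more total electrons per level, which is where the performance and endurance gains come from.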

The final piece of the V-NAND puzzle is to stack multiple layers of these 3D CTF NAND cells. Since Samsung is building density vertically, there’s not as much pressure to shrink transistor sizes. With relaxed planar space constraints, Samsung turned to an older manufacturing process (30nm class, so somewhere between 30 and 39nm) as the basis of V-NAND.

By going with an older process, Samsung inherently benefits from higher endurance and interference between cells is less of an issue. Combine those benefits with the inherent endurance advantages of CTF and you end up with a very reliable solution. Whereas present day 19/20nm 2-bit-per-cell MLC NAND is good for around 3000 program/erase cycles, Samsung’s 30nm-class V-NAND could withstand over 10x that (35K p/e cycles).
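To put those p/e ratings in perspective, here's a rough total-bytes-written estimate for a hypothetical 256GB drive. The write amplification factor of 1.5 is an assumed, workload-dependent value:

```python
# Sketch: approximate host data writable over a drive's rated life.
# capacity and write amplification are hypothetical illustrative values.
def total_writes_tb(capacity_gb, pe_cycles, write_amp=1.5):
    """Approximate host TB writable before the NAND reaches its p/e rating."""
    return capacity_gb * pe_cycles / write_amp / 1000

print(round(total_writes_tb(256, 3_000)))   # planar 19/20nm MLC: ~512 TB
print(round(total_writes_tb(256, 35_000)))  # quoted V-NAND figure: ~5973 TB
```

Under identical assumptions, rated life scales linearly with p/e cycles, so the jump from 3K to 35K cycles is the difference between roughly half a petabyte and nearly six petabytes of host writes.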

The move to an older process node doesn’t even come at a reduction in density thanks to just how high Samsung can stack its V-NAND. In its announcement last week, Samsung introduced a first generation 128Gbit MLC V-NAND solution with 24 layers. The 128Gbit device delivers 2x the density of a 19/20nm 64Gbit device in the same physical space, or roughly the density of a traditional 128Gbit planar device at 15nm - the obvious benefits being better endurance and performance.

Since we’re dealing with physically larger NAND cells that are also less susceptible to certain types of interference, program times are lower - meaning that performance is higher. With fewer retries required for all reads/writes, overall power consumption can be lower as well. At the end of the day, it’s just a better solution.

Samsung went one step further and announced that it was sampling an enterprise SSD built around V-NAND to key customers. Since V-NAND is presently only available in a 128Gbit density, capacities aren't anything earth shattering but I suspect that'll change soon enough. It’s too early to talk about pricing and availability unfortunately.

V-NAND is the future for Samsung. We’ll see V-NAND appear in enterprise, client and even mobile (phones/tablets). There will likely be a period of overlap as Samsung transitions to V-NAND, but expect to see that transition begin next year across multiple markets.

A 128Gbit device is nice, but that’s not the end of the road. Samsung is predicting it’ll be able to build a 1Tb device by 2017, showcasing just how much scaling it can get out of 3D NAND. Samsung could theoretically also move to a smaller process node to further increase density, although ultimately that will run into the same limitations we’re facing today with traditional, planar NAND. The initial design only stores two bits per cell, but if need be Samsung could also move to a TLC V-NAND.

What about the rest of the present day NAND players? I suspect those that don’t move to CTF and/or 3D NAND will instead have their sights set on a bigger departure from NAND: some form of resistive RAM. The potential performance gains there are even higher, and you can also deploy a vertically stacked solution to gain density.

The good news is that it looks like we’re going to have some great options that will enable continued improvements in SSD density, cost and performance.

Comments

  • Jambe - Wednesday, September 4, 2013 - link

    Me three.
  • Jaybus - Wednesday, August 21, 2013 - link

    It works completely differently than flash. In Crossbar's case, amorphous Si is sandwiched between a metallic top electrode (Ag) and a polysilicon bottom electrode formed in a crossbar structure. Rather than trapping electrons, whole atoms are moved around to chemically change the resistance between top and bottom electrodes of each individual cell. Each cell is a sort of variable resistor where current in one direction increases its resistance, while current in the other direction decreases its resistance. This is also known as a "memristor" because it remembers its current resistance state with or without current flow. There is a good intro article in EE Times at http://www.eetimes.com/document.asp?doc_id=1319139... describing Crossbar's technology. The article's comments go into more detail, but be forewarned the comments are written by EEs and are not for the faint of heart.
  • Sjors E - Thursday, August 22, 2013 - link

    Thnx Jaybus, just the kind of article I was looking for!
  • Kevin G - Wednesday, August 21, 2013 - link

    With 30 nm class V-NAND coming in at the same capacities as a hypothetical 15 nm die, things are looking good for continued storage growth. Not only will V-NAND have several major process revisions to further increase density, there is also the option to increase the number of vertical layers. 1 Tbit dies really do seem feasible by 2017.
  • MartinT - Wednesday, August 21, 2013 - link

    24 layers of V-NAND sounds rather expensive to produce, never mind the 128 layers needed for the projected Tb device.

    If this is the solution, we're screwed.
  • Solid State Brain - Wednesday, August 21, 2013 - link

    According to one of the slides above, with 8 layers needed for 16 Gb, but only 24 for 128 Gb it looks like benefits in capacity by adding more layers have an exponential trend. I can't quite figure out the exact relationship, but for 1 Tb V-NAND chips I guess that somewhere around 32 layers or so will be needed.
  • SunLord - Wednesday, August 21, 2013 - link

    It really depends on how much a 24 layer 39nm wafer costs compared to normal 19nm or smaller NAND. Shrinking down to 15nm or below is gonna get expensive with very low endurance. I kinda doubt Samsung would move forward into full scale production with something that will be more expensive to produce than what their rivals can make. I doubt Samsung will build a 39nm 1Tb V-NAND, but if they do it'll probably be for enterprise customers and we won't see them in consumer grade devices till they do a shrink down to 29nm.

    I have to say it's interesting that Samsung labels their process in 10nm classes such as 39-30nm and 29-20nm instead of just saying the exact size such as 34nm or 25nm like others do.
  • jjj - Thursday, August 22, 2013 - link

    I highly doubt this chip is 24 layers. Samsung's language in the announcement was "can stack as many as 24 cell layers", and at 24 layers the density would be terrible compared to 2D. I was expecting 5-6 layers at best for this first chip.
  • jjj - Wednesday, August 21, 2013 - link

    "Samsung introduced a first generation 128Gbit MLC V-NAND solution with 24 layers."
    Are you sure this chip is 24 layers? The press release was saying up to 24 layers I believe, so I was expecting a lot fewer layers.

    Maybe worth mentioning that Micron is starting 16nm production and yields might be better on 2D NAND so not sure how cost effective 3D is now.
    As for the other big players they do have 3D NAND on their public roadmaps, most should go there in 2015-2016 , the timing is mostly about cost ( you can check their investors presentations or ask directly) but maybe Samsung is forcing them to accelerate their plans. Next seems to be 3D ReRAM by the end of the decade.
  • James5mith - Wednesday, August 21, 2013 - link

    OK, I'm going to say something that might be really dumb here, but bear with me....

    "Defects in the transistor (e.g. from repeated writes) can cause a short between the gate and channel, depleting any stored charge in the gate. If the gate is no longer able to reliably store a charge, then the cell is bad and can no longer be written to."

    So why not mark that a permanent "0" bit, and use it as appropriate? I.e. Only write to that cell when it would contain a zero anyway. You are at a known state, so leave it drained, but it can still be useful to store bits in, as long as they are 0 bits. I understand that tracking this kind of thing might take some work, but it would be a way to get useful life out of a cell, even after it had died.
