Three Bits and Three Dimensions: What's the Deal?

I've covered 3D NAND and TLC NAND in detail in the past, so in this article I'll only explain what 3D technology means for TLC NAND and vice versa. Head over to the links above if you are looking for a deeper analysis of the two technologies.

Truth be told, 3D technology is ideal for TLC. The fundamental problem of TLC NAND has always been its limited endurance and performance, which stem from the additional voltage states needed to store three bits (i.e. eight possible bit combinations) in one cell. With eight voltage states compared to four in MLC, TLC NAND is less resistant to wear because a smaller change in the cell charge is enough to corrupt the cell's value. Due to the way NAND works, the cells (or, to be exact, the insulators in the cells) wear out over time, which induces electron leakage that alters the cell charge and hence the voltage state. This gets worse with die shrinks because the number of electrons per cell decreases, making the NAND even less tolerant of wear.
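
To put rough numbers on that, here is a minimal Python sketch of how the bit count per cell determines the number of voltage states and how much voltage budget each state is left with. The 4V threshold-voltage window is an assumed round figure for illustration, not a number from Samsung or any datasheet.

def states(bits_per_cell):
    # Each extra bit doubles the number of distinct threshold-voltage states.
    return 2 ** bits_per_cell

def margin_per_state(window_volts, bits_per_cell):
    # Rough voltage budget available to each state inside a fixed window.
    return window_volts / states(bits_per_cell)

WINDOW_V = 4.0  # assumed usable threshold-voltage window, in volts

for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3)):
    print(f"{name}: {states(bits)} states, "
          f"~{margin_per_state(WINDOW_V, bits):.2f} V per state")

With these assumed figures, SLC gets roughly 2V per state, MLC 1V and TLC only 0.5V, which is why the same amount of charge leakage is far more likely to push a TLC cell into the wrong state.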

The key aspect of 3D V-NAND is the process node. By going back to 40nm lithography, the number of electrons per cell increases dramatically, which makes TLC a much more viable technology than it is on modern planar NAND. Obviously, V-NAND doesn't change the basics of TLC because it still takes eight voltage states to differentiate all the possible 3-bit values, but thanks to the increased number of electrons there is more breathing room between the states, making the cells more error tolerant.
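
As a back-of-envelope illustration of why the electron count matters, the sketch below assumes, purely for illustration, that a small planar cell maps its full voltage window to roughly 150 electrons while a larger 40nm-class 3D cell maps it to roughly 1,000, and then looks at what leaking the same ten electrons does in each case.

WINDOW_V = 4.0               # same assumed threshold-voltage window as above
TLC_MARGIN_V = WINDOW_V / 8  # rough voltage budget per TLC state

def vt_shift(electrons_lost, electrons_full_scale):
    # Approximate threshold shift if the full window maps to N electrons.
    return electrons_lost * WINDOW_V / electrons_full_scale

for label, n_full in (("small planar cell (~150 e-, assumed)", 150),
                      ("larger 3D cell (~1000 e-, assumed)", 1000)):
    shift = vt_shift(10, n_full)  # leak the same ten electrons from each cell
    print(f"{label}: ~{shift:.2f} V shift, "
          f"or {shift / TLC_MARGIN_V:.0%} of one TLC state's margin")

With these placeholder numbers, ten leaked electrons cost the planar cell about half of a TLC state's margin while the larger cell barely notices, which is the breathing room described above.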

Samsung claims a 10x reduction in voltage state overlap, which is a massive change for the better. You can see how crammed the planar TLC voltage states are, so it's no wonder the endurance is low: the states are practically overlapping throughout the voltage distribution, and even a tiny change in cell voltage can push a cell into the neighboring state.
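
One simple way to see why overlap matters so much is to model two neighboring states as Gaussian threshold-voltage distributions and ask how often a cell drifts past the midpoint between them. The spacings and spread below are assumptions chosen only to illustrate the trend; they are not Samsung's figures.

import math

def misread_probability(spacing_v, sigma_v):
    # Probability that a cell drifts past the midpoint between two
    # neighboring Gaussian threshold-voltage states.
    return 0.5 * math.erfc((spacing_v / 2) / (sigma_v * math.sqrt(2)))

SIGMA_V = 0.12  # assumed spread of a programmed state's Vt distribution

for label, spacing in (("planar TLC, assumed 0.45 V state spacing", 0.45),
                       ("3D TLC, assumed 0.70 V state spacing", 0.70)):
    print(f"{label}: ~{misread_probability(spacing, SIGMA_V):.1e} "
          f"overlap per state boundary")

Even this crude model shows the overlap falling by roughly an order of magnitude once the states get a modest amount of extra spacing, which is in the same ballpark as the 10x figure Samsung quotes.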

The larger cell structure also enables higher performance because it takes fewer iterations to program a cell. With planar TLC NAND it took multiple very high voltage pulses as well as numerous verification passes to reach the right charge, but with looser voltage distributions the programming process has fewer steps and thus takes less time.
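
To make that concrete, here is a toy model of the program-and-verify loop. It is not Samsung's actual programming algorithm, and the step sizes, tolerance and target voltage are made-up values; the point is only that a looser tolerance allows larger pulses, so far fewer pulse/verify rounds are needed.

def program_cell(target_v, tolerance_v, step_v):
    # Count pulse/verify iterations needed to bring the cell's threshold
    # voltage to within the tolerance of the target.
    vt = 0.0
    pulses = 0
    while vt < target_v - tolerance_v:  # verify step after every pulse
        vt += step_v                    # one incremental program pulse
        pulses += 1
    return pulses

# Tightly packed planar TLC states force small, careful steps;
# the roomier 3D states tolerate much coarser ones.
print("planar-like (0.05 V steps):", program_cell(3.0, 0.05, 0.05), "pulses")
print("3D-like (0.20 V steps):   ", program_cell(3.0, 0.20, 0.20), "pulses")

With these made-up numbers, the loose-tolerance case finishes in roughly a quarter of the pulses, which is the kind of saving that shows up as lower program latency.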

And given the lower read/program latencies and the reduced need for error correction, power consumption is also considerably lower.

In addition to better latency and power consumption characteristics, Samsung claims double the density of its 19nm planar TLC NAND, but without knowing the die size of the 128Gbit 32-layer TLC V-NAND, it's hard to say how accurate this is. There have been whispers that the 128Gbit die is actually identical to the 86Gbit MLC die, because 86Gbit multiplied by 1.5 equals 129Gbit and at the silicon level MLC and TLC aren't any different, but for now that's just speculation.
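
The arithmetic behind that speculation is simple enough to check: if the 86Gbit MLC array were simply operated in 3-bit mode, the same cells would hold roughly the marketed 128Gbit.

mlc_die_gbit = 86
gigacells = mlc_die_gbit / 2       # an MLC die stores 2 bits per cell
tlc_die_gbit = gigacells * 3       # the same cells holding 3 bits each
print(f"{tlc_die_gbit:.0f} Gbit")  # prints 129 Gbit, close to the marketed 128Gbit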

What I do know is that Samsung started mass production of TLC V-NAND later, which suggests that the two aren't identical. Moreover, from what I know, TLC NAND requires some changes to the peripheral circuitry in order to read three bits from one cell, so while the NAND memory arrays could be alike, the die size is still likely at least slightly different. Anyway, we'll find out when Chipworks (or some other silicon analysis company) takes a closer look at the NAND die itself.

Comments

  • R3MF - Tuesday, December 9, 2014

    my mistake, I had presumed that the Pro was not a consumer part.

    still, six months on, with the arrival of tons of X99 and Z97 boards sporting m.2 slots and the drives based on the Marvell controller just months away, I'd have thought it would merit a mention.
  • hojnikb - Tuesday, December 9, 2014

    There is a separate article addressing this...
  • cm2187 - Tuesday, December 9, 2014

    But out of curiosity, what are you going to do with the extra performance? Who actually has any use for specs higher than what the EVO already offers? That is certainly the case on some heavy-load servers, but for end users, even enthusiasts like me, I am not sure I would get an even slightly better experience by beating the SATA 3 specs.
  • R3MF - Tuesday, December 9, 2014

    I presume that SATA express and m.2 were invented for no reason then?

    Bandwidth is useful, as is lower latency.
  • cm2187 - Wednesday, December 10, 2014

    Well, it's not because it is invented that it is useful. It's like having a dozen cores in a CPU. Some applications will have use for those cores (certainly relevant on servers or for virtualization), but the vast majority of common applications are single-threaded, so people should rather focus on higher clock rates. I'm always happy to see higher specs, but I just wonder which of my applications will be faster with M.2.
  • Supercell99 - Sunday, December 14, 2014

    Virtualization. I run VMware with several OSes running at the same time on my desktop. Being able to start up and have these run off a low-latency disk is nice. Power users always have a need for high-bandwidth, low-latency I/O.
  • hojnikb - Wednesday, December 10, 2014

    These can be used for the PCIe or SATA protocol. In fact, most m.2 drives run SATA instead of PCIe.
  • dcaxax - Tuesday, December 9, 2014

    I'm unconvinced by Samsung. My SSD 830 is doing ok, having suffered an acceptable 30% performance decline (which may be correctable via secure erase but I will not test this).

    But my 840 (non-EVO), which sits in my HTPC and sees limited use outside of the hibernation file (60% of the drive is empty, by the way), is now running upwards of 60-70% slower.

    This is just unacceptable in a system which supports TRIM (Win 7 x64). Samsung have done nothing to rectify this, claiming these problems only occur on their EVO line. Until they change their approach, I'm inclined to distrust their latest cost-saving "innovation" and give my money to Crucial instead.
  • simonpschmitt - Tuesday, December 9, 2014

    Dear Mr. Vättö,
    While I don't think it would necessarily belong in this article, R3MF has a point. What is the current state of SATAExpress, NGFF, m.2, ...? My laptop is 18 months old and has an unused m.2 slot which, to my knowledge, nobody has ever put an SSD into. You seem to have an ear to the ground when it comes to the SSD industry. There are a few questions you might have a qualified opinion about:
    - Will we be seeing current-gen (meaning 850 EVO-gen) m.2/SATAExpress SSDs?
    - If yes, up to which capacity and in which form factor (2242, 2280, ...)?
    - With regard to m.2: will there be mainly PCIe (2-lane/4-lane) drives or SATA?
    - When do you suppose these will be an economically viable alternative to 2.5" given both slots are available?

    In your personal opinion:
    - What is the point of SATAExpress when literally every SATAExpress device also has m.2?
    - Will there be a subjective improvement for the normal or enthusiast (non-datacenter) user with the switch to PCIe?

    A quick blurb, perhaps in the form of a short pipeline article, would be much appreciated.

    Another thing I have always wondered about: while I am amazed by the perceived benefits of an SSD vs. an HDD, game load times often seem not to change at all. It's more of an oddity than a real concern, but my new system (i5 4200, 8GB, 840 EVO) often has the same load times as my old system (i3 330, 4GB, HDD). I always thought load times were mainly dependent on how fast the data can be read (HDD/SSD bound) and how fast it can be processed/extracted (CPU bound). Is there a factor I'm missing, or do games just not take advantage of certain kinds of faster hardware?

    Perhaps you or some of the other readers can help me with my curiosity.

    Thanks, Simon
  • metayoshi - Tuesday, December 9, 2014

    I can't answer all of your questions, but I can answer the gaming part.

    With regards to gaming, it really depends on the game. Many games these days are relatively well optimized for loading, so running them on an HDD or SSD doesn't matter too much since they like to load parts of the game in the background. However, there are some games where having an SSD is completely noticeable. As an avid World of Warcraft player, I can tell which raid members have SSDs and which don't because those of us with SSDs simply appear in the raid much faster than those on HDDs when switching zones. I mean, it definitely doesn't hinder gameplay too much since the only thing that takes a while is actually getting into the zone. The rest of the zone is in RAM already, so getting to and fighting bosses are instantaneous. I used to have a 7200 RPM drive too. For me, the difference is completely noticeable now that I have had an SSD for a couple of years.
