Availability and Controller Support

Just to make this clear, TLC isn't anything new. For example, Hynix had a 32Gb 48nm TLC die back in 2008. TLC was originally used for devices like USB flash drives, where its limited endurance is less of a concern. Most SSD OEMs have been toying with TLC SSDs for at least a year now, but we haven't seen any commercial products. OCZ had originally planned to introduce its first TLC based SSD in Q1 2012; however, TLC pricing simply hasn't made sense yet. Unless OCZ can realize significant cost savings over 2-bit-per-cell MLC, the added headaches of bringing a lower performing TLC part to market don't make sense.

However, there's still significant motivation to migrate to TLC NAND. Driving costs down further makes a lot of sense, particularly for consumer SSDs aimed at light, read-heavy workloads. Increasing pressure from Intel to deliver cheaper SSD-enabled Ultrabooks and Apple's desire to move all mainstream Macs to solid state storage are two major motivators. MLC NAND pricing will eventually get low enough to meet these (and more) needs, but TLC definitely accelerates the process.

TLC does require controller and firmware support. In the client SSD space only OCZ has been aggressive with announcing that its Indilinx Everest controller supports 3-bit-per-cell NAND. 

Adding controller support for an extra bit per cell is more than just updating the datasheet and claiming it works. The ECC engine needs to be updated, as the controller will face more frequent and more severe errors with TLC NAND (given its lower endurance rating).

Maintaining low write amplification is even more important with TLC NAND. With significantly fewer available program/erase cycles, burning through them due to high write amplification isn't acceptable. While NAND endurance isn't really an issue for most client MLC drives, it may be an issue for TLC based drives.
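The interaction between P/E cycles and write amplification is simple arithmetic, sketched below. The P/E cycle counts and workload numbers are assumptions chosen for illustration, not measured figures for any particular drive.

```python
def drive_lifetime_years(capacity_gb, pe_cycles, write_amp, host_gb_per_day):
    """Rough endurance estimate: total NAND writes the drive can absorb,
    divided by the amplified host write rate. First-order model only —
    ignores wear leveling overhead, retention effects, etc."""
    total_nand_writes_gb = capacity_gb * pe_cycles
    nand_gb_per_day = host_gb_per_day * write_amp
    return total_nand_writes_gb / nand_gb_per_day / 365

# Same 120GB drive and 10GB/day host workload, with assumed endurance
# ratings (~3000 P/E for MLC, ~1000 P/E for TLC) and two write amps:
for label, cycles in [("MLC", 3000), ("TLC", 1000)]:
    for wa in (1.5, 3.0):
        years = drive_lifetime_years(120, cycles, wa, 10)
        print(f"{label} WA={wa}: ~{years:.0f} years")
```

With a third of the P/E cycles, every point of write amplification costs a TLC drive three times as much lifetime as it costs an MLC drive, which is why the controller's garbage collection and wear leveling matter more here.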



Comments

  • themossie - Friday, February 24, 2012 - link

    The information is straight from Micron, it's just an awkward way to explain the concept. If you want to keep the industry standard capacities in your explanation, perhaps show the math as capacity/(1, 2, 3) = transistors rather than transistors * (1, 2, 3) = capacity? If capacity is fixed, solving for number of transistors required seems more intuitive.

    Corsair, OCZ and Kingston all make 90 GB Sandforce 2281 SSDs. I don't know how many channels / what NAND die they use. Searching that information brought up this website first every time! Upon further consideration, I blame aNAND... :-)
  • Kristian Vättö - Saturday, February 25, 2012 - link

    90GB SSDs have 96GB of NAND in them (remember that SandForce drives have ~7% over-provisioning). Most 2.5" drives have sockets for 16 NAND devices so that's simply twelve 8GB packages.
  • Confusador - Friday, February 24, 2012 - link

    I read the comments thread looking for this answer, so thank you. I still don't see the logic behind it, as others have pointed out that storage capacities haven't been power-of-2 for decades. It could conceivably be firmware related, but given that overprovisioning makes (e.g.) 60 and 120 GB fairly common that seems unlikely.

    Anyway, just some questions to keep in mind as you're in contact with the manufacturers. Thanks again for the great article, as the coverage here continues to be second to none.
  • AnnihilatorX - Friday, February 24, 2012 - link

    "It has been claimed that algorithms to minimize write amplification will follow Moore's Law"

    That's not really possible due to information theory: you can only compress data to reduce writes by so much (entropy sets a floor). The improvement will look more like an exponential decay than the exponential growth of Moore's Law.
  • Shadowmaster625 - Friday, February 24, 2012 - link

    I estimate somewhere around $80 billion has been invested in the NAND flash market, cumulatively. Despite this enormous capital investment, I am surprised prices are still so high. You'd think with this type of economy of scale, it wouldn't cost so much to produce 1TB of flash. I wonder how much energy it takes to produce 1TB of flash...
  • MrSpadge - Friday, February 24, 2012 - link

    There's so much unused space in 2.5" SSDs, let alone 3.5" drives for desktops. People wouldn't need to worry about TLC endurance, if the NAND was put into sockets and could easily be replaced. Or upgraded later on for higher capacities. And by the time you'd be doing this NAND prices will have fallen again. There'd need to be a standard for this, though...

  • mark53916 - Friday, February 24, 2012 - link

    As late as 2010, SLC typically had a 10 year retention time when new, falling to
    about 1 year as cells were reprogrammed and the device reached its
    indicated end of life. (The number of erase cycles was
    also higher than now, but had been decreasing for a few years prior.)

    I don't know the retention time of new SLC cells
    now, but MLCs either show no spec or the retention time spec for NEW
    cells is about 18 months.

    For the various reasons mentioned in the article and earlier comments,
    the effect of MLC is that speed is reduced, data retention time
    is reduced, and the fraction of time spent in long error corrections has increased.

    (MLCs are not suitable for long term backups, and spinning drives were never
    good for more than 5 years of expected powered-off life.)

    MLC just gets you 2 times as much storage for the same price, 18 months earlier.

    In the meantime, due to supply issues (capacity being used for MLC instead
    of SLC), SLC typically costs 8 times as much per GB as
    MLC, rather than less than 2 times as much. This amounts
    to about a 3 year delay in SLC reaching a given price level.

    (MLC also typically comes with implementation side effects
    [interleaved data layout, in particular] that mean data in
    pages that appear unchanged from outside the SSD gets rewritten
    because data was changed at the interleaved logical location,
    not because the SSD software decided that the data was getting
    "weak" and needed to be refreshed.)
  • Hulk - Friday, February 24, 2012 - link

    Timely, informative, well written, and just the right amount of technical detail.
    Really nice job.
  • valnar - Friday, February 24, 2012 - link

    I'm not sure who the target audience for TLC is. Is there really a group of people out there willing to sacrifice reliability and data integrity for price or capacity? I certainly wouldn't.

    It's bad enough that modern hard drives in the 2TB range have longevity problems. I don't want my SSD to be in the same boat, especially since the SSD tends to be the boot drive on most PCs.
  • foolsgambit11 - Friday, February 24, 2012 - link

    I'm assuming TLC is a subclass of MLC, and not actually distinct as it's laid out in this article. Before TLC came along, all MLC belonged to (what I'll call) the DLC subclass, yeah?

    SLC = Single level cell
    MLC = Multi level cell
    a. DLC = Dual level cell
    b. TLC = Triple level cell
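Two bits of arithmetic from the thread above — the capacity-to-cell-count direction themossie suggests, and the SandForce over-provisioning math Kristian quotes — can be sketched as follows. The numbers are the ones cited in the comments; treat this as a back-of-the-envelope illustration.

```python
def cells_for_capacity(capacity_bits, bits_per_cell):
    """Fixed capacity -> required cell (transistor) count:
    capacity / bits-per-cell, so TLC needs roughly a third
    of the cells SLC does for the same capacity."""
    return capacity_bits // bits_per_cell

def overprovisioning(raw_gb, user_gb):
    """Spare-area fraction: share of raw NAND hidden from the user
    and reserved for the controller."""
    return (raw_gb - user_gb) / raw_gb

GIB_BITS = 8 * 2**30  # bits in one GiB
for bits, label in [(1, "SLC"), (2, "MLC"), (3, "TLC")]:
    print(label, cells_for_capacity(GIB_BITS, bits), "cells per GiB")

raw = 12 * 8  # twelve 8GB packages in a 90GB SandForce drive
print("over-provisioning:", overprovisioning(raw, 90))  # the "~7%" quoted
```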
