We just started testing SSDs based on IMFT 25nm NAND Flash, and look at what Intel/Micron just announced: the first 8GB MLC NAND device built on a 20nm process. This is obviously an announcement of pre-production silicon; it'll take IMFT until the second half of this year (at least) to start shipping production-quality 20nm NAND.


IMFT 20nm 8GB NAND, 118mm²

 
At 50nm, IMFT NAND was good for 10,000 program/erase cycles (rated, not actual). The move to 34nm dropped that to 3,000 - 5,000 program/erase cycles, a value that was maintained with the move to 25nm. I asked Micron how long NAND will last at 20nm and was told that p/e cycles are comparable to where 25nm was at a similar point in its development cycle. Micron expects quality to ramp over time and ultimately reach a similar point to existing 25nm NAND, which is good news for controller manufacturers: any ECC/NAND redundancy measures they have already implemented should support the new 20nm product.
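
To put those ratings in perspective, endurance is easier to reason about as total host writes than as raw cycle counts. Below is a minimal sketch of the usual back-of-the-envelope math; the wear-leveling efficiency and write amplification defaults are illustrative assumptions, not Intel/Micron figures:

```python
# Rough total-bytes-written (TBW) estimate for an SSD. The formula is the
# usual back-of-the-envelope one; the default wear-leveling efficiency and
# write amplification values are assumptions, not vendor data.

def estimated_tbw_gb(capacity_gb, pe_cycles, wear_leveling_eff=0.9,
                     write_amplification=1.5):
    """Approximate host writes (GB) before the rated P/E budget is exhausted."""
    return capacity_gb * pe_cycles * wear_leveling_eff / write_amplification

# A hypothetical 128GB drive at the two ratings discussed above:
for cycles in (3000, 5000):
    print(f"{cycles} P/E cycles -> ~{estimated_tbw_gb(128, cycles):,.0f}GB of host writes")
```

Even at the low end, that works out to over 200TB of write budget for a hypothetical 128GB consumer drive under these assumptions, which is why a lower cycle rating matters less in practice than it sounds.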
 
An 8GB 2-bit-per-cell MLC NAND device built at 20nm has a die area of 118mm², down from 167mm² at 25nm. A single 8GB NAND device wasn't built at 34nm.
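
Die area translates almost directly into cost. As a rough sketch of what the shrink buys, here is the standard dies-per-wafer approximation applied to both die sizes; the 300mm wafer and perfect-yield assumptions are mine, not from the announcement:

```python
import math

# Back-of-the-envelope dies-per-wafer comparison for the 8GB MLC die.
# Assumes a 300mm wafer and perfect yield; real economics also depend on
# yield and wafer cost, neither of which is public.

WAFER_DIAMETER_MM = 300

def gross_dies(die_area_mm2):
    """Standard approximation: wafer area / die area, minus an edge-loss term."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    edge_loss = math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

dies_25nm = gross_dies(167)  # 8GB MLC die at 25nm
dies_20nm = gross_dies(118)  # 8GB MLC die at 20nm

print(f"25nm: ~{dies_25nm} dies/wafer, 20nm: ~{dies_20nm} dies/wafer")
print(f"Implied NAND cost-per-GB reduction: ~{1 - dies_25nm / dies_20nm:.0%}")
```

Under those simplifying assumptions the shrink yields roughly 45% more dies per wafer, or about a 31% lower NAND cost per gigabyte, which lines up with the price expectations below.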
 

 
IMFT is on a 15 - 18 month process cadence, meaning this transition was planned well in advance. The first 20nm NAND is being manufactured at the IMFT fab in Lehi, Utah, which currently produces 25nm NAND; some equipment upgrades are necessary to support 20nm. IMFT will transition its fabs in Manassas, VA and Singapore to 20nm at a later point.
 
For consumers there's an obvious win. We need smaller transistor geometries to reduce the cost of NAND, which ultimately reduces the cost of SSDs. The days of 50% annual price reductions are over, however; expect a more conservative 20 - 30% price drop for SSDs that use 20nm NAND over 25nm NAND.
Comments

  • softdrinkviking - Friday, April 15, 2011

    I don't think it's that cut and dried.

    Read Anand's article on Micron's ClearNAND that he published last December, and pay special attention to the section called "The Drawbacks of NAND Scaling."

    To paraphrase, NAND actually gets weaker with every program/erase cycle, and the smaller the cells get, the less physical material there is to deteriorate before they burn out.
    So yes, as the process scales down, the NAND gets weaker by orders of magnitude.
    He goes on to say 1. that NAND cycles have so far been very conservatively rated, so many companies are just stretching the numbers out, even though the 34nm stuff really is capable of more cycles, and 2. that the only reasonable way smaller NAND can compensate for the write endurance problem is smarter controllers that write with better economy.

    I haven't come across anything about the effects of advancements in the immersion lithography process, but I'm assuming the big focus at the fabs is getting as much useful NAND out of each wafer as possible.
    I'm not sure how much, if at all, process advancements (using the same materials) could increase the p/e cycles.
    Maybe somebody else here can answer that question?
  • Out of Box Experience - Friday, April 15, 2011

    SSD makers might want to start rethinking how they use capacitors: maintain data reliability for long-term storage instead of just riding out short-term power outages.

    We need SSDs that can hold our data reliably for 100 years instead of 1,

    or DVDs that can maintain read-only data for 100 years instead of 10,

    or hard drives that mirror our SSDs in the background for long-term storage instead of SSDs that simply cache our hard drives.

    THINK LONG TERM STORAGE

    not short term profit
  • tno - Thursday, April 14, 2011

    That's not too far from NC.
  • Mr Perfect - Thursday, April 14, 2011

    Every once in a while you see some new tech pop up that has the potential to replace NAND, but you never hear the OEMs talk about it. Just how far can they scale flash?
  • Metaluna - Thursday, April 14, 2011

    Yes, it reminds me of the 1980s (dating myself), when everyone was talking about how silicon was going to be replaced by gallium arsenide. But then silicon fab technology just kept getting better and better, and GaAs never really took off except for certain boutique applications (like super-high-speed telecom equipment). I wonder if we will see the same thing with Flash.
  • FunBunny2 - Thursday, April 14, 2011

    I haven't looked in a while (not even to confirm the accuracy of this comment), but Unity Semiconductor has been working on a "replacement" for NAND for a few years.

    The issue with NAND SSDs is that they are consumable. An HDD, if it gets past infant mortality, can keep running for a decade; all of mine have, and I've never had a head crash or other catastrophic failure. Getting data off a drive can be a pain, what with registry entries and install keys and such. Knowing that you'll have to do that in a few years, no matter what, is reason enough not to use an SSD.

    For enterprise drives, it's a bit more manageable, in that replacement is part of the normal course of business. However, I've read, from credible sites, that MLC/prosumer parts don't often make it to 3 years. It isn't time but total data writes that matter, and an RDBMS drive can get there really fast. Changing to BCNF schemas makes the arithmetic work better, but getting BCNF to be "normal" will take a lot of education and a while.
  • Hector2 - Thursday, April 14, 2011

    "The issue with NAND SSD's is that they are consumable"

    Really, how many people keep an HDD for 10 years? If you never upgrade your PC there would be some, but how many people today run on IDE? Even if I don't upgrade my PC (and I do), I end up upgrading to a denser drive every couple of years. Sure, I still have a 1GB HDD somewhere in storage that would still work if my motherboard supported IDE, but what's the point? I'll keep upgrading my storage as it gets denser and faster, and I expect that will mean SSDs more & more in the future. I just upgraded my PC and guess what: it's a mix of SSD & SATA HDDs. No big surprise there.
  • haplo602 - Friday, April 15, 2011

    This all depends on your use case.

    I am still running my systems from old 40GB IDE drives. You don't need to reinstall Linux every time something new comes out :-) Basically, I keep my user data on separate filesystems and reinstall/upgrade the OS as I need to.

    When you upgrade to denser drives, do you always migrate all your data? How long does it take? How much of that data do you really reuse? I have CDs full of movies I have never seen, same with games, mp3s, etc.

    If you just reinstall Windows/apps/games on a new drive, that's fine. It's easy to do on new drives. However, keeping large photo/movie/music collections and migrating them all the time is not comfortable.

    The ultimate problem with SSDs is that their projected lifetime is very short. In magnetic HDDs, the MTBF was limited by the mechanical parts. With SSDs it's the actual storage medium. That's a big difference.
  • L. - Friday, April 15, 2011

    It's a big difference, and one that can be POSITIVE.

    That is the part nobody talks about.

    If (and I wouldn't say it's always the case) an SSD's lifetime is limited by NAND write cycles, then in most cases an SSD's lifetime is much much much much much much much (think I got enough of these) longer than that of an HDD.

    How long (what workload) does it take for an HDD to show bad blocks?

    I personally don't hold the answer, but I can tell you all the HDDs I've seen with bad blocks had written less than 5,000 times their total capacity before that happened.

    The reality is that there is a fair chance SSDs are much more reliable than HDDs in non-extremely-write-intensive cases, which is to say almost everything.
  • L. - Friday, April 15, 2011

    The issue with NAND SSDs is a non-issue.
    Wake up, bro: you cannot kill a NAND device through write cycle limits in any scenario but the most extreme and unlikely.

    What you have read is not about modern controllers; it's about first-gen crap that didn't handle wear leveling and all that.

    Today, with MLC NAND drives that have wear leveling and are managed to never exceed 90% full, even an RDBMS drive will NOT fail in 3 years (as I said, except if you do something really unusual).

    You have to understand that with wear leveling, those 5,000 cycles approximate to 5,000 x the SSD's size in total data written (and SandForce-style compression stretches that further - and yes, there is a whole f*ton to compress in a database), which, except in abnormal cases, is not going to happen at all. Burning through it in three years would still mean writing about 5 x (90% of full capacity) each and every day; the sketch below makes this concrete.

    For the enterprise, there will be no move until Google or some other big player makes a public statement on how they saved a f*ton of money on drives and power by using MLC NAND as storage.

    It is not a problem of technology but a problem of conservative mindsets combined with feedback from early adoption of dev-technology (SSDs without wear leveling and all that were really just prototypes).
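
As a concrete version of the wear-leveling arithmetic in the comment above, here is a minimal sketch; the capacity, endurance rating, and workload numbers are hypothetical assumptions, not measurements:

```python
# Minimal sketch of the wear-leveling lifetime argument in the comment above.
# The capacity, endurance rating, and daily workload are hypothetical.

def years_to_wear_out(capacity_gb, pe_cycles, daily_writes_gb,
                      usable_fraction=0.9):
    """Years until the rated P/E budget is exhausted, assuming ideal wear leveling."""
    total_budget_gb = capacity_gb * usable_fraction * pe_cycles
    return total_budget_gb / daily_writes_gb / 365

# A hypothetical 256GB drive rated at 5,000 cycles under a heavy 100GB/day load:
print(f"~{years_to_wear_out(256, 5000, 100):.0f} years")  # roughly 32 years
```

Exhausting that budget in three years would take more than four full-drive writes every single day, in line with the ~5x-capacity-per-day figure in the comment.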
