Intel & Micron Announce First 20nm MLC NAND Flash for Use in SSDs
by Anand Lal Shimpi on April 14, 2011 11:49 AM EST
We just started testing SSDs based on IMFT 25nm NAND Flash, and look at what Intel/Micron just announced: the first 8GB MLC NAND device built on a 20nm process. This is obviously an announcement of pre-production silicon; it'll take IMFT until the second half of this year (at least) to start shipping production-quality 20nm NAND.
IMFT 20nm 8GB NAND, 118mm2
At 50nm IMFT NAND was good for 10,000 program/erase cycles (rated, not actual). The move to 34nm dropped that to 3,000 - 5,000 program/erase cycles, a value that was maintained with the move to 25nm. I asked Micron how long NAND will last at 20nm and was told that p/e cycles are comparable to where 25nm was at a similar point in its development cycle. Micron expects quality to ramp over time and ultimately reach a similar point to existing 25nm NAND, which is good news for controller manufacturers: any ECC/NAND redundancy efforts they have already implemented should support the new 20nm product.
An 8GB 2-bit-per-cell MLC NAND device built at 20nm has a die area of 118mm2, down from 167mm2 at 25nm. A single 8GB NAND device wasn’t built at 34nm.
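To put the shrink in perspective, the two die areas above work out to a bit over a 40% improvement in bits per unit area. A quick back-of-the-envelope sketch of that arithmetic, using only the figures quoted above (Python purely for illustration):

```python
# Density comparison for the 8GB (64Gbit) MLC die, using only the die areas
# quoted above. Wafer costs are not given, so this looks at GB per square
# millimetre rather than dollars.
DIE_GB = 8

area_25nm_mm2 = 167
area_20nm_mm2 = 118

density_25nm = DIE_GB / area_25nm_mm2   # ~0.048 GB/mm^2
density_20nm = DIE_GB / area_20nm_mm2   # ~0.068 GB/mm^2

improvement = density_20nm / density_25nm - 1
print(f"25nm: {density_25nm:.3f} GB/mm^2")
print(f"20nm: {density_20nm:.3f} GB/mm^2")
print(f"Density improvement: {improvement:.0%}")   # roughly +42%
```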
IMFT is on a 15 - 18 month process cadence, meaning this transition was of course planned for in advance. The first 20nm NAND is being manufactured at the IMFT Fab in Lehi, Utah, which is currently producing 25nm NAND. Some equipment upgrades are necessary to support 20nm. IMFT will also transition its fabs in Manassas, VA and Singapore to 20nm at a later point.
For consumers there's an obvious win. We need smaller transistor geometries to reduce the cost of NAND, which ultimately reduces the cost of SSDs. The days of 50% annual price reductions are over, however; expect a more conservative 20 - 30% drop in price for SSDs that use 20nm NAND over 25nm NAND.
38 Comments
Mr Perfect - Thursday, April 14, 2011 - link
Every once in a while you see some new tech pop up that has the potential to replace NAND, but you never hear the OEMs talk about them. Just how far can they scale flash?
Metaluna - Thursday, April 14, 2011 - link
Yes it reminds me of the 1980's (dating myself) when everyone was talking about how silicon was going to be replaced by gallium arsenide. But then silicon fab technology just kept getting better and better and GaAs never really took off except for certain boutique applications (like super-high speed telecom equipment). I wonder if we will see that with Flash.
FunBunny2 - Thursday, April 14, 2011 - link
I haven't looked in a while (not even to confirm the accuracy of this comment), but Unity Semiconductor has been working on a "replacement" for NAND for a few years.

The issue with NAND SSD's is that they are consumable. An HDD, if it gets past infant mortality, can keep running for a decade; all of mine have, never had a head crash or other catastrophic failure. Getting data off a drive can be a pain, what with registry entries and install keys and such. Knowing that you'll have to do that in a few years, no matter what, is reason enough to not use SSD.
For enterprise drives, it's a bit more manageable, in that replacement is part of the normal course of business. However, I've read, from credible sites, that MLC/prosumer parts don't often make it to 3 years. It isn't time, but total data writes that matter, and a RDBMS drive can get there really fast. Changing to BCNF schemas makes the arithmetic work better, but getting BCNF to be "normal" will take a lot of education and a while.
Hector2 - Thursday, April 14, 2011 - link
"The issue with NAND SSD's is that they are consumable"Really, how many keep a HDD for 10 years ? If you never upgarde your PC, then there would be some but how many people today run on IDE ? Even if I don't upgrade my PC, and I do, I end up upgrading to a denser drive every couple of years. Sure, I still have a 1 GB HDD somewhere in storage that would still work if my motherboard supported IDE but what's the point ? I'll still be upgrading my storage system as they get denser and faster and I expect that will mean SSD more & more in the future. I just upgraded my PC and guess what, it's a mix of SSD & Sata HDDs. No big surprise there.
haplo602 - Friday, April 15, 2011 - link
This all depends on your use cases.

I am still running my systems from old 40GB IDE drives. You don't need to reinstall Linux every time something new comes out :-) Basically I keep my user data on separate filesystems and reinstall/upgrade the OS as I need to.
When you upgrade to denser drives, do you always migrate all your data? How long does it take? How much of that data do you really reuse? I have CDs full of movies I have never seen, same with games, mp3s, etc.
If you just reinstall Windows/apps/games on a new drive, that's fine. It's easy to do on new drives. However, keeping large photo/movie/music collections and migrating them all the time is not comfortable.
The ultimate problem with SSDs is that they have a projected lifetime that is very short. In magnetic HDDs, the MTBF was limited by the mechanical parts. With SSDs it's the actual storage medium. That's a big difference.
L. - Friday, April 15, 2011 - link
It's a big difference, but one that can be POSITIVE.

That is the main part nobody talks about.
If (and I wouldn't say it's the case) SSD's lowest utilization limit is related to NAND write cycles, then in most cases an SSD's lifetime is much much much much much much much (think I got enough of these) longer than that of an HDD.
How long (what actions) does it take for an HDD to show bad blocks?
I personally don't hold the answer, but I can tell you ALL HDDs I've seen with bad blocks had not written 5,000 times their total capacity before that happened.
The reality is that there is a fair chance SSD's are much more reliable than HDD's in non-extremely-intensive-write cases, such as mostly everything.
L. - Friday, April 15, 2011 - link
The issue with NAND SSD's is a non-issue.

Wake up bro, you cannot kill a NAND device due to write cycle limits in any scenario but the most extreme and unlikely.
What you have read is not about modern controllers, it's about first-gen crap that did not handle wear leveling and all that.
Today, with MLC NAND drives that have wear leveling and are managed to never exceed 90% full, even a RDBMS drive will NOT fail in 3 years (as I said, except if you do something really unusual).
You have to understand that with wear leveling, you can more or less say that those 5,000 cycles approximate to 5,000 * SSD size of total data written (or less with SandForce-type stuff - and yes, there is a whole f*ton to compress in a database), which except in abnormal cases is not going to happen at all. (Over three years this still means roughly 5 * full capacity * 90% written each and every day...)
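To make that arithmetic concrete, here is a minimal sketch of the write budget it implies, assuming perfect wear leveling. The 5,000-cycle figure, the 90% fill limit and the 3-year window come from this thread; the 100GB drive size is a hypothetical example:

```python
# Minimal sketch of the wear-leveled write budget described above.
# Assumes perfect wear leveling; drive size is a hypothetical example.
drive_size_gb = 100          # hypothetical drive
usable_fraction = 0.90       # kept no more than 90% full, as suggested above
pe_cycles = 5000

total_write_budget_gb = pe_cycles * drive_size_gb   # ~500 TB of host writes
days = 3 * 365                                      # the 3-year window above

writes_per_day_gb = total_write_budget_gb / days
ratio = writes_per_day_gb / (drive_size_gb * usable_fraction)
print(f"Total budget: {total_write_budget_gb / 1000:.0f} TB")
print(f"Writes needed per day to exhaust it in 3 years: "
      f"{writes_per_day_gb:.0f} GB (~{ratio:.1f}x the usable capacity, every day)")
```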
For the enterprise, there will be no move until Google or some other big player makes a public statement on how they saved a f*ton of money on drives and power by using MLC NAND as storage.
It is not a problem of technology but a problem of conservative mindsets combined with feedback from early adoption of dev-technology (SSDs without wear leveling and all that were really just prototypes).
B3an - Friday, April 15, 2011 - link
Anand has also said for 25nm NAND (and now 20nm NAND too, as it has the same program/erase cycles):

"Let's quickly do the math again. If you have a 100GB drive and you write 7GB per day you'll program every MLC NAND cell in the drive in just over 14 days—that's one cycle out of three thousand. Outside of SandForce controllers, most SSD controllers will have a write amplification factor greater than 1 in any workload. If we assume a constant write amplification of 20x (and perfect wear leveling) we're still talking about a useful NAND lifespan of almost 6 years. In practice, write amplification for desktop workloads is significantly lower than that."
That's an absolute worst-case scenario that is pretty much impossible in practice. In reality a modern SSD will last many, many times longer even with heavy daily use.
Or put another way:
"my drive has 203.4GB of space to spread out those 7GB of writes per day. That means in roughly 29 days my SSD, if it wear levels perfectly, I will have written to every single available flash block on my drive. Tack on another 7 days if the drive is smart enough to move my static data around to wear level even more properly. So we're at approximately 36 days before I exhaust one out of my ~10,000 write cycles. Multiply that out and it would take 360,000 days of using my machine for all of my NAND to wear out; once again, assuming perfect wear leveling. That's 986 years. Your NAND flash cells will actually lose their charge well before that time comes, in about 10 years."
That's for 50nm NAND. For 20/25nm you cut that in half, which is 493 years of use.
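The quoted math is easy to reproduce. A minimal sketch using only the figures from the passage above (100GB drive, 7GB of writes per day, 3,000 p/e cycles, and the deliberately pessimistic 20x write amplification from the same quote):

```python
# Minimal sketch of the lifespan estimate quoted above. All inputs come from
# the quoted passage; 20x write amplification is its worst-case assumption.
drive_gb = 100
writes_per_day_gb = 7
pe_cycles = 3000
write_amplification = 20     # desktop workloads are typically far lower

days_per_cycle = drive_gb / (writes_per_day_gb * write_amplification)   # ~0.71 days
lifespan_days = days_per_cycle * pe_cycles
print(f"Estimated lifespan: {lifespan_days / 365:.1f} years")           # ~5.9 years
```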
L. - Friday, April 15, 2011 - link
Also, SSDs = high performance, and as far as I know, even if you needed to replace your SSDs every year, you would still get better performance/price than you can ever dream of with HDDs (that is, for those rare people who actually make full use of an SSD's performance characteristics).
hackztor - Thursday, April 14, 2011 - link
3,000 write cycles is now the norm, but thanks to wear-leveling algorithms most people won't reach that for a very long time. There are ways to make your SSD last longer too: have your torrents download to a HDD, extract ISO files on the HDD, etc.