In the past week, quite a few media outlets have posted articles claiming that SSDs will lose data in a matter of days if left unpowered. While there is some (read: very, very little) truth to that, the claim has created a lot of chatter and confusion in forums, and even I have received a few questions about its validity. Rather than responding to individual emails and tweets from people who want to know more, I thought I would explain the matter in depth to everyone at once.

First of all, the presentation everyone is talking about can be found here. Contrary to what some sites reported, it's not a presentation from Seagate -- it's an official JEDEC presentation from Alvin Cox, the Chairman of the JC-64.8 subcommittee (i.e. the SSD committee) at the time, meaning that it's supposed to act as an objective source of information for all SSD vendors. It is true that Mr. Cox works as a Senior Staff Engineer at Seagate, but that is irrelevant because the whole purpose of JEDEC is to bring manufacturers together to develop open standards. The committee members and chairmen all work for some company, and currently the JC-64.8 subcommittee is led by Frank Chu from HGST.

Before we go into the actual data retention topic, let's outline the situation by focusing on the conditions that must be met when the manufacturer is determining the endurance rating for an SSD. First, the drive must maintain its capacity, meaning that it cannot retire so many blocks that the user capacity would decrease. Second, the drive must meet the required UBER (uncorrectable data errors per number of bits read) spec as well as stay within the functional failure requirement. Finally, the drive must retain data without power for a set amount of time to meet the JEDEC spec. Note that all of these conditions must be met once the maximum amount of data has been written, i.e. if a drive is rated at 100TB, it must meet these specs after 100TB of writes.
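To make the UBER figure concrete, here is a minimal sketch of the arithmetic behind it. The error count and read volume below are hypothetical, and the 10^-15 threshold in the comment is the commonly cited client-class JEDEC requirement rather than a number taken from this article:

```python
# Minimal sketch of the UBER arithmetic (hypothetical numbers).
# UBER = uncorrectable data errors / total bits read.

TB = 10**12  # decimal terabyte, in bytes

def uber(uncorrectable_errors: int, bytes_read: int) -> float:
    """Uncorrectable Bit Error Rate: errors per bit read."""
    return uncorrectable_errors / (bytes_read * 8)

# Example: one uncorrectable error over 200TB of host reads.
print(f"{uber(1, 200 * TB):.2e}")  # 6.25e-16, which would meet a 1e-15 client-class spec
```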

The table above summarizes the requirements for both client and enterprise SSDs. As we can see, the data retention requirement for a client SSD is one year at 30°C, which is above typical room temperature. The retention does depend on the temperature, so let's take a closer look at how the retention scales with temperature.

EDIT: Note that the data in the table above is based on material sent by Intel, not Seagate.

At a 40°C active and 30°C power-off temperature, a client SSD is rated to retain data for 52 weeks, i.e. one year. As the table shows, the data retention is proportional to the active temperature and inversely proportional to the power-off temperature, meaning that a higher power-off temperature will result in decreased retention. In a worst-case scenario where the active temperature is only 25-30°C and the power-off temperature is 55°C, the data retention can be as short as one week, which is what many sites have touted with their "data loss in a matter of days" claims. Yes, it can technically happen, but not in a typical client environment.

In reality, a power-off temperature of 55°C is not realistic at all for a client user because the drive will most likely be stored somewhere in the house (closet, basement, garage, etc.) at room temperature, which tends to be below 30°C. The active temperature, on the other hand, is usually at least 40°C because the drive and other components in the system generate heat that pushes the temperature above room temperature.

As always, there is a technical explanation for the data retention scaling. The conductivity of a semiconductor scales with temperature, which is bad news for NAND because when it's unpowered the electrons are not supposed to move, as that would change the charge of the cell. In other words, as the temperature increases, the electrons escape the floating gate faster, which ultimately changes the voltage state of the cell and renders the data unreadable (i.e. the drive no longer retains data).
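To put a rough number on that, the sketch below models the temperature scaling with a standard Arrhenius acceleration factor. Both the model choice and the 1.1 eV activation energy are my assumptions (values commonly quoted for charge loss in floating gate NAND), not figures pulled from the presentation, but they land in the same ballpark as the JEDEC table:

```python
import math

# Hedged sketch, not taken from the presentation: retention scaling with
# temperature modeled as an Arrhenius acceleration factor. The 1.1 eV
# activation energy is an assumed, commonly quoted value for charge loss.

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_baseline_c: float, t_stress_c: float,
                        activation_energy_ev: float = 1.1) -> float:
    """How many times faster charge leaks at t_stress_c than at t_baseline_c."""
    t_baseline_k = t_baseline_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp(activation_energy_ev / K_BOLTZMANN_EV *
                    (1 / t_baseline_k - 1 / t_stress_k))

# Raising the power-off temperature from 30°C to 55°C:
af = acceleration_factor(30, 55)
print(f"acceleration factor: {af:.0f}x")  # roughly 25x faster charge loss
print(f"retention: {52 / af:.1f} weeks")  # one rated year shrinks to ~2 weeks
```

The exact figures in the JEDEC table differ slightly because the active temperature also factors in, but the direction and rough magnitude of the effect are the same.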

For active use, temperature has the opposite effect. Because a higher temperature makes the silicon more conductive, the current flow during program/erase operations is higher and causes less stress on the tunnel oxide, which improves the endurance of the cell, since endurance is practically limited by the tunnel oxide's ability to hold the electrons inside the floating gate.

All in all, there is absolutely zero reason to worry about SSD data retention in a typical client environment. Remember that the figures presented here are for a drive that has already passed its endurance rating, so for new drives the data retention is considerably higher, typically over ten years for MLC NAND based SSDs. If you buy a drive today and stash it away, the drive itself will become totally obsolete quicker than it will lose its data. Besides, given the cost of SSDs, it's not cost-efficient to use them for cold storage anyway, so if you're looking to archive data I would recommend going with hard drives for cost reasons alone.

Comments

  • abufrejoval - Monday, June 8, 2015 - link

    Those parts of the story are well understood. What I'd like to know is what you have to do to keep the drives refreshed: Do I have to actively overwrite the data? Is the drive somehow capable of recognizing that it should do an internal check and refresh cycle?

    The most valuable data I store is typically photographs and videos from the family, and of course all kinds of documents and financial records and scans. Most of that stuff is meant to last 10 years minimum; photos etc. future generations should decide what to do with.

    Basically I'm planning for a technology refresh every five years or so, but otherwise the data is just there to stay, never to be read unless two copies have failed.

    The 1st copy is an active RAID6, with a standby RAID5 for the 2nd (really meant to be ZFS, but that's another story). Beyond that I use removable drives for a 3rd copy. Those started as 3.5" magneto-optical drives, but they became impractical in terms of capacity; neither drives nor media are being sold any more, and SCSI is becoming hard to maintain.

    Additional 4th and 5th copies are given to members of the family for geographic protection.

    But magnetic drives are, well, mechanical, and I don't trust mechanics all that much: these old clunkers are just likely to get dropped in transit or fail once you really, really need them.

    In a way I feel much safer using SSDs for storage of really valuable data, well, except that they wear out when you use them and they lose their charge when you don't.

    Wear levelling code is very sophisticated these days, but somehow I'm pretty sure none of the firmware writers have gone to great lengths testing and validating long periods of disconnected SSD use. At what these things originally cost, who'd ever think about putting them in storage for any extended period of time, right?

    Except that this good old Postville drive, which still signaled 99% remaining life after some years of constant (but light) use, has long since been replaced by faster and bigger SATA 3 cousins and now lives out its life as a baby pics archive.

    It's filled to the proper 80% and now I guess I should plug it into the server for a verify pass every couple of months, right?

    How do I tell it to do that? How does it know that it should check and compensate for bit rot?
    Does it or can it have any notion of how much time has passed since the last power-down?

    Shouldn't it be time-stamping at least erase blocks?

    I guess the only way you'll ever get those flash cells stuffed with electrons again would be through overwriting? What about metadata? Chances are it doesn't get any better treatment than normal data (unless you have one of these modern TLC/SLC hybrids) and thus overwriting may not be nearly as good as a full erase and refill?

    Somehow I shudder at erasing first what I want to protect so I guess I'll have to think up some kind of rotation scheme... and some nice new specialized archiving SSDs companies can now market: How would anyone even test them?

    I got these big RAIDs to make things easier and now I believe I've just opened a huge can of worms called media management, something we used to do with tapes in huge libraries...
  • ProDigit - Friday, August 28, 2015 - link

    USB sticks use the same kind of chips.
    One of the first no-name USB sticks I ever bought, back in 2000, had a single chip with 128MB of memory on it.
    Files on that stick still read, after 15 years!
    Another drive, a 2GB Transcend I purchased in 2005, failed and needed a reformat.
    After quickformatting the stick, Recuva was able to retrieve most of the data stored on the USB stick.
    So I fear more for the controller and the FAT/TOC table than for the actual data, since one error in the FAT table can result in the drive being unreadable.
  • valnar - Friday, June 16, 2017 - link

    I know this is an old article, but my questions weren't answered in several pages of comments... plus, maybe new knowledge has come to light in the last 2 years? :)

    1) Say I have an SSD in an old XP (heck, even Win98) gaming box that only gets turned on once in a blue moon. What is required to keep the data "refreshed", so to speak?

    2) Should I turn it on once a year so the bits don't go bad? And if so, then for HOW LONG should I leave it on to make that happen? What exactly gets refreshed by simply sending electricity through it?

    3) And if #2 is correct, what does that say about static data, such as my \Windows installation where files don't get moved or changed? Do those get refreshed in the same way? Does turning it on even matter?

    4) So that begs the question, what about the static data on our current rigs that we use today? If we aren't copying, changing and refreshing cells, will that static data go bad faster than the moving data?

    I can't find the answers to these anywhere. All my PCs have SSDs now, both young and old.
  • ND4695 - Saturday, March 17, 2018 - link

    You're right! A lot of questions still need to be answered, and it's 2018, so new data on SSD performance and reliability should be out by now.

    I only use external hard drives for archiving, which only happens once or twice a year. So I would like to know whether upgrading my hard drives to SSDs would be beneficial or whether I should still wait. The next best step in archiving would be to switch to M-Disc or LTO tape, but M-Discs still don't have sufficient capacity and LTO tape is cheap (though the drive itself is really expensive).

    I hope they answer some of these questions soon.
