Endurance

Samsung isn't quoting any specific TB-written figure for how long it expects the EVO to last, although the drive does come with a 3-year warranty. Samsung doesn't explicitly expose total NAND writes in its SMART details, but we do get a wear level indicator (SMART attribute 177). From what I can tell, the wear level indicator starts at 100 and decreases linearly down to 1. At 1 the drive will have exceeded all of its rated p/e cycles, but in reality the drive's total endurance can significantly exceed that rating.
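
If you want to keep an eye on wear yourself, a minimal sketch along these lines will pull the normalized value of attribute 177 out of smartctl (part of smartmontools). The /dev/sda device path and the column layout I parse here are assumptions and may need adjusting for your setup:

    # Minimal sketch: read the normalized wear level indicator (SMART attribute 177).
    # Assumes smartmontools is installed and the drive sits at /dev/sda; run with
    # sufficient privileges for smartctl to query the drive.
    import subprocess

    def wear_level(device="/dev/sda"):
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            fields = line.split()
            # Attribute rows start with the attribute ID; the fourth column is the
            # normalized VALUE (100 when new, counting down toward 1).
            if fields and fields[0] == "177":
                return int(fields[3])
        return None

    print("Wear level indicator:", wear_level())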

Using the wear level indicator on his 840 sample last year, Kristian calculated around 1000 p/e cycles, or roughly 242TB of writes, but we've seen reports of much more than that (e.g. this XtremeSystems user saw around 432TB of writes to a 120GB SSD 840 before it died). I used Kristian's method of mapping sequential writes to the wear level indicator to determine the rated number of p/e cycles on my 120GB EVO sample:

Samsung SSD 840 EVO Endurance Estimation
                            | Samsung SSD EVO 120GB
Total Sequential Writes     | 4338.98 GiB
Wear Level Counter Decrease | -3 (raw value = 35)
Estimated Total Writes      | 144632.81 GiB
Estimated Rated P/E Cycles  | 1129 cycles
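
The arithmetic behind that estimate is simple enough to reproduce, and the short sketch below walks through it using the figures from the table; small differences in the output versus the table come down to rounding:

    # Sketch of the wear-level extrapolation above; expect small rounding
    # differences versus the published table.
    sequential_writes_gib = 4338.98   # host writes issued during the test
    wear_level_drop       = 3         # the wear level indicator fell by 3 points
    nand_capacity_gib     = 128       # raw NAND on the 120GB EVO

    # If 3 of the 100 wear-level points cost ~4,339 GiB of writes, all 100 points cost:
    estimated_total_writes = sequential_writes_gib / wear_level_drop * 100
    # Spread those writes across the NAND to get the implied rated p/e cycle count:
    estimated_pe_cycles = estimated_total_writes / nand_capacity_gib

    print(f"Estimated total writes: {estimated_total_writes:.2f} GiB")   # ~144,632 GiB
    print(f"Estimated rated p/e cycles: {estimated_pe_cycles:.0f}")      # ~1130, in line with the ~1129 cycles above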

Using the 1129-cycle estimate (an improvement over last year's 840 sample), I put together the table below to put any endurance fears to rest. I even upped the total NAND writes per day to 50 GiB just to be a bit more aggressive than the typically quoted 10 - 30 GiB for consumer workloads:

Samsung SSD 840 EVO Endurance vs. Capacity
                                              | 120GB    | 250GB    | 500GB    | 750GB    | 1TB
NAND Capacity                                 | 128 GiB  | 256 GiB  | 512 GiB  | 768 GiB  | 1024 GiB
NAND Writes per Day                           | 50 GiB   | 50 GiB   | 50 GiB   | 50 GiB   | 50 GiB
Days per P/E Cycle                            | 2.56     | 5.12     | 10.24    | 15.36    | 20.48
Estimated P/E Cycles                          | 1129     | 1129     | 1129     | 1129     | 1129
Estimated Lifespan in Days                    | 2890     | 5780     | 11560    | 17341    | 23121
Estimated Lifespan in Years                   | 7.91     | 15.83    | 31.67    | 47.51    | 63.34
Estimated Lifespan in Years @ 100 GiB per Day | 3.95     | 7.91     | 15.83    | 23.75    | 31.67

Endurance scales linearly with NAND capacity, and the worst case scenario at 50 GiB of writes per day is just under 8 years of constant write endurance. Keep in mind that this assumes a write amplification of 1; if you're doing 50 GiB of 4KB random writes you'll blow through this a lot sooner. For a client system, however, you're probably looking at something much lower than 50 GiB per day of total writes to NAND, random IO included.
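
To make the write amplification caveat concrete, here's a small sketch that reproduces the lifespan math from the table and adds a write amplification knob. The 1129-cycle figure and the raw NAND capacities come from the table above; the 365-day year and the WA=3 comparison point are my own assumptions, so treat the output as a rough estimate:

    # Rough sketch of the lifespan math above, with write amplification factored in.
    # 1129 cycles and the NAND capacities come from the table; the 365-day year and
    # the WA=3 comparison point are assumptions for illustration.
    PE_CYCLES = 1129

    def lifespan_years(nand_gib, host_writes_per_day_gib=50, write_amplification=1.0):
        nand_writes_per_day = host_writes_per_day_gib * write_amplification
        days_per_cycle = nand_gib / nand_writes_per_day
        return days_per_cycle * PE_CYCLES / 365

    for capacity_gib in (128, 256, 512, 768, 1024):
        print(f"{capacity_gib:5d} GiB NAND: "
              f"{lifespan_years(capacity_gib):5.1f} years @ 50 GiB/day, WA=1; "
              f"{lifespan_years(capacity_gib, write_amplification=3.0):5.1f} years @ WA=3")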

I also threw in a row of lifespan estimates at 100 GiB of writes per day. It's only in this configuration that we see the 120GB drive drop below 4 years of endurance, and again, that's based on a conservative p/e estimate. Even with 100 GiB of NAND writes per day, once you get beyond the 250GB EVO we're back into absolutely ridiculous endurance estimates.

Keep in mind that all of this is based on 1129 p/e cycles, which is likely less than half of the practical p/e cycle limit of Samsung's 19nm TLC NAND. Go ahead and double those numbers and you're probably looking at reality. Endurance isn't a concern for client systems using the 840 EVO.

Comments

  • ervinshiznit - Thursday, July 25, 2013 - link

    Typo? On the Turbowrite page you say "For most light use cases I can see TurboWrite being a great way to deliver more of an MLC experience but on a TLC drive."
    It should be "deliver more of an SLC experience but on a TLC drive."
  • ciri - Sunday, July 28, 2013 - link

    SLC>MLC>TLC
  • Guspaz - Thursday, July 25, 2013 - link

    The fact that RAPID sees any performance improvement at all illustrates to me a failure of the operating system's disk caching subsystem. That's all that RAPID really is, after all, a replacement for the Windows disk cache.

    I'd be curious to see the performance results of RAPID compared to the disk caching subsystems on other platforms, such as Linux and ZFS (which even on Linux has its own cache called the "ARC"). Are the large improvements because Windows disk caching is particularly bad, or because RAPID is a better implementation than anybody else's?
  • themelon - Thursday, July 25, 2013 - link

    Windows is absolutely horrible at filesystem caching and I don't think it does any sort of block caching. It seems to use more of a FIFO algorithm that has no sequential write bypass no matter what you do. ZFS and the two block device caches that were recently integrated into the Linux kernel, bcache and dm-cache, use more of an LRU method. All of them have at least basic sequential bypass detection as well. bcache in particular is tuneable to your load in almost all aspects of performance. Of course these only do block-side caching and currently have no filesystem-specific knowledge.

    There is some interesting work going on to track hot spots that will eventually allow for preemptive cache warming and/or hot relocation. Right now it is BTRFS specific but it is being integrated below the filesystem layer so any filesystem will eventually be able to take advantage of it.

    ZFS on Linux is a waste of time in my opinion. ZFS's L2ARC and SLOG are great but limited by some of what I feel are architectural flaws in zfs itself. I used to love zfs but the Linux kernel block stack has caught up to it in features and still offers all of the flexibility that it always has.
  • aicom - Friday, July 26, 2013 - link

    Windows' cache system is better than you give it credit for. It does support sequential bypass (see FILE_FLAG_SEQUENTIAL_SCAN flag). It works with filesystem drivers with the Cc* APIs in the kernel. It also supports caching files over a network, even with other clients modifying the files. It does standard read-ahead and write-behind and is supplemented by an adaptive prefetcher (SuperFetch).

    The reason we're seeing such huge gains is because the programs being tested explicitly ask NOT to be cached. The whole point is to test the drive, so they pass FILE_FLAG_NO_BUFFERING to disable caching on the files being accessed.
  • MrSpadge - Saturday, July 27, 2013 - link

    Excellent post!
  • Timur Born - Sunday, July 28, 2013 - link

    The question still arises: why is the Anand Storage Bench affected so beneficially by RAPID?! Is it because ASB also asks for the Windows cache to be bypassed, because the Windows cache flushes parts of its pages every second, or because RAPID communicates with the drive (firmware) at a more fundamental level that allows further optimizations?
  • watersb - Friday, July 26, 2013 - link

    Excellent points. I stick with ZFS because I trust it (after many hardware failures but no data loss) and because it is cross-platform.

    Mac HFS does "hot relocation", I believe. And NTFS has always tried to keep hot files in the middle of the disk in order to reduce hard disk seek times. So maybe I don't understand what is meant by hot relocation.
  • piroroadkill - Thursday, July 25, 2013 - link

    I agree. I'm pretty sure Windows' own disk caching is terrible. It's pretty poor even on the server side. They really need to work on that shit.
  • tincmulc - Thursday, July 25, 2013 - link

    How is RAPID any better than SuperCache or FancyCache? Not only do they do the same thing, but they can also be configured to use more RAM or to use OS-invisible memory (32-bit OS with more than 3GB of RAM), and they work for any drive, even HDDs.
