TurboWrite: MLC Performance on a TLC Drive

All NAND trends toward lower performance as we move to smaller process geometries. Clever architectural tricks are what keep overall SSD performance increasing each generation, but Crucial's M500 shows that this isn't always possible. Historically, whenever a level of the memory hierarchy got too slow, the industry would more or less agree to insert another level above it to help hide latency. The problem is exacerbated once you start talking about TLC NAND. Samsung's mitigation is to dedicate a small portion of each TLC NAND die as an SLC write buffer, a feature it calls TurboWrite. Initial writes hit the TurboWrite buffer at very low latency and are then quickly written back to the rest of the TLC NAND array.
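The write path described above can be sketched as a toy model. This is purely illustrative: the latency figures below are made-up placeholders for the SLC-vs-TLC program-time gap, not Samsung's actual numbers.

```python
# Toy model of an SLC write buffer in front of a TLC array.
# Latencies are illustrative placeholders, not Samsung's real figures.

SLC_PROGRAM_US = 400    # assumed SLC page-program latency (microseconds)
TLC_PROGRAM_US = 1500   # assumed TLC page-program latency (microseconds)

def write_latency_us(buffer_free_pages: int) -> int:
    """Initial writes land in the fast SLC buffer while it has room;
    once it is full, writes stream directly to the slower TLC array."""
    return SLC_PROGRAM_US if buffer_free_pages > 0 else TLC_PROGRAM_US
```

The point of the two-tier arrangement is that bursty client workloads rarely fill the buffer, so most writes see something close to the SLC latency.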

Since the amount of spare area available on the EVO varies depending on capacity, the TurboWrite buffer size varies with capacity as well. The smallest buffer is around 3GB, while the largest is 12GB on the 1TB EVO:

Samsung SSD 840 EVO TurboWrite Buffer Size vs. Capacity

  Capacity:                120GB  250GB  500GB  750GB  1TB
  TurboWrite Buffer Size:  3GB    3GB    6GB    9GB    12GB
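The table maps directly to a lookup, which is handy if you want to reason about how much data a given EVO can absorb at full speed:

```python
# TurboWrite buffer size per EVO capacity, taken from the table above.
TURBOWRITE_BUFFER_GB = {120: 3, 250: 3, 500: 6, 750: 9, 1000: 12}

def buffer_size_gb(capacity_gb: int) -> int:
    """Return the TurboWrite buffer size (GB) for a given drive capacity (GB)."""
    return TURBOWRITE_BUFFER_GB[capacity_gb]
```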

I spent some time poking at the TurboWrite buffer and it pretty much works the way you'd expect it to. Initial writes hit the buffer first, and as long as they don't exceed the size of the buffer, the performance you get is quite good. If your writes stop before exceeding the buffer size, the buffer writes itself out to the TLC NAND array. You need a little bit of idle time for this copy to happen, but it tends to go quickly since it's just a sequential move of data internally (we're talking 15 - 30 seconds). Even before the TurboWrite buffer is completely emptied, you can stream new writes into it. It all works surprisingly well. For most light use cases I can see TurboWrite being a great way to deliver an MLC-like experience on a TLC drive.
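The fill-then-flush behavior described above can be captured in a minimal sketch. Everything here is an assumption for illustration: the class, its method names, and the drain rate are hypothetical, not Samsung's internals.

```python
class TurboWriteModel:
    """Minimal sketch of TurboWrite's fill-then-flush behavior.
    All numbers and names are illustrative assumptions."""

    def __init__(self, buffer_gb: float):
        self.capacity = buffer_gb  # SLC buffer size, e.g. 3.0 for the 120GB EVO
        self.used = 0.0            # GB currently held in the SLC buffer

    def write(self, gb: float) -> str:
        """New writes land in the SLC buffer while it has room;
        any overflow streams directly to the slower TLC array."""
        to_buffer = min(gb, self.capacity - self.used)
        self.used += to_buffer
        return "fast (SLC buffer)" if to_buffer == gb else "slow (TLC direct)"

    def idle(self, seconds: float, flush_gb_per_s: float = 0.3) -> None:
        """During idle time the buffer drains sequentially to TLC
        (assumed drain rate; the article observed 15-30s for a full flush)."""
        self.used = max(0.0, self.used - seconds * flush_gb_per_s)
```

With a 3GB buffer, a 2GB burst completes at buffer speed, a second 2GB burst spills into the TLC array, and ~30 seconds of idle time empties the buffer again.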

TurboWrite's impact is best felt on the lower capacity drives that don't have as many NAND die to stripe requests across (thus further hiding long program latencies). The chart below shows sequential write performance vs. time for all of the EVO capacities. The sharp drop in performance on each curve is when the TurboWrite buffer is exceeded and sequential writes start streaming to the TLC NAND array instead:

On the 120GB drive the delta between TurboWrite and standard performance is huge. On the larger drives the drop isn't as big, and the TurboWrite buffer is also larger; the combination of the two is why the impact isn't felt as much on those drives. It's this TurboWrite buffer that gives the EVO its improvement in max sequential write speed over last year's vanilla SSD 840.

137 Comments

  • verjic - Thursday, February 13, 2014 - link

    I'm talking about the 120GB version.
  • verjic - Thursday, February 13, 2014 - link

    Also, what are Write/Read IOMeter Bootup and Write/Read IOMeter IOMix, and what do their speeds mean? Thank you.
  • AhDah - Thursday, May 15, 2014 - link

    The TRIM validation graph shows a tremendous performance drop after a few gigs of writes; even after a TRIM pass, the write speed is only 150MB/s.
    Does this mean that once the drive is 75%-85% full, the write speed will always be slow?

    I'm tempted to get the Crucial M550 because of this downfall.
  • njwhite2 - Wednesday, October 15, 2014 - link

    Kudos to Anand Lal Shimpi! This is one of the finest reviews I have ever read! No jargon. No unexplained acronyms. Quantitative testing of compared items instead of reviewer bias. Explanation of why the measured criteria are important to the end user! Just fabulous! I read dozens of reviews each week, so I'm surprised I had not stumbled upon Anandtech before. I'm (for sure) going to check out their smartphone reviews. Most of those on other sites are written by Apple fans or Android fans and really don't tell the potential purchaser what they need to know to make the best choice for them.
  • IT_Architect - Thursday, October 22, 2015 - link

    I would be interested in how reliable they are. I ask because, back when Intel's SLC technology was just under two years old and there was no MLC or TLC, I needed the speed to load a database from scratch six times an hour during periods of incredible traffic. I was getting requests at a rate of 66 per second per server, each of which required many reads of the database. I couldn't swap databases without breaking sessions, and mirror/unmirror did not work well. I would have had to pay a ton to duplicate a redundant array in SSDs. Then I asked the data center how many of these drives they had out there. They (SoftLayer) queried and came back with 700+. Then I asked how many they'd had go bad. They checked their records and it was none, not so much as a DOA. I reasoned from that that I would be just as likely to have a chassis or disk controller go bad. None of them have any moving parts, and the drives are low power. Those were enterprise drives, of course, because that's all there was at the time.

    In 2011 I bought a Dell M6600. Dell was shipping them with the Micron SSD. I was concerned about the lifespan, since I do a lot of reading and writing with it and work constantly with virtual machines while prototyping, and VM files are huge. It calculated out to four years. While researching, I came across a situation where Dell had gotten "cold feet" about OEMing them due to lifespan; Micron/Intel demonstrated 10x the rated lifespan, which convinced Dell. There was plenty of other trouble with consumer-level SSDs at the time, which gave the technology a bad name. The Micron/Intel drive was one of the very few solid citizens. I went with it, although I didn't buy my M6600 with it because Dell had such a premium on them. I had two problems with the drive, which by the way is still in service today. The first was that the drive just stopped doing anything one day. I called Micron and it turned out to be a bug in the firmware; if I had had two drives arrayed, it would have stopped both at the same time. I upgraded the firmware and never had that problem again. The second time, I was troubleshooting the laptop and taking the battery in and out, and the computer would no longer boot. I again called Micron. It was by design: they said to disconnect the power, pull the battery, and wait one hour. I did, and it has worked perfectly since. Again, if I had had an array, it would have stopped both drives at the same time.

    Today, the market is much more mature and the technology no longer has a bad name. A redundant array is no substitute for a backup anyway; it brings business continuity and speed. Are we just as likely, or more so, to have a motherboard go out? We don't have redundant motherboards without having another entire computer. Unlike power supplies and CPUs, SSDs are low-current devices. I'm considering the possibility that we may be at the point, even for consumer-level drives, where redundant arrays of SSDs are just plain silly.
  • Gothmoth - Sunday, January 8, 2017 - link

    In real life my RAPID test showed no benefits AT ALL!!

    All it does is make low-level benchmarks look better.
    You should test with real applications. RAPID is a useless feature.
  • jeyjey - Friday, June 7, 2019 - link

    I have one of these drives. I need to find a small part that has burned out; I need to replace it to try to recover the data inside. Please help.
