TurboWrite: MLC Performance on a TLC Drive

All NAND trends toward lower performance as we move down to smaller process geometries. Clever architectural tricks are what keep overall SSD performance increasing each generation, but as Crucial's M500 shows, that isn't always possible. Historically, whenever a level of the memory hierarchy became too slow, the industry would more or less agree to insert another level above it to help hide latency. The problem is exacerbated once you start talking about TLC NAND. Samsung's mitigation is to dedicate a small portion of each TLC NAND die as an SLC write buffer, a feature called TurboWrite. Initial writes hit the TurboWrite buffer at very low latency and are later written back to the rest of the TLC NAND array.

Since the amount of spare area available on the EVO varies depending on capacity, TurboWrite buffer size varies with capacity. The smallest size is around 3GB while the largest is 12GB on the 1TB EVO:

Samsung SSD 840 EVO TurboWrite Buffer Size vs. Capacity

Capacity                 120GB   250GB   500GB   750GB   1TB
TurboWrite Buffer Size     3GB     3GB     6GB     9GB   12GB

I spent some time poking at the TurboWrite buffer and it works pretty much the way you'd expect. Initial writes hit the buffer first, and as long as they don't exceed its size, performance is quite good. Once your writes stop, the buffer flushes itself out to the TLC NAND array. You need a little bit of idle time for this copy to happen, but it tends to go quickly since it's just a sequential internal move of data (a matter of 15 - 30 seconds). Even before the TurboWrite buffer is completely emptied, you can stream new writes into it. It all works surprisingly well. For most light use cases I can see TurboWrite being a great way to deliver more of an MLC experience on a TLC drive.
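The fill-and-drain behavior described above can be captured in a toy model. This is a sketch of the mechanism only; the class name, buffer size, and all speeds are illustrative assumptions, not Samsung's actual firmware parameters:

```python
# Toy model of an SLC write buffer sitting in front of a TLC NAND array.
# All sizes and speeds below are illustrative assumptions, not EVO values.

class TurboWriteModel:
    def __init__(self, buffer_gb=3.0, slc_mbps=500.0, tlc_mbps=150.0):
        self.buffer_gb = buffer_gb  # SLC buffer capacity (GB)
        self.slc_mbps = slc_mbps    # write speed into the SLC buffer (MB/s)
        self.tlc_mbps = tlc_mbps    # write speed straight to TLC (MB/s)
        self.used_gb = 0.0          # data currently parked in the buffer

    def write(self, size_gb):
        """Return seconds to absorb a write burst of size_gb."""
        fast = min(size_gb, self.buffer_gb - self.used_gb)  # lands in SLC
        slow = size_gb - fast                               # spills to TLC
        self.used_gb += fast
        return fast * 1024 / self.slc_mbps + slow * 1024 / self.tlc_mbps

    def idle(self, seconds):
        """Background flush: drain the buffer into TLC during idle time."""
        drained = seconds * self.tlc_mbps / 1024
        self.used_gb = max(0.0, self.used_gb - drained)

m = TurboWriteModel()
burst = m.write(2.0)   # a 2GB burst fits entirely in the 3GB buffer
m.idle(20)             # ~20s of idle time fully drains the buffer
burst2 = m.write(2.0)  # so a second burst again lands in SLC at full speed
```

With enough idle time between bursts, every burst completes at the SLC rate, which is exactly the light-workload behavior the drive is banking on.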

TurboWrite's impact is best felt on the lower capacity drives that don't have as many NAND die to stripe requests across (thus further hiding long program latencies). The chart below shows sequential write performance vs. time for all of the EVO capacities. The sharp drop in performance on each curve is when the TurboWrite buffer is exceeded and sequential writes start streaming to the TLC NAND array instead:

On the 120GB drive the delta between TurboWrite and standard performance is huge. On the larger drives the drop isn't as big, and the TurboWrite buffer is also larger; the combination of the two is why the impact isn't felt as much on those drives. It's this TurboWrite buffer that gives the EVO its improvement in max sequential write speed over last year's vanilla SSD 840.
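The size of that drop can be approximated with a simple weighted model: for a transfer larger than the buffer, the first few GB go at the SLC rate and the remainder at the TLC rate. The speeds used below are hypothetical placeholders, not measured EVO numbers:

```python
def effective_write_mbps(transfer_gb, buffer_gb, slc_mbps, tlc_mbps):
    """Average sequential write speed when a transfer may overflow the SLC buffer."""
    fast_gb = min(transfer_gb, buffer_gb)   # portion absorbed at SLC speed
    slow_gb = transfer_gb - fast_gb         # portion written at TLC speed
    seconds = fast_gb * 1024 / slc_mbps + slow_gb * 1024 / tlc_mbps
    return transfer_gb * 1024 / seconds

# Hypothetical 120GB EVO (3GB buffer) vs. 1TB EVO (12GB buffer) on a 16GB copy:
small = effective_write_mbps(16, 3, 410, 140)
large = effective_write_mbps(16, 12, 520, 420)
```

The model shows why the small drive suffers twice: a smaller buffer covers less of the transfer, and the speed gap between its buffered and unbuffered rates is wider.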

137 Comments

  • MrCommunistGen - Thursday, July 25, 2013 - link

    YES! I've been excitedly waiting for this review since the announcement!
  • Byte - Thursday, July 25, 2013 - link

    Writes for the 120GB are still quite slow.
  • chizow - Thursday, July 25, 2013 - link

That's nearly universal though for all the entry-level capacity SSDs on the market. It's similar to RAID 0: when you can write to symmetrical NAND packages, you see a significant increase in write speeds.
  • OUT FOX EM - Monday, July 29, 2013 - link

    Speaking of RAID 0, if you'll notice, all the drives of 250GB and higher perform around the same. You are MUCH better off getting 4x250GB drives instead of the 1TB. With most models the cost will actually be about the same, but the speed of the RAID will be 4x faster as well while maintaining the same capacity.

    Of course there are other drawbacks like space inside your PC and amount of available SATA ports on your motherboard, for instance, but if those aren't a factor, buying multiple SSD's is a much better option in terms of performance. I don't see many reviews mention this fact.
  • Jorgisven - Thursday, August 1, 2013 - link

Much better in terms of performance, but I wouldn't recommend RAID 0 for 4 SSDs. RAID 6 is likely a better option, as it is fault tolerant without losing too much space. It's a bit of a personal decision, but the RAID concepts hold true whether it's SSD or not. Additionally, 4x250GB is likely a good percentage more expensive than the already expensive 1TB SSD.
  • Democrab - Thursday, August 15, 2013 - link

I'm not sure about you, but I'm only storing replaceable data on my SSDs. There are game saves, but they're automatically put on Google Drive too, so I get backups easily. It's easy to set something like that up and then just get the benefits of RAID 0, although I'd be using a RAID card since the chipset would likely bottleneck it.
  • yut345 - Thursday, December 12, 2013 - link

    I agree. Due to the volatile nature of SSDs, and the fact that if they go down your data can't really be recovered like it could be on mechanical drive, I do not plan to store anything on the drive that I don't also back up somewhere else.
  • m00dawg - Friday, August 23, 2013 - link

    With only 4 drives, a RAID10 would be much preferable. 1/2 the available space (same as a 4 drive RAID6 in this case), but without the need to calculate parity, worry (as much) about partitioning alignment, and you can still handle up to 2 drive failures (though only if they are on different stripes).
  • fallaha56 - Friday, September 19, 2014 - link

    sorry but disagree this will defeat the point unless you're on a top-end raid controller -and then you get no TRIM

    when there's no moving parts reliability becomes much less of an issue, esp for an OS drive with cloud and local backup like most of us high-end users do
  • Stas - Tuesday, September 24, 2013 - link

    That's what I did for the recent laser data processing builds. 4x250GB 840s and a 1TB HDD for nightly backup. Only data is stored on the array. Speeds are up to 1600MB/sec. Needless to say, the client is very happy :)
