I'm continually amazed by Samsung's rise to power in the SSD space. If you compare their market-dominating products today to what we were reviewing from Samsung just a few years ago, you'd assume they came from a different company. The past three generations of Samsung consumer SSDs have been good, but if you focus exclusively on the past two generations (830/840), they've been really good.

Last year Samsung bifurcated its consumer SSD lineup by introducing the 840 Pro in addition to the vanilla 840. We'd seen other companies explore a similar strategy, but usually by playing with synchronous vs. asynchronous NAND or sometimes just using different NAND suppliers between lines. Samsung also used NAND to differentiate the two, but went even more extreme. The non-Pro version of the 840 was the first large-scale consumer SSD made with 3-bit-per-cell MLC NAND, more commonly known as TLC (triple-level-cell) NAND. Companies had toyed with the idea of going TLC well before the 840's release but were usually stopped by either economic or endurance realities. The 840 changed all of that. Although it didn't come with tremendous cost savings initially, over time the Samsung SSD 840 proved to be one of the better values on the market - you just had to get over the worry of wearing out TLC NAND.

Despite having a far more limited lifespan compared to its 2bpc MLC brethren, the TLC NAND Samsung used in its 840 turned out to be quite reliable. Even our own aggressive estimates pegged typical client write endurance on the 840 at more than 11 years for the 128GB model.
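For a sense of where an estimate like that comes from, here's a back-of-the-envelope sketch. Every input is an illustrative assumption rather than a published Samsung figure: a conservative 1,000 P/E cycle rating for the TLC, roughly 3x client write amplification, and 10GB of host writes per day.

```python
# Rough TLC endurance math for a 128GB drive. All inputs are assumptions
# for illustration, not manufacturer specifications.
capacity_gb = 128
pe_cycles = 1000              # assumed rated program/erase cycles for TLC
write_amp = 3.0               # assumed client write amplification
host_gb_per_day = 10          # assumed typical client write workload

nand_writes_gb = capacity_gb * pe_cycles        # total NAND program budget
host_writes_gb = nand_writes_gb / write_amp     # usable host writes
years = host_writes_gb / (host_gb_per_day * 365)
print(f"Estimated endurance: {years:.1f} years")  # ~11.7 years
```

Even with these deliberately pessimistic inputs, the 128GB model lands comfortably past the decade mark, which is consistent with the figure above.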

Samsung 19nm TLC NAND

We haven't seen Samsung's love of TLC embraced by other manufacturers. The most significant contrast actually comes from Micron, another NAND supplier turned SSD manufacturer, and its M500. Relying on 2bpc MLC NAND, the M500 gets its cost down by using a combination of large page/block sizes (to reduce overall die area) as well as aggressively embracing the latest NAND manufacturing processes (in this case 20nm). That's always been the Intel/Micron way: spend all of your time getting to the next process node quickly, and drive down cost that way rather than going TLC. The benefit of the TLC approach is the potential for even more cost reduction, but the downside is that it usually takes a while for production TLC to reach high enough endurance to be viable for use in SSDs. The question of which is quicker is pretty simple to answer. If we look at the 25nm and 20nm generations from IMFT, the manufacturer was able to get down to new process nodes quicker than Samsung could ship TLC in volume.

The discussion then shifts to whether or not TLC makes sense at that point, or if you'd be better off just transitioning to the next process node on MLC. Samsung clearly believes its mainstream TLC/high-end MLC split makes a lot of sense, and seeing how the 840 turned out last time, I tend to agree. It's not the only solution, but given how supply constrained everyone is on the latest NAND processes this generation, any good solution to get more die per wafer is going to be well received. Samsung doesn't disclose the die areas of its NAND, so we unfortunately can't tell just how much more area efficient its TLC approach is compared to IMFT's area-efficient 20nm MLC NAND with its 128Gbit die and 16KB pages.

As with any other business in the tech industry, it turns out that a regular, predictable release cadence is a great way to build market share. Here we are, around nine months after the release of the Samsung SSD 840, and we have its first successor: the 840 EVO.

As its name implies, Samsung's SSD 840 EVO is an evolution of last year's SSD 840. The EVO still uses 3-bit-per-cell TLC NAND, but it moves to a smaller process geometry. Samsung calls its latest NAND process 10nm-class (or 1x-nm), which can refer to feature sizes anywhere from 10nm to 19nm; we've also heard it referred to as 19nm TLC. The new 19nm TLC is available in capacities of up to 128Gbit per die, like IMFT's latest 20nm MLC process. Unlike IMFT's 128Gbit offering, Samsung remains on an 8KB page size even with this latest generation of NAND. The number of pages per block, at 256, is also more like IMFT's previous 64Gbit 20nm MLC:

IMFT vs. Samsung NAND Comparison

| | IMFT 20nm MLC | IMFT 20nm MLC | Samsung 19nm TLC | Samsung 21nm TLC | Samsung 21nm MLC |
|---|---|---|---|---|---|
| Bits per Cell | 2 | 2 | 3 | 3 | 2 |
| Single Die Max Capacity | 64Gbit | 128Gbit | 128Gbit | 128Gbit | 64Gbit |
| Page Size | 8KB | 16KB | 8KB | 8KB | 8KB |
| Pages per Block | 256 | 512 | 256 | 192 | 128 |
| Read Page (max) | 100 µs | 115 µs | ? | ? | ? |
| Program Page (typical) | 1300 µs | 1600 µs | ? | ? | ? |
| Erase Block (typical) | 3 ms | 3.8 ms | ? | ? | ? |
| Die Size | 118mm² | 202mm² | ? | ? | ? |
| Gbit per mm² | 0.542 | 0.634 | ? | ? | ? |
| Rated Program/Erase Cycles | 3000 | 3000 | 1000 - 3000 | 1000 - 3000 | 3000 (?) |
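The IMFT density figures in the table can be reproduced directly from the published die sizes and capacities; Samsung doesn't disclose its die areas, hence the question marks in its columns.

```python
# Reproducing the Gbit-per-mm^2 figures in the table from published
# IMFT numbers (capacity in Gbit, die size in mm^2).
imft_dies = {
    "IMFT 20nm 64Gbit MLC":  (64, 118),
    "IMFT 20nm 128Gbit MLC": (128, 202),
}
density = {name: gbit / mm2 for name, (gbit, mm2) in imft_dies.items()}
for name, d in density.items():
    print(f"{name}: {d:.3f} Gbit/mm^2")  # 0.542 and 0.634, matching the table
```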

The high level specs, at least those Samsung gives us, point to an unwillingness to sacrifice latency any further in order to shrink die area. The decision makes sense since TLC is already expected to have 50% longer program times than 2bpc MLC. IMFT, on the other hand, has some latency to give up with its MLC NAND, which is why we see the move to 2x larger page and block sizes with its 128Gbit NAND die. Ultimately that's going to be the most interesting comparison: how Samsung's SSD 840 EVO with its 19nm TLC NAND stacks up against Crucial's M500, the first implementation of IMFT's 128Gbit 20nm MLC NAND.
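The geometry difference translates directly into erase-block size, which is the granularity garbage collection has to work at. A quick sketch using the page figures from the table above:

```python
# Erase-block sizes implied by the page geometry in the table. Larger
# blocks reduce per-die management overhead (helping die area) but make
# programming and garbage collection coarser.
geometries = {
    "Samsung 19nm TLC":      (8 * 1024, 256),   # (page bytes, pages per block)
    "IMFT 20nm 128Gbit MLC": (16 * 1024, 512),
}
block_bytes = {name: page * pages for name, (page, pages) in geometries.items()}
for name, b in block_bytes.items():
    print(f"{name}: {b // (1024 * 1024)}MB erase block")  # 2MB vs. 8MB
```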

Modern Features

Along with the NAND update, the EVO also sees a pretty significant controller upgrade. The underlying architecture hasn't changed: Samsung's MEX controller is still based on the same triple-core Cortex R4 design as the previous generation MDX controller. The cores now run at 400MHz compared to 300MHz previously, which helps enable some of the EVO's higher performance. The MEX controller also sees an update to SATA 3.1, something we first saw with SanDisk's Extreme II. SATA 3.1 brings a number of features, one of the most interesting being support for queued TRIM commands.

The EVO boasts hardware AES-256 encryption, and has its PSID printed on each drive label like Crucial's M500. In the event that you set and lose the drive's encryption key, you can use the PSID to unlock the drive (although all data will be lost). At launch the EVO doesn't support TCG Opal and thus Microsoft's eDrive spec, however Samsung tells us that a firmware update scheduled for September will enable both of these things - again bringing the EVO to encryption feature parity with Crucial's M500.

As one of the world's prominent DRAM makers, it's no surprise to find a ton of DRAM used to cache the firmware and indirection table on the EVO. DRAM size scales with capacity, although Samsung tosses a bit more than is necessary at a couple capacity points (e.g. 250GB).

Samsung SSD 840 EVO DRAM

| | 120GB | 250GB | 500GB | 750GB | 1TB |
|---|---|---|---|---|---|
| DRAM Size | 256MB LPDDR2-1066 | 512MB LPDDR2-1066 | 512MB LPDDR2-1066 | 1GB LPDDR2-1066 | 1GB LPDDR2-1066 |
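Those DRAM sizes are consistent with a flat indirection table using one 4-byte entry per 4KB logical page, a common controller design. Samsung hasn't confirmed the details of its mapping scheme, so treat this purely as an illustration:

```python
# Rough indirection-table sizing, assuming (not confirmed by Samsung)
# a flat map with one 4-byte entry per 4KB logical page.
def table_size_mb(capacity_gb):
    entries = capacity_gb * 1024**3 // 4096   # one entry per 4KB page
    return entries * 4 / 1024**2              # 4 bytes per entry, in MB

for cap in (120, 250, 500, 750, 1000):
    print(f"{cap}GB drive: ~{table_size_mb(cap):.0f}MB of map data")
```

Under this assumption the map works out to roughly 1MB per GB of flash, which lines up with 1GB of DRAM on the 1TB model and explains the extra headroom at points like 250GB.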

The move to 19nm 128Gbit TLC NAND die paves the way for some very large drive capacities. Similar to Crucial's M500, the 840 EVO is offered in configurations of up to 1TB.

Samsung SSD 840 EVO Specifications

| | 120GB | 250GB | 500GB | 750GB | 1TB |
|---|---|---|---|---|---|
| Controller, Interface | Samsung MEX, SATA 3.1 | | | | |
| NAND | Samsung 19nm 3bpc TLC Toggle DDR 2.0 NAND | | | | |
| Form Factor | 2.5" 7mm | | | | |
| Max Sequential Read | | | | | |
| Max Sequential Write | | | | | |
| Max 4KB Random Read | | | | | |
| Max 4KB Random Write | | | | | |
| Encryption | AES-256 FDE, PSID printed on SSD label | | | | |
| Warranty | 3 years | | | | |

I'll get to dissecting the performance specs momentarily, but you'll notice some very high peak random and sequential performance out of these mainstream drives. The peak performance improvement over last year's 840 is beyond significant. The key word there, of course, is peak.


Samsung expects the 840 EVO to be available in the channel at the beginning of August. What we have in the table below are suggested MSRPs, which, as long as supply isn't limited, usually end up being higher than street prices:

SSD Pricing Comparison - 7/24/2013

| | 120/128GB | 240/250/256GB | 480/500/512GB | 750GB | 960GB/1TB |
|---|---|---|---|---|---|
| Crucial M500 | $120.99 | $193.56 | $387.27 | | $599.99 |
| Intel SSD 335 | | $219.99 | | | |
| Samsung SSD 840 | $98.44 | $168.77 | $328.77 | | |
| Samsung SSD 840 EVO | $109.99 | $189.99 | $369.99 | $529.99 | $649.99 |
| Samsung SSD 840 Pro | $133.49 | $230.95 | $458.77 | | |
| SanDisk Extreme II | $129.99 | $229.77 | $449.99 | | |
| SanDisk Ultra Plus | $96.85 | $174.29 | | | |
| OCZ Vertex 450 | $129.99 | $246.84 | | | |

Prices are a bit higher than the outgoing Samsung SSD 840, which makes sense since we're looking at the beginning of the cost curve of a new process node. Crucial's highly sought after $600 960GB M500 finally seems to be back in stock, just in time for the EVO to go head to head with it. Samsung is expecting roughly a $50 premium for the 1TB EVO over the Crucial solution, but over time I'd expect that gap to shrink to nothing (or swing in Samsung's favor). The EVO is considerably more affordable than Samsung's 840 Pro, and the higher capacity points are at particularly tempting prices.

Inside the Drives & Spare Area


View All Comments

  • TheinsanegamerN - Saturday, July 27, 2013 - link

    If you have a Mac, the Samsung is your best bet. TRIM can be enabled quite easily on a Mac if it is not done automatically, so you can use TRIM. As for RAPID, it replaces Windows' terrible I/O caching process; OS X does not have this problem, so you don't have to worry about that. Now, the SATA 2 interface will be a bottleneck, but it will still be much faster than a hard drive. I'd go for either this EVO drive or the 840 250GB.
  • wpapolis - Saturday, July 27, 2013 - link

    Yes, you reaffirmed what I was already thinking.

    Plus, when I upgrade this MacBook, I have the option to move the drive. Though, I have to say, performance is still pretty good, but each OS upgrade seems to make things a bit more sluggish.

    With 8GB RAM, and a current SSD, things should be good for a bit longer.

    Thanks for the feedback,

  • Grim0013 - Sunday, July 28, 2013 - link

    I wonder what, if anything, the impact of Turbo Write is on drive endurance. As in, does the SLC buffer have the effect of "shielding" the TLC from some amount of write amplification (WA)? More specifically, I was thinking that in the case of small random writes (high WA), many of them would be going to the SLC first; then, when the data is transferred to the TLC, I wonder if the buffering affords the controller the opportunity to write the data in such a way as to reduce WA on the TLC.

    In fact, I wonder if that is something that is done... if the controller is able to characterize certain types of files as being likely to be frequently modified, then just keep them in the SLC semi-permanently. Stuff like the page file and other OS stuff that is constantly modified... I'm not very well-versed on this stuff so I'm just guessing. It just seems like taking advantage of SLC's crazy P/E endurance in addition to its speed could really help make these things bulletproof.
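    The coalescing effect being described here can be sketched with a toy model. All the numbers and both scenarios are illustrative assumptions, not a description of how Samsung's controller actually behaves:

```python
# Toy model of SLC-buffered writes (hypothetical numbers): small random
# host writes land in an SLC buffer first, and the controller flushes
# them to TLC in whole-block sequential passes, cutting TLC-side WA.
page = 8 * 1024            # 8KB TLC page
block = 256 * page         # 2MB erase block
host_writes = 4096         # number of 4KB random writes from the host
io_size = 4 * 1024

# Without buffering: each 4KB write programs a full 8KB page
tlc_written_direct = host_writes * page
# With SLC buffering: writes coalesce and flush as whole blocks
data = host_writes * io_size
tlc_written_buffered = ((data + block - 1) // block) * block

print(tlc_written_direct / data)    # WA without the buffer: 2.0
print(tlc_written_buffered / data)  # WA with the buffer: 1.0
```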
  • shodanshok - Sunday, July 28, 2013 - link

    Yea, I was thinking the same thing. After all, SanDisk already did it on the Ultra Plus and Ultra II SSDs: they have a small pseudo-SLC zone used both for greater performance and for reducing WA.
  • shodanshok - Sunday, July 28, 2013 - link

    I am not so excited about RAPID: data integrity is a delicate thing, so I am not so happy to trust Samsung (or others) to replace the well-tested caching algorithm natively built into the OS.

    Anyway, Windows' write caching is not so quick because the OS, by default, flushes its in-memory cache each second. Moreover, it normally issues a barrier event to flush the disk's DRAM cache. This last thing can be disabled, but the flushing of the in-memory cache cannot be changed, as far as I know.

    Linux, on the other side, uses a much more aggressive caching policy: it issues an in-memory cache flush (pagecache) every 30 seconds, and it aggressively tries to coalesce multiple writes into a single transaction. These parameters are configurable via the /proc interface. Moreover, if you have a BBU or power-tolerant disk subsystem, you can even disable the barrier instruction normally issued to the disk's DRAM cache.
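    The knobs being referred to here are real Linux tunables under /proc/sys/vm; the values below are the stock kernel defaults being described (the 30-second expiry), shown as a sketch rather than as tuning advice:

```python
# The Linux writeback tunables mentioned above, with their kernel
# default values (centisecond units where noted). Writing them
# requires root; this sketch only renders the sysctl commands.
writeback_knobs = {
    "vm.dirty_expire_centisecs": 3000,    # dirty pages older than 30s get flushed
    "vm.dirty_writeback_centisecs": 500,  # flusher threads wake every 5s
    "vm.dirty_background_ratio": 10,      # % of RAM before background writeback
    "vm.dirty_ratio": 20,                 # % of RAM before writers block
}

def sysctl_commands(knobs):
    """Render the settings as sysctl invocations (to be run as root)."""
    return [f"sysctl -w {k}={v}" for k, v in knobs.items()]

for cmd in sysctl_commands(writeback_knobs):
    print(cmd)
```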
  • Timur Born - Sunday, July 28, 2013 - link

    My Windows 8 setup uses almost exactly 1GB of RAM for write caching, regardless of whether it's writing to a 5400rpm 2.5" HD, a 5400rpm 3.5" HD or a Crucial M4 256GB SSD. That's exactly the size of the RAPID cache. The "flush its cache each second" part becomes a problem when the source and destination are on the same drive, because once Windows starts writing, the disk queue starts to climb.

    But even then it should mostly be a problem for spinning HDs, which don't really like higher queue depths. Even more so when you copy multiple files via Windows Explorer, which reads and writes files concurrently even on spinning HDs.

    So I wonder if RAPID's only real advantage is its feature to coalesce multiple small writes into single big ones for durations longer than one second?!
  • Timur Born - Sunday, July 28, 2013 - link

    By the way, my personal experience is that CPU power saving features, as set up in both the default "Balanced" and the "High Performance" power profiles, have far more of an impact on SSD performance than caching stuff. I can up my M4's 4K random performance by 60% and more just by making CPU power savings less aggressive (or turning them off).
  • shodanshok - Monday, July 29, 2013 - link

    If I remember correctly, Windows uses at most 1/8 of total RAM size for write caching. How much RAM do you have?
  • Timur Born - Tuesday, July 30, 2013 - link

    8GB, so you may be correct. Or you may be mixing it up with the 1/8 of the dirty cache that is flushed by the Windows cache every second. Or both may be 1/8. ;-)
  • zzz777 - Monday, July 29, 2013 - link

    I'm interested in caching writes to a RAM disk first, then to storage. This reminds me of the concept of a write-back cache: for almost everyone the possibility of data corruption is so low that there's no reason not to enable it. Can this SSD's RAM caching write quickly enough that home users also don't have to worry about using it? Beyond that, I'm not a normal home user: I want to see benchmarks for virtualization. I want the quickest way to create, modify and test a VM before putting it on live hardware.
