NAND Lesson: Why Die Capacity Matters

SSDs are basically just huge RAID arrays of NAND. A single NAND die isn't very fast, but put a dozen or more of them in parallel and the performance adds up. Modern SSDs usually have between 8 and 64 NAND dies depending on the capacity, and the rule of "the more, the better" applies here, at least to a certain degree. (Controllers are usually designed for a certain number of NAND dies, so too many dies can negatively impact performance because the controller has more pages/blocks to track and process.) But die parallelism is just one part of the big picture; it all starts inside the die.

Meet the inside version of our Mr. NAND die. Each die is usually divided into two planes, which are further divided into blocks, which are in turn divided into pages. In the early NAND days there were no planes, just blocks and pages, but as die capacities increased, manufacturers had to find a way to get more performance out of a single die. The solution was to divide the die into two planes, which can be read from or written to (nearly) simultaneously. Without planes you could only read or program one page per die at a time, whereas two-plane reading/programming allows two pages to be read or programmed at the same time, as the sketch below illustrates.
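As a mental model only (the structure and field names below are hypothetical; the actual geometry and command interface are device-specific and largely undisclosed), the hierarchy and the two-plane pairing can be sketched like this:

```python
# Hypothetical sketch of the NAND address hierarchy: die -> plane -> block -> page.
# Names and the pairing rule are illustrative, not Micron's actual interface.
from dataclasses import dataclass

@dataclass(frozen=True)
class PageAddress:
    die: int
    plane: int   # two planes per die allow two concurrent page operations
    block: int
    page: int

# Two-plane programming pairs one page from each plane of the same die;
# multi-plane commands typically require matching block/page offsets.
pair = (PageAddress(die=0, plane=0, block=17, page=3),
        PageAddress(die=0, plane=1, block=17, page=3))
print(pair)
```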

The reason I said "nearly" is that programming the NAND involves more than just the program time itself. There is latency from all the command, address and data inputs. These are marginal compared to the program time, but with two-plane programming they take twice the time, since you still have to send all the necessary commands and addresses separately for both soon-to-be-programmed pages.

I did some rough calculations based on the data I have (though to be honest, it's probably not enough to make my calculations bulletproof) and it seems that two-plane programming carries a latency penalty of about 2% compared to two individual dies (i.e. it takes about 2% longer to program two pages with two-plane programming than with two separate dies). In other words, we can conclude that two-plane programming gives us roughly twice the throughput of one-plane programming.
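Here is a minimal sketch of that comparison, using the 1600µs/16KB program time that comes up below and my own ~2% overhead estimate (the overhead value is not a disclosed specification):

```python
# Two-plane programming vs. two independent dies, in round decimal units
# (1 MB = 1e6 bytes). The 2% overhead is my rough estimate, not a spec.
T_PROG_S = 1600e-6   # program time for one page
PAGE_B = 16e3        # 16KB page
OVERHEAD = 0.02      # command/address phase issued twice in two-plane mode

two_dies_s = T_PROG_S                  # two pages finish in one program time
two_plane_s = T_PROG_S * (1 + OVERHEAD)

for name, t in [("two dies", two_dies_s), ("two-plane", two_plane_s)]:
    mb_s = 2 * PAGE_B / t / 1e6
    print(f"{name}: 2 pages in {t * 1e6:.0f}us -> {mb_s:.1f} MB/s")
# two dies:  2 pages in 1600us -> 20.0 MB/s
# two-plane: 2 pages in 1632us -> 19.6 MB/s (about 2% lower, still ~2x one plane)
```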

"Okay," you're thinking, "that's fine and all, but what's the point of this? This isn't a new technology and has nothing to do with the M550!" Hold on, it'll make sense as you read further.

Case: M500

                      M550 128GB        M500 120GB
NAND Die Capacity     64Gbit (8GB)      128Gbit (16GB)
NAND Page Size        16KB              16KB
Sequential Write      350MB/s           130MB/s
4KB Random Write      75K IOPS          35K IOPS

The Crucial M500 was the first client SSD to utilize 128Gbit per die NAND. That allowed Crucial to go higher than 512GB without sacrificing performance, but it also meant a hit in performance at the smaller capacities. As mentioned many times before, the key to SSD performance is parallelism, and when the die capacity doubles, the parallelism is cut in half. For the 120/128GB model this meant only eight 128Gbit dies instead of the sixteen dies it would have had with 64Gbit NAND.

It takes 1600µs to write 16KB (one page) to Micron's 128Gbit NAND. Convert that to throughput and you get 10MB/s. Well, that's the simple version and not exactly accurate. With eight dies, the total write throughput would be only 80MB/s, yet the 120GB M500 is rated at 130MB/s. The big picture is more than just the program time: in reality you have to take into account the interface latency as well as the gains from two-plane programming and cache mode (the command, address and data latches are cached, so there is no need to wait for them between program operations).
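The naive arithmetic works out like this (a back-of-the-envelope sketch in decimal units, matching the round numbers above):

```python
# Naive NAND write throughput from program time alone (no two-plane, no caching).
T_PROG_S = 1600e-6   # 1600us to program one 16KB page (Micron 128Gbit NAND)
PAGE_B = 16e3        # 16KB page
DIES = 8             # 120GB M500: eight 128Gbit dies

per_die = PAGE_B / T_PROG_S / 1e6   # MB/s for a single die
total = per_die * DIES
print(f"per die: {per_die:.0f} MB/s, x{DIES} dies: {total:.0f} MB/s")
# per die: 10 MB/s, x8 dies: 80 MB/s
```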

Example of cache programming

As I described above, two-plane programming gives us roughly twice the throughput of one-plane programming. As a result, instead of writing one 16KB page in 1600µs, we are able to write two pages with 32KB of data in total. That doubles our throughput from 80MB/s to 160MB/s. There is some overhead from the commands, as the picture above shows, but thankfully today's interfaces are so fast that it's only on the order of a few percent, so in the real world the usable throughput should be around 155MB/s. The 120GB M500 manages around 140MB/s in sequential writes, so 155MB/s of NAND write throughput sounds reasonable since there is always some additional latency from channel and die switching. Program times are also averages that vary slightly from die to die, and it's possible that the set program times are actually slightly over 1600µs to make sure all dies meet the criteria.
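Extending the sketch above with two-plane programming and an assumed overhead (the 3% figure is my guess, not a disclosed number) reproduces that estimate:

```python
# Two-plane programming doubles the pages in flight per die; command/interface
# overhead shaves off a few percent. The overhead value is an assumption.
T_PROG_S = 1600e-6
PAGE_B = 16e3
DIES = 8
PLANES = 2
OVERHEAD = 0.03   # assumed combined command/address/interface cost

ideal = DIES * PLANES * PAGE_B / T_PROG_S / 1e6   # -> 160 MB/s
usable = ideal * (1 - OVERHEAD)                   # -> ~155 MB/s
print(f"ideal: {ideal:.0f} MB/s, usable: ~{usable:.0f} MB/s")
```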

Case: M550

While the M500 used solely 128Gbit NAND, Crucial is bringing back the 64Gbit die for the 128GB and 256GB M550s. The switch means twice the number of dies, and as we've now learned, that means twice the performance. This is actually Micron's second generation 64Gbit 20nm NAND with a 16KB page size, similar to their 128Gbit NAND. The larger page size is required for write throughput (about a 60% increase over an 8KB page), but it adds complexity to garbage collection and can increase write amplification if not implemented efficiently, which in turn lowers endurance.

Micron wouldn't disclose the program time for this part, but I'm thinking there is some improvement over the original 128Gbit part. As process nodes mature, you're usually able to squeeze a little more performance (and endurance) out of the same chip, and I'm thinking that is what's happening here. To get ~370MB/s out of the 128GB M550, the program time would have to be around 1300-1400µs to be in line with the performance. It's certainly possible that there's something else going on (better channel switching management, for instance), but it's clear that Crucial/Micron has been able to better optimize the NAND in the M550.
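Running the same rough model backwards shows where that 1300-1400µs estimate comes from (same assumptions and decimal units as the sketches above):

```python
# Back-solve the per-page program time the 128GB M550 would need to sustain
# ~370MB/s of NAND write throughput under the same rough two-plane model.
TARGET_B_S = 370e6   # ~370MB/s target
DIES = 16            # sixteen 64Gbit dies
PLANES = 2
PAGE_B = 16e3        # 16KB page

bytes_in_flight = DIES * PLANES * PAGE_B   # 512KB programmed per cycle
t_prog_us = bytes_in_flight / TARGET_B_S * 1e6
print(f"required program time: ~{t_prog_us:.0f}us")   # -> ~1384us
```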

The point here was to give an idea of where NAND performance comes from and why there is such a dramatic difference between the M550 and M500. Ultimately, the NAND performance characteristics are something the manufacturers won't disclose, so the figures here may not be accurate, but they should at least give a rough idea of what is happening at the low level.

Comments

  • hojnikb - Thursday, March 20, 2014 - link

    That's only taking flash endurance into account. That doesn't mean the controller won't crap out on you years sooner.
  • q.epsilon.p - Sunday, April 6, 2014 - link

    Dude, given that the M500 480GB and M550 512GB models are so different in price and so similar in performance, I would go for the 480GB.

    The M500 240GB and 120GB perform much slower, but they were always slower than the competition, so you had that information available before buying them. And considering it's write speed where they really suffered, I wouldn't have been much concerned.

    And tbh, you are not really going to notice the difference without benchmarks, and the price difference is big enough to make it worth it.
  • Homeles - Tuesday, March 18, 2014 - link

    The M500 being phased out doesn't make sense. Technically the M500 is more advanced with its 128Gb dies, despite its slower performance.
  • elerick - Tuesday, March 18, 2014 - link

    At first glance I tend to agree that this is a middle-of-the-pack SSD. However, I seriously consider the power loss protection to be worth its weight in gold. I have a Samsung 830, and these days I value reliability above "good enough" performance. Once PCIe SSDs reach consumer prices, I'll go that route.
  • Death666Angel - Friday, March 21, 2014 - link

    Can we boot off PCIe SSDs yet (consumer grade Windows)?
  • hojnikb - Tuesday, March 18, 2014 - link

    I really expected better consistency, but well... Maybe next time (or with the next firmware update?).

    Although it's nice to see they were able to boost the write speed so much. Looks like smaller dies do pay off.

    Does anyone know what Native Write Acceleration stands for? At first I thought it was something like TurboWrite (not knowing they would use 64Gbit flash), but this does not appear to be the case...
  • Kristian Vättö - Tuesday, March 18, 2014 - link

    Crucial specifically said they don't have an SLC or DRAM buffer and the write performance should be the same across all LBAs (the HD Tach graph shows that this is true). To me it sounds like the Native Write Acceleration is just a marketing trick aimed at Samsung and others who use buffers to boost performance.
  • hojnikb - Tuesday, March 18, 2014 - link

    Yeah, it appears so.
    It's really nice to see that they aren't using any nasty tricks like TurboWrite just to inflate the numbers.
  • jospoortvliet - Thursday, March 20, 2014 - link

    I think TurboWrite is an awesome idea - it helps exactly where consumers need it: short write bursts. Sure, perhaps not that interesting for pro use, but isn't that the point of consumer products?
  • hojnikb - Thursday, March 20, 2014 - link

    An awesome idea for the marketing department, maybe. Considering that your average Joe only looks at the sequential speeds, it makes the EVO look like a way, way better drive compared to, let's say, the M500, even though in reality that's not the case.
    It's not like TurboWrite is bad or anything (I think it's a great solution for "slow" write devices such as TLC), but I just hate that they are fooling people (they could easily market both the normal speeds and the turbo speeds).
