NAND Lesson: Why Die Capacity Matters

SSDs are basically just huge RAID arrays of NAND. A single NAND die isn't very fast, but when you put a dozen or more of them in parallel, the performance adds up. Modern SSDs usually have between 8 and 64 NAND dies depending on the capacity, and the rule of "the more, the better" applies here, at least to a certain degree. (Controllers are usually designed for a certain number of NAND dies, so too many dies can negatively impact performance because the controller has more pages/blocks to track and process.) But die parallelism is just a part of the big picture: it all starts inside the die.
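To make the "more dies, up to a point" intuition concrete, here is a minimal Python sketch; the per-die throughput and the controller ceiling are illustrative assumptions of mine, not specs from any datasheet:

    # Illustrative scaling of write throughput with die count (numbers are assumptions)
    PER_DIE_MBPS = 20          # assumed effective per-die write throughput
    CONTROLLER_CAP_MBPS = 520  # assumed controller/channel ceiling

    for dies in (4, 8, 16, 32, 64):
        aggregate = min(dies * PER_DIE_MBPS, CONTROLLER_CAP_MBPS)
        print(f"{dies:2} dies -> {aggregate} MB/s")

With these numbers the throughput doubles from 4 to 8 to 16 dies, then flattens once the controller becomes the bottleneck, which is the "to a certain degree" caveat above.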

Meet the inside version of our Mr. NAND die. Each die is usually divided into two planes, which are further divided into blocks that are in turn divided into pages. In the early NAND days there were no planes, just blocks and pages, but as die capacities increased, manufacturers had to find a way to get more performance out of a single die. The solution was to divide the die into two planes, which can be read from or written to (nearly) simultaneously. Without planes, you could only read or program one page per die at a time; two-plane reading/programming allows two pages to be read or programmed at the same time.
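For a sense of scale, here is a minimal sketch of how a plausible die geometry multiplies out. The pages-per-block and blocks-per-plane counts are my assumptions for illustration, not Micron's published geometry; only the 16KB page size and two planes come from the article:

    # Hypothetical die geometry: planes -> blocks -> pages (counts are assumed)
    PAGE_KIB = 16            # 16KB page, as on Micron's 20nm NAND
    PAGES_PER_BLOCK = 512    # assumption
    BLOCKS_PER_PLANE = 1024  # assumption
    PLANES_PER_DIE = 2

    bits_per_die = PLANES_PER_DIE * BLOCKS_PER_PLANE * PAGES_PER_BLOCK * PAGE_KIB * 1024 * 8
    print(f"{bits_per_die / 2**30:.0f} Gbit per die")  # 128 Gbit with these numbers

Note how many blocks and pages the controller has to track per die; that is exactly the bookkeeping burden mentioned above.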

The reason I said "nearly" is that programming NAND involves more than just the program time. There is latency from all the command, address, and data inputs, which is marginal compared to the program time, but with two-plane programming it takes twice as long (you still have to send all the necessary commands and addresses separately for both soon-to-be-programmed pages).

I did some rough calculations based on the data I have (though to be honest, it's probably not enough to make my calculations bulletproof) and it seems that the two-plane programming penalty is about 2% compared to two individual dies (i.e. it takes 2% longer to program two pages with two-plane programming than with two fully independent dies). In other words, we can conclude that two-plane programming gives us roughly twice the throughput of one-plane programming.
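Here is the back-of-the-envelope version of that comparison as a Python sketch. The ~32µs command/address/data-in latency is my assumption, picked so the penalty lands near my ~2% estimate; the 1600µs program time is Micron's figure discussed further down:

    # Two-plane programming vs. two fully independent dies (toy timing model)
    T_PROG_US = 1600   # page program time (Micron 128Gbit, see below)
    T_IO_US = 32       # assumed command/address/data-in latency per page

    two_dies = T_IO_US + T_PROG_US        # inputs and programs overlap across dies
    two_plane = 2 * T_IO_US + T_PROG_US   # inputs serialized, programs overlap
    print(f"penalty: {two_plane / two_dies - 1:.1%}")  # ~2.0% with these numbers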

"Okay," you're thinking, "that's fine and all, but what's the point of this? This isn't a new technology and has nothing to do with the M550!" Hold on, it'll make sense as you read further.

Case: M500

                       M550 128GB     M500 120GB
    NAND Die Capacity  64Gbit (8GB)   128Gbit (16GB)
    NAND Page Size     16KB           16KB
    Sequential Write   350MB/s        130MB/s
    4KB Random Write   75K IOPS       35K IOPS

The Crucial M500 was the first client SSD to utilize 128Gbit per die NAND. That allowed Crucial to go higher than 512GB without sacrificing performance, but it also meant a performance hit at the smaller capacities. As mentioned many times before, the key to SSD performance is parallelism, and when the die capacity doubles, the parallelism at a given drive capacity is cut in half. For the 120/128GB model this meant that instead of having sixteen dies, as it would with 64Gbit NAND, it only had eight 128Gbit dies.
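The die count falls straight out of the raw NAND capacity; a quick sketch (the 120GB M500 carries 128GB of raw NAND, with part of it reserved for over-provisioning):

    # Dies needed for 128GB of raw NAND at each die capacity
    RAW_GBIT = 128 * 8  # 128GB of raw NAND = 1024Gbit
    for die_gbit in (64, 128):
        print(f"{die_gbit}Gbit dies: {RAW_GBIT // die_gbit}")  # 16 vs. 8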

It takes 1600µs to write 16KB (one page) to Micron's 128Gbit NAND. Convert that to throughput and you get 10MB/s per die. Well, that's the simple version and not exactly accurate. With eight dies, the total write throughput would be only 80MB/s, yet the 120GB M500 is rated at 130MB/s. The big picture is more than just the program time: in reality you have to take into account the interface latency as well as the gains from two-plane programming and cache mode (the command, address and data latches are cached, so there is no need to wait for them between program operations).
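The simple version of that arithmetic, as a Python sanity check (decimal units, matching the article's round numbers):

    # Naive per-die write throughput from the program time alone
    PAGE_BYTES = 16_000     # 16KB page, decimal units
    T_PROG_S = 1600e-6      # 1600µs program time
    per_die_mbps = PAGE_BYTES / T_PROG_S / 1e6
    print(per_die_mbps)      # 10.0 MB/s per die (one plane)
    print(8 * per_die_mbps)  # 80.0 MB/s across eight dies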

Example of cache programming

Like I described above, two-plane programming gives us roughly twice the throughput of one-plane programming. As a result, instead of writing one 16KB page in 1600µs, we are able to write two pages, 32KB of data in total. That doubles our throughput from 80MB/s to 160MB/s. There is some overhead from the commands, as the picture above shows, but thankfully today's interfaces are so fast that it's only on the order of a few percent, so in the real world the usable throughput should be around 155MB/s. The 120GB M500 manages around 140MB/s in sequential write, so 155MB/s of NAND write throughput sounds reasonable since there is always some additional latency from channel and die switching. Program times are also averages that vary slightly from die to die, and it's possible that the set program times are actually slightly over 1600µs to make sure all dies meet the criteria.
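Extending the same arithmetic to two-plane programming across eight dies; the ~3% interface overhead is my assumption, chosen to arrive at the ~155MB/s estimate above:

    # Two-plane write throughput for eight dies, minus assumed interface overhead
    PAGE_BYTES = 16_000                                    # 16KB page, decimal units
    T_PROG_S = 1600e-6                                     # 1600µs program time
    two_plane_mbps = 8 * 2 * PAGE_BYTES / T_PROG_S / 1e6   # 160 MB/s ideal
    print(two_plane_mbps * (1 - 0.03))                     # ~155 MB/s usable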

Case: M550

While the M500 used solely 128Gbit NAND, Crucial is bringing back the 64Gbit die for the 128GB and 256GB M550s. The switch means twice the number of dies and, as we've now learned, twice the performance. This is actually Micron's second-generation 64Gbit 20nm NAND, with a 16KB page size similar to their 128Gbit NAND. The larger page size is required for write throughput (about a 60% increase over an 8KB page), but it adds complexity to garbage collection and, if not implemented efficiently, can increase write amplification (and hence lower endurance).

Micron wouldn't disclose the program time for this part, but I'm thinking there is some improvement over the original 128Gbit part. As process nodes mature, you're usually able to squeeze a little more performance (and endurance) out of the same chip, and I'm thinking that is what's happening here. To get ~370MB/s out of the 128GB M550, the program time would have to be 1300-1400µs to be in line with the performance. It's certainly possible that there's something else going on (better channel switching management, for instance), but it's clear that Crucial/Micron has been able to better optimize the NAND in the M550.
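Working backwards from the rated write speed is how I arrived at that range; a sketch of the reconstruction, using the sixteen dies and two-plane programming established above (everything else is the same arithmetic as before):

    # Back-solving the program time implied by ~370MB/s sequential write
    TARGET_BPS = 370e6
    DIES, PLANES = 16, 2      # sixteen 64Gbit dies, two-plane programming
    PAGE_BYTES = 16_000       # 16KB page, decimal units
    t_prog_us = DIES * PLANES * PAGE_BYTES / TARGET_BPS * 1e6
    print(f"{t_prog_us:.0f} us")  # ~1384µs, inside the 1300-1400µs range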

The point here was to give an idea of where NAND performance comes from and why there is such a dramatic difference between the M550 and the M500. Ultimately, NAND performance characteristics are something the manufacturers won't disclose, so the figures here may not be exact, but they should at least give a rough idea of what is happening at the low level.
