NAND Lesson: Why Die Capacity Matters

SSDs are basically just huge RAID arrays of NAND. A single NAND die isn't very fast, but put a dozen or more of them in parallel and the performance adds up. Modern SSDs usually have between 8 and 64 NAND dies depending on the capacity, and the rule of "the more, the better" applies here, at least to a certain degree. (Controllers are designed for a certain number of NAND dies, so too many dies can actually hurt performance because the controller has more pages and blocks to track and process.) But die parallelism is only part of the big picture, and it all starts inside the die.
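
To put some rough numbers on that scaling, here is a minimal back-of-envelope sketch in Python. Both the per-die throughput and the controller ceiling are illustrative assumptions, not figures from Micron or any particular controller vendor.

```python
# Hypothetical numbers, just to show how parallelism scales and then flattens out.
PER_DIE_MBPS = 20            # assumed effective write throughput of a single die
CONTROLLER_LIMIT_MBPS = 500  # assumed ceiling from the controller/host interface

def array_write_throughput(num_dies: int) -> float:
    """Ideal scaling with die count, clipped at the controller's limit."""
    return min(num_dies * PER_DIE_MBPS, CONTROLLER_LIMIT_MBPS)

for dies in (4, 8, 16, 32, 64):
    print(f"{dies:2d} dies -> {array_write_throughput(dies):4.0f} MB/s")
```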

Meet the inside version of our Mr. NAND die. Each die is usually divided into two planes, which are further divided into blocks, which are in turn divided into pages. In the early NAND days there were no planes, just blocks and pages, but as die capacities increased the manufacturers had to find a way to get more performance out of a single die. The solution was to divide the die into two planes, which can be read from or written to (nearly) simultaneously. Without planes you could only read or program one page per die at a time, but two-plane operation allows two pages to be read or programmed at the same time.

The reason I said "nearly" is that programming the NAND involves more than just the program time itself. There is latency from the command, address, and data inputs. These are marginal compared to the program time, but with two-plane programming they take twice as long because the necessary commands and addresses still have to be sent separately for both soon-to-be-programmed pages.

I did some rough calculations based on the data I have (admittedly probably not enough to make them bulletproof), and the two-plane programming penalty comes out to about 2% compared with two individual dies; in other words, it takes about 2% longer to program two pages with two-plane programming than with two separate dies. The practical conclusion is that two-plane programming gives roughly twice the throughput of one-plane programming.
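
Here is a rough sketch of that comparison. The ~1600µs program time is the figure quoted later in the article; the per-page command/address overhead is an assumed value chosen to land near the ~2% penalty, so treat the exact output as illustrative.

```python
# Crude model: the program time dominates, command/address/data input adds a small fixed cost.
PAGE_KB = 16
T_PROG_US = 1600   # program time for one 16KB page (Micron 128Gbit NAND, quoted later)
T_CMD_US = 30      # assumed command/address/data input overhead per page

one_plane = PAGE_KB / (T_PROG_US + T_CMD_US) * 1000           # MB/s, one page at a time
two_plane = 2 * PAGE_KB / (T_PROG_US + 2 * T_CMD_US) * 1000   # commands sent serially for both pages
two_dies  = 2 * PAGE_KB / (T_PROG_US + T_CMD_US) * 1000       # two fully independent dies

print(f"one-plane: {one_plane:.1f} MB/s")
print(f"two-plane: {two_plane:.1f} MB/s ({two_dies / two_plane - 1:.1%} behind two dies)")
```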

"Okay," you're thinking, "that's fine and all, but what's the point of this? This isn't a new technology and has nothing to do with the M550!" Hold on, it'll make sense as you read further.

Case: M500

                     M550 128GB      M500 120GB
NAND Die Capacity    64Gbit (8GB)    128Gbit (16GB)
NAND Page Size       16KB            16KB
Sequential Write     350MB/s         130MB/s
4KB Random Write     75K IOPS        35K IOPS

The Crucial M500 was the first client SSD to utilize 128Gbit-per-die NAND. That allowed Crucial to go beyond 512GB without sacrificing performance, but it also meant a performance hit at the smaller capacities. As mentioned many times before, the key to SSD performance is parallelism, and when the die capacity doubles at a given drive capacity, the number of dies is cut in half. For the 120/128GB model this meant that instead of the sixteen dies it would have had with 64Gbit NAND, it only had eight 128Gbit dies.

It takes 1600µs to write 16KB (one page) to Micron's 128Gbit NAND. Convert that to throughput and you get 10MB/s. Well, that's the simple version and not exactly accurate. With eight dies, the total write throughput would be only 80MB/s, yet the 120GB M500 is rated at 130MB/s. The big picture involves more than just the program time: in reality you also have to take into account the interface latency as well as the gains from two-plane programming and cache mode (the command, address, and data latches are cached, so there is no need to wait for them between program operations).
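
A quick sanity check of those numbers in Python, using only the figures from the paragraph above:

```python
# 120GB M500: eight 128Gbit (16GB) dies, 16KB pages, ~1600us program time.
PAGE_KB = 16
T_PROG_US = 1600
DIES = 8

per_die_mbps = PAGE_KB / T_PROG_US * 1000            # KB/us converted to MB/s
print(f"per die: {per_die_mbps:.0f} MB/s")           # ~10 MB/s
print(f"8 dies : {per_die_mbps * DIES:.0f} MB/s")    # ~80 MB/s before two-plane/cache gains
```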

Example of cache programming

As I described above, two-plane programming gives us roughly twice the throughput of one-plane programming. Instead of writing one 16KB page in 1600µs, we are able to write two pages, or 32KB of data in total. That doubles our throughput from 80MB/s to 160MB/s. There is some overhead from the commands, as the picture above shows, but thankfully today's interfaces are fast enough that it only amounts to a few percent, so in the real world the usable throughput should be around 155MB/s. The 120GB M500 manages around 140MB/s in sequential writes, so 155MB/s of NAND write throughput sounds reasonable given that there is always some additional latency from channel and die switching. Program times are also averages that vary slightly from die to die, and it's possible that the set program time is actually slightly over 1600µs to make sure all dies meet the criteria.
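
Continuing the same estimate, with the command and interface overhead modeled as a flat few percent (the 3% figure below is an assumption, not a measured value):

```python
# Two-plane programming: two 16KB pages per ~1600us program operation, across eight dies.
PAGE_KB, T_PROG_US, DIES = 16, 1600, 8
OVERHEAD = 0.03   # assumed command/address/interface overhead

ideal_mbps = 2 * PAGE_KB / T_PROG_US * 1000 * DIES   # ~160 MB/s
usable_mbps = ideal_mbps * (1 - OVERHEAD)            # ~155 MB/s
print(f"ideal : {ideal_mbps:.0f} MB/s")
print(f"usable: {usable_mbps:.0f} MB/s")
```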

Case: M550

While the M500 used 128Gbit NAND exclusively, Crucial is bringing back the 64Gbit die for the 128GB and 256GB M550s. The switch means twice the number of dies and, as we've now learned, that means twice the performance. This is actually Micron's second-generation 64Gbit 20nm NAND with a 16KB page size, similar to their 128Gbit NAND. The larger page size is needed for write throughput (roughly a 60% increase over an 8KB page), but it adds complexity to garbage collection and, if not implemented efficiently, can increase write amplification and hence lower endurance.
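
As a rough illustration of that ~60% figure, here is a comparison where the 8KB-page program time is an assumption picked for the example (Micron doesn't publish it here), so the exact numbers are illustrative only.

```python
# Hypothetical comparison: a larger page moves more data per (somewhat longer) program operation.
T_PROG_8KB_US = 1300    # assumed program time for an 8KB page
T_PROG_16KB_US = 1600   # 16KB-page program time used earlier

mbps_8kb = 8 / T_PROG_8KB_US * 1000
mbps_16kb = 16 / T_PROG_16KB_US * 1000
print(f"8KB page : {mbps_8kb:.1f} MB/s")
print(f"16KB page: {mbps_16kb:.1f} MB/s ({mbps_16kb / mbps_8kb - 1:.0%} higher)")
```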

Micron wouldn't disclose the program time for this part, but I suspect there is some improvement over the original 128Gbit part. As process nodes mature, you're usually able to squeeze a little more performance (and endurance) out of the same chip, and I think that is what's happening here. To get ~370MB/s out of the 128GB M550, the program time would have to be around 1300-1400µs to be in line with the measured performance. It's certainly possible that something else is going on as well (better channel switching management, for instance), but it's clear that Crucial/Micron has been able to better optimize the NAND in the M550.
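
Working that math backwards, under the same simplifying assumptions as before (sixteen 64Gbit dies, two-plane 16KB pages, smaller overheads ignored):

```python
# Solve for the program time sixteen dies would need to sustain ~370MB/s.
TARGET_MBPS = 370
DIES = 16
KB_PER_PROGRAM = 2 * 16   # two-plane: two 16KB pages per program operation

t_prog_us = KB_PER_PROGRAM * DIES / TARGET_MBPS * 1000
print(f"implied program time: {t_prog_us:.0f} us")   # ~1380us, i.e. in the 1300-1400us range
```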

The point here was to give an idea of where NAND performance comes from and why there is such a dramatic difference between the M550 and the M500. Ultimately, NAND performance characteristics are something the manufacturers won't disclose, so the figures here may not be accurate, but they should at least give a rough idea of what is happening at the low level.

Comments

  • hojnikb - Thursday, March 20, 2014 - link

    I'm guessing there is lots of headroom in the Marvell controller (seeing how other Marvell drives perform), so there is a possibility that they could squeeze out a little bit more. But that's all on Crucial.
    But I wouldn't call it quits, because with the M4 they did boost read performance quite a bit after the launch. Time will tell I guess.
  • nick2crete - Friday, March 21, 2014 - link

    Thanks,
    just got an M550 256GB, and I also have the Samsung 840 Pro. To be honest I didn't see any performance difference. OK, I have them on a Marvell 9230 PCIe x2 controller and it is well known that Samsung doesn't like Marvell controllers... but still...
  • emn13 - Thursday, March 20, 2014 - link

    I guess expectations are everything. The M500 is the cheapest drive available at large sizes at the moment; with good features and mediocre performance, I think of it as a kind of baseline - if you can't beat the M500, then what's the point?

    So I guess the M550's fate really comes down to price, and time will tell how that goes.
  • trichome333 - Wednesday, March 19, 2014 - link

    I finally went SSD with a 240GB M500 for $115 and Prime from Amazon. Went ahead and did a fresh install, and Windows 7 literally loads in seconds after the logo on the dark screen forms. I think it restarts too fast, as my BIOS POST screen kind of sits there for a second before posting on restart. BF4 loads went from 2-3 minutes to 20 seconds. I couldn't be happier coming from SATA II 7200RPM HDDs. We have several machines around the house and mine is mainly for gaming, so I don't do many big writes or convert video. What I've noticed is huge increases, and I would advise anyone on the fence to make the move. The M500 will be PLENTY for 99% of all users IMO.
  • hojnikb - Wednesday, March 19, 2014 - link

    Install Windows 8/8.1 and your boot time will be even shorter. Couple that with a UEFI-capable board and you can get a near-instant boot.
  • nathanddrews - Wednesday, March 19, 2014 - link

    Yeah, but then he'd have Windows 8 and lose all that productivity.
  • hojnikb - Wednesday, March 19, 2014 - link

    Meaning what ?
  • mikato - Thursday, March 20, 2014 - link

    The whole review is like "meh" and then BAM, look at that pricing. Ok then
  • Death666Angel - Friday, March 21, 2014 - link

    Any chance you could update the SSD Slumber Power chart with values for the other sizes? Seems weird to just have the smallest SSD in there, when capacity clearly adds to consumption.
  • Hrel - Tuesday, March 25, 2014 - link

    " can't say I'm very pleased with the IO consistency of the M550. There is a moderate increase (~4K IOPS vs 2.5K in M500) in steady-state performance but other than that there isn't much good to say. " What are you talking about? The Samsung is the only other 256GB drive in there and it's less consistent than the Crucial. Am I missing something? Those consistency numbers look great!
