Encryption Done Right?

Arguably one of the most interesting features of the M500 is its hardware encryption engine. Like many modern drives, the M500 features a 256-bit AES encryption engine - all data written to the drive is stored encrypted. By default you don't need to supply a password to access the data; the key is simply stored in the controller and everything is encrypted/decrypted on the fly. As with most SSDs with hardware encryption, setting an ATA password forces the generation of a new key, ensuring no one else gets access to your data.
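
A toy sketch of why forcing a new key amounts to an instant crypto-erase (pure Python, with a hash-based keystream standing in for the drive's AES-256 engine; illustrative only, not real cryptography):

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: iterate SHA-256 over key || counter.
    # Stands in for the drive's AES engine; NOT real crypto.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

key = secrets.token_bytes(32)           # key stored in the controller
plaintext = b"user data on the NAND"
stored = xor(plaintext, keystream(key, len(plaintext)))  # data is always encrypted at rest

# Normal operation: the controller's stored key decrypts transparently.
assert xor(stored, keystream(key, len(stored))) == plaintext

# Setting an ATA password generates a new key: the old ciphertext is now garbage.
new_key = secrets.token_bytes(32)
assert xor(stored, keystream(new_key, len(stored))) != plaintext
```

The point is that the old data never needs to be overwritten; once the original key is gone, what's on the NAND is unrecoverable noise.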

Unfortunately, most ATA passwords aren't very secure, so the AES-256 engine ends up being overkill when used this way. Here's where the M500 sets itself apart from the pack: its firmware is TCG Opal 2.0 and IEEE-1667 compliant. The TCG Opal support alone lets you leverage third-party encryption tools to more securely lock down your system. The combination of the two, however, makes the M500 compatible with Microsoft's eDrive standard.

In theory, Windows 8's BitLocker should leverage the M500's hardware encryption engine instead of layering software encryption on top of it. The result should be better performance and lower power consumption. Simply enabling BitLocker didn't seem to work for me (initial encryption should take a few seconds, not an hour or more, if it's truly leveraging the M500's hardware encryption). According to Crucial, however, it's a matter of making sure both my test platform and the drive support the eDrive spec. There's hardly any good info about this online, so I'm still digging into how to make it work. Once I figure it out I'll update this post. Update: It works!

Assuming this does work, the M500 is likely to be one of the first must-have drives if you need to run with BitLocker enabled on Windows 8. The performance impact of software encryption isn't huge on non-SandForce drives, but reducing it to effectively nothing would be awesome.

Crucial is also printing a physical security ID (PSID) on all M500 drives. The PSID is on the M500's information label and is used in the event that you have a password-protected drive whose auth code you've lost. In the past you'd have a brick on your hands. With the M500 and its PSID, you can perform a PSID revert using third-party software and at least get your drive back. The data will obviously be lost forever, but the drive will be returned to an unlocked and usable state. I'm still waiting to hear back from Crucial on which utilities can successfully perform a PSID revert on the M500.

NAND Configurations, Spare Area & DRAM

I've got the full lineup of M500s here for review. All of the drives are 2.5" 7mm form factor designs, but they all ship with a spacer you can stick on the drive for use in trays that require a 9.5mm drive (mSATA and M.2/NGFF versions will ship in Q2). The M500 chassis is otherwise a pretty straightforward 8-screw design (4 hold the chassis together, 4 hold the PCB in place). A single large thermal pad covers both the Marvell 9187 controller and the DDR3-1600 DRAM, allowing them to use the metal chassis for heat dissipation. The M500 is also thermally managed: should the controller temperature exceed 70C, the firmware will reduce performance until the drive returns to its normal operating temperature. The drive slows down without changing the SATA PHY rate, so throttling should be transparent to the host.

The M500 is Crucial's first SSD to use 20nm NAND, which means this is the first time it has had to deal with error and defect rates at 20nm. For the most part, really clever work at the fabs and on the firmware side keeps the move to 20nm from being a big problem. Performance goes down but endurance stays constant. According to Crucial however, defects are more prevalent at 20nm - especially today when the process, particularly for these new 128Gbit die parts, is still quite new. To deal with potentially higher defect rates, Crucial introduced RAIN (Redundant Array of Independent NAND) support to the M500. We've seen RAIN used on Micron's enterprise SSDs before, but this is the first time we're seeing it used on a consumer drive.
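
Crucial hasn't detailed RAIN's internals, but parity-across-NAND schemes generally work like RAID-5 inside the drive: each stripe of pages gets an XOR parity page, so the contents of a failed die can be rebuilt from the survivors. A minimal sketch (the stripe width and page contents here are made up for illustration):

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical stripe: user pages spread across three NAND die, plus one parity page.
data_pages = [b"page-on-die-0!!!", b"page-on-die-1!!!", b"page-on-die-2!!!"]
parity = reduce(xor_bytes, data_pages)

# Suppose die 1 develops a defect and its page becomes unreadable.
lost_index = 1
survivors = [p for i, p in enumerate(data_pages) if i != lost_index]

# XOR of the surviving pages plus parity reconstructs the lost page.
recovered = reduce(xor_bytes, survivors + [parity])
assert recovered == data_pages[lost_index]
```

The cost of a scheme like this is that one NAND page per stripe is consumed by parity rather than user data, which dovetails with the larger spare area discussed below.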

You'll notice that Crucial uses SandForce-like capacity points with the M500. While the m4/C400 had an industry-standard ~7% of its NAND set aside as spare area, the M500 roughly doubles that amount. The extra spare area is used exclusively for RAIN, to curb failures due to NAND defects, not to reduce write amplification. Despite the larger amount of spare area, if you want more consistent performance you'll still have to overprovision the M500 as if it were a standard 7% OP drive.
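
The spare-area math behind those capacity points works out as follows (a quick back-of-the-envelope; raw NAND is counted in binary gibibytes, user capacity in decimal gigabytes):

```python
GB = 1000**3   # decimal gigabyte (how user capacity is marketed)
GiB = 1024**3  # binary gibibyte (how raw NAND is counted)

# user capacity (GB) -> raw NAND on board (GiB), per the configuration table
configs = {120: 128, 240: 256, 480: 512, 960: 1024}

for user_gb, raw_gib in configs.items():
    op = (raw_gib * GiB - user_gb * GB) / (raw_gib * GiB)
    print(f"{user_gb}GB M500: {op:.1%} spare area")   # ~12.7% at every capacity

# m4/C400-style standard OP: 256GiB of NAND exposed as 256GB
std_op = (256 * GiB - 256 * GB) / (256 * GiB)
print(f"m4/C400-style drive: {std_op:.1%} spare area")  # ~6.9%
```

So every M500 capacity carries roughly 12.7% spare area versus the usual ~6.9%, consistent with "roughly doubles" above.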

The breakdown of capacities vs. NAND/DRAM on-board is below:

Crucial M500 NAND/DRAM Configuration

Capacity   # of NAND Packages   # of Die per Package   Total NAND On-Board   DRAM
960GB      16                   4                      1024GB                1GB
480GB      16                   2                      512GB                 512MB
240GB      16                   1                      256GB                 256MB
120GB      8                    1                      128GB                 256MB

As with any transition to higher-density NAND, there's a reduction in the number of individual NAND die and packages in any given configuration. The 9187 controller has 8 NAND channels and can interleave requests on each channel. In general, we've seen the best results when 16 or 32 devices are connected to an 8-channel controller. In other words, you can expect a substantial drop-off in performance with the 120GB M500; peak performance will come with the 480GB and 960GB drives.
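
Working from the configuration table, the per-channel interleave factor at each capacity looks like this (the even spread of die across all 8 channels is an assumption on my part):

```python
CHANNELS = 8  # Marvell 9187 NAND channels

# capacity (GB) -> (NAND packages, die per package), per the configuration table
configs = {120: (8, 1), 240: (16, 1), 480: (16, 2), 960: (16, 4)}

for capacity, (pkgs, die_per_pkg) in configs.items():
    total_die = pkgs * die_per_pkg
    interleave = total_die // CHANNELS
    print(f"{capacity}GB: {total_die} die -> {interleave}-way interleave per channel")
```

Only the 120GB drive falls below the 16-device sweet spot, with a single die per channel and no opportunity to interleave; the 480GB and 960GB drives sit at 4-way and 8-way interleaving respectively.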

You'll also note the lack of a 60GB offering. Given the density of this NAND, a 60GB drive would only populate four channels - cutting peak sequential performance in half. Crucial felt it would be best not to come out with a 60GB drive at this point, and simply release a version that uses 64Gbit die at some point in the future.

The heavy DRAM requirements point to a flat indirection table, similar to what we saw Intel move to with the S3700. Less than 5MB of user data is ever stored in the M500's DRAM at any given time; the bulk of the DRAM is used to cache the drive's OS, firmware and logical-to-physical mapping (indirection) table. Relatively flat maps should be easy to defragment, but that assumes the M500's garbage collection and internal defragmentation routines are optimal.
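
A rough sanity check on the flat-map theory (the 4KiB mapping granularity and 4-byte entry size are my assumptions, not Crucial-confirmed figures):

```python
PAGE = 4096   # assumed mapping granularity: 4KiB logical pages
ENTRY = 4     # assumed entry size: 32-bit physical address per page

# user capacity (GB) -> on-board DRAM (MB), per the configuration table
for user_gb, dram_mb in [(120, 256), (240, 256), (480, 512), (960, 1024)]:
    entries = user_gb * 1000**3 // PAGE
    table_mb = entries * ENTRY / 1024**2
    print(f"{user_gb}GB: ~{table_mb:.0f}MB mapping table vs {dram_mb}MB DRAM")
```

Under those assumptions a fully flat table for the 960GB drive lands around 894MB, which fits neatly in its 1GB of DRAM with room left over for firmware and cached metadata; the same holds at every other capacity point.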


  • Crazy1 - Tuesday, April 9, 2013 - link

    Using the 2012 charts from Tom's Hardware, I was able to compile some numbers comparing the 840 Pro and popular mobile HDDs. While not a comprehensive comparison, these numbers come from a single source, so they should be reliable enough to show that there is a power savings when using an SSD instead of an HDD. These are the average power consumption numbers during the following workloads.

                          idle     video playback    database
    840 Pro 128GB         0.03W    0.4W              1.2W
    840 Pro 256GB         0.03W    0.5W              1.4W
    840 Pro 512GB         0.04W    0.6W              1.5W

    WD Blue 500GB         0.36W    0.94W             2.2W
    WD Blue 1TB           0.6W     1.1W              1.9W
    WD Black 750GB        0.9W     1.4W              2.4W
    Seagate XT 750GB*     0.8W     2.1W              2.6W

    * The XT 750GB is running SATA III. The XT 500GB running SATA II gives power numbers closer to the two WD Blue drives.

    It's fairly clear from these numbers that the 840 Pro uses less power than mobile HDDs. This isn't true for all SSDs, though. Some of the SandForce-based SSDs post average wattage numbers similar to the WD Blue drives. Those SSDs are still more power efficient because they have a better performance-per-watt ratio.
  • vol7ron - Monday, December 9, 2013 - link

    Take it to the forums
  • leexgx - Tuesday, July 1, 2014 - link

    Personally I don't trust Tom's Hardware's numbers, but SSDs do use less power than an HDD over the same period (say 10-30 seconds). Peak power on some SSDs might be higher than a laptop HDD's, but only for extremely short bursts, so an SSD will be idle for most of the time, whereas an HDD is very likely still going to be actively reading due to its slow random access speeds. (Writes differ, but your typical laptop workload is mostly reads, and for HDDs, reads and writes draw about the same power.)
  • Wolfpup - Monday, April 15, 2013 - link

    Yeah, SSDs don't automatically use less power than mechanical drives...and for that matter aren't automatically more reliable either.
  • Arkive - Tuesday, April 9, 2013 - link

    "Bang for the buck" depends entirely on how much storage you need and it's use-case.
  • UltraTech79 - Saturday, April 13, 2013 - link

    Did you just want to be the guy that made a shocking statement or something? Even if it's totally fucking false? SSDs won't be better bang for the buck for at least another two generations, probably three.

    And everyone discussing power requirements is amusing. We are talking about fractions of cents, people, and if you think that adds up somehow in a server environment, you forgot to 'add up' the fact that these will not last even half as long as an enterprise-quality HDD/SSD, so you're going to have to replace them at least once. Bang for the buck? Bullshit. What these things are is awesome, fast-performing little pieces of amazing - but you will pay for it, get real.
  • leexgx - Tuesday, July 1, 2014 - link

    He was talking about laptop HDDs, but that's still mostly incorrect: most SSDs idle at under 0.1W, which is where an SSD will sit for most of its life (assuming a consumer laptop, not server or workstation loads).
  • mutantmagnet - Wednesday, April 17, 2013 - link

    SSDs are dramatically more vulnerable to brownouts and power surges than mechanical drives. This SSD's price point made me briefly consider forgoing hard drives completely, but SSDs aren't quite there yet.
  • leexgx - Tuesday, July 1, 2014 - link

    Interesting (a bit of an old post to reply to) - an SSD dying to a power surge. I'd resolve that underlying power issue first.
