Encryption Done Right?

Arguably one of the most interesting features of the M500 is its hardware encryption engine. Like many modern drives, the M500 features a 256-bit AES encryption engine - all data written to the drive is stored encrypted. By default you don't need to supply a password to access the data; the key is simply stored in the controller and everything is encrypted/decrypted on the fly. As with most SSDs with hardware encryption, setting an ATA password forces the generation of a new key, ensuring no one else gets access to your data.
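
To make the on-the-fly model concrete, here's a minimal sketch of the kind of per-sector encryption a self-encrypting drive performs in hardware, written in Python with the cryptography package. The XTS mode and 512-byte sector size are my assumptions for illustration; Crucial only specifies AES-256.

    # Illustrative only: per-sector AES encryption as a self-encrypting drive
    # might do it in hardware. XTS mode and the sector size are assumptions.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    media_key = os.urandom(64)                    # AES-256-XTS takes two 256-bit keys
    sector_number = 1234
    tweak = sector_number.to_bytes(16, "little")  # per-sector tweak value

    cipher = Cipher(algorithms.AES(media_key), modes.XTS(tweak))
    plaintext = b"\x00" * 512                     # one 512-byte sector
    ciphertext = cipher.encryptor().update(plaintext)
    assert cipher.decryptor().update(ciphertext) == plaintext

    # Setting an ATA password forces a new media key; anything written under
    # the old key becomes unreadable, effectively a cryptographic erase.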

Unfortunately, most ATA passwords aren't very secure, so the AES-256 engine ends up being a bit of overkill when used this way. Here's where the M500 sets itself apart from the pack. The M500's firmware is TCG Opal 2.0 and IEEE-1667 compliant. The TCG Opal support alone lets you leverage third-party encryption tools to more securely lock down your system. The combination of the two, however, makes the M500 compatible with Microsoft's eDrive standard.

In theory, Windows 8's BitLocker should leverage the M500's hardware encryption engine instead of layering software encryption on top of it. The result should be better performance and lower power consumption. Simply enabling BitLocker didn't seem to work for me (initial encryption should take a few seconds, not 1+ hours, if it's truly leveraging the M500's hardware encryption), but according to Crucial it's a matter of making sure both my test platform and the drive support the eDrive spec. There's hardly any good info about this online, so I'm still digging into how to make it work. Once I figure it out I'll update this post. Update: It works!
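
One way to verify which path BitLocker took: manage-bde reports the encryption method for each volume, and it reads "Hardware Encryption" when eDrive is active. A small script wrapping that check (Windows only, elevated prompt; the volume letter is just an example):

    # Check whether BitLocker is using the drive's hardware encryption engine
    # (eDrive) or has fallen back to its software AES layer. Windows only;
    # run from an elevated prompt.
    import subprocess

    status = subprocess.run(
        ["manage-bde", "-status", "C:"],
        capture_output=True, text=True, check=True,
    ).stdout

    if "Hardware Encryption" in status:
        print("eDrive active: BitLocker is using the SSD's hardware engine.")
    else:
        print("Software encryption (or BitLocker is off) on this volume.")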

Assuming this does work, the M500 is likely going to be one of the first must-have drives if you need to run with BitLocker enabled on Windows 8. The performance impact of software encryption isn't huge on non-SandForce drives, but minimizing it to effectively nothing would be awesome.

Crucial is also printing a physical security ID (PSID) on all M500 drives. The PSID is on the M500's information label and is used in the event that you have a password-protected drive whose auth code you've lost. In the past you'd have had a brick on your hands. With the M500 and its PSID, you can do a PSID revert using third-party software and at least get your drive back. The data will obviously be lost forever, but the drive will be returned to an unlocked and usable state. I'm also waiting to hear back from Crucial on what utilities can successfully do a PSID revert on the M500.

NAND Configurations, Spare Area & DRAM

I've got the full lineup of M500s here for review. All of the drives are 2.5" 7mm form factor designs, but they all ship with a spacer you can stick on the drive for use in trays that require a 9.5mm drive (mSATA and M.2/NGFF versions will ship in Q2). The M500 chassis is otherwise a pretty straightforward 8 screw design (4 hold the chassis together, 4 hold the PCB in place). There's a single large thermal pad that covers both the Marvell 9187 controller and DDR3-1600 DRAM, allowing them to use the metal chassis for heat dissipation. The M500 is thermally managed. Should the controller temperature exceed 70C, the firmware will instruct the drive to reduce performance until it returns to normal operating temperature. The drive reduces speed without changing SATA PHY rate, so it should be transparent to the host.
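
Crucial doesn't document the throttling algorithm, but the behavior described maps to a simple control loop. Here's a sketch of the idea; the recovery point and step sizes are made-up values:

    # Sketch of a thermal-throttle policy like the one Crucial describes: back
    # off above 70C, recover once the controller cools. The 60C recovery point
    # and 25% steps are invented for illustration. The SATA PHY rate is never
    # touched, so the host just sees a slower drive, not a renegotiated link.
    THROTTLE_C = 70
    RECOVER_C = 60

    def next_duty_cycle(temp_c: float, duty: float) -> float:
        """Return the fraction of full performance to run at."""
        if temp_c > THROTTLE_C:
            return max(0.25, duty - 0.25)  # reduce performance
        if temp_c < RECOVER_C:
            return min(1.0, duty + 0.25)   # ramp back to full speed
        return duty                        # hold steady in between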

The M500 is Crucial's first SSD to use 20nm NAND, which means this is the first time the company has had to deal with error and defect rates at 20nm. For the most part, really clever work at the fabs and on the firmware side keeps the move to 20nm from being a big problem: performance goes down, but endurance stays constant. According to Crucial, however, defects are more prevalent at 20nm - especially today, when the process, particularly for these new 128Gbit die parts, is still quite new. To deal with potentially higher defect rates, Crucial introduced RAIN (Redundant Array of Independent NAND) support to the M500. We've seen RAIN used on Micron's enterprise SSDs before, but this is the first time we're seeing it on a consumer drive.
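
Crucial doesn't publish RAIN's internals, but the concept is classic RAID-5-style parity applied across NAND pages. A minimal sketch; the 15+1 stripe geometry is an assumption, not Crucial's spec:

    # Minimal illustration of RAIN's core idea: one page per stripe stores the
    # XOR of the others, so a page lost to a NAND defect can be rebuilt.
    from functools import reduce

    def xor_pages(pages):
        """XOR a list of equal-length pages together."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), pages)

    data_pages = [bytes([i]) * 4096 for i in range(1, 16)]  # 15 data pages
    parity = xor_pages(data_pages)                          # 1 parity page

    # Lose page 3 to a defect, then rebuild it from the survivors + parity:
    survivors = data_pages[:3] + data_pages[4:] + [parity]
    assert xor_pages(survivors) == data_pages[3]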

You'll notice that Crucial uses SandForce-like capacity points with the M500. While the m4/C400 had an industry-standard ~7% of its NAND set aside as spare area, the M500 roughly doubles that amount. The extra spare area is used exclusively for RAIN and to curb failures due to NAND defects, not to reduce write amplification. Despite the larger amount of spare area, if you want more consistent performance you're going to have to overprovision the M500 as if it were a standard 7% OP drive.
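
The arithmetic behind those capacity points is easy to check, since raw NAND is counted in binary units while advertised capacity is decimal. A quick sanity check:

    # Spare-area math: raw NAND is binary (GiB), advertised capacity is decimal
    # (GB). The m4's ~7% OP falls out of the unit mismatch alone; the M500's
    # lower capacity points roughly double it.
    def op_percent(raw_gib: int, user_gb: int) -> float:
        raw = raw_gib * 2**30
        user = user_gb * 10**9
        return (raw - user) / user * 100

    print(f"m4 256GB:   {op_percent(256, 256):.1f}% OP")   # ~7.4%
    print(f"M500 240GB: {op_percent(256, 240):.1f}% OP")   # ~14.5%
    print(f"M500 960GB: {op_percent(1024, 960):.1f}% OP")  # ~14.5%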

The breakdown of capacities vs. NAND/DRAM on-board is below:

Crucial M500 NAND/DRAM Configuration

Capacity | # of NAND Packages | # of Die per Package | Total NAND on-board | DRAM
960GB    | 16                 | 4                    | 1024GB              | 1GB
480GB    | 16                 | 2                    | 512GB               | 512MB
240GB    | 16                 | 1                    | 256GB               | 256MB
120GB    | 8                  | 1                    | 128GB               | 256MB

As with any transition to higher-density NAND, there's a reduction in the number of individual NAND die and packages at any given capacity. The 9187 controller has 8 NAND channels and can interleave requests on each channel. In general we've seen the best results when 16 or 32 devices are connected to an 8-channel controller. In other words, you can expect a substantial drop-off in performance when going to the 120GB M500. Peak performance will come with the 480GB and 960GB drives.
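
The interleaving argument is just die-count arithmetic: each 128Gbit die is 16GB, and the die are spread across the 9187's 8 channels. A quick tabulation:

    # Die-per-channel math behind the expected performance scaling. Each
    # 128Gbit die is 16GB; the Marvell 9187 has 8 channels.
    DIE_GB = 16
    CHANNELS = 8

    for raw_gb in (128, 256, 512, 1024):
        die = raw_gb // DIE_GB
        print(f"{raw_gb}GB raw: {die} die, {die // CHANNELS} per channel")

    # 128GB raw:  8 die, 1 per channel  (120GB drive: no interleaving)
    # 1024GB raw: 64 die, 8 per channel (960GB drive: deep interleaving)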

You'll also note the lack of a 60GB offering. Given the density of this NAND, a 60GB drive would only populate four channels - cutting peak sequential performance in half. Crucial felt it would be best not to come out with a 60GB drive at this point, and to simply release a version that uses 64Gbit die at some point in the future.

The heavy DRAM requirements point to a flat indirection table, similar to what we saw Intel move to with the S3700. Less than 5MB of user data is ever stored in the M500's DRAM at any given time; the bulk of the DRAM is used to cache the drive's OS, firmware and logical-to-physical mapping (indirection) table. Relatively flat maps should be easy to defragment, but that assumes the M500's garbage collection and internal defragmentation routines are optimal.
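
The DRAM sizes in the table line up with what a flat table implies: roughly 1MB of map per 1GB of storage. A back-of-the-envelope check, assuming 32-bit entries at 4KiB granularity (neither figure is published by Crucial):

    # Flat indirection table sizing: one pointer per logical page. The 4-byte
    # entry and 4KiB page granularity are assumptions, but they reproduce the
    # roughly 1:1000 DRAM-to-NAND ratio in the table above.
    ENTRY_BYTES = 4
    PAGE_BYTES = 4096

    for user_gb in (120, 240, 480, 960):
        entries = user_gb * 10**9 // PAGE_BYTES
        table_mib = entries * ENTRY_BYTES / 2**20
        print(f"{user_gb}GB drive: ~{table_mib:.0f} MiB mapping table")

    # 960GB works out to ~894 MiB of table, hence that model's 1GB of DRAM.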

111 Comments

  • mayankleoboy1 - Wednesday, April 10, 2013 - link

    thanks! These look much better, and more real-world + consumer usage.
  • metafor - Wednesday, April 10, 2013 - link

    I'd be very interested to see an endurance test for this drive and how it compares to the TLC Samsung drives. One of the bigger selling points of 2-bit MLC is that it has a much longer lifespan, isn't it?
  • 73mpl4R - Wednesday, April 10, 2013 - link

    Thank you for a great review. If this is a product that paves the way for better drives with 128Gbit dies, then this is most welcome. Interesting with the encryption as well, gonna check it out.
  • raclimja - Wednesday, April 10, 2013 - link

    power consumption is through the roof.

    very disappointed with it.
  • toyotabedzrock - Wednesday, April 10, 2013 - link

    If you wrote 1.5 TB of data for this test, then you used 2% of the drive's write life in 10-11 hours.

    As a heavy multitasker this worries me greatly. Especially if you edit large video files.
  • Solid State Brain - Wednesday, April 10, 2013 - link

    As I wrote in one of the comments above, they probably state 72 TiB of maximum supported writes for liability and commercial reasons. They don't want users to be using these as enterprise/professional drives (and chances are that if you write more than 40 GiB/day continuously for 5 years, you're not a normal consumer). Most people barely write 1.5 TiB in 6 months of use anyway. So even if 72 TiB doesn't seem like much, it's actually quite a lot of writes.

    Taking into account drive and NAND specifications, and an average write amplification of 2.0x (although in the case of sequential workloads such as video editing this should be much closer to 1.0x), a realistic estimate of minimum drive endurance would be:

    120 GB => 187.5 TiB
    240 GB => 375.0 TiB
    480 GB => 750.0 TiB
    960 GB => 1.46 PiB

    Of course, it's not that these drives will stop working after 3000 write cycles. They will go on as long as uncorrectable write errors (which increase as the drive gets used) remain within usable margins.
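
Those numbers follow from a single formula: raw NAND capacity times rated P/E cycles, divided by write amplification. A quick check, using the 3,000-cycle and 2.0x figures from the comment above (the commenter's assumptions, not Micron specs):

    # Reproducing the endurance estimates above:
    # raw NAND (binary GiB) x P/E cycles / write amplification.
    CYCLES = 3000
    WA = 2.0

    for user_gb, raw_gib in ((120, 128), (240, 256), (480, 512), (960, 1024)):
        tib = raw_gib * CYCLES / WA / 1024
        print(f"{user_gb}GB: {tib:,.1f} TiB of host writes")

    # 120GB -> 187.5 TiB ... 960GB -> 1,500 TiB (~1.46 PiB), matching the list.
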
  • glugglug - Wednesday, April 10, 2013 - link

    It is very easy to come up with use cases where a "normal" user will end up hitting the 72TB of writes quickly.

    Most obvious example is a user who is using this large SSD to transition from a large HDD without it being "just a boot drive", so they archive a lot of stuff.

    Depending on MSSE settings, it will likely uncompress everything into C:\Windows\Temp when it does its nightly scans.

    You don't want to know how much of my X-25M G1's lifespan I killed in about 6 months' time before finding out about that and junctioning my temp directories off of the SSD.
  • Solid State Brain - Wednesday, April 10, 2013 - link

    I am currently using a Samsung 840 250GB with TLC memory, without any hard disk installed in my system. I use it for everything from temp files to virtual machines to torrents. I even reinstalled the entire system a few times because I hopped between Linux and Windows "just because". I haven't performed any "SSD optimization" either. A purely plug&play usage, and it isn't a "boot drive" either. Furthermore, my system is always on. Not quite a normal usage I'd say.

    In 47 days of usage I've written 2.12 TiB and used 10 write cycles out of 1000. This translates to 13 years of drive life at my current usage rate.

    My usage graph + SMART data:
    http://i.imgur.com/IwWZ9Kg.png

    Temp directories alone aren't going to kill your SSD, not directly at least. It was likely caused by some anomalous write-happy application, not Windows by itself.
  • juhatus - Wednesday, April 10, 2013 - link

    What would you recommend overprovisioning for a 256GB M4 with BitLocker: 10, 15, or 25%? Also, what M4 firmware did you use to compare to the M500? And are there any benefits for the M500 with BitLocker on Windows 7? Thanks for the review, please add 25% results for the M4 too :)
  • Solid State Brain - Wednesday, April 10, 2013 - link

    Increasing overprovisioning only matters when you're continuously writing to the drive and never (or only rarely) issuing a TRIM operation, once an amount of data roughly equivalent to the free space (in practice less, depending on workload and drive conditions) has been written.

    This almost never happens in real-life usage by the target userbase of such a drive. It's a matter for servers, for those who for one reason or another (like high-definition video editing) perform many sustained writes, or for those working in an environment without TRIM support (which isn't the case for Windows 7/8, although it can be for MacOS or Linux, where it has to be manually enabled).

    AnandTech SSD benchmarks aren't very realistic for most users, and the same can be said for their OP recommendations.
