Final Words

For SSDs to become more cost effective they need to move to higher density NAND, which is often at odds with performance, endurance, or both. Samsung chose to give up endurance with the vanilla 840, but kept performance largely intact. Given that most client workloads aren't write heavy, the tradeoff made a lot of sense. With the M500, Crucial came at the problem from the other angle: keep endurance the same, but sacrifice performance in order to hit the right cost target. In the long run I suspect it'll take a combination of both approaches, but for now that leaves us in a unique position with the M500.

The M500's performance is by no means bad, but it's definitely slower than the competition. Crucial targeted Samsung's SSD 840, yet in most cases the TLC-based 840 is faster than the M500. There's probably some room for improvement in the M500's firmware, but there's no escaping the fact that read, program and erase latencies are all higher as a result of the move to larger pages and blocks with the drive's 128Gbit NAND die. The benefit of all of this should be cost, but we'll have to wait and see just how competitive the smaller capacities of the M500 end up being on that front.

The saving grace for the M500's performance, at least compared to Samsung's offerings, is worst-case IO consistency in a full-drive state. If you have the luxury of keeping around 20% of your drive free, Samsung maintains its performance advantage. If, on the other hand, you plan on using almost all of your drive's capacity, the M500 actually behaves better than even the 840 Pro. It's an interesting tradeoff, and going forward I feel like we're going to have to start distinguishing between these two usage models. The M500 definitely isn't the best when it comes to delivering both high performance and consistent IO; that title continues to belong to Corsair with its Link_A_Media based Neutron drive. But among the current crop of non-SandForce tier 1 SSD manufacturers, the M500 does reasonably well.

The encryption story on the M500 is potentially very interesting. Assuming the drive is indeed fully supported as a Windows 8 eDrive, as Crucial claims, the M500 would be the obvious choice for anyone who has to run with BitLocker enabled. The prospect of seeing more SSDs with hardware encryption that can be leveraged by the OS is downright exciting. Honestly, I wasn't aware of the eDrive spec until testing the M500, but now I want to see something similar from Apple as well.

Power consumption is another potentially good story from Crucial, assuming idle power in a notebook is truly as low as the company claims. Power under load is competitive with Samsung's SSD 840 Pro, and actually even lower than the vanilla 840's. Given that neither of those drives is particularly power hungry, the M500 does well here. Support for DevSleep is a nice addition. The combination of the M500's encryption support and DevSleep gives us a good idea of two platform features we should hope to see from all modern drives in this next generation.

All of this brings us to recommendation time. The easiest of the M500 drives to recommend and to dismiss are the highest and lowest capacity versions, respectively. The 960GB M500 is the cheapest 1TB-class SSD I've seen to date, and it's likely the best buy if you need that much storage in a single drive. Performance still falls short of the fastest drives in this space, but if you need the capacity and plan on using all of it, the M500 is really the only game in town. I've been hammering on the 960GB drive very hard over the past few days, and while that isn't long enough to pronounce it reliable, so far it has handled everything I've thrown at it (including our new Destroyer benchmark) very well. I know I've personally been waiting for a good, high-capacity SSD for notebook use, and based on my options today I'd have no issues going with the 960GB M500.

On the other side of the fence, the 120GB version sacrifices a lot of performance as a result of using only eight NAND die in the entire drive. Unless its street price ends up significantly more attractive than its MSRP, I don't see a reason to choose the 120GB M500.

Recommending the two middle capacities (240/480GB) will really depend on street pricing. Based on their MSRPs, these models don't appear to be any more competitive. I suspect we'll see closer-to-840 pricing after a few weeks in the channel, at which point they may be worth another look. For now, we play the waiting game.

Comments

  • Solid State Brain - Saturday, April 13, 2013 - link

    In theory, the spare area can only be configured on a clean drive, which means one would have to secure erase it (and therefore lose all data) and then create a partition smaller than the drive's maximum user capacity. The remaining unused (raw, unpartitioned) capacity should then be used by the drive as spare area for wear leveling operations, in addition to the factory OP area (usually derived from the GiB->GB capacity difference). In practice it *should* be sufficient to notify the drive that the empty space is actually empty with a TRIM command before resizing the partition.

    In your case the Samsung Magician software allows you to double the drive's factory spare area (no other adjustment is possible, at least in version 4). It doesn't perform a secure erase, so perhaps one isn't really necessary after all.

    I don't know, however, whether the Samsung 840 controller actively detects when a certain portion of the drive is raw/unpartitioned. Theory dictates that it shouldn't be able to discern that without the OS somehow telling it so.

    If a partition-wide TRIM operation is enough, then one can increase overprovisioning manually on a live/used system by:

    1) Performing a full-system TRIM with the Windows 8 integrated "drive defrag/optimization" tool (or with the "fstrim" command line tool on Linux, although this works only on ext4 partitions), or with dedicated third party utilities (some commercial defragmentation software performs a system-wide TRIM on SSDs instead of a regular defrag).
    2) Resizing the last partition manually with Computer Management > Disk Management > Shrink Partition.

    Anyway, in practice all this hassle is only going to benefit you if you routinely perform dozens of gigabytes of sustained writes per day, possibly in a TRIM-less environment. I doubt very much that most users would be able to feel any difference with their workloads.
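
    As a rough illustration of the numbers involved (a minimal Python sketch using hypothetical capacities, not figures measured from the drives reviewed here), the effective spare area from the factory GiB->GB gap plus any unpartitioned space can be estimated like this:

    ```python
    # Minimal sketch: estimate effective over-provisioning (OP) on an SSD,
    # assuming the drive keeps the GiB->GB capacity gap as factory spare area
    # and counts unpartitioned (TRIMmed) space as additional spare area.
    # All capacities below are hypothetical examples.

    GIB = 1024 ** 3   # binary gigabyte (GiB): how raw NAND is counted
    GB = 1000 ** 3    # decimal gigabyte (GB): how user capacity is advertised

    def effective_op(raw_nand_gib, partitioned_gb):
        """Spare area as a fraction of total NAND capacity."""
        raw_bytes = raw_nand_gib * GIB      # total NAND on board
        user_bytes = partitioned_gb * GB    # space exposed to the filesystem
        return (raw_bytes - user_bytes) / raw_bytes

    # Factory OP only: e.g. 256 GiB of NAND fully partitioned as 250 GB.
    print(f"factory OP:          {effective_op(256, 250):.1%}")   # ~9%

    # Shrinking the partition to leave ~50 GB unpartitioned roughly triples it.
    print(f"with 50 GB left raw: {effective_op(256, 200):.1%}")   # ~27%
    ```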
  • AlB80 - Saturday, April 13, 2013 - link

    "Total NAND on-board" and "DRAM" values are specified in "GB" and "MB", but it should be "GiB" and "MiB".
  • JellyRoll - Saturday, April 13, 2013 - link

    Shut up JohnW lol
  • JellyRoll - Saturday, April 13, 2013 - link

    There is a huge misstatement in the article: "I introduced a new method of characterizing performance: looking at the latency of individual operations over time."
    First: it isn't individual operations; several thousand operations take place in each one-second interval.
    Second: Anand did not introduce this type of testing; it was a blatant copy of another tech website's testing.
  • twtech - Sunday, April 14, 2013 - link

    I think it's kind of interesting that in the comments, people are looking at the performance figures and saying, "Oh, it doesn't perform as well as a Samsung 840 Pro, so I'm disappointed."

    I have a couple of computers booting off an M4 (slower than the M500), and one that has a Samsung 830 as the boot drive. The Samsung is quite a bit faster in benchmarks, but do I notice? Nope, not really. The jump to having any SSD at all is significant. The jump from one SSD to another - provided neither has something like firmware issues causing stuttering, as some old models did - is negligible.

    I think the more important factor here is that we have a nearly 1TB SSD for $600, less than what 512GB drives were selling for a year ago. That's big enough that many users may not even need a separate mechanical storage drive.
  • JellyRoll - Sunday, April 14, 2013 - link

    Part of the issue is the unrealistic test parameters. Testing with such ridiculously severe workloads is not representative of real-world use.
  • Wolfpup - Monday, April 15, 2013 - link

    Unfortunately I couldn't wait for the launch of the M500... had to "make do" with a 512GB M4. Oh well, it's still a great drive!
  • random2 - Monday, April 15, 2013 - link

    I cannot imagine anyone who doesn't have some sort of tech background trying to read these articles. Granted, I am no certified IT professional, but I have been very interested in hardware and software for over a decade and have been a reader of AnandTech for almost as long. Which brings me to this: can we not have some of the terms, abbreviated or otherwise, hyperlinked to an article providing further explanation?

    Case in point: ONFI 3.0.
  • af3 - Tuesday, April 16, 2013 - link

    I was thinking of ordering a $350 256GB LaCie Thunderbolt Rugged external SSD for the purpose of booting another OS without needing to use space on my internal/main (SSD) drive.

    Can anyone tell me whether there might be a superior (in terms of performance and cost) alternative that might utilize something like one of these new Micron drives?

    Does anyone know whether or not the LaCie is fast, and whether or not I might do better by getting another external Thunderbolt enclosure and installing one of these Micron drives?
