Final Words

For SSDs to become more cost effective they need to move to higher density NAND, which is often at odds with performance, endurance or both. Samsung chose to sacrifice endurance but kept performance largely intact with the vanilla 840. Given that most client workloads aren't write heavy, the tradeoff made a lot of sense. With the M500, Crucial came at the problem from the other angle: keep endurance the same, but sacrifice performance in order to hit the right cost target. In the long run I suspect it'll take a combination of both approaches, but for now that leaves us in a unique position with the M500.

The M500's performance is by no means bad, but it's definitely slower than the competition. Crucial targeted Samsung's SSD 840, yet in most cases the TLC-based 840 is faster than the M500. There's probably some room for improvement in the M500's firmware, but there's no escaping the fact that read, program and erase latencies are all higher as a result of the move to larger pages/blocks with the drive's 128Gbit NAND die. The benefit of all this should be cost, but we'll have to wait and see just how competitive the smaller capacities of the M500 are on price.
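To make the latency tradeoff concrete, a rough back-of-envelope model shows how page size and program time bound per-die write throughput. This is only a sketch: the 16KB page size matches the new 128Gbit die, but the program latency and die count below are illustrative assumptions, not Crucial's published figures.

```python
# Rough model: per-die NAND write throughput = page size / program time.
# Larger pages raise per-operation latency even when aggregate bandwidth
# is preserved by writing more data per program operation.

PAGE_SIZE_KB = 16     # 128Gbit die generation (older NAND used 8KB pages)
T_PROG_MS = 1.6       # assumed program latency per page (illustrative)
DIES = 16             # assumed number of NAND die in the drive

per_die_mb_s = (PAGE_SIZE_KB / 1024) / (T_PROG_MS / 1000)
print(f"Per-die write throughput: {per_die_mb_s:.1f} MB/s")
print(f"Aggregate across {DIES} dies: {per_die_mb_s * DIES:.0f} MB/s")
```

The same arithmetic cuts the other way on latency: a small read or program still pays the full page-sized operation cost, which is where the larger die geometry hurts.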

The saving grace when it comes to the M500's performance, at least compared to Samsung's offerings, is worst case IO consistency in a full drive state. If you have the luxury of keeping around 20% of your drive free, Samsung maintains its performance advantage. If, on the other hand, you plan on using almost all of your drive's capacity, the M500 behaves better than even the 840 Pro. It's an interesting tradeoff, and going forward I feel like we're going to have to start distinguishing between the two usage models. The M500 definitely isn't the best when it comes to delivering both high performance and consistent IO; that title continues to belong to Corsair with its Link_A_Media-based Neutron drive. But among the current crop of non-SandForce tier 1 SSD manufacturers, the M500 does reasonably well.
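When we talk about IO consistency, what matters is the shape of the IOPS distribution over time rather than the average alone. Here's a minimal sketch of the kind of analysis involved, assuming a hypothetical log of per-second IOPS samples from a sustained 4KB random write run on a full drive; the numbers are made up for illustration.

```python
import statistics

def consistency_report(iops_samples):
    """Summarize a per-second IOPS log: a drive with a high average but
    deep dips delivers worse worst-case behavior than the mean suggests."""
    samples = sorted(iops_samples)
    avg = statistics.mean(samples)
    p01 = samples[int(len(samples) * 0.01)]   # ~1st percentile (worst case)
    return {
        "avg_iops": round(avg),
        "min_iops": samples[0],
        "p01_iops": p01,
        "min_to_avg_ratio": round(samples[0] / avg, 3),
    }

# Hypothetical log: mostly steady ~5K IOPS with occasional garbage
# collection dips, the behavior consistency testing tries to expose.
log = [5000] * 95 + [400, 450, 500, 4800, 4900]
print(consistency_report(log))
```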

The encryption story on the M500 is potentially very interesting. Assuming the drive is indeed fully supported as a Windows 8 eDrive as Crucial claims, the M500 would be the obvious choice for anyone who has to run with BitLocker enabled. The prospect of seeing more SSDs with hardware encryption that can be leveraged by the OS is downright exciting. Honestly, I wasn't aware of the eDrive spec until testing the M500, but now I want to see something similar from Apple as well.
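If you want to verify that BitLocker is actually offloading encryption to the drive rather than falling back to software, Windows' manage-bde tool reports the encryption method per volume. The sketch below simply wraps that check in Python; the exact "Hardware Encryption" string match is an assumption about the tool's output format, so treat it as illustrative.

```python
import subprocess

def bitlocker_status(volume="C:"):
    """Return raw 'manage-bde -status' output for a volume (run elevated)."""
    result = subprocess.run(
        ["manage-bde", "-status", volume],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    status = bitlocker_status("C:")
    print(status)
    # On a working eDrive setup, the Encryption Method field should
    # indicate hardware encryption rather than a software AES mode.
    if "Hardware Encryption" in status:
        print("BitLocker is using the SSD's hardware encryption (eDrive).")
```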

Power consumption is another potentially good story from Crucial, assuming idle power in a notebook is truly as low as Crucial claims. Power under load is competitive with Samsung's SSD 840 Pro, and actually even lower than the vanilla 840's. Given that neither of those drives is particularly power hungry, the M500 does well here. Support for DevSleep is a nice addition. The combination of the M500's encryption support and DevSleep gives us a good idea of two platform features we should hope to see from all modern drives during this next generation.


All of this brings us to recommendation time. The easiest of the M500 drives to recommend and dismiss are the highest and lowest capacity versions, respectively. The 960GB M500 is the cheapest 1TB-class SSD I've seen to date, and it's likely the best buy if you need that much storage in a single drive. Performance still falls short of the fastest drives in this space, but if you need the capacity and plan on using all of it, the M500 is really the only game in town. I've been hammering on the 960GB very hard over the past few days, and while that's not enough time to pronounce the drive reliable, so far it's handled everything I've thrown at it very well (including our new Destroyer benchmark). I know I've personally been waiting for a good, high-capacity SSD for notebook use, and based on my options today, I'd have no issues going with the 960GB M500.

On the other side of the fence, the 120GB version sacrifices a lot of performance as a result of using only 8 NAND die in the entire drive. Unless its street price is significantly more attractive than its MSRP, I don't see a reason to choose the 120GB M500.
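The die-count math is worth spelling out. With 128Gbit (16GB) die, capacity dictates how many die the controller can write to in parallel. The sketch below illustrates the scaling, though the per-die rate and the point where interleaving stops helping are assumptions for illustration, not measured values.

```python
# 128Gbit = 16GB per die, so capacity fixes the degree of parallelism.
DIE_COUNT = {120: 8, 240: 16, 480: 32, 960: 64}

PER_DIE_WRITE_MBPS = 17   # assumed effective program rate per die
SCALING_CAP_DIES = 32     # assumed limit where interleaving stops scaling

for capacity_gb, dies in DIE_COUNT.items():
    effective = min(dies, SCALING_CAP_DIES)
    print(f"{capacity_gb:>4}GB: {dies:>2} die -> "
          f"~{effective * PER_DIE_WRITE_MBPS} MB/s sequential write")
```

With only 8 die, the 120GB model simply has fewer targets to stripe writes across, which is why it trails its larger siblings regardless of firmware tuning.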

Recommending the two middle capacities (240/480GB) will really depend on street pricing. Based on their MSRPs, these models don't appear to be any more competitive than the rest of the lineup. I suspect we'll see closer-to-840 pricing after a few weeks in the channel, at which point they may be worth another look. For now, we play the waiting game.

Comments

  • NCM - Tuesday, April 9, 2013 - link

    TRIM support is built into OS X, but disabled by default for non-Apple drives. As others have pointed out, the freeware utility 'TRIM Enabler' easily takes care of that. The only other thing to know is that some OS X updates may reset TRIM to 'off', so it's worth checking after any update and re-enabling it if necessary.

    I take care of an office full of Macs, including Mac Pros, iMacs, Minis and MacBook Pros, the majority of which have SSDs that I installed. I'm typing this on my 2010 MBP with a 512GB Plextor M3P.

    With the price of SSDs now this is a very worthwhile upgrade, and particularly one that offers a new lease on life for older computers.
  • Bkord123 - Tuesday, April 9, 2013 - link

    All of these comments are going to make my wife mad when I buy yet another gadget! I'm not as worried now about the TRIM issue. Btw, does this site have a page that ranks hard drives? I did look and didn't see anything here.
  • jamyryals - Tuesday, April 9, 2013 - link

    Anand has a Bench utility you can use to compare devices. Here are two popular, reliable drives -
    http://www.anandtech.com/bench/Product/792?vs=743
  • glugglug - Tuesday, April 9, 2013 - link

    With most SSDs no longer using 4KB pages, does it make sense to have 8KB and 16KB random write tests?

    Also, does application performance improve if the drives are formatted with an 8KB or 16KB cluster size?
  • Kristian Vättö - Tuesday, April 9, 2013 - link

    Most real world IOs are 4KB.
  • glugglug - Tuesday, April 9, 2013 - link

    Not true, even with the default 4KB cluster size the drives get formatted with. If you format with 16KB clusters, *none* of the IOs will be 4KB.
  • Kristian Vättö - Tuesday, April 9, 2013 - link

    Based on the workloads we've traced (using the default cluster size), 4KB is the most common IO size, although it obviously varies and some workloads may consist of larger IO sizes. Do you have something that backs up your statement? Would be interesting to see that.
  • glugglug - Tuesday, April 9, 2013 - link

    According to the table in the article, for the Anandtech 2011 Heavy Workload, 28% of the IOs are 4KB, not "most".

    I am thinking that what must happen for a 4KB IO on a drive with 16KB pages is that it has to read the current contents of the 16KB page so that the 4KB being rewritten can be merged into it, then write a full 16KB page, so each write really ends up being a read + write operation, not just the write by itself (see the sketch after the comments).

    Worse, when TRIM is used, if the TRIM operation covers only 4KB of a 16KB page, the page can't really be trimmed because the other 12KB might still be in use, and the drive firmware can't know for certain. So having the cluster size match or exceed the drive's page size might result in better steady-state performance over time, because TRIM doesn't lose track of partial pages.
  • Tjalve - Wednesday, April 10, 2013 - link

    I think there is some caching involved when dealing with writes that are smaller than the page size of the NAND. I would guess that the M500 caches in DRAM. There are other vendors that use the onboard flash for caching, like SanDisk's nCache for example.
  • glugglug - Wednesday, April 10, 2013 - link

    For some SSDs that is definitely the case. I'm pretty sure SandForce needed to do it, for example, both because the compression makes the size of the flash writes unpredictable, and because if you look at the cluster sizes the chipset supports to go with various obscure controllers, it's kind of nuts.

    I don't think that is the case here though, because if you multiply the marketed 4KB random write numbers by 4KB, you pretty much get exactly the sequential write speed, and write-back caching to deal with the smaller writes would result in much better sequential performance.
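To make the read-modify-write behavior discussed in the comments above concrete, here's a minimal simulation of sub-page writes on NAND with 16KB pages. It's only a sketch under the simplifying assumption that every partial-page write triggers an immediate merge (no write-back caching, which, as noted above, a real controller may use); the page size and IO mix are illustrative.

```python
PAGE_KB = 16  # page size of the hypothetical 128Gbit-class NAND

def nand_traffic(write_sizes_kb):
    """Tally host writes vs. actual NAND traffic, assuming every
    partial-page write forces a read-merge-program of the full page."""
    host = nand_written = nand_read = 0
    for size in write_sizes_kb:
        host += size
        pages = -(-size // PAGE_KB)        # ceiling division: pages touched
        nand_written += pages * PAGE_KB    # whole pages must be programmed
        if size % PAGE_KB:                 # partial page -> merge read first
            nand_read += PAGE_KB
    return host, nand_written, nand_read

host, wr, rd = nand_traffic([4] * 1000)    # 1000 random 4KB writes
print(f"host: {host}KB, NAND programmed: {wr}KB "
      f"({wr / host:.1f}x amplification), merge reads: {rd}KB")
```

Under these assumptions, 4KB writes onto 16KB pages produce 4x write amplification plus an equal volume of merge reads, which is consistent with the intuition that matching the cluster size to the page size could help.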
