Final Words

My thoughts about the M600 are mixed. On the one hand, I am happy to see that Micron is showing its commitment to the client market by investing in features like Dynamic Write Acceleration because, to be honest, Micron has not really introduced anything new to its client SSDs since the M500. Innovation in the client segment is difficult because the market is so cost driven, and even though a pseudo-SLC cache is nothing new, Micron's way of implementing it is.

On the other hand, I am a bit disappointed by the performance of the M600 and especially Dynamic Write Acceleration. In theory Dynamic Write Acceleration sounds great because it should provide the maximum acceleration capacity under every circumstance and thus maximize performance, but the truth is that the speed improvements over the MX100 are minimal. On top of that, the M600 is actually outperformed by the 840 EVO, which utilizes TLC NAND with smaller SLC caches. It is not that the M600 is a slow or bad drive, not at all; it is just that I expected a bit more given the combination of MLC NAND and a dynamic SLC cache.

The positive side of Dynamic Write Acceleration is the increased endurance. While 72TB was without a doubt enough for average client workloads, it is never a bad thing to have more. OEMs in particular tend to appreciate higher endurance because it is associated with higher reliability, and it also opens a wider market for the M600 as it can be used in workstation setups without having to worry about drives wearing out. Of course, I would pick a faster drive like the 850 Pro for workstation use, but for OEMs the cost tends to be more important.

NewEgg Price Comparison (9/28/2014)

                        120/128GB   240/256GB   480/512GB   960GB/1TB
Micron M600             $80         $140        $260        $450
Crucial MX100           $75         $112        $213        -
Crucial M550             $90         $150        $272        $480
SanDisk Ultra II        $80         $110        $200        $433
SanDisk Extreme Pro     -           $190        $370        $590
Samsung SSD 850 Pro     $120        $200        $380        $700
Samsung SSD 840 EVO     $82         $140        $236        $500
OCZ ARC 100             $75         $120        $240        -
Plextor M6S             $75         $135        $280        -
Intel SSD 530           $84         $140        $250        -

Since the M600 is an OEM-only product, it will not be available through the usual retail channels. Pricing will depend heavily on the quantity ordered, so the prices in the table above are just approximate one-unit prices that Micron provided us. The M600 carries a price premium over the MX100, but I suspect that in high volumes the M600's pricing should drop close to MX100 levels, perhaps even lower.

All in all, I would have liked to see Micron going after Samsung's 850 Pro and SanDisk's Extreme Pro with the M600, but I do see the logic behind sticking to the high volume mainstream market. In terms of performance, features, and price, the M600 is a solid product and I am certain that PC OEMs will see the appeal in MLC NAND and high endurance over competitors' TLC offerings, especially in the more professional-oriented PC segments.

It will nevertheless be interesting to see how the separation of retail and OEM product teams plays out for Micron. I am eager to see whether Micron can optimize Dynamic Write Acceleration for heavier workloads and finally provide competition in the high-end SSD market as well. For now, this is a good first step, but it might take a revision or two before Dynamic Write Acceleration can reach its full potential.

Comments

  • Kristian Vättö - Monday, September 29, 2014 - link

    I thought I had that there, but it looks like I forgot to add it in a hurry. Anyway, I've added it now :)
  • MarcHFR - Tuesday, September 30, 2014 - link

    Shodanshok,

    In fact DWA is not better for endurance, it's worse.
    - Writing random writes in sequential form is already done on all SSDs via write combining.
    - DWA increases write amplification since the data is first written in "SLC" mode and then rewritten in "MLC" mode.

    For 2 bits of data:
    - 2 cells are used in SLC mode
    - then 1 cell is used in MLC mode
    vs.
    - 1 cell is used in MLC mode without DWA

    Since write speed is rarely a problem in daily usage and since there is a downside, I don't understand the positive reception for TurboWrite, nCache 2, Dynamic Write Acceleration, etc...
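
    A quick back-of-the-envelope sketch of that cell accounting (a toy model with assumed behavior, not Micron's actual firmware):

    # Toy model of NAND cell programs for 2 bits of host data, with and
    # without a pSLC staging step. Assumed behavior, not Micron's firmware.
    BITS = 2

    # With Dynamic Write Acceleration: data lands in pSLC first (1 bit per
    # cell), then is folded into MLC (2 bits per cell) during idle time.
    pslc_programs = BITS // 1      # 2 cell programs in SLC mode
    mlc_programs = BITS // 2       # 1 cell program in MLC mode
    programs_with_dwa = pslc_programs + mlc_programs

    # Without DWA: data goes straight to MLC.
    programs_without_dwa = BITS // 2

    print(programs_with_dwa)       # 3
    print(programs_without_dwa)    # 1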
  • shodanshok - Tuesday, September 30, 2014 - link

    Hi,
    it really depends on how the write acceleration is implemented. While it is true that a badly designed WA cache can have a negative effect on flash endurance, a well-designed one (under a favorable workload) can lessen the load on the flash as a whole.

    Micron is not discussing its pSLC implementation in detail, so let's talk about SanDisk's nCache, which is better understood at the moment.

    nCache works by reserving a fixed portion of each NAND die for pSLC. This pSLC slice, while built on top of MLC cells, is good for, say, 10X the cycles of standard MLC (so ~30,000 cycles). The reason is simple: by using the cells as SLC, you have a much higher margin for voltage drop.

    Now, let's follow a write down to the flash. When a write arrives at the drive, the new data is placed in the pSLC array. After that we have two possibilities:

    1. No new write for the same LBA arrives within a short time, so the pSLC array is flushed to the main MLC portion. Total writes with WA: 2 (1 pSLC / 1 MLC) - without WA: 1 (MLC).

    2. A new write for the same LBA is recorded _before_ the pSLC array is flushed, so the new write overwrites the data stored in the pSLC portion. After some idle time, the pSLC array is flushed to the MLC one. Total writes with WA: 3 (2 pSLC / 1 MLC) - without WA: 2 (MLC).

    In the rewrite scenario (no. 2) the MLC portion sees only a single write vs. the two MLC writes of the no-WA drive. While it is true that the pSLC portion sustains increased stress, its longevity is much greater than that of the main MLC array, so it is not a problem if its cycles are "eaten" faster. On the other hand, the MLC array is much more prone to wear, so any decrease in writes is very welcome.

    This rewrite behavior is the exact reason behind SanDisk's quoted write amplification number, which is only 0.8: without write acceleration, a write amplification of less than 1.0 can only be achieved with some compression/deduplication scheme.
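
    A toy calculation of that effect (the rewrite-absorption rates below are assumptions, not SanDisk's published methodology):

    # Toy model: write amplification seen by the main MLC array when a
    # pSLC cache absorbs rewrites before they are flushed. The absorption
    # rates are assumptions, not SanDisk's published methodology.
    def mlc_write_amplification(absorbed_fraction):
        # absorbed_fraction: share of host writes overwritten in pSLC
        # before the flush, so they never reach the MLC array.
        host_writes = 1.0
        mlc_writes = host_writes * (1.0 - absorbed_fraction)
        return mlc_writes / host_writes

    def total_nand_writes_per_host_write(absorbed_fraction):
        # Every host write is programmed once in pSLC; only the surviving
        # versions are folded into MLC afterwards.
        return 1.0 + (1.0 - absorbed_fraction)

    for r in (0.0, 0.2, 0.5):
        print(f"absorbed={r:.0%}  MLC WA={mlc_write_amplification(r):.2f}  "
              f"total NAND writes={total_nand_writes_per_host_write(r):.2f}")
    # With 20% of host writes absorbed, the MLC array sees a write
    # amplification of 0.80, while the pSLC area still takes one program
    # per host write.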

    Regards.
  • MarcHFR - Tuesday, September 30, 2014 - link

    As you said, it really depends on the workload for nCache 2.0: write vs. rewrite.

    But another point of view is that, for example, a 120GB Ultra II with 5GB of nCache 2.0 could be a 135GB Ultra II without NAND dies reserved for nCache 2.0.
  • shodanshok - Tuesday, September 30, 2014 - link

    True, but rewrites are quite pervasive.

    For example, any modern, journaled filesystem will constantly rewrite an on-disk circular buffer.
    Databases use a similar concept (double-write) with another on-disk circular buffer.
    The swapfile is constantly rewritten.
    ...

    Anyway, it surely remains a matter of favorable vs. unfavorable workloads.

    Regards.
  • Cerb - Tuesday, September 30, 2014 - link

    Only some filesystems, usually with non-default settings, will double-write any file data, though. What most do is some form of metadata journaling, where new writes preferably go into free space (one more reason not to fill your drives all the way up!) and the journal logs the writing of the new state. But the data itself is not in the journal. EXT3/4 can be set to write data twice, but don't by default. NTFS, JFS, and XFS, among others, simply don't have such a feature at all. So the additional writing is pretty minimal, being just metadata. You're not likely to be writing GBs/day to the FS journal.

    Databases generally should write everything twice, though, so that they are never in an unrecoverable state, provided the hardware is functioning correctly.
  • AnnonymousCoward - Monday, September 29, 2014 - link

    I have yet to get an answer to this question: what's the point of doing purely synthetic and relative-performance tests, and how does that tell the reader the tangible difference between these drives?

    You don't test video cards in terms of IOPS or how fast they pound through a made-up suite. You test what matters: fps.

    You also test what matters for CPUs: encoding time, gaming fps, or CAD filter time.

    With phones, you test actual battery time or actual page loading time.

    With SSDs, why would you not test things like how fast Windows loads, program load time, and time to transfer files? That matters more than any of the current tests! Where am I going wrong, Kristian?
  • Kristian Vättö - Monday, September 29, 2014 - link

    Proper real-world testing is subject to too many variables to be truly reproducible and accurate. Testing Windows boot time and app load time is something that can be done, but the fact is that in a real-world scenario you will have more than one app running at a time and a countless number of Windows background processes. Once more variables are introduced to the test, the results become less accurate unless all variables can be accurately measured, which cannot really be done (at least not without extensive knowledge of Windows' architecture).

    The reasoning is the same as why we don't test real-time or multiplayer gaming performance: the test scenarios are not fully reproducible unless the test is scripted to run the exact same scenario over and over again (like the built-in game benchmarks and our Storage Benches).

    That said, I've been working on making our Storage Bench data more relevant to real world usage and I already have a plan on how to do that. It won't change the limitations of the test (it's still trace-based with no TRIM available, unfortunately), but I hope to present the data in a way that is more relevant than just pure MB/s.
  • AnnonymousCoward - Tuesday, September 30, 2014 - link

    Thanks for your reply.

    You said it yourself: boot time and app load time can be done. These are 2 of the top 5 reasons people buy SSDs. To get around the "uncontrolled" nature, just do multiple trials and take the average.

    Add a 3rd test: app load time while heavy background activity is going on, such as copying a 5GB file to an HDD.

    4th test: IrfanView batch conversion; time to re-save 100 JPEG files.

    All of those can be done on a fresh Windows install with minimal variables.
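
    A minimal sketch of such a harness (the command is a placeholder for whatever workload is being timed, e.g. a batch conversion or file copy):

    # Rough sketch: run a disk-bound workload several times and report the
    # average wall-clock time. COMMAND is a placeholder, not a real tool.
    import statistics
    import subprocess
    import time

    COMMAND = ["placeholder.exe", "--batch-job"]   # hypothetical workload
    TRIALS = 5

    def timed_run(cmd):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)            # wait until it finishes
        return time.perf_counter() - start

    times = [timed_run(COMMAND) for _ in range(TRIALS)]
    print(f"mean: {statistics.mean(times):.2f}s  "
          f"stdev: {statistics.stdev(times):.2f}s")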
  • AnnonymousCoward - Tuesday, September 30, 2014 - link

    To expand on my 3rd test: kick off a program that scans your hard drive (like anti-spyware or anti-virus) and then test app load time.

    You might be overestimating the amount of disk transfers that go on during normal computer usage. Right now, for example, I've got 7 programs open, and task manager shows 0% CPU usage on all 4 cores. It takes the same time to launch any app now as when I have 0 other programs open. So I think the test set I described would be quite representative of real life, and a massive benefit over what you're currently testing.
