A Word on Reliability

The M3 and M3 Pro came with 5-year warranties, but the M5S only comes with a 3-year warranty. The quick conclusion would be that the M5S must be inferior in some way because Plextor would not give it the same 5-year warranty as before. Plextor, of course, claims that the change in warranty is only meant to differentiate the entry-level M5S from a future performance model. I don't doubt that, and I actually wanted Plextor to provide some differentiation between their SSDs because the M3 and M3 Pro were too similar. Offering different warranties is one good way to do that.

Plextor says that all their SSDs go through the same validation process, regardless of the series. I don't know the exact specifics of their testing methods, but according to their website all SSDs are tested for 20 hours in a high-temperature burn-in test. Plextor also claims that their average annual failure rate is 0.5%. That's actually believable: according to third party data, Intel has had return rates as low as 0.1%, though the 8MB bug increased the rate of returns. I checked NewEgg reviews for Plextor's M3 and M3 Pro and only 4.2% of the reviews (189 user reviews in total) were one or two eggs, which usually indicates a serious problem with the drive. That figure is roughly on par with Intel's and Samsung's; I didn't calculate the exact figures for them, but a quick look at NewEgg shows that their drives draw around 3-7% one or two-egg reviews. Of course, NewEgg user reviews are not the most reliable source and the sample is too small, but they give us some insight into reliability.

Plextor M3

I've been using the 256GB M3 as my boot drive since the review went live and I haven't had a single issue with the drive. One man's experience is of course not enough to declare that a drive is reliable, but I think it's safe to say that at least the M3 and M3 Pro are not plagued with issues. Assuming that the M5S follows the same path, there should be nothing to worry about.

Comments

  • shodanshok - Wednesday, July 18, 2012 - link

    Hi Kristian,
    thank you for your reply.

    I understand that measuring WA is your "special sauce" (anything to do with the SMART 0xE6-0xF1 attributes? ;)), but the interesting thing is that Plextor was able to minimize WA while, at the same time, maximizing idle GC efficiency.

    Other drives that heavily use GC (e.g. Toshiba and earlier Indilinx controllers) seem to cause a much higher WA.

    Thank you for these comprehensive reviews.
  • sheh - Thursday, July 19, 2012 - link

    Thanks.

    I have to say, though, that it's difficult to give credence to data that is the result of undisclosed calculations, and not even ones performed by the hardware manufacturers.
  • Kristian Vättö - Thursday, July 19, 2012 - link

    The method we use was disclosed by a big SSD manufacturer a few years ago. It does not rely on SMART or power consumption, and it can be run on any drive.

    If we revealed the method we use, we would basically be giving it out to every other site. The tech industry is quite insolent about "stealing" nowadays; getting content from other sites without giving credit seems to be fine by today's standards.

    Also, our method is just one way of estimating worst case write amplification.
  • shodanshok - Thursday, July 19, 2012 - link

    Hi Kristian
    I totally understand your point.

    Thank you for these great reviews ;)
  • sheh - Thursday, July 19, 2012 - link

    I can't say I understand this logic, but so be it. Thanks for replying. :)
  • jwilliams4200 - Sunday, July 22, 2012 - link

    Does it work for Sandforce SSDs? Because I noticed your WA chart does not have any Sandforce SSDs.

    Are you just measuring the fresh out-of-box (or secure erase) write speed with HD Tune, then torturing the drives and then measuring the worst case write speed with HD Tune? Then saying WA = FOB write speed / worst-case write speed? (A sketch of this ratio approach appears after the comments.)

    If that is what you are doing, then I don't think it is very accurate. Any SSD that has aggressive background garbage collection could make the "worst case" write speed fluctuate or stabilize at a value that does not reflect the worst case write amplification.
  • Kristian Vättö - Sunday, July 22, 2012 - link

    SandForce drives break the chart, hence I couldn't include any. SandForce drives typically have a worst-case WA of around 2x, though.

    I still cannot say what our testing methods are. Anand has made the decision that he doesn't want to share the method and I have to respect that. You can email him and ask about our method - I can't share our methods without his permission.

    In the end it's an estimate, nothing more. It's hard to say how accurate it is, as that will vary depending on usage.
  • jwilliams4200 - Monday, July 23, 2012 - link

    So it is TERRIBLY inaccurate, because Sandforce SSDs actually have worst case write amplification of well over 10, just like other SSDs.

    In that case, I assume I was correct that you are just using the ratio of write speeds from HD Tune, but since HD Tune writes highly compressible data, you are getting bogus results for Sandforce SSDs (actually, I should say, even more inaccurate results for Sandforce SSDs than for non-Sandforce ones).

    Anand really needs to reconsider some of his policies. This "secret" test method is just absurd.
  • jwilliams4200 - Wednesday, July 18, 2012 - link

    It all hinges on finding a way of measuring "flash writes", the amount erased/written to flash chips, as opposed to "host writes", which is easy to measure (the amount your computer writes to the SSD).

    Usually you can find or guess which one of the SMART attributes represents flash writes. You can start by doing large sequential writes to the SSD (for which the WA should be close to, but a little over, 1) and monitoring the SMART attributes to see which one changes like it is monitoring flash writes.

    I remember some time ago an AnandTech article mentioned another way of doing it. I'm not sure if they are using this method now or not (I have my doubts about its accuracy). It had to do with measuring the power usage and somehow correlating that to how much writing to flash is occurring. The reason I have doubts is that it would require measuring a sort of "baseline" power consumption when writing to the flash, and to get the baseline you would have to control the conditions of the write (for example, doing it right after a secure erase) so that you can guess/assume what the WA is, which would then let you compute the WA in more complicated conditions based on the "baseline". But that is rather like pulling yourself up by your own bootstraps, so I would not trust the results.

    The first method I described is the way to go, unless the SSD does not have a SMART attribute that measures flash writes. (A sketch of that approach appears after the comments.)
  • cserwin - Wednesday, July 18, 2012 - link

    I have to say, seeing the Plextor brand name resurface kindles a warm, happy feeling.

    There was a time when they made the optical drives to have. A Plextor CD-ROM, a 3DFX Voodoo, a 17" Sony Trinitron, an IBM Deskstar...

    Good luck, Plextor. Nice to see the old school still kickin.
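
As a footnote to the thread above, jwilliams4200's guessed-at ratio method is easy to express in code. The Python sketch below implements only that guess (it is not AnandTech's actual, undisclosed method), and the speeds in the example are made-up numbers that you would have to measure yourself, e.g. with HD Tune.

    # Hypothetical worst-case write amplification estimate from two measured
    # sequential write speeds: the ratio method speculated about in the
    # comments. Both inputs must be measured by the user; nothing here
    # reflects AnandTech's actual methodology.

    def estimate_worst_case_wa(fob_mb_s: float, worst_case_mb_s: float) -> float:
        """Estimate worst-case WA as fresh-out-of-box write speed divided
        by the write speed measured after torturing the drive."""
        if worst_case_mb_s <= 0:
            raise ValueError("worst-case write speed must be positive")
        return fob_mb_s / worst_case_mb_s

    # Made-up example: 400 MB/s fresh, 40 MB/s after torture suggests ~10x WA.
    print(estimate_worst_case_wa(400.0, 40.0))  # -> 10.0

Note the caveats raised in the thread: aggressive idle garbage collection can make the degraded speed fluctuate, and on SandForce drives the highly compressible data HD Tune writes would make the ratio meaningless.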
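
The SMART-based approach jwilliams4200 prefers can be sketched as well. This is a minimal sketch, assuming a system with smartctl (from smartmontools) installed, and it uses placeholder attribute IDs: 241 for host writes and 233 for NAND writes. Both IDs are vendor-specific assumptions, so identify the right attributes first, for example with the large-sequential-write calibration the comment describes (where WA should be just over 1).

    # A minimal sketch of estimating WA = flash writes / host writes from
    # SMART counters. Attribute IDs 241 and 233 are assumptions that vary
    # by vendor; verify them by doing large sequential writes and watching
    # which raw counters advance accordingly.

    import re
    import subprocess

    HOST_WRITES_ATTR = 241   # often Total_LBAs_Written (vendor-specific)
    NAND_WRITES_ATTR = 233   # hypothetical NAND-writes counter; check your drive

    def read_smart_raw(device: str, attr_id: int) -> int:
        """Return the raw value of one SMART attribute via 'smartctl -A'."""
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            fields = line.split()
            if fields and fields[0] == str(attr_id):
                match = re.match(r"\d+", fields[-1])  # raw value, digits only
                if match:
                    return int(match.group())
        raise KeyError(f"attribute {attr_id} not reported by {device}")

    def estimate_wa(device: str, workload) -> float:
        """Snapshot both counters, run the workload, snapshot again, and
        return delta(flash writes) / delta(host writes). Assumes both
        counters count in the same unit; real drives often need scaling."""
        host_before = read_smart_raw(device, HOST_WRITES_ATTR)
        nand_before = read_smart_raw(device, NAND_WRITES_ATTR)
        workload()  # e.g. a sustained 4KB random write torture run
        host_after = read_smart_raw(device, HOST_WRITES_ATTR)
        nand_after = read_smart_raw(device, NAND_WRITES_ATTR)
        return (nand_after - nand_before) / (host_after - host_before)

As the comment notes, this only works when the drive actually exposes a flash-writes counter; the power-measurement alternative is much harder to calibrate.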
