Miscellaneous Aspects and Concluding Remarks

The My Passport SSD targets the mainstream market, and its performance in our benchmarks is understandable in that light. However, there are a few things in its favor - the value additions such as hardware encryption and backup software from WD, as well as the price. Before discussing the pros and cons, it is worth examining a few other relevant aspects of the product.

USB Host and Encryption Impacts

Our tests above show that we were never able to hit WD's claimed 500+ MBps numbers, even in artificial benchmarks. In order to identify whether the USB 3.1 Gen 2 host in our testbed (the Intel Alpine Ridge controller) had any negative impact on the device, we ran CrystalDiskMark using one of the USB 3.1 Gen 2 Type-C ports in the Zotac CI523 nano. The USB 3.1 Type-C ports in that PC are enabled by an ASMedia ASM1142 controller.

The above numbers are definitely a step up from what we got with our testbed's USB 3.1 Gen 2 port, indicating that the My Passport SSD is likely to perform better when attached to an ASMedia USB 3.1 Gen 2 controller (a 25 - 35 MBps gain in sequential workloads, and a significant improvement in write IOPS in random workloads).

We also used this opportunity to test the impact of enabling the TCG Opal 2.0 features of the SSD using the WD Security software. A password is used to protect the contents of the drive, and the software is very simple to use (geared towards the mainstream consumer). The picture below presents some screenshots from the WD Security software.

The CrystalDiskMark workloads were then run on the password-protected drive.

Comparing the numbers with the 'no encryption' case above shows that enabling the hardware encryption has no discernible impact on performance. The numbers for both cases are within the run-to-run variations that one usually expects from CrystalDiskMark.

TRIM Support

SanDisk has never enabled TRIM on any of its external SSDs until now. The WD My Passport SSD finally changes that. The bridge chip is able to map SCSI Unmap commands to TRIM, as our test below shows. Note that the SSD had to be formatted in NTFS for the manual TRIM command to work.

Triggering a manual TRIM on an exFAT-formatted SSD returned an indeterminate status.
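For readers who want to reproduce the manual TRIM test, Windows exposes a retrim operation through PowerShell's Optimize-Volume cmdlet. The sketch below wraps it in Python; the drive letter is a placeholder, and an elevated prompt plus an NTFS-formatted volume are assumed:

```python
import subprocess

def build_retrim_command(drive_letter: str) -> list[str]:
    """Build a PowerShell Optimize-Volume invocation that asks the OS to
    re-send TRIM (translated to SCSI Unmap over the USB bridge) for all
    free space on the volume."""
    return [
        "powershell", "-NoProfile", "-Command",
        f"Optimize-Volume -DriveLetter {drive_letter} -ReTrim -Verbose",
    ]

def retrim(drive_letter: str) -> None:
    # Requires administrator rights; on the exFAT-formatted drive the
    # same operation returned an indeterminate status in our testing.
    subprocess.run(build_retrim_command(drive_letter), check=True)
```

On a volume whose backing device does not accept the translated commands, the cmdlet reports that the operation is not supported rather than silently succeeding.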

Concluding Remarks

The My Passport SSD completes Western Digital's efforts to segment the external SSD market between its three brands - WD, SanDisk, and G-Technology.

The performance of the My Passport SSD is good enough for the mainstream market, though power users might be disappointed. Western Digital needs to pay more attention to the thermal design for performance consistency. UASP also needs to be enabled in the firmware for the unit to match the performance of similar external SSDs in the market. Western Digital indicated that UASP could be enabled in future production runs, but no concrete timeframe was provided.

TRIM support makes the My Passport SSD a great choice for a portable OS drive. The product is based on the SanDisk X400 TLC SSD, and this allows for an attractive price point. The MSRP is $400 for the 1TB variant, but we have seen the Best Buy price fluctuate between $350 and $400 during the time that we tracked it.

Price per GB

The M.2 version of the SanDisk 1TB X400 has a street price of around $330. An M.2 to USB 3.1 Gen 2 enclosure (such as the StarTech SM21BMU31C3) has a street price of $32. The My Passport SSD has a better industrial design, a few software value additions for backup, and hardware encryption (not available in the retail M.2 version of the X400). Given the $350 - $400 price range for the My Passport SSD, it is a toss-up between the two options. Mainstream consumers want a plug-and-play solution, and that gives the My Passport SSD a slight edge in its target market.
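On a price-per-GB basis, the two options land very close together. A quick sketch using the street prices quoted above (treating the 1TB drives as 1000 GB for simplicity):

```python
def price_per_gb(price_usd: float, capacity_gb: int) -> float:
    """Street price divided by usable capacity."""
    return price_usd / capacity_gb

# Street prices from the text above
diy = price_per_gb(330.0 + 32.0, 1000)   # X400 M.2 + StarTech enclosure
wd_low = price_per_gb(350.0, 1000)       # My Passport SSD, low Best Buy price
wd_high = price_per_gb(400.0, 1000)      # My Passport SSD, MSRP

print(f"DIY: ${diy:.3f}/GB, My Passport: ${wd_low:.3f} - ${wd_high:.3f}/GB")
```

At the lower end of its observed price range, the My Passport SSD actually undercuts the DIY combination, which reinforces the toss-up verdict.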


Comments

  • StevoLincolnite - Wednesday, June 28, 2017 - link

    Really not interested in SSDs for mass storage. Not until they are affordable at ~4 terabytes or larger.

    Anyone know what they are like for archival purposes versus a mechanical disk?
  • azazel1024 - Wednesday, June 28, 2017 - link

    Retention errors are caused by charge leakage over time from a flash cell. It varies, and at this point I assume all SSD controllers implement strategies and algorithms to reduce this source of errors (they cannot eliminate it, as a NAND cell will eventually leak back to a 0 state; it just takes a very long time). Also, all modern SSD controllers do flash correct and refresh (FCR reads out the page, corrects any errors and then refreshes the page).

    BTW I pulled this out of a white paper on flash memory retention and error correction strategies.

    Also, as the number of writes to a cell accumulates over time, its retention ability also drops off, which is one thing you don't see tested in SSD endurance benchmarks/tests. A 1TB SSD might have 1000 P/E cycles on it, and after the first program cycle it might retain the data for a year, with zero FCR or anything else done to preserve the data in the cells. By P/E cycle 980 it might be storing that data for a month. It might start accumulating some correctable page errors around 1080, but it might also be down to only a couple of weeks of retaining its memory.

    Temperature also has an impact on retention age. The 1080-day retention at 20C is equivalent to just 94 hours at 70C - basically 4 days. So if you left a NAND flash cell at 70C, it would "age" the equivalent of almost 3 years at room temperature. DO NOT STORE NAND FLASH IN A HOT LOCATION. The table I found in the white paper lists the aging factor based on temperature, which is derived from Arrhenius' Law.

    So a hot car might "kill" the data on an SSD in a couple of months, compared to room temperature storage that might take a few years. At 50C the aging factor is 27.5. At 60C it is 90.2. Up at 90C it is 2143.6.

    As a consequence, storing it at very low temperatures will give you a fractional aging factor.

    Also, to cut to the chase, from what I could find in the article, 2xnm NAND flash has roughly a 1 year retention age, impacted somewhat by the different ECC and error correction strategies implemented by the controller. The white paper gets into deep detail, but I can't suss out whether that 1 year retention is at P/E cycles = 1 or at the maximum. With this 2xnm NAND flash, the white paper looked at raw error rates related to retention age and found that P/E = 5000 had a raw error rate about 20-40x higher than P/E = 1 (the scale of the graph is hard to read at the low end of P/E cycles). Down around P/E = 1000 it was more like only 3-5x higher than P/E = 1.

    TLC is going to be worse than MLC, which is going to be worse than SLC. Of course, the endurance rating is also different between them, which probably takes some of that into account as well. There is also a difference between slow and fast leakage cells. It is possible "archival" SD cards and such are made with slow leakage cells, which might have much longer retention.

    Extremely long story, executive summary at the end (cause I am a jerk): NAND flash sucks for long term cold archival, unless you are storing it COLD. Probably not too many people put some stuff on an SSD and then walk away and not come back until 10 years later, or even a year later. An archival SSD/flash memory card is also probably not going to sit on a shelf for a year plus without ever being accessed, data added, whatever.

    Modern SSD controllers (and possibly SD/uSD card controllers) also employ strategies to refresh the cells periodically so they don't "age out".

    DO NOT STORE FLASH BASED DEVICES IN A HOT CAR. A black SSD, phone, flash card, etc. stored in a car, with the windows up with the sun beating down on the device can easily hit or exceed 70C. A couple of days of sitting in the sun like that could easily cause significant data corruption.

    Also, PS: that 1 year data retention thing might not be universal across the board; that just seems to be the design goal, and newer SSDs might be better or worse than that (smaller cells are going to lose charge faster, which might again be part of why the P/E endurance is lower), and better controllers might be able to pull data with lower error rates from more decayed cells or have better ECC strategies. Also, once that 1 year is exceeded, it doesn't mean all of the data is gone, just that statistically you are likely to start encountering some uncorrectable errors. It might just be 1 uncorrectable error across the entire disk, but that is data lost, and the uncorrectable errors are going to start skyrocketing.
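[Editor's note: the aging factors quoted in the comment above are consistent with Arrhenius scaling. A minimal sketch, assuming an activation energy of about 1.1 eV and a 25 °C reference temperature (both assumptions on our part, not stated in the comment):]

```python
import math

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV/K
E_A_EV = 1.1                # assumed activation energy for charge loss
T_REF_C = 25.0              # assumed reference ("room") temperature

def aging_factor(temp_c: float) -> float:
    """Arrhenius acceleration factor: how much faster retention decays
    at temp_c compared to the reference temperature."""
    t_ref_k = T_REF_C + 273.15
    t_k = temp_c + 273.15
    return math.exp((E_A_EV / K_BOLTZMANN_EV) * (1.0 / t_ref_k - 1.0 / t_k))

for t in (50, 60, 70, 90):
    print(f"{t} C: {aging_factor(t):.1f}x")
```

With these assumptions the formula reproduces the quoted factors closely (roughly 27.5x at 50C, 90x at 60C, and over 2000x at 90C), and the ~274x factor at 70C turns 1080 days into about 94 hours.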
  • DanNeely - Wednesday, June 28, 2017 - link

    Maybe not SSDs; but as they've gotten larger I've seen people using thumb drives for backup instead of USB HDDs. That's going to be a mostly stale set of data and in many cases will only be infrequently powered even if the controller in it is smart enough to refresh the data.
  • Glaring_Mistake - Wednesday, June 28, 2017 - link

    Regarding the one year of data retention: according to JEDEC specs, a drive should be able to hold data for one year (under specific conditions, and for client SSDs, not enterprise drives) after using all of its specified P/E cycles.
    So it's after it's used those 1000 P/E cycles, not before you start to use it.

    Anandtech had a pretty good explanation of how it works here: http://www.anandtech.com/show/9248/the-truth-about...

    Like you said, a lot of drives rewrite data that is in bad shape; Silicon Motion even has a specific name for its feature - StaticDataRefresh.

    While smaller lithographies lose charge faster than larger lithographies, everything else being equal, it also depends on the construction of the NAND and the controller, which can make a bit of a difference.

    I've actually seen one 16nm MLC drive slow down before one 15nm TLC drive did.
    Which may not be exactly what you would expect given that the second was at a disadvantage in terms of both type of NAND and lithography.
    That was under specific conditions though, most of the time the MLC one would likely hold up better.
  • Xajel - Wednesday, June 28, 2017 - link

    Damn, it looks very good, but doesn't perform very well... I hope some company will release a similarly designed enclosure with Type-C connectivity.
  • vailr - Wednesday, June 28, 2017 - link

    Samsung's T3 external SSD has both USB 3.0 and USB Type-C. The included M.2 SSD is not physically compatible with desktop M.2 slots, however (for those wanting that option).
  • LordConrad - Wednesday, June 28, 2017 - link

    The Samsung T3 external SSD is not M.2; it is an mSATA drive inside the external case.
  • Samus - Thursday, June 29, 2017 - link

    I actually love that WD includes an adapter like that. It's a nice touch for compatibility when shuffling around data.

    Also happy to see they expose the full set of SMART attributes - no proprietary BS preventing display of some information. SanDisk is pretty good about this, so no surprise.

    Alas, this is an expensive way to shuffle around data considering laptop HDDs can hit 200MB/sec sequentially for a fraction of the price.
  • VulkanMan - Wednesday, June 28, 2017 - link

    "WD Security allows the setting of a password (up to 25 characters) that activates the hardware encryption features on the drive."

    AFAIK, unless they changed something, this unit always has hardware encryption enabled.
    The software allows you to set a password, but, that does NOT change the hard-coded encryption key they installed at the factory.
    If you swap the unit with another unit, you can't read the drive, so, if the controller dies out, kiss your data goodbye.

    In other words, say you wrote a 2GB file to this unit. Take it out, and plug it into a SATA connection, you won't be able to decipher the data.
    If you use the security software to add a password, it does NOT re-encrypt the whole drive using your password.

    I would pass on all these hard-coded encryption key units.
  • ganeshts - Wednesday, June 28, 2017 - link

    Yes, that likely explains why setting / removing the password has no effect on the performance.

    The hardcoded encryption key is probably good enough for mainstream users. Btw, I believe the actual encryption key is a combination of the user password and the hardcoded one.
