Mixed Random Read/Write Performance

The mixed random I/O benchmark starts with a pure read test and gradually increases the proportion of writes, finishing with pure writes. The queue depth is 3 for the entire test and each subtest lasts for 3 minutes, for a total test duration of 18 minutes. As with the pure random write test, this test is restricted to a 16GB span of the drive, which is empty save for the 16GB test file.
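The test itself was run with Iometer, but for readers who want to approximate the sweep on Linux, a roughly equivalent workload can be sketched as an fio job file. This is an illustration only, not the review's actual configuration: the 16GB span, queue depth of 3, and three-minute subtests match the description above, while the six 20%-step mix ratios and the file name are assumptions.

```ini
; fio sketch of the mixed 4KB random read/write sweep (illustrative,
; not the review's actual Iometer configuration)
[global]
; assumed 16GB test file on an otherwise empty 16GB span
filename=testfile
ioengine=libaio
direct=1
rw=randrw
bs=4k
iodepth=3
size=16g
time_based
; each subtest runs for 3 minutes
runtime=180
; serialize the subtests so they run one after another
stonewall

[0pct-writes]
rwmixread=100
[20pct-writes]
rwmixread=80
[40pct-writes]
rwmixread=60
[60pct-writes]
rwmixread=40
[80pct-writes]
rwmixread=20
[100pct-writes]
rwmixread=0
```

With `stonewall` in the global section, each job waits for the previous one to finish, producing the read-to-write sweep in sequence.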

Iometer - Mixed 4KB Random Read/Write

Despite poor random read speeds, the MX300 is only slightly slower than the MX200 on mixed random workloads, and is faster than most MLC drives.

Iometer - Mixed 4KB Random Read/Write (Power)

Once more setting a power usage record, the MX300 is more efficient than even the BX100.

The MX300's performance never decreases as the proportion of writes increases, showing that its SLC write caching is working very effectively. Power consumption doesn't begin to increase until the test is almost entirely writes.

Mixed Sequential Read/Write Performance

The mixed sequential access test covers the entire span of the drive and uses a queue depth of one. It starts with a pure read test and gradually increases the proportion of writes, finishing with pure writes. Each subtest lasts for 3 minutes, for a total test duration of 18 minutes. The drive is filled before the test starts.
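As with the random test, the sequential sweep can be approximated with an fio job file. Again this is only a sketch, not the review's Iometer setup: the 128KB block size, queue depth of 1, and three-minute subtests follow the description, while the 20% mix steps are assumed and `/dev/sdX` is a placeholder for the device under test (note that targeting the raw device is destructive).

```ini
; fio sketch of the mixed 128KB sequential read/write sweep
; (illustrative; /dev/sdX is a placeholder -- writes are destructive)
[global]
filename=/dev/sdX
ioengine=libaio
direct=1
rw=rw
bs=128k
iodepth=1
time_based
; each subtest runs for 3 minutes
runtime=180
; serialize the subtests so they run one after another
stonewall

[0pct-writes]
rwmixread=100
[20pct-writes]
rwmixread=80
[40pct-writes]
rwmixread=60
[60pct-writes]
rwmixread=40
[80pct-writes]
rwmixread=20
[100pct-writes]
rwmixread=0
```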

Iometer - Mixed 128KB Sequential Read/Write

Since SATA drives all perform about the same on sequential reads, rankings on this test are determined mainly by sequential write performance and whether the controller can process the mixed workload effectively. The MX300 is slower than the SanDisk X400 but faster than the other planar TLC drives.

Iometer - Mixed 128KB Sequential Read/Write (Power)

With another third place ranking for power usage, the MX300 beats all the planar TLC drives on efficiency but is unremarkable by MLC standards.

The MX300 bottoms out around 126MB/s, which would be respectable for a planar TLC drive, but the MX200 never drops below 200MB/s.

Comments

  • Arnulf - Wednesday, June 15, 2016 - link

You can't tell a difference between an NVMe drive with 2000 MB/s read speed and a SATA drive with 500 MB/s read speed?

I own an 830, it's a great drive, but it's a SATA drive.
  • Impulses - Wednesday, June 15, 2016 - link

I'd believe it, at least for run-of-the-mill tasks... More demanding use cases and apps will see a benefit, but for the average user gaming and doing web/office stuff? SATA is enough.
  • Samus - Tuesday, June 14, 2016 - link

Just buy a used MLC drive dirt cheap on eBay. Even if it has 20TB written (about average for the used SSDs I've seen, but who knows), that'll still far outlast a TLC drive while being more consistent.

Or just pick up a new-old-stock M500 480GB SSD for <$100. M500s are my go-to drives. Still haven't seen one fail. Not lightning fast, not slow, but very reliable and consistent. SanDisk also makes quite a few 480GB MLC drives for <$100.

    Stay away from TLC. I just don't believe in the long run they are going to have adequate data retention and reliability.
  • JoeyJoJo123 - Tuesday, June 14, 2016 - link

    >adequate data retention and reliability.

    Can you even name one instance of a TLC drive failing on you, dude?

Tech Report has already covered actual real-life endurance, and here you are continuing this "SSD endurance" meme, as if it mattered.

    http://techreport.com/review/27909/the-ssd-enduran...

Several TLC drives lasted for far more than a petabyte of writes. It's more likely that SATA ports won't even exist on the motherboard you buy 10 years from now than that a TLC drive you buy today would be dead after 10 years of average daily use.
  • Glaring_Mistake - Tuesday, June 14, 2016 - link

The test Tech Report ran did not really cover data retention; if their goal had been to test data retention, they would have gone about it differently.
    Instead of writing until the SSDs just laid down and died, they would have used up a certain number of write cycles, then left the drives unpowered, and later tested whether the data stored on them was intact.

An example of such tests not being indicative of data retention is of course the 840 EVO, which managed several PB of writes in such tests, but despite how well it performed there, it still leaked electrons at a rapid rate for any data stored on it.

Also, there is just one TLC drive in Tech Report's test, and it came close to a PB of writes before it died, but did not go past it.
  • Gigaplex - Tuesday, June 14, 2016 - link

    >Can you even name one instance of a TLC drive failing on you, dude?

    I've had one fail hours after unboxing it, but that can happen with any product. The warranty replacement worked fine.

    I've also had a different one (840 non-EVO) have serious performance degradation issues, and Samsung only applied the firmware fix to the EVO versions.
  • Samus - Wednesday, June 15, 2016 - link

    Joeyjojo. Do you even know what data retention is, dude?
  • JoeyJoJo123 - Wednesday, June 15, 2016 - link

Data retention for SSDs in non-consumer workloads, for example a NAS/SAN supplied with an SSD-only volume, isn't an issue. Data retention is a question of NAND retaining data despite long-term loss of power, and a NAS/SAN should be up 24/7, so SSDs in a NAS/SAN going without power is a non-issue.

Data retention for SSDs in consumer workloads, for example a DAS, boot drive, or scratch drive, is, again, mostly a non-issue. Boot drives and scratch drives get power every time the device is powered on, and most consumers using an SSD internally in a PC will boot their device at least once a week. DAS is more fickle, as it comes down to how often the user needs their external SSD. But someone who actually went out of their way to buy a fast SSD-based external rather than a slower HDD-based one typically uses that drive often enough to warrant paying extra for less space in exchange for faster storage.

Nobody uses SSDs as cold storage. Not even HDDs are great for cold storage. This is what tape drives are best at, as they have the best data retention of any drive type on the market.

    The entire argument of TLC-based SSDs having poor data retention should be a complete non-issue, because if you're using the SSD in its most suitable application (devices you use frequently that need the better speed that SSDs offer over HDDs, or network storage that is always available for other devices requesting files hosted on that server), then data retention is a non-issue.

    It's like dogging on a sports car (ex: Mustang) for not being gas-efficient, or dogging on a hybrid-electric car (ex: Prius) for not being fast. Two different solutions for two different problems.

It's literally the same story for PC data storage. TLC, MLC, and SLC NAND-based SSDs are all worse at data retention than HDDs, and HDDs are worse at data retention than tape drives. If you wanted data retention, why are you even looking at SSDs? Likewise, if you wanted fuel efficiency, why would you go to an online article about a brand new Dodge Viper that gets 12 MPG city, then post on the article dogging sports cars in general for awful fuel efficiency?

    SSDs are a data storage product tuned for sequential and random read/write speeds, and TLC is a particular flavor of SSD NAND that's tuned for particularly cost-effective speed.

    tl;dr
Get the right product for the right situation. If you're looking at SSDs for any kind of long-term data retention, then you pretty much have your head in the sand.
  • Impulses - Wednesday, June 15, 2016 - link

    Agreed.
  • Samus - Saturday, June 18, 2016 - link

You do realize one of the reasons SSDs haven't been catching on in the OEM PC market for the last decade comes down to data retention. If a manufacturer builds a system, images it, and it sits in a box in a hot warehouse for 9-12 months, the data on many consumer-level drives will be corrupted. That means the system will be sent in under warranty to be reimaged, and that isn't cost-efficient for OEMs, so they don't even bother putting themselves in that position.

The Seagate SSHDs were unaffected by this aspect of solid state storage because they only cache sectors with high IO hit rates. If the buffer is corrupted, it is simply flushed and rebuilt. Odds are that after a fresh image the buffer hasn't even been built yet, because no more than one IO hit has occurred to any sector of the drive.

Believe me, I work in refurbishing, and data retention of SSDs, even older MLC models, is a serious issue. Considering the number of voltage states is exponentially higher in TLC drives, data retention is an even greater issue there. In refurbishing, the only way to actually recover a drive that is frozen or corrupted from data retention failure is a secure erase. And as I mentioned, some of these systems have manufacture dates just a year old (they could be new overstock that was shifted to our reseller because the warranty expired, or they could be older models...), and depending on how or where they were stored, sometimes the systems can't even boot Windows to the OOBE.
