Random Read Performance

The random read test requests 4kB blocks and tests queue depths ranging from 1 to 32. The queue depth is doubled every three minutes, for a total test duration of 18 minutes. The test spans the entire drive, which is filled before the test starts. The primary score we report is an average of performances at queue depths 1, 2 and 4, as client usage typically consists mostly of low queue depth operations.
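As a rough illustration of this methodology, the sweep can be approximated with fio standing in for Iometer. The device path, job parameters, and the Python harness below are assumptions for the sake of example, not the configuration used in our testing:

```python
# Sketch of the QD-sweep methodology described above, using fio as a
# stand-in for Iometer. All paths and parameters are illustrative.
import json
import subprocess

def run_stage(qd, device="/dev/sdX"):
    """Run one 3-minute 4kB random read stage at the given queue depth
    across the whole (pre-filled) drive; return the measured IOPS."""
    out = subprocess.run(
        ["fio", "--name=randread", f"--filename={device}",
         "--rw=randread", "--bs=4k", "--direct=1",
         "--ioengine=libaio", f"--iodepth={qd}",
         "--time_based", "--runtime=180", "--output-format=json"],
        capture_output=True, text=True, check=True)
    return json.loads(out.stdout)["jobs"][0]["read"]["iops"]

# Queue depth doubles each stage: six 3-minute stages, 18 minutes total.
iops = {qd: run_stage(qd) for qd in (1, 2, 4, 8, 16, 32)}

# Primary score: plain average of the low queue depth stages.
print("primary score:", sum(iops[qd] for qd in (1, 2, 4)) / 3, "IOPS")
```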

Iometer - 4KB Random Read

The X400's random read performance is almost as good as the Samsung 850 EVO's and much better than that of the other planar TLC drives. This is a significant accomplishment, as random reads are often the slowest operation for TLC drives and hard to improve with caching.

Iometer - 4KB Random Read (Power)

The X400's mid-range random read performance is achieved with relatively low power consumption, making it surprisingly efficient for a TLC drive.

At the highest queue depths the X400 can't quite reach the limits of the fastest drives, but overall the scaling of both performance and power usage is reasonable for a mid-range SATA drive.

Random Write Performance

The random write test writes 4kB blocks and tests queue depths ranging from 1 to 32. The queue depth is doubled every three minutes, for a total test duration of 18 minutes. The test is limited to a 16GB portion of the drive, and the drive is empty save for the 16GB test file. The primary score we report is an average of performances at queue depths 1, 2 and 4, as client usage typically consists mostly of low queue depth operations.
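The write sweep could be sketched the same way as the read test above; the important difference is confining I/O to a 16GB file on an otherwise empty drive. Again, the fio parameters below are illustrative assumptions rather than our actual configuration:

```python
# Same harness as the read sketch above, but writing 4kB blocks to a
# 16GB test file instead of spanning the whole drive. --iodepth is
# appended per stage (1, 2, 4, 8, 16, 32), as before.
write_args = [
    "fio", "--name=randwrite",
    "--filename=/mnt/testdrive/iometer.bin",  # hypothetical 16GB test file
    "--rw=randwrite", "--bs=4k", "--direct=1", "--ioengine=libaio",
    "--size=16g",                      # confine the test to a 16GB file
    "--time_based", "--runtime=180",   # one 3-minute stage per queue depth
    "--output-format=json",
]
```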

Iometer - 4KB Random Write

The X400's low queue depth random write speeds fall in the middle of the gap between the slowest MLC drive and the rest of the planar TLC drives.

Iometer - 4KB Random Write (Power)

The X400's power consumption is low, but with performance well behind any MLC drive, its efficiency is only better than that of the other planar TLC drives.

The SanDisk X400's random write speeds show almost no scaling with increased queue depth, behavior that is typical of low-end TLC drives. The QD1 performance is as good as any SATA drive's, and while the OCZ Trion 150 hits great speeds at QD16 and QD32, the SanDisk X400 is clearly faster at the more realistic lower queue depths.

Comments

  • Billy Tallis - Friday, May 6, 2016 - link

    I changed TiB to TB. In reality the sizes are only nominal. The exact capacity of the X400 is 1,024,209,543,168 bytes while 1TiB would be 1,099,511,627,776 bytes and 1000GB drives like the 850 EVO are 1,000,204,886,016 bytes.
  • HollyDOL - Friday, May 6, 2016 - link

    yay, that's some black magic with spare areas / CRC possibly...
    X vs Xi prefixes are treacherous... while with kilo it's only 2.4%, with tera it's already 9.95%... more than enough to hide an OS and the majority of installed software :-) (see the quick arithmetic check after the thread)
  • bug77 - Tuesday, May 10, 2016 - link

    Then you should put that into the article, unless you're intentionally trying to be misleading ;)
  • SaolDan - Friday, May 6, 2016 - link

    Neat!!
  • hechacker1 - Friday, May 6, 2016 - link

    I'm tempted to buy two 512GB drives and RAID 0 them. Does anybody know if that would improve performance consistency compared to a single 1TB drive? I don't really care about raw bandwidth, but 4K IOPS for VMs. I'm having trouble finding benchmarks showing what RAID 0 does to latency outliers.
  • CaedenV - Friday, May 6, 2016 - link

    As someone who has been running 2 SSDs in RAID0 for the last few years I would recommend against it. That is not to say that I have had any real issues with it, just that it is not really worth doing.
    1) Once you have a RAID, your boot time takes much longer, as you have to POST, initialize the RAID, and then boot. This undoes any speed benefit for fast start times that SSDs bring you.
    2) It adds points of failure. Having 2 drives means that things are (more or less) twice as likely to fail. SSDs are not as scary as they used to be, but it is still added risk for no real-world benefit.
    3) Very little real-world benefit. While in benchmarks you will see a bit of improvement, real-world workloads are very bursty. And the big deal with RAID on mechanical drives is the ability to queue up read requests in rapid succession to dramatically reduce seek time (or at least hide it). With SSDs there is practically no seek time to begin with, so that advantage is not needed. For read/write performance you will also see a minor increase, but typically the bottleneck will be at the CPU, GPU, or now even the bus itself.

    Sure, if you are a huge power-user that is editing multiple concurrent 4K video streams then maybe you will need that extra little bit of performance... but for most people you would never notice the difference.

    The reason I did it 4 years ago was simply a cost and space issue. I started with a 240GB SSD that cost ~$250 which was a good deal. Then when the price dropped to $200 I picked up another and put it in RAID 0 because I needed the extra space and could not afford a larger drive. Now with the price of a single 1TB drive so low, and with RAID having just as many potential issues as potential up-sides, I would just stick with a single drive and be done with it.
  • Impulses - Friday, May 6, 2016 - link

    I did it with 2x 128GB 840s at one point and again last year for the same reasons, cost and space... 1TB EVO x2 (using a 256GB SM951 as OS drive). If I were to add more SSD space now, I'd probably just end up with a 2TB EVO.

    Probably won't need to RAID up drives to form a single large-enough volume again in the future, this X400 is already $225 on Amazon (basically hours after the article went up with the $244 price).

    I don't even need the dual 1TB in RAID in an absolute sense, but it's more convenient to have a single large volume shared by games/recent photos than to balance two volumes.

    I don't think the downsides are a big deal, but I wouldn't do it for performance reasons either, and I backup often.
  • phoenix_rizzen - Friday, May 6, 2016 - link

    Get 4 of them and stick them into a RAID10. :)
  • Lolimaster - Friday, May 6, 2016 - link

    There's no point in doing RAID 0 with SSDs. You won't decrease latency/access times or improve random 4K reads (they will be worse most of the time).

    Sequential gains are meaningless (if they matter to you, you should stick to a PCIe/M.2 NVMe drive).
  • Pinn - Friday, May 6, 2016 - link

    I have the Samsung NVMe M.2 512GB in the only M.2 slot and am aching to get more storage. Should I just fill one of my PCIe slots (X99)?
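For reference, the binary-versus-decimal prefix overhead HollyDOL cites above can be verified with a few lines of Python (a minimal check, not part of the original discussion):

```python
# Overhead of binary prefixes (KiB, MiB, GiB, TiB) over their decimal
# counterparts (kB, MB, GB, TB); it grows with each step up the scale.
for name, power in [("kilo", 1), ("mega", 2), ("giga", 3), ("tera", 4)]:
    overhead = (1024 ** power / 1000 ** power - 1) * 100
    print(f"{name}: {overhead:.2f}%")
# Output: kilo: 2.40%, mega: 4.86%, giga: 7.37%, tera: 9.95%
```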
