Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
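As a rough illustration of that structure, the timing math works out as below. This is only a sketch: the burst speed used is a placeholder, not a measured figure.

```python
# Sketch of the burst read test's structure: eight 128MB bursts of 128kB
# reads at QD1 (1GB total), with idle time sized to hold a 20% duty cycle.
bursts = 8
burst_mb = 128
op_kb = 128
duty_cycle = 0.20
burst_speed_mbps = 900.0                    # placeholder burst read speed, MB/s

ops_per_burst = burst_mb * 1024 // op_kb    # 1024 operations per burst
busy_s = burst_mb / burst_speed_mbps        # time spent transferring per burst
idle_s = busy_s * (1 / duty_cycle - 1)      # idle so that busy/(busy+idle) = 0.20

print(f"{bursts} bursts x {ops_per_burst} ops = {bursts * burst_mb} MB total")
print(f"each {busy_s:.2f}s burst is followed by {idle_s:.2f}s of idle")
```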

[Chart: Burst 128kB Sequential Read (Queue Depth 1)]

The burst sequential read performance of the Toshiba RC100 is faster than any SATA drive can manage, and is only slightly slower than the MyDigitalSSD SBX. The Host Memory Buffer feature has no significant impact here.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data.
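The scoring scheme can be summarized in a few lines. The throughput figures here are placeholders, purely to show how the reported number is derived from the low queue depths:

```python
# The sustained tests sweep QD1 through QD32, but the reported score is the
# average of only the low queue depths: QD1, QD2 and QD4.
# All throughput values below are placeholders, not measured results.
throughput_mbps = {1: 800, 2: 1100, 4: 1300, 8: 1500, 16: 1500, 32: 1500}

score = sum(throughput_mbps[qd] for qd in (1, 2, 4)) / 3
print(f"reported sustained read score: {score:.0f} MB/s")
```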

[Chart: Sustained 128kB Sequential Read]

On the longer sequential read test, the RC100 places slightly ahead of other low-end NVMe drives, but there's still a pretty large gap separating it from the high-end drives that can deliver multiple GB/s at low queue depths.

[Chart: Sustained 128kB Sequential Read (Power Efficiency), showing power efficiency in MB/s/W and average power in W]

Power efficiency from the Toshiba RC100 is decent by NVMe standards, but not record-setting. Total power draw approaches 2W for the 480GB model, which is still quite low for an NVMe drive.
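For clarity, the efficiency metric in these charts is simply sustained throughput divided by average power. A trivial sketch with placeholder numbers:

```python
# Power efficiency as charted: sustained throughput divided by average power.
def efficiency_mbps_per_watt(throughput_mbps: float, avg_power_w: float) -> float:
    return throughput_mbps / avg_power_w

# Placeholder numbers: ~1200 MB/s at just under 2W is ~615 MB/s/W.
print(efficiency_mbps_per_watt(1200.0, 1.95))
```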

HMB appears to have a moderate impact on sequential read performance for the 480GB RC100 at some queue depths. Both capacities hit maximum performance when the queue depth is at least 8.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.

[Chart: Burst 128kB Sequential Write (Queue Depth 1)]

The burst sequential write performance of the Toshiba RC100 is good for a low-end NVMe drive (or an older high-end drive), but is far below the current high-end drives.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.

[Chart: Sustained 128kB Sequential Write]

On the longer sequential write test, the RC100 performs quite well with HMB on—it slightly outperforms the 250GB Samsung 960 EVO, but can't keep pace with the newer 970 EVO. Even without HMB, the RC100 is one of the faster low-end NVMe drives for sequential writes, but having that extra buffer helps a lot.

[Chart: Sustained 128kB Sequential Write (Power Efficiency), showing power efficiency in MB/s/W and average power in W]

The Toshiba RC100 finally manages to score a power efficiency win: its power draw only just cracks 2W during this test, while its performance beats most of the NVMe drives that pull 4W.

The sequential write speed of the 480GB RC100 plateaus at 1GB/s at queue depths of 2 and higher, though there was a drop in performance at the end of the test that may have been the SLC cache finally running out. The 240GB model takes a bit longer to reach full speed, and without HMB it is both slower and less consistent.


Comments

  • Mikewind Dale - Thursday, June 14, 2018 - link

    Interesting review. Thanks.

    I'm hoping that smaller, 11" and 13" laptops will start offering M.2 2242 instead of eMMC. I've been wary of purchasing a smaller laptop because I'm afraid that if the NAND ever reaches its lifespan, the laptop will be dead, with no way to replace the storage. An M.2 2242 would solve that problem.
  • PeachNCream - Thursday, June 14, 2018 - link

    Boot options in the BIOS may allow you to select USB or SD as an option in the event that a modern eMMC system suffers from a soldered-on drive failure. In that case, it's still possible to boot an OS and use the computer. If so, I'd go for some sort of lightweight Linux OS for performance reasons, but even a full distro works okay on USB 3.0 and up. SD is a slower option, but you may not want your OS drive to protrude from the side of the computer. Admittedly, that's a sort of cumbersome solution to keeping a low-budget PC alive when replacement costs aren't usually that high.
  • peevee - Thursday, June 14, 2018 - link

    "but this is only on platforms with properly working PCIe power management, which doesn't include most desktops"

    Billy, could you please elaborate on this?
  • artifex - Thursday, June 14, 2018 - link

    Yeah, I'd also like to hear more about this.
  • Billy Tallis - Thursday, June 14, 2018 - link

    I've never encountered a desktop motherboard that had PCIe ASPM on by default, so at most it's a feature for power users and OEMs that actually care about power management. I've seen numerous motherboards that didn't even have the option of enabling PCIe ASPM, but the trend from more recent products seems to be toward exposing the necessary controls. Among boards that do let you fully enable ASPM, it's still possible for using it to expose bugs with peripherals that break things; sometimes the peripheral in question is an SSD. The only way I'm able to get low-power idle measurements out of PCIe SSDs on the current testbed is to tell Linux to ignore what the motherboard firmware says and force PCIe ASPM on, but this doesn't work for everything. Without some pretty sensitive power measurement equipment, it's almost impossible for an ordinary desktop user to know if their PCIe SSD is actually achieving the <10mW idle power that most drives advertise.
  • peevee - Thursday, June 14, 2018 - link

    So by "properly working" you mean "on by default in BIOS"? Or there are actual implementation bugs in some Intel or AMD CPUs or chipsets?
  • Billy Tallis - Thursday, June 14, 2018 - link

    Implementation bugs seem to be primarily a problem with peripheral devices (including peripherals integrated on the motherboard), which is why motherboard manufacturers are often justified in having ASPM off by default or entirely unavailable.
  • AdditionalPylons - Thursday, June 14, 2018 - link

    That's very interesting. And thanks Billy for a nice review! I too appreciate you doing something different. There will unfortunately always be someone angry on the Internet.
  • Kwarkon - Friday, June 15, 2018 - link

    L1.2 is a special PCIe link state that requires a hardware CLKREQ# signal. When L1.2 is active, all communication on the PCIe link is down, so neither the host nor the NVMe device has to listen for data.
    Desktops don't have this signal (it is grounded), so even if you tell the SSD (via NVMe admin commands) that L1.2 support is enabled, it still won't be able to negotiate it.

    In most cases M.2 NVMe drives require a certain PCIe link state to reach the lowest power for each of their power states.
    The PS x values are just states where, if all conditions are met, the SSD will get its power down to somewhere around the stated value.

    You can always check the tech specs of the NVMe drive. If low power is in fact supported, the lowest power will be stated as "deep sleep L1.2" or similar.
  • Death666Angel - Saturday, June 16, 2018 - link

    Prices in Germany do not line up one bit with the last chart. :D The HP EX920 1TB is 335€ and the ADATA SX8200 960GB is 290€. The SBX just has a weird amazon.de reseller who sells the 512GB version for 200€. The 970 Evo 1TB is 330€ and the Intel 760p 1TB is 352€. And for completeness, the WD Black 1TB is 365€. Even when accounting for exchange rates and VAT, the relative prices are nowhere near the US ones. :)
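To make Billy Tallis's procedure above concrete: on Linux the ASPM policy is exposed through sysfs, and overriding firmware that leaves ASPM off requires booting with the `pcie_aspm=force` kernel parameter. A minimal sketch, assuming a kernel built with CONFIG_PCIEASPM, and noting that forcing ASPM can hang machines with buggy peripherals:

```python
# Inspect and (as root) change the Linux PCIe ASPM policy via sysfs.
# Overriding firmware that disabled ASPM additionally requires booting with
# the pcie_aspm=force kernel parameter, which can break buggy peripherals.
from pathlib import Path

policy = Path("/sys/module/pcie_aspm/parameters/policy")
print("policies (current one in brackets):", policy.read_text().strip())

# policy.write_text("powersupersave")  # as root: enable ASPM L1 substates
```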
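And as a footnote to Kwarkon's point about L1.2: whether a device actually advertises the L1 PM Substates capability can be checked from `lspci -vv` output. A rough sketch (run as root so lspci can read extended capabilities; the string matching is approximate):

```python
# Rough check for PCIe L1.2 support on NVMe devices: look for the
# "L1 PM Substates" capability and the ASPM_L1.2 flag in lspci -vv output.
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout
for block in out.split("\n\n"):                    # one block per PCI function
    if "Non-Volatile memory controller" in block:  # NVMe SSDs
        print(block.splitlines()[0])               # device name line
        print("  has L1 PM Substates capability:", "L1 PM Substates" in block)
        print("  advertises ASPM L1.2:", "ASPM_L1.2+" in block)
```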
