Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
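
If you want to approximate this burst workload yourself, the sketch below shows the burst-and-idle structure in Python. It is a simplified assumption, not the harness used for these results: the device path is hypothetical, and a real run would need O_DIRECT with aligned buffers (or a tool like fio) so the page cache doesn't serve the reads.

```python
import os
import time

BURST_BYTES = 128 * 1024 * 1024   # 128MB per burst
OP_BYTES = 128 * 1024             # 128kB per operation
BURSTS = 8                        # 8 bursts = 1GB transferred in total
DUTY_CYCLE = 0.20                 # drive is busy 20% of the time

PATH = "/dev/nvme0n1"             # hypothetical target device

fd = os.open(PATH, os.O_RDONLY)   # a real harness would add O_DIRECT
try:
    offset = 0
    for burst in range(BURSTS):
        start = time.monotonic()
        for _ in range(BURST_BYTES // OP_BYTES):
            os.pread(fd, OP_BYTES, offset)   # one 128kB sequential read at QD1
            offset += OP_BYTES
        busy = time.monotonic() - start
        print(f"burst {burst}: {BURST_BYTES / busy / 1e6:.0f} MB/s")
        # Idle long enough that busy time is 20% of the burst-plus-idle period.
        time.sleep(busy * (1 - DUTY_CYCLE) / DUTY_CYCLE)
finally:
    os.close(fd)
```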

Burst 128kB Sequential Read (Queue Depth 1)

The burst sequential read performance of the Toshiba RC100 is faster than any SATA drive can manage, and is only slightly slower than the MyDigitalSSD SBX. The Host Memory Buffer feature has no significant impact here.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data.
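
The sketch below illustrates that sweep-and-score structure. Again, this is an approximation rather than the real harness: it fakes queue depth with one thread per outstanding request, the device path is hypothetical, and true asynchronous I/O would be needed for accurate results.

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

OP_BYTES = 128 * 1024             # 128kB per operation
LIMIT_SECONDS = 60                # each queue depth runs for up to a minute...
LIMIT_BYTES = 32 * 1024**3        # ...or until 32GB have been transferred
PATH = "/dev/nvme0n1"             # hypothetical target device

def run_qd(fd, qd):
    """Roughly approximate queue depth `qd` with `qd` threads, each
    streaming 128kB sequential reads from its own slice of the data."""
    deadline = time.monotonic() + LIMIT_SECONDS
    per_thread = LIMIT_BYTES // qd

    def worker(offset):
        done = 0
        while done < per_thread and time.monotonic() < deadline:
            os.pread(fd, OP_BYTES, offset)
            offset += OP_BYTES
            done += OP_BYTES
        return done

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=qd) as pool:
        total = sum(pool.map(worker, range(0, LIMIT_BYTES, per_thread)))
    return total / (time.monotonic() - start) / 1e6   # MB/s

fd = os.open(PATH, os.O_RDONLY)
try:
    results = {qd: run_qd(fd, qd) for qd in (1, 2, 4, 8, 16, 32)}
finally:
    os.close(fd)

# The reported score averages the low queue depths, where client
# workloads spend most of their time.
score = (results[1] + results[2] + results[4]) / 3
print(f"sustained sequential read: {score:.0f} MB/s")
```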

Sustained 128kB Sequential Read

On the longer sequential read test, the RC100 places slightly ahead of other low-end NVMe drives, but there's still a pretty large gap separating it from the high-end drives that can deliver multiple GB/s at low queue depths.

Sustained 128kB Sequential Read (Power Efficiency)
[charts: power efficiency in MB/s/W; average power in W]

Power efficiency from the Toshiba RC100 is decent by NVMe standards, but not record-setting. Total power draw approaches 2W for the 480GB model, which is still quite low for an NVMe drive.
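
For reference, the efficiency metric in these charts is simply throughput divided by average power. A trivial worked example, using made-up numbers rather than measurements from this review:

```python
# Illustrative numbers only, not measured values from this review.
throughput_mb_s = 1050.0   # average sequential read speed over QD1-QD4
avg_power_w = 1.9          # average power draw during the same runs

efficiency = throughput_mb_s / avg_power_w   # MB/s per watt
print(f"{efficiency:.0f} MB/s/W")            # prints "553 MB/s/W"
```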

HMB appears to have a moderate impact on sequential read performance for the 480GB RC100 at some queue depths. Both capacities hit maximum performance when the queue depth is at least 8.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.

Burst 128kB Sequential Write (Queue Depth 1)

The burst sequential write performance of the Toshiba RC100 is good for a low-end NVMe drive (or an older high-end drive), but is far below the current high-end drives.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.

Sustained 128kB Sequential Write

On the longer sequential write test, the RC100 performs quite well with HMB on—it slightly outperforms the 250GB Samsung 960 EVO, but can't keep pace with the newer 970 EVO. Even without HMB, the RC100 is one of the faster low-end NVMe drives for sequential writes, but having that extra buffer helps a lot.

Sustained 128kB Sequential Write (Power Efficiency)
[charts: power efficiency in MB/s/W; average power in W]

The Toshiba RC100 finally manages to score a power efficiency win: it just barely cracks 2W during this test, yet it outperforms most of the NVMe drives that pull 4W here.

The sequential write speed of the 480GB RC100 plateaus at 1GB/s at a queue depth of 2 or higher, but there was a drop in performance at the end of the test that may have been the SLC cache finally running out. The 240GB model takes a bit longer to reach full speed, and without HMB it is both slower and less consistent.

Comments

  • bug77 - Thursday, June 14, 2018 - link

    I'm talking about what is; you're talking wishful thinking.
    PCIe is supposed to cater to a lot of devices; it can't change its sleep current just because of one type of device in particular. I'm not saying it's impossible, just that it's highly unlikely.
  • PeachNCream - Monday, June 18, 2018 - link

    Since SATA has not been entirely replaced by NVMe yet, there is still time (and lots of it really) for changes. It's simply a matter of a drive identifying itself to the PCIe bus and then making on-the-fly sleep state changes. Yes, that's non-trivial, but far from wishful thinking.
  • Gasaraki88 - Thursday, June 14, 2018 - link

    SATA needs to go away; that is old technology for old drives. NVMe should be the new standard for drives: just as SAS was a better protocol than SATA, NVMe has less overhead and is designed for NAND storage.
  • Targon - Thursday, June 14, 2018 - link

    Space, and because people like these super-thin machines. Also, without the extra packaging, it may be less expensive to make a card-based SSD than a 2.5-inch SSD. Smaller = cheaper when it comes to shipping/packaging as well.

    SATA hasn't really had any evolution over the past few years either, so without something big to hype, SATA isn't a buzzword that attracts buyers. There's no SATA 4 standard, so they can't say it's the latest and greatest, while card-based SSDs have appeal as the newer technology.
  • HStewart - Thursday, June 14, 2018 - link

    One thing I am curious about is what performance you need from an SSD in an external USB drive - I have a couple of them. These cheaper drives are probably good for that purpose.
  • timecop1818 - Thursday, June 14, 2018 - link

    Except cheap USB to M.2 adapters ONLY support SATA drives. The review unit is NVMe.
  • Targon - Thursday, June 14, 2018 - link

    USB 3.1 at the minimum if you want an external SSD in my opinion.
  • HStewart - Thursday, June 14, 2018 - link

    The one I am using (actually two of them) is a WavLink USB 3.1 Gen 2 enclosure that actually does 10Gbps:

    https://www.amazon.com/gp/product/B06Y5XJG8J/ref=o...

    It is not intended to be primary storage - but it works quite nicely for my needs.

    One more thing: someone should come out with a lower-cost TB3 drive case - right now they are at a premium.
  • peevee - Thursday, June 14, 2018 - link

    I wonder who would possibly buy the 120GB version, given that only an extra $20 will bring it to a useful capacity and performance?
  • Jorgp2 - Thursday, June 14, 2018 - link

    Could you elaborate on how to configure the Host Memory Buffer Size?
