AnandTech Storage Bench - Heavy

Our Heavy storage benchmark is proportionally more write-heavy than The Destroyer, but much shorter overall. The total writes in the Heavy test aren't enough to fill the drive, so performance never drops down to steady state. This test is far more representative of a power user's day-to-day usage, and it is heavily influenced by the drive's peak performance. The Heavy workload test details can be found here.

AnandTech Storage Bench - Heavy (Data Rate)

Performance of the 950 Pro is comparable to the SM951, which is to say that it's significantly better than everything else we've tested. The penalty when starting with a full drive is a bit larger than normal, but simply being full isn't enough to tank performance the way a sustained test can.

AnandTech Storage Bench - Heavy (Latency)

Average service time and the latency outliers are vastly better than for any SATA drive, but NVMe itself doesn't seem to make a huge difference.

AnandTech Storage Bench - Heavy (Power)

The high performance comes at the price of high power consumption: the total energy used over the course of this test is significantly higher than that of any of the high-performance SATA drives we're comparing against.

142 Comments

  • Der2 - Thursday, October 22, 2015 - link

    Wow. The 950. A BEAST in the performance SHEETS.
  • ddriver - Thursday, October 22, 2015 - link

    Sequential performance is very good, but I wonder how come random access shows no significant improvement.
  • dsumanik - Thursday, October 22, 2015 - link

    Your system is only as fast as the slowest component.

    Honestly, ever since the original x-25 the only performance metric I've found to have a real world impact on system performance (aside from large file transfers) with regards to boot times, games, and applications is the random write speed of a drive.

    If a drive has solid sustained random write speed, your system will seem to be much more responsive in most of my usage scenarios.

    The 950 Pro kind of failed to impress in this dept as far as I'm concerned. While I am glad to see the technology moving in this direction, I was really looking for a generational leap here with this product, which didn't seem to happen, at least not across the board.

    Unfortunately I think I will hold off on any purchases until I see the technology mature another generation or two, but hey, if you are a water-cooling company, there is a market opportunity for you here.

    Looks like until some further die shrinks happen, NVMe is going to be HOT.
  • AnnonymousCoward - Thursday, October 22, 2015 - link

    > Your system is only as fast as the slowest component.

    Uhh no. Each component serves a different purpose.
  • cdillon - Thursday, October 22, 2015 - link

    >> Your system is only as fast as the slowest component.
    > Uhh no. Each component serves a different purpose.

    Memory, CPU, and I/O resources need to be balanced if you want to reach maximum utilization for a given workload. See "Amdahl's Law". Saying that it's "only as fast as the slowest component" may be a gross over-simplification, but it's not entirely wrong. (A quick worked example of this is sketched below the thread.)
  • xenol - Wednesday, November 4, 2015 - link

    It still highly depends on the application. If my workload is purely CPU based, then all I have to do is get the best CPU.

    I mean, for a jack-of-all-trades computer, sure. But I find that sort of computer silly.
  • xype - Monday, October 26, 2015 - link

    Your response makes no sense.
  • III-V - Thursday, October 22, 2015 - link

    I find it odd that random access and IOPS haven't improved. Power consumption has gone up too.

    I'm excited for PCIe and NVMe going mainstream, but I'm concerned the kinks haven't quite been ironed out yet. Still, at the end of the day, if I were building a computer today with all new parts, this would surely be what I'd put in it. Er, well maybe -- Samsung's reliability hasn't been great as of late.
  • Solandri - Thursday, October 22, 2015 - link

    SSD speed increases come mostly from increased parallelism. You divide the 10 MB file into 32 chunks and write them simultaneously, instead of 16 chunks.

    Random access benchmarks are typically done with the smallest possible chunk (4k), thus eliminating any benefits from parallel processing. The AnandTech benchmarks are a bit deceptive because they average QD=1, 2, 4 (queue depths of 1, 2, and 4 parallel reads/writes). But at least the graphs show the speed at each QD. You can see the 4k random read speed at QD=1 is the same as most SATA SSDs.

    It's interesting that the 4k random write speeds have improved substantially (30 MB/s read, 70 MB/s write is typical for SATA SSDs). I'd be interested in an in-depth AnandTech feature delving into why reads seem to be stuck below 50 MB/s while writes are approaching 200 MB/s. Is there a RAM write-cache on the SSD, with the drive "cheating" by reporting the data as written when it's only been queued in the cache, whereas reads still have to wait for the voltage on the individual NAND cells to actually be measured? (A back-of-the-envelope sketch of this math follows the thread.)
  • ddriver - Thursday, October 22, 2015 - link

    It is likely Samsung is holding random access back artificially so that they don't cannibalize their enterprise market. A simple software change and a rebrand, and you can sell the same hardware at much higher profit margins.
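
To put a rough number on the Amdahl's Law point raised above, here is a minimal Python sketch of how much a faster drive can speed up a whole task when storage is only part of the runtime. The 30% I/O share and the 4x drive speedup are purely illustrative assumptions, not figures from this review.

```python
def amdahl_speedup(accelerated_fraction: float, component_speedup: float) -> float:
    """Overall speedup when only part of the workload gets faster.

    accelerated_fraction: share of total runtime spent in the improved
    component (e.g. storage I/O), between 0 and 1.
    component_speedup: how much faster that component becomes (e.g. 4.0
    for a drive that is 4x faster on this workload).
    """
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / component_speedup)


if __name__ == "__main__":
    # Hypothetical example: a task spending 30% of its time on disk I/O,
    # moved to a drive that is 4x faster for that I/O.
    print(f"{amdahl_speedup(0.30, 4.0):.2f}x overall")   # ~1.29x
    # Even an infinitely fast drive caps out at 1 / 0.70 ≈ 1.43x here,
    # which is why "only as fast as the slowest component" is a
    # simplification, but not an entirely wrong one.
```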
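In the same spirit, a back-of-the-envelope sketch of the QD1 arithmetic discussed above: with no parallelism to hide latency, 4KB random throughput is simply transfer size divided by per-operation latency. The latencies below are assumed ballpark values (a NAND page read on the order of 100 µs; a write acknowledged from a DRAM buffer in a few tens of µs), not measurements of the 950 Pro, and whether write-back caching is really what's going on is exactly the open question in the comment.

```python
# Rough model: at QD1, throughput ≈ transfer_size / latency, because each
# 4KB operation must complete before the next one is issued.

KIB = 1024

def qd1_throughput_mbps(block_bytes: int, latency_us: float) -> float:
    """Throughput in MB/s for back-to-back operations of a given latency."""
    ops_per_second = 1_000_000 / latency_us
    return ops_per_second * block_bytes / 1_000_000


if __name__ == "__main__":
    # Assumed ballpark latencies (illustrative, not measured):
    nand_read_us = 100.0    # a QD1 read must wait for an actual NAND page read
    cached_write_us = 20.0  # a write could be acknowledged once buffered in DRAM

    print(f"4KB QD1 random read : {qd1_throughput_mbps(4 * KIB, nand_read_us):.0f} MB/s")
    print(f"4KB QD1 random write: {qd1_throughput_mbps(4 * KIB, cached_write_us):.0f} MB/s")
    # ~41 MB/s vs ~205 MB/s -- roughly the "reads below 50 MB/s, writes
    # approaching 200 MB/s" gap described above. Higher queue depths let the
    # controller keep many NAND dies busy at once, which is where most of the
    # headline IOPS gains come from.
```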
