Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we don't have consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs have data associated with them. Next, we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test runs for just over half an hour, and we record instantaneous IOPS every second.
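For reference, here is a minimal sketch of how a workload like this could be generated with fio. This is a hypothetical reconstruction of the description above, not the exact script we use, and /dev/sdX is a placeholder for the test device:

```python
import subprocess

# Sketch: 4KB random writes across all LBAs at QD32, logging average
# IOPS once per second. fio's default buffers are random and thus
# effectively incompressible. WARNING: destructive to the target device.
cmd = [
    "fio",
    "--name=consistency",
    "--filename=/dev/sdX",            # hypothetical device node
    "--rw=randwrite",
    "--bs=4k",
    "--iodepth=32",
    "--ioengine=libaio",
    "--direct=1",
    "--time_based",
    "--runtime=2000",                 # "just over half an hour"
    "--log_avg_msec=1000",            # one averaged sample per second
    "--write_iops_log=consistency",   # typically emits consistency_iops.*.log
]
subprocess.run(cmd, check=True)
```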

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
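As a rough illustration of the arithmetic behind added over-provisioning (a hypothetical 256GB-class drive; conventions for quoting OP percentages vary between vendors):

```python
# Spare area as a share of physical NAND. All figures illustrative.
physical = 256 * 1024**3        # raw NAND in bytes (NAND is GiB-based)
default_user = 256 * 1000**3    # factory-exposed LBA span in bytes

def op_percent(user_bytes):
    return 100 * (physical - user_bytes) / physical

print(f"default: {op_percent(default_user):.1f}%")   # ~6.9%, the "7%" level

# Limiting the test to a smaller LBA range leaves more spare area:
for target in (12, 25):
    user = physical * (1 - target / 100)
    print(f"{target}% OP -> write only the first {user / 1000**3:.0f} GB of LBAs")
```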

Each of the three graphs has its own purpose. The first covers the whole duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the buttons below each graph to switch the source data.
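A minimal sketch of how per-second IOPS data could be plotted on both scales (assuming samples parsed from the fio log above; matplotlib is used purely for illustration):

```python
import matplotlib.pyplot as plt

def plot_consistency(times, iops):
    # times: seconds since test start; iops: per-second IOPS samples
    fig, (ax_log, ax_lin) = plt.subplots(2, 1, sharex=True, figsize=(8, 6))
    for ax, scale in ((ax_log, "log"), (ax_lin, "linear")):
        ax.plot(times, iops)
        ax.set_yscale(scale)            # log for comparison, linear for detail
        ax.set_ylabel(f"IOPS ({scale})")
    ax_lin.set_xlabel("time (s)")
    plt.show()
```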

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Interactive graph: 4KB random write IOPS over the full test run (log scale). Drives: Samsung SSD 850 Pro, Samsung SSD 840 Pro, SanDisk Extreme Pro, Intel SSD 730, OCZ Vector 150; selectable at 7%, 12%, and 25% over-provisioning.]

Wow, this is awesome. Even with the default 7% over-provisioning, the 850 Pro pushes almost as many IOPS as the Extreme Pro does with its 12% over-provisioning. When the over-provisioning is increased to the same 12% level, the 850 Pro is without a doubt the leader. Only the Vector 150 comes close, although it is nowhere near as consistent: its IOPS ranges between 10K and 30K, whereas the 850 Pro maintains a steady line.

When compared with the 840 Pro, the upgrade is tremendous. IO consistency was always the weak point of the 840 Pro, so it is great to see that Samsung has put serious effort into fixing it in the 850 Pro. Part of the performance increase obviously comes from the use of V-NAND: with shorter program and erase latencies, steady-state performance increases because garbage collection takes less time and more empty blocks are available.

Some of you may wonder about the odd capacities at 25% over-provisioning; the reason is that I noticed an error in the old numbers. Basically, the old 25% figures were in gibibytes (i.e. 1024^3 bytes), whereas the other capacities have always been in gigabytes (1000^3 bytes). I decided to unify the capacities, and now they are all reported in gigabytes. The actual testing and over-provisioning levels have not changed; it is simply a matter of how the capacities are represented.
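To make the distinction concrete, here is the conversion for a hypothetical 192GiB figure (purely illustrative numbers):

```python
GIB = 1024**3   # gibibyte: 1,073,741,824 bytes
GB = 1000**3    # gigabyte: 1,000,000,000 bytes

capacity_gib = 192                        # hypothetical old-style figure
capacity_gb = capacity_gib * GIB / GB
print(f"{capacity_gib} GiB = {capacity_gb:.1f} GB")   # 192 GiB = 206.2 GB
```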

[Interactive graph: steady-state IOPS from t=1400s onwards (log scale); same drives and over-provisioning options as above.]

[Interactive graph: steady-state IOPS from t=1400s onwards (linear scale); same drives and over-provisioning options as above.]

Comments

  • frenchy_2001 - Tuesday, July 1, 2014 - link

    The 3D structure and design are revolutionary, but the manufacturing technology is actually a very mature one (40nm). This makes it *MUCH* cheaper than the 1X-class NAND used by their competition.

    Samsung has really struck gold with that design, as it allows them to scale in both dimensions, depending on the results and costs of each. While 2D NAND faces really tough challenges to increase density, V-NAND can either scale up (more layers) or resume pitch scaling, as manufacturing is *very well* understood from 40nm down to ~16nm. They just need to experiment with it and see what makes economic sense and offers good trade-offs.
  • toyotabedzrock - Tuesday, July 1, 2014 - link

    Almost seems like it would be cheaper to ramp up the production of silicon ingots and drive that cost down, rather than spend the R&D for this.
  • frenchy_2001 - Tuesday, July 1, 2014 - link

    Silicon ingot cost is marginal. The real cost of scaling is all the R&D necessary to make the pitch smaller. Even using bigger wafers (current ones are 300mm; there have been talks of 450mm for a while, but cost is a deterrent, as a whole fab needs to be re-tooled for the upgrade) only improves yields and costs marginally.
    Scaling NAND down is facing huge challenges, due both to process limits (how to image those ~15nm lines on a wafer) and electrical limits (~3 electrons inside your cell at 15nm). 3D NAND allows the industry to restart the growth by bypassing those challenges (step back to a 40nm process and scale in the Z axis).
    General SoCs are facing similar process limits (there is no solution below 10nm so far, despite the whole industry cooperating to find one), even if their design limits are more relaxed (SoCs do not trap charges, but crosstalk and interference are starting to become challenges too).
  • UltraWide - Monday, June 30, 2014 - link

    Will there be a version with PCIe or M.2?
  • Gigaplex - Monday, June 30, 2014 - link

    If you'd read the article, you'd know the controller doesn't support PCIe.
  • Gigaplex - Monday, June 30, 2014 - link

    "This further suggests that the issue lies in our tests instead of the RAPID software itself as end-users will always run the drive with a partition anyway."

    Um, no. I don't care what the end user does; the software shouldn't cause a BSOD. If it can't cache without a partition, it should simply not attempt to cache. This is just a case of Samsung thinking that because they make some nice hardware, they're experts in software too. They're really not. RAM caching of I/O isn't specific to SSDs anyway, so why are they tying it to an SSD launch?
  • Donuts123 - Wednesday, July 2, 2014 - link

    Yeah, that's a huge red flag for me; I definitely wouldn't use the RAPID software. It's another layer to go wrong (and apparently it does). I hope AnandTech submits details of the BSODs they saw to Samsung.

    RAPID probably just uses the Samsung SSD as a dongle. Presumably RAPID is derived from Samsung's acquisition of NVELO, see http://www.anandtech.com/show/6518/samsung-acquire...
  • Guspaz - Monday, June 30, 2014 - link

    Wait a minute, 150TB endurance on a 1TB drive? Only 150 cycles? That doesn't make any sense, that's absurdly low.

    Then again, Intel's rating for the 335 doesn't make any sense either. They say 20GB a day for 3 years, or about 22TB... But they also rate it for 3000 cycles, and the media wear indicator on the drive is set to treat 3000 as full wear, and that represents 720TB...
  • Kristian Vättö - Tuesday, July 1, 2014 - link

    The endurance figures are usually based on a 4KB random write workload and are thus worst-case numbers. Due to write amplification, 150TB of random host writes results in far more than 150TB of actual NAND writes, which is why the rating looks so low. I explained the calculation of TBW here:

    http://www.anandtech.com/show/7947/micron-m500-dc-...

    However, as I mentioned in the article, in the client space the endurance is more for guidance (i.e. don't put these in servers!) than an actual technical limit.
  • emn13 - Tuesday, July 1, 2014 - link

    ...but outside of server-like workloads, what's going to benefit from this performance?
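As a rough editorial illustration of the write-amplification math discussed in the comments above (all figures hypothetical; the assumed write amplification factor is not a Samsung spec):

```python
# Back-of-the-envelope TBW math. All numbers illustrative.
capacity_tb = 1.0     # 1TB drive
rated_tbw = 150.0     # rated host writes in TB
waf = 3.0             # assumed write amplification for 4KB random writes

nand_writes_tb = rated_tbw * waf
effective_cycles = nand_writes_tb / capacity_tb
print(f"{rated_tbw:.0f} TB host writes * WAF {waf:.0f} "
      f"= {nand_writes_tb:.0f} TB NAND writes (~{effective_cycles:.0f} P/E cycles)")
```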
