Initial Thoughts

Since we are dealing with two drives, it makes sense to split the conclusion in two, and I will start with the 845DC PRO. While all we have today is a performance preview, the 845DC PRO is turning out to be one of the best enterprise SATA SSDs we have tested. With only 28% over-provisioning, the PRO offers the most consistent 4KB random write performance we have seen to date. Add the fact that the PRO is also rated at ten drive writes per day, and it is shaping up to be an excellent drive for write-intensive workloads.

Price Comparison - MSRP
Capacity             400GB              800GB
Samsung 845DC PRO    $960 ($2.40/GB)    $1,830 ($2.29/GB)
Intel SSD DC S3700   $729 ($1.82/GB)    $1,459 ($1.82/GB)

While the performance is great, the pricing could be more competitive. Intel's DC S3700 is considerably cheaper at both capacities and offers the same ten drive writes per day of endurance. The 845DC PRO does provide higher 4KB random write performance (~50K IOPS vs ~35K IOPS) and is a bit more consistent, but ultimately the workload determines whether the extra performance is worth the extra cost. For workloads where absolute performance matters more than capacity, the 845DC PRO is the better pick as it provides slightly more IOPS per dollar, but the S3700 still offers the lower $/GB if capacity is a concern. Of course, enterprise SSDs are usually bought in bulk, so actual prices will vary with volume and the MSRPs listed here may not be fully representative.
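
To put the value argument in concrete terms, here is a quick back-of-the-envelope sketch in Python using the 400GB MSRPs above and the approximate steady-state 4KB random write figures quoted in this paragraph; the ~50K and ~35K IOPS numbers are the article's, everything else is just illustrative arithmetic.

    # Rough value comparison from the 400GB MSRPs and the approximate
    # steady-state 4KB random write IOPS quoted above (illustrative only).
    drives = {
        "Samsung 845DC PRO 400GB": {"price": 960, "capacity_gb": 400, "iops": 50_000},
        "Intel SSD DC S3700 400GB": {"price": 729, "capacity_gb": 400, "iops": 35_000},
    }

    for name, d in drives.items():
        print(f"{name}: ${d['price'] / d['capacity_gb']:.2f}/GB, "
              f"{d['iops'] / d['price']:.0f} IOPS per dollar")

    # Prints roughly $2.40/GB and ~52 IOPS/$ for the PRO versus $1.82/GB and
    # ~48 IOPS/$ for the S3700 -- the trade-off described above.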

Price Comparison - MSRP
Capacity             240GB              480GB              800/960GB
Samsung 845DC EVO    $250 ($1.04/GB)    $490 ($1.02/GB)    $969 ($1.01/GB)
Intel SSD DC S3500   $219 ($0.91/GB)    $439 ($0.91/GB)    $729 ($0.91/GB)

While the 845DC EVO is not built for write-intensive workloads, it still provides very consistent random write performance, although obviously at a lower level than the PRO's. The EVO is very comparable to Intel's SSD DC S3500: both deliver around 15K random write IOPS and their consistency is nearly a match as well. Endurance-wise, both are rated at about 0.35 drive writes per day despite the fact that Samsung is using TLC NAND instead of MLC, so it is clear that Samsung is going directly after Intel's S3500 with the EVO. It is too early to draw any final conclusions as the EVO is really designed for mixed and read-centric workloads, which are not included in our performance preview, but if the write performance consistency is any indication, the EVO will be a tough competitor for Intel's S3500.
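
To put the 0.35 drive writes per day rating into perspective, a quick endurance calculation is sketched below; the five-year term is an assumed warranty length used purely for illustration and is not something stated in this preview.

    # Translate a drive-writes-per-day (DWPD) rating into approximate total
    # terabytes written. The five-year term is an assumed warranty length.
    def endurance_tbw(capacity_gb: float, dwpd: float, years: float = 5) -> float:
        return capacity_gb * dwpd * 365 * years / 1000

    print(endurance_tbw(480, 0.35))  # ~307 TB for a 480GB drive at 0.35 DWPD
    print(endurance_tbw(400, 10))    # ~7,300 TB for a 400GB drive at 10 DWPD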

Unfortunately I do not have an ETA for the full review yet. It will be a while, though, because testing an enterprise SSD takes a long time: the drive must be tested in steady-state to mimic a realistic scenario, and I need to run a number of older drives through the suite to have more data points. Moreover, there are some very interesting client drives coming in the next few weeks that will take priority, but the full review is coming along with our new enterprise SSD test suite. Today's preview is a glimpse of some of the new things we will be looking at, but the full suite will be far more extensive than what you have seen here. Stay tuned!

Comments

  • Laststop311 - Wednesday, September 3, 2014 - link

    Wish the consumer M.2 drives would be released already. The Samsung SM951 with a PCIe Gen 3.0 x4 controller would be nice to be able to buy.
  • tuxRoller - Wednesday, September 3, 2014 - link

    All chart titles are the same on page five (performance consistency average iops).
  • tuxRoller - Wednesday, September 3, 2014 - link

    Actually, all the charts carry the same title, but different data.
  • Kristian Vättö - Thursday, September 4, 2014 - link

    The titles are basically "name of the SSD and its capacity - 4KB Random Write (QD32) Performance". The name of the SSD should change when you select a different SSD but every graph has the "4KB Random Write (QD32) Performance" attached to it.
  • CountDown_0 - Wednesday, September 3, 2014 - link

    Hi Kristian,
    a small suggestion: when talking about worst case IOPS you write that "The blue dots in the graphs stand for average IOPS just like before, but the red dots show the worst-case IOPS for every second." Ok, but I'd write it in the graph legend instead.
  • Kristian Vättö - Thursday, September 4, 2014 - link

    It's something I've thought about and I can certainly consider adding it in the future.
  • rossjudson - Thursday, September 4, 2014 - link

    I'd suggest the following. Use FIO to do your benchmarking. It supports generating and measuring just about every load you'd care about. You can also use it in a distributed mode, so you can run as many tests as you have hardware to support, at the same time.

    Second, don't use logarithmic axes on your charts. The drives you describe here take *huge* dropoffs in performance after their caches fill up and they have to start "working for a living". You are masking this performance drop by not using linear measures.

    Third, divide up your time axis into (say) 60 second chunks, and show the min/max/95/99/99.9/99.99 latency marks (a small sketch of this kind of windowed reporting appears after the comments below). Most enterprise customers care about sustained performance and worst case performance. A really slow IO is going to hold up a bunch of other stuff. There are two ways out of that: speculative IO (wait a little while for success, then issue another IO to another device), or manage and interleave background tasks (defrag/garbage collect) very carefully in the storage device. Better yet, don't have the problem at all. The marketing stats on these drives have nothing to do with the performance they exhibit when they are subject to non-stop, mixed loads.

    Unless you are a vendor that constantly tests precisely those loads, and ensures they work, stay working, and stay tight on latency.
  • SuperVeloce - Thursday, September 4, 2014 - link

    Great review... but dropdown menu for graphs annoys me. ugh
  • Kristian Vättö - Thursday, September 4, 2014 - link

    What do you find annoying in them? I can certainly consider alternative options if you can suggest any.
  • grebic - Thursday, October 2, 2014 - link

    Hi Kristian. I need to bother you with a question: do you think it is worth it to stick this SSD in a NAS? I have a "fanless" QNAP HS-210, a 2-bay small form factor NAS, without drives for the moment, so in order to have complete zero noise and "resistance" over time I want to go for SSDs. But I had forgotten what was mentioned here ("no wear leveling, no garbage collection"), so I'm wondering if the performance will decrease dramatically over time. I'm thinking that the NAS OS does not know how to do such "treatments" on SSDs to maintain performance, no? It's not my intention to run operations upon operations on the NAS, but I would like to know that my data will be "safe" and easily "accessible" over a long time, OK? Your opinion is very appreciated. Thanks, Cristian
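
Following up on rossjudson's suggestion above, here is a minimal Python sketch of the per-window latency reporting he describes: it buckets individual IO completion latencies into 60-second chunks and reports min/max plus a few high-percentile marks for each chunk. The (timestamp in milliseconds, completion latency in microseconds) sample layout is an assumption for illustration; in practice you would parse it out of whatever your benchmark tool (fio, for example) logs.

    # Bucket per-IO completion latencies into fixed time windows and report
    # min/max plus a few high-percentile marks per window (nearest-rank).
    # The (timestamp_ms, latency_us) sample layout is an assumption; adapt the
    # parsing to whatever your benchmark tool logs.
    import math
    import random
    from collections import defaultdict

    def window_stats(samples, window_s=60, marks=(0.95, 0.99, 0.999)):
        buckets = defaultdict(list)
        for ts_ms, lat_us in samples:
            buckets[int(ts_ms // (window_s * 1000))].append(lat_us)
        for window in sorted(buckets):
            lats = sorted(buckets[window])
            row = {"window": window, "min_us": lats[0], "max_us": lats[-1]}
            for m in marks:
                idx = min(len(lats) - 1, math.ceil(m * len(lats)) - 1)
                row[f"p{m * 100:g}_us"] = lats[idx]
            yield row

    # Synthetic usage example: ten minutes of one sample per millisecond.
    samples = [(t, random.lognormvariate(5, 0.5)) for t in range(600_000)]
    for row in window_stats(samples):
        print(row)

With real data this gives the per-minute worst-case view the comment argues for, and it pairs naturally with linear rather than logarithmic axes.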
