Final Words

For years, Intel has been criticized for no longer caring about the client SSD space. The X25-M and its successive generations were brilliant drives that essentially defined the standards for a good client SSD, but since then none of Intel's client SSDs have had the same "wow" effect. That's not to say that Intel's later client SSDs have been inferior; they just haven't had any real competitive advantage over the other drives on the market. It's no secret that Intel shifted its SSD strategy to focus on the enterprise segment, and frankly that still makes a lot of sense: the margins are more lucrative and enterprise has far more room for innovation, as the customers there value more than just rock-bottom pricing.

With the release of the SSD 750, it's safe to say that any argument that Intel doesn't care about the client market is now invalid. Intel does care, but rather than bringing products with complex new technologies to market at a very early stage, Intel wants to ensure that the market is ready and that there is industry-wide support for the product. After all, NVMe requires BIOS support, and that support has only been present for a few months now, so it's logical that the SSD 750 wasn't released any sooner.

Given the enterprise background of the SSD 750, it's optimized more for consistency than for raw peak performance. The SM951, on the other hand, is a more generic client drive that concentrates on peak performance to improve performance under typical client workloads. That's visible in our benchmarks: the only test where the SSD 750 is able to beat the SM951 is The Destroyer trace, which illustrates a very IO-intensive workload that only applies to power users and professionals. It makes sense for Intel to focus on that very specific target group, because those are the people who are willing to pay a premium for higher storage performance.

With that said, I'm not sure if I fully agree with Intel's heavy random IO focus. The sequential performance isn't bad, but I think the SSD 750 as it stands today is a bit unbalanced and could use some improvements to sequential performance even if it came at the cost of random performance. 

Price Comparison (4/2/2015)
                       128GB   256GB   400GB   512GB   1.2TB
Intel SSD 750 (MSRP)   -       -       $389    -       $1,029
Samsung SM951          $120    $239    -       $459    -

RamCity just got its first batch of SM951s this week, so I've included the drive in the table for comparison (note that the prices on RamCity's website are in AUD, so I've converted them to USD and also subtracted the 10% tax that only applies to Australian orders). The SSD 750 is fairly competitive in price, although obviously you have to fork out more money than you would for a similar-capacity SATA drive. Nevertheless, going under a dollar per gigabyte is very reasonable given the performance and full power loss protection that you get with the SSD 750.
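For those who want to check the math, the currency conversion and dollar-per-gigabyte figures are easy to reproduce. The sketch below (Python, for illustration only) uses an assumed exchange rate and a hypothetical AUD listing price rather than the exact values behind the table.

```python
# Minimal sketch of the pricing arithmetic. The exchange rate and the AUD
# listing price are assumptions for illustration, not the values used above.
AUD_TO_USD = 0.76  # assumed exchange rate (early April 2015 ballpark)

def aud_inc_gst_to_usd(price_aud: float) -> float:
    """Strip the 10% Australian GST, then convert the remainder to USD."""
    return (price_aud / 1.10) * AUD_TO_USD

def usd_per_gb(price_usd: float, capacity_gb: float) -> float:
    """Cost per gigabyte in USD."""
    return price_usd / capacity_gb

if __name__ == "__main__":
    print(round(aud_inc_gst_to_usd(659.0), 2))   # hypothetical AUD listing
    print(round(usd_per_gb(389, 400), 2))        # SSD 750 400GB  -> ~$0.97/GB
    print(round(usd_per_gb(1029, 1200), 2))      # SSD 750 1.2TB  -> ~$0.86/GB
```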

All in all, the SSD 750 is definitely a product I can recommend, as it's the fastest drive for IO-intensive workloads by a large margin. I can't say it's perfect, and for somewhat lighter IO workloads the SM951 wins my recommendation due to its more client-oriented design, but the SSD 750 is really a no-compromise product aimed at a relatively small high-end niche, and honestly it's the only real option in that niche. If your IO workload needs the storage performance of tomorrow, Intel and the SSD 750 have you covered today.

Comments

  • Kristian Vättö - Friday, April 3, 2015 - link

    As I explained in the article, I see no point in testing such high queue depths in a client-oriented review because the portion of such IOs is marginal. We are talking about a fraction of a percent, so while it would show big numbers it has no relevance to the end-user.
  • voicequal - Saturday, April 4, 2015 - link

    Since you feel strongly enough to levy a personal attack, could you also explain why you think QD128 is important? Anandtech's storage benchmarks are likely a much better indication of user experience unless you have a very specific workload in mind.
  • d2mw - Friday, April 3, 2015 - link

    Guys, why are you cut-pasting the same old spec table and formulaic article? For a review of the first consumer NVMe drive, I'm sorely disappointed you didn't touch on latency metrics: one of the most important improvements with NVMe.
  • Kristian Vättö - Friday, April 3, 2015 - link

    There are several latency graphs in the article and I also suggest that you read the following article to better understand what latency and other storage metrics actually mean (hint: latency isn't really different from IOPS and throughput).

    http://www.anandtech.com/show/8319/samsung-ssd-845...
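    To make the relationship Kristian points to concrete: at a fixed queue depth, latency, IOPS, and throughput are tied together (Little's law), so quoting one largely determines the others. A minimal sketch, with made-up latency and transfer-size values purely for illustration:

```python
# Sketch of how latency, IOPS, and throughput relate at a given queue depth.
# Numbers are illustrative, not measurements from the drives in this review.

def iops_from_latency(queue_depth: int, mean_latency_s: float) -> float:
    """Little's law: IOPS = outstanding IOs / mean completion latency."""
    return queue_depth / mean_latency_s

def throughput_mb_s(iops: float, transfer_size_kb: float) -> float:
    """Throughput = IOPS * transfer size."""
    return iops * transfer_size_kb / 1024.0

if __name__ == "__main__":
    # Example: QD1 4KB reads completing in 100 microseconds on average.
    iops = iops_from_latency(1, 100e-6)     # 10,000 IOPS
    print(iops, throughput_mb_s(iops, 4))   # ~39 MB/s
```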
  • Per Hansson - Friday, April 3, 2015 - link

    Hi Kristian, what evidence do you have that the firmware in the SSD 750 is any different from that found in the DC P3600 / P3700?
    According to reports leaked before the release, they have the same firmware: http://www.tweaktown.com/news/43331/new-consumer-i...

    And if you read the Intel changelog you see in firmware 8DV10130: "Drive sub-4KB sequential write performance may be below 1MB/sec"
    http://downloadmirror.intel.com/23931/eng/Intel_SS...
    Which was exactly what you found in the original review of the P3700:
    http://www.anandtech.com/show/8147/the-intel-ssd-d...
    http://www.anandtech.com/bench/product/1239

    Care to retest with the new firmware?
    I suspect you will get identical performance.
  • Per Hansson - Saturday, April 4, 2015 - link

    I should be more clear: I mean that you retest the P3700.
    And obviously the performance of the 750 won't match that, as it is based on the P3500.
    But I think you get what I mean anyway ;)
  • djsvetljo - Friday, April 3, 2015 - link

    I'm unclear on which connector this will use. Does it use the video card PCI-E port?

    I have an MSI Z97 MATE board that has one PCI-E gen3 x16 and one PCI-E gen2 x4. Will I be able to use it, and will I be limited somehow?
  • DanNeely - Friday, April 3, 2015 - link

    If you use the 2.0 x4 slot, your maximum throughput will top out at 2GB/s. For client workloads this probably won't matter much, since only some server workloads can hit situations where the drive exceeds that rate.
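    As a quick sanity check of that 2GB/s ceiling: PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, which works out to 500MB/s of usable bandwidth per lane and direction. A minimal sketch of the arithmetic (Python, for illustration):

```python
# Back-of-the-envelope PCIe 2.0 link bandwidth, before protocol overheads.

def pcie2_link_gb_s(lanes: int) -> float:
    """Raw usable bandwidth of a PCIe 2.0 link, per direction."""
    transfers_per_s = 5.0          # 5 GT/s per lane
    encoding = 8 / 10              # 8b/10b coding: 8 data bits per 10 bits on the wire
    gbit_per_lane = transfers_per_s * encoding
    return gbit_per_lane / 8 * lanes   # convert bits to bytes, scale by lane count

print(pcie2_link_gb_s(4))   # 2.0 GB/s for an x4 link
```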
  • djsvetljo - Friday, April 3, 2015 - link

    So it uses the GPU PCI-E slot, even though the card's pins are visually shorter?
  • eSyr - Friday, April 3, 2015 - link

    > although in real world the maximum bandwidth is about 3.2GB/s due to PCIe inefficiency
    What does this phrase mean? If you're referring to 8b/10b encoding, this is plainly false, since PCIe gen 3 utilizes 128b/130b coding. If you're referring to the overheads related to TLP and DLLP headers, this depends on the device's and the PCIe RC's maximum transaction size. But even with the (minimal) 128-byte limit it would be 3.36 GB/s. In fact, modern PCIe RCs support much larger TLPs, thus eliminating most of the header-related overhead.
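    For reference, here is the arithmetic behind numbers in this range. PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so an x4 link carries roughly 3.94GB/s before packet overhead; how much of that survives depends on the maximum payload size and on how many bytes of per-TLP overhead (header, framing, LCRC) are counted. The sketch below treats that overhead as an assumed parameter rather than a spec-mandated constant:

```python
# Rough PCIe 3.0 x4 throughput estimate as a function of payload size and
# per-TLP overhead. The overhead values are assumptions for illustration.

def pcie3_effective_gb_s(lanes: int, max_payload: int, overhead_bytes: int) -> float:
    lane_gbit = 8.0 * 128 / 130                  # 8 GT/s with 128b/130b encoding
    raw_gb_s = lane_gbit / 8 * lanes             # ~3.94 GB/s for x4, per direction
    efficiency = max_payload / (max_payload + overhead_bytes)
    return raw_gb_s * efficiency

print(pcie3_effective_gb_s(4, 128, 20))   # ~3.4 GB/s with 128B payloads, 20B overhead
print(pcie3_effective_gb_s(4, 128, 24))   # ~3.3 GB/s with a slightly larger overhead
print(pcie3_effective_gb_s(4, 256, 24))   # ~3.6 GB/s: larger payloads amortize the headers
```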
