Final Words

For years, Intel has been criticized for no longer caring about the client SSD space. The X25-M and its successive generations were all brilliant drives that essentially defined the standard for a good client SSD, but since then none of Intel's client SSDs have had the same "wow" effect. That's not to say Intel's later client SSDs have been inferior; they just haven't had any real competitive advantage over the other drives on the market. It's no secret that Intel shifted its SSD strategy to focus on the enterprise segment, and frankly that still makes a lot of sense: margins are higher, and enterprise offers far more room for innovation because its customers value more than just rock-bottom pricing.

With the release of the SSD 750, it's safe to say that any argument that Intel doesn't care about the client market is now invalid. Intel does care, but rather than bringing complex new technologies to market at a very early stage, it wants to ensure that the market is ready and that there's industry-wide support for the product. After all, NVMe requires BIOS support, and that support has only been available for a few months now, so it made sense not to release the SSD 750 any sooner.

Given the SSD 750's enterprise background, it's optimized more for consistency than for raw peak performance. The SM951, on the other hand, is a more generic client drive that concentrates on peak performance to improve results under typical client workloads. That's visible in our benchmarks: the only test where the SSD 750 beats the SM951 is The Destroyer trace, which represents a very IO-intensive workload that only applies to power users and professionals. It makes sense for Intel to focus on that very specific target group because those are the people who are willing to pay a premium for higher storage performance.

With that said, I'm not sure if I fully agree with Intel's heavy random IO focus. The sequential performance isn't bad, but I think the SSD 750 as it stands today is a bit unbalanced and could use some improvements to sequential performance, even if they came at the cost of random performance.

Price Comparison (4/2/2015)
                        128GB   256GB   400GB   512GB   1.2TB
Intel SSD 750 (MSRP)      -       -     $389      -     $1,029
Samsung SM951           $120    $239      -     $459      -

RamCity actually just got its first batch of SM951s this week, so I've included the drive in the table for comparison. (Note that the prices on RamCity's website are in AUD, so I've converted them to USD and also subtracted the 10% tax that only applies to Australian orders.) The SSD 750 is fairly competitive in price, although you obviously have to fork out more money than you would for a SATA drive of similar capacity. Nevertheless, coming in under a dollar per gigabyte is very reasonable given the performance and full power-loss protection that the SSD 750 delivers.
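For reference, the currency conversion works roughly like the Python sketch below. This is a minimal illustration only: the exchange rate and the sample price are placeholders, not the figures used in the table above.

    # Rough sketch of the RamCity price conversion described above.
    # AUD_USD is a hypothetical exchange rate, not the one used in the table.
    AUD_USD = 0.76  # placeholder AUD -> USD rate

    def ramcity_to_usd(aud_price_inc_gst):
        """Strip the 10% Australian GST, then convert AUD to USD."""
        ex_gst = aud_price_inc_gst / 1.10  # GST-inclusive -> ex-GST price
        return ex_gst * AUD_USD

    # A hypothetical AUD $619 listing works out to roughly USD $428
    # at the placeholder rate.
    print(f"${ramcity_to_usd(619):.0f}")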

All in all, the SSD 750 is definitely a product I recommend, as it's the fastest drive for IO-intensive workloads by a large margin. I can't say it's perfect, and for slightly lighter IO workloads the SM951 wins my recommendation thanks to its more client-oriented design, but the SSD 750 is a no-compromise product aimed at a relatively small high-end niche, and honestly it's the only viable option in that niche. If your IO workload needs the storage performance of tomorrow, Intel and the SSD 750 have you covered today.

Comments

  • knweiss - Thursday, April 2, 2015 - link

    According to Semiaccurate the 400 GB drive has "only" 512 MB DRAM.
    (Unfortunately, ARK hasn't been updated yet so I can't verify.)
  • eddieobscurant - Thursday, April 2, 2015 - link

    You're right, it's probably 512MB for the 400GB model and 1GB for the 1.2TB model.
  • Azunia - Thursday, April 2, 2015 - link

    In PCPer's review, they actually talk about the problems of benchmarking this drive. (https://www.youtube.com/watch?v=ubxgTBqgXV8)

    Seems like some benchmarks, such as Iometer, can't actually keep the drive fed because they're programmed with a single thread. Have you had similar experiences during benchmarking, or is their logic faulty?
  • Kristian Vättö - Friday, April 3, 2015 - link

    I didn't notice anything that would suggest a problem with Iometer's ability to saturate the drive. In fact, Intel provided us with Iometer benchmarking guidelines for the review, although they didn't really differ from what I've been doing for a while now.
  • Azunia - Friday, April 3, 2015 - link

    I reread their article, and it seems like the only problem is Iometer's Fileserver IOPS test, which peaks at around 200,000 IOPS. Since you don't use that one, that's probably why you saw no problem. (A sketch of the single-thread bottleneck in question follows the comments below.)
  • Gigaplex - Thursday, April 2, 2015 - link

    "so if you were to put two SSD 750s in RAID 0 the only option would be to use software RAID. That in turn will render the volume unbootable"

    It's incredibly easy to use software RAID in Linux on the boot drives. Not all software RAID implementations are as limiting as Windows.
  • PubFiction - Friday, April 3, 2015 - link

    "For better readability, I now provide bar graphs with the first one being an average IOPS of the last 400 seconds and the second graph displaying the standard deviation during the same period"

    lol, why not just portray the standard deviation as error bars, like they're supposed to be shown? Kudos for being one of the few sites to recognize this, but what a convoluted, senseless way of showing them. (An illustration of this suggestion follows the comments below.)
  • Chloiber - Friday, April 3, 2015 - link

    I think the practical tests in many other reviews show that the normal consumer gets absolutely no benefit (other than being able to copy files faster) from such an SSD. We reached that peak a long time ago; SSDs are not the limiting factor anymore.

    Still, it's great to finally see major improvements again. It was always sad that all SSDs were limited by the interface. That was the case with SATA 2, and it's the case with SATA 3.
  • akdj - Friday, April 3, 2015 - link

    Thanks for sharing, Kristian.
    A query about throughput when using these in external Thunderbolt docks and PCIe 'decks' (several new third-party drive and GPU enclosures are experimenting with the latter, adding powerful desktop {GPU} cards, etc.): would there still be a 'bottleneck' (not that SLI or Crossfire, with the exception of the Mac Pro and how its two AMDs work together, would be a concern in OS X, but on Windows motherboards...) if you were to utilize the TBolt headers to the PCIe lane-->CPU? These seem like a better idea than external video cards for what I'm doing on the rMBPs. The GPUs are quick enough, especially in tandem with the Iris Pro and its ability to 'calculate' ;) -- but a 2.4GB/s twin-card RAID external box with a 'one cord' plug, hot or cold, would be SWEEET.
  • wyewye - Friday, April 3, 2015 - link

    Kristian: test with QD128, moron, it's NVMe.

    Anandtech becomes more and more idiotic: poor articles and crappy hosting, where you have to reload pages multiple times to access anything.

    Go look at TSSDreview for a competent review.
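On the Iometer discussion in the comments above: below is a minimal Python sketch (an assumption-laden illustration, not any reviewer's actual methodology) of why a single submission thread can cap measured IOPS. A synchronous reader waits for each completion before issuing the next request, so one thread is limited by per-IO round-trip latency, while more threads approximate a higher effective queue depth. The target path, block size, and thread counts are placeholders, and reads here go through the page cache, which real benchmarking tools avoid via O_DIRECT or asynchronous IO.

    import os
    import random
    import threading
    import time

    TARGET = "/dev/nvme0n1"  # hypothetical device or large file to read
    BLOCK = 4096             # 4KB random reads
    SPAN = 1 << 30           # stay within the first 1 GiB
    SECONDS = 5              # measurement window per run

    def worker(counts, idx, stop):
        # Synchronous reads: each call blocks until the IO completes,
        # so a single thread's IOPS is bounded by per-IO latency.
        fd = os.open(TARGET, os.O_RDONLY)
        while not stop.is_set():
            offset = random.randrange(SPAN // BLOCK) * BLOCK
            os.pread(fd, BLOCK, offset)
            counts[idx] += 1  # each thread touches only its own slot
        os.close(fd)

    def measure(num_threads):
        counts = [0] * num_threads
        stop = threading.Event()
        threads = [threading.Thread(target=worker, args=(counts, i, stop))
                   for i in range(num_threads)]
        for t in threads:
            t.start()
        time.sleep(SECONDS)
        stop.set()
        for t in threads:
            t.join()
        return sum(counts) / SECONDS

    if __name__ == "__main__":
        # More threads = higher effective queue depth; on a fast NVMe
        # drive the totals should keep scaling well past one thread.
        for n in (1, 4, 16):
            print(f"{n:2d} threads: {measure(n):,.0f} IOPS")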
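And on the error-bar suggestion above: the average-IOPS and standard-deviation graphs could indeed be combined into one chart, as in this hedged sketch (the numbers are invented for illustration, not taken from the review's data):

    import matplotlib.pyplot as plt

    # Hypothetical steady-state results (last 400 seconds of the test);
    # these values are made up for illustration only.
    drives = ["Intel SSD 750", "Samsung SM951", "SATA drive"]
    mean_iops = [95000, 70000, 25000]
    stdev_iops = [4000, 15000, 9000]

    fig, ax = plt.subplots()
    # yerr draws the standard deviation as error bars on the same chart.
    ax.bar(drives, mean_iops, yerr=stdev_iops, capsize=6)
    ax.set_ylabel("IOPS")
    ax.set_title("Average IOPS with standard deviation as error bars")
    plt.tight_layout()
    plt.show()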
