I don't think it's an overstatement to say that Intel introduced us to the era of modern SSDs back in 2008 with the X25-M. It wasn't the first SSD on the market, but it was the first drive that delivered the qualities we now take for granted: high, consistent and reliable performance. Many SSDs in the early days focused solely on sequential performance because that was a common performance metric for hard drives, but Intel understood that the key to a better user experience wasn't maximum throughput, but the small random IOs that take unbearably long to complete on HDDs. Thanks to Intel's early understanding of real world workloads and its ability to turn that knowledge into a well designed product, it took several years before others were able to fully catch up with the X25-M.

But when the time came to upgrade to SATA 6Gbps, Intel missed the train. The initial SATA 6Gbps drives had to rely on third party silicon because Intel's own SATA 6Gbps controller was still in development, and to put it frankly, the SSD 510 and SSD 520 just didn't pack the same punch that the X25-M did. The others had also done their homework and gone back to the drawing board, which meant that Intel was no longer in the special position it held in 2008. Once the SSD DC S3700 with an in-house Intel SATA 6Gbps controller finally materialized in late 2012, it quickly restored the image Intel had built in the X25-M days. The DC S3700 wasn't as revolutionary as the X25-M, but it again focused on an area where other manufacturers had been lacking, namely performance consistency.

The first and second generation Intel X25-M

While Intel was arguably late to the SATA 6Gbps game, the company already had something much bigger in mind: something that would abandon the bottlenecks of the SATA interface and challenge the X25-M in significance in the history of SSDs. That product was the SSD DC P3700, the world's first drive with a custom PCIe NVMe controller and the first NVMe drive that was widely available.

Ever since our SSD DC P3700 review, there has been massive interest from enthusiasts and professionals in a more client-oriented product based on the same platform. With eMLC, ten drive writes per day of endurance and a full enterprise-class feature set, the SSD DC P3700 was simply out of reach for consumers at $3 per gigabyte, as even the smallest 400GB SKU cost as much as a decent high power PC build. Intel didn't ignore those prayers and wishes, and with today's release of the SSD 750 the company is delivering what many of you have been craving for months: NVMe with a consumer friendly price tag, in either a 2.5" form factor via SFF-8639 or a PCIe add-in card.

Intel SSD 750 Specifications
Capacity                       400GB          1.2TB
Form Factor                    2.5" 15mm SFF-8639 or PCIe Add-In Card (HHHL)
Interface                      PCIe 3.0 x4 - NVMe
Controller                     Intel CH29AE41AB0
NAND                           Intel 20nm 128Gbit MLC
Sequential Read                2,200MB/s      2,400MB/s
Sequential Write               900MB/s        1,200MB/s
4KB Random Read                430K IOPS      440K IOPS
4KB Random Write               230K IOPS      290K IOPS
Idle Power Consumption         4W             4W
Read/Write Power Consumption   9W / 12W       10W / 22W
Encryption                     N/A
Endurance                      70GB Writes per Day for Five Years
Warranty                       Five Years
MSRP                           $389           $1,029
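The endurance rating in the table works out to a fairly modest total write volume. Here's a quick back-of-the-envelope sketch; the only input is the 70GB/day figure from the table, so the totals below are simple arithmetic rather than an official Intel TBW rating.

```python
# Rough endurance math implied by the 70GB/day rating over the five-year
# warranty. These are derived figures, not published Intel specs.
GB_PER_DAY = 70
YEARS = 5
DAYS_PER_YEAR = 365

total_writes_gb = GB_PER_DAY * DAYS_PER_YEAR * YEARS  # 127,750GB
print(f"Total rated writes: ~{total_writes_gb / 1000:.0f}TB")

for capacity_gb in (400, 1200):
    dwpd = GB_PER_DAY / capacity_gb
    print(f"{capacity_gb}GB model: ~{dwpd:.2f} drive writes per day")
```

In other words, roughly 128TB of total writes, or about 0.18 drive writes per day for the 400GB model - a far cry from the ten drive writes per day of the DC P3700, but plenty for client use.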

Even though the SSD 750 is built upon the SSD DC P3700 platform, it's a completely different product. Intel spent a lot of time redesigning the firmware to be more suitable for client applications, which differ greatly from typical enterprise workloads. The SSD 750 is focused more on random performance because the majority of IOs in client workloads tend to have random patterns and be small in size. For that reason the sequential write speeds may seem a bit low for such high capacities, but ultimately Intel's goal was to provide better real world performance rather than chase maximum benchmark numbers, which has been Intel's strategy ever since the X25-M days.

At the time of launch, the SSD 750 will only be available in capacities of 400GB and 1.2TB. An 800GB SKU is being considered, but I think Intel is still testing the waters with the SSD 750 and thus the initial lineup is limited to just two SKUs. After all, the ultra high-end is a niche market, and even in that space the SSD 750 is much more expensive than existing SATA drives, so a gradual rollout makes a lot of sense. I think for enthusiasts the 400GB model is the sweet spot because it provides enough capacity for the OS and applications/games, whereas professionals will likely want to spring for the 1.2TB model if they are looking for high-speed storage for work files (video editing is a prime example).

The SSD 750 utilizes Intel-Micron's 20nm 128Gbit MLC NAND. The die configuration is actually fairly interesting: the packages on the front side of the PCB (i.e. the side that's covered by the heat sink and where the controller is) are quad-die with 64GiB capacity (4x128Gbit), whereas the packages on the back side of the PCB are all single-die. I suspect Intel did this for heat reasons because PCIe is more capable of utilizing the NAND to its full potential, which increases the heat output, and obviously four dies inside one package generate more heat than a single die. With 18 packages on the front side and 14 on the back, the raw NAND capacity comes in at 1,376GiB, resulting in effective over-provisioning of 18.8% with 1,200GB of usable capacity.
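For those who want to double check the math, here's a quick sketch of the raw capacity and over-provisioning calculation for the 1.2TB model; the package counts and die capacities are the ones described above, and the GiB-to-GB conversion is the only extra step.

```python
# Back-of-the-envelope check of the 1.2TB SSD 750's raw NAND capacity and
# over-provisioning: 18 quad-die (4 x 128Gbit = 64GiB) packages on the front
# and 14 single-die (16GiB) packages on the back.
GIB = 2**30
front_gib = 18 * 4 * 16   # 18 packages x 4 dies x 16GiB per 128Gbit die
back_gib = 14 * 1 * 16    # 14 packages x 1 die x 16GiB
raw_gib = front_gib + back_gib           # 1,376GiB
raw_gb = raw_gib * GIB / 1e9             # convert to decimal gigabytes
usable_gb = 1200
op = (raw_gb - usable_gb) / raw_gb
print(f"Raw NAND: {raw_gib}GiB ({raw_gb:.0f}GB), over-provisioning: {op:.1%}")
```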

The controller is the same 18-channel behemoth running at 400MHz that is found inside the SSD DC P3700. Nearly all client-grade controllers today are 8-channel designs, so with over twice the number of channels Intel has a clear NAND bandwidth advantage over the more client-oriented designs. That said, the controller is also much more power hungry and the 1.2TB SSD 750 consumes over 20W under load, so you won't be seeing an M.2 variant with this controller. 
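To illustrate why the channel count matters, here's a purely hypothetical comparison. The per-channel transfer rate below is an assumed round number for illustration only, not a figure Intel has published for this controller.

```python
# Illustrative aggregate NAND bandwidth for an 18-channel controller vs. a
# typical 8-channel client design. The per-channel rate is an assumption.
ASSUMED_MB_PER_CHANNEL = 200  # hypothetical per-channel NAND transfer rate

for channels in (8, 18):
    aggregate = channels * ASSUMED_MB_PER_CHANNEL
    print(f"{channels} channels: ~{aggregate}MB/s of raw NAND bandwidth")
```

Whatever the real per-channel figure is, the 18-channel design has well over twice the parallelism to feed the PCIe 3.0 x4 link.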

Similar to the SSD DC P3700, the SSD 750 features full power loss protection that protects all data in the DRAM, including user data in flight. I'm happy to see that Intel understands that power loss protection can be a critical feature for the high-end client segment as well, because professional users in particular can't afford the risk of losing any data.

The Form Factors & SFF-8639 Connector

The SSD 750 is available in two form factors: a traditional half-height, half-length add-in card and a 2.5" 15mm drive. The 2.5" form factor utilizes an SFF-8639 connector that is mostly used in the enterprise, but it's slowly making its way to the high-end client side as well (ASUS announced the TUF Sabertooth X99 just two weeks ago at CeBIT). SFF-8639 is essentially SATA Express on steroids and offers four lanes of PCIe connectivity for up to 4GB/s of bandwidth with PCIe 3.0 (although in the real world the maximum bandwidth is about 3.2GB/s due to PCIe protocol overhead). Honestly, aside from the awkward name, SFF-8639 is what SATA Express should have been from the beginning: nearly all upcoming PCIe controller designs will feature four PCIe lanes, which renders SATA Express useless, as there's no point in handicapping a drive with an interface that's only capable of providing half of the available bandwidth. Granted, I wasn't at the table when SATA-IO made the decision, but it's clear that the spec wasn't fully thought through.
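For reference, here's where the "up to 4GB/s" figure comes from: a quick sketch of the PCIe 3.0 x4 link math. The ~3.2GB/s real-world ceiling quoted above additionally reflects packet and protocol overhead, which varies with payload size, so it isn't captured by the line-encoding calculation alone.

```python
# Theoretical bandwidth of a PCIe 3.0 x4 link: 8GT/s per lane with 128b/130b
# line encoding. Protocol/packet overhead lowers real-world throughput further.
GT_PER_S = 8.0          # PCIe 3.0 transfer rate per lane
ENCODING = 128 / 130    # 128b/130b line encoding
LANES = 4

per_lane_gbs = GT_PER_S * ENCODING / 8   # GB/s per lane after encoding
link_gbs = per_lane_gbs * LANES
print(f"Theoretical x4 link bandwidth: ~{link_gbs:.2f}GB/s")  # ~3.94GB/s
```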

The SFF-8639 connector

Similar to SATA Express, SFF-8639 has a separate SATA power input in the cable. That's admittedly quite unwieldy, but it's necessary to keep motherboard and cable costs reasonable. The SSD 750 requires both 3.3V and 12V rails for power, so if the drive were to draw power from PCIe, it would have required some additional components on the motherboard side, which is something that the motherboard OEMs are hesitant about due to the added cost, especially since it's just one port that may not even be used by the end-user.

The motherboard end of the SFF-8639 cable

As the industry moves forward and PCIe becomes more common, I think we'll see SFF-8639 being adopted more widely. The 2.5" form factor is really the best for a desktop system because the drive location is not fixed to one spot on the motherboard or in the case. While M.2 and add-in cards provide a cleaner look thanks to the lack of cables, they both eat precious motherboard area that could be used for something else. That's the reason why motherboards don't usually have more than one M.2 slot as the area taken by the slot can't really be used for any other components. Another issue especially with add-in cards is the heat coming from other PCIe cards (namely high power GPUs) that can potentially throttle the drive, whereas drive bays tend to be located in the front of the case with good airflow and no heat coming from surrounding components. 

Utilizing the Full Potential of NVMe

Because the SSD 750 is a PCIe 3.0 design, it must be connected directly to the CPU's PCIe 3.0 lanes for maximum throughput. All the chipsets in Intel's current lineup are of the slower PCIe 2.0 flavor, which would effectively cut the maximum throughput to half of what the SSD 750 is capable of. The even bigger issue is that the DMI 2.0 interface that connects the platform controller hub (PCH) to the CPU is only four lanes wide (i.e. up to 2GB/s), so if you connect the SSD 750 to the PCH's PCIe lanes and access other devices connected to the PCH (e.g. USB, SATA or LAN) at the same time, the performance would be handicapped even further.
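To put some numbers on that, here's a quick sketch comparing the chipset path against the CPU's own lanes; DMI 2.0 is electrically equivalent to a PCIe 2.0 x4 link with 8b/10b encoding.

```python
# Comparison of the DMI 2.0 path (PCIe 2.0 x4, 8b/10b) against the CPU's
# PCIe 3.0 x4 lanes, per direction, before protocol overhead.
def link_gbs(gt_per_s: float, encoding: float, lanes: int) -> float:
    # usable bytes per second per direction after line encoding
    return gt_per_s * encoding * lanes / 8

dmi2 = link_gbs(5.0, 8 / 10, 4)         # ~2.0GB/s
pcie3_x4 = link_gbs(8.0, 128 / 130, 4)  # ~3.9GB/s
print(f"DMI 2.0 path:    ~{dmi2:.1f}GB/s")
print(f"CPU PCIe 3.0 x4: ~{pcie3_x4:.1f}GB/s")
```

And remember that the 2GB/s DMI link is shared with everything else hanging off the PCH, so the SSD would rarely get even that much to itself.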

 

Intel Z97 chipset block diagram

Utilizing the CPU's PCIe lanes presents some possible bottlenecks for users of the Z97 chipset because the normal Haswell CPUs feature only sixteen PCIe 3.0 lanes. In other words, if you wish to use the SSD 750 with a Z97 chipset, you have to give up some GPU PCIe bandwidth because the SSD 750 will take four lanes out of the sixteen. With a single GPU setup that's hardly an issue, but with an SLI/CrossFire setup there's a possibility of some bandwidth handicapping if the GPUs and SSD are utilizing the interface simultaneously. Also, because NVIDIA requires at least PCIe x8 per card for SLI, the remaining lanes effectively limit you to a single NVIDIA card (see the sketch below). Fortunately it's quite rare for an application to tax the GPUs and storage at the same time since games tend to load data to RAM for faster access, and with the help of PCIe switches it's possible to grant all devices the lanes they require (the maximum bandwidth isn't increased, but switches allow full x16 bandwidth to the GPUs when they need it).
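To make the lane math explicit, here's a trivial sketch for a Z97 board without a PLX switch; the x8-per-card figure is NVIDIA's SLI requirement mentioned above.

```python
# Lane-budget sketch for Z97/Haswell (16 CPU PCIe 3.0 lanes, no PLX switch).
# SLI needs at least x8 per NVIDIA card, so adding the SSD 750 rules out
# two-card SLI on this platform without a switch.
CPU_LANES = 16
SSD_LANES = 4
MIN_LANES_PER_SLI_CARD = 8

remaining = CPU_LANES - SSD_LANES
max_sli_cards = remaining // MIN_LANES_PER_SLI_CARD
print(f"Lanes left for GPUs: {remaining}")          # 12
print(f"Max NVIDIA cards in SLI: {max_sli_cards}")  # 1
```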

Intel X99 chipset block diagram

With Haswell-E and its 40 PCIe 3.0 lanes, there are obviously no issues with bandwidth even with an SLI/CrossFire setup and two SSD 750s. Unfortunately the X99 (or any other current chipset) doesn't support PCIe RAID, so if you were to put two SSD 750s in RAID 0, the only option would be software RAID. That in turn renders the volume unbootable, and I had some performance issues with two Samsung XP941s in software RAID, so at this point I would advise against RAIDing SSD 750s. We'll have to wait for Intel's next generation chipsets to get proper RAID support for PCIe SSDs.

As for older chipsets, Intel isn't guaranteeing compatibility with 8-series chipsets and older. The main issue here is that the motherboard OEMs aren't usually willing to support older chipsets in the form of BIOS updates and the SSD 750 (and NVMe in general) requires some BIOS modifications in order to be bootable. That said, some older motherboards may work with the SSD 750 just fine, but I suggest you do some research online or contact the motherboard manufacturer before pulling the trigger on the SSD 750.

Bootable? Yes

Understandably the big question many of you have is whether the SSD 750 can be used as a boot drive. I've confirmed that the drive is bootable in my testbed with an ASUS Z97 Deluxe motherboard running the latest BIOS, and it should be bootable on any motherboard with proper NVMe support. Intel will have a list of supported motherboards on the SSD 750 product page, which are all X99 and Z97 based at the moment, but the list will likely expand over time (it's up to the motherboard manufacturers to release a BIOS version with NVMe support).

Furthermore, I know many of you want to see some actual real world tests that compare NVMe to SATA drives, and I'm working on a basic test suite to cover that. Unfortunately, I didn't have the time to include it in this review due to this week's and last week's NDAs, but I will publish it as a separate article as soon as it's done. If there are any specific tests that you would like to see, feel free to make suggestions in the comments below and I'll see what I can do.

AnandTech 2015 SSD Test System
CPU Intel Core i7-4770K running at 3.5GHz (Turbo & EIST enabled, C-states disabled)
Motherboard ASUS Z97 Deluxe (BIOS 2205)
Chipset Intel Z97
Chipset Drivers Intel 10.0.24+ Intel RST 13.2.4.1000
Memory Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T)
Graphics Intel HD Graphics 4600
Graphics Drivers 15.33.8.64.3345
Desktop Resolution 1920 x 1080
OS Windows 8.1 x64
Comments

  • oranos - Friday, April 3, 2015 - link

    Insane performance, insane value. What else to say? Intel never loses a step and surprises at every turn.
  • Peichen - Friday, April 3, 2015 - link

    Sounds like there is going to be a big form factor change coming to desktop computers in the next few years. Complete removal of 5.25" and 3.5" drives, M.2 and 2.5" drives taking over, CPUs limited to <77W and video cards to <250W.

    I should hold off replacing my still very good case until I am building a new computer in 3~4 years.
  • cjones13 - Friday, April 3, 2015 - link

    How would this drive compare with a 4-drive (Samsung 850 Pro 512), two-card Sonnet Tempo SSD Pro Plus arrangement? That setup is about $600 more, but 800GB larger and overall ~same $/GB @ .82
  • Freakie - Friday, April 3, 2015 - link

    Maybe I'm just blind, but I don't see this 750 in Bench? Did someone forget to add it to Bench or is there a reason why it's not in there?
  • boe - Saturday, April 4, 2015 - link

    Those 10TB and 32TB SSDs can't come soon enough. I just hope they come down to an affordable price very soon as standard SSDs are still way too expensive per TB for any real storage needs.
  • gattberserk - Saturday, April 4, 2015 - link

    Can I ask why is the boot time so slow? For a drive this expensive this is not something that is tolerable.

    Is it possible to do a boot up timing with the fast boot function enabled? I wanna see how fast it will be as compared with other SATA drives using the same fast boot function.

    The boot up time will be the last factor to decide if I wanna pull trigger on this one.
  • Laststop311 - Saturday, April 4, 2015 - link

    This drive is a beast and just raised the cost of my skylake-e build another 1000 dollars. Maybe an even better 2nd generation version will be out by then. Upgrading my gulftown to 8 core skylake-e flagship. 4.3ghz i7-980x will have lasted me 7 years by the time skylake-e comes out which is a pretty darn good service life. Convert the ole gulftown into a seedbox/personal cloud nas/htpc/living room gaming console. Kill all the oc's and undervolt cpu for the lowest voltage stable at stock and turn all the noctua fans down with ULN adapters into silence mode. It will be rough re buying a buncha parts I wouldn't have had to if I didn't keep the PC together but it's too good of a PC still to dismantle for parts. Will be nice having a beastly backup pc.

    My skylake-e build has really ballooned in price but this next upgrade should last a full decade with a couple gpu upgrades using the flagship skylake-e 8 core i7 + 1.2TB intel 750 boot drive + nvidia/amd flagship 16nm FF+ GPU. Basically like 3000 dollars just in 3 parts :(. Thats ok tho it brings too many features to the table pci-e 4.0 DMI 3.0 USB 3.1 built into chipset natively 10gbit ethernet natively up to 3x ultra m2 slots and the SFF connector used in this drive possibly thunderbolt 2 built in natively of course quad channel ddr4. Hopefully better overclocking with the heat producing FIVR removed guessing 4.7-5ghz will be possible on good water cooling to the 8 core.

    Sorry got on a tangent. I'm just excited there are finally enough upgrades to make a new PC worth it. No applause for intel tho it took them 7 years to make a gulftown PC worth upgrading. I should see a nice IPC gain from i7-980x gulftown to skylake-e. I'll be happy with 50-60% IPC gain and 500 extra mhz on my 980x so 4.8ghz. I think 6x 140mm high static pressure noctuas in push/pull and a 420mm rad should provide enough cooling for 4.8ghz on 8 core skylake-e if the chip is capable. Goal is to push it to 5.0ghz tho and get 700mhz speed increase + additional 55% IPC gain.
  • gattberserk - Sunday, April 5, 2015 - link

    Unfortunately Skylake-E is not coming for another 2 years. There is no news of BW-E even, and it will be another year after that before Skylake-E comes in.

    By then, the 750 would have been obsolete, especially with Samsung 3D NAND in NVMe PCIe SSDs.
  • JatkarP - Saturday, April 4, 2015 - link

    <$1/GB at 2400/1200 MBps R/W performance. What else do you need!!
  • Ethos Evoss - Saturday, April 4, 2015 - link

    I would rather go for the new Plextor, which is 5 times cheaper and has the same specs..
