Intel's SSD 600p was the first PCIe SSD using TLC NAND to hit the consumer market. It is also Intel's first consumer SSD with 3D NAND, and it is by far the most affordable NVMe SSD: current pricing is on par with mid-range SATA SSDs. While most other consumer PCIe SSDs have been enthusiast-oriented products aiming to deliver the highest performance possible, the Intel 600p merely attempts to break the speed limits of SATA without breaking the bank.

The Intel SSD 600p has almost nothing in common with Intel's previous NVMe SSD for consumers (the Intel SSD 750). Where the Intel SSD 750 uses Intel's in-house enterprise SSD controller with consumer-oriented firmware, the Intel 600p uses a third-party controller. The SSD 600p is an M.2 PCIe SSD with peak power consumption only slightly higher than the SSD 750's idle power. By comparison, the Intel SSD 750 is a high-power, high-performance drive that comes in PCIe expansion card and 2.5" U.2 form factors, both with sizable heatsinks.

Intel SSD 600p Specifications Comparison

                               128GB        256GB        512GB        1TB
Form Factor                    single-sided M.2 2280
Controller                     Intel-customized Silicon Motion SM2260
Interface                      PCIe 3.0 x4
NAND                           Intel 384Gb 32-layer 3D TLC
SLC Cache Size                 4 GB         8.5 GB       17.5 GB      32 GB
Sequential Read                770 MB/s     1570 MB/s    1775 MB/s    1800 MB/s
Sequential Write (SLC Cache)   450 MB/s     540 MB/s     560 MB/s     560 MB/s
4KB Random Read (QD32)         35k IOPS     71k IOPS     128.5k IOPS  155k IOPS
4KB Random Write (QD32)        91.5k IOPS   112k IOPS    128k IOPS    128k IOPS
Endurance                      72 TBW       144 TBW      288 TBW      576 TBW
Warranty                       5 years

The Intel SSD 600p is our first chance to test Silicon Motion's SM2260 controller, their first PCIe SSD controller. Silicon Motion's SATA SSD controllers have built a great reputation for being affordable, low-power and delivering good mainstream performance. One key to the power efficiency of Silicon Motion's SATA SSD controllers is their use of an optimized single-core ARC processor (licensed from Synopsys), but in order to meet the SM2260's performance targets, Silicon Motion has finally switched to a dual-core ARM processor. The controller chip used on the SSD 600p has some customizations specific to Intel and bears both Intel and SMI logos.

The 3D TLC NAND used on the Intel SSD 600p is the first generation 3D NAND co-developed with Micron. We've already evaluated Micron's Crucial MX300 with the same 3D TLC and found it to be a great mainstream SATA SSD. The MX300 was unable to match the performance of Samsung's 3D TLC NAND as found in the 850 EVO, but the MX300 is substantially cheaper and remarkably power efficient, both in comparison to Samsung's SSDs and to other SSDs that use the same controller as the MX300 but planar NAND.

Intel uses the same 3D NAND flash die for both its MLC and TLC parts. In the MLC configuration, which has not yet found its way to the consumer SSD market, the die has a capacity of 256Gb (32GB); storing three bits per cell instead of two gives the TLC configuration a capacity of 384Gb (48GB). Micron took advantage of this odd size to offer the MX300 in non-standard capacities, but for the SSD 600p Intel is offering normal power-of-two capacities, with large fixed-size SLC write caches carved out of the spare area. The ample spare area also allows for a write endurance rating of about 0.3 drive writes per day for the duration of the five-year warranty.
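The TBW ratings in the specifications table work out to roughly the same drive writes per day at every capacity. A quick back-of-the-envelope check (using decimal units throughout, so the figures are approximate):

```python
# Convert Intel's rated endurance (TBW) into drive writes per day (DWPD)
# over the five-year warranty. Capacities and TBW figures are from the
# spec table above; 1 TB is treated as 1000 GB for simplicity.
ratings = {128: 72, 256: 144, 512: 288, 1024: 576}  # capacity GB -> TBW
warranty_days = 5 * 365

for capacity_gb, tbw in ratings.items():
    dwpd = tbw * 1000 / (capacity_gb * warranty_days)
    print(f"{capacity_gb} GB: {dwpd:.2f} DWPD")  # ~0.31 DWPD for each model
```

Since every capacity doubles both the raw NAND and the TBW rating, the per-day write allowance scales linearly and the DWPD figure stays constant across the lineup.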

Intel 3D TLC NAND, four 48GB dies for a total of 192GB per package

The Intel SSD 600p shares its hardware with two other Intel products: the SSD Pro 6000p for business client computing and the SSD E 6000p for the embedded and IoT market. The Pro 6000p is the only one of the three to support encryption and Intel's vPro security features. The SSD 600p relies on the operating system's built-in NVMe driver and Intel's consumer SSD Toolbox software which was updated in October to support the 600p.

For this review, the primary comparisons will not be against high-end NVMe drives but against mainstream SATA SSDs, as these are ultimately the closest we can get to 'mid-to-low range' NVMe. The Crucial MX300 has given us a taste of what the Intel/Micron 3D TLC can do, and it is currently one of the best value SSDs on the market. The Samsung 850 EVO is very close to the Intel SSD 600p in price and sets the bar for the performance the SSD 600p needs to provide in order to be a good value.

Because the Intel SSD 600p is targeting a more mainstream audience and a more modest level of performance than most other M.2 PCIe SSDs, I have additionally tested its performance in the M.2 slot built into the testbed's ASUS Z97 Pro motherboard. In this configuration the SSD 600p is limited to a PCIe 2.0 x2 link, compared to the PCIe 3.0 x4 link available during the ordinary testing process, where an adapter is used in the primary PCIe x16 slot. This extra set of results does not include power measurements, but it may be more useful to desktop users who are considering adding a cheap NVMe SSD to an older but compatible existing system.
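A back-of-the-envelope calculation shows why the link width matters. The per-lane figures below account for encoding overhead (8b/10b for PCIe 2.0, 128b/130b for PCIe 3.0) and are approximations; `usable_bw` is an illustrative helper, not a real API:

```python
# Approximate usable bandwidth of the two PCIe links used in testing.
# Per-lane throughput after encoding overhead:
#   PCIe 2.0: 5 GT/s * 8/10    -> ~500 MB/s per lane
#   PCIe 3.0: 8 GT/s * 128/130 -> ~985 MB/s per lane
def usable_bw(per_lane_mbps, lanes):
    return per_lane_mbps * lanes

pcie2_x2 = usable_bw(500, 2)   # Z97 onboard M.2 slot
pcie3_x4 = usable_bw(985, 4)   # adapter in the primary x16 slot

print(f"PCIe 2.0 x2: ~{pcie2_x2} MB/s")   # ~1000 MB/s
print(f"PCIe 3.0 x4: ~{pcie3_x4} MB/s")   # ~3940 MB/s
```

The 1TB 600p's rated 1800 MB/s sequential read fits comfortably within a PCIe 3.0 x4 link, but would be capped near 1 GB/s by the Z97 board's onboard slot.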

AnandTech 2015 SSD Test System

CPU                 Intel Core i7-4770K running at 3.5GHz (Turbo & EIST enabled, C-states disabled)
Motherboard         ASUS Z97 Pro (BIOS 2701)
Chipset             Intel Z97
Memory              Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T)
Graphics            Intel HD Graphics 4600
Desktop Resolution  1920 x 1200
OS                  Windows 8.1 x64

63 Comments


  • close - Thursday, November 24, 2016 - link

    ddriver, you're the guy who insisted he designed a 5.25" hard drive that's better than anything on the market, despite being laughed at and proven wrong beyond any shadow of a doubt, yet you still insist on beginning and ending almost all of your comments with "you don't have a clue" or "you probably don't know". Projecting much?

    You're not an engineer and you're obviously not even remotely good at tech. You have no idea (and it actually does matter) how this works. You just make up scenarios in your head for how you *think* it works and then you throw a tantrum when you're contradicted by people who don't have to imagine this stuff, they know it.

    In your scenario you have 2 clients using 2 galleries at the same time (reasonable enough, 2 users/server just like any respectable content server). Your server reads image 1, sends it, then reads image 2 and sends it, because when working with a gallery this is exactly how it works (it definitely won't be 200 users requesting thousands of thumbnails for each gallery and then having to send those to each client). Then the network bandwidth will be an issue because your content server is limited to 100Mbps, maybe 1Gbps, since you only designed it for 2 concurrent users. A server delivering media content - so a server whose ONLY job is to DELIVER MEDIA CONTENT - will have the kind of bandwidth that's "vastly exceeded by the drive's performance", the kind that can't cope with several hard drives furiously seeking hundreds or thousands of files. And of course it doesn't matter if you have 2 users or 2000, it's all the same to a hard drive, it simply sucks it up and takes it like a man. That's why they're called HARD...

    Most content delivery servers use a hefty solid state cache in front of the hard drives and hope that the content is in the cache. The only reasons spinning drives are still in the picture are capacity and cost per GB. Except ddriver's 5.25" drive that beats anything in every metric imaginable.

    Oh and BTW, before the internet became mainstream there was slightly less data to move around. While drive performance has increased 10 fold since then, the data being moved has increased 100 times or more.
    But heck, we can stick to your scenario that 2 users access 2 pictures on a content server with a 10/100 half duplex.

    Now quick, whip out those good ol' lines: "you're a troll wannabe", "you have no clue". That will teach everybody that you're not a wannabe and not to piss all over you. ;)
  • vFunct - Wednesday, November 23, 2016 - link

    > I'd think the best answer to that would be a custom motherboard with the appropriate slots on it to achieve high storage densities in a slim (maybe something like a 1/2 1U rackmount) chassis.

    I agree that the best option would be for motherboard makers to create server motherboards with a ton of vertical M.2 slots, like DIMM slots, and space for airflow. We also need to be able to hot-swap these out by sliding out the chassis, uncovering the case, and swapping out a defective one as needed.

    A problem with U.2 connectors is that they have thick cabling all over the place. Having a ton of M.2 slots on the motherboard avoids all that.
    Reply
  • saratoga4 - Tuesday, November 22, 2016 - link

    If only they made it with a SATA interface!
  • DanNeely - Tuesday, November 22, 2016 - link

    As a SATA device it'd be meh. Peak performance would be bottlenecked at the same point as every other SATA SSD, and it loses out to the 850 EVO, nevermind the 850 Pro, in consistency.
  • Samus - Tuesday, November 22, 2016 - link

    There are lots of good reliable SATA M.2 drives on the market. The thing that makes the 600p special is that it is priced at near parity with them, when most PCIe SSDs carry a 20-30% premium.

    Really good M.2 2280 options are the MX300 or 850 EVO. SanDisk has some great M.2 2260 drives.
  • ddriver - Tuesday, November 22, 2016 - link

    Even in the case of such "server" you are better off with sata ssds, get a decent hba or raid card or two, connect 8-16 sata ssds and you have it. Price is better, performance in raid would be very good, and when a drive needs replacing, you can do it in 30 seconds without even powering off the machine.

    The only actual sense this product makes is in budget ultra portable laptops or x86 tablets, because it takes up less space, performance wise there will not be any difference in user experience between that and a sata drive, but it will enable a thinner chassis.

    There is no "density advantage" for nvme, there is only FORM FACTOR advantage, and that is only in scenarios where that's the system's primary and sole storage device. What enables density is the nand density, and the same dense chips can be used just as well in a sata or sas drive. Furthermore I don't recall seeing a mobo that has more than 2 m2 slots. A pci card with 4 m2 slots will not be exactly compact either. I've seen such, they are as big as an upper mid-range video card. It takes about as much space as 4 standard 2.5" drives, however unlike 4x 2.5" drives you can't put it into an htpc form factor.
  • ddriver - Tuesday, November 22, 2016 - link

    Also, the 1tb 600p is nowhere to be found, and even so, m2 peaks at 2tb with the 960 pro, which is wildly expensive. Whereas with 2.5" there is already a 4tb option and 8tb is entirely possible, the only thing that's missing is demand. Samsung demoed a 16tb 2.5" ssd over a year ago. I'd say that the "density advantage" is very much on the side of 2.5" ssds.
  • BrokenCrayons - Tuesday, November 22, 2016 - link

    Probably not.
  • XabanakFanatik - Tuesday, November 22, 2016 - link

    If Samsung stopped refusing to make two-sided M.2 drives and actually put the space to use there could easily be a 4TB 960 Pro.... and it would cost $2800.
  • JamesAnthony - Tuesday, November 22, 2016 - link

    Those cards are widely available (I have some): a 16x PCIe 3.0 interface and then 4 M.2 slots, with each slot getting 4x PCIe 3.0 bandwidth, plus a cooling fan for them.

    However WHY would you want to do that when you could just go get an Intel P3520 2TB drive, or for higher speed a P3700 2TB drive? Both are standard PCIe format cards for either low profile or standard profile slots.

    The only advantage an M.2 drive has is being small, but if you are going to put it in a standard PCIe slot, then why not just go with a purpose built PCIe NVMe SSD drive & not have to worry about thermal throttling on the M.2 cards?
