Final Words

The Intel SSD 600p is intended to be the most mainstream PCIe SSD yet, one that arrives without the hefty price premium previous PCIe SSDs have carried relative to SATA SSDs. Its performance needs to be evaluated in the context of its price and intended market, both of which are quite different from those of products like the Samsung 960 Pro and 960 EVO. The more appropriate standard to compare against is the Samsung 850 EVO.

Even with our expectations thus lowered, the Intel SSD 600p fails to measure up. But this isn't a simple case of a budget drive that turns out to be far slower than its specifications would imply. The SSD 600p does offer peak performance that is as high as promised. The trouble is that it only provides that performance in a narrow range of circumstances, and most of our usual benchmarks go far beyond that and show the 600p at its worst.

The biggest problem with the Intel SSD 600p seems to be its implementation of SLC caching. The cache is generously sized, and its fixed size keeps the drive from performing vastly worse when full, the way the Crucial MX300 and OCZ VX500 do. But the 600p sends all writes through the SLC cache even when the cache is full, which creates extra work for the SSD controller, and the SM2260 can't keep up. Once the SLC cache has been filled, further sustained writes put the drive through frequent garbage collection cycles to flush part of the SLC cache. While that extra background work is in progress, the 600p slows to a crawl and peak write latencies can spike to over half a second.
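
To make the failure mode concrete, here is a minimal sketch of how a fixed-size, write-through SLC cache behaves under sustained writes. The model and all of the numbers are illustrative assumptions, not Intel's actual firmware parameters; the point is simply that once the cache is full, every new write forces old data to be folded into TLC at the same time, and sustained throughput collapses to roughly the folding rate.

```python
# Rough, hypothetical model of a fixed-size write-through SLC cache.
# All figures are illustrative only, not measured 600p firmware behavior.

SLC_CACHE_GB = 17.5       # assumed fixed cache size
SLC_WRITE_MBPS = 560      # assumed burst write speed into the SLC cache
FOLD_MBPS = 110           # assumed rate of folding SLC data out to TLC

def time_to_write(total_gb):
    """Seconds to absorb a sustained write when everything is staged in SLC first."""
    burst_gb = min(total_gb, SLC_CACHE_GB)
    overflow_gb = total_gb - burst_gb
    # The burst portion lands in an empty SLC cache at full speed.
    t = burst_gb * 1024 / SLC_WRITE_MBPS
    if overflow_gb > 0:
        # Once the cache is full, new writes still go through SLC, so the
        # controller must simultaneously fold old data out to TLC and
        # sustained throughput drops to roughly the folding rate.
        t += overflow_gb * 1024 / FOLD_MBPS
    return t

for gb in (4, 16, 64):
    print(f"{gb:3d} GB sustained write: ~{time_to_write(gb):.0f} s")
```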

In the early days of the SSD market, many drives and controllers were condemned for seizing up under load. The SSD 600p reminds us of those problems, but it isn't so severely crippled. The SSD 600p is completely unsuitable for a database server, but at its worst it is only as bad as a budget SATA SSD, not a dying hard drive. Ordinary interactive desktop usage gives the SSD 600p plenty of idle time to clean up, and in that setting it will perform better than any SATA SSD. Even when the 600p is subjected to an unrealistically intense sustained write load, its stutters are brief, and in between it catches up quickly with bursts of very high performance. In spite of its problems, the SSD 600p managed a steady-state random write speed higher than almost all consumer SATA SSDs.

The Intel SSD 600p would be a bad choice for a user who regularly shuffles around tens of gigabytes of data. On paper, however, it offers great performance for light workloads. The problem is that for workloads light enough to never expose the 600p's flaws, even a slower and cheaper SATA SSD is plenty fast, and the 600p's advantages would be difficult to feel (barring installation in a smaller form factor). The niche that the SSD 600p is most suited for is also one that doesn't need a faster SSD. The SSD 600p doesn't set any records for price per gigabyte except among NVMe SSDs, and its power efficiency is a problem for mobile users. Taken together, these factors mean that users for whom the SSD 600p would work well will almost always be better served by a cheaper and possibly larger SATA SSD, if they have the space for one.

                        128GB              250-256GB          400-512GB          1TB                2TB
Samsung 960 EVO (MSRP)  -                  $129.88 (52¢/GB)   $249.99 (50¢/GB)   $479.99 (48¢/GB)   -
Samsung 960 Pro (MSRP)  -                  -                  $329.99 (64¢/GB)   $629.99 (62¢/GB)   $1299.99 (63¢/GB)
Plextor M8Pe            $74.99 (59¢/GB)    $114.99 (45¢/GB)   $189.99 (37¢/GB)   $414.99 (41¢/GB)   -
Intel SSD 600p          $63.99 (50¢/GB)    $79.99 (31¢/GB)    $164.53 (32¢/GB)   $302.99 (30¢/GB)   -
Samsung 850 EVO         -                  $94.99 (38¢/GB)    $164.99 (33¢/GB)   $314.90 (32¢/GB)   $624.99 (31¢/GB)
Crucial MX300           -                  $69.99 (26¢/GB)    $123.09 (23¢/GB)   $244.99 (23¢/GB)   $480.00 (23¢/GB)

The Crucial MX300 is also offered in a 750GB capacity at $169.99 (23¢/GB).

It is possible that the Intel SSD 600p's flaws could be mitigated by different firmware. The SM2260 controller is obviously capable of handling high data rates when it isn't busy unnecessarily shuffling data in and out of the SLC cache. We don't know for sure why Micron chose to cancel the Ballistix TX3 SSD that was due to use SM2260 with 3D MLC, but even if that combination wasn't going to be able to compete in the highest market segment, the controller is certainly capable of going far beyond the performance limits of SATA.

The Intel/Micron 3D TLC NAND is clearly not as fast as Samsung's 3D TLC V-NAND, but the Crucial MX300 has already shown us that the SSD 600p's limitations are not all directly the result of the NAND being too slow. It is unlikely that Intel will overhaul the firmware of the 600p, but it is quite possible that future products will do a better job with this hardware. The first product we tested with Silicon Motion's SM2256 controller was the infamous Crucial BX200, but it was followed up by successors like the ADATA SP550 that proved the SM2256 could make for a good value SSD.

The results of testing the SSD 600p in the motherboard's more limited PCIe 2.0 x2 M.2 slot bring up some interesting questions about the future of low-end NVMe products. For the most part, the effects of the bandwidth limitation on the SSD 600p were barely noticeable. PCIe 3.0 x4 is far faster than necessary to simply be faster than SATA, and supporting an interface that fast has costs in both controller die size and power consumption. The SSD 600p might have been better served by a controller that sacrificed excess host interface bandwidth to allow for a more powerful processor within the same TDP, or to just lower the price a bit further. Even though OEMs are striving to ensure that their M.2 slots can support the fastest SSD, not every drive needs to use all of that bandwidth.
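
As a rough sanity check on those interface numbers, the sketch below compares the approximate one-direction bandwidth of the links involved, accounting only for line rate and encoding overhead (real-world protocol overhead lowers all of these figures further):

```python
# Back-of-the-envelope, one-direction link bandwidth: line rate minus
# encoding overhead only; protocol overhead reduces these further.

def link_gb_per_s(gt_per_s, encoding_eff, lanes=1):
    """Approximate usable bandwidth in GB/s for a serial link."""
    return gt_per_s * encoding_eff / 8 * lanes

links = {
    "PCIe 3.0 x4 (8 GT/s, 128b/130b)": link_gb_per_s(8.0, 128 / 130, 4),
    "PCIe 2.0 x2 (5 GT/s, 8b/10b)":    link_gb_per_s(5.0, 8 / 10, 2),
    "SATA 6Gbps (8b/10b)":             link_gb_per_s(6.0, 8 / 10),
}

for name, bw in links.items():
    print(f"{name:34s} ~{bw:.2f} GB/s")

# Roughly 3.9, 1.0 and 0.6 GB/s respectively: even the half-speed
# PCIe 2.0 x2 slot leaves a wide margin over SATA.
```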

Comments

  • close - Thursday, November 24, 2016 - link

    ddriver, you're the guy who insisted he designed a 5.25" hard drive that's better than anything on the market, despite being laughed at and proven wrong beyond any shadow of a doubt, yet you still insist on beginning and ending almost all of your comments with "you don't have a clue" or "you probably don't know". Projecting much?

    You're not an engineer and you're obviously not even remotely good at tech. You have no idea (and it actually does matter) how this works. You just make up scenarios in your head with how you *think* it works and then you throw a tantrum when you're contradicted by people who don't have to imagine this stuff; they know it.

    In your scenario you have 2 clients using 2 galleries at the same time (reasonable enough, 2 users/server, just like any respectable content server). Your server reads image 1, sends it, then reads image 2 and sends it, because when working with a gallery this is exactly how it works (it definitely won't be 200 users requesting thousands of thumbnails for each gallery and then having to send that to each client). Then the network bandwidth will be an issue because your content server is limited to 100Mbps, maybe 1Gbps, since you only designed it for 2 concurrent users. A server delivering media content - so a server whose ONLY job is to DELIVER MEDIA CONTENT - will have that kind of bandwidth that's "vastly exceeded by the drive's performance", the kind that can't cope with several hard drives furiously seeking hundreds or thousands of files. And of course it doesn't matter if you have 2 users or 2000, it's all the same to a hard drive, it simply sucks it up and takes it like a man. That's why they're called HARD...

    Most content delivery servers use a hefty solid state cache in front of the hard drives and hope that the content is in the cache. The only reasons spinning drives are still in the picture are capacity and cost per GB. Except ddriver's 5.25" drive that beats anything in every metric imaginable.

    Oh and BTW, before the internet became mainstream there was slightly less data to move around. While drive performance has increased 10-fold since then, the data being moved has increased 100 times or more.
    But heck, we can stick to your scenario where 2 users access 2 pictures on a content server with a 10/100 half duplex link.

    Now quick, whip out those good ol' lines: "you're a troll wannabe", "you have no clue". That will teach everybody that you're not a wannabe and not to piss all over you. ;)
  • vFunct - Wednesday, November 23, 2016 - link

    > I'd think the best answer to that would be a custom motherboard with the appropriate slots on it to achieve high storage densities in a slim (maybe something like a 1/2 1U rackmount) chassis.

    I agree that the best option would be for motherboard makers to create server motherboards with a ton of vertical M.2 slots, like DIMM slots, and space for airflow. We also need to be able to hot-swap these out by sliding out the chassis, uncovering the case, and swapping out a defective one as needed.

    A problem with U.2 connectors is that they have thick cabling all over the place. Having a ton of M.2 slots on the motherboard avoids all that.
  • saratoga4 - Tuesday, November 22, 2016 - link

    If only they made it with a SATA interface!
  • DanNeely - Tuesday, November 22, 2016 - link

    As a SATA device it'd be meh. Peak performance would be bottlenecked at the same point as every other SATA SSD, and it loses out to the 850 EVO, never mind the 850 Pro, in consistency.
  • Samus - Tuesday, November 22, 2016 - link

    There are lots of good, reliable SATA M.2 drives on the market. The thing that makes the 600p special is that it is priced at near parity with them, while most PCIe SSDs carry a 20-30% premium.

    Really good M.2 2280 options are the MX300 and 850 EVO. SanDisk has some great M.2 2260 drives.
  • ddriver - Tuesday, November 22, 2016 - link

    Even in the case of such a "server" you are better off with SATA SSDs: get a decent HBA or RAID card or two, connect 8-16 SATA SSDs, and you have it. The price is better, performance in RAID would be very good, and when a drive needs replacing, you can do it in 30 seconds without even powering off the machine.

    The only place this product actually makes sense is in budget ultraportable laptops or x86 tablets, because it takes up less space. Performance-wise there will not be any difference in user experience between it and a SATA drive, but it will enable a thinner chassis.

    There is no "density advantage" for nvme, there is only FORM FACTOR advantage, and that is only in scenarios where that's the systems primary and sole storage device. What enables density is the nand density, and the same dense chips can be used just as well in a sata or sas drive. Furthermore I don't recall seeing a mobo that has more than 2 m2 slots. A pci card with 4 m2 slots itself will not be exactly compact either. I've seen such, they are as big as upper mid-range video card. It takes about as much space as 4 standard 2.5' drives, however unlike 4x2'5" you can't put it into htpc form factor.
  • ddriver - Tuesday, November 22, 2016 - link

    Also, the 1TB 600p is nowhere to be found, and even so, M.2 peaks at 2TB with the 960 Pro, which is wildly expensive. Whereas with 2.5" there is already a 4TB option and 8TB is entirely possible; the only thing that's missing is demand. Samsung demoed a 16TB 2.5" SSD over a year ago. I'd say that the "density advantage" is very much on the side of 2.5" SSDs.
  • BrokenCrayons - Tuesday, November 22, 2016 - link

    Probably not.
  • XabanakFanatik - Tuesday, November 22, 2016 - link

    If Samsung stopped refusing to make two-sided M.2 drives and actually put the space to use there could easily be a 4TB 960 Pro.... and it would cost $2800.
  • JamesAnthony - Tuesday, November 22, 2016 - link

    Those cards are widely available (I have some): a 16x PCIe 3.0 interface and 4 M.2 slots, with each slot getting 4x PCIe 3.0 bandwidth, plus a cooling fan for them.

    However, WHY would you want to do that when you could just go get an Intel P3520 2TB drive, or for higher speed a P3700 2TB drive? Those are standard PCIe add-in cards for either low profile or standard profile slots.

    The only advantage an M.2 drive has is being small, but if you are going to put it in a standard PCIe slot, then why not just go with a purpose built PCIe NVMe SSD drive & not have to worry about thermal throttling on the M.2 cards?
