Final Words

The Intel SSD 600p is intended to be the most mainstream PCIe SSD yet, without the hefty price premium that previous PCIe SSDs have carried relative to SATA SSDs. Its performance needs to be evaluated in the context of its price and intended market, both of which are quite different from those of products like the Samsung 960 Pro and 960 EVO. The more appropriate standard to compare against is the Samsung 850 EVO.

Even with our expectations thus lowered, the Intel SSD 600p fails to measure up. But this isn't a simple case of a budget drive that turns out to be far slower than its specifications would imply. The SSD 600p does offer peak performance that is as high as promised. The trouble is that it only provides that performance in a narrow range of circumstances, and most of our usual benchmarks go far beyond that and show the 600p at its worst.

The biggest problem with the Intel SSD 600p seems to be its implementation of SLC caching. The cache is generously sized, and its fixed size prevents the drive from performing vastly worse when full, the way the Crucial MX300 and OCZ VX500 do. But the 600p sends all writes through the SLC cache even when the cache is full, which creates extra work for the SSD controller, and the SM2260 can't keep up. Once the SLC cache has been filled, further sustained writes force the drive through frequent garbage collection cycles to flush part of the SLC cache. While that extra background work is in progress, the 600p slows to a crawl and peak write latencies can spike to over half a second.
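
As a rough illustration of why write-through caching hurts once the cache is exhausted, the sketch below models a drive that accepts every host write into a fixed SLC cache and then has to fold that data into TLC before it can accept more. The capacities and speeds are made-up placeholders, not Intel's actual firmware parameters, so only the shape of the result matters: throughput is excellent until the cache fills, then collapses because every additional byte is effectively written twice.

```python
# Toy model of a write-through SLC cache (illustrative only; not Intel's
# firmware logic, and all numbers below are hypothetical placeholders).
SLC_CACHE_MB = 4096      # fixed SLC cache size
SLC_WRITE_MBPS = 560     # burst speed while the cache has room
FOLD_MBPS = 110          # speed of folding SLC contents into TLC

def time_to_write(total_mb):
    """Seconds to absorb a sustained host write of total_mb megabytes."""
    burst = min(total_mb, SLC_CACHE_MB)
    t = burst / SLC_WRITE_MBPS              # fast until the cache fills
    remaining = total_mb - burst
    if remaining > 0:
        # Past the cache, each new chunk must wait for an equal amount of
        # cached data to be folded into TLC, so throughput collapses.
        t += remaining / SLC_WRITE_MBPS + remaining / FOLD_MBPS
    return t

for size in (2_000, 8_000, 32_000):
    avg = size / time_to_write(size)
    print(f"{size / 1000:>4.0f} GB sustained write -> ~{avg:.0f} MB/s average")
```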

In the early days of the SSD market, many drives and controllers were condemned for seizing up under load. The SSD 600p reminds us of those problems, but it isn't so severely crippled. The SSD 600p is completely unsuitable for a database server, but at its worst it is only as bad as a budget SATA SSD, not a dying hard drive. Ordinary interactive desktop usage gives the SSD 600p plenty of idle time to clean up, and there the 600p will perform better than any SATA SSD. Even when the 600p is subjected to an unrealistically intense sustained write load, its stutters are very brief, and in between them it catches up quickly with bursts of very high performance. In spite of its problems, the SSD 600p managed a steady-state random write speed higher than almost all consumer SATA SSDs.

The Intel SSD 600p would be a bad choice for a user who regularly shuffles around tens of gigabytes of data. On paper, however, it offers great performance for light workloads. The problem is that for workloads light enough to never expose the 600p's flaws, even a slower and cheaper SATA SSD is plenty fast, and the 600p's advantages would be difficult to feel (bar installation in a smaller form factor). The niche that the SSD 600p is best suited for is also one that doesn't need a faster SSD. The SSD 600p doesn't set any records for price per gigabyte except among NVMe SSDs, and its power efficiency is a problem for mobile users. Taken together, these factors mean that users for whom the SSD 600p would work well will almost always be better served by a cheaper and possibly larger SATA SSD, if they have the space for one.

| | 128GB | 250-256GB | 400-512GB | 1TB | 2TB |
|---|---|---|---|---|---|
| Samsung 960 EVO (MSRP) | | $129.88 (52¢/GB) | $249.99 (50¢/GB) | $479.99 (48¢/GB) | |
| Samsung 960 Pro (MSRP) | | | $329.99 (64¢/GB) | $629.99 (62¢/GB) | $1299.99 (63¢/GB) |
| Plextor M8Pe | $74.99 (59¢/GB) | $114.99 (45¢/GB) | $189.99 (37¢/GB) | $414.99 (41¢/GB) | |
| Intel SSD 600p | $63.99 (50¢/GB) | $79.99 (31¢/GB) | $164.53 (32¢/GB) | $302.99 (30¢/GB) | |
| Samsung 850 EVO | | $94.99 (38¢/GB) | $164.99 (33¢/GB) | $314.90 (32¢/GB) | $624.99 (31¢/GB) |
| Crucial MX300 | | $69.99 (26¢/GB, 275GB) | $123.09 (23¢/GB, 525GB) | $244.99 (23¢/GB) | $480.00 (23¢/GB) |

Crucial MX300 750GB: $169.99 (23¢/GB)

It is possible that the Intel SSD 600p's flaws could be mitigated by different firmware. The SM2260 controller is obviously capable of handling high data rates when it isn't busy unnecessarily shuffling data in and out of the SLC cache. We don't know for sure why Micron chose to cancel the Ballistix TX3 SSD that was due to use SM2260 with 3D MLC, but even if that combination wasn't going to be able to compete in the highest market segment, the controller is certainly capable of going far beyond the performance limits of SATA.

The Intel/Micron 3D TLC NAND is clearly not as fast as Samsung's 3D TLC V-NAND, but the Crucial MX300 has already shown us that the SSD 600p's limitations are not all directly the result of the NAND being too slow. It is unlikely that Intel will overhaul the firmware of the 600p, but it is quite possible that future products will do a better job with this hardware. The first product we tested with Silicon Motion's SM2256 controller was the infamous Crucial BX200, but it was followed up by successors like the ADATA SP550 that proved the SM2256 could make for a good value SSD.

The results of testing the SSD 600p in the motherboard's more limited PCIe 2.0 x2 M.2 slot bring up some interesting questions about the future of low-end NVMe products. For the most part, the effects of the bandwidth limitation on the SSD 600p were barely noticeable. PCIe 3.0 x4 is far faster than necessary to simply be faster than SATA, and supporting an interface that fast has costs in both controller die size and power consumption. The SSD 600p might have been better served by a controller that sacrificed excess host interface bandwidth to allow for a more powerful processor within the same TDP, or to just lower the price a bit further. Even though OEMs are striving to ensure that their M.2 slots can support the fastest SSD, not every drive needs to use all of that bandwidth.
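
For a sense of scale, here is a quick back-of-the-envelope comparison of the theoretical link bandwidths in question, using the standard per-lane signaling rates and encoding overheads; real drives deliver somewhat less than these figures.

```python
# Theoretical payload bandwidth per link after encoding overhead, in MB/s.
links = {
    "SATA 6Gbps (8b/10b)":     6.0e9 * 8 / 10 / 8 / 1e6,        # ~600 MB/s
    "PCIe 2.0 x2 (8b/10b)":    2 * 5.0e9 * 8 / 10 / 8 / 1e6,    # ~1,000 MB/s
    "PCIe 3.0 x4 (128b/130b)": 4 * 8.0e9 * 128 / 130 / 8 / 1e6, # ~3,940 MB/s
}

for name, mb_s in links.items():
    print(f"{name:<26} ~{mb_s:,.0f} MB/s")
```

Even the PCIe 2.0 x2 figure leaves a comfortable margin over SATA, which is the point: the extra lanes of a PCIe 3.0 x4 interface mostly buy headroom that a controller like the SM2260 cannot use in this product anyway.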

63 Comments

  • vFunct - Tuesday, November 22, 2016 - link

    These would be great for server applications, if I could find PCIe add-in cards that have 4x M.2 slots.

    I'd love to be able to stick 10 or 100 or so of these in a server, as an image/media store.
  • ddriver - Tuesday, November 22, 2016 - link

    You should call intel to let them know they are marketing it in the wrong segment LOL
  • ddriver - Tuesday, November 22, 2016 - link

    To clarify, this product is evidently the runt of the nvme litter. For regular users, it is barely faster than sata devices. And once it runs out of cache, it actually gets slower than a sata device. Based on its performance and price, I won't be surprised if its reliability is just as subpar. Putting such a device in a server is like putting a drunken hobo in a Lamborghini.
  • BrokenCrayons - Tuesday, November 22, 2016 - link

    Assuming a media storage server scenario, you'd be looking at write-once, read-many access where the cache issues aren't going to pose a significant problem to performance. Using an array of them would also mitigate much of that write-performance hit through some form of RAID. Of course that applies to SATA devices as well, but there's a density advantage realized in NVMe.
  • vFunct - Tuesday, November 22, 2016 - link

    bingo.

    Now, how can I pack a bunch of these in a chassis?
  • BrokenCrayons - Tuesday, November 22, 2016 - link

    I'd think the best answer to that would be a custom motherboard with the appropriate slots on it to achieve high storage densities in a slim (maybe something like a 1/2 1U rackmount) chassis. As for PCIe slot expansion cards, there are a few out there that would let you install 4x M.2 SSDs on a PCIe slot, but they'd add to the cost of building such a storage array. In the end, I think we're probably a year or three away from using NVMe SSDs in large storage arrays outside of highly customized and expensive solutions for companies that have the clout to leverage something that exotic.
  • ddriver - Tuesday, November 22, 2016 - link

    So are you going to make that custom motherboard for him, or will he be making it for himself? While you are at it, you may also want to make a CPU with 400 PCIe lanes so that you can connect those 100 lousy budget 600ps.

    Because I bet the industry isn't itching to make products for clueless and moneyless dummies. There is already a product that's unbeatable for media storage - an 8TB Ultrastar He8. An SSD for media storage makes no sense, and 100 of those makes 100 times less sense :D
  • BrokenCrayons - Tuesday, November 22, 2016 - link

    "So are you going to make that..."

    Sure, okay.
  • Samus - Tuesday, November 22, 2016 - link

    ddriver, you are ignoring his specific application when judging his solution to be wrong. For imaging, sequential throughput is all that matters. I used to work part time in PC refurbishing for education, and we built a bench to image 64 PCs at a time over 1GbE, with a dual 10GbE fiber backbone to a server using what was at the time the best option on the market, an OCZ RevoDrive PCIe SSD. Even this drive was crippled by a single 10GbE connection, let alone dual 10GbE connections, which is why we eventually installed TWO of them in RAID 1.

    This hackjob configuration allowed imaging 60+ PCs simultaneously over GbE in about 7 minutes when booting via PXE, running a diskpart script and imagex to uncompress a sysprep'd image.

    The RevoDrives were not reliable. One would fail like clockwork almost annually, and eventually in 2015, after I had left, I heard they fell back to a pair of Plextor M.2 2280s in a PCIe x4 adapter for better reliability. It was, and still is, however, very expensive to do this compared to what the 600p is offering.

    Any high-throughput sequential reading application would greatly benefit from the performance and price the 600p is offering, not to mention Intel has class-leading reliability in the SSD sector, with a 0.3%/year failure rate according to their own internal 2014 data... there is no reason to think that Intel, of all companies, won't keep reliability a high priority. After all, they are still the only company to master the SandForce 2200, a controller that had incredibly high failure rates with every other vendor and effectively led to OCZ's bankruptcy.
  • ddriver - Tuesday, November 22, 2016 - link

    So how does all this connect to, and I quote, "stick 10 or 100 or so of these in a server, as an image/media store"?

    Also, he doesn't really have "his specific application", he just spat a bunch of nonsense he believed would be cool :D

    Lastly, next time try multicasting; that way you can simultaneously send data to 64 hosts at 1Gbps without the need for dual 10Gbit links or an uber-expensive switch, achieving full parallelism and an effective 64Gbps. In that case a regular SATA SSD or even an HDD would have sufficed, as even mechanical drives have no problem saturating the 1Gbps lines to the targets. You could have done the same work, or even better, at like 1/10 of the cost. You could even do 1000 systems at a time, or as many as you want, just daisy-chain more switches; terabit, petabit effective cumulative bandwidth is just as easily achievable.
