Final Words

The Intel SSD 600p is intended to be the most mainstream PCIe SSD yet, without the hefty price premium that previous PCIe SSDs have carried relative to SATA SSDs. Its performance needs to be evaluated in the context of its price and intended market, both of which are quite different from those of products like the Samsung 960 Pro and 960 EVO. The more appropriate standard of comparison is the Samsung 850 EVO.

Even with our expectations thus lowered, the Intel SSD 600p fails to measure up. But this isn't a simple case of a budget drive that turns out to be far slower than its specifications would imply. The SSD 600p does offer peak performance that is as high as promised. The trouble is that it only provides that performance in a narrow range of circumstances, and most of our usual benchmarks go far beyond that and show the 600p at its worst.

The biggest problem with the Intel SSD 600p seems to be its implementation of SLC caching. The cache is plenty large, and its fixed size prevents the drive from performing vastly worse when full, the way the Crucial MX300 and OCZ VX500 do. But the 600p sends all writes through the SLC cache even when the cache is full, which creates extra work for the SSD controller, and the SM2260 can't keep up. Once the SLC cache has been filled, further sustained writes put the drive through frequent garbage collection cycles to flush part of the SLC cache. While that background work is in progress, the 600p slows to a crawl and peak write latencies can spike to over half a second.
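This failure mode is straightforward to demonstrate. The sketch below is a minimal sustained-write latency probe, offered as an illustration rather than a description of our actual test suite; the mount point, transfer size, and chunk size are all assumptions. On a drive with the 600p's caching behavior, the worst-case chunk times should balloon once the SLC cache fills:

```python
import os
import time

# Minimal sustained-write latency probe (illustrative parameters, not
# our actual test configuration). Fill the target with synchronous
# writes and time each chunk; once a fixed-size SLC cache is full,
# garbage collection stalls show up as multi-hundred-millisecond outliers.
PATH = "/mnt/600p/fill.bin"          # assumed mount point of the drive under test
CHUNK = 4 * 1024 * 1024              # 4MB per write
TOTAL = 16 * 1024 * 1024 * 1024      # 16GB, enough to blow through the SLC cache

buf = os.urandom(CHUNK)              # incompressible data
latencies = []

# O_SYNC forces each write to reach the drive before the call returns,
# so the recorded times reflect device behavior rather than the OS cache.
fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
try:
    for _ in range(TOTAL // CHUNK):
        start = time.perf_counter()
        os.write(fd, buf)
        latencies.append(time.perf_counter() - start)
finally:
    os.close(fd)

latencies.sort()
print(f"median chunk latency: {latencies[len(latencies) // 2] * 1000:.1f} ms")
print(f"worst chunk latency:  {latencies[-1] * 1000:.1f} ms")
```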

In the early days of the SSD market, many drives and controllers were condemned for seizing up under load. The SSD 600p is reminiscent of those problems, but it isn't so severely crippled. The SSD 600p is completely unsuitable for a database server, but at its worst it is only as bad as a budget SATA SSD, not a dying hard drive. Ordinary interactive desktop usage gives the SSD 600p plenty of idle time to clean up, and under those conditions it will perform better than any SATA SSD. Even when the 600p is subjected to an unrealistically intense sustained write load, its stutters are brief, and between them it catches up quickly with bursts of very high performance. In spite of its problems, the SSD 600p managed a steady-state random write speed higher than that of almost all consumer SATA SSDs.

The Intel SSD 600p would be a bad choice for a user who regularly shuffles around tens of gigabytes of data. On paper, however, it offers great performance for light workloads. The problem is that for workloads light enough to never expose the 600p's flaws, even a slower and cheaper SATA SSD is plenty fast, and the 600p's advantages would be difficult to notice (aside from the compact M.2 form factor). The niche that the SSD 600p is best suited for is also one that doesn't need a faster SSD. The SSD 600p doesn't set any records for price per gigabyte except among NVMe SSDs, and its power efficiency is a problem for mobile users. Taken together, these factors mean that users for whom the SSD 600p would work well will almost always be better served by a cheaper and possibly larger SATA SSD, if they have room for one.

|  | 128GB | 250-256GB | 400-512GB | 1TB | 2TB |
|---|---|---|---|---|---|
| Samsung 960 EVO (MSRP) |  | $129.88 (52¢/GB) | $249.99 (50¢/GB) | $479.99 (48¢/GB) |  |
| Samsung 960 Pro (MSRP) |  |  | $329.99 (64¢/GB) | $629.99 (62¢/GB) | $1299.99 (63¢/GB) |
| Plextor M8Pe | $74.99 (59¢/GB) | $114.99 (45¢/GB) | $189.99 (37¢/GB) | $414.99 (41¢/GB) |  |
| Intel SSD 600p | $63.99 (50¢/GB) | $79.99 (31¢/GB) | $164.53 (32¢/GB) | $302.99 (30¢/GB) |  |
| Samsung 850 EVO |  | $94.99 (38¢/GB) | $164.99 (33¢/GB) | $314.90 (32¢/GB) | $624.99 (31¢/GB) |
| Crucial MX300 |  | $69.99 (26¢/GB) | $123.09 (23¢/GB) | $244.99 (23¢/GB) | $480.00 (23¢/GB) |

The Crucial MX300 is also available in a 750GB capacity at $169.99 (23¢/GB).

It is possible that the Intel SSD 600p's flaws could be mitigated by different firmware. The SM2260 controller is clearly capable of handling high data rates when it isn't busy unnecessarily shuffling data in and out of the SLC cache. We don't know for sure why Micron chose to cancel the Ballistix TX3 SSD that was slated to pair the SM2260 with 3D MLC, but even if that combination wasn't going to be able to compete in the highest market segment, the controller is certainly capable of going far beyond the performance limits of SATA.

The Intel/Micron 3D TLC NAND is clearly not as fast as Samsung's 3D TLC V-NAND, but the Crucial MX300 has already shown us that the SSD 600p's limitations are not all directly the result of the NAND being too slow. It is unlikely that Intel will overhaul the firmware of the 600p, but it is quite possible that future products will do a better job with this hardware. The first product we tested with Silicon Motion's SM2256 controller was the infamous Crucial BX200, but it was followed up by successors like the ADATA SP550 that proved the SM2256 could make for a good value SSD.

The results of testing the SSD 600p in the motherboard's more limited PCIe 2.0 x2 M.2 slot raise some interesting questions about the future of low-end NVMe products. For the most part, the effects of the bandwidth limitation on the SSD 600p were barely noticeable. PCIe 3.0 x4 is far faster than necessary to simply beat SATA, and supporting an interface that fast has costs in both controller die size and power consumption. The SSD 600p might have been better served by a controller that sacrificed excess host interface bandwidth to allow for a more powerful processor within the same TDP, or simply to lower the price a bit further. Even though OEMs are striving to ensure that their M.2 slots can support the fastest SSDs, not every drive needs all of that bandwidth.
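The underlying arithmetic is simple enough to sketch (nominal per-lane rates and encoding overheads only; real NVMe throughput lands somewhat lower):

```python
# Nominal one-direction PCIe link bandwidth, accounting for encoding
# overhead: PCIe 2.0 runs at 5 GT/s with 8b/10b encoding (500 MB/s per
# lane), PCIe 3.0 at 8 GT/s with 128b/130b encoding (~985 MB/s per lane).
PER_LANE_MB_S = {
    "2.0": 5000 * 8 / 10 / 8,      # = 500 MB/s per lane
    "3.0": 8000 * 128 / 130 / 8,   # ~= 984.6 MB/s per lane
}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Nominal bandwidth of a PCIe link in MB/s."""
    return PER_LANE_MB_S[gen] * lanes

print(f"PCIe 2.0 x2: {link_bandwidth('2.0', 2):7.0f} MB/s")  # ~1000 MB/s
print(f"PCIe 3.0 x4: {link_bandwidth('3.0', 4):7.0f} MB/s")  # ~3939 MB/s
# SATA 6Gb/s tops out around 550-560 MB/s after overhead, so even the
# constrained PCIe 2.0 x2 link leaves nearly double SATA's bandwidth.
```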

Comments

  • Samus - Wednesday, November 23, 2016

    Multicast helped, but when you are saturating the backbone of the switch with 60Gbps of traffic it only slightly improves transfers. With light traffic we were getting 170-190MB/sec transfer rates, but with a full image battery it was 120MB/sec. Granted, with unicast it never cracked 110MB/sec under any conditions.
  • ddriver - Wednesday, November 23, 2016

    Multicast would be UDP, so it would have less overhead, which is why you are seeing better bandwidth utilization. The point is that with multicast you could push the same bandwidth to all clients simultaneously, whereas without multicast you'd be limited by the medium and switching capacity on top of the TCP/IP overhead.

    Assuming dual 10gbit gives you the full theoretical 2500 MB/s, if you push 100 MB/s to each client, you will be able to serve no more than 25 clients. With multicast you'd be able to push those 170-190 MB/s to any number of clients, tens, hundreds, thousands or even millions, and by daisy chaining simple gigabit routers you make sure you don't run out of switching capacity. Of course, this assumes you want to send the same identical data to all of them.
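That arithmetic is easy to sanity-check; here is a back-of-the-envelope sketch using the thread's own figures (dual 10GbE uplink, roughly 100 MB/s per client, protocol overhead ignored):

```python
# Back-of-the-envelope check of the numbers in the thread: a dual 10GbE
# uplink against a fixed per-client rate. Ignores protocol overhead.
uplink_mb_s = 2 * 10_000 / 8     # dual 10 gigabit -> 2500 MB/s theoretical
per_client_mb_s = 100

# Unicast: every receiver consumes its own slice of the sender's uplink.
print(f"unicast ceiling: {int(uplink_mb_s // per_client_mb_s)} clients")  # 25

# Multicast: the sender transmits one stream and the switches replicate
# it, so the sender's cost stays constant no matter how many receivers
# join -- provided they all want the same data at the same time.
```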
  • BrokenCrayons - Wednesday, November 23, 2016

    "Also, he doesn't really have "his specific application", he just spat a bunch of nonsense he believed would be cool :D"

    Technical sites are great places to speculate about the what-ifs of computer technology with like-minded people. It's absolutely okay to disagree with someone's opinion, but I don't think you're doing so in a way that projects your thoughts as calm, rational, or constructive. It seems as though idle speculation on a very insignificant matter is treated as a threat worthy of attack in your mind. I'm not sure why that's the case, but I don't think it's necessary. I try to tell my children to keep things in perspective and not to make a mountain out of a problem if it's not necessary. It's something that helps them get along in their lives now that they're more independent of their system of parental checks and balances. Maybe stopping for a few moments to consider whether or not the thing that's upsetting you and making you feel mad inside really matters would be a good idea. It could put some of these reader comments into a different, more lucid perspective.
  • ddriver - Tuesday, November 22, 2016

    Oh and obviously, he meant "image" as in pictures, not image as in OS images LOL, that was made quite obvious by the "media" part.
  • tinman44 - Monday, November 28, 2016

    The 960 EVO is only a little more expensive than the 600p and delivers consistent, high performance. Any deployment where more than a few people are using the same drive justifies getting something worthwhile, like a 960 Pro or a real enterprise SSD, but the 960 EVO comes very close to the performance of those high-end parts for a lot less money.

    ddriver: compare the performance consistency of the 600p and the 960 EVO; you don't want the 600p.
  • vFunct - Wednesday, November 23, 2016

    > There is already a product that's unbeatable for media storage - an 8tb ultrastar he8. As ssd for media storage - that makes no sense, and a 100 of those only makes a 100 times less sense :D

    You've never served an image gallery, have you?

    You know it takes 5-10 ms to serve a single random long-tail image from an HDD. And a single image gallery on a page might need to serve dozens (or hundreds) of them, taking up to 1 second of drive time.

    Do you want to tie up an entire hard drive for one second, when you have hundreds of people accessing your galleries per second?

    Hard drives are terrible for image serving on the web, because of their access times.
  • ddriver - Wednesday, November 23, 2016

    You probably don't know, but it won't really matter, because you will be bottlenecked by network bandwidth. HDD access times would be completely masked. Also, there is caching, which is how the internet ran just fine before SSDs became mainstream.

    You will not be losing any service time waiting for the HDD; you will only be limited by your internet bandwidth. Which means that regardless of the number of images, the client will receive the entire data set only 5-10 msec slower compared to an SSD. And regardless of how many clients you may have connected, you will always be limited by your bandwidth.

    Any sane server implementation won't read an entire gallery in one burst, which may be hundreds of megabytes, before it services another client. So no single client will ever block the HDD for a second. Practically every contemporary HDD has NCQ, which means the device can deliver other requests while your network is busy delivering data. Servers buffer data, so say you have two clients requesting two different galleries at the same time: the server will read the first image for the first client and begin sending it, then read the first image for the second client and begin sending it. The HDD will actually be idling quite a lot, because the drive's performance vastly exceeds your connection bandwidth. And regardless of how many clients you may have, that will not put any more strain on the HDD, as your network bandwidth will remain the bottleneck. If people end up waiting too long, it won't be the HDD but the network connection.

    But thanks for once again proving you don't have a clue, not that it wasn't obvious from your very first post ;)
  • vFunct - Friday, November 25, 2016

    > You probably don't know, but it won't really matter, because you will be bottlenecked by network bandwidth. HDD access times would be completely masked. Also, there is caching, which is how the internet ran just fine before SSDs became mainstream.

    ddriver, just stop. You literally have no idea what you're talking about.

    Image galleries aren't hundreds of megabytes. Who the hell would actually send out that much data at once? No image gallery sends out full high-res images all at once. Instead, it might be 50 mid-size thumbnails of 20KB each that you scroll through on your mobile device, with high-res images sent later when you zoom in. This is like literally every single e-commerce shopping site in the world.

    Maybe you could take an internship at a startup to gain some experience in the field? But right now, I recommend you never, ever speak in public ever again, because you don't know anything at all about web serving.
  • close - Wednesday, November 23, 2016

    @ddriver, I really didn't expect you to laugh at other people's ideas for new hardware given your "thoroughly documented" 5.25" hard drive brain-fart.
  • ddriver - Wednesday, November 23, 2016

    Nobody cares what clueless troll wannabes like you expect, you are entirely irrelevant.
