ATTO

ATTO's Disk Benchmark is a quick and easy freeware tool to measure drive performance across various transfer sizes.

ATTO Performance

Both read and write speeds fall off toward the end of the ATTO test, indicating the onset of thermal throttling. When limited to PCIe 2.0 x2, performance is somewhat variable but shows no clear signs of thermal throttling.

AS-SSD

AS-SSD is another quick and free benchmark tool. It uses incompressible data for all of its tests, making it an easy way to keep an eye on which drives are relying on transparent data compression. The short duration of the test makes it a decent indicator of peak drive performance.

Incompressible Sequential Read Performance
Incompressible Sequential Write Performance

On the short AS-SSD test, the 600p delivers a great sequential read speed that puts it pretty close to high-end NVMe drives. Write speeds are just a hair over what SATA drives can achieve.

Idle Power Consumption

Since the ATSB tests based on real-world usage truncate idle times to a maximum of 25ms, their power consumption scores paint an inaccurate picture of the relative suitability of drives for mobile use. During real-world client use, a solid state drive will spend far more time idle than actively processing commands.

There are two main ways that an NVMe SSD can save power when idle. The first is suspending the PCIe link through the Active State Power Management (ASPM) mechanism, analogous to SATA Link Power Management. Both define two power saving modes: an intermediate mode with strict wake-up latency requirements (e.g. 10µs for the SATA "Partial" state) and a deeper state with looser wake-up requirements (e.g. 10ms for the SATA "Slumber" state). SATA Link Power Management is supported by almost all SSDs and host systems, though it is commonly off by default on desktops. PCIe ASPM support, on the other hand, is a minefield, and it is common to encounter devices that do not implement it or implement it incorrectly. Forcing PCIe ASPM on for a system that defaults to disabling it may lead to the system locking up; this is the case for our current SSD testbed, so we are unable to measure the effect of PCIe ASPM on SSD idle power.
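
For readers who want to check what their own system is doing, here is a minimal sketch, assuming a Linux host whose kernel was built with PCIe ASPM support and exposes the global policy through sysfs; the path and policy names are the standard ones but may vary by kernel version.

# Minimal sketch: read the active PCIe ASPM policy on a Linux host.
# The active policy is the bracketed entry, e.g. "default [performance] powersave".
from pathlib import Path

ASPM_POLICY = Path("/sys/module/pcie_aspm/parameters/policy")

def current_aspm_policy() -> str:
    tokens = ASPM_POLICY.read_text().split()
    active = [t.strip("[]") for t in tokens if t.startswith("[")]
    return active[0] if active else " ".join(tokens)

if __name__ == "__main__":
    print("PCIe ASPM policy:", current_aspm_policy())
    # Writing "powersave" to the same file (as root) requests ASPM where possible,
    # but as noted above, forcing ASPM on a system whose firmware disables it can
    # hang the machine, so treat any change as experimental.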

The NVMe standard also defines a drive power management mechanism that is separate from PCIe link power management. The SSD can define up to 32 different power states and inform the host of the time taken to enter and exit these states. Some of these power states can be operational states where the drive continues to perform I/O with a restricted power budget, while others are non-operational idle states. The host system can either directly set these power states, or it can declare rules for which power states the drive may autonomously transition to after being idle for different lengths of time.

The big caveat to NVMe power management is that while I am able to manually set power states under Linux using low-level tools, I have not yet seen any OS or NVMe driver automatically engage this power saving. Work is underway to add Autonomous Power State Transition (APST) support to the Linux NVMe driver, and it may be possible to configure Windows to use this capability with some SSDs and NVMe drivers. NVMe power management including APST fortunately does not depend on motherboard support the way PCIe ASPM does, so it should eventually reach the same widespread availability that SATA Link Power Management enjoys.
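
As a rough illustration of what "low-level tools" means here, the sketch below drives the nvme-cli utility (assuming it is installed and run as root) to read and manually set the NVMe Power Management feature, feature ID 0x02. The device node and the example power state index are placeholders; valid indices come from the power state table that nvme id-ctrl prints for a given drive.

# Rough sketch: query and manually set an NVMe power state via nvme-cli (root required).
import subprocess

DEV = "/dev/nvme0"  # placeholder device node; adjust for your system

def get_power_state(dev: str = DEV) -> str:
    # Feature 0x02 is the NVMe Power Management feature; its value encodes the current state.
    result = subprocess.run(["nvme", "get-feature", dev, "-f", "0x02"],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

def set_power_state(ps: int, dev: str = DEV) -> None:
    # Ask the drive to enter power state `ps`, e.g. its lowest non-operational state.
    subprocess.run(["nvme", "set-feature", dev, "-f", "0x02", "-v", str(ps)],
                   check=True)

if __name__ == "__main__":
    print(get_power_state())
    # set_power_state(4)  # example index only; check the drive's id-ctrl output before using

This only exercises the same knob manually; automatic transitions between these states still depend on APST support in the OS driver, as described above.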

We report two idle power values for each drive: an active idle measurement taken with none of the above power management states engaged, and an idle power measurement with either SATA LPM Slumber state or the lowest-power NVMe non-operational power state, if supported.

Idle Power Consumption
Active Idle Power Consumption (No LPM)

Silicon Motion has made a name for itself with very low-power SSDs, but the SM2260 used in the Intel 600p doesn't really keep that tradition alive. It does support NVMe power saving modes, but they don't accomplish much. Active idle power consumption without NVMe power saving modes is much better than that of the other PCIe SSDs we've tested, but still relatively high by the standards of SATA SSDs.

Comments

  • Samus - Wednesday, November 23, 2016

    Multicast helped but when you are saturating the backbone of the switch with 60Gbps of traffic it only slightly improves transfer. With light traffic we were getting 170-190MB/sec transfer rate but with a full image battery it was 120MB/sec. Granted with Unicast it never cracked 110MB/sec under any condition.
  • ddriver - Wednesday, November 23, 2016

    Multicast would be UDP, so it would have less overhead, which is why you are seeing better bandwidth utilization. Point is with multicast you could push the same bandwidth to all clients simultaneously, whereas without multicast you'd be limited by the medium and switching capacity on top of the TCP/IP overhead.

    Assuming dual 10gbit gives you the full theoretical 2500 MB/s, if you have 100 MB/s to each client, that means you will be able to serve no more than 25 clients. Whereas with multicast you'd be able to push those 170-190 MB/s to any number of clients, tens, hundreds, thousands or even millions, and by daisy-chaining simple gigabit routers you make sure you don't run out of switching capacity. Of course, assuming you want to send the same identical data to all of them.
  • BrokenCrayons - Wednesday, November 23, 2016

    "Also, he doesn't really have "his specific application", he just spat a bunch of nonsense he believed would be cool :D"

    Technical sites are great places to speculate about the what-ifs of computer technology with like-minded people. It's absolutely okay to disagree with someone's opinion, but I don't think you're doing so in a way that projects your thoughts as calm, rational, or constructive. It seems as though idle speculation on a very insignificant matter is treated as a threat worthy of attack in your mind. I'm not sure why that's the case, but I don't think it's necessary. I try to tell my children to keep things in perspective and not to make a mountain out of a problem if it's not necessary. It's something that helps them get along in their lives now that they're more independent of their system of parental checks and balances. Maybe stop for a few moments to consider whether or not the thing that's upsetting you and making you feel mad inside is worth it. It could put some of these reader comments into a different, more lucid perspective.
  • ddriver - Tuesday, November 22, 2016

    Oh and obviously, he meant "image" as in pictures, not image as in os images LOL, that was made quite obvious by the "media" part.
  • tinman44 - Monday, November 28, 2016

    The 960 EVO is only a little bit more expensive than the 600p and delivers consistent, high performance. Any hardware implementation where more than a few people are using the same drive should justify getting something worthwhile, like a 960 Pro or a real enterprise SSD, but the 960 EVO comes very close to the performance of those high-end parts for a lot less money.

    ddriver: compare perf consistency of the 600p and the 960 EVO, you don't want the 600p.
  • vFunct - Wednesday, November 23, 2016

    > There is already a product that's unbeatable for media storage - an 8tb ultrastar he8. As ssd for media storage - that makes no sense, and a 100 of those only makes a 100 times less sense :D

    You've never served an image gallery, have you?

    You know it takes 5-10 ms to serve a single random long-tail image from an HDD. And a single image gallery on a page might need to serve dozens (or hundreds) of them, taking up to 1 second of drive time.

    Do you want to tie up an entire hard drive for one second, when you have hundreds of people accessing your galleries per second?

    Hard drives are terrible for image serving on the web, because of their access times.
  • ddriver - Wednesday, November 23, 2016

    You probably don't know, but it won't really matter, because you will be bottlenecked by network bandwidth. hdd access times would be completely masked off. Also, there is caching, which is how the internet ran just fine before ssds became mainstream.

    You will not be losing any service time waiting for the hdd, you will be only limited by your internet bandwidth. Which means that regardless of the number of images, the client will receive the entire data set only 5-10 msec slower compared to an ssd. And regardless of how many clients you may have connected, you will always be limited by your bandwidth.

    Any sane server implementation won't read the entire gallery, which may be hundreds of megabytes, in a burst before it services another client. So no single client will ever block the hdd for a second. Practically every contemporary hdd has ncq, which means the device will deliver other requests while your network is busy delivering data. Servers buffer data, so say you have two clients requesting 2 different galleries at the same time, the server will read the first image for the first client and begin sending it, and then read the first image for the second client and begin sending it. The hdd will actually be idling quite a lot while waiting, because your connection bandwidth will be vastly exceeded by the drive's performance. And regardless of how many clients you may have, that will not put any more strain on the hdd, as your network bandwidth will remain the bottleneck. If people end up waiting too long, it won't be the hdd but the network connection.

    But thanks for once again proving you don't have a clue, not that it wasn't obvious from your very first post ;)
  • vFunct - Friday, November 25, 2016

    > You probably don't know, but it won't really matter, because you will be bottlenecked by network bandwidth. hdd access times would be completely masked off. Also, there is caching, which is how the internet ran just fine before ssds became mainstream.

    ddriver, just stop. You literally have no idea what you're talking about.

    Image galleries aren't hundreds of megabytes. Who the hell would actually send out that much data at once? No image gallery sends out full high-res images at once. Instead, they might be 50 mid-size thumbnails of 20kb each that you scroll through on your mobile device, and send high-res images later when you zoom in. This is like literally every single e-commerce shopping site in the world.

    Maybe you could take an internship at a startup to gain some experience in the field? But right now, I recommend you never, ever speak in public ever again, because you don't know anything at all about web serving.
  • close - Wednesday, November 23, 2016

    @ddriver, I really didn't expect you to laugh at other people's ideas for new hardware given your "thoroughly documented" 5.25" hard drive brain-fart.
  • ddriver - Wednesday, November 23, 2016

    Nobody cares what clueless troll wannabes like you expect, you are entirely irrelevant.
