ATTO

ATTO's Disk Benchmark is a quick and easy freeware tool to measure drive performance across various transfer sizes.

ATTO Performance

Both read and write speeds fall off toward the end of the ATTO test, indicating the onset of thermal throttling. When limited to PCIe 2.0 x2, performance is somewhat variable but shows no clear signs of thermal throttling.

AS-SSD

AS-SSD is another quick and free benchmark tool. It uses incompressible data for all of its tests, making it an easy way to keep an eye on which drives are relying on transparent data compression. The short duration of the test makes it a decent indicator of peak drive performance.

Incompressible Sequential Read Performance
Incompressible Sequential Write Performance

On the short AS-SSD test, the 600p delivers a great sequential read speed that puts it pretty close to high-end NVMe drives. Write speeds are just a hair over what SATA drives can achieve.

Idle Power Consumption

Since the ATSB tests based on real-world usage cut idle times short to 25ms, their power consumption scores paint an inaccurate picture of the relative suitability of drives for mobile use. During real-world client use, a solid state drive will spend far more time idle than actively processing commands.

There are two main ways that an NVMe SSD can save power when idle. The first is to suspend the PCIe link through the Active State Power Management (ASPM) mechanism, analogous to SATA Link Power Management. Both define two power saving modes: an intermediate mode with strict wake-up latency requirements (e.g. 10µs for the SATA "Partial" state) and a deeper state with looser wake-up requirements (e.g. 10ms for the SATA "Slumber" state). SATA Link Power Management is supported by almost all SSDs and host systems, though it is commonly off by default on desktops. PCIe ASPM support, on the other hand, is a minefield, and it is common to encounter devices that do not implement it or implement it incorrectly. Forcing PCIe ASPM on for a system that defaults to disabling it may lead to the system locking up; this is the case for our current SSD testbed, so we are unable to measure the effect of PCIe ASPM on SSD idle power.
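For readers who want to check ASPM behavior on their own machines, a minimal sketch follows. It assumes a Linux system whose kernel exposes the global ASPM policy at the sysfs path shown below; the bracketed entry in that file is the currently active policy. This is an illustration of how to inspect the setting, not part of our test methodology.

from pathlib import Path

ASPM_POLICY = Path("/sys/module/pcie_aspm/parameters/policy")

def current_aspm_policy() -> str:
    """Return the active PCIe ASPM policy, e.g. 'default', 'performance' or 'powersave'."""
    for entry in ASPM_POLICY.read_text().split():
        # The active policy is printed in square brackets, e.g. "[default]".
        if entry.startswith("[") and entry.endswith("]"):
            return entry.strip("[]")
    return "unknown"

if __name__ == "__main__":
    try:
        print("Active PCIe ASPM policy:", current_aspm_policy())
    except OSError as err:
        print("PCIe ASPM policy is not exposed on this system:", err)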

The NVMe standard also defines a drive power management mechanism that is separate from PCIe link power management. The SSD can define up to 32 different power states and inform the host of the time taken to enter and exit these states. Some of these power states can be operational states where the drive continues to perform I/O with a restricted power budget, while others are non-operational idle states. The host system can either directly set these power states, or it can declare rules for which power states the drive may autonomously transition to after being idle for different lengths of time.
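As a rough illustration of what a drive advertises, the sketch below shells out to the nvme-cli utility under Linux and pulls the power state descriptor lines out of the Identify Controller dump. The device node and the exact text layout of the nvme id-ctrl output are assumptions about the system in question, so treat this as a starting point rather than a definitive tool.

import subprocess

DEVICE = "/dev/nvme0"  # hypothetical device node

def power_state_descriptors(device: str) -> list:
    """Return the raw 'ps N : ...' power state lines reported by nvme id-ctrl."""
    output = subprocess.run(
        ["nvme", "id-ctrl", device],
        capture_output=True, text=True, check=True,
    ).stdout
    # Power state descriptors are printed as lines beginning with "ps ".
    return [line.strip() for line in output.splitlines()
            if line.lstrip().startswith("ps ")]

if __name__ == "__main__":
    for descriptor in power_state_descriptors(DEVICE):
        print(descriptor)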

The big caveat to NVMe power management is that while I am able to manually set power states under Linux using low-level tools, I have not yet seen any OS or NVMe driver automatically engage this power saving. Work is underway to add Autonomous Power State Transition (APST) support to the Linux NVMe driver, and it may be possible to configure Windows to use this capability with some SSDs and NVMe drivers. NVMe power management including APST fortunately does not depend on motherboard support the way PCIe ASPM does, so it should eventually reach the same widespread availability that SATA Link Power Management enjoys.
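One way to perform the kind of manual power state change described above is nvme-cli's set-feature command; a hedged sketch is below. Feature ID 2 is the NVMe Power Management feature, while the power state index used here is purely illustrative and would need to match one of the non-operational states the drive actually reports.

import subprocess

DEVICE = "/dev/nvme0"   # hypothetical device node
POWER_STATE = 4         # illustrative index; use a non-operational state your drive reports

def set_power_state(device: str, state: int) -> None:
    """Request a power state via the NVMe Power Management feature (Feature ID 2)."""
    subprocess.run(
        ["nvme", "set-feature", device, "--feature-id=2", f"--value={state}"],
        check=True,
    )

if __name__ == "__main__":
    set_power_state(DEVICE, POWER_STATE)
    print("Requested power state", POWER_STATE, "on", DEVICE)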

We report two idle power values for each drive: an active idle measurement taken with none of the above power management states engaged, and an idle power measurement with either SATA LPM Slumber state or the lowest-power NVMe non-operational power state, if supported.

Idle Power Consumption
Active Idle Power Consumption (No LPM)

Silicon Motion has made a name for itself with very low-power SSDs, but the SM2260 used in the Intel 600p doesn't really keep that tradition alive. It does support NVMe power saving modes, but they don't accomplish much. The active idle power consumption without NVMe power saving modes is much lower than that of the other PCIe SSDs we've tested, but still relatively high by SATA SSD standards.

Comments

  • close - Thursday, November 24, 2016 - link

    ddriver, you're the guy who insisted he designed a 5.25" hard drive that's better than anything on the market despite being laughed at and proven wrong beyond any shadow of a doubt, yet you still insist on beginning and ending almost all of your comments with "you don't have a clue" or "you probably don't know". Projecting much?

    You're not an engineer and you're obviously not even remotely good at tech. You have no idea (and it actually does matter) how this works. You just make up scenarios in your head based on how you *think* it works and then you throw a tantrum when you're contradicted by people who don't have to imagine this stuff, because they know it.

    In your scenario you have 2 clients using 2 galleries at the same time (reasonable enough, 2 users per server, just like any respectable content server). Your server reads image 1, sends it, then reads image 2 and sends it, because when working with a gallery that is exactly how it works (it definitely won't be 200 users requesting thousands of thumbnails for each gallery and the server having to send all of that to each client). Then the network bandwidth will be an issue, because your content server is limited to 100Mbps, maybe 1Gbps, since you only designed it for 2 concurrent users. A server delivering media content - so a server whose ONLY job is to DELIVER MEDIA CONTENT - will have the kind of bandwidth that's "vastly exceeded by the drive's performance", the kind that can't cope with several hard drives furiously seeking hundreds or thousands of files. And of course it doesn't matter if you have 2 users or 2000, it's all the same to a hard drive, it simply sucks it up and takes it like a man. That's why they're called HARD...

    Most content delivery servers use a hefty solid state cache in front of the hard drives and hope that the content is in the cache. The only reasons spinning drives are still in the picture are capacity and cost per GB. Except ddriver's 5.25" drive that beats anything in every metric imaginable.

    Oh and BTW, before the internet became mainstream there was slightly less data to move around. While drive performance has increased 10-fold since then, the data being moved has increased 100 times or more.
    But heck, we can stick to your scenario where 2 users access 2 pictures on a content server with a 10/100 half-duplex link.

    Now quick, whip out those good ol' lines: "you're a troll wannabe", "you have no clue". That will teach everybody that you're not a wannabe and not to piss all over you. ;)
  • vFunct - Wednesday, November 23, 2016 - link

    > I'd think the best answer to that would be a custom motherboard with the appropriate slots on it to achieve high storage densities in a slim (maybe something like a 1/2 1U rackmount) chassis.

    I agree that the best option would be for motherboard makers to create server motherboards with a ton of vertical M.2 slots, like DIMM slots, and space for airflow. We also need to be able to hot-swap these out by sliding out the chassis, uncovering the case, and swapping out a defective one as needed.

    A problem with U.2 connectors is that they have thick cabling all over the place. Having a ton of M.2 slots on the motherboard avoids all that.
  • saratoga4 - Tuesday, November 22, 2016 - link

    If only they made it with a SATA interface!
  • DanNeely - Tuesday, November 22, 2016 - link

    As a SATA device it'd be meh. Peak performance would be bottlenecked at the same point as every other SATA SSD, and it loses out to the 850 EVO, never mind the 850 Pro, in consistency.
  • Samus - Tuesday, November 22, 2016 - link

    There are lots of good, reliable SATA M.2 drives on the market. The thing that makes the 600p special is that it is priced at near parity with them, while most PCIe SSDs carry a 20-30% premium.

    Really good M.2 2280 options are the MX300 or 850 EVO. SanDisk has some great M.2 2260 drives.
  • ddriver - Tuesday, November 22, 2016 - link

    Even in the case of such a "server" you are better off with SATA SSDs: get a decent HBA or RAID card or two, connect 8-16 SATA SSDs and you have it. The price is better, performance in RAID would be very good, and when a drive needs replacing you can do it in 30 seconds without even powering off the machine.

    The only actual sense this product makes is in budget ultra-portable laptops or x86 tablets, because it takes up less space. Performance-wise there will not be any difference in user experience between this and a SATA drive, but it will enable a thinner chassis.

    There is no "density advantage" for nvme, there is only FORM FACTOR advantage, and that is only in scenarios where that's the systems primary and sole storage device. What enables density is the nand density, and the same dense chips can be used just as well in a sata or sas drive. Furthermore I don't recall seeing a mobo that has more than 2 m2 slots. A pci card with 4 m2 slots itself will not be exactly compact either. I've seen such, they are as big as upper mid-range video card. It takes about as much space as 4 standard 2.5' drives, however unlike 4x2'5" you can't put it into htpc form factor.
  • ddriver - Tuesday, November 22, 2016 - link

    Also, the 1TB 600p is nowhere to be found, and even so, M.2 peaks at 2TB with the 960 Pro, which is wildly expensive. Whereas with 2.5" there is already a 4TB option and 8TB is entirely possible; the only thing missing is demand. Samsung demoed a 16TB 2.5" SSD over a year ago. I'd say the "density advantage" is very much on the side of 2.5" SSDs.
  • BrokenCrayons - Tuesday, November 22, 2016 - link

    Probably not.
  • XabanakFanatik - Tuesday, November 22, 2016 - link

    If Samsung stopped refusing to make two-sided M.2 drives and actually put the space to use there could easily be a 4TB 960 Pro.... and it would cost $2800.
  • JamesAnthony - Tuesday, November 22, 2016 - link

    Those cards are widely available (I have some): a PCIe 3.0 x16 interface feeding 4 M.2 slots, with each slot getting PCIe 3.0 x4 bandwidth, plus a cooling fan for them.

    However, WHY would you want to do that when you could just go get an Intel P3520 2TB drive, or for higher speed a P3700 2TB drive? Those are standard PCIe add-in cards for either low profile or standard profile slots.

    The only advantage an M.2 drive has is being small, but if you are going to put it in a standard PCIe slot, why not just go with a purpose-built PCIe NVMe SSD and not have to worry about thermal throttling on the M.2 cards?
