Final Thoughts

While testing SATA Express and writing this article, I constantly had one thought in my head: do we really need SATA Express? Everything it provides can be accomplished with existing hardware and standards. Desktops already have PCIe slots, so we don't need SATAe to bring PCIe SSDs to desktop users. In fact, SATAe could be viewed as a con because it takes at least two PCIe lanes and dedicates them to storage, whereas normal PCIe slots can be used for any PCIe devices. With only 16+8 (CPU/PCH) PCIe lanes available in mainstream platforms, there are no lanes to waste.

For the average user, it wouldn't make much difference if two or four lanes were taken away for SATAe, but gamers and enthusiasts can easily use up all the lanes already (higher-end motherboards tend to have additional controllers for SATA, USB 3.0, Thunderbolt, Ethernet, audio etc., all of which consume PCIe lanes). Sure, there are PCIe switches that add lanes (but not bandwidth), and these partially solve the issue, but they add cost. And if you put too many devices behind a switch, there's a good chance the shared bandwidth will become a bottleneck when they are all in use simultaneously.
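To put the lane arithmetic in perspective, here is a rough tally in Python. The device list and per-device lane counts are illustrative assumptions for a hypothetical well-equipped board, not the wiring of any particular motherboard:

    cpu_lanes = 16          # lanes from the CPU (typically the x16 graphics slot)
    pch_lanes = 8           # lanes from the chipset (PCH)

    # Hypothetical lane consumers on a higher-end board; counts are assumptions.
    devices = {
        "discrete GPU (x16 slot)":     16,
        "Thunderbolt controller":       4,
        "extra SATA controller":        1,
        "USB 3.0 controller":           1,
        "Gigabit Ethernet controller":  1,
        "audio / Wi-Fi / misc":         1,
    }
    sata_express = 2        # one SATAe port permanently reserves two lanes

    used = sum(devices.values()) + sata_express
    total = cpu_lanes + pch_lanes

    print("lanes available:", total)   # 24
    print("lanes allocated:", used)    # 26
    print("headroom:", total - used)   # -2: something has to sit behind a switch

Even with this generous rounding, a well-equipped board is at or over the 16+8 budget before SATAe enters the picture, which is exactly where the switches (and their added cost) come in.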

I'm just not sure I like the idea of taking two, potentially four or six, PCIe lanes and dedicating them to SATAe. I'd much rather have regular PCIe slots and let the end user decide what to do with them. Of course, part of the problem is that there simply aren't enough lanes to satisfy every use case, and SATAe could spur Intel and other chipset vendors to provide more native PCIe lanes.

For laptops and other small form factor builds, SATAe makes even less sense because M.2 already fills that role. 2.5" SSDs can't compete with M.2 in space efficiency, and that is what counts in the mobile industry. The only mobile use case for SATAe that I can see is laptops that ship with 2.5" SATA drives by default and can then be upgraded to 2.5" PCIe SSDs. That would allow OEMs to use the same core chassis design for multiple SKUs differentiated by the type of storage, and it would also improve end-user upgradeability. However, I still believe M.2 is the future in mobile, especially as designs keep getting smaller and thinner to the point where 2.5" is simply too big. The 2.5" SATAe drive would mainly be a niche option for laptops that don't have an M.2 or mSATA slot.

This is how small mSATA and M.2 are

Another issue exists in the OEM space. There are already four dominant form factors: 2.5" SATA, half-height/length PCIe, mSATA, and M.2. With SATA Express we would need an additional one: 2.5" SATAe (PCIe). The half-height/length PCIe is easy because all you need is an adapter for an M.2 PCIe SSD, like Plextor has, but 2.5" PCIe is a bit trickier. It would be yet another model for OEMs to build, and given the current NAND supply situation I'm not sure the OEMs are very happy about that.

The problem is that the more form factors there are, the harder it is to manage stock efficiently. If you build too many units in a form factor that doesn't sell, you end up with tons of NAND tied up in something that would have sold better in another form factor with more demand. This is why M.2 and half-height/length PCIe are great for the OEMs: they only need to manufacture M.2 SSDs, and the end product can be altered based on demand by adding a suitable adapter.

Fortunately the inclusion of both SATA and PCIe support in the SF-3700 (and some other controllers too, e.g. OCZ's upcoming Jetstream Express) helps because OEMs only need to build one 2.5" drive that can be turned into either a SATA or a PCIe model based on demand. However, not all controllers support this, so there are still cases where OEMs face the issue of an additional model. And even for the drives that do support both SATA and PCIe, the dual interface takes additional die area and R&D resources, resulting in higher costs.

Ultimately I don't believe the addition of a new form factor is a major issue because if there is customer demand, the OEMs will offer supply. It may, however, slow down the adoption of SATAe because the available models will be limited, as some manufacturers will certainly be slower to adopt new form factors (i.e. you may be able to score a better deal by getting a regular PCIe SSD).

All in all, the one big issue with SATAe is the uncertainty caused by the lack of product announcements. Nobody has really come forward and outlined plans for SATAe integration, which makes me think it's not something we'll see very soon. Leaks suggest that Intel won't be integrating SATAe into its 9-series chipsets, which will push mainstream availability back by at least a year. While chipset integration is not required to enable SATAe, it lowers the cost for motherboard OEMs since fewer parts and less validation are required. Thus I suspect that SATAe will mainly be a high-end-only feature for the next year and a half or so, and it won't be until Intel integrates it into its chipsets that we'll see mainstream adoption.

Comments

  • mkozakewich - Friday, March 14, 2014

    Ooh, or what if we had actual M.2 slots on desktop motherboards that could take a ribbon to attach 2.5" PCIe SSDs?
  • phoenix_rizzen - Thursday, March 13, 2014

    Yeah. Seems strange that they wouldn't re-use the M.2 or mSATA connector for this. Why take up 2 complete SATA slots, and add an extra connector? What are they doing with the SATA connectors when running in SATAe mode?

    It almost would have made sense to make a cable that plugged into <whatever> at the drive end and just slotted into a PCIe x1 or x2 or x4 slot on the mobo, skipping the dedicated slot entirely. Then they wouldn't need that hokey power dongle off the drive connector.
  • frenchy_2001 - Friday, March 14, 2014

    They were looking for backward compatibility with current storage and in that context, the decision makes sense. No need to think about how to plug it, it just slots right where the rest of the storage goes and can even accept its predecessor.
    It's a desktop/server/storage centric product, not really meant for laptop/portable.

    But I agree its place is getting squeezed between full PCIe (already used in data centers) and miniPCIe/M.2 used in portables. Since it already requires two PCIe lanes (like the others), it will be hard to use for lots of storage: you cannot fit 24 of those connectors in a rack (which is how most servers use SATA/SAS), as few servers have 48 PCIe lanes hanging around unused. That leaves it reserved for desktops/workstations, and those can easily use PCIe storage...
  • phoenix_rizzen - Friday, March 14, 2014

    Yeah, until you try to connect more than 2 of those to a motherboard. And good luck getting that to work on a mini-ITX/micro-ATX board. Why use up two whole SATA ports, and still use an extra port for the PCIe side of it?

    How are you going to make add-in controller cards for 4+ drives? There's no room for 4 of those connectors anywhere. And trying to do a multi-lane setup like SFF-8087 for this will be ridiculous.

    The connector is dumb, no matter how you look at it. Especially since it doesn't support power.
  • jasonelmore - Saturday, March 15, 2014

    it looks like the only reason to be excited about this connector is using older 2.5" or 3.5" form factor hard drives and putting them on a faster bus.

    Other than that, other solutions exist and they do it quicker and with less power. It's just a solution to let people use old hardware longer.
  • phobos512 - Thursday, March 13, 2014

    It's not an assumption. The cabling adds distance to the signal path, which increases latency. Signals don't travel at infinite speed; at best the speed of light (in a vacuum; in a cable the propagation speed is of course lower).
  • ddriver - Thursday, March 13, 2014

    You might be surprised how negligible the effect of signal propagation speed is on the total overall latency.
  • Khenglish - Thursday, March 13, 2014

    It's negligible.

    The worst cables carry a signal at 66% of the speed of light, with the best over 90%. If we take the worst case scenario of 66% we get this:

    speed of light = 3*10^8 m/s
    1m / (.66 * 3*10^8 m/s) = 5ns per meter

    If we have a really long 5m cable that's 25ns. Kristian says it takes 115us to read a page. You never read less than 1 page at a time.

    25ns/115us = 0.0217% for a long 5m cable. Completely insignificant latency impact. (This arithmetic is worked through in the sketch after the comments.)
  • willis936 - Thursday, March 13, 2014

    The real latency number to look at is the one cited on the NVMe page: 2.8us. It's not so negligible then. It does affect control overhead a good deal.

    Also, I have a practical concern about channel loss. You can't just slap a PCIe lane onto a 1m cable. PCIe is designed to ride a vein of traces straight to a socket, straight to a card. You're now increasing the length of those traces, still putting the signal through a socket, and then putting it through a long, low-cost cable. Asking for more than 1.5GB/s might not work as planned going forward.
  • DanNeely - Thursday, March 13, 2014

    Actually you can. PCIe cabling has been part of the spec since 2007, and while there isn't an explicit maximum length in the spec, at least one vendor is selling PCIe 2.0 cables that are up to 7m long in passive versions and 25m as active copper cables. Fiber-optic PCIe 3.0 cables are available up to 300m.
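For reference, the propagation-delay arithmetic from the thread above works out as follows. This is a minimal sketch: the 66% velocity factor, the 5m cable length, the 115us page read and the 2.8us NVMe latency are the figures quoted by the article and the commenters, not independent measurements:

    SPEED_OF_LIGHT = 3e8     # m/s, in a vacuum
    VELOCITY_FACTOR = 0.66   # worst-case cable, per the thread above
    CABLE_LENGTH_M = 5.0     # deliberately long cable

    delay_s = CABLE_LENGTH_M / (VELOCITY_FACTOR * SPEED_OF_LIGHT)
    print("propagation delay: %.1f ns" % (delay_s * 1e9))                 # ~25.3 ns

    page_read_s = 115e-6     # NAND page read latency quoted in the article
    nvme_latency_s = 2.8e-6  # NVMe controller latency cited by willis936

    print("vs. page read:    %.4f%%" % (100 * delay_s / page_read_s))     # ~0.0220%
    print("vs. NVMe latency: %.2f%%" % (100 * delay_s / nvme_latency_s))  # ~0.90%

In other words, the cable is negligible next to the NAND itself but is a visible fraction of the controller-level latency willis936 refers to, so the answer depends on which number you compare against.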
