What Is SATA Express?

Officially SATA Express (SATAe from now on) is part of the SATA 3.2 standard. It's not a new command or signaling protocol but merely a specification for a connector that combines traditional SATA and PCIe signals in a single, simple connector. As a result, SATAe is fully compatible with all existing SATA drives and cables, and the only real difference is that the same connector (although not the same SATA cable) can be used with PCIe SSDs.

As SATAe is just a different connector for PCIe, it supports both the PCIe 2.0 and 3.0 standards. I believe most solutions will rely on PCH PCIe lanes for SATAe (like the ASUS board we have), so until Intel upgrades the PCH PCIe to 3.0, SATAe will be limited to ~780MB/s. It's of course possible for motherboard OEMs to route the PCIe for SATAe from the CPU, enabling 3.0 speeds and up to ~1560MB/s of bandwidth, but obviously the PCIe interface of the SSD needs to be 3.0 as well. The SandForce, Marvell, and Samsung designs are all 2.0 but at least OCZ is working on a 3.0 controller that is scheduled for next year.
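
To put those numbers in context, PCIe 2.0 runs at 5GT/s per lane with 8b/10b encoding while PCIe 3.0 runs at 8GT/s per lane with 128b/130b encoding, so a quick back-of-the-envelope calculation gets you to the figures above. The sketch below is purely illustrative; the 78% protocol efficiency is my own rough assumption for real-world overhead (packet headers, flow control and so on), not a number from SATA-IO.

```python
# Back-of-the-envelope bandwidth for a two-lane SATAe link (illustrative sketch).
# The 0.78 protocol efficiency is an assumed real-world figure, not a spec value.

def sata_express_bandwidth_mb_s(gen: int, lanes: int = 2, protocol_efficiency: float = 0.78):
    line_rate_gt_s = {2: 5.0, 3: 8.0}[gen]      # transfer rate per lane
    encoding = {2: 8 / 10, 3: 128 / 130}[gen]   # 8b/10b vs 128b/130b line coding
    post_encoding = line_rate_gt_s * 1e9 * encoding / 8 / 1e6 * lanes  # MB/s after encoding
    return post_encoding, post_encoding * protocol_efficiency

for gen in (2, 3):
    raw, effective = sata_express_bandwidth_mb_s(gen)
    print(f"PCIe {gen}.0 x2: ~{raw:.0f} MB/s after encoding, ~{effective:.0f} MB/s real-world")
# Roughly 1000/780 MB/s for PCIe 2.0 and 1970/1540 MB/s for PCIe 3.0,
# which lines up with the ~780MB/s and ~1560MB/s figures above.
```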

The board ASUS sent us has two SATAe ports, as you can see in the image above. This is similar to the port you should find in final products once SATAe starts shipping. Notice that the motherboard connector is basically just two SATA ports and a small additional connector—the SATA ports work normally when using a standard SATA cable. It's only when the connector meets the special SATAe cable that the PCIe magic starts happening.

ASUS mentioned that the cable is not a final design and may change before retail availability. I suspect we'll see one larger cable instead of three separate ones for aesthetic and cable management reasons. As there are no SATAe drives available yet, our cable has the same connector on both ends, and the connection to a PCIe drive is provided with the help of a separate SATAe daughterboard. In the final design the other end of the cable will be similar to the current SATA layout (data + power), so it will plug straight into a drive.

That looks like the female part of the SATA connector on your SSD, doesn't it?

Unlike regular PCIe, SATAe does not provide power. This was a surprise for me because I expected SATAe to fully comply with the PCIe spec, which provides up to 25W for x2 and x4 devices. I'm guessing the cable assembly would have become too expensive with the inclusion of power and not all SATA-IO members are happy even with the current SATAe pricing (about $1 in bulk per cable compared to $0.30 for normal SATA cables). As a result, SATAe drives will still source their power straight from the power supply. The SATAe connector is already quite large (about the same size as SATA data + power), so instead of a separate power connector we'll likely see something that looks like this:

In other words, the SATAe cable has a power input, which can be either 15-pin SATA or molex depending on the vendor. The above is just SATA-IO's example/suggestion—they haven't actually made any standard for the power implementation and hence we may see some creative workarounds from OEMs.

 

Comments

  • mkozakewich - Friday, March 14, 2014 - link

    Ooh, or what if we had actual M.2 slots on desktop motherboards that could take a ribbon to attach 2.5" PCIe SSDs?
  • phoenix_rizzen - Thursday, March 13, 2014 - link

    Yeah. Seems strange that they wouldn't re-use the M.2 or mSATA connector for this. Why take up 2 complete SATA slots, and add an extra connector? What are they doing with the SATA connectors when running in SATAe mode?

    It almost would have made sense to make a cable that plugged into <whatever> at the drive end, and just slotted into a PCIe x1 or x2 or x4 slot on the mobo. Skipped the dedicated slot entirely. Then they wouldn't need that hokey power dongle off the drive connector.
  • frenchy_2001 - Friday, March 14, 2014 - link

    They were looking for backward compatibility with current storage and in that context, the decision makes sense. No need to think about how to plug it, it just slots right where the rest of the storage goes and can even accept its predecessor.
    It's a desktop/server/storage centric product, not really meant for laptop/portable.

    But I agree its place is becoming squished between full PCIe (used already in data centers) and miniPCIe/M2 used in portables. As the requirement is already 2x PCIe lanes (like the others), it will be hard to use for lots of storage: you cannot fit 24 of those in a rack (which is how most servers use SATA/SAS) since few servers have 48 lanes of PCIe hanging around unused, so it seems reserved to desktop/workstation, and those can easily use PCIe storage...
  • phoenix_rizzen - Friday, March 14, 2014 - link

    Yeah, until you try to connect more than 2 of those to a motherboard. And good luck getting that to work on a mini-ATX/micro-ATX board. Why use up two whole SATA ports, and still use an extra port for the PCIe side of it?

    How are you going to make add-in controller cards for 4+ drives? There's no room for 4 of those connectors anywhere. And trying to do a multi-lane setup like SFF-8087 for this will be ridiculous.

    The connector is dumb, no matter how you look at it. Especially since it doesn't support power.
  • jasonelmore - Saturday, March 15, 2014 - link

    It looks like the only reason to be excited about this connector is putting older 2.5" or 3.5" form factor hard drives on a faster bus.

    Other than that, other solutions exist and they do it quicker and with less power. It's just a solution to let people use old hardware longer.
  • phobos512 - Thursday, March 13, 2014 - link

    It's not an assumption. The cabling adds distance to the signal path, which increases latency. Electrons don't travel at infinite speed; merely the speed of light (in a vacuum; in a cable it is of course reduced).
  • ddriver - Thursday, March 13, 2014 - link

    You might be surprised how negligible the effect of the speed of electrons is on the total overall latency.
  • Khenglish - Thursday, March 13, 2014 - link

    It's negligible.

    The worst cables carry a signal at 66% of the speed of light, with the best over 90%. If we take the worst case scenario of 66% we get this:

    speed of light = 3*10^8 m/s
    1m / (.66 * 3*10^8 m/s) = 5ns per meter

    If we have a really long 5m cable that's 25ns. Kristian says it takes 115us to read a page. You never read less than 1 page at a time.

    25ns/115us = .0217% for a long 5m cable. Completely insignificant latency impact.
  • willis936 - Thursday, March 13, 2014 - link

    The real latency number to look at is the one cited on the NVMe page: 2.8us. It's not so negligible then. It does affect control overhead a good deal.

    Also, I have a practical concern about channel loss. You can't just slap a PCIe lane onto a 1m cable. PCIe is designed to ride a vein of traces straight to a socket, straight to a card. You're now increasing the length of those traces, still putting it through a socket, and now putting it through a long, low-cost cable. Asking for more than 1.5GB/s might not work as planned going forward.
  • DanNeely - Thursday, March 13, 2014 - link

    Actually you can. PCIe cabling has been part of the spec since 2007, and while there isn't an explicit max length in the spec, at least one vendor is selling PCIe 2.0 cables that are up to 7m long for passive versions and 25m for active copper cables. Fiber optic 3.0 cables are available up to 300m.
