Final Thoughts

While testing SATA Express and writing this article, I constantly had one thought in my head: do we really need SATA Express? Everything it provides can be accomplished with existing hardware and standards. Desktops already have PCIe slots, so we don't need SATAe to bring PCIe SSDs to desktop users. In fact, SATAe could be viewed as a con because it takes at least two PCIe lanes and dedicates them to storage, whereas normal PCIe slots can be used for any PCIe devices. With only 16+8 (CPU/PCH) PCIe lanes available in mainstream platforms, there are no lanes to waste.

For the average user, it wouldn't make much difference if you took two or four lanes away for SATAe, but gamers and enthusiasts can easily use up all the lanes already (higher-end motherboards tend to have additional controllers for SATA, USB 3.0, Thunderbolt, Ethernet, audio, etc., which all use PCIe lanes). Sure, there are PCIe switches that add lanes (but not bandwidth), and these partially solve the issue but add cost. And if you put too many devices behind a switch, there's a good chance that bandwidth will become a bottleneck when all of them are in use simultaneously.
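To put that lane budget in perspective, here is a rough back-of-the-envelope tally. The 16+8 CPU/PCH split matches the mainstream platforms mentioned above, but the device list and per-device lane counts are purely illustrative assumptions rather than any specific motherboard:

```python
# Rough PCIe lane budget for a mainstream platform (16 CPU + 8 PCH lanes,
# per the article). The devices and their lane counts below are assumptions
# for illustration, not a real board layout.

CPU_LANES, PCH_LANES = 16, 8

cpu_devices = {"discrete GPU": 16}          # typically takes the CPU lanes whole
pch_devices = {
    "extra SATA controller": 1,
    "USB 3.0 controller": 1,
    "Thunderbolt controller": 4,
    "Ethernet": 1,
    "audio/other": 1,
}

cpu_used = sum(cpu_devices.values())
pch_used = sum(pch_devices.values())
print(f"CPU lanes used: {cpu_used}/{CPU_LANES}")                        # 16/16
print(f"PCH lanes used: {pch_used}/{PCH_LANES}")                        # 8/8
print(f"PCH lanes left for SATAe (needs 2+): {PCH_LANES - pch_used}")   # 0
```

With a tally like this, the lane budget is spent before SATAe even asks for its two (or four) dedicated lanes, which is exactly why switches and their added cost enter the picture.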

I'm just not sure I like the idea of taking two, potentially four or six, PCIe lanes and dedicating them to SATAe. I'd much rather have regular PCIe slots and let the end user decide what to do with them. Of course, part of the problem is that there simply aren't enough lanes to satisfy all use cases, and SATAe could spur Intel and other chipset vendors to provide more native PCIe lanes.

For laptops and other small form factor builds, SATAe makes even less sense because that is exactly what M.2 is for. 2.5" SSDs can't compete with M.2 in space efficiency, and that is what counts in the mobile industry. The only use for SATAe in mobile that I can see is in laptops that ship with 2.5" SATA drives by default and can later be upgraded to 2.5" PCIe SSDs. That would allow OEMs to use the same core chassis design for multiple SKUs, differentiated by the type of storage, and it would also improve end-user upgradeability. However, I still believe M.2 is the future in mobile, especially as we are constantly moving toward smaller and thinner designs where 2.5" is simply too big. The 2.5" upgrade path would mainly be a niche option for laptops that don't have an M.2 or mSATA slot.

This is how small mSATA and M.2 are

Another issue exists in the OEM space. There are already four dominant form factors: 2.5" SATA, half-height/length PCIe, mSATA, and M.2. With SATA Express we would need an additional one: 2.5" SATAe (PCIe). The half-height/length PCIe form factor is easy because all you need is an adapter for an M.2 PCIe SSD, like Plextor has done, but 2.5" PCIe is a bit trickier. It would be yet another model for OEMs to build, and given the current NAND situation I'm not sure the OEMs are very happy about that.

The problem is that the more form factors there are, the harder it is to manage stock efficiently. If you build too many units in a form factor that doesn't sell, you end up having spent tons of NAND on something that could have been put to better use in another form factor with more demand. This is why M.2 and half-height/length PCIe are great for the OEMs—they only need to manufacture M.2 SSDs and the end product can be altered based on demand by adding a suitable adapter.

Fortunately, the inclusion of both SATA and PCIe support in the SF-3700 (and some other controllers too, e.g. OCZ's upcoming Jetstream Express) helps because OEMs only need to build one 2.5" drive that can be configured as either SATA or PCIe based on demand. However, not all controllers support this, so there are still cases where OEMs face the issue of an additional model. And even for the drives that do support both SATA and PCIe, the dual interface takes additional die area and R&D resources, resulting in higher costs.

Ultimately I don't believe the addition of a new form factor is a major issue because if there is customer demand, the OEMs will supply it. It may, however, slow down the adoption of SATAe because the available models will be limited (i.e. you can score a better deal by getting a regular PCIe SSD), as some manufacturers will certainly be slower to adopt new form factors.

All in all, the one big issue with SATAe is the uncertainty caused by the lack of product announcements. Nobody has really come forward and outlined plans for SATAe integration, which makes me think it's not something we'll see very soon. Leaks suggest that Intel won't be integrating SATAe into its 9-series chipsets, which will push mainstream availability back by at least a year. While chipset integration is not required to enable SATAe, it lowers the cost for motherboard OEMs since fewer parts and less validation are required. Thus I suspect that SATAe will mainly be a high-end-only feature for the next year and a half or so, and it won't be until Intel integrates it into its chipsets that we'll see mainstream adoption.

Comments

  • Khenglish - Thursday, March 13, 2014

    That 2.8 µs you found is driver interface overhead from an interface that doesn't even exist yet. You need to add this to the access latency of the drive itself to get the real latency.

    Real world SSD read latency for tiny 4K data blocks is roughly 900 µs on the fastest drives.

    It would take an 18000 meter cable to add even 10% to that.
  • willis936 - Thursday, March 13, 2014

    Show me a consumer phy that can transmit 8Gbps over 100m on cheap copper and I'll eat my hat.
  • Khenglish - Thursday, March 13, 2014

    The problem with long cables is attenuation, not latency. Cables can only be around 50 m long before you need a repeater.
  • mutercim - Friday, March 14, 2014

    Electrons have mass, they can't ever travel at the speed of light, no matter the medium. The signal itself would move at the speed of light (in vacuum), but that's a different thing.

    /pedantry
  • Visual - Friday, March 14, 2014

    It's a common misconception, but electrons don't actually need to travel the length of the cable for a signal to travel through it.
    In layman's terms, you don't need to send an electron all the way to the other end of the cable, you just need to make the electrons that are already there react in such a way as to register the required voltage or current.
    So a signal is a change in voltage, or a change in the electromagnetic fields, and that travels at the speed of light (no, not in vacuum, in that medium).
  • AnnihilatorX - Friday, March 14, 2014

    Just to clarify, it is like pushing a tube full of tennis balls from one end. Assuming the tennis balls are all rigid so that deformation is negligible, the 'cause and effect' making the tennis ball on the other end move will travel at the speed of light.
  • R3MF - Thursday, March 13, 2014

    Having 24x PCIe 3.0 lanes on AMD's Kaveri looks pretty far-sighted right now.
  • jimjamjamie - Thursday, March 13, 2014

    If they got their finger out with a good x86 core, the APUs would be such an easy sell.
  • MrSpadge - Thursday, March 13, 2014

    Re: "Why Do We Need Faster SSDs"

    Your power consumption argument ignores one fact: if you use the same controller, NAND and firmware it costs you x Wh to perform a read or write operation. If you simply increase the interface speed and hence perform more of these operations per time, you also increase the energy required per time, i.e. power consumption. In your example the faster SSD wouldn't continue to draw 3 W with the faster interface: assuming a 30% throughput increase, expecting a power draw of 4 W would be reasonable.

    Obviously there are also system components actively waiting for that data. So if the data arrives faster (due to lower latency & higher throughput) they can finish the task quicker and race to sleep. This counterbalances some of the actual NAND power draw increase, but won't negate it completely.
  • Kristian Vättö - Thursday, March 13, 2014

    "If you simply increase the interface speed and hence perform more of these operations per time, you also increase the energy required per time, i.e. power consumption."

    The number of IO operations is a constant here. A faster SSD does not mean that the overall number of operations will increase because ultimately that's up to the workload. Assuming that is the same in both cases, the faster SSD will complete the IO operations faster and will hence spend more time idling, resulting in less power drawn in total.

    Furthermore, a faster SSD does not necessarily mean higher power draw. As the graph on page one shows, PCIe 2.0 increases baseline power consumption by only 2% compared to SATA 6Gbps. Given that SATA 6Gbps is a bottleneck in current SSDs, more processing power (and hence more power) is not required to make a faster SSD. You are right that it may result in higher NAND power draw, though, because the controller will be able to take better advantage of parallelism (more NAND in use = more power consumed).

    I understand the example is not perfect as in the real world the number of variables is through the roof. However, the idea was to debunk the claim that PCIe SSDs are just a marketing trick -- they are that too, but ultimately there are gains that will reach the average user as well.
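To make the energy argument in this exchange concrete, here is a rough sketch in the same spirit. All of the figures (workload size, throughputs, idle power, the fixed time window) are illustrative assumptions chosen for easy arithmetic; only the ~2% power delta and the 3 W / 4 W / 30% figures come from the discussion above:

```python
# "Race to idle" back-of-the-envelope: total energy to move a fixed workload,
# then sit idle for the rest of a fixed window. All numbers are illustrative.

def total_energy(throughput_mb_s, active_w, workload_mb=10_000,
                 window_s=30.0, idle_w=0.05):
    """Joules spent finishing the workload and idling out the rest of the window."""
    busy_s = workload_mb / throughput_mb_s
    idle_s = max(window_s - busy_s, 0.0)
    return active_w * busy_s + idle_w * idle_s

sata      = total_energy(500, 3.00)   # SATA 6Gbps: 500 MB/s at 3 W        -> ~60.5 J
pcie_low  = total_energy(650, 3.06)   # 30% faster, ~2% higher power       -> ~47.8 J
pcie_high = total_energy(650, 4.00)   # 30% faster, 4 W as in the comment  -> ~62.3 J

print(f"SATA:              {sata:.1f} J")
print(f"PCIe (+2% power):  {pcie_low:.1f} J")
print(f"PCIe (4 W active): {pcie_high:.1f} J")
```

Under the ~2% figure the faster drive finishes sooner and uses less total energy despite the quicker interface; if active power really climbed to 4 W, the race-to-idle saving would be roughly cancelled out, which is the crux of the disagreement above.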
