During the hard drive era, the Serial ATA International Organization (SATA-IO) had no problems keeping up with the bandwidth requirements. The performance increases that new hard drives provided were always quite moderate because ultimately the speed of the hard drive was limited by its platter density and spindle speed. Given that increasing the spindle speed wasn't really a viable option for mainstream drives due to power and noise issues, increasing the platter density was left as the only source of performance improvement. Increasing density is always a tough job and it's rare that we see any sudden breakthroughs, which is why density increases have only given us small speed bumps every once in a while. Even most of today's hard drives can't fully saturate the SATA 1.5Gbps link, so it's obvious that the SATA-IO didn't have much to worry about. However, that all changed when SSDs stepped into the game.

SSDs no longer relied on rotational media for storage; they used NAND, a form of non-volatile solid-state memory, instead. With NAND, performance was no longer dictated by the laws of rotational physics: solid-state storage brought dramatically lower latencies and opened the door to much higher throughputs, putting pressure on the SATA-IO to increase the interface bandwidth. To illustrate how fast NAND really is, let's do a little calculation.

It takes 115 microseconds to read 16KB (one page) from IMFT's 20nm 128Gbit NAND. That works out to be roughly 140MB/s of throughput per die. In a 256GB SSD you would have sixteen of these, which works out to over 2.2GB/s. That's about four times the maximum bandwidth of SATA 6Gbps. This is all theoretical of course—it's one thing to dump data into a register but transferring it over an interface requires more work. However, the NAND interfaces have also caught up in the last couple of years and we are now looking at up to 400MB/s per channel (both ONFI 3.x and Toggle-Mode 2.0). With most client platforms being 8-channel designs, the potential NAND-to-controller bandwidth is up to 3.2GB/s, meaning it's no longer a bottleneck.
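For anyone who wants to reproduce that math, here is the same back-of-the-envelope calculation as a short Python sketch. Every figure in it (16KB page, 115µs read time, sixteen 128Gbit dies, eight 400MB/s channels) comes straight from the paragraph above; real drives will of course vary.

```python
# Back-of-the-envelope NAND read bandwidth, using the figures quoted above.
PAGE_BYTES  = 16 * 1024     # one 16KB page
READ_TIME_S = 115e-6        # 115 microseconds per page read (IMFT 20nm 128Gbit NAND)

per_die_mbps = PAGE_BYTES / READ_TIME_S / 1e6      # roughly 140MB/s per die
dies         = 256 // 16                           # 128Gbit = 16GB per die -> 16 dies in a 256GB SSD
nand_gbps    = per_die_mbps * dies / 1000          # just over 2.2GB/s of raw NAND throughput

channel_mbps   = 400                               # ONFI 3.x / Toggle-Mode 2.0, per channel
interface_gbps = channel_mbps * 8 / 1000           # 8-channel client controller -> 3.2GB/s

print(f"Per die:            ~{per_die_mbps:.0f} MB/s")
print(f"Sixteen dies:       ~{nand_gbps:.1f} GB/s")
print(f"NAND-to-controller: {interface_gbps:.1f} GB/s")
```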

Given the speed of NAND, it's not a surprise that the SATA interface quickly became a bottleneck. When Intel finally integrated SATA 6Gbps into its chipsets in early 2011, SandForce immediately came out with its SF-2000 series controllers and said, "Hey, we are already maxing out SATA 6Gbps; give us something faster!" The SATA-IO went back to the drawing board and realized that upping the SATA interface to 12Gbps would require several years of development, and the cost of such rapid development would end up being very high. Another major issue was power: pushing the interface to 12Gbps would have meant a noticeable increase in power consumption, which is never good.

Therefore the SATA-IO had to look elsewhere in order to provide a fast yet cost-efficient standard in a timely manner. Given these restrictions, it made the most sense to build on an already existing interface, more specifically PCI Express, to speed up time to market as well as cut costs.

                      Serial ATA             PCI Express
                      2.0        3.0         2.0                         3.0
Link Speed            3Gbps      6Gbps       8Gbps (x2) / 16Gbps (x4)    16Gbps (x2) / 32Gbps (x4)
Effective Data Rate   ~275MBps   ~560MBps    ~780MBps / ~1560MBps        ~1560MBps / ~3120MBps (?)

PCI Express makes a ton of sense. It's already integrated into all major platforms, and thanks to its scalability it offers room for future bandwidth increases when needed. In fact, PCIe is already widely used in the high-end enterprise SSD market because the SATA/SAS interface was never enough to satisfy enterprise performance needs in the first place.

Even a PCIe 2.0 x2 link offers about a 40% increase in maximum throughput over SATA 6Gbps. Like most interfaces, PCIe 2.0 isn't 100% efficient; based on our internal tests the bandwidth efficiency is around 78-79%, so in the real world you should expect to get ~780MB/s out of a PCIe 2.0 x2 link. Keep in mind that SATA 6Gbps isn't 100% efficient either (around 515MB/s is the typical maximum we see). The currently available PCIe SSD controller designs are all 2.0 based, but we should start to see some PCIe 3.0 drives next year. We don't have efficiency numbers for 3.0 yet, but I would expect nearly twice the bandwidth of 2.0, making 1GB/s+ the norm.
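To make the overhead explicit, here is a small sketch of where those effective numbers come from: the raw signaling rate is trimmed by the line encoding (8b/10b for SATA and PCIe 2.0, 128b/130b for PCIe 3.0) and then by the measured protocol efficiency. The ~86% SATA efficiency is simply back-calculated from the ~515MB/s we observe, and the PCIe 3.0 efficiency is an assumption (we have no measured figure yet), so treat the last number as a rough estimate.

```python
# Rough effective-bandwidth estimates for the interfaces discussed above.
# Efficiencies are approximate real-world figures, not specification values.

def effective_mbps(signal_gbps, lanes=1, encoding=8 / 10, efficiency=1.0):
    """Usable MB/s after line encoding and protocol/measurement overhead."""
    return signal_gbps * lanes * encoding * efficiency * 1000 / 8

sata_6g  = effective_mbps(6.0, efficiency=0.86)                               # ~515MB/s typical
pcie2_x2 = effective_mbps(5.0, lanes=2, efficiency=0.78)                      # ~780MB/s measured
pcie3_x2 = effective_mbps(8.0, lanes=2, encoding=128 / 130, efficiency=0.78)  # efficiency assumed

print(f"SATA 6Gbps:  ~{sata_6g:.0f} MB/s")
print(f"PCIe 2.0 x2: ~{pcie2_x2:.0f} MB/s")
print(f"PCIe 3.0 x2: ~{pcie3_x2:.0f} MB/s (assuming 2.0-like efficiency)")
```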

But what exactly is SATA Express? Hop on to the next page to read more!

What Is SATA Express?

Comments (131)

  • willis936 - Friday, March 14, 2014

    A 4.5GHz 4770k doesn't render my video, crunch my matlab, and host my minecraft at arbitrarily amazingly fast speeds, but it's a big step up from a Q6600 :p
  • MrBungle123 - Friday, March 14, 2014

    That cable looks horrible, I'd rather they just move SSDs to a card.
  • TEAMSWITCHER - Friday, March 14, 2014

    Second that! Hardware makers need to abandon SATA Express and start working on new motherboard form factors that allow flash drives to attach directly to the motherboard. SATA Express is another compromised design-by-committee. Just what the struggling PC industry needs right now! Jeepers!!!
  • iwod - Friday, March 14, 2014

    The future is mobile, where laptops have already overtaken desktops in numbers. So why another chunky, ugly old hack for SSDs? Has Apple not taught them a lesson that design matters?

    And the speed is just too slow. We should at least be at 16Gbps, and since none of these standards are coming out fast enough, I would have expected the interface to leap to 32Gbps. That would leave plenty of headroom for SSD controllers to improve and work with. And Intel isn't even bundling enough PCIe lanes direct from the CPU.

    Why can't we build something that is a little future-proof?
  • willis936 - Friday, March 14, 2014

    Cost matters. The first thing they'll tell you in economics 101 is that we live in a world with finite resources and infinite wants. There's a reason we don't all have i7 processors, 4K displays, and 780 GPUs right now. Thunderbolt completely missed its window for adoption because the cost vs. benefit wasn't there and OEMs didn't pick it up. The solutions will be made as the market wants them. The reason the connector is an ugly hack is so you can have the option of high-bandwidth single drives or multiple slower drives. It's not pretty, and I'd personally like to just see it as a phy/protocol stack that uses the PCIe connector with some auto-negotiation to figure out whether it's a SATAe or PCIe device, but that might cause problems if PCIe doesn't handle things like that already.

    Your mobile connector will come, or rather is already here.
  • dszc - Saturday, March 15, 2014

    Thanks Kristian. Great article.
    I vote for PCIe / NVMe / M.2. SATAe seems like a step in the wrong direction. Time to move on. SATA SSDs are great for backward compatibility to help a legacy system, but seem a horrible way to design a new system. Too big. Too many cables. Too much junk. Too expensive. SATAe seems to be applying old thinking to new technology.
  • watersb - Sunday, March 16, 2014

    I don't get the negative reactions in many of the comments.

    Our scientific workloads are disk-IO bound, rather than CPU-bound. The storage stack is ripe for radical simplification. SATAe is a step in that direction.
  • rs2 - Sunday, March 16, 2014

    This will never fly. For one thing the connectors are too massive. Most high-end mainboards allow 6 to 8 SATA drives to be connected, and some enthusiasts use nearly that many. That will never be possible with the SATAe connector design; there's just not enough space on the board.

    And consuming 2 PCI-E lanes per connector is the other limiting factor. It's a reasonable solution when you just need one or two ports. But in the 8-drive case you're talking about needing 16 extra lanes. Where are those meant to come from?
  • willis936 - Sunday, March 16, 2014

    How many SSDs do you plan to use at once? I can't think of a single use case where more than one SSD is needed, or even wanted if bandwidth isn't an issue. One SSD and several hard drives is certainly plausible. So there are 6 instead of 8 usable ports for hard drives. How terrible.
  • Shiitaki - Monday, March 17, 2014

    So exactly what problem is this fixing? The problem of money; this is a pathetic attempt at licensing fees. SSD manufacturers could simply change the software and have their drives appear to the operating system as a PCIe-based SATA controller with a permanently attached drive TODAY. It would be genius to be able to plug a drive into a slot and be done with it. We don't need anything new. We already have mini-PCIe. Moving to a mini-PCIe x4 would have been a better idea. Then you could construct backplanes with the new mini-PCIe x4 connectors that aggregate and connect to a motherboard using a PCIe x8/x16 slot.

    This article covers the story of an organization fighting desperately to not disappear into the history books of the computer industry.
