Testing SATA Express

SATAe is not commercially available yet, but ASUS sent us a pre-production unit of the SATA Express version of their Z87 Deluxe motherboard along with the peripherals necessary to test it. This is the same motherboard as our 2014 SSD testbed, but with added SATAe functionality.

Test Setup
CPU: Intel Core i7-4770K at 3.5GHz (Turbo & EIST enabled, C-states disabled)
Motherboard: ASUS Z87 Deluxe SATA Express (BIOS 1707)
Chipset: Intel Z87
Chipset Drivers: 9.4.0.1026
Storage Drivers: Intel RST 12.9.0.1001
Memory: Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T)
Graphics: Intel HD Graphics 4600
Graphics Drivers: 15.33.8.64.3345
Power Supply: Corsair RM750
OS: Windows 7 Ultimate 64-bit

Before we get into the actual tests, we would like to thank the following companies for helping us with our 2014 SSD testbed.

The ASUS Z87 Deluxe SATA Express has two SATAe ports: one routed from the Platform Controller Hub (PCH) and the other provided by an ASMedia ASM106SE chip. The ASM106SE is an unreleased chip, so there is no public information about it, and ASUS is very tight-lipped about the details. My guess is that we are dealing with the same SATA 6Gbps design as the other ASM106x chips, but with added PCIe pass-through functionality to make the chip suitable for SATA Express.

I put together a quick block diagram showing the storage side of the ASUS SATAe board we have. In total, four PCIe lanes are dedicated to SATAe, with support for up to two SATAe drives in addition to four SATA 6Gbps devices. Alternatively, you can have up to eight SATA 6Gbps devices if neither of the SATAe ports is operating in PCIe mode.
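
To put those lane counts in perspective, here is a quick back-of-envelope calculation (a sketch of the interface math, not a measurement): both PCIe 2.0 and SATA 6Gbps use 8b/10b encoding, so a two-lane SATAe link tops out around 1GB/s against SATA's ~600MB/s ceiling.

```python
# Back-of-envelope interface ceilings (sketch; these are bus limits,
# not drive speeds, and they ignore protocol overhead above the encoding).
# PCIe 2.0 runs at 5 GT/s per lane and SATA 6Gbps at 6 Gbps; both use
# 8b/10b encoding, so only 80% of the raw bit rate is payload.
def payload_mb_s(raw_gbps: float) -> float:
    return raw_gbps * 1e9 * 0.8 / 8 / 1e6  # raw bits/s -> usable MB/s

print(f"PCIe 2.0 x1:         {payload_mb_s(5):.0f} MB/s")      # 500 MB/s
print(f"SATAe (PCIe 2.0 x2): {2 * payload_mb_s(5):.0f} MB/s")  # 1000 MB/s
print(f"SATA 6Gbps:          {payload_mb_s(6):.0f} MB/s")      # 600 MB/s
```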

Since there are no SATAe drives available at this point, ASUS sent us a SATAe demo daughterboard along with the motherboard. The daughterboard itself is very simple: it has the same SATAe connector as found on the motherboard, two Molex power inputs, a clock cable header, and a PCIe slot.

This is what the setup looks like in action (as you can see, I took the motherboard out of the case since in-case photos didn't turn out well with the poor camera I have). The black and red cable is the external clock cable, which is only temporary and won't be needed with a final SATAe board.

The Tests

For testing I used Plextor's 256GB M6e, a PCIe 2.0 x2 SSD built around Marvell's new 88SS9183 PCIe controller. Plextor rates the M6e at up to 770MB/s read and 580MB/s write, so we should be able to reach the full potential of PCIe 2.0 x2. Additionally, I tested the SATA 6Gbps ports with a 256GB OCZ Vertex 450. I used the same sequential 128KB Iometer tests that we use in our SSD reviews, but ramped the queue depth up to 32 to make sure we are looking at a maximum-throughput situation.
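
For anyone who wants to approximate the workload, below is a minimal Python sketch of a 128KB sequential read pass. To be clear about the assumptions: the device path is hypothetical, the loop is synchronous (effective queue depth 1 rather than the QD32 used in the Iometer runs), and it reads through the OS page cache, so it illustrates the shape of the test rather than reproducing our numbers; Iometer remains the right tool for real measurements.

```python
# Minimal sketch of a 128KB sequential read throughput test.
# Assumptions: Linux-style raw device path (run as root), synchronous
# reads (QD1, unlike the article's QD32 Iometer runs), and no O_DIRECT,
# so the page cache is not bypassed.
import os
import time

DEV = "/dev/sdb"        # hypothetical test device -- change for your system
BLOCK = 128 * 1024      # 128KB transfers, matching the article's workload
TOTAL = 1024 ** 3       # read 1 GiB in total

fd = os.open(DEV, os.O_RDONLY)
start = time.perf_counter()
done = 0
while done < TOTAL:
    buf = os.read(fd, BLOCK)
    if not buf:          # end of device
        break
    done += len(buf)
elapsed = time.perf_counter() - start
os.close(fd)

print(f"{done / elapsed / 1e6:.1f} MB/s sequential read (QD1)")
```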

Iometer—128KB Sequential Read (QD32)

There is no practical difference between a PCIe slot on the motherboard and PCIe that is routed through SATA Express. I'm a little surprised that there is absolutely no hit in performance (other than a negligible 1.5MB/s that's basically within the margin of error) because after all we are using cabling that should add latency. It seems that SATA-IO has been able to make the cabling efficient enough to transmit PCIe without additional overhead.

As for SATA 6Gbps, the performance is the same as well, which isn't surprising since only the connector differs; electrically everything is identical. With the ASMedia chipset there is a ~25-27% reduction in performance, but that is in line with previous ASMedia SATA 6Gbps chipsets I've seen. As I mentioned earlier, I doubt the ASM106SE brings anything new to the SATA side of the controller, which is why I wasn't expecting more than 400MB/s. Generally you'll only get full SATA bandwidth from an Intel chipset or a higher-end SATA/RAID card.

Iometer—128KB Sequential Write (QD32)

The same goes for write performance. The only case where you'll see a difference is if you connect to the ASMedia SATA 6Gbps port. I ran some additional benchmarks (like our performance consistency test) to see whether a different workload would yield different results, but all my tests showed that SATAe in PCIe mode is as fast as a real PCIe slot, so I'm not going to post a bunch of additional graphs showing that the two are equivalent.

Comments

  • willis936 - Friday, March 14, 2014 - link

    A 4.5GHz 4770k doesn't render my video, crunch my matlab, and host my minecraft at arbitrarily amazingly fast speeds, but it's a big step up from a Q6600 :p
  • MrBungle123 - Friday, March 14, 2014 - link

    That cable looks horrible; I'd rather they just move SSDs to a card.
  • TEAMSWITCHER - Friday, March 14, 2014 - link

    Second That! Hardware makers need to abandon SATA Express and start working on new motherboard form factors that would allow for attaching the flash drives directly to the motherboard. SATA Express is another compromised design-by-committee. Just what the struggling PC industry needs right now! Jeepers!!!
  • iwod - Friday, March 14, 2014 - link

    The future is mobile, where laptops have already overtaken desktops in numbers. So why another chunky, ugly old hack for SSDs? Has Apple not taught them a lesson that design matters?

    And the speed is just too slow. We should at least be at 16Gbps, and since none of these standards are even coming out fast enough, I would have expected the interface to leap to 32Gbps. That would leave plenty of headroom for SSD controllers to improve and work with. And Intel isn't even bundling enough PCIe lanes direct from the CPU.

    Why can't we build something that is a little future-proof?
  • willis936 - Friday, March 14, 2014 - link

    Cost matters. The first thing they'll tell you in Economics 101 is that we live in a world with finite resources and infinite wants. There's a reason we don't all have i7 processors, 4K displays, and 780 GPUs right now. Thunderbolt completely missed its window for adoption because the cost vs. benefit wasn't there and OEMs didn't pick it up. Solutions will be made as the market wants them. The reason the connector is an ugly hack is so you can have the option of one high-bandwidth drive or multiple slower drives. It's not pretty, and I'd personally like to see it as just a phy/protocol stack that uses the PCIe connector with some autonegotiation to figure out whether it's a SATAe or PCIe device, but that might cause problems if PCIe doesn't already handle things like that.

    Your mobile connector will come, or rather is already here.
  • dszc - Saturday, March 15, 2014 - link

    Thanks Kristian. Great article.
    I vote for PCIe / NVMe / M.2. SATAe seems like a step in the wrong direction. Time to move on. SATA SSDs are great for backward compatibility to help a legacy system, but they seem a horrible way to design a new system. Too big. Too many cables. Too much junk. Too expensive. SATAe seems to be applying old thinking to new technology.
  • watersb - Sunday, March 16, 2014 - link

    I don't get the negative reactions in many of the comments.

    Our scientific workloads are disk-IO bound, rather than CPU-bound. The storage stack is ripe for radical simplification. SATAe is a step in that direction.
  • rs2 - Sunday, March 16, 2014 - link

    This will never fly. For one thing the connectors are too massive. Most high-end mainboards allow 6 to 8 SATA drives to be connected, and some enthusiasts use nearly that many. That will never be possible with the SATAe connector design; there's just not enough space on the board.

    And consuming two PCIe lanes per connector is the other limiting factor. It's a reasonable solution when you just need one or two ports, but in the 8-drive case you're talking about needing 16 extra lanes. Where are those meant to come from?
  • willis936 - Sunday, March 16, 2014 - link

    How many ssds do you plan to use at once? I can't think of a single use case where more than one ssd is needed, or even wanted if bandwidth isn't an issue. One ssd and several hard drives is certainly plausible. So there are 6 instead of 8 usable ports for hard drives. How terrible.
  • Shiitaki - Monday, March 17, 2014 - link

    So exactly what problem is this fixing? The problem of money; this is a pathetic attempt at licensing fees. SSD manufacturers could simply change the software and have their drives appear to the operating system as a PCIe-based SATA controller with a permanently attached drive TODAY. It would be genius to be able to plug a drive into a slot and be done with it. We don't need anything new. We already have mPCIe. Moving to an mPCIe x4 would have been a better idea. Then you could construct backplanes with the new mPCIe x4 connectors that aggregate and connect to a motherboard using a PCIe x8/x16 slot.

    This article covers the story of an organization fighting desperately not to disappear into the history books of the computer industry.
