Testing SATA Express

SATAe is not commercially available yet but ASUS sent us a pre-production unit of the SATA Express version of their Z87 Deluxe motherboard along with the necessary peripherals to test SATAe. This is actually the same motherboard as our 2014 SSD testbed but with added SATAe functionality.

Test Setup
CPU: Intel Core i7-4770K at 3.5GHz (Turbo & EIST enabled, C-states disabled)
Motherboard: ASUS Z87 Deluxe SATA Express (BIOS 1707)
Chipset: Intel Z87
Chipset Drivers: 9.4.0.1026
Storage Drivers: Intel RST 12.9.0.1001
Memory: Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T)
Graphics: Intel HD Graphics 4600
Graphics Drivers: 15.33.8.64.3345
Power Supply: Corsair RM750
OS: Windows 7 Ultimate 64-bit

Before we get into the actual tests, we would like to thank the following companies for helping us with our 2014 SSD testbed.

The ASUS Z87 Deluxe SATA Express has two SATAe ports: one routed from the Platform Controller Hub (PCH) and the other provided by an ASMedia ASM106SE chip. The ASMedia is an unreleased chip, hence there is no information to be found about it and ASUS is very tight-lipped about the whole thing. I'm guessing we are dealing with the same SATA 6Gbps design as other ASM106x chips but with added PCIe pass-through functionality to make the chip suitable for SATA Express.

I did a quick block diagram that shows the storage side of the ASUS SATAe board we have. Basically there are four lanes in total dedicated to SATAe with support for up to two SATAe drives in addition to four SATA 6Gbps devices. Alternatively you can have up to eight SATA 6Gbps devices if neither of the SATAe ports is operating in PCIe mode.
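To make the lane and port arithmetic concrete, here is a minimal Python sketch (my own illustration, not anything from ASUS) that enumerates the possible configurations, assuming each SATAe port either consumes its two PCIe lanes in PCIe mode or falls back to two SATA 6Gbps ports in legacy mode:

```python
# Hypothetical model of the SATAe port mux on this board: each SATAe port is
# either one PCIe x2 drive or two SATA 6Gbps ports, on top of the four
# dedicated SATA 6Gbps ports that are always available.
from itertools import product

DEDICATED_SATA_PORTS = 4   # regular SATA 6Gbps ports
SATAE_PORTS = 2            # SATAe connectors, two PCIe lanes each

for modes in product(("pcie", "sata"), repeat=SATAE_PORTS):
    pcie_drives = modes.count("pcie")                       # one SATAe drive per port in PCIe mode
    sata_devices = DEDICATED_SATA_PORTS + 2 * modes.count("sata")
    print(f"{modes}: {pcie_drives} SATAe drive(s) + {sata_devices} SATA 6Gbps device(s)")
```

Running through the combinations gives exactly the two extremes described above: two SATAe drives plus four SATA 6Gbps devices with both ports in PCIe mode, or eight SATA 6Gbps devices with both ports in legacy mode.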

Since there are no SATAe drives available at this point, ASUS sent us a SATAe demo daughterboard along with the motherboard. The daughterboard itself is very simple: it has the same SATAe connector as found in the motherboard, two molex power inputs, a clock cable header, and a PCIe slot.

This is what the setup looks like in action (though as you can see, I took the motherboard out of the case since photos taken inside the case didn't turn out well with the poor camera I have). The black and red cable is the external clock cable, which is only temporary and won't be needed with a final SATAe board.

The Tests

For testing I used Plextor's 256GB M6e PCIe SSD, which is a PCIe 2.0 x2 SSD with Marvell's new 88SS9183 PCIe controller. Plextor rates the M6e at up to 770MB/s read and 580MB/s write, so we should be capable of reaching the full potential of PCIe 2.0 x2. Additionally I tested the SATA 6Gbps ports with a 256GB OCZ Vertex 450. I used the same sequential 128KB Iometer tests that we use in our SSD reviews but I ramped up the queue depth to 32 to make sure we are looking at a maximum throughput situation.
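As a sanity check on what the interfaces can deliver, here is a quick back-of-the-envelope calculation of the raw link bandwidth after 8b/10b encoding (standard PCIe 2.0 and SATA line rates, not vendor-supplied numbers); the M6e's rated 770MB/s sits comfortably under the PCIe 2.0 x2 ceiling:

```python
# Rough theoretical throughput ceilings after 8b/10b line encoding.
# Real-world numbers are lower due to protocol and controller overhead.
def encoded_bandwidth_mbps(gt_per_s, lanes=1, encoding=8 / 10):
    bits_per_s = gt_per_s * 1e9 * lanes * encoding
    return bits_per_s / 8 / 1e6   # MB/s

pcie2_x2 = encoded_bandwidth_mbps(5.0, lanes=2)   # PCIe 2.0: 5 GT/s per lane
sata6 = encoded_bandwidth_mbps(6.0)               # SATA 6Gbps: 6 GT/s, single link

print(f"PCIe 2.0 x2: ~{pcie2_x2:.0f} MB/s raw")   # ~1000 MB/s
print(f"SATA 6Gbps:  ~{sata6:.0f} MB/s raw")      # ~600 MB/s (~550 MB/s in practice)
```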

Iometer—128KB Sequential Read (QD32)

There is no practical difference between a PCIe slot on the motherboard and PCIe that is routed through SATA Express. I'm a little surprised that there is absolutely no hit in performance (other than a negligible 1.5MB/s that's basically within the margin of error) because after all we are using cabling that should add latency. It seems that SATA-IO has been able to make the cabling efficient enough to transmit PCIe without additional overhead.
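For a sense of scale on how little latency a cable can add, here is a short estimate assuming roughly 5 ns of propagation delay per metre of copper (a commonly cited ballpark, not a measured figure for this cable):

```python
# Rough estimate of the one-way propagation delay added by a SATAe cable,
# assuming ~5 ns per metre (signals travel at roughly 2/3 the speed of light
# in copper).
PROPAGATION_NS_PER_M = 5

def cable_delay_us(length_m):
    return length_m * PROPAGATION_NS_PER_M / 1000  # microseconds

for length in (0.5, 1.0, 2.0):
    print(f"{length} m cable: ~{cable_delay_us(length):.4f} µs one-way")

# Even a 2 m cable adds only ~0.01 µs, versus NAND read latencies measured in
# tens of microseconds, so the cabling is lost in the noise.
```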

As for SATA 6Gbps, the performance is the same as well, which isn't surprising since only the connector is slightly different while electrically everything is the same. With the ASMedia chipset there is a ~25-27% reduction in performance, but that is in line with the previous ASMedia SATA 6Gbps chipsets I've seen. As I mentioned earlier, I doubt that the ASM106SE brings anything new to the SATA side of the controller, which is why I wasn't expecting more than 400MB/s. Generally you'll only get full SATA bandwidth from an Intel chipset or a higher-end SATA/RAID card.
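As a quick consistency check on those figures (the ~545MB/s Intel baseline below is an assumption, typical of a fast SSD on an Intel 6Gbps port rather than a number pulled from the chart):

```python
# A 25-27% reduction from a ~545 MB/s Intel baseline lands right around the
# ~400 MB/s ceiling seen on earlier ASM106x controllers.
intel_baseline_mbps = 545  # assumed typical Intel SATA 6Gbps result
for reduction in (0.25, 0.27):
    print(f"{reduction:.0%} reduction -> ~{intel_baseline_mbps * (1 - reduction):.0f} MB/s")
```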

Iometer—128KB Sequential Write (QD32)

The same goes for write performance. The only case where you'll see a difference is if you connect to the ASMedia SATA 6Gbps port. I did run some additional benchmarks (like our performance consistency test) to see if a different workload would yield different results, but all my tests showed that SATAe in PCIe mode is as fast as a real PCIe slot, so I'm not going to post a bunch of additional graphs showing that the two are equivalent.

Comments

  • Khenglish - Thursday, March 13, 2014 - link

    That 2.8 µs you found is driver interface overhead from an interface that doesn't even exist yet. You need to add this to the access latency of the drive itself to get the real latency.

    Real world SSD read latency for tiny 4K data blocks is roughly 900 µs on the fastest drives.

    It would take an 18000 meter cable to add even 10% to that.
  • willis936 - Thursday, March 13, 2014 - link

    Show me a consumer phy that can transmit 8Gbps over 100m on cheap copper and I'll eat my hat.
  • Khenglish - Thursday, March 13, 2014 - link

    The problem with long cables is attenuation, not latency. Cables can only be around 50 m long before you need a repeater.
  • mutercim - Friday, March 14, 2014 - link

    Electrons have mass, they can't ever travel at the speed of light, no matter the medium. The signal itself would move at the speed of light (in vacuum), but that's a different thing.

    /pedantry
  • Visual - Friday, March 14, 2014 - link

    It's a common misconception, but electrons don't actually need to travel the length of the cable for a signal to travel through it.
    In layman's terms, you don't need to send an electron all the way to the other end of the cable, you just need to make the electrons that are already there react in a certain way as to register a required voltage or current.
    So a signal is a change in voltage, or a change in the electromagnetic fields, and that travels at the speed of light (no, not in vacuum, in that medium).
  • AnnihilatorX - Friday, March 14, 2014 - link

    Just to clarify, it is like pushing a tube full of tennis balls from one end. Assuming the tennis balls are all rigid so deformation is negligible, the 'cause and effect' making the tennis ball on the other end move will travel at the speed of light.
  • R3MF - Thursday, March 13, 2014 - link

    having 24x PCIe 3.0 lanes on AMD's Kaveri looks pretty far-sighted right now.
  • jimjamjamie - Thursday, March 13, 2014 - link

    if they got their finger out with a good x86 core the APUs would be such an easy sell
  • MrSpadge - Thursday, March 13, 2014 - link

    Re: "Why Do We Need Faster SSDs"

    Your power consumption argument ignores one fact: if you use the same controller, NAND and firmware, it costs you x Wh to perform a read or write operation. If you simply increase the interface speed and hence perform more of these operations per unit time, you also increase the energy required per unit time, i.e. power consumption. In your example the faster SSD wouldn't continue to draw 3 W with the faster interface: assuming a 30% throughput increase, expecting a power draw of 4 W would be reasonable.

    Obviously there are also system components actively waiting for that data. So if the data arrives faster (due to lower latency & higher throughput) they can finish the task quicker and race to sleep. This counterbalances some of the actual NAND power draw increases, but won't negate it completely.
  • Kristian Vättö - Thursday, March 13, 2014 - link

    "If you simply increase the interface speed and hence perform more of these operations per time, you also increase the energy required per time, i.e. power consumption."

    The number of IO operations is a constant here. A faster SSD does not mean that the overall number of operations will increase because ultimately that's up to the workload. Assuming that is the same in both cases, the faster SSD will complete the IO operations faster and will hence spend more time idling, resulting in less power drawn in total.

    Furthermore, a faster SSD does not necessarily mean higher power draw. As the graph on page one shows, PCIe 2.0 increases baseline power consumption by only 2% compared to SATA 6Gbps. Given that SATA 6Gbps is a bottleneck in current SSDs, more processing power (and hence more power) is not required to make a faster SSD. You are right that it may result in higher NAND power draw, though, because the controller will be able to take better advantage of parallelism (more NAND in use = more power consumed).

    I understand the example is not perfect as in real world the number of variables is through the roof. However, the idea was to debunk the claim that PCIe SSDs are just a marketing trick -- they are that too but ultimately there are gains that will reach the average user as well.
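To put a rough number on the race-to-idle point debated in the thread above, here is a minimal back-of-the-envelope sketch; the wattages, throughputs and workload size are illustrative assumptions, not measured figures from either side of the discussion:

```python
# Energy to move a fixed amount of data: a faster interface finishes sooner
# and lets the drive drop to idle, so total energy can fall even if active
# power rises. All numbers below are illustrative assumptions.
WORKLOAD_MB = 1000          # fixed amount of data to read
IDLE_W = 0.05               # assumed idle power once the transfer is done

def energy_joules(throughput_mbps, active_w, window_s=3.0):
    """Energy over a fixed observation window: active phase plus idle remainder."""
    active_s = WORKLOAD_MB / throughput_mbps
    idle_s = max(window_s - active_s, 0)
    return active_w * active_s + IDLE_W * idle_s

sata_j = energy_joules(throughput_mbps=550, active_w=3.0)   # SATA 6Gbps-class SSD
pcie_j = energy_joules(throughput_mbps=770, active_w=4.0)   # faster PCIe SSD, higher active power

print(f"SATA-class: {sata_j:.2f} J  |  PCIe-class: {pcie_j:.2f} J")
```

With these particular assumptions the faster drive finishes sooner and comes out slightly ahead in total energy despite drawing more power while active; change the active-power penalty or the idle draw and the balance shifts, which is why both positions in the thread are defensible.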
