Testing SATA Express

SATAe is not commercially available yet but ASUS sent us a pre-production unit of the SATA Express version of their Z87 Deluxe motherboard along with the necessary peripherals to test SATAe. This is actually the same motherboard as our 2014 SSD testbed but with added SATAe functionality.

Test Setup
CPU: Intel Core i7-4770K at 3.5GHz (Turbo & EIST enabled, C-states disabled)
Motherboard: ASUS Z87 Deluxe SATA Express (BIOS 1707)
Chipset: Intel Z87
Chipset Drivers: 9.4.0.1026
Storage Drivers: Intel RST 12.9.0.1001
Memory: Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T)
Graphics: Intel HD Graphics 4600
Graphics Drivers: 15.33.8.64.3345
Power Supply: Corsair RM750
OS: Windows 7 Ultimate 64-bit

Before we get into the actual tests, we would like to thank the companies that helped us put together our 2014 SSD testbed.

The ASUS Z87 Deluxe SATA Express has two SATAe ports: one routed from the Platform Controller Hub (PCH) and the other provided by an ASMedia ASM106SE chip. The ASM106SE is an unreleased chip, so there is no public information about it, and ASUS is very tight-lipped about the details. My guess is that it uses the same SATA 6Gbps design as other ASM106x chips but adds PCIe pass-through functionality to make it suitable for SATA Express.

I did a quick block diagram showing the storage side of the ASUS SATAe board we have. In total, four PCIe lanes are dedicated to SATAe, supporting up to two SATAe drives in addition to four SATA 6Gbps devices. Alternatively, you can have up to eight SATA 6Gbps devices if neither of the SATAe ports is operating in PCIe mode.
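To put those lanes in perspective, here is a back-of-the-envelope comparison of the raw link bandwidth involved. This is a minimal sketch using the standard PCIe 2.0 and SATA line rates with their 8b/10b encoding; these are theoretical ceilings, not measured figures:

# Back-of-the-envelope link bandwidth, ignoring protocol overhead other
# than the 8b/10b line encoding both links use.
PCIE2_LANE_MBS = 5000 * 8 / 10 / 8   # 5 GT/s per lane -> 500 MB/s usable
SATA6_MBS = 6000 * 8 / 10 / 8        # 6 Gbps -> 600 MB/s usable

print(f"SATA Express (PCIe 2.0 x2): {2 * PCIE2_LANE_MBS:.0f} MB/s")  # 1000 MB/s
print(f"SATA 6Gbps:                 {SATA6_MBS:.0f} MB/s")           # 600 MB/s

In other words, a SATAe port in PCIe mode has roughly two-thirds more raw bandwidth available than a SATA 6Gbps port.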

Since there are no SATAe drives available at this point, ASUS sent us a SATAe demo daughterboard along with the motherboard. The daughterboard itself is very simple: it has the same SATAe connector as the one on the motherboard, two Molex power inputs, a clock cable header, and a PCIe slot.

This is what the setup looks like in action (though as you can see, I took the motherboard out of the case since in-case photos didn't turn out well with the camera I have). The black and red cable is the external clock cable, which is only temporary and won't be needed with a final SATAe board.

The Tests

For testing I used Plextor's 256GB M6e, a PCIe 2.0 x2 SSD based on Marvell's new 88SS9183 PCIe controller. Plextor rates the M6e at up to 770MB/s read and 580MB/s write, so we should be able to reach the full potential of PCIe 2.0 x2. Additionally, I tested the SATA 6Gbps ports with a 256GB OCZ Vertex 450. I used the same sequential 128KB Iometer tests that we use in our SSD reviews, but I ramped the queue depth up to 32 to make sure we are looking at a maximum throughput situation.
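For readers who want to try something similar without Iometer, here is a minimal sketch of a sequential 128KB read throughput test. It is single-threaded (effectively queue depth 1, not the QD32 used in our runs) and the device path is a hypothetical placeholder, so treat it as an illustration of the workload rather than a reproduction of our test:

import os, time

PATH = "/dev/sdX"            # hypothetical target device; change before running
BLOCK = 128 * 1024           # 128KB transfer size, matching the Iometer test
TOTAL = 1024 * 1024 * 1024   # read 1GiB in total

fd = os.open(PATH, os.O_RDONLY)
done = 0
start = time.perf_counter()
while done < TOTAL:
    buf = os.read(fd, BLOCK)  # sequential 128KB reads from the start of the device
    if not buf:
        break
    done += len(buf)
elapsed = time.perf_counter() - start
os.close(fd)
print(f"{done / elapsed / 1e6:.1f} MB/s sequential read")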

Iometer—128KB Sequential Read (QD32)

There is no practical difference between a PCIe slot on the motherboard and PCIe that is routed through SATA Express. I'm a little surprised that there is absolutely no hit in performance (other than a negligible 1.5MB/s that's basically within the margin of error) because after all we are using cabling that should add latency. It seems that SATA-IO has been able to make the cabling efficient enough to transmit PCIe without additional overhead.

As for SATA 6Gbps, the performance is the same as well, which isn't surprising since only the connector is slightly different while electrically everything is the same. With the ASMedia chipset there is a ~25-27% reduction in performance, but that is in line with the previous ASMedia SATA 6Gbps chipsets I've seen. As I mentioned earlier, I doubt that the ASM106SE brings anything new to the SATA side of the controller, which is why I wasn't expecting more than 400MB/s. Generally you'll only get full SATA bandwidth from an Intel chipset or a higher-end SATA/RAID card.

Iometer—128KB Sequential Write (QD32)

The same goes for write performance. The only case where you are going to see a difference is if you connect to the ASMedia SATA 6Gbps port. I did run some additional benchmarks (like our performance consistency test) to see if a different workload would yield different results, but all my tests showed that SATAe in PCIe mode is as fast as a real PCIe slot, so I'm not going to post a bunch of additional graphs showing that the two are equivalent.

Comments (131)

  • frenchy_2001 - Friday, March 14, 2014 - link

    No, it does not. It adds latency, which is the delay before a command is received. The transfer speed stays the same, and unless your transmission depends on handshaking and verification and can block, latency is irrelevant.
    See the internet as a great example: satellite gives you high bandwidth (it can send a lot of data at a time) but awful latency (it takes seconds for the data to arrive).
    Since one point of these new technologies is to add a lot of queuing, latency becomes irrelevant, as there is always some data to send...
  • nutjob2 - Saturday, March 15, 2014 - link

    You're entirely incorrect. Speed is a combination of both latency and bandwidth and both are important, depending on how the data is being used.

    Your dismissal of latency because "there is always data to send" is delusional. That's just saying that if you're maxing out the bandwidth of your link then latency doesn't matter. Obviously. But in the real world disk requests are small and intermittent and not large enough to fill the link, unless you're running something like a database server doing batch processing. As the link speed gets faster (exactly what we're talking about here) and typical data request sizes stay roughly the same then latency becomes a larger part of the time it takes to process a request.

    Perceived and actual performance on most computers are very sensitive to disk latency since the disk link is the slowest link in the processing chain.
  • MrPoletski - Thursday, March 13, 2014 - link

    wait:
    by Kristian Vättö on March 13, 2014 7:00 AM EST

    It's currently March 13, 2014 6:38 AM EST - You got a time machine over at Anandtech?
  • Ian Cutress - Thursday, March 13, 2014 - link

    I think the webpage is in EDT now, but still says EST.
  • Bobs_Your_Uncle - Saturday, March 15, 2014 - link

    PRECISELY the point of Kristian's post. It's NOT a time machine in play, but rather the dramatic effects of reduced latency. (The other thing that happens is the battery in your laptop actually GAINS charge in such instances.)
  • mwarner1 - Thursday, March 13, 2014 - link

    The cable design, and especially its lack of power transmission, is even more short-sighted & hideous than that of the Micro-B USB3.0 cable.
  • 3DoubleD - Thursday, March 13, 2014 - link

    Agreed, what a terrible design. Not only is this cable a monster, but I can already foresee the slow and painful rollout of PCIe 2.0 SATAe when we should be skipping directly to PCIe 3.0 at this point.

    Also, the reasons given for needing faster SATA SSDs are sorely lacking. Why do we need this hideous connector when we already have PCIe SSDs? Plenty of laptop vendors are having no issue with the SATA bottleneck. I also question whether a faster, more power-hungry interface is actually better for battery life. The SSD doesn't always run at full speed when being accessed, so the battery life saved will be less than the 10 min calculated in the example... if not worse than the reference SATA3 case! And the very small number of people who edit 4K videos can get PCIe SSDs already.
  • DanNeely - Thursday, March 13, 2014 - link

    Blame Intel and AMD for only putting PCIe 2.0 on the southbridge chips that everything not called a GPU is connected to in consumer/enthusiast systems.
  • Kristian Vättö - Thursday, March 13, 2014 - link

    A faster SSD does not mean higher power consumption. The current designs could easily go above 550MB/s if SATA 6Gbps wasn't bottlenecking, so a higher power controller is not necessary in order to increase performance.
  • fokka - Thursday, March 13, 2014 - link

    I think what he meant is that while the actual workload may be processed faster and an idle state is reached sooner on a faster interface, the faster interface itself uses more power than SATA 6Gbps. So the question is how large the savings from the faster SSD are compared to the additional power consumption of the faster interface.
