Testing SATA Express

SATAe is not commercially available yet, but ASUS sent us a pre-production unit of the SATA Express version of their Z87 Deluxe motherboard along with the necessary peripherals to test SATAe. This is actually the same motherboard as our 2014 SSD testbed but with added SATAe functionality.

Test Setup
CPU: Intel Core i7-4770K at 3.5GHz (Turbo & EIST enabled, C-states disabled)
Motherboard: ASUS Z87 Deluxe SATA Express (BIOS 1707)
Chipset: Intel Z87
Chipset Drivers: 9.4.0.1026
Storage Drivers: Intel RST 12.9.0.1001
Memory: Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T)
Graphics: Intel HD Graphics 4600
Graphics Drivers: 15.33.8.64.3345
Power Supply: Corsair RM750
OS: Windows 7 Ultimate 64-bit

Before we get into the actual tests, we would like to thank the following companies for helping us with our 2014 SSD testbed.

The ASUS Z87 Deluxe SATA Express has two SATAe ports: one routed from the Platform Controller Hub (PCH) and the other provided by an ASMedia ASM106SE chip. The ASMedia part is an unreleased chip, so there is no information to be found about it, and ASUS is very tight-lipped about the whole thing. I'm guessing we are dealing with the same SATA 6Gbps design as in other ASM106x chips, but with added PCIe pass-through functionality to make the chip suitable for SATA Express.

I put together a quick block diagram that shows the storage side of the ASUS SATAe board we have. There are four PCIe lanes in total dedicated to SATAe, with support for up to two SATAe drives in addition to four SATA 6Gbps devices. Alternatively, you can have up to eight SATA 6Gbps devices if neither of the SATAe ports is operating in PCIe mode.

Since there are no SATAe drives available at this point, ASUS sent us a SATAe demo daughterboard along with the motherboard. The daughterboard itself is very simple: it has the same SATAe connector as found in the motherboard, two molex power inputs, a clock cable header, and a PCIe slot.

This is what the setup looks like in action (though as you can see, I took the motherboard out of the case since inside case photos didn't turn out so well with the poor camera I have). The black and red cable is the external clock cable, which is only temporary and won't be needed with a final SATAe board.

The Tests

For testing I used Plextor's 256GB M6e PCIe SSD, which is a PCIe 2.0 x2 SSD with Marvell's new 88SS9183 PCIe controller. Plextor rates the M6e at up to 770MB/s read and 580MB/s write, so we should be capable of reaching the full potential of PCIe 2.0 x2. Additionally I tested the SATA 6Gbps ports with a 256GB OCZ Vertex 450. I used the same sequential 128KB Iometer tests that we use in our SSD reviews but I ramped up the queue depth to 32 to make sure we are looking at a maximum throughput situation.
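As a sanity check on those rated speeds, the ceilings of the interfaces involved can be derived from their line rates and encoding overhead. A minimal sketch (the helper function and its name are ours, not part of Iometer or any vendor tool):

```python
def max_throughput_mb_s(gt_per_s, lanes, payload_bits, frame_bits):
    """Theoretical payload bandwidth in MB/s: raw line rate times lane
    count, scaled by the encoding efficiency (payload bits / frame bits)."""
    usable_bits_per_s = gt_per_s * 1e9 * lanes * (payload_bits / frame_bits)
    return usable_bits_per_s / 8 / 1e6  # bits -> bytes -> MB

# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding -> 500 MB/s per lane,
# so an x2 link tops out at 1000 MB/s, comfortably above the M6e's
# rated 770 MB/s read.
pcie2_x2 = max_throughput_mb_s(5, 2, 8, 10)   # 1000.0
# SATA 6Gbps: a single 6 Gb/s link with the same 8b/10b encoding.
sata_6g = max_throughput_mb_s(6, 1, 8, 10)    # 600.0
```

The same helper shows why QD32 sequential 128KB transfers are enough to saturate either link with these drives.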

Iometer—128KB Sequential Read (QD32)

There is no practical difference between a PCIe slot on the motherboard and PCIe that is routed through SATA Express. I'm a little surprised that there is absolutely no hit in performance (other than a negligible 1.5MB/s difference that's basically within the margin of error) because, after all, we are using cabling that could be expected to add latency. It seems that SATA-IO has been able to make the cabling efficient enough to transmit PCIe without additional overhead.

As for SATA 6Gbps, the performance is the same as well, which isn't surprising since only the connector is slightly different while electrically everything is the same. With the ASMedia chip there is a ~25-27% reduction in performance, but that is in line with the previous ASMedia SATA 6Gbps chipsets I've seen. As I mentioned earlier, I doubt that the ASM106SE brings anything new to the SATA side of the controller, which is why I wasn't expecting more than 400MB/s. Generally you'll only get full SATA bandwidth from an Intel chipset or a higher-end SATA/RAID card.
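To put those figures in perspective, here is a back-of-the-envelope check (our own arithmetic, not data pulled from the graphs) of where the SATA 6Gbps ceiling sits:

```python
# SATA 6Gbps uses 8b/10b encoding, so only 80% of the 6 Gb/s line rate
# carries payload.
line_rate_bits_per_s = 6e9
encoding_efficiency = 8 / 10
sata_ceiling_mb_s = line_rate_bits_per_s * encoding_efficiency / 8 / 1e6

# 600 MB/s is the encoded-link ceiling; framing and protocol overhead
# leave roughly 550-560 MB/s on a good (e.g. Intel) controller, so the
# ~400 MB/s seen on the ASMedia port reflects the controller, not the link.
```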

Iometer—128KB Sequential Write (QD32)

The same goes for write performance. The only case where you are going to see a difference is if you connect to the ASMedia SATA 6Gbps port. I did run some additional benchmarks (like our performance consistency test) to see if a different workload would yield different results, but all my tests showed that SATAe in PCIe mode is as fast as a real PCIe slot, so I'm not going to post a bunch of additional graphs showing that the two are equivalent.

Comments

  • Kristian Vättö - Tuesday, March 18, 2014 - link

    Bear in mind that SATA-IO is not just some random organization that does standards for fun - it consists of all the players in the storage industry. The current board has members from Intel, Marvell, HP, Dell, SanDisk etc...
  • BMNify - Thursday, March 20, 2014 - link

    indeed, and yet it's now clear these and the other design-by-committee organizations are no longer fit for purpose, producing far too little far too late....

    ARM IP =the generic current CoreLink CCN-508 that can deliver up to 1.6 terabits of sustained usable system bandwidth per second with a peak bandwidth of 2 terabits per second (256 GigaBytes/s) at processor speeds scaling all the way up to 32 processor cores total.

    Intel IP QPI = Intel's Knights Landing Xeon Phi, due in 2015, with its antiquated QPI interconnect and its expected ultra short-reach (USR) interconnection of only up to 500MB/s data throughput, seems a little/lot short on real data throughput by then...
  • Hrel - Monday, March 17, 2014 - link

    Cost: Currently PCI-E SSDs are inexplicably expensive. If this is gonna be the same way it won't sell no matter how many PCI-E lanes Intel builds into its chipset. My main concern with using the PCI-E bus is cost. Can someone explain WHY those cost so much more? Is it just the niche market or is there an actual legitimate reason for it? Like, are PCI-E controllers THAT much harder to create than SATA ones?

    I doubt that's the case very much. If it is then I guess prices will drop as that gets easier but for now they've priced themselves out of competition.

    Why would I buy a 256GB SSD on PCI-E for $700 when I can buy a 256GB SSD on SATA for $120? That shit makes absolutely no sense. I could see like a 10-30% price premium, no more.
  • BMNify - Tuesday, March 18, 2014 - link

    "Can someone explain WHY those cost so much more?"
    greed...
    due mostly to "not invented here"; that is the reason we are not yet using a version of Everspin's MRAM 240-pin, 64MByte DIMM with x72 configuration with ECC, for instance http://www.everspin.com/image-library/Everspin_Spi...

    it can be packaged for any of the above forms, M2 etc., too, rather than have motherboard vendors put extra ddr3 ram slots dedicated to this ddr3-slot-compatible Everspin MRAM today, with the needed extra ddr3 ram controllers included in any CPU/SoC....

    rather than licence this existing (for 5 years) commercial MRAM product and collaborate together to make and improve the yield and help them shrink it down to 45nm to get it below all of today's fastest dram speeds etc, they all want an invented-here product and will make the world markets wait for no good reason...
  • Kristian Vättö - Tuesday, March 18, 2014 - link

    Because most PCIe SSDs (the Plextor M6e being an exception) are just two or four SATA SSDs sitting behind a SATA to PCIe bridge. There is added cost from the bridge chip and the additional controllers, although the main reason is the laws of economics. Retail PCIe SSDs are low volume because SATA is still the dominant interface, and that increases production costs for the OEMs. Low order quantities are also more expensive for the retailers.

    In short, OEMs are just trying to milk enthusiasts with PCIe drives, but once we see PCIe entering the mainstream market, you'll no longer have to pay extra for them (e.g. the SF3700 combines SATA and PCIe in a single chip, so PCIe isn't more expensive with it).
  • Ammohunt - Thursday, March 20, 2014 - link

    Disappointed there wasn't a SAS offering compared; 6G SAS != 6G SATA
  • jseauve - Thursday, March 20, 2014 - link

    Awesome computer
  • westfault - Saturday, March 22, 2014 - link

    "The SandForce, Marvell, and Samsung designs are all 2.0 but at least OCZ is working on a 3.0 controller that is scheduled for next year."

    When you say OCZ is developing on a PCIe 3.0 controller do you mean that they were working on one before they were purchased by Toshiba, or was this announced since they were acquired by Toshiba? I understand that Toshiba has kept the OCZ name, but is it certain that they have continued all R&D from before OCZ's bankruptcy?
  • dabotsonline - Monday, April 28, 2014 - link

    Roll on SATAe with PCIe 4.0, let alone 3.0 next year!
  • MRFS - Tuesday, January 20, 2015 - link

    I've felt the same way about SATAe and PCIe SSDs --
    kludgy and expensive, respectively.

    Given the roadmaps for PCIe 3.0 and 4.0, it makes sense to me
    to "sync" SATA and SAS storage with 8G and 16G transmission clocks
    and the 128b/130b "jumbo frame" now implemented in the PCIe 3.0 standard.

    Ideally, end users will have a choice of clock speeds, perhaps with pre-sets:
    6G, 8G, 12G and 16G.

    In actual practice now, USB 3.1 uses a 10G clock and a 128b/132b jumbo frame:

    132 bits / 16 bytes = 8.25 bits per byte, using the USB 3.1 jumbo frame

    max headroom = 10G / 8.25 bits per byte = 1.212 GB/second

    To save a lot of PCIe motherboards, which are designed for expansion,
    PCIe 2.0 and 3.0 expansion slots can be populated with cards
    which implement 8G clocks and 128b/130b jumbo frames.

    That one evolutionary change should put pressure on SSD manufacturers
    to offer SSDs with support for both features.

    Why "SATA-IV" does not already sync with PCIe 3.0 is anybody's guess.

    We tried to discuss this with the SATA-IO folks many moons ago,
    but they were quite committed to their new SATAe connector. UGH!
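The encoding arithmetic in the comment above can be double-checked with a short script (the function and variable names are ours, purely illustrative):

```python
def effective_gb_per_s(line_rate_gbit, frame_bits, payload_bits=128):
    """Payload bandwidth in GB/s: line rate divided by the effective
    bits-per-byte cost of the encoding frame."""
    bits_per_byte = frame_bits / (payload_bits / 8)  # e.g. 132 / 16 = 8.25
    return line_rate_gbit / bits_per_byte

# USB 3.1 Gen 2: 10G clock, 128b/132b frame -> 10 / 8.25 ~= 1.212 GB/s
usb31 = effective_gb_per_s(10, 132)
# PCIe 3.0: 8G clock, 128b/130b frame -> 8 / 8.125 ~= 0.985 GB/s per lane
pcie3 = effective_gb_per_s(8, 130)
```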
