Two weeks ago Marvell announced its first PCIe SSD controller with NVMe support, the 88SS1093. It supports a PCIe 3.0 x4 interface with up to 4GB/s of bandwidth between the controller and the host, although Marvell has yet to announce any actual performance specs. While PCIe 3.0 x4 is in theory capable of delivering 4GB/s, in our experience the efficiency of PCIe has been about 80%, so in reality I would expect peak sequential performance of around 3GB/s. There is no word on the channel count of the controller, but if history provides any guidance the 88SS1093 should feature eight NAND channels like its SATA siblings. Silicon-wise, the controller is built on a 28nm CMOS process and features three CPU cores.
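For the curious, here is a quick back-of-the-envelope sketch of that estimate (the 80% efficiency figure is our empirical observation rather than anything Marvell has specified):

```python
# Rough estimate of real-world PCIe 3.0 x4 sequential throughput.
# The 0.80 efficiency factor is an empirical assumption, not a spec.
signaling_gt_s = 8.0       # PCIe 3.0 signaling rate per lane (GT/s)
encoding = 128 / 130       # 128b/130b line encoding
lanes = 4
efficiency = 0.80          # observed protocol/implementation efficiency

raw_gb_s = signaling_gt_s * encoding * lanes / 8   # GB/s (bytes, not bits)
print(f"Raw link bandwidth:   {raw_gb_s:.2f} GB/s")               # ~3.94 GB/s
print(f"Expected peak (~80%): {raw_gb_s * efficiency:.2f} GB/s")  # ~3.15 GB/s
```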

The 88SS1093 supports 15nm MLC, TLC and 3D NAND, although I fully expect it to be compatible with Micron's and SK Hynix's 16nm NAND as well (i.e. 15nm is simply the smallest lithography it supports). TLC support is enabled by the use of LDPC error correction, which is part of Marvell's third-generation NANDEdge technology. Capacities of up to 2TB are supported, and the controller fits both 2.5" and M.2 designs thanks to its small package size and thermal optimization (or should I say throttling).

The 88SS1093 is currently sampling to Marvell's key customers, with product availability expected in 2015. Given how well Intel's SSD DC P3700 fared in our tests, I am excited to see more NVMe designs popping up. Marvell has long been known as the go-to controller source for many of the major SSD manufacturers (SanDisk and Micron/Crucial to name a couple), so the 88SS1093 will play an important part in bringing NVMe to the client market.

23 Comments

  • iwod - Thursday, August 21, 2014 - link

    Finally, I have been waiting for two weeks for Anandtech to report this! Since other places aren't much good at discussing it.

    "in our experience the efficiency of PCIe has been about 80%"

    What causes that? I am pretty sure PCIe has very low overhead.
    I think this will be the next SSD for any current SSD owner to upgrade to, since all current SSDs are practically limited by SATA. And maybe it's time for Apple to make their own firmware and SSD with this controller?
    Reply
  • Kristian Vättö - Thursday, August 21, 2014 - link

    "Finally, I have been waiting for two weeks for Anandtech to report this! Since other places aren't much good at discussing it."

    That's the reason why I'm not a big fan of live reporting at trade shows. As everyone is trying to be the first, I'd rather take my time and add some analysis instead of just rewriting the PR. Too bad I didn't have the chance to meet with Marvell at FMS, so my details are limited to the PR :/

    As for the PCIe efficiency, I'm not sure about that (yet). Based on my internal tests you can only get ~780MB/s out of a PCIe 2.0 x2 link and ~1560MB/s with x4, and Ryan, our GPU editor, confirmed similar efficiency with PCIe 3.0 in CUDA bandwidth tests.

    From what I have heard, there are ways to increase the maximum bandwidth (which is why SF3700 is rated at up to 1.8GB/s with PCIe 2.0 x4) by playing with PCIe clock settings but I have yet to try that. I will definitely investigate this once we have more PCIe SSDs shipping.
    Reply
  • repoman27 - Thursday, August 21, 2014 - link

    It's due to protocol overhead and is directly related to the TLP Maximum Payload Size. Each Transaction Layer Packet has either a 12 or 16 byte header depending on whether it uses 32-bit or 64-bit addressing, an optional ECRC which adds another 4 bytes, a 2 byte sequence number, an LCRC which uses 4 bytes, and another couple of bytes for framing. The TLPs are also interspersed with 8 byte Data Link Layer Packets at regular intervals. With a TLP Max Payload Size of 128 B, which is typical of current Intel desktop and mobile platforms, and assuming no retransmissions, that works out to a theoretical peak efficiency of 2560 bytes of payload for every 3112 bytes transferred, or ~82%. With larger maximum payload sizes, better efficiency can be achieved, up to 99% for a payload size of 4096 B.
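
    For anyone who wants to play with these numbers, here is a minimal sketch of that overhead math. The per-TLP overheads follow the breakdown above; amortizing one 8-byte DLLP over every five TLPs is my own simplifying assumption.

```python
# Approximate PCIe link efficiency as a function of TLP Max Payload Size.
# Overheads: 16-byte header (64-bit addressing), no ECRC, 2-byte sequence
# number, 4-byte LCRC, 2 bytes of framing. One 8-byte DLLP per five TLPs
# is a simplifying assumption for the interspersed Data Link Layer Packets.
def pcie_efficiency(max_payload, header=16, ecrc=0, seq=2, lcrc=4,
                    framing=2, dllp=8, tlps_per_dllp=5):
    wire_bytes = max_payload + header + ecrc + seq + lcrc + framing
    wire_bytes += dllp / tlps_per_dllp   # amortized DLLP cost per TLP
    return max_payload / wire_bytes

for mps in (128, 256, 512, 1024, 2048, 4096):
    print(f"MPS {mps:>4} B -> ~{pcie_efficiency(mps):.1%} efficiency")
# A 128 B payload lands in the low 80s, in line with the ~82% figure above,
# while a 4096 B payload approaches 99%.
```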

    I really hope this controller provides for more than 8 channels, seeing as you would need 16 channels running at north of 200 MB/s apiece to hit the 3240 MB/s that a PCIe 3.0 x4 link is capable of.
    Reply
  • Kristian Vättö - Thursday, August 21, 2014 - link

    Thanks for the detailed explanation, it makes a lot more sense now.

    Most of the currently available NAND already supports ONFI 3.0 or Toggle-Mode 2.0, which are good for up to 400MB/s per channel, so achieving 3GB/s should be possible even with an 8-channel design, as the quick check below shows.
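
    As a rough sanity check (assuming each channel can actually sustain something close to its full interface rate, which is optimistic for real NAND):

```python
# Can an 8-channel design reach ~3GB/s? Assumes each channel sustains
# close to its full ONFI 3.0 / Toggle-Mode 2.0 interface rate, which is
# an optimistic assumption for real NAND under all workloads.
channels = 8
per_channel_mb_s = 400     # ONFI 3.0 / Toggle-Mode 2.0 interface speed
target_mb_s = 3000         # ~3GB/s sequential target

aggregate = channels * per_channel_mb_s
print(f"Aggregate NAND bandwidth: {aggregate} MB/s")            # 3200 MB/s
print(f"Covers the ~3GB/s target: {aggregate >= target_mb_s}")  # True
```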
    Reply
  • repoman27 - Thursday, August 21, 2014 - link

    And a quick count shows the 88SS1093 package has 557 balls vs. 400 for the 88SS9187, 320 for the 88SS1074, and 289 for the 88NV9145. So it could be a design with more than 8 channels, or they actually expect the 400 MT/s NAND interfaces to deliver close to 400 MB/s.
    Reply
  • npz - Thursday, August 21, 2014 - link

    Most desktop BIOSes have actually given you the ability to set the TLP payload size up to 4KB for several years now, and the onboard chipset devices do support it. The only issues are add-on devices and switches, but all modern devices should support 4KB packets. Very few, however, support ECRC.
    Reply
  • micksh - Friday, August 22, 2014 - link

    The PCIe controller has been in the CPU for a long time now. Intel desktop processors support only a 128 byte TLP payload size, while server CPUs (E/EP series in the LGA2011 socket) support 256 bytes maximum.
    Reply
  • iwod - Friday, August 22, 2014 - link

    That is something I have been thinking about as well. We are running out of PCI Express lanes direct from the CPU: we need x4 for an SSD (direct-connected I/O!), x16 for the GPU, and a few more for other connectivity.
    Reply
  • DanNeely - Friday, August 22, 2014 - link

    Skylake is rumored to have 20 CPU lanes on its mass-market/consumer models to feed PCIe storage without getting in the way of the GPU.
    Reply
  • iwod - Saturday, August 23, 2014 - link

    Well, it looks like I will have to skip the Broadwell generation then. With this controller, PCIe-based SSDs and NVMe, I think the bottleneck will shift somewhere else. Hopefully the software side (OS/filesystem) will catch up to take advantage of it soon.
    Reply
