At OCP Summit last week, Marvell unveiled a new generation of NVMe SSD controllers and a unique NVMe switch that blurs the lines between a standard PCIe switch and a traditional RAID controller.

The most straightforward of Marvell's three new NVMe products is the 88SS1098 "Zao" NVMe SSD controller. With 8 NAND channels and a PCIe 3 x4 host interface, the 88SS1098 is the direct successor to the 88SS1093 "Eldora" and 88SS1092 "Eldora Plus" controllers for high-end client SSDs and low-end enterprise NVMe SSDs. Performance increases will be enabled primarily by the introduction of Marvell's fourth-generation LDPC error correction engine with improved throughput, and by a 50% increase in the maximum supported NAND interface speed. The new controller also adds a fourth ARM Cortex-R4 CPU core. The DRAM controller has been upgraded to support LPDDR4, which should help reduce power consumption of M.2 NVMe SSDs. Up to 8GB of DRAM is supported, matching the 88SS1092's primary advantage over the 88SS1093, which is limited to 2GB of DRAM.

A number of new enterprise-oriented features have been added to Marvell's latest NVMe controllers. The PCIe host interface can be operated as a dual-port pair of PCIe 3 x2 links for high availability. SR-IOV virtualization support is included with support for up to 64 virtual instances of the controller. The new NVMe Streams and IO determinism features are supported, allowing drives to offer improved performance and endurance under mixed workloads when the host system's NVMe drivers support those features. Even without streams and IO determinism, latency QoS is improved by the migration of more controller functionality from the CPU cores to dedicated fixed-function hardware, and better support for suspending in-progress NAND program operations to perform a quicker read operation. The new controllers support a total of 132 queue pairs, each with up to 256 I/O commands in flight, so the controller can handle intense workloads and won't require host systems to share queues between CPU cores. QLC NAND is officially supported.
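The queue-pair count matters because host NVMe drivers generally try to dedicate one I/O queue pair to each CPU core, falling back to sharing only when the controller offers too few. A rough sketch of that allocation (simplified for illustration; real drivers such as Linux's nvme-pci also handle interrupt affinity and NUMA placement):

```python
# Rough sketch of per-core NVMe I/O queue assignment. This is not the
# actual kernel logic; it just shows why a large queue-pair count lets
# every core submit I/O without contending on a shared queue.

def assign_queues(num_cores: int, num_io_queues: int) -> dict[int, int]:
    """Map each CPU core to an I/O queue, sharing queues only if needed."""
    return {core: core % num_io_queues for core in range(num_cores)}

# With well over 64 I/O queue pairs available, a 64-core host gets one
# queue per core -- no cross-core locking on the submission queue.
mapping = assign_queues(num_cores=64, num_io_queues=128)
shared = len(mapping) - len(set(mapping.values()))
print(f"cores sharing a queue: {shared}")  # 0
```

With a smaller controller (say, 8 I/O queues) the same host would have 8 cores per queue, and every submission would involve synchronization between cores.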


Marvell 88SS1088 Block Diagram

The high-end alternative in Marvell's new controller generation is the new 88SS1088 controller. This is a dual-chip controller solution, essentially a pair of 88SS1098 controllers with a dedicated 4GB/s link between them. Only one of the two controller chips connects to the host system, so drives based around the 88SS1088 solution will still be limited to a PCIe 3 x4 interface or dual-port PCIe 3 x2. However, the total supported DRAM and NAND capacity is doubled, allowing for 16GB of DRAM and 16TB of NAND spread across a total of 16 channels.
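The 16GB DRAM ceiling tracks the usual flash translation layer rule of thumb of roughly 1GB of mapping table per 1TB of NAND: a 4-byte pointer for every 4KiB logical page. This is a generic FTL sizing sketch, not a description of Marvell's firmware:

```python
# Generic FTL mapping-table sizing: one 4-byte entry per 4 KiB logical
# page. A rule-of-thumb sketch, not the 88SS1088's actual data structures.

ENTRY_BYTES = 4     # size of one logical-to-physical map entry
PAGE_BYTES = 4096   # logical page size covered by each entry

def ftl_map_bytes(nand_bytes: int) -> int:
    """DRAM needed for a flat logical-to-physical map over this much NAND."""
    return nand_bytes // PAGE_BYTES * ENTRY_BYTES

nand = 16 * 10**12  # 16 TB of NAND behind the dual-chip solution
print(f"{ftl_map_bytes(nand) / 2**30:.1f} GiB of map")  # ~14.6 GiB
```

A flat map for 16TB fits comfortably in 16GB of DRAM, with room left over for write buffers and firmware metadata.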

Drives using this dual-chip controller solution will appear to the host system as a single NVMe device, so the host will not have to implement its own software RAID-0 as required by products like the Intel P3608 that are essentially two separate SSDs sharing one circuit board. Performance from the 88SS1088 will be a bit higher than from the 88SS1098, but the PCIe link will be a significant bottleneck at high queue depths. The dual-chip design likely puts the 88SS1088 at a disadvantage relative to a monolithic single-die 16-channel controller, but without a wider host interface the difference is unlikely to matter, and the 88SS1088 should be more economical than a massive native 16-channel chip. Marvell claims the 88SS1098 can deliver up to 700k random read IOPS and sequential reads of up to 3.6 GB/s, while the 88SS1088 can do about 800k random read IOPS.
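A back-of-the-envelope calculation from the PCIe 3.0 link parameters (8 GT/s per lane, 128b/130b encoding) shows how little headroom an x4 link leaves above the 88SS1098's claimed 3.6 GB/s, and why a second controller chip adds little on sequential transfers:

```python
# Raw one-direction PCIe 3.0 link bandwidth; real-world throughput is a
# bit lower once packet headers and protocol overhead are accounted for.

TRANSFER_RATE = 8e9   # 8 GT/s per lane (PCIe 3.0)
ENCODING = 128 / 130  # 128b/130b line code

def pcie3_gbps(lanes: int) -> float:
    """Approximate one-direction bandwidth of a PCIe 3.0 link, in GB/s."""
    return lanes * TRANSFER_RATE * ENCODING / 8 / 1e9

print(f"x4: {pcie3_gbps(4):.2f} GB/s")  # ~3.94 GB/s: barely above 3.6 GB/s
print(f"x8: {pcie3_gbps(8):.2f} GB/s")
```

At ~3.94 GB/s raw, a PCIe 3 x4 link is nearly saturated by a single 88SS1098; the dual-chip solution's gains therefore show up mostly in random IOPS, not sequential throughput.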

The 88SS1098 and 88SS1088 use a common architecture for firmware, so drive makers can easily re-use most of their firmware code between those two controllers and future controllers in this generation from Marvell. No other controllers have been announced yet, but Marvell is certain to introduce at least one low-end controller for entry-level client SSDs, and larger controllers are also a possibility if the 88SS1088's dual-controller design isn't enough to satisfy high-end enterprise demand.

Introducing NVMe Switches

In addition to a new generation of NVMe SSD controllers, Marvell has introduced a new category of NVMe chipset. The new Marvell 88NR2241 Intelligent NVMe Switch can replace some uses of PCIe switches: it has a PCIe x8 host interface, and up to four NVMe SSDs can be connected behind the switch. Unlike a regular PCIe switch, the SSDs behind the 88NR2241 switch are not individually accessible to the host system. Instead, the switch itself implements the NVMe 1.3 protocol and provides abstraction of the individual SSDs behind the switch. The storage of each of the individual SSDs can be presented to the host system as a separate namespace on the switch's NVMe controller, or the storage can be pooled with RAID 0, 1, and 10 modes. Because the switch is the endpoint for NVMe transactions with the host system, it can provide advanced features that may not be supported by the individual SSDs, such as redundant dual-port support, multiple namespaces, SR-IOV virtualization, and NVMe-MI management support. The 88NR2241 implements the NVMe protocol but doesn't do any of the hard work of NAND flash management, so it is a very low-overhead intermediary and does not require any external DRAM. It may add slightly more latency than a simple PCIe switch would, but that extra latency is unlikely to matter to flash-based SSDs. Marvell has measured random read speeds of 1.6M IOPS and sequential speeds of 6.4GB/s using a PCIe 3 x8 host interface.
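The striped (RAID 0) mode boils down to a simple address translation inside the switch: each host logical block is steered to one member drive. A minimal Python sketch, with stripe size and drive count chosen purely for illustration (Marvell hasn't published the 88NR2241's internals):

```python
# Minimal sketch of the address translation a RAID-0 NVMe switch performs:
# each host LBA maps to (drive index, drive-local LBA). Stripe size and
# drive count are assumptions for illustration, not 88NR2241 details.

STRIPE_BLOCKS = 32   # blocks per stripe chunk (assumed)
NUM_DRIVES = 4       # up to four SSDs sit behind the switch

def map_lba(host_lba: int) -> tuple[int, int]:
    """Map a host logical block to (drive, drive-local block) under RAID 0."""
    chunk, offset = divmod(host_lba, STRIPE_BLOCKS)
    drive = chunk % NUM_DRIVES
    drive_lba = (chunk // NUM_DRIVES) * STRIPE_BLOCKS + offset
    return drive, drive_lba

# Consecutive chunks rotate across the four member drives:
print(map_lba(0))    # -> (0, 0)
print(map_lba(32))   # -> (1, 0)
print(map_lba(130))  # -> (0, 34)
```

Because this translation is pure arithmetic with no flash management behind it, the switch needs no DRAM-resident mapping table, which is consistent with Marvell's claim of a low-overhead, DRAM-less design.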


Marvell 88NR2241 development board

The Marvell 88NR2241 can work in front of any standard NVMe SSDs, even those using competitors' controllers. One use case Marvell envisions is creating enterprise-grade storage solutions out of consumer-class NVMe controllers and SSDs. The Marvell 88NR2241 has relatively small port and lane counts compared to the large fan-out PCIe switches used in many servers, and Marvell's switch is not aimed at entirely replacing those large switches. Instead, the 88NR2241 is likely to be used in smaller hot-swappable storage modules that may still connect to a fan-out switch, or directly to the CPU on platforms that have plenty of PCIe lanes. The 88NR2241 may be used with several SSD controllers on larger SSD form factors like the new EDSFF Long cards derived from the Intel Ruler, or on PCIe add-in cards as either an HBA with cables to SSDs, or with several SSD controllers on the same card. Smaller form factors like the Samsung NF1 and M.2 are unlikely to have enough room for the 88NR2241 to be worthwhile.


Potential layout for EDSFF Long SSD using Marvell 88NR2241 (not to scale)

Hardware RAID for NVMe has not been as popular as for drives using SATA and SAS interfaces. Part of the problem has been a dearth of controllers that can provide RAID functionality without imposing a performance bottleneck that eliminates the advantages of NVMe. Some recent top-of-the-line RAID controllers from Broadcom/Avago/LSI can bond groups of four of their SAS/SATA links to connect to NVMe SSDs, but this leaves them with a disappointing port count and a few other inconvenient limitations. The Marvell 88NR2241 doesn't address all the demand for NVMe RAID capabilities (especially without RAID 5/6 support), but it is a big step forward. Its compatibility with existing NVMe host drivers will make it very easy to deploy from a software perspective. Marvell didn't discuss future plans for other NVMe switches, but it is likely that they are planning larger variants to go after more of the traditional PCIe switch market and some of the SAS/SATA HBA market.

Source: Marvell


9 Comments


  • jardows2 - Tuesday, March 27, 2018

    6.4GB/s. Wow! I remember when talking about 66MB/s was a big deal!
  • Dug - Tuesday, March 27, 2018

    66MB/s isn't a big deal? I really need to catch up.
  • ytoledano - Tuesday, March 27, 2018

    Why aren't all these controllers PCIe3x16?
  • Billy Tallis - Tuesday, March 27, 2018

    Adding more PCIe lanes would substantially increase the price and power consumption of these chips. The SSD controllers would need much higher NAND channel counts to be able to use the bandwidth of an x16 link, which would make for the biggest and most expensive SSD controller ASIC ever. Such a chip would have a very small market—many servers don't even have x16 slots, instead opting for a higher number of x8 slots, and the drives would need a minimum of 2TB to even come close to full performance.

    As for why the NVMe switch doesn't have an x16 uplink, I suspect that it is simply due to this being the smallest in what will eventually be a broad product family.
  • Dug - Tuesday, March 27, 2018

    I'm looking at the top picture wondering how do I connect an ssd to that?
    I've seen controllers where you attach the m.2 directly to the controller, but this doesn't seem the same.
    I can't find a solution that attaches to it, or find out if more than one ssd can connect, or what cables are needed.
  • Billy Tallis - Wednesday, March 28, 2018

    The photo at the top is an SSD, with lots of extra connectors for debugging. The controller in the center is the 88SS1098, and it's surrounded by either 2TB or 4TB of Toshiba 3D TLC.
  • Nainesh - Sunday, April 01, 2018

    This is connected through an M.2 adaptor on the EVB. This board is for demo only; no cables are required. The same product is also available in a U.2 form factor, to which you can connect two M.2 SSDs.
  • msroadkill612 - Wednesday, April 04, 2018

    Using 4x NVMe on an 8-lane device means they are using PCIe 3 x2 links, as they should do more often in desktops IMO.

    For an affordable NVMe drive, aiming for 2GB/s read and write is achievable. Only the best and dearest NVMe drives get near 3.5GB/s read and ~2.4GB/s write in burst sequential.

    For most it's more like sub-1.5GB/s write and sub-2.4GB/s read. The rest of the 4GB/s of bandwidth allocated from scarce I/O resources is wasted.

    In the AMD AM4 Ryzen and Intel desktop world, lanes are extremely scarce, so many are precluded from using this powerful new NVMe resource to the full for want of lanes, yet we squander what we have by overproviding for NVMe drives that barely use the extra bandwidth.
  • sethk - Friday, April 06, 2018

    I hope the low lane count and simplicity of the switch / raid controller (88NR2241) are to keep costs down. Speaking of which, is this a product I can buy? Pricing, availability?
