In the upper echelons of commercial workstations, having access to copious amounts of local NVMe storage is more of a requirement than 'something nice to have'. We've seen solutions in this space range from custom FPGAs to software breakout boxes, and more recently a number of motherboard vendors have offered PCIe x16 quad M.2 cards. The only downside is that they rely on processor bifurcation, i.e. the ability of the processor to drive multiple devices from a single PCIe x16 slot. HighPoint has got around that limitation.

The current way of getting four NVMe M.2 drives into a single PCIe x16 slot sounds fairly easy. There are 16 lanes in the slot, and each drive takes up to four lanes, so what is all the fuss? The problem arises on the CPU side of the equation: that PCIe slot connects directly to a single PCIe x16 root complex on the chip, and depending on the configuration it may only be expecting one device to be connected to it. The minute you put four devices in, it might not know what to do. To get this to work, you need a single device to act as a communication facilitator between the drives and the CPU. This is where PCIe switches come in. Some motherboards already use these to split a PCIe x16 complex into x8/x8 when different cards are inserted. For something a bit bigger, like bootable NVMe, HighPoint uses something bigger (and more expensive).
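To make the topology question concrete, here is a minimal Python sketch (Linux-only, using the standard sysfs layout) that lists the NVMe endpoints and switch/bridge ports the OS can see. Without bifurcation or a switch, a root port expects a single downstream device; behind a switch, the same slot fans out into several bridge ports with endpoints beneath them. The class codes come from the PCI specification; nothing here is specific to HighPoint's card.

```python
#!/usr/bin/env python3
"""Sketch: enumerate NVMe endpoints and PCIe bridge/switch ports via
Linux sysfs. Assumes the standard /sys/bus/pci layout; run as-is on
any Linux box to see how drives hang off root ports or switches."""
import os

SYSFS_PCI = "/sys/bus/pci/devices"

def link_width(dev_path):
    """Read the negotiated link width (e.g. '16'), if exposed."""
    try:
        with open(os.path.join(dev_path, "current_link_width")) as f:
            return f.read().strip()
    except OSError:
        return "?"

for dev in sorted(os.listdir(SYSFS_PCI)):
    dev_path = os.path.join(SYSFS_PCI, dev)
    with open(os.path.join(dev_path, "class")) as f:
        dev_class = f.read().strip()
    # 0x0108xx = NVMe controller, 0x0604xx = PCI-to-PCI bridge
    # (root ports and switch ports both show up as bridges)
    if dev_class.startswith("0x0108") or dev_class.startswith("0x0604"):
        print(f"{dev}  class={dev_class}  width=x{link_width(dev_path)}")
```

On a bifurcation-based quad M.2 card you would see four NVMe endpoints sitting directly on four root ports; with a switch-based card like this one, they sit behind an extra layer of bridge devices instead.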

The best way to get around that limitation is to use a PCIe switch, and in this case HighPoint is using a PLX8747 chip with custom firmware to enable booting. This switch is not cheap (not since Avago took the helm of the company and raised prices several times over), but it does provide that configurable interface between the CPU and the drives that works in all scenarios. Up until today, HighPoint already had a device on the market for this, the SSD7101-A, which connected four M.2 NVMe drives to the machine. What makes the SSD7102 different is that the firmware inside the PLX chip has been changed: it now allows booting from a RAID of NVMe drives.

The SSD7102 supports booting with RAID 0 across all four drives, with RAID 1 across pairs of drives, or booting from a single drive in a JBOD configuration. Each drive in the JBOD can be configured to be a boot drive, allowing for multiple OS installs across the different drives. The SSD7102 supports any M.2 NVMe drives from any vendor, although for RAID setups it is advised that identical drives are used.
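As a rough illustration of what the RAID 0 mode does under the hood, the sketch below shows the standard stripe-mapping arithmetic: logical blocks are dealt out across the four drives in fixed-size chunks. The stripe size is an assumption for illustration; HighPoint has not published the card's actual on-disk layout.

```python
"""Sketch of RAID 0 address mapping across four drives. The stripe
size below is assumed (e.g. 64 KiB at 512 B blocks); the real card's
layout may differ."""

NUM_DRIVES = 4
STRIPE_BLOCKS = 128  # blocks per stripe chunk (assumption)

def locate(logical_block):
    """Map a logical block to (drive index, physical block on drive)."""
    chunk, offset = divmod(logical_block, STRIPE_BLOCKS)
    drive = chunk % NUM_DRIVES                      # round-robin
    physical = (chunk // NUM_DRIVES) * STRIPE_BLOCKS + offset
    return drive, physical

# Sequential I/O fans out across all four drives, which is where the
# bandwidth scaling of RAID 0 comes from:
for lb in range(0, 1024, 128):
    drive, block = locate(lb)
    print(f"logical block {lb:4d} -> drive {drive}, block {block}")
```

The same arithmetic also shows why identical drives are advised: the array's capacity and speed are bounded by the smallest and slowest member.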

The card is a single-slot device with a heatsink and a 50mm blower fan to keep every drive cool. Drives up to the full 22110 standard are supported, and HighPoint says that the card is supported under Windows 10, Server 2012 R2 (or later), Linux kernel 3.3 (or later), and macOS 10.13 (or later). Management of the drives after installation occurs through a browser-based tool, or through a custom API for deployments that want to do their own management; rebuilding arrays is automatic, with auto-resume features. MTBF is rated at just under one million hours, with a typical power draw (minus drives) of 8W. HighPoint states that both AMD and Intel platforms are supported, and given the presence of the PCIe switch, I suspect the card would also 'work' in PCIe x8 or x4 modes too.
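For context on that MTBF figure, a quick back-of-the-envelope calculation converts it into an annualised failure rate, assuming the usual constant-failure-rate (exponential) model:

```python
"""Convert a quoted MTBF into an annualised failure rate (AFR),
under the standard exponential failure model. The 1M-hour figure
rounds up the 'just under 1M hours' quoted for the card."""
import math

MTBF_HOURS = 1_000_000
HOURS_PER_YEAR = 24 * 365

# Probability the card survives one year of continuous operation:
p_survive_year = math.exp(-HOURS_PER_YEAR / MTBF_HOURS)
afr = 1 - p_survive_year

print(f"Annualised failure rate: {afr:.2%}")   # roughly 0.87%
```

In other words, the quoted MTBF works out to a little under a 1% chance of failure per year of 24/7 operation, before counting the drives themselves.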

The PCIe card is due out in November, either direct from HighPoint or through reseller/distribution partners. It is expected to have an MSRP of $399, the same as the current SSD7101-A which does not have the RAID bootable option.

Comments

  • TomWomack - Wednesday, October 24, 2018

    What on Earth is the point of paying extra to be able to boot from your large NVMe RAID? Your board surely already has an NVMe slot which you can put the small boot drive in!

    (I agree with the point about RAID controllers; I'd be happy with a system that gave me full-speed /dev/nvme1n1 through /dev/nvme4n1 and just let md deal with putting the bytes in the right place)
  • tommo1982 - Tuesday, October 30, 2018

    It'd be interesting to see it tested by Anandtech.
