Microchip Announces First PCIe 5.0 Switches
by Billy Tallis on February 3, 2021 9:00 AM EST
Building on their recent announcement of PCIe 5.0 retimers, Microchip has announced their first PCIe 5.0 switches, as part of their Switchtec PFX product line. On paper these look like a very straightforward update to their existing Switchtec PFX switches for PCIe 4.0, carrying over all the important features but doubling the speed.
The final version of the PCI Express 5.0 specification was released in May 2019, but significant adoption is not expected to begin until Intel's Sapphire Rapids Xeon processors ship, planned for later this year. Microchip is positioning themselves to be one of the most important vendors helping enable the transition, and they expect to be the only company offering both switches and retimers for PCIe 5.0. Components like switches and retimers are becoming increasingly important with each iteration of PCIe as higher speeds are achieved at the cost of range; servers using PCIe 5.0 will only be able to put a handful of devices close enough to the CPU to operate at PCIe 5.0 speeds without some kind of repeater. Retimers like Microchip's XpressConnect parts are simple pass-through repeaters, while switches like the new Switchtec PFX parts can fan out PCIe connectivity from one or more uplink ports to numerous downstream ports.
As with the PCIe 4.0 members of the Switchtec PFX product line, the new PCIe 5.0 switches will be available with lane counts from 28 to 100. These switches support port bifurcation down to x2 links, with bifurcation down to x1 supported on some of the lanes. The switches also support up to 48 Non-Transparent Bridges (NTBs), allowing large multi-host PCIe fabrics to be assembled from several switches. However, initial demand for PCIe 5.0 is expected to center around GPUs, machine learning accelerators and high-speed NICs, so many of those advanced features will be underutilized early on, and the chips will primarily be used to feed those extremely bandwidth-hungry peripherals with an x16 link each. SSDs using just two or four lanes each are expected to be slower about moving to PCIe 5.0.
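To illustrate what bifurcation down to x2 means in practice, here is a small sketch that enumerates the power-of-two link splits of a port. This is a simplified model of our own devising (real switches constrain which lane groups may be split, and the function name is hypothetical), not Microchip's configuration interface:

```python
def bifurcations(width, allowed=(16, 8, 4, 2)):
    """Enumerate ways to split a port of `width` lanes into power-of-two
    links, returned as non-increasing lists of link widths.

    Simplified model for illustration: real switches restrict which lane
    groups can split, but this shows the combinatorics of bifurcation
    down to x2.
    """
    if width == 0:
        return [[]]
    out = []
    for w in allowed:
        if w <= width:
            # Only allow widths <= w in the remainder, so each split
            # is generated exactly once (in non-increasing order).
            narrower = tuple(a for a in allowed if a <= w)
            out.extend([w] + rest for rest in bifurcations(width - w, narrower))
    return out

# An x8 port can run as x8, x4+x4, x4+x2+x2, or x2+x2+x2+x2:
print(bifurcations(8))
```

With bifurcation down to x2, even a mid-sized switch can fan a few x16 uplinks out to a large number of narrow SSD or NIC links.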
The new PCIe 5.0 Switchtec PFX switches are currently sampling to select customers, including a development/evaluation board based around the 100-lane switch. Microchip wouldn't disclose any pricing information for the new switches, but they are bound to be more expensive than the PCIe 4.0 switches with the same lane counts. Power consumption is also going up, but Microchip wouldn't quantify the change.
Microchip's lineup of PCIe switches for earlier generations also includes the Switchtec PSX and PAX families with more advanced functionality than the PFX switches. PCIe 5.0 versions of the PSX and PAX families have not been announced, but it's normal for those versions to come later. Microchip's only competition for leading-edge PCIe switches comes from Broadcom/PLX PEX switches. Broadcom has not yet publicly announced their PCIe 5.0 switches, but they are doubtless also planning to take advantage of the launch of Intel's Sapphire Rapids platform.
Xajel - Wednesday, February 3, 2021 - Damn, I wish these were cheap and not overpriced by the current image of them being a server-market product.
A simple switch with x1-lane bifurcation could be the perfect solution for any consumer-grade motherboard as a true replacement for SATA: each port could be configured with x1, x2 or x4 lanes depending on the speed of the SSD. A single x4 port could connect to four x1 drives, for example, or two x2 drives, or a single x4 drive.
The OCuLink connector seems to be perfect for such application, assuming it can be updated for PCIe 5.0.
But I guess Intel & AMD are forced to integrate their own PCIe switches into their chipsets because a third-party switch is so damn expensive. And we're limited to those, since motherboard manufacturers can't afford to add expensive switches just because "consumer" users want more PCIe lanes.
Deicidium369 - Wednesday, February 3, 2021 - Oculink is dead - Intel cancelled it - so no new Oculink
Rest of your post is gibberish
Billy Tallis - Wednesday, February 3, 2021 - What connector do you think is going to be most popular for PCIe 5.0 cabling between motherboards and SSD backplanes?
Deicidium369 - Wednesday, February 3, 2021 - It won't be Oculink. If we are talking about enabling disaggregation in CXL 2.x - then it could be copper or it could be optical - with optical being most likely - and not just for connecting storage, but also eventual pools of resources like FPGAs, AI (Habana or TPU), GPU (Intel, Nvidia or AMD) and memory (volatile and non volatile)... then optical makes the most sense - Intel's silicon photonics being a prime candidate... but Nvidia with Mellanox IP would be a possibility as well.
As far as CXL goes - PCIe5 is a temporary stopgap to PCIe6 - with even more issues when using copper runs longer than a few inches (motherboard).
edzieba - Friday, February 5, 2021 - Yep, totally dead. So dead there are products shipping with it and in active use, with recent updates to PCIe 4.0. Very dead indeed. Quite impressive of Intel to be able to kill a standard they did not originate and do not control, too.
Of course, you may have completely mistaken Oculink for Omnipath, which has no relation.
mode_13h - Friday, February 5, 2021 - Yeah, that must've been what he was thinking. I believe Omnipath is indeed discontinued.
WaltC - Wednesday, February 3, 2021 - It's going to be a long time before PCIe5 hits the consumer markets. It's amusing to note that when AMD was shipping PCIe4 CPUs, GPUs, and chipsets into the market, some people were saying "You don't really 'need' PCIe4--PCIe3 is plenty." The bit about 'need' is an Intel standby for trying to convince people that they don't 'need' a competitor's superior products and tech. Before Intel licensed x86-64 from AMD, Intel's line was "You don't need 64-bits on the desktop," as Intel thought it was "too much" for consumers while encouraging business to go Itanium for 64-bit performance. Intel ran an entire ad campaign based on that slogan! As we all know, it didn't work.
I mean, $1k Z590 motherboards that don't have system-wide PCIe4 buses? AMD is selling hybrid PCIe4/PCIe3 motherboards for 25% of what Intel wants to charge for its most advanced bus support--AMD's B550 "value" motherboards! And some people think Intel is preparing to release PCIe5 chipsets and CPUs? Not a chance, I'd say. I know AMD is in no hurry for PCIe5, either.
eek2121 - Wednesday, February 3, 2021 - By "long time before" you actually mean "months".
DanNeely - Wednesday, February 3, 2021 - I'll believe it when I see it. About a year or two before the first AMD PCIe4 boards came out I read an article on an engineering site (IEEE????) that predicted that meeting the needed PCB tolerances and/or redrivers needed to support PCIe4 would add about $100 to full size mobos; and that with a maximum PCB path length of only ~2" (vs 4" for 4.0 and 8" for 3.0) requiring a ton of redrivers on top of even more stringent PCB manufacturing, adding PCIe5 to the board would cost about $400.
The major price increases we've seen first with AMD and now Intel's 4.0 capable boards make it clear the first part of the article's predictions were correct. That leaves me skeptical that we'll see 5.0 show up below the Threadripper/HEDT pseudo-workstation tier of boards anytime soon. (Maybe the top slot and an m.2 riser card next to the dimms; but unless they can figure out how to reduce PCB costs and make dirt cheap redrivers I'm not expecting 5.0 to be widespread anytime soon.)
back2future - Wednesday, February 3, 2021 - The need for PCIe5 instead of PCIe4 should consider how long the system and peripherals idle between uses of the massively increased bandwidth to/from peripherals (GPU, storage, accelerator hardware, maybe networking >400Gb-1Tb), and what the power/temperature increase is for the chipset and the specialized circuits needed to enable the accustomed mainboard layout (maybe even with compatibility for PCIe6 in mind, if that's within a reasonable time scale on faster markets?)