At Computex 2019, during AMD's keynote, CEO Dr. Lisa Su unveiled details of the new X570 chipset designed for the upcoming Ryzen 3000 series "Matisse" processors. This third generation of AM4 motherboards includes PCIe 4.0 support and native USB 3.1 Gen2 ports, along with a slightly higher chipset TDP, which means most motherboard manufacturers are likely to use actively cooled chipset heatsinks.

The new X570 chipset marks the first consumer motherboard chipset to feature PCIe 4.0 by default. We saw that manufacturers were preparing their previous-generation X470/B450 motherboards to offer limited PCIe 4.0 support through a wave of firmware updates – taking advantage of the CPU's independent on-die PCIe controller – but due to the signal-integrity requirements of PCIe 4.0, it's likely that previous-generation boards will only have the top full-length slot running PCIe 4.0 when paired with a Ryzen 3000 series processor. The limitation comes down to trace length: traces longer than a few inches won't be able to operate at PCIe 4.0 speeds unless they are fitted with redrivers to push the signal further down the board.
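For a sense of the bandwidth at stake, here is a back-of-the-envelope sketch of per-lane throughput across PCIe generations; the transfer rates and line encodings are the published spec figures, and the resulting numbers are approximate.

```python
# Approximate usable PCIe bandwidth per direction, from the published
# spec transfer rates and line-encoding overheads.
GENERATIONS = {
    # gen: (transfer rate in GT/s, encoding efficiency)
    1: (2.5, 8 / 10),      # 8b/10b encoding
    2: (5.0, 8 / 10),      # 8b/10b encoding
    3: (8.0, 128 / 130),   # 128b/130b encoding
    4: (16.0, 128 / 130),  # 128b/130b encoding
}

for gen, (gts, eff) in GENERATIONS.items():
    gbps_per_lane = gts * eff         # usable Gb/s per lane
    gbs_x16 = gbps_per_lane * 16 / 8  # GB/s across an x16 slot
    print(f"PCIe {gen}.0: {gbps_per_lane:5.2f} Gb/s per lane, "
          f"~{gbs_x16:4.1f} GB/s at x16")
```

Doubling the signaling rate to 16 GT/s is exactly what makes long, unassisted traces problematic, hence the redrivers.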


The MSI MEG X570 Godlike motherboard

With the new X570 boards, the implementation of PCIe 4.0 has been designed in from the get-go, with motherboards optimized for the interface; most models will likely feature redrivers on the slots furthest from the CPU. When paired with a Ryzen 3000 series processor, a typical X570 motherboard will have the lanes going into the top PCIe slot come directly from the processor itself, as will the four PCIe lanes assigned to the first NVMe M.2 slot. The X570 chipset also supports PCIe 4.0 – although the chipset is not a requirement for the CPU-hosted lanes to run at PCIe 4.0 – and can use its own lanes for USB 3.1 Gen2 capability, as well as handling Wi-Fi, Bluetooth, and SATA responsibilities.
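Based on the description above, the lane split on a typical board can be sketched roughly as follows; treat the exact counts as our assumptions until AMD discloses the full block diagram.

```python
# Hypothetical lane map for a typical X570 board, following the
# description above; the exact counts are assumptions, not
# AMD-confirmed figures.
CPU_LANES = {
    "top full-length PCIe slot": 16,  # PCIe 4.0, direct from the CPU
    "first M.2 NVMe slot": 4,         # PCIe 4.0, direct from the CPU
    "uplink to the X570 chipset": 4,  # assumed CPU-to-chipset link
}

# Duties served from the chipset's own lanes rather than the CPU's.
CHIPSET_DUTIES = ["USB 3.1 Gen2", "Wi-Fi", "Bluetooth", "SATA"]

print(f"CPU-hosted lanes: {sum(CPU_LANES.values())} total")
for use, lanes in CPU_LANES.items():
    print(f"  x{lanes:<2} -> {use}")
print("Handled by the chipset:", ", ".join(CHIPSET_DUTIES))
```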

One of the caveats of a more powerful chipset is that it draws around 11 W of power; for comparison, the X470 chipset drew around 6 W. (This is different to the 15 W figure being reported elsewhere – it appears AMD is making two variants of the chipset, with the 11 W version on consumer boards and the 15 W version for enterprise, the latter having more PCIe lanes.)

Another change is to the way AMD develops its chipsets: everything is now done in-house, licensing IP from ASMedia and others, instead of completely outsourcing the chipset design to ASMedia as it did with X470 and X370. The main reason for the TDP increase is PCIe 4.0. Having spoken to AMD's partners, we expect updates to come thick and fast between now and the CPU launch, adding additional power-management features to the chipset.

We're still waiting on a full block diagram disclosure from AMD, and when we receive it we'll update this news post.


30 Comments


  • abufrejoval - Sunday, May 26, 2019 - link

    If there were really flexible ways to bifurcate or quadfurcate the 16 PCIe 4.0 lanes down to what peripherals like 10Gbit Ethernet can actually handle, it could do wonders in the workstation space.

    Wish it were as flexible as the base architecture of Infinity Fabric actually seems to be, but I don't know how OEMs could achieve that with the hard wiring they need on the motherboards.
    Reply
  • npz - Monday, May 27, 2019 - link

    The problem is you can't just bifurcate the lanes when all the current cards are PCIe 3.0. There's no way to simply split the link or use a switch to convert the bandwidth. A PCIe 3.0 x4 device will still use 4 lanes on a PCIe 4.0 connection, except each lane will simply run at half speed.

    You can't convert that PCIe 3.0 x4 device to a PCIe 4.0 x2 slot without an expensive, latency- and performance-hurting shim conversion device.
    Reply
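To put rough numbers on npz's point above (a minimal sketch; encoding overhead per the spec): a PCIe 3.0 x4 device in a PCIe 4.0 slot still occupies four physical lanes, each negotiated down to 3.0 speed.

```python
# Rough one-direction bandwidth comparison behind the comment above.
def link_gbs(gts: float, lanes: int) -> float:
    """Usable GB/s for a PCIe 3.0/4.0 link (both use 128b/130b)."""
    return gts * (128 / 130) * lanes / 8

dev_30_x4 = link_gbs(8.0, 4)    # PCIe 3.0 x4 device: ~3.9 GB/s
slot_40_x4 = link_gbs(16.0, 4)  # the same 4 lanes at 4.0: ~7.9 GB/s
slot_40_x2 = link_gbs(16.0, 2)  # a 4.0 x2 link would match the device

print(f"3.0 x4 device:        {dev_30_x4:.1f} GB/s over four lanes")
print(f"4.0 x4 slot capacity: {slot_40_x4:.1f} GB/s, so half sits unused")
print(f"4.0 x2 equivalent:    {slot_40_x2:.1f} GB/s, but the device can't "
      f"negotiate down to it without a conversion shim")
```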
  • npz - Monday, May 27, 2019 - link

    This problem has been with us forever. I have AMD X399 and Intel X99 mobos. On either, when I use my old PCIe 2.0 x8 cards, I have no choice but to "waste" eight PCIe 3.0 lanes, running them at half speed. Reply
  • abufrejoval - Monday, May 27, 2019 - link

    Well, that's why we used to have PCIe switch chips in higher-end motherboards, until a couple of years ago when Avago did all those mergers and then raised the prices through the roof.

    I guess what's really required is a re-design of the motherboard approach:

    Effectively, southbridges are switches, and one mistake in the current design is limiting them to one per CPU (I don't know if that actually *is* a limit or just the way it's done).

    So why not add another southbridge at the x4 NVMe port, or in fact offer a variant with three southbridges that support 8 gen3 lanes on a physical x16 slot, amongst other things?

    Actually, with 24 lanes overall, splitting into 8+16 or 12+12 may make more sense for more flexible "southswitches".

    Now that desktops regularly have 8-12 cores, multiple NVMe devices find their way into your PC, Thunderbolt 3 (aka USB 4) is swallowing lanes, and the gigabits are counting up on Ethernet, this fixed PCIe lane allocation scheme spanning four generations of bandwidth is really starting to show its age.

    Whenever you have a true performance breakthrough in IT, you also tend to hit your nose on completely unanticipated new bottlenecks.
    Reply
  • Alexvrb - Monday, May 27, 2019 - link

    Yeah, the chipset is acting like a fancy switch, so you are getting the 4.0 bandwidth to and from the CPU regardless of what generation of devices you have attached to it. So that helps... but having more/bigger chipsets adds to cost, power consumption, and board complexity - which brings me to my next point.

    If you're simultaneously overloading the chipset with attached devices as well as the direct lanes, you might want to consider an HEDT platform. Some boards allow 8/8 or even 8/4/4 on the direct lanes, which reduces the need to overload the chipset - although for the direct lanes you do still "waste" bandwidth when using 3.0 devices. The bottom line is that AM4 / Ryzen 3000 has enough lanes for 99.9% of consumer use cases, as well as many professional use cases, and yet these ARE consumer parts.
    Reply
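For reference, the 8/8 and 8/4/4 splits Alexvrb mentions all partition the CPU's x16 direct allocation; a trivial sketch:

```python
# Direct-lane bifurcation modes mentioned in the comment above; every
# mode partitions the CPU's x16 allocation, no lanes appear or vanish.
BIFURCATION_MODES = {
    "single slot": [16],
    "dual slot": [8, 8],
    "triple slot": [8, 4, 4],
}

for mode, split in BIFURCATION_MODES.items():
    assert sum(split) == 16  # each mode uses exactly the CPU's x16
    print(f"{mode}: " + " + ".join(f"x{n}" for n in split))
```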
  • StevenD - Tuesday, May 28, 2019 - link

    So basically you would be good with an x2 PCIe 4.0 link, which I'm pretty sure will be possible with some of the motherboards. Reply
  • bcronce - Tuesday, May 28, 2019 - link

    PCIe is a switched protocol and supports arbitrary numbers of lanes that can be teamed. One of the tech sites, possibly AnandTech, was playing around with PCIe 1.0 many, many years ago by placing nail polish over the PCIe tabs. They had an x16 video card running with only 3 PCIe lanes. They even did asymmetrical setups, where the upload and download had a different number of lanes.

    Many chipsets are effectively PCIe switches. They may expose something like 32 lanes to all of the devices, but only have 8 lanes to the CPU. Not to mention that PCIe is a DMA protocol: there is no reason an x16 4.0 device can't talk to an x1 1.0 device over DMA.
    Reply
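A minimal sketch of the oversubscription bcronce describes, using the illustrative lane counts from the comment (32 downstream behind an 8-lane uplink; not real X570 figures):

```python
# Oversubscription at a chipset acting as a PCIe switch, using the
# illustrative numbers from the comment above (not real X570 figures).
downstream_lanes = 32  # lanes the switch exposes to devices
uplink_lanes = 8       # lanes from the switch up to the CPU

ratio = downstream_lanes / uplink_lanes
print(f"{ratio:.0f}:1 oversubscribed: all devices combined can only move "
      f"{uplink_lanes} lanes' worth of traffic through the CPU uplink")

# Peer-to-peer DMA between two devices behind the same switch never
# crosses the uplink at all, which is why an x16 4.0 device can talk
# to an x1 1.0 device without touching the CPU's lanes.
```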
  • StevenD - Tuesday, May 28, 2019 - link

    Never understood the obsession with 10G. Unless you're moving VMs to other machines constantly or editing raw 4K over the network, it's really overkill even for the enthusiast consumer.
    Your speed is 4GBps; until recently, with NVMe, it was almost impossible to feed the network card.
    Reply
  • bcronce - Tuesday, May 28, 2019 - link

    To properly schedule network traffic to minimize jitter and loss, you need about 25% more bandwidth than provisioned. If you expect to transfer up to 1Gb/s of traffic, you should really have 1.25Gb/s of bandwidth if you don't want networking hiccups.

    I am less concerned with 10Gb than 2.5/5.0Gb, but those two new protocols are based on 10Gb, and pretty much everyone who makes NBASE-T supports 2.5/5/10. May as well just say "10".
    Reply
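Putting that 25% rule of thumb into numbers (the headroom figure is the commenter's heuristic, not a spec value):

```python
# The ~25% headroom rule of thumb from the comment above.
def required_link_gbps(expected_traffic_gbps: float,
                       headroom: float = 0.25) -> float:
    """Link capacity needed to carry the expected traffic without
    scheduling-induced jitter, per the rule of thumb (a heuristic)."""
    return expected_traffic_gbps * (1 + headroom)

for traffic in (1.0, 2.5, 5.0, 10.0):
    print(f"{traffic:4.1f} Gb/s expected -> provision "
          f"~{required_link_gbps(traffic):5.2f} Gb/s of link")
```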
  • Chaitanya - Monday, May 27, 2019 - link

    Is that "Lightning" M.2 a PCIe 4.0-based M.2 slot? Reply
