AM5 Chipsets: X670 and B650, Built by ASMedia

Finally, let’s talk about the chipsets that are going to be driving the new AM5 platform. Kicking things off, we have the B650 and X670 chipsets, as well as their Extreme variations. Since AMD is starting the rollout of their new platform with their high-end CPUs, they are matching this with the rollout of their high-end chipsets.

For this week’s launch, the initial boards available are all from the X670 family. B650 boards will, in turn, be coming next month. We’ll break down the differences between the two families below, but at a high level, X670 offers more I/O options than B650. And while not strictly a feature of the chipset, the market segmentation is such that the bulk of high-end AM5 boards – those boards with a massive number of VRM phases and other overclocker/tweaker-friendly features – will be X670 boards.

That said, for simplicity’s sake we’re going to start with the B650 chipset, and build up from there.

AMD AM5 Chipset Comparison

| Feature | X670E | X670 | B650E | B650 |
|---------|-------|------|-------|------|
| CPU PCIe (PCIe Graphics) | 5.0 (Essentially Mandatory) | 4.0 (5.0 Optional) | 5.0 (Essentially Mandatory) | 4.0 (5.0 Optional) |
| CPU PCIe (M.2 Slots) | At Least 1 PCIe 5.0 Slot | At Least 1 PCIe 5.0 Slot | At Least 1 PCIe 5.0 Slot | At Least 1 PCIe 5.0 Slot |
| Total CPU PCIe Lanes | 24 | 24 | 24 | 24 |
| Max Chipset PCIe Lanes | 12x 4.0 + 8x 3.0 | 12x 4.0 + 8x 3.0 | 8x 4.0 + 4x 3.0 | 8x 4.0 + 4x 3.0 |
| SuperSpeed 10Gbps USB Ports | 4 CPU + 12 Chipset, or 4 CPU + 10 Chipset + 1 Chipset 20Gbps, or 4 CPU + 8 Chipset + 2 Chipset 20Gbps | 4 CPU + 12 Chipset, or 4 CPU + 10 Chipset + 1 Chipset 20Gbps, or 4 CPU + 8 Chipset + 2 Chipset 20Gbps | 4 CPU + 6 Chipset, or 4 CPU + 4 Chipset + 1 Chipset 20Gbps | 4 CPU + 6 Chipset, or 4 CPU + 4 Chipset + 1 Chipset 20Gbps |
| DDR5 Support | Quad Channel (128-bit bus), Speeds TBD | Quad Channel (128-bit bus), Speeds TBD | Quad Channel (128-bit bus), Speeds TBD | Quad Channel (128-bit bus), Speeds TBD |
| Wi-Fi 6E | Yes | Yes | Yes | Yes |
| CPU Overclocking Support | Y | Y | Y | Y |
| Memory Overclocking Support | Y | Y | Y | Y |
| Available | September 2022 | September 2022 | October 2022 | October 2022 |

B650, AMD’s mainstream AM5 chipset, can best be thought of as a PCIe 4.0 switch with a bunch of additional I/O baked in. And as is typical for chipsets these days, several of the I/O lanes coming from the chipset are flexible lanes that can be reallocated between various protocols. Meanwhile, the uplink to the CPU is a PCIe 4.0 x4 connection.

For PCIe connectivity, B650 offers 8 PCIe 4.0 lanes, which can either have PCIe slots or further integrated peripherals (LAN, Wi-Fi, etc) hung off of them. This and the uplink speed are both notable improvements over the B550 chipset, which was PCIe 3.0 throughout, despite Ryzen 3000/5000 offering PCIe 4.0 connectivity. So B650 has a lot more bandwidth coming into it, and available to distribute to peripherals.

There is also a quartet of PCIe 3.0 lanes that is shared with the SATA ports, allowing for either 4 PCIe lanes, 2 lanes + 2 SATA ports, or 4 SATA ports. Notably, the dedicated SATA ports found on the 500 series chipsets are gone, so motherboards will always have to sacrifice PCIe lanes to enable SATA ports. For B650 this amounts to a net loss of 2 SATA ports, as the most ports the chipset can drive without a discrete storage controller is 4.

Meanwhile on the USB front, motherboard vendors get more SuperSpeed USB ports than before. The chipset offers four fixed 10Gbps SuperSpeed ports, and then an additional output that can be configured as either a single 20Gbps (2x2) port or two 10Gbps ports. Finally, the chipset can drive a further 6 USB 2.0 ports, mostly for on-board peripheral use. There are no USB root ports limited to 5Gbps here, so all USB 3.x ports, whether coming from the CPU or the chipset, are capable of 10Gbps operation.
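
To make those flex-I/O trade-offs a bit more concrete, here is a minimal sketch that enumerates the downstream configurations a B650 board vendor could expose, based solely on the figures above. The structure and names (FIXED, b650_configs, etc.) are our own illustrative model, not anything from AMD's documentation.

```python
# Illustrative model of B650 downstream I/O options (not an official AMD spec).
from itertools import product

# Fixed resources on the B650 chipset, per the figures above.
FIXED = {
    "pcie4_lanes": 8,       # general-purpose PCIe 4.0 lanes
    "usb3_10g_fixed": 4,    # fixed SuperSpeed 10Gbps USB ports
    "usb2": 6,              # USB 2.0 ports
}

# Flexible resources: 4 PCIe 3.0 lanes shared with SATA,
# plus one USB output that is either 1x 20Gbps or 2x 10Gbps.
PCIE3_SATA_OPTIONS = [
    {"pcie3_lanes": 4, "sata_ports": 0},
    {"pcie3_lanes": 2, "sata_ports": 2},
    {"pcie3_lanes": 0, "sata_ports": 4},
]
USB_FLEX_OPTIONS = [
    {"usb3_20g": 1, "usb3_10g_extra": 0},
    {"usb3_20g": 0, "usb3_10g_extra": 2},
]

def b650_configs():
    """Yield every downstream I/O mix a B650 board could expose."""
    for storage, usb in product(PCIE3_SATA_OPTIONS, USB_FLEX_OPTIONS):
        yield {**FIXED, **storage, **usb}

if __name__ == "__main__":
    for cfg in b650_configs():
        print(cfg)
```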

AMD has once again outsourced chipset development for this generation to ASMedia, who also designed the B550 chipset. AMD has not disclosed a TDP for the chipset, but like B550 before it, it is designed to run with passive cooling.

Outside of the technical capabilities of the B650 chipset itself, AMD is also imposing some feature requirements on motherboard makers as part of the overall AM5 platform, and this is where the Extreme designation comes in. All B650 (and X670) motherboards must support at least 1 PCIe 5.0 x4 connection for storage; Raphael has enough lanes to drive two storage devices at those speeds, but it will be up to motherboard manufacturers if they want to actually run at those speeds (given the difficulty of PCIe 5.0 routing).

Extreme motherboards, in turn, will also require that PCIe 5.0 is supported on at least one PCIe slot – normally, the x16 PCIe Graphics (PEG) slot. Non-Extreme motherboards will not require this, and while motherboard vendors could technically do it anyhow, it would defeat the purpose of (and the higher margins afforded by) the Extreme branding. Conversely, while AMD has been careful not to outright call PCIe 5.0 slots mandatory on Extreme motherboards, it’s clear that there’s some kind of licensing or validation program in place, such that motherboard makers would be driving up their costs for no good reason if they tried to make an Extreme board without 5.0 slots.

It’s frankly more confusing than it should be, owing to a lack of hard and definite rules set by AMD; but the messaging from AMD is that it shouldn’t be a real issue, and that if you see an Extreme motherboard, it will offer PCIe 5.0 to its graphics slot. Past that, offering 5.0 to additional slots, bifurcation support, etc is up to motherboard vendors. The more PCIe 5.0 slots they enable, the more expensive boards are going to be.
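
For those keeping score, our reading of the platform rules boils down to two checks: every AM5 board needs at least one CPU-attached PCIe 5.0 M.2 slot, and Extreme boards additionally need PCIe 5.0 on a PCIe slot (normally the PEG slot). The sketch below encodes that interpretation; the Am5Board type and meets_am5_requirements check are hypothetical names of ours, and AMD's actual validation program has not been published.

```python
# Our reading of AMD's AM5 segmentation rules, expressed as a simple check.
# This is an interpretation of the article above, not an official AMD requirements list.
from dataclasses import dataclass

@dataclass
class Am5Board:
    chipset: str            # "B650", "B650E", "X670", or "X670E"
    pcie5_m2_slots: int     # CPU-attached M.2 slots wired for PCIe 5.0
    pcie5_peg_slot: bool    # is the x16 graphics slot wired for PCIe 5.0?

def meets_am5_requirements(board: Am5Board) -> bool:
    # Every AM5 board must offer at least one PCIe 5.0 x4 connection for storage.
    if board.pcie5_m2_slots < 1:
        return False
    # Extreme boards must also bring PCIe 5.0 to at least one PCIe slot.
    if board.chipset.endswith("E") and not board.pcie5_peg_slot:
        return False
    return True

print(meets_am5_requirements(Am5Board("B650", 1, False)))   # True: 4.0 PEG slot is fine here
print(meets_am5_requirements(Am5Board("X670E", 2, False)))  # False: Extreme needs a 5.0 slot
```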

Meanwhile the high-end counterpart to the B650 chipset is the X670 chipset, which is pretty much just two B650 chipsets on a single board. While not explicitly confirmed by AMD, as we’ll see in the logical diagram for X670, there’s no way to escape the conclusion that X670 is just using B650 dies daisy chained off of one another to add more I/O lanes.

Officially, X670 is a two-chip solution, using what AMD terms the “downstream” and “upstream” chipsets. The upstream chip is connected to the CPU via a PCIe 4.0 x4 connection, and meanwhile the downstream chip is connected to the upstream chip via another PCIe 4.0 x4 connection.

By doubling up on the number of chips on board, the number of I/O lanes and options is virtually doubled. The sum total of the two chips offers up to 12 PCIe 4.0 lanes (the last 4 are consumed by the upstream chip feeding the downstream chip), as well as a further 8 PCIe 3.0 lanes that can be shifted between PCIe and up to 8 SATA ports.

Meanwhile on the USB front, there are now 8 fixed USB 2.0 ports and 8 fixed SuperSpeed USB 10Gbps ports. For USB flex I/O, motherboard makers can select from either two 20Gbps ports, one 20Gbps port plus two 10Gbps ports, or four 10Gbps ports.
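
As a quick sanity check on the "two B650s" framing, the back-of-the-envelope arithmetic below reproduces the X670 lane and port counts from the per-chip B650 figures. The breakdown and variable names are ours, with the interlink accounting being our assumption about where the "missing" 4 PCIe 4.0 lanes go (per the parenthetical above).

```python
# Back-of-the-envelope arithmetic for X670 as two daisy-chained B650-class dies.
# Per-chip figures come from the B650 description earlier; totals should match
# the article's X670 numbers (12x PCIe 4.0, 8x PCIe 3.0/SATA, 8 fixed 10Gbps USB).

PER_CHIP_PCIE4   = 8   # general-purpose PCIe 4.0 lanes per chip
PER_CHIP_PCIE3   = 4   # flex PCIe 3.0 / SATA lanes per chip
PER_CHIP_USB10G  = 4   # fixed SuperSpeed 10Gbps ports per chip
INTERLINK_LANES  = 4   # upstream chip spends a PCIe 4.0 x4 link feeding the downstream chip

x670_pcie4  = 2 * PER_CHIP_PCIE4 - INTERLINK_LANES   # 12 usable PCIe 4.0 lanes
x670_pcie3  = 2 * PER_CHIP_PCIE3                     # 8 flex PCIe 3.0 / SATA lanes
x670_usb10g = 2 * PER_CHIP_USB10G                    # 8 fixed 10Gbps USB ports

print(x670_pcie4, x670_pcie3, x670_usb10g)           # 12 8 8
```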

And while this configuration adds more I/O lanes (and thus more I/O bandwidth), it should be noted that all of these I/O lanes are still gated behind the PCIe 4.0 x4 connection going back to the CPU. So the amount of backhaul bandwidth available between the chipsets and the CPU is not any higher than it is on B650. The name of the game here is flexibility; AMD is not designing this platform for lots of sustained, high-speed I/O outside of the CPU-connected x16 PCIe graphics slot and M.2 slots. Rather, it’s designed to have a lot of peripherals attached that are either low bandwidth, or only periodically need high bandwidths. If you need significantly more sustained I/O bandwidth, then in AMD’s ecosystem there is a very clear push towards Threadripper Pro products.
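
To put rough numbers on that gating, here is a quick bandwidth calculation using nominal PCIe rates (128b/130b encoding, ignoring protocol overhead); this is our own arithmetic rather than an AMD figure. The takeaway is that a single chipset-attached PCIe 4.0 x4 SSD can already saturate the x4 uplink by itself, even though X670's full downstream lane complement adds up to roughly four times that.

```python
# Rough bandwidth math for the chipset uplink bottleneck (our own arithmetic,
# using nominal PCIe rates; real-world throughput will be somewhat lower).

def pcie_gbps(gen_gtps: float, lanes: int) -> float:
    """Approximate one-direction PCIe bandwidth in GB/s (128b/130b encoding)."""
    return gen_gtps * lanes * (128 / 130) / 8

uplink   = pcie_gbps(16.0, 4)                               # PCIe 4.0 x4 back to the CPU
one_nvme = pcie_gbps(16.0, 4)                               # one PCIe 4.0 x4 SSD off the chipset
downstream_max = pcie_gbps(16.0, 12) + pcie_gbps(8.0, 8)    # X670's full downstream lanes

print(f"CPU uplink:        {uplink:.1f} GB/s per direction")
print(f"One 4.0 x4 SSD:    {one_nvme:.1f} GB/s (enough to saturate the uplink alone)")
print(f"All chipset lanes: {downstream_max:.1f} GB/s if everything ran flat out")
```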

Finally, X670 Extreme (X670E) will impose the same PCIe 5.0 requirements as B650E. This means Extreme boards will offer PCIe 5.0 connectivity on at least one PCIe slot, while X670 boards are expected to come with just PCIe 4.0 slots. These will be the most expensive boards, owing to a combination of requiring two chips, as well as the extra redrivers and routing costs that go into extending PCIe 5.0 farther throughout a motherboard.

On that note, when discussing the new chipsets with AMD, the company did offer an explanation for why X670 daisy chains the chipsets. In short, daisy chaining allows for additional routing flexibility – the downstream chipset can be placed relative to the upstream chipset, instead of relative to the CPU (and PCIe devices can then be placed relative to the chipsets). In other words, this allows for spreading out I/O so that it’s not all so close to the CPU, making better use of the full (E)ATX board. As well, hanging both chipsets off of the CPU would consume another 4 PCIe lanes, which AMD would rather see going to additional storage.

Comments

  • Tomatotech - Friday, September 30, 2022

    Nice idea but you’re swimming against the flow of history. The trend is always to more tightly integrate various components into smaller and smaller packages. Apple have moved to onboard RAM in the same package as the CPU which has brought significant bandwidth advantages and seems to have boosted iGPU to the level of low-end dGPUs.

    The main takeaway from your metaphor of the 650w dGPU with a 55w mainboard and 100-200w CPU is that high-end dGPUs are now effectively separate computers in their own right - especially as a decent one can be well over 50% of the cost of the whole PC - and are being constrained by having to fit into the PC in terms of physical space, power supply capacity, and cooling capacity.

    It’s a shrinking market on both the low end and high end for home use of dGPU, given these innovations and constraints and I don’t know where it’s going to go from here.

    Since I got optic fibre, I’ve started renting cloud based high-end dGPU and it has been amazing albeit the software interface has been frustrating at times. With symmetric gigabit service and 1-3ms ping, it’s like having it under my desk. I worked out that for unlimited hours and given the cost of electricity, it would take 10 years for my cloud rental costs to match the cost of buying and running a home high end dGPU.

    Not everyone has optic fibre of course but globally it’s rolling out year by year so the trend is clear again.
  • Castillan - Wednesday, September 28, 2022

    "

    clang version 10.0.0
    clang version 7.0.1 (ssh://git@github.com/flang-compiler/flang-driver.git
    24bd54da5c41af04838bbe7b68f830840d47fc03)

    -Ofast -fomit-frame-pointer
    -march=x86-64
    -mtune=core-avx2
    -mfma -mavx -mavx2
    "

    ...and then later the article says:

    "The performance increase can be explained by a number of variables, including the switch from DDR4 to DDR5 memory, a large increase in clock speed, as well as the inclusion of the AVX-512 instruction set, albeit using two 256-bit pumps."

    The problem here being that those arguments to Clang will NOT enable AVX-512. Only AVX2 will be enabled. I verified this on an AVX512 system.

    To enable AVX512, at least at the most basic level, you'll want to use "-mavx512f ". There's also a whole stack of other AVX512 capabilities, which are enabled with "-mavx512dq -mavx512bw -mavx512vbmi -mavx512vbmi2 -mavx512vl" but some may not be supported. It won't hurt to include those on the command line though, until you try to compile something that makes use of those specific features, and then you'll see a failure if the platform doesn't support those extensions.
  • Ryan Smith - Friday, September 30, 2022

    Correct. AVX-512 is not in play here. That is an error in analysis on our part. Thanks!
  • pman6 - Thursday, September 29, 2022

    intel supports 8k60 AV1 decode.

    Does ryzen 7000 support 8k60 ??
  • GeoffreyA - Monday, October 3, 2022

    The Radeon Technology Group is getting 16K ready.
  • yhselp - Thursday, September 29, 2022

    I'd love to see you investigate memory scaling on the Zen 4 core.
  • Myrandex - Thursday, September 29, 2022

    The table on page four mentions "Quad Channel (128-bit bus)" for memory support. Does that mean we could have a 4 memory slot solution, with one memory module per channel, with four channel support? This way to drastically increase memory bandwidth all while maintaining those fast DDR5 frequencies?
  • Ryan Smith - Friday, September 30, 2022

    No. That configuration would be no different than a 2 DIMM setup in terms of bandwidth or capacity. Slotted memory is all configured DIMMs; as in Dual Inline Memory Module.
  • GeoffreyA - Friday, September 30, 2022

    All in all, excellent work, AMD, on the 7950X. Undoubtedly shocking performance. Even that dubious AVX-512 benchmark where Intel used to win, Zen 4 has taken command of it. However, lower your prices, AMD, and don't be so greedy. Little by little, you are becoming Intel. Don't be evil.

    Thanks, Ryan and Gavin, for the review and all the hard work. Much appreciated. Have a great week.
  • Footman36 - Friday, September 30, 2022

    Yawn. I really don't see what the big fuss is about. I currently run 5600X and was interested to see how the 7600X compared and while it does look like a true uplift in performance over the 5600X, I would have to factor in cost of new motherboard and DDR5 ram! On top of that, the comparison is not exactly apples to apples in the testing. 7600X has a turbo speed of 5.3, 5600X 4.6. 7600X runs with 5200 DDR5 and 5600X 3200 DDR4, 7600X has TDP 105W, 5600X 65W. If you take a look at the final page where the 7950X is tested in ECO mode which effectively supplies 65W instead of 105W you lose 18% performance. If we try to do apples to apples and use eco mode with 7600X, to get apples to apples with the 65W of the 5600X, then lower boost to 4.6ghz then the performance of the 2 cpu's looks very similar. Perhaps not the way I should be analyzing the results, but just my observation....
