System Performance

Not all motherboards are created equal. On the face of it, they should all perform the same and differ only in the features they offer, but this is not the case. The obvious differentiator is power consumption, but there is also how well each manufacturer has optimized USB speed, audio quality (which depends on the audio codec used), POST time, and DPC latency. These can come down to manufacturing process and engineering prowess, so we test them.

For B550 boards, we are running Windows 10 64-bit with the 1909 update.

Power Consumption

Power consumption was tested with the system in a single ASUS GTX 980 GPU configuration, using a wall meter connected to the Thermaltake 1200W power supply. This power supply has ~75% efficiency above 50 W and 90%+ efficiency at 250 W, making it suitable for both idle and multi-GPU loading. Reading power at the wall lets us compare how well the UEFI and the board manage power delivery to components under load, and it includes typical PSU losses due to efficiency. These are the real-world values that consumers can expect from a typical system (minus the monitor) using this motherboard.

This method of power measurement is not ideal, and you may feel the numbers are not representative because of the high-wattage power supply being used (we use the same PSU to remain consistent across a series of reviews, and because some boards on our testbed are tested with three or four high-powered GPUs). The important point, however, is the relationship between the numbers: all of these boards are tested under the same conditions, so the differences between them should be easy to spot.
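
To make the efficiency figures concrete, here is a minimal sketch (not part of our test procedure) that estimates the DC-side component draw from a wall reading. It assumes the two efficiency points quoted above and simple linear interpolation between them; the interpolation and the sample readings are illustrative assumptions, not measured data for this PSU.

```python
# Rough estimate of component-side (DC) draw from a wall-meter reading.
# Efficiency anchors come from the quoted figures (~75% near 50 W,
# 90%+ near 250 W); linear interpolation between them is an assumption.

def psu_efficiency(wall_watts: float) -> float:
    """Interpolate PSU efficiency between the two quoted points."""
    lo_w, lo_eff = 50.0, 0.75
    hi_w, hi_eff = 250.0, 0.90
    if wall_watts <= lo_w:
        return lo_eff
    if wall_watts >= hi_w:
        return hi_eff
    t = (wall_watts - lo_w) / (hi_w - lo_w)
    return lo_eff + t * (hi_eff - lo_eff)

def dc_load(wall_watts: float) -> float:
    """Estimated power actually delivered to the components."""
    return wall_watts * psu_efficiency(wall_watts)

# Hypothetical wall-meter readings in watts.
for reading in (55, 120, 250):
    print(f"{reading:>4} W at the wall ~= {dc_load(reading):.0f} W at the components")
```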

[Graphs: Power: Long Idle (w/ GTX 980); Power: OS Idle (w/ GTX 980); Power: Prime95 Blend (w/ GTX 980)]

The B550D4-4L performs very well in our power consumption testing, which can be attributed to its relative lack of onboard controllers compared to the other B550 models we have tested. Despite being a commercial-focused motherboard, it supports Precision Boost Overdrive on Ryzen processors, and the effect of this is clearly visible in our full-load result.

Non-UEFI POST Time

Different motherboards have different POST sequences before an operating system is initialized. A lot of this is dependent on the board itself, with POST time determined by the controllers on board (and the sequence in which those extras are initialized). As part of our testing, we measure POST time with a stopwatch: the time from pressing the ON button on the computer to when Windows starts loading. (We discount Windows loading itself, as it is highly variable given Windows-specific features.)
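
Because a hand-timed stopwatch reading carries a couple of tenths of a second of human reaction-time error, repeating the measurement and averaging helps. Below is a minimal sketch of that bookkeeping; the sample readings are hypothetical, not values from this review.

```python
import statistics

# Hypothetical stopwatch readings (seconds) from repeated cold boots.
# Averaging several runs smooths out human reaction-time error.
post_times = [14.8, 15.1, 14.9, 15.3, 15.0]

mean = statistics.mean(post_times)
stdev = statistics.stdev(post_times)
print(f"POST time: {mean:.1f} s +/- {stdev:.1f} s over {len(post_times)} runs")
```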

[Graph: Non-UEFI POST Time]

In our non-UEFI POST time testing, the ASRock did well, considering professional boards tend to take much longer to POST into Windows. The first time we POSTed the board, it took considerably longer due to the BMC, but between the default settings and a configuration stripped of non-essential controllers, there wasn't much difference in POST times.

DPC Latency

Deferred Procedure Calls (DPCs) are part of how Windows handles interrupt servicing. Rather than servicing every request immediately, the system queues interrupt requests by priority while waiting for the processor to acknowledge them. Critical interrupts are handled as soon as possible, whereas lower-priority requests, such as audio, sit further down the line. If the audio device requires data, it has to wait until its request is processed before the buffer is filled.

If the device drivers of higher-priority components in a system are poorly implemented, they can cause delays in request scheduling and processing time. This can lead to an empty audio buffer and the characteristic audible pauses, pops, and clicks. A DPC latency checker measures how much time is spent processing DPCs from driver invocation; lower values allow reliable audio transfer at smaller buffer sizes. Results are measured in microseconds.
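
To put numbers on that buffer relationship, here is a minimal sketch with illustrative values (not measurements from this board) showing how long an audio buffer lasts at a given sample rate, and therefore how large a DPC latency spike it can absorb before running dry:

```python
# How long does an audio buffer last, and what DPC latency can it absorb?
# The sample rate, buffer sizes, and worst-case latency are illustrative.

def buffer_duration_us(frames: int, sample_rate_hz: int) -> float:
    """Microseconds before a buffer of `frames` samples runs dry."""
    return frames / sample_rate_hz * 1_000_000

max_dpc_us = 2500  # hypothetical worst-case DPC latency from a checker

for frames in (64, 128, 256, 512):
    dur = buffer_duration_us(frames, 48_000)
    verdict = "safe" if dur > max_dpc_us else "risk of pops and clicks"
    print(f"{frames:>3}-frame buffer @ 48 kHz = {dur:>7.0f} us -> {verdict}")
```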

[Graph: Deferred Procedure Call Latency]

We test DPC latency with default settings straight out of the box, and as our graph shows, the ASRock isn't close to being competitive with other models we've tested. The B550D4-4L also doesn't include integrated audio, so users looking to build an audio workstation will need to rely on external audio controllers.

Comments

  • bananaforscale - Saturday, May 22, 2021 - link

    This.
  • mode_13h - Friday, May 21, 2021 - link

    > I think 2x2.5G would be more appropriate for the target market of this board.

    Probably the main issue is that support for 2.5 GigE is (still?) uncommon on enterprise switches.

    > Anybody considering 10Gbe is likely on the verge of adopting 25/40/100G anyway

    A lot of people are just starting to move up to 10 GigE. Anything faster doesn't make a lot of sense for SOHO applications.
  • bananaforscale - Saturday, May 22, 2021 - link

    Especially considering how overpriced 10G twisted pair NICs are.
  • mode_13h - Saturday, May 22, 2021 - link

Eh, I got a pair 2 years ago for < $100 each. I've spent more on a 3Com 10 Megabit PCI NIC back in the late '90s. Or maybe it was 100 Mbps.
  • Samus - Monday, May 24, 2021 - link

    Probably 100 Mbps if it was PCI. The 100 Mbps ISA NICs were pretty damn pricey, because by the time 100 Mbps became commonplace, ISA was on its way out and PCI was becoming mainstream (Pentium era).

    Even now, a 100 Mbps ISA network card is $50+.
  • PixyMisa - Friday, May 21, 2021 - link

    By preference, but some datacenters use Cat6 and others use SFP. Others have already moved up to 25GbE. 10GBaseT is perfect for workstations, but not necessarily so for servers.
  • mode_13h - Saturday, May 22, 2021 - link

    > some datacenters use Cat6

    Really? For what? Management? Twisted-pair is very energy-intensive at 10 Gigabits, and can't go much above. So, I'd imagine they just use it for management @ 1 Gbps.

    Within racks, I'd expect to see SFP+ over copper. Between racks, it's optical all the way.
  • Samus - Monday, May 24, 2021 - link

    I've toured a lot of datacenters in my lifetime, and I can honestly say I haven't seen copper wiring used for anything but IPMI and, in extreme cases, POTS for telephone backup comms, though even that is mostly dead now that it has been replaced by cellular. Even HP ILO2 supports fiber for remote management, and you can bet that at the distances and energy profiles data centers are working with, they use fiber wherever they can.
  • alexey@altagon.com - Friday, May 21, 2021 - link

    Agree, companies are saving money and customers are paying more.
  • Spunjji - Monday, May 24, 2021 - link

    That's an opinion, for sure.
