System Performance

Not all motherboards are created equal. On the face of it, they should all perform the same and differ only in the functionality they provide; in practice, this is not the case. The obvious differentiator is power consumption, but manufacturers also vary in how well they optimize USB speed, audio quality (based on the audio codec used), POST time, and DPC latency. These can come down to manufacturing process and engineering prowess, so we test for them.

For B550 we are running Windows 10 64-bit with the 1909 update.

Power Consumption

Power consumption was tested on the system with a single ASUS GTX 980 installed, using a wall meter connected to the Thermaltake 1200W power supply. This power supply has ~75% efficiency above 50W and over 90% efficiency at 250W, suitable for both idle and multi-GPU loading. Reading power at the wall allows us to compare how well the UEFI and the board manage power delivery to components under load, and it includes typical PSU losses due to efficiency. These are the real-world values that consumers may expect from a typical system (minus the monitor) using this motherboard.
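
As a rough illustration of how a wall reading relates to the power the components actually receive, here is a minimal sketch; the wall-reading figures are hypothetical, and the efficiencies are the approximate values quoted above:

```python
def component_power(wall_watts: float, efficiency: float) -> float:
    """Estimate the DC power reaching the components from a wall reading."""
    return wall_watts * efficiency

# Hypothetical wall readings, paired with the approximate PSU
# efficiencies quoted above (~75% near idle, 90%+ around 250W).
print(component_power(60.0, 0.75))   # ~45 W delivered near idle
print(component_power(250.0, 0.90))  # ~225 W delivered under load
```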

This method of power measurement may not be ideal, and the numbers may not seem representative given the high-wattage power supply being used (we use the same PSU to remain consistent across a series of reviews, and some boards on our testbed get tested with three or four high-powered GPUs). The important point to take away, however, is the relationship between the numbers: these boards are all tested under the same conditions, so the differences between them should be easy to spot.

[Charts: Power: Long Idle (w/ GTX 980); Power: OS Idle (w/ GTX 980); Power: Prime95 Blend (w/ GTX 980)]

The B550D4-4L performs very well in our power consumption testing, which can be attributed to its lack of onboard controllers compared to the other B550 models tested. Despite being a commercial-focused motherboard, it supports Precision Boost Overdrive on Ryzen processors, the effect of which is clearly visible in our full-load test.

Non-UEFI POST Time

Different motherboards have different POST sequences before an operating system is initialized. Much of this depends on the board itself: POST time is determined by the controllers on board and by the sequence in which those extras are initialized. As part of our testing, we measure POST time with a stopwatch, from pressing the power button to the point at which Windows starts loading. (We discount Windows loading itself, as it is highly variable given Windows-specific features.)

[Chart: Non-UEFI POST Time]

In our non-UEFI POST time testing, the ASRock did well, considering that professional boards tend to take much longer to POST into Windows. The first time we POSTed the board it took considerably longer due to the BMC, but between the default settings and a configuration stripped of non-essential controllers, there wasn't much difference in POST times.

DPC Latency

Deferred Procedure Calls (DPCs) are part of how Windows handles interrupt servicing. While waiting for the processor to acknowledge a request, the system queues all interrupt requests by priority. Critical interrupts are handled as soon as possible, whereas lower-priority requests, such as audio, sit further down the line. If the audio device requires data, it has to wait until its request is processed before the buffer is filled.
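
As a toy illustration of priority-ordered servicing (this is not the actual Windows scheduler; the priorities and device names are invented), consider:

```python
import heapq

# Lower number = higher priority; requests are serviced in priority order.
pending = [(0, "storage interrupt"), (1, "network interrupt"),
           (2, "audio buffer refill")]
heapq.heapify(pending)

while pending:
    priority, request = heapq.heappop(pending)
    print(f"servicing: {request} (priority {priority})")
# The audio request is serviced last, so its buffer waits on everything else.
```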

If the device drivers of higher-priority components in a system are poorly implemented, this can delay request scheduling and processing. That can leave the audio buffer empty, producing the characteristic audible pauses, pops, and clicks. The DPC latency checker measures how much time is spent processing DPCs from driver invocation; lower values mean better audio transfer at smaller buffer sizes. Results are measured in microseconds.
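
To see why buffer size matters here, below is a minimal sketch of the refill deadline for some hypothetical buffer sizes at a 48 kHz sample rate; sustained DPC latency above a buffer's budget means audible dropouts:

```python
def buffer_deadline_us(buffer_samples: int, sample_rate_hz: int) -> float:
    """Time (in microseconds) available to refill the audio buffer."""
    return buffer_samples / sample_rate_hz * 1_000_000

# Hypothetical buffer sizes at 48 kHz: a 128-sample buffer must be
# refilled roughly every 2667 us, or playback glitches.
for samples in (64, 128, 256, 512):
    print(samples, "samples ->", round(buffer_deadline_us(samples, 48_000)), "us")
```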

[Chart: Deferred Procedure Call Latency]

We test DPC latency with default settings straight out of the box, and as our graph shows, the ASRock isn't close to being competitive with other models we've tested. The B550D4-4L also doesn't include integrated audio, so users looking to build an audio workstation will need to rely on external audio controllers.

Comments

  • mode_13h - Saturday, May 22, 2021 - link

    Yeah, but 5x 1 gigabit ports is kinda ridiculous. It's not as if that costs nothing and uses no PCIe lanes.
  • Spunjji - Monday, May 24, 2021 - link

    It costs very little and uses very few lanes, though - depending on how they've done it, it could be as few as one lane for the 4 1GbE ports but is likely no more than 2. The management port will be using another, but that's still plenty left over for whatever the user needs.
  • Spunjji - Monday, May 24, 2021 - link

    "The exact smae argument could be made the other way around"
    Only if you ignore cost! It makes sense to integrate the minimum where upgrades are possible, rather than forcing the far higher cost of 10GbE everyone who buys your board.
  • fmyhr - Thursday, May 20, 2021 - link

    Yup! Love that they put GOOD 1Gb NICs in there: i210s. Perfect for an edge router physically isolating different networks.
  • Lucky Stripes 99 - Thursday, May 20, 2021 - link

    I also agree that dual 2.5 Gbps would be more ideal as the market begins to move away from 1 Gbps. There are niche uses for quad Eth port boards, but the ones I'm most familiar with tend to use smaller form-factor boards.

    I get the feeling that this was designed for a specific industrial/embedded customer with a unique use case who didn't mind Asrock releasing to the general market.
  • BedfordTim - Friday, May 21, 2021 - link

    You could, for example, hook up 4 GigE cameras. Most can't take advantage of 2.5GbE ports, but they can saturate a 1GbE port.
  • BedfordTim - Friday, May 21, 2021 - link

    There are quite a few Atom boards with 2.5GbE ports now.
  • ZENSolutionsLLC - Friday, May 21, 2021 - link

    Because regardless of the bandwidth, a single 10G NIC is a single point of failure, which is a big no-no in a corporate enterprise IT environment. Multiple 1Gb NICs are (still very much) used for LACP links spanning multiple switching fabrics. They're also heavily used on VMware and Hyper-V hosts to separate out management traffic, vMotion, etc., and for aggregation and link failover.
  • Jorgp2 - Friday, May 21, 2021 - link

    The fuck kind of server would have 2.5G or 5G ethernet?
  • bananaforscale - Saturday, May 22, 2021 - link

    A roll-your-own NAS.
