System Performance

Not all motherboards are created equal. On the face of it, they should all perform the same and differ only in the functionality they provide; however, this is not the case. The most obvious differentiator is power consumption, but manufacturers also differ in how well they optimize USB speed, audio quality (based on the audio codec), POST time, and latency. Much of this comes down to the manufacturing process and engineering prowess, so these areas are tested.

For X570 we are running Windows 10 64-bit with the 1903 update, as per our Ryzen 3000 CPU review.

Power Consumption

Power consumption was tested on the system while in a single ASUS GTX 980 GPU configuration with a wall meter connected to the Thermaltake 1200W power supply. This power supply has ~75% efficiency above 50W, and 90%+ efficiency at 250W, making it suitable for both idle and multi-GPU loading. This method of power reading allows us to compare how the UEFI and the board manage power delivery to components under load, and includes typical PSU losses due to efficiency. These are the real-world values that consumers may expect from a typical system (minus the monitor) using this motherboard.

While this method of power measurement may not be ideal, and these numbers may not seem representative due to the high-wattage power supply being used (we use the same PSU to remain consistent across a series of reviews, and some boards on our test bed are tested with three or four high-powered GPUs), the important point to take away is the relationship between the numbers. These boards are all tested under the same conditions, so the differences between them should be easy to spot.
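
As a rough illustration of how a wall reading relates to component-side draw, the short sketch below (a hypothetical example, not part of our test procedure) interpolates a PSU efficiency curve from the two figures quoted above and back-calculates DC power from a wall measurement; the linear interpolation between those two points is an assumption rather than measured data.

    # Minimal sketch: estimate DC (component-side) power from a wall-meter reading,
    # assuming a PSU efficiency curve. The 75%-at-50W and 90%-at-250W points come
    # from the text above; linear interpolation between them is an assumption.

    def psu_efficiency(wall_watts: float) -> float:
        """Return an assumed efficiency for a given wall-side load."""
        low, high = (50.0, 0.75), (250.0, 0.90)   # (wall watts, efficiency)
        if wall_watts <= low[0]:
            return low[1]
        if wall_watts >= high[0]:
            return high[1]
        return low[1] + (high[1] - low[1]) * (wall_watts - low[0]) / (high[0] - low[0])

    def dc_power(wall_watts: float) -> float:
        """Component-side power = wall power minus PSU conversion losses."""
        return wall_watts * psu_efficiency(wall_watts)

    if __name__ == "__main__":
        for wall in (55, 120, 250):
            print(f"{wall:>4} W at the wall -> ~{dc_power(wall):.0f} W at the components "
                  f"({psu_efficiency(wall):.0%} efficiency assumed)")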

[Charts: Power: Long Idle (w/ GTX 980); Power: OS Idle (w/ GTX 980); Power: Prime95 Blend (w/ GTX 980)]

The power consumption at full load is marginally higher than the MSI MEG X570 Ace, by a single watt, but in both the idle and long idle power states the power consumption is considerably higher. The larger PCB and bigger controller set are contributing factors.

Non-UEFI POST Time

Different motherboards have different POST sequences before an operating system is initialized. A lot of this is dependent on the board itself, and POST time is determined by the controllers on board (and the sequence in which those extras are initialized). As part of our testing, we measure the POST time using a stopwatch: the time from pressing the ON button on the computer to when Windows starts loading. (We discount Windows loading, as it is highly variable given Windows-specific features.)
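
This review's numbers come from a manual stopwatch under Windows, but for readers who want an automated figure, the sketch below shows one way to pull a comparable firmware (POST) duration on a Linux/UEFI test bed using systemd-analyze; it assumes the common seconds-only output format and is an illustration, not the method behind the chart below.

    # Minimal sketch (not the method used in this review): on a Linux/UEFI system,
    # "systemd-analyze time" reports a firmware phase that roughly corresponds to
    # POST. This assumes output of the form "... 12.345s (firmware) + ..." and
    # ignores the less common minute-formatted values.

    import re
    import subprocess

    def firmware_time_seconds():
        out = subprocess.run(["systemd-analyze", "time"],
                             capture_output=True, text=True, check=True).stdout
        match = re.search(r"([\d.]+)s \(firmware\)", out)
        return float(match.group(1)) if match else None

    if __name__ == "__main__":
        t = firmware_time_seconds()
        if t is None:
            print("No firmware phase reported (non-UEFI boot?)")
        else:
            print(f"Firmware (POST) time: {t:.3f} s")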

[Chart: Non-UEFI POST Time]

As with the MSI MEG X570 Ace, the MSI MEG X570 Godlike also has extremely long POST times, both at default settings and with controllers switched off. We did manage to shorten the POST time by over two seconds by switching off the networking and audio controllers, but it remains disappointing in comparison to other models tested with our AMD Ryzen 7 3700X processor.

DPC Latency

Deferred Procedure Calls are part of how Windows handles interrupt servicing. Rather than servicing every interrupt the moment it arrives, the system queues interrupt requests by priority. Critical interrupts are handled as soon as possible, whereas lower-priority requests, such as audio, are pushed further down the line. If the audio device requires data, it has to wait until its request is processed before the buffer is filled.

If the device drivers of higher-priority components in a system are poorly implemented, this can cause delays in request scheduling and processing time. This can lead to an empty audio buffer and the characteristic audible pauses, pops, and clicks. The DPC latency checker measures how much time is taken processing DPCs from driver invocation; the lower the value, the better the audio transfer at smaller buffer sizes. Results are measured in microseconds.
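
To put the microsecond figures in context, the arithmetic below (a hypothetical illustration, with assumed buffer sizes and an assumed latency value, not results from this board) works out how long an audio buffer lasts at 48 kHz and how much headroom a given DPC latency leaves before the buffer runs dry.

    # Hypothetical illustration: how long an audio buffer lasts at a given sample
    # rate, and how a measured DPC latency eats into that window. The buffer sizes
    # and the 200 us example latency are assumptions for the sake of the arithmetic.

    def buffer_duration_us(samples: int, sample_rate_hz: int = 48_000) -> float:
        """Time in microseconds covered by one audio buffer."""
        return samples / sample_rate_hz * 1_000_000

    if __name__ == "__main__":
        dpc_latency_us = 200  # example value, in microseconds
        for samples in (64, 128, 256, 512):
            window = buffer_duration_us(samples)
            print(f"{samples:>3}-sample buffer = {window:>7.1f} us "
                  f"-> {window - dpc_latency_us:>7.1f} us of headroom at "
                  f"{dpc_latency_us} us DPC latency")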

[Chart: Deferred Procedure Call Latency]

We test DPC latency at the default settings straight out of the box, and the MSI MEG X570 Godlike performs noticeably better than the MSI MEG X570 Ace. That said, the ASRock models tend to have the upper hand when it comes to out-of-the-box DPC latency.

Comments

  • oynaz - Saturday, August 31, 2019 - link

    I actually prefer more cores to faster cores in my DAW. Each effect bus, or track, cannot be split across multiple cores, true, but you usually have quite a few buses going.
  • inighthawki - Friday, August 30, 2019 - link

    Gaming
  • Sweetbabyjays - Thursday, August 29, 2019 - link

    In a professional setting, where you are doing thread intensive workloads, and IT is not cool with you overclocking...then yes, I totally agree 3900x makes way more sense.

    "use less power overall" ? 9900k has a TDP of 95W, while the 3900x has a TDP of 105W, Additionally the Z390 chipset has a TDP of 6W while the X570 has a TDP of 11W. Now I know there is a discrepancy between how AMD and Intel measure TDP, so the numbers at face value may not be telling the whole story. That said, I would be very interested to see overall system power draw for both to test the veracity of your statement.
  • AshlayW - Thursday, August 29, 2019 - link

    Oh boy, you actually think the 9900K uses 95W? Joke's on you pal, that's at 3.6 GHz. At full turbo clocks the 9900K uses 150-200W. Ryzen 3000 is almost twice the performance per watt in some scenarios.
  • Trikkiedikkie - Saturday, August 31, 2019 - link

    The 3900 also integrates many things inside the processor, whereas the 9900 needs extra chips on the board. And Intel's numbers only apply at base clock.
  • Sweetbabyjays - Thursday, August 29, 2019 - link

    "trounce it with it's higher core-count parts in multi-threaded scenarios." Aside from some synthetic benchmarks, I suggest looking at the puget systems website for professional benchmarks, if you're looking for more real world professional performance scenarios.

    The 12-core part is better in some scenarios (and in some the 9900K is better), but rarely (if ever) by more than 10%. Perhaps your definition of "trounce" is different from mine, though.

    If you're gaming much more often than you are working/creating, the increased core count really won't improve your overall computing experience much, if at all.
  • Oliseo - Thursday, August 29, 2019 - link

    This is true. But the question remains: just how many people actually use highly multi-threaded workloads?

    I'd wager that if you drew a Venn diagram of gamers and content creators, the content creators would simply be a small spot on the very large gaming circle.

    I know a lot of gamers, yet I struggle to meet 3D CAD designers or film editors.

    So yes, you're right, AMD will trounce Intel in that respect. But until we get games using more than 8 cores, the majority of people will not be better off because they simply don't need those extra cores as they don't run any software that can make use of them.

    And that goes for AMD folks wanting to get the AMD chips as well.
  • Trikkiedikkie - Saturday, August 31, 2019 - link

    Gaming is soo small compared to people doing actual work.
  • AshlayW - Thursday, August 29, 2019 - link

    $150 more, for 10% higher single-core performance when both CPUs already have extremely good single-core performance, and you can put a 4700X in the same motherboard next year that will have even higher single-core performance than the 9900K? Seriously people, consumer stupidity is why Intel is still selling CPUs.
  • Trikkiedikkie - Saturday, August 31, 2019 - link

    Single core is soooo last century.

    Only people that have very little serious work apart from Adobe want that.
