System Performance

Not all motherboards are created equal. On paper, they should all perform the same and differ only in the functionality they provide - however, this is not the case. The obvious differentiator is power consumption, but manufacturers also vary in how well they optimize USB speed, audio quality (based on the audio codec), POST time, and latency. This often comes down to manufacturing process and engineering prowess, so these are tested.

For X570 we are running Windows 10 64-bit with the 1903 update, as per our Ryzen 3000 CPU review.

Power Consumption

Power consumption was tested on the system while in a single ASUS GTX 980 GPU configuration, with a wall meter connected to the Thermaltake 1200W power supply. This power supply is ~75% efficient above 50 W and 90%+ efficient at 250 W, making it suitable for both idle and multi-GPU loading. Reading power at the wall allows us to compare how well the UEFI and the board manage power delivery to components under load, and it includes typical PSU losses due to efficiency. These are the real-world values that consumers can expect from a typical system (minus the monitor) using this motherboard.

While this method of power measurement is not ideal, and these numbers may seem unrepresentative given the high-wattage power supply used (we use the same PSU to remain consistent across a series of reviews, and some boards on our testbed are tested with three or four high-powered GPUs), the important point to take away is the relationship between the numbers. These boards are all tested under the same conditions, so the differences between them should be easy to spot.
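Because the wall reading includes PSU conversion losses, the actual DC power delivered to components is lower than the meter shows. The sketch below estimates that DC draw from a wall reading using the two efficiency points quoted above; the linear interpolation between them is an illustrative assumption, not the PSU's published efficiency curve.

```python
# Sketch: estimating DC component draw from a wall (AC) power reading.
# Efficiency points are taken from the figures quoted for the test PSU
# (~75% near 50 W, 90%+ at 250 W); the interpolation is an assumption.

def estimated_dc_power(wall_watts):
    """Estimate DC power delivered to components from a wall reading."""
    if wall_watts <= 50:
        efficiency = 0.75
    elif wall_watts >= 250:
        efficiency = 0.90
    else:
        # Linear interpolation between the two quoted points (assumed).
        efficiency = 0.75 + (wall_watts - 50) / (250 - 50) * (0.90 - 0.75)
    return wall_watts * efficiency

# A 150 W wall reading falls midway between the two points, so an
# efficiency of ~82.5% is assumed, giving roughly 124 W of DC draw.
```

Note that since every board is measured behind the same PSU, these losses cancel out when comparing boards against each other.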

Power: Long Idle (w/ GTX 980)
Power: OS Idle (w/ GTX 980)
Power: Prime95 Blend (w/ GTX 980)

Power consumption at full load is very efficient, with a clear-cut lead of over 14 W against the MSI MEG X570 Godlike and 13 W against the slightly lesser-specced MSI MEG X570 Ace. In our long idle test, the X570 Aorus Xtreme performed surprisingly poorly with a power draw of 82 W; this looks like an anomaly, but we tested it three times with similar results, which probably indicates the system running something in the background when long idle is detected. Our OS idle result puts the X570 Aorus Xtreme back into the normal range we've seen from AM4 motherboards, with a respectable power draw of just 63 W.

Non-UEFI POST Time

Different motherboards have different POST sequences before an operating system is initialized. A lot of this is dependent on the board itself, with POST time determined by the onboard controllers (and the sequence in which those extras are initialized). As part of our testing, we measure the POST time using a stopwatch: the time from pressing the power button to when Windows starts loading. (We discount Windows loading, as it is highly variable given Windows-specific features.)

Non-UEFI POST Time

The GIGABYTE X570 Aorus Xtreme performed competitively against the other boards on test, with a default POST time of just over 25 seconds. This isn't too bad, but it doesn't quite match the ASRock models we have tested so far, which dominate our charts. With the audio and networking controllers disabled, we managed to shave a couple of seconds off the overall boot time.

DPC Latency

Deferred Procedure Call latency is a side effect of how Windows handles interrupt servicing. While waiting for a processor to acknowledge a request, the system queues all interrupt requests by priority. Critical interrupts are handled as soon as possible, whereas lower-priority requests, such as audio, sit further down the line. If the audio device requires data, it has to wait until its request is processed before the buffer is filled.

If the device drivers of higher-priority components in a system are poorly implemented, this can cause delays in request scheduling and processing time. This can lead to an empty audio buffer and the characteristic audible pauses, pops, and clicks. The DPC latency checker measures the time taken to process DPCs from driver invocation; lower values result in better audio transfer at smaller buffer sizes. Results are measured in microseconds.
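The relationship between DPC latency and buffer size can be sketched with a little arithmetic: if the worst-case DPC latency outlasts the amount of audio a buffer holds, the buffer empties before it can be refilled. The numbers below are illustrative assumptions, not measured values from this review.

```python
# Sketch: why high DPC latency causes audio dropouts. If servicing a
# deferred procedure call takes longer than the duration of audio the
# buffer holds, the buffer underruns (heard as pops and clicks).

def buffer_duration_us(buffer_frames, sample_rate_hz):
    """Duration of audio (in microseconds) held by a buffer."""
    return buffer_frames / sample_rate_hz * 1_000_000

def risks_dropout(dpc_latency_us, buffer_frames, sample_rate_hz):
    """True if worst-case DPC latency can outlast the audio buffer."""
    return dpc_latency_us >= buffer_duration_us(buffer_frames, sample_rate_hz)

# A 64-frame buffer at 48 kHz holds ~1333 us of audio, so a 500 us
# DPC spike is survivable, while a 2000 us spike starves the buffer.
```

This is why lower DPC latency permits smaller buffers, and smaller buffers in turn mean lower audio latency for the listener.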

Deferred Procedure Call Latency

We test the DPC at the default settings straight from the box, and the GIGABYTE X570 Aorus Xtreme performed similarly to the MSI MEG X570 Godlike which this model competes with at the upper-end of the X570 product stack.


42 Comments

  • Smell This - Tuesday, September 24, 2019 - link

    Maybe __ but I'm not sure I get your point.

    A conventional top down plugin has to bend 180-degrees to route under the tray as opposed to a 90-degree 'bend' ??
  • DanNeely - Tuesday, September 24, 2019 - link

    A 90* plug doesn't get you anything unless you're routing the cable on the same side as the board (which isn't normal these days outside of SFF), or can make a tight 90* bend as the cable comes out of the routing hole. Unless you have a really flexible cable you're not going to be able to do that. Instead you end up having to make a 270* loop (up, then forward into where drive bays used to be, and then down and back to the board), so you still end up with a big loop.

    With a big loop a 180* can be done without putting any bending stress on the plug. A 270* either needs more cable to match the same bending radius or will have to be tighter and puts more stress on the board as a result. With the one system I had this sort of setup in there wasn't enough slack in the cable to do a loop with enough slack that it wasn't trying to bend/twist the board up. When I plugged the cable in before the board was screwed down it was flexing the board up when I tried screwing it down on the edge with the socket. With the board screwed down first, it was very difficult to get the plug to the socket partly because of the tray meaning I could only grip the cable from one side and partly because the cable didn't want to be bent tightly enough to go in. On the whole it was among the most frustrating build steps I ever did and the stiffness of the cable meant that it completely failed at the notional goal of keeping the wires out of the way that's normally behind 90* edge plugs.

    My initial thought was that a rigid 90* adapter that extended out to the cable management hole would avoid the problem by removing the need to tightly bend the cable to fit. Thinking a bit more, that probably wouldn't be enough because making a tight bend behind the board would be just as difficult; you'd either need a 180* piece so the cable could stay flat on the backside of the board, or a short extension with all loose wires to make it work.
  • Ratman6161 - Tuesday, September 24, 2019 - link

    How about this:

    https://www.newegg.com/cooler-master-cma-cemb00xxb...
  • Ratman6161 - Tuesday, September 24, 2019 - link

    https://www.google.com/search?q=ATX+24+Pin+90%C2%B...
  • MamiyaOtaru - Wednesday, October 9, 2019 - link

    cool, connect that to the side plug and come at it from behind /s
  • eek2121 - Tuesday, September 24, 2019 - link

    IMO we need a better solution for all the connectors that exist on motherboards. For example, those USB3 connectors. How many times have I bent a pin trying to plug one in when it accidentally gets pulled out? More than I'd care to admit. I mean hell, at least put a snap/latch on it similar to what most SATA cables have. Ideally, we'd have 1 cable running from the case to the motherboard, and 1 cable running from the PSU to the motherboard. Both connectors would have the little snap or latch or whatever you call it, and both would be right-angled so that they can easily be hidden from view for a nice clean look.
  • DanNeely - Wednesday, September 25, 2019 - link

    USB-C does use a smaller and more robust connector than USB 3.0 (you can see one on this board near the diagnostic code display) that appears to take its design inspiration from PCIe.

    A single cable from the PSU to the mobo would run into one size fits all problems and end up huge, ex the difference between the needs of an SFF system using a 4 pin CPU header and a high end work station/gaming board using 2x8pin CPU headers and a PCIe header (to give extra power for multiple GPUs).

    What could be done easily enough would be to gut the 24 pin cable by making about half of its wires optional; even if not followed up by a new smaller plug/socket a few years later it would remove a lot of the headaches from the worst connector on the mobo. This could be done safely because the original 20 pin connector dates back to when the CPU ran on 3.3v, everything else ran on 5V, and hardly anything needed 12V; vs today when 5V is used almost exclusively for USB, 3.3V for odds and ends (eg 10 of the 25/75W a PCIe card can draw from the mobo is 3.3v not 12v), while everything else runs 12V to component specific voltage regulators.

    The reason nothing's happened is more or less the same as why the mess of jumper style headers for the front panel has never been replaced by a standard block style connection. The PC industry as a whole no longer cares about desktops enough to expend the effort needed for a major new standardization round. Big OEMs can and do address the issues via proprietary components scoring spare part lockin as a bonus; while for everyone else (eg the people who make parts for customer built systems/boutique vendors) the upfront time spent and short term costs from needing to bundle legacy/modern adapters for a few years is too high to try and push something on their own. Residual trauma from the effort spent on the failed BTX standard some years back was probably an issue back when desktops were still important enough of a market segment to get serious engineering effort in standard modernization as well.
  • Dug - Monday, October 7, 2019 - link

    I just have to chime in and agree with changing the entire layout. Look at what OEMs can do when they aren't tied to the ancient ATX power supply and standard pin layout. Look at the power supply used in an iMac Pro. That's how it should be done. These giant cables and connectors are really unnecessary.
  • 4everalone - Tuesday, September 24, 2019 - link

    I wish MB makers would start providing SFP+ ports instead of 10GBASE-T ports. That way we at-least have the option of running fiber/copper.
  • TheinsanegamerN - Tuesday, September 24, 2019 - link

    I like the look of the board and passive X570 cooling, but am disappointed at the lack of expansion slots. No USB 3 header? Really? Just a gen 2 that can't be used on the vast majority of cases, and even if it can it will only feed a singular port? No PCIe x1 slots for, say, a USB 3 header card to make up for the lack of internal headers?

    Granted, this is a subjective problem, not many people use more than 1-2 slots, but for the price, I would want way more expansion for future upgrades. Think USB 3 headers, replacement NIC or sound cards in case of onboard failure, NVMe cards for RAID arrays and better cooling, etc.
