System Performance

Not all motherboards are created equal. On the face of it, they should all perform the same and differ only in the functionality they provide; however, this is not the case. The most obvious indicators are power consumption, POST time, and DPC latency, all of which can come down to manufacturing process and design prowess, so these are what we test.

Power Consumption

Power consumption was tested with the system in a single MSI GTX 1080 Gaming configuration, using a wall meter connected to the power supply. The only system in our results that wasn't tested with a graphics card is the GIGABYTE MZ72-HB0, which drives its display output via the ASPEED AST2500 BMC controller.

Our power supply has ~75% efficiency above 50 W and 90%+ efficiency at 250 W, making it suitable for both idle and multi-GPU loading. This method of power reading allows us to compare how the UEFI and the board manage power delivery to components under load, and includes typical PSU losses due to efficiency. These are the real-world values that consumers can expect from a typical system (minus the monitor) using this motherboard.
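To put the efficiency point into perspective, here is a minimal sketch (our own illustration, not part of the review's tooling) that converts a wall reading into an estimated DC-side draw using the two efficiency figures quoted above; the linear interpolation between them is an assumption on our part.

```python
# Minimal sketch (our own, not the review's tooling): estimate DC-side power
# from a wall-meter reading, assuming ~75% PSU efficiency at 50 W and ~90%
# at 250 W, with linear interpolation in between (an assumption for clarity).

def psu_efficiency(wall_watts: float) -> float:
    """Interpolate efficiency between the two quoted points."""
    lo_w, lo_eff = 50.0, 0.75
    hi_w, hi_eff = 250.0, 0.90
    if wall_watts <= lo_w:
        return lo_eff
    if wall_watts >= hi_w:
        return hi_eff
    t = (wall_watts - lo_w) / (hi_w - lo_w)
    return lo_eff + t * (hi_eff - lo_eff)

def dc_power(wall_watts: float) -> float:
    """Wall power minus PSU conversion losses."""
    return wall_watts * psu_efficiency(wall_watts)

# Example: the 782 W peak at the wall maps to roughly 704 W at the components.
print(f"{dc_power(782.0):.0f} W DC")
```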

While this method of power measurement may not be ideal, and some readers may feel these numbers are unrepresentative due to the high-wattage power supply being used (we use the same PSU to remain consistent over a series of reviews, and some boards on our testbed get tested with three or four high-powered GPUs), the important point to take away is the relationship between the numbers. These boards are all tested under the same conditions, so the differences between them should be easy to spot.

Power Long Idle
The GIGABYTE MZ72-HB0's Long Idle result was taken with the system powered off but the BMC controller still active; normally this test is run idle in the OS and left until the display turns off. It just goes to show how much power it takes to keep the BMC going.


Power OS Idle
Maximum Power Blend (w/GTX 1080)

When comparing power consumption figures to other AMD EPYC/Threadripper boards we've tested, we don't really have any direct comparison points. In our EPYC 7351P testing, we were using a single CPU at 170 W, whereas in the GIGABYTE we have two AMD EPYC 7763 processors, each with a 280 W TDP. At full load, the board is monstrous on power, with a peak reading of 782 W at the wall. In our long idle test, the board was powered down barring the BMC controller, which is apparent in our figures with a low reading of just 14.6 W.

Non-UEFI POST Time

Different motherboards have different POST sequences before an operating system is initialized. A lot of this is dependent on the board itself, with POST time determined by the controllers on board (and the sequence in which those extras are initialized). As part of our testing, we measure POST time using a stopwatch: the time from pressing the ON button to when Windows starts loading. (We discount Windows loading, as it is highly variable given Windows-specific features.)
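For readers who want to sanity-check a stopwatch figure in software, the sketch below (our own idea, not the review's methodology) parses the firmware component out of `systemd-analyze time` on a Linux install of the same machine; Windows only logs its own boot phases, so the stopwatch remains the practical tool there.

```python
# Sketch (our own idea, not the review's methodology): read the firmware/POST
# time that systemd records on a Linux install of the same machine.
import re
import subprocess

def firmware_seconds() -> float:
    """Parse the '(firmware)' component of `systemd-analyze time` output."""
    out = subprocess.run(
        ["systemd-analyze", "time"], capture_output=True, text=True, check=True
    ).stdout
    # Typical output: "Startup finished in 1min 30.5s (firmware) + 4.4s (loader) + ..."
    match = re.search(r"Startup finished in (.+?) \(firmware\)", out)
    if not match:
        raise RuntimeError("no firmware time reported (VM or older systemd?)")
    total = 0.0
    for value, unit in re.findall(r"([\d.]+)(min|ms|s)", match.group(1)):
        total += float(value) * {"min": 60.0, "s": 1.0, "ms": 0.001}[unit]
    return total

if __name__ == "__main__":
    print(f"Firmware/POST time: {firmware_seconds():.1f} s")
```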

Non-UEFI POST Time

In POST time testing, we typically see that server and workstation models have a much longer POST time than conventional desktop models. This is due to controller initialization, and as such the GIGABYTE takes between two and three minutes to boot into Windows. With non-essential controllers disabled, including networking, we did manage to shave an additional 15 seconds off the default POST time.

DPC Latency

Deferred Procedure Call latency relates to how Windows handles interrupt servicing. Rather than making every device wait for a processor to service its request immediately, the system queues interrupt requests by priority. Critical interrupts are handled as soon as possible, whereas lower-priority requests, such as audio, sit further down the line. If the audio device requires data, it has to wait until its request is processed before the buffer is filled.

If the device drivers of higher-priority components in a system are poorly implemented, this can cause delays in request scheduling and processing time. This can lead to an empty audio buffer and the characteristic audible pauses, pops, and clicks. The DPC latency checker measures how much time is taken processing DPCs from driver invocation. Lower values allow reliable audio transfer at smaller buffer sizes. Results are measured in microseconds.
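To make the underrun mechanism concrete, here is a small simulation of our own (not a Windows API): interrupt requests sit in a priority queue, and if a long-running high-priority DPC delays the audio refill past the point where the buffer drains, you get the audible gap described above.

```python
# Hedged sketch (our own illustration, not a Windows API): a priority queue
# of interrupt requests, showing how a slow high-priority handler can delay
# the audio refill long enough for the playback buffer to underrun.
import heapq

AUDIO_BUFFER_MS = 10.0  # playback drains the audio buffer in 10 ms

def run(events):
    """events: (arrival_ms, priority, service_ms, name); lower priority = more urgent."""
    queue, clock = [], 0.0
    for arrival, prio, service, name in events:
        heapq.heappush(queue, (prio, arrival, service, name))
    while queue:
        prio, arrival, service, name = heapq.heappop(queue)
        clock = max(clock, arrival) + service  # handlers run back to back
        if name == "audio-refill":
            status = "OK" if clock <= AUDIO_BUFFER_MS else "UNDERRUN (pop/click)"
            print(f"audio refill done at {clock:.1f} ms -> {status}")

# A well-behaved driver (0.1 ms DPC) vs a poorly written one (15 ms DPC):
run([(0.0, 0, 0.1, "storage-dpc"), (0.0, 1, 1.0, "audio-refill")])
run([(0.0, 0, 15.0, "storage-dpc"), (0.0, 1, 1.0, "audio-refill")])
```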

Deferred Procedure Call Latency

Typically, server and workstation motherboards aren't optimized for DPC latency out of the box, and as we test at default settings, the GIGABYTE is certainly no exception.

Comments

  • tygrus - Monday, August 2, 2021 - link

    There are not many apps/tasks that make good use of more than 64c/128t. Some of those tasks are better suited to GPUs, accelerators, or a cluster of networked systems. Some tasks just love having TBs of RAM, while others will be limited by data IO (storage drives, network). YMMV. Have fun with testing it, but it will be interesting to find people with real use cases who can afford this.
  • questionlp - Monday, August 2, 2021 - link

    Being capable of handling more than 64c/128t across two sockets doesn't mean that everyone will drop more than that on this board. You can install a higher-clocked 32c/64t processor into each socket and have a shedload of RAM and I/O for in-memory databases, software-defined (insert service here), or virtualization (or a combination of those).

    Install lower core count, even higher clock speed CPUs, and you have yourself an immensely capable platform for per-core licensed enterprise database solutions.
  • niva - Wednesday, August 4, 2021 - link

    You can, but why would you when you can get a system that takes a single 64C CPU?

    This is a board for the cases where 64C is clearly not enough, and it's really catering towards server use. For cases where fewer cores but more power per core are needed, there are simply better options.
  • questionlp - Wednesday, August 4, 2021 - link

    The fastest 64c/128t Epyc CPU right now has a base clock of 2.45 GHz (7763), while you can get 2.8 GHz with a 32c/64t 7543. Slap two of those on this board and you'll get a lot more CPU power than a single 64c/128t, plus double the number of memory channels.

    Another consideration is licensing. IIRC, VMware per-CPU licensing maxes out at 32c per socket. To cover a single 64c Epyc, you would end up with the same license count as a two-CPU 32c Epyc configuration. Some customers were grandfathered in back in 2020, but that's no longer the case for new licenses. Again, you can scale better with a 2 CPU configuration than with 1 CPU.

    It all depends on the targeted workload. What may work for enterprise virtualization won't work for VPC providers, etc.
  • linuxgeex - Monday, August 2, 2021 - link

    The primary use case is in-memory databases and/or high-volume low-latency transaction services. The secondary use case is rack unit aggregation, which is usually accomplished with virtualisation; i.e. you can fit 3x as many 80-thread high-performance VPS into this as you can into any comparably priced Intel 2U rack slot, so this has huge value in a datacenter for anyone selling such VPS in volume.
  • logoffon - Monday, August 2, 2021 - link

    Was there a revision 2.0 of this board?
  • Googer - Tuesday, August 3, 2021 - link

    There is a revision 3.0 of this board.
  • MirrorMax - Friday, August 27, 2021 - link

    No, and more importantly this is exactly the same board as rev 1.0 but with a Rome/Milan BIOS, so you can BIOS-update rev 1.0 boards to rev 3.0 basically. Odd that the review doesn't touch on this.
  • BikeDude - Monday, August 2, 2021 - link

    The Task Manager screenshot reminded me of Norton Speed Disk; we now have more CPUs than we had disk clusters back in the day. :P
  • WaltC - Monday, August 2, 2021 - link

    In one place you say it took 2.5 minutes to POST, and in another you say it took 2.5 minutes to cold boot into Win10 Pro. I noticed you apparently used a SATA 3 connector for your boot drive, and I was reminded of booting Win7 from a SATA 3 7200rpm platter drive taking me 90-120 seconds to cold boot--in Win7, the more crowded your system was with 3rd-party apps and games, the longer it took to boot...;) (That's not the case with Win10/11, I'm glad to say, as with TBs of installed programs I still cold boot in ~12 secs from an NVMe OS partition.) Basically, servers are not expected to do much in the way of cold booting, as uptime is what most customers are interested in... I doubt the SATA drive had much to do with the 2.5 minute cold-boot time, though. An NVMe drive might have shaved a few seconds off the cold boot, but that's about it, imo.

    Interesting read! Enjoyed it. Yes, the server market is far and away different from the consumer markets.
