GIGABYTE MZ72-HB0 Conclusion

When it comes to dual-socket server motherboards, the vast majority have larger PCBs designed for server chassis sizes such as EEB, rather than desktop sizes like ATX. It is great to see a vendor bucking this trend slightly with the GIGABYTE MZ72-HB0, which fits dual SP3 sockets onto an E-ATX sized frame. Not only does this make the GIGABYTE slightly unconventional, it also gives it added flexibility for a variety of uses, including regular PC cases that support this size.

Compared to some of the larger models from other brands, the MZ72-HB0 does make some sacrifices to fit everything onto an E-ATX PCB, including running one memory module per channel and offering fewer PCIe slots, so not all of the processors' PCIe lanes are exposed. GIGABYTE has used as much of the board as possible, which includes eight memory slots per socket, for sixteen slots in total, with support for up to 4 TB of DDR4-3200. For storage, GIGABYTE gives users flexibility between conventional SATA drives and NVMe-based drives, with four 7-pin SATA ports, one PCIe 4.0 x4 M.2 slot, and three SlimSAS 4i connectors that can host either twelve SATA devices or three PCIe 4.0 x4 NVMe drives.


GIGABYTE MZ72-HB0 (Rev 3.0) with 2 x AMD EPYC 7763 and 512 GB DDR4-3200 installed

With functionality being the focus here, GIGABYTE includes a variety of features to allow for server and workstation deployment: dual 10GBase-T Ethernet powered by a Broadcom BCM57416 controller, plus a dedicated management LAN port and D-sub video output offering BMC access via an ASPEED BMC controller. Backing this up is GIGABYTE's latest MegaRAC SP-X interface, which includes both HTML5 and Java functionality. It provides a range of functions, including real-time access to sensors over a network, power-related tasks such as reboots and shutdowns, and even firmware backup and updates, which can be useful when the board is installed in a data center environment.

Performance on a configuration such as this is somewhat insane, using two 64-core EPYC processors built around Zen 3. We saw the strong generation-on-generation performance in our EPYC Milan review, and in our system testing here the GIGABYTE did as well as expected: in our POST time testing, it took just over two and a half minutes to boot into Windows from a cold boot. In terms of power, two 280 W processors are going to pull a lot of wattage from the wall at full load, and our DPC latency testing shows the GIGABYTE isn't suitable for audio production; that's not a surprise.

Being a dual-socket board, the one large consideration with such a design is how it will be deployed. As an E-ATX board, the principal use case, for most of our audience at least, would be as a workstation, or as a server deployment in a more conventional PC enclosure. What's important to consider here is the cooling requirements: being a more server-oriented design, it lacks the larger consumer-grade heatsinks and thus requires a lot more airflow, which can become an issue when you have two 280 W CPUs along with a ton of DRAM, not to mention additional PCIe devices such as a GPU. Careful planning for adequate cooling is paramount to achieve the best performance.

Final Thoughts

There are a few dual-socket EPYC 7003 motherboards available today at retailers, from brands such as Supermicro, ASRock Rack, and GIGABYTE. A lot of EPYC 7003 options are available in customizable barebones too, which are inherently more expensive and can vary widely in price depending on the desired configuration. The GIGABYTE MZ72-HB0 Rev 3.0 for EPYC 7003 has an MSRP of $1060, but GIGABYTE itself informed us that it expects retailers to sell it for around the $1000 mark. Looking at the functionality and the targeted market, the price isn't a bad one, and considering that each AMD EPYC 7763 processor retails for $7890, the cost of the motherboard is relatively cheap by comparison.

Comments

  • tygrus - Monday, August 2, 2021 - link

    There are not many apps/tasks that make good use of more than 64c/128t. Some of those tasks are better suited to GPUs, accelerators, or a cluster of networked systems. Some tasks just love having the TBs of RAM, while others will be limited by data I/O (storage drives, network). YMMV. Have fun testing it, but it will be interesting to find people with real use cases who can afford this.
  • questionlp - Monday, August 2, 2021 - link

    Being capable of handling more than 64c/128t across two sockets doesn't mean that everyone will drop more than that on this board. You can install two higher-clocked 32c/64t processors (one per socket), have a shedload of RAM and I/O for in-memory databases, software-defined (insert service here), or virtualization (or a combination of those).

    Install lower core count, even higher clock speed CPUs and you have yourself an immensely capable platform for per-core licensed enterprise database solutions.
  • niva - Wednesday, August 4, 2021 - link

    You can, but why would you when you can get a system where you can slot in a single 64C CPU?

    This is a board for the cases where 64C is clearly not enough, really catering to server use. For cases where fewer cores but more power per core are needed, there are simply better options.
  • questionlp - Wednesday, August 4, 2021 - link

    The fastest 64c/128t Epyc CPU right now has a base clock of 2.45 GHz (7763), while you can get 2.8 GHz with a 32c/64t 7543. Slap two of those on this board and you'll get a lot more CPU power than a single 64c/128t, plus double the number of memory channels.

    Another consideration is licensing. IIRC, VMware per-CPU licensing maxes out at 32c per socket. To cover a single 64c Epyc, you would end up with the same license count as a two 32c Epyc configuration. Some customers were grandfathered in back in 2020, but that's no longer the case for new licenses. Again, you can scale better with a 2 CPU configuration than with 1 CPU.

    It all depends on the targeted workload. What may work for enterprise virtualization won't work for VPC providers, etc.
  • linuxgeex - Monday, August 2, 2021 - link

    The primary use case is in-memory databases and/or high-volume, low-latency transaction services. The secondary use case is rack unit aggregation, which is usually accomplished with virtualisation; i.e. you can fit 3x as many 80-thread high-performance VPS into this as you can into any comparably priced Intel 2U rack slot, so this has huge value in a datacenter for anyone selling such a VPS in volume.
  • logoffon - Monday, August 2, 2021 - link

    Was there a revision 2.0 of this board?
  • Googer - Tuesday, August 3, 2021 - link

    There is a revision 3.0 of this board.
  • MirrorMax - Friday, August 27, 2021 - link

    No, and more importantly this is exactly the same board as rev 1.0, just with a Rome/Milan BIOS, so you can basically BIOS update rev 1.0 boards to rev 3.0. Odd that the review doesn't touch on this.
  • BikeDude - Monday, August 2, 2021 - link

    Task Manager screenshot reminded me of Norton Speed Disk; We now have more CPUs than we had disk clusters back in the day. :P
  • WaltC - Monday, August 2, 2021 - link

    In one place you say it took 2.5 minutes to POST, in another place you say it took 2.5 minutes to cold boot into Win10 Pro. I noticed you apparently used a SATA 3 connection for your boot drive, and I was reminded of booting Win7 from a SATA 3 7200rpm platter drive taking me 90-120 seconds to cold boot; in Win7, the more crowded your system was with 3rd-party apps and games, the longer it took to boot...;) (That's not the case with Win10/11, I'm glad to say, as with TBs of installed programs I still cold boot in ~12 secs from an NVMe OS partition.) Basically, servers are not expected to do much in the way of cold booting, as uptime is what most customers are interested in... I doubt the SATA drive had much to do with the 2.5 minute cold-boot time, though. An NVMe drive might have shaved a few seconds off the cold boot, but that's about it, imo.

    Interesting read! Enjoyed it. Yes, the server market is far and away different from the consumer markets.
