It's no secret that AMD is looking to carve out a bigger share of the server market with its enterprise EPYC processors, and much fanfare has been made about the high core counts offered for the price when compared to Intel's Xeon range of processors. The ASRock Rack EPYCD8-2T looks to utilize all of the processing power offered by AMD's EPYC, with a professional-centric feature set built into its ATX design. It has eight memory slots, up to nine SATA ports, two OCuLink ports for U.2 drives, dual 10 G Ethernet, and seven PCIe 3.0 slots. This model also supports both AMD's EPYC 7001 Naples and 7002 Rome processors (Rome via a firmware update).

ASRock Rack EPYCD8-2T Overview

ASRock Rack is the enterprise arm of ASRock, and caters to the workstation, server and data center market. For the longest time, ASRock Rack catered mainly to Intel's offerings, including Intel Xeon and the large Atom designs. Now the company has a small but expanding team focusing on the EPYC side of the market, and the ASRock Rack EPYCD8-2T is an ATX sized solution which is compatible with both Naples and Rome. Today we will be focusing on the EPYCD8-2T and its server and workstation feature set.

The ASRock Rack EPYCD8-2T is an ATX-sized, single-socket LGA 4094 option designed for AMD's EPYC processors. The board has an all-green PCB and a transposed CPU socket designed for more efficient airflow when installed into a 1U or similar chassis type. Memory support stretches across eight slots, with support for RDIMMs up to 32 GB and LRDIMMs up to 128 GB per slot. This means that the EPYCD8-2T can house up to 1 TB of DDR4 operating in eight-channel mode. This model supports DDR4-3200/2933/2666/2400 in both RDIMM and LRDIMM varieties.
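As a quick sanity check, the 1 TB figure follows directly from the slot count and maximum LRDIMM capacity quoted above:

```python
# Maximum memory configuration described in the review:
# eight DIMM slots, LRDIMMs up to 128 GB per slot.
slots = 8
max_lrdimm_gb = 128

total_gb = slots * max_lrdimm_gb
print(f"Maximum capacity: {total_gb} GB = {total_gb // 1024} TB")
```

With 32 GB RDIMMs instead, the same eight slots top out at 256 GB.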

Providing BMC maintenance functions is the stalwart Aspeed AST2500 management controller, which allows users to remotely manage the system. Networking is taken care of by an Intel X550-AT2 Ethernet controller, which provides dual 10 G Ethernet on the rear panel. A separate Realtek RTL8211E acts as a dedicated IPMI Ethernet port, and the Aspeed AST2500 also powers a D-sub 2D video output.

On the PCIe front, the ASRock Rack EPYCD8-2T has plenty of expansion slot support to make the most of the 128 lanes from the CPU, including four full-length and three half-length (but open-ended) PCIe 3.0 slots. These slots operate at x16/x8/x16/x8/x16/x8/x16, which makes for a total of 88 PCIe 3.0 lanes dedicated to graphics and expansion support.
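The 88-lane figure can be tallied from the slot widths: four full-length x16 slots and three half-length x8 slots (a sketch based on the slot counts stated above).

```python
# Lane widths of the seven PCIe 3.0 expansion slots:
# four full-length x16 and three half-length (open-ended) x8.
slot_widths = [16, 8, 16, 8, 16, 8, 16]

expansion_lanes = sum(slot_widths)
print(f"Slot lanes: {expansion_lanes} of 128 CPU lanes")  # 88 of 128
```

The remaining CPU lanes are consumed by onboard devices such as the two M.2 slots and the OCuLink ports.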

For storage, ASRock Rack includes two mini-SAS HD connectors, each of which supports up to four SATA devices, for up to eight in total. A separate SATA DOM port allows another SATA device to be installed, bringing the board's total SATA capability up to nine. Two PCIe 3.0 x4 M.2 slots are located vertically below the right-hand bank of memory slots, with two OCuLink ports just to the right of the DRAM slots for U.2 devices.


ASRock Rack EPYCD8-2T Block Diagram

The ASRock EPYCD8-2T was originally built for Naples (7001), but Rome (7002) is supported by updating the firmware to v2.30. It is worth noting that this update requires a 32 MB BIOS chip; some of the early units (like ours) only have a 16 MB chip.

The performance of the ASRock Rack EPYCD8-2T is competitive with other models we have tested, including the GIGABYTE MZ31-AR0. In comparison to the GIGABYTE model, the EPYCD8-2T shows much better power efficiency, with a strong showing in our long idle power testing as well as at full load with our AMD EPYC 7351P processor. Server and workstation motherboards tend to take longer to boot into Windows due to controller and BMC initialization during POST, but our testing shows the EPYCD8-2T to POST in just over 50 seconds, with a slightly quicker POST time of 45 seconds with non-essential controllers disabled. The ASRock Rack EPYCD8-2T is the only model I've personally tested on any platform to come in under 50 µs in our DPC latency testing, making this a solid option for users building an audio-focused workstation.


The ASRock Rack EPYCD8-2T currently retails for $498 at Newegg, making it one of just a handful of single-socket LGA 4094 models at the sub-$500 price point. Included in that list is the similar but cheaper ASRock Rack EPYCD8 ($460), which is essentially the same board without the dual 10 G Ethernet. Other models at a similar price point include the Supermicro MBD-H11SSL-NC ($470), with dual 1 G Ethernet and fewer SATA ports, as well as the ASUS KNPA-U16 ($462), which has superior storage and better memory support but also opts for dual 1 G Ethernet. The distinguishing factors for the EPYCD8-2T we're reviewing today are its Intel X550 dual 10 gigabit Ethernet controller and its seven PCIe 3.0 slots, which is impressive on an ATX-sized model.

Read on for more extended analysis.

Visual Inspection
41 Comments

  • eastcoast_pete - Monday, April 20, 2020 - link

    I don't think that question was asked in earnest. However, if it was, I agree with you.
  • SampsonJackson - Monday, April 20, 2020 - link

    That is absolutely incorrect. We do it with InfiniBand cards via RDMA and easily saturate multiple 100 Gbps cards. Der8auer demonstrated ~28 GB/s on a RAID 0 using first-gen Threadripper (~224 Gbps) and was only limited by the RAID driver thread saturating a CPU core; further scaling is possible using the inbox NVMe driver (up to endpoint/bus saturation). Are these realistic workloads? No. Is it possible? No problem.
  • vFunct - Monday, April 20, 2020 - link

    CPUs on media servers have been saturating 100G for years now. Netflix is doing that, for example. https://netflixtechblog.com/serving-100-gbps-from-...
  • vFunct - Monday, April 20, 2020 - link

    And they're delivering 200 Gbps now: https://wccftech.com/netflix-evaluating-replacing-...
  • brunis.dk - Monday, April 20, 2020 - link

    I think ASSRock should just rename themselves to ASRack for simplicity.
  • kobblestown - Monday, April 20, 2020 - link

    What's with the 6-pin fan connectors? Can I plug a regular 4-pin PWM fan into it?
  • dotes12 - Monday, April 20, 2020 - link

    I looked up the user manual and yes, it's keyed so that both a normal 3-pin and 4-pin fan will work with the 6-pin motherboard connector without an adapter. It appears that the extra two pins are used for a temperature sensor that's built into the fan. Per the manual, pin 5 is labeled "Sensor" and pin 6 is labeled "NC", and the custom fan speed has an option called "Smart Fan Temp Control" where you can have it increase a specific fan speed based on the temperature the fan is reporting.
  • kobblestown - Monday, April 20, 2020 - link

    Oh, that's cool. Thanks for checking it out.
  • cygnus1 - Monday, April 20, 2020 - link

    I was originally going to say "WTF are they thinking releasing such a high end AMD board in 2020 that doesn't support PCIe 4.0 when the appropriate CPU is installed. What a waste." But then I realized this board is about a year old already. As others mentioned below, the ROMED8-2T is essentially the replacement for this year-old board being reviewed. The biggest thing missing from that one is the x16 slots. And for whatever reason they didn't leave the x8 slots open ended to allow for x16 cards to fit.
  • WaltC - Monday, April 20, 2020 - link

    This motherboard is a cheap EPYC *server* mboard, and that is all it is...;) Keyword being "cheap"--paring down the system bus to PCIe3.x cuts the system bandwidth in half, compared with 4.0, which translates to manufacturing a lower-cost mboard relative to the layers needed to properly support the signal integrity of a PCIe4.0 system bus. A PCIe3.x system bus also requires less power than PCIe4. It's easy to forget, I suppose, that PCIe4 is *double* the bandwidth of PCIe3. But as a cheap server mboard, PCIe4 may not be a better fit than PCIe3.x.

    This "review" is a bit strange, imo...;) Not only does it directly compare different mboards, but it also compares those mboards running different CPUs, as well, as if to illustrate some obscure point. I would have done things a bit differently, like, for instance, restricting my choice of motherboards to those server boards capable of running this CPU--and *actually running* the EPYC CPU featured here...;) Maybe throw in a couple of system bandwidth tests and applications to illustrate advantages of the increased bandwidth PCIe4x brings to the table, along with extra costs, etc. Otherwise, what one winds up comparing are CPUs instead of motherboards, imo. As server mboards go, this one is not "high end" at all--it's actually a "budget" server mboard, imo--hence the compromises with system bus bandwidth, etc. Simply put, this mboard was not designed to "compete" with "enthusiast-class" retail mboards used for gaming--as should be obvious. People looking for budget-class server motherboards for EPYC-class cpus won't care about PCIe4, the "colors" used, RGB, multi-GPUs, etc. Those things add to cost and energy consumption, and, of course, superficial color schemes/RGB offer no power efficiency or performance enhancements of any kind.
