Power Consumption and Thermal Performance

The pfSense installation in the Supermicro SuperServer E302-9D was configured with the default ruleset first, as described in an earlier section. The only free 1000Mbps LAN port was then connected to a spare LAN port on the sink. Two iPerf3 server instances were initialized on each of the sink's interfaces connected to the DUT. The firewall rules for the newly connected interface were modified to allow the benchmark traffic. Two iPerf3 clients were started on each connected interface of both the source and the conductor - one in normal mode and the other in reverse mode. This benchmark was allowed to run for one hour in an attempt to saturate the full-duplex links of all of the DUT's interfaces (other than the ones connected to the management network and the IPMI). After one hour, the source and sink were turned off, followed after some time by the DUT itself. The power consumption at the wall was recorded during the whole process using an Ubiquiti mFi mPower unit.
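
For readers wanting to reproduce the load pattern, a minimal client-side sketch is shown below. The interface addresses and port numbers are placeholders for illustration (our actual testbed addressing is not reproduced here), and it assumes two iPerf3 servers have already been started on each DUT-facing interface of the sink.

    # Client-side sketch of the one-hour saturation test described above.
    # The sink is assumed to be running two iperf3 servers per DUT-facing
    # interface, e.g.:
    #   iperf3 -s -B 192.168.10.2 -p 5201
    #   iperf3 -s -B 192.168.10.2 -p 5202
    # All addresses and ports below are placeholders, not the real testbed values.
    import subprocess

    SINK_IPS = ["192.168.10.2", "192.168.20.2"]  # one entry per DUT-facing link
    DURATION = 3600                              # one hour, as in the test above

    procs = []
    for ip in SINK_IPS:
        # Normal mode: traffic flows client -> server through the DUT
        procs.append(subprocess.Popen(
            ["iperf3", "-c", ip, "-p", "5201", "-t", str(DURATION)]))
        # Reverse mode (-R): server -> client, loading the other direction of
        # the full-duplex link at the same time
        procs.append(subprocess.Popen(
            ["iperf3", "-c", ip, "-p", "5202", "-t", str(DURATION), "-R"]))

    for p in procs:
        p.wait()  # block until all one-hour streams complete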

The E302-9D pfSense installation idles at around 70W. At full load with all network interfaces active, the power consumption reaches 90W. Keeping just the IPMI active (allowing the BMC to remotely power up the server) costs slightly more than 15W. Given the target market for the system, it would be good to see Supermicro reduce that 15W figure further. pfSense / FreeBSD is not a particularly power-efficient OS: we observed idle power consumption in the high 40s (W) for both Windows Server 2019 and Ubuntu 20.04 LTS on the same system, so pfSense's relative inefficiency was slightly disappointing. In common firewall deployments in datacenters and server racks, this is not much of a concern (as the system is likely never idle), but embedded applications may not always be operating in high-traffic mode. Some optimizations on the OS side / in the Intel drivers may help here.

Towards the end of the stress test, we also captured the temperature sensors' outputs as conveyed by Supermicro's IPMIView tool. The CPU temperature of 73C was well within the 90C limit. However, the SSD was a bit too hot at 82C throughout, as were the MB_10G, VRM, and DIMM sensors, which read between 80C and 92C. The SSD temperature was partly our fault (the power-hungry Mushkin SandForce-based SATA SSD is definitely not something to be recommended for a tightly enclosed, passively cooled system like the E302-9D, particularly when the SSD makes no contact with the metal casing).
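
The readings that IPMIView reports can also be scraped from the BMC with the standard ipmitool utility; the short sketch below shows one way to do it. The BMC address and credentials are placeholders, not the ones used for this review.

    # Poll the BMC's temperature sensors over IPMI-over-LAN, similar to what
    # IPMIView displays. Host, user, and password are placeholders.
    import subprocess

    BMC_HOST = "192.168.1.120"   # placeholder BMC address
    BMC_USER = "ADMIN"           # placeholder credentials
    BMC_PASS = "ADMIN"

    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
         "-U", BMC_USER, "-P", BMC_PASS,
         "sdr", "type", "Temperature"],      # list temperature sensors only
        capture_output=True, text=True, check=True).stdout

    # Typical line format: "CPU Temp | 30h | ok  |  3.1 | 73 degrees C"
    for line in out.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) == 5:
            name, _, status, _, reading = fields
            print(f"{name:20s} {status:6s} {reading}")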

The FLIR One Pro thermal camera was used to take thermal photographs of the chassis at the end of the stress test. Temperatures of around 70C were observed at various points on the chassis.

Under normal conditions with light traffic (i.e., power consumption remaining around 70W), temperatures were around 60C - 65C. Additional thermal photographs are available in the gallery above. Given the temperature profile, the unit is best placed away from where one's hands might accidentally touch it.

On the thermal solution itself, Supermicro has done an excellent job of cooling the SoC (as is evident from the bright spot directly above the SoC in the thermal photographs and the teardown gallery earlier in the piece). The heat-sink is well thought-out and blends well with the chassis in terms of contact area. It is not clear whether anything can be done about the VRMs and DIMMs, but users should definitely consider a low-power SSD or ensure that the installed SSD has a chance for its heat to be conducted away.

Comments

  • Jorgp2 - Thursday, July 30, 2020 - link

    Maybe you should learn the difference between a switch and a router first.
  • newyork10023 - Thursday, July 30, 2020 - link

    Why do you people have to troll everywhere you go?
  • Gonemad - Wednesday, July 29, 2020 - link

    Oh boy. My mother once got Wi-Fi "AC" 5GHz, 5Gbps, and 5G mobile networks mixed up. It took a while to explain those to her.

    Don't use 10G to mean 10 Gbps, please! HAHAHA.
  • timecop1818 - Wednesday, July 29, 2020 - link

    Fortunately, when Ethernet says 10Gbps, that's what it means.
  • imaheadcase - Wednesday, July 29, 2020 - link

    Put the name Supermicro on it and you know it's not for consumers.
  • newyork10023 - Wednesday, July 29, 2020 - link

    The Supermicro manual states that an installed PCIe card is limited to networking (and will require a fan). An HBA card can't be installed?
  • abufrejoval - Wednesday, July 29, 2020 - link

    Since I use both pfSense as a firewall and a D-1541 Xeon machine (but not for the firewall) and I share the dream of systems that are practically silent, I feel compelled to add some thoughts:

    I started using pfSense on a passive J1900 Atom board which had dual Gbit on-board and cost less than €100. That worked pretty well until my broadband exceeded 200Mbit/s, mostly because it wasn’t just a firewall, but also added Suricata traffic inspection (tried Snort, too, very similar results).

    And that's what's wrong with this article: 10Gbit Xeon-Ds are great when all you do is push packets, but don't look at them. They are even greater when you terminate SSL connections on them with the QuickAssist variants. They are great when they work together with their bigger CPU brothers, who will then crunch on the logic of the data.

    In the home-appliance context that you allude to, you won't have ten types of machines to optimally distribute that work. QuickAssist won't deliver benefits, while the CPU will run out of steam far before even a Gbit connection is saturated, if you use it just as the front end of the DMZ (firewall/SSL termination/VPN/deep inspection/load-balancing-failover).

    Put proxies, caches, or even application servers on them as well, and even a single 10Gbit interface may be a total waste.

    I had to resort to an i7-7700T, which seems a bit quicker than the D-2123IT at only 35 Watts TDP (and much cheaper), to sustain 500Mbit/s download bandwidth with the best gratis Suricata rule set. Judging by CPU load observations, it will just about manage the Gbit loads its ports can handle; I'm pretty sure that 2.5/5/10 Gbit will just throttle on inspection load, like the J1900 did at 200Mbit/s.

    I use a D-1541 as an additional compute node in a 3-node oVirt HCI Gluster setup with 3x 2.5Gbit J5005 storage nodes. I can probably go to 6x 2.5Gbit before its 10Gbit NIC becomes a bottleneck.

    The D-1541’s benefit there is lots of RAM and cores, while it’s practically silent with 45 Watts TDP and none of the applications on it require vast amounts of CPU power.

    I am waiting for an 8-core AMD 4000 Pro 35 Watt TDP APU to arrive on a Mini-ITX board capable of handling 64 or 128GB of ECC RAM, to replace the Xeon D-1541 and bring the price of such a mini server below that of a laptop with the same ingredients.
  • newyork10023 - Wednesday, July 29, 2020 - link

    With an HBA (were it possible, hence my question), the 10Gbps serves a possible use (storage). Pushing and inspection exceeds x86 limits now. See TNSR for real x86 limits (without inspection).
  • abufrejoval - Wednesday, July 29, 2020 - link

    That would seem to apply to the chassis, not to the mainboard or SoC.
    There is nothing to prevent it from working, per se.

    I am pretty sure you can add a 16-port SAS HBA or even an NVMe-oF card and plenty of external storage, if thermals and power fit. A Mellanox 100Gbit card should be fine electrically, logically, etc., even if there is nothing behind it to sustain that throughput.

    I've had an Nvidia GTX1070 GPU in the SuperMicro Mini-ITX D-1541 for a while, no problem at all, functionally, even if games still seem to prefer Hertz over cores. Actually, GPU-accelerated machine learning inference was the original use case of that box.
  • newyork10023 - Wednesday, July 29, 2020 - link

    As pointed out, the D2123IT has no QAT, so a QAT accelerator would take up an available PCIe slot. It could push 10G packets then, but not save them or think (AI) on them.
