Power Consumption and Thermal Performance

The pfSense installation in the Supermicro SuperServer E302-9D was configured with the default ruleset first, as described in an earlier section. The only free 1000Mbps LAN port was then connected to a spare LAN port on the sink. Two instances of the iPerf3 server were initialized on each of the sink's interfaces connected to the DUT, and the firewall rules for the newly connected interface were modified to allow the benchmark traffic. Two iPerf3 clients were then started on each connected interface of both the source and the conductor - one in normal mode and the other in reverse mode. This benchmark was allowed to run for one hour in an attempt to saturate the duplex links of all of the DUT's interfaces (other than the ones connected to the management network and IPMI). After one hour, the source and sink were turned off, followed some time later by the DUT itself. The power consumption at the wall was recorded throughout the process using an Ubiquiti mFi mPower unit.
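
For readers who want to reproduce a similar traffic pattern, the sketch below drives the stock iperf3 CLI from Python. The addresses and ports are hypothetical placeholders (the actual test-bed addressing is not documented here), and it assumes two iperf3 servers per sink interface are already listening, matching the two-server-per-interface arrangement described above.

```python
#!/usr/bin/env python3
"""Sketch of an hour-long bidirectional iperf3 load, run from the source.

Assumptions: the sink's DUT-facing interfaces are reachable at the placeholder
addresses below, and two iperf3 servers per interface are already listening
(one per port). None of these values come from the actual test bed.
"""
import subprocess

SINK_INTERFACES = ["192.168.10.1", "192.168.20.1"]  # placeholder addresses
NORMAL_PORT, REVERSE_PORT = 5201, 5202               # one server per port
DURATION = 3600                                       # one hour, as in the test

procs = []
for host in SINK_INTERFACES:
    # Normal mode: this machine transmits towards the sink through the DUT.
    procs.append(subprocess.Popen(
        ["iperf3", "-c", host, "-p", str(NORMAL_PORT), "-t", str(DURATION)]))
    # Reverse mode (-R): the sink transmits back, loading the other direction
    # of the duplex link at the same time.
    procs.append(subprocess.Popen(
        ["iperf3", "-c", host, "-p", str(REVERSE_PORT), "-t", str(DURATION), "-R"]))

# Wait for all streams to finish.
for p in procs:
    p.wait()
```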

The E302-9D pfSense installation idles at around 70W. At full load with all network interfaces active, the power consumption reaches 90W. Keeping just the IPMI active (allowing the BMC to remotely power up the server) costs slightly more than 15W. Keeping in mind the target market for the system, it would be good to see Supermicro bring that 15W number down further. pfSense / FreeBSD is not a particularly power-efficient OS: having observed idle power consumption in the high 40s (watts) with both Windows Server 2019 and Ubuntu 20.04 LTS on the same system, we found pfSense's relative inefficiency slightly disappointing. In common firewall deployments in datacenters and server racks this is not much of a concern (the system is likely to never be idle), but embedded applications may not always be in high-traffic mode. Some optimizations on the OS side / in the Intel drivers may help here.
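
To put those wall-power figures in perspective for an always-on embedded deployment, a quick back-of-the-envelope calculation converts them into annual energy use; the $0.15/kWh rate below is an assumed illustrative price, not a figure from this review.

```python
# Back-of-the-envelope annual energy from the measured wall-power figures.
# The electricity price is an assumed illustration, not from the review.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.15  # USD, assumed

for label, watts in [("BMC/IPMI only", 15), ("pfSense idle", 70), ("full load", 90)]:
    kwh = watts * HOURS_PER_YEAR / 1000
    print(f"{label:>14}: {watts:2d} W -> {kwh:5.0f} kWh/yr (~${kwh * PRICE_PER_KWH:.0f}/yr)")
```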

Towards the end of the stress test, we also captured the temperature sensors' outputs as conveyed by Supermicro's IPMIView tool. The CPU temperature of 73C was well within the 90C limit. However, the SSD was a bit too hot at 82C throughout, as were the MB_10G, VRM, and DIMM sensors, which registered between 80C and 92C. The SSD's temperature was partly our fault (the power-hungry Mushkin SandForce-based SATA SSD is definitely not something to be recommended for a tightly enclosed, passively cooled system like the E302-9D, particularly when the SSD makes no contact with the metal casing).
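
The figures above were read from Supermicro's IPMIView GUI; for unattended logging during a run like this, the same BMC sensors can be polled over the network with the standard ipmitool utility instead. The sketch below is a minimal illustration: the BMC address and credentials are placeholders, and the keyword filters are generic guesses rather than the exact sensor labels exposed by this board.

```python
#!/usr/bin/env python3
"""Minimal sketch: poll BMC temperature sensors with ipmitool over the LAN.

The BMC address/credentials are placeholders, and sensor names vary by board,
so the keyword filters below are illustrative, not the exact X11SDV labels.
"""
import subprocess

BMC_ARGS = ["-I", "lanplus", "-H", "192.168.0.120", "-U", "ADMIN", "-P", "ADMIN"]
KEYWORDS = ("CPU", "DIMM", "VRM", "10G")  # sensors of interest

# 'ipmitool sensor' prints one pipe-delimited row per sensor reading.
output = subprocess.run(["ipmitool", *BMC_ARGS, "sensor"],
                        capture_output=True, text=True, check=True).stdout

for row in output.splitlines():
    name = row.split("|")[0].strip()
    if any(key in name for key in KEYWORDS):
        print(row)
```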

The FLIR One Pro thermal camera was used to take thermal photographs of the chassis at the end of the stress test. Surface temperatures of around 70C were recorded at various points.

Under normal conditions with light traffic (i.e., power consumption remaining around 70W), temperatures were around 60C - 65C. Additional thermal photographs are available in the above gallery. Given the temperature profile, the unit is best placed away from where one's hands might accidentally touch it.

As for the thermal solution itself, Supermicro has done an excellent job of cooling the SoC (as evident from the bright spot directly above the SoC in the thermal photographs and the teardown gallery earlier in the piece). The heat-sink is well thought-out and mates well with the chassis in terms of contact area. It is not clear whether anything can be done about the VRMs and DIMMs, but users should definitely consider a low-power SSD or ensure that the installed SSD can shed its heat conductively to the chassis.

Comments

  • eastcoast_pete - Tuesday, July 28, 2020 - link

    Thanks, interesting review! Might be (partially) my ignorance of the design process, but wouldn't it be better from a thermal perspective to use the case, especially the top part of the housing, directly as the heat sink? The current setup transfers the heat to the inside space of the unit and then relies on passive convection or radiation to dispose of the heat. Not surprised that it gets really toasty in there.
  • DanNeely - Tuesday, July 28, 2020 - link

    From a thermal standpoint, yes - if everything is assembled perfectly. With that design, though, you'd need to attach the heat sink to the CPU with screws from below, and remove/reattach it every time you open the case up. This setup allows the heatsink to be semi-permanently attached to the CPU like in a conventional install.

    You're also mistaken about it relying on passive heat transfer: the top of the case has some large thermal pads that make contact with the tops of the heat sinks. (They're the white stuff on the inside of the lid in the first gallery photo; made slightly confusing by the lid being rotated 180° from the mobo.) Because of the larger contact area and lower peak heat concentration, thermal pads are much less finicky about being pulled apart and slapped together than the TIM between a chip and the heatsink base.
  • Lindegren - Tuesday, July 28, 2020 - link

    Could be solved by having the CPU on the opposite side of the board
  • close - Wednesday, July 29, 2020 - link

    Lower power designs do that quite often. The MoBo is flipped so it faces down, the CPU is on the back side of the MoBo (top side of the system) covered by a thick, finned panel to serve as passive radiator. They probably wanted to save on designing a MoBo with the CPU on the other side.
  • eastcoast_pete - Tuesday, July 28, 2020 - link

    Appreciate the comment on the rotated case; those thermal pads looked oddly out of place. But, as Lindegren's comment pointed out, by having the CPU on the opposite side of this (after all, custom) MB, one could have the main heat source (SoC/CPU) facing "up", and all others facing "down".
    For maybe irrational reasons, I just don't like VRMs, SSDs and similar getting so toasty in an always-on piece of networking equipment.
  • YB1064 - Wednesday, July 29, 2020 - link

    Crazy expensive price!
  • Valantar - Wednesday, July 29, 2020 - link

    I think you got tricked by the use of a shot of the motherboard with a standard server heatsink. Look at the teardown shots; this version of the motherboard is paired with a passive heat transfer block with heat pipes which connects directly to the top chassis. No convection involved inside of the chassis. Should be reasonably efficient, though of course the top of the chassis doesn't have that many or that large fins. A layer of heat pipes running across it on the inside would probably have helped.
  • herozeros - Tuesday, July 28, 2020 - link

    Neat review! I was hoping you could offer an opinion on why they elected to not include a SKU without quickassist? So many great router scenarios with some juicy 10G ports, but bottlenecks if you’re trafficking in resource-intensive IPSec connections, no? Thanks!
  • herozeros - Tuesday, July 28, 2020 - link

    Me English are bad, should read “a SKU with Quickassist”
  • GreenReaper - Tuesday, July 28, 2020 - link

    The MSRP of the D-2123IT is $213. All D-2100 CPUs with QAT are >$500:
    https://www.servethehome.com/intel-xeon-d-2100-ser...
    https://ark.intel.com/content/www/us/en/ark/produc...
    And the cheapest of those has a lower all-core turbo, which might bite for consistency.

    It's also the only one with just four cores. Thanks to this it's the only one that hits a 60W TDP.
    Bear in mind internals are already pushing 90C, in what is presumably a reasonably cool location.

    The closest (at 235% the cost) is the 8-core D-2145NT (65W, 1.9GHz base, 2.5GHz all-core turbo).
    Sure, it *could* do more processing, but for most use-cases it won't be better and may be worse. To be sure it wasn't slower, you'd want to step up to D-2146NT; but now it's 80W (and 301% the cost). And the memory is *still* slower in that case (2133 vs 2400). Basically you're looking at rack-mount, or at the very least some kind of active cooling solution - or something that's not running on Intel.

    Power is a big deal here. I use a quad-core D-1521 as a CPU for a relatively large DB-driven site, and it hits ~40W of its 45W TDP. For that you get 2.7GHz all-core, although it's theoretically 2.4-2.7GHz. The D-1541 with twice the cores only gets ~60% of the performance, because it's _actually_ limited by power. So I don't doubt TDP scaling indicates a real difference in usage.

    A lower CPU price also gives SuperMicro significant latitude for profit - or for a big bulk discount.
