System Benchmarks

Power Consumption

Power consumption is normally tested with a single MSI GTX 770 Lightning GPU installed and a wall meter connected to the OCZ 1250W power supply; for this review, however, the PCIe arrangement meant we had an R7 240 equipped instead. This power supply is Gold rated, and as I am in the UK on a 230-240 V supply, it delivers roughly 75% efficiency above 50 W and 90%+ efficiency at 250 W, suitable for both idle and multi-GPU loading. Reading power at the wall lets us compare how well the UEFI and the board manage power delivery to components under load, and it includes typical PSU losses due to efficiency. These are the real-world values that consumers can expect from a typical system (minus the monitor) using this motherboard.

While this method of power measurement may not be ideal, and you may feel these numbers are unrepresentative due to the high-wattage power supply being used (we use the same PSU to remain consistent over a series of reviews, and some boards on our test bed are tested with three or four high-powered GPUs), the important point to take away is the relationship between the numbers. These boards are all tested under the same conditions, so the differences between them should be easy to spot.
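Because the readings are taken at the wall, the PSU's efficiency losses are baked into every reported number. A minimal sketch of that relationship (a hypothetical helper, using the approximate efficiency figures quoted above, not any tool used in the review):

```python
# Hypothetical illustration: the wall meter reports DC component load
# divided by PSU efficiency, so losses appear in the measured figure.
def wall_draw(dc_load_w: float, efficiency: float) -> float:
    """Return the power seen at the wall for a given DC load and efficiency."""
    return dc_load_w / efficiency

# Approximate figures from the text: ~75% efficiency near 50 W,
# 90%+ efficiency around 250 W.
idle_wall = wall_draw(50, 0.75)    # ~66.7 W at the wall
load_wall = wall_draw(250, 0.90)   # ~277.8 W at the wall
```

This is why a very high-wattage PSU can inflate idle numbers: light loads sit low on its efficiency curve.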

Power Consumption: Long Idle with GTX 770

Power Consumption: Idle with GTX 770

Power Consumption: OCCT Load with GTX 770

Having two processors installed doesn't draw much more power at idle than our i7-5960X X99 counterparts, but once the CPUs come under load the differences become obvious. Interestingly, the dual 65W E5 2650L v3 combination used less power than a single 130W CPU.

Windows 7 POST Time

Different motherboards have different POST sequences before an operating system is initialized. A lot of this is dependent on the board itself, and POST time is determined by the onboard controllers (and the sequence in which those extras are initialized). As part of our testing, we measure POST time with a stopwatch: the time from pressing the ON button on the computer to when Windows 7 starts loading. (We discount Windows loading itself, as it is highly variable given Windows-specific features.)

Windows 7 POST Time - Default

Windows 7 POST Time - Stripped

As mentioned earlier in the review, POST time on server motherboards is naturally slow due to the server management tools as well as the extra controllers. POST times are not that important for servers anyway, given that they tend to be restarted far less frequently than desktops or workstations.

USB Backup

For this benchmark, we transfer a set size of files from the SSD to the USB drive using DiskBench, which monitors the time taken to transfer. The files transferred are a 1.52 GB set of 2867 files across 320 folders – 95% of these files are small, typical website files, and the rest (90% of the size) are small 30-second HD videos. In an update to pre-Z87 testing, we also run MaxCPU to load up one of the threads during the test, which improves general performance by up to 15% by causing all the internal pathways to run at full speed.
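The measurement itself amounts to timing a directory copy and dividing the payload size by elapsed time. A minimal sketch (a hypothetical stand-in for DiskBench, not the actual tool):

```python
# Hypothetical DiskBench-style timer: copy a directory tree and report
# throughput based on the total bytes of the payload.
import pathlib
import shutil
import time

def timed_copy(src: str, dst: str) -> float:
    """Copy src -> dst (dst must not exist) and return throughput in MB/s."""
    total_bytes = sum(p.stat().st_size
                      for p in pathlib.Path(src).rglob("*") if p.is_file())
    start = time.perf_counter()
    shutil.copytree(src, dst)
    elapsed = time.perf_counter() - start
    return (total_bytes / 1e6) / elapsed
```

Loading a CPU thread during the copy (as MaxCPU does) keeps the platform out of low-power states, which is why it can raise measured USB throughput.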

USB 3.0 Copy Times

DPC Latency

Deferred Procedure Call latency is a way in which Windows handles interrupt servicing. In order to wait for a processor to acknowledge the request, the system will queue all interrupt requests by priority. Critical interrupts will be handled as soon as possible, whereas lesser priority requests such as audio will be further down the line. If the audio device requires data, it will have to wait until the request is processed before the buffer is filled.

If the device drivers of higher priority components in a system are poorly implemented, this can cause delays in request scheduling and processing time. This can lead to an empty audio buffer and the characteristic audible pauses, pops and clicks. The DPC latency checker measures how much time is spent processing DPCs from driver invocation. A lower value results in better audio transfer at smaller buffer sizes. Results are measured in microseconds.
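The queueing behaviour described above can be modelled with a toy priority queue (an illustrative sketch, not Windows' actual DPC machinery): lower-priority work such as an audio buffer fill only runs once more critical requests have drained.

```python
# Toy model of priority-ordered deferred procedure calls.
# Lower number = higher priority; the audio DPC is serviced last,
# which is the delay that can starve an audio buffer.
import heapq

def service(dpcs: list[tuple[int, str]]) -> list[str]:
    """Pop DPCs in priority order and return the order they are serviced."""
    heapq.heapify(dpcs)
    order = []
    while dpcs:
        _, name = heapq.heappop(dpcs)
        order.append(name)
    return order

queue = [(2, "audio buffer fill"), (0, "storage interrupt"), (1, "network interrupt")]
print(service(queue))
# → ['storage interrupt', 'network interrupt', 'audio buffer fill']
```

A badly behaved high-priority driver simply keeps inserting itself at the front of this queue, stretching the time before the audio request is reached.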

DPC Latency

The DPC latency for the dual E5 2697 v3 setup was not bad: our previous threshold between good and bad was 200 microseconds, although Z97 and X99 have both pushed the average well below 100. The other two CPUs caused large spikes in our DPC testing, giving results of 502 and 714 microseconds.

Comments

  • PCTC2 - Wednesday, December 3, 2014 - link

    Coming from the HPC space, seeing 512GB-1TB of RAM was pretty regular, and 1.5TB-2TB was rare but did occur. Now, though, systems able to take 6TB of RAM in a single 4U rack server (4P servers with 96 DIMMs, Intel E7 v2 support) are pretty incredible.

    However, there are a few odd things about this board. For one, the QSFP+ is totally unnecessary, as it only supports 2x10GbE and is neither 1) Infiniband nor 2) 40GbE. Sure, with LACP you could have bonded 20GbE, but you either need a splitter cable (QSFP+ to 4x SFP, with 2 SFP unusable) or a switch that supports multiple links over QSFP+ (a 40GbE switch with 10GbE breakout capabilities). Also, the decision to use the SFF-8087 connectors for the SATA and individual ports for SAS confounds me, as you lose the sideband support with individual cables, and onboard SATA doesn't support the sideband, thus losing some functionality with some backplanes. Also, the card Gigabyte advertises with this board is an LSI 2308, an HBA and not full hardware RAID.

    Some of Gigabyte's B2B systems have intrigued me, especially their 8x Tesla/Phi system in 2U, but this board just doesn't seem completely thought out.
  • jhh - Wednesday, December 3, 2014 - link

    I suspect the QSFP was designed to support a Fortville, but they didn't get them qualified in time. That would get them a true 40 Gig port, or 4x10G
  • fackamato - Friday, December 5, 2014 - link

    What's fortville?
  • Cstefan - Friday, December 5, 2014 - link

    Intel 40GBE QSFP+
    Nothing the consumer need worry over for a long time yet.
  • Klimax - Sunday, December 7, 2014 - link

    With some results already available:
    http://www.tweaktown.com/reviews/6857/supermicro-a...
  • Cstefan - Friday, December 5, 2014 - link

    I run multiple database servers with 2TB of ram. My next round is slated for 4TB. And absolutely no joke, they reversed the SAS and SATA connectors in a monumentally stupid move.
  • ddriver - Wednesday, December 3, 2014 - link

    Well, surprisingly no gaming benchmarks this time, but what's with the "professional performance" benches? How many professionals out there make their money on running cinebench? How about some real workstation workloads for a change?
  • JeffFlanagan - Wednesday, December 3, 2014 - link

    This isn't a workstation, or a gaming machine.
  • ddriver - Wednesday, December 3, 2014 - link

    I actually applauded the absence of gaming benchmarks this time. As for whether this is for a workstation machine, I'd say it is far better suited to a workstation than to running winrar and image viewing software.

    And just to note, this "review" of a "server" motherboard doesn't have a single server benchmark whatsoever...
  • mpbrede - Wednesday, December 3, 2014 - link

    My usual gripe about acronyms that are not accompanied by an explanation when the term is first used. This time aggravated by a typo, I'm sure.

    "The system is based on the C612 chipset, which is similar to the consumer based X99 but with 2P related features, such as MTCP over PCIe."

    I'm pretty sure you meant to type MCTP (Management Component Transport Protocol) and not mTCP (microTCP?) or MTCP (the Malaysian Technical Cooperation Programme, or something to do with Transport Layer Support for Highly Available Network Services).
