System Performance

Power Consumption

Power consumption was tested with the system in a single MSI GTX 770 Lightning GPU configuration, using a wall meter connected to the OCZ 1250W power supply. This power supply is Gold rated, and as I am in the UK on a 230-240 V supply, it reaches ~75% efficiency above 50 W and 90%+ efficiency at 250 W, suitable for both idle and multi-GPU loading. This method of power reading allows us to compare how the UEFI and the board manage power delivery to components under load, and it includes typical PSU losses due to efficiency. These are the real-world values that consumers may expect from a typical system (minus the monitor) using this motherboard.
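To put the efficiency figures in context, a wall reading can be converted into an estimate of the DC power actually delivered to the components. A minimal Python sketch, assuming an illustrative efficiency curve (the points below are placeholders, not OCZ's published data for this unit):

```python
# Estimate DC power delivered to components from a wall-meter (AC) reading,
# given a few (wall_watts, efficiency) points for the PSU. These points are
# illustrative placeholders, not the OCZ 1250W's measured curve.

EFFICIENCY_CURVE = [(50, 0.75), (250, 0.90), (625, 0.92), (1250, 0.89)]

def efficiency_at(wall_watts):
    """Linearly interpolate PSU efficiency at a given wall draw."""
    points = sorted(EFFICIENCY_CURVE)
    if wall_watts <= points[0][0]:
        return points[0][1]
    if wall_watts >= points[-1][0]:
        return points[-1][1]
    for (w0, e0), (w1, e1) in zip(points, points[1:]):
        if w0 <= wall_watts <= w1:
            t = (wall_watts - w0) / (w1 - w0)
            return e0 + t * (e1 - e0)

def delivered_watts(wall_watts):
    """DC power reaching the board/GPU = wall draw x efficiency at that draw."""
    return wall_watts * efficiency_at(wall_watts)
```

This is why the wall numbers overstate what the board itself draws: at a 250 W wall reading and 90% efficiency, only about 225 W actually reaches the components.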

While this method of power measurement may not be ideal, and you may feel these numbers are not representative due to the high-wattage power supply being used (we use the same PSU to remain consistent across a series of reviews, and some boards on our test bed are tested with three or four high-powered GPUs), the important point to take away is the relationship between the numbers. These boards are all tested under the same conditions, so the differences between them should be easy to spot.

Power Consumption: Long Idle with GTX 770

Power Consumption: Idle with GTX 770

Power Consumption: OCCT Load with GTX 770

With the added PLX switches and LSI RAID controller, the Extreme11 was never going to excel at power consumption. We drew the same conclusion about the ASRock X99 WS-E/10G with its power-hungry 10G chip, and as a result these boards match each other at both idle and CPU load.

Windows 7 POST Time

Different motherboards have different POST sequences before an operating system is initialized. A lot of this depends on the board itself, with POST boot time determined by the controllers on board (and the sequence in which those extras are initialized). As part of our testing, we measure POST boot time with a stopwatch: the time from pressing the ON button on the computer to when Windows 7 starts loading. (We discount Windows loading, as it is highly variable given Windows-specific features.)

Windows 7 POST Time - Default

Windows 7 POST Time - Stripped

The extra controllers cause a small bump in POST time, with the final result being near the bottom of our testing results.

Rightmark Audio Analyzer 6.2.5

Rightmark:AA indicates how well the sound system is built and isolated from electrical interference (either internal or external). For this test we connect the Line Out to the Line In using a short, high-quality six-inch 3.5 mm to 3.5 mm cable, turn the OS speaker volume to 100%, and run the Rightmark default test suite at 192 kHz, 24-bit. The OS is tuned to 192 kHz/24-bit input and output, and the Line-In volume is adjusted until we have the best RMAA value in the mini-pretest. We look specifically at the Dynamic Range of the audio codec used on board, as well as the Total Harmonic Distortion + Noise.
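For reference, the dB figures RMAA reports follow the 20*log10 amplitude convention, so they convert to linear ratios as in this small sketch (not part of the RMAA tool itself):

```python
import math

# dB <-> linear amplitude conversions for interpreting RMAA figures.

def db_to_amplitude_ratio(db):
    """Convert a level in dB to a linear amplitude ratio (20*log10 convention)."""
    return 10 ** (db / 20)

def amplitude_ratio_to_db(ratio):
    """Inverse conversion: linear amplitude ratio to dB."""
    return 20 * math.log10(ratio)

# A ~103 dB dynamic range means the noise floor sits at roughly
# 1/141,000th of full-scale amplitude:
noise_floor = 1 / db_to_amplitude_ratio(103)

# A -78 dB THD+N means distortion plus noise is ~0.013% of the signal:
thdn_percent = 100 * db_to_amplitude_ratio(-78)
```

This is why a few dB of difference between codec implementations matters more than it looks: every 6 dB roughly doubles (or halves) the underlying amplitude ratio.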

Dynamic Range of X99 Extreme11 at 100% volume

Rightmark: AA, Dynamic Range, 24-bit / 192 kHz

Rightmark: AA, THD+N, 24-bit / 192 kHz

The Extreme11 results match what we have seen before on other ASRock X99 boards with Realtek ALC1150 audio codecs – around 103 dB for dynamic range and above -78 dB for THD+N.

USB Backup

For this benchmark, we transfer a set size of files from the SSD to the USB drive using DiskBench, which monitors the time taken to transfer. The files transferred are a 1.52 GB set of 2867 files across 320 folders – 95% of the files are small, typical website files; the remaining 5% are 30-second HD videos that make up 90% of the total size. In an update to pre-Z87 testing, we also run MaxCPU to load one of the threads during the test, which improves general performance by up to 15% by forcing all the internal pathways to run at full speed.
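DiskBench reports elapsed time, which converts to an average rate in the usual way. A small sketch with a hypothetical copy time (the 60-second figure is illustrative, not a result from this board):

```python
# Convert a timed file copy into an average transfer rate.

def throughput_mb_s(total_bytes, seconds):
    """Average transfer rate in MB/s (1 MB = 10**6 bytes) for a timed copy."""
    return total_bytes / 1e6 / seconds

# The test set is 1.52 GB across 2867 files. With a hypothetical 60-second
# USB 3.0 copy time, the effective rate would be about 25 MB/s; small files
# drag the average well below a drive's peak sequential speed.
SET_BYTES = int(1.52e9)
rate = throughput_mb_s(SET_BYTES, 60)
```

Mixed small-file sets like this one stress the controller's per-transaction overhead, which is why boards with identical peak USB speeds can still separate in this test.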

USB 2.0 Copy Times

USB 3.0 Copy Times

USB 2.0 performance is somewhat middling, but USB 3.0 performance on the PCH is some of the best we have seen.

DPC Latency

Deferred Procedure Call latency reflects the way Windows handles interrupt servicing. While waiting for a processor to acknowledge a request, the system queues all interrupt requests by priority. Critical interrupts are handled as soon as possible, whereas lower-priority requests, such as audio, sit further down the line. If the audio device requires data, it has to wait until its request is processed before the buffer is filled.

If the device drivers of higher-priority components in a system are poorly implemented, they can delay request scheduling and processing. This can lead to an empty audio buffer and the characteristic audible pauses, pops and clicks. The DPC latency checker measures how much time is spent processing DPCs from driver invocation; lower values mean better audio transfer at smaller buffer sizes. Results are measured in microseconds.
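The priority-queue behavior described above can be sketched as a toy model, where an audio DPC completes only after higher-priority work has been serviced. All priorities and service costs here are illustrative, not measured Windows values:

```python
import heapq

# Toy model of DPC servicing: requests are ordered by (priority, arrival
# order) and processed one at a time; a low-priority audio DPC's latency is
# the time spent waiting behind more urgent work.

def service_dpcs(requests):
    """requests: list of (priority, name, cost_us); lower priority = more urgent.
    Returns {name: completion_time_us}, assuming all requests arrive at t=0."""
    queue = []
    for order, (prio, name, cost) in enumerate(requests):
        heapq.heappush(queue, (prio, order, name, cost))
    clock, completed = 0, {}
    while queue:
        prio, order, name, cost = heapq.heappop(queue)
        clock += cost            # processor is busy for the DPC's duration
        completed[name] = clock
    return completed

# Two critical DPCs (priority 0) queued ahead of one audio DPC (priority 5):
done = service_dpcs([(0, "storage", 80), (0, "network", 120), (5, "audio", 30)])
# The audio DPC finishes only after both critical DPCs have been serviced,
# so a slow driver in either of them directly inflates audio latency.
```

This is the mechanism behind the pops and clicks: if the queue ahead of the audio DPC takes longer than the audio buffer lasts, the buffer runs dry.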

DPC Latency

LSI 3008 Performance

Unlike in our X79 Extreme11 review, I unfortunately did not have a series of SSDs on hand to test in a similar manner. Nevertheless, the implementation on the X99 version is the same as on the X79, and recapping our X79 Extreme11 results gives the following peak sequential read speeds. The legend gives our X79 setup in terms of SATA 6 Gbps ports + SATA 3 Gbps ports (thus 2+0 is a RAID-0 array of two SATA 6 Gbps ports), with the final eight solely populated on the LSI controller.

Thus, in order to match the best PCH performance in this setup, three drives in RAID-0 were required on the LSI ports. Similar results can be extrapolated for X99, where six of the 10 SATA ports on the PCH are RAID-capable, and a similar number on the LSI would be needed to match them. Unfortunately, any RAID array that spans both the PCH and the LSI ports has to be a software RAID.
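The scaling at work can be sketched with back-of-the-envelope math: ideal RAID-0 sequential reads scale with drive count until the controller's uplink caps them. The per-drive and uplink figures below are assumptions for illustration, not measurements from this board:

```python
# Ideal RAID-0 sequential read scaling, capped by the controller's uplink.

def raid0_read_mb_s(n_drives, per_drive_mb_s, uplink_cap_mb_s):
    """RAID-0 reads scale with drive count until the uplink bottlenecks."""
    return min(n_drives * per_drive_mb_s, uplink_cap_mb_s)

# Assumed figures: ~500 MB/s per SATA 6 Gbps SSD; the chipset hangs off a
# DMI link of roughly 2000 MB/s, while the LSI HBA sits on a much wider
# PCIe uplink (~7880 MB/s usable for a PCIe 3.0 x8 link).
pch_peak = raid0_read_mb_s(6, 500, 2000)   # six chipset ports, DMI-limited
lsi_three = raid0_read_mb_s(3, 500, 7880)  # three drives on the LSI ports
```

Under these assumptions the six-port PCH array saturates its uplink around four drives, while the LSI side keeps scaling, which is consistent with three LSI drives being roughly where the two converge.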

58 Comments

  • lordken - Friday, April 3, 2015 - link

    Rather, you should apologize for being lazy. abufrejoval did run the math for you, so it's pretty clear that all 18 ports won't deliver full bandwidth. If you need to run 18 SSDs at full speed, then you probably need a server board or something.

    If you want to troll go elsewhere.
  • petar_b - Friday, January 29, 2016 - link

    abufrejoval's numbers are not just theoretical - 1 SSD on the PCH can do 400 MB/s, but 4 SSDs simultaneously can give less than 100 MB/s each. Now move them onto the SAS controller and each SSD gives 400 MB/s.

    Once you start using SSDs on SAS - you will never go back to PCH.

    I posted an article on the web a year ago showing the differences I measured with CrystalDiskMark - the values are shocking... The measurements were based on an ASRock X79 Extreme11, with the same SAS controller, just a slightly slower CPU and RAM.
  • wyewye - Friday, March 13, 2015 - link

    Good point duploxxx.
    I haven't seen a professionally done review on this site for quite a while.
  • petar_b - Friday, January 29, 2016 - link

    This motherboard has nothing to do with gaming; go for ROG if you want gaming. It is for business use, rendering, and 3D work, where storage needs to be fast and has to be SAS.

    We are using the older generation of this board, the X79, with the PLX and SAS controllers. There are no words nor space here to explain the performance increase when we hook up 8 SSDs (960 GB) on the SAS controller instead of the Intel ports...

    It's perfect for virtualization or cloud use - for example, 128 GB RAM + 6 TB of SSDs can accommodate more than 20 VMware images, each with 4 GB RAM, running perfectly on a Xeon.

    @dicobalt - keeping porn? It's so sad that people think no further than gaming and watching TV. Go buy a book and learn something... MATLAB, 3ds Max, and earn money. Actually, get a TV and watch porn there.
  • 3DoubleD - Wednesday, March 11, 2015 - link

    Thanks for the review! This board is incredible. I run a storage server with a software raid (Unraid) and this board alone would handle all of my SATA port needs without the need for any PCIe SATA cards. The only issue is the price though. For $600 I could easily buy a $150 Z97 motherboard with 8 SATA ports and two PCIe 8x slots, buy two $150 PCIe 2.0 8x cards (each with 8 SATAIII ports), and I'd still have money left over (probably put it towards a better case!). Also, that's not counting the significant difference in CPU and DDR4 costs.

    Clearly this motherboard is meant for a use case beyond a simple storage server (so many PCIe 8x slots!), so I can't say they missed their intended mark. However, I really wish they could attempt something like this on the Z97 platform, more than 10 SATA ports but with no more than two (or three) PCIe 8x slots (even if some of them are 4x). Aim for a price below $250.

    I can't pretend it would be a big seller, but I know I'd buy one!
  • WithoutWeakness - Thursday, March 12, 2015 - link

    ASRock has the Z87 Extreme11 with 22 SATA III ports (6 from chipset, 16 from LSI controller) along with 4-way SLI support (x8,x8,x8,x8) and a pair of Thunderbolt 2 ports. I'm not sure how feasible it is to plan on using all of those with only 16 PCIe 3.0 lanes from the socket 1150 CPU but it sounds like everything you're asking for. Unfortunately it came in over $500, double your asking price.

    I think you'll be hard pressed to get what you're looking for at that $250 mark, especially on a Z97 board. Socket 1150 CPUs only have 16 lanes, and every manufacturer willing to put an 8+ port RAID controller on board will also want a PLX PCIe bridge chip to avoid choking other PCIe devices (GPUs, M.2 drives, etc.). The RAID chip alone would bring a $100 motherboard into the $200+ range, and adding the PLX chip would likely bring it to $250+. At that point every manufacturer is going to look at a board with 14+ SATA ports, a PLX chip, and a Z97 chipset and say "let's sell it to gamers": slap on a monster VRM setup, additional USB 3.0 ports, and four PCIe x16 slots, bake in some margin, and sell it for $400+.
  • 3DoubleD - Friday, March 13, 2015 - link

    Makes sense. Thanks for the suggestion, I'll look into it. Not sure why I've never come across this board; it doesn't seem to be sold at any of the common outlets I shop at (Newegg.ca, etc.). Still, going with add-in SATA cards seems to be the more economical way.
  • wintermute000 - Sunday, March 15, 2015 - link

    You wouldn't have ECC with Z97.
    Maybe unraid is better than ZFS/BTRFS but I still wouldn't roll with that much storage on a software solution (vs HW RAID) without ECC.
  • Vorl - Wednesday, March 11, 2015 - link

    This is such a strange board. With 18 SATA connections, the first thing everyone will think is "storage server". If all 18 ports were handled by the same high-end RAID controller, then the $600 price tag would make sense. As it is, this system is just a confused jumble of parts slapped together.

    Who needs 4 PCIe x16 slots on a storage server? That is an expense for no reason.
    Who needs 18 SATA connections spread across different controllers that can't all be hardware-RAIDed together? Sure, you can run software RAID, but for $600 you can buy a nice RAID card plus SAS-to-SATA breakout cables and still come out ahead, with full hardware performance and cache.

    Also, for a server, why would they not have an IGP port? I may be missing something, but I thought the CPU had integrated graphics.

    Just not an awesome setup from what I can tell.
    So... why bother having all those SATA ports if they aren't all tied to RAID?

    They add an LSI controller, and that isn't even what handles RAID on the system.
  • 1nf1d3l - Wednesday, March 11, 2015 - link

    LGA2011 processors do not have integrated graphics.
