System Performance

Power Consumption

Power consumption was tested with the system in a single MSI GTX 770 Lightning GPU configuration, using a wall meter connected to the OCZ 1250W power supply. This power supply is Gold rated, and as I am in the UK on a 230-240 V supply, that works out to roughly 75% efficiency above 50 W and 90%+ efficiency at 250 W, suitable for both idle and multi-GPU loading. This method of power reading allows us to compare how well the UEFI and the board manage and supply power to components under load, and it includes typical PSU losses due to efficiency. These are the real-world values that consumers may expect from a typical system (minus the monitor) using this motherboard.

While this method of power measurement may not be ideal, and you may feel these numbers are not representative due to the high-wattage power supply being used (we use the same PSU to remain consistent across a series of reviews, and because some boards on our test bed are tested with three or four high-powered GPUs), the important point to take away is the relationship between the numbers. These boards are all tested under the same conditions, and thus the differences between them should be easy to spot.
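To put the efficiency figures above in context, the sketch below converts a hypothetical DC component load into the AC draw a wall meter would report. The loads and efficiency values are illustrative assumptions, not measurements from this PSU.

```python
# Minimal sketch: relate DC load on the PSU to the AC power read at the wall.
# The loads and efficiency figures are illustrative assumptions, not measured
# values for the OCZ 1250W unit used in testing.

def wall_draw(dc_load_w: float, efficiency: float) -> float:
    """AC power at the wall for a given DC load and PSU efficiency."""
    return dc_load_w / efficiency

# Example: an idle system drawing ~60 W DC at ~75% efficiency,
# and a loaded system drawing ~250 W DC at ~90% efficiency.
print(f"Idle: {wall_draw(60, 0.75):.0f} W at the wall")   # ~80 W
print(f"Load: {wall_draw(250, 0.90):.0f} W at the wall")  # ~278 W
```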

Power Consumption: Long Idle with GTX 770

Power Consumption: Idle with GTX 770

Power Consumption: OCCT Load with GTX 770

With the added PLX switches and LSI RAID controller, the Extreme11 was never going to be a star when it came to power consumption. The same conclusion applied to the ASRock X99 WS-E/10G with its power-hungry 10GbE chip, and as a result these boards match each other at both idle and CPU load.

Windows 7 POST Time

Different motherboards have different POST sequences before an operating system is initialized. A lot of this is dependent on the board itself, with POST time determined by the controllers on board (and the order in which those extras are initialized). As part of our testing, we measure POST time with a stopwatch: the time from pressing the power button to when Windows 7 starts loading. (We discount Windows loading itself, as it is highly variable given Windows-specific features.)

Windows 7 POST Time - Default

Windows 7 POST Time - Stripped

The extra controllers cause a small bump in POST time, leaving the final result near the bottom of our charts.

Rightmark Audio Analyzer 6.2.5

Rightmark:AA indicates how well the sound system is built and isolated from electrical interference (either internal or external). For this test we connect the Line Out to the Line In using a short, high-quality six inch 3.5mm-to-3.5mm cable, turn the OS speaker volume to 100%, and run the Rightmark default test suite at 192 kHz, 24-bit. The OS is set to 192 kHz/24-bit input and output, and the Line-In volume is adjusted until we have the best RMAA value in the mini-pretest. We look specifically at the dynamic range of the audio codec used on board, as well as the Total Harmonic Distortion + Noise (THD+N).
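Since both charts report results on a decibel scale, a short sketch of the underlying ratio-to-dB conversion is below; the example ratios are made up purely to show how figures like -78 dB or 103 dB relate to fractions of full scale.

```python
import math

def ratio_to_db(ratio: float) -> float:
    """Convert an amplitude ratio (relative to full scale) to decibels."""
    return 20 * math.log10(ratio)

# Illustrative ratios only: ~0.012% of full scale is roughly -78 dB,
# and a noise floor about 103 dB below full scale is a ratio near 7e-6.
print(f"{ratio_to_db(0.00012):.1f} dB")   # ~ -78.4 dB
print(f"{ratio_to_db(7.1e-6):.1f} dB")    # ~ -103.0 dB
```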

Dynamic Range of X99 Extreme11 at 100% volume

Rightmark: AA, Dynamic Range, 24-bit / 192 kHz

Rightmark: AA, THD+N, 24-bit / 192 kHz

The Extreme11 results match what we have seen before on other ASRock X99 boards with Realtek ALC1150 audio codecs – around 103 dB for dynamic range and above -78 dB for THD+N.

USB Backup

For this benchmark, we transfer a set size of files from the SSD to the USB drive using DiskBench, which monitors the time taken for the transfer. The files transferred are a 1.52 GB set of 2867 files across 320 folders – 95% of the files are small, typical website files, and the rest (90% of the size) are small 30-second HD videos. In an update from pre-Z87 testing, we also run MaxCPU to load one of the threads during the test, which improves general performance by up to 15% by keeping all of the internal pathways running at full speed.
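Because the charts below report copy times rather than transfer rates, a quick sketch of the conversion may help; the copy times used are hypothetical examples, not measured results.

```python
# Convert a DiskBench-style copy time into an effective transfer rate.
# The 1.52 GB payload matches the test set described above; the copy
# times are hypothetical examples, not measured results.

PAYLOAD_GB = 1.52

def throughput_mb_s(copy_time_s: float, payload_gb: float = PAYLOAD_GB) -> float:
    """Effective transfer rate in MB/s for a given copy time."""
    return (payload_gb * 1024) / copy_time_s

print(f"{throughput_mb_s(50.0):.1f} MB/s")   # a hypothetical 50 s copy -> ~31 MB/s
print(f"{throughput_mb_s(10.0):.1f} MB/s")   # a hypothetical 10 s copy -> ~156 MB/s
```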

USB 2.0 Copy Times

USB 3.0 Copy Times

USB 2.0 performance is somewhat middling, but USB 3.0 performance on the PCH is some of the best we have seen.

DPC Latency

Deferred Procedure Calls (DPCs) are part of how Windows handles interrupt servicing. While waiting for a processor to acknowledge a request, the system queues all interrupt requests by priority. Critical interrupts are handled as soon as possible, whereas lower-priority requests, such as audio, sit further down the line. If the audio device requires data, it has to wait until its request is processed before the buffer is filled.

If the device drivers of higher-priority components in a system are poorly implemented, this can cause delays in request scheduling and processing time. That in turn can lead to an empty audio buffer and the characteristic audible pauses, pops and clicks. The DPC latency checker measures how much time is taken processing DPCs from driver invocation. A lower value results in better audio transfer at smaller buffer sizes. Results are measured in microseconds.
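To make the buffer relationship concrete, the sketch below works out how long an audio buffer lasts and whether a DPC spike of a given length risks an underrun. The buffer size, sample rate and spike value are assumptions for illustration, not figures from the chart below.

```python
# Illustrative only: relate an audio buffer length to a DPC latency spike.
# The buffer size, sample rate and spike value are assumptions, not results.

def buffer_time_us(samples: int, sample_rate_hz: int) -> float:
    """Time in microseconds that a buffer of `samples` lasts at a given rate."""
    return samples / sample_rate_hz * 1_000_000

buffer_us = buffer_time_us(256, 48_000)   # ~5333 us for a 256-sample buffer at 48 kHz
dpc_spike_us = 400                        # hypothetical worst-case DPC latency

print(f"Buffer lasts ~{buffer_us:.0f} us")
print("Underrun risk" if dpc_spike_us >= buffer_us else "Buffer can be refilled in time")
```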

DPC Latency

LSI 3008 Performance

Unlike our X79 Extreme11 review, I unfortunately did not have a series of SSDs on hand to test in a similar manner. Nevertheless, the implementation on the X99 version is the same as on the X79, and to recap, our X79 Extreme11 results give the following peak sequential read speeds. The legend shows our X79 setup in terms of SATA 6 Gbps ports + SATA 3 Gbps ports (thus 2+0 is a RAID-0 array of two SATA 6 Gbps ports), with the final eight entries populated solely on the LSI controller.

Thus, in order to match the best PCH performance in this setup, it required three drives in RAID-0 on the LSI ports. Similar results can be extrapolated for X99, whereby six of the ten SATA ports on the PCH are capable of RAID, and a similar number of drives on the LSI controller would be needed to match them. Unfortunately, any RAID array that spans both the PCH and the LSI ports needs to be done in software.
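As a back-of-the-envelope guide to that RAID-0 scaling, the sketch below estimates aggregate sequential read speed from per-drive throughput; the per-drive figure and scaling factor are assumptions, not numbers taken from the X79 results.

```python
# Back-of-the-envelope RAID-0 sequential read estimate.
# Per-drive throughput and the scaling factor are illustrative assumptions.

def raid0_seq_read(drives: int, per_drive_mb_s: float = 500.0,
                   scaling: float = 0.9) -> float:
    """Estimated aggregate sequential read in MB/s, assuming near-linear
    scaling with a small overhead factor."""
    return drives * per_drive_mb_s * scaling

for n in (2, 3, 6):
    print(f"{n} drives: ~{raid0_seq_read(n):.0f} MB/s")
# e.g. three drives land around 1350 MB/s, the sort of figure needed
# to match a multi-drive array on the PCH ports.
```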

58 Comments

  • Vorl - Wednesday, March 11, 2015 - link

    ahh, like I said, I might have missed something. Thanks!

    I was just looking at the Haswell family and know it does support an IGP. I didn't know that 2011/-E doesn't.
  • yuhong - Saturday, March 14, 2015 - link

    Yea, servers are where 2D graphics on a separate chip on the motherboard is still common.
  • Kevin G - Wednesday, March 11, 2015 - link

    Native PCIe SSDs or 10G Ethernet controllers would make good use of the PCIe slots.

    A PCIe slot will be necessary for graphics, at least during first-time setup. Socket 2011-3 chips don't have integrated graphics, so it is necessary. (It is possible to set everything up headless, but you'll be glad you have a GPU if anything goes wrong.)

    As for why use the LSI controller, it is a decent HBA for software RAID like that used under ZFS. For FreeNAS/NAS4Free users, the sheer number of ports enables some rather large arrays or features like hot sparing or SSD caching.
  • Vorl - Wednesday, March 11, 2015 - link

    For 10G Ethernet controllers/Fiber HBAs you only need x8 slots ("need" is such a strong word too, considering 10G Ethernet and 8Gb Fiber only need 3 and 2 lanes respectively on PCIe 2.0). For super fast PCIe storage like SSDs you only need x4 slots, which is still 2 GB/s on PCIe 2.0. They would have been better served adding more PCIe x8 slots, but then again, what would be the point of 18 SATA ports if you were going to add storage controllers in the PCIe x16 slots?

    The four PCIe x16 slots make me think compute server, but that doesn't mesh with 18 SATA ports. If database engines were able to use graphics cards now (which I know is being worked on), this system might make more sense.

    It still makes me think they just tried to slap a bunch of stuff together without any real thought about what the system would really be used for. I am all for going fishing and seeing what people would use a board like this for, except that the $600 price tag puts it out of reach of anyone but the most specialized use cases.

    As for the LSI controller, like someone mentioned above, you can get a cheaper board with 8x SATA PCIe cards to give you the same number of ports. More ports, even, since most boards these days come with six SATA 6 Gbps connections. The 1 MB of cache on the LSI chip is so silly it's laughable.

    The 128 MB of cache for the RAID controller is a little better, but again, with just six RAID ports, what's the point?

    The whole board is just a mess of confusion.
  • 3DoubleD - Wednesday, March 11, 2015 - link

    Similar to my thinking in my post above.

    If you are going for a software RAID setup with a ludicrous number of SATA ports, you can get a Z97 board with 3 full PCIe slots (x8/x8/x4) and 8 SATA ports. With three Supermicro cards (two 8x SATA III and one 8x SATA II because of the x4 PCIe slot) you would have 32 SATA ports and it would cost you $650. The software RAID I use "only" accepts up to 25 drives, so that last card is only necessary if you need that one extra drive; for $500 you could run a 24-drive array with an M.2 or SATA Express SSD for a cache/system drive. And as you pointed out, since it is Z97, it would have onboard video.

    Basically, given the price of these non-RAID add-in SATA cards, I'd say that any manufacturer making a marketing play on SATA ports needs to keep the cost of each additional SATA port to <$20/port over the price of a board with similar PCIe slot configurations.

    As you said, if this board had 18 SATA ports that could support hardware RAID, then it would be worth the additional price tag. This is probably not possible though, since 10 SATA ports are from the chipset and the rest from an additional controller. For massive hardware RAID setups you're better off getting a PCIe 2.0 x16 card (for 16 SATA III drives) or a PCIe 3.0 x16 card (if such a thing even exists, it could theoretically handle 32 SATA III drives). I'm sure such large hardware RAID arrays become overwhelming for the controller and would cost a fortune.

    Anyway, this must be some niche prosumer application that requires ludicrous amounts of non-RAID storage and 4 co-processor slots. I can't imagine what it is though.
  • Runiteshark - Wednesday, March 11, 2015 - link

    No clue why they didn't do an LSI 3108 and have the port for the add-on BBU and cache unit like Supermicro does on some of their boards. Also not sure why these companies can't put 10G copper connectors, at minimum, on these boards. Again, Supermicro does it without issue.
  • DanNeely - Wednesday, March 11, 2015 - link

    There're people who think combining their gaming godbox and Blu-ray rip mega storage box into a single computer is a good idea. They're the potential market for a monstrosity like this.

    You know what they say, "A fool and his money will probably make someone else rich."
  • Murloc - Wednesday, March 11, 2015 - link

    I guess this is aimed at the rather unlikely situation of someone wanting both storage and computation/gaming in the same place.

    You know, there are people out there who just want the best and don't care about wasting money on features they don't need.
  • Zak - Thursday, March 12, 2015 - link

    I agree. For reasons Vorl mentioned this is a pointless board. I can't imagine a target market for this. My first reaction was also, wow, beastly storage server. But then yeah, different controllers. What is the point?
  • eanazag - Thursday, March 12, 2015 - link

    It is not a server board; it's a Haswell-E desktop board. I have no use for that many SATA ports, but someone might.

    2 x DVD or BD drives
    2 x SSDs on RAID 1 for boot

    Use Windows to mirror the two RAID 0 volumes below.
    7 x SSDs in RAID 0
    7 x SSDs in RAID 0

    The mirrored RAID 0 volumes could get you roughly 3-6 GB/s sequential read speeds from 400 MB/s SSDs. Maybe a little less in write speeds. All done with mediocre SSDs.

    This machine would cost over $2000.
