Many thanks to...

We must thank the following companies for kindly providing hardware for our test bed:

Thank you to OCZ for providing us with 1250W Gold Power Supplies.
Thank you to G.Skill for providing us with memory kits.
Thank you to Corsair for providing us with an AX1200i PSU, Corsair H80i CLC and 16GB 2400C10 memory.
Thank you to ASUS for providing us with the AMD GPUs and some IO Testing kit.
Thank you to ECS for providing us with the NVIDIA GPUs.
Thank you to Rosewill for providing us with the 500W Platinum Power Supply for mITX testing, BlackHawk Ultra, and 1600W Hercules PSU for extreme dual CPU + quad GPU testing, and RK-9100 keyboards.
Thank you to ASRock for providing us with the 802.11ac wireless router for testing.

Test Setup

Processor: Intel Core i7-4960X ES (6 cores, 12 threads, 3.6 GHz, 4.0 GHz Turbo)
Motherboards: EVGA X79 Dark; ASUS Rampage IV Black Edition; ASUS P9X79-E WS
Cooling: Corsair H80i; Thermalright TRUE Copper
Power Supply: OCZ 1250W Gold ZX Series; Corsair AX1200i Platinum
Memory: 2 x Corsair Vengeance Pro 2x8 GB DDR3-2400 10-12-12 kit
Memory Settings: XMP (2400 10-12-12)
Video Cards: ASUS HD7970 3GB; ECS GTX 580 1536MB
Video Drivers: Catalyst 13.1; NVIDIA 310.90 WHQL
Hard Drive: OCZ Vertex 3 256GB
Optical Drive: LG GH22NS50
Case: Open Test Bed
Operating System: Windows 7 64-bit
USB 2/3 Testing: OCZ Vertex 3 240GB with SATA-to-USB adaptor
WiFi Testing: D-Link DIR-865L 802.11ac Dual Band Router

Power Consumption

Power consumption was tested on the system as a whole with a wall meter connected to the OCZ 1250W power supply, in a dual 7970 GPU configuration.  This power supply is Gold rated, and as I am in the UK on a 230-240 V supply, this leads to ~75% efficiency above 50 W and 90%+ efficiency at 250 W, which covers both idle and multi-GPU loads.  This method of power reading allows us to compare how the UEFI and the board manage power delivery to components under load, and it includes typical PSU losses due to efficiency.  These are the real-world values that consumers may expect from a typical system (minus the monitor) using this motherboard.

This method of power measurement may not be ideal, and the numbers may not seem representative given the high-wattage power supply being used (we use the same PSU to remain consistent across a series of reviews, and some boards on our test bed are tested with three or four high-powered GPUs).  The important point to take away, however, is the relationship between the numbers: these boards are all tested under the same conditions, so the differences between them should be easy to spot.
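Since the wall meter reads power drawn from the mains, the figures include PSU conversion losses. As a purely illustrative sketch (not part of the review's methodology), the DC-side component draw can be back-estimated from a wall reading and the efficiency figures quoted above:

```python
def dc_load_estimate(wall_watts, efficiency):
    """Estimate DC-side component draw from a wall-meter reading.

    efficiency: PSU efficiency at that load point (0.0-1.0), e.g.
    roughly 0.90 for a Gold-rated unit at 250 W on a 230 V supply.
    """
    return wall_watts * efficiency

# A 250 W wall reading at ~90% efficiency implies roughly 225 W
# delivered to the components; the remaining ~25 W is PSU loss.
dc_watts = dc_load_estimate(250, 0.90)
```

The function name and sample values are hypothetical; the point is simply that comparisons between boards remain valid because every board sees the same PSU and the same loss curve.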

Power Consumption - Idle

The idle power numbers from the P9X79-E WS are a little higher than the others, presumably due to the large number of extra controllers present.

Windows 7 POST Time

Different motherboards have different POST sequences before an operating system is initialized.  A lot of this is dependent on the board itself, and POST time is determined by the onboard controllers (and the sequence in which those extras are initialized).  As part of our testing, we are now going to look at POST time - the time from pressing the ON button to when Windows 7 starts loading.  (We discount Windows loading, as it is highly variable given Windows-specific features.)  These results are subject to human error, so please allow +/- 1 second.
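Because each stopwatch reading carries roughly +/- 1 second of human error, averaging several runs narrows the estimate. A minimal sketch of how repeated readings could be summarized (the function and the sample times are illustrative, not the article's data):

```python
from statistics import mean, stdev

def post_time_summary(runs):
    """Return (average, spread) for repeated POST-time readings in seconds."""
    return mean(runs), stdev(runs)

# Three hypothetical stopwatch readings for one board:
avg, spread = post_time_summary([14.8, 15.1, 14.6])
```

With the spread well under the stated +/- 1 second tolerance, a single representative figure per board is reasonable.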

POST (Power-On Self-Test) Time

Typically, large motherboards with extra features take longer to POST into Windows 7, such as some of our 25+ second results, but the P9X79-E WS does better than expected, coming in at just under 15 seconds.

53 Comments

  • pewterrock - Friday, January 10, 2014 - link

    Intel WiDi-capable network card (http://intel.ly/1iY9cjx) or if on Windows 8.1 use Miracast (http://bit.ly/1ktIfpq). Either will work with this receiver (http://amzn.to/1lJjrYS) at the TV or monitor.
  • dgingeri - Friday, January 10, 2014 - link

    WiDi would only work for one user at a time. It would have to be a Virtual Desktop type thing like extide mentions, but, as he said, that doesn't work too well for home user activities. Although, it could be with thin-clients: one of these for each user http://www.amazon.com/HP-Smart-Client-T5565z-1-00G...
  • eanazag - Wednesday, January 15, 2014 - link

    Yes and no. Virtual Desktops exist and can be done. Gaming is kind of a weak and expensive option. You can allocate graphics cards to VMs, but screen latency is not going to be optimal for the money. It's cheaper and better to go with individual systems. If you're just watching YouTube and converting video it wouldn't be a bad option and can be done reasonably. Check out NVIDIA's game streaming servers. It exists. The Grid GPUs are pushing into the thousands of dollars, but you would only need one. Supermicro has some systems that, I believe, fall into that category. VMware and XenServer/XenDesktop can share the video cards as the hypervisors. Windows Server with RemoteFX may work better. I haven't tried that.
  • extide - Friday, January 10, 2014 - link

    Note: At the beginning of the article you mention 5 year warranty but at the end you mention 3 years. Which is it?
  • Ian Cutress - Friday, January 10, 2014 - link

    Thanks for pointing out the error. I initially thought I had read it as five, but it is three.
  • Li_Thium - Friday, January 10, 2014 - link

    At last...triple SLI with space between from ASUS.
    Plus one and only SLI bridge: ASRock 3way 2S2S.
  • artemisgoldfish - Friday, January 10, 2014 - link

    I'd like to see how this board compares against an x16/x16/x8 board with 3 290Xs (if thermal issues didn't prevent this). Since they communicate from card to card through PCIe rather than a Crossfire bridge, a card in PCIe 5 communicating with a card in PCIe 1 would have to traverse the root complex and 2 switches. Wonder what the performance penalty would be like.
  • mapesdhs - Friday, January 10, 2014 - link


    I have the older P9X79 WS board, very nice BIOS to work with, easy to set up a good oc,
    currently have a 3930K @ 4.7. I see your NV tests had two 580s; aww, only two? Mine
    has four. :D (though this is more for exploring CUDA issues with AE rather than gaming)
    See: http://valid.canardpc.com/zk69q8

    The main thing I'd like to know is if the Marvell controller is any good, because so far
    every Marvell controller I've tested has been pretty awful, including the one on the older
    WS board. And how does the ASMedia controller compare? Come to think of it, does
    Intel sell any kind of simple SATA RAID PCIe card which just has its own controller so
    one can add a bunch of 6gbit ports that work properly?

    Should anyone contemplate using this newer WS, here are some build hints: fans on the
    chipset heatsinks are essential; it helps a lot with GPU swapping to have a water cooler
    (I recommend the Corsair H110 if your case can take it, though I'm using an H80 since
    I only have a HAF 932 with the PSU at the top); take note of what case you choose if you
    want to have a 2/3-slot GPU in the lowest slot (if so, the PSU needs space such as there
    is in an Aerocool X-Predator, or put the PSU at the top as I've done with my HAF 932);
    and if multiple GPUs are pumping out heat then remove the drive cage & reverse the front
    fan to be an exhaust.

    Also, the CPU socket is very close to the top PCIe slot, so if you do use an air cooler,
    note that larger units may press right up against the back of the top-slot GPU (a Phanteks
    will do this, the cooler I originally had before switching to an H80).

    I can mention a few other things if anyone's interested, plus some picture build links. All
    the same stuff would apply to the newer E version. Ah, an important point: if one upgrades
    the BIOS on this board, all oc profiles will be erased, so make sure you've either used the
    screenshot function to make a record of your oc settings, or written them down manually.

    Btw Ian, something you missed which I think is worth mentioning: compared to the older
    WS, ASUS have moved the 2-digit debug LED to the right side edge of the PCB. I suspect
    they did this because, as I discovered, with four GPUs installed one cannot see the debug
    display at all, which is rather annoying. Glad they've moved it, but a pity it wasn't on the
    right side edge to begin with.

    Hmm, one other question Ian, do you know if it's possible to use any of the lower slots
    as the primary display GPU slot with the E version? (presumably one of the blue slots)
    I tried this with the older board but it didn't work.

    Ian.

    PS. Are you sure your 580 isn't being hampered in any of the tests by its meagre 1.5GB RAM?
    I sourced only 3GB 580s for my build (four MSI Lightning Xtremes, 832MHz stock, though they
    oc like crazy).
  • Ian Cutress - Saturday, January 11, 2014 - link

    Dual GTX 580s is all I got! We don't all work in one big office at AnandTech, as we are dotted around the world. It is hard to source four GPUs of exactly the same type without laying down some personal cash in the process. That being said, for my new 2014 benchmark suite starting soon, I have three GTX 770 Lightnings which will feature in the testing.

    On the couple of points:
    Marvell Controller: ASUS use this to enable SSD Caching, which other controllers do not do. That is perhaps at the expense of speed, although I do not have appropriate hardware (i.e. two drives in RAID 0 capable of saturating SATA 6 Gbps) connected via SATA. Perhaps if I had something like an ACARD ANS-9010 that would be good, but sourcing one would be difficult, as well as being expensive.
    Close proximity to first PCIe: This happens with all motherboards that use the first slot as a PCIe device, hence the change in mainstream boards to now make that top slot a PCIe x1 or nothing at all.
    OC Profiles being erased: Again, this is standard. You're flashing the whole BIOS, OC Profiles included.
    2-Digit Debug: The RIVE had this as well - basically any board where ASUS believes that users will use 4 cards has it moved there. You also need an E-ATX layout or it becomes an issue with routing (at least more difficult to trace on the PCB).
    Lower slots for GPUs: I would assume so, but I am not 100% sure. I cannot see any reason why not, but I have not tested it. If I get a chance to put the motherboard back on the test bed (never easy with a backlog of boards waiting to be tested) I will attempt it.
    GPU Memory: Perhaps. I work with what I have at the time ;) That should be less of an issue for 2014 testing.

    -Ian
  • mapesdhs - Saturday, January 11, 2014 - link


    Ian Cutress writes:
    > ... It is hard to source four GPUs of exactly the same type without
    > laying down some personal cash in the process. ...

    True, it took a while and some moolah to get the cards for my system,
    all off eBay of course (eg. item 161179653299).

    > ... I have three GTX 770 Lightnings which will feature in the testing.

    Sounds good!

    > Marvell Controller: ASUS use this to enable SSD Caching, other controllers do not do it.

    So far I've found it's more useful for providing RAID1 with mechanical drives.
    A while ago I built an AE system using the older WS board; 3930K @ 4.7, 64GB @ 2133,
    two Samsung 830s on the Intel 6gbit ports (C-drive and AE cache), two Enterprise SATA
    2TB on the Marvell in RAID1 for long term data storage. GPUs were a Quadro 4000 and
    three GTX 580 3GB for CUDA.

    > (i.e. two drives in RAID 0 suitable of breaking SATA 6 Gbps) ...

    I tested an HP branded LSI card with 512MB cache, behaved much as expected:
    2GB/sec for accesses that can exploit the cache, less than that when the drives
    have to be read/written, scaling pretty much based on the no. of drives.

    > Close proximity to first PCIe: This happens with all motherboards that use the first
    > slot as a PCIe device, hence the change in mainstream boards to now make that top slot
    > a PCIe x1 or nothing at all.

    It certainly helps with HS spacing on an M4E.

    > OC Profiles being erased: Again, this is standard. You're flashing the whole BIOS, OC
    > Profiles included.

    Pity they can't find a way to preserve the profiles though, or at the very least
    include a warning when about to flash that the oc profiles are going to be wiped.

    > 2-Digit Debug: The RIVE had this as well - basically any board where ASUS believes
    > that users will use 4 cards has it moved there. ...

    Which is why it's a bit surprising that the older P9X79 WS doesn't have it on the edge.

    > Lower slots for GPUs: I would assume so, but I am not 100%. I cannot see any reason
    > why not, but I have not tested it. If I get a chance to put the motherboard back on
    > the test bed (never always easy with a backlog of boards waiting to be tested) I will
    > attempt.

    Ach I wouldn't worry about it too much. It was a more interesting idea with the older
    WS because the slot spacing meant being able to fit a 1-slot Quadro in a lower slot
    would give a more efficient slot usage for 2-slot CUDA cards & RAID cards.

    > GPU Memory: Perhaps. I work with what I have at the time ;) That should be less of an
    > issue for 2014 testing.

    I asked because of my experiences of playing Crysis2 at max settings just at 1920x1200
    on two 1GB cards SLI (switching to 3GB cards made a nice difference). Couldn't help
    wondering if Metro, etc., at 1440p would exceed 1.5GB.

    Ian.
