Any high-priced motherboard should come with almost everything supplied in the box, and ones labeled ‘Workstation’ should, I imagine, have everything that a prosumer might need.  This includes GPU bridges, SATA cables and anything else a user might require (Molex to SATA adapters, 12V extension cables, perhaps?).  In the P9X79-E WS box we get the following:

Driver CD
User Guide
Rear IO Panel
10 SATA Cables
Flexi SLI Bridge
Rigid 3-way SLI Bridge
Rigid 4-way SLI Bridge
COM Rear Bracket
USB and IEEE1394 Rear Bracket
Molex to 2x SATA power cable

Well, I was right about the full complement of SATA cables, and we even get additional power cables and SLI bridges.  It is also good to see rear brackets for the less commonly used on-board ports, which may be an important facet of a prosumer build.

ASUS P9X79-E WS Overclocking

Experience with ASUS P9X79-E WS

The P9X79-E WS is a workstation board, and often any overclocking features are a secondary thought – given that the purpose of such a product is the prosumer Xeon market, the fact that it supports the regular consumer level CPUs is more a bonus than anything else.  But rather than use a server chipset and work down, ASUS have used the consumer chipset and worked up to include Xeons over the consumer level.  As we are using a consumer CPU for this test, all the overclocking options were available, albeit limited.

For automatic overclocks, the AI Suite software offers Fast and Extreme modes, whereby the Fast mode is mirrored by the OC Tuner option in the BIOS and by the switch on the motherboard.  The Fast mode implements a set overclock, whereas the Extreme mode uses that preset as a starting point and probes the system for faster speeds.  Unfortunately, due to our lackluster CPU sample, both settings returned almost the same result.

For manual overclocking, all the options that most regular overclockers are familiar with are here, and compared to the Rampage IV Extreme we actually had some success in beating the BIOS-set voltages required to hit certain frequencies.  Nonetheless, the big extended heatsink on the motherboard does play a part, and we saw 90ºC at the limit of our CPU.

Methodology:

Our standard overclocking methodology is as follows.  We select the automatic overclock options and test for stability with PovRay and OCCT to simulate high-end workloads.  These stability tests aim to catch any immediate causes for memory or CPU errors.

For manual overclocks, based on the information gathered from previous testing, we start off at a nominal voltage and CPU multiplier, and the multiplier is increased until the stability tests fail.  The CPU voltage is then increased gradually until the stability tests are passed, and the process is repeated until the motherboard reduces the multiplier automatically (due to safety protocols) or the CPU temperature reaches a stupidly high level (100ºC+).  Our test bed is not in a case, which should push overclocks a little higher thanks to fresher (cooler) air.
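That manual search is essentially a simple two-variable loop: push the multiplier while stable, add voltage when unstable, and stop at a safety or temperature limit.  As a rough illustration only, the Python sketch below mimics that loop; the helper functions and the numbers inside them are assumptions made purely so the example runs, not our actual test harness (in practice stability is judged by running PovRay and OCCT by hand).

    # A minimal sketch (assumed values, hypothetical helpers) of the manual
    # overclocking loop described above: raise the multiplier while stable,
    # add voltage when unstable, stop at a temperature limit.

    def needed_mv(multiplier: int) -> int:
        """Toy model: assume the chip needs ~50 mV per multiplier step above 40x."""
        return 1100 + 50 * (multiplier - 40)

    def run_stability_tests(multiplier: int, vcore_mv: int) -> bool:
        """Stand-in for a PovRay + OCCT run."""
        return vcore_mv >= needed_mv(multiplier)

    def peak_temperature(multiplier: int, vcore_mv: int) -> float:
        """Stand-in for the OCCT temperature readout (entirely made up)."""
        return 60 + 8 * (multiplier - 40) + 0.04 * (vcore_mv - 1100)

    def find_max_overclock(multiplier=40, vcore_mv=1100,
                           step_mv=25, temp_limit=100.0):
        best = None
        while True:
            if peak_temperature(multiplier, vcore_mv) >= temp_limit:
                break                      # stupidly high temperature: stop here
            if run_stability_tests(multiplier, vcore_mv):
                best = (multiplier, vcore_mv / 1000)   # record the stable point
                multiplier += 1            # stable: push the multiplier up a notch
            else:
                vcore_mv += step_mv        # unstable: feed it a little more voltage
        return best

    print(find_max_overclock())            # (43, 1.25) with the toy model above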

Automatic Overclock:

The automatic overclock options are found in the TurboV EVO section of AI Suite, and offer one-button selections.  Our results are as follows:

On the ‘Fast’ setting, the system changed the CPU strap from 100 MHz to 125 MHz, as well as adjusting the CPU to a 33x base turbo with a 36x full turbo.  This gave a frequency range of 4125 MHz to 4500 MHz, with the CPU set to 1.300 volts and Load Line Calibration on Auto.  At this setting the CPU scored 2210.71 in PovRay, reached a peak temperature of 74ºC in OCCT, and showed a load voltage of 1.288 volts.

On the ‘Extreme’ setting, the system rebooted to the ‘Fast’ mode speed and then attempted to stress test the CPU, first by incrementing the multiplier and then the BCLK.  Unfortunately there was an issue with the software when raising the multiplier, causing the system to loop the same screen animation.  If the system is reset at that point, the software tries again with the BCLK instead.  On this setting, the system ended up with the same multiplier range as the ‘Fast’ setting, but at a 125.50 MHz strap, a small 0.50 MHz difference; the CPU voltage and LLC were set the same as in the Fast mode.  With this setting, PovRay scored 2235.18, the OCCT peak temperature was 74ºC, and the system reported a load voltage of 1.288 volts.
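The resulting speeds follow directly from the BCLK strap multiplied by the turbo multipliers; as a quick illustration of that arithmetic in Python, using the figures reported above:

    # Final CPU frequency is simply the base clock (BCLK) times the multiplier.
    presets = {
        "Fast":    {"bclk_mhz": 125.00, "multipliers": (33, 36)},
        "Extreme": {"bclk_mhz": 125.50, "multipliers": (33, 36)},
    }

    for name, preset in presets.items():
        base, full = (preset["bclk_mhz"] * m for m in preset["multipliers"])
        print(f"{name}: {base:.1f} MHz (33x base turbo) to {full:.1f} MHz (36x full turbo)")

    # Fast: 4125.0 MHz (33x base turbo) to 4500.0 MHz (36x full turbo)
    # Extreme: 4141.5 MHz (33x base turbo) to 4518.0 MHz (36x full turbo)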

Manual Overclock:

For our manual overclock, we start at the 40x multiplier, set Load Line Calibration to Ultra High and the CPU voltage to 1.100 volts, and start testing.  If the system is stable (passing both PovRay and OCCT), the multiplier is increased; if unstable, the voltage is increased.  Here are our results:

At 4.5 GHz the OCCT test was pushing above 90ºC, so we decided to stop our testing there.  It was interesting to see the required voltage stay so level up to 4.2 GHz.

Comments

  • pewterrock - Friday, January 10, 2014 - link

    Intel WiDi-capable network card (http://intel.ly/1iY9cjx) or, if on Windows 8.1, use Miracast (http://bit.ly/1ktIfpq). Either will work with this receiver (http://amzn.to/1lJjrYS) at the TV or monitor.
  • dgingeri - Friday, January 10, 2014 - link

    WiDi would only work for one user at a time. It would have to be a Virtual Desktop type thing like extide mentions, but, as he said, that doesn't work too well for home user activities. Although it could be done with thin clients: one of these for each user http://www.amazon.com/HP-Smart-Client-T5565z-1-00G...
  • eanazag - Wednesday, January 15, 2014 - link

    Yes and no. Virtual Desktops exist and can be done. Gaming is kind of a weak and expensive option. You can allocate graphics cards to VMs, but latency to the screen is not going to be optimal for the money. It is cheaper and better to go with individual systems. If you're just watching YouTube and converting video it wouldn't be a bad option and can be done reasonably. Check out nVidia's game streaming servers; they exist. The Grid GPUs are pushing into the thousands of dollars, but you would only need one. Supermicro has some systems that, I believe, fall into that category. VMware and XenServer/XenDesktop hypervisors can share the video cards. Windows Server with RemoteFX may work better; I haven't tried that.
  • extide - Friday, January 10, 2014 - link

    Note: At the beginning of the article you mention 5 year warranty but at the end you mention 3 years. Which is it?
  • Ian Cutress - Friday, January 10, 2014 - link

    Thanks for pointing out the error. I initially thought I had read it as five, but it is three.
  • Li_Thium - Friday, January 10, 2014 - link

    At last...triple SLI with space between from ASUS.
    Plus one and only SLI bridge: ASRock 3way 2S2S.
  • artemisgoldfish - Friday, January 10, 2014 - link

    I'd like to see how this board compares against an x16/x16/x8 board with 3 290Xs (if thermal issues didn't prevent this). Since they communicate from card to card through PCIe rather than a Crossfire bridge, a card in PCIe 5 communicating with a card in PCIe 1 would have to traverse the root complex and 2 switches. Wonder what the performance penalty would be like.
  • mapesdhs - Friday, January 10, 2014 - link


    I have the older P9X79 WS board, very nice BIOS to work with, easy to setup a good oc,
    currently have a 3930K @ 4.7. I see your NV tests had two 580s; aww, only two? Mine
    has four. :D (though this is more for exploring CUDA issues with AE rather than gaming)
    See: http://valid.canardpc.com/zk69q8

    The main thing I'd like to know is if the Marvell controller is any good, because so far
    every Marvell controller I've tested has been pretty awful, including the one on the older
    WS board. And how does the ASMedia controller compare? Come to think of it, does
    Intel sell any kind of simple SATA RAID PCIe card which just has its own controller so
    one can add a bunch of 6gbit ports that work properly?

    Should anyone contemplate using this newer WS, here are some build hints: fans on the
    chipset heatsinks are essential; it helps a lot with GPU swapping to have a water cooler
    (I recommend the Corsair H110 if your case can take it, though I'm using an H80 since
    I only have a HAF 932 with the PSU at the top); take note of what case you choose if you
    want to have a 2/3-slot GPU in the lowest slot (if so, the PSU needs space such as there
    is in an Aerocool X-Predator, or put the PSU at the top as I've done with my HAF 932);
    and if multiple GPUs are pumping out heat then remove the drive cage & reverse the front
    fan to be an exhaust.

    Also, the CPU socket is very close to the top PCIe slot, so if you do use an air cooler,
    note that larger units may press right up against the back of the top-slot GPU (a Phanteks
    will do this, the cooler I originally had before switching to an H80).

    I can mention a few other things if anyone's interested, plus some picture build links. All
    the same stuff would apply to the newer E version. Ah, an important point: if one upgrades
    the BIOS on this board, all oc profiles will be erased, so make sure you've either used the
    screenshot function to make a record of your oc settings, or written them down manually.

    Btw Ian, something you missed which I think is worth mentioning: compared to the older
    WS, ASUS have moved the 2-digit debug LED to the right side edge of the PCB. I suspect
    they did this because, as I discovered, with four GPUs installed one cannot see the debug
    display at all, which is rather annoying. Glad they've moved it, but a pity it wasn't on the
    right side edge to begin with.

    Hmm, one other question Ian, do you know if it's possible to use any of the lower slots
    as the primary display GPU slot with the E version? (presumably one of the blue slots)
    I tried this with the older board but it didn't work.

    Ian.

    PS. Are you sure your 580 isn't being hampered in any of the tests by its meagre 1.5GB RAM?
    I sourced only 3GB 580s for my build (four MSI Lightning Xtremes, 832MHz stock, though they
    oc like crazy).
  • Ian Cutress - Saturday, January 11, 2014 - link

    Dual GTX 580s is all I got! We don't all work in one big office at AnandTech, as we are dotted around the world. It is hard to source four GPUs of exactly the same type without laying down some personal cash in the process. That being said, for my new 2014 benchmark suite starting soon, I have three GTX 770 Lightnings which will feature in the testing.

    On the couple of points:
    Marvell Controller: ASUS use this to enable SSD Caching, other controllers do not do it. That is perhaps at the expense of speed, although I do not have appropriate hardware (i.e. two drives in RAID 0 capable of breaking SATA 6 Gbps) connected via SATA. Perhaps if I had something like an ACARD ANS-9010 that would be good, but sourcing one would be difficult, as well as being expensive.
    Close proximity to first PCIe: This happens with all motherboards that use the first slot as a PCIe device, hence the change in mainstream boards to now make that top slot a PCIe x1 or nothing at all.
    OC Profiles being erased: Again, this is standard. You're flashing the whole BIOS, OC Profiles included.
    2-Digit Debug: The RIVE had this as well - basically any board where ASUS believes that users will use 4 cards has it moved there. You also need an E-ATX layout or it becomes an issue with routing (at least more difficult to trace on the PCB).
    Lower slots for GPUs: I would assume so, but I am not 100%. I cannot see any reason why not, but I have not tested it. If I get a chance to put the motherboard back on the test bed (not always easy with a backlog of boards waiting to be tested) I will attempt.
    GPU Memory: Perhaps. I work with what I have at the time ;) That should be less of an issue for 2014 testing.

    -Ian
  • mapesdhs - Saturday, January 11, 2014 - link


    Ian Cutress writes:
    > ... It is hard to source four GPUs of exactly the same type without
    > laying down some personal cash in the process. ...

    True, it took a while and some moolah to get the cards for my system,
    all off eBay of course (eg. item 161179653299).

    > ... I have three GTX 770 Lightnings which will feature in the testing.

    Sounds good!

    > Marvell Controller: ASUS use this to enable SSD Caching, other controllers do not do it.

    So far I've found it's more useful for providing RAID1 with mechanical drives.
    A while ago I built an AE system using the older WS board; 3930K @ 4.7, 64GB @ 2133,
    two Samsung 830s on the Intel 6gbit ports (C-drive and AE cache), two Enterprise SATA
    2TB on the Marvell in RAID1 for long term data storage. GPUs were a Quadro 4000 and
    three GTX 580 3GB for CUDA.

    > (i.e. two drives in RAID 0 capable of breaking SATA 6 Gbps) ...

    I tested an HP branded LSI card with 512MB cache, behaved much as expected:
    2GB/sec for accesses that can exploit the cache, less than that when the drives
    have to be read/written, scaling pretty much based on the no. of drives.

    > Close proximity to first PCIe: This happens with all motherboards that use the first
    > slot as a PCIe device, hence the change in mainstream boards to now make that top slot
    > a PCIe x1 or nothing at all.

    It certainly helps with HS spacing on an M4E.

    > OC Profiles being erased: Again, this is standard. You're flashing the whole BIOS, OC
    > Profiles included.

    Pity they can't find a way to preserve the profiles though, or at the very least
    include a warning when about to flash that the oc profiles are going to be wiped.

    > 2-Digit Debug: The RIVE had this as well - basically any board where ASUS believes
    > that users will use 4 cards has it moved there. ...

    Which is why it's a bit surprising that the older P9X79 WS doesn't have it on the edge.

    > Lower slots for GPUs: I would assume so, but I am not 100%. I cannot see any reason
    > why not, but I have not tested it. If I get a chance to put the motherboard back on
    > the test bed (not always easy with a backlog of boards waiting to be tested) I will
    > attempt.

    Ach I wouldn't worry about it too much. It was a more interesting idea with the older
    WS because the slot spacing meant being able to fit a 1-slot Quadro in a lower slot
    would give a more efficient slot usage for 2-slot CUDA cards & RAID cards.

    > GPU Memory: Perhaps. I work with what I have at the time ;) That should be less of an
    > issue for 2014 testing.

    I asked because of my experiences of playing Crysis2 at max settings just at 1920x1200
    on two 1GB cards SLI (switching to 3GB cards made a nice difference). Couldn't help
    wondering if Metro, etc., at 1440p would exceed 1.5GB.

    Ian.
