The PLX Arrangement

The P9X79-E WS has an interesting chipset diagram – in order to power each of the seven PCIe slots in the specified x16/x8/x8/x8/x16/x8/x8 arrangement, ASUS uses a PLX switch, which acts as a FIFO buffer/mux and increases bandwidth to the GPUs that need it most – a sort of ‘fill twice, pour once’ approach.  We covered the chipset diagram earlier in the review:

In this diagram the thick lines are where x16 lanes are directed, and the thin lines are x8.  PCIe 3 thus normally has eight lanes from the PLX and eight lanes from the Quick Switch, but when PCIe 2 is populated, the Quick Switch moves those eight lanes over to PCIe 2.
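The Quick Switch behavior described above can be sketched in a few lines of code. This is purely illustrative (the slot names and the function are ours, not ASUS firmware): it models how eight switched lanes follow slot population.

```python
def quick_switch_lanes(pcie2_populated: bool) -> dict:
    """Model of the Quick Switch lane assignment described in the text.

    PCIe 3 normally receives 8 lanes from the PLX plus 8 from the Quick
    Switch (x16 total); populating PCIe 2 diverts the switched eight
    lanes to that slot, dropping PCIe 3 back to x8.
    """
    if pcie2_populated:
        return {"PCIe 2": 8, "PCIe 3": 8}
    return {"PCIe 2": 0, "PCIe 3": 8 + 8}

# Empty slot 2: PCIe 3 runs at x16; populate it and both slots run at x8.
print(quick_switch_lanes(False))  # {'PCIe 2': 0, 'PCIe 3': 16}
print(quick_switch_lanes(True))   # {'PCIe 2': 8, 'PCIe 3': 8}
```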

The effect on gaming can amount to a 1-2% performance penalty per PLX chip, as shown in our previous reviews.  This appears again in the P9X79-E WS, but it is a small price to pay for supporting up to seven PCIe devices.

Metro 2033

Our first analysis is with the perennial reviewers’ favorite, Metro 2033.  It appears in a lot of reviews for a couple of reasons – it has a very easy-to-use benchmark GUI that anyone can operate, and it is often very GPU limited, at least in single GPU mode.  Metro 2033 is a strenuous DX11 benchmark that can challenge most systems that try to run it at any high-end settings.  The game was developed by 4A Games and released in March 2010; we use the inbuilt DirectX 11 Frontline benchmark to test the hardware at 1440p with full graphical settings.  Results are given as the average frame rate from a second batch of four runs, as Metro has a tendency to inflate the scores of the first batch by up to 5%.
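The batching described above is simple to express in code. A minimal sketch of our averaging scheme (the function name and batch layout are our own illustration): the first batch of runs is discarded as warm-up and only the second batch is averaged.

```python
def average_fps(run_results: list, batch_size: int = 4) -> float:
    """Average only the second batch of benchmark runs.

    The first batch is discarded, since Metro 2033 tends to inflate
    those scores by up to ~5%.
    """
    second_batch = run_results[batch_size:batch_size * 2]
    return sum(second_batch) / len(second_batch)

# Eight runs: the inflated first four are ignored.
print(average_fps([44.1, 43.8, 43.9, 44.0, 41.0, 42.0, 41.5, 41.5]))  # 41.5
```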

Metro 2033 - One 7970, 1440p, Max Settings

[Graph: Metro 2033 average frame rates for AMD and NVIDIA cards in 1, 2, and 3 GPU configurations]

Dirt 3

Dirt 3 is a rally video game, the third entry in the Dirt branch of the Colin McRae Rally series, developed and published by Codemasters.  Dirt 3 also falls under the list of ‘games with a handy benchmark mode’.  In previous testing, Dirt 3 has always seemed to love cores, memory, GPUs, PCIe lane bandwidth – everything.  The small issue with Dirt 3 is that, depending on the mode tested, the benchmark is not indicative of gameplay per se, citing numbers higher than those actually observed.  On the other hand, the benchmark mode also includes an element of uncertainty by actually driving a race, rather than replaying a predetermined sequence of events as Metro 2033 does.  This in essence should make the benchmark more variable, but we take repeated runs in order to smooth this out.  Using the benchmark mode, Dirt 3 is run at 1440p with Ultra graphical settings.  Results are reported as the average frame rate across four runs.

Dirt 3 - One 7970, 1440p, Max Settings

[Graph: Dirt 3 average frame rates for AMD and NVIDIA cards in 1, 2, and 3 GPU configurations]

Civilization V

A game that has plagued my testing over the past twelve months is Civilization V.  The older 12.3 Catalyst drivers were somewhat of a nightmare, giving no scaling, and as a result I dropped the game from my test suite after only a couple of reviews.  With the later drivers used for this review the situation has improved, but only slightly, as you will see below.  Civilization V seems to run into a scaling bottleneck very early on, and any additional GPU allocation only makes performance worse.

Our Civilization V testing uses Ryan’s GPU benchmark test all wrapped up in a neat batch file.  We test at 1080p, and report the average frame rate of a 5 minute test.

Civilization V - One 7970, 1440p, Max Settings

[Graph: Civilization V average frame rates for AMD and NVIDIA cards in 1, 2, and 3 GPU configurations]

Sleeping Dogs

While not necessarily a game on everybody’s lips, Sleeping Dogs is a strenuous title with a pretty hardcore benchmark that scales well with additional GPU power due to its SSAA implementation.  The team over at Adrenaline.com.br has done a supreme job of making an easy-to-use benchmark GUI, allowing a numpty like me to charge ahead with a set of four 1440p runs at maximum graphical settings.

Sleeping Dogs - One 7970, 1440p, Max Settings

[Graph: Sleeping Dogs average frame rates for AMD and NVIDIA cards in 1, 2, and 3 GPU configurations]

53 Comments

  • pewterrock - Friday, January 10, 2014 - link

    Intel Widi-capable network card (http://intel.ly/1iY9cjx) or if on Windows 8.1 use Miracast (http://bit.ly/1ktIfpq). Either will work with this receiver (http://amzn.to/1lJjrYS) at the TV or monitor.
  • dgingeri - Friday, January 10, 2014 - link

    WiDi would only work for one user at a time. It would have to be a Virtual Desktop type thing like extide mentions, but, as he said, that doesn't work too well for home user activities. Although, it could be with thin-clients: one of these for each user http://www.amazon.com/HP-Smart-Client-T5565z-1-00G...
  • eanazag - Wednesday, January 15, 2014 - link

    Yes and no. Virtual Desktops exist and can be done. Gaming is kind of a weak and expensive option. You can allocate graphics cards to VMs, but latency for the screen is not going to be optimal for the money. Cheaper and better to go individual systems. If you're just watching youtube and converting video it wouldn't be a bad option and can be done reasonably. Check out nVidia's game streaming servers. It exists. The Grid GPUs are pushing in the thousands of dollars, but you would only need one. Supermicro has some systems that, I believe, fall into that category. VMware and Xenserver/Xendesktop can share the video cards as the hypervisors. Windows server with RemoteFX may work better. I haven't tried that.
  • extide - Friday, January 10, 2014 - link

    Note: At the beginning of the article you mention 5 year warranty but at the end you mention 3 years. Which is it?
  • Ian Cutress - Friday, January 10, 2014 - link

    Thanks for pointing out the error. I initially thought I had read it as five but it is three.
  • Li_Thium - Friday, January 10, 2014 - link

    At last...triple SLI with space between from ASUS.
    Plus one and only SLI bridge: ASRock 3way 2S2S.
  • artemisgoldfish - Friday, January 10, 2014 - link

    I'd like to see how this board compares against an x16/x16/x8 board with 3 290Xs (if thermal issues didn't prevent this). Since they communicate from card to card through PCIe rather than a Crossfire bridge, a card in PCIe 5 communicating with a card in PCIe 1 would have to traverse the root complex and 2 switches. Wonder what the performance penalty would be like.
  • mapesdhs - Friday, January 10, 2014 - link


    I have the older P9X79 WS board, very nice BIOS to work with, easy to setup a good oc,
    currently have a 3930K @ 4.7. I see your NV tests had two 580s; aww, only two? Mine
    has four. :D (though this is more for exploring CUDA issues with AE rather than gaming)
    See: http://valid.canardpc.com/zk69q8

    The main thing I'd like to know is if the Marvell controller is any good, because so far
    every Marvell controller I've tested has been pretty awful, including the one on the older
    WS board. And how does the ASMedia controller compare? Come to think of it, does
    Intel sell any kind of simple SATA RAID PCIe card which just has its own controller so
    one can add a bunch of 6gbit ports that work properly?

    Should anyone contemplate using this newer WS, here are some build hints: fans on the
    chipset heatsinks are essential; it helps a lot with GPU swapping to have a water cooler
    (I recommend the Corsair H110 if your case can take it, though I'm using an H80 since
    I only have a HAF 932 with the PSU at the top); take note of what case you choose if you
    want to have a 2/3-slot GPU in the lowest slot (if so, the PSU needs space such as there
    is in an Aerocool X-Predator, or put the PSU at the top as I've done with my HAF 932);
    and if multiple GPUs are pumping out heat then remove the drive cage & reverse the front
    fan to be an exhaust.

    Also, the CPU socket is very close to the top PCIe slot, so if you do use an air cooler,
    note that larger units may press right up against the back of the top-slot GPU (a Phanteks
    will do this, the cooler I originally had before switching to an H80).

    I can mention a few other things if anyone's interested, plus some picture build links. All
    the same stuff would apply to the newer E version. Ah, an important point: if one upgrades
    the BIOS on this board, all oc profiles will be erased, so make sure you've either used the
    screenshot function to make a record of your oc settings, or written them down manually.

    Btw Ian, something you missed which I think is worth mentioning: compared to the older
    WS, ASUS have moved the 2-digit debug LED to the right side edge of the PCB. I suspect
    they did this because, as I discovered, with four GPUs installed one cannot see the debug
    display at all, which is rather annoying. Glad they've moved it, but a pity it wasn't on the
    right side edge to begin with.

    Hmm, one other question Ian, do you know if it's possible to use any of the lower slots
    as the primary display GPU slot with the E version? (presumably one of the blue slots)
    I tried this with the older board but it didn't work.

    Ian.

    PS. Are you sure your 580 isn't being hampered in any of the tests by its meagre 1.5GB RAM?
    I sourced only 3GB 580s for my build (four MSI Lightning Xtremes, 832MHz stock, though they
    oc like crazy).
  • Ian Cutress - Saturday, January 11, 2014 - link

    Dual GTX 580s is all I got! We don't all work in one big office at AnandTech, as we are dotted around the world. It is hard to source four GPUs of exactly the same type without laying down some personal cash in the process. That being said, for my new 2014 benchmark suite starting soon, I have three GTX 770 Lightnings which will feature in the testing.

    On the couple of points:
    Marvell Controller: ASUS use this to enable SSD Caching, other controllers do not do it. That is perhaps at the expense of speed, although I do not have appropriate hardware (i.e. two drives in RAID 0 suitable of breaking SATA 6 Gbps) connected via SATA. Perhaps if I had something like an ACARD ANS-9010 that would be good, but sourcing one would be difficult, as well as being expensive.
    Close proximity to first PCIe: This happens with all motherboards that use the first slot as a PCIe device, hence the change in mainstream boards to now make that top slot a PCIe x1 or nothing at all.
    OC Profiles being erased: Again, this is standard. You're flashing the whole BIOS, OC Profiles included.
    2-Digit Debug: The RIVE had this as well - basically any board where ASUS believes that users will use 4 cards has it moved there. You also need an E-ATX layout or it becomes an issue with routing (at least more difficult to trace on the PCB).
    Lower slots for GPUs: I would assume so, but I am not 100%. I cannot see any reason why not, but I have not tested it. If I get a chance to put the motherboard back on the test bed (never always easy with a backlog of boards waiting to be tested) I will attempt.
    GPU Memory: Perhaps. I work with what I have at the time ;) That should be less of an issue for 2014 testing.

    -Ian
  • mapesdhs - Saturday, January 11, 2014 - link


    Ian Cutress writes:
    > ... It is hard to source four GPUs of exactly the same type without
    > laying down some personal cash in the process. ...

    True, it took a while and some moolah to get the cards for my system,
    all off eBay of course (eg. item 161179653299).

    > ... I have three GTX 770 Lightnings which will feature in the testing.

    Sounds good!

    > Marvell Controller: ASUS use this to enable SSD Caching, other controllers do not do it.

    So far I've found it's more useful for providing RAID1 with mechanical drives.
    A while ago I built an AE system using the older WS board; 3930K @ 4.7, 64GB @ 2133,
    two Samsung 830s on the Intel 6gbit ports (C-drive and AE cache), two Enterprise SATA
    2TB on the Marvell in RAID1 for long term data storage. GPUs were a Quadro 4000 and
    three GTX 580 3GB for CUDA.

    > (i.e. two drives in RAID 0 suitable of breaking SATA 6 Gbps) ...

    I tested an HP branded LSI card with 512MB cache, behaved much as expected:
    2GB/sec for accesses that can exploit the cache, less than that when the drives
    have to be read/written, scaling pretty much based on the no. of drives.

    > Close proximity to first PCIe: This happens with all motherboards that use the first
    > slot as a PCIe device, hence the change in mainstream boards to now make that top slot
    > a PCIe x1 or nothing at all.

    It certainly helps with HS spacing on an M4E.

    > OC Profiles being erased: Again, this is standard. You're flashing the whole BIOS, OC
    > Profiles included.

    Pity they can't find a way to preserve the profiles though, or at the very least
    include a warning when about to flash that the oc profiles are going to be wiped.

    > 2-Digit Debug: The RIVE had this as well - basically any board where ASUS believes
    > that users will use 4 cards has it moved there. ...

    Which is why it's a bit surprising that the older P9X79 WS doesn't have it on the edge.

    > Lower slots for GPUs: I would assume so, but I am not 100%. I cannot see any reason
    > why not, but I have not tested it. If I get a chance to put the motherboard back on
    > the test bed (never always easy with a backlog of boards waiting to be tested) I will
    > attempt.

    Ach I wouldn't worry about it too much. It was a more interesting idea with the older
    WS because the slot spacing meant being able to fit a 1-slot Quadro in a lower slot
    would give a more efficient slot usage for 2-slot CUDA cards & RAID cards.

    > GPU Memory: Perhaps. I work with what I have at the time ;) That should be less of an
    > issue for 2014 testing.

    I asked because of my experiences of playing Crysis2 at max settings just at 1920x1200
    on two 1GB cards SLI (switching to 3GB cards made a nice difference). Couldn't help
    wondering if Metro, etc., at 1440p would exceed 1.5GB.

    Ian.
