To start, we want to thank the many manufacturers who have donated kit for our test beds in order to make this review, along with many others, possible.

Thank you to OCZ for providing us with 1250W Gold Power Supplies.
Thank you to G.Skill for providing us with the memory kits.
Thank you to ASUS for providing us with the AMD GPUs and some IO Testing kit.
Thank you to ECS for providing us with the NVIDIA GPUs.
Thank you to Corsair for providing us with the Corsair H80i CLC.
Thank you to Rosewill for providing us with the 500W Platinum Power Supply for mITX testing, the BlackHawk Ultra case and 1600W Hercules PSU for extreme dual CPU + quad GPU testing, and the RK-9100 keyboards.
Thank you to Gigabyte for providing us with the X5690 CPUs.

Also many thanks go to the manufacturers who over the years have provided review samples which contribute to this review.

Testing Methodology

In order to keep the testing fair, we set strict rules in place for each of these setups. For every new chipset, the SSD was formatted and a fresh installation of the OS was applied. The chipset drivers for the motherboard were installed, followed by the NVIDIA drivers and then the AMD drivers. The games were preinstalled on a second partition, but relinked to ensure they worked properly. The games were then tested as follows:

Metro 2033: Benchmark Mode, two runs of four scenes each at 1440p, max settings. The first run of four is discarded; the average of the second run is taken (minus outliers).

DiRT 3: Benchmark Mode, four runs of the first scene with 8 cars at 1440p, max settings. Average is taken.

Civilization V: One five minute run of the benchmark mode accessible at the command line, at 1440p and max settings. Results produced are total frames in sets of 60 seconds, average taken.

Sleeping Dogs: Using the Adrenaline benchmark software, four scenes at 1440p in Ultra settings. Average is taken.
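
As an aside, here is a minimal sketch of how per-game numbers like these get aggregated: the first Metro 2033 pass is treated as a warm-up and discarded, and outliers are trimmed before averaging. The 10%-from-median trimming rule and the function names are illustrative assumptions for this sketch, not a description of the exact harness used.

    import statistics

    def trim_outliers(fps_values, tolerance=0.10):
        # Drop runs more than 10% from the median; the threshold is an
        # illustrative assumption, not the exact rule used in testing.
        mid = statistics.median(fps_values)
        return [v for v in fps_values if abs(v - mid) <= tolerance * mid]

    def metro2033_score(first_pass, second_pass):
        # The first pass of four scenes is a warm-up and is discarded;
        # the second pass is averaged minus outliers.
        kept = trim_outliers(second_pass)
        return sum(kept) / len(kept)

    def dirt3_score(runs):
        # Four runs of the first scene with 8 cars; straight average.
        return sum(runs) / len(runs)

    # Example: a 71.5 fps outlier in the second Metro pass gets trimmed.
    print(metro2033_score([55.1, 54.8, 60.2, 55.0],
                          [56.3, 55.9, 71.5, 56.1]))  # ~56.1
    print(dirt3_score([88.2, 87.9, 88.5, 88.1]))      # ~88.2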

If the platform was being used for the next CPU (e.g. Crosshair V Formula, moving from the FX-8150 to the FX-8350), there was no need to reinstall. If the platform changed for the next test, a full reinstall and setup took place.

How to Read This Review

Due to the large number of different variables in our review, it is hard to label each data point accurately with all the information about that setup. Just putting the CPU model down is also a bad idea, as the same CPU could sit in two different motherboards with different GPU lane allocations. There is also the memory to consider, as well as whether a motherboard applies MultiCore Turbo (MCT) at stock. Here is a set of labels correlating to the configurations you will see in this review:

CPU[+][(CP)] (PCIe version – lane allocation to GPUs [PLX])

First is the name of the CPU, then an optional '+' identifier for MCT-enabled motherboards. (CP) indicates we are dealing with a Bulldozer-derived CPU and using the Core Parking updates. Inside the parentheses is the PCIe version of the lanes in use, along with the lane allocation to each GPU. The final flag indicates whether a PLX chip is involved in the lane allocation.

Thus, for example:

A10-5800K (2 – x16/x16): A10-5800K with two GPUs in PCIe 2.0 mode
A10-5800K (CP) (2 – x16/x16): A10-5800K using Core Parking updates with two GPUs in PCIe 2.0 mode
FX-8350 (2 – x16/x16/x8): FX-8350 with three GPUs in PCIe 2.0 mode
i7-3770K (3/2 – x8/x8 + x4): i7-3770K powering three GPUs in PCIe 3.0 but the third GPU is using the PCIe 2.0 x4 from the chipset
i7-3770K+ (3 – x16): i7-3770K (with MCT) powering one GPU in PCIe 3.0 mode
i7-3770K+ (3 – x8/x8/x8/x8 PLX): i7-3770K (with MCT) powering four GPUs in PCIe 3.0 via a PLX chip
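
To make the grammar of these labels concrete, here is a small hypothetical parser; the regular expression and the field names (mct, core_parking, and so on) are our own inventions for illustration rather than part of any tool used in this review.

    import re

    LABEL = re.compile(
        r"^(?P<cpu>[A-Za-z0-9-]+?)"         # CPU model, e.g. i7-3770K
        r"(?P<mct>\+)?"                     # '+' = MCT-enabled motherboard
        r"(?P<cp>\s*\(CP\))?"               # '(CP)' = Core Parking updates
        r"\s*\((?P<pcie>[\d/]+)\s*[–-]\s*"  # PCIe version(s) of the lanes
        r"(?P<lanes>[^)]+?)"                # lane allocation, e.g. x8/x8 + x4
        r"\s*(?P<plx>PLX)?\)$"              # optional PLX flag
    )

    def parse_label(label):
        m = LABEL.match(label)
        if m is None:
            raise ValueError("unrecognised label: " + label)
        return {
            "cpu": m.group("cpu"),
            "mct": m.group("mct") is not None,
            "core_parking": m.group("cp") is not None,
            "pcie_version": m.group("pcie"),
            "gpu_lanes": m.group("lanes").strip(),
            "plx": m.group("plx") is not None,
        }

    print(parse_label("i7-3770K+ (3 – x8/x8/x8/x8 PLX)"))
    # {'cpu': 'i7-3770K', 'mct': True, 'core_parking': False,
    #  'pcie_version': '3', 'gpu_lanes': 'x8/x8/x8/x8', 'plx': True}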

Common Configuration Points

All the system setups below have the following consistent configuration points:

- A fresh install of Windows 7 Ultimate 64-bit
- Either an Intel stock CPU cooler, a Corsair H80i CLC, or a Thermalright TRUE Copper
- OCZ 1250W Gold ZX Series PSUs (Rosewill 1600W Hercules for The Beast)
- Up to 4x ASUS AMD HD 7970 GPUs, using Catalyst 13.1
- Up to 2x ECS NVIDIA GTX 580 GPUs, using GeForce WHQL 310.90
- SSD Boot Drives, either OCZ Vertex 3 128GB or Kingston HyperX 120GB
- LG GH22NS50 Optical Drives
- Open Test Beds, either a DimasTech V2.5 EasyHard or a CoolerMaster Test Lab

AMD Configurations

A6-3650 + Gigabyte A75-UD4H + 16GB DDR3-1866 8-10-10
A8-3850 + ASRock A75 Extreme6 + 16GB DDR3-1866 8-10-10
A8-5600K + Gigabyte F2A85-UP4 + 16GB DDR3-2133 9-10-10
A10-5800K + Gigabyte F2A85-UP4 + 16GB DDR3-2133 9-10-10
X2-555 BE + ASUS Crosshair V Formula + 16GB DDR3-1600 8-8-8
X4-960T + ASUS Crosshair V Formula + 16GB DDR3-1600 8-8-8
X6-1100T + ASUS Crosshair V Formula + 16GB DDR3-1600 8-8-8
FX-8150 + ASUS Crosshair V Formula + 16GB DDR3-2133 10-12-11
FX-8350 + ASUS Crosshair V Formula + 16GB DDR3-2133 9-11-10
FX-8150 + ASUS Crosshair V Formula + 16GB DDR3-2133 10-12-11 + CP
FX-8350 + ASUS Crosshair V Formula + 16GB DDR3-2133 9-11-10 + CP

Intel Configurations

E6400 + MSI i975X Platinum + 4GB DDR2-666 5-6-6
E6700 + ASUS P965 Commando + 4GB DDR2-666 4-5-5
Celeron G465 + ASUS Maximus V Formula + 16GB DDR3-2133 9-11-11
i5-2500K + ASUS Maximus V Formula + 16GB DDR3-2133 9-11-11
i7-2600K + ASUS Maximus V Formula + 16GB DDR3-2133 9-11-11
i3-3225 + ASUS Maximus V Formula + 16GB DDR3-2400 10-12-12
i7-3770K + Gigabyte Z77X-UP7 + 16GB DDR3-2133 9-11-11
i7-3770K + ASUS Maximus V Formula + 16GB DDR3-2400 9-11-11
i7-3770K + Gigabyte G1.Sniper M3 + 16GB DDR3-2400 9-11-11
i7-3930K + ASUS Rampage IV Extreme + 16GB DDR3-2133 10-12-12
i7-3960X + ASRock X79 Professional + 16GB DDR3-2133 10-12-12
Xeon X5690 + EVGA SR-2 + 6GB DDR3-1333 6-7-7
2x Xeon X5690 + EVGA SR-2 + 9GB DDR3-1333 6-7-7

The Beast

The Beast is a special machine put together to help with this review, the result of various hardware coming into my possession at the same time. The core of the system is an EVGA SR-2 motherboard, the best and last dual processor motherboard for overclockable Xeon processors. This is paired with two Xeon X5690s, the highest-clocked Westmere Xeons Intel offers, and many thanks go to Gigabyte for loaning these to us for a pair of reviews. I purchased a pair of Intel Xeon socket 1567 coolers for the system, which have a 2U z-height restriction but are copper piped and cooled by powerful (and loud) Delta fans. These provided enough cooling to push the Xeons from 3.46GHz to 4.6GHz during some overclocking attempts, so they are more than adequate for the job at hand (if you can put up with the noise).

Gallery: The Beast

Our system is paired with some high quality DDR3 Hyper memory, once famed for its overclocking prowess but now out of favor with overclockers due to frequent deaths from high voltage. At stock, however, this memory performs great, often in the region of DDR3-2000 C7, so our memory kits are well primed for this setup.

Of course a full system is nothing without a case and power supply to help justify a build. With the motherboard being absolutely huge, no standard case would take it – only large cases designed for desktop-based 2P server motherboards are adequate. Luckily there is one case which is selling well and which Dustin reviewed recently – the Rosewill BlackHawk Ultra. Aside from the weight, this case had no issues with the motherboard installation; it could easily fit another 10 HDDs, four optical drives, and any major GPU setup you could possibly think of – with plenty of fans for good measure. Read Dustin’s review for a more thorough analysis, but I have some good shots of the system and motherboard installed for you:


Rosewill also has the perfect power supply for dealing with a dual processor, quad CrossFireX setup. First, consider how many connections this 2P setup needs – a normal 24-pin ATX connector for the motherboard, an 8-pin CPU power connector for each CPU, an additional 6-pin PCIe power connector for each CPU to provide extra power, another 6-pin PCIe power connector to feed the PCIe slots, and then two 6+2 PCIe power connectors for each of the four GPUs. That makes 11 PCIe connectors needed in total, alongside all the fans in the case and whatever SSD/ODD setup a user wants. The power supply used for this monster is the 1600W Hercules, rated 80 PLUS Silver. With access to 16 PCIe connectors, the only way you might need more is with a compute rig of seven single slot cards each needing two connectors. With the CPUs and GPUs both overclocked, our system was drawing almost 1500W at the wall (from a 240V source) under a high CPU+GPU load.
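
As a quick sanity check on that count, the tally works out as follows (a trivial sketch; the constants simply restate the numbers above):

    cpus, gpus = 2, 4                  # dual X5690s, quad CrossFireX

    pcie_cpu_aux  = 1 * cpus           # one 6-pin per CPU for extra power
    pcie_slot_aux = 1                  # one 6-pin feeding the PCIe slots
    pcie_gpu      = 2 * gpus           # two 6+2-pin per HD 7970

    print(pcie_cpu_aux + pcie_slot_aux + pcie_gpu)   # 11 of the 16 available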

Using a 2P system as a desktop comes with its own set of issues – some CPU benchmarks are not optimized for 2P, and in this case we had trouble getting some games to work at all. It seems that the more money you throw at a gaming system, the more problems arise, but The Beast provides a nice comparison point when we look at high-end Ivy Bridge, Sandy Bridge-E and Piledriver processors in multi-GPU setups.

Our first port of call with all our testing is CPU throughput analysis, using our regular motherboard review benchmarks.

Comments

  • Patrese - Wednesday, May 8, 2013

    Awesome article, thanks! Is it possible to include some sort of gaming physics testing? Now that PhysX is beginning to catch some momentum, it'd be great to see if an 8-module AMD processor handles physics stuff better than a comparable 4-core Intel one, and at what point a dedicated physics card starts to make sense, if at all.

    It’d also be nice if a “mainstream gaming” article could be made too. Benchmarks at 1080p with cards like the 660Ti and 7850, for instance. No need for 3 way SLI/CF on those, so you'll not need as much time in Narnia. :)
  • araczynski - Wednesday, May 8, 2013

    interesting read, although i find it too focused to be of much general use (or useful future reference). i'd like to have seen for example how an E8500 holds up (too big of a gap between the E6500 and i5-2500), as well as at least ONE game i would even bother playing (skyrim/witcher/etc). and of course like you mentioned, even a slightly bigger sampling of graphics cards. (i think you mentioned that).

    anywho, i realize this wasn't meant to be anything exhaustive (i do appreciate having the CPU/GPU benches available here as a good reference though), and i do like the detail/explanation length you went into.

    so thanks :)
  • xinthius - Wednesday, May 8, 2013

    But AMD offers good price to performance at lower tiers; they should be recommended.
  • yougotkicked - Wednesday, May 8, 2013

    Regarding your comments on the role of artificial intelligence in game performance/programming: I've just finished a course in AI, and while implementations may vary quite a bit from game to game, many AI programs can be reduced to highly-parallel brute-force computation, simply evaluating the resulting states of many potential decisions for a numerical representation of their desirability, then selecting the best option from the set of evaluated actions. Obviously this is something that will vary greatly from game to game, but in games with many independent AI managed elements, I would expect a certain amount of the processing to be offloaded to the GPU.

    Other than that I agree with you on the demands of AI in games; my professor (who specializes in game AI and has experience in the industry) said that the AI is usually given about 10% of the CPU time in a game, so it's rarely a limiting factor.

    I'm still working through the whole article (really enjoying it so far) so I'm sure I'll have many more comments/questions later.
  • IanCutress - Wednesday, May 8, 2013

    Based on previous CUDA experience, CUDA doesn't like a lot of IF statements in its routines. So if you're offloading different AI parts onto the GPU, unless all the elements are being put through the same set of if commands (and states), it won't work too well, with some warps taking a lot longer than others if there is large branch divergence. It's a task suited to MIMD environments, like a CPU. Then again, it really depends on the game. Clever AI is already here, because we confine it to a self-created system. One could argue that the bots in CounterStrike are not particularly smart, but the system can put their accuracy up to 100% to make it harder. It's a lot of give and take, perhaps. It is times like these I wish I did CompSci rather than Chemistry :) I need to take one of those MIT online AI courses. You know, in between testing!

    Ian
  • yougotkicked - Wednesday, May 8, 2013

    I suppose conditionals would make offloading some AI components to the GPU impractical, but there still remains a subset of AI computations which seem very GPU friendly to me. State evaluation functions seem like a prime example: the CPU would be responsible for deciding which options to evaluate, building an array of states to be evaluated by the GPU. These situations probably don't come up very often in FPSs, but in something like Civilization I can see it being quite common.

    I've actually got to head over to that class now, I'll ask the professor if he knows of any AI's using GPU computing in modern games.
  • airmantharp - Wednesday, May 8, 2013

    Like Ian said, GPUs aren't good 'branch' processors, but I do see where you're coming from. Things like real physics, audio environment maps, and pre-render lighting maps could be fed to AI routines running on the CPU. This would allow for a much greater 'simulation awareness' for AI actions.
  • yougotkicked - Wednesday, May 8, 2013

    I spoke with my professor and he said that as far as he knows, many people have discussed the prospect of using GPUs for AI, but nobody has actually done so yet. He's going to ask some friends of his at some major game studios to see if they are working on it.

    He did agree with me that there are some aspects that could be computed on a GPU, but a lot of the existing AI methods are inherently sequential, so offloading it to the GPU will require new algorithms in many cases.
  • TheQweaker - Thursday, May 9, 2013

    You may wish to check nVidia's GTC conference web site where you can find some GPU AI Research. Also, nVidia published various PDF slides on GPU Path Planning.

    If you look deeper into some specific AI Domains such as, say, AI Planning (first used in F.E.A.R. in 2005, lately used in Killzone 3 and Transformers: Fall of Cybertron) you can find papers investigating the use of GPUs.

    One of the bottom lines of current GPU AI research is that GPUs crunch large amounts of data very fast, so, currently, there is not much hope in using the many GPU threads for the tiny amounts of data involved in state space search.

    Hoping this helps.

    -- The Qweaker.
  • yougotkicked - Thursday, May 9, 2013

    Thanks for pointing me towards those papers, they look pretty interesting and I've been looking for a topic to write my final paper on ;)
