CPUs, GPUs, Motherboards, and Memory

For an article like this, getting a range of CPUs that includes the most common and popular models is very important. I have been at AnandTech for just over two years now, and in that time we have had Sandy Bridge, Llano, Bulldozer, Sandy Bridge-E, Ivy Bridge, Trinity and Vishera; for each generation I tend to be supplied with the top-end processors for testing. (As a motherboard reviewer, it is important to make the motherboard the limiting factor.) A lot of users have jumped to one of these platforms, although a large number are still on Wolfdale (Core2), Nehalem, Westmere, Phenom II (Thuban/Zosma/Deneb) or Athlon II.

I have attempted to pool all my AnandTech resources, contacts, and personal resources together to get a good spread of the current ecosystem, with more focus on the modern end of the spectrum. It is worth noting that a multi-GPU user is more likely to have a top-line Ivy Bridge, Vishera or Sandy Bridge-E CPU, as well as a top-range motherboard, rather than an old Wolfdale. Nevertheless, we will see how they perform. There are a few obvious CPU omissions that I could not obtain for this first review, which will hopefully be remedied in our next update.

The CPUs

My criteria for obtaining CPUs were to use at least one from each of the most recent architectures, as well as a range of cores/modules/threads/speeds. The basic list as it stands is:

AMD

Name                  Platform / Architecture   Socket   Cores / Modules (Threads)   Speed (MHz)   Turbo (MHz)   L2 / L3 Cache
A6-3650               Llano                     FM1      4 (4)                       2600          N/A           4 MB / None
A8-3850               Llano                     FM1      4 (4)                       2900          N/A           4 MB / None
A8-5600K              Trinity                   FM2      2 (4)                       3600          3900          4 MB / None
A10-5800K             Trinity                   FM2      2 (4)                       3800          4200          4 MB / None
Phenom II X2-555 BE   Callisto (K10)            AM3      2 (2)                       3200          N/A           1 MB / 6 MB
Phenom II X4-960T     Zosma (K10)               AM3      4 (4)                       3200          N/A           2 MB / 6 MB
Phenom II X6-1100T    Thuban (K10)              AM3      6 (6)                       3300          3700          3 MB / 6 MB
FX-8150               Bulldozer                 AM3+     4 (8)                       3600          4200          8 MB / 8 MB
FX-8350               Piledriver                AM3+     4 (8)                       4000          4200          8 MB / 8 MB

Intel

Name            Architecture     Socket   Cores (Threads)   Speed (MHz)   Turbo (MHz)   L2 / L3 Cache
E6400           Conroe           775      2 (2)             2133          N/A           2 MB / None
E6700           Conroe           775      2 (2)             2667          N/A           4 MB / None
Celeron G465    Sandy Bridge     1155     1 (2)             1900          N/A           0.25 MB / 1.5 MB
Core i5-2500K   Sandy Bridge     1155     4 (4)             3300          3700          1 MB / 6 MB
Core i7-2600K   Sandy Bridge     1155     4 (8)             3400          3800          1 MB / 8 MB
Core i3-3225    Ivy Bridge       1155     2 (4)             3300          N/A           0.5 MB / 3 MB
Core i7-3770K   Ivy Bridge       1155     4 (8)             3500          3900          1 MB / 8 MB
Core i7-3930K   Sandy Bridge-E   2011     6 (12)            3200          3800          1.5 MB / 12 MB
Core i7-3960X   Sandy Bridge-E   2011     6 (12)            3300          3900          1.5 MB / 15 MB
Xeon X5690      Westmere         1366     6 (12)            3467          3733          1.5 MB / 12 MB

A small selection

The omissions are clear to see, such as the i5-3570K, a dual-core Llano/Trinity, a dual- or tri-module Bulldozer/Piledriver, the i7-920, the i7-3820, or anything Nehalem. These will hopefully appear in another review.

The GPUs

My first and foremost thanks go to both ASUS and ECS for supplying me with these GPUs for my test beds. They have been in and out of 60+ motherboards without any issue, and will hopefully continue to do so. My usual scenario for updating GPUs is to flip between AMD and NVIDIA every couple of generations – last time it was HD 5850 to HD 7970, and as such in the future we will move to a 7-series NVIDIA card or a set of Titans (which might outlive a generation or two).

ASUS HD 7970 (HD7970-3GD5)

The ASUS HD 7970 was the reference model at the 7970 launch, using the GCN architecture with 2048 SPs at 925 MHz and 3 GB of 4.6 GHz GDDR5 memory. We have four cards, to be used in 1x, 2x, 3x and 4x configurations where possible, using PCIe 3.0 when enabled by default.

ECS GTX 580 (NGTX580-1536PI-F)

ECS is both a motherboard manufacturer and an NVIDIA card manufacturer, and while most of their VGA models are sold outside of the US, some do make it onto etailers like Newegg. This GTX 580 is also a reference model, with 512 CUDA cores at 772 MHz and 1.5GB of 4GHz GDDR5 memory. We have two cards to be used in 1x and 2x configurations at PCIe 2.0.

The Motherboards

The CPU is not always the main part of the picture for this sort of review – the motherboard is equally important, as it dictates how the CPU and the GPU communicate with each other and what the lane allocation will be. As mentioned on the previous page, there are 20+ PCIe configurations for Z77 alone when you consider that some boards are native, some use a PLX 8747 chip, others use two PLX 8747 chips, and about half of the Z77 motherboards on the market enable four PCIe 2.0 lanes from the chipset for CrossFireX use (at high latency).

We have tried to be fair and take motherboards that may have a small premium but are equipped to deal with the job. As a result, some motherboards may also use MultiCore Turbo, which as we have detailed in the past, gives the top turbo speed of the CPU regardless of the loading.

As a result of this lane allocation business, each result in our review will be attributed to a CPU, whether it uses MCT, and a lane allocation. This means that a label such as i7-3770K+ (3 - x16/x8/x8) represents an i7-3770K with MCT in a PCIe 3.0 tri-GPU configuration. More on this below.
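As an illustration of the convention, the labels above can be broken down mechanically. This is a hypothetical helper written for this explanation, not part of our test harness:

```python
import re

# Labels follow the pattern "CPU[+] (gen - lanes)", e.g. "i7-3770K+ (3 - x16/x8/x8)":
# the CPU name, an optional "+" for MultiCore Turbo (MCT), the PCIe generation,
# and the lane allocation across the populated GPU slots.
LABEL_RE = re.compile(
    r"^(?P<cpu>.+?)(?P<mct>\+)?\s+\((?P<gen>[123])\s*-\s*(?P<lanes>x\d+(?:/x\d+)*)\)$"
)

def parse_label(label):
    """Split a result label into its CPU, MCT flag, PCIe generation, and lane widths."""
    m = LABEL_RE.match(label)
    if not m:
        raise ValueError("unrecognised label: %s" % label)
    return {
        "cpu": m.group("cpu"),
        "mct": m.group("mct") == "+",
        "pcie_gen": int(m.group("gen")),
        "lanes": [int(x[1:]) for x in m.group("lanes").split("/")],
    }

print(parse_label("i7-3770K+ (3 - x16/x8/x8)"))
# {'cpu': 'i7-3770K', 'mct': True, 'pcie_gen': 3, 'lanes': [16, 8, 8]}
```

The number of entries in the lane list doubles as the GPU count, so "i7-3770K+ (3 - x16/x8/x8)" is a tri-GPU PCIe 3.0 setup with MCT enabled.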

For Sandy Bridge and Ivy Bridge: ASUS Maximus V Formula, Gigabyte Z77X-UP7 and Gigabyte G1.Sniper M3.

The ASUS Maximus V Formula has a three-way lane allocation of x8/x4/x4 for Ivy Bridge, x8/x8 for Sandy Bridge, and enables MCT.

The Gigabyte Z77X-UP7 offers lane allocations of x16/x16, x16/x8/x8 and x8/x8/x8/x8 for up to four-way configurations, all via a PLX 8747 chip. It also has a single x16 slot that bypasses the PLX chip and is thus native; all configurations enable MCT.

The Gigabyte G1.Sniper M3 is a little different, offering x16, x8/x8, or if you accidentally put the cards in the wrong slots, x16 + x4 from the chipset. This additional configuration is seen on a number of cheaper Z77 ATX motherboards, as well as a few mATX models. The G1.Sniper M3 also implements MCT as standard.

For Sandy Bridge-E: ASRock X79 Professional and ASUS Rampage IV Extreme

The ASRock X79 Professional is a PCIe 2.0 enabled board offering x16/x16, x16/x16/x8 and x16/x8/x8/x8.

The ASUS Rampage IV Extreme is a PCIe 3.0 enabled board offering the same PCIe layout as the ASRock, except it enables MCT by default.

For Westmere Xeons: The EVGA SR-2

Due to the timing of the first roundup, I was able to use an EVGA SR-2 with a pair of Xeons on loan from Gigabyte for our server testing. The SR-2 forms the basis of our beast machine below, and uses two Westmere-EP Xeons to give PCIe 2.0 x16/x16/x16/x16 via NF200 chips.

For Core 2 Duo: The MSI i975X Platinum PowerUp and ASUS Commando (P965)

The MSI is the motherboard I used for our quick Core 2 Duo comparison pipeline post a few months ago – I still have it sitting on my desk, and it seemed apt to include it in this test. The MSI i975X Platinum PowerUp offers two PCIe 1.1 slots, capable of Crossfire up to x8/x8. I also rummaged through my pile of old motherboards and found the ASUS Commando with a CPU installed, and as it offered x16+x4, this was tested also.

For Llano: The Gigabyte A75-UD4H and ASRock A75 Extreme6

Llano throws a little oddball into the mix, being a true quad core unlike Trinity. The A75-UD4H from Gigabyte was the first one to hand, and offers two PCIe slots at x8/x8. As with the Core 2 Duo setup, SLI is not available.

After finding an A8-3850 CPU as another comparison point for the A6-3650, I pulled out the A75 Extreme6, which offers three-way CFX as x8/x8 + x4 from the chipset as well as the configurations offered by the A75-UD4H.

For Trinity: The Gigabyte F2A85X-UP4

Technically, A85X motherboards for Trinity support up to x8/x8 in CrossFire, but the F2A85X-UP4, like other high-end A85X motherboards, implements four lanes from the chipset for three-way AMD linking. Our initial showing on three-way via that chipset link was not that great, and this review will help quantify that.

For AM3: The ASUS Crosshair V Formula

As the 990FX covers a lot of processor families, the safest place to sit would be on one of the top motherboards available. Technically the Formula-Z is newer and supports Vishera more easily, but we have not had the Formula-Z in to test, and the basic Formula was still able to run an FX-8350 as long as we kept the VRMs cool as a cucumber. The CVF offers up to three-way CFX and SLI testing (x16/x8/x8).

The Memory

Our good friends at G.Skill are putting their best foot forward in supplying us with high-end kits to test. The memory question depends more on what the motherboard will support – in order to keep testing consistent, no overclocks were performed. This meant that boards and BIOSes limited to a certain DRAM multiplier were set at the maximum multiplier possible. To keep things fair overall, the modules were then adjusted for tighter timings. All of this is noted in our final setup lists.
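As a rough sketch of why the multiplier cap matters (the numbers here are illustrative, assuming a 100 MHz base clock as on Sandy Bridge and Ivy Bridge, and are not taken from our setup lists):

```python
# Illustrative arithmetic only: the effective DDR3 transfer rate is the base
# clock times the memory multiplier, doubled for the two transfers per clock.
def ddr3_speed(base_clock_mhz, mem_multiplier):
    """Effective DDR3 rate in MT/s from base clock and DRAM multiplier."""
    return base_clock_mhz * mem_multiplier * 2

# A board whose BIOS tops out at a 9.33x memory multiplier caps a
# DDR3-2400 kit at roughly DDR3-1866, hence tightening timings instead:
print(ddr3_speed(100, 9.33))   # ~1866 MT/s
print(ddr3_speed(100, 12.0))   # 2400 MT/s, if the board allowed it
```

This is why a DRAM-multiplier-limited board cannot simply be dialed up to the kit's rated speed, and why we compensate with tighter timings instead.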

Our main memory testing kit is our trusty G.Skill 4x4GB DDR3-2400 RipjawsX kit which has been part of our motherboard testing for over twelve months. For times when we had two systems being tested side by side, a G.Skill 4x4GB DDR3-2400 Trident X kit was also used.

For The Beast, which is one of the systems that has the issue with higher memory dividers, we pulled in a pair of tri-channel kits from X58 testing. These are high-end kits as well, currently discontinued as they tended to stop working with too much voltage. We have sets of 3x2GB OCZ Blade DDR3-2133 8-9-8 and 3x1GB Dominator GT DDR3-2000 7-8-7 for this purpose, which we ran at 1333 6-7-6 due to motherboard limitations at stock settings.

To end, our Core 2 Duo CPUs get their own DDR2 memory for completeness. This is a 2x2GB kit of OCZ DDR2-1033 5-6-6.

242 Comments

  • JarredWalton - Wednesday, May 8, 2013

    "While I haven't programmed AI..." Doesn't that make most of your other assumptions and guesses related to this area invalid?

    As for the rest, the point of the article isn't to compare HD 7970 with GTX 580, or to look at pure CPU scaling; rather, it's to look at CPU and GPU scaling in games at settings people are likely to use with a variety of CPUs, which necessitates using multiple motherboards. Given that in general people aren't going to buy two or three GPUs to run at lower resolutions and detail settings, the choice to run 1440p makes perfect sense: it's not so far out of reach that people don't use it, and it will allow the dual, triple, and quad GPU setups room to stretch (when they can).

    The first section shows CPU performance comparison, just as a background to the gaming comparisons. We can see how huge the gap is in CPU performance between a variety of processors, but how does that translate to gaming, and in particular, how does it translate to gaming with higher performance GPUs? People don't buy a Radeon HD 5450 for serious gaming, and those who do likely aren't playing demanding games.

    For the rest: there is no subset of games that properly encompass "what people actually play". But if we're looking at what people play, it's going to include a lot of Flash games and Facebook games that work fine on Intel HD 4000. I guess we should just stop there? In other words, we know the limitations of the testing, and there will always be limitations. We can list many more flaws or questions that you haven't, but if you're interested in playing games on a modern PC, and you want to know a good choice for your CPU and GPU(s), the article provides a good set of data to help you determine if you might want to upgrade or not. If you're happy playing at 1366x768 and Medium detail, no, this won't help much. If you want minimum detail and maximum frame rate at 1080p, it's also generally useless. I'd argue however that the people looking for either of those are far less in number, or at least if they do exist they're not looking to research gaming performance until it affects them.
  • wcg66 - Wednesday, May 8, 2013

    Ian, thanks for this. I'd really like to see how these tests change at even higher resolutions – 3-monitor setups of 5760x1080, for example. There are folks claiming that the additional PCIe lanes in the i7 E-series make for significantly better performance. Your results don't bear this out. If anything the 3930K is behind or sometimes barely ahead (if you consider error margins, arguably it's on par with the regular i7 chips.) I own an i7 2700K and 3930K.
  • Moon Patrol - Wednesday, May 8, 2013

    Awesome review! Very impressed with the effort and time put into this! Thanks a lot!
    It'd be cool if you could maybe fit an i7 860 in somewhere over there. Socket 1156 is feeling left out :P I have an i7 860...
  • Quizzical - Wednesday, May 8, 2013

    Great data for people who want to overload their video card and figure out which CPU will help them do it. But it's basically worthless for gamers who want to make games run smoothly and look nice and want to know what CPU will help them do it.

    Would you do video card benchmarks by running undemanding games at minimum settings and using an old single-core Celeron processor? That's basically the video card equivalent of treating this as a CPU benchmark. The article goes far out of its way to make things GPU-bound so that you can't see differences between CPUs, both by the games chosen and the settings within those games.

    But hey, if you want to compare a Radeon HD 7970 to a GeForce GTX 580, this is the definitive article for it and there will never be a better data set for that.
  • JarredWalton - Wednesday, May 8, 2013

    Troll much? The article clearly didn't go too far out of the way to make things GPU bound, as evidenced by the fact that two of the games aren't GPU bound even with a single 7970. How many people out there buy a 7970 to play at anything less than 1080p -- or even at 1080p? I'd guess most 7970 owners are running at least 1440p or multi-monitor...or perhaps just doing Bitcoin, but that's not really part of the discussion here, unless the discussion is GPU hashing prowess.
  • Quizzical - Wednesday, May 8, 2013

    If they're not GPU bound with a single 7970, then why does adding a second 7970 (or a second GTX 580) greatly increase performance in all four games? That can't happen if you're looking mostly at a CPU bottleneck, as it means that the CPU is doing a lot more work than before in order to deliver those extra frames. Indeed, sometimes it wouldn't happen even if you were purely GPU bound, as CrossFire and SLI don't always work properly.

    If you're trying to compare various options for a given component, you try to do tests where the different benchmark results will mostly reflect differences in the particular component that you're trying to test. If you're trying to compare video cards, you want differences in scores to mostly reflect video card performance rather than being bottlenecked by something else. If you're trying to compare solid state drives, you want differences in scores to mostly reflect differences in solid state drive performance rather than being bottlenecked by something else. And if you're trying to compare processors, you want differences in scores to mostly reflect differences in CPU performance, not to get results that mostly say, hey, we managed to make everything mostly limited by the GPU.

    When you're trying to do benchmarks to compare video cards, you (or whoever does video card reviews on this site) understand this principle perfectly well. A while back, there was a review on this site in which the author (which might be you; I don't care to look it up) specifically said that he wanted to use Skyrim, but it was clearly CPU-bound for a bunch of video cards, so it wasn't included in the review.

    If you're not trying to make the games largely GPU bound, then why do you go to max settings? Why don't you turn off the settings that you know put a huge load on the GPU and don't meaningfully affect the CPU load? If you're doing benchmarking, the only reason to turn on settings that you know put a huge load on the GPU and no meaningful load on anything else is precisely that you want to be GPU bound. That makes sense for a video card review. Not so much if you're trying to compare processors.
  • JarredWalton - Wednesday, May 8, 2013

    You go to max settings because that's what most people with a 7970 (or two or three or four) are going to use. This isn't a purely CPU benchmark article, and it's not a purely GPU benchmark article; it's both, and hence, the benchmarks and settings are going to have to compromise somewhat.

    Ian could do a suite of testing at 640x480 (or maybe just 1366x768) in order to move the bottleneck more to the CPU, but no one in their right mind plays at that resolution with a high-end GPU. On a laptop, sure, but on a desktop with an HD 7970 or a GTX 580? Not a chance! And when you drop settings down to minimum (or even medium), it does change the CPU dynamic a lot -- less textures, less geometry, less everything. I've encountered games where even when I'm clearly CPU limited, Ultra quality is half the performance of Medium quality.
  • IndianaKrom - Friday, May 10, 2013

    Basically for the most part the single GPU game tests tell us absolutely nothing about the CPU because save for a couple especially old or low end CPUs, none of them even come close to hindering the already completely saturated GPU. The 2-4 GPU configurations are much more interesting because they show actual differences between different CPU and motherboard configurations. I do think it would be interesting to also show a low resolution test which would help reveal the impact of crossfire / SLI overhead versus a single more powerful GPU and could more directly expose the CPU limit.
  • Zink - Wednesday, May 8, 2013

    You should use a DSLR and edit the pictures better. The cover image is noisy and lacks contrast.
  • makerofthegames - Wednesday, May 8, 2013

    Very interesting article. And a lot of unwarranted criticism in the comments.

    I'm kind of disappointed that the dual Xeons failed so many benchmarks. I was looking to see how I should upgrade my venerable 2x5150 machine – whether to go with fast dual-cores, or with similar-speed quad-cores. But all the benchmarks for the Xeons were either "the same as every other CPU" or "no results".

    Oh well, I have more important things to upgrade on it anyways. And I realize that "people using Xeon 5150s for gaming" is a segment about as big as "Atom gamers".
