GPU-Limited Gaming Oddities

Scott Wasson first picked up on this anomaly in his GPU-limited FarCry 2 results at the bottom of this page. Jon Stokes then called attention to it, and our own Gary Key duplicated and expanded upon the results.

The situation is this: in some cases, Nehalem can go from being much faster than Phenom II to being measurably slower within the same benchmark, depending on resolution. Gary was the first to tie the issue to the GPU used: NVIDIA GPUs appeared to behave this way on Nehalem/Phenom II while AMD GPUs didn't. In other words, NVIDIA GPUs were running relatively faster on AMD CPUs, while AMD GPUs were running relatively faster on Intel CPUs. It's all very strange.

It's no secret that Ryan and I are working on the reviews for AMD's next-generation DX11 GPUs, due out before the end of September. I cloned my GPU testbed SSD and moved it over to my CPU testbeds, then ran a subset of our GPU tests on the Core i7 920, Core i7 870, Core i5 750, Phenom II X4 965 BE and Core 2 Quad Q9450 with two different GPUs: a GeForce GTX 275 and a Radeon HD 4890.
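To make the scope of the experiment concrete, here is the full cross-test matrix as a minimal Python sketch. The `run_benchmark` function is a hypothetical placeholder; in reality each data point was a manual pass through the game's built-in benchmark on the cloned testbed image.

```python
from itertools import product

cpus = ["Core i7 920", "Core i7 870", "Core i5 750",
        "Phenom II X4 965 BE", "Core 2 Quad Q9450"]
gpus = ["GeForce GTX 275", "Radeon HD 4890"]
games = ["FarCry 2", "Crysis Warhead", "Dawn of War II",
         "Left 4 Dead", "HAWX", "Battleforge"]
resolutions = ["1680 x 1050", "2560 x 1600"]

# Hypothetical placeholder: each real run was a manual pass through
# the game's built-in benchmark on the cloned SSD image.
def run_benchmark(cpu, gpu, game, res):
    pass

for cpu, gpu, game, res in product(cpus, gpus, games, resolutions):
    run_benchmark(cpu, gpu, game, res)
```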

Let's go through the results game by game, shall we?

I'll start with Gary's FarCry 2 benchmark. We're running in DX10 mode with the optimal quality defaults (latest patch) and 2X AA. This is much more GPU-bound than our normal CPU gaming tests, but that's exactly what we're looking for here. The benchmark of choice is "Ranch Small", which comes with the game:

So I've duplicated Gary's results. The Nehalem cores all perform about the same; the i7 920 is a bit slower, it seems, thanks to its weaker turbo mode. But look at the Phenom II X4: it is significantly faster regardless of resolution. Now look at the same test with a Radeon HD 4890:

The Phenom II X4 965 BE advantage disappears completely. That's odd.

Next, I ran the FarCry 2 benchmark we're using for our upcoming GPU reviews. It's the Playback action demo with Ultra Quality defaults and 4X AA enabled. First on NVIDIA hardware:

The Core i7 920 falls a bit behind the other Nehalems, and while the Phenom II X4 965 BE pulls ahead slightly at 2560 x 1600, performance is generally GPU bound across the board. An unexpected result is that the Core 2 Quad Q9450 is actually CPU bound at 1680 x 1050. There may just be a gaming reason to upgrade your CPU after all. Now let's switch to AMD hardware:

Now this is strange. The Core 2 Quad doesn't fall behind in performance; in fact, it ties the Core i7 870 at 1680 x 1050. In other words, it doesn't appear to be CPU bound anymore at 1680 x 1050. Confused?
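If the bound/unbound language is getting confusing, a toy model may help: per-frame time is roughly the larger of the CPU's work per frame and the GPU's work per frame, so raising the resolution inflates the GPU term until it hides any CPU difference. The numbers below are invented purely for illustration:

```python
# Toy model of CPU- vs GPU-bound frame rates. All numbers are
# invented for illustration; they are not benchmark results.
def fps(cpu_ms, gpu_ms):
    # A frame can't complete faster than its slowest stage.
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = {"fast CPU": 8.0, "slow CPU": 12.0}
gpu_ms = {"1680 x 1050": 10.0, "2560 x 1600": 20.0}

for res, g in gpu_ms.items():
    for cpu, c in cpu_ms.items():
        print(f"{res}, {cpu}: {fps(c, g):.0f} fps")
# At 1680 x 1050 the slow CPU is the bottleneck (83 vs 100 fps).
# At 2560 x 1600 the GPU dominates and both CPUs converge on 50 fps.
```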

Let's keep going.

The next game I tested was Crysis Warhead. Again I ran all of the numbers in DX10 mode, this time with the "Gamer" quality preset but with "Enthusiast" quality shaders. I used the "frost" benchmark included with the initial version of the game.

All of the lines are overlapping as they should be; we're in a GPU-limited situation after all. The 870 pulls ahead slightly at the end, but it's nothing to get terribly excited about.

Switch to the Radeon HD 4890 and we now have an outlier. The Core i7 920 is measurably slower than everything else at 1680 x 1050. The only change we made was the graphics card/drivers. Next.

Dawn of War II is an RTS/RPG that includes a wonderful built-in benchmark. I ran with all settings maxed out in the game (including turning AA "on"):

At 1680 x 1050 we actually see some real separation between the CPUs here. The Lynnfields are fastest, most likely due to their more aggressive turbo modes. The Core i7 920 is next on the charts, followed by the Phenom II X4 965 BE. At the bottom we have the Core 2 Quad Q9450. But at 2560 x 1600 they all converge at roughly the same point. Since many users have monitors capable of resolutions lower than 1920 x 1200, it's quite possible that the differences between these CPUs would be noticeable.

Things don't change too much as we switch graphics cards. The Phenom II X4 does a bit better with the Radeon HD 4890, but that's about the only change.

Left 4 Dead is next. All settings are maxed, including anisotropic filtering at 16X. V-Sync is disabled and AA is set to 4X MSAA.

These numbers mostly make sense. The i7 870 is the fastest, followed by the i5 750 and the i7 920 - you have turbo to thank for that. The Phenom II is a bit slower and the Core 2 Quad is a lot slower. But by the time you hit 2560 x 1600, all roads lead to around 76 fps.
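The turbo effect here is easy to quantify. As a rough sketch, using Intel's published base and maximum (lightly threaded) turbo clocks for these parts as I understand them, so treat the exact figures as approximate:

```python
# Approximate base and max turbo clocks in GHz, per Intel's published
# specs for these parts -- illustrative, not a measured result.
clocks = {
    "Core i7 870": (2.93, 3.60),
    "Core i5 750": (2.66, 3.20),
    "Core i7 920": (2.66, 2.93),
}

for cpu, (base, turbo) in clocks.items():
    print(f"{cpu}: {base:.2f} -> {turbo:.2f} GHz (+{turbo / base - 1:.0%})")
# 870: +23%, 750: +20%, 920: +10% -- the same ordering the chart shows.
```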

Similar behavior with ATI hardware, whew.

HAWX is a combat flight simulator that doubles as a great DX10 benchmark. I ran the DX10 version of the game with all settings at their highest values, with the exception of Ambient Occlusion, which was set to "low".

This is another one of those games where the Phenom II pulls ahead of the Nehalem processors, even at a supposedly GPU-bound 2560 x 1600. The advantage isn't huge, about 7%, but the Core 2 Quad gives us some indication of what's going on. The Q9450 actually beats everything here; perhaps it's a large-L2 thing? Now look at what happens with a Radeon HD 4890:

The Core 2 Quad still does better than everything else, but pretty much everything converges at the same point, and the Phenom II advantage seems to disappear. So far we have HAWX and FarCry 2 exhibiting this behavior. Mental note taken; next benchmark, please.

Our final test is Battleforge, a free-to-play, card-based online RTS. I ran with all settings maxed out:

Here we see the opposite happening - the Phenom II X4 965 BE is far slower than anything else at 1680 x 1050. As expected, all CPUs tend to converge at the same point if you crank the resolution up high enough.

Switch graphics cards and the AMD disadvantage actually disappears. It's the opposite of what we've been seeing in games like FarCry 2 and HAWX, where switching to an AMD GPU causes the AMD advantage to disappear.

What can we conclude from all of this data? Not much, unfortunately. There are a couple of certainties:

1) Even at relatively stressful GPU settings, 1680 x 1050 with 4X AA enabled, some games are still CPU bound. The next generation of DX11 GPUs will only make this more true.

2) Gaming performance isn't totally clear cut across all of these CPUs. There are situations where Nehalem is faster, where Penryn is faster, or where Phenom II is faster. The trend appears to be that Nehalem is generally the fastest, followed by Phenom II; only rarely does the Core 2 Quad end up on top.

How do I explain the odd behavior we've seen in some of these games? Honestly, I'm not sure there's any one explanation. What appears to happen is a perfect storm of CPU power, GPU power, GPU drivers, cache sizes, clock speeds and instruction mix. In some cases it looks to be cache related, as the Core 2 and Phenom II both do very well and have a noticeably larger L2 than Nehalem, but in other cases it's much more difficult to explain by any one variable. The fact that the situation changes almost entirely when switching to ATI hardware is what makes me believe the GPU driver is playing some role in all of this.
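Since the data is shared, here's the kind of quick sanity check anyone could run against it. This is a sketch assuming the results live in a flat list of (game, resolution, gpu, cpu, fps) records; the sample values are invented stand-ins that mimic the FarCry 2 flip described above, not real measurements:

```python
from collections import defaultdict

# Invented stand-in records mimicking the FarCry 2 ranking flip;
# substitute the real shared benchmark data here.
results = [
    ("FarCry 2", "1680 x 1050", "GTX 275", "Phenom II X4 965 BE", 95.0),
    ("FarCry 2", "1680 x 1050", "GTX 275", "Core i7 870", 88.0),
    ("FarCry 2", "1680 x 1050", "HD 4890", "Phenom II X4 965 BE", 84.0),
    ("FarCry 2", "1680 x 1050", "HD 4890", "Core i7 870", 90.0),
]

# Group (fps, cpu) pairs by (game, resolution) and then by GPU.
by_test = defaultdict(lambda: defaultdict(list))
for game, res, gpu, cpu, fps in results:
    by_test[(game, res)][gpu].append((fps, cpu))

# Flag any test where the CPU ranking differs between the two cards.
for (game, res), per_gpu in by_test.items():
    orders = {gpu: tuple(cpu for _, cpu in sorted(v, reverse=True))
              for gpu, v in per_gpu.items()}
    if len(set(orders.values())) > 1:
        print(f"Ranking flip in {game} @ {res}: {orders}")
```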

Ultimately it's not a big (or consistent) enough issue to get too worked up about, but it's definitely something real and not just a figment of testbed imagination. I've shared all of my data in hopes of figuring out exactly what's going on, but as I mentioned in my Lynnfield review, not all applications/games are going to play out the same way. I'll update you if I find anything out.
