Lynnfield vs. Bloomfield: Overclocked and Without Turbo

The second request was to see how Lynnfield and Bloomfield stack up with turbo disabled but while overclocked. At higher frequencies the demands on the memory subsystem go up, so it's a more than valid concern.

I took both systems and overclocked them to 3.8GHz, a level that wasn't too difficult to achieve (more on this later):

The average performance difference doesn't appear to change even as we scale up clock speed. Lynnfield actually does better here thanks to better-than-expected scores in the game tests (is the on-die PCIe controller to thank?), but it also falls behind in some other tests (e.g., x264). Overall, the performance difference seems to hold even when overclocked; the performance you give up by going to Lynnfield at stock speeds with turbo disabled doesn't get any worse at overclocked speeds.

It's also worth noting that there are applications that we haven't tested that could demand even more of the memory subsystem, but on average, for most users I'd say that the third memory channel isn't worth the price difference.

Hitting 3.8GHz: The Good, The Bad and The Ugly

I picked 3.8GHz for the comparison on the previous page, and I wanted to share what I had to do to reach that frequency.

Bloomfield was by far the easiest to get up to 3.8GHz. I just increased the BCLK and the system POSTed at 3.8GHz. After going through several benchmarks I found that I needed to add a tiny bit of voltage (~40mV) to make it completely stable, but I really didn't have to do anything above and beyond that.

Lynnfield was a bit more difficult. After increasing the BCLK there was a lot more guessing and testing of voltage levels before I could get the system completely stable. As I mentioned in our Lynnfield launch article, thanks to the on-die PCIe controller any serious overclock will require a bit of extra voltage. I ended up running the chip at around 1.265V for full stability.
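For reference, here's a minimal sketch of the BCLK arithmetic behind both overclocks. The article doesn't list the exact multipliers, so the stock 20x of a Bloomfield i7 920 and 22x of a Lynnfield i7 870 are assumed here:

```python
# BCLK needed for 3.8GHz at an assumed stock multiplier; both
# platforms default to a 133MHz base clock.
TARGET_MHZ = 3800
for name, multiplier in [("Bloomfield i7 920, 20x assumed", 20),
                         ("Lynnfield i7 870, 22x assumed", 22)]:
    print(f"{name}: ~{TARGET_MHZ / multiplier:.0f}MHz BCLK")
# -> ~190MHz BCLK for Bloomfield, ~173MHz for Lynnfield
```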

It was far easier to overclock Lynnfield if I just used voltages above 1.30V, but then I ran into another problem: heat. The chip wouldn't run at 3.8GHz reliably at such high voltages, although Gary's testing indicates that a bigger heatsink/fan could fix that. With some work you can definitely overclock Lynnfield using the retail heatsink/fan; it's just not nearly as easy as with Bloomfield.

And finally we get to the Phenom II X4 965 BE. With 32-bit Vista installed, the Phenom II system had no problem running at 3.8GHz; however, all of our application tests run under a 64-bit OS, and this is the Phenom II's Achilles' heel. Getting the system stable at 3.8GHz in a 64-bit OS was the most difficult of the three overclocks I performed for this article. The chip required an uncomfortable amount of voltage, and ultimately I couldn't get my sample 100% stable at 3.8GHz in 64-bit Vista (although 32-bit OSes weren't an issue).

If you're curious, the performance gap between AMD and Intel does widen considerably at these higher frequencies:

Processor                   Adobe Photoshop CS4   DivX           x264 - 1st Pass   x264 - 2nd Pass   WME
AMD Phenom II @ 3.8GHz      19.5 seconds          39.1 seconds   85.0 fps          22.2 fps          26 seconds
Intel Bloomfield @ 3.8GHz   13.3 seconds          28.8 seconds   100.0 fps         36.3 fps          21 seconds
Intel Lynnfield @ 3.8GHz    13.6 seconds          29.0 seconds   95.7 fps          33.9 fps          24 seconds
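To put numbers on that gap, here's a quick sketch converting the table above into relative speedups (the timed tests invert, since lower is better):

```python
# (Phenom II, Bloomfield, Lynnfield) results from the table above.
results = {
    "Photoshop CS4": (19.5, 13.3, 13.6, "s"),
    "DivX":          (39.1, 28.8, 29.0, "s"),
    "x264 1st pass": (85.0, 100.0, 95.7, "fps"),
    "x264 2nd pass": (22.2, 36.3, 33.9, "fps"),
    "WME":           (26.0, 21.0, 24.0, "s"),
}
for test, (amd, bloom, lynn, unit) in results.items():
    for name, intel in [("Bloomfield", bloom), ("Lynnfield", lynn)]:
        # Seconds: lower is better, so divide AMD's time by Intel's.
        speedup = amd / intel if unit == "s" else intel / amd
        print(f"{test:14s} {name}: {speedup:.2f}x vs. Phenom II")
```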

Power Consumption While Overclocked

Guru3D made an important observation in their Lynnfield review: power consumption goes up considerably when you overclock. It's not just the higher clock speed; it's the increase in core voltage that makes power consumption skyrocket. This is partly why I stress stock-voltage overclocking so much. Let me give you an example:

Processor           Stock Power Consumption   Power Consumption Overclocked to 3.8GHz @ 1.3V
Intel Core i7 875   181W                      215W
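For reference, the percentage math on that table, assuming a 2.93GHz stock clock (i.e. a Lynnfield i7 870; the chip's exact model isn't clear from the table):

```python
# Power and clock gains from the table above; 2.93GHz stock clock assumed.
power_gain = (215 - 181) / 181     # measured system power, stock vs. OC
clock_gain = (3.8 - 2.93) / 2.93   # stock clock vs. 3.8GHz overclock
print(f"Power: +{power_gain:.1%}, clock: +{clock_gain:.1%}")
# -> Power: +18.8%, clock: +29.7%
```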

 

That's a pretty hefty gain in power consumption, over 18%, but we also get a 29.7% increase in clock frequency. Remember my troubles getting the Phenom II X4 965 BE to work in 64-bit Windows? I ran some numbers to show exactly what a lot of extra voltage will do to power consumption:

Processor                 Stock Voltage @ 3.4GHz   Stock Voltage @ 3.8GHz   +0.2V Core, +0.1V NB @ 3.8GHz
AMD Phenom II X4 965 BE   223W                     239W                     300W

 

Increasing the clock speed by 400MHz only drives up power consumption by 7%; boosting the voltage on top of that results in an additional 25% jump in power. When overclocking, you always want to increase the clock speed as much as possible while adding as little voltage as possible to keep the system power efficient.
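Spelling that out, along with the textbook reason voltage dominates: dynamic CPU power scales roughly as C·V²·f, linear in frequency but quadratic in voltage. A minimal sketch follows; the Phenom II's stock Vcore isn't given in the article, so ~1.4V is assumed, and wall-socket numbers include the rest of the system, so the CPU-only prediction overshoots the measured gain:

```python
# System power from the table above (watts at the wall).
stock, oc, oc_volted = 223, 239, 300

print(f"3.4 -> 3.8GHz at stock voltage: +{(oc - stock) / stock:.1%}")
print(f"+0.2V core / +0.1V NB on top:   +{(oc_volted - oc) / oc:.1%}")
# -> +7.2% for the 400MHz bump, then +25.5% from the voltage increase.

# Dynamic CPU power scales roughly as C * V^2 * f. Assuming a ~1.4V
# stock Vcore (not stated in the article), +0.2V alone predicts about
# (1.6/1.4)^2 = 1.31x CPU power at the same clock; measured wall power
# rises less because the rest of the system doesn't scale with Vcore.
print(f"Predicted CPU-power factor from +0.2V: {(1.6 / 1.4) ** 2:.2f}x")
```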

Comments

  • SixOfSeven - Monday, September 21, 2009 - link

    Another good piece on the new CPUs, but without some new coolers or LGA1156 mounting kits it's a little hard to put into practice. Any news on the LGA1156 HSF front?
  • strikeback03 - Monday, September 21, 2009 - link

    In the power consumption area, you list a Core i7 875 and give its power consumption when overclocked. Is this supposed to be an i7 975 or an i7 870 (and curse you Intel for using such similar names)? Would it be possible to get a table showing both a 920 and an 8x0 at some overclock, at idle and under load?
  • 7Enigma - Monday, September 21, 2009 - link

    Anand,

    While many are odd, the graph that really makes my head spin is the second one, which shows the 4890 on the Ranch Small level. For some reason the older Q9450 is significantly faster than the rest of the competition, even outpacing the 870, which has a clock speed advantage! Looking at several other graphs with the 4890, it's again apparent that the Q9450 has some weird magic going on where performance doesn't match the expected results (Far Cry 2 Playback action bench, HAWX 4X AA Max Quality Low AO). I'm assuming the mobos are different, but could that honestly make the 5-6% difference between the 920 and the Q9450? (I'm assuming you are using the same graphics drivers, correct?)

    This could be a piece of the puzzle that would allow you guys to figure out a possible reason.

    And thank you for the follow-up article. While I (and many others) thought a lot of the recent comments were from trolls looking to damage this site's comments section, I was interested in seeing the OC'd power numbers and a direct turbo-disabled comparison at higher OCs (where you would likely have turbo disabled anyway if used for a gaming boot).
  • 7Enigma - Monday, September 21, 2009 - link

    Posted this message after reading the article but before reading the comments. I think GeorgeH might be spot on, as he described a reason behind what my original comment was referring to.

    GeorgeH, I second the beer!
  • GeorgeH - Saturday, September 19, 2009 - link

    The GPU oddities look almost completely like a hyperthreading issue to me. There are two types of anomalies in your graphs: performance inversions (a definitively "slower" CPU performing better than a "faster" one) and line crossing (a slower CPU becoming faster at a different resolution). Only two games really demonstrate these behaviors to any significant degree: Far Cry 2 and HAWX. Those two titles benefit the most (to my knowledge) from going from 2 cores to 4, implying they have the most threads running at any one time and therefore would be most susceptible to HT issues.

    Looking at just those two titles, the i5 performs consistently better than its i7 brethren. As the only real difference between the two is hyperthreading, I conclude HT is the culprit (as it sometimes was in the "days of yore" on the P4). Turbo mode as well as differences in base clock speed and memory architectures mask the problem, but I'd bet money that if you turn off HT your graphs will be much more normal.
  • coconutboy - Sunday, September 20, 2009 - link

    If true and HT is the culprit, GeorgeH deserves a beer for figuring out the anomaly. Nice spot there, dude.
  • gwolfman - Monday, September 21, 2009 - link

    What about Battleforge then, with the inverse effect?
  • GeorgeH - Monday, September 21, 2009 - link

    Dunno.

    My guess would be that it's using some Havok technologies that are more completely or efficiently GPU accelerated on AMD/ATI GPUs than on NVIDIA ones. Far Cry 2, Dawn of War II, and HAWX all demonstrate much more significant CPU dependence on NVIDIA hardware than on ATI hardware, and AFAIK all three use Havok technologies to some extent. I don't know if Battleforge uses any Havok technologies, but its behavior does roughly match the other titles that do.
  • silverblue - Sunday, September 20, 2009 - link

    I'm not sure if they looked at this back when the i7 itself launched, but it's definitely worthy of a second look. Simply disabling HT and then retesting may yield all the answers we're looking for, even if there are only a couple of games where this issue exists for the i7s.
  • swaaye - Sunday, September 20, 2009 - link

    Hmmm, I think that this guy's comments should be considered.
