Gaming with Core 2 and CrossFire on 975X

We were so used to getting excited over AMD processor launches that we almost forgot what an important Intel CPU launch was like. You see, AMD and Intel behave very differently when at a dinner table preparing to eat their meals. AMD will eat when its partners eat; companies like ATI and NVIDIA get to share in the joy of a new AMD product launch as they are busy building chipsets for the new platform. That's why we get a new nForce chipset whenever AMD launches a new CPU. Intel on the other hand isn't as generous; Intel likes to eat first, and then whatever remains after it's nice and full can be scraped off the table and given to its partners. This is why today's launch is taking place pretty much exclusively on Intel chipsets, with retail products based on ATI/NVIDIA chipsets shipping in the coming months.

Intel's table manners aren't as nice as AMD's largely because they don't have to be. Intel has far more fabs than AMD, but not all of them are pumping out 65nm Core 2 Duos on 300mm wafers; many are still running older 90nm or 130nm process technology. It's not economically feasible to convert every fab to the latest process as soon as it's available, so Intel uses up excess capacity in its older fabs by producing chipsets. AMD does not have this luxury, so it depends on companies like ATI, NVIDIA, SiS and VIA for the platform side of things, and is thus much nicer at the dinner table.

Eating habits aside, what this means for us is that our only real options to test Core 2 Duo are with Intel chipsets. NVIDIA's nForce 590 SLI reference board for Core 2 Duo is in our labs, but its BIOS isn't finalized yet, so NVIDIA has asked us to hold off on using it for a couple more weeks. At the same time, we're hearing that we shouldn't expect any retail motherboards using ATI chipsets for Core 2 Duo until September at the earliest, once again leaving us with Intel.

Don't get us wrong; Intel chipsets are far from a terrible option. In fact, Intel continues to make extremely trouble-free platforms. It's not stability or performance that we're concerned about, as Intel has both of those down pat. The issue, however, is multi-GPU compatibility.

You see, NVIDIA is a lot like Intel in that it wants to eat first or, if the right people are at the table, at the same time as its partners. The problem with two companies that have identical eating habits is that no one ends up eating, and thus we have no SLI support on Intel chipsets. NVIDIA views this as an upper hand, and honestly it's the only tangible advantage anyone has held over an Intel chipset since the days when Intel and Rambus were inseparable. If you want the best multi-GPU solution on the market you buy NVIDIA graphics cards, but they won't run (together) on Intel chipsets, so you've got to buy the NVIDIA chipset as well - it sounds like NVIDIA is trying to eat some of Intel's dinner, and that doesn't make Intel very happy.

Luckily for Intel, there's this little agreement it has with NVIDIA's chief competitor, ATI. Among other things, it ensures that Intel platforms (or platform in this case, since it only officially works on the 975X) can support CrossFire, ATI's multi-GPU technology. Unfortunately, CrossFire isn't nearly as polished as NVIDIA's SLI. Case in point: the benchmarking for this Core 2 Duo article, which used a pair of X1900 XTs running in CrossFire mode. During our testing, CrossFire decided to disable itself after a simple reboot - twice. No warnings, no hardware changes, just lower frame rates after a reboot and a CrossFire enable checkbox that had become unchecked. Needless to say it was annoying, but by now we already know that CrossFire needs work, and ATI is on it.

More than anything, this is simply a message to ATI and Intel: had CrossFire been in better shape, high-end gaming enthusiasts could have been satisfied today; instead, they will have to wait a little longer for the first nForce 500 motherboards with Core 2 support to arrive (or settle for an nForce 4 board with Core 2 support).

Why does multi-GPU even matter? Given how fast Intel's Core 2 processors are, we needed to pair them with a GPU setup that was well matched - in this case we went with a pair of X1900 XTs running in CrossFire mode. With a pair of X1900 XTs we could run at 1600 x 1200 for all of our gaming tests, achieving a good balance between CPU and GPU loads and adequately characterizing the gaming performance of Intel's Core 2 line.

Comments

  • crystal clear - Saturday, July 15, 2006 - link

    Just a reminder to all that Intel has decided to release a B2 stepping of its
    Conroe processors. Also, BEWARE of engineering samples, and of reviews based
    on engineering samples.
  • OcHungry - Saturday, July 15, 2006 - link

    I don't know if this question has been asked or answered (sorry, no time to read 120 posts),
    but Mr. Anand, may I kindly raise a couple of concerns:
    1) Why don't we see any reference to temperatures under load? Temperature is a crucial factor in deciding whether or not I buy Conroe, since I live in a very hot climate.
    2) A lot of people make reference to 64-bit and Windows Vista, claiming that Conroe is a 32-bit architecture and will not perform as well in 64-bit. Is that true? Can we have some 64-bit benchmarks? It would be great to do this test in multitasking.
    3) I have also noticed (from the many who have had pre-release engineering samples at XS and other forums) that an overclocked Conroe's clock speed does not correspond directly to performance, unlike the A64.
    What I mean is: if an A64 is overclocked 20%, performance increases by roughly 20% (more or less, in most cases), but I have not seen this hold with Conroe. So I am wondering what would happen if we put a low-end Conroe, such as the E6400, against an A64 4000+ X2 with 2x1MB cache (same price range after the price drop), overclocked them to their limits using stock cooling, and ran the benchmarks (64-bit included). The reason I am interested in this type of review is that I am an average end user on a budget and would like to know which would give me better price/performance. I think I am speaking for at least 90% of consumers. Not everyone buys a $1000 CPU, and budget consumers are vital to the survival of Conroe and AM2 CPUs alike. This alone should give you enough incentive to put together a review oriented around us, the mainstream computer users. We can make or break any chipmaker.
    So please, Mr. Anand, can we have another review along the lines described above?
    We would greatly appreciate it.
    Thanks,
    ochungry
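
    [Editor's aside: to put a number on the clock-scaling question in point 3 above, the usual back-of-the-envelope metric is scaling efficiency - the fraction of a clock-speed increase that actually shows up in benchmark scores. A minimal sketch in Python, with purely hypothetical scores chosen for illustration:]

        def scaling_efficiency(stock_mhz, oc_mhz, stock_score, oc_score):
            """Fraction of a clock-speed gain that appears as a performance gain.

            1.0 means perfectly linear scaling; memory and FSB bottlenecks
            pull the value below 1.0."""
            clock_gain = oc_mhz / stock_mhz - 1.0
            perf_gain = oc_score / stock_score - 1.0
            return perf_gain / clock_gain

        # Hypothetical example: a 20% overclock yielding an 18% higher score.
        print(scaling_efficiency(2000.0, 2400.0, 100.0, 118.0))  # ~0.9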
  • aznskickass - Saturday, July 15, 2006 - link

    Hey ochungry, I believe Xbitlabs did an overclocking comparison between a Conroe E6300 and an A64 X2 3800+, and while they didn't do any 64-bit benchmarks, the conclusion was that at stock speeds the E6300 is slightly faster than the X2 3800+, and when both are overclocked to the max, the E6300's lead grows even larger.

    Here is the review/comparison:
    http://xbitlabs.com/articles/cpu/display/core2duo-...
  • OcHungry - Saturday, July 15, 2006 - link

    Those guys at X-bit lab do not fool me, and I hope Anandtech conducts the same test with the best-suited memory modules for BOTH platforms. We know (as Xbitlab knows) that, because of its integrated memory controller, the A64 performs best at the tightest memory timings. X-bit lab should have used this memory module (http://www.newegg.com/Product/Product.asp?Item=N82...) if the test was going to be fair and square. Furthermore, an A64 3800 X2 at 3GHz is 10x300, which means DDR2 667 at 1:1 could have been the better choice. This DDR2 667 at 3-3-3-10 (http://www.newegg.com/Product/Product.asp?Item=N82...) would have given about 10% better performance than the DDR2 at 4-4-4-12 that they used. X-bit does not mention anything about the memory/CPU ratio. What divider was used? Was it 133/266? Or as close to 1:1 as possible? Sooner or later the truth will prevail when we end users try it for ourselves (and BTW, not with engineering samples), and we will see whether Xbitlab and the others were genuinely acting on behalf of consumers, or whether their interest served some other purpose. I will not accuse anyone, but it all looks very fishy.
    I am certain that Mr. Anand will clear up all these suspicious reviews and hand us another that concerns the consumer majority - us average users.
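
    [Editor's aside: for readers puzzling over the divider question above - on socket AM2 the memory clock is derived from the CPU core clock through an integer divider, which is why "1:1" settings and overclocked reference clocks interact in non-obvious ways. A rough sketch in Python of the commonly reported divider rule (the divider is picked as if the reference clock were the stock 200MHz); this is an approximation for illustration, not an official AMD formula:]

        import math

        def am2_dram_clock(ref_mhz, multiplier, ddr2_nominal_mhz):
            """Approximate DRAM clock on socket AM2 (half the DDR2 data rate).

            The on-die controller divides the CPU core clock by an integer,
            chosen here as if the reference clock were the stock 200 MHz, so
            raising the reference clock drags the memory clock up with it."""
            cpu_mhz = ref_mhz * multiplier
            divider = math.ceil(multiplier * 200 / ddr2_nominal_mhz)
            return cpu_mhz / divider

        # Stock 10 x 200 MHz with a nominal DDR2-800 (400 MHz) setting:
        print(am2_dram_clock(200, 10, 400))  # 400.0 -> runs at the full rated speed
        # The 10 x 300 MHz overclock from the comment, same DDR2-800 setting:
        print(am2_dram_clock(300, 10, 400))  # 600.0 -> far beyond DDR2-800 spec,
        # hence the need to drop to a lower nominal setting (a coarser divider)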
  • IntelUser2000 - Saturday, July 15, 2006 - link

    quote:

    We know (as Xbitlab knows) that, because of its integrated memory controller, the A64 performs best at the tightest memory timings. X-bit lab should have used this memory module if the test was going to be fair and square. Furthermore, an A64 3800 X2 at 3GHz is 10x300, which means DDR2 667 at 1:1 could have been the better choice. This DDR2 667 at 3-3-3-10 would have given about 10% better performance than the DDR2 at 4-4-4-12 that they used.


    W-R-O-N-G!!! DDR2-800 at EVEN slower 5-5-5-15 timings is FASTER THAN DDR2-667 at 3-3-3-10: http://xbitlabs.com/articles/cpu/display/amd-socke...

    Prefer AT's results?: http://www.anandtech.com/cpuchipsets/showdoc.aspx?...

    DDR2-800 is faster. The notion that the A64 performs amazingly better (comparatively) with lower latency is just that - a myth. I believe there was a thread in the forums that shows exactly this.
  • OcHungry - Sunday, July 16, 2006 - link

    I guess you don't understand much about AMD's memory latency, direct connect architecture, and the 1:1 ratio.
    It is not like Intel's setup, where the illusion is that memory faster than the FSB is better. It is not, and such speed is useless; Anand proved it here. But this subject tends to get dragged on forever by those who don't understand the concept of the integrated memory controller, so it's better to leave it alone.
    But I still would like to see the tightest timings at a 1:1 ratio. It is now clear to me that the reviews favoring Intel artfully evade this argument/request, knowing it would give AMD an advantage over Intel's FSB.
  • aznskickass - Sunday, July 16, 2006 - link

    *sigh*

    How is AMD disadvantaged if BOTH platforms are reviewed using the same RAM?

    AMD needs ultra-low-latency DDR2 to attain its best performance? Well, bad luck - Intel doesn't. There is no 'deliberate' conspiracy to put AMD in a bad light.

    Look, if you just want to hang on to the notion that AMD has been cheated in the reviews, then go ahead, get your X2 4200+, and see how close you can get to Conroe's numbers.

    I'll be using my E6600 @ 3.5GHz+ and laughing at your stupidity.
  • aznskickass - Saturday, July 15, 2006 - link

    I consider Xbitlabs to be one of the more trustworthy sites around. Do note that they are testing mainstream chips, and expensive CL3 DDR2 just doesn't make sense in a budget setup - which puts Conroe in an even better light, as it doesn't require expensive CL3 DDR2 to perform well.
  • sum1 - Friday, July 14, 2006 - link

    A good read. For the editor, I found 4 errors; search for the full lines of text below at:
    http://www.anandtech.com/printarticle.html?i=2795

    That begin said,
    and H.264 in coding
    as soon as its available
    at the fastest desktop processor we've ever tested
    ("and" would be more readable that "at" in this sentence)
  • JarredWalton - Friday, July 14, 2006 - link

    Thanks. At least one of those (in coding) can be blamed on my use of Dragon NaturallySpeaking and not catching the error. The others... well, they can be blamed on me not catching them too. LOL. The last one is sort of a difference of opinion, and I've replaced the comma with a dash, as that was the intended reading. :)

    --Jarred
