AMD vs. Intel - Clock for Clock

Now it's time to tackle the touchy subject: how do AMD and Intel stack up to one another? First off, let's look at identical clock speeds to compare architectures.

Clock for Clock Comparison - DM-ShangriLa

Clock for Clock Comparison - DM-HeatRay

Clock for Clock Comparison - vCTF-Suspense 

At 3.0GHz, admittedly at a CPU-bound resolution, Intel holds a 26 - 31% performance advantage over AMD. Intel's Core 2 processors have historically outperformed AMD's K8 clock for clock, so this isn't much of a surprise, but it's an important baseline nonetheless.
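For reference, here is a minimal sketch of how a clock-for-clock advantage figure like this is derived; the frame rates below are hypothetical placeholders, not our measured results:

```python
def percent_advantage(fps_a: float, fps_b: float) -> float:
    """Return how much faster fps_a is than fps_b, as a percentage."""
    return (fps_a / fps_b - 1.0) * 100.0

# Both CPUs locked at the same clock (3.0GHz here), same GPU and settings.
intel_fps = 126.0  # hypothetical Core 2 result, not a measured number
amd_fps = 100.0    # hypothetical Athlon 64 X2 result, not a measured number

print(f"Intel advantage: {percent_advantage(intel_fps, amd_fps):.0f}%")  # -> 26%
```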

We then cranked the resolution up to 1920 x 1200 and raised the world detail slider to 5 to give us a more realistic scenario for this clock speed comparison. The results were a bit surprising:

GPU Bound CPU Comparison - DM-ShangriLa

GPU Bound CPU Comparison - DM-HeatRay

GPU Bound CPU Comparison - vCTF-Suspense  

Despite this being a mostly GPU-bound scenario, Intel still managed a 9% performance advantage over AMD at 3.0GHz. We suspect something fishy is going on: the test is quite GPU-bound, yet moving from Intel to AMD still yields a noticeable performance drop.

We looked at a 3.0GHz Athlon 64 X2 and compared it to its closest Intel price competitor, the Core 2 Duo E6550 (2.33GHz), at our high res settings:

The Intel performance advantage drops to 7% on average, but that's still much larger than it should be given that we're dealing with a GPU-bound scenario. Note that the difference between 2.33GHz and 3.0GHz on Intel is next to nothing, confirming that we are GPU-limited, so we're dealing with an Unreal Engine 3 issue related to either the AMD CPUs or the nForce 590 SLI chipset/drivers we used. We've let Epic know, but for now it looks like UT3 definitely prefers Intel's Core 2, even when GPU-bound.
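To make the reasoning explicit: if frame rate barely moves when only the CPU clock changes, the GPU is the bottleneck, so a same-clock gap between CPU vendors has to come from somewhere else. Here is a minimal sketch of that sanity check, again with hypothetical frame rates rather than our measured results:

```python
def scaling_pct(fps_slow: float, fps_fast: float) -> float:
    """Percent gain when moving to the faster configuration."""
    return (fps_fast / fps_slow - 1.0) * 100.0

# Same GPU and settings throughout; all frame rates are hypothetical.
clock_scaling = scaling_pct(84.0, 85.0)  # E6550 (2.33GHz) -> Core 2 at 3.0GHz
platform_gap = scaling_pct(78.0, 85.0)   # Athlon 64 X2 at 3.0GHz -> Core 2 at 3.0GHz

print(f"Intel clock scaling: {clock_scaling:.1f}%")  # ~1%: the GPU is the bottleneck
print(f"AMD -> Intel gap: {platform_gap:.1f}%")      # ~9%: too big for a GPU-bound test
```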

Comments

  • Ryan Smith - Wednesday, October 17, 2007 - link

    See the post right above yours. ;-)
  • CRimer76 - Wednesday, October 17, 2007 - link

    Isn't this game supposed to use the Ageia crap to improve performance? Would love to see some benchies on that.
  • Ryan Smith - Wednesday, October 17, 2007 - link

    Yes, it can use the PhysX hardware to improve performance. However, flybys are completely useless for testing PhysX because there is no physics work going on. Since I cover the PPU side of things, we'll have an article up once we have the ability to play with demos.
  • MadBoris - Wednesday, October 17, 2007 - link

    Great work showing all the different gradients on vid card, cache, cpu, threading.
    The game really scales impressively on hardware.
    How did Sweeney manage to make the 2900xt shine? hehe

    I've got some small reservations about using the flybys; hopefully you guys will have a demo run for next time around. Theoretically, a demo loop should be pretty similar, mainly more CPU intensive, but I'm curious to see.

    Good work showing off the game's scaling.
  • Sunrise089 - Wednesday, October 17, 2007 - link

    Not only does the X1950XT absolutely destroy the 7900GTX (the ATI card REALLY ended up being the better long-term buy), but the HD 2900XT looks absolutely great here. If this sort of performance is indicative of the future (and it may not be - perhaps there is something that will favor nVidia when the release copy of the game arrives with better graphics) then ATI looks much better than it did even last month.

    PLEASE do a follow-up with this game at launch/when the new midrange GPUs launch. It's going to be very interesting to see the price/performance between the new 8800GT, an overclocked HD 2900pro, and the new ATI midrange card (HD 2950pro?)
  • tmx220 - Wednesday, October 17, 2007 - link

    They used an X1950XTX, though it would be about the same.
    They should have used the XT because it can still be found, and for a decent price.
  • Darth Farter - Wednesday, October 17, 2007 - link

    Anand,
    the 1024x768 CPU comparisons in the last 2 game reviews are nice to have, but I think we're seeing too many pages of a resolution that probably only 2% of the guys planning on running these games will actually use.

    If the intention is to show CPU limitations, I'd rather suggest at least showing the scaling at the common 1280x1024, 1680x1050 (1600x1200), and 1920x1200 resolutions, to reflect what we're actually going to see on screen as users when swapping/upgrading a CPU/platform to another model/platform while planning on playing with an obviously high end GFX card at 1920x1200.

    An overall CPU comparison at only 1024x768 left me severely disappointed, though I can understand the time constraints. This regrettably reminds me of Tom's with their beside-the-point tactics in articles.

    Just my .02 for making Anand's a bit better.

    Ty
  • IKeelU - Wednesday, October 17, 2007 - link

    I'd really like to see 1280x720. I usually play on my 720p TV and I imagine that resolution would be CPU-bound as well, though probably less so than 1024x768.
  • Roy2001 - Wednesday, October 17, 2007 - link

    I'm more interested in seeing Phenom data.
  • hubajube - Wednesday, October 17, 2007 - link

    I'm officially done with Anandtech's conclusions on benchmarks. Why are you making these blanket "AMD is not competitive" statements? A 9% difference in fps performance and now AMD is shit?! Not to mention that whopping 9% difference only works out to roughly 5 whole frames per second! ROFLMAO!!!!!

    Brad: Hey Johnny! Did you see the new UT3 review from Anandtech? They showed the Intel CPUs KILLING AMD by five frames per second!

    Johnny: Holy shit Brad!!!!! AMD is going out of business with piss poor performance like that. I'll NEVER buy AMD again.
