UT3 Teaches Us About CPU Architecture

For our first real look at Epic's Unreal Engine 3 on the PC, we've got a number of questions to answer. First and foremost, we want to know what sort of CPU requirements Epic's most impressive engine to date commands.

Obviously the GPU side will be more important, but it's rare that we get a brand new engine to really evaluate CPU architecture with, so we took this opportunity to do just that. While we've had other UE3-based games in the past (e.g. Rainbow Six: Vegas, BioShock), this is the first Epic-created title at our disposal.

The limited benchmarking support of the UT3 Demo beta unfortunately doesn't lend itself to being the best CPU test. The built-in flybys don't have much in the way of real-world physics: the CPU spends its extra time calculating spinning weapon pickups and the position of the camera as it flies around the map, but there are no explosions or damage to take into account. The final game may have a different impact on CPU usage, but we'd expect things to get more CPU-intensive, not less, in real-world scenarios. We'll do the best we can with what we have, so let's get to it.
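
For those curious how flyby runs are kicked off, Unreal engine games have traditionally exposed them through the command line. The line below is a hypothetical example in the style of UT2004-era benchmarking - the executable name and every switch (causeevent, quickstart, -seconds, -novsync, -unattended) are assumptions on our part and may not match what the UT3 demo beta actually accepts:

```
UT3Demo.exe DM-ShangriLa?causeevent=FlyThrough?quickstart=1 -seconds=77 -novsync -unattended
```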

Cache Scaling: 1MB, 2MB, 4MB

One thing we noticed about the latest version of Valve's Source engine is that it is very sensitive to cache sizes and memory speed in general, which is important to realize given that there are large differences in cache size between Intel's three processor tiers (E6000, E4000 and E2000).

The Pentium Dual-Core chips are quite attractive these days, especially thanks to how overclockable they are. If you look back at our Midrange CPU Roundup you'll see that we heartily recommend them, especially when mild overclocking gives you the performance of a $160 chip out of a $70 one. The problem is that if newer titles are more dependent on larger caches, then these smaller-L2 CPUs become less attractive; you can always overclock them, but you can't add more cache.

To see how dependent Unreal Engine 3 and the UT3 demo are on low-latency memory accesses, we ran Core 2 processors with 4MB, 2MB, and 1MB of L2 cache at 1.8GHz and compared performance scaling.

[Charts: L2 Cache Comparison - DM-ShangriLa / DM-HeatRay / vCTF-Suspense]

From 1MB to 2MB there's a pretty hefty 12 - 13% increase in performance at 1.8GHz, but the difference from 2MB to 4MB is slightly more muted at 4 - 8.5%. An overall 20% increase in performance simply due to L2 cache size on Intel CPUs at 1.8GHz is impressive. We note the clock speed simply because the gap will only widen at higher clock speeds; faster CPUs are more data hungry and thus need larger caches to keep their execution units adequately fed.
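
To illustrate why L2 size has this kind of leverage, here's a minimal cache-sensitivity sketch of our own - it is not Epic's code and has nothing UT3-specific in it. It chases a randomly shuffled pointer chain through working sets of increasing size; the nanoseconds-per-access figure jumps once the working set spills out of L2. A game's hot data behaves the same way, just less cleanly.

```c
/* Minimal cache-sensitivity sketch - our own illustration, nothing
 * UT3-specific. Latency per access jumps once the working set no
 * longer fits in L2, the effect behind the 1MB/2MB/4MB numbers above. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double ns_per_access(size_t *chain, size_t steps)
{
    struct timespec t0, t1;
    size_t i = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t s = 0; s < steps; s++)
        i = chain[i];                     /* serialized, latency-bound loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    if (i == (size_t)-1)                  /* keep 'i' live vs. the optimizer */
        puts("");
    return ((t1.tv_sec - t0.tv_sec) * 1e9 +
            (t1.tv_nsec - t0.tv_nsec)) / (double)steps;
}

int main(void)
{
    for (size_t kb = 256; kb <= 8192; kb *= 2) {   /* 256KB .. 8MB */
        size_t n = kb * 1024 / sizeof(size_t);
        size_t *chain = malloc(n * sizeof(size_t));
        if (!chain)
            return 1;

        /* Build a single random cycle (Sattolo's algorithm) so the
         * hardware prefetchers can't hide the miss latency. */
        for (size_t i = 0; i < n; i++)
            chain[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t tmp = chain[i]; chain[i] = chain[j]; chain[j] = tmp;
        }

        printf("%5zu KB: %6.1f ns/access\n",
               kb, ns_per_access(chain, (size_t)1 << 24));
        free(chain);
    }
    return 0;
}
```

Compiled with something like `cc -O2 chase.c -o chase` on a Linux box (add -lrt on older glibc), the output shows a clear latency step between the sizes that fit in a 4MB L2 and those that don't.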

In order to close that performance deficit, you'd have to run a Pentium Dual-Core at almost a 20% higher frequency than a Core 2 Duo E4000, and around a 35% higher frequency than a Core 2 Duo E6000 series processor. Note that the required clock increase is larger than the raw performance gap: once a core is waiting on main memory, performance scales sublinearly with clock speed, so each extra percent of frequency buys back less than a percent of performance.
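
As a sanity check on those numbers, here's the back-of-the-envelope arithmetic as a tiny C program. The scaling exponent k is an assumption we picked for illustration (k = 1 would be perfect clock scaling; memory-bound code sits well below that), not something we measured:

```c
/* Back-of-the-envelope sketch - our own arithmetic, not benchmark output.
 * Assume performance scales with clock as perf ~ f^k, where k < 1 because
 * memory latency doesn't improve with core clock. Then the clock ratio r
 * needed to close a performance gap g solves r^k = g, i.e. r = g^(1/k).
 * k = 0.65 is an assumed illustrative exponent, not a measured value. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double k = 0.65;             /* assumed clock-scaling exponent */
    const double gap_vs_e4000 = 1.125; /* ~12.5% deficit vs. 2MB L2 (above) */
    const double gap_vs_e6000 = 1.20;  /* ~20% deficit vs. 4MB L2 (above)  */

    printf("vs. E4000: %+.0f%% clock\n", (pow(gap_vs_e4000, 1.0 / k) - 1) * 100);
    printf("vs. E6000: %+.0f%% clock\n", (pow(gap_vs_e6000, 1.0 / k) - 1) * 100);
    return 0;
}
```

Built with `cc -O2 scale.c -lm`, this prints roughly +20% and +32%, in line with the figures above; pick a different exponent and the required clocks shift accordingly.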

Comments

  • Ryan Smith - Wednesday, October 17, 2007 - link

    See the post right above yours. ;-)
  • CRimer76 - Wednesday, October 17, 2007 - link

    Isn't this game supposed to use the Ageia crap to improve performance? Would love to see some benchies on that.
  • Ryan Smith - Wednesday, October 17, 2007 - link

    Yes, it can use the PhysX hardware to improve performance. However, flybys are completely useless for testing PhysX, because there is no physics work going on. Since I cover the PPU side of things, once we have the ability to play with demos we'll have an article up.
  • MadBoris - Wednesday, October 17, 2007 - link

    Great work showing all the different gradients on vid card, cache, CPU, threading.
    The game really scales impressively on hardware.
    How did Sweeney manage to make the 2900xt shine? hehe

    I've got some small reservations about using the flybys; hopefully you guys will have a demo run for next time around. Theoretically, a demo loop should be pretty similar, mainly more CPU intensive, but I'm curious to see.

    Good work showing off the game's scaling.
  • Sunrise089 - Wednesday, October 17, 2007 - link

    Not only does the X1950XT absolutely destroy the 7900GTX (the ATI card REALLY ended up being the better long-term buy), but the HD 2900XT looks absolutely great here. If this sort of performance is indicative of the future (and it may not be - perhaps there is something that will favor nVidia when the release copy of the game arrives with better graphics) then ATI looks much better than it did even last month.

    PLEASE do a follow up with this game at launch/when the new midrange GPUs launch. It's going to be very interesting to see the price/performance between the new 8800GT, an overclocked HD 2900pro, and the new ATI midrange card (HD 2950pro?)
  • tmx220 - Wednesday, October 17, 2007 - link

    they used an X1950XTX, though it would be about the same
    they should have used the XT because it can still be found, and for a decent price
  • Darth Farter - Wednesday, October 17, 2007 - link

    Anand,
    the last 2 game reviews are nice to have 1024x768 CPU comparisons, but I think we're seeing too many pages of 1024x768, which probably only 2% of the guys planning on running these games will actually use.

    I'd suggest at least showing the CPU scaling at the common resolutions too - 1280x1024, 1680x1050 (1600x1200), and 1920x1200 - to reflect what we're actually going to see on screen as users when swapping/upgrading a CPU/platform while planning on playing with an obviously high-end GFX card at 1920x1200.

    An overall CPU comparison at only 1024x768 left me severely disappointed, though I can understand the time constraints. This regrettably reminds me of Tom's with their beside-the-point tactics in articles.

    Just my .02 for making Anand's a bit better.

    Ty
  • IKeelU - Wednesday, October 17, 2007 - link

    I'd really like to see 1280x720. I usually play on my 720p TV and I imagine that resolution would be CPU-bound as well, though probably less so than 1024x768.
  • Roy2001 - Wednesday, October 17, 2007 - link

    I am more interested in seeing Phenom data.
  • hubajube - Wednesday, October 17, 2007 - link

    I'm officially done with Anandtech's conclusions on benchmarks. Why are you making these blanket "AMD is not competitive" statements? A 9% difference in fps performance and now AMD is shit?! Not to mention that whopping 9% difference only works out to roughly 5 whole frames per second! ROFLMAO!!!!!

    Brad: Hey Johnny! Did you see the new UT3 review from Anandtech? They showed the Intel CPUs KILLING AMD by five frames per second!

    Johnny: Holy shit Brad!!!!! AMD is going out of business with piss poor performance like that. I'll NEVER buy AMD again.
