UT3 Teaches Us About CPU Architecture

For our first real look at Epic's Unreal Engine 3 on the PC, we've got a number of questions to answer. First and foremost, we want to know what sort of CPU requirements Epic's most impressive engine to date commands.

Obviously the GPU side will be more important, but it's rare that we get a brand new engine with which to evaluate CPU architecture, so we took this opportunity to do just that. While we've seen other UE3-based games in the past (e.g. Rainbow Six: Vegas, BioShock), this is the first Epic-created title at our disposal.

The limited benchmarking support of the UT3 Demo beta unfortunately doesn't make it the best CPU test. The built-in flybys don't involve much real-world physics: the CPU spends its extra time calculating spinning weapons and the position of the camera as it flies around, but there are no explosions or damage to take into account. The final game may stress the CPU differently, but we'd expect real-world scenarios to be more CPU-intensive, not less. We'll do the best we can with what we have, so let's get to it.

Cache Scaling: 1MB, 2MB, 4MB

One thing we noticed about the latest version of Valve's Source engine is that it is very sensitive to cache sizes and memory speed in general, which is important to realize given that there are large differences in cache size between Intel's three processor tiers (E6000, E4000 and E2000).

The Pentium Dual-Core chips are quite attractive these days, especially thanks to how overclockable they are. If you look back at our Midrange CPU Roundup you'll see that we fondly recommend them, especially when mild overclocking gives you the performance of a $160 chip out of a $70 one. The problem is that if newer titles depend more heavily on larger caches, these smaller-L2 CPUs become less attractive; you can always overclock them, but you can't add more cache.

To see how dependent Unreal Engine 3 and the UT3 demo are on low-latency memory accesses, we ran 4MB, 2MB, and 1MB L2 Core 2 processors at 1.8GHz and compared performance scaling.

L2 Cache Comparison - DM-ShangriLa

L2 Cache Comparison - DM-HeatRay

L2 Cache Comparison - vCTF-Suspense 

From 1MB to 2MB there's a pretty hefty 12-13% increase in performance at 1.8GHz, but the step from 2MB to 4MB is more muted at 4-8.5%. An overall 20% increase in performance simply due to L2 cache size on Intel CPUs at 1.8GHz is impressive. We note the clock speed because the gap will only widen at higher clock speeds; faster CPUs are more data hungry and thus need larger caches to keep their execution units adequately fed.
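As a quick sanity check, the two per-step gains compound to roughly the overall figure above. A minimal Python sketch, using the midpoints of the measured ranges as illustrative values:

```python
# Compound per-step fractional gains into an overall gain.
# The percentages below are midpoints of the ranges measured in this
# article, used purely for illustration.
def cumulative_gain(step_gains):
    total = 1.0
    for g in step_gains:
        total *= 1.0 + g
    return total - 1.0

gain_1mb_to_2mb = 0.125   # ~12-13% going from 1MB to 2MB L2
gain_2mb_to_4mb = 0.0625  # ~4-8.5% going from 2MB to 4MB L2

overall = cumulative_gain([gain_1mb_to_2mb, gain_2mb_to_4mb])
print(f"Overall 1MB -> 4MB gain: {overall:.1%}")  # Overall 1MB -> 4MB gain: 19.5%
```

Multiplying the steps rather than adding them is what puts the total at roughly 20% rather than 19%.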

In order to close the performance deficit, you'd have to run a Pentium Dual-Core at almost a 20% higher frequency than a Core 2 Duo E4000, and around a 35% higher frequency than a Core 2 Duo E6000 series processor.
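To make that trade-off concrete, here is a hedged sketch of the arithmetic. The 0.65 clock-scaling efficiency (the fraction of a frequency increase that turns into real game performance) is an illustrative assumption, not a measured value:

```python
# Estimate the clock-speed uplift needed to offset a cache-induced
# performance deficit, under a simple linear model where each 1% of
# extra frequency yields `efficiency` percent of extra performance.
def required_freq_uplift(perf_gap, efficiency):
    return perf_gap / efficiency

# ~12.5% deficit (1MB of L2 vs. the E4000's 2MB) at an assumed
# 0.65 scaling efficiency:
print(f"{required_freq_uplift(0.125, 0.65):.1%}")  # 19.2%
```

Under the same model, the larger deficit against a 4MB E6000 demands a correspondingly bigger clock-speed uplift.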

Comments

  • p30n - Wednesday, October 17, 2007 - link

    very good point.
  • retrospooty - Wednesday, October 17, 2007 - link

    "What does this show us? At least for UT3 quad (vs dual) is rather a waste."

    ya, that's pretty much what they said in the article. They tested it so the results can be known.
  • thompsjt1 - Wednesday, October 17, 2007 - link

    "THERE ARE NO HIGH RESOLUTION TEXTURES" in the UT3 BETA demo. They didn't include them for download size reasons, and I am sure they WILL include them in the real official demo. I think once you are running these high resolution textures with settings maximized, we will see a bigger difference between the NVIDIA and AMD numbers.
  • pnyffeler - Wednesday, October 17, 2007 - link

    The article spent a lot of time on the effect of the size of the cache on an Intel processor, but what about AMD? Does the size of the cache matter, or is this yet another example of Intel's Northbridge system being trumped by AMD's advantage of having the memory controller on the CPU?

    I have no misconceptions that AMD has a chance of topping an Intel here. I'm just curious to see how much better Nehalem will be.

    P.S. Thumbs down on the CPU comparison. You said in the setup you were going to test an X2 4200, but it never made the charts. And what about an 8600 GT? I'm going to be running this game at 640x480, aren't I....
  • NullSubroutine - Wednesday, October 17, 2007 - link

    I believe they did not really test AMD's CPUs right now because they are awaiting the arrival of Phenom, which is less than a month away.

    Looks as though the 200-250 range (RV670 bins) are going to kick some bahooty given their higher core speeds, especially at the 1280x1024, 1600x1200, and 1680x1050 resolutions.
  • Chaser - Wednesday, October 17, 2007 - link

    quote:

    but the real surprise is how competitive AMD is with the Radeon HD 2900 XT.


    I knew with mature drivers this card would rock. It only took a short amount of time. Good job, ATI and AnandTech, for demonstrating this.
  • aka1nas - Wednesday, October 17, 2007 - link

    That's mainly because there is no AA applied, since the UT3 engine doesn't support it in DX9 mode. AA has been the R600's stumbling block.
  • NullSubroutine - Wednesday, October 17, 2007 - link

    Multi-sampling seems to run fine; it is super-sampling that seems to be broken.
  • shabby - Wednesday, October 17, 2007 - link

    The beta demo looks nothing like what Epic wanted us to think; these pics are from back in July.
    http://ve3d.ign.com/images/fullsize/143/PC/Unreal-...">http://ve3d.ign.com/images/fullsize/143/PC/Unreal-...
  • swaaye - Wednesday, October 17, 2007 - link

    It looks nothing like that because that is an ultra-supersampled bullshot, just like every single other game gets these days. In reality, UT3 looks as good as Gears of War right now, and it will look a lot better once we get the high-quality assets with the full game.
