It's been a long time coming, but we finally have Epic's first Unreal Engine 3-based game out on the PC. While the final version of Unreal Tournament 3 is still a little further out, last week's beta release kept us occupied over the past several days as we benchmarked the engine behind Rainbow Six: Vegas, Gears of War, and BioShock.

Epic's Unreal Engine 3 has powered some truly beautiful, next-generation titles, and it is significantly more demanding on both the CPU and GPU than Valve's Source engine. While far from the impossible-to-run beast that Oblivion was upon its release, UE3 is still more stressful on modern-day hardware than most of what we've seen thus far.

The Demo Beta

Although Unreal Tournament 3 is due out before the end of the year, what Epic released is a beta of the UT3 demo, so it's not as polished as a final demo would be. The demo beta can record demos but can't play them back, so conventional benchmarking is out. Thankfully, Epic left in three scripted flybys that take a camera along a set path around each level, devoid of all characters.

Real-world UT3 performance will be more strenuous than what these flybys show, but they're the best we can muster for now. The final version of UT3 should have full demo playback functionality, which will let us provide better performance analysis. The demo beta also ships with only medium-quality textures, so the final game can be even more stressful/beautiful if you so desire.

The flybys can run for an arbitrary period of time; we standardized on 90 seconds for each flyby in order to get repeatable results while still keeping the tests manageable to run. Three flyby benchmarks come bundled with the demo beta: DM-ShangriLa, DM-HeatRay, and vCTF-Suspense.

As their names imply, the ShangriLa and HeatRay flybys cover the Shangri La and Heat Ray deathmatch levels, while vCTF-Suspense is a flyby of the sole vehicle CTF level included with the demo.
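Since the demo beta offers no built-in way to compare runs, a small helper along these lines can summarize the average-FPS numbers from repeated flybys to confirm that the runs really are repeatable. This is a sketch of our own devising: the "Avg FPS:" log format it parses is hypothetical, as the demo beta's actual output format isn't documented.

```python
import re
import statistics

# Hypothetical log lines of the form "Avg FPS: 87.3" -- the demo beta's
# real output may differ; adjust the regex to match your logs.
FPS_RE = re.compile(r"Avg FPS:\s*([0-9.]+)")

def summarize_runs(log_texts):
    """Return (mean, stdev) of average FPS across repeated flyby runs."""
    fps = []
    for text in log_texts:
        match = FPS_RE.search(text)
        if match:
            fps.append(float(match.group(1)))
    if len(fps) < 2:
        raise ValueError("need at least two runs to judge repeatability")
    return statistics.mean(fps), statistics.stdev(fps)

# Example: three 90-second DM-ShangriLa flyby runs (made-up numbers).
runs = ["Avg FPS: 88.1", "Avg FPS: 87.5", "Avg FPS: 88.4"]
mean, sd = summarize_runs(runs)
print(f"mean={mean:.1f} fps, stdev={sd:.2f}")
```

A low standard deviation relative to the mean is what justifies treating a single flyby number as representative.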

Our GPU tests were run at the highest quality settings and with the -compatscale=5 switch enabled, which sets all detail settings to their highest values.

Our CPU tests were run at the default settings without the compatscale switch, as we're looking to measure CPU performance rather than GPU performance.
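Putting the two configurations together, a launcher along these lines could drive the flybys. Note that apart from the -compatscale=5 switch and the map names, everything here is an assumption on our part: the ?causeevent=FlyThrough map option, the -seconds switch, and the install path follow common Unreal Engine conventions but aren't documented for the demo beta. The script only prints the commands (a dry run) so you can verify them before launching.

```shell
#!/bin/sh
# Dry-run builder for UT3 demo beta flyby benchmark commands.
# The ?causeevent=FlyThrough map option, the -seconds switch, and the
# install path are assumptions; adjust them to match your setup.
UT3_EXE="C:/Program Files/Unreal Tournament 3 Demo/Binaries/UT3.exe"
MAP="DM-ShangriLa"      # alternatives: DM-HeatRay, vCTF-Suspense
RUN_SECONDS=90          # the 90-second window used for our results

# GPU test: highest quality, with -compatscale=5 maxing out detail settings.
GPU_CMD="\"$UT3_EXE\" $MAP?causeevent=FlyThrough -seconds=$RUN_SECONDS -compatscale=5"
# CPU test: default settings, no compatscale switch.
CPU_CMD="\"$UT3_EXE\" $MAP?causeevent=FlyThrough -seconds=$RUN_SECONDS"

echo "$GPU_CMD"
echo "$CPU_CMD"
```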

The Test

Test Setup
CPU:

Intel Core 2 Extreme QX6850 (3.33GHz 4MB 1333FSB)
Intel Core 2 Quad Q6700 (2.66GHz 4MB 1066FSB)
Intel Core 2 Quad Q6600 (2.40GHz 4MB 1066FSB)
Intel Core 2 Duo E6750 (2.66GHz 4MB 1333FSB)
Intel Core 2 Duo E6550 (2.33GHz 4MB 1333FSB)
Intel Core 2 Duo E4500 (2.2GHz 2MB 800FSB)
Intel Core 2 Duo E4400 (2.0GHz 2MB 800FSB)
Intel Pentium Dual-Core E2160 (1.8GHz 1MB 800FSB)
Intel Pentium Dual-Core E2140 (1.6GHz 1MB 800FSB)
AMD Athlon 64 X2 6400+ (3.2GHz 2x1MB)
AMD Athlon 64 X2 6000+ (3.0GHz 2x1MB)
AMD Athlon 64 X2 5600+ (2.8GHz 2x1MB)
AMD Athlon X2 5000+ (2.6GHz 2x512K)
AMD Athlon X2 4200+ (2.2GHz 2x512K)
AMD Athlon X2 4000+ (2.1GHz 2x512K)

Motherboard:
    Intel: Gigabyte GA-P35C-DS3R
    AMD: ASUS M2N32-SLI Deluxe
Video Cards:
    AMD Radeon HD 2900 XT
    AMD Radeon X1950 XTX
    NVIDIA GeForce 8800 Ultra
    NVIDIA GeForce 8800 GTX
    NVIDIA GeForce 8800 GTS 320MB
    NVIDIA GeForce 7900 GTX
Video Drivers:
    AMD: Catalyst 7.10
    NVIDIA: 163.75
Hard Drive: Seagate 7200.9 300GB 8MB 7200RPM
RAM: 2x1GB Corsair XMS2 PC2-6400 4-4-4-12
Operating System: Windows Vista Ultimate 32-bit
UT3 Teaches us about CPU Architecture

72 Comments


  • Matthew12222 - Thursday, July 17, 2008 - link

2MB L2 cache vs. 4MB L2 cache! And a 1000MHz FSB vs. 1333MHz. This is after proving L2 makes a big difference and FSB a smaller one.
  • SiliconDoc - Thursday, February 07, 2008 - link

    I have been enjoying Anandtech now for many years, and have appreciated the articles I've learned so much from for so long. One of the first uses of that knowledge was passing along the review on the HOT 591P to a very good programmer friend, upon which he purchased the board for his at home portion of work.
    That said, I had to finally make a username so I could comment on - the bias that is so often shown against AMD here. It is often subtle, in certain wordings and in less than blatantly obvious ways, but it has bothered me for some time. I guess that's the way the cookie crumbles, everyone has a favorite, for whatever reason.
    Concerning this article, I plodded along, and then found out that something was amiss once again with the tests chosen, or the equipment chosen, that resulted in a strange result, to AMD's disadvantage. I've seen it here it seems 100 times. Like good representatives, the articles writers pass along that they notified the manufacturers/companies, relieving some of the disdain for it I tasted. I wonder if AMD feels the same way. I doubt it. I suspect they are and have been angry about it.
    If it was the UT3 game, or the sli board, or whatever, why was the test posted as "valid" when it scientifically proved something other than card framerate limits were amiss ?
    I just can't help wondering how ragingly angry AMD reps are that view this type of thing, over and over again.
    I'm not sure why the bias is so consistent here, but it is here, and I just wish it wasn't.
    I've really never seen this site "unable" to diagnose the most obscure of matters when it comes to performance issues, but then again, if it's an AMD chip, often the two shoulders go up and the blank pout is given, then the applause for Intel is heartily enjoyed.
    If I'm not the first person who has said something like this, good.
    I don't generally read any comments, so maybe everyone has accepted the slant already and moved on.
    Nonetheless I think this site is wonderful, and will no doubt be visiting it for years to come, learning and learning and learning more and more, as much as I can to help me in my endeavors.
For that, I have no real way to repay the good that has been done for me; I hope that in some way explains why I feel fine expressing my opinion concerning the processor wars and the handling of the same by this site.
    Thanks to all at anandtech and all the rest of the fans out there.
  • BlackOmega - Thursday, November 08, 2007 - link

    Very useful article.

Anyway, I would suggest you guys post the difference in the minimum framerate attained by both processors...

I'm running an Athlon 4200+ overclocked to 2.8GHz, and after some benchmarking I found out that there are certain areas in-game where the frame rate drops severely. In flyby runs I get 100+ fps running at 1024x768, but in actual gameplay, places like the Shock Rifle/Helmet area in ShangriLa make the frame rate drop to ~40 fps.

It would be nice if you guys could test those areas and see how the different processors affect minimum frame rate in especially heavy areas of the map.

    I'm also very interested in how cache affects AMD's processors performance.
  • Nil Einne - Saturday, October 27, 2007 - link

Historically, AMD's A64 architecture has been a lot less cache-sensitive than Intel's C2. It would have been interesting to see how A64 performance depends on cache akin to the C2, but sadly you didn't test this.
  • Tuvokkk - Sunday, October 21, 2007 - link

please anand let us know the command to run the flyby benchmarks and the settings u used so that we can compare our results
  • Zoomer - Friday, October 19, 2007 - link

How much disk space does the beta demo take up? I tried to run the 700MB installer and it complains that there's insufficient disk space, even though I have 20GB free in the partition containing my programs and user data, and >200GB spread out over a few other partitions.

    Or does it do a dumb check of c:\ ? That could be a problem; my c partition is only 4 gigs big and contains only windows files.
  • TSIMonster - Thursday, October 18, 2007 - link

I'd like to see how the 2900 does with the addition of AA & AF. Typically, that is where it falls behind slightly. The architecture seems fine; it's just the power usage and lack of AA & AF support that gets me thus far.
  • poohbear - Thursday, October 18, 2007 - link

thanks anandtech for a great article!!!! very detailed and informative stuff. cheers again for the article and i hope u revisit it when the full game comes out. Mind u, were u using DX10 or 9 and can u do a comparison on this end too?
  • mongoosesRawesome - Thursday, October 18, 2007 - link

I'm wondering if the large discrepancy in performance between 1MB and 4MB cache CPUs remains when you turn up the frequency. It seems that a lot of people are buying the lower-clocked Pentium dual cores and overclocking them to 3GHz speeds. Could you compare the chips in that situation in order to see if the cache matters as much at high frequencies as it does at low frequencies?
  • shuffle2 - Wednesday, October 17, 2007 - link

I would love to see the game run with hardware physics enabled - with both the card installed and also with only the software installed. I currently run the beta demo on highest settings available, including hardware acceleration, and no errors are thrown up at any time. also, the game performs very, very smoothly.
