The Benchmarks

We ran all of our tests on the following system configurations:

Test Setup
CPU: Intel Core 2 Duo E6550 (2.33GHz, 4MB L2, 1333MHz FSB)
     Intel Core 2 Duo E4500 (2.2GHz, 2MB L2, 800MHz FSB)
     Intel Core 2 Duo E4400 (2.0GHz, 2MB L2, 800MHz FSB)
     Intel Pentium E2160 (1.8GHz, 1MB L2, 800MHz FSB)
     Intel Pentium E2140 (1.6GHz, 1MB L2, 800MHz FSB)
     AMD Athlon X2 5000+ (2.6GHz, 2x512KB L2)
     AMD Athlon X2 4800+ (2.5GHz, 2x512KB L2)
     AMD Athlon X2 4200+ (2.2GHz, 2x512KB L2)
     AMD Athlon X2 4000+ (2.1GHz, 2x512KB L2)
Motherboard: Intel: ASUS P5K-V (G33)
             AMD: Biostar TF-7050M2
Video Cards: AMD Radeon HD 2900 XT
             AMD Radeon HD 2600 XT
             AMD Radeon HD 2600 Pro
             AMD Radeon HD 2400 XT
             NVIDIA GeForce 8800 Ultra
             NVIDIA GeForce 8800 GTX
             NVIDIA GeForce 8800 GTS 320MB
             NVIDIA GeForce 8600 GTS
             NVIDIA GeForce 8600 GT
Video Drivers: AMD: Catalyst 7.10
               NVIDIA: 163.69
Hard Drive: Seagate 7200.9 300GB (7200RPM, 8MB cache)
RAM: 2x1GB Corsair XMS2 PC2-6400 (4-4-4-12)
Operating System: Windows Vista Ultimate 32-bit


We'll start off with a look at CPU performance. For these tests we ran at 1024 x 768 to minimize any GPU bottlenecks and highlight differences between CPUs.
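If you want to reproduce these runs at home, the Source engine's built-in timedemo command can be scripted. Below is a minimal sketch of the kind of wrapper that works; the install path, the demo name (ep2_indoor.dem), and the exact timedemo summary format are all assumptions you will need to adjust for your own Steam setup, not a copy of our internal test scripts.

```python
# Hypothetical wrapper around Source's built-in timedemo command.
# The install path, demo name, and summary-line format are assumptions;
# adjust them for your own Steam setup.
import re
import subprocess
from pathlib import Path

HL2_EXE = Path(r"C:\Program Files\Steam\steamapps\common\Half-Life 2\hl2.exe")
EP2_DIR = HL2_EXE.parent / "ep2"
DEMO = "ep2_indoor"  # hypothetical name for ep2_indoor.dem placed in EP2_DIR

def run_timedemo(width=1024, height=768):
    """Launch Episode Two, play back the demo, and log the console output.

    -condebug makes the engine mirror console output to console.log in the
    game directory; quit the game after the demo finishes and run() returns.
    """
    subprocess.run([
        str(HL2_EXE), "-game", "ep2", "-novid", "-condebug",
        "-w", str(width), "-h", str(height),
        "+timedemo", DEMO,
    ], check=True)

def parse_avg_fps():
    """Pull the average fps out of the timedemo summary line, which looks
    something like '2900 frames 16.563 seconds 175.09 fps ...' (assumed)."""
    log = (EP2_DIR / "console.log").read_text(errors="ignore")
    match = re.search(r"\d+ frames\s+[\d.]+ seconds\s+([\d.]+) fps", log)
    return float(match.group(1)) if match else None

if __name__ == "__main__":
    run_timedemo()
    print("average fps:", parse_avg_fps())
```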

CPU Performance

Our first benchmark is of an indoor map with a reasonably sized firefight taking place. The frame rates are high due to the indoor environment, but the explosions and AI interaction keep our CPUs busy doing more than just feeding data to the GPU.

[Graph: Half Life 2 Episode 2 - indoor firefight]


Intel continues to hold onto the overall performance crown as AMD has nothing faster than the Athlon 64 X2 6400+, but the race is a bit closer at lower price points.

The limited-production 6400+, despite being more expensive than Intel's Core 2 Duo E6750, ends up underperforming its closest competitor. The Core 2 Duo E6550 is about 5% faster than the Athlon 64 X2 6000+, but honestly the performance advantage isn't large enough to really matter, especially in more realistic GPU-bound scenarios.

The Core 2 Duo E4000 series ends up losing a bit of ground to AMD thanks to its smaller L2 cache and slower FSB, both of which Episode 2 is particularly sensitive to. The Athlon 64 X2 5600+ is 12% faster than its price competitor, the Core 2 Duo E4500. It's interesting to note the impact of L2 cache size on performance even in the AMD camp; the Athlon 64 X2 5000+ only has a 512KB L2 per core (vs. 1MB on the 5600+) and performance drops significantly, to the point where it's a toss-up between the 5000+ and the Core 2 Duo E4400.

At the low end of the spectrum, Half Life 2's dependency on very fast memory accesses and large cache sizes really penalizes the Pentium Dual-Core processors, both of which are bested by their AMD rivals. AMD's margin of victory isn't tremendous, but it's clear that Intel's lack of an on-die memory controller does hinder the gaming performance of these small-cache parts.
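To illustrate why small caches and an off-die memory controller hurt so much in a game like this, consider a simple pointer-chase experiment: once the working set outgrows the L2 cache, every dependent load becomes a trip to main memory (and, on Intel's parts, across the FSB). The sketch below is purely illustrative and was not part of our testing; Python's interpreter overhead inflates the absolute numbers, so only the trend across working-set sizes is meaningful.

```python
# Illustrative only (not part of our test suite): a pointer chase whose
# working set sweeps across typical L2 sizes. Once the chain no longer
# fits in cache, each dependent load stalls on main memory. CPython
# overhead inflates the absolute ns/hop figures; watch the trend.
import random
import time

def make_cycle(n):
    """Random single-cycle permutation so the chase touches every element."""
    order = list(range(n))
    random.shuffle(order)
    chain = [0] * n
    for i in range(n):
        chain[order[i]] = order[(i + 1) % n]
    return chain

def chase_latency(n, steps=1_000_000):
    """Average time per dependent load through the chain, in ns."""
    chain = make_cycle(n)
    idx = 0
    start = time.perf_counter()
    for _ in range(steps):
        idx = chain[idx]  # the next index depends on the previous load
    return (time.perf_counter() - start) / steps * 1e9

for kb in (256, 512, 1024, 4096, 16384):  # spans the L2 sizes discussed here
    n = kb * 1024 // 8  # very rough: one 8-byte reference per list slot
    print(f"{kb:>6} KB working set: {chase_latency(n):6.1f} ns/hop")
```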

Looking at real-world performance, however, even the difference between an Athlon 64 X2 4200+ and a Pentium E2160 will be largely masked by GPU limitations at higher resolutions.

Our next benchmark takes place in the outland_03 level, in a small indoor environment, resulting in ridiculously high frame rates for all of the CPUs. The performance comparison in this benchmark is almost purely academic, simply highlighting differences between microprocessors, as even the cheapest CPUs in this comparison pull over 170 fps in this test.

[Graph: Half Life 2 Episode 2 - outland_03]


Half Life 2 Episode 2 continues to be very sensitive to FSB frequency, which is part of the reason we see the dual-core E6750 (1333MHz FSB) pull ahead of the quad-core Q6700 (1066MHz FSB). There's also a slight overhead associated with running HL2 on a quad-core system, and the game doesn't take advantage of more than two threads. Quad-core owners won't really be plagued by lower performance than their dual-core compatriots; they simply don't get any benefit out of the latest version of Valve's Source engine.

The standings remain relatively unchanged, and once more we see that L2 cache size and latency strongly impact performance. The 90nm Athlon 64 X2 5600+ features a 1MB L2 cache per core with a slightly lower access latency than the 65nm Athlon 64 X2 5000+ and its 512KB L2 per core. The resulting performance difference is significant: the 5600+ holds onto a 20% performance advantage over the 5000+, despite only a 7.6% increase in clock speed.
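A quick back-of-the-envelope calculation makes the cache contribution explicit, using AMD's published clock speeds and the 20% result above:

```python
# Back-of-the-envelope split of the 5600+ vs. 5000+ result quoted above.
clock_5600 = 2.8  # GHz, Athlon 64 X2 5600+ (90nm, 2x1MB L2)
clock_5000 = 2.6  # GHz, Athlon 64 X2 5000+ (65nm, 2x512KB L2)
perf_gain = 1.20  # observed: the 5600+ is ~20% faster in this test

clock_gain = clock_5600 / clock_5000  # ~1.077, the ~7.6% quoted above
cache_gain = perf_gain / clock_gain   # what clock speed alone can't explain
print(f"clock speed accounts for {clock_gain - 1:.1%} of the gain")
print(f"the remaining {cache_gain - 1:.1%} per-clock gain comes from the L2")
```

That leftover ~11% per-clock gain is the part attributable to the larger, lower-latency L2.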

Intel's Core 2 lineup gets hurt even worse as you look at the 800MHz FSB/2MB L2 E4000 parts. The E4500 is seriously outperformed by the 5600+ and ends up roughly equal to the 5000+, despite the latter being priced lower.

The Pentium Dual-Core processors pull up the rear once more, thanks to even lower clock speeds and meager 1MB L2 caches shared between two cores.

Our final performance test takes place in the outland_10 map, an outdoor environment where we spend most of our time driving around (poorly) and avoiding the topography of the level.

[Graph: Half Life 2 Episode 2 - outland_10]


The standings remain unchanged: Intel holds the overall performance crown, the two are competitive at $150 - $200 price points, and AMD manages to pull ahead in the sub $150 market.

Comments

  • Spoelie - Sunday, October 14, 2007

    any word on the msaa units in the render backend of the rv670, are they fixed?
    seeing the x1950xtx with the exact same framerate as the 2900xt when 4xAA is turned on really hurts.

    this just shows how good the r580 core really was, being able to keep up with the 8800GTS on this workload
  • NullSubroutine - Thursday, October 18, 2007

    I believe the RV670 did address AA, as well as some other fixes and improvements (as reported on other sites, the core is not just a die shrink).
  • xinoxide - Sunday, October 14, 2007

    why is this card not included? the increase in memory speed and capacity does help framerates, especially when sampling many parts of the image, examples being the shadow cache and AA resamples. I've been finding anand to be "somewhat" biased in this respect; they show the 8800ULTRA's performance over the 8800GTX itself, yet while ati struggles in driver aspects, that doesn't mean there is no increase in performance from the HD2900XT 512MB to the HD2900XT 1GB
  • felang - Sunday, October 14, 2007

    I can't seem to download the demo files...
  • DorkOff - Saturday, October 13, 2007

    I for one really, really appreciate the attached *.dem files. I wish all benchmarking reviews, be it of video cards or CPUs, would include these. Thank you, AnandTech
  • MadBoris - Saturday, October 13, 2007

    I'm starting to get disappointed in the trend of console transitions shaping technology enhancements. It's no surprise that UE3 and Valve's Source will perform well; they have to run well on consoles after all. What I don't like is the lack of DX10 integration (although it can be argued it's premature and has been fake in other games), as well as things like AA being 'notably absent' in UE3. Obviously the platform compatibility challenges are keeping them more than busy, so extra features like this are out of reach. I guess that is the tradeoff; I'm sure the games are fun though, so not too much to complain about. Technology is going to level off a bit for a while, apparently. I'm sure owners of older value HW can be glad about all this.

    Real soon, if not already, we will have more power on the PC than will be used for some time to come, with one exception... This year it will be Crytek pushing the envelope, and those GTS320s (great for multiplatform games limited by consoles) are going to show that the 512MB of memory that started appearing on video cards many years ago shouldn't have been a trend so easily ignored by PC gamers going for a 320.

    Otherwise, looking forward to many more of these for UE3 and of course Crysis.
  • MadBoris - Saturday, October 13, 2007

    I forgot to mention things like x64 adoption as another thing that won't be happening in games for years, due to the limited memory constraints in consoles setting the bar. Crytek cannot move the industry along by itself; in fact, it may even get a black eye for doing what Epic, id, and Valve used to do but don't anymore. Many despised the gaming industry's upward hardware demands, but the advances we have today, including multicore CPUs and the GPUs we have, were all helped along faster by the gaming industry.

    Memory is inexpensive, and imagine if we could move on up to 3 or 4GB in games in the coming couple of years; game worlds could become quite large and full, and level loading not nearly as frequent or interrupting. But alas, x64 will be another thing that won't be utilized in the years to come. With the big boys like Epic, id, and Valve all changing their marketing strategies and focus, it appears things will never be the same, at least with the consoles' five-year leaps now dictating terms. It's even doubtful that next-gen consoles would use x64, because they won't spend money to add more memory and therefore have no need for it. 32-bit: how do we get out of it when MS won't draw the line?
    Sorry if this is deemed a bit off topic, but being a tech article about a game, it kind of got my brain thinking about these things.
  • munky - Saturday, October 13, 2007

    What's the point of running your cpu benches at 1024 resolution? We already know Intel will have the performance lead, but these benches say nothing about how much difference the cpu makes at resolutions people actually use, like 1280, 1600, and 1920.
  • Final Hamlet - Saturday, October 13, 2007

    ...your GPU-world starts with an ATI 2900XT and ends with an NVIDIA 8800Ultra. Mine does not. I really would like to see common GPUs (take a look @ the Valve Hardware Survey... how much of that ridiculously high-end hardware do you see there?). Please include the ATI 1xxx and the NVIDIA 7xxx series @ some _realistic_ resolutions (sorry... maybe you have a home theater with 10,000x5,000, but testing that has no relevance to 99%+ of your readers, so why do you keep on producing articles for less than 1% of PC players?)
  • archcommus - Saturday, October 13, 2007

    I'm running an Athlon 64 3200+, 1 GB of memory, and an X800 XL, and the game plays beautifully smooth (not a single hiccup even during action) at 1280x1024 with MAX EVERYTHING - by max everything I mean very high texture detail, high everything else, 6x AA, 16x AF. So if you're running a regular LCD resolution (not widescreen) you basically don't need benchmarks at all - if you built your system anytime within the last 2-3 years (up to two generations ago), it's going to run fine even with max settings. Thus the benchmarks are tailored to people running much higher resolutions, and because of that need higher-end hardware.

    Considering the game still looks great with the exception of some more advanced lighting techniques that new games have, I'm very impressed with what Valve has done.
