Introduction

At the end of 2004, barely over six years after the release of the original Half Life, Valve unleashed the long-awaited sequel upon the world. We stayed up late that launch night benchmarking the new game, worried that it would only run well on ATI cards; instead, we were pleasantly surprised to find that Valve had made a Half Life 2 that ran very well on virtually all hardware, with the exception of the GeForce FX.

A year and a half later, Valve brought out Episode One, an attempt at episodic content that was supposed to guarantee quicker game releases, more frequent updates to the story and a better overall experience for gamers. Performance changed a bit with the release of Episode One and its associated version of Valve's Source engine, and the game quickly became a regular part of our CPU and GPU test suites.

Once more, around a year and a half later, Valve finally released Episode Two, the second installment in the Half Life 2 episodic series. Armed with the latest version of the Source engine, we went to town benchmarking the new game to see where things had changed, if at all.

Our experiences with Half Life 2 and Episode One kept expectations realistic this time around; Valve has historically sacrificed overall image quality in order to maintain playability on even the slowest hardware. What you'll see here today is that every single component we tested, down to the cheapest CPU and GPU, is more than enough to run Half Life 2: Episode Two. Of course, a faster CPU will allow you to extract more performance out of faster GPUs, and faster graphics cards give you the ability to run at higher resolutions, but the minimum requirements for playability are more than reasonable for any modern-day system.

For those of you interested, we are offering our demo files for download so you can compare your own systems. The demos are zipped up here: athl2ep2.zip.
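If you'd like to run them yourself, the general procedure is the same as for any Source engine game; the sketch below assumes a typical Steam installation, so the exact folder name and the demo file names may differ from what is shown here.

1. Extract the .dem files from athl2ep2.zip into the game's content folder (typically something like Steam\steamapps\<account>\half-life 2 episode two\ep2).
2. Add -console to Episode Two's launch options in Steam so the developer console is available.
3. From the console, run timedemo <demoname> (the file name without the .dem extension).
4. When playback finishes, the console reports the number of frames rendered, the elapsed time, and the average framerate.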

Comments

  • Spoelie - Sunday, October 14, 2007 - link

    any word on the msaa units in the render backend of the rv670, are they fixed?
    seeing the x1950xtx with the exact same framerate as the 2900xt when 4xAA is turned on really hurts.

    this just shows how good the r580 core really was, being able to keep up with the 8800GTS on this workload
  • NullSubroutine - Thursday, October 18, 2007 - link

    I believe the RV670 did address the AA issue, along with some other fixes and improvements (as reported on other sites, the core is not just a die shrink).
  • xinoxide - Sunday, October 14, 2007 - link

    why is this card not included? the increase in memory speed and capacity does help framerates, especially when sampling many parts of the image, examples being the shadow cache and AA resamples. I've been finding anand to be "somewhat" biased in this respect: they show the 8800ULTRA's performance over the 8800GTX itself, but while ati struggles in driver aspects, that doesn't mean there is no increase in performance from the HD2900XT 512MB to the HD2900XT 1GB
  • felang - Sunday, October 14, 2007 - link

    I can't seem to download the demo files...
  • DorkOff - Saturday, October 13, 2007 - link

    I for one really, really appreciate the attached *.dem files. I wish all benchmarking reviews, be they of video cards or CPUs, would include these. Thank you, Anandtech.
  • MadBoris - Saturday, October 13, 2007 - link

    I'm starting to get disappointed by the effect the console transition is having on technology enhancements. It's no surprise that UE3 and Valve's Source will perform well; they have to run well on consoles after all. What I don't like is the lack of DX10 integration (although it can be argued that's premature and has been fake in other games), as well as things like AA being 'notably absent' in UE3. Obviously the platform compatibility challenges are keeping them more than busy, so extra features like this are out of reach. I guess that is the tradeoff; I'm sure the games are fun though, so not too much to complain about. Technology is going to level off a bit for a while, apparently. I'm sure owners of older value HW can be glad about all this.

    Real soon, if not already, we will have more power on the PC than will be used for some time to come, with one exception... This year it will be Crytek pushing the envelope, and those GTS 320s (great for multiplatform games limited by consoles) are going to show that the 512MB of memory that started appearing on video cards many years ago shouldn't have been a trend so easily ignored by PC gamers going for a 320.

    Otherwise, looking forward to many more of these for UE3 and of course Crysis.
  • MadBoris - Saturday, October 13, 2007 - link

    I forgot to mention things like x64 adoption as another thing that won't be happening in games for years, due to the limited memory in consoles setting the bar. Crytek cannot move the industry along by itself; in fact, it may even get a black eye for doing what Epic, id, and Valve used to do but don't anymore. Many despised the gaming industry's ever-increasing hardware demands, but the advances we have today are due to them; things like multicore CPUs and the GPUs we have now were all pushed along faster by the gaming industry.

    Memory is inexpensive, and imagine if we could move up to 3 or 4 GB in games in the coming couple of years: game worlds could become quite large and full, and level loading wouldn't be nearly as frequent or interrupting. But alas, x64 will be another thing that won't be utilized in the years to come. With the big boys like Epic, id, and Valve all changing their marketing strategies and focus, it appears things will never be the same, at least not with the consoles' five-year leaps now dictating terms. It's even doubtful that next-gen consoles will use x64, because they won't spend the money to add more memory and therefore have no need for it. 32-bit: how do we get out of it when MS won't draw the line?
    Sorry if this is deemed a bit off topic, but this being a tech article about a game, it got me thinking about these things.
  • munky - Saturday, October 13, 2007 - link

    What's the point of running your cpu benches at 1024 resolution? We already know Intel will have the performance lead, but these benches say nothing about how much difference the cpu makes at resolutions people actually use, like 1280, 1600, and 1920.
  • Final Hamlet - Saturday, October 13, 2007 - link

    ...your GPU world starts with an ATI 2900XT and ends with an Nvidia 8800 Ultra. Mine does not. I really would like to see common GPUs (take a look @ the Valve Hardware Survey... how much of that ridiculously high-end hardware do you see there?). Please include the ATI 1xxx and the Nvidia 7xxx series @ some _realistic_ resolutions (sry... maybe you have a home theater with 10,000x5,000, but testing that has no relevance to about 99%+ of your readers, so why do you keep on producing articles for less than 1% of PC players?)
  • archcommus - Saturday, October 13, 2007 - link

    I'm running an Athlon 64 3200+, 1 GB of memory, and an X800 XL, and the game plays beautifully smooth (not a single hiccup even during action) at 1280x1024 with MAX EVERYTHING - by max everything I mean very high texture detail, high everything else, 6x AA, 16x AF. So if you're running a regular LCD resolution (not widescreen) you basically don't need benchmarks at all - if you built your system anytime within the last 2-3 years (up to two generations ago), it's going to run fine even with max settings. Thus the benchmarks are tailored to people running much higher resolutions, and because of that need higher-end hardware.

    Considering the game still looks great with the exception of some more advanced lighting techniques that new games have, I'm very impressed with what Valve has done.
