Final Words

With the low cost of CPUs these days and with many affordable GPUs on the market, building a system capable of playing Half-Life 2: Episode Two just isn't that difficult.

On the CPU side we were caught off guard by exactly how much cache size impacted performance in Episode Two, rendering the Core 2 Duo E4000 and Pentium E2000 series processors much slower than their competition.

AMD was also far more competitive than expected, most likely as a result of the Source engine's dependence on low latency memory accesses. While Intel continues to hold the performance crown, at the $133 and lower price points AMD actually ends up being the better processor to have. If and when Phenom can get to those price points, AMD could actually end up being significantly more competitive than it has been since the launch of Core 2.

Given the performance impact we've seen from faster FSBs and larger caches, however, Intel's 45nm Penryn core should do a good job of fixing low-end performance once it makes its way down to lower price points. It also remains to be seen how much of the cache sensitivity we saw here today will translate into other upcoming games, such as today's Unreal Engine 3 based UT3 demo.

While NVIDIA offers the only solutions for those who wish to run Episode Two with all features enabled at 2560x1600 with 4xAA, the 2900 XT does outperform the 8800 GTS at the $400 price point. The 8800 GTS 320MB is once again a huge value for the money, as it performs almost identically to the 8800 GTS 640MB part (with the exception of anything above 1920x1200 with 4xAA, which handicaps the lower-memory card).

As we mentioned, almost anything can play Episode Two, but if you want high quality at 1280x1024, you'll need at least the equivalent performance of a modern $100+ graphics card. Serious (and even casual) PC gamers will very likely already have something that meets this requirement. Clearly this is no Crysis, but at the same time we applaud Valve's efforts to keep its engine up to date.

46 Comments
  • Spoelie - Sunday, October 14, 2007 - link

    any word on the MSAA units in the render backend of the RV670, are they fixed?
    seeing the X1950 XTX with the exact same framerate as the 2900 XT when 4xAA is turned on really hurts.

    this just shows how good the R580 core really was, being able to keep up with the 8800 GTS on this workload
  • NullSubroutine - Thursday, October 18, 2007 - link

    I believe the RV670 did address AA as well as some other fixes and improvements (as reported on other sites, the core was not just a die shrink).
  • xinoxide - Sunday, October 14, 2007 - link

    Why is this card not included? The increase in memory speed and capacity does help framerates, especially when sampling many parts of the image (the shadow cache and AA resamples being examples). I've been finding Anand to be "somewhat" biased in this respect: they show the 8800 Ultra's performance over the 8800 GTX itself, yet while ATI struggles on the driver side, that doesn't mean there is no increase in performance from the HD2900XT 512MB to the HD2900XT 1GB.
  • felang - Sunday, October 14, 2007 - link

    I can't seem to download the demo files...
  • DorkOff - Saturday, October 13, 2007 - link

    I for one really, really appreciate the attached *.dem files. I wish all benchmarking reviews, be they of video cards or CPUs, would include these. Thank you, AnandTech.
  • MadBoris - Saturday, October 13, 2007 - link

    I'm starting to get disappointed by the effect console transitions are having on the pace of technology enhancements. It's no surprise that UE3 and Valve's Source will perform well; they have to run well on consoles, after all. What I don't like is the lack of DX10 integration (although it can be argued it's premature and has been fake in other games), as well as things like AA being 'notably absent' in UE3. Obviously the platform compatibility challenges are keeping them more than busy, so extra features like these are out of reach. I guess that is the tradeoff; I'm sure the games are fun though, so there's not too much to complain about. Technology is going to level off a bit for a while, apparently. I'm sure owners of older value HW can be glad about all this.

    Real soon, if not already, we will have more power on the PC than will be used for some time to come, with one exception... This year it will be Crytek pushing the envelope, and those GTS 320s (great for multiplatform games limited by consoles) are going to show that the 512MB of video memory that started appearing on cards years ago shouldn't have been a trend so easily ignored by PC gamers going for a 320.

    Otherwise, looking forward to many more of these for UE3 and of course Crysis.
  • MadBoris - Saturday, October 13, 2007 - link

    I forgot to mention things like x64 adoption as another thing that won't be happening in games for years, due to the limited memory in consoles setting the bar. Crytek cannot move the industry along by itself; in fact, it may even get a black eye for doing what Epic, id, and Valve used to do but don't anymore. Many despised the gaming industry's ever-rising hardware demands, but the advances we have today are due to them; things like multicore CPUs and the GPUs we have now were all moved along faster by the gaming industry.

    Memory is inexpensive, and imagine if we could move up to 3 or 4 GB in games over the coming couple of years: game worlds could become quite large and full, and level loading wouldn't be nearly as frequent or interrupting. But alas, x64 will be another thing that won't be utilized in the years to come. With the big boys like Epic, id, and Valve all changing their marketing strategies and focus, it appears things will never be the same, at least with the consoles' five-year leaps now dictating terms. It's even doubtful that next-gen consoles will use x64, because they won't spend money to add more memory and therefore have no need for it. 32-bit: how do we get out of it when MS won't draw the line?
    Sorry if this is deemed a bit off topic, but being a tech article about a game, it kind of got my brain thinking about these things.
  • munky - Saturday, October 13, 2007 - link

    What's the point of running your CPU benches at 1024 resolution? We already know Intel will have the performance lead, but these benches say nothing about how much difference the CPU makes at resolutions people actually use, like 1280, 1600, and 1920.
  • Final Hamlet - Saturday, October 13, 2007 - link

    ...your GPU world starts with an ATI 2900 XT and ends with an NVIDIA 8800 Ultra. Mine does not. I really would like to see common GPUs (take a look at the Valve Hardware Survey... how much of that ridiculously high-end hardware do you see there?). Please include the ATI 1xxx and the NVIDIA 7xxx series at some _realistic_ resolutions (sorry... maybe you have a home theater with 10,000x5,000, but testing that has no relevance to 99%+ of your readers, so why do you keep producing articles for less than 1% of PC players?)
  • archcommus - Saturday, October 13, 2007 - link

    I'm running an Athlon 64 3200+, 1 GB of memory, and an X800 XL, and the game plays beautifully smooth (not a single hiccup even during action) at 1280x1024 with MAX EVERYTHING - by max everything I mean very high texture detail, high everything else, 6x AA, 16x AF. So if you're running a regular LCD resolution (not widescreen) you basically don't need benchmarks at all - if you built your system anytime within the last 2-3 years (up to two generations ago), it's going to run fine even with max settings. Thus the benchmarks are tailored to people running much higher resolutions, and because of that need higher-end hardware.

    Considering the game still looks great with the exception of some more advanced lighting techniques that new games have, I'm very impressed with what Valve has done.
