Half-Life 2: Episode One Performance

Half-Life 2: Episode One is the most recent episode of Valve's continuation of the Half-Life 2 storyline. This saga stands out to us as having one of the most interesting storylines and casts of characters of just about any game, with possibly the best voice acting as well. Not only does Episode One expand on Half-Life 2 with more original gameplay elements, but the developers also gave the game a graphical upgrade with this installment. HDR and bloom effects give the engine a needed face-lift while preserving the subtlety of the Source engine. We're eagerly awaiting the release of Episode Two to see what direction Valve takes the series next.

To test this game, we use the built-in timedemo feature. Our test consists of a section of the game during a rocket launcher battle against a Combine gunship, with lots of explosions and boards shattering into shards, making good use of Half-Life 2's Havok physics engine. We set the quality settings to "high" where possible, enable "reflect all" (which usually isn't on by default), and set anisotropic filtering to 8x. We look for an average frame rate of about 30 or higher for an enjoyable experience.

Half-Life 2: Episode One


The original Half-Life 2 engine broke a lot of ground in its design, and it still shapes the way games are made today. One of the great things about the Half-Life 2 engine was that even though it looked great, it still delivered very good frame rates across a very wide range of cards, even on the lower end. The newer features of the Episode One engine impose a bit more of a performance hit, but it still achieves better frame rates than many competing games.

Comments

  • guidryp - Thursday, November 2, 2006 - link

    They spec like this:

    X1650 XT: 8 vertex pipes, 24 pixel pipes, 8 raster pipes, 575MHz core, 675MHz memory x 128-bit bus.
    7600 GT: 5 vertex pipes, 12 pixel pipes, 8 raster pipes, 560MHz core, 700MHz memory x 128-bit bus.

    And the ATI card barely holds its own? I was expecting a walkaway after reading the specs.
  • coldpower27 - Saturday, November 4, 2006 - link

    It should be 24 pixel shaders vs 12 pixel shaders.

    While both have 8 ROPs, it is probably that the X1650 XT has only 8 TMUs while the 7600 GT has 12, as both are half their flagship derivatives. Ignore the vertex amounts; those typically aren't halved and don't seem to contribute much to performance for the most part anyway.

    X1900 XTX: 48 pixel shaders, 16 ROPs, 650MHz core, 775MHz memory x 256-bit bus
    7900 GTX: 24 pixel shaders, 16 ROPs, 650MHz core, 800MHz memory x 256-bit bus

    The X1900 XTX doesn't walk away from the 7900 GTX on the whole either.
  • trinibwoy - Wednesday, November 1, 2006 - link

    Do you guys do other testing that you comment on that is not represented by the graphs? The numbers show a 1 fps difference, yet you use terms like "significant" and "clearly beats". Maybe some median low fps numbers would help demonstrate what you're saying.

    quote:

    An interesting thing about Oblivion is that it favors ATI hardware over NVIDIA, and this is evident here when we look at the X1650 XT compared with the 7600 GT. In this case, the X1650 XT has a small but significant performance lead over the 7600 GT. Because of this, the X1650 XT is more likely to be playable at 1024x768 than the 7600 GT. This is one case where the X1650 XT clearly beats the 7600 GT just in terms of performance. Oblivion players may want to consider this card once it's available, but only assuming the price is reasonable.
  • soydeedo - Monday, October 30, 2006 - link

    hey i know i can make an educated guess as to where the x1650xt would end up on q4 benches compared to nvidia's offerings, but i'm still curious why this game was not included in the testing? with quakewars around the corner i think people are still interested in doom 3 engine performance.
  • johnsonx - Monday, October 30, 2006 - link

    I suppose this name is part of ATI's general naming trend lately. It used to be that the XT moniker meant the same GPU with slightly higher clocks; now the XT parts seem to be a separate family. The X1300 XT has nothing to do with the other X1300s (rather, it's a rebadged X1600 Pro), the X1900 XT has more pipes than non-XT members of the X1900 family, and now the X1650 XT has nothing to do with the rest of the X1600/X1650 family.

    It all makes it a bit hard to choose.
  • Kougar - Monday, October 30, 2006 - link

    On page 10 it is mentioned that

    quote:

    Something that jumps out at us here is that the X1650 XT got slightly better performance than the 7600 GT in both of these games with 4xAA enabled. Without AA enabled, the 7600 GT did better than the X1650 XT in these games. The amount of difference between the performance of both of these cards is about the same with and without AA.


    This completely contradicts the bar graphs, specifically the HL2 Episode One graph. The X1650 XT got up and began walking away from the 7600 GT without AA, but with AA it tripped and slid into place just behind the 7600 GT. At resolutions below 1600x1200 it even began losing by a sizeable margin.
  • Josh Venning - Monday, October 30, 2006 - link

    This paragraph has been tweaked a little bit. In HL2 Episode One the X1650 XT only does better than the 7600 GT at the highest resolution with AA enabled, but in Battlefield 2 it performs a little better across most of the resolutions.
  • Cybercat - Monday, October 30, 2006 - link

    How many vertex units does this thing have?
  • JarredWalton - Monday, October 30, 2006 - link

    See (updated) table on page 2: it has 8 vertex units, 24 pixel pipes, and 8 ROPs. Basically, lots more power than the X1600 XT. I would guess the pixel pipes are more like R580 pipes (i.e. more shader power, but not necessarily the same as an NVIDIA pixel pipeline in raw power).
  • Cybercat - Monday, October 30, 2006 - link

    Alright, cool. I keep a chart with stats of graphics cards, so I'm just making sure I have the vertex throughput correct. Other than the useless X1650 Pro, ATI seems to have a much more competitive mainstream line now. There is now more confusion than ever, though.
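As a rough sanity check on the spec comparisons in the comments above, theoretical fill rate and memory bandwidth can be computed directly from the quoted numbers. This is only a sketch using the commenters' figures; it ignores TMU counts, shader efficiency, and driver quality, all of which shape real-world results.

```python
# Rough theoretical throughput from the specs quoted in the comments above.
# These are the commenters' numbers, not official datasheet values.

def throughput(name, rops, pixel_shaders, core_mhz, mem_mhz, bus_bits):
    pixel_fill = rops * core_mhz / 1000            # Gpixels/s (ROP-limited fill rate)
    shader_rate = pixel_shaders * core_mhz / 1000  # pipe-clock product, a crude shader-power proxy
    bandwidth = mem_mhz * 2 * bus_bits / 8 / 1000  # GB/s (DDR moves 2 transfers per clock)
    return name, pixel_fill, shader_rate, bandwidth

cards = [
    throughput("X1650 XT", 8, 24, 575, 675, 128),
    throughput("7600 GT", 8, 12, 560, 700, 128),
]
for name, fill, shader, bw in cards:
    print(f"{name}: {fill:.2f} Gpix/s fill, {shader:.2f} shader units*GHz, {bw:.1f} GB/s")
```

Both cards land at nearly identical fill rate (about 4.6 vs 4.5 Gpixels/s) and memory bandwidth (21.6 vs 22.4 GB/s), which helps explain why the X1650 XT's large pixel-shader advantage doesn't translate into a walkaway in most games.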
