Call of Juarez

There has been quite a bit of controversy and drama surrounding the journey of Call of Juarez from DirectX 9 to DirectX 10. As many may remember, AMD handed out demos of the DirectX 10 version of Call of Juarez prior to the launch of R600. This build didn't fully support NVIDIA hardware, so many review sites opted not to test it. On its own, this is certainly fine and no cause for worry. It's only normal to expect a company to want to show off something cool running on their hardware even if it isn't as fully functional as the final product will be.

But after NVIDIA found out about this, they set out to help Techland bring the demo up to par and get it running properly on G80-based systems. Some publications were able to get an updated build of the game from Techland which included NVIDIA's fixes. When we requested the same from them, they declined to provide us with this updated code, citing the fact that they would be releasing a finalized benchmark in the near future. Again, this was fine with us and nothing out of the ordinary. We would have liked to get our hands on the NVIDIA update, but it's Techland's code and they can do what they want with it.

Fast forward to the release of the Call of Juarez benchmark we currently have for testing, and now we have a more interesting situation on our hands. Techland decided to implement something they call "HDR Correct" antialiasing. This feature is designed to properly blend polygon edges in cases with very high contrast due to HDR lighting. Using a straight average or even a "gamma corrected" blend of MSAA samples can result in artifacts in extreme cases when paired with HDR.
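
To make the problem concrete, here is a minimal numeric sketch of the issue. It is our own illustration, not Techland's code, and it assumes a simple Reinhard-style tone map purely for demonstration: two of four MSAA samples hit a very bright surface, two hit a dark one.

```cpp
// Hypothetical example: why averaging raw HDR samples can misblend an edge.
#include <cstdio>

// Simple Reinhard-style operator, used here only for illustration.
static float tonemap(float hdr) { return hdr / (1.0f + hdr); }

int main() {
    const float samples[4] = { 50.0f, 50.0f, 0.05f, 0.05f };  // bright/dark edge pixel

    // Straight average in linear HDR space, tone mapped afterwards:
    float avg = 0.0f;
    for (float s : samples) avg += s;
    avg /= 4.0f;
    printf("average, then tonemap: %.3f\n", tonemap(avg));   // ~0.96 -- edge blows out to near white

    // "HDR correct" blend: tone map each sample first, then average:
    float blended = 0.0f;
    for (float s : samples) blended += tonemap(s);
    blended /= 4.0f;
    printf("tonemap, then average: %.3f\n", blended);        // ~0.51 -- a properly blended edge
    return 0;
}
```

The straight average lets the two bright samples completely dominate the pixel, which is the kind of artifact Techland's resolve is meant to avoid.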



The real caveat here is that HDR correct AA requires a custom MSAA resolve. AMD hardware must always perform AA resolves in its shader hardware (the R/RV6xx line lacks dedicated MSAA resolve hardware in its render back ends), so this isn't a big deal for AMD. NVIDIA's dedicated MSAA resolve hardware, on the other hand, is bypassed: DX10's ability to read back individual MSAA samples is used to perform the custom resolve in the shaders instead. This incurs quite a large performance hit for what NVIDIA says is little to no image quality gain. Unfortunately, we are unable to compare the two methods ourselves, as we don't have the version of the benchmark that actually ran using NVIDIA's MSAA hardware.
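
For reference, the sketch below shows roughly the shape such a custom resolve pass takes. It is our own CPU-side approximation, not Techland's shader: in the real DX10 path the per-sample reads happen in a pixel shader, which is precisely why the dedicated resolve hardware never gets used.

```cpp
// Hypothetical CPU-side stand-in for a shader-based ("custom") MSAA resolve.
#include <cstdio>
#include <vector>

static const int kSamples = 4;                       // 4x MSAA

static float tonemap(float hdr) { return hdr / (1.0f + hdr); }

// For every pixel, read each sample individually and combine them with
// whatever blend the developer wants -- here, tone map before averaging.
static std::vector<float> customResolve(const std::vector<float>& msaaLum, int pixels) {
    std::vector<float> out(pixels);
    for (int p = 0; p < pixels; ++p) {
        float sum = 0.0f;
        for (int s = 0; s < kSamples; ++s)
            sum += tonemap(msaaLum[p * kSamples + s]);
        out[p] = sum / kSamples;
    }
    return out;
}

int main() {
    // Two pixels: a bright/dark edge and a uniform dark interior.
    std::vector<float> msaaLum = { 50.0f, 50.0f, 0.05f, 0.05f,
                                   0.05f, 0.05f, 0.05f, 0.05f };
    std::vector<float> resolved = customResolve(msaaLum, 2);
    printf("edge: %.3f  interior: %.3f\n", resolved[0], resolved[1]);
    return 0;
}
```

A fixed-function resolve can only apply its built-in blend; moving the loop into the shaders is what makes arbitrary blends like this possible, and it is also where the extra cost on G80 comes from.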

NVIDIA also tells us that some code was altered in Call of Juarez's parallax occlusion mapping shader that does nothing but degrade the performance of this shader on NVIDIA hardware. Again, we are unable to verify this claim ourselves. There are also other minor changes that NVIDIA feels unnecessarily paint AMD hardware in a better light than the previous version of the benchmark.

But Techland's response to all of this is that game developers are the ones who have the final say in what happens with their code. This is definitely a good thing, and we generally expect developers to want to deliver the best experience possible to their users. We certainly can't argue with this sentiment. But whether or not anything is going on under the surface, it's very clear that Techland and NVIDIA are having some relationship issues.

No matter what's really going on, it's better for the gamer if hardware designers and software developers are all able to work closely together to design high quality games that deliver a consistent experience to the end user. We want to see all of this as just an unfortunate series of miscommunications. And no matter what the reason, we are here today with what Techland has given us. The performance of their code as it is written is the only thing that really matters, as that is what gamers will experience. We will leave all other speculation in the hands of the reader.

So, what are the important DirectX 10 features that this benchmark uses? We see geometry shaders to simulate water particle effects, alpha-to-coverage for smooth leaf and grass edges, and custom MSAA resolve for HDR correct AA.
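
As a rough idea of what alpha-to-coverage does for the foliage, here is a simplified sketch of our own (not the game's code): a fragment's alpha is converted into an MSAA coverage mask instead of a hard alpha-test pass/fail, and the ordinary AA resolve then blends leaf and grass edges smoothly. Real hardware typically also dithers the mask between neighboring pixels, which this ignores.

```cpp
// Hypothetical illustration of alpha-to-coverage with 4x MSAA.
#include <cstdio>

// Map a fragment's alpha to the number of covered samples (simple rounding).
static unsigned alphaToCoverage4x(float alpha) {
    int covered = (int)(alpha * 4.0f + 0.5f);
    if (covered < 0) covered = 0;
    if (covered > 4) covered = 4;
    return (1u << covered) - 1u;                 // e.g. alpha ~0.5 -> mask 0b0011
}

int main() {
    // A semi-transparent leaf texel (alpha 0.55) covers 2 of 4 samples, so the
    // resolved pixel ends up roughly half leaf, half background -- a soft edge
    // instead of the hard stair-step a plain alpha test would give.
    unsigned mask = alphaToCoverage4x(0.55f);
    int covered = 0;
    for (int i = 0; i < 4; ++i) covered += (mask >> i) & 1u;
    printf("coverage mask: 0x%X (%d of 4 samples)\n", mask, covered);
    return 0;
}
```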

[Graph: Call of Juarez Performance]

The AMD Radeon HD 2900 XT clearly outperforms the GeForce 8800 GTS here. At the low end, none of our cards deliver playable performance at any of the options the Call of Juarez benchmark presents. While all the numbers shown here are with large shadow maps and high quality shadows, even without these features the 2400 XT only posted about 10 fps at 1024x768. We didn't bother to test it against the rest of our cards because it just couldn't stack up.

[Graph: Call of Juarez 4xAA Performance]

With 4xAA enabled, our low-end NVIDIA hardware really tanks. Remember that even these cards must resolve all MSAA samples in their shader hardware. AMD's parts are designed to always handle AA in this manner, but NVIDIA's parts only support the feature inasmuch as DX10 requires it.

We do see some strange numbers from the low-end NVIDIA cards at 1600x1200, but it's likely that they performed so poorly here that rendering of certain aspects of the scene failed outright, artificially improving performance (in other words, it's likely not everything was rendered properly even though we didn't notice anything amiss).

59 Comments

  • jay401 - Friday, July 6, 2007 - link

    They're right though, the charts need work. They are not intuitive and there are multiple better ways to present 'percent change' data that would make sense on first glance without the reader having to decipher an unintuitive method that is contrary to the readability of the article.
  • DerekWilson - Friday, July 6, 2007 - link

    The dx9 vs dx10 scaling graphs have been altered to present the data in a different way.

    Please let me know if this is still not adequate.
  • Andyvan - Thursday, July 5, 2007 - link

    I had the exact same reaction to the charts. For the Lost Planet chart with the two colors, either pick better (more standard) colors, or make the performance drop bars grow to the left (or down), and the performance increase bars grow to the right (or up).

    -- Andyvan
  • PrinceGaz - Thursday, July 5, 2007 - link

    The best thing to do with those charts is change them to show relative performance in DX10 compared to DX9, with 100% meaning no change (same performance in DX10 as DX9). Improvements with DX10 give scores above 100%, reduced performance gives a result below 100%.

    Doing that would make the graphs much easier to understand than the current mess.
  • sterlinglittle - Thursday, July 5, 2007 - link

    This might be a silly question as I can't recall the current status of MultiGPU performance with Vista drivers. Will it be possible to test these games with SLI/CrossFire configurations soon?
  • gigahertz20 - Thursday, July 5, 2007 - link

    The results show exactly why I am waiting to buy a DX10 video card, all these people who rushed out to buy a Geforce 8800GTX or AMD 2900XT..hah..especially all those 2900XT fanboys who said the R600 would destroy the 8800GTX in DX10 benchmarks because it has 320 stream processors and a 512-bit memory interface....well guess what, the benchmarks are in and they show the R600 is still the power hungry POS video card it is.
  • KeithTalent - Thursday, July 5, 2007 - link

    I'm not sure how it is a 'hah' to the people that purchased these cards as they still blow everything else out of the water in DX9, I mean it is not even close.

    So for those of us running at higher resolutions (1920x1200 or higher), an 8800/2900 or two made perfect sense (and still does). I doubt very many people were expecting great DX10 performance right away anyway, particularly as the games available barely make use of it.

    KT
  • Sceptor - Thursday, July 5, 2007 - link

    I agree with your idea, I've always skipped over one generation of hardware to another.

    Especially when users are still "testing" Vista gaming for Microsoft, Nvidia and AMD I see no need to part with my money until performance is at least on par with DX9.

    Good article...Nice to see some real numbers on DX10 vs DX9
  • DerekWilson - Thursday, July 5, 2007 - link

    there are applications where the 2900 xt does outperform its competition, as is shown by call of juarez.

    it really depends on how developers go forward. we'll have to wait and see what happens. judging the future of AMD/NVIDIA competition under dx10 isn't really feasible with only 3 apps to go by.

    one thing's for sure though, we'd love to see better performance out of the mainstream parts from both camps. And having some parts to fill in the gap between the lower and higher end hardware would be nice too.
  • defter - Thursday, July 5, 2007 - link

    I think the point here is that many claimed that "R6xx is designed for DX10, don't judge it based on DX9 performance blah blah blah". Those claims gave the impression that the relative DX10 performance of the R6xx series would be much better than its DX9 performance.

    Your tests show that on average, R6xx takes a HIGHER performance hit from moving to DX10. Thus, under DX10 R6xx is even SLOWER than it was under DX9.
