Call of Duty 4: Modern Warfare Performance

Version: 1.4

Settings: All Highest Quality

For this benchmark, we use FRAPS to measure the average frame rate during the game's opening cut-scene. We start FRAPS as soon as the screen clears in the helicopter and stop it right as the captain grabs his headgear. As we saw in the preview, this game does scale beyond two GPUs, and our tests here show some very interesting results.
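As a rough illustration (this is our sketch, not anything FRAPS publishes), an average frame rate over a capture window is simply frames rendered divided by elapsed time. Note that this is not the same as averaging per-frame FPS values, which would overweight the fastest frames:

```python
def average_fps(frame_times_ms):
    """Average frame rate for a capture, given per-frame render times in ms."""
    total_seconds = sum(frame_times_ms) / 1000.0
    return len(frame_times_ms) / total_seconds

# Example: three 10 ms frames plus one 40 ms stutter frame.
# 4 frames in 0.07 s works out to roughly 57.1 fps, well below the
# 100 fps you would get by naively averaging instantaneous FPS values.
print(round(average_fps([10.0, 10.0, 10.0, 40.0]), 1))
```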

Call of Duty 4 Multi-GPU Scaling over Resolution


Unlike Oblivion, Call of Duty 4 scales well with three or four GPUs at every resolution we tested. We did enable the in-game option to support dual graphics cards, and it is clear that when developers put some effort into explicitly supporting multi-GPU configurations, good results can be achieved. This is also interesting given that Call of Duty 4's performance falls off more gently with increasing resolution than that of other titles we tested.

Call of Duty 4 Performance


Call of Duty 4 Performance (average frames per second)
                                 1280x1024   1600x1200   1920x1200   2560x1600
NVIDIA GeForce 9600 GT SLI           127.7       116.0       104.3        78.8
NVIDIA GeForce 8800 Ultra SLI        145.8       134.1       127.4       102.8
NVIDIA GeForce 8800 Ultra             78.3        75.7        70.0        57.5
NVIDIA GeForce 9600 GT                66.5        59.4        55.0        40.8
AMD Radeon HD 3870 X2 (x2)            89.6        86.3        82.6        74.8
AMD Radeon HD 3870 X2 + 3870          78.2        74.8        72.9        64.5
AMD Radeon HD 3870 X2                 61.5        56.0        53.8        47.5
AMD Radeon HD 3870                    46.4        41.3        38.2        29.6
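A quick way to quantify the scaling is to divide each multi-card result by its single-card counterpart. This sketch uses the 1280x1024 figures from the table above (the card labels in the dictionaries are our shorthand):

```python
# Single-card and multi-card results at 1280x1024, in fps, from the table above.
single = {"9600 GT": 66.5, "8800 Ultra": 78.3, "3870": 46.4}
multi = {"9600 GT SLI": 127.7, "8800 Ultra SLI": 145.8, "3870 X2 (x2)": 89.6}

# Dual-GPU SLI scaling relative to one card: close to the 2.0x ideal.
print(round(multi["9600 GT SLI"] / single["9600 GT"], 2))        # ~1.92x
print(round(multi["8800 Ultra SLI"] / single["8800 Ultra"], 2))  # ~1.86x

# Quad-GPU CrossFireX relative to a single 3870: four GPUs for under 2x.
print(round(multi["3870 X2 (x2)"] / single["3870"], 2))          # ~1.93x
```

The arithmetic makes the article's point concrete: two NVIDIA GPUs in SLI deliver roughly the same scaling factor as four AMD GPUs do over a single 3870.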


Although four GPUs scale well on AMD hardware, NVIDIA's 8800 Ultra cards in SLI handily outperform the quad-GPU solution, as does 9600 GT SLI up to 2560x1600. Call of Duty 4 has certainly favored NVIDIA hardware: a single 8800 Ultra keeps up with three 3870 GPUs in CrossFireX (the 3870 X2 + 3870 configuration), and a single 9600 GT performs on par with the 3870 X2.

36 Comments

  • DerekWilson - Saturday, March 8, 2008 - link

    that is key ... as is what ViRGE said above.

    in addition, people who want to run 4 GPUs in a system are not going to be the average gamer. this technology does not offer the return on investment anyone with a midrange system would want. people who want to make use of this will also want to eliminate any other bottlenecks to get the most out of it in their systems.

    not only does skulltrail help us eliminate bottlenecks and look at the potential of the graphics subsystem, in this case i would even make the argument that the system is a good match for the technology.
  • Sind - Saturday, March 8, 2008 - link

    I agree, I don't think Skulltrail is doing anyone favours in judging how these MGPU solutions would perform in the "average" system that an Anand reader would be using. X38 seems very popular, as is 780i; I really don't think even 1% of your traffic would ever use the system you used to do this review. I've read the other CrossfireX reviews from around the net, and most had no problems at all, and in fact most noted that it worked straight out without the lengthy steps described in the article to get it working.
  • ViRGE - Saturday, March 8, 2008 - link

    Something very, very important to keep in mind is that Skulltrail is the only board out right now that supports Crossfire and SLI. If AT wants to benchmark both technologies without switching the boards and compromising the results, this is the only board they can use.
  • Cookie Monster - Saturday, March 8, 2008 - link

    No 8800Ultra or GTX Tri-SLI for comparison?
  • DerekWilson - Saturday, March 8, 2008 - link

    we were looking at 2 card configurations here ... i'll check out three and four card configs later
  • JarredWalton - Saturday, March 8, 2008 - link

    Unfortunately, Tri-SLI requires a 780i motherboard. That's fine for Tri-SLI, but CrossFire (and CrossFireX) won't work on 780i AFAIK. I also think Skulltrail may have its own set of issues that prevent things from working optimally - but that's conjecture rather than actual testing. Derek and Anand have Skulltrail; I don't.
  • Slash3 - Saturday, March 8, 2008 - link

    ...graphs are both using the same image. The Oblivion Performance and 4xAA/16AF Performance line graphs (oblivionscale.png) are just duplicates and link to the same file. :)
  • JarredWalton - Saturday, March 8, 2008 - link

    Fixed, thanks.
  • slashbinslashbash - Saturday, March 8, 2008 - link

    Graphics really is unusual in the computing world in that it is easily parallelized. While we're pretty quickly reaching a point of diminishing returns in number of cores in a general-purpose CPU (8 is more than enough for any current desktop type of usage), the same point has not been reached for graphics. That is why we continue to see increasing numbers of pipelines in individual GPUs, and why we continue to see effective scaling to multiple cards and multiple GPUs per card. As long as there is memory bandwidth to support the GPU power, the GPU looks like it is capable of taking advantage of much more parallelization. I expect 1000+ pipes on a 2-billion-transistor+ GPU by 2011.

    So, I expect multi-GPU to remain with us, but any high-end multi-GPU setup will always be surpassed by a single-GPU solution within a generation or two.
  • DerekWilson - Saturday, March 8, 2008 - link

    that's not the issue ... graphics is infinitely parallelizable ...

    the problems are die size and power.

    beyond a certain die size there is a huge drop off in the amount of money an IHV can make on their silicon. despite the fact that every chip could have been made larger, we are working with engineers, not scientists -- they have a budget.

    multiGPU allows IHVs to improve performance nearly linearly in some cases without the non-linear increase in cost they would see from (nearly) doubling the size of their GPU.

    ...

    then there is power. as dies shrink and we can fit more into a smaller space, will GPU makers still be able to make chips as big as R600 was? power density goes way up as die size goes down. power requirements are already crazy and it could get very difficult to properly dissipate the heat from a chip with a small enough surface area and a huge enough power output ...

    but spreading the heat out over two less powerful cards would help handle that.

    ...

    in short, multigpu isn't about performance ... it's about engineering, flexibility and profitability. we could always get better performance improvement from a single GPU if it could be built to match the specs of a multiGPU config.
