What about Crysis on Very High?

So we tested Crysis at Very High settings with both our single 9800 GX2 and our Quad SLI setup. Here’s what we got:




As we can see, with Very High quality, performance starts to diverge between the dual and quad GPU NVIDIA solutions. It’s also interesting to note that performance doesn’t drop a great deal when moving up in quality. This indicates that the 9800 GX2 is still system bound in some way. And oddly enough, it looks like it is more system bound at the higher quality setting.

To test this absurd theory, we decided to see what happened when we overclocked our eight cores (two quad-core CPUs) from 3.2GHz to 4.0GHz. Let’s test the theory that we are CPU bound to about 45-50 fps at High quality and 40 fps at Very High quality. Here is what our 25% overclock netted us when we tested with both Very High and High quality using the 9800 GX2 in Quad SLI.


I’ll give you all a second to pick your jaws up off the floor....

Ok, break’s over. We see more than 50% scaling with CPU clock speed: our 25% clock speed increase netted us more than half that (over 12.5%) in real performance under Very High quality settings. We saw less than 50% scaling at lower resolutions with High quality plus Very High shaders, until we hit 1920x1200, where the 25% increase in clock speed netted us a 15% gain.
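
To make that scaling figure concrete, here is a minimal sketch of the arithmetic; the frame rates in it are hypothetical placeholders for illustration, not numbers pulled from our charts.

```python
# Scaling efficiency: what fraction of a CPU clock speed increase
# actually shows up as a frame rate increase.
def scaling_efficiency(fps_before, fps_after, clock_before_ghz, clock_after_ghz):
    perf_gain = fps_after / fps_before - 1.0                # e.g. 0.15 for a 15% fps gain
    clock_gain = clock_after_ghz / clock_before_ghz - 1.0   # 0.25 for 3.2GHz -> 4.0GHz
    return perf_gain / clock_gain

# Hypothetical example only: a 25% overclock taking 40 fps to 46 fps
# works out to 60% scaling, i.e. well above the 50% mark discussed above.
print(scaling_efficiency(40.0, 46.0, 3.2, 4.0))  # 0.6
```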

This indicates that the higher the graphical quality, the MORE CPU bound we are. Crazy, isn’t it? It’s counter-intuitive, but pure fact. In speaking with NVIDIA about this (they have helped us a lot in understanding some of our issues here), the belief is that the more accurate, higher quality physics used at higher graphical quality settings is what causes this overhead. Also, keep in mind that we are testing in a timedemo with AI disabled.

And that’s not where it ends. We are platform bound as well. Yes, I said platform bound. A quick check on 780i returned these results:


Note that we are still CPU bound here even though performance is about 25% higher than on Skulltrail (we benefit even more from a platform change than from an overclock). This is with the same speed CPU. It’s quite unfortunate that we only stumbled across all of this last night, with the help of NVIDIA, while trying to troubleshoot our issues. Remember we said that NVIDIA expects a 60% improvement in Crysis at 19x12 with Very High Quality settings. The added number of cores, the fact that I’m only able to run two FB-DIMMs at the moment (for half of the system memory bandwidth I should have), the arrangement of the PCIe lanes on Skulltrail … all of this contributes to our system limitation here and our inability to see scaling from the 9800 GX2.
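
As a rough sanity check on the memory bandwidth point, here is a back-of-the-envelope sketch; it assumes DDR2-800 FB-DIMMs across the 5400 chipset's four channels, so treat the exact figures as an approximation rather than a spec sheet.

```python
# Rough peak bandwidth per FB-DIMM channel, assuming DDR2-800 modules.
bytes_per_transfer = 8        # 64-bit wide channel
transfers_per_sec = 800e6     # DDR2-800

per_channel_gbs = bytes_per_transfer * transfers_per_sec / 1e9
print(per_channel_gbs)        # 6.4 GB/s per channel
print(4 * per_channel_gbs)    # ~25.6 GB/s with all four channels populated
print(2 * per_channel_gbs)    # ~12.8 GB/s with only two DIMMs, i.e. half the bandwidth
```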

Now, we have seen better performance on Skulltrail in the past, so it is unclear whether there is something we can do to remove some of this system limitation at this point. We will certainly be exploring this further, as we would still like to make a single platform work for all of our graphics testing. If we can’t, then we’ll move on, but it is useful to discover whether this is an Intel issue, an OS issue, a driver issue, or something else. If it’s fixable, we need to find out how to fix it, as there are probably two or three people out there who’ve purchased a D5400XS board and will not be happy if it performs much worse than 780i boards in cases like this. My working theory right now is that I applied some hotfix or changed some seemingly benign OS setting that caused some problem somewhere. But like I said, we’ll track it down.


54 Comments


  • strikeback03 - Wednesday, March 26, 2008 - link

    you posted your comment 20 hrs ago. I don't see any comments posted by any Anandtech staff in this review since that time, so no guarantee they have actually read your comment.
  • gudodayn - Tuesday, March 25, 2008 - link

    < Layzer253 ~ No, they shouldnt. It works just fine >

    Given that the 9800x2 is a powerful card, more powerful than the 3870x2, such that the 9800x2 will run into CPU limitations..........

    Then in theory, when using the same platform (same CPU, RAM, etc.), shouldn't 9800x2 score at least within the ball park range of the 3870x2??

    It would just mean with a faster CPU, 9800x2 will have lots of room for improvement whereas the 3870x2 wouldn't!!

    But that's not what's happening here though, is it??

    For some reason, 3870x2 in Crossfire is scaling a lot better than 9800x2 in SLI ~ in a lot of the tests!!

    Either the SLI drivers are messed up or the 9800x2 can't run quad GPUs effectively.........
  • LemonJoose - Tuesday, March 25, 2008 - link

    I honestly don't know why the hardware vendors keep trying to push these high-end solutions that aren't stable and don't work the way they are supposed to. Exactly who is the market for $500 motherboards that require expensive RAM designed for servers, $1000 CPUs that will be outperformed by mainstream CPUs in a year or less, and $600 video cards that have stability issues and driver problems even when not run in SLI mode?

    I would really love it if AnandTech and other hardware sites would come out and give these products the 1 out of 10 or 2 out of 10 review scores that they deserve, and tell the hardware companies to spend more time developing solutions for real enthusiasts with mainstream pocketbooks, instead of wasting their engineering resources on these high-end solutions that nobody wants.
  • strikeback03 - Wednesday, March 26, 2008 - link

    Dunno if anyone has actually bought Skulltrail, but obviously people do buy $1000 CPUs and $500 video cards, as some GX2 owners have already posted in this thread, and some people previously bought the Ultra even when it was well over $500. Not something I would do, but there are obviously those with the time and money to play with this kind of thing.
  • B3an - Tuesday, March 25, 2008 - link

    To the writer of this article, or anyone else... i have this strange problem with the GX2.

    In the article you're getting 60+ FPS @ 2560x1200 res with 4xAA. Now if i do this i get nowhere near that frame rate. Without AA it's perfect, but with AA it completely kills it at that res.

    At first i was thinking that the 512MB of usable VRAM was not enough for 2560x1600 + AA, which is why i get a slide show with 4xAA at that res. But from these tests, you're not getting that.

    What backed up my theory for this is that with games with higher res/bigger textures, even 2xAA would kill frame rates. I'd go from 60+ FPS to literally single digit FPS just by turning on 2xAA @ 2560x1200. But with older games with lower res textures i can turn the AA up a lot higher before this happens.

    Does anyone know what the problem could be? Because i'd really like to run games at 2560x1600 with AA, but cannot at the moment.

    I'm on Vista SP1, with 4GB RAM, and a Quad @ 3.5GHz.
    I've also tried 3 different sets of drivers.
  • nubie - Tuesday, March 25, 2008 - link

    OK, it is about time that socketable GPUs are on the market. How about a mATX board with an edge-connected PCIe x32 slot? Make the video board ATX compliant (i.e., video board + mATX board = ATX compliant).

    Then we can finally cool these damn video cards, and maybe find a way of getting them power that doesn't get in the way.

    You purchase the proper daughterboard for your class (mainstream, enthusiast, overclocker/benchmarker), and it will come with the RAM slots (or onboard RAM) and proper voltage circuitry. Then you can change just the GPU when you need to upgrade.

    I know it would be hard to implement and confusing, but it would be less confusing than the current situation once we got used to it, and it would be a hell of a lot more sane.

    You could use a PCI-e extender to connect it to existing PCI-e mATX or full ATX motherboards.

    It is either this, or put the GPU socket on the damn motherboard already; it is 2008, it needs to happen. (If the video card was connected over HyperTransport III you wouldn't have any problem with bandwidth.) The next step really needs to be a general-purpose processor built into the video card, so that you aren't CPU/system bound like the current ridiculous setup. (The 790i chipset seems to be helping, with the ability to have the northbridge do batch commands across the GPUs, but at minimum we need to see some physics moving away from the CPU. General physics haven't changed since the universe started, so why waste a general-purpose CPU on them?)
  • nubie - Tuesday, March 25, 2008 - link

    Just to reiterate: why am I buying a new card every time when they just seem to be recycling the reference design? All GPU systems need the same things as a motherboard: clean voltage to the main processor and its memory, as well as a bus for the RAM and some I/O. The 7900, 8800, and 9600 are practically the same boards with the same memory bus; can't we have it socketed?
  • tkrushing - Tuesday, March 25, 2008 - link

    I am all for SLI/Crossfire or whatever you can afford to do, but why are we starting to lose focus on single-card users? I'm sure if I really wanted to sacrifice I could save up for a multi-GPU system, but a single G92 costs less than half the price for a relatively small performance hit in Crysis. And yes I love it, but I'm saying it: I just think Crysis is a poorly optimized game to some degree. Give us new, not reused, single-card solutions! (9 series)
  • tviceman - Tuesday, March 25, 2008 - link

    I have been thinking the same thing lately, but last week, after reading about Intel's plans to use one of its cores in CPUs as a graphics unit, I started thinking about the ramifications of this. I am willing to bet that Nvidia is trying to develop a hybrid CPU/GPU to compete on the same platform that Intel and AMD will eventually have. If this is true, then it's probably a reasonable explanation as to why there has been a severe lack of all-new GPUs since the launch of the 8xxx series a year ago.
  • dlasher - Tuesday, March 25, 2008 - link

    page 1, paragraph 2, line 1:

    Is:
    "...it’s not for the feint of heart.."

    Should be:
    "...it’s not for the faint of heart.."
