Initial Rage Performance Investigation

All of the technical information is fine, but we still come back to the question of performance. Earlier we showed the Low/High/Custom screenshots, and while the Low mode is definitely blurrier, it should at least come with higher performance, right? Perhaps on some hardware configurations that would be true, but on my particular system the benchmark results are trivially simple: put 60FPS across the chart and you’re done. Yes, even at 2560x1600 with 8xAA, Rage hits the 60FPS barrier and refuses to go any faster. That’s not particularly useful for benchmarking, but then I’m running a GTX 580, currently the fastest single-GPU solution on the market. I decided to test some lower-end setups as well, so I grabbed the ASUS G74SX I recently reviewed and ran it through a similar set of tests, and I also put together a second system similar to my gaming rig but with an HD 6950 2GB.

Performance on the G74SX does fluctuate a bit (particularly if I enable VSYNC), but there’s still some obvious image quality scaling going on. (You see textures popping in at times, though mostly only when you first enter a level.) At the Low settings and 1080p, performance checks in at 58FPS, just shy of the 60FPS that VSYNC would target, and bumping the in-game settings up to Large/High barely changed that: it dropped to 57FPS, which is within the margin of error for FRAPS. Using the custom configuration likewise didn’t affect performance. In short, the hardware in the G74SX runs very close to the target 60FPS regardless of what settings I select; even enabling 8xAA doesn’t change performance much, although at 4xAA the notebook does lock in at a solid 60FPS. The only way to get performance to tank is to enable 16xAA, which drops the frame rate to 24FPS. All of this is at 1080p, however, as I don’t have a way to test higher resolutions on the notebook.

So that’s two systems down with no interesting performance information unearthed. What about AMD? I’ve got an HD 6950 2GB paired with an overclocked i7-920 (3.3GHz) and 12GB RAM, very similar to my main gaming system. I connected that to a 30” display and tested at 1080p and 2560x1600. AMD’s hardware is apparently limited to 8xAA, and performance is lower than on the GTX 580 at these punishing settings, but the Low and High presets still fail to make a difference: the average frame rate at 2560x1600 with 8xAA is around 42FPS, dropping to 4xAA brings the HD 6950 back up to the 60FPS cap, and 1080p with 8xAA likewise hits 60FPS.

The initial release of Rage had a configuration option, com_syncToTime, that you could set to -1 to remove the frame rate cap, but doing so would apparently speed up everything (at 120FPS, for example, everything would move twice as fast). I never tested this before the Steam update came out, and post-patch I can’t get com_syncToTime to work. So far I’ve tried putting “cvaradd com_synctotime -1;” in my rageConfig.cfg file, which did nothing; I’ve also tried the same command from the in-game console, and as a command-line argument. If anyone has input, send it along and I’ll be happy to try again. Until we can get rid of the 60FPS cap and the dynamic image quality, however, benchmarking Rage is essentially meaningless.
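For reference, here's a minimal sketch of the three approaches I tried. The cvaradd line itself is exactly what went into the config file; the launch-option form assumes the usual id-engine convention of prefixing console commands with a plus sign, which is an assumption on my part rather than something confirmed to work with the patched game:

    // In rageConfig.cfg (config files are executed as console commands); this did nothing post-patch:
    cvaradd com_synctotime -1;

    // The same command entered at the in-game console (also no effect):
    cvaradd com_synctotime -1

    // Assumed "+command" launch-option syntax, e.g. via Steam launch options (still capped):
    +cvaradd com_synctotime -1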

Update #2: One reader suggested that our custom settings were mostly useless (ignored by the game), which is very likely the case, so we removed the link. He also suggested we force 16k textures, as that would both improve quality and potentially make the game a more useful benchmark. Unfortunately, neither actually worked out. 16k texturing did not noticeably improve image quality or reduce performance in most areas; the only time it improves quality is when the amount of texture data a scene needs exceeds what the 8k cache can hold, which only happens in large outdoor vistas. As a downside, setting the game to a 16k texture cache did make it more unstable, particularly at certain resolution/AA combinations. 2560x1600 with 8xAA and 16k textures crashes on a GTX 580, for example, and 2560x1600 with 4xAA and 16k textures can take 5-20 seconds to load level transitions (when the game doesn't crash). 2GB cards might (emphasis: might) fare better, but that doesn't change the fact that most areas show no difference between the 8k and 16k texture cache.
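For anyone who wants to experiment, here's roughly what forcing the larger texture cache looks like. The cvar names below are pulled from community tweak guides and are assumptions rather than anything id has documented, and they may not match exactly what we used above; 8192 corresponds to the 8k cache discussed in the comparisons, and 16384 to the 16k cache that introduced the instability:

    // Hypothetical rageConfig.cfg snippet for a 16k virtual texture cache (unverified cvar names):
    vt_pageimagesizeuniquediffuseonly2 16384
    vt_pageimagesizeuniquediffuseonly 16384
    vt_pageimagesizeunique 16384
    vt_pageimagesizevmtr 16384
    // Set these back to 8192 for the 8k cache the comparisons above use as the baseline.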

A second suggestion was that we force 16xAA to improve quality and put more of a load on the GPU, thus pushing the game below 60FPS and allowing more of a benchmark comparison. This misses the point that a game benchmark needs to run at meaningful settings; otherwise we can just go back to using 3DMark. The other problem is that once we move beyond 8xAA, antialiasing starts to degrade image quality by making everything blurry. AMD cards limit you to 8xAA, but you can force NVIDIA up to 32xAA through the config file. Here's another gallery of comparison shots, showing both the forced texture size and the antialiasing comparisons, all on the GTX 580. (The image corruption at 16xAA appears to be a problem with FRAPS, as we didn't notice the rendering errors in-game.)
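For completeness, here's the sort of config entry involved in forcing higher AA levels. The r_multiSamples name is an assumption carried over from earlier id engines rather than a setting documented for Rage, so your mileage may vary:

    // Assumed id-engine cvar for forcing the MSAA sample count (unverified for Rage):
    r_multiSamples 16    // values beyond 8 only appear to take effect on NVIDIA hardware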

AMD vs. NVIDIA Image Quality Comparison

There’s one final item we can quickly discuss, and that’s image quality comparisons between different hardware. I grabbed screenshots from five locations on the same three test systems, with all of them locking in at 60FPS at 1080p with 4xAA. I skipped the custom configuration file and just went with the in-game settings, testing each system at Small/Low and Large/High for Texture Cache and Anisotropic Filtering. Here’s a second gallery, this time with 1080p files. As before, I’ve saved all of the images as high-quality JPEGs, since uploading (and downloading) the 3MB originals takes quite a while, but if there’s a desire for the original PNGs I’ll see about uploading them as well.

Looking at the still images, there’s very little if any difference between most of the screenshots. In motion, however, I currently notice some quality differences on the AMD hardware, with a few areas showing flickering textures and some other minor distortions. Nothing is bad enough to make the game unplayable, but browsing the forums, it does sound as though more users are having issues with Rage on AMD hardware at present. Long-term, I expect all of this will get worked out, but at least for the next month or two you may encounter driver bugs or other issues with Rage. Caveat emptor.

Wrap Up

I’m not going to pretend to be a professional game reviewer, just like I’m not a movie critic. I’ve enjoyed both movies and games that have been panned by the critics; other games and movies that were rated highly have been seriously dull for me. What I can say is that I enjoyed Rage enough to finish the single-player campaign, and while the story isn’t great, the graphics are still quite impressive. When I look at the steady 60FPS, I can certainly understand id’s rationale: take the hardware configuration confusion out of the equation and just let people play your game at a silky smooth frame rate. The megatexturing is also quite nice, as the lack of repeated textures is definitely noticeable. What really impresses me is id's ability to make a game look this good and still pull 60FPS at maximum quality settings. As good as that might be for gamers, however, in its current state it makes Rage essentially useless as a benchmark.

We tried to do a few comparisons between NVIDIA and AMD graphics solutions, but the game engine (and drivers) feel a bit too raw for that right now. In another month or two, AMD, NVIDIA, and id Software will likely have all the bugs worked out, and everyone can get around to playing the game as intended. Perhaps we’ll also get a usable benchmark mode where we can at least see how the graphics solutions compare. Until then, Rage remains an impressive technology demonstration and a game that some people will certainly enjoy; it’s just not a good benchmark for us right now.

Technical Discussion

  • cactusdog - Saturday, October 15, 2011

    What is it with games these days? It seems like nearly every game, even the highly anticipated ones, is broken, missing features, or seriously flawed when first released. It's like AMD and Nvidia don't give them any attention until they're released and people are complaining.

    It's like they don't care about PC gaming anymore.
  • Stas - Saturday, October 15, 2011

    Try Hard Reset. A PC exclusive of the highest quality. Don't apply if you don't enjoy a mix of Serious Sam, Painkiller, and Doom 3 :)
  • cmdrdredd - Saturday, October 15, 2011

    It's not AMD or Nvidia at all...blame the devs and bean counters at the big publishers for pushing for console releases.
  • AssBall - Saturday, October 15, 2011

    ^^ This
  • ckryan - Saturday, October 15, 2011

    Back when I was just knee-high to a bullfrog, I seem to remember having some technical difficulties with Ultima VII. In that case, it was certainly worth the aggravation. It's not a new phenomenon, but in fairness, games and the platforms they run on are much more complicated now. In between the time you spend with Microsoft for your Xbox version and Sony for your PS3 version, you have to find time to work with AMD/nVidia for your PC version. Technical issues are bad, but giving me a barely-warmed over console port is the worse sin. Sometimes the itchy rash of consolitis is minor, like some small UI issues (Press the Start Button!). Other times, it's deeply annoying, making me wonder if the developers have actually ever played a PC game before. Some games, like TES:Oblivion, have issues that can be corrected later by a dedicated mod community -- another important element for the identity of PC gaming.

    So while technical issues at launch for a game that's been in development for 3 to 5 years are inexcusable, there are some issues that are worse. Inane plots, bad mechanics, and me-too gameplay styles are making me question how committed some companies are to making PC gaming worthwhile (for consumers and therefore the industry as well). The PC is far and away the finest gaming platform, but some developers really need to try harder. Maybe PC gamers just expect more, but I say that we're a forgiving bunch -- but you have to bring your (triple) A-game.
  • JonnyDough - Monday, October 17, 2011

    ^^ THIS!
  • Snowstorms - Saturday, October 15, 2011

    consoles have drastically influenced PC optimization

    it has gotten so bad that I've started checking whether a game is a PC exclusive; if it's not, I'm already on my toes
  • brucek2 - Saturday, October 15, 2011

    id claims the problems were mostly driver incompatibilities, which is a story we've all heard before.

    I wish publishers would include the drivers they qualified their games on as an optional part of their release. In an ideal situation the current and/or beta drivers already on your system would be fine and you wouldn't need them. But in the all too common screwed up situation, you could at least install those particular drivers for this particular game and know it was supposed to work.
  • taltamir - Saturday, October 15, 2011

    Why is it nvidia/AMD's job to fix games?
    they do it, but they shouldn't need to. The drivers are out, they exist, and they're spec'd.

    And every single indie developer manages to make games that work without a driver update. Meanwhile, major AAA titles are often released buggy, and then nVidia and AMD scramble to hack the drivers so that they identify the EXE and fix it. For example, both companies gut the AA process included in most games and force it to use a proper, modern AA solution for better quality and performance. (MSAA has been obsolete for years.)
  • JarredWalton - Saturday, October 15, 2011

    Most major AA titles have the developers in contact with NVIDIA and AMD, and they tell the GPU guys what stuff isn't working well and needs fixing (although anecdotally, I hear NVIDIA is much better at responding to requests). This is particularly true when a game does something "new", i.e. an up-to-date OpenGL release when we really haven't seen much from that arena in a while. I agree with the above post that says the devs should state what driver version they used for testing, but I wouldn't be surprised if that's covered by an NDA with the GPU companies.
