Initial Rage Performance Investigation

All of the technical information is fine, but we still come back to the question of performance. Earlier we showed the Low/High/Custom screenshots, and while the Low mode is definitely blurrier, it should at least come with higher performance, right? Perhaps on some hardware configurations that would be true, but on my particular system the benchmark results are easy to summarize: put 60FPS across the chart and you're done. Yes, even at 2560x1600 with 8xAA, Rage hits the 60FPS barrier and refuses to go any faster. That's not particularly useful, but then I'm running a GTX 580, currently the fastest single-GPU solution on the market. To see how a much lower end setup fares, I grabbed the ASUS G74SX I recently reviewed and ran it through a similar set of tests, and I also used a second system similar to my gaming rig but with an HD 6950 2GB.

Performance on the G74SX does fluctuate a bit (particularly if I enable VSYNC), but there's still some obvious image quality scaling going on. (You see textures popping in at times, but mostly only when you first enter a level.) At the Low settings and 1080p, performance checks in at 58FPS, just shy of the 60FPS that VSYNC would want, and bumping the in-game settings up to Large/High didn't change much: it dropped to 57FPS, which is within the margin of error for FRAPS. Using the custom configuration likewise didn't affect performance. In short, the hardware in the G74SX runs very close to the target 60FPS regardless of what settings I select; even enabling 8xAA doesn't change performance much, although at 4xAA the notebook does lock in at a solid 60FPS. The only way to get performance to tank is to enable 16xAA, which drops frame rates to 24FPS. All of this is at 1080p, however, as I don't have a way to test higher resolutions on a notebook.

So that's two systems down with no interesting performance information unearthed. What about AMD? I've got an HD 6950 2GB with an overclocked i7-920 (3.3GHz) and 12GB RAM, very similar to my main gaming system. I connected that to a 30” display and tested at 1080p and 2560x1600. AMD's hardware is apparently limited to 8xAA, and performance is lower than on the GTX 580 at these punishing settings. Low/High settings, however, still fail to make a difference, with the average frame rate at 2560x1600 with 8xAA coming in around 42FPS; dropping to 4xAA brings the HD 6950 back to the 60FPS cap, and likewise 1080p at 8xAA hits 60FPS.

The initial release of Rage had a configuration option “com_syncToTime” that you could set to -1 to remove the frame rate cap, but doing so would apparently speed up everything (e.g. so at 120FPS everything would move twice as fast). I never tested this before the Steam update came out, and post-patch I can’t get com_syncToTime to work. So far, I’ve tried “cvaradd com_synctotime -1;” in my rageConfig.cfg file, which did nothing. I’ve also tried using the same command from the in-game console, and I’ve tried it as a command-line argument. If anyone has input, though, send it along and I’ll be happy to try again. Until we can get rid of the 60FPS cap and dynamic image quality, however, benchmarking Rage is essentially meaningless.
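
For reference, here are the variants I tried, none of which had any effect post-patch. The first two are exactly what's described above; the launch-option form is my best guess at the usual id-style "+command" syntax rather than anything documented:

    // 1) Added to rageConfig.cfg (no effect after the Steam update):
    cvaradd com_synctotime -1;
    // 2) Entered at the in-game console (no effect):
    cvaradd com_synctotime -1
    // 3) Passed as a command-line argument (assumed "+command" form; also no effect):
    +cvaradd com_synctotime -1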

Update #2: One reader suggested that our custom settings were mostly useless (ignored by the game), which is very likely, so we removed the link. He also suggested we force 16k textures, as that would both improve quality and potentially make the game into a more useful benchmark. Unfortunately, neither actually worked out. 16k texturing did not noticeably improve image quality or reduce performance in most areas; the only way it improves quality is if you're in a scene where the total number of textures in use exceeds the 8k limit, which only happens in large outdoor vistas. As a downside, setting the game to a 16k texture cache did make it more unstable, particularly at certain resolution/AA combinations. 2560x1600 8xAA with 16k textures on a GTX 580 crashes, for example, and 2560x1600 4xAA with 16k textures can take 5-20 seconds to load level transitions (when the game doesn't crash). 2GB cards might (emphasis: might) fare better, but that doesn't change the fact that most areas show no difference between the 8k and 16k texture cache.
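
For anyone who wants to experiment with the texture cache anyway, the override goes into the same rageConfig.cfg file. The cvar names below come from the community tweak guides circulating at the time, so treat this as an illustrative sketch rather than a verified recipe; the 8k and 16k caches correspond to values of 8192 and 16384:

    // Force a 16k virtual texture page cache (cvar names per community guides):
    vt_pageImageSizeUnique 16384
    vt_pageImageSizeUniqueDiffuseOnly 16384
    vt_pageImageSizeUniqueDiffuseOnly2 16384
    // Set these back to 8192 to return to the 8k cache.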

A second suggestion was that we force 16xAA to improve quality and put more of a load on the GPU, thus making the game run below 60FPS and allowing us to do more of a benchmark comparison. This misses the point that a game benchmark needs to be at meaningful settings; otherwise we can just go back to using 3DMarks. The other problem is that once we move beyond 8xAA, antialiasing starts to degrade image quality by making everything blurry. AMD cards limit you to 8xAA, but you can force NVIDIA up to 32xAA through the config file. Here's another gallery of comparison shots, showing both forced texture size and antialiasing comparisons, all on the GTX 580. (The image corruption at 16xAA appears to be a problem with FRAPS, as in-game we didn't notice the rendering errors.)
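
We forced the higher AA modes through the config file as well. The cvar name below is assumed from earlier id engines rather than confirmed for Rage, but the override itself is a one-liner:

    // Force MSAA beyond the in-game maximum (cvar name assumed from prior id Tech titles):
    r_multiSamples 16
    // NVIDIA accepts values up to 32, though beyond 8x everything gets noticeably blurrier:
    r_multiSamples 32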

AMD vs. NVIDIA Image Quality Comparison

There’s one final item we can quickly discuss, and that’s image quality comparisons between different hardware. I grabbed screenshots from five locations on the same three test systems, with all of them locking in at 60FPS at 1080p 4xAA. I skipped the custom configuration file and just went with the in-game settings, testing each system at Small/Low and Large/High for Texture Cache and Anisotropic Filtering. Here’s a second gallery, this time with 1080p files. As before, I’ve saved all of the images as high quality JPEGs, as uploading (and downloading) 3MB images takes quite a while, but if there’s a desire for the original PNGs I’ll see about uploading them as well.

Looking at the still images, there's very little if any difference between most of the screenshots. In motion, however, I currently notice some quality differences on the AMD hardware, with a few areas that have flickering textures and some other minor distortions. There's nothing bad enough to make the game unplayable, but browsing the forums, it does sound as though more users are having issues with Rage on AMD hardware right now. Long-term, I expect all of this will get worked out, but at least for the next month or two you may encounter some driver bugs or other issues with Rage. Caveat emptor.

Wrap Up

I’m not going to pretend to be a professional game reviewer, just like I’m not a movie critic. I’ve enjoyed both movies and games that have been panned by the critics; other games and movies that rated highly have been seriously dull for me. What I can say is that I enjoyed Rage enough to finish the single-player campaign, and while the story isn’t great, the graphics are still quite impressive. When I look at the steady 60FPS, I can certainly understand id’s rationale: take the hardware configuration confusion out of the equation and just let people play your game at a silky smooth frame rate. The megatexturing is also quite nice, as the lack of repeated textures is definitely noticeable. What really impresses me, however, is id's ability to have a game look this good and still pull 60FPS at maximum quality settings. As good as that might be for gamers, however, in the current state it makes Rage essentially useless as a benchmark.

We tried to do a few comparisons between NVIDIA and AMD graphics solutions, but the game engine (and drivers) feel a bit too raw for that right now. In another month or two, AMD, NVIDIA, and id Software will likely have all the bugs worked out and everyone can get around to playing the game as intended. Perhaps we’ll also get a usable benchmark mode where we can at least see how the graphics solutions compare. Until then, Rage remains an impressive technology demonstration and a game that some people will certainly enjoy; it’s just not a good benchmark for us right now.

Technical Discussion

  • Valitri - Tuesday, October 18, 2011 - link

    After reading through the article, I have to say I am impressed with the title of the article. Bravo!

    On another note, I absolutely love post-apocalyptic type games (Borderlands and Fallout), and after reading this it really makes me want to pick it up. However, I am in the middle of Deus Ex: HR, trying to wean myself off my WoW addiction, and don't mind waiting for better drivers for my 6970. So I will be picking this title up soon; thanks for the opinions and a generally nice article to read.
  • aguilpa1 - Tuesday, October 18, 2011 - link

    Michael, is this you? This post sounds awfully familiar.
  • JarredWalton - Tuesday, October 18, 2011 - link

    Who is "Michael"?
  • Valitri - Wednesday, October 19, 2011 - link

    Sorry to disappoint, my name is not Michael. I haven't ever posted on this site before; I've just been reading it for 4 or 5 years.
  • T2k - Thursday, October 20, 2011 - link

    ...I guess once you suck at making games you will always suck at making games, huh?
  • cactusdog - Saturday, October 22, 2011 - link

    You can't be serious; sure, RAGE has issues, but id are pioneers of computer gaming. Quake, Doom, and Wolfenstein are classics.

    There's so much disrespect from gamers...
  • deV14nt - Monday, October 24, 2011 - link

    It's because many of them weren't even born when those games came out. They really have no idea.

    Regardless, I think id took a wrong turn some time around id Tech 4. Q3 engine games were the smoothest twitch FPS games. Best gaming tournaments around then too.
  • HangFire - Wednesday, October 26, 2011 - link

    I've been thinking about this for a few days. Obviously doing screen captures and visually comparing texture resolution is no way to go about benchmarking.

    To do benchmarking properly with this engine, you'll need the cooperation of id, to make available some variables for you. Those variables would record how hard the de-rez system is working.

    If you could record how much and how long the engine is forced to de-rez, you could put together some very understandable graphs. One in realtime showing how much de-rez occurs as you play through a level; when things get busy on a slow machine you'll see a lot of de-rez. Another graph showing how much time is spent at less than 100% resolution, average de-rez, etc.

    The problem with this approach is that (at the moment) no other game or engine is available to compare against. It would still allow you to compare different hardware, though.
  • meatfestival - Sunday, November 6, 2011 - link

    The renderer was rewritten. It uses DX9 (in single player).

    The multiplayer uses OpenGL; it's based on the Quake Wars version of the engine.
  • NT78stonewobble - Saturday, November 12, 2011 - link

    "A second suggestion was that we force 16xAA to improve quality and put more of a load on the GPU, thus making the game run below 60FPS and allowing us to do more of a benchmark comparison. This misses the point that a game benchmark needs to be at meaningful settings; otherwise we can just go back to using 3DMarks."

    I kinda disagree here... I would like to compare severely stressed performance on real game engines.

    Many game engines are used in more than one game, and the newer games often push the limits of those engines.

    Basically it's just an attempt to predict how future, more graphically intensive games based on the same engine would perform.
