Initial Rage Performance Investigation

All of the technical information is fine, but we still come back to the question of performance. Earlier we showed the Low/High/Custom screenshots, and while the Low mode is definitely blurrier, it should at least come with higher performance, right? Perhaps on some hardware configurations that would be true, but on my particular system the benchmark results are trivial to report: put 60FPS across the chart and you're done. Yes, even at 2560x1600 with 8xAA, Rage hits the 60FPS cap and refuses to go any faster. That's not very useful for benchmarking, but then I'm running a GTX 580, currently the fastest single-GPU solution on the market. To get some lower end data points, I grabbed the ASUS G74SX I recently reviewed and ran it through a similar set of tests, and I also used a second system similar to my gaming rig but with an HD 6950 2GB.

Performance on the G74SX does fluctuate a bit (particularly if I enable VSYNC), but there’s still some obvious image quality scaling going on. (You see textures popping in at times, but mostly it only happens when you first enter a level.) At the Low settings and 1080p, performance checks in at 58FPS (just shy of the 60FPS that VSYNC would want), and bumping up the in-game settings to Large/High hardly changed that: it dropped to 57FPS, which is within the margin of error for FRAPS. Using the custom configuration likewise didn’t affect performance. In short, the hardware in the G74SX runs very close to the target 60FPS regardless of what settings I select; even enabling 8xAA doesn’t change the performance much, although at 4xAA the notebook does lock in at a solid 60FPS. The only way to get performance to tank is to enable 16xAA, which drops frame rates to 24FPS. All of this is at 1080p, however, as I don’t have a way to test higher resolutions with a notebook.

So that’s two systems down with no interesting performance information unearthed. What about AMD? I’ve got an HD 6950 2GB with an overclocked i7-920 (3.3GHz) and 12GB RAM, very similar to my main gaming system. I connected that to a 30” display and tested at 1080p and 2560x1600. AMD’s hardware is apparently limited to 8xAA, and performance is lower than on the GTX 580 at these punishing settings. Low/High settings still fail to make a difference, however: the average frame rate at 2560x1600 with 8xAA is around 42FPS, dropping to 4xAA brings the HD 6950 back to the 60FPS cap, and 1080p at 8xAA likewise hits 60FPS.

The initial release of Rage had a configuration option “com_syncToTime” that you could set to -1 to remove the frame rate cap, but doing so would apparently speed up everything (e.g. at 120FPS everything would move twice as fast). I never tested this before the Steam update came out, and post-patch I can’t get com_syncToTime to work. So far, I’ve tried “cvaradd com_synctotime -1;” in my rageConfig.cfg file, which did nothing. I’ve also tried using the same command from the in-game console, and I’ve tried it as a command-line argument. If anyone has input, though, send it along and I’ll be happy to try again. Until we can get rid of the 60FPS cap and dynamic image quality, however, benchmarking Rage is essentially meaningless.
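For reference, here is roughly what I tried, in case anyone wants to replicate it. The launch-option form (prefixing a console command with a plus sign) follows the usual id Tech convention, so treat that exact syntax as my assumption rather than anything id documents:

    // In rageConfig.cfg (no effect post-patch)
    cvaradd com_synctotime -1;

    // From the in-game console (no effect)
    cvaradd com_synctotime -1

    // As a launch option / command-line argument (no effect)
    +cvaradd com_synctotime -1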

Update #2: One reader suggested that our custom settings were mostly useless (ignored by the game), which is very likely, so we removed the link. He also suggested we force 16k textures, as that would improve quality and potentially make the game a more useful benchmark. Unfortunately, neither benefit materialized. Forcing 16k textures did not noticeably improve image quality or reduce performance in most areas; the only time it improves quality is in scenes where the total number of textures in use exceeds the 8k limit, which only happens in large outdoor vistas. As a downside, setting the game to a 16k texture cache did make it more unstable, particularly at certain resolution/AA combinations. 2560x1600 with 8xAA and 16k textures on a GTX 580 crashes, for example, and 2560x1600 with 4xAA and 16k textures can take 5-20 seconds to load level transitions (when the game doesn't crash). 2GB cards might (emphasis: might) fare better, but that doesn't change the fact that most areas show no difference between the 8k and 16k texture cache.
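For anyone who wants to try the 16k texture cache themselves, here's a minimal sketch of the config entries involved. The vt_pageimagesize cvar names below are the ones commonly circulated in Rage tweaking guides rather than anything id has officially documented, so treat them as assumptions; as described above, on my hardware the main effect was added instability rather than better image quality:

    // rageConfig.cfg - force the virtual texture page cache from the 8k default to 16k
    vt_pageimagesizeuniquediffuseonly2 16384
    vt_pageimagesizeuniquediffuseonly 16384
    vt_pageimagesizeunique 16384
    vt_pageimagesizevmtr 16384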

A second suggestion was that we force 16xAA to improve quality and put more of a load on the GPU, dropping the game below 60FPS and allowing more of a benchmark comparison. This misses the point that a game benchmark needs to run at meaningful settings; otherwise we can just go back to using 3DMark. The other problem is that once we move beyond 8xAA, antialiasing starts to degrade image quality by making everything blurry. AMD cards limit you to 8xAA, but you can force NVIDIA up to 32xAA through the config file (see the snippet below). Here's another gallery of comparison shots, showing both forced texture size and antialiasing comparisons, all on the GTX 580. (The image corruption at 16xAA appears to be a problem with FRAPS, as we didn't notice the rendering errors in-game.)
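The cvar generally reported for forcing higher MSAA levels is r_multiSamples; again, this is the commonly cited name rather than an officially documented switch, so consider it an assumption. Something like the following in rageConfig.cfg worked for us on NVIDIA hardware (AMD remains limited to 8xAA):

    // rageConfig.cfg - force MSAA beyond the in-game maximum
    r_multiSamples 16
    // or push it all the way to 32x:
    //r_multiSamples 32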

AMD vs. NVIDIA Image Quality Comparison

There’s one final item we can quickly discuss, and that’s image quality comparisons between different hardware. I grabbed screenshots from five locations on the same three test systems, with all of them locking in at 60FPS at 1080p 4xAA. I skipped the custom configuration file and just went with the in-game settings, testing each system at Small/Low and Large/High for Texture Cache and Anisotropic Filtering. Here’s a second gallery, this time with 1080p files. As before, I’ve saved all of the images as high quality JPEGs, as uploading (and downloading) 3MB images takes quite a while, but if there’s a desire for the original PNGs I’ll see about uploading them as well.

Looking at the still images, there’s very little if any difference between most of the screenshots. In motion, however, I currently notice some quality differences on the AMD hardware, with a few areas showing flickering textures and some other minor distortions. Nothing is bad enough to make the game unplayable, but browsing the forums it does sound as though more users are having issues with Rage on AMD hardware right now. Long-term, I expect all of this will get worked out, but at least for the next month or two you may encounter driver bugs or other issues with Rage. Caveat emptor.

Wrap Up

I’m not going to pretend to be a professional game reviewer, just like I’m not a movie critic. I’ve enjoyed both movies and games that have been panned by the critics; other games and movies that were rated highly have been seriously dull for me. What I can say is that I enjoyed Rage enough to finish the single-player campaign, and while the story isn’t great, the graphics are still quite impressive. When I look at the steady 60FPS, I can certainly understand id’s rationale: take the hardware configuration confusion out of the equation and just let people play your game at a silky smooth frame rate. The megatexturing is also quite nice, as the lack of repeated textures is definitely noticeable. What really impresses me, however, is id's ability to make a game look this good and still pull 60FPS at maximum quality settings. As good as that might be for gamers, in its current state it makes Rage essentially useless as a benchmark.

We tried to do a few comparisons between NVIDIA and AMD graphics solutions, but the game engine (and drivers) feel a bit too raw for that right now. In another month or two, AMD, NVIDIA, and id Software will likely have all the bugs worked out and everyone can get around to playing the game as intended. Perhaps we’ll also get a usable benchmark mode where we can at least see how the graphics solutions compare. Until then, Rage remains an impressive technology demonstration and a game that some people will certainly enjoy; it’s just not a good benchmark for us right now.

Comments

  • KikassAssassin - Sunday, October 16, 2011 - link

    All Rage really needed to do was check your driver version, and if you're using an older driver than the game was designed for, pop up a message telling you to update your driver, with a link to download it from AMD/nvidia's website. They knew perfectly well that the game was broken on all but the latest drivers that were being released alongside the game, and few people were going to have those drivers installed when they tried playing the game for the first time, but they didn't do anything to let people know that beforehand.

    I'm a big fan of id and I'm enjoying Rage, but they really dropped the ball with this game's launch.
  • lkuzmanov - Monday, October 17, 2011 - link

    erm... it's not the chip makers' responsibility to ensure a game runs - they certainly can and do help the devs, but QA should be part of the development / release cycle. in this context what's happening with Rage is just silly - your product either runs with the currently available drivers or you don't release it - it should be that simple.

    "Until then, Rage remains an impressive technology demonstration and a game that some people will certainly enjoy; it’s just not a good benchmark for us right now."

    the game would've qualified as "an impressive technology demonstration" had it come out in a somewhat playable state. it didn't. oddly enough, however, the tone of most reviews I've seen remains positive.
  • JarredWalton - Monday, October 17, 2011 - link

    This totally depends on your hardware. It ran fine on my system prior to the patch, albeit with the beta NVIDIA drivers. I think it's mostly on lower spec machines and/or AMD GPUs that people are having problems.
  • ProDigit - Saturday, October 15, 2011 - link

    Their conclusions are a story by itself!
  • JarredWalton - Saturday, October 15, 2011 - link

    Not sure exactly what you're trying to say here... that I should have posted just a two paragraph blurb? Or that you like the long articles?
  • Taft12 - Tuesday, October 18, 2011 - link

    I think he's saying "I have a short attention span, doesn't everybody?"
  • Iketh - Saturday, October 15, 2011 - link

    you're a minority
  • ltcommanderdata - Saturday, October 15, 2011 - link

    I was hoping you could comment on what version of OpenGL id Tech 5 is using? I'm under the impression that it's using OpenGL 2.1 + a ton of extensions. Do you think it would have been easier for AMD and nVidia to support if id had stuck to a more standard OpenGL implementation such as stock OpenGL 3.0?

    And any word on adding OpenCL support for GPGPU transcode? In earlier conference presentations, id said they were looking at a variety of options, including CUDA and OpenCL. I'm surprised they didn't go with OpenCL seeing as it would have supported both CPUs and AMD and nVidia GPUs instead of requiring a separate codepath for nVidia GPUs and a CPU codepath for everything else. Certainly now that OpenCL 1.1 drivers are available for AMD CPUs and GPUs, nVidia GPUs, and Intel CPUs and soon Ivy Bridge, now is definitely the time for the first OpenCL accelerated game.
  • Ryan Smith - Saturday, October 15, 2011 - link

    The context that Rage creates is an OpenGL 3.2 context. I couldn't tell you off the top of my head what they're using from OpenGL 3.x, though; OpenGL is nebulous in general like that. Given that they're using a 3.2 context, I wouldn't be too worried about what specific features they are or are not using. It's not like OpenGL 4.x where there are specific headliner features (e.g. tessellation) that people are familiar with.

    As for OpenCL, the last thing I heard was the famous post a couple of weeks ago about CUDA, which noted that OpenCL was a performance issue. There hasn't been any news on that, as far as I'm aware.
  • swaaye - Tuesday, October 18, 2011 - link

    On Beyond3D in the PC Gaming forum, NV's Evan Hart (engineer behind most of the GPU transcoding) stopped by and mentioned that OpenCL was tested perhaps years ago on older hardware and it performed poorly at the time. Apparently it was not further explored. I'm not sure which company lacked the motivation to go further with it.
