Initial Rage Performance Investigation

All of the technical information is fine, but we still come back to the question of performance. Earlier we showed the Low/High/Custom screenshots, and while the Low mode is definitely blurrier, it should at least come with higher performance, right? Perhaps on some hardware configurations that would be true, but on my particular system the benchmark results are trivially simple: put 60 FPS across the chart and you’re done. Yes, even at 2560x1600 with 8xAA, Rage hits the 60FPS barrier and refuses to go any faster. That’s not terribly useful, but then I’m running a GTX 580, currently the fastest single-GPU solution on the market. To test a much lower end setup, I grabbed the ASUS G74SX I recently reviewed and ran it through a similar set of tests, and I also used a second system similar to my gaming rig but with an HD 6950 2GB.

Performance on the G74SX does fluctuate a bit (particularly if I enable VSYNC), but there’s still some obvious image quality scaling going on. (You see textures popping in at times, but mostly it only happens when you first enter a level.) At the Low settings and 1080p, performance checks in at 58FPS (just shy of the 60FPS that VSYNC would want), and bumping up the in-game settings to Large/High didn’t change that much: it dropped to 57FPS, which is within the margin of error for FRAPS. Using the custom configuration likewise didn’t affect performance. In short, the hardware in the G74SX runs very close to the target 60FPS regardless of what settings I select; even enabling 8xAA doesn’t change performance much, although at 4xAA the notebook does lock in at a solid 60FPS. The only way to get performance to tank is to enable 16xAA, which drops frame rates to 24FPS. All of this is at 1080p, however, as I don’t have a way to test higher resolutions with a notebook.

So that’s two systems down with no interesting performance information unearthed. What about AMD? I’ve got an HD 6950 2GB with an overclocked i7-920 (3.3GHz) and 12GB RAM, very similar to my main gaming system. I connected that to a 30” display and tested at 1080p and 2560x1600. AMD’s hardware is apparently limited to 8xAA, and performance is lower than on the GTX 580 at these punishing settings. The Low/High settings still fail to make a difference, however, with the average frame rate at 2560x1600 with 8xAA hovering around 42FPS; dropping to 4xAA brings the HD 6950 back to the 60FPS cap, and likewise 1080p at 8xAA hits 60FPS.

The initial release of Rage had a configuration option, com_syncToTime, that you could set to -1 to remove the frame rate cap, but doing so would apparently speed up everything (e.g. at 120FPS everything would move twice as fast). I never tested this before the Steam update came out, and post-patch I can’t get com_syncToTime to work. So far I’ve tried “cvaradd com_synctotime -1;” in my rageConfig.cfg file, which did nothing; I’ve also tried issuing the same command from the in-game console and passing it as a command-line argument. If anyone has input, though, send it along and I’ll be happy to try again. Until we can get rid of the 60FPS cap and the dynamic image quality, however, benchmarking Rage is essentially meaningless.
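For the record, here’s roughly what I’ve tried so far. The config line is verbatim; the launch-option form assumes the usual id-engine convention of prefixing console commands with a plus sign, and neither has any effect for me post-patch:

    // in rageConfig.cfg
    cvaradd com_synctotime -1;

    // as a launch option (assumed syntax)
    rage.exe +cvaradd com_synctotime -1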

Update #2: One reader suggested that our custom settings were mostly useless (ignored by the game), which is very likely, so we removed the link. He also suggested we force 16k textures, as that would improve quality and potentially make the game a more useful benchmark. Unfortunately, neither actually worked out. 16k texturing did not noticeably improve image quality or reduce performance in most areas; the only way it improves quality is if you're in a scene where the total number of textures in use exceeds the 8k limit, which only happens in large outdoor vistas. As a downside, setting the game to a 16k texture cache did make it more unstable, particularly at certain resolution/AA combinations. 2560x1600 with 8xAA and 16k textures crashes on a GTX 580, for example, and 2560x1600 with 4xAA and 16k textures can take 5-20 seconds to load level transitions (when the game doesn't crash). 2GB cards might (emphasis: might) fare better, but that doesn't change the fact that most areas show no difference between the 8k and 16k texture cache.
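For anyone who wants to experiment regardless, the larger texture cache is forced through the virtual texturing cvars. The values below mirror what readers have suggested (see the comments), and I can't vouch for the exact names staying the same across patches:

    vt_pageimagesizeuniquediffuseonly2 16384
    vt_pageimagesizeuniquediffuseonly 16384
    vt_pageimagesizeunique 16384
    vt_pageimagesizevmtr 16384
    vt_restart    // presumably rebuilds the virtual texture cache so the new size takes effect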

A second suggestion was that we force 16xAA to improve quality and put more of a load on the GPU, thus making the game run below 60FPS and allowing us to do more of a benchmark comparison. This misses the point that a game benchmark needs to be at meaningful settings; otherwise we can just go back to using 3DMarks. The other problem is that once we move beyond 8xAA, antialiasing starts to degrade image quality by making everything blurry. AMD cards limit you to 8xAA, but you can force NVIDIA up to 32xAA through the config file. Here's another gallery of comparison shots, showing both forced texture size and antialiasing comparisons, all on the GTX 580. (The image corruption at 16xAA appears to be a problem with FRAPS, as in-game we didn't notice the rendering errors.)
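The antialiasing override itself is a single cvar. The in-game menu tops out at 16x, so anything beyond that has to be forced through the config file, and as noted it only works on NVIDIA hardware (AMD stops at 8x):

    r_multiSamples 32    // menu maximum is 16; higher values must be forced via config (NVIDIA only)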

AMD vs. NVIDIA Image Quality Comparison

There’s one final item we can quickly discuss, and that’s image quality comparisons between different hardware. I grabbed screenshots from five locations on the same three test systems, with all of them locking in at 60FPS at 1080p 4xAA. I skipped the custom configuration file and just went with the in-game settings, testing each system at Small/Low and Large/High for Texture Cache and Anisotropic Filtering. Here’s a second gallery, this time with 1080p files. As before, I’ve saved all of the images as high quality JPEGs, as uploading (and downloading) 3MB images takes quite a while, but if there’s a desire for the original PNGs I’ll see about uploading them as well.

Looking at the still images, there’s very little if any difference between most of the screenshots. In motion, however, I currently notice some quality differences on the AMD hardware, with a few areas that have flickering textures and some other minor distortions. Nothing is bad enough to make the game unplayable, but browsing the forums it does sound as though more users are having issues with Rage on AMD hardware right now. Long-term, I expect all of this will get worked out, but at least for the next month or two you may encounter some driver bugs or other issues with Rage. Caveat emptor.

Wrap Up

I’m not going to pretend to be a professional game reviewer, just as I’m not a movie critic. I’ve enjoyed both movies and games that have been panned by the critics; other games and movies that were rated highly have been seriously dull for me. What I can say is that I enjoyed Rage enough to finish the single-player campaign, and while the story isn’t great, the graphics are still quite impressive. When I look at the steady 60FPS, I can certainly understand id’s rationale: take the hardware configuration confusion out of the equation and just let people play your game at a silky smooth frame rate. The megatexturing is also quite nice, as the lack of repeated textures is definitely noticeable. What really impresses me, however, is id’s ability to make a game look this good and still pull 60FPS at maximum quality settings. As good as that might be for gamers, in its current state it makes Rage essentially useless as a benchmark.

We tried to do a few comparisons between NVIDIA and AMD graphics solutions, but the game engine (and drivers) feel a bit too raw for that right now. In another month or two, AMD, NVIDIA, and id Software will likely have all the bugs worked out and everyone can get around to playing the game as intended. Perhaps we’ll also get a useable benchmark mode where we can at least see how the graphics solutions compare. Until then, Rage remains an impressive technology demonstration and a game that some people will certainly enjoy; it’s just not a good benchmark for us right now.

Technical Discussion

  • SSIV - Saturday, October 15, 2011 - link

    This is my first post on Anandtech ever. I've been a reader for a long time, but never felt the necessity to resort to posting prior to this moment.

    So, mostly due to rageConfig.cfg misinformation on the internet, the article's verdict of RAGE as a benchmarking tool is void.

Firstly, the Steam forums config doesn't do anything; if you try changing some of the variables it contains from the in-game console, you'll notice no change. There are only a few variables worth changing. The rest are locked.

    Secondly, the variables worth changing are mentioned on nvidia's website:
    http://www.geforce.com/News/articles/how-to-unlock...

Concerning the constant-fps/variable-quality statement: you'll notice this can easily be changed to constant quality if you read the content of the above link. Hi-res textures are enabled once you change a few variables to 16384 (read the link for details).

    For heavy benchmarking I'd recommend using this config:

    jobs_numThreads 12 // makes big difference on AMD X6 1055T
    vt_maxPPF 128 // max is 128
    vt_pageimagesizeuniquediffuseonly2 16384
    vt_pageimagesizeuniquediffuseonly 16384
    vt_pageimagesizeunique 16384
    vt_pageimagesizevmtr 16384
    vt_restart
    r_multiSamples 32
    vt_maxaniso 4 // 0-16 (32?)
    image_anisotropy 4 // same

CUDA transcode is not included in the config because it can be changed on demand under video options, whereas settings such as r_multiSamples 32 cannot be set from the menu (16 is the highest).

    Please revise the benchmarking verdict, because it doesn't do the game justice.
    Also, it'd make Anandtech the first site to make a real benchmark of RAGE.
    I'd like to see this happen :)
  • ssj4Gogeta - Saturday, October 15, 2011 - link

    @Jarred Walton
    Please do take a look at this, as the images posted there show a very noticeable improvement in texture quality.
  • JarredWalton - Saturday, October 15, 2011 - link

SSIV, you're correct that most of the variables in that config I linked didn't do anything, which is why I didn't test with them most of the time. I ran some screenshot comparisons, shrugged, and moved on. And you might be surprised to hear that I actually already read that whole article before you ever posted; I just never tried forcing the 16K textures (as the article itself states you "might" see an improvement). Anyway, I'll check 16K textures, but let me clarify a few things.

    1) Setting Texture Cache to "Large" gets to very nearly the same quality as the forced 8K textures.
    2) CUDA transcode was tested separately, and has no apparent impact on performance for my systems.
    3) Forcing constant quality is fine, but that doesn't do anything for the 60FPS frame rate cap...
    4) ...unless you force at least 16xAA, and possibly 32xAA. (I tried 32xAA once before and the game crashed, but perhaps that was a bad config file.)

    Even if you can get below the 60FPS frame rate cap, however, that does not make a game a good benchmark. Testing at silly levels of detail to try to get below the frame rate cap is not useful, especially if many of the changes that will cut the frame rate down far enough result in negligible (or worse) image quality.

    Thanks for the input, but unless I'm sorely mistaken this CFG file hacking still won't make Rage into a useful benchmark. Anyway, I'll investigate a few more things and add an addendum, based on what I discover.
  • JarredWalton - Saturday, October 15, 2011 - link

    Addendum: 16k textures do absolutely nothing for quality. What's more, you really start to run into other issues. Specifically:

    1) Using a GTX 580 with 12GB of system RAM, 16k forced textures (using your above list of custom settings) causes every level to load super slow -- you can see the low-res textures, frame rate is at 1-2 FPS, and it takes 5 to 20 seconds (sometimes longer) for a level to finish loading. This is of course assuming nothing crashes, which in my limited experience over the past 30 minutes is quite common.

    2) 16k textures with 8xAA enabled crashes at 2560x1600, so I had to drop to 4xAA to even get levels to attempt to load (though 1080p 8xAA works okay).

3) Even with 16k textures, the difference compared to 8k screenshots is virtually zero. Without running an image diff, I couldn't tell which looks better; even the file sizes are nearly equal.

    4) Finally, even if we get rid of the 16k textures idea, 16xAA looks worse than 4xAA and 8xAA, because everything becomes overly blurred, and 32xAA looks downright awful. "Hooray! No jaggies! Boo! No detail!"
  • JarredWalton - Saturday, October 15, 2011 - link

    Also FYI, there are now two paragraphs and a final image gallery on page 3 discussing the above.
  • SSIV - Sunday, October 16, 2011 - link

I'm amazed at the vigor of your response! Thank you for investigating and researching this, Jarred Walton. With things looking as they are now, I can see that not much else can be done about the benchmark verdict.

What >8xAA settings do to the game sounds worrisome, however. I assume id will address this sooner or later.
Because the textures load iteratively, they're blurry at first. Under heavy load the engine might only have enough time to AA the first texture layer (might be a realtime constraint). But this is just my hypothesis, which shouldn't be true because it implies some sort of texture iteration load block.

    16k has small differences, mostly on road signs and bump mapped cement blocks. But like you said, it's not a significant improvement compared to 8k.

    Thank you for your effort and making an insightful review!
  • SSIV - Monday, October 17, 2011 - link

A 32-bit game can only allocate as much memory as 32 bits permits, which lands around 3072MiB. So unless RAGE uses PAE, which diminishes overall RAM performance, your 12GiB of RAM will never be fully utilized.

    Having said that, the sentence beginning with "Using a GTX 580 with 12GB of system RAM" is slightly disappointing.

    Again, great review.
  • JonnyDough - Monday, October 17, 2011 - link

    Maybe not, but his link sure did help show me what Rage can really look like. Wow! Those textures make the game look sooo much better! I'm sure an old console can't produce that clarity!
  • SSIV - Monday, October 17, 2011 - link

Here are some more texture comparisons if you're interested:
    http://forums.steamgames.com/forums/showpost.php?p...

The biggest differences between 8k & 16k are on signs and the jagged peaks of blocks/rocks. They go unnoticed once motion blur kicks in.

    As a side note, I hope a game that'll utilize the dynamic lighting part of this game engine will appear soon. That'll be an interesting add to the Tech 5 stew.
  • JarredWalton - Monday, October 17, 2011 - link

    I think these images show what I hinted at above: depending on your system and settings, if you go beyond a certain point in "quality" you may end up with inconsistent overall quality. 2560x1600 with 4xAA and 16k textures certainly did it for my GTX 580, and 1920x1080 at 8xAA seemed to do it as well. Some of the linked images show definite improvements, but interestingly there's at least one shot (Img 2) where the 8k texturing isn't consistently better than the 4k texturing -- two of the building walls as well as the distant mountains look like 1k textures or something.

    If you have enough VRAM, 16k textures with 8xAA looks like the best you'll get, provided you only run at 1080p or lower resolution (or at least not 2560x1600/2560x1440). On my system, however, 16k causes problems far too often for me to recommend it. I tried binding a key to switch between 16k and 8k texture cache, and most of the time when I try switching to 16k the game crashes. I'd be curious to hear what hardware others are using (and what drivers) where 16k is stable at 2560x1600.
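    For what it's worth, the bind was something along these lines; I'm reconstructing it from memory (and may have toggled more of the vt_ cvars listed above), so treat the exact syntax as approximate:

    bind F5 "vt_pageimagesizeunique 16384 ; vt_restart"    // switch to the 16k cache
    bind F6 "vt_pageimagesizeunique 8192 ; vt_restart"     // back to the 8k default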
