Technical Discussion

The bigger news with Rage is that this is id’s launch title to demonstrate what their id Tech 5 engine can do. It’s also the first major engine in a long while to use OpenGL as the core rendering API, which makes it doubly interesting for us to investigate as a benchmark. And here’s where things get really weird, as id and John Carmack have basically turned the whole gaming performance question on its head. Instead of fixed quality and variable performance, Rage shoots for fixed performance and variable quality. This is perhaps the biggest issue people are going to have with the game, especially if they’re hoping to be blown away by id’s latest graphical tour de force.
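
To make the inversion concrete, here's a minimal sketch of how a fixed-performance/variable-quality loop works in general. This is not id's code; the names, thresholds, and the single "detail bias" knob are illustrative assumptions (id Tech 5 actually juggles texture page resolution and other factors).

```python
# Minimal sketch of "fixed performance, variable quality" (hypothetical,
# not id Tech 5's actual logic): instead of letting frame time float,
# the engine adjusts a detail knob to hold the frame-time target.

TARGET_FRAME_MS = 1000.0 / 60.0  # Rage targets 60FPS

def adapt_detail(last_frame_ms, detail_bias):
    """Return a new detail bias (0.0 = blurriest, 1.0 = full detail)."""
    if last_frame_ms > TARGET_FRAME_MS * 1.05:
        # Missing the target: reduce texture/page detail so the next frame is cheaper.
        detail_bias = max(0.0, detail_bias - 0.10)
    elif last_frame_ms < TARGET_FRAME_MS * 0.90:
        # Plenty of headroom: restore detail gradually.
        detail_bias = min(1.0, detail_bias + 0.05)
    return detail_bias

# Example: a couple of slow frames pull detail down; fast frames bring it back up.
bias = 1.0
for frame_ms in (20.0, 18.0, 14.0, 14.0):
    bias = adapt_detail(frame_ms, bias)
    print(f"{frame_ms:.1f} ms -> detail bias {bias:.2f}")
```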

Running on my gaming system (if you missed it earlier, it’s an i7-965X @ 3.6GHz, 12GB RAM, GTX 580 graphics), I get a near-constant 60FPS, even at 2560x1600 with 8xAA. But there’s the rub: I don’t ever get more than 60FPS, and certain areas look pretty blurry no matter what I do. The original version of the game offered almost no options other than resolution and antialiasing, while the latest patch has opened things up a bit by adding texture cache and anisotropic filtering settings—these can be set to either Small/Low (default pre-patch) or Large/High. If you were hoping for a major change in image quality, however, even post-patch there’s still plenty that limits the overall quality. For one, even with 21GB of disk space, id’s megatexturing may provide near-unique textures for the game world, but many of those textures are still low resolution. Antialiasing is also a bit odd, as it appears to have very little effect on performance (up to a certain point); the most demanding games choke at 2560x1600 4xAA, even with a GTX 580, but Rage chugs along happily with 8xAA. (16xAA, on the other hand, cuts frame rates almost in half.)

The net result is that both before and after the latest patch, people have been searching for ways to make Rage look better/sharper, with marginal success. I grabbed one of the custom configurations listed on the Steam forums to see if that helped at all. There appears to be a slight tweak in anisotropic filtering, but that’s about it. [Edit: removed link as the custom config appears mostly worthless—see updates.] I put together a gallery of several game locations using my native 2560x1600 resolution with 8xAA, at the default Small/Low settings (for texturing/filtering), at Large/High, and using the custom configuration (Large/High with additional tweaks). These are high quality JPEG files that are each ~1.5MB, but I have the original 5MB PNG files available if anyone wants them.

You can see that post-patch, the difference between the custom configuration and the in-game Large/High settings is negligible at best, while the pre-patch (default) Small/Low settings have some obvious blurriness in some locations. Dead City in particular looked horribly blurred before the patch; I started playing Rage last week, and I didn’t notice much in the way of texture blurriness until I hit Dead City, at which point I started looking for tweaks to improve quality. It looks better now, but there are still a lot of textures that feel like they need to be higher resolution/quality.

Something else worth discussing while we’re on the subject is Rage’s texture compression format. S3TC (also called DXTC) is the standard compressed texture format, first introduced in the late 90s. S3TC/DXTC achieves a fixed 4:1 or 6:1 compression ratio, depending on the DXT variant. John Carmack has stated that all of the uncompressed textures in Rage occupy around 1TB of space, so obviously that’s not something they could ship/stream to customers, as even with a 6:1 compression ratio they’d still be looking at roughly 170GB of textures. In order to get the final texture content down to a manageable 16GB or so, Rage uses the HD Photo/JPEG XR format to store its textures. The JPEG XR content then gets transcoded on the fly into DXTC, which is used for texturing the game world.
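
Those numbers check out on the back of an envelope. The snippet below just redoes the arithmetic using the standard S3TC/DXTC block sizes and Carmack's quoted ~1TB of uncompressed source art; the ~16GB shipping figure is taken from the text above.

```python
# Back-of-the-envelope check of the texture-budget numbers quoted above.
TB = 1024 ** 4
GB = 1024 ** 3

uncompressed = 1 * TB  # Carmack's estimate for Rage's raw texture data

# Standard S3TC/DXTC ratios: DXT1 is ~6:1 vs 24-bit RGB (8 bytes per 4x4 block),
# DXT5 is 4:1 vs 32-bit RGBA (16 bytes per 4x4 block).
for name, ratio in (("DXT1, 6:1", 6), ("DXT5, 4:1", 4)):
    print(f"{name}: {uncompressed / ratio / GB:,.0f} GB")  # ~171 GB and 256 GB

# JPEG XR on disk brings the shipping data down to roughly 16GB,
# i.e. an effective ratio on the order of 64:1.
print(f"JPEG XR effective ratio: ~{uncompressed / (16 * GB):.0f}:1")
```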

The transcoding process is one area where NVIDIA gets to play their CUDA card once more. When Anand benchmarked the new AMD FX-8150, he ran the CPU transcoding routine in Rage as one of numerous tests. I tried the same command post-patch, and with or without CUDA transcoding my system reported a time of 0.00 seconds (even with one thread), so that appears to be broken now as well. Anyway, I’d assume that a GTX 580 will transcode textures faster than any current CPU, but just how much faster I can’t say. AMD graphics on the other hand will currently have to rely on the CPU for transcoding.

Update: Sorry, I didn't realize that you had to have a game running rather than just using vt_benchmark at the main menu. Bear in mind that I'm using a different location than Anand used in his FX-8150 review; my save is in Dead City, which tends to be one of the more taxing areas. I'm using two different machines as a point of reference, one a quad-core (plus Hyper-Threading) 3.65GHz i7-965 and the other a quad-core i7-2630QM. I've also got results with and without CUDA, since both systems are equipped with NVIDIA GPUs. Here's the result, which admittedly isn't much:

[Chart: Rage Transcoding Performance]

This is using "vt_benchmark 8" and reporting the best score, but regardless of the number of threads it's pretty clear that CUDA is able to help speed up the image transcoding process. How much this actually affects gameplay isn't so clear, as new textures are likely transcoded in small bursts once the initial level load is complete. It's also worth pointing out that GPU transcoding looks like it would be of more benefit with slower CPUs, as my desktop realized a 41% improvement while the lower-clocked notebook (even with a slower GPU) realized a 52% improvement. I also tested the GTX 580 and GTX 560M with and without CUDA transcoding and didn’t notice a difference in performance, but I don’t have any empirical data. That brings us to the final topic.
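
For reference, the "percent improvement" figures above are simply the ratio of the best CPU-only time to the best CPU+CUDA time. The snippet below shows the calculation with placeholder timings, not the chart's actual numbers.

```python
# How the percent-improvement figures are derived from two vt_benchmark times.
# The example timings here are placeholders, NOT the measured results.

def cuda_improvement(cpu_only_s: float, cpu_plus_cuda_s: float) -> float:
    """Percent improvement in transcoding throughput when CUDA assists the CPU."""
    return (cpu_only_s / cpu_plus_cuda_s - 1.0) * 100.0

# A hypothetical 2.00s CPU-only run vs. 1.42s with CUDA works out to ~41%:
print(f"{cuda_improvement(2.00, 1.42):.0f}% improvement")
```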


Comments


  • Valitri - Tuesday, October 18, 2011 - link

    After reading through the article, I have to say I am impressed with the title of the article. Bravo!

    On another note, I absolutely love post-apocalyptic type games (Borderlands and Fallout), and after reading this it really makes me want to pick it up. However, I am in the middle of Deus Ex: HR, trying to wean myself off my WoW addiction, and don't mind waiting for better drivers for my 6970. So I will be picking this title up soon; thanks for the opinions and a generally nice article to read.
  • aguilpa1 - Tuesday, October 18, 2011 - link

    Michael, is this you? This post sounds awfully familiar.
  • JarredWalton - Tuesday, October 18, 2011 - link

    Who is "Michael"?
  • Valitri - Wednesday, October 19, 2011 - link

    Sorry to disappoint, my name is not Michael. I haven't ever posted on this site before, just been reading it for 4 or 5 years.
  • T2k - Thursday, October 20, 2011 - link

    ...I guess once you suck at making games you will always suck at making games, huh?
  • cactusdog - Saturday, October 22, 2011 - link

    You can't be serious; sure, RAGE has issues, but id are pioneers of computer gaming. Quake, Doom, and Wolfenstein are classics.

    There's so much disrespect from gamers....
  • deV14nt - Monday, October 24, 2011 - link

    It's because many of them weren't even born when those games came out. They really have no idea.

    Regardless, I think id took a wrong turn some time around id Tech 4. Q3 engine games were the smoothest twitch FPS games. Best gaming tournaments around then too.
  • HangFire - Wednesday, October 26, 2011 - link

    I've been thinking about this for a few days. Obviously doing screen captures and visually comparing texture resolution is no way to go about benchmarking.

    To do benchmarking properly with this engine, you'd need id's cooperation to make some variables available to you. Those variables would record how hard the de-rez system is working.

    If you could record how much and for how long the engine is forced to de-rez, you could put together some very understandable graphs. One in real time showing how much de-rez occurs as you play through a level; when things get busy on a slow machine you'll see a lot of de-rez. Another graph showing how much time was spent at less than 100% resolution, the average de-rez, etc.

    The problem with this approach is that (at the moment) there's no other game or engine to compare against, but it would still allow you to compare different hardware.
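
HangFire's proposal amounts to logging a per-frame resolution factor and then summarizing it. A minimal sketch of the aggregation side follows; the engine exposes no such hook publicly, so the function and sample values are purely hypothetical.

```python
# Hypothetical aggregation of per-frame "de-rez" samples, where each sample is
# the fraction of full texture resolution the engine settled on (1.0 = 100%).
# id Tech 5 does not currently expose such a variable; this is illustrative only.

def summarize_derez(samples):
    below_full = [s for s in samples if s < 1.0]
    return {
        "average_resolution": sum(samples) / len(samples),
        "pct_time_below_full": 100.0 * len(below_full) / len(samples),
        "worst_derez": min(samples),
    }

# Example: a run where detail briefly drops during a busy firefight.
print(summarize_derez([1.0, 1.0, 0.8, 0.6, 0.75, 1.0, 1.0]))
```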
  • meatfestival - Sunday, November 6, 2011 - link

    The renderer was rewritten. It uses DX9 (in single player).

    The multiplayer uses OpenGL; it's based on the Quake Wars version of the engine.
  • NT78stonewobble - Saturday, November 12, 2011 - link

    "A second suggestion was that we force 16xAA to improve quality and put more of a load on the GPU, thus making the game run below 60FPS and allowing us to do more of a benchmark comparison. This misses the point that a game benchmark needs to be at meaningful settings; otherwise we can just go back to using 3DMarks."

    I kinda disagree here... I would like to compare severely stressed performance on real game engines.

    Many game engines are used in more than one game, and newer games often push the limits of those engines.

    Basically, it's just an attempt to predict how future, more graphically intensive games based on the same engine would perform.
