Whenever a PC game pushes the limits of what current hardware can do, it generally ends up being fairly GPU bound. In the past, as long as you had pretty much any Socket-939 Athlon 64, you had enough CPU power to drive even the fastest single GPU video cards. You would typically be running at fairly GPU-bound graphics settings - and even if you were CPU-bound, frame rates would be high enough that it wouldn't really matter. However, every now and then there comes a game that is an equal opportunity stress test on your system, requiring an extremely fast CPU as well as a high end GPU. Bethesda Softworks' latest hit title, The Elder Scrolls IV: Oblivion, is such a game.

In our initial article on Oblivion performance we compared high end and mid range PCI Express GPUs, discovering that we had finally found a game that was stressful enough to truly demand more GPU power than what is currently available on the market. Today's article uses the same benchmarks that we used in our first article, but focuses on finding the right mix of CPU and GPU performance for the best Oblivion experience.

It's worth stating up-front that we are not going to attempt to find ideal settings for every possible CPU/GPU configuration available. There are many tweaks that can be made that will dramatically improve performance on slower CPUs. Reducing the height of the grass as well as the density - or turning off grass entirely - will help a lot. Running without HDR, using medium textures, turning off shadow filtering... you can easily get performance to a level that many people will find acceptable, but it always comes at the cost of reducing the quality of the graphics - or at least the complexity of the graphics. We're interested in characterizing CPU performance under identical configurations for this article, providing an apples-to-apples look at how the Oblivion engine runs on a variety of processors.
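
For readers who want to experiment along those lines anyway, here is a minimal sketch of how a few of those values can be dialed back by editing Oblivion.ini directly with Python's configparser. It is not part of our test configuration; the file location and the key names (iMinGrassSize, fGrassEndDistance, fGrassStartFadeDistance, iShadowFilter) are common community tweaks quoted from memory, so treat them as assumptions and check them against your own ini before saving anything.

    # Hedged sketch: batch-edit a few Oblivion.ini values to trade grass/shadow
    # detail for frame rate. Key names and the ini location are assumptions
    # based on common community tweaks - verify them against your own file.
    from configparser import ConfigParser
    from pathlib import Path
    import shutil

    # Typical XP-era location (assumption; adjust to your install).
    ini_path = Path.home() / "My Documents" / "My Games" / "Oblivion" / "Oblivion.ini"
    shutil.copy(ini_path, str(ini_path) + ".bak")   # keep a backup before touching anything

    cfg = ConfigParser(strict=False, interpolation=None)  # tolerate duplicate keys and raw '%' signs
    cfg.optionxform = str                                 # preserve the ini's mixed-case key names
    cfg.read(ini_path)

    # Sparser grass: a larger iMinGrassSize means fewer grass quads per cell (assumed key, default reportedly 80).
    cfg["Grass"]["iMinGrassSize"] = "120"
    # Shorter grass draw distance (assumed keys).
    cfg["Grass"]["fGrassEndDistance"] = "3000.0000"
    cfg["Grass"]["fGrassStartFadeDistance"] = "2000.0000"
    # Cheaper shadows (assumed key; 0 turns shadow filtering off).
    cfg["Display"]["iShadowFilter"] = "0"

    with open(ini_path, "w") as f:
        cfg.write(f, space_around_delimiters=False)       # keep the game's key=value style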

The Test

36 Comments


  • feraltoad - Friday, May 5, 2006 - link

    I would have liked to see some high resolution benchmarks, or at least one. At 1680*1050, would this show that the CPU was no longer the weakest link, with the video card scrambling to render the scene? Would the extra MHz of the CPUs still have a marked effect? Maybe you see the answers to these questions as obvious, but not me. :( I play at this resolution. I'm sure some people complain, but I've tried running around with FRAPS in different places, and as long as the FPS doesn't seem to drop below 24 it isn't bad at all. If I look at an Oblivion gate I get 39 fps steady; granted, this isn't with Dremoras trying to club me. I find that long views in towns (Anvil) are rougher than the Oblivion gates. Perhaps it's all the NPCs or the shadows of buildings?

    However, very nice article, and VERY DEPRESSING! Sad to see the kings of the video cards struggle with a game this much. Also insanely pathetic to see that an FX-60 will net you a whopping 9 fps in the most demanding part of the game. 9 FPS?!? That's a frickin' $1000 chip. You could buy a $115 A64 3000+ instead and OC it (just 250 HTT) to 3500+ speeds, and you only have a 9 fps difference from a $100 chip to a $1000 chip? That is messed up. (Yeah, I know you can OC the FX too, but still, at $1000 would you expect that to be necessary? It should brew coffee with the other core while using office apps. Also, I don't believe anybody buys a 3000+ without planning to OC.) So much for "getting what you pay for". I guess it is comforting to normal people who can't drop a grand on the CPU alone.

    I too would have liked to see if an Ageia PhysX chip would have made a difference by freeing up the CPU for other things. Ageia could have really gained some momentum if they had gotten this game to support their chip and had a board out for around $150.

    People keep talking about patches. Can a patch really significantly optimize a game in any way other than trading quality for performance?

    Jarred, what is the biggest increase in performance in a single game that you have seen with new video card drivers? Can we expect NVIDIA or ATI to dramatically increase performance with a driver update?

    After all, this will hurt video card sales if people can expect sub-mediocre performance from the best graphics cards. It will also boost 360 sales if they can't increase performance. Especially when I release my patent-pending super dongle that will allow two Xbox 360s to be used together in an SLI/CrossFire configuration. haha (j/k, but my dongle is quite super, I assure you. ;)
  • moletus - Tuesday, May 2, 2006 - link

    Two months ago I was about to go shop for an X1900 XTX for 600 euros. Then I saw the Xbox 360.
    Guess what! (Hint: with all the bells and whistles it cost 550 EUR. :)

    Why on earth burn so much money on a card that still can't run the game, when you can have all the chocolate (+AA and HDR) in the world for 1/3 of the money?

    I'm no MickeySoft fanboy, but the 360 ride has been surprisingly smooth.
  • The Black Cursor - Monday, May 1, 2006 - link

    Both articles were quite informative, but I don't quite have enough information to make a decision between the upper mid-range GPUs and CPUs at similar price points:

    ATI X1800XT vs. NVIDIA 7900GT

    AMD A64 3700+ vs. AMD A64 X2 3800+

    Any opinions?


    Be seeing you...

    ---> TBC (To Buy Components)
  • xFlankerx - Monday, May 1, 2006 - link

    I say the X2 3800+ with the 7900GT. While the X2 most likely will not make a difference in the game itself, as noted in my last comment, it WILL provide an overall smoother experience from your PC.
  • xFlankerx - Sunday, April 30, 2006 - link

    I just came from reading the AnandTech Oblivion CPU testing. I beg to differ with the conclusion that you need a fast CPU to get the most out of your GPU. I'm more of the opinion that the CPU and GPU handle different tasks. We know that the CPU is much stronger than the GPU in MOST cases. Now, the CPU is what provides the GPU with instructions for what the GPU needs to do. If the CPU is already feeding the GPU more instructions than it can handle, then there is no point in having a faster CPU. See, there was hardly a difference in FPS between a 1.8GHz A64 and a 2.6GHz A64 in areas that are GPU-intensive. The Oblivion gates are by far the most intensive part of the game, GPU-wise. Every single one of the single-card solutions was the bottleneck in those cases. Only dual X1900 XTXs were able to take advantage of a faster CPU.

    On the other hand, the Town benchmarks are influenced quite a bit by the CPU. This is easily explained when you notice that in a town there is a lot for the CPU to calculate that isn't there at the Oblivion gates. There are hordes of NPCs in a city, and the CPU has to control every single one of them (Oblivion NPCs lead lives of their own - by far the most complicated AI in a game yet). Thus, the stronger the CPU, the better your frames per second in a crowded area will be. The clustering of all the video cards' performance here somewhat reinforces the point.

    The Dungeon benchmarks did surprise me. There is a decent amount for the GPU to render, though not as much as in the other areas. However, there is very little for the CPU to do. And yet, we see quite a bit of improvement with a faster CPU. I'm not entirely sure how to explain that.

    My point is, basically, that "You don't need a fast CPU to get the most out of your GPU." If anything, it should be the other way around, since the GPU is what can't keep up with the CPU. I think the conclusion should be more like, "You need a fast GPU to handle the graphics of the game, but you will suffer a drop in FPS if your CPU cannot keep up with the AI in crowded areas." (A rough sketch of the bottleneck math follows below.)
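
    To put rough numbers on that bottleneck argument, here is a toy model - the millisecond figures are made up for illustration, not taken from AnandTech's benchmarks. In a pipelined renderer the frame time is roughly the slower of the CPU's and the GPU's per-frame work, so a faster CPU only raises FPS once the CPU is the longer leg:

        # Toy bottleneck model: frame rate is limited by whichever of the CPU or
        # GPU takes longer per frame. All millisecond figures below are made up.
        def fps(cpu_ms: float, gpu_ms: float) -> float:
            """Approximate FPS when CPU and GPU work overlap (pipelined)."""
            return 1000.0 / max(cpu_ms, gpu_ms)

        # Oblivion-gate-style scene: GPU-bound, so halving CPU time changes nothing.
        print(fps(cpu_ms=20.0, gpu_ms=50.0))   # 20.0
        print(fps(cpu_ms=10.0, gpu_ms=50.0))   # still 20.0
        # Town-style scene: NPC AI makes the CPU the longer leg, so it scales.
        print(fps(cpu_ms=45.0, gpu_ms=25.0))   # ~22.2
        print(fps(cpu_ms=30.0, gpu_ms=25.0))   # ~33.3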
  • kuk - Sunday, April 30, 2006 - link

    Any idea on how a Core Duo setup would perform?
  • ballero - Sunday, April 30, 2006 - link

    Anand, are you sure the performance increases are due to the game?
    AFAIK the Catalyst drivers are multithreaded.
  • RyanHirst - Saturday, April 29, 2006 - link

    OK, I was just needling a bit about the Opteron servers; I didn't really think they'd be a part of the test. :>

    First, PCI Express x16 is standard on all dual Socket-940 nForce Professional boards, and several have two x16 slots. I did forget that you can't do CrossFire on an NVIDIA board -- but even just checking a server board with one, two, and four cores as a separate test, would that not be worth a look?

    Hyper-Threading doesn't really answer the question. A (perhaps) 5% improvement with HT is consistent with the "six of one..." results HT gives in dynamic multithreaded environments (chess, say, or video games -- where the thread results must be compared to values from other threads, or completed within a certain time). There has never been any reason to assume that, because a program does not get a boost from Hyper-Threading, it will not benefit from an additional full processor.
    I'm not arguing that a dual Socket-940 rig is mainstream, but should we argue about which is less relevant to 99.9% of the gaming public, an Opteron or a $1200 CrossFire setup? The point I'm trying to make is that regardless of how common it is, a multicore system could answer some questions that remain open. Namely,
    --What are the maximal advantages of multithreading in this first generation of games? (A rough Amdahl's-law sketch at the end of this comment puts some hypothetical numbers on that ceiling.)

    Every review site seemed happy to pontificate on this subject when there was virtually no data, but now that there is a test candidate, no one seems interested in actually testing it.
    Was it ever in doubt that you can improve a game's performance by throwing a CrossFire X1900 XTX setup at it? No one would argue with this premise. *THE* question that chipmakers are taking revolutionary stands on for video games... is about cores. And I would just REALLY like someone to take a look at it.
    Sorry that was long; I realized the first post sounded grumpy because I didn't explain my reasoning. Cheers, and I took the time to write this because your reviews are always the best; when I ask myself what information I would need to make an objective judgement about something, it's always information you present and analyze. Thanks again!
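
    To put a rough ceiling on that open question, here is a back-of-the-envelope Amdahl's-law sketch. The parallel fractions are guesses, not measurements of Oblivion's engine; the point is just how quickly the ceiling flattens when only part of the frame is threaded:

        # Amdahl's law: if only a fraction p of the per-frame work is threaded,
        # extra cores hit a hard ceiling. The p values below are pure guesses.
        def speedup(p: float, cores: int) -> float:
            """Ideal speedup when a fraction p of the work parallelizes perfectly."""
            return 1.0 / ((1.0 - p) + p / cores)

        for p in (0.1, 0.3, 0.5):
            print(f"p={p:.1f}: 2 cores -> {speedup(p, 2):.2f}x, 4 cores -> {speedup(p, 4):.2f}x")
        # p=0.1: 2 cores -> 1.05x, 4 cores -> 1.08x
        # p=0.3: 2 cores -> 1.18x, 4 cores -> 1.29x
        # p=0.5: 2 cores -> 1.33x, 4 cores -> 1.60x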
  • RyanHirst - Saturday, April 29, 2006 - link

    Oh yeah, and about the small boost in performance from the second core: since the multithreaded coding in Oblivion is (by their own statements) a result of their Xbox 360 version, I was assuming that the game would be generating low-complexity, nearly branchless threads, none of which would tax the extra processor at anything close to 100%... but that the game engine can generate a LOT of them. Is that not more or less the model of the Xbox 360 processor?
  • JarredWalton - Saturday, April 29, 2006 - link

    The Xbox 360 processor is quite a bit different, handling up to six threads over three cores. The problem is, it's a completely different machine architecture, so the code goes through a different compiler. Even if their code takes full advantage of the Xbox 360's capabilities, they might not bother porting the entire spectrum over to x86. More likely -- and as far as I know there's no way to prove this -- the Xbox 360 version doesn't use more than about half the potential of the Xenon CPU. (Go ahead and call me a "doubting Thomas". LOL)

    Anyway, maybe Anand will find time to try running a few tests on a quad core versus dual core setup. I unfortunately don't have access to any quad core systems. :(
