Metro 2033

Our first analysis is with the perennial reviewers' favorite, Metro 2033. It appears in a lot of reviews for a couple of reasons: it has a benchmark GUI that anyone can use, and it is often very GPU limited, at least in single-GPU mode. Developed by 4A Games and released in March 2010, Metro 2033 is a strenuous DX11 title that can challenge most systems running it at high-end settings. We use the built-in DirectX 11 Frontline benchmark to test the hardware at 1440p with full graphical settings. Results are given as the average frame rate from a second batch of four runs, as Metro has a tendency to inflate the scores for the first batch by up to 5%.
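
For readers who want to automate the same methodology, below is a minimal sketch of the discard-the-first-batch averaging described above, assuming a run_frontline_benchmark() helper (hypothetical, standing in for however your own harness launches the benchmark and reads back its FPS result):

    from statistics import mean

    def run_frontline_benchmark() -> float:
        """Hypothetical stand-in: launch the Metro 2033 Frontline
        benchmark once and return the average FPS it reports."""
        raise NotImplementedError("wire this up to your own harness")

    def benchmark_fps(batch_size: int = 4) -> float:
        # First batch: run and discard, since Metro tends to inflate
        # these scores by up to 5%.
        for _ in range(batch_size):
            run_frontline_benchmark()
        # Second batch: these runs count; report their average FPS.
        return mean(run_frontline_benchmark() for _ in range(batch_size))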

One 7970

Metro 2033 - One 7970, 1440p, Max Settings

With one 7970 at 1440p, every processor gets a full x16 allocation, and there seems to be no split between any processors with four threads or more. Processors with two threads fall behind, but not by much; the X2-555 BE still gets 30 FPS. There seems to be no split between PCIe 3.0 and PCIe 2.0, or with respect to memory.

Two 7970s

Metro 2033 - Two 7970s, 1440p, Max Settings

When we start using two GPUs in the setup, the Intel processors have an advantage, with those running PCIe 2.0 a few FPS ahead of the FX-8350. Both core count and single-thread speed seem to have some effect (the i3-3225 is quite low, and the FX-8350 beats the X6-1100T).

Three 7970s

Metro 2033 - Three 7970s, 1440p, Max Settings

More results in favour of Intel processors and PCIe 3.0: the i7-3770K in an x8/x4/x4 configuration surpasses the FX-8350 in an x16/x16/x8 configuration by almost 10 frames per second. So far there seems to be no advantage to having a Sandy Bridge-E setup over an Ivy Bridge one.

Four 7970s

Metro 2033 - Four 7970s, 1440p, Max Settings

While we have limited results here, PCIe 3.0 beats PCIe 2.0 by 5%.

One 580

Metro 2033 - One 580, 1440p, Max Settings

From dual-core AMD all the way up to the latest Ivy Bridge, results for a single GTX 580 are all roughly the same, indicating a GPU-throughput-limited scenario.

Two 580s

Metro 2033 - Two 580s, 1440p, Max Settings

As with one GTX 580, we are still GPU limited here.

Metro 2033 conclusion

A few points are readily apparent from the Metro 2033 tests: the more powerful the GPU setup, the more important the CPU choice becomes, but that choice does not matter much until you get to at least three 7970s. At that point, you want a PCIe 3.0 setup more than anything else.

111 Comments

  • MarcVenice - Tuesday, June 04, 2013

    Please, for the love of god, add a game like Crysis 3 or Far Cry 3. Your current games are all very old, and you will see a bigger difference in newer games.
  • garrun - Tuesday, June 04, 2013

    Agree with the request for Crysis 3. It has enough options to deliver a great visual experience and a GPU beating, and it also scales well to multi-monitor resolutions for testing at extremes.
  • BrightCandle - Tuesday, June 04, 2013

    gamegpu.ru have done a lot of testing on all games with a variety of CPUs. Anandtech's choice of games is actually edge cases. Once you start looking at a wider list of games (just do a few CPUs but lots of games), you'll see a much bigger trend of performance differences, especially in a lot of the non-AAA titles. Around 50% of games show a preference for the 3930K over the 2600K at this point, so more multithreading is starting to appear, but you need to test a lot more games or you won't catch that trend and will instead come to a misleading conclusion.
  • ninjaquick - Tuesday, June 04, 2013

    I am not sure that the CPU is used any more heavily in recent games. This is a CPU test, and testing older games that are known to be CPU dependent is a must.

    Moving forward, with the next-gen consoles that is, testing the absolute newest multi-platform games will be a bit more relevant. However, even Far Cry 3 and Crysis 3 are mostly GPU bound, so there will be little to no difference in performance from changing the CPU out.
  • superjim - Tuesday, June 04, 2013

    Was thinking the same. Tomb Raider, BF3, Crysis 3, hell even Warhead would be good.
  • garrun - Tuesday, June 04, 2013

    I think Supreme Commander or Supreme Commander 2 would make an excellent CPU demo. Those games have been, and remain, CPU limited in a way no other games are, and for good reasons (complexity, AI, unit count) rather than poor coding. A good way to do this is to record a complex 8-player game against AI and then play it back at max speed, timing the playback (see the timing sketch after the comments). That benchmark responds pretty much 1:1 to clock speed increases, and faster simulation also directly improves gameplay when dealing with large, complex battles with thousands of units on the map. The upcoming Planetary Annihilation should also be a contender for this, but isn't currently in a useful state for benchmarking.
  • Traciatim - Tuesday, June 04, 2013

    I kind of hope Planetary Annihilation will have both server and client benchmarks available, since this seems like it would be a pretty amazing platform for benchmarking.
  • IanCutress - Tuesday, June 04, 2013

    Interesting suggestion - is SupCom2 still being updated for performance in drivers? Does playback come out with the time automatically, or is it something I'll have to try and code with a batch file? Please email me with details if you would like; I've never touched SupCom2 before.

    Ian
  • yougotkicked - Tuesday, June 04, 2013

    This sounds quite interesting, though I wonder if the AI is runtime bound rather than solution bound, as this could make the testing somewhat nondeterministic.

    To clarify what I mean: a common method in AI programming is to let an algorithm continue searching for better and better solutions, interrupting it when a time limit has passed and taking the best solution found so far (see the anytime-search sketch after the comments). Such approaches can result in inconsistent gameplay when pitting multiple AI units against each other, which may change the game state too much between trials to serve as a good testing platform.

    Even if the AI does use this approach, it may not bias the results enough to matter, so I guess the only way to be sure is to run the tests a few times and see how consistent the results are on a single test system.
  • Zoeff - Tuesday, June 04, 2013

    Forget about SupCom2 - that game has been scaled down quite a bit compared to SupCom1 and isn't as demanding on CPUs. There's also an active SupCom1 community that has pushed out, and is still pushing out, community-made patches. :-)

    SupCom actually has a built-in benchmark that plays a scripted map with some fancy camera work. Anyone can launch it by adding "/map perftest" to their shortcut. That said, it doesn't seem to work properly anymore after several patches, nor does it give any useful data, as the sim score is capped at 10k for today's CPUs. And yet it's extremely easy to cripple any CPU you throw at it by simply playing the game. Just open up an 81x81km map with 7 AI enemies and watch your computer slow to a crawl as the map starts filling up.

    And yes, the AI is "solution bound". Replays of recorded games with AI in them wouldn't work otherwise.

    I wonder if somebody could create a custom SupCom1 benchmark... *Hint Hint*
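
On garrun's replay-timing suggestion, here is a minimal sketch of such a harness, assuming the game can be launched from the command line with a replay file and exits when playback finishes. The executable name and /replay flag below are hypothetical placeholders rather than SupCom's actual CLI; the point is simply timing the playback with a wall clock, which also answers Ian's question about having to code the timing externally.

    import subprocess
    import time
    from statistics import mean

    # Hypothetical command line; adjust to however the game actually
    # accepts a replay file. These flags are placeholders, not the
    # real SupCom CLI.
    GAME_CMD = ["SupremeCommander2.exe", "/replay", "benchmark_8p.replay"]

    def time_replay(runs: int = 3) -> float:
        """Play the recorded game back `runs` times and return the
        average wall-clock seconds per playback. Lower is better: a
        faster CPU finishes the simulation sooner."""
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            subprocess.run(GAME_CMD, check=True)  # blocks until playback ends
            timings.append(time.perf_counter() - start)
        return mean(timings)

    if __name__ == "__main__":
        print(f"Average playback time: {time_replay():.1f} s")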
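
On yougotkicked's point about time-bounded AI, the pattern he describes is often called "anytime" search: keep improving the best solution found so far until a deadline interrupts the work. A minimal sketch, with all names illustrative:

    import random
    import time

    def anytime_search(evaluate, candidates, time_limit_s: float):
        """Sample candidate solutions until the deadline, always keeping
        the best one seen so far. A faster CPU examines more candidates
        before time runs out, so the returned solution can differ from
        run to run; that is the nondeterminism described above."""
        deadline = time.perf_counter() + time_limit_s
        best, best_score = None, float("-inf")
        while time.perf_counter() < deadline:
            candidate = random.choice(candidates)
            score = evaluate(candidate)
            if score > best_score:
                best, best_score = candidate, score
        return best, best_score

    # Toy usage: score "plans" with a cheap heuristic under a 10 ms budget.
    plans = list(range(1000))
    print(anytime_search(lambda p: -(p - 700) ** 2, plans, 0.010))

A solution-bound AI, by contrast, does a fixed amount of work regardless of CPU speed, which is why Zoeff notes that replays of recorded games with AI players stay deterministic.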
