Metro 2033

Our first analysis is with the perennial reviewers' favorite, Metro 2033.  It appears in a lot of reviews for a couple of reasons: it has a very easy-to-use benchmark GUI, and it is often very GPU limited, at least in single-GPU mode.  Metro 2033 is a strenuous DX11 benchmark that can challenge most systems that try to run it at any high-end settings.  Developed by 4A Games and released in March 2010, it includes the DirectX 11 Frontline benchmark, which we use to test the hardware at 1440p with full graphical settings.  Results are given as the average frame rate from a second batch of four runs, as Metro has a tendency to inflate the scores of the first batch by up to 5%.
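The two-batch scoring described above can be sketched in a few lines. This is a minimal illustration with hypothetical helper names and made-up numbers, not our actual harness: the first batch of four runs is discarded as warm-up (its scores tend to be inflated), and only the second batch's average is reported.

```python
def average_fps(runs):
    """Mean frame rate across a batch of runs."""
    return sum(runs) / len(runs)

def score_from_batches(batch1, batch2):
    # Discard batch1 entirely (warm-up inflation of up to ~5%);
    # report only the second batch's average.
    return average_fps(batch2)

# Example: a first batch inflated by ~5% versus a steadier second batch.
first  = [36.1, 35.8, 36.3, 35.9]
second = [34.4, 34.6, 34.2, 34.8]
print(round(score_from_batches(first, second), 2))  # 34.5
```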

One 7970

Metro 2033 - One 7970, 1440p, Max Settings

Almost all of our test results fall between 31 and 35 FPS, a spread of roughly 10% between the Nehalem CPUs and the latest Intel and AMD CPUs.

Two 7970s

Metro 2033 - Two 7970s, 1440p, Max Settings

Doubling up to two 7970s puts the Nehalems in the same ballpark as the Piledriver CPUs, while the quad-core i5-4670K performs similarly to the full-fat i7-4770K.  Any quad-core Intel CPU from Sandy Bridge onwards hits a 60 FPS average.

Three 7970s

Metro 2033 - Three 7970s, 1440p, Max Settings

At three GPUs we see a bit more separation, with the Nehalems losing out due to IPC - only on the NF200-enabled motherboard do we reach 70 FPS.  There is no benefit in moving to the hex-core Ivy Bridge-E i7-4960X, but the jump from the 4670K to the 4770K nets five FPS.

One 580

Metro 2033 - One 580, 1440p, Max Settings

Similar to the 7970s, most modern CPUs perform the same.  Beware of single-core CPUs, however, with the G465 not faring well.

Two 580s

Metro 2033 - Two 580s, 1440p, Max Settings

Similarly, with dual NVIDIA GPUs there is not much difference - ~3 FPS at most, unless you are dealing with dual-core CPUs.  Interestingly, the results seem to be a little varied within that 41-44 FPS band.

Metro 2033 Conclusion

In terms of single GPU, almost all of the CPUs we have tested perform within a margin of each other.  On dual AMD GPUs we start to see a split, with the older Nehalem CPUs falling under 60 FPS.  On tri-GPU setups the i5-4430 performs close to the Nehalems, and moving from the 4670K to the 4770K merits a jump from 72.47 FPS to 74-77 FPS, depending on lane allocation.

Comments

  • tackle70 - Thursday, October 3, 2013 - link

    The 8350 is with the 2600k, not the 3930k...

    So yeah, it's a very good showing for AMD, but not as good as what you indicate. Also, according to sweclockers, an overclocked i5 is still superior to an overclocked 83xx CPU, so make of that what you wish.

    I'm just glad we're seeing games starting to use more than 2-4 threads effectively.
  • Traciatim - Thursday, October 3, 2013 - link

    Much more likely is that games will just become less and less reliant on CPU power because of the terrible netbook processors in the consoles and will instead rely more and more on the GPU. The PC versions of games will just be the same game with a high res texture pack and some extra graphics bling to use up GPU cycles while your processor sits around shuffling a little data.
  • Flunk - Friday, October 4, 2013 - link

    I'm not sure AMD will benefit that much. As soon as consumer CPUs have a reason to have more cores, they'll just release a new chip with more cores. There is absolutely no reason they can't release an 8 or even 12 core desktop processor; they're already selling them for servers.
  • Flunk - Friday, October 4, 2013 - link

    Forgot to mention, Watch Dogs is probably x64 only because they want to use more than 2GB of RAM (which is the limit for the user-mode memory partition in Win32).
  • Nirvanaosc - Thursday, October 3, 2013 - link

    Looking just at the gaming results, does this mean that almost any CPU is capable of feeding the GPU at 1440p, and that it is always GPU limited?
  • Nirvanaosc - Thursday, October 3, 2013 - link

    I mean in single GPU config.
  • Traciatim - Thursday, October 3, 2013 - link

    That's pretty much just the games they picked. If you could reliably benchmark large-scale PC games like Planetside 2, or other popular large-scale MMOs, you'd pretty much see the exact opposite. The trouble is, it seems like no MMO makers give you reliable benchmarking tools, so you can't use them for tests like these.
  • ryccoh - Thursday, October 3, 2013 - link

    I would really like to see a CPU comparison for strategy games.
    For example, one could have a save game of a far advanced game in Civilization 5 or Total War with many AI players on the largest map and then see how the waiting time varies between the different CPUs. This should be feasible, shouldn't it?
    I'm running an i5 2500k @ 4.6GHz and it just isn't cutting it for Civilization 5 on a large map once you're far into the game; it would be nice to see whether getting hyperthreading and more cores would be worth it.
  • glugglug - Thursday, October 3, 2013 - link

    Having waited the ridiculous amounts of time between turns on Civ V, and having dual monitors, I put task manager up on the second monitor while it was running, to see that Civ V *IS NOT MULTITHREADED. AT ALL*. Setting the CPU affinity to make it use only 1 logical core makes absolutely no performance difference at all! The only thing I can think of for why a better result would be seen on quad-core systems would be that it likes having a larger L3 cache.
  • glugglug - Thursday, October 3, 2013 - link

    P.S. If my "Civ V just likes cache" theory is right, an Iris Pro laptop should be the ultimate Civ V machine.
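The affinity experiment glugglug describes can be reproduced in a few lines. The sketch below is a minimal Linux version (using the stdlib `os.sched_setaffinity`; the "Set Affinity" option in Windows Task Manager does the same thing): time a CPU-bound task pinned to one logical core, then again with all cores available. The `busy_work` function is a stand-in for the game's workload; if the two timings match, the workload is effectively single-threaded.

```python
import os
import time

def busy_work(n=2_000_000):
    # Pure-Python CPU-bound loop; single-threaded by construction.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(fn):
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

if hasattr(os, "sched_setaffinity"):  # Linux-only affinity API
    original = os.sched_getaffinity(0)        # current allowed CPU set
    os.sched_setaffinity(0, {min(original)})  # pin to a single logical core
    one_core = timed(busy_work)
    os.sched_setaffinity(0, original)         # restore the full core set
    all_cores = timed(busy_work)
    # For single-threaded code the two timings should be nearly identical.
    print(f"pinned: {one_core:.3f}s  unpinned: {all_cores:.3f}s")
```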
