Metro 2033

Our first analysis is with the perennial reviewers’ favorite, Metro 2033. It appears in a lot of reviews for a couple of reasons: it has a benchmark GUI that anyone can use, and it is often heavily GPU limited, at least in single-GPU mode. Metro 2033 is a strenuous DX11 benchmark that can challenge most systems that try to run it at high-end settings. Developed by 4A Games and released in March 2010, the game includes a built-in DirectX 11 Frontline benchmark, which we use to test the hardware at 1440p with full graphical settings. Results are given as the average frame rate from the second batch of four runs, as Metro has a tendency to inflate the scores of the first batch by up to 5%.
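
To make the averaging explicit, here is a minimal sketch of how a reported result would be derived under those rules: discard the first batch of four runs and take the mean of the second batch. This is an illustration only; the helper name, the example FPS values, and the use of Python are our own assumptions and not part of the Metro 2033 benchmark tool.

    # Minimal sketch of the averaging described above (assumed workflow, not
    # actual benchmark output): eight passes in two batches of four, reporting
    # only the mean of the second batch because the first batch tends to score
    # up to ~5% high.
    def second_batch_average(fps_values, batch_size=4):
        """Return the mean FPS of the second batch of benchmark runs."""
        if len(fps_values) < 2 * batch_size:
            raise ValueError("need at least two full batches of results")
        second_batch = fps_values[batch_size:2 * batch_size]
        return sum(second_batch) / batch_size

    if __name__ == "__main__":
        # Hypothetical example: the first four runs come in roughly 5% high.
        runs = [33.1, 33.4, 32.9, 33.2, 31.5, 31.7, 31.4, 31.6]
        print("Reported result: %.1f FPS" % second_batch_average(runs))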

One 7970

Metro 2033 - One 7970, 1440p, Max Settings

With one 7970 at 1440p, every processor gets a full x16 allocation and there seems to be no split between any of the processors with four threads or more. Processors with two threads fall behind, but not by much, as the X2-555 BE still manages 30 FPS. There seems to be no split between PCIe 3.0 and PCIe 2.0, or with respect to memory.

Two 7970s

Metro 2033 - Two 7970s, 1440p, Max Settings

When we start using two GPUs in the setup, the Intel processors have an advantage, with those running PCIe 2.0 a few FPS ahead of the FX-8350. Both core count and single-thread speed seem to have some effect (the i3-3225 is quite low, and the FX-8350 beats the X6-1100T).

Three 7970s

Metro 2033 - Three 7970s, 1440p, Max Settings

More results in favour of Intel processors and PCIe 3.0, with the i7-3770K in an x8/x4/x4 configuration surpassing the FX-8350 in an x16/x16/x8 configuration by almost 10 frames per second. There seems to be no advantage to a Sandy Bridge-E setup over an Ivy Bridge one so far.

Four 7970s

Metro 2033 - Four 7970s, 1440p, Max Settings

While we have limited results here, PCIe 3.0 beats PCIe 2.0 by 5%.

One 580

Metro 2033 - One 580, 1440p, Max Settings

From dual-core AMD all the way up to the latest Ivy Bridge, results for a single GTX 580 are all roughly the same, indicating a GPU throughput limited scenario.

Two 580s

Metro 2033 - Two 580s, 1440p, Max Settings

As with one GTX 580, we are still GPU limited here.

Metro 2033 conclusion

A few points are readily apparent from the Metro 2033 tests: the more powerful the GPU setup, the more important the CPU choice becomes, but that choice does not really matter until you get to at least three 7970s. At that point, you want a PCIe 3.0 setup more than anything else.


Comments

  • majorleague - Wednesday, June 5, 2013 - link

    Here is a YouTube link showing the 3DMark 11 score and Windows Experience Index rating for the 4770K 3.5GHz Haswell. Not overclocked.
    This is apparently around 10-20 FPS slower than the 6800K in most games. And almost twice the price!!
    Youtube link:
    http://www.youtube.com/watch?v=k7Yo2A__1Xw
  • kilkennycat - Wednesday, June 5, 2013 - link

    Quote:" The only way to go onto 3-way or 4-way SLI is via a PLX 8747 enabled motherboard, which greatly enhances the cost of a motherboard build. This should be kept in mind when dealing with the final results."

    The only way? X79 supports up to four x8 channels of PCIe 2/3.
    The 4-core 3820 overclocks readily, and on an X79 board it is a very small cost increase
    over a high-end non-PLX8747 1155-socket setup. Plus there is the upgrade benefit of stepping up to the 6-core 3930K if one wants to combine usage for professional multicore applications with gaming.
  • random2 - Wednesday, June 5, 2013 - link

    "What we see is 30.73% of gamers running at 1080p, but 4.16% of gamers are above 1080p."

    So an article and benches are provided for the benefit of the 4.16% of gamers who might be running more pixels, vs the 65% (almost 3 million) lion's share of gamers who must be running at fewer pixels than 1080p. Very strange.
  • Dribble - Thursday, June 6, 2013 - link

    Just to point out the blindingly obvious, but who would spend big $$$ on a 1440p monitor and a top-end GPU and then buy a low-end budget CPU (A8-5600)...

    The realistic minimum recommendation is going to be an i5-3570K.
  • xineis - Thursday, June 6, 2013 - link

    So, how would a 955BE perform compared to the CPUs on the test? From what I understand, I should just keep this CPU, as a new one is not going to make much of a difference?
  • Zoatebix - Friday, June 7, 2013 - link

    Thank you for doing all this work. A great follow-up to the original!

    Could you please correct some charts on the CPU Benchmarks page, though? The "Video Conversion - x264 HD Benchmark" section is displaying the charts for the "Grid Solvers - Explicit Finite Difference" section.
  • Klimax - Saturday, June 8, 2013 - link

    Frankly, not the best article. The resolution is too high for the GPU, and then a CPU is recommended based on that; a CPU which will not provide the performance needed for games. (TechReport showed that an APU is not a good idea when paired with a real GPU; FPS might be in range, but latency goes to hell.)
  • JNo - Sunday, June 9, 2013 - link

    Ian, I'm afraid I have to agree with some of the naysayers here. You've tried so hard to have clean *scientific* analysis that you've failed to see the wood for the trees. In actual fact I fear you've reached the opposite of a scientific conclusion *because* you only focussed on easily obtainable/reproducible results.

    Just because results for modern games are hard to obtain doesn't mean you can ignore them, despite it being a hard path to walk. I have 1440p but agree that it's not relevant to the vast majority, and anyone who can afford a 1440p monitor won't care about saving $40 on an AMD A8 vs a core i5. So you have to be *realistic* (as well as scientific).

    I know from a few years of international finance analysis that when doing an independent study, there is a chance you can come to a conclusion that flies in the face of the market or common opinion. You have to be *SO* careful when this happens and quadruple check what you have ended up with, because 99% of the time the market or 'hive mind' is correct and there is an error or misunderstanding in your own work. After all, the conglomerate conclusion of hundreds of often intelligent people is hardly likely to be wrong, even if you are a smart guy. The chance that you have found the truth and that everyone else is wrong really is about 1% (yes, it does happen, but it is a once in a blue moon type of event).

    It might seem a huge hit to admit that much of your hard work was misdirected but it could save more pain in the long run to go back to the drawing board and consider what you are trying to achieve and how best to go about it. A very small sample of older titles at unpopular resolutions really could skew results to be misleading.
  • CiccioB - Wednesday, June 12, 2013 - link

    I agree. However, we still have to understand what thesis Ian wanted to demonstrate.
    If it was "AMD CPUs don't have to appear so bad vs Intel", the strategy used for the demonstration is quite good.
    On the other hand, if it was "Let's see which is the best CPU for playing games", the strategy is a complete fail. And it is still partially a fail if it were "Let's see which is the cheapest CPU to cope with a bottlenecked GPU", as those old games, apart from Civ5, do not have any complex AI or scripts, which are CPU-intensive tasks.
    If I were to judge this work as homework I would grade it an F, because it is intended for a small part of the market, uses old benchmarks that are not valid today, is incomplete (lack of FCAT), and has a wrong setup (bottlenecking GPUs to evaluate CPU performance?).
    Wrong in all aspects unless, as said, the intent was to show that AMD CPUs are just trailing Intel's most expensive ones instead of being a complete generation behind. In that case the evaluation could be a B, but it becomes quite limited if we look at the represented market (is the 3% of a market that is capable of spending well more than the average gamer a good target for demonstrating that they can spare a few bucks by using an otherwise castrated CPU?)

    For all these reasons I would say that this is one of the worst articles I have ever read on this site. It shows some incompetence or, worse, a bias.
  • Filiprino - Thursday, June 20, 2013 - link

    It's cool that you test old CPUs, so we can see the improvement of CPU processing power over the years.
