Metro 2033

Our first analysis is with the perennial reviewers' favorite, Metro 2033. It appears in a lot of reviews for a couple of reasons: it has an easy-to-use benchmark GUI, and it is often very GPU limited, at least in single-GPU mode. Metro 2033 is a strenuous DX11 title that can challenge most systems trying to run it at high-end settings. Developed by 4A Games and released in March 2010, it is tested here using the built-in DirectX 11 Frontline benchmark at 1440p with full graphical settings. Results are given as the average frame rate from a second batch of four runs, as Metro has a tendency to inflate the scores of the first batch by up to 5%.
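
As a quick illustration of that reduction step, here is a minimal Python sketch; the run values and the function name are hypothetical and are only meant to show how a second-batch average is taken, not to reproduce any measured data.

    # Discard the first batch of four runs (their scores can read up to ~5% high)
    # and report the average frame rate of the second batch of four runs.
    def average_second_batch(fps_runs, batch_size=4):
        """Average the second batch of runs, ignoring the warm-up batch."""
        if len(fps_runs) < 2 * batch_size:
            raise ValueError("need two full batches of runs")
        second_batch = fps_runs[batch_size:2 * batch_size]
        return sum(second_batch) / batch_size

    # Hypothetical run data: the first four runs read slightly high.
    runs = [33.1, 32.8, 33.0, 32.9, 31.5, 31.6, 31.4, 31.5]
    print("Reported result: %.1f FPS" % average_second_batch(runs))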

One 7970

Metro 2033 - One 7970, 1440p, Max Settings

With one 7970 at 1440p, every processor gets a full x16 allocation, and there seems to be no split between any of the processors with four threads or more. Processors with two threads fall behind, but not by much, as the X2-555 BE still gets 30 FPS. There seems to be no split between PCIe 3.0 and PCIe 2.0, or with respect to memory.

Two 7970s

Metro 2033 - Two 7970s, 1440p, Max Settings

When we start using two GPUs in the setup, the Intel processors have an advantage, with even the ones running PCIe 2.0 a few FPS ahead of the FX-8350. Both core count and single-threaded speed seem to have some effect (the i3-3225 is quite low, and the FX-8350 beats the X6-1100T).

Three 7970s

Metro 2033 - Three 7970, 1440p, Max Settings

More results in favour of Intel processors and PCIe 3.0: the i7-3770K in an x8/x4/x4 configuration surpasses the FX-8350 in an x16/x16/x8 configuration by almost 10 frames per second. There seems to be no advantage to having a Sandy Bridge-E setup over an Ivy Bridge one so far.

Four 7970s

Metro 2033 - Four 7970, 1440p, Max Settings

While we have limited results, PCIe 3.0 wins against PCIe 2.0 by 5%.

One 580

Metro 2033 - One 580, 1440p, Max Settings

From dual core AMD all the way up to the latest Ivy Bridge, results for a single GTX 580 are all roughly the same, indicating a GPU throughput limited scenario.

Two 580s

Metro 2033 - Two 580s, 1440p, Max Settings

As with one GTX 580, we are still GPU limited here.

Metro 2033 conclusion

A few points are readily apparent from the Metro 2033 tests: the more powerful the GPU setup, the more important the CPU choice becomes, and that choice does not really matter until you get to at least three 7970s. At that point, you want a PCIe 3.0 setup more than anything else.

111 Comments

  • roedtogsvart - Tuesday, June 04, 2013

    Other than 6 core Xeon, I mean...
  • A5 - Tuesday, June 04, 2013

    Hasn't had time to test it yet, and hardware availability. He covers that point pretty well in this article and the first one.
  • chizow - Tuesday, June 04, 2013

    Yeah I understand and agree, would definitely like to see some X58 and Kepler results.
  • ThomasS31 - Tuesday, June 04, 2013

    Seems 1440p is too demanding on the GPU side to show the real gaming difference between these CPUs.

    Is 1440p that common in gaming these days?

    I have the impression (from my CPU change experiences) that we would see different gaps between these CPUs at 1080p, for example.
  • A5 - Tuesday, June 04, 2013

    Read the first page?
  • ThomasS31 - Tuesday, June 04, 2013

    Sure. Though for a single GPU it would still be wiser to be "realistic" and test at 1080p, which is more common (the single-monitor, average-Joe-gamer type of scenario), and go 1440p (or higher) for multi-GPU and enthusiast setups.

    The purpose of the article is choosing a CPU, and that needs to show some sort of scaling in near-real-life scenarios; but if the GPU becomes the bottleneck from the start, it will not be possible to evaluate the CPU's part of the performance equation in games.

    Or maybe it would be good to show some sort of combined score from all the tests, so that Civ V and the other games show some differentiation in the recommendations as well.
  • core4kansan - Tuesday, June 04, 2013

    The G2020 and G860 might well be the best bang-for-buck CPUs, especially if you tested at 1080p, where most budget-conscious gamers would be anyway.
  • Termie - Tuesday, June 04, 2013

    Ian,

    A couple of thoughts for you on methodology:

    (1) While I understand the issue of MCT is a tricky one, I think you'd be better off just shutting it off, or if you do test with it, noting the actual core speeds that your CPUs are operating at, which should be 200MHz above nominal Turbo.

    (2) I don't understand the reference to an i3-3225+, as MCT should not have any effect on a dual-core chip, since it has no Turbo mode.

    (3) I understand the benefit of using time demos for large-scale testing like what you're doing, but I do think you should use at least one modern game. I'd suggest replacing Metro 2033, which has incredibly low fps results due to a lack of engine optimization, with Tomb Raider, which has a very simple, quick, and consistent built-in benchmark.

    Thanks for all your hard work to add to the body of knowledge on CPUs and gaming.

    Termie
  • IanCutress - Tuesday, June 04, 2013

    Hi Termie,

    To answer your questions:

    (1) Unfortunately, a lot of users, even DIY builders and not just system integrators, leave the motherboard settings untouched (even at default memory settings rather than XMP), so choosing a motherboard with MCT might make a difference in performance. Motherboards without MCT also differ among themselves, depending on how quickly they respond to CPU loading and ramp up the speed, and then whether they drop back to idle immediately in a lull or hold the high turbo for a few seconds in case the CPU load kicks back in.

    (2) This is a typo: I was adding too many '+' CPU results at the same time and got carried away.

    (3) While people have requested more 'modern' games, there are a couple of issues. If I add something that has just come out, the older drivers I have to use for consistency will either perform poorly or not scale (case in point, Sleeping Dogs on Catalyst 12.3). If I am then locked into those drivers for a year, users will complain that the review uses old drivers that lack the latest performance increases (which can be up to 8% a month for new, not-yet-optimized titles) and that my FPS numbers are unbalanced. That being said, I am looking at what to do for 2014 and games: it has been suggested that I put in Bioshock Infinite and Tomb Raider, and perhaps cut one or two others. If there are any suggestions, please email me with thoughts. I still have to keep the benchmarks regular and able to run without attention (timedemos with AI are great), otherwise other reviews will end up being neglected. Doing this sort of testing could easily be a full-time job, which in my case should be spent on motherboards; this was something extra I thought would be a good exercise.
  • Michaelangel007 - Tuesday, June 04, 2013

    It is sad to see poor journalism in the form of excuses in an otherwise excellent article. :-/

    1. Any review site that makes excuses for why it ignores FCAT just highlights that it doesn't _really_ understand the importance of _accurate_ frame stats.
    2. We hardcore gamers can _easily_ tell the difference between 60 Hz and 30 Hz. I bought a Titan to play games at 1080p @ 100+ Hz on the Asus VG248QE using nVidia's LightBoost to eliminate ghosting. You do your readers a disservice by again not understanding the issue.
    3. Focusing on 1440p is largely useless, as it means people can't directly compare how their Real-World (tm) systems stack up against the benchmarks.
    4. If your benchmarks are not _exactly_ reproducible across multiple systems, you are doing it wrong. Name & Shame games that don't allow gamers to run benchmarks. Use "standard" cut-scenes for _consistency_.

    It is sad to see the quality of a "tech" article gloss over and trivialize important details.
