Metro: Last Light

As always, kicking off our look at performance is 4A Games’ latest entry in their Metro series of subterranean shooters, Metro: Last Light. The original Metro 2033 was a graphically punishing game for its time, and Metro: Last Light is graphically punishing in its own right. On the other hand it scales well with resolution and quality settings, so it’s still playable on lower-end hardware.

For the bulk of our analysis we’re going to be focusing on our 2560x1440 results, as monitors at this resolution will be what we expect the 290X to be primarily used with. A single 290X may have the horsepower to drive 4K in at least some situations, but given the current costs of 4K monitors that’s going to be a much different usage scenario.

With that said, for 4K we’ve thrown in results for most games at both a high quality setting and a lower quality setting, the latter making it practical to run at 4K off of a single card. Given current monitor prices it won’t make a ton of sense to go with reduced quality settings just to save $550 – and consequently we may not keep the lower quality benchmarks around for future articles – but for the purposes of looking at a new GPU it’s useful to be able to look at single-GPU performance at framerates that are actually playable.

With that said, starting off with Metro at 2560 the 290X hits the ground running on our first benchmark. At 55fps it’s just a bit shy of hitting that 60fps average we love to cling to, but among all of our single-GPU cards it is the fastest, beating even the traditional powerhouse that is GTX Titan. Consequently the performance difference between 290X and GTX 780 (290X’s real competition) is even greater, with the 290X outpacing the GTX 780 by 13%, all the while being $100 cheaper. As we’ll see these results are a bit better than the overall average, but all told we’re not too far off. For as fast as GTX 780 is, 290X is going to be appreciably (if not significantly) faster.

290X also does well for itself compared to the Tahiti based 280X. At 2560 the 290X’s performance advantage stands at 31%, which as we alluded to earlier is greater than the increase in die size, offering solid proof that AMD has improved their performance per mm² of silicon despite the fact that they’re still on the same 28nm manufacturing process. That 31% does come at a price increase of 83% however, which although normal for this price segment serves as a reminder that the performance increases offered by the fastest video cards with the biggest GPUs do not come cheaply.
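For reference, the price/performance tradeoff above reduces to simple arithmetic. A minimal sketch follows; the dollar figures are assumptions back-calculated from the quoted ~83% price premium rather than prices listed in this section, while the 31% figure is the measured 2560x1440 advantage.

```python
# Hypothetical prices consistent with the quoted ~83% price premium (assumed);
# relative performance uses the 31% advantage measured at 2560x1440.
r9_280x = {"price": 299.0, "perf": 1.00}   # baseline (assumed price)
r9_290x = {"price": 549.0, "perf": 1.31}   # +31% performance (assumed price)

price_premium = r9_290x["price"] / r9_280x["price"] - 1.0
perf_per_dollar = {
    "280X": r9_280x["perf"] / r9_280x["price"],
    "290X": r9_290x["perf"] / r9_290x["price"],
}

print(f"Price premium: {price_premium:.0%}")  # ~84% with these assumed prices
print(f"Perf per dollar, 280X: {perf_per_dollar['280X']:.5f}")
print(f"Perf per dollar, 290X: {perf_per_dollar['290X']:.5f}")
```

As the numbers show, the 280X delivers more performance per dollar; that is the usual shape of the high end, where the last 30% of performance costs far more than the first 30%.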

Meanwhile for one final AMD comparison, let’s quickly look at the 290X in uber mode. As the 290X is unable to sustain the power/heat workload of a 1000MHz Hawaii GPU for an extended period of time, at its stock (quiet) settings it has to pull back on performance in order to stay within reasonable operational parameters. Uber mode on the other hand represents what 290X and Hawaii can do when fully unleashed; the noise costs won’t be pretty (as we’ll see), but it builds on 290X’s existing leads, increasing them by another 5%. And that’s really going to be one of the central narratives for 290X once semi-custom and fully-custom cards come online: despite being a fully enabled part, the stock 290X does not give us everything Hawaii is truly capable of.

Moving on, let’s talk about multi-GPU setups and 4K. Metro is a solid reminder that not every game scales similarly across different GPUs, and for that matter that not every game is going to significantly benefit from multi-GPU setups. Metro for its part isn’t particularly hospitable to multi-GPU configurations, with the best setup scaling by only 53% at 2560. This is better than games that won’t scale at all, but it’s not as good as games that see a near-100% performance improvement. This is also why we dropped Metro as a power benchmark, as this level of scaling is a poor showcase for the power/temp/noise characteristics of a pair of video cards under full load.

The real story here of course is that it’s another strong showing for AMD at both 2560 and 4K. At 2560 the 290X CF sees better performance scaling than the GTX 780 SLI – 53% versus 41% – further extending the 290X’s lead. Bumping the resolution up to 4K makes things even more lopsided in AMD’s favor, as at this point the NVIDIA cards essentially fail to scale (picking up just 17%) while the 290X sees an even greater scaling factor of 63%. As such for those few who can afford to seriously chase 4K gaming, the 290X is the only viable option in this scenario. And at 50fps average for 4K at high quality, 4K gaming at reasonable (though not maximum) quality settings is in fact attainable when it comes to Metro.
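The scaling percentages quoted here follow from a simple calculation: the gain of the multi-GPU result over the single-card result. A minimal sketch, where the 55 fps single-card figure comes from our 2560 results above, and the Crossfire figure is back-calculated from the quoted 53% (i.e. it is an assumption for illustration, not a separately listed number):

```python
def scaling_pct(single_fps: float, multi_fps: float) -> float:
    """Multi-GPU scaling: the percentage gain over a single card."""
    return (multi_fps / single_fps - 1.0) * 100.0

single_290x = 55.0             # measured single-290X result at 2560x1440
cf_290x = single_290x * 1.53   # implied by the quoted 53% CF scaling (assumed)

print(f"290X CF scaling at 2560: {scaling_pct(single_290x, cf_290x):.0f}%")
```

Perfect scaling would be 100% (double the framerate); Metro’s 53% sits well short of that, which is why it also makes a poor load test for multi-GPU power and noise.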

Meanwhile for single-GPU configurations, 4K is viable, but only at Metro’s lowest quality levels. This will be the first of many games where such a thing is possible, and the first of many where going to 4K in this manner further extends AMD’s lead. Again, we’re not of the opinion that 4K at these low quality settings is a good way to play games, but it does provide some insight into, and validation of, AMD’s claims that their hardware is better suited for 4K gaming.

Comments
  • ninjaquick - Thursday, October 24, 2013 - link

    so 4-5% faster than Titan?
  • Drumsticks - Thursday, October 24, 2013 - link

    If the 780Ti is $599, then that means the 780 should see at least a $150 (nearly 25%!) price drop, which is good with me.
  • DMCalloway - Thursday, October 24, 2013 - link

    So, what you are telling me is Nvidia is going to stop laughing-all-the-way-to-the-bank and price the 780 Ti for less than current 780 prices? Current 780 owners are going to get HOT and flood the market with used 780's.
  • dragonsqrrl - Thursday, October 24, 2013 - link

    Why is it that this is only ever the case when Nvidia performs a massive price drop? Nvidia price drop = early adopters getting screwed (even though 780 has been out for ~6 months now). AMD price drop = great value for enthusiasts, go AMD! ... lolz.
  • Minion4Hire - Thursday, October 24, 2013 - link

    Titan is a COMPUTE card. A poor man's (relatively speaking) proper compute solution. The fact that it is also a great gaming card is almost incidental. No one needs a 6GB frame buffer for gaming right now. The Titan comparisons are nearly meaningless.

    The "nearly" part is the unknown 780 TI. Nvidia could enable the remaining CUs on 780 to at least give the TI comparable performance to Titan. But who cares that Titan is $1000? It isn't really relevant.
  • ddriver - Thursday, October 24, 2013 - link

    Even much cheaper Radeons completely destroy the Titan, as well as every other Nvidia GPU, in compute. Do not be fooled by a single, poorly implemented test; the Nvidia architecture plainly sucks at double precision performance.
  • ShieTar - Thursday, October 24, 2013 - link

    Since "much cheaper" Radeons tend to deliver 1/16th DP performance, you seem to not really know what you are talking about. Go read up on a relevant benchmark suite on professional and compute cards, e.g. http://www.tomshardware.com/reviews/best-workstati... The only tasks where AMD cards shine are those implemented in OpenCL.
  • ddriver - Thursday, October 24, 2013 - link

    "Much cheaper" relative to the price of the titan, not entry level radeons... You clutched onto a straw and drowned...

    OpenCL is THE open and portable industry standard for parallel computing, did you expect radeons to shine at .. CUDA workloads LOL, I'd say OpenCL performance is all I really need, it has been a while since I played or cared about games.
  • Pontius - Tuesday, October 29, 2013 - link

    I'm in the same boat as you ddriver, all I care about is OpenCL in these articles. I go straight to that section usually =)
  • TheJian - Friday, October 25, 2013 - link

    You're neglecting the fact that everything you can do professionally in openCL you can already do faster in cuda. Cuda is taught in 600+ universities for a reason. It is in over 200 pro apps and has been funded for 7+yrs unlike opencl which is funded by a broke company hoping people will catch on one day :) Anandtech refuses to show cuda (gee they do have an AMD portal after all...LOL) but it exists and is ultra fast. You really can't name a pro app that doesn't have direct support or support via plugin for Cuda. And if you're buying NV and running opencl instead of cuda (like anand shows calling it compute crap) you're an idiot. Why don't they run Premiere instead of Sony crap for video editing? Because Cuda works great for years in it. Same with Photoshop etc...

    You didn't look at folding@home DP benchmark here in this review either I guess. 2.5x faster than 290x. As you can see it depends on what you do and the app you use. I consider F@H stupid use of electricity but that's just me...LOL. Find anything where OpenCL (or any AMD stuff, directx, opengl) beats CUDA. Compute doesn't just mean OpenCL, it means CUDA too! Dumb sites just push openCL because its OPEN...LOL. People making money use CUDA and generally buy quadro or tesla (they own 90% of the market for a reason, or people would just buy radeons right?).
    http://www.anandtech.com/show/7457/the-radeon-r9-2...
    DP in F@H here. Titan sort of wins right? 2.5x or so over 290x :) It's comic both here and toms uses a bunch of junk synthetic crap (bitmining, Asics do that now, basemark junk, F@H, etc) to show how good AMD is, but forget you can do real work with Cuda (heck even bitmining can be done with cuda)

    When you say compute, I think CUDA, not opencl on NV. As soon as you toss in Cuda the compute story changes completely. Unfortunately even Toms refuses to pit OpenCL vs. Cuda just like here at anandtech (but that's because both love OpenCL and hate proprietary stuff). But at least they show you in ShieTar's link (which craps out, remove the . at the end of the link) that Titan kills even the top quadro cards (it's a Tesla remember for $1500 off). It's 2x+ faster than quadro's in almost everything they tested. So yeah, Titan is very worth it for people who do PRO stuff AND game.
    http://www.tomshardware.com/reviews/best-workstati...
    For the lazy, fixed ShieTar's link.

    All these sites need to do is fire up 3dsmax, cinema4d, Blender, adobe (pick your app, After Effect, Premiere, Photoshop) and pit Cuda vs. OpenCL. Just pick an opencl plugin for AMD (luxrender) and Octane/furryball etc for NV then run the tests. Does AMD pay all these sites to NOT do this? I comment and ask on every workstation/vid card article etc at toms, they never respond...LOL. They run pure cuda, then pure opencl, but act like they never meet. They run crap like basemark for photo/video editing opencl junk (you can't make money on that), instead of running adobe and choosing opencl(or directx/opengl) for AMD and Cuda for NV. Anandtech runs Sony Vegas which a quick google shows has tons of problems with NV. Heck pit Sony/AMD vs. Adobe/NV. You can run the same tests in both on video, though it would be better to just use adobe for both but they won't do that until AMD gets done optimizing for the next rev...ROFL. Can't show AMD in a bad light here...LOL. OpenCL sucks compared to Cuda (proprietary or not...just the truth).
