In keeping with our desire to refresh our GPU test suite periodically, we’re going to be redoing it to rotate in some more modern games, including some DirectX 11 titles capable of taking advantage of this generation of GPUs’ full capabilities. And while we already have a pretty solid idea of what we’re going to run, we wanted to throw out this question anyhow and see what responses we get.

What games would you like to see in our next GPU test suite, and why?

What we’d like to see is whether our choices line up with what our readers would like to see. We can’t promise that we’ll act on any specific responses, but we have our eyes and ears open to well-reasoned suggestions. So let us know what you think by commenting below.

240 Comments

  • Sniffet - Saturday, March 27, 2010 - link

    I don't play many games, but I use the GPU for computing, and it would be very interesting to see a benchmark of the upcoming cards with renderers like Octane, Iray (not released yet) and Arion.

    Octane's unbiased renderer would be a great indication of how the graphics cards perform in a workstation: http://www.refractivesoftware.com

    Also some Arion render times would be nice: http://www.randomcontrol.com

    Since all these renderers are based on CUDA, it leaves AMD out in the cold, and this I think is a very important reason to choose your graphics card well. Also some benchmarks for Adobe Premiere CS5 (also based on CUDA) would be interesting indeed.


  • Werelds - Thursday, March 25, 2010 - link

    I'll just do this in bullet points for clarity.

    - OpenGL games; as said earlier in the comments, there are non-Windows gamers out there, and there are gradually more games supporting OpenGL again.

    - OpenCL/DirectCompute? I know most of it is still in beta, but it's something that will hopefully see more use in games in the near future.

    - Despite being synthetic, Unigine's engine is actually going to be used in a number of games, some of which are already in development.
  • Werelds - Thursday, March 25, 2010 - link

    Forgot to mention another thing.

    Mention which games are based on which engine (and create a shortlist of other games based on it), and make small notes on whether they're optimised for one vendor or feature. Even Crysis, for example, has a couple of CFG settings that are enabled only for NVIDIA by default but should be available for ATI as well (fixing these settings fixes the FPS drop from caching that ATI gets at the start - TWIMTBP, eh?).
  • kevith - Wednesday, March 24, 2010 - link

    Is it on purpose that there are these weird commercials disguised as posts in the forum?
  • kevith - Wednesday, March 24, 2010 - link

    One thing that seems to be missing from almost every GPU review is the quality of the rendered pictures. Is that because there really isn't any difference?

    Because as I see it, I would love to sacrifice a few FPS if the picture quality is better on one card than on another. Of course such a thing is very hard to measure or describe, as it is a matter of personal taste.

    But - given there actually are differences - it would be something NEW in graphics card tests, and not just another truckload of - admittedly necessary - games and synthetics.

    Kent Thomsen, DK,EU.
  • Hadenman - Monday, March 22, 2010 - link

    I would like to see Metro 2033 and Crysis 2. I also like the idea of testing some MMOs at max settings. I have this (http://www.overclock.net/system.php?i=50333) setup, which ain't too bad imo, and I play Aion. I can turn everything all the way up except for AA. Once I push AA past 2x, it starts to turn into a slide show.
  • jiulemoigt - Monday, March 22, 2010 - link

    Well, an RTS with lots of particles like DoW II punishes some fairly up-to-date systems, so it's useful for seeing how hard the CPU and the GPU are each being pushed.

    As far as RPGs go, Dragon Age has some real-time cut scenes you might be able to use. With a fast enough connection you could test several machines in an MMO by logging in and walking around the same area. Even rotate the cards around and repeat the test.

    Last, if you want a synthetic test with relevance, download UE3, create a sample level, and test that. Many companies license the Unreal Engine and then add in their own twists, so it gives you an idea of the baseline they start from.
  • BigMoosey74 - Saturday, March 20, 2010 - link

    I want to put in a plug for Battlefield: Bad Company 2. The Frostbite engine is pretty GPU-heavy and should be tapping into DX10.1 and DX11 to help test out the newer cards. Although the BF series has never been too graphically intensive, Bad Company 2 seems to be quite the opposite.

    CPU discussion came up, but I think it is an interesting subject because the CPU does affect how a game runs. Usually, reviews just get around this by overclocking to a point where the CPU isn't a bottleneck.

    For a total 180, have you considered doing a total game benchmark that would tackle both CPU and VGA performance? Because reviews usually use uber top-end systems and OC their CPUs to extreme speeds, the game performance numbers usually don't translate for many people. Having targeted performance numbers for a specific game (even if disabling cores and downclocking CPUs is the means of getting there) is really helpful information. Thanks!
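The "disable cores" approach the commenter suggests can be done on Linux through the standard CPU-hotplug sysfs interface. Below is a minimal sketch of that idea (not from the comment itself); it only prints the commands as a dry run, and the 4-core machine and benchmark command are assumptions for illustration.

```shell
# Hypothetical sketch of a CPU-scaling benchmark pass using Linux's
# CPU-hotplug sysfs files. This PRINTS the commands (dry run); pipe
# the output to "sudo sh" to actually apply them.
plan_cores() {
    # Print the sysfs writes needed to leave exactly $1 cores online
    # on an assumed 4-core machine (cpu0 cannot be taken offline).
    cores=$1
    for id in 1 2 3; do
        if [ "$id" -lt "$cores" ]; then state=1; else state=0; fi
        echo "echo $state > /sys/devices/system/cpu/cpu$id/online"
    done
}

for cores in 4 2 1; do
    echo "# Benchmark pass with $cores core(s) online:"
    plan_cores "$cores"
    # ./game_benchmark --log "fps_${cores}cores.txt"   # placeholder command
done
```

Re-running the same in-game benchmark at each core count shows where frame rates stop scaling, i.e. where the game becomes CPU-bound.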
  • DarkUltra - Saturday, March 20, 2010 - link

    Hi!
    I just read that the GF100 / Fermi will have multiple setup/geometry engines. This is a huge step forward. All previous and current 3D cards I know of, except the GTX 460 and 480, have a single setup engine.

    Let me quote the gf100 anandtech preview:
    - To put things in perspective, between NV30 (GeForce FX 5800) and GT200 (GeForce GTX 280), the geometry performance of NVIDIA’s hardware only increases roughly 3x in performance. Meanwhile the shader performance of their cards increased by over 150x. Compared just to GT200, GF100 has 8x the geometry performance of GT200, and NVIDIA tells us this is something they have measured in their labs.
    http://www.anandtech.com/video/showdoc.aspx?i=3721...

    Now the question I wonder about is: can we test whether the older cards and the new GF100 are bottlenecked by the geometry engine? I have never heard a review mention the performance of the setup/geometry engine, or whether it can be a bottleneck at all. To test this you would have to run at a low resolution with a fast CPU, with the GPU at different clock rates.
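The clock-scaling test the commenter describes boils down to checking how closely the frame rate tracks the GPU clock. Here is a minimal sketch (not from the comment; the numbers are hypothetical): if FPS rises nearly 1:1 with clock at a low, CPU-unconstrained resolution, the GPU front end or shaders are the limit; if FPS barely moves, something else is the bottleneck.

```python
def scaling_factor(fps_low, fps_high, clock_low, clock_high):
    """Fraction of a GPU clock increase that shows up as extra FPS.

    ~1.0 -> performance scales with GPU clock (GPU-limited).
    ~0.0 -> FPS ignores the clock change (CPU or other bottleneck).
    """
    fps_gain = fps_high / fps_low - 1.0
    clock_gain = clock_high / clock_low - 1.0
    return fps_gain / clock_gain

# Hypothetical runs at low resolution with a fast CPU:
print(scaling_factor(60.0, 72.0, 600, 720))  # FPS scaled 1:1 with clock -> 1.0
print(scaling_factor(60.0, 61.0, 600, 720))  # FPS barely moved -> ~0.08
```

Running this comparison separately with core-clock and shader-clock changes (on cards that clock them independently) would hint at whether the fixed-function setup engine specifically is the limit.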
  • Azethoth - Friday, March 19, 2010 - link

    I pretty much only play WoW these days, and with a 30" Apple monitor I am interested in performance at max resolution.

    As it is, this means no max settings even on my Radeon 5970, so it is not a meaningless benchmark. Enabling max everything brings it to an unplayable crawl.

    PS: There is something evil about the stairs down to Blood Queen Lanathel's room. 60fps -> 17fps or less. (2-3fps on my old 4870 X2 at minimum settings). Each time I look at them I hear "Stare into the abyss..."
