DiRT 3

For racing games our racer of choice continues to be DiRT, which is now in its 3rd iteration. Codemasters uses the same EGO engine across its DiRT, F1, and GRID series, so the performance of EGO has been relevant to a number of racing games over the years.

[DiRT 3 benchmark charts: 2560, 1920, and 1680]

With DiRT 2 NVIDIA often held a slight edge in performance, and due to the reuse of the EGO engine this hasn’t really changed in DiRT 3. As a result the 7970 is still faster than the GTX 580, but not by as much as in other games. At 2560 this manifests itself as a 19% lead, while at 1920 it’s down to 6%, and embarrassingly enough at 1680 the 7970 actually falls behind the GTX 580 by just a hair. DiRT 3 is not particularly shader heavy so that may be part of the reason that the 7970 can’t easily clear the GTX 580 here, but that doesn’t fully explain what we’re seeing. At the end of the day this is all academic since everything north of the GTX 570 can clear 60fps even at 2560, but it would be nice to eventually figure out why NVIDIA does better than average here.

Meanwhile compared to the 6970 the 7970 enjoys one of its bigger leads. Here the 7970 leads by about 45% at 2560 and 1920, dropping slightly to 37% at 1680.

[DiRT 3 - Minimum Frame Rate charts: 2560, 1920, and 1680]

We’ve had a number of requests for minimum framerates over the years, so we’re also including them for DiRT 3, where the minimums have proven to be reliably consistent in our benchmarks.

Given that the 7970 was already seeing a smaller than typical lead over the GTX 580, it’s not wholly surprising to see that it fares a bit worse when it comes to minimums. At 2560 its minimum framerate of 64.2fps means the 7970 delivers smooth performance at every moment, but at the same time this is only 11% better than the GTX 580. At 1920 it becomes a dead heat between the two cards. Meanwhile compared to the 6970 the 7970 is consistently about 45% ahead.
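For reference, those percentages can be turned back into approximate minimum framerates for the other cards. A back-of-the-envelope sketch in Python, assuming the quoted leads are exact (they are rounded, so treat the outputs as estimates):

    # Implied DiRT 3 minimum framerates at 2560, derived from the 7970's
    # reported 64.2fps minimum and the percentage leads quoted above.
    hd7970_min = 64.2                    # reported, fps
    gtx580_min = hd7970_min / 1.11       # 7970 leads by ~11% -> ~57.8fps
    hd6970_min = hd7970_min / 1.45       # 7970 leads by ~45% -> ~44.3fps
    print(f"GTX 580: ~{gtx580_min:.1f}fps, 6970: ~{hd6970_min:.1f}fps")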

292 Comments

  • CrystalBay - Thursday, December 22, 2011 - link

    Hi Ryan, were all these older GPUs (5870, GTX 570, 580, 6950) rerun on the new hardware testbed? If so GJ, lotsa work there.
  • FragKrag - Thursday, December 22, 2011 - link

    The numbers would be worthless if he didn't
  • Anand Lal Shimpi - Thursday, December 22, 2011 - link

    Yep they're all on the new testbed, Ryan had an insane week.

    Take care,
    Anand
  • Lifted - Thursday, December 22, 2011 - link

    How many monitors on the market today are available at this resolution? Instead of saying the 7970 doesn't quite make 60 fps at a resolution maybe 1% of gamers are using, why not test at 1920x1080, which is available to everyone on the cheap and is the same resolution we all use on our TVs?

    I understand the desire (need?) to push these cards, but I think it would be better to give us results the vast majority of us can relate to.
  • Anand Lal Shimpi - Thursday, December 22, 2011 - link

    The difference between 1920 x 1200 vs 1920 x 1080 isn't all that big (2304000 pixels vs. 2073600 pixels, about an 11% increase). You should be able to conclude 19x10 performance from looking at the 19x12 numbers for the most part.

    I don't believe 19x12 is pushing these cards significantly more than 19x10 would, the resolution is simply a remnant of many PC displays originally preferring it over 19x10.

    Take care,
    Anand
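
The arithmetic above is easy to verify; a one-off check in Python (nothing here is assumed beyond the two resolutions being compared):

    # Pixel counts for 1920x1200 vs. 1920x1080 and the relative increase
    pixels_19x12 = 1920 * 1200   # 2,304,000 pixels
    pixels_19x10 = 1920 * 1080   # 2,073,600 pixels
    print(f"{pixels_19x12 / pixels_19x10 - 1:.1%}")   # 11.1% more pixels at 19x12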
  • piroroadkill - Thursday, December 22, 2011 - link

    Dell U2410, which I have :3

    and Dell U2412M
  • piroroadkill - Thursday, December 22, 2011 - link

    Oh, and my laptop is 1920x1200 too, Dell Precision M4400.
    My old laptop is 1920x1200 too, Dell Latitude D800.
  • johnpombrio - Wednesday, December 28, 2011 - link

    Heh, I too have three Dell U2410s and one Dell 2710. I REALLY want a Dell 30" now. My GTX 580 seems to be able to handle any of these monitors, tho Crysis High-Def does make my 580 whine on my 27 inch screen!
  • mczak - Thursday, December 22, 2011 - link

    The text for that test is not really meaningful. Efficiency of the ROPs has almost nothing to do with this test; this is (and has always been) a pure memory bandwidth test (with very few exceptions, such as the ill-designed HD 5830, which somehow couldn't use all its theoretical bandwidth).
    If you look at the numbers you can see that very well: you can pretty much calculate the result if you know the memory bandwidth :-). 50% more memory bandwidth than the HD 6970? Yep, almost exactly 50% more performance in this test, just as expected.
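
mczak's scaling argument is easy to sanity-check against the cards' published memory specs (both run their GDDR5 at an effective 5.5Gbps; the 7970 pairs it with a 384-bit bus versus the 6970's 256-bit). A minimal sketch:

    # Peak memory bandwidth in GB/s = effective data rate (Gbps) x bus width / 8
    def bandwidth_gb_s(data_rate_gbps, bus_width_bits):
        return data_rate_gbps * bus_width_bits / 8

    hd7970 = bandwidth_gb_s(5.5, 384)    # 264 GB/s
    hd6970 = bandwidth_gb_s(5.5, 256)    # 176 GB/s
    print(f"{hd7970 / hd6970:.2f}x")     # 1.50x -> ~50% more, matching the test result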
  • Ryan Smith - Thursday, December 22, 2011 - link

    That's actually not a bad thing in this case. AMD didn't go beyond 32 ROPs because they didn't need to - what they needed was more bandwidth to feed the ROPs they already had.
