Total War: Attila

The Total War franchise moves on to Attila, another title developed by The Creative Assembly. It is a stand-alone strategy game set in AD 395, where the main storyline puts the player in control of the leader of the Huns in a bid to conquer parts of the world. Graphically, the game can render hundreds or thousands of units on screen at once, each with its own animations, and can put even the big graphics cards to task. For our testing, the in-game scripted benchmark is used with the option for ‘unlimited video memory’ enabled.
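
For reference, the frame rates in these charts are averages derived from per-frame render times, and (as the comments below note) frame-time percentiles are what expose the hitches a plain average can hide. The sketch below is a minimal illustration of that arithmetic, assuming a hypothetical frames.csv log with one frame time in milliseconds per line; it is not the actual capture tooling used for these tests.

    # Minimal sketch: average FPS and a stutter-sensitive metric
    # (99th-percentile frame time) from a log of per-frame render times.
    # "frames.csv" is a hypothetical file with one frame time (ms) per line.
    import csv
    import statistics

    def summarize(path: str) -> None:
        with open(path, newline="") as f:
            frame_times_ms = [float(row[0]) for row in csv.reader(f) if row]

        # Average FPS is the reciprocal of the mean frame time.
        avg_fps = 1000.0 / statistics.mean(frame_times_ms)

        # The 99th-percentile frame time captures the slowest 1% of frames,
        # i.e. the hitches that drag down perceived smoothness.
        p99_ms = sorted(frame_times_ms)[int(0.99 * (len(frame_times_ms) - 1))]

        print(f"Average FPS: {avg_fps:.1f}")
        print(f"99th-percentile frame time: {p99_ms:.1f} ms "
              f"({1000.0 / p99_ms:.1f} FPS equivalent)")

    if __name__ == "__main__":
        summarize("frames.csv")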

For this test we used the following settings with our graphics cards:

Total War: Attila Settings

                GPU                          Resolution   Quality
  Low GPU       Integrated Graphics          1280x720     Performance
                ASUS R7 240 1GB DDR3
  Medium GPU    MSI GTX 770 Lightning 2GB    1920x1080    Quality
                MSI R9 285 Gaming 2G
  High GPU      ASUS GTX 980 Strix 4GB       1920x1080    Quality
                MSI R9 290X Gaming 4G

Total War: Attila on Integrated Graphics

For the integrated graphics, despite the difference between the APUs and the Core parts, we can see the effect that a 10% difference in GPU frequency and a smaller L3 cache (3MB) have on the i3-6100. The i3-6100TE is the oddball of the group, actually having 4MB of L3 cache, which nudges it ahead of the regular i3-6100 by a small amount. Either way, the Intel GPUs aren't great for Attila gaming at 720p Low.

Total War: Attila on ASUS R7 240 DDR3 2GB ($70)

Total War: Attila on MSI R9 285 Gaming 2GB ($240)

Total War: Attila on MSI GTX 770 Lightning 2GB ($245)

Total War: Attila on MSI R9 290X Gaming LE 4GB ($380)

Total War: Attila on ASUS GTX 980 Strix 4GB ($560)

With the discrete graphics cards, the Core i3s again sit at or near the top in a regular staircase. Attila still seems to be a bit of a hog for frame rates at 1080p Ultra, barely scraping a 30 FPS average on the GTX 980 with the Core i3 parts.

Comments

  • tipoo - Monday, August 8, 2016 - link

    Looks like even a Skylake i3 may be able to retire the venerable 2400/2500K, with higher frame rates and better frame times at that. However, a native quad core does prevent larger dips.
  • Kevin G - Monday, August 8, 2016 - link

    I have a feeling much of that is due to the higher base clock on the Skylake i3 vs. the i5 2500K. Skylake's IPC improvements also help boost performance here.

    The real challenge is whether the i3 6320 can best the i5 2500K at the same 3.9 GHz clock speed. Sandy Bridge was a good overclocker, so hitting those figures shouldn't be difficult at all.
  • tipoo - Monday, August 8, 2016 - link

    That's true, overclocked the difference would diminish. But you also get modernities like high-clocked DDR4 in the switchover.

    At any rate, it's funny that a dual core i3 can now fluidly run just about everything; its two cores are probably faster than the eight in the current consoles.
  • Lolimaster - Monday, August 8, 2016 - link

    Benchmarks don't tell you about the hiccups when playing with a dual core. Especially with things like Crysis 3 or, even worse, Rise of the Tomb Raider, where you get like half the fps just by using a dual core vs a cheapo Athlon 860K.
  • gamerk2 - Monday, August 8, 2016 - link

    That's why frame times are also measured, which catch those hitches.
  • Samus - Tuesday, August 9, 2016 - link

    I had a lot of issues with my Sandy Bridge i3-2125 in Battlefield 3 circa 2011 with lag and poor minimum frame rates.

    After long discussions on the forums, it was determined that disabling hyper-threading actually improved frame rate consistency. So at least on the Sandy Bridge architecture, and probably dating back to Nehalem or even Prescott, hyper-threading (Jackson Technology, or whatever you want to call it) has a habit of stalling the pipeline if there are too many cache misses to complete the instruction. Obviously more cache resolves this, so the issue isn't as prominent on the i7s, and it would certainly explain why the 4MB i3s are more consistent performers than the 3MB variety.

    Of course, the only way to prove whether hyper-threading is causing performance inconsistency is to disable it. It'd be a damn unique investigation for AnandTech to study how IPC improvements over the years have affected hyper-threading performance, perhaps even dating back to the P4.
  • AndrewJacksonZA - Wednesday, August 10, 2016 - link

    HOW ON EARTH DID I MISS THIS?!?!

    Thank you for introducing me to Intel's tech known as "Jackson!" This is now *SO* on my "To Buy" list!

    Thank you Samus! :-D
  • bug77 - Monday, August 8, 2016 - link

    Neah, I went i5-2500K -> i5-6600K and there's no noticeable difference. The best part of the upgrade was the new I/O ports on the new motherboard, but it's a sad day when you upgrade after 4 years and the most you have to show for it is new M.2 or USB 3.1 ports (and USB 3.1 is only added through a third-party chip).
    Sure, if I bench it, the new i5 is faster, but since the old i5 wasn't exactly slow, I can't say that I see a significant improvement.

    Now, if you mean that instead of getting an i5-2500k one can now look at a Skylake i3, I'm not going to argue with you there. Though (money permitting) the boost speed might be nice to have anyway.
  • Cellar Door - Monday, August 8, 2016 - link

    This is a poorly educated comment:

    a) Your perceived speed might be limited by your storage
    b) You don't utilize your CPU's multitasking abilities fully (all cores)
  • Duckeenie - Monday, August 8, 2016 - link

    Why did you continue to post your comment if you believed you were making poorly educated points?
