Gaming Tests: Strange Brigade

Strange Brigade is set in 1903 Egypt and follows a story very similar to that of the Mummy film franchise. This third-person shooter is developed by Rebellion Developments, which is more widely known for games such as the Sniper Elite and Alien vs Predator series. The game follows the hunt for Seteki the Witch Queen, who has risen once again, and the only ‘troop’ that can ultimately stop her. Gameplay is cooperative-centric, with a wide variety of levels and many puzzles to be solved by the British colonial Secret Service agents sent to put an end to her barbaric and brutal reign.

The game supports both the DirectX 12 and Vulkan APIs and houses its own built-in benchmark, an on-rails experience through the game. For quality, the game offers various options for customization, including textures, anti-aliasing, reflections, and draw distance, and it allows users to enable or disable motion blur, ambient occlusion, and tessellation, among others. As Strange Brigade supports both Vulkan and DX12, we test on both.

  • 720p Low, 1440p Low, 4K Low, 1080p Ultra

The automation for Strange Brigade is one of the easiest in our suite – the settings and quality can be changed by pre-prepared .ini files, and the benchmark is called via the command line. The output includes all the frame time data.
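In practice this boils down to a settings-file swap followed by a scripted launch. As a rough sketch of the idea in Python: the executable name, file paths, and benchmark switch below are hypothetical stand-ins for illustration, not the game's actual interface.

```python
import shutil
import subprocess
from pathlib import Path

# Hypothetical paths and flags for illustration only; the real game
# uses its own settings location and benchmark invocation.
GAME_EXE = Path("C:/Games/StrangeBrigade/StrangeBrigade.exe")
SETTINGS = Path("C:/Users/bench/Saved Games/Strange Brigade/GraphicsOptions.ini")
PRESETS = Path("./presets")  # pre-prepared .ini files, one per test setting

def run_benchmark(preset: str) -> None:
    # Swap in the pre-prepared settings file for this resolution/quality combo.
    shutil.copyfile(PRESETS / f"{preset}.ini", SETTINGS)
    # Launch the built-in benchmark from the command line and wait for it to finish.
    subprocess.run([str(GAME_EXE), "-benchmark"], check=True)

for preset in ("720p_low", "1440p_low", "4k_low", "1080p_ultra"):
    run_benchmark(preset)
```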

[Interactive results charts: Average FPS and 95th Percentile, each at Low Res Low Qual (720p Low), Medium Res Low Qual (1440p Low), High Res Low Qual (4K Low), and Medium Res Max Qual (1080p Ultra).]
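For those curious how the two reported figures fall out of the raw output, here is a minimal sketch of the calculation, assuming a log with one frame time in milliseconds per line (the file name is a placeholder):

```python
# Compute Average FPS and 95th percentile FPS from frame time data.
def summarize(log_path: str) -> tuple[float, float]:
    with open(log_path) as f:
        frame_ms = sorted(float(line) for line in f if line.strip())
    # Average FPS = total frames / total elapsed time in seconds.
    avg_fps = 1000.0 * len(frame_ms) / sum(frame_ms)
    # The 95th percentile figure is the frame rate that 95% of frames
    # meet or exceed, i.e. the 95th-percentile (slowest 5%) frame time.
    p95_ms = frame_ms[int(0.95 * (len(frame_ms) - 1))]
    return avg_fps, 1000.0 / p95_ms

avg, p95 = summarize("frametimes.csv")
print(f"Average: {avg:.1f} FPS, 95th percentile: {p95:.1f} FPS")
```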

All of our benchmark results can also be found in our benchmark engine, Bench.

Comments

  • brucethemoose - Monday, November 2, 2020 - link

    Is HBM2e access latency really lower than DDR4/5?

    I can't find any timing info or benchmarks, but my understanding is that it's lower than GDDR6, which already has much higher latency than DDR4.
  • PeachNCream - Monday, November 2, 2020 - link

    I'd like to say thanks for this review! I really love the look back at older hardware in relation to modern systems. It really shows that, in processor power terms, Broadwell/Haswell remain fairly relevant, and the impact of eDRAM (or non-impact in various workloads) makes for really interesting reading.
  • brucethemoose - Monday, November 2, 2020 - link

    Another possibility: the "Radeon Cache" on an upcoming APU acts as a last level cache for the entire chip, just like Apple (and Qualcomm?) SoCs.

    There are no extra packaging costs, no fancy second chip, and it would save power.
  • Jorgp2 - Monday, November 2, 2020 - link

    You do realize that Intel has had that for about as long as they've had GPUs on their CPUs, right?
  • brucethemoose - Monday, November 2, 2020 - link

    You mean the iGPUs share L3?

    Well, it wasn't a particularly large cache or a powerful GPU until Broadwell came around.
  • Jorgp2 - Tuesday, November 3, 2020 - link

    >Well, it wasn't a particularly large cache or a powerful GPU until Broadwell came around.

    Larger than the caches on even AMD's largest GPUs until recently.

    Hawaii had a 4MB cache; Vega had 6MB, I believe.
  • eastcoast_pete - Monday, November 2, 2020 - link

    Thanks Ian, great article! Regarding a large, external L4 cache: any guess on how the speed and latency of eDRAM made on a more modern process would compare with Broadwell's 22 nm version? Let's say, if made on Intel's current 14 nm (++ etc.)? And, if that speeds it up enough to make it significantly better than current fast DDR4, would that be a way for Intel to put some "electronic nitrous" on its Tiger Lake and Rocket Lake chips? Because they do need something, or they'll get spanked badly by the new Ryzens.
  • brucethemoose - Monday, November 2, 2020 - link

    I'm guessing most of the latency comes from the travel between the chips, not from the speed of the eDRAM itself. So a shrink wouldn't help much, but EMIB might?

    There is talk of replacing the on-chip SRAM in L3 caches with eDRAM, kind of like what IBM already does. So basically, it's a size vs. speed tradeoff, which is very interesting indeed.
  • quadibloc - Monday, November 2, 2020 - link

    Well, AMD seems to think it was a good idea, given the 128 MB Infinity Cache on their latest graphics cards...
  • Leeea - Monday, November 2, 2020 - link

    Close, but not quite the same.

    AMD has its Infinity Cache on the GPU die; one piece of silicon for the whole thing. This may allow faster I/O and lower power consumption.

    Intel's eDRAM cache was a separate piece of silicon altogether.
