Gaming: Far Cry 5

The latest title in Ubisoft's Far Cry series lands us right into the unwelcoming arms of an armed militant cult in Montana, one of the many middles-of-nowhere in the United States. With a charismatic and enigmatic adversary, gorgeous landscapes of the northwestern American flavor, and lots of violence, it is classic Far Cry fare. Graphically intensive in an open-world environment, the game mixes in action and exploration.

Far Cry 5 supports Vega-centric features such as Rapid Packed Math and Shader Intrinsics, as well as HDR (HDR10, scRGB, and FreeSync 2). We use the in-game benchmark for our data and report the average and 95th percentile frame rates.
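For readers who want to derive the same metrics from their own frame-time logs, below is a minimal sketch in C of one way to compute an average and a 95th percentile frame rate from per-frame render times. It is an illustration only, not the tooling used for this review, and the sample frame times are invented.

    /* Minimal sketch: derive average and 95th percentile frame rates from a
     * log of per-frame render times (milliseconds). Illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_double(const void *a, const void *b)
    {
        double da = *(const double *)a, db = *(const double *)b;
        return (da > db) - (da < db);
    }

    static void report_fps(double *frame_ms, size_t n)
    {
        double total_ms = 0.0;
        for (size_t i = 0; i < n; i++)
            total_ms += frame_ms[i];

        /* Average FPS: total frames divided by total time. */
        double avg_fps = 1000.0 * (double)n / total_ms;

        /* 95th percentile (nearest-rank): the frame time that 95% of frames
         * beat, converted back into an equivalent frame rate. */
        qsort(frame_ms, n, sizeof(double), cmp_double);
        size_t idx = (size_t)(0.95 * (double)(n - 1));
        double p95_fps = 1000.0 / frame_ms[idx];

        printf("Average FPS: %.1f | 95th percentile: %.1f\n", avg_fps, p95_fps);
    }

    int main(void)
    {
        /* Invented sample data, not measured results. */
        double frames[] = { 10.2, 11.0, 9.8, 12.5, 30.1, 10.4, 10.9, 11.3 };
        report_fps(frames, sizeof frames / sizeof frames[0]);
        return 0;
    }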

All of our benchmark results can also be found in our benchmark engine, Bench.

[Far Cry 5 benchmark graphs: Average FPS and 95th Percentile at IGP, Low, Medium, and High settings]

Comments (206)

  • Drazick - Sunday, November 17, 2019

    The DDR technology is orthogonal.
    I want quad channel and the latest memory available.
  • guyr - Friday, December 20, 2019

    Anything is possible, of course. 5 years ago, who would have predicted 16 cores in a consumer-oriented CPU? However, neither Intel nor AMD has made any moves beyond 2 memory channels in the consumer space. The demand is simply not there to justify the increase in complexity and price. In the professional space, more channels are easily justified and the target market doesn't hesitate to pay the higher prices. So, it's all driven by what the market will bear.
  • alufan - Saturday, November 16, 2019

    Weird: Intel launches its chip a couple of weeks ago and it stayed up front as the main story for over a week; AMD launches what is in effect the best CPU ever tested by this site, and it lasts a few days before being pushed aside for another Intel article. I'm sure the intention of the reporters is to be fair and unbiased, but I can see how the commercial motives of the site are being manipulated; looks like Intel is up to its old tricks again. The Threadripper article lasted even less time, but no chips have been tested (or at least released) yet, which I guess makes sense.
  • penev91 - Sunday, November 17, 2019

    Just ignore everything Intel/AMD related on Anandtech. There's been an obvious bias for years.
  • Atom2 - Saturday, November 16, 2019

    There has never been a situation as big as this one, where the benchmark software was benchmarked more than the hardware. A comprehensive overview of historic software development? Whatever the reason, it seems that keeping AVX512 back to only a select few CPUs was an unfortunate decision by Intel, which only contributed to the situation. Yes, you know, if you compile your code with a compiler from 1998 and ignore all the guidelines on how to write fast code... voila. For some reason, though, nobody tries to run 20-year-old CPU code on a GPU.
  • chrkv - Monday, November 18, 2019

    Second page: "On the Ryzen High Performance power plan, our sustained single core frequency dropped to 4450 MHz" - I believe it should just say "the High Performance" power plan here.
    Page 4 "Despite 5.0 GHz all-core turbo being on the 9900K" - should be "9900KS".
  • Irata - Tuesday, November 19, 2019

    Quick question: are any of your benchmarks affected by the MATLAB issue (Ryzen CPUs are crippled because a poor code path is used due to a vendor ID check for "GenuineIntel")?
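For context on the kind of check described in the comment above, here is a stand-alone C sketch that reads the CPUID vendor string and branches on it. It is not MATLAB's or any math library's actual dispatch code; it only illustrates the general mechanism, and the printed messages are invented.

    /* Illustrative only: read the x86 CPUID vendor string ("GenuineIntel",
     * "AuthenticAMD", ...) and branch on it. Not real library dispatch code. */
    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        char vendor[13] = { 0 };

        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
            return 1;

        /* Leaf 0 returns the 12-byte vendor string in EBX, EDX, ECX. */
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);

        if (strcmp(vendor, "GenuineIntel") == 0)
            puts("vendor check passed: optimized code path would be selected");
        else
            puts("vendor check failed: conservative code path would be selected");
        return 0;
    }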
  • twotwotwo - Tuesday, November 19, 2019

    Intel has had these consumer-platform-based "entry-level Xeons" (once E3, now E) for a while. Despite some obvious limits, and the existence of other low-end server options, enough folks want them to seed an ecosystem of rackmount and blade servers from Supermicro, Dell, etc.

    Anyway, the "pro" (ECC/management enabled) variant of Ryzen seems like a great fit for that. 16 cores and 24 PCIe 4 lanes are probably more useful for little servers than for most desktop users. It's also more balanced than the 8/16C EPYCs; it's cool they have 128 lanes and tons of memory channels, but it takes very specific applications to use them all with that few cores (caching?). Ideally the lesser I/O and lower TDPs also help make denser/cheaper boxes, and the consumer-ish clocks pay off for some things.

    The biggest argument against is that the entry-level server market is probably shrinking anyway as users rent tiny slices of huge boxes from cloud providers instead. It also probably doesn't have the best margins. So maybe you could release a competitive product there and still not make all that much off it.
  • halfflat - Thursday, November 21, 2019

    Very curious about the AVX512 vs AVX2 results for 3dPM. It's really unusual to see even a 2x performance increase going from AVX2 to AVX512 on the same architecture, given that running AVX512 instructions will lower the clock.

    The non-AVX versions, I'm presuming, are utilizing SSE2.

    The i9-9900K gets a factor of 2 increase going from SSE2 to AVX2, which is pretty much what one would expect with twice as many fp operations per instruction. But the i9-7960X performance with AVX512 is *ten times* improved over SSE2, when the vector is only four times as wide and the cores will be running at a lower clock speed.

    Is there some particular AVX512-only operation that is determining this huge performance gap? Some further analysis of these results would be very interesting.
  • AIV - Wednesday, November 27, 2019

    Somebody posted that it's caused by 64-bit integer multiplies, which are supported in AVX512 but not in AVX2, and thus fall back to scalar operations.
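As a rough illustration of that point, the sketch below contrasts a scalar 64-bit integer multiply loop with an AVX-512DQ version built around _mm512_mullo_epi64 (vpmullq). AVX2 has no 256-bit packed 64-bit multiply, so an AVX2 build of a loop like this would typically end up doing the multiplies in scalar code. The function names and loop structure are illustrative, not the actual 3dPM kernel.

    /* Illustrative sketch of the difference discussed above: element-wise
     * 64-bit integer multiply, scalar vs AVX-512DQ. Compile the vector path
     * with something like -mavx512dq. Not the 3dPM benchmark code. */
    #include <immintrin.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Scalar fallback: one 64-bit multiply per element. */
    void mul64_scalar(const uint64_t *a, const uint64_t *b, uint64_t *out, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = a[i] * b[i];
    }

    #ifdef __AVX512DQ__
    /* AVX-512DQ path: eight 64-bit multiplies per vpmullq instruction. */
    void mul64_avx512(const uint64_t *a, const uint64_t *b, uint64_t *out, size_t n)
    {
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m512i va = _mm512_loadu_si512((const void *)(a + i));
            __m512i vb = _mm512_loadu_si512((const void *)(b + i));
            _mm512_storeu_si512((void *)(out + i), _mm512_mullo_epi64(va, vb));
        }
        for (; i < n; i++)  /* handle any remaining elements */
            out[i] = a[i] * b[i];
    }
    #endif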
