Synthetics

Moving on, we have our synthetic performance testing, taking a look at geometry throughput, pixel throughput, memory compression, and more.

Synthetic: TessMark - Image Set 4 - 64x Tessellation

Given the significance of cutting a quarter of Navi 10’s GDDR6 memory bus, I was very curious to see what that would do to synthetic performance. But for better or worse, this testing has raised more questions than it has answered.

The numbers listed below are accurate, in as much as these are the results I get when testing these cards. Whether they are correct, however, is another matter.

The problem, in short, is that due to AMD’s very aggressive power savings/idling implementation for their Navi 10 cards, I have been unable to get these cards to run at their full memory clockspeeds when executing the Beyond3D Suite benchmarks. The GPU clocks regularly pass 1600MHz as they should; however, AMD’s telemetry reports that memory clocks are rarely hitting 7Gbps, let alone 12Gbps+. As a result, we end up with results like the pixel test below, where the RX 5600 XT beats the RX 5700, an otherwise impossible outcome.
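As an aside, this kind of clock telemetry can be sampled directly on Linux, where the amdgpu driver exposes the memory clock states via sysfs (`pp_dpm_mclk`), flagging the currently active state with an asterisk. Below is a minimal sketch of parsing that format to log the live memory clock during a benchmark run; the sample string and its clock values are hypothetical, and this is not the tooling used for the review.

```python
import re

# Hypothetical contents of /sys/class/drm/card0/device/pp_dpm_mclk;
# the amdgpu driver marks the currently active state with "*".
sample = """0: 100Mhz
1: 500Mhz
2: 875Mhz *
"""

def active_mclk_mhz(pp_dpm_mclk: str) -> int:
    """Return the memory clock (MHz) of the state flagged active with '*'."""
    for line in pp_dpm_mclk.splitlines():
        m = re.match(r"\d+:\s*(\d+)Mhz\s*\*", line.strip())
        if m:
            return int(m.group(1))
    raise ValueError("no state flagged active")

print(active_mclk_mhz(sample))  # 875
```

Polling this file in a loop while a benchmark runs would reveal whether the memory clock ever leaves its idle state, which is exactly the behavior at issue here.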

As best as I can tell, this issue has been going on since the launch of the Radeon RX 5700 series back in July, but it’s only now that I’ve noticed it, in large part due to the RX 5600 XT cards being slightly less aggressive in their idling. In other words, those cards are boosting to higher memory clockspeeds more often, putting them ahead of the RX 5700 and bringing the clocking issue front and center.

I’m still working on a proper fix for the issue, but for now the results with Navi 10 cards should be taken with a large grain of salt. The benchmark itself is still fine, but AMD’s aggressive power management (and lack of an easy means to disable it) is kneecapping AMD’s performance in these benchmarks.

Synthetic: Beyond3D Suite - Pixel Fillrate

Synthetic: Beyond3D Suite - Integer Texture Fillrate (INT8)

Synthetic: Beyond3D Suite - Floating Point Texture Fillrate (FP32)

Synthetic: Beyond3D Suite - INT8 Buffer Compression

Synthetic: Beyond3D Suite - FP32 Buffer Compression

202 Comments

  • Targon - Tuesday, January 21, 2020 - link

    Performance per dollar within a given price category makes sense, but in many situations, lower end cards will end up being better when it comes to performance per dollar. Beyond $400, your performance per dollar does drop, but you can't argue when people want a $600+ card because they want to game at 4k resolutions and the $400 cards just can't handle that resolution.
  • thecoolnamesweretaken - Tuesday, January 21, 2020 - link

    While I agree with you I wish more benchmarks included the 1070 Ti rather than the 1070. I imagine as an owner of such a card I must be in the extreme minority or perhaps reviewers never bothered to acquire one since it was released so late in the cycle before the move to the RTX 20xx architecture.
  • Retycint - Tuesday, January 21, 2020 - link

    Seconded. The 1070Ti was a relatively popular mining card (at least in my country) and hence the local used market is flooded with used 1070Ti's for about $170-180, which is an absolute steal for the performance and basically renders the entire mid range market obsolete
  • Krayzieka - Tuesday, January 21, 2020 - link

Is this with the new driver boost?
  • Ryan Smith - Tuesday, January 21, 2020 - link

This is with the latest drivers. AMD's Radeon Boost (dynamic resolution) feature is not enabled.
  • maroon1 - Tuesday, January 21, 2020 - link

    AMD's Radeon Boost feature is horrible, especially if you're running below 4K.

    Watch some reviews of it, like Hardware Unboxed's. They even recommend not using it at 1080p because you sacrifice a lot of image quality
  • Duckferd - Tuesday, January 21, 2020 - link

    All contextual. If you are running at 1080p on certain games with an APU, for example, it's still worthwhile with a minimal amount of boost (83%) because it keeps frametimes consistent when you're already constrained and most need it (i.e. panning in FPS games).

    How 'horrible' it is also depends on whether you can perceive the dynamic resolution changes as well. This is going to vary quite a bit depending on user configuration and tolerance, but I think the feature is worthwhile to include.
  • Cooe - Tuesday, January 21, 2020 - link

    Did you even watch that Hardware Unboxed video? They were extremely impressed with the performance/visuals at 4K (and using a resolution downscaler of ANY KIND at lower resolutions is an inherently bad idea, not anything wrong with AMD's approach), though of course, the algorithm still has its issues and is more a proof of concept than anything you'd want to daily drive yet. But your original comment is absolutely NOT the point they ended at, so please don't spread nonsense.
  • Spunjji - Wednesday, January 22, 2020 - link

    It's possible that spreading nonsense is maroon1's actual job :/
  • Irata - Tuesday, January 21, 2020 - link

    Well, Techspot really liked it and found it a lot better than DLSS in their review.
