Synthetics

While we’ve already had an in-depth look at Navi with the launch of the RX 5700 series earlier this year, new GPUs within the family sometimes expose bottlenecks that we haven’t seen before. So our synthetic tests can help to highlight these bottlenecks, as well as any other changes that the GPU designers may have made in the process of scaling down their GPUs.

Synthetic: Beyond3D Suite - Pixel Fillrate

The RX 5500 XT does surprisingly well in our pixel fillrate benchmark. Even though it only has half the ROPs and half of the memory bandwidth of the more powerful RX 5700, it’s able to deliver ~79% of that card’s pixel fillrate in this test. This is much better than I was expecting. It may be a sign that AMD’s ROP partitions aren’t seeing great scaling going from 32 to 64 pixels per clock, or alternatively that AMD has put significant effort into keeping the RX 5500 XT’s fillrate from dropping too hard given its more limited resources.
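For context, here is a quick back-of-the-envelope check of the on-paper gap (a minimal Python sketch; the ROP counts and reference boost clocks below are taken from AMD's published specifications rather than from this test, so treat them as assumptions):

```python
# Back-of-the-envelope comparison using assumed reference specs (ROP counts
# and boost clocks from AMD's spec sheets), not measured values.

def peak_pixel_fillrate(rops: int, boost_clock_mhz: float) -> float:
    """Theoretical peak pixel fillrate in Gpixels/s (ROPs x clock)."""
    return rops * boost_clock_mhz / 1000.0

rx5500xt = peak_pixel_fillrate(rops=32, boost_clock_mhz=1845)  # ~59.0 Gpix/s
rx5700 = peak_pixel_fillrate(rops=64, boost_clock_mhz=1725)    # ~110.4 Gpix/s

print(f"RX 5500 XT: {rx5500xt:.1f} Gpix/s, RX 5700: {rx5700:.1f} Gpix/s")
print(f"On-paper ratio: {rx5500xt / rx5700:.0%}")  # ~53%

# The Beyond3D test instead puts the RX 5500 XT at ~79% of the RX 5700, well
# above the ~53% on-paper ratio -- i.e. the RX 5700's wider ROP back end and
# doubled memory bandwidth aren't being fully exploited in this workload.
```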

Synthetic: Beyond3D Suite - Integer Texture Fillrate (INT8)

Synthetic: Beyond3D Suite - Floating Point Texture Fillrate (FP32)

Meanwhile texture fillrates are more in line with our expectations. The RX 5500 XT has 14 fewer CUs than the RX 5700 but a slightly higher clockspeed, and its results reflect that.
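The same sort of paper math works out here as well (again just a sketch; the CU counts and clocks are AMD's reference specifications, and the 4 texture units per CU is the usual RDNA figure, so treat the exact numbers as assumptions):

```python
# Rough texel throughput comparison, assuming 4 texture units per CU and
# AMD's reference boost clocks (assumptions, not figures from this test).

def peak_texel_fillrate(cus: int, boost_clock_mhz: float, tmus_per_cu: int = 4) -> float:
    """Theoretical peak bilinear texel rate in Gtexels/s."""
    return cus * tmus_per_cu * boost_clock_mhz / 1000.0

rx5500xt = peak_texel_fillrate(cus=22, boost_clock_mhz=1845)  # ~162 Gtex/s
rx5700 = peak_texel_fillrate(cus=36, boost_clock_mhz=1725)    # ~248 Gtex/s

print(f"On-paper ratio: {rx5500xt / rx5700:.0%}")  # ~65%, in line with the scaling described above
```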

Synthetic: Beyond3D Suite - INT8 Buffer Compression

Synthetic: Beyond3D Suite - FP32 Buffer Compression

Our buffer compression ratios are also relatively consistent with what we’ve seen on the RX 5700 cards. AMD does have capable delta color compression technology; however, it seems to struggle under these intensive synthetic workloads. Under lighter workloads we see better compression ratios, but lower overall throughput.
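To illustrate what these compression ratios represent, the sketch below estimates an effective compression factor by comparing the memory bandwidth a fill test would need if every pixel were written uncompressed against the card's raw DRAM bandwidth. This is only a hedged illustration of the idea, not the Beyond3D suite's exact methodology, and the fillrate figure in the example is hypothetical:

```python
# Illustrative only: estimate an effective color compression factor from a
# measured fillrate. Not the Beyond3D suite's exact method; the example
# fillrate below is hypothetical.

def effective_compression(fillrate_gpix_s: float, bytes_per_pixel: int,
                          raw_bandwidth_gb_s: float) -> float:
    """Implied write bandwidth divided by what the DRAM can actually supply."""
    implied_bandwidth = fillrate_gpix_s * bytes_per_pixel  # GB/s if written uncompressed
    return implied_bandwidth / raw_bandwidth_gb_s

# Hypothetical example: a 16 byte/pixel FP32 fill sustaining 30 Gpix/s on the
# RX 5500 XT's 224 GB/s of GDDR6 would imply roughly 2.1x effective compression.
print(f"{effective_compression(30, 16, 224):.1f}x")
```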

Synthetic: TessMark - Image Set 4 - 64x Tessellation

97 Comments

  • Valantar - Thursday, December 12, 2019 - link

    What? This class of GPU is in no way whatsoever capable of gaming at 4K. Why include a bunch of tests where the results are in the 5-20fps range? That isn't useful to anyone.
  • Zoomer - Saturday, December 21, 2019 - link

AT used to include it. I just ignored it for a card of this class; probably others did as well.
  • Ravynmagi_ - Thursday, December 12, 2019 - link

I lean more Nvidia too, and I didn't get that impression from the article. I felt it was fair to both AMD and Nvidia in its comparison of the performance and facts. I wasn't bothered by where they decided to cut off their chart.
  • FreckledTrout - Friday, December 13, 2019 - link

    Same here. I don't need to see numbers elucidating how bad these low end cards are at 4k. Let's move on.
  • Dragonstongue - Thursday, December 12, 2019 - link

    I <3 how compute these days adamantly refuse to use the "old standard"
    i.e MINING

    this shows Radeon in vastly different light, as the different forms of such absolutely show difference generation on generation, more so Radeon than Ngreedia err I mean Nvidia.

    seeing as one can take the wee bit of time to have a -pre set that really needs very little change (per brand and per specific GPU being used)

    instead of using "canned" style bechmarks, that often are very much *bias* towards those who hold more market share and/or have the heavier fist to make sure they are shown as "best" even when the full story simply is NOT being fully told...yep am looking direct at INTC/NVDA ... business is business, they certainly walk that BS line constantly, to very damaging consequence for EVERYONE

    ............

    I personally think in this regard, AMD likely would have been "best off" to up the power budget a wee touch, so the "clear choice" between going with older stuff they probably and likely not want to be producing as much anymore (likely costlier) that is RX 4/5xx generation such as the 570-580 more importantly 590, this "little card" would be that much better off, instead, they seem to "adamant" want to target the same limiting factor of limited memory bus size (even though fast VRAM) still wanting to be @ the "claimed golden number" of "sub" $200 price point --- means USA or this price often moves from "acceptable" to, why bother when get older far more potent stuff for either not much more or as of late, about the same (rarely less, though it does happen)

    1080p, I can see this, myself still using a Radeon 7870 on a 144Hz monitor "~3/4" jacked up settings (granted it is not running at full rate as the GPU does not support run this at full speed, but my Ryzen 3600 helps huge.

    still, a wee bit more power budget or something would effectively "bury" or make moot 580 - 590, then wanting to sell for that "golden" $200 price point, would make much more sense, seeing as they launched the 480 - 580 "at same pricing" (for USA) in my mind, and all I have read, with the terrific yields TSMC has managed to get as well as the "reasonable low cost to produce due to very very few "errors" THIS should have targeted 175 200 max.

    They are a business, no doubt, though they in all honesty should have looked at the "logical side" that is, "we know we cannot take down the 1660 super / Ti the way we would like to, while sticking with the shader count / memory bus, so why not say fudge it, add that extra 10w (effectively matching 7870 from many many generations back in the real world usage) so we at least give potential buyers a real hard time to decide between an old GPU (570-580-590) or a brand spanking new one that is very cool running AND not at all same power use, I am sure it will sell like hotcakes, provided we do what we can to make sure buyers everywhere can get this "for the most part" at a guaranteed $200 or less price point, will that not tick our competition right off?"

    ..........
  • thesavvymage - Thursday, December 12, 2019 - link

    What are you even trying to say here.....
  • Valantar - Thursday, December 12, 2019 - link

    I was lost after the first sentence. If it can be called a sentence. I truly have no idea what this rant is about.
  • Fataliity - Thursday, December 12, 2019 - link

    I think the game bundle is what they chose as their selling point. I'm sure they get a good deal on Game Pass, being the supplier of the CPU/GPU in the Xbox. So their bundle is most likely almost free for them, which pushes the value up. Without the bundle I imagine the 5500 4GB being $130 and the 8GB being $180.
  • TheinsanegamerN - Sunday, December 15, 2019 - link

    That's a LOTTA words just to say "AMD just made another 580 for $20 less, please clap."
  • kpb321 - Thursday, December 12, 2019 - link

    The ~$100ish 570s still look like a great deal as long as they are still available. For raw numbers they have basically the same memory bandwidth and compute as a 5500, but the newer card ends up being slightly faster and uses a bit less power. It is overall more efficient, but IMO nowhere near enough to justify the price premium over the older cards. I'm not sure that the 570/580 or 5500 will have enough compute power for the 4 vs 8GB of memory to really make a difference, but my 570 happens to be an 8GB card anyway.
