Final Words

For users in the market for a new CPU to bolster frame rates in a gaming system, the AMD Ryzen 7 5800X3D is undoubtedly a powerful choice.

Even at $449, the Ryzen 7 5800X3D can perform as well as, and at times better than, any chip from Intel's 12th Gen Core series of processors. The catch is that, as is often the case with new and unique technologies, the gains are uneven among games. If a game doesn't benefit from the large 96 MB of 3D V-Cache onboard, the chip performs similarly to the existing Ryzen 7 5800X, which can be had for $350, or $100 less than the 5800X3D. In that respect, it would be helpful if AMD published a guide to the games it has found to materially benefit from the large L3 cache, so that gamers could decide whether the Ryzen 7 5800X3D is worth it.

The other angle is that, on the Intel side of the fence, the Core i7-12700K at $380 represents a better buy in terms of all-around performance, holding the advantage in most CPU compute workloads. So while the Ryzen 7 5800X3D has a niche it serves quite well, there's certainly an opportunity cost in choosing this chip over something a bit more rounded.

Let's dissect the performance across the two main areas, gaming and compute:

AMD Ryzen 7 5800X3D Analysis: Biased Towards Gamers

Looking at our data across our CPU testing suites (compute and gaming), the AMD Ryzen 7 5800X3D is a mixed bag. It carries an MSRP of $449, a $100 premium over what the original Ryzen 7 5800X ($350) is currently selling for. Let's break our analysis into two sections, gaming and compute, and piece it together.

AMD Ryzen 7 5800X3D Gaming Analysis: 3D V-Cache is Fantastic For Some Games (But Not All)

When AMD initially announced the Ryzen 7 5800X3D and its latest technological advancement, 3D V-Cache stacking, it made clear that this processor was designed explicitly for gamers. The idea is that the additional L3 cache plays to gaming workloads, providing a larger buffer for frequently used data instead of fetching it from system memory (DRAM). In gaming, a larger pool of L3 cache can indeed improve frametimes, as that data sits closer to the CPU cores and can be accessed far more quickly than it could be from DRAM.
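To make that mechanism a little more concrete, below is a minimal, illustrative C++ sketch (our own toy microbenchmark, not part of the review's test suite) that pointer-chases through a working set small enough to live in a large L3 cache, and then through one that spills out to DRAM. The specific sizes (24 MB and 256 MB) are arbitrary, and the measured latencies will vary by platform; the point is simply the latency gap that a bigger L3, such as the 96 MB on the 5800X3D, is designed to shrink for cache-friendly workloads.

```cpp
// Illustrative only: average load latency for a working set that fits in a
// large L3 cache versus one that spills out to DRAM. Numbers are platform-
// dependent; the gap is the effect a larger L3 (e.g. 3D V-Cache) targets.
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

// Pointer-chase through one big random cycle so the hardware prefetcher
// cannot hide the memory latency; each load depends on the previous one.
static double ns_per_access(std::size_t bytes) {
    const std::size_t n = bytes / sizeof(std::uint32_t);
    std::vector<std::uint32_t> order(n), next(n);
    std::iota(order.begin(), order.end(), 0u);
    std::shuffle(order.begin(), order.end(), std::mt19937{42});
    for (std::size_t i = 0; i < n; ++i)
        next[order[i]] = order[(i + 1) % n];   // link shuffled indices into a single cycle

    volatile std::uint32_t idx = 0;            // volatile keeps the loop from being optimized out
    const std::size_t accesses = 20'000'000;
    const auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < accesses; ++i)
        idx = next[idx];
    const auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / accesses;
}

int main() {
    // ~24 MB comfortably fits inside a 32 MB (let alone 96 MB) L3 cache;
    // ~256 MB does not, so most accesses have to go all the way to DRAM.
    std::printf("fits in L3    : %5.1f ns/access\n", ns_per_access(24u << 20));
    std::printf("spills to DRAM: %5.1f ns/access\n", ns_per_access(256u << 20));
}
```

On a typical desktop the DRAM-bound case lands somewhere around an order of magnitude slower per access than the cache-resident case, which is why game engines whose hot data sets fall in that intermediate range respond so strongly to the extra cache.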

It's worth mentioning that in our testing, AMD's V-Cache-equipped chip didn't annihilate the competition in every game in our suite, but it certainly made a significant impact in AMD-favored titles.

(i-7) Far Cry 5 - 1080p Ultra - Average FPS

Looking at performance in Far Cry 5 at 1080p Ultra, which is notably an AMD-partnered title, the AMD Ryzen 7 5800X3D with its 96 MB of 3D-stacked L3 cache performed just under 27% faster than the previous Ryzen 7 5800X. It was also around 9% faster than the similarly priced Intel Core i7-12700K, which benefits from four more cores, higher IPC, and higher clock speeds than the 5800X3D.

(g-7) Borderlands 3 - 1080p Max - Average FPS

In Borderlands 3 at 1080p with maximum settings, the AMD Ryzen 7 5800X3D also topped our charts, edging out the more expensive and premium Intel Core i9-12900K by over 1% and beating all of the other Ryzen 5000 processors tested. This gives AMD some affirmation that it can pitch 3D V-Cache as a win for gamers. Still, unfortunately, not every title will benefit from massive amounts of L3 cache.

(e-5) Final Fantasy 15 - 4K Standard - Average FPS

As for 4K gaming, there's not a lot to say here. This is almost always the realm of GPU-limited games, which limits the benefits a faster CPU can bring. Focusing on Final Fantasy 15 at 4K using the Standard preset, the 96 MB of L3 didn't have the desired effect on performance. The Ryzen 7 5800X3D does pull ahead, beating the previous Ryzen 7 5800X by around 3.4%, but it's small potatoes at this point.

AMD Ryzen 7 5800X3D Compute Analysis: Extra L3 Does Little For Compute Performance

While AMD pitched the Ryzen 7 5800X3D and its 96 MB of L3 cache as beneficial for gaming, it didn't make the same kind of claims for applications and general compute performance. Because the benefits of a larger L3 cache are even more sporadic in these cases, AMD has wisely opted not to promote the part on general compute performance. That's not to say it's bad news for AMD, but as our data confirms, the extra L3 cache generally doesn't have much of an impact here.

It's worth noting that due to the 3D V-Cache, which is essentially 32 MB and 64 MB stacked vertically on top of each other using AMD's new chiplet packaging technology, AMD has had to be more conservative with core frequencies to stay within its 105 W TDP rating. Compared to the Ryzen 7 5800X, which has a base frequency of 3.8 GHz and a turbo clock of 4.7 GHz, the Ryzen 7 5800X3D has a 400 MHz lower base frequency of 3.4 GHz, while its turbo frequency is 200 MHz lower, topping out at 4.5 GHz. This reduction in core frequency plays a key role in compute performance when directly comparing the 5800X3D and the 5800X.

(4-7b) CineBench R23 Multi-Thread

Looking at multi-threaded performance in Cinebench R23, the Ryzen 7 5800X3D performs 5.2% worse than the existing Ryzen 7 5800X. This can be directly attributed to the difference in core frequency.
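As a quick sanity check on that attribution, here's a back-of-the-envelope sketch (our own arithmetic, using the rated boost clocks rather than measured all-core frequencies under load, which will differ) comparing the clock gap to the measured deficit:

```cpp
// Rough check: how much of the ~5.2% Cinebench R23 nT gap can the lower
// rated boost clock alone explain? Illustrative arithmetic only; sustained
// all-core clocks under the 105 W TDP will differ from the rated boost.
#include <cstdio>

int main() {
    const double boost_5800x   = 4.7;   // GHz, rated maximum boost
    const double boost_5800x3d = 4.5;   // GHz, rated maximum boost
    const double clock_deficit = 1.0 - boost_5800x3d / boost_5800x;
    std::printf("Rated boost deficit: %.1f%%\n", clock_deficit * 100.0);  // prints ~4.3%
}
```

The remaining point or so plausibly comes down to differences in sustained all-core clocks and the tighter voltage limits on the stacked die, though we haven't isolated that here.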

(4-1) Blender 2.83 Custom Render Test

In our Blender 2.83 benchmark, the additional 64 MB of L3 cache on the Ryzen 7 5800X3D over the Ryzen 7 5800X made no difference in performance. Once again, the extra 3D-stacked V-Cache made no real impression on compute performance. With the critical sections of many non-gaming workloads either fitting within the confines of current L3 caches – or not fitting at all – there's no substitute for more cores and higher clock speeds most of the time.

AMD Ryzen 7 5800X3D is For Gaming, But Compute Performance is Average

Touted by AMD as the 'World's Fastest PC Gaming Processor', the 5800X3D lives up to that billing in some respects but not others. The keyword in this claim is gaming, and it's true that the additional 64 MB of L3 cache does bring many benefits to the gaming market, albeit benefiting some games more than others.

The general outcome of our analysis is straightforward. If a specific title can benefit from a larger L3 cache, then the Ryzen 7 5800X3D and its 3D-stacked V-Cache will prove very potent for gamers looking to maximize frame rates. In titles that can't benefit from the extra L3 cache, Intel's 12th Gen Core chips prove the better option, with higher clock speeds and higher IPC leading to higher overall performance.

The biggest benefits to gaming performance came at 720p and lower resolutions – which is to be expected, since a CPU-limited scenario is something of a best case here – but the Ryzen 7 5800X3D also proved fruitful in selected titles at 1080p and 4K. So even in workloads where the CPU isn't entirely free to run ahead of the GPU, it's clear that there's still some benefit to AMD's latest chip.

Otherwise, when it comes to raw compute performance, the Ryzen 7 5800X3D isn't as potent as Intel's 12th Gen Core processors – and in most of our compute-related tests, it also fell behind the vanilla Ryzen 7 5800X. This is because the L3 cache doesn't have as much influence on general-purpose compute performance, and the Ryzen 7 5800X3D is clocked slightly lower due to voltage restraints (1.35 V) on the VCore.

Ultimately, it's pretty easy to make a case for the Ryzen 7 5800X3D as a solid gaming chip, especially as it decimates the competition in some titles. Just so long as potential buyers understand that the Ryzen 7 5800X3D is going to excel in gaming more than it excels in Excel. Otherwise, for users looking for a solid all-rounder capable of reasonable frame rates in gaming and reliable application and productivity performance, the Intel 12th Gen Core series has a little more oomph under the hood, albeit with a much higher power draw.

 

Performance matters aside, the Ryzen 7 5800X3D and its V-Cache ilk represent a massive step forward for AMD's engineering team. Stacking additional L3 cache is a novel way to expand the size of the L3 pool without significantly blowing up die sizes or resorting to high-latency off-die caches, which gives AMD some important flexibility in making such a part viable.

And clearly, AMD is happy with what it has accomplished. Along with the Ryzen chip, AMD has rolled out V-Cache parts for its enterprise and server products, where the extra cache is already playing a critical role in performance. As a result, it comes as no surprise that AMD has confirmed there will be 3D V-Cache products based on its impending Zen 4 core (due fall 2022), as well as Zen 5, which is expected sometime in 2024, with both consumer (Ryzen) and server (EPYC) products planned. So while all signs point to V-Cache remaining a premium solution, it also means we're far from done seeing what AMD can do for PC performance with larger L3 caches.

Comments

  • Gavin Bonshor - Thursday, June 30, 2022 - link

    We test at JEDEC to compare apples to apples from previous reviews. The recommendation is based on my personal experience and what AMD recommends.
  • HarryVoyager - Thursday, June 30, 2022 - link

    Having done an upgrade from a 5800X to a 5800X3D, one of the interesting things about the 5800X3D is that it's largely RAM insensitive. You can get pretty much the same performance out of DDR4-2366 as you can 3600+.

    And it's not that it is under-performing. The things that it beats the 5800X at, it still beats it at, even when the 5800X is running very fast low latency RAM.

    The upshot is, if you're on an AM4 platform with stock RAM, you actually get a lot of improvement from the 5800X3D in its favored applications.
  • Lucky Stripes 99 - Saturday, July 2, 2022 - link

    This is why I hope to see this extra cache come to the APU series. My 4650G is very RAM speed sensitive on the GPU side. Problem is, if you start spending a bunch of cash on faster system memory to boost GPU speeds, it doesn't take long before a discrete video card becomes the better choice.
  • Oxford Guy - Saturday, July 2, 2022 - link

    The better way to test is to use both the optimal RAM and the slow JEDEC RAM.
  • sonofgodfrey - Thursday, June 30, 2022 - link

    Wow, it's been a while since I looked at these gaming benchmarks. These FPS times are way past the point of "minimum" necessary. I submit two conclusions:
    1) At some point you just have to say the game is playable and just check that box.
    2) The benchmarks need to reflect this result.

    If I were doing these tests, I would probably just set a low limit for FPS and note how much (% wise) of the benchmark run was below that level. If it is 0%, then that CPU/GPU/whatever combination just gets a "pass", if not it gets a "fail" (and you could dig into the numbers to see how much it failed).

    Based on this criterion, if I had to buy one of these processors for gaming, I would go with the least costly processor here, the i5-12600K. It does the job just fine, and I can spend the extra $210 on a better GPU/Memory/SSD.
    (Note: I'm not buying one of these processors, I don't like Alder Lake for other reasons, and this is not an endorsement of Alder Lake)
  • lmcd - Thursday, June 30, 2022 - link

    Part of the intrigue is that it can hit the minimums and 1% lows for smooth play with 120Hz/144Hz screens.
  • hfm - Friday, July 1, 2022 - link

    I agree. I'm using a 5600X + 3080 + 32GB dual channel dual rank and my 3080 is still the bottleneck most of the time at the resolution I play all my games in, 1440p@144Hz
  • mode_13h - Saturday, July 2, 2022 - link

    > These FPS times are way past the point of "minimum" necessary.

    You're missing the point. They test at low resolutions because those tend to be CPU-bound. This exaggerates the difference between different CPUs.

    And the relevance of such testing is because future games will probably lean more heavily on the CPU than current games. So, even at higher resolutions, we should expect to see future game performance affected by one's choice of a CPU, today, to a greater extent than current games are.

    So, in essence, what you're seeing is somewhat artificial, but that doesn't make it irrelevant.

    > I would probably just set a low limit for FPS and
    > note how much (% wise) of the benchmark run was below that level.

    Good luck getting consensus on what represents a desirable framerate. I think the best bet is to measure mean + 99th percentile and then let people decide for themselves what's good enough.
  • sonofgodfrey - Tuesday, July 5, 2022 - link

    >Good luck getting consensus on what represents a desirable framerate.

    You would need to do some research (blind A-B testing) to see what people can actually detect.
    There are probably dozens of human factors PhD theses about this from the last 20 years.
    I suspect that anything above 60 Hz is going to be the limit for most people (after all, a majority of movies are still shot at 24 FPS).

    >You're missing the point. They test at low resolutions because those tend to be CPU-bound. This exaggerates the difference between different CPUs.

    I can see your logic, but what I see is this:
    1) Low resolution test is CPU bound: At several hundred FPS on some of these tests they are not CPU bound, and the few percent difference is no real difference.
    2) Predictor of future performance: Probably not. Future games, if they are going to push the CPU, will use a) even more GPU offloading (e.g. ray-tracing, physics modeling), b) use more CPUs in parallel, c) use instruction set additions that don't exist or are not available yet (AVX 512, AI acceleration). IOW, your benchmark isn't measuring the right "thing", and you can't know what the right thing is until it happens.
  • mode_13h - Thursday, July 7, 2022 - link

    > You would need to do some research (blind A-B testing) to see what people can actually detect.

    Obviously not going to happen, on a site like this. Furthermore, readers have their own opinions of what framerates they want and trying to convince them otherwise is probably a thankless errand.

    > I suspect that anything above 60 Hz is going to be the limit for most people
    > (after all, a majority of movies are still shot at 24 FPS).

    I can tell you from personal experience this isn't true. But, it's also not an absolute. You can't divorce the refresh rate from other properties of the display, like whether the pixel illumination is fixed or strobed.

    BTW, 24 fps movies look horrible to me. 24 fps is something they settled on way back when film was heavy, bulky, and expensive. And digital cinema cameras are quite likely used at higher framerates, if only so they can avoid judder when re-targeting to 30 or 60 Hz targets.

    > At several hundred FPS on some of these tests they are not CPU bound,

    When different CPUs produce different framerates with the same GPU, then you know the CPU is a limiting factor to some degree.

    > and the few percent difference is no real difference.

    The point of benchmarking is to quantify performance. If the difference is only a few percent, then so be it. We need data in order to tell us that. Without actually testing, we wouldn't know.

    > Predictor of future performance: Probably not.

    That's a pretty bold prediction. I say: do the testing, report the data, and let people decide for themselves whether they think they'll need more CPU headroom for future games.

    > Future games if they are going to push the CPU will use
    > a) even more GPU offloading (e.g. ray-tracing, physics modeling),

    With the exception of ray-tracing, which can *only* be done on the GPU, then why do you think games aren't already offloading as much as possible to the GPU?

    > b) use more CPUs in parallel

    That starts to get a bit tricky, as you have increasing numbers of cores. The more you try to divide up the work involved in rendering a frame, the more overhead you incur. Contrast that to a CPU with faster single-thread performance, and you know all of that additional performance will end up reducing the CPU portion of frame preparation. So, as nice as parallelism is, there are practical challenges when trying to scale up realtime tasks to use ever increasing numbers of cores.

    > c) use instruction set additions that don't exist or are not available yet (AVX 512, AI acceleration).

    Okay, but if you're buying a CPU today that you want to use for several years, you need to decide which is best from the available choices. Even if future CPUs have those features and future games can use them, that doesn't help me while I'm still using the CPU I bought today. And games will continue to work on "legacy" CPUs for a long time.

    > IOW, your benchmark isn't measuring the right "thing",
    > and you can't know what the right thing is until it happens.

    Let's be clear: it's not *my* benchmark. I'm just a reader.

    Also, video games aren't new and the gaming scene changes somewhat incrementally, especially given how many years it now takes to develop them. So, tests done today should have similar relevance in the next few years as what tests from a few years ago would tell us about gaming performance today.

    I'll grant you that it would be nice to have data to support this: if someone would re-benchmark modern games with older CPUs and compare the results from those benchmarks to ones taken when the CPUs first launched.
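For anyone wanting to experiment with the two summary approaches debated in the thread above, here is a minimal, self-contained C++ sketch (our own illustration, not AnandTech's benchmarking code) that takes a list of per-frame render times and reports both the percentage of frames below a chosen FPS floor, in the spirit of sonofgodfrey's pass/fail idea, and the mean FPS plus 99th-percentile frame time that mode_13h suggests. The sample frame times and the 60 FPS threshold are made up purely for demonstration.

```cpp
// Illustrative only: two ways to summarize a benchmark run's frame times.
// 1) Percentage of frames that miss a chosen FPS floor (pass/fail style).
// 2) Mean FPS and 99th-percentile frame time (the usual review metrics).
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical per-frame render times in milliseconds.
    std::vector<double> frame_ms = {6.9, 7.1, 7.4, 6.8, 7.0, 25.3, 7.2, 6.7, 7.3, 7.0};

    const double fps_floor  = 60.0;                 // example threshold, pick your own
    const double ms_ceiling = 1000.0 / fps_floor;   // ~16.7 ms per frame

    std::size_t slow = 0;
    double total_ms = 0.0;
    for (double ms : frame_ms) {
        total_ms += ms;
        if (ms > ms_ceiling) ++slow;                // this frame missed the FPS floor
    }
    const double mean_fps  = 1000.0 * frame_ms.size() / total_ms;
    const double pct_below = 100.0 * slow / frame_ms.size();

    // 99th-percentile frame time: sort ascending and index at 99% of the count.
    std::vector<double> sorted = frame_ms;
    std::sort(sorted.begin(), sorted.end());
    const std::size_t p99_idx = std::min(sorted.size() - 1,
                                         static_cast<std::size_t>(0.99 * sorted.size()));
    const double p99_ms = sorted[p99_idx];

    std::printf("Mean FPS                  : %.1f\n", mean_fps);
    std::printf("99th percentile frame time: %.1f ms (%.1f FPS)\n", p99_ms, 1000.0 / p99_ms);
    std::printf("Frames under %.0f FPS      : %.1f%%\n", fps_floor, pct_below);
}
```

Note how the two views diverge on the same data: the mean sits comfortably above 110 FPS, while the lone 25 ms spike shows up in both the percentile figure and the below-threshold percentage.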
