CPU Benchmark Performance: DDR5 vs DDR4

Traditionally we test our memory settings at JEDEC specifications. JEDEC is the standards body that determines the requirements for each memory standard. In this case, the Core i9 supports the following speeds in line with those standards:

  • DDR4-3200 CL22
  • DDR5-4800B CL40*

There's an asterisk next to the DDR5 entry for a couple of reasons. First, when asked, Intel stated that DDR5-4800A (CL34) was the officially supported speed; however, now that the technical documents have been released, we've discovered that it is actually 4800B (CL40). Second, 4800B CL40 technically only applies with one module per 64-bit channel on the motherboard, and only when the motherboard has two 64-bit slots to begin with. We covered Intel's memory support variants in a previous article, and in this instance we're using DDR5-4800B memory in our testing.

[Benchmark result charts shown for the following tests:]

  • (1-1) Agisoft Photoscan 1.3, Complex Test
  • (1-2) AppTimer: GIMP 2.10.18
  • (2-1) 3D Particle Movement v2.1 (non-AVX)
  • (2-2) 3D Particle Movement v2.1 (Peak AVX)
  • (2-3) yCruncher 0.78.9506 ST (250m Pi)
  • (2-4) yCruncher 0.78.9506 MT (2.5b Pi)
  • (2-4b) yCruncher 0.78.9506 MT (250m Pi)
  • (2-5) NAMD ApoA1 Simulation
  • (2-6) AI Benchmark 0.1.2 Total
  • (3-1) DigiCortex 1.35 (32k Neuron, 1.8B Synapse)
  • (3-2b) Dwarf Fortress 0.44.12 World Gen 129x129, 550 Yr
  • (3-3) Dolphin 5.0 Render Test
  • (3-4c) Factorio v1.1.26 Test, 20K Hybrid
  • (4-3a) Crysis CPU Render at 320x200 Low
  • (4-5) V-Ray Renderer
  • (4-7a) CineBench R23 Single Thread
  • (4-7b) CineBench R23 Multi-Thread
  • (5-1a) Handbrake 1.3.2, 1080p30 H264 to 480p Discord
  • (5-1b) Handbrake 1.3.2, 1080p30 H264 to 720p YouTube
  • (5-1c) Handbrake 1.3.2, 1080p30 H264 to 4K60 HEVC
  • (5-2c) 7-Zip 1900 Combined Score
  • (5-3) AES Encoding
  • (5-4) WinRAR 5.90 Test, 3477 files, 1.96 GB
  • (7-1) Kraken 1.1 Web Test
  • (7-2) Google Octane 2.0 Web Test
  • (7-3) Speedometer 2.0 Web Test
  • (8-1c) Geekbench 5 Single Thread
  • (8-1d) Geekbench 5 Multi-Thread
  • (8-2a) AIDA DRAM Read Speed
  • (8-2b) AIDA DRAM Write Speed
  • (8-2c) AIDA DRAM Copy Speed

As explained in our SPEC section, DDR5 not only brings bandwidth improvements, but the increased number of channels (4x32-bit vs 2x64-bit) also means that the memory can be better utilized as threads pile on the memory requests. So while we don't see much improvement in single-threaded workloads, there are a number of multi-threaded workloads that would love the increased throughput.


474 Comments


  • mode_13h - Sunday, November 7, 2021 - link

    I'm not even going to say this is a bad idea. The problem is that it's a big change and Intel normally prepares the software developer community for big new ISA extensions a year+ in advance!

    Again, what you're talking about is an ABI change, which is a big deal. Not only that, but to require code to handle dynamically switching between AVX2 and AVX-512 paths means that it can't use different data structures for each codepath. It even breaks the task pre-emption model, since there need to be some limitations on where the code has all its 512-bit registers flushed so it can handle switching to the AVX2 codepath (or vice versa).

    This adds a lot of complexity to the software, and places a greater testing burden on software developers. All for (so far) one CPU. It just seems a bit much, and I'm sure a lot of software companies would just decide not to touch AVX-512 until things settle down.
  • GeoffreyA - Sunday, November 7, 2021 - link

    My view on this topic is that Intel made a sound decision disabling AVX-512. Some of the comments are framing it as if they made a mistake, because the tech community discovered it was still there, but I don't see any problem. If anything, the wording was at fault: that controversial "fused off" statement. And actually, the board makers are at fault too, enabling a hidden feature and causing more confusion.

    On the question of whether it's desirable, allowing one core with the instructions and another without, would've been a recipe for disaster---and that, too, for heaven knows what gain. The simplest approach was bringing both cores onto the same footing. Indeed, I think this whole P/E paradigm is worthless, adding complexity for minimal gain.
  • Oxford Guy - Monday, November 8, 2021 - link

    ‘Intel made a sound decision disabling AVX512’

    That’s not what happened.
  • O-o-o-O - Sunday, November 7, 2021 - link

    Really? Our tech guys tried out Xeon Phi but couldn't make use of it. Years later, Xeon Phi was abruptly discontinued due to lack of demand. GPGPUs are much easier to handle.

    Yeah, coding cost and risks aside, it's interesting to see complex works of art in the modern CPU. But I'd rather wish for expansion of GPU support (like shared memory and higher bandwidth).
  • kwohlt - Sunday, November 7, 2021 - link

    My understanding is that Raptor Lake's change is replacing Golden Cove P cores with Raptor Cove P cores, doubling Gracemont E-cores per SKU, and using the same Intel 7 process. Granted, it's all leaks at this point, but with Gracemont being reused for Raptor Lake, I don't expect AVX-512 next year either.
  • mode_13h - Monday, November 8, 2021 - link

    > Raptor Lake's change is ... doubling Gracemont E-Cores ... using the same Intel 7 process.

    I was merely speculating that this *might* just be a transient problem. If they're using the same process node for Raptor Lake, which seems very plausible, then it's understandable if they don't want to increase the size or complexity of their E-cores.

    However, there's some precedent, in the form of Knights Landing, where Intel bolted on dual AVX-512 pipelines + SMT4 to a Silvermont Atom core. And with a more mature Intel 7 node, perhaps the yield will support the additional area needed for just a single pipe + 512-bit registers. And let's not forget how Intel increased the width of Goldmont, yet simply referred to it as Goldmont+.

    So, maybe Raptor Lake will use Gracemont+ cores that are augmented with AVX-512. We can hope.
  • GURU7OF9 - Saturday, November 6, 2021 - link

    This is by far the best review I have read so far.

    A great comparison I would love to see, just out of curiosity, would be P-core-only benchmarks and then E-core-only benchmarks! We could gain a much better understanding of the capabilities and performance of both.
    This would bring a little bit of familiarity back to benchmarking.
  • nunya112 - Saturday, November 6, 2021 - link

    The only info provided was that it's on Intel's new "Intel 7" process node. What does that mean? Are they using TSMC at 7nm? Or did Intel finally crack 7nm?
  • mode_13h - Sunday, November 7, 2021 - link

    "Intel 7" is the process node formerly known as "10 nm ESF" (Enhanced SuperFin), which is the 4th generation 10 nm process, counting by the revisions they've introduced between the different products based on it. They like to pretend that Cannon Lake didn't happen, but that's why Ice Lake was actually 10 nm+ (2nd gen).

    They rebranded 10 nm ESF as "Intel 7" for marketing reasons, as explained here:

    https://www.anandtech.com/show/16823/intel-acceler...
  • Hossein - Sunday, November 7, 2021 - link

    It's funny that most reviewers are conveniently silent about the fact that there are quite a 'few' games which are incompatible with ADL.
