DDR4 vs DDR3L on the CPU

One of the big questions when DDR4 was launched was around the comparison to DDR3. Was it better, was it worse? DDR4 by default switches down to an operating voltage of 1.2 volts from 1.5 volts, making it more power efficient, and the standard increases the maximum capacity on an unbuffered memory module. There are also some other enhancements such as per-IC voltage drop control and a design to aid DRAM placement in motherboards. But there was one big scary number – a CAS Latency of 15 (known as C15 or CL15).

Let’s do a quick memory recap on frequency (technically, transfer rate but used interchangeably for this purpose) against latency.

The CAS latency is the number of clock cycles between an access request from the memory controller and the memory acting on that request. So a CL of 15 means there are 15 clock cycles between the request and the data access. Generally, a lower CL is better.

The frequency is the rate at which those clocks occur. DDR stands for Double Data Rate, which means there are two data transfers per clock cycle – one on the rising edge and one on the falling edge of the clock signal. This is why a module rated DDR3-1600 actually runs on an 800 MHz memory clock. The reciprocal of the clock frequency (one divided by the frequency) is the time taken per clock cycle.

But the important thing here is that the CAS Latency is a count of clock cycles, and the frequency determines how fast those clocks tick. So on its own the CAS Latency value doesn’t say much. The useful metric comes from combining the two: the true latency is the CAS Latency multiplied by the time per clock cycle. Here is a table of values from Crucial’s recent whitepaper on the subject:

So here we have the values for True Latency:

DDR3-1600 C11: 13.75 nanoseconds
DDR4-2133 C15: 14.06 nanoseconds
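The arithmetic behind these two figures can be sketched as follows. This is my own illustration of the math described above, not code from the Crucial whitepaper; the key point is that the memory clock runs at half the DDR transfer rate, so one clock cycle takes 2000 / (transfer rate in MT/s) nanoseconds.

```python
def true_latency_ns(transfer_rate_mts, cas_latency):
    """True latency in ns: CAS latency times the clock period.

    The memory clock is half the transfer rate (Double Data Rate),
    so the period in ns is 2000 / transfer rate in MT/s.
    """
    clock_period_ns = 2000.0 / transfer_rate_mts
    return cas_latency * clock_period_ns

print(round(true_latency_ns(1600, 11), 2))  # DDR3-1600 C11 -> 13.75
print(round(true_latency_ns(2133, 15), 2))  # DDR4-2133 C15 -> 14.06
```

Plugging in the two kits above reproduces the whitepaper numbers: 13.75 ns and 14.06 ns, a difference of only about 2%.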

In fact, despite the development of new memory interfaces, the true latency for DRAM at default specifications has stayed roughly the same since the original DDR standard. As we make faster memory modules, the CAS Latency rises to keep the higher frequency memory stable, but overall the true latency stays about the same.

Normally in our DRAM reviews I refer to the performance index, which has a similar effect in gauging general performance:

DDR3-1600 C11: 1600/11 = 145.5
DDR4-2133 C15: 2133/15 = 142.2

As the frequency rises, the number gets bigger, and as the CL falls, the number also gets bigger. Thus when comparing memory kits, if the performance indices differ by more than 10, the kit with the higher index tends to win out; for kits with similar indices, the one with the higher frequency is preferred.
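That rule of thumb can be written out explicitly. This is a sketch of the comparison heuristic described above; the function names are mine, and the performance index is an informal review metric rather than an official specification.

```python
def performance_index(transfer_rate_mts, cas_latency):
    """Informal performance index: transfer rate divided by CAS latency."""
    return transfer_rate_mts / cas_latency

def pick_kit(kit_a, kit_b):
    """Each kit is a (transfer rate in MT/s, CAS latency) pair.

    If the indices differ by more than 10, the higher index wins;
    otherwise prefer the kit with the higher frequency.
    """
    pi_a = performance_index(*kit_a)
    pi_b = performance_index(*kit_b)
    if abs(pi_a - pi_b) > 10:
        return kit_a if pi_a > pi_b else kit_b
    return max(kit_a, kit_b)  # tuples compare by frequency first

print(pick_kit((1600, 11), (2133, 15)))  # indices within 10 -> (2133, 15)
```

For the two kits in the table, the indices (145.5 vs 142.2) are within 10 of each other, so the heuristic falls through to frequency and picks the DDR4-2133 kit.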

“But who uses DDR3-1600 C11? Isn’t most memory like DDR3-1866 C9?”

This is a valid point – as DDR3 has matured, kits that run faster than the default specification have become the norm. The performance index for this kit is:

DDR3-1866 C9: 1866/9 = 207.3

In the grand scheme of things, a PI of 207 is quite large, and especially high for DDR3L. A few DDR3 memory kits go beyond this to a PI of 220, and an overclock beyond normal voltages might reach 240, but a value of 207 shows the maturity of the DDR3 market. If we look at the current DDR4 market, we can already pick up kits with DDR4-3000 C15 ratings, which are similarly in the 200 bracket.

I’ve prefaced our DDR3L vs DDR4 testing with all of this as a response to ‘large CL = bad’. In reality, you have to compare both numbers together. Now that we have a platform that runs both, and we were able to source a beta DDR3L/DDR4 combination motherboard to test on, we can see how ‘regular DDR4’ squares up against ‘high performance DDR3(L)’.

For these tests, both sets of numbers were run at 3.0 GHz with hyperthreading disabled.  Memory speeds were DDR4-2133 C15 and DDR3-1866 C9 respectively.

Dolphin Benchmark: link

Emulators are often bound by single thread CPU performance, and general reports suggest that Haswell provided a significant boost to emulator performance. This benchmark runs a Wii program that raytraces a complex 3D scene inside the Dolphin Wii emulator. Performance on this benchmark is a good proxy for the speed of Dolphin CPU emulation, which is an intensive single core task using most aspects of a CPU. Results are given in minutes, where the Wii itself scores 17.53 minutes.

Dolphin Emulation Benchmark

Cinebench R15

Cinebench is a benchmark based around Cinema 4D, and is fairly well known among enthusiasts for stressing the CPU for a provided workload. Results are given as a score, where higher is better.

Cinebench R15 - Single Threaded

Cinebench R15 - Multi-Threaded

Point Calculations – 3D Movement Algorithm Test: link

3DPM is a self-penned benchmark, taking basic 3D movement algorithms used in Brownian Motion simulations and testing them for speed. High floating point performance, MHz and IPC wins in the single thread version, whereas the multithread version has to handle the threads and loves more cores. For a brief explanation of the platform agnostic coding behind this benchmark, see my forum post here.

3D Particle Movement: Single Threaded

3D Particle Movement: MultiThreaded

Compression – WinRAR 5.0.1: link

Our WinRAR test from 2013 is updated to the latest version of WinRAR at the start of 2014. We compress a set of 2867 files across 320 folders totaling 1.52 GB in size – 95% of these files are small typical website files, and the rest (90% of the size) are small 30 second 720p videos.

WinRAR 5.01, 2867 files, 1.52 GB

Image Manipulation – FastStone Image Viewer 4.9: link

Similarly to WinRAR, the FastStone test is updated for 2014 to the latest version. FastStone is the program I use to perform quick or bulk actions on images, such as resizing, adjusting for color and cropping. In our test we take a series of 170 images in various sizes and formats and convert them all into 640x480 .gif files, maintaining the aspect ratio. FastStone does not use multithreading for this test, and thus single threaded performance is often the winner.

FastStone Image Viewer 4.9

Video Conversion – Handbrake v0.9.9: link

Handbrake is a media conversion tool that was initially designed to help convert DVD ISOs and Video CDs into more common video formats. The principle today is still the same, primarily as an output for H.264 + AAC/MP3 audio within an MKV container. In our test we use the same videos as in the Xilisoft test, and results are given in frames per second.

HandBrake v0.9.9 LQ Film

HandBrake v0.9.9 2x4K

Rendering – PovRay 3.7: link

The Persistence of Vision RayTracer, or PovRay, is a freeware package for, as the name suggests, ray tracing. It is a pure renderer rather than modeling software, but the latest beta version contains a handy benchmark for stressing all processing threads on a platform. We have been using this test in motherboard reviews to test memory stability at various CPU speeds to good effect – if it passes the test, the IMC in the CPU is stable at that speed. As a CPU test, it runs for approximately 2-3 minutes on high end platforms.

POV-Ray 3.7 Beta RC4

Synthetic – 7-Zip 9.2: link

As an open source compression tool, 7-Zip is a popular utility for making sets of files easier to handle and transfer. The software includes its own benchmark, and we report that result.

7-zip Benchmark

Overall: DDR4 vs DDR3L on the CPU

The results largely speak for themselves:

Comparing default DDR4 to a high performance DDR3 memory kit is almost an equal contest. The faster frequency helps in large frame video encoding (HandBrake HQ) as well as WinRAR, which is normally memory intensive. The only real benchmark loss was FastStone, which regressed by one second (out of 48 seconds).

The end result, looking at the CPU test scores, is that upgrading to DDR4 doesn’t degrade performance compared to a high end DDR3 kit, and you get the added benefits of a future upgrade path, faster speeds, lower power consumption from the lower voltage, and higher density modules.
