DDR4 vs DDR3L on the CPU

One of the big questions when DDR4 was launched was around the comparison to DDR3. Was it better, was it worse? DDR4 by default switches down to an operating voltage of 1.2 volts from 1.5 volts, making it more power efficient, and the standard increases the maximum capacity on an unbuffered memory module. There are also some other enhancements such as per-IC voltage drop control and a design to aid DRAM placement in motherboards. But there was one big scary number – a CAS Latency of 15 (known as C15 or CL15).

Let’s do a quick memory recap on frequency (technically, transfer rate but used interchangeably for this purpose) against latency.

The CAS Latency is the number of clock cycles between the memory controller issuing an access request and the memory acting on that request. So a CL of 15 means that 15 clocks pass between the request and the data becoming available. Generally, a lower CL is better.

The frequency is the rate at which those clocks occur. DDR stands for Double Data Rate, meaning two transfers happen per clock cycle – one on the rising edge and one on the falling edge of the clock signal. The reciprocal of the frequency/transfer rate (one divided by the frequency) is the time taken to perform a clock.

But the important thing here is that the latency is a number of clocks and thus is just a number, while the frequency determines how fast those clocks go. So on its own the CAS Latency value doesn’t say much. The important metric comes from using the two together: the true latency is the CAS Latency multiplied by the time taken per clock, and here’s a table of values from Crucial’s recent whitepaper on the subject:

So here we have the values for True Latency:

DDR3-1600 C11: 13.75 nanoseconds
DDR4-2133 C15: 14.06 nanoseconds

In fact despite the development of new memory interfaces, the true latency for DRAM under default specifications has stayed roughly the same since DDR. As we make faster memory modules, the CAS Latency rises to keep higher frequency memory stable, but overall the true latency stays the same.
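The arithmetic behind the table above can be sketched in a few lines. The function name is my own, and the older-generation DDR/DDR2 ratings are typical JEDEC-style specs included only to illustrate the "true latency stays roughly the same" point:

```python
# True latency = CAS latency x clock period. A DDR module rated at a
# transfer rate of R MT/s runs its clock at R/2 MHz (two transfers per
# clock), so one clock period is 2000/R nanoseconds.
def true_latency_ns(transfer_rate_mts, cas_latency):
    clock_period_ns = 2000.0 / transfer_rate_mts
    return cas_latency * clock_period_ns

# Illustrative default ratings across generations:
kits = [
    ("DDR-400 C3",     400,  3),
    ("DDR2-800 C6",    800,  6),
    ("DDR3-1600 C11", 1600, 11),
    ("DDR4-2133 C15", 2133, 15),
]
for name, rate, cl in kits:
    print(f"{name}: {true_latency_ns(rate, cl):.2f} ns")
```

Running this reproduces the 13.75 ns and 14.06 ns figures from Crucial’s table, with the older generations landing in the same ~14-15 ns bracket.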

Normally in our DRAM reviews I refer to the performance index, which has a similar effect in gauging general performance:

DDR3-1600 C11: 1600/11 = 145.5
DDR4-2133 C15: 2133/15 = 142.2

Faster memory gives a bigger number, and reducing the CL gives a bigger number as well. Thus when comparing memory kits, if the difference in performance index is greater than 10, the kit with the higher index tends to win out, though for kits with similar indices the one with the higher frequency is preferred.
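The performance index rule of thumb can be sketched the same way (again, the function name is my own):

```python
# Performance index = transfer rate / CAS latency; higher is better.
def performance_index(transfer_rate_mts, cas_latency):
    return transfer_rate_mts / cas_latency

print(round(performance_index(1600, 11), 1))  # 145.5 (DDR3-1600 C11)
print(round(performance_index(2133, 15), 1))  # 142.2 (DDR4-2133 C15)
# The difference is under 10, so by the rule of thumb the
# higher-frequency DDR4 kit is the preferred choice here.
```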

“But who uses DDR3-1600 C11? Isn’t most memory like DDR3-1866 C9?”

This is a valid point – as DDR3 has matured, kits running faster than the default specification have become the norm in the market. The performance index for this kit is:

DDR3-1866 C9: 1866/9 = 207.3

In the grand scheme of things, a PI of 207 is quite large, and very high for DDR3L. A few DDR3 memory kits go beyond this up to a PI of 220, and an overclock beyond normal voltages might reach 240, but a value of 207 shows the maturity of the DDR3 market. If we look at the current DDR4 market, we can pick up DDR4-3000 C15 kits, which sit in the 200 bracket as well (3000/15 = 200).

I’ve prefaced our DDR3L vs DDR4 testing with all this as a response to ‘large CL = bad’ – in reality you have to compare both numbers together. Now that we have a platform that runs both, and we were able to source a beta DDR3L/DDR4 combination motherboard to test them on, we can see how ‘regular DDR4’ squares up against ‘high performance DDR3(L)’.

For these tests, both sets of numbers were run at 3.0 GHz with hyperthreading disabled.  Memory speeds were DDR4-2133 C15 and DDR3-1866 C9 respectively.

Dolphin Benchmark: link

Many emulators are often bound by single thread CPU performance, and general reports tended to suggest that Haswell provided a significant boost to emulator performance. This benchmark runs a Wii program that raytraces a complex 3D scene inside the Dolphin Wii emulator. Performance on this benchmark is a good proxy of the speed of Dolphin CPU emulation, which is an intensive single core task using most aspects of a CPU. Results are given in minutes, where the Wii itself scores 17.53 minutes.

Dolphin Emulation Benchmark

Cinebench R15

Cinebench is a benchmark based around Cinema 4D, and is fairly well known among enthusiasts for stressing the CPU for a provided workload. Results are given as a score, where higher is better.

Cinebench R15 - Single Threaded

Cinebench R15 - Multi-Threaded

Point Calculations – 3D Movement Algorithm Test: link

3DPM is a self-penned benchmark, taking basic 3D movement algorithms used in Brownian Motion simulations and testing them for speed. High floating point performance, MHz and IPC wins in the single thread version, whereas the multithread version has to handle the threads and loves more cores. For a brief explanation of the platform agnostic coding behind this benchmark, see my forum post here.

3D Particle Movement: Single Threaded

3D Particle Movement: MultiThreaded

Compression – WinRAR 5.0.1: link

Our WinRAR test from 2013 is updated to the latest version of WinRAR at the start of 2014. We compress a set of 2867 files across 320 folders totaling 1.52 GB in size – 95% of these files are small typical website files, and the rest (90% of the size) are small 30 second 720p videos.

WinRAR 5.01, 2867 files, 1.52 GB

Image Manipulation – FastStone Image Viewer 4.9: link

Similarly to WinRAR, the FastStone test is updated for 2014 to the latest version. FastStone is the program I use to perform quick or bulk actions on images, such as resizing, adjusting for color and cropping. In our test we take a series of 170 images in various sizes and formats and convert them all into 640x480 .gif files, maintaining the aspect ratio. FastStone does not use multithreading for this test, and thus single threaded performance is often the winner.

FastStone Image Viewer 4.9

Video Conversion – Handbrake v0.9.9: link

Handbrake is a media conversion tool that was initially designed to help convert DVD ISOs and Video CDs into more common video formats. The principle today is still the same, primarily as an output for H.264 + AAC/MP3 audio within an MKV container. In our test we use the same videos as in the Xilisoft test, and results are given in frames per second.

HandBrake v0.9.9 LQ Film

HandBrake v0.9.9 2x4K

Rendering – PovRay 3.7: link

The Persistence of Vision RayTracer, or PovRay, is a freeware package for, as the name suggests, ray tracing. It is a pure renderer rather than modeling software, but the latest beta version contains a handy benchmark for stressing all processing threads on a platform. We have been using this test in motherboard reviews to test memory stability at various CPU speeds to good effect – if it passes the test, the IMC in the CPU is stable for a given CPU speed. As a CPU test, it runs for approximately 2-3 minutes on high end platforms.

POV-Ray 3.7 Beta RC4

Synthetic – 7-Zip 9.2: link

As an open source compression tool, 7-Zip is a popular utility for making sets of files easier to handle and transfer. The software offers up its own benchmark, the result of which we report.

7-zip Benchmark

Overall: DDR4 vs DDR3L on the CPU

The results speak for themselves:

Comparing default DDR4 to a high performance DDR3 memory kit is almost an equal contest. Having the higher frequency helps for large frame video encoding (HandBrake HQ) as well as in WinRAR, which is normally memory intensive. The only real benchmark loss was FastStone, which regressed by one second (out of 48 seconds).

The end result, looking at the CPU test scores, is that upgrading to DDR4 doesn’t degrade performance relative to a high end DRAM kit, and you get the added benefits of future upgrades: faster speeds, lower power consumption due to the lower voltage, and higher density modules.


  • watzupken - Friday, September 4, 2015 - link

    To be honest, I feel the recent AMD chips are not so bad. In my opinion it boils down to 2 things:
    1) They are not able to get software makers to optimize for their chips,
    2) Disadvantage in terms of fab, i.e. 28nm vs 20/14nm.
    Of course, they also don't have pockets as deep as Intel's to begin with. So any misstep can be a serious setback for them.
  • i_will_eat_you - Saturday, December 12, 2015 - link

    AMD is long dead especially for the desktop market and server market. For their latest highend chips they simply slap bigger and bigger fans/heat sinks on to deal with a higher TDP from a ramped up clock. I'm not even sure if they have a particularly good standing in the "APU" market, low end market, etc. ARM and Intel are doing much better.

    They only have a slight gain in the GPU market with the push of HBM, but even this does not give them a strong lead, and they are falling back on Apple-like marketing in an attempt to boost their sales.

    The only reason people might tolerate AMD at the moment is because a lot of tasks will run ok on a CPU that is not the best or not the best value for money.

    Until they release a new architecture and a new fabrication process they are becoming completely out of the game. I agree they have no room for error in that.
  • Synomenon - Monday, August 24, 2015 - link

    So it's possible to have the full 16 PCIe 3.0 lanes from the CPU going to the GPU and have 4 PCIe 3.0 lanes from the chipset going to the m.2 drive on a Z170 board?
  • wyssin - Sunday, August 30, 2015 - link

    Here's what I'm talking about.
    In their i7-6700K review article, bit-tech.net compared chips at stock settings AND at a decent overclock. By seeing both of those results, you can see whether an upgrade makes sense for your needs (assuming you are an overclocker).
    http://www.bit-tech.net/hardware/2015/08/05/intel-...
  • oranos - Tuesday, September 15, 2015 - link

    Looks like after 5 years, there is still no reason to upgrade a 2500k.
  • sheeple - Thursday, October 15, 2015 - link

    I TOTALLY agree with you
  • sheeple - Thursday, October 15, 2015 - link

    THIS is funny, I'm using a SUPER OOOOOOLD L5408 Xeon that sips 40 watts and gives the performance of a 4th. gen i3 and runs ALL the latest 2015 games and the L5408 cost me 40 bucks on ebay, HAHAHAHAHAAAAA!!!!
  • sheeple - Thursday, October 15, 2015 - link

    My L5408 isn't even overclocked past 2.76 Ghz and runs The Witcher 3 Wild Hunt (game from 2015 on a machine with a cpu and mobo that together cost me 70 bucks used-the cpu was introduced in beginning of 2008) at 30 fps AVERAGE WITH ALL SETTINGS MAXED @ 1080p using a STOCK GTX 950 LOL!!! Whoever buys one of these "Skynet" Cpu's needs to do more research, SERIOUSLY !!!!
  • sheeple - Thursday, October 15, 2015 - link

    DON'T BE STUPID SHEEPLE!!! NEW DOES NOT ALWAYS = BETTER!
  • manolaren - Saturday, October 31, 2015 - link

    So if Anandtech tests are accurate, between the Skylake CPUs, the i5 is the way to go for a gaming PC. Gaming benchmarks are almost identical, but the price is a lot cheaper for the i5. Considering Skylake doesn't bring anything groundbreaking for the genre, I can't see any other way for gamers. My only question is whether future games will take advantage of more than 4 cores and make i7 CPUs a must.
