Evaluating Standalone DRAM Performance

Intel's Memory Latency Checker (MLC) tool measures memory latency and bandwidth, and how they change with increasing load on the system. It also provides several options for more fine-grained investigation, where bandwidth and latency from a specific set of cores to the caches or to main memory can be measured. The tool disables the sophisticated hardware prefetchers while testing in order to reflect the actual performance of the tested component (cache or main memory).

Most of the test options available in MLC are overkill for, or simply not applicable to, systems like the Skull Canyon NUC. In this section, we present selected main memory benchmarks for the various tested kits.
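For readers who want to reproduce these numbers, the results below map onto MLC's standard test modes. The invocations here are a sketch based on the options documented for MLC v3.x; exact option names have varied between releases, and the tool generally needs root privileges because it toggles the prefetchers through model-specific registers:

    mlc --idle_latency      # unloaded DRAM-to-DRAM latency
    mlc --max_bandwidth     # peak bandwidth for all-reads and 3:1, 2:1, 1:1 read:write mixes
    mlc --loaded_latency    # latency as the injected bandwidth load increases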

[Graphs: Intel Memory Latency Checker v3.0 results - Idle DRAM to DRAM Bandwidth; Idle DRAM to DRAM Latency; Peak DRAM Bandwidth at 1:1, 2:1 and 3:1 Reads:Writes; Peak DRAM Bandwidth, All Reads]

There are no surprises at all in the bandwidth graphs - the higher the memory frequency, the higher the bandwidth, whatever the scenario. Latency is a slightly different story, with certain high-frequency kits with larger (worse) timing parameters outperforming kits that have tighter timings but operate at a lower frequency. It is clear from the graphs above that a purely main-memory-bound (not cache-sensitive) workload would benefit immensely from the G.Skill Ripjaws 3000 MHz kit. While the bandwidth numbers are excellent and in line with expectations, even the actual latency numbers (in terms of nanoseconds rather than clock cycles) are better than those of the Kingston HyperX kit. Are there any real-world applications that can benefit from this performance? The next couple of sections provide the answers.
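The "nanoseconds rather than clock cycles" point is worth a quick worked example. CAS latency in wall-clock terms is the CL figure divided by the memory clock, which is half the DDR4 data rate. For the G.Skill kit at DDR4-3000 with CL16 (its rated 16-18-18-43 timings), that works out to 16 / 1.5 GHz ≈ 10.7 ns; a hypothetical DDR4-2400 kit at CL14 would need 14 / 1.2 GHz ≈ 11.7 ns despite the smaller cycle count. Higher frequency can therefore win on absolute latency even with numerically worse timings.

The idle latency MLC reports is measured with dependent loads: each load's address comes from the previous load, so the core cannot overlap the accesses and the prefetchers have nothing predictable to follow. Below is a minimal pointer-chasing sketch of that technique in C. It is not MLC's implementation; the buffer size, random chain construction and PRNG are illustrative choices, and it assumes a POSIX clock_gettime() and a libc with a large RAND_MAX (e.g. glibc).

    /* latchase.c - minimal pointer-chasing latency sketch (illustrative, not MLC) */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define BUF_BYTES (256u * 1024 * 1024)  /* well past the caches and 128 MB eDRAM */

    int main(void)
    {
        size_t n = BUF_BYTES / sizeof(void *);
        void **buf = malloc(n * sizeof(void *));
        size_t *idx = malloc(n * sizeof(size_t));
        if (!buf || !idx) return 1;

        /* Sattolo's algorithm: a random permutation with a single cycle,
           so the chase visits every slot exactly once before wrapping. */
        for (size_t i = 0; i < n; i++) idx[i] = i;
        srand(1);
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;  /* j in [0, i-1]; assumes large RAND_MAX */
            size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
        }

        /* Each slot stores the address of the next slot in the cycle. */
        for (size_t i = 0; i < n; i++)
            buf[idx[i]] = &buf[idx[(i + 1) % n]];

        /* Chase the chain: every load depends on the previous one. */
        struct timespec t0, t1;
        void **p = &buf[idx[0]];
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t k = 0; k < n; k++)
            p = (void **)*p;
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        /* Printing p keeps the compiler from discarding the dependent loop. */
        printf("avg dependent-load latency: %.1f ns (%p)\n", ns / n, (void *)p);
        free(idx);
        free(buf);
        return 0;
    }

Note that a random chase across 256 MB also pays TLB-miss costs on most of the loads, so this sketch will typically report a somewhat higher figure than MLC's idle latency.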

Comments

  • jjj - Monday, August 29, 2016 - link

    The second graph on page 3 should be flipped upside down as lower latency is better and right now it is misleading if you aren't paying attention.
  • snowmyr - Monday, August 29, 2016 - link

    http://imgur.com/a/GxZWh
    You're Welcome
  • kebo - Tuesday, August 30, 2016 - link

    +1 internets
  • Gigaplex - Monday, August 29, 2016 - link

    "Upon booting into the BIOS after installation, I found that the memory was only configured to run at 2667 MHz. Altering the 'Automatic' DRAM timings to 'Manual' and 'user-defining' the various timing parameters as printed on the SODIMM label (16-18-18-43) enabled higher frequency operation."

    I'm not surprised. My G.Skill RAM (DDR3) also didn't perform as advertised in a plug and play fashion, and when I emailed to complain, they acted as if it was normal for manual entry to be required. So much for XMP compliance.
  • Ian Cutress - Monday, August 29, 2016 - link

The system BIOS automatically loads the SPD profile of the memory kit unless the XMP option is enabled. In most systems, XMP is disabled as the default option because kits without XMP (most of the base ones) exist. Also, the SPD profile is typically left as the base JEDEC settings to ensure full compatibility.

If you want true plug and play of high speed memory kits, one of two things needs to happen:

    1) XMP is enabled by default (but not all memory will work)
    2) Base SPD profiles on the memory should be the high-speed option (means the memory won't work in systems not geared for high performance)

    There are a number of Kingston modules, typically DDR4-2400/2666, that will use option number (2). Some high-end motherboards have an onboard switch for (1). For everything else, it requires manually adjusting a setting in the BIOS.

The problem, as always, is maintaining wide compatibility - just in case someone buys a high-end memory kit but wants to run it at base JEDEC specifications, because the hardware they are moving the kit into doesn't support the high frequency.
  • TheinsanegamerN - Monday, August 29, 2016 - link

Disappointing to see nearly no improvement in gaming benchmarks. You'd figure that a big iGPU would need more bandwidth with newer games.

Perhaps current iGPUs just are not powerful enough. Maybe AMD will fix that with Zen APUs next year.
  • Ian Cutress - Monday, August 29, 2016 - link

It's a function of the embedded DRAM. You would expect DRAM speed to affect the iGPU less when eDRAM is present, because it provides a large 50 GB/s bidirectional buffer. Without eDRAM, I would expect the differences in gaming results to be larger. Will have to do testing to find out - this piece was focusing primarily on the Skull Canyon environment, which lists high speed memory support as a benefit.
  • Samus - Monday, August 29, 2016 - link

    I haven't seen a memory frequency roundup like this since Sandy Bridge, which did show a slight benefit (more than Skylake for sure) moving from DDR3 1066 through 1333, 1600 and so on. Haswell I'm sure is a similar story. I had noticeable performance improvements on AM3+ platforms going from 1600 to 2400 especially in regard to the embedded GPU.

With Skylake it seems you are just wasting your money to run potentially less reliable, more expensive memory out of specification. But I wonder if CPUs without the eDRAM show the same flat scaling?
  • Ian Cutress - Monday, August 29, 2016 - link

    Ivy Bridge: http://www.anandtech.com/show/6372/memory-performa...

    Haswell: http://www.anandtech.com/show/7364/memory-scaling-...
  • Samus - Monday, August 29, 2016 - link

    Oh cool, thanks Ian! Should have figured you guys keep up with it.
