Memory Straps

In the interest of achieving consistent results, the Team Group Night Hawk RGB DDR4-3000 kit was left at its rated latencies, which were set by enabling the XMP profile and then changing the memory strap/multiplier in the GIGABYTE UEFI BIOS to select the desired frequency for each run. On Ryzen with this BIOS, due to memory strap limitations and the lack of finer-grained straps, the only way to run the memory at DDR4-3000 would be to use the DDR4-2933 strap and raise the base clock from 100 MHz to 102.3 MHz. This would technically overclock the processor, and in doing so would skew the results relative to the other straps tested, such as DDR4-2400.
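To illustrate the trade-off, a minimal sketch (the helper function below is hypothetical, not from the article) of why the 2933 strap at a 102.3 MHz base clock lands at roughly DDR4-3000 - and why it would also overclock everything else tied to the base clock:

```python
# Effective memory data rate scales linearly with the base clock (BCLK),
# since straps are rated relative to the stock 100 MHz BCLK.
# Hypothetical helper for illustration only.

def effective_data_rate(strap_mt_s: float, bclk_mhz: float) -> float:
    """Effective data rate in MT/s for a memory strap at a given BCLK."""
    return strap_mt_s * (bclk_mhz / 100.0)

# 2933 strap at 102.3 MHz BCLK: 2933 * 1.023 = ~3000 MT/s
print(f"{effective_data_rate(2933, 102.3):.0f} MT/s")  # 3000 MT/s
```

The catch is that the CPU core clock is derived from the same base clock, so a 2.3% BCLK bump overclocks the processor by the same 2.3%, muddying any comparison against straps run at a stock 100 MHz.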

So to keep everything on an even keel throughout, all of the settings in the BIOS except the memory settings remained at default. The memory had its XMP profile enabled, the base clock was returned to 100 MHz, and the different memory straps were tested with identical latency timings. All tests were run at 16-18-18, as per the memory kit's rating.

For the testing we are using a Ryzen 7 processor, specifically the Ryzen 7 1700. AMD's officially listed support for this processor depends on the amount of memory and the memory type. The short answer is that when using one memory module per channel, the Ryzen 7 1700 is designed to support DDR4-2666, but when two memory modules per channel are used, support drops to DDR4-2400. For this test, because downclocking is easy enough, we start testing at the DDR4-2400 data rate and work up through the processor's rated memory speed to the speed the memory kit is rated for.

We also overclock the memory beyond its rated speed. Each kit will offer different levels of overclocking headroom, depending on the quality of the kit, the processor's memory controller, and the motherboard, but we were able to push this memory kit all the way to DDR4-3333. At this speed it was stable in all of our testing, which equates to just over a 10% increase on the rated frequency of the kit. It will be interesting to see how much of an effect this extra speed manifests in our testing.

For users unfamiliar with this sort of image, this is the common CPU-Z tool that most professionals use to quickly probe the underlying hardware and speeds in a system. This is the main memory tab of the software, showing that we are using DDR4 in dual-channel mode with a total of 16 gigabytes. The NB Frequency, where NB historically stands for 'North Bridge', is the frequency at which the Infinity Fabric is running. In the case above, it is running at 1663 MHz.

Below that are the frequency and sub-timings for the memory itself, showing a kit running at 1663.3 MHz with 16-18-18 sub-timings and a 1T command rate. I can already hear some of our readers asking: why does it say the memory is running at 1663.3 MHz when it is supposedly running at DDR4-3333? The key here is the difference between the frequency of the memory and the rate at which it transfers data.

For a given memory frequency, say 1000 MHz, the system will perform 1000 million full clock cycles every second. These are full cycles, alternating from a peak voltage to a low voltage and back again within a single cycle. Modern memory, such as DDR4, runs at a Double Data Rate - this is what DDR stands for. This means that a transfer can occur twice per cycle, once on the rising edge of the clock and once on the falling edge. The final result is that every cycle gives us two transfers, so DDR4 at 1666 MHz is another way of saying DDR4 at 3333 mega transfers per second, or MT/s. Memory is quoted in terms of transfers per second, hence DDR4-3000 or DDR4-3333.
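The relationship above can be sketched in a couple of lines (a hypothetical helper for illustration, not anything from CPU-Z or the article):

```python
# DDR naming convention: the data rate in MT/s is twice the clock
# frequency in MHz, because a transfer happens on both the rising
# and falling edges of each clock cycle.

def data_rate_mt_s(clock_mhz: float) -> float:
    """Data rate in MT/s for a double-data-rate memory clock in MHz."""
    return clock_mhz * 2

print(data_rate_mt_s(1666.5))  # 3333.0 -> marketed as "DDR4-3333"
print(data_rate_mt_s(1500.0))  # 3000.0 -> marketed as "DDR4-3000"
```

This is also why CPU-Z reports roughly half the number printed on the box: it shows the actual clock frequency, not the transfer rate.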

There is often user confusion here, with memory kits being listed as DDR4 at 3000 MHz when they mean DDR4 at 3000 MT/s (Ed: I'm pretty sure everyone on the AnandTech staff is guilty of this at some point). For this review, and any memory reviews going forward, AnandTech is going to keep consistency in how we represent numbers. Typically we will quote the MT/s value, as this is what is listed on the kit, and specifically state when we are talking about the frequency (in Hz) or the data rate (MT/s), and use 'speed' as the generic term.

In this review, we will be testing the following combinations of data rate and latencies:

  1. DDR4-2400 16-18-18
  2. DDR4-2666 16-18-18 (Ryzen 7 Supported at 1DPC)
  3. DDR4-2800 16-18-18
  4. DDR4-2933 16-18-18 (Nearest to memory kit rating)
  5. DDR4-3066 16-18-18
  6. DDR4-3200 16-18-18
  7. DDR4-3333 16-18-18 (10%+ overclock)
Comments

  • lyssword - Friday, September 29, 2017 - link

    Seems these tests are GPU-limited (a GTX 980 is roughly on par with a 1060 6GB), so they may not show true gains if you had something like a 1080 Ti, and they're also not the most demanding CPU-wise, except maybe Warhammer and Ashes
  • Alexvrb - Sunday, October 1, 2017 - link

    Some of the regressions don't make sense. Did you double-check timings at every frequency setting, perhaps also with Ryzen Master software (the newer versions don't require HPET either IIRC)? I've read on a couple of forums where above certain frequencies, the BIOS would bump some timings regardless of what you selected. Not sure if that only affects certain AGESA/BIOS revisions and if it was only certain board manufacturers (bug) or widespread. That could reduce/reverse gains made by increasing frequency, depending on the software.

    Still, there is definitely evidence that raising memory frequency enables decent performance scaling, for situations where the IF gets hammered.
  • ajlueke - Friday, October 6, 2017 - link

    As others have mentioned here, it is often extremely useful to employ modern game benchmarks that will report CPU results regardless of GPU bottlenecks. Case in point: I ran a similar test to this back in June utilizing the Gears of War 4 benchmark. I chose it primarily because the benchmark will display CPU (game) and CPU (render) fps regardless of GPU frames generated.

    https://community.amd.com/servlet/JiveServlet/down...

    At least in Gears of War 4, the memory scaling on the CPU side was substantial. But to be fair, I was GPU-bound in all of these tests, so my observed fps would have been identical every time.

    https://community.amd.com/servlet/JiveServlet/down...

    Really curious whether my results would be replicated in Gears 4 with the hardware in this article. That would be great to see.
  • farmergann - Wednesday, October 11, 2017 - link

    For gaming, wouldn't it be more illuminating to look at frame-time variance and CPU induced minimums to get a better idea of the true benefit of the faster ram?
  • JasonMZW20 - Tuesday, November 7, 2017 - link

    I'd like to see some tests where lower subtimings were used on say 3066 and 3200, versus higher subtimings at the same speeds (more speeds would be nice, but it'd take too much time). I'd think gaming is more affected by latency, since they're computing and transferring datasets immediately.

    I run my Corsair 3200 Vengeance kit (Hynix ICs) at 3066 using 14-15-15-34-54-1T at 1.44v. The higher voltage is to account for tighter subtimings elsewhere, but I've tested just 14-15-15-34-54-1T (auto timings for the rest) in Memtest86 at 1.40v and it threw 0 errors after about 12 hours. Geardown mode disabled.
