Power Analysis

One of the interesting aspects of the double height memory is how it affects power consumption. It would be natural to assume that doubling the number of DRAM ICs and SPD EEPROMs would double the power draw relative to a standard capacity module, leaving the power per GB roughly the same.

To measure the power consumption, we ran Intel's Power Gadget 3.5.0 utility during benchmark runs of our POV-Ray 3.7.1 test and of our Memory Latency Checker test. POV-Ray 3.7 is a rendering benchmark that stresses the whole system - it is a good indicator of memory stability and overall performance, which made it a natural choice from a power point of view. Our second test uses our MLC2 memory benchmark, which is purely memory focused: it loads the memory with high-bandwidth workloads as well as testing latency.

POV-Ray 3.7: link

The Persistence of Vision Ray Tracer, or POV-Ray, is a freeware package for, as the name suggests, ray tracing. It is a pure renderer rather than modeling software, but the latest beta version contains a handy benchmark for stressing all processing threads on a platform. We have been using this test in motherboard reviews to test memory stability at various CPU speeds to good effect – if it passes the test, the IMC in the CPU is stable for a given CPU speed. As a CPU test, it runs for approximately 1-2 minutes on high-end platforms.

DRAM Power Consumption: POV-Ray 3.7 - Total

When directly comparing the G.Skill TridentZ and TridentZ DC kits, the average power consumption in POV-Ray was 285% higher for the DC modules. That is a noticeable jump over the two standard sticks, and more than double in terms of overall power used. If we convert this down to average power per gigabyte:

Power Consumption: POV-Ray 3.7 - mW / GB

There is still an additional power penalty per GB for using the new modules.
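The per-GB conversion used in the chart above is straightforward: divide a kit's average power draw by its capacity. A minimal sketch, using made-up illustrative numbers rather than our measured values:

```python
def power_per_gb(avg_power_mw: float, capacity_gb: int) -> float:
    """Normalize a kit's average power draw to mW per GB."""
    return avg_power_mw / capacity_gb

# Hypothetical figures for illustration only (not the measured data):
standard_kit = power_per_gb(avg_power_mw=1200.0, capacity_gb=32)  # 2x16GB
dc_kit       = power_per_gb(avg_power_mw=3400.0, capacity_gb=64)  # 2x32GB DC

print(f"standard: {standard_kit:.1f} mW/GB, DC: {dc_kit:.1f} mW/GB")
```

With numbers like these, doubling capacity while nearly tripling total power still leaves the DC kit worse off per gigabyte, which is the pattern the chart shows.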

Memory Latency Checker: link

Intel's Memory Latency Checker is a tool designed to measure memory latency and bandwidth. MLC measures multiple aspects of DRAM, including idle and loaded latencies, cache-to-cache data transfer latencies, and peak memory bandwidth. The benchmark focuses purely on the memory and is influenced by higher clock speeds and latency timings.
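Tools like MLC measure latency by issuing dependent loads, where each access must finish before the next address is known, so prefetchers cannot hide the delay. The sketch below illustrates that pointer-chasing pattern in Python; it is our own toy, not MLC's method, and interpreter overhead dominates, so the absolute numbers say nothing about real DRAM latency.

```python
import random
import time

def pointer_chase_latency_ns(size: int, iters: int = 100_000) -> float:
    """Estimate average time per dependent access (ns) by walking a
    random single-cycle permutation, so each load depends on the last."""
    perm = list(range(size))
    random.shuffle(perm)

    # Link the shuffled indices into one cycle covering every element.
    next_idx = [0] * size
    cur = perm[0]
    for i in range(1, size):
        next_idx[cur] = perm[i]
        cur = perm[i]
    next_idx[cur] = perm[0]

    start = time.perf_counter()
    idx = 0
    for _ in range(iters):
        idx = next_idx[idx]  # next address unknown until this load completes
    elapsed = time.perf_counter() - start
    return elapsed / iters * 1e9
```

A real tool does this with raw machine loads over buffers larger than the caches, which is why its results track DRAM clocks and timings directly.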

DRAM Power Consumption: MLC 2 - Total

Over a longer duration, and in a heavily memory-weighted benchmark such as MLC2, the power difference between the double capacity and standard modules was more consistent with what was initially expected: double the power consumption for double the capacity. For what it's worth, the Corsair Vengeance LPX kit at 1.2 V was no better off from a power consumption standpoint than the 1.35 V kits tested.

Power Consumption: MLC 2 - Total mWh / GB

If we compare energy per gigabyte, the DC kit is actually very competitive with the smaller kits. Here, the 2x8GB kit consumes the most energy per GB, which suggests that static power is a significant proportion of the total in this analysis.
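The effect of static power can be seen with a toy model: if total draw is a fixed overhead (PCB, interface, and control circuitry) plus a component that scales with capacity, then the per-GB figure falls as capacity rises. The constants below are invented purely for illustration:

```python
# Toy model with made-up constants -- not measured values.
STATIC_MW = 800.0          # fixed overhead per kit, independent of capacity
DYNAMIC_MW_PER_GB = 25.0   # power that scales with the number of DRAM ICs

def total_mw(capacity_gb: int) -> float:
    """Modeled total power draw for a kit of the given capacity."""
    return STATIC_MW + DYNAMIC_MW_PER_GB * capacity_gb

for cap in (16, 32, 64):
    print(f"{cap:>3} GB kit: {total_mw(cap) / cap:.1f} mW/GB")
```

Under any constants of this shape, the 16 GB kit amortizes the fixed overhead over the fewest gigabytes and so looks worst per GB, matching what the MLC2 chart shows.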

 


  • mickulty - Wednesday, January 23, 2019 - link

    Really interesting article, thanks Gav and Ian!

    I'd love to see how a configuration using these DC sticks compares to 4x16GB on a 4-dimm T-topology board, especially in ability to hit higher speeds.
  • edzieba - Wednesday, January 23, 2019 - link

    Presumably the "Two DIMMs One Channel"-on-a-board layout would preclude these being used in 4-slot consumer boards (which would require effectively 4 DIMMs per channel)? I can't think of any boards off the top of my head that support more than 2 DIMMs per channel without using FBDIMMs.
  • Ej24 - Wednesday, January 23, 2019 - link

    Intel has validated their 8th and 9th gen desktop CPUs to work with 128 GB of memory, so that would suggest it's possible; it's just up to the motherboard manufacturer to implement it appropriately.
  • Hul8 - Wednesday, January 23, 2019 - link

    I believe that's using regular (not double) modules with 16x Samsung's new 16 Gb memory packages. You can still use 2 of those per channel on regular consumer motherboards.
  • Ej24 - Wednesday, January 23, 2019 - link

    https://www.anandtech.com/show/13473/intel-to-supp...
  • schujj07 - Wednesday, January 23, 2019 - link

    I wonder if something like this could be designed for servers using RDIMMs or LRDIMMs. The current cost of 64GB LRDIMMs is more than double that of 32GB RDIMMs, and 128GB LRDIMMs are about 4x more expensive than 64GB LRDIMMs. Could be a nice way to increase RAM capacity there without breaking the bank.
  • brakdoo - Wednesday, January 23, 2019 - link

    128 GB and 256 GB DIMMs use TSV (sometimes called 3DS or 3D stacked in the server business) memory. That's why they are more expensive.

    Other than that: This approach doubles the rank. Typical servers already reach their "maximum rank" on each channel with regular sized memory.
  • mickulty - Wednesday, January 23, 2019 - link

    It's pretty common for various forms of registered/buffered memory to use x4-width ICs rather than the standard x8, meaning you have 16 per rank rather than 8 per rank with the same capacity per IC. That achieves the same thing in terms of capacity.
  • nathanddrews - Wednesday, January 23, 2019 - link

    Certainly looks like the future of RAM, but like most things, I would wait for v3.0 before jumping in. There's bound to be more power savings, compatibility tweaks, and performance tweaks. When is DDR5 arriving?
  • oddity1234 - Wednesday, January 23, 2019 - link

    That's a bizarre existential predicament the sea slug is stuck in.
