Enabling XMP

By default, memory adheres to specifications set by JEDEC (formerly known as the Joint Electron Device Engineering Council). These specifications state what information should be stored in the memory EEPROM, such as manufacturer information, serial number, and other useful details. Part of this is a set of timings for standard memory speeds, which a system falls back to when no other information is available. For DDR4, this means DDR4-2133 15-15-15 at 1.20 volts.
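
To put those JEDEC defaults in absolute terms, here is a minimal Python sketch (my own illustration, not from the original article) that converts a data rate and CAS latency into true latency in nanoseconds:

```python
def cas_latency_ns(data_rate_mts: float, cas_cycles: int) -> float:
    """Convert a DDR data rate (MT/s) and CAS latency (clocks) to nanoseconds.

    DDR transfers data twice per clock, so the memory clock in MHz is half
    the data rate; one clock period is 1000 / clock_mhz nanoseconds.
    """
    clock_mhz = data_rate_mts / 2
    return cas_cycles * (1000 / clock_mhz)

# JEDEC default for DDR4: DDR4-2133 CL15 works out to roughly 14.1 ns
print(round(cas_latency_ns(2133, 15), 1))   # 14.1
# For comparison, DDR3-1333 CL9 works out to roughly 13.5 ns
print(round(cas_latency_ns(1333, 9), 1))    # 13.5
```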

An XMP, or Extreme Memory Profile, is an Intel-developed addition: an extra set of values stored in the EEPROM which the BIOS can read via SPD. Most DRAM has space for two additional SPD profiles, sometimes referred to as an ‘enthusiast’ and an ‘extreme’ profile, although most consumer-oriented modules only carry one XMP profile. The XMP profile is typically the one advertised on the memory kit: if the capability of the memory deviates in any way from the specified JEDEC timings, a manufacturer must use an XMP profile.
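
For the curious, the presence of an XMP block can be checked from a raw dump of the SPD EEPROM. The sketch below is a rough illustration and not from the article: the 0x0C 0x4A signature and the offsets used (176 for DDR3 XMP 1.x, 384 for DDR4 XMP 2.0) are assumptions based on publicly documented SPD layouts, and the dump filename is a placeholder.

```python
# Rough sketch: look for an XMP signature in a raw SPD dump. The magic bytes
# and offsets below are assumptions from publicly documented layouts, not
# values confirmed by this article; verify against the JEDEC/Intel documents.
XMP_MAGIC = bytes([0x0C, 0x4A])
CANDIDATE_OFFSETS = {176: "XMP 1.x (DDR3)", 384: "XMP 2.0 (DDR4)"}

def find_xmp(spd: bytes):
    """Return a label for the first XMP region found, or None."""
    for offset, label in CANDIDATE_OFFSETS.items():
        if spd[offset:offset + 2] == XMP_MAGIC:
            return label
    return None

# "dimm0.spd" is a placeholder for a raw SPD dump (e.g. read over SMBus or
# via the operating system's SPD/EEPROM driver).
with open("dimm0.spd", "rb") as f:
    spd = f.read()

print(find_xmp(spd) or "No XMP block found; only JEDEC timings present")
```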

Thus it is important that the user enables such a profile!  It is not plug and play!

As I have said before when reviewing memory, at big computing events and gaming LANs there are plenty of enthusiasts who boast about buying the best hardware for their system. Yet if you ask what memory they are running and then actually probe the system (using CPU-Z), more often than not the user who bought this expensive memory has not enabled XMP. It sounds like a joke, but this happened several times at my last iSeries LAN in the UK: people boasting about high performance memory were, because they had not enabled it in the BIOS, still running at DDR3-1333 C9.
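
If you want to make the same check without rebooting, CPU-Z shows it directly on Windows; on Linux the equivalent information comes from dmidecode. The sketch below is my own illustration of that check, not part of the original article: it needs root, and the field names ("Configured Memory Speed" versus the older "Configured Clock Speed") vary between dmidecode versions, so treat the parsing as an assumption to adapt.

```python
import re
import subprocess

# Compare the modules' rated speed against the speed they are actually
# configured to run at, using dmidecode output (requires root).
out = subprocess.run(["dmidecode", "-t", "memory"],
                     capture_output=True, text=True, check=True).stdout

rated = [int(m) for m in re.findall(r"^\s*Speed:\s*(\d+)", out, re.MULTILINE)]
configured = [int(m) for m in re.findall(
    r"^\s*Configured (?:Memory|Clock) Speed:\s*(\d+)", out, re.MULTILINE)]

if rated and configured and max(configured) < max(rated):
    print(f"Rated for {max(rated)} MT/s but running at {max(configured)} MT/s; "
          "check that XMP is enabled in the BIOS.")
elif rated and configured:
    print("Memory appears to be running at its rated speed.")
else:
    print("Could not read memory speeds from dmidecode output.")
```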

So enable XMP with your memory!

Here is how to do it on most motherboards, with the exception of the ASUS X99-Deluxe, which uses an onboard XMP switch:

Step 1: Enter the BIOS

This is typically done by pressing DEL or F2 during POST/startup. Users who have enabled fast booting under Windows 8 will have to use motherboard vendor software to enable ‘Go2BIOS’ or a similar feature.

Step 2: Enable XMP

Depending on your motherboard manufacturer, this will be different. I have taken images from the four major motherboard manufacturers to show where the setting is on some of the latest X99 motherboard models.

On any ASUS X99 board, the setting is on the EZ-Mode screen. Click the button labelled ‘XMP’ on the left and select ‘Profile 1’:

If you do not get an EZ mode (some ROG boards go straight to advanced mode), then the option is under the AI Tweaker tab, in the AI Overclock Tuner option, or you can navigate back to EZ mode.

For ASRock motherboards, depending on which model you have, navigate to OC Tweaker and scroll down to the DRAM Timing Configuration. Adjust the ‘Load XMP Setting’ option to Profile 1.

For GIGABYTE motherboards, press F2 to switch to classic mode and navigate to the MIT tab. From here, select Advanced Frequency Settings.

In this menu is the option to enable XMP:

Finally, on MSI motherboards, there is a button to enable XMP right next to the OC Genie button in the BIOS:

I understand that setting XMP may seem trivial to most of AnandTech’s regular readers; however, for completeness (and given how often XMP seems to be left disabled at events) I wanted to include this mini-guide. Of course, different BIOS versions on different motherboards may have moved the options around a little: either head to the enthusiast forums, or, if it is a motherboard I have reviewed, check the review, as I tend to post all the screenshots of the BIOS I tested with as a guide.

120 Comments

  • wyewye - Sunday, February 8, 2015 - link

    Extremely weak review.

    Ian, is this your first memory review?
    Everyone knows that in real-world apps the difference is small. What's the point of showing a gazillion charts with 1% differences? You had way more random noise from test errors; those numbers are meaningless.
    For memory, synthetic tests are the only way.

    Thumbs down, bring back Anand for decent reviews.
  • wyewye - Sunday, February 8, 2015 - link

    @Ian
    ProTip: when the differences are small and you get obviously wrong results like 2800@cl14 slower than 2133@cl16, run 10 or 20 tests, eliminate spikes and compute the median.
  • wyewye - Sunday, February 8, 2015 - link

    Ian stop being sloppy and do a better job next time!
  • Oxford Guy - Sunday, February 8, 2015 - link

    "Moving from a standard DDR3-2133 C11 kit to DDR4-2133 C15, just by looking at the numbers, feels like a downgrade despite what the rest of the system is."

    Sure... let's just ignore the C10 and C9 DDR3 that's available to make DDR4 look better?
  • eanazag - Monday, February 9, 2015 - link

    Why not post some RAM disk numbers?

    What I saw in the article is that the cheapest high-capacity option made the most sense for my dollar.
  • SFP1977 - Tuesday, February 10, 2015 - link

    Am I missing something, or how did they overcome the fact that their socket 2011 test processor has 4 memory channels while the socket 1150 processor has only 2?
  • deanp0219 - Wednesday, February 11, 2015 - link

    Great article, but in fairness, you're comparing the first run of DDR4 modules against very well developed and evolved DDR3 modules. When DDR3 was first released, I'll bet some of the high-end DDR2 modules available at the time matched up with them fairly well. We'll have to see where DDR4 technology goes from here. Again, great read though. Totally not a reflection on the article -- nothing you can do about the state of the tech. Made me feel better about my DDR3-2133 machine!
  • MattMe - Friday, July 10, 2015 - link

    Am I right in thinking that the benefits of DDR4 outside of power consumption could well be in scenarios where integrated graphics are being utilised?

    The additional channels and clock speeds are more likely to have an effect there than with an external GPU, I would assume. But we have yet to see any DDR4L in the consumer market (as far as I'm aware), which is its most beneficial area.

    Seeing some benchmarks including integrated graphics would be very interesting, especially in smaller, lower powered systems like a NUC or similar.
  • LorneK - Monday, October 5, 2015 - link

    My gripe with Cinebench as a "professional" test is that aside from tracing rays, it in no way resembles the kind of rendering that an actual professional would be doing.

    There's hardly any geometry, hardly any textures, no displacement, no advanced lighting models, etc.

    So yeah, DDR4 makes barely any impact in Cinebench, but I have to wonder how much of that is due to Cinebench requiring almost nothing from RAM in general.

    Someone needs to come along and make a truly useful rendering benchmark. A complex scene with millions of polygons, gigs of textures, global illumination, glossy reflections, the works basically.

    Only then can we actually know what various aspects of a machine's hardware are affecting.

    An amazing SSD would reduce initial scene spool-up time. Fast single-thread performance would also speed up render starts. Beefy RAM configs would be better at feeding the CPUs the multiple GBs needed to do the job. And the render tiles would take long enough to complete that a 72-thread Xeon box isn't wasting half its resources simply moving from tile to tile and rendering microscopic regions.
  • Zerung - Tuesday, February 9, 2016 - link

    My Asus Mobo notes the following:
    'Due to Intel® chipset limitation, DDR4 2133 MHz and higher memory modules on XMP mode will run at the maximum transfer rate of DDR4 2133 Mhz'. Does this mean that running the DDR4-3400 CL16 kit may not give me a latency below 10 ns?
    Thanks
