Select a Memory Divider and Set Some Timings

The latest generation of Intel memory controllers provides a far wider choice of memory dividers than ever before. That said, there are only three we ever use: 1:1, 5:4, and in the case of DDR3, 2:1. Setting 1:1, simply put, means the memory runs synchronously with the FSB. Keep in mind, though, that the FSB is quad-pumped (QDR) while memory is double data rate (DDR). For example, setting an FSB of 400MHz results in a 1.6GHz (4 x 400) effective FSB frequency with memory at DDR-800 (2 x 400), assuming your memory is running 1:1. Selecting 5:4 at the same 400MHz FSB instead sets a memory speed of DDR-1000 (5/4 x 2 x 400).
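
To make the divider arithmetic concrete, here is a minimal Python sketch; the function name and the (memory:FSB) ratio convention are our own illustration, not anything a BIOS actually exposes:

    def memory_speed(fsb_mhz, ratio=(1, 1)):
        """Return (effective FSB MHz, effective DDR MHz).

        The FSB is quad-pumped (4 transfers per clock) and memory is
        double data rate (2 transfers per clock); ratio is (mem, fsb).
        """
        mem_clock_mhz = fsb_mhz * ratio[0] / ratio[1]
        return fsb_mhz * 4, mem_clock_mhz * 2

    print(memory_speed(400))          # (1600, 800.0)  -> 1:1, DDR-800
    print(memory_speed(400, (5, 4)))  # (1600, 1000.0) -> 5:4, DDR-1000
    print(memory_speed(400, (2, 1)))  # (1600, 1600.0) -> 2:1, DDR3-1600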

Regrettably, there are rarely any real performance gains from moving to memory ratios greater than 1:1. While many synthetic benchmarks will reward you with higher read and copy bandwidth values, few programs are actually bottlenecked by total memory throughput. If we analyze what happens to true memory latency when moving from DDR2-800 CAS3 to DDR2-1000 CAS4, we find that overall memory access times may actually increase. That may seem counterintuitive to the casual observer, and it is a great example of why it's important to understand the effect before committing to the change.
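
A quick back-of-the-envelope check shows why: true first-word latency is the CAS delay divided by the actual memory clock. This short sketch (our own arithmetic, using the example numbers above) makes the comparison explicit:

    def cas_ns(ddr_rating_mhz, cas_cycles):
        # DDR transfers twice per clock, so the real clock is half the rating
        clock_mhz = ddr_rating_mhz / 2
        return cas_cycles / clock_mhz * 1000  # nanoseconds

    print(cas_ns(800, 3))   # 7.5 ns for DDR2-800 CAS3
    print(cas_ns(1000, 4))  # 8.0 ns for DDR2-1000 CAS4

In other words, the "faster" DDR2-1000 CAS4 setting actually takes half a nanosecond longer to return the first word of data.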

Start your next phase of tuning by once again entering the BIOS and selecting a memory divider. As mentioned earlier, even though there are many dividers to choose from, you will do best to stick with either 1:1 or 5:4 when using DDR2 and 2:1 when running DDR3. Next, set your primary timings - typically, even the worst "performance" memory can handle CAS3 up to about DDR2-800, CAS4 up to about DDR2-1075, and CAS5 for anything higher. These are only approximate ranges, though, and your results will vary depending on the design of your motherboard's memory system layout, the quality of your memory, and the voltages you apply. You may find it easiest to set all primary memory timings (CL-tRCD-tRP) to the same value when first testing (e.g. 4-4-4, 5-5-5), and as a general rule of thumb, cycle time (tRAS) should be set no lower than tRCD + tRP + 2 when using DDR2 - for DDR3, try to keep this value between 15 and 18 clocks inclusive.
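
As a sketch of that rule of thumb (the helper below and its name are purely illustrative):

    def min_tras(trcd, trp, ddr3=False):
        if ddr3:
            return 15  # keep DDR3 tRAS between 15 and 18 clocks
        return trcd + trp + 2  # DDR2: no lower than tRCD + tRP + 2

    print(min_tras(4, 4))        # 10 -> e.g. 4-4-4-10 for DDR2
    print(min_tras(7, 7, True))  # 15 -> the DDR3 floor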

Failure of the board to POST (Power-On Self-Test) after adjusting memory settings is a strong indication that either: A) you've exceeded the memory's maximum possible frequency - choose a divider that results in a lower memory speed; B) the timings are too tight (low) for the attempted speed - loosen the values and try again; or C) the particular frequency/timing combination is possible, but not at the voltage currently applied - raise the memory voltage. Not every failure to POST has a solution. Some motherboards simply refuse to run certain memory dividers, and more and more memory modules these days are flat-out incapable of running the tighter timings possible with the previous generation's products.
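
That troubleshooting order can be restated as a simple checklist; this sketch is our own paraphrase of the three cases above, with hypothetical flag names:

    def next_post_fix(lowered_divider, loosened_timings, vdimm_at_limit):
        if not lowered_divider:
            return "A: choose a divider giving a lower memory speed"
        if not loosened_timings:
            return "B: loosen the primary timings and retry"
        if not vdimm_at_limit:
            return "C: raise the memory voltage (within spec) and retry"
        return "no fix - this board/memory combination may be impossible"

Working through the cases in that order, changing one setting at a time, keeps it clear which adjustment fixed (or broke) the POST.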

Booting to the Windows desktop is a good indication that you are at least close to final stable memory values. Again, start Prime95 and run at least 30 minutes of the blend test. Failures, especially rounding errors, are strong indications of memory problems. If you encounter errors, reset the system and increase the memory voltage by a small amount, always remembering to stay within specified limits. If errors persist regardless of memory voltage, loosen the primary timings and continue testing. Once you have managed to "prime" for 30 minutes or more, you can move on to the final phase - overclocking the CPU.

Comments (56)

  • Lifted - Wednesday, December 19, 2007 - link

    Very impressive. Seems more like a thesis paper than a typical tech site article. While the content on AT is of a higher quality than the rest of the sites out there, I think the other authors, founder included, could learn a thing or two from an article like this. Less commentary/controversy and more quality is the way to go.
  • AssBall - Wednesday, December 19, 2007 - link

    Shouldn't page 3's title be "Exploring the limits of 45nm Halfnium"? :D

    http://www.webelements.com/webelements/elements/te...
  • lifeguard1999 - Wednesday, December 19, 2007 - link

    "Do they worry more about the $5000-$10000 per month (or more) spent on the employee using a workstation, or the $10-$30 spent on the power for the workstation? The greater concern is often whether or not a given location has the capacity to power the workstations, not how much the power will cost."

    For High Performance Computers (HPC, a.k.a. supercomputers), every little bit helps. We are concerned not only about the power drawn by the CPU, but also by the little 5 Watt Ethernet port that goes unused yet still consumes power. HPC systems now scale into the tens of thousands of CPUs, so that 5 Watt Ethernet port becomes a 50 kW problem just in additional power required. That problem now has to be cooled as well, and more cooling requires more power. Now can your infrastructure handle the power and cooling load, or does it need to be upgraded?

    This is somewhat of a straw-man argument, since most (but not all) HPC vendors know about the problem. Most HPC vendors do not include items on their systems that are not used. They know that if they want to stay in the race with their competitors, they have to meet or exceed performance benchmarks. Those benchmarks include not only how fast a system can execute software, but also how much power, cooling, and (can you guess it?) noise it requires.

    In 2005, we started looking at what it would take to house our 2009 HPC system. In 2007, we started upgrades to be able to handle the power and cooling needed. The local power company loves us, even though they have to expand their substation.

    Thought for the day:
    How many car batteries does it take to make a UPS for a HPC system with tens-of-thousands of CPUs?
  • CobraT1 - Wednesday, December 19, 2007 - link

    "Thought for the day:
    How many car batteries does it take to make a UPS for a HPC system with tens-of-thousands of CPUs?"

    0.

    Car batteries are used in neither static nor rotary UPSs.
  • tronicson - Wednesday, December 19, 2007 - link

    this is a great article - very technical, will have to read it step by step to get it all ;-)

    but i have one question that remains for me.. what about electromigration with the very fine 45nm structures? we have new materials here like the hafnium-based high-k dielectric - guess this may improve the resistance against EM... but how far can we really push this cpu before we risk a very short life and destruction? intel gives headroom up to a max of 1.3625V.. well, what can i risk with a good waterchill? how far can i go?

    i mean, feeding a 45nm core, say, 1.5V is the same as giving a 65nm chip 1.6375V! would you do that to your Q6600?
  • eilersr - Wednesday, December 19, 2007 - link

    Electromigration is an effect usually seen in the interconnect, not in the gate stack. It occurs when a wire (or material) has a high enough current density that the atoms actually move, leading to an open circuit, or in some cases, a short.

    To address your questions:
    1. The high-k dielectric in the gate stack has no effect on the resistance of the interconnect
    2. The finer features of wires on a 45nm process do have a lower threshold for electromigration effects, i.e. smaller wires can tolerate a lower current density before breaking.
    3. The effects of electromigration are fairly well understood at this point; there are all kinds of automated checks built into the design tools before tapeout, as well as very robust reliability tests performed on the chips prior to volume production, to catch these types of reliability issues.
    4. The voltage a chip can tolerate is limited by a number of factors. Ignoring breakdown voltages and other effects limited by the physics of transistor operation, heat is where most OC'ers are concerned. As power dissipation is most crudely thought of as CV^2f (capacitance times voltage-squared times frequency), the reduced capacitance in the gate due to the high-k dielectric does dramatically lower power dissipation, and is well cited. The other main component in modern CPUs is leakage, which again is helped by the high-k dielectric. So you should expect to be able to hit a bit higher voltage before hitting a thermal envelope limitation. However, the actual voltage a given chip can tolerate is going to depend on the CPU and what corner of the process it came from. In all, there's no general guideline for what is "safe". Of course, anything over the recommended isn't "safe", but the only way you'll find out, unfortunately, is trial and error.
  • eilersr - Wednesday, December 19, 2007 - link

    Doh! Just noticed my own mistake:
    high-k dielectric does not reduce capacitance! Quite the contrary, a high-k dielectric will have higher capacitance if the thickness is kept constant. Don't know what I was thinking.

    Regardless, the capacitance of the gate stack is a factor, as the article mentioned. I don't know how the cap of Intel's 45nm gate compares with that of their 65nm gate, but I would venture it is lower:

    1. The area of the FET's is smaller, so less W*L parallel plate cap.
    2. The thickness of the dielectric was increased. Usually this decreases cap, but the addition of high-k counteracts that. Hard to say what balance was actually achieved.

    This is just a guess; only the process engineers know for sure :)
  • kjboughton - Wednesday, December 19, 2007 - link

    Asking how much voltage can be safely applied to a (45nm) CPU is a lot like asking which story of a building you can jump from without the risk of breaking both legs on the landing. There's inherent risk in exceeding the manufacturer's specification at all, and if you asked Intel what they thought, I know exactly what they would say -- 1.3625V (or whatever the maximum rated VID value is). The fact of the matter is that choices like these can only be made by you. Personally, I feel exceeding about 1.4V with a 45nm quad-core CPU is a lot like beating your head against a wall, especially if your main concern is stability. My recommendation is that you stay below this value, assuming you have adequate cooling and can keep your core temperatures in check.
  • renard01 - Wednesday, December 19, 2007 - link

    I just wanted to tell you that I am impressed by your article! Deep and practical at the same time.

    Go on like this.

    This is an impressive CPU!!

    regards,
    Alexander
  • defter - Wednesday, December 19, 2007 - link

    People, stop posting silly comments like "Intel's TDP is below real power consumption, so it isn't comparable to AMD's TDP."

    Here we have a 130W TDP CPU consuming 54W under load.
