Final Words

I think we confirmed what we pretty much knew all along: Sandy Bridge's improved memory controller has all but eliminated the need for extreme memory bandwidth, at least for this architecture. It's only when you drop to DDR3-1333 that you see a minor performance penalty. The sweet spot appears to be DDR3-1600, which delivers a small performance gain over DDR3-1333 for only a slight increase in cost. The additional gains from stepping up to DDR3-1866 or DDR3-2133 are far less pronounced.
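To put those speed grades in perspective, the theoretical peak bandwidth of each kit follows directly from the transfer rate. Here's a quick sketch of the standard dual-channel DDR3 math (these are JEDEC ceilings, not our measured results):

```python
# Theoretical peak bandwidth for dual-channel DDR3:
# transfers/s * 8 bytes per transfer per channel * 2 channels.
def ddr3_peak_bandwidth_gbs(transfer_rate_mts, channels=2, bus_bytes=8):
    """Return theoretical peak bandwidth in GB/s (decimal)."""
    return transfer_rate_mts * 1e6 * bus_bytes * channels / 1e9

for rate in (1333, 1600, 1866, 2133):
    print(f"DDR3-{rate}: {ddr3_peak_bandwidth_gbs(rate):.1f} GB/s")
# prints: 21.3, 25.6, 29.9, and 34.1 GB/s respectively
```

As the article shows, the jump from 25.6GB/s to 34.1GB/s on paper simply isn't reflected in real-world results on this platform.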

As a corollary, we've seen that some applications react to higher memory speeds more than others. The compression and video encoding tests benefited the most from the increased memory bandwidth, while the overall synthetic benchmark and 3D rendering test did not. If your primary concern is gaming, you'll want to invest in more GPU power instead of faster system memory; likewise, a faster CPU will be far more useful than extra memory performance for most applications. Outside of chasing ORB chart placement, memory is one of the components least likely to play a significant role in performance.

We also found that memory bandwidth does scale with CPU clock speed; however, it still doesn't translate into any meaningful real-world performance gain. The sweet spot still appears to be DDR3-1600. All of the extra performance gained by overclocking almost certainly comes from the CPU overclock itself and not from the extra memory bandwidth.

Finally, although the effects of low-latency memory can be seen in our bandwidth tests, they don't show any real-world advantage over their higher latency (ahem, cheaper) counterparts. None of the real-world tests performed showed any reason to prefer low latency over raw speed.
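This makes more sense once CAS latency is converted from clock cycles into nanoseconds: a higher CL at a higher transfer rate can mean the same, or even less, absolute wait time. A quick sketch of the standard arithmetic (the CL/speed pairings here are illustrative):

```python
# Absolute CAS latency in nanoseconds. The DDR3 memory clock runs at half
# the transfer rate, so one clock cycle takes 2000 / rate_in_MT/s ns.
def cas_latency_ns(cl, transfer_rate_mts):
    return cl * 2000.0 / transfer_rate_mts

for cl, rate in ((9, 1333), (8, 1600), (9, 1600), (9, 2133)):
    print(f"CL{cl} DDR3-{rate}: {cas_latency_ns(cl, rate):.2f} ns")
# prints: 13.50 ns, 10.00 ns, 11.25 ns, 8.44 ns
```

In other words, "slow" CL9 DDR3-2133 still has lower absolute latency than "fast" CL8 DDR3-1600 — which is why raw speed tends to matter more than the CL rating on the box.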

Even though there's only a $34 price difference between the fastest and slowest memory tested today, I still don't believe there's any value in the more expensive memory kits on the Sandy Bridge platform. Once you have enough bandwidth (DDR3-1600 at a small $9-$10 price premium), there's just not enough of a performance increase beyond that to justify the additional cost, even when it's only $34 between 4GB kits. With the 8GB kits, the price difference for CL9 DDR3-1600 is a mere $8, but it jumps to $92 to move to DDR3-2133. We simply can't justify such a price difference based on our testing.

Of course, testing with Sandy Bridge doesn't necessarily say anything about other platforms. It's possible that AMD's Llano and Bulldozer platforms will benefit more from higher bandwidth and/or lower latency memory, but we'll save that article for another day. We've also shown that integrated graphics solutions can benefit from memory scaling, particularly higher-performance IGPs like Llano's. Ultimately, it's up to you to choose what's best for your particular situation, and we hope this article helps you make better-informed decisions.

76 Comments

  • tomx78 - Tuesday, July 26, 2011 - link

    The article is called "choosing the best DDR3," so I agree they should have tested T1. Without it the whole article is useless; it still doesn't answer the question of which DDR3 is best. If DDR3-2133 can't do T1 but DDR3-1600 can, which one is faster?
  • Impulses - Monday, July 25, 2011 - link

    If you're pinching pennies and trying to build a system on a budget, even the $10 premium for anything but a basic 1333 kit doesn't seem worthwhile... I actually chose my last 2x4GB kit based on price and looks more than anything, heh, the old G.skill Sniper heatspreaders (the blue-ish version) matched my MSI mobo well and looked like they'd be the least likely to interfere with any heatsink. Some of the heatspreaders on pricier kits are crazy big, not to mention kinda gaudy.
  • Finally - Tuesday, July 26, 2011 - link

    Let me repeat: You buy your RAM based on... aesthetics?
    No further questions, thanks.
  • Finraziel - Wednesday, July 27, 2011 - link

    Well, as this test showed, there is little performance gain to be had, so what else is there to base your choice on? Especially for people with windowed cases it can be important. And if you buy really fast memory that won't fit under your heatsink, well, let's just say you're in no position to insinuate someone else is dumb? :)
    I used to have the Corsair modules with lights on top showing activity, and while I mainly bought them for looks, they were actually useful at times to quickly check whether my system had totally crashed or was still doing stuff (you can sort of see the difference in the patterns of the lights).
  • knedle - Monday, July 25, 2011 - link

    I would love to see graphs showing how much power different RAM modules consume. A few weeks ago I built a low-power computer with Sandy Bridge, and I'm still looking into how to get as much from it as possible with as low a power consumption as possible.
  • Rajinder Gill - Monday, July 25, 2011 - link

    Power savings for DRAM are generally small. As you lower the current draw (either by reducing voltage or running looser timings), you are battling in part against the efficiency curve of the VRM.

    On some boards the difference in power consumption between DDR3-1333 and DDR3-1866 (given voltage and timing changes) can be as little as 1 Watt.

    -Raja
  • Vhozard - Monday, July 25, 2011 - link

    "Multiple passes are generally used to ensure the highest quality video output, and the first pass tends to be more I/O bound while the second pass is typically constrained by CPU performance."

    This is really not true; multiple passes are used by x264 to come as close as possible to a given file size. A one-pass CRF-based encode produces equally high-quality video output, given the same conditions.

    Maybe you should use one-pass encodes, as they are more commonly used when file size specification is not very important.
  • JarredWalton - Monday, July 25, 2011 - link

    Multiple passes produce higher quality by using a higher bitrate where it's needed and a lower bitrate where it's not. In a single-pass, constant bitrate encode, scenes with a lot of movement will show more compression artifacts. There's no need to do multiple passes for size considerations: do a constant bitrate of 2.0Mbps (including audio) for 120 minutes and you will end up with a file size very close to 1800MB (or if you prefer, 1716.61MiB). Variable bitrate with a single pass doesn't give an accurate file size.
  • Vhozard - Monday, July 25, 2011 - link

    Very few people still use constant bitrate encodes.
    x264 works with a crf (constant rate factor), which gives constant *quality*; not constant bitrate!

    There is very much a need to do multiple passes for size considerations as a constant bitrate will not give them optimal quality at all.

    A one-pass CRF encode at, let's say, CRF 15 that happens to reach a file size of 1 GB will have almost exactly the same quality as a two-pass encode targeted at 1 GB.

    I suggest you read the x264 wiki...
  • JarredWalton - Monday, July 25, 2011 - link

    Sorry -- missed that you said CRF and not CBR.
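As a side note on the file-size arithmetic debated above: a constant-bitrate stream's size really is just bitrate times duration, which a quick sketch confirms (the 2.0 Mbps / 120 minute figures are the ones from the thread):

```python
# File size of a constant-bitrate stream: total bitrate * duration.
def cbr_file_size_mb(bitrate_mbps, minutes):
    """Size in decimal megabytes (10^6 bytes)."""
    bits = bitrate_mbps * 1e6 * minutes * 60
    return bits / 8 / 1e6

size_mb = cbr_file_size_mb(2.0, 120)
print(f"{size_mb:.0f} MB = {size_mb * 1e6 / 2**20:.2f} MiB")
# prints: 1800 MB = 1716.61 MiB
```

A CRF encode, by contrast, fixes quality rather than bitrate, so its final size can't be predicted this way — which is the crux of the disagreement.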
