112 Comments

  • crackedwiseman - Thursday, October 18, 2012 - link

    OK, just one question: why in the hell are the IGP memory tests done on an i7? The results would be much more meaningful if the tests were on an AMD A10 or similar - it has a beefier IGP, and thus would be more bandwidth-bound. Reply
  • creed3020 - Thursday, October 18, 2012 - link

    100% Agree. Doing these tests against a Trinity APU would have been much more interesting from an iGPU point of view. It is well known that AMD APUs benefit from increased memory bandwidth; AT has yet to test Trinity for this, yet they did it for Llano. Reply
  • silverblue - Thursday, October 18, 2012 - link

    It makes sense to test; HD 4000 is far superior to HD 3000 and it is worth knowing if that extra power is bandwidth limited. Generally, it is a little, though nowhere near as much as AMD's equivalents are. Reply
  • JonnyDough - Monday, October 22, 2012 - link

    Not to mention, it's surprising to me that AMD wasn't mentioned as a company trying to match memory to motherboard. AMD started making their own memory modules, an interesting fact I think. Reply
  • SeanJ76 - Saturday, June 21, 2014 - link

    AMD is a decade behind Intel, in processor technology and instructions, it really doesn't matter what AMD attempts to do.... Reply
  • SeanJ76 - Saturday, June 21, 2014 - link

    No one gives a shit about APU you moron......these are desktop tests! Reply
  • hp79 - Thursday, October 18, 2012 - link

    Maybe because more people use intel? I agree that it would have stood out more if it was AMD's IGP, but doing the test on intel IGP is also okay and gives an idea of what to expect. I think the article is fine. Besides, do people really play games with IGP? If I am playing demanding games, I want the frame rates to be minimum 60 fps. That's why I use a dedicated graphics card. This might change when AMD's IGP gets even more powerful, but for now I think it's still not there yet. Reply
  • zcat - Thursday, October 18, 2012 - link

    > Besides, do people really play games with IGP?

    Some of us do. My miniitx i7 is primarily for work & everyday use, but its HD4000 is fast enough for Portal 2 and Diablo 3 to be very playable @ 1920x1080p with AA off.

    However, I know the limits of IGP, and intend on upgrading to an overclocked GeForce GTX 650 Ti very soon in order to play some more demanding games this winter.
    Reply
  • sking.tech - Monday, October 22, 2012 - link

    you may want to reconsider your choice of video "upgrade"
    nvidia's 2nd number is more significant than the first as far as overall gaming graphics power goes... You'd do better going for a 560 TI than a 650 for approx the same cost
    Reply
  • Dirk Broer - Wednesday, July 24, 2013 - link

    You should first look at what chip actually powers the card - and its capabilities - before fixating on the last two digits. Besides that, a GTX 560 Ti is more expensive than a GTX 650. Reply
  • ssj4Gogeta - Thursday, October 18, 2012 - link

    "Besides, do people really play games with IGP?"

    They're definitely more likely to play games on a more powerful IGP like AMD's. I thought the whole point of AMD's Fusion lineup was that you could do light gaming on the IGP itself.
    Reply
  • SeanJ76 - Saturday, June 21, 2014 - link

    Exactly! no one buys shitty AMD products anymore...... Reply
  • Boogaloo - Thursday, October 18, 2012 - link

    There are already plenty of benchmarks out there for memory scaling on AMD's APUs. This is the first time I've seen an in-depth look at how memory speed affects Intel's IGP performance. Reply
  • ssj4Gogeta - Thursday, October 18, 2012 - link

    That's what I was thinking as well.
    I'm hoping for another article using Trinity. :)
    Reply
  • Calin - Friday, October 19, 2012 - link

    I'm not sure A10 supports DDR3-2400 (DDR3-1866 was the fastest memory supported) Reply
  • Medallish - Friday, October 19, 2012 - link

    The A10 has AMP profiles (like XMP on Intel) up to 2133MHz; however, there's always overclocking. I'm pretty sure Ivy Bridge doesn't support 2400+MHz memory natively either. I'm looking at an FM2 board by ASRock which they claim can support 2600MHz memory. Reply
  • IanCutress - Friday, October 19, 2012 - link

    My A10-5800K sort of liked DDR3-2400, then it didn't like it. Had to go back one to 2133 for the testing. Even with bumped voltages and everything else, the CPU memory controller couldn't take it. Perhaps the sample I have is a dud, but that was my experience.

    Ian
    Reply
  • tim851 - Friday, October 19, 2012 - link

    I concur.

    Pointless review anyway. The summary should have read: High-Clocked Memory only needed if your primary usage is either competitive benchmarking or WinRAR compression.
    Reply
  • IanCutress - Saturday, October 20, 2012 - link

    Did you know that before you read the article though? This is Anandtech, and I like to think I test things thoroughly enough to make reasoned opinions and suggestions :) Having a one sentence summary wouldn't have helped anyone in the slightest.

    Ian
    Reply
  • SeanJ76 - Saturday, June 21, 2014 - link

    Nothing is better done on AMD products idiot..... Reply
  • Mitch101 - Thursday, October 18, 2012 - link

    Love this article; first time I ever commented on one. I believe you see little improvement past 1600/1866 because the Intel chip's on-die cache does a good job of keeping the CPU fed, meaning the bottleneck on an Intel chip is the CPU itself, not the memory or cache.

    Can you do this with an AMD chip also? I believe we would see a bigger improvement with their chips, because the on-die cache can't keep up with the chip and faster external memory would give bigger performance jumps for AMD chips. Well, maybe two generations ago for AMD, but let's see; your pockets are deeper than mine.

    Hope I said that right; I'm a little droopy-eyed from lack of caffeine.
    Reply
  • Jjoshua2 - Thursday, October 18, 2012 - link

    Just bought RipjawsZ from Newegg for $90 after coupon! I feel vindicated in my choice now :) Reply
  • ludikraut - Thursday, October 18, 2012 - link

    I thought the performance difference would be less than it was. Has me rethinking whether I need to update my old OCZ DDR3-1333 chips. I haven't yet, as I'm probably giving away 5-10% performance in my OC alone. I targeted efficiency, not absolute speed - at 4GHz my i7-920 D0 consumes 80W less @ idle than the default settings of my mobo - go figure.

    l8r)
    Reply
  • Beenthere - Thursday, October 18, 2012 - link

    For typical desktop use with RAM frequencies of 1333 MHz. and higher there is no tangible gains in SYSTEM performance to justify paying a premium for higher RAM frequency, increased capacity above 4 GB. or lower latencies - with APUs being the minor exception.

    In real apps, not synthetic benches, there is simply nothing of significance to be gained in system performance above 1333 MHz. as DDR3 running at 1333 MHz. is not a system bottleneck. Synthetic benches exaggerate any real gains so they are quite misleading and should be ignored.
    Reply
  • tynopik - Thursday, October 18, 2012 - link

    WinRAR is a 'real' app Reply
  • silverblue - Thursday, October 18, 2012 - link

    It's okay, he said the same thing on Xbit Labs. Reply
  • VoraciousGorak - Thursday, October 18, 2012 - link

    "For typical desktop use with RAM frequencies of 1333 MHz. and higher there is no tangible gains in SYSTEM performance to justify paying a premium for higher RAM frequency, increased capacity above 4 GB. or lower latencies - with APUs being the minor exception."

    No tangible gains above four gi-... what industries have you worked in? Because my old AdWords PPC company's software benefited from over 4GB, and that's the lightest workload I've had on a computer in a while. For home use, I just bumped my system to 16GB because I kept capping my 8GB, and I do zero video/photo work. If you just do word processing, I'll trade you a nice netbook with a VGA out for whatever you're using now.

    DDR3-1333 to 1600 is almost the same price on Newegg, and 1866 isn't much more. Think about it in percentage cost of your computer. Using current Newegg prices for 2x4GB CL9 DDR3, a $1000 computer with 8GB DDR3-1333 will cost $1002 with DDR3-1600, $1011 with DDR3-1866, and $1025 with DDR3-2133. Not exactly a crushing difference.
    Reply
  • Olaf van der Spek - Thursday, October 18, 2012 - link

    Why isn't XMP enabled by default? The BIOS should know what the CPU supports, shouldn't it? Reply
  • Gigaplex - Thursday, October 18, 2012 - link

    What this article glosses over is that G.Skill memory often recommends manually increasing the voltages when enabling XMP profiles. I have the F3-1866C10D-16GAB kit and G.Skill recommends pushing the memory controller voltage out of spec for Ivy Bridge in order to enable XMP. As a result I just run them at 1333 (they don't have 1600 timings in the SPD table and I can't be bothered experimenting to find a stable setting). Reply
  • IanCutress - Friday, October 19, 2012 - link

    I did not have to adjust the voltage once on any of these kits. If anything, what you are experiencing is more related to the motherboard manufacturer. Some manufacturers have preferred memory vendors, of which G.Skill may not be one. In that case you either have to use workarounds to make kits work, or wait for a motherboard BIOS update. If you have read any of my X79 or Z77 reviews, you will see that some boards do not like my 2400 C9 kit that I use for testing at XMP without a little voltage boost. But on the ASUS P8Z77-V Premium, all these kits worked fine at XMP, without issue.

    Ian
    Reply
  • frozentundra123456 - Thursday, October 18, 2012 - link

    While interesting from a theoretical standpoint, I would have been more interested in a comparison in laptops using HD 4000 vs the A10, to see if one is more dependent on fast memory than the other. To be blunt, I don't really care much about the IGP on a 3770K. It would have been a more interesting comparison in laptops, where the IGP might actually be used for gaming. I guess it would have been more difficult to do with changing memory around so much in a laptop, though.

    The other thing is I would have liked to see the difference in games at playable frame rates. Does it really matter if you get 5.5 or 5.9 fps? It is a slideshow anyway. My interest is if using higher speed memory could have moved a game from unplayable to playable at a particular setting or allowed moving up to higher settings in a game that was playable.
    Reply
  • mmonnin03 - Thursday, October 18, 2012 - link

    RAM by definition is Random Access which means no matter where the data is on the module the access time is the same. It doesn't matter if two bytes are on the same row or on a different bank or on a different chip on the module, the access time is the same. There is no sequential or random difference with RAM. The only difference between the different rated sticks are short/long reads, not random or sequential and any reference to random/sequential reads should be removed. Reply
  • Olaf van der Spek - Thursday, October 18, 2012 - link

    You're joking right? :p Reply
  • mmonnin03 - Thursday, October 18, 2012 - link

    Well if the next commenter below says their memory knowledge went up by 10x they probably believe RAM reads are different depending on whether they are random or sequential. Reply
  • nafhan - Thursday, October 18, 2012 - link

    "Random access" means that data can be accessed randomly as opposed to just sequentially. That's it. The term is a relic of an era where sequential storage was the norm.

    Hard drives and CD's are both random access devices, and they are both much faster on sequential reads. An example of sequential storage would be a tape backup drive.
    Reply
  • mmonnin03 - Thursday, October 18, 2012 - link

    RAM is direct access, no sequential or randomness about it. Access time is the same anywhere on the module.
    XX reads the same as

    X
    X

    Where X is a piece of data and they are laid out in columns/rows.
    Both are separate commands and incur the same latencies.
    Reply
  • extide - Thursday, October 18, 2012 - link

    No, you are wrong. Period. nafhan's post is correct. Reply
  • menting - Thursday, October 18, 2012 - link

    no, mmonnin03 is more correct.
    DRAM has the same latency (relatively speaking.. it's faster by a little for the bits closer to the address decoder) for anywhere in the memory, as defined by the tAA spec for reads. For writes it's not as easy to determine since it's internal, but can be guessed from the tRC spec.

    The only time that DRAM reads can be faster for consecutive reads, and considered "sequential" is if you open a row, and continue to read all the columns in that row before precharging, because the command would be Activate, Read, Read, Read .... Read, Precharge, whereas a "random access" will most likely be Activate, Read, Precharge most of the time.

    The article is misleading in using "sequential reads". There is really no "sequential", because depending on whether you are sequential in row, column, or bank, you get totally different results.
    Reply
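The command-pattern difference menting describes can be put into rough numbers. The sketch below uses assumed DDR3-1600-style timings (tRCD = 9, CL = 9, tRP = 9, 4-cycle burst); the values are illustrative only, not taken from the article.

```python
# Illustrative cycle counts for the two command patterns described above,
# using assumed DDR3 timings (tRCD=9, CL=9, tRP=9, 4-cycle data burst).
# The numbers are hypothetical, chosen only to show the shape of the math.

TRCD, CL, TRP, BURST = 9, 9, 9, 4

def open_row_streaming(n_reads):
    """Activate once, read n columns from the open row, then precharge."""
    return TRCD + CL + n_reads * BURST + TRP

def row_per_read(n_reads):
    """Activate, Read, Precharge for every single access (worst-case 'random')."""
    return n_reads * (TRCD + CL + BURST + TRP)

for n in (1, 8, 64):
    print(n, open_row_streaming(n), row_per_read(n))
```

For 64 accesses the per-read pattern costs 64 * 31 = 1984 cycles versus 9 + 9 + 256 + 9 = 283 for the streaming pattern, roughly a 7x gap, which is why consecutive column reads within an open row are so much cheaper.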
  • jwilliams4200 - Thursday, October 18, 2012 - link

    I say mmonnin03 is precisely wrong when he claims that " no matter where the data is on the module the access time is the same".

    The read latency can vary by about a factor of 3 depending on whether the read comes from an already open row or from a different row than the one already open.

    That makes a big difference in total read time, especially if you are reading all the bytes in a page.
    Reply
  • menting - Friday, October 19, 2012 - link

    no. he is correct.
    if every read has the conditions set up equally (ie the parameters are the same, only the address is not), then the access time is the same.

    so if address A is from a row that is already open, the time to read that address is the same as address B, if B from a row that is already open

    you cannot have a valid comparison if you don't keep the conditions the same between 2 addresses. It's almost like saying the latency is different between 2 reads because they were measured at different PVT corners.
    Reply
  • jwilliams4200 - Friday, October 19, 2012 - link

    You are also incorrect, as well as highly misleading to anyone who cares about practical matters regarding DRAM latencies.

    Reasonable people are interested in, for example, the fact that reading all the bytes on a DRAM page takes significantly less time than reading the same number of bytes from random locations distributed throughout the DRAM module.

    Reasonable people can easily understand someone calling that difference sequential and random read speeds.

    Your argument is equivalent to saying that no, you did not shoot the guy, the gun shot him, and you are innocent. No reasonable person cares about such specious reasoning.
    Reply
  • hsir - Friday, October 26, 2012 - link

    jwilliams4200 is absolutely right.

    People who care about practical memory performance worry about the inherent non-uniformity in DRAM access latencies and the factors that prevent efficient DRAM bandwidth utilization. In other words, just row-cycle time (tRC) and the pin bandwidth numbers are not even remotely sufficient to speculate how your DRAM system will perform.

    DRAM access latencies are also significantly impacted by the memory controller's scheduling policy - i.e. how it prioritizes one DRAM request over another. Row-hit maximization policies, write-draining parameters and access type (if this is a cpu/gpu/dma request) will all affect latencies and DRAM bandwidth utilization. So just sweeping everything under the carpet by saying that every access to DRAM takes the same amount of time is, well, just not right.
    Reply
  • nafhan - Friday, October 19, 2012 - link

    I was specifically responding to your incorrect definition of "random access". Randomness doesn't guarantee timing; it just means you can get to it out of order. Reply
  • jwilliams4200 - Friday, October 19, 2012 - link

    And yet, by any practical definition, you are incorrect and the author is correct.

    For example, if you read (from RAM) 1GiB of data in sequential order of memory addresses, it will be significantly faster than if you read 1GiB of data, one byte at a time, from randomly selected memory addresses. The latter will usually take two to four times as long (or worse).

    It is not unreasonable to refer to that as the difference between sequential and random reads.

    Your argument reminds me of the little boy who, chastised by his mother for pulling the cat's tail, whined, "I didn't pull the cat's tail, I just held it and the cat pulled."
    Reply
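The sequential-versus-random difference jwilliams4200 describes can be observed, crudely, by summing the same buffer in address order and then in shuffled order. The buffer size and loop below are arbitrary choices for a quick demonstration; in pure Python the interpreter overhead dilutes the cache effect, so the measured ratio will be far smaller than the 2-4x cited above (a C or NumPy version shows it more clearly).

```python
# Quick demonstration: sum a buffer in address order vs. shuffled order.
# Timings are machine-dependent; this is a sketch, not a benchmark.
import random
import time

N = 1 << 23                      # 8 MiB, large enough to spill out of most caches
buf = bytearray(N)

seq_idx = list(range(N))
rnd_idx = seq_idx[:]
random.shuffle(rnd_idx)          # same indices, random visiting order

def timed_sum(indices):
    t0 = time.perf_counter()
    total = 0
    for i in indices:
        total += buf[i]
    return time.perf_counter() - t0

t_seq = timed_sum(seq_idx)
t_rnd = timed_sum(rnd_idx)
print(f"sequential: {t_seq:.3f}s  random: {t_rnd:.3f}s  ratio: {t_rnd / t_seq:.2f}")
```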
  • jwilliams4200 - Thursday, October 18, 2012 - link

    Depending on whether there is a page-hit (row needed already open), page-empty (row needed not yet open), or page-miss (row needed is not the row already open), the time to read a word can vary by a factor of 3 times (i.e., 1x latency for a page-hit, 2x latency for a page-empty, and 3x latency for a page-miss).

    What the author refers to as a "sequential read" probably refers to reading from an already open page (page-hit).

    While his terminology may be ambiguous (and his computation for the "sequential read" is incorrect, it should be 4 clocks), he is nevertheless talking about a meaningful concept related to variation on latency in DRAM for different types of reads.

    See here for more detail:

    http://www.anandtech.com/show/3851/everything-you-...
    Reply
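The 1x/2x/3x latency classes above can be turned into a back-of-the-envelope effective latency, given some assumed access mix. The base latency and mix fractions below are hypothetical, chosen only to illustrate the spread:

```python
# Sketch of the page-hit / page-empty / page-miss latency classes described
# above. Base latency assumes DDR3-1600 CL9: 9 cycles at an 800 MHz clock.
# The access-mix fractions are made up for illustration.

BASE_NS = 9 / 0.8                # 11.25 ns CAS latency at DDR3-1600 CL9

LATENCY = {
    "page_hit": 1 * BASE_NS,     # row already open
    "page_empty": 2 * BASE_NS,   # row needed not yet open
    "page_miss": 3 * BASE_NS,    # wrong row open: precharge, activate, read
}

def effective_latency(mix):
    """Weighted average latency for an access mix (fractions sum to 1)."""
    return sum(LATENCY[k] * f for k, f in mix.items())

streaming = {"page_hit": 0.95, "page_empty": 0.03, "page_miss": 0.02}
random_ish = {"page_hit": 0.10, "page_empty": 0.30, "page_miss": 0.60}

print(f"streaming-like mix: {effective_latency(streaming):.1f} ns")
print(f"random-like mix:    {effective_latency(random_ish):.1f} ns")
```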
  • Shadow_k - Thursday, October 18, 2012 - link

    My knowledge of RAM has increased 10-fold. Very nice article, well done. Reply
  • losttsol - Thursday, October 18, 2012 - link

    2133MHz "Recommended for Deeper Pockets"???

    Not really. DDR3 is so cheap now that high end RAM is affordable for all. I would have said you were crazy a few years ago if you told me soon I could buy 16GB of RAM for less than $150.
    Reply
  • IanCutress - Thursday, October 18, 2012 - link

    Either pay $95 for 1866 C9 or $130 for 2133 C9 - minor differences, but $35 saving. This is strictly talking about the kits used today, there could be other price differences. But I stand by my recommendation - for the vast majority of cases 1866 C9 will be fine, and there is a minor performance gain in some scenarios with 2133 C9, but at a $35 difference it is hard to justify unless you have some spare budget. Most likely that budget could be put into a bigger SSD or GPU.

    Ian
    Reply
  • just4U - Friday, October 19, 2012 - link

    Something has to be said about the TridentX brand I believe.. since it is getting some pretty killer feedback. It's simply the best ram out there being able to do all that any other ram can and that little bit extra. I don't see the speed increase as a selling point but the lower timings at conventional speeds that users are reporting is interesting.. I haven't tried it though.. just going on what I've read. Shame about the size of the heatsinks though.. makes it problematic in some builds. Reply
  • Peanutsrevenge - Friday, October 19, 2012 - link

    You clearly live in some protected bubble where everyone has well paid jobs and isn't on a shoestring budget.

    I would so LMAO when you get mugged by someone struggling to feed themselves because you're all flash with your cash.
    Reply
  • just4U - Saturday, October 20, 2012 - link

    Peanut, we are not talking 300-500 bucks here; this is a 20-30 dollar premium, which is nothing compared to what RAM used to cost and how much more premium RAM was as well.

    If you're on a tight budget, get 8 gigs of regular RAM, which is twice the amount you likely need anyway.
    Reply
  • Tech-Curious - Monday, November 05, 2012 - link

    Thing is, these tests are for integrated graphics, unless I'm misreading something (AFAICT, the discrete card was only used for PhysX support; if I misread there then I apologize).

    Off the top of my head, there are basically three scenarios in which you're likely to want an IGP:

    1) You're building an HTPC, in which case you prioritize (lack of) noise and (lack of) heat over graphics' power. If all you want to run are movies, then the IGP should be adequate regardless of the speed of your memory -- and if you want to play games, no amount of memory is going to turn an Intel IGP into an adequate performer on your average TV set these days. (Better to grab an AMD APU or just give up the ghost and grab a moderate-performance GPU.)

    2) You're looking to run a laptop. But the memory reviewed in this article doesn't apply to laptops anyway.

    3) You're on a tight budget.

    So at best, we're talking about a fraction of a sliver of a tiny niche in the market, when we discuss the people who might be interested in wringing every last ounce of performance out of an IGP by installing high-priced desktop memory. Sure, the difference in absolute cost between the cheapest and the most expensive RAM here isn't going to make or break most people -- but people generally don't like to incur unnecessary costs either.

    And people who are on a budget? They can save $80, just based on the numbers in the article, without making any significant performance sacrifice. That's real money, computer-component-wise.
    Reply
  • tynopik - Thursday, October 18, 2012 - link

    "I remember buying my first memory kit ever. It was a 4GB kit"

    makes you feel old

    my first was 8MB
    Reply
  • DanNeely - Thursday, October 18, 2012 - link

    My first computer only had 16k. Reply
  • Mitch101 - Thursday, October 18, 2012 - link

    VIC-20
    3583 bytes free
    Reply
  • jamyryals - Thursday, October 18, 2012 - link

    wow :) Reply
  • just4U - Saturday, October 20, 2012 - link

    The first computer I bought was a Tandy 1000. I got them to put in 4 megs of RAM... at 50 bucks per meg. Reply
  • GotThumbs - Thursday, October 18, 2012 - link

    Same here.

    I had purchased a used AT Intel 486DX 33Mhz powered system and upgraded it to 16mb around 1989. Overclocking it was done using jumpers on the motherboard. Heck, in HS I was a student assistant my senior year and recorded everyone's grades on a cassette tape drive using a Tandy (TS-80 I believe). It blows my mind thinking about how things have changed. There's more power/ram in a Raspberry PI than my first computer.

    Best wishes for computing in the next ~30 years.
    Reply
  • andrewaggb - Thursday, October 18, 2012 - link

    Agreed, the first computer I owned personally was a 486 SLC 33 (Cyrix...) and I had a couple of 1MB memory sticks; can't remember if those were called SIMMs or something else. We had an Apple II+, TRS-80, Commodore 64, and IBM PCjr in the early-to-mid '80s, but those were my dad's :-), and some 286 that I can't remember the brand of.

    Just thinking about the E6400 as a first PC amuses me :-), that's still usable, and it's actually about when most of the computer fun started to die, in my books. My current PCs are running a Phenom II 965, i5 2500K, i7 620M, i5 750, and i7 720QM, and I just have little motivation to upgrade anything ever.

    Haswell is the first chip in a long time that I'm excited about. Everything else has been meh. And AMD... I had an AMD 486-120, K6-200, K6-2 300, Athlon XP 1800, 2500, Athlon 64 3200, Athlon 64 X2 4800, 5600, Phenom II 945, Phenom X3, and my current 965 and a C-50 netbook. Man, hard to believe all the computers I've had :-) Anyways, AMD has nothing I want anymore, except cheap multicore CPUs for running x264 all day.
    Reply
  • IanCutress - Thursday, October 18, 2012 - link

    E6400 wasn't the first PC... just the first processor I actually bought memory for. The rest were pre-built or hand-me-downs. :) I actually just took the same motherboard/chip out of my brother's computer (he has had it for a few years, with that memory) and bumped him up to Sandy Bridge. I'm still 27, and the E6400 system was new for me when I was around 21 or so. Since then I've got a Masters and a PhD - time flies when you're having fun!

    Ian
    Reply
  • andrewaggb - Friday, October 19, 2012 - link

    Fair enough :-) Reply
  • HisDivineOrder - Thursday, October 18, 2012 - link

    You "remember" getting your first memory kit and it was for a E6400. You act like that's just this classic thing.

    I remember getting a memory kit for my Celeron 300a. I remember getting a memory kit for my AMD K6 with 3dNow!.

    Wow, I'm old.
    Reply
  • silverblue - Thursday, October 18, 2012 - link

    I remember getting a 64MB PC100 DIMM in 2000... it was pretty much £1 a MB. Made a difference, so it was *gulp* worth it. Reply
  • StormyParis - Thursday, October 18, 2012 - link

    Very interesting read. Thank you. Reply
  • rscoot - Thursday, October 18, 2012 - link

    I remember paying upwards of $400 for a pair of matched 2x512MB Kingston HyperX modules with BH-5 chips. Those were the days! 300MHz at 2-2-2-5 1T in dual channel if you could put enough volts through them. Nowadays I don't think memory matters nearly as much as it did back then. Reply
  • superflex - Thursday, October 18, 2012 - link

    Your first kit was an E6400?
    Let me know when you get hair down there.
    My first computer was an Apple IIe in 1984, and my first build was an Opteron 170 with 400 MHz 2,2,2,5 DDR.
    Reply
  • Magnus101 - Thursday, October 18, 2012 - link

    Once again this only confirms that memory speed makes no real-world difference.
    I mean, who in their right mind uses the integrated GPU on an expensive i7 system to play Metro 2033 at single-digit framerates?
    The only thing standing out is the WinRAR compression, but how many use WinRAR for compression?
    Yes, decompressing files is very common, but I only remember using it 2-3 times in my whole life to compress my own files.
    So that isn't important to most users, except for the ones who actually use WinRAR to compress files.
    And I don't get why the x264 encoding seemed like a big deal. The differences were very small.

    It's been the same story all the way back to the late '90s, where tests between SDR memory at 100 and 133 MHz, or at different timings, showed no differences in real-life applications, in contrast to synthetics.

    But sure, if you are building a new system and choose between, let's say, 1333 or 1600, then a $5 difference is a no-brainer.
    Then again, it would make no noticeable difference anyway.
    Reply
  • silverblue - Thursday, October 18, 2012 - link

    Here's one - will it affect QuickSync in any way? Reply
  • twoodpecker - Monday, October 22, 2012 - link

    I'd be interested in QuickSync results too. In my experience, not proven, it makes a big difference. I adjusted my memory speeds from 1600 to 2000 and noticed at some point that encoding is 25x instead of 15x. This might be due to different factors though, like software optimizations, because I didn't benchmark after adjusting mem speeds. Reply
  • Geofram - Thursday, October 18, 2012 - link

    I don't believe he's implying that single-digit frame rates in a game are going to be usable for anyone in real life. I believe the point of the test was simply: "Let's take a system that is generally fast and put it in a situation where the IGP is being stressed. This will be the best-case scenario for faster RAM helping it. Let's see if it does."

    To me the idea was not showing everyone everyday situations where faster RAM will help them; instead it was to see where those situations might lie, by setting up a stressful situation and seeing the results. Most of the results were extremely small differences.

    I agree it's not a noticeable difference in most cases. It doesn't make me feel like I should get rid of PC1333 RAM. I don't fault the logic of the tests used, however. It was nice to see someone actually comparing the slight differences caused by RAM speed.
    Reply
  • vegemeister - Friday, October 19, 2012 - link

    Most of the (still tiny) difference that appeared in the x264 benchmark was in the first pass. Two pass encodes really only make sense when you're trying to fit a single video onto a single storage device. That's an extremely uncommon use case these days, for everyone but the people mastering blu-rays. Reply
  • jonyah - Thursday, October 18, 2012 - link

    "I remember buying my first memory kit ever. It was a 4GB kit of OCZ DDR2 for my brand new E6400 system, and at the time I paid ~$240, sometime back in 2005."

    I remember buying my first kit too. It was an upgrade from the 2MB I had to 6MB (yes, MB, not GB), and that 6MB cost me $200 as well; this was back in 1995. Ten years and we had a 1000x improvement in size, and who knows how much in speed.
    Reply
  • rchris - Thursday, October 18, 2012 - link

    Well, dang it! All these "I remember..." comments have really made me feel old. In my case it was paying $300 for a used 1MB board for a Zenith Z100. Can't even remember the year--somewhere in the mid- to late-1980s. Reply
  • IanCutress - Thursday, October 18, 2012 - link

    I should point out that the kit I got was my first purchased kit on its own... Many computers before then were built by my family or came pre-built.

    On the topic of A10 comparisons, I had thought of doing some in the future if enough interest was there. As the majority of CPU sales is in Intel's favor, we went with Intel first. (Also most of the testing for this review occurred before I had an A10 sample at hand.)

    Ian
    Reply
  • Termie - Thursday, October 18, 2012 - link

    Great article, Ian. Thanks for taking on this challenge and enlightening us all.

    Don't worry about all the old-timers bugging you about your first build being in this century. It's not like they could have written this article!
    Reply
  • arthur449 - Thursday, October 18, 2012 - link

    I'd love to see an AMD CPU test run with the same memory kits and the same test suite to contrast the differences in performance gains offered by faster memory between the two major CPU platforms. Reply
  • lowenz - Thursday, October 18, 2012 - link

    Make an extension to this brilliant article with new Trinity A8 / A10 and you'll be an instant geek hero. Reply
  • frozentundra123456 - Thursday, October 18, 2012 - link

    Could you do a similar test in laptops, A10 vs HD4000? Like I said in my other post, this is where I see more possibility of igps actually being used for gaming. I also think this is where HD4000 is most competitive to AMD, in a power limited scenario. Reply
  • DanNeely - Thursday, October 18, 2012 - link

    Have laptop BIOSes opened up enough in the last few years to let you specify memory timings? The advice I've always seen was to buy the cheapest RAM at your laptop's designated clockspeed, because you won't be able to set the faster timings even if you wanted to. Reply
  • haplo602 - Friday, October 19, 2012 - link

    You have ONE set for each frequency, WHY the hell are you using the stupid model numbers in the graphs ????

    WHO CAME UP WITH THAT STUPID IDEA ????

    otherwise the review is solid.
    Reply
  • Calin - Friday, October 19, 2012 - link

    I remember the times when I had to select the speed of the processor (and even that of the processor's bus) with jumpers or DIP switches... It wasn't even so long ago, I'm sure anandtech.com has articles with mainboards with DIP switches or jumpers (jumpers were soooo Pentium :p but DIP switches were used in some K6 mainboards IIRC ) Reply
  • Ecliptic - Friday, October 19, 2012 - link

    Great article comparing different speed ram at similar timings but I'd be interested in seeing results at different timings. For example, I have some ddr3-1866 ram with these XMP timings:
    1333 @ 6-6-6-18
    1600 @ 8-8-8-24
    1866 @ 9-9-9-27
    The question I have is whether it is better to run it at the full speed, or to drop to a slower speed and use the tighter timings?
    Reply
  • APassingMe - Friday, October 19, 2012 - link

    + 1

    + 2, if I can get away with it. I've always wondered the same thing. I have seen some minor formulas designed to compare... something like frequency divided by timing, in order to get a comparable number. But that is pure theory for the most part; I would like to see how the differences affect different systems and loads in the real world.
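    For what it's worth, the rough formula works out to something like this (pure back-of-the-envelope, ignoring sub-timings and bandwidth): first-word latency in nanoseconds is CAS cycles divided by the actual memory clock, i.e. 2000 × CL / (MT/s). Plugging in the three XMP profiles Ecliptic listed above:

    ```python
    # Back-of-the-envelope first-word latency for Ecliptic's three XMP profiles.
    # DDR transfers twice per clock, so the real clock (MHz) is MT/s / 2,
    # giving latency_ns = CL / (MT/s / 2) * 1000 = 2000 * CL / MT/s.
    def first_word_latency_ns(mt_per_s, cl):
        return 2000.0 * cl / mt_per_s

    profiles = [(1333, 6), (1600, 8), (1866, 9)]
    for rate, cl in profiles:
        print(f"DDR3-{rate} CL{cl}: {first_word_latency_ns(rate, cl):.2f} ns")
    ```

    By this crude measure the 1333 CL6 profile actually has the lowest absolute latency (~9.0 ns vs ~9.65 ns at 1866 CL9), but the 1866 strap still wins on raw bandwidth, which is why real-world testing like this article's is the only way to settle it.
    
    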
    Reply
  • Spunjji - Friday, October 19, 2012 - link

    But in all seriousness, I would find that to be much more useful - it's more likely to actually be used for IGP gaming.

    If you could go as far as to show the possible practical benefits of the higher-speed RAM (e.g. new settings /resolutions that become playable) that would be spiffing.
    Reply
  • vegemeister - Friday, October 19, 2012 - link

    Stop using 2-pass for benchmarks. Nobody is trying to fit DVD rips onto CD-Rs anymore. Exact file size *does not matter*. Using the same CRF for every file in a set (say, a season of a television series) produces a much better result and takes less time (you pretty much skip the first pass). Reply
  • IanCutress - Friday, October 19, 2012 - link

    The 2-pass is a feature of Greysky's x264 benchmark. Please feel free to email him if you would like him to stop doing 2-pass. Or, just look at the 1st pass results if the 2nd pass bothers you.

    Ian
    Reply
  • rigel84 - Friday, October 19, 2012 - link

    Hi, I don't know if I somehow skipped it in the article, but if I buy a 3570K and some 1866MHz memory, wouldn't I have to overclock the CPU in order for it to run at that speed? I'm pretty sure I had to overclock my RAM on my P4 2.4GHz in order to use the extra MHz. Does my memory fail me, or have things changed? Reply
  • IanCutress - Friday, October 19, 2012 - link

    No, you do not have to overclock the CPU. This has not been the case since the early days :D. Modern systems have an option in the BIOS to adjust the memory strap (1333/1600/1866 et al.) as required. On Intel systems and these memory kits, all that is needed is to set XMP - you need not worry about voltages or sub-timings unless you are overclocking the memory.

    Ian
    Reply
  • CaedenV - Friday, October 19, 2012 - link

    As there is an obvious difference with RAM speed for onboard graphics, the next obvious question is how much memory is needed to prevent the system from throwing things back onto the HDD?

    The reason I ask is that 16GB, while relatively cheap today, is still a TON of RAM by today's standards, and people on a budget who are playing on an iGPU are not going to be able to afford an i7, much less be willing to fork over ~$100 for system memory. However, if there is no performance hit moving down to 8GB of system memory, it becomes much more affordable for these users to purchase better-performing RAM, because the price points between the performance tiers are even closer together. As I understand memory usage, there should be no performance hit so long as there is more memory available than is actively being used by the game, so the question is how much is really needed before hitting that wall? Is the old standard of 4GB still enough? Or do people need to step up to 8GB? Or, if nothing is getting passed onto a dedicated GPU, do iGPU users really need that glut of 16GB of RAM?

    Lastly, I remember my first personal build being a Pentium 3 1GHz machine for a real time editing machine for college. I remember it being such an issue because the Pentium 4 was out, but was tied to Rambus memory which had a high burst rate, but terrible sustained performance, and so I agonized for a few months about sticking with the older but cheaper platform that had consistent performance, vs moving up to the newer (and terribly more expensive) P4 setup which would perform great for most tasks, but not as well for rendering projects. Anywho, I ended up getting the P3 with 1GB of DDR 133 memory. I cannot remember the actual price off hand (2001), but I do remember that the system memory was the 2nd most expensive part of the system (2nd to the real time rendering card which was $800). It really is mindblowing how much better things have gotten, and how much cheaper things are, and one wonders how long prices can remain this low with sales volumes dropping before companies start dropping out and we have 2-3 companies that all decide to up prices in lock step.
    Reply
  • IanCutress - Friday, October 19, 2012 - link

    With memory being relatively cheap, on a standard DDR3 system running Windows 7, 8GB would be the minimum recommendation at this level. As I mentioned in my review, in my workload the most I have ever peaked at was 7.7GB, and that was while playing a 1080p game with all the extras alongside lots of Chrome tabs and documents open at the same time.

    Ideally this review and comparison should be taken from the perspective that you should know how much memory you are using. For 99.9% of the populace, that usually means 16GB or less. Most can get away with 8, and on a modern Windows OS I wouldn't suggest anything less than that. 4GB might be ok, but that's what I have in my netbook and I sometimes hit that.

    Ian
    Reply
  • Peanutsrevenge - Friday, October 19, 2012 - link

    Thanks Ian.

    Well, except for making me feel ludicrously old, first memory kit of 4GB DDR2?

    Mine was back in SIMM days, when I think I added an 8MB 72pin stick to my existing 4MB stick.

    Although the external math co-processor might have come first.

    And I'm only 31.

    You shall now always be Dr Evil Cutress to me.
    Reply
  • IanCutress - Friday, October 19, 2012 - link

    First *purchased* memory kit. I dealt with plenty of older memory thanks to hand me downs or prebuilt systems from my family at the time. I still have some SDRAM around somewhere, or some 8MB sticks of something or other. It's in a box under the desk ;)

    Haha, I've been called worse :D

    Ian
    Reply
  • alpha754293 - Friday, October 19, 2012 - link

    I would have figured that with a memory test/benchmarking that you would be running Stream test.

    And with all this talk about the various latencies (measured in clock cycles): a) a comparison should be given between the theoretical calculations and the actual performance, and b) you would think that you'd use something like lmbench to better quantify/test that (in addition to the actual games, tools, and applications).

    Most of the results are pretty much inconclusive since the standard deviation is within the margin of error.
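    For reference, the theoretical side of that comparison is easy to sketch (illustrative numbers only, not figures from the article): each DDR3 channel is 64 bits wide and transfers on both clock edges, so peak bandwidth in GB/s is channels × 8 bytes × MT/s / 1000.

    ```python
    # Theoretical peak DDR3 bandwidth: channels * bus width in bytes * transfer rate.
    # Real-world sustained throughput (e.g. as Stream would measure) lands well below this.
    def peak_bandwidth_gbs(mt_per_s, channels=2, bus_width_bits=64):
        return channels * (bus_width_bits // 8) * mt_per_s / 1000.0

    for rate in (1333, 1600, 1866, 2400):
        print(f"DDR3-{rate} dual channel: {peak_bandwidth_gbs(rate):.1f} GB/s")
    ```

    A dual-channel DDR3-1600 setup tops out at 25.6 GB/s on paper, which is exactly the kind of baseline a Stream or lmbench run could be compared against.
    
    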
    Reply
  • IanCutress - Friday, October 19, 2012 - link

    Main reason is to steer away from synthetics. Synthetics frustrate me so - they will easily show the difference between a 1600 C9 and 2400 C10 kit, but what is that difference in real life? If latencies and burst speeds are x% difference in the synthetic, does that actually make a difference when playing Portal 2? Hence the requirement of this review to focus on the practical rather than the synthetic.

    Regarding being within standard deviations, the results you see are the culmination of multiple tests. The standard deviations are actually quite low as the results are enormously repeatable. I did a science doctorate, I make sure my numbers are valid.

    Ian
    Reply
  • Tchamber - Friday, October 19, 2012 - link

    Back in 2009 I picked up a 3x2GB kit of Mushkin DDR3 1600 with timings of 6-7-6-18. Why don't we see low latency like that any more? Reply
  • IanCutress - Friday, October 19, 2012 - link

    Those were linked to different types of memory chips at the time - the Elpida 'Hyper' ICs (http://www.anandtech.com/show/2799). Nice speeds, but high fail rates and low yields. They have been replaced by chips that are slightly slower, but a lot more reliable. Also note that those Elpida Hyper kits worked great with Clarkdale and Nehalem, but are poor with Sandy Bridge and Ivy Bridge.

    Ian
    Reply
  • CherryBOMB - Friday, October 19, 2012 - link

    Can you explain why you say Hyper ICs "are poor with Sandy Bridge and Ivy Bridge"?
    As I stated, "I have 16GB of the fastest money could buy around that era running on X79 @ 1666 6-6-6-18-1T right now."

    This was a tri channel run >
    http://www.overclock.net/t/872945/top-30-3d-mark-1...

    post #1054
    Reply
  • IanCutress - Saturday, October 20, 2012 - link

    Because Hyper ICs fell out of favor, motherboard manufacturers are now reluctant to spend time in optimizing the Hyper IC kits to work with their systems. Thus the kits often have to fall back onto default settings, and they sometimes do not work. As one set of ICs is phased out, and new ICs come in, the newer ICs get priority.

    Ian
    PS. You'll find me on the overclock.net HWBot team :)
    Reply
  • CherryBOMB - Friday, October 19, 2012 - link

    I have 16GB of the fastest money could buy around that era running on X79 @ 1666 6-6-6-18-1T right now.
    Well over $1000 invested. Each 6GB kit was over $450 - I bought the extra to future-proof for quad channel today.
    2x CMT6GX3M3A1600C6
    1x CMT4GX3M2A1600C6
    http://www.newegg.com/Product/Product.aspx?Item=N8...

    http://www.newegg.com/Product/Product.aspx?Item=N8...
    Reply
  • saturn85 - Friday, October 19, 2012 - link

    How about adding a Folding@home CPU benchmark at different memory speeds? Reply
  • RayvinAzn - Friday, October 19, 2012 - link

    You got a Core 2 Duo E6400 back in '05? That's rather impressive.

    Typo aside, good article, gives me quite a bit to think about as I plan to migrate to IB after a solid 6-year run on my P965/Q6600 setup. I'll definitely be sticking with a DDR3 1600 C9 kit after seeing these results, as anything faster doesn't really seem to affect most of the things I do.
    Reply
  • mapesdhs - Saturday, October 20, 2012 - link


    I just bought 64GB of DDR3-2400 more to make it easier to achieve the desired 2133 target speed with a CPU OC rather than for any expected performance gain from using 2400. Although CL10 at 2400, it should be CL9 at 2300 (G.Skill TridentX). It's for a system running After Effects: 3930K, ASUS P9X79 WS, Quadro 4000, RAID, SSDs, etc. Plus, the price was basically identical to 2133 kits, so I figured what the heck, why not.

    Ian.
    Reply
  • Senti - Saturday, October 20, 2012 - link

    From the beginning it looked like a great article, but then it became less and less meaningful.

    First of all, who in their right mind will get 4x4GB for a dual-channel 1155 CPU when there are 2x8GB kits available? If you want to test 4x4GB so badly - use a quad-channel 2011 CPU (but there is no iGP there, duh).

    Second major problem is the overclocking tests. Even if we put aside that Linpack is no memory stability test (Prime95, for example, is far better for this), raising the frequency without adjusting the timings is completely meaningless if the module already can't handle more aggressive timings at the same frequency.

    What would be really interesting is: can we run a DDR3-1600 9-9-9-24 module at DDR3-1866 9-10-9-28? Or at least DDR3-1866 10-10-10-28, and what is the difference versus the base settings and versus a module officially rated for DDR3-1866?
    Reply
  • IanCutress - Saturday, October 20, 2012 - link

    -With 2x8GB kits, you often pay a premium (the next kit up for review is a 2x8GB kit). 4x4 GB kits apply both to 1155 and 2011, and represent the bulk of the kits advertised on Newegg, hence their inclusion here.

    - OCCT has a version of Linpack specifically for memory that requires high memory usage (as stated in the review, but I'm sure you read that). The overclocking tests are designed to show if the kits were higher-binned parts rated lower - and in some circumstances they were. For example, the TridentX kits are getting rave reviews on overclocking websites, and the kits I have all seem to easily push up another memory strap on Ivy Bridge. As always with overclocking, your mileage may vary. I could spend a week overclocking each kit, dealing with voltages and sub-timings, then testing thoroughly for stability. But the truth of the matter is there is little point, bearing in mind how many gamers do not even apply XMP, and going by the actual improvements you see moving up from 1866C9 and beyond (unless you are an extreme overclocker looking for a higher number in a synthetic benchmark).

    Don't forget this is a review of the kits themselves more than just looking at what different speed memory does. I rarely run any memory kit out of specifications - only if the kit is not that compatible with the board I am using do I bump voltages, or competitive overclocking when I want a higher number. Everything else is XMP.

    Ian
    Reply
  • Senti - Saturday, October 20, 2012 - link

    Well, my life has taught me that it's well worth reducing the number of memory modules on each channel, as there are cases when 2+ modules per channel won't run without errors even at native speed, while there are no problems with 1 module per channel. There is also the concern of extensibility, but I guess that's less relevant now that you are already getting 16GB of RAM in a desktop.

    I don't have experience with "tweaked" Linpacks, but I see not much reason for such tweaks, as Prime95 already does the job well, and in practice it's easier to get stability when different tests stress different parts.

    About overclocking - there is a difference in my case. 48% faster memory (up from native DDR3-1066) can be felt in practice, without synthetics or games at impractical settings.

    As for gamers that can't even apply XMP, I seriously didn't think they were the primary intended audience of AnandTech... Also, gamers with 5 fps in Metro 2033, lol.

    For a review of the kits themselves it was OK, but again quite useless for me, as besides overclocking, what is really interesting is how well chips of different brands compare with each other at equal price points.

    Overall, take my comments not as nitpicking about particular things, but rather that I hope to see here more in-depth articles.
    Reply
  • IanCutress - Saturday, October 20, 2012 - link

    I've reviewed 60+ motherboards at AnandTech since I started, and the only issues I have running certain memory depend on the motherboard and how the manufacturer has optimised reading XMP profiles (e.g. some motherboard partners do not have G.Skill as a preferred memory partner, and thus do not work with them). The only thing I ever do is bump up the IMC voltage a little, and even that is quite rare. The only issue that normally presents itself running four modules rather than two is overclocking - this is why ASUS started using T-Topology for their memory sub-system, in order to remove signalling irregularities when pushing a full set of modules to the limits.

    I agree that moving above 1066 makes a difference. That's not a point of contention. If in the current climate the machine you run offers 1866+ and people are choosing 1066 MHz memory, then that is your decision.

    Regarding AnandTech audiences, we have a wide range, from gamers to enthusiasts to engineers and financiers wanting to justify purchases. I would suspect that a fair few readers here build machines for friends and family, and are in all sorts of stages of understanding the technology under their feet. Hopefully everyone is applying XMP.

    Regarding overclocking, as always your mileage may vary. This was shown deeply in my Ivy Bridge overclocking article - many users reported worse than what I achieved and some performed better. Getting a good processor or set of memory sticks is like a chocolate chip cookie - if you take one out of the packet, some may have more chocolate chips in than others. We always hope the ones we get have the most chocolate chips, but sometimes we do not. When going from 1600 C9 to 1866 C10 for the most part the price difference in these kits (as well as the performance difference) is minimal - the main difference will be if the sub-timings are scaled accordingly, either by the BIOS on automatic or left at XMP settings. People are fooled that more MHz means better performance. In a couple of other kits I have coming up for review, (2400 C11 and 2666 C11), we see this is not always the case.

    Ian
    Reply
  • Swede(n) - Saturday, October 20, 2012 - link

    I fail to see how much more performance, in percentage terms, these memory kits at different speeds actually deliver relative to what they cost.

    Now, if I look at the different graphs, I can see little or almost no real-life app performance benefit in going from a 1600 CL9 memory kit upwards.

    Instead, the article fails to recognize the most frequent problem with fast memory: instability and a shortened lifespan of the memory controller when multiple modules are added.

    Question: have this changed since the last memory test by AnandTech:
    Sandy Bridge Memory Scaling: Choosing the Best DDR3
    http://www.anandtech.com/show/4503/sandy-bridge-me...

    Sincerely,
    Reply
  • IanCutress - Saturday, October 20, 2012 - link

    Over my Z77/Ivy Bridge reviews I have used a quad module DDR3-2400 C9 kit throughout, without any reduction in stability from the IMC. In order to test what you ask would require a large sampling and long testing - one month on Prime95 at 50C ambient to burn out one processor IMC then move onto the next one. It isn't going to happen - not enough processors, and by the time the testing is complete the article wouldn't matter so much.

    With so many people in the world using modules, under all sorts of scenarios, yes things will happen and *some* trends could be construed from the data. We do not have access to the data, and thus 'the most frequent problem with fast memory is instability and shortened lifespan' is not the most frequent - it may be the problem you most hear about, but I bet you do not hear about the 100s of others that have no issue. We can't confirm that on our end, and we can't provide any numbers that do so as they are held tightly by the company that makes the memory.

    This is a review of memory kits, not an overview of fast memory, and as such it has been treated that way, and we draw the conclusions that we can from the results at hand. This is the difference between a scientific method and random stabs in the dark based on what was posted on forums. I can confirm the former, but I'd steer away from the latter unless I could provide concrete numbers.

    In answer to your question - Sandy Bridge processors can handle up to 2133-2400 MHz memory, but Ivy Bridge goes one further: my processor can handle 2950 MHz or thereabouts. As a result, memory vendors bring out kits to sell at these higher frequencies. They need to be tested to let you guys and gals know if there is any reason to buy them, but first we need an overview to see where we stand. The article you link to is from a different editor at AT, and I needed a series of my own results for comparison (as well as confirming I had a proper set of benchmarks on my own end).

    Ian
    Reply
  • Swede(n) - Sunday, October 21, 2012 - link

    Hi Ian, Thanks a lot for Your answer. That clarified things.
    Have a nice and relaxed day.

    Btw. I enjoy reading articles and reviews here at AnandTech since many years back and I think it is one of the very best sites out there.
    Sincerely
    Reply
  • JonnyDough - Monday, October 22, 2012 - link

    Where are the non-overclocked, non-heatsinked modules? Reply
  • svdb - Tuesday, October 23, 2012 - link

    This article is pointless and debating is futile. Everybody knows that ORANGE memory modules are always faster than BLACK ones, but not as fast as RED ones! Duh...
    The same with cars...
    Reply
  • jonjonjonj - Friday, October 26, 2012 - link

    You keep saying that a big part of the heat sinks is to "prevent the competition from knowing what ICs are under the hood". Do you really think a competitor - or anyone, for that matter, who wanted to know which ICs are being used - is going to say "damn, we can't find out what the ICs are because the $45 memory has a heat sink"? I'm pretty sure they are going to buy a kit and rip it apart. Reply
