76 Comments

  • Rick83 - Monday, July 25, 2011 - link

    Do they take into account that we should be using 1.5V DIMMs for Sandy Bridge?

    The addition of that requirement usually limits choice quite a bit.
    Reply
  • compudaze - Monday, July 25, 2011 - link

    The SNB datasheet does suggest that the max memory voltage is 1.575V; however, many motherboard and memory manufacturers state that they haven't had any problems with memory running at 1.65V on SNB. Reply
  • compudaze - Monday, July 25, 2011 - link

    Also, if you stick to the spec sheet, you shouldn't be running faster than DDR3-1333 memory. Reply
  • Taft12 - Monday, July 25, 2011 - link

    You should be using 1.5V DIMMs anyway - if a memory OEM needs 1.65V to achieve the same speed and timings another vendor does at 1.5V, it's inferior memory. Reply
  • jdogi - Monday, July 25, 2011 - link

    Just as your daily driver vehicle is likely inferior to a Mercedes or Ferrari. You should get a new car. You should not make any attempt to balance cost with the value. Just get the best. It's the only way to go. What's best for Taft is best for all.

    ;-)
    Reply
  • Iketh - Tuesday, July 26, 2011 - link

    you didn't understand the logic Reply
  • MrSpadge - Wednesday, July 27, 2011 - link

    I'm sure he did. What Taft failed to mention was that "at the same price, you should be using the memory spec'ed for less voltage". However, if some memory needs a little more voltage, but is way cheaper - balance cost and value.

    MrS
    Reply
  • Rick83 - Wednesday, July 27, 2011 - link

    Actually, the higher voltage is out of spec for the CPU memory controller and may well impact longevity.
    So it's like buying the Ferrari, and running it on Biofuel with too much Ethanol that eats right through the tubing, but is marginally cheaper.
    Reply
  • jfelano - Tuesday, July 26, 2011 - link

    Not inferior, just older. All 1600MHz memory was 1.65V when it debuted. Then they came out with 1.5V, now even 1.35V. Reply
  • cervantesmx - Thursday, July 28, 2011 - link

    That is correct indeed. Just purchased 8GB at 1600MHz running at 1.25V. $59.99. Free shipping. Reply
  • vailr - Monday, July 25, 2011 - link

    No discussion of differing voltages?
    A quick check for DDR3 at Newegg shows:
    G.SKILL ECO Series 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1600
    @ 1.35 volts & Cas Latency: 7
    vs.
    G.SKILL Ripjaws X Series 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1600
    @ 1.50 volts & Cas Latency: 6
    A more thorough consideration of these two DDR3 modules might be interesting.
    For virtually the same money, aren't most people going to seek out DDR3 with the lowest possible CAS latency, combined with the lowest possible voltage?
    I know that I wouldn't consider buying any DDR3 memory modules with a (nominal) CAS latency higher than 7.
    Reply
  • JarredWalton - Monday, July 25, 2011 - link

    Just as we didn't test with ten different modules (for ease of testing), we didn't use different voltage memory. Whether your RAM is 1.5V or 1.35V, at the same timings and speed the performance should be identical (less than a 0.5% difference). And we did look at the effect of lower latency RAM; sure, at the same price buy lower latency and higher bandwidth RAM, but prices aren't the same, particularly on 2x4GB kits. Reply
  • Tchamber - Monday, July 25, 2011 - link

    I'd like to see how these tests stack up against the triple-channel Nehalem i7s. Reply
  • duploxxx - Monday, July 25, 2011 - link

    Compare with what, an EOL platform? It was already known that there is no added value in memory speed testing on these systems, just like the previous generation. 1366 is dead; that testing has been done in the past.
    This test just showed that a lot of money and time was wasted investigating this.

    They'd better take the time and investigate further into Llano memory speed, something that really does scale with memory.
    Reply
  • Finally - Monday, July 25, 2011 - link

    It's already done, see Computerbase... Reply
  • JarredWalton - Monday, July 25, 2011 - link

    We've done it as well for graphics applications:
    http://www.anandtech.com/show/4476/amd-a83850-revi...

    We haven't done the application testing with different DDR3 on Llano, however.
    Reply
  • banwell - Monday, July 25, 2011 - link

    You can also get a nice 'free' bump in performance at 1600 by switching to 1T. Something the better quality memory will be able to do easily. Reply
  • AssBall - Monday, July 25, 2011 - link

    I'm not sure why they didn't test 1T. It is a memory scaling article after all. Anyway, TechReport did, and their conclusions are about the same, i.e. unless you are overclocking and running synthetic benchmarks, it doesn't really matter. Reply
  • compudaze - Monday, July 25, 2011 - link

    Lowering the command rate from 2T to 1T at DDR3-1600 doesn't necessarily mean you can do the same at DDR3-2133. Not all memory modules, CPUs and motherboards are created equal. Testing all configurations at 2T kept the results comparable. Reply
  • tomx78 - Tuesday, July 26, 2011 - link

    The article is called "choosing the best DDR3," so I agree they should test 1T. Without it the whole article is useless. It still doesn't answer the question of which DDR3 is best. If DDR3-2133 can't do 1T but DDR3-1600 can, which one is faster? Reply
  • Impulses - Monday, July 25, 2011 - link

    If you're pinching pennies and trying to build a system on a budget, even the $10 premium for anything but a basic 1333 kit doesn't seem worthwhile... I actually chose my last 2x4GB kit based on price and looks more than anything, heh, the old G.skill Sniper heatspreaders (the blue-ish version) matched my MSI mobo well and looked like they'd be the least likely to interfere with any heatsink. Some of the heatspreaders on pricier kits are crazy big, not to mention kinda gaudy. Reply
  • Finally - Tuesday, July 26, 2011 - link

    Let me repeat: You buy your RAM based on... aesthetics?
    No further questions, thanks.
    Reply
  • Finraziel - Wednesday, July 27, 2011 - link

    Well, as this test showed, there is little performance gain to be had, so what else is there to base your choice on? Especially for people with windowed cases it can be important. And if you buy really fast memory that won't fit under your heatsink... well, let's just say, do you want to insinuate someone else is dumb? :)
    I used to have the Corsair modules with activity lights on top, and while I mainly bought them for looks, they were actually useful at times for quickly checking whether my system had totally crashed or was still doing stuff (you can sort of see the difference in the patterns of the lights).
    Reply
  • knedle - Monday, July 25, 2011 - link

    I would love to see graphs showing how much power different RAM modules consume. A few weeks ago I built a low-power computer with Sandy Bridge, and I'm still looking into how to get as much from it as possible, with as low power consumption as possible. Reply
  • Rajinder Gill - Monday, July 25, 2011 - link

    Power savings for DRAM are generally small. As you lower the current draw (either by reducing voltage or slacker timings) you are battling in part against the efficiency curve of the VRM.

    On some boards the difference in power consumption between DDR3-1333 and DDR3-1866 (given voltage and timing changes) can be as little as 1 Watt.

    -Raja
    Reply
  • Vhozard - Monday, July 25, 2011 - link

    "Multiple passes are generally used to ensure the highest quality video output, and the first pass tends to be more I/O bound while the second pass is typically constrained by CPU performance."

    This is really not true, multiple passes are used by x264 to come as close as possible to a given file size. A one-pass crf-based encode produces an equally high quality video output, given the same conditions.

    Maybe you should use one-pass encodes, as they are more commonly used when file size specification is not very important.
    Reply
  • JarredWalton - Monday, July 25, 2011 - link

    Multiple passes produce higher quality by using a higher bitrate where it's needed and a lower bitrate where it's not. In a single-pass, constant bitrate encode, scenes where there's a lot of movement will show more compression artifacts. There's no need to do multiple passes for size considerations: you do a constant bitrate of 2.0Mbps (including audio) for 120 minutes and you will end up with a file size very close to 1800MB (or if you prefer, 1716.61MiB). Variable bitrate with a single pass doesn't give an accurate file size. Reply
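The constant-bitrate file-size arithmetic above can be sketched in a few lines (the 2.0 Mbps and 120-minute figures come from the comment; MB here means decimal megabytes):

```python
def cbr_file_size_mb(bitrate_mbps: float, minutes: float) -> float:
    """File size of a constant-bitrate stream, in decimal megabytes (MB)."""
    total_bits = bitrate_mbps * 1_000_000 * minutes * 60  # bits in the whole stream
    return total_bits / 8 / 1_000_000                     # bits -> bytes -> MB

size_mb = cbr_file_size_mb(2.0, 120)  # 2.0 Mbps (audio included) for 120 minutes
print(f"{size_mb:.0f} MB")                        # 1800 MB
print(f"{size_mb * 1_000_000 / 2**20:.2f} MiB")   # 1716.61 MiB
```

With CRF (variable bitrate, single pass) no such closed-form size exists, which is the crux of the disagreement above.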
  • Vhozard - Monday, July 25, 2011 - link

    Very few people still use constant bitrate encodes.
    x264 works with a crf (constant rate factor), which gives constant *quality*; not constant bitrate!

    There is very much a need to do multiple passes for size considerations as a constant bitrate will not give them optimal quality at all.

    A one-pass CRF encode at 15 that reaches a file size of, let's say, 1 GB will have almost exactly the same quality as a two-pass encode which is set to 1 GB.

    I suggest you read the x264 wiki...
    Reply
  • JarredWalton - Monday, July 25, 2011 - link

    Sorry -- missed that you said CRF and not CBR. Reply
  • Kevin G - Monday, July 25, 2011 - link

    One thing worth noting is that the memory controller has to run at least twice the clock speed of the memory (i.e. the memory controller runs at 3.2GHz for DDR3-1600). This is one of the reasons why Intel doesn't officially support memory speeds higher than 1600MHz. Running the CPU cores and memory controller at the same clock speed should produce a benefit even if memory speed isn't improved (i.e. running the CPU and memory controller at 4.8GHz with 1600MHz memory). It would be interesting to see how just scaling the memory controller's speed affects performance.

    I wish there were a few game tests utilizing the integrated GPU. For a desktop, using a discrete graphics card for gaming is a no-brainer over Intel's integrated graphics. However, many laptops will only use Intel's solution, and thus improving memory performance could be a means of improving gaming performance there. Furthermore, laptops often have lower resolutions than 1920x1080, so the performance delta between memory speeds would be wider.
    Reply
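As context for bandwidth discussions like the one above, the theoretical peak of a DDR3 configuration is simple arithmetic (a back-of-the-envelope sketch; the 64-bit-per-channel bus width is the standard DDR3 figure):

```python
def ddr3_peak_bandwidth_gb_s(transfer_rate_mt_s: float, channels: int = 2,
                             bus_width_bits: int = 64) -> float:
    """Theoretical peak bandwidth in GB/s: transfers/s x bytes per transfer x channels."""
    bytes_per_transfer = bus_width_bits // 8
    return transfer_rate_mt_s * 1e6 * bytes_per_transfer * channels / 1e9

print(ddr3_peak_bandwidth_gb_s(1600))  # dual-channel DDR3-1600: 25.6 GB/s
print(ddr3_peak_bandwidth_gb_s(2133))  # dual-channel DDR3-2133: ~34.1 GB/s
```

Real workloads rarely approach these peaks, which is consistent with the article finding small real-world deltas between speed grades.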
  • RussianSensation - Monday, July 25, 2011 - link

    But even if laptops run at lower resolutions, they have much weaker CPUs and GPUs and generally have slow 5400 rpm hard drives. So if anything, memory speed will be even less important in a laptop, since a laptop will be faced with all kinds of I/O, GPU and CPU bottlenecks.

    Improving performance of Intel's gaming solution with faster memory is a waste of time. HD3000 is just a dog. If you want a budget gaming laptop, you get Llano. No amount of memory bandwidth is going to translate into a more playable gaming experience on the HD3000.
    Reply
  • Kevin G - Monday, July 25, 2011 - link

    Except that Llano is in the same situation as it too needs fast memory for better graphics performance. If DIMM manufacturers start rolling out high performance memory for Llano, Sandy bridge laptops can also benefit. So why not actually test to see what those benefits could be? Reply
  • orenlevy - Tuesday, July 26, 2011 - link

    There is definitely GPU improvement on HD3000; I don't know why you are not benching on the Z68 platform. Personally I have a Z68 HTPC, and I sure noticed different latency in normal use. We all like speedy windows popping up, and that's simply what happens.
    Personally I say to myself, "oh, it's about time to squeeze a little bit more from the Sandy Bridge"; even the G840 behaves so differently at 1600 CL8 on Z68...
    Reply
  • jabber - Monday, July 25, 2011 - link

    As it's such a waste of life. RAM hasn't been fun or that important since the good old DDR days.

    They may as well just hardwire 4GB of bog-standard RAM onto the motherboard for most folks.
    Reply
  • silverblue - Monday, July 25, 2011 - link

    Maybe they have, but RAM speed is important for iGPUs and especially so for APUs.

    For those wondering where the Llano equivalent is...

    http://www.anandtech.com/show/4476/amd-a83850-revi...
    Reply
  • silverblue - Monday, July 25, 2011 - link

    Duh. The link is on the final page of this article. Reply
  • Kevin G - Monday, July 25, 2011 - link

    The main reason why RAM still comes on DIMMs is reliability and the need for expansion. Cost is also a factor, especially for motherboards with a lot of RAM soldered onto them.

    However, soldering down RAM in some situations does make sense. In laptops, for example. Another area where soldered RAM on a desktop motherboard would make sense is with the overclocking crowd. The RAM could be placed closer to the CPU socket, which could easily increase clock speeds. The elimination of the edge connector on the DIMM would also significantly enhance signal integrity. Physical placement on the motherboard would allow for better cooling (larger heat sinks, or water cooling in direct contact with the RAM chips). There would also be room for robust power delivery to the RAM chips on the motherboard.
    Reply
  • dac7nco - Monday, July 25, 2011 - link

    I'm glad there are still RAM reviews; 1.5V DDR3-1600 / CAS 7-7-7 gets me an additional 1 to 2 FPS in long Handbrake transcodes, which is a big part of my bread and butter.

    Keep in mind, 1.5V, no higher, is the DDR3 spec, which is why you'll never see registered ECC DDR3 memory rated above that. Keep this in mind, Anand: control your people! X58-rated memory has no place today.

    Daimon
    Reply
  • jabber - Monday, July 25, 2011 - link

    I still think the main selling point for RAM is: how well does it match my motherboard?

    Maybe a round up of modules with fancy spreaders and how they look in Asus/Gigabyte/Asrock/MSI boards.
    Reply
  • Rick83 - Monday, July 25, 2011 - link

    Fancy heat spreaders are the worst that has ever happened to RAM.

    It gets worse when you have to pay more to get rid of it, as with the new low profile vengeance series from corsair.

    Memory doesn't usually get that hot anyway, and the large heat spreaders impede airflow between the modules in fully populated setups, as well as limit what size your cooler can be, occasionally forcing you to get one of those water-cooler-in-a-box things which incur massive extra costs.

    The only reason I don't want to have completely naked memory, is that the heat spreader gives the RAM some ESD protection, which is actually useful.
    Reply
  • JoJoman88 - Monday, July 25, 2011 - link

    The review just made your post the truest of them all, jabber! Reply
  • Spacecomber - Monday, July 25, 2011 - link

    In the past, one reason to get faster rated memory is that you eventually would see a migration of what was the standard memory module to something running on a faster bus speed. I'm not sure if that really holds true, anymore. It seems that these days you are more likely to see the adoption of a completely new type of memory, rather than an existing standard sticking around long enough for the minimum required memory speeds it is based on to go up. Reply
  • geofelt - Monday, July 25, 2011 - link

    One of the price differentiators is the heat spreader.
    Apart from aesthetics, where is the value of fancy heat spreaders? Can it be measured?
    It seems to me that they are mostly marketing gimmicks, except perhaps for those used in overclocking competitions.
    I would like to see some sort of study to determine the value of heat spreaders.
    Reply
  • MrSpadge - Wednesday, July 27, 2011 - link

    Short answer: nothing.

    MrS
    Reply
  • BobDavid - Monday, July 25, 2011 - link

    see subject Reply
  • JarredWalton - Monday, July 25, 2011 - link

    See the conclusion; we already did a look at that (with HD 3000 and Llano).
    http://www.anandtech.com/show/4476/amd-a83850-revi...
    Reply
  • LoneWolf15 - Monday, July 25, 2011 - link

    Ivy Bridge will be out next year. There is a reasonable chance it could have a bump in memory bandwidth. Buy RAM at one or two multipliers above what you need now, and when the upgrade comes along, you won't be wishing for new RAM.

    DDR3 is so cheap right now, it's worth planning ahead.
    Reply
  • dman - Monday, July 25, 2011 - link

    I've been looking for a review like this for a while, was a good read even if it didn't come as a huge surprise. I'm definitely interested in the AMD platform results if/when those are available. Reply
  • SteveSweetz - Monday, July 25, 2011 - link

    I was disappointed to see this article lacked the detail (and quantity) of the gaming tests versus its predecessor on AnandTech: http://www.anandtech.com/show/2792/10

    That article showed that memory frequency and latency changes had a greater impact in some games than others, and that in most cases the memory also had a greater impact on the minimum framerate (an important consideration) than average framerate.

    Also disappointing to see no CAS 6 sticks tested here, particularly because 2GB 1600MHz CAS 6 sticks were relatively common at one point, but now, for whatever reason, G.Skill is the only company that still makes them. It'd be interesting to see whether that exclusive is meaningful. The previous article showed CAS latency being more important than frequency in some cases.
    Reply
  • mga318 - Monday, July 25, 2011 - link

    You mentioned Llano at the end, but in the Llano reviews & tests, memory bandwidth was tested primarily, with little reference to latency. I'd be curious as to which is more important with a higher-performance IGP like Llano's. Would CAS 7 (or 6) be preferable over 1866 or 2133 speeds with CAS 8 or 9? Reply
  • DarkUltra - Monday, July 25, 2011 - link

    How about testing Valve's particle benchmark, or a Source-based game at low resolution with a non-geometry-limited 3D card (Fermi) and an overclocked CPU? Valve did an incredible job with their game engine. They used a combination of fine-grained and coarse threading to max out all the CPU cores. Very few games can do that today, but more may in the future. Reply
  • DarkUltra - Monday, July 25, 2011 - link

    Why test with 4GB? RAM is cheap; most people who buy the premium 2600K should pair it with two 4GB modules. I imagine Windows will require 4GB of RAM, and games the same, in the future. Just look at all the .NET developers out there; .NET usually results in incredibly memory-bloated programs. Reply
  • dingetje - Monday, July 25, 2011 - link

    hehe yeah
    .net sucks
    Reply
  • Atom1 - Monday, July 25, 2011 - link

    Most algorithms on the CPU are optimized to have their data inside the CPU cache 99% of the time. If you look at SiSoft Sandra, where there is a chart of bandwidth as a function of copied block size, you can see that the CPU cache is 10-50x faster than main memory, depending on the cache level. Linpack is no exception here: the primary reason for Linpack's success is its ability to keep data in the CPU cache nearly all of the time. Therefore, if you do find an algorithm which benefits considerably from main memory bandwidth, you can be sure it is a poor job on the programmer's side. I think it is a kind of challenge to see which operations and applications do take a hit when main memory is 2x faster or 2x slower. I would be interested to see where the breaking point is, when even well-written software starts to take a hit. Reply
  • DanNeely - Monday, July 25, 2011 - link

    That's only true for benchmarks and highly computationally intensive apps (and even there, many problem classes can't be packed into the cache or written to stream data into it). In the real world, where 99% of software's performance is bound by network I/O, disk I/O, or user input, trying to tune data to maximize CPU cache use is wasted engineering effort. This is why most line-of-business software is written using Java or .NET, not C++; the finer-grained memory control of the latter doesn't buy anything, while the higher-level nature of the former allows for significantly faster development. Reply
  • Rick83 - Monday, July 25, 2011 - link

    I think image editing (simple computation on large datasets) and engineering software (numerical simulations) are two types of application that benefit more than average from memory bandwidth, and in the second case, latency.
    But, yeah, with CPU caches reaching tens of megabytes, memory bandwidth and latency are getting less important for many problems.
    Reply
  • MrSpadge - Wednesday, July 27, 2011 - link

    True.. large matrix operations love bandwidth and low latency never hurts. I've seen ~13% speedup on part of my Matlab code going from DDR3-1333 CL9 to DDR3-1600 CL9 on an i7 870!

    MrS
    Reply
  • Patrick Wolf - Monday, July 25, 2011 - link

    You don't test CPU gaming benchmarks at normal settings because you may become GPU limited, so why do it here?
    http://www.xbitlabs.com/articles/memory/display/sa...
    Reply
  • dsheffie - Monday, July 25, 2011 - link

    ....uh... Linpack is just LU, which in turn is just DGEMM. DGEMM has incredible operand reuse (O(sqrt(cache size))). Reply
  • Black1969ta - Monday, July 25, 2011 - link

    This article ignores a very important factor in choosing RAM: overclocking ability. Sure, the delta between 1333 and 2133 is not very large within the same stick of RAM that is down-clocked, but what about a 1333 stick that is overclocked? Can the $50 stick of 1333 perform at 2133, or even 1866, etc., the way the $150 DDR3-2133 does with no problem?

    I would like to get an i7-2600K and overclock it to 4.8GHz, and I wanted to know the cheapest stick of RAM that will allow that with no compromise. This article doesn't tell me anything useful; sure, a good expensive stick is a good expensive stick at any speed, but what about a cheaper stick?
    Reply
  • compudaze - Monday, July 25, 2011 - link

    That's a chance you just have to take yourself. Just because my brand X model Y value DDR3-1333 RAM will run at DDR3-2133 CL9 at 1.65V doesn't mean you're guaranteed the same results if you buy the same make/model value DDR3-1333 RAM. Same with CPUs & GPUs. Reply
  • xsilver - Friday, August 05, 2011 - link

    But for someone not in the know, how well does 1333 RAM generally overclock? Some generations the bargain-basement RAM has no headroom at all, and some generations most basement RAM has enough headroom to get where you need.

    Also, as an addendum, maybe you could also test RAM size scaling as well as speed. I, as well as others, may be contemplating 16GB of RAM and wondering if it's worth it.
    Reply
  • Chris383 - Monday, July 25, 2011 - link

    I think you guys are missing the point of faster memory. It just depends on the workload; most applications are written as linear code, or at least I am guessing so. But what happens when you run more than one program at the same time, or 3-5 programs at the same time? I think then you would start to see why memory performance really does matter. And some games, not most but some, will start to show a very big increase in performance with faster memory: for example FarCry 2, GTA4, StarCraft 2. Now some of this may be caused by poor memory management of the video card or lack of vmem, but from my findings, even with enough vmem you will still see some very big changes in FPS with faster/tighter memory timings (provided you're not CPU or GPU limited).

    That being said, for most people playing games at lower resolutions, like 1080p and below, faster memory is really not needed; 1333 will suffice plenty well. But when you're driving 3-4+ MP of screens, you will definitely want to take faster memory into consideration for your rig.
    Reply
  • ypsylon - Tuesday, July 26, 2011 - link

    I don't get why home users are so over-excited about memory over 1600MHz. For gaming, buying e.g. 2133 memory is as wise as buying a Mercedes Maybach for trips to the local grocery store.

    Fast memory has very limited usage (medicine, NASA, etc.), and a home desktop/gaming rig certainly doesn't qualify. Furthermore, if you want to OC a system, then the whole point of OC is to buy cheap [read: cheap doesn't = crap] and squeeze every little bit of performance out of it. Stay at the 1333/1600 level. 2000 or more is for [beeep!] with humongous e-penises running benchmarks 365/24/7. The biggest advantage of the 1333/1600 range is that by default pretty much every chipset and motherboard supports it right now.

    I'm running perfectly standard Kingston 1333 at 1800 without any fancy cooling or changed timings, just a bumped BClk on my X58. Even at current prices I'm $100+ ahead just by doing this (2 triple sets). And a RAMDisk created in that memory certainly isn't slow; it eats every SSD for breakfast.
    Reply
  • Hrel - Tuesday, July 26, 2011 - link

    SO happy you posted this. I JUST ordered a new laptop and didn't upgrade the RAM at all because it was overpriced. Now I know I'm going to order two 4GB DIMMs of DDR3-1600 for it, CAS latency 9 be damned, haha. Thanks so much!

    I was apparently giving way too much credit to CAS latency; I was going to get 1066 CAS 7. I didn't realize pure bandwidth was so important nowadays. I remember an old memory article like this comparing DDR3 for the i7 920 and such, whatever that family is called. 1600 CAS 7 came out on top there. Odd that Sandy Bridge changes that, but I'm glad I know.

    I'd like to see more articles like this, and fewer articles about EVERY stupid smartphone under the sun.
    Reply
  • Rick83 - Wednesday, July 27, 2011 - link

    Careful: CAS latency is counted in memory-clock cycles, and on DDR3 the memory clock is half the quoted transfer rate.
    DDR3-1066 CL7: 7 cycles / 533MHz = ~13.13ns
    DDR3-1600 CL9: 9 cycles / 800MHz = 11.25ns
    So in fact, the 1600MHz CL9 RAM has a nearly two nanoseconds lower effective latency.

    Always remember that CAS latency is in cycles, which take a different amount of time according to the clock speed.
    Reply
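The cycles-to-nanoseconds conversion above is easy to get wrong because vendors quote the transfer rate (MT/s), while CAS latency counts memory-clock cycles, which on DDR tick at half that rate. A minimal sketch of the conversion:

```python
def cas_latency_ns(cl: int, transfer_rate_mt_s: float) -> float:
    """Effective CAS latency in nanoseconds.

    CAS latency is counted in memory-clock cycles; on DDR memory the
    clock runs at half the quoted transfer rate (MT/s).
    """
    clock_mhz = transfer_rate_mt_s / 2
    return cl * 1000 / clock_mhz  # cycles / MHz -> ns

print(cas_latency_ns(7, 1066))  # DDR3-1066 CL7: ~13.13 ns
print(cas_latency_ns(9, 1600))  # DDR3-1600 CL9: 11.25 ns
```

This is why the higher-clocked CL9 kit still has the lower absolute latency.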
  • schulmaster - Wednesday, July 27, 2011 - link

    Tests are not 'ran', tests are 'run.' You can say we ran tests, but you must say we have run them. This site is way to technically proficient and intelligent for glaring mistakes in homepage articles. I know for at least some of you, English is a second+ language; if that's the case, send me your articles and I'll proof them for free. Reply
  • Black1969ta - Wednesday, July 27, 2011 - link

    "This site is way to technically proficient"
    to here refers to excessive, like too much but you left out the extra "o" so the means something totally different.

    If you want to be a Grammar Nazi, use proper English, especially when you are using it as an advertisement.
    Reply
  • schulmaster - Thursday, July 28, 2011 - link

    my man. Reply
  • Rajinder Gill - Friday, July 29, 2011 - link

    LOL!

    Caught with your pants down there sir.
    Reply
  • Isaac the k - Tuesday, August 02, 2011 - link

    Memory above DDR3-1333 is plentiful and cheap.
    Sandy Bridge benefits from the bump up to 1600.
    Mobos can all handle the higher-spec'ed memory.

    WHY IS INTEL SPEC'ING THE NEW I7's FOR 1333 INSTEAD OF 1600 BY DEFAULT?

    I don't want to have to futz with my BIOS just to get my memory to run at STOCK timings!
    Why the HELL are they crippling the basic utility of their chips??

    I grant you, it isn't a major increase in performance, but why set the ceiling so low? Why not set it higher and let the mobo manufacturers choose what their boards are compatible with???
    Reply
  • shriganesh - Wednesday, August 24, 2011 - link

    This article is thoughtful and very good news for mid-range to high-end system builders :)
    There's absolutely no need to pay extra for those faster and better memory modules!!
    Reply
  • ryedizzel - Tuesday, September 20, 2011 - link

    Thank you very much for this article. I have been memory shopping for a couple days now and debating on different memory speeds vs. latency vs. price. I wish I had the lab setup to test them all. Thanks again! Reply
  • James D - Monday, November 28, 2011 - link

    For each speed grade (1333, 1600, 1866, etc.) you used only slightly different timings! For example, for 1333MHz you had:
    1) 7-7-7-18
    2) 8-8-8-18
    3) 9-9-9-18
    In this case, of course you won't get very different results! Don't you know that by changing CL and the other timings you can change tRAS, which is the minimum number of clock cycles needed to access a certain row of data in RAM between the data request and the precharge command?
    OF COURSE you won't see a big difference if you didn't change tRAS! If you lower all the other timings, then most likely you can decrease tRAS, which will also increase performance.

    Please rebench all of these with all the timings varied together. It means a lot for the results.
    Reply
  • poohbear - Saturday, December 17, 2011 - link

    Thanks for running the gaming benchmarks @ 1920x1080 to show us practical results (i.e. there were none!). I hate it when they run these benchmarks @ 800x600 or some nonsensically low resolution to show us a difference that we really couldn't care less about in 2011. Reply
