When it comes to memory overclocking, there are several ways to approach it. Memory overclocking is rarely required - only those attempting to run benchmarks need worry about pushing the memory to its uppermost limits. It also depends highly on the memory kit being used - memory is similar to processors in that the ICs are binned to a rated speed. The higher the bin, the better the speed; however, if there is demand for lower-speed memory, higher-bin parts may be downclocked to increase the supply of the lower-clocked component. Conversely, for the high-end frequency kits, less than 1% of all ICs tested may actually hit the rated speed of the kit, hence the price of these kits rises steeply.

With this in mind, there are several ways a user can approach overclocking memory. The art of overclocking memory can be as complex or as simple as the user would like - the darker arts typically require in-depth knowledge of how memory works at a fundamental level. For the purposes of this review, we are approaching overclocking through three different scenarios, with a short latency sketch after the list to put them in perspective:

a) From XMP, adjust Command Rate from 2T to 1T
b) From XMP, increase Memory Speed strap (e.g. 1333 MHz -> 1400 -> 1600)
c) From XMP, decrease main sub-timings (e.g. 10-12-12 to 9-11-11 to 8-10-10)
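
To put these scenarios in perspective, (b) and (c) both reduce the first word latency of the memory: the CAS latency in clocks divided by the memory clock, which is half the DDR3 data rate. The short sketch below is purely illustrative (Python, using the example figures from the list above) and ignores the other sub-timings:

    # First word latency of a DDR3 setting in nanoseconds.
    # data_rate is the DDR3 transfer rate in MT/s, cas is the CAS latency in clocks.
    def first_word_latency_ns(data_rate, cas):
        # memory clock (MHz) = data_rate / 2, cycle time (ns) = 1000 / clock
        return cas * 2000.0 / data_rate

    # Scenario (b): raising the speed strap at the same CAS latency
    print(first_word_latency_ns(1333, 9))   # ~13.5 ns at DDR3-1333 CL9
    print(first_word_latency_ns(1600, 9))   # ~11.3 ns at DDR3-1600 CL9

    # Scenario (c): tightening the CAS latency at the same speed
    print(first_word_latency_ns(2400, 10))  # ~8.3 ns at DDR3-2400 CL10
    print(first_word_latency_ns(2400, 9))   # ~7.5 ns at DDR3-2400 CL9

Scenario (a) does not change these figures, but moving the command rate from 2T to 1T allows addresses and commands to be issued to the module over one clock cycle rather than two.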

There is plenty of scope to overclock beyond this, such as adjusting the memory voltage or the voltage of the memory controller. As long as a user is confident adjusting these settings, there is a good chance that the results here will be surpassed. There is also the fact that individual sticks of memory may perform better than the rest of the kit, or that one of the modules could be a complete dud and hold the rest of the kit back. For the purposes of this review we are seeing whether the memory out of the box, taken as a whole kit, will run faster at its rated voltage.

In order to ensure that the kit is stable at the new settings, we run the Linpack test within OCCT for five minutes. This is a short but thorough test, and we understand that users may wish to run stability tests for longer to reassure themselves of longer-term stability. However, for the purposes of throughput, a five-minute test will catch immediate errors caused by overclocking the memory.
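
As an aside, users who prefer to sanity check a new memory setting from a Linux environment can script a similarly brief pass around a tool such as memtester. This is only a sketch of an alternative workflow (assuming memtester is installed), not the procedure used in this review, and the 2048 MB size and three passes are arbitrary choices:

    import subprocess

    # Exercise 2048 MB of RAM for three passes with memtester (run as root so the
    # region can be locked in memory); a non-zero exit code means errors were found
    # and the new memory settings should be backed off.
    result = subprocess.run(["memtester", "2048M", "3"])
    print("no errors reported" if result.returncode == 0 else "memory errors detected")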

With that said, the kits performed as follows:

F3-1333C9Q-16GAO - rated at DDR3-1333 9-9-9-24 2T 1.50 volts

Adjusting from 2T to 1T: Passes Linpack
Adjusting from 1333 to 1400: Passes Linpack
Adjusting from 1333 to 1600: No boot
Adjusting from 9-9-9 to 8-8-8: Linpack Error

F3-12800CL9Q-16GBXL - rated at DDR3-1600 9-9-9-24 2T 1.50 volts

Adjusting from 2T to 1T: Passes Linpack
Adjusting from 1600 to 1800: No boot
Adjusting from 9-9-9 to 8-8-8: No boot

F3-14900CL9Q-16GBSR - rated at DDR3-1866 9-10-9-28 2T 1.50 volts

Adjusting from 2T to 1T: Passes Linpack
Adjusting from 1866 to 2000: No boot
Adjusting from 9-10-9 to 8-9-8: No boot

F3-17000CL9Q-16GBZH - rated at DDR3-2133 9-11-10-28 2T 1.65 volts

Adjusting from 2T to 1T: Passes Linpack
Adjusting from 2133 to 2200: Passes Linpack
Adjusting from 2133 to 2400: No boot
Adjusting from 9-11-10 to 9-9-9: No boot
Adjusting from 9-11-10 to 8-11-10: No boot

F3-2400C10Q-16GTX - rated at DDR3-2400 10-12-12-31 2T 1.65 volts

Adjusting from 2T to 1T: Passes Linpack
Adjusting from 2400 to 2600: No boot
Adjusting from 10-12-12 to 9-11-11: No boot

Comments

  • frozentundra123456 - Thursday, October 18, 2012 - link

    While interesting from a theoretical standpoint, I would have been more interested in a comparison in laptops using HD4000 vs A10 to see if one is more dependent on fast memory than the other. To be blunt, I don't really care much about the IGP on a 3770K. It would have been a more interesting comparison in laptops where the IGP might actually be used for gaming. I guess it would have been more difficult to do with changing memory around so much in a laptop, though.

    The other thing is I would have liked to see the difference in games at playable frame rates. Does it really matter if you get 5.5 or 5.9 fps? It is a slideshow anyway. My interest is if using higher speed memory could have moved a game from unplayable to playable at a particular setting or allowed moving up to higher settings in a game that was playable.
  • mmonnin03 - Thursday, October 18, 2012 - link

    RAM by definition is Random Access which means no matter where the data is on the module the access time is the same. It doesn't matter if two bytes are on the same row or on a different bank or on a different chip on the module, the access time is the same. There is no sequential or random difference with RAM. The only difference between the different rated sticks are short/long reads, not random or sequential and any reference to random/sequential reads should be removed.
  • Olaf van der Spek - Thursday, October 18, 2012 - link

    You're joking right? :p
  • mmonnin03 - Thursday, October 18, 2012 - link

    Well if the next commenter below says their memory knowledge went up by 10x they probably believe RAM reads are different depending on whether they are random or sequential.
  • nafhan - Thursday, October 18, 2012 - link

    "Random access" means that data can be accessed randomly as opposed to just sequentially. That's it. The term is a relic of an era where sequential storage was the norm.

    Hard drives and CDs are both random access devices, and they are both much faster on sequential reads. An example of sequential storage would be a tape backup drive.
  • mmonnin03 - Thursday, October 18, 2012 - link

    RAM is direct access, no sequential or randomness about it. Access time is the same anywhere on the module.
    XX reads the same as

    X
    X

    Where X is a piece of data and they are laid out in columns/rows.
    Both are separate commands and incur the same latencies.
  • extide - Thursday, October 18, 2012 - link

    No, you are wrong. Period. nafhan's post is correct.
  • menting - Thursday, October 18, 2012 - link

    no, mmonnin03 is more correct.
    DRAM has the same latency (relatively speaking.. it's faster by a little for the bits closer to the address decoder) for anywhere in the memory, as defined by the tAA spec for reads. For writes it's not as easy to determine since it's internal, but can be guessed from the tRC spec.

    The only time that DRAM reads can be faster for consecutive reads, and considered "sequential" is if you open a row, and continue to read all the columns in that row before precharging, because the command would be Activate, Read, Read, Read .... Read, Precharge, whereas a "random access" will most likely be Activate, Read, Precharge most of the time.

    The article is misleading, using "sequential reads" in the article. There is really no "sequential", because depending if you are sequential in row, column, or bank, you get totally different results.
  • jwilliams4200 - Thursday, October 18, 2012 - link

    I say mmonnin03 is precisely wrong when he claims that "no matter where the data is on the module the access time is the same".

    The read latency can vary by about a factor of three depending on whether the read is from an already open row, or whether the desired data comes from a different row than the one already open.

    That makes a big difference in total read time, especially if you are reading all the bytes in a page.
  • menting - Friday, October 19, 2012 - link

    no. he is correct.
    if every read has the conditions set up equally (i.e. the parameters are the same, only the address is not), then the access time is the same.

    so if address A is from a row that is already open, the time to read that address is the same as for address B, if B is also from a row that is already open

    you cannot have a valid comparison if you don't keep the conditions the same between 2 addresses. It's almost like saying the latency is different between 2 reads because they were measured at different PVT corners.
