One of the most important kits in this review is the DDR3-1600 kit, for which G.Skill has supplied one of their RipjawsX range.  This kit is important not only because of the small price differential to the DDR3-1333 kit ($5 difference), but also because each new generation of processors carries an ever-increasing suggested memory speed.  Take the most recent AMD Trinity processor release for desktops – all but the low-end processor support 1866 MHz memory as standard out of the box.  We can be fairly sure that almost all of these processors will also run 2133 MHz, but as manufacturers raise that ‘minimum’ compliance barrier in the testing of their IMCs, the ‘standard’ memory kit has to get faster and come down in price as well.

Visual Inspection

The RipjawsX kit we have uses a large heatsink design, with the top of the heatsink protruding 9.5mm above the module itself.  As mentioned with the Ares DDR3-1333 kit, there are multiple reasons why heatsinks are used, and cooling is pretty low on that list.  More likely they are there first to hide which ICs are used in the kit from the competition (removing them with a screwdriver and a heat gun usually breaks an IC on the board), and then for aesthetics. 

The heatsink for the RipjawsX uses a series of straight lines as part of the look, which may or may not be beneficial when putting the modules into a system with a large air cooler.  Here I put one module into a mini-ITX board, the Gigabyte H77N-WiFi, alongside a stupidly large and heavy air cooler, the TRUE Copper:

As we can see, the cooler would be great with the Ares kit, but not so much with the RipjawsX.  The kit will still work in the memory slot like this, though for peace of mind I would prefer it to be fully vertical.  As we will see with the TridentX (the 2400 MHz kit), sometimes having a removable top section on the heatsink helps.

JEDEC + XMP Settings

G.Skill

Kit Speed    1333          1600          1866          2133           2400
Subtimings   9-9-9-24 2T   9-9-9-24 2T   9-10-9-28 2T  9-11-10-28 2T  10-12-12-31 2T
Price        $75           $80           $95           $130           $145
XMP          No            Yes           Yes           Yes            Yes
Size         4 x 4 GB      4 x 4 GB      4 x 4 GB      4 x 4 GB       4 x 4 GB

MHz        1333    1600    1867    2134    2401
Voltage    1.500   1.500   1.500   1.650   1.650
tCL        9       9       9       9       10
tRCD       9       9       10      11      12
tRP        9       9       9       10      12
tRAS       24      24      28      28      31
tRC        33      33      37      38      43
tWR        10      12      14      16      16
tRRD       4       5       5       6       7/6
tRFC       107     128     150     171     313
tWTR       5       6       8/7     9/8     10/9
tRTP       5       6       8/7     9/8     10/9
tFAW       20      24      24      25      26
tCWL       -       7       7       7       7
CR         -       2       2       2       2
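
To put these timings into real time, the first-word latency of a read can be estimated as tCL divided by the memory clock (half the effective data rate).  A quick back-of-the-envelope pass over the table above shows why the faster kits still come out ahead despite their looser timings:

    DDR3-1333 CL9 :  9 cycles / 666.7 MHz  = 13.50 ns
    DDR3-1600 CL9 :  9 cycles / 800.0 MHz  = 11.25 ns
    DDR3-1866 CL9 :  9 cycles / 933.3 MHz  =  9.64 ns
    DDR3-2133 CL9 :  9 cycles / 1066.7 MHz =  8.44 ns
    DDR3-2400 CL10: 10 cycles / 1200.0 MHz =  8.33 ns

This simple estimate ignores row activation and command overheads, so treat it as a relative comparison rather than an absolute access time.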

108 Comments

  • nafhan - Thursday, October 18, 2012

    "Random access" means that data can be accessed randomly as opposed to just sequentially. That's it. The term is a relic of an era where sequential storage was the norm.

    Hard drives and CDs are both random access devices, and both are much faster on sequential reads. An example of sequential storage would be a tape backup drive.
  • mmonnin03 - Thursday, October 18, 2012

    RAM is direct access; there is nothing sequential or random about it. Access time is the same anywhere on the module.
    XX reads the same as

    X
    X

    Where X is a piece of data and they are laid out in columns/rows.
    Both are separate commands and incur the same latencies.
  • extide - Thursday, October 18, 2012

    No, you are wrong. Period. nafhan's post is correct.
  • menting - Thursday, October 18, 2012

    no, mmonnin03 is more correct.
    DRAM has the same latency (relatively speaking... it's slightly faster for the bits closer to the address decoder) anywhere in the memory, as defined by the tAA spec for reads. For writes it's not as easy to determine since it's internal, but it can be guessed from the tRC spec.

    The only time DRAM reads can be faster for consecutive accesses, and be considered "sequential", is if you open a row and continue to read all the columns in that row before precharging, because the command sequence would be Activate, Read, Read, Read .... Read, Precharge, whereas "random access" will most likely be Activate, Read, Precharge most of the time.

    The article is misleading in using "sequential reads". There is really no "sequential", because depending on whether you are sequential in row, column, or bank, you get totally different results.
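
To put rough numbers on the distinction menting describes, take the DDR3-1600 CL9 kit from the table on this page, where one cycle is 1.25 ns at the 800 MHz memory clock.  In a simplified model, a read that hits an already open row pays only tCL; a read to a bank with no open row pays tRCD + tCL; and a read that conflicts with a different open row must precharge first, paying tRP + tRCD + tCL:

    Row hit     : tCL              =  9 cycles = 11.25 ns
    Row closed  : tRCD + tCL       = 18 cycles = 22.50 ns
    Row conflict: tRP + tRCD + tCL = 27 cycles = 33.75 ns

That is the roughly 3x spread between best and worst case that comes up later in this thread.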
  • jwilliams4200 - Thursday, October 18, 2012

    I say mmonnin03 is precisely wrong when he claims that "no matter where the data is on the module the access time is the same".

    The read latency can vary by about a factor of three depending on whether the read comes from an already open row or from a different row than the one already open.

    That makes a big difference in total read time, especially if you are reading all the bytes in a page.
  • menting - Friday, October 19, 2012

    no, he is correct.
    If every read has the conditions set up equally (i.e. the parameters are the same, only the address is not), then the access time is the same.

    So if address A is from a row that is already open, the time to read that address is the same as for address B, if B is also from a row that is already open.

    You cannot have a valid comparison if you don't keep the conditions the same between the two addresses. It's almost like saying the latency is different between two reads because they were measured at different PVT corners.
  • jwilliams4200 - Friday, October 19, 2012

    You are also incorrect, as well as highly misleading to anyone who cares about practical matters regarding DRAM latencies.

    Reasonable people are interested in, for example, the fact that reading all the bytes on a DRAM page takes significantly less time than reading the same number of bytes from random locations distributed throughout the DRAM module.

    Reasonable people can easily understand someone calling that difference sequential and random read speeds.

    Your argument is equivalent to saying that no, you did not shoot the guy, the gun shot him, and you are innocent. No reasonable person cares about such specious reasoning.
  • hsir - Friday, October 26, 2012

    jwilliams4200 is absolutely right.

    People who care about practical memory performance worry about the inherent non-uniformity in DRAM access latencies and the factors that prevent efficient DRAM bandwidth utilization. In other words, the row-cycle time (tRC) and pin bandwidth numbers alone are not even remotely sufficient to predict how your DRAM system will perform.

    DRAM access latencies are also significantly impacted by the memory controller's scheduling policy - i.e. how it prioritizes one DRAM request over another. Row-hit maximization policies, write-draining parameters and access type (whether it is a CPU, GPU or DMA request) will all affect latencies and DRAM bandwidth utilization. So sweeping everything under the carpet by saying that every access to DRAM takes the same amount of time is, well, just not right.
  • nafhan - Friday, October 19, 2012

    I was specifically responding to your incorrect definition of "random access". Random access doesn't guarantee anything about timing; it just means you can get to the data out of order.
  • jwilliams4200 - Friday, October 19, 2012

    And yet, by any practical definition, you are incorrect and the author is correct.

    For example, if you read (from RAM) 1GiB of data in sequential order of memory addresses, it will be significantly faster than if you read 1GiB of data, one byte at a time, from randomly selected memory addresses. The latter will usually take two to four times as long (or worse).

    It is not unreasonable to refer to that as the difference between sequential and random reads.

    Your argument reminds me of the little boy who, chastised by his mother for pulling the cat's tail, whined, "I didn't pull the cat's tail, I just held it and the cat pulled."
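
For anyone who wants to see the effect being argued over first-hand, below is a minimal C sketch (not from the article; the buffer size and the shuffled index array are my own choices) that times summing a large buffer in address order versus in a pseudo-random order.  It assumes a POSIX system for clock_gettime and can be built with something like cc -O2 membench.c.  On typical hardware the random pass takes several times longer, due to the row-hit/row-miss behavior discussed above plus CPU cache and prefetch effects:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (64u * 1024 * 1024)   /* 64M longs = 512 MB; shrink if RAM is tight */

    /* Wall-clock time in seconds (POSIX) */
    static double seconds(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        long   *buf = malloc((size_t)N * sizeof *buf);
        size_t *idx = malloc((size_t)N * sizeof *idx);
        if (!buf || !idx) { fprintf(stderr, "out of memory\n"); return 1; }

        for (size_t i = 0; i < N; i++) { buf[i] = (long)i; idx[i] = i; }

        /* Fisher-Yates shuffle: the random pass then touches every element
           exactly once, just in a scrambled order (rand() is crude, but
           fine for a demonstration) */
        srand(42);
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (size_t)rand() % (i + 1);
            size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
        }

        volatile long sink = 0;   /* keeps the compiler from deleting the loops */

        double t0 = seconds();
        for (size_t i = 0; i < N; i++) sink += buf[i];       /* sequential pass */
        double t1 = seconds();
        for (size_t i = 0; i < N; i++) sink += buf[idx[i]];  /* random pass */
        double t2 = seconds();

        printf("sequential: %.3f s\nrandom:     %.3f s\n", t1 - t0, t2 - t1);
        free(buf); free(idx);
        return 0;
    }

Because both loops read exactly the same N elements, any difference in runtime comes from access order alone, which is the point both sides of this thread are circling around.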
