Moving up through the memory speeds from 1333 to 1600 to 1866, the next stop is DDR3-2133. DDR3-2133 will be the next checkpoint for processors to support by default in the future, and as a result there is a price premium on all memory kits at and above this mark. In our case, the G.Skill F3-17000CL9Q-16GBZH comes in at $130, some $35 more than the DDR3-1866 kit. That is quite a hefty chunk: a 37% increase in price for only a 14.3% increase in frequency. As we will see in the memory benchmarks later, the 2133 MHz kit does offer improvements over the 1866 kit, but not by as much as 37%.
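
For anyone who wants to check those figures, the premium works out as follows (a rough back-of-the-envelope sketch in Python, using the prices and rated speeds quoted above; purely illustrative):

    # Price premium vs. frequency gain: DDR3-2133 kit over the DDR3-1866 kit
    price_1866, price_2133 = 95, 130      # USD, as listed in the table below
    speed_1866, speed_2133 = 1866, 2133   # effective data rate in MT/s (commonly quoted as "MHz")

    price_premium = (price_2133 - price_1866) / price_1866 * 100   # ~36.8%
    speed_gain    = (speed_2133 - speed_1866) / speed_1866 * 100   # ~14.3%

    print(f"Price premium: {price_premium:.1f}%, frequency gain: {speed_gain:.1f}%")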

Visual Inspection

In the land of G.Skill kit naming, RipjawsZ is the last step in the Ripjaws line before we hit Trident. The Ripjaws naming scheme was devised in anticipation of the Sandy Bridge and Sandy Bridge-E processor lineups, where the majority of processors can achieve the speeds of all of the Ripjaws kits. The look of the RipjawsZ kits is less edge-driven than the RipjawsX, with a rounded module end, but the heatsink is bulkier, with its top edge still sitting ~1 cm above the module. This can cause clearance issues when paired with large CPU coolers, even though the processors this kit targets are exactly the ones likely to sit under such coolers.

Again, the test with this module in a large heatsink environment gives us the following:

JEDEC + XMP Settings

G.Skill
Kit Speed   1333            1600            1866            2133            2400
Subtimings  9-9-9-24 2T     9-9-9-24 2T     9-10-9-28 2T    9-11-10-28 2T   10-12-12-31 2T
Price       $75             $80             $95             $130            $145
XMP         No              Yes             Yes             Yes             Yes
Size        4 x 4 GB        4 x 4 GB        4 x 4 GB        4 x 4 GB        4 x 4 GB

MHz         1333            1600            1867            2134            2401
Voltage     1.500           1.500           1.500           1.650           1.650
tCL         9               9               9               9               10
tRCD        9               9               10              11              12
tRP         9               9               9               10              12
tRAS        24              24              28              28              31
tRC         33              33              37              38              43
tWR         10              12              14              16              16
tRRD        4               5               5               6               7/6
tRFC        107             128             150             171             313
tWTR        5               6               8/7             9/8             10/9
tRTP        5               6               8/7             9/8             10/9
tFAW        20              24              24              25              26
tCWL        -               7               7               7               7
CR          -               2               2               2               2
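
One way to put these subtimings in context is to convert them from clock cycles into absolute time; a higher CAS latency at a higher data rate can still mean a lower latency in nanoseconds. A minimal sketch (kit speeds and tCL values taken from the table above; it assumes tCL is counted in bus-clock cycles, with the bus clock being half the DDR data rate):

    # Absolute CAS latency (ns) = tCL / bus clock = tCL * 2000 / data_rate_in_MTs
    kits = {1333: 9, 1600: 9, 1866: 9, 2133: 9, 2400: 10}   # data rate -> tCL, from the table above

    for rate, tcl in kits.items():
        cas_ns = tcl * 2000 / rate
        print(f"DDR3-{rate} CL{tcl}: {cas_ns:.2f} ns")
    # DDR3-1333 CL9 ~ 13.50 ns, DDR3-2133 CL9 ~ 8.44 ns, DDR3-2400 CL10 ~ 8.33 ns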

 

Comments

  • frozentundra123456 - Thursday, October 18, 2012 - link

    While interesting from a theoretical standpoint, I would have been more interested in a comparison of laptops using HD 4000 vs A10 to see if one is more dependent on fast memory than the other. To be blunt, I don't really care much about the IGP on a 3770K. It would have been a more interesting comparison in laptops, where the IGP might actually be used for gaming. I guess it would have been more difficult to do with changing the memory around so much in a laptop, though.

    The other thing is that I would have liked to see the difference in games at playable frame rates. Does it really matter if you get 5.5 or 5.9 fps? It is a slideshow either way. My interest is whether using higher-speed memory could move a game from unplayable to playable at a particular setting, or allow moving up to higher settings in a game that was already playable.
  • mmonnin03 - Thursday, October 18, 2012 - link

    RAM by definition is random access, which means that no matter where the data is on the module, the access time is the same. It doesn't matter if two bytes are on the same row, in a different bank, or on a different chip on the module; the access time is the same. There is no sequential or random difference with RAM. The only difference between the different rated sticks is short/long reads, not random or sequential, and any reference to random/sequential reads should be removed.
  • Olaf van der Spek - Thursday, October 18, 2012 - link

    You're joking right? :p
  • mmonnin03 - Thursday, October 18, 2012 - link

    Well if the next commenter below says their memory knowledge went up by 10x they probably believe RAM reads are different depending on whether they are random or sequential.
  • nafhan - Thursday, October 18, 2012 - link

    "Random access" means that data can be accessed randomly as opposed to just sequentially. That's it. The term is a relic of an era where sequential storage was the norm.

    Hard drives and CD's are both random access devices, and they are both much faster on sequential reads. An example of sequential storage would be a tape backup drive.
  • mmonnin03 - Thursday, October 18, 2012 - link

    RAM is direct access; there is nothing sequential or random about it. Access time is the same anywhere on the module.
    XX reads the same as

    X
    X

    where X is a piece of data and they are laid out in columns/rows.
    Both are separate commands and incur the same latencies.
  • extide - Thursday, October 18, 2012 - link

    No, you are wrong. Period. nafhan's post is correct.
  • menting - Thursday, October 18, 2012 - link

    No, mmonnin03 is more correct.
    DRAM has the same latency (relatively speaking... it's faster by a little for the bits closer to the address decoder) anywhere in the memory, as defined by the tAA spec for reads. For writes it's not as easy to determine since it's internal, but it can be guessed from the tRC spec.

    The only time that DRAM reads can be faster for consecutive reads, and be considered "sequential", is if you open a row and continue to read all the columns in that row before precharging, because the command sequence would be Activate, Read, Read, Read ... Read, Precharge, whereas a "random access" will most likely be Activate, Read, Precharge most of the time.

    The article is misleading in its use of "sequential reads". There is really no "sequential", because depending on whether you are sequential in the row, column, or bank, you get totally different results.
  • jwilliams4200 - Thursday, October 18, 2012 - link

    I say mmonnin03 is precisely wrong when he claims that "no matter where the data is on the module the access time is the same".

    The read latency can vary by about a factor of 3 depending on whether the read is from an already open row, or whether the desired data comes from a different row than the one already open.

    That makes a big difference in total read time, especially if you are reading all the bytes in a page.
  • menting - Friday, October 19, 2012 - link

    No, he is correct.
    If every read has the conditions set up equally (i.e. the parameters are the same, only the address is not), then the access time is the same.

    So if address A is from a row that is already open, the time to read that address is the same as for address B, if B is also from a row that is already open.

    You cannot have a valid comparison if you don't keep the conditions the same between the two addresses. It's almost like saying the latency is different between two reads because they were measured at different PVT corners.
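
For readers following the row-hit versus row-miss debate above, the gap the commenters are describing can be sketched with a first-order latency model. This is illustrative only: it uses the DDR3-1600 CL9-9-9 timings from the table earlier on this page and ignores everything a real memory controller does (command scheduling, bank parallelism, refresh, and so on):

    # First-order DRAM access latency; cycles are memory-bus clock cycles (800 MHz for DDR3-1600)
    tCL, tRCD, tRP = 9, 9, 9          # DDR3-1600 column from the timing table above
    clock_ns = 2000 / 1600            # one bus clock period: 1.25 ns

    row_hit      = tCL                # row already open: just the column (CAS) access
    row_closed   = tRCD + tCL         # bank idle: activate the row, then read
    row_conflict = tRP + tRCD + tCL   # wrong row open: precharge, activate, then read

    for name, cycles in [("row hit", row_hit), ("row closed", row_closed), ("row conflict", row_conflict)]:
        print(f"{name}: {cycles} cycles = {cycles * clock_ns:.2f} ns")
    # row hit ~11.25 ns vs row conflict ~33.75 ns -- roughly the factor of 3 mentioned above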
