Metro2033

Metro2033 is a DX11 game that challenges every system that tries to run it at high-end settings.  Developed by 4A Games and released in March 2010, it includes a built-in DirectX 11 Frontline benchmark, which we use to test the hardware at 1920x1080 with full graphical settings.  Results are given as the average frame rate over four runs.

Metro2033 IGP, 1920x1080, All except PhysX

While comparing results in the 5 FPS range may not seem appropriate, running at these settings taxes the system to its fullest, exposing whether memory actually makes a difference at this high end or whether we are limited by computation.  What we do see is a gradual increase in frame rate with each kit, with up to a 10% difference between the top-end kit and the bottom kit.  The pivotal point of increase is from 1333 to 1866; beyond 1866 the gains are smaller despite the increased cost of those kits.

Civilization V

Civilization V is a strategy video game that uses a significant number of the latest GPU features and software advances.  Using the in-game benchmark, we run Civilization V at 1920x1080 with full graphical settings, similar to how Ryan runs it in his GPU testing.  The benchmark reports the total number of frames rendered in sixty seconds, which we normalize to frames per second.
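
The normalization is a straightforward division of the reported frame count by the sixty-second benchmark window; a minimal sketch, using a hypothetical frame count rather than any figure from the charts, would be:

```python
def frames_to_fps(total_frames, window_seconds=60):
    """Convert a frame count over a fixed benchmark window to frames per second."""
    return total_frames / window_seconds

# Hypothetical example: a run reporting 322 frames over the 60-second benchmark
print(f"{frames_to_fps(322):.2f} FPS")  # 5.37 FPS (illustrative numbers only)
```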

Civilization V IGP, 1920x1080 High Settings

In comparison to Metro2033, Civilization V does not show as large a percentage increase from faster memory, moving from a 3% gain up to 6.7% as we move up the memory kits.  Again, we run this test with all the eye candy enabled to stress the CPU and IGP as much as we can and find out where faster memory will help.

Dirt 3

Dirt 3 is a rally racing video game and the third entry in the Dirt series of the Colin McRae Rally franchise, developed and published by Codemasters.  Using the in-game benchmark, we run Dirt 3 at 1920x1080 with Ultra Low graphical settings.  Results are reported as the average frame rate across four runs.

Dirt 3 IGP, 1920x1080, Ultra Low Settings

In contrast to our previous tests, we run this one at 1080p with ultra-low graphical settings.  This gives more playable frame rates, with the focus on processing pixels rather than on post-processing effects.  In previous testing on the motherboard side, we have seen that Dirt 3 seems to love every form of speed increase possible: CPU speed, GPU speed, and, as we can see here, memory speed.  Almost every upgrade to the system will give a better frame rate.  Moving from 1333 to 1600 gives almost a 10% FPS increase, whereas 1333 to 1866 gives just under 15%.  We peak at 15% with the 2133 kit, which reinforces the idea that choosing a 1600 C9 kit over a 1333 C9 kit is a no-brainer for the price difference.  Choosing that 1866 C9 kit looks like a good idea, but the 2133 C9 kit is running into diminishing returns.
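
The percentage gains quoted above follow from a simple comparison against the 1333 baseline; a minimal sketch, with illustrative frame rates rather than the actual chart figures, would be:

```python
def percent_gain(fps_base, fps_new):
    """Percentage frame-rate increase of a kit over the baseline kit."""
    return (fps_new - fps_base) / fps_base * 100.0

# Illustrative numbers only; the measured figures are in the chart above.
baseline_1333 = 40.0
for kit, fps in [("1600 C9", 44.0), ("1866 C9", 45.8), ("2133 C9", 46.0)]:
    print(f"{kit}: +{percent_gain(baseline_1333, fps):.1f}% over 1333 C9")
```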


108 Comments


  • nafhan - Thursday, October 18, 2012 - link

    "Random access" means that data can be accessed randomly as opposed to just sequentially. That's it. The term is a relic of an era where sequential storage was the norm.

    Hard drives and CDs are both random access devices, and they are both much faster on sequential reads. An example of sequential storage would be a tape backup drive.
  • mmonnin03 - Thursday, October 18, 2012 - link

    RAM is direct access, no sequential or randomness about it. Access time is the same anywhere on the module.
    XX reads the same as

    X
    X

    Where X is a piece of data and they are laid out in columns/rows.
    Both are separate commands and incur the same latencies.
  • extide - Thursday, October 18, 2012 - link

    No, you are wrong. Period. nafhan's post is correct.
  • menting - Thursday, October 18, 2012 - link

    no, mmonnin03 is more correct.
    DRAM has the same latency (relatively speaking... it's faster by a little for the bits closer to the address decoder) for anywhere in the memory, as defined by the tAA spec for reads. For writes it's not as easy to determine since it's internal, but it can be guessed from the tRC spec.

    The only time that DRAM reads can be faster for consecutive reads, and considered "sequential" is if you open a row, and continue to read all the columns in that row before precharging, because the command would be Activate, Read, Read, Read .... Read, Precharge, whereas a "random access" will most likely be Activate, Read, Precharge most of the time.

    The article is misleading in its use of "sequential reads". There is really no "sequential", because depending on whether you are sequential in row, column, or bank, you get totally different results.
  • jwilliams4200 - Thursday, October 18, 2012 - link

    I say mmonnin03 is precisely wrong when he claims that "no matter where the data is on the module the access time is the same".

    The read latency can vary by about a factor of 3, depending on whether the read comes from an already open row or from a different row than the one currently open.

    That makes a big difference in total read time, especially if you are reading all the bytes in a page.
  • menting - Friday, October 19, 2012 - link

    no. he is correct.
    if every read has the conditions set up equally (i.e. the parameters are the same, only the address is not), then the access time is the same.

    so if address A is from a row that is already open, the time to read that address is the same as for address B, if B is also from a row that is already open.

    you cannot have a valid comparison if you don't keep the conditions the same between 2 addresses. It's almost like saying the latency is different between 2 reads because they were measured at different PVT corners.
  • jwilliams4200 - Friday, October 19, 2012 - link

    You are also incorrect, as well as highly misleading to anyone who cares about practical matters regarding DRAM latencies.

    Reasonable people are interested in, for example, the fact that reading all the bytes on a DRAM page takes significantly less time than reading the same number of bytes from random locations distributed throughout the DRAM module.

    Reasonable people can easily understand someone calling that difference sequential and random read speeds.

    Your argument is equivalent to saying that no, you did not shoot the guy, the gun shot him, and you are innocent. No reasonable person cares about such specious reasoning.
  • hsir - Friday, October 26, 2012 - link

    jwilliams4200 is absolutely right.

    People who care about practical memory performance worry about the inherent non-uniformity in DRAM access latencies and the factors that prevent efficient DRAM bandwidth utilization. In other words, the row-cycle time (tRC) and the pin bandwidth numbers alone are not even remotely sufficient to predict how your DRAM system will perform.

    DRAM access latencies are also significantly impacted by the memory controller's scheduling policy, i.e. how it prioritizes one DRAM request over another. Row-hit maximization policies, write-draining parameters, and access type (whether it is a CPU/GPU/DMA request) will all affect latencies and DRAM bandwidth utilization. So sweeping everything under the carpet by saying that every access to DRAM takes the same amount of time is, well, just not right.
  • nafhan - Friday, October 19, 2012 - link

    I was specifically responding to your incorrect definition of "random access". Random access doesn't guarantee timing; it just means you can get to the data out of order.
  • jwilliams4200 - Friday, October 19, 2012 - link

    And yet, by any practical definition, you are incorrect and the author is correct.

    For example, if you read (from RAM) 1GiB of data in sequential order of memory addresses, it will be significantly faster than if you read 1GiB of data, one byte at a time, from randomly selected memory addresses. The latter will usually take two to four times as long (or worse).

    It is not unreasonable to refer to that as the difference between sequential and random reads.

    Your argument reminds me of the little boy who, chastised by his mother for pulling the cat's tail, whined, "I didn't pull the cat's tail, I just held it and the cat pulled."
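
To put some numbers behind the sequential-versus-random distinction argued in this thread, a minimal sketch along the following lines can be used; the buffer size, the use of NumPy, and the exact ratio observed are all illustrative and will vary by platform:

```python
import time
import numpy as np

n = 1 << 26                            # ~64M int64 elements, roughly 512 MiB
data = np.arange(n, dtype=np.int64)

seq_idx = np.arange(n)                 # the same addresses, visited in order
rand_idx = np.random.permutation(n)    # the same addresses, visited in a random order

t0 = time.perf_counter()
data[seq_idx].sum()                    # sequential gather and sum
t1 = time.perf_counter()
data[rand_idx].sum()                   # random gather and sum
t2 = time.perf_counter()

print(f"sequential: {t1 - t0:.3f} s   random: {t2 - t1:.3f} s")
```

On most systems the randomized pass takes noticeably longer than the sequential one, which is the gap being described above, whatever label one attaches to it.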
