Synthetic testing has a way of elevating what may be a minor difference between hardware configurations into a larger-than-life comparison, even when the effect on real-world use of the system is minimal.  There are several benchmarks that straddle the line between synthetic and real world (such as Cinebench and SPECviewperf) which we include here, plus a couple that users at home can run to compare their own memory settings.

SPECviewperf

The mix of real-world and synthetic benchmarks does not get more complex than SPECviewperf – a benchmarking tool designed to test various capabilities of several modern 3D rendering packages.  Each of these rendering programs comes with its own coding practices, and as such can be memory bound, CPU bound or GPU bound.  In our testing, we use the standard benchmark on the IGP and report the results for comparison.

Each of these tools uses different methods to compute and display information.  Some are highly optimized to be less taxing on the system, and some are optimized to use less memory.  All of the tests benefit in some way from moving from DDR3-1333 to DDR3-2400, although some by as little as 2%.  The biggest gain was in Maya, where a 22% increase was observed.
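To make the distinction concrete, below is a minimal C sketch (our own illustration, not part of SPECviewperf or of our test suite; the array size and constants are arbitrary) contrasting a memory-bound loop with a compute-bound one.

/* A minimal sketch (not part of SPECviewperf) illustrating memory-bound
   versus compute-bound behaviour.  The first loop has to stream a 256 MB
   array out of DRAM, so its runtime is sensitive to memory bandwidth; the
   second performs a similar number of floating-point operations but never
   leaves the core's registers, so memory speed barely matters to it. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (32u * 1024 * 1024)   /* 32M doubles = 256 MB, far larger than any CPU cache */

int main(void) {
    double *a = malloc(N * sizeof(double));
    if (!a) return 1;
    for (size_t i = 0; i < N; i++) a[i] = 1.0;

    /* Memory-bound: four independent accumulators keep the FP units busy,
       so the limit becomes how fast DRAM can deliver the array. */
    clock_t t0 = clock();
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (size_t i = 0; i < N; i += 4) {
        s0 += a[i]; s1 += a[i + 1]; s2 += a[i + 2]; s3 += a[i + 3];
    }
    double sum = s0 + s1 + s2 + s3;
    clock_t t1 = clock();

    /* Compute-bound: a similar amount of work, but on values held in registers. */
    double x = 1.0;
    for (size_t i = 0; i < N; i++) x = x * 1.000000001 + 1e-9;
    clock_t t2 = clock();

    printf("memory-bound loop : %.3f s (sum = %.0f)\n", (double)(t1 - t0) / CLOCKS_PER_SEC, sum);
    printf("compute-bound loop: %.3f s (x = %f)\n", (double)(t2 - t1) / CLOCKS_PER_SEC, x);
    free(a);
    return 0;
}

A renderer dominated by loops like the first will scale with memory speed; one dominated by loops like the second will not, which is one reason the individual tests respond so differently to the jump from DDR3-1333 to DDR3-2400.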

Cinebench x64

A long-time favourite of synthetic benchmarkers the world over is Cinebench, software designed to test real-world rendering workloads on the CPU or GPU.  In this circumstance we test CPU single-core and multi-core performance, as well as GPU performance using a single GTX 580 at x16 PCIe 2.0 bandwidth.  Any serial portions of the workload have to be processed through the CPU, and as such memory access speed will either slow down or speed up the benchmark.

Cinebench - CPU

Cinebench - OpenGL

In terms of CPU performance in Cinebench, the boost from faster memory is almost negligible; moving from DDR3-1333 to DDR3-2133 gives the largest improvement, at about 1.5%.

Comments

  • frozentundra123456 - Thursday, October 18, 2012 - link

    While interesting from a theoretical standpoint, I would have been more interested in a comparison of laptops using HD4000 vs A10, to see if one is more dependent on fast memory than the other. To be blunt, I don't really care much about the IGP on a 3770K. It would have been a more interesting comparison in laptops, where the IGP might actually be used for gaming. I guess it might have been more difficult to do with changing memory around so much in a laptop, though.

    The other thing is that I would have liked to see the difference in games at playable frame rates. Does it really matter if you get 5.5 or 5.9 fps? It is a slideshow anyway. My interest is in whether using higher-speed memory could have moved a game from unplayable to playable at a particular setting, or allowed moving up to higher settings in a game that was already playable.
  • mmonnin03 - Thursday, October 18, 2012 - link

    RAM by definition is Random Access, which means no matter where the data is on the module, the access time is the same. It doesn't matter if two bytes are on the same row, on a different bank, or on a different chip on the module; the access time is the same. There is no sequential or random difference with RAM. The only difference between the different rated sticks is short/long reads, not random or sequential, and any reference to random/sequential reads should be removed.
  • Olaf van der Spek - Thursday, October 18, 2012 - link

    You're joking right? :p
  • mmonnin03 - Thursday, October 18, 2012 - link

    Well if the next commenter below says their memory knowledge went up by 10x they probably believe RAM reads are different depending on whether they are random or sequential.
  • nafhan - Thursday, October 18, 2012 - link

    "Random access" means that data can be accessed randomly as opposed to just sequentially. That's it. The term is a relic of an era where sequential storage was the norm.

    Hard drives and CDs are both random access devices, and they are both much faster on sequential reads. An example of sequential storage would be a tape backup drive.
  • mmonnin03 - Thursday, October 18, 2012 - link

    RAM is direct access, no sequential or randomness about it. Access time is the same anywhere on the module.
    XX reads the same as

    X
    X

    Where X is a piece of data and they are laid out in columns/rows.
    Both are separate commands and incur the same latencies.
  • extide - Thursday, October 18, 2012 - link

    No, you are wrong. Period. nafhan's post is correct.
  • menting - Thursday, October 18, 2012 - link

    no, mmonnin03 is more correct.
    DRAM has the same latency (relatively speaking.. it's faster by a little for the bits closer to the address decoder) for anywhere in the memory, as defined by the tAA spec for reads. For writes it's not as easy to determine since it's internal, but can be guessed from the tRC spec.

    The only time that DRAM reads can be faster for consecutive reads, and considered "sequential" is if you open a row, and continue to read all the columns in that row before precharging, because the command would be Activate, Read, Read, Read .... Read, Precharge, whereas a "random access" will most likely be Activate, Read, Precharge most of the time.

    The article is misleading in using "sequential reads". There is really no "sequential", because depending on whether you are sequential in row, column, or bank, you get totally different results.
  • jwilliams4200 - Thursday, October 18, 2012 - link

    I say mmonnin03 is precisely wrong when he claims that "no matter where the data is on the module the access time is the same".

    The read latency can vary by about a factor of 3 depending on whether the read is from an already open row, or whether the desired data comes from a different row than the one already open.

    That makes a big difference in total read time, especially if you are reading all the bytes in a page.
  • menting - Friday, October 19, 2012 - link

    no. he is correct.
    if every read has the conditions set up equally (i.e. the parameters are the same, only the address is not), then the access time is the same.

    so if address A is from a row that is already open, the time to read that address is the same as for address B, if B is from a row that is already open

    you cannot have a valid comparison if you don't keep the conditions the same between 2 addresses. It's almost like saying the latency is different between 2 reads because they were measured at different PVT corners.
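For readers trying to square the last few comments, a small worked example helps.  The sketch below uses illustrative DDR3-1600 11-11-11 timings (example figures, not measurements from this review) to show where a roughly 3x gap between a row hit and a row miss comes from.

/* Illustrative only: approximate DRAM read latency for a row hit versus a
   row miss, using example DDR3-1600 11-11-11 timings.  A read that lands in
   an already-open row pays only the CAS latency (tCL); a read to a different
   row in the same bank must first precharge (tRP) and activate (tRCD). */
#include <stdio.h>

int main(void) {
    const double clock_ns = 1.25;              /* DDR3-1600: 800 MHz I/O clock -> 1.25 ns per cycle */
    const int tCL = 11, tRCD = 11, tRP = 11;   /* example 11-11-11 timings, in clock cycles */

    double hit_ns  = tCL * clock_ns;                 /* row already open: CAS latency only, ~13.75 ns */
    double miss_ns = (tRP + tRCD + tCL) * clock_ns;  /* close old row, open new row, then read, ~41.25 ns */

    printf("row hit : %.2f ns\n", hit_ns);
    printf("row miss: %.2f ns (%.1fx the hit)\n", miss_ns, miss_ns / hit_ns);
    return 0;
}

Both sides of the thread are describing the same hardware: with identical conditions (both reads hitting an open row, or both missing) the latency is the same for any address, but a stream of reads that stays within one open row avoids the precharge and activate penalties that a scattered access pattern keeps paying.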
