USB 3.0 Backup

Our backup testing takes a typical set of user files – specifically just under 8,000 files totaling 4 GB, with some large files but mostly small ones. For USB 3.0 testing, these files are copied from a 4 GB RAMDisk onto an OCZ Vertex 3 connected via a SATA 6 Gbps to USB 3.0 bridge. We test every USB 3.0 configuration available on our test bed: the ASMedia controller with and without the UASP protocol, and the chipset-driven Intel USB 3.0 ports with and without ASUS' Turbo mode. The copy test is conducted using DiskBench, a copying tool with accurate copy timing.
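
As a rough, at-home approximation of this kind of timed copy (not the actual DiskBench methodology), the Python sketch below walks a source directory, copies every file to a destination and reports elapsed time and throughput; the drive letters and directory names are placeholders rather than our test bed's paths.

```python
import shutil
import time
from pathlib import Path

# Placeholder paths for illustration only: a RAMDisk source and a
# USB 3.0-attached SSD as the destination (adjust to your own setup).
SRC = Path("R:/testset")      # ~8,000 files, ~4 GB total
DST = Path("E:/backup")

def timed_copy(src: Path, dst: Path) -> None:
    files = [p for p in src.rglob("*") if p.is_file()]
    total_bytes = sum(p.stat().st_size for p in files)

    start = time.perf_counter()
    for p in files:
        target = dst / p.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copyfile(p, target)   # raw data copy, no metadata
    elapsed = time.perf_counter() - start

    mib = total_bytes / (1024 * 1024)
    print(f"Copied {len(files)} files ({mib:.0f} MiB) in {elapsed:.1f} s "
          f"({mib / elapsed:.1f} MiB/s)")

if __name__ == "__main__":
    timed_copy(SRC, DST)
```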

USB 3.0 Copy Test, ASMedia + UASP

USB 3.0 Copy Test, ASMedia

USB 3.0 Copy Test, Intel + Turbo

USB 3.0 Copy Test, Intel Chipset

Across the result range, no matter which configuration is used, our copy testing shows up to a 7% decrease in USB 3.0 copy times when moving from DDR3-1333 to DDR3-2133. In some cases, such as Intel Turbo mode, the timing levels out around DDR3-1866, but with UASP the DDR3-2133 C9 kit provides the best result. It is interesting to note that with UASP, a lower CAS latency matters more than a higher rated speed.
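
As a back-of-the-envelope illustration of why a lower CAS latency can outweigh a higher rated speed, the time to the first word of a read burst is roughly CL multiplied by the memory clock period, which is 2000 / (data rate) in nanoseconds since DDR transfers twice per clock. The sketch below uses illustrative CL values for each speed grade rather than the exact kits tested:

```python
# First-word latency estimate: CL cycles at the memory clock,
# where the clock period in ns is 2000 / data_rate (two transfers per clock).
kits = {
    "DDR3-1333 C9":  (1333, 9),    # CL values here are illustrative examples
    "DDR3-1866 C9":  (1866, 9),
    "DDR3-2133 C9":  (2133, 9),
    "DDR3-2400 C10": (2400, 10),
}

for name, (rate_mt_s, cl) in kits.items():
    latency_ns = cl * 2000.0 / rate_mt_s
    print(f"{name}: ~{latency_ns:.2f} ns to first word")
```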

Thunderbolt Backup

Similar to our USB 3.0 backup test, the Thunderbolt test copies the same set of files directly to our Little Big Disk, which contains two 120 GB Intel SSDs in RAID-0. The copy test is conducted using DiskBench, a copying tool with accurate copy timing.

Thunderbolt Copy Test

Thunderbolt tests are never as consistent as USB timing – the results shown are the average of the best three runs obtained. Typically the best results come after leaving the Thunderbolt device idle for 30 seconds or longer after the previous copy test, as the device performs a certain amount of post-processing after the data has officially been sent. Nevertheless, a gradual decrease in copy times is exhibited from DDR3-1333 to DDR3-2400.
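
A minimal sketch of how this run-and-rest methodology could be scripted, assuming a hypothetical run_copy() helper that performs one full copy pass and returns its duration in seconds; only the 30-second idle period and the best-three averaging come from the description above, the rest is illustrative:

```python
import time

def best_three_average(run_copy, runs: int = 5, cooldown_s: int = 30) -> float:
    """Run the copy test several times, idling between runs so the target
    device can finish any post-processing, then average the best three."""
    times = []
    for i in range(runs):
        if i:
            time.sleep(cooldown_s)        # let the Thunderbolt device settle
        elapsed = run_copy()              # one full copy pass, in seconds
        times.append(elapsed)
        print(f"Run {i + 1}: {elapsed:.1f} s")
    best = sorted(times)[:3]
    return sum(best) / len(best)
```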

Comments

  • jwilliams4200 - Friday, October 19, 2012 - link

    You are also incorrect, as well as highly misleading to anyone who cares about practical matters regarding DRAM latencies.

    Reasonable people are interested in, for example, the fact that reading all the bytes on a DRAM page takes significantly less time than reading the same number of bytes from random locations distributed throughout the DRAM module.

    Reasonable people can easily understand someone calling that difference sequential and random read speeds.

    Your argument is equivalent to saying that no, you did not shoot the guy, the gun shot him, and you are innocent. No reasonable person cares about such specious reasoning.
  • hsir - Friday, October 26, 2012 - link

    jwilliams4200 is absolutely right.

    People who care about practical memory performance worry about the inherent non-uniformity in DRAM access latencies and the factors that prevent efficient DRAM bandwidth utilization. In other words, the row-cycle time (tRC) and pin bandwidth numbers alone are not even remotely sufficient to predict how your DRAM system will perform.

    DRAM access latencies are also significantly impacted by the memory controller's scheduling policy - i.e. how it prioritizes one DRAM request over another. Row-hit maximization policies, write-draining parameters and access type (whether it is a CPU/GPU/DMA request) will all affect latencies and DRAM bandwidth utilization. So sweeping everything under the carpet by saying that every access to DRAM takes the same amount of time is, well, just not right.
  • nafhan - Friday, October 19, 2012 - link

    I was specifically responding to your incorrect definition of "random access". Random access doesn't guarantee anything about timing; it just means you can get to the data out of order.
  • jwilliams4200 - Friday, October 19, 2012 - link

    And yet, by any practical definition, you are incorrect and the author is correct.

    For example, if you read (from RAM) 1GiB of data in sequential order of memory addresses, it will be significantly faster than if you read 1GiB of data, one byte at a time, from randomly selected memory addresses. The latter will usually take two to four times as long (or worse).

    It is not unreasonable to refer to that as the difference between sequential and random reads.

    Your argument reminds me of the little boy who, chastised by his mother for pulling the cat's tail, whined, "I didn't pull the cat's tail, I just held it and the cat pulled."
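
As a rough way to observe the effect described above, the following sketch (illustrative only, not the commenter's code) compares summing a large NumPy array in address order against gathering the same elements through a random permutation of indices; the array size is an arbitrary assumption.

```python
import time

import numpy as np

N = 64 * 1024 * 1024                 # 64 Mi float64 values, ~512 MiB
data = np.ones(N)
order = np.random.permutation(N)     # random access pattern

start = time.perf_counter()
seq_sum = data.sum()                 # sequential walk through memory
seq_time = time.perf_counter() - start

start = time.perf_counter()
rnd_sum = data[order].sum()          # gather from randomly ordered addresses
rnd_time = time.perf_counter() - start

print(f"sequential: {seq_time:.3f} s, random: {rnd_time:.3f} s "
      f"({rnd_time / seq_time:.1f}x slower)")
```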
  • jwilliams4200 - Thursday, October 18, 2012 - link

    Depending on whether there is a page-hit (row needed already open), page-empty (row needed not yet open), or page-miss (row needed is not the row already open), the time to read a word can vary by a factor of 3 times (i.e., 1x latency for a page-hit, 2x latency for a page-empty, and 3x latency for a page-miss).

    What the author refers to as a "sequential read" probably refers to reading from an already open page (page-hit).

    While his terminology may be ambiguous (and his computation for the "sequential read" is incorrect, it should be 4 clocks), he is nevertheless talking about a meaningful concept related to variation in latency in DRAM for different types of reads.

    See here for more detail:

    http://www.anandtech.com/show/3851/everything-you-...
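
To put rough numbers on that 1x/2x/3x relationship, here is a small illustrative calculation (ours, not the commenter's), assuming symmetric 9-9-9 timings (tCAS = tRCD = tRP) on a DDR3-1866 module:

```python
# Approximate access latency for the three page states described above,
# assuming 9-9-9 timings (tCAS = tRCD = tRP = 9) at DDR3-1866.
t_cas = t_rcd = t_rp = 9                  # in memory-clock cycles
cycle_ns = 2000.0 / 1866                  # DDR3-1866: memory clock = 933 MHz

page_hit   = t_cas                        # row already open
page_empty = t_rcd + t_cas                # row must be opened first
page_miss  = t_rp + t_rcd + t_cas         # wrong row open: close, open, read

for name, cycles in [("page-hit", page_hit),
                     ("page-empty", page_empty),
                     ("page-miss", page_miss)]:
    print(f"{name}: {cycles} cycles ~ {cycles * cycle_ns:.1f} ns")
```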
  • Shadow_k - Thursday, October 18, 2012 - link

    My knowledge of RAM has increased 10-fold. Very nice article, well done.
  • losttsol - Thursday, October 18, 2012 - link

    2133MHz "Recommended for Deeper Pockets"???

    Not really. DDR3 is so cheap now that high end RAM is affordable for all. I would have said you were crazy a few years ago if you told me soon I could buy 16GB of RAM for less than $150.
  • IanCutress - Thursday, October 18, 2012 - link

    Either pay $95 for 1866 C9 or $130 for 2133 C9 - minor differences, but $35 saving. This is strictly talking about the kits used today, there could be other price differences. But I stand by my recommendation - for the vast majority of cases 1866 C9 will be fine, and there is a minor performance gain in some scenarios with 2133 C9, but at a $35 difference it is hard to justify unless you have some spare budget. Most likely that budget could be put into a bigger SSD or GPU.

    Ian
  • just4U - Friday, October 19, 2012 - link

    Something has to be said for the TridentX brand, I believe, since it is getting some pretty killer feedback. It's simply the best RAM out there, able to do all that any other RAM can plus that little bit extra. I don't see the speed increase as a selling point, but the lower timings at conventional speeds that users are reporting are interesting. I haven't tried it though; I'm just going on what I've read. Shame about the size of the heatsinks though, which makes them problematic in some builds.
  • Peanutsrevenge - Friday, October 19, 2012 - link

    You clearly live in some protected bubble where everyone has well paid jobs and isn't on a shoestring budget.

    I would so LMAO when you get mugged by someone struggling to feed themselves because you're all flash with your cash.
