Professional Performance: Windows

Agisoft Photoscan – 2D to 3D Image Manipulation: link

Agisoft Photoscan creates 3D models from 2D images, a process that is very computationally expensive. The algorithm is split into four distinct phases, and the different phases of model reconstruction variously benefit from fast memory, high IPC, more cores, or even OpenCL compute devices. Agisoft supplied us with a special version of the software to script the process, in which we take 50 images of a stately home and convert them into a medium-quality model. This benchmark typically takes around 15-20 minutes on a high-end PC on the CPU alone, with GPUs reducing the time.

Agisoft Photoscan 1.0.0

Photoscan, on paper, offers more opportunity for faster memory to make a difference. However, it would seem that the most memory-dependent stage (stage 3) is actually a small part of the overall calculation and was absorbed by the natural variation in the larger stages, giving at most a 1.1% difference between times.

Cinebench R15

Cinebench R15 - Single Thread

Cinebench R15 - MultiThread

Cinebench is historically CPU dependent, giving a 2% difference from JEDEC to peak results.

3D Particle Movement

3DPM is a self-penned benchmark that takes the basic 3D movement algorithms used in Brownian-motion simulations and tests them for speed. High floating-point performance, frequency, and IPC win in the single-threaded version, whereas the multithreaded version also has to manage its threads and benefits from more cores.

3D Particle Movement: Single Threaded

3D Particle Movement: MultiThreaded

3DPM is also relatively memory agnostic for DDR4 on Haswell-E, showing that DDR4-2133 is good enough.
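The kind of kernel 3DPM times can be illustrated with a short sketch. This is our own toy Python reconstruction of a Brownian-motion step loop (function names and parameters are ours, not 3DPM's actual code):

```python
import math
import random
import time

def brownian_walk(particles: int, steps: int, seed: int = 0) -> list:
    """Move each particle `steps` times along a random unit direction in 3D,
    the same basic floating-point kernel a 3DPM-style benchmark times."""
    rng = random.Random(seed)
    positions = []
    for _ in range(particles):
        x = y = z = 0.0
        for _ in range(steps):
            # pick a random direction on the unit sphere (polar coordinates)
            theta = rng.uniform(0.0, math.pi)
            phi = rng.uniform(0.0, 2.0 * math.pi)
            x += math.sin(theta) * math.cos(phi)
            y += math.sin(theta) * math.sin(phi)
            z += math.cos(theta)
        positions.append((x, y, z))
    return positions

if __name__ == "__main__":
    start = time.perf_counter()
    brownian_walk(particles=1000, steps=1000)
    elapsed = time.perf_counter() - start
    print(f"{1000 * 1000 / elapsed:,.0f} particle-steps/sec")
```

Because every step stays inside registers and the FPU, throughput here tracks frequency and IPC rather than DRAM speed, which matches the flat results above.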

Professional Performance: Linux

Built around several freely available benchmarks for Linux, Linux-Bench is a project spearheaded by Patrick at ServeTheHome to streamline about a dozen of these tests in a single neat package run via a set of three commands using an Ubuntu 14.04 LiveCD. These tests include fluid dynamics used by NASA, ray-tracing, molecular modeling, and a scalable data structure server for web deployments. We run Linux-Bench and have chosen to report a select few of the tests that rely on CPU and DRAM speed.

C-Ray: link

C-Ray is a simple ray-tracing program that focuses almost exclusively on processor performance rather than DRAM access. The test in Linux-Bench renders a complex, compute-heavy scene, offering a large, scalable scenario.

Linux-Bench c-ray 1.1 (Hard)

Natural variation accounts for the 4% spread here, although the faster, denser memory actually gave slower times.
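To show why C-Ray is so insensitive to DRAM, consider the arithmetic at the heart of any ray tracer: a ray-sphere intersection is a handful of multiplies, adds, and a square root, all of which fit in registers. The sketch below is our own minimal illustration, not C-Ray's actual code:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance of a ray with a sphere,
    or None on a miss -- the core per-pixel arithmetic of a ray tracer."""
    # Solve |o + t*d - c|^2 = r^2 for t (quadratic; d assumed unit length)
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

if __name__ == "__main__":
    # fire a 64x64 grid of parallel rays at a unit sphere and count hits
    hits = sum(
        ray_sphere((x / 32 - 1, y / 32 - 1, -5), (0, 0, 1), (0, 0, 0), 1.0)
        is not None
        for y in range(64) for x in range(64)
    )
    print(f"{hits} of 4096 rays hit the sphere")
```

Each pixel is independent, which is also why the workload scales so cleanly across cores.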

NAMD, Scalable Molecular Dynamics: link

Developed by the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana-Champaign, NAMD is a set of parallel molecular dynamics codes for extreme parallelization, up to and beyond 200,000 cores. The reference paper detailing NAMD has over 4,000 citations, and our testing runs a small simulation where calculation steps per unit time is the output metric.

Linux-Bench NAMD Molecular Dynamics

NAMD showed little difference between our memory kits, peaking at 0.7% above JEDEC.
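NAMD itself is a large parallel C++ code, but the steps-per-unit-time metric it reports comes from a simple loop structure: integrate forces, advance positions, repeat. As a toy illustration (entirely our own sketch, a 1D Lennard-Jones chain with velocity-Verlet integration, not NAMD code):

```python
import time

def lj_force(r2: float, eps: float = 1.0, sigma: float = 1.0) -> float:
    """Lennard-Jones force divided by distance, for squared distance r2."""
    sr2 = (sigma * sigma) / r2
    sr6 = sr2 ** 3
    return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r2

def compute_forces(pos):
    """All-pairs forces on a 1D chain of particles."""
    n = len(pos)
    f = [0.0] * n
    for i in range(n):
        for j in range(i + 1, n):
            dx = pos[j] - pos[i]
            fij = lj_force(dx * dx) * dx
            f[i] -= fij
            f[j] += fij
    return f

def md_run(pos, vel, dt=0.001, steps=200):
    """Velocity-Verlet integration; returns final state and steps/sec,
    the same figure of merit NAMD reports."""
    f = compute_forces(pos)
    start = time.perf_counter()
    for _ in range(steps):
        # half-kick, drift, recompute forces, half-kick
        vel = [v + 0.5 * dt * fi for v, fi in zip(vel, f)]
        pos = [p + dt * v for p, v in zip(pos, vel)]
        f = compute_forces(pos)
        vel = [v + 0.5 * dt * fi for v, fi in zip(vel, f)]
    return pos, vel, steps / (time.perf_counter() - start)
```

The force loop dominates and works on a small, cache-resident state, which is consistent with the minimal DRAM sensitivity seen above.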

NPB, Fluid Dynamics: link

Aside from LINPACK, there are many other ways to benchmark supercomputers in terms of how effective they are for various types of mathematical processes. The NAS Parallel Benchmarks (NPB) are a set of small programs originally designed for NASA to test their supercomputers in terms of fluid dynamics simulations, useful for airflow reactions and design.

Linux-Bench NPB Fluid Dynamics

Despite the 4x8 GB results dipping below the rest, the faster memory does give a slight difference in NPB, peaking at 4.3% increased performance for the 3000+ memory kits.
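Fluid-dynamics kernels like NPB's sweep large grids where each update reads neighbouring cells, so throughput leans on memory bandwidth rather than raw FLOPS. A minimal 1D Jacobi relaxation (our own sketch; NPB's real kernels are 3D and far larger) shows the access pattern:

```python
def jacobi_sweep(grid, sweeps=50):
    """Jacobi relaxation on a 1D grid with fixed endpoints: every sweep
    streams the whole array in and writes a new one, so performance tracks
    memory bandwidth as the grid outgrows the caches."""
    n = len(grid)
    for _ in range(sweeps):
        new = grid[:]                    # endpoints are kept as boundaries
        for i in range(1, n - 1):
            new[i] = 0.5 * (grid[i - 1] + grid[i + 1])
        grid = new
    return grid
```

With boundaries held at 0 and 1, repeated sweeps converge toward a linear ramp; on a large grid the two full-array traversals per sweep are exactly the streaming pattern that benefits from faster DRAM.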

Redis: link

Many online applications rely on key-value caches and data structure servers to operate. Redis is an open-source, scalable web technology with a broad developer base, and it relies heavily on memory bandwidth as well as CPU performance.

Linux-Bench Redis Memory-Key Store, 100x

When tackling a high number of users, Redis performs up to 17% better with 2800+ memory, making it our best benchmark result in this set.
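Redis's hot path is hash-table lookups over a working set far larger than cache, which is why it rewards memory speed. A toy in-process stand-in (our own sketch, not redis-benchmark) makes the SET/GET pattern concrete:

```python
import time

def kv_benchmark(n_keys: int = 100_000) -> float:
    """SET then GET n_keys entries in a dict and return operations/sec.
    Like Redis, the work is hash lookups over a large working set, so it
    stresses memory latency and bandwidth more than the FPU."""
    store = {}
    payload = "x" * 64
    start = time.perf_counter()
    for i in range(n_keys):
        store[f"key:{i}"] = payload                       # SET
    hits = sum(1 for i in range(n_keys) if f"key:{i}" in store)  # GET
    elapsed = time.perf_counter() - start
    assert hits == n_keys
    return (2 * n_keys) / elapsed
```

Scaling `n_keys` well past the L3 cache size is what exposes the DRAM dependence; small key counts would stay cache-resident and show little difference.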


120 Comments


  • jabber - Thursday, February 5, 2015 - link

    Crikey...I'm still on 16GB of DDR2 ECC! I feel old.
  • foxtrot1_1 - Thursday, February 5, 2015 - link

    If you're still running DDR2 it's probably not the RAM that's holding your system back.
  • nathanddrews - Friday, February 6, 2015 - link

    Clearly. It's rather sad to see how little impact RAM has on performance... which raises the question of who is buying this stuff? Is the only strength stability during overclocking?
  • III-V - Friday, February 6, 2015 - link

    Well Haswell-E users are tied to DDR4, so yeah there's that :)

    Of course, that's not what you were talking about. Memory bandwidth can have a big effect on certain workloads. IGPs need a bit of it (tapers off hard after 2133 MHz), but I know programs like WinZip and 7-Zip love memory bandwidth. There's certainly a lot of server and HPC workloads that love it too, but for most users, you're certainly right -- it's not worth it at the moment and may not really ever be a concerning bottleneck.
  • r3loaded - Thursday, February 5, 2015 - link

    You mean to say you've not bought a single computer since Core 2? Damn!
  • Murloc - Thursday, February 5, 2015 - link

    well do you really need additional CPU power?

    My overclocked E8500 (with stock cooler) was a beast, there was so much headroom, and I didn't change computer because of it.

    Right now I'm on an i5 750 from 2009 or something and it's totally fine. Also my GTX 275 still handles games in full HD just fine, although not at max settings, and it also runs hot and only has DX10, so it's obsolete.
    So after 6 years, it's only the GPU that could use upgrading, the CPU/RAM part is not bottlenecking anything.

    Well, not having SATA 6 Gbps and that limiting my SSD is the one bad thing. I don't have any USB3 pendrives so I don't miss that.
    It's technology and power consumption making my CPU/chipset obsolete rather than performance.
  • Guspaz - Thursday, February 5, 2015 - link

    I'm still running a first-gen i7 (Nehalem) as my work computer, and it's still plenty snappy. I've got 12GB of RAM in the thing, and whatever I do have in the way of performance limitations would largely be resolved by sticking an SSD in there.
  • svan1971 - Sunday, February 8, 2015 - link

    Get the PX-AG256M6e and say goodbye to SATA 3 limitations. I put one in an old X58 board and it's amazing what a 6-year-old i7 OC'd to 3.6 GHz can do.
  • mikato - Monday, February 9, 2015 - link

    Nice post. I had an E7300 system and I had already upgraded the GPU to a GTX 760 and maxed out memory. It was somewhat slow in the newer games I played (Call of Duty), then I bought an E8500 on ebay and put that in and overclocked it finding a sweet spot, but it was still not quite as fast as I wanted. The poor optimization of COD Ghosts was partly to blame, but I ended up redoing the whole system at that point.

    I do use an i7-950 Bloomfield at work still and it does just fine.
  • jabber - Thursday, February 5, 2015 - link

    Just to clarify, I'm running a dual quad-core 3.33GHz Xeon setup. It still keeps up with an i7 in a lot of cases. They cost peanuts too.
