Linux Performance 

C-Ray: link

C-Ray is a simple ray-tracing program that focuses almost exclusively on processor performance rather than DRAM access. The test in Linux-Bench renders a heavily complex scene, offering a large, scalable workload.

Linux-Bench c-ray 1.1 (Hard)
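
For readers curious what a purely compute-bound workload looks like, the sketch below is an assumed, minimal ray-sphere intersection in Python (illustrative only, not C-Ray's actual source): each ray needs only a handful of multiplies, adds and a square root, and the scene data is small enough to stay resident in cache, so performance tracks core count and frequency rather than DRAM.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance to the nearest hit, or None if the ray misses.

    origin, direction and center are (x, y, z) tuples; direction is unit length.
    The whole routine is a handful of float operations on a tiny working set,
    which is why this kind of workload is bound by the FPU, not by DRAM.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction

    # Quadratic |o + t*d - c|^2 = r^2, with a == 1 for a unit direction
    b = 2.0 * (dx * ox + dy * oy + dz * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                      # ray misses the sphere

    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

# Fire a bundle of identical rays at one sphere to keep the FPU busy
hits = sum(
    ray_sphere_hit((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 5.0), 1.0) is not None
    for _ in range(100_000)
)
print(hits)  # 100000 -- every ray hits
```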

NAMD, Scalable Molecular Dynamics: link

Developed by the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana-Champaign, NAMD is a set of parallel molecular dynamics codes designed for extreme parallelization, up to and beyond 200,000 cores. The reference paper detailing NAMD has over 4,000 citations, and our testing runs a small simulation where the number of calculation steps per unit time is the output metric.

Linux-Bench NAMD Molecular Dynamics
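
As a rough idea of the shape of the work NAMD parallelizes, the sketch below is an assumed, heavily simplified molecular dynamics step in Python (a generic Lennard-Jones/velocity Verlet toy, not NAMD's real code): the expensive part is the all-pairs force evaluation, which is what gets spread across hundreds of thousands of cores, and each integration step is the kind of "calculation step" the benchmark counts per unit time.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """All-pairs Lennard-Jones forces, O(N^2) -- the part MD codes parallelize."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            r2 = np.dot(r, r)
            s6 = (sigma * sigma / r2) ** 3
            mag = 24.0 * eps * (2.0 * s6 * s6 - s6) / r2
            forces[i] += mag * r
            forces[j] -= mag * r
    return forces

def verlet_step(pos, vel, forces, dt=0.001, mass=1.0):
    """One velocity Verlet integration step -- the unit of work the benchmark counts."""
    vel_half = vel + 0.5 * dt * forces / mass
    new_pos = pos + dt * vel_half
    new_forces = lj_forces(new_pos)
    new_vel = vel_half + 0.5 * dt * new_forces / mass
    return new_pos, new_vel, new_forces

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 5.0, size=(64, 3))   # 64 toy particles in a 5x5x5 box
vel = np.zeros_like(pos)
forces = lj_forces(pos)
for _ in range(10):                          # ten "calculation steps"
    pos, vel, forces = verlet_step(pos, vel, forces)
print(pos[0])
```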

NPB, Fluid Dynamics: link

Aside from LINPACK, there are many other ways to benchmark supercomputers in terms of how effective they are for various types of mathematical processes. The NAS Parallel Benchmarks (NPB) are a set of small programs originally developed by NASA to test its supercomputers on fluid dynamics simulations, useful for airflow simulation and design.

Linux-Bench NPB Fluid Dynamics
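
The NPB kernels are largely iterative solvers over big multi-dimensional grids. The sketch below is an assumed generic Jacobi relaxation sweep in Python/NumPy (not an actual NPB kernel); it shows why these tests stress cache and memory bandwidth as well as raw clock speed: every iteration streams an array far larger than the on-die caches.

```python
import numpy as np

def jacobi_sweep(grid):
    """One Jacobi relaxation sweep: each interior point becomes the average
    of its four neighbours. The whole array is streamed every iteration,
    so cache and memory bandwidth matter as much as core frequency."""
    new = grid.copy()
    new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                              grid[1:-1, :-2] + grid[1:-1, 2:])
    return new

# A grid well beyond L3 cache size (4096 x 4096 doubles = 128 MB), hot top edge
grid = np.zeros((4096, 4096))
grid[0, :] = 100.0
for _ in range(50):
    grid = jacobi_sweep(grid)
print(grid[1, 2048])
```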

Redis: link

Many online applications rely on key-value caches and data structure servers to operate. Redis is an open-source, scalable web technology with a strong developer base; it relies heavily on memory bandwidth as well as CPU performance (a minimal client-side sketch follows the charts below).

Linux-Bench Redis Memory-Key Store, 1x

Linux-Bench Redis Memory-Key Store, 10x

Linux-Bench Redis Memory-Key Store, 100x
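
For context on what the Redis numbers represent, the sketch below uses the common redis-py client against an assumed local server on the default port 6379 (the key names and values are made up): the workload is a stream of tiny SET/GET/INCR operations, so once the CPU can issue them fast enough, cache and memory behaviour become the limiter.

```python
import redis  # pip install redis; assumes a local Redis server on port 6379

r = redis.Redis(host="localhost", port=6379, db=0)

# The basic cache pattern: write a value, read it back, bump a counter
r.set("session:42", "user-data-blob")          # hypothetical key/value
value = r.get("session:42")                    # returns bytes
hits = r.incr("stats:hits")                    # atomic server-side increment

# A tight SET/GET loop -- a miniature version of what the benchmark drives
for i in range(10_000):
    r.set(f"key:{i}", i)
    r.get(f"key:{i}")

print(value, hits)
```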

Conclusions on Linux-Bench

Our Linux testing actually affords ten tests, but we chose the most important to publish here (the other results can be found in Bench). Here we see some slight differences when it comes to overclocks: the NPB tests rely on multi-dimensional matrix solvers, which are often more cache/memory dependent, so a higher-frequency processor doesn't always help. With Redis we are wholly cache/memory limited. The other results are in line with the CPU performance deltas over the overclock range.

Comments

  • Zoeff - Friday, August 28, 2015 - link

    As an owner of a 6700K that's running at 4.8GHz, this is a very interesting article for me. :)

    I've currently entered 1.470v in the UEFI and I can get up to 1.5v in CPUz. Anything lower and it becomes unstable. So I guess I'm probably on the high side voltage wise...
  • zepi - Friday, August 28, 2015 - link

    Sounds like a scorching voltage for 24/7 operation considering it is a 14nm process... But obviously, we don't really know if this is detrimental in the longer term.
  • 0razor1 - Friday, August 28, 2015 - link

    I believe it is. Ion shift. High voltage = breakdown at some level. Enough damage and things go amiss.
    When one considers 1.35V+ high for 22nm, I wonder why we're doing this (1.35V+) at 14nm.

    If it's OK, then can someone explain why one should not go over, say, 1.6V on the DRAM at 22nm, but should stick to 1.35V at 14nm? Might as well use standard previous-generation voltages and call it a day?

    Further, where are the AVX-stable loads? Sorry, but no P95 small in-place FFTs with AVX = NOT stable enough for me. It's not the temps (I have an H100i) for sure. For example, on my 4670K it takes 1.22 VCore for 4.6GHz, but 1.27 VCore when I stress with AVX loads (P95 being one of them).

    It's *not* OK to say "hey, that synthetic is too much of a stress" etc. I've used nothing but P95 since K10 and haven't found a better error catcher.
  • 0razor1 - Friday, August 28, 2015 - link

    To add to the above, downclocking the core on GPUs and running memcheck in OCCT is *it* for my VRAM stability tests when I OC my graphics cards. I wonder how people just 'look' for corruption in benchmarks like Fire Strike and call their OCs stable. It doesn't work.

    Run a game and leave it idle for ~ 10 hours and come back. You will find glitches all over the place on your 'stable' OC.

    Just sayin', OC stability testing has fallen to new lows in the recent past, be it graphics cards or processors.
  • Zoeff - Friday, August 28, 2015 - link

    I tend to do quick tests such as Cinebench 15 and HandBrake, then if that passes I just run it for a week with regular usage such as gaming and streaming. If it blue-screens or I get any other oddities, I raise the voltage by 0.01v. I had to do that twice in the space of a week (started at 1.45v, 4.8GHz).
  • Oxford Guy - Saturday, August 29, 2015 - link

    That's a great way to corrupt your OS and programs.
  • Impulses - Saturday, August 29, 2015 - link

    Yeah, I do all my strenuous testing first; if I have to simulate real-world conditions by leaving two tests running simultaneously, I do that too... like running an encode with Prime in the background, or stressing the CPU, GPU, AND I/O simultaneously.

    AFTER I've done all that THEN I'll restore a pre-tinkering OS image, unless I had already restored one after my last BSOD or crash... Which I'll do sometimes mid-testing if I think I've pushed the OC far enough that anything might be hinky.

    It's so trivial to work with backups like that, it should be SOP.
  • Oxford Guy - Sunday, August 30, 2015 - link

    If a person is using an unstable overclock for daily work it may be hard to know if stealth corruption is happening.
  • kuttan - Sunday, August 30, 2015 - link

    haha that is funny.
  • kmmatney - Saturday, September 19, 2015 - link

    I do the same as the OP (but use Prime95 and Handbrake). If it passes a short test there (say, one movie in Handbrake) I just start using the machine. I've had blue screens, but never any corruption issues. I guess corruption could happen, but the odds are pretty low. My computer gets backed up every night to a WHS server, so I can be fearless...
