Linux Performance 

C-Ray: link

C-Ray is a simple ray-tracing program that focuses almost exclusively on processor performance rather than DRAM access. The test in Linux-Bench renders a heavily complex scene, offering a large and scalable workload.

Linux-Bench c-ray 1.1 (Hard)
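
As a rough illustration of why C-Ray stays on the processor, the hot path of any ray tracer is a tight floating-point loop like the ray-sphere intersection below. This is a minimal sketch in C, not code from C-Ray itself: the working set is a handful of doubles, so it lives in registers and L1 cache rather than DRAM.

```c
/* Minimal ray-sphere intersection: the kind of tight floating-point
 * arithmetic a ray tracer spends most of its time on. Illustrative
 * sketch only, not C-Ray source. Compile with -lm. */
#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } vec3;

static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3 sub(vec3 a, vec3 b)   { return (vec3){a.x-b.x, a.y-b.y, a.z-b.z}; }

/* Returns the distance along the ray to the nearest hit, or -1.0 on a
 * miss. orig/dir describe the ray (dir assumed normalized); center and
 * radius describe the sphere. */
static double ray_sphere(vec3 orig, vec3 dir, vec3 center, double radius)
{
    vec3 oc = sub(orig, center);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * c;
    if (disc < 0.0) return -1.0;            /* ray misses the sphere */
    double t = (-b - sqrt(disc)) / 2.0;     /* nearest intersection  */
    return t >= 0.0 ? t : -1.0;
}

int main(void)
{
    vec3 orig = {0, 0, 0}, dir = {0, 0, 1}, center = {0, 0, 5};
    printf("hit at t = %f\n", ray_sphere(orig, dir, center, 1.0));
    return 0;
}
```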

NAMD, Scalable Molecular Dynamics: link

Developed by the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana-Champaign, NAMD is a set of parallel molecular dynamics codes designed for extreme parallelization, up to and beyond 200,000 cores. The reference paper detailing NAMD has over 4000 citations, and our testing runs a small simulation where the number of calculation steps per unit time is the reported metric.

Linux-Bench NAMD Molecular Dynamics
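
At the heart of a molecular dynamics step is a numerical integrator that advances particle positions and velocities under computed forces. The sketch below shows a single-particle velocity-Verlet step with a toy harmonic force; it is illustrative only, and NAMD's real integrator adds full bonded/nonbonded force fields and parallel decomposition across thousands of cores.

```c
/* One velocity-Verlet timestep for a single particle: the basic update
 * at the heart of molecular dynamics codes. Toy example, not NAMD code. */
#include <stdio.h>

typedef struct { double pos, vel, acc; } particle;

/* A toy force: harmonic spring toward the origin (F = -k*x), mass = 1. */
static double force(double pos) { const double k = 1.0; return -k * pos; }

static void verlet_step(particle *p, double dt)
{
    p->pos += p->vel * dt + 0.5 * p->acc * dt * dt;  /* position update */
    double acc_new = force(p->pos);                  /* new force/accel */
    p->vel += 0.5 * (p->acc + acc_new) * dt;         /* velocity update */
    p->acc = acc_new;
}

int main(void)
{
    particle p = { 1.0, 0.0, force(1.0) };
    for (int step = 0; step < 1000; step++)   /* 1000 steps of dt=0.01 */
        verlet_step(&p, 0.01);
    printf("pos = %f, vel = %f\n", p.pos, p.vel);
    return 0;
}
```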

NPB, Fluid Dynamics: link

Aside from LINPACK, there are many other ways to benchmark supercomputers in terms of how effective they are for various types of mathematical processes. The NAS Parallel Benchmarks (NPB) are a set of small programs originally designed at NASA to test its supercomputers on fluid dynamics simulations, useful for modeling airflow and aerodynamic design.

Linux-Bench NPB Fluid Dynamics
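
The NPB kernels largely sweep structured grids, which is why they stress the memory subsystem as much as raw compute. As a sketch of the pattern (not NPB source), the 2D Jacobi relaxation below reads four neighbours for every value it writes, so bytes moved dominate the floating-point work.

```c
/* A 2D Jacobi relaxation sweep, the kind of structured-grid kernel the
 * NPB suite exercises. Illustrative sketch only, not NPB source: lots of
 * memory traffic per floating-point operation, which is why such kernels
 * stress cache and DRAM more than core frequency. */
#include <stdio.h>
#include <string.h>

#define N 256

static double grid[N][N], next[N][N];

static void jacobi_sweep(void)
{
    for (int i = 1; i < N - 1; i++)
        for (int j = 1; j < N - 1; j++)
            /* Each interior point becomes the average of its four
             * neighbours. */
            next[i][j] = 0.25 * (grid[i-1][j] + grid[i+1][j] +
                                 grid[i][j-1] + grid[i][j+1]);
    memcpy(grid, next, sizeof grid);
}

int main(void)
{
    grid[N/2][N/2] = 1.0;                 /* a point source */
    for (int iter = 0; iter < 100; iter++)
        jacobi_sweep();
    printf("center value after 100 sweeps: %g\n", grid[N/2][N/2]);
    return 0;
}
```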

Redis: link

Many online applications rely on key-value caches and data-structure servers to operate. Redis is an open-source, scalable web technology with a strong developer base; it relies heavily on memory bandwidth as well as CPU performance.

Linux-Bench Redis Memory-Key Store, 1x

Linux-Bench Redis Memory-Key Store, 10x

Linux-Bench Redis Memory-Key Store, 100x
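
For context on what these numbers exercise, the pattern under test is a simple SET/GET round trip against the server. Below is a minimal sketch using the hiredis client library; it assumes a Redis server running on localhost at the default port 6379, and it is not the Linux-Bench harness itself.

```c
/* A minimal round trip against a Redis server using the hiredis client
 * library, showing the key-value SET/GET pattern the benchmark hammers.
 * Assumes a local server on port 6379; compile with -lhiredis. */
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void)
{
    redisContext *ctx = redisConnect("127.0.0.1", 6379);
    if (ctx == NULL || ctx->err) {
        fprintf(stderr, "connection failed\n");
        return 1;
    }

    /* SET, then GET the value back: each round trip costs CPU time in
     * the server plus a pass over the value in memory. */
    redisReply *reply = redisCommand(ctx, "SET bench:key %s", "hello");
    if (reply) freeReplyObject(reply);

    reply = redisCommand(ctx, "GET bench:key");
    if (reply && reply->str)
        printf("GET bench:key -> %s\n", reply->str);
    if (reply) freeReplyObject(reply);

    redisFree(ctx);
    return 0;
}
```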

Conclusions on Linux-Bench

Our Linux testing actually affords ten tests, but we chose the most important to publish here (the other results can be found in Bench). Here we see some slight differences when it comes to overclocks: the NPB tests rely on multi-dimensional matrix solvers, which are often more cache/memory dependent, so a higher-frequency processor doesn't always help. With Redis, we are wholly cache/memory limited. The other results are in line with the CPU performance deltas over the overclock range.
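
The cache/memory limitation is easy to demonstrate in isolation. The STREAM-style triad loop below (an illustrative sketch, not one of our benchmark tests) touches arrays far larger than any CPU cache, so its runtime is set by DRAM bandwidth and barely moves with core frequency - the same behavior the NPB and Redis results show.

```c
/* A STREAM-style triad: with arrays well past the caches, runtime is
 * set by DRAM bandwidth, not core clock. Illustrative sketch only. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)   /* 16M doubles, 128 MB per array: well past any cache */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    clock_t t0 = clock();
    for (size_t i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];   /* two flops per 24 bytes moved */
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("triad: %.3f s, ~%.1f GB/s\n",
           secs, 3.0 * N * sizeof(double) / secs / 1e9);
    free(a); free(b); free(c);
    return 0;
}
```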

Comments

  • bill.rookard - Friday, August 28, 2015 - link

    I wonder if not having the FIVR on-die has something to do with the difference between the Haswell voltage limits and the Skylake limits?
  • Communism - Friday, August 28, 2015 - link

    Highly doubtful, as Ivy Bridge has relatively the same voltage limits.
  • Morawka - Saturday, August 29, 2015 - link

    Yeah, that's a crazy high voltage... that was even high for 65nm i7 920s
  • kuttan - Sunday, August 30, 2015 - link

    The i7 920 is 45nm, not 65nm.
  • Cellar Door - Friday, August 28, 2015 - link

    Ian, so it seems like the memory controller - even though capable of driving DDR4 to some insane frequencies - errors out with large data sets?

    It would be interesting to see this behavior with Skylake and DDR3L.

    Also, it would be interesting to see if the i5-6600K, lacking Hyper-Threading, would run into the same issues.
  • Communism - Friday, August 28, 2015 - link

    So your sample definitively wasn't stable above 4.5GHz after all then...

    Haswell/Broadwell/Skylake dud confirmed. Waiting for Skylake-E where the "reverse hyperthreading" will be best leveraged with the 6/8 core variant with proper quad channel memory bandwidth.
  • V900 - Friday, August 28, 2015 - link

    Nope, it was stable above 4.5 GHz...

    And no dud confirmed in Broadwell/Skylake.

    There is just one specific scenario (4K/60 encoding) where the combination of the software and the design of the processor makes overclocking unfeasible.

    Not really a failure on Intel's part, since it's not realistic to expect them to design a mass-market CPU according to the whims of the 0.5% of their customers who overclock.
  • Gigaplex - Saturday, August 29, 2015 - link

    If you can find a single software load that reliably works at stock settings, but fails at OC, then the OC by definition is not 100% stable. You might not care and are happy to risk using a system configured like that, but I sure as hell wouldn't.
  • Oxford Guy - Saturday, August 29, 2015 - link

    Exactly. Not stable is not stable.
  • HollyDOL - Sunday, August 30, 2015 - link

    I have to agree... While we are not talking about server-stable with ECC and such, either you are rock stable for desktop use or not stable at all. Already failing on one of the test scenarios is not good at all. I wouldn't be happy if there were some hidden issues occurring during compilations, or after a few hours of rendering a scene... or, let's be honest, in the middle of a gaming session with my online guild. As such I am running my 2500k half a GHz lower than what stability testing showed as errorless. Maybe it's excessive, but I like to be on the safe side with my OC, especially since the machine is used for a wide variety of purposes.
