Linux Performance

Built around several freely available benchmarks for Linux, Linux-Bench is a project spearheaded by Patrick at ServeTheHome to streamline about a dozen of these tests into a single neat package, run via a set of three commands from an Ubuntu 11.04 LiveCD. The tests include a fluid dynamics suite used by NASA, ray-tracing, OpenSSL, molecular modeling, and a scalable data structure server for web deployments. We run Linux-Bench and report a select few of the tests that rely on CPU and DRAM speed.

C-Ray: link

C-Ray is a simple ray-tracing program that focuses almost exclusively on processor performance rather than DRAM access. The test in Linux-Bench renders a detailed, complex scene, offering a large and scalable workload.
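The hot loop of a ray tracer is essentially repeated ray-object intersection math over a small working set, which is why it stresses the core's floating-point units rather than DRAM. As a rough illustration (this is a generic textbook sketch, not C-Ray's actual code), a ray-sphere intersection in Python:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive hit distance along a ray, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for t, a
    quadratic in t -- the kind of tight floating-point arithmetic a
    ray tracer spends almost all of its time in.
    """
    # Vector from the sphere center to the ray origin
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

# A ray from the origin along +z toward a unit sphere centered at z=5
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

Note that the whole state fits in a handful of registers, which is why the benchmark barely touches memory.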

Linux-Bench c-ray 1.1 (Hard)

C-Ray doesn't care much for the overclock, indicating that the bottleneck is elsewhere.

NAMD, Scalable Molecular Dynamics: link

Developed by the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana-Champaign, NAMD is a set of parallel molecular dynamics codes designed for extreme parallelization, up to and beyond 200,000 cores. The reference paper detailing NAMD has over 4000 citations, and our testing runs a small simulation in which the number of calculation steps per unit time is the reported result.
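At its core, each molecular dynamics step integrates Newton's equations of motion for every atom. A minimal sketch of the velocity-Verlet integrator commonly used in MD codes (a toy 1D harmonic oscillator here, nothing like NAMD's actual implementation):

```python
def verlet_step(x, v, f, m, dt, force_fn):
    """One velocity-Verlet step: half-kick, drift, recompute force, half-kick."""
    v_half = v + 0.5 * dt * f / m      # advance velocity half a step
    x_new = x + dt * v_half            # advance position a full step
    f_new = force_fn(x_new)            # force at the new position
    v_new = v_half + 0.5 * dt * f_new / m
    return x_new, v_new, f_new

# Toy system: 1D harmonic oscillator (force = -k*x), not a real biomolecule
k, m, dt = 1.0, 1.0, 0.01
x, v = 1.0, 0.0
f = -k * x
for _ in range(1000):
    x, v, f = verlet_step(x, v, f, m, dt, lambda x: -k * x)
energy = 0.5 * m * v * v + 0.5 * k * x * x  # stays very close to the initial 0.5
```

The force evaluation dominates the cost in real MD codes, and each step depends on the previous one, which is why the benchmark reports steps per unit time.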

Linux-Bench NAMD Molecular Dynamics

The Molecular Dynamics module of the test clearly prefers more physical cores; the overclock gives the result a small boost, but it still lags behind the Core i5 parts.

NPB, Fluid Dynamics: link

Aside from LINPACK, there are many other ways to benchmark supercomputers on various types of mathematical workloads. The NAS Parallel Benchmarks (NPB) are a set of small programs originally designed for NASA to test its supercomputers on fluid dynamics simulations, useful for modeling airflow and informing design.
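Several of the NPB kernels boil down to iterative solvers sweeping repeatedly over a grid. As a loose illustration of the pattern (a generic sketch, not an actual NPB kernel), a Jacobi relaxation sweep for a 1D Poisson problem:

```python
def jacobi_sweep(u, f, h):
    """One Jacobi sweep for -u'' = f on a 1D grid with fixed endpoints."""
    return ([u[0]]
            + [0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
               for i in range(1, len(u) - 1)]
            + [u[-1]])

# Laplace problem (f = 0) on [0, 1] with u(0)=0, u(1)=1; exact answer is linear
n = 5
h = 1.0 / (n - 1)
u = [0.0] * (n - 1) + [1.0]
f = [0.0] * n
for _ in range(200):
    u = jacobi_sweep(u, f, h)
print([round(x, 3) for x in u])  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

Each sweep reads the whole grid and writes a new one, so at realistic problem sizes these kernels hammer memory bandwidth as hard as they do the FPU.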

Linux-Bench NPB Fluid Dynamics

Fluid Dynamics appreciates the overclock, and we sit in the middle of the Core i5 parts and well above the previous generation Core i7s.

Redis: link

Many online applications rely on key-value caches and data structure servers to operate. Redis is an open-source, scalable data structure server with a strong developer base, and it relies heavily on memory bandwidth as well as CPU performance.
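To illustrate the key-value model these servers implement (this toy class is purely illustrative and shares nothing with Redis's actual C implementation), the core of the interface is just set, get, delete, and atomic counters:

```python
class MiniKV:
    """A toy in-memory key-value store showing the GET/SET/DEL/INCR model."""

    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value
        return "OK"

    def get(self, key):
        # Missing keys return None rather than raising
        return self._data.get(key)

    def delete(self, key):
        # Returns the number of keys removed (0 or 1)
        return 1 if self._data.pop(key, None) is not None else 0

    def incr(self, key):
        # Counters start at 0 when the key is absent
        self._data[key] = int(self._data.get(key, 0)) + 1
        return self._data[key]

kv = MiniKV()
kv.set("session:42", "alice")
print(kv.get("session:42"))  # alice
```

Every operation is a hash lookup plus a memory access, which is why single-thread speed and memory latency dominate this kind of workload.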

Linux-Bench Redis Memory-Key Store, 1x

Linux-Bench Redis Memory-Key Store, 10x

Linux-Bench Redis Memory-Key Store, 100x

With Redis, single-thread speed and IPC are king, so the overclocked Skylake does rather well.

62 Comments

  • ViperV990 - Thursday, March 17, 2016 - link

    The i5-6400 @ $180 seems to be a much better part to OC.
  • nightbringer57 - Thursday, March 17, 2016 - link

    Heh, when some of the younger ones today speak about overclocking, I like to remind them of how much more financially interesting overclocking used to be. It's like everyone forgot how overclocking worked a few years ago. I still remember my cheap student gaming PC with a Pentium E2180 that went from 2GHz to 3GHz with a standard tower rad and only a slight voltage boost. Back then you could have almost all of the performance of the 300€ CPUs (except a good bit of the cache) for 60€ or so. Multiplier overclocking is easier, yes, and it's good for reaching insane peak frequencies - but this market of "buy low, push high" overclocking has faded out (courtesy, of course, of the segmentation by core count as well)
  • BrokenCrayons - Thursday, March 17, 2016 - link

    Oh yeah, well I overclocked when there were still turbo buttons on the fronts of AT cases! So nyah nyah!

    Sarcasm aside though, drawing a line in the sand to mark when overclocking was "good" or "worthwhile" and when it stopped being fun or having any sort of point would result in an awful lot of people drawing an awful lot of lines all over the place. For instance, the last processor I bothered with overclocking was a 2GHz Pentium 4 derived Celeron. Pushing the FSB from 100 to 150MHz on an Intel boxed cooler with a little bit of extra voltage netted a 3GHz chip...which rapidly became highly unstable over the course of a few months. After that and numerous PIIs, PIIIs, the infamous Celeron 300A and whatnot, I got bored with it and my priorities shifted. I would have overclocked my VIC-20 and Trash 80 if I'd known more about computers because I couldn't resist tinkering. I think if one were to ask other people, they'd find different points in time and different processor technologies, so it's probably unfair to people who, simply by the date of their birth, are unable to discuss overclocking in terms you're more comfortable with.
  • nightbringer57 - Thursday, March 17, 2016 - link

    Yes, but still. There had been a more or less constant trend of tinkering with low-end CPUs to get quasi-high-end performance out of them for quite a long time. I cite my old E2180, but over the "modern" history of computers (that is, in the current context, the IBM PC and its heirs), there have always been such shenanigans available to tinkerers. If you go further back in time, the trend fades as the modern concept of a CPU "range" fades out, and it came down more to boosting your generation-X CPU to still have a bit more oomph after most of the software environment of your given platform had moved on to a new generation.
    And not only Intel processors, but AMD processors as well, with the pencil-unlockable Durons and whatnot.

    As this article states, this kind of overclocking has more or less died in recent years, partly due to technical issues (as systems get more and more complex and integrated, it becomes riskier), partly due to the current state of the market, partly due to marketing practices.

    It's not about discussing overclocking in terms I personally am comfortable with or whatnot. It's just about being realistic. I hope that AMD can come back with Zen and bring a bit more freshness into the low-end overclocking market.
  • Spoelie - Friday, March 18, 2016 - link

    Still had a lot of fun in the period between 2000-2010 with the Athlons, always buying the lowest end SKU of the performance line, and ocing between 20-40% to reach the same performance of the highest end SKU in the line.

    E.g.
    On an nForce2 board IIRC
    * Athlon XP 1800+ (Socket A Thoroughbred 256KB cache) 1533mhz OC to ~2ghz
    * Athlon XP 2500+ (Socket A Barton 512KB cache) FSB166 to FSB200 = OC to "3200+"

    Had an Athlon 64 2800+ on a Socket 754 for a very short time, don't remember what I did to it.

    Then a "DFI LanParty UT NF4 Ultra-D" (Socket 939 w/ nForce4 & 2*512MB Winbond BH-5 PC3200 @ 250mhz 2-2-2), cream of the crop at the time.
    * Athlon 64 3000+ (Venice) OC 1800 to 2250 (250bus)
    * Opteron 165 (Toledo) OC 1800 to 2475 (274bus)

    I loved those days
  • Murloc - Sunday, March 20, 2016 - link

    yeah I remember a 45nm core 2 duo I had, with the boxed stock cooler I was able to lower the voltage quite a bit and daily OC it at 4GHz at the same time.
    It was a lucky piece compared to others.
  • cobrax5 - Monday, March 21, 2016 - link

    I'm thinking about replacing my 45nm i7-930 @ 3.8ghz with a hex-core, 32nm Xeon and OC that to > 3.6ghz. You can get them for like under $200, and I'll keep my (admittedly aging) X58 platform.
  • benedict - Thursday, March 17, 2016 - link

    Single-threaded benchmarks show this processor to be much better than what it'd be in real life. I don't know if there are people who only run a single program at a time on their PCs. Having more cores is much more valuable than most benchmarks will show.
  • TheinsanegamerN - Thursday, March 17, 2016 - link

    I can run 7 programs at once, but if one is very demanding and single-threaded, then single-threaded performance is still quite relevant. Running multiple programs != not needing single-threaded performance. Thinking that single-threaded performance wasn't important got AMD the FX series, and subsequently a large portion of their users jumping to Intel.
  • calculagator - Thursday, March 17, 2016 - link

    Everyone is different, but single threaded benchmarks give a much better picture of performance for "normal" users than multithreaded in my experience. Even if they have lots of programs running, most users are only using one program at a time. All of those open documents and web tabs use very little CPU power while they just sit there. I have about 100 active processes right now, but my CPU is idling at about 3% usage.
    Even a basic dual-core CPU can handle most users' multitasking. The most common exceptions are gaming and video editing, but most users are not doing those things most of the time. Consider how people use laptops: their CPUs have such high single-threaded/burst performance that users hardly notice how much less powerful they are than desktop CPUs.
