Stock CPU Performance: Legacy Tests

We have also included our legacy benchmarks in this section, representing a stack of older code for popular benchmarks.

All of our benchmark results can also be found in our benchmark engine, Bench.

3DPM v1: Naïve Code Variant of 3DPM v2.1

The first legacy test in the suite is the first version of our 3DPM benchmark. This is the ultimate naïve version of the code, as if it were written by a scientist with no knowledge of how computer hardware, compilers, or optimization works (which, in fact, it was at the start). This represents a large body of scientific simulation out in the wild, where getting the answer is more important than getting it fast (a correct result in 4 days is acceptable, rather than sending someone away for a year to learn to code so the result arrives in 5 minutes).

In this version, the only real optimizations were in the compiler flags (-O2, -fp:fast), compiling in release mode, and enabling OpenMP in the main compute loops. The loops were not tuned beyond that, and one of the key slowdowns is false sharing in the cache. The code also has long dependency chains in its random number generation, which leads to relatively poor performance on certain microarchitectures.
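
As a rough illustration of the two problems just described, here is a minimal OpenMP sketch. This is not the actual 3DPM source; the particle movement is simplified to a random walk and all names are placeholders. It shows per-thread accumulators packed into one cache line (false sharing) and a random number generator whose state forms a serial dependency chain in the hot loop:

    // Minimal sketch, NOT the actual 3DPM code: a simplified random walk.
    #include <omp.h>
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        const int steps = 10000000;
        const int nthreads = omp_get_max_threads();

        // Adjacent doubles: eight per-thread slots fit in one 64-byte cache
        // line, so every write invalidates that line for the other cores --
        // the false-sharing slowdown described above.
        std::vector<double> distance(nthreads, 0.0);

        #pragma omp parallel
        {
            const int tid = omp_get_thread_num();
            // Each draw depends on the generator's previous state, so the RNG
            // forms a long serial dependency chain inside the compute loop.
            std::mt19937 rng(tid + 1);
            std::uniform_real_distribution<double> step(-1.0, 1.0);
            for (int i = 0; i < steps; ++i)
                distance[tid] += step(rng); // repeated writes to a shared line
        }

        double total = 0.0;
        for (double d : distance) total += d;
        std::printf("total displacement: %f\n", total);
        return 0;
    }

The usual fix is to accumulate into a thread-local variable (or pad each slot to its own cache line with alignas(64)) and write to the shared array once at the end, which is exactly the kind of optimization this v1 code skips.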

3DPM v1 can be downloaded with our 3DPM v2 code here: 3DPMv2.1.rar (13.0 MB)

[Graphs: 3DPM v1 Single Threaded; 3DPM v1 Multi-Threaded]

x264 HD 3.0: Older Transcode Test

This transcoding test is super old, and was used by Anand back in the days of the Pentium 4 and Athlon II processors. Here a standardized 720p video is transcoded with a two-pass conversion, and the benchmark reports the frames per second of each pass. This benchmark is single-threaded, and between some microarchitectures we seem to actually hit an instructions-per-clock wall.
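
For reference, a two-pass encode with the standalone x264 command-line tool looks something like the lines below. This is an illustrative invocation, not the exact settings of the x264 HD 3.0 benchmark script; the bitrate and file names are placeholders:

    # Pass 1 analyzes the input and writes a stats file; pass 2 encodes
    # using those statistics. The benchmark reports the fps of each pass.
    x264 --pass 1 --bitrate 4000 --stats x264.stats -o /dev/null input.y4m
    x264 --pass 2 --bitrate 4000 --stats x264.stats -o output.mkv input.y4m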

[Graphs: x264 HD 3.0 Pass 1; x264 HD 3.0 Pass 2]

129 Comments

  • dgingeri - Saturday, January 26, 2019 - link

    With Intel recently releasing the "F" SKUs for processors that don't have integrated graphics, I would think this processor would be a Core i3-8121FU.
  • KOneJ - Sunday, January 27, 2019 - link

    ROFL, mate. Though a UF line-up honestly wouldn't surprise me with where MCMs, TSVs, yields, iGPUs, and core counts are seemingly headed.
  • Piotrek54321 - Saturday, January 26, 2019 - link

    I would love an article on how quantum mechanical effects have to be taken into account at such small nodes.
  • KOneJ - Sunday, January 27, 2019 - link

    I would love to see the mathematics of quantum mechanics cleaned up to be more elegant and less Newtonian in nature.
  • Rudde - Saturday, January 26, 2019 - link

    I looked into the transistor density of different nodes, and particularly the claim that Intel's 10nm will feature "100 million transistors per square millimeter."
    Intel seems historically to lag in transistor density: its 22nm has ~8M/mm², while the competing 28nm nodes from GlobalFoundries and TSMC have ~13M/mm² and ~12M/mm² respectively.
    Moving on to 14nm, all foundries double their transistor density: Intel goes to 15M/mm², GF to 24M/mm² (on a node licensed from Samsung), and TSMC's 16nm likewise to 24M/mm².
    TSMC's 7nm node has a density of ~40M/mm².
    Now Intel has made two statements (both found on the first page of the article):
    1. 100 million transistors per mm², or a 5.7x improvement.
    2. A 2.7x improvement in density over 14nm, which gives 55M/mm². 55M/mm² would be consistent with Intel's claim of beating TSMC's 7nm.
    Next, suppose my calculations about Intel's transistor density are wrong and both of Intel's claims are true. In that case Intel's current 14nm would be 27M/mm². But then we can't assume my calculations for GF and TSMC are correct either, and we are left without any conclusion.
  • Rudde - Saturday, January 26, 2019 - link

    I jumped the gun and didn't proceed to page two, which explains a lot of the same things I tried to explain, but uses actual node data rather than chip sizes.
  • smalM - Saturday, January 26, 2019 - link

    Page two doesn't use actual node data, it uses Intel propaganda ;-)
  • KOneJ - Sunday, January 27, 2019 - link

    Yep, they're not the only ones optimizing libraries. They're trying to muddle raw transistor density together with design-library compilation. While that's fair, it doesn't take into account that others are working both halves of the problem as well. Clearly meant to be misleading.
  • sidm2k11 - Saturday, January 26, 2019 - link

    How is the fan noise on the PN60? Mine makes a pretty loud whine all the time, and temperatures regularly cross 80°C under full load. My 4010U Brix PC is whisper-quiet by comparison.
  • alacard - Saturday, January 26, 2019 - link

    Well that was a wonderfully intricate review. Thank you.
