Compiling LLVM, NAMD Performance

As we’re rebuilding our server test suite piece by piece – and there’s still a lot of work ahead to arrive at a good, representative “real world” set of workloads – one of the most highly requested benchmarks amongst readers was a more realistic compilation suite. With the Chrome and LLVM codebases being the most requested, I landed on LLVM as it’s fairly straightforward to set up.

# check out the LLVM 11.x release branch
git clone https://github.com/llvm/llvm-project.git
cd llvm-project
git checkout release/11.x
mkdir ./build
cd ..
# build on a 10GB ramdisk to take I/O out of the equation
mkdir llvm-project-tmpfs
sudo mount -t tmpfs -o size=10G,mode=1777 tmpfs ./llvm-project-tmpfs
cp -r llvm-project/* llvm-project-tmpfs
cd ./llvm-project-tmpfs/build
# configure the projects to build, then time only the build phase
cmake -G Ninja \
  -DLLVM_ENABLE_PROJECTS="clang;libcxx;libcxxabi;lldb;compiler-rt;lld" \
  -DCMAKE_BUILD_TYPE=Release ../llvm
time cmake --build .

We’re using the LLVM 11.0.0 release as the build target version, and we’re compiling Clang, libc++, libc++abi, LLDB, Compiler-RT and LLD using GCC 10.2 (self-compiled). To avoid any concerns about I/O, we’re building everything on a ramdisk. We’re measuring only the actual build time and excluding the configuration phase, as in the real world that phase usually isn’t repeated.
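For reference, this is a rough sketch of how the configure step can be pointed at a specific self-built GCC toolchain; the /opt/gcc-10.2 prefix below is a hypothetical install location, not necessarily the one used on our test systems:

# point CMake at the self-built compilers (hypothetical install prefix)
export CC=/opt/gcc-10.2/bin/gcc
export CXX=/opt/gcc-10.2/bin/g++
cmake -G Ninja \
  -DCMAKE_C_COMPILER="$CC" \
  -DCMAKE_CXX_COMPILER="$CXX" \
  -DLLVM_ENABLE_PROJECTS="clang;libcxx;libcxxabi;lldb;compiler-rt;lld" \
  -DCMAKE_BUILD_TYPE=Release ../llvm
time cmake --build .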

LLVM Suite Compile Time

Starting off with the Xeon 8380, we’re looking at large generational improvements for the new Ice Lake SP chip. A 33-35% improvement in compile time, depending on whether we’re looking at 2S or 1S figures, is enough to reposition Intel’s flagship CPU in the rankings by a notable amount, finally no longer lagging as drastically behind the competition.

It’s definitely not sufficient to compete with AMD and Ampere, however, whose parts still post figures 25% and 15% ahead of the Xeon 8380, respectively.

The Xeon 6330 falls in line with where we benchmarked it in previous tests, just slightly edging out the Xeon 8280 (a 6258R equivalent), meaning we’re seeing minor iso-core, iso-power generational improvements (again, I have to mention that the 6330 is half the price of a 6258R).

NAMD (Git-2020-12-09) - Apolipoprotein A1

NAMD is a problem-child benchmark due to its recent addition of an AVX-512 code path: the code was contributed by Intel engineers, which in itself isn’t an issue in my view. The problem is that this is a new algorithm with no relation to the normal code path, which isn’t nearly as hand-optimised for AVX2, and, further raising eyebrows, it only compiles with Intel’s ICC and no other compiler. That’s one step too far in terms of its questionable status as a benchmark: are we benchmarking NAMD as a representative general HPC workload, or are we benchmarking it solely for the sake of NAMD, and only NAMD, performance?

We understand Intel is putting a lot of focus on these kinds of workloads that are hyper-optimised to run extremely well on Intel hardware and Intel hardware only, and it’s a valid optimisation path for many use-cases. I’m just questioning how representative it is of the wider market and workloads.

In any case, the GCC binaries of the test on the ApoA1 protein show a significant performance uplift for the Xeon 8380, with a +35.6% gain. Even on this apples-to-apples code path, it still falls quite a bit behind the competition, which scales performance much higher thanks to higher core counts.
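For context, running the benchmark boils down to something like the following; the binary path, thread count and input location are illustrative assumptions for a multicore (SMP) build of namd2 with the standard ApoA1 benchmark files, not an exact transcript of our test commands:

# assumes an SMP namd2 binary and the ApoA1 input files in ./apoa1 (illustrative paths and thread count)
./namd2 +p 80 +setcpuaffinity apoa1/apoa1.namd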

Comments (169)

  • Gondalf - Wednesday, April 7, 2021 - link

    Depends on the workload. If you need massive per-core bandwidth, there’s only one road: Intel.
    If you need very low cache latency, there’s only one road: Intel.
    Moreover, consider that AMD is currently selling only a small number of 64-core SKUs; the focus of the market is on 32-core parts. So again, in this arena Intel is absolutely the best for bandwidth, latencies and idle power (AMD’s main defect).
    Not much has changed: AMD is best for many-core apps, Intel for apps with a medium/low number of cores.
    Something will change at the end of this year.
  • SarahKerrigan - Tuesday, April 6, 2021 - link

    So the 8380 number is being used for both Ice-SP and Cooper-SP, which are totally unrelated designs, on different platforms, with different microarchitectures?

    Well, that's not confusing at all.
  • Drumsticks - Tuesday, April 6, 2021 - link

    I thought surely it must have been a typo or you were confused, but yes indeed, you can find the 40 core 8380 right next to the 28 core 14nm 8380HL.

    Intel's naming has never been stellar but this is a new level.
  • jeremyshaw - Tuesday, April 6, 2021 - link

    Cooper Lake probably existed about as much as Cannon Lake. Intel still doesn't want to acknowledge their failures.

    This totally bodes well. /s
  • schujj07 - Tuesday, April 6, 2021 - link

    You can actually buy Cooper Lake 4p servers.
  • fallaha56 - Tuesday, April 6, 2021 - link

    well you won't be buying 38-40 core Ice Lake ones

    phantom parts, yields are awful. And as we saw on the lower core count parts, so is performance
  • fallaha56 - Tuesday, April 6, 2021 - link

    https://semiaccurate.com/2021/04/06/intels-ice-lak...
  • 29a - Wednesday, April 7, 2021 - link

    Are you getting paid for every semiaccurate link you post?
  • eastcoast_pete - Tuesday, April 6, 2021 - link

    Maybe that's what Intel meant by "improved cryptographic performance"; nobody can make any sense out of their naming scheme (: . Cryptic indeed!
  • amootpoint - Tuesday, April 6, 2021 - link

    It seems like a great holistic platform. I must say, well done Intel.
