Java Performance

The SPECjbb 2015 benchmark has "a usage model based on a world-wide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases, and data-mining operations." It exercises the latest Java 7 features and makes use of XML, compressed communication, and messaging with security.

We tested SPECjbb with four groups of transaction injectors and backends. We use the "Multi JVM" test because it is more realistic: running multiple JVMs on a single server is very common practice.

The Java version was OpenJDK 1.8.0_131. We applied relatively basic tuning to mimic real-world use, while aiming to fit everything inside a server with 128 GB of RAM; the four 24 GB backend heaps alone account for 96 GB, leaving headroom for the transaction injectors, the controller, and the OS. The JVM options were:

"-server -Xmx24G -Xms24G -Xmn16G -XX:+AlwaysPreTouch -XX:+UseLargePagesIndividualAllocation

The graph below shows the maximum throughput numbers for our Multi JVM SPECjbb test.

[Graph: SPECjbb 2015 Multi-JVM Max-jOPS]

Even though our testing is not the ideal case for AMD (you would probably choose 8 or even 16 backends), the EPYC edges out the Xeon 8176. Using 8 JVMs increases the gap from 1% to 4-5%.

The Critical-jOPS metric measures throughput under response time (SLA) constraints; in other words, how much work the machine can sustain while still responding quickly.

[Graph: SPECjbb 2015 Multi-JVM Critical-jOPS]

With this many threads active, you can get much higher Critical-jOPS by significantly increasing the RAM per JVM. However, we did not want that, as it would prevent comparison with systems that can only accommodate 128 GB of RAM.

Comments

  • tamalero - Tuesday, July 11, 2017 - link

    How is that different from AMD running stuff that is extremely optimized for them?
  • Friendly0Fire - Tuesday, July 11, 2017 - link

    That's kinda the point? You want to benchmark the CPUs in optimal scenarios, since that's what you'd be looking at in practice. If one CPU's weakness is eliminated by using a more recent/tweaked compiler, then it's not a weakness.
  • coder543 - Tuesday, July 11, 2017 - link

    Rather, you want to test under practical scenarios. Very few people are going to be running 17.04 on production-grade servers; they will run an LTS release, which in this case is 16.04.

    It would be good to have benchmarks from 17.04 as another point of comparison, but given how many things they didn't have time to do just using 16.04, I can understand why they didn't use 17.04.
  • Santoval - Wednesday, July 12, 2017 - link

    A compromise can be found by upgrading Ubuntu 16.04's outdated kernel. Ubuntu LTS releases include support for rolling HWE stacks: a simple meta package that installs newer kernels compiled, modified, tested, and packaged by the Ubuntu Kernel Team, delivered directly from the official Ubuntu repositories (not via a Launchpad PPA). With HWE, 16.04 LTS can run kernels up to the one that ships in 18.04 LTS.

    I also use 16.04 LTS + HWE (it just requires installing the linux-generic-hwe-16.04 package), which currently provides the 4.8 kernel. There is even a "beta" version of HWE (the same package name with -edge appended) for installing the 4.10 kernel (aka the kernel of 17.04) ahead of its normal rollout, which is scheduled for next month.
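
    For reference, enabling it is just something like the following; the package names are the ones mentioned above (the -edge variant being the early/"beta" stack), so take this as a sketch rather than a full upgrade guide:

        # standard HWE kernel stack
        sudo apt-get install linux-generic-hwe-16.04
        # or the early ("beta") stack
        sudo apt-get install linux-generic-hwe-16.04-edge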

    I just spotted various 4.10 kernel listings after checking in Synaptic, so they must have been added very recently. After that there are two more scheduled kernel upgrades, as shown at the following link. Of course, HWE upgrades only the kernel; it does not upgrade any applications or other user-level parts to a more recent version of Ubuntu.
    https://wiki.ubuntu.com/Kernel/RollingLTSEnablemen...
  • CajunArson - Tuesday, July 11, 2017 - link

    Considering the similarities between RyZen and Haswell (which aren't coincidental at all), you are already seeing a highly optimized set of RyZen results.

    But I have no problem seeing RyZen be tested with the newest distros, the only difference being that even Ubuntu 16.04 already has most of the optimizations for RyZen baked in.
  • coder543 - Tuesday, July 11, 2017 - link

    What similarities? They're extremely different architectures: the CCX model, totally different cache layouts, the Infinity Fabric, Intel having better AVX-256/512 support (IIRC), and so on. I can't think of any obvious similarities.

    I don't think 16.04 is naturally any more optimized for Ryzen than it is for Skylake-SP.
  • CajunArson - Tuesday, July 11, 2017 - link

    Oh please, at the core level RyZen is a blatant copy-n-paste of Haswell, the only exception being that they omitted half the AVX hardware to make their lives easier.

    It's so obvious that if you follow any of the developer threads about optimizing for RyZen, they say to just use the Haswell compiler optimizations, which actually work better than the official RyZen optimization flags.
  • ddriver - Tuesday, July 11, 2017 - link

    Can't tell if this post is funny or sad.
  • CajunArson - Tuesday, July 11, 2017 - link

    It's neither: It's accurate.

    Don't believe me? Look at the differences in performance of the holy 1800X over multiple Linux distros ranging from pretty new (OpenSuse Tumbleweed) to pretty old (Fedora 23 from 2015): http://www.phoronix.com/scan.php?page=article&...

    Nowhere near the variation that we see with Skylake-X, since Haswell was already a solved problem long before RyZen launched.
  • coder543 - Tuesday, July 11, 2017 - link

    Right, of course. Ryzen is a copy-and-paste of Haswell.

    Don't make me laugh.
