Rocket League

Hilariously simple pick-up-and-play games are great fun. I'm a massive fan of the Katamari franchise for that reason: pressing start on a controller and rolling around, picking things up to get bigger, is about as simple as it gets. Until we get a PC version of Katamari that I can benchmark, we'll focus on Rocket League.

Rocket League nails the pick-up-and-play formula: users can jump straight into a match with other people (or bots) and play football with rocket-powered cars, with essentially no rules to learn. The title is built on Unreal Engine 3, which is somewhat old at this point, but that means it runs on super-low-end systems while still being able to tax the big ones. Since its release in 2015 it has sold over 5 million copies and has become a fixture at LANs and game shows. Players who put in the training get very serious, competing in teams and leagues, and because there are very few settings to configure, everyone is on a level playing field. Rocket League is quickly becoming one of the favored titles for e-sports tournaments, especially as tournament matches can be viewed directly from the game interface.

Based on these factors, plus the fact that it is an extremely fun title to load and play, we set out to find the best way to benchmark it. Unfortunately, built-in benchmark modes are few and far between, and Rocket League, partly because it is built on the Unreal 3 engine, does not have one. That means we have to develop a consistent run ourselves and record the frame rate.

Read our initial analysis of our Rocket League benchmark on low-end graphics here.

With Rocket League, there is no benchmark mode, so we have to perform a series of automated actions, much like running a fixed number of laps in a racing game. We take the following approach: using Fraps to record the time taken to show each frame (and the overall frame rate), we use an automation tool to set up a consistent 4v4 bot match on easy, with the system applying a series of inputs throughout the run, such as switching camera angles and driving around.
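Conceptually, the scripted run boils down to replaying a fixed, time-stamped list of inputs every time. The sketch below shows the idea in Python; the key bindings, timings and the use of pyautogui are purely illustrative, as our actual harness is a separate automation tool driving the game.

```python
# Illustrative sketch of a time-fixed input schedule (not our actual tooling;
# keys and timings are placeholders, and the real run uses a dedicated
# automation tool alongside Fraps).
import time
import pyautogui  # sends keystrokes to the focused game window

# (seconds_into_run, key, hold_duration_in_seconds)
INPUT_SCHEDULE = [
    (2.0,  'w',     3.0),   # accelerate
    (6.0,  'space', 0.2),   # jump
    (8.0,  'c',     0.1),   # switch camera angle
    (10.0, 'a',     1.5),   # steer left
    # ... schedule continues for the full run
]

def run_schedule(schedule):
    start = time.time()
    for offset, key, hold in schedule:
        # wait until the scheduled offset so every run is identical
        delay = offset - (time.time() - start)
        if delay > 0:
            time.sleep(delay)
        pyautogui.keyDown(key)
        time.sleep(hold)
        pyautogui.keyUp(key)

if __name__ == '__main__':
    run_schedule(INPUT_SCHEDULE)
```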

It turns out that this method is nicely indicative of a real bot match, driving up walls, boosting and even putting in the odd assist, save and/or goal, as weird as that sounds for an automated set of commands. To maintain consistency, the commands we apply are not random but time-fixed, and we also keep the map the same (Aquadome, known to be a tough map for GPUs due to water/transparency) and the car customization constant. We start recording just after a match starts, and record for 4 minutes of game time (think 5 laps of a DIRT: Rally benchmark), with average frame rates, 99th percentile and frame times all provided.
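For reference, the headline numbers come straight out of the recorded frame times. A minimal sketch of that post-processing is below, assuming a Fraps-style frame-times log with one cumulative millisecond timestamp per frame; the file name and column layout are illustrative.

```python
# Sketch: derive average FPS and a 99th percentile figure from a
# Fraps-style frame-times CSV (cumulative timestamps in milliseconds).
import csv

def load_frame_times(path):
    """Return per-frame durations in milliseconds."""
    with open(path, newline='') as f:
        reader = csv.reader(f)
        next(reader)  # skip header row
        stamps = [float(row[1]) for row in reader if len(row) > 1]
    # convert cumulative timestamps into individual frame durations
    return [b - a for a, b in zip(stamps, stamps[1:])]

def summarise(frame_ms):
    total_s = sum(frame_ms) / 1000.0
    avg_fps = len(frame_ms) / total_s
    # 99th percentile frame time, expressed as an FPS figure
    worst = sorted(frame_ms)[int(len(frame_ms) * 0.99)]
    return avg_fps, 1000.0 / worst

if __name__ == '__main__':
    frames = load_frame_times('rocketleague_frametimes.csv')
    avg, p99 = summarise(frames)
    print(f'Average: {avg:.1f} FPS, 99th percentile: {p99:.1f} FPS')
```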

The graphics options for Rocket League come in four broad, generic presets: Low, Medium, High and High FXAA. There are advanced settings for shadows and detail levels; however, for these tests we keep to the presets. For both 1920x1080 and 4K resolutions, we test at the High preset with an unlimited frame cap.
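Summarised as a simple configuration table (a sketch only; the game is configured through its in-game menus, not a file like this), the fixed parameters of the run look like the following:

```python
# Fixed parameters of the Rocket League benchmark run (illustrative structure).
ROCKET_LEAGUE_TEST = {
    'preset': 'High',            # of Low / Medium / High / High FXAA
    'frame_cap': None,           # unlimited frame rate
    'map': 'Aquadome',           # constant map, heavy on water/transparency
    'match': '4v4 bots, easy',   # consistent bot match
    'duration_s': 240,           # four minutes of recorded game time
    'resolutions': ['1920x1080', '3840x2160'],
}
```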

All of our benchmark results can also be found in our benchmark engine, Bench.

MSI GTX 1080 Gaming 8G Performance [charts: 1080p, 4K]

ASUS GTX 1060 Strix 6G Performance [charts: 1080p, 4K]

Sapphire Nitro R9 Fury 4G Performance [charts: 1080p, 4K]

Sapphire Nitro RX 480 8G Performance [charts: 1080p, 4K]

With Ryzen, we encountered some odd performance issues when using NVIDIA-based video cards that caused those cards to significantly underperform. Equally strangely, however, the issues we have with Ryzen on Rocket League with NVIDIA GPUs seem to almost vanish when using Threadripper. Again, there are still no easy wins here, as Intel seems to take Rocket League in its stride, but SMT-off mode does help the 1950X. The Time Under graphs give some cause for concern, with the 1950X consistently sitting at the bottom of them.

347 Comments

  • BOBOSTRUMF - Friday, August 11, 2017 - link

Actually, Intel's 140W parts can consume more than 210W if you want the top unrestricted performance. Read the Tom's Hardware review.
  • Filiprino - Thursday, August 10, 2017 - link

How come WinRAR is faster with the 10-core Broadwell than with the 10-core Skylake?
    What did they change in Cinebench going from 10 to 11.5? Threadripper is the faster CPU in Cinebench 10, but in the newer one it is not. Then again, Cinebench 15 shows TR as the faster CPU. Is this benchmark reliable?

How come Chromium compilation is so slow? Others have pointed out that they get much better scaling (linear speedup). That makes sense because compilation basically consists of launching isolated processes (compiler instances). Is this related to the segfaulting problem under GNU/Linux systems?

For encoding, I would start using FFmpeg when benchmarking this many cores. I recall FFmpeg being faster than Handbrake for the same number of cores. Maybe the GUI loop interrupts the process in a performance-unfriendly way; too much overhead. HPC workloads can suffer even from the network driver generating too many interrupts (hence the Linux tickless configuration).

I have read the SYSMARK results and I find it strange that TR's media results are slower than its data results, with TR slower than Intel in media and faster than Intel in data. Isn't SYSMARK from BAPCo? (http://www.pcworld.com/article/3023373/hardware/am... You already point it out in the article, sorry.

How come the R9 Fury in Shadow of Mordor has AMD and Intel CPUs running consistently at two different frame rates (~95 vs ~103)?

    The same but with the GTX 1080. Both cases happen regardless of the Intel architecture (Haswell, Broadwell and Skylake all have the same FPS value).

What happens with the NVIDIA driver in Rocket League? A bad caching algorithm (TR has more cores/threads -> more cache available to store GPU command data)? You say you had issues, but what are your thoughts?
    How come GTA V has those Under 60 and Under 30 FPS graphs, given that the game is available for the PS4 and Xbox One (it has already been optimized for a two-CCX CPU, or at least there is a version for that case)? Nevertheless, with NVIDIA cards, 2 seconds out of 90 is not that much.

My guess is that all these benchmarks are programmed using threading libraries from the "good old times", given the bad scaling. In some cases there is architecture-specific targeted code, and the small datasets being used do not help either. I also would not make a case out of a benchmark programmed with code that has false sharing (¡:O!)

    Currently for gaming, it seems that the easiest way is to have a Virtual Machine with PCIe passthrough pinned to one of the MCM dies.

    As a suggestion to Anandtech, I would like to see more free (libre) software being used to measure CPU performance, compiling the benchmarks from source against the target CPU architecture. Something like Phoronix. Maybe you could use PTS (Phoronix Test Suite).
  • Filiprino - Thursday, August 10, 2017 - link

Positive things: Threadripper stays under its TDP for power consumption, while Intel is more power hungry; the Intel 16-core might go through the roof in power consumption.
    Good gaming performance. Intel is generally better, but TR still offers a beefy CPU for that too, losing only a few frames.
    Strong rendering performance.
    Strong video encoding performance.

When you talk about IPC, it would be useful to measure it with profiling tools, not just by getting "points", "milliseconds" and "seconds".
    Seeing how these benchmarks do not scale much beyond 10 cores, you might realize software has to get better.
  • Chad - Thursday, August 10, 2017 - link

    Second ffmpeg test (pretty please!)
  • mapesdhs - Thursday, August 10, 2017 - link


Ian, a query about the CPU Legacy Tests: why do you reckon the 1920X beats both the 1950X and 1950X-G for CB 11.5 MT, yet the latter two win out for CB 10 MT? Is there a max-thread limit in V11.5? Filiprino asked much the same above.

    "...and so losing half the threads in Game Mode might actually be a detriment to a workstation implementation."

    Isn't that the whole point though? For most workstation tasks, don't use Game Mode. There will be exceptions of course, but in general...

    Btw, where's C-ray? ;)

    Ian.
  • Da W - Thursday, August 10, 2017 - link

    ALL OF YOU COMPLAINERS: START A TECH REVIEW WEBSITE YOURSELVES AND STFU!
  • hansmuff - Thursday, August 10, 2017 - link

    Don't read the comments. Also, a lot of the "complaints" are read by Ryan and he actually addresses them and his articles improve as a result of criticism. He's never been bad, but you can see an ascension in quality over time, along with his partaking in critical commentary.
    IOW, we don't really need a referee.
  • hansmuff - Thursday, August 10, 2017 - link

    And of course I mean Ian, not Ryan.
  • mapesdhs - Friday, August 11, 2017 - link

    It is great that he replies at all, and does so to quite a lot of the posts too.
  • Kepe - Thursday, August 10, 2017 - link

    Wait a second, according to AMD and all the other articles about the 1950X and Game Mode, game mode disables all the physical cores of one of the CPU clusters and leaves SMT on, so you get 8 cores and 16 threads. It doesn't just turn off SMT for a 16 core / 16 thread setup.

    AMD's info here: https://community.amd.com/community/gaming/blog/20...
