Rocket League

Hilariously simple pick-up-and-play games are great fun. I'm a massive fan of the Katamari franchise for that reason: pressing start on a controller and rolling around, picking things up to get bigger, is about as simple as it gets. Until we get a PC version of Katamari that I can benchmark, we'll focus on Rocket League.

Rocket League combines the elements of pick-up-and-play, allowing users to jump into a game with other people (or bots) to play football with cars, with zero rules. The title is built on Unreal Engine 3, which is somewhat old at this point, but it allows the game to run on super-low-end systems while still taxing the big ones. Since its release in 2015 it has sold over 5 million copies and has become a fixture at LANs and game shows. Players who train get very serious, competing in teams and leagues, and with very few settings to configure, everyone is on the same level. Rocket League is quickly becoming one of the favored titles for e-sports tournaments, especially as contests can be viewed directly from the game interface.

Based on these factors, plus the fact that it is an extremely fun title to load and play, we set out to find the best way to benchmark it. Unfortunately, automatic benchmark modes for games are few and far between. Partly because of this, and partly because it is built on Unreal Engine 3, Rocket League does not have a benchmark mode, so in this case we have to develop a consistent run and record the frame rate ourselves.

Read our initial analysis of our Rocket League benchmark on low-end graphics here.

With Rocket League, there is no benchmark mode, so we have to perform a series of automated actions, similar to a racing game having a fixed number of laps. We take the following approach: Using Fraps to record the time taken to show each frame (and the overall frame rates), we use an automation tool to set up a consistent 4v4 bot match on easy, with the system applying a series of inputs throughout the run, such as switching camera angles and driving around.
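
To illustrate the idea (this is not our actual automation tool, and the key bindings, timings and the pyautogui library are all assumptions for the sketch), a time-fixed input replay can be as simple as:

```python
# Minimal sketch of a time-fixed input replay, assuming the pyautogui
# library and placeholder key bindings; the real benchmark uses its own
# automation tool, so the keys and timings here are illustrative only.
import time
import pyautogui

# (start_time_s, duration_s, key) -- the same list is replayed every run,
# so the inputs are deterministic rather than random.
SCRIPTED_INPUTS = [
    (0.0,  3.0, 'w'),      # accelerate off the starting spot
    (3.0,  0.2, 'space'),  # jump
    (5.0,  2.5, 'a'),      # steer left along the wall
    (8.0,  0.1, 'c'),      # switch camera angle
    (10.0, 4.0, 'w'),      # drive toward the ball
]

def run_scripted_match(total_seconds=240):
    """Replay the scripted inputs, then idle until the 4-minute mark."""
    start = time.monotonic()
    for begin, duration, key in SCRIPTED_INPUTS:
        # Wait until the scheduled start time for this input.
        time.sleep(max(0.0, begin - (time.monotonic() - start)))
        pyautogui.keyDown(key)
        time.sleep(duration)
        pyautogui.keyUp(key)
    # Let the bot match play out until the recording window closes.
    remaining = total_seconds - (time.monotonic() - start)
    if remaining > 0:
        time.sleep(remaining)

if __name__ == '__main__':
    run_scripted_match()
```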

It turns out that this method is nicely indicative of a real bot match, driving up walls, boosting and even putting in the odd assist, save and/or goal, as weird as that sounds for an automated set of commands. To maintain consistency, the commands we apply are not random but time-fixed, and we also keep the map the same (Aquadome, known to be a tough map for GPUs due to water/transparency) and the car customization constant. We start recording just after a match starts, and record for 4 minutes of game time (think 5 laps of a DIRT: Rally benchmark), with average frame rates, 99th percentile and frame times all provided.
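
For those curious how the headline numbers fall out of the raw data, here is a rough sketch; it assumes a simple two-column log of frame number and cumulative milliseconds, which may not match the exact Fraps file layout:

```python
# Rough sketch of turning a frame-time log into the reported metrics,
# assuming a CSV with a header row and two columns: frame number and
# cumulative time in milliseconds since the start of recording.
import csv

def load_frame_times(path):
    """Return per-frame render times in milliseconds."""
    with open(path, newline='') as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        stamps = [float(row[1]) for row in reader]
    # Difference of consecutive cumulative timestamps = time per frame.
    return [b - a for a, b in zip(stamps, stamps[1:])]

def summarize(frame_times_ms):
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s
    # 99th percentile frame time: 99% of frames render at least this fast.
    ordered = sorted(frame_times_ms)
    p99_ms = ordered[int(0.99 * (len(ordered) - 1))]
    return avg_fps, 1000.0 / p99_ms  # average FPS, 99th percentile FPS

if __name__ == '__main__':
    times = load_frame_times('rocket_league_frametimes.csv')
    avg, p99 = summarize(times)
    print(f'Average: {avg:.1f} FPS, 99th percentile: {p99:.1f} FPS')
```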

The graphics settings for Rocket League come in four broad, generic presets: Low, Medium, High and High FXAA. There are advanced settings for shadows and details; however, for these tests we keep to the generic presets. For both 1920x1080 and 4K resolutions, we test at the High preset with an unlimited frame cap.

For all our results, we show the average frame rate at 1080p first. Mouse over the other graphs underneath to see 99th percentile frame rates and 'Time Under' graphs, as well as results for other resolutions. All of our benchmark results can also be found in our benchmark engine, Bench.
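
The 'Time Under' graphs report what fraction of the run is spent below a given frame rate; a small sketch of that calculation, reusing the hypothetical frame-time list from the previous snippet, might look like this:

```python
# Sketch of a 'Time Under' figure: the share of the run spent on frames
# slower than a target frame rate (e.g. 60 FPS = 16.67 ms per frame).
def time_under(frame_times_ms, fps_threshold=60.0):
    limit_ms = 1000.0 / fps_threshold
    slow_ms = sum(t for t in frame_times_ms if t > limit_ms)
    return 100.0 * slow_ms / sum(frame_times_ms)  # percentage of run time

# Example: time_under(load_frame_times('rocket_league_frametimes.csv'), 60)
```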

MSI GTX 1080 Gaming 8G Performance

[Interactive graphs: 1080p and 4K results]

ASUS GTX 1060 Strix 6GB Performance

[Interactive graphs: 1080p and 4K results]

Sapphire R9 Fury 4GB Performance

[Interactive graphs: 1080p and 4K results]

Sapphire RX 480 8GB Performance

[Interactive graphs: 1080p and 4K results]

Rocket League Conclusions

The map we use in our testing, Aquadome, is known to be strenuous on a system, hence we see frame rates lower than what people expect for Rocket League: we are deliberately covering the worst-case scenario. But the results also show that AMD CPUs and NVIDIA GPUs do not seem to be playing ball with each other, which we've been told is likely related to drivers. The AMD GPUs work fine here regardless of resolution, and both AMD and Intel CPUs get in the mix.

Comments

  • Spoelie - Monday, July 24, 2017 - link

    On the first page, I assume the green highlight in the processor charts signifies an advantage for that side. Why are the cores/threads rows in the Ryzen side not highlighted? Or is 8/16 not better than 4/8?
  • Ian Cutress - Monday, July 24, 2017 - link

    Derp. Fixed.
  • Gothmoth - Monday, July 24, 2017 - link

    intel must really push money into anandtech. :) so many interesting things to report about and they spend time on a niche product.....
  • Ian Cutress - Monday, July 24, 2017 - link

    This has been in the works for a while because our CPU failed. I work on the CPU stuff - other editors work on other things ;) If you've got an idea, reach out to us. I can never guarantee anything (I've got 10+ ideas that I don't have time to do) but if it's interesting we'll see what we can do. Plus it helps us direct what other content we should be doing.
  • halcyon - Monday, July 24, 2017 - link

    This is an amazing amount of benchmarking with many options. thank you. Must have been a lot of work :-)
    The obvious idea is this:

    Gaming (modern CPU limited and most played games) & Productive work (rendering, encoding, 4K video work, R/statistics/Matlab)

    Test those under 4c/8t and 8c/16t CPUs both from AMD and Intel - all at most common non-esoteric overlock levels (+/-10%).

    This is what many of your readers want:

    How much does c. 5 GHz 4c/8t do vs 4.x GHz 8c/16t when taken to its everyday stable extreme, in modern games / productivity.

    The web is already full of benchmarks at stock speed. Or overclocked Ryzen R 7 against stock Intel, or OC intel against overclocked Ryzen - and the game/app selections are not very varied.

    The result is a simple graph that plots the (assumed) linear trend in performance/price and shows any deviations below/above the linear trend.

    Of course, if you already have the Coffee Lake 6c/12t sample, just skip the 4c/8t and go with a 6c/12t vs 8c/16t comparison.

    Thanks for all the hard work throughout all these years!
  • Ryan Smith - Monday, July 24, 2017 - link

    "so many interesting things to report about and they spend time on a niche product....."

    What can we say? CPUs have been our favorite subject for the last 20 years.=)
  • user_5447 - Monday, July 24, 2017 - link

    "For 2017, Intel is steering the ship in a slightly different direction, and launching the latest microarchitecture on the HEDT platform."

    Skylake-S, Kaby Lake-S and Kaby Lake-X share the same microarchitecture, right?
    Then Skylake-X is newer microarchitecture than Kaby Lake-X (changes to L2 and L3 caches, AVX-512).
  • Ian Cutress - Monday, July 24, 2017 - link

    Correct me if I'm wrong: SKL-SP cores are derived from SKL-S, and are 14nm. KBL-S/X are 14+, and share most of their design with SKL-S, with the main changes being power related. Underneath there's no real performance difference (except Speed Shift v2), but Intel classifies Kaby Lake as its latest non-AVX512 IPC microarchitecture.
  • user_5447 - Monday, July 24, 2017 - link

    Kaby Lake-S has some errata fixes compared to Skylake-S. AFAIK, this is the only change to the CPU core (besides the Speed Shift v2, if it even involved hardware changes).
    David Kanter says Skylake-X/EP is 14+ nm http://www.realworldtech.com/forum/?threadid=16889...
  • extide - Wednesday, July 26, 2017 - link

    I have a buddy who works in the fabs -- SKL-X is still on plain 14nm
