Rocket League

Hilariously simple pick-up-and-play games are great fun. I'm a massive fan of the Katamari franchise for that reason: pressing start on a controller and rolling around, picking things up to get bigger, is extremely simple. Until we get a PC version of Katamari that I can benchmark, we'll focus on Rocket League.

Rocket League combines those pick-up-and-play elements, letting users jump into a match with other people (or bots) to play football with cars and essentially zero rules. The title is built on Unreal Engine 3, which is somewhat old at this point, but that allows the game to run on super-low-end systems while still taxing the big ones. Since its release in 2015 it has sold over 5 million copies and seems to be a fixture at LANs and game shows. Players who take it seriously train hard, competing in teams and leagues, and with very few settings to configure everyone is on the same level. Rocket League is quickly becoming one of the favored titles for e-sports tournaments, especially as e-sports contests can be viewed directly from the game interface.

Based on these factors, plus the fact that it is an extremely fun title to load and play, we set out to find the best way to benchmark it. Unfortunately, automatic benchmark modes are few and far between in games generally, and Rocket League, partly by virtue of being built on Unreal Engine 3, is no exception: it has no benchmark mode. In this case, we have to develop a consistent run and record the frame rate ourselves.

Read our initial analysis of our Rocket League benchmark on low-end graphics here.

With Rocket League, there is no benchmark mode, so we have to perform a series of automated actions, similar to a racing game having a fixed number of laps. We take the following approach: using Fraps to record the time taken to show each frame (and the overall frame rates), we run an automation tool that sets up a consistent 4v4 bot match on easy and applies a series of inputs throughout the run, such as switching camera angles and driving around.
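To illustrate the idea, here is a minimal sketch of what a time-fixed input script can look like. It assumes a Python automation library such as pyautogui can deliver input to the game window (some titles need a lower-level input tool); the keys and timings below are hypothetical, not our actual script.

```python
# Hypothetical time-fixed input schedule for a bot-match benchmark run.
# Assumes pyautogui input reaches the game; keys/timings are illustrative only.
import time
import pyautogui

# (seconds into the run, key to tap) - fixed so every run is identical
INPUT_SCHEDULE = [
    (5.0,  "w"),      # start driving forward
    (12.0, "space"),  # jump
    (20.0, "c"),      # switch camera angle
    (35.0, "shift"),  # boost
    (60.0, "c"),      # switch camera back
]

def run_schedule():
    start = time.time()
    for offset, key in INPUT_SCHEDULE:
        # sleep until the scheduled offset, then tap the key
        delay = offset - (time.time() - start)
        if delay > 0:
            time.sleep(delay)
        pyautogui.press(key)

if __name__ == "__main__":
    run_schedule()
```

Because the schedule is a fixed list rather than random input, every run applies the same actions at the same points in the match.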

It turns out that this method is nicely indicative of a real bot match, driving up walls, boosting and even putting in the odd assist, save and/or goal, as weird as that sounds for an automated set of commands. To maintain consistency, the commands we apply are not random but time-fixed, and we also keep the map the same (Aquadome, known to be a tough map for GPUs due to water/transparency) and the car customization constant. We start recording just after a match starts, and record for 4 minutes of game time (think 5 laps of a DIRT: Rally benchmark), with average frame rates, 99th percentile and frame times all provided.
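As a rough sketch of how the reported numbers are derived, the following assumes a Fraps-style per-frame log (frame number plus a cumulative timestamp in milliseconds); the file name and exact column layout are assumptions for illustration.

```python
# Compute average FPS and 99th percentile figures from a Fraps-style
# frametimes CSV ("Frame, Time (ms)" with cumulative timestamps).
# File name and column layout are assumptions for illustration.
import csv

def load_frame_times(path):
    """Return per-frame durations in milliseconds."""
    timestamps = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip header row
        for row in reader:
            timestamps.append(float(row[1]))
    # difference between consecutive cumulative timestamps = frame time
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def summarize(frame_times_ms):
    avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
    # 99th percentile frame time marks the slow end of the run
    ordered = sorted(frame_times_ms)
    p99_ms = ordered[int(0.99 * (len(ordered) - 1))]
    return avg_fps, 1000.0 / p99_ms, p99_ms

if __name__ == "__main__":
    times = load_frame_times("rocketleague_frametimes.csv")
    avg_fps, p99_fps, p99_ms = summarize(times)
    print(f"Average FPS: {avg_fps:.1f}")
    print(f"99th percentile: {p99_fps:.1f} FPS ({p99_ms:.2f} ms)")
```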

The graphics settings for Rocket League come in four broad, generic settings: Low, Medium, High and High FXAA. There are advanced settings in place for shadows and details; however, for these tests, we keep to the generic settings. For both 1920x1080 and 4K resolutions, we test at the High preset with an unlimited frame cap.

All of our benchmark results can also be found in our benchmark engine, Bench.

[Graphs: MSI GTX 1080 Gaming 8G Performance, 1080p and 4K]

[Graphs: Sapphire Nitro R9 Fury 4G Performance, 1080p and 4K]

[Graphs: Sapphire Nitro RX 480 8G Performance, 1080p and 4K]

With Ryzen, we encountered some odd performance issues when using NVIDIA-based video cards that caused those cards to significantly underperform. Equally strangely, however, the issues we see with Ryzen in Rocket League on NVIDIA GPUs seem to almost vanish when using Threadripper. There are still no easy wins here, as Intel takes Rocket League in its stride, but Game mode still helps the 1950X. The Time Under graphs give some cause for concern, with the 1950X consistently sitting at the bottom of that graph.

Comments

  • peevee - Friday, August 18, 2017 - link

    Compilation scales even on multi-CPU machines, despite the much higher communication latencies. In general, compilers running in parallel under MSVC (with MSBuild) run in different processes; they don't write into each other's address spaces and so do not need to communicate at all.

    Quit making excuses. You are doing something wrong. I have been doing development for multi-CPU machines, and ON multi-CPU machines, for a very long time. YOU are doing something wrong.
  • peevee - Friday, August 18, 2017 - link

    BTW, when you enable NUMA on TR, does Windows 10 recognize it as one CPU group or 2?
  • gzunk - Saturday, August 19, 2017 - link

    It recognizes it as two NUMA nodes.
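For readers wanting to check this on their own system, a minimal sketch of querying Windows for processor groups and NUMA nodes via ctypes (Windows-only, using the documented kernel32 functions; error handling kept deliberately simple):

```python
# Query Windows for the number of processor groups and NUMA nodes.
# Windows-only sketch using kernel32's GetActiveProcessorGroupCount
# and GetNumaHighestNodeNumber.
import ctypes

kernel32 = ctypes.windll.kernel32

def processor_groups():
    # Number of active processor groups on the system
    return kernel32.GetActiveProcessorGroupCount()

def numa_nodes():
    highest = ctypes.c_ulong(0)
    if not kernel32.GetNumaHighestNodeNumber(ctypes.byref(highest)):
        raise OSError("GetNumaHighestNodeNumber failed")
    return highest.value + 1  # node numbers start at 0

if __name__ == "__main__":
    print(f"Processor groups: {processor_groups()}")
    print(f"NUMA nodes: {numa_nodes()}")
```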
  • Alexey291 - Saturday, September 2, 2017 - link

    They aren't going to do anything.

    All their 'scientific benchmarking' is running the same macro again and again on different hardware setups.

    What you are suggesting requires actual work and thought.
  • Arbie - Thursday, August 17, 2017 - link

    As noted by edzieba, the correct phrase (and I'm sure it has a very British heritage) is "The proof of the pudding is in the eating".

    Another phrase needing repair: "multithreaded tests were almost halved to the 1950X". Was this meant to be something like "multithreaded tests were almost half of those in Creator mode" (?).

    Technically, of course, your articles are really well-done; thanks for all of them.
  • fanofanand - Thursday, August 17, 2017 - link

    Thank you for listening to the readers and re-testing this, Ian!
  • ddriver - Thursday, August 17, 2017 - link

    To sum it up - "game mode" is moronic. It is moronic for amd to push it, and to push TR as a gaming platform, which is clearly neither its peak, nor even its strong point. It is even more moronic for people to spend more than double the money just to have half of the CPU disabled, and still get worse performance than a ryzen chip.

    TR is great for prosumers, and represents a tremendous value and performance at a whole new level of affordability. It will do for games if you are a prosumer who occasionally games, but if you are a gamer it makes zero sense. Having AMD push it as a gaming platform only gives "people" the excuse to whine how bad it is at it.

    Also, I cannot shake the feeling there should be a better way to limit scheduling to half the chip for games without having to disable the rest, so it is still usable to the rest of the system.
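That kind of soft partitioning can in fact be done per process with CPU affinity rather than by disabling cores; a minimal sketch using the psutil library is below. The process name and logical-CPU numbering are assumptions for illustration, not a recommended configuration.

```python
# Pin a game process to the first sixteen logical CPUs of a
# Threadripper-style chip without disabling the rest of the cores.
# Process name and CPU numbering are assumptions for illustration.
import psutil

GAME_EXE = "RocketLeague.exe"      # hypothetical target process
FIRST_DIE = list(range(0, 16))     # logical CPUs 0-15 (assumed layout)

def pin_game_to_first_die():
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == GAME_EXE:
            # Restrict scheduling for this process; all cores stay online
            proc.cpu_affinity(FIRST_DIE)
            print(f"Pinned PID {proc.pid} to CPUs {FIRST_DIE}")

if __name__ == "__main__":
    pin_game_to_first_die()
```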
  • Gothmoth - Thursday, August 17, 2017 - link

    first coders should do their job.. that is the main problem today. lazy and uncompetent coders.
  • eriohl - Thursday, August 17, 2017 - link

    Of course you could limit thread scheduling at the software level. But it seems to me that there is a perfectly reasonable explanation why Microsoft and the game developers haven't been spending much time optimizing for running games on systems with NUMA.
  • HomeworldFound - Thursday, August 17, 2017 - link

    You can't call a coder that doesn't anticipate a 16 core 32 thread CPU lazy. The word is incompetent btw. I'd like to see you make a game worth millions of dollars and account for this processor, heck any processor with more than six cores.
