Rocket League

Hilariously simple pick-up-and-play games are great fun. I'm a massive fan of the Katamari franchise for that reason: pressing start on a controller and rolling around, picking up things to get bigger, is extremely simple. Until we get a PC version of Katamari that I can benchmark, we'll focus on Rocket League.

Rocket League combines those pick-up-and-play elements, letting users jump into a match with other people (or bots) to play football with cars, with essentially no rules. The title is built on Unreal Engine 3, which is somewhat old at this point, but that allows the game to run on super-low-end systems while still taxing the big ones. Since its release in 2015 it has sold over 5 million copies and has become a fixture at LANs and game shows. Players who train get very serious, competing in teams and leagues; with very few settings to configure, everyone is on a level playing field. Rocket League is quickly becoming one of the favored titles for e-sports tournaments, especially as e-sports contests can be viewed directly from the game interface.

Based on these factors, plus the fact that it is an extremely fun title to load and play, we set out to find the best way to benchmark it. Unfortunately, automatic benchmark modes for games are few and far between. Partly because of this, and partly because it is built on the older Unreal Engine 3, Rocket League does not have one. In this case, we have to develop a consistent run and record the frame rate ourselves.

Read our initial analysis of our Rocket League benchmark on low-end graphics here.

With Rocket League, there is no benchmark mode, so we have to perform a series of automated actions, similar to a racing game having a fixed number of laps. We take the following approach: using Fraps to record the time taken to show each frame (and, from that, the overall frame rates), we use an automation tool to set up a consistent 4v4 bot match on easy, with the system applying a fixed series of inputs throughout the run, such as switching camera angles and driving around.
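
For the curious, a minimal sketch of what such a script can look like is below, written in Python with the pyautogui library; the key bindings, timings and library choice are illustrative assumptions rather than the exact tool we use.

```python
import time

import pyautogui  # sends OS-level key presses to the focused window

# Illustrative schedule of (seconds into the run, key, seconds to hold).
# These bindings and timings are placeholders, not our actual script;
# the point is that every offset is fixed, so each run replays identically.
INPUT_SCHEDULE = [
    (2.0, 'w', 3.0),      # accelerate
    (5.5, 'space', 0.2),  # jump
    (6.0, 'c', 0.1),      # switch camera angle
    (8.0, 'a', 1.5),      # steer left, up the side wall
]

def run_schedule(schedule, duration=240.0):
    """Replay a time-fixed input list over the 4-minute benchmark window."""
    start = time.time()
    for offset, key, hold in schedule:
        delay = start + offset - time.time()
        if delay > 0:
            time.sleep(delay)   # wait for the scheduled moment
        pyautogui.keyDown(key)
        time.sleep(hold)
        pyautogui.keyUp(key)
    leftover = start + duration - time.time()
    if leftover > 0:
        time.sleep(leftover)    # idle out the rest of the recording window

if __name__ == '__main__':
    run_schedule(INPUT_SCHEDULE)
```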

It turns out that this method is nicely indicative of a real bot match: the car drives up walls, boosts, and even puts in the odd assist, save and/or goal, as weird as that sounds for an automated set of commands. To maintain consistency, the commands we apply are not random but time-fixed, and we also keep the map (Aquadome, known to be a tough map for GPUs due to its water and transparency effects) and the car customization constant. We start recording just after a match starts and record for 4 minutes of game time (think five laps of a DIRT: Rally benchmark), with average frame rates, 99th percentile figures and frame times all provided.
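
To show how those headline numbers fall out of the raw data, here is a minimal sketch in Python. It assumes a Fraps-style frametimes log: a CSV with a header row and a cumulative millisecond timestamp per frame in the second column (the exact layout of a given log may differ).

```python
import csv

def read_frame_deltas(path):
    """Per-frame render times (ms) from a frametimes CSV containing a
    cumulative 'Time (ms)' value for every frame rendered."""
    with open(path, newline='') as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        stamps = [float(row[1]) for row in reader]
    # Difference between consecutive timestamps = time each frame took
    return [b - a for a, b in zip(stamps, stamps[1:])]

def summarize(deltas):
    """Average FPS, 99th percentile frame time, and the frame rate
    sustained for 99% of the frames in the run."""
    avg_fps = 1000.0 * len(deltas) / sum(deltas)
    ordered = sorted(deltas)
    # 99% of frames completed in this time or less
    p99_ms = ordered[int(0.99 * (len(ordered) - 1))]
    return {'avg_fps': avg_fps,
            'p99_frame_time_ms': p99_ms,
            'p99_fps': 1000.0 / p99_ms}

# Example usage (file name is hypothetical):
# print(summarize(read_frame_deltas('rocketleague_frametimes.csv')))
```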

The graphics settings for Rocket League come in four broad, generic presets: Low, Medium, High and High FXAA. There are advanced settings for shadows and detail levels; however, for these tests, we keep to the generic presets. For both 1920x1080 and 4K resolutions, we test at the High preset with an unlimited frame cap.

For all our results, we show the average frame rate at 1080p first. Mouse over the other graphs underneath to see 99th percentile frame rates and 'Time Under' graphs, as well as results for other resolutions. All of our benchmark results can also be found in our benchmark engine, Bench.
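
The 'Time Under' graphs referenced above report the share of the run spent below a given frame rate. A short sketch, reusing read_frame_deltas from the previous snippet:

```python
def time_under(deltas, fps_threshold):
    """Fraction of total run time spent on frames slower than the given
    FPS threshold, i.e. frames longer than 1000/threshold milliseconds."""
    cutoff_ms = 1000.0 / fps_threshold
    return sum(d for d in deltas if d > cutoff_ms) / sum(deltas)

# Example (file name is hypothetical): share of the run under 60 and 30 FPS
# deltas = read_frame_deltas('rocketleague_frametimes.csv')
# print(f"Under 60 FPS: {100 * time_under(deltas, 60):.1f}%")
# print(f"Under 30 FPS: {100 * time_under(deltas, 30):.1f}%")
```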

[Graph: MSI GTX 1080 Gaming 8G Performance, 1080p and 4K results]

[Graph: ASUS GTX 1060 Strix 6GB Performance, 1080p and 4K results]

[Graph: Sapphire R9 Fury 4GB Performance, 1080p and 4K results]

[Graph: Sapphire RX 480 8GB Performance, 1080p and 4K results]

Rocket League Conclusions

The map we use in our testing, Aquadome, is known to be strenuous on a system, hence we see lower frame rates than people might expect for Rocket League: we are deliberately covering the worst-case scenario. But the results also show that AMD CPUs and NVIDIA GPUs do not seem to be playing ball with each other, which we have been told is likely related to drivers. The AMD GPUs work fine here regardless of resolution, and both AMD and Intel CPUs get in the mix.

Comments

  • mapesdhs - Monday, July 24, 2017 - link

    2700K, +1.5GHz every time.
  • shabby - Monday, July 24, 2017 - link

    So much for upgrading from a KBL-X to an SKL-X when the motherboard could fry the CPU. Nice going, Intel.
  • Nashiii - Monday, July 24, 2017 - link

    Nice article, Ian. What I will say is that I am a little confused by this comment:

    "Intel wins for the IO and chipset, offering 24 PCIe 3.0 lanes for USB 3.1/SATA/Ethernet/storage, while AMD is limited on that front, having 8 PCIe 2.0 from the chipset."

    You forgot to mention AMD's total PCI-E I/O. It has 24 PCI-E 3.0 lanes, with 4x PCI-E 3.0 going to the chipset, which can be set to 8x PCI-E 2.0 if 5Gbps per lane is enough, i.e. in the case of USB 3.0.

    I have read that Kaby Lake-X only has 16 PCI-E 3.0 lanes native. Not sure about PCH support though...
  • KAlmquist - Monday, July 24, 2017 - link

    With Kaby Lake-X, the only I/O that doesn't go through the chipset is the 16 PCI-E 3.0 lanes you mention. With Ryzen, in addition to what is provided by the chipset, the CPU provides:

    1) Four USB 3.1 connections
    2) Two SATA connections
    3) 18 PCI-E 3.0 lanes, or 20 lanes if you don't use the SATA connections

    So if you just look at the CPU, Ryzen has more connectivity than Kaby Lake-X, but the X299 chipset used with Kaby Lake-X is much more capable (and expensive) than anything in the AMD lineup. Also, the X299 doesn't provide any USB 3.1 ports (or more precisely, 10 Gb per second ports), so those are typically provided by a separate chip, adding to the cost of X299 motherboards.
  • Allan_Hundeboll - Monday, July 24, 2017 - link

    Interesting review with great benchmarks. (I don't understand why so many reviews only report average frames per second.)
    The Ryzen R5 1600 seems to offer great value for money, but I'm a bit puzzled why the slower-clocked R5 beats the higher-clocked R7 in a lot of the 99th percentile benchmarks. I'm guessing it's because the latency delta when moving data from one core to another penalizes the higher core count R7 more?
  • BenSkywalker - Monday, July 24, 2017 - link

    The gaming benchmarks are, uhm..... pretty useless.

    Third tier graphics cards as a starting point, why bother?

    Seems like an awful lot of wasted time. As a note you may want to consider: when testing a new graphics card, you get the fastest CPU you can so we can see what the card is capable of; when testing a new CPU, you get the fastest GPU you can so we can see what the CPU is capable of. The way these benches are constructed, they're pretty useless for those of us who want to know gaming performance.
  • Tetsuo1221 - Monday, July 24, 2017 - link

    Benchmarking at 1080p... enough said. Completely and utterly redundant.
  • Qasar - Tuesday, July 25, 2017 - link

    Why is benchmarking @ 1080p completely and utterly redundant?????
  • meacupla - Tuesday, July 25, 2017 - link

    I don't know that guy's particulars, but to me, using X299 to game at 1080p seems like a waste.
    If I was going to throw down that kind of money, I would want to game at 1440p or 4K.
  • silverblue - Tuesday, July 25, 2017 - link

    Yes, but 1080p shifts the bottleneck towards the CPU.
