Rocket League

Hilariously simple pick-up-and-play games are great fun. I'm a massive fan of the Katamari franchise for that reason: pressing start on a controller and rolling around, picking things up to get bigger, is about as simple as it gets. Until we get a PC version of Katamari that I can benchmark, we'll focus on Rocket League.

Rocket League embodies that pick-up-and-play element, letting users jump into a match with other people (or bots) to play football with cars, with zero rules. The title is built on Unreal Engine 3, which is somewhat old at this point, but that allows the game to run on super-low-end systems while still taxing the big ones. Since its release in 2015, it has sold over five million copies and has become a fixture at LANs and game shows. Users who train get very serious, playing in teams and leagues; with very few settings to configure, everyone is on a level playing field. Rocket League is quickly becoming one of the favored titles for e-sports tournaments, especially as e-sports contests can be viewed directly from the game interface.

Based on these factors, plus the fact that it is an extremely fun title to load and play, we set out to find the best way to benchmark it. Unfortunately, automatic benchmark modes in games are few and far between. Partly because of this, and partly because it is built on the Unreal 3 engine, Rocket League does not have a benchmark mode, so we have to develop a consistent run and record the frame rate ourselves.

Read our initial analysis of our Rocket League benchmark on low-end graphics here.

With Rocket League, there is no benchmark mode, so we have to perform a series of automated actions, similar to a racing game having a fixed number of laps. We take the following approach: Using Fraps to record the time taken to show each frame (and the overall frame rates), we use an automation tool to set up a consistent 4v4 bot match on easy, with the system applying a series of inputs throughout the run, such as switching camera angles and driving around.
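
As a rough sketch of what "a series of inputs applied at fixed times" looks like, the snippet below replays a hard-coded input schedule. It is illustrative only: pyautogui stands in for the automation tool we use, and the keys and timings are placeholder values, not the ones from our runs.

```python
import time
import pyautogui  # stand-in for the actual automation tool; any input injector works

# Time-fixed schedule: (seconds after the match starts, key, seconds to hold).
# These entries are placeholders to show the idea, not our real input list.
SCHEDULE = [
    (2.0, "up",    1.5),   # accelerate toward the ball
    (4.0, "space", 0.1),   # jump
    (6.0, "c",     0.1),   # switch camera mode
    (8.0, "left",  0.8),   # turn
    # ...continued for the full four-minute run
]

def run_schedule(schedule):
    start = time.time()
    for at, key, hold in schedule:
        # Sleep until the scheduled timestamp so every run is identical
        delay = at - (time.time() - start)
        if delay > 0:
            time.sleep(delay)
        pyautogui.keyDown(key)
        time.sleep(hold)
        pyautogui.keyUp(key)

if __name__ == "__main__":
    run_schedule(SCHEDULE)
```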

It turns out that this method is nicely indicative of a real bot match: the car drives up walls, boosts, and even puts in the odd assist, save and/or goal, as weird as that sounds for an automated set of commands. To maintain consistency, the commands we apply are not random but time-fixed, and we also keep the map the same (Aquadome, known to be a tough map for GPUs due to water/transparency) and the car customization constant. We start recording just after a match starts, and record for 4 minutes of game time (think 5 laps of a DIRT: Rally benchmark), with average frame rates, 99th percentiles and frame times all provided.
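
For reference, those summary numbers can be reproduced from a Fraps frametimes log with a few lines of Python. This is a minimal sketch rather than our actual tooling: it assumes the format Fraps typically writes (a frame index plus a cumulative timestamp in milliseconds), and the filename is only an example.

```python
import csv

def load_frame_times(path):
    """Read a Fraps frametimes CSV and return per-frame durations in ms.
    Assumes two columns (frame index, cumulative time in milliseconds)."""
    stamps = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)                      # skip the header row
        for row in reader:
            stamps.append(float(row[1]))  # cumulative time of each frame
    # Difference of consecutive timestamps = time each frame was on screen
    return [b - a for a, b in zip(stamps, stamps[1:])]

def summarise(frame_times_ms):
    """Average FPS and 99th-percentile FPS, matching the metrics quoted above."""
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s
    # 99th-percentile frame time -> the frame rate the slowest 1% of frames drop to
    worst_ms = sorted(frame_times_ms)[int(0.99 * len(frame_times_ms))]
    return avg_fps, 1000.0 / worst_ms

if __name__ == "__main__":
    times = load_frame_times("rocketleague frametimes.csv")  # example filename
    avg, p99 = summarise(times)
    print(f"Average: {avg:.1f} FPS, 99th percentile: {p99:.1f} FPS")
```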

The graphics settings for Rocket League come in four broad, generic settings: Low, Medium, High and High FXAA. There are advanced settings in place for shadows and details; however, for these tests, we keep to the generic settings. For both 1920x1080 and 4K resolutions, we test at the High preset with an unlimited frame cap.

All of our benchmark results can also be found in our benchmark engine, Bench.

MSI GTX 1080 Gaming 8G Performance (1080p, 4K)

ASUS GTX 1060 Strix 6G Performance (1080p, 4K)

Sapphire Nitro R9 Fury 4G Performance (1080p, 4K)

Sapphire Nitro RX 480 8G Performance (1080p, 4K)

With Ryzen, we encountered some odd performance issues when using NVIDIA-based video cards that caused those cards to significantly underperform. Equally strangely, the issues we see with Ryzen on Rocket League with NVIDIA GPUs seem to almost vanish when using Threadripper. Again, there are still no easy wins here, as Intel seems to take Rocket League in its stride, but SMT-off mode still helps the 1950X. The Time Under graphs give some cause for concern, with the 1950X consistently sitting at the bottom of that graph.
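
For clarity on what the Time Under graphs measure, the metric can be thought of as the share of the run spent on frames slower than a given frame rate. A minimal sketch, assuming per-frame times in milliseconds as produced by the earlier script:

```python
def time_under(frame_times_ms, threshold_fps):
    """Fraction of the run spent on frames slower than the threshold.
    A frame counts as 'under' when it took longer than
    1000 / threshold_fps milliseconds to render."""
    cutoff_ms = 1000.0 / threshold_fps
    slow_ms = sum(t for t in frame_times_ms if t > cutoff_ms)
    return slow_ms / sum(frame_times_ms)

# Example: share of the four-minute run spent under 60 FPS
# print(f"{100 * time_under(times, 60):.1f}% of the run under 60 FPS")
```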

Comments

  • mapesdhs - Friday, August 11, 2017 - link

    And consoles are on the verge of moving to many-core main CPUs. The inevitable dev change will spill over into PC gaming.
  • RoboJ1M - Friday, August 11, 2017 - link

    On the verge?
    All major consoles have had a greater core count than consumer CPUs, not to mention complex memory architectures, since, what, 2005?
    One suspects the PC market has been benefiting from this for quite some time.
  • RoboJ1M - Friday, August 11, 2017 - link

    Specifically, the 360 had 3 general-purpose CPU cores,
    and the PS3 had one general-purpose CPU core and 7 short-pipeline coprocessors that could only read and write to their caches. They had to be fed by the CPU core.
    The 360 had unified program and graphics RAM (still not common on PC!)
    as well as its large high-speed cache.
    The PS3 had separate program and video RAM.
    The Xbox One and PS4 were super boring PCs in boxes. But they did have 8-core CPUs. The X1X is interesting. It's got unified RAM that runs at ludicrous speed. Sadly it will only be used for running games at 1800p to 2160p at 30 to 60 FPS :(
  • mlambert890 - Saturday, August 12, 2017 - link

    Why do people constantly assume this is purely time/market economics?

    Not everything can *be* parallelized. Do people really not get that? It isn't just developers targeting a market. There are tasks that *can't be parallelized* because of the practical reality of dependencies. Executing ahead and out of order can only go so far before you have an inverse effect. Everyone could have 40 core CPUs... It doesn't mean that *gaming workloads* will be able to scale out that well.

    The work that lends itself best to parallelization is the rendering pipeline and that's already entirely on the GPU (which is already massively parallel)
  • Magichands8 - Thursday, August 10, 2017 - link

    I think what AMD did here though is fantastic. In my mind, creating a switch to change modes vastly adds to the value of the chip. I can now maximize performance based upon workload and software profile and that brings me closer to having the best of both worlds from one CPU.
  • Notmyusualid - Sunday, August 13, 2017 - link

    @ rtho782

    I agree it is a mess, and also, it is not AMD's fault.

    I've had a 14c/28t Broadwell chip for over a year now, and I cannot launch Tomb Raider with HT on, nor GTA5. But most s/w seems indifferent to the number of cores presented to it.
  • BrokenCrayons - Thursday, August 10, 2017 - link

    Great review, but the word "traditional" is used heavily. Given the short lifespan of computer parts and the nature of consumer electronics, I'd suggest that there isn't enough time or emotional attachment to establish a tradition of any sort. Motherboard sockets and market segments, for instance, might be better described in other ways, unless it's becoming traditional in the review business to call older product designs traditional. :)
  • mkozakewich - Monday, August 14, 2017 - link

    Oh man, but we'll still gnash our teeth at our broken tech traditions!
  • lefty2 - Thursday, August 10, 2017 - link

    It's pretty useless measuring power alone. You need to measure efficiency (performance/watt).
    So yeah, a 16-core CPU draws more power than a 10-core, but it is also probably doing a lot more work.
  • Diji1 - Thursday, August 10, 2017 - link

    Er why don't you just do it yourself, they've already given you the numbers.
