Rocket League

Hilariously simple pick-up-and-play games are great fun. I'm a massive fan of the Katamari franchise for exactly that reason: pressing start on a controller and rolling around, picking things up to get bigger, is about as simple as gaming gets. Until we get a PC version of Katamari that I can benchmark, we'll focus on Rocket League.

Rocket League embodies that pick-up-and-play spirit, letting users jump into a match with other people (or bots) to play football with cars, with zero rules. The title is built on Unreal Engine 3, which is somewhat old at this point, but that allows the game to run on super-low-end systems while still taxing the big ones. Since its release in 2015 it has sold over five million copies and has become a fixture at LANs and game shows. Players who train take it very seriously, competing in teams and leagues, and with very few settings to configure, everyone is on a level playing field. Rocket League is quickly becoming one of the favored titles for e-sports tournaments, especially since e-sports contests can be viewed directly from the game's interface.

Based on these factors, plus the fact that it is an extremely fun title to load and play, we set out to find the best way to benchmark it. Unfortunately, automatic benchmark modes in games are few and far between, and Rocket League, partly by virtue of being built on Unreal Engine 3, does not have one. In this case, we have to develop a consistent run of our own and record the frame rate.

Read our initial analysis of our Rocket League benchmark on low-end graphics here.

With no benchmark mode in Rocket League, we perform a series of automated actions instead, similar to a racing game having a fixed number of laps. We take the following approach: using Fraps to record the time taken to render each frame (and the overall frame rates), we use an automation tool to set up a consistent 4v4 bot match on easy difficulty, with the system applying a series of inputs throughout the run, such as switching camera angles and driving around.

It turns out that this method is nicely indicative of a real bot match: the car drives up walls, boosts, and even puts in the odd assist, save and/or goal, as weird as that sounds for an automated set of commands. To maintain consistency, the commands we apply are not random but time-fixed, and we also keep the map the same (Aquadome, known to be a tough map for GPUs due to its water and transparency effects) and the car customization constant. We start recording just after a match begins and record for four minutes of game time (think five laps of a DIRT: Rally benchmark), with average frame rates, 99th percentiles, and frame times all provided.
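To make the approach concrete, here is a minimal sketch of what a time-fixed input script can look like, written in Python with the pyautogui library purely for illustration. The key bindings, timestamps, and library choice are assumptions for the example; the actual automation tool and input list we use are not reproduced here.

import time
import pyautogui  # illustrative choice; any keyboard automation tool works similarly

# Hypothetical script: (seconds from match start, key, hold duration in seconds).
# A real input list is fixed in the same way but much longer, covering the full
# four minutes of the run.
SCRIPTED_INPUTS = [
    (2.0,  'w',     3.0),   # accelerate toward midfield
    (5.5,  'space', 0.2),   # jump
    (8.0,  'c',     0.1),   # switch camera angle
    (12.0, 'a',     1.5),   # steer left, up the wall
]

def run_scripted_inputs():
    start = time.time()
    for at, key, hold in SCRIPTED_INPUTS:
        # Wait until this input's fixed timestamp so every run is identical.
        delay = at - (time.time() - start)
        if delay > 0:
            time.sleep(delay)
        pyautogui.keyDown(key)
        time.sleep(hold)
        pyautogui.keyUp(key)

if __name__ == '__main__':
    run_scripted_inputs()

Because every timestamp and hold duration is fixed, repeated runs send identical inputs; the only variability left comes from the bots and the physics engine.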

The graphics settings for Rocket League come in four broad, generic presets: Low, Medium, High, and High FXAA. There are advanced settings for shadows and detail; however, for these tests we keep to the generic presets. For both 1920x1080 and 4K resolutions, we test at the High preset with the frame rate cap removed.

For all our results, we show the average frame rate at 1080p first. Mouse over the other graphs underneath to see 99th percentile frame rates and 'Time Under' graphs, as well as results for other resolutions. All of our benchmark results can also be found in our benchmark engine, Bench.
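As an aside on how numbers like these can be derived: Fraps can log per-frame times to a CSV, and the headline metrics fall out of a short script. The Python sketch below is illustrative rather than our actual tooling; the file name, the assumption that the log stores cumulative millisecond timestamps, and the 60 FPS 'Time Under' threshold are all ours for the example.

import csv

def load_frame_durations(path):
    # Assumes a Fraps-style frametimes log: a header row, then one row per
    # frame with a cumulative timestamp in milliseconds in the second column.
    with open(path, newline='') as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        stamps = [float(row[1]) for row in reader]
    # Convert cumulative timestamps into individual frame durations.
    return [b - a for a, b in zip(stamps, stamps[1:])]

def summarize(frame_ms, threshold_fps=60.0):
    total_ms = sum(frame_ms)
    avg_fps = 1000.0 * len(frame_ms) / total_ms
    # 99th percentile: the frame time that 99% of frames beat, expressed as FPS.
    p99_ms = sorted(frame_ms)[int(0.99 * len(frame_ms))]
    # 'Time Under': share of the run spent on frames slower than the threshold.
    limit_ms = 1000.0 / threshold_fps
    under_pct = 100.0 * sum(t for t in frame_ms if t > limit_ms) / total_ms
    return avg_fps, 1000.0 / p99_ms, under_pct

if __name__ == '__main__':
    frames = load_frame_durations('rocketleague_frametimes.csv')  # hypothetical file
    avg, p99, under = summarize(frames)
    print(f'Average: {avg:.1f} FPS, 99th percentile: {p99:.1f} FPS, '
          f'time under 60 FPS: {under:.1f}%')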

MSI GTX 1080 Gaming 8G Performance

[Graphs: 1080p and 4K results]

ASUS GTX 1060 Strix 6GB Performance

[Graphs: 1080p and 4K results]

Sapphire R9 Fury 4GB Performance

[Graphs: 1080p and 4K results]

Sapphire RX 480 8GB Performance

[Graphs: 1080p and 4K results]

Rocket League Notes on GTX

The map we use in our testing, Aquadome, is known to be strenuous on a system, hence we see frame rates lower than what people typically expect from Rocket League; we are deliberately covering the worst-case scenario. The results also show that AMD CPUs and NVIDIA GPUs do not seem to be playing ball with each other here, which we've been told is likely related to drivers.

Comments

  • Oxford Guy - Thursday, July 27, 2017 - link

    "The Ryzen 3 1200 brings up the rear of the stack, being the lowest CPU in the stack, having the lowest frequency at 3.1G base, 3.4G turbo, 3.1G all-core turbo, no hyperthreading and the lowest amount of L3 cache."

    That bit about the L3 is incorrect unless the chart on page 1 is incorrect. It shows the same L3 size for 1400, 1300X, and 1200.
  • Oxford Guy - Thursday, July 27, 2017 - link

    And this:

    "Number 3 leads to a lop-sided silicon die, and obviously wasn’t chosen."

    Obviously?
  • Oxford Guy - Thursday, July 27, 2017 - link

    "DDR4-2400 C15"

    2400, really — even though it is, obviously, known that Zen needs faster RAM to perform efficiently?

    Joel Hruska managed to test Ryzen with 3200 speed RAM on his day 1 review. I bought 16 GB of 3200 RAM from Microcenter last Christmastime for $80. Just because RAM prices are nuts right now doesn't mean we should gut Ryzen's performance by sticking it with low-speed RAM.
  • Oxford Guy - Thursday, July 27, 2017 - link

    "This is also typically run at JEDEC subtimings where possible. It is noted that some users are not keen on this policy"

    Maybe you guys should rethink your logic.

    1) You have claimed, when overclocking, that it's not necessary to do full stability testing, like with Prime. Just passing some lower-grade stress testing is enough to make an overclock "stable enough".

    2) Your overclocking reviews have pushed unwise levels of voltage into CPUs to go along with this "stable enough" overclock.

    So... you argue against proof of true stability, both in the final overclock settings being satisfactorily tested and in safe voltages being decided upon.

    And — simultaneously — kneecap Zen processors by using silly JEDEC standards, trying to look conservative?

    Please.

    Everyone knows the JEDEC standard applies to enterprise. Patriot is just one manufacturer of RAM that tested and certified far better RAM performance on B350 and A320 Zen boards. You had that very article on your site just a short time ago.

    Your logic doesn't add up. It is not a significant enough cost savings for system builders to go with slow RAM for Zen. The only argument you can use, at all, is that OEMs are likely to kneecap Zen with slow RAM. That is not a given, though. OEMs can use faster RAM, like, at least, 2666, if they choose to. If they're marketing toward gamers they likely will.
  • Oxford Guy - Thursday, July 27, 2017 - link

    "Truth be told I never actually played the first version, but every edition from the second to the sixth, including the fifth as voiced by the late Leonard Nimoy"

    You mean Civ IV.
  • Oxford Guy - Thursday, July 27, 2017 - link

    And, yeah, we can afford to test with an Nvidia 1080 but we can't afford to use decent speed RAM.

    Yeah... makes sense.
  • Hixbot - Thursday, July 27, 2017 - link

    Are you having a conversation with yourself? Try to condense your points into a single post.
  • Oxford Guy - Friday, July 28, 2017 - link

    I don't live in a static universe where all of the things I'm capable of thinking of are immediately apparent, but thanks for the whine.
  • Manch - Friday, July 28, 2017 - link

    Really, snowflake? You're saying he is whining? How many rants have you posted? LOL

    The difference between 2400 and 3200 shows up more on the higher-end processors because of the bigger L3 and HT, err, SMT. The difference in CPU-bound gaming is 5-10% at most with the Ryzen 7s, smaller with the 5s, and smaller still with the 3s. Small enough that it would not change the outlook on the CPUs. Also consider that if Ian changed the parameters of his tests constantly, it would skew the numbers and render Bench unreliable. Test the Ryzen 7s with 2133, then the 5s with 2400, then the 3s with 3200?

    Obviously AnandTech's tests are not the definitive performance benchmark for the world. What they are is a reliably consistent benchmark that lets you compare different CPUs with as little changed as possible, so as not to skew performance. Think EPA gas mileage stickers on cars. Will you get that rating? Maybe. What it gives you is comparative results, and from there it's fairly easy to extrapolate the difference. Now I'm sure they will, as they have in the past, update their baseline specs for testing.

    You're running off the rails about how big the memory effects are. Look at all the YouTube vids and other reviews out there. A difference, yes. A lot? Meh. I also believe AnandTech has mentioned doing a write-up on the latest AGESA update, since it's had a significant impact (including on memory) on the series.
  • Oxford Guy - Friday, July 28, 2017 - link

    "You're saying he is whining? How many rants have you posted?"

    Pot kettle fallacy.
