Rocket League

Hilariously simple pick-up-and-play games are great fun. I'm a massive fan of the Katamari franchise for that reason: pressing start on a controller and rolling around, picking up things to get bigger, is extremely simple. Until we get a PC version of Katamari that I can benchmark, we'll focus on Rocket League, as recent noise around its accessibility has had me interested.

Rocket League embodies that pick-up-and-play spirit, letting users jump straight into a match with other people (or bots) to play football with cars, with essentially zero rules. The title is built on Unreal Engine 3, which is somewhat old at this point, but that lets the game run on super-low-end systems while still taxing the big ones. Since its release earlier in 2015, it has sold over five million copies and has become a fixture at LANs and game shows; I even saw it being played at London's MCM Expo (aka Comic Con) this year. Players who train get very serious, competing in teams and leagues, and with very few settings to configure, everyone is on the same level. As a result, Rocket League could be a serious contender as a future e-sports title (if it ever becomes free or almost free, which seems to be a prerequisite) once features similar to DOTA's for watching high-profile games or league integration are added. (To any of the developers who may be reading this: you could make the game free and offer paid skins, or credits for watching an official league match; it wouldn't diminish the quality of actual gameplay in any way.)

Obligatory shot of Markiplier on Rocket League (caution, coarse language in the link)

Based on these factors, plus the fact that it is an extremely fun title to load and play, we set out to find the best way to benchmark it. Unfortunately, automatic benchmark modes for games are few and far between. In speaking with some indie developers as well as big studios, we learned that for a game to have a benchmark mode, it has to be designed with one in mind from the beginning, because adding an automated benchmark at a later date can be almost impossible. Some developers seem to realize this as their (first major) title nears completion, whereas large game studios don't care at all, even though a good benchmark mode ensures a title's presence in many technical reviews, increasing awareness of the game and automatically answering a number of performance-related questions for the community. Partly because of this, and partly because it is built on Unreal Engine 3, Rocket League does not have a benchmark mode. In this case, we have to develop a consistent run and record the frame rate.

Developing a consistent run for frame-rate analysis can be difficult without a "trace." A trace fixes the random number generation so that the same sequence of events occurs each time; the Source engine was very good for this back when we ran Portal benchmarks, and even Battlefield 2 did it reasonably well. When a trace is unavailable, we have to deal with non-player characters driven by random decision-making, as in sports-like titles. When faced with a task, an AI function will typically have a weighted set of options: it generates a random number that usually picks option A, sometimes picks option B and, one time in 100, picks uncharacteristic option C. We have dealt with random AI behavior before, though. For example, any racing benchmark that uses the Ego engine (such as DiRT, DiRT 2, DiRT Rally, GRID, GRID Autosport and any official F1 title this decade) runs a race over a fixed number of laps, representing what can happen in an actual race. While the exact frames rendered differ from run to run, the overall frame-rate profile of a long benchmark should contain both high- and low-fps moments and end up similar, as long as every variable that can be fixed is fixed. As long as you report the averages (mean or median) and percentiles rather than the absolute minimum frame rate, the results should align appropriately.
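To illustrate why percentiles are preferred over the absolute minimum, here is a minimal sketch (the function and field names are our own, not from any benchmarking tool) that summarizes one run's per-frame render times:

```python
import statistics

def summarize_run(frametimes_ms):
    """Summarize one benchmark run from per-frame render times (ms).

    Reports mean/median fps plus the 99th-percentile frame time
    (converted to fps) instead of the absolute minimum frame rate,
    which a single outlier frame can dominate.
    """
    fps = [1000.0 / t for t in frametimes_ms]
    ordered = sorted(frametimes_ms)  # slowest frames at the end
    # Index of the frame time that 99% of frames beat.
    idx = min(len(ordered) - 1, int(0.99 * len(ordered)))
    return {
        "mean_fps": statistics.mean(fps),
        "median_fps": statistics.median(fps),
        "p99_fps": 1000.0 / ordered[idx],
    }
```

Two runs with different random AI decisions will produce different minimums, but their mean, median and 99th-percentile numbers should land close together.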

With Rocket League, there is no benchmark mode, so we have to perform a series of automated actions. We take the following approach: Using Fraps to record the time taken to show each frame (and the overall frame rates), we use an automation tool to set up a consistent 4v4 bot match on easy, with the system applying a series of inputs throughout the run, such as switching camera angles and driving around. It turns out that this method is nicely indicative of a real bot match, driving up walls, boosting and even putting in the odd assist, save and/or goal, as weird as that sounds for an automated set of commands. To maintain consistency, the commands we apply are not random but time-fixed, and we also keep the map the same (Denham Park) and the car customization constant. We start recording just after a match starts, and record for 4 minutes of game time, with average frame rates, 99th percentile and frame times all provided.
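Fraps logs each frame as a cumulative timestamp, so turning its frametimes output into per-frame render times for a fixed recording window looks roughly like this (a sketch assuming a simple list of millisecond timestamps; the actual CSV also carries a header and frame index):

```python
def frame_deltas(cumulative_ms, window_s=240.0):
    """Convert cumulative frame timestamps (ms) into per-frame render
    times, keeping only frames inside the first `window_s` seconds.
    """
    deltas = []
    for prev, cur in zip(cumulative_ms, cumulative_ms[1:]):
        if cur > window_s * 1000.0:
            break  # past the 4-minute recording window
        deltas.append(cur - prev)
    return deltas
```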

The graphics settings for Rocket League come in four broad presets: Low, Medium, High and High FXAA. Advanced settings exist for shadows and details; however, for these tests we keep to the four presets. Due to an odd quirk with Rocket League, at most resolutions only Low and High actually generate different results. The title does not require much in the way of GPU resources at times, depending on the resolution (720p vs. 4K): in our testing, the High FXAA mode gave the same results as High, while at any resolution below 1920x1080, the Low and Medium results were equivalent. Our initial tests went through all four presets at 720p, 900p and 1080p to determine what would be a good metric for integrated graphics.

At this point, it is worth mentioning a quick issue with Rocket League regarding frame rates. By default, the game is capped at 60 fps for a variety of reasons, including game consistency and a hybrid form of power saving that allows the system to sleep when it can produce frames faster than it needs them. Removing this cap requires editing the TASystemSettings.ini file and setting the AllowPerFrameSleep parameter to False. Doing this lifts the cap, although some users of earlier versions have reported camera issues in certain configurations; our testing has not shown any issues resulting from an uncapped frame rate. There is also now a way to force MSAA, although we are not using it for this test.
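For reference, the edit is a one-line change. The path and section name below are our best recollection and may differ between game versions; the key itself is the one the game reads for the 60 fps cap:

```ini
; Documents\My Games\Rocket League\TAGame\Config\TASystemSettings.ini
[SystemSettings]
AllowPerFrameSleep=False   ; lifts the default 60 fps cap
```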

Thus, with our test, we did a sweep of 1280x720, 1600x900 and 1920x1080 at each of the four graphics settings, at 10 runs apiece. That's 120 games of football/soccer, at 4 minutes of frame recording each (plus 90 seconds for automated setup to load the game and select the right match). Because Fraps also lets us extract frame time data, we can analyze percentile profiles as well. The following results show each of the 10 runs at each setting, with a final average at the end (click through to see the full table in high resolution).

For most users, the golden value of either 60 fps or 30 fps matters a great deal, depending on the scenario. Currently, our game tests use settings designed to allow a good integrated GPU (or an R7 240-class discrete GPU) to achieve either 30 fps on the average or 30 fps on the 99th percentile, with some approaching a 60-fps average. In this case, the integrated graphics on the A8-7670K performs best at one of the following:

- 1280x720 High: 48.8 fps average, 34.0 fps at 99th
- 1600x900 High: 32.0 fps average, 23.3 fps at 99th
- 1920x1080 Medium: 52.2 fps average, 35.5 fps at 99th

Here's what these look like on a frame-rate profile chart, indicating where the frame rate typically lies.

This graph shows that the 900p High line hits 30 fps only 67% of the time, clearly taking it out of the running. With the other two, we are comfortably in the 30-fps zone for just about the whole benchmark, but the 60-fps numbers are interesting — at 1080p and Medium settings, about 32% of the frames are over 60 fps, compared to only 10% of the frames at 720p High.
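The percentages above come straight from the frame-time data; a minimal sketch of the calculation (names are ours):

```python
def frac_above(frametimes_ms, target_fps):
    """Fraction of frames rendered faster than target_fps."""
    cutoff_ms = 1000.0 / target_fps
    return sum(1 for t in frametimes_ms if t < cutoff_ms) / len(frametimes_ms)
```

For example, `frac_above(run, 60.0)` gives the share of frames above 60 fps, and `frac_above(run, 30.0)` the share above 30 fps.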

Numbers aside, look at the images below for quality and clarity, and see what you think. I've added in 900p as well, for completeness.

1280x720 High

1600x900 High

1920x1080 Medium

There's no doubt about it: at High settings, the game looks nicer, with more vibrant colors and lighting. But this is countered by reduced edge aliasing at the higher resolutions, making it easier to see what is in front of you at medium to long distances, as well as giving the game a smoother feel in general.

Because this is a new test, we are still running it on other CPUs, and it will make a full appearance next year in our 2016 benchmark update. For now, we have an i5-6600 processor (one of Intel's latest Skylake 65W parts, in for a future review) tested at all the resolution and graphics combinations. Here, we run both processors at their JEDEC-supported memory frequencies (A8-7670K at DDR3-2133, i5-6600 at DDR4-2133).

On average, the A8-7670K in this comparison produces 14% better frame rates, with 720p and 1080p seeing the biggest jumps at the more strenuous graphics settings. The 99th-percentile figures also favor the A8-7670K, this time by an average of 4%, with the advantage again growing as the graphics settings move from low to high.

For our 2016 CPU benchmark tests, this suggests that 1280x720 at High and 1920x1080 at Medium are most likely to be our CPU-focused benchmarks on integrated graphics going forward. If enough 4K monitors come my way, we can also add some 4K High comparisons for extreme graphics situations.



Comments

  • Drumsticks - Wednesday, November 18, 2015 - link

    If AMD really could get a 40% single-threaded performance boost on their CPUs for Zen, and they can do it no later than Kaby Lake, then they really might get a moment to breathe. That puts single-threaded performance right around Intel's i3 parts, and would put multi-threaded performance (and likely graphics, although that's a different story) well ahead. It's not going to take back the desktop market overnight, but it would be enough to get PC builders and maybe some OEMs interested, and get enough volume moving for them to survive.

    Even if we budget a 10% IPC boost for Intel in Kaby Lake, that puts their i3s barely ahead, and still probably significantly behind in multi-threaded performance compared to a 4-core Zen part. Here's hoping for an AMD recovery! I'd love to recommend AMD parts in more than just the $300 region now. Even if Zen only gets a single OEM to genuinely notice AMD, it will be an improvement.
  • V900 - Wednesday, November 18, 2015 - link

    You seriously think AMD is going to sell a 4 core Zen processor for the same amount that a dual core Intel i3 sells for?

    In that case I got a bridge to sell you!

    Make no mistake, AMD doesn't sell cheap APUs out of the goodness of their hearts.

    The reason they're the budget option is because they don't have anything remotely competitive with Intel's Core CPUs, and therefore can only compete at the very low end of the market.

    If their Zen core turns out to be on par with an Intel processor, they'll sell it at the prices Intel charges, or slightly lower.

    You won't see a quad-core Zen selling for roughly the same price Intel charges for an i3. You'll have AMD selling their quad-core Zen for the same $300 Intel charges for an i5.
  • yankeeDDL - Wednesday, November 18, 2015 - link

    I don't fully agree.
    Yes, AMD's IPC is much lower than Intel's, and there's a gap in energy efficiency (although, much reduced with Carrizo).
    But, as you correctly indicate, AMD prices their chips accordingly. So at ~$120, the A8/A10 are extremely attractive, in my opinion. For home users, who have the PC on for a relatively small fraction of the time, having more cores and an excellent GPU (compared to Intel's at those price points) is quite beneficial.
    Skylake changes things a bit, but up to Haswell (included), the performance of Intel's Core i3 in the low $100s was easily beaten.
  • Dirk_Funk - Wednesday, November 18, 2015 - link

    I don't think he/she said a single word about how Zen would be priced. I don't know why you responded this way. Also, an i5 sells for like $200-$250.
  • Aspiring Techie - Wednesday, November 18, 2015 - link

    If Zen is as good as advertised, then AMD can afford to increase the price of their CPUs by 20%. This would put their quad-cores in the $130-150 range, way cheaper than Intel's i5s. Granted, even Zen won't be as good as Kaby Lake. If AMD's performance per clock is 60% of Intel's, then a 40% boost would put Zen's at about 84% of Intel's. Add in much better power efficiency (because the microarchitecture will have fewer pipeline stages) and possibly more cache with the smaller process node, and you get roughly 85% of i5 performance for $30 less. This doesn't even consider their APUs, which could still be priced at near-i3 levels. They would beat the crap out of i3s and sometimes i5s (if HSA is utilized).

    Bottom line: Zen is AMD's last chance. AMD probably won't make the stupid mistake of pricing their CPU's too high. If they do, then bye-bye AMD for good.
  • JoeMonco - Wednesday, November 18, 2015 - link

    "Bottom line: Zen is AMD's last chance. AMD probably won't make the stupid mistake of pricing their CPU's too high. If they do, then bye-bye AMD for good."

    Because if AMD is known for anything it's for its great business decisions. rofl
  • medi03 - Thursday, November 19, 2015 - link

    Yeah, that's why they are in both major consoles at the moment, because of the "bad" business decisions.
  • Klimax - Thursday, November 19, 2015 - link

    There's a reason why Intel was uninterested in consoles. AMD barely makes any money on them...
  • Kutark - Thursday, November 19, 2015 - link

    ^ This. Being in the consoles is because it was a massive volume order of parts and MS and Sony are looking to save as much as possible, fractions of a dollar per part matter when you're paying for literally millions of parts.
  • anubis44 - Sunday, November 29, 2015 - link

    The consoles are still providing AMD with a solid, baseline income every year, and their presence in consoles also makes games easier to port to AMD's architecture, something that will become more apparent with DX12, since consoles are already using a DX12/Mantle-like API. AMD's decision to sweep the consoles and push Intel and nVidia out of them will have longer-term repercussions than many realize. AMD is also almost certain to win the next generation of consoles, too, with Zen-based APUs and Greenland-type graphics with HBM. In fact, AMD will probably release something like that for the mainstream PC market by 2017, and nVidia will be relegated to only the high end of add-in graphics: AMD will be putting solidly mainstream graphics into their APUs, and an add-in mainstream AMD card will simply crossfire with the built-in graphics.
