Rocket League

Hilariously simple pick-up-and-play games are great fun. I'm a massive fan of the Katamari franchise for that reason — pressing start on a controller and rolling around, picking things up to get bigger, is extremely simple. Until we get a PC version of Katamari that I can benchmark, we'll focus on Rocket League, as recent noise around its accessibility has had me interested.

Rocket League combines the elements of pick-up-and-play, allowing users to jump into a game with other people (or bots) to play football with cars with zero rules. The title is built on Unreal Engine 3, which is somewhat old at this point, but it allows users to run the game on super-low-end systems while still taxing the big ones. Since its release earlier in 2015, it has sold over 5 million copies and seems to be a fixture at LANs and game shows — I even saw it being played at London’s MCM Expo (aka Comic Con) this year. Users who put in the training get very serious, playing in teams and leagues, and with very few settings to configure, everyone is on the same level. As a result, Rocket League could be a serious contender for a future e-sports title (if it ever becomes free or almost free, which seems to be a prerequisite) once features similar to DOTA's, for watching high-profile games or for league integration, are added. (To any of the developers who may be reading this: You could make the game free and offer pay-for skins or credits for watching an official league match – it wouldn’t diminish the quality of actual gameplay in any way.)


Obligatory shot of Markiplier on Rocket League (caution, coarse language in the link)

Based on these factors, plus the fact that it is an extremely fun title to load and play, we set out to find the best way to benchmark it. Unfortunately, automatic benchmark modes for games are few and far between. In speaking with some indie developers as well as big studios, we learned that, for there to be a benchmark mode, the game has to be designed with one in mind from the beginning, because adding an automated benchmark at a later date can be almost impossible. Some developers seem to realize this as their (first major) title nears completion, whereas large game studios don't seem to care at all, even though a good benchmark mode will ensure the game's presence in many technical reviews, increasing awareness of the title and automatically answering a number of performance-related questions for the community. Partly because of this, but also because it is built on Unreal Engine 3, Rocket League does not have a benchmark mode. In this case, we have to develop a consistent run and record the frame rate.

Developing a consistent run for frame-rate analysis can be difficult without a "trace." A trace ensures that random numbers are fixed and that the same sequence of events occurs each time – the Source engine was very good for this back when we ran Portal benchmarks, and even Battlefield 2 did it reasonably well. When a trace is unavailable, as in sports-like titles, we have to deal with non-player characters driven by random action generators. When faced with a task, an AI function will typically have a weighted set of options for what it should do, then generate a random number that usually picks option A, sometimes picks option B and, 1 time in 100, picks uncharacteristic option C. But we've dealt with random AI behavior before. For example, any racing benchmark that uses the Ego engine — such as DiRT, DiRT 2, DiRT Rally, GRID, GRID Autosport and any official F1 title this decade — runs a race over a fixed number of laps, representing what can happen in an actual race. While you don't get the same frames being rendered, the overall frame-rate profile of a long benchmark run should contain both high- and low-fps moments and end up similar once every variable you can fix is fixed. As long as you report the averages (mean or median) and percentiles rather than the absolute minimum frame rate, the numbers should align appropriately between runs.
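To make that weighted-choice idea concrete, here is a minimal Python sketch. The option names and weights are purely illustrative (they are not taken from Rocket League's AI), and the fixed seed stands in for the sort of "trace" a proper benchmark mode would provide:

```python
import random

# Illustrative options and weights only — not Rocket League's actual AI.
OPTIONS = ["chase_ball", "rotate_back", "odd_option_c"]
WEIGHTS = [0.79, 0.20, 0.01]   # option C fires roughly 1 time in 100

def pick_action(rng: random.Random) -> str:
    """Pick the bot's next action from a weighted set of options."""
    return rng.choices(OPTIONS, weights=WEIGHTS, k=1)[0]

# Fixing the seed is effectively what a trace does: the same decisions come
# out every run. Leave the seed out and every run plays out differently.
rng = random.Random(42)
print([pick_action(rng) for _ in range(10)])
```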

With Rocket League, there is no benchmark mode, so we have to perform a series of automated actions. We take the following approach: Using Fraps to record the time taken to show each frame (and the overall frame rates), we use an automation tool to set up a consistent 4v4 bot match on easy, with the system applying a series of inputs throughout the run, such as switching camera angles and driving around. It turns out that this method is nicely indicative of a real bot match, with the automated car driving up walls, boosting and even putting in the odd assist, save and/or goal, as weird as that sounds for a fixed set of commands. To maintain consistency, the commands we apply are not random but time-fixed, and we also keep the map (Denham Park) and the car customization the same. We start recording just after a match starts and record for 4 minutes of game time, with average frame rates, 99th percentile figures and frame times all provided.
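As a rough illustration of the "time-fixed" part, the following Python sketch replays the same inputs at the same offsets every run. The keys, timings and the use of pyautogui are stand-ins for illustration rather than our actual script:

```python
import time
import pyautogui  # sends keystrokes to the active window (pip install pyautogui)

# Hypothetical, time-fixed schedule: (seconds into the run, key, seconds held).
SCHEDULE = [
    (5.0,  "w", 3.0),      # drive forward
    (9.0,  "c", 0.1),      # switch camera angle
    (12.0, "a", 1.5),      # turn left
    (20.0, "space", 0.1),  # jump
]

def run_schedule(schedule, duration=240.0):
    """Replay the same inputs at the same offsets for a 4-minute run."""
    start = time.time()
    for offset, key, hold in schedule:
        # Wait until the scheduled offset, then press and hold the key.
        time.sleep(max(0.0, start + offset - time.time()))
        pyautogui.keyDown(key)
        time.sleep(hold)
        pyautogui.keyUp(key)
    # Idle out the remainder of the recording window.
    time.sleep(max(0.0, start + duration - time.time()))

if __name__ == "__main__":
    run_schedule(SCHEDULE)
```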

The graphics settings for Rocket League come in four broad presets: Low, Medium, High and High FXAA. There are advanced settings for shadows and details; however, for these tests, we keep to the four presets. Due to an odd quirk with Rocket League, at most resolutions only Low and High will generate different results. Depending on the resolution (720p vs. 4K), the title doesn’t require much in the way of GPU resources, and in our testing the High FXAA mode gave the same results as High, while at any resolution below 1920x1080 the Low and Medium results were equivalent. Our initial tests went through all four presets at 720p, 900p and 1080p to determine what would be a good metric for integrated graphics settings.

At this point, it is worth mentioning a quick issue with Rocket League regarding frame rates. By default, the game is capped at 60 fps for a variety of reasons, including game consistency and a hybrid form of power saving that lets the system sleep rather than produce and discard extra frames. Removing this cap requires editing the TASystemSettings.ini file and setting the AllowPerFrameSleep parameter to False. Doing this lifts the cap, although some users of earlier versions have reported camera issues in certain configurations. Our testing has not shown any issues resulting from an uncapped frame rate. There is also now a way to force MSAA, although we are not using it for this test.
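For anyone following along, the frame-cap change amounts to a single entry in TASystemSettings.ini (back up the file first; its exact location varies by install). Only the relevant line is shown here, with the rest of the file left as it is:

```ini
; TASystemSettings.ini — set the existing entry to False to lift the 60 fps cap
AllowPerFrameSleep=False
```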

Thus, with our test, we did a sweep of 1280x720, 1600x900 and 1920x1080 at each of the four graphics settings, at 10 runs apiece. That's 120 games of football/soccer, at 4 minutes of frame recording each (plus 90 seconds for automated setup to load the game and select the right match). Because Fraps also lets us extract frame time data, we can analyze percentile profiles as well. The following results show each of the 10 runs at each setting, with a final average at the end (click through to see the full table in high resolution).
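For the curious, turning a Fraps per-frame log into these figures is straightforward. The sketch below assumes Fraps' frametimes CSV layout (a header row, then the cumulative time in milliseconds at each frame); it is an illustration of the calculation rather than our exact processing script:

```python
import csv
import statistics

def analyse_frametimes(path):
    """Summarise a Fraps frametimes CSV: average fps and 99th percentile fps."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))[1:]           # skip the header row
    stamps = [float(r[1]) for r in rows]         # cumulative ms at each frame
    frame_ms = [b - a for a, b in zip(stamps, stamps[1:])]   # per-frame times

    avg_fps = 1000.0 * len(frame_ms) / (stamps[-1] - stamps[0])
    # 99th percentile of frame times, converted back to fps: the frame rate
    # that 99% of frames meet or exceed.
    p99_ms = statistics.quantiles(frame_ms, n=100)[98]
    return avg_fps, 1000.0 / p99_ms
```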

For most users, the golden value of either 60 fps or 30 fps matters a great deal, depending on the scenario. Currently, our game tests use settings designed to allow a good integrated GPU (or an R7 240-like discrete GPU) to achieve either 30 fps on the average or 30 fps on the 99th percentile, with some approaching a 60-fps average. In this case, the integrated graphics of the A8-7670K feels best at one of the following:

- 1280x720 High: 48.8 fps average, 34.0 fps at 99th
- 1600x900 High: 32.0 fps average, 23.3 fps at 99th
- 1920x1080 Medium: 52.2 fps average, 35.5 fps at 99th

Here's what these look like on a frame-rate profile chart, indicating where the frame rate typically lies.

This graph shows that the 900p High line hits 30 fps only 67% of the time, clearly taking it out of the running. With the other two, we are comfortably in the 30-fps zone for just about the whole benchmark, but the 60-fps numbers are interesting — at 1080p and Medium settings, about 32% of the frames are over 60 fps, compared to only 10% of the frames at 720p High.
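Percentages like these can be pulled straight from the same frame-time data: count the frames rendered faster than the 30 fps or 60 fps cutoff. A small sketch, reusing the frame_ms list from the earlier snippet:

```python
def fraction_above(frame_ms, fps_threshold):
    """Fraction of frames rendered faster than a given fps threshold."""
    cutoff_ms = 1000.0 / fps_threshold
    return sum(1 for t in frame_ms if t < cutoff_ms) / len(frame_ms)

# fraction_above(frame_ms, 30) and fraction_above(frame_ms, 60) give the
# shares of the run above 30 fps and 60 fps, respectively.
```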

Numbers aside, look at the images below for quality and clarity, and see what you think. I've added in 900p as well, for completeness.

1280x720 High

1600x900 High

1920x1080 Medium

There's no doubt about it: At High settings, the game looks nicer. Colors and lighting are more vibrant. But this is countered by cleaner edges at the higher resolutions, making it easier to see what is in front of you at medium to long distances, as well as giving the game a smoother feel in general.

Because this is a new test, we are still running it on other CPUs, and it will make a full appearance next year in our 2016 benchmark update. But for now, we have an i5-6600 processor (one of Intel’s latest Skylake 65W parts, in for a future review) tested at all the resolution and graphics combinations. Here, we run both processors at their maximum supported JEDEC memory frequencies (A8-7670K at DDR3-2133, i5-6600 at DDR4-2133).

On average, the A8-7670K in this comparison produces 14% better frame rates, with 720p and 1080p seeing the biggest jumps at the more strenuous graphics settings. The 99th-percentile figures also favor the A8-7670K — this time by an average of 4%, and again more so as the graphics settings move from Low to High.

For our 2016 CPU benchmark tests, these results suggest that 1280x720 at High and 1920x1080 at Medium are the most likely candidates for our CPU-focused integrated graphics benchmarks going forward. If enough 4K monitors come my way, we can also add some 4K High comparisons for extreme graphics situations.

Comments

  • Ian Cutress - Wednesday, November 18, 2015 - link

    It's a 95W desktop part. It's not geared for laptops or NUCs. There are 65W desktop parts with TDP Down modes to 45W, and lower than that is the AM1 platform for socketed parts. Carrizo covers 15W/35W for soldered designs such as laptops and NUC-like devices.
  • Vesperan - Wednesday, November 18, 2015 - link

    Apologies if I missed it - but what speed was the memory running at for the APUs?

    The table near the start just said 'JEDEC' and linked to the G-skill/Corsair websites. This is important given these things are bandwidth constrained - the difference between 1600mhz and 2133mhz can be significant (over 20 percent).
  • tipoo - Wednesday, November 18, 2015 - link

    2133mhz, page 2
  • Ian Cutress - Wednesday, November 18, 2015 - link

    We typically run the CPUs at their maximum supported memory frequency (which is usually quoted as JEDEC specs with respect to sub-timings). So the table on the front page for AMD processors is relevant, and our previous reviews on Intel parts (usually DDR3-1600 C11 or DDR4-2133 C15) will state those.

    A number of people disagree with this approach ('but it runs at 2666!' or 'no-one runs JEDEC!'). For most enthusiasts, that may be true. But next time you're at a BYOC LAN, go see how many people are buying high speed memory but not implementing XMP. You may be surprised - people just putting parts together and assuming they just work.

    Also, consider that the CPU manufacturers would put the maximum supported frequency up if they felt that it should be validated at that speed. It's a question of silicon, yields, and DRAM markets. Companies like Kingston and Micron still sell masses of DDR3-1600. Other customers just care about the density of the memory, not the speed. It's an odd system, and by using max-at-JEDEC it keeps it fair between Intel, AMD or others: if a manufacturer wants a better result, they should release a part with a higher supported frequency.

    I don't think we've done a DRAM scaling review on Kaveri or Kaveri Refresh, which is perhaps an oversight on my part. Our initial samples had issues with high speed memory - maybe I should put this one from 1600 up to 2666 if it will do it.
  • Oxford Guy - Wednesday, November 18, 2015 - link

    Since you always overclock processors it makes little sense to hold back an APU with slow RAM.
  • Oxford Guy - Wednesday, November 18, 2015 - link

    It's not just the bandwidth, either (like 2666) but the combination of that and latency. My FX runs faster in Aida benches, for the most part, at CAS 9-11-10-1T 2133 (DDR3) than at 2400, probably due to limitations of the board (which is rated for 20000). Don't just focus on high clocks.
  • Oxford Guy - Wednesday, November 18, 2015 - link

    rated for 2000
  • Ian Cutress - Thursday, November 19, 2015 - link

    Off the bat, that's a false equivalence - we only overclocked in this review to see how far it would go, not for the general benchmark set.

    But to reiterate a variation on what I've already said to you before:

    For DDR3, if I was to run AMD at 2666 and Intel at 1600, people would complain. If I was to run both at DDR3-2133, AMD users would complain because I'm comparing overclocked DRAM perf to stock perf.

    Most users/SIs don't overclock - that's the reality.

    If AMD or Intel wanted better performance, they'd rate the DRAM controller for higher and offer multiple SKUs.
    They do it with CPUs all the time through binning and what you can actually buy.
    e.g. 6700k and 6600k - they don't sell a 6600k at 2133 and 6600k at 2400 for example.

    This is why we test out of the box for our main benchmark results.
    If they did do separate SKUs with different memory controller specifications, we would update the dataset accordingly with both sets, or the most popular/important set at any rate.

    Besides, anyone following CPU reviews at AT will know your opinion on the matter, you've made that abundantly clear in other reviews. We clearly disagree. But if you want to run the AIDA synthetics on your overclocked system, great - it totally translates into noticeable real-world performance gains for sure.
  • Vesperan - Thursday, November 19, 2015 - link

    Thanks Ian - I missed that when quickly going through the story this morning prior to work. Yet somehow picked out the JEDEC bit!

    I like the approach you've outlined; it makes sense to me. So - for what it's worth, you have the support of at least one irrelevant person on the internet!

    From what I saw from a few websites (Phoronix springs to mind) the gains from memory scaling decline rapidly after 2133mhz.
  • CaedenV - Wednesday, November 18, 2015 - link

    I just don't understand the argument for buying AMD these days. Computers are not things you replace every 3-5 years anymore. In the post Core2 world systems last at least a good 7-10 years of usefulness, where simple updates of SSDs and GPUs can keep systems up to date and 'good enough' for all but the most pressing workloads. People need to stop sweating about how much the up-front cost of a system is, and start looking at what tier it performs at, and finding a way to get their budget to stretch to that level.

    I don't mean starting with a $500 build and stretching your wallet (or worse, your credit card) to purchase a $1200 system. I'm not some elitist rich guy; I understand the need to stick to a budget. But the difference between AMD and Intel in price is not very much, while the Intel chip is going to run cooler, quieter, and faster. Spending the extra $50 for the Intel chip and compatible motherboard is not going to break the bank.

    Because let's face it: pretty much everyone is going to fall into one of two camps.
    1) you are not going to game much at all, and the integrated Intel graphics, while not stellar, are going to be 'good enough' to run solitaire, phone game ports, 4K video, and a few other things. In this case the system price is going to be essentially the same, the video performance is going to be more than adequate, and the i3 is going to knock the socks off of the A8 5+ years down the road.
    2) You actually do play 'real' games on a regular basis, and the integrated A8 graphics are going to be a bonus to you for the first 2-6 months while you save up for a dGPU anyways... in which case the video performance is going to be nearly identical between the i3 and A8, while the i3 is going to be much more responsive in your day-to-day browsing, work, and media consumption. Or, you are going to find that you outgrow what an i3 or A8 can do, and you end up building a much faster i5 or i7 based system... in which case the i3 will either retain its resale value better, or will make a much better foundation for a home server, non-gaming HTPC, or some other use.

    I really want to love AMD, but after cost of ownership and longevity of the system is taken into consideration, they just do not make sense to purchase even in the budget category. The only place where AMD makes sense is if you absolutely have to have the GPU horsepower, but cannot have a dGPU in the system for some reason. And even in that case, the bump up to an A10 is going to be well worth the extra few $$. There is almost no use in getting anything slower than an A10 on the AMD side.

    But then again, AMD is working hard these days to reinvent themselves. Maybe 2 years from now this will all turn around and AMD will have more worthwhile products on the market that are useful for something.
