Xe-LP GPU Performance: F1 2019

The F1 racing games from Codemasters have been popular benchmarks in the tech community, mostly for their ease of use and because they seem to stress whichever part of a machine is strongest. The 2019 edition of the game features all 21 circuits on the calendar, and includes a range of retro models and DLC focusing on the careers of Alain Prost and Ayrton Senna. Built on the EGO Engine 3.0, the game has drawn the same criticism as most annual sports titles: the season-to-season graphical updates are rarely enough to justify buying the latest release. However, the 2019 edition revamps the Career mode, with features such as in-season driver swaps coming into the mix. The quality of the graphics this time around is also superb, even at 4K Low or 1080p Ultra.

To be honest, F1 benchmarking has been hit and miss in any given year. Since at least 2014, the benchmark has revolved around a 'test file', which lets you set which track you want, which driver to control, what weather you want, and which cars are in the field. In previous years I've always enjoyed running the benchmark in the wet at Spa-Francorchamps, starting the fastest car at the back with a field of 19 Vitantonio Liuzzis in a 2-lap race and watching the sparks fly. In some years the test file hasn't worked properly, and the track couldn't be changed.

For our test, we put Alex Albon in the Red Bull in position #20, for a dry two-lap race around Austin.
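
As a purely hypothetical illustration of that setup - the real test file's format isn't shown in this piece, so every tag name and value below is invented - such a configuration might look something like this:

    <!-- Hypothetical test file: tag names and values are illustrative only;
         the actual F1 2019 benchmark file format is not documented here. -->
    <benchmark>
        <track>Austin</track>
        <weather>Dry</weather>
        <laps>2</laps>
        <player driver="Alex Albon" team="Red Bull" gridPosition="20" />
    </benchmark>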

[Charts: F1 2019 at 768p Ultra Low Quality and 1080p Ultra Quality]

In this case, at 1080p Ultra, AMD and Intel (28 W) are evenly matched. Unfortunately, looking through the data we found that the 15 W test run had crashed, and we only noticed after we had returned the system.

Comments

  • MDD1963 - Saturday, September 19, 2020

    Although equaling/exceeding 7700K-level of performance within a 50W envelope in a laptop is impressive, the 4c/8t design is going to cause at least one or two frowns/raised eyebrows...
  • ballsystemlord - Saturday, September 19, 2020

    @Ian why do these companies always seem to have the worst timing on sending you stuff? Do you tell them when you'll be on vacation?

    Thanks for the review!
  • Ian Cutress - Sunday, September 20, 2020

    It's happened a lot these past couple of years. The more segments of the tech industry you cover, the less downtime you have - my wife obviously has to book holiday months in advance, but companies very rarely tell you when launches are, or they offer surprise review samples a few days before you are set to leave. We do our best to predict when the downtime is - last year we had hands-on with the Ice Lake development system before the announcement of the hardware, and so with TGL CPUs being announced first on Sep 2nd, we weren't sure when the first units were coming in. We mistimed it. Of course, with only two or three of us on staff, each with our own segments, it's hard to get substitutes in. It can be done - Gavin helped a lot with TR3, for example - but it depends on the segment.

    And thanks :)
  • qwertymac93 - Sunday, September 20, 2020

    Finally a decent product from Intel. It's been a while. Those AVX512 numbers were impressive. Intel is also now able to compete toe to toe with AMD integrated graphics, trading blows. I feel that won't last, though. AMD is likely to at least double the GPU horsepower next gen with the move from a tweaked GCN5 to RDNA2 and I don't know if Intel will be able to keep up. Next year will be exciting in any case.
  • Spunjji - Sunday, September 20, 2020

    It'll be a while before we get RDNA2 at the high end - looks like late 2021 or early 2022. Before that, it's only slated to arrive with Van Gogh at 7-15 W.
  • efferz - Monday, September 21, 2020

    It is very interesting to see that the Intel compiler makes the SPECint2017 scores 52% higher than other compilers without 462.libquantum.
  • helpMeImDying - Thursday, September 24, 2020

    Hello, before ranting I want to know whether the SPEC2006 and SPEC2017 scores were adjusted/changed based on processor frequency (I read something like that in the article)? Because you can't do that. Frequencies should be out of the topic here unless you're comparing same-generation CPUs, and even then there are nuances. What matters when comparing low-power notebooks is performance per watt. It can be done mathematically if the TDP can't be capped at the same level all the time, like you did in the first few pages (a worked example follows the comments). I'm interested in the scores at 15W and 25W, so you should monitor and publish the power consumption figures next to the scores, now and in future reviews.
    And if you are adjusting scores based on CPU frequencies, then they are void and incorrect.
  • helpMeImDying - Thursday, September 24, 2020

    Btw, same with iGPUs.
  • beggerking@yahoo.com - Friday, September 25, 2020

    None of the tests seem valid... some are Intel-based, others are AMD-based... I don't see a single test where Ryzen beats 10th gen but loses to 11th gen on the standard 15 watt profile...

    The speed difference between 10th and 11th gen Intel is approx 10-15%... it's good, but probably not worth the price premium. Since Ryzen is already cheaper than 10th gen, I don't see how 11th gen would end up cheaper than Ryzen...
  • legokangpalla - Monday, September 28, 2020

    I always thought AVX-512 was a direct standoff against heterogeneous computing.
    I mean, isn't it a better idea to develop better integrations for GPGPU, like SYCL or higher versions of OpenCL? Programming with vector instructions is, IMO, a lot more painful compared to writing GPU kernels, and tasks like SIMD should be offloaded to the GPU instead of being handled by CPU instructions, which have poor portability (see the sketch below).
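
As a footnote to the performance-per-watt discussion above: a minimal sketch of the arithmetic in C. The score and power figures are made up purely for illustration; none of these numbers come from our testing.

    #include <stdio.h>

    /* Hypothetical figures for illustration only - not results from this review. */
    int main(void) {
        double score_15w = 50.0;   /* assumed benchmark score at a 15 W cap */
        double score_28w = 70.0;   /* assumed benchmark score at a 28 W cap */

        printf("15 W: %.2f points per watt\n", score_15w / 15.0);  /* 3.33 */
        printf("28 W: %.2f points per watt\n", score_28w / 28.0);  /* 2.50 */
        return 0;
    }

In this made-up example the 15 W configuration is slower in absolute terms but more efficient per watt, which is exactly the comparison being asked for.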
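
And on the AVX-512 portability point: a minimal sketch in C of the same loop written both ways. The intrinsics below (from immintrin.h) require a CPU with AVX-512F and a suitable compiler flag (e.g. -mavx512f on GCC/Clang), which is the portability cost being described; the scalar loop runs anywhere.

    #include <immintrin.h>

    /* Portable scalar loop: one float per iteration, runs on any CPU. */
    void add_scalar(const float *a, const float *b, float *out, int n) {
        for (int i = 0; i < n; i++)
            out[i] = a[i] + b[i];
    }

    /* AVX-512 loop: 16 floats per iteration. Assumes n is a multiple of 16
       and that the CPU supports AVX-512F. */
    void add_avx512(const float *a, const float *b, float *out, int n) {
        for (int i = 0; i < n; i += 16) {
            __m512 va = _mm512_loadu_ps(a + i);                /* load 16 floats */
            __m512 vb = _mm512_loadu_ps(b + i);
            _mm512_storeu_ps(out + i, _mm512_add_ps(va, vb));  /* add and store  */
        }
    }

A SYCL or OpenCL kernel expressing the same addition would, by contrast, run on any supported GPU or CPU backend.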
