Gaming Benchmarks

F1 2013

First up is F1 2013 by Codemasters. I am a big Formula 1 fan in my spare time, and nothing makes me happier than carving up the field in a Caterham, waving to the Red Bulls as I drive by (because I play on easy and take shortcuts). F1 2013 uses the EGO Engine and, like other Codemasters titles, is very playable on older hardware. In order to beef up the benchmark a bit, we devised the following scenario for the benchmark mode: one lap of Spa-Francorchamps in the heavy wet, following Jenson Button in the McLaren, who starts 22nd on the grid behind a field of 11 Williams, 5 Marussia and 5 Caterham cars in that order. This puts the emphasis on the CPU to handle the AI in the wet, and allows for a good amount of overtaking during the automated benchmark. We test at 1920x1080 on Ultra graphical settings.

F1 2013: 1080p Max, 1x GTX 770

[Charts: F1 2013, 1080p Max, average and minimum frame rates (NVIDIA and AMD)]

Bioshock Infinite

Bioshock Infinite was Zero Punctuation's Game of the Year for 2013. It uses Unreal Engine 3 and is designed to scale with both CPU cores and graphical prowess. We run the benchmark using the Adrenaline benchmark tool at the Xtreme (1920x1080, Maximum) performance setting, noting down the average and minimum frame rates.

Bioshock Infinite: 1080p Max, 1x GTX 770

[Charts: Bioshock Infinite, 1080p Max, average and minimum frame rates (NVIDIA and AMD)]

Tomb Raider

Next up is Tomb Raider, an AMD-optimized game lauded for its use of TressFX, which renders dynamic hair to increase in-game immersion. Tomb Raider uses a modified version of the Crystal Engine and responds well to raw horsepower. We test the benchmark using the Adrenaline benchmark tool at the Xtreme (1920x1080, Maximum) performance setting, noting down the average and minimum frame rates.

Tomb Raider: 1080p Max, 1x GTX 770

[Charts: Tomb Raider, 1080p Max, average and minimum frame rates (NVIDIA and AMD)]

53 Comments

  • silenceisgolden - Wednesday, May 14, 2014

    So I think I might be a few PCIe lanes off, but would it be feasible to get rid of the PCI, one LAN slot, the D-SUB (because why is that still useful with DVI available), the PCI Express/M.2/SATA6 switch but keep the M.2? Then either add in another USB3, SATA6, or if possible in the future, another M.2 stacked on top of the first. I would think this would be the best combination of connectivity that the mainstream to enthusiast range of PC builders are looking for, and would stop the continuation of older standards or these choices that people have to make that might not be obvious when they are plugging stuff in to the motherboard.
  • Chil - Wednesday, May 14, 2014

    The BIOS screenshots of both HD and Classic Mode show a BCLK of 99.79 MHz. Isn't the standard 100.0? Can anyone comment on if this is a bug or expected behavior and how it affects performance?
  • The_Assimilator - Wednesday, May 14, 2014

    It's possible that AnandTech had Spread Spectrum enabled, but I have that option disabled on my Asrock Z77 Extreme6, and its BCLK fluctuates between 99.97MHz and 99.99MHz at boot (I have never seen it do a flat 100.00MHz).
  • Chil - Thursday, May 15, 2014

    99.97 is right around what I expect, but 99.79 (0.21 off the mark) is a different story. I did a bit of searching and this appears to affect Gigabyte's entire "ultra durable" lineup.
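[Ed: to put that deviation in perspective, here is a quick sketch of how a low BCLK propagates to the core clock. The 44x multiplier is purely an illustrative assumption, not a figure from this review.]

```python
# Sketch: core clock = BCLK x CPU multiplier, so a BCLK shortfall
# scales the whole chip down by the same percentage.
MULTIPLIER = 44  # assumed for illustration only

def effective_clock_mhz(bclk_mhz: float, multiplier: int = MULTIPLIER) -> float:
    """Effective core clock in MHz for a given base clock."""
    return bclk_mhz * multiplier

nominal = effective_clock_mhz(100.00)   # 4400.00 MHz
measured = effective_clock_mhz(99.79)   # 4390.76 MHz
shortfall_pct = (nominal - measured) / nominal * 100
print(f"{measured:.2f} MHz vs {nominal:.2f} MHz ({shortfall_pct:.2f}% down)")
```

In other words, a 0.21 MHz BCLK deficit is a flat 0.21% clock deficit everywhere, which is measurable in benchmarks but well below run-to-run variance in most games.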
  • maecenas - Wednesday, May 14, 2014

    Given that you've run a few articles explaining how modern games are GPU dependent, and very rarely is the CPU the bottleneck in single-card applications, I'm really not clear on how a motherboard is going to have a significant impact on gaming performance, holding the GPU and CPU constant.
  • extide - Wednesday, May 14, 2014

    It doesn't. The only real difference is the PCIe lane allocation, and whether it uses a PLX chip. Also, the feature set may be different, but the motherboard doesn't really affect performance.
  • Ian Cutress - Wednesday, May 14, 2014

    PCIe lane allocation is important if you are not limited by the CPU first (see our Haswell refresh). There are some weird and wonderful chipset lane allocations when you move into the world of the PLX chip, or some server boards miss out lanes altogether. If/when I move to 4K gaming benchmarks (2015? depends on 24"/27" 60Hz monitor pricing) we might see a greater effect there.
  • The_Assimilator - Wednesday, May 14, 2014

    Flex IO is a step in the right direction from Intel. That said, it could be so much more; in fact it would make the most sense if ALL Flex IO ports were switchable between PCIe/USB3/SATA3. That would allow motherboard manufacturers to provide e.g. native 10 SATA ports without having to purchase and integrate additional standalone SATA controllers, which are slower and add to the BOM. I'd be pretty happy with a motherboard that did a 2/8/8 split for PCIe/USB3/SATA3.

    Additionally, the 14 USB 2.0 ports are ridiculous; I don't think I've ever seen a motherboard that provides that many. Intel should aggregate 10 of those ports into an additional Flex IO port, which would leave 4 USB 2.0 ports. Anyone who needs more than a minimum of 8 USB ports (4 USB2 + minimum of 4 USB3) can buy a USB hub.
  • DanNeely - Wednesday, May 14, 2014

    8 back panel ports and 3 mobo headers for 6 more was a relatively common config a few years ago. I think I've seen 6 back panel and 4 headers a few times too. 3 mobo headers covers a case with 4 front panel ports and a card reader in a drive bay. Other than being mostly USB3 this board has the same 8 back panel and 3 header configuration.

    I'm not sure why Intel didn't cut the number of 2.0 ports down when they added USB3 to the chipset, but IIRC a USB2 controller is tiny compared to a USB3/PCIe lane/SATA6 controller. It's entirely possible that it came down to the 2.0 controllers being a small enough chunk of the chip that it wasn't worth fiddling with them, because removing them wouldn't free up enough space to matter for anything else.
  • repoman27 - Wednesday, May 14, 2014

    Well A: the USB 2.0 controllers, hubs, and ports were already there, so it's easier to just let them be, and B: every USB 3.0 port uses a USB 2.0 port as well. Thus you really have a maximum of 14 USB ports total, up to 6 of which can be USB 3.0.
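[Ed: the port accounting discussed in the comments above can be sketched as follows. The 18-lane Flex IO budget and the per-type caps are assumptions drawn from the commenters' figures for illustration, not values taken from an Intel datasheet.]

```python
# Port-budget sketch for a Z97-style PCH (numbers assumed, not datasheet values).
# Flex IO lanes are shared among PCIe, USB 3.0 and SATA, with per-type caps.
FLEX_IO_LANES = 18
CAPS = {"pcie": 8, "usb3": 6, "sata": 6}
USB2_TOTAL = 14  # every USB 3.0 port also consumes one USB 2.0 port

def split_is_valid(pcie: int, usb3: int, sata: int) -> bool:
    """A split is valid if it fits the shared lane budget and each per-type cap."""
    split = {"pcie": pcie, "usb3": usb3, "sata": sata}
    return (sum(split.values()) <= FLEX_IO_LANES
            and all(split[k] <= CAPS[k] for k in CAPS))

# The 2/8/8 PCIe/USB3/SATA split suggested above exceeds the assumed caps:
print(split_is_valid(2, 8, 8))   # False
# A cap-respecting split that still spends all 18 lanes:
print(split_is_valid(8, 6, 4))   # True
# And since USB 3.0 ports shadow USB 2.0 ones, the number of distinct
# USB ports never exceeds USB2_TOTAL (14), no matter the split.
```

The design choice this models is that the lane budget is shared but not freely divisible: lifting the per-type caps (as The_Assimilator suggests) is what would let board makers hit 10 native SATA ports without extra controllers.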
