Comparing IPC on Skylake: Discrete Gaming

For this set of tests we kept things simple: a single low-end R7 240 DDR3, a former high-end GTX 770 Lightning, and a top-of-the-line GTX 980, all run on our standard CPU gaming suite under normal conditions. The integrated graphics are not used here because each CPU generation has a substantially different integrated graphics arrangement.

Alien: Isolation

If first person survival mixed with horror is your sort of thing, then Alien: Isolation, based on the Alien franchise, should be an interesting title. Developed by The Creative Assembly and released in October 2014, Alien: Isolation has won numerous awards, from Game of the Year to several top 10s/25s and Best Horror titles, racking up over a million sales by February 2015. Alien: Isolation uses a custom-built engine which includes dynamic sound effects and should be fully multi-core enabled.

For low end graphics we test at 720p with Ultra settings, whereas for mid and high range graphics we bump this up to 1080p. We take the average frame rate as our metric, using a scripted version of the built-in benchmark.
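
As a rough illustration of how such a scripted run can be automated, the sketch below launches a benchmark several times and averages the reported result. The executable path, flags, and results file are all hypothetical stand-ins; every game exposes its built-in benchmark differently, and our actual harness is more involved.

```python
# Minimal sketch of automating repeated runs of a built-in benchmark.
# The path, flags, and output file below are hypothetical placeholders.
import statistics
import subprocess

GAME = r"C:\Games\AlienIsolation\AI.exe"   # hypothetical install path
ARGS = ["-benchmark", "-preset", "ultra"]  # hypothetical CLI flags

def run_once():
    subprocess.run([GAME, *ARGS], check=True)  # blocks until the run finishes
    with open("benchmark_results.txt") as f:   # hypothetical results file
        # Assume the last line of the results file is the run's average FPS.
        return float(f.readlines()[-1])

# Average several runs to smooth out run-to-run variance.
scores = [run_once() for _ in range(3)]
print(f"Mean average frame rate: {statistics.mean(scores):.1f} FPS")
```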

Alien Isolation on ASUS R7 240 DDR3 2GB ($70)

Alien Isolation on MSI GTX 770 Lightning 2GB ($245)

Alien Isolation on ASUS GTX 980 Strix 4GB ($560)

Total War: Attila

The Total War franchise moves on to Attila, another development from The Creative Assembly. It is a standalone strategy title set in AD 395, where the main storyline puts the player in control of the leader of the Huns in a bid to conquer parts of the world. Graphically, the game can render hundreds or thousands of units on screen at once, each with its own individual actions, and it can put some of the big cards to task.

For low end graphics, we test at 720p with performance settings, recording the average frame rate. With mid and high range graphics, we test at 1080p with the quality setting. In both circumstances, unlimited video memory is enabled and the in-game scripted benchmark is used.

Total War: Attila on ASUS R7 240 DDR3 2GB ($70)

Total War: Attila on MSI GTX 770 Lightning 2GB ($245)

Total War: Attila on ASUS GTX 980 Strix 4GB ($560)

Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise finally hit the shelves on April 14th 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn’t provide graphical presets, but opens up the options to users and extends the boundaries by pushing even the hardest systems to the limit using Rockstar’s Advanced Game Engine. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, when cranked up to maximum it creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark, relying only on the final part, which combines a flight scene with an in-city drive-by followed by a tanker explosion. For low end systems we test at 720p on the lowest settings, whereas mid and high end graphics play at 1080p with very high settings across the board. We record both the average frame rate and the percentage of frames rendered below 60 FPS (i.e., frame times above 16.6 ms).
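
For clarity, these two metrics reduce to simple arithmetic over per-frame render times. The sketch below assumes a hypothetical log file with one frame time in milliseconds per line; the log format and file name are illustrative, not our actual tooling.

```python
# Compute average FPS and the share of frames below 60 FPS from a
# hypothetical frame-time log (one render time in milliseconds per line).
def gta_metrics(path):
    with open(path) as f:
        frame_ms = [float(line) for line in f if line.strip()]
    # Total frames divided by total seconds gives the true average FPS.
    avg_fps = 1000.0 * len(frame_ms) / sum(frame_ms)
    # A frame under 60 FPS is one that took longer than 16.6 ms to render.
    slow = sum(1 for ms in frame_ms if ms > 1000.0 / 60.0)
    return avg_fps, 100.0 * slow / len(frame_ms)

avg, pct = gta_metrics("gtav_frametimes.txt")  # hypothetical log file
print(f"Average: {avg:.1f} FPS; frames under 60 FPS: {pct:.1f}%")
```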

Grand Theft Auto V on ASUS R7 240 DDR3 2GB ($70)

Grand Theft Auto V on MSI GTX 770 Lightning 2GB ($245)

Grand Theft Auto V on ASUS GTX 980 Strix 4GB ($560)

GRID: Autosport

No graphics tests are complete without some input from Codemasters and the EGO engine, which means for this round of testing we point towards GRID: Autosport, the next iteration in the GRID racing series. As with our previous racing tests, each update to the engine aims to add effects, reflections, detail and realism, with Codemasters making ‘authenticity’ a main focal point for this version.

GRID’s benchmark mode is very flexible, and as a result we created a test race using a shortened version of the Red Bull Ring with twelve cars doing two laps. The car in focus starts last and is quite fast, but usually finishes second or third. For low end graphics we test at 1080p medium settings, whereas mid and high end graphics get the full 1080p maximum. Both the average and minimum frame rates are recorded.

GRID: Autosport on ASUS R7 240 DDR3 2GB ($70)

GRID: Autosport on MSI GTX 770 Lightning 2GB ($245)

GRID: Autosport on ASUS GTX 980 Strix 4GB ($560)

Middle-Earth: Shadow of Mordor

The final title in our testing is another battle of system performance with the open world action-adventure title, Shadow of Mordor. Produced by Monolith using the LithTech Jupiter EX engine and numerous detail add-ons, SoM goes for detail and complexity to a large extent, despite having been cut down from the original plans. The main story was written by the same writer as Red Dead Redemption, and it received Zero Punctuation’s Game of the Year for 2014.

For testing purposes, SoM offers a dynamic screen resolution setting, allowing us to render at high resolutions that are then scaled down to the monitor, and as a result we get several tests using the in-game benchmark. For low end graphics we test at 720p with low settings, whereas mid and high end graphics get 1080p Ultra. The top graphics test is also run at 3840x2160 with Ultra settings, and we test two cards at 4K where possible.
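
To make the render-scaling arithmetic concrete, the snippet below compares each test resolution against a 1080p monitor: at 3840x2160 the GPU shades four times as many pixels as the display shows before the frame is scaled down. The labels mirror the settings described above.

```python
# Pixel load of each Shadow of Mordor test relative to a 1080p monitor.
MONITOR_PX = 1920 * 1080

TESTS = {
    "low-end, 720p Low":     (1280, 720),
    "mid/high, 1080p Ultra": (1920, 1080),
    "4K Ultra":              (3840, 2160),
}

for name, (w, h) in TESTS.items():
    scale = (w * h) / MONITOR_PX
    # scale > 1.0 means the frame is rendered larger, then downsampled.
    print(f"{name}: {w}x{h} = {scale:.2f}x monitor pixels")
```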

Shadow of Mordor on ASUS R7 240 DDR3 2GB ($70)

Shadow of Mordor on MSI GTX 770 Lightning 2GB ($245)

Shadow of Mordor on ASUS GTX 980 Strix 4GB ($560)

Conclusions on Gaming

There’s no easy way to write this.

Discrete graphics card performance decreases on Skylake over Haswell.

This doesn’t make much sense at first glance. Here we have a processor with a higher IPC than Haswell, but it performs worse in both DDR3 and DDR4 modes. The amount by which it performs worse is relatively minor, usually -3%, with the odd benchmark (GRID on the R7 240) going as low as -5%. Why does this happen at all?

So we passed our results on to Intel, as well as to a few respected colleagues in the industry, all of whom were quite surprised. During a benchmark, the CPU performs tasks and directs memory transfers through the PCIe bus and vice versa. Technically, the CPU tasks should complete quicker due to the higher IPC and the improved threading topology, which leaves only the PCIe-to-DRAM transfers that pass through the CPU.

Our best guess, until we get to IDF to analyze what has changed or receive a direct explanation from Intel, is that part of the FIFO buffer arrangement between the CPU and PCIe might have changed, with a hint of additional latency. That being said, a minor increase in PCIe overhead (an increase in latency or a decrease in bandwidth) should be masked by the workload, so there might be something more fundamental at play, such as bus requests being accidentally duplicated or resent due to signal breakdown. There might also be a tertiary answer of an internal bus not running at full speed. To be sure, we retested some benchmarks on a different i7-6700K and a different motherboard, but saw the same effect. We’ll see how this plays out in the full-speed tests.
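
One way to probe this hypothesis directly, independent of any game, is a host-to-device copy microbenchmark: small transfers expose latency, large transfers expose bandwidth. The sketch below assumes PyCUDA and an NVIDIA GPU, and is not the methodology behind the results above, just one way a regression in CPU-to-PCIe transfers could be made visible.

```python
# A host-to-device copy microbenchmark using PyCUDA (assumed installed).
# Small transfers are latency-bound; large transfers are bandwidth-bound.
import time
import numpy as np
import pycuda.autoinit  # noqa: F401 (creates a CUDA context on import)
import pycuda.driver as drv

def htod_gbps(nbytes, iters=50):
    host = np.ones(nbytes, dtype=np.uint8)  # pageable host buffer
    dev = drv.mem_alloc(nbytes)             # device buffer
    drv.memcpy_htod(dev, host)              # warm-up copy
    start = time.perf_counter()
    for _ in range(iters):
        drv.memcpy_htod(dev, host)
    drv.Context.synchronize()               # ensure all copies completed
    return nbytes * iters / (time.perf_counter() - start) / 1e9

for size in (4 << 10, 1 << 20, 64 << 20):   # 4 KB, 1 MB, 64 MB
    print(f"{size:>9} bytes: {htod_gbps(size):6.2f} GB/s")
```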

Comments

  • vdek - Thursday, August 6, 2015 - link

    I'm still running my X58 motherboard. I ended up upgrading to a Xeon 5650 for $75, which is a 6-core 32nm CPU compatible with X58. Overclocked to 4.2GHz on air, the thing has excellent gaming performance; I see absolutely no reason to upgrade to Skylake.
  • bischofs - Thursday, August 6, 2015 - link

    Absolutely agree, my overclocked 920 still runs like a watch after 8 years. Not sure what Intel is doing these days, but the lack of competition is really impacting this market.
  • stux - Friday, August 7, 2015 - link

    I upgraded my 920 to a 990X; it runs at about 4.4GHz on air in an XPC chassis and has 6 cores/12 threads.

    I bought it off eBay cheap, and with an SSD on a SATA3 card I see no reason to upgrade. It works fantastically well, and is pretty much as fast as any modern 4-core machine.
  • Samus - Sunday, October 25, 2015 - link

    If you run a single GPU and don't go ultra-high-end, then gaming is still relevant on X58, but it really isn't capable of SLI due to PCIe 2.0 and the lanes being reduced to 8x electrical when more than one 16x-length slot is used. QPI also isn't very efficient by today's standards, and at the time AMD still had a better on-die memory controller. Intel's first attempt was commendable, but it was completely overhauled with Sandy Bridge, which offered virtually the same performance from two channels. Anybody who has run dual channel on X58 knows how bad it actually is and why triple channel is needed to keep it competitive with today's platforms.

    I loved X58. It is undoubtedly the most stable platform I've had since the 440BX. But as I said, by today's standards it makes Sandy Bridge seem groundbreaking, not because of the IPC, but because of the chipset platform. The reduced power consumption, simplicity, smaller overall size and lower cost of the 60/70 series chipsets, and then the incredibly simplified VRM layout of the 80/90 series chipsets (due to the on-die FIVR of Haswell), make X58 "look" ancient, but as I said, it is still relevant.

    Just don't load up the PCIe bus. A GPU, sound card and USB 3.0 controller is about as far as you want to go, and for the most part, as far as you need to!
  • vdek - Thursday, August 6, 2015 - link

    Get a Xeon 5650: a 6-core, 32nm CPU that will run at 4-4.2GHz all day on air. I upgraded my i7 920 to the X5650 and I couldn't be happier. They go for about $70-80 on Amazon or eBay. I'm planning on keeping my desktop for another 2-3 years; I upgraded the GPU to a GTX 970 and it maxes out most of what I can throw at it. I don't really see my CPU as a bottleneck here.
  • mdw9604 - Tuesday, August 11, 2015 - link

    Can you OC a Xeon 5650?
  • mapesdhs - Wednesday, August 12, 2015 - link

    Of course. Back then the main OC'ing method was still bclk-based, though X58 was a little more involved in that regard compared to P55 (uncore, etc.).
  • LCTR - Saturday, August 15, 2015 - link

    I'd been pondering the 6700K until I saw these posts from 920 users :)
    I use mine for gaming / video editing; it's running non-hyperthreaded at 4.2GHz on air (about 4GHz with HT on).

    I also upgraded my GPU to a 970 and have seen decent gaming performance - if I could jump to an X5650 and stretch things for another 1-2 years, that'd be great...

    What sort of performance do you see from the X5650? Would it hit 4GHz with HT enabled?
    The Xeon 5650s don't need any special mobo support or anything, do they? I have a Gigabyte GA-EX58-UD5

  • Nfarce - Wednesday, August 5, 2015 - link

    Well, sadly, ever since SB (I have one that's 4 years old, a 2500K, alongside a newer Haswell 4690K), each new tick/tock has not brought much. The days of getting a 50% boost in performance within a few generations are long gone, let alone a 100% boost, or a doubling of performance. Also keep in mind that there is a reason for this decline: as dies shrink, electron physics starts becoming an issue. Intel has been focusing more on decreased power usage. At some point CPU manufacturers will need to look at an entirely different manufacturing material and design, as silicon and traditional PCB design are coming to their limit.
  • Mr Perfect - Wednesday, August 5, 2015 - link

    It's not even 30% in high-end gaming. There is a clear improvement between SB and Skylake, but why should I build a whole new PC for 5FPS? I can't justify that expense.

    I'd be curious to see the high-end gaming benchmarks rerun with the next generation of GPUs. Will next gen GPUs care more about the CPU, or does DX12 eliminate the difference altogether?
