Core: Out of Order and Execution

After Prefetch, Cache and Decode comes Order and Execution. Without rehashing the in-order vs. out-of-order debate in full: a design with more execution ports and a larger out-of-order reorder buffer can typically sustain a higher level of instructions per clock, as long as the out-of-order logic is smart, data can be fed continuously, and all the execution ports can be kept busy each cycle. Whether having a super-sized core is actually beneficial to day-to-day workloads in 2016 is an interesting point to discuss; back in 2006, during the Core era, it certainly provided significant benefits.

As Johan did back in the original piece, let’s start with semi-equivalent microarchitecture diagrams for Core vs. K8:

[Microarchitecture block diagram: Intel Core]

[Microarchitecture block diagram: AMD K8]
For anyone versed in x86 design, three differences immediately stand out when comparing the two. First is the reorder buffer, which for Intel stands at 96 entries compared to 72 for AMD. Second is the scheduler arrangement: AMD uses split 24-entry INT and 36-entry FP schedulers fed from the ‘Instruction Control Unit’, whereas Intel has a 32-entry combined ‘reservation station’. Third is the number of SSE ports: Intel has three compared to AMD’s two. Let’s go through these in order.

For the reorder buffer, with the right arrangement, bigger is usually better. Make it too big, however, and it burns too much silicon and power, so there is a fine line to walk; each extra entry also delivers less benefit than the last. The goal of the buffer is to push decoded instructions that are ready to work to the front of the queue, while making sure order-dependent instructions stay in their required order. By executing independent operations as soon as their inputs are ready, and letting prefetch gather data for instructions still waiting in the buffer, latency and bandwidth issues can be hidden. (Large buffers are also key to simultaneous multithreading, which we’ll discuss in a bit, as it is absent from Core 2 Duo.) However, once the buffer is already sending the peak number of instructions to the ports every cycle, making it larger brings diminishing returns - at that point the design has to add ports instead, depending on the power and silicon budget.

For the scheduler arrangements, split and unified schedulers for FP and INT each have upsides and downsides. The main benefit of split schedulers is entry count - in this case AMD totals 60 entries (24 INT + 36 FP) compared to Intel’s 32. However, a combined scheduler allows better utilization: every entry can hold either type of operation, so a workload that skews heavily towards INT or FP can still fill the whole window, whereas a split design leaves the other scheduler’s entries idle.

The SSE difference between the two architectures is amplified by something we’ve already discussed - macro-op fusion. The Intel Core microarchitecture has three SSE units compared to AMD’s two, and fusion also lets certain packed SSE operations issue as one instruction rather than two. Two of Intel’s units are symmetric, and all three sport 128-bit execution rather than the 64-bit datapaths on K8. As a result, K8 has to split a 128-bit operation into two 64-bit instructions, whereas Core can absorb the full 128-bit instruction in one go. Core can therefore outperform K8 on 128-bit SSE on many different levels, and for 64-bit FP SSE, Core can do four double-precision operations per cycle, whereas the Athlon 64 can do three.

One other metric not on the diagram comes from branch prediction. Core can sustain one branch prediction per cycle, compared to one every two cycles on previous Intel microarchitectures. Here Intel was catching up to AMD, which already supported one prediction per cycle.


  • patel21 - Thursday, July 28, 2016 - link

    Me Q6600 ;-)
  • nathanddrews - Thursday, July 28, 2016 - link

    Me too! Great chip!
  • Notmyusualid - Thursday, July 28, 2016 - link

    Had my G0 stepping just as soon as it dropped.

Coming from a high freq Netburst, I was thrown back by the difference.

Since then I've bought Xtreme version processors... Until now, it's been money well spent.
  • KLC - Thursday, July 28, 2016 - link

    Me too.
  • rarson - Thursday, August 4, 2016 - link

I built my current PC back in 2007 using a Pentium Dual Core E2160 (the $65 bang for the buck king), which easily overclocked to 3 GHz, in an Abit IP35 Pro. Several years ago I replaced the Pentium with a C2D E8600. I'm still using it today. (I had the Q9550 in there for a while, but the Abit board was extremely finicky with it and I found that the E8600 was a much better overclocker.)
  • paffinity - Wednesday, July 27, 2016 - link

    Merom architecture was good architecture.
  • CajunArson - Wednesday, July 27, 2016 - link

    To quote Gross Pointe Blank: Ten years man!! TEN YEARS!
  • guidryp - Wednesday, July 27, 2016 - link

    Too bad you didn't test something with a bit more clock speed.

    So you have ~2GHz vs ~4GHz and it's half as fast on single threaded...
  • Ranger1065 - Wednesday, July 27, 2016 - link

    I owned the E6600 and my Q6600 system from around 2008 is still running. Thanks for an interesting and nostalgic read :)
  • Beany2013 - Wednesday, July 27, 2016 - link

    Built a Q6600 rig for a mate just as they were going EOL and were getting cheap. It's still trucking, although I suspect the memory bus is getting flaky. Time for a rebuild, methinks.

    And a monster NAS to store the likely hundreds of thousands of photos she's processed on it and which are stuck around on multiple USB HDDs in her basement.

    It's not just CPUs that have moved on - who'd have thought ten years ago that a *good* four bay NAS that can do virtualisation would be a thing you could get for under £350/$500 (QNAP TS451) without disks? Hell, you could barely build even a budget desktop machine (just the tower, no monitor etc) for that back then.

    God I feel old.
