Core: Out of Order and Execution

After prefetch, cache, and decode comes out-of-order execution. Without rehashing the in-order vs. out-of-order discussion: a design with more execution ports and a larger reorder buffer can typically sustain more instructions per clock, as long as the out-of-order logic is smart, data can continuously be fed, and all the execution ports can be used each cycle. Whether a super-sized core is actually beneficial to day-to-day use in 2016 is an interesting point to discuss; in 2006 and the Core era, it certainly provided significant benefits.

As Johan did back in the original piece, let’s start with semi-equivalent microarchitecture diagrams for Core vs. K8:


[Microarchitecture diagram: Intel Core]

[Microarchitecture diagram: AMD K8]

For anyone versed in x86 design, three differences immediately stand out when comparing the two. First is the reorder buffer, which stands at 96 entries for Intel compared to 72 for AMD. Second is the scheduler arrangement: AMD feeds split 24-entry INT and 36-entry FP schedulers from its 'Instruction Control Unit', whereas Intel has a single 32-entry combined 'reservation station'. Third is the number of SSE ports: Intel has three compared to AMD's two. Let's go through these in order.

For the reorder buffers, with the right arrangement, bigger is usually better. Make it too big, however, and it burns silicon area and power for little gain, so there is a fine line to tread; the bigger the buffer is, the less each extra entry matters. The goal of the buffer is to push decoded instructions that are ready to work to the front of the queue, while making sure instructions that depend on earlier results stay in their required order. Executing independent operations as soon as they are ready, while prefetch gathers data for instructions still waiting in the buffer, allows latency and bandwidth issues to be hidden. (Large buffers are also key to simultaneous multithreading, which we'll discuss in a bit, as it is not here in Core 2 Duo.) However, when the buffer is already sending the peak number of instructions to the ports every cycle, a larger buffer has diminishing returns; the design has to keep adding ports instead, depending on the power/silicon budget.
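To make the latency-hiding idea concrete, here is a minimal C sketch (my illustration, not from the original article): the first loop is one long dependency chain that no reorder buffer can untangle, while the second exposes four independent chains the out-of-order core can schedule in parallel.

```c
#include <stdio.h>

#define N (1 << 20)
static float a[N];

/* One long dependency chain: every add consumes the previous sum,
   so the reorder buffer cannot start iteration i+1's add early. */
static float sum_serial(void) {
    float s = 0.0f;
    for (int i = 0; i < N; i++)
        s += a[i];
    return s;
}

/* Four independent chains: the scheduler can keep several adds in
   flight at once, hiding the FP add latency behind other work. */
static float sum_unrolled(void) {
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
    for (int i = 0; i < N; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    return (s0 + s1) + (s2 + s3);
}

int main(void) {
    for (int i = 0; i < N; i++)
        a[i] = 1.0f;
    printf("%f %f\n", sum_serial(), sum_unrolled());
    return 0;
}
```

Both loops do the same arithmetic; the second simply gives the out-of-order machinery independent work to fill its ports with each cycle.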

For the scheduler arrangements, split and unified schedulers for FP and INT each have upsides and downsides. For split schedulers, the main benefit is entry count: in this case AMD totals 60 entries (24 INT + 36 FP) compared to Intel's 32. However, a combined scheduler allows for better utilization: because the execution ports are not tied to one scheduler or the other, a workload that skews heavily toward INT or FP can still keep more of the machine busy.
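The utilization argument is easiest to see with a mixed instruction stream. In a hypothetical loop like the sketch below (an illustration, not from the article), every iteration produces both INT and FP micro-ops; a unified reservation station can hand either kind to whichever port is free, whereas split schedulers can leave one side's entries and ports idle when the mix skews.

```c
#include <stddef.h>

/* A mixed INT/FP kernel: each iteration generates floating-point
   micro-ops (the accumulate) and integer micro-ops (the histogram
   update). A unified scheduler can dispatch either kind to any
   free port; split schedulers partition the entries up front. */
double mixed_kernel(const double *x, int *hist, size_t n) {
    double acc = 0.0;
    for (size_t i = 0; i < n; i++) {
        acc += x[i] * x[i];        /* FP micro-ops  */
        hist[((int)x[i]) & 15]++;  /* INT micro-ops */
    }
    return acc;
}
```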

The SSE difference between the two architectures is amplified by what we've already discussed: macro-op fusion. The Intel Core microarchitecture has three SSE units compared to AMD's two, and thanks to fusion, certain packed SSE operations issue as a single micro-op rather than two. Two of Intel's units are symmetric, and all three have 128-bit datapaths, versus 64-bit on K8. This means K8 must crack a 128-bit operation into two 64-bit micro-ops, whereas Core can absorb a 128-bit instruction in one go. As a result, Core can outperform K8 on 128-bit SSE on many different levels, and for 64-bit FP SSE, Core can do 4 DP FLOPs per cycle whereas the Athlon 64 can do 3.
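To illustrate the 128-bit point, here is a short SSE2 sketch using compiler intrinsics (my example, not the article's): each `_mm_add_pd` compiles to an ADDPD instruction, which Core's 128-bit units execute as a single micro-op while K8's 64-bit datapath cracks it into two.

```c
#include <stddef.h>
#include <emmintrin.h>  /* SSE2 intrinsics */

/* c[i] = a[i] + b[i] over n doubles, two lanes per instruction.
   Assumes n is a multiple of 2 and the pointers are 16-byte
   aligned, as required by the aligned _mm_load/_mm_store forms. */
void add_packed(const double *a, const double *b, double *c, size_t n) {
    for (size_t i = 0; i < n; i += 2) {
        __m128d va = _mm_load_pd(&a[i]);
        __m128d vb = _mm_load_pd(&b[i]);
        _mm_store_pd(&c[i], _mm_add_pd(va, vb));  /* one ADDPD */
    }
}
```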

One other metric not on the diagram comes from branch prediction. Core can sustain one branch prediction per cycle, compared to one per two cycles on previous Intel microarchitectures. Here Intel was catching up to AMD, which already supported one prediction per cycle.
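Branch prediction throughput matters most in branch-dense code. In a sketch like the one below (my example, with hypothetical helper names), the first loop presents the predictor with a conditional branch on every element, while the branchless rewrite turns that control dependency into a data dependency and sidesteps prediction for the body entirely.

```c
#include <stddef.h>

/* One conditional branch per element: a core that can predict a
   branch every cycle keeps this loop fed, while one prediction per
   two cycles caps the front end on branch-dense code like this. */
long count_over(const int *v, size_t n, int limit) {
    long count = 0;
    for (size_t i = 0; i < n; i++) {
        if (v[i] > limit)  /* predicted every iteration */
            count++;
    }
    return count;
}

/* Branchless form: the comparison produces 0 or 1 directly, so the
   body has no branch to predict (only the loop back-edge remains). */
long count_over_branchless(const int *v, size_t n, int limit) {
    long count = 0;
    for (size_t i = 0; i < n; i++)
        count += (v[i] > limit);
    return count;
}
```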

Comments

  • Jon Tseng - Wednesday, July 27, 2016 - link

    Great chip. Only just upgraded from my QX6850 last month. Paired with a GTX 970 it was doing just fine running all new games maxed out at 1080p. Amazing for something nearly a decade old!!
  • Negative Decibel - Wednesday, July 27, 2016 - link

    my E6600 is still kicking.
  • tarqsharq - Wednesday, July 27, 2016 - link

    My dad still uses my old E8400 for his main PC. He's getting my old i7-875k soon though.
  • jjj - Wednesday, July 27, 2016 - link

You can't do DRAM in glasses, not in a real way, and that's what mobile is by 2025.
On-package DRAM is next year or soon after, not 2025.
You can't have big cores either, and you need ridiculous GPUs and extreme efficiency. Parallelism and accelerators, that's where computing needs to go, from mobile to server.
We need 10-20 mm³ chips, not 100 cm² boards. New NV memories, not DRAM, and so on.
    Will be interesting to see who goes 3D first with logic on logic and then who goes 3D first as the default in the most advanced process.

At the end of the day, even if the shrinking doesn't stop, 2D just can't offer enough for the next form factor. Much higher efficiency is needed, and a planar chip would be far too big to fit in the device while the costs would be mad. Much more is needed. For robots too. The costs and efficiency need to scale, and with planar the gains are little at best.
  • wumpus - Thursday, August 4, 2016 - link

On-package DRAM seems to be a "forever coming" tech. AMD's Fury X basically shipped it, and it went nowhere. I'm guessing it will be used whenever Intel or IBM feel it can be used for serious advantage on some high-core server chip, or possibly when Intel wants to build a high-speed DRAM cache (with a high-speed bus) and use 3D XPoint for "main memory".

    The slow rollout is shocking. I'm guessing nvidia eventually gave up with it and went with tiling (see the Kanter demo on left, but ignore the thread: nothing but fanboys beating their chests).
  • willis936 - Wednesday, July 27, 2016 - link

    I'm certainly no silicon R&D expert but I'm very skeptical of those projections.
  • Mr.Goodcat - Wednesday, July 27, 2016 - link

    Typo:
    "On the later, we get the prediction that 450nm wafers should be in play at around 2021 for DRAM"
    450nm wafers would be truly interesting ;-)
  • wumpus - Thursday, August 4, 2016 - link

    I like the rapidly falling static safety. Don't breathe on a 2030 chip.
  • faizoff - Wednesday, July 27, 2016 - link

My first Core 2 Duo was an E4400 that I bought in 2007, I believe; the thing lasted me up to 2011, when I upgraded to an i5 2500k. I should've kept that C2D just for nostalgia's sake. I used it intermittently as a Plex server and it worked great on FreeNAS. The only issue was that it was really noisy and would get hot.
  • Notmyusualid - Thursday, July 28, 2016 - link

    I've got a few old servers kicking around, all with valid Win server licenses, but due to UK electricity costs, just can't bring myself to have them running at home 24/7 just to serve a backup, or yet another Breaking Bad viewing session... :) which we can do locally now.
