Core: Decoding, and Two Goes Into One

The role of the decoder is to decipher the incoming instruction (opcode, addresses) and translate the variable-length x86 instruction (1 to 15 bytes) into fixed-length, RISC-like instructions that are easier to schedule and execute: micro-ops. The Core microarchitecture has four decoders – three simple and one complex. Each simple decoder can translate an instruction into a single micro-op, while the complex decoder can convert one instruction into up to four micro-ops (longer instructions are handed off to a microcode sequencer). It’s worth noting that simple decoders draw less power and occupy less die area than complex decoders. This style of pre-fetch and decode occurs in all modern x86 designs; by comparison, AMD’s K8 design has three complex decoders.

The Core design came with two techniques to assist this part of the core. The first is macro-op fusion. When two common x86 instructions (or macro-ops) are decoded together, they can be combined into a single micro-op, increasing throughput. The net result is that the four decoders can handle up to five instructions in one cycle.

According to Intel at the time, for a typical x86 program around 20% of macro-ops can be fused in this way. With two instructions held in one micro-op, further down the pipe there is more decode bandwidth for other instructions and less space taken in various buffers and the out-of-order (OoO) queue. Fusing 1-in-10 instructions with a neighbor works out to roughly an 11% uptick in throughput for Core, since ten instructions now occupy only nine micro-op slots. It’s worth noting that macro-op fusion (and macro-op caches) has since become an integral part of Intel’s microarchitectures, and of other x86 microarchitectures as well.
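The ~11% figure falls out of simple arithmetic: if one in ten instructions rides along inside another instruction’s micro-op, ten instructions occupy nine decode slots. A minimal sketch of that back-of-envelope reasoning (not Intel’s model, just the arithmetic):

```python
def fusion_uplift(fused_fraction: float) -> float:
    """Throughput gain when `fused_fraction` of instructions are fused
    into a neighbour's micro-op and so cost no decode slot of their own."""
    slots_per_instruction = 1.0 - fused_fraction
    return 1.0 / slots_per_instruction - 1.0

# 1-in-10 instructions fused: 10 instructions fit in 9 slots
print(f"{fusion_uplift(0.10):.1%}")  # → 11.1%
```

The same function shows why the 20% fusable-macro-op figure matters: fusion candidates come in pairs, so 20% of macro-ops being fusable corresponds to roughly 1-in-10 instructions disappearing into a partner’s slot.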

The second technique is a specific fusion of micro-ops related to memory operands rather than registers. An operation that adds a register to a value in memory would, under strict RISC rules, typically require three micro-ops:

Pseudo-code                                x86 instructions
read contents of memory into register2     MOV EBX, [mem]
add register1 to register2                 ADD EBX, EAX
store result of register2 back to memory   MOV [mem], EBX

However, since Banias (the first Pentium M), through Yonah and subsequently in Core, the first two of these micro-ops can be fused. This is called micro-op fusion. The pre-decode stage recognizes that these micro-ops can be kept together by using smarter but larger circuitry, without lowering the clock frequency. Again, op fusion helps in more ways than one – more throughput, less pressure on buffers, higher efficiency and better performance. Beyond this simple example of a memory-operand addition, micro-op fusion plays heavily into SSE/SSE2 operations as well. This is primarily where Core had an advantage over AMD’s K8.
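As a rough illustration only – the names and the micro-op representation here are invented for the sketch, not Intel’s internal format – the effect of fusing the load micro-op onto the ALU op that consumes it looks like this:

```python
# The three RISC-style micro-ops from the table above.
UNFUSED = [
    ("load",  "tmp <- [mem]"),
    ("add",   "tmp <- tmp + EAX"),
    ("store", "[mem] <- tmp"),
]

def fuse_load_op(uops):
    """Fuse each load micro-op into the ALU op that consumes it, so the
    pair travels as one entry through the decode and OoO buffers."""
    fused, i = [], 0
    while i < len(uops):
        if i + 1 < len(uops) and uops[i][0] == "load" and uops[i + 1][0] == "add":
            fused.append(("load+add", uops[i][1] + " ; " + uops[i + 1][1]))
            i += 2
        else:
            fused.append(uops[i])
            i += 1
    return fused

print(len(UNFUSED), "->", len(fuse_load_op(UNFUSED)))  # → 3 -> 2
```

The fused pair still executes as a load followed by an add, but it occupies one slot instead of two everywhere upstream of the execution units – which is exactly where the buffer-pressure savings come from.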

AMD’s definitions of macro-ops and micro-ops differ from Intel’s, which makes it a little confusing when comparing the two:

However, as mentioned above, AMD’s K8 has three complex decoders compared to Core’s three-simple-plus-one-complex arrangement. We also mentioned that simple decoders are smaller, use less power, and emit one Intel micro-op per incoming variable-length instruction. AMD’s K8 decoders, on the other hand, are dual purpose: each can perform DirectPath decoding, which is roughly analogous to Intel’s simple decoding, or VectorPath decoding, which is roughly analogous to Intel’s complex decoding. In almost all circumstances DirectPath is preferred, as it produces fewer ops, and it turns out most instructions go down the DirectPath anyway – including, on K8, floating-point and SSE instructions, which decode into fewer ops than they did on K7.

While its decoders are extremely powerful in what they do, K8’s limitation compared to Intel’s Core is two-fold. AMD cannot perform Intel’s version of macro-op fusion, so where Intel can pack a pair of operations – such as the load and execute halves of an SSE operation – into one fused instruction to increase decode throughput, AMD has to spend two. The second factor is that, by virtue of having more decoders (4 vs 3), Intel can decode more per cycle, a gap that widens further with macro-op fusion: where Intel can decode up to five instructions per cycle, AMD is limited to just three.
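In peak terms, the gap can be written down as a back-of-envelope count – assuming, for the sake of the sketch, that every decoder handles one instruction per cycle and that Core fuses at most one macro-op pair per cycle:

```python
# Peak x86 instructions decoded per cycle (idealized, best case).
CORE_DECODERS = 4   # 3 simple + 1 complex
K8_DECODERS = 3     # 3 dual-purpose (DirectPath/VectorPath)

core_peak = CORE_DECODERS + 1   # +1 instruction via one macro-op fusion
k8_peak = K8_DECODERS           # no equivalent fusion on K8

print(core_peak, "vs", k8_peak)  # → 5 vs 3
```

Real sustained rates are lower on both sides, of course – this only frames the best case the front ends were designed around.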

As Johan pointed out in the original article, this makes it hard for AMD’s K8 to have had an advantage here. It would require code dominated by instructions that need Intel’s complex decoder, yet without kicking in the microcode sequencer. Since the most frequent x86 instructions map to a single Intel micro-op, this situation is pretty unlikely.


158 Comments


  • Namisecond - Thursday, July 28, 2016 - link

    NVMe may not be all it's cracked up to be. It, for the most part, limits you to booting windows 8 and higher, and good luck with the free upgrade to windows 10 (which supposedly ends tomorrow).
  • FourEyedGeek - Monday, August 8, 2016 - link

    Same CPU here, mine is running at 4GHz, I can't see a reason other than NVMe to upgrade.
  • dotwayne - Thursday, July 28, 2016 - link

    Had a trusty E6300 @ 3.4-5 ghz back then. ahhh...miss those days of oc-ing the shit out of these cheap but super capable silicons.
  • jamyryals - Thursday, July 28, 2016 - link

    Neat article, I enjoyed it Ian!
  • azazel1024 - Thursday, July 28, 2016 - link

    Yeah a lot of those assumptions and guestimates for the future seem either overly optimistic or seem to ignore realities. I realize board power doesn't equate to average power use, but you are still talking about max power consumption that would drain a current cell phone battery dead in less than an hour, even on some of the biggest phone batteries.

    Beyond that is the heat dissipation, that phone is going to get mighty hot trying to dissipate 8+ watts out of even a large phone chassis.

    As pointed out, 32 cores seems a wee excessive. A lot of it seems to be "if we take it to the logical extreme" as opposed to "what we think is likely".
  • Peichen - Thursday, July 28, 2016 - link

    Take a 45nm C2Q Q9650 ($50 eBay), overclock to 4.0GHz, and you will be as fast as AMD's FX-9590 that's running at 220W. Older motherboards and DDR2 will be harder to come by, but it is sad how AMD never managed to catch up to Core 2 after all these years. E6400 was my first Intel after switching to AMD after the original Pentium and I have never looked back at AMD again.
  • Panoramix0903 - Thursday, July 28, 2016 - link

    I have made an upgrade from C2D 6550 to Q9650 in my old DELL Optiplex 755 MT. Plus 4x 2GB DDR2 800 MHz, Intel 535 SSD 240 GB, Sapphire Radeon HD7750 1GB DDR5, Sound Blaster X-FI, and USB 3.0 PCI-E card. Running Windows 7 Professional. 3 times more power than the original DELL configuration :-)
  • JohnRO - Thursday, July 28, 2016 - link

    I just logged in to tell you that I'm reading this article on my desktop PC which has an Intel Core 2 Duo E4300 processor (1.8 GHz, 200 MHz FSB) with 4 GB of RAM (started with 2). When I wanted (or needed) I overclocked this processor to 3 GHz (333 MHz FSB).
    My PC will have its 10 years anniversary this December. During the years I upgraded the video card (for 1080p h264 hardware decoding and games when I still played them) and added more hard drives. The PC has enough performance for what I’m using it right now – so I would say that this is a good processor.
  • siriq - Thursday, July 28, 2016 - link

    I still got my mobile 2600+ barton @2750 mhz , 939 3800+ x2 @2950 mhz . They were awesome!
  • althaz - Thursday, July 28, 2016 - link

    I bought a C2D E6300 the week it came out, my first Intel CPU since 2000. My previous CPUs had been an AMD Athlon 64 and an AMD Athlon Thunderbird.

    That E6300 remains my all-time favourite CPU. It's still running in a friend of mine's PC (@ 2.77GHz, which I overclocked it to soon after getting it). It was just *so* fast compared to my old PC. Everything just instantly got faster (and I hadn't even upgraded my GPU!).
