Core: Decoding, and Two Goes Into One

The role of the decoder is to decipher the incoming instruction (opcode, addresses) and translate the 1-15 byte variable-length instruction into a fixed-length, RISC-like operation that is easier to schedule and execute: a micro-op. The Core microarchitecture has four decoders – three simple and one complex. Each simple decoder translates an instruction into a single micro-op, while the complex decoder can convert one instruction into up to four micro-ops (longer instructions are handled by a microcode sequencer). It’s worth noting that simple decoders draw less power and occupy less die area than complex decoders. This style of pre-fetch and decode occurs in all modern x86 designs; by comparison, AMD’s K8 design has three complex decoders.
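As a loose illustration of how such a decode group might divide up work, here is a simplified model (the instruction-to-micro-op counts and assignment rules are assumptions for the example, not Intel's actual decode rules):

    # Illustrative model of a 3-simple + 1-complex decode group (not Intel's real rules).
    # Assumption: a 1-micro-op instruction can use any decoder, a 2-4 micro-op instruction
    # needs the complex decoder, and anything longer falls back to the microcode sequencer.

    def assign_decoders(uop_counts):
        """Assign one cycle's worth of instructions (by micro-op count) to decoder slots."""
        simple_free, complex_free = 3, 1
        decoded = []
        for count in uop_counts:
            if count == 1 and simple_free:
                simple_free -= 1
                decoded.append((count, "simple"))
            elif 1 <= count <= 4 and complex_free:
                complex_free -= 1
                decoded.append((count, "complex"))
            elif count > 4:
                decoded.append((count, "microcode sequencer"))
                break                     # the sequencer stalls the rest of the group
            else:
                break                     # no suitable decoder left this cycle
        return decoded

    print(assign_decoders([1, 1, 3, 1]))  # three simple ops plus one complex op fit in a cycle
    print(assign_decoders([3, 3, 1, 1]))  # a second multi-uop instruction waits for the next cycle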

The Core design came with two techniques to assist this part of the core. The first is macro-op fusion. When two common x86 instructions (or macro-ops) can be decoded together, they are combined to increase throughput, allowing one micro-op to hold two instructions. The net result is that the four decoders can decode five instructions in one cycle.

According to Intel at the time, around 20% of the macro-ops in a typical x86 program can be fused in this way. With two instructions held in one micro-op, there is more decode bandwidth left for other instructions further down the pipe, and less space is taken up in the various buffers and the out-of-order (OoO) queue. If one in every ten instructions is fused with a neighbor, ten instructions occupy only nine micro-op slots, which works out to roughly an 11% uptick in throughput for Core. It’s worth noting that macro-op fusion (and macro-op caches) has since become an integral part of Intel’s microarchitectures, and of other x86 microarchitectures as well.
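A minimal sketch of the idea follows. It assumes the fusible pair is a CMP or TEST followed immediately by a conditional jump (Core's actual fusion rules are more restrictive than this), and it repeats the back-of-envelope arithmetic behind the ~11% figure:

    # Illustrative macro-op fusion pass: fuse a CMP/TEST with an immediately following
    # conditional jump into a single micro-op slot. The pairing rule is simplified for
    # the example and is not a complete description of Core's fusion rules.

    FUSIBLE_FIRST = {"CMP", "TEST"}

    def fuse_macro_ops(instructions):
        """Return micro-op slots after fusing eligible adjacent instruction pairs."""
        slots, i = [], 0
        while i < len(instructions):
            op = instructions[i].split()[0]
            nxt = instructions[i + 1].split()[0] if i + 1 < len(instructions) else ""
            if op in FUSIBLE_FIRST and nxt.startswith("J") and nxt != "JMP":
                slots.append(instructions[i] + " + " + instructions[i + 1])  # one fused slot
                i += 2
            else:
                slots.append(instructions[i])
                i += 1
        return slots

    stream = ["MOV EAX, [mem]", "CMP EAX, 0", "JNE loop", "ADD EBX, ECX"]
    print(fuse_macro_ops(stream))     # four instructions occupy three micro-op slots

    # If 1 in 10 instructions fuses with its neighbour, 100 instructions need only 90 slots:
    print(f"throughput gain ~ {100 / 90 - 1:.0%}")   # ~11%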

The second technique is a specific fusion of operations related to memory addresses rather than registers. An operation that adds a register to a value in memory would, under RISC-style rules, typically require three micro-ops:

Pseudo-code                                    Instruction
read contents of memory into register2        MOV EBX, [mem]
add register1 to register2                    ADD EBX, EAX
store result from register2 back to memory    MOV [mem], EBX

However, since Banias (the original Pentium M), through Yonah, and subsequently in Core, the first two of these micro-ops can be fused. This is called micro-op fusion. The pre-decode stage recognizes that these operations can be kept together, using smarter but larger circuitry, without lowering the clock frequency. Again, op fusion helps in more ways than one: more throughput, less pressure on buffers, higher efficiency, and better performance. Beyond this simple example of adding to a value in memory, micro-op fusion plays heavily into SSE/SSE2 operations as well, and this is primarily where Core had an advantage over AMD’s K8.
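As a sketch of what this buys further down the pipe, the toy model below counts micro-op slots for the read-modify-write sequence above, with and without fusing the load into the ALU op that consumes it (the slot accounting is a simplification for illustration):

    # Illustrative micro-op accounting for the read-modify-write example above.
    # Assumption for the example: a fused load+ALU pair travels as one micro-op through
    # the front end and buffers, so it occupies one slot instead of two.

    UNFUSED = ["load  EBX <- [mem]", "add   EBX <- EBX + EAX", "store [mem] <- EBX"]

    def fused_slots(uops):
        """Fuse a load with the ALU op that immediately consumes it (simplified rule)."""
        slots, skip_next = [], False
        for cur, nxt in zip(uops, uops[1:] + [None]):
            if skip_next:
                skip_next = False
                continue
            if cur.startswith("load") and nxt and nxt.startswith("add"):
                slots.append(cur + "  ||  " + nxt)   # one fused slot
                skip_next = True
            else:
                slots.append(cur)
        return slots

    print(len(UNFUSED), "slots unfused")             # 3
    print(len(fused_slots(UNFUSED)), "slots fused")  # 2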

AMD’s definitions of macro-ops and micro-ops differ from Intel’s, which makes comparing the two a little confusing: broadly speaking, an AMD macro-op is a fixed-length internal operation that can bundle an ALU operation together with a load or store, and it is only broken down into micro-ops at the execution units.

However, as mentioned above, AMD’s K8 has three complex decoders compared to Core’s 3 simple + 1 complex arrangement. We also mentioned that simple decoders are smaller, use less power, and emit one Intel micro-op per incoming variable-length instruction. AMD’s K8 decoders, on the other hand, are dual purpose: each can perform DirectPath decoding, which is roughly analogous to Intel’s simple decode, or VectorPath decoding, which is roughly analogous to Intel’s complex decode. In almost all circumstances DirectPath is preferred, as it produces fewer ops, and it turns out that most instructions go down DirectPath anyway – including floating point and SSE instructions on K8, which results in fewer ops than on K7.
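As a loose sketch of that split (the instruction classification below is assumed for the example, not taken from AMD's decode tables), most instructions take DirectPath while a minority are expanded from the microcode ROM via VectorPath:

    # Loose illustration of K8's two decode paths; the set of microcoded instructions
    # here is an assumption for the example.
    VECTORPATH = {"CPUID", "DIV", "PUSHA"}   # assumed examples of microcoded instructions

    def decode_path(mnemonic):
        return "VectorPath (microcode sequence)" if mnemonic in VECTORPATH else "DirectPath"

    for instr in ["ADD", "MOV", "MULPS", "CPUID"]:
        print(f"{instr:6s} -> {decode_path(instr)}")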

While extremely powerful in what they do, K8’s decoders have a two-fold limitation compared to Intel’s Core. AMD cannot perform Intel’s version of op fusion, so where Intel can pack, for example, an SSE load and its execute operation into one fused op to increase decode throughput, AMD has to rely on two separate ops. The second factor is that by virtue of having more decoders (four versus three), Intel can decode more per cycle, and macro-op fusion widens the gap: where Intel can decode up to five instructions per cycle, AMD is limited to just three.
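A quick way to express the per-cycle gap described above, under the simplifying assumptions that every instruction fits an available decoder slot and that Core finds one fusible pair per cycle:

    # Simplified peak decode throughput comparison (ideal conditions assumed).
    core_decoders, k8_decoders = 4, 3
    core_peak = core_decoders + 1   # one macro-fused pair lets four slots carry five instructions
    k8_peak = k8_decoders           # three DirectPath decodes per cycle
    print(f"Core: up to {core_peak} instructions/cycle, K8: up to {k8_peak} instructions/cycle")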

As Johan pointed out in the original article, this makes it hard for AMD’s K8 to have had an advantage here. K8 would only pull ahead if the instruction stream were dominated by instructions that all need Intel’s complex decoder – decoding into two or more micro-ops each – without being complex enough to kick in the microcode sequencer, so that they queue up behind Core’s single complex decoder. Since the most frequent x86 instructions map to a single Intel micro-op, this situation is pretty unlikely.

Comments

  • e1jones - Wednesday, July 27, 2016 - link

    My E8400 is still my daily driver, 4x 2GB and an SSD swapped in later as the boot drive. Still runs great, except it tends to get bogged down by the TrustedInstaller and the Firefox memory leaks.
  • rarson - Friday, August 5, 2016 - link

    I've got an E8600 in an Abit IP35 Pro motherboard. I was having a hard time finding DDR2-1066 last I looked, so I settled for 800. With an SSD and 7870, it's surprising how well it still games. I don't think I'll upgrade the GPU again just due to the fact that I'm limited to PCI-e 2.
  • FourEyedGeek - Monday, August 8, 2016 - link

    You could get a higher end GPU and still benefit from increased performance, then get a new CPU/motherboard combo when you want to.
  • BrokenCrayons - Wednesday, July 27, 2016 - link

    I just upgraded out of a Q6600 and 4GB DDR2 about 2 months ago and I admit that I was still kicking around the idea of leaving it alone as I was pulling the motherboard out of the case. I replaced it with a cheap AMD 860k and 16GB DDR3 which really hasn't done a lot to improve the system's performance. In retrospect, I think I could realistically have squeezed another couple of years out of it, but the motherboard's NIC was iffy and I really wanted reliable ethernet.

    As for laptops, I've got a couple C2Ds kicking around that are perfectly adequate (T2310 & P8400) for daily use. I really can't see any point in replacing them just yet. Core was a good design through all its iterations.
  • Beany2013 - Wednesday, July 27, 2016 - link

    I like your style - rather than drop $100 on a midlevel intel NIC, you replace an entire platform.

    I strongly approve of these economics :-)
  • Michael Bay - Thursday, July 28, 2016 - link

    USB3 is kind of nice.
  • BrokenCrayons - Thursday, July 28, 2016 - link

    Well the NIC wasn't the only reason, but it was the last in a series of others that I was already coping with that tipped the scales. The upgrade was under $200 for the board, processor and memory so it really boiled down to one weekend dinner out to a mid-range restaurant. It was worth it for more reliable Steam streaming and fewer VNC disconnects as that wired ethernet port is the only means by which I regularly interact with my desktop since it has no monitor and is crammed into a corner in my utility room.
  • artk2219 - Friday, July 29, 2016 - link

    Why didn't you go for an FX, if you don't mind me asking? You liked the FM2+ platform a bit better?
  • BrokenCrayons - Friday, July 29, 2016 - link

    Actually, I didn't give much of anything in the system a very close look before buying. I admittedly did about twenty minutes of research to make sure the 860k and the bottom feeder motherboard I'd picked would play nicely together before making a purchase. So the CPU & motherboard pair were the result of laziness and apathy rather than a preference for FM2+.
  • artk2219 - Monday, August 1, 2016 - link

    Ah ok gotcha, I just wanted to share that if you had a microcenter near you they sell FX 8320E's bundled with motherboards for 125 to 170 depending on which board you want to use. That can be quite the steal and a great base for a new cheap system once you bump the clocks on the 8320E.
