Core: Decoding, and Two Goes Into One

The role of the decoder is to decipher the incoming instruction (opcode, addresses) and translate the 1-15 byte variable-length x86 instruction into one or more fixed-length, RISC-like operations that are easier to schedule and execute: micro-ops. The Core microarchitecture has four decoders, three simple and one complex. A simple decoder translates an instruction into a single micro-op, while the complex decoder can convert one instruction into up to four micro-ops (instructions that need more than that are handled by the microcode sequencer). It’s worth noting that simple decoders are lower power and occupy less die area than complex decoders. This style of pre-fetch and decode occurs in all modern x86 designs; by comparison, AMD’s K8 design has three complex decoders.
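
As a rough illustration of how instructions map onto these decoders, here are a few typical cases (illustrative assignments, not an exhaustive or official list):

    ADD EAX, EBX        ; register-to-register add: one micro-op, fits any simple decoder
    ADD [EBX], EAX      ; read-modify-write to memory: several micro-ops, needs the complex decoder
    REP MOVSB           ; string copy: many micro-ops, handed off to the microcode sequencer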

The Core design came with two techniques to assist this part of the core. The first is macro-op fusion. When two common x86 instructions (or macro-ops) can be decoded together, they are combined into a single micro-op, allowing one micro-op to represent two instructions and increasing throughput. The upshot is that the four decoders can decode five instructions in one cycle.
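
The classic fusible pair on Core is a compare (or test) followed by a conditional branch. A minimal sketch, with the caveat that fusion only applies to certain instruction pairs and operating modes:

    CMP EAX, EDX        ; compare two registers...
    JNE not_equal       ; ...and the conditional jump fuses with it into a single compare-and-branch micro-op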

According to Intel at the time, for a typical x86 program around 20% of macro-ops can be fused in this way. With two instructions held in one micro-op, further down the pipe there is more decode bandwidth available for other instructions and less space taken in the various buffers and the out-of-order (OoO) queue. With roughly one in ten instructions fused with a neighbour, Intel claimed an uptick of around 11% in performance for Core. It’s worth noting that macro-op fusion (and, later, decoded op caches) has since become an integral part of Intel’s microarchitectures, and of other x86 microarchitectures as well.

The second technique is a specific fusion of operations related to memory accesses rather than registers. An operation that adds a register to a value in memory would, under strict RISC-like decoding, typically require three micro-ops:

Pseudo-code                                    Instructions
read contents of memory to register2           MOV EBX, [mem]
add register1 to register2                     ADD EBX, EAX
store result of register2 back to memory       MOV [mem], EBX

However, since Banias, and subsequently in Yonah and Core, the first two of these micro-ops can be fused. This is called micro-op fusion. The decoders recognize that these micro-ops can be kept together, using smarter but larger circuitry, without lowering the clock frequency. Again, op fusion helps in more ways than one: more throughput, less pressure on buffers, higher efficiency, and better performance. Alongside this simple example of a memory-plus-register addition, micro-op fusion plays heavily into SSE/SSE2 operations as well, and this is primarily where Core had an advantage over AMD’s K8.
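
For example, an SSE add that takes one operand straight from memory decodes into a load micro-op and an add micro-op, and on Core these travel as a single fused micro-op through the front end (a sketch; exact micro-op counts depend on the instruction):

    ADDPS XMM0, [ESI]   ; load 128 bits and add: the load and the add stay fused until execution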

AMD’s definitions of macro-ops and micro-ops differ from Intel’s, which makes comparing the two a little confusing: in AMD’s terminology, a macro-op is the fixed-length internal operation produced by the decoders, and it can bundle an ALU operation together with a memory access, while micro-ops are the smaller pieces a macro-op is broken into at the execution units.

However, as mentioned above, AMD’s K8 has three complex decoders compared to Core’s three simple plus one complex decoder arrangement. We also mentioned that Intel’s simple decoders are smaller, use less power, and emit one micro-op per incoming variable-length instruction. AMD’s K8 decoders, on the other hand, are dual purpose: each can perform DirectPath decoding, which is kind of like Intel’s simple decoders, or VectorPath decoding, which is kind of like Intel’s complex decoder. In almost all circumstances DirectPath is preferred, as it produces fewer ops, and it turns out most instructions go down the DirectPath anyway; in K8 this includes floating point and SSE instructions, resulting in fewer ops than on K7. A sketch of how that split falls out follows below.
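
Typical assignments are shown here as assumptions, since the exact path for any given instruction comes from AMD's optimization guidance of the era:

    ADD EAX, EBX        ; simple ALU op: DirectPath, one macro-op
    ADD EAX, [EBX]      ; load-and-add: still DirectPath, the load and the ALU op share one macro-op
    DIV ECX             ; integer divide: VectorPath, expanded by the microcode engine
    REP MOVSD           ; string copy: VectorPath, microcoded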

While extremely powerful in what they do, K8’s decoders have a two-fold limitation compared to Core’s. First, AMD cannot perform Intel’s version of macro-op fusion, so where Intel can pack a fused pair into one slot to increase decode throughput, such as the load-and-execute operations in SSE, AMD has to spend two slots. Second, by virtue of having more decoders (four versus three), Intel can decode more per cycle, and macro-op fusion widens that gap further: where Intel can decode five instructions per cycle, AMD is limited to three.
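
To make that concrete, consider a small loop body like the sketch below (purely illustrative; real decode behaviour depends on alignment, instruction mix, and the exact fusion rules):

    loop_top:
        ADD EAX, [ESI]      ; load + add, kept as one fused micro-op on Core
        ADD EBX, [ESI+4]    ; second accumulator, also micro-fused
        ADD ESI, 8          ; advance the pointer
        CMP ESI, EDI        ; end-of-buffer check...
        JNE loop_top        ; ...which macro-fuses with the CMP on Core

With the CMP/JNE pair fused, these five instructions fit into Core's four decode slots and can go through in a single cycle, whereas K8's three-wide decode needs two cycles for the same sequence.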

As Johan pointed out in the original article, this makes it hard for AMD’s K8 to have had an advantage here. For that to happen, the instruction stream would need to be full of instructions that force Intel’s complex decoder into action without invoking the microcode sequencer, and since the most frequent x86 instructions map to a single Intel micro-op, that situation is pretty unlikely.

Comments

  • perone - Friday, July 29, 2016 - link

    My E6300 is still running fine in a PC I have donated to a friend.
    It was set to 3GHz within a few days from purchase and never moved from that speed.
    Once or twice I changed the CPU fan as it was getting noisy.

    Great CPU and great motherboard the Asus P5B
  • chrizx74 - Saturday, July 30, 2016 - link

    These PCs are still perfectly fine if you install an SSD. I did it recently on an Acer Aspire t671 desktop. After modding the BIOS to enable AHCI I put in an 850 Evo (runs at SATA 2 speed) and a pretty basic Nvidia GFX card. The system turned out super fast and runs Windows 10 perfectly fine. You don't need faster processors; all you need is to get rid of the HDDs.
  • Anato - Saturday, July 30, 2016 - link

    I'm still running an AMD Athlon X2 4850 2.5GHz as a file server + MythTV box. It supports ECC, is stable and has enough grunt to do its job, so why replace it? Yes, I could get a bit better energy efficiency, but in my climate heating is needed more than 50% of the time, and new hardware has its risks of compatibility issues etc.

    +10 for anandtech again, article was great as always!
  • serendip - Sunday, July 31, 2016 - link

    I'm posting this on a Macbook with an E6600 2.4 GHz part. It's still rockin' after six years of constantly being tossed into a backpack. The comparisons between C2D and the latest i5 CPUs don't show how good these old CPUs really are - they're slow for hard number crunching and video encoding but they're plenty fast for typical workday tasks like Web browsing and even running server VMs. With a fast SSD and lots of RAM, processor performance ends up being less important.

    That's too bad for Intel and computer manufacturers because people see no need to upgrade. A 50% performance boost may look like a lot on synthetic benchmarks but it's meaningless in the real world.
  • artifex - Monday, August 1, 2016 - link

    "With a fast SSD and lots of RAM, processor performance ends up being less important."

    I remember back when I could take on Icecrown raids in WoW with my T7200-based Macbook.
    And I actually just stopped using my T7500-based Macbook a few months ago. For a couple years I thought about seeing if an SSD would perk it back up, but decided the memory bandwidth and size limitation, and graphics, was just not worth the effort. Funny that you're not impressed by i5s; I use a laptop with an i5-6200U, now. (Some good deals with those right now, especially if you can put up with the integrated graphics instead of a discrete GPU.) But then, my Macbooks were about 3 years older than yours :)
  • abufrejoval - Sunday, July 31, 2016 - link

    Replaced three Q6600 on P45 systems with socket converted Xeon X5492 at $60 off eBay each. Got 3.4GHz Quads now never using more than 60 Watts under Prime95 (150 Watts "official" TDP), with 7870/7950 Radeon or GTX 780 running all modern games at 1080p at high or ultra. Doom with Vulkan is quite fun at Ultra. Got my kids happy and bought myself a 980 ti off the savings. If you can live with 8GB (DDR2) or 16GB (DDR3), it's really hard to justify an upgrade from this 10 year old stuff.

    Mobile is a different story, of course.
  • seerak - Monday, August 1, 2016 - link

    My old Q6600 is still working with a friend.

    The laugher is that he (used to) work for Intel, and 6 months after I gave it to him in lieu of some owed cash, he bought a 4790K through the employee program (which isn't nearly as good as you'd think) and built a new system with it.

    The Q6600 works so well he's never gotten around to migrating to the new box - so the 4790k is still sitting unused! I'm thinking of buying it off him. I do 3D rendering and can use the extra render node.
  • jeffry - Monday, August 1, 2016 - link

    That's a good point. Like, answering a question "are you willing to pay $800 for a new CPU to double the computer's speed?" Most consumers say no. It all comes down to the mass market price.
  • wumpus - Thursday, August 4, 2016 - link

    Look up what Amazon (and anybody else buying a server) pays for the rest of the computer and tell me they won't pay $800 (per core) to double the computer's speed. It isn't a question of cost, Intel just can't do it (and nobody else can make a computer as fast as Intel, although IBM seems to be getting close, and AMD might get back in the "almost as good for cheap" game).
  • nhjay - Monday, August 1, 2016 - link

    The Core 2 architecture has served me well. Just last year I replaced my server at home which was based on a Core 2 Duo E6600 on a 965 chipset based motherboard. The only reason for the upgrade is that the CPU was having a difficult time handling transcoding jobs to several Plex clients at once.

    The desktop PC my kids use is Core 2 based, though slightly newer. It's a Core 2 Quad Q9400-based machine. It is the family "gaming" PC if you dare call it that. With a GT 730 in it, it runs the older games my kids play very well and Windows 10 hums along just fine.
