Out-of-Order Architectures

In contrast to in-order architectures, there are out-of-order architectures. An out-of-order architecture still decodes instructions in the original order of the program, and still retires them in order, but the actual issue/execution of the instructions can happen out of order.

Let's talk a bit about what all of this means. A CPU is useless if it changes the intent of the code fed to it. Frankly speaking, if you double-click on a file, your CPU would be doing you no favors if it executed a bunch of format commands instead. Although that's an extreme example, to ensure that things like that don't happen, a CPU must adhere to two rules:
  1. Instructions must be decoded (i.e. interpreted by the CPU to find out what they are asking it to do) in the original order of the program, and
  2. Instructions must retire in the original order of the program (i.e. the result of each operation must be committed, and made visible to memory and the rest of the system, in the same order as the instructions were sent to the CPU).
Both in-order and out-of-order architectures adhere to those two rules - it's in what happens between those two stages that the two differ. We mentioned in the previous page that in-order architectures can't reorder instructions on the fly. Let's say that we have an in-order CPU with one adder and one load/store unit that is fed the following code (for the sake of simplicity, we'll leave a forwarding network out of this discussion):
  1.      LD R10, R11
  2.      ADD R5, R10, R10
  3.      ADD R9, R9, #1
  4.      ...
In the first instruction, we're loading data from a memory address stored in R11 into R10.   Then, we're adding the value that we just obtained from memory to itself and storing it in R5.   The third and final line in the snippet increments the value stored in R9 by 1 and stores it in R9.   Quickly looking at the code, you see that line 2 can't execute before line 1.  Doing so would alter the intent of the code (if you want to add something to itself, you need to make sure you have that something first).   Line 3, however, is completely independent of lines 1 and 2.
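
To make that dependency check concrete, here is a minimal sketch in Python (purely illustrative - a real CPU does this comparison with dedicated hardware, and the dictionary encoding of the instructions is just an assumption for this example) of how you could tell whether a later instruction has to wait on an earlier one by comparing its source registers against the earlier instruction's destination register:

    # Toy model of the three-instruction snippet above; the encoding is illustrative.
    def depends_on(later, earlier):
        """True if 'later' reads a register that 'earlier' writes (a read-after-write hazard)."""
        return earlier["dest"] in later["srcs"]

    ld   = {"op": "LD",  "dest": "R10", "srcs": ["R11"]}         # 1. LD  R10, R11
    add1 = {"op": "ADD", "dest": "R5",  "srcs": ["R10", "R10"]}  # 2. ADD R5, R10, R10
    add2 = {"op": "ADD", "dest": "R9",  "srcs": ["R9"]}          # 3. ADD R9, R9, #1

    print(depends_on(add1, ld))    # True  - line 2 must wait for line 1's load
    print(depends_on(add2, ld))    # False - line 3 doesn't need line 1's result
    print(depends_on(add2, add1))  # False - line 3 doesn't need line 2's result either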

With an in-order microprocessor, if the data being loaded in line 1 is contained within cache, then that instruction will take around 1 - 30 clock cycles to complete (varying depending on the architecture and which level of cache it is in). Line 2 would simply have to wait those 1 - 30 cycles before executing, and only after it executed could line 3 have its turn. If the requested data isn't stored in cache (maybe it's the first time that we're asking for that value and we haven't asked for anything near it in memory), then we have a problem. All of a sudden, line 1 doesn't take around 1 - 30 cycles to complete; now, it's going to take 200+ clock cycles. For line 2, that's not such a big deal, since it can't execute until line 1 completes anyway, but line 3 could just as easily execute during the time that the CPU is waiting on that load from memory. Any independent instructions following line 3 are also at the mercy of the cache miss.

With an out-of-order microprocessor, however, the situation of a cache miss isn't nearly as dramatic.   The code is still decoded in order, meaning that it comes across instructions 1, 2 and 3 in the same order as the in-order CPU, but this time, we have the ability to execute line 3 ahead of lines 1 and 2 instead of idly waiting for line 1 to complete.   In the event of a cache miss, this gives the out-of-order microprocessor a pretty big performance advantage, as it isn't sitting there burning away clock cycles while nothing gets done.   So, how does the out-of-order CPU work?
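
As a rough sketch of what that advantage looks like for the snippet above, here is some back-of-the-envelope cycle accounting in Python (the latencies are the illustrative 1-cycle ADD and 200-cycle miss figures used in this discussion, not measurements of any real core):

    # Assumed latencies, taken from the discussion above (not from any datasheet).
    MISS_LATENCY = 200   # cycles for line 1's load to come back from main memory
    ADD_LATENCY  = 1     # cycles for each ADD

    # In-order: every instruction waits for the one before it to finish executing.
    in_order_line3_done = MISS_LATENCY + ADD_LATENCY + ADD_LATENCY   # line 3 finishes around cycle 202

    # Out-of-order: line 3 is independent, so it executes while the load is outstanding.
    # (It still retires after lines 1 and 2 - only its execution is reordered.)
    ooo_line3_done = ADD_LATENCY                                      # line 3 finishes around cycle 1
    ooo_line2_done = MISS_LATENCY + ADD_LATENCY                       # line 2 still waits, around cycle 201

    print(in_order_line3_done, ooo_line2_done, ooo_line3_done)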

If someone told you a list of things to do in any order that you wanted, you'd simply take in the list and get to it.   But if they told you to report back the things that you've completed in the order in which they were told to you, you'd have to grumble and write them down first before reorganizing them to fit your needs.

An out-of-order CPU works pretty much the same way, except instead of a to-do list, it has an instruction window.  The instruction window functions similarly to a to-do list - it has all of the decoded instructions in their original order and is kept as a record to make sure that those instructions retire in the order that they were decoded.

Alongside the instruction window, an out-of-order CPU also has a scheduling window - it is in this "window" where all of the reordering of instructions takes place.   The scheduling window contains logic to mark dependent and independent instructions and send all independent ones to execution units while waiting for dependent instructions to become ready for execution.

As previously dependent instructions (e.g. instructions waiting on data from main memory or instructions waiting for other instructions to complete) become independent, they are then able to be executed, once again, in any order.
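
A heavily simplified software sketch of that flow might look like the following Python toy model (an assumption-laden illustration, not how any real scheduler is implemented: there are no issue-port limits, no register renaming, and no speculation). Instructions enter the window in program order, anything whose source registers are ready may execute, and results only retire from the head of the window so that program order is preserved:

    # Toy out-of-order core: execute whatever is ready, retire strictly in order.
    class Insn:
        def __init__(self, name, dest, srcs, latency):
            self.name, self.dest, self.srcs, self.latency = name, dest, srcs, latency
            self.done_at = None   # cycle in which this instruction's result is ready

    def run(window, ready_regs):
        cycle, retired = 0, []
        while len(retired) < len(window):
            cycle += 1
            # "Schedule": issue any unissued instruction whose sources are all ready.
            for insn in window:
                if insn.done_at is None and all(s in ready_regs for s in insn.srcs):
                    insn.done_at = cycle + insn.latency - 1
            # Completed results become visible to younger instructions.
            for insn in window:
                if insn.done_at is not None and insn.done_at <= cycle:
                    ready_regs.add(insn.dest)
            # Retire only from the head of the window, in original program order.
            while len(retired) < len(window):
                head = window[len(retired)]
                if head.done_at is None or head.done_at > cycle:
                    break
                retired.append((head.name, head.done_at, cycle))  # (insn, executed by, retired at)
        return retired

    window = [
        Insn("LD  R10, R11",     "R10", ["R11"], latency=200),  # the cache miss
        Insn("ADD R5, R10, R10", "R5",  ["R10"], latency=1),
        Insn("ADD R9, R9, #1",   "R9",  ["R9"],  latency=1),
    ]
    for name, executed, retired_at in run(window, {"R9", "R11"}):
        print(name, "executed by cycle", executed, "retired at cycle", retired_at)
    # The last ADD executes on cycle 1 but still retires last, after the load and the
    # dependent ADD, precisely because retirement has to follow program order.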

Right off the bat, you can tell that the addition of an instruction window, a scheduling window and all of the associated logic to detect independent instructions - not to mention the logic to handle out-of-order execution but in-order retirement - makes for a more complex microprocessor. But there is one other significant problem with out-of-order microprocessors: the increase in performance and instruction level parallelism is greatly dependent upon the size of the instruction window.

The larger you make this window, the more parallelism can be extracted, simply because the CPU is looking at a wider set of instructions from which to select independent ones. At the same time, the larger you make the window, the lower your attainable clock speed becomes.

Despite the downsides, all modern day x86 microprocessors are out-of-order cores, as keeping a single core simple isn't the top priority given advances in manufacturing processes.   The benefits of an out-of-order architecture are two-fold:
  1. Dynamic reordering of instructions lets the CPU hide memory latencies, allowing for even higher clock speeds. For every cache miss, a Pentium 4 3.6GHz has to wait around 230 clock cycles to get data from main memory, which is a lot of idle time in the eyes of the CPU (a quick back-of-the-envelope check of what that stall costs follows this list). Being able to make use of that idle time by executing other independent instructions in the meantime is one way in which architectures like the Pentium 4 and Athlon 64 get away with running at such high multiples of their memory frequency.
  2. Incremental increases in instruction level parallelism - by reordering instructions on the fly, out-of-order architectures can improve ILP in areas where the compiler fails to extract it.
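
For a rough sense of scale, here is the arithmetic behind that stall (the 230-cycle figure is the one quoted above; the resulting memory latency is an approximation, not a spec):

    # Back-of-the-envelope: how long is a ~230-cycle stall on a 3.6GHz core?
    clock_hz     = 3.6e9
    stall_cycles = 230
    stall_ns     = stall_cycles / clock_hz * 1e9
    print(round(stall_ns))   # ~64 ns - roughly one round trip to main memory
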
So, it's obvious that both AMD and Intel have figured out that for a general purpose x86 microprocessor, out-of-order makes the most sense.   Then, why is it that the architects of Cell, when starting with a clean slate, outfitted the processor with 9 independent in-order cores?

The first thing to remember is that you can get pretty solid performance from an in-order architecture. The Itanium is an in-order microprocessor, based on a premise similar to Cell's: that the compiler should be able to extract the sort of parallelism that an out-of-order core would otherwise find dynamically. Current generation Itanium cores run at half the speed of modern day x86 cores, yet the CPU is able to execute around 2x the instructions per clock of the fastest x86 CPUs. To quote Intel's Justin Rattner in reference to Itanium, "an appropriately designed instruction set should lend itself to an in-order architecture without any problems." So, it's quite possible that the same could apply to Cell...

Comments

  • Houdani - Friday, March 18, 2005 - link

    I think I missed something fundamental.

    Can the SPEs be addressed directly by software, or do they have to be fed all of their instructions by the PPE?

    If they DO have to be fed by the PPE, I fail to see how the PPE can possibly feed them enough to keep them all working concurrently.

    Someone throw me a bone here.
  • suryad - Friday, March 18, 2005 - link

    I thought the G5 was a POWER5 proc. But I could of course be wrong. All I can say is that the Cell, intriguing as it may be, will definitely have a rough road ahead of it, and I am quite surprised that these large corporations invested so much in it, cutting edge though it might be. And as for the foreseeable future, when the multi-core FX processors from AMD come out, I do not believe there will be anything more devastating than that. Especially once they hit the 3 GHz barrier with multi-cores enabled and faster DDR2-3 or even RAMBUS memory capabilities.
  • tipoo - Thursday, December 3, 2015 - link

    No, G5 was 970 based.
  • Questar - Friday, March 18, 2005 - link

    #50,
    Yes, the G5 is a POWER4 derivative.

    Since you were wrong on that, don't think that you know what is significant about the design of POWER5. There were major architecture changes made to the processor.
  • fitten - Friday, March 18, 2005 - link

    The only things new about Cell are its target market and being a single chip. The article mentions the TI DSP chip, but there were other similar architectures as well. One example that I'm familiar with is the MAP1310 board by CSPI. Back then, processes weren't good enough to put all the cores on a single chip, but the basic architecture is the same - a PPC core to do the 'normal' stuff and two quad-core DSPs (SHARC) to do the 'work'. This board wasn't successful because it was considered too hard to program to get the performance it promised... and this opinion is from people who live/breathe real-time systems and multiprocessing codes.

    The only thing new about Cell is that a) it's all on one chip now and b) the target market is a general marketplace and not a niche.
  • scrotemaninov - Friday, March 18, 2005 - link

    #48. OK, I was under the impression that the G5 was based on the POWER5. You're saying it's based on the POWER4 instead?

    And the POWER4 and POWER5 aren't really "completely different chips" in the same way that the P4 and P3 are different chips, or in the way that the P4 and the Opteron are different chips. I can give you a list of the differences if you want. Start at http://www.elet.polimi.it/upload/sami/architetture...

    The POWER5 is designed not only to be completely compatible with the POWER4 but also to support all the optimisations from the POWER4. The only things of significance they've done are: a) move the L3 cache controller on chip; b) change the various branch predictors to bimodal instead of 1-bit; c) increase the associativity and size of the caches.

    Anyway, this is going off topic now...
  • Jacmert - Friday, March 18, 2005 - link

    Rofl. Computer engineering and VLSI design. Gotta love those NMOS/PMOS transistor circuits.

    I never thought that I'd see stuff from my textbook explained on anandtech.com
  • saratoga - Friday, March 18, 2005 - link

    "#38. You're right that the G5 is a derivative of the POWER5. The POWER5 is dual core, each core with 2way SMT giving a total of 4 'visible' cpus to the OS. The G5 is simply a single core version of the same thing."

    Err, no it's not. POWER4 != POWER5. Hence the different names ;)

    They're completely different chips.

    "Well scrotemaninov I am not disputing that the POWER architecture by IBM is brilliantly done. IBM is definitely one of those companies churning out brilliant and elegant technology always in the background.

    But my problem with the POWER technology is from what I understand very limitedly, is that the POWER processors in the Mac machines are a derivative of that architecture right? Why the heck are they so damn slow then?

    I mean you can buy an AMD FX 55 based on the crappy legacy x86 arch and it smokes the dual 2.5 GHz Macs easily!! Is it cause of the OS? Because so far from what I have seen, if the Macs are any indication of the performance capabilities of the POWER architecture, the Cell will not be a big hit.

    I did read though at www.aceshardware.com benchmark reviews of the POWER5 architecture with some insane number of cores if I recall correctly and the benchmarks were of the charts. They are definitely not what the Macs have installed in them..."

    There are slow memory systems and then there's the one used on the G5. I've heard that you can put 8 Opterons together and still get average access times across all 8 cores that are better than a single G5. That's probably a good part of the reason the G5 was so much slower than many people thought it would be. The rest is mainly IBM's trouble making them, and their inability to ramp clock speed like they planned on.
  • scrotemaninov - Friday, March 18, 2005 - link

    #38. You're right that the G5 is a derivative of the POWER5. The POWER5 is dual core, each core with 2way SMT giving a total of 4 'visible' cpus to the OS. The G5 is simply a single core version of the same thing.

    As for the performance, Opteron is pretty much unbeatable for integer-bound applications. Itanium2 is unbeatable for FP applications. POWER5 is somewhere in the middle.

    Most desktop applications are going to be integer bound. So it's not at all surprising that you find the G5 'slow' in that respect in comparison to the FX55. Plus, and this is the whole problem with the CELL, there's no point putting dual CPUs in there unless you can utilise them properly. If you have one process going flat out trying to run a heavy application and it's single threaded then you're only using about 1/4 of the CPUs you've bought for that application (for a dual G5 2.5), whereas the Opterons and FX55 stuff is more designed around quick, single threaded applications.
  • dmens - Friday, March 18, 2005 - link

    pseudo-PMOS, wtf? That's domino logic, it's been around forever, and it's definitely not efficient in terms of power. Oh, and it takes forever to verify timing.
