Out-of-Order Architectures

In contrast to in-order architectures, there are out-of-order architectures. An out-of-order architecture still decodes instructions in the original order of the program, and still retires them in order, but the actual issue and execution of the instructions can happen out of order.

Let's talk a bit about what all of this means. A CPU is useless if it changes the intent of the code fed to it: if you double-click on a file, your CPU would be rather useless if it executed a bunch of format commands instead. That's an extreme example, but to ensure that things like it never happen, a CPU must adhere to two rules:
  1. Instructions must be decoded (i.e. interpreted by the CPU to find out what they are asking it to do) in the original order of the program, and
  2. Instructions must retire in the original order of the program (i.e. the result of each operation must be committed to registers/memory in the same order as the instructions were sent to the CPU).
Both in-order and out-of-order architectures adhere to those two rules - it's in what happens between those two stages that the architectures differ. We mentioned on the previous page that in-order architectures can't reorder instructions on the fly. Let's say that we have an in-order CPU with one adder and one load/store unit that is fed the following code (for the sake of simplicity, we'll leave a forwarding network out of this discussion):
  1.  LD  R10, [R11]
  2.  ADD R5, R10, R10
  3.  ADD R9, R9, #1
  4.  ...
In the first instruction, we're loading data from the memory address stored in R11 into R10. Then, we're adding the value that we just obtained from memory to itself and storing the result in R5. The third and final line in the snippet increments the value stored in R9 by 1 and stores it back in R9. Quickly looking at the code, you can see that line 2 can't execute before line 1 - doing so would alter the intent of the code (if you want to add something to itself, you need to have that something first). Line 3, however, is completely independent of lines 1 and 2.
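To make that dependency check concrete, here's a minimal sketch in C (a toy representation of my own, not any real decoder's logic) that walks the three-instruction snippet and flags read-after-write hazards - cases where an instruction reads a register that an earlier instruction writes:

    /* Toy instruction encoding: one destination register and up to two
       source registers (-1 = unused). The program below mirrors the
       three-line snippet from the article. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        int dest;       /* register written by this instruction */
        int src1, src2; /* registers read (-1 if unused) */
    } Insn;

    /* True if 'later' reads a register that 'earlier' writes
       (a read-after-write hazard). */
    static bool depends_on(Insn later, Insn earlier) {
        return later.src1 == earlier.dest || later.src2 == earlier.dest;
    }

    int main(void) {
        Insn prog[] = {
            {10, 11, -1}, /* 1: LD  R10, [R11]   */
            { 5, 10, 10}, /* 2: ADD R5, R10, R10 */
            { 9,  9, -1}, /* 3: ADD R9, R9, #1   */
        };
        for (int i = 1; i < 3; i++)
            for (int j = 0; j < i; j++)
                if (depends_on(prog[i], prog[j]))
                    printf("line %d must wait for line %d\n", i + 1, j + 1);
        return 0;
    }

Run it, and the only pairing it reports is line 2 waiting on line 1; line 3 shares no registers with the other two, which is exactly the independence a scheduler can exploit.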

With an in-order microprocessor, if the data being loaded in line 1 is contained within cache, then that instruction will take around 1 - 30 clock cycles to complete (depending on the architecture and which level of cache the data is in). Line 2 would simply have to wait those 1 - 30 cycles before executing, and then, after it executed, line 3 could have its turn. If the requested data isn't stored in cache (maybe it's the first time that we're asking for that value and we haven't asked for anything near it in memory), then we have a problem. All of a sudden, line 1 doesn't take 1 - 30 cycles to complete; now it's going to take 200+ clock cycles. For line 2, that's not such a big deal, since it can't execute until line 1 completes anyway, but line 3 could just as easily execute during the time that the CPU is waiting on that load from memory. Any independent instructions following line 3 are also at the mercy of the cache miss.
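To put rough numbers on that stall, here's a back-of-the-envelope sketch (the latencies are toy values of my own choosing: a 200-cycle cache miss for the load, one cycle per add) of how a strict in-order pipeline serializes the snippet:

    #include <stdio.h>

    int main(void) {
        /* Assumed latencies: line 1 misses cache, lines 2-3 are adds. */
        int latency[] = {200, 1, 1};
        int finish = 0;
        for (int i = 0; i < 3; i++) {
            finish += latency[i]; /* in-order: strictly one after another */
            printf("line %d finishes at cycle %d\n", i + 1, finish);
        }
        return 0;
    }

Under those assumptions, line 3 doesn't finish until cycle 202, even though nothing it depends on is busy - it simply isn't allowed to jump the queue.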

With an out-of-order microprocessor, however, a cache miss isn't nearly as dramatic. The code is still decoded in order, meaning that the CPU comes across instructions 1, 2 and 3 in the same order as the in-order CPU, but this time, it has the ability to execute line 3 ahead of lines 1 and 2 instead of idly waiting for line 1 to complete. In the event of a cache miss, this gives the out-of-order microprocessor a pretty big performance advantage, as it isn't sitting there burning away clock cycles while nothing gets done. So, how does the out-of-order CPU work?

If someone gave you a list of things to do that could be done in any order you wanted, you'd simply take the list and get to it. But if they also told you to report back each completed item in the order in which it was given to you, you'd have to grumble and write the list down first before reorganizing it to fit your needs.

An out-of-order CPU works pretty much the same way, except instead of a to-do list, it has an instruction window.  The instruction window functions similarly to a to-do list - it has all of the decoded instructions in their original order and is kept as a record to make sure that those instructions retire in the order that they were decoded.

Alongside the instruction window, an out-of-order CPU also has a scheduling window - it is in this "window" that all of the reordering of instructions takes place. The scheduling window contains logic to mark instructions as dependent or independent, dispatch the independent ones to execution units, and hold the dependent ones until they become ready for execution.

As dependent instructions (e.g. instructions waiting on data from main memory or on the results of other instructions) have their inputs satisfied, they become ready and are then able to be executed - once again, in any order. A toy model of the whole arrangement follows below.
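Putting the two windows together, here's that toy model in C (a deliberate oversimplification of my own - issue width, port conflicts and register renaming are all ignored) in which instructions issue as soon as their inputs are ready but retire strictly from the head of the window:

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        const char *text;
        int dep;     /* index of the producing instruction, -1 if none */
        int latency; /* cycles from issue to completion */
        int done_at; /* cycle the result is ready, -1 = not yet issued */
    } Slot;

    int main(void) {
        Slot window[] = {
            {"LD  R10, [R11]",   -1, 200, -1}, /* assumed cache miss */
            {"ADD R5, R10, R10",  0,   1, -1}, /* waits on the load  */
            {"ADD R9, R9, #1",   -1,   1, -1}, /* independent        */
        };
        int n = 3, head = 0; /* head = next instruction to retire */

        for (int cycle = 0; head < n; cycle++) {
            /* Issue: any un-issued slot whose producer has finished. */
            for (int i = 0; i < n; i++) {
                int d = window[i].dep;
                bool ready = d < 0 || (window[d].done_at >= 0 &&
                                       window[d].done_at <= cycle);
                if (window[i].done_at < 0 && ready) {
                    window[i].done_at = cycle + window[i].latency;
                    printf("cycle %3d: issue  %s\n", cycle, window[i].text);
                }
            }
            /* Retire: only from the head, and only once complete. */
            while (head < n && window[head].done_at >= 0 &&
                   window[head].done_at <= cycle) {
                printf("cycle %3d: retire %s\n", cycle, window[head].text);
                head++;
            }
        }
        return 0;
    }

The trace shows the third instruction issuing at cycle 0, two hundred cycles before the add ahead of it, yet retiring last: execution out of order, retirement in order.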

Right off the bat, you can tell that the addition of an instruction window, a scheduling window and all of the associated logic to detect independent instructions - not to mention the logic to handle out-of-order execution with in-order retirement - makes for a more complex microprocessor. But there is one other significant problem with out-of-order microprocessors: the increase in performance and instruction level parallelism depends greatly on the size of the instruction window.

The larger you make this window, the more parallelism the CPU can extract, simply because it is looking at a wider set of instructions from which to select independent ones. At the same time, the larger you make the window, the lower your clock speed can be, since the logic that scans the window for ready instructions each cycle has more entries to examine.

Despite the downsides, all modern day x86 microprocessors are out-of-order cores, as keeping a single core simple isn't the top priority given advances in manufacturing processes. The benefits of an out-of-order architecture are twofold:
  1. Dynamic reordering of instructions lets the CPU hide memory latencies, allowing for even higher clock speeds. For every cache miss, a 3.6GHz Pentium 4 has to wait around 230 clock cycles to get data from main memory - roughly 64ns of doing nothing, which is a lot of idle time in the eyes of the CPU. Being able to fill that idle time with other, independent instructions is one way in which architectures like the Pentium 4 and Athlon 64 get away with running at such high multiples of their memory frequency.
  2. An incremental increase in instruction level parallelism - by reordering instructions on the fly, out-of-order architectures can improve ILP in areas where the compiler fails to extract it.
So, it's obvious that both AMD and Intel have figured out that for a general purpose x86 microprocessor, out-of-order makes the most sense. Why is it, then, that the architects of Cell, when starting with a clean slate, outfitted the processor with nine independent in-order cores?

The first thing to remember is that you can get pretty solid performance from an in-order architecture. The Itanium is an in-order microprocessor, based on a premise similar to Cell's: that the compiler should be able to extract the sort of parallelism that an out-of-order core finds dynamically. Current generation Itanium cores run at half the speed of modern day x86 cores, yet the CPU is able to execute around twice the instructions per clock of the fastest x86 CPUs. To quote Intel's Justin Rattner in reference to Itanium, "an appropriately designed instruction set should lend itself to an in-order architecture without any problems." So, it's quite possible that the same could apply to Cell...

Comments

  • ceefka - Thursday, March 17, 2005 - link

    Rambus' Revenge
  • Locut0s - Thursday, March 17, 2005 - link

    Great article Anand!! Yeah, I actually get to bring my Comp150 knowledge to bear in reading this article! If this had come out 6 months ago, I would have been totally lost. It will indeed be interesting to see what headway Cell can make; however, unfortunately, as Anand alludes to, the x86 architecture is just too heavily entrenched for anything to budge it except the Big 2 (AMD and Intel). I can't wait to see what type of power the Playstation 3 will have though, and especially how that power will be utilized in games. I bet there will be some jaw dropping graphics awaiting us there. That is, if Cell's limitations don't hold back lazy game developers and lead to a string of mediocre games punctuated by a few amazing titles made by independent developers who really care to utilize the architecture. Didn't the Playstation 1 suffer something similar?
  • knitecrow - Thursday, March 17, 2005 - link

    The Real World Technologies article on Cell states that it gives up single-thread performance in favour of running many parallel threads. That sounds like a terribly difficult processor to develop games for.

    I for one think it will be easier to put the burden on the hardware rather than on the software side.

    Can we see another repeat of PS2? Technically impressive, but hard to code for.
  • JarredWalton - Thursday, March 17, 2005 - link

    11 - I think the point is that games tend to use certain functions of a CPU much more frequently, while general business/office applications make use of a wider range of generic operations. I understand your complaint, as office applications generally don't need a lot more power than about 1.5 GHz at most. However, the key of the statement was the "general purpose microprocessor" and not the "very powerful" part.
  • AnandThenMan - Thursday, March 17, 2005 - link

    WAIT. What the flock does this mean?

    "Performance in business/office applications requires a very powerful, very fast general purpose microprocessor, but performance in a game console, for example, does not."

    WHAT??????? Hello?? So an office app like Word needs a very powerful processor, but a game console does not? I beg to differ. I suppose it depends on how you define "business/office application", but I think that statement is WAY off. I know several current office applications that will limp along on a Pentium 133, but no current game has any hope on the same CPU.
  • tipoo - Wednesday, July 30, 2014 - link

    It was clear to me that meant console CPUs didn't have to be as general purpose and brute force powerful in every regard - they can get away with being more specialized, and suck at general work, but still fast for game specific code.
  • Googer - Thursday, March 17, 2005 - link

    When are they coming out? Anyone know of a release date?
  • jeffbui - Thursday, March 17, 2005 - link

    #4, I do. Heh.

    I've been waiting for this article forever.. thanks!
  • JarredWalton - Thursday, March 17, 2005 - link

    Interesting stuff. The Playstation has always been something of a pain in the rear to program. PS1 went its own way, and PS2 did the same. PS3 and Cell seem ready to pave new roads into the "OMG this is really complex" land of programming. I'm glad I've given up serious programming.... :)
  • Googer - Thursday, March 17, 2005 - link

    In Soviet Russia, Cell processor controls your mind.
