Out-of-Order Architectures

In contrast to in-order architectures, there are out-of-order architectures. An out-of-order architecture still decodes instructions in the original order of the program, and still retires them in order, but the actual issue and execution of the instructions can happen out of order.

Let's talk a bit about what all of this means. A CPU is useless if it changes the intent of the code fed to it. For example, if you double-click on a file, your CPU would be rather useless if it executed a bunch of format commands instead. That's an extreme example, but to ensure that things like it don't happen, a CPU must adhere to two rules:
  1. Instructions must be decoded (i.e. interpreted by the CPU to find out what they are asking it to do) in the original order of the program, and
  2. Instructions must retire in the original order of the program (i.e. the result of each operation must be committed to registers/memory in the same order as the instructions were sent to the CPU).
Both in-order and out-of-order architectures adhere to those two rules - it's in what happens between those two stages that the architectures differ. We mentioned on the previous page that in-order architectures can't reorder instructions on the fly. Let's say that we have an in-order CPU with one adder and one load/store unit that is fed the following code (for the sake of simplicity, we'll leave the forwarding network out of this discussion):
  1.      LD R10, R11
  2.      ADD R5, R10, R10
  3.      ADD R9, R9, #1
  4.      ...
In the first instruction, we're loading data from the memory address stored in R11 into R10. Then, we're adding the value that we just obtained from memory to itself and storing the result in R5. The third and final line in the snippet increments the value stored in R9 by 1. Quickly looking at the code, you can see that line 2 can't execute before line 1; doing so would alter the intent of the code (if you want to add something to itself, you need to have that something first). Line 3, however, is completely independent of lines 1 and 2.
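
To make that dependency analysis concrete, here's a minimal sketch in Python (the instruction encoding is invented for illustration, not how real hardware stores anything) that flags read-after-write hazards in the snippet above:

    # A minimal sketch of read-after-write (RAW) hazard detection.
    # Each instruction is encoded as (destination, source operands).
    instructions = [
        ("R10", ("R11",)),        # 1. LD  R10, R11
        ("R5",  ("R10", "R10")),  # 2. ADD R5, R10, R10
        ("R9",  ("R9", "#1")),    # 3. ADD R9, R9, #1
    ]

    def raw_hazard(earlier, later):
        """True if `later` reads the register that `earlier` writes."""
        dest, _ = earlier
        _, sources = later
        return dest in sources

    for i in range(len(instructions)):
        for j in range(i + 1, len(instructions)):
            if raw_hazard(instructions[i], instructions[j]):
                print(f"line {j + 1} depends on line {i + 1}")
    # Output: "line 2 depends on line 1" - nothing else is printed,
    # because line 3 is independent of both earlier lines.

Running it confirms what the prose says: only line 2 is held back, and only by line 1.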

With an in-order microprocessor, if the data being loaded in line 1 is contained within the cache, then that instruction will take around 1 - 30 clock cycles to complete (depending on the architecture and which level of cache holds the data). Line 2 would simply have to wait those 1 - 30 cycles before executing, and only after it executed could line 3 have its turn. If the requested data isn't stored in cache (maybe it's the first time that we're asking for that value and we haven't asked for anything near it in memory), then we have a problem. All of a sudden, line 1 doesn't take around 1 - 30 cycles to complete; now, it's going to take 200+ clock cycles. For line 2, that's not such a big deal, since it can't execute until line 1 completes anyway, but line 3 could just as easily execute during the time that the CPU is waiting on that load from memory. Any independent instructions following line 3 are also at the mercy of the cache miss.

With an out-of-order microprocessor, however, a cache miss isn't nearly as dramatic. The code is still decoded in order, meaning that the CPU comes across instructions 1, 2 and 3 in the same order as the in-order CPU, but this time, it has the ability to execute line 3 ahead of lines 1 and 2 instead of idly waiting for line 1 to complete. In the event of a cache miss, this gives the out-of-order microprocessor a pretty big performance advantage, as it isn't sitting there burning away clock cycles while nothing gets done. So, how does the out-of-order CPU work?
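
Where that advantage comes from is easy to see with a toy timing model (a sketch with made-up latencies: 200 cycles for the missing load, 1 cycle per add, plus a hypothetical run of 50 further independent adds after line 3):

    # Toy timing model of the cache-miss scenario above. All latencies
    # are assumptions for illustration: a miss costs 200 cycles, an add
    # costs 1 cycle, and 50 more independent adds follow line 3.
    MISS_LATENCY = 200
    INDEPENDENT_ADDS = 1 + 50   # line 3 plus the hypothetical filler

    # In-order: everything behind the load stalls until it completes.
    in_order = MISS_LATENCY + 1 + INDEPENDENT_ADDS

    # Out-of-order: the independent adds issue while the load is still
    # outstanding, so their cycles hide under the miss; only line 2
    # truly has to wait for the loaded data.
    out_of_order = max(MISS_LATENCY, INDEPENDENT_ADDS) + 1

    print(f"in-order:     {in_order} cycles")      # 252
    print(f"out-of-order: {out_of_order} cycles")  # 201

The absolute numbers are invented; the point is that the out-of-order core gets the independent work done for free while it waits on memory.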

If someone gave you a list of things to do and let you complete them in any order you wanted, you'd simply take the list and get to it. But if they told you to report back the things that you've completed in the order in which they were given to you, you'd have to grumble and write them down first before reorganizing them to fit your needs.

An out-of-order CPU works pretty much the same way, except instead of a to-do list, it has an instruction window. The instruction window functions similarly to a to-do list - it holds all of the decoded instructions in their original order and is kept as a record to make sure that those instructions retire in the order in which they were decoded.

Alongside the instruction window, an out-of-order CPU also has a scheduling window - it is in this "window" that all of the reordering of instructions takes place. The scheduling window contains logic to mark instructions as dependent or independent, sending the independent ones off to execution units while the dependent instructions wait for their inputs to become ready.

As previously dependent instructions (e.g. instructions waiting on data from main memory or on the results of other instructions) become independent, they too become eligible to execute - once again, in any order.
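
The interplay between the two windows can be sketched in a few lines of Python (a simplified model, not actual hardware: one instruction issues per cycle, and the load is given a made-up 5-cycle latency standing in for a cache miss so the trace stays short):

    # Sketch: out-of-order issue from a scheduling window, in-order
    # retirement from the instruction window. Latencies are made up.
    program = [
        # (name, registers read, register written, latency in cycles)
        ("LD  R10, R11",     {"R11"}, "R10", 5),
        ("ADD R5, R10, R10", {"R10"}, "R5",  1),
        ("ADD R9, R9, #1",   {"R9"},  "R9",  1),
    ]

    reg_ready_at = {"R9": 0, "R11": 0}  # cycle each input value is ready
    issued_at, done_at = {}, {}

    cycle = 0
    while len(done_at) < len(program):
        cycle += 1
        # Scheduling window: issue the oldest not-yet-issued instruction
        # whose inputs are all available; younger independent instructions
        # may issue ahead of older stalled ones.
        for name, reads, writes, latency in program:
            if name in issued_at:
                continue
            if all(reg_ready_at.get(r, cycle + 1) <= cycle for r in reads):
                issued_at[name] = cycle
                done_at[name] = cycle + latency - 1
                reg_ready_at[writes] = cycle + latency
                print(f"cycle {cycle}: issue  {name}")
                break

    # Instruction window: retire strictly in program order (one per
    # cycle), no matter the order in which execution finished.
    retire_cycle = 0
    for name, _reads, _writes, _latency in program:
        retire_cycle = max(retire_cycle + 1, done_at[name] + 1)
        print(f"cycle {retire_cycle}: retire {name}")

In the trace this produces, line 3 issues on cycle 2, well before line 2 (which waits for R10 until cycle 6), yet line 3 is still the last to retire - exactly the decode-in-order, execute-out-of-order, retire-in-order behavior described above.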

Right off the bat, you can tell that the addition of an instruction window, a scheduling window and all of the associated logic to detect independent instructions - not to mention the logic to handle out-of-order execution with in-order retirement - makes for a more complex microprocessor. But there is one other significant issue with out-of-order microprocessors: the increase in performance and instruction level parallelism depends greatly upon the size of the instruction window.

The larger you make this window, the more parallelism can be extracted, simply because the CPU is looking at a wider set of instructions from which to select independent ones. At the same time, the larger you make the window, the lower your attainable clock speed becomes.
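
The first half of that tradeoff is easy to illustrate (a sketch; the instruction mix is invented - an independent instruction in every fourth slot of an otherwise serial stream):

    # Sketch: a wider window exposes more independent work. The stream
    # is invented: True marks an instruction with no pending inputs.
    stream = [(i % 4 == 3) for i in range(64)]

    for window_size in (4, 8, 32):
        candidates = sum(stream[:window_size])
        print(f"window of {window_size:2d} -> {candidates} issue candidates in view")
    # -> 1, 2 and 8 candidates respectively: more window, more choices.

The second half of the tradeoff - that a bigger window lengthens the critical path through the wakeup/select logic and therefore caps clock speed - is a circuit-level effect that a model this simple can't show.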

Despite the downsides, all modern day x86 microprocessors are out-of-order cores; with advances in manufacturing processes, keeping a single core simple is no longer the top priority. The benefits of an out-of-order architecture are two-fold:
  1. Dynamic reordering of instructions lets the CPU hide memory latencies, allowing for even higher clock speeds. For every cache miss, a 3.6GHz Pentium 4 has to wait around 230 clock cycles to get data from main memory, which is a lot of idle time in the eyes of the CPU (a quick back-of-the-envelope check of that figure follows this list). Being able to make use of that idle time by executing other independent instructions in the meantime is one way in which architectures like the Pentium 4 and Athlon 64 get away with running at such high multiples of their memory frequency.
  2. Incremental increases in instruction level parallelism - by reordering instructions on the fly, out-of-order architectures can improve ILP in areas where the compiler fails to.
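
As promised, a quick sanity check on the 230-cycle figure (a sketch; only the 3.6GHz clock and the cycle count come from the text above - the nanosecond figure is derived):

    # Back-of-the-envelope: convert the quoted miss penalty into time.
    core_clock_hz = 3.6e9        # Pentium 4 clock from the text
    miss_penalty_cycles = 230    # quoted cycles lost per cache miss

    latency_ns = miss_penalty_cycles / core_clock_hz * 1e9
    print(f"{miss_penalty_cycles} cycles at 3.6GHz = {latency_ns:.0f} ns")
    # -> 230 cycles at 3.6GHz = 64 ns, consistent with main memory
    #    access times of the period.
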
So, it's obvious that both AMD and Intel have figured out that, for a general purpose x86 microprocessor, out-of-order makes the most sense. Why is it, then, that the architects of Cell, when starting with a clean slate, outfitted the processor with 9 independent in-order cores?

The first thing to remember is that you can get pretty solid performance from an in-order architecture. The Itanium is an in-order microprocessor, based on a premise similar to Cell's: that the compiler should be able to extract the sort of parallelism that an out-of-order core finds on its own. Current generation Itanium cores run at half the speed of modern day x86 cores, yet the CPU is able to execute around 2x the instructions per clock of the fastest x86 CPUs. To quote Intel's Justin Rattner in reference to Itanium, "an appropriately designed instruction set should lend itself to an in-order architecture without any problems." So, it's quite possible that the same could apply to Cell...

Comments

  • scrotemaninov - Thursday, March 17, 2005

    #23: True, but I believe that when the SPEs access the outside memory, they go through the cache. Sure, it's a lower coherency than we're used to, but it's not much worse.
  • Houdani - Thursday, March 17, 2005

    18: Top Drawer Post.
    20: Thanks for the links!
  • fitten - Thursday, March 17, 2005

    "Given the speed of the interconnect and the fact that it is cache-coherent,"

    Only the PPC core has cache. The individual SPEs don't have cache - they have scratchpad RAM.

    #22: I believe the PPC core is a dual-issue core that just happens to be 2-way SMT.
  • AndyKH - Thursday, March 17, 2005

    Great article.
    Anand, Could you please clarify something:
    I had the impression that the PPE was an SMT processor in the sense that it had to be executing 2 threads in order to issue 2 instructions per clock. In other words: I didn't think the PPE control logic could decide to issue 2 instructions from the same thread on any given clock tick, but rather that it absolutely needed an instruction from each thread to issue two instructions.

    After reading the article, I no longer assume my impression is right, but a comment from you would be nice.

    As I come to think about it, my impression is rather identical to 2 separate single-threaded in-order cores. :-)
  • Koing - Thursday, March 17, 2005

    Cell looks VERY interesting.

    Any of you guys seen Devil May Cry 3 on the PS2? Looks great imo, same with T5 and GT4.

    Cell at first will be tough, like most consoles. BUT eventually the developers will get around it and make some very solid, good-looking games.

    Let's hope they are innovative and not just rehashed graphics and nothing else.

    Thanks for the great article.

    Koing
  • scrotemaninov - Thursday, March 17, 2005

    I really hate just dumping loads of links, but this basically is the available content on the CELL.

    http://arstechnica.com/articles/paedia/cpu/cell-1....
    http://arstechnica.com/articles/paedia/cpu/cell-2....
    http://realworldtech.com/page.cfm?ArticleID=RWT021...
    http://www.blachford.info/computer/Cells/Cell0.htm...

    http://www.realworldtech.com/page.cfm?ArticleID=RW...

    http://www.hpcaconf.org/hpca11/papers/25_hofstee-c...
    http://www.hpcaconf.org/hpca11/slides/Cell_Public_... (slides)
  • mrmorris - Thursday, March 17, 2005

    Brilliant article. There are few places for in-depth, hardcore technology presentations, but Anandtech never fails.
  • scrotemaninov - Thursday, March 17, 2005

    Real concurrency is hard for programmers to do. It's a real pain to get right and it's hard to debug. Systematic analysis just gets too complex as there are too many states; you end up with a huge graph/Markov model and it's just impossible to solve tractably.

    Superscalar and SMT just try to increase ILP at the CPU level without burdening the programmer or compiler-writer. However, we've pretty much come to the end of getting a CPU to go faster - at 5GHz, LIGHT travels 6cm between clocks, and an electric PD will travel slower. As it is, in the P4 pipeline, there are at least 2 stages which are simply there to allow signals to propagate across the chip. Clearly, going faster in Hz isn't going to make the pipeline go faster.

    So the ONLY thing that they can do now is to put lots of cores on the same chip, and then we're going to have to deal with real concurrency. IBM/Sony are doing it now with CELL and Intel will do it in a few years. It's going to happen regardless. What we need are languages which can support real concurrency. The Java Memory Model is an almost ideal fit for the CELL, but other aspects don't work out so well, maybe. We need Pi-calculus/Join-calculus constructs in languages to be able to really deal with these CPUs efficiently.

    Your comments about CELL not being general purpose enough are a little wrong. IBM /already/ has the CELL in workstations and is evaluating applications that will work well. Given the speed of the interconnect and the fact that it is cache-coherent, I think we'll be seeing supercomputers based on many CELLs; it's an almost ideal fit (as it is, you've almost got ccNUMA on a single chip). Also, bear in mind that this is IBM's 5th (or 6th?) generation of SMT in the PPE - they've been at it MUCH longer than Intel - IBM started it in the mid-90s, around the same time that the Alpha crew were working on the EV8, which was going to have 8-way thread-level parallelism (got canned, sadly).

    Also, if you look at IBM's heavy CPUs - the POWER5 has SMT and dispatches in groups of 8 instructions, not the 3/4 that AMD/Intel manage.

    What I'm saying here is that, sure, the SPEs don't have BPTs or BTBs, they're all 2-way dispatch and not greater, but they all run REALLY fast, they have short pipelines (so the pain of a branch misprediction won't be so bad), and IBM have had software branch prediction available since the POWER4, so they've been at it a few years and must have decided that compilers really can successfully predict branch directions.

    Backwards compatibility doesn't matter. Sure, Microsoft took several years to support AMD64, but that didn't stop take-up of the platform - everyone just ran Linux on it (well, everyone who wanted to use the 64-bit CPU they'd bought). We'll only have to wait a few months after the CELL is out until Linux can be built on it. 100 quid says Microsoft will never support it.

    Frankly, considering that it's far more likely to go into super-computer or workstation environments, no one there gives a damn about backwards compatibility or Windows support. No one in those environments /wants/ a damn paper clip.
  • Reflex - Thursday, March 17, 2005

    #14: Replace 'lazy developers' with 'developers on a budget' and you will have a true statement. It's not an issue of laziness, it's an issue of having the budget to optimize fully for a platform.
  • GhandiInstinct - Thursday, March 17, 2005

    Wow Super CPU and SUPER RAMBUS? AHHHH!

    This will replace my computer. PS3 that is.
