Out-of-Order Architectures

In contrast to in-order architectures, there are out-of-order architectures. Out-of-order architectures still decode instructions in the original order of the program, and still retire them in order, but the actual issue/execution of the instructions can happen out of order.

Let's talk a bit about what all of this means. A CPU must not change the intent of the code fed to it. Frankly speaking, if you double-click on a file, your CPU would be rather useless if it executed a bunch of format commands instead. Although that's an extreme example, to ensure that things like that don't happen, a CPU must adhere to two rules:
  1. Instructions must be decoded (i.e. interpreted by the CPU to find out what they are asking it to do) in the original order of the program, and
  2. Instructions must retire in the original order of the program (i.e. the result of each operation must be committed to registers/memory in the same order as the instructions were sent to the CPU).
Both in-order and out-of-order architectures adhere to those two rules - it's in what happens between those two stages that out-of-order architectures differ. We mentioned on the previous page that in-order architectures can't reorder instructions on the fly. Let's say that we have an in-order CPU with one adder and one load/store unit that is fed the following code (for the sake of simplicity, we'll leave a forwarding network out of this discussion):
  1.      LD R10, R11
  2.      ADD R5, R10, R10
  3.      ADD R9, R9, #1
  4.      ...
In the first instruction, we're loading data from a memory address stored in R11 into R10.   Then, we're adding the value that we just obtained from memory to itself and storing it in R5.   The third and final line in the snippet increments the value stored in R9 by 1 and stores it in R9.   Quickly looking at the code, you see that line 2 can't execute before line 1.  Doing so would alter the intent of the code (if you want to add something to itself, you need to make sure you have that something first).   Line 3, however, is completely independent of lines 1 and 2.
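
To make the dependencies a bit more concrete, here is a rough C equivalent of the snippet; the variable names and the small wrapper program are invented purely for illustration:

    /* Rough C equivalent of the three instructions above.
       Names are illustrative only. */
    #include <stdio.h>

    int counter = 0;                   /* plays the role of R9 */

    int demo(const int *addr)          /* addr plays the role of R11 */
    {
        int value   = *addr;           /* line 1: LD  R10, R11 (load from memory)   */
        int doubled = value + value;   /* line 2: ADD R5, R10, R10 (needs line 1)   */
        counter     = counter + 1;     /* line 3: ADD R9, R9, #1 (independent)      */
        return doubled;
    }

    int main(void)
    {
        int data = 21;
        printf("%d %d\n", demo(&data), counter);   /* prints "42 1" */
        return 0;
    }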

With an in-order microprocessor, if the data being loaded in line 1 is contained within the cache, then that instruction will take around 1 to 30 clock cycles to complete (varying depending on the architecture and which level of cache it is in). Line 2 would simply have to wait those 1 to 30 cycles before executing, and only after it executed could line 3 have its turn. If the requested data isn't stored in the cache (maybe it's the first time that we're asking for that value and we haven't asked for anything near it in memory), then we have a problem. All of a sudden, line 1 doesn't take around 1 to 30 cycles to complete; now, it's going to take 200+ clock cycles. For line 2, that's not such a big deal, since it can't execute until line 1 completes anyway, but line 3 could just as easily execute during the time that the CPU is waiting on that load from memory. Any independent instructions following line 3 are also at the mercy of the cache miss.

With an out-of-order microprocessor, however, the situation of a cache miss isn't nearly as dramatic.   The code is still decoded in order, meaning that it comes across instructions 1, 2 and 3 in the same order as the in-order CPU, but this time, we have the ability to execute line 3 ahead of lines 1 and 2 instead of idly waiting for line 1 to complete.   In the event of a cache miss, this gives the out-of-order microprocessor a pretty big performance advantage, as it isn't sitting there burning away clock cycles while nothing gets done.   So, how does the out-of-order CPU work?
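
As a back-of-the-envelope illustration of that advantage, the sketch below tallies cycles for the three-instruction snippet under a cache miss; the 200-cycle miss penalty and the 1-cycle ADD latency are assumptions made for the sake of the example, not measured figures:

    /* Back-of-the-envelope cycle counts for the three-instruction snippet
       when the load misses the cache.  The 200-cycle miss penalty and the
       1-cycle ADD latency are assumptions for illustration only. */
    #include <stdio.h>

    int main(void)
    {
        const int load_miss = 200;  /* line 1: LD that misses the cache */
        const int add_dep   = 1;    /* line 2: ADD that needs the load  */
        const int add_indep = 1;    /* line 3: independent ADD          */

        /* In-order: every instruction waits for the one ahead of it. */
        int in_order = load_miss + add_dep + add_indep;

        /* Out-of-order: line 3 runs on the free adder while the load is
           outstanding, so only the dependent ADD serializes behind the miss. */
        int out_of_order = load_miss + add_dep;

        printf("in-order: %d cycles, out-of-order: %d cycles\n",
               in_order, out_of_order);
        return 0;
    }

The saving here is a single cycle because there is only one independent instruction, but every additional independent instruction that follows line 3 hides under the same miss at no extra cost.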

If someone told you a list of things to do in any order that you wanted, you'd simply take in the list and get to it.   But if they told you to report back the things that you've completed in the order in which they were told to you, you'd have to grumble and write them down first before reorganizing them to fit your needs.

An out-of-order CPU works pretty much the same way, except instead of a to-do list, it has an instruction window.  The instruction window functions similarly to a to-do list - it has all of the decoded instructions in their original order and is kept as a record to make sure that those instructions retire in the order that they were decoded.

Alongside the instruction window, an out-of-order CPU also has a scheduling window - it is in this "window" that all of the reordering of instructions takes place. The scheduling window contains logic to mark dependent and independent instructions and send all independent ones to execution units, while waiting for dependent instructions to become ready for execution.

As previously dependent instructions (e.g. instructions waiting on data from main memory or on the results of other instructions) have their inputs become available, they too can be executed, once again, in any order.
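
The sketch below is a highly simplified model of those two structures, running the earlier snippet with out-of-order issue and in-order retirement; the data structures, the latencies and the single "depends on the previous entry" flag are inventions for illustration, not a description of any real scheduler:

    /* A highly simplified model of an instruction window with out-of-order
       issue and in-order retirement.  Everything here (the structures, the
       latencies, the single dependency flag) is invented for illustration. */
    #include <stdbool.h>
    #include <stdio.h>

    #define WINDOW_SIZE 4

    typedef struct {
        const char *text;        /* the decoded instruction, for printing    */
        int         latency;     /* cycles of execution left once issued     */
        bool        needs_prev;  /* toy dependency: waits on the entry above */
        bool        issued, done, retired;
    } entry_t;

    int main(void)
    {
        /* The instruction window: decoded instructions, in program order. */
        entry_t win[WINDOW_SIZE] = {
            { "LD  R10, R11",     200, false, false, false, false },  /* cache miss     */
            { "ADD R5, R10, R10",   1, true,  false, false, false },  /* needs the load */
            { "ADD R9, R9, #1",     1, false, false, false, false },  /* independent    */
            { "...",                1, false, false, false, false },
        };

        for (int cycle = 1; !win[WINDOW_SIZE - 1].retired; cycle++) {
            /* Scheduling window: issue any entry whose inputs are ready,
               regardless of program order. */
            for (int i = 0; i < WINDOW_SIZE; i++) {
                bool ready = !win[i].needs_prev || (i > 0 && win[i - 1].done);
                if (!win[i].issued && ready) {
                    win[i].issued = true;
                    printf("cycle %3d: issued  %s\n", cycle, win[i].text);
                }
            }

            /* Execution units: issued entries count down their latency. */
            for (int i = 0; i < WINDOW_SIZE; i++)
                if (win[i].issued && !win[i].done && --win[i].latency == 0)
                    win[i].done = true;

            /* Instruction window again: retire strictly in program order, so
               the oldest unfinished instruction blocks everything behind it. */
            for (int i = 0; i < WINDOW_SIZE; i++) {
                if (win[i].retired)
                    continue;
                if (!win[i].done)
                    break;
                win[i].retired = true;
                printf("cycle %3d: retired %s\n", cycle, win[i].text);
            }
        }
        return 0;
    }

Running it, the two independent instructions issue on cycle 1 while the load is still outstanding, yet nothing retires until the load itself completes - exactly the decode-in-order, execute-out-of-order, retire-in-order behavior described above.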

Right off the bat, you can tell that adding an instruction window, a scheduling window and all of the associated logic to detect independent instructions - not to mention the logic to handle out-of-order execution with in-order retirement - makes for a more complex microprocessor. But there is one other significant problem with out-of-order microprocessors: the increase in performance and instruction level parallelism is greatly dependent upon the size of the instruction window.

The larger you make this window, the more parallelism can be extracted, simply because the CPU is looking at a wider set of instructions from which to select independent ones. At the same time, the larger you make the window, the lower your clock speed can be, because the logic that has to compare every newly produced result against every instruction still waiting in the window grows along with it and lengthens the critical path.

Despite the downsides, all modern day x86 microprocessors are out-of-order cores, as keeping a single core simple isn't the top priority given advances in manufacturing processes.   The benefits of an out-of-order architecture are two-fold:
  1. Dynamic reordering of instructions lets the CPU hide memory latencies, allowing for even higher clock speeds. For every cache miss, a Pentium 4 3.6GHz has to wait around 230 clock cycles to get data from main memory (see the quick sanity check after this list), which is a lot of idle time in the eyes of the CPU. Being able to make use of that idle time by executing other independent instructions in the meantime is one way in which architectures like the Pentium 4 and Athlon 64 get away with running at such high multiples of their memory frequency.
  2. An incremental increase in instruction level parallelism - by reordering instructions on the fly, out-of-order architectures can extract additional ILP in areas where the compiler fails to.
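
The 230-cycle figure is easy to sanity check; the roughly 64 ns round trip to main memory used below is an assumed ballpark latency for a system of that era, not a number from the article:

    /* Quick sanity check of the cache-miss penalty quoted in point 1.
       The ~64 ns main-memory latency is an assumed ballpark figure. */
    #include <stdio.h>

    int main(void)
    {
        double clock_hz  = 3.6e9;   /* Pentium 4 at 3.6GHz                */
        double mem_lat_s = 64e-9;   /* assumed ~64 ns main-memory latency */
        printf("~%.0f cycles per miss\n", clock_hz * mem_lat_s);   /* ~230 */
        return 0;
    }
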
So, it's obvious that both AMD and Intel have figured out that for a general purpose x86 microprocessor, out-of-order makes the most sense.   Then, why is it that the architects of Cell, when starting with a clean slate, outfitted the processor with 9 independent in-order cores?

The first thing to remember is that you can get pretty solid performance from an in-order architecture. The Itanium is an in-order microprocessor, based on a premise similar to Cell's: that the compiler should be able to extract the sort of parallelism that an out-of-order core finds on its own. Current generation Itanium cores run at half the clock speed of modern day x86 cores, yet the CPU is able to execute around twice the instructions per clock of the fastest x86 CPUs. To quote Intel's Justin Rattner in reference to Itanium, "an appropriately designed instruction set should lend itself to an in-order architecture without any problems." So, it's quite possible that the same could apply to Cell...

Comments

  • PhilAnd - Wednesday, October 5, 2005 - link

    Thank you SO MUCH!!! I've been looking for an explanation of the Cell forever and this did it perfectly!! THANK YOU!!! YOU ARE GOD!!!
  • philpoe - Sunday, July 31, 2005 - link

    Under the high-level overview of the cell section, the PPE has 64KB L1 and 512KB L2 cache.
    On the other hand, under the on-die memory controller section, we see that the XDR memory gives bandwidth of 25.6GB/sec, and the integrated memory controller "significantly reduces memory latencies".
    My question then is, what good are the L1 and L2 cache doing? Given the amount of real estate those transistors take up, isn't it more economical to use the system RAM exclusively? The L2 cache takes up about the same amount of space as an SPE - not that adding just one more would help all that much - but what effect on performance would getting rid of the L2 or even L1 cache have, given memory with such high bandwidth?
  • tipoo - Wednesday, December 2, 2015 - link

    L1 and L2 latency isn't even approached by the fastest system RAM latencies, XDR included. A few nanoseconds versus tens to hundreds of nanoseconds.
  • jiulemoigt - Saturday, March 26, 2005 - link

    Oh #59, it's funnier than that: the PPE does all the work a modern CPU does with logic, and the easy stuff is done by the extra procs... but that means the messy stuff {think calc equations} can not be done by the extra procs, so if your game requires more abstract equations vs simple math {the math understanding simple math} - say AI vs drawing boxes and cubes - your machine will be dependent on the smaller proc. Pipeline length is a game of balancing prediction vs speed: if you can predict a full pipeline, a longer pipe is much faster, but if you miss with the prediction at some point in the pipe, everything after that point is lost, so the longer the pipe is after the miss, the bigger the loss. So a shorter pipe is not necessarily better, as there are tasks the P4 excels at because it has the huge pipe, and the longer the pipe, the higher you can scale the proc speed, which is why Intel chose such a huge pipe, knowing the misses would hurt, but at the time people still wanted every MHz possible. AMD has a 14-stage pipe because they use decent prediction but better register use, as well as fast pathing. But the biggest reason x86 is fast is that, as long as it works, there are reams of code out there (enough to reach the sun); a new system will require human hours to clean up so that it can take all the shortcuts that x86 already does. So if the devs are laughing now, it is because they know it is going to be very unfriendly to code for, and they're frustrated that the hardware, which has years of effort going into its design, is not being designed to be easier to code for and to do the hard work for us - instead it does all the easy work faster, which doesn't help us, and makes the hard work harder! And in some cases it runs slower because it was cheaper. I understand how much money M$ lost, which was passed on to nvidia, so nvidia won't get away with that this time; they will have to make it cheaper this time around.
  • AndyKH - Thursday, March 24, 2005 - link

    #55
    Regarding the interview from GameSpot:
    He (the guy who is very upset with having to program for in-order cores) states that code will run like crap on these new cores. Well... I don't know exactly how many pipeline stages the new cores have, but they will without a doubt have a LOT fewer stages than a modern out-of-order core. If you also spend a great amount of design effort to make sure the branch target is calculated very early in the pipeline, and couple that with a high clock frequency, you might not even need to fetch your bag of Kleenexes to dry your eyes.

    Of course, I don't know how long the pipeline in a Cell PPE or in the Xenon's cores is, but everything points to a very short one. Also I don't know how early the branch target is calculated, but I bet it's pretty early.

    As an end remark I might add that "computer engineers are not stupid people". In the interview, the guy makes it sound like it will be impossible to run gameplay code on the new console CPUs..... I personally don't think that IBM's and Sony's engineers would design a CPU with so little care.

    Regards
    Andreas
  • TheGee - Monday, March 21, 2005 - link

    Transputer anyone? The computer on a chip that could be massively paralleled? Difficult to program, but this Cell is not such a great leap in ideas; still, with the corporate weight behind it, it may succeed where others have failed and break the x86 limitations put on PCs. If the buses are big enough, it would be nice to be able to plug in extra CPUs on a card or suchlike to upgrade or speed up a system without too much difficulty, as long as the software is not CPU limited. But as before, it's best not to hold your breath!
  • Slaimus - Sunday, March 20, 2005 - link

    PS1 was easy to program, so it took off. Sony made the PS2 very hard to program if you want to use its vector units efficiently, but since the game developers were already on board, they had to live with it. And Sony will dump the same heap onto developers again with the PS3.

    With this kind of complexity, I have a feeling that middleware companies will thrive. Game developers want to create content more than write assembly code, so a few middleware companies will probably supply the libraries while everyone else licenses them. Of course Microsoft has a head start since DirectX already exists and is included in the devkit, but then again, the xbox2 is not as massively parallel.
  • stephenbrooks - Sunday, March 20, 2005 - link

    Ah sod multiple cores. I always preferred playing Tetris anyhow.
  • knitecrow - Friday, March 18, 2005 - link

    GAME DEVELOPER @ GDC RANT ON NEXT GEN CONSOLES
    http://www.gamespot.com/news/2005/03/18/news_61204...


    All right, here we go. "How Sony and Microsoft are about to screw your game design." These are games in the good old days. We didn't exactly have the best physique, but we were at least a balanced individual, you walk out on the beach, and you were like, you know, pathetic. But you know, you looked like a normal person. These are games today. We've been working really hard--I mean, you can maybe make the argument that this is the game--these are games today. I gotta little more work on that left arm to do, it's going to be as big as our graphics arm soon. This is kind of lame. We really want to be this guy don't we?

    Unknown Speaker: No!

    [laughter]

    Chris Hecker: OK, he was the best guy I could find in like, three seconds in the WiFi network out in the lobby. All right. But how do we get there? Well, I'm going to take a little diversion here. I'm a programmer, so, I have two technical slides, really one technical slide. And that's about it. All right, ready? So there are two kinds of code in a game basically. There's gameplay code and engine code. Engine code, like graphics and physics, takes really giant data structures of homogenous data. I mean, it's all the same, like a lot of vertices are all a big matrix, or whatever, but usually floating point data structures these days. And you have a single small, relatively small hour that grinds away on that. This code is like, wow, it has a lot of math in it, it has to be optimized for super scalar, blah, blah, blah. It's just not actually that hard to write, right? It's pretty well defined what this code does.

    The second kind of code we have is AI and gameplay code. Lots of little exceptions. Even if you're doing a simulation-y kind of game, there's tons of tunable parameters, [it's got a lot of interactions], it's a mess. I mean, this code--you look at the gameplay code in the game, and it's crap. Compared to like, my elegant physics simulator or whatever. But this is a code that actually makes the game feel different. This is the kind of code we want to be easy to write and so we can do more experimental stuff. Here is the terrifying realization about the next generation of consoles. I'm about to break about a zillion NDAs, but I didn't sign any NDAs so that's totally cool!

    I'm actually a pretty good programmer and mathematician but my real talent is getting people to tell me stuff that they're not supposed to tell me. There we go. Gameplay code will get slower and harder to write on the next generation of consoles. Why is this? Here's our technical slide. Modern CPUs, like the Intel Pentium 4, blah, blah, blah, Pentium [indiscernible] or laptop, whatever is in your desktop, and all the modern power PCs, use what's called 'out of order' execution. Basically, out of order execution is there to make really crappy code run fast.

    So, they basically--when out of order execution came out on the P6, the Pentium 6 [indiscernible] the Pentium 5, the original Pentium and the one after that. The Pentium Pro I think they called it, it basically annoyed a whole bunch of low level ASCII coders, because now all of a sudden, like, the crappiest-ass C code, that like, Joe junior programmer could write, is running as fast as their Assembly, and there's nothing they can do about it. Because the CPU behind their back, is like, reordering that guy's crappy ass C code, to run really well and utilize all the parts of the processor. While this annoyed a whole bunch of people in Scandinavia, it actually…

    [laughter]

    And this is a great change in the bad old days of 'in order execution,' where you had to be an Assembly language wizard to actually get your CPU to do anything. You were always stalling in the cache, you needed to like--it was crazy. It was a lot of fun to write that code. It wasn't exactly the most productive way of doing experimental programming.

    The Xenon and the cell are both in order chips. What does this mean? The reason they did this, is it's cheaper for them to do this. They can drop a lot of core--you know--one out of order core is about the size of three to four in order cores. So, they can make a lot of in order cores and drop them on a chip, and keep the power down, and sell it for cheap--what does this do to our code?

    Well, it makes--it's totally fine for grinding like, symmetric algorithms out of floating point numbers, but for lots of 'if' statements in directions, it totally sucks. How do we quantify 'totally sucks?' "Rumors" which happen to be from people who are actually working on these chips, is that straight line gameplay code runs at 1/3 to 1/10 the speed at the same clock rate on an in order core as an out of order core.

    This means that your new fancy 2 plus gigahertz CPU, and its Xenon, is going to run code as slow or slower than the 733 megahertz CPU in the Xbox 1. The PS3 will be even worse.

    This sucks!

    [laughter]

    There's absolutely nothing you can do about this. Well, you can actually hope that Nintendo uses an out of order core, because they're claiming that they're going to try and make it easy to develop for--except for Nintendo basically totally flailed this generation. So maybe they'll do something next generation. Who knows? You can think about having batchable design simulation-y systems, but like, I'm a huge proponent of simulation in gameplay, but even simulation in gameplay takes kind of messy systems under the hood. And this makes your gameplay harder to write.

    You want to just write the gameplay. You don't want to have to like, spend 6 years of a super hardcore engine programmer's time to figure out how to make your gameplay run super scalars. You could do PC games. They are still out of order cores, but a lot of people don't think that's an option nowadays.
  • tipoo - Thursday, December 3, 2015 - link

    It's funny looking back, he wanted them to change the CPU from the Gamecube for the next generation...They ended up using an upclocked Gamecube CPU for the Wii, and a modified tri core version of it for the Wii U.
