Ordering Instructions around Dependencies

Luckily, there are solutions to the problem of dependencies in code: one tackles the problem in hardware, the other in software.

The software compiler is responsible for producing the assembly code that is sent to the CPU for execution. Thus, with an intimate knowledge of the inner workings of the CPU, the compiler can, generally speaking, schedule instructions so that data dependencies cause as few stalls as possible.
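As a small, hypothetical sketch of what that scheduling can look like (the variables and the ordering below are our own illustration, not something from the article): one operation depends on the result of another, and the compiler slots independent work between the two so the pipeline is not left idle while the first result is still being computed.

    #include <stdio.h>

    /* A toy example of compile-time instruction scheduling.
     * 'c = a * b' and 'd = c + 1' form a dependency chain: the add
     * cannot begin until the multiply has produced c.  'f = e + 2' is
     * independent of both, so a scheduling compiler can place it between
     * the two dependent operations to keep an in-order pipeline busy
     * while the multiply is still in flight. */
    int main(void)
    {
        int a = 3, b = 4, e = 10;

        int c = a * b;   /* longer-latency operation                        */
        int f = e + 2;   /* independent: hoisted here to fill the gap       */
        int d = c + 1;   /* depends on c, so it must wait for the multiply  */

        printf("%d %d\n", d, f);
        return 0;
    }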

There are microprocessor architectures that depend entirely on the compiler to extract instruction-level parallelism while avoiding dependencies as much as possible. These architectures are known as in-order microprocessors.

In-Order Architectures

As the name implies, an in-order microprocessor can only execute instructions in the order that they are sent to the CPU.   At best, the CPU can execute multiple instructions in parallel, but it has no ability to reorder the instructions to suit its needs better.

If you have a good enough compiler, then an in-order microprocessor should be just fine.   There are a couple of key limitations, however:

1.      Binaries compiled for in-order architectures are very architecture-specific

Although both the Athlon 64 and the Pentium 4 are fully able to run x86 code, they have vastly different microarchitectures, with different execution units and very different things that they are “good” at. If both of the aforementioned chips depended entirely on the compiler to extract parallelism and maximize performance, one would most definitely suffer. You could always ship two versions of every program, but that tends to get large and messy, especially from an update/patch standpoint. The compiler has to be intimately aware of the architecture that it’s compiling for, which works in cases like a game console, where you don’t have multiple vendors providing differently architected CPUs with a common ISA, but not so well in something like the desktop x86 market.

2.      Unpredictable memory latencies

Cache is a good thing, most of the time. Cache on a microprocessor does its best to keep frequently used data at hand, so that it can be made available to the CPU at very low latencies. The problem is that cache adds a level of unpredictability to how long it will take to get data from memory. A cache hit could mean that your data will be ready in 10 to 20 cycles; a cache miss could mean hundreds of cycles. With an in-order microprocessor, instructions can’t be reordered based on data availability, so if data isn’t available in cache, the entire CPU has to sit and wait until it is brought in from main memory. Even if other instructions could be executed in the meantime, an in-order microprocessor has no logic to handle the on-the-fly reordering needed to get around unpredictable memory latencies (a small sketch of how software can schedule around this follows below).
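To make the software side of this concrete, here is a small sketch (our own hypothetical example, not taken from the article) of the trick a compiler or programmer can use on an in-order core: issue the load that might miss as early as possible, do independent work while the data is on its way, and only consume the loaded value at the end.

    #include <stdio.h>
    #include <stddef.h>

    /* Sketch: scheduling around an unpredictable load latency.
     * table[idx] may hit in cache (tens of cycles) or miss (hundreds).
     * An in-order core cannot reorder around the load on its own, so the
     * load is issued first and independent arithmetic is placed after it
     * to hide at least part of a potential miss. */
    static int scaled_lookup(const int *table, size_t idx, int x, int y)
    {
        int loaded = table[idx];   /* start the possibly-missing load early       */

        int scale  = x * y + 7;    /* independent work: overlaps the load latency */
        int offset = (x - y) * 3;  /* more independent work                       */

        /* Only now is the loaded value consumed; if the load missed, some of
         * the wait has already been covered by the arithmetic above. */
        return loaded * scale + offset;
    }

    int main(void)
    {
        static int table[256];
        for (int i = 0; i < 256; i++)
            table[i] = i;

        printf("%d\n", scaled_lookup(table, 42, 5, 3));
        return 0;
    }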

If you can find a way around the limitations of an in-order architecture, there are some very tangible benefits:

1.      A much simplified microprocessor

Out-of-order microprocessors have a significant amount of complexity added to them in order to deal with the on-the-fly reordering of instructions; we will talk about them in greater detail in the next section. By moving this work to the software/compiler side, you greatly reduce the complexity of your microprocessor and save your transistor budget for other things that can yield better performance benefits. Less complexity also means less power consumed and less heat dissipated.

2.      Shorter pipeline

In order to deal with the reordering of instructions, generally speaking, a number of pipeline stages have to be added to the architecture, resulting in higher power consumption and the need for a more accurate branch predictor, since a deeper pipeline carries a larger branch misprediction penalty (a rough back-of-the-envelope sketch follows below). While the impact on pipeline depth isn’t as big of a deal for designs that are already deeply pipelined, for shorter designs the increase can be 40% or more.
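As a rough, back-of-the-envelope illustration of that relationship (the branch frequency, predictor accuracy, and penalty figures below are purely hypothetical, chosen only to show the shape of the trade-off): the cycles lost to mispredicted branches grow linearly with the misprediction penalty, so every stage added to the pipeline forces the predictor to become more accurate just to hold performance steady.

    #include <stdio.h>

    /* Added cycles per instruction (CPI) from branch mispredictions:
     *   cpi_loss = branch_fraction * mispredict_rate * penalty_cycles
     * Deepening the pipeline raises penalty_cycles, so a design with more
     * stages needs a more accurate predictor to pay the same cost. */
    int main(void)
    {
        const double branch_fraction = 0.20;  /* assume 1 in 5 instructions is a branch */
        const double mispredict_rate = 0.05;  /* assume a 95%-accurate predictor        */

        for (int penalty = 10; penalty <= 30; penalty += 10) {
            double cpi_loss = branch_fraction * mispredict_rate * penalty;
            printf("misprediction penalty %2d cycles -> +%.2f CPI\n",
                   penalty, cpi_loss);
        }
        return 0;
    }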

Historically, the idea of a simple in-order core has been abandoned in favor of the obvious alternative: an out-of-order architecture.

Comments
  • WishIKnewComputers - Thursday, March 17, 2005

    Well, I don't really see the Cell 'breaking' in any way. Between being in the PS3, IBM servers/supercomputers, and Sony and Toshiba electronics, the chip will be all over the place.

    As for it showing up in PCs... no, it won't happen anytime soon, but I really don't think it's intended to at this point. Workstations and PlayStations are its main concern, and smartly so. The Cell in its first generation isn't cut out for superior general tasking, obviously, but when those things start pumping out (and they will... the PS2 has sold what, 80 million units?), there will likely be different and more advanced versions. And if some of those are changed for enhanced general purposing somehow or another, then they could have a shot at entering the PC world. As for taking on Intel, though... I don't think IBM is even considering that. If I had to guess, if they wanted to be in a PC, they would have OS X adapted to Cell and IBM would have these things in Apples.

    But no matter which way they go, is it me or does IBM seem light-years ahead of Intel? After looking at Intel's future plans, it seems that they are trying to move towards what IBM is doing now. So is the Cell a processor just ahead of its time, or has Intel just gotten behind?
  • AnnihilatorX - Thursday, March 17, 2005

    This article is seriously a kill for a child like me. I appreciate it though. Well done Anandtech
  • ravedave - Thursday, March 17, 2005

    I can't wait to see what developers think of the Cell & the SDKs for it. I have a feeling that's what will kill the Cell or make it successful.
  • microbrew - Thursday, March 17, 2005

    "System on a Chip (SoC)"

    What will make or break the Cell is the tools available, especially the operating system and libraries.

    I would like to see what they're doing in terms of marketing the chip to consumer electronics, telecom, military and other embedded applications. I could see the Cell as a viable alternative to the usual mixtures of PowerPCs, ARMs and DSPs.

    I also agree with Final Words; I don't see the Cell breaking into the consumer PC market any time soon either.
  • Locut0s - Thursday, March 17, 2005

    #17 Yeah that was a bit too harsh I agree.
  • Eug - Thursday, March 17, 2005

    I'm just wondering how well a dual-core PPE-based 4+ GHz chip would do in general purpose (desktop) code.

    And I also wonder how cool/hot such a chip would be. The Xbox 2's CPU is probably a 3-core PPE, but it runs at 3 GHz, and we don't have power specs for it anyway.
  • Filibuster - Thursday, March 17, 2005

    #11 (well, everyone should if they haven't before) read the Arstechnica article on PS2 vs PC - static applications vs dynamic media. Cell is taking it to the next level.

    http://arstechnica.com/articles/paedia/cpu/ps2vspc...

    Very nice article Anand!
  • Googer - Thursday, March 17, 2005

    Besides a release date, is there any news or knowledge of a Linux kit for PlayStation 3 like there was for PS2? Does anyone know of either?
  • Illissius - Thursday, March 17, 2005

    Damn. Awesome article. If I hadn't known the site and author beforehand, I would've guessed Ars and Hannibal. Seems he isn't the only one with a talent for these kinds of articles ;)
    You should do more of them.
  • scrotemaninov - Thursday, March 17, 2005

    #22: This is just a guess so don't rely on this. The POWER5 has 2way SMT. Each cycle it fetches 8 instructions from the L1I cache. All instructions fetched per cycle are for the same thread so it alternates (round robin). It also has capabilities for setting the thread priority so that you effectively run with 1 thread and it just fetches 8 instructions per cycle for the one running thread.

    I would expect the PPE to be similar to this, fetching 2 instructions for the same thread each cycle. The POWER5 has load-balancing stuff in there too - if one thread keeps missing in L2, then the other thread gets more instructions decoded in order to keep the CPU's functional unit utilisation up. I've no idea whether this kind of stuff has made it over into the PPE; I'd be a little surprised if it has, especially seeing as this is in-order anyway, so it's not like you're going to be aiming for high utilisation rates.
