Usage Patterns

Before getting into the architecture of Cell, let’s talk a bit about the types of workloads for which Cell and other microprocessors are currently being built.

In the past, office application performance was a driving factor behind microprocessor development. Before multitasking and before email, what mattered was single application performance, and for the most part that meant office applications: word processors, spreadsheets, etc. Thus, most microprocessors were designed for the best possible single-application, single-task performance.

As microprocessors became more powerful, the software followed - multitasking environments were born. The vast majority of computer users, however, were still focused on single application usage, so microprocessor development continued to focus on single-threaded performance (single application, single task performance).

Over the years, the single-threaded performance demands grew. Microsoft Word was no longer the defining application; instead, games, media processing and dynamic content creation became the applications that ate up the most CPU cycles. That is where we are today, with a mix of office work, 3D games, 3D content creation and media encoding/decoding/transcoding consuming our CPU cycles. But in order to understand the creation of a new architecture like Cell, you have to understand where these workloads are headed. Just as the applications demanding performance today are much different from those run 10 years ago, the same will apply to applications in the next decade. And given that a new microprocessor architecture takes about 5 years to develop, it makes sense to introduce an architecture geared towards these new usage models now.

Intel spoke a lot about future usage models at their most recent IDF: things like real time voice recognition (and even translation), unstructured search (e.g. Google image search), even better physics and AI models in games, more feature-rich user interfaces (e.g. hand gesture recognition), etc. These are the usage models of the future, and as such, they place a different set of demands on microprocessors and their associated architectures.

The type of performance required to enable these usage models is significantly higher than what we have available today. Conventionally, performance increases from one microprocessor generation to the next by improving single thread performance. There are a number of ways of improving single thread performance: either by driving up the clock speed or by increasing the number of instructions executed per clock (IPC). Taking it one step further, the more parallelism you can extract from a single thread, the better your performance will be - this type of parallelism is known as instruction level parallelism (ILP), as it involves executing as many instructions from a single thread at the same time as possible.
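
To make the ILP idea concrete, here is a minimal C++ sketch (the function names and the four-wide unrolling are illustrative assumptions, not anything specific to Cell or any particular CPU). The unrolled multiplies are independent of one another, so a wide, out-of-order core can issue several of them in the same cycle, while the running sum forms a dependency chain that leaves the hardware little parallelism to find.

    #include <cstddef>

    // Illustrative only: the four multiplies per iteration have no data
    // dependencies on one another, so a superscalar, out-of-order core can
    // execute several of them in parallel (high ILP).
    // (Handling of the last few elements when n isn't a multiple of 4 is
    // omitted for brevity.)
    void scale_unrolled(float* out, const float* a, const float* b, std::size_t n)
    {
        for (std::size_t i = 0; i + 4 <= n; i += 4) {
            out[i]     = a[i]     * b[i];
            out[i + 1] = a[i + 1] * b[i + 1];  // no result feeds into another,
            out[i + 2] = a[i + 2] * b[i + 2];  // so the hardware is free to
            out[i + 3] = a[i + 3] * b[i + 3];  // overlap these operations
        }
    }

    // By contrast, each addition here needs the previous result, so there is
    // very little ILP for the core to exploit, however wide it is.
    float sum_chain(const float* a, std::size_t n)
    {
        float acc = 0.0f;
        for (std::size_t i = 0; i < n; ++i)
            acc += a[i];
        return acc;
    }

Out-of-order hardware already hunts for this kind of parallelism automatically, which is part of why the remaining generation-to-generation gains from ILP alone are modest.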

The problem with improving performance by increasing ILP is that, from one generation to the next, you're only talking about a 10% - 20% increase in performance. Yet, the usage models we're talking about for the future require significantly more than the types of gains we've been getting in the past. With power limitations preventing clock speeds from scaling much higher, it's clear that there needs to be another way of improving performance.

The major players in the microprocessor industry have all pretty much agreed that the only way to get the type of performance gains that are necessary is by moving towards multi-core architectures. Through a combination of multithreaded applications and multi-core processors, you can get the kinds of performance increases that should allow these types of applications to be developed. Instead of focusing on extracting ILP to improve performance, these multi-core processors extract parallelism at the thread level (thread level parallelism - TLP).
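
As a rough sketch of what that shift means for software, here is a small C++ example (hypothetical function name, using the standard std::thread facility rather than any console-specific API): the same multiply workload is split into chunks, and each chunk is handed to its own thread so that a multi-core processor can run the chunks on separate cores at once.

    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // Illustrative sketch of thread level parallelism: divide the work into
    // roughly equal slices and let each worker thread process one slice.
    void scale_parallel(float* out, const float* a, const float* b, std::size_t n)
    {
        const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
        const std::size_t chunk = (n + workers - 1) / workers;
        std::vector<std::thread> threads;

        for (unsigned t = 0; t < workers; ++t) {
            const std::size_t begin = t * chunk;
            const std::size_t end   = std::min(n, begin + chunk);
            if (begin >= end) break;
            threads.emplace_back([=] {
                // each thread works on its own, non-overlapping slice
                for (std::size_t i = begin; i < end; ++i)
                    out[i] = a[i] * b[i];
            });
        }
        for (auto& th : threads)
            th.join();   // wait for every worker before returning
    }

Nothing inside the loop gets any faster; the speedup comes entirely from running more threads at once, which is why the burden shifts to how the software is written.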

It’s not as straightforward as that, however. There are a handful of decisions that need to be made. How powerful do you make each core in your multi-core microprocessor? Do you have a small array of powerful processors or a larger array of simpler processors? How do they communicate with one another? How do you deal with feeding a multi-core processor with enough memory bandwidth?

The Cell implementation is just one solution to the problem...

Comments

  • ceefka - Thursday, March 17, 2005 - link

    Rambus' Revenge
  • Locut0s - Thursday, March 17, 2005 - link

    Great article Anand!! Yeah I actually get to bring my Comp150 knowledge to bear in reading this article! If this had come out 6 months ago I would have been totally lost. It will indeed be interesting to see what headway Cell can make, however unfortunately, as Anand alludes to, the x86 architecture is just too heavily entrenched for anything to budge it except the Big 2 (AMD and Intel). I can't wait to see what type of power the Playstation 3 will have though, and especially how that power will be utilized in games. I bet there will be some jaw dropping graphics awaiting us there. That is, if Cell's limitations don't hold back lazy game developers and lead to a string of mediocre games punctuated by a few amazing titles made by independent developers who really care to utilize the architecture. Didn't the Playstation 1 suffer something similar?
  • knitecrow - Thursday, March 17, 2005 - link

    The Real World Technologies article on the Cell states that it gives up single thread performance in favour of running many parallel threads. That sounds like a terribly difficult processor to develop games for.

    I for one think it will be easier to put the burden on the hardware rather than on the software side.

    Can we see another repeat of PS2? Technically impressive, but hard to code for.
  • JarredWalton - Thursday, March 17, 2005 - link

    11 - I think the point is that games tend to use certain functions of a CPU much more frequently, while general business/office applications make use of a wider range of generic operations. I understand your complaint, as office applications generally don't need a lot more power than about 1.5 GHz at most. However, the key of the statement was the "general purpose microprocessor" and not the "very powerful" part.
  • AnandThenMan - Thursday, March 17, 2005 - link

    WAIT. What the flock does this mean?

    "Performance in business/office applications requires a very powerful, very fast general purpose microprocessor, but performance in a game console, for example, does not."

    WHAT??????? Hello?? So an office app like Word needs a very powerful processor, but a game console does not? I beg to differ. I suppose it depends on how you define "business/office application" but I think that statement is WAY off. I know several current office applications that will limp along on a pentium 133, but no current game has any hope on the same CPU.
  • tipoo - Wednesday, July 30, 2014 - link

    It was clear to me that meant console CPUs didn't have to be as general purpose and brute force powerful in every regard - they can get away with being more specialized, and suck at general work, but still fast for game specific code.
  • Googer - Thursday, March 17, 2005 - link

    When are they coming out? Anyone know of a release date?
  • jeffbui - Thursday, March 17, 2005 - link

    #4, I do. Heh.

    I've been waiting for this article forever.. thanks!
  • JarredWalton - Thursday, March 17, 2005 - link

    Interesting stuff. The Playstation has always been something of a pain in the rear to program. PS1 went its own way, and PS2 did the same. PS3 and Cell seem ready to pave new roads into the "OMG this is really complex" land of programming. I'm glad I've given up serious programming.... :)
  • Googer - Thursday, March 17, 2005 - link

    In Soviet Russia, Cell processor controls your mind.
