What Is Multithreading?

Before we get into a discussion of how to go about multithreading, it may help to explain what multithreading actually means. Most people who use computers are now familiar with the term multitasking. As the name implies, multitasking involves running multiple tasks at the same time. This can be done either in the real world or on a computer, and depending on what you're doing, multitasking can bring an overall increase in productivity.

For example, let's say you're cooking a dinner that will consist of three dishes: roasted chicken, mashed potatoes, and green beans. If you were to tackle this task without any multitasking, you would first cook the chicken, then the potatoes, and finally the green beans. Unfortunately, by the time you're finished cooking the green beans, you might discover that the chicken and potatoes are already cold. So you decide to multitask and do all three at once. First you start boiling some water on the stove for the potatoes; while that heats, you pull the chicken out of the refrigerator, place it in a pan, and start preheating the oven. Then you peel the potatoes. By now the water is boiling, so you put the potatoes into the water and let them cook. The oven is also preheated, so you put the chicken in and let it begin cooking. The beans won't take long to cook, so you just wash them off and set them aside for now. Eventually the potatoes are finished boiling, but before mashing them you put the green beans in a steamer and set it on the stove. Then you drain the potatoes and mash them up, adding butter and whatever else you want, and by now both the beans and chicken are done as well. You put everything onto plates, serve it up, and you're finished.

What's interesting to note is that the above description does not actually involve doing two things at once. Instead, you are doing portions of each task, and while you're waiting for certain things to complete you work on other tasks. On a classic single processor computer system, the same situation applies: the processor never really does two things at once; it just switches rapidly between various applications, giving each of them a portion of the available computational power. In order to actually do more than one thing at a time, you need more cooks in the kitchen, or in this case more processors. With two people working on dinner, for example, more elaborate dishes could be prepared along with additional courses: while one person works on preparing the three main dishes we mentioned above, a second person could work on something like an appetizer and a dessert.

You could potentially add even more people, so you might have five people each preparing a single dish for a five course meal. Slightly trickier would be to have multiple people working on a single dish. Rather than doing something mundane like grilled chicken, you could have a chicken dish with a sauce and various other items to liven it up, and an extremely complex dish could even be broken down into more steps that various individuals could work on completing. Obviously, more can be accomplished as you add additional people, but you also run the risk of becoming less efficient, to the point where some people might only be busy half the time; the rough sketch below illustrates that limitation.
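This diminishing-returns effect is commonly formalized as Amdahl's law, which the article alludes to with its kitchen analogy rather than by name, so treat the following as a rough editorial sketch: if only a fraction of the work can be split among extra workers, the remaining serial portion caps the overall speedup. The numbers below assume a hypothetical task where 80% of the work can be parallelized.

```cpp
#include <iostream>

// Rough sketch of diminishing returns (Amdahl's law): if a fraction p of
// the work can be divided among n workers and the rest must be done by one
// person, the overall speedup is limited by that serial remainder.
double speedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    int workers[] = {1, 2, 4, 8, 16};
    for (int n : workers)
        std::cout << n << " workers: " << speedup(0.8, n) << "x faster\n";
    return 0;
}
```

Even with 16 workers, this example tops out at a 4x speedup, because the serial 20% of the work never gets any faster no matter how many people you add.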

We started by talking about multitasking, but the last example began to get into the concept of multithreading. In computer terminology, a "thread" is basically a portion of a program that needs to be executed. If a task is computationally intensive and is written as a single-threaded application, it can only take advantage of a single processor core. Running two instances of such an application would allow you to use two processor cores, but if you only need to run one instance you have to find another way to take advantage of the additional computational power available. Multithreading is what's required, and in essence it involves breaking a task into two or more pieces that can be executed simultaneously.
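To make that concrete, here is a minimal C++ sketch (a generic illustration, not Valve's code; the names are purely for the example) that breaks one task, summing a large array, into two pieces that can run on separate processor cores:

```cpp
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Sum a large array by splitting the work across two threads,
// each handling half of the elements.
int main() {
    std::vector<int> data(1'000'000, 1);
    long long sum_a = 0, sum_b = 0;

    auto middle = data.begin() + data.size() / 2;

    std::thread worker_a([&] { sum_a = std::accumulate(data.begin(), middle, 0LL); });
    std::thread worker_b([&] { sum_b = std::accumulate(middle, data.end(), 0LL); });

    worker_a.join();  // wait for both halves to finish
    worker_b.join();

    std::cout << "total = " << (sum_a + sum_b) << '\n';
    return 0;
}
```

Each std::thread can be scheduled on its own core, so on a dual core system the two halves of the sum can genuinely proceed at the same time.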

Whereas multitasking can be important whether or not you have multiple processor cores available, multithreading only really begins to matter when you have the ability to execute more than one thread at a time. If you have a single core processor, multithreading simply adds overhead while the processor spends time switching between threads, and it is often better to run most tasks as a single thread on such systems. It's also worth noting that it is much easier to write and debug code that runs as a single-threaded application, because you know exactly what order each task will execute in.
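As an illustration of that last point, the following C++ sketch (again just a generic example) launches two threads that print to the console. The order in which the lines appear can change from run to run, whereas a single-threaded version would always produce the same output order:

```cpp
#include <iostream>
#include <thread>

// Two threads printing concurrently: the interleaving of their output
// can differ from run to run, which is part of what makes multithreaded
// code harder to reason about and debug.
void chatter(const char* name) {
    for (int i = 0; i < 3; ++i)
        std::cout << name << " step " << i << '\n';
}

int main() {
    std::thread a(chatter, "thread A");
    std::thread b(chatter, "thread B");
    a.join();
    b.join();
    return 0;
}
```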

We will return to this "cooks in the kitchen" analogy when we talk about the various types of threading environments. It's a bit simplistic, but hopefully it gives you a better idea of what goes on inside computer programs and what it means to break a task up into threads.

55 Comments

  • Nighteye2 - Wednesday, November 8, 2006 - link

    Ok, so that's how Valve will implement multi-threading. But what about other companies, like Epic? How does the latest Unreal Engine multi-thread?
  • Justin Case - Wednesday, November 8, 2006 - link

    Why aren't any high-end AMD CPUs tested? You're testing 2GHz AMD CPUs against 2.6+ GHz Intel CPUs. Doesn't Anandtech have access to faster AMD chips? I know the point of the article is to compare single- and multi-core CPUs, but it seems a bit odd that all the Intel CPUs are top-of-the-line while all AMD CPUs are low end.
  • JarredWalton - Wednesday, November 8, 2006 - link

    AnandTech? Yes. Jarred? Not right now. I have a 5000+ AM2, but you can see that performance scaling doesn't change the situation. 1MB AMD chips do perform better than 512K versions, almost equaling a full CPU bin - 2.2GHz Opteron on 939 was nearly equal to the 2.4GHz 3800+ (both OC'ed). A 2.8 GHz FX-62 still isn't going to equal any of the upper Core 2 Duo chips.
  • archcommus - Tuesday, November 7, 2006 - link

    It must be a really great feeling for Valve knowing they have the capacity and capability to deliver this new engine to EVERY customer and player of their games as soon as it's ready. What a massive and ugly patch that would be for virtually any other developer.

    Don't really see how you could hate on Steam nowadays considering things like that. It's really powerful and works really well.
  • Zanfib - Tuesday, November 7, 2006 - link

    While I design software (so not so much programming as GUI design and whatnot), I can remember my University courses dealing with threading, and all the pain threading can bring.

    I predicted (though I'm sure many could say this and I have no public proof) that Valve would be one of the first to do such work; they are a very forward thinking company with large resources (like Google--if they want to work on ANYthing, they can...), a great deal of experience, and (as noted in the article) the content delivery system to support it all.

    Great article about a great subject, goes a long way to putting to rest some of the fears myself and others have about just how well multi-core chips will be used (with the exception of Cell, but after reading a lot about Cell's hardware I think it will always be an insanely difficult chip to code for).
  • Bonesdad - Tuesday, November 7, 2006 - link

    mmmmmmmmm, chicken and mashed potatoes....
  • Aquila76 - Tuesday, November 7, 2006 - link

    Jarred, I wanted to thank you for explaining in terms simple enough for my extremely non-technical wife to understand why I just bought a dual-core CPU! That was a great progression on it as well, going through the various multi-threading techniques. I am saving that for future reference.
  • archcommus - Tuesday, November 7, 2006 - link

    Another excellent article, I am extremely pleased with the depth your articles provide, and somehow, every time I come up with questions while reading, you always seem to answer exactly what I was thinking! It's great to see you can write on a technical level but still think like a common reader so you know how to appeal to them.

    With regards to Valve, well, I knew they were the best since Half-Life 1 and it still appears to be so. I remember back in the days when we weren't even sure if Half-Life 2 was being developed. Fast forward a few years and Valve is once again revolutionizing the industry. I'm glad HL2 was so popular as to give them the monetary resources to do this kind of development.

    Right now I'm still sitting on a single core system with XP Pro and have lots of questions bustling in my head. What will be the sweet spot for Episode 2? Will a quad core really offer substantially better features than a dual core, or a dual core over a single core? Will Episode 2 be fully DX10, and will we need DX10 compliant hardware and Vista by its release? Will the rollout of the multithreaded Source engine affect the performance I already see in HL2 and Episode 1? Will Valve actually end up distributing different versions of the game based on your hardware? I thought that would not be necessary due to the fact that their engine is specifically designed to work for ANY number of cores, so that takes care of that automatically. Will having one core versus four make big graphical differences or only differences in AI and physics?

    Like you said yourself, more questions than answers at this point!
  • archcommus - Tuesday, November 7, 2006 - link

    One last question I forgot to put in. Say it was somehow possible to build a 10 or 15 GHz single core CPU with reasonable heat output. Would this be better than the multi-core direction we are moving towards today? In other words, are we only moving to multi-core because we CAN'T increase clock speeds further, or is this the preferred direction even if we could?
  • saratoga - Tuesday, November 7, 2006 - link

    You got it.

    A higher clock speed processor would be better, assuming performance scaled well enough anyway. Parallel hardware is less general than serial hardware at increasing performance, because it requires parallelism to be present in the workload. If the work is highly serial, then adding parallelism to the hardware does nothing at all. Conversely, even if the workload is highly parallel, doubling serial performance still doubles performance. Doubling the width of a unit could double the performance of that unit for certain workloads, while doing nothing at all for others. In general, if you can accelerate the entire system equally, doubling serial performance will always double program speed, regardless of the program.

    That's the theory anyway. Practice says you can only make certain parts faster. So you might get away with doubling clock speed, but probably not halving memory latency, so your serial performance doesn't scale like you'd hope. Not to mention increasing serial performance is extremely expensive compared to parallel performance. But if it were possible, no one would ever bother with parallelism. It's a huge pain in the ass from a software perspective, and it's becoming big now mostly because we're starting to run out of tricks to increase serial performance.
