Intel Dual Core Performance Preview Part I: First Encounter by Anand Lal Shimpi on April 4, 2005 2:44 PM EST
Scheduling and Responsiveness
In a single-processor system (without Hyper-Threading), the OS can only dispatch one thread to the CPU for execution at a time. Yet you can run two applications at once, and both can be consuming CPU time. To understand how this is possible, you have to understand a bit about how scheduling works.
As its name implies, the OS's scheduler schedules tasks. It takes the arbitrary number of tasks requested of the OS and schedules them to get done in the quickest way possible (in theory) on limited hardware resources.
When running a single application, the job of the scheduler is simple - the single active application gets all of the CPU's time for as long as it needs it. But what happens when you switch away from that active application and try to click on the Start Menu? Your usage experience would be pretty poor if you had to wait until your active application was done with its tasks before the scheduler would take the time to handle your Start Menu request. Imagine that your active application was 3ds max and you were rendering a scene that was going to take hours to complete. Would you be willing to wait hours for your Start Menu to appear?
Modern OSes understand that this linear approach to scheduling isn't very practical, so they support pre-emptive multitasking: one task can pre-empt another before it is finished executing and steal CPU time so that it may get some work done as well. In the previous example, the Start Menu request would pre-empt the 3D rendering process, your menu would pop up, and the 3D rendering would resume immediately afterwards. Given how fast modern microprocessors are, this rotation through the tasks sent to the CPU appears seamless to the end user - or at least it does most of the time.
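This behavior is easy to observe from user space. Here is a minimal Python sketch of the scenario above (the iteration count is arbitrary, and the exact latencies depend on the OS and hardware): a CPU-bound "render" thread runs in the background, yet a quick "menu" task still completes almost immediately because the scheduler pre-empts the busy thread.

```python
import threading
import time

def long_render():
    # Stand-in for an hours-long 3D render: a CPU-bound busy loop.
    total = 0
    for i in range(10_000_000):
        total += i

def quick_ui_task():
    # Stand-in for opening the Start Menu: trivial work.
    return sum(range(1_000))

# Start the "render" in the background.
render = threading.Thread(target=long_render)
start = time.perf_counter()
render.start()

# The quick task does not wait for the render to finish; the
# scheduler pre-empts the busy thread and runs it almost at once.
quick_ui_task()
ui_latency = time.perf_counter() - start

render.join()
total_time = time.perf_counter() - start

# The "menu" responds long before the "render" completes.
assert ui_latency < total_time
```

Without pre-emption, `quick_ui_task` would have had to wait for the entire render loop to finish before running at all.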
There are times when the scheduler's work is not as transparent as it should be. In some cases, especially under Windows, processes cannot always pre-empt one another. If you're running two time-consuming, CPU-intensive tasks, you may not notice, but if you're running one and trying to open a file or just click on a menu at the same time, the hiccup is far more noticeable. The end result is usually a significantly delayed reaction to your input, such as a menu taking several seconds to appear instead of responding instantly to your click. Anyone who runs more than one application at a time has undoubtedly encountered this type of situation. Luckily, there are solutions.
Intel's Hyper-Threading was one way around the problem. By fooling the scheduler into thinking that it can dispatch two threads simultaneously, situations like the one above were usually avoided, assuming that the CPU had the appropriate resources free. Dual core is another solution to the problem, and a far more robust one, since you literally have twice the processor resources.
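Structurally, what the scheduler sees in either case is two runnable threads. A rough Python sketch of that structure (the workload and its split are arbitrary; note that CPython's interpreter lock serializes pure-Python bytecode, so an actual dual-core speedup would require separate processes or a runtime without that lock - the sketch only shows the two-thread shape the scheduler dispatches):

```python
import threading

results = {}

def partial_sum(name, lo, hi):
    # CPU-bound work: sum a half-open range of integers.
    results[name] = sum(range(lo, hi))

N = 2_000_000
# Two runnable threads: on a single core they take turns; with
# Hyper-Threading or a second core the scheduler can dispatch
# both simultaneously, so neither has to wait for the other.
a = threading.Thread(target=partial_sum, args=("a", 0, N // 2))
b = threading.Thread(target=partial_sum, args=("b", N // 2, N))
a.start(); b.start()
a.join(); b.join()

# The two halves combine to the full result regardless of how
# the scheduler interleaved or parallelized them.
total = results["a"] + results["b"]
assert total == N * (N - 1) // 2
```

The point of dual core is that both `partial_sum` calls can make progress at the same instant, rather than the scheduler rotating one CPU between them.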
The result of using a Hyper-Threading enabled or dual core system is better responsiveness when multitasking, but how do you quantify that? Unfortunately, it is extremely difficult to quantify response time in these situations. And even if we could, is a snappier system when multitasking worth more than another 15% of performance in single-threaded applications? How about 25%? It's a very different way of looking at a CPU's impact on overall system performance, but it's an issue that we will have to tackle much more moving forward.