Scheduling and Responsiveness

In a single processor system (without Hyper-Threading), the OS can only send one instruction thread to the CPU for execution at a time. Yet you can run two applications at the same time, and both can consume CPU time. To understand how this is possible, you have to understand a bit about how scheduling works.

As its name implies, the OS's scheduler schedules tasks. It takes the arbitrary number of tasks requested of the OS and schedules them to get done in the quickest way possible (in theory) on limited hardware resources.

When running a single application, the job of the scheduler is simple: the single active application gets all of the CPU's time for as long as it needs it. But what happens when you switch away from that active application and try to click on the Start Menu? Your experience would be pretty poor if you had to wait until the active application finished its tasks before the scheduler took the time to handle your Start Menu request. Imagine that your active application was 3ds max and you were rendering a scene that would take hours to complete. Would you be willing to wait hours for your Start Menu to appear?

Modern OSes understand that this linear approach to scheduling isn't very practical, so they support pre-emptive multitasking, meaning that one task can pre-empt another before it has finished executing and steal CPU time so that it can get some work done as well. In the previous example, the Start Menu request would pre-empt the 3D rendering process, your menu would pop up, and the rendering would resume immediately afterward. Given how fast microprocessors are these days, this rotation through the tasks sent to the CPU appears seamless to the end user, or at least it does most of the time.
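To make the idea concrete, here is a toy round-robin simulation in C. It is only a sketch of the concept (real schedulers use priorities, dynamic quanta, and far more state), and the task names and time slice are chosen purely for illustration:

```c
#include <stdio.h>

/* Toy round-robin scheduler: every runnable task gets a fixed time
   slice, then is pre-empted so the next task can make progress. */
typedef struct {
    const char *name;
    int remaining_ms;   /* CPU time the task still needs */
} Task;

int main(void) {
    Task tasks[] = { { "3ds max render", 50 }, { "Start Menu", 5 } };
    const int n = 2, slice_ms = 10;
    int alive = n;

    while (alive > 0) {
        for (int i = 0; i < n; i++) {
            if (tasks[i].remaining_ms <= 0)
                continue;   /* task already finished */
            int run = tasks[i].remaining_ms < slice_ms
                          ? tasks[i].remaining_ms : slice_ms;
            tasks[i].remaining_ms -= run;
            printf("ran %-14s for %2d ms (%2d ms left)\n",
                   tasks[i].name, run, tasks[i].remaining_ms);
            if (tasks[i].remaining_ms == 0)
                alive--;
        }
    }
    return 0;
}
```

The Start Menu "task" completes after a single pass through the queue, even though the long render is nowhere near done; that is the whole point of pre-emption.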

There are times when the scheduler's work is not as transparent as it should be. In some cases, especially in Windows, processes will not always be able to pre-empt one another. If you're running two time-consuming, CPU-intensive tasks, you may not notice, but if you're running one and trying to open a file or just click on a menu at the same time, the hiccup is far more noticeable. The end result is usually a significantly delayed reaction to your input, such as a menu taking several seconds to appear instead of responding instantly to your click. Anyone who runs more than one application at a time has undoubtedly encountered this type of situation. Luckily, there are solutions.

Intel's Hyper-Threading was one way around the problem. By fooling the scheduler into thinking that it can dispatch two threads simultaneously, situations like the one above were usually avoided, assuming that the CPU had the appropriate resources free. Dual core is another solution to the problem, and a far more robust one, since you literally have twice the processor resources.

The result of using a Hyper-Threading enabled or dual core system is better responsiveness when multitasking, but how do you quantify that? Unfortunately, it is extremely difficult to quantify response time in these situations. Even if we could easily quantify the response time improvements, is a snappier system when multitasking worth more than another 15% of performance in single threaded applications? How about 25%? It's a very different way of looking at the impact of a CPU on overall system performance, but it is an issue that we will have to tackle a lot more moving forward.
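One crude way to put a number on "snappiness" is to measure scheduling latency directly: ask for a short sleep and see how late the wakeup arrives while something else hammers the CPU. The sketch below is our illustration, not a benchmark used in this review; it assumes a Win32 C toolchain and a CPU hog running in another process:

```c
#include <windows.h>
#include <stdio.h>

/* Responsiveness probe: request a 10 ms sleep and measure how long it
   actually takes to wake up. With a CPU-intensive task running in the
   background, the overshoot is a rough stand-in for the "delayed menu"
   effect described above. */
int main(void) {
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    double worst_ms = 0.0;
    for (int i = 0; i < 200; i++) {
        QueryPerformanceCounter(&t0);
        Sleep(10);  /* ask the scheduler for a 10 ms nap */
        QueryPerformanceCounter(&t1);
        double ms = 1000.0 * (double)(t1.QuadPart - t0.QuadPart)
                           / (double)freq.QuadPart;
        if (ms > worst_ms)
            worst_ms = ms;
    }
    printf("worst wakeup: %.1f ms (10 ms requested)\n", worst_ms);
    return 0;
}
```

Even so, a single latency number only captures part of what "feels" responsive, which is exactly the difficulty the review points out.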

Comments

  • Icehawk - Thursday, April 7, 2005 - link

    The idea is that there are things I'd like to do but currently can't. Can these new processors allow it?

    I.e., Doom3, Azureus, and DVDShrink at once. YES, that IS a realistic test, because it is something I'd like to do. Instead, I have a second PC eating up electricity to handle downloading & encoding tasks.

    Ok, I gotta go read Pt 2!
  • michael2k - Thursday, April 7, 2005 - link

    Did you even read the rest of my post, DaDVD?

    I quote myself, just in case you didn't see it the first time:
    Here's a reason why importing a PST file while opening Photoshop is a valid benchmark:

    If you're using, say, Premiere to create a movie, and you want to create a mask, you have Premiere in the background rendering the previews, transitions, and SFX (CPU+HD load) while you open Photoshop, import a frame from the movie, and create your mask.

    You then go back to Premiere, apply your mask, and continue editing.

    That is also why the DVDShrink test is so important: It's doing a background video encode while the foreground is doing other stuff, which nicely simulates a video workstation load.

    Not everyone is a gamer, and not everyone is a casual user. There are some people who make movies, compile code, develop software, and write games, and some of us do read AnandTech.

    I'm not ripping hundreds of DVDs, but I have been known to make about four DVDs a year, and out of those 4 DVDs, I will 'mass produce' each one about 10 times; some of them only get a handful of copies, like 3 or 4, while others get massive numbers of copies, like 20 or 30.

    The process of making a DVD takes about five hours to get the movie ready, two hours to get the DVD ready, and half an hour per burn.

    This is on a 933MHz machine. With a dual core, according to the data provided by DVD Shrink and importing PST files, that time might go down to two hours to get the movie ready (essentially realtime) and half an hour to get the DVD ready (again realtime), and with the new 8x burners, 10 minutes per DVD.

    So if I make 50 DVDs a year, instead of spending 78 hours on it, I can more likely spend 18 hours on it. This is on top of the OTHER things I do, like coding and compiling (compiling Mozilla and Firefox used to take three hours on a 400MHz machine, and one hour on a 933MHz machine), or Photoshop, or making photomosaics (an 8000 picture photomosaic takes about 15 hours). Your complaint is like saying, "Reviewing DooM3 at 2048x1600 on the newest NVIDIA card is ludicrous because no one does that!"
  • phantom505 - Thursday, April 7, 2005 - link

    I didn't read every single post here, but the ones I did read make me wonder, particularly the ones moaning about who beats whom to the market.

    The problem I see is that Intel has no more room to move up in clock speed. AMD does. That means AMD will not be forced into making larger dies as fast as Intel *must*. (Where's that 4 GHz chip at?) How the heck is that not a serious advantage?

    I'm sure AMD will have to get better multi-tasking going, but compared to the CPUs I still work with, which are quite crude (P3s, older P4s, K7s), I don't see that as being absolutely critical to general system use.

    Bottom line is always price, and if Intel has twice the core size, take a guess what the price will be vs AMD's single core.
  • xsilver - Thursday, April 7, 2005 - link

    #111 I find that running ANY multiple programs on an amd64 will screw up the workload; e.g., browsing and watching a DVD, or ripping one disc while watching another. I wouldn't mind if it was something that took 100% of the CPU cycles so that none were left for other tasks, but browsing and watching a DVD at the same time? WTF?

    #da dvd
    I think your gripe is with the benchmarking that has taken place previously, not with what Anand is doing now; benchmarks at many places are done with fresh installs, no sound, tweaked settings, and nothing running in the background. What Anand is doing right now may be a bit extreme, e.g., running more things than usual, but I don't suppose you do a fresh format every time you run Doom3 either, eh? Get my point?
  • PrinceGaz - Wednesday, April 6, 2005 - link

    #104 is right. It's not dual-core CPUs or HT that make multi-tasking so much more efficient, it's the efficiency of the Windows scheduler, or lack thereof. Personally, if I set off stuff I want to run in the background for any length of time, I use Task Manager to reduce its priority to Below Normal so it doesn't slow down anything else I'm working on. I shouldn't have to do that, but there's a difference between foreground application priority and an automatic long-term low-priority background class (which Windows does not have). I don't care if it takes five minutes or five hours to finish encoding a movie, as it's a background task and I'm in no hurry for the results.

    Manually lowering the priority of intensive tasks makes a world of difference to the responsiveness of seemingly hung apps, such as opening DScaler (a TV deinterlacer/viewer) when some other application is using 100% CPU. I'm surprised Anand didn't just try using Task Manager to lower the priority of whatever was thrashing the CPU so that Outlook could start up quickly.
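    For what it's worth, the same trick can be done programmatically rather than through Task Manager. Here is a minimal Win32 sketch in plain C (the "heavy work" placeholder is ours) that drops the current process to Below Normal before starting a long encode:

    ```c
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* Drop this process to Below Normal so a long-running encode
           doesn't starve interactive applications of CPU time. */
        if (!SetPriorityClass(GetCurrentProcess(),
                              BELOW_NORMAL_PRIORITY_CLASS)) {
            fprintf(stderr, "SetPriorityClass failed: %lu\n",
                    GetLastError());
            return 1;
        }

        /* ... kick off the CPU-intensive background work here ... */
        puts("running at Below Normal priority");
        return 0;
    }
    ```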

    The advantages of a dual-core processor are very real, every bit as good as those of the dual-CPU workstations that, according to AT, no one has needed until today. I've long been a proponent of dual CPU machines; the mobos don't cost that much more than single CPU versions (though overclocking is rarely, if ever, an option on them). The benefits for many people far outweigh the disadvantages of slightly higher cost and slightly lower individual CPU speed. Of course, the reduced speed is something you also get with dual-core processors.

    It's good to know that, all of a sudden, to coincide with dual-core CPUs, everyone has just switched from running single applications in isolation to running multiple heavy-duty apps at once. It's rather fortunate people weren't multi-tasking like this in the past (ahem), otherwise the lack of dual processor desktop recommendations would have been a major oversight.
  • Son of a N00b - Wednesday, April 6, 2005 - link

    He did not include Xeons or Opterons for obvious reasons... one, this is a DESKTOP CPU, not a workstation one, and this article is comparing desktop CPUs, because no dual core workstation CPUs are out...

    Two, because you can simply look at an old review; why make more work?
  • Da DvD - Wednesday, April 6, 2005 - link

    I wanted to add something: how many DVDs do people possess, that ripping them is seen as a common task? Despite my nickname, I only own a few DVDs and sometimes hire one. Do I rip these? No, someone else already did and shared it via P2P. Which means I'll be downloading them, not ripping them.

    People are talking about price/performance, but they forget that the tasks where dual core is twice as fast only come up a few times per month. So the overall performance increase is what, 5%?

    So at the moment, unless you're in the business of constantly ripping DVDs while being addicted to games, what use is dual core for the average user?
  • Da DvD - Wednesday, April 6, 2005 - link

    "He wasn't adapting the workload to the product, he was adapting the workload to our requests."

    Exactly. Doesn't this imply the review will only be helpful for the few people who made those requests? I won't be ripping DVDs while playing a game, for the simple reason that my game disc is in the drive. I won't find myself in a situation where I'm packing files with WinRAR while playing a game, simply because it only takes a few minutes anyway. (Who the hell packs his whole hard drive?) And how often do you import Outlook databases while playing a game? When do you import those things anyway? After reinstalling Windows? I certainly don't play games at that stage. So perhaps this allows me to run intensive download programs while gaming? Yes, it does. But my current PC runs Azureus and eMule along with Doom3 just fine, thank you very much.
  • nweaver - Wednesday, April 6, 2005 - link

    You could try using Radview Webload to automate web page surfing for benchmarks. It has page load timers and several other cool features. We use it in house for web site load testing.
  • michael2k - Wednesday, April 6, 2005 - link

    #121: If you read Anand's blog, a bunch of us ASKED him to test this way. These tests DO reflect our 'typical' user workloads. He wasn't adapting the workload to the product, he was adapting the workload to our requests.

    Here's a reason why importing a PST file while opening Photoshop is a valid benchmark:

    If you're using, say, Premiere to create a movie, and you want to create a mask, you have Premiere in the background rendering the previews, transitions, and SFX (CPU+HD load) while you open Photoshop, import a frame from the movie, and create your mask.

    You then go back to Premiere, apply your mask, and continue editing.

    That is also why the DVDShrink test is so important: It's doing a background video encode while the foreground is doing other stuff, which nicely simulates a video workstation load.

    Not everyone is a gamer, and not everyone is a casual user. There are some people who make movies, compile code, develop software, and write games, and some of us do read AnandTech.
