A Fully Virtualized Platform

Rattner also talked about fully virtualizing the platform, showing off a demo in which two virtual machines both had access to Intel's integrated graphics core. It would be interesting to see this sort of technology implemented by ATI and NVIDIA in their GPUs.
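To make the idea concrete, here's a minimal, purely illustrative sketch of what a virtualization layer does when it shares one graphics core between guests: each VM submits work as though it owned the device, while a hypervisor arbitrates access to the single physical core. The SharedGPU and Hypervisor names (and everything else in the sketch) are our own hypothetical stand-ins, not Intel's interfaces.

```python
import queue

# Illustrative sketch only: a toy "hypervisor" that time-slices one physical
# GPU between two guest VMs, so each guest behaves as if it owned the device.
# All class and method names here are hypothetical, not Intel's.

class SharedGPU:
    """Stands in for the single physical integrated graphics core."""
    def execute(self, vm_name, cmd):
        print(f"[GPU] running {cmd!r} on behalf of {vm_name}")

class Hypervisor:
    def __init__(self, gpu):
        self.gpu = gpu
        self.submissions = queue.Queue()

    def submit(self, vm_name, cmd):
        # Guests never touch the hardware directly; their graphics calls
        # trap into the virtualization layer and are queued here.
        self.submissions.put((vm_name, cmd))

    def run(self):
        # Arbitrate in arrival order, so both guests make forward
        # progress on the one real device.
        while not self.submissions.empty():
            vm_name, cmd = self.submissions.get()
            self.gpu.execute(vm_name, cmd)

hv = Hypervisor(SharedGPU())
hv.submit("VM-A", "draw_frame(1)")
hv.submit("VM-B", "draw_frame(1)")
hv.submit("VM-A", "draw_frame(2)")
hv.run()
```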

A Parallel Programming Model

A major demand of extremely parallel (thread-level) architectures is the need for parallel compilers and thread management. Rattner also talked about one method of properly implementing software to take advantage of multi-core CPUs.

A software run-time management layer handles load balancing among the many cores of a 2015-era CPU. Individual cores can be instructed to power down based on the application's requirements, which will definitely be necessary when you're dealing with tens or hundreds of cores on a chip.

Intel demoed an example of such a management layer on a multi-core network processor, showing certain cores going to sleep when they weren't in use.
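As a rough sketch of the concept, and only that (this is not Intel's actual run-time layer), the code below load-balances incoming tasks across a pool of cores, waking a sleeping core only when no awake core is free, and sweeping idle cores back into a powered-down state, much like the network processor demo. The core count, power model, and per-core task queues are all assumptions made for illustration.

```python
# A minimal sketch, assuming a simple power model and a task queue per core;
# our illustration of the idea, not Intel's actual run-time layer.

class Core:
    def __init__(self, core_id):
        self.core_id = core_id
        self.queue = []        # tasks currently assigned to this core
        self.powered = True

    def load(self):
        return len(self.queue)

class RuntimeManager:
    """Software layer that balances load and powers idle cores down."""
    def __init__(self, num_cores):
        self.cores = [Core(i) for i in range(num_cores)]

    def dispatch(self, task):
        awake = [c for c in self.cores if c.powered]
        idle_awake = [c for c in awake if c.load() == 0]
        if idle_awake:
            target = idle_awake[0]        # a free core is already awake
        else:
            sleeping = [c for c in self.cores if not c.powered]
            if sleeping:
                target = sleeping[0]
                target.powered = True     # wake a core rather than overload one
            else:
                # everything is awake and busy: least-loaded core wins
                target = min(awake, key=lambda c: c.load())
        target.queue.append(task)

    def idle_sweep(self):
        # Power down any core with nothing to do, the behavior Intel
        # showed with cores on the network processor going to sleep.
        for core in self.cores:
            if core.powered and core.load() == 0:
                core.powered = False

mgr = RuntimeManager(num_cores=16)
mgr.idle_sweep()                     # no work queued yet: all 16 cores sleep
for n in range(4):
    mgr.dispatch(f"packet-{n}")      # only four cores wake to take the work
print(sum(c.powered for c in mgr.cores), "of", len(mgr.cores), "cores awake")
```

In a real management layer the same decisions would presumably be driven by hardware utilization counters rather than queue lengths, but the control loop is the same basic idea.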

Final Words

All in all, Rattner's keynote was a bit long-winded, but it was an excellent first attempt. The vision he presented of 2015 was quite realistic, and we've got a lot more to talk about at the architectural level in our next article...

15 Comments

  • Verdant - Saturday, March 5, 2005 - link

    sigh - no compiler is going to magically make software work in parallel.


    not everything is "parallel-able" (my new word for the day)

    some tasks must be serial-processed, it is the nature of computing.

    my main point is that I hope we can see individual "cores" keep increasing their speed...


    did he say anything about what the highlighted "photonics" box on the slide was about?
  • mkruer - Friday, March 4, 2005 - link

    As if Intel can predict 10 yrs into the future. They're having trouble predicting one year in advance. I seriously doubt that Intel's massive parallelism will be the solution to all their CPU issues. Looking somewhat ahead, I see the parallelism trend dying out at around 8 pipelines, for the simple reason that most "standard" programs (non-game or scientific apps) would never use more than eight. Look at RISC: most RISC architectures have 10 threads, and it's been that way for the last 10 yrs or more. You can only go so wide before the width becomes detrimental to the processing of the instruction.
  • Locut0s - Thursday, March 3, 2005 - link

    #12 Oops, should have read above posts. Yeah, that makes more sense then.
  • xsilver - Thursday, March 3, 2005 - link

    the super resolution demo requires video, people;
    it interpolates 60-90 frames into 1 frame like the guy above said....

    and #8 ... I think they mean 1000x because the size of the image used in the demo is very small... so if you wanted to use it on, say, a face then you would need WAY more computing power.... e.g. the stuff on CSI is so bunk....
  • Locut0s - Thursday, March 3, 2005 - link

    Am I the only one that thinks that the "Super Resolution" Demo shown there is just a little too good to be true?
  • xsilver - Thursday, March 3, 2005 - link

    "nanoscale thermal pumps"
    sounds like some tool you need to get botox done :)
  • sphinx - Thursday, March 3, 2005 - link

    All I can say is, we'll see.
  • DCstewieG - Thursday, March 3, 2005 - link

    60 seconds to do 3 seconds of footage. That would seem to me to mean it needs 20x the power to do it in real time. What's this about 1000x?
  • clarkey01 - Thursday, March 3, 2005 - link

    Intel said in early '03 that they would be at 10 GHz (Nehalem) in 2005.

    So don't hold your breath on their dual-core predictions.
  • Phlargo - Thursday, March 3, 2005 - link

    Didn't Intel originally say that they could scale the P4 architecture to 10 GHz?
