Nocona and Prescott: Intel adopts x86-64

There was an incredible amount of speculation that Intel would make a formal announcement about their use of AMD's 64 bit extensions to the x86 ISA at IDF this week. Intel has indicated for a long while now that they would adopt 64 bit for the desktop "when it makes sense." Of course, the point at which it supposedly makes sense has always been pushed much further down the road than the current year, and that tradition continues. Even though Intel has announced that its Nocona (90nm Xeon) processor will have 64 bit x86 extensions enabled, they are targeting this squarely at the workstation/server market and have still not made the decision to move x86-64 to the desktop.

In practice, x86-64 could be released in a P4 form anytime Intel wants (though it may end up being later rather than sooner), since the Nocona processor is based on a Prescott core with its 64 bit hardware enabled in a Xeon package (and with Xeon sized caches and features). We are looking into the method by which Intel has disabled the 64 bit extensions in current versions of Prescott, but we don't have conclusive data yet. We suspect, though, that the extensions are disabled much the same way that clock speeds are locked (so that neither enthusiasts nor remarkers can add value not included straight from Intel).

As far as what is actually going on architecturally, we still need to do a little digging, but we do know a few things for a fact: Intel's implementation of the x86-64 extensions will be completely compatible with AMD's, and the extensions are present in the current version of Prescott in a disabled state (Intel is still determining an appropriate time to release a 64 bit enabled P4).
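
Since Intel's implementation is to be fully compatible with AMD's, software should be able to detect the extensions with the same check that already works on AMD64 processors: CPUID extended leaf 0x80000001, where EDX bit 29 (the "long mode" flag) indicates 64 bit support. Below is a minimal sketch using GCC's <cpuid.h> helper; on a Prescott with the extensions disabled, the bit would presumably simply read as clear.

```c
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang wrapper around the CPUID instruction */

int main(void)
{
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

    /* Extended leaf 0x80000001: EDX bit 29 is the "long mode" (LM) flag,
       i.e. the 64 bit x86 extensions, reported the same way by AMD and Intel. */
    if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
        printf("Extended CPUID leaf 0x80000001 not available\n");
        return 0;
    }

    printf("64 bit x86 extensions: %s\n",
           (edx & (1u << 29)) ? "reported" : "not reported");
    return 0;
}
```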

We still have questions about Intel's ALU design and how it supports the new extensions, as well as whether or not Nocona will have a larger trace cache than Prescott. Needless to say, there are still plenty of things we don't know yet.

Up until last year (with the release of the Athlon64), Intel had four options for more accessible 64 bit computing: bring the Itanium's EPIC (Explicitly Parallel Instruction Computing) based IA-64 to the desktop, develop a desktop 64 bit ISA based around fast emulation of x86, create their own 64 bit extensions to the x86 architecture, or adopt AMD's extensions to x86 for their future processors. Let's take a look at these options to try to understand how we got here today.
Comments

  • Mrburns2007 - Wednesday, February 18, 2004 - link

    XDR has 6.4 GB/s per chip, not per module.
  • Ecmaster76 - Wednesday, February 18, 2004 - link

    "ultra emulated x86 with 8-way-hyperthreading and a +5 Dynamic Compiler of Doom"

    Sweet! Where can I get one? Is it compatible with my DRAM skin armor?
    Someone's been playing too much Baldur's Gate, and not just me.
    (Think of all the processors Intel could sell with marketing like that.)
  • DerekWilson - Wednesday, February 18, 2004 - link

    Actually, PCI-X is completely different from PCI Express ... PCI-X is a parallel architecture that's wider and faster than the 32 bit, 33MHz PCI bus ... PCI Express is a specification for a point to point serial bus protocol (and multiple serial data streams can be sent to the same peripheral, thus the x16 PCI Express graphics card).

    And when I was talking about ATI's "next gen" chip I wasn't talking about their current PCI Express solution, RV380. I was talking about some unspecified demo that I'm going to assume was R420 or R423... I just didn't want to mention a card since ATI wouldn't tell me which card it was that was powering the box.

    I think I fixed all the typos, sorry about that ... I've been working by jumping between hotspots and hand coding HTML rather than using the Dreamweaver-over-broadband setup that I'm used to ;-)
  • Lonyo - Wednesday, February 18, 2004 - link

    Intel was pretty much always going to use compatible 64bit extensions.
    They have to work with the OS, since MS is pretty much dictating that.

    AMD set up the initial spec (I would assume), and Intel didn't have much choice but to follow.

    ATi and nVidia have to stick to the PCI-Express spec to make their next gen graphics cards, and that was designed by Intel; it's a similar thing.

    AMD obviously did well to get there first though and set the standards.
  • Malladine - Wednesday, February 18, 2004 - link

    KillaKilla's older brother: PC3200 bandwidth is 3.2GB/s :)
    http://www.kingston.com/newtech/ddrbandwidth.asp
  • KillaKilla - Wednesday, February 18, 2004 - link

    Whoops, forgot a few things, guess I "jumped the gun[post]"...
    1st, this isn't KillaKilla, he's my brother; I don't have my own nick yet, sorry...
    What did you mean by "2x to 3x performance gains" for native PCI-X (PCI Express is PCIX, right? I've seen it as PCI-E, but that was from before?) Also, what are these "HD streams" (2nd to last paragraph, 2nd page) you talk about?

    3rd page:

    "The upcoming XDR chips were on display up at the RAMBUS both across from a demo of Toshiba chips running at very high speeds (the bandwidth of XDR is 6.4GB/s)." Isn't DC-DDR 3200/400's bandwidth 6.4GB/s?

    4th page:

    I'm not surprised that Intel cross-licensed x86-64... it was only logical seeing MS-XP64. Kudos to AMD for making a better 64-bit solution (extension set).

    -KillaKilla's older brother
  • KillaKilla - Wednesday, February 18, 2004 - link

    Once again, first post.

    Anyway, there are a few typos.
    the Borad in the title?
    The open tags on 2nd page
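
For reference, the memory bandwidth figures traded in the comments above work out as follows. This is a back-of-the-envelope sketch; the 16 bit device width and 3.2GHz data rate for XDR are assumptions based on the 6.4 GB/s per chip figure quoted above.

```c
#include <stdio.h>

int main(void)
{
    /* DDR400 (PC3200): 64 bit channel * 400 MT/s = 3.2 GB/s per channel,
       so 6.4 GB/s with two channels (dual channel DDR400). */
    double ddr400_single = (64.0 / 8.0) * 400e6 / 1e9;
    double ddr400_dual   = 2.0 * ddr400_single;

    /* XDR: assumed 16 bit wide device at a 3.2GHz effective data rate
       = 6.4 GB/s per chip, before any chips are ganged on a module. */
    double xdr_per_chip  = (16.0 / 8.0) * 3.2e9 / 1e9;

    printf("DDR400, one channel:  %.1f GB/s\n", ddr400_single);
    printf("DDR400, dual channel: %.1f GB/s\n", ddr400_dual);
    printf("XDR, per chip:        %.1f GB/s\n", xdr_per_chip);
    return 0;
}
```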
