Our first day of IDF coverage has come to a close, and it has been quite eventful. From the barrage of multicore information to the 64-bit evangelism, down to the tech showcase, we've seen quite a bit from Intel.

It is usually a pleasure to listen to Pat Gelsinger deliver a keynote, and today was no different. The Intel VP took us through new technologies from Intel that will be introduced over the next year or two. Included in the long list of topics covered are network acceleration, advanced management features, virtualization, and, of course, multicore. We got to see a 32-way Itanium machine recognize faces in a split second, as well as another demo of Intel's virtualization technology -- this time focused on security.

On the floor of the tech showcase, we saw quite a few interesting booths. There is everything from fuel cell and battery research to a new server-oriented chip from ATI (which we will talk more about later in the week). Also appearing is the NVIDIA nForce 4 SLI Intel Edition, bringing to light the fruits of the NVIDIA/Intel cross-licensing agreement announced in November. We'll be talking more about these and other exciting exhibits over the course of IDF.

We hope you enjoy the conclusion to our coverage of IDF Spring 2005 Day 1. It's time for us to collapse into dreamland and prepare for another big day tomorrow.

Intel I/O Acceleration Technology: Improving Network Performance

When looking at network servers, a huge percentage of the overall work being done is protocol overhead processing. In order to get more of the work that users are actually interested in done, the industry has pushed forth network hardware that handles some of the overhead of TCP/IP processing. Network cards that include a TCP Offload Engine (TOE) are able to help reduce overhead processing done on the server's CPUs.
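To see why small packets are the worst case, consider a rough back-of-the-envelope sketch. The header sizes below are standard Ethernet/IPv4/TCP figures, not numbers from Intel, and the calculation is only illustrative: the fixed per-packet header cost (and the per-packet CPU work that goes with it) eats a far larger share of a small packet than of a full-sized one.

```python
# Fixed per-packet header sizes (bytes), assuming no IP/TCP options.
ETH_HDR = 14   # Ethernet II header
IP_HDR = 20    # IPv4 header
TCP_HDR = 20   # TCP header

def header_overhead(payload_bytes):
    """Fraction of the wire bytes for one packet that is protocol headers."""
    headers = ETH_HDR + IP_HDR + TCP_HDR
    return headers / (headers + payload_bytes)

for payload in (64, 512, 1460):
    print(f"{payload:5d} B payload -> {header_overhead(payload):.1%} header overhead")
```

For a 64-byte payload, nearly half the bytes on the wire are headers, while a full 1460-byte payload spends only a few percent on them; per-packet processing costs on the host CPU scale the same way, which is why offload helps most with many small packets.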

Rather than simply rely on a NIC with a TOE, Intel has found that improving the rest of the platform is actually a better way of handling network overhead (especially when looking at many small packets).

We didn't get an in-depth technical look at IOAT in Pat's keynote today, but we do have a little general information. The slide we are looking at lists "CPU Improvements" and "OS Support/IA Tuned Software" as architectural enhancements behind IOAT. Making faster and more parallel processors will definitely help without needing any IOAT-specific optimizations, and OS support of hardware features is usually a prerequisite for their use. Of course, there could be more to these aspects of the technology, but we'll have to wait and see. The real meat likely comes in the chipset data accelerations, Data Movement Engines, and Edge Device Accelerations. We could envision enhancements to the NIC and chipset that allow for further removed overhead processing, as well as optimized or prioritized movement of data through the system. We'll bring out more details as we get them.

IAMT, VT, and why should I want Virtualization?

19 Comments


  • sprockkets - Thursday, March 03, 2005 - link

    FBDIMMs: the way we get around having each processor have its own memory controller but still have a lot of slots.

    Of course the right time for you to enter 64-bit for the desktop is now, 'cause you managed to kludge it onto your processors. Oh, BFD.

    Sorry Patty, but the AMD/Linux crowd didn't need to wait on the Wintel crowd for 64 bit.
    Reply
  • Viditor - Thursday, March 03, 2005 - link

    mickyb - "How is this any different than multi CPU SMP? It isn't, except for compressing them to a smaller space"

    I agree, at least for the Intel model. The AMD dual core is quite different however, in that the 2 cores are connected locally, and the MOESI protocol AMD uses allows for easy cache snooping.
    Reply
  • mickyb - Wednesday, March 02, 2005 - link

    #16 My statement was in context of the performance issue, not cost. The graphs imply that multi-core is the savior, when we have been experiencing what multi-cpu will do. Any graph that implies "exponential" growth for multi-core vs. single core is just a lie. I have been creating tools that analyze system performance for a while. It is far from the truth. It is still SMP. Reply
  • JarredWalton - Wednesday, March 02, 2005 - link

    #9: "How is this any different than multi CPU SMP? It isn't, except for compressing them to a smaller space. SMP has its problems as well and the number of CPUs does not create an exponential graph like Intel is implying."

    Actually, there is one major difference: one socket is sufficient. Designing motherboards with two CPU sockets increases costs a lot, and so the SMP market in the desktop space is extremely small. There are so many applications that *could* use multiple threads that don't, mostly because programmers would end up spending tons of effort in improving performance on a small percentage of systems.

    Just like the move to 64-bits will have more benefits in the future rather than in the short term, multi-core is looking to the future rather than the present. Software will have to be coded properly, but once that is done, multi-core will start to give us a lot of improvements in performance. Now that programmers have an incentive to support threading (probably almost all CPUs sold by late 2007 are going to be SMP), they will spend the time.
    Reply
  • Viditor - Wednesday, March 02, 2005 - link

    Cygni - "Im also wondering whether Intel will allow the Nforce4 to use the 1066fsb?"

    Good question...
    I would ASSUME that the answer to this is yes..."In for a penny, in for a pound" as they say. Considering all the trouble Intel has been having with 3rd party developers lately, I would assume that they will support Nvidia completely (if not, why support them at all?).
    Reply
  • Cygni - Wednesday, March 02, 2005 - link

    I thought the Pressler info was pretty shocking too, Viditor... especially considering it's slated to replace the single-die PDs. Wouldn't two physical cores communicating over the FSB be far slower? Really strange.

    Im also wondering whether Intel will allow the Nforce4 to use the 1066fsb?
    Reply
  • johnsonx - Wednesday, March 02, 2005 - link

    We all said it, and I think recent events and comments at IDF prove it beyond a reasonable doubt: Intel and Microsoft were in cahoots on XP 64-bit all along. XP x64 wasn't ever going to be released before Intel had 64-bit capable P4's and even Celerons ready to go.
    Reply
  • Viditor - Wednesday, March 02, 2005 - link

    I hadn't realised how very different Intel's dual-core is to AMD's until now...
    Just this one line:
    "The two cores in Pressler are totally independent, meaning that they must communicate with each other over the external front side bus and not over any internal bus"

    That will be a HUGE latency problem when compared to AMD's dual-core! It takes 1 clock for AMD to communicate from one core to another because of Direct Connect (the on-die "switcher" between cores), I would guess that Intel will require at least 3-5 clocks (based on it taking 6-8 clocks for their SMP)...
    Reply
  • skiboysteve - Wednesday, March 02, 2005 - link

    Even if Intel did implement x86-64 at the "right time" at the launch of Win64, how is this the right time? Anyone who wants to use Win64, which Intel now hails as the future, will have to buy a new chip, while A64 users already have everything they need.

    I don't even have an Athlon 64, but it's just silly to say you are at the forefront of some new technology (64-bit) when everyone has to buy a new CPU to use it. AMD had it right by equipping users before the "right time" so they were prepared.


    Anyway. Yeah, FBDIMMs look tight, and so does Rambus; the DDRx shit is so lame. By the time they get DDR2 up and running, they find 100 ways to do it better and no one wants DDR2 anymore. It's stupid.

    and BTX is a joke
    Reply
  • sphinx - Wednesday, March 02, 2005 - link

    I agree with mickyb. I am more interested in the FBDIMM than NF4 for Intel and dual cores. Reply
