Network Manageability Engine

In the other interesting demo, Intel showed off their ability to protect a network environment from the spread of viruses. Outbreaks on the order of the Slammer virus and the Witty worm caused quite a few headaches around the world and were able to spread in only a matter of minutes.

One of the largest problems for future virus protection comes in the form of worms designed to defeat firewalls and circumvent antivirus software. In these cases, even modestly protected computers can become contributing factors in the spread of an outbreak. To combat this issue, Intel has proposed a platform-level solution that can dynamically control the functionality of the network hardware based on network activity.

The hardware, called the manageability engine, monitors the number of connections opened per second and will shut down the NIC if software attempts to open connections faster than a set threshold allows.
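
A minimal sketch of how such a rate-based circuit breaker might behave follows; the threshold and all names here are hypothetical, since Intel has not published the actual heuristics:

    import time

    # Hypothetical cutoff; Intel has not disclosed the real threshold.
    MAX_CONNECTIONS_PER_SECOND = 20

    class CircuitBreaker:
        """Counts outbound connection attempts in one-second windows and
        disables the NIC when the rate exceeds the threshold, mimicking
        the behavior described for the manageability engine."""

        def __init__(self, threshold: int = MAX_CONNECTIONS_PER_SECOND):
            self.threshold = threshold
            self.window_start = time.monotonic()
            self.count = 0
            self.nic_enabled = True

        def on_connection_attempt(self) -> bool:
            now = time.monotonic()
            if now - self.window_start >= 1.0:
                # A new one-second window begins; reset the counter.
                self.window_start = now
                self.count = 0
            self.count += 1
            if self.count > self.threshold:
                # Worm-like burst of connections: cut the host off.
                self.nic_enabled = False
            return self.nic_enabled

    # A worm opening hundreds of connections trips the breaker almost
    # immediately; normal traffic stays well under the threshold.
    breaker = CircuitBreaker()
    for _ in range(100):
        if not breaker.on_connection_attempt():
            print("NIC disabled: connection rate exceeded threshold")
            break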

The heuristics Intel employs to detect virus-like network activity seem to be very accurate and effective. Justin Rattner stated that in tests covering 8000 worms and various other applications, the heuristics caught every single one. The worms studied included all known worms as well as custom worms developed by Intel to test the hardware.

On top of this, Intel's user-aware platform goals adopt the "do no harm" aspect of Asimov's Laws of Robotics. Doing no harm is not so much a bad goal to have as it is an almost ominous and frightening foreshadowing of things going wrong. Nevertheless, Intel says it is committed to doing no harm while implementing features designed to protect the user.

In all of their tests, Intel has not found a single false positive. It isn't hard to imagine that legitimate software could look like a virus to the hardware, but so far Intel has not discovered a case of this happening. If Intel ever brings this technology to market, they had better make very certain that false positives never happen. The obvious safeguard against false positives, a way to turn the feature off, could cripple its effectiveness against viruses: if a worm were designed to exploit the ability to disable this hardware, the manageability engine would have the same fatal flaw as existing technology.

This is definitely a feature we want to keep our eyes on. If it works flawlessly it will be an incredible boon in the fight against the spread of computer viruses. But if it has even one problem -- if it fails to "do no harm" -- the ramifications could cause many more headaches than the technology saves.

Comments

  • LoneWolf15 - Friday, August 26, 2005 - link

    I think the Network Manageability Engine is a great concept -- but only if it can be updated when necessary through some sort of PROM setup or other option. As resilient as it seems today, someone will eventually find a way around it. At that point, if the hardware can't be reprogrammed with updates to meet the threat, it will be useless and instantly obsolete.
  • Regs - Friday, August 26, 2005 - link

    I had the same general reaction as all of you guys. What's the use of all this crap? However I guess it's just something Intel wants to show where the future might lead. I just hope AMD are the ones holding the baton.
  • mkruer - Thursday, August 25, 2005 - link

    Is it my imagination, or is this year's IDF very lackluster (lacking brightness, luster, or vitality; dull)? It seems to me that none of the hardware they are offering remotely interests me. About the only thing I saw that was interesting was the virtualization technology, but even that will take years to come into the mainstream. This for me will go down as one of the more mundane conferences. So far I have seen a lot of hype and a bunch of pretty slides.
  • mikecel79 - Thursday, August 25, 2005 - link

    Running demos of Intel's next generation hardware is not impressive? Lots of 65nm chips running isn't impressive?
  • Kensei - Friday, August 26, 2005 - link

    I agree. I don't know what's not to like here. This is a peek at the future, not a peek at next week. Multi-core, hyperthreading, etc. have huge implications for software engineering and computer science in general regarding how and when to best "divide and bring back together" various computational processes. It adds great complexity to the software engineering field at a time when it already has difficulty writing code that isn't buggy and/or easily exploited. I think this will make architecting (is that a word?) software, before writing the code, an even more important step. Unfortunately, software architecture is something not often taught either on-the-job or in universities.

    What I find interesting about this whole "diamond" thing is why Intel is interested in this sort of stuff at all. It seems much more suited to the type of research being done by MS or at universities. I may be missing something, but what's hardware architecture got to do with identifying people in pictures? Is Intel planning on entering the software development world also?
  • PrinceGaz - Tuesday, August 30, 2005 - link

    Yeah, Diamond does seem a totally software-related project, and unless Intel code it to only run on their processors (which they probably would), it would work just as well on AMD chips.

    I suppose Intel have made a few video codecs in the past which were quite well used; maybe they are planning on doing something like that again, but restrict their use to Intel chips this time?
  • mkruer - Friday, August 26, 2005 - link

    Intel's next generation is the problem. After the Prescott debacle, Intel made a knee-jerk reaction, and it shows even in some of the benchmarks for Yonah and Sossaman (what a joke).

    My predictions for 2006/2007

    1. Intel will go core ballistic with their “Performance per watt” and run into the opposite mistake of “extreme gigahertz”, i.e., they will have lots of slower cores, but they can't be used effectively by common x86 applications.
    2. VT will be rushed out the door, and be quickly replaced by VT2

    I have been hearing that Intel will be no threat to AMD in 2006, and after reading all the tech sites, it looks like that is 100% correct. Intel, for all their “new technology”, is still playing catch-up.

    Like I said, the only two really interesting things are VT and the new “dynamic” L2 cache that is going to be used for power savings. That's about it.

    BTW, to date I think Transmeta's Crusoe chip holds the record for “Performance per watt”, running the equivalent of a 500MHz CPU at 1 watt.
  • CSMR - Friday, August 26, 2005 - link

    You have to take into account that power is something like voltage^2 * speed, and voltage can be reduced if speed is reduced, meaning that performance/watt is not a good metric; something between performance/watt and performance^3/watt would be better.
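
    A worked sketch of the scaling argument above, using the textbook dynamic-power approximation for CMOS (the exponents are rough idealizations, not measured figures):

        % Dynamic power: C is the switched capacitance, V the supply
        % voltage, and f the clock frequency:
        %   P = C V^2 f
        % If voltage scales roughly linearly with frequency (V ~ f), then
        %   P ~ f^3
        % so halving the clock cuts power by roughly 8x while only halving
        % performance. That is why plain performance/watt flatters slow,
        % wide designs, and performance^3/watt is closer to neutral under
        % voltage scaling.
        \[
        P = C V^{2} f, \qquad V \propto f \;\Rightarrow\; P \propto f^{3}
        \]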
  • ElFenix - Friday, August 26, 2005 - link

    performance per watt is very important for a lot of customers. anywhere that has a huge number of cores running will love this, especially as energy prices go up. datacenters are already paying attention to this; their AC bills alone would make most places choke. many customers need nothing more than a 1 ghz p3 to run their email, word, and powerpoint. to get these people to upgrade will take a serious focus on TCO, and power management is going to be a huge part of that.
  • mkruer - Friday, August 26, 2005 - link

    But therein lies the problem. With this massively multi-core approach, it will show huge “Performance per watt”, but only in massively parallel environments. The good news is that most server applications will be able to use multiple threads; the bad news is that having multiple threads does not mean there is a 1-to-1 increase per core. In real life, the maximum number of cores that would be utilized to their full extent is around 4, because there are too many processes that require the outcome of the original process.

    What is required is a balance, and so far from what I have seen, Intel is not gunning for balance, but for yet another PR stunt.
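
    A quick Amdahl's-law sketch of the diminishing-returns argument above (the 70% parallel fraction is an illustrative assumption, not a measured figure):

        # Amdahl's law: with a fraction p of the work parallelizable,
        # the speedup on n cores is 1 / ((1 - p) + p / n).
        def speedup(p: float, n: int) -> float:
            return 1.0 / ((1.0 - p) + p / n)

        # Assume 70% of a typical desktop workload parallelizes; the
        # rest waits on the results of earlier processes.
        p = 0.70
        for n in (1, 2, 4, 8, 16):
            print(f"{n:2d} cores: {speedup(p, n):.2f}x")

        # Speedup reaches about 2.1x at 4 cores but can never exceed the
        # 1 / (1 - p) = 3.33x ceiling; cores past the first few add very
        # little for workloads like this.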
