17 Comments

  • Andyvan - Saturday, November 20, 2004 - link

    I have three quibbles about the article:

    1) "As the program starts flipping through the code in its array (which is actually called a stack), it reaches the function gets()."

    I think you've mixed a couple of concepts together here. The code is *not* in the stack. They're two different sections of memory.

    I would reword this as:

    "As the program starts executing the code of the main routine, it reaches the function gets()."

    2) "And since line exists inside a stack in memory, writing past the end of line actually starts to overwrite critical pieces of the program!"

    This may be more of a nit: I would say that it will overwrite data critical to the program. The code proper is not in the stack, and so you can't actually overwrite the *code* when you go off the end of 'line'.

    I'll understand if you don't accept this quibble.

    3) "When a program begins to load an array, a special byte after the array (called a return address) tells the computer to go back to the code segment and continue to run the program."

    I don't believe the return address is actually a byte, as that would only support an address with an 8-bit range. Isn't it a four byte value?
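
    The layout behind quibbles 2) and 3) can be sketched in a few lines of C. This is a toy simulation inside a struct, not the real call stack; 'fake_frame' and the addresses are invented for illustration:

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Toy stand-in for part of a stack frame: the saved return address
     * sits in memory just past the local buffer. */
    struct fake_frame {
        char line[8];      /* local buffer, like 'line' in fingerd        */
        unsigned long ret; /* saved return address: pointer-sized (4 bytes
                              on 32-bit x86), not a single byte           */
    };

    int main(void) {
        struct fake_frame f;
        f.ret = 0x08048000UL; /* pretend this is a legitimate code address */

        /* An unchecked copy, like gets(), writes as many bytes as the
         * attacker supplies. Filling past the 8-byte buffer lands on
         * 'ret'. (Done as one memcpy over the whole object to keep the
         * demo well-defined.) */
        unsigned char input[sizeof f];
        memset(input, 'A', sizeof f.line);
        memset(input + sizeof f.line, 'B', sizeof f - sizeof f.line);
        memcpy(&f, input, sizeof f);

        printf("ret_overwritten=%d\n", f.ret != 0x08048000UL);
        printf("ret_is_multibyte=%d\n", (int)(sizeof f.ret >= 4));
        return 0;
    }
    ```

    Only the data adjacent to the buffer is clobbered; the code itself lives in a different section, which is the point of quibble 2).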

    -- Andyvan
  • sluramod - Wednesday, November 03, 2004 - link

    Maybe this is off-topic, but...

    What are the good reasons for abandoning segmentation and segment protection on x86? Sorry if this is a stupid question...

    Alexei.
  • emboss - Sunday, October 24, 2004 - link

    If you're going to be picky on terminology, at least get it right :) Segmented memory hasn't been used since Windows 3. The correct term to use is sections ("All memory is divided up into several SECTIONS").

    Also, the 286 most definitely did not support an NX bit for pages, primarily because the 286 didn't do paging. What was introduced with the 286 (and still remains on modern CPUs) is segment protection (executable, writable, readable, plus a few less easily explained attributes). However, since segmentation has been abandoned (with good reason) and the flat address space model has taken over, this "protection" is of no use.

    Finally, a system with NX ideally shouldn't be any more stable than a system without it. User-space programs are isolated from the kernel (and from other user-space programs), so any damage they do is limited to themselves. In that case, terminating the program immediately when something gets corrupted just speeds up the process of ending it :)

    And if a kernel-mode component gets killed because it violated NX protection, then you're gonna get a BSOD or equivalent, so again, you're not much better off.

    The one thing it MAY help with is stopping corrupted data from getting written to disk. But overall I don't think it would have much of an effect from this point of view either.
  • Bitpower - Sunday, October 24, 2004 - link

    What I was trying to say was that a computer system with NX on it would be more 'stable' than a computer system without it, since another important side effect of NX is that, besides protecting you against viruses, it will also protect you against certain program crashes where there is a buffer overrun.
  • Pax Team - Sunday, October 24, 2004 - link

    software bugs vs. exploit techniques:

    in a reply above you suggested that instead of relying on 200,000 programmers to fix their bugs we should rely on NX and similar hardware features. i.e., it appeared to me as if fixing bugs and intrusion prevention techniques were mutually exclusive (wouldn't be the first time i've misunderstood someone ;-).

    as for my comment: software bugs (or let's just talk about memory corruption bugs) come in many flavours, such as various forms of buffer overflows (stack or heap based, linear or non-linear, single or multiple), integer handling bugs (that can create buffer overflows in turn), user-supplied format strings, etc. these bugs can be fixed (for good) by reading the code, finding them and modifying the code properly. this is what needs many eyes (and lots of time and expertise) and in practice it's never been 100% efficient, hence the need for intrusion prevention technologies that try to prevent or mitigate the effects of bugs.
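
    The user-supplied format string class mentioned above fits in a few lines. This is only a minimal sketch; the 'user' input is invented, and the buggy call is left as a comment because actually running it is undefined behaviour:

    ```c
    #include <stdio.h>

    int main(void) {
        /* Attacker-chosen input containing format specifiers. */
        const char *user = "%x.%x.%n";
        char out[64];

        /* BUG (shown only as a comment): snprintf(out, sizeof out, user);
         * The specifiers in 'user' would be interpreted, leaking memory
         * via %x or even writing to it via %n. */

        /* Fix: pass the input as an argument, never as the format. */
        snprintf(out, sizeof out, "%s", user);
        printf("%s\n", out); /* prints the literal text, specifiers inert */
        return 0;
    }
    ```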

    orthogonal to bugs there are exploits that make use of the bugs by exploiting the level of access they (inadvertently) give to the attacked program. memory corruption bugs can give at most arbitrary read-write access to the target program's memory (e.g., a user-supplied format string bug comes very close to giving this level of access while a 'traditional' strcpy() based stack overflow provides only limited write access).

    an exploit technique describes the way a given bug is made use of. note that this implies that a given bug can be exploited by different techniques and a given technique can make use of different kinds of bugs - hence my saying that these categories (bugs vs. exploit techniques) are orthogonal (and one can and should attack this two-dimensional problem space from either dimension).

    what exploit techniques can we speak of? this normally depends on how your defense mechanisms (intrusion prevention system) work. in PaX we assume arbitrary read-write bugs (i.e., the most powerful category, which every memory corruption bug falls within), and this allows us to split the exploit techniques into 3 main categories only; you can read about them at http://pax.grsecurity.net/docs/pax.txt .

    PS: your guess about Intel/AMD beginning to work on NX in mid-2003 is not quite correct, amd64 (the architecture) has always had NX, at least since 2000 i guess. linux itself didn't make use of it until late 2002 though: http://www.x86-64.org/lists/discuss/msg03016.html .

    PS2: contact email address is on the PaX homepage.
  • Bitpower - Thursday, October 21, 2004 - link

    Made a couple of typos.

    EDITED VERSION OF MY LAST COMMENT (read this one instead):

    Just curious, but why doesn't the author of the above article address the other very important advantage of NX, which has nothing to do with viruses? I would think that one of the most important advantages of NX is that it would make your computer a lot more stable. And this advantage is totally separate and has nothing to do with virus protection.

    Have you ever had an application or game that you use cause your computer to totally crash and lock up, and the only way out of it was to reboot? From the description of NX above, it would also protect you against certain crashes. So it would also act like a "crash guard", preventing poorly written programs, or programs that have bugs, from accidentally overwriting their own code with random junk and causing your machine to totally crash.

    I think its ability to act like a crash guard is as important, if not more important, than its ability to protect against viruses. So while virus protection is a very important ability of NX, I would also think NX's ability to act like a "crash guard" for your system is just as important.

    Am I the only person who thought of this second advantage of NX, as a crash guard? Myself, I don't care about the virus protection since I constantly scan my computer. But I would be very interested in the ability to use NX for crash guarding and helping to increase the stability of my machine against programs that are buggy.
  • KristopherKubicki - Thursday, October 21, 2004 - link

    Pax Team:

    Correct about W^X and ExecShield; the same goes for NX too.

    "Prescott supports the NX - for 'no execute' -- feature that blocks worms and viruses from executing code after creating a buffer overflow on the machine", said Paul Otellini, Intel's COO.

    The scope of the article was to show that NX doesn't do what even Intel's COO believes it does.

    I am a little confused about your second paragraph though, can you please elaborate? Thanks for the feedback. Please email me when you get a chance.

    Kristopher
  • Pax Team - Sunday, October 17, 2004 - link

    OpenBSD's W^X is only a subset of what PaX has implemented for 4 years now, but thanks for the praise anyway ;-). As for ExecShield, it doesn't implement W^X, data and bss sections in libraries remain both writable and executable. And both OpenBSD and ExecShield are vulnerable to executing existing code that can trivially circumvent W^X separation and execute injected shellcode.

    Also, you're mixing up software bugs (that your 200,000+ programmers can fix) with exploit techniques (some of which properly used hardware features can prevent). The two sets are orthogonal, not mutually exclusive.

    As for MIPS, as far as i know, none of them has true hardware NX bit support, although on some models it might be possible to simulate the behaviour, but it's lots of hacking and is mostly an academic exercise only, not useful for production.
  • KristopherKubicki - Saturday, October 16, 2004 - link

    WarcraftIII: Although I agree with your comments on programmer error, let's look at the last 20 years of programming as an example. If we have the ability to stop buffer overflows, do we rely on 4 processor makers or on 200,000+ C programmers to fix the problem, when either could fix the errors with equal amounts of work?

    NX is OK; I am just illustrating that it doesn't do what it says. It might have stopped some older worms, but when the next worm hits that moves the EIP somewhere it shouldn't be and "buffer overflows" all those NX-protected machines, don't you think people are going to be upset?

    Most RISC and MIPS processors have utilized some form of NX protection since their inception, by the way.

    Finally, I would like to add that OpenBSD's W^X protection (writable xor executable; no execute on any writable segment) is probably the most elegant solution yet to the buffer overflow issue. It is still vulnerable to maliciously modified return pointers, but it doesn't advertise that it isn't.
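
    On a POSIX system the W^X discipline can be sketched with mmap()/mprotect(): a page is writable while being filled, then flipped to read+execute before anything in it may run. This is only an illustration of the policy, not OpenBSD's implementation, and the flip can fail under stricter security modules:

    ```c
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        long pagesize = sysconf(_SC_PAGESIZE);

        /* Step 1: map a page writable but NOT executable, and fill it
         * (e.g. a JIT writing generated code). */
        unsigned char *page = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) { perror("mmap"); return 1; }
        page[0] = 0xC3; /* an x86 'ret' opcode, just as sample content */

        /* Step 2: before anything in the page may run, flip it to
         * read+execute and drop write permission. At no point is the
         * page both writable and executable: W xor X. */
        int rc = mprotect(page, pagesize, PROT_READ | PROT_EXEC);
        printf("wx_flip_ok=%d\n", rc == 0);

        munmap(page, pagesize);
        return 0;
    }
    ```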

    Kristopher

  • bobbozzo - Saturday, October 16, 2004 - link

    Do the 32-bit versions of Windows Server 2003 support this on AMD Opterons?

    Thanks!
  • WarcraftIII - Friday, October 15, 2004 - link

    "Well, maybe not. In fact it seems that NX provides several layers of false security, particularly since it only stops some buffer overflows and whether or not it stops any viruses has yet to be seen yet."
    Why start off by questioning whether NX will even be able to stop any exploits? You only have to look back at which viruses relied on executable stack memory to see how useful this can be, rather than wait around to see if anyone ever comes up with one.
    Of course it doesn't stop all buffer overflow exploits. An exploit could be as simple as entering machine code into an input that will be placed on the heap; at a later point in the program, a stack-based buffer overflow could overwrite a return address or a function pointer variable with the appropriate address. You could conceivably avoid the stack altogether, where a buffer on the heap overflows onto a function pointer variable also stored on the heap.

    That doesn't change the fact that NX can stop a lot of exploits, and probably the simplest and most successful ones, because the positioning of items on the stack is a lot more predictable and you are a lot more likely to guess a close-enough return address (you can guess an offset from other addresses obtained off the stack).
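
    The heap scenario described above can be sketched as a toy simulation. 'handler', 'greet', and 'pwned' are invented names; the overwrite is done with one memcpy over the whole object to keep the demo well-defined; and 'pwned' stands in for existing code at an address the attacker guessed, so no injected code ever runs and NX alone would not stop it:

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void greet(void) { puts("called=greet"); }
    static void pwned(void) { puts("called=pwned"); }

    /* Heap object with a buffer sitting right before a function pointer. */
    struct handler {
        char name[8];
        void (*fn)(void);
    };

    int main(void) {
        struct handler *h = malloc(sizeof *h);
        if (!h) return 1;
        h->fn = greet;

        /* Simulated attacker input: the first 8 bytes fill 'name', the
         * next pointer-sized chunk lands on 'fn'. */
        unsigned char input[sizeof *h];
        memset(input, 'A', sizeof h->name);
        void (*target)(void) = pwned;
        memcpy(input + sizeof h->name, &target, sizeof target);
        memcpy(h, input, sizeof *h);

        h->fn(); /* control flow hijacked: runs pwned(), not greet() */
        free(h);
        return 0;
    }
    ```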

    Why does this article try to present buffer overflow exploits as being due to inherent problems with x86? The usual calling conventions on pretty much any architecture involve storing return addresses on the stack which makes exactly the type of exploit the NX bit stops a possibility. Where one is provided, the contents of the return address register need to be saved to memory if you're doing calls more than one level deep.

    "in reality NX behaves more like an emergency patch for an easily exploitable architecture"
    Is x86 really any more exploitable than anything else? The memory model you described is not x86-specific.
    Why do so many other architectures support an NX bit?

    "both started working with Microsoft to patch the x86 architecture itself. The inherent flaw is two fold; for one, malicious code should not write past the end of an array like line from fingerd, and second the computer should not continue to happily execute the machine code on the stack."
    The computer is doing exactly what it has been asked, and it will do exactly the same thing whether you're using an x86 or not. This code SHOULD write past the end of the array if the user enters a lot of characters, because that is what the programmer has written. This is due to the design of C, not x86. If this is not supposed to happen (as it obviously isn't in the example), it is solely due to programmer error; it is not the fault of x86. If you want bounds checking, use Java; if you want speed, write decent C/C++ code, whether you're using x86 or not.
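
    The "decent C" fix alluded to here is simply a bounded call in place of gets(). A minimal sketch with an invented input string:

    ```c
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char line[8];
        const char *attacker = "AAAAAAAAAAAAAAAAAAAAAAAA"; /* 24 chars */

        /* gets(line) would copy until the newline, however long the
         * input is. A bounded call writes at most sizeof line bytes,
         * NUL terminator included, so 'line' cannot overflow no matter
         * what is typed. (fgets(line, sizeof line, stdin) gives the
         * same guarantee when reading from stdin.) */
        snprintf(line, sizeof line, "%s", attacker);

        printf("copied_len=%zu\n", strlen(line)); /* truncated to 7 */
        return 0;
    }
    ```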

    I'm sorry if I sound overly critical, and I guess I may be wrong in some of my assertions, since I haven't done much assembly programming for a fair while. However, it seems to me that the point of the whole article was to criticise the NX bit as a relatively useless hack (or "emergency patch"), useful more for marketing than for security purposes, and I believe this to be incorrect.
  • Foxbat121 - Friday, October 15, 2004 - link

    The NX bit is nothing new. It existed in the Intel 80286 processor and nobody liked it back then, so Intel took it out.

    On the other hand, the NX protection in XP SP2 is limited to stack memory only. In the WinXP 64-bit extension for x86-64, Microsoft extends protection to all memory segments that are marked NX, not just stack memory.
  • KristopherKubicki - Friday, October 15, 2004 - link

    Fixed.

    Kristopher
  • reever - Thursday, October 14, 2004 - link

    Typo: All memory is divided up into several SEGEMENTS;
  • Pythias - Wednesday, October 13, 2004 - link

    No kidding! Our surfing/email habits have as much to do with our security as any piece of software in the world.
  • TrogdorJW - Tuesday, October 12, 2004 - link

    Of course, this is to say nothing of the difficulty of stopping dumb people from launching any number of Trojan horse type applications. "Hey, pictures of [insert celebrity name]! I gotta see this!" We need an NM bit: No Morons. ;)
