
SNORT REPORT

Snort limitations
Richard Bejtlich
11.29.2007

Service provider takeaway: Value-added resellers (VARs) will learn the limitations and
capabilities of Snort, including the implications of running the network inspection and control
system in active and passive mode.

In the first Snort Report I mentioned a few things value-added resellers should keep in mind
when deploying Snort:

1. Snort is not a "badness-ometer."
2. Snort is not "lightweight."
3. Snort is not just a "packet grepper."

In this edition of the Snort Report, I expand beyond those ideas, preparing you to use Snort by
explaining how to think properly about its use. Instead of demonstrating technical capabilities,
we'll consider what you can do with a network inspection and control system like Snort.

For our purposes, we'll focus on Snort 2.x. At the moment it is unclear exactly what capabilities
Snort 3.0 will offer. While the comments in this article will broadly apply to Snort 3.0,
Sourcefire may provide new, unanticipated features.

A segment of the security community immediately equates anything related to "intrusion
detection" or (worse) "monitoring" with Snort. When I taught classes with the title "Network
Security Operations," students would show up asking, "Is this the Snort class?" I would mention
that we would learn how to perform network security monitoring, and students would respond
with "You mean Snort, right?" While this is a testimony to the importance and mindshare Snort
has achieved in the decade it's been available, Snort has limitations, and these misconceptions
produce users whose security worldview is exceptionally narrow.

I see similar issues with existing Snort users. It's not uncommon to see questions posted to the
#snort IRC channel or to the snort-users mailing list asking how Snort can be operated outside of
its designated use model -- for example, "How can I use Snort to monitor bandwidth usage and
alert on high traffic levels?" or "How do I make Snort log sessions/flows?" It's inspiring to see
such faith in Snort, but such questions indicate a certain amount of tool-fixation.

Positioning Snort as active or passive

Snort can operate in two modes: active and passive. Snort can be active either inline or offline. In
an active, inline mode, Snort acts as an intrusion prevention system (IPS). A Snort appliance
physically sits on the wire between other networking components, inspecting traffic as it passes
from one network interface card to the other. In this gatekeeper function Snort makes pass or
block decisions based on the traffic it sees and the configuration it runs.

In an active, offline mode, Snort acts as a quasi-IPS. It does not sit inline, but it sees traffic
passing nearby. Under certain conditions Snort will react to traffic it has been programmed to
consider malicious. Because the Snort appliance cannot physically deny traffic in this deployment
model, it must rely on trying to "knock down" traffic with RST segments for TCP traffic or ICMP
error messages for UDP traffic. This model sees Snort as a
network control device of last resort; truly inline devices (firewalls, usually) are expected to stop
undesirable traffic.
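
As an illustration only, the knock-down behavior is driven by rule options such as resp, which is
available when Snort is compiled with flexible response support (--enable-flexresp). The rule below
is a hypothetical sketch, not a stock VRT rule:

alert tcp any any -> $HOME_NET 23 (msg:"LOCAL telnet session reset by sensor";
resp:rst_all; classtype:misc-activity; sid:1000001; rev:1;)

With flexible response enabled, a matching Telnet session would be met with forged RST segments
rather than a silent alert.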

Snort can also run passively, meaning it takes no actions to interfere with traffic. Passive
operation can happen in inline or offline deployments as well. In passive, inline mode, Snort sits
physically on the wire and allows all traffic to pass. Inspection is done and alerts are generated,
but no blocking occurs. This mode is usually a prelude to adopting an active, inline mode.

I strongly recommend deploying a Snort appliance in tandem with an external bypass switch
when Snort is operating inline, either in passive or active mode. Net Optics Bypass Switches
have performed reliably for me over the years. If you need more information on bypass switches
versus other tap types, please see the free online copy of Chapter 4 from my book Extrusion
Detection.

Finally, Snort can operate in a passive, offline mode. While I cannot cite statistics, I submit this
deployment model is the most popular. Here Snort watches traffic provided by a network tap or
switch SPAN port. The tool generates alerts, which must be reviewed by a security analyst.
When configured properly a Snort sensor operating in this mode is essentially invisible to an
intruder.
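
For reference, a minimal sketch of this style of deployment (the configuration path, interface name
and log directory below are assumptions for your environment):

snort -c /etc/snort/snort.conf -i em0 -l /var/log/snort -D

Run this way, Snort inspects traffic copied from a tap or SPAN port, writes its output under the log
directory and never transmits a packet of its own.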

Now that we appreciate how Snort can be positioned on the network, what are we supposed to do
with it? First a decision must be made regarding Snort's active or passive role in the
environment. If you're comfortable deploying a system that has the ability to disrupt traffic --
malicious, suspicious or sometimes even normal -- then you can go active. If you prefer to let
other security devices make blocking and filtering decisions, then Snort should stay passive.
Many people who are considering Snort as an active defense tool deploy it passively inline, then
switch to actively inline once they gain confidence in Snort's operation.

Let's assume a passive, offline deployment. If you're reading this article, you're most likely just
getting started with Snort, so experimenting with an implementation that is least likely to impact
the environment is the best way to begin.

Snort operations

Now we must set proper expectations regarding Snort's operation because it has its limitations. I
prefer to view Snort as a means to acquire indications of network activity. I don't consider Snort
to be a definitive means to say exactly what is happening in an enterprise. Please note I am not
talking about so-called "false positives." As far as I am concerned, Snort did its job if I tell it to
look for "uid=0(root)" and the following from attack-responses.rules fires when I visit
http://www.testmyids.com/:

alert ip any any -> any any (msg:"ATTACK-RESPONSES id check returned root";
content:"uid=0|28|root|29|"; classtype:bad-unknown; sid:498; rev:6;)

If, however, Snort fires an alert saying it saw "uid=0(root)" but that string never appeared on the
wire, I consider that event to be a real false positive.

That alert is designed to catch something like the following in cleartext:

hacom:/root# id
uid=0(root) gid=0(wheel) groups=0(wheel), 5(operator)

Years ago (and sometimes today) it was popular to embed the Unix "id" command at the end of
shellcode. When a service was exploited, the shell provided to the attacker would finish
execution by showing the level of access granted by the exploit.
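
If you want to reproduce this indicator on a lab sensor, a minimal sketch (configuration path and
interface name are assumptions) is to run Snort with console alerting on the sensor and fetch the
test page from a monitored host:

snort -c /etc/snort/snort.conf -i em0 -A console
wget http://www.testmyids.com/

The page body contains the uid=0(root) string, so rule 498 should fire as soon as the response
crosses the monitored segment.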

Sometimes the data in a Snort alert (if Snort is configured to provide it) can be enough to
differentiate among normal, suspicious and malicious traffic. For example, the alert created by
rule 498 above contains a packet payload like the following:

HTTP/1.1 200 OK.
.Date: Tue, 20 N.
ov 2007 20:08:14.
GMT..Server: Ap.
ache/1.3.33 (Uni.
x)..Last-Modifie.
d: Mon, 15 Jan 2.
007 23:11:55 GMT.
..ETag: "9b30607.
-27-45ac0a3b"..A.
ccept-Ranges: by.
tes..Content-Len.
gth: 39..Keep-Al.
ive: timeout=2, .
max=200..Connect.
ion: Keep-Alive..
.Content-Type: t.
ext/html....uid=.
0(root) gid=0(ro.
ot) groups=0(roo.
t)..

This is clearly a Web response. Additional data can be helpful to see exactly what happened. The
following is a transcript generated from Sguil. The data was collected by a second instance of
Snort running in pure Libpcap packet logging mode. The content was built using Tcpflow. The
operating system fingerprinting was done by P0f.
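
A hedged sketch of that supporting setup: a second Snort instance logging full-content data in
binary (libpcap) format, with Tcpflow later rebuilding the streams from the capture file (the log
directory and capture file name below are placeholders):

snort -i em0 -b -l /nsm/hacom
tcpflow -r /nsm/hacom/snort.log.1195588800

Sguil automates this collection and transcript generation; the commands above simply show the
underlying idea.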

Sensor Name: hacom.
Timestamp: 2007-11-20 20:02:31.
Connection ID: .hacom_208895.
Src IP: Obscured (Obscured).
Dst IP: 82.165.50.118 (kundenserver.de).
Src Port: 46480.
Dst Port: 80.
OS Fingerprint: Obscured:46480 - Linux 2.6, seldom 2.4 (older, 4).
(up: 16 hrs) .
OS Fingerprint: -> 82.165.50.118:80 (distance 3, link: ethernet/modem).

SRC: GET / HTTP/1.1.
SRC: Host: www.testmyids.com.
SRC: User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.8).
Gecko/20060601 Firefox/2.0.0.8 (Ubuntu-edgy).
SRC: Accept: text/xml,application/xml,application/xhtml+xml,text/html;.
q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5.
SRC: Accept-Language: en-us,en;q=0.5.
SRC: Accept-Encoding: gzip,deflate.
SRC: Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7.
SRC: Keep-Alive: 300.
SRC: Connection: keep-alive.
SRC: Pragma: no-cache.
SRC: Cache-Control: no-cache.
SRC: .
SRC: .
DST: HTTP/1.1 200 OK.
DST: Date: Tue, 20 Nov 2007 20:08:14 GMT.
DST: Server: Apache/1.3.33 (Unix).
DST: Last-Modified: Mon, 15 Jan 2007 23:11:55 GMT.
DST: ETag: "9b30607-27-45ac0a3b".
DST: Accept-Ranges: bytes.
DST: Content-Length: 39.
DST: Keep-Alive: timeout=2, max=200.
DST: Connection: Keep-Alive.
DST: Content-Type: text/html.
DST: .
DST: uid=0(root) gid=0(root) groups=0(root).
DST:

A Web browser visiting www.testmyids.com generated the SRC traffic. The Web server at
www.testmyids.com provided the DST reply traffic. In this example one can see that the
indication of something potentially bad (ATTACK-RESPONSES id check returned root) can be
validated by looking at the packet data accompanying the Snort alert or, better yet, by reviewing a transcript. The
transcript shows the activity before the offending packet(s) as well as the offending traffic itself.
This very short example hints at the real power of Snort. I tend to see Snort as a pointer to
activities that require additional inquiry. A Snort alert should be the beginning of an
investigation, not the end. If a Snort alert appears and you say, "That's OK, I don't need to
analyze that," you should question why you have the alert firing in the first place.

Many times you'll find that Snort is an excellent source of indicators, but Snort is limited when
acting as a tool for investigations. I find that real analysis requires supporting data, often
involving activity outside the scope of the Snort alert. This is an area where supporting tools
(open source or commercial) can make a big impact.

In future Snort Reports we will consider how to integrate other tools with Snort. We will also
return to reviewing new features in Snort 2.8.0.

SNORT REPORT

How to test Snort


Richard Bejtlich
08.03.2007

"How do I test Snort?" is one of the most popular questions asked on the snort-users mailing list.
While it's a seemingly simple question, the answer depends on your intent. Value-added resellers
(VARs) and systems integrators (SIs) may need to provide customers with validation that the
network intrusion detection system (IDS) is working as expected. This edition of Snort Report
explains what it means to test Snort. I reveal some common misperceptions and offer alternatives
to satisfy the majority of readers.

Snort test options

"Testing Snort" requires recognizing the sort of data you expect from running a test. The
following are all legitimate reasons why you might test Snort.

1. "I want to know if Snort is working." This is the most common reason users post test
questions to Snort mailing lists, and an important one for VARs and SIs who should always
validate that Snort is working properly for customers. If you're unfamiliar with Snort or your
customer installed the open source IDS using a binary package or following a guide, you'll want
to know if the procedure resulted in Snort being capable of detecting suspicious or malicious
activity.

2. "I want to know if Snort will drop packets." This is the second most common reason for
testing Snort. VARs and SIs should understand the conditions that might cause Snort to not keep
track of all the network activity it's inspecting. Discovering indications that Snort is dropping an
unacceptable number of packets should trigger an evaluation of Snort's configuration and the
hardware specifications of the platform on which it runs. Also, you should know the conditions
where Snort performance will begin to degrade in order to properly size equipment and
processes.

3. "I want to know how a rule I wrote affects Snort's performance." This is a rare reason for
testing Snort. VARs and SIs who write custom rules for clients need to know how their new rules
will affect overall Snort performance. Depending on how they're written, the rules could have no
impact, some impact or a devastating impact on Snort's ability to detect activity.

4. "I want to know how to evade Snort." This is another rare question, and security researchers
are most likely to ask it. However, VARs and SIs should be sure they understand how intruders
will try to negate the value provided by an IDS or IPS running Snort. Such a test demonstrates
how Snort performs when a malicious user deliberately tries to evade detection -- a topic worthy
of its own article.

This edition of Snort Report discusses the first three reasons for testing Snort. I'll cover how to
evade Snort in a future Snort Report.

Stateless rule parsing tools

When the topic of testing Snort is raised on a mailing list, someone usually recommends one or
more of the following tools:

Snot (not available)
Sneeze
Stick
Mucus

These tools are all stateless. They parse Snort rule sets and generate packets, which, to some
degree, emulate traffic seen in those rules. In other words, a rule that inspects a UDP packet to
port 161 containing the pattern "public" prompts a stateless tool to create a UDP packet to port
161 containing the pattern "public".
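
To make that concrete, consider a hypothetical rule of the kind such tools parse (the sid and
addresses below are illustrative, not stock VRT content):

alert udp any any -> any 161 (msg:"LOCAL SNMP request with public community";
content:"public"; sid:1000002; rev:1;)

A stateless tester then emits something roughly equivalent to:

echo -n "public" | nc -u 10.1.13.4 161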

On the surface this may seem like a sound tactic. For stateless protocols like ICMP and UDP
traffic, this approach may work.
However, tools that parse Snort rules to generate packets for each rule suffer two big problems.
First, for stateful protocols like TCP, this approach is almost worthless. Stateless tools don't
establish a full TCP connection in order to conduct their tests. The tool examines the Snort rule
set, creates a TCP segment and fires it. Snort's stateful inspection capabilities, first introduced in
2001, have rendered TCP-based stateless tests largely irrelevant.

The second problem with stateless tools is their inability to understand newer Snort rules. Sneeze
was written in 2001 for Snort 1.8. Stick was also written in 2001. Source code for Mucus dates
from 2004 but was tested against Snort 1.8.3. The Bleeding Threats project hosts an updated
version of Mucus maintained by James Gregory at Sensory Networks as part of his CoreMark
Tools. This newer version of Mucus dates from 2005 but supports rules from Snort 2.3.

The primary way to "test" Snort using a stateless tool is to disable the Stream4 preprocessor,
which requires editing the snort.conf file. This artificially disables a key component of Snort
that's designed to handle these very sorts of stateless attacks.
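
In a stock Snort 2.x snort.conf the Stream4 lines look roughly like the following (exact option
lists vary by version); commenting them out is what "disabling Stream4" means in practice, and it
should only ever be done on a throwaway test sensor:

# preprocessor stream4: disable_evasion_alerts
# preprocessor stream4_reassemble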

Stateless packet generation tools

A related stateless approach for triggering Snort alerts is to generate traffic that should trigger
Snort rules, but doesn't rely on parsing Snort rule sets. IDSWakeup is a stateless packet
generation tool. The following shows how IDSWakeup performs against Snort 2.6.1.5. I used the
Debian package net/idswakeup on Ubuntu Linux against a FreeBSD sensor running Snort 2.6.1.5
and Sguil 0.6.1.

IDSWakeup generates single packets that reflect traffic that might trigger Snort or other intrusion
detection systems. Like other stateless tools, IDSWakeup forges packets without establishing full
sessions as needed by TCP. In the following example we tell IDSWakeup to send traffic from
192.168.2.8 to 10.1.13.4, one packet per attack, with a time to live of 10. Note that we can
specify any source IP because no full session is expected for TCP tests.
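
The invocation implied above looks roughly like the following; the argument order (source,
destination, packets per attack, TTL) is an assumption based on IDSWakeup's usage banner, so
confirm it against your copy of the tool:

idswakeup 192.168.2.8 10.1.13.4 1 10
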
IDSWakeup generated 181 packets of which 134 were TCP, 22 were UDP, 24 were ICMP and
one was malformed IP (i.e., "IPv5").

Here's what some of that traffic looks like when viewed with Tshark.
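
A simple way to watch the test traffic is to run Tshark on the sensor interface (the interface name
is an assumption; -n disables name resolution):

tshark -n -i em0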

Notice that some of the TCP traffic includes the warning TCP CHECKSUM INCORRECT.
Unless Snort is told to ignore incorrect TCP checksums via the -k switch, it will not alert on
these sorts of packets.

-k Checksum mode (all,noip,notcp,noudp,noicmp,none)

The reason is the target should discard the traffic, so Snort assumes the traffic with the bad
checksum has no effect on the target.
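
If you did want Snort to inspect those bad-checksum packets during a test, a hedged example
(configuration path and interface are assumptions) would be:

snort -c /etc/snort/snort.conf -i em0 -A console -k none

Disabling checksum verification only makes sense on a test sensor; on production traffic it wastes
cycles on packets the end host would never accept.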

Snort's interpretation of IDSWakeup

When logging alerts in FAST mode, Snort records details like the following while inspecting
traffic generated by IDSWakeup.
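
The actual alert list from this run isn't reproduced here. Purely as an illustration of the
fast-alert layout, a single line (reusing rule 498 from earlier in this article, with a made-up
timestamp and addresses) looks like:

11/20-20:08:14.000000 [**] [1:498:6] ATTACK-RESPONSES id check returned root [**]
[Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 82.165.50.118:80 -> 192.168.2.8:46480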

This assortment of alerts contains a variety of traffic types. In the full output, the majority of
alerts address ICMP and UDP traffic. This makes sense, because those stateless protocols don't
depend on setting up a full session and therefore are not affected by Snort's Stream4
preprocessor.

Watching Snort drop traffic

Snort offers a feature that reports on its packet drops. When Snort shuts down, it creates output
like the following:

Snort dropped zero traffic, and it created 26 alerts. Given the number of "tests" IDSWakeup ran,
you can guess that the vast majority of the traffic wasn't suitable for testing Snort.

Another way to check for Snort dropping traffic (at least on FreeBSD) is to use Bpfstat. Bpfstat
can profile packet dropping for any process that relies on Berkeley Packet Filter for sniffing
traffic. For example, we know that Snort is running as process 39183 watching interface em0.
We tell Bpfstat to report statistics every 10 seconds as it watches that process and interface.

When Bpfstat starts, we see Snort has dropped 130 packets.

This matches output seen when we stop this instance of Snort:

Snort received 1628 packets
Analyzed: 1495(91.830%)
Dropped: 130(7.985%)
Outstanding: 3(0.184%)

These drops happened before we ran another IDSWakeup test. During the test, the drop column
never increased beyond 130. This indicates that Snort didn't drop any traffic while we were
running our IDSWakeup test.

Snort rule performance

Sourcefire devotes millions of dollars' worth of high-end testing equipment to ensuring that new
Vulnerability Research Team (VRT) rules work efficiently within Snort. I personally saw this
equipment when I visited Sourcefire in 2005.

One option for checking the performance hit caused by rules is offered by the Turbo Snort Rules
project hosted by Vigilant Minds.

Visitors to the site can submit a rule to see how it compares with rules in the 2.3.x and 2.4.x rule
sets. For example, this test evaluates the performance of the following rule:

alert tcp any any -> any 25 (content:"|00|"; sid:12345678; rev:1; classtype:misc-attack;)

This rule looks for binary content 0x00 in any TCP segment to port 25. Turbo Snort Rules
reports this rule is slightly slower than the average rule in the 2.3.3 and 2.4.0 Snort rule sets.

Turbo Snort Rules is a great idea, but the site does not appear to have been updated since 2005.
It's functional, but more modern rule sets (2.6.x, 2.7.x) haven't been benchmarked.

Final thoughts

As pointed out in the 2005 article by JP Vossen, Using IDS rules to test Snort, the easiest way to
ensure Snort is actually seeing any traffic is to create a simple rule and see if Snort generates an
alert. If you wish to run a tool like IDSWakeup, it will indeed generate some alerts. A simple
Nmap scan will most likely generate some alerts as well. Setting up a target system and running
an actual malicious attack, such as exploitation via Metasploit, is a means to test Snort via
server-side attack. More elaborate client-side attacks can also be devised to test Snort's ability to
detect that attack pattern.

The bottom line is to figure out the goal of your test, and then devise the simplest way to
accomplish that goal. It's always best to begin by running Snort with a very basic rule, explained
in the first Snort Report (Intrusion Detection Mode). If you can't get Snort to fire on the most
basic activity, then a serious problem exists.
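
As a concrete starting point, a minimal sketch of such a basic rule and a trigger for it (the sid
and target host are assumptions; adjust for your network):

alert icmp any any -> any any (msg:"LOCAL ICMP test rule"; sid:1000003; rev:1;)
ping -c 1 10.1.13.4

If a single ping across the monitored segment doesn't produce an alert with a rule this broad,
investigate the interface selection and snort.conf before attempting anything more elaborate.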

About the author


Richard Bejtlich is founder of TaoSecurity, author of several books on network security
monitoring, including Extrusion Detection: Security Monitoring for Internal Intrusions, and
operator of the TaoSecurity blog (taosecurity.blogspot.com).
