Testing Diagram


ZEEK TESTING

"zeek-level" (throughput, TCP layer)
├── live traffic: "normal" / "abnormal"
└── tcpreplay**: "normal" / "abnormal"

"dissector-level" (fuzzing*)
└── static pcaps: "normal" / "abnormal"


* Invalidating packet integrity at the TCP layer (e.g. bad checksum, wrong port) sometimes triggers an "exit condition"
where Zeek gives up parsing the packet before it even passes the connection object or control flow to the dissector (i.e. the
DNP3 dissector may never get to see the packet because Zeek abandoned the effort too early). We need to determine which
circumstances trigger such exit conditions.
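
One way to map those circumstances might be a Scapy sketch along these lines (the file names are illustrative): take a
known-good DNP3 capture and emit one pcap per TCP-layer corruption, then run Zeek over each variant and note when the
dissector is never invoked.

    from scapy.all import rdpcap, wrpcap, TCP

    packets = rdpcap("dnp3_good.pcap")  # hypothetical known-good capture

    def emit_variant(name, mutate):
        out = []
        for p in packets:
            q = p.copy()
            if q.haslayer(TCP):
                mutate(q[TCP])
            out.append(q)
        wrpcap(f"dnp3_tcp_{name}.pcap", out)

    def bad_checksum(tcp):
        tcp.chksum = 0xDEAD  # a pinned value is written as-is, not recomputed

    def wrong_port(tcp):
        tcp.dport = 12345    # off 20000, so port-based dispatch may not fire
        del tcp.chksum       # let Scapy recompute, so only the port is wrong

    emit_variant("bad_checksum", bad_checksum)
    emit_variant("wrong_port", wrong_port)

Note that Zeek should be run over these without -C, since -C disables exactly the checksum validation the bad-checksum
variant is meant to exercise.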

** The purpose here is to run the same underlying traffic/pcaps through both static and live capture and check for any
differences in the output logs.
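
For the log comparison itself, a rough sketch could normalize both runs' logs and diff them (the directory paths are
placeholders; ts and uid are the volatile columns in Zeek's standard TSV logs):

    VOLATILE = {"ts", "uid"}

    def normalized_rows(path):
        fields, rows = [], []
        with open(path) as f:
            for line in f:
                line = line.rstrip("\n")
                if line.startswith("#fields"):
                    fields = line.split("\t")[1:]
                elif line and not line.startswith("#"):
                    vals = line.split("\t")
                    rows.append(tuple(v for k, v in zip(fields, vals)
                                      if k not in VOLATILE))
        return rows

    static = set(normalized_rows("static_run/dnp3.log"))
    live = set(normalized_rows("live_run/dnp3.log"))
    for row in sorted(static ^ live):
        print("only in one run:", row)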

For each set of "normal" tests, generate only good/legitimate traffic: (1) write a script that creates packets iterating
through all request function codes, response function codes, error codes, data object types (for DNP3), etc. (a sketch of
this follows below), and (2) let the dissector run over a sample of legitimate traffic in the wild (e.g. sniff for XX minutes
in our lab, or upload/replay a sample pcap that we know contains only legitimate traffic).
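
A minimal sketch of idea (1), assuming Scapy (addresses, ports, and file names are illustrative). Mainline Scapy ships no
DNP3 layer, so the framing below is hand-rolled, and it iterates only over request function codes that need no object
headers; codes such as READ (0x01) would additionally need object headers appended. A fuller generator would also emit
the TCP handshake so Zeek treats the stream as established.

    import struct
    from scapy.all import Ether, IP, TCP, Raw, wrpcap

    def dnp3_crc(data: bytes) -> int:
        # CRC-16/DNP: reflected polynomial 0xA6BC (0x3D65), final XOR 0xFFFF.
        crc = 0
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0xA6BC if crc & 1 else crc >> 1
        return crc ^ 0xFFFF

    def dnp3_request(app_fc: int, dst_addr=1, src_addr=2) -> bytes:
        user = bytes([0xC0, 0xC0, app_fc])  # transport (FIR|FIN) + app ctrl + FC
        # Length counts ctrl + addresses + user data; start/CRC octets excluded.
        hdr = struct.pack("<BBBBHH", 0x05, 0x64, 5 + len(user), 0xC4,
                          dst_addr, src_addr)
        return (hdr + struct.pack("<H", dnp3_crc(hdr))
                + user + struct.pack("<H", dnp3_crc(user)))

    # Object-less request codes: cold restart, warm restart, delay measurement.
    pkts = [Ether() / IP(src="10.0.0.1", dst="10.0.0.2")
            / TCP(sport=49152, dport=20000) / Raw(dnp3_request(fc))
            for fc in (0x0D, 0x0E, 0x17)]
    wrpcap("dnp3_normal_requests.pcap", pkts)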

For each set of "abnormal" tests, generate malformed and potentially malicious traffic. Attacks in real life occur over the
course of (a) a single packet, (b) several packets on stateful protocols, or (c) months of "living off the land," where no
individual communication fragment is abnormal in itself versus the usual activity on that network. We expect our dissectors
to detect/raise exceptions on 100% of (a), most of (b), and none of (c). To test (a), we generate fuzzed packets in a
systematic way: control which fields/parameters are changed in each packet produced and keep all other parameters
unchanged, so that we know exactly which fields throw off the dissector. Note that we fuzz only at the protocol layer
(ADU/PDU) to isolate dissector behavior from possible Zeek "exit condition" behavior. To test (b) on stateful protocols such
as DNP3, we need to craft a set of packets where the anomaly unfolds over a sequence of 2-5 packets (e.g. a response that
doesn't match its request, as in a MITM attack). The 3rd-party samples of attack-scenario pcaps may be appropriate for
testing (b), but we need to figure out ahead of time what the specific weird artifacts/patterns within each attack sample are.
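
For (a), the one-field-at-a-time fuzzer could be sketched as follows (it reuses the hypothetical dnp3_crc helper from the
"normal" sketch above; case names are illustrative). Each case perturbs exactly one ADU field of the same baseline
cold-restart request and lands in its own pcap, so any dissector complaint can be attributed to that one field. CRCs are
recomputed to stay valid, except in the deliberate bad-CRC case.

    import struct
    from scapy.all import Ether, IP, TCP, Raw, wrpcap

    def rebuild(user, length=None, dst_addr=1, src_addr=2):
        # Re-frame mutated user data with valid CRCs so that only the
        # deliberately perturbed field is anomalous.
        hdr = struct.pack("<BBBBHH", 0x05, 0x64,
                          (5 + len(user)) if length is None else length,
                          0xC4, dst_addr, src_addr)
        return (hdr + struct.pack("<H", dnp3_crc(hdr))
                + user + struct.pack("<H", dnp3_crc(user)))

    GOOD_USER = bytes([0xC0, 0xC0, 0x0D])  # transport + app ctrl + cold restart

    CASES = {
        "reserved_app_fc": rebuild(bytes([0xC0, 0xC0, 0x70])),   # undefined FC
        "transport_no_fir_fin": rebuild(bytes([0x00, 0xC0, 0x0D])),
        # Header claims more user data than the frame actually carries:
        "length_too_long": rebuild(GOOD_USER, length=5 + len(GOOD_USER) + 4),
        # Flip the user-data CRC so it is guaranteed wrong:
        "bad_data_crc": rebuild(GOOD_USER)[:-2]
                        + struct.pack("<H", dnp3_crc(GOOD_USER) ^ 0xFFFF),
    }

    for name, frame in CASES.items():
        pkt = (Ether() / IP(src="10.0.0.1", dst="10.0.0.2")
               / TCP(sport=49152, dport=20000) / Raw(frame))
        wrpcap(f"dnp3_fuzz_{name}.pcap", [pkt])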

For all "abnormal" traffic tests, we should predict ahead of time which weird.log warnings we expect from each
condition/iteration, and then verify whether our dissector actually returned those results.
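
A sketch of that predict-then-verify loop (the expectations table and pcap names are placeholders, and the weird names
depend on what our dissector actually reports; zeek -C -r and weird.log's name column are standard Zeek):

    import pathlib
    import subprocess
    import tempfile

    EXPECTED = {
        # fuzz case -> weird.log names we predict (placeholder values)
        "dnp3_fuzz_reserved_app_fc": {"dnp3_unknown_function_code"},
        "dnp3_fuzz_bad_data_crc": {"dnp3_corrupt_data_block_crc"},
    }

    def weird_names(pcap: str) -> set:
        # Run Zeek in a scratch directory and collect weird.log's name column.
        with tempfile.TemporaryDirectory() as workdir:
            subprocess.run(["zeek", "-C", "-r", str(pathlib.Path(pcap).resolve())],
                           cwd=workdir, check=True)
            log = pathlib.Path(workdir) / "weird.log"
            if not log.exists():
                return set()
            fields, names = [], set()
            for line in log.read_text().splitlines():
                if line.startswith("#fields"):
                    fields = line.split("\t")[1:]
                elif line and not line.startswith("#"):
                    names.add(dict(zip(fields, line.split("\t")))["name"])
            return names

    for case, expected in EXPECTED.items():
        got = weird_names(f"{case}.pcap")
        status = "OK" if expected <= got else "MISSING"
        print(f"{case}: {status} expected={sorted(expected)} got={sorted(got)}")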
