Professor Messer Sec+ Domain 4


Professor Messer’s CompTIA SY0-601 Security+ Training Course

How to Pass Your SY0-601 CompTIA Security+ Exam


Are you planning to take your SY0-601 CompTIA Security+ exam? In this video, you’ll learn how to
prepare and which resources will give you the best chance for success.

Reconnaissance Tools – Part 1 – SY0-601 CompTIA Security+ : 4.1


There are many reconnaissance tools that may be built into your operating system. In this video, you'll learn about traceroute, nslookup, ping, pathping, netstat, and more.

The traceroute command allows you to map an entire path between two devices to know exactly what
routers may be between point A and point B. This uses the tracert command if you’re in Windows, and if
you're running Linux, Unix, or Mac OS, you'll use traceroute. The information displayed by traceroute is received from routers on the network via ICMP Time to Live Exceeded error messages.

You’ll send packets out to the network. Those packets will cause the routers to create an error message,
and send that error message back to you. And the traceroute command uses those error messages in
order to build that route.

The parameter that we're going to manipulate to cause these error messages to occur is the TTL, or Time To Live. This is a value within the IP packet that designates how many hops or routers a particular packet is allowed to go through before it is dropped by a router. This is commonly
used to prevent loops on the network, but it’s also very useful when you’re using the traceroute
command.

As you’ll see when we run this command, it is very common for these routers to send back these ICMP
Time Exceeded messages. But there are some firewalls or routers that will filter out, or not respond, to
these types of messages. So there may be gaps when you’re performing these traceroutes where some
of these devices are filtering out those messages.

The traceroute command in one operating system will probably be slightly different than a traceroute
command used in a different operating system. They may use different protocols and different methods
to be able to transmit to the network. For example, Windows uses ICMP echo requests, which is exactly
the same information that is sent during a ping command.

So throughout the path, as we are building this route, we’re going to receive ICMP Time Exceeded
messages. And when we finally make it to the far side, we will ultimately receive an ICMP echo reply.
Unfortunately, ICMP as a protocol, is one that is commonly filtered. So you may find that running
traceroute in Windows does not work as well as running traceroute in Linux or Mac OS.

To get around this problem, you can modify what type of protocol you’re sending out and there are
options within Linux, Unix, Mac OS, and other operating systems to let you modify what information
you’re sending to the network.

To give you an idea of just how much this varies between operating systems, in Windows we're using ICMP, but in Linux or Mac OS, we're sending UDP datagrams over port 33434. We can, of course, make changes to
this and modify what we’re sending, but you can see there are big differences depending on what
operating system you’re using.
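As a rough sketch, the equivalent commands look something like this (flags vary by traceroute implementation, so check your local documentation):

    tracert 9.9.9.9           (Windows, using ICMP echo requests)
    traceroute 9.9.9.9        (Linux, Unix, or Mac OS, using UDP datagrams by default)
    traceroute -I 9.9.9.9     (many Linux and Mac OS versions: use ICMP echo instead of UDP)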

Let’s step through the process of really what’s happening behind the scenes when we perform a
traceroute command. We’re going to run a traceroute through this very busy network. We have Sam,
who is on one side of the network, and Sam is going to run a traceroute to Jack’s machine, on the other
side of the network.

You can see there are a number of switches and routers between Sam and Jack. Sam is going to start by
sending this message to Jack with the Time To Live equal to 1. When that hits the first router, the Time
To Live will be decreased to 0. And a router is designed to drop any packets that have a TTL equal to 0.

At that point, it will also send a message back to the original station, telling it that its message didn’t get
through because the Time To Live was exceeded. When Sam's computer receives the TTL exceeded message, it notes that it took two milliseconds to receive that, and that it was received from router one's IP address, which is 10.10.10.1.

Now Sam sends exactly the same message to Jack's IP address, but changes
the TTL to 2. The first router receives the message with TTL equal to 2. Since it’s going through the
router, it decreases the TTL by 1, continues sending it on its way, and when that message is received by
router three, the TTL is set to 0, which causes a TTL exceeded message to be sent back to the original
workstation. And Sam makes a note again, that took one millisecond, and it was received from
172.16.1.2, which is the IP address of router three.

This process occurs again, with Sam now setting the TTL equal to 3. It makes its way through the
network from router one to router three. This time it makes it all the way to router four before the TTL
is exceeded. And that message is sent back to Sam’s workstation. This is one millisecond later, and
172.16.3.1 is the router that reported that message.

Sam now sends another message to the network with TTL equal to 4. That means it will make it through
the first router, it will move to the second router, finally to router four, TTL is still equal to 1, and it
makes it all the way to Jack’s workstation.

Jack’s workstation will also decrease the TTL by 1, which means the TTL will be exceeded, and that
message will be sent all the way back to Sam. But this time, the IP address that was received was the
original destination IP address, and our traceroute is now done.

Let’s watch this happen now in real time as we run the traceroute command. I’m going to run this
traceroute command to Quad9, which is a DNS provider. And when we hit enter, you can see it run
through the traceroute process. It does that very quickly through the 13 hops it took to make it all the way to dns9.quad9.net.

Each step along the way gives an IP address where we received the ICMP message from, and it gives three different response times by default because it tries three times on each one of those hops.

On the last hop, on all three of those different tests, we received an exclamation mark Z, which means
connection to that particular IP address is administratively prohibited.
Now that we know what this route looks like, we can compare it later on if we run into a problem. For
example, if we ran this traceroute later and it stopped at hop seven, we would know exactly where this
problem was occurring, and we could focus our efforts on resolving the network outage at that
particular point.

Another useful reconnaissance tool is querying a name server to gather information about the devices
that might be on a network. We can do this with two common commands. One is the nslookup
command, and the other is the dig command.

The nslookup command is common in Windows, Linux, Mac OS, and other operating systems. It can query a DNS server to determine names and IP addresses. You'll find that nslookup is slowly being
deprecated from a number of these operating systems, so it may be that your operating system doesn’t
have nslookup, but it does have the dig command.

Dig is effectively the replacement for nslookup, but it does have some additional functionality as well.
This is probably going to be the first choice you use. And there are ways to install this on Windows,
which doesn’t necessarily include the dig command by default. You’ll find information on that at
ProfessorMesser.link/digwin.

Let’s start with the nslookup command, and we’ll query the domain ProfessorMesser.com. The results
are coming directly from Quad9, and you can see that the answers provide us with three separate IP
addresses for the same ProfessorMesser.com domain.

This is by design, because I do have some redundancy built into the connections to my web server, and
you can see how the redundancy works using Cloudflare. Let’s run the same query again, but instead of
using nslookup, let’s use dig to ProfessorMesser.com.

It’s going to, again, query the Quad9 server, because that is my default DNS on this machine. You can
see a lot of other messages about what we are querying and the answers that we are receiving. In the
answer section we’re getting a lot more information than we got with the nslookup command because
we can see the domain name, we can see the timeout information used for caching, we can see that this is an
address record, and you can see all of the different IP addresses that are found on that particular DNS
server.
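As a quick sketch of the syntax used in these demos, you can also point either tool at a specific DNS server or record type:

    nslookup professormesser.com
    dig professormesser.com
    dig @9.9.9.9 professormesser.com MX     (query a specific server and a specific record type)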

If you’re performing some type of analysis or reconnaissance on your network, then you’ll need to know
the IP address of the device that you’re connected to. And there are two ways to do this, depending on
the operating system you’re using. You can commonly use ipconfig and ifconfig to gather those details.

This is going to tell you about the IP configuration of this device, and any other information about the
network adapters in this computer. If you’re running in Windows, this is the ipconfig command, and in
Linux, this is commonly the ifconfig command.

In Windows, we run the ipconfig command. And we can see DNS suffix information, IP address, subnet
mask, and default gateway. If you need more details about this particular configuration, we can run the
ipconfig command again, but use the /all option, which provides us with much more detail.

We can see things like the description of the network interface card, MAC addresses, additional details
regarding the DHCP lease, and other important IP address information.
In Linux or Mac OS, you would commonly run the ifconfig command. And on my system, I have a
number of different adapters because of all of the different virtualization clients, but if I scroll up a bit
you'll find the physical interface on this device, which is en0. And you can get information about this device such as its MAC address, IP address information, and IPv6 details, and here's my IPv4 address, which is 10.1.10.249.
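A quick summary of the commands from these demos (interface names like en0 will differ on your system):

    ipconfig          (Windows: IP address, subnet mask, default gateway)
    ipconfig /all     (Windows: adds MAC address, DHCP lease, and DNS details)
    ifconfig          (Linux or Mac OS: show all interfaces)
    ifconfig en0      (limit the output to a single interface)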

Sometimes you just want to know if a device happens to be on the network. And an easy way to find
that information is by using the ping command. Ping is a troubleshooting tool that you will use
constantly to see if a device is communicating on the network, or if an IP address happens to be assigned to another device.

This is a utility that's been around since 1983. Mike Muuss created it and named it after the
sound that sonar makes when you ping another device. Let’s run the ping command to see if device
9.9.9.9 is responding to our messages.

So we’ll use the ping command to that Quad9 address. And you can see we immediately get responses
back for that particular IP address, and we have a round trip time and Time To Live information
associated with that as well.

This will continue to ping in my operating system until I use the Control C command, and then that will
give us a summary of how these pings have worked from the time we started the ping until the time we
interrupted it.
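A sketch of the ping variations mentioned here (option letters differ between Windows and Linux/Mac OS):

    ping 9.9.9.9         (Linux/Mac OS: runs until you interrupt it with Control C)
    ping -c 4 9.9.9.9    (Linux/Mac OS: send four echo requests and stop)
    ping -t 9.9.9.9      (Windows: ping continuously instead of the default four)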

In Windows there’s a command that merges together the functionality of ping and traceroute to create
a single command called pathping. Pathping will run a traceroute to a destination IP address to
determine what routers may be between your local device and the destination you specify as part of pathping.

Once that’s complete, pathping will measure the round trip time to every hop along the way. And you’ll
be able to see exactly how much information was sent to that hop, and how many of those packets may
have been dropped.
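A minimal sketch of the command (the -q option to change the number of queries per hop is optional):

    pathping 8.8.8.8          (trace the path, then compute per-hop loss statistics)
    pathping -q 50 8.8.8.8    (send 50 queries per hop instead of the default 100)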

Pathping takes a number of minutes to run. So instead, I’ve taken the output and put it on the screen
here. I ran pathping on my Windows device to IP address 8.8.8.8, which is the Google DNS server.

Pathping created this very familiar traceroute, and then began the process of computing statistics for
each one of these hops along the way. Once pathping creates that report, we now have a
comprehensive view of exactly what’s happening on the network between these two locations.

We have a round trip time to each one of those hops, we can see how much traffic may have been lost
on the network from the source to here and on this node, and you get a percentage for each step along the way. You can see that for most of this path, we were losing no traffic.

Nothing was lost. Of the 100 messages sent, we received all of them. Until we got all the way down to
this particular hop, you can see that 100 packets were sent, and 100 packets were lost, making this a
100% loss for this particular IP address.
It’s probably the case that this IP address is administratively configured to drop this type of traffic, and
so once it hit that particular router, it wasn't able to gather any statistics. The pathping did continue, however, and we do have statistics for the hop right after that, which was the Google DNS server.

There may be times when you’re looking at a set of local log files, or you’re looking through a packet
capture, and you’re seeing some IP addresses that a device may be communicating with. And you might
want to know more about the conversations taking place across the network to that IP address.

One way that you can begin gathering some of this reconnaissance is by using the netstat command.
This stands for Network statistics, and it’s very good at showing us exactly what IP addresses may be
communicating into our device, and what IP addresses our device may be communicating to. There are
many different options available in netstat, making it a very powerful reconnaissance tool.

But let's look at three different options that can really help you. The first is netstat -a, which will show all
active connections that are being used currently on that device.

If you're in Windows, you may want to use the netstat -b command. This will associate the Windows binary with the IP address conversation.

And lastly, if you would like just IP addresses, and not resolve names, you can include the -n option on
the command line.
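Putting those options together, the commands look something like this:

    netstat -a      (all active connections and listening ports)
    netstat -abn    (Windows, from an administrator prompt: add the binary name and skip name resolution)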

To use the -b option in Windows, you have to start your command prompt in administrator mode. So
I've opened a new command prompt with those elevated permissions. And I'm going to use the netstat
command with all of those options, the -a, the -b, and the -n. It will then go through the process of
evaluating what conversations are occurring between my device and others.

And if I scroll up, you can see a number of the different executables that are using the network. Some of
these are internal to Windows, others are executables we might expect, such as onedrive.exe, or
microsoft.photos.exe.

And if you're using a browser, like I am with Google Chrome, you'll see all of the different sessions that Google Chrome is using to communicate between my local IP address and an IP address on the internet.

In my local network, there are a number of devices that are currently active. And I know what the IP
address of those devices is, but often you won't know what the MAC address of those devices might be. And sometimes that MAC address is important, especially if we're performing some type of packet
analysis.

One way to determine that MAC address is to look at the local ARP table. And you can do that by using the arp command with the -a option.

Let's look at our cache with the arp -a command. And you can see on my network there are a number of devices that are stored in my ARP cache. Probably the most important one for me is my local router, which is 10.1.10.1, and you can see the MAC address of that device.

Now if I ping another device on my network, let's ping 10.1.10.249. And now let's again look at our ARP cache, and you'll see that the MAC address for 10.1.10.249 has been added, and there's the MAC address of that local device.
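The sequence from this demo, as a sketch (the IP addresses are from my network and will differ on yours):

    arp -a              (display the current ARP cache)
    ping 10.1.10.249    (populate the cache with a new entry)
    arp -a              (the MAC address for 10.1.10.249 now appears)
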
It’s often useful to know what the next route might be outside of a network, or what other routes may
be available or configured on a particular device. There are a number of different ways to do this,
depending on the operating system you’re using. In Windows, the command is route print. And in Linux
and Mac OS, you would use the netstat command with a -r option.
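As a quick reference:

    route print     (Windows: display the local routing table)
    netstat -r      (Linux or Mac OS: display the IPv4 and IPv6 routes)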

On this particular device, there is a single route out to the internet, and it’s my only route. So if I
perform a route print, I should see my default route of 0.0.0.0 with a netmask of 0.0.0.0.

That designates my default route, and you could see that that route is to 10.1.10.1 on this particular
network. This is my local IP address. And the metric of 25 is the lowest in this list. So that’s the one that
has the priority.

On my Mac OS device, let’s run the netstat command with a -r. And this will list out all the IPv4 routes,
and IPv6 routes on my device. If I scroll up a bit, you can see the default route. And this is to the same
route as my Windows device, which is 10.1.10.1.

I do have some other routes from this particular device. For example, if we scroll down a bit, you can see
the 10.10.10.0/24 network, and it's communicating to a router at 10.1.10.211. I have another network
that I route to, which is my 192.168.254 network, and it also can be reached by going to 10.1.10.211.

Reconnaissance Tools – Part 2 – SY0-601 CompTIA Security+ : 4.1


Third-party reconnaissance tools can provide extensive information about users, networks, and devices.
In this video, you’ll learn about hping, Nmap, theHarvester, sn1per, scanless, and more.

In this video, we’ll look at some reconnaissance tools that may require you to install some additional
software. Some of these are available in Linux or Mac OS, or you may be able to find a pre-configured Linux distribution like Kali, where a number of these tools are pre-configured and ready to run.

The first tool we'll look at is curl, which stands for client URL, or uniform resource locator. This refers to the URL that you could use to access web pages, perform FTP transfers, retrieve emails, and many other functions as well. This allows you to grab the raw data from these sites and display it in a terminal
screen.

If you’re displaying a web page in a browser, it’s usually a very graphical view. But with curl, you’re
viewing the raw HTML that's being transferred from that web server. This is very useful for being able to see the source code and search through it. You might be able to parse information out from the website, and it makes it very easy to begin automating based on the information you're receiving from a website.

Let's perform a curl on my website and see what results we get. We'll run a curl to www.professormesser.com and look at the results. It shows that this site has a 301 moved permanently message, and it shows that it has been moved to a secure website with https, so let's run the same command.

We'll run curl again, but in this case, we'll use https://www.professormesser.com, and what we'll get is a much larger amount of data coming back, because this is the source code for the site written out as HTML, which we're viewing in our terminal screen, thanks to curl.
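A sketch of the commands from this demo (the -I option for headers only is an extra, commonly available curl option):

    curl http://www.professormesser.com       (returns the 301 redirect to the HTTPS site)
    curl https://www.professormesser.com      (returns the raw HTML of the page)
    curl -I https://www.professormesser.com   (headers only, useful for a quick check)
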
There are many different tools you can use to perform IP scanning; this is where we're scanning the network to try to locate IP addresses or identify what port numbers might be open on an IP address. These tools usually use a number of different techniques to identify and then display these devices and the port numbers on your system.

These scanners might use ARP to find devices on your local subnet, but if you’re trying to scan outside of
your local subnet, you may be using ICMP, TCP acknowledgments, ICMP timestamp requests, and other
techniques that can help you identify and then scan devices on the network.

If these tools identify a device that happens to be active on the network, you can then choose another
tool to provide even more reconnaissance after the fact. We'll be looking at a number of different tools, but some of the most popular are Nmap, hping, and others.

Hping takes the idea of a ping command and takes it to the next level. We're able to gather a lot more information than by simply performing a ping. We may want to gather information about what ports might be open on a device, or we could even craft our own packets and send those to a device across the network.

This is a relatively easy command to run. We just run the hping command, we can choose destination ports like I've done here to send it to destination port 80, and you can choose the IP address or IP address range of the device you'd like to scan.

Unlike a simple ping command, the hping command allows you to modify almost everything about the
packet. You can modify IP information, TCP details, UDP information, ICMP values, and much more. If
you use any of these tools to scan devices on your network, there's always the potential for a denial of service. So make sure you have permission to perform these scans and that you're running a scan that's
not going to create problems for other devices on your network.
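A rough sketch of hping3 commands along these lines (option syntax can vary between hping versions):

    sudo hping3 10.1.10.1                        (simple ping-style probe)
    sudo hping3 -S -p 80 10.1.10.1               (send TCP SYN packets to port 80)
    sudo hping3 --scan 80-443 -S -V 10.1.10.1    (SYN scan across a range of ports, verbose output)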

Let's run a ping to my local router. I'm going to use the sudo command to run this with elevated rights and permissions, we'll run the hping3 command, and I'm going to choose the IP address of my local router. You can see this performs a relatively simple ping to the device. We can see that this device has a Time To Live of 64, and we've got the flags, the sequence numbers, and the round trip times, very similar to the ping command.

Let’s run the hping command again, but I want to run the hping command with the port queries so that
we can see what ports may be open, and what ports might be closed. We’ll run the hping command with
the scan option, and I'm going to choose to scan ports 80 through 443.

We'll then also specify the destination IP address for this scan, and I'm also going to choose the capital V
option for verbose, so we can see exactly what’s happening behind the scenes when we perform the
scan.

When I hit Enter, you can see it scanning through many different ports all at once, and you can see all of the different ports that are open on this device and which ones may be closed. You can see that the
only two ports that received any information back from the scan were port 80 and if we scroll all the
way to the bottom, we can also see port 443 is open on that device as well.

If you're going to do anything serious with port scanning, though, you'll probably want to become familiar
with Nmap. Nmap stands for network mapper, and it is one of the most popular scanners and mapping
tools available in almost any operating system.
It can identify open ports on a device, it can identify what operating system might be running on that
device without actually logging in to the operating system. It can also perform a service scan to
determine more information about the specific services running on that particular remote device.

Nmap also includes the ability to run additional scripts, which greatly extend the functionality of the tool. These come from the Nmap Scripting Engine, or NSE. This allows you to use Nmap to run vulnerability scans and other tests against a device.

Let's run a very simple Nmap scan to my router. I'll run sudo nmap, choose the IP address of my router, and use the verbose option, actually two verbose options here, to give us more information about what's happening behind the scenes.

It’s now finished the scan of 1,000 different ports on that device, and it shows us information about
what ports did not respond, what ports did respond, and how they responded, and what ports may end
up being unreachable on that device. We can then take this information and decide what our next steps might be to gather additional intel from that particular IP address.
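A sketch of common Nmap invocations, including the NSE scripts mentioned above:

    sudo nmap -vv 10.1.10.1              (scan the most common 1,000 ports, verbose output)
    sudo nmap -sV 10.1.10.1              (probe open ports to identify service versions)
    sudo nmap -O 10.1.10.1               (attempt operating system fingerprinting)
    sudo nmap --script vuln 10.1.10.1    (run NSE vulnerability scripts)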

There is an amazing amount of information that can be obtained for free from public websites. We refer
to this information as open source intelligence or OSINT. And there are many tools available to allow
you to gather that information from those open source sites, and do it in a way that’s relatively
automated.

A good example of one of those tools is theHarvester. It allows you to gather many different
kinds of information from many different kinds of sites. You can go to Google or Bing, you can gather
information from LinkedIn, and many other resources as well.

For instance, if you wanted to find everybody on LinkedIn that matched a particular domain, you can
have theHarvester automatically find that information and present it to you on the screen. It can also perform things like a DNS brute force, so it can identify not only DNS services that may be publicly available, but also hosts that may not be easily identified through a DNS server. For example, you may be able to find a VPN server or an email server by running some of the brute force tasks within theHarvester.
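As a sketch, the command line looks roughly like this (option names can differ between theHarvester versions):

    theHarvester -d example.com -l 100 -b bing       (search Bing for up to 100 results)
    theHarvester -d example.com -b linkedin          (gather LinkedIn names for the domain)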

Here's theHarvester running on my Kali Linux distribution, and you can see it is used to gather open source intelligence on a company or domain. You can also see the services available, and there are a lot of different services, everything from Google to Yahoo to Twitter, and I can even run LinkedIn queries from
here as well.

Let's run theHarvester to see if we can find any emails that might be in a search engine. So I'm going to run sudo and theHarvester, and let's choose the domain as example.com. I'll tell theHarvester that I'd like to receive a maximum of 100 results, and we'll choose Bing as the search engine that we would like to query.

The tool will go out to that source and see what is available. It didn't find any IP addresses, but it did find two emails associated with that domain, email@example.com and mail@example.com. And it found one host IP address and listed that as well.

Doing this automated process can help us identify other types of information, especially if we include
searches that go out to LinkedIn, to Google and to other services. As we’re going through these
reconnaissance tools, you may notice that the different tools are providing different kinds of
information.

Wouldn't it be nice if we could bring all of this information back to a single query, and see the output of all of those tools at one time? We can do this with a reconnaissance tool called Sn1per. Sn1per combines all of
these tools together to give you one set of queries and one set of output for all of these different
functions.

There are many different ways to configure the way Sn1per runs; some of these options are very intrusive, and others are specifically built to run in a stealth mode. This is another one of those reconnaissance tools that can really create problems and denial of service situations. So make sure that you have permission to scan the device that you're accessing, and that you know exactly what options you're using in Sn1per.

Let's run a Sn1per query to example.com, and I'm going to choose to only run a request to the web services on that particular domain. When we hit Enter, it will begin the scan, and a lot of information is going to start flying by as it performs queries against different port numbers. It checks for a while, it does some HTTP information gathering, it runs Nmap scripts, and it puts all of this into a single set of output. We can let this run through its entire process to gather everything we can about example.com, and then we can evaluate the results.

I've scrolled to the top of the results of this scan that we ran against example.com, and you can see that it gathered DNS information and checked for subdomain hijacking. It then pinged the host and did a TCP port scan using Nmap, and we can see the results for port 80 and port 443.

It ran some intrusive scans against some port numbers that were apparently closed, it scanned HTTP ports to see if there was any information it could gather from the web server, and then it began running a spider that ran a TCP port scan and looked for HTTP headers. The entire set of results continues to flow through as it goes from tool to tool, providing the results for all of these in a single Sn1per query.

One of the problems you may find when you're performing a port scan is that your device is easily identified as the source of the scan. So one of the things you might want to do is run the scan from a different host; this would effectively be a proxy for port scanning, and the utility that does this is called scanless. Scanless includes support for many different services. You can choose exactly which proxy you'd like to use, and that device will perform the port scan for you.
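A minimal sketch of the scanless syntax (option names may vary by version):

    scanless -l                                (list the available proxy services)
    scanless -t scanme.nmap.org -s spiderip    (run the port scan through the spiderip service)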

Let's perform a port scan through a proxy to the public Nmap scanning server that's available for you to access with Nmap or other scanning tools. Let's use scanless to do this. I'm going to specify the destination as scanme.nmap.org, and I'm going to choose the spiderip service to provide the proxy for
me.

So instead of this port scan occurring from my local workstation, it’s actually going to be performed from
the spiderip service. I'll hit Enter, and it will go out to spiderip, run the scan against scanme.nmap.org, and the results of this scan come back directly from spiderip, showing me exactly what ports are open and closed on that particular destination of scanme.nmap.org.
There's a lot of information that you can gather from a DNS server, and one of the tools that lets you see it is the dnsenum command; this will enumerate DNS information from a DNS server. There is a great deal of information you can gather and many hosts you can identify from that DNS server.

But there are also other hosts that you may be able to find using a number of different techniques, and dnsenum allows you to do that. And of course, there's other DNS information that can be obtained from sources outside of a DNS server. For example, you could go to the Google index to see if you can identify any hosts there, and dnsenum will search through Google, find any host names, and begin to perform queries of those host names against a DNS server.
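A sketch of the dnsenum command line used in a query like this:

    dnsenum -v example.com                     (verbose enumeration, including the dictionary brute force)
    dnsenum --dnsserver 9.9.9.9 example.com    (direct the queries at a specific DNS server)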

Let's run a very simple dnsenum query. I'm going to choose the option to show this in verbose mode so that we can see what's happening behind the scenes, and we'll do this for example.com. When I perform this search, it's going to find information on the existing DNS servers, and then it's going to perform a brute force against the DNS server, changing the name at the beginning of the query to try to find subdomains inside of example.com. This brute force will go through a predefined set of names in a dictionary to see what it can find in that DNS server.

If I scroll back up to the beginning of the query, you can see that it did find example.com and an IP
address, and it found the name servers that were associated with that domain. It didn’t find anything
else associated with that domain, and that's why it began running through the brute force using the dns.txt file inside of dnsenum.

If you're planning to perform vulnerability scanning against a remote IP address, then you want to use a scanner such as Nessus. This is one of the most popular vulnerability scanners, probably because of its very large database, which can identify many, many known vulnerabilities.

This is a scanning tool that has a lot of support in the industry and you’ll find as we step through the
results that there is extensive reporting and information that will help you identify vulnerabilities and
help you resolve and fix the vulnerabilities on those systems.

Before making this video, I ran this Nessus scanner against a single IP address on my network, 10.1.10.13, and it found a number of vulnerabilities: 71 informational vulnerabilities, three low priority vulnerabilities, nine medium, two high, and two critical.

If we click on this, we can drill down into what these vulnerabilities were. Two of the critical vulnerabilities were for a Debian OpenSSH/OpenSSL package random number generator weakness, and this allowed people to gain a shell remotely on this device. If you click and drill down into that, it will
explain why that particular vulnerability was identified and what the solution is to resolve this problem
on that host.

If we go back to our vulnerability list, let’s look at the other critical vulnerability, which is that this is a
Unix operating system unsupported version detection. If we click on that, we can see that this is an old version of Linux that is no longer supported; it shows that it was running Ubuntu 8.04 and that support ended in 2011. And the obvious solution for this is to upgrade to a version that is currently
supported.
Nessus can be configured to scan many different IP addresses or ranges of IP addresses on your network, create this database of vulnerabilities, and then you, as the security professional, can begin identifying
where you need to start to make these systems much safer.

One of the challenges we have when receiving an executable or wanting to run a program that we’ve
never run before, is we’re always concerned there might be something malicious or perhaps malware
inside of that executable. And running the executable to test it on your production machine is probably
not the most secure way to identify problems.

That's why we would want to use Cuckoo. Cuckoo is a sandbox that is specifically written to run programs inside and identify any malware. This virtualized environment can consist of many different operating systems, including Windows, Linux, Mac OS, and Android.

It can trace API calls, it can identify what network traffic is being sent by this application, it can perform a memory analysis, and much more. If you're in a position where you need to evaluate different executables and confirm that they are safe before deploying them in your environment, then you might want to use Cuckoo.

This takes a little bit of time to get set up and have all of the correct VMs in place, but once you have this running, you have a method to run these executables safely without worrying about them infecting anything else in your environment.
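In the Cuckoo 2.x releases, submitting a sample for analysis looks roughly like this (the file path is just an example):

    cuckoo submit /path/to/suspicious.exe    (queue the file for analysis in the sandbox VMs)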

File Manipulation Tools – SY0-601 CompTIA Security+ : 4.1


An important skill of any IT security professional is the ability to manipulate files. In this video, you’ll
learn about cat, head, tail, grep, chmod, and logger.

If you’re using Linux or Mac OS and you want to see the contents of a file, then you want to use the cat
command. Cat is short for concatenate. Concatenating a file means that you would either view the
contents of a particular file to the screen or you would link multiple files together to create a larger file.

For example, if you’d like to copy files to the screen, use the cat command and then the name of the
files that you would like to display on the screen. You could also copy those files into a larger file. So you
would cat file number one, file number two, and then use the greater than sign to redirect the contents of both of those files into a single file called both.
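The commands described above look like this (the file names are just placeholders):

    cat file1.txt file2.txt          (display both files on the screen)
    cat file1.txt file2.txt > both   (concatenate them into a single file called both)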

Here's the /var/log directory of my Kali Linux system. You can see, if I perform an ls command, there are many files in this particular directory. I would like to see the contents of a file called syslog. Since I want to run this command with elevated permissions, I'll use the cat command and then choose syslog as the file that I would like to view. And if I hit Enter, a lot of information goes by.

I’m going to scroll up a bit just so we can get a feel for what all of that information looks like. You can
see that it’s sent all of the contents of this file to the screen and now we’re able to read through the
information that was stored in that individual file. If you’d like to view this information a page at a time,
we can run the same command. But I’m going to pipe this to the more command.

And now we can view the first page of information, there is a more option at the bottom. We can hit the
space bar to look at the next page, or we can hit the Enter key to go a single line at a time so that you’re
able to read this much easier than sending everything to the screen at one time.
To exit from this view, I can simply push the Q key. And now we're back at the command prompt.
Sometimes, you just need to see part of a file. Maybe, the information you’d like to see is written at the
beginning of the file and you don’t need to parse through the entire file or display everything in the file
on the screen. If you just want to see the top part of the file, we can use the head command.

And there are a number of options that would allow you to view a certain number of lines into the file or
you can customize how that information is displayed on the screen. For instance, if you’d like to display
the first five lines of a file, you can use the head command with the -n option to specify the number of lines. You would choose the number you would like, in this case five lines, and then the name of the file you'd like to view. This will show you just the first five lines from the beginning, or the head, of the file.

It may be that the information you need is not at the beginning of the file, but it’s at the end of the file,
and the opposite of the head command is the tail command. This allows you to view the last part of the
file. And the syntax of the head and the tail commands are very similar. For example, if you want to see
the last five lines in the file, you would use the tail command with the -n option, which is the same as for the head command. We'll choose five lines and then the name of the file; in this example, that file was syslog.

Let's view the contents of a file. We'll start with the head command to view the beginning of this file. I'm going to choose sudo because this file is normally one that I would not have access to with my user account, and we'll use the head command. I'm going to choose the messages file to view. And if we
hit Enter, we’ll see, by default, the first 10 lines that come from the file messages.

Let’s now look at the last 10 lines of this file. I’m going to clear the screen to make this easier to see.
We'll choose sudo again, and I'm going to use the tail command on that same messages file, and now we get the last 10 lines in the messages file. As we were displaying those files to the screen, you could see there's a lot of information stored inside of those log files. But we may be looking for just a little piece of information that may be contained within this entire file. And that's a lot like trying to find a needle in a very large haystack.

Fortunately, we have a tool that can help us search through the file to find exactly what we’re looking
for. And that command is called grep. This allows us to find any bit of text we’d like in a file and we can
even search through multiple files at one time to find any line that might have some text in it. For
example, if we would like to find the pattern failed within a file called auth.log, we can use the grep
command to find that information.
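A sketch of the grep syntax described here:

    grep failed auth.log      (show every line in auth.log containing the word failed)
    grep -i failed auth.log   (same search, ignoring upper and lower case)
    grep failed *.log         (search every .log file in the directory at once)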

Back in my Kali distribution, in the /var/log directory, there is a file called auth.log. Let's look for the word failed within that auth.log. Before we do that, I'm going to cat auth.log to the screen so you can see
just how big this file is. As you can see, there are many, many lines of information. And if we were trying
to find one specific piece of text, it would take us a very long time to do this manually. Instead, we’re
going to use the grep command.

I’m going to search for the word failed and we’re going to search inside of the auth.log file. And as you
can see, there were only three lines in that file where that particular word of failed happens to appear.
In this first line, you can see that conversation failed. And in the next two lines, you can see that I was
trying to perform a function and I used the wrong password. So it tells me that the authentication failed.
There may be times when you want to change how a file is viewed, or whether a file can be written to or even executed, in your operating system. The way that you would define these parameters is by using the chmod command, which allows you to change the mode of a file system object. In this context, the term mode means we are setting whether the file can be read, written, or executed, by setting the parameters for that particular file. We commonly do this by setting the binary patterns for an individual file, but we can also use octal notation to abbreviate this, setting these permissions for the file owner, the group, others, or everyone.

If you list the contents of a directory with the -l option, you’ll see these modes are listed at the very
beginning, it’s in the first column that’s presented. And this particular set of odd letters and dashes is
actually separated into different pieces. The first character of the group tells us what type of object this is. If it's a file, then there's just a dash. If it's a directory, then there's the letter d.

This could also list symbolic links and other types of objects as well. But for the purposes of the chmod
command, we're going to ignore that first character for now. The next set of characters is separated into
three apiece. So we have three characters that designate the user permissions. Three characters that
designate the group. And the last three designate what the rights and permissions are for everyone else.

For example, for this particular file script.sh, you can see that you have rwx as the first three, r-- as the middle three, and r-- as the last three. This means that the user, and in this case the user for the file is professor, has read, write, and execute rights to that file. The group is the second designation here, right after the name professor, and the group is staff. If someone from the staff group was to use this file, they would have read access to the file but no write or execute access, because those bits are not enabled. And if you are anyone else who is either not professor or not
staff, then your rights would also be read-only with no write permission and no execute permission.

If you wanted these rights to be different, then you would change what these bits represented for this
individual file. For example, if you use chmod 744 and the name of the file, the 7 for the user means read, write, and execute; the 4 for the group means read-only; and the 4 for everyone else is also read-only.

If you would like to remove all access for everyone else, then you would change the chmod command to
be 740, and the 0 would mean that there would be no access for everyone else who is not the user or
the group.

There are also some shortcuts you can use that make these permissions easier to remember than using the binary values and the numbers associated with them. In our previous example, we used numbers to designate what those permissions were. So chmod 744 for a particular file means that with the 7 for the user, they had read, write, and execute permission. The 4 for the group meant that the
group had read-only permission. And the last 4 meant that everyone else also had read-only permission.

You can also use letters to designate the scope and the permissions that you would like to set. For example, chmod with an a means all users. So chmod a-w would mean the user, the group, and others would not be able to write to first.txt. You might also try another one like chmod u, which is just for the user, with a +x; that means we would turn on the execute capability for a particular file.
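Pulling those examples together (script.sh and first.txt are the example files used above):

    chmod 744 script.sh    (user: read/write/execute, group: read, others: read)
    chmod 740 script.sh    (remove all access for anyone outside the user and the group)
    chmod a-w first.txt    (remove write permission for all users)
    chmod u+x script.sh    (add execute permission for just the file owner)
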
So by using these shortcuts or these numeric representations of the permissions, we can define exactly
what type of access a particular person may have to any one of the files on your system. Let’s change
the permissions of some of the files on my system. We'll scroll down a bit so we can see this. And I'll perform an ls -l, which allows us to see the two files that I have in this documents directory. One is app.conf and the other is readme.txt.

Both of these are owned by the professor user in the group netdev. And you can see that it is rw- for the
user, which means that professor has read and write permission. We have r–, which means that the
group netdev has read permission. And we can also see that everyone else or others also has read
permission.

Let's say in this case, we wanted to modify readme.txt so that the only person who can read that file is the owner of this particular file, professor. So we're going to perform a chmod command. We're going to maintain the read-write associated with this, and read-write together is simply a 6, so we'll choose 6 as the value there. We'll also keep the read-only for the group, netdev, and that read-only is a 4. And the last digit, which represents everyone else on the system, the group of people we would like to remove access for, we will set to 0, so they have no access. And we'll refer to the readme.txt file.

And if I hit Enter and perform another ls -l, you can see that readme.txt has been changed to be read-write for professor, read-only for netdev, and then no access for anyone else on the system.

There may be times when you're working on a system and, in order to document some information in a log file or to designate when a particular series of steps may be starting or ending, you may want to
add some additional details into the logs on the system. One way to do that is to use the logger
command. And the logger command will add additional information into the system log in that
operating system, which is commonly the file syslog.

For example, we could use logger and inside of quotes we’ll put, this information is added to syslog, and
that entire bit of information inside the quotes will then be written to the syslog file. This is very useful if
you’re running a script and you want that script to log this information so that we can grep or find this
information later. We may even want to log important events that occur and have that information
documented in all of the files on that system. Or we may want to log an important event and make sure
that the documentation for that event is stored in the system log of that computer.
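A minimal sketch of the logger workflow described here:

    logger "backup starts here"    (append the message to the system log, commonly /var/log/syslog)
    tail -n 5 /var/log/syslog      (confirm the new entry at the end of the log)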

Let’s look at the very last section of the syslog file on my system. We’ll perform a sudo. I’ll use the tail
command to look at the last 10 lines of the syslog file. But now, we would like to put our own
information into this file. So let’s use the logger command and let’s put information that says, backup
starts here. I'll hit Enter, and we'll then run that same tail command. And you can see that a line of information has been added to the end of the syslog file. This allows me to document when things may be occurring, and allows me later on to go back into this file, get a timestamp of exactly when that occurred, and locate this information very easily.

Shell and Script Environments – SY0-601 CompTIA Security+ : 4.1


There are many options when working with shells and scripting environments. In this video, you’ll learn
about SSH, Windows PowerShell, Python, and OpenSSL.
If you're connecting to a remote device and using the terminal screen on that device, which looks a little like this example here, then you're probably using SSH, the Secure Shell command. This provides an encrypted communication channel so you can put in your username and your password and perform any functions you would like in that terminal screen, and no one will be able to eavesdrop, capture that information, or somehow see what you're doing across the network.
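A minimal sketch, assuming a user named professor and the router used elsewhere in these demos:

    ssh professor@10.1.10.1    (open an encrypted terminal session to the remote device)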

If you were using the older telnet command, then you would be sending this information in the clear.
This is why if you’re ever performing any type of terminal communication across the network, you
would always use SSH so that all of your communication would be encrypted. If you’re using a Windows
machine at the command prompt, then one of the more advanced shells available on that system is the
Windows PowerShell.

PowerShell is commonly used by system administrators on Windows devices to control almost every aspect of the Windows operating system. A script that runs inside of PowerShell usually has a .ps1 file extension, so that you can recognize that it's a PowerShell script.

And if you're running Windows 8, Windows 8.1, or Windows 10, then PowerShell is already included and installed inside of those particular operating system versions. If you're using any of these functions inside of PowerShell, then you're using something called cmdlets (command-lets). You can run scripts inside of PowerShell, you can manipulate almost every aspect of the Windows operating system, and you can
even run certain scripts in a standalone executable mode so that they can operate as a standalone
utility.
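A few illustrative examples of what cmdlets and scripts look like (backup.ps1 is just a hypothetical script name):

    Get-Process                                        (list the running processes)
    Get-Service | Where-Object Status -eq 'Running'    (filter services using the pipeline)
    .\backup.ps1                                       (run a saved PowerShell script)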

As the name implies, PowerShell is a remarkably powerful tool for doing any type of administration on a
Windows operating system machine. If you’re a system administrator, you’re in charge of Active
Directory at your company, or you’re responsible for applications that are running inside of Windows,
you want to learn as much as you can about Windows PowerShell.

Another popular scripting language that works across many different operating systems is Python.
Python files usually have a .py file extension, so that we can easily recognize these as Python scripts.
Python is available in Linux, Mac OS, Windows, and many other operating systems as well.

And it is well supported across the entire industry primarily because it has such flexibility and allows us
to do so much inside of the operating system. Although a primary emphasis of Python is based around
the automation and orchestration of cloud-based systems, there are many other functions of Python
that are very useful for individual computers as well, and the more you know about Python, the more
you’ll be able to automate in all of these operating systems.

Another tool that’s not really a shell or a scripting language but still has extensive use in our applications
and operating systems today, is OpenSSL. OpenSSL is a library and a series of utilities that allows us to
manage SSL or TLS certificates on our systems. If you’re building your own certificate authority inside of
your company, then you’re probably going to be creating X.509 certificates.

People will be sending you certificate signing requests, or CSRs, and you will have to manage certificate revocation lists, or CRLs, and you can do all of that using the utilities available in OpenSSL. OpenSSL also
has cryptographic libraries to perform hashing functions for many different hashing algorithms.
And you can also, of course, encrypt and decrypt using the built-in functionality of OpenSSL. If you’re
running a web server or you have some type of certificate authority in your environment, then you
probably have OpenSSL installed on that system to be able to facilitate those functions.
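As a sketch of those functions (the file names here are just placeholders):

    openssl genrsa -out server.key 2048                 (generate a private key)
    openssl req -new -key server.key -out server.csr    (create a certificate signing request)
    openssl dgst -sha256 somefile.iso                   (compute a SHA-256 hash of a file)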

Packet Tools – SY0-601 CompTIA Security+ : 4.1


Capturing packets is a foundational skill in IT security. In this video, you’ll learn about Wireshark,
tcpdump, and Tcpreplay.

As a security professional, we will always have a requirement to capture raw data from the network.
One of the easiest ways to do this and be able to easily view this information is to use a utility like
Wireshark. Wireshark has both graphical and text based packet capture capabilities, and it can provide
us with a decode of every packet so that we can see exactly what information may be contained within
this network traffic.

Using Wireshark, we can easily capture information that's being sent over an Ethernet network or an 802.11 wireless network. And once we capture that information, we can view all of the packets, we can get a breakdown of the timestamps, the IP address that was used as the source, the destination, the specific protocol, and then a breakdown of what other information may be contained within that
particular packet. If you need to document exactly what a particular attacker may be doing on the
network, then you want to be sure to get the packets. One of the easiest ways is to use Wireshark.

Here's a real time packet capture of Wireshark on my local network. This is on my Kali distribution, and it's really just receiving broadcasts and multicasts on my local network. Right now that consists of a number of ICMPv6 frames and some SSDP, which is the Simple Service Discovery Protocol.
With each one of these, I can select a particular frame. I get a breakdown or a detailed view of that
frame, I can even extend some of this out to see exactly what may be sending this discovery protocol,
and you also get a hexadecimal breakdown of this decode so that you can really see exactly what
information is being sent over the network.

Once you’ve captured these packets, they can be saved in a file so that you can pull this up later and
have the documentation to understand exactly what was sent across the network during that time
frame. If you’re working on a system at the command prompt, you may not have a graphical front end
that you can use with Wireshark, so instead, you need something that can perform the same function at
the command line level, and a good utility for that is tcpdump. This is often included in many Linux
distributions, so you may not even have to install any new software to be able to have the tcpdump
capabilities. If you run tcpdump, it can display information on the screen. You can have other options to
provide additional decodes as this is capturing, and even include the option to write all of this
information into a capture file that you can later look at inside of tcpdump, or use Wireshark.
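A sketch of typical tcpdump usage (eth0 is just an example interface name):

    sudo tcpdump -i eth0                    (capture and decode packets to the screen)
    sudo tcpdump -i eth0 -w capture.pcap    (write the raw packets to a capture file)
    sudo tcpdump -r capture.pcap            (read a saved capture back for review)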

I'm going to run tcpdump on my Kali distribution. I'm going to run it with elevated permissions, so
that I have rights to capture this information. And as the packets are coming through, just as we saw
with Wireshark, we start to see, in this case, broadcasts and multicasts that are being sent over the
network, and the decodes are being sent to our screen. We could have also used parameters on the
command line to also write this information into a file so that we can reference that information later.

Now that we’ve captured these packets, we can, of course, look at this information inside of a protocol
decoder like Wireshark, or we can replay this information back onto the network using a utility called
Tcpreplay. This allows us to take the information that we’ve gathered and simply send it right back out
our network interface card so that other devices on the network can see that traffic as well. This is a
great way to test your security devices. If you’ve captured some malicious software and you want to see
if your IPS can recognize it, you can simply send that information across the network and see if anything
shows up in the logs of your IPS. This is also a good way to test firewall rules to see if the information
you’re sending through the network will either be allowed or denied access at the firewall.

I've also used Tcpreplay to send large amounts of information across the network to test monitoring tools and see how well they're operating. So if you want to check IP flow, or NetFlow devices, or
other packet capture devices on the network, you can send hundreds of thousands of traffic flows across
the network at very high speed and see how those devices happen to respond when they’re receiving
that data. This is also a good way to do some stress testing of the other devices on your network. So if
you want to see how a switch will perform, how a firewall might react to all of this data coming through
the network, you can use Tcpreplay.
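A sketch of how a replay might be run (interface and file names are examples; check the tcpreplay options on your system):

    sudo tcpreplay -i eth0 capture.pcap               (replay a saved capture onto the network)
    sudo tcpreplay -i eth0 --topspeed capture.pcap    (replay as fast as possible, for stress testing)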

Forensic Tools – SY0-601 CompTIA Security+ : 4.1


Some IT security investigations will require additional forensics. In this video, you’ll learn about
memdump, WinHex, FTK imager, Autopsy, and more.

If you’ve ever imaged a drive or a partition in Linux, then you’ve probably used the DD command. The
term DD comes from another DD command that was originally on IBM mainframes, and those used the
Job Control Language, or JCL to operate. DD is a reference to the data definition that converted between
ASCII and EBCDIC on the IBM mainframe.

DD allows you to create a bit-by-bit copy of all of the information that may be on a drive or in a
directory. This can obviously be very useful if you need to capture this information in order to perform
additional analysis later.

The command to create a disk image would be to use the DD command with the input set to a
particular drive or partition, and the output set to an image file that you would create. To restore from
that image, you would simply reverse the process: the image file becomes the input, and the output is
the drive or partition where you would like to restore that information.
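
As a sketch of those two directions, assuming the drive being imaged shows up as /dev/sda (the device name on your system may differ, so verify it first):

    # create a bit-for-bit image of the drive
    sudo dd if=/dev/sda of=drive_image.img bs=4M status=progress

    # restore the image by reversing the input and output
    sudo dd if=drive_image.img of=/dev/sda bs=4M status=progress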

If you are performing forensics on a system, you might be interested in what’s on the storage drive. But
you might also be interested in gathering what might be in memory. In order to capture that
information, you can use the memdump utility. That will take all of the information in system memory
and send it to a particular location on your system. This is very useful after the fact, because many third-
party forensics tools can read this memory dump file, and be able to identify or locate information that
may be stored in that memory file.

Because you would commonly store the memory dump somewhere outside of the system, we would
often use memdump in conjunction with Netcat, stunnel, OpenSSL, or some other utility to send that
data to a separate host across the network.
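
Here's one way that idea is often put together, as a rough sketch; the collection host address and port are assumptions, and netcat's listener syntax varies slightly between versions:

    # on the collection workstation: listen on TCP port 9000 and save whatever arrives
    nc -l -p 9000 > memory.dump

    # on the suspect system: dump physical memory to stdout and stream it over
    # the network instead of writing it to the local disk
    sudo memdump | nc 192.168.1.50 9000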

Once you have some of these images or memory dumps, you may want to look through a raw
representation of those files. And one easy way to do this in Windows is with the WinHex utility. This is a
third-party editor that allows you to view information in hexadecimal mode, so you can pull out
information that’s located in a file, in memory, in disks that you may have, and be able to not only view,
but edit that information as well.

There’s also disk cloning capabilities built into WinHex, so you could copy everything from a file and
store it in an image file, or copy it to a separate storage device. You can also perform secure wipes with
WinHex, to be sure that all of this information that might be contained within a file will be completely
wiped and will not be recoverable with third-party utilities.

And there are other forensics tools inside of WinHex as well. It’s a great utility to have in Windows, and
performs many different functions for the security professional.

If you’re using Windows as your forensics platform, then you need some way to capture images from
other drives and be able to store them in a format that can be read by other third-party utilities. A utility
that’s used almost universally for this purpose is from AccessData. It’s called FTK Imager, and it’s an
imaging tool that can mount drives, image drives, or perform file utilities in a Windows executable.

This is also widely supported in many other forensics tools so that you can capture information in FTK
Imager, and then use those image files in other utilities on other operating systems.

There’s even the ability built into FTK Imager to read encrypted drives, as well. Of course, you would still
need the key or the password required to be able to access that encrypted drive. But the ability to
decrypt it and reimage it is something that is built in to FTK Imager.

It can also save these files into other very common formats. So if you’re using DD, or ghost, or Expert
Witness, FTK Imager can read and write to those image formats as well.

Once we’ve taken an image of a storage drive, we’d like to be able to search through that drive to find
other pieces of information. A tool that provides this is the Autopsy tool. This is a tool that provides
digital forensics of information that is stored on a storage device, or in an image file, and it allows us to
view and recover data from these devices as well.

It can view many different kinds of data. So you can search through downloaded files, you can view the
browser history on a device, view email messages, identify databases, view graphics files, and so much
more.

Here’s an Autopsy output from a drive that I purchased on eBay. The drive was sent to me as a used
hard drive. But nothing on the drive had been formatted. So I imaged the drive using FTK Imager, and
then I imported that image into Autopsy, which was able to go through the drive and showed me that
there were 1,057 images, nine videos, 146 audio files, and more information that it was able to pull out
from there.

If I look at the images, I can click on the thumbnails and it can show me the information that was stored
on that drive. And it can do that for all of these file types, and identify where any of this information
might be. This was very telling because I was able to go through and view web bookmarks, email
addresses, emails that were sent on this machine, and I was able to find internal company information
that was being sent out and sold to me as a used hard drive.

It may be useful as a security professional to perform your own tests against systems that may be in
your environment, and see how vulnerable they might be.

One way to perform these tests is with an exploitation framework. And there are many third-party tools
that you can use to be able to perform these exploitations. These tools can be used to create custom
attacks, where you build the attack type, and what’s contained within it. And you can add additional
tools as more vulnerabilities are found.

These are commonly frameworks that allow you to add additional modules. And as the community finds
different vulnerabilities, they’ll create new modules, you’ll be able to download those modules, and use
them on your own systems.

A good example of a popular exploitation framework is Metasploit, which includes exploits for a large
number of known vulnerabilities, and an increasing number are added to Metasploit all the time.

Another one is the Social-Engineer Toolkit, which has information that allows for spear phishing, website
attack vectors, infectious media generators, and so much more.
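
As a rough illustration of how one of these frameworks is driven, here's a hypothetical msfconsole session against a lab system you're authorized to test; the module name and target address are only examples:

    msfconsole

    # inside the console: find a module, select it, point it at the target, and run it
    search vsftpd
    use exploit/unix/ftp/vsftpd_234_backdoor
    set RHOSTS 10.0.0.25
    run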

In the process of performing these vulnerability checks against a system, or searching through it using
our forensics tools, we may run across password files or information that may contain password hashes.
If we have that information, we may be able to perform brute force attacks to be able to identify those
passwords. And a good way to do that is to use a password cracker.

We can use this as an online cracking tool that can perform multiple requests to a device that’s online,
trying different passwords with usernames to see if you can find the right combination. But it’s probably
more common to use these crackers in an offline mode where you already have the hash files. If you
have the hashes, then you can perform a very high speed brute force to see if you can identify what
those passwords might be.

Of course, it can take a great deal of time and resources to be able to perform these brute force attacks,
and the amount of time and resources will depend on a number of criteria. One of these characteristics
might be the password complexity, or the strength of the password.

If the password is more randomized, then the entropy is higher, and it takes much longer to be able to
perform a brute force attack. These password hashes might have also been saved with a hashing
algorithm that uses a great deal of CPU cycles and makes it very difficult to perform a brute force attack.
If we have graphics processors, or GPU’s, then we can often use the high speed capabilities of those
pieces of hardware in order to help with this password cracking process.
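
As a sketch of an offline brute force with a GPU-capable cracker such as hashcat, assuming you have a file of MD5 hashes and a local wordlist (the hash type and file names are just examples):

    # dictionary attack (-a 0) against MD5 hashes (-m 0) using a wordlist;
    # hashcat will use a supported GPU automatically if one is available
    hashcat -m 0 -a 0 hashes.txt wordlist.txt

    # show any passwords that have been recovered so far
    hashcat -m 0 hashes.txt --show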

In order to prevent someone like me from purchasing a used drive, imaging the drive, and then running
it through forensic software, you can sanitize the drive before you send that to someone else. This is the
process of completely removing data, and also making it so that none of that data could be recovered
later on.

We would use data sanitization if we wanted to take an entire drive, clean it of anything that might be
on it, and then use that drive again internally, or sell it on the open market.

Or we might want to sanitize a single file that’s on our system, making all of that data unrecoverable,
but leaving everything else on our system. Of course, you want to be very careful when you’re using
data sanitization tools. Once you delete this information using these tools, there’s no way to recover it
later. Unless you have a backup, that data has now been permanently erased.
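
For example, on a Linux system you might sanitize a single file, or an entire secondary drive, along these lines; the file and device names are assumptions, so double-check them before running anything like the second command:

    # securely overwrite a single file multiple times, then remove it
    shred -u -n 3 sensitive-report.xlsx

    # overwrite an entire secondary drive with zeros before reuse or resale
    sudo dd if=/dev/zero of=/dev/sdb bs=4M status=progress
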
Incident Response Process – SY0-601 CompTIA Security+ : 4.2

Identifying and responding to an incident is an important part of IT security. In this video, you’ll learn
about incident preparation, detection, precursors, indicators, and more.

As a security professional, you’ll be responsible for responding to security events that occur in your
organization. Events like a user clicking an email attachment and suddenly infecting themselves with
malware, and that malware then begins communicating with other services and sending information
outside of your organization. Or maybe you’re dealing with a distributed denial of service attack with
botnets that are overloading your internet connection. Or maybe information that was stored
confidentially on your servers, has now somehow made its way onto public servers on the internet. And
sometimes the thief contacts you before making it public, to see if you’d like to pay a little money to
keep it away from the public’s eyes. Or you can have a user installing peer-to-peer software inside of
your organization and effectively opening up all of your systems to access from the outside. Each one of
these security incidents is very different, but they all require some type of response by the security
professionals in your organization.

These types of incidents are often responded to by your incident response team. This is a group of
people that have been specifically trained to deal with these types of circumstances. This might include
the IT management team for your security department, so you have corporate support. It could include
compliance officers who are responsible for making sure that all of the data is compliant with all of the
rules and regulations followed by your organization. You’ll of course need technical staff to help
troubleshoot and resolve these types of problems. And there may be users in your community that can
help with these situations as well. This is certainly not a comprehensive list, you could have
management within the organization, public relations, applications developers, and other people who
would be critical for responding to these types of incidents.

NIST, or the National Institute of Standards and Technology here in the US, has created a document
that can help you understand the process you’d go through to handle these types of security incidents.
This is NIST Special Publication 800-61 Revision 2, which is titled the Computer Security Incident
Handling Guide. This gives you information about the entire lifecycle when you’re handling a security incident.
This includes preparation, detection and analysis, containment, eradication, and recovery, and lastly
your post-incident activity.

The key to handling a security incident properly is to make sure you’re well prepared. There need to be
all of the right people and processes in place so you know exactly what to do when the incident occurs.
This would include communication methods, which document exactly who should be contacted and
how they should be contacted. It would also include your hardware and software tools, so you know
exactly how to respond to these problems, capture and store the data that’s important, and keep
information that you might want to use later on as evidence.

There will also be a need to have documentation of the organization’s network and to understand exactly
where data may be located. Once you have created and stored some of this evidence, you may want
to create hash values of that information so that you can be assured that none of it
changes. You also want to prepare for the mitigation process, so you want to be sure that your planning
includes clean operating system and application images. And lastly, and probably most importantly, we
need policies and procedures so that everyone knows exactly what they should be doing when a security
incident occurs.
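
As a simple example of that hashing step on a Linux system, assuming an evidence image file named evidence.img:

    # record a hash of the evidence image at collection time
    sha256sum evidence.img > evidence.img.sha256

    # later, confirm that the image has not changed since it was collected
    sha256sum -c evidence.img.sha256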

To be able to respond to a security incident, you have to know that the security incident has occurred.
And there are many different ways to monitor and identify these security incidents. This is an ongoing
challenge because we receive so many different types of attacks all the time, every day. And there are
always security tools we have in place that will prevent the majority of these types of attacks. But how
do you identify the legitimate threats and know if a particular incident has occurred? These security
incidents often include many different devices, many different operating systems, and you often need
someone who’s very knowledgeable in order to understand exactly when an incident might have
occurred.

Sometimes we can be informed when the potential for an incident may have increased. These
precursors give us a little bit of a heads up or at least help us to predict where particular areas of the
network may receive a security breach. For example, we may look through a web server log and see that
someone was using a vulnerability assessment tool to try to identify any open or known vulnerabilities
on that server. Or it may be that announcement day where we receive a list of vulnerabilities in
Microsoft operating systems or Adobe Flash, that tell us that we need to update those systems to avoid
any type of vulnerability. Or there might be direct emails, Twitter messages, or Facebook posts from a
hacking group that tells you that they are going to try to attack your network.

This means that we’ll need to monitor our systems and see if we can identify cases where a particular
security incident might have occurred. For example, we might find that our intrusion prevention system
has identified a buffer overflow attack against our system, and it can tell us whether that particular
attack was successful or whether it was stopped by the IPS. We might also have alerts and alarms that
come from our anti-malware or antivirus systems that can tell us if a particular piece of malicious
software is running on a device. We can also have messages from our file integrity monitoring systems
that will be able to tell us if any of the critical operating system files on our servers have been changed
or modified. And if we’re monitoring network traffic, we may be able to tell if there are differences in
the traffic as compared to what might be normal on that particular segment.

If you do find that there’s malicious software or some type of breach, one of the best things you can do
is isolate and contain that particular security incident. You would never want to leave it running just to
see what it does because nothing good can ever come from a situation like that. Instead we might want
to take that malicious software and run it in a sandbox. This would be an isolated operating system that
is specifically designed to run software and see what the results of running that software might be. This
is also an environment that can be completely deleted after performing your analysis, so that you can be
assured that that malware is not going to get outside of your sandbox.

But even the sandbox doesn’t provide for perfect analysis of malware. Some malware can recognize
when it’s running in a sandbox and it will perform differently in a sandbox than if it’s running in an open
network. And there’s some malware that recognizes when you lose connectivity to the internet, so
when you isolate that system, it begins deleting files or damaging the operating system. Once we’ve
identified that an incident has occurred and we’ve identified where malware might be on a system, it’s
time to recover that system. We would first need to eradicate this malware and remove it from that
system. Sometimes this involves completely deleting everything on the system and recovering from a
known good backup or a known good image or we may want to recover the system and then fix the
vulnerabilities that caused this incident to occur in the first place.

This is why it’s always important to have a backup so that you can restore this system very quickly. If you
don’t have a backup, then you’ll need to rebuild the entire system from scratch and in some cases, you
may have lost data because you don’t have a backup. In either of those situations you’ll want to be sure
that you have rebuilt the system and that you have then repaired the system to close any of those
vulnerabilities. And lastly, you want to lock down the perimeter of the network, so that you can stop
the attack before it gets into your private network.

On large networks, the reconstitution process can be very difficult and very time consuming. You want
to be sure that you have cleaned every system that could have been touched by this malware, or this
security incident, and this might take months of work to be able to recover everything on the network.
This usually starts with making high value changes and making changes that can be done very quickly. So
we may want to send patches out to our systems, or modify the firewall to prevent a certain type of
traffic from entering your network. Then you can look at a much larger response across the entire
infrastructure and begin looking at changes in how the network is designed, look at the operating
systems that may be used for a particular project, and perhaps roll out additional security controls so
you can prevent these types of incidents from occurring again.

And once the incident is over, we can take a step back and look at what processes worked and what
processes didn’t work during our incident response. We might want to have a post incident meeting
where everyone attends to talk about what occurred during the process, and we want to be sure we do
this as quickly as possible because we tend to forget some of the details as time goes on. You want to
make sure you have plenty of documentation, and that you’re able to offer ideas on what to do better
for the next incident.

An important piece of documentation would be understanding exactly what happened, and when it
happened, so you know exactly the time frame that everything occurred. You also want to examine how
all of these plans you put in place before the incident were able to execute during the actual incident.
And then what would you do next time to make those plans work even better? Being able to look back
and have an objective view of how things went is going to help you the next time something happens on
your network. And it may be that this particular incident has caused you to have a different idea of
things to look for the next time. So you may want to update your alarms and alerts, so that you’re able
to receive additional precursors that might identify when an incident is going to occur.

Incident Response Planning – SY0-601 CompTIA Security+ : 4.2


Most of the hard work related to security incidents happens before an event occurs. In this video, you’ll
learn about tabletop exercises, walkthroughs, simulations, communication plans, and more.

When we talk about security incidents, it’s usually after the fact, when one has already occurred.
But much of your work is going to be done well before an incident ever occurs in your environment. And
there are a number of things you can do prior to an incident occurring that can help you with the
planning process.

The first step is going to be performing exercises. It’s going to be testing yourself, and everyone in your
organization, on what they would do if an incident occurs. These can be scheduled, so you might have
them once a year, or twice a year, or even more, so that everybody becomes accustomed to what they
would do during an incident.

We want to be very careful that when we’re performing these exercises that we’re not affecting
anything related to our production networks. Although this incident is probably something that would
affect our production network, we want to be sure that we don’t touch anything with our production
network during these exercises.

Some security events could take weeks, or even months, to resolve. But when you’re performing these
exercises, you have a limited amount of time. So any event that you plan to do needs to have a narrow
focus that you can complete in a certain amount of time.

Once the exercise is over, we can look at our documentation and determine how we were able to
perform during this particular test. Going through the process of getting everyone together, performing
a full scale test of a particular security incident, and being able to go through from the very beginning to
the very end– these full scale drills can take a lot of time to complete, and they’re very costly because
you have so many people and so many resources involved in this disaster drill.

But many of the challenges that we have during these incidents are logistical issues. They might also be
related to what process we follow whenever an incident occurs. And because of that, we don’t
necessarily have to go through a physical drill to be able to find these issues. Instead, we can talk
through the drill occurring to determine what we would do first, what we would do second, and
continue through the entire process.

We call these types of drills tabletop exercises, because we’re getting everyone around the table, we’re
being presented with a particular scenario, and then we’re stepping through what we would do if this
particular incident occurred, instead of actually performing the tasks.

This means that everyone in the room can step through what they would be doing. They can discuss the
process with others in the organization. And you may be able to find places where the process you’re
following doesn’t match what other people were expecting, and you can resolve those process and
procedure problems before an actual incident occurs.

There may be times when you want to go one step beyond a tabletop exercise, and have all of the
players step through everything they would do if an incident occurred. This would be a walkthrough.
And this allows you to test all of your processes and procedures, not only with the management of your
organization, but with everyone who would be responding to this particular incident.

This would involve all the different parts of the organization, and you would use all of the tools that you
would normally have available to you. This allows you to go through every process and procedure and
see how it would work if you were to actually perform it.

So you could grab your tool kit, you can make sure that all the software and hardware that you’re using
is ready to go, you could see if you have all of the software up to date, and you want to be sure that it’s
working properly. And if you run into a problem, you can resolve it now during the walkthrough, rather
than waiting for an actual event to occur.

Many organizations perform ongoing simulations where they will pretend that a particular event has
occurred, and see how people in the organization respond to that. A good example of this would be a
phishing attack or a password request. And you can see how many people would click on that phishing
attempt and provide credentials to what would be a simulated attacker.

This usually starts with creating an email that would entice people to click on information inside of that
message, and ultimately provide their login credentials. This could be sent to individual users or groups
of users. And then you can check your reports to see who clicked through, and who provided those
credentials.

If you’re sending this email from the outside, you can also test your anti-phishing mechanisms
to see if those email filters are working the way you would expect. And if the phishing got through your
filter, you may have to modify the filter so that it doesn’t get to your users.

Ultimately, you’ll have a list of all of the users that received the email. You’ll know exactly who clicked
on links in the email. And ultimately, you’ll know who provided credentials once they clicked through that
link. At that point, it’s very common to take that group of folks who clicked through and send them to
specific anti-phishing training, so they know what not to do the next time they receive one of these
messages.

An IT department doesn’t commonly operate in a vacuum. There are usually customers of IT that have
applications, data, and other technical resources that the IT department is managing for them. These are
the stakeholders in your organization, and when something is not working properly, it’s the stakeholders
that are going to be suffering.

So it’s always a good idea to maintain a good relationship with your stakeholders. Involve them in the
planning process for these types of security events. And if there is an event that occurs, you can bring
them in and have them involved in the resolution process.

Most of this relationship building, though, doesn’t occur when an event happens; it occurs prior to the
event, often years before an event would occur. There’s ongoing communication and meetings to make
sure that everyone is involved in the process. And if you do have a security exercise, it’s important to
involve all of your stakeholders. And of course, once the event or the exercise is over, you want to
continue to involve them in the process, so they know exactly what to expect if a security event occurs.

Many of the problems that occur during a high stress event can be mitigated by simply having a good
line of communication. So if you are planning for a security event, you want to be sure that your contact
list is up to date and has all of the current information, so that you can contact everybody who needs to
be informed.

In your organization, this could include your CIO. You could have a Head of Information Security, and of
course, your internal response teams would be involved. And you’ll certainly need to involve people
who are not in the IT organization, such as human resources, your PR group, and your legal team.

And in some cases, you may need to call in external resources, such as the owner of the data, or perhaps
federal or state authorities. And if you’re part of a US government agency, you may need to call US-
CERT, which is our Computer Emergency Readiness Team.

One type of security incident that’s important to plan for is a disaster. The IT team is responsible for the
uptime and availability of all the data, and very often a disaster is going to affect that uptime and
availability.

These disasters can present themselves in many different ways. It might be a flood, or hurricane, or a
fire. Or perhaps you’ve had a system failure, or a technology failure in your software, or your hardware.
And of course, human beings can cause our own types of disasters. Someone doing construction could
accidentally cut through a water line that’s directly above your data center, or you may overload a
circuit on your power system and cause an entire floor’s power to go out.

All of these situations need to have a comprehensive disaster recovery plan so you know exactly what to
do, and when to do it. This may involve recovery at your current location, or you may have to use a
different location for recovery.

You also have to think about where your data is stored, and what it would take to recover that data if
you weren’t able to access it inside of your own building. And once that data is recovered, we need the
applications to go along with it. And we need to make sure that we have the personnel in your IT
department to be able to build all of these new systems, should a disaster occur.

When a disaster or security incident occurs, we need to find some other way to get our job done. And
often, this will require continuity of operations planning, or COOP. This is something that we would put
together well before a disaster occurs, so that we know what to do if we don’t have our normal
systems in place.

We rely constantly on the technology that we’ve created, and we often don’t even think about how we
would perform our job functions if we didn’t have our laptop, or smartphone, or any of our other
technology. But of course, there needs to be some type of alternative because this technology may not
be available during a disaster.

So we might use manual transactions that we’ve created on paper receipts, and instead of using
automated transaction approvals, we would pick up the phone and call someone to get those approvals.
If we have to use these processes, it will probably be painful and less efficient than our technology, but
at least we’d be able to get some of our work done. But we want to be sure that all of these
contingencies are well documented prior to a security event occurring.

Inside of our organization, we need to have a group of professionals who have been trained to respond
to these security incidents. This is our Incident Response Team, and they have been specifically trained
to deal with these types of problems.

They first would determine what type of incident is occurring, and what type of response it needs.
For example, a virus infection has a certain set of responses that may involve a small group of people.
But something like ransomware, or a distributed denial of service attack, may involve a larger group of
people in your organization.

The Incident Response Team may not be a separate department within your organization, but instead
may be a group of people that come together in a committee if an incident occurs. This means they can
be pulled in when a response is needed, and when no security incident has occurred, they can go back
to doing their normal day-to-day job.

This is the team within your organization that responds to any incidents that might be occurring. They
provide the analysis of what is occurring, and what needs to be done to resolve it. And they provide the
reporting that gives us the information we need to make our networks even stronger for the next
incident.

If you’re involved in a security incident, the first thing you’re going to think about is how much data is
going to be affected by this? The data is some of the most valuable assets that your organization has. So
you want to be sure that you have backups of everything. But perhaps more important, especially during
a security incident, where is that data located, and how much of it do you have?

You need to make sure that you have copies of this information. Some of it might be on site, some of it
may be off site. There may be different life cycles of this data that is stored in different ways, depending
on how you’re storing the information. And there may be data that is purged or deleted after a certain
amount of time has gone by.

Some organizations are also required to store certain types of information for a certain amount of time.
This regulatory compliance may affect financial organizations, or organizations that deal with certain
types of data.

There might also be very good reasons to have this backup available for operational problems. For
example, someone could accidentally delete data, and we need an easy way to restore that data, if
needed.

Or if there’s a disaster, and a flood or fire happens to take out all of your storage systems, you need to
have some way to restore that data from the backup.

And of course, not all of the data in your organization has the same priority or the same criticality. We
need to make sure that we have access to all of the data, but we need to understand, more importantly,
what data we restore first, what data we restore second, and so on. There needs to be very clear
understanding of what applications are going to be used, and where the data is located for those
applications.

Attack Frameworks – SY0-601 CompTIA Security+ : 4.2


An attack framework can help prepare, understand, and react to cyber-attacks. In this video, you’ll learn
about the MITRE ATT&CK framework, the Diamond Model of Intrusion Analysis, and the cyber kill chain.

If you’re an IT security professional and you’re responsible for protecting your network, you may find
that the attacks are many and varied. It’s difficult to keep track of exactly what type of attacks may be
out there and how you can protect yourself against these many and different varied attacks. And if an
attack is occurring, it’s important to know what your response should be and what you can do in the
future to mitigate these kinds of attacks. One of the challenges with this is there are so many different
methods that can be used by the attackers in so many different ways that they can gain access to
information. It’s important to know if your organization may be at risk. And if you are at risk, what are
the things you can do to help mitigate that risk?

One place to begin gathering this type of information is through the MITRE ATT&CK framework. This
comes from the MITRE corporation. They are based in the Northeast United States, and they primarily
support US governmental agencies. Their entire framework is available for you to view online. You can
go to attack.mitre.org and view the entire framework from that website. Using this framework, you can
identify broad categories of attacks, you can find exact intrusions that could be occurring, understand
how those intrusions are occurring and how attackers move around after the attack, and then identify
security techniques that can help you block any future attacks.

Here is the MITRE ATT&CK framework. It includes reconnaissance, resource, development initial access,
and so on. You could see many different categories are available. And let’s look at one of these. We’ll go
through the reconnaissance process. Let’s say perhaps we’ve discovered that there is some scanning
that’s going on against our network. So I want to click the Active Scanning option here. You can scan IP
blocks or do vulnerability’s scanning, and you can learn more about what those could be. We can also
learn information about how we may mitigate this. This is a pre-compromise mitigation, because
normally, the scanning takes place prior to an actual attack.

The framework also includes detection techniques and references you can use to help understand more
about this particular attack type. Let’s go back to our main list and let’s look at a brute force attack.
There are four different kinds of brute force attacks. Listed are password guessing, password cracking,
password spraying, and credential stuffing. Let’s do credential stuffing, and we can get information
about how those credentials are being stuffed by the attacker, ways to mitigate, which would be
account use policies, multifactor authentication, password policies, and user account management, how
you would detect these particular brute force attacks, and references to help you understand more. This
is an extensive amount of information. And if you’re trying to learn more about all of these different
attacks and ways that you can prevent them, this framework can give you a wealth of information.

Another useful framework that’s commonly used when an intrusion occurs is the Diamond Model. This
is the Diamond Model of Intrusion Analysis that was designed by the intelligence community of the US
federal government. You can get more information on that from this link that’s available at dtic.mil. This
guide is focused on helping you understand the intrusions that have occurred in your environment. The
Diamond Model uses scientific principles and applies them towards intrusion analysis, and how you can
focus on understanding more about these intrusions. Measurement, testability, and repeatability are
the focus of this Diamond Model, and although it appears very simple from the
outside, when you start going through the process of filling in all the blanks around the diamond, you
begin to see how complex this process can really be.

As a broad example of how you would apply this model, let’s take a scenario where there has been an
adversary that has deployed a capability over some infrastructure against a victim. And you can use the
Diamond Model to help understand the relationships between all of those different pieces and gather
details and documentation to fill in the blanks regarding this intrusion. This is the Diamond Model, and
you can see there are four corners to the diamond, adversary, capability, victim, and infrastructure. The
adversary is obviously going to be the attacker. We have capability, which is going to be what the
attacker uses. This could be malware or a hacker tool or some other type of exploit that they can use
against your systems. The infrastructure is describing what was used to gain access. So this could be IP
addresses, domain names, email addresses, or other parts of your infrastructure. And lastly is the victim.
This could be a person, it could be an asset that’s on the network, or it could be a series of email
addresses that’s used.

There is a relationship between each one of these points on the diamond. So an adversary would use
the infrastructure. The adversary also would develop a capability. The victim is exploited by that
capability, and the victim, of course, is connecting to the infrastructure. So you can see there are
relationships between each point on this diamond. And if you suffer an intrusion, you’ll begin filling in
documentation at each one of these points to help understand more about who the adversary was,
what part of the infrastructure they used, who was the specific victim, and what capabilities did they use
to be able to gain access. So as you begin filling in those blanks you’ll have a much better idea about
how this attack occurred, and then you can go back later and try to find ways to prevent this from
occurring in the future.

And the last model we’ll look at is one that is often referenced in IT security materials, this is the cyber
kill chain. This is a concept that was brought to us by the military, and we’ve applied it into the
cybersecurity world. This starts with the first phase of reconnaissance. Reconnaissance is where we’re
going to gather intel, so we can use many different sources to get intelligence about what we’re
attacking.

We have weaponization as the next phase, so we need to find some way to have a payload that can then
take advantage of a vulnerability. You would then deliver that payload. For example, you may send that
executable over an email to the intended victim. And the attacker is hoping that the victim is going to
run that code in their email to create the exploit and execute the code on the victim’s device. When that
code is executing, there will be the installation of software such as malware to create back doors and
additional channels, which brings us to the phase of command and control, where the attacker is now
creating a channel that they can use to gain access to that system. And lastly, is where the attacker will
begin carrying out their objectives in the last phase, which is actions on objectives.

Each one of these models provides us with a different perspective of IT security. Some of these models
are created so that we can gather information and learn more before an attack occurs, and other
frameworks are designed to help us understand the results of an attack. Either way, we can take
advantage of these frameworks to help make our network safer, and prepare for the next round of
attacks against our systems.

Vulnerability Scan Output – SY0-601 CompTIA Security+ : 4.3


The output of a vulnerability scan can identify significant security vulnerabilities. In this video, you’ll
learn about vulnerability scans, reading through the results, and managing false positives and false
negatives.

Vulnerability scanners are an important part of maintaining the safety and security of the devices on
your network. This allows you to scan these devices to see if there are any known vulnerabilities that
you may be able to close or remove prior to an attacker taking advantage of those vulnerabilities.

The scanner looks at a huge amount of information. And although it seems like it’s looking at almost
everything on the system it’s really looking at very specific signatures for known vulnerabilities. These
vulnerabilities can be cross-referenced online. So you can see exactly what is associated with that
vulnerability. And in many cases, how to resolve or remove that vulnerability.

There are many places to find this information. One of the most popular is the National Vulnerability
Database at nvd.nist.gov. And of course, we can always find our Microsoft Security Bulletins on the
microsoft.com website. The information we get from a vulnerability scanner can sometimes be very
obvious and very clear that a vulnerability exists. But sometimes a vulnerability scanner will simply give
us an idea that perhaps a vulnerability may be an issue on that device.
So you may have to manually connect to this device, do some additional research, and determine if this
system is really vulnerable. A vulnerability scan can give you a lot of information on the status of these
devices. One is that it may tell you that a device has a lack of security controls. If the firewall hasn’t been
configured, or has been turned off, a vulnerability scan can inform you of that problem. Or if there’s no antivirus
and no anti-spyware on the system, you will see that listed in the vulnerabilities associated with that
device.

A vulnerability scan might also tell us if a user’s created an open network share. This would be access
to files on the system that doesn’t require any type of authentication, and that would be a significant
vulnerability. But one of the things you will find with these vulnerability scans is that it is able to identify
some very specific vulnerabilities that exist. And as we update the database in the vulnerability scanner
we can be notified of new vulnerabilities as they are discovered.

This is a vulnerability scan that I ran on a system that has a number of different vulnerabilities by
default. This is intentionally a very vulnerable system. The scan took only two minutes to run. And it
identified 39 vulnerabilities on the system. Some of those vulnerabilities are critical vulnerabilities,
others are mixed, some are medium, there’s one low vulnerability. And a large number of informational
vulnerabilities on the system.

If we scroll to the top we’ll look at the critical vulnerabilities. And we’ll look at the second one on this list
which is Unix operating system unsupported version detection. And it found that this particular system
is running Ubuntu 8.04, which is a very old version of that operating system. And it even tells us that
there’s no support and no new security patches. So this may not be a good operating system to run on
your network.

If we go back to our list of vulnerabilities, one of the medium category vulnerabilities is that the NFS
shares on this device are world readable, which means there are no access restrictions. Anyone who is
able to see the system is able to connect to that share and access the files on the storage device of this
machine. And we’ll do one more: we’ll look at a medium vulnerability of an unencrypted telnet server.

Telnet servers obviously communicate in the clear. And it’s telling us that this device is running a telnet
server over an unencrypted channel. In fact, it gives us the banner it received when it connected to this
device. And you can see that this is an intentionally vulnerable operating system called Metasploitable 2.

If this was a production system, it wouldn’t be running Metasploitable, but it would still be providing
information in the banner on the system. And then we can take the proper steps to disable this
telnet server on this device.
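
Findings like these are also easy to confirm manually before taking action. As a sketch, assuming the scanned host is at 10.0.0.25:

    # list the NFS exports to confirm the world-readable share finding
    showmount -e 10.0.0.25

    # connect to the telnet port and view the banner for yourself
    nc 10.0.0.25 23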

One challenge when working with vulnerability scans is occasionally the information we receive in those
reports won’t be entirely accurate. For example, we may receive false positives in that report telling us
that a vulnerability exists. Here’s the type of vulnerability that it happens to be. But then when we
research it and have a look at the system, we find that system isn’t vulnerable at all.

The vulnerability scanner believes that a vulnerability exists. But now that we’ve researched it we can
see that there is no vulnerability on the system it was instead a false positive. False positives are
problems that don’t exist at all, they were miscategorized or misidentified as a vulnerability.

This is different from a low severity vulnerability. The low severity vulnerabilities on our report are
problems that really did exist, but the vulnerability scanner believes that these particular problems
may be of a lower priority than a medium, high, or critical vulnerability.

A vulnerability we did not see on our report was a false negative. That’s because false negatives don’t
appear on any of your reports. A false negative is when a vulnerability exists on that device but the
vulnerability scan did not identify it. And therefore was not able to alert us that a problem might really
exist on that system.

A false negative can be a significant concern. A vulnerability exists on the system but our scanner never
identified it. And therefore we may have no idea that our system is susceptible.

One of the things you can do to minimize the number of false positives or false negatives is to make sure
you have the latest signatures for your scanner. This is going to provide the most accurate and most
current set of signatures, so that things like a false negative can now be identified, and things like false
positives can be properly excluded from your reports.

And there may be aspects of your network or the configurations that you’re running on your systems
that might cause false positives and false negatives. So you want to work with your vulnerability scanner
manufacturer to make sure that you’re running the right configuration with the right signatures.

SIEM Dashboards – SY0-601 CompTIA Security+ : 4.3


A SIEM can provide extensive visibility and reporting options. In this video, you’ll learn about using a
SIEM (Security Information and Event Management) console and searching for important security
details.

S-I-E-M, or SIEM, stands for Security Information and Event Management. This is usually a device that is
logging information from many different resources on the network, and consolidating all of those logs
back to one single reporting tool.

This allows us to perform analysis of the data to create security alerts and real time information about
what’s happening on the network right now. Since we have collected all of this log information, we’re
aggregating it into one place, and creating a long term storage so that we can create some extensive
reports over a long period of time.

There’s also the ability to correlate different types of data together. We’re bringing together data from
firewalls, servers, switches, routers, and other devices on the network. This allows us to correlate data
together that normally would be completely separate.

And of course, if we have some type of security event, we can go back through these logs to determine
what happened during that time frame, and what other details can we gather about this specific security
issue.

A SIEM can gather information from many different devices. We can of course gather log files from
operating systems like Windows or Linux, and have that information sent into the central SIEM
database.
There’s also log files that are in our switches, our routers, our firewalls, and other devices. And of
course, those log files can also be parsed and stored in the SIEM database. We might also use third-party
sensors, which follow standards such as NetFlow, that can provide information about traffic flows across
our network.

And if you can imagine consolidating all of this information from so many different devices into a single
database, and then trying to read through the database to find information that we might be able to
use, it’s almost overwhelming. So it’s important to use a SIEM that is able to parse the data, and perhaps
put the information into different categories.

Perhaps some of these log entries can be categorized as informational. Others might have a warning
category. And others could be categorized as urgent.

Because we’re able to capture this information over a very long period of time, we can start to see
trends in the way that the data is changing. We can see spikes whenever a particular security event
occurs, or we might be able to tell that a particular network is more or less utilized than normal.

The SIEM also has intelligence that can parse this data, look through the information for details, and
proactively provide you with alarming and alerting. You could then drill down into the raw data that’s
inside the SIEM to be able to create reports and view other details about that event.

And we can begin correlating these very different data types into a standard set of information. For
example, you may be able to see the relationships between source and destination IP addresses, users,
source type, and other information that you could gather from the log files.

I’m connected to a SIEM that has 339,000 events inside of the database that go back five years. The latest
event was two days ago. And we can begin searching for information inside of the SIEM.

I’m going to perform a search for the word fail, which will also match failure or anything else that starts
with fail, along with the word password. It’s going to search through that database and show me all of
the log entries where I have matches for the word fail or failure, and the word password.

In some of these entries, you’re able to determine that it was Microsoft authentication.
You can see server names and other information inside of them. And there are additional details that you
could drill down into if you would like to.

There’s also information along the left side of the screen that tells me information about, perhaps, what
the source type was for these particular records. And if I click that, it will summarize or create an
incident report that shows me that 2,400 of these were Windows authentication instances. But there
were also some Linux devices that had this particular entry, along with a database audit and a secure file service.

If we click Linux secure, it will add that to our list. And now we’re looking at Linux-specific events where
there was a failed password. And then we can find more information about each one of those events,
and what was associated with that particular log file.

It will be interesting to see what devices had this particular authentication failure. So I’m going to
choose an option on the left side for the destination. I can see there are eight destinations listed, and
57% of these events were to the Corp file server. So now we know exactly which service is having the
greatest number of events, or perhaps the greatest number of brute force attacks, all because we were
able to search very quickly through the SIEM database.
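
As a very rough sketch of that kind of query in a Splunk-style search language (the index and the dest field name are assumptions based on this walkthrough, so your environment’s field names will likely differ):

    # find failed-password events and count them by destination system
    index=* fail* password
    | stats count by dest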

Log Files – SY0-601 CompTIA Security+ : 4.3


Security information can be found in the log files of almost any device. In this video, you’ll learn about
viewing log files on network devices, systems, applications, web servers, and more.

There are a number of devices that are connected to our networking infrastructure that can provide us
with feedback about things that may be occurring on the network. These might be switches, routers,
firewalls, VPN concentrators, and other devices.

This is a log file from a switch that is giving us information about what interfaces may be going up and
down on the switch. There’s also security information on the switch. The TCP SYN traffic destined to the
local system is automatically blocked for 60 seconds. This type of information might vary depending on
the logs we are receiving.

If we’re looking at router logs, then we’d see router updates. We may see authentication issues occur
with some of these, especially if it’s a VPN concentrator, or things like this TCP SYN attack, which is
related to a network security issue.

If you’ve ever looked at log files on an operating system, then you know there is extensive information
that you can gather. And very often, this can include information about the operating system, the file
system, and the applications that are running on that OS.

In Windows, not only do we collect information about the application, the setup, the system, and
forwarded events, there’s also a section for security events. The operating system can monitor for
security or authentication events, and log all of that information as well. Because there is so much
information stored in these operating system log files, you’re going to need some way to filter this
information.

You can see on this particular Event Viewer, there are over 7,000 events stored in this log file.
Fortunately, built into Event Viewer are a number of different actions that allow you to filter or be able
to view the data in different ways, so that you can find exactly what you’re looking for.

Many applications also keep their own log files, and we can get more details about the way an
application is performing based on what we’re finding in this log information. In Windows, you’ll find
this application log information in the Event Viewer under Application log. And if you’re on a Linux
operating system, you’ll find many different log entries under /var/log.
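
For instance, on many Linux distributions you can review these logs directly from the command line; the exact file names vary a bit between distributions:

    # follow new entries in the main system log as they arrive
    tail -f /var/log/syslog

    # review authentication events (this file is /var/log/secure on some distributions)
    less /var/log/auth.log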

You can either view the logs on the system itself, like I have here, or you could bring those log files into a
Security Information and Event Manager, or SIEM, and look at all of this information consolidated into
one single place.

The emphasis in this course is on security. So certainly, there will be a lot of security log information to
view. You have many different devices that can gather security details, so that you can see what traffic
flows have been allowed or blocked through your network. You can view any exploits that may have
been attempted. You can see if any URL categories have been blocked by your firewall or your proxy.
And you can also see DNS sinkhole traffic, which can tell you what devices attempted to connect to a
known malicious location.

Most of these logs and security details are created on security devices that we have connected to our
network, such as intrusion prevention systems, firewalls, or proxies. This can provide us with detailed
security information about every single traffic flow going through the network. We can get a summary of
what attacks may be inbound to our network, and we can correlate this log file with the other log files
that we’re collecting at the same time.

Firewall logs can give us information about traffic flows that may be allowed or blocked. This is a firewall
log that shows information on what IPv6 packets have been blocked on this network. This firewall is also
able to provide information on website access that has been denied. And there’s many other error
messages in here that might give us an idea of what attacks may be underway.

A web application firewall can also provide details about application level attacks. We can see
information such as cross-site scripting attacks that have been attempted from a particular location. We
can see error codes that were created but suppressed, and kept from the attacker, and this was a code
405 on this particular web server. And then there are SQL injections and cross-site scripting attacks that
were occurring later on in this log.

All of this information gives us an idea about where attacks may be coming from. It can tell us what
attacks we’re stopping at our firewalls, and give us an idea of where we may want to add additional
security controls in the future.

If you’re running a web server, then you have an extensive log that shows exactly who connected to
your website server, and what pages they were able to view. We’re also able to get information about
what errors may be occurring, especially if someone’s trying to access nonexistent files, or files that
might be associated with known vulnerabilities.

You can look through log files individually on a web server, or you can consolidate these log files into a
SIEM, or log file analyzer, to be able to tell if someone’s trying to take advantage of a vulnerability. And
from an operational perspective, the log file has information about when a service started, when a
service ended, and you’ll be able to get some operational details about how well this web server is
performing.

A Domain Name System server can give us information about what queries have been made against this
DNS server. We can view the IP address of the request, and many log files will also store the fully-
qualified domain name for the request. We can see if someone’s trying to perform a name resolution to
a known malicious site, or site that has known command and control information. This may indicate that
a device has already been infected on the inside of our network.

And since we have control of our own DNS server, we can block any attempts that have been made to
resolve a known malicious site. We can then use that list to identify potentially infected devices, and
then we can clean those devices, or remove them from our network.

Whenever we authenticate into a device, we’re commonly using a username, a password, and perhaps
some other type of authentication factor. Each time we attempt an authentication, the results of that
attempt are logged in an authentication log file. We know exactly who was able to gain access to a
system, and who was denied access to that system. We can find account name, source IP address, the
authentication method they used, and we can create a report that shows, over time, what devices
successfully authenticated, and what devices did not successfully authenticate.

If someone is performing multiple authentication attempts, then we may be able to identify brute force attacks and block them just by looking through our log files.

We can then correlate that with other log files we might have. So we could see router and switch
information, we can identify SSL VPN authentication, and see if we can see why this particular device is
authenticating incorrectly, and where it may be coming from.

Here’s an authentication log file showing a series of brute force attempts where someone is trying to put
in the password for the root login on the server, and ultimately keeps trying over and over again, perhaps in this case even with different IP addresses, to try to gain access to this particular server.

You can see exactly when a password attempt failed, you can see the disconnects for that, and then you
can see the next failed attempt later on in the log. All of this information can be consolidated from all of
your servers into one single SIEM, and then you can create a report that shows all of the authentication
attempts across your entire network.
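
As an illustration, here’s a minimal Python sketch of how a script or SIEM rule might surface that kind of brute force activity, assuming an OpenSSH-style auth log; the log path, the matched text, and the threshold are assumptions made for the example.

    # Minimal sketch: count failed SSH password attempts per source IP in an
    # auth log and flag possible brute force activity. The file path, the
    # "Failed password" string, and the threshold are assumptions based on a
    # typical OpenSSH/Debian-style auth.log.
    import re
    from collections import Counter

    LOG_FILE = "/var/log/auth.log"   # adjust for your distribution
    THRESHOLD = 10                   # arbitrary cutoff for this example

    failed = Counter()
    ip_pattern = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

    with open(LOG_FILE, encoding="utf-8", errors="replace") as f:
        for line in f:
            m = ip_pattern.search(line)
            if m:
                failed[m.group(1)] += 1

    for ip, count in failed.most_common():
        if count >= THRESHOLD:
            print(f"Possible brute force: {ip} had {count} failed logins")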

Most of the log files we’ve been discussing so far are created constantly on the devices that we use
every day. But there are some log files we can create on demand. A good example of this is memory dump files. We can take a single application, using Task Manager in Windows, and we can create a dump file that will store everything in memory associated with that application into a single file.

We would usually create one of these files when working with tech support to resolve an application
problem. And we can send that memory dump file to developers in an effort to try to locate and resolve
that issue.

This is very easy to do in the Windows Task Manager application. You simply right-click the application and choose Create dump file. You might also find that the application you’re using in Windows, or any
other operating system, has its own internal method of creating a dump file for the developers. So make
sure you work with the support team to find the best way to create this memory dump for your
application.

Most of the environments we’re working in have moved from the traditional plain old telephone system, running over analog phone lines, to Voice over IP and digital packets. And although we’ve made this technology shift, we still have reports that we create based on these Voice over IP systems and our Call Manager logs.

For example, you can view inbound and outbound call information, including what endpoints were
involved in the phone call, and any communication that may go in and out of a particular gateway. There
may also be security information inside these log files, especially if these phones are authenticating to
the Call Manager, and we can see exactly when a particular phone may have been in use.

And we can get detailed log information from Voice over IP protocols, such as the Session Initiation Protocol (SIP), which sets up and tears down our phone calls and messages, so that we can see the call setup, the management, and the teardown of the phone call.

We’re also able to see inbound and outbound call information, and if somebody happens to use unusual
country codes, we may have alarms and alerts based on these log files that can inform us when
something like this may be happening with our phone system.

Log Management – SY0-601 CompTIA Security+ : 4.3


Security monitoring processes create extensive logs and data. In this video, you’ll learn about
transferring, storing, and reporting on logs created from journalctl, metadata, NetFlow, IPFIX, sFlow,
protocol analyzers, and more.

One of the standard methods for transferring log files from one device to a centralized database is called
Syslog. This is a standard that you’ll find across many different devices, and if you’re installing a firewall
or switch or a server, you’ll notice that there will be Syslog options within those devices so that you
could send that data to a SIEM.

This is a Security Information and Event Manager and it’s usually a centralized log server that
consolidates logs from all of these different devices. When we send information via Syslog, we’re
labeling each log entry into this Syslog destination. There will be a facility code, which is the program
effectively that created the log, and there will be a severity level associated with that log entry.

If you look at a Linux device, or a device that doesn’t automatically have Syslog functionality, you may see different daemons available, such as rsyslog, which is the “rocket-fast system for log processing,” syslog-ng, which is a popular syslog daemon for Linux devices, or NXLog, which is able to collect log information from many different devices and consolidate it on a single machine.
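
As a small illustration of how an application can label and forward its own events, here’s a minimal sketch using Python’s standard library syslog handler; the SIEM hostname and the facility choice are just examples.

    # Minimal sketch: forward an application event to a remote syslog/SIEM
    # collector using Python's standard library. The SIEM hostname and the
    # facility choice are assumptions for this example.
    import logging
    from logging.handlers import SysLogHandler

    handler = SysLogHandler(address=("siem.example.com", 514),   # hypothetical SIEM
                            facility=SysLogHandler.LOG_AUTH)     # facility code
    handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))

    logger = logging.getLogger("myapp")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    # The logging level maps to the syslog severity in the message header
    logger.warning("Failed login for user jsmith from 10.1.1.50")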

This is the front end of a Security Information and Event Manager that has received these log files via syslog and parsed those files, and now we can view this information, we can search through this data, and we can create reports on everything stored in our database.

If you’re managing a Linux operating system, there are many different logs available on that device.
Some of them are specific to the operating system itself, some of the logs are created by the daemons
that are running on that system or the applications that you’re using.

There is a standard for storing system logs on Linux in a special binary format. This optimizes the storage area and allows you to query the information very quickly, but you’re not able to view it with a text editor because it’s in that special binary format.

Fortunately, Linux has a utility called journalctl, which allows you to query the information that’s in that
system journal and provide output on what may be contained in there. And you can search and filter on
those details, or view it as plain text.

This is a view of the output from journalctl, and you can get an idea of information about connections to sshd; here’s an rsyslogd entry, where syslog information has been received, and you can look at other details about authentications that have either succeeded or failed on the system.
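
If you want to pull that journal data into your own tooling, here’s a minimal sketch that calls journalctl from Python and reads its JSON output; the unit name and time window are examples and will vary between distributions.

    # Minimal sketch: query the binary system journal with journalctl and parse
    # the JSON output. The unit name (sshd.service) varies by distribution, and
    # the time window is just an example.
    import json
    import subprocess

    result = subprocess.run(
        ["journalctl", "-u", "sshd.service", "--since", "1 hour ago",
         "--output", "json", "--no-pager"],
        capture_output=True, text=True, check=True)

    for line in result.stdout.splitlines():
        entry = json.loads(line)
        # Each journal entry includes fields such as MESSAGE and _HOSTNAME
        print(entry.get("_HOSTNAME", "?"), entry.get("MESSAGE", ""))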

One of the first statistics we often want to gather from these log files is information on the bandwidth that we happen to be using. This is a fundamental network statistic and one that is almost universal no matter what device you’re connecting to. This shows you the percentage of the network that has been used over time.

And there are many different ways to gather this metric. You might use SNMP, the Simple Network
Management Protocol, or you could use other more advanced capabilities such as NetFlow, sFlow, or
IPFIX. You could also use protocol analyzers or software agents that might be running on a particular
device.

Bandwidth monitoring is always a good first step. It’s good to qualify that you have the bandwidth
available to transfer information for that application, because if the bandwidth has been exceeded and
you’re running out of available capacity on the network, then none of your applications are going to
perform very well.
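
As a simple illustration of the math behind a bandwidth report, here’s a sketch that turns two interface counter readings, like the ifInOctets values you might poll with SNMP, into a utilization percentage; all of the numbers are made up for the example.

    # Minimal sketch: estimate link utilization from two samples of an
    # interface's octet counter (for example, ifInOctets gathered via SNMP).
    # The counter values and link speed below are made-up sample numbers.
    LINK_SPEED_BPS = 1_000_000_000           # 1 Gbit/s link
    SAMPLE_INTERVAL_SEC = 60

    octets_t0 = 8_245_112_000                # first counter reading
    octets_t1 = 10_495_112_000               # reading one minute later

    bits_transferred = (octets_t1 - octets_t0) * 8
    utilization = bits_transferred / (LINK_SPEED_BPS * SAMPLE_INTERVAL_SEC) * 100
    print(f"Utilization over the interval: {utilization:.2f}%")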

Another great source of data, one that is usually hidden from us, is metadata. Metadata is data that describes other types of data, and usually, metadata is contained within the files that we’re using on our devices. For example, if you send or receive an email, there is metadata within that email message that normally you don’t see.

There’s information in the headers of that email; that header information may show you which servers were used to transfer that email from point A to point B, and you may also be able to see destination information as part of those headers.

If you’re using your mobile phone, there’s an extensive amount of metadata that could be stored. For
example, if you take a picture or store video on your mobile device, it could keep in that metadata the
type of phone that was used to take that picture, or the GPS location where the picture was taken.

If you’re using a web browser to connect to a web server, then there’s metadata that’s transferred back
and forth there as well. For example, you could be sending your operating system information, the type
of browser that you’re using, and the IP address that you’re sending it from.

And if you look into documents or files that you store, for example, in Microsoft Office, you may find metadata inside that document that shows your name, your address, your phone number, your title, and other identifying details.

Here are the headers that you normally don’t see when you’re looking at your email messages, and the metadata that’s hidden inside.

You can see what IP address a message was received from, and who received that message, you can see the return path, another IP address where that message was received, and other details that help you understand what path this message took through the network, what validations were used to confirm this message was really sent by that originator, and other details about this email message.
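
Here’s a minimal Python sketch of pulling that header metadata out of a saved message using the standard library email parser; the file name is hypothetical.

    # Minimal sketch: pull routing metadata out of an email's headers using the
    # standard library. The file name is hypothetical; any saved .eml file works.
    from email import policy
    from email.parser import BytesParser

    with open("suspicious_message.eml", "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)

    print("From:       ", msg["From"])
    print("Return-Path:", msg["Return-Path"])

    # Each Received header records a hop the message took through the network,
    # listed most-recent first
    for hop in msg.get_all("Received", []):
        print("Received:", " ".join(hop.split()))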

NetFlow is one of these standardized methods of gathering network statistics from switches, routers,
and other devices on your network. This NetFlow information is usually consolidated onto a central
NetFlow server, and we’re able to view information across all of these devices on a single management
console.

NetFlow itself is a very well-established standard, so that makes it very easy to collect information from
devices that are made from many different manufacturers, but bring all of that information back to one
central NetFlow server.

This is an architecture that separates the probe from the collector. So we have devices on our network
that may be individual NetFlow probes, or the NetFlow capability may be built into the network devices
that we’re using. These probes are either sitting in line with our network traffic, or they’re receiving a
copy of the network traffic, and all of those details are exported to a central NetFlow collector where
you can then create different reports.

There are usually extensive reporting options available on the collector, and we can gather very long-
term information to be able to see trends and other details about how our network is performing.
Here’s a NetFlow collector front end that shows the top 10 conversations and top 10 endpoints on our
network, and it shows it as it’s relating to bandwidth.

We can also get a breakdown of all individual hosts here as well. And here’s another summary of details
that shows the top five applications running on our network, and what the top NetFlow sources might
be.
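
To illustrate the kind of summarization a collector performs, here’s a small Python sketch that reduces a handful of made-up flow records into top conversations by byte count.

    # Minimal sketch: the kind of aggregation a NetFlow collector performs,
    # reducing individual flow records to top conversations by byte count.
    # The flow records here are made-up sample data.
    from collections import Counter

    flows = [
        # (source IP, destination IP, bytes)
        ("10.1.1.10", "172.16.5.20", 1_250_000),
        ("10.1.1.10", "172.16.5.20",   980_000),
        ("10.1.1.42", "192.168.9.7",   420_000),
        ("10.1.1.10", "192.168.9.7",    55_000),
    ]

    conversations = Counter()
    for src, dst, byte_count in flows:
        # Treat A->B and B->A as the same conversation
        conversations[tuple(sorted((src, dst)))] += byte_count

    for (a, b), total in conversations.most_common(10):
        print(f"{a} <-> {b}: {total:,} bytes")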

A similar data flow standard is IPFIX. This is the IP Flow Information Export, which you can think of as a newer version of NetFlow; it was created based on NetFlow version 9. This gives us some flexibility over what data we collect and what information is reported to a centralized server. It’s very similar to NetFlow, except we can customize exactly what kind of data we’d like to receive from those collectors.

One of the challenges with collecting this network traffic and creating metrics based on the conversations occurring on our network is that it can take a lot of resources, especially if you’re running a very high-speed network. To balance the available resources with the need to view more statistics on the network, we can use sFlow, or sampled flow, where we’re looking at only a portion of the network traffic to gather metrics.

Because of the lower resources required for sFlow, we can embed this capability in a number of our
infrastructure devices. So the switches and routers that you’re already using on your network may
already support sFlow functionality. And although we’re only looking at a portion of the traffic going
through, we can infer some relatively accurate statistics from these sFlow devices.

You may be able, for example, to view video streaming and high-traffic applications, by simply sampling
a portion of that traffic as the flow is active. Here’s an example of some of the statistics you can gather
using sFlow. You can see the top 10 interfaces by a percentage of utilization, we have top 10 interfaces
by total amount of traffic, top 10 wireless clients by traffic, top 10 wireless access points by client count,
and other statistics as well.
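
Here’s a tiny sketch of the inference behind sampled flow: multiply what you sampled by the sampling rate to estimate the totals. The sampling rate and counts are example values.

    # Minimal sketch: inferring total traffic from sampled packets, which is the
    # basic idea behind sFlow. The sampling rate and counts are example values.
    SAMPLING_RATE = 1024        # one packet sampled out of every 1,024
    sampled_packets = 2_400
    sampled_bytes = 1_900_000

    estimated_packets = sampled_packets * SAMPLING_RATE
    estimated_bytes = sampled_bytes * SAMPLING_RATE
    print(f"Estimated traffic: ~{estimated_packets:,} packets, "
          f"~{estimated_bytes:,} bytes")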

And if you need to get detailed information of exactly what’s going over your network, then you can use
a protocol analyzer. Protocol analyzers are commonly used to troubleshoot complex application
problems, because they gather every bit and byte from the network, and provide a breakdown of exactly
what’s going across those particular network links.

You can also use this on wireless networks or wide area networks as well. You’re able to see information such as unknown traffic that may be going across the network, you can apply packet filtering so that you can view exactly the information you’re looking for, and of course, the protocol decodes on the analyzer will give you a plain-English breakdown of exactly what traffic is traversing the network.

Here’s a screenshot from an analyzer. On the top, we can see a packet-by-packet breakdown of delta times, source IP addresses, source port numbers, destination IP addresses, destination port numbers, protocols, the length of data in the packet, and information about the packet, and then underneath all of that, we have a detailed breakdown of every single packet that’s going through the network.
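
If you want to capture packets programmatically rather than in a full analyzer, here’s a minimal sketch that assumes the third-party scapy library is installed and that the script has capture privileges.

    # Minimal sketch of programmatic packet capture, assuming the third-party
    # scapy library is installed and the script runs with capture privileges.
    # A full protocol analyzer such as Wireshark provides far richer decodes.
    from scapy.all import sniff

    def show(pkt):
        # summary() prints a one-line decode of each captured packet
        print(pkt.summary())

    # Capture 20 TCP packets using a BPF filter, then stop
    sniff(filter="tcp", prn=show, count=20)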

Endpoint Security Configuration – SY0-601 CompTIA Security+ : 4.4


Security administrators use a few different philosophies when configuring security policies on endpoints.
In this video, you’ll learn about approval lists, block lists, quarantine areas, and the criteria used for
application approval lists.

When we refer to the endpoint, we’re talking about the devices that we use day to day to do our jobs.
This could be our desktop computer, the laptop that we carry around, or it might be a smartphone or
tablet. And of course, there are different ways to exploit each of these devices. The endpoint is a critical
piece of our security. So we have to protect against malware, operating system vulnerabilities, or users
who are trying to circumvent existing security controls. The IT security team is responsible for
monitoring all of these different devices. And they are constantly watching for alerts and alarms that can
let them know when something unusual might be happening on the endpoint.

One security control might be to define what applications are allowed or not allowed on a particular
endpoint. The concern might be that a user would download software from a third-party website. And
that software might have some malicious software or malware. By providing control of the applications
running on the endpoint, the IT security team can create a more secure and stable environment.

One philosophy on how to implement this type of control is through the use of an approved list. That
means that the IT security team would create a list of applications that are approved. And no other
applications would be able to run on that endpoint. This is obviously a very restrictive list. And you
would have to go to the IT security team if there’s any software that you do need to have installed that
may not currently be on the approved list.

Another way to implement this control is to have a blocklist or a deny list. This would be a list of
applications that would specifically be prevented from running on this particular endpoint. This would
mean that users were allowed to install applications unless that application is specifically listed in the
deny list. In fact, it’s very common for anti-virus or anti-malware to have their own deny list. And if a
user tries to launch that application, the anti-malware software will prevent that application from
running.

If your endpoint security software does recognize an application that seems to have malicious software,
then it can remove that from the system and place it into a quarantine area. This might be a folder that’s
on the existing system where no applications are allowed to run. Later, the IT security team can look into
the quarantine folder and perform additional analysis of that software.

The ability to run or not run an application in an operating system is commonly built into the core functionality of the OS. And the security team can enable or disable different parameters to allow certain software to run. For example, there may be a hash that is taken of an executable, and if that hash matches the executable on the system, it can either be allowed or denied access to execute. Since this is a hash, if the application changes, then the hash would change. So this may be something that the security team has to constantly update every time a new version of software is introduced.

Many applications include a digital signature. And that digital signature often is based around a particular manufacturer or developer. That developer’s name then can be allowed or disallowed to run on that system if the digital signature matches. For example, you could decide that anything that is digitally signed from Microsoft is trusted and therefore can be installed and run on that endpoint.

Malicious software often installs itself into different places on a storage device. So you may be able to tell your operating system, only run software if it happens to be installed in this particular folder.

And if you limit the permissions to those folders, you can effectively create a trusted area of the storage
drive. We can also set a policy that would allow or disallow an application to run based on the zone it is
executing from. For example, our internal network is commonly designated as a private zone. And the
internet side is usually the public zone. We could set a policy that says that any applications that are
executing or running from private zone devices are allowed, and any that are coming from a public zone
would be prohibited.
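
As an illustration of the hash-based criteria described above, here’s a minimal Python sketch that compares an executable’s SHA-256 hash against an approved list; the hash value and file name are placeholders, not part of any real product.

    # Minimal sketch: allow an executable to run only if its SHA-256 hash matches
    # an approved list. The hash value and file path are placeholders; a real
    # allowlist would be centrally managed and updated with each software release.
    import hashlib
    import sys

    APPROVED_HASHES = {
        "b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9",
    }

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    digest = sha256_of("app.exe")         # hypothetical executable
    if digest in APPROVED_HASHES:
        print("Hash matches the approved list; execution would be allowed")
    else:
        print("Hash not on the approved list; execution would be blocked")
        sys.exit(1)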

Security Configurations – SY0-601 CompTIA Security+ : 4.4


A secure configuration can be designed to include many different features. In this video, you’ll learn
about isolation, containment, segmentation, and SOAR.

Our latest generation of firewalls allows us to allow or deny certain applications from traversing the
network. This means the firewall might allow access to a Microsoft SQL server application, but deny
access to a web based application. But, of course, if we’re on our mobile phone or our tablet and we are
not in our office, then our firewall is not going to be able to help us very much.

So we need to have rules on the mobile device that can allow or disallow access. And we can provide
those types of policies through the Mobile Device Manager or MDM. This MDM allows the IT security
administrator to set policies on all of these mobile devices. So no matter where you take your mobile
phone or your tablet, it will always be protected from malicious software.

And another useful security function is DLP, or data loss prevention. The DLP’s role is to identify and block the transfer of any PII, or personally identifiable information. This means that someone trying to transfer personal records, social security numbers, credit card numbers, or anything else that’s sensitive could be blocked by the DLP.

A lot of what we do in our browser is visiting other websites. And those websites, of course, have a URL, or uniform resource locator. That URL can be used as a security control. If someone tries to visit a known malicious site, the URL filter can block access to that particular location. And if someone’s trying to access a known good location, the URL filter would still allow access to those sites.

Many of these URL filters can also be integrated with third party blocklists. These are blocklists that are
updated constantly and can provide you with real time blocking of known malicious sites. For example,
the first name on this particular blocklist is amazon.co.uk, which sounds fine, except it’s actually amazon.co.uk.security-check.ga. This is clearly a URL that’s trying to disguise what it’s doing. And you
can see in the blocklist that it has been configured as a phishing site.

In larger environments, we might deploy certificates to all of our trusted devices and all of our trusted
services. And if someone tries to connect to the network and they do not have a trusted certificate on
that device, that device would not have any access to any network services.

The concept of isolation is one where we can move a device into an area where it has limited or no
access to other resources. Isolation is a key strategy, especially when you’re trying to fight malicious
software or software that’s constantly trying to communicate back to a command and control location.
We often use isolation if someone’s trying to connect to the network and does not have the correct
security posture on their device.

Perhaps they’ve not updated to the latest antivirus signatures. So their device will be put on a separate
remediation VLAN that would give them access to update the signatures. And once those signatures are
updated, they’re then allowed access to the rest of the network.

We can also implement process isolation. If we identify a process running on that device that seems
suspicious, we can disallow any access from that process to the rest of the network. That means that the
user would still be able to communicate using the normal trusted applications. And we would be able to
communicate inbound to that device to provide additional support.

We can normally communicate from our local laptop to any of the other devices on our network. So the
idea of providing some type of isolation is a useful security tool, especially if this device was to suffer
some type of malicious software. Now that this device is infected, we may be concerned that this device
could also infect other devices on the network.

With an isolation policy we can disable the connection between this laptop and the rest of the network.
And we might also put this device on its own isolated VLAN, which means that it would be able to
communicate to other devices on the isolated VLAN, but no one else inside of the organization. We
might even configure a firewall rule that would allow the isolated VLAN access to known trusted sites on
the internet. This would allow the laptop owner to download antivirus or anti-malware software from
the internet in an effort to remove that from their machine.

One way to prevent the spread of malicious software is to prevent the software from having anywhere
to go. One way you can implement this is through containment. And a popular way to do this is
application containment, where every application that runs on your system is running in its own
sandbox.

That means that every application is not aware of any other applications running on that device. And
every application has limited access to the operating system and to other processes. This means that if
you were infected with something like ransomware, the ransomware may be able to infect that
particular application, but it would not have any way to jump outside of that application to infect the
rest of the local machine or other devices on the network.

The containment might also be one that is reactive. Once ransomware is identified on anyone’s
machine, we may change the security posture to disable any administrative shares on any one system,
disable remote management of all devices on the network, and we would also disable any local account
or administrator access. And we would change the passwords associated with our administrator
accounts.

On many of our networks, we’ve created extensive security between the outside of the network, usually
on the internet, and the inside, or our internal network. What we have not done is put a lot of security
controls on the inside of the network, which means that devices that are all on the inside of the network
can communicate with each other relatively freely. This also means that access from the outside, once it
gets to the inside, is able to traverse the internal network without any concern of being blocked.

Many network administrators have started to create segmented networks where they would put
different devices into their own segmented and protected areas of the network. In those cases,
someone coming in from the outside might gain access to the internal network. But because all of these
devices are on their own segmented network, there would be no way to communicate in or out of those
protected areas of the network.

As you can see, there are a number of different security controls you can put in place to allow or
disallow access for applications and data through the network. One of the challenges for security
professionals is that in order to make all of these changes and be able to do it dynamically, you would
need to automate this process. And a number of organizations have started to implement SOAR, which
is security orchestration, automation, and response.

Using SOAR, an administrator can integrate multiple third-party tools and have them all work together. This integration is based around what we call a runbook. A runbook is a bit of a cookbook that has detailed steps on how to perform a particular task.

So there might be a runbook that describes how to reset a password. And it describes connecting to the
Active Directory system, locating that particular user in the list, changing the password parameters,
resetting the lock that might be on the password, and then sending the user a message. That particular
set of steps is something that should occur automatically and should be relatively easy to automate.
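
Here’s a minimal sketch of what that password-reset runbook might look like once it’s automated; every step stands in for a call into your directory, ticketing, or messaging platform, so treat the details as hypothetical placeholders.

    # Minimal sketch of a password-reset runbook as it might be automated by a
    # SOAR platform. Each step prints what a real integration would do; the
    # directory, lockout, and messaging calls are hypothetical placeholders.
    import secrets

    def password_reset_runbook(username):
        steps_completed = []

        # 1. Locate the user in the directory (e.g., Active Directory)
        steps_completed.append(f"Looked up {username} in the directory")

        # 2. Generate and set a temporary password
        temp_password = secrets.token_urlsafe(12)
        steps_completed.append("Set a new temporary password")

        # 3. Clear any account lockout
        steps_completed.append("Cleared account lockout flag")

        # 4. Notify the user through an out-of-band channel
        steps_completed.append("Sent notification message to the user")

        for step in steps_completed:
            print("DONE:", step)
        return temp_password

    password_reset_runbook("jsmith")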

These runbooks can be combined together to create a playbook. A playbook is a much broader
description of tasks to follow should a particular event occur. For example, if you want to recover from
ransomware, there needs to be a playbook written that can describe all of the different steps that need
to occur in order to remove that ransomware.

Digital Forensics – SY0-601 CompTIA Security+ : 4.5


Gathering digital forensics data is often a critically important process. In this video, you’ll learn about legal holds, video capture, admissibility, chain of custody, time offsets, and more.

Digital forensics describes the process of collecting and protecting information that is usually related to
some type of security event. As you can imagine, this can cover many different techniques for gathering
data across many types of digital devices. And it also describes different methods to use for protecting
that information once you’ve retrieved it.

If you’d like to get an overview of this process, there is an RFC for this: RFC 3227, Guidelines for Evidence Collection and Archiving. And it’s a great best-practice document for getting an idea of what’s involved with the digital forensics process.

This RFC describes three phases for the digital forensics process: the acquisition of data, the analysis of that data, and the reporting of that data. And you’ll notice that the digital forensics steps and the entire process of collecting and protecting this data require you to be very detail oriented, especially since some of this information could be used later on in a court of law.

One of the first notices you might get relating to digital forensics is something called a legal hold. This is often requested by legal counsel. And it’s often a precursor to other types of legal proceedings.

This legal hold often describes what type of data needs to be preserved for later use. The data copied for
this legal hold is often stored in a separate repository. And it’s referred to as electronically stored
information or ESI.

These legal holds may ask for many different kinds of information and many types of applications. And
the information that you’re storing might be stored for a certain amount of time or it may be an
indefinite hold. As a security professional, if you receive a legal hold, you have a responsibility to gather
and maintain that data so that everything is preserved.

Another good source of information to gather would be video. Video can provide important information that you can reference after the fact and that normally would not be available. For example, you can capture the screen information and other details around the system that normally would not be captured through any other means.

And if you’ve got a mobile phone, it’s very easy to grab video from wherever you might be. You might
also want to look around and see if there’s any security cameras which may also have stored video that
could then be included with this information gathering. This video content needs to be archived so that
you’re able to view it later in reference to this particular security incident.

One concern regarding the data that you collect is how admissible that data might be in a court of law.
Not all data you collect is something that can be used in a legal environment. And the laws are different
depending on where you might be. The important part is that you collect the data with a set of
standards, which would allow that data to be used in a court of law if necessary.

Another concern is whether you are authorized to gather that information. In some environments, the data itself is protected; in others, the network administrator may have complete access to that data. And of
course, there are correct ways to gather data and incorrect ways to gather data. You need to be familiar
with the best practices for your tools and the procedures that you follow.

If this data will be used by a laboratory, you want to be sure the proper scientific principles are used
during the analysis process. And you may be asked for your academic or technical qualifications
surrounding this data acquisition so that anyone analyzing this data knows that it was gathered properly
by a professional.

Once you gather data, you want to be sure that nothing happens to that information and that no
changes occurred to anything that you’ve collected. To be able to verify this, we need to have some type
of documentation that shows that nothing could have been changed since the time you collected it. This
documentation is known as a chain of custody.

Anyone who comes in contact with this data or uses it for analysis needs to document what they did
with this chain of custody. It’s common to have a catalog that labels and documents everything that’s
been collected into a central database. We would also use hashes during the collection process so that
later on we can verify that the data that we’re looking at is exactly the same data that was collected.

An important piece of information, especially as time goes on, is to document the time zone information
associated with the device that you’re examining. These time offsets can be different depending on the
operating system that you’re using, the file system that’s in place, or where the device happens to be
located. For example, if you’re using the file allocation table file system, all of the timestamps are stored
in local time on that file system. If this device was storing information in a file system using NTFS, you’ll find that all the timestamps in the file system are stored in Greenwich Mean Time.

This is where the recording of this timestamp becomes very important. If a year later we go back to this
information and it shows that this file was changed at 5:00 PM, is that 5:00 PM local time or 5:00 PM
GMT?

There might also be time offsets in the operating system itself. You may want to refer to the Windows
Registry or the configuration settings for the operating system that you’re examining, to see exactly
what the time zone settings are. This may be very different depending on where this device is located. It
may be in a different time zone. And the rules regarding daylight saving time and other time information
may be specific to its local geography.
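
Here’s a small Python sketch of why that recorded time zone matters, showing the same stored timestamp interpreted as GMT and as a local time; the date and offset are just examples.

    # Minimal sketch: why the recorded time zone matters. The same moment in
    # time reads very differently depending on whether a timestamp was stored
    # in local time (as FAT does) or in GMT/UTC (as NTFS does).
    from datetime import datetime, timezone, timedelta

    # A file-modified timestamp recorded as 17:00 GMT
    modified_utc = datetime(2023, 3, 1, 17, 0, tzinfo=timezone.utc)

    # Interpreted in U.S. Eastern time (UTC-5 during standard time)
    eastern = timezone(timedelta(hours=-5))
    print("Stored (GMT):      ", modified_utc.isoformat())
    print("Local (UTC-5 view):", modified_utc.astimezone(eastern).isoformat())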

Event logs provide a wealth of information because they are storing details about the operating system,
the security events, and the applications that are running in that operating system. So if you’re collecting
data from a device, you want to be sure to get the event logs.

There’s usually a method to export the event logs, like in the Windows Event Viewer. Or there may be a
way to simply copy them off of the device, if you’re running something like Linux or Mac OS. You may
not need all of the data in the event log. Maybe you only need to store a certain subset of that
information.

So you may be able to filter or parse the data based on a particular application or based on a time of day. In Linux, you’ll find the log information in the /var/log directory. And in Windows, you can gather all of the details in the Event Viewer application.

We’re often very focused on gathering information from a digital machine. But often you can gather
important details from the users of those devices so you may want to perform interviews. Interviews
will allow you to ask questions and get information about what a person saw when a particular security
event occurred.

You want to be sure to perform these interviews as quickly as possible after the event, especially since
people may leave the organization or they may forget what happened during that particular time frame.
The challenge we have with witness statements is that they may not be 100% accurate, because people may see or hear things during the event but may not accurately describe them during an interview.

And of course, once all of this data is collected, there needs to be an analysis and report of exactly what
occurred during that security event. This might start with a summary, which would provide a high level
overview of what occurred during the security event. There should also be detailed documentation that
describes how the data was collected, the analysis that was performed on that data, and the inferences
or conclusions that can be gathered based on that analysis.

There should also be detailed documentation about the data acquisition process. We need to know step
by step exactly what data was gathered and how that information was gathered. We can then provide
detailed information about the analysis of that data. And once we’ve collected the data and analyzed
the data, we need to document what conclusions we can make based on that analysis.

Forensics Data Acquisition – SY0-601 CompTIA Security+ : 4.5


Capturing digital data is a series of technical challenges. In this video, you’ll learn about capturing data
from disk, RAM, swap files, operating systems, firmware, and other sources.

One challenge you have when collecting data from a system is that some of the data is more volatile
than others. That means that certain data will be stored on the system for an extended period of time,
while other pieces of data may only be here for a few moments. Therefore we need to start collecting
data with the information that is the most volatile and then we’ll work down to the data that is the least
volatile.

Data that is very volatile is data that’s in your CPU. So things like your CPU registers or CPU cache should be the very first thing you gather. Second would be information that will be around for a little bit longer than CPU information, but not much longer. Things like routing tables, ARP cache, process tables, and information in memory will probably be the second most volatile.

From there, we can start to look at files that might be stored on our system. Temporary file systems
would be next on our list, followed by other information that’s normally stored on a drive. If any of
the information on the system is sent to a remote logging facility, it may be here for an extended period
of time.

So we want to be sure we’re looking at that monitoring data, and then as we move further down the list, we find information that rarely changes, such as the physical configuration of the device or the topology of the network.

And lastly, information that could be around for years is information in your backups and in your archival media.

There’s a great deal of information stored on a system’s hard drive or SSD. So it’s useful to know the best way to gather that information for forensics. The first thing we should do is prepare the drive to be imaged.

We would normally power down a system so that nothing could be written to that drive, and often we’re removing the storage drive from the system so that we can then connect it to a device specifically designed for imaging. These are usually handheld systems that are designed with write protection so that nothing on that drive can be altered.

We would then copy everything from that drive. And by copy everything, I mean a bit-for-bit
representation of everything contained on that particular storage device. This is going to preserve all of
the data, even information that normally would be in a deleted file or be marked for deletion. This way
we’re able to collect the entire drive in its exact form and later on we can provide analysis of exactly
what we found in that image.
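
As an illustration only, here’s a minimal sketch that creates an image with dd and records a hash of the result for later verification; the device name and output path are placeholders, and in practice you would use a hardware write blocker and a purpose-built imaging tool.

    # Minimal sketch: create a bit-for-bit image of a drive with dd and record a
    # hash of the image for later verification. The device name and output path
    # are placeholders, and a hardware write blocker would normally be in place.
    import hashlib
    import subprocess

    SOURCE_DEVICE = "/dev/sdb"        # hypothetical evidence drive
    IMAGE_FILE = "evidence.dd"

    subprocess.run(
        ["dd", f"if={SOURCE_DEVICE}", f"of={IMAGE_FILE}",
         "bs=4M", "conv=noerror,sync", "status=progress"],
        check=True)

    h = hashlib.sha256()
    with open(IMAGE_FILE, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    print("SHA-256 of image:", h.hexdigest())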

Another important source of data would be the information in memory. This can be difficult to gather, not only because this information changes constantly, but because the process of capturing the information from memory can change a portion of that memory. There are third-party tools available that can provide a memory dump. They will take everything that’s in the active memory of the system and copy it to a separate system or a separate connected device.

We want to gather as much as we can from the memory because some of this information is never
written to a storage drive. Things like your browsing history, clipboard information, encryption keys, or
your command history may be found in memory but may not show up on the storage drive itself.

Our modern operating systems have a temporary storage area called a swap or a pagefile. And depending on the operating system you use, these have slightly different uses.

In many cases, the swap drive is an area of your storage device that you can use to swap information out of your random access memory and free up that memory for other applications to execute. We may need more room for an application to execute, so we’ll swap some information out of memory and store it temporarily on our local drive, perform the execution inside of memory, and then pull everything off of that drive and put it back in the active RAM.

The swap might also contain portions of an application. We can take an application that we’re not
currently using, we can transfer it out of our active memory, store it temporarily on our local drive so
that other applications are able to execute. You can think of this as an extension to a memory dump. So
as we’re taking the memory dump and gathering information from active RAM, we want to be sure to
also gather information from the swap.

There’s a great deal of information we can get from the operating system itself and there may be files
and data that can help us understand more about the security event that we’re investigating. We might
want to start by looking at the core operating system files and libraries and compare those to what a
known-good operating system file and library would look like. This is something that you can usually
capture with a drive image so that you can later perform that analysis.

But there’s other information that’s in the operating system as it’s running. For example, we can look at
the number of logged-in users and who those happen to be, we can see what ports might be open on
that device, we could see what processes are running in that operating system, and understand what
devices are currently attached to that system. If we’re investigating a malware infection or a
ransomware installation, those details from the operating system can provide important information
during our analysis.
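
Here’s a minimal sketch of gathering some of those live details with Python, assuming the third-party psutil library is installed; listing ports and processes may require elevated privileges.

    # Minimal sketch of collecting live operating system details, assuming the
    # third-party psutil library is installed. Listening ports may require
    # elevated privileges on some platforms.
    import psutil
    from datetime import datetime

    print("Logged-in users:")
    for user in psutil.users():
        print("  ", user.name, "since", datetime.fromtimestamp(user.started))

    print("Listening ports:")
    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_LISTEN:
            print("  ", conn.laddr)

    print("Running processes:")
    for proc in psutil.process_iter(["pid", "name"]):
        print("  ", proc.info["pid"], proc.info["name"])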

Collecting the same type of information from a mobile device can be a bit more of a challenge, but there
are tools available to help gather details from smartphones and from tablets. There are capture
methods available.

You could either use a backup file that was previously made from that device, or you can connect
directly to the device usually over USB, and create a new image from that device. Inside of these
smartphones and tablets, you may be able to find information about phone calls, contact information,
text messages, email data, images, movies, and so much more.

With some security events, you may find that the firmware of a device has been modified. This is not unusual if we start to look at some of the cable modems and wireless routers that are in use, and how some attackers have completely replaced the firmware to gain access to those systems.

Since we are talking about firmware, this would obviously be associated with a particular product and a particular model of a product, and the attacker is usually the one gaining access to the device to then install this updated or hacked version of the firmware.

Getting access to the firmware may help us understand how this device was exploited. We might also be
able to determine once this firmware is installed, what functionality did the attacker have. And lastly, if
this device is still running, we may be able to see real-time data being sent to and from this device.

If you’re working with a virtual machine, or VM, you may be able to gather details from a snapshot. We can think of snapshots as a way to image a virtual machine. This commonly starts with the very first original snapshot, and you can think of this as the full backup of the system.

It’s common to then take subsequent snapshots of this VM. Especially, if we’re going to make changes to
the virtual machine, and each snapshot is an incremental update from the last snapshot that was taken.
If we then wanted to recreate this virtual machine, we would need the original snapshot and all of the
incremental snapshots that were taken since that point.

Once we have this snapshot, we have a complete image of the system. We have everything in the file
system of that virtual machine. We can see the operating systems, the applications, the user data, and
everything else in that OS.

Our operating systems and applications are able to speed themselves up through the use of a cache. A
cache is a temporary storage area and it’s designed to speed up the performance of an application or an
operating system. There’s many different kinds of caches. You’d find caches available on your CPU,
there’s disk caches, caches available for a browser, or caches that are connected to the network.

These caches often contain very specialized data. If we’re talking about a CPU cache, then everything in
that CPU cache will be focused on the operation of a single CPU. If we’re looking at an internet browser
cache, then we’re looking at a broader amount of data that is used specifically by an internet browser.

The cache usually stores information that was queried originally, so that if an identical second query is made, we can simply go to the cache instead of performing the query against the original service. This speeds up the process, since we don’t have to go all the way to the original service to find an answer that we had already asked for previously.

This is something that usually is also temporary. So once we write information to a cache, that
information usually times out, or it’s erased when the cache fills up. We might also find that some
caches may stick around for a very long period of time.

A good example is the browser caches in your system, where information may be there for days or
weeks. If you were to look into a browser cache, you would not only see the URLs of the locations you
were visiting, but you would also see the information that made up that page, including the text and the
images.

Your network also contains a wealth of information. You can see all of the different connections being
made over the network. And in some cases, you may be able to capture the raw data that was sent over
the network. It might also be useful to see what sessions were created from this device, and what sessions were inbound to the device. You could also break this out by sessions created by the operating system, and sessions created by the applications.

And in larger environments, you may find that there are extensive packet captures occurring and storage of large amounts of data that’s being sent across the network. That would allow you to effectively
rewind back in time and see the raw data that was transferred through the network. There might also be
smaller packet captures available on security devices such as firewalls, and intrusion prevention
systems.

And once we’ve looked through all of those locations, we may still find other bits of data that are stored
in different places in memory or on your storage drive. We refer to these as artifacts. And these artifacts
may be something that is stored in a log.

It may be flash memory. It could be the cache files that are used by the prefetch process of Windows. It
might be information that’s stored in the recycle bin. And the information you’re storing in your browser
bookmarks or your login records might also be considered an artifact.

On-Premises vs. Cloud Forensics – SY0-601 CompTIA Security+ : 4.5


Performing forensics in the cloud provides additional challenges to the security professional. In this
video, you’ll learn about right-to-audit clauses, regulatory issues, and data breach notification laws.

Up to this point, we have been describing our digital forensics process with devices that would be in our
possession. It would be a computer, a laptop, a mobile device of some kind, but we also need to think
about how we perform digital forensics on devices that may be in the cloud. Obviously, cloud-based
services are not in our immediate possession, we don’t have physical access to these devices. In fact, we
may have very limited access to this particular device because it is located in another facility, that is
somewhere in the cloud.

It might also be very difficult to associate cloud-based data to one specific user. There are many people
accessing this cloud-based service simultaneously, and picking out an individual’s piece of data may add
additional complexity to the forensics process. And there might also be legal issues associated with this
cloud-based data, especially since the rules and regulations around this data may be different depending
on where you are in the world, and where the data may be located.

Before you’re put into a position where you would need to access this cloud-based data for forensics
purposes, it would be valuable to have already created an agreement on how this data could be
accessed. So if you’re working with a cloud provider, or a business partner, it will be useful to qualify
how the data should be shared and how the outsourcing agreement would work. We might also have a
concern about how safe this data might be at a third party provider. So it’s not uncommon to work with
that provider to create a right to audit clause in the agreement. That would give you permission to know
where the data is being held, how the data is being accessed over the internet, and what security
features may be in place to protect that data.

As the initial contract with the cloud provider is being created, a right to audit clause can be added that
would specify how you would be able to create a security audit of that data. Everyone would agree to
those terms and conditions and the contract will be signed. This would allow you access to perform
security audits and to make sure the data’s safe, well before you would run into the situation where a
security breach might occur.

The technology behind cloud computing is evolving rapidly, and the legal system is trying to catch up
with all of these changes with the technology. This is why it’s going to be important for forensics
professionals to work very closely with the legal team, especially if they’re looking at data that may be
located in a different location. The regulations regarding the use and access to data in one location may
be very different than the rules in another location. And if we’re describing a cloud-based application,
the data may be located in a completely different country.

In that particular case, the physical location of the data center may determine the legal jurisdiction for
that data. From a forensics perspective, this could work against you when you’re trying to perform any
type of analysis. For example, some countries don’t allow any type of electronic searches if the search is
coming from outside of their country. So it may be very important to include your legal team as you’re
stepping through the process of digital forensics in these cloud-based locations.

Another concern are the notification laws associated with data breaches and how they would affect you
depending on where the data may be located. Many states or countries have laws or regulations that
state, if any consumer data happens to be breached, then the consumers must be informed of that
situation. And like the legal issues we have regarding where the data is stored, the data breach
notification laws may be different depending on where that data would be stored. If you have a cloud-
based application, you may be storing information from all countries into a single database and a breach
of that data may have a very broad impact on who gets notified.

You might also find that the notification requirements might be very different depending on the
geography. There might be rules and regulations regarding the type of data that is breached, and what
type of notification should be made. So if the breach is only someone’s name or email, is that different than if it’s their name, email address, and telephone number? It’s also important to know who needs to
be notified if a breach occurs, and how quickly you would need to notify them after a breach has been
identified.

Managing Evidence – SY0-601 CompTIA Security+ : 4.5


Once evidence has been collected, the data must be managed properly. In this video, you’ll learn about
data integrity, preservation, e-discovery, data recovery, non-repudiation, and strategic intelligence.

When you’re collecting data for evidence, you want to be sure that nothing is going to change with the
information that you’ve collected. One way to ensure this is to create a hash of that data. This is a way
to cryptographically verify that what you have collected is going to be exactly the same as what you’re
examining later.

You can think of this as a digital fingerprint. You would take that fingerprint or create that hash when
you first collect the data. And then you would verify that hash whenever you perform the analysis to
make sure that nothing has changed in the meantime.

A relatively simple integrity check can be done with a checksum. This is very commonly done with
network communication to make sure that the information that we’ve sent from one side of the
network to the other has shown up without any type of corruption. This isn’t designed to replace a hash,
but it does provide a simple integrity check that might be useful in certain situations.

And we also have to think about the original source of this data. We refer to this as provenance. This
provides us with documentation of where this data originated. It’s also useful to have a chain of custody
so you know exactly where this data has been since the time it was taken. This might even be an
opportunity to take advantage of newer blockchain technologies that can provide more detailed
tracking of information.

It’s important when working with data as evidence that we are able to preserve this information and to
verify that nothing has changed with this information while it’s been stored. We commonly will take the
original source of data and create a copy of that data, often imaging storage drives or copying
everything that might be on a mobile device. This becomes especially useful for these mobile
smartphones, since it is possible to remotely erase these devices.

This is not always as simple as powering down the system, removing a drive, and then imaging the
information that’s there, especially since many drives are configured with full disk encryption. And
powering down the system could cause all of that data to be inaccessible. We often have to think about
different techniques when we’re gathering this data, especially if encryption is in use.

We want to be sure that when we’re gathering this information that we’re using the best practices. This
will be especially useful if this information is being used later on in a court of law because they will be
examining the process you took to gather these details.

There’s a legal mechanism used to gather information called discovery. And when we apply this to
digital technologies, it’s referred to as e-discovery. The process of e-discovery is about gathering the
data.

We aren’t examining the information. We’re not analyzing the information that we’re gathering. We’re
simply going through a list of information that’s been requested, and we’re gathering all of those details,
and providing it to the legal authorities.

The process of e-discovery often works in conjunction with digital forensics. For example, with e-
discovery, we may be requested to obtain a storage drive and provide that to the authorities. The
authorities would then look at that drive and notice that the information on that drive is actually smaller
than what they expected. At that point, they can bring in some digital forensics experts that can
examine the drive and attempt to recover any data that may have been deleted.

Recovering missing data can be a complex process. There’s no single way to go about recovering data.
So it takes extensive training and knowledge to know exactly the best way to do it.

The exact process someone might go through might vary based on whether the files were simply
deleted on the drive. Were the files deleted and then the recycle bin was deleted? Or were the files
simply hidden, but are still contained on the storage drive?

Was there corruption with the data associated with the operating system or the application? Or was the
storage media damaged itself? All of these situations can have some type of data recovery associated
with them if we use the correct techniques.

Another important part of this process is knowing exactly who sent the data originally. If we can ensure
that the information that we’ve received is exactly what was sent and we can verify the person who sent
it, then we have what’s called non-repudiation. With non-repudiation, we not only know who sent the
data, but we have a high confidence of exactly who sent that information. This means that the only
person who could have sent the data is that original sender.

There are commonly two ways to provide non-repudiation. One is with a message authentication code, or MAC. With message authentication codes, the two parties that are communicating back and forth are the two that can verify that non-repudiation. This is a little bit different than a digital signature
where anyone who has access to the public key of the person who wrote the information can verify that
they sent it. This is obviously a much broader non-repudiation since it would be verified by anyone and
not just the two parties in the conversation.
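
Here’s a minimal Python sketch of a message authentication code using a shared key; the key and message are examples, and only the two parties holding that key could verify the result.

    # Minimal sketch: a message authentication code using a key shared by the two
    # parties. Only someone holding the shared key could have produced this tag,
    # so the receiver can verify the sender; the key and message are examples.
    import hmac
    import hashlib

    shared_key = b"key-exchanged-securely-in-advance"
    message = b"Transfer the server logs to the analysis team."

    tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
    print("Message:", message.decode())
    print("MAC:    ", tag)

    # The receiver recomputes the MAC and compares in constant time
    received_tag = tag
    print("Verified:", hmac.compare_digest(received_tag, tag))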

Gathering evidence can also be done by using strategic intelligence. This is when we are focusing on a
domain and gathering threat information about that domain. We might want to look at business
information, geographic information, or details about a specific country.

We might get much of this information from threat reports that we create internally or information that
we’re gathering from a third party. There might also be other data sources, especially with open source
intelligence, or OSINT, that could even provide additional details. And if we’re looking at information over
an extended period of time, we may be able to track certain trends that would give us more information
about the threat.

If we’re the subject of someone’s strategic intelligence, we may want to prevent that intelligence from
occurring. And instead, we would perform strategic counterintelligence or CI. With CI, we would identify
someone trying to gather information on us. And we would attempt to disrupt that process. And then
we would begin gathering our own threat intelligence on that foreign operation.
