Professor Messer Sec+ Domain 3

Professor Messer’s CompTIA SY0-601 Security+ Training Course

How to Pass Your SY0-601 CompTIA Security+ Exam


Are you planning to take your SY0-601 CompTIA Security+ exam? In this video, you’ll learn how to
prepare and which resources will give you the best chance for success.

Secure Protocols – SY0-601 CompTIA Security+ : 3.1


If you’re going to communicate on a network, you should ensure the use of secure protocols. In this
video, you’ll learn how to secure phone conversations, email, file transfers, directory services, and more.

If you’ve ever used voice over IP or a voice over IP telephone like this one then you’ve used the real time
transport protocol. There is an encrypted version of that protocol called the secure real-time transport
protocol. You’ll sometimes see this referred to as secure RTP or very simply SRTP.

The goal with SRTP is to take conversations that normally would not be encrypted across the network and
add encryption for security so that nobody can listen in to your conversation. The encryption used for
SRTP is AES, which ensures that your communication with voice over IP or video over IP will be secure. But
there’s more to SRTP than simply encryption. There are additional security features, such as
authentication, integrity, and replay protection. This is accomplished by using HMAC-SHA1, which is a
hash-based message authentication code using the hashing algorithm SHA-1.

One of the challenges we have with legacy protocols that we’ve used for so many years on the internet
is that they were never originally designed with any security features. A good example of this is the
Network Time Protocol, or NTP. The original specifications for NTP didn’t include any security features,
and we’ve noticed recently that attackers have taken advantage of this by using NTP in amplification
attacks when they’re performing distributed denial of service attacks.

Thirty years after it was introduced, NTP has now started to have additional security features added. These
are part of NTPsec, the Secure Network Time Protocol. This update has added a number of security
features to NTP and has cleaned up some of the old code to remove some existing vulnerabilities. As users
send an increasing amount of email, we of course need to be able to keep the information within those
emails confidential.

One way that you can do this is with S/MIME, which stands for Secure/Multipurpose Internet Mail
Extensions. This is a public/private key encryption mechanism that allows you to protect the information
using that encryption and to provide digital signatures for integrity. Since S/MIME requires this
public/private key pair, there needs to be some type of public key infrastructure, or PKI, in place in order to
properly manage these keys.

Another common way to send and receive email is using POP3 or IMAP, and there are some
security extensions to those protocols as well. With POP3 you can use a STARTTLS extension to include
SSL as part of that POP3 communication, and with IMAP you can choose to use secure IMAP, which also
uses SSL. And if you’re using browser-based email such as Google Gmail or Yahoo Mail, then you should
always be using an encrypted communication, and your browser should always be using SSL to provide
that confidentiality.
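
As a rough sketch of what these secure variants look like in practice, here is a short Python example using
the standard library’s imaplib and poplib modules; the hostname and credentials are placeholder values.

    import imaplib, poplib, ssl

    ctx = ssl.create_default_context()

    # Secure IMAP: an implicit TLS connection on port 993
    imap = imaplib.IMAP4_SSL('mail.example.com', 993, ssl_context=ctx)
    imap.login('user@example.com', 'password')
    imap.select('INBOX')
    imap.logout()

    # POP3 with STARTTLS: begin in plaintext on port 110, then upgrade to TLS
    pop = poplib.POP3('mail.example.com', 110)
    pop.stls(context=ctx)
    pop.user('user@example.com')
    pop.pass_('password')
    pop.quit()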

We often refer to this browser-based encryption as SSL, which stands for Secure Sockets Layer. In
reality, we’re using a newer version of SSL called TLS, which is Transport Layer Security, and these days if
somebody is using the older term Secure Sockets Layer, or SSL, they are actually referring to TLS. If we are
sending encrypted data over that connection, it’s using the HTTPS protocol, which stands for HTTP
over TLS or HTTP over SSL, and sometimes you’ll see it referred to as HTTP Secure. The most common
form of HTTPS is going to use a public key encryption method, and it’s going to use that public and
private key pair in order to transfer a symmetric key across the network so that a session key can then
be used symmetrically during the communication.
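
To see this negotiation in action, here is a minimal Python sketch that opens a TLS connection and reports
the protocol version and cipher suite that were negotiated; the hostname is just an example.

    import socket, ssl

    ctx = ssl.create_default_context()   # validates the server's certificate chain
    with socket.create_connection(('www.example.com', 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname='www.example.com') as tls:
            print(tls.version())   # e.g. 'TLSv1.3'
            print(tls.cipher())    # the negotiated cipher suite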

This is the backbone for most of the encryption that we’re doing on the internet. If you’re on the
professormesser.com website or the youtube.com website, then you are using HTTPS to be able to send that
information privately. If you need to communicate between two locations across the internet in a secure
form, then you’ll probably need to use some type of encrypted tunnel. One of the most common types
is IPsec, or Internet Protocol Security. This allows you to send information across the layer 3 public
internet but encrypt the data so that all of that information remains confidential.

IPsec also includes packet signing for integrity and anti-replay features. One nice part of IPsec is that it is
so standardized that you can use different manufacturers’ equipment on both ends of the tunnel, and
both of those devices will be able to communicate with each other, because this is such a well-known and
well-established standard. If you’re implementing an IPsec tunnel, you’ll be using two main protocols: the
first is the Authentication Header, or AH, which provides the integrity, and the other is the Encapsulating
Security Payload, or ESP, which provides the encryption.

If you’re transferring files between devices, you’ll also want to use a secure protocol for those transfers. Two
of the most common are FTPS and SFTP. FTPS is FTP Secure, and it uses SSL to encrypt the information
that we’re sending with that FTP client. Although the name is very similar, FTPS uses a completely
different mechanism to communicate than SFTP. SFTP is the SSH File Transfer Protocol, so where FTPS
uses SSL to provide the encryption, SFTP uses SSH to provide that encryption.
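
As a rough sketch of what SFTP looks like from a script, here is an example using the third-party paramiko
library that lists a directory, downloads a file, and removes a file over one SSH connection; the host,
account, and paths are placeholders, and it assumes the server’s host key is already trusted.

    import paramiko

    client = paramiko.SSHClient()
    client.load_system_host_keys()          # trust hosts already in known_hosts
    client.connect('files.example.com', username='backup',
                   key_filename='/home/backup/.ssh/id_ed25519')

    sftp = client.open_sftp()
    print(sftp.listdir('/srv/exports'))                  # directory listing
    sftp.get('/srv/exports/report.csv', 'report.csv')    # download a file
    sftp.remove('/srv/exports/old.log')                  # remove a file
    sftp.close()
    client.close()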

SFTP also includes some additional management capabilities. For example, you can resume interrupted
transfers, get a listing of the directories that are on a device, remove files and directories, and manipulate
the file system using the SFTP protocol.

Most enterprise networks will have a central directory where information is stored on the network. This
directory can be accessed using common protocols. One of those protocols is LDAP, which stands for the
Lightweight Directory Access Protocol.

The standard for having a centralized directory on the network comes from the X.500 standard by the
International Telecommunication Union. The original standard was called DAP, the Directory Access
Protocol, and it ran on the OSI protocol stack. When this was updated for TCP/IP networks, they created the
LDAP version of this protocol. If you’re using Microsoft Active Directory, Apple’s Open Directory, or
OpenLDAP, then you’re using a directory that can be accessed using this standardized LDAP protocol.

There’s a non-standard version of LDAP that provides a level of security. This is LDAP Secure, or LDAPS,
and very similar to other protocols, LDAPS uses SSL to be able to communicate securely with an LDAP
server. A more common form of security is SASL, or the Simple Authentication and Security Layer, which is
a framework that many different application protocols can use to be able to communicate securely.
LDAP uses SASL for this, and it can authenticate using Kerberos, client certificates, and other methods as
well.
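
As an informal sketch, here is how an LDAPS connection might look from Python using the third-party
ldap3 library; the server name, bind account, and search base are placeholder values.

    import ssl
    from ldap3 import Server, Connection, Tls

    tls = Tls(validate=ssl.CERT_REQUIRED)                 # require a valid server certificate
    server = Server('ldap.example.com', port=636, use_ssl=True, tls=tls)
    conn = Connection(server, user='cn=reader,dc=example,dc=com',
                      password='example-password', auto_bind=True)

    conn.search('dc=example,dc=com', '(uid=jsmith)', attributes=['cn', 'mail'])
    print(conn.entries)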

We spoke earlier of using SFTP to transfer files over SSH. SSH stands for Secure Shell, and it’s commonly
used to provide a terminal screen that encrypts the information between the client and the server. SSH
effectively replaces the older Telnet protocol, which also provided a terminal screen like this but had no
encryption mechanism. This made SSH a very popular upgrade, and it’s very common now to use SSH
almost exclusively when doing any type of terminal communication.

DNS, or the Domain Name System, is another one of those legacy protocols that was originally created
without any type of security features. This allowed attackers to change the information that was being
sent to and from a DNS server, effectively allowing them to redirect traffic to whatever server they’d like.
To avoid this, we’ve added additional security features to DNS with DNSSEC, which stands for the
Domain Name System Security Extensions. DNSSEC gives us a way to validate the information we’re
getting from a DNS server so that we know that it really did come from the DNS server that we were
requesting it from and that the information was not changed as it went through the network.
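
As a rough sketch, this Python example uses the third-party dnspython library to ask a validating resolver
for a DNSSEC-signed answer and checks whether the resolver set the Authenticated Data (AD) flag; the
name and resolver address are placeholders.

    import dns.flags
    import dns.message
    import dns.query
    import dns.rdatatype

    # Build a query that requests DNSSEC records along with the answer
    query = dns.message.make_query('www.example.com', dns.rdatatype.A, want_dnssec=True)
    response = dns.query.udp(query, '9.9.9.9', timeout=5)

    # A validating resolver sets the AD flag when the signatures check out
    print('validated:', bool(response.flags & dns.flags.AD))
    for rrset in response.answer:
        print(rrset)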

We’re able to do this using public key cryptography. We can sign the information that we’re adding to a
DNS server, and then the recipient of that information can verify that the information is correct based on
those digital signatures. If you’re in charge of managing switches or routers, then you’re performing a
number of different communications to those devices, and you need to be sure that that communication
is secure.

We’ve already discussed how connecting to these devices with a terminal can be protected by using SSH,
or Secure Shell. If you’re querying your routers or switches for information, then you’ll use the SNMP
protocol, and if you want to make sure that’s secure, then you should use at least version 3 of SNMP. The
secure version is SNMPv3, the Simple Network Management Protocol version 3. This third version
added encryption so we can have confidentiality of the data. We also have integrity and authentication
capabilities, so that we know the data wasn’t changed as it went through the network and we can be
assured that we’re communicating directly with that device and receiving responses from that device
without anyone modifying that information in the middle of the conversation.
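
As an informal example, here is an SNMPv3 query with authentication and privacy (encryption) using the
third-party pysnmp library; the device address, user name, and passphrases are placeholder values.

    from pysnmp.hlapi import (SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
                              ObjectType, ObjectIdentity, getCmd,
                              usmHMACSHAAuthProtocol, usmAesCfb128Protocol)

    # authPriv: SHA-1 authentication plus AES-128 encryption of the SNMP payload
    error_indication, error_status, error_index, var_binds = next(getCmd(
        SnmpEngine(),
        UsmUserData('monitor', 'auth-passphrase', 'priv-passphrase',
                    authProtocol=usmHMACSHAAuthProtocol,
                    privProtocol=usmAesCfb128Protocol),
        UdpTransportTarget(('192.0.2.1', 161)),
        ContextData(),
        ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

    print(error_indication or var_binds)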

Although it’s common to use SSH to modify the configuration of a switch or a router at the command
line, it’s also becoming common to do this from a web browser. So we’ll want to use HTTPS rather than
the insecure HTTP to make sure that all of our browser communication is running over an encrypted
connection. We rely on the Dynamic Host Configuration Protocol, or DHCP, to automatically assign IP
addresses to the devices on our network. But DHCP does not include any particular security functionality
within the original specification, and so there are opportunities for attackers to be able to manipulate
this information or modify what people are seeing from a DHCP server.

In order to enhance the security of DHCP, we’ve added additional controls outside of the DHCP protocol.
For example, with Active Directory you can avoid rogue DHCP servers by authorizing which devices are
able to act as DHCP servers on your network. Many switches can also be configured to monitor for DHCP
communication and only allow DHCP to come from trusted interfaces on that switch. If a switch sees
DHCP being sent from an untrusted interface, it can block that communication. On Cisco switches you’ll
see this configuration referred to as DHCP snooping.

Another attack you might see with DHCP is where the attacker will change their MAC address and use up
all of the available IP addresses that are in a DHCP pool, effectively causing starvation, or limiting the
number of IP addresses that are available to other devices on the network. To prevent this from
occurring, we can make other configurations to the switch that will limit the number of MAC addresses
that can be seen from any particular interface. If an interface is connected to a single workstation, we
would only expect to see a single MAC address from that interface. If suddenly we see a large number of
MAC addresses appearing, that interface can automatically disable itself and prevent a DHCP
starvation attack.

There are a number of different devices on our network that are constantly updating themselves
automatically. For example, antivirus and anti-malware software will update their signatures, we have
intrusion prevention systems that require updates to their signatures, and we might even have
firewalls that update a huge list of IP addresses to be able to block known malicious locations. One
challenge we have with managing these updates is that each one of these devices is commonly using a
different method to perform the update, using different protocols and communicating to
different IP addresses. It may require that we examine each one of these devices individually to
understand more about the protocols that it uses during the update process, so that we can configure
firewall rules and trust relationships to only allow that device to receive updates from specific, well-known,
and trusted servers.

Endpoint Protection – SY0-601 CompTIA Security+ : 3.2


We use many different techniques to keep our endpoint devices safe. In this video, you’ll learn about
anti-malware, EDR, data loss prevention, host-based firewalls, and more.

We’re keeping an increasing amount of sensitive information on our computing devices and we’re using
different kinds of computing devices. We might have a desktop computer, a laptop, a tablet, or a mobile
phone. With each of these devices, we’re concerned about attackers gaining access inbound to these
devices or sending the information that we have outbound to their own servers.

Of course, we have to think about all of the different operating systems that we’re using, all of the
different platforms, and maintaining the security for each one of those. With all of these devices, there
will be different security techniques and different mechanisms that we’ll use to keep these safe. We call
this layered protection, or defense in depth. And so in this video, we’ll look at some of the security features
that we might add to our endpoint devices.

Antivirus and anti-malware are some of the most common security features that we might add to our
computing devices. Our endpoints have antivirus loaded on them, but very often this means antivirus
along with other anti-malware software.

This is designed to stop viruses, worms, trojan horses, and other types of malicious software attacks.
Along with antivirus, we have anti-malware software that we’re running. Anti-malware software is
commonly blocking fileless malware, ransomware, and other types of malware attacks. If you have
antivirus software on your device, then you probably effectively also have anti-malware software as
well.

These capabilities have been generally combined into the same software suite. So if you’re running one
of these, you’re probably running the other as well. One of the challenges with antivirus and anti-malware
software is that they tend to focus on identifying malicious code through the use of signatures.
Signatures are a set pattern that may be within the file or within the memory that is being used by this
malicious software.

The attackers though have found many ways around signature-based detection. And so we’ve had to
change the way that we’re looking for a lot of this malicious software. To do that, we use endpoint
detection and response or EDR. EDR is going to use other mechanisms to find malicious software other
than just signatures. So instead of looking for a signature to occur within a file, we can look at what the
file is doing.

We can use machine learning and process monitoring to see if we can identify malicious types of actions
on our computer, and block the actions rather than blocking a signature. This can be done from a
relatively lightweight agent that’s running on the endpoint, and can constantly be watching for these
types of problems to occur.

This EDR solution can often perform a root cause analysis, to determine why this particular behavior
occurred in the first place, and can generally find the code that was being used as that malicious
software. It can then respond to the threat by isolating the system from the rest of the network, it can
quarantine that malicious software into a different part of the operating system, or remove what’s on
the system now and roll back to a previously known good configuration.

Of course, all of this can be automated through the use of application programming interfaces or APIs,
which means that the identification, the removal, and the restoration of this system can be done
without the need to involve any individual technician. Organizations are keeping databases of very
sensitive information. They can have medical records, social security numbers, credit card numbers, and
other types of confidential data.

So how can you prevent that sensitive information from being sent across the network in a form that
someone would be able to see? To be able to do that, we use data loss prevention or DLP. DLP is
designed to stop data leakage. It’s designed to prevent this sensitive data from being sent across the
network in the clear, or even sent across the network in an encrypted form.

The challenge, of course, is that there’s so many different places that could be sending this information
and so many destinations that it could be going to. So DLP often involves many different systems. This
could be a DLP solution based in a firewall, it could be something in client software, on each individual
system, or it might be based in the cloud so that it can examine all of the emails that may be going in or
out of an organization.
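
As a very simplified sketch of the pattern-matching idea behind DLP, here is a Python fragment that flags
text containing something shaped like a U.S. social security number or a payment card number; real DLP
products do far more (Luhn checks, file parsing, decoding, policy actions).

    import re

    SSN_PATTERN = re.compile(r'\b\d{3}-\d{2}-\d{4}\b')          # e.g. 266-12-1112
    CARD_PATTERN = re.compile(r'\b(?:\d[ -]?){13,16}\b')        # loose card-number shape

    def contains_sensitive_data(text: str) -> bool:
        return bool(SSN_PATTERN.search(text) or CARD_PATTERN.search(text))

    print(contains_sensitive_data('Patient SSN: 266-12-1112'))  # True -> block or quarantine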

If the DLP solution identifies some sensitive data within any of these data streams, it can block that
information from being transferred outside of your private network. Traditionally, we’ve used firewalls
to allow or block traffic based on an IP address and a port number. But these days we need to provide
more security, and we need to do it across a much more granular scale.

To be able to provide this functionality, we need to use next-generation firewalls or NGFWs. A next-
generation firewall is able to identify the applications that are flowing across the network, regardless of
the IP address or port number that might be in use. And you as the security professional can set policies
to allow or disallow access to those applications on the network.

Although next-generation firewall is probably the most common name for this, you might also hear it
called an application layer gateway, stateful multilayer inspection, or deep packet inspection. A next-
generation firewall can not only identify the applications running over the network, it can identify
individual features within the application.

So you could set security policies that would allow someone to view the information on Twitter but
prevent them from posting any information on Twitter. Most next-generation firewalls also have
antivirus and anti-malware capabilities, so they can look for known malicious software and block it at the
network level.

It’s also common to see next-generation firewalls have an SSL decryption capability so that any of the
SSL traffic can be decrypted, examined by the firewall, and then re-encrypted and sent on its way. And
next-generation firewalls often have a URL filtering capability, so that you can block individual access to
a particular website or you can block it based on categorizations. You might want to allow access to
travel websites but prevent any access to auction websites.

It’s not only useful to have firewalls that are on the network, it’s also useful to have firewalls on each
individual endpoint as well. These are host-based firewalls and it’s software that we would run in the
operating system of our endpoints. Because this software is running on our endpoints, it can see all of
the different applications that are in use and it can allow or disallow communication for each individual
app.

Even if the information being sent across the network is encrypted, a host-based firewall is able to see
the in-the-clear traffic that exists on the individual endpoint. And since this host-based firewall is on our
individual endpoint, it can see everything going on with the operating system.

So we can identify any unknown processes that are trying to start and it can block that malware before
anything is executing on the endpoint itself. It’s also common to manage all of these endpoints centrally
so that you can put host-based firewalls on all of your devices, and be able to view and manage them all
from one single point.

Another type of software for our endpoints is a host-based intrusion detection system. This is a
secondary type of security that will look through the log files on your system, to identify intrusions that
may be occurring, and at that point, the software can choose to reconfigure firewalls or other types of
security devices to prevent additional attacks to that computer.

These days it’s more common to have a host-based intrusion prevention system or a HIPS, where it has a
known set of vulnerabilities that it’s looking for, and if it sees any inbound attacks occurring, it can block
those immediately before they hit the operating system.

You may find that your computer isn’t running a separate piece of software that is a host-based intrusion
prevention system, but instead that this functionality is built in to the endpoint protection software that’s
being used for your antivirus or anti-malware.

These host-based intrusion prevention systems use many different techniques in order to identify
attacks. One of those might be a signature: we have a known vulnerability, and we can identify an attack
against it by looking at the flow of data itself as it’s going across the network. We might also have heuristic
functionality within the device, so that we can identify when large changes may be occurring and
investigate more into why those particular changes are happening on that device.

We also might have behavioral identification, so that if a certain behavior occurs, even though we don’t
have a signature and even though nothing major has changed that would cause the heuristics to fire, we do
have a way to identify something that is occurring a little bit out of the ordinary.

For example, we can have buffer overflows that have a known signature that can be identified by the
intrusion prevention system, or perhaps a large number of registry updates suddenly occurs and that
would fire the heuristics engine inside of the HIPS. And of course, writing files directly to the Windows
folder is certainly behavior that you would not expect to see, and the behavioral aspect of that software
would identify something malicious occurring whenever anything is written into that folder.

And of course, since this is running on our endpoints themselves, it has full access to all of the data that
is on the system, even data that may be in the clear and not encrypted as we are processing that
information in memory.

Boot Integrity – SY0-601 CompTIA Security+ : 3.2


It’s important to secure the boot process to prevent the installation of malicious software. In this video,
you’ll learn about hardware root of trust, secure boot, trusted boot, and measured boot.

One consistent aspect of IT security is that the attackers are constantly going after our systems. They’re
trying to find new ways to gain access to our operating systems, and the data that we’re keeping on
those operating systems.

Once they compromise the device, they want to get embedded within that operating system, and they
don’t want to be thrown out or lose contact with that particular system. It’s very difficult to find a way
to exploit an operating system. It’s even more difficult to try to get in the second time.

That’s why the boot process would be a perfect place to try to get into an operating system and stay
there. Something like a rootkit has traditionally been able to work at the kernel level, which means it has
full control of the operating system, and you’re able to infect the system before the OS can even start.

Once that malicious software is operating at the kernel level, it effectively has full control of the
operating system. This is why protecting every part of the boot process becomes so important.

In this video, we’re going to look at secure boot, trusted boot, and measured boot, which are all
different parts of the boot process. This is something called the chain of trust. And it’s incredibly
important that these are in place to be able to protect our operating systems.

Anything we do with IT security is based on a level of trust that we have with the operating systems that
we’re using, and the software that we’re loading on that operating system. We’re concerned about our
data, and we want to be sure that our data is safely encrypted on that system. We want to be sure when
we visit a website, that we can trust that that website is going to be the legitimate one. And if we’re
using an operating system, we’re trusting that that operating system has not been infected.
But of course, this isn’t a blind trust. We put specific security controls in place to make sure that we can
rely on, and trust, that these systems are safe. For example, if you’re working on an individual system,
you probably have a Trusted Platform Module, or TPM. We’ll learn more about TPMs in this video.

And in a future video, we’ll look at hardware security modules, or HSMs.

It’s this hardware root of trust that gives us the ability to trust that the system is going to be safe and
secure. One significant security advantage of this hardware root of trust is that it’s hardware. It’s not
something that you can easily change by running malicious software, or changing something about an
application configuration. You have to physically change this if you have any hopes of modifying what’s
on the hardware of that system.

This also means the hardware has to be installed for the trust to be put into the system. So if you’re
running an operating system, there should be a Trusted Platform Module on your computer that gives
you that hardware root of trust.

The computing device you’re using may have a piece of hardware called a Trusted Platform Module, or
TPM. This is an image of a TPM module that can be installed onto this particular motherboard, or it may
be something that is integrated into the motherboard itself.

The TPM is designed to help with cryptographic functions that are used by applications within the
operating system. This can include a cryptographic processor, which is commonly used as a random
number generator, or a key generator.

You might also have memory on this Trusted Platform Module that’s able to store keys, especially keys
that can be burned in and not changed. This means that we can reference the Trusted Platform Module
to be able to obtain a unique value that’s not on any other computer that you might have.

This might also store information in the memory, such as encryption keys, or configuration information
about the hardware that this TPM is installed on.

And all of this information within the TPM is password protected. And before you think of using a brute
force attack to gain access to this system, it is already built with an anti-brute force technology, so that
you’re not able to constantly try to find the password that’s used for a particular TPM.

With the Trusted Platform Module providing the hardware security, our BIOS provides the software
security. And our UEFI BIOS has a function within it called secure boot.

This is part of the UEFI BIOS specification, so any device that has a UEFI BIOS is going to be able to use
secure boot. There are also protections built into the UEFI BIOS itself to protect the BIOS.

For example, the BIOS has the manufacturer’s public key as part of the BIOS software, and if there is a
BIOS update, then there’s a digital signature check to make sure that the BIOS update is really coming
from the manufacturer.

This means that someone can’t provide a fake version of a BIOS, or one that is malicious, and somehow
get around the update process. Once secure boot is enabled inside of that BIOS, we can now check the
bootloader of the operating system to make sure that no malicious software has changed any part of
that bootloader.
We’ll make sure that the bootloader’s digital signature verifies with the digital signature from the
operating system manufacturer. There is a trusted certificate that the bootloader must be signed by,
and that trusted certificate is compared to the digital signature that is in the bootloader.

The operating system’s bootloader must be signed by a certificate that is trusted, or it has to be a
manually approved digital signature, so that we know when we’re starting the operating system, that no
part of that bootloader has been changed by any malicious software.

Once the secure boot process is complete, we move to the trusted boot process. During this process the
bootloader, which we now know has not been changed, is going to verify the digital signature of the
operating system kernel.

This ensures that the operating system kernel has not been modified by any malware, and if there has
been any change, the boot process will stop. The kernel of the operating system will then verify other
parts of the OS, such as boot drivers and start up files, and make sure that those components remain
safe.

And just before the operating system starts loading any hardware drivers, it starts a process called
ELAM. This is the early launch anti-malware. The operating system will check every driver that it’s
loading to make sure that it is trusted. There’s a digital signature associated with these drivers, and it
will check every single one of those digital signatures.

If the driver fails that digital signature verification, or the digital signature is untrusted, then that driver
will not be loaded by the operating system.

Once the drivers are loaded, we can move to the measured boot process. This is the process that allows
us to measure if any changes have occurred with the operating system.

If this was just your computer, then you may have a number of checks that you perform to make sure
that your system is running normally. But if you have thousands of computers, there needs to be an
automated way to ensure that all of these operating systems have not been infected by any type of
malware.

Fortunately, there are some measurements that are taken that can help automate this process. That UEFI
BIOS is going to store a hash of the firmware, boot drivers, and anything else that’s loaded during the
secure boot and the trusted boot process. And the hash created by that information is stored within the
TPM of the system.
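
Conceptually, the TPM accumulates these measurements by “extending” a Platform Configuration Register
(PCR): each new measurement is hashed together with the register’s current value, so the final value
depends on every component measured along the way. Here is a rough Python sketch of that extend
operation, with SHA-256 assumed purely for illustration.

    import hashlib

    def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
        # new PCR value = hash(old PCR value || hash of the measured component)
        return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

    pcr = bytes(32)                                   # PCRs start at all zeros
    for component in [b'firmware image', b'bootloader', b'boot driver A']:
        pcr = pcr_extend(pcr, component)

    print(pcr.hex())   # any change to any measured component yields a completely different value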

Now that all of this information is gathered, we can begin a process called remote attestation, which
means that our device is going to provide a central management server with a verification report
showing all of the information that’s been gathered.

That report is going to be encrypted and digitally signed with the keys that are part of the TPM. And that
is sent to the attestation server. That server is then going to receive the boot report, and compare the
information in that report with the information that it knows to be trusted on that system.

If that report shows that the software or hardware in that system has been modified, then the system
administrators can choose to turn that system off, or have that system disabled until it can be looked at
by a technician.

Database Security – SY0-601 CompTIA Security+ : 3.2

Keeping data safe is a challenge when storing information in a database. In this video, you’ll learn about
tokenization, hashing, and using a salt.

When we are storing data we’re often putting it into some type of database. And obviously, we need to
protect the data that is stored in that database, and we need to protect the data that is transmitted to
and from that database.

We know that the data that we’re storing is incredibly valuable. In some cases, entire businesses are
built around data that is stored inside of a database. We also have to manage the data inside of that
database so that it complies with rules and regulations such as PCI DSS, HIPAA, GDPR, and other
compliance rules.

Having strong security of our database ensures that the data is always available, and therefore ensures
that the business is always able to operate. If there is a breach to the database, it not only disrupts the
business, but can also be extremely expensive to repair.

One way to protect data inside of a database is to not actually store the real data inside of the database.
Instead, we use a technique called tokenization, where we might have sensitive data that we replace
with a token that is not associated with the original value.

For example, a social security number such as 266-12-1112 is stored in the database as 691-61-8539, a
completely different number that has no relationship to the original social security number that was
provided.

We also see tokenization used when we’re storing credit card numbers. We need to have credit card
information stored so that we can use it during our normal purchase process, but we also want to
protect ourselves from anyone else gaining access to that credit card number.

So instead of storing the actual credit card number on our device, we’ll store a temporary credit card
number, or temporary token, on our device and use that during the purchase process. That token is sent
to a server that validates the token during the purchase process, and that token is then thrown away,
and a different token will be used for the next purchase.

That means that if an attacker does find a way to gain access to that token during the purchase process,
they won’t be able to use it for any subsequent purchases. A valuable part of this tokenization process is
that you’re able to limit the use of these tokens.

An interesting part of this tokenization process is that we’re not hashing any of our sensitive data, and
we’re not encrypting any of our sensitive data. So there’s no overhead associated with any
cryptographic function. We’re simply replacing one value with a tokenized value and using that token for
our transactions.
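
Here is a deliberately simplified Python sketch of that idea: a token vault hands out a random, single-use
token with no mathematical relationship to the stored value. Real tokenization services add access
controls, auditing, and secure storage; the card number shown is a standard test number.

    import secrets

    vault = {}   # token -> real value, held only by the tokenization service

    def tokenize(card_number: str) -> str:
        token = secrets.token_hex(8)      # random; not derived from the card number
        vault[token] = card_number
        return token

    def redeem(token: str) -> str:
        return vault.pop(token)           # single use: the token is discarded after redemption

    t = tokenize('4111 1111 1111 1111')
    print(t)            # what the device or merchant stores and transmits
    print(redeem(t))    # only the token service can map it back to the real number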

Another way to store information in a database is to store it as a hash. This is something that’s
commonly done with passwords because we can store the password as a message digest, which is a
fixed length string of text. But storing that hash, instead of storing a password, means that an attacker
would not be able to understand what the original password was if they did gain access to the data.
A unique aspect of hashes is that different inputs will have different hash values, and we should not
see a duplicated hash value, which we refer to as a collision.

We also know that there’s no way to retrieve the password from a hashed value. The hash is a
fingerprint of the original information, and not some type of encrypted representation. We also know
that the hash is a one-way trip. There’s no way to reverse the process and somehow see what the
password was based on a hash that’s stored inside of a database.

Here’s an example of some hashes that are based on some passwords. This is a SHA-256 hash. It’s a 256-
bit hash value. And what we do is take the original password, for example, 123456. Not a great
password, but still, we’re able to hash that password. And the hash that we would store in our database
is this very long 256-bit value.

You can see that that value has no representation to the 123456 password, and there’s no way to
reverse the process to somehow return back to that original password value.

During the login process, the login that you put in is going to also be hashed and compared to this
hashed value that’s stored in the database. You can see the hash for 1234567 is very different from
the hash for 123456, and the hash for qwerty and the hash for password are also very, very different
values.
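
A quick way to see this for yourself is to hash these sample passwords with Python’s hashlib; as the
transcript notes, none of these would be good passwords.

    import hashlib

    for password in ['123456', '1234567', 'qwerty', 'password']:
        digest = hashlib.sha256(password.encode()).hexdigest()
        print(f'{password:<10} {digest}')   # small input changes give unrelated 256-bit digests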

If an attacker somehow came across our database that was filled with these hashes, they would have no
idea what the original passwords were. And they would have no way to easily reverse the process back
into those original passwords.

To add even more randomization to these hashes that we’re creating, we would add some additional
information during the hashing process. That additional information is called a salt. If you have multiple
users storing passwords, every user is going to have a different salt associated with their account.

We would use that different randomized salt, with the password that they’ve chosen, to then store a
hash value in our database. This means that an attacker won’t be able to use rainbow tables to
somehow determine quickly what the original password might have been.

A rainbow table is a pre-computed set of hashes and original values. But if you use salt during the
hashing process to create more randomization, those predefined rainbow tables won’t be very useful.
They would have to then perform the normal brute force to try to determine what that original
password might be. And that is a much slower process than doing something with a rainbow table, that
might take only a matter of seconds, to determine what that original password might have been.

Here’s how a salt might change the hashing value that we store in a database. Let’s say that our users
are going to use exactly the same password on their accounts. And the password is dragon. You can see
that the hash value for dragon is listed in our database. For our first user, they’re going to store that
password dragon, but there’s going to be some random salt added to that password. You can see that
the stored hash value is very different than the hash value that’s created by simply using the password
dragon.

And every user that uses the password dragon is going to have a different salt, and therefore has a
different hash that is stored in the database. If an attacker does gain access to this database, they’ll think
these are very different passwords. In reality, they’re exactly the same password with some random salt
added. But this is now going to create a much longer process for them to be able to brute force the
original password used by every user.
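
As a small Python sketch of that idea, here is a per-account random salt combined with a slow key
derivation function (PBKDF2 is used here purely for illustration); running it shows that two accounts with
the password dragon store completely different values.

    import hashlib, os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)    # a different random salt for every account
        digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 600_000)
        return salt, digest      # both are stored; the password itself never is

    # Two users choose the same password, but the stored hashes don't match
    print(hash_password('dragon'))
    print(hash_password('dragon'))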

Application Security – SY0-601 CompTIA Security+ : 3.2


Application developers use a number of different techniques to keep our applications secure. In this
video, you’ll learn about input validation, fuzzing, secure cookies, code signing, and more.

As security professionals, we’re tasked with making sure that our applications and operating systems are
always updated to the latest version and are bug free. But of course, that process starts at the beginning
with the application developers themselves, who are tasked with building these applications while at the
same time making sure they’re hardened and resistant to these types of attacks.

This of course is a balancing act, because the developers’ primary goal is to create an application that can be
used; very often security is a secondary goal during the process of development. A lot of the testing that
takes place during an application development process happens with the QA team. This is the Quality
Assurance team that is tasked with not only making sure that the application is working to specification,
but also making sure that the application is secure. But even with the best testing, applications can still
have vulnerabilities that go unnoticed, and eventually someone is going to find those vulnerabilities and
take advantage of them.

One task for the developer is to always make sure that they understand the type of input that is going
into the application. This input validation should always occur whenever information is going into the
application process. This means that the application developer needs to document every place where
data is going into the application. This might be in a form that’s on the screen, it might be fields that a
user is filling out, or it might be information that’s added as a file is uploaded. The application
developers should check all of this data that’s being input, and if anything is outside the scope of what it
should be, those input variables should be resolved.

That process of checking and correcting the data that’s being input is called normalization. An example
of this might be a zip code. Depending on your country, a zip code might be a series of letters or
numbers, and those are usually a certain length. The application developer may define a zip code as only
being a certain number of characters long, with a letter in a particular column. If the data being
input into the application doesn’t follow these rules, then that information can be rejected or corrected
as part of the application. It’s important that the application developer understand exactly what input is
being used, and how that input is being handled by the application. The attackers are going to use third-party
tools, such as fuzzers, to constantly try randomized input into the application to see if
perhaps they can make the application perform unexpectedly, or in a way that they could replicate later
on.
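
As a minimal Python sketch of the postal-code normalization described above, with deliberately simplified
patterns (real postal formats have more variations):

    import re

    US_ZIP = re.compile(r'^\d{5}(-\d{4})?$')                         # e.g. 30301 or 30301-1234
    UK_POSTCODE = re.compile(r'^[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}$')  # e.g. SW1A 1AA

    def normalize_postal_code(value: str, country: str = 'US') -> str:
        value = value.strip().upper()                 # correct obvious formatting problems
        pattern = US_ZIP if country == 'US' else UK_POSTCODE
        if not pattern.match(value):
            raise ValueError('rejected: input does not match the expected postal code format')
        return value

    print(normalize_postal_code('  30301 '))          # '30301'
    print(normalize_postal_code('sw1a 1aa', 'UK'))    # 'SW1A 1AA'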

This term, fuzzing, is referring to a task called dynamic analysis where random data is simply being put
into the input of an application. You may hear this referred to as fault injection, robustness testing,
syntax testing, negative testing, and other terms as well. The attackers are looking for something out of
the ordinary to occur. They’re looking to see if there might be a buffer overflow, if the application might
crash, or something else that could cause this application to give them an opportunity to find a
vulnerability. A standard fuzzing framework started with a 1988 class project at the University of
Wisconsin, where they began testing applications by running their own utilities against them to find
vulnerabilities. The project, led by Professor Barton Miller, was called “Operating System Utility Program
Reliability.” As you can
imagine, there are many different types of fuzzing tests that you can perform on an individual
application. These tests may be specific to a particular application platform, or operating system, and
there can be many, many variables involved when performing these fuzzing tests.

These fuzzing tests take a lot of time and a lot of processing power. They’re almost always automated,
so there are constant randomized inputs being tried against the application. This may take
quite a bit of time, because the fuzzing engine is going to try many, many different iterations to try to locate
where a vulnerability might be. Many of these fuzzing utilities are also optimized to try the most likely
tests to find a vulnerability. This speeds things up: instead of going through every possible random
input, you can simply try the ones that are most likely to cause a problem.
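
The core loop is simple enough to sketch in a few lines of Python; the parse_record function below is a
made-up stand-in for whatever code is actually under test, and anything other than an expected, handled
exception is worth investigating.

    import random

    def parse_record(data: bytes) -> None:
        # stand-in for the code under test: a length-prefixed ASCII field
        length = data[0]
        data[1:1 + length].decode('ascii')

    random.seed(1)
    for i in range(10_000):
        blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 64)))
        try:
            parse_record(blob)
        except UnicodeDecodeError:
            pass                          # an expected, handled failure
        except Exception as exc:          # crashes and unexpected errors are the interesting cases
            print(f'iteration {i}: {type(exc).__name__} on input {blob!r}')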

Carnegie Mellon has a Computer Emergency Response Team or CERT that has released a version of a
fuzzer called the CERT Basic Fuzzing Framework, or BFF, and you can download this and try it on your
own machine by going to professormesser.link/bff. Here is this fuzzing framework running on my
machine. It’s going through an automated process with data that’s provided as a sample with the fuzzing
program, and it’s iterating to see if it can find problems. There’s also a copy of top running on this Linux
machine, so you can see just how much utilization the fuzzing program is using, and you can keep track of
where certain vulnerabilities may be found during the fuzzing process.

Another important security concern is the information that is stored on your computer from your
browser. This information is referred to as cookies. Cookies are commonly used to keep track of
information that is only used for a limited amount of time. This might be tracking details, or things that
might personalize a login into a site, or perhaps session management information. This is not an
executable, it’s not a running application, and generally this is not something that is a security concern.
Of course we want to be sure that no one else has access to this information, so your cookies should, of
course, be protected. Many of the cookies that are stored on our system are designated as secure
cookies. Those cookies have an attribute on them that is marked as secure. This tells the browser that if
this information is being sent across the network, it needs to be sent over an encrypted connection
using HTTPS.
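
As a small illustration using Python’s standard library, here is a cookie marked with the Secure attribute
(along with HttpOnly and SameSite, which are commonly set alongside it); the session value is a
placeholder.

    from http.cookies import SimpleCookie

    cookie = SimpleCookie()
    cookie['session_id'] = 'a1b2c3d4'
    cookie['session_id']['secure'] = True        # only send this cookie over HTTPS
    cookie['session_id']['httponly'] = True      # not readable by page scripts
    cookie['session_id']['samesite'] = 'Strict'  # not sent on cross-site requests

    print(cookie.output())   # the Set-Cookie header the server would return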

Although we’re designating that these cookies are secure, that doesn’t mean that the information inside
of those cookies needs to be something that is private or sensitive. The cookies are there to make the
browsing process easier for the user and are not designed to store private or personal information. If
you’re not the person who has designed and created an application, it’s difficult to know exactly what
work has been put into the security side of that app. To be able to make our applications more secure,
we’ve created something called HTTP secure headers. This is a way to configure our web server to
restrict the capabilities of a browser to be able to perform certain functions.

This means that we can tell the end user’s browser to either allow, or not allow certain tasks to occur
while this application is in use. For example, we may configure a secure header that tells the client’s
browser to only communicate to the web server using HTTPS. This forces the user to use an encrypted
communication and restricts any communication that may not be using the HTTPS protocol. Or we might
try preventing cross-site scripting attacks by telling the browser to only allow scripts, stylesheets, or
images to load from the web server. Or perhaps we’ve configured a secure header to prevent any
data from loading into an IFrame. This is an inline frame, and something that’s commonly used when an
attacker is trying to perform a cross-site scripting attack.
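
How these headers are configured depends on your web server, but the values themselves look something
like this hypothetical Python mapping, matching the three examples above.

    # Hypothetical header values; the exact configuration mechanism depends on your web server
    SECURE_HEADERS = {
        'Strict-Transport-Security': 'max-age=31536000; includeSubDomains',  # HTTPS only
        'Content-Security-Policy': "default-src 'self'",  # scripts/styles/images from this origin only
        'X-Frame-Options': 'DENY',                        # refuse to be loaded inside an iframe
    }

    def apply_secure_headers(response_headers: dict) -> dict:
        response_headers.update(SECURE_HEADERS)
        return response_headers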

When we install and run a new application on our system, there’s a level of trust we have with the
application that it’s not going to run any malicious software. And we’re constantly installing new
applications and running scripts on our system, and we have to be aware every time we run a new
application that there may be something inside of the application that we are not aware of. But of
course, there’s always a concern that that application may have been modified by a third party to add
malicious software into the executable. It would be useful if we could confirm that the application that
we’re running is the application that was originally deployed by the application developer and that no
one has made any other changes to that application in the meantime.

This is a perfect opportunity to take advantage of digital signatures with our code, by using code signing.
This is very similar to the process we use to provide encryption certificates on web servers. We first
need a trusted certificate authority who’s going to sign the developer’s public key. The developer will
then use their private key to digitally sign any code that they happen to be deploying. That means that
you can validate that this code is exactly what was deployed by the original developer, by validating it
with their public key. This is something that’s commonly done when the application is installed and if the
validation fails, you get messages on the screen telling you that the code signing signature is not valid.
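
Real code signing is handled by platform tooling and a certificate chain, but the underlying check is a
digital-signature verification like this rough Python sketch using the third-party cryptography package; it
assumes an RSA developer key and a detached signature file, and all file names are placeholders.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    code = open('installer.bin', 'rb').read()
    signature = open('installer.bin.sig', 'rb').read()
    public_key = serialization.load_pem_public_key(open('developer_pub.pem', 'rb').read())

    try:
        public_key.verify(signature, code, padding.PKCS1v15(), hashes.SHA256())
        print('signature valid: the code matches what the developer signed')
    except InvalidSignature:
        print('signature invalid: the code was modified after signing')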

From an administrator’s perspective, there’s probably a finite list of applications that commonly are
used in your environment, they have been tested on your systems, and you can trust that they don’t
contain any malicious software. But of course any application, from anywhere, could contain
vulnerabilities, Trojan horses, malware, or other unwanted code. Many administrators then will
configure operating system software or they’ll configure security software settings to allow or deny
certain applications from executing. This means that anything that’s listed on the allow list can run with
no problem on that particular system, but only applications in that allow list can operate. If someone
tries to install a new application that’s not on the allow list, that application will not execute. Some
organizations have a more open approach to applications and will only prevent applications that are on
a specific deny list. If an application is trying to run and it’s listed on the bad list, it will be stopped
before it executes on that computer.

Most operating systems have methods built into the OS that allow the administrator to set up allow lists,
and deny lists. But these lists can be based on many different types of criteria. For example, there might
be an application hash. You would take a hash of the application executable and the only application
version that would run is the one that matches the one associated with that hash. If the application is
upgraded or changed in any way, then the hash will not match and the application won’t run. An
application that has been digitally signed can also be used for criteria on an allow list or a deny list. For
example, you can tell from a digital signature if the application is from Microsoft, or Adobe, or Cisco.
And if it matches any of those, you could choose to allow those applications to run on your computer.
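
The application-hash criterion, for example, boils down to a comparison like this Python sketch; the
allow-list digest shown is a made-up placeholder.

    import hashlib, sys

    def sha256_of(path: str) -> str:
        digest = hashlib.sha256()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(65536), b''):
                digest.update(chunk)
        return digest.hexdigest()

    # SHA-256 digests of the approved application versions (placeholder value)
    ALLOW_LIST = {'0123abcd...': 'approved-editor 2.4'}

    digest = sha256_of(sys.argv[1])
    print('allowed' if digest in ALLOW_LIST else 'blocked: not on the allow list')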

Or perhaps the applications that are allowed on your computer are stored in very specific folders on
your storage drive and we can configure an allow list to only run applications that happen to be stored in
those folders. Your operating system might also recognize where applications are running from. If an
application is running from a server that’s on an internal network, it may say that that particular
network zone is allowed to run applications. But if you’re trying to run that application from a server
that’s on the internet, you might want to deny that until that application has been properly vetted.
When an application developer writes some software, there might be a hidden vulnerability inside of
that code that no one’s found yet. It’s very difficult for human beings to sift through all of those
hundreds or thousands of lines of code to be able to identify where those vulnerabilities might be. So
instead we can automate the process with Static Application Security Testing, or SAST. We can use
static code analyzers to go through the source code and identify places where there may be
vulnerabilities such as buffer overflows, database injections, or other well-known types of attacks.

But of course, the static code analysis doesn’t find every type of vulnerability. For example, if a type of
encryption has been implemented in a way that is insecure, a static code analyzer may not find that type
of issue. And as you might expect, the output from a static code analyzer does not necessarily mean that
an actual vulnerability exists in the code. Every single one of these instances needs to be examined and
confirmed before anyone can be sure that a particular vulnerability exists.

Here’s an example of some output from a static code analyzer. In the first set of results, for the source
file test.c at line 32, an error was found that says the code does not check for buffer overflows; it gives some
examples of what those might be, and it suggests using fgets instead. Another set of errors was found in
test.c at line 56, and it gives examples of things that may be considered a problem and ways that you might
be able to fix them. Each one of these test results has to be examined to make sure that it really does match
a particular error or vulnerability, and these problems can then be resolved before the application is
deployed.

Application Hardening – SY0-601 CompTIA Security+ : 3.2


There are some best practices to use when managing the security of applications. In this video, you’ll
learn how to harden applications through the use of port filters, registry settings, disk encryption, patch
management, and more.

As a security professional, there are a number of techniques that you can use to make sure that your
applications are as secure as possible. In this video, we’re going to focus on application hardening, which
means we’re going to minimize the number of attack surfaces and hopefully limit the ability of an
attacker to be able to exploit an application. We’re not only trying to protect against well-known attack
points, but hopefully during our process of hardening this application we’re able to protect it from
unknown attack points as well.

Sometimes this hardening guidance comes from a third party, or from an existing compliance requirement.
For example, there are specific regulations regarding HIPAA servers or PCI DSS credit card protection. We
can use the guidance from these compliance mandates to provide additional application hardening. If you’re
not sure where to start in the process of application hardening, there are a number of resources available on
the internet from the Center for Internet Security (CIS) or the SANS Institute, or you may find some white
papers at NIST, the National Institute of Standards and Technology.

If you’re a network professional, then you’re probably already familiar with the concept of limiting what
ports may be accessible on a device. This is usually the case where we will close all available ports on
that device, except the ones that can provide exactly the services needed by that application. This is
commonly done with a firewall, where you can limit what IP addresses and port numbers are accessible,
and in some cases you can use a next-generation firewall to also limit the applications that can flow over
that particular IP address and port number.
It’s not unusual to find applications or services running on a device that have opened up port numbers
that are accessible from the network. This can commonly happen when you install software or use the
default configuration for an operating system, and unless you go through all of the ports and test that
particular device, you may have no idea that those ports are open. And those of us who have been tasked
with configuring applications on a server have seen application programmers provide documentation that
says the appropriate ports to be open on this device are ports 0 through 65,535, which effectively opens
every port on that device. We would obviously not want every port to be available to the network, so
instead it’s common to do our own testing. We may want to run an Nmap scan, verify what ports happen to
be open currently, and perhaps limit access to only those particular ports.
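
Nmap is the usual tool for this, but even a few lines of Python can confirm which TCP ports on a host
accept connections; the address below is a placeholder, and you should only scan systems you are
authorized to test.

    import socket

    def open_ports(host: str, ports: range) -> list[int]:
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(0.5)
                if s.connect_ex((host, port)) == 0:   # 0 means the TCP connection succeeded
                    found.append(port)
        return found

    print(open_ports('192.0.2.10', range(1, 1025)))   # check the well-known ports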

For Windows administrators, they can perform a number of application hardening tasks inside of the
Windows Registry. The Windows Registry is a large database that contains configuration settings for the
Windows operating system and the applications that run on that operating system. One of the
challenges we have is that when we install a new application, we have no idea exactly what may have
been changed inside of the registry. It’s common to use third party tools that can show the registry
settings before an application has been installed, and what settings have changed after the application
has been installed. This is useful to know because some of these registry settings are very important
from a security perspective. For example, the registry can configure permissions that determine which
applications are allowed to make changes to the registry, and in some cases you can mitigate
vulnerabilities through the registry. For example, a very well-known vulnerability took advantage of SMB
version 1, and it’s now very common to disable that particular protocol in the registry of every
Windows computer.

Here’s a screenshot of the registry. There is a hierarchical structure to the registry, with thousands and
thousands of different settings inside of it. That’s why it’s important to always take a backup
of the registry before you start making changes, because it can be very easy to accidentally remove
information that might be critical for the operating system or for other applications.

One way to prevent third-party access to the data that we store on our computers is to use hard drives and
storage devices that will encrypt the information that we’re storing. This is sometimes handled in the file
system itself, through something called full disk encryption.

If you’re using Windows BitLocker you’re using an FDE, or Full Disk Encryption utility that is built into the
Windows operating system. In some highly secure environments, you may use a type of encryption on a
storage drive that’s built into the hardware of the drive itself. It’s not specific to any particular operating
system, and anything that’s written to that drive is automatically stored in an encrypted form because
it’s built into the hardware of the device. If you’re using a Self-Encrypting Drive, or SED, then there is a
standard associated with these. It is the Opal Storage Specification, so if you are purchasing or
implementing a self-encrypting drive, you want to be sure that drive follows the Opal standard.

Operating systems are complex environments, and there are many different kinds of operating systems
that we might need to harden. There is Windows, Linux, iOS, Android, and other operating systems, and
every single operating system is going to have a different set of techniques for hardening. One common
technique across all operating systems is to always keep the operating system up to date with the latest
versions. These can be updates to the core operating system itself, they may be deployed as service
packs, or they may be individual security patches that are installed one by one.

We also want to harden our user accounts. We want to be sure that all of our users have very good
passwords and we have a password policy for every single user. We also want to be sure that the
accounts that people are using to log in have limitations that are only going to allow them to perform
the tasks necessary for their job. We also want to be sure these devices have limited access to other
components across the network and we want to be sure that these devices have limited input from
other devices across the network as well. And of course, even after performing all of these hardening
tasks, there can still be attacks that occur in the operating system which is why we still run antivirus,
anti-malware and other security software on these individual endpoints.

Patch management is so important in these operating systems that it is a standard part of the operating
system, and it’s built into the scheduling and automated systems within the OS. You’ll find that security
patches and fixes are automatically deployed to our systems to avoid any type of vulnerability or attack.
Many operating systems will have monthly updates where the manufacturer of the operating system
creates a batch of fixes and pushes all of those fixes out once a month. This keeps the update process
manageable for the operating system administrators, and it keeps all the systems up to date and
protected from any vulnerabilities.

Of course, there are other applications running on these operating systems that are created by third
parties, and those applications also have to be kept up to date. The same thing applies for the device
drivers that are installed in that operating system as well. It’s important that we update these systems
as quickly as possible, but we don’t want to perform an update that’s going to cause a problem with the
operating system or the applications that are running in that OS. That’s why many people will not use
auto update on the systems that are in an enterprise, instead the IT department will first test all of the
updates and then push those updates out once they know that they’re going to work properly in their
environment.

But there will be times when an update is an emergency update that suddenly appears out of the
normal flow of the update schedule and needs to be deployed as soon as possible. These are usually
because of zero-day attacks that are being taken advantage of in the wild, and we need to make sure
that all of our systems are patched as quickly as possible. Some operating systems and applications will
include a sandbox functionality. This limits the scope of an application from accessing data that is not
part of that application. This is something that’s commonly done during the development process so
that the developers are not changing any of the data that might be in a production environment, but
using sandboxing during production can be a valuable security tool as well. For example, if you’re using a
virtual machine, that virtual machine is sandboxed from other virtual machines that might be running on
that system, or from the host operating system itself.

You might also see sandboxing being used on mobile devices. For example, the browser on a mobile
device is not going to have access to other types of data on that device, unless you as the user explicitly
allow that browser to have access to that information. Otherwise, that data is sandboxed away from the
browser. Browsers themselves have ways to separate and sandbox data from other parts of the
browser. This is especially important with IFrames, or inline frames, where one IFrame may not know or
have access to data that would exist in other IFrames on that browser. And in Windows, there is
extensive sandboxing especially relating to User Account Control, or UAC, where applications would not
have access to other types of data unless you grant them the access as part of that application.

Load Balancing – SY0-601 CompTIA Security+ : 3.3


Load balancers provide many options for keeping services available and efficient. In this video, you’ll
learn about load balancing, scheduling, affinity, and active/passive load balancing.

If you’re watching this video on the Professor Messer website, then you are taking advantage of load-
balancing technology. Load balancing is a way to distribute the load that is incoming across multiple
devices, thereby making the resource available to more people than having a single server in place. This
is implemented by having multiple web servers behind the scenes. And when you access
professormesser.com, your query is distributed to one of those available servers.

This all happens without any knowledge from the end user. And it’s all done automatically behind the
scenes. This type of load balancing could scale up into very large implementations. And some of the
largest networks in the world are using load balancing for their web servers, their database servers, and
other services that they provide on their infrastructure. One nice feature of load balancing is because
there are so many servers in place, if one of those servers happens to fail, the load balancer recognizes
the failure and simply continues to use the remaining servers.

The end users who are accessing those devices have no idea that a server has failed. All they know is
that the service remains available, and everything is working normally. Obviously, the primary function
of a load balancer is to balance the load. And you can configure the load balancer to manage that load
across multiple servers. You can also set up the load balancer so that some of the TCP overhead is
offloaded onto the load balancer, rather than down to the individual server.

This keeps the communication between the load balancer and the servers very efficient and maintains
the speed of the communication. This might also be used for SSL offloading. The encryption and
decryption process used for SSL or TLS is one that uses additional CPU cycles on a device. So you may
find that the load balancer is the one performing that SSL encryption and decryption in the hardware of
this device. And it is instead sending “in the clear” information down to these individual servers that are
all within the same data center.

This load balancer might also provide caching services. It will keep a copy of very common responses.
And when you make a request to one of these servers, and the load balancer already has that response
in the cache, it can reply back to you on the internet without ever accessing any of the local servers.

Load balancers can also provide “quality of service” functionality so that certain applications would have
a higher priority than other applications running on these same servers. You might also use the load
balancer for content switching. This means that certain applications might be switched to individual
servers. And other applications might be switched to other servers within that same load balancer.

There are many ways to configure the operation of a load balancer. One of those is in a round-robin
form. The first user communicating through that load balancer would be distributed to the first server, or
server A. The second user communicating through that load balancer would be round-robined, or
distributed, to the next server on the list. And then the third person coming through the load balancer
would be sent to the third server, and so on. This round-robin process ensures that all of the servers
receive an equal share of the incoming connections across everyone communicating into the network.

There are also variants to this round-robin process. For example, a weighted round-robin might
prioritize one server over another. So perhaps one of the servers would receive half of the available
load. And the other servers would make up the rest of that load. With dynamic round-robin, the load
balancer is keeping track of the load that is occurring across all of the servers. And when a request
comes into the load balancer, it will send the next request to the server that has the lightest load.
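
Here’s a small Python sketch of the difference between plain and weighted round-robin; the server names and weights are made up.

# Round-robin vs. weighted round-robin distribution (hypothetical servers).
from itertools import cycle

servers = ["server-a", "server-b", "server-c"]

# Plain round-robin: every server gets the next request in turn.
round_robin = cycle(servers)

# Weighted round-robin: server-a appears twice in the rotation,
# so it receives roughly half of the incoming requests.
weighted = cycle(["server-a", "server-a", "server-b", "server-c"])

for request_id in range(6):
    print(request_id, "->", next(round_robin), "| weighted:", next(weighted))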

And of course, this is a staple for active/active server load balancing, where all of these servers are
active simultaneously. And if one of these servers happens to fail, all of the other servers can then pick
up the load and continue to operate without anyone on the outside knowing that there’s a problem.
Most of the time, the load balancer is going to distribute an incoming load across whatever happens to
be available on the inside of the load balancer.

But there may be certain applications that require that any time a user is communicating that they’re
always communicating to exactly the same server. In those instances, our load balancer needs to
support affinity. Affinity is defined as being a kinship or a likeness. But in the world of load balancers, it
means that a user communicating through that load balancer will always be distributed to the same
server. This is usually tracked using a session ID or a combination of variables, such as an IP address and
a set of port numbers.
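
One way to picture that tracking is a hash over the client’s connection details, so the same tuple always lands on the same server. This is only a conceptual sketch with made-up addresses, not how any particular load balancer implements affinity.

# Source-affinity ("sticky session") sketch: the same client IP and port
# pair always hashes to the same back-end server.
import hashlib

servers = ["server-a", "server-b", "server-c"]

def pick_server(client_ip: str, client_port: int) -> str:
    key = f"{client_ip}:{client_port}".encode()
    digest = hashlib.sha256(key).digest()
    return servers[int.from_bytes(digest[:4], "big") % len(servers)]

print(pick_server("203.0.113.5", 50211))   # same tuple, same server...
print(pick_server("203.0.113.5", 50211))   # ...every time it repeats
print(pick_server("198.51.100.7", 41822))  # a different client may land elsewhere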

If those same IP addresses and same port numbers are in use, then that communication will always go
to one particular server. For example, our first user here at the top will communicate to the load balancer.
The load balancer will assign that session to server A. The second user communicating through that load
balancer may be assigned to server B. If that first user then sends more information on that session
through the load balancer, the load balancer will recognize that it is the same session from earlier and
send that session down to server A.

The same thing will occur if the second user sends information in. The load balancer will recognize that
it is an active session and send it down to server B, which was the original server used by that second
user.

Our load balancer might also be set up in an active/passive mode, where some of the servers are actively
in use, and other servers are in a standby mode. This means if one of our active servers fails, we have
other devices that can suddenly move into an active mode and begin providing services through that
load balancer.

For example, we have a user communicating through the load balancer to server A. And as long as
server A is communicating, we’re just fine. But there may be times when server A is no longer available.
And if server A happens to have a failure, we might have servers on standby, like server C, that could
suddenly turn themselves on and start providing services through that load balancer. That means the
next time that user communicates through the load balancer, they’ll automatically be assigned to
server C instead of server A, which is no longer available.

Network Segmentation – SY0-601 CompTIA Security+ : 3.3


Network segmentation is a common security control, and there are many ways to implement
segmentation. In this video, you’ll learn about VLANs, screened subnets, extranets, intranets, and more.

IT security is all about segmentation. It’s about allowing or disallowing traffic between different devices.
We can do this in a number of different ways, but if we’re segmenting across the network, we can
segment physically between devices, we can create logical separation within the same device, or we can
do virtual segmentation with virtual systems. It’s sometimes common to segment application instances
into their own separate private segments. This is especially useful for applications that have high
bandwidth requirements and need the highest possible throughput.

We can also set up segmentation for security. For example, we might have database servers that contain
sensitive information and we may segment our users so they can’t talk directly to those servers. Or
perhaps we have application instances running inside of the core of our network, but the only protocols
that should be in that core should be SQL-type traffic and SSH traffic and we can segment other types of
traffic to remain outside of the network core. Or we might be segmenting the network because we are
legally required to segment it, or there may be rules or regulations. For example, PCI compliance
requires that we have mandated segmentation to prevent any type of user access to credit card
information.

One obvious way to segment the network is to have physically separate devices. For example, we might
have a switch A, and a switch B, and you can see there is a physical separation. We sometimes refer to
this as an air gap because there is no direct connection between these devices, the only thing in the
middle of these switches is the air itself. If we did need to communicate between switch A and switch B,
we would then need some type of physical connection. We would need to run a cable between these
switches or we might put a router in the middle or firewall, and then send all of our traffic through that
device so that we can communicate between those two physically separate devices.

We might also implement this type of physical segmentation to keep devices separate from each other.
For example one switch may contain all of our web servers, and the other switch may contain all of our
database servers. If these are on separate switches then we know the web servers could never
accidentally communicate to a database server, and the database servers would have no access to the
web servers. Or maybe we’re concerned about customer data mixing with each other, so we keep
customer A on one switch and customer B on the other.

Here’s a practical design for keeping our customers separated, you can see we have the customer A
switch on the left and the customer B switch on the right. The devices for customer A can communicate
with each other and the devices for customer B can communicate with each other, but you can see that
there is no direct connection between these two switches so customer A has no way to communicate
with anything that’s on customer B and vice versa. There are a number of challenges with this design.
One is that we have two separate physical switches that both have to be separately maintained,
separately upgraded, and separately powered. We also have a number of interfaces on these switches
that we probably aren’t using, so we’re spending a lot of money for a switch but not using all of the
capabilities of that switch.

Instead of physically separating these devices, we can do logical segmentation using VLANs, or Virtual
Local Area Networks. VLANs have the same functionality where we can have customers on one part of
the switch, and another customer on another part of the switch. Because we’ve configured this with
separate VLANs, these two different customers can still not communicate directly with each other. It’s as
if we had two separate physical devices, but instead now have them logically separated inside of the
same device. As with physical segmentation, if we need direct communication between these devices,
we would use a cable or another device, such as a router, that can allow us to communicate between
these separate VLANs.

It’s very common to install services on our local network that we would like to provide to people that
may be on the internet. But we don’t want to provide access to the internal part of our network for
people that are coming in from the internet. So instead, we’ll build a completely separate network just
for that incoming traffic. This is sometimes referred to as a screened subnet; you may also hear this
referred to as a Demilitarized Zone, or DMZ. This allows people to come in from the internet, usually
connecting to a firewall, and the firewall redirects them to the screened subnet. Instead of accessing our
internal network, all of the users access the services on the screened subnet, and we can set additional
security to make sure that no one has access to the inside of our network while still providing access to
the applications that are on our network.

You may find that an extranet is a very similar design to a screened subnet. We have an internet
connection, there’s a firewall, and there’s a separate network that has been designed as an extranet. We
still have our internal network, where all of our internal resources are, but we’ve built out this separate
extranet for vendors, suppliers, and other partners of ours that need access to our internal resources.
Unlike a screened subnet, an extranet commonly requires additional authentication. So we would not
allow full access to our extranet from the internet; instead, there would be an authentication process or a
login screen that would then gain you access to the extranet.

Your organization might also have an intranet. An intranet is very different than a screened subnet or an
extranet, because an intranet is only accessible from the inside of your network. So you might be at your
headquarters network, your remote site number one, or your remote site number two, and from all of
those networks you can access the internal part of your network and gain access to the intranet on your
network. This intranet commonly has internal servers that can provide company announcements,
maybe employee documents and other company information, and it’s only accessible by employees of
the company. Since your intranet usually contains company private information you would never want
to make this available to others outside of the network. The only way to access the intranet is if you are
on an internal network already, or you’re accessing the internal network through a VPN connection.

If you’re managing data flows within a data center then you have another set of segmentation
challenges. There may be hundreds or thousands of devices inside of your data center and there may be
hundreds of thousands of users who are accessing those services. For all of these applications running in
the data center, it’s important to know what the data flows happen to be. You need to understand
where data is coming from, where people are connecting from, and where you’re sending the data to. In
a data center it’s very common to refer to these data flows with directions. For example East-West
traffic would be traffic between devices that are in the same data center, and because they’re local
inside of that same building, you usually get very fast response times between those devices. North-
South traffic refers to data that is either inbound or outbound from our data center, and we are usually
setting up different security policies for that type of traffic because very often that’s coming from an
unknown or an untrusted source.

Here’s a better picture of our data center. We have an internet connection that’s coming into some core
routers, those core routers are then connecting to redundant switches, and the switches are connecting
to file servers, web servers, directory servers, and image servers. If any of these devices inside of our
data center are communicating outside of the data center, then we’re referring to that as North-South
traffic. So we may have traffic coming inbound and then going back outbound over the internet, and
that would be North-South traffic. Anything that’s communicating internally within our data center, for
example our web servers may be communicating to directory services or file servers, all of that traffic
that stays within the building is referred to as East-West traffic.
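
A simple way to think about this classification is to check whether both endpoints of a flow sit inside the data center’s address space; here’s a small sketch using an example prefix.

# Classify flows as East-West or North-South based on whether both
# endpoints fall inside the data center's address space (example prefix).
import ipaddress

data_center = ipaddress.ip_network("10.20.0.0/16")

def classify(src: str, dst: str) -> str:
    inside = lambda ip: ipaddress.ip_address(ip) in data_center
    return "East-West" if inside(src) and inside(dst) else "North-South"

print(classify("10.20.1.10", "10.20.4.22"))    # web server to file server
print(classify("10.20.1.10", "198.51.100.9"))  # web server to the internet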

One of the challenges with data centers and other network configurations is once you get inside of the
network, we don’t tend to have a lot of security controls. Traditionally, if you’ve been inside the network
then there is an inherent level of trust that we’ve associated with that device. But as we’ve seen, once
malicious software gets on the inside of your network, having no security controls effectively means that
software can spread to every other device inside of your network. Instead of trusting every device,
we’ve changed the model to be zero trust, which means you trust nothing else on your network and
there has to be additional authentication and additional methods in place to make sure that the data
flows that are occurring are data flows that should be occurring for those applications.

With zero trust every device, every application, and every data flow is considered to be untrusted. This
means that the flows that would normally go across our network without any checks whatsoever, are
now suddenly subject to authentication, encryption, and other security controls. So we might set up
multifactor authentication between our devices where no authentication existed before. Maybe we’re
including encryption on the core of our network, where normally we didn’t bother with encryption. And
of course we’ll have additional permissions, perhaps additional firewalls and other security controls, to
make sure that we can verify every data flow that’s occurring on the inside of our network using this
zero trust model.

Virtual Private Networks – SY0-601 CompTIA Security+ : 3.3


We use VPNs to maintain the security of our data over insecure networks. In this video, you’ll learn
about SSL VPNs, HTML5 VPNs, L2TP, IPSec, and more.

A virtual private network, or VPN, is a way to send data securely through a network that normally would
be considered public. The internet is a good example of this. And by using a VPN, we can send
information between two points on the internet without anyone in the middle being able to understand
anything that’s being sent.

The device that’s doing all of the hard work on a VPN is the VPN concentrator. This is the device that is
encrypting data, sending it out over the network, and then decrypting anything that it happens to receive.
This concentrator is often a standalone device, or it’s integrated into another device, such as a firewall.

There are many implementations of VPN concentrators and VPN solutions. Some of them are hardware
devices or are built into a purpose-built appliance. Others are implemented as software that you can run
on almost any operating system. The end stations that are communicating over the VPN commonly have
some type of client that’s able to encrypt and decrypt the data. That client may be something that is
installed separately, or it may be built into the operating system that you’re using.

With a VPN, you might be at home, at a coffee shop, at a hotel, or some other place, and you need to
access resources that are inside of your corporate network. But if you access those directly across the
internet, anyone who is in the middle of that connection or listening in to the conversation would be able
to see this information being sent back and forth.

So instead, you start your VPN software and you create an encrypted tunnel to the VPN concentrator
that is just in front of your corporate network. The VPN concentrator is going to decrypt that information
and send everything in the clear into the corporate network. And of course, the process will work in
reverse, where information from the corporate network will be encrypted by the VPN concentrator and
sent across the internet. The VPN client that’s on your laptop will then decrypt that data and show you
the information as if you were sitting locally in the corporate network.

It’s very common to turn on this software on the laptop whenever you need access to your corporate
network. But there are some implementations of VPN software that are always on, so the moment you
log in to your computer, it’s always connected to your corporate network using that VPN software.
For individual users communicating to a network, especially from a coffee shop, a hotel, or from home,
you might be using an SSL VPN, or Secure Sockets Layer VPN, which communicates over TCP port 443.

Since that’s such a common port to be used for SSL communication, it’s one that commonly works on any
network you happen to connect to. This is also something that doesn’t need any large VPN client and
isn’t incredibly complex. You’re usually providing remote access from a single device using this SSL VPN.

Many SSL VPNs are designed for end-user use. So you’ll often use your username and password, and
perhaps some two-factor authentication, to be able to authenticate to the concentrator. With this type of
VPN, you don’t often need very complex authentication. For instance, you don’t need to have digital
certificates deployed or the shared passwords that you might use for something like an IPSec
configuration.

In fact, it’s very common for an SSL VPN to run as a very small client on an operating system or inside of
the browser itself. Many of the latest browsers support VPN software running inside of them using
HTML5, which is Hypertext Markup Language version 5. One of the nice features of HTML5 is that it
supports application programming interfaces and includes a web cryptography API as part of the
browser.

This means you don’t have to install any software. There’s no installation of a client; you simply start
your browser, connect to the remote network, and you’re able to send SSL VPN communication without
installing any additional code. The only thing you have to have is a browser that supports HTML5, and
these days most of our modern browsers will be able to use these capabilities for your SSL VPN.

An end-user VPN can be configured as a full tunnel or a split tunnel. With a full tunnel,
everything that is being transmitted by the remote user is sent to the VPN concentrator on the other
side. The VPN concentrator will then decide where that data happens to go. So if the remote user wants
to send information to the corporate network, it will be sent to the VPN concentrator, which will then
decrypt it and send it into the corporate network. And then the response will be sent back to the remote
user.

But the remote user may want to connect to another third-party device, such as my website at
professormesser.com. With a full tunnel, that traffic still has to go to the VPN concentrator. The VPN
concentrator will communicate with the web server, the response will be returned to the VPN
concentrator, and then it will be sent to the remote user. This means that in a full tunnel, all of the data is
going across that encrypted tunnel, and that user can’t break out of the tunnel to send information to
another device directly.

With a split tunnel, the administrator of the VPN can configure some information to go through the
tunnel, and other information can go outside of the tunnel. For example, a remote user can still
communicate to the remote network through the VPN concentrator that’s at the remote site. But if they
need to communicate with a separate website, they can communicate outside of the tunnel, directly to
that server and back to the remote user. It doesn’t need to go through the full tunnel to reach devices
that aren’t on the internal corporate network.
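
Conceptually, the split-tunnel decision is just a prefix check on the destination address; here’s a small sketch with a made-up corporate prefix.

# Split-tunnel routing decision: destinations inside the corporate prefix
# go through the VPN tunnel, everything else goes directly to the internet.
import ipaddress

corporate_networks = [ipaddress.ip_network("10.0.0.0/8")]   # hypothetical

def route_for(destination: str) -> str:
    dst = ipaddress.ip_address(destination)
    if any(dst in net for net in corporate_networks):
        return "send through the VPN tunnel"
    return "send directly to the internet"

print(route_for("10.1.5.20"))      # internal file server -> tunnel
print(route_for("203.0.113.80"))   # external website -> direct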

We can also use VPN technology between remote locations. We might have a corporate network and a
remote site and we might set up a VPN between VPN concentrators or firewalls. This means that
anything running between those firewalls will be encrypted. And of course, that firewall or VPN
concentrator is providing the encryption and decryption process to provide access from the remote site
to the corporate network.

Site-to-site VPNs like this are almost always connected constantly, or they’re in a configuration where
they will dynamically connect to each other whenever you need to communicate. Usually there is a
firewall in place at the corporate network and a firewall in place at the remote site, so it’s very easy to
use those firewalls as VPN concentrators.

Many site-to-site VPNs are implemented using L2TP, the Layer 2 Tunneling Protocol. This means that
we’re connecting two networks together as if they are on the same layer 2 network, even though we’re
obviously connecting them through a layer 3 network to perform that function. This is commonly used in
conjunction with IPSec.

So you would use L2TP for the tunnel between these sites, and then add on IPSec for the encryption
capabilities. You’ll sometimes see this referred to as L2TP over IPSec or L2TP/IPSec.

And if you are connecting site to site communication using an encrypted tunnel then you’re most likely
using IPSec or internet protocol security. This allows you to have authentication and encryption over a
layer 3 network. And it’s very commonly used on networks like the internet.

IPSec also supports packet signing along with the encryption, so you can not only have security of the
data, but you can also make sure that anti-replay is built in to the conversation. You’ll also find that IPSec
is very standardized; regardless of which manufacturer’s firewall you happen to be using, you can
probably communicate with any other firewall using IPSec’s standard set of protocols.

If you do configure IPSec you’ll see that there are two major protocols that you will use. One is AH or the
authentication header. And the other is ESP or the encapsulation security payload.

There are two ways to send encrypted data over an IPSec tunnel: one is transport mode, and the other is
tunnel mode. Let’s take, for example, an original packet of information, which includes an IP header and
the data that we need to protect. In transport mode, we keep our original IP header at the front so that
we know where to send this information, but we encrypt the data and surround it with an IPSec header
and an IPSec trailer. This obviously doesn’t protect everything in this particular packet, because the IP
header remains in the clear and is sent across the network with the original IP addresses.

If you want to protect both the IP information and the data, then you want to use tunnel mode. In tunnel
mode, both the IP header and the data are encrypted with IPSec. We’ll find IPSec headers and trailers
around those, and then a brand new IP header that sends this information to the IPSec concentrator on
the other side of the tunnel.

If your only concern is the integrity of the data, then you may not need to encrypt the data going across
the tunnel. In that case, you would only be using the Authentication Header, or AH, protocol. This is a
hash of the packet combined with a key that is shared between the two IPSec concentrators. It’s common
to use a hash such as SHA-2, which would then add an authentication header to the data that you’re
already sending across the network.

This again is not going to provide any encryption, but it will provide data integrity because we do have a
hash. We can guarantee the origin of the data because we have authentication with the shared key. And
there’s going to be prevention of replay attacks, because sequence numbers are also included as part
of this communication.
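
Conceptually, that authentication header is a keyed hash computed over the packet plus the shared key, with a sequence number mixed in to resist replays. Here’s a rough Python illustration of the idea; it is not the actual AH wire format.

# AH-style integrity sketch: an HMAC over the packet contents plus a
# monotonically increasing sequence number, keyed with the shared secret.
import hashlib
import hmac

shared_key = b"pre-shared-key-between-peers"   # hypothetical

def protect(payload: bytes, seq: int) -> bytes:
    return hmac.new(shared_key, seq.to_bytes(4, "big") + payload,
                    hashlib.sha256).digest()

def verify(payload: bytes, seq: int, mac: bytes) -> bool:
    return hmac.compare_digest(protect(payload, seq), mac)

mac = protect(b"original packet data", 1)
print(verify(b"original packet data", 1, mac))   # True: intact and in sequence
print(verify(b"tampered packet data", 1, mac))   # False: payload was modified
print(verify(b"original packet data", 6, mac))   # False: wrong sequence number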

In most IPSec implementations, though, you’re going to be doing some type of encryption, and we use
ESP, or the encapsulation security payload, to provide that encryption functionality. This is going to
encrypt and authenticate traffic across this IPSec tunnel. It commonly uses SHA-2 for the hash and AES
for encryption, but you can change those parameters in the IPSec configuration.

This adds extra headers on the front and trailers on the end. We have an integrity check value so we can
make sure that the data gets through the network properly, and your IP header and data are encrypted in
the middle of the tunnel.

Most implementations of IPSec that you’ll see will use the encapsulation security payload so that we can
encrypt the data, but they also include the authentication header to make sure that the data gets
through the network without anyone changing any of that information.

Here’s a graphic that combines both the authentication header and the encapsulation security payload
in both of the modes, both the transport mode and the tunnel mode. Remember in transport mode
we’re only encrypting the data and the original IP header remains in its original form. And now you have
the authentication header and the encapsulation security payload all combined with an integrity check
value at the end to make sure that the data gets through the network without any corruption.

With tunnel mode, which is undoubtedly the most common implementation of IPSec that you’ll see, you
of course have both the IP header and data encrypted, and the AH and ESP protocols are wrapped
around that data so that it can be sent through the network with both encryption and integrity.

Port Security – SY0-601 CompTIA Security+ : 3.3


We have many security protections that can be applied to our switch and router interfaces. In this video,
you’ll learn about broadcast storm control, loop protection, DHCP snooping, MAC filtering, and more.

In this video, we’re going to talk about port security. We’re talking about physical interfaces on a switch
or router. We’re not talking about TCP or UDP ports in this particular usage of the word port. So when
you think of port security for this video, think of it as that physical connection you’re making to a switch
or router.

The goal with port security is to maintain uptime and availability of the communication across the
network. We want to limit the overall traffic on the network. There may be certain traffic types that
need to be controlled as they’re going across the network, and we certainly want to remove any
unwanted network traffic from those connections.

But of course, there are many different types of devices that we would have on our network, and many
different protocols and applications that are using those networks. So we will use different techniques
to be able to provide security at the port level.

One challenge we have on our networks is with broadcasts. Broadcasts are packets that are sent from
one device that are addressed to everybody else who happens to be on the network.

When that information is sent out, every single device on the network has to examine the information
that’s inside of that packet. If there are many broadcasts traversing the network, this could cause
everyone to stop and evaluate what is inside of those packets, and it would use unnecessary bandwidth
on the network as well.

Fortunately, broadcasts have a limited scope. They can only traverse the networks that are connected in
a single broadcast domain. When we talk about using a VLAN on our network, that VLAN is a broadcast
domain. And any broadcast sent to that VLAN will only be sent to other devices that are on that same
VLAN.

With IP version 4, it’s very common to see broadcasts being sent over the network. Traffic such as
routing updates and ARP requests commonly uses broadcasts to communicate with other devices on the
network.

Unfortunately, broadcasts can also be malicious traffic, or unwanted traffic, as well. So we need to have
some way to manage and control the good broadcast traffic from the bad broadcast traffic.

If you’ve done any work with IPv6, one of the things you’ll notice is that there are no broadcasts. When
we implemented IPv6, we focused on multicast rather than broadcast, which makes this traffic much
easier to manage.

Many of our managed switches can be used to control broadcasts. There’s functionality within the
software of the switch itself that allows us to limit the number of broadcasts that can be sent in any
particular second. Some of these switches can even provide control of multicast, and unicast as well,
giving the administrator of the network a tight level of control over what type of traffic is sent on the
network.

There might also be a way to limit broadcasts to a certain finite value. For example, you might want to
allow 100 broadcasts per second, or 1,000 broadcasts per second. But you can also have the switch
monitor the amount of broadcast traffic, and if it increases by a large percentage in a short period of time,
you can have those broadcasts removed from the network as well.
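
That thresholding behavior is easy to picture as a counter per one-second window; here’s a simplified sketch with a made-up limit.

# Simplified broadcast storm control: count broadcast frames per one-second
# window and drop anything over a configured threshold.
import time

BROADCASTS_PER_SECOND_LIMIT = 100   # hypothetical configured limit

window_start = time.monotonic()
count = 0

def allow_broadcast() -> bool:
    global window_start, count
    now = time.monotonic()
    if now - window_start >= 1.0:    # start a new one-second window
        window_start, count = now, 0
    count += 1
    return count <= BROADCASTS_PER_SECOND_LIMIT

# Simulate a burst of broadcast frames arriving within the same second.
results = [allow_broadcast() for _ in range(150)]
print("forwarded:", results.count(True), "dropped:", results.count(False))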

One challenge we have with our layer 2 networks is there’s no mechanism built into layer 2
communication that is able to recognize and remove loops from the network. This makes it very easy for
us to accidentally bring down the network by simply connecting two networks to each other into a loop
configuration.

It’s surprising how easy it is to do this, especially if you’re moving around a lot of cables inside of a
wiring closet and you accidentally plug the wrong cable into the wrong interface. Fortunately, there is an
IEEE standard for preventing loops on switched networks. It’s the 802.1D standard, created by Radia
Perlman, and it prevents loops on any type of switched network.

The standard is called Spanning Tree Protocol, or STP. And it’s a very common way to implement loop
control on layer 2 networks. I have a design for a layer 2 network here. You can see there are a number
of bridges, or switches, on this network, five of them, and they’re connecting many different networks
together.

You can see in this design that there’s many opportunities to create loops. For example, if you’re on
Network M, you could communicate through Bridge 6, then network B, then Bridge 21, then network J,
back to Bridge 1, and finally, back to Network M. And the process can go around, and around, and
around, unless you’re using Spanning Tree Protocol.

In this example, Spanning Tree is enabled. And anywhere you see BP, which is a blocked port, that is a
port that has been administratively disabled because of Spanning Tree, to prevent loops on the network.
This means that Network M could not loop all the way through the network because they would be
blocked at this interface on Bridge 21.

Spanning Tree is also good at finding problems that occur, and working around those problems. For
example, if Network M wanted to communicate to Network Y, it would hop through Bridge 6, then
Network A, then Bridge 5, and then finally down to Network Y.

But what if there was an outage on the network? What if a cable was disconnected, or a switch
happened to go bad? In that case, we would lose the connectivity that we had between Bridge 6 and
Network A, and now we would no longer be able to follow that path between Network M and Network
Y.

Fortunately, Spanning Tree is constantly monitoring for situations like this, and it will put the network
into a convergence mode, where it will begin examining which interfaces are available, and which
interfaces are not available, based on this particular outage.

In this example, Spanning Tree finds that there are some interfaces it’s able to change, and it’s able to
remove the block that was on Bridge 11, allowing Network M to use a different path to communicate to
Network Y. Now Network M communicates through Bridge 1, Network J, Bridge 21, Network C, and
Bridge 11, to finally talk to Network Y.

So it was able to work around the problem, still maintain communication to that network, but also
prevent any type of loops, thanks to Spanning Tree Protocol.

One of the challenges with initial implementations of Spanning Tree is when you first connect to the
network, it might take 20 to 30 seconds before Spanning Tree understands exactly what path it should
use to be able to communicate on the network. Since we’re plugging in a single device, there should
never be a situation where our single device would create a loop. But Spanning Tree doesn’t know that.
So it has to perform the same checks every time we plug into the network.

Instead of having this delay when individual devices connect, we can configure the switch and let it
know that the only thing that will be plugging in to that physical interface on the switch is somebody’s
end station, and it doesn’t have to go through the listening and learning process that Spanning Tree
would normally use.
Instead, you can plug the device in and instantly start communicating on the network. On Cisco
switches, this capability is called PortFast.

The problem, of course, is that someone could plug into that connection with another switch, and then
there would be a loop created over that connection. To be able to combine the speed of having
PortFast, and the security of Spanning Tree, we can configure BPDU guard on the switch.

BPDU stands for Bridge Protocol Data Unit, and it’s the primary protocol used by the Spanning Tree
Protocol. With BPDU guard, the switch is constantly watching the communication coming from these
interfaces. If an interface ever sends a BPDU frame, the switch recognizes that there could be a switch
on the other side of this communication. PortFast would no longer apply, and the switch would disable
that interface before there was an opportunity for a loop to occur.

Another challenge we have on switches is that someone could plug in a DHCP server that was not
authorized to be on the network, thereby creating either a denial of service situation, or a potential
security issue. Fortunately, switches have software inside of them that can also look for these types of
problems.

This is called DHCP snooping, for Dynamic Host Configuration Protocol. The switch would be configured
with a series of trusted interfaces that may have routers, switches, and other DHCP servers on it, but it
would have other interfaces that are not trusted. These might have other computers, and of course,
DHCP servers that would not be authorized to be on the network.

The switch is constantly monitoring the conversations that are occurring over these interfaces. And if
the DHCP protocol appears, and it’s coming from one of the untrusted interfaces on that switch, the
switch can filter out that DHCP conversation, and not allow it to be sent to any of the other devices on
the network.
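
The logic behind DHCP snooping can be sketched as a simple check on the ingress interface for DHCP server messages; the interface names here are hypothetical.

# DHCP snooping sketch: DHCP server messages (OFFER/ACK/NAK) are only
# forwarded if they arrive on an interface marked as trusted.
trusted_interfaces = {"gi0/1"}     # uplink toward the legitimate DHCP server
SERVER_MESSAGES = {"DHCPOFFER", "DHCPACK", "DHCPNAK"}

def forward_dhcp(message_type: str, ingress_interface: str) -> bool:
    if message_type in SERVER_MESSAGES and ingress_interface not in trusted_interfaces:
        return False               # rogue DHCP server on an access port: drop it
    return True

print(forward_dhcp("DHCPOFFER", "gi0/1"))       # True: trusted uplink
print(forward_dhcp("DHCPOFFER", "gi0/14"))      # False: unauthorized server
print(forward_dhcp("DHCPDISCOVER", "gi0/14"))   # True: client requests are fine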

Another type of control technology used on our switches, routers, and other devices is MAC filtering.
MAC stands for Media Access Control, and it’s referring to the physical interface that is on our ethernet
cards. MAC filtering allows the administrator of a device to either allow or disallow traffic based on
the MAC address that’s communicating through the network.

This means that you could configure your switch to allow communication for all of the devices that are
on the inside of your network. But if someone comes in with their own device and plugs in, that device
would not be able to communicate across those network links.

One of the challenges with MAC filtering is that there’s no security mechanism at layer 2 that can obscure
or encrypt these MAC addresses. That means that anyone could connect to the network, listen, and
collect a list of all of the MAC addresses that are allowed on the network, and then simply change their
own MAC address to match one of the MAC addresses that’s allowed.

This obviously is not a very strong security mechanism, which means that MAC filtering is really more of
an administrative tool. We often refer to MAC filtering as security through obscurity, because if you
know the mechanism that’s being used to provide this filtering, it’s very easy to circumvent the security
controls.
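
Here’s a small sketch of why that’s the case: the allow-list check itself is trivial, and a spoofed address passes it just as easily as the real one. The addresses are made up.

# MAC filtering allow-list sketch, and why it is weak: an attacker who
# observes an allowed address can simply present it as their own.
allowed_macs = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}   # hypothetical list

def port_allows(source_mac: str) -> bool:
    return source_mac.lower() in allowed_macs

print(port_allows("AA:BB:CC:DD:EE:01"))  # True: a legitimate device
print(port_allows("de:ad:be:ef:00:01"))  # False: unknown device is blocked
# A frame spoofing a copied, allowed address passes exactly the same check.
print(port_allows("aa:bb:cc:dd:ee:01"))  # True: indistinguishable from the real device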

Secure Networking – SY0-601 CompTIA Security+ : 3.3


A secure network will include a number of different security controls. In this video, you’ll learn about
DNSSEC, out-of-band management, QoS, taps, port mirrors, and more.

In the previous video, we described some of the DNS security mechanisms that have recently been
created, because DNS was originally built with no security features in mind. We’ve added security
features through the use of DNSSEC, which stands for Domain Name System Security Extensions.

DNSSEC adds the ability to confirm the responses that we’re getting from a DNS server, so that we know
the response really came from that server; we have origin authentication. And we know that the
information we’re receiving is exactly what that server sent, because we have data integrity.

We’re able to do this by adding public key cryptography. We’re digitally signing the information that’s on
these DNS servers, and each record has that signed DNS information stored as part of the DNS server
itself. But we can also use our DNS servers as additional security tools for our endpoints.

We know that all of our users have to access DNS to obtain the IP address of the device they’d like to
communicate with. So one of the things we can do is tell our DNS server that if a user ever tries to visit a
known malicious location, don’t give out the actual IP address of that location; instead, give out a
different IP address that we’ve configured for ourselves.

We call this a sinkhole address. This means the user will not be redirected to the malicious site; instead,
they’ll be redirected to a different location. And we can then perform logging and reporting on how many
people have been accessing our private sinkhole address. By doing that, we can immediately identify
stations that may have been infected with malware, and we can stop them before they’re able to
communicate back to the central server for that malware infestation.

We can monitor this sinkhole address and any time we see an internal device try to access that address
we can assume that device may have been infected by some malware. This prevents any additional
exploitation of that device. And then we can perform our own internal mitigation of malware on those
specific systems.
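
Here’s a conceptual sketch of that sinkhole behavior; the domain names and addresses are made up, and a real deployment would use features built into the DNS server, such as response policy zones, rather than custom code.

# DNS sinkhole sketch: known-malicious names resolve to an internal sinkhole
# address, and each hit is logged for follow-up investigation.
malicious_domains = {"malware-c2.example", "phish-login.example"}
SINKHOLE_ADDRESS = "10.99.99.99"
sinkhole_log = []

def resolve(name: str, real_answer: str, client_ip: str) -> str:
    if name in malicious_domains:
        sinkhole_log.append((client_ip, name))   # this client may be infected
        return SINKHOLE_ADDRESS
    return real_answer

print(resolve("www.example.com", "93.184.216.34", "10.1.1.50"))
print(resolve("malware-c2.example", "203.0.113.66", "10.1.1.77"))
print("possible infections:", sinkhole_log)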

This also effectively acts as content filtering. If our DNS has a list of unwanted or suspicious sites, our
users would not be able to visit those locations, because our DNS will not provide them with the proper
DNS resolution.

IT professionals often take advantage of out-of-band management to work around problems that may be
occurring on the network.

For example, if you lose the connectivity to a remote site using the normal network connection you
would normally not have access to any of those devices. Fortunately switches, routers, firewalls, servers
and other devices often have a separate management interface that you can connect to. Sometimes this
is a serial port, sometimes it’s USB or it may be an additional ethernet connection on that device.

We would commonly connect a wired modem, or perhaps a wireless cellular modem, to these serial
connections, which allows us to dial in or connect around our network into the out-of-band management
interface on that device.

In larger environments, we might have a centralized console router; you’ll sometimes hear this referred
to as a comm server. You connect to the comm server, and the comm server then gains you access to all
of the other devices on that network that are connected through their out-of-band management
interfaces.

There are many different kinds of devices on our networks these days. We have laptops and desktops,
our mobile devices and tablets, and our voice over IP systems. For all of these, there are different
applications that are running. Some of these applications are real-time applications, some of them are
streaming audio or video, and some of them are web-based applications.

Each one of those applications has a different requirement for access. Some of them require faster
response times; others require larger amounts of bandwidth. For example, if you’re on the phone and
having a voice conversation, that is real-time communication that needs to happen immediately. If you’re
streaming video, there’s usually a buffer involved, so you might have a little more leeway in how much
data you’re transferring through the network at any particular time.

And if you’re using something like a database application, it may be interactive, where if you put input
into the application, you’re expecting a certain output in a very short period of time. Network
administrators are often tasked with setting priorities for these different applications. Voice over IP
traffic, for example, probably needs a higher priority than something like streaming video or an
interactive database app.

This means that we would need to configure these applications so that voice over IP has priority over
web browsing. No matter how much traffic was transferred over a web browsing connection, we would
still give our voice over IP calls much higher priority on the network. This means that we would prioritize
the applications based on response times, bandwidth, traffic rates, and other criteria.
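
As a rough illustration, you can think of this as a ranking applied to each traffic class, with higher-priority classes serviced first; the classes and priority values here are made up.

# Quality of service sketch: queued packets are serviced in order of their
# traffic class priority (lower number = higher priority).
import heapq

priority = {"voip": 0, "interactive-db": 1, "streaming-video": 2, "web": 3}

queue = []
for traffic_class, payload in [("web", "GET /index.html"),
                               ("voip", "RTP frame 1042"),
                               ("streaming-video", "video chunk 88"),
                               ("voip", "RTP frame 1043")]:
    heapq.heappush(queue, (priority[traffic_class], traffic_class, payload))

while queue:
    _, traffic_class, payload = heapq.heappop(queue)
    print("sending", traffic_class, "->", payload)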

We broadly call this prioritization process quality of service or QoS. This describes the process we would
go through to prioritize one application over another. The method of implementing QoS can vary widely
depending on the type of equipment you’re using and the type of applications you have in place. Your
QoS functionality may be in the switches you’re using; it may be associated with routers. Or it may be
something that’s in your next generation firewall.

One of the challenges we have with IPv4 is that it was built before there was a huge emphasis in security
on the network. Well, that changed with the implementation of IPv6. During the IPv6 process we put a
lot of different configuration settings inside the protocol itself that will assist with security on our
network. We already know that there are a lot more IPv6 addresses that we can have when compared
to IPv4.

So it’s much more difficult to perform a complete port scan or interface scan when we’re working with
IPv6 addresses. Many of the security tools that we’re using like port scanners and vulnerability scanners
have already been updated to take advantage of IPv6. Because there’s so many IP addresses available
with IPv6, we’ve effectively removed the need to perform port address translation or outbound network
address translation on our network. Without having that in place, we can simplify the communications
process.

Network address translation is not a security feature; in many environments, it was simply used to
minimize the number of public IP addresses that would be required. Another nice advantage of
removing certain protocols from the network is that those protocols can’t be a security risk. For
example, with IPv6 we removed the Address Resolution Protocol, or ARP, and without any ARP there
can’t be any ARP spoofing.

But this doesn’t necessarily mean that IPv6 is any more or less secure than IPv4. It simply changes the
security that we’re using on our network. For example, Neighbor Cache Exhaustion can use IPv6
protocols to fill up the neighbor cache. And therefore make a system unable to communicate with other
devices on the network.

Network administrators have always taken advantage of taps and port mirrors to be able to properly
manage the network. But these are also a security concern, especially if an unauthorized third party
happens to add a tap to your network. These physical taps will allow someone to disconnect a link, put
the tap in the middle of the link and now they can receive a copy of all of the traffic going over the
network.

A port mirror is often a software-based tapping mechanism that’s usually built into a switch. You’ll
sometimes hear this referred to as port redirection, or in the case of a Cisco switch, it’s referred to as
SPAN, or switched port analyzer. Although there are some limitations to using these port mirrors, they do
become very useful in places where no other option is available.

Here’s an example of a physical tap that you would plug into the network. This is a DS3 tap, and you can
see there are two sides: an equipment or DTE side, for data terminal equipment, and a network or DCE
side, for data communications equipment. On one side is transmit and the other side is receive, and it’s
reversed in the other direction.

To install the tap, we would interrupt this flow and put the tap in the middle of the connection, and the
same thing for the data that’s coming through the other way. A copy of this information is then sent to
other interfaces on this particular tap, and that’s where we would plug in our monitoring tools to be able
to see all of the traffic going between these two devices on the network.

To help provide additional security on the network, you might take advantage of a monitoring service.
This is an organization that constantly monitors the security of your network. They might perform
ongoing security checks, so you would always know whether all of your systems have been updated with
the latest patches. And there might be a series of experts at the SOC, or security operations center, who
can constantly monitor the security posture of your network.

These organizations are often performing constant monitoring of the traffic going in and out of your
network, and they can identify whether there is any increase in threats or anybody trying to attack
certain parts of your network. And since they’re constantly watching the network, they can react very
quickly to any problems that might be occurring.

This is usually a 24 by 7 organization. So they can take care of problems that are occurring during the
nighttime hours when you may not necessarily have any staff on site. And if you’re concerned about
compliance you can rely on these experts to make sure that you maintain your HIPAA compliance, your
PCI DSS compliance and any other compliance requirements.

Another useful security technique is constantly monitoring the files on our systems, so that if anybody
modifies a file that should not be modified, you can be informed automatically. This is file integrity
monitoring, or FIM. This commonly monitors files that would never change, such as your operating
system files. If something is changing parts of your operating system, it’s a good bet that it is some type
of malicious software.

A type of on-demand file integrity monitoring can be done in Windows with the SFC utility. That stands
for System File Checker, and it will go through all of your system files and make sure that none of those
files have been modified. A type of real-time file integrity monitoring can be found in Linux with the
Tripwire application, and there are a number of other host-based IPS solutions that include different
levels of file integrity monitoring.
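
At its core, file integrity monitoring is a stored hash baseline that gets re-checked later; here’s a minimal sketch with placeholder paths. Real tools such as Tripwire add protected baselines, scheduling, and alerting on top of this idea.

# Minimal file integrity monitoring sketch: hash files once as a baseline,
# then re-hash later and report anything that changed.
import hashlib
from pathlib import Path

monitored_files = [Path("/etc/hosts"), Path("/etc/ssh/sshd_config")]  # placeholders

def file_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

baseline = {path: file_hash(path) for path in monitored_files if path.exists()}

# ... some time later ...
for path, original in baseline.items():
    if not path.exists() or file_hash(path) != original:
        print("ALERT: file modified or removed:", path)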

Firewalls – SY0-601 CompTIA Security+ : 3.3


The firewall is a staple of IT security. In this video, you’ll learn about stateless vs. stateful firewalls,
UTMs, next-generation firewalls, web application firewalls, and more.

If you’re connected to the internet at home or in your office, then you are using a firewall to help
protect your network from whatever happens to be on the internet. This is a component that allows us
to control the flow of traffic. This might be inbound traffic into your network, or it may be traffic that
we’re sending out to the internet. This can be especially important in corporate environments where we
want to be sure that anyone on the internet does not have access to the sensitive information that’s on
the inside of our network.

Your firewall might also include content filtering, so there may be a way for your system administrator
to define exactly what type of content you’re able to access on the internet. And some firewalls include
antivirus, anti-malware, and other methods of detecting malicious software if it happens to flow through
that firewall.

A traditional firewall is able to control traffic based on the IP addresses and port numbers that are in use.
Some of the newer next-generation firewalls are able to go a step further and identify the applications
that may be flowing across the network. Many firewalls can also act as a VPN endpoint for IPSec site-to-
site tunnels or communication from end users. This allows you to configure the firewall as the central
point of security for all of your remote access devices.

It’s also very common for your firewall to act as a layer 3 device and effectively replace the router that
you’re using to connect to the internet. This also means that we have the ability to change addressing
with network address translation, so that you can have a group of internal private addresses, and all of
those private addresses are able to communicate to the internet. And since this firewall is acting as a
router, you’re able to have dynamic routing, route redistribution, and other advanced routing functions.

We can think of the communication on our network as a flow of traffic. We might begin to query a web
server from one device and the web server will return a response to our query to the original requester.
That is a single flow of communication. And when we make that request to the web server, we naturally
expect there’s going to be a response from that web server.

There is an older style of firewall called a stateless firewall that doesn’t have any idea about these flows
of communication. A stateless firewall doesn’t know that when you make a request to a web server, that
there is going to be a response from that web server, and all traffic going one direction through the
firewall needs its own set of rules. Traffic coming the other direction through the firewall needs a
completely different set of rules.

A stateless firewall is not able to determine that the response from a web server is being sent inbound
because we originally made a request earlier to that web server. This means that the firewall is not
going to keep track of any of these flows going back and forth. So it needs to have a rule base that
covers all communication in both directions.

This rule base has a first rule which says the source IP of 10.1.1.1, which is Jack's workstation, is able to communicate to 10.10.10.10, which is the SGC web server. It's able to communicate to that device over TCP port 80, and that traffic is allowed. Since this firewall has no idea that a state of communication exists, it also needs a rule that allows traffic in the other direction.

So we have a second rule in this firewall that allows 10.10.10.10, which is the SGC web server to
communicate to 10.1.1.1, which is Jack’s workstation, using any TCP port and allowing that traffic to
pass through the network. Let’s test the rules in this stateless firewall.

We’ll begin with Jack communicating to the web server. The firewall evaluates that traffic as 10.1.1.1
communicating to 10.10.10.10. It sees that that information is allowed if that’s TCP port 80. And in this
case, it is. So that traffic is allowed through the firewall.

The SGC web server is going to respond to that communication and send the information back to the firewall. But since this is stateless, the firewall has no idea that this is the response to that earlier request. So it has to look into its rule base again and see that there is a rule that allows this traffic from 10.10.10.10 to 10.1.1.1. Since that is allowed, that information continues to Jack's workstation.

Let’s say an attacker now gains access to the SGC web server and wants to send some unprompted data
into Jack’s workstation. So the SGC web server will send a packet of information. Since the firewall has
no idea what the state of this flow might be, the only thing it can rely on is the firewall rule base. And
this rule base does allow 10.10.10.10 to communicate to 10.1.1.1, which is Jack’s workstation. And that
information even though it is something that could be malicious, is allowed through the firewall.

These days it would be very unusual to find a stateless firewall. Practically, all firewalls that you’ll find
are stateful devices. Stateful devices are much more secure and are much more intelligent about how
they allow traffic through the network.

Let’s take the same scenario where Jack is going to communicate to the SGC web server. With a stateful
firewall, the set of rules is very different. We only need a single rule which would be 10.1.1.1, Jack’s
workstation, communicating to 10.10.10.10 which is the SGC web server and that traffic is allowed using
TCP port 80 through the firewall. Notice that there is not another rule that allows traffic the other
direction. Since this is a stateful firewall, it knows that if a conversation is beginning from Jack that it’s
going to allow the return communication through the firewall automatically.

So now let's send this information from Jack's computer. It hits the firewall, it's evaluated, and that particular traffic flow is allowed, so the firewall creates a state table. The session table is going to have information about this particular flow. This flow is from 10.1.1.1, has a source port of 15442, and is heading to 10.10.10.10 over port 80. And that information continues to the web server.

When the SGC web server responds to that request, the firewall will look through the rule base and see that no rule explicitly allows that traffic. However, it does find an entry in the active session table which shows that this is part of an active communication, and therefore it allows that traffic back through to Jack's computer.

If we have the same scenario where the attacker gains access to the web server and is sending information to Jack's computer without a flow existing originally, when that traffic hits the firewall, it won't match any of the rules in the rule base. It won't match anything in the session table either, because this is using a different destination TCP port, and therefore that information will be denied access through the firewall.
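
To make the difference concrete, here is a minimal sketch in Python, invented for this illustration and not taken from any firewall vendor, of how a stateful firewall can combine a small rule base with a session table. The addresses and ports mirror the Jack and SGC web server example above.

# One allow rule: Jack's workstation to the SGC web server on TCP port 80
RULES = [{"src": "10.1.1.1", "dst": "10.10.10.10", "dport": 80, "proto": "tcp"}]
session_table = set()   # remembers flows that were allowed outbound

def evaluate(src, sport, dst, dport, proto):
    # A reply is allowed if it matches an existing session (the reversed flow)
    if (dst, dport, src, sport, proto) in session_table:
        return "allow (matches session table)"
    # Otherwise fall back to the rule base
    for rule in RULES:
        if (rule["src"], rule["dst"], rule["dport"], rule["proto"]) == (src, dst, dport, proto):
            session_table.add((src, sport, dst, dport, proto))
            return "allow (matches rule, session created)"
    return "deny (implicit deny)"

print(evaluate("10.1.1.1", 15442, "10.10.10.10", 80, "tcp"))     # Jack's request: allowed
print(evaluate("10.10.10.10", 80, "10.1.1.1", 15442, "tcp"))     # server's reply: allowed by session table
print(evaluate("10.10.10.10", 80, "10.1.1.1", 31337, "tcp"))     # unsolicited traffic: denied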

You can see that the stateful firewall is much more secure for these active flows, and the rule bases are much simpler on a stateful firewall, which is why the stateful firewall is most likely the default firewall type that you'll use. Over time, we've continued to improve the capabilities of these firewalls, and they've become more than simply a device that allows or disallows network flows.

We created a newer version of the firewall called a UTM. This is a Unified Threat Management device, or what some people call a web security gateway. These devices include a number of additional features over simply being a firewall. For example, a UTM might include URL filtering or content inspection, it can look for malware communicating through the firewall, and it can provide spam filtering in some cases.

Some of these UTMs have CSU/DSU connectivity for your wide area network serial connections. They could have routing and switching functionality built into the interfaces that are on the UTM device. And of course they provide firewalling, IPS/IDS functionality, bandwidth shaping, VPN endpoints, and even more.

One of the challenges we have with UTMs is that there generally was never one single vendor that could provide all of this in a single device. And installing another manufacturer's code into a third-party device often created more problems than it actually solved. We needed a smarter way to include this functionality in a device that was really built to handle the loads of our modern networks.

That’s why most of our enterprise firewalls today are next-generation firewalls or NGFW devices. They
are application layer devices that can see the application flows across all of the communication on our
network. You might also hear of these called application layer gateways, stateful multilayer inspection
devices, or deep packet inspection devices.

A next generation firewall is going to evaluate all of the traffic flowing through the network and it’s able
to understand exactly what applications are in use regardless of what IP addresses or port numbers may
be used. These are much more intelligent devices than a UTM or a traditional firewall. But every packet
has to be analyzed and categorized so that it can then apply security rules to all of the traffic.

Next-generation firewalls are commonly network connected devices. They are able to look at all of the
traffic going through and show how much web browsing, how much BitTorrent, how much Blackboard
communication, and all of these other applications as well. These might also include intrusion
prevention capabilities. There’s usually application-specific rules inside of the firewall that are able to
recognize and react to any vulnerabilities that may exist for that application.

And since we're looking at everything going by with the next-generation firewall, we can also apply rules relating to URL filters or categorizations of URLs. This newer style of security provides much more efficiency and provides more security on the network, which is why you often see people now throwing out their UTMs and replacing them with next-generation firewalls.

If you're responsible for securing web services and you want to provide additional security for the information that's being input into those web service applications, then you need a web application firewall, or a WAF. This is not like a traditional firewall that is able to allow or disallow traffic based on IP address or port number, and this is not like a next-generation firewall which is examining application flows. This is a firewall specifically built for web-based applications, and it's going to apply rules to the conversations that are taking place for your HTTP and HTTPS based applications.

Instead of allowing or disallowing traffic based on an IP address or port number, a WAF is going to allow
or deny traffic based on the input to that particular application. For example, if an attacker sends
information to a web server that exploits a SQL injection vulnerability, the WAF can recognize that a SQL
injection has been attempted and block that traffic flow.
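
As a rough illustration of how a WAF looks at application input rather than addresses and ports, here is a simplified sketch in Python. The patterns shown are deliberately basic examples written for this document; real WAF rule sets are far more thorough.

import re

# A couple of simplified example patterns; production WAF signatures are much more sophisticated
SUSPICIOUS_PATTERNS = [
    re.compile(r"('|%27)\s*or\s*1\s*=\s*1", re.IGNORECASE),   # classic SQL injection probe
    re.compile(r"<script\b", re.IGNORECASE),                  # cross-site scripting attempt
]

def inspect_request(params):
    # Examine every input field of the HTTP request before it reaches the web application
    for name, value in params.items():
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern.search(value):
                return f"deny: suspicious input in parameter '{name}'"
    return "allow"

print(inspect_request({"user": "jack", "password": "x' OR 1=1 --"}))   # denied
print(inspect_request({"user": "jack", "password": "hunter2"}))        # allowed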

This is the type of security you would commonly see with very high end web-based applications. If
you’re accepting credit card numbers to your website, then you must comply with the Payment Card
Industry Data Security Standard or PCI DSS. And part of that compliance requires that you have in place
a web application firewall.

Here's a log file from a web application firewall. This particular firewall has caught some instances where there has been a cross-site scripting attack, and that particular attempt was denied by the firewall. We have another entry where the web server returned an error code, although the response to that error was suppressed so that the end user would not see the error, and that information was logged. We also have a SQL injection that was attempted, and in this case that particular SQL injection was denied by the web application firewall.

In this video, we’ve described the rule base within a firewall that allows or disallows traffic. You might
see this referred to as a security policy or an access control list or ACL. This provides us with a list of
rules that the firewall will follow to decide whether information should be allowed through the firewall
or denied through the firewall.

The series of variables that you would choose are called tuples, and they are groupings of information, so your firewall can make these forwarding decisions based on the traffic itself. It can look at the source IP, the destination IP, a particular port number, the time of day, the application that's being used, and other criteria as well. The firewall evaluates all of these different characteristics to see if it can match any of the rules within the access control list. And once it finds a rule that matches, it looks at the disposition for that rule to determine whether that information should be allowed or denied through the firewall.

Some firewalls have hundreds or even thousands of rules in the rule-base. So it’s important to
understand which particular rule is evaluated first. On most firewalls, it’s done through a top to bottom
approach. As the data is coming through the firewall, the firewall tries to match that data based on the
information in the first rule at the very top of the rule base. If nothing matches in that rule, the firewall
looks at the second rule in the rule base. If nothing matches in that rule we go to the third rule then the
fourth and the fifth and so on.
Eventually, we will find a rule that does match the characteristics of that flow and we’ll know what the
disposition is. This means we generally put the more specific rules at the top of this firewall list. So these
very specific rules can be evaluated before any other rules in the rule base. If we go through this entire
list of rules on the firewall and we have no matches to the data flow, you’ll find that most firewalls are
configured with an implicit deny. Which means once you get to the bottom of the rule base and nothing
matches, none of that data is allowed through the firewall by default.

Let's step through a firewall rule base and see if we can determine what these particular rules are allowing or denying. We have a rule base that has a rule number, a remote IP address, a remote port number, a local port number, a protocol, and an action. The first rule in the rule base looks for any remote IP address that we're sending information to over any port number, and it allows us to send this traffic if the local port number is 22 using the TCP protocol. If our network traffic matches that rule, then that information will be allowed through the firewall.

The second rule also has all remote IP addresses. We are communicating to any remote port number.
The local port number that’s being used is port 80 over TCP which would be web traffic and that traffic is
allowed through the firewall.

Rule 3 also allows any remote IP from any port number through the firewall if it matches a local port of 443, which would commonly be HTTPS communication over TCP. And again, we are allowing that traffic.

Rule number 4 allows any remote IP over any remote port number to communicate to port 3389 on our local network using the TCP protocol, which would commonly be used for remote desktop. This is also allowed traffic. Rule 5 is a little bit different. We are allowing any remote IP, but the remote port number that's allowed is port 53. This would go to any local port number with the UDP protocol, which means we're allowing DNS traffic into our network.

Rule 6 is a similar rule that allows any remote traffic using remote port number 123, which would be the Network Time Protocol for time synchronization. That traffic uses the UDP protocol and is allowed into our network.

Now normally, at the bottom of the rule-base, anything else that came through would be denied due to
our implicit deny. But implicit denies are not commonly logged. We may decide that certain traffic
should always be logged in our firewall. So we have a single rule at the bottom, that for all remote traffic
that is communicating via the ICMP protocol into our network, we will deny any of that traffic. And
because we have a specific rule written for that deny, that information will be logged in the firewall logs.
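
To show how this top-to-bottom evaluation and the implicit deny work together, here is a small Python sketch built around the example rule base above. The rule encoding and the logging behavior are assumptions made just for the illustration, not a real firewall configuration format.

# Rules are evaluated in order; the first match decides the disposition.
RULES = [
    {"local_port": 22,   "proto": "tcp",  "action": "allow"},     # SSH
    {"local_port": 80,   "proto": "tcp",  "action": "allow"},     # HTTP
    {"local_port": 443,  "proto": "tcp",  "action": "allow"},     # HTTPS
    {"local_port": 3389, "proto": "tcp",  "action": "allow"},     # Remote Desktop
    {"remote_port": 53,  "proto": "udp",  "action": "allow"},     # DNS responses
    {"remote_port": 123, "proto": "udp",  "action": "allow"},     # NTP
    {"proto": "icmp",    "action": "deny", "log": True},          # explicit deny so it gets logged
]

def evaluate(packet):
    for rule in RULES:
        # A rule matches if every field it specifies agrees with the packet
        if all(packet.get(field) == value for field, value in rule.items()
               if field not in ("action", "log")):
            if rule.get("log"):
                print(f"LOG: denied {packet}")
            return rule["action"]
    return "deny"   # implicit deny at the bottom of the rule base (not logged)

print(evaluate({"proto": "tcp", "local_port": 443, "remote_port": 51000}))   # allow
print(evaluate({"proto": "icmp"}))                                           # deny, and logged
print(evaluate({"proto": "udp", "local_port": 161, "remote_port": 40000}))   # implicit deny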

There are many different kinds of firewalls with many types of implementations, and those
implementations have many different advantages and disadvantages. For example, you have the choice
on whether you would like to use an open-source firewall versus a proprietary firewall. Open source
firewalls tend to allow or disallow traffic based on the traditional firewall rules for an IP address or a port
number.

It’s unusual to find an open-source firewall that has a large understanding of the different applications
that can flow through the network. For that functionality, you may want to use a proprietary firewall
which does have application control and visibility into all of the application flows on your network, and
it’s usually built on hardware that’s designed for these high speed networks.

This is one of the big advantages of having a piece of hardware that has been purpose-built to be a firewall: it's designed for speed. That firewall can provide very efficient traffic flows and provide different types of interfaces, so that you're connecting not only to ethernet networks but perhaps even to wide area networks or wireless networks as well.

You can also get software based firewalls that you might install on your own hardware. This would give
you the flexibility to put these firewalls anywhere you’d like in your organization. And with so many
services moving into a virtual environment, it’s important to understand the differences between using
an appliance-based physical firewall versus a virtual software-based firewall.

An appliance often has the fastest throughput because there’s purpose-built hardware that’s designed
for the speeds and connections that you would commonly find on an enterprise network. On servers
and workstations, you can run a host-based firewall. Those devices, since they’re running as part of the
operating system, can understand exactly what applications you’re running on that device and can make
security decisions based on what applications are in use.

Since you’re also running this on your local machine, it can view information that has been decrypted
once it comes off the network. And if you’re in an environment where there’s many different virtual
systems, you might want to use a virtual firewall to provide security for all of the traffic within a single
data center. We refer to that as East-West traffic. And having a virtual firewall in that environment can
help you control exactly what application traffic moves between servers.

Network Access Control – SY0-601 CompTIA Security+ : 3.3


Network access control (NAC) is an important part of IT security. In this video, you’ll learn about posture
assessments, compare persistent and dissolvable agents, and agentless NAC.

There are a number of different ways to control access to your network. When we’re setting up a
firewall on the edge of our network, we’re usually connecting our internal network to the internet. This
edge connection is usually managed using rules that we put inside of that firewall. And generally, we set
up rules inside the firewall. We test those rules and make sure they’re working. And at that point, we
don’t tend to make a lot of changes to the rules inside of that edge firewall.

Access control approaches the idea of allowing or disallowing access to the network based on a number
of different criteria and not just whether you’re on the edge of the network. You could be a user that’s
on the inside of the network trying to access resources. Or you may be on the outside trying to access
resources on the inside. These rules that we use for access control are also quite different than the rules
we might have in a firewall.

These rules may be based on a username, perhaps a group the user belongs to, where the user may be
located, or the application in use. And unlike a firewall rule where changes don’t occur unless we go
through a change control process, the access control rules can change dramatically at any time. We can
decide to allow or disallow access for a user or group of users and can change our security posture as
needed.

One of the challenges with allowing people access to the network is sometimes they’re using equipment
that we did not provide to them. This would be a BYOD environment where you are Bringing Your Own
Device. So you might have your own phone. You might have your own tablet. And you are connecting
your device to the corporate network. The security team knows that we’ll be using your equipment to
connect to the network.

But we also want to protect what’s in the network already. So we’re concerned about malware that may
already be on these devices. Or perhaps the devices aren’t even running any anti-malware software. Or
it could be that the applications that are already installed on these devices are applications that we
really don’t want to run inside of the corporate network. It would be useful, then, to know exactly what
the status is of these devices.

So when someone connects to the network, we can perform a posture assessment. This will check the
device to see if, perhaps, it’s already a device that we’ve configured and is trusted by our organization.
We can see if it’s running anti-virus. If so, which type of anti-virus is it running? And what version of
software is it running? Are there any corporate applications already installed on this device? Or will we
need to install additional applications on that device? Is this a mobile device like a phone or laptop? And
if so, is the information stored on this device stored with encryption?

This requirement is not specific to any particular operating system. Whether you’re running Windows or
Mac OS, iOS, or Android, you need to perform some type of posture assessment when these devices are
connecting to the network. To be able to perform these posture assessments, we need to run some type
of software on these devices that are connecting to the network. Sometimes these are persistent
agents. We would install software onto the laptop or the mobile device. And that software would always
be on that device and run when we’re connecting to the network.

This also means that we have to maintain that software. And if there’s any updates, we have to push
those updates out to all of those devices. An option that doesn’t require this much management
overhead might be a dissolvable agent, which means we’re not installing a permanent piece of software.
This means that when we connect to the network, the software will run on that local device and perform
that posture assessment. When that assessment is done, the software terminates and is no longer
located on that machine.

Some operating systems include network access control as part of the operating system itself. And no
additional agent is required. In the case of Windows, for example, an agentless NAC is integrated with
Active Directory. And it performs these checks when the system logs into the network and logs out of
the network. But this also means that you’re not able to schedule any of these health checks. So if you
need additional functionality, you may require a persistent or dissolvable agent.

Once the security team has configured the network access control system with the minimum
configuration allowed on the network, it can then begin evaluating the user’s connection when they
begin to log in. And it may be that a user is connecting to the network with a device that can’t meet the
minimum requirements for these posture assessments. In that case, the device is not allowed access to
the network and very often is put into a quarantine network that is specifically built for devices that
don’t pass their health check.

This gives the user a chance to install the software that might be needed to update their system to meet
the minimum requirements of that posture assessment. Once the user feels that they’ve fixed all of
these problems, they can try reconnecting to the network. The posture assessment will run again. And if
any problems are found, the process repeats itself. If all of the problems have been resolved, the user
would then have access to the network.
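
The logic a NAC solution follows during that check-and-quarantine cycle can be summarized in a short sketch. The specific checks, device attributes, and network names below are assumptions made only for this illustration, written in Python.

# Hypothetical minimum requirements defined by the security team
REQUIREMENTS = {"antivirus_running": True, "disk_encrypted": True, "corporate_agent": True}

def posture_assessment(device):
    # Compare the device's reported state against every requirement
    return [name for name, required in REQUIREMENTS.items()
            if device.get(name) != required]

def connect(device):
    failures = posture_assessment(device)
    if failures:
        # Failing devices go to a quarantine network where they can remediate
        return f"quarantine network (failed: {', '.join(failures)})"
    return "production network"

byod_phone = {"antivirus_running": False, "disk_encrypted": True, "corporate_agent": False}
print(connect(byod_phone))      # quarantined until the user fixes the issues

byod_phone.update(antivirus_running=True, corporate_agent=True)
print(connect(byod_phone))      # re-assessment passes; access granted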

Proxy Servers – SY0-601 CompTIA Security+ : 3.3


A proxy can control traffic on complex enterprise networks. In this video, you’ll learn about forward
proxies, reverse proxies, and open proxies.

A proxy server is a device that sits between the users and the rest of the network. This proxy server usually receives requests from the users, creates its own request out to the service on behalf of the users, receives the response to that request, and then usually performs some type of security checks. If everything looks good, it provides the answer to that request to the original user. This means that the proxy server, since it's sitting in the middle of this conversation, can control quite a bit about these traffic flows. You can perform caching on the proxy server. This might be used for access control, so someone would have to put in a username and password for that request to be sent out of the network, and this proxy server may provide URL filtering or content scanning to be able to keep all of that traffic safe.

Some proxy servers are configured to be explicit. This means that we would have to go into the configuration of each of these users' devices and tell those devices that our proxy server is located at a particular IP address and uses a particular port number. Or the proxy server may be one the users have no idea exists on the network; no additional configurations are required. We refer to these proxies as transparent proxies, because the end users have no idea that a proxy server is sitting in the middle of the conversation. Although we don't often think of it this way, if we're doing some type of network address translation in a router, we're effectively creating a network-level proxy. But when we refer to proxies on a network, it's almost always an application-level proxy. The proxy understands exactly how the application operates, and it's able to create application requests on behalf of all of the clients.

The proxy you're using may only know one individual application; perhaps the proxy is only aware of how HTTP might work. Or maybe it's a proxy that supports multiple applications, so it might support HTTP, HTTPS, FTP, and other applications as well. If you have a proxy in your environment that is used to control the users' access to the internet, then you're probably using a forward proxy. Sometimes this is referred to as an internal proxy, because it's a proxy that's used for your internal users. Your users would make a request to the proxy to gain access to a web server on the internet.

The proxy might examine the URL, make sure that you’re not visiting a known malicious site, because if
you are it can block that communication. There might also be a series of categories associated with
these URLs, so that you can control exactly what type of content a user might be visiting. If all of that
passes the check, then the proxy will perform that request for the user, receive the answer from the
internet, evaluate that information and make sure it’s safe for the user, and then send the user a copy of
that response.
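
For an explicit (rather than transparent) forward proxy, each client has to be pointed at the proxy's address and port. Below is a minimal sketch in Python using the widely available requests library; the proxy address 10.0.0.5 and port 3128 are assumed placeholders chosen only for this example.

import requests

# Hypothetical internal forward proxy listening on 10.0.0.5, port 3128
proxies = {
    "http": "http://10.0.0.5:3128",
    "https": "http://10.0.0.5:3128",
}

# The client sends its request to the proxy; the proxy makes the request
# to the destination on the client's behalf and returns the response.
response = requests.get("https://www.example.com/", proxies=proxies, timeout=10)
print(response.status_code)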

You can also use proxies in the other direction where users from the internet are hitting a proxy so they
can gain access to internal services on your network. We refer to this as a reverse proxy, and the process
is exactly opposite from what we just described with our forward proxy. The requests come from users
on the internet into the proxy server. At this point, the proxy will examine the requests from the users
and make sure that none of the requests are malicious. If the requests are valid, it will send those
requests to the web server and get a response to those requests. The proxy then receives the answer
from the web server and sends a copy of that answer to the user on the internet.

In some environments, there are very tight security controls. And one of the ways that people can get around those security controls is to communicate to a third-party proxy that may be controlled by someone else. We refer to these as open proxies, because these proxies are installed on the internet for anyone to be able to use. This is a significant security concern, primarily because most people are using these proxies to circumvent existing security controls in their environment, but we're also concerned about what the proxy may be doing to the data that's being sent or received from that proxy server.

It could be that a simple request is being made by your users to the proxy server, and the proxy is then making that request on the users' behalf. The answer comes back from the devices on the internet, but then the proxy may change or add additional code into the response and send that response to the users. This might be something very simple, like putting an advertisement on the response that's received, or the proxy could add malicious code into that response and send it directly to your users.

Intrusion Prevention – SY0-601 CompTIA Security+ : 3.3


Intrusion prevention is a useful way to block known vulnerabilities. In this video, you’ll learn about
passive monitoring, out-of-band responses, inline monitoring, and in-line responses.

A network-based intrusion detection system, or more commonly a network-based intrusion prevention system, is designed to look at traffic going through your network, identify any known attacks that may be inside of that traffic, and block or mitigate those attacks in some way.

These attacks are commonly focusing on individual operating systems or application servers. And they
are attacking known vulnerabilities such as buffer overflows, database injections, cross site scripting and
other known vulnerabilities. The intrusion detection system is designed to simply alarm or alert if a
problem occurs. And an IDS does not commonly have a way to block that communication in real time.

Because of that, we don't commonly see a standalone IDS device on our network. What's usually installed is an intrusion prevention system that may not be configured to block in real time; sometimes we refer to that as an IDS. An intrusion prevention system has the ability to block information in real time as it's going through the IPS. This prevention capability is very valuable, especially when you want to be sure that none of that malicious traffic is getting into your network.

One way to connect an IDS or IPS to your network is through a passive monitoring system. You might have an IPS off to the side that is receiving information from a switch that is redirecting traffic from other devices on the network. This can be done with a port mirror, which in the Cisco world is called a Switched Port Analyzer or SPAN, or maybe with a physical network tap that's redirecting that traffic.

This means that information would be flowing from one device to another, and a copy of that traffic is sent to the IPS during that process. The IPS then examines that copy of the traffic flow to see if there are any known attacks inside of that traffic. If there are, usually an alert or message is displayed so that the system administrator can then decide what to do next.

Since this IPS is not in line with the actual traffic flows, there's no way for the IPS to block that traffic in real time. If the IPS is in one of these passive modes, then it's not in line with the actual traffic flows and cannot block that traffic in real time.

However, there are some response methods that would allow the IPS to limit the amount of traffic that might be sent or received from a device once that particular malicious traffic is identified. We do that through an out-of-band response. We have traffic that's going between systems, and a copy of that traffic is sent to the IPS. If the IPS identifies malicious traffic within those flows, it can send a TCP reset frame to these devices, effectively disabling that particular traffic flow.

This doesn’t stop the original packet from getting through this conversation. But it would prevent any
subsequent information being sent over that same flow. This reset functionality is part of the TCP
protocol. If the traffic flow was one that was UDP based, the IPS would not have a reset feature available
to be able to disconnect that particular flow.

A much more common implementation for an IPS is to have the IPS in line on the network, evaluating all traffic that's sent through it. Traffic is sent into the network and received by the IPS. The IPS then examines the traffic to make sure that nothing inside that traffic might be malicious, and if everything looks OK, it sends it on its way. Because the IPS is in-band, it's able to block the traffic in real time and prevent any of the malicious traffic from getting inside the network.

So the same traffic flow would occur, and as that traffic hits the IPS, the IPS would recognize there's some type of attack inside of that packet. It would then drop that packet and prevent any additional traffic from going through the network. And nothing would come out the other side of the IPS to cause any problems on the inside of your network.

There are many different ways for an IPS to be able to understand what might be malicious on your
network. One of the most common ways is by looking at a signature. This is a signature for the Conficker
worm. So if any traffic comes through your network that matches this exactly, the IPS will identify that
as something malicious and drop that traffic from the network.
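
Signature matching is conceptually a pattern match against the traffic. Here is a deliberately simplified sketch in Python; the byte patterns used are made-up placeholders invented for this example, not the actual Conficker signature or any real IPS rule.

# Hypothetical signature database: name -> byte pattern to look for in the payload
SIGNATURES = {
    "example-worm-probe": b"\x90\x90\x90\x41\x42\x43",   # placeholder pattern, not a real signature
    "example-traversal-exploit": b"GET /..%2f..%2f",
}

def inspect_packet(payload: bytes) -> str:
    # Compare the payload against every known signature; drop on the first match
    for name, pattern in SIGNATURES.items():
        if pattern in payload:
            return f"drop (matched signature: {name})"
    return "forward"

print(inspect_packet(b"GET /index.html HTTP/1.1"))             # forwarded
print(inspect_packet(b"GET /..%2f..%2fetc/passwd HTTP/1.1"))   # dropped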

Many IPS systems can also examine what normal traffic might be, and if anything changes with that normal traffic flow, it can block that particular traffic. We refer to that as anomaly-based. But it requires that the IPS sit on the network for a certain amount of time to understand what might be normal on your network and what might be abnormal.

If the IPS normally sees that your network has relatively low utilization and not many file transfers, and suddenly there is a large amount of traffic that is performing file transfers, it may recognize that as an anomaly and block that traffic. Or the IPS may not have a specific signature that it can fire on, but it does recognize certain types of behavior.

For example, an IPS may understand what a normal database request might look like, but it might also understand what SQL injection looks like. And even if there is not a specific signature for the SQL injection, it can identify that unusual behavior and block it in the IPS.

And more advanced IPS systems might use big data along with artificial intelligence and machine learning to be able to understand broadly the way that your network operates, and are able to identify malicious software based on that large amount of data and intelligence.

Other Network Appliances – SY0-601 CompTIA Security+ : 3.3


A layered approach to network security is always the best. In this video, you’ll learn about jump servers,
hardware security modules (HSM), and sensors and collectors.

One of the challenges we have when we’re administering our network and our servers is to be able to
provide administration of those devices in a way that is secure. To be able to do this, we will often take
advantage of a jump server. A jump server allows us to access usually internal devices through a private
connection that we’re making to a single device on the inside. This is usually a very secure device, one
that we’ve hardened so that no one would be able to gain access to that device except authorized users
like ourselves.

We would then perform an SSH or VPN tunnel to that device, and from there, we’re able to jump to the
other devices on the inside of the network. This means that if we need to provide administration to this
application server, this web server, or this database server, we would first connect to the jump server,
and from there, we would then jump to these different servers to administer those systems.

From a security perspective, we have to be very careful about the system that we are configuring as this jump server. Since this jump server effectively has access to all of these devices on the inside of the network, we want to be sure that no one unauthorized gains access to the jump server. This is an important consideration when configuring this jump server, because a compromise of this one device could give someone access to everything on the internal network.

If you're working in a very large environment with many web servers and devices that need cryptographic keys, then you're probably using a hardware security module, or an HSM. This is a device that is specifically designed to help you manage and control this large number of keys and certificates in your environment. This is a device that is usually installed in clusters with redundancy, and there are very often multiple power supplies on these devices, because you always want to be sure that you can access your HSM.

These devices are more than just a simple server. Usually, inside they have specialized hardware that's designed for cryptography. This might be a card that's added to the system after the fact, or it may be purpose-built to have this cryptographic functionality as part of the HSM. The HSM can provide secure storage, which would be a perfect place to keep the private keys that you would use for your web servers. Many environments will also have this configured as a cryptographic accelerator, so that the HSM is performing the encryption and decryption, and then simply using in-the-clear communication to the server. This keeps the overhead of the encryption process away from the server and focuses it on the device that has built-in hardware designed specifically for encryption and decryption.

If you are managing one of these large networks, then you’ve certainly installed some sensors and
collectors into your network. There needs to be some way to take all of the important statistics that are
being gathered by all of the devices on your network and centralize them into one point. This would be
devices such as switches, routers, servers, firewalls, and other devices that have logs and statistics that
can help you manage these devices better.

The sensor usually goes on the device itself. So you would have a sensor that's part of your intrusion prevention system, there might be logs inside of your firewall or authentication server, your web server may have logs, and all of these sensors are gathering information and providing it to the collector. The collector is usually a console or series of consoles on your network. The collector receives all of the sensor data, parses through the data, and then presents a representation of that data on the screen. This collector could be proprietary, so it may be specifically created for one specific product, such as a firewall, and that means that it would only be able to provide information that is specific to that firewall.

Or you may be using a more generic collector that can gather information across multiple different devices. A good example of this is a SIEM. This is a security information and event management tool that is able to collect log files from switches, routers, servers, and almost anything else in your environment. It then consolidates those log files, compares them with each other, and then provides output that's able to give you a broader perspective of exactly what's going on in your network across many, many different devices.

Wireless Cryptography – SY0-601 CompTIA Security+ : 3.4


Wireless networks wouldn’t be a very useful networking medium without cryptography. In this video,
you’ll learn about wireless encryption, WPA2, WPA3, SAE, and more.

When we're using our wired networks, we don't have to worry so much about other people listening in to what we're doing. But on wireless networks, anyone nearby is able to pull our traffic right out of the air and listen in to whatever happens to be going across the network. That means we need additional security controls whenever we're using these wireless networks. For example, before anyone can gain access to the wireless network, they need to properly authenticate, and that authentication can take a number of different forms. That might be a username, a password, there might be multifactor authentication, you might be using 802.1X, or smartcards, or some other method to help authenticate a user on to that wireless network.

We also want to be sure that all of the traffic that we’re sending across this wireless network is
encrypted. If someone was to grab this information out of the air and look into the data of the packets,
they would have no idea the information that was being sent because everything is sent over an
encrypted channel. And it would be useful if there was integrity built into the communication as well.
That way we can be assured the information we’re receiving from a third party is the information that
was originally sent, and we can be assured that nothing was changed along the way. You’ll sometimes
see this integrity check referred to as a Message Integrity Check, or an MIC. We’ve relied on wireless
encryption since the advent of 802.11 wireless networking. That’s because anyone who’s around can
effectively hear all of the conversations occurring over the network, and if we were sending this
information without any encryption, it would be very, very easy for an attacker to gather these packets
and see exactly what was being sent back and forth.

This means if you’re on a wireless network and you want that information to remain private, then you
need to enable encryption on that wireless access point. This means that everyone using the network
will have an encryption key that’s used to send and receive all of the data sent across this wireless
network. If you don’t have the encryption key, then you won’t be able to understand any of the
information that’s being sent between the stations on this wireless network. So if you’re using WPA2 or
WPA3 encryption, then all of this information is protected over the wireless network.

WPA2 is a security type on our wireless networks that's been around for a very long time. This is called Wi-Fi Protected Access 2, or WPA2, and this began certification in 2004. This uses an encryption method called CCMP block cipher mode. That stands for Counter mode with Cipher block chaining Message authentication code Protocol, or Counter/CBC-MAC Protocol. It's a very long name that effectively means you're using CCMP over WPA2. CCMP uses a number of different protocols to provide the security we need for our wireless networks. For example, the confidentiality of the data is provided by encrypting with AES, and the integrity that we're using on the network, for the message integrity check or the MIC, uses CBC-MAC.
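
CCMP combines AES in counter mode for confidentiality with CBC-MAC for integrity, which is why it's often described as an authenticated encryption mode. The sketch below uses the AES-CCM implementation in the third-party Python cryptography package simply to show the confidentiality-plus-integrity idea in one operation; it is not the actual WPA2 key hierarchy or frame format, and the key, nonce size, and data are example values.

from cryptography.hazmat.primitives.ciphers.aead import AESCCM
import os

key = AESCCM.generate_key(bit_length=128)   # example session key
aesccm = AESCCM(key)
nonce = os.urandom(13)                      # CCM nonce; 13 bytes chosen here as an example

plaintext = b"wireless frame payload"
associated_data = b"frame header"           # authenticated but not encrypted

# Encryption produces ciphertext plus an integrity tag in one step
ciphertext = aesccm.encrypt(nonce, plaintext, associated_data)

# Decryption verifies the integrity tag; any tampering raises an exception
print(aesccm.decrypt(nonce, ciphertext, associated_data))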

The update to WPA2 security is WPA3. This is version 3 of the WPA protocol that was introduced in
2018. It changes the encryption just a bit. It uses a different block cipher mode called GCMP. This is your
Galois Counter Mode Protocol and it is an update to the encryption method used with WPA2 in an effort
to make this just a bit stronger encryption than the older WPA2 protocol. The methods used for
encryption and integrity are similar in many ways to WPA2, the confidentiality of the data still uses AES,
but the message integrity check has changed to Galois message authentication code or GMAC. One of
the significant security updates to WPA3 addressed a number of challenges with keeping WPA2 secure.

One of these is the pre-shared key issue associated with WPA2. Although the WPA2 protocol is not
insecure, it still is subject to brute force attacks if somebody has the hash that is used for the pre-shared
key. Obtaining the hash then, is an important first step for an attacker if they’d like to perform a brute
force attack to find that key. Obtaining the hash can be done with WPA2 by listening in on the four way
handshake that occurs initially when someone is connecting to the WPA2 network. And there are a
number of methods where an attacker could get their hands on this hash without actually listening to
the handshake. Once attackers have captured that hash information, they can begin the brute force
process to try to determine what that pre-shared key might be.

As security professionals, we know that as time goes on, it becomes easier and easier to perform a brute force attack on these keys. Part of the reason for that is that our graphics processing units, or GPUs, are well suited to this kind of cracking and brute force work, and those processes are becoming faster and faster. We've also found ways to use the cloud to perform this password cracking, and we can use many hundreds or thousands of systems to work on this brute force simultaneously. With all of that computing power behind you, it becomes easier and easier to perform the brute force. And once you have found that pre-shared key, you effectively have access to all of the data that was sent over that wireless communication.

With WPA3, we've changed the authentication process to avoid this hashing problem. Instead, we've added additional security features such as mutual authentication, so that not only are you authenticating to the access point, the access point can also authenticate with you. We're also changing the way that the key exchange operates. Instead of sending a hash over the network, we create a shared session key without having to send that key across the network. There's no four-way handshake hash to capture, no hashes are sent across the network, and no one is able to gain access to a hash and then perform some type of brute force attack.

We also have the advantage of perfect forward secrecy in WPA3, which means that the session key we're using can change often, and everyone is using a different session key. Those session keys are created when the session begins, and once the session is over, the key is thrown away; we use a completely different key if we start a new session. This means that WPA3 no longer has those problems associated with WPA2. We no longer send a hash; therefore we no longer have to worry about brute force attacks associated with these pre-shared keys.

So how do we create a session key that’s used on both sides of the conversation without actually
sending that session key across the network? To be able to do this, we use a method called
Simultaneous Authentication of Equals, or SAE. If you’re familiar with Diffie-Hellman key exchange, you
may find that SAE sounds a little familiar, that’s because it is derived from that Diffie-Hellman process.
We add some additional capabilities though, that go a little bit farther than Diffie-Hellman so that we
can add some authentication components to the conversation. And of course, everyone on the network
is generating a different session key even if everybody is using exactly the same pre-shared key to
connect to the wireless network. This was added to the IEEE 802.11 standard, and you’ll sometimes hear
this key exchange process referred to as the dragonfly handshake.
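
SAE itself is more involved, but the underlying idea it borrows from Diffie-Hellman, deriving the same shared secret on both sides without ever transmitting that secret, can be illustrated with a toy Python sketch. The small prime and generator below are purely for demonstration; real key exchanges use much larger, carefully chosen parameters, and this is not the actual dragonfly handshake.

import secrets

# Toy public parameters (far too small for real use; illustration only)
p = 0xFFFFFFFB   # a public prime modulus (2**32 - 5)
g = 5            # a public generator

# Each side picks a private value and shares only g**private mod p
station_private = secrets.randbelow(p - 2) + 1
ap_private = secrets.randbelow(p - 2) + 1

station_public = pow(g, station_private, p)   # sent over the air
ap_public = pow(g, ap_private, p)             # sent over the air

# Both sides now compute the same shared secret without ever sending it
station_shared = pow(ap_public, station_private, p)
ap_shared = pow(station_public, ap_private, p)

print(station_shared == ap_shared)   # True: identical session secret on both ends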

Wireless Authentication Methods – SY0-601 CompTIA Security+ : 3.4


There are a number of different methods to authenticate users with a wireless network. In this video,
you’ll learn about PSK, 802.1X, captive portal, and WPS.

When you're connecting to a wireless network, one of the first things that happens is authentication. We need some way to ensure that the people connecting to the wireless network are truly authorized to be on that wireless network. This could be a wireless network that's configured to allow access for mobile users, or this may be in a coffee shop where the people stopping by are simply there temporarily and then they leave the network.

There are generally two major ways to authenticate to a wireless network. The first is giving everyone
the same password. We refer to this as a pre-shared key because we’ve created the key previously. And
then we hand that key out to anyone who needs access to the network.

These pre-shared keys, or shared passwords, are commonly used for networks that we might have at home. In our corporate environment, however, we need additional security. We need to make sure that everyone has a different authentication method for logging in. We want to be sure that if someone was to leave the company, we could disable their access but still allow access for everyone else.

To be able to do that, we use a standard called 802.1X. This provides for centralized authentication, so someone logging in to a wireless network could use the credentials that they might normally use to log in to their Windows Active Directory domain. The choice of pre-shared key or 802.1X is usually configured on the wireless access point itself. Then if anybody connects to that wireless network, they're normally prompted to add their authentication credentials as part of that connection process.

Here's a good example of configuring that authentication on a wireless access point. You can see there are a number of different options. Any time we see anything labeled as personal, that would be a pre-shared key, and anything labeled as enterprise would be 802.1X. If we configure our wireless access point to have no security, or listed as open security, that means that anyone can connect to the wireless network and they don't need any type of authentication.

The next step up would be WPA3-Personal, or you may see this written as WPA3 pre-shared key or PSK, where everyone gets exactly the same password that they would use to connect to the wireless network. Although everyone is using the same pre-shared key to gain access to the wireless access point, the access point is going to give everyone a completely different session key.

This means that the session key that I would use for my wireless conversation will be completely
different than the encryption key used for a different wireless conversation. WPA3 is able to do this
using a method called SAE or simultaneous authentication of equals. And this is a new capability with
WPA3 that makes the encryption configured with pre-shared keys much more secure than WPA2.

And if you're in an office, you're probably using WPA3-Enterprise; you might also see this written as WPA3-802.1X. With WPA3-Enterprise, we're using a centralized authentication server, so we're using RADIUS, TACACS, or LDAP to be able to centralize everyone's username and password.

If you're on a third-party wireless network, especially one that's used in a coffee shop, a hotel, or on some other temporary basis, then you're probably using a captive portal to be able to provide this authentication. A captive portal is a method of providing authentication using a separate login screen in your browser. The access point that you're authenticating to will check to see if you have previously authenticated, and if you haven't, it will redirect you to this portal page when you open your browser.

It’s common for this login page to ask for a username or password and many captive portals support the
use of additional authentication factors as well. Once this information is typed in to the captive portal
and that information is confirmed then you have access to the wireless network. These captive portals
often have a time out function associated with them.

You either hit a Logout button to disconnect from the wireless network, or it automatically times you out after a certain number of hours have elapsed. Once that initial session expires, you'll need to reconnect to the captive portal front end, add your credentials again, and then you'll be connected to the wireless network for that next interval.

As you can see there are a number of configuration settings inside of an access point that have to be
enabled or disabled depending on the type of protection you’d like to have for your wireless network.
And you have to make sure that whatever configuration you have for the access point is also configured
the same way on your wireless clients. To be able to make this process a bit easier for the administrator
and for the users a type of authentication was created called WPS. That stands for Wi-Fi protected
setup.

This is a standard that used to be called Wi-Fi Simple Config. The idea is that it would be much easier to use this method of authentication rather than using pre-shared keys, 802.1X authentication, or some other type of authentication method. WPS allows different methods to be used for authentication. For example, you could use a personal identification number that you would put into the mobile device, and that gains you access to the wireless network. Or you might have to push a button on the access point itself while you're configuring the settings on your wireless device.

Or perhaps you need to bring the wireless device close to the access point and they will transfer
information between each other using near-field communication or NFC. With this configuration users
don’t have to remember a pre-shared key. You don’t have to configure 802.1X authentication behind
the scenes, you would simply use one of these criteria to be able to allow access to the wireless
network.

Perhaps the most common method used for authentication with WPS is to add the personal identification number to the devices that will be connecting to the wireless network. But unfortunately, WPS includes a significant flaw associated with this personal identification number, and you may find that disabling WPS on your network is a better idea than leaving this functionality enabled.

The challenge for WPS is that it was built incorrectly from the very beginning. The verification of this personal identification number is an important step during the authentication process, and as you've seen, that PIN is an eight-digit number. If we look into the details of this number, it's really a seven-digit number, and the last digit is a checksum. So with those seven digits, you would really only have 10 million possible combinations if you had to brute force every single one of them.

But it's actually even worse than that: WPS validates each half of the PIN individually. That means the first half, or four digits, is validated on its own, and then the second half, which is effectively only three digits because the last digit is a checksum, is validated as a separate set of input. This means the four digits in the first half give about 10,000 possibilities, and the second half of the number has only 1,000 possibilities. So instead of going through 10 million possible combinations, you only need to go through about 11,000 possible combinations to try every single one of them.
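
The arithmetic behind that reduction is simple enough to show directly; the short Python snippet below just restates the combination counts described above.

# Treating the whole 7-digit PIN as one secret: 10**7 possibilities
full_search = 10 ** 7
print(full_search)                # 10,000,000

# WPS validates each half separately:
first_half = 10 ** 4              # 4 digits -> 10,000 possibilities
second_half = 10 ** 3             # 3 digits (last digit is a checksum) -> 1,000 possibilities
print(first_half + second_half)   # 11,000 attempts in the worst case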

If you have an older wireless access point that has no brute force attack prevention built into it, it only takes a number of hours to go through every possible one of those 11,000 options. Most newer access points are going to have brute force protection built into the device, which means you won't be able to simply go through all 11,000; it will stop you after the first incorrect series of personal identification number attempts.

Although WPS was intended to make the process much easier, it instead made the process much less secure. A best practice for WPS is usually to simply disable it on your wireless access point.

Wireless Authentication Protocols – SY0-601 CompTIA Security+ : 3.4


There are many options available when configuring wireless authentication. In this video, you’ll learn
about EAP, EAP-FAST, PEAP, EAP-TLS, and more.

In a previous video, we talked about the need for authenticating to a wireless network. And of course,
there are many different ways that you could authenticate to a network, and we’ve used many of these
different authentication methods on our different networks through the years.

The most common type of authentication someone would use is their username and password. And it’s
not unusual to add other types of authentication factors, along with the username and password.

Although we’ll sometimes use these authentication methods for wired networks, it’s very common to
also see this on wireless networks. That’s mostly because wireless networks are sending and receiving
into the air, and anyone who happens to be nearby with a wireless device could attempt to connect to
that wireless network.

Many of the types of authentication we’ll use for wireless networks are built on a standard framework
called the Extensible Authentication Protocol, or EAP. There are many, many different types of
authentication methods using EAP, and different manufacturers will use EAP in different ways.
In the enterprise, we commonly see EAP authentication used in conjunction with 802.1X. So when you
initially connect to the wireless network, you’ll be prompted for these authentication details, and the
EAP framework will be used to provide that authentication confirmation behind the scenes.

802.1X is also referred to as port-based Network Access Control, or NAC. This means that if you're trying to connect to the network, you don't gain any access to this wired or wireless network unless you're providing the proper credentials using 802.1X.

This almost always is using some type of centralized authentication database on the back end. When
you first connect to the wireless network, you’ll be prompted with the username and password, and
802.1X will check that username and password by communicating on the back end to one of these
databases.

Very commonly, we would use RADIUS, or TACACS, or LDAP to be able to have this centralized database
of usernames and passwords.

There are usually three different parts to this 802.1X authentication. There is the supplicant; this is commonly the client that is connecting to the network. There's the authenticator; this is the device that provides the access you need to the network. And then there is the centralized authentication server that is doing the validation of the username and password.

If you try to connect to this network for the first time from the supplicant, the authenticator will not
provide you with any access, because you’ve not authenticated yet. The authenticator will recognize
that you’re trying to get access, and it will send an EAP request to the supplicant, asking if you’d like to
provide some type of authentication. The supplicant sends a response to the authenticator, saying yes,
I’m trying to gain access to the network. The authenticator then informs the authentication server that
there’s a new person on the network who would like to authenticate.

The authentication server then asks if the supplicant is going to provide any login credentials. The
authenticator passes through that request to the supplicant, saying, let’s provide some credentials. And
then the supplicant provides username, password, and other types of authentication details.

The authenticator sends that information to the authentication server that’s in the back end. Those
credentials are checked, and if everything matches, a message is sent back to the authenticator that the
process was successful, and that user can gain access to the network.
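
To make the roles a little more concrete, here’s a heavily simplified Python sketch of that exchange. The usernames, passwords, and messages are made up, and a real deployment uses EAPOL frames and a RADIUS or LDAP back end rather than function calls like these.

```python
# Simplified model of the 802.1X roles: supplicant, authenticator,
# and authentication server (illustrative only).

USER_DB = {"alice": "correct horse battery staple"}   # hypothetical credential store

def authentication_server(username, password):
    """Validates credentials against the centralized database (e.g. RADIUS/LDAP)."""
    return USER_DB.get(username) == password

def authenticator(username, password):
    """Relays credentials between the supplicant and the authentication server."""
    if authentication_server(username, password):
        return "EAP-Success: port opened, network access granted"
    return "EAP-Failure: port stays closed"

def supplicant():
    """The client joining the network answers the EAP identity request."""
    return "alice", "correct horse battery staple"

username, password = supplicant()
print(authenticator(username, password))   # EAP-Success: port opened, ...
```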

One way of securely providing this authentication process is through a method of EAP called EAP-FAST.
FAST stands for Flexible Authentication via Secure Tunneling. This is a way to make sure that the
authentication server and the supplicant are able to transfer information between each other over a
secure tunnel.

This is accomplished with a shared secret referred to as a Protected Access Credential, or a PAC. The
supplicant receives the PAC and then sets up a Transport Layer Security Tunnel.

This TLS tunnel is very similar to the TLS mechanism that’s used to encrypt information within a browser.
Once this TLS tunnel is in place, everything sent across is encrypted, and then authentication details are
sent over that TLS tunnel.

It’s common to see EAP-FAST used with a centralized authentication server, such as RADIUS, where you
can have both the authentication database and EAP-FAST services running on that RADIUS server.

Another form of an encrypted tunnel being used with EAP is called PEAP. That stands for Protected
Extensible Authentication Protocol, and it was created by Cisco, Microsoft, and RSA Security.

This is also using TLS to be able to send this information, but instead of it being based on a shared secret
with the PAC, we’re using the same method as a traditional web server by using a digital certificate. This
digital certificate is only needed on the server. Your clients do not need separate digital certificates to be
able to use PEAP.

If you’re authenticating to a Microsoft network, then you’re probably combining PEAP with MS-CHAPv2.
This is Microsoft’s Challenge Handshake Authentication Protocol version 2, which integrates easily with
Microsoft’s existing authentication databases.

PEAP can also be used with a more generic authentication type called the Generic Token Card, or GTC,
which can also be used with a hardware token generator to provide additional authentication.

A more secure form of EAP can be found with EAP-TLS. The TLS is Transport Layer Security, so we’re
already performing a very strong encryption of data between our clients and our servers.

Unlike the previously described EAP implementations that did not need a digital certificate, or only
needed a single digital certificate, EAP-TLS requires digital certificates on all devices. This is because we
perform a mutual authentication when connecting to the network. Once the mutual authentication is
complete, a TLS tunnel is then built to send the user authentication details.
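
The mutual authentication idea can be sketched with Python’s standard ssl module, which lets a server require a certificate from the client just as EAP-TLS does. The certificate file names below are hypothetical, and a real EAP-TLS deployment performs this handshake inside the EAP exchange rather than in application code.

```python
# Mutual certificate authentication in the spirit of EAP-TLS (sketch only).
import ssl

# Server side: present a server certificate AND require one from the client.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
server_ctx.load_verify_locations(cafile="corporate-ca.crt")
server_ctx.verify_mode = ssl.CERT_REQUIRED   # the client must also prove its identity

# Client side: validate the server and present the device's own certificate.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
client_ctx.load_verify_locations(cafile="corporate-ca.crt")
client_ctx.load_cert_chain(certfile="device.crt", keyfile="device.key")
```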

If you’ve ever managed a network where every device had its own digital certificate, you know this is not
a trivial task. You need a Public Key Infrastructure, or a formal PKI, so that you can properly manage,
deploy, and revoke any of these certificates that may be in use in your environment.

We also have to consider that some older devices may not allow for the use of digital certificates, and
therefore, they would not be able to connect to the network and authenticate using EAP-TLS.

Your environment may have an authentication protocol that’s not already supported by one of these
other EAP types. In that case, you may want to run EAP-TTLS. This is Tunneled Transport Layer
Security, where you can tunnel other authentication protocols within the existing TLS tunnel.

Unlike EAP-TLS, EAP-TTLS only needs a single digital certificate on the authentication server. We don’t
have to deploy separate digital certificates to all of these other devices on our network. We would use
the digital certificate on the authentication server to be able to create and send information over this
TLS tunnel.

Once this TLS tunnel is in place, we can then send other authentication protocols across that tunnel. This
might be other types of EAP, it could be Microsoft’s CHAP version 2, or any other type of authentication
protocol that we’d like to use.

You might also use EAP in conjunction with Federation. Federation is when you can link a user’s identity
across multiple authentication systems. This is commonly used if you’re at a third-party location, and
you would like to authenticate using credentials that were created for a different location.

RADIUS Federation commonly uses 802.1X as the authentication method. So you’re using EAP to
authenticate, and you’re very commonly authenticating to a RADIUS server on the back end.

A common implementation of RADIUS Federation can be found with eduroam. This was built so that
educators who were visiting a different campus could use their original username and password to be
able to authenticate, regardless of what campus they may travel to.

Installing Wireless Networks – SY0-601 CompTIA Security+ : 3.4


Installing a wireless network can be an involved process. In this video, you’ll learn about site surveys,
channel selection, access point placement, and more.

Before installing a wireless network, it’s useful to know the environment that you’re installing this
equipment in. So it’s common to perform a site survey, where we’re going to get more information
about the wireless infrastructure that may already be in place.

There may be existing access points in the same building or location where you’ll be installing additional
access points, or there may be access points that are located nearby that aren’t necessarily in your
control.

This means we may need to work around any frequencies that are already in use, or we may have to put
our access point in a location that will minimize the amount of interference. And like most things
associated with technology, these things tend to change over time. So you may want to perform
additional site surveys later on down the line to make sure that nothing has changed, and that your
wireless network is performing optimally.

One way to visually see the results of these site surveys is to use a heat map. There are a number of
tools that can help you build these heat maps. All you would need to do is move around your building
and have this software create a visual map of where your wireless networks happen to be, and where
the signal strength is greatest for each network.

There are a number of software tools that you could run on your laptop or mobile device that can give
you information about the local wireless network. For example, you might want to run a wireless survey
tool that can show you what type of wireless signals happen to be in your area.

You can also see what frequencies are in use, and what type of potential interference may already be on
this network. There might also be built-in tools in the existing access point you’re using that can provide
some of this information, without needing additional software on a laptop or mobile device.

And some of these tools are hardware-based, providing information about the spectrum
itself, so that you can really start to understand where frequencies may be used, and what devices are
using those frequencies.

Instead of just looking at the frequency use, you can also capture information that’s being sent over that
wireless network with a wireless packet analyzer. Since this is wireless, you simply need to listen in to
the signals going around the room, and capture anything that you happen to hear.

But this means the device you’re using can’t be sending information to that wireless network at the
same time. That’s because the wireless receiver would be overloaded by the local wireless transmitter.
Many packet analyzers will disable the sending function from your device, so that you can hear all of the
traffic coming from the other devices on the wireless network.

Sometimes, an operating system or wireless analyzer will only capture information and display it as
Ethernet frames. But there’s also a lot of wireless-specific information being sent directly from the
access point that you’ll only be able to see if you have the appropriate drivers or hardware adapters that
support the wireless capture function.

You can often use these wireless packet analyzers to view other information about the wireless network,
such as the signal-to-noise ratio, channel information, utilization, and other details.

If you’d like to try this yourself, you can download Wireshark from www.wireshark.org. Install it onto a
machine that has a wireless adapter card, and see if you can view some of that wireless information
that’s on your local network.
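
If you prefer a programmatic view, here’s a minimal sketch using the Scapy library to list nearby SSIDs from beacon frames. It assumes an adapter that has already been placed into monitor mode, and the interface name "wlan0mon" is just an example.

```python
# Listing SSIDs from captured 802.11 beacon frames with Scapy (sketch only).
from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11Beacon

def show_beacon(packet):
    # Beacon frames advertise nearby networks and their capabilities.
    if packet.haslayer(Dot11Beacon):
        ssid = packet.info.decode(errors="replace")
        bssid = packet[Dot11].addr2
        print(f"SSID: {ssid:<32} BSSID: {bssid}")

sniff(iface="wlan0mon", prn=show_beacon, count=50)
```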

To avoid any type of interference between access points, we need to make sure that access points that
are near each other are not using the same frequencies. If we look at the frequencies available for 2.4
GHz, you can see it’s a very small number of channels that don’t overlap with each other.

In the United States, channel 1, channel 6, and channel 11 have no interference between each other. So
if you’re running one access point at channel 1, and another at channel 6, you’ll want to configure your
third access point to use channel 11.

If you’re using an access point that supports 5 GHz, you have many more channels available. Anything
not shown in red in this picture is available in the 5 GHz range, giving you much more flexibility for
installing wireless access points with those frequencies.

Here’s a view of two separate access points that are configured without using overlapping channels. One
of these is using channel 6, and the other is using channel 11. What you don’t want to do is install a new
access point on this network and configure it for channel 8, because that new access point would
overlap and interfere with both of the access points that were there previously.
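
Here’s a small Python sketch of that overlap rule. It assumes the usual 2.4 GHz layout where channel centers sit 5 MHz apart and each channel is roughly 20 to 22 MHz wide, so channels fewer than five numbers apart interfere.

```python
# Rough 2.4 GHz channel overlap check (illustrative only).

def channels_overlap(a, b, separation=5):
    # Channels at least 5 apart (1, 6, 11) do not overlap.
    return abs(a - b) < separation

existing = [1, 6, 11]
for candidate in (8, 11):
    conflicts = [ch for ch in existing if ch != candidate and channels_overlap(candidate, ch)]
    print(f"Channel {candidate}: conflicts with {conflicts or 'nothing'}")
# Channel 8: conflicts with [6, 11]
# Channel 11: conflicts with nothing
```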

This is why it’s so important to perform your site surveys prior to an installation, so that you don’t install
an access point on the wrong channel, and create interference for all of the other devices on the
wireless network.

If you’re installing a new access point, you want to make sure that you place it in the right location. You
want to have minimal overlap with other access points. You certainly don’t want to put the access points
directly next to each other. But you also want to maximize the coverage that’s being used in your
environment. This will also minimize the number of physical access points you’ll need, which will
ultimately save you money.

You also want to make sure the location you’re installing the access point is not going to have other
interference nearby. You want to be sure you avoid any electronic devices that could create
interference. Make sure you avoid parts of the building where the signals could be absorbed. And you
want to be sure to avoid other third-party wireless networks that could potentially cause additional
interference.
And of course, you want to get the access points as close as possible to the users that will need access to
this wireless network. And you want to be sure that you’re putting the access point in a place that
doesn’t send that signal too far outside of your existing work area.

Here’s a building layout that we might want to use to install access points. And we’ll place access points
around the building in a way that just barely overlap with each other. We’re also going to make sure
that we choose different channels for these access points so nothing is conflicting with each other.

You can see that an access point on channel 1 is never adjacent to another access point using channel 1.
Channel 6 and channel 11 are configured in a similar way. By separating the channels in this way, we can
be assured that we’re not creating any interference between these access points, and your network is
going to run as efficiently as possible.

If you’ve ever walked around a large office building, you’ll start to notice there are quite a few access
points that have been installed. And of course, each one of those access points has to be managed. Each
one of those has its own separate configuration. And you have to make sure that you keep all of those
devices up to date with the latest software.

To be able to do this, we need some type of centralized management device. And that would be our
wireless controller. This allows us to configure, update, and maintain all of the access points that we
have in our infrastructure.

It’s very common to connect to these wireless controllers from our desk using a browser, so we’ll often
have HTTPS to provide encrypted communication between our browser and the wireless controller. And
if we step away from this configuration, there’s usually an idle timeout, so after a period with no input,
we’ll be automatically logged out of the wireless controller.

On the access points themselves, we want to be sure that we are using strong passwords, or some other
type of very strong authentication method. And we’ll use our wireless controller to make sure that all of
those devices are always updated to the latest firmware.

Mobile Networks – SY0-601 CompTIA Security+ : 3.5


A single smartphone can contain many different mobile network technologies. In this video, you’ll learn
about Wi-Fi, Bluetooth, RFID, NFC, and much more.

Some wireless networks provide a one-to-one connection between the two devices communicating on
that network. You might use a point-to-point connection if you’re connecting two buildings together
with a wireless network, and you would have a directional antenna that would connect from one
building to the other.

This is often used with Wi-Fi repeaters that you might have in your home, where the repeaters are
communicating directly between each other with a point-to-point network connection. But it’s more
common to have 802.11 networks that are communicating from point-to-multipoint. This is probably
one of the most common types of wireless networks that we use today.

There’s not necessarily full connectivity between all of these devices. For example, a device on one end
of the network can easily communicate to the access point, but it may not be able to communicate
directly with devices on the other side of that access point, because they’re simply too far away.

There might also be configurations within the access point that would allow the devices to communicate
to the access point and then to the internet, but it may have restrictions built into the configuration that
will prevent devices on the same wireless network from communicating with each other.

Another popular wireless network type is the cellular network that we use for our mobile devices, or our
cell phones. The cellular network towers separate the network into individual cells. And each antenna is
going to have a different set of frequencies that are used for each cell of the network.

There are some security concerns associated with cellular networks. On some cell networks, you might
have the ability to monitor the traffic that’s being sent between the mobile device and the cellular
tower. And there may be location tracking functions, so that someone may be able to know exactly
where you are based on the signal that you’re sending to these cellular towers.

There’s also access to these cellular networks from many places around the world. So you can take your
phone with you and be able to connect and communicate over almost any one of these networks. This
creates concern for the security professional, who wants to be sure that people authenticating into their
network are people who are authorized. And if someone is communicating from a different country, you
may have to consider that when implementing some type of access control into your network.

Unlike cellular networks, which have a larger scope, Wi-Fi networks tend to be very local. So any
concerns we have with security are all based on a local access point and devices in our immediate area.

Of course, there are important security concerns on Wi-Fi networks as well. We want to be sure that we
are encrypting all of the data sent over that network, so that no one could connect and be able to
see any of the traffic that we’re sending back and forth.

We also have to be concerned about an on-path attack, where someone can sit in the middle of a
conversation and be able to watch the communication go back and forth. And of course, there’s always
concern that someone might interfere with the frequencies that we’re using, effectively creating a
denial of service.

Bluetooth networks are commonly used to connect our mobile devices and their accessories all to each
other. Sometimes you’ll see these Bluetooth networks referred to as a PAN, or a Personal Area Network.

It’s common to see Bluetooth functionality in our laptops, our tablets, and our smartphones, and we
often can use Bluetooth to tether those devices to create an internet connection where one wouldn’t
commonly be available.

You’ll also find a number of headsets and headphones that can connect to our mobile devices, and even
health monitors that you might wear on your wrist. It’s common to see Bluetooth used in our cars, so
we can connect our mobile phones into the console of our trucks and our cars, along with the
smartwatches that we might be using. And if you have mobile speakers like these, you can use Bluetooth
to connect your mobile device to these external speakers.

A very common wireless network type is RFID. That stands for Radio-Frequency Identification, and RFID
is used in so many aspects of our lives today. If you use an access badge to gain access through doors at
work, then you probably have an RFID chip inside of that badge. If you work on an assembly line, or
you need to track where certain equipment may be in a warehouse, then there’s probably RFID inside of
that pallet, or inside of those boxes.

Many of our pets have RFID chips inside of them, so we can track them and find them if they happen to
get lost. Anything that needs to be tracked tends to have RFID attached, primarily because RFID devices
are so small. This is an RFID chip right next to a grain of rice. You can see that you could put this RFID
interface almost anywhere.

RFID works using a technology similar to radar. We send a signal to the RFID device, that signal powers
the chip, and the chip then transmits its information back. You’re able to get information or an ID
number from that chip that you can then associate with where that chip happens to be.

Some RFID tag formats don’t require that you power them with the RF signal that you’re sending
originally. Instead, they may be locally powered, and they may have other methods in order to send
their ID information out over the wireless network.

A technology that builds on RFID is near field communication, or NFC. This is a two-way wireless
communication that is generally used with two devices that are very close to each other. This is
commonly seen with payment systems. So you may be checking out at the store, and you can pay for
your groceries, or other items, using this NFC from your watch or your mobile phone.

We can also use NFC to speed up connectivity. For example, NFC is commonly used to bootstrap the
Bluetooth pairing process. And since this NFC device acts as a unique piece of hardware, we could also
use this as an access token or an identity card that might gain you access to a locked room with an NFC
card reader.

The security concerns we have with NFC are very similar to those that we have with other wireless
network technologies. For example, it may be possible to capture the information that’s transmitted
between your device and the NFC device. You do have to be very close to capture that information. But
since it is wireless, there is that potential.

And of course, because there are frequencies in use, someone could jam those frequencies and create a
denial of service situation. If there is someone in the middle of this conversation, they may be able to
modify the information that’s being sent back and forth, effectively creating an on-path attack.

And of course, if you lose your phone or you lose this mobile device, you’ve then lost the ability to use
this NFC functionality. And of course, you want to make sure it’s protected, since there could be
financial details that are stored and accessed using NFC.

We used to use IR, or infrared communication, on our mobile devices, and these days we tend to use
802.11 and Bluetooth much more. But infrared is still used in many places, especially if you’re connecting
to some type of media center, or entertainment center, where you’re able to control the devices on that
entertainment center using infrared. For that reason, you’ll still find a number of mobile devices that
have IR capabilities built into them.

If you do have two devices that can communicate via infrared, it is possible to transfer files between
those two devices. And from a security perspective, you have to know that infrared doesn’t have a lot of
security controls built into it. So it is possible for other devices to be able to control your infrared devices
using IR.

One of the most common wired mobile connection types we have is USB, or Universal Serial Bus. This is
commonly how we connect to our mobile phones to transfer data, charge the devices, or transfer files
between your computer and the device. Some phones have a standard USB connector on the phone itself,
although it could also be something proprietary, such as a Lightning connector.

Since USB is a physical connection, you would have to be nearby to be able to connect devices using
USB. This certainly limits access to the device, especially if you’ve disabled access over a remote
connection, and required access with a physical wired link.

Of course, you can transfer data over this USB connection. So you want to be very careful where you’re
plugging in. And if you are connecting to an unknown USB connection, it might be a good idea to leave
your device locked so that no data can be transferred.

And if you are plugging in via USB, you could use your phone as a mobile storage device. This could be a
concern for highly secure areas since you don’t want someone copying information from your internal
network onto your phone, and then walking out of the building with that data.

If you’ve used the map functionality on your mobile phone, then you’ve taken advantage of GPS, or the
Global Positioning System. This is a technology created by the US Department of Defense, and there are
about 30 satellites in orbit that are providing this GPS functionality.

This provides very precise location information for your mobile device. You would need to see at least
four satellites to be able to use this GPS capability, so it’s often easiest to use if you’re near a window, or
you’re outside.

GPS is able to determine where you are based on the timing differences from each of these different
satellites. And by using these timing differences, it can determine your latitude, your longitude, and your
altitude. We use GPS extensively on our mobile devices, and we commonly use it for maps and
directions. You can also use GPS in conjunction with other types of networks as well, to be able to
triangulate where a person might be. So your location may not only be based on GPS, there may be Wi-
Fi and cellular tower triangulation, as well.
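
As a toy illustration of how timing turns into distance, here’s a short Python example that converts made-up transmit and receive timestamps into pseudoranges. Real receivers need at least four satellites and also solve for their own clock error, which this sketch ignores.

```python
# Turning GPS timing differences into approximate distances (toy example).
SPEED_OF_LIGHT = 299_792_458  # meters per second

# Hypothetical transmit/receive timestamps in seconds for three satellites.
observations = [
    ("SV-07", 0.000000, 0.067532),
    ("SV-13", 0.000000, 0.070114),
    ("SV-21", 0.000000, 0.072967),
]

for sat, t_transmit, t_receive in observations:
    pseudorange_km = SPEED_OF_LIGHT * (t_receive - t_transmit) / 1000
    print(f"{sat}: roughly {pseudorange_km:,.0f} km from the receiver")
```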

Mobile Device Management – SY0-601 CompTIA Security+ : 3.5


Protecting an organization’s data on a mobile device is an ongoing challenge. In this video, you’ll learn
about mobile device management, content management, geofencing, and more.

In many organizations, employees have mobile devices that they use to do their jobs. Those mobile
devices can be tablets, smartphones, and other devices as well. To be able to manage these devices,
these organizations often will use Mobile Device Management or MDM. This management can be very
important if users are bringing their own devices into the workplace and then we’re putting sensitive
company information on the user’s own device.

Mobile Device Management allows us to keep track of where all these systems might be, what data is on
the system, and we can manage different aspects of those mobile devices. For example, you can set
policies based on applications that are in use, the type of data that’s on the device and where the data is
stored, if the camera is operational, and almost any other aspect of the functionality of those mobile
devices. You can also specify a specific type of security in place on these mobile devices. For example,
you can implement screen locks and personal identification numbers to ensure that this device remains
secure even when the user isn’t around.

One aspect of mobile device management is making sure that you can manage what applications are on
these mobile devices and which versions of those applications. On many of our personal mobile devices,
these applications are updated all the time. But in a corporate environment, you may want to manage
exactly when a particular application is upgraded.

We also have a concern that users might install an application that has malicious software inside of it. So
we need some way to be sure that we can allow or disallow certain applications from being installed. A
good way to manage this application installation process is through the use of allow lists. The
administrator of the Mobile Device Manager would have a list of known trusted applications. And those
would be added into the configuration of the MDM.

The users would then have this list of available applications and can choose these known good apps to
install. And anything that’s outside of that list would not be installed onto the mobile device. This
obviously adds a good bit of overhead for someone who’s managing the MDM console because they’ll
have to know exactly what new applications need to be added to the list and be able to remove any
applications that are no longer trusted.
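
Conceptually, the allow list is nothing more than a set of trusted application identifiers that the MDM checks before an installation is permitted. Here’s a minimal Python sketch; the application IDs are hypothetical and don’t correspond to any particular MDM product’s policy format.

```python
# MDM-style application allow list (illustrative only).

ALLOW_LIST = {
    "com.example.corp-email",
    "com.example.corp-vpn",
    "com.example.expense-reports",
}

def can_install(app_id):
    """Only applications explicitly trusted by the administrator may be installed."""
    return app_id in ALLOW_LIST

for requested in ("com.example.corp-vpn", "com.example.random-game"):
    verdict = "allowed" if can_install(requested) else "blocked"
    print(f"{requested}: {verdict}")
```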

We not only have to manage the applications that are installed, but we also have to manage how the
data is stored on these mobile devices, especially if that data happens to contain sensitive information.
We control this through the use of MCM or Mobile Content Management, where we can secure the
data that’s on these mobile devices and make sure that all of that information remains safe.

This Mobile Content Management may allow us to set policies based on where the data is stored. For
example, we may be able to store and retrieve information from on-site content that might be in our
own Microsoft SharePoint servers or our own corporate file servers. We can also set policies that might
allow users to receive and transmit information to cloud-based storage systems such as Google Drive,
Box, or Office 365.

Once the data is stored, we can use Mobile Content Management to ensure that the data remains safe.
For example, we can include data loss prevention capabilities on the mobile device. And this would
prevent someone from sending sensitive information such as health records, credit card information, or
other personally identifiable details to someone else outside of this mobile device.

We can also ensure that all of the data that’s being stored on that device is stored in encrypted form.
That way if someone does gain access to this device or the storage of this device, they would not be able
to retrieve and view the information that we’ve privately stored on these tablets and cellular phones. All
of these Mobile Content Management settings are commonly configured on the Mobile Device
Manager. And it’s up to the administrator of the MDM to configure and set these security options.

Unfortunately, these mobile devices can sometimes go missing. They might be stolen. Someone might
leave a device on an airplane or in a coffee shop. So we need to make sure that we have a way to delete
everything on that mobile device, even though we don’t have physical access to the device.

We can do that through a remote wipe functionality. This is usually managed from the Mobile Device
Manager and allows you to click a button and erase all of the data on that device, even though we may
not know exactly where that device happens to be. If this device is connected to a cellular network or a
wireless network, it will receive the notification to connect and wipe everything that happens to be on
the device.

This is something that you would commonly configure before the device is deployed, or it’s something
that’s built into the Mobile Device Manager that you’re using. The key with this is that you should
always have a backup of your data. These devices are very easily lost or stolen, so as long as we can
delete everything on the device and replace it with a new unit, we can restore from that backup and
be up and running very quickly.

The geolocation functionality of our mobile devices allows us to get very accurate measurements on
where that device is physically located in the world. This commonly uses GPS and wireless networks to
triangulate the exact location of these mobile devices. This can be great if you lose your device because
you can get an accurate map that shows you exactly where the device might be. This can also be a
privacy concern since it effectively would show where you happen to be as well. So there are advantages
and disadvantages to using these geolocation capabilities.

Many mobile devices have the option to disable this geolocation functionality. So you can turn this off
and not track where this device happens to be. But if you’re on a corporate network, this is usually
managed from the Mobile Device Manager. And the administrator of that system will determine
whether geolocation is appropriate for your system.

Some Mobile Device Managers use that geolocation information to enable geofencing. This allows the
mobile device to enable or disable certain features, depending on the location of where that device is at
any particular moment. For example, you might have your MDM configured to disable the camera when
you’re inside of the office, but re-enable the camera once you leave the office.

Or you could use geofencing as part of your authentication. So when someone is logging into the
network, you can check to see where this device is physically located. If they happen to be in or
around the building, you can allow that authentication to continue. If the user is authenticating from a
different country, then you might want to automatically deny the authentication.
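
Under the hood, a geofence check is just a distance calculation against a known point. Here’s a minimal Python sketch using the haversine formula; the office coordinates and the 200 meter radius are made-up values for illustration.

```python
# Geofence check: is the device inside a radius around the office? (sketch only)
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

OFFICE = (28.5383, -81.3792)   # hypothetical office location
FENCE_RADIUS_M = 200

def camera_allowed(lat, lon):
    # Inside the fence the camera is disabled; outside it is re-enabled.
    return distance_m(lat, lon, *OFFICE) > FENCE_RADIUS_M

print(camera_allowed(28.5384, -81.3790))  # False - inside the office geofence
print(camera_allowed(28.6000, -81.2000))  # True  - well outside the fence
```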

Everyone should have a screen lock configured on their mobile device. And this might be especially
important if you keep company data on your mobile device. Using a screen lock can ensure that people
do not have access to corporate data that might be stored on that particular tablet or smartphone.

This could be something as simple as a passcode or personal identification number, or it might be a
strong passcode that includes both letters and numbers. There are usually options within the mobile
device that allow you to determine what happens when a certain number of incorrect attempts are
made.

The MDM administrator could configure your system to wait for 10 invalid screen lock attempts in a
row. And once you hit the 10th invalid attempt, it deletes everything that’s on that mobile device. Or
they could create a lockout policy that completely locks the phone and requires administrative access to
be able to unlock and use that mobile device again.
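
The policy logic itself is simple to picture. Here’s a small Python sketch of the two options just described, using the same 10-attempt threshold; the messages and policy names are made up.

```python
# Screen-lock failure policy: wipe or lock out after too many bad attempts.
MAX_ATTEMPTS = 10

def handle_failed_unlock(failed_attempts, policy="wipe"):
    if failed_attempts < MAX_ATTEMPTS:
        return f"{MAX_ATTEMPTS - failed_attempts} attempt(s) remaining"
    if policy == "wipe":
        return "threshold reached: remote wipe initiated"
    return "threshold reached: device locked, administrator unlock required"

print(handle_failed_unlock(9))                      # 1 attempt(s) remaining
print(handle_failed_unlock(10))                     # remote wipe initiated
print(handle_failed_unlock(10, policy="lockout"))   # administrator unlock required
```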

If you have a smartphone or tablet, you’re probably familiar with push notifications. These are messages
that appear on your screen, even when your system might be locked, to tell you information about an
app or information that’s been updated from a third-party source. This information is usually pushed to
your device without any intervention from the end user.

An important aspect of these push notifications is that the user is not querying for them. They could be
using one application and receive notifications on their screen that are associated with a completely
different application. The administrator of the Mobile Device Manager can set policies that control
exactly what would appear in the notifications on your screen, and they may choose to disable all
notifications except those that are pushed directly from the MDM.

As with our desktop computers and laptop computers, our mobile devices also have passwords
associated with them. In some cases, multiple types of passwords. There might be a password, a
personal identification number, or a swipe pattern to unlock the device. And once you unlock the device,
there may be additional passwords for the applications that are used on that mobile device.

If you forget your password, there may be a recovery process on the mobile device itself, or you may
have to contact the Mobile Device Manager administrator directly and have them remove those security
controls from your device.

One type of password that you can’t forget is a biometric password. This would be something that is
part of you. It could be a face. It could be a fingerprint.

It’s something that allows the mobile device to interact with you at a personal level. But this might not
be the most secure authentication factor you can find. On some devices, it’s very easy to circumvent
these biometric systems. And some organizations prefer using other types of authentication instead of
biometrics.

This functionality can be enabled or disabled from the Mobile Device Manager. And they would be able
to determine what type of biometric authentication would be available on which devices. This can also
be administered on a per application basis. You may not require biometrics to be able to unlock the
device, but running an application may require some type of facial scan.

A type of administration that goes a bit beyond two-factor authentication or multifactor authentication
is context-aware authentication. Context-aware authentication combines different characteristics
together to build a profile of who may be trying to authenticate to a particular device. For example,
during the authentication process, the IP address or location of where you’re logging in may be
examined. They may notice that you’re in a place that you would normally frequent, based on previous
GPS information that was stored.

Or there may be additional context based on the devices that happen to be around you. For example, if
you’re at home, there may be Bluetooth-connected speakers and earbuds that are normally associated
with your mobile device. Seeing that those are available would give the authentication process more
context about whether this is really you or someone else. This may not be the most common
authentication method, but it can provide additional criteria to help during that authentication
process.
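
One way to picture this is as a weighted score across several signals, with the login allowed only when enough familiar context is present. The weights and threshold in this Python sketch are invented purely for illustration.

```python
# Context-aware authentication as a simple weighted score (illustrative only).

def context_score(signals):
    weights = {
        "known_ip_range": 0.3,        # logging in from a familiar network
        "frequent_location": 0.4,     # GPS history says this is a usual place
        "paired_devices_nearby": 0.3, # familiar Bluetooth accessories detected
    }
    return sum(weights[name] for name, present in signals.items() if present)

signals = {"known_ip_range": True, "frequent_location": True, "paired_devices_nearby": False}
score = context_score(signals)
print("allow" if score >= 0.6 else "step-up authentication required")  # allow (score 0.7)
```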

One of the challenges we have with BYOD, or bring your own device, is that some of the data on the
device is the user’s personal data, and other data on the device may be sensitive company
information. How do you keep all of that information separate? And perhaps more importantly, how do
you separate those out if you ever need to return that device to purely personal use?

One way to manage this is through containerization. In this context, containerization means that we’re
creating separate areas or partitions on the mobile device where we can keep private information in one
partition and company information in another. This is different than the containerization that you might
see for application deployment.

So we’re not referring to Kubernetes or any of these application deployment systems. This type of
containerization is referring to segmentation of the mobile device’s storage. You can think of this as
splitting the device into two pieces, where one part of the device contains corporate information and the
other part of the device contains your personal information.

This is especially important during the offboarding process, where you want to be sure the company
information is deleted, but you don’t want any of your personal information to be removed from your
private phone. The MDM administrator can click a button and remove all of the company information
from the company container, but leave all of your private information intact. So you’ll still have your
videos, your music, your emails, and your pictures, but all of that corporate data is no longer on your
personal device.
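
You can think of the selective wipe as operating on two separate buckets of data. Here’s a toy Python model of that idea; the file names and container names are invented, and a real MDM enforces this separation at the operating system level.

```python
# Containerized storage and an enterprise-only wipe (toy model).

device_storage = {
    "corporate": ["email_archive.pst", "sales_forecast.xlsx", "vpn_profile.cfg"],
    "personal":  ["family_photos/", "music_library/", "personal_email/"],
}

def enterprise_wipe(storage):
    """Remove company data while leaving the personal container untouched."""
    storage["corporate"].clear()
    return storage

print(enterprise_wipe(device_storage))
# {'corporate': [], 'personal': ['family_photos/', 'music_library/', 'personal_email/']}
```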

It seems almost standard these days that when we deploy a mobile device, we also ensure that all of
the data stored on that device is encrypted. We do that by using full device encryption, or FDE. Some
mobile devices give you options for how you would like to implement that full device encryption; for
example, the choices might be strongest, stronger, or strong.

There may be tradeoffs with this regarding the amount of CPU, battery, and time that it takes to store
and retrieve this encrypted information. Some of these encryption methods can use quite a few CPU
cycles, which also means they’re going to use a lot of your battery life on this device. By having
some of these options available, we can customize how strong we would like the encryption to be and
balance that with how much battery life we’d like to have from these devices.

Similar to our desktop and laptop encryption, we have to make sure that we always have the decryption
key for our mobile devices as well. If we lose that key, there’s no recovery of the data that we’d
previously stored in this encrypted form. It’s very common that that key is backed up on the Mobile
Device Manager, in case you need to now use a different piece of hardware, log in with different
credentials, and be able to recover all of that encrypted data.
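
The "lose the key, lose the data" point is easy to demonstrate with symmetric encryption. This Python sketch uses the third-party cryptography package; real full device encryption is implemented by the mobile operating system, so this is only an analogy.

```python
# Encrypting data at rest, and why key escrow matters (analogy only).
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()            # in practice, escrowed with the MDM
ciphertext = Fernet(key).encrypt(b"confidential sales forecast")

print(Fernet(key).decrypt(ciphertext))  # b'confidential sales forecast'

wrong_key = Fernet.generate_key()       # simulate losing the original key
try:
    Fernet(wrong_key).decrypt(ciphertext)
except InvalidToken:
    print("Without the original key, the stored data is unrecoverable")
```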

Mobile Device Security – SY0-601 CompTIA Security+ : 3.5


Security administrators use many different technologies to protect their mobile devices. In this video,
you’ll learn about MicroSD HSMs, Unified Endpoint Management, Mobile Application Management, and
SEAndroid.

A hardware security module is a physical device that provides cryptographic features for your computer.
This can also be applied to mobile devices through a much smaller form factor of the HSM called a
microSD HSM. This means that we can associate a piece of hardware with the cryptographic functions
for encryption, key generation, digital signatures, or authentication.

This additional hardware truly ties these cryptographic features to the physical tablet or
smartphone. We can also store information securely in these microSD HSMs. We can keep different
encryption and decryption keys in the HSM, or we might want to store our cryptocurrency as part of the
hardware of our mobile device.

We rarely use any single type of system during our normal day to day work. We might have a laptop on
our desk, we might use a tablet when we’re in a meeting. And we might have our smartphone as we’re
traveling back and forth to the office.

To have exactly the same data available across all of these devices, and to maintain security
across all of them, we can take advantage of a unified endpoint management solution, or UEM.
This allows us to easily manage the security posture across all of these different devices, and it allows us
to use applications in different places while still ensuring that all of the proper security features are in
place.

So we might work on our laptop when we’re in the office, or our smartphone when we’re at home, but
we’re providing exactly the same security posture in both of those environments. This is almost required
for the way that we work these days. We don’t think so much about whether we’re using a laptop, a
tablet, or a smartphone; we’re more interested in knowing that we can use a particular application.

So our security policies and the management of these devices need to follow the same philosophy.
There may be quite a few corporate applications on our mobile devices that we are constantly
using, but there has to be some way to maintain these applications, make sure that they’re patched, and
keep them up to date. The way that you do that is by taking advantage of mobile application
management, or MAM.

You would still use your mobile device manager to manage the device itself. But you would use the
mobile application management to be able to manage the applications that are running on those mobile
devices. For example, your organization might maintain an app catalog that’s specific to your enterprise.

So you can connect to your corporate app catalog, download the applications that you need as
part of your job, and those will then be available on your mobile device, thanks to your
mobile application management.

The administrator of the MAM can also monitor how the applications are being used and whether there
are any problems with the applications. If the applications are crashing, or users are not properly
authenticating to the application, all of those events can be seen in your mobile application management.

Your MAM can also provide you with very fine-grained control of the data that’s on these mobile
devices. So it may be able to delete data associated with one particular application, but leave all of the
other data on that mobile device intact.

If you’re using Android on your mobile device, you’re probably using the Security Enhancements for
Android, or SEAndroid. This is effectively taking the SELinux functionality and including it as part of the
Android operating system. This provides additional access controls and security policies, and includes
different policies for configuring the security of these mobile devices.

Well, if there’s an organization that needs to provide a secure mobile device, it would be the NSA, or the
National Security Agency, here in the United States. This was the organization that really pushed the
SEAndroid functionality, and it is an addition to their already popular SELinux security extensions.

The goal of this project was to provide security across the entire Android operating system, so you’ll see
enhancements to the kernel, to the userspace, and to the configuration settings in the security policy.
SEAndroid has been included by default since Android version 4.3, which was released in July of 2013.

SEAndroid prevents any direct access to the kernel of the Android operating system by protecting the
privileged daemons that are running inside of SEAndroid. It also changes the way data is accessed on
these mobile devices. By default, Discretionary Access Control, or DAC, was used, and that has been
changed to Mandatory Access Control, or MAC. This removes the user from being able to control what
type of access someone might have to the system, and instead puts that in the control of the
administrator.

The administrator can assign object labels and then assign users with minimum access to those specific
labels. This also creates sandboxes between the applications running in the operating system, so that
data from one application is isolated from data that is created and stored by a different application. And
SEAndroid provides a centralized area for policy configuration, so that all of the security features can
be administered from one central point.
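
Here’s a toy Python model of that mandatory access control idea: the administrator assigns labels to objects and maps each application domain to the labels it may touch, and nothing the user does can override it. Real SEAndroid policy is written in the SELinux policy language and enforced by the kernel, so treat this purely as an analogy.

```python
# Mandatory access control with administrator-assigned labels (analogy only).

OBJECT_LABELS = {
    "/data/app_a/records.db": "app_a_data",
    "/data/app_b/cache": "app_b_data",
}
DOMAIN_ALLOWED = {"app_a": {"app_a_data"}, "app_b": {"app_b_data"}}

def mac_allows(domain, path):
    """Access is granted only if policy maps the subject's domain to the object's label."""
    return OBJECT_LABELS.get(path) in DOMAIN_ALLOWED.get(domain, set())

print(mac_allows("app_a", "/data/app_a/records.db"))  # True
print(mac_allows("app_a", "/data/app_b/cache"))       # False - sandboxed apart
```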

Mobile Device Enforcement – SY0-601 CompTIA Security+ : 3.5


A mobile device administrator uses many different enforcement options to keep the organization’s data
safe. In this video, you’ll learn about rooting, carrier unlocking, firmware OTA upgrades, accessory use,
and more.

If you have an Apple iOS device, then you’re probably very aware of the Apple App Store, and if you have
an Android device, you’ve probably visited the Google Play store. Each of those third-party app stores
contains apps that you can browse through, download, and run on your mobile device. However, not every
application that you’re going to get from those app stores is secure. These organizations do a very good
job at finding applications that are malicious and preventing them from being part of their app store.

Unfortunately, there have been times when an application has managed to get on the App Store that
contains some type of security concern. This could be a vulnerability that could be taken advantage of
by an attacker, or it may be that the application leaks data and makes that private information available
to others.

And if we’re using this mobile device at work, you may find that a number of apps on these third-party
app stores probably aren’t appropriate for work. These might be games or social media apps, or they may
just go outside the scope of what’s considered appropriate for work. If your organization uses a Mobile
Device Manager, the administrator of that system can allow or disallow certain apps from running on
your mobile device.

If you have a smartphone or tablet, you’ve probably never seen the command prompt at the operating
system level for that device. That’s because you don’t generally need access to the operating system of
these mobile devices. They are purpose-built systems that provide you with a user interface that keeps
you away from the operating system of those machines.

But some people like to have control of the operating system of their devices. In Android, you can gain
that by rooting the system; in iOS, we often refer to this as jailbreaking the system. To root or jailbreak
your mobile device, you commonly need to install a specialized type of firmware.

This will replace the operating system that’s currently running on that system, with one that would allow
you access to the operating system itself. This means that you could circumvent any existing security
systems that might be in place, you can go outside the scope of the App Store and simply download and
install apps directly. We call this side loading.

With this rooting or jailbreaking in place, your Mobile Device Manager doesn’t have much control of
those systems. That’s why most of the devices you’ll see administered by a centralized management tool
like an MDM are almost always using the standard firmware and are not using a jailbroken or rooted
version of firmware.

You may not realize it, but the smartphone that you’re using is probably locked to the carrier that you’re
using. So in the United States, if you’re using AT&T, you can only use this phone on an AT&T network.
You can’t take this phone from AT&T and start using it on a Verizon network, because the phone has
been locked to the AT&T network.

This is primarily because the carrier is subsidizing the cost of the phone. Instead of purchasing the phone
for its full cost, the carrier is subsidizing the cost of that phone over your monthly contract. And that’s
why AT&T doesn’t want you to get a phone for very little money and then immediately take that phone
over to Verizon and start using their network.

However, there might be times when using this phone on a different network may be required. It may
be that you’ve already paid off this phone and had it for a certain amount of time, so your carrier will
allow you to unlock that phone and use it on a different network, or perhaps you’re leaving the country
and you want to have a way to use this phone while you’re outside of the normal AT&T network area.

In those cases, you’ll need to contact your carrier directly, and they have a series of processes they
follow to unlock the phone so that you can use it elsewhere. Since a lot of the security that we configure
on our mobile device managers is associated with the configuration of this phone, unlocking it and
moving it to a different carrier could potentially circumvent the security of that Mobile Device Manager.

The MDM administrator would need policies that either allow users to unlock their phones and move
them to a different carrier or prevent them from doing so, or they would need a series of processes in
place to bring the phone back into the MDM after it’s been moved to the new carrier.

The operating systems of our mobile devices are constantly being updated. Sometimes these updates
include feature updates, other times they are security patches. Whenever your system needs to be
updated, it’s often receiving these updates over the air or OTA. This means that you don’t have to plug it
into your system, you don’t have to download any software, all of these updates are automatically
pushed down to your mobile device when they’re ready.

You’ll often see a message pop up on your mobile device that says a new version of firmware is available,
and you can click here to install it now or click another button to install it overnight. This means you can
go to sleep, wake up in the morning, and have a brand new version of firmware with all of the new
features of that version.

If this is a mobile device used for corporate applications, you may want to test this firmware before
deploying it in your environment. In those cases, the deployment is handled through OTA but from the
mobile Device Manager itself. That way the update can be tested internally, and when you’re ready to
roll it out, you can push it out to all of your systems from your Mobile Device Manager.

Now that everybody is walking around with their own smartphone, they’re also effectively walking
around with their own camera. This is a feature a lot of people use on their smartphone, and it’s
perfectly acceptable in most environments. But you may be working in a very high-security environment
that doesn’t want everyone bringing their own camera into a place where no data should be
getting out.

It’s difficult to control camera use on the device itself. There’s no way to completely turn off the camera,
and you’re never quite sure if somebody is using a camera on their device or not. Fortunately, the MDM
is able to enable or disable the features of the camera. And it may configure them based on where you
happen to be.

If you’re anywhere near the main corporate building, which is very secure, the camera feature may be
disabled. But once you leave the building, the geo-fencing features of your MDM can recognize that
you’re no longer near the main office, and it can re-enable the camera functionality.

One way that users can transfer data off of their mobile device is by using SMS and MMS. This stands for
Short Message Service and Multimedia Message Service but we often just refer to this as texting. These
text messages can contain pictures, audio, movies, and other types of data as well. So they can be used
for outbound data leaks or disclosure of financial information.

We’ve also seen these text messages used for inbound attacks, where the attackers are trying to obtain
access to a system using phishing techniques. Just like the controls we have available for the camera, the
Mobile Device Manager can also control the MMS and SMS functionality of your mobile device. So the
text messaging on your device may be disabled completely, or it may only be available when you’re in
certain areas.

It’s becoming easier and easier to move data from a secure area to somewhere that is insecure through
the use of these mobile storage devices. This is external media that’s commonly associated with an SD
card or similar flash drive configuration. Or you may be plugging in a multi-terabyte portable USB
drive, transferring data onto that drive, putting the drive in your pocket, and simply
walking out the door.

These are very standardized and easy to use. You simply need an interface that will support the media;
you plug it into your computer, transfer the files, then unplug the media and take it with you. Some
security administrators will configure the operating systems of their desktops, laptops, and other
devices to limit how much data can be written to an external USB or external media drive like this one.
The administrator of the MDM can also set security policies that might allow or disallow access to these
flash drives from our mobile devices.

Another way to transfer data doesn’t even use a flash drive; instead, we simply plug in a cable
between two devices to transfer information to your mobile device. This is called USB OTG, which stands
for On-The-Go. You don’t need an external drive or flash memory; you simply need a cable that is
compatible with the devices on both ends.

This was a feature that was introduced with USB 2.0. So it’s been around for quite some time and it’s
very common to see on Android devices. There’s also some USB OTG capabilities built into iOS as well.
And it is extremely convenient and easy to use. You can simply plug in your laptop or desktop, transfer
the files, disconnect, and walk out of the building with all of that data.

Just as there is a camera on all of our mobile devices, there are also audio recording capabilities on all of
our mobile devices. And these can be very useful if you’re taking notes, or you’re in a meeting and need
to capture what somebody might be saying. But there are some legal issues associated with capturing
audio and it depends on where you happen to be.

Each state in the US has a different set of laws associated with capturing audio and every situation
you’re in, is going to be a little bit different, so you have to make sure you check the laws in your
particular area. Like most features on your mobile phone, all of the audio recordings can be enabled or
disabled from your Mobile Device Manager.

In this video, we’ve already talked about enabling or disabling features based on where you happen to
be. We refer to this as geo-fencing. But the information of where you are can also be stored into files.
This is called geotagging or GPS tagging.

When you’re saving a document, when you’re taking a picture, or storing some audio information on
your mobile device, there’s additional information that’s stored as metadata, and that metadata might
include information of your longitude, your latitude, or other information associated with your location.

This means that if you have access to some of the documents on this device, you might also know where
this user has been. And that might be a security concern. So you might want to configure your Mobile
Device Manager, or the configuration of your mobile device, to not save any of the location information
when you’re storing these files.
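
If you’d like to see what a geotag looks like, here’s a small sketch using the Pillow imaging library to check a photo for GPS metadata and save a copy without any EXIF data. The file names are hypothetical, and this is just one of several ways to strip metadata.

```python
# Checking a photo for GPS metadata and saving an untagged copy (sketch only).
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("vacation_photo.jpg")

for tag_id, value in img.getexif().items():
    if TAGS.get(tag_id) == "GPSInfo":
        print("This photo is geotagged:", value)

# Re-save only the pixel data; the EXIF metadata is not carried over.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("vacation_photo_clean.jpg")
```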

If you’re at home, or the office, or coffee shop, you’re probably connecting to a Wi-Fi network that’s
controlled from an access point. Everybody connects to the same access point, and that gives you access
to the internet and any other resources on your local network. But the wireless standard also supports a
mode where two devices can communicate directly to each other without the use of an access point.

In the 802.11 world, this is called ad hoc mode, and it allows these two devices to easily communicate
without including any other devices on the network. Configuring ad hoc mode on both sides of the
connection can sometimes be difficult, but there are some enhancements to Wi-Fi called Wi-Fi Direct
which simplify the process so that two devices can easily connect to each other and begin transferring
data between both sides.

If you’ve ever configured any IoT or internet of things devices at home, you may have seen that it starts
with a Wi-Fi direct connection, so that you could then configure the device. And then that device can
connect to your access point once it’s been configured.

This is another opportunity for devices to transfer data between each other without using security
features that you might have in an access point. This becomes a concern for security professionals, who
want to make sure that they have control over data and can limit the scope of where the data might go.

On your corporate network, you probably have an internet connection with a next-generation firewall
and security policies and procedures associated with that connection. And you may find if you’re using
that corporate connection, there may be websites that are blocked, or you may have limited access to
certain parts of the internet.

Some users have found that instead of using their corporate network connection, they can turn their
phone into a Wi-Fi hotspot, and have unfettered access to the internet. This means your phone is now
communicating to internet connections through your cellular phone provider, and then any other Wi-Fi
devices you have can communicate through your phone to gain access to the internet.

This functionality is dependent on your carrier. They may have it turned off by default or may require an
additional cost to be able to use this feature. Not only is this allowing people to have unsecure access to
the internet from your corporate network, this could also allow access into your corporate network
instead of going through the existing security controls on the outside. This is probably not a capability
that you’ll want to allow by default, and it should probably be administered and monitored through your
Mobile Device Manager.

And of course, most of our mobile devices these days allow for NFC or Near Field Communication. It’s a
common way to transfer data between two devices that are in close proximity. We often use NFC to pay
for things when we’re checking out at a store, or we can use it to transfer information between two
mobile devices.

If you’re using this for purchases, you’ve probably seen Apple Pay, Android Pay, or Samsung Pay as some
of the common standards that are available when checking out. And there’s usually some type of
authentication that you have to do before the payment will go through. You would, of course, not want a
locked phone to be able to pay for things using NFC, so there's usually an authentication or unlock process
before you're able to use this during checkout.

Mobile Deployment Models – SY0-601 CompTIA Security+ : 3.5


There are a number of different options available when deploying mobile devices. In this video, you’ll
learn about BYOD, COPE, corporate-owned, and VDI/VMI deployments.

The first type of mobile deployment we'll discuss in this video is bring your own device, or BYOD. You may
see this also referred to as bring your own technology. This is when you bring your own personal
smartphone or tablet to work and use that device for both personal use and for corporate use.
There are obviously some security challenges with having a single device that stores
both personal information and corporate information. The data that’s stored on this device needs to be
protected not only for your personal data, but for your corporate data as well. And there probably needs
to be a way to differentiate between what is personal and what is corporate.

We also have to think about the security when we sell our phone or trade in our phone for a newer
model. This is something the security administrator usually manages through the use of an MDM, or a
Mobile Device Manager. A similar deployment type is one called COPE. This is corporate owned but
personally enabled. You’re still using the same single device for both corporate use and personal use,
but instead of you purchasing a device and bringing it to work, your office is purchasing the device and
letting you use it. This means you’ll use it as a corporate device and a personal device, but you only have
to carry around one device.

Since the smartphone or the tablet is owned by the company, they have full control of everything that
goes onto this device. This is very similar to a laptop or a desktop that’s assigned to you by your
company. These are usually administered from your mobile device manager, so everything on this
device is controlled by the organization, and they can decide what information is stored on the device,
and what information can be deleted.

A similar deployment type to COPE is CYOD, or choose your own device. This is very similar to COPE, except
that instead of the organization choosing what device you're going to carry around, with CYOD you get to
decide the device that you're going to use, and then the organization purchases that device for you.

On those previous deployments, we used a single device, and that single device could be used for
personal use or for corporate use. But there are times when you don’t want to have a single device used
for both of those situations. One of these deployments would be something like a corporate owned
deployment where the organization owns the device and you can’t use it for personal use. If you need
your own smartphone for personal use, then you’ll need to purchase one yourself and carry around both
your personal smartphone and your corporate owned smartphone. This is probably not the most
common deployment type, but it might be the standard type used if your organization has a lot of data
that they want to keep private, and they want to avoid having your personal data on these same
devices.

Another mobile deployment type separates the data from the device. This would be VDI or VMI. That
stands for Virtual Desktop Infrastructure, or Virtual Mobile Infrastructure. With VDI and VMI, you can
separate both your applications and the data from the mobile device and have all of that information
stored somewhere else. This would keep all of the data and apps stored externally from your mobile device,
and you would simply access all of those applications and data using some type of remote access
software.

This means that all of your data is stored securely and separate from your mobile device, which means if
you lose your mobile device, you’re not losing any of that data. You can easily replace the mobile device
and simply reconnect to that data store that’s located somewhere else. This model works very well for
the application developer, because they can build an app based on a single type of platform. This would
be the VDI or VMI platform.

All of these devices that you’re using are simply connecting in usually through a remote desktop type of
configuration and running the applications from there. This also makes it easier to manage the
applications. You don’t have to worry about deploying new apps to everyone’s mobile phone. Instead,
you would update a single application store at the VDI or VMI management server itself, and all of the
devices connecting in are now using the new software.

Cloud Security Controls – SY0-601 CompTIA Security+ : 3.6


The cloud introduces new security requirements for our enterprise networks. In this video, you’ll learn
about HA across zones, resource policies, secrets management, and security auditing.

One of the key aspects of IT security is maintaining the uptime and availability of our applications. For
cloud-based applications, we can organize areas of availability into Availability Zones, or AZs. These are
commonly referred to as regions within the cloud service. So your cloud service
provider might have an availability zone for North America, a different one for South America, one for
Europe, and another for Asia-Pacific.

Each of these availability zones is effectively self-contained. Each one would have independent power, a
separate HVAC system, separate networking configurations. And anything that happens in one
availability zone has no effect on anything else that’s happening in another availability zone. We can
take advantage of this availability when we’re building our applications. We might build the applications
to already recognize what zone they may be working in and give them the ability to operate outside of
the existing availability zone.

So we might configure an application to run as active/standby or active/active. It might be active
in one AZ, and if anything happens to the connectivity or resources in that AZ, the application can
automatically switch itself to run from a completely separate availability zone. We could also use a load
balancer not only to distribute the load for the application but to provide additional high availability. If
we lose one of the servers behind that load balancer, the load balancer will automatically recognize
that the server is not available and transfer the load to the remaining servers on
that system.
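
As a hedged sketch of how this might look with a cloud provider's CLI, the load balancer below is created across subnets in two different availability zones, so losing one zone doesn't take down the application. The names and IDs are placeholders.

# Create a load balancer that spans subnets in two availability zones
aws elbv2 create-load-balancer \
  --name web-app-lb \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-0123456789abcdef0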

One of the challenges in these cloud-based environments is making sure that the user and
administrative access to these systems is properly managed. And you can do this by using Identity and
Access Management or IAM. This determines who gets access to a particular cloud resource. And then
within that resource, it determines what they get access to. This allows you to create different groups.
And you can map different job functions to those individual groups.

There might be a group for administrators that provides full access to the cloud-based application and a
different group for the users of that application. And since this is in the cloud, you can create some very
granular controls based on a number of different criteria. For example, if someone’s connecting from a
trusted IP address, they might have additional access to that particular application.

Or if it’s a particular time of the day, you may enable or disable different capabilities within that
application. And of course, since this application can be used in many different cloud environments
spanning many different availability zones, you can synchronize all of these user roles so that everyone
is always up to date with the access that they need.
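
As an example of how granular these policies can be, here's a sketch of an IAM-style policy that only allows object reads when the request comes from a trusted IP range. The bucket name and address block are placeholders.

# trusted-read-policy.json -- allow reads only from the trusted corporate address block
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-app-data/*",
      "Condition": {
        "IpAddress": { "aws:SourceIp": "198.51.100.0/24" }
      }
    }
  ]
}

# Register the policy so it can be attached to a group of application users
aws iam create-policy --policy-name trusted-read --policy-document file://trusted-read-policy.json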

When you start configuring a lot of cloud-based systems and applications, you’re going to find yourself
filling in a lot of areas where it’s asking for a secret key. This could be part of an API or Application
Programming Interface. This might be associated with a shared passphrase between two devices. Or it
may be associated with the certificates that you’re using for encryption. There are many different places
where these secret keys and secret phrases are added into the configuration.

So it’s important that you’re able to manage this process and have some centralized form of secrets
management. It’s not uncommon to have a separate service that manages all of these secrets for
everyone in your organization. It’s common not only to allow or disallow access to the secret service so
that someone can gain access to the secret information, but it’s also important to limit what type of
secrets are available.

You may want to allow access for a user to secrets specific to the application that they’re managing but
restrict access to secrets that are not associated with that app. And of course, you’ll need some type of
audit trail and logging. So you’ll know exactly who accessed the secret service. You’ll know what secrets
they accessed. And you’ll understand more about who has access to different secrets in your
environment.
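
As a rough example of what this looks like in practice, a secrets management service such as HashiCorp Vault stores and retrieves secrets with simple commands, and each access can be logged for that audit trail. This sketch assumes a key/value secrets engine mounted at secret/ and uses placeholder paths and values.

# Store a database password under a path scoped to a single application
vault kv put secret/payroll-app/db password='Example$ecret1'

# An authorized user or service retrieves only the secrets for that application
vault kv get secret/payroll-app/db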

This auditing process isn't specific to the secrets that you're keeping. Because we're maintaining security
across multiple operating systems and multiple applications, we need some way to
consolidate and report on logs from all of these different devices. This might include firewalls, VPNs,
routers, switches, servers, and anything else on our network that creates logs. We can centralize all of
this to a Security Information and Event Management system or a SIEM.

This allows us to not only maintain a consolidated log storage, but we can also create reports from this
information. We can keep constant checks on the security of our systems. We can audit them
occasionally, make sure that nobody’s gaining access to something they shouldn’t have access to. And of
course, we can create reports that show that we're in compliance with a series of laws or regulations.

Securing Cloud Storage – SY0-601 CompTIA Security+ : 3.6


Additional security is important when storing information in the cloud. In this video, you’ll learn about
permissions, encryption, and replication.

As security professionals, we have to manage applications that are running in the cloud. And of course,
there will be user data that is stored in the cloud. Often, this is stored in a public cloud, which means it
could be on Microsoft, Amazon, Rackspace, or any other cloud provider. But the data we’re putting onto
these cloud services may not be public information. It may be information that we need to keep private.
Because of that, we need to have controls in place that can limit who can see what type of data.

Fortunately, there are a number of options available to us to configure exactly how the cloud-based
storage should be configured. We may want to set it up so that the data is in different geographical
locations. This is not uncommon, especially if there are rules and regulations over where certain data
must be stored. And it may be that data collected for users in a particular country must be stored
somewhere in that country as well.

And of course, we have to be concerned about the data being available. It may be that we have extra
copies of this data available. We may create redundant configurations of this data so that no matter
what happens in the cloud, we will always have access to the data. The permissions that we assign to
the data we're storing in the cloud are our first step in securing it. And it's important to remember that the
data being stored in the cloud must have these permissions set. Otherwise, this data would be available
for anyone to see.

Good examples of this are the data breaches associated with Accenture, with Uber, with the US
Department of Defense, and many other organizations that did not properly set permissions on data
stored in the cloud. In the past, some cloud services by default would set your data to public. This meant
that you were required to go into that data source and configure the permissions differently if you
needed to restrict that from public view. That obviously is not a best practice. And many cloud-based
providers have changed their defaults so that public access is not the default.

These are the account settings for public access for data on the Amazon network. And you can configure
options to block all public access or modify how the data is provided to others. There are many different
ways to manage this access. One is through identity and access management so that we can assign user
account information and associate these users to the data they’re accessing. We can also set policies on
the buckets themselves. This is where we are storing the data in the cloud.

You can also globally block public access. There’s an option for that in the Amazon configuration. And
when you turn off the ability to have public access, you turn on the ability to assign other rights and
permissions to that data. The goal, though, is to keep the data safe. And it may be that the best practice
is not to put the data in the cloud at all unless it really does need to be stored there.
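
For example, with the AWS CLI you can turn on every public access block for a storage bucket in one command. The bucket name here is a placeholder.

# Block all forms of public access to this bucket
aws s3api put-public-access-block \
  --bucket example-app-data \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true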

By default, data that we store in the cloud is naturally going to be more accessible to others than data
that is not stored in the cloud. It is so easy to access these cloud-based storage systems from anywhere
in the world. So having data stored in some central facility is going to be very available to anyone who
needs the access. Because of that, we may want to encrypt the data that we’re storing in the cloud so
that there is another layer of security beyond the scope of the identity and access management or other
permissions that we’re assigning to that data.

We could perform server-side encryption, which means we encrypt the data when it’s posted in the
cloud. And when we’re retrieving information from the cloud, our system decrypts that data so that
we’re able to use it. This means if someone does gain access to that storage drive or the individual files,
they won’t be able to read any of the data because it was encrypted when it was stored onto that drive.

If we want to protect this data beyond the scope of where it happens to be stored, we may want to
perform the encryption on the client side. This client-side encryption means that we will be encrypting
the data locally, sending all of that encrypted data across the network in its secure form, and ultimately
saving it as that encrypted data on the storage drive. This relies on our local applications to have this
functionality built into the app itself.

So we perform all of our encryption process in the application. We store that data. And then when we
retrieve the data, we retrieve it in its encrypted form, send it encrypted across the network, and only
decrypt it once it’s local on our client. By moving this process to the client, we’ve now required an
additional overhead for managing these encryption keys. Since these encryption keys are used for both
the encryption and decryption process and are crucial for keeping that data safe, we need to be sure we
have the proper processes in place to manage existing keys and any future keys that we might use for
this data.
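
Here's a minimal sketch of client-side encryption, assuming a recent version of OpenSSL is available on the client. The file, key, and bucket names are placeholders. The data is encrypted before it ever leaves the client, and only the encrypted form is stored in the cloud.

# Encrypt the file locally; the key never leaves the client
openssl enc -aes-256-cbc -salt -pbkdf2 -in payroll.csv -out payroll.csv.enc -pass file:./local.key

# Upload only the encrypted object to cloud storage
aws s3 cp payroll.csv.enc s3://example-app-data/payroll.csv.enc

# Later, download the object and decrypt it locally with the same key
openssl enc -d -aes-256-cbc -pbkdf2 -in payroll.csv.enc -out payroll.csv -pass file:./local.key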

It’s also common to have data replication in the cloud. This data in the cloud can be stored in one
location. And we can create a copy of that data and store it in a different location. One of the most
common reasons for doing this is to maintain the uptime and availability of that data. If there are any
types of disasters or downtime, we can always recover this data from another location or have our
applications retrieve that data from the replicated site.

It may be that we’re taking every bit of data and copying it off to a hot site. So if we do have the need to
call a disaster, we can move everyone over to the backup location. And all of our data will be replicated
and available for us in that hot site. Another reason to duplicate this data and store it in another location
is if we would like to perform some additional analysis of the data.

By using a copy of the data, we don’t have to worry about making any changes to the production
records or modifying the way the application might be performing. And of course, having a backup of
the data is a perfect reason to perform replication. This means that we can constantly update the data
store that we have and have duplicates of that data located in different places around the world.
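
Cloud providers usually offer built-in replication rules for this, but as a simple hedged illustration, you could keep a copy of one bucket's contents in a second bucket with a periodic sync. The bucket names are placeholders.

# Copy everything from the production bucket to a replica bucket
aws s3 sync s3://example-app-data s3://example-app-data-replica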

Securing Cloud Networks – SY0-601 CompTIA Security+ : 3.6


The cloud introduces new security challenges to our connectivity and networking. In this video, you’ll
learn about virtual networks, public and private subnets, segmentation, and API inspection.

If we have applications that are running in the cloud and we’re storing data in the cloud, then we need
some type of network to be able to send and receive data from that cloud-based system. Our
application may be a public one, which means anyone on the internet from anywhere in the world
would be able to access that application. Or maybe it’s a private cloud. And you’ll have to have a VPN
connection in order to gain access to that private subnet.

And most application instances have many different services running simultaneously. And there needs
to be some way for all of these different services to communicate with each other. This may be within
the same data center using east-west traffic. Or there may be communication outside of the data center
to information that’s stored in a completely different location. And that would be referred to as north-
south communication.

And, of course, it’s very common to have the ability in the cloud to be able to create and tear down
instances of servers, of databases, and of other systems. We can do exactly the same thing with our
networking.

We can have virtual switches, virtual routers, and build an entire virtual infrastructure with different IP
addressing, different routing configurations all within a cloud-based system. From the perspective of the
network administrator, these virtual systems look and feel exactly the same way as the physical systems.

So configuring a virtual switch is exactly the same process on the front end as configuring a physical switch.
But unlike our physical devices, our cloud-based networking systems can be created virtually at any
time. We can use our on-demand functionality to go to a single console, configure as many different
switches as we would like. And instantly, we now have a new network infrastructure created.

We can combine this with rapid elasticity so our applications can automatically create new instances.
And with those new instances would be virtual switches and virtual routers along with all of the other
cloud-based systems.

It’s not uncommon to create cloud-based systems that may be located at a global provider like Amazon
or Microsoft. But there’s no public access to any of those resources. This would be virtual private clouds
that have all internal or private IP addressing. And the only way that you’d be able to connect to those
systems is using some type of virtual private network.

This allows you to have all the advantages of a cloud-based system but also have all of the privacy since
no one on the internet has direct access to your data. Of course, you might be building out an
application that you would want everyone in the world to be able to access. And in that case, you would
use a public cloud.

This would use external IP addresses so that no matter where you are in the world, everyone would be
able to connect to your service. And it’s not unusual to find situations where you might have a public-
facing part of your application that users connect to. But the internals of your application may only be
communicating with each other over private networks.

This allows you to determine exactly what part of your cloud-based services can be accessed by folks on
the internet and what parts of that cloud-based service are only private for people who are part of your
organization. We would use these public and private subnets to be able to create segmentation of your
application and of the data.

This application instance might be running separate virtual private clouds. It might have different
containers and separate microservices for the applications that you're running. This means that there are
certainly going to be opportunities to provide segmentation of the data. And now we can start to
manage how we want that data to flow between all of those different segments.

For example, we might have an application front-end that users are able to access. But there may be
communication to a backend database that is all on a private network. And the users can’t directly
access the database server. Only your application server has access to your database server.
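
A hedged sketch of how that restriction might be expressed: a security group rule that only permits database traffic when it originates from the application tier's own security group, so users can never reach the database directly. The group IDs and port are placeholders.

# Allow database traffic (TCP 3306) to the database tier only from the application tier's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0db0000000000000a \
  --protocol tcp \
  --port 3306 \
  --source-group sg-0app000000000000b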

This adds additional security between the different components of your application instance and allows
you to maintain a level of privacy that is naturally built into your infrastructure. We might also want to
supplement this segmentation with additional security controls. For example, we might have a Web
Application Firewall, or a WAF, which is examining all of the communication inbound and outbound for
that application and making sure that there’s no malicious data going into or out of your application
instance.

If you want to segment the network even further, you might want to use a next-generation firewall. This
allows you to set up different subnets and route between those subnets. And you can do this all in
a virtual environment. Many next-generation firewalls also include intrusion prevention so they can
identify any known malicious code that might be traversing your network.

Many of our applications use the microservice architecture where there is an API gateway that is simply
accessing microservices of the application and using application programming interface or API calls to be
able to request and receive data from that microservice. Although this provides efficiency and
redundancy for the application, it also introduces a number of security concerns.

For example, we may have situations where instead of the client making requests to an API gateway,
someone attacking this application could circumvent the client and send their own request directly to
the API gateway. So we may want to examine those calls to see if they’re coming directly from the client
or if someone may be circumventing the client and sending their own customized calls to our API
gateway.

To be able to manage this, you’ll want to perform some API monitoring where you can view specific API
queries that are coming from the API gateway. And you can monitor exactly what type of data is
incoming to your database and what information is being sent back to the client.

Securing Compute Clouds – SY0-601 CompTIA Security+ : 3.6

Our compute cloud instances are the core of a cloud-based application instance. In this video, you’ll
learn about security groups, dynamic resource allocation, VPC endpoints, and container security.

When we’re creating our cloud based applications, we need some components that will perform the
actual calculations. These are our compute cloud instances. A good example of this would be the
Amazon Elastic Compute Cloud. This is often referred to as the EC2 cloud. You have the Google Compute
Engine or the GCE. Or in Azure, you have the Microsoft Azure virtual machines.

We commonly manage these compute instances by launching a virtual machine or perhaps this
particular instance is launched as a container. We can allocate additional resources to that compute
engine. And then when we’re done using that engine, we can disable it or remove it completely from the
cloud.

One common way to manage access to these compute engines is from network connectivity using
security groups. It will be common to have a firewall that you could use just outside of the compute
engine. And then you can control what traffic is inbound and outbound from that instance.

Since this is a firewall, we can commonly control access based on a TCP or UDP port number. This would
be something working at OSI layer 4. Or, of course, we could use OSI layer 3, which would be an IP address,
either an individual IP address or perhaps an entire block of addresses. And you can usually add this using
CIDR block notation to the firewall. And, of course, we can manage both IPv4 addressing and IPv6
addressing.
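
As a rough illustration of these layer 3 and layer 4 controls, the rule below allows inbound HTTPS only from a single address block specified in CIDR notation. The security group ID and addresses are placeholders.

# Allow inbound HTTPS (TCP port 443) only from one corporate address block
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 203.0.113.0/24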

These are the network security configuration settings for an EC2 cloud. You can see that you have a number
of different criteria that you can choose for the firewall. You can specify your own TCP or UDP port
number. You can specify all TCP or UDP. And there may be individual applications that are already pre-
configured, and you simply need to select that one to allow or disallow that traffic.

These computing resources can be created on demand so that you would only have them active when
they’re needed. And then you can disable them or deprovision them once they are complete. This
means as the demand to the application increases, you can provision additional resources automatically
to be able to cover the additional load.

You can even automate this process so that you are constantly monitoring the load of the application.
And if the application needs more resources, they can automatically be provisioned. And when the load
decreases and the number of people accessing that application slows down, you can deprovision those
resources.

We refer to this as rapid elasticity. And it’s a very common way to maintain the uptime and availability
of your cloud based applications. This would also allow you to minimize how much you’re paying for this
cloud based application because you commonly pay for these resources as you use them. That means
when the application is busy, you can pay a little bit extra to have more resources available. And when
the load on the application is decreasing, you can remove some of those resources so that you’re not
having to pay for them.

It's common to combine this dynamic resource utilization with some type of monitoring system that can
identify when the load increases and when the load decreases. For example, you may want to monitor
CPU utilization. And when that utilization gets to a certain amount, you can increase additional
resources to drop that load across multiple compute instances.
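
One hedged sketch of that kind of automation is a target tracking policy that adds or removes instances to keep average CPU utilization near a chosen value. The group name, policy name, and target value are placeholders.

# Keep average CPU utilization of the group near 60 percent by scaling out and in automatically
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-app-asg \
  --policy-name keep-cpu-near-60 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":60.0}'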

Sometimes it's useful to have very granular security controls and be able to manage exactly what type
of traffic may be flowing across your cloud configuration. This means you can identify what's in the data
of the application flows and be able to make security decisions based on that.

For example, your organization might be using box.com for sharing files in the cloud. And if users are
storing data in the corporate file share, you may have policies that would allow personal information to
be stored into that share. And you may allow any department to put information into the corporate file
share.

But there may be other shares that require additional security. For instance, there may be a personal file
share. There may be policies that are already pre-configured by your corporate headquarters that would
allow you to store graphics files in your personal file share, but would deny any spreadsheets.

It might also prevent you from storing any files that would have sensitive information, such as credit
card numbers. And there may be security monitoring for these files so that if anybody does try to store
this information, an alert is sent to your security team.

Application developers can build cloud-based systems that might be designed for internal use only. That
means your internal employees would have applications that they’re able to use. And the data that’s
being accessed by that application is private data that’s not available to other people on the internet.

It's very common to have this private application stored in the cloud and to have the private data stored
in a file share that's in the cloud as well. To allow access between both of those private locations so that
the application can access the data and vice versa, you need a VPC (virtual private cloud) gateway
endpoint. This would allow you to have private access between the application instance
and the data and restrict access from anyone else.

This also means that you don’t have to have internet connectivity just to be able to access these
applications. This means that we need to add another component between our application instance and
the data that we’re storing. And that additional component is a VPC endpoint.

Connectivity between different components is made a bit easier when there happens to be the internet
in the middle of the conversation. If this is a public application, then there would be a public subnet with
virtual machines or other application instances. And there would be a gateway that would provide
access to the internet. That would effectively allow the virtual machine to be able to communicate to
the cloud storage because the cloud storage is also on the internet. This makes it very easy for your
application to communicate to the data that it needs inside of a bucket that is on the cloud storage
because you do have this internet connectivity in the middle.

But if this application instance is on a private subnet and needs access to this data, we don’t have an
internet connection in the middle that would allow that. So we’re going to add a VPC endpoint. This
allows us to have connectivity between a private subnet and another part of the connection that’s in the
cloud, in this case, a storage network, so that our application can access that data even though there’s
no public internet connection in the middle.
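
As a sketch of that additional component, the command below creates a gateway endpoint for the storage service and associates it with the private subnet's route table, so the application can reach its bucket without any internet connection. The IDs and region are placeholders.

# Create a gateway endpoint so the private subnet can reach the storage service directly
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0aaa1111bbbb2222c \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0ccc3333dddd4444e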

Instead of using separate virtual machines for every application instance that we would like to deploy,
these days, we commonly use containers. But just because we’re removing a VM component from our
applications doesn’t mean that our application is necessarily more secure. Our containers have similar
security concerns as any other application deployment method.

There might be bugs in the application that would introduce vulnerabilities. You could have missing or
insufficient security controls built into the application, which would allow others to have access to the
data. Or there might be misconfigurations of the application, which could allow others access to the
data that’s not intended.

If you are running containers on top of an existing operating system, it might be a good idea to use an
OS that is specifically built for containerization. This might be a minimal or hardened operating system
that is specifically designed to run containers on top of it.

And we might want to group our containers together so that containers running like services are
contained within a single host. And another host would contain a different type of service. That would
allow us to focus our security posture on the specifics associated with that particular service that’s
running in those containers. And by not mixing different types of containers on the same host, we may
be able to limit the scope of any type of intrusion.
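
As a hedged example of limiting what a single container can do, a few common Docker options run the container as a non-privileged user, drop Linux capabilities, and make its filesystem read-only. The image name is a placeholder.

# Run as a non-root user, drop all capabilities, and mount the root filesystem read-only
docker run --rm \
  --user 1000:1000 \
  --cap-drop ALL \
  --read-only \
  example/payroll-service:1.2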

Cloud Security Solutions – SY0-601 CompTIA Security+ : 3.6


We have created new technologies to protect a new generation of cloud-based applications. In this
video, you’ll learn about CASB, secure web gateways, firewalls in the cloud, and other security controls.

One of the challenges that a security professional has is being able to maintain the security of our data,
even though the data is being stored somewhere external to our organization.

This is one of the largest challenges with cloud-based applications because it can become more difficult
to manage exactly what type of data is being transferred.

One way to manage this is through the use of a cloud access security broker, or a CASB. We would use a
CASB to help enforce the security policies that we’ve already created with data that we’re storing in the
cloud.

This can be implemented as software that’s running on individual devices, we may have a security
appliance that’s local on our corporate network, or the CASB may be located in the cloud, and that’s
where we’re making our security policy decisions.

The CASB is able to operate based on four primary characteristics. The first of these characteristics is
visibility. The CASB needs to understand exactly what applications are in use, and it needs to understand
what users are authorized to use those applications. Being able to see exactly what data is being
transferred is an important part of making this determination.
Your organization might also have additional compliance requirements, such as HIPAA, PCI, or some
other type of local regulation. Our CASB allows us to enforce these compliance regulations on all of the
users that may be storing data in the cloud.

Our CASB is allowing authorized use of the application, but it can also be configured to disallow
unauthorized use through the use of threat prevention. This might focus on exactly what users have
access to the application, and would prevent access from everyone else.

And there may be additional components of the CASB that are looking at the actual transfer of data. For
example, if this is sensitive data, it may require that all of the data is encrypted, or it may be protecting
any personally identifiable information through the use of data loss prevention.

Securing an application running on your local network is difficult enough. When we move that
application to the cloud, there are additional security concerns. One of the biggest concerns is a
misconfiguration of the application itself. You can implement the strongest encryption, and have the
strongest security policies in place, but if someone happens to misconfigure the application to allow
access, then all of those security policies aren’t helping you.

There’s also a need to provide additional authorization and access to the data. There should be very
strong and granular controls that might allow access for individual users, or groups of users. And you
want to be sure that you have some way to monitor all of the application programming interface calls
that are being made by that application to see if anybody may be trying to exploit an existing API, or gain
access to data that would normally not be available.

We can add on additional security through the use of the Next-Gen secure web gateway, or an SWG.
This is going to provide security for all of our users, across all of their devices, regardless of where they
may be connecting from.

It’s common to use a secure web gateway if you want to monitor that API usage. This would allow you to
get detailed information about how these APIs are being queried, and exactly what queries are
occurring.

You would also be able to make policy decisions with your secure web gateway. For example, you might
want to monitor Dropbox use and make sure that Dropbox is being used for corporate use, and not
personal use.

The secure web gateway gets into the details of the data that are being transferred through the
network. So it can examine API calls, it can look at the JSON strings, and understand exactly what type of
API requests are being made.

Once all of that information is examined, the secure web gateway can make a decision about whether
this type of traffic is allowed, or if this might be malicious. And the secure web gateway might allow us
to apply different security policies depending on the type of instance that’s being created. For example,
a production application instance is going to have a completely different security profile than an
instance that is running for development use.

And these days, we can put physical and virtual firewalls within the application flow that would allow us
to control exactly what type of data is being transferred. In a cloud-based environment we don’t need
physical appliances. So we might spin up a virtual firewall, or host-based firewall, and because there isn’t
a physical component, there may be a more economical cost associated with using this type of firewall.

This also means that you could deploy firewalls at a very granular level. You could spin up multiple
firewalls for each individual virtual machine, or microservice, and allow a very fine-grained control over
exactly what data is allowed through the network.

It may be that we just want to provide simple filtering of traffic based on an IP address or port number.
And that would provide us with layer four, or TCP/UDP, type controls.

Our more modern firewalls can provide us with visibility up to layer seven, which is viewing exactly what
type of application is flowing through the network, regardless of what port number or IP address may be
in use.

This means that we could set security policies that would allow certain individuals that are part of a
certain network to use certain applications, but prevent any other type of communication to that
service.

Depending on your cloud service provider, there may be security features that are already built into
their clouds. These may be provided and supported by the cloud provider themselves, which give you a
lot of different options and granular control over that provider’s configurations.

This also means that, since this is built into the infrastructure you're already using, there usually is
not an additional cost to provide that functionality. This becomes more of a challenge, though, if you’re
using more than one cloud service provider. And many organizations are using multiple providers.

In that case, you may want to use third-party solutions that can allow you to see and control different
aspects of security, regardless of what cloud service provider you’re using. This gives us one pane of
glass to be able to see the security for all of these different providers. And we can also set security
policies that would apply across the board, regardless of what cloud provider you may be using.

Many of these third party tools also provide enhanced reporting, so that we’re able to get a
comprehensive view of all of our security controls across all of our cloud providers.

Identity Controls – SY0-601 CompTIA Security+ : 3.7


Verifying the identity of a user is an important step in the authentication process. In this video, you’ll
learn about identity providers, certificates, tokens, and SSH keys.

When you have an application that’s running on your local network you probably have a pretty good
idea of what users and what devices will be accessing that application. But if your application is running
in the cloud you may not have that level of visibility into exactly who’s connecting. In those cases, you
may want to control the identities through the use of an identity provider or IDP. This is a service that
can vouch for who a person happens to be.

You can think of this as being authentication as a service because this is a third party providing this type
of identity control. This IDP will be responsible for identifying and controlling users based on what the
user name might be and what devices they might be using. This is commonly used for cloud based
applications that need single sign on or some type of authentication. And it’s more useful to have a third
party provide that than to have to recreate and manage that process ourselves.

Fortunately, there are many standards available that can help with this identity control, including SAML,
OAuth and OpenID Connect. To be able to understand a particular person’s identity we need to gather a
number of attributes associated with that person. Combining these attributes together allows us to
understand and identify a particular entity.

For example, a common attribute you can associate with an individual who may be working in your
organization, might be their name, their email address, their phone number, or their employee ID. We
could also add other attributes to this as well that might help us with the identification, such as what
department they happen to belong to, their job title or what their mail stop might be.

We could use just one of these attributes to be able to identify someone. For example, we could use a
name. But there may be cases where you have different employees who have the same name. In those
cases, we may want to add on additional attributes such as an email address or phone number to be
sure we know exactly who that user might be.

We can also take advantage of public key cryptography to help identify who a person might be through
the use of certificates. This digital certificate is assigned to a person or assigned to a device and it allows
us to confirm that the owner of that certificate is someone that we can trust.

The certificate owner might also be able to perform other cryptographic functions with this certificate.
For example, they can use this for encrypting data or to create digital signatures that can be trusted by a
third party. This type of identity control requires that we put in some type of Public Key Infrastructure, or
PKI. And this normally would also include a certificate authority or CA.
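
As a rough lab illustration of how a certificate like this might be issued, the commands below generate a key pair and a signing request for a user and then have the certificate authority sign it. This assumes OpenSSL and an existing CA key pair, and the file names and subject are placeholders.

# Generate a key pair and a certificate signing request for the user
openssl req -new -newkey rsa:2048 -nodes -keyout alice.key -out alice.csr -subj "/CN=Alice Example"

# The certificate authority signs the request, producing a certificate that others can trust
openssl x509 -req -in alice.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out alice.crt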

The CA is the central trusted entity for all of these digital certificates. And the CA is usually digitally
signing these certificates when they’re deployed. We can also put these certificates onto smart cards
that can also double as identification cards. We would slide this card into the device that we’re using to
provide authentication and we’re usually providing a personal identification number along with that. Or
if the device we’re using doesn’t have a smart card reader we might want to use a USB key and put the
certificate on the USB drive itself.

During the authentication process, the USB key is plugged in, the certificate is read and usually works in
conjunction also with a personal identification number. If you’re a server administrator, a network
administrator or you work on the security team you’re certainly using secure shell or SSH.

Secure shell allows us to get this command line prompt on these remote devices. But instead of using a
username and password we might want to use public and private keys to be able to provide this
authentication. This will be especially important if we’re doing any type of automation since we usually
won’t be there to type in a password while this script is running.

One challenge with allowing key based authentication is the management of these keys themselves. We
want to be sure that there is a centralized way to be able to manage all of these private keys and that
will allow us to both control the keys and audit the use of those keys. There are many options available
for key management, both open source and commercial.

If you have your own lab and you'd like to use a public/private key pair for SSH authentication instead of
your username and password, it's a relatively simple process. If you've not previously created a
public/private key pair, you can do that by running the ssh-keygen command, which is usually available in
Linux or macOS.

If you've installed the open source package or it's one that's already included in your Linux distribution,
then the ssh-keygen command is probably available on your system. After creating that public/private
key pair, you would then copy the public key to the SSH server using the ssh-copy-id command.

And once you have your public key deployed to the servers that you're connecting to, you only need to
SSH to user@host, and it will log in without any type of password authentication.

Here’s what this process would look like. Let’s first try to SSH to my local server as root. And the server
name is 10.1.10.170. It then prompts me for a password and if I type in that password I now have access
to that server.

Let’s now create a public private key pair. Let’s push our public key to the server and let’s see what
difference that makes during the authentication process. To create the key pair I’ll simply run ssh-
keygen. When I do that, it asks what file I'd like to save the key to; we'll use the default file.

If we want a passphrase, we can put a passphrase in here, or we can leave it empty. I'm going to leave
this one empty; it asks for the same passphrase again and shows me that the identification has been saved
in a particular file on my system. Now I have a public/private key pair, and I can start deploying that
public key to all of the servers that I’d like to use to automate this identity process.

Now that we've created a public key and a private key, let's push the public key to the server. And we'll,
of course, keep the private key private on our machine.

I'm going to use the ssh-copy-id command and run that to root@10.1.10.170. It says that the
source of the key to be installed is the one that I have at id_rsa.pub, and that it's attempting to log in with
those new keys. It asks for a password, so let's put in our password for that server, and it says that the
number of keys added is one.

Now it says try logging on to the machine with that command to see if that key has been added
properly. So let's do that. Let's use ssh root@10.1.10.170. And now I'm logged in to that server
without using any password during the authentication process. Instead of a password it used my public
key on that server to confirm that the private key on my local machine is indeed the correct one. And
using that as the identity it provides me access to the machine.

If someone else tried to use this SSH command it would not authenticate automatically using the public
private key because no one else has access to the private key. That key is only available on my local
machine.
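
To summarize the walkthrough above, the entire process comes down to three commands. The server address here matches the lab example, so substitute your own user and host.

# 1. Create a public/private key pair (accept the default file, optionally set a passphrase)
ssh-keygen

# 2. Copy the public key to the server you want to log in to
ssh-copy-id root@10.1.10.170

# 3. Log in; the key pair is used instead of a password
ssh root@10.1.10.170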

Account Types – SY0-601 CompTIA Security+ : 3.7


There are many different types of accounts. In this video, you’ll learn about user accounts, shared and
generic accounts, guest accounts, service accounts, and privileged accounts.

If you're watching this video on your computer, then you're probably logged in with a user account. A
user account is one that is associated with an individual, a specific person who has been assigned this
account. Your account probably has a name associated with it.

For example, the username I log in with is Professor Messer. There’s also, behind the scenes, a specific
identification number that associates this account with that particular name. And the operating system
usually refers to the number associated with the account, rather than referring to Professor Messer as
that account name.

If you have a computer where multiple users are logging into that same computer, one thing you may
notice is that the files associated with your account are not able to be seen by users of other accounts.
This, of course, is by design. If you’re creating and saving files under your username, you would have to
specifically allow those users access to the files. Otherwise, they won’t be able to see them.

User accounts are specifically designed not to have full access to the operating system. This means that
if malware or some other type of malicious software was running as that user, it would only have limited
access to the operating system and ideally would not be able to make any significant changes to
the underlying OS. By default, this is the type of account that every user should have.

Even if this is someone who is a system administrator or they’re doing some other type of work in IT,
they would still use a user account any time they are not doing anything specific to that operating
system. This limits the amount of damage that could be done to the operating system if something was
to execute as this user account.

A shared account is an account type used by more than one person. If you have a guest log in or an
anonymous log in, this is an example of a shared account. The problem with a shared account is it’s very
difficult to know exactly who performed a particular action on that operating system.

Since everyone’s logging in as guest, you aren’t exactly sure who was logged in as guest when a
particular function occurred. This also makes it difficult to assign the proper permissions to an account,
since so many different people could be logging in with the same account name. And if multiple people
are using the same account, imagine what happens if the password changes on that account.

How do you inform all of those individuals sharing the same account that this password has now been
changed? This makes it difficult for users to keep up with all of the different changes. And what they
may end up doing is writing down the password and storing it somewhere, which, of course, is a very
bad habit when it comes to passwords.

The best practice for shared and generic accounts is to simply not use them at all. Everyone should have
their own personal account associated with them as an individual. And no one else should have
access to their user account.

Guest accounts are shared accounts that, until recently, were a normal part of an operating system
installation. A guest account allows users who don't have a normal user account on that system
to be able to log in and at least have some rights and permissions to the operating system. These
accounts usually don’t have a password associated with them, and they’re highly restricted accounts.

They don’t usually have access to run all of the applications on that system. They have very limited
access to the operating system. And you can’t change or customize many aspects of the user interface.
But even more concerning is that if someone does gain access to your system, it might be possible for
them to run some type of privilege escalation and gain full access to the system. So an attacker could log
in as guest, which has very limited access to the operating system, take advantage of a vulnerability in
that operating system, and now suddenly have full access to the system.

Although there is a convenience factor associated with having a guest account, there are so many
disadvantages from a security perspective that we’ve started to disable this and not use it as a standard
practice. For example, in Windows 10, build 10159, Microsoft completely removed the guest account.
And it’s no longer a default part of the operating system.

A type of account that doesn’t log in interactively and you usually don’t see it running on your desktop is
a service account. As the name implies, a service account is an account type that is only used by
background services in your operating system. So things like a web server, a database server, a print
spooler, and the other services that you have running on your system are using some type of service
account to gain access to the OS.

It’s very common for different services to have different service accounts. This allows you to set specific
rights and permissions for a service that are just for the ability of that service to operate. For example, a
web server can be assigned a certain set of rights and permissions that allow people to store and
retrieve information from the web server directories.

But a database server has a completely different set of files and a completely different set of
permissions required for users to access the database server. It makes sense, in that case, to have two
separate service accounts, one for the web server and one for the database server.

Just like your normal user accounts, these service accounts have a username and password. The
username is assigned to a particular service. And the password is included along with that service
configuration. That means that if there are any password updates on the system, you will also have to
change the password in the service. Otherwise, it won’t be able to log in and operate normally.
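
On a Linux system, for example, a service account is often created as a system account with no interactive login shell. Here's a hedged sketch; the account name and shell path are placeholders and vary by distribution.

# Create a system account for the web server service with no home directory and no login shell
sudo useradd --system --no-create-home --shell /usr/sbin/nologin svc-webserver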

If you’re a system administrator, then you’re probably very familiar with a privileged account. In
Microsoft Windows, administrator is the privileged account. And in Linux, your privileged account is
called root. These privileged accounts have full access to the operating system.

They can change core files. They can modify the kernel of the system. They can manage the hardware
drivers and anything else that may be required to run that system.

The privileged account has access to parts of the operating system that a normal user account does not. And
it’s usually a best practice to use your user account all the time, unless you happen to be performing a
function that a user account simply doesn’t have access to. In those cases, you might switch over to the
privileged account, perform those functions required for that privileged account, and then log out of
that privileged account and back to your user account once that’s done.

A best practice for privileged accounts is to never use one as your normal user account, but instead, only
use it in times when it’s required. Because this account has full access to the operating system, it needs
to be very secure. So we need to provide at least strong passwords and multifactor authentication. We
might also want to schedule password changes so that we know that we always have a password that
has been updated recently for these privileged accounts.

Account Policies – SY0-601 CompTIA Security+ : 3.7

A security administrator has a number of challenges associated with accounts. In this video, you’ll learn
about routine audits, password complexity, account lockouts, and location-based policies.

When an administrator is configuring a user on a system, there are many different policies to consider.
For example, the administrator needs to configure a username and password for that system. But there
are a number of other considerations associated with the account policies as well.

For example, there may be specific password policies on the system. Or we may require additional
authentication if the user is connecting from somewhere outside of the building. Once the user logs in,
there are a number of other policies that have to be considered in order to keep all of the data safe on
the system.

It’s common in most environments to perform periodic audits to make sure that all of the policies that
you’ve configured are indeed being used on the systems. Once you set up a system or a series of
accounts, it’s remarkable how quickly things can change. So it’s always useful to have periodic audits
that are scheduled throughout the year.

And instead of waiting for an audit, there may be a way to have an alert occur if a particularly important
policy isn't followed. You might have a log file analyzer that can go through all of the logs for all of your
systems, and if any high-profile or high-security issues were to occur, you can be informed immediately.

Some of the things to look at during an audit are the permissions that are being used on the system.
Everyone should have permissions that are specific to the job that they're doing, and the permissions
should not go beyond the scope of their particular role.

I’ve worked in some environments where everyone on the network was assigned administrator access.
In those particular cases, an audit would show that the permissions were definitely not set up properly
and that changes would have to be made to everyone's login. This is a process that should occur
regularly, so you might want to set up a one month, three month, and six month checkup for your
network.

Once we have the user policies in place, it would be useful to see how those policies were being
followed. You might want to look at different resources on the network and determine how those
resources are being used. This might also be a good time to look at the applications in use and make
sure that they’re accessing the data in a way that would be considered secure.

The implementation of user passwords on a system is a topic that is of great debate in the industry.
There are many ideas and proposals on what the best password implementation might be. So you may
want to check internally with your organization and see what the best set of policies would be for you.

The idea behind setting these password policies is to have a password that is considered to be strong.
This means that it would resist being guessed by someone, or have someone perform a brute force
attack and be able to reverse engineer what that password is. To accomplish this, we need to increase
the entropy associated with this password. Password entropy is a way to measure just how
unpredictable a password might be.
So we don’t want to use any single words. And we don’t want to use any passwords that might be
obvious, such as the name of your dog. And if you use lowercase letters, uppercase letters, and special
characters all at the same time in the password, it makes it much more difficult to guess and much more
difficult to brute force.

This doesn’t mean that you should start replacing letters with numbers that look very similar. For
example, you would not replace the letter O with a zero or the letter T with the number 7. The
attackers have brute force systems that are already designed to replace these letters with these
numbers because this is such a common thing for people to do. Instead, you want to use a random set
of letters and numbers. And you want to be sure that it’s not something that could be guessed by a third
party piece of software.
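
To make the idea of password entropy a bit more concrete, here is a minimal Python sketch of how entropy is often estimated from the password length and the size of the character pool in use. The pool sizes and the sample passwords are assumptions for illustration only; real strength checkers also look for dictionary words and common substitution patterns.

    import math
    import string

    def estimate_entropy(password):
        # Estimate the character pool size from the character classes in use
        pool = 0
        if any(c in string.ascii_lowercase for c in password):
            pool += 26
        if any(c in string.ascii_uppercase for c in password):
            pool += 26
        if any(c in string.digits for c in password):
            pool += 10
        if any(c in string.punctuation for c in password):
            pool += len(string.punctuation)
        # Entropy in bits: length * log2(pool size)
        return len(password) * math.log2(pool) if pool else 0

    print(estimate_entropy("fluffy1"))       # short, lowercase plus digits: low entropy
    print(estimate_entropy("kT7#pQz9!mWr"))  # longer, mixed character classes: much higher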

The minimum password size is generally considered to be eight characters, but a longer phrase or series
of words is an even better idea for a password. And if you’ve ever been prompted to change your
password and you tried to use a password that was used previously, you may have seen that the
password was not able to be reused. This is a good best practice that prevents somebody from using an
older password that could possibly have already been identified by an attacker.

One way to prevent an attacker from using a live system to perform a brute force attack would be to
implement an account lockout policy. This means that after a certain number of incorrect passwords,
the account is automatically locked. And even if the attacker was to eventually find the correct
password, they would have no idea because the system has already locked that account and made it so
that it would not be accessible.
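
As a rough sketch of that lockout logic, the Python below tracks failed attempts per account and locks the account once a threshold is reached. The five-attempt threshold and the in-memory counters are assumptions for illustration; a real system would persist this state and provide an unlock procedure.

    MAX_ATTEMPTS = 5                 # assumed threshold, not a standard value
    failed_attempts = {}
    locked_accounts = set()

    def record_failed_login(username):
        if username in locked_accounts:
            return "locked"
        failed_attempts[username] = failed_attempts.get(username, 0) + 1
        if failed_attempts[username] >= MAX_ATTEMPTS:
            locked_accounts.add(username)
            return "locked"
        return "failed"

    def record_successful_login(username):
        # Even the correct password is rejected once the account is locked
        if username in locked_accounts:
            return "locked"
        failed_attempts.pop(username, None)
        return "success"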

This is, undoubtedly, the norm for most organizations’ accounts. That way they can be assured that no
one is performing a brute force attack on their live systems. This can be especially problematic if the
attackers are trying to brute force a service account because then the services that rely on that account
could be locked out and you would have to manually go in and re-enable that service.

Unlike a user account, when a service account is locked, it could affect a large number of people. I have
seen situations where an administrator has disabled the brute force protection and account lockout
policy for a service account, but that generally is not considered a good best practice. You probably want
to know when someone is performing a brute force attack. And if someone locks that account, then you
would have at least an idea that particular system is under attack.

It’s also very common if someone leaves the organization or moves to a different part of the company
that instead of deleting their account, we would simply disable the account. This means that no one
would be able to log in to that account, but all of the files and all of the protected information
associated with that account would stay in place on that service. This is especially important if the user
has been encrypting data using their account information. And you want to save those decryption keys
so that the next person who gets that job will have access to the same data.

We can also use someone’s location to be able to set policies on whether they might have access to a
system. This could be done with IP addresses or IP subnetting, especially if it’s on an IP subnet that’s
inside of our building. But once someone leaves the building and they’re outside of our control, we may
not be able to use network location as one of our policies.
Instead, we may want to take advantage of the geolocation for that user. This can use global positioning
system coordinates. It might take advantage of the 802.11 wireless network that someone is connected
to or, in some cases, use the IP address of where someone may be connecting from. Each of these has
different levels of accuracy associated with them, but combined together, you might get a pretty good
idea of where someone might be.

We might combine this location with a policy that uses geofencing. Geofencing allows you to set a policy
on where a user might be physically located. So if someone is inside the building, there may be a set of
policies associated with their account. But if they are outside the building or if they’re in another city,
there may be a completely different set of policies.

And we might also take advantage of geotagging, which is location information that is added to the
metadata of documents that a user is storing. So a user might store an image or they might store a
movie on their mobile device. And within that image or that movie is metadata that has GPS coordinates
of where that user was when they took that picture or made that video.

You can take all of these different location based policies and combine them together into one single
permission. For example, you can check an IP address and see that someone trying to log in is associated
with an IP address block that is assigned to Russia. You would notice that we don’t have an office in
Russia. And perhaps more importantly, that user was in Colorado Springs just an hour ago.

This makes it very difficult for us to believe that this user could now be somehow located in Russia. This
means this user would not be granted permission to log in. And we may have additional security
controls in place because we may believe that this user’s account could be compromised.
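
Here is a minimal sketch of that "impossible travel" idea: compare the distance between two login locations with the time between them, and deny the login if the implied speed is not plausible. The coordinates and the 900 km/h cutoff are assumptions for illustration.

    import math

    def distance_km(lat1, lon1, lat2, lon2):
        # Haversine great-circle distance between two GPS coordinates
        r = 6371
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def plausible_travel(last_login, new_login, max_speed_kmh=900):
        km = distance_km(last_login["lat"], last_login["lon"],
                         new_login["lat"], new_login["lon"])
        hours = (new_login["time"] - last_login["time"]) / 3600
        return hours > 0 and (km / hours) <= max_speed_kmh

    # Colorado Springs an hour ago, then a login attempt from Moscow: deny and alert
    last = {"lat": 38.83, "lon": -104.82, "time": 0}
    new = {"lat": 55.75, "lon": 37.62, "time": 3600}
    print(plausible_travel(last, new))   # False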

You might also combine this with time based policies. There may be, for example, a lab that should only
be used during normal working hours. But if someone tries to access that lab at 3:00 in the morning, we
might have an alert fire to tell us that something unusual is occurring. And we would prevent access for
that user.

Authentication Management – SY0-601 CompTIA Security+ : 3.8


As a security professional, you’ll have many options for managing your passwords. In this video, you’ll
learn about password keys, password vaults, trusted platform modules, hardware security modules, and
knowledge-based authentication.

One type of hardware-based authentication might be something like a password key. This is a physical
device that you would plug into a USB interface that would allow you access to a system and be used as part
of the authentication process. This would prevent someone else from logging into your account, even if
they had your username and password. They would not be able to complete that authentication,
however, because they do not have this physical password key to plug into their system.

And like most things associated with the authentication process, we wouldn’t simply use this single form
of authentication. We would still want to use some other types of authentication along with this. So we
might use a username, a password, perhaps use this password key, and then maybe also use a personal
identification number that’s associated with this password key. That way, someone couldn’t steal your
key and gain access to the system. They would still need that additional authentication factor to gain
access.
One thing you don’t want to do is to use exactly the same username and password for all of the different
systems that you might authenticate with. Unfortunately, it’s difficult to remember a different password
for every single system. In those particular cases, you might want to use a password vault. This is a
password manager that allows you to store all of your passwords in one central secure area. And then
you would be able to set different passwords for every single location you logged into.

The core database of this password manager would all be encrypted data. So even if somebody gained
access to your password vault, they still would not be able to see any of the passwords that you use.
There are often cloud synchronization options available with the software so that you could set up
passwords, encrypt them on your local machine, and that encrypted information would be shared in the
cloud. This would allow you to access those passwords from wherever you happened to be. And the
passwords themselves would all be stored safely in the cloud.
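
A minimal sketch of that core idea is below: a key is derived from a single master password, and each stored entry is encrypted with it, so only ciphertext would ever need to be synchronized to the cloud. It assumes the third-party Python cryptography package, and the entry fields are illustrative.

    import base64
    import json
    import os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def derive_key(master_password, salt):
        # Derive the vault key from one master password
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(master_password.encode()))

    salt = os.urandom(16)
    vault = Fernet(derive_key("correct horse battery staple", salt))

    # Store one entry per site, each with a different password
    entry = json.dumps({"site": "example.com", "user": "james", "password": "kT7#pQz9!mWr"})
    token = vault.encrypt(entry.encode())     # this ciphertext is what gets synced

    print(json.loads(vault.decrypt(token)))   # only the master password recovers it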

With so many breaches occurring on so many different sites, it’s now very easy for an attacker to be able
to gather usernames and passwords. If you’re able to change the password on every single site that you
access, having access to one single site’s authentication would not allow someone access to your
account on a different site. And of course, there are options to use these password vaults not only for
personal use but also in a corporate environment, so that administrators would have access to all of the
passwords you would use for business purposes.

If you’re using any advanced cryptography on your system, especially if you’re doing full-disk encryption,
then you’re probably using a Trusted Platform Module or TPM. This is a feature that’s either part of the
motherboard that you’re using, or it might be a module that you can add to the motherboard. This is
going to provide you with additional secure cryptography functions to be able to create random
numbers or key generators from this Trusted Platform Module.

These TPMs often have keys that are burned into the TPM that can’t be changed. That means if you do
see this key in use, you know exactly what TPM it’s associated with. We can also securely store keys on
this TPM. And that storage is protected from any type of brute-force attack.

If you’re managing a large number of servers that are using encryption, then you need some way to
centralize the management of all of these different keys. One way to do that is to use a Hardware
Security Module or HSM. This is usually a server like the one we see here. But it usually has specialized
hardware inside that allows it to perform cryptographic functions very, very quickly. This means this
HSM can be used for centralized storage of all of our encryption and decryption keys.

And we also have accelerators inside of this device that can offload the encryption and decryption
process from our servers and instead perform that function inside of this specialized hardware. It’s
common to see these HSMs used in very large environments. And because of that, we’re going to need
redundancy for these keys as well. So we might have multiple HSMs. And each one of those may have
redundant power supplies to maintain the uptime and availability of all of our HSMs.

During the authentication process, you may find that you’re asked for some very specific information
that only you might know. This is called Knowledge-Based Authentication or KBA. You may find two
different kinds of KBA. One is a static KBA. And the other is a dynamic KBA. Static KBA is some type of
secret that we’ve previously configured in our system. This is usually used to change a password or
recover an account on a system. You might be asked a question that was previously configured when
you originally made the account. For example, you may be asked what the make and model of your first
car is. And you would have to answer that correctly in order to perform that account reset.

A dynamic KBA might be used for a similar purpose. But the question that’s being posed to you is not a
question that you previously configured in the system. This uses an identity verification service. And it
may pull information from public records or from private information in order to pose a question to you.
For example, it might ask you what the street number was when you lived in a particular house at a
particular location. And only you may be able to answer that question, or answer it within a very short period
of time, which would allow you access to perform this reset.

PAP and CHAP – SY0-601 CompTIA Security+ : 3.8


Authentication protocols have been used for many years in IT security. In this video, you’ll learn about
the authentication process and the differences between PAP and CHAP authentication.

There are many different ways to provide authentication to a network. And in this video, we’ll look at
two very common methods. One is called PAP and the other is CHAP. This is a common problem that
needs to be solved. You have a client who’s outside of the building. They are accessing a VPN
concentrator that is part of your organization so that they can then gain access to an internal file server.

But before they’re allowed access to that internal file server, they first need to authenticate. They’re
going to send a request through the internet to the VPN concentrator to log in.

The VPN concentrator doesn’t have any information about usernames and passwords. So it passes that
request down to a AAA server. This is a server designed to provide authentication, authorization, and
accounting. And it’s going to provide a way to check a username and password to see if it’s valid.

Once it performs that check, it’ll send a message back saying those credentials have been approved or
disapproved. In this case, the correct username and password were provided, and the user’s request is
then sent on to the internal file server.

One way to provide that authentication between the VPN concentrator and the server is a very common
protocol known as PAP. This is the password authentication protocol. It’s an extremely basic method to
provide this authentication process. And if you’re using some relatively old operating systems or systems
that were designed for some very simple authentication they’re probably using PAP.

One problem with the password authentication protocol is that it sends all of this information through
the network in the clear. There’s no encryption built into PAP that provides a way to protect the
username or the password. To say that this is a weak authentication scheme may be a little bit of an
understatement because there is no encryption being used for that password exchange process.

This is because PAP was originally designed before we had these internet connected networks. Instead, we
were using dial-up analog lines where there were only two devices on that connection: the client and the
server.

What you commonly see with implementations of PAP today is that the application performing the
authentication sends the username in the clear, but the application will provide the encryption of the
password and be able to send that through a PAP connection without too much worry about that
password being seen by others.
Here’s how this very simple password authentication protocol works. We have a username and
password. Our username is James, our password is password111. And we’ve got a client and a server.
The request will be made from the client to the server saying the username is James and the password
is password111, sent in the clear using PAP. The PAP server will authenticate the username and
password and send a message back to the client saying the username and password check out and you are
now allowed access to the network.

Somewhat of a next step up from PAP is the Challenge Handshake Authentication Protocol or CHAP. This
is going to provide an encrypted challenge sent across the network. So this does add additional security
over what you might find with PAP.

CHAP has a three-way handshake that occurs. Once there is a link the server is going to send the client a
challenge message. That challenge message is going to be combined with a password hash and sent
back to the server where it will evaluate the password and the challenge to be able to see if that
matches what’s expected.

This challenge response process is not only something that is at the beginning of the authentication
process. But it may occur multiple times while that session is active. The end user never sees that this
additional handshake is occurring but this is something that can occur periodically while the session is
active.

Let’s take the same scenario with the Challenge Handshake Authentication Protocol with the same
username of James and the same password of password111. And we have the same scenario where we
have a client and a server. The client is going to send the request saying that they would like to login
with the username James. Of course, this server already knows that there is a user named James and it
knows the password for that particular user.

The server is going to take that password and combine it with a challenge. And it will send that challenge
across the network back to the client. The client will then perform exactly the same combination of the
password and the challenge that the CHAP server has already calculated. It’s then going to send back a
response to that particular password and challenge. And the challenge response hash is sent over the
network to the CHAP server.

The CHAP server then does its own calculation of the password and the challenge to see if it arrives at the
exact same response. Notice that with CHAP, we’re not sending the password in the clear across the
network; we’re sending either a challenge or a response to that challenge. And neither of those contain
the actual password.
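
To illustrate that exchange, here is a small Python sketch of the CHAP response calculation. RFC 1994 specifies the hash as MD5 over the one-byte packet identifier, the shared secret, and the challenge; the identifier and challenge values below are made up for the example.

    import hashlib
    import os

    secret = b"password111"
    identifier = bytes([1])          # one-byte packet identifier
    challenge = os.urandom(16)       # random challenge sent by the server

    # The client computes the response without ever sending the password itself
    client_response = hashlib.md5(identifier + secret + challenge).digest()

    # The server performs the same calculation with its stored copy of the secret
    server_expected = hashlib.md5(identifier + secret + challenge).digest()

    print(client_response == server_expected)   # True -> authentication succeeds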

There’s a version of CHAP called MS-CHAP. That stands for Microsoft CHAP, and it’s used commonly with
Microsoft’s Point-to-Point Tunneling Protocol or PPTP. The most recent version of MS-CHAP is referred to
as MS-CHAP V2.

Unfortunately MS-CHAP is a very old implementation of security. It uses the data encryption standard
for encryption. And that is a very weak type of encryption, which makes it very easy to brute force the
relatively small number of possible keys that could be used during this connection. For that reason, we
commonly do not use MS-CHAP or MS-CHAP V2 any longer. Instead, we prefer to use L2TP, IPsec,
802.1X or some other method of secure authentication.
Identity and Access Services – SY0-601 CompTIA Security+ : 3.8
As a security professional, you’ll need to use different authentication methods in different situations. In
this video, you’ll learn about authentication using RADIUS, TACACS, Kerberos, and 802.1X.

One of the more common authentication authorization and accounting protocols is the RADIUS
protocol. RADIUS is the Remote Authentication Dial-in User Service. And although it has dial-in in the
name, it’s also very common to use RADIUS on a local area network, or wide area network as well.

This is a very common way to centralize authentication for your users. So if someone is logging in to the
network, they’re logging into a VPN concentrator, or they’re trying to authenticate to a switch or router,
they could use RADIUS to be able to authenticate the username and password.

This is a very common authentication type to use. There are RADIUS services available for practically any
operating system, and that’s why you’ll probably find RADIUS running somewhere in most enterprise
networks.

As an alternative to RADIUS, you might use TACACS. TACACS is the Terminal Access Controller Access-
Control System. It is a remote authentication protocol. And again, this was a type of authentication that
was originally built when we were using dial-up lines.

Cisco found that this was a very useful authentication method, and updated it into a new version called
Extended TACACS, that provided additional support for accounting and auditing.

When you see TACACS in an environment today, it’s probably using the latest version of TACACS, which
is TACACS+. This was an open standard that was released in 1993. And although TACACS+ is not a Cisco
specific protocol any longer, it’s still very common to see Cisco devices using TACACS+ for
authentication.

A more complex, but more robust, authentication method would be Kerberos. This is a type of
authentication system that is able to use single sign on. Which means we can authenticate one time, and
after that point, we are trusted by the system.

This means we can access different file shares during the day, we can print to different printers during
the day, or access other resources on the network, and we don’t have to keep putting in our username
and password. Kerberos remembers that we authenticated properly at the beginning, and is able to
authenticate us throughout the day automatically.

Unlike RADIUS or TACACS, Kerberos also provides mutual authentication, which means you’re not only
authenticating to the server, the server is also authenticating to you, so that both sides know exactly
who they’re talking to.

With this mutual authentication, we can avoid any type of replay attack, or any type of on-path or man-
in-the-middle attack.

Kerberos has been around for a very long time. It was created in the 1980s by MIT. And you’ll find that
Kerberos has been integrated into Windows since the year 2000. This is based on an open standard of
Kerberos, called Kerberos 5.0, and it works not only with Microsoft Windows, but any other operating
system that is written to this open standard.
You often see Kerberos described as a ticketing system. That’s because the cryptography that is used is
referenced as a cryptographic ticket.

When you authenticate to a ticket-granting service, which would be your centralized authentication
server, that ticket-granting service gives you a service ticket. And then, instead of having to put in a
username and password every time you access a different resource, you simply show the service
ticket. That device recognizes that you were properly authenticated by the ticket-granting service and
then provides you access to those services, without going through the process of re-entering a username
and password.

This saves you a lot of time during the day because you don’t have to keep putting in a username and
password every time you access yet another resource. But it only works if those devices are compatible
with Kerberos. Not everything is compatible with Kerberos, so you may find that some of the devices
you’re authenticating to cannot use this Kerberos functionality.

There are other methods that can provide single sign-on, such as SAML, or smart cards, or even cloud-
based single sign-on services, but Kerberos is certainly one of the most popular you might find.

It sounds like we have three different ways to authenticate that are very similar to each other, with
only minor differences in functionality. So you may wonder, which one of these should you be using?

Should you be using RADIUS, TACACS+, or Kerberos? The answer usually depends on what you’re
connecting to, and what is supported by that device that you’re connecting to.

For example, you may have a VPN concentrator that only knows how to authenticate to a RADIUS
server. So you might use RADIUS for that particular service. You might have other network
administrators that are authenticating to a Cisco switch or a Cisco router, and perhaps they’d like to
have their own authentication methods that are outside the scope of what you would use elsewhere on
the network. So they may set up a TACACS+ server just for their Cisco authentication.

And if you’re on a Microsoft network, then by default, you’re using Kerberos. And you may find that,
throughout the day, you may be using all of these different methods, depending on exactly what service
you happen to be using.

Another type of access control is Network Access Control. This means you can prevent people from
accessing the network until they’ve gone through this specific authentication method. This is called
802.1X, sometimes referred to as port-based Network Access Control, or very simply, NAC.

It’s common to see 802.1X used with wireless network authentication, but 802.1X can also be used for
wired authentication as well. We often integrate 802.1X with EAP. This is the Extensible Authentication
Protocol, which is a framework that can be used for many different types of authentication protocols.

And on the back end, we probably have a RADIUS server, an LDAP server, a TACACS+ server, a Kerberos
server, or any other type of authentication service.

When the user first tries to connect to the network, 802.1X will stop that connection and ask for credentials.
The user will provide that username, password, and any other authentication credentials, and then those will
be checked against these databases on the back end to make sure that the user has the proper access. And
if all of that authenticates properly, the user then can access the network.
Federated Identities – SY0-601 CompTIA Security+ : 3.8
Instead of maintaining your own authentication database, you might use one from a third-party. In this
video, you’ll learn about federation, SAML, OpenID Connect, and OAuth.

Federation is a way that you could provide access to your network using credentials that someone uses
for a completely different service. This can be done for users that are on your local network or you could
use this for third party individuals such as partners or customers to be able to gain access to your server.

You’ve probably seen a login screen like this before that isn’t asking you to log in with credentials
that you’ve previously created on that site. It’s asking you to log in with credentials you may already have
at Twitter, Facebook, LinkedIn, or other locations. This means that you can use authentication
credentials that you already use and maintain without having to recreate additional login credentials for
the site.

Of course, the site you’re visiting and these third party sites need to already have a relationship and a
trust built between them. And there’s usually a number of configuration steps so that third parties can
use these authentication methods for users to login to the local site.

There have been a number of different standards used through the years to allow someone to
authenticate and authorize someone to access a third party set of resources. One of these standards is
SAML. This is the Security Assertion Markup Language and it was designed to provide both
authentication and authorization for users to access third party resources.

This one standard provides a great deal of functionality unless of course you’re a mobile app. SAML was
never designed to be used for mobile applications where someone might be accessing a resource from
multiple devices simultaneously. And for that reason you may see the use of SAML declining as the years
go on.

The SAML process is relatively straightforward. Let’s say you’re here in a browser needing to access a
third party site which we’ll call our resource server. But we want to authenticate to that resource server
using credentials that exist on a completely different service. And that service provides a way to
authorize those credentials through an authorization server.

So how do we get all three of these resources to talk to each other? Well, the first thing we’re going to
do is try to access the application. So we’ll go to our browser and put in the website to gain access to
that particular site. This site says that we have not previously authenticated. So it sends us back a signed
and encrypted SAML request and says, for you to be able to log in, you need to send this request to the
authorization server.

We’ll then communicate with the authorization server with our login credentials and will include this
signed and encrypted SAML request. If our username and password are correct, that authorization server
will send us back a successful notification and will include a SAML token with that response. We can now
present that SAML token to the original third party website. And now that the site sees that we have a valid
token, it can allow us access to that service.

A more common way to provide authentication and authorization for our mobile devices and our
browsers is to use some newer types of protocols. These newer standards consist of OpenID Connect
and OAuth. OAuth is a framework that allows us to control what types of resources a third party
application may be able to access. This framework was created by Google, Twitter, and a number of
other companies. And it has very broad support in the industry.

Unlike SAML, which provided authentication and authorization, OAuth is usually used in conjunction
with OpenID Connect. So OpenID Connect is providing all of the authentication functionality. And then
OAuth is determining what types of data are accessible by that third party app once the authentication is
complete.

This is a very popular framework and you’ll see it used on Google, Facebook, Twitter and many, many
other sites. You’ve probably seen a message like this when you’ve wanted to have one particular
application share data from a completely different site. This is an example of a site called Zapier that
allows me to automate a number of backend processes. And I want Zapier to be able to modify
information that’s in my Google Drive directory.

Well obviously Zapier is not Google and so there needs to be some type of authorization between the
Zapier website and the Google website. So I’ve told Zapier that, I’d like to begin the process of allowing
it to see, edit, create or delete files that are in my Google Drive. And then I can decide whether that
particular application has access to my Google account. And how much access it happens to have.

This means that I can set up this trust between these two applications without having to hand over any
of my login credentials for either of these sites. And at any time if I would like to remove this particular
link between these sites I simply remove the key and it no longer has access to Google Drive.
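
To make the flow a little more concrete, here is a rough Python sketch of the OAuth 2.0 authorization code flow that sits underneath a consent screen like Zapier’s. The endpoint URLs, client ID, redirect URI, and scope names are placeholders rather than real values, and error handling is omitted.

    import json
    import urllib.parse
    import urllib.request

    AUTH_ENDPOINT = "https://authorization-server.example/authorize"   # placeholder
    TOKEN_ENDPOINT = "https://authorization-server.example/token"      # placeholder
    CLIENT_ID = "my-client-id"
    REDIRECT_URI = "https://my-app.example/callback"

    # Step 1: send the user's browser to the authorization server to log in and
    # approve the requested scope (the app never sees the user's password)
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile files.readwrite",   # illustrative scope names
    }
    print(AUTH_ENDPOINT + "?" + urllib.parse.urlencode(params))

    # Step 2: the browser returns with ?code=..., which the application
    # exchanges for tokens that grant only the access the user approved
    def exchange_code(code, client_secret):
        data = urllib.parse.urlencode({
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID,
            "client_secret": client_secret,
        }).encode()
        with urllib.request.urlopen(TOKEN_ENDPOINT, data=data) as resp:
            return json.loads(resp.read())   # contains the access token (and ID token)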

Access Control – SY0-601 CompTIA Security+ : 3.8


Access control is a fundamental aspect of IT security. In this video, you’ll learn about mandatory,
discretionary, role-based, attribute-based, and rule-based access control.

Once a user gains access to a service, we now need to determine what type of resources that user
has access to. That process is authorization. It’s part of the access control that we would normally
configure using some type of policy enforcement process that’s built into an application or an operating
system.

Of course, prior to configuring this policy enforcement, we need to go through some process to
determine who gains access and what type of access they get. This would be part of our policy
definition, and it’s something that we can use to then assign any type of enforcement process. And
there are many different ways to configure access control in an application or an operating system. And
you’ll find that different businesses and different applications have different requirements on how that
access is provided.

If you work for a highly secure organization or some type of government, then you may be using
mandatory access control or MAC. This requires you to configure separate security clearance levels and
then associate objects in the operating system with one of those security levels. This means that every
object that you’d be working with– this could be a spreadsheet, a presentation, it could be a word
processing document– but each one of these objects gets a security label. So we would assign these
objects with labels such as confidential, secret, top secret, or perhaps others as well.
This means that we would assign a user with a minimum type of access. The administrator configures a
particular user to have a particular access. For example, we could assign a user with secret access. The
users don’t get to change this type of access. This is an access that is defined by the administrators of
the site. Now that this user has secret access, they may be able to access objects that are labeled
confidential or objects that are labeled secret, but they would not be able to access objects that are
labeled top secret.
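
A tiny sketch of that evaluation might look like the Python below, where the labels are ordered levels and a user may access an object only if their clearance is at or above the object’s label. The labels mirror the examples above; everything else is illustrative.

    # Ordered security labels, as described above
    LEVELS = {"confidential": 1, "secret": 2, "top secret": 3}

    def can_access(user_clearance, object_label):
        # Access is allowed only when the clearance is at or above the label
        return LEVELS[user_clearance] >= LEVELS[object_label]

    print(can_access("secret", "confidential"))   # True
    print(can_access("secret", "top secret"))     # False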

If you’re a user of Microsoft Windows, then you may be familiar with discretionary access control or
DAC. This means that you would create an object, and you, as the owner of that object, would assign
rights and permissions to it. So you may create a spreadsheet, and then in that spreadsheet, you might
decide that an individual or particular group in the organization has access to the spreadsheet, and that
access may allow them to modify that spreadsheet. Or we might set permissions on that spreadsheet so
that one group in the organization has read-only access, and the other group in the organization is able
to make changes.

This means the person that created or owns that spreadsheet has complete control over who has access
to it. And although that does provide a lot of flexibility for access control, it requires the owner to be
responsible for the security of that data, and in some cases, that may be considered very weak
security and something that may not be the best security option in many organizations.

In many large organizations, we use an access control type called a role-based access control or RBAC.
This is associated with the role that an employee might have in that company. So this might be a
technician. It might be a manager. It could be someone responsible for a particular project. And they
have been assigned rights and permissions based on their role.

The administrator of the system or the network is the person that would assign these particular access
control rights. This means if someone is a manager in the organization, then they are assigned all of the
rights and permissions that a manager should have. In Windows, we manage this role-based access
control through the use of groups.

So we might have a group for someone who works in shipping and receiving, and we might have another
group for someone who is the manager of the people that work in shipping and receiving. This means
that someone who works in shipping and receiving might have access to the application that allows
them to send and receive packages. But the manager of those people has a different role. They are an
employee of shipping and receiving, so they would have access to the shipping software, but they also
have enhanced access– since they are the manager– that allows them to review any of the logs
associated with that shipping software.
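
Here is a minimal sketch of that role-based check in Python, mirroring the shipping-and-receiving example; the group and permission names are made up for illustration.

    # Each role (group) maps to a set of permissions
    ROLE_PERMISSIONS = {
        "shipping":         {"use_shipping_app"},
        "shipping_manager": {"use_shipping_app", "review_shipping_logs"},
    }

    # Each user is assigned one or more roles by the administrator
    USER_ROLES = {
        "alice": {"shipping"},
        "bob":   {"shipping", "shipping_manager"},
    }

    def has_permission(user, permission):
        return any(permission in ROLE_PERMISSIONS[role] for role in USER_ROLES[user])

    print(has_permission("alice", "review_shipping_logs"))  # False
    print(has_permission("bob", "review_shipping_logs"))    # True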

With attribute-based access control, we can define a number of different criteria that have to be
evaluated that would then allow someone access to a resource. This allows the system administrator to
define a number of different parameters. And then as the user tries to access those resources, each one
of those parameters is checked and evaluated.

This means if a user tries to access a spreadsheet, the system will evaluate what type of resource they’re
trying to access. It will understand what IP address they’re trying to access this resource from. It may
check the time of day to see if access to this resource is allowed in that particular time frame. It can see
what type of action the user wants to perform to that particular object. And it may perform a check to
see what the relationship of that user might be to that data. Once all of those different parameters are
evaluated and the user meets all of the parameters that were previously defined, then they would have
access to that resource.
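
As a rough sketch, an attribute-based check might look like the Python below, where every attribute of the request has to satisfy the policy before access is granted. The specific attributes, network prefix, and working hours are assumptions for illustration.

    from datetime import time

    policy = {
        "resource_type": "spreadsheet",
        "allowed_network": "10.1.0.",          # simple prefix check for the example
        "allowed_hours": (time(8, 0), time(17, 0)),
        "allowed_actions": {"read", "edit"},
        "required_relationship": "project_member",
    }

    def abac_allows(request):
        # Every parameter must be satisfied before access is granted
        start, end = policy["allowed_hours"]
        return (request["resource_type"] == policy["resource_type"]
                and request["source_ip"].startswith(policy["allowed_network"])
                and start <= request["time_of_day"] <= end
                and request["action"] in policy["allowed_actions"]
                and request["relationship"] == policy["required_relationship"])

    print(abac_allows({"resource_type": "spreadsheet", "source_ip": "10.1.0.33",
                       "time_of_day": time(14, 30), "action": "edit",
                       "relationship": "project_member"}))   # True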

Another type of access control is a rule-based access control. And this is more of a generic term that can
be applied across many different operating systems or different ways to allow someone access to a
resource. With rule-based access control, the system administrator is setting the rules. The users do not
get to define whether someone might have access to a particular object or not. The rule is generally
associated with the object that they’re trying to access.

So if somebody is trying to gain access to a network, then the rules are going to be associated with that
particular network. Or if someone needs access to a spreadsheet, the rules are specific to that particular
spreadsheet. For example, if someone’s trying to log into a lab, there might be a rule-based access
control based on the time of day. So they would only be able to access those resources between 8:00
AM and 5:00 PM. Or perhaps there is a web form that someone’s trying to fill out, but they’re only able
to see and complete that form if they happen to be using a specific type of browser.

There’s also a great deal of access control built into the operating systems that we use every day. When
we’re storing files, we need to have some way to define who would have access to that file that’s stored
on our system. So this might be stored on a hard drive, an SSD. It could be a flash drive that we plug in.
And this is something that is commonly built into an operating system so that some users would have a
certain set of rights, and other users would have a completely different set of rights.

Generically, we refer to this as an access control list, but you may find that in Windows, it’s a list of user
rights or a list of groups, and then you’re assigning permissions to those users or groups. This could be
something that is centrally managed through group policy on a Windows network, or it may be that the
owner of the object has control to make those changes themselves in the file system of the OS. And in
the case of NTFS that is used by Microsoft Windows, we can have the file system perform additional
security, such as encryption and decryption, as part of the file system itself. We don’t have to install any
additional software or have additional plug-ins. It is simply built into the operating system that we’re
using.

One of the challenges we have with cloud-based systems is that people can access our resources from
anywhere in the world, and we may have these resources changing and moving all the time. So we need
a type of access control that is up to date with these more modern ways to access our resources in the
cloud. To be able to do that, we use conditional access. This allows us to set certain conditions. We may
check to see whether someone is an employee or whether they’re part of a third-party organization. We
might also check what location they happen to be located in or what type of application they’re trying to
access.

Once we know this condition, we can apply certain controls to that. So if it’s an employee, we may
provide more access to a particular file; or if it’s a partner, we may require multifactor authentication
during the login process; or if this is an employee and they happen to be in a different country, they may
have a different type of access to that file. Many cloud services include this type of conditional access as
part of their system. And if you’re the system administrator, you can build some very complex access
rules so that you can customize exactly the type of security you would like to have over your data.
So far, we’ve talked about how users have access to applications and data. But we also have to be
concerned about how administrators have access to applications, data, and the underlying operating
system. To be able to manage this process, we use privileged access management or PAM. This is a
centralized way to be able to handle elevated access to system resources. And if you are in a large
organization with many different administrators, you may want to consider using privileged access
management.

If you’re using this type of access control, then administrators to a system don’t automatically have
administrator rights. They would need to access a centralized digital vault, and then that privileged
access is then checked out to them to be able to use. These privileges only last for a certain amount of
time, and then they’re revoked by the system.

This gives us much more control over what someone with administrator access may be allowed to do.
The first advantage is a centralized password management function, so that if you do need to change the
administrator passwords, you can do that in one central place. There’s also the advantage of being able
to automate services that need administrator access, and we can do that by using this privileged access
management. This would then allow us to manage this administrator access for each individual
administrator on the system. And then we can log and audit for anyone who may be assigned these
particular administrative rights.

Public Key Infrastructure – SY0-601 CompTIA Security+ : 3.9


It takes many different components to build a successful PKI. In this video, you’ll learn about the key
management lifecycle, digital certificates, certificate authorities, and more.

PKI or Public Key Infrastructure, is the process of managing practically every aspect of digital certificates.
This covers the policies and procedures, the hardware and software, behind these digital certificates,
and of course the entire digital certificate process starting with the creation all the way through the
revocation.

Creating a public key infrastructure for your organization is not a trivial undertaking. There are a lot of
questions that need to be asked, and this is something that usually involves a large number of the people
in the organization. If you’re working with PKI on a daily basis, most of your work is going to be about
creating certificates and associating those certificates with users or devices. This is something that is
centrally located in the certificate authority or the CA.

Since the PKI creates the foundation of trust for all of these certificates, it becomes a very central and
very important part of your infrastructure. If you’re managing the PKI for your organization, then you’ll
have a number of different responsibilities. For example, you’ll be responsible for creating the keys,
those keys will have a particular strength and use a particular cipher.

You’ll also be responsible for generating the certificates which associate these keys with a particular
user. And then you’ll need to distribute those keys safely and securely to their users. Those keys will
need to be stored somewhere and of course, we’ll need to make sure that they are stored safely and
that they are used appropriately, and there may be times when a key has to be revoked and it will be
the responsibility of the PKI administrator to manage that revocation process.
We usually create these keys with a particular expiration date, and once the expiration date is met,
you’ll have to begin the process again with creating new keys. We often talk about distributing digital
certificates to users and to devices on our network and those digital certificates are commonly a public
key that is combined with a digital signature. Usually the digital signature is from the certificate
authority.

These digital certificates might also contain additional information that could describe more
characteristics for that particular user or device. We mentioned earlier that the foundation for this
public key infrastructure, is the trust we associate with these certificates. The only way that we can trust
that certificate is valid, is to evaluate its digital signature, and since that digital signature is coming from
the certificate authority, that certificate authority is the central point of trust.

There are other ways to associate trust with these digital certificates. One way would be through a web
of trust. If you don’t have a central certificate authority, then it’s up to the users of these certificates to
vouch for each other and digitally sign each other’s certificates.

There are many different ways to create these digital certificates, and if you’re building out a certificate
authority, you’ll find that Windows domain services has the certificate creation process built into the
operating system itself. If you’re using other operating systems like Linux or Mac OS, there are many
third party options available as well.

Your browser or operating system, will maintain a list of all of the trusted commercial certificate
authorities. And if you bring up that list, you will see there are hundreds of CAs that are part of the
operating system or part of the browser you happen to be using.

These commercial certificate authorities, allow us to purchase a certificate from them, that is then
trusted by all of these browsers or all of these operating systems. This is usually done by building a key
pair locally on your machine. You would then provide the public key to the certificate authority, the
certificate authority would go through a number of steps to confirm that you are indeed the person
making the request, and then they will sign that particular certificate.

When you’re providing that public key to the certificate authority, we refer to that as a certificate
signing request or a CSR. Once the certificate authority signs that certificate, they send it back to you
and now you can put that certificate on your server. When people connect to your server, they can not
only see that there is a certificate with your name but it has been signed by one of these commercial
CAs and therefore it can be trusted.
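
As a sketch of that process, the Python below builds a key pair locally and creates a certificate signing request to send to the CA, using the third-party cryptography package. The names and hostnames are placeholders; the private key itself never leaves the machine.

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Generate the key pair locally; only the public key goes into the CSR
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com"),
            x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Corp"),
        ]))
        .add_extension(
            x509.SubjectAlternativeName([x509.DNSName("www.example.com"),
                                         x509.DNSName("example.com")]),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )

    # This PEM-encoded CSR (public key plus requested attributes) goes to the CA
    print(csr.public_bytes(serialization.Encoding.PEM).decode())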

If all the applications and services that you’re using are done in-house, and no one external will be
connecting to them, you could become your own certificate authority and sign your own certificates. It’s
certainly much easier to build your own certificates internally and sign them yourself, than sending them
out to a third party, and it’s also less expensive because we don’t have to pay for a third party to do
that.

Having an internal CA is almost a requirement if you’re a medium to large scale organization with
hundreds of different servers and you need to provide signed digital certificates for every single one of
those servers. There are many ways to manage this process internally and be your own certificate
authority. In Windows, you have Windows Certificate Services, and in Linux and other operating
systems you can use software such as OpenCA.
In the simplest infrastructure, there is a single certificate authority and that single certificate authority
signs the certificates for everyone in the organization. But in most organizations there is a hierarchy of
certificate authorities. You can have a root CA, there might be intermediate CAs just underneath, and
underneath an intermediate CA would be what we call a leaf CA. This distributes the load of managing
the certificates across multiple CAs, and perhaps even across different parts of your organization.

This also makes it easier if a particular certificate authority is compromised and you need to revoke all
of the certificates that that CA has signed. You can remove one of those leaf CAs, and the intermediate
and root CA would still remain valid.

In large organizations, you might have someone managing the certificate authority, and there may be
another group that handles the registration authority or RA. The registration authority goes through the
process of identifying who the requester happens to be, they perform some validation of that requester,
and then ultimately decide if that certificate should be signed.

This is a critical step when we’re working with certificate authorities because everything is based on the
trust of that CA. And if somebody is performing additional checks and balances, then you have a
stronger level of trust for that signed certificate. The RA may also be responsible for revocations.

So if a certificate is deemed to have been compromised, the RA can revoke that certificate and make
that certificate invalid for anyone connecting to that server. And eventually, the certificates that have
been assigned, will expire and need to be reassigned or recreated and the registration authority can be
responsible for that as well.

So what’s inside one of these certificates? If you’re connected to a website right now, you can click the
lock that’s in your address bar, and you can look through the certificate and all of the different
characteristics. This is the certificate from professormesser.com. It was issued by CloudFlare, and it has
an expiration date at the top, and the browser has already done its checks and determined that this
certificate is valid.

There are a number of other attributes that are inside of this certificate and you can scroll through the
certificate and see all of the different settings that can be used by the browser or other clients that are
connecting to the server.

An important attribute on the certificate, is the common name or CN. This is the fully qualified domain
name associated with the certificate. So on my site it would be professormesser.com or
www.professormesser.com.

If you’re connecting to a site and that common name in the certificate doesn’t match the name that you
put into your browser or address bar, you get a message like this saying your connection is not private. If
you’d like to see some of these error screens in your browser, you can go to badssl.com, where there are a
number of different error messages, so that you can get an idea of what you can expect to see in your
browser if any of these errors were to occur.

You can assign a single common name to that CN attribute, but in my particular case, my website could
be professormesser.com or www.professormesser.com. In that case, I want to add some additional
names, and we can do that through the subject alternative name attribute, or the SAN. This allows me to
set additional hosts, and you may find with some sites that there are many subject alternative names
associated with a single certificate. If you look at the one for professormesser.com, there are generally
only two listed there: professormesser.com and www.professormesser.com.
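
If you’d like to see those attributes programmatically rather than clicking through the browser, here is a small sketch that pulls a server’s certificate and prints the common name, the subject alternative names, and the expiration date. It assumes the third-party cryptography package and fetches only the leaf certificate, without validating the chain.

    import ssl
    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID, NameOID

    # Retrieve the server's certificate in PEM form and parse it
    pem = ssl.get_server_certificate(("www.professormesser.com", 443))
    cert = x509.load_pem_x509_certificate(pem.encode())

    cn = cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
    san = cert.extensions.get_extension_for_oid(
        ExtensionOID.SUBJECT_ALTERNATIVE_NAME
    ).value.get_values_for_type(x509.DNSName)

    print("Common name:", cn)
    print("Subject alternative names:", san)
    print("Valid until:", cert.not_valid_after)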

There are times when these certificates will expire and if you connect to a site that has an expired
certificate, you’ll see a message that says the cert date invalid. This means that this particular certificate
has expired on that server, and the owner of that site would need to renew the certificate and update
that on their servers.

It used to be very standard to create web server certificates that might last as long as three years, but
through the years we’ve decided to shorten that time frame to limit the amount of time that a particular
certificate might be compromised.

So in your browser, there is a maximum certificate lifetime that is allowed of about 13 months, or 398 days.
This means that any certificate you create for your website will need to be valid for a maximum of 13
months or a smaller amount of time. If you go to professormesser.com, you’ll find that my
certificates are only valid for about 1 to 3 months.

There may be times when a site is compromised and the certificates on that site need to be revoked
before they expire. We can provide this key revocation through a number of different means. One of
these is a certificate revocation list or CRL.

This is a large list of revoked certificates that is stored at the certificate authority. And usually this is a
single large file that is stored on the CA. I was looking at the certificate on the CompTIA website, and you
can see that the CRLs are listed as one of the attributes in their certificate. I downloaded one of these
files before making this video and that single certificate revocation list was about 18 megabytes in size.

There are many different reasons for modifying or revoking a certificate. There could, of course, be a
compromise of the site, but it may be that you’d like to change one of the attributes that’s part of the
certificate. Or maybe you acquired a new company and you’re changing all of the certificates on their
servers.

One big reason that we had for changing certificates occurred in April of 2014, through a vulnerability
in OpenSSL called Heartbleed. Through this vulnerability, the potential existed for someone to gain
access to the private keys associated with these certificates on these web servers.

And because the potential was there, and we had no way to know whether someone had been able to
take advantage of this or not, almost everybody had to replace the certificates on their website servers
across the world. This meant that a lot of certificates had to be replaced and we ended up adding many
of these to our certificate revocation lists.

If you want to see the CRL for a website you happen to be visiting, you can click the lock in the SSL view
of your browser, and go through all of the different attributes of the certificate to find the one for the
CRL distribution points.

In many environments, downloading a relatively large certificate revocation list file is not very practical.
So there needs to be another way to check the validity of these certificates. A much more efficient way to
do this is by using the Online Certificate Status Protocol or OCSP. This is something built into our
browser that can perform a single check just for this certificate, to see if that certificate may have been
revoked.
This is usually a check that can be done from your browser to an OCSP responder, that is usually
managed by the certificate authority. This allows a browser to validate a single certificate rather than
downloading a large file which may or may not include this certificate.

Many browsers these days support OCSP, but you may find some older browsers or older applications
that do not properly perform any OCSP checks. You might also find browsers that support OCSP but
don’t go through the process of actually checking for the revocation when visiting that site. For that
reason, you may not want to rely on a single method for performing validation and instead use multiple
methods to check on the validity of a particular certificate.

Certificates – SY0-601 CompTIA Security+ : 3.9


We use digital certificates extensively for servers, users, and software. In this video, you’ll learn about
using certificates for web servers, code signing, machines, email, and more.

In previous videos we talked a lot about what’s inside of a certificate but we haven’t talked about when
you would use these, and in this video, we’ll look at the many uses for these digital certificates.

One of the most visible uses of these certificates, is a certificate that allows you to encrypt
communication to a web server. This is usually indicated by a lock that’s in the address bar of your
browser. We refer to these as domain validation certificates or DV certificates.

And this means that the owner of this certificate who’s added it to this website server, has some control
over the domain that you’re connecting to. This provides the trust that when you’re connecting to a
website, you really are connecting to the legitimate form of that particular website.

In the past, you may have also seen websites with an EV or extended validation certificate, which meant
that, ostensibly, additional checks had been done by the certificate authority. These certificates enabled
additional features that would show the name of the certificate owner in the browser bar itself, marked in
green in the browser bar as well.

These also usually meant that you paid a little bit extra for the extended validation certificate, but it
made it very clear what websites you were connecting to. These days, the use of SSL has become normal
if not expected on a website. So it’s not necessary to promote the fact that you’re running SSL on your
server.

If you were to look at the attributes of a certificate, you’ll notice that one of those extensions is called a
subject alternative name or SAN. This allows the owner of the certificate to add many different DNS
names into this certificate configuration. And that means a single certificate could support connectivity
for many, many different websites.

This is common to see on sites like CloudFlare which is providing a reverse proxy service. So they’ll use a
single certificate and that single certificate will support many, many different sites that are using
CloudFlare as a service.

One thing you may notice when looking through the subject alternative name fields is that these certificates can support many different hosts by using a wildcard. The wildcard is designated with an asterisk, and it means that many host names can be associated with this particular DNS domain.

For instance, on this certificate there is the domain birdfeeder.live and there is *.birdfeeder.live, which means this certificate could support ftp.birdfeeder.live, ssl.birdfeeder.live, or any other host name that you would like to use under that domain, thanks to the wildcard associated with this name in the certificate.
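
As an illustration, here is a minimal sketch, using Python's cryptography library, of reading the SAN entries out of a certificate and doing a simplified wildcard comparison. The file name is a placeholder, and the matching function is intentionally simplified; real clients follow the full RFC 6125 host name matching rules.

# Read the Subject Alternative Name entries and do a simplified wildcard match.
from cryptography import x509

cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())
san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value
dns_names = san.get_values_for_type(x509.DNSName)  # e.g. ['birdfeeder.live', '*.birdfeeder.live']

def matches(hostname, pattern):
    # The '*' only covers a single leftmost label: *.birdfeeder.live matches
    # ftp.birdfeeder.live but not a.b.birdfeeder.live
    host_labels = hostname.split(".")
    pat_labels = pattern.split(".")
    if len(host_labels) != len(pat_labels):
        return False
    return all(p == "*" or p == h for p, h in zip(pat_labels, host_labels))

print(any(matches("ftp.birdfeeder.live", name) for name in dns_names))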

Another common use of digital certificates is when we are distributing software. A developer will create an executable or a piece of software that needs to be distributed, and then they will sign that software with a code signing certificate. This means that when we receive that software and install it, we can validate during the installation process that the program we're installing is exactly the same executable as the one that was distributed by the manufacturer.

This lets us know that this software has not changed since the time it left the developer. If we're installing the software and it fails this validation check, we have the option to decide whether we'd like to continue with the software installation or whether we'd like to stop the installation and determine why this particular code signing certificate did not properly validate.
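
The sketch below illustrates only the underlying idea of a signature check over a distributed file; it is not how Windows Authenticode or any specific installer actually verifies a package. It assumes an RSA signing key, a detached signature in a separate file, and placeholder file names.

# Sketch of the idea behind code signing verification: check a detached RSA
# signature over the file using the public key from the signing certificate.
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

signing_cert = x509.load_pem_x509_certificate(open("publisher.pem", "rb").read())
executable = open("installer.bin", "rb").read()
signature = open("installer.sig", "rb").read()

try:
    signing_cert.public_key().verify(
        signature, executable, padding.PKCS1v15(), hashes.SHA256()
    )
    print("Signature valid: file unchanged since it was signed")
except InvalidSignature:
    print("Validation failed: stop and investigate before installing")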

If you're building out a public key infrastructure, you're starting with a certificate authority. That certificate authority needs a starting point, and that starting point is a root certificate. All of the signatures and additional certificate authority certificates start with this root certificate.

So if you're building out intermediate CAs, and leaf CAs beyond that, you will start with this root certificate and sign everything downstream from there. This is obviously an incredibly important certificate; it is the foundation of your PKI, and that means you want to be sure that this certificate remains safe.

If someone were able to gain access to this root certificate, they could effectively create any type of certificate for your organization, which is why there is so much emphasis put on the security of this root certificate.

We mentioned in an earlier video that you may not need to go outside of your organization to manage certificates, but can instead build your own internal certificate authority. If you do that, you'll be providing your own signatures to your internal certificates, and we often refer to these as self-signed certificates.

This designates that we've created this certificate ourselves, signed it internally, and distributed it to our internal devices. This also means that we didn't go outside of our organization to have a third party validate this certificate.

This is very common to do, especially if you have a large number of servers and devices and you need to
create many, many certificates. You can build them all out and self-sign those certificates. But this also
means that you have to tell all of those devices to trust your internal certificate authority.
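
As a minimal sketch of what self-signing looks like in practice, here's an example using Python's cryptography library; the common name and the one-year validity period are arbitrary choices for illustration.

# Generate a key pair and a self-signed certificate (sketch).
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "internal-server.example.local")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # issuer == subject: self-signed
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())             # signed with our own private key
)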

If someone tries to use a device that doesn't have that trust in place, a message will appear saying that the certificate is not trusted. That's why it's very common to distribute your CA certificates to all of your devices, which ensures that your root CA, intermediate CAs, and all of the other CAs you're using are trusted internally.

As a system administrator, you may be deploying hundreds or thousands of devices throughout your organization. So it's important that if a device is connecting to your network, you're able to tell whether that device is trusted or not. One way to do this is to deploy machine or computer certificates to all of the devices that need to be trusted by your organization. Then, if a device connecting to the network contains one of these certificates, you know it is trusted by your organization.

A good example of this might be someone connecting through the VPN: before the device gains access to the internal network, it goes through an additional authentication step to check for that certificate. And if that device validates properly, we know that machine is trusted by the rest of the organization.

If you’re sending or receiving email messages, you may have the option to enable encryption or digital
signatures. If that’s the case, then you probably have an email certificate on your system. This email
certificate uses public key cryptography to be able to encrypt information so that you can send it in a
protected form through the internet. And it would also provide a way for you to receive encrypted
messages and decrypt them locally in your email client.

These email certificates can also be used for digital signatures. We can sign any email we send with this certificate, and the recipient can validate that everything in the email is exactly the same as it was when it was sent, which provides integrity. They can also validate that it was really us who sent the message, which provides non-repudiation.
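
Here's a minimal sketch of what producing an S/MIME signature might look like with the PKCS #7 support in recent versions of Python's cryptography library. The certificate and key file names are placeholders, and a real mail client handles the MIME packaging and sending for you.

# Produce a detached S/MIME signature over an email body (sketch).
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.serialization import pkcs7, load_pem_private_key

cert = x509.load_pem_x509_certificate(open("email_cert.pem", "rb").read())
key = load_pem_private_key(open("email_key.pem", "rb").read(), password=None)

signed = (
    pkcs7.PKCS7SignatureBuilder()
    .set_data(b"Meeting moved to 3 PM.")
    .add_signer(cert, key, hashes.SHA256())
    .sign(serialization.Encoding.SMIME, [pkcs7.PKCS7Options.DetachedSignature])
)
# 'signed' is an S/MIME blob the recipient can verify against our certificate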

Your organization may also distribute user certificates to every single user. These can often be integrated into identification cards and used as an additional form of authentication. There are card readers that you can connect to a system via USB, or that may be integrated into the device itself, and the card can then be used as that additional authentication factor.

This is often integrated into the smart cards that we’re using so this might be both a physical
identification that might have your picture on it, and a digital access card that you can use when logging
into your system.

Certificate Formats – SY0-601 CompTIA Security+ : 3.9


There are many different formats that can be used for certificate storage. In this video, you’ll learn
about DER, PEM, PKCS #12, CER, and PKCS #7.

The standard used when we are working with digital certificates is called the X.509 standard. It's a standard format for these digital certificates, and it allows us to move these certs between different systems and have all of those different systems understand what's inside of these digital certificates.

There are different ways to transfer these certs however. And there are many different file formats that
you might find when moving from one system to another. Fortunately, there are applications like
OpenSSL that can read different formats or even convert between different formats if we need to.

One of these formats is the DER format or distinguished encoding rules format. This is a set of rules that
allows us to encode many different kinds of data but it’s especially useful for these X.509 certificates. It
is a binary format, which means that we can’t bring it up in a text editor and read anything that might be
in there. But it is a very common format that you’ll see when you’re deploying things for applications
using Java.

One of the challenges with sending a binary file over email is that some email systems might modify the attachment. One of the ways that you can prevent this is to encode that binary in base64 format; this ASCII representation is commonly referred to as the PEM format, or privacy enhanced mail. This means the certificate will be something that is readable in an email, and you can transfer it simply as text between one device and another.

This means that you now have that DER formatted certificate in an ASCII form that you can easily send
through email. If you’re trying to transfer information from one place to another, this might be a very
easy way to do that. This is supported across many different platforms and it’s a very standard way to
send certificates from one machine to another.

This is all letters and numbers, which makes it very easy to email, and it won't be modified by the email system. It's also something you can look at and see exactly where the certificate begins, where the certificate information is, and exactly where the certificate ends.
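
A minimal sketch of that conversion using Python's cryptography library, assuming a certificate.der file on disk (the file names are placeholders):

# Convert a binary DER certificate to base64-encoded PEM (sketch).
from cryptography import x509
from cryptography.hazmat.primitives import serialization

der_bytes = open("certificate.der", "rb").read()
cert = x509.load_der_x509_certificate(der_bytes)

pem_bytes = cert.public_bytes(serialization.Encoding.PEM)
open("certificate.pem", "wb").write(pem_bytes)
# The PEM file is plain ASCII, wrapped in BEGIN/END CERTIFICATE lines,
# so it survives email and copy/paste without being modified.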

If you need to transfer multiple certificates at one time, you might want to use PKCS number 12. This is the public key cryptography standards number 12, a standard that was created by RSA Security and is now published as an RFC.

This is a container format, so you have a standard format that you can put many certificates inside. This is usually sent as a .P12 or .PFX file, and we might commonly use it to transfer a private and public key pair within the same container. This format also supports password protection, which is especially important if you're transferring a private key.

This is a standard that was extended from a Microsoft format called the PFX format or the personal
information exchange. These are very similar formats and very often we reference both of these
interchangeably.
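
As a sketch, here's how you might open one of these password-protected containers with Python's cryptography library; the file name and password are placeholders.

# Read a password-protected .P12/.PFX container and pull out its contents (sketch).
from cryptography.hazmat.primitives.serialization import pkcs12

p12_bytes = open("bundle.p12", "rb").read()
key, cert, additional_certs = pkcs12.load_key_and_certificates(
    p12_bytes, password=b"example-passphrase"
)
# key: the private key, cert: the matching certificate,
# additional_certs: any intermediate/chain certificates in the container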

If you're managing certificates in the Windows operating system, you're probably using the CER format, or certificate format. This is primarily used in Windows, and it provides the flexibility to include either the binary DER format or the ASCII PEM format. This normally contains just the public key, because we would probably want to send a private key in a more protected form, such as a password-protected PFX file. But if you're running Windows, you're probably using a lot of these .CER files, and it's a very common way to import and export certificates in the Windows operating system.

Another certificate type you might find is PKCS number seven. This is the public key cryptography standards number seven, and you'll commonly see it sent as a .P7B file. Like the PEM format, the PKCS number seven format is an ASCII file that can be read and easily transferred over email.

It's common to send certificates and chain certificates using this format, but we don't commonly include private keys in a P7B file. This is a format that you'll find support for in Windows, Java Tomcat, and many other operating systems and applications as well.
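
A minimal sketch of reading the certificates out of a .P7B bundle with Python's cryptography library, assuming a PEM-encoded chain.p7b file (the file name is a placeholder):

# Read the certificates out of a PEM-encoded PKCS #7 (.P7B) bundle (sketch).
from cryptography.hazmat.primitives.serialization import pkcs7

p7b_bytes = open("chain.p7b", "rb").read()
certs = pkcs7.load_pem_pkcs7_certificates(p7b_bytes)
for c in certs:
    print(c.subject.rfc4514_string())   # list the certificates in the bundle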

Certificate Concepts – SY0-601 CompTIA Security+ : 3.9


Certificate management is an important part of a PKI. In this video, you’ll learn about offline CAs, OCSP
stapling, certificate pinning, trust relationships, certificate chaining, and more.

If you're managing a certificate authority, one of the things you do not want to have happen is for someone to compromise your CA or any of the keys associated with it. If that happens, any of the certificates signed or distributed by that CA would no longer be trusted in your organization.

To limit this type of exposure, we can have some CAs act as online CAs, while other certificate authorities might be offline CAs. We might, for example, want to build out some intermediate CAs, and those CAs
might be offline CAs. We might, for example, want to build out some intermediate CAs, and those CAs
are signing the certificates that are used throughout our organization. This means that we can remove
our root CA from the network and store that root CA somewhere safely, so that no one has access to the
root CA certificates. This means that we could limit the scope of any type of compromise of a single
intermediate CA, and we would only have to recreate a new CA and distribute a fraction of all of the
total certificates in our environment. This also means that the root CA remains protected, and if we
need to create all new intermediate CAs, we have a protected CA that could not have been
compromised.

One of the challenges we have when working with certificates is knowing if a certificate may have been
revoked. There are different ways to check this revocation status, but one of the easiest ways is through
something called OCSP stapling. This is using the online certificate status protocol to be able to
determine if a certificate may have been revoked. One of the challenges in using OCSP is we have to
constantly check back with the certificate authority to see if this certificate is valid.

It’d be much easier if we were able to make that validation ourselves on a local server. And you can do
that by using OCSP stapling. The status information regarding that certificate is stored on the local
server, and we effectively staple that particular status into the handshake that normally occurs when SSL
or TLS is used when first connecting to a server. Since the status information is digitally signed by the CA,
we can trust that it’s valid without having to go all the way to the CA to perform that validation.

Another concern we might have when using digital certificates to connect to something like a web server is trusting that we're really talking directly to that web server without someone in the middle modifying our conversation. One way to confirm this is certificate pinning: you put the certificate inside of the application that you're using, and then compare it to the certificate that you see when that application connects to the server. This means that you'll have to compile the certificate into the application, or add it the first time that you run the application. The application then performs a check when it connects to that server: it will see if the certificate that it has internally matches the certificate it's seeing on the server. If those don't match, the application can decide what to do. It may show an error message on the screen, or it may shut down and prevent itself from running.
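
Here's a minimal sketch of the pinning idea using Python's standard library. The host name and the pinned fingerprint value are placeholders, and a production application would pin against a current, known-good fingerprint (often of the public key rather than the whole certificate).

# Sketch of certificate pinning: compare the SHA-256 fingerprint of the
# certificate the server presents with a fingerprint shipped inside the app.
import hashlib, socket, ssl

PINNED_SHA256 = "0123456789abcdef..."   # placeholder fingerprint compiled into the app
HOST = "www.example.com"                # placeholder host name

ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der_cert = tls.getpeercert(binary_form=True)   # certificate in DER form

fingerprint = hashlib.sha256(der_cert).hexdigest()
if fingerprint != PINNED_SHA256:
    raise SystemExit("Pinned certificate mismatch: refusing to continue")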

If you’re just starting out with a certificate authority, or maybe you’re building one out in a lab, it may
only be necessary to have one single certificate authority in your organization. There are obviously some
security concerns associated with that, but if you’re using this on a limited basis, that may be all you
need. It’s probably more common for organizations to create a hierarchical structure where you have
intermediate CAs and leaf CAs so that you can limit the scope of any type of compromise. This means
your users and devices are probably receiving certificates from a leaf CA, which was created from an
intermediate CA, and of course, the intermediate CA was created from a root CA. If a security event
occurs and a leaf CA is compromised, we would only need to replace the certificates associated with
that single leaf CA.

Another model might be to create multiple CAs and arrange them as a mesh, where every CA trusts every other CA. This is probably not a concern if you only have two or three of them, but you can see that once you get up to a larger number of certificate authorities, it becomes very difficult to maintain that mesh at scale.

If you've created a certificate in PGP, you'll notice there is no certificate authority. PGP was created as a web of trust. In a web of trust, you would sign the certificates of people you know, and those people would sign the certificates of people they know. This means that if you happen to see a certificate from someone you don't know, but that certificate has been signed by someone you do know, then there is a level of trust you can associate with it.

Another PKI trust relationship is mutual authentication. This is when you're validating that the server you're communicating with is trusted, and the server is also confirming that the client it's communicating with is trusted. This means that both sides of the conversation can trust each other, which gives the application they're using an additional level of trust.

There may also be cases where you need to have a third party hold on to the decryption keys used in your organization. This is known as key escrow: you hand over your private keys to a third party, and they only use those keys in particular situations.

For example, you might be storing private information about your employees that is stored in an
encrypted form, and you would only be allowed access to that information if it is validated by the third
party that has the decryption keys. Obviously, if a third party has control of your decryption keys, there
needs to be very specific processes and procedures in place to allow access to that data. You obviously
need to be able to trust the third party that you’re using for this key escrow, and you also need to be
assured that they’re using the proper security to keep these private keys safe. There may even be a very
specific set of circumstances that has to occur for someone to be able to gain access to those decryption
keys, so there might be court orders, or legal proceedings that are required before anyone is able to gain
access to those decryption keys.

It's very common for the certificates that we're using to be signed by an intermediate CA, a device that is not the root certificate authority. Because of this, we need some way to validate that the intermediate or hierarchical CA we happen to be using ultimately chains back to the root CA. To be
able to do this, we use a chain of trust, which allows us to list all of the certificates between the server
that we’re connecting to and the root certificate authority. This chain starts with the SSL certificate that
we are connecting to, and it ends with the root CA. You can see a representation of this if you click the
lock that is in your browser address bar, and have a look at the details of the certificate.

Any certificate that is between the device that you’re connecting to and the root CA is a chain
certificate. Sometimes you'll hear this referred to as an intermediate certificate. This means that if you're the administrator of this web server, you'll need to add the certificates for all of these chain certs. Otherwise, someone connecting to your site may receive an error in the browser.

Here’s a better view of this entire chain of certificates. We’re connecting to robowhois.com. There is an
R3 intermediate certificate, and then finally, the DST root CA certificate. So on our web server, we would
have to have, of course, the certificate for the local web server, but we’ll need to include the certificate
for R3, and ultimately, we’ll need to make sure we have a certificate for the root CA as well.
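
As a simplified sketch, the issuer/subject linkage in a chain like this can be checked with Python's cryptography library. The file names below are placeholders, and a real validator also checks the signatures, validity dates, and extensions on each certificate.

# Sketch of walking a certificate chain: each certificate's issuer name should
# match the subject of the next certificate up, ending at the root CA.
from cryptography import x509

def load(path):
    return x509.load_pem_x509_certificate(open(path, "rb").read())

# leaf -> intermediate -> root; file names are placeholders
chain = [load("robowhois.pem"), load("r3.pem"), load("root.pem")]

for child, parent in zip(chain, chain[1:]):
    assert child.issuer == parent.subject, "broken link in the chain"
print("Issuer/subject names chain correctly up to the root CA")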
