MODULE ONE UNDERSTANDING SECURITY THREATS

Cybersecurity refers to the practice of protecting computer systems, networks, and sensitive information from
unauthorized access, attacks, theft, damage, or disruption. The CIA triad is a well-known model of
cybersecurity, which stands for confidentiality, integrity, and availability.

 Confidentiality refers to the protection of sensitive information from unauthorized access or disclosure.
This can be achieved through encryption, access controls, and other security measures that ensure only
authorized personnel can access sensitive data.

 Integrity refers to the protection of information from unauthorized alteration, modification, or destruction. This can be achieved through data backup, data validation, checksums, and other security measures that ensure data remains unaltered and authentic.

 Availability refers to the assurance that information and systems are accessible and usable by authorized
personnel. This can be achieved through redundancy, disaster recovery planning, and other measures
that ensure systems and data are available when needed.

 Confidentiality - Keeping things/data/info hidden
 Integrity - Keeping data accurate and untampered
 Availability - Readily accessible

Risk, vulnerability, threat, and exploit are terms commonly used in cybersecurity:

 Risk: The probability of a threat exploiting a vulnerability and causing damage or harm to an
organization's assets. Risk is typically measured in terms of likelihood and impact.

 Vulnerability: A weakness or flaw in a system, network, or software that can be exploited by a threat. Vulnerabilities can be unintentionally created during software development or introduced through misconfigured or outdated software.

 Threat: Any potential danger that can exploit a vulnerability and cause harm to an organization's
assets. Threats can come from a variety of sources, including hackers, malware, ransomware,
phishing attacks, and more.

 Exploit: A software tool, code, or technique used to take advantage of a vulnerability and gain
unauthorized access to a system or network. Exploits can be used to execute malicious code, steal
data, or gain control of a system.

Cybersecurity attacks are malicious activities carried out by cybercriminals to compromise the confidentiality,
integrity, and availability of computer systems, networks, and sensitive data. The following are some common
types of cybersecurity attacks:

TYPE OF ATTACKS
1. Malware attack
a. Virus -A virus is a program or code that can replicate itself and spread to other computers or devices.
It can modify or delete files and data, and it may also steal sensitive information.
b. Worm - A worm is a self-replicating program that can spread rapidly over a network or the internet. It
can consume a significant amount of system resources, cause system crashes, and spread other types
of malware.
c. Trojan - Disguises itself as one thing but does something else. It can open a backdoor to allow hackers to access the infected system, steal data, or install other types of malware.
d. Rootkit - Unauthorized access to a computer w/o being detected. A rootkit is a type of malware that
can hide its presence and activities from the user and security software. It can allow hackers to gain
remote access to the infected system and steal sensitive information.
e. Adware
f. Spyware - Ex. Keylogger
g. Backdoor - Hidden entry point and allow unauthorized access to the system
h. Botnet - Bots controlled by a central entity (e.g., a criminal org). Created by infecting large numbers of computers in order to take control of them and launch malicious acts like DDoS attacks or Bitcoin mining
i. Ransomware
2. Network attack:

a. Man-in-the-middle - involves intercepting communications between two parties to steal sensitive
information or modify the communication. This can be done by compromising the network infrastructure
or by using social engineering to trick users into installing malware.
 IP Spoofing - False source address (Use firewalls - filter traffic)
 WiFi Eavesdropping - Intercepting/listening to wireless network traffic. Use WiFi Protected Access (WPA2), strong passwords for network authentication, and a VPN
 DNS Cache poisoning attack or DNS spoofing - Corrupting the DNS resolver cache memory with
incorrect info, to redirect traffic from legit web to a fraudulent one. The goal of DNS cache
poisoning is to redirect users to malicious websites or servers, intercept their traffic, or steal
sensitive information such as login credentials or financial data. For example, an attacker could
redirect users trying to access a legitimate banking website to a fake website that looks identical
to the real one, and capture their login credentials and other sensitive data.
b. Denial-of-service attack (DoS) - a single device or computer is used to send a high volume of traffic or
requests to a target, with the goal of overwhelming its resources and making it unavailable to legitimate users.
The attacker can achieve this by exploiting vulnerabilities in the target's software or network infrastructure or by
flooding it with traffic from a botnet
 Ping of death - sending a single ICMP (Internet Control Message Protocol) packet that is larger than what the system can handle
 Ping flood - sending tons of ICMP packets. The goal is to consume all available resources, preventing the target from serving legitimate traffic
c. Distributed denial-of-service attack (DDoS) - In contrast, a DDoS attack involves multiple devices or
computers, typically spread across different locations and networks, which are coordinated to launch an attack
on the target. The devices are often compromised through malware or other means, and the attacker controls
them remotely to send a high volume of traffic or requests to the target. This makes DDoS attacks more difficult
to defend against, as they can come from multiple sources and be more challenging to identify and block.
d. Password attack - type of cyberattack that involves an attacker attempting to guess or crack a user's
password to gain unauthorized access to a system or account.
 Brute-force attack - Systematically guessing the password by trying every possible combination
 Dictionary attack - Systematically trying different combinations of commonly used passwords
 Phishing email
3. Social Engineering - A type of attack that relies on exploiting human psychology and emotions, rather than technical vulnerabilities in computer systems or networks. Attackers use tactics such as trust, fear, greed, urgency, and curiosity to manipulate people into performing actions or divulging sensitive information that can be used for nefarious purposes.
a. Phishing (relies more on tricking individuals rather than exploiting vulnerabilities in network infrastructure. Sent to lots of people, no specific target)
b. Spear phishing (Targets a specific individual or group)
c. Pretexting - Tricking individuals into divulging sensitive info by pretending to be from a legit company and pressuring the victim.
d. Spoofing - Impersonate a trusted source.
e. Baiting - Luring victims with free gifts in exchange for sensitive information
f. Tailgating - Following a real employee into the office
g. Whaling - Targeting high-profile individuals for big ransom or info
h. Vishing - Voice phishing over the phone or VoIP
i. Shoulder surfing
j. Impersonating - pretending to be you and calling your bank asking for a password reset.
k. Dumpster diving
l. Evil twin - A rogue network/server that looks identical to yours, but is controlled by the attacker.
4. Client side attack - Targets the client's computer or device rather than the server side. Targets vulnerabilities in the software running on a user's device (compromised websites, untrusted links, etc.)
5. Injection attack - a type of cyberattack that involves injecting malicious code into an application or system
to exploit a vulnerability and gain unauthorized access or control. Injection attacks typically occur when an
application or system does not properly validate user input, allowing an attacker to insert malicious code or
commands.
a. SQL Injection - Inserting malicious SQL code (into a web app) to bypass web authentication or to access the database
b. Cross-site scripting (XSS) - Injecting malicious scripts into a web page that will be executed when the page is loaded.
c. Command Injection - Injecting malicious commands into an application to execute actions (e.g., delete files)

Prevention
a) Use strong passwords and multi-factor authentication - Prevent unauthorized access.
b) Keep software up-to-date - Prevent vulnerabilities that can be exploited by an attacker.
c) Implement Firewalls - Filtering incoming traffic

d) Implement network segmentation - Dividing a larger computer network into several smaller sub-networks that are each isolated from one another (limits the damage of an attack)
e) Access control list - Control access to network resources and limit the traffic.
f) Conduct regular security assessments - Testing can identify vulnerabilities and potential areas of
weakness in network or system.
g) Use anti-virus and make sure it is updated.
h) Limit access to sensitive data.
i) Train employees - Untrusted emails, links, untrusted WiFi, SSL certificates.
j) Backup regularly
k) Train employees
l) Stay informed
MODULE TWO CRYPTOLOGY
 Cryptography - Method of protecting information and communication using codes in the presence of 3rd parties. The study of this is called cryptology.
 Cryptography is the science of encoding and decoding information to protect its confidentiality, Integrity
and authenticity.
 Cryptanalysis - Study of analyzing and breaking Cryptographic systems.
 Cryptosystem - Collection or set of algorithms and protocols used to secure data and communications
through encryption, decryption, key generation.
 Steganography - Hiding information without encoding it.
1. SYMMETRIC ENCRYPTION - A cryptographic technique that uses one secret key to both encrypt and decrypt data.
 Substitution cipher - e.g., Caesar cipher, ROT13 (see the sketch below)
 Stream Cipher
 Block Cipher
To avoid key reuse, use an Initialization Vector or IV (the master key is combined with the IV to generate a one-time encryption key)
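A minimal Python sketch of a substitution cipher (a Caesar shift; ROT13 is the special case of shifting by 13). Illustration only -- substitution ciphers are trivially breakable and are not real encryption:

    # Toy substitution cipher: shift each letter by a fixed amount.
    # ROT13 is caesar(text, 13); applying it twice returns the original text.
    def caesar(text, shift):
        result = []
        for ch in text:
            if ch.isalpha():
                base = ord('A') if ch.isupper() else ord('a')
                result.append(chr((ord(ch) - base + shift) % 26 + base))
            else:
                result.append(ch)  # leave spaces, digits, punctuation unchanged
        return ''.join(result)

    ciphertext = caesar("Attack at dawn", 13)   # 'Nggnpx ng qnja'
    plaintext = caesar(ciphertext, 13)          # ROT13 is its own inverse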
a) Data encryption standard (DES)- is a symmetric-key algorithm for encrypting electronic data. It was
developed by IBM in the 1970s and adopted by the U.S. government as a federal standard in 1977.
DES uses a 56-bit key to encrypt data in 64-bit blocks, and it operates using a Feistel network structure. Although DES was widely used for many years, it has since been replaced by more secure encryption algorithms, such as the Advanced Encryption Standard (AES).
b) Advanced Encryption Standard (AES) - AES is a block cipher, which encrypts data in fixed-size blocks, 128 bits in length. It operates using a substitution-permutation network (SPN) structure, unlike DES's Feistel design, and it has several differences that make it more secure. One of these differences is the key length used by AES, which can be 128, 192, or 256 bits, making it much more difficult to brute-force than DES.
i. RC4 (NOMORE attack) - RC4 is a symmetric-key stream cipher algorithm that was widely used for encryption in various applications, such as wireless networks, secure HTTP connections, and wireless payment systems. However, it is known to be vulnerable to certain attacks, including the NOMORE attack, which involves analyzing the cipher output to determine the key. The attacker does this by analyzing the statistical properties of the output stream and then trying to deduce the key that was used to generate it. The attack is effective against RC4 because the algorithm has a weakness in its key-scheduling algorithm that causes it to generate a non-random (biased) output stream.
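A hedged sketch of modern symmetric encryption using AES in GCM mode, assuming the third-party Python `cryptography` package is installed (pip install cryptography). Note the fresh random nonce (IV) generated for every message, which is the key-reuse point noted above:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)        # 256-bit AES key shared by both parties
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)                           # unique IV/nonce per message; never reuse with the same key
    plaintext = b"transfer $100 to account 42"
    ciphertext = aesgcm.encrypt(nonce, plaintext, None)

    # Decryption raises an exception if the ciphertext was tampered with,
    # so AES-GCM provides integrity as well as confidentiality.
    assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext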
2. ASYMMETRIC ENCRYPTION - A cryptographic technique that uses a public key to encrypt and a private key to decrypt
a) Confidentiality - Granted through encryption and decryption method.
b) Authenticity - Granted by digital signature mechanism
c) Non-repudiation - Ensures that the msg came from the person claiming to be the author.
 Digital Signature - Without msg verification, anyone could use Daryl's public key. In order to know that the msg is really from Daryl, she composes a msg and combines it with her private key to generate a digital signature, then sends both to Suzanne. Suzanne can now verify the msg's origin and authenticity by combining the msg, the DS, and Daryl's public key; if the msg was actually signed by Daryl's private key and not someone else's, and the msg wasn't modified at all, then the DS should validate. DSs are used to verify the authenticity and integrity of documents or msgs and ensure info isn't tampered with during transmission. Generate it using the private key and verify it using the public key.
 Msg Authentication Codes (MAC) - Somewhat related to digital signatures. A MAC is a bit of information that allows authentication of a received msg, ensuring that the msg came from the alleged sender and not a 3rd party masquerading as them. It also ensures that the msg wasn't modified in some way, in order to provide data integrity. It is similar to a DS that uses public key cryptography, except that the secret key used to generate the MAC is the same one used to verify it: a shared secret key generates a checksum-like tag and also verifies it. So a MAC is a symmetric cryptographic technique. (A short sketch contrasting signatures and MACs follows this list.)

 Keyed-Hash Msg Authentication Code (HMAC) - One popular and secure type of MAC; uses a cryptographic hash function along with a secret key to generate the MAC.
 Cipher-Based Msg Authentication Codes (CMACs) - Instead of using a hashing function to produce a digest like HMAC, a symmetric cipher with a shared key is used to encrypt the msg and the resulting output is used as the MAC.
 SSL/TLS, IPsec, and SSH are all network communication protocols that provide secure
communication over a network by encrypting data and providing authentication.
 TLS grants authentication by using digital certificates to authenticate the identity of the
parties, and integrity by using symmetric encryption and MACs to ensure that the data
exchanged between the parties is confidential and has not been tampered with.
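A short sketch contrasting a digital signature (asymmetric: sign with the private key, verify with the public key) with an HMAC (symmetric: one shared secret key both generates and verifies the tag). The signature part assumes the third-party `cryptography` package; the HMAC part uses only the standard library:

    import hmac, hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    message = b"Meet at noon. - Daryl"

    # Digital signature: generated with the private key, verified with the public key.
    private_key = Ed25519PrivateKey.generate()
    signature = private_key.sign(message)
    private_key.public_key().verify(signature, message)   # raises InvalidSignature if forged/modified

    # MAC: the same shared secret key generates and verifies the tag.
    shared_secret = b"pre-shared secret key"
    sender_tag = hmac.new(shared_secret, message, hashlib.sha256).digest()
    receiver_tag = hmac.new(shared_secret, message, hashlib.sha256).digest()
    assert hmac.compare_digest(sender_tag, receiver_tag)  # constant-time comparison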
ASYMMETRIC CRYPTOGRAPHY SYSTEMS
 RSA (Ron Rivest, Adi Shamir, Leonard Adleman) 1983 - A widely used public-key encryption algorithm. It's used for key exchange and digital signatures. Uses a key pair to encrypt and decrypt.
 Digital Signature Algorithm (DSA) 1991 - Signing and verifying electronic documents. Uses the private key to sign and the public key to verify.
 Diffie-Hellman (DH) - Key exchange algorithm that allows 2 parties to establish a shared secret key over an insecure channel, such as in TLS/SSL.
 Elliptic curve cryptography - Known for its strong security and efficiency. Increasingly popular.
3. HASHING
 The purpose of a hashing function is to take an input data of arbitrary length and generate a fixed-
size output, often called a hash or digest, that represents the input data in a unique and
deterministic way.
 Hashing functions are commonly used in various areas of computer science, including cryptography,
data structures, and computer networking. They have several important applications, including:
 Data integrity: Hashing functions are often used to verify the integrity of data. A hash value of the
original data is computed and compared to the hash value of the received data. If the two hash
values match, it is highly unlikely that the data has been tampered with or corrupted during
transmission.
 Digital signatures: Hashing functions are also used to generate digital signatures. A hash value of
the data to be signed is computed, and the resulting hash value is encrypted with the private key of
the signer. The recipient of the signature can verify the authenticity of the signature by computing
the hash value of the data and comparing it to the decrypted hash value of the signature.
 Password storage: Hashing functions are commonly used to store passwords securely. A hash value
of the password is computed and stored in a database instead of the plain-text password. When a
user enters their password, the hash value of the entered password is computed and compared to
the stored hash value. If the two hash values match, the password is considered to be correct.
 Data structures: Hashing functions are used in data structures such as hash tables to quickly retrieve
data based on a key. In a hash table, the key is hashed to a value that is used as an index to store
and retrieve the associated data.
 Overall, hashing functions are a fundamental building block of many computer systems and are
crucial for ensuring the security and integrity of data.
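A minimal standard-library sketch of the data-integrity use case: publish a digest of the original data, then recompute and compare after transmission:

    import hashlib

    original = b"contents of quarterly-report.pdf ..."
    expected_digest = hashlib.sha256(original).hexdigest()   # stored or published alongside the file

    received = b"contents of quarterly-report.pdf ..."
    if hashlib.sha256(received).hexdigest() == expected_digest:
        print("digest matches - data very likely unmodified")
    else:
        print("digest mismatch - data corrupted or tampered with")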
Examples of hashing functions:
a) MD5 - MD5 (Message-Digest algorithm 5) is a widely-used Cryptographic hash function that produces
a 128-bit hash value. It takes an input (message) of arbitrary length and produces a fixed-size output
that is typically used for verifying the integrity of data. MD5 was developed by Ron Rivest in 1991 and
has been widely used in various security applications. However, since 2004, it has been considered
insecure due to a number of vulnerabilities that have been discovered, including the possibility of
collisions (different inputs producing the same output). Therefore, it is no longer recommended for
use in Cryptographic applications where security is critical. In practice, SHA-2 (Secure Hash Algorithm
2) or SHA-3 (Secure Hash Algorithm 3) are now the recommended hash functions for Cryptographic
applications.
b) SHA1, SHA2, SHA3 - SHA-1 and SHA-2 are cryptographic hash functions designed by the United States National Security Agency (NSA), and SHA-3 was selected by NIST through a public competition; all are used in various security applications.
a) SHA-1 (Secure Hash Algorithm 1) produces a 160-bit hash value and was first published in 1995.
However, it has been found to have vulnerabilities and is no longer considered secure for use in
cryptographic applications.
b) SHA-2 (Secure Hash Algorithm 2) is a family of hash functions that includes SHA-224, SHA-256,
SHA-384, SHA-512, SHA-512/224, and SHA-512/256. These functions produce hash values of
varying sizes (from 224 bits to 512 bits) and are widely used in various security applications.
SHA-256 is currently the most commonly used hash function from this family.
c) SHA-3 (Secure Hash Algorithm 3) is the latest addition to the SHA family of hash functions. It
was designed as a result of a public competition held by the National Institute of Standards and
Technology (NIST) in response to concerns about the security of the existing hash functions. The
winning design, called Keccak, was selected as the basis for SHA-3. SHA-3 produces hash values
of either 224 bits, 256 bits, 384 bits, or 512 bits and is intended to provide better security and
performance than SHA-2. In general, SHA-2 and SHA-3 are currently recommended for use in
cryptographic applications where security is critical.
Hashing is commonly used in digital certificates, SSL/TLS, and other security protocols.
TLS/SSL Digital Certificate - Digital document used to verify the identity of a website or server, and
establish a secure, encrypted connection between client and server. Issued by CA.

d) Message Integrity Check (MIC) - A MIC is essentially a hash digest of the msg in question. You can think of it as a checksum that ensures the msg wasn't modified in transit: the integrity of the msg is verified by computing the hash and comparing it with the expected value.

One crucial application for cryptographic hash functions is for authentication.


Rainbow table - a precomputed table of hashes and their plaintext inputs, used to quickly reverse stolen password hashes.
Protect against brute-force attacks - Strong passwords, password salting, and running passwords through the hashing function multiple times (see the sketch below).
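A minimal sketch of salted, iterated password hashing with the standard library's PBKDF2; a per-user random salt defeats precomputed rainbow tables, and a high iteration count slows brute-force guessing (the iteration count below is an illustrative choice, not a mandated value):

    import os, hmac, hashlib

    def hash_password(password, salt=None):
        salt = salt or os.urandom(16)               # unique random salt per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest                          # store both, never the plain-text password

    def verify_password(password, salt, stored_digest):
        _, digest = hash_password(password, salt)
        return hmac.compare_digest(digest, stored_digest)   # constant-time comparison

    salt, stored = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, stored)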

4. PUBLIC KEY INFRASTRUCTURE - Uses asymmetric encryption and digital certificates to provide secure
communication over the internet.
 A Digital Certificate is a file that proves that an entity owns a certain public key.
 Certificate Contains info about the public key data, digital signature, and identifying information
of the certificate owner.
 Certificate Authority - Responsible for issuing, revoking, and distributing digital certs.
 Registration Authority - A trusted entity that verifies the identity of certificate requester and creates
digital cert on behalf of the CA. Authenticating the requester’s digital signature, and ensuring that the
requester meets the requirements for receiving the cert.
 Central Repository - needed to securely store and index keys, and a certificate management system
of some sort makes managing access to stored certificates and issuance of certificates easier.
 SSL or TLS server certificate - Certificate that a web server presents to a client as part of the
initial secure setup of an SSL/TLS connection. Issued by CA and contains the server public key.
 SSL or TLS client certificate - Bound to clients and are used to authenticate the client to the
server, allowing access control to an SSL/TLS service. Aren’t issued by a public CA. Usually, The
service operator would have their own internal CA which issues and manages client certificates
for their service.
 Code Signing Certificates -Issued by CA. It is a cert that sign executable programs. Allows users
of these signed app to verify the signatures and ensure that the app was not tampered with.
Also verify that the app came from the real author, not from malicious twin.
 Root Certificate Authority - Issued by CA. Serves as the foundation of trust in a public key
infrastructure (PKI) system. Used to authenticate the identity of other digital cert. It is self-signed
meaning it’s signed by its own private key rather than by other certificate.
Example of PKI
 X.509 (1988) - Defines the format of digital certs. Also defines:
 Certificate Revocation List (CRL) - Distribute a cert that is no longer valid.
 Serial Number - Identify individual certs.
 Certificate Signature Algorithm - Indicates what public key algorithm is used for the public key
and what hashing algo is used to sign the cert.
 Issuer Name - Contains info about the authority that sign the cert.
 Validity - Not before and not after date.
 Subject - Contains identifying info about the entity the cert was issued.
 Subject public key info - define the algo of public key, along w/ the public key itself.
 Certificate Signature Algorithm (repeated) - The certificate's second signature algorithm field; the two fields must match.
 Certificate signature value - the digital signature data itself.
 Certificate fingerprint - Unique digital identifier that is generated from the contents of a digital certificate, calculated using a cryptographic hash function (see the sketch below).
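A hedged sketch of reading the X.509 fields above from a PEM-encoded certificate, assuming the third-party `cryptography` package and a local file named cert.pem (the filename is just an example):

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes

    with open("cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    print(cert.serial_number)                            # Serial Number
    print(cert.issuer.rfc4514_string())                  # Issuer Name
    print(cert.subject.rfc4514_string())                 # Subject
    print(cert.not_valid_before, cert.not_valid_after)   # Validity window
    print(cert.signature_hash_algorithm.name)            # hash used by the signature algorithm
    print(cert.fingerprint(hashes.SHA256()).hex())       # certificate fingerprint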
 Web of trust - Alternative to the centralized PKI model of establishing trust and binding identities, where individuals, instead of CAs, sign each other's public keys. By signing a key you're saying that you trust this public key belongs to this individual. People sign each other's keys at key-signing parties.
 HTTPS (Hypertext Transfer Protocol Secure) - the secure version of HTTP. HTTPS can be called HTTP over SSL or TLS, since it encapsulates HTTP traffic over an encrypted, secured channel utilizing SSL or TLS.
 SSL/TLS encryption - uses both symmetric and asymmetric encryption. SSL/TLS uses asymmetric encryption to securely exchange info used to derive a symmetric encryption key.
 TLS grants us 3 things - A. Secure communication line, protected against eavesdroppers. B.
Authenticate both parties communicating. C. The integrity of communications.
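A minimal standard-library sketch of a TLS client handshake; the default context checks the server's certificate chain against the system's trusted root CAs and verifies the hostname (example.com is just a placeholder host):

    import socket, ssl

    hostname = "example.com"
    context = ssl.create_default_context()        # loads trusted root CA certificates

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print(tls.version())                  # negotiated protocol, e.g. 'TLSv1.3'
            print(tls.getpeercert()["subject"])   # identity taken from the server certificate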

 Session key- is the shared symmetric encryption key used in TLS sessions to encrypt data being sent
back and forth.
 Forward secrecy - property of a Cryptographic system so that even in the event that the private
key is compromised, the session keys are still safe.
 Secure shell (SSH) - A secure network protocol that uses encryption to allow access to a
network service over unsecured networks.
SSH (Secure Shell) is a network protocol that allows you to securely connect to a remote
computer or server over an unsecured network. It provides a secure encrypted connection for
remote login, command execution, and file transfer. SSH uses public-key cryptography to
authenticate the remote computer and encrypt the data transmitted over the network.
 Pretty Good Privacy (PGP) - Developed by Phil Zimmermann, an anti-nuclear activist. An encryption app that allows authentication of data and privacy from 3rd parties, relying upon asymmetric encryption to achieve this. Commonly used for encrypted email communication; also available as a full disk encryption solution or for encrypting files/docs.
 SECURING NETWORK TRAFFIC (What if app doesn’t utilize encryption)
 Virtual Private Network - Tool that encrypts your internet connection and routes it through a secure server, making it difficult for others to intercept or track your online activity. Works by creating a secure and private tunnel between your device and the server, allowing you to access the internet anonymously and w/o exposing your data.
Allows you to remotely connect a network host to an internal private network while passing data over
a public channel.
 Internet Protocol Security (IPsec) - VPN protocol that was designed in conjunction with IPv6.
IPsec works by encrypting an IP packet and encapsulating the encrypted packet inside an IPsec
packet. Then gets routed to the VPN end-point where the packet is de-encapsulated and
decrypted then sent to the final destination. Support two modes of operation:
 Transport mode- Only the payload of the IP packet is encrypted, IP header is untouched.
 Tunnel mode- Entire IP packet, header and all, is encrypted and encapsulated inside a new
IP packet with new header.
 CRYPTOGRAPHIC HARDWARE
 Trusted Platform Module (TPM) - Hardware device/crypto processor. Offers secure generation of keys, random number generation, remote attestation, and data binding and sealing.
A TPM has a unique secret RSA key burned into the hardware, which allows a TPM to perform things like hardware authentication. Can detect unauthorized hardware changes to a system.
MODULE THREE AAA SECURITY
AAA provide a foundation for effective security practices, helping to ensure that sensitive
information and resources are only accessible to authorized individuals or systems and that any
unauthorized activity is tracked and monitored.
1. Authentication - Process of verifying the identity of a user or system, typically through the use of usernames, passwords, biometrics, etc. Authentication ensures that only authorized individuals or systems have access to sensitive information.
2. Authorization - Process of granting or denying access to resources or info based on a user's identity and level of privileges. Authorization ensures that users can only access info or resources that they are authorized to access.
3. Accounting - Process of tracking and logging user or system activity, including authentication attempts, resource access, and system changes. Accounting provides a record of system activity, which can be used for auditing, troubleshooting, and security analysis purposes.
 Authentication
 Multifactor Authentication - Users are authenticated by presenting multiple pieces of info or objects, e.g., a USB device w/ a secret token, OTP
 Three authentication method
 Something you know - Password, PIN
 Something you have - ID
 Something you are - biometrics
 RSA token - an example of OTP; a form of 2-factor authentication that uses a unique code for each login attempt.
 Universal 2nd Factor (U2F) - A form of 2-factor authentication that uses a physical security key to verify the user's identity. The key is a USB device that contains a private key.
 Certificate
 Certificates - are public keys that are signed by a CA as a sign of trust.
 Client Cert - Presented by clients & allow servers to authenticate & verify clients.
 Certificate Revocation List (CRL) - A signed list published by the CA, which defines certs that have been revoked.

 The last step in the certificate-based authentication process is for the client to prove possession of the corresponding private key, since the cert only contains a signed public key.
 RADIUS - Networking protocol that provides AAA for remote access users. It is commonly used by
internet Service Providers (ISPs) and large organizations to control access to their networks. RADIUS
supports a wide range of authentication methods, such as password-based authentication, token-
based authentication, and certificate-based authentication. The server will reply with Access-Reject, Access-Challenge, or Access-Accept.
 Kerberos - A network Authentication protocol that uses tickets to provide secure authentication and
authorization for users and services. Kerberos uses symmetric-key cryptography to authenticate
clients and servers, and supports mutual authentication between client and server.
 TACACS+ - Terminal Access Controller Access-Control System Plus, a network protocol that provides AAA for remote access users. TACACS+ is similar to RADIUS but is more commonly used in Cisco networking environments. It separates the AAA functions into separate processes, which provides greater flexibility and granularity in controlling access to network resources.
 Single Sign-on - Is a method of authentication that allows users to access multiple apps or services w/
a single set of credentials. W/ SSO, users only need to login once, and then are automatically
authenticated to all other apps or services that are integrated w/ the SSO system. SSO can be
implemented using diff protocols and technologies, such as SAML (Security Assertion Markup
Language) or OAuth(Open Authorization).
 OpenID - standard protocol that allows users to authenticate themselves to diff website using a
single set of credentials.
 Authorization
 OAuth - A very popular open standard for authorization and access delegation. Used by companies
like Google, Facebook, and Microsoft. OAuth allows users to grant third-party websites and
applications access to their info without sharing credentials.
 OAuth works by providing the third-party application with an access token that is issued by the
website or application after the user authorizes the third party to access their resources. The
access token contains information about the user and the permissions granted to the third
party. The third party can then use this access token to access the user's resources on the
website or application.
 Here's how OAuth works in a nutshell:
1. The user wants to grant access to their data to a client application. They initiate the OAuth
flow by clicking a "Connect with [Client]" button in the client application.
2. The client application redirects the user to the authorization server, which asks the user to log
in and authorize the client application to access their data.
3. If the user grants authorization, the authorization server issues an access token to the client
application.
4. The client application uses the access token to access the user's data on the resource server.
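A hedged sketch of steps 3-4 above (exchanging the authorization code for an access token, then calling the resource server). All URLs, client IDs, and secrets are hypothetical placeholders -- real providers publish their own endpoints -- and the third-party `requests` package is assumed:

    import requests

    # Step 3: exchange the authorization code returned to the redirect URI for an access token.
    token_response = requests.post(
        "https://auth.example.com/oauth/token",            # hypothetical authorization server
        data={
            "grant_type": "authorization_code",
            "code": "AUTH_CODE_FROM_REDIRECT",
            "redirect_uri": "https://client.example.com/callback",
            "client_id": "example-client-id",
            "client_secret": "example-client-secret",
        },
    )
    access_token = token_response.json()["access_token"]

    # Step 4: use the access token to read the user's data from the resource server.
    profile = requests.get(
        "https://api.example.com/v1/me",                   # hypothetical resource server
        headers={"Authorization": f"Bearer {access_token}"},
    )
    print(profile.json())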

 Access Control List (ACL) - is a mechanism used to control access to resources such as files,
folders, or network devices. It is a list of permissions that determine which users or groups
are allowed to access the resources and what actions they can perform on it. ACL is an
important security mechanism that helps to control access to resources by specifying
permissions for individual users or groups. It allows administrators to grant or deny access
to resources based on the specific needs of users and the security policies of the
organization.
 Network ACL - A Network Access Control List (ACL) is a set of rules that are applied to
network traffic to filter and control access to network resources. ACLs are used to
provide network security by restricting access to sensitive resources based on
predefined criteria, such as IP addresses, ports, or protocols. ACLs can be
implemented on routers, switches, and firewalls, and they can be used to filter traffic
at various points in the network, including the edge, distribution, and core layers.
 Accounting

 Keeping record of what resources and services your users accessed, or what they did when
they were using your system.
 Auditing which involves reviewing records to ensure that nothing is out of the ordinary.
 TACACS+ - TACACS+ access tracking involves recording information related to user
authentication and authorization requests, such as username, password, and access
request time. This information is used to verify user identity and grant access to the
network. TACACS+ can also track additional information, such as the type of access
being requested, the network device being used, and the access point location.
TACACS+ accounting involves recording information related to user activity on the
network, such as the duration of the session, the amount of data transferred, and the
resources accessed. This information is used for billing, auditing, and compliance
purposes. TACACS+ accounting can also track additional information, such as the type
of service being used, the destination of the data transfer, and the quality of service
being provided. TACACS+ accounting records can be used for a variety of purposes,
such as monitoring network usage, identifying potential security threats, and analyzing
user behavior. TACACS+ accounting data can also be exported to external systems for
further analysis and reporting.
 RADIUS - RADIUS access tracking involves recording information related to user
authentication and authorization requests, such as username, password, and access
request time. This information is used to verify user identity and grant access to the
network. RADIUS can also track additional information, such as the type of access
being requested, the network device being used, and the access point location.
RADIUS accounting involves recording information related to user activity on the
network, such as the duration of the session, the amount of data transferred, and the
resources accessed. This information is used for billing, auditing, and compliance
purposes. RADIUS accounting can also track additional information, such as the type of
service being used, the destination of the data transfer, and the quality of service
being provided. RADIUS accounting records can be used for a variety of purposes, such
as monitoring network usage, identifying potential security threats, and analyzing user
behavior. RADIUS accounting data can also be exported to external systems for further
analysis and reporting.
 Cisco’s AAA system - enables the collection and recording of information related to
user activity on the network. This includes tracking user login and logout times, the
resources accessed, and the duration of access. The collected data can then be used
for auditing, billing, and other purposes.
The Cisco AAA system allows for different types of accounting, including network
accounting and command accounting. Network accounting captures information
related to network access, while command accounting records information related to
the execution of specific commands on network devices.
MODULE FOUR SECURING YOUR NETWORK
 Secure network architecture
 Network hardening - Is a process of securing a network by reducing its potential
vulnerabilities through configuration changes and taking specific steps.
 Networks would be much safer if you disable access to network services that aren’t
needed and enforce access restrictions.
 Implicit deny - A network security concept where anything not explicitly permitted or allowed should be denied. This is different from blocking all traffic, since an implicit deny configuration will still let traffic pass that you've defined as allowed. This can usually be configured in a firewall, which makes it easier to build secure firewall rules. Instead of specifically blocking all the traffic you don't want, you can just create rules for the traffic that you need to let through. You can also call this whitelisting, the opposite of blacklisting. It is less convenient but a much more secure configuration (a toy sketch follows below).
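A toy sketch of implicit deny / whitelisting logic: traffic passes only if it matches an explicit allow rule, and everything else is dropped by default (the rules are illustrative, not a real firewall API):

    # Explicit allow rules; anything that matches none of them is implicitly denied.
    ALLOW_RULES = [
        {"proto": "tcp", "port": 443},   # HTTPS
        {"proto": "tcp", "port": 22},    # SSH
    ]

    def is_allowed(packet):
        for rule in ALLOW_RULES:
            if packet["proto"] == rule["proto"] and packet["port"] == rule["port"]:
                return True
        return False                     # implicit deny: no matching rule means drop

    print(is_allowed({"proto": "tcp", "port": 443}))   # True  - explicitly allowed
    print(is_allowed({"proto": "udp", "port": 53}))    # False - denied by default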
 Another very important component of network security is monitoring and analyzing traffic on your network. Why network monitoring is important:
 It lets you establish a baseline of what your typical traffic looks like. This is
key because in order to know what unusual or potential attack traffic looks
like, you need to know what normal traffic looks like. Can do this by
network traffic monitoring and logs analysis.
 Analyzing logs - The practice of collecting logs from diff network devices and sometimes client devices on your network, then performing an automated analysis on them. This will highlight potential intrusions, signs of malware infection, or atypical behavior. You'd want to analyze things like firewall logs, authentication server
logs, and application logs.
 Log analysis would involve looking for specific log msgs of interest, like with firewall logs.
 Attempted connections to an internal service from an untrusted source
address may be worth investigating.
 Connections from internal network to known address ranges of botnet
command and control servers could mean there’s a compromised machine
on the network. So a device that is affected by botnet will connect to
command and control servers to receive instructions and updates from
botnet operators.
 Log analysis systems - are configured using user-defined rules to match interesting or atypical log entries.
 These can then be surfaced through an alerting system to let security
engineers investigate the alert. Alerts can take the form of sending an
email or an SMS w/ info and a link to the event that was detected. Can
even wake someone up in the middle of the night if the event was
severe enough.
 Normalizing log data - is an important step, since logs from diff devices and
systems may not be formatted in a common way.
 You might need to convert log components into a common format to
make analysis easier for analysts and rules-based detection systems.
 This also makes correlation analysis easier.
 Correlation analysis is the process of taking log data from different systems
and matching events across the systems.
 correlation analysis can be used to identify patterns or relationships
between security events, such as network traffic, system log entries,
and user activity. By analyzing these events together, security analysts
can gain a better understanding of the overall security posture of the
organization and identify potential threats and vulnerabilities.
 For example, if a correlation analysis of network traffic logs reveals a
high number of failed login attempts at a specific time, this may
indicate a possible brute-force attack on the network. Similarly, if an
analysis of system log entries shows a correlation between multiple
failed attempts to access a specific file or directory, this may indicate a
possible attempt at unauthorized access.
 It is also important to investigate and recreate the events that happened once a compromise is detected.
 Post fail analysis - Investigating how a compromise happened after the
breach is detected.
 Detailed logging and analysis of logs would allow for detailed
reconstruction of events that led to the compromise. By
collecting and analyzing logs, investigators can reconstruct a
timeline of events leading up to the breach, including the specific
actions taken by the attacker and the resources they accessed.
 Lets the security team make appropriate changes to security systems to prevent further attacks.
 It could also help determine the extent and severity of the
compromise.
 It will also be able to show if further systems were
compromised after the initial breach.

 It would also tell us whether or not any data was stolen, and if it was, what that data was.
 One popular and powerful log analysis system is Splunk, a very flexible and extensible log aggregation and search system.
 Splunk can grab log data from a wide variety of systems and in a large number of formats.
 It can also be configured to generate alerts and
allows for powerful visualization of activity based
on log data.
 Flood guards - Provide protection against DoS attack.
 Availability is an important tenet of security and is exactly what flood guard protections are designed to help ensure. This works by identifying common flood attack types like SYN floods or UDP floods. It then triggers an alert once a configurable threshold of traffic is reached.
 A common open-source flood guard protection tool is fail2ban.
 It watches for signs of an attack on a system and blocks further attempts from a suspected attack address.
 fail2ban is a popular tool for smaller-scale organizations.
 This flood guard protection can also be described as a
form of intrusion prevention system.
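A toy sketch of the fail2ban-style idea: count suspicious events per source address and block the source once a configurable threshold is reached (a real tool would add a firewall rule and expire bans after a timeout):

    from collections import defaultdict

    THRESHOLD = 5                        # failed attempts allowed before banning
    failed_attempts = defaultdict(int)
    banned = set()

    def record_failed_login(src_ip):
        if src_ip in banned:
            return
        failed_attempts[src_ip] += 1
        if failed_attempts[src_ip] >= THRESHOLD:
            banned.add(src_ip)           # real tools would insert a firewall/ACL rule here
            print(f"banning {src_ip} after {THRESHOLD} failed attempts")

    for _ in range(6):
        record_failed_login("203.0.113.7")   # address from the documentation/test range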
 Network separation or network segmentation is a good security
principle for IT support specialist to implement.
 Network segmentation is a security principle that involves
dividing a network into smaller sub-networks, known as
segments, to improve security. Each segment can be
isolated and secured independently, making it more
difficult for an attacker to move laterally within the
network and access sensitive resources.
 Limiting the attack surface: By dividing the network
into smaller segments, the attack surface is reduced, as
each segment can be secured independently. This
limits the potential impact of a breach and helps
prevent attackers from moving laterally within the
network
 Containing the damage: In the event of a breach,
network segmentation can help contain the damage by
limiting the attacker's access to sensitive resources. By
isolating the affected segment, the attacker's ability to
move laterally within the network is limited, reducing
the impact of the breach.
 Better access control: Network segmentation allows
organizations to implement more granular access
controls, restricting access to sensitive resources based
on the user's role and responsibilities. This reduces the
risk of unauthorized access to sensitive data.
 Simplifying security management: By segmenting the
network, security teams can focus their efforts on
securing each segment independently, making it easier
to manage security policies and controls.
 Network Hardware Hardening
 Dynamic Host Configuration Protocol DHCP - is a network protocol used to
dynamically assign IP addresses and other network configuration information to
devices on a network.

 An attacker may be able to deploy a ROGUE DHCP SERVER.
 If an attacker is able to deploy a rogue DHCP server on a network, they can
distribute DHCP leases with whatever information they want, such as IP
addresses, default gateways, and DNS server addresses. This can lead to various
security risks and connectivity issues.
 For example, the rogue DHCP server can assign IP addresses that conflict with
existing IP addresses on the network, leading to connectivity issues and network
downtime. Additionally, the rogue DHCP server can distribute incorrect default
gateway and DNS server addresses, leading to security breaches and other
issues.
 To mitigate the risks associated with rogue DHCP servers, it's important for
network administrators to implement security measures such as DHCP snooping
and DHCP authentication. DHCP snooping can help prevent rogue DHCP servers
from being deployed on the network, while DHCP authentication can help verify
the identity of authorized DHCP servers and prevent rogue DHCP servers from
distributing malicious network configuration information.
 In addition to these measures, network administrators should also regularly
monitor their networks for any signs of rogue DHCP servers and take appropriate
action to remove them. This can involve scanning the network for unauthorized
DHCP servers, monitoring DHCP traffic for anomalies, and implementing network
segmentation to limit the scope of rogue DHCP servers.
 Dynamic ARP Inspection (another form of network hardware hardening)
 Dynamic ARP Inspection (DAI) is a security feature used in network switches to
prevent ARP (Address Resolution Protocol) spoofing attacks. DAI works by
intercepting ARP packets and inspecting them for validity before forwarding
them to their intended destination. This process is done in real-time, which helps
to prevent attackers from intercepting network traffic and launching man-in-the-
middle attacks.
 When DAI is enabled on a switch, it creates a trusted database of MAC-to-IP
address mappings that is typically maintained by a DHCP server or manually
configured by the network administrator. When an ARP packet is received, the
switch checks the packet against the database to determine if it is legitimate. If
the packet is not valid, it is dropped by the switch. This prevents attackers from
sending falsified ARP messages and hijacking network traffic.
 DAI inspection occurs on Layer 2 of the OSI (Open Systems Interconnection)
model, which is the data link layer. This layer is responsible for transmitting data
frames between network nodes on the same physical network segment. By
inspecting ARP packets at this layer, DAI can identify and prevent ARP spoofing
attacks that occur within the same physical network segment.
 DAI is a powerful security feature that can help protect networks from ARP
spoofing attacks. However, it does require careful configuration and
management to ensure that legitimate devices are not blocked and that all
potential attack vectors are covered. Network administrators should also
consider implementing additional security measures, such as DHCP snooping and
IP source guard, to further enhance network security.
 IP source guard - prevents IP spoofing attacks.
NOW IF YOU REALLY WANNA LOCK DOWN YOUR NETWORK
 You can implement 802.1x - 802.1x is a standard for port-based network access
control (NAC) that provides an authentication mechanism for devices trying to
connect to a network. One of the authentication methods supported by 802.1x is the
Extensible Authentication Protocol (EAP), which is an open standard for secure
network access.
 EAP-TLS (EAP-Transport Layer Security) is a specific implementation of EAP that
uses the Transport Layer Security (TLS) protocol to provide strong authentication
and encryption for wireless and wired networks. EAP-TLS is widely used in
enterprise networks and is considered one of the most secure EAP methods.

 When a device attempts to connect to the network using EAP-TLS, it initiates a
TLS handshake with the authentication server. During the handshake, the device
presents its digital certificate to the server, which verifies the certificate and the
device's identity. If the authentication is successful, the server provides the
device with a session key, which is used to encrypt all subsequent
communications between the device and the network.
 One of the main advantages of EAP-TLS is that it provides strong mutual
authentication, which means that both the device and the server authenticate
each other. This helps to prevent man-in-the-middle attacks and other types of
security threats. In addition, EAP-TLS provides strong encryption for all network
communications, which helps to protect against eavesdropping and other types
of attacks.
 Overall, 802.1x and EAP-TLS are powerful security protocols that provide strong
authentication and encryption for network access. They are widely used in
enterprise networks and are important components of network security.
 Network Software Hardening
 Firewalls
 Firewalls are critical to securing a network. They can be deployed as dedicated
network infrastructure devices which regulate the flow of traffic for a whole
network. They can also be host-based as software that runs on a client system providing protection for that one host only. It is generally recommended to deploy both solutions.
 A Host-based Firewall provides protection for mobile devices, such as
laptops that could be used in an untrusted, potentially malicious
environment, like public wifi.
 Host-based firewalls provide an additional layer of security by protecting
individual devices (hosts) from being compromised by malicious software or
users on the same network.
 Proxies
 proxies can also be useful for protecting client devices and their traffic from
certain security threats.
 A proxy server acts as an intermediary between a client device and the
Internet. When a client device makes a request to access a website or other
online resource, the request is first sent to the proxy server, which then
forwards the request to the Internet on behalf of the client device. The
proxy server then receives the response from the Internet and sends it back
to the client device.
 One way that proxies can help protect client devices and their traffic is by
filtering out malicious traffic and blocking access to known malicious
websites or domains. This can help prevent client devices from accessing
malicious content or inadvertently downloading malware or viruses.
 In addition, proxies can also be used to enhance privacy by masking the
client device's IP address and location. By using a proxy server located in a
different geographic location, clients can effectively hide their identity and
location from the websites or services they are accessing.
 However, it's important to note that proxies can also introduce certain
security risks. For example, if the proxy server is compromised or operated
by a malicious actor, it could potentially intercept, monitor, or manipulate
the client device's traffic. Therefore, it's important to choose reputable and
secure proxy servers, and to use additional security measures such as
encryption and authentication to protect the client device's traffic.
 VPNs
 VPNs are commonly used to provide secure remote access, and link two
networks securely.
 When used for remote access, VPNs allow employees to securely connect to the
corporate network from anywhere in the world, using an encrypted tunnel that
protects their traffic from interception or snooping. This is particularly important
for businesses that need to protect sensitive data or intellectual property from
unauthorized access.
 Similarly, when used to link two or more networks together, VPNs create an
encrypted tunnel between the networks, allowing them to securely
communicate with each other over an untrusted network such as the Internet.
This is commonly used by businesses with multiple offices or remote employees
who need to access resources located in a different physical location.
 Wireless Security
Best security option for securing a WiFi network
The first security protocol introduced for WiFi networks was Wired Equivalent Privacy (WEP)
 WEP Encryption
 WEP (Wired Equivalent Privacy) is a security protocol that was designed to provide confidentiality and integrity for wireless networks, but it has since been found to have serious security vulnerabilities and is no longer recommended for use.

 There are several reasons why WEP should not be used:

 Weakness in Encryption: WEP uses a weak encryption algorithm based on the RC4 stream cipher, which has several known vulnerabilities that allow attackers to easily crack the encryption and intercept wireless network traffic.

 Weakness in Key Management: WEP uses a weak key management scheme that makes it vulnerable to brute-force attacks. Specifically, WEP uses a 24-bit initialization vector (IV) along with a user-defined key to encrypt data. The short length of the IV allows attackers to quickly determine the key by brute force, making it easy to crack WEP encryption.

 Lack of Authentication: WEP does not provide any authentication mechanism to verify the identity of users or devices on the wireless network. This makes it vulnerable to spoofing attacks, where an attacker can impersonate a legitimate user or device on the network and gain unauthorized access.
 No Integrity Protection: WEP does not provide any integrity protection,
which means that an attacker can modify or inject packets on the
wireless network without detection. This makes it vulnerable to attacks
such as packet injection, where an attacker can insert malicious
packets into the network to compromise its security.
 Overall, WEP is an insecure protocol that is easily compromised by
attackers. As a result, it is no longer recommended for use and has
been replaced by stronger security protocols such as WPA2 and WPA3,
which use stronger encryption algorithms and provide better
authentication and integrity protection.
 WPA -
 WPA (Wi-Fi Protected Access) was designed as a replacement for the older
WEP (Wired Equivalent Privacy) protocol, which had several known
vulnerabilities that made it less secure.

 WEP used a relatively weak encryption algorithm that was vulnerable to attacks that could allow attackers to decrypt wireless traffic and eavesdrop on network activity. WEP also relied on static encryption keys, which made it easier for attackers to crack the key and gain unauthorized access to the network.

 WPA addressed these weaknesses by introducing several improvements in
the security mechanism. First, it introduced a new and much stronger
encryption algorithm called TKIP (Temporal Key Integrity Protocol). TKIP
uses a unique encryption key for each data packet, making it much more
difficult for attackers to intercept and decode wireless traffic.

 WPA also introduced a more robust authentication mechanism called IEEE 802.1X, which enabled network administrators to control access to the wireless network by requiring users to authenticate with a username and password or a digital certificate. This helped to prevent unauthorized access to the network and protect against attacks such as brute force password cracking.

 Another key improvement introduced by WPA was the use of message integrity checking (MIC) to prevent packet forgery and tampering. MIC ensures that data packets are not modified in transit, providing an additional layer of security to the wireless network.

 Overall, WPA was designed as a short-term replacement for WEP that addressed its known vulnerabilities and provided a stronger and more secure alternative. Later, an improved version of WPA called WPA2 was introduced that provided even stronger security and remains the current standard for wireless network security.

 Also disabling WPS (Wi-Fi Protected Setup) can reduce the likelihood of a
WPS brute-force attack. WPS is a feature that allows users to easily set up a
secure wireless network by using a PIN or pushing a button on the router.
However, WPS has been found to have significant vulnerabilities,
particularly in the PIN-based method.

 In a WPS brute-force attack, an attacker attempts to guess the WPS PIN in order to gain access to the wireless network. This type of attack can be successful because the WPS PIN is often only 8 digits long and can be guessed through trial and error.

 Disabling WPS removes the ability for an attacker to use the PIN-based
method to gain access to the wireless network. Without the WPS PIN, an
attacker would need to resort to other methods such as cracking the Wi-Fi
password or exploiting other vulnerabilities in the wireless network.

 However, if you need to use WPS, you can use a lockout period to protect
your network against brute force attacks. A lockout period blocks any
further connection attempts for a set amount of time if an attacker tries to
connect to your network using WPS and enters the wrong PIN a certain
number of times. This means that if an attacker tries to guess the WPS PIN
repeatedly, the router will temporarily block any further connection
attempts for a set amount of time, making it more difficult for the attacker
to gain access to your network.

 Overall, while using a lockout period can add some extra protection,
disabling WPS altogether is the safest option to reduce the likelihood of
brute force attacks. You should also ensure that your wireless network is
secured with a strong password, using WPA2 or WPA3 encryption, and
regularly update your router's firmware to keep it secure.

 However, it's important to note that disabling WPS is not a complete solution for securing a wireless network. Other security measures such as strong Wi-Fi passwords, WPA2 with AES/CCMP encryption, and regular firmware updates should also be implemented to secure a wireless network.
 WPA2
 WPA (Wi-Fi Protected Access) is a security protocol that was designed to
replace the older WEP (Wired Equivalent Privacy) protocol for securing
wireless networks. Although WPA improved security over WEP, it still had
some vulnerabilities that were addressed in the later WPA2 protocol.

 One of the main weaknesses of WPA was its use of the Temporal Key
Integrity Protocol (TKIP) for encryption. TKIP was an improvement over the
encryption used in WEP, but it was still susceptible to some attacks, such as
the chop-chop attack and the fragmentation attack. These attacks allowed
attackers to bypass encryption and intercept traffic on the wireless network.

 WPA2 addressed this weakness by using the Advanced Encryption Standard (AES) algorithm for encryption, which is much stronger and more secure than TKIP. AES encryption is resistant to brute-force attacks and provides better security for wireless networks.

 Another weakness of WPA was the use of the Message Integrity Code (MIC)
for integrity checking, which was vulnerable to attacks such as the Forgery
Attack and the Replay Attack. WPA2 addressed this weakness by introducing
a new encryption protocol called Counter Mode with Cipher Block Chaining
Message Authentication Code Protocol (CCMP). CCMP provides improved
data integrity and confidentiality, making it much more difficult for
attackers to intercept and tamper with wireless traffic.

 WPA2 also introduced the use of a stronger key management mechanism called the 802.11i standard, which provides improved protection against
dictionary attacks and other forms of attack.

 Overall, WPA2 provides stronger security than WPA by addressing the vulnerabilities and weaknesses of the earlier protocol. WPA2 is currently the
recommended standard for securing wireless networks and provides a high level
of security when implemented correctly.
 Wireless Hardening
 The best security protocol to use is WPA2 with AES/CCMP mode.
 WPA2 with AES/CCMP mode is considered the best security protocol to use
for securing wireless networks due to several factors.

 First, WPA2 provides stronger security than its predecessor, WPA. WPA2
uses the Advanced Encryption Standard (AES) algorithm for encryption,
which is more secure than the Temporal Key Integrity Protocol (TKIP) used
in WPA. AES encryption is resistant to brute-force attacks and provides
better security for wireless networks.

 Second, CCMP (Counter Mode with Cipher Block Chaining Message Authentication Code Protocol) is the recommended encryption protocol for
WPA2, which provides improved data integrity and confidentiality. CCMP is
more efficient than the encryption algorithm used by WPA and provides
better protection against attacks such as replay attacks and dictionary
attacks.

 Third, WPA2 has a more robust key management mechanism than WPA.
WPA2 uses the 802.11i standard for key management, which provides
improved protection against dictionary attacks and other forms of attack.

 Finally, WPA2 with AES/CCMP mode is widely recognized as a secure and
reliable security protocol for wireless networks. It has been extensively
tested and is recommended by security experts and organizations such as
the Wi-Fi Alliance and the National Institute of Standards and Technology
(NIST).

 Overall, WPA2 with AES/CCMP mode provides a high level of security for
wireless networks and is currently the recommended standard for securing
wireless networks.
 WPA2 Enterprise would offer the highest level of security for a WiFi
network. It offers the best encryption options for protecting data from
eavesdropping third parties, and does not suffer from the manageability
or authentication issues that WPA2 Personal has with a shared key
mechanism. WPA2 Enterprise used with TLS certificates for authentication
is one of the best solutions available.
 Network Monitoring
1. Sniffing the network
 "Sniffing the network" or “Packet sniffing” refers to the practice of intercepting
and analyzing network traffic for the purpose of gathering information or
monitoring activity. This can be done using specialized software, such as network
analyzers or packet sniffers, which capture and decode packets of data as they
pass through a network.
 "Promiscuous mode" is a feature of a network interface that allows it to receive
and analyze all network traffic passing through the network, regardless of
whether the traffic is addressed to the interface or not. This mode is commonly
used by network administrators and security professionals to diagnose and
troubleshoot network issues, monitor network activity, and identify security
threats.
 "Monitor mode" is a special mode that is available on wireless network adapters,
which allows them to capture and analyze all wireless traffic that is being
transmitted in their vicinity. This mode is useful for network administrators and
security professionals who need to troubleshoot wireless network issues or
identify potential security threats.

 Both promiscuous mode and monitor mode can be used for legitimate purposes,
such as network troubleshooting and security analysis, but they can also be used
for malicious purposes, such as eavesdropping on network traffic or stealing
sensitive information. Therefore, it is important to use these tools responsibly
and in accordance with ethical and legal guidelines.
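As a rough illustration of packet sniffing, here is a minimal sketch using the third-party scapy library (assumptions: scapy is installed, the interface name eth0 exists, and the script runs with root/administrator privileges, which is what allows the NIC to capture traffic in promiscuous mode):

```python
# Capture a handful of packets on an interface and print a one-line summary
# of each. Requires scapy (pip install scapy) and sufficient privileges.
from scapy.all import sniff

def show(pkt):
    print(pkt.summary())   # e.g. "Ether / IP / TCP 10.0.0.5:443 > 10.0.0.9:51234 A"

sniff(iface="eth0", prn=show, count=10, store=False)
```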
2. Wireshark and tcpdump
 TCPDump and Wireshark are two popular tools used for capturing and
analyzing network traffic in order to diagnose and troubleshoot issues,
monitor network performance, and identify potential security threats.

 TCPDump is a command-line utility that is used to capture and display packets of data as they are transmitted over a network. It supports multiple
packet capture formats and can be used to capture packets in real-time or
read them from a file. TCPDump is commonly used by network
administrators and security professionals to diagnose network issues,
monitor network performance, and identify potential security threats.
TCPDump has a smaller footprint and requires less system resources than
Wireshark.

 Wireshark, on the other hand, is a more advanced graphical user interface (GUI) tool that provides a more comprehensive set of features and capabilities for capturing and analyzing network traffic. It is widely used for troubleshooting network problems and analyzing network performance.
With Wireshark, network administrators can capture packets in real-time or
read them from a file, and then view and analyze the packets in detail.
Wireshark has an advanced filtering system and provides a more detailed
analysis of network traffic, such as displaying the content of each packet,
identifying the source and destination of each packet, and highlighting
potential security threats. However, Wireshark requires more system
resources than TCPDump.

 Both TCPDump and Wireshark are powerful tools that can be used to
troubleshoot network issues and identify potential security threats.
TCPDump is a command-line utility that is lightweight and requires less
system resources than Wireshark, while Wireshark is a more advanced GUI-
based tool that provides a more comprehensive set of features and
capabilities for analyzing network traffic. In practice, network administrators
often use a combination of both tools to gain a more complete picture of
their network's performance and identify potential issues or security
threats.
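For example, a capture written by tcpdump (or exported from Wireshark) can also be post-processed with a few lines of code. A minimal sketch, assuming scapy is installed and a file named capture.pcap exists:

```python
# Summarize a packet capture: count packets per source IP ("top talkers").
from collections import Counter
from scapy.all import rdpcap, IP

packets = rdpcap("capture.pcap")   # load the capture file into memory
talkers = Counter(pkt[IP].src for pkt in packets if IP in pkt)

for src, count in talkers.most_common(5):
    print(f"{src:<15} {count} packets")
```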
 Why packet capture and analysis fits into security at this point.
 Like log analysis, traffic analysis is also an important part of network
security.
 Traffic analysis is done using packet captures and packet analysis. Traffic on
a network is basically a flow of packets.
 Now being able to capture and inspect those packets is important to
understanding what type of traffic is flowing on our networks that we’d like
to protect.
3. Intrusion Detection/ Prevention Systems
 IDS or IPS systems operate by monitoring network traffic and analyzing it.
 As an IT support specialist, you may need to support the underlying platform that the
IDS/IPS runs on. You might also need to maintain the system itself, ensuring that rules are
updated and you may even need to respond to alerts. So what exactly do IDS and IPS
systems do?

 An Intrusion Detection and Prevention System (IDPS) is a security solution designed to detect and prevent unauthorized access, attacks, and other security threats on a network or computer system. It monitors network traffic and analyzes it.

 An Intrusion Detection System (IDS) is a security technology that monitors network or host activity for signs of suspicious or malicious behavior and generates alerts to notify
security personnel.

 Intrusion Prevention Systems (IPS) are designed to prevent attacks by blocking or restricting access to specific network resources or systems. IPS can operate in several
ways, including signature-based, policy-based, and behavior-based prevention.

 The difference between an IDS and an IPS system is that IDS is only a detection
system. It won’t take action to block or prevent an attack when one is detected, it will
only log and alert. But an IPS system can adjust firewall rules on the fly to block or
drop the malicious traffic when it’s detected.
 IPS and IDS systems can either be host-based or network based.

 NIDS stands for Network Intrusion Detection System, which is an IDS that monitors network traffic for signs of intrusion or attack. It typically analyzes network packets to identify known attack signatures or abnormal traffic patterns that may indicate an attack.

 HIDS stands for Host-based Intrusion Detection System, which is an IDS that monitors activity on a single host or endpoint, such as a server or workstation. It typically analyzes system logs, file system changes, and other host activity to identify signs of intrusion or attack.

 NIPS stands for Network Intrusion Prevention System, which is an IPS that operates
at the network level, monitoring network traffic and actively preventing attacks in
real-time.

 HIPS stands for Host-based Intrusion Prevention System, which is an IPS that
operates on a single host or endpoint, such as a server or workstation. It typically
analyzes system logs, file system changes, and other host activity to identify signs of
intrusion or attack and can block malicious activity in real-time to prevent an attack
from succeeding.

 NIDS systems resemble firewalls in a lot of ways, but a firewall is designed to prevent intrusions by blocking potentially malicious traffic coming from outside and enforcing ACLs between networks. A NIDS is meant to detect and alert on potentially malicious activity coming from within the network. Plus, firewalls only have visibility of traffic flowing between the networks they're set up to protect; they generally wouldn't have visibility of traffic between hosts inside the network.

 The location of the NIDS must be considered carefully when you deploy a system. It needs to be located in the network topology in a way that gives it access to the traffic we'd like to monitor. A good way to get access to network traffic is to use the port mirroring functionality found in many enterprise switches. This allows all packets on a port, port range, or entire VLAN to be mirrored to another port, where our NIDS host would be connected.

 With this configuration, our NIDS machine would be able to see all packets
flowing in and out of hosts on the switch segment.

 This lets us monitor host-to-host communications and traffic from hosts to external networks like the internet.
 A Network Intrusion Detection System (NIDS) typically
analyzes network traffic by capturing and analyzing packets on a dedicated
analysis port. To capture all the packets on this port, the NIDS host would enable
promiscuous mode on the network interface controller (NIC) of the analysis port
for wired networks, or monitor mode for wireless networks.

 By enabling promiscuous mode or monitor mode, the NIC will capture all packets
that pass through the network, regardless of whether they are intended for the
NIDS host or not. The NIDS software running on the host will then analyze these
packets to identify potential security threats or suspicious behavior, such as
unauthorized access attempts, malware infections, or network-based attacks.

 Enabling promiscuous mode or monitor mode on the analysis port is a critical step in configuring a NIDS, as it ensures that all network traffic is captured and
analyzed, providing a comprehensive view of network activity and potential
threats.


 In the context of a Network Intrusion Detection System (NIDS), port mirroring can be
used to capture network traffic for analysis and detection of security threats. This is
achieved by configuring the switch to mirror traffic from one or more network ports to
a dedicated analysis port, where the NIDS software can capture and analyze the
traffic.

 The NIDS host would analyze this traffic by enabling promiscuous mode on the
analysis port.
 This is the network interfaces that’s connected to the mirror port on a
switch. It can see all packets being passed and perform analysis on the
traffic.
 Since this interface is used for receiving mirrored packets from the network
we’d like to monitor, a NIDS host must have at least two network interfaces.
One is for monitoring and analysis and a separate one is for connecting to
our network for management and administrative purposes.
 Placement of a NIPS would differ from a NIDS. This is because a prevention system must be able to take action against suspected malicious traffic. In order for a NIPS device to block a detected threat, it must be placed in line with the traffic being monitored. This means that the traffic being monitored must pass through the NIPS device; if that weren't the case, the NIPS host wouldn't be able to take action on suspected traffic.
 For example, a NIDS device is a passive observer that only watches the traffic and sends an alert if it sees something. A NIPS device, by contrast, not only monitors traffic but can also take action on the traffic it's monitoring, blocking or dropping the malicious traffic.
 The detection of threats or malicious traffic is usually handled through signature-based
detection. Similar to how antivirus software detects malware.
 Signature - Unique characteristics of known malicious traffic. They might be specific sequences of packets, or packets with certain values encoded in a specific header field.
 This allows intrusion detection and prevention systems to easily and quickly recognize known bad traffic from sources like botnets, worms, and other common attack vectors on the internet.
 But similar to antivirus, less common or targeted attacks might not be detected
by signature-based system, since there might not be signatures developed for
these cases.
 So, it’s also possible to create custom rules to match traffic that might be
considered suspicious but not necessarily malicious. This would allow
investigators to look into the traffic in more detail to determine the badness level.
If the traffic is found to be malicious, a signature can be developed from the
traffic and incorporated into the system.
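To make the idea of signature matching concrete, here is a toy sketch (this is not how any particular IDS such as Snort or Suricata is implemented, and the byte patterns and addresses are made-up examples):

```python
# Toy signature-based detection: flag payloads containing a byte pattern from
# a list of "known bad" signatures. Real IDS rules also match on ports, flags,
# and offsets; these patterns are hypothetical.
SIGNATURES = {
    b"\x90\x90\x90\x90\x90\x90": "suspected NOP sled",
    b"cmd.exe /c": "suspected command injection",
}

def inspect(payload: bytes, src: str, dst: str) -> None:
    for pattern, name in SIGNATURES.items():
        if pattern in payload:
            print(f"ALERT: {name} from {src} to {dst}")

inspect(b"GET /index.html?q=cmd.exe /c whoami HTTP/1.1", "10.0.0.7", "10.0.0.2")
```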
 When a NIDS detects something malicious, it would log the detection event along w/ a full packet capture of the malicious traffic. An alert would also usually be triggered to notify the investigating team to look into the detected traffic.

 Together, an IDPS can offer a comprehensive security solution for organizations, providing detection and prevention capabilities to protect against a wide range of
security threats. However, IDPS must be configured and maintained properly to be
effective and should be used in conjunction with other security measures such as
firewalls, antivirus software, and access controls to provide robust protection against
security threats.

4. Unified Threat Management
5. Home Network Security

MODULE FIVE DEFENSE IN DEPTH


 System Hardening
 Disabling Unnecessary Components
 Host-Based Firewall
 Logging and Auditing
A critical part of any security architecture is logging and alerting.
It wouldn't do much good to have all these defenses in place if we have no idea whether they're working or not. We need visibility into the security systems in place to see what kind of traffic they're seeing.

We also need to have the visibility into the logs of all of our infrastructure devices and equipment that
we manage but it's not enough to just have logs.

We also need ways to safeguard logs and make them easy to analyze and review.

If there's a dedicated security team at your company, they would be performing this analysis but at a
smaller company this responsibility would likely fall to the IT team. So let's make sure you're prepared
with the skills you might need for incident investigation.

Many investigative techniques can also be applied to troubleshooting. All systems and services
running on hosts will create logs of some kind with different levels of detail. It depends on what it's logging and what events it's configured to log.

So an authentication server would log every authentication attempt whether it's successful or not.

A firewall would log traffic that matches rules with details like source and destination addresses and
ports being used. All this log information gives us details about the traffic and activity that's
happening on our network and systems. This can be used to detect compromise or attempts to attack
the system.

When there are a large number of systems located around your network, each with their own log
format, it can be challenging to make meaningful sense of all this data.

This is where security information and event management systems or SIEMS come in.
A SIEM can be thought of as a centralized log server with some extra analysis features added on top. You can
think of SIEM as a form of centralized logging for security administration purposes. A SIEM system
gets logs from a bunch of other systems. It consolidates the logs from all the different places and puts them in one centralized location. This makes handling logs a lot easier. As an IT support specialist, an
important step you'll take in logs analysis is normalization. This is the process of taking log data in
different formats and converting it into a standardized format that's consistent with a defined log
structure.

As an IT support specialist you might configure normalization for your log sources. For example, log
entries from our firewall may have a timestamp using a year, month and day format. While logs from
our client machines may use day, month, year format. To normalize this data, you choose one
standard date format then you define what the fields are for the log types that need to be converted.

When logs are received from these machines, the log entries are converted into the standard that we
defined and stored by the logging server. This lets you analyze and compare log data between
different log types and systems in a much easier fashion.
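As a rough illustration of that normalization step (a sketch only; the source names, input formats, and the chosen standard format are assumptions, not any particular SIEM's configuration):

```python
# Normalize timestamps from two hypothetical log sources into one standard
# ISO 8601 format so entries can be compared and correlated.
from datetime import datetime

SOURCE_FORMATS = {
    "firewall": "%Y-%m-%d %H:%M:%S",   # e.g. "2023-04-01 13:22:05"
    "client":   "%d/%m/%Y %H:%M:%S",   # e.g. "01/04/2023 13:22:05"
}

def normalize(source: str, raw_timestamp: str) -> str:
    parsed = datetime.strptime(raw_timestamp, SOURCE_FORMATS[source])
    return parsed.isoformat()          # the standard format we chose to store

print(normalize("firewall", "2023-04-01 13:22:05"))   # 2023-04-01T13:22:05
print(normalize("client", "01/04/2023 13:22:05"))     # 2023-04-01T13:22:05
```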

So what type of information should you be logging? Well that's a great question.

If you log too much info, it's difficult to analyze the data and find useful information, plus storage requirements for saving the logs become expensive very quickly. But if you log too little, then the information won't provide any useful insights into your systems and network.

Finding that middle ground can be difficult. It will vary depending on the unique characteristics of the
systems being monitored and the type of activity on the network. No matter what events are logged,
all of them should have information that will help understand what happened and reconstruct the
events.

There are lots of important fields to capture in log entries, like the timestamp, the event or error code, the service or application being logged, the user or system account associated with the event, and the devices involved in the event.

Timestamps are super important to understanding when an event occurred.

Fields like source and destination addresses will tell us who was talking to whom.

For application logs you can grab useful information from the logged in user associated with the
event and from what client they used.

On top of the analysis assistance it provides, a centralized log server also has security benefits.
By maintaining logs on a dedicated system, it's easier to secure the system from attack.

Logs are usually targeted by attackers after a breach so that they can cover their tracks. By having critical systems send logs to a remote logging server that's locked down, the details of a breach should still be logged.

A forensics team will be able to reconstruct the events that led to the compromise.

Once we have logging configured and the relevant events recorded on a centralized log server, what do we do with all the data? Well, analyzing log details depends on what you're trying to achieve.

Typically when you look at aggregated logs as an IT support specialist, you should pay attention to
patterns and connections between traffic.

So if you're seeing a large percentage of Windows hosts all connecting to a specific address outside your network, that might be worth investigating; it could signal a malware infection.

Once logs are centralized and standardized you can write automated alerting based on rules. Maybe
you'll want to define an alert rule for repeated unsuccessful attempts to authenticate to a critical
authentication server.

Lots of SIEM solutions also offer handy dashboards to help analysts visualize this data. Having data in
a visual format can potentially provide more insight. You can also write some of your own monitoring
and alert systems.
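Here's a minimal sketch of that kind of home-grown alert rule (the log file name, line format, and threshold are all hypothetical examples, not a real product's format):

```python
# Alert when a single source generates repeated failed authentication attempts.
from collections import Counter

THRESHOLD = 5
failures = Counter()

def process_log_line(line: str) -> None:
    # Example line: "2023-04-01T13:22:05 auth FAILED user=bob src=10.0.0.9"
    if " auth FAILED " in line:
        src = line.split("src=")[-1].strip()
        failures[src] += 1
        if failures[src] == THRESHOLD:
            print(f"ALERT: {THRESHOLD} failed logins from {src}")

with open("auth.log") as log:
    for line in log:
        process_log_line(line)
```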

Now it doesn't matter if you're using a SIEM solution or writing your own. It can be useful to break down things like the most commonly used protocols in the network, quickly see the top talkers in the network, and view reported errors over time to reveal patterns.

Another important component of logging to keep in mind as an IT support specialist is retention. Your log storage needs will vary based on the number of systems being logged, the amount of detail in the logs, and the rate at which logs are created. How long you want or need to keep logs around will also really influence the storage requirements for a log server. Some examples of logging servers and SIEM solutions are the open-source rsyslog, Splunk Enterprise Security, IBM Security QRadar, and RSA Security Analytics.

MODULE FIVE DEFENSE IN DEPTH


A. SYSTEM HARDENING
Defense in depth is a security strategy that involves implementing multiple layers of security controls
and measures to protect against a wide range of potential threats and attacks. The goal of defense in
depth is to create multiple barriers or layers of protection that can slow down or prevent attackers
from penetrating the network or system, and to minimize the impact of any successful attacks.

By the end of this module, we’ll be able to implement the appropriate methods for system hardening
and application hardening. We’ll also be able to determine the policies to use for operating system
security.

1. DISABLING UNNECESSARY COMPONENTS


Disabling unnecessary components is one aspect of defense in depth. By reducing the number of
services and components that are exposed to the network, the attack surface is reduced, making it
more difficult for attackers to find and exploit vulnerabilities.

An attack vector is a path or method that attackers use to gain unauthorized access to a network or
system. Attack vectors can include techniques such as phishing, malware, and social engineering, as
well as exploiting vulnerabilities in software or hardware.

The attack surface refers to the total number of entry points that attackers can use to gain
unauthorized access to a network or system. This includes all software and hardware components
that are exposed to the network, such as servers, routers, firewalls, and other devices. The larger the
attack surface, the more vulnerable the network or system is to attack. By reducing the attack surface
through measures such as disabling unnecessary components, the network or system becomes less
vulnerable to attack.
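One practical way to start reviewing a host's attack surface is to enumerate what is actually listening for connections. A minimal sketch, assuming the third-party psutil package is installed and the script runs with enough privileges to see process names:

```python
# List listening TCP sockets and the processes behind them, as a starting
# point for deciding which services are unnecessary and can be disabled.
import psutil

for conn in psutil.net_connections(kind="inet"):
    if conn.status == psutil.CONN_LISTEN:
        name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        print(f"{conn.laddr.ip}:{conn.laddr.port:<6} {name}")
```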

Host-based Firewalls
Host-based firewalls are important to creating multiple layers of security. They protect individual hosts from being compromised when they are used in untrusted, potentially malicious environments. They also protect individual hosts from potentially compromised peers inside a trusted network. A network-based firewall protects our internal network by filtering traffic in and out of it, while the host-based firewall on each individual host protects that one machine. A host-based firewall (HBF) plays a big part in reducing what's accessible to an outside attacker.

It provides flexibility while only permitting connections to selected services on a given host from specific networks or IP ranges. This ability to restrict connections from certain origins is usually used to implement a highly secure host or network.

From there, access to critical or sensitive systems or infrastructure is permitted. These are called
bastion hosts or networks, and are specifically hardened and minimized to reduce what's permitted
to run on them.

Bastion hosts are usually exposed to the Internet, so you should pay special attention to hardening
and locking them down to reduce the chances of compromise. But they can also be used as a gateway
or access portal into more sensitive services like core authentication servers or domain controllers.
This would let you implement more secure authentication mechanisms and ACLs on the bastion hosts
without making it inconvenient for your entire company.

Monitoring and logging can be prioritized for these hosts more easily.

Typically, these hosts or networks would also have severely limited network connectivity. It's usually
just to the secure zone that they're designed to protect and not much else.

Applications that are allowed to be installed and run on these hosts, would also be restricted to those
that are strictly necessary since these machines have one specific purpose. Part of the host-based
firewall rules will likely also provide ACLs that allow access from the VPN subnet. It's good practice to
keep the network that VPN clients connect into separate using both subnetting and VLANs. This gives
you more flexibility to enforce security on these VPN clients. It also lets you build additional layers of
defenses.
While a VPN host should be protected using other means, it's still a host that's operating in a
potentially malicious environment. This host is then initiating a remote connection into your trusted
internal network.
These hosts represent another potential vector of attack and compromise. Your ability to separately
monitor traffic coming and going from them is super useful.

There's an important thing for you to consider when it comes to host-based firewalls, especially for client systems like laptops. If the users of the system have administrative rights, then they have the ability to change firewall rules and configurations. This is something you should keep in mind and
make sure to monitor with logging. If management tools allow it, you should also prevent the
disabling of the host-based firewall. This can be done with Microsoft Windows machines when
administered using Active directory as an example.
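As an example of the kind of restrictive host-based rule set described above, here is a sketch using Ubuntu's ufw front end driven from Python (the management subnet and port are hypothetical; on Windows hosts you would typically push equivalent Windows Firewall rules through Group Policy instead):

```python
# Apply a default-deny inbound policy and only allow SSH from a management
# subnet. Must run with root privileges; 10.20.30.0/24 is an example subnet.
import subprocess

RULES = [
    ["ufw", "default", "deny", "incoming"],
    ["ufw", "default", "allow", "outgoing"],
    ["ufw", "allow", "from", "10.20.30.0/24", "to", "any", "port", "22", "proto", "tcp"],
    ["ufw", "--force", "enable"],
]

for rule in RULES:
    subprocess.run(rule, check=True)   # stop if any rule fails to apply
```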

2. Logging and Auditing

A critical part of any security architecture is logging and alerting.

It wouldn't do much good to have all these defenses in place if we have no idea whether they're working or not. We need visibility into the security systems in place to see what kind of traffic they're seeing.

Logging refers to the process of capturing and storing events or actions that occur within a system,
application, or network infrastructure. These events can include user actions, system events, errors,
warnings, and other relevant information that can be used for troubleshooting, debugging, and
analysis purposes. Logs are typically stored in a centralized location, such as a log server, for easy
access and management.

Auditing, on the other hand, refers to the process of examining and analyzing the logs to determine if
any unauthorized or inappropriate activities have occurred. Auditing involves reviewing the logs to
identify any anomalies or patterns that may indicate a security breach or other types of malicious
activity. Auditing is an important part of ensuring the security and integrity of systems and networks
and can help detect and prevent security incidents before they cause significant damage.

Both logging and auditing are essential components of an effective security strategy. Logging provides
a detailed record of system and user activity, while auditing enables security teams to analyze this
data to identify potential security risks and take appropriate action to mitigate them. In summary,
logging is the act of recording events, and auditing is the act of analyzing those records to ensure
system security and integrity.

Centralized logging is a technique used in computer systems and networks to collect and store log
data from various sources into a single location or repository. The purpose of centralized logging is to
provide a unified view of system and network activity, making it easier to monitor and analyze system
events for security and operational purposes.

In a centralized logging architecture, log data is typically collected from various sources, such as
servers, network devices, and applications, and forwarded to a central logging server or database. The
central logging server aggregates and stores the log data, providing a single point of access for log
analysis and monitoring.

Centralized logging provides several benefits, including:

1. Improved security: Centralized logging allows for real-time monitoring of system and network
activity, making it easier to detect and respond to security threats. It also enables security analysts to
identify and investigate security incidents more efficiently.

2. Operational insights: By aggregating log data from various sources, centralized logging can provide
insights into system and network performance, allowing for proactive identification of issues before
they become critical.

3. Compliance: Many compliance regulations require organizations to retain and analyze log data.
Centralized logging can help organizations comply with these regulations by providing a single source
for log data collection, storage, and analysis.

However, centralized logging also has some potential drawbacks, such as increased network traffic
and storage requirements, as well as the need for proper management and monitoring of the central
logging server.

To effectively implement centralized logging, organizations must carefully plan and design their
logging architecture, selecting appropriate log sources, log formats, and storage options. They must
also consider the security implications of centralized logging, such as protecting log data from
unauthorized access or tampering. Overall, centralized logging is a valuable technique for improving
system and network security, as well as operational efficiency and compliance.
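As a minimal sketch of forwarding logs to a central collector (assumptions: a syslog server is reachable at the placeholder address 10.20.30.40 on UDP port 514, and the application name and message are examples; production deployments often use a TLS-protected transport instead of plain UDP):

```python
# Send application log records to a remote syslog server so they are stored
# centrally rather than only on the local host.
import logging
import logging.handlers

logger = logging.getLogger("payroll-app")            # example application name
logger.setLevel(logging.INFO)

handler = logging.handlers.SysLogHandler(address=("10.20.30.40", 514))
handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.warning("Failed login for user=bob from 10.0.0.9")
```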

3. Windows Defender Guide


Preventing threats across an enterprise environment can be challenging for IT Support professionals.
Microsoft 365 Defender can help to simplify this responsibility. Defender provides enterprise-wide
security through an integrated suite of tools. It offers tools to prevent attacks, detect threats,
investigate security breaches, and coordinate effective response strategies. The Defender portal also
offers an action center for monitoring incidents and alerts, as well as for threat hunting and Analytics.

Microsoft 365 Defender protection and services include:

Defender for Endpoint: Protects network endpoints including servers, workstations, mobile devices,
and IoT devices. Provides preventative safeguards, breach detections, automated analyses, and threat
response services.

Defender Vulnerability Management: Protects assets including hardware, software, licenses, networks, and data. Provides asset inventory, vulnerability discovery, configuration assessment, risk-based prioritization, and remediation tools.

Defender for Office 365: Protects Microsoft 365 (formerly Office 365), including Exchange, Outlook,
files, and attachments. Guards against malicious threats entering from email messages, links (URLs),
and collaboration tools.

Defender for Identity: Protects user identities and credentials. Detects, identifies, and investigates
advanced threats, compromised identities, and malicious actions performed using stolen user
identities or by internal threats.

Azure Active Directory Identity Protection: Protects cloud-based identities in Azure by automating
detection and resolutions for identity risks.

Defender for Cloud Apps: Protects cloud applications by providing deep visibility searches, robust data
controls, and advanced threat protection.

Using Microsoft 365 Defender

As an IT Support professional in an organization, you might use Microsoft 365 Defender to monitor
your enterprise’s IT security. You can customize the Defender portal Home page by job roles. Various
security cards can be selected to appear on the Home page for your role. For example, you might see
cards for monitoring:

Identities: Monitor user identities for suspicious or risky behaviors.

Data: Track user activity that is risky to data security.

Devices: See alerts, breach activity, and other threats on devices connected to the organization’s
network.

Apps: Observe how cloud apps are being used in your organization.

Incidents: Review attacks through compiled comprehensive incident data.

Alerts: View alerts compiled from across the Microsoft 365 suite.

Advanced hunting: Scan for suspicious files, malware, and risky activities.

Threat Analytics: View information about current cybersecurity threats.

Secure score: Get a calculated score for your security configuration and recommendations on how to
improve your score.

Learning hub: Easily access Microsoft 365 security tutorials and other learning materials.

Reports: Obtain information to help you better protect your organization.

Microsoft 365 Defender aggregates and organizes this monitoring data to provide IT Support
professionals details on where attacks began, which malicious tactics were used, the scope of the
attacks, and other related incident information.

Microsoft 365 Defender in action


The following are examples of how a cyberattack might penetrate and infect an enterprise network.
For each type of malicious attack, a potential Microsoft 365 Defender response follows, illustrating
how the security suite could respond:

A phishing attempt enters through email: An employee in an organization receives an email from a
business that appears to be legitimate, like a bank. The email might claim that there is a problem with
the employee’s account and that they must click on a given link to resolve the problem. However, the
phishing email actually contains a link to a malicious website that a cybercriminal disguised to look
like a real bank. If the employee clicks on the link to view the website, the site requests that the user
enter their account credentials or other sensitive information. This information is then transmitted to
the cybercriminal.
Microsoft Defender for Office 365 detects the emailed phishing scam by monitoring Exchange and
Outlook. Both the employee and the IT Support team are alerted about this attempted phishing
attack.

Malware enters through social media: An employee clicks on an enticing link posted on their favorite
social media app. The link triggers an automatic download of a malware file to the employee’s laptop.
Microsoft Defender for Endpoint monitors the employee’s laptop for suspicious malware signatures.
Upon detecting the malware, Defender for Endpoint alerts the employee and the organization’s IT
Support team about the malware and discloses its endpoint location.

A cybercriminal intercepts an employee’s work login credentials: An employee accesses their work
account using their laptop and an open Wi-Fi access point in a busy coffee shop. A cybercriminal is in the same coffee shop to intercept and collect unprotected information flowing through the open Wi-Fi access point. The cybercriminal obtains the employee's user account credentials and uses them to
hijack the employee’s work account. The cybercriminal then begins a malicious attack on the
employer’s network.
Microsoft Defender for Identity can detect the sudden change in activity on the employee’s user
account. Defender for Identity alerts the employee and the IT Support team about the compromised
user identity.

A virus enters a cloud drive through a file upload: An employee unknowingly uploads a file that is infected with a virus to their work cloud storage drive. When the employee opens the file from the cloud drive, the virus is activated and begins changing the security settings on the other files in the employee's cloud drive.
Microsoft Defender for Cloud Apps detects the unusual pattern of activity and alerts the employee
and IT Support team of the suspicious activity in the cloud account.

User Account Control (UAC)


User Account Control (UAC) allows IT administrators to create standard user accounts with limited
access rights and privileges for end users. This configuration can prevent users from installing
unauthorized programs, changing system settings, tampering with firewalls, and more. In order to
perform these types of tasks, administrator credentials must be provided. For less restrictive controls,
UAC provides the option to grant end users local administrative privileges for approved activities that
require administrative privileges. For more restrictive controls, UAC can require global administrator
credentials be entered for each and every administrative change the user attempts to make.

4. Anti-malware protection
Anti malware defenses are a core part of any company’s security model in this day and age.

Today, the internet is full of bots, viruses, worms and other automated attacks.

Lots of unprotected systems would be compromised in a matter of minutes if directly connected to the internet w/out any safeguards or protections in place.

While modern operating systems have reduced this threat vector by having basic firewalls enabled by
default, there’s still huge amount of attack on the internet.

Anti-virus software is signature-based. This means that it has a database of signatures that identify known malware, like the unique file hash of a malicious binary or the files associated with an infection. Or it could be the network traffic characteristics that malware uses to communicate with a command and control server.

Antivirus software will monitor and analyze things, like new files being created or being modified on
the system, in order to watch for any behavior that matches a known malware signature.

If it detects activity that matches the signature, depending on the signature type, it will attempt to
block the malware from harming the system.

But some signatures might only be able to detect the malware after the infection has occurred.
In that case, it may attempt to quarantine the infected files, or it'll just log and alert on the detection event. At a high level, this is how all antivirus products work.

There are two issues w/ antivirus software though.


1) The first is that they depend on antivirus signatures distributed by the antivirus software vendor.
The effectiveness of antivirus software depends on timely updates of virus definitions and
signatures.
2) The second is that they depend on the antivirus vendor discovering new malware and writing
new signatures for newly discovered threats. Until the vendor is able to write new signatures and publish and disseminate them, your antivirus software can't protect you from these emerging threats.

Antivirus software, which is designed to protect systems, actually represents an additional attack surface that
attackers can exploit.

It is true that antivirus software, like any other software installed on a system, can potentially be
exploited by attackers to gain unauthorized access or perform malicious actions on the system. This is
because antivirus software typically operates with high privileges and has access to a wide range of
system resources, making it an attractive target for attackers.

Moreover, antivirus software often relies on complex and sophisticated detection algorithms to
identify and block threats, which can also be targeted by attackers. For instance, attackers can
attempt to exploit vulnerabilities in the antivirus software itself, trick it into ignoring or whitelisting
malicious files or bypassing its detection mechanisms.

That said, it is important to note that the benefits of using antivirus software typically outweigh the
potential risks. Antivirus software can effectively detect and prevent a wide range of malware and
other threats from infecting and damaging a system, which can help mitigate the risk of data
breaches, theft, and other malicious activities. Additionally, most antivirus vendors regularly release
updates and patches to address any identified vulnerabilities in their software and improve its overall
security.

Overall, while it is true that antivirus software can potentially represent an additional attack surface
for attackers, the benefits of using such software to protect systems usually outweigh the risks. It is
important for users to regularly update their antivirus software and ensure that it is configured to
provide maximum protection while minimizing potential risks.

And remember, our defense and depth concept involves multiple layers of protection. Antivirus
software is just one piece of our anti malware defenses.

If antivirus can't protect us from the threats we don't know about, how do we protect against the
unknown that's out there? Well, antivirus operates on a blacklist model, checking against a list of known bad things and blocking what gets matched.
There's a class of anti-malware software that does the opposite. Binary whitelisting software operates off a whitelist. It's a list of known good and trusted software, and only things that are on the list are
permitted to run. Everything else is blocked.
I should call out that this typically only applies to executable binaries, not arbitrary files like pdf
documents or text files.
This would naturally defend against any unknown threats but at the cost of convenience.

Now, imagine if you had to get approval before you could download and install any new software,
that would be really annoying. It's for this reason that binary whitelisting software can trust software
using a couple different mechanisms.

Binary whitelisting software can use both cryptographic hashes and software signing certificates as
trust mechanisms to whitelist software and prevent unauthorized or malicious code from executing
on a system.

A software signing certificate is a digital certificate that is issued to a software publisher by a trusted
certificate authority (CA). The certificate contains information about the publisher, such as their name
and contact information, as well as a public key that can be used to verify the digital signature of the
code.

When a software publisher signs a binary with their software signing certificate, they are essentially
vouching for the authenticity and integrity of the code. The digital signature can be verified by the
binary whitelisting software, which can then allow the code to run if it is deemed trustworthy.

Using software signing certificates as a trust mechanism can provide an additional layer of security
beyond just verifying the cryptographic hash of a binary. However, it is important to note that
software signing certificates can potentially be compromised or misused if the private key associated
with the certificate falls into the wrong hands. Therefore, it is important for software publishers to
properly secure their private keys and take other measures to prevent unauthorized access to their
signing infrastructure.

5. Disk Encryption
FDE is an important factor in a defense in depth security model. It provides protection from some
physical forms of attack.
Full-disk encryption (FDE)
Full Disk Encryption (FDE) is a security technique that involves encrypting all of the data on a storage
device, including the operating system, applications, and user data. FDE ensures that all the data on
the disk is protected and inaccessible to unauthorized users, even if the device is lost, stolen, or
accessed by an attacker.

With FDE, the encryption process is performed at the disk level, meaning that all the data on the disk
is encrypted, not just individual files or folders. This provides a high level of security and protection
against attacks that attempt to access or steal data from the device.

FDE typically uses symmetric encryption, where the same key is used for both encryption and
decryption. The encryption key is typically derived from a user password or passphrase, meaning that
the key is only accessible to authorized users who know the password.
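To illustrate just the passphrase-derived symmetric key idea, here is a sketch using the third-party cryptography package (this is not how BitLocker, FileVault, or dm-crypt are implemented internally; it only demonstrates deriving a key from a passphrase and using it for encryption and decryption):

```python
# Derive a symmetric key from a passphrase and use it to encrypt and decrypt.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

salt = os.urandom(16)                       # stored alongside the ciphertext
kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600000)
key = base64.urlsafe_b64encode(kdf.derive(b"correct horse battery staple"))

f = Fernet(key)
ciphertext = f.encrypt(b"confidential payroll data")
print(f.decrypt(ciphertext))                # b'confidential payroll data'
```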

During the boot process, critical boot files are decrypted to allow the operating system to start. This
requires the use of the encryption key, which is typically entered by the user at boot time. Once the
system has booted, all the data on the disk remains encrypted until it is accessed by an authorized
user or application.

FDE is commonly used in environments where data security is a high priority, such as in government,
financial, and healthcare organizations. It is also increasingly being used on personal devices, such as
laptops and smartphones, to protect personal data and sensitive information.

Overall, FDE is a powerful security technique that provides strong protection for data stored on a disk,
ensuring that it remains inaccessible to unauthorized users or attackers.

Systems w/ their entire hard drive’s encrypted are resilient against data theft. They’ll prevent an
attacker from stealing potentially confidential information from a hard drive that’s been stolen or lost.
W/out also knowing the encryption password or having access to the encryption key, the data on the
hard drive is just meaningless gibberish.

This is a very important security mechanism to deploy for more mobile devices like laptops, cellphones
and tablets.

But it’s also recommended for desktops and servers to since disk encryption not only provides
confidentiality but also integrity.
This means that an attacker with physical access to a system can’t replace system files w/ malicious
ones or install malware.
Having the disk fully encrypted protects from data theft and unauthorized tampering even if an
attacker has physical access to the disk.

There are first-party full-disk encryption (FDE) solutions from Microsoft and Apple, called BitLocker and
FileVault 2, respectively.

There are also a bunch of third-party and open-source solutions. On Linux, the dm-crypt package is
very popular.

There are also offerings from PGP, VeraCrypt and a host of others.

QUIZ SYSTEM HARDENING

1.)What is an attack vector?


 An attack vector can be thought of as any route through which an attacker can interact with
your systems and potentially attack them.

2.)Disabling unnecessary components serves which purposes?


 Every unnecessary component represents a potential attack vector. The attack surface is the
sum of all attack vectors. So, disabling unnecessary components closes attack vectors, thereby
reducing the attack surface.

4.)A good defense in depth strategy would involve deploying which firewalls?
 Defense in depth involves multiple layers of overlapping security. So, deploying both host- and
network-based firewalls is recommended.

5.)Using a bastion host allows for which of the following?


 Bastion hosts are special-purpose machines that permit restricted access to more sensitive
networks or systems. By having one specific purpose, these systems can have strict
authentication enforced, more firewall rules locked down, and closer monitoring and logging.

6.)What benefits does centralized logging provide?


Centralized logging is really beneficial, since you can harden the log server to resist attempts from
attackers trying to delete logs to cover their tracks. Keeping logs in place also makes analysis on
aggregated logs easier by providing one place to search, instead of separate disparate log systems.

7. ) What are some of the shortcomings of antivirus software today?


Antivirus software operates off a blacklist, blocking known bad entities. This means that brand
new, never-before-seen malware won't be blocked.

8. ) How is binary whitelisting a better option than antivirus software?


By blocking everything by default, binary whitelisting can protect you from the unknown threats
that exist without you being aware of them.
9. ) What does full-disk encryption protect against?
With the contents of the disk encrypted, an attacker wouldn't be able to recover data from the drive
in the event of physical theft. An attacker also wouldn't be able to tamper with or replace system
files with malicious ones.
10. ) What's the purpose of escrowing a disk encryption key
Key escrow allows the disk to be unlocked if the primary passphrase is forgotten or unavailable for
whatever reason.

B. APPLICATION HARDENING
1. Software Patch Management
As an IT Support Specialist, it’s critical that you make sure that you install software updates and
security patches in a timely way, in order to defend your company’s systems and networks.

Software updates don’t just improve software products by adding new features and improving
performance, and stability. They also address security vulnerabilities.

Patching isn’t just necessary for software, but also operating systems and firmware that run on
infrastructure devices.

Every device has code running on it that might have software bugs that could lead to security vulnerabilities, from routers and switches to phones and even printers.

Operating system vendors usually push security related patches pretty quickly when an issue is
discovered. They will usually release security fixes out of cycle from typical OS upgrades to ensure a
timely fix, because of the security implications.
But for embedded devices like networking equipment or printers, this might not be typical.

Critical infrastructure devices should be approached carefully when you apply updates.
There’s always the risk that a software update will introduce a new bug that might affect the
functionality of the device, or that the update process itself will go wrong and cause an outage.

To minimize the risk of introducing new issues through software updates, it is important to thoroughly
test updates in a controlled environment before deploying them in the production environment. This
can involve creating a test environment that closely mirrors the production environment, and testing
the updates in that environment to identify any issues before deploying the updates to the live
systems.

It is also important to have a well-designed and tested rollback plan in case an update causes issues in
the live environment. This can involve having backups of the system and a plan to quickly revert to
the previous version of the software if necessary.

2. Browser Hardening
In this reading, you will learn how to harden browsers for enhanced internet security. The methods
presented include evaluating sources for trustworthiness, SSL certificates, password managers, and
browser security best practices. Techniques for browser hardening are important components in
enterprise-level IT security policies. These techniques can also be used to improve internet security
for organizations of any size and for individual users.

Identifying trusted versus untrusted sources


Some cybercriminals monitor SEO search terms for popular software downloads. Then they create
fake websites to pose as hosts for these popular downloads. They might even use advertising and
stolen logos of trusted companies to make the sites appear to be legitimate businesses. However, the
downloadable files available on the cybercriminals’ websites are usually malicious software. Unaware
of the deception, users download and install the malware. In some cases, the users don’t even need
to download a file. Savvy cybercriminals can design web pages that have the ability to infect users’
devices simply upon visiting the sites.

To guard against threats like this, there are checks you can perform to evaluate websites:

 Use antivirus and anti-malware software and browser extensions. Run antivirus and anti-
malware scans regularly and scan downloaded files. Ensure antivirus and anti-malware browser
extensions are enabled when surfing the web.
 Check for SSL certificates. See the “Secure connections and sites” section below.
 Ensure the URL displayed in the address bar shows the correct domain name. For example,
Google websites use the Google.com domain name.
 Search for negative reviews of the website from trusted sources. Be wary of websites that have
few to no reviews. They may not have been active long enough to build a bad reputation.
Cybercriminals will create new websites when they get too many negative reviews on their older
sites.
 Don’t automatically trust website links provided by people or organizations you trust. They
may not be aware that they are passing along links to malicious websites and files.
 Use hashing algorithms for downloaded files. Compare the developer-provided hash value of
the original file to the hash value of the downloaded copy to ensure the two values match.
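As a quick sketch of the hash check in the last bullet above (the file name and the published digest are placeholders, not real values):

```python
# Verify a downloaded file against the SHA-256 value published by the developer.
import hashlib

PUBLISHED_SHA256 = "<paste the developer-provided SHA-256 value here>"

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of("installer.exe")
print("OK" if actual == PUBLISHED_SHA256 else f"MISMATCH: {actual}")
```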

Secure connections and sites


Secure Sockets Layer (SSL) certificates are issued by trusted certificate authorities (CA), such as DigiCert. An SSL certificate indicates that any data submitted through a website will be encrypted. A website with a valid SSL certificate has been inspected and verified by the CA. You can find SSL
certificates by performing the following steps:

1. Check the URL in the address bar. The URL should begin with the https:// protocol. If you see
http:// without the “s”, then the website is not secure.

2. Click on the closed padlock icon in the address bar to the left of the URL. An open lock indicates
that the website is not secure.

3. A pop-up menu should open. Websites with SSL certificates will have a menu option labeled
“Connection is secure.” Click on this menu item.

4. A new pop-up menu will appear with a link to check the certificate information. The layout and
wording of this pop-up will vary depending on which browser you are using. When you review
the certificate, look for the following items:

a) The name of the issuer - Make sure it is a trusted certificate authority.

b) The domain it was issued to - This name should match the website domain name.

c) The expiration date - The certificate should not have passed its expiration date.

Note that cybercriminals can obtain SSL certificates too. So, this is not a guarantee that the site is
safe. CAs also vary in how thorough they are in their inspections.
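You can also inspect the same certificate fields programmatically. A minimal sketch using Python's standard library (example.com is just a placeholder host name):

```python
# Fetch a site's certificate over TLS and print the issuer, subject, and expiry.
import socket
import ssl

host = "example.com"
context = ssl.create_default_context()   # also validates the chain and hostname

with socket.create_connection((host, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

print("Issuer: ", dict(pair[0] for pair in cert["issuer"]))
print("Subject:", dict(pair[0] for pair in cert["subject"]))
print("Expires:", cert["notAfter"])
```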

Password managers
Password managers are software programs that encrypt and retain passwords in secure cloud storage
or locally on users’ personal computing devices. There are a wide variety of activities users perform
online that require unique and complex passwords, such as banking, managing health records, filing
taxes, and more. It can be difficult for users to keep track of so many different logins and passwords.
Fortunately, password managers can help.

Advantages of using a password manager:

 It provides only one password for a user to remember;


 Can generate and store secure passwords that are difficult for cybercriminal tools to crack;
 Is more secure than keeping passwords written down on paper or in an unencrypted file on a
computer; and
 Work across multiple devices and operating systems.

Disadvantages of using a password manager:

 It can expose all of the user’s account credentials if a cybercriminal obtains the master password
to the password manager;
 Can be very difficult for a user to regain access to the password manager account if the master
password is lost or forgotten;
 Requires the user to learn a new method for logging in to their various accounts in order to
retrieve passwords from the password manager software; and
 Often requires a fee or subscription for password management services.

A few of the top brands for password manager applications include Bitwarden, LastPass, and
1Password. Please see the Resource section at the end of this reading for more information.
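
To illustrate the password-generation feature mentioned above, here is a minimal Python sketch using the standard secrets module. The length and character set are illustrative choices, not any particular product's algorithm:

import secrets
import string

def generate_password(length=20):
    # Draw each character from a cryptographically secure random source.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())   # e.g. a unique 20-character password for each site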

Browser settings

Browser settings can be configured for additional safety measures. Some additional options for
hardening browsers include:

1. Use pop-up blockers: Disable Web Browser Pop-up Blockers

2. Clear browsing data and cache: Clear your web browser's cache, cookies, and history

3. Use private-browsing mode: How to Turn on Incognito Mode in Your Browser

4. Sign-in/browser data synchronization:

a. Turn sync on and off in Chrome

b. Disable Firefox Sync

c. Change and customize sync settings in Microsoft Edge

5. Use ad blockers: How to block ads

Key takeaways
You learned about multiple steps you can take to harden a browser and protect your online security:

Identify if sources can be trusted or not:

 Use antivirus and anti-malware software and browser extensions.


 Check for SSL certificates.
 Ensure the URL displayed in the address bar shows the correct domain name.
 Search for negative reviews of the website from trusted sources.
 Don’t automatically trust website links provided by people or organizations you trust.

Use a password manager


Configure your browser settings:

 Use pop-up blockers.


 Clear browsing data and cache.
 Use private-browsing mode.
 Sign-in/browser data synchronization.
 Use ad blockers.

3. Application Policy
Application software can represent a pretty large attack surface, so it's important to have some kind of application policy in place.

These policies serve two purposes.


1. Only support or require the latest version of a piece of software.
The latest version of the software will ensure that all security patches have been applied and that the most secure version is in use.
2. Understand what your users need to do their jobs; this will help shape your approach to software policies and guidelines.
a) Video games and file sharing software typically don't have a use in business (though it does
depend on the nature of the business). So, it might make sense to have explicit policies
dictating whether or not this type of software is permitted on systems.
b) Browser extensions or add-ons.
Extensions that require full access to web sites visited can be risky, since the extension developer
has the power to modify pages visited.

QUIZ DEFENSE IN DEPTH
1. What is a class of vulnerabilities that are unknown before they are exploited?
Zero-day

2. A core authentication server is exposed to the internet and is connected to sensitive services. What
are some measures you can take to secure the server and prevent it from getting compromised by a
hacker? Select all that apply.
 Access Control Lists (ACLs)
 Designate as a bastion host
 Secure firewall

 Secure Firewall (A secure firewall configuration should restrict connections between untrusted
networks and systems)

 Bastion Hosts. (Bastion hosts are specially hardened and minimized in terms of what is
permitted to run on them. Typically, bastion hosts are expected to be exposed to the internet,
so special attention is paid to hardening and locking them down to minimize the chances of
compromise.)

 Access Control Lists (ACLs). Secure configurations, such as ACLs, could be implemented on
specific bastion hosts to secure sensitive services without degrading the convenience of the
entire organization.

3. When looking at aggregated logs, you are seeing a large percentage of Windows hosts connecting
to an Internet Protocol (IP) address outside the network in a foreign country. Why might this be worth
investigating more closely?
 It can indicate a malware infection

 It can indicate a malware infection. When looking at aggregated logs, you should pay attention
to patterns and correlations between traffic. For example, if you are seeing a large percentage
of hosts all connecting to a specific address outside your network, that might be worth
investigating more closely, as it could indicate a malware infection.
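
As a rough illustration of that kind of correlation, the sketch below counts how many distinct internal hosts contacted each external destination. The log format (one "source destination" pair per line), the file name, and the threshold are assumptions made for the example only:

from collections import defaultdict

connections = defaultdict(set)            # destination IP -> set of internal source hosts

with open("firewall.log") as log:         # placeholder log file
    for line in log:
        src, dest = line.split()[:2]
        connections[dest].add(src)

THRESHOLD = 50                            # illustrative cutoff
for dest, sources in connections.items():
    if len(sources) >= THRESHOLD:
        print(f"{len(sources)} hosts contacted {dest} - investigate")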

4. Which of these plays an important role in keeping attack traffic off your systems and helps to
protect users? Select all that apply.
Antivirus software
Antimalware measures

5. What does full-disk encryption protect against? Select all that apply.
Data theft
Data tampering

6. A hacker exploited a bug in the software and triggered unintended behavior, which led to the system being compromised because it was running vulnerable software. Which of these helps to fix these types of vulnerabilities?
 Software patch management

 Software Patch Management. Vulnerabilities can be fixed through software patches and
updates which correct the bugs that attackers exploit.

7. Besides software, what other things will also need patches? Select all that apply.
Infrastructure firmware
Operating systems

8. What is the best way to avoid personal, one-off software installation requests?
A clear application whitelist policy

A clear application whitelist policy can be an effective way to avoid personal, one-off software
installation requests. An application whitelist policy is a list of approved software that employees
are allowed to install on their computers. By implementing a whitelist policy, you can prevent
employees from installing unapproved software on their machines.

A clear application whitelist policy should include a list of approved software, as well as guidelines
for requesting new software to be added to the whitelist. This can help ensure that employees have
access to the software they need to do their jobs, while also maintaining security and compliance
standards.

One of the benefits of an application whitelist policy is that it can be automated using software
deployment tools. This can help streamline the process of installing approved software on
employee machines, while also ensuring that unapproved software is not installed.

However, it is important to note that an application whitelist policy should be regularly reviewed
and updated to ensure that it remains relevant and effective. New software may be released that
employees need to use, and existing software may become outdated or pose security risks. Regular
reviews can help ensure that the whitelist policy continues to meet the needs of the organization.
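
As a rough sketch of how such a policy could be checked automatically, the example below compares an installed-software inventory against an approved list. The package names and the inventory source are placeholders; a real deployment tool would supply this data:

approved = {"7-zip", "chrome", "slack", "vlc"}          # example whitelist
installed = {"7-zip", "chrome", "utorrent", "vlc"}      # example inventory from one machine

unapproved = installed - approved
for package in sorted(unapproved):
    print(f"Unapproved software found: {package}")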

9. Securely storing a recovery or backup encryption key is referred to as _______.


Key escrow

Securely storing a recovery or backup encryption key is referred to as key escrow. Key escrow is
the process of keeping a copy of an encryption key in a secure location, separate from the system or
device that uses the key, to ensure that the key can be recovered if it is lost or becomes
inaccessible.

The purpose of key escrow is to provide a way to recover encrypted data if the original encryption
key is lost or damaged. It is particularly important for organizations that need to maintain the
confidentiality and integrity of sensitive information, such as financial institutions or government
agencies.

Key escrow can be implemented in different ways, depending on the requirements of the
organization. One common approach is to use a third-party service provider to securely store the
encryption key. The provider would store the key in a secure data center with strong physical and
logical security controls, and only release the key to authorized individuals or systems with proper
authentication and authorization.

Another approach is to use a trusted employee or group of employees within the organization to
store and manage the encryption keys. This approach requires a high degree of trust and
accountability, as the individuals responsible for the keys must ensure that they are stored securely
and not misused.

Overall, key escrow is an important security measure that helps organizations protect sensitive data
and ensure that it can be recovered in the event of a disaster or system failure.
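
As a conceptual sketch only (not a production escrow system), the following Python example uses the third-party cryptography package to wrap a data key under a separate escrow key, which is the core idea behind key escrow. All names are illustrative:

from cryptography.fernet import Fernet

data_key = Fernet.generate_key()          # key that encrypts production data
escrow_key = Fernet.generate_key()        # key held separately by the escrow agent

# Wrap (encrypt) the data key under the escrow key and store the wrapped
# copy in a separate, secure location from the systems that use the data key.
wrapped_data_key = Fernet(escrow_key).encrypt(data_key)

# Recovery: the escrow agent unwraps the stored copy if the original key is lost.
recovered_key = Fernet(escrow_key).decrypt(wrapped_data_key)
assert recovered_key == data_key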

10. Why is it important to disable unnecessary components of software and systems?


Less complexity means less vulnerability.

MODULE SIX CREATING A COMPANY CULTURE FOR SECURITY


RISK IN THE WORKPLACE
 Security Goal
The Payment Card Industry Data Security Standard (PCI DSS) has six primary objectives, each
with a set of requirements to help organizations protect cardholder data:

1. Build and Maintain a Secure Network and Systems: This objective focuses on ensuring that an
organization's network and systems are secure and protected from unauthorized access. This

includes requirements such as installing and maintaining firewalls and anti-virus software,
encrypting data in transit, and restricting access to cardholder data.

2. Protect Cardholder Data: This objective focuses on protecting cardholder data wherever it is
stored, processed, or transmitted. This includes requirements such as encrypting cardholder
data, masking cardholder data when displayed, and limiting access to cardholder data to
authorized personnel.

3. Maintain a Vulnerability Management Program: This objective focuses on identifying and addressing vulnerabilities in an organization's systems and applications. This includes
requirements such as regularly scanning for vulnerabilities, implementing patches and updates in
a timely manner, and maintaining secure coding practices.

4. Implement Strong Access Control Measures: This objective focuses on ensuring that access to
cardholder data is limited to authorized personnel only. This includes requirements such as
assigning unique user IDs to each person with access, implementing two-factor authentication,
and regularly reviewing access rights and permissions.

5. Regularly Monitor and Test Networks: This objective focuses on regularly monitoring an
organization's systems and networks to detect and respond to security incidents. This includes
requirements such as regularly monitoring access logs, conducting penetration testing, and
implementing intrusion detection and prevention systems.

6. Maintain an Information Security Policy: This objective focuses on maintaining and enforcing a
comprehensive information security policy that addresses all aspects of the organization's
security program. This includes requirements such as implementing and maintaining a security
awareness program, conducting regular security training for employees, and regularly reviewing
and updating the security policy as needed.
 Measuring and assessing risk
Security is all about determining risks or exposure, understanding the likelihood of attacks, and designing defenses around these risks to minimize the impact of an attack.

Security risk assessment starts with threat modeling. First, we identify threats to our systems, then we assign them priorities that correspond to severity and probability. We do this by brainstorming from the perspective of an outside attacker, putting ourselves in a hacker's shoes. It helps to start by figuring out what high-value targets an attacker may want to go after. From there, you can start to look at possible attack vectors that could be used to gain access to high-value assets.

High-value data usually includes information like usernames and passwords; in fact, any kind of user data is generally considered high-value.

Another part of risk measurement is understanding what vulnerabilities are on your systems and
network. One way to find these out is to perform regular vulnerability scanning.

Vulnerability Scanner - A computer program designed to assess computers, computer systems, networks, or applications for weaknesses. Examples include:
a) Nessus
b) OpenVAS
c) Qualys

The primary functions of vulnerability scanners are as follows:

Identifying vulnerabilities: Vulnerability scanners scan the target system or network and
compare the results to a database of known vulnerabilities. The scanners identify vulnerabilities
such as unpatched software, misconfigured systems, and weak passwords.

Prioritizing vulnerabilities: Vulnerability scanners prioritize vulnerabilities based on severity and
potential impact. High-risk vulnerabilities, such as those that could result in a data breach or
system compromise, are given top priority.

Reporting vulnerabilities: Vulnerability scanners generate reports that detail the identified
vulnerabilities and their severity. The reports often include recommendations for remediation or
mitigation of the vulnerabilities.

Automating vulnerability management: Some vulnerability scanners are designed to automate the vulnerability management process. This includes automatically patching or remediating
identified vulnerabilities, or triggering alerts to security teams for manual remediation.

Continuous monitoring: Many vulnerability scanners provide continuous monitoring capabilities, allowing organizations to regularly scan their systems and applications for new vulnerabilities as
they are identified and reported.

Overall, vulnerability scanners are an important tool for organizations to use as part of their
security program. They help identify potential vulnerabilities and prioritize them for remediation
or mitigation, which can help organizations prevent security breaches and protect sensitive data.

How do vulnerability scanners work?

Vulnerability scanners work by scanning a target system, network, or application for potential
vulnerabilities and weaknesses. Here are the general steps that vulnerability scanners follow:

Discovery: The first step is to identify the target system or network to be scanned. The scanner
will use various techniques such as IP scanning, DNS lookup, and port scanning to identify the
target system.

Enumeration: Once the target system or network is identified, the scanner will begin
enumerating the system or network to gather information such as open ports, running services,
and installed applications.

Vulnerability Detection: After enumerating the system or network, the scanner will compare the
gathered information to a database of known vulnerabilities to identify potential security
weaknesses. The scanner will use various techniques to test the system for vulnerabilities such
as sending specific packets, probing for specific configurations, and brute-force attacks.

Vulnerability Assessment: Once the scanner has identified potential vulnerabilities, it will assess
the severity and impact of each vulnerability. This includes assigning a risk score or severity level
based on the potential impact on the system or network.

Reporting: Finally, the scanner will generate a report detailing the vulnerabilities identified,
including their severity, impact, and recommended actions for remediation or mitigation.

Overall, vulnerability scanners help organizations identify potential security weaknesses before
they can be exploited by attackers. They provide a comprehensive way to assess the security
posture of systems and networks and prioritize remediation efforts to mitigate the most critical
vulnerabilities.
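
To make the discovery and enumeration steps concrete, here is a minimal Python sketch of a simple TCP connect check against a few common ports. Real scanners such as Nessus or OpenVAS go much further (service fingerprinting, vulnerability checks, reporting); the target address and port list are placeholders:

import socket

target = "192.0.2.10"                       # placeholder address reserved for documentation
common_ports = [22, 80, 139, 443, 445, 3389]

for port in common_ports:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        # connect_ex returns 0 when the TCP connection succeeds (port open).
        if s.connect_ex((target, port)) == 0:
            print(f"Port {port} open - check the service version against known vulnerabilities")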

But vulnerability scanning isn't the only way to put your defenses to the test.

Conducting regular penetration tests is also strongly encouraged to test your defenses further. These tests will also ensure detection and alerting systems are working properly.

Penetration Testing - The practice of attempting to break into a system or network to verify the security systems in place. This way you can test your systems to make sure they protect you like they're supposed to. The resulting penetration testing reports will also show you where weak points or blind spots exist. These tests help improve defenses and guide future security projects.

 Privacy Policy
Privacy is not only a defense against external threats; it also protects data against misuse by employees.

Privacy Policies oversee the access and use of sensitive data.

Both privacy and data access policies are important for guiding and informing people on how to maintain security while handling sensitive data.

Auditing data access logs is super important. It helps us ensure that sensitive data is only accessed by people who are authorized to access it and that they use it for the right reasons.

It's a good practice to apply the principle of least privilege here by not allowing access to this type of data by default. If someone needs access, they should first make an access request.

Any access that doesn’t have a corresponding request should be flagged as a high-priority
potential breach that needs to be investigated as soon as possible.

Data handling policies should cover the details of how different data is classified.
Once different data classes are defined, you should create guidelines around how to handle
these different types of data. If something is considered sensitive or confidential, you’d probably
have stipulations that this data shouldn’t be stored in media that’s easily lost or stolen, like USB
sticks or portable hard drives. If you really have no choice, store it in encrypted media.

QUIZ RISK IN THE WORKPLACE

1. What are some examples of security goals that you may have for an organization? Check all
that apply.
To prevent unauthorized access to customer credentials
To protect customer data from unauthorized access

2. Which of these would you consider high-value targets for a potential attacker? Check all that
apply.
Authentication databases
Customer credit card information

3. What's the purpose of a vulnerability scanner?


It detects vulnerabilities on your network and systems.
A vulnerability scanner will scan and evaluate hosts on your network. It does this by looking
for misconfigurations or vulnerabilities, then compiling a report with what it found.

4. What are some restrictions that should apply to sensitive and confidential data? Check all that
apply.
It can be stored on encrypted media only.
Sensitive data should be treated with care so that an unauthorized third-party doesn't gain
access. Ensuring this data is encrypted is an effective way to safeguard against unauthorized
access.

5. What's a privacy policy designed to guard against?


Misuse or abuse of sensitive data

 Data Destruction
Data destruction is removing or destroying data stored on electronic devices so that an
operating system or application cannot read it. Data destruction is required when a company no
longer needs a device, when there are unused or multiple copies of data, or you are required to
destroy specific data.

There are three categories of data destruction methods: recycling, physical destruction, and
third-party destruction. This reading will introduce the data destruction methods and how to
decide which method to use.

Recycling
Recycling includes methods that allow for device reuse after data destruction. This option is
recommended if you hope to reuse devices internally, sell surplus equipment, or your devices
are on loan and are due to be returned. Standard recycling methods include the following:

Erasing/wiping: cleans all data off a device’s hard drive by overwriting it. Erasing or wiping data
can be done manually or with data-destruction software. This method is practical when you only
have a few devices that need data destroyed, as it takes a long time. Note that it may take
multiple passes to wipe highly sensitive data completely.

Low-level formatting: erases all data written on the hard drive by replacing it with zeros. Low-
level reformatting can be done using a tool such as HDDGURU on a PC or the Disk Utility function
on a Mac.

Standard formatting: erases the path to the data and not the data itself. Both PCs and Macs have
internal tools that can perform a standard format, Disk Management on a PC or Disk Utility on a
Mac. Note that standard formatting does not remove the data from the device, enabling data
rediscovery using software.
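
As a rough illustration of the erasing/wiping idea, here is a minimal Python sketch that overwrites a single file with zeros before deleting it. This is not a certified wiping tool: SSD wear-leveling and filesystem copies mean real sanitization needs purpose-built software or full-device methods, and the file name is a placeholder:

import os

def overwrite_and_delete(path):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)        # overwrite the contents with zeros
        f.flush()
        os.fsync(f.fileno())           # push the overwrite to disk
    os.remove(path)

overwrite_and_delete("old_customer_export.csv")   # placeholder file name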

Physical destruction
Physical destruction includes any method that physically destroys a device to make it difficult to
retrieve data from it. You should only use physical destruction if you do not need to reuse the
device. However, only completely destroying the device ensures the destruction of all data with
physical methods. Physical destruction methods include the following:

Drilling holes directly into the device wipes data out on the sections where there are holes.
However, individuals can recover data from the areas that are still intact.

Shredding includes the physical shredding of hard drives, memory cards, CDs, DVDs, and other
electronic storage devices. Shredding reduces the potential for recovery. Shredding requires
special equipment or outsourcing to another facility.

Degaussing uses a high-powered magnet which destroys the data on the device. This method
effectively destroys large data storage devices and renders the hard drive unusable. As electronic
technology changes, this method may become obsolete.

Incinerating destroys data by burning the device. Most companies do not have an incinerator on-
site. Devices need to be transported to a facility for incineration. Due to this, devices can be lost
or stolen in transit.

In addition to effectively destroying data on electronic devices, it is essential to follow best practices for electronic device disposal.

Outsourcing
Outsourcing means using a third-party specializing in data destruction to complete the physical
or recycling process. This option appeals to companies that do not have the staff or knowledge

to complete the destruction themselves. Once a vendor has completed the task, they issue a
certificate of destruction/recycling.

The certificate of destruction serves as a statement of completed destruction of data on electronics, hard drives, or other devices. The certificate includes the client's contact
information, date of service, vendor company name, manifest, signature, method of destruction,
and legal statement. However, exercise caution as the certificate does not indicate a level of
training, auditing, or any other verification that a vendor is knowledgeable about data
destruction.

Key Takeaways
Data destruction makes data unreadable to an operating system or application. You should destroy data on devices a company no longer uses, on unused or duplicate copies of data, and on data you are required to destroy. Data destruction methods include:

Recycling: erasing the data from a device for reuse

Physical destruction: destroying the device itself to prevent access to data

Outsourcing: using an external company specializing in data destruction to handle the process
USERS
 User habits
You can build the world's best security systems, but they won't protect you if your users practice unsafe security habits.

You should never upload confidential information onto a third-party service that hasn’t been
evaluated by your company.

Password Policy - for example, require passwords at least 20 characters long, changed every 3 months, and never reused.

A much greater risk in the workplace that users should be educated on is credential theft from phishing emails.
Having two-factor authentication helps protect against it.
If someone entered their password into a phishing website, or even suspects they did, it's important to change their password as soon as possible.
You can also use tools like Password Alert, a Chrome extension from Google that can detect when you enter your password into a site that's not a Google page.
Keep in mind that even strong protections are undone by bad habits, for example if a user writes their password on a Post-it note, sticks it to their laptop, and then leaves the laptop unattended in a cafe.
 Third Party Security
Sometimes you need to rely on 3rd party solutions or service providers because you might not be
able to do everything in house.

If they have subpar (weak) security, you are undermining your security defenses by potentially
opening a new avenue of attack.

It’s important to hire trustworthy and reputable vendors whenever you can.
This involves conducting a vendor risk review or security assessment.
In typical vendor security assessments, you ask vendors to complete a questionnaire that covers
different aspects of their security policies, procedures and defenses.

The questionnaire is designed to determine whether or not they’ve implemented good security
designs in their organization.

For software services or hardware vendors, you might also ask to test the software or hardware. That way you can evaluate it for potential security vulnerabilities or concerns before deciding to contract their services.

It’s important to understand how well protected your business partners are before deciding to
work with them. If they have poor security practices, your organization’s security could be at
risk. A compromise of their infrastructure could lead to a breach of your systems.

If you can, ask for a third-party security assessment report.


 Some of the information on the questionnaire can be verified, like third-party security audit results and penetration testing reports.
 In the case of third-party software, you might be able to conduct some basic vulnerability assessments and tests to ensure the product has some reasonable security.
 Google recently made their vendor security assessment questionnaires available for free.

Additional monitoring would also be recommended for this third-party device, since it represents a new potential attack surface in your network. If the vendor lets you, evaluate the hardware in a lab environment first, where you can run in-depth vulnerability assessments and penetration testing of the hardware and make sure there aren't any obvious vulnerabilities in the product. Report your findings to the vendor and ask that they address any issues you discover.

 Security Training
It's impossible to have good security practices at your company if employees and users haven't received good training and resources. Good training builds a healthy company culture and overall attitude towards security.

A working environment that encourages people to speak up when they feel something isn't right is critical. It encourages them to do the right thing.

Helping others keep security in mind will help decrease the security burdens you'll have as an IT support specialist.
It will also make the overall security of the organization better.
For example, encourage users to lock their screens when they step away, because anyone with access to the machine can impersonate them and get access to any resources they're logged into.

INCIDENT HANDLING
 Incident reporting and analysis
We try our best to protect our systems and networks, but it’s pretty likely that some sort of
incident will happen.

Regardless of the nature of the incident, proper incident handling is important for understanding what exactly happened, how it happened, and how to avoid it happening again.

 The very first step of handling an incident is to detect it in the first place.
 The next step is to analyze it and determine the effects and scope of damage.
 Was it a data leak? Or information disclosure?
 If so, what information got out? How bad is it? Were systems compromised? What systems, and what level of access did they manage to get?
 Is it a malware infection? What systems were infected?
This is why having good monitoring in place is so important along with understanding your
baseline. Once you figure out what normal traffic looks like on your network and what services
you expect to see, outliers will be easier to detect.

This is important because every false lead that the incident response team has to investigate
means time and resources wasted. This has the potential to allow real intrusions to go
undetected and uninvestigated longer.

 Once the scope of the incident is determined, the next step is containment.
You need to contain the breach to prevent further damage from system compromises and malware infections.

 If an account was compromised, change the password immediately. If the owner is
unable to change the password right away, then lock the account.
 If it's a malware infection, check whether your antimalware software can quarantine or remove the infection. If not, the infected machine needs to be removed from the network as soon as possible to prevent lateral movement around the network; to do this, you can adjust network-based firewall rules to effectively quarantine the machine.
 You could also move the machine to a separate VLAN used for security quarantining
purposes. This would be a VLAN w/ strict restrictions and filtering applied to prevent
further infection of other systems and networks.
 It’s important during this phase that efforts are made to avoid the destruction of any
logs or forensic evidence.
 Attackers will usually try to cover their tracks by modifying logs and deleting files, especially when they suspect they've been caught.
 They’ll take measures to make sure they keep their access to compromised
systems.
 This could involve installing a backdoor or some kind of remote access malware.
 Another step to watch out for is the attacker creating a new user account that they can use to authenticate with in the future.
 With effective logging configurations and systems in place, this type of access should be detected during an incident investigation.
 Another part of incident analysis is determining severity, impact and recovery ability of
the incident.
 Severity includes factors like what and how many systems were compromised, and
how the breach affects business functions.
 An incident that’s compromised a bunch of machines in the network would be
more severe than one where a single web server was hacked.
 So the impact of an incident is also an important issue to consider.
 If the org only had one web server and it was compromised, it might be
considered a much higher severity breach.
 Data exfiltration - The unauthorized transfer of data from a computer
 Data exfiltration is also a very important concern when a security incident happens; hackers may try to steal data for a number of reasons, or steal account information to provide access later.
 Or attacker may just want to cause damage and destruction which might involve
corrupting data.
 Recoverability - How Complicated and time-consuming the recovery effort will be
 An incident that can be recovered w/ a simple restoration from backup by following
documented procedures would be considered easily recovered from.
 But an incident where an attacker deleted large amounts of customer information and wreaked havoc across lots of critical infrastructure systems would be way more difficult to recover from.
 It might not be possible to recover from it at all. In some cases, depending on backup
systems and configurations, some data may be lost forever and can’t be restored.
 Backups won't contain any changes or new data that were made after the last backup run.

 Incident response
When you’ve had a data breach, you may need forensic analysis to analyze the attack. This
analysis usually involves extensive evidence gathering. This reading covers some considerations
for protecting the integrity of your forensic evidence and avoiding complications or issues
related to how you handle evidence.

Regulated data
It’s important to consider the type of data involved in an incident. Many types of data are
subject to government regulations that require you to take extra care when handling it. Here are
some examples you’re likely to encounter as an IT support specialist.

1. Protected Health Information: This information is regulated by the Health Insurance
Portability and Accountability Act (HIPAA). It is personally identifiable health information that
relates to:

 Past, present, or future physical or mental health or condition of an individual


 Administration of health care to the individual by a covered provider (for example, a
hospital or doctor)
 Past, present, or future payment for the provision of health care to the individual

2. Credit Card or Payment Card Industry (PCI) Information: This is information related to credit,
debit, or other payment cards. PCI data is governed by the Payment Card Industry Data Security
Standard (PCI DSS), a global information security standard designed to prevent fraud through
increased control of credit card data.

3. Personally Identifiable Information (PII): PII is a category of sensitive information associated with a person. Examples include addresses, Social Security Numbers, or similar personal ID
numbers.

4. Federal Information Security Management Act (FISMA) compliance: FISMA requires federal
agencies and those providing services on their behalf to develop, document, and implement
specific IT security programs and to store data on U.S. soil. For example, organizations like NASA,
the National Institutes of Health, the Department of Veteran Affairs—and any contractors
processing or storing data for them—need to comply with FISMA.

5. Export Administration Regulations (EAR) compliance: EAR is a set of U.S. government regulations administered by the U.S. Department of Commerce's Bureau of Industry and Security
(BIS). These regulations govern the export and re-export of commercial and dual-use goods,
software, and technology. Dual-use goods are items that can be used both for civilian and
military applications. These goods are heavily regulated because they can be classified for civilian
use and then transformed for military purposes.

Digital rights management (DRM)


Digital Rights Management (DRM) technologies can help ensure data regulations compliance.
DRM technology comes in the form of either software or hardware solutions. Both options allow
content creators to prevent deliberate piracy and unauthorized usage. DRM often involves using
codes that prohibit content copying or limit the number of devices that can access a product.
Content creators can also use DRM applications to restrict what users can do with their material.
They can encrypt digital media so only someone with the decryption key can access it. This gives
content creators and copyright holders a way to:

 Restrict users from editing, saving, sharing, printing, or taking screenshots of content or
products

 Set expiration dates on media to prevent access beyond that date or limit the number of
times users can access the media

 Limit access to specific devices, Internet Protocol (IP) addresses, or locations, such as
limiting content to people in a specific country

Organizations can use these DRM capabilities to protect sensitive data. DRM enables
organizations to track who has viewed files, control access, and manage how people use the
files. It also prevents files from being altered, duplicated, saved, or printed. DRM can help
organizations comply with data protection regulations.

End User Licensing Agreement (EULA)


End User Licensing Agreements (EULAs) are similar to DRM in specifying certain rights and
restrictions that apply to the software. You often encounter EULA statements when installing a

software package, accessing a website, sharing a file, or downloading content. A EULA is usually
considered a legally binding agreement between the owner of a product (e.g., a software
publisher) and the product's end-user. The EULA specifies the rights and restrictions that apply
to the software, and it’s usually presented to users during installation or setup of the software.
You can’t complete an installation (or access, share, or download data) until you agree to the
terms written in the EULA statement.

Unlike DRM restrictions, EULAs are only valid if you agree to them (i.e., you check a box or click the 'I Agree' button). DRM restrictions don't require your agreement, nor do they rely on you to keep that
agreement. DRMs are built into the product they protect, making it easier for content creators to
ensure users do not violate restrictions.

Chain of custody
“Chain of custody” refers to a process that tracks evidence movement through its collection,
safeguarding, and analysis lifecycle. Maintaining the chain of custody makes it difficult for
someone to argue that the evidence was tampered with or mishandled. Your chain of custody
documentation should answer the following questions. Documentation for these questions must
be maintained and filed in a secure location for current and future reference.

 Who collected the evidence? Evidence can include the afflicted or used devices,
media, and associated peripherals.

 How was the evidence collected, and where was it located?

 Who seized and possessed the evidence?

 How was the evidence stored and protected in storage? The procedures involved in
storing and protecting evidence are called evidence-custodian procedures.

 Who took the evidence out of storage and why? Ongoing documentation of the names
of individuals who check out evidence and why must be kept.

When a data breach occurs, forensic analysis usually involves taking an image of the disk.
This makes a virtual copy of the hard drive. The copy lets an investigator analyze the disk’s
contents without modifying or altering the original files. An alteration compromises the
integrity of the evidence. This kind of compromised integrity is what you want to avoid
when performing forensic investigations.
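
One common way to support that integrity claim is to hash the image when it is acquired and re-hash it later. Here is a minimal Python sketch; the image path is a placeholder, and a real forensic workflow also records who produced the hash and when:

import hashlib

# hashlib.file_digest is available in Python 3.11+; on older versions,
# read the file in chunks and feed them to hashlib.sha256() instead.
with open("evidence/disk001.img", "rb") as image:      # placeholder image file
    acquired_hash = hashlib.file_digest(image, "sha256").hexdigest()

print("Record in the chain-of-custody log:", acquired_hash)

# Re-running the same hash before and after analysis and comparing the values
# supports the claim that the original image was never modified.
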
 Incident response and recovery
Once the threat has been detected and contained, it has to be removed or remediated.

 When it comes to Malware Infection, this means removing the malware from affected
systems.
 But in some cases, this may not be possible, so the affected systems have to be
restored to a known good configuration. This can be done by rebuilding the machine
or restoring from backup.
 Take care when removing malware from systems, because some malware is designed
to be very persistent, which means it’s resistant to being removed.
 But before we can start the recovery, we have to contain the incident. This might
involve shutting down affected systems to prevent further damage or spread of an
infection.
 Affected systems may just have to have access removed to cut off any communication with the compromised system.
 The motivating factor here would be to prevent the spread of any infection or to
remove remote access to the system.
 Forensic analysis may need to be done to analyze the attack. This is true when it comes to a
malware infection.

 In the case of forensic analysis, affected machines might be investigated very closely
to determine exactly what the attacker did.
 This is usually done by taking an image of the disk, essentially making a virtual copy of
the hard drive.
 This lets the investigator analyze the contents of the disk w/out the risk of modifying or altering the original files, which would compromise the integrity of any forensic evidence.
 Usually evidence gathering is also part of the incident response process. This provides evidence to law enforcement if the organization wants to pursue legal action against the attackers.
 Forensic evidence is super useful for providing details of the attack to the security community. It allows other security teams to be aware of new threats and lets them better defend themselves.
 It’s also very important that you get members from your legal team involved in any
incident handling plans. Because an incident can have legal implications for the
company, a lawyer should be available to consult and advise on the legal aspects of
the investigation.
 We’ll need to use information from the analysis to prevent any further intrusions or
infections.
 First, we determine the entry point to figure out how the attacker got in or what
vulnerability the malware exploited.
 If you remove a malware infection w/out also addressing the underlying vulnerability, systems could become re-infected right after you clean them up. Postmortems can be a great way to document the incident.
 Logs have to be audited to determine exactly what the attacker did while they had
access to the system. They’ll also tell you what data the attacker accessed.
 Systems must be scrutinized to ensure no backdoors have been installed or malware planted on the system.
 And vulnerabilities should be closed to prevent any future attacks.
 When all traces of the attack have been discovered and removed and the known vulnerabilities have been closed, you can move on to the last step.
 Systems need to be thoroughly tested to make sure proper functionality has been restored.
 It is possible the attacker will attack the same target again, or use a similar attack methodology on other targets in your network.
 Update firewall rules and ACLs if an exposure was discovered in the course of the investigation.
 Create new definitions and rules for intrusion detection systems that can watch for
the signs of the same attack again.
 Stay vigilant and prepared to protect your system from attacks.
 Mobile security and privacy
