
LO2: Describe IT security solutions

• IT security solution evaluation:
• Network Security infrastructure: evaluation of NAT, DMZ, FWs.
• Network performance: RAID, Main/Standby, Dual LAN, server balancing.
• Data security: explain asset management, image differential/incremental backups, SAN servers.
• Data centre: replica data centres, virtualisation, secure transport protocols, secure MPLS routing and remote access methods/procedures for third-party access.
• Security vulnerability: logs, traces, honeypots, data mining algorithms, vulnerability testing.
Network Security infrastructure: evaluation of NAT, DMZ, FWs

• Network Address Translation (NAT)
• Demilitarized zone (DMZ)
• Firewalls (FWs)
Firewalls (FWs)
• Firewalls protect systems from both external and internal threats. Although firewalls initially
became popular in corporate environments, most home networks with a broadband Internet
connection now also implement a firewall to protect against Internet-borne threats.
• Essentially a firewall is an application, device, system, or group of systems that controls the
flow of traffic between two networks.
• The most common use of a firewall is to protect a private network from a public network such
as the Internet. However, firewalls are also increasingly used to separate a sensitive area of a
private network from less-sensitive areas.
• At its most basic, a firewall is a device (a computer system running firewall software or a
dedicated hardware device) that has more than one network interface. It manages the flow of
network traffic between those interfaces.
• How it manages the flow and what it does with certain types of traffic depends on its
configuration.
Firewalls (FWs)
• Content filtering: Most firewalls can be configured to provide some level of content filtering. This can be done for both inbound and outbound content. This is often done when organizations want to control employee access to Internet sites.
• Signature identification: A signature is a unique identifier for a particular application. In the antivirus world, a signature is an algorithm that uniquely identifies a specific virus. Firewalls can be configured to detect certain signatures associated with malware or other undesirable applications and block them before they enter the network.
• Virus scanning services: As web pages are downloaded, content within the pages can be checked for viruses. This feature is attractive to companies concerned about potential threats from Internet-based sources.
Firewalls (FWs)
• URL filtering: By using a variety of methods, the firewall can choose
to block certain websites from being accessed by clients within the
organization. This blocking allows companies to control what pages can
be viewed and by whom.
• Bandwidth management: Although it’s required in only certain
situations, bandwidth management can prevent a certain user or system
from hogging the network connection. The most common approach to
bandwidth management is to divide the available bandwidth into sections
and then make just a certain section available to a user or system.
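Two of the features above, URL filtering and the divide-into-sections approach to bandwidth management, can be sketched in a few lines of Python. The blocked domains and bandwidth figures are invented for illustration, not taken from any real firewall product:

```python
# Sketch of two firewall features: URL filtering against a blocklist, and
# dividing available bandwidth into per-user sections. All names and
# figures here are illustrative assumptions.

BLOCKED_DOMAINS = {"gambling.example", "malware.example"}

def url_allowed(url: str) -> bool:
    """Return False if the URL's host is on (or under) a blocked domain."""
    host = url.split("//", 1)[-1].split("/", 1)[0].lower()
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

def bandwidth_share(total_mbps: float, users: int) -> float:
    """Divide the available bandwidth into equal sections, one per user."""
    return total_mbps / users

print(url_allowed("http://intranet.example/home"))      # True: not blocked
print(url_allowed("http://ads.gambling.example/spin"))  # False: blocked
print(bandwidth_share(100.0, 4))                        # 25.0 Mb/s per user
```

Real firewalls match on far richer criteria (categories, signatures, per-connection state), but the allow/deny decision per request follows this shape.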
Demilitarized zone (DMZ)
• An important firewall-related concept is
the demilitarized zone (DMZ),
sometimes called a perimeter network.
• A DMZ is part of a network where you
place servers that must be accessible by
sources both outside and inside your
network.
• The DMZ is not connected directly to either network, and it must always be accessed through the firewall.
• Using DMZs gives your firewall
configuration an extra level of
flexibility, protection, and complexity.
Demilitarized zone (DMZ)
• By using a DMZ, you can create an
additional step that makes it more
difficult for an intruder to gain access to
the internal network.
• Using the example opposite, an intruder who tried to come in through Interface 1 would have to spoof a request from either the web server or proxy server into Interface 2 before it could be forwarded to the internal network.
• Although it is not impossible for an
intruder to gain access to the internal
network through a DMZ, it is difficult.
NAT (Network Address Translation)
• The basic principle of NAT is that many
computers can “hide” behind a single IP
address.
• The main reason you need to do this is
because there simply aren’t enough IPv4
addresses to go around.
• Using NAT means that only one
registered IP address is needed on the
system’s external interface, acting as the
gateway between the internal and
external networks.
• To outside users, all traffic coming to and going from the network has the same IP address or is from the same pool of addresses.
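The "many hosts behind one address" idea can be sketched as a translation table. The addresses and the port-allocation scheme below are invented for illustration; real NAT devices also track protocol state and reclaim mappings:

```python
# Sketch of NAT port translation: many private hosts share one public IP,
# each outbound connection getting its own public port. Addresses and
# ports are illustrative assumptions.

import itertools

class NatTable:
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self._ports = itertools.count(40000)   # next free public port
        self.table = {}                        # (private_ip, private_port) -> public_port

    def translate_out(self, private_ip: str, private_port: int):
        """Map an outbound connection to the shared public address."""
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = next(self._ports)
        return self.public_ip, self.table[key]

nat = NatTable("203.0.113.10")
print(nat.translate_out("192.168.1.20", 51000))  # ('203.0.113.10', 40000)
print(nat.translate_out("192.168.1.21", 51000))  # ('203.0.113.10', 40001)
```

Note that both internal hosts appear to the outside world as 203.0.113.10; only the port distinguishes their connections.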
Security vulnerability:
• Logs & Traces
• Honeypots
• Data mining algorithms
• Vulnerability testing.
Security vulnerability: Logs
• A system’s security log contains events
related to security incidents such as
successful and unsuccessful logon
attempts and failed resource access.
• Security logs can be customized,
meaning that administrators can fine-
tune exactly what they want to
monitor.
• Some administrators choose to track
nearly every security event on the
system. Although this might be
prudent, it can often create huge log
files that take up too much space.
Security vulnerability: Logs
• Each event in a security log contains
additional information to make it
easy to get the details on the event:
• Date: The exact date the security event occurred.
• Time: The time the event occurred.
• User: The name of the user account that was tracked during the event.
• Computer: The name of the computer used when the event occurred.
• Event ID: The Event ID tells you what event has occurred. You can use this ID to obtain additional information about the particular event.
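As a sketch, these fields can be pulled out of a log entry programmatically. The comma-separated line format below is invented for illustration; real security logs (such as the Windows Event Log) have their own formats and APIs:

```python
# Sketch: parsing one security-log entry into the fields listed above.
# The line format is an assumption made for this example.

def parse_event(line: str) -> dict:
    date, time, user, computer, event_id = [f.strip() for f in line.split(",")]
    return {"Date": date, "Time": time, "User": user,
            "Computer": computer, "EventID": int(event_id)}

entry = parse_event("2024-03-01, 09:15:02, jsmith, FILESRV01, 4625")
print(entry["EventID"])   # 4625 is the Windows event ID for a failed logon
```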
Security vulnerability: Honeypots

• Honeypots are a rather clever approach to network security, but perhaps a bit expensive.
• A honeypot is a system set up as a decoy to attract and deflect attacks from hackers.
• The server decoy appears to have everything a regular server does: OS, applications, and network services.
• The attacker thinks he is accessing a real network server, but he is in a network trap.
Security vulnerability: Honeypots
• The honeypot has two key purposes. It can give
administrators valuable information on the types of
attacks being carried out.
• In turn, the honeypot can secure the real production
servers according to what it learns. Also, the honeypot
deflects attention from working servers, allowing them to
function without being attacked.
• A honeypot can:
• Deflect the attention of attackers from production servers.
• Deter attackers if they suspect their actions may be monitored
with a honeypot.
• Allow administrators to learn from the attacks to protect the
real servers.
• Identify the source of attacks, whether from inside the network
or outside.
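A toy decoy can be built in a few lines. This sketch listens on a local port, records every connection attempt (identifying the source of the "attack"), and returns a fake FTP banner; a real honeypot imitates a full server far more convincingly:

```python
# Minimal honeypot sketch: a decoy TCP listener that serves nothing real,
# records who connects, and presents a fake service banner. The banner
# text and the single-connection demo are illustrative assumptions.

import socket
import threading

attempts = []   # (source_ip, source_port) of every connection attempt

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen()
port = srv.getsockname()[1]

def honeypot(max_conns: int) -> None:
    for _ in range(max_conns):
        conn, addr = srv.accept()
        attempts.append(addr)                       # record who probed us
        conn.sendall(b"220 FTP server ready\r\n")   # fake service banner
        conn.close()

t = threading.Thread(target=honeypot, args=(1,), daemon=True)
t.start()

# simulate an attacker probing the decoy
probe = socket.create_connection(("127.0.0.1", port))
banner = probe.recv(64)
probe.close()
t.join()

print(banner.decode().strip())   # the fake banner the attacker sees
print(len(attempts))             # 1 connection attempt recorded
```

The `attempts` list is the honeypot's value: it tells the administrator who is probing, from where, and how often.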
Security vulnerability: Data mining algorithms
• Data mining techniques can then be used to analyse the data collected by the honeypot and to detect important patterns and attacks.
• Using data mining algorithms we can learn statistical patterns of logs adaptively and detect intrusions as statistical anomalies relative to the learned patterns.
• We can then use this information to detect weaknesses in our networks and IT security.
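A minimal sketch of the statistical idea: learn a baseline from historical log counts, then flag values far from the learned mean as anomalies. The counts and the three-standard-deviation threshold are illustrative assumptions:

```python
# Sketch: learning a statistical pattern from logs (failed logons per hour)
# and flagging counts far from the learned mean as anomalies.

from statistics import mean, stdev

baseline = [2, 3, 1, 2, 4, 2, 3, 2, 1, 3]   # assumed normal failed logons/hour
mu, sigma = mean(baseline), stdev(baseline)

def is_anomaly(count: int, threshold: float = 3.0) -> bool:
    """Flag a count more than `threshold` standard deviations from the mean."""
    return abs(count - mu) > threshold * sigma

print(is_anomaly(3))    # False: within the learned pattern
print(is_anomaly(40))   # True: likely a brute-force attempt
```

Production systems use far richer models, but the shape is the same: learn what normal looks like, then alert on deviations.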
Security vulnerability: Vulnerability testing
• A vulnerability scanner is a software program that runs a database of known vulnerabilities against your system to identify weaknesses.
• It is highly recommended that you obtain
such a vulnerability scanner and run it on
your network to check for any known
security holes.
• It is always preferable for you to find them
on your own network before someone
outside the organisation does by running
such a tool against you.
Security vulnerability: Vulnerability testing

• The vulnerability scanner may be a port scanner (such as NMAP: http://nmap.org/), a network enumerator, a web application, or even a worm.
• In all cases it runs tests on its target against a complete range of known vulnerabilities.
• Examples of vulnerability scanners include Nessus (http://www.nessus.org/nessus/) and Retina (http://www.eeye.com/Retina); SAINT and OpenVAS are also widely used.
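The port-scanning stage of such a tool can be sketched as follows. The scan is run against a locally opened socket so it has something to find; real scanners such as NMAP are vastly more capable and efficient:

```python
# Sketch of a port scanner: try to connect to each port and report which
# ones accept. Scanning only localhost here, against a socket we open
# ourselves for the demonstration.

import socket

def scan(host: str, ports) -> list:
    open_ports = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.3)
        if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
            open_ports.append(port)
        s.close()
    return open_ports

# open one listening socket so the scan has something to find
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # let the OS pick a free port
listener.listen()
port = listener.getsockname()[1]

found = scan("127.0.0.1", [port])
print(found)                      # the listening port is reported open
listener.close()
```

Only ever run scans like this against systems you own or are authorised to test.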
(M2) Benefits of network monitoring systems with supporting reasons
• You need to:
• M2 Discuss three benefits of implementing network monitoring systems, with supporting reasons.
• After writing the assignment, consider the overall advantages of the different monitoring systems.
(D1) Trusted network vs untrusted
• You need to: D1 Investigate how a 'trusted network' may be part of an IT security solution.
• A trusted network is one which is under the control of the network manager or the network administrator. Computers on trusted networks are more secure and confidential because of strong firewalls. A trusted network is only open to authorized users and allows only secure data to be transmitted.
• Untrusted networks are situated outside the security perimeter and control of the network admin. They could even be a private or a shared network.
• Consider the types of security policies and access levels you can use to secure a network.
Digital Forensics - Definition

• Definition: the application of computer investigation and analysis techniques to gather evidence suitable for presentation in a court of law.
OR
• Digital forensics is defined as the process of preservation, identification, extraction, and documentation of computer evidence which can be used by a court of law.
• It is the science of finding evidence from digital media like a computer, mobile phone, server, or network.
• It provides the forensic team with the best techniques and tools to solve complicated digital-related cases.
• Digital forensics helps the forensic team to analyze, inspect, identify, and preserve the digital evidence residing on various types of electronic devices.
Objectives of Digital forensics

• It helps to recover, analyze, and preserve computer and related materials in such a manner that the investigation agency can present them as evidence in a court of law.
• It helps to postulate the motive behind the crime and the identity of the main culprit.
• Helps you to identify the evidence quickly, and also allows you to estimate the potential impact of the malicious activity on the victim.
• Producing a computer forensic report which offers a complete report on the investigation process.
• Preserving the evidence by following the chain of custody.
Digital Forensic Model
NIST forensic model

• Each computer forensic model is focused on a particular area such as law enforcement or electronic evidence discovery. There is no single digital forensic investigation model that has been universally accepted.
• However, it is generally accepted that a digital forensic model framework must be flexible, so that it can support any type of incident and new technologies.
• NIST developed a basic digital forensic investigation model with which an investigation can be conducted even by non-technical persons.
• The NIST model gives more flexibility than any other model, so that an organization can adopt the most suitable model based on the situation that has occurred.
Process of Digital forensics
NIST Digital forensics entails the following steps (Kent et al., n.d.):

• Collection

• Examination

• Analysis

• Reporting
Process of Digital forensics

Collection
• During the collection phase, data related to a specific event is identified, labeled, recorded, and collected, and its integrity is preserved.
• The first step in the collection forensic process is to identify potential sources of data
and acquire data from them.
• The most obvious and common sources of data are desktop computers, servers,
network storage devices, and laptops.
• In addition to computer-related devices, many types of portable digital devices (e.g.,
PDAs, cell phones, digital cameras, digital recorders, audio players) may also contain
data.
Process of Digital forensics

Collection
• After the data has been acquired, its integrity should be verified. It is particularly
important for an analyst to prove that the data has not been tampered with if it might
be needed for legal reasons.
• Data integrity verification typically consists of using tools to compute the message
digest of the original and copied data, then comparing the digests to make sure that
they are the same.
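The comparison step can be sketched with Python's standard hashlib (SHA-256 is chosen here for illustration; forensic tools let the analyst pick the digest algorithm):

```python
# Sketch of the integrity check described above: compute the message
# digest of the original and of the copy, then compare the digests.

import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"disk image contents"
forensic_copy = b"disk image contents"
tampered_copy = b"disk image c0ntents"

print(digest(original) == digest(forensic_copy))   # True: copy is intact
print(digest(original) == digest(tampered_copy))   # False: evidence altered
```

Because any change to the data changes the digest, matching digests give strong evidence that the copy has not been tampered with.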
Process of Digital forensics
Collection
Answer the five W's in the digital forensics process: Who, What, When, Where, Why.
• Who controlled the evidence?
• What was used to collect it?
• Why was it done in that manner?
• When was each piece of evidence found?
• Where was the evidence found?
Process of Digital forensics
Examination
• In the second phase, examination, forensic tools and techniques appropriate to the
types of data that were collected are executed to identify and extract the relevant
information from the collected data while protecting its integrity.
• Examination may use a combination of automated tools and manual processes
• Examination of data involves assessing and extracting the relevant pieces of
information from the collected data. This phase may also involve bypassing or
mitigating OS or application features that obscure data and code, such as data
compression, encryption, and access control mechanisms.
Process of Digital forensics
Examination
• Text and pattern searches can be used to identify pertinent data, such as finding
documents that mention a particular subject or person, or identifying e-mail log entries
for a particular e-mail address.
• Another helpful technique is to use a tool that can determine the type of contents of
each data file, such as text, graphics, music, or a compressed file archive.
• Knowledge of data file types can be used to identify files that merit further study, as
well as to exclude files that are of no interest to the examination.
• There are also databases containing information about known files, which can also be
used to include or exclude files from further consideration
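File-type identification is typically done from a file's leading "magic" bytes rather than its name. A sketch with a few well-known signatures (the signature table here is deliberately tiny; real tools carry thousands of entries):

```python
# Sketch: determining a file's content type from its leading bytes, as
# examination tools do. Only a few well-known signatures are shown.

MAGIC = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"%PDF": "PDF document",
    b"PK\x03\x04": "ZIP archive (also docx/xlsx)",
}

def identify(data: bytes) -> str:
    for sig, kind in MAGIC.items():
        if data.startswith(sig):   # match on the file's magic number
            return kind
    return "unknown"

print(identify(b"%PDF-1.7 rest of file"))   # PDF document
print(identify(b"just plain text"))         # unknown
```

This is why renaming `evidence.pdf` to `notes.txt` does not fool an examiner: the content, not the name, determines the type.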
Process of Digital forensics
Analysis
• The analysis phase involves analyzing the results of the examination to derive useful
information that addresses the questions that were the impetus for performing the
collection and examination.
• In this step, investigation agents reconstruct fragments of data and draw conclusions
based on evidence found. However, it might take numerous iterations of examination
to support a specific crime theory.
Process of Digital forensics
Analysis

• The analysis should include identifying people, places, items, and events, and
determining how these elements are related so that a conclusion can be reached.
• Often, this effort will include correlating data among multiple sources.
• Tools such as centralized logging and security event management software can
facilitate this process by automatically gathering and correlating the data. Comparing
system characteristics to known baselines can identify various types of changes made
to the system.
Process of Digital forensics
Reporting
• The final phase involves reporting the results of the analysis, which may include
describing the actions performed, determining what other actions need to be
performed, and recommending improvements to policies, guidelines, procedures, tools,
and other aspects of the forensic process.
• It includes the presentation of all the digital evidences and documentation in the court
in order to prove the digital crime committed and identify the criminal.
Process of Digital forensics 

• The forensic process transforms media into evidence, whether evidence is needed for law enforcement or for an organization's internal usage.
• Specifically, the first transformation occurs when collected data is examined, which
extracts data from media and transforms it into a format that can be processed by
forensic tools.
• Second, data is transformed into information through analysis.
• Finally, the information transformation into evidence is analogous to transferring
knowledge into action using the information produced by the analysis in one or more
ways during the reporting phase. For example, it could be used as evidence to help
prosecute a specific individual, actionable information to help stop or mitigate some
activity, or knowledge in the generation of new leads for a case.
Types of Digital Forensics
• Disk Forensics:
It deals with extracting data from storage media by searching active, modified, or
deleted files.
• Network Forensics:
It is a sub-branch of digital forensics. It is related to monitoring and analysis of
computer network traffic to collect important information and legal evidence.
• Wireless Forensics:
It is a division of network forensics. The main aim of wireless forensics is to offer the tools needed to collect and analyze data from wireless network traffic.
• Database Forensics:
It is a branch of digital forensics relating to the study and examination of databases
and their related metadata.
Types of Digital Forensics
• Malware Forensics:
This branch deals with the identification of malicious code, to study their payload,
viruses, worms, etc.
• Email Forensics
Deals with recovery and analysis of emails, including deleted emails, calendars, and
contacts.
• Memory Forensics:
It deals with collecting data from system memory (system registers, cache, RAM) in raw
form and then carving the data from Raw dump.
• Mobile Phone Forensics:
It mainly deals with the examination and analysis of mobile devices. It helps to retrieve
phone and SIM contacts, call logs, incoming, and outgoing SMS/MMS, Audio, videos,
etc.
Benefits of Digital Forensics
Here are the pros/benefits of digital forensics:

• To ensure the integrity of the computer system.
• To produce evidence in court, which can lead to the punishment of the culprit.
• It helps companies to capture important information if their computer systems or networks are compromised.
• Efficiently tracks down cybercriminals from anywhere in the world.
• Helps to protect the organisation's money and valuable time.
• Allows investigators to extract, process, and interpret the factual evidence, so that the cybercriminal's actions can be proved in court.
Major Cons of Digital Forensics
Here are the major cons/drawbacks of using digital forensics:
• For digital evidence to be accepted into court, it must be proved that there has been no tampering. You need to produce authentic and convincing evidence. If the tool used for digital forensics does not meet specified standards, the evidence can be rejected in a court of law.
• Investigators must apply the following two tests, in both digital and physical forensics, for evidence to survive in a court of law:
• Authenticity: Where did the evidence originate?
• Reliability: How was the evidence handled?
• Legal practitioners must have extensive computer knowledge.
• Another con is that when retrieving data, the analyst may inadvertently disclose privileged documents (Cyber Security for Small Businesses, 2019).
• Computer forensics is still fairly new and some may not understand it. The analyst must be able to communicate his findings in a way that everyone will understand.
Issues
• The increase in PCs and the extensive use of Internet access.
• Easy availability of hacking tools.
• Lack of physical evidence makes prosecution difficult.
• The large amount of storage space, running into terabytes, makes the investigation job difficult.
• Any technological change requires an upgrade or changes to solutions.
Network performance:
• RAID
• Main/Standby
• Dual LAN
• Web server balancing
Data security:
• Explain asset management, image differential/incremental backups, SAN servers.
Redundant array of inexpensive disks (RAID)
• In terms of component failure, the hard disk is responsible for 50 percent of all system downtime (followed closely by system fans; both have moving parts that can wear out).
• It should come as no surprise that hard disks have garnered the most attention for fault tolerance.
• RAID is a set of standards that enables servers to cope with the failure of one or more hard disks.
• As part of fault tolerance you require redundant hardware components that can easily or automatically take over when a hardware failure occurs. RAID is the solution for hard disks.
What is RAID?

• Redundant Array of Inexpensive Disks or


• Redundant Array of Independent Disks
Motivation for RAID

• Just as additional memory in form of cache, can improve system


performance, in the same way additional disks can also improve
system performance
• In RAID, we use an array of disks. These disks operate independently
• Since there are many disks, multiple I/O requests can be handled in
parallel if the data required is on separate disks
• A single I/O operation can be handled in parallel if the data required is
distributed across multiple disks
Benefits of RAID

• Data loss can be very dangerous for an organisation


• RAID technology prevents data loss due to disk failure
• RAID technology can be implemented in hardware or software
• Servers make use of RAID technology
RAID Technology

• The common characteristic in all these levels is:


• A set of physical disk drives.
• The operating system views these separate disks as a single logical disk.
• Data is distributed across the physical drives of the array.
• Redundant disk capacity is used to store parity information.
• Parity information can help in recovering data in case of disk failure
RAID Level 0 - Characteristics

• RAID level 0 divides data into block units and writes them across a number of disks.
• As data is placed across multiple disks, it is also called "data striping".
• The advantage of distributing data over disks is that if two different I/O requests are pending for two different blocks of data, then there is a possibility that the requested blocks are on different disks.
RAID Level 0

• There is no parity checking of data.


• So if data in one drive gets corrupted then all the data would be lost.
Thus RAID 0 does not support data recovery
• Spanning is another term that is used with RAID level 0 because the
logical disk will span all the physical drives
• RAID 0 implementation requires a minimum of 2 disks
STRIPING
• Take file data and map it to different disks
• Allows for reading data in parallel
• Striping improves overall I/O performance by allowing multiple I/Os to be
serviced in parallel, thus providing high overall transfer rates.
• It also enables load balancing.

file data block 0 block 1 block 2 block 3

Disk 0 Disk 1 Disk 2 Disk 3


RAID Level 0 - Diagram
RAID Level 0 - Advantages

• Advantage of RAID level 0 is that it increases speed.


• Throughput (speed) is increased because :
• Multiple data requests probably not on same disk
• Disks seek in parallel
• A set of data is likely to be striped across multiple disks

• Implementation is easy
• No overhead of parity calculation
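The round-robin placement of blocks across the array, as shown in the striping diagram earlier, can be sketched as follows (block and disk counts are illustrative):

```python
# Sketch of RAID 0 striping: block i of the file is written to disk
# i mod N, spreading the data round-robin across the array.

def stripe(blocks, n_disks):
    disks = [[] for _ in range(n_disks)]
    for i, block in enumerate(blocks):
        disks[i % n_disks].append(block)   # round-robin across the disks
    return disks

file_blocks = ["block0", "block1", "block2", "block3", "block4"]
print(stripe(file_blocks, 4))
# [['block0', 'block4'], ['block1'], ['block2'], ['block3']]
```

Because consecutive blocks land on different disks, reads and writes of a large file can proceed on all four disks in parallel, which is exactly where the throughput gain comes from.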
RAID Level 0 - Disadvantages

• Not a true RAID because it is not fault tolerant

• The failure of just one drive will result in all data in an array being
lost.

• Should not be used in mission critical environments


RAID Level 1 - Characteristics

• This level is called "mirroring" as it copies data onto two disk drives
simultaneously.
• As same data is placed on multiple disks, it is also called “data
mirroring”
• The automatic duplication of the data means there is little likelihood
of data loss or system downtime.
RAID Level 1 - Diagram
RAID Level 1 - Animation
RAID Level 1 - Characteristics

• A read request can be executed by either of the two disks


• A write request means that both the disks must be updated. This can be
done in parallel
• There is no overhead of storing parity information
• Recovery from failure is simple. If one drive fails we just have to
access data from the second drive
RAID Level 1 - Advantages

• Main advantage is RAID 1 provides fault tolerance. If one disk fails,


the other automatically takes over.
• So continuous operation is maintained.
• RAID 1 is used to store systems software (such as drivers, operating
systems, compilers, etc) and other highly critical files.
RAID Level 1 - Disadvantages

• Main disadvantage is cost. Since data is duplicated, storage costs


increase.
RAID Level 2
• In RAID 2 mechanism, all disks participate in the execution of every I/O request.

• RAID 2 differs from other levels of RAID because it does not use the standard way of mirroring,
striping or parity. It implements these methods by separating data in the bit level and then saving
the bits over a number of different data disks and redundancy disks. Hamming code is used to
compute for the parity of the redundant bits to check and correct errors.

• The spindles of individual disk drives are synchronized so that each disk head is in the same
position on each disk at any given time. This configuration requires special driver hardware to
make the disks spin synchronously.

• Not implemented in practice due to high costs and overheads


RAID Level 3

• Data is divided into byte units and written across multiple disk drives.
• Parity information is stored for each disk section and written to a
dedicated parity drive.
• All disks can be accessed in parallel
• Data can be transferred in bulk. Thus high speed data transmission is
possible
RAID Level 3

• In case of drive failure, the parity drive is accessed and data is


reconstructed from the remaining devices.
• Once the failed drive is replaced, the missing data can be restored on
the new drive
• RAID 3 can provide very high data transfer rates
RAID Level 3

Parity Disk
Important Questions on RAID

• What is the motivation for using RAID? What common characteristics


are shared by all RAID levels?
• Explain RAID level 0, 1, 2, and 3.
• Explain the term striped data.
• How is redundancy achieved in a RAID system?
RAID 5
• Also known as disk striping with parity; uses distributed parity to write information across all disks in the array.
• Unlike the striping used in RAID 0, RAID 5 includes parity information in the striping, which provides fault tolerance.
• This parity information can re-create the data if a failure occurs.
• RAID 5 requires a minimum of three disks, with the equivalent of a single disk used for the parity information.
• This means that if you have three 1TB hard disks, you have 2TB of storage space, with the other 1TB used for parity. To increase storage space in a RAID 5 array, you need only add another disk to the array.
• RAID 5 can continue to function if a single drive failure occurs. If a hard disk in the array were to fail, the parity would re-create the missing data and the array would continue to function with the remaining drives.
• The read performance of RAID 5 is improved over a single disk.
• The parity data are not written to a fixed drive; they are spread across all drives. Using the parity data, the computer can recalculate the data of one of the other data blocks, should those data no longer be available.
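The parity mechanism is plain XOR: the parity block is the XOR of the data blocks in a stripe, so XOR-ing the parity with the surviving blocks rebuilds a lost one. A sketch using one-byte "disks" to stand in for real drives:

```python
# Sketch of RAID 5 recovery: parity = XOR of the data blocks, and a
# lost block = XOR of the parity with the surviving blocks.

def xor_parity(*blocks: int) -> int:
    p = 0
    for b in blocks:
        p ^= b          # XOR accumulates the parity
    return p

disk0, disk1 = 0b1010, 0b0110
parity = xor_parity(disk0, disk1)   # in RAID 5, spread across the array

# disk1 fails: rebuild its contents from the parity and the surviving disk
rebuilt = xor_parity(parity, disk0)
print(rebuilt == disk1)   # True
```

This is also why RAID 5 survives only a single drive failure: with two blocks of a stripe missing, the XOR equation no longer has a unique solution.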
RAID 10
• Combines RAID levels 1 and 0.
• In this configuration, four disks are required.
• RAID 10, also known as RAID 1+0, is a RAID configuration that combines disk mirroring and disk striping to protect data. It requires a minimum of four disks and stripes data across mirrored pairs. As long as one disk in each mirrored pair is functional, data can be retrieved. If two disks in the same mirrored pair fail, all data will be lost because there is no parity in the striped set.
• The configuration consists of a mirrored stripe set. To some extent, RAID 10 takes advantage of the performance capability of a stripe set while offering the fault tolerance of a mirrored solution.
• In addition to the benefits of each, though, RAID 10 inherits the shortcomings of each strategy.
• In this case, the high overhead and decreased write performance are the disadvantages.
RAID (Summary of RAID Levels)
• RAID 0 - Disk striping. Advantage: increased read and write performance; can be implemented with two or more disks. Disadvantage: does not offer any fault tolerance. Required disks: two or more.
• RAID 1 - Disk mirroring. Advantage: provides fault tolerance; can also be used with separate disk controllers, reducing the single point of failure (this is called disk duplexing). Disadvantage: 50% overhead and suffers from poor write performance. Required disks: two.
• RAID 5 - Disk striping with distributed parity. Advantage: can recover from a single disk failure; increased read performance over a single disk; disks can be added to the array to increase storage capacity. Disadvantage: may slow down the network during regeneration time, and write performance may suffer. Required disks: minimum of three.
• RAID 10 - Striping with mirrored volumes. Advantage: increased performance with striping; offers mirrored fault tolerance. Disadvantage: high overhead, as with mirroring. Required disks: four.
Main/Standby
• Standby servers are a fault-tolerance measure in which a second server is identically configured to the first one.
• The second server can be stored remotely or locally and set up in a failover configuration.
• In a failover configuration, the secondary server connects to the primary and is ready to take over the server functions at a moment's notice. If the secondary server detects that the primary has failed, it automatically cuts in.
• Network users will not notice the transition, because little or no disruption in data availability occurs.
• The primary (main) server communicates with the secondary server by issuing special notification messages called heartbeats.
• If the secondary server stops receiving the heartbeat messages, it assumes that the primary (main) has died and therefore assumes the primary server configuration.
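The heartbeat logic can be sketched as follows. The 3-second timeout and the explicitly passed-in clock are illustrative assumptions; a real implementation would send heartbeats over the network on a timer:

```python
# Sketch of heartbeat-based failover: the secondary tracks when it last
# heard from the primary and promotes itself once the gap exceeds a timeout.

class StandbyServer:
    def __init__(self, timeout: float = 3.0):
        self.timeout = timeout       # seconds of silence before failover
        self.last_heartbeat = 0.0
        self.role = "standby"

    def receive_heartbeat(self, now: float) -> None:
        self.last_heartbeat = now    # the primary is alive

    def check(self, now: float) -> str:
        if now - self.last_heartbeat > self.timeout:
            self.role = "primary"    # assume the main server's configuration
        return self.role

standby = StandbyServer()
standby.receive_heartbeat(now=10.0)
print(standby.check(now=12.0))   # standby: heartbeat still fresh
print(standby.check(now=15.0))   # primary: heartbeats stopped, failover
```

Passing the clock in explicitly keeps the failover decision easy to follow and to test.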
Dual LAN (NIC Teaming, link aggregation, or trunking)
• Combining two or more physical Ethernet links into a single logical link.
• If two 1Gb/s ports were aggregated, you would get a total aggregated (group) bandwidth of 2Gb/s.
• If one physical part of the logical link fails, traffic will fail over to the remaining active links.
• Consider that if you're transferring a file from one PC to another over a 2Gb aggregated link, you'll find that the total maximum transfer rate will top out at 1Gb/s. Start two file transfers, however, and you'll see the benefits of aggregated bandwidth.
• In simple terms, link aggregation increases the number of lanes on a highway, but it doesn't increase the speed limit.
To implement you need:
1. A switch/router that supports dual LAN/link aggregation
2. A PC with two LAN ports
3. Windows Server, Linux, or OS X
Server balancing (Load balancing)
• Network servers are the workhorses of the network. They are relied on to hold and distribute data, maintain backups, secure network communications, and more.
• The load is often too much for a single server to maintain. This is where load balancing comes into play.
• Load balancing is a technique in which the workload is distributed between several servers. This feature can take networks to the next level; it increases network performance, reliability, and availability.
• Increases redundancy and therefore data availability.
• Increases performance by distributing the workload.
• Implemented through server clustering.
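The simplest balancing policy, round-robin, hands each incoming request to the next server in the pool. The server names below are illustrative:

```python
# Sketch of round-robin load balancing: requests are distributed across
# the pool in turn, so no single server carries the whole workload.

import itertools

servers = ["srv-a", "srv-b", "srv-c"]
rr = itertools.cycle(servers)            # endlessly cycles through the pool

assignments = [next(rr) for _ in range(5)]   # 5 incoming requests
print(assignments)   # ['srv-a', 'srv-b', 'srv-c', 'srv-a', 'srv-b']
```

Real load balancers add health checks (skip a dead server) and smarter policies such as least-connections, but round-robin captures the core idea of spreading the workload.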
Data security: Asset management
• As part of your network risk management the assets used should be
considered and assessed by performance, configuration, and behaviour.
• Plan and organise devices:
• What functions do they perform? How and where are they used? Who is responsible
for them? Expected lifespan of each device including the refresh cycles, lease date
or end of life warranty.
• Monitor your devices:
• Consider performance, health, and risk exposure, and make informed decisions
about changes to your environment.
• Consider then how you identify the scope of unexpected changes in your
environment and how can you address them at-scale when they occur?  What’s your
action plan if a device is lost or stolen? How will you discover that it’s gone?
• DeviceData security:
Retirement: Asset management
• Ensure that the devices important to you are monitored and protected.
• Establish a process for your devices’ end of life.
• Device’s should be collected, secured, sanitized, and removed from your
environment when the time comes.
• How will you manage device returns when employees leave or change roles?
• How do you manage timely and secure device end-of-life?
• How can you confirm that they are safely decommissioned from your organisation?
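One way to make these questions concrete is a simple asset register that flags devices whose refresh or end-of-life date has passed. This is a hypothetical sketch, not a real asset-management tool; the asset records are invented:

```python
from datetime import date

# Each record captures the questions above: function, owner, refresh cycle.
assets = [
    {"id": "LT-001", "function": "staff laptop", "owner": "J. Smith",
     "refresh_due": date(2023, 6, 1)},
    {"id": "SW-014", "function": "core switch", "owner": "Network team",
     "refresh_due": date(2030, 1, 1)},
]

def due_for_retirement(assets, today):
    """Return asset IDs whose refresh/end-of-life date has passed."""
    return [a["id"] for a in assets if a["refresh_due"] <= today]

print(due_for_retirement(assets, date(2025, 1, 1)))   # flags LT-001
```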
Data security: image differential/incremental
backups
• Considered previously as part of Unit 2: Networking.
Data security: SAN (storage area network) servers
• A centralised subnetwork of storage devices, usually found on high-speed networks and shared by all servers on a network.
• A SAN makes a network of storage devices accessible to potentially multiple servers/devices.
• Often combined with “Fibre Channel” technology, which supports multi-gigabit-per-second data transfer over fibre-optic cable.
• Through storage virtualization these different devices will be seen as one storage area.
• Advantages include storage virtualization, high-speed disk technologies (Fibre Channel), centralised backup, and dynamic failover protection (continuous network operation even if a server fails or goes offline for maintenance, which enables built-in redundancy and automatic traffic rerouting).

Simple guide on how to configure in Windows Server:
https://www.techrepublic.com/blog/data-center/diy-san-windows-server-2012-storage-spaces-and-iscsi-target/
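The "seen as one storage area" idea can be sketched as a storage pool that presents several physical devices as a single capacity. The device names and sizes here are invented, and a real SAN's virtualization layer does far more (mapping blocks, handling failover), but the principle is the same:

```python
class StoragePool:
    """Present several physical disks as one virtual storage area,
    as a SAN's storage-virtualization layer does."""
    def __init__(self, devices):
        self.devices = devices          # device name -> capacity in GB

    @property
    def total_capacity_gb(self):
        # Servers see one volume; they never address the individual disks.
        return sum(self.devices.values())

pool = StoragePool({"disk-array-1": 2000, "disk-array-2": 4000, "nvme-shelf": 1000})
print("servers see one volume of", pool.total_capacity_gb, "GB")
```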
Data centres
• For larger businesses and networks, data centres can be created. A data centre centralizes an organisation's IT operations and equipment, and is where it stores, manages, and disseminates its data.
• Replica data centres can be created to synchronise and maintain mirror-like functionality, so if one goes down the network keeps running.
• Data centres can use virtualisation, so many physical servers can be consolidated into virtual servers running on a single physical machine.
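The replica idea can be sketched as a simple failover check: route requests to the primary data centre while it is healthy, and fall over to the replica when it is not. The names and health flags below are invented for illustration:

```python
def choose_datacentre(centres):
    """Return the first healthy data centre, mirroring primary/replica failover."""
    for name, healthy in centres:
        if healthy:
            return name
    raise RuntimeError("no data centre available")

# Primary is down, so traffic is served from the synchronised replica.
centres = [("dc-primary", False), ("dc-replica", True)]
print("serving from", choose_datacentre(centres))
```

Because the replica is kept synchronised, the switch-over is transparent to users; real deployments automate the health checks and DNS/routing change.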
Secure Transport Protocol
• Cryptographic protocols designed to provide communications security over a network and in data centres.
• TLS (Transport Layer Security) replaces SSL (Secure Sockets Layer) as the current most secure option.
• Websites can use TLS to secure all communications between their servers and web browsers.
• TLS used in the context of web servers is known as HTTPS (Hyper Text Transfer Protocol Secure), that is, HTTP over TLS.

[Diagram: example of TLS being used to send an email]

Further guidance:
https://www.ncsc.gov.uk/guidance/tls-external-facing-services
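As a small illustration, Python's standard ssl module can build a client context that verifies certificates and refuses anything older than TLS 1.2, in line with current guidance on retiring legacy SSL/TLS versions. This is a client-side sketch; a real deployment would also configure certificates and protocol versions on the server:

```python
import ssl

# A default client context verifies the server's certificate and hostname.
context = ssl.create_default_context()

# Refuse legacy SSL/early-TLS versions; require TLS 1.2 or newer.
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.verify_mode == ssl.CERT_REQUIRED)   # certificate checking is on
print(context.minimum_version)
```

Wrapping a socket with this context (via `context.wrap_socket`) gives the encrypted channel that HTTPS runs over.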
Secure MPLS (Multiprotocol Label Switching) routing
• MPLS is a switching technology used frequently with data centres to make packet forwarding happen.
• A technology designed to speed up network traffic flow by moving away from the use of traditional routing tables.
• Instead of routing tables, MPLS uses short labels to direct packets and forward them through the network.
• Because labels refer to paths and not endpoints, packets destined for the same endpoint can use a variety of LSPs (label-switched paths) to get there:
• The packet follows the channel to its destination, thereby eliminating the need to check the packet for forwarding information at each hop and reducing the need to check routing tables.
• The "multiprotocol" part of the name refers to the fact that MPLS works with a variety of protocols, including Frame Relay, ATM, and IP.

Routing table: a set of rules, often viewed in table format, that is used to determine where data packets traveling over an Internet Protocol (IP) network will be directed.

Further reading:
https://www.networkworld.com/article/2297171/network-security-mpls-explained.html
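The label-swap idea can be simulated in a few lines: each router holds a small label table, swaps the incoming label for an outgoing one, and forwards, never inspecting the packet's destination address at each hop. The routers and label values here are invented for illustration:

```python
# Per-router label tables: incoming label -> (next hop, outgoing label).
label_tables = {
    "R1": {100: ("R2", 200)},
    "R2": {200: ("R3", 300)},
    "R3": {300: ("exit", None)},   # egress router pops the label
}

def forward(router, label, path=None):
    """Follow an LSP by swapping labels hop by hop until the egress router."""
    path = path or [router]
    next_hop, out_label = label_tables[router][label]
    if next_hop == "exit":
        return path
    path.append(next_hop)
    return forward(next_hop, out_label, path)

print(forward("R1", 100))   # the LSP taken: ['R1', 'R2', 'R3']
```

Each lookup is a single exact-match on a short label, which is what makes forwarding cheaper than a full routing-table lookup per hop.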
Remote access methods/procedures for third-party access
• Remote management allows centrally located personnel and applications to monitor, manage, and respond to globally distributed networks and systems from a single location.
• With these tools, IT managers can respond to problems quickly and perform corrective actions from anywhere in the world at any time.
• This addresses staffing issues and ensures effective systems management.
• Remote access methods should be able to:
• Remotely configure, monitor, and manage equipment
• Access equipment over the network (in-band), through a single modem connection (out-of-band), or via the Internet (IP-based management)
• Connect equipment that lacks a network interface
• Secure access to mission-critical equipment

Companies that provide remote access data centres:
scc.com
lantronix.com