Cybersecurity Module 1 Lesson 6 Notes
Enterprise Network
Infrastructure
Module 1 Lesson 6
Summary Notes
Contents
Introduction
Lesson outcomes
Concept of Deployment
Centralised infrastructure
Decentralised infrastructure
Converged infrastructure
Hyperconverged infrastructure
Comparison table
Conclusion
References
Lesson outcomes
By the end of this lesson, you should be able to:
• Describe the infrastructure options available when setting up an enterprise network
• Explain the benefits of centralised and decentralised infrastructures
• Describe how converged and hyperconverged infrastructures store data within the network
Introduction
In today’s lesson, we are going to cover networking in depth. We will explore the different infrastructure options and how they can be applied to an enterprise network. In doing this, we’ll discuss the benefits of implementing a centralised or a decentralised infrastructure, then describe the converged infrastructure and how it stores data within the network, and finally examine how a converged infrastructure can be upgraded to become a hyperconverged infrastructure.
Concept of Deployment
Exploring deployment and virtualization
Before we dive into network infrastructures, let’s quickly talk about deployment and virtualisation, two terms that are commonly used in the context of network administration. Deployment refers to setting up a new system to the point that it is ready for productive work in a live environment. In the simplest of terms, deployment is the stage in which a task is started up or initialised; that task could be a piece of software, a configuration, or a pipeline of both simultaneously.
When deploying a new system, big organisations often use virtualisation. Basically, this means combining all their
hardware and software network resources and functionality into a single, software-based administrative entity – a virtual
network.
By using virtualisation, companies are able to implement redundancy without purchasing additional hardware. Redundancy is a third important term to get your head around when discussing network infrastructure.
Network redundancy is the process of installing additional network devices, equipment, and communication mediums
within a network infrastructure to ensure network availability and decrease the risk of failure along the critical data path.
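The idea of redundancy along the critical data path can be sketched in a few lines of Python. This is an illustrative toy, not something from the lesson; the link names are hypothetical:

```python
# Illustrative sketch: network redundancy as a primary/standby failover.

def send_over(paths, payload):
    """Try each path along the critical data path in order; the first
    healthy one carries the traffic, so a single failure is survivable."""
    for path in paths:
        if path["healthy"]:
            return f"{payload} via {path['name']}"
    raise ConnectionError("all redundant paths are down")

# The primary link fails, but the redundant standby link keeps the
# network available, which is exactly what redundancy buys us.
paths = [
    {"name": "primary-link", "healthy": False},
    {"name": "standby-link", "healthy": True},
]
print(send_over(paths, "query"))  # query via standby-link
```

With only the primary path configured, the same call would raise an error; the standby path is what keeps the network available.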
When it comes to setting up an enterprise network there are five different infrastructure options. They are:
• Centralised infrastructure
• Decentralised (or distributed) infrastructure
• Converged infrastructure
• Hyperconverged infrastructure and
• Cloud-based infrastructure
We are going to look at the first four in this lesson and will have a closer look at cloud-based infrastructure in the next
lesson after a discussion about enterprise portals and policies.
Centralised infrastructure
User story
To enable you to better understand a centralised infrastructure used on a remote site, we are going to look at a user story.
A company is considering setting up a centralised infrastructure that will allow multiple applications/software to be shared
across the USA, Asia and Europe.
The company has a specification in place to show the number of users that will be connecting to the network and the
resources to be shared. It looks like this:
• Users: The system must be designed to support 1,500,000 to 3,000,000 users and 5,000 simultaneous queries.
• Resources: It will comprise 1,000 hierarchical resources, each with multiple actions, to be shared in communication between all connected users.
• Latency (or the time it takes for data to get to its destination): It needs to allow for less than 0.2 seconds per query response time for role- and rule-based decisions.
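As a quick sanity check of this specification, we can work out the sustained decision rate the numbers imply. This assumes "simultaneously" means queries in flight at any instant, which is our reading rather than something the lesson states:

```python
# Back-of-the-envelope check of the user story's specification.

concurrent_queries = 5000   # queries in flight at any one time
max_latency_s = 0.2         # required response time per query

# If every in-flight slot turns over a query each 0.2 seconds, the
# steady-state rate the central servers must sustain is:
throughput_qps = concurrent_queries / max_latency_s
print(throughput_qps)       # 25000.0 queries per second
```

Figures like this are what drive the choice of a very sophisticated, dedicated centralised server, as discussed under the benefits below.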
In terms of infrastructure – it’s one central enterprise covering all geographies.
Our first stop is the Policy Enforcement Points (PEPs) at the bottom. A policy enforcement point (PEP) is a component that serves as the guard to a digital resource on a network. If a user of the system tries to access a file or resource, the PEP component passes the user's attributes to the Policy Decision Point (PDP) component in the middle of the diagram, which makes decisions based on those attributes. PEPs are compatible with web servers, portals, legacy applications, LDAP directories and SOAP engines.
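The PEP-to-PDP hand-off can be sketched as a toy attribute-based check in Python. The policy shape, attribute names and file names here are hypothetical; real deployments use dedicated policy engines rather than anything this simple:

```python
# Toy sketch of the PEP -> PDP flow: the PEP guards the resource and
# defers the actual decision to the PDP.

def pdp_decide(user_attrs, resource, policies):
    """Policy Decision Point: evaluate the user's attributes against policy."""
    for policy in policies:
        if policy["resource"] == resource and policy["role"] == user_attrs["role"]:
            return "Permit"
    return "Deny"

def pep_guard(user_attrs, resource, policies):
    """Policy Enforcement Point: guards the resource; serves it only if
    the PDP permits access."""
    decision = pdp_decide(user_attrs, resource, policies)
    if decision != "Permit":
        raise PermissionError(f"access to {resource} denied")
    return f"{resource} served to {user_attrs['name']}"

policies = [{"resource": "payroll.xlsx", "role": "hr-admin"}]
alice = {"name": "alice", "role": "hr-admin"}
print(pep_guard(alice, "payroll.xlsx", policies))  # payroll.xlsx served to alice
```

A user whose attributes match no policy gets a Deny from the PDP, and the PEP refuses to serve the resource.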
We’ll talk more about PEPs when we get to the section on network security. The Entitlement Database or DB at the top of
the diagram is a collection of information, typically aggregations of data records or files, resources, and other vital
information about the company.
The Administration of Entitlement, or entitlement management, is generally monitored by a high-level administrator in the company. The company uses this to manage the identity and access lifecycle on a larger scale. The process is usually automated because of the large number of requests. The Policy Administration Point (or PAP) cluster comprises the components that enable the company administrator to create and edit digital policies on the network. The PAP can also be referred to as the Policy Management Authority (PMA) because it helps manage the different policies in a network.
The HyperText Transfer Protocol (HTTP) refers to the medium of communication, in most cases through the use of organisation portals. The Redundancy Management Interface (RMI) is used as a secondary link between the active and standby wireless controllers. The RMI manages the Hot Standby Router Protocol (HSRP), which is used to create a virtual IP to ensure redundancy in the case of a network failure.
From our user story, we would have three remote sites in the USA, Asia and Europe, and the main centralised server running from New York. In New York, the centralised server is where we would have the Policy Administration Point cluster and the
Policy Decision Point. This means that the New York system administrators will be able to create and edit digital policies.
The system administrator will also be able to control who has access to file/resources on the company’s network.
All the traffic from connected users on the remote sites will have to go through the centralised server in New York.
So, what are some of the main benefits of a centralised infrastructure? The most obvious reason for businesses to adopt a centralised infrastructure is to ensure the efficiency of the network. Other benefits include the following:
• Reduced costs: In a centralised architecture, the main server carries most of the workload, so companies can spend more funds acquiring a very sophisticated and dedicated centralised server and less money on the interconnection between the small modules in the remote sites and the centralised server. With this system, considerable money is saved across the various locations of the company.
• Increased productivity: With more sophisticated and dedicated servers operating as the centralised server, this will be where things like QoS (mentioned earlier) and other requests and queries can be handled. It also improves productivity for IT staff: having one central server control the whole network means less IT management time and fewer admins.
• Centralised control: Because the centralised server is on the premises of the company, system administrators will be able to monitor the entire company network and control traffic sent from all the remote sites. IT staff will also be able to modify user permissions and deny access based on the information configured on the Policy Decision Point (PDP). All this information is made available to the system administrators in the company domains.
• Centralised database: In big companies, there is always the need to keep a record of all documentation,
transactions, files, and so on. Having a centralised database makes it easy for any employee with the appropriate
clearance to access any company file or resources regardless of their geographical location. In other words, data
and information can be easily shared across different departments, leading to better knowledge sharing and
collaboration among employees.
• Security: A centralised infrastructure can be easier to secure than other infrastructures. In our use case diagram, we saw that the company had its own database, PAP cluster and PDP cluster, all vital parts of an enterprise network. The company can direct most of its security spending to the centralised server location. With centralised security, companies also no longer have to deploy IT staff to individual network locations, freeing them to work on more strategic initiatives.
Decentralised infrastructure
The term decentralised refers to items spread around with no central focus. In the case of a decentralised network infrastructure, the main controllers are spread around remote sites rather than there being one centralised server, and the workload is evenly distributed among all remote sites. A decentralised infrastructure is often referred to as a distributed workload due to its nature. With a distributed workload, the remote sites don’t have to rely so heavily on a central server. The distributed workload approach has given rise to a lot of advancement in new and exciting technologies such as edge computing and Internet of Things devices.
This diagram illustrates the distribution and sharing of databases and servers within a remote site.
Because users in the remote site have their own dedicated resources, this clearly improves the speed at which they can access them.
User story
To enable you to better understand a decentralised infrastructure used on a remote site, we are going to look at a user story. The same company is now considering setting up an infrastructure that will allow multiple applications/software to be shared across the USA, Asia and Europe.
The company has a specification in place to show the number of users that will be connecting to the network and the
resources to be shared. It looks like this:
• Users: The system needs to be designed to support 40,000 to 60,000 users and 3,000 simultaneous queries.
• Resources: We are still looking at 1,000 hierarchical resources, each with multiple actions, to be shared in communication between all connected users.
• Latency (or the time it takes for data to get to its destination): It needs to allow for less than 0.2 seconds per query response time for role- and rule-based decisions.
But this time the infrastructure needs to be decentralised, with all four geographic locations sharing the workload of the entire company.
The same terminology applies as per our previous diagram but there are clearly significant differences…
Here we can see the PDPs are distributed among the remote sites, and all of them share a common entitlement database repository and Policy Administration Point (PAP) cluster.
Earlier we said the Policy Decision Point is the component that makes decisions based on the user's attributes received from the PEP. Here we see that both the PEP and the PDP are on the remote site, which greatly speeds up the time it takes to resolve queries.
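A minimal Python sketch shows why co-locating the PDP with the PEP helps latency. The figures are illustrative assumptions, not measurements from the lesson:

```python
# Sketch of distributed PDPs: each remote site resolves access decisions
# locally, so a query never has to cross the WAN to headquarters.

SITE_PDPS = {
    "usa":    {"local_decision_s": 0.05},
    "asia":   {"local_decision_s": 0.05},
    "europe": {"local_decision_s": 0.05},
}
WAN_ROUND_TRIP_S = 0.25   # assumed cost of consulting a central PDP instead

def query_latency(site):
    """The PEP hands off to the PDP on the same site: no WAN hop."""
    return SITE_PDPS[site]["local_decision_s"]

# Every site meets the < 0.2 s target; a WAN round trip to a central
# PDP at 0.25 s would miss it.
print(all(query_latency(s) < 0.2 for s in SITE_PDPS))  # True
```

The same comparison explains the "distributed workload" benefit below: each site answers its own queries instead of funnelling them through headquarters.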
There are a lot of benefits an enterprise can gain from adopting this type of infrastructure. These include the following:
• Decentralised infrastructure makes it easy for the system administrators at a remote site to configure their own PDPs to suit employees on that site, in line with the PAP at headquarters. This is most common in secure workspaces where sensitive data is transmitted within the site.
• Another key advantage is a distributed workload. In a decentralised infrastructure, there is an even spread of workload across all the remote sites. Using our case study example, the sites in Asia, the USA and Europe will work independently without overloading the headquarters server with different queries.
The remote sites connect to headquarters to update the Policy Administration Points, which helps manage and control different policies and assess the impact of new policies on applications and services in the network.
Converged infrastructure
In many big companies, a converged infrastructure is used to centralise the management of network resources. By implementing a virtualisation system, administrators can increase resource utilisation and drastically reduce the cost of acquiring new hardware. Converging the grouped network device resources means they can be shared by multiple applications and managed collectively using the policy administration points (PAPs) in the company network.
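The idea of converging grouped device resources into one collectively managed pool can be sketched as follows. This is a toy model; the capacities and application names are made up:

```python
# Toy model of converged resources: applications draw from one shared,
# collectively managed pool instead of owning dedicated hardware.

class ResourcePool:
    def __init__(self, total_gb):
        self.total_gb = total_gb      # e.g. storage pooled from many devices
        self.allocated = {}

    def allocate(self, app, gb):
        """Grant capacity only if the shared pool can still cover it."""
        if sum(self.allocated.values()) + gb > self.total_gb:
            raise MemoryError("pool exhausted")
        self.allocated[app] = self.allocated.get(app, 0) + gb
        return gb

pool = ResourcePool(total_gb=100)
pool.allocate("web-app", 40)
pool.allocate("database", 30)
print(pool.total_gb - sum(pool.allocated.values()))  # 30 GB still shareable
```

Because unused capacity stays in the pool rather than sitting idle on one device, utilisation rises and less new hardware needs to be bought, which is the cost benefit described below.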
In this diagram, we can see how a converged infrastructure can be used for the storage of data and information compared to a traditional architecture. You will see illustrations of storage in hyperconverged and cloud-based infrastructure as well, but more about that shortly.
In most cases, companies purchase the entire converged infrastructure system rather than buying the hardware and software components independently from suppliers. The system comes pre-configured, pre-tested and ready for deployment. The converged infrastructure is often found in big data centres. Its benefits include the following:
• Fast deployment: As mentioned, converged infrastructures usually come with all the pre-configured and pre-tested packaged components needed to deploy the infrastructure, including network cabling, virtualised storage, and so on. These ready-to-roll infrastructures are examined by certified experts.
• Reduction in deployment error and risk: Because converged infrastructures are tested before they are sold, the risk of error is greatly reduced.
• Guaranteed warranty: Companies acquiring the infrastructure are usually given a warranty on the packaged components, which means the company does not need to worry about day-to-day maintenance. The vendor supplying the converged system usually provides a single point of contact for maintenance and service issues. Where the packaged components are faulty, the vendor company is solely responsible for their repair and maintenance.
• Cost: Because a converged infrastructure supports the use of virtualisation the company will be utilising every
hardware resource in the network thereby saving funds on acquiring new hardware.
Hyperconverged infrastructure
Hyperconverged infrastructure is basically an upgrade of the converged infrastructure. Let's inspect the diagram of a hyperconverged infrastructure.
As can be seen in the diagram, in a hyperconverged infrastructure, multiple network devices such as server nodes and storage are combined to form dedicated storage array hardware. This infrastructure can be seen as a software-defined infrastructure: a unified system that combines all the elements of a traditional data centre, including storage, computation, networking, and management.
This integrated solution makes use of software and servers to replace purpose-built hardware and, in doing so, helps to reduce complexity and increase scalability in the data centre.
• Virtualisation: Virtualisation software powers the hyperconverged infrastructure to fully utilise every hardware resource. Companies have the additional benefit of deploying virtual desktop infrastructure (VDI) even from a remote site while connected to the network.
• Flexibility: With a hyperconverged infrastructure, a business can use the virtual desktop infrastructure (VDI) for
multiple data centres simultaneously. This also means businesses can adjust their policies even while the
deployment process is being initialised.
• Data redundancy: The data is replicated or mirrored and saved across different hard drives of different servers in
the cluster within the company network.
• Data replication: This is also one of the best benefits of the hyperconverged infrastructure, because in the event of a network failure the data is backed up across multiple locations and can easily be restored if needed.
• Data availability: A hyperconverged infrastructure is mainly controlled by software in virtual machines running in the same cluster on a network. If one server hosting a virtual machine goes down, features like VMware High Availability restart that virtual machine on another server, ensuring the availability of those virtual machines.
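The mirroring-and-failover behaviour described in these bullets can be sketched as follows. This is a toy model, not any vendor's actual replication logic, and the node layout is hypothetical:

```python
# Toy sketch of data redundancy in a cluster: each write is mirrored to
# several nodes, so reads survive the failure of any one node.

def mirrored_write(nodes, key, value, copies=2):
    """Store `copies` replicas of the value on distinct cluster nodes."""
    for node in nodes[:copies]:
        node["data"][key] = value

def redundant_read(nodes, key):
    """Return the first replica found, skipping nodes that are down."""
    for node in nodes:
        if node["up"] and key in node["data"]:
            return node["data"][key]
    raise LookupError(key)

cluster = [{"up": True, "data": {}} for _ in range(3)]
mirrored_write(cluster, "invoice-42", "paid")

cluster[0]["up"] = False                       # first server goes down...
print(redundant_read(cluster, "invoice-42"))   # ...the mirror still answers: paid
```

Real hyperconverged platforms do this in the storage layer under software control; the point here is only that a second replica turns a node failure into a non-event for the reader.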
Applications and workloads can be easily and quickly migrated or replaced, as all the infrastructure runs under one software control. Clearly, a hyperconverged infrastructure using virtualisation is a real win, and companies can gain a lot by making use of this type of infrastructure. Basically, it simplifies management, consolidates resources and reduces costs by combining computing, storage and networking into a single system. Its key elements are:
• Storage virtualisation
• Computing virtualisation
• Networking virtualisation
• Advanced management, which includes automation and deployment
Hyperconverged infrastructure also supports the rise of cloud infrastructure: with virtualisation placed on the storage and servers, businesses can simply install applications to run on the cloud and make them accessible to anyone on the internet.
You will notice these similar attributes when we look at cloud infrastructure in our next lesson.
Comparison table
Comparison of infrastructures
Let's summarise by looking at a comparison of the four infrastructures: centralised, decentralised, converged and hyperconverged.
Failure tolerance
• Centralised: A centralised infrastructure has a low failure tolerance. If one of the remote sites gets hacked, there is a good chance the whole company will not be affected, because the main source of the resources is the headquarters servers/nodes; those central servers, however, remain a single point of failure.
• Decentralised: A decentralised infrastructure has a high failure tolerance, since each remote site runs its own Policy Decision Points with a share of the entitlement database repository. If a remote site does get hacked, the company loses only a section of that database repository.
• Converged: Converged infrastructures have a low failure tolerance on their own, offset by the data replication in the system.
• Hyperconverged: Hyperconverged infrastructures also have a low failure tolerance on their own, offset by the benefits of virtualisation.
Conclusion
In this lesson, we touch base with the different infrastructure options for an enterprise. Then we moved into the logical
operation of these infrastructures, highlighting their benefits for implementing these infrastructure and lastly a
comparison between the different types of infrastructures.
References
Bahl, R., Bird, R. and Young, A. (n.d.). Decentralization and Infrastructure in Developing Countries: Reconciling Principles and Practice. [online] Available at: https://munkschool.utoronto.ca/imfg/uploads/260/1483_imfg_no_16_online_final__(2).pdf.
Cisco (n.d.). CEPM Capacity Planning Guide - Distributed Architecture Deployment [Cisco Policy Administration Point]. [online] Available at: https://www.cisco.com/en/US/docs/security/epm/epm33/Guide/Capacity_Planning_Guide/CH1.html#wpxref46428.
Cisco (n.d.). Centralised Architecture Deployment [Cisco Policy Administration Point]. [online] Available at: https://www.cisco.com/en/US/docs/security/epm/epm33/Guide/Capacity_Planning_Guide/CH2_external_docbase_0900e4b180e669f1_4container_external_docbase_0900e4b1815ca6e6.html.
Jerichosystems.com (2020). What is a policy enforcement point (PEP)? [online] Available at: https://www.jerichosystems.com/technology/glossaryterms/policy_enforcement_point.html.
Cisco (n.d.). Cisco Catalyst 9800 Series Wireless Controller Software Configuration Guide, Cisco IOS XE Amsterdam 17.1.x - Redundancy Management Interface (RMI). [online] Available at: https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/17-1/config-guide/b_wl_17_11_cg/b_wl_16_12_cg_chapter_01111111.html.
GeeksforGeeks (2018). Hot Standby Router Protocol (HSRP) and Virtual Router Redundancy Protocol (VRRP). [online] Available at: https://www.geeksforgeeks.org/hot-standby-router-protocol-hsrp-virtual-router-redundancy-protocol-vrrp/?ref=lbp.
Cisco (n.d.). CEPM User Guide - Cisco Enterprise Policy Manager [Cisco Policy Administration Point]. [online] Available at: https://www.cisco.com/en/US/docs/security/epm/epm33/Guide/User_Guide/CH1.html.
Cisco (n.d.). CEPM Capacity Planning Guide - Distributed Architecture Deployment [Cisco Policy Administration Point]. [online] Available at: https://www.cisco.com/en/US/docs/security/epm/epm33/Guide/Capacity_Planning_Guide/CH1.html#wpxref_Toc141169777.
A Shared Digital Europe (2018). Principle: Decentralise infrastructure. [online] Available at: https://shared-digital.eu/decentralise-infrastructure/.
Goyal, S. (2018). Centralized vs. Decentralized? The New Decentralized Internet Networks. [online] 101 Blockchains. Available at: https://101blockchains.com/centralized-vs-decentralized-internet-networks/.
Simform (n.d.). Building an IT infrastructure [Converged, Hyper-converged and Public Cloud]. [online] Available at: https://www.simform.com/building-it-infrastructure-converged-hyper-converged-public-cloud/.