
DATA CENTRE

PREPARED BY: ER. LOCHAN RAJ DAHAL


What Is a Data Center?
At its simplest, a data center is a physical facility that organizations use to house
their critical applications and data. A data center's design is based on a network of
computing and storage resources that enable the delivery of shared applications
and data. The key components of a data center design include routers, switches,
firewalls, storage systems, servers, and application-delivery controllers.

What defines a modern data center?
Modern data centers are very different than they were just a short time ago. Infrastructure has shifted
from traditional on-premises physical servers to virtual networks that support applications and
workloads across pools of physical infrastructure and into a multicloud environment.
In this era, data exists and is connected across multiple data centers, the edge, and public and private
clouds. The data center must be able to communicate across these multiple sites, both on-premises and
in the cloud. Even the public cloud is a collection of data centers. When applications are hosted in the
cloud, they are using data center resources from the cloud provider.
Why are data centers important to
business?
In the world of enterprise IT, data centers are designed to support business applications and activities that include:
• Email and file sharing
• Productivity applications
• Customer relationship management (CRM)
• Enterprise resource planning (ERP) and databases
• Big data, artificial intelligence, and machine learning
• Virtual desktops, communications and collaboration services

What are the core components of a data center?
Data center design includes routers, switches, firewalls, storage systems, servers, and application delivery
controllers. Because these components store and manage business-critical data and applications, data
center security is critical in data center design. Together, they provide:
Network infrastructure. This connects servers (physical and virtualized), data center services, storage,
and external connectivity to end-user locations.
Storage infrastructure. Data is the fuel of the modern data center. Storage systems are used to hold this
valuable commodity.
Computing resources. Applications are the engines of a data center. These servers provide the
processing, memory, local storage, and network connectivity that drive applications.
Types of data centers
Many types of data centers and service models are available. Their classification depends on whether they are owned by one
or many organizations, how they fit (if they fit) into the topology of other data centers, what technologies they use for
computing and storage, and even their energy efficiency. There are four main types of data centers:
Enterprise data centers
These are built, owned, and operated by companies and are optimized for their end users. Most often they are housed on
the corporate campus.
Managed services data centers
These data centers are managed by a third party (or a managed services provider) on behalf of a company. The company
leases the equipment and infrastructure instead of buying it.
Colocation data centers
In colocation ("colo") data centers, a company rents space within a data center owned by others and located off company
premises. The colocation data center hosts the infrastructure: building, cooling, bandwidth, security, etc., while the company
provides and manages the components, including servers, storage, and firewalls.
Cloud data centers
In this off-premises form of data center, data and applications are hosted by a cloud services provider such as Amazon Web
Services (AWS), Microsoft (Azure), or IBM Cloud or other public cloud provider.
Example data centers
Replication Data Centres
Data replication is the process by which data residing on a physical/virtual server(s) or cloud instance (primary instance)
is continuously replicated or copied to a secondary server(s) or cloud instance (standby instance). Organizations replicate
data to support high availability, backup, and/or disaster recovery. Depending on the location of the secondary instance,
data is either synchronously or asynchronously replicated. How the data is replicated impacts Recovery Time Objectives
(RTOs) and Recovery Point Objectives (RPOs).

For example, if you need to recover from a system failure, your standby instance should be on your local area network
(LAN). For critical database applications, you can then replicate data synchronously from the primary instance across
the LAN to the secondary instance. This makes your standby instance hot and in sync with your active instance, so it is
ready to take over immediately in the event of a failure. This is referred to as high availability (HA).

In the event of a disaster, you want to be sure that your secondary instance is not co-located with your primary instance.
This means you want your secondary instance in a geographic site away from the primary instance or in a cloud instance
connected via a WAN. To avoid negatively impacting throughput performance, data replication on a WAN is
asynchronous. This means that updates to standby instances will lag updates made to the active instance, resulting in a
delay during the recovery process.
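To make the trade-off concrete, here is a minimal Python sketch (not any particular vendor's replication product) contrasting synchronous and asynchronous replication. The Primary and Standby classes and their methods are hypothetical, introduced only to show how synchronous replication waits for the standby's acknowledgement, keeping it hot, while asynchronous replication lets the standby lag behind the primary.

```python
from collections import deque

class Standby:
    """Hypothetical standby instance holding a replicated copy of the data."""
    def __init__(self):
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value          # persist the replicated write
        return True                     # acknowledgement back to the primary

class Primary:
    """Hypothetical primary instance replicating to one standby."""
    def __init__(self, standby, synchronous=True):
        self.data = {}
        self.standby = standby
        self.synchronous = synchronous
        self.pending = deque()          # writes not yet shipped (async mode)

    def write(self, key, value):
        self.data[key] = value
        if self.synchronous:
            # Synchronous (LAN/HA): block until the standby acknowledges,
            # so the standby stays in step with the primary (RPO near zero).
            self.standby.apply(key, value)
        else:
            # Asynchronous (WAN/DR): queue the write and return immediately,
            # so the standby may lag behind the primary.
            self.pending.append((key, value))

    def ship_pending(self):
        # Called periodically in async mode; the gap between calls is roughly
        # the amount of data you could lose on failover (your RPO).
        while self.pending:
            self.standby.apply(*self.pending.popleft())

# Synchronous replication keeps the standby hot...
ha = Primary(Standby(), synchronous=True)
ha.write("order:42", "paid")
assert ha.standby.data["order:42"] == "paid"

# ...asynchronous replication tolerates WAN latency but lags.
dr = Primary(Standby(), synchronous=False)
dr.write("order:43", "shipped")
assert "order:43" not in dr.standby.data   # not replicated yet
dr.ship_pending()                          # later, the write catches up
```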
Why Replicate Data to the Cloud?
There are five reasons why you want to replicate your data to the cloud.

As we discussed above, cloud replication keeps your data offsite and away from the company’s site. While a major disaster, such as
a fire, flood, storm, etc., can devastate your primary instance, your secondary instance is safe in the cloud and can be used to
recover the data and applications impacted by the disaster.
Cloud replication is less expensive than replicating data to your own data center. You can eliminate the costs associated with
maintaining a secondary data center, including the hardware, maintenance, and support costs.
For smaller businesses, replicating data to the cloud can be more secure, especially if you do not have security expertise on staff. The physical and network security provided by cloud providers is difficult to match in-house.
Replicating data to the cloud provides on-demand scalability. As your business grows or contracts, you do not need to invest in
additional hardware to support your secondary instance or have that hardware sit idle if business slows down. You also have no
long-term contracts.
When replicating data to the cloud, you have many geographic choices, including having a cloud instance in the next city, across
the country, or in another country as your business dictates.
Why Replicate Data Between Cloud Instances?
While cloud providers take every precaution to ensure 100 percent uptime, individual cloud servers can still fail as a result of physical damage to the hardware or software glitches, the same reasons on-premises hardware fails. For this reason, organizations that run their essential applications in the cloud should use data replication to support high availability and disaster recovery. You can replicate data between availability zones in a single region, between regions in the cloud, between different cloud platforms, to on-premises systems, or in any hybrid combination.
Benefits of data replication
By making data available on multiple hosts or data centers, data replication facilitates the large-scale sharing of data
among systems and distributes the network load among multisite systems. Organizations can expect to see benefits
including:
Improved reliability and availability: If one system goes down due to faulty hardware, malware attack, or another
problem, the data can be accessed from a different site.
Improved network performance: Having the same data in multiple locations can lower data access latency, since
required data can be retrieved closer to where the transaction is executing.
Increased data analytics support: Replicating data to a data warehouse empowers distributed analytics teams to work on
common projects for business intelligence.
Improved test system performance: Data replication facilitates the distribution and synchronization of data for test
systems that demand fast data accessibility.

Data replication challenges
Though replication provides many benefits, organizations should weigh them against the disadvantages. The challenges to maintaining consistent data across an organization boil down to limited resources:
Money: Keeping copies of the same data in multiple locations leads to higher storage and processor costs.
Time: Implementing and managing a data replication system requires dedicated time from an internal team.
Bandwidth: Maintaining consistency across data copies requires new procedures and adds traffic to the network.
What is Data Center Virtualization?
Data center virtualization is the process of creating a modern data center that is highly scalable, available and secure.
With data center virtualization products you can increase IT agility and create a seamless foundation to manage private
and public cloud services alongside traditional on-premises infrastructure.
In a virtualized data center, virtual servers are created from traditional physical servers; the result is often called a software-defined data center (SDDC). This process abstracts physical hardware by imitating its processors, operating system, and other resources with help from a hypervisor. A hypervisor (or virtual machine monitor, VMM, or virtualizer) is software that creates and manages virtual machines. It treats resources such as CPU, memory, and storage as a pool that can be easily reallocated between existing virtual machines or to new ones.
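As a rough illustration of how a hypervisor pools resources, here is a hypothetical Python sketch. The ResourcePool and VirtualMachine classes are not a real hypervisor API; they only show the idea of carving virtual machines out of shared physical capacity and returning that capacity to the pool when a VM is removed.

```python
class VirtualMachine:
    def __init__(self, name, vcpus, memory_gb):
        self.name, self.vcpus, self.memory_gb = name, vcpus, memory_gb

class ResourcePool:
    """Hypothetical pool of physical capacity managed by a hypervisor."""
    def __init__(self, total_vcpus, total_memory_gb):
        self.free_vcpus = total_vcpus
        self.free_memory_gb = total_memory_gb
        self.vms = {}

    def create_vm(self, name, vcpus, memory_gb):
        # Refuse to overcommit in this simple model.
        if vcpus > self.free_vcpus or memory_gb > self.free_memory_gb:
            raise RuntimeError("insufficient capacity in the pool")
        self.free_vcpus -= vcpus
        self.free_memory_gb -= memory_gb
        vm = VirtualMachine(name, vcpus, memory_gb)
        self.vms[name] = vm
        return vm

    def destroy_vm(self, name):
        vm = self.vms.pop(name)
        # Capacity flows back into the pool for reallocation.
        self.free_vcpus += vm.vcpus
        self.free_memory_gb += vm.memory_gb

# One physical host's worth of capacity, shared by several VMs.
pool = ResourcePool(total_vcpus=32, total_memory_gb=256)
pool.create_vm("web-01", vcpus=4, memory_gb=16)
pool.create_vm("db-01", vcpus=8, memory_gb=64)
pool.destroy_vm("web-01")          # resources are reallocated, not stranded
print(pool.free_vcpus, pool.free_memory_gb)   # 24 192
```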
Benefits of Data Center
Virtualization
Data center virtualization offers a range of strategic and technological benefits to businesses looking for increased
profitability or greater scalability. Here we’ll discuss some of these benefits.

Scalability
Compared to physical servers, which require extensive and sometimes expensive sourcing and time management, virtual data
centers are relatively simpler, quicker, and more economical to set up. Any company that experiences high levels of growth
might want to consider implementing a virtualized data center.
It’s also a good fit for companies experiencing seasonal increases in business activity. During peak times, virtualized memory,
processing power, and storage can be added at a lesser cost and in a faster timeframe than purchasing and installing
components on a physical machine. Likewise, when demand slows, virtual resources can be scaled down to remove unnecessary
expenses. None of this flexibility is possible with bare-metal servers.
Data Mobility
Before virtualization, everything from common tasks and daily interactions to in-depth analytics and data storage
happened at the server level, meaning they could only be accessed from one location. With a strong enough
Internet connection, virtualized resources can be accessed when and where they are needed. For example,
employees can access data, applications, and services from remote locations, greatly improving productivity
outside the office.
Moreover, with help of cloud-based applications such as video conferencing, word processing, and other content
creation tools, virtualized servers make versatile collaboration possible and create more sharing opportunities.

Cost Savings
Typically outsourced to third-party providers, physical servers always come with high management and maintenance costs. These costs are far less of a problem in a virtual data center. Unlike their physical counterparts, virtual servers
are often offered as pay-as-you-go subscriptions, meaning companies only pay for what they use. By contrast, whether
physical servers are used or not, companies still have to shoulder the costs for their management and maintenance. As a
plus, the additional functionality that virtualized data centers offer can reduce other business expenses like travel costs.
Cloud vs. Virtualization: How Are They
Related?
It’s easy to confuse virtualization with cloud. However, they are quite different but also closely related. To put it simply,
virtualization is a technology used to create multiple simulated environments or dedicated resources from a physical
hardware system, while cloud is an environment where scalable resources are abstracted and shared across a network.

Clouds are usually created to enable cloud computing, a set of principles and approaches to deliver compute, network,
and storage infrastructure resources, platforms, and applications to users on-demand across any network. Cloud
computing allows different departments (through private cloud) or companies (through a public cloud) to access a single
pool of automatically provisioned resources, while virtualization can make one resource act like many.

In most cases, virtualization and cloud work together to provide different types of services. Virtualized data center
platforms can be managed from a central physical location (private cloud) or a remote third-party location (public
cloud), or any combination of both (hybrid cloud). On-site virtualized servers are deployed, managed, and protected by
private or in-house teams. Alternatively, third-party virtualized servers are operated in remote data centers by a service
provider who offers cloud solutions to many different companies.

If you already have a virtual infrastructure, to create a cloud, you can pool virtual resources together, orchestrate them
using management and automation software, and create a self-service portal for users.
Transport Layer Security (TLS)
Computers send packets of data around the Internet. These packets are like letters in an
envelope: an onlooker can easily read the data inside them. If that data is public
information like a news article, that's not a big deal. But if that data is a password, credit
card number, or confidential email, then it's risky to let just anyone see that data.
The Transport Layer Security (TLS) protocol adds a layer of security on top of the TCP/IP
transport protocols. TLS uses both symmetric encryption and public key encryption for
securely sending private data, and adds additional security features, such as
authentication and message tampering detection.
TLS adds more steps to the process of sending data with TCP/IP, so it increases latency
in Internet communications. However, the security benefits are often worth the extra
latency.
(Note that TLS superseded an older protocol called SSL, so the terms TLS and SSL are often
used interchangeably.)
From start to finish
Let's step through the process of securely sending data with TLS from one computer
to another. We'll call the sending computer the client and the receiving computer
the server.

TCP handshake
Since TLS is built on top of TCP/IP, the client must first complete the 3-way TCP
handshake with the server.
TLS initiation
The client must notify the server that it desires a TLS connection instead of the standard
insecure connection, so it sends along a message describing which TLS protocol version and
encryption techniques it'd like to use.

Server confirmation of protocol


If the server doesn't support the client's requested technologies, it will abort the
connection. That may happen if a modern client is trying to communicate with an older
server.
As long as the server does support the requested TLS protocol version and other options, it
will respond with a confirmation, plus a digital certificate that contains its public key.
Certificate verification
The server's digital certificate is the server's way of saying "Yes, I really am who you think I am". If the
client doesn't believe the certificate is legit, it will abort the connection, since it doesn't want to send
private data to an imposter.
Otherwise, if the client can verify the certificate, it continues on to the next step.

Shared key generation


The client now knows the public key of the server, so it can theoretically use public key encryption to encrypt
data that the server can then decrypt with its corresponding private key.
However, public key encryption takes much more time than symmetric encryption due to the more difficult
arithmetic operations involved. When possible, computers prefer to use symmetric encryption to save time.
Fortunately, they can! The computers can first use public key encryption to privately generate a shared key,
and then they can use symmetric encryption with that key in future messages.
The client starts off that process by sending a message to the server with a pre-master key, encrypted with the
server's public key. The client computes the shared key based on that pre-master key (as that is more secure
than sending along the actual shared key) and remembers the shared key locally.
The client also sends a "Finished" message whose contents are encrypted with the shared key.
Server confirmation of shared key
The server can now compute the shared key based on the pre-master key, and attempt to
decrypt the "Finished" message with that key. If it fails, it aborts the connection.
As long as the server can successfully decrypt the client's message with the shared key, it
sends along a confirmation and its own "Finished" message with encrypted contents.
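The key-exchange idea above can be sketched with the cryptography package. This is a simplified illustration, not the exact TLS key schedule: the client encrypts a random pre-master secret with the server's RSA public key, both sides derive the same symmetric key from it with HKDF, and the "Finished" message is then protected with AES-GCM.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Server side: key pair whose public half is shipped in the certificate.
server_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_public = server_private.public_key()

# Client side: pick a random pre-master secret and encrypt it for the server.
pre_master = os.urandom(32)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
encrypted_pre_master = server_public.encrypt(pre_master, oaep)

def derive_shared_key(secret):
    # Both ends run the same derivation, so they end up with the same key.
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"toy handshake").derive(secret)

client_key = derive_shared_key(pre_master)
server_key = derive_shared_key(server_private.decrypt(encrypted_pre_master, oaep))
assert client_key == server_key

# Client "Finished" message, encrypted under the shared symmetric key.
nonce = os.urandom(12)
finished = AESGCM(client_key).encrypt(nonce, b"Finished", None)
print(AESGCM(server_key).decrypt(nonce, finished, None))   # b'Finished'
```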

Send secure data


Finally, the client securely sends the private data to the server, using symmetric encryption and the shared
key.
Oftentimes, the same client needs to send data to a server multiple times, like when a user fills out forms
on multiple pages of a website. In that case, the computers can use an abbreviated process to establish the
secure session.
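In practice you rarely implement these steps yourself; Python's standard-library ssl module performs the whole handshake. The sketch below opens a TLS connection to example.com (a placeholder host), letting ssl handle the wrapping of the TCP connection, certificate verification, and key exchange before any application data is sent.

```python
import socket
import ssl

HOSTNAME = "example.com"   # placeholder host for illustration

# A default context verifies the server certificate against the system CAs.
context = ssl.create_default_context()

with socket.create_connection((HOSTNAME, 443)) as tcp_sock:          # TCP handshake
    with context.wrap_socket(tcp_sock, server_hostname=HOSTNAME) as tls_sock:
        # TLS initiation, certificate check, and key exchange happen here.
        print("Negotiated protocol:", tls_sock.version())            # e.g. 'TLSv1.3'
        # Application data now travels over the encrypted channel.
        tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200))
```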
MPLS | What Is Multiprotocol Label
Switching
What Is MPLS?
Before we dive into MPLS, let's explain how data travels through the internet. When you send an email or connect to VoIP or video conferencing, that data is carried in IP packets sent from one internet router to the next toward its destination. For every IP packet, each router must decide how to forward it toward the destination IP, using complex routing tables. Every router the packet arrives at requires another forwarding decision until the packet reaches its destination. This process can result in poor performance for users and the applications they are using, and can impact the network across an organization. MPLS provides an alternative that lets organizations increase network performance and improve user experience.
MPLS Meaning
Multiprotocol Label Switching, or MPLS, is a networking technology that routes traffic
using the shortest path based on “labels,” rather than network addresses, to handle
forwarding over private wide area networks. As a scalable and protocol-independent
solution, MPLS assigns labels to each data packet, controlling the path the packet follows.
MPLS greatly improves the speed of traffic, so users are less likely to experience slowdowns or interruptions when connected to the network.
MPLS Network
An MPLS network is Layer 2.5, meaning it falls between Layer 2 (Data Link) and Layer 3
(Network) of the OSI seven-layer hierarchy. Layer 2, or the Data Link Layer, carries IP packets
over simple LANs or point-to-point WANs. Layer 3, or the Network Layer, uses internet-wide
addressing and routing using IP protocols. MPLS sits in between these two layers, with additional
features for data transport across the network.
What Is MPLS Used For
Organizations often use this technology when they have multiple remote branch offices across the
country or around the world that need access to a data center or applications at the organization’s
headquarters or another branch location. MPLS is scalable, provides better performance and
bandwidth, and improves user experience compared to traditional IP routing. But it is costly,
difficult to deliver globally and lacks the flexibility to be carrier independent.
As organizations move their applications to the cloud, the traditional MPLS hub-and-spoke model
has become inefficient and costly because:
•It requires backhauling traffic through the organization’s headquarters and out to the cloud instead
of connecting to the cloud directly, which impacts performance significantly.
•As companies add more applications, services and mobile devices to their networks, the demand
for bandwidth and cloud expertise increases costs and operational complexity.
How does MPLS work?
The MPLS routing mechanism works with almost all types of networking protocols and transmission channels, such as IP (Internet Protocol), ATM (Asynchronous Transfer Mode), and Ethernet.
Datagram packets are forwarded through the routers along predetermined routes called Label Switched Paths (LSPs) in the telecommunications network: a label is assigned to each packet as it enters the path. The label corresponds to a pre-decided route through the network hops; when a hop receives the datagram, it assigns the next label to the packet and passes it on to the upcoming hop. This gives a higher degree of control over forwarding than in conventional packet-switched networks.
The labels used in switching identify transmission paths in an MPLS network. These paths are established by signaling protocols and guide voice and data packets to their final destination in the network.
Let's understand how this works with an example. If a post office assigns a postman to search, sort, and look through each parcel to find the receiver's address, the manual work slows the process down. If the office instead uses an automated conveyor sorting system that scans a barcode stuck on top of each parcel, the speed of parcel processing and sorting improves drastically.
How does MPLS work?
LSPs improve the speed and optimize the process of data packet transmission across the network by allowing each router to quickly determine where the packet is going.
As data packets pass through the MPLS network, their labels are swapped by the routers. When a packet crosses the boundary into the MPLS backbone, it is examined, classified, and assigned a label. It is then forwarded to the next hop in the pre-set Label Switched Path (LSP). As the packet passes along the path, each router on the path uses the label to determine where the datagram should be routed.
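The label-swapping behavior described above can be illustrated with a small, hypothetical Python sketch. Each router holds a label forwarding table that maps an incoming label to an outgoing label and a next hop; the label values and router names are invented for illustration.

```python
# Per-router label forwarding tables: incoming label -> (outgoing label, next hop).
# "pop" at the egress removes the label and falls back to an IP lookup.
LFIB = {
    "ingress": {None: (100, "core-1")},   # ingress classifies and pushes label 100
    "core-1":  {100:  (200, "core-2")},   # swap 100 -> 200
    "core-2":  {200:  (300, "egress")},   # swap 200 -> 300
    "egress":  {300:  ("pop", None)},     # pop the label, route by destination IP
}

def forward(packet, router="ingress", label=None):
    """Follow the pre-set LSP, swapping labels hop by hop."""
    while True:
        out_label, next_hop = LFIB[router][label]
        if out_label == "pop":
            print(f"{router}: pop label, deliver {packet} by IP lookup")
            return
        print(f"{router}: label {label} -> {out_label}, forward to {next_hop}")
        router, label = next_hop, out_label

forward("packet-to-10.0.0.5")
```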
MPLS Pros and Cons:
1. MPLS is dynamic, scalable, and versatile, with better performance, better bandwidth utilization, and reduced network traffic congestion compared to traditional IP routing.
2. MPLS behaves like a virtual private network: although it does not provide encryption by itself, it is separated from the public Internet. For this reason, MPLS is considered a secure transmission mode for data packets, and it is not easily vulnerable to DoS attacks, which can affect IP-based routing.
3. On the other hand, MPLS requires purchasing capacity from a carrier to use it in the network, which makes it more costly than transmission over the public network.
4. As an organization expands its infrastructure, it can be difficult to find an MPLS service provider with global coverage, which can be costly and time-consuming.
5. MPLS was designed for cases where multiple branch offices send and receive traffic to and from a main headquarters or data center; it is not well suited to branch offices that want to access data in the cloud directly.
Enterprise network architects should weigh the service and risk trade-offs between the expensive but faster performance of MPLS and the cheaper, less versatile performance of the public Internet/intranet.
What is segment routing?
Segment routing (SR) is a source-based routing technique that simplifies traffic engineering and
management across network domains. It removes network state information from transit routers and nodes
in the network and places the path state information into packet headers at an ingress node.
Because information moves from the transit nodes to the packet, segment routing is highly responsive to
network changes, making it more agile and flexible than other traffic-engineering solutions.
Traffic-engineering capabilities enable SR to provide quality of service (QoS) for applications and also to map
network services to end users and applications as they traverse the network.
Segment Routing Overview
To understand segment routing, you must first understand its fundamental components.
SR Domain: A collection of nodes that participate in SR protocols. Within an SR domain, a node can execute
ingress, transit, or egress procedures.
SR Path: An ordered list of segments that connects an SR ingress node to an SR egress node. Typically, it follows
the least-cost path from ingress to egress.
SR Segment: A forwarding instruction that causes a packet to traverse a section of the network topology. SR
defines many SR segment types, and the two used most often are adjacency and prefix segments. An adjacency
segment is a strict forwarded single-hop tunnel. It causes a packet to traverse a specified link associated with an
interior gateway protocol (IGP) adjacency between two nodes, irrespective of the link cost. A prefix segment is a
multihop tunnel that uses equal-cost multipath-aware shortest path links to reach a prefix.
How Segment Routing Works
When a packet arrives at the SR ingress node, it’s subjected to policy. If the packet
satisfies match conditions for an SR path, the SR ingress node encapsulates the
packet in an SR tunnel that traverses an SR path, segment by segment.
Each segment in an SR path terminates at a segment endpoint node. When a packet
arrives at a segment endpoint, the endpoint examines the outermost packet label or
header to attain the corresponding segment. It then pops the outermost label or
header and forwards the packet to the next segment endpoint. This process
continues until the packet arrives at the final segment endpoint, which may be the
SR egress node.
When a packet arrives at the SR egress node, that node determines whether the
packet is at the end of its path. If it is, the node removes the SR header information
and forwards the packet based on its destination IP address.
Because the transit routers simply forward the packets based on the SR segment
identifier (SID), SR can be used to map packets associated with an end user or
application to specific network function services. It does this by mapping a path to
where the service will be applied, and providing instructions about the service and
additional path information from the service gateway to the SR domain egress
router.
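Here is a hedged Python sketch of the ingress, transit, and egress behavior described above. The segment identifiers and node names are hypothetical; the point is only that the ingress node pushes an ordered segment list onto the packet and each segment endpoint pops the outermost entry and forwards accordingly.

```python
def sr_ingress(payload, segment_list):
    """Encapsulate the packet with an ordered list of segment IDs (SIDs)."""
    return {"segments": list(segment_list), "payload": payload}

def sr_endpoint(packet, node):
    """Pop the outermost segment and report where the packet goes next."""
    sid = packet["segments"].pop(0)
    if packet["segments"]:
        print(f"{node}: processed SID {sid}, forward toward SID {packet['segments'][0]}")
    else:
        print(f"{node}: processed SID {sid}, SR header removed, route by destination IP")
    return packet

# Hypothetical SR path: three segments ending at the egress node.
pkt = sr_ingress("payload-to-app", segment_list=[16001, 24005, 16010])
for node in ["R2", "R5", "egress"]:
    pkt = sr_endpoint(pkt, node)
```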
Why Vendor Remote Access is So
Important
Third parties, business partners, and vendors continue to be the biggest risks to an organization’s
cybersecurity framework. Third-party remote access creates a unique opportunity for hackers to
infiltrate a system or network under the disguise of a trusted third-party user. And third parties
are prime targets for cyber criminals; the “hack one, breach many” hacking trend has put
third-party remote access on the map, and not for good reason. Once hacked, a third-party connection could lead to dozens, hundreds, or thousands of businesses that use that third party, which means dozens, hundreds, or thousands of businesses are at risk.
Third parties are real causes for concern. The stats speak for themselves.
• Over 50% of data breaches have been caused by third parties according to the SecureLink and
Ponemon Report on third-party security.
• According to Gartner, by 2025, 60% of organizations will use cybersecurity risk as a primary
determinant in conducting third-party transactions and business engagements.
• The cost of a cyber attack increases by an average of $370,000 when caused by a third party.
• It can take 210 days to identify a breach caused by vulnerabilities in third-party software, then an additional 76 days to contain the breach.
And if these stats aren’t convincing enough, just take a look at the recent Okta breach. Okta
experienced a cyber attack through one of their third-party customer support engineers. The
hackers were hoping to then use Okta as a third party to connect to their customers and attack
their systems.
These are just a few of the key findings and stories on third parties and security, and they show why third parties are considered the biggest liability to your security structure. Their remote access to mission-critical systems and assets needs to be treated differently.
Best Practices for Third-Party Vendor Remote
Access
Implementing best practices for vendor remote access will protect your networks by
thwarting lateral movement by unauthorized users, monitoring privileged accounts and
sessions, and enforcing security policies for your vendors.
Third-party vendor threats are pervasive. But they’re not unconquerable. Being
proactive and using these vendor remote access best practices can help mitigate the
threat posed by third parties.
• Identify users
• Audit all high-risk access points
• Implement and enforce vendor remote access policies
• Apply access controls
• Monitor user access
• Automate vendor remote access
Step 1: Identify users
An organization typically uses between 250 and 500 third-party vendors. When you account for how many reps are logging into your network from each of those vendors, the number of external parties accessing your internal systems could reach into the thousands.
Inventorying every vendor rep helps keep track of the individuals accessing your
sensitive data and assets as well as which users have the credentials leading to
critical access points. Complete visibility into access provides accountability and
knowledge of who is accessing your system, why they’re accessing it, and how
they’re accessing it.
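A minimal sketch of such an inventory might look like the following; the fields and vendor names are hypothetical and simply capture the who, why, and how of access that the text describes.

```python
from dataclasses import dataclass

@dataclass
class VendorRep:
    vendor: str          # which third-party organization the rep belongs to
    rep_name: str        # who is accessing your systems
    systems: list        # what they are allowed to reach
    reason: str          # why they need access
    access_method: str   # how they connect (VPN, remote access platform, ...)

inventory = [
    VendorRep("Acme HVAC", "j.smith", ["building-mgmt"], "maintenance", "remote platform"),
    VendorRep("DataSoft", "a.lee", ["crm-db", "crm-app"], "support contract", "VPN"),
]

# Visibility questions become simple queries over the inventory.
crm_access = [r for r in inventory if "crm-db" in r.systems]
print([f"{r.vendor}/{r.rep_name}" for r in crm_access])
```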
Step 2: Audit all high-risk access
points
Identifying which assets are most at risk and the access points that lead to those
assets gives insight into the security measures that need to be taken around those
access points. The amount of security should correspond with how high risk the
asset is.
Let's take PII, for example: personal information is some of the most valuable data on the black market, and thus among the most targeted by hackers. If your organization stores customers' PII, the access points that lead to that data need extra security.
Once you’ve identified these access points, you can put controls in place to mitigate
unauthorized user access, and you can assign specific privileges and permissions
needed to access those critical assets.
Though this is reminiscent of a more traditional “protect the perimeter” approach to
cybersecurity, auditing your access points gives you more visibility into all access
points that need protecting — not just the ones that lead external parties in.
Organizations often trust third parties as if they’re employees and already within the
perimeter. Identifying access points can help find security gaps that might be missed
when just focusing on external access threats.
Step 3: Implement and enforce vendor remote access
policies
Access policies are rules that establish who should have access to what assets and
what privileges are needed to access the asset. This element of access governance is
crucial for protecting private information because it provides a baseline security
standard for all users.
It’s likely that you already have access policies that are managed for your users in
your HR systems; HR systems are able to recognize an employee title and assign the
access permissions that are needed for the corresponding role. But this becomes a
challenge with third parties, as their identities are outside of your organization’s
control, so traditional role-based methods fall short.
Instead, the best practice is to define a vendor remote access policy based on which
assets third-party users need access to and what associated privileges or rights are
needed. Then, make sure this vendor access policy is enforced with highly detailed
systems or processes to grant access. This system should be based on the principle
of least privileged access, ensuring that third-party vendors have the minimum
access needed to do their job and nothing more.
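A least-privilege vendor access policy can be expressed as data and enforced with a small check, as in the hypothetical Python sketch below. The asset names, vendor names, and privilege labels are invented; the idea is that access is granted only when the requested asset and privilege appear explicitly in the vendor's policy.

```python
# Hypothetical vendor access policy: asset -> privileges this vendor may use.
VENDOR_POLICY = {
    "DataSoft": {"crm-db": {"read"}, "crm-app": {"read", "restart"}},
    "Acme HVAC": {"building-mgmt": {"read", "write"}},
}

def is_allowed(vendor, asset, privilege):
    """Grant access only if the policy explicitly lists the asset and privilege."""
    allowed = VENDOR_POLICY.get(vendor, {}).get(asset, set())
    return privilege in allowed

print(is_allowed("DataSoft", "crm-db", "read"))     # True: explicitly granted
print(is_allowed("DataSoft", "crm-db", "write"))    # False: least privilege denies it
print(is_allowed("DataSoft", "hr-db", "read"))      # False: asset not in the policy
```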
Step 4: Apply access controls
Applying access controls is like hitting the brakes on a car. It adds friction to the movement a user
is making through access points.
Controls are what authenticate and restrict user access. They look at a third-party rep requesting access, vet the rep's identity to make sure the person who owns the account matches the identity logging in, restrict their access to a granular level, and prevent unnecessary exposure to other parts of an organization's network.
The most useful framework you can use to control access is the zero trust framework. This model
is based on the notion of “never trust, always verify.” Traditional cybersecurity methods have
been based on castle-and-moat architecture where security is built around a network perimeter,
protecting anything from the outside from getting access to internal networks, systems, and
data.
However, the threat landscape is changing; hacks are originating from all sources, whether that’s
externally through third parties or internally. Access that was once entrusted to loyal employees
or reputable third parties has been exploited, and security measures are evolving to meet those
threats.
When you build your security framework on zero trust principles, no user is trusted; every user must be verified before being granted access to critical assets. This is done through a variety of security controls:
• Zero Trust Network Access —
ZTNA is the actual access method that embodies the zero trust architecture. Once a user is granted access, ZTNA routes users
directly to the application needed so the rest of the network is invisible except for anything specifically assigned to the user. It takes
them exactly to where they need to go rather than granting broad access, which limits visibility and prevents lateral movement
throughout a company’s network.
• Multi-factor authentication —
Use multiple forms of authentication to validate that the identity accessing an asset is the person assigned to the account being
used. Multi-factor authentication is made up of three common credentials: What the user knows (a password), what the user has (a
security token), and who the user is (a secure biometric verification). While at least two of these credentials need to be employed for
multi-factor authentication, which ones and the breadth of access for both parties can be adjusted to meet logistical and security needs of
a company.
• Credential management —
Compromised passwords account for over 60% of data breaches. Credential vaulting, such as that available in vendor privileged access
management (VPAM) systems, can provide a key roadblock for users of stolen credentials. These systems keep all privileged credentials in an encrypted vault and never allow users to see the actual login information. A user instead checks out the right to use a credential; the checkout is logged, and the encrypted credential is passed to the appropriate system, which initiates the login automatically. This keeps a key credential from being stolen, because the user never had the login information in the first place. Vendor access management also
provides valuable usage and tracking information for all your privileged logins for use in monitoring and audit efforts.
• Fine-grained access controls —
These types of access control methods provide an additional layer of security over vendor access rights to fully secure high-risk access
points. While these don’t change a user’s rights and privileges, they put more control on how a user is able to use these rights and limit
how much access they’re actually granted:
• Access schedules/time-based access: Third-party users can only access during a specific time period.
• Access notifications: Network or system administrators will be notified when a third-party vendor accesses networks and systems.
• Access approvals: Third parties have to request approval before being granted access.

These vendor access controls are most powerful and most effective when used in combination with each other rather than used independently. A system that is utilizing ZTNA, MFA, and
fine-grained access controls is much more protected than one that only requires validation from MFA alone.
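The value of layering controls can be illustrated with a hypothetical sketch that combines several of the checks above: an identity check with a second factor, a time-based access schedule, and an explicit approval. All names, fields, and thresholds are invented for illustration.

```python
from datetime import datetime, time

def mfa_verified(user):
    # Placeholder: in practice this would validate a password plus a second
    # factor such as a hardware token or a biometric check.
    return user.get("password_ok") and user.get("token_ok")

def within_schedule(now, start=time(8, 0), end=time(18, 0)):
    # Time-based access: vendors may connect only during an agreed window.
    return start <= now.time() <= end

def grant_access(user, approved_by, now):
    checks = {
        "mfa": mfa_verified(user),
        "schedule": within_schedule(now),
        "approval": approved_by is not None,
    }
    # Every layer must pass; failing any single control denies access.
    return all(checks.values()), checks

ok, detail = grant_access({"password_ok": True, "token_ok": True},
                          approved_by="it-admin",
                          now=datetime(2023, 5, 2, 10, 30))
print(ok, detail)
```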
Step 5: Monitor user access
Security doesn’t stop once a user has been vetted, authenticated, and granted access. Their activity needs to be
watched and recorded to ensure no nefarious behavior is happening while a third party is behind the scenes of
your network. Access monitoring includes the proactive monitoring of a user while in session and reactive
observation and analysis of network activity.
Proactive monitoring watches vendor behavior in real-time so any suspicious or inappropriate activity can be
detected. When in combination with machine learning, user behavior can be observed so any anomalous activity
can create alerts and inform administrators of at-risk activity.
Reactive monitoring happens after a session (hence, reactive) when there’s a specific reason to look into
third-party activity. It requires systems and tools to be in place to record sessions and is critical for investigation if a
data or privacy breach occurs. It also saves time when producing incident reports that are now mandated by Executive Order, and when meeting audit or compliance requirements that require reports on remote access activity.
Access monitoring is particularly important for the healthcare industry. Hospital staff members need open-ended
access to systems and databases to quickly access information that’s critical to patient care. Access controls aren’t
ideal in a healthcare cyber infrastructure; a nurse who needs to know the specific prescription or allergies of a
patient shouldn’t have to wait for access approval or authentication before accessing her patient information.
The situations hospital staff face could sometimes be a matter of life and death — and access controls cannot get in
the way. To keep patient data secure, healthcare organizations look to patient privacy monitoring (PPM) tools to
monitor user access into patient files and electronic medical records (EMR).
PPM tools monitor and track all accesses into EMR databases so hospital privacy and compliance teams can see the
“who, what, when, where, and why” of EMR access. These tools also detect access threats and flag any access
that’s anomalous or inappropriate to shore up the security gaps where access controls would usually be used.
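A very simplified sketch of that kind of monitoring: record each access with its who, what, and when, then flag entries that fall outside an expected pattern (here, accesses outside a user's assigned unit or outside working hours). The rules and record fields are hypothetical.

```python
from datetime import datetime

ASSIGNED_UNIT = {"nurse_kim": "cardiology", "dr_rao": "oncology"}

def is_anomalous(access):
    """Flag access outside the user's assigned unit or outside 06:00-22:00."""
    wrong_unit = ASSIGNED_UNIT.get(access["user"]) != access["unit"]
    after_hours = not (6 <= access["time"].hour < 22)
    return wrong_unit or after_hours

access_log = [
    {"user": "nurse_kim", "unit": "cardiology", "record": "patient-118",
     "time": datetime(2023, 5, 2, 14, 30)},
    {"user": "nurse_kim", "unit": "oncology", "record": "patient-907",
     "time": datetime(2023, 5, 2, 2, 10)},
]

flagged = [a for a in access_log if is_anomalous(a)]
print([(a["user"], a["record"]) for a in flagged])   # the 02:10 out-of-unit access
```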
Step 6: Automate vendor remote
access
While this technically isn’t a “next step,” it’s the best “best practice” you can implement
to fully secure vendor remote access into your high-risk access points.
Automating third-party remote access allows your organization to have a streamlined
process to connect third parties to your critical systems. This automation comes in the
form of third-party remote access platforms that provide a secure remote connection
between vendors and organizational networks.
These solutions are designed specifically for third-party remote access and are built to
cover all of these steps, from vendor identity management to ZTNA and authentication
tools.
When looking at these solutions for your organization, make sure there are also access
monitoring capabilities in place to audit and record all vendor activity to meet the needs
your cybersecurity strategy demands.
The End
