Data Centre
For example, if you need to recover from a system failure, your standby instance should be on your local area network
(LAN). For critical database applications, you can then replicate data synchronously from the primary instance across
the LAN to the secondary instance. This makes your standby instance hot and in sync with your active instance, so it is
ready to take over immediately in the event of a failure. This is referred to as high availability (HA).
In the event of a disaster, you want to be sure that your secondary instance is not co-located with your primary instance.
This means you want your secondary instance in a geographic site away from the primary instance or in a cloud instance
connected via a WAN. To avoid negatively impacting throughput performance, data replication on a WAN is
asynchronous. This means that updates to standby instances will lag updates made to the active instance, resulting in a
delay during the recovery process.
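To make the trade-off concrete, here is a minimal Python sketch (apply_locally and send_to_replica are hypothetical stand-ins, not any particular database’s replication API) contrasting synchronous replication, where the primary waits for the standby’s acknowledgement before confirming a write, with asynchronous replication, where the change is queued and shipped in the background so the standby can lag behind:

import queue
import threading
import time

replication_queue = queue.Queue()   # changes waiting to be shipped to the WAN standby

def apply_locally(change):
    """Stand-in for applying the change on the primary (active) instance."""
    pass

def send_to_replica(change):
    """Stand-in for shipping one change to the standby and waiting for its acknowledgement."""
    time.sleep(0.001)               # simulated network round trip

def write_synchronous(change):
    # HA over the LAN: the write is not confirmed until the standby has acknowledged it,
    # so the standby stays hot and in sync with the active instance.
    apply_locally(change)
    send_to_replica(change)         # blocks until the standby has the change
    return "committed"

def write_asynchronous(change):
    # DR over the WAN: confirm immediately and ship the change in the background,
    # so throughput is unaffected but the standby may lag behind the primary.
    apply_locally(change)
    replication_queue.put(change)   # a background worker drains this queue later
    return "committed"

def replication_worker():
    while True:                     # the backlog in this queue is the lag you accept at recovery time
        send_to_replica(replication_queue.get())

threading.Thread(target=replication_worker, daemon=True).start()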
Why Replicate Data to the Cloud?
There are five reasons why you want to replicate your data to the cloud.
As we discussed above, cloud replication keeps your data offsite, away from the company’s premises. While a major disaster, such as
a fire, flood, storm, etc., can devastate your primary instance, your secondary instance is safe in the cloud and can be used to
recover the data and applications impacted by the disaster.
Cloud replication is less expensive than replicating data to your own data center. You can eliminate the costs associated with
maintaining a secondary data center, including the hardware, maintenance, and support costs.
For smaller businesses, replicating data to the cloud can be more secure, especially if you do not have security expertise on staff.
Both the physical and network security provided by cloud providers are unmatched.
Replicating data to the cloud provides on-demand scalability. As your business grows or contracts, you do not need to invest in
additional hardware to support your secondary instance or have that hardware sit idle if business slows down. You also have no
long-term contracts.
When replicating data to the cloud, you have many geographic choices, including having a cloud instance in the next city, across
the country, or in another country as your business dictates.
Why Replicate Data Between Cloud Instances?
While cloud providers take every precaution to ensure 100 percent uptime, individual cloud servers can still fail as a
result of physical damage to the hardware or software glitches, the same reasons on-premises hardware fails. For
this reason, organizations that run their essential applications in the cloud should use data replication to support high availability
and disaster recovery. You can replicate data between availability zones in a single region, between regions in the cloud, between
different cloud platforms, to on-premises systems, or in any hybrid combination of these.
Benefits of data replication
By making data available on multiple hosts or data centers, data replication facilitates the large-scale sharing of data
among systems and distributes the network load among multisite systems. Organizations can expect to see benefits
including:
Improved reliability and availability: If one system goes down due to faulty hardware, malware attack, or another
problem, the data can be accessed from a different site.
Improved network performance: Having the same data in multiple locations can lower data access latency, since
required data can be retrieved closer to where the transaction is executing.
Increased data analytics support: Replicating data to a data warehouse empowers distributed analytics teams to work on
common projects for business intelligence.
Improved test system performance: Data replication facilitates the distribution and synchronization of data for test
systems that demand fast data accessibility.
Data replication challenges
Though replication provides many benefits, organizations should weigh the benefits against the disadvantages. The
challenges to maintaining consistent data across an organization boil down to limited resources:
Money: Keeping copies of the same data in multiple locations leads to higher storage and processor costs.
Time: Implementing and managing a data replication system requires dedicated time from an internal team.
Bandwidth: Maintaining consistency across data copies requires new procedures and adds traffic to the network.
What is Data Center Virtualization?
Data center virtualization is the process of creating a modern data center that is highly scalable, available and secure.
With data center virtualization products you can increase IT agility and create a seamless foundation to manage private
and public cloud services alongside traditional on-premises infrastructure.
In a virtualized data center, also called a software-defined data center (SDDC), virtual servers are created from
traditional physical servers. This process abstracts the physical hardware by imitating its processors, operating system, and
other resources with help from a hypervisor. A hypervisor (also called a virtual machine monitor, VMM, or virtualizer) is software
that creates and manages virtual machines. It treats resources such as CPU, memory, and storage as a pool that can be
easily reallocated between existing virtual machines or to new ones.
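As a loose illustration of that pooling idea (a toy sketch only, not how any real hypervisor is implemented), the following Python fragment models a host’s CPU and memory as a shared pool from which virtual machines are carved out and to which their resources are returned:

from dataclasses import dataclass, field

@dataclass
class HostPool:
    """Toy model of a hypervisor's view of one host: a shared pool of CPU and memory."""
    free_vcpus: int
    free_mem_gb: int
    vms: dict = field(default_factory=dict)

    def create_vm(self, name, vcpus, mem_gb):
        # Creating a VM just reserves a slice of the pool; no physical hardware changes.
        if vcpus > self.free_vcpus or mem_gb > self.free_mem_gb:
            raise RuntimeError("insufficient capacity in the pool")
        self.free_vcpus -= vcpus
        self.free_mem_gb -= mem_gb
        self.vms[name] = (vcpus, mem_gb)

    def destroy_vm(self, name):
        # Destroying a VM returns its resources to the pool for reallocation.
        vcpus, mem_gb = self.vms.pop(name)
        self.free_vcpus += vcpus
        self.free_mem_gb += mem_gb

host = HostPool(free_vcpus=32, free_mem_gb=256)
host.create_vm("app-server", vcpus=4, mem_gb=16)
host.create_vm("db-server", vcpus=8, mem_gb=64)
host.destroy_vm("app-server")       # the freed capacity is immediately available to new VMs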
Benefits of Data Center Virtualization
Data center virtualization offers a range of strategic and technological benefits to businesses looking for increased
profitability or greater scalability. Here we’ll discuss some of these benefits.
Scalability
Compared to physical servers, which require extensive and sometimes expensive sourcing and time management, virtual data
centers are relatively simpler, quicker, and more economical to set up. Any company that experiences high levels of growth
might want to consider implementing a virtualized data center.
It’s also a good fit for companies experiencing seasonal increases in business activity. During peak times, virtualized memory,
processing power, and storage can be added at a lesser cost and in a faster timeframe than purchasing and installing
components on a physical machine. Likewise, when demand slows, virtual resources can be scaled down to remove unnecessary
expenses. None of this flexibility is practical with bare-metal servers.
Data Mobility
Before virtualization, everything from common tasks and daily interactions to in-depth analytics and data storage
happened at the server level, meaning it could only be accessed from one location. With a strong enough
Internet connection, virtualized resources can be accessed when and where they are needed. For example,
employees can access data, applications, and services from remote locations, greatly improving productivity
outside the office.
Moreover, with help of cloud-based applications such as video conferencing, word processing, and other content
creation tools, virtualized servers make versatile collaboration possible and create more sharing opportunities.
Cost Savings
Typically outsourced to third-party providers, physical servers are always associated with high management and
maintenance costs, but these are not a problem in a virtual data center. Unlike their physical counterparts, virtual servers
are often offered as pay-as-you-go subscriptions, meaning companies only pay for what they use. By contrast, whether
physical servers are used or not, companies still have to shoulder the costs for their management and maintenance. As a
plus, the additional functionality that virtualized data centers offer can reduce other business expenses like travel costs.
Cloud vs. Virtualization: How Are They Related?
It’s easy to confuse virtualization with cloud. The two are closely related but quite different. To put it simply,
virtualization is a technology used to create multiple simulated environments or dedicated resources from a physical
hardware system, while cloud is an environment where scalable resources are abstracted and shared across a network.
Clouds are usually created to enable cloud computing, a set of principles and approaches to deliver compute, network,
and storage infrastructure resources, platforms, and applications to users on-demand across any network. Cloud
computing allows different departments (through private cloud) or companies (through a public cloud) to access a single
pool of automatically provisioned resources, while virtualization can make one resource act like many.
In most cases, virtualization and cloud work together to provide different types of services. Virtualized data center
platforms can be managed from a central physical location (private cloud) or a remote third-party location (public
cloud), or any combination of both (hybrid cloud). On-site virtualized servers are deployed, managed, and protected by
private or in-house teams. Alternatively, third-party virtualized servers are operated in remote data centers by a service
provider who offers cloud solutions to many different companies.
If you already have a virtual infrastructure, to create a cloud, you can pool virtual resources together, orchestrate them
using management and automation software, and create a self-service portal for users.
Transport Layer Security (TLS)
Computers send packets of data around the Internet. These packets are like letters in an
envelope: an onlooker can easily read the data inside them. If that data is public
information like a news article, that's not a big deal. But if that data is a password, credit
card number, or confidential email, then it's risky to let just anyone see that data.
The Transport Layer Security (TLS) protocol adds a layer of security on top of the TCP/IP
transport protocols. TLS uses both symmetric encryption and public key encryption for
securely sending private data, and adds additional security features, such as
authentication and message tampering detection.
TLS adds more steps to the process of sending data with TCP/IP, so it increases latency
in Internet communications. However, the security benefits are often worth the extra
latency.
(Note that TLS superseded an older protocol called SSL, so the terms TLS and SSL are often
used interchangeably.)
From start to finish
Let's step through the process of securely sending data with TLS from one computer
to another. We'll call the sending computer the client and the receiving computer
the server.
TCP handshake
Since TLS is built on top of TCP/IP, the client must first complete the 3-way TCP
handshake with the server.
TLS initiation
The client must notify the server that it desires a TLS connection instead of the standard
insecure connection, so it sends along a message describing which TLS protocol version and
encryption techniques it'd like to use.
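As a simplified illustration using Python’s standard ssl module (example.com stands in for any TLS-enabled server): the TCP connection is opened first, and wrapping the socket then performs the TLS handshake, during which the client offers its supported protocol versions and cipher suites and the server is authenticated before any application data flows:

import socket
import ssl

context = ssl.create_default_context()        # certificate validation and modern protocol versions by default

# Step 1: the ordinary TCP 3-way handshake with the server.
with socket.create_connection(("example.com", 443)) as tcp_sock:
    # Step 2: TLS initiation. wrap_socket() sends the ClientHello (offered TLS versions
    # and cipher suites) and completes the handshake, including verification of the
    # server's certificate, before any application data is exchanged.
    with context.wrap_socket(tcp_sock, server_hostname="example.com") as tls_sock:
        print(tls_sock.version())             # e.g. "TLSv1.3"
        print(tls_sock.cipher())              # the negotiated cipher suite
        # Everything sent from here on is encrypted and protected against tampering.
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200))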
These vendor access controls are most powerful and most effective when used in combination rather than independently. A system that
uses ZTNA, MFA, and fine-grained access controls is much better protected than one that relies on MFA alone.
Step 5: Monitor user access
Security doesn’t stop once a user has been vetted, authenticated, and granted access. Their activity needs to be
watched and recorded to ensure no nefarious behavior is happening while a third party is behind the scenes of
your network. Access monitoring includes the proactive monitoring of a user while in session and reactive
observation and analysis of network activity.
Proactive monitoring watches vendor behavior in real-time so any suspicious or inappropriate activity can be
detected. When combined with machine learning, user behavior can be observed so that any anomalous activity
generates alerts and informs administrators of at-risk activity.
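A minimal sketch of that idea in Python (the thresholds, field names, and baseline are illustrative assumptions, not a specific monitoring product): each in-session event is compared against a simple baseline of the vendor’s past behavior, and anything outside it raises an alert:

from datetime import datetime

# Hypothetical baseline learned from the vendor's past sessions.
BASELINE = {
    "allowed_hosts": {"build-server", "test-db"},
    "usual_hours": range(8, 18),              # 08:00-17:59 local time
    "max_records_per_minute": 50,
}

def check_event(event):
    """Return alert messages for one in-session event; an empty list means nothing unusual."""
    alerts = []
    if event["host"] not in BASELINE["allowed_hosts"]:
        alerts.append("access to unusual host " + event["host"])
    if datetime.fromisoformat(event["time"]).hour not in BASELINE["usual_hours"]:
        alerts.append("activity outside the vendor's usual working hours")
    if event["records_read"] > BASELINE["max_records_per_minute"]:
        alerts.append("unusually high read volume for this session")
    return alerts

# A 2 a.m. bulk read from an unexpected host raises three alerts for the administrator.
print(check_event({"host": "billing-db", "time": "2024-05-04T02:13:00", "records_read": 400}))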
Reactive monitoring happens after a session (hence, reactive) when there’s a specific reason to look into
third-party activity. It requires systems and tools to be in place to record sessions and is critical for investigation if a
data or privacy breach occurs. It also saves time when producing the incident reports now mandated by Executive
Order and when meeting audit or compliance requirements that call for reports on remote access activity.
Access monitoring is particularly important for the healthcare industry. Hospital staff members need open-ended
access to systems and databases to quickly access information that’s critical to patient care. Access controls aren’t
ideal in a healthcare cyber infrastructure; a nurse who needs to know a patient’s specific prescriptions or allergies
shouldn’t have to wait for access approval or authentication before accessing that patient’s information.
The situations hospital staff face could sometimes be a matter of life and death — and access controls cannot get in
the way. To keep patient data secure, healthcare organizations look to patient privacy monitoring (PPM) tools to
monitor user access into patient files and electronic medical records (EMR).
PPM tools monitor and track all accesses into EMR databases so hospital privacy and compliance teams can see the
“who, what, when, where, and why” of EMR access. These tools also detect access threats and flag any access
that’s anomalous or inappropriate to shore up the security gaps where access controls would usually be used.
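A simplified sketch of what such a tool records and checks (the field names and the single policy rule are illustrative assumptions, not a real PPM product): every EMR access is logged with its who, what, when, where, and why, and accesses that break a simple rule are flagged for the privacy team without delaying the clinician:

from datetime import datetime

access_log = []   # in practice this would be a tamper-evident audit store, not a list

def record_emr_access(who, what, where, why, assigned_patients):
    """Log one EMR access with its who/what/when/where/why and flag it if it looks inappropriate."""
    entry = {
        "who": who,                            # staff member
        "what": what,                          # patient record accessed
        "when": datetime.now().isoformat(),
        "where": where,                        # workstation or department
        "why": why,                            # stated reason, e.g. "medication review"
        "flagged": what not in assigned_patients,   # simple rule: not one of this clinician's patients
    }
    access_log.append(entry)
    return entry

# Access is never blocked or delayed; the privacy team reviews flagged entries after the fact.
print(record_emr_access("nurse_kim", "patient_4711", "ward-3 workstation",
                        "allergy check", assigned_patients={"patient_1001", "patient_1002"}))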
Step 6: Automate vendor remote access
While this technically isn’t a “next step,” it’s the best “best practice” you can implement
to fully secure vendor remote access into your high-risk access points.
Automating third-party remote access allows your organization to have a streamlined
process to connect third parties to your critical systems. This automation comes in the
form of third-party remote access platforms that provide a secure remote connection
between vendors and organizational networks.
These solutions are designed specifically for third-party remote access and are built to
cover all of these steps, from vendor identity management to ZTNA and authentication
tools.
When looking at these solutions for your organization, make sure there are also access
monitoring capabilities in place to audit and record all vendor activity and to meet the demands of your
cybersecurity strategy.
The End