DC CIA2 ANSWER KEY

M.A.M.

SCHOOL OF ENGINEERING
Accredited by NAAC
Approved by AICTE, New Delhi; Affiliated to Anna University, Chennai
Siruganur, Trichy - 621 105. www.mamse.in
Department of Computer Science & Engineering
Academic Year 2022-2023 (ODD Semester) SET B
CIA – II
Sub.Code/Sub.Name : CS3551 – Distributed Computing Date : 10.10.2023
Year/Sem. : III / V Max.Marks : 100

1. Define the term deadlock avoidance.


Deadlock avoidance means that a request for any resource is granted only if the resulting state of the system does
not lead to a deadlock.

2. What is a phantom deadlock?

A phantom deadlock is a deadlock that is falsely detected in a distributed DBMS because of communication delays
between different processes; acting on it leads to unnecessary process abortions.

3. Write short notes on wait for graph?

A wait-for graph represents processes as nodes, with an edge from Pi to Pj when Pi is blocked waiting for a resource
held by Pj; a cycle in the graph indicates a deadlock. The wait-for-graph scheme is not applicable to a resource
allocation system with multiple instances of each resource type.

4. Explain wait-die method?


When an older transaction tries to lock a DB element that has been locked by a younger transaction, it waits; when a
younger transaction requests an element locked by an older transaction, it dies (it is aborted and later restarted
with its original timestamp).
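
As an illustration only (not part of the answer key), here is a minimal Python sketch of the wait-die decision; the
function name wait_die and the timestamp arguments are hypothetical:

def wait_die(requester_ts, holder_ts):
    # Decide the fate of a transaction with timestamp requester_ts that requests
    # a lock currently held by a transaction with timestamp holder_ts.
    # A smaller timestamp means an older transaction.
    if requester_ts < holder_ts:
        return "WAIT"   # older requester waits for the younger holder
    return "DIE"        # younger requester aborts and restarts later with its original timestamp

print(wait_die(5, 9))   # "WAIT": transaction 5 is older, so it waits
print(wait_die(9, 5))   # "DIE": transaction 9 is younger, so it is aborted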

5. Write short notes on deadlock detection?


Deadlock detection algorithms are used to identify the presence of deadlocks in computer systems.

6. Define Public cloud.


A public cloud is a cloud deployment model in which computing resources (servers, storage, applications) are owned
and operated by a third-party provider, delivered over the public Internet, and shared among multiple customers on a
pay-as-you-go basis.

7. Write the characteristics of cloud computing?


Resources Pooling.
On-Demand Self-Service.
Easy Maintenance.
Scalability And Rapid Elasticity.
Economical.
Measured And Reporting Service.
Security.
Automation.
8. Distinguish between virtualization and cloud computing

The differences between cloud computing and virtualization are:

1. Cloud computing provides pooled, automated resources that can be accessed on demand, while virtualization creates
various simulated environments from one physical hardware system.
2. Cloud computing setup is tedious and complicated, while virtualization setup is simple in comparison.
3. Cloud computing is highly scalable, while virtualization is less scalable than cloud computing.
4. Cloud computing is very flexible, while virtualization is less flexible than cloud computing.
5. For disaster recovery, cloud computing relies on multiple machines, while virtualization relies on a single
peripheral device.
6. In cloud computing the workload is stateless, while in virtualization the workload is stateful.
7. The total cost of cloud computing is higher than virtualization, while the total cost of virtualization is lower
than cloud computing.
8. Cloud computing requires many dedicated hardware systems, while virtualization needs only a single dedicated
hardware system.

9. What are the Pros and Cons of virtualization?


Virtualization is the creation of a virtual version of something such as a server, desktop, storage device, or
operating system. It is a technique that allows a single physical instance of a resource or an application to be
shared among multiple customers and organizations; one physical resource is often used to create many virtual
resources.

Pros: better utilization of physical hardware, lower cost, easier backup, migration and disaster recovery, and
isolation between workloads.
Cons: performance overhead compared to running directly on hardware, high initial setup cost, and dependence on the
hypervisor and on skilled administrators.

Host Machine –
The physical machine on which the virtual machines are created is known as the host machine.

Guest Machine –
The virtual machines created on the host machine are called guest machines.
10. Explain the characteristics of PaaS?
PaaS offers a browser-based development environment. It allows the developer to create databases and edit the
application code either via an Application Programming Interface or through point-and-click tools.
Part B (5x13=65 Marks)
11. Illustrate with a case study explain ricart–agrawala algorithm?
Ricart–Agrawala algorithm is an algorithm for mutual exclusion in a distributed system proposed by Glenn Ricart
and Ashok Agrawala. This algorithm is an extension and optimization of Lamport’s Distributed Mutual Exclusion
Algorithm. Like Lamport’s Algorithm, it also follows permission-based approach to ensure mutual exclusion. In
this algorithm:

Two types of messages (REQUEST and REPLY) are used, and communication channels are assumed to follow FIFO
order.
A site sends a REQUEST message to all other sites to get their permission to enter the critical section.
A site sends a REPLY message to another site to give its permission to enter the critical section.
A timestamp is given to each critical section request using Lamport’s logical clock.
Timestamps are used to determine the priority of critical section requests: a smaller timestamp gets higher priority
than a larger one. Critical section requests are always executed in the order of their timestamps.
Algorithm:

To enter Critical section:


When a site Si wants to enter the critical section, it sends a timestamped REQUEST message to all other sites.
When a site Sj receives a REQUEST message from site Si, it sends a REPLY message to site Si if and only if
Site Sj is neither requesting nor currently executing the critical section, or
Site Sj is requesting but the timestamp of Site Si‘s request is smaller than that of its own request;
otherwise, the reply is deferred.
To execute the critical section:
Site Si enters the critical section if it has received the REPLY message from all other sites.
To release the critical section:
Upon exiting site Si sends REPLY message to all the deferred requests.
Message Complexity: the Ricart–Agrawala algorithm requires 2(N – 1) messages per critical section
execution. These 2(N – 1) messages involve

(N – 1) request messages
(N – 1) reply messages
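
The following Python sketch is an illustration only, not part of the answer key; it shows the per-site bookkeeping
described above. The class name RASite and the send(dest, message) callback are assumptions for the example.

class RASite:
    # One site of the Ricart-Agrawala algorithm.
    def __init__(self, site_id, all_sites, send):
        self.id = site_id
        self.sites = all_sites          # ids of all sites, including this one
        self.send = send                # hypothetical transport: send(dest_id, message_dict)
        self.clock = 0                  # Lamport logical clock
        self.requesting = False
        self.in_cs = False
        self.request_ts = None          # (clock, site_id) of the outstanding request
        self.replies_pending = set()
        self.deferred = []              # requests answered only on release

    def request_cs(self):
        self.clock += 1
        self.requesting = True
        self.request_ts = (self.clock, self.id)
        self.replies_pending = {s for s in self.sites if s != self.id}
        for s in self.replies_pending:  # REQUEST goes to all other sites
            self.send(s, {"type": "REQUEST", "ts": self.request_ts, "from": self.id})

    def on_request(self, ts, sender):
        self.clock = max(self.clock, ts[0]) + 1
        # Defer the reply if this site is in the CS, or is requesting with higher
        # priority (smaller timestamp wins; the site id breaks ties).
        if self.in_cs or (self.requesting and self.request_ts < ts):
            self.deferred.append(sender)
        else:
            self.send(sender, {"type": "REPLY", "from": self.id})

    def on_reply(self, sender):
        self.replies_pending.discard(sender)
        if self.requesting and not self.replies_pending:
            self.in_cs = True           # all N - 1 replies received: enter the CS

    def release_cs(self):
        self.in_cs = False
        self.requesting = False
        for s in self.deferred:         # answer every deferred request
            self.send(s, {"type": "REPLY", "from": self.id})
        self.deferred.clear()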
Advantages of the Ricart-Agrawala Algorithm:

Low message complexity: the algorithm requires only 2(N – 1) messages per critical section execution (compared with
3(N – 1) for Lamport’s algorithm), where N is the total number of sites in the system.
Scalability: The algorithm is scalable and can be used in systems with a large number of nodes.
Non-blocking: The algorithm is non-blocking, which means that a node can continue executing its normal
operations while waiting to enter the critical section.
Drawbacks of Ricart–Agrawala algorithm:

Unreliable approach: the failure of any one node in the system can halt the progress of the whole system, because a
requesting site waits for replies from all other sites; in this situation the requesting process starves. Node
failures can be handled by detecting them after some timeout.

12. Discuss the suzuki–kasami’s broadcast algorithm with neat diagram?


The Suzuki–Kasami algorithm is a token-based algorithm for achieving mutual exclusion in distributed systems. It is a
modification of the Ricart–Agrawala algorithm, a permission-based (non-token-based) algorithm that uses REQUEST
and REPLY messages to ensure mutual exclusion.

In token-based algorithms, a site is allowed to enter its critical section if it possesses the unique token.
Non-token-based algorithms use timestamps to order requests for the critical section, whereas token-based algorithms
use sequence numbers.

Each request for the critical section contains a sequence number. This sequence number is used to distinguish old
requests from current ones.

Data structure and Notations:


An array of integers RN[1…N]
A site Si keeps RNi[1…N], where RNi[j] is the largest sequence number received so far in a REQUEST message
from site Sj.
An array of integers LN[1…N]
This array is carried by the token. LN[j] is the sequence number of the request most recently executed by site Sj.
A queue Q
This data structure is carried by the token to keep a record of the IDs of sites waiting for the token.
Algorithm:

To enter Critical section:


When a site Si wants to enter the critical section and it does not have the token, it increments its sequence
number RNi[i] and sends a request message REQUEST(i, sn) to all other sites in order to request the token.
Here sn is the updated value of RNi[i].
When a site Sj receives the request message REQUEST(i, sn) from site Si, it sets RNj[i] to the maximum of RNj[i] and
sn, i.e. RNj[i] = max(RNj[i], sn).
After updating RNj[i], site Sj sends the token to site Si if it has the token and RNj[i] = LN[i] + 1.
To execute the critical section:
Site Si executes the critical section if it has acquired the token.
To release the critical section:
After finishing the execution, site Si exits the critical section and does the following:
it sets LN[i] = RNi[i] to indicate that its critical section request RNi[i] has been executed;
for every site Sj whose ID is not present in the token queue Q, it appends Sj's ID to Q if RNi[j] = LN[j] + 1, which
indicates that site Sj has an outstanding request;
after the above update, if the queue Q is non-empty, it pops a site ID from Q and sends the token to the site
indicated by the popped ID;
if the queue Q is empty, it keeps the token.
Message Complexity:
The algorithm requires no messages if the site already holds the idle token at the time of its critical section
request, and at most N messages per critical section execution otherwise. These N messages involve

(N – 1) request messages
1 reply message
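
As an illustration (not from the answer key), a minimal Python sketch of the Suzuki–Kasami bookkeeping described
above; the class name SKSite and the send(dest, message) callback are hypothetical.

from collections import deque

class SKSite:
    # One site of the Suzuki-Kasami algorithm; exactly one site starts with the token.
    def __init__(self, site_id, n_sites, send, has_token=False):
        self.id = site_id
        self.n = n_sites
        self.send = send                         # hypothetical transport: send(dest_id, message_dict)
        self.RN = [0] * n_sites                  # highest request number seen per site
        self.token = {"LN": [0] * n_sites, "Q": deque()} if has_token else None
        self.in_cs = False

    def request_cs(self):
        if self.token is not None:
            self.in_cs = True                    # idle token already held: zero messages
            return
        self.RN[self.id] += 1
        for s in range(self.n):                  # broadcast REQUEST(i, sn) to all other sites
            if s != self.id:
                self.send(s, {"type": "REQUEST", "from": self.id, "sn": self.RN[self.id]})

    def on_request(self, sender, sn):
        self.RN[sender] = max(self.RN[sender], sn)
        # Pass the idle token if the sender's request is outstanding: RN[j] = LN[j] + 1.
        if self.token is not None and not self.in_cs \
           and self.RN[sender] == self.token["LN"][sender] + 1:
            token, self.token = self.token, None
            self.send(sender, {"type": "TOKEN", "token": token})

    def on_token(self, token):
        self.token = token
        self.in_cs = True

    def release_cs(self):
        self.in_cs = False
        self.token["LN"][self.id] = self.RN[self.id]          # own request is now executed
        for j in range(self.n):                               # enqueue outstanding requesters
            if j not in self.token["Q"] and self.RN[j] == self.token["LN"][j] + 1:
                self.token["Q"].append(j)
        if self.token["Q"]:                                   # hand the token to the next site
            nxt = self.token["Q"].popleft()
            token, self.token = self.token, None
            self.send(nxt, {"type": "TOKEN", "token": token})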
Drawbacks of Suzuki–Kasami Algorithm:

Non-symmetric algorithm: a site retains the token even if it has not requested the critical section, which goes
against the definition of a symmetric algorithm:
“No site possesses the right to access its critical section when it has not been requested.”

13.A) Explain chandy-Misra- Haas algorithm for OR model?


The Chandy–Misra–Haas algorithm for the resource model checks for deadlocks in a distributed system. It was developed
by K. Mani Chandy, Jayadev Misra and Laura M. Haas.

Locally dependent
Consider n processes P1, P2, P3, ..., Pn which run on a single system (controller). P1 is locally dependent on Pn if
P1 depends on P2, P2 on P3, and so on, with Pn−1 depending on Pn. That is, if

P1 → P2 → P3 → ... → Pn → P1,

then P1 is locally dependent on itself.

Description
The algorithm uses a message called probe(i, j, k), which is sent from the controller of process Pj to the controller
of process Pk. It is a message initiated on behalf of process Pi to find out whether a deadlock has occurred. Every
process Pj maintains a boolean array dependent, which records which processes depend on it. Initially, all entries of
the array are false.

Controller sending a probe


Before sending a probe, the controller checks whether Pj is locally dependent on itself. If so, a deadlock is
declared. Otherwise, it checks whether Pj and Pk are on different controllers, whether Pi is locally dependent on Pj,
and whether Pj is waiting for a resource that is locked by Pk. Once all these conditions are satisfied, it sends the
probe.

Controller receiving a probe


On the receiving side, the controller checks whether Pk is still running; if so, it ignores the probe. Otherwise, it
checks that Pk has not replied to all of Pj's requests and that dependentk(i) is false. Once this is verified, it
sets dependentk(i) to true. It then checks whether k equals i. If they are equal, a deadlock is declared; otherwise,
the probe is forwarded to the next dependent process.

Algorithm
In pseudocode, the algorithm works as follows:[1]

Controller sending a probe


if Pj is locally dependent on itself
    then declare deadlock
else for all Pj, Pk such that
    (i) Pi is locally dependent on Pj,
    (ii) Pj is waiting for Pk, and
    (iii) Pj, Pk are on different controllers,
    send probe(i, j, k) to the home site of Pk
Controller receiving a probe(i, j, k)
if
    (i) Pk is blocked,
    (ii) dependentk(i) = false, and
    (iii) Pk has not replied to all requests of Pj
then begin
    dependentk(i) = true;
    if k == i
        then declare that Pi is deadlocked
    else for all Pa, Pb such that
        (i) Pk is locally dependent on Pa,
        (ii) Pa is waiting for Pb, and
        (iii) Pa, Pb are on different controllers,
        send probe(i, a, b) to the home site of Pb
end
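
Purely as an illustration (not from the answer key), the probe-handling step above could be sketched in Python as
follows; all parameter names are hypothetical, local dependence is simplified to the waits_for relation, and the
"has not replied to all requests" check is omitted for brevity.

def on_probe(i, j, k, blocked, dependent, waits_for, controller_of, send):
    # blocked[k]      -> True if P_k is blocked
    # dependent[k][i] -> True if P_i is already known to depend on P_k
    # waits_for[k]    -> processes that P_k is waiting for
    # controller_of   -> maps a process to its controller id
    # send(ctrl, msg) -> hypothetical transport callback
    if not blocked[k] or dependent[k].get(i, False):
        return                                   # discard the probe
    dependent[k][i] = True
    if k == i:
        print(f"P{i} is deadlocked")             # the probe came back to its initiator
        return
    for b in waits_for[k]:                       # propagate probe(i, k, b)
        if controller_of[b] != controller_of[k]:
            send(controller_of[b], ("probe", i, k, b))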

B) State the example of a WFG


A wait-for graph in computer science is a directed graph used for deadlock detection in operating systems and
relational database systems.
One such deadlock detection algorithm makes use of a wait-for graph to track which other processes a process is
currently blocking on. In a wait-for graph, processes are represented as nodes, and an edge from process Pi to Pj
implies that Pj is holding a resource that Pi needs, and thus Pi is waiting for Pj to release its lock on that
resource. If the process is waiting for more than a single resource to become available (the trivial case), multiple
edges may represent a conjunctive (and) or disjunctive (or) set of different resources or a certain number of
equivalent resources from a collection. The possibility of a deadlock is implied by graph cycles in the conjunctive
case, and by knots in the disjunctive case. There is no simple algorithm for detecting the possibility of deadlock in
the final case.[1]
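
As a concrete, hypothetical example (not from the answer key), the following Python snippet builds a small wait-for
graph and detects the cycle P1 → P2 → P3 → P1, which signals a deadlock in the conjunctive case:

# An edge Pi -> Pj means Pi is waiting for a resource held by Pj.
wfg = {
    "P1": ["P2"],   # P1 waits for P2
    "P2": ["P3"],   # P2 waits for P3
    "P3": ["P1"],   # P3 waits for P1 -> cycle P1 -> P2 -> P3 -> P1
    "P4": [],       # P4 is not waiting for anyone
}

def has_cycle(graph):
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {node: WHITE for node in graph}

    def dfs(node):
        colour[node] = GREY
        for nxt in graph.get(node, []):
            if colour[nxt] == GREY:              # back edge: a cycle exists
                return True
            if colour[nxt] == WHITE and dfs(nxt):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and dfs(n) for n in graph)

print(has_cycle(wfg))   # True -> P1, P2 and P3 are deadlocked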

14.A) Explain about para virtualization technique with neat diagram?


Paravirtualization is an enhancement of virtualization technology in which a guest operating system (OS) is
modified prior to installation inside a virtual machine (VM). This lets all guest OSes within the system share
resources and successfully collaborate rather than emulate an entire hardware environment.

With paravirtualization, virtual machines are accessed through interfaces similar to the underlying hardware.
This capability minimizes overhead and optimizes system performance by supporting the use of VMs that would
otherwise be underutilized in conventional or full hardware virtualization.

Paravirtualization eliminates the need for the virtual machine to trap privileged instructions. Trapping, a means of
handling unexpected or unallowable conditions, can be time consuming and lower performance in systems that
employ full virtualization.

The main limitation of paravirtualization is that the guest OS must be tailored specifically to run on top of the
virtual machine monitor (VMM) -- the host program that lets a single computer support multiple, identical
execution environments.
How does paravirtualization work?
Paravirtualization attempts to resolve issues found in full virtualization. The primary difference between
paravirtualization and full virtualization is the ability to make modifications to the guest OS in paravirtualization.

Furthermore, in paravirtualization, the guest OS is aware it is being virtualized. In full virtualization, the
unmodified OS is unaware it is being virtualized, and sensitive OS calls are captured and translated using binary
translation.

By granting the guest OS access to the underlying hardware, paravirtualization enables communication between
the guest OS and the hypervisor, thus improving performance and efficiency within the system.

More specifically, the paravirtualization process consists of the guest OS being modified specifically for installation
on a VM. This is necessary because unmodified guest OSes are unable to run on a VMM. The intent of the
modification is to decrease the execution time required to complete operations that can be problematic in virtual
environments.

In paravirtualization, the guest kernel is modified to run with the hypervisor. This frequently involves replacing
operations that would normally run in ring 0 of the processor with calls to the hypervisor, known as hypercalls.

The hypervisor responds by performing the task for the guest kernel and supplying hypercall interfaces that can
complete other important kernel operations -- such as interrupt handling, time keeping and memory
management.
B) Distinguish between para virtualization and full virtualization.

There are several key differences between full virtualization and paravirtualization in operating systems. The main
differences are as follows:

1. Full virtualization is the first generation of software solutions for server virtualization. On
the other hand, the interaction of the guest operating system with the hypervisor to
improve performance and productivity is known as paravirtualization.
2. Full virtualization enables the Guest operating system to run independently. In contrast,
paravirtualization enables the Guest OS to interact with the hypervisor.
3. Full virtualization performance is slow. In contrast, paravirtualization performance is higher
than full virtualization.
4. Full virtualization is less secure than paravirtualization. On the other hand,
paravirtualization is more secure than full virtualization.
5. Binary translation and a direct approach are used in full virtualization. On the other hand,
paravirtualization operates through hypercalls.
6. Full virtualization is more portable and adaptable. On the other hand, paravirtualization is
less portable and less compatible.
7. Full virtualization supports all the Guest OS without any change. On the other hand, the
Guest OS has to be modified in paravirtualization and only a few OS support it.
8. The Guest OS will issue hardware calls in full virtualization. In contrast, the guest OS will
interface directly with the hypervisor via drivers in paravirtualization.
9. Full virtualization is less efficient than paravirtualization. On the other hand,
paravirtualization is more streamlined and efficient.
10. The optimum isolation is provided by full virtualization. On the other hand,
paravirtualization offers less isolation than full virtualization.
11. There are just a few paravirtualization examples, such as VMware and Xen. In contrast, full
virtualization is used in VMware, Microsoft, and Parallels systems.

15.A) Discuss briefly load balancing in clouds?


Load balancing is the method of distributing the work to be done evenly across different devices or pieces of
hardware equipment. Typically, the load is balanced between different servers, or between the CPU and hard drives in
a single cloud server.

Load balancing was introduced for various reasons. One of them is to improve the speed and performance of each single
device, and the other is to protect individual devices from hitting their limits, which would degrade their
performance.

Cloud load balancing is defined as dividing workload and computing properties in cloud computing. It enables
enterprises to manage workload demands or application demands by distributing resources among multiple
computers, networks or servers. Cloud load balancing involves managing the movement of workload traffic and
demands over the Internet.

Traffic on the Internet is growing rapidly, roughly doubling every year. The workload on servers is therefore
increasing quickly, leading to server overloading, especially for popular web servers. There are two primary
solutions to overcome the problem of overloading on the servers:
The first is a single-server solution, in which the server is upgraded to a higher-performance server. However, the
new server may also become overloaded soon, demanding another upgrade, and the upgrading process is arduous and
expensive.
The second is a multiple-server solution, in which a scalable service system is built on a cluster of servers. This
is why it is more cost-effective and more scalable to build a server cluster system for network services.
Cloud-based servers can achieve more precise scalability and availability by using server farm load balancing. Load
balancing is beneficial with almost any type of service, such as HTTP, SMTP, DNS, FTP, and POP/IMAP.
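
To make the idea concrete, here is a deliberately simplified Python sketch (an illustration only; the server names
are made up) of round-robin dispatch, the simplest load-balancing policy:

from itertools import cycle

servers = ["web-1", "web-2", "web-3"]      # hypothetical pool of servers
next_server = cycle(servers)               # rotate through the pool endlessly

def dispatch(request_id):
    target = next(next_server)             # pick servers in strict rotation
    print(f"request {request_id} -> {target}")
    return target

for r in range(6):
    dispatch(r)   # requests 0..5 are spread evenly: web-1, web-2, web-3, web-1, ...

Real cloud load balancers use richer policies (least connections, weighted distribution, health checks), but the goal
is the same: no single server absorbs all the traffic.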

15.B) Discuss about the windows Azure platform architecture with neat diagram?

What is Microsoft Azure Used For?


Microsoft Azure is a cloud computing service that provides virtual servers and storage. It allows you to quickly
deploy applications and run them in the cloud, while also giving you access to a large network of other users. You
can use Azure to run web apps, mobile apps, and more. Azure is available in several different subscription plans,
so you can choose what’s right for your needs. It’s commonly used for things like building websites and running
databases, among other things. With Microsoft’s recent launches of Azure Kubernetes Service (AKS) and Azure
Container Service (ACS), microservice adoption has gained momentum in enterprises. Microservices are “a
collection of small, independent services that communicate with each other via APIs”. These individual services
are easier to maintain and develop than monolithic applications. Microservices show signs of being a key
component of future IT architecture due to their flexibility as well as scalability. In addition to its ease of use,
Microsoft also offers a wide range of tools designed to help developers build microservices and scale their
applications end-to-end on Azure. For example, Visual Studio Code provides a lightweight code editor designed
for HTML, JavaScript, and TypeScript development on Windows, macOS, and Linux. Azure CLI provides an easy-to-use
Command Line Interface (CLI) that enables developers to automate tasks across Azure services.

Users can create any kind of web application in Azure, from a blog to a corporate intranet. Developers can choose
from a large range of hosted tools, from WordPress to Node.js. For example, a large telecom company could set
up an internal corporate intranet for employees and contractors with a Node.js app.
Azure provides a scalable and reliable environment for hosting your applications. It is easy to scale the application
up or down as needed, and easily change the location of where you host the application.
The dev testing service Azure App Creation lets developers quickly spin up virtual machines in the cloud and run
their applications in the same way a customer would. It’s ideal for testing on the same platform as a customer or
for beta testing applications and features on the developer’s own infrastructure.
You can use Azure to quickly set up a virtual machine with the software your organization needs. You can use
virtual networks, Storage accounts, and other features to make your environment more secure, manage access,
and stay compliant with regulatory and compliance requirements. You can also monitor virtual machines with
built-in synthetic tests, custom metrics, and custom alarm rules. In the end, virtualizing your workloads gives you
flexibility, scalability, and control.
Virtual hard drives allow you to have more storage capacity than what’s allowed in a single virtual machine. You
can use this feature to store excess data from a virtual machine, host a backup, or allocate storage to an
application that requires a large amount of hard drive space. You can also use virtual hard drives to create a
separate, independent storage pool for each virtual machine. This flexibility allows you to scale your
infrastructure without having to scale your hardware.
Azure provides a broad array of cloud-based applications that can be easily integrated with your existing
infrastructure. This gives enterprises a new way of using apps in the cloud, without sacrificing data security,
compliance, or control. Azure also gives you the option to create hybrid cloud applications that can run on both
on-premises infrastructure and in the cloud.
This data can then be used to improve performance, measure the effectiveness of your marketing campaigns, and
more. It can also be used to create custom reports and visualizations that were never possible before the advent
of cloud computing.
16. Discuss briefly about the application services of cloud computing?
In the digital world, the cloud has nothing to do with the white fluffy things in the sky; it has everything to do with
the Internet.

As a tech revolution that has witnessed rapid adoption over the last decade, the cloud fuels some of the world’s
largest brands, and it’s the technology behind some of the most innovative products and tools of recent times.

Businesses worldwide are using cloud resources or cloud computing to access important programs and data on a
pay-as-you-go basis. Prized for its convenience and reliability, cloud computing is transforming businesses and
their operations across industries.

Here we’ll look at some of the reasons why cloud computing is so important, its benefits, the most popular
applications of cloud computing, and why cloud computing careers are in such high demand.

What is Cloud Computing?


Cloud Computing refers to the delivery of on-demand computing services over the Internet on an as-needed
basis. It allows businesses to rent access to computing services like servers, storage, databases, analytics,
networking, software, and intelligence, typically over the Internet.

By renting IT resources from a cloud service provider, companies can avoid setting up and owning data centers
and computing infrastructure. This reduces the cost of developing and installing software to improve business
operations. Companies simply pay for the services they use, when they use them. Thus cloud computing enables business
owners to lower operational costs, run their infrastructure more efficiently, and scale as business needs change.

What Are the Benefits of Cloud Computing?


Cloud computing offers numerous benefits, which is why businesses of all sizes – from corporate giants to small
start-ups - are adopting it with such enthusiasm. The top benefits of cloud computing are:

Lower Costs
It is expensive to establish and run in-house computing infrastructure. Purchasing and maintaining equipment and
hiring trained IT experts come at a cost. By switching to cloud computing, businesses only need to pay for the
services they procure. This results in significant cost savings.

Mobility
Cloud-based technology offers mobility, ensuring workers can access resources in the cloud in real-time from any
location or device.

Scalability
Businesses using cloud computing can scale up or down their IT features based on business requirements.

Disaster Recovery
Cloud systems simplify disaster recovery: because data is backed up in the cloud, the risk of permanent data loss in
case of a disaster is greatly reduced.

Data Security
Cloud computing offers many advanced data security features to guarantee data safety and security.

Wide Range of Options


There are various types, models, and services of cloud platforms available suited to the different needs of
enterprises.

Unlimited Storage Capacity
The cloud offers virtually unlimited storage capacity for all types of data.

Automatic Software Updates


Software and security are regularly managed by software vendors on behalf of the users.

Better Collaboration
Cloud environments allow easy sharing of real-time data across teams within an organization, which improves
collaboration and team performance.

Top 7 Applications of Cloud Computing


Cloud technology offers several applications in various fields like business, data storage, entertainment,
management, social networking, education, art, GPS, to name a few.

The major types of cloud computing service models available are Platform as a Service (PaaS), Infrastructure as a
Service (IaaS), and Software as a Service (SaaS). Plus, there are platforms like Public Cloud, Private Cloud, Hybrid
Cloud, and Community Cloud.

1. Online Data Storage


Cloud Computing allows storage and access to data like files, images, audio, and videos on the cloud storage. In
this age of big data, storing huge volumes of business data locally requires more and more space and escalating
costs. This is where cloud storage comes into play, where businesses can store and access data using multiple
devices.

The interface provided is easy to use, convenient, and has the benefits of high speed, scalability, and integrated
security.
2. Backup and Recovery
Cloud service providers offer safe storage and backup facility for data and resources on the cloud. In a traditional
computing system, data backup is a complex problem, and often, in case of a disaster, data can be permanently
lost. But with cloud computing, data can be easily recovered with minimal damage in case of a disaster.

3. Big Data Analysis


One of the most important applications of cloud computing is its role in extensive data analysis. The extremely
large volume of big data makes it impossible to store using traditional data management systems. Due to the
unlimited storage capacity of the cloud, businesses can now store and analyze big data to gain valuable business
insights.

4. Testing and Development


Cloud computing applications provide the easiest approach for the testing and development of products. In
traditional methods, setting up such an environment was time-consuming and expensive, owing to the IT resources and
infrastructure that had to be provisioned and the manpower needed. With cloud computing, however, businesses get
scalable and flexible cloud services, which they can use for product development, testing, and deployment.

5. Antivirus Applications
With cloud computing comes cloud antivirus software, which is hosted in the cloud and from there monitors viruses
and malware in the organization’s systems and fixes them. Earlier, organizations had to install antivirus software
within their own systems to detect security threats.

6. E-commerce Application
E-commerce applications in the cloud enable users and e-businesses to respond quickly to emerging opportunities.
They offer business leaders a new way to get things done with minimal cost and in minimal time. Businesses use cloud
environments to manage customer data, product data, and other operational systems.

7. Cloud Computing in Education


E-learning, online distance learning programs, and student information portals are some of the key changes
brought about by applications of cloud computing in the education sector. This new learning environment provides
students, teachers, and researchers with an attractive setting for learning, teaching, and experimenting: they can
connect to the cloud of their institution and access data and information.
