
Assignment No. 1

Q1. Define distributed system. Mention a few real-time examples of distributed systems.
Ans-What is a Distributed System?
A distributed system is a collection of autonomous computer systems that are physically separated but connected by a computer network and equipped with distributed system software. The autonomous computers communicate with one another by sharing resources and files and by performing the tasks assigned to them.
Example of a Distributed System:
Any social media platform: its headquarters can be viewed as the centralized computer network, while the computer systems that users access for its services are the autonomous systems in the distributed system architecture.

 Distributed System Software: This software enables computers to coordinate their activities and to share resources such as hardware, software, and data.
 Database: It is used to store the data processed by each node/system of the distributed system connected to the centralized network.
 Each autonomous system runs a common application and can hold its own data, which is shared through the centralized database system.
 To transfer data to the autonomous systems, the centralized system must have a middleware service and be connected to a network.
 Middleware services enable functionality that is not present by default in the local or centralized systems by acting as an interface between them. Systems communicate and manage data by using middleware components.
 The data transferred through the database is divided into segments or modules and shared with the autonomous systems for processing.
 The processed data is then transferred back to the centralized system through the network and stored in the database.
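The split-process-gather flow described above can be illustrated with a short Python sketch using a process pool from the standard library. This is a minimal illustration rather than any particular system's design: the worker function, the doubling operation, and the number of nodes are all illustrative assumptions.

from concurrent.futures import ProcessPoolExecutor

def process_segment(segment):
    # Stand-in for the work an autonomous node performs on its module.
    return [value * 2 for value in segment]

def scatter_gather(data, num_nodes=4):
    # The centralized system divides the data into one segment per node.
    size = max(1, len(data) // num_nodes)
    segments = [data[i:i + size] for i in range(0, len(data), size)]
    # Each worker process plays the role of an autonomous system.
    with ProcessPoolExecutor(max_workers=num_nodes) as pool:
        results = pool.map(process_segment, segments)
    # The processed segments are gathered back for storage in the database.
    return [value for segment in results for value in segment]

if __name__ == "__main__":
    print(scatter_gather(list(range(10))))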
Characteristics of Distributed System:
 Resource Sharing: It is the ability to use any Hardware, Software, or Data
anywhere in the System.
 Openness: It is concerned with extensions and improvements in the system (i.e., how openly the software is developed and shared with others).
 Concurrency: It is naturally present in distributed systems: the same activity or functionality can be performed by separate users at remote locations. Every local system has its own independent operating system and resources.
 Scalability: The system can grow in scale, with more processors and users, while accommodating that growth so that the responsiveness of the system is maintained or improved.
 Fault tolerance: It concerns the reliability of the system; if there is a failure in hardware or software, the system continues to operate properly without degrading the performance of the system.
 Transparency: It hides the complexity of the distributed system from users and application programs, so the system appears as a single coherent whole.
 Heterogeneity: Networks, computer hardware, operating systems, programming
languages, and developer implementations can all vary and differ among dispersed
system components.
Advantages of Distributed System:
 Applications in Distributed Systems are Inherently Distributed Applications.
 Information in Distributed Systems is shared among geographically distributed
users.
 Resource Sharing (Autonomous systems can share resources from remote
locations).
 It has a better price-performance ratio and greater flexibility.
 It has shorter response times and higher throughput.
 It has higher reliability and availability against component failure.
 It has extensibility, so systems can be extended to more remote locations, and it supports incremental growth.
Disadvantages of Distributed System:
 Mature, standardized software for building distributed systems is still limited.
 Security poses a problem, because resources are shared across multiple systems and data is easier to access.
 Network saturation may hinder data transfer; if the network lags, users will face problems accessing data.
 In comparison to a single user system, the database associated with distributed
systems is much more complex and challenging to manage.
 If every node in a distributed system tries to send data at once, the network may
become overloaded.
Applications Area of Distributed System:
 Finance and Commerce: Amazon, eBay, Online Banking, E-Commerce websites.
 Information Society: Search Engines, Wikipedia, Social Networking, Cloud
Computing.
 Cloud Technologies: AWS, Salesforce, Microsoft Azure, SAP.
 Entertainment: Online Gaming, Music, YouTube.
 Healthcare: Online patient records, Health Informatics.
 Education: E-learning.
 Transport and logistics: GPS, Google Maps.
 Environment Management: Sensor technologies.
Q2. What are the challenges involved in designing a scalable distributed system? What are the types of transparency in a distributed system?

Ans-What is a Scalable System in a Distributed System?


The Scalable System in Distributed System refers to the system in which there is a
possibility of extending the system as the number of users and resources grows with
time.
 The system should be capable enough to handle the load so that the system and application software need not change when the scale of the system increases.
 To exemplify, with an increasing number of users and workstations, the frequency of file access is likely to increase in a distributed system. So, there must be a way to add more servers to avoid any issues in handling file access.
 Scalability is generally considered with respect to hardware and software. In hardware, scalability refers to the ability to handle changing workloads by altering hardware resources such as processors, memory, and disk space. Software scalability refers to the capacity to adapt to changing workloads by altering the scheduling mechanism and the level of parallelism.

Need for Scalability Framework:

The scalability framework is required for the applications as it refers to a software


system’s ability to scale up in some way when and where required because of the
changing demands of the system like increasing users or workload, etc.
Examples include Spring Framework, JavaServer Faces(JSF), Struts, Play!, and
Google Web Toolkit (GWT).

How to Measure Scalability:

We can measure Scalability in terms of developing and testing the foundation of a


scalable system. However, precisely measuring a system’s scalability is difficult due
to scalable systems’ vast and diverse environment. The general metric method in this
process is to analyze system performance improvements by loading various system
resources and system loads. The system has the best scalability when the workload
and computing resources are increased or lowered by a factor of K at the same time
while the average response time of the system or application remains unchanged.
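The factor-of-K criterion above can be turned into a simple metric. The sketch below is a hedged illustration: the response-time figures and the efficiency formula (ratio of baseline to scaled response time) are common conventions, not a standard mandated by any particular system.

def scaling_efficiency(base_response_time, scaled_response_time):
    # Workload and resources are both scaled by the same factor K.
    # Perfect scalability keeps the response time unchanged (ratio 1.0).
    return base_response_time / scaled_response_time

print(scaling_efficiency(120.0, 120.0))  # 1.0 -> ideal scalability
print(scaling_efficiency(120.0, 150.0))  # 0.8 -> scalability has degraded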

Measures of Scalability:

 Size Scalability
 Geographical Scalability
 Administrative Scalability
1. Size Scalability: There will be an increase in the size of the system whenever users
and resources grow but it should not be carried out at the cost of performance and
efficiency of the system. The system must respond to the user in the same manner as it
was responding before scaling the system.
2. Geographical Scalability: Geographical scalability refers to the addition of new
nodes in a physical space that should not affect the communication time between the
nodes.
3. Administrative Scalability: In administrative scalability, adding new nodes to the system should not require significant management effort. To exemplify, a system spanning multiple administrative organizations should remain easy to manage and use even while several administrators control and share different parts of it.

Types of Scalability:

1. Horizontal Scalability: Horizontal Scalability implies the addition of new servers


to the existing set of resources in the system. The major benefit is that the system can be scaled dynamically. For example, Cassandra and MongoDB scale horizontally by adding more machines. Furthermore, a load balancer is employed to distribute the load over the available servers, which increases overall performance.
2. Vertical Scalability: Vertical Scalability refers to the addition of more power to the
existing pool of resources like servers. For example, MySQL. Here, scaling is carried
out by switching from smaller to bigger machines.
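As an illustration of horizontal scaling, the hypothetical Python sketch below routes keys to a pool of servers with a hash function, and adding a server extends the pool. The node names are made up, and real systems such as Cassandra or MongoDB use more sophisticated partitioning (for example, consistent hashing) to limit data movement when the pool changes.

import hashlib

class ShardRouter:
    # Maps each key to one of the available servers; adding a server
    # scales the pool out horizontally.
    def __init__(self, servers):
        self.servers = list(servers)

    def add_server(self, server):
        self.servers.append(server)

    def route(self, key):
        digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.servers[digest % len(self.servers)]

router = ShardRouter(["node-1", "node-2"])
print(router.route("user:42"))
router.add_server("node-3")     # scale out by adding a machine
print(router.route("user:42"))  # the key may now map to a different node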

Q3. Why is cluster computing important? Write the types of cluster computing.

Ans-Cluster computing is a collection of tightly or loosely connected computers that work together so that they act as a single entity. The connected computers execute operations all together, thus creating the idea of a single system. The clusters are generally connected through fast local area networks (LANs).

Why is Cluster Computing important?


1. Cluster computing provides a relatively inexpensive, unconventional alternative to large server or mainframe computer solutions.
2. It meets the demand for critical content and processing services more quickly.
3. Many organizations and IT companies are implementing cluster
computing to augment their scalability, availability, processing speed and
resource management at economic prices.
4. It ensures that computational power is always available.
5. It provides a single general strategy for the implementation and
application of parallel high-performance systems independent of certain
hardware vendors and their product decisions.

Types of Cluster computing :


1. High performance (HP) clusters :
HP clusters use computer clusters and supercomputers to solve advanced computational problems. They are used to perform functions that need nodes to communicate as they carry out their jobs. They are designed to take advantage of the parallel processing power of several nodes.
2. Load-balancing clusters :
Incoming requests are distributed for resources among several nodes running similar programs or holding similar content (a minimal sketch of this distribution appears after this list). This prevents any single node from receiving a disproportionate amount of work. This type of distribution is generally used in a web-hosting environment.
3. High Availability (HA) Clusters :
HA clusters are designed to maintain redundant nodes that can act as
backup systems in case any failure occurs. Consistent computing services
like business activities, complicated databases, customer services like e-
websites and network file distribution are provided. They are designed to
give uninterrupted data availability to the customers.
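The following is a minimal round-robin sketch of how a load-balancing cluster might distribute incoming requests; the node names and request strings are illustrative assumptions, and production balancers use richer policies (least connections, health checks, and so on).

import itertools

class RoundRobinBalancer:
    # Cycles incoming requests across the cluster nodes so that no
    # single node receives a disproportionate amount of work.
    def __init__(self, nodes):
        self._cycle = itertools.cycle(nodes)

    def dispatch(self, request):
        node = next(self._cycle)
        print(f"{request} -> {node}")
        return node

balancer = RoundRobinBalancer(["node-1", "node-2", "node-3"])
for i in range(5):
    balancer.dispatch(f"GET /page/{i}")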
Classification of Cluster :

1. Open Cluster :
Every node needs an IP address, and those addresses are accessible through the internet or web. This type of cluster raises security concerns.
2. Closed Cluster :
The nodes are hidden behind the gateway node, which provides increased protection. They need fewer IP addresses and are good for computational tasks.
Cluster Computing Architecture :
 It is designed with an array of interconnected individual computers and
the computer systems operating collectively as a single standalone
system.
 It is a group of workstations or computers working together as a single,
integrated computing resource connected via high speed interconnects.
 A node – Either a single or a multiprocessor network having memory,
input and output functions and an operating system.
 Two or more nodes are connected on a single line or every node might be
connected individually through a LAN connection.


Components of a Cluster Computer :


1. Cluster Nodes
2. Cluster Operating System
3. The switch or node interconnect
4. Network switching hardware

Q6. What is a Hypervisor? Enlist its types and elaborate on Type-1 in detail.


Ans-A hypervisor is a form of virtualization software used in Cloud hosting to divide
and allocate the resources on various pieces of hardware. The program which provides
partitioning, isolation, or abstraction is called a virtualization hypervisor. The
hypervisor is a hardware virtualization technique that allows multiple guest operating
systems (OS) to run on a single host system at the same time. A hypervisor is
sometimes also called a virtual machine manager(VMM).

Types of Hypervisor –
TYPE-1 Hypervisor:
The hypervisor runs directly on the underlying host system. It is also known as a
“Native Hypervisor” or “Bare metal hypervisor”. It does not require any base server
operating system. It has direct access to hardware resources. Examples of Type 1
hypervisors include VMware ESXi, Citrix XenServer, and Microsoft Hyper-V
hypervisor.

Pros & Cons of Type-1 Hypervisor:


Pros: Such hypervisors are very efficient because they have direct access to the physical hardware resources (CPU, memory, network, and physical storage). This also strengthens security, because there is no third-party layer in between that an attacker could compromise.
Cons: One problem with Type-1 hypervisors is that they usually need a dedicated
separate machine to perform their operation and to instruct different VMs and control
the host hardware resources.

TYPE-2 Hypervisor:
A host operating system runs on the underlying host system. It is also known as a "Hosted Hypervisor". Such hypervisors don't run directly on the underlying hardware; rather, they run as applications on a host system (a physical machine). Basically, the software is installed on an operating system, and the hypervisor asks the operating system to make hardware calls. An example of a Type 2 hypervisor
includes VMware Player or Parallels Desktop. Hosted hypervisors are often found on
endpoints like PCs. The type-2 hypervisor is very useful for engineers, and security
analysts (for checking malware, or malicious source code and newly developed
applications).
Pros & Cons of Type-2 Hypervisor:
Pros: Such kind of hypervisors allows quick and easy access to a guest Operating
System alongside the host machine running. These hypervisors usually come with
additional useful features for guest machines. Such tools enhance the coordination
between the host machine and the guest machine.
Cons: These hypervisors do not have direct access to the physical hardware resources, so their efficiency lags behind that of Type-1 hypervisors. There are also potential security risks: if an attacker gains access to the host operating system, they can exploit its weaknesses and also access the guest operating system.

Choosing the right hypervisor :


Type 1 hypervisors offer much better performance than Type 2 ones because
there’s no middle layer, making them the logical choice for mission-critical
applications and workloads. But that’s not to say that hosted hypervisors don’t have
their place – they’re much simpler to set up, so they’re a good bet if, say, you need to
deploy a test environment quickly. One of the best ways to determine which
hypervisor meets your needs is to compare their performance metrics. These include
CPU overhead, the amount of maximum host and guest memory, and support for
virtual processors. The following factors should be examined before choosing a
suitable hypervisor:
1. Understand your needs: The company and its applications are the reason for the
data center (and your job). Besides your company’s needs, you (and your co-workers
in IT) also have your own needs. Needs for a virtualization hypervisor are:
a. Flexibility
b. Scalability
c. Usability
d. Availability
e. Reliability
f. Efficiency
g. Reliable support
2. The cost of a hypervisor: For many buyers, the toughest part of choosing a
hypervisor is striking the right balance between cost and functionality. While a
number of entry-level solutions are free, or practically free, the prices at the opposite
end of the market can be staggering. Licensing frameworks also vary, so it’s important
to be aware of exactly what you’re getting for your money.
3. Virtual machine performance: Virtual systems should meet or exceed the
performance of their physical counterparts, at least in relation to the applications
within each server. Everything beyond meeting this benchmark is profit.
4. Ecosystem: It’s tempting to overlook the role of a hypervisor’s ecosystem – that is,
the availability of documentation, support, training, third-party developers and
consultancies, and so on – in determining whether or not a solution is cost-effective in
the long term.
5. Test for yourself: You can gain basic experience from your existing desktop or
laptop. You can run both VMware vSphere and Microsoft Hyper-V in either VMware
Workstation or VMware Fusion to create a nice virtual learning and testing
environment.
HYPERVISOR REFERENCE MODEL :
There are three main modules that coordinate in order to emulate the underlying hardware:

1. DISPATCHER:
The dispatcher behaves like the entry point of the monitor and reroutes the
instructions of the virtual machine instance to one of the other two modules.

2. ALLOCATOR:
The allocator is responsible for deciding the system resources to be provided to the
virtual machine instance. It means whenever a virtual machine tries to execute an
instruction that results in changing the machine resources associated with the
virtual machine, the allocator is invoked by the dispatcher.

3. INTERPRETER:
The interpreter module consists of interpreter routines. These are executed,
whenever a virtual machine executes a privileged instruction.
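The coordination among the three modules can be sketched in Python as below. This is only a toy model of the reference model's control flow; the instruction categories and method names are illustrative assumptions, not a real hypervisor API.

class Allocator:
    # Decides which system resources the VM instance receives.
    def allocate(self, vm, instruction):
        print(f"allocating resources for {vm}: {instruction}")

class Interpreter:
    # Routines executed whenever a VM issues a privileged instruction.
    def interpret(self, vm, instruction):
        print(f"emulating privileged instruction for {vm}: {instruction}")

class Dispatcher:
    # Entry point of the monitor: reroutes each VM instruction to
    # the allocator or the interpreter.
    def __init__(self):
        self.allocator = Allocator()
        self.interpreter = Interpreter()

    def dispatch(self, vm, instruction, kind):
        if kind == "resource":        # instruction changes machine resources
            self.allocator.allocate(vm, instruction)
        elif kind == "privileged":    # privileged instruction
            self.interpreter.interpret(vm, instruction)

monitor = Dispatcher()
monitor.dispatch("vm-1", "attach disk", "resource")
monitor.dispatch("vm-1", "write to page table", "privileged")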

Advantages of Cluster Computing :

1. High Performance :
The systems offer better, enhanced performance than mainframe computer networks.
2. Easy to manage :
Cluster Computing is manageable and easy to implement.
3. Scalable :
Resources can be added to the clusters accordingly.
4. Expandability :
Computer clusters can be expanded easily by adding additional computers to
the network. Cluster computing is capable of combining several additional
resources or the networks to the existing computer system.
5. Availability :
If one node fails, the other nodes remain active and function as a proxy for the failed node. This ensures enhanced availability.
6. Flexibility :
It can be upgraded to the superior specification or additional nodes can be
added.
Disadvantages of Cluster Computing :

1. High cost :
It is not very cost-effective, due to its high hardware and design costs.
2. Problem in finding fault :
It is difficult to find which component has a fault.
3. More space is needed :
Infrastructure needs may grow, since more servers are required to manage and monitor the system.
Applications of Cluster Computing :
 Various complex computational problems can be solved.
 It can be used in the applications of aerodynamics, astrophysics and in
data mining.
 Weather forecasting.
 Image Rendering.
 Various e-commerce applications.
 Earthquake Simulation.
 Petroleum reservoir simulation.
Q4. Enlist the benefits of cloud computing.

Ans-Direct Benefits (Cloud Capability Features) :


 High Availability –
Infrastructure will be highly available in the Cloud with fewer outages
experienced and less downtime. Applications will exist across a number
of disparate Cloud Data Centres and can auto recover or terminate and
restart if performance drops enabling continued quality of services.
 Flexibility –
The University will have access to the full range of programming models,
operating systems, databases and architecture with which they are
familiar as well as new services available through the market place. The
University will not be locked into infrastructure purchases and will have
more freedom of choice.
 Self-Service and Self-Provisioning –
A Cloud environment will allow for greater adoption of self-service and
provisioning, particularly in the research space. Graphical user interfaces
and Cloud tools can be set-up to allow users to run their own workloads
and have visibility of the costs and metrics associated.
 Automation and Ease of Management –
Platform and application automation will enable greater ease of
management across patching, security, provisioning, testing, deployment
and logging. These operational areas become integrated into the service
that the University consumes allowing quicker deployment of services.
 Scalability –
Movement of workloads to Cloud will allow the University to instantly
scale up or down in line with student and researcher demands. This will
allow the University to maintain quality services as it grows and accounts
for volatile or seasonal application usage (e.g. peak enrolment periods).
 Greater Security Controls –
Cloud environments keep track of all changes made through logging and
can make use of the latest firewalls and security features to reduce the
likelihood and impact of cyber attacks and internal mistakes.
Indirect Benefits (What Cloud Consumption Enables) :
 Greater Agility and Time to Market –
Ease of development and provisioning in the Cloud will enable the
University to quickly spin up new ideas and test them. This way of
operation lends itself to greater agility through learning fast and taking
ideas to market or further iterating upon them.
 Reduced Environmental Impact –
The University will not need to disrupt its natural environment to build a
new Data Centre and will contribute to lower greenhouse gas emissions
through use of more efficient facilities.
 Cost Avoidance and Cost Savings –
Through not building a Data Centre and instead using public Cloud providers, the University will achieve upfront cost avoidance. Whilst this will be partially offset by the need to increase investment in Cloud migration, it will drive a long-term reduction in costs through lower IT overheads.
 Focus on Value-Adding Activities –
The University can free up its resources to focus on more growth and
transformation activities. This will give staff more time to uplift the
environment, develop new service offerings and improve the student and
researcher experience rather than keeping the lights on.
 Improved Brand Perception –
The movement to a full Cloud environment will enhance the University’s
brand and reputation as a forward-thinking University. In addition, the
Cloud Platform can become a selling point to attract technology students
and researchers to collaborate.
 Creation of New Revenue Streams –
The University can capture additional revenue streams by providing
researchers with Cloud services through a portal that is managed by IT.
As a result, researchers will no longer need to procure their own hardware
to run research workloads and the University will be able to capture this
expenditure.
Q5. What is parallel computing? Explain Flynn's taxonomy in detail.

Ans-Parallel computing is a form of computing in which jobs are broken into discrete parts that can be executed concurrently. Each part is further broken
down to a series of instructions. Instructions from each part execute
simultaneously on different CPUs. Parallel systems deal with the
simultaneous use of multiple computer resources that can include a single
computer with multiple processors, a number of computers connected by a
network to form a parallel processing cluster or a combination of both.
Parallel systems are more difficult to program than computers with a single
processor because the architecture of parallel computers varies accordingly
and the processes of multiple CPUs must be coordinated and synchronized.
CPUs are the crux of parallel processing. Based on the number of instruction and data streams that can be processed simultaneously, computing systems are classified into four major categories:

Flynn’s classification –
1. Single-instruction, single-data (SISD) systems –
An SISD computing system is a uniprocessor machine which is capable of
executing a single instruction, operating on a single data stream. In SISD,
machine instructions are processed in a sequential manner and
computers adopting this model are popularly called sequential computers.
Most conventional computers have SISD architecture. All the instructions
and data to be processed have to be stored in primary memory.
The speed of the processing element in the SISD model is
limited(dependent) by the rate at which the computer can transfer
information internally. Dominant representative SISD systems are IBM
PC, workstations.
2. Single-instruction, multiple-data (SIMD) systems –
An SIMD system is a multiprocessor machine capable of executing the
same instruction on all the CPUs but operating on different data streams.
Machines based on the SIMD model are well suited to scientific computing since it involves many vector and matrix operations. The organized data elements of vectors can be divided into multiple sets (N sets for N PE systems) so that the information can be passed to all the processing elements (PEs); each PE then processes one data set.

Dominant representative SIMD systems is Cray’s vector processing


machine.
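The SIMD idea, one instruction applied to many data elements at once, can be illustrated by analogy with a vectorized NumPy operation in Python (assuming NumPy is installed; the arrays here are illustrative).

import numpy as np

x = np.arange(8, dtype=np.float64)
y = np.ones(8)
# One "instruction" (elementwise add) operates on all data elements at
# once -- each PE in a SIMD machine would handle one slice of the data.
z = x + y
print(z)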
3. Multiple-instruction, single-data (MISD) systems –
An MISD computing system is a multiprocessor machine capable of executing different instructions on different PEs, with all of them operating on the same data set.
Example: Z = sin(x) + cos(x) + tan(x)
The system performs different operations on the same data set. Machines built using the MISD model are not useful in most applications; a few machines have been built, but none of them are available commercially.
4. Multiple-instruction, multiple-data (MIMD) systems –
An MIMD system is a multiprocessor machine capable of executing multiple instructions on multiple data sets. Each PE in the MIMD model has separate instruction and data streams; therefore, machines built using this model can support any kind of application. Unlike SIMD and MISD machines, PEs in MIMD machines work asynchronously.

MIMD machines are broadly categorized into shared-memory MIMD and distributed-memory MIMD based on the way PEs are coupled to the main memory.
In the shared-memory MIMD model (tightly coupled multiprocessor systems), all the PEs are connected to a single global memory and they all have access to it. Communication between PEs in this model takes place through the shared memory; modification of the data stored in the global memory by one PE is visible to all other PEs. Dominant
representative shared memory MIMD systems are Silicon Graphics
machines and Sun/IBM’s SMP (Symmetric Multi-Processing).
In Distributed memory MIMD machines (loosely coupled multiprocessor
systems) all PEs have a local memory. The communication between PEs
in this model takes place through the interconnection network (the inter-process communication channel, or IPC). The network connecting the PEs can be configured as a tree, a mesh, or in accordance with the requirement.
The shared-memory MIMD architecture is easier to program but is less
tolerant to failures and harder to extend with respect to the distributed
memory MIMD model. Failures in a shared-memory MIMD affect the
entire system, whereas this is not the case of the distributed model, in
which each of the PEs can be easily isolated. Moreover, shared memory
MIMD architectures are less likely to scale because the addition of more
PEs leads to memory contention. This is a situation that does not happen
in the case of distributed memory, in which each PE has its own memory.
As a result of practical outcomes and users' requirements, the distributed-memory MIMD architecture is superior to the other existing models.
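The distributed-memory MIMD style can be sketched in Python with separate processes: each process has its own local memory and communicates only by message passing over an IPC channel. The squaring task is an illustrative assumption; the point is the asynchronous, message-based coordination.

from multiprocessing import Process, Queue

def worker(name, inbox, outbox):
    # Each PE keeps its own local memory (local variables) and talks to
    # the others only through the interconnection network (the queues).
    task = inbox.get()
    outbox.put((name, task * task))

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    pes = [Process(target=worker, args=(f"pe-{i}", inbox, outbox))
           for i in range(3)]
    for p in pes:
        p.start()
    for value in (2, 3, 4):       # a different data item for each PE
        inbox.put(value)
    for _ in pes:
        print(outbox.get())       # results arrive asynchronously
    for p in pes:
        p.join()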

Q7. Give an example of Platform-as-a-Service and explain it in detail.

Ans-Platform as a Service.

PaaS is a category of cloud computing that provides a platform and


environment to allow developers to build applications and services over the
internet. PaaS services are hosted in the cloud and accessed by users
simply via their web browser.
A PaaS provider hosts the hardware and software on its own infrastructure.
As a result, PaaS frees users from having to install in-house hardware and
software to develop or run a new application. Thus, the development and
deployment of the application take place independent of the hardware.
The consumer does not manage or control the underlying cloud
infrastructure including network, servers, operating systems, or storage, but
has control over the deployed applications and possibly configuration
settings for the application-hosting environment. To make it simple, take the example of an annual day function: you have two options, either to build a venue or to rent one, but the function remains the same.
Advantages of PaaS:
1. Simple and convenient for users: It provides much of the infrastructure
and other IT services, which users can access anywhere via a web
browser.
2. Cost-Effective: It charges for the services provided on a per-use basis
thus eliminating the expenses one may have for on-premises hardware
and software.
3. Efficiently managing the lifecycle: It is designed to support the
complete web application lifecycle: building, testing, deploying, managing,
and updating.
4. Efficiency: It allows for higher-level programming with reduced
complexity thus, the overall development of the application can be more
effective.
The various companies providing Platform as a Service are AWS Elastic Beanstalk, Salesforce, Windows Azure, Google App Engine, CloudBees, and IBM SmartCloud.

Disadvantages of PaaS:
1. Limited control over infrastructure: PaaS providers typically manage
the underlying infrastructure and take care of maintenance and updates,
but this can also mean that users have less control over the environment
and may not be able to make certain customizations.
2. Dependence on the provider: Users are dependent on the PaaS
provider for the availability, scalability, and reliability of the platform, which
can be a risk if the provider experiences outages or other issues.
3. Limited flexibility: PaaS solutions may not be able to accommodate
certain types of workloads or applications, which can limit the value of the
solution for certain organizations.

Assignment No. 2

Q1. Describe Cloud Computing with respect to its cloud service models and explain the provider-consumer interaction dynamics for Infrastructure-as-a-Service.

Ans-Cloud-Based Services


Cloud Computing can be defined as the practice of using a network of remote servers
hosted on the Internet to store, manage, and process data, rather than a local server or
a personal computer. Companies offering such kinds of cloud computing services are
called cloud providers and typically charge for cloud computing services based on
usage. Grids and clusters are the foundations for cloud computing.
Types of Cloud Computing
Most cloud computing services fall into five broad categories:
1. Software as a service (SaaS)
2. Platform as a service (PaaS)
3. Infrastructure as a service (IaaS)
4. Anything/Everything as a service (XaaS)
5. Function as a Service (FaaS)
These are sometimes called the cloud computing stack because they are built on top of one another. Knowing what they are and how they differ makes it easier to accomplish your goals. These abstraction layers can also be viewed as a layered architecture where services of a higher layer are composed of services of the underlying layers; i.e., SaaS can be built on PaaS, which in turn runs on IaaS.

Software as a Service(SaaS)
Software-as-a-Service (SaaS) is a way of delivering services and applications over the
Internet. Instead of installing and maintaining software, we simply access it via the
Internet, freeing ourselves from the complex software and hardware management. It
removes the need to install and run applications on our own computers or in the data
centers eliminating the expenses of hardware as well as software maintenance.
SaaS provides a complete software solution that you purchase on a pay-as-you-go basis from a cloud service provider. Most SaaS applications can be run directly from a web browser without any downloads or installations required. SaaS applications are sometimes called web-based software, on-demand software, or hosted software.
Advantages of SaaS
1. Cost-Effective: Pay only for what you use.
2. Reduced time: Users can run most SaaS apps directly from their web browser
without needing to download and install any software. This reduces the time spent
in installation and configuration and can reduce the issues that can get in the way
of the software deployment.
3. Accessibility: We can Access app data from anywhere.
4. Automatic updates: Rather than purchasing new software, customers rely on a
SaaS provider to automatically perform the updates.
5. Scalability: It allows the users to access the services and features on-demand.
The various companies providing Software as a Service are Cloud9 Analytics, Salesforce.com, CloudSwitch, Microsoft Office 365, BigCommerce, Eloqua, Dropbox, and CloudTran.
Disadvantages of SaaS :
1. Limited customization: SaaS solutions are typically not as customizable as on-premises software, meaning that users may have to work within the constraints of the SaaS provider's platform and may not be able to tailor the software to their specific needs.
2. Dependence on internet connectivity: SaaS solutions are typically cloud-based,
which means that they require a stable internet connection to function properly.
This can be problematic for users in areas with poor connectivity or for those who
need to access the software in offline environments.
3. Security concerns: SaaS providers are responsible for maintaining the security of
the data stored on their servers, but there is still a risk of data breaches or other
security incidents.
4. Limited control over data: SaaS providers may have access to a user’s data,
which can be a concern for organizations that need to maintain strict control over
their data for regulatory or other reasons.

Platform as a Service

PaaS is a category of cloud computing that provides a platform and environment to


allow developers to build applications and services over the internet. PaaS services are
hosted in the cloud and accessed by users simply via their web browser.
A PaaS provider hosts the hardware and software on its own infrastructure. As a
result, PaaS frees users from having to install in-house hardware and software to
develop or run a new application. Thus, the development and deployment of the
application take place independent of the hardware.
The consumer does not manage or control the underlying cloud infrastructure
including network, servers, operating systems, or storage, but has control over the
deployed applications and possibly configuration settings for the application-hosting
environment. To make it simple, take the example of an annual day function: you have two options, either to build a venue or to rent one, but the function remains the same.
Advantages of PaaS:
1. Simple and convenient for users: It provides much of the infrastructure and other
IT services, which users can access anywhere via a web browser.
2. Cost-Effective: It charges for the services provided on a per-use basis thus
eliminating the expenses one may have for on-premises hardware and software.
3. Efficiently managing the lifecycle: It is designed to support the complete web
application lifecycle: building, testing, deploying, managing, and updating.
4. Efficiency: It allows for higher-level programming with reduced complexity thus,
the overall development of the application can be more effective.
The various companies providing Platform as a Service are AWS Elastic Beanstalk, Salesforce, Windows Azure, Google App Engine, CloudBees, and IBM SmartCloud.

Infrastructure as a Service (IaaS) and its provider-consumer dynamics
In IaaS, the provider owns and operates the physical infrastructure (data centers, servers, storage, and networking) and exposes it to consumers as virtualized resources on demand. The consumer provisions these resources through a self-service portal or API, pays per use, and manages everything from the operating system upward (runtime, middleware, applications, and data), while the provider remains responsible for the hardware, the virtualization layer, and physical security. Examples include Amazon EC2, Google Compute Engine, and Microsoft Azure Virtual Machines.

Q2. Why is it necessary to secure a Hypervisor? Enlist the different types of threats to hypervisors and Virtual Machines, and explain any two threats in detail.

Ans-Already answered in Assignment 1.

Q3. What are the different types of Cloud Security challenges? Explain Infrastructure security at the Network & Host Level.

Ans-This answer gives an overview of cloud computing and its need, with the main focus on the security issues in Cloud Computing. Let's discuss them one by one.
Cloud Computing :
Cloud Computing is a type of technology that provides remote services on the internet to manage, access, and store data rather than storing it on servers or local drives (the term serverless is sometimes loosely applied to it). Here the data can be anything: images, audio, video, documents, files, etc.
Need of Cloud Computing :
Before using Cloud Computing, most of the large as well as small IT companies use
traditional methods i.e. they store data in Server, and they need a separate Server room
for that. In that Server Room, there should be a database server, mail server, firewalls,
routers, modems, high-speed network devices, etc. IT companies had to spend a lot of money on all of this. In order to reduce these problems and costs, cloud computing came into existence, and most companies have shifted to this technology.
Security Issues in Cloud Computing :
There is no doubt that Cloud Computing provides various Advantages but there are
also some security issues in cloud computing. Below are some following Security
Issues in Cloud Computing as follows.
1. Data Loss –
Data Loss is one of the issues faced in Cloud Computing. This is also known as
Data Leakage. As we know, our sensitive data is in the hands of somebody else, and we don't have full control over our database. So, if the security of the cloud service is breached by hackers, they may gain access to our sensitive data or personal files.
2. Interference of Hackers and Insecure API’s –
As we know, when we talk about the cloud and its services, we are talking about the Internet. The easiest way to communicate with the cloud is through APIs, so it is important to protect the interfaces and APIs used by external users. In cloud computing, a few services are also available in the public domain; these are a vulnerable part of Cloud Computing because third parties may access them. So, it is possible that with the help of these services, hackers can easily hack or harm our data.

3. User Account Hijacking –


Account Hijacking is the most serious security issue in Cloud Computing. If
somehow the Account of User or an Organization is hijacked by a hacker then the
hacker has full authority to perform Unauthorized Activities.

4. Changing Service Provider –


Vendor lock-In is also an important Security issue in Cloud Computing. Many
organizations will face different problems while shifting from one vendor to
another. For example, An Organization wants to shift from AWS Cloud to Google
Cloud Services then they face various problems like shifting of all data, also both
cloud services have different techniques and functions, so they also face problems
regarding that. Also, it may be possible that the charges of AWS are different from
Google Cloud, etc.

5. Lack of Skill –
Working with the cloud, shifting to another service provider, needing an extra feature, or learning how to use a feature are the main problems in an IT company that doesn't have skilled employees. So, working with Cloud Computing requires skilled people.

6. Denial of Service (DoS) attack –


This type of attack occurs when the system receives too much traffic. Mostly DoS
attacks occur in large organizations such as the banking sector, government sector,
etc. When a DoS attack occurs, data is lost. So, in order to recover data, it requires
a great amount of money as well as time to handle it.

Q4. What is the importance of cloud disaster recovery? Explain the terms RPO & RTO in detail.

Ans: One of the best practices in well run IT organizations is for CIOs and
IT managers to evaluate the risk of data loss, and establish business
continuity plans that outline backup and recovery along with their
respective Recovery Point Objectives (RPOs) and Recovery Time
Objectives (RTOs). But first, information technology teams and business
stakeholders must start with a common understanding of what RPO and
RTO mean in terms of backup and recovery.
 Recovery Point Objective (RPO): The maximum acceptable age of the
data that can be restored (or recovery point) and the version of data
lost. For simplicity, RPO can be thought of as the time between the
time of data loss and the last useful backup of a known good state.
 Recovery Time Objective (RTO): The maximum acceptable length of time
required for an organization to recover lost data and get back up and
running. This value may be defined as part of a larger Disaster
Recovery Plan across an organization that also includes applications
like Office 365. For simplicity, RTO can be thought of as the time it
takes, from start to finish, to recover data to an acceptable current
good state.
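As a hedged illustration of these two objectives, the Python sketch below checks one recovery scenario against an RPO and an RTO; the timestamps and the four-hour/two-hour thresholds are made-up values, not recommendations.

from datetime import datetime, timedelta

def meets_objectives(last_backup, failure_time, recovery_done,
                     rpo=timedelta(hours=4), rto=timedelta(hours=2)):
    # RPO: how old is the data we can restore (failure vs. last good backup)?
    data_age = failure_time - last_backup
    # RTO: how long did it take to get back up and running?
    downtime = recovery_done - failure_time
    return data_age <= rpo, downtime <= rto

failure = datetime(2023, 5, 1, 12, 0)
rpo_ok, rto_ok = meets_objectives(
    last_backup=datetime(2023, 5, 1, 9, 0),      # 3 hours before failure
    failure_time=failure,
    recovery_done=failure + timedelta(hours=1))  # recovered in 1 hour
print(rpo_ok, rto_ok)  # True True -> both objectives met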

Defining RPO and RTO as part of disaster recovery scope, in the age of SaaS and Cloud
As you review your disaster recovery plan and your RPO and RTO, it’s
critical to classify and determine the applications and data that are:

1. Existentially-critical: The organization will likely cease to exist if these first-tier systems or data are unavailable or compromised.
2. Mission-critical: There will be significant, but not existential, harm to
employee productivity, organizational reputation, and potentially
revenue if these second-tier systems or data are unavailable or
compromised.
3. Optimal-for-performance: There will be a reduction in organizational
efficiency, but otherwise limited impact on the organization’s mission
if these third-tier systems or data are unavailable or compromised.

There’s no “one size fits all” answer when calculating RPO and RTO. IT
and stakeholders must assess the MTPD (Maximum Tolerable Period of
Disruption) on an application-by-application, dataset-by-dataset basis, then
layer protection based on what your organization has defined as
existentially-critical, mission-critical, and optimal-for-performance.

With the advent of robust, widely adopted collaboration platforms like Microsoft Office 365, some of what had been IT costs and responsibilities
are now offloaded to the vendor, freeing IT to better focus on protecting
existentially-critical systems and applications. Office 365 “assumes” some
of the responsibility that used to be the IT organization’s alone — for
example, Office 365 SLAs contain provisions related to data center
outages, infrastructure management, and data storage. Microsoft even
provides some recovery tools.

But while SaaS vendors like Microsoft offer some protections for mission-critical applications and data, IT is still ultimately responsible for ensuring the organization is protected and can meet RPO and RTO requirements.
SaaS vendors cannot protect you from you — you need to establish RPO
and RTO to recover from sync errors, administrator errors, and malicious
actors who may corrupt or permanently delete your organization’s data.

Ransomware and Office 365 RTO


What makes this discussion of Office 365 RTO urgent is the staggering
increase in the number of types of ransomware purpose-built to shut your
organization out of not just email, but shared documents and files such as
those in OneDrive for Business and SharePoint Online. The cost of
ransomware is also non-trivial: damage was forecast to hit $11.5B by 2019.

Action steps to take today


If your organization is using Office 365, your teams should set time for the
following actions:

1. Review current RPO and RTO to ensure they reflect what serves
your organization best when recovering SaaS data.
2. Assess the potential impact of ransomware and your organization’s
current ransomware response plan.
3. Test your current recovery approaches for adequate RTO of Office
365 mail, data and files.

Q5. What is a Kernel-based Virtual Machine? Identify and justify its type.

Ans-Kernel-based Virtual Machine (KVM) is a virtualization module built into the Linux kernel that turns the host kernel itself into a hypervisor. Because it runs inside the kernel with direct access to the hardware, KVM is classified as a Type-1 (bare-metal) hypervisor, and the machines it hosts are system virtual machines, as described below.

Types of Virtual Machines

A virtual machine is like a fake computer system operating on your hardware. It partially uses the hardware of your system (CPU, RAM, disk space, etc.), but its space is completely separated from your main system. Two virtual machines don't interrupt each other's working and functioning, nor can they access each other's space, which gives the illusion that we are using totally different hardware systems.
Question : Is there any limit to the number of virtual machines one can install?
Answer – In general there is no hard limit, because it depends on the hardware of your system. Since the VMs use your system's hardware, once its capacity is exhausted you will not be able to install further virtual machines.
Question : Can one access the files of one VM from another?
Answer – In general no, but as an advanced feature, file sharing can be enabled between different virtual machines.
Types of Virtual Machines : You can classify virtual machines into two types:
1. System Virtual Machine: These virtual machines give us a complete system platform and support the execution of a complete virtual operating system. Just like VirtualBox, a system virtual machine provides an environment for an OS to be installed completely. The hardware of the real machine is distributed between the simulated operating systems by a virtual machine monitor, and programs and processes then run separately within each simulated machine's share of the hardware.

2. Process Virtual Machine : A process virtual machine, unlike a system virtual machine, does not provide the facility to install a virtual operating system completely. Rather, it creates a virtual environment of that OS while some app or program is running, and this environment is destroyed as soon as we exit the app. Some apps run on the main OS while virtual environments are created to run other apps; because those programs require a different OS, the process virtual machine provides it for the time those programs are running. Example – the Wine software on Linux helps run Windows applications.
Virtual Machine Language : It is a type of language that can be understood by different operating systems; it is platform-independent. Just as any programming language (C, Python, or Java) needs a specific compiler that converts the code into system-understandable code (also known as byte code), a virtual machine language works the same way: if we want code that can be executed on different operating systems (Windows, Linux, etc.), a virtual machine language is helpful.

Q6. How is security managed in the cloud? Explain in detail.

Ans-Cloud computing security processes should address the security controls the cloud
provider will incorporate to maintain the customer's data security, privacy and
compliance with necessary regulations. The processes will also likely include a business
continuity and data backup plan in the case of a cloud security breach.
The following are the 10 security-as-a-service categories :
1. Identity and Access Management should provide controls for assured identities and
access management. Identity and access management includes people, processes and
systems that are used to manage access to enterprise resources by assuring the identity of
an entity is verified and is granted the correct level of access based on this assured
identity. Audit logs of activity such as successful and failed authentication and access
attempts should be kept by the application/solution.
2. Data Loss Prevention is the monitoring, protecting and verifying the security of data at
rest, in motion and in use in the cloud and on-premises. Data loss prevention services
offer protection of data usually by running as some sort of client on desktops/servers and
running rules around what can be done. Within the cloud, data loss prevention services
could be offered as something that is provided as part of the build, such that all servers
built for that client get the data loss prevention software installed with an agreed set of
rules deployed.
3. Web Security is real-time protection offered either on-premise through
software/appliance installation or via the cloud by proxying or redirecting web traffic to
the cloud provider. This provides an added layer of protection on top of things like AV to
prevent malware from entering the enterprise via activities such as web browsing. Policy
rules around the types of web access and the times this is acceptable also can be enforced
via these web security technologies.
4. E-mail Security should provide control over inbound and outbound e-mail, thereby
protecting the organization from phishing and malicious attachments, enforcing
corporate policies such as acceptable use and spam and providing business continuity
options. The solution should allow for policy-based encryption of e-mails as well as
integrating with various e-mail server offerings. Digital signatures enabling identification
and non-repudiation are features of many cloud e-mail security solutions.
5. Security Assessments are third-party audits of cloud services or assessments of on-
premises systems based on industry standards. Traditional security assessments for
infrastructure and applications and compliance audits are well defined and supported by
multiple standards such as NIST, ISO and CIS. A relatively mature toolset exists, and a
number of tools have been implemented using the SaaS delivery model. In the SaaS
delivery model, subscribers get the typical benefits of this cloud computing variant
elasticity, negligible setup time, low administration overhead and pay-per-use with low
initial investments.
6. Intrusion Management is the process of using pattern recognition to detect and react to statistically unusual events (a minimal detection sketch appears after this list). This may include reconfiguring system components in real time to stop/prevent an intrusion. The methods of intrusion detection, prevention and response in physical environments are mature; however, the growth of virtualization and massive multi-tenancy is creating new targets for intrusion and raises many questions about the implementation of the same protection in cloud environments.
7. Security Information and Event Management systems accept log and event
information. This information is then correlated and analyzed to provide real-time
reporting and alerting on incidents/events that may require intervention. The logs are
likely to be kept in a manner that prevents tampering to enable their use as evidence in
any investigations.
8. Encryption systems typically consist of algorithms that are computationally difficult or
infeasible to break, along with the processes and procedures to manage encryption and
decryption, hashing, digital signatures, certificate generation and renewal and key
exchange.
9. Business Continuity and Disaster Recovery are the measures designed and
implemented to ensure operational resiliency in the event of any service interruptions.
Business continuity and disaster recovery provides flexible and reliable failover for
required services in the event of any service interruptions, including those caused by
natural or man-made disasters or disruptions. Cloud-centric business continuity and
disaster recovery makes use of the cloud's flexibility to minimize cost and maximize
benefits.
10. Network Security consists of security services that allocate access, distribute,
monitor and protect the underlying resource services. Architecturally, network security
provides services that address security controls at the network in aggregate or specifically
addressed at the individual network of each underlying resource.
In a cloud/virtual environment, network security is likely to be provided by virtual devices
alongside traditional physical devices.
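As a hedged illustration of the "statistically unusual events" idea behind Intrusion Management (category 6 above), the sketch below flags a requests-per-minute reading that deviates strongly from the recent mean; the traffic numbers and the three-sigma threshold are illustrative assumptions.

from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    # Flag the current reading if it lies more than `threshold`
    # standard deviations from the recent mean (a z-score test).
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

baseline = [98, 102, 97, 101, 99, 103, 100, 98]  # normal traffic (req/min)
print(is_anomalous(baseline, 104))  # False -> within normal variation
print(is_anomalous(baseline, 900))  # True  -> possible intrusion or DoS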

Types of Cloud Computing Security Controls :


There are 4 types of cloud computing security controls i.e.
1. Deterrent Controls : Deterrent controls are designed to block nefarious
attacks on a cloud system. These come in handy when there are insider
attackers.
2. Preventive Controls : Preventive controls make the system resilient to
attacks by eliminating vulnerabilities in it.
3. Detective Controls : It identifies and reacts to security threats and
control. Some examples of detective control software are Intrusion
detection software and network security monitoring tools.
4. Corrective Controls : In the event of a security attack these controls are
activated. They limit the damage caused by the attack.
Importance of cloud security :
For the organizations making their transition to cloud, cloud security is an
essential factor while choosing a cloud provider. The attacks are getting
stronger day by day and so the security needs to keep up with it. For this
purpose it is essential to pick a cloud provider who offers the best security
and is customized with the organization’s infrastructure. Cloud security has a
lot of benefits –
 Centralized security : Centralized security results in centralizing
protection. As managing all the devices and endpoints is not an easy task
cloud security helps in doing so. This results in enhancing traffic analysis
and web filtering which means less policy and software updates.
 Reduced costs : Investing in cloud computing and cloud security results
in less expenditure on hardware and also less manpower in administration.
 Reduced Administration : It makes the organization easier to administer and removes the need for manual security configuration and constant security updates.
 Reliability : These are very reliable and the cloud can be accessed from
anywhere with any device with proper authorization.
When we are thinking about cloud security it includes various types of
security like access control for authorized access, network segmentation for
maintaining isolated data, encryption for encoded data transfer, vulnerability
check for patching vulnerable areas, security monitoring for keeping eye on
various security attacks and disaster recovery for backup and recovery
during data loss.
There are different types of security techniques which are implemented to
make the cloud computing system more secure such as SSL (Secure Socket
Layer) Encryption, Multi Tenancy based Access Control, Intrusion Detection
System, firewalls, penetration testing, tokenization, VPN (Virtual Private
Networks), and avoiding public internet connections and many more
techniques.
But things are not as simple as they seem; even with a number of security techniques implemented, security issues always remain for the cloud system. As the cloud system is managed and accessed over the internet, many challenges arise in maintaining a secure cloud. Some cloud security challenges are:
 Control over cloud data
 Misconfiguration
 Ever changing workload
 Access Management
 Disaster recovery

Q7. What is the impact of AWS in Cloud Computing? Elaborate on any two of its services in detail.

Ans-AWS delivers the core benefits of cloud computing:
 Trade fixed expense for variable expense – Instead of having to invest heavily in data centers and servers before you know how you're going to use them, you can pay only when you consume computing resources, and pay only for how much you consume.
 Benefit from massive economies of scale – By using cloud computing, you can achieve a
lower variable cost than you can get on your own. Because usage from hundreds of thousands
of customers is aggregated in the cloud, providers such as AWS can achieve higher
economies of scale, which translates into lower pay as-you-go prices.
 Stop guessing capacity – Eliminate guessing on your infrastructure capacity needs. When
you make a capacity decision prior to deploying an application, you often end up either
sitting on expensive idle resources or dealing with limited capacity. With cloud computing,
these problems go away. You can access as much or as little capacity as you need, and scale
up and down as required with only a few minutes’ notice.
 Increase speed and agility – In a cloud computing environment, new IT resources are only a
click away, which means that you reduce the time to make those resources available to your
developers from weeks to just minutes. This results in a dramatic increase in agility for the
organization, since the cost and time it takes to experiment and develop is significantly lower.
 Stop spending money running and maintaining data centers – Focus on projects that
differentiate your business, not the infrastructure. Cloud computing lets you focus on your
own customers, rather than on the heavy lifting of racking, stacking, and powering servers.
 Go global in minutes – Easily deploy your application in multiple regions around the world with just a few clicks. This means you can provide lower latency and a better experience for your customers at minimal cost.

Two of AWS's core services, in brief:
1. Amazon EC2 (Elastic Compute Cloud) provides resizable virtual servers (instances) in the cloud. Users choose an instance type (CPU, memory, storage, and networking capacity), launch instances on demand, scale the fleet up or down as load changes, and pay only for the compute time consumed.
2. Amazon S3 (Simple Storage Service) provides durable object storage. Data is stored as objects inside buckets, with fine-grained access control and pay-per-use pricing; it is commonly used for backups, static website content, and data lakes.
