
2023-2024

YEAR: III
SEMESTER: IV

Cloud Computing 325E6D

Common for B.C.A., B.Sc.-SA, B.Sc.-CSc, B.Sc.-CSc-wAI, B.Sc.-CSc-wDS
Credits: 3    Lecture Hours: 5 per week
Learning Objectives: (for teachers: what they have to do in the class/lab/field)
 To impart fundamental concepts of Cloud Computing.
 To impart a working knowledge of the various cloud service types and their uses
and pitfalls.
 To enable the students to know the common features and differences in the
service offerings of the three major Cloud Computing service providers, namely
Amazon, Microsoft and Google.
 To provide know-how of the various aspects of application design, benchmarking
and security on the Cloud.
Course Outcomes: (for students: To know what they are going to learn)
CO1: To understand the concepts and technologies involved in Cloud Computing.
CO2: To understand the concepts of various cloud services and their implementation in the
Amazon, Microsoft and Google cloud computing platforms.
CO3: To understand the aspects of application design for the Cloud.
CO4: To understand the concepts involved in benchmarking and security on the Cloud.
CO5: To understand the way in which the cloud is used in various domains.

Units Contents
Introduction to Cloud Computing: Definition of Cloud Computing – Characteristics of
Cloud Computing – Cloud Models – Cloud Service Examples – Cloud-based Services
and Applications.
I Cloud Concepts and Technologies: Virtualization – Load balancing – Scalability and
Elasticity – Deployment – Replication – Monitoring – Software Defined Networking –
Network Function Virtualization – MapReduce – Identity and Access Management –
Service Level Agreements – Billing.
Compute Services: Amazon Elastic Compute Cloud - Google Compute Engine -
Windows Azure Virtual Machines. Storage Services: Amazon Simple Storage Service
- Google Cloud Storage - Windows Azure Storage
Database Services: Amazon Relational Database Service - Amazon DynamoDB - Google
Cloud SQL - Google Cloud Datastore - Windows Azure SQL Database - Windows
Azure Table Service
Application Services: Application Runtimes and Frameworks - Queuing Services -
II Email Services - Notification Services - Media Services
Content Delivery Services: Amazon CloudFront - Windows Azure Content Delivery
Network
Analytics Services: Amazon Elastic MapReduce - Google MapReduce Service -
Google BigQuery - Windows Azure HDInsight
Deployment and Management Services: Amazon Elastic Beanstalk - Amazon
CloudFormation
Identity and Access Management Services: Amazon Identity and Access Management
- Windows Azure Active Directory
Open Source Private Cloud Software: CloudStack – Eucalyptus - OpenStack
Cloud Application Design: Introduction – Design Consideration for Cloud
Applications – Scalability – Reliability and Availability – Security – Maintenance and
Upgradation – Performance – Reference Architectures for Cloud Applications – Cloud
III Application Design Methodologies: Service Oriented Architecture (SOA), Cloud
Component Model, IaaS, PaaS and SaaS Services for Cloud Applications, Model
View Controller (MVC), RESTful Web Services – Data Storage Approaches:
Relational Approach (SQL), Non-Relational Approach (NoSQL).
Cloud Application Benchmarking and Tuning: Introduction to Benchmarking – Steps
in Benchmarking – Workload Characteristics – Application Performance Metrics –
Design Consideration for Benchmarking Methodology – Benchmarking Tools and
IV Types of Tests – Deployment Prototyping.
Cloud Security: Introduction – CSA Cloud Security Architecture – Authentication
(SSO) – Authorization – Identity and Access Management – Data Security : Securing
data at rest, securing data in motion – Key Management – Auditing.
Case Studies: Cloud Computing for Healthcare – Cloud Computing for Energy
V Systems - Cloud Computing for Transportation Systems - Cloud Computing for
Manufacturing Industry - Cloud Computing for Education.

Learning Resources:
Recommended Texts
1. Arshdeep Bahga, Vijay Madisetti, Cloud Computing – A Hands On Approach,
Universities Press (India) Pvt. Ltd., 2018.
Reference Books
1. Anthony T Velte, Toby J Velte, Robert Elsenpeter, Cloud Computing: A
Practical Approach, Tata McGraw-Hill, 2013.
2. Barrie Sosinsky, Cloud Computing Bible, Wiley India Pvt. Ltd., 2013.
3. David Crookes, Cloud Computing in Easy Steps, Tata McGraw Hill, 2012.
4. Dr. Kumar Saurabh, Cloud Computing, Wiley India, Second Edition 2012.
CHAPTER -1
INTRODUCTION TO CLOUD COMPUTING
DEFINITION :

The cloud is a large group of interconnected computers and these computers can be
personal computers or network servers and even they can be public or private. For
example, Google’s cloud is a private one (that is, Google owns it) that is publicly
accessible (by Google’s users).

This cloud of computers extends beyond a single company or enterprise. The
applications and data served by the cloud are available to a broad group of users,
cross-enterprise and cross-platform: any authorized user can access these documents
and applications from any computer over any Internet connection. And, to the user, the
technology and infrastructure behind the cloud is invisible. It isn't apparent whether
cloud services are based on HTTP, HTML, XML, JavaScript, or other specific
technologies. There are six key properties of cloud computing:

 Cloud computing is user-centric: Users connected to the cloud store their
documents, messages, images, and applications there, and can also share them
with others. In addition, any device that accesses your data in the cloud
also becomes yours.

 Cloud computing is task-centric: The focus is on what you need done and
how an application can do it for you, rather than on the application itself
and what it can do. Traditional applications such as word processing,
spreadsheets, email, and so on are becoming less important than the
documents they create.

 Cloud computing is powerful: Connecting hundreds or thousands of computers together
in a cloud creates a wealth of computing power impossible with a single desktop PC.

 Cloud computing is accessible: Because data is stored in the cloud, users can
instantly retrieve information from multiple repositories; they are not limited
to a single source of data, as they are on a desktop PC.

 Cloud computing is intelligent: With vast amounts of data stored on the computers in a
cloud, analysing and accessing that information must be done in an intelligent
manner.

 Cloud computing is programmable: To maintain integrity, many cloud computing
tasks must be automated. For example, information stored on a single computer in the
cloud must be replicated on other computers in the cloud.

CHARACTERISTICS OF CLOUD COMPUTING:

There are many characteristics of Cloud Computing; here are a few of them:
1. On-demand self-service: Cloud computing services do not require any human
administrators; users themselves are able to provision, monitor and manage computing
resources as needed.
2. Broad network access: The Computing services are generally provided over standard
networks and heterogeneous devices.
3. Rapid elasticity: Computing services should have IT resources that can scale
out and in quickly and on an as-needed basis. Whenever the user requires services they
are provided, and they are scaled in as soon as the requirement ends.
4. Resource pooling: The IT resources (e.g., networks, servers, storage, applications, and
services) are shared across multiple applications and tenants in an
uncommitted manner. Multiple clients are served from the same physical
resource.
5. Measured service: Resource utilization is tracked for each application and
tenant, providing both the user and the resource provider with an account of
what has been used. This is done for various reasons, such as monitoring, billing and
effective use of resources.
6. Multi-tenancy: Cloud computing providers can support multiple tenants (users or
organizations) on a single set of shared resources.
7. Virtualization: Cloud computing providers use virtualization technology to abstract
underlying hardware resources and present them as logical resources to users.
8. Resilient computing: Cloud computing services are typically designed with
redundancy and fault tolerance in mind, which ensures high availability and reliability.
9. Flexible pricing models: Cloud providers offer a variety of pricing models, including
pay-per-use, subscription-based, and spot pricing, allowing users to choose the option
that best suits their needs.
10. Security: Cloud providers invest heavily in security measures to protect their users’
data and ensure the privacy of sensitive information.
11. Automation: Cloud computing services are often highly automated, allowing users to
deploy and manage resources with minimal manual intervention.
12. Sustainability: Cloud providers are increasingly focused on sustainable practices,
such as energy-efficient data centers and the use of renewable energy sources, to
reduce their environmental impact.
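The measured-service and flexible-pricing characteristics above can be sketched as a simple usage-metering calculation. This is a minimal illustration only; the resource names and hourly rates below are hypothetical, not any real provider's price list:

```python
# Minimal sketch of measured service: metering per-tenant usage and
# computing a pay-per-use bill. Rates and resource names are hypothetical.

HOURLY_RATES = {"vm_small": 0.05, "storage_gb": 0.002}  # $ per unit-hour

def record_usage(meter, tenant, resource, units, hours):
    """Accumulate unit-hours of a resource for a tenant."""
    meter.setdefault(tenant, {}).setdefault(resource, 0.0)
    meter[tenant][resource] += units * hours

def bill(meter, tenant):
    """Total charge for a tenant: sum of unit-hours times the hourly rate."""
    usage = meter.get(tenant, {})
    return round(sum(HOURLY_RATES[r] * uh for r, uh in usage.items()), 2)

meter = {}
record_usage(meter, "acme", "vm_small", units=2, hours=10)     # 20 VM-hours
record_usage(meter, "acme", "storage_gb", units=50, hours=10)  # 500 GB-hours
print(bill(meter, "acme"))  # 20*0.05 + 500*0.002 = 2.0
```

Because the meter records what each tenant actually consumed, the same data serves both billing and capacity monitoring, which is exactly why the characteristic matters to provider and user alike.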

CLOUD MODEL:
Cloud Computing can be defined as the practice of using a network of remote servers hosted on
the Internet to store, manage, and process data, rather than a local server or a personal computer.
Companies offering such kinds of cloud computing services are called cloud providers and
typically charge for cloud computing services based on usage. Grids and clusters are the
foundations for cloud computing.

Types of Cloud Computing


Most cloud computing services fall into three broad categories:
1. Software as a service (SaaS)
2. Platform as a service (PaaS)
3. Infrastructure as a service (IaaS)
These are sometimes called the cloud computing stack because they are built on top of one
another. Knowing what they are and how they differ makes it easier to accomplish your
goals. These abstraction layers can also be viewed as a layered architecture where services of a
higher layer can be composed of services of the underlying layer, e.g., a SaaS offering can be
built on a PaaS platform, which in turn can run on IaaS infrastructure.

Software as a Service (SaaS)

Software-as-a-Service (SaaS) is a way of delivering services and applications over the Internet.
Instead of installing and maintaining software, we simply access it via the Internet, freeing
ourselves from complex software and hardware management. It removes the need to install
and run applications on our own computers or in data centers, eliminating the expense of
hardware as well as software maintenance.
SaaS provides a complete software solution that you purchase on a pay-as-you-go basis from a
cloud service provider. Most SaaS applications can be run directly from a web browser without
any downloads or installations required. The SaaS applications are sometimes called Web-based
software, on-demand software, or hosted software.

Advantages of SaaS

1. Cost-Effective: Pay only for what you use.


2. Reduced time: Users can run most SaaS apps directly from their web browser without
needing to download and install any software. This reduces the time spent in
installation and configuration and can reduce the issues that can get in the way of the
software deployment.
3. Accessibility: App data can be accessed from anywhere.
4. Automatic updates: Rather than purchasing new software, customers rely on a SaaS
provider to automatically perform the updates.
5. Scalability: It allows the users to access the services and features on-demand.
The various companies providing Software as a Service include Cloud9 Analytics, Salesforce.com,
CloudSwitch, Microsoft Office 365, BigCommerce, Eloqua, Dropbox, and CloudTran.

Disadvantages of SaaS:
1. Limited customization: SaaS solutions are typically not as customizable as on-
premises software, meaning that users may have to work within the constraints of the
SaaS provider’s platform and may not be able to tailor the software to their specific
needs.
2. Dependence on internet connectivity: SaaS solutions are typically cloud-based,
which means that they require a stable internet connection to function properly. This
can be problematic for users in areas with poor connectivity or for those who need to
access the software in offline environments.
3. Security concerns: SaaS providers are responsible for maintaining the security of the
data stored on their servers, but there is still a risk of data breaches or other security
incidents.
4. Limited control over data: SaaS providers may have access to a user’s data, which
can be a concern for organizations that need to maintain strict control over their data
for regulatory or other reasons.

Platform as a Service

PaaS is a category of cloud computing that provides a platform and environment to allow
developers to build applications and services over the internet. PaaS services are hosted in the
cloud and accessed by users simply via their web browser.
A PaaS provider hosts the hardware and software on its own infrastructure. As a result, PaaS frees
users from having to install in-house hardware and software to develop or run a new application.
Thus, the development and deployment of the application take place independent of the
hardware.

The consumer does not manage or control the underlying cloud infrastructure including network,
servers, operating systems, or storage, but has control over the deployed applications and possibly
configuration settings for the application-hosting environment. To make it simple, take the
example of an annual day function: you have two options, either to create a venue or to rent
one, but the function remains the same.

Advantages of PaaS:

1. Simple and convenient for users: It provides much of the infrastructure and other IT
services, which users can access anywhere via a web browser.
2. Cost-Effective: It charges for the services provided on a per-use basis thus eliminating
the expenses one may have for on-premises hardware and software.
3. Efficiently managing the lifecycle: It is designed to support the complete web
application lifecycle: building, testing, deploying, managing, and updating.
4. Efficiency: It allows for higher-level programming with reduced complexity thus, the
overall development of the application can be more effective.
The various companies providing Platform as a Service include Amazon Web Services Elastic
Beanstalk, Salesforce, Windows Azure, Google App Engine, CloudBees and IBM SmartCloud.

Disadvantages of PaaS:

1. Limited control over infrastructure: PaaS providers typically manage the underlying
infrastructure and take care of maintenance and updates, but this can also mean that
users have less control over the environment and may not be able to make certain
customizations.
2. Dependence on the provider: Users are dependent on the PaaS provider for the
availability, scalability, and reliability of the platform, which can be a risk if the
provider experiences outages or other issues.
3. Limited flexibility: PaaS solutions may not be able to accommodate certain types of
workloads or applications, which can limit the value of the solution for certain
organizations.
Infrastructure as a Service:

Infrastructure as a Service (IaaS) is a service model that delivers computer infrastructure on an
outsourced basis to support various operations. Typically, IaaS is a service where infrastructure
such as networking equipment, devices, databases, and web servers is provided to enterprises
on an outsourced basis.
It is also known as Hardware as a Service (HaaS). IaaS customers pay on a per-user basis,
typically by the hour, week, or month. Some providers also charge customers based on the
amount of virtual machine space they use.
It simply provides the underlying operating systems, security, networking, and servers for
developing such applications, and services, and deploying development tools, databases, etc.

Advantages of IaaS:
1. Cost-Effective: Eliminates capital expense and reduces ongoing cost and IaaS
customers pay on a per-user basis, typically by the hour, week, or month.
2. Website hosting: Running websites using IaaS can be less expensive than traditional
web hosting.
3. Security: The IaaS Cloud Provider may provide better security than your existing
software.
4. Maintenance: There is no need to manage the underlying data center or the
introduction of new releases of the development or underlying software. This is all
handled by the IaaS Cloud Provider.
The various companies providing Infrastructure as a Service include Amazon Web Services,
Bluestack, IBM, OpenStack, Rackspace, and VMware.

Disadvantages of IaaS:

1. Limited control over infrastructure: IaaS providers typically manage the underlying
infrastructure and take care of maintenance and updates, but this can also mean that
users have less control over the environment and may not be able to make certain
customizations.
2. Security concerns: Users are responsible for securing their own data and applications,
which can be a significant undertaking.
3. Limited access: Cloud computing may not be accessible in certain regions and
countries due to legal policies.

CLOUD SERVICES EXAMPLES:

Examples of Cloud Storage

Ex: Dropbox, Gmail, Facebook

The number of cloud storage providers online seems to grow every day, each competing on the
amount of storage they can provide to clients.
Right now, Dropbox is a leader in streamlined cloud storage, allowing users to access files
on any device through its application or website, with free storage and paid plans that scale
to a terabyte or more.

Google's email service Gmail, on the other hand, provides generous free storage on the cloud.
Gmail has revolutionized the way we send emails and is largely responsible for the increased
usage of email worldwide.

Facebook is a mix of the two, in that it can store a virtually unlimited amount of information,
images, and videos on your profile, which can then be easily accessed on multiple devices.
Facebook goes a step further with its Messenger app, which allows profiles to exchange data.

Examples of Marketing Cloud Platforms

Ex: Maropost for Marketing, Hubspot, Adobe Marketing Cloud

A marketing cloud is an end-to-end digital marketing platform for clients to manage contacts and
target leads. Maropost Marketing Cloud combines easy-to-use marketing automation with hyper-
targeting of leads, while ensuring emails actually arrive in the inbox thanks to its advanced
email deliverability capabilities.

In general, marketing clouds fulfill a need for personalization. This is important in a market that
demands messaging be “more human.” That’s why communicating that your brand is here to help
will make all the difference in closing.

Examples of Cloud Computing in Education

Ex: SlideRocket, Ratatype, Amazon Web Services

Education is increasingly adopting advanced technology because students already are. So, in an
effort to modernize classrooms, educators have introduced e-learning software like SlideRocket.

SlideRocket is a platform that students can use to build presentations and submit them. Students
can even present them through web conferencing all on the cloud. Another tool teachers use is
Ratatype, which helps students learn to type faster and offers online typing tests to track their
progress.

For school administration, Amazon’s AWS Cloud for K12 and Primary Education features a virtual
desktop infrastructure (VDI) solution. Through the cloud, it allows instructors and students to
access teaching and learning software on multiple devices.

Examples of Cloud Computing in Healthcare

Ex: Clear DATA, Dell’s Secure Healthcare Cloud, IBM Cloud

Cloud computing lets nurses, physicians, and administrators share information quickly from
anywhere. It also saves on costs by allowing large data files to be shared instantly for maximum
convenience. This is a major boost for efficiency.
Ultimately, cloud technology ensures patients receive the best possible care without
unnecessary delay. The patient’s condition can also be updated in seconds through remote
conferencing.

However, many modern hospitals have yet to implement cloud computing, though they are forecast
to do so in the near future.

Examples of Cloud Computing for Government

Uses: IT consolidation, shared services, citizen services

The U.S. government and military were early adopters of cloud computing. The U.S. Federal
Cloud Computing Strategy, introduced under the Obama administration, was instituted to accelerate
cloud adoption in all departments.

According to the strategy: “focus will shift from the technology itself to the core competencies and
mission of the agency.”

The U.S. government’s cloud incorporates social, mobile and analytics technologies. However,
agencies must adhere to strict compliance and security measures (FIPS, FISMA, and FedRAMP)
to protect against cyber threats both domestic and foreign.

Cloud computing is the answer for any business struggling to stay organized, increase ROI, or grow
its email lists.

CLOUD CONCEPTS AND TECHNOLOGIES:

VIRTUALIZATION:

Virtualization is the "creation of a virtual (rather than actual) version of something, such as a
server, a desktop, a storage device, an operating system or network resources".

In other words, virtualization is a technique which allows a single physical instance of a
resource or an application to be shared among multiple customers and organizations. It does
this by assigning a logical name to a physical resource and providing a pointer to that
physical resource when demanded.

Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.

1) Hardware Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed directly on the
hardware system, it is known as hardware virtualization.

The main job of the hypervisor is to control and monitor the processor, memory and other
hardware resources.

After virtualization of the hardware system, we can install different operating systems on it and
run different applications on those OSes.

Usage:

Hardware virtualization is mainly done for the server platforms, because controlling virtual
machines is much easier than controlling a physical server.

2) Operating System Virtualization:

When the virtual machine software or virtual machine manager (VMM) is installed on the host
operating system instead of directly on the hardware system, it is known as operating system
virtualization.

Usage:

Operating System Virtualization is mainly used for testing the applications on different platforms of
OS.

3) Server Virtualization:

When the virtual machine software or virtual machine manager (VMM) is installed directly on the
server system, it is known as server virtualization.

Usage:

Server virtualization is done because a single physical server can be divided into multiple servers
on an on-demand basis and for load balancing.

4) Storage Virtualization:

Storage virtualization is the process of grouping the physical storage from multiple network storage
devices so that it looks like a single storage device.

Storage virtualization is also implemented by using software applications.
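The idea above, assigning a logical name and keeping a pointer to whichever physical device actually holds the data, can be sketched in a few lines. This is an illustrative toy, not a real storage product; the device names and placement policy (least-full device) are hypothetical:

```python
# Minimal sketch of storage virtualization: several physical stores are
# pooled behind one logical volume. A catalog maps each logical name to a
# pointer (device, physical key); callers never name a physical disk.
class VirtualVolume:
    def __init__(self, devices):
        self.devices = devices   # device name -> dict acting as a physical store
        self.catalog = {}        # logical name -> (device name, physical key)

    def write(self, logical_name, data):
        # Place the data on the least-full device and remember the pointer.
        device = min(self.devices, key=lambda d: len(self.devices[d]))
        key = f"blk-{len(self.devices[device])}"
        self.devices[device][key] = data
        self.catalog[logical_name] = (device, key)

    def read(self, logical_name):
        # Follow the pointer to the physical resource on demand.
        device, key = self.catalog[logical_name]
        return self.devices[device][key]

vol = VirtualVolume({"disk-a": {}, "disk-b": {}})
vol.write("report.txt", "Q1 numbers")
print(vol.read("report.txt"))  # "Q1 numbers" — the caller never names a disk
```

The single `VirtualVolume` is what the user sees as "one storage device", while data may actually land on any of the pooled disks.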

LOAD BALANCING:

Load balancing is the method that allows you to properly balance the amount of work being done
across different devices or pieces of hardware. Typically, the load is balanced between different
servers, or between the CPUs and hard drives in a single cloud server.
Load balancing was introduced for various reasons. One is to improve the speed and
performance of each single device, and another is to protect individual devices from hitting their
limits and degrading in performance.

Cloud load balancing is defined as dividing workload and computing properties in cloud
computing. It enables enterprises to manage workload demands or application demands by
distributing resources among multiple computers, networks or servers. Cloud load balancing
involves managing the movement of workload traffic and demands over the Internet.

Traffic on the Internet is growing rapidly, increasing by almost 100% annually. The workload
on servers is therefore increasing just as rapidly, leading to overloading, especially of popular
web servers. There are two primary solutions to the problem of server overloading: upgrade to
a single, more powerful server, or add more servers and distribute the incoming traffic among
them, which is where load balancing comes in.
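The simplest distribution policy is round-robin: hand each incoming request to the next server in the pool, cycling forever. The sketch below illustrates only that policy; the server names are hypothetical, and a real balancer would also track server health and current load:

```python
# Minimal sketch of load balancing: a round-robin balancer spreads incoming
# requests evenly across a pool of servers, so no single server is overloaded.
import itertools

class RoundRobinBalancer:
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)  # endless rotation over the pool

    def route(self, request):
        """Return the server that should handle this request."""
        return next(self._cycle)

lb = RoundRobinBalancer(["server-a", "server-b", "server-c"])
assigned = [lb.route(f"req-{i}") for i in range(6)]
print(assigned)  # each of the three servers receives exactly two requests
```

Other common policies (least-connections, weighted round-robin) differ only in how `route` picks the server; the interface to the clients stays the same.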

SCALABILITY:

Cloud scalability in cloud computing refers to the ability to increase or decrease IT resources as
needed to meet changing demand. Scalability is one of the hallmarks of the cloud and the primary
driver of its exploding popularity with businesses.
ELASTICITY:
Elastic computing is the ability to quickly expand or decrease computer processing, memory, and
storage resources to meet changing demands without worrying about capacity planning and
engineering for peak usage.
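Scalability and elasticity can be made concrete with a threshold-based autoscaling rule: add an instance when average CPU load runs hot, remove one when capacity sits idle. This is a simplified sketch; the thresholds and instance limits are hypothetical, not any provider's defaults:

```python
# Minimal sketch of elasticity: adjust the instance count up or down based
# on observed average CPU load, within fixed limits. Thresholds are hypothetical.
def scale(instances, avg_cpu, high=0.8, low=0.2, min_n=1, max_n=10):
    """Return the new instance count for the observed average CPU load."""
    if avg_cpu > high and instances < max_n:
        return instances + 1      # scale out under heavy load
    if avg_cpu < low and instances > min_n:
        return instances - 1      # scale in when capacity is idle
    return instances              # load is within the target band

n = 2
for load in [0.9, 0.95, 0.5, 0.1, 0.1]:
    n = scale(n, load)
print(n)  # 2 -> 3 -> 4 -> 4 -> 3 -> 2: capacity tracks demand both ways
```

The point of elasticity is exactly this two-way movement: resources grow for the demand spike and shrink back afterwards, so there is no need to provision for peak usage in advance.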
DEPLOYMENT :
Cloud deployment is the utilization of cloud environments to run applications through the use of
different models, such as software-as-a-service (SaaS), platform-as-a-service (PaaS), and
infrastructure-as-a-service (IaaS).

By utilizing a cloud solution, organizations can help reduce capital expenditures (CAPEX) and
allow for flexible operational costs (OPEX) in response to changing needs.

Cloud deployments allow for computing resources to be moved away from a company’s physical
location and exist entirely on a cloud platform. By doing so, businesses can access improved
computing power through the use of multiple servers, utilize online virtual machines, and take
advantage of online data centers for increased storage ability.

Cloud deployment also allows for different computing environments, including public cloud,
private cloud, community cloud, and hybrid cloud.

A public cloud deployment model, such as Microsoft Azure or Amazon Elastic Compute Cloud
(EC2), is available to the general public and will run on third-party servers. Public clouds are
managed by the service provider, meaning they take care of all the software and hardware,
making it generally easy to use and scale as needed. Benefits of a public cloud model include:

 Highly scalable: Because the operating system and cloud infrastructure are managed by
the provider, it is easy to scale the capacity of a given program as needed.

 Lower costs: Organizations need only pay for what they use, without having to invest in
physical hardware or expensive software licenses.
 Reliable uptime: Public cloud platforms are used by many different organizations, so
keeping a consistent uptime is important, with most providers able to offer above 99%
uptime.

A private cloud platform is used by a single company or organization, but otherwise functions
very similarly to a public cloud. A private cloud is most often used for securing sensitive data,
and often uses multiple firewalls. Generally, there will be a dedicated cloud server that cannot be
accessed by anyone from outside of the organization. Benefits of a private cloud include:

 Increased security: Private clouds use a designated private network and higher security
practices, such as requiring a virtual private network (VPN) to access the data.

 Customized services: Rather than being limited to the services offered on a public cloud,
private clouds generally allow for more complex and customized solutions.

A community cloud model is similar to a private cloud, but rather than allowing only one
organization access, several organizations with similar backgrounds will share the infrastructure,
while simultaneously maintaining higher security than a public cloud. Community cloud
advantages include:

 Reduced costs: Rather than one company having to bear the costs of dedicated cloud
servers, it can be shared across multiple companies.

 Easy data sharing: Organizations that share a community cloud can easily share data
between them, without having to compromise on security.

A hybrid cloud model allows for a combination of the above models (public, private, and
community) as needed by organizations to find the solution that best meets their needs. For
example, they could secure sensitive data on a private cloud, while hosting non-critical data on a
public cloud. Benefits of a hybrid model include:

 Minimized expenses: By utilizing different models, companies can ensure they are
getting the best price by only paying premium prices on the data and services that require
it.

 Increased flexibility: It is easy to move from one cloud model to another as the needs of
the business change, while simultaneously maintaining necessary security standards.

REPLICATION:

Data replication is the process of copying data from one location to another. The technology helps
an organization maintain up-to-date copies of its data in the event of a disaster.

Replication can take place over a storage area network, local area network or wide area
network, as well as to the cloud. For disaster recovery (DR) purposes, replication typically occurs
between a primary storage location and a secondary offsite location.
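The primary/secondary pattern described above can be sketched in a few lines: every write goes to the primary and is copied to each replica, so a lost primary can be rebuilt from a secondary. This is an illustrative toy in which plain dictionaries stand in for storage sites; real replication also handles ordering, partial failures, and network delay:

```python
# Minimal sketch of replication for disaster recovery: each write to the
# primary store is copied to every secondary replica.
class ReplicatedStore:
    def __init__(self, n_replicas=2):
        self.primary = {}
        self.replicas = [{} for _ in range(n_replicas)]

    def write(self, key, value):
        """Write to the primary, then replicate to every secondary."""
        self.primary[key] = value
        for replica in self.replicas:
            replica[key] = value

    def recover(self):
        """Disaster recovery: rebuild the primary from a surviving replica."""
        self.primary = dict(self.replicas[0])

store = ReplicatedStore()
store.write("invoice-1", "paid")
store.primary.clear()            # simulate losing the primary site
store.recover()
print(store.primary["invoice-1"])  # "paid" — restored from a replica
```

This is why the secondary location must be offsite: a disaster that destroys the primary site must not also destroy the copy used for recovery.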
MONITORING:

Cloud monitoring is a method of reviewing, observing, and managing the operational workflow in
a cloud-based IT infrastructure. Manual or automated management techniques confirm the
availability and performance of websites, servers, applications, and other cloud infrastructure. This
continuous evaluation of resource levels, server response times, and speed predicts possible
vulnerability to future issues before they arise.
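The core loop of such monitoring is simple: poll a metric per resource and flag anything that breaches a threshold before users notice the degradation. A minimal sketch, with hypothetical resource names, response times, and threshold:

```python
# Minimal sketch of cloud monitoring: compare each resource's observed
# response time against a threshold and return the names that breach it.
def check(resources, threshold_ms=200):
    """Return the names of resources whose response time exceeds the threshold."""
    return [name for name, ms in resources.items() if ms > threshold_ms]

observed = {"web-server": 120, "database": 450, "cache": 15}
alerts = check(observed)
print(alerts)  # ['database'] — only the database breaches 200 ms
```

A real monitoring service runs this continuously, tracks trends over time, and triggers notifications or autoscaling from the alerts rather than just printing them.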

SOFTWARE DEFINED NETWORKING:


Software-defined networking (SDN) is a new networking paradigm that separates the network's
control and data planes. The traditional networking architecture has a tightly coupled relationship
between the data and control planes. This means that network devices, such as routers and switches,
are responsible for forwarding packets and determining how the network should operate.

SDN Architecture

The architecture of software-defined networking (SDN) consists of three main layers: the
application layer, the control layer, and the infrastructure layer. Each layer has a specific role and
interacts with the other layers to manage and control the network.

1. Infrastructure Layer: The infrastructure layer is the bottom layer of the SDN architecture,
also known as the data plane. It consists of physical and virtual network devices such as
switches, routers, and firewalls that are responsible for forwarding network traffic based on
the instructions received from the control plane.
2. Control Layer: The control layer is the middle layer of the SDN architecture, also known as
the control plane. It consists of a centralized controller that communicates with the
infrastructure layer devices and is responsible for managing and configuring the network.
The controller interacts with the devices in the infrastructure layer using protocols such as
OpenFlow to program the forwarding behaviour of the switches and routers. The controller
uses network policies and rules to make decisions about how traffic should be forwarded
based on factors such as network topology, traffic patterns, and quality of service
requirements.
3. Application Layer: The application layer is the top layer of the SDN architecture and is
responsible for providing network services and applications to end-users. This layer consists
of various network applications that interact with the control layer to manage the network.

Examples of applications that can be deployed in an SDN environment include network
virtualization, traffic engineering, security, and monitoring. The application layer can be used to
create customized network services that meet specific business needs.

The main benefit of the SDN architecture is its flexibility and ability to centralize control of the
network. The separation of the control plane from the data plane enables network administrators to
configure and manage the network more easily and in a more granular way, allowing for greater
network agility and faster response times to changes in network traffic.
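The separation of control and data planes described above can be sketched in a few lines of Python. This is an illustrative toy, not a real controller: the class and method names are invented for the example, and a real deployment would use a protocol such as OpenFlow between the controller and the switches.

```python
class Switch:
    """Data plane: forwards packets using rules installed by the controller."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # destination address -> output port

    def install_rule(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, packet):
        # A real switch would punt unknown destinations to the controller.
        return self.flow_table.get(packet["dst"], "controller")


class Controller:
    """Control plane: holds the policy and programs every managed switch."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def apply_policy(self, policy):
        # Push forwarding rules down to the infrastructure layer.
        for sw in self.switches:
            for dst, port in policy.items():
                sw.install_rule(dst, port)


ctrl = Controller()
s1 = Switch("s1")
ctrl.register(s1)
ctrl.apply_policy({"10.0.0.2": 2, "10.0.0.3": 3})
print(s1.forward({"dst": "10.0.0.2"}))   # 2
print(s1.forward({"dst": "10.0.0.9"}))   # controller
```

Note how the switch itself contains no policy: all decisions live in the controller, which is exactly the centralization benefit described above.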

NETWORK FUNCTIONS VIRTUALIZATION:


The term “Network Functions Virtualization” (NFV) refers to the use of virtual machines in place
of physical network appliances. NFV requires a hypervisor to run networking software and
processes, such as load balancing and routing, on virtual machines. The NFV concept was first
proposed at the SDN and OpenFlow World Congress in 2012 by a group of service providers,
including AT&T, China Mobile, BT Group and Deutsche Telekom, which led to the formation of
an NFV Industry Specification Group within the European Telecommunications Standards
Institute (ETSI).
Need of NFV:
NFV makes it possible to separate communication services from specialized hardware such as
routers and firewalls. This eliminates the need to buy new hardware, and network operators can
offer new services on demand. Network components can be deployed in a matter of hours, as
opposed to months with conventional networking. Furthermore, the virtualized services can run
on less expensive generic servers.
Advantages:
 Lower expenses, as NFV follows a pay-as-you-go model in which companies only pay
for what they require.
 Less equipment, as NFV runs on virtual machines rather than physical appliances,
which means fewer devices and lower operating expenses as well.
 Scaling the network architecture is quick and simple using virtual functions, and does
not call for the purchase of additional hardware.
Working:
Software running on virtual machines carries out the same networking tasks as conventional
hardware: load balancing, routing, and firewall security. Network engineers can automate the
provisioning of the virtual network and program all of its various components using a hypervisor
or a software-defined networking controller.
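As a toy illustration of this idea, the sketch below implements two network functions, a firewall and a load balancer, as plain software and chains them the way an NFV platform chains virtual appliances. All function names, ports, and server names are invented for the example.

```python
import itertools

# Hypothetical pool of backend servers for the load balancer VNF.
backends = itertools.cycle(["srv1", "srv2"])

def firewall(packet, blocked_ports={23}):
    # Drop packets addressed to a blocked port (e.g. Telnet on port 23).
    return None if packet["port"] in blocked_ports else packet

def load_balancer(packet):
    # Round-robin the packet across the backend pool.
    return {**packet, "backend": next(backends)}

def service_chain(packet):
    # Steer traffic through the virtual functions in sequence.
    packet = firewall(packet)
    return None if packet is None else load_balancer(packet)

print(service_chain({"port": 80}))   # forwarded to a backend
print(service_chain({"port": 23}))   # None: blocked by the firewall VNF
```

Replacing either function here is a code change, not a hardware purchase, which is the point of NFV.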
Benefits of NFV:
 Many service providers believe that the advantages outweigh the issues of NFV.
 Traditional hardware-based networks are time-consuming to build: network
administrators must buy specialized hardware units, configure them manually, and
then join them to form a network, which requires skilled and well-equipped staff.
 NFV costs less, as it runs under the management of a hypervisor, which is
significantly less expensive than buying specialized hardware that serves the same
purpose.
 A virtualized network is easy to configure and administer, so network capabilities
may be updated or added instantly.

MAPREDUCE:
MapReduce and HDFS are the two major components of Hadoop which makes it so powerful and
efficient to use. MapReduce is a programming model used for efficient processing in parallel over
large data-sets in a distributed manner. The data is first split and then combined to produce the
final result. The libraries for MapReduce is written in so many programming languages with
various different-different optimizations. The purpose of MapReduce in Hadoop is to Map each
of the jobs and then it will reduce it to equivalent tasks for providing less overhead over the
cluster network and to reduce the processing power. The MapReduce task is mainly divided into
two phases Map Phase and Reduce Phase.
MapReduce Architecture:
Components of MapReduce Architecture:

1. Client: The MapReduce client is the one who brings the job to MapReduce for
processing. There can be multiple clients that continuously send jobs for processing
to the Hadoop MapReduce Master.
2. Job: The MapReduce job is the actual work the client wants done, comprised of
many smaller tasks that the client wants to process or execute.
3. Hadoop MapReduce Master: Divides the particular job into subsequent job-parts.
4. Job-Parts: The tasks or sub-jobs obtained after dividing the main job. The results
of all the job-parts are combined to produce the final output.
5. Input Data: The data set that is fed to MapReduce for processing.
6. Output Data: The final result obtained after processing.

In MapReduce, the client submits a job of a particular size to the Hadoop MapReduce Master.
The MapReduce Master divides this job into further equivalent job-parts, which are then made
available to the Map and Reduce tasks.
Each Map and Reduce task contains the program written for the use-case the particular company
is solving: the developer writes the logic that fulfills the requirement.
The input data is fed to the Map task, and the Map generates intermediate key-value pairs as its
output.
The output of the Map, i.e. these key-value pairs, is then fed to the Reducer, and the final output
is stored on HDFS. Any number of Map and Reduce tasks can be made available for processing
the data as required. The Map and Reduce algorithms are written in a highly optimized way so
that time and space complexity are kept to a minimum.
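The two phases can be illustrated with the classic word-count example. The sketch below is a single-process toy in Python; names such as map_phase and shuffle are chosen for the example, while Hadoop itself runs the same steps distributed across a cluster.

```python
from collections import defaultdict

def map_phase(split):
    # Map: emit an intermediate (word, 1) pair for every word in the split.
    for word in split.split():
        yield (word, 1)

def shuffle(pairs):
    # Shuffle: group intermediate pairs by key, as the framework does
    # between the Map and Reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reduce: combine all values for one key into the final result.
    return (key, sum(values))

splits = ["the cloud", "the grid and the cloud"]
intermediate = [pair for split in splits for pair in map_phase(split)]
result = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
print(result)   # {'the': 3, 'cloud': 2, 'grid': 1, 'and': 1}
```

Each split could be mapped on a different node and each key reduced on a different node, which is what makes the model scale.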

IDENTITY AND ACCESS MANAGEMENT:


In a recent study by Verizon, 63% of confirmed data breaches were due to weak, stolen, or
default passwords. A saying in the cybersecurity world goes: “No matter how good your chain is,
it’s only as strong as its weakest link,” and that is exactly how hackers infiltrate an organization.
They usually use phishing attacks, and if they get even one person to fall for one, it is a serious
turn of events from there on. They use the stolen credentials to plant back doors, install malware,
or exfiltrate confidential data, all of which can cause serious losses for an organization.
How Identity and Access Management Works?

AWS (Amazon Web Services) allows you to maintain fine-grained permissions for your AWS
account and the services provided by the Amazon cloud. You can manage permissions for
individual users, or for sets of users as groups, and roles help you manage permissions for
resources.
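The users, groups, and policies model can be sketched as a toy permission checker. This is not the AWS IAM API; the policy names, data layout, and function below are invented purely to illustrate how directly attached and group-inherited policies combine, with IAM's deny-by-default behaviour.

```python
# Hypothetical policies: each maps a name to a set of allowed (service, action) pairs.
policies = {
    "s3-read-only": {("s3", "GetObject"), ("s3", "ListBucket")},
    "ec2-admin":    {("ec2", "*")},        # "*" = all actions on the service
}
groups = {"developers": ["s3-read-only"]}
users = {"alice": {"groups": ["developers"], "policies": ["ec2-admin"]}}

def is_allowed(user, service, action):
    # Gather policies attached directly and inherited via group membership.
    attached = list(users[user]["policies"])
    for g in users[user]["groups"]:
        attached += groups[g]
    for name in attached:
        if (service, action) in policies[name] or (service, "*") in policies[name]:
            return True
    return False   # deny by default, as IAM does

print(is_allowed("alice", "s3", "GetObject"))     # True (via the developers group)
print(is_allowed("alice", "ec2", "RunInstances")) # True (wildcard on ec2-admin)
print(is_allowed("alice", "s3", "DeleteObject"))  # False (never granted)
```

Real IAM policies are JSON documents and also support explicit Deny statements, which always override Allow; this sketch models only the Allow path.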

SERVICE LEVEL AGREEMENTS


A Service Level Agreement (SLA) is the bond for performance negotiated between the cloud
services provider and the client. Earlier in cloud computing, all Service Level Agreements were
negotiated between a client and the service provider. Nowadays, with the emergence of large
utility-like cloud computing providers, most Service Level Agreements are standardized until a
client becomes a large consumer of cloud services. Service Level Agreements are also defined
at different levels, which are mentioned below:
 Customer-based SLA
 Service-based SLA
 Multilevel SLA
Few Service Level Agreements are enforceable as contracts; most are agreements more along
the lines of an Operating Level Agreement (OLA) and may not have the force of law. It is wise
to have an attorney review the documents before making a major agreement with a cloud
service provider. Service Level Agreements usually specify the parameters mentioned below:
1. Availability of the service (uptime)
2. Latency or response time
3. Reliability of service components
4. Accountability of each party
5. Warranties
In any case, if a cloud service provider fails to meet the stated minimum targets, the provider
has to pay a penalty to the cloud service consumer as per the agreement. Service Level
Agreements are thus like insurance policies, under which the provider has to pay out as per the
agreement if any casualty occurs. Microsoft publishes the Service Level Agreements for the
Windows Azure Platform components, which is representative of industry practice for cloud
service vendors. Each individual component has its own Service Level Agreement. Two major
Service Level Agreements (SLAs) are described below:
1. Windows Azure SLA – Windows Azure has different SLAs for compute and storage.
For compute, there is a guarantee that when a client deploys two or more role instances
in separate fault and upgrade domains, the client’s internet-facing roles will have
external connectivity at least 99.95% of the time. Moreover, all of the client’s role
instances are monitored, and there is a guarantee that 99.9% of the time it will be
detected when a role instance’s process is not running, and corrective action will be
initiated.
2. SQL Azure SLA – SQL Azure clients will have connectivity between the database
and the internet gateway of SQL Azure. SQL Azure maintains a “Monthly Availability”
of 99.9% within a month. The Monthly Availability proportion for a particular tenant
database is the ratio of the time the database was available to customers to the total
time in the month. Time is measured in intervals of minutes over a 30-day monthly
cycle. Availability is always calculated for a complete month. A portion of time is
marked as unavailable if the customer’s attempts to connect to a database are denied
by the SQL Azure gateway.
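The availability figures quoted above translate into concrete downtime budgets. The hypothetical helpers below show the arithmetic for a 30-day month:

```python
def allowed_downtime_minutes(availability_pct, days=30):
    # Downtime budget implied by an availability percentage.
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability_pct / 100)

def monthly_availability(unavailable_minutes, days=30):
    # Monthly Availability = available time / total time, as a percentage.
    total_minutes = days * 24 * 60
    return 100 * (total_minutes - unavailable_minutes) / total_minutes

print(round(allowed_downtime_minutes(99.9), 1))    # 43.2 minutes per 30-day month
print(round(allowed_downtime_minutes(99.95), 1))   # 21.6 minutes
print(round(monthly_availability(60), 3))          # 99.861 -> breaches a 99.9% SLA
```

So a single hour of outage in a month already breaches a 99.9% SLA, which is why the penalty clauses matter.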

BILLING:

Cloud billing is a method of generating bills from resource usage data in a cloud environment.
This approach to billing allows for automated, scalable and flexible management of billing
operations. It's especially useful for services such as software, infrastructure and online
platforms. It's a dynamic, adaptable and efficient way for businesses to handle their billing
needs, especially in an environment where services and usage can vary greatly from one
customer to the next.
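As a sketch of the idea, the hypothetical example below turns metered usage records into a bill. The rate card and usage figures are invented for illustration and do not reflect any real provider's pricing.

```python
rates = {                       # hypothetical price per unit
    "vm_hours":   0.05,         # $ per VM-hour
    "storage_gb": 0.02,         # $ per GB-month of storage
    "egress_gb":  0.09,         # $ per GB of outbound traffic
}

# One customer's metered usage for the month.
usage = {"vm_hours": 720, "storage_gb": 100, "egress_gb": 50}

def compute_bill(usage, rates):
    # Sum metered usage multiplied by the per-unit rate for each service.
    return sum(usage[item] * rates[item] for item in usage)

print(f"${compute_bill(usage, rates):.2f}")   # $42.50
```

Because the bill is derived entirely from usage data, the same pipeline scales to any number of customers and any mix of services, which is the flexibility the paragraph above describes.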
CHAPTER 2
COMPUTE SERVICES

Compute services in cloud computing refer to the infrastructure provided by cloud service providers
that allows users to run and manage their applications and workloads in a scalable and flexible
manner without the need to invest in and maintain physical hardware.

These services typically include:

1. Virtual Machines (VMs): Virtualized computing instances that mimic physical servers,
allowing users to install and run software as they would on a physical machine.
2. Containers: Lightweight, portable, and scalable environments that package application
code and dependencies, enabling consistent deployment across different computing
environments.
3. Serverless Computing: A cloud computing model where cloud providers dynamically
manage the allocation of machine resources, automatically scaling and provisioning
infrastructure as needed, allowing developers to focus solely on writing code without
worrying about server management.
4. Functions-as-a-Service (FaaS): A subset of serverless computing where developers can
deploy individual functions or pieces of code that are triggered by specific events or
requests, and the cloud provider manages the execution and scaling of these functions.
5. Bare Metal Instances: Physical servers offered by cloud providers without virtualization,
providing users with full control over the underlying hardware for performance-sensitive
workloads.
6. High-Performance Computing (HPC) Instances: Specialized instances optimized for
running compute-intensive workloads, such as scientific simulations, modeling, and
rendering.
7. GPU Instances: Instances equipped with Graphics Processing Units (GPUs) for
accelerating tasks such as machine learning, data processing, and rendering.

Compute services in cloud computing offer advantages such as elasticity (the ability to scale
resources up or down based on demand), cost-effectiveness (users pay only for the resources they
use), and flexibility (support for various operating systems, programming languages, and
frameworks). These services form the foundation for building and deploying applications in the
cloud, enabling organizations to innovate rapidly and scale their operations efficiently.
