Cloud Computing Intro
1. Distributed Computing:
Distributed computing is a composition of multiple independent systems that work on a single problem but appear to users as a single entity. The systems are linked together, and the problem is divided into sub-problems, each solved by a different computer system. The purpose of distributed systems is to share resources and use them effectively and efficiently. Distributed systems possess characteristics such as scalability, concurrency, continuous availability, heterogeneity, and independence of failures. The main limitation, however, was that all the systems had to be present at the same geographical location. To address this, distributed computing gave rise to three further types of computing: mainframe computing, cluster computing, and grid computing.
2. Parallel Computing:
Parallel computing is a type of computing where multiple computer systems are used simultaneously. A problem is broken into sub-problems, which are further broken down into instructions; the instructions from each sub-problem are then executed concurrently on different processors.
As the diagram below shows, a parallel computing system consists of multiple processors that communicate with each other and perform multiple tasks over a shared memory simultaneously. The goal of parallel computing is to save time and provide concurrency.
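To make the idea concrete, here is a minimal Python sketch, using a hypothetical sum-of-squares problem (not from the text), that breaks a problem into sub-problems and executes them concurrently on different processors:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker solves one sub-problem: summing the squares of its chunk.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(100_000))
    # Break the problem into four sub-problems, one per processor.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        partial_results = pool.map(partial_sum, chunks)  # run concurrently
    # Combine the sub-results into the final answer.
    print(sum(partial_results) == sum(x * x for x in data))  # True
```

The speed-up comes from the four `partial_sum` calls running on separate processors; a shared-memory variant, as in the diagram, would use threads or `multiprocessing.shared_memory` instead of separate processes.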
3. Cluster Computing:
A cluster is a group of independent computers that work together to perform the given tasks. Cluster computing consists of two or more independent computers, referred to as nodes, that work together to execute tasks as a single machine. In the 1980s, cluster computing emerged as an alternative to mainframe computing. Each machine in the cluster was connected to the others by a high-bandwidth network. Clusters were far cheaper than mainframe systems yet equally capable of heavy computation, and new nodes could easily be added when required. Thus the problem of cost was solved to some extent, but the problem of geographical restriction persisted; to solve this, the concept of grid computing was introduced. The goal of cluster computing is to increase the performance, scalability, and simplicity of the system. As the diagram below shows, all the nodes (whether parent or child nodes) act as a single entity to perform the tasks.
4. Grid Computing:
Introduced in the 1990s, grid computing constitutes a network of computers that work together to perform tasks that may be difficult for a single machine to handle. The systems are placed at entirely different geographical locations and connected via the internet. Because these systems belong to different organizations, the grid consists of heterogeneous nodes. All the computers on the network work under the same umbrella and are together termed a virtual supercomputer. The tasks they work on either require high computing power or involve large data sets. All communication between the computer systems in grid computing is done over the "data grid". The goal of grid computing is to solve highly computational problems in less time and improve productivity. Although it solved some problems, new ones emerged as the distance between the nodes increased; the main problem encountered was the low availability of high-bandwidth connectivity, along with other network-related issues. Thus, cloud computing is often referred to as the "successor of grid computing".
5. Utility Computing:
Utility computing is the type of computing where the service provider supplies the needed resources, such as compute services along with other major services such as storage and infrastructure, and charges for them depending on usage rather than at a fixed rate. Utility computing involves renting resources such as hardware and software depending on demand and requirement. The goal of utility computing is to increase resource utilization and be more cost-efficient.
6. Edge Computing:
Edge computing is the type of computing focused on decreasing long-distance communication between the client and the server. This is done by running fewer processes in the cloud and moving those processes onto a user's computer, IoT device, or edge device/server. The goal of edge computing is to bring computation to the network's edge, which narrows the gap between client and server and results in closer, faster interaction.
7. Fog Computing:
Fog computing is the type of computing that acts as a computational layer between the cloud and the data-producing devices; it is also called "fogging". This layer enables users to allocate resources, data, and applications at locations closer to one another. The goal of fog computing is to improve overall network efficiency and performance.
8. Cloud Computing:
The cloud is the use of someone else's server to host, process, or store data. Cloud computing is the delivery of on-demand computing services over the internet on a pay-as-you-go basis. It is widely distributed, network-based, and used for storage. The types of cloud are public, private, hybrid, and community, and some cloud providers are Google Cloud, AWS, Microsoft Azure, and IBM Cloud.
Business Drivers in Cloud Computing
Business Driver:
A business driver is a resource, process, or condition that is vital to the growth and success of a business. Every business chooses its own drivers according to its circumstances. Business drivers are the key inputs that drive a business operationally and financially, and businesses adopt them to achieve organizational goals.
Example –
Some common examples of business drivers are the quantity and price of products sold, units of production, the number of enterprises or salespeople, etc.
• Level of Demand
• Cost of Production
• Availability of Funds
Taking these considerations into account, let us look at the different capacity planning strategies that exist, one by one.
i. Lead Strategy – Capacity is added beforehand, in anticipation of a future increase in demand. This strategy keeps customers intact and prevents competitors from luring them away.
ii. Lag Strategy – Capacity is added only when it is required, that is, only when demand is actually observed rather than anticipated. This strategy is more conservative, as it reduces the risk of wastage, but it can result in late delivery of goods if not planned carefully.
iii. Match Strategy – Small amounts of capacity are added gradually at required intervals, keeping in mind the demand and the market potential of the product. This strategy is said to improve performance in heterogeneous environments and hybrid clouds.
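A toy numerical sketch (the demand figures and the 30-unit increment are invented for illustration) shows how the three strategies provision capacity differently against the same demand curve:

```python
# Hypothetical demand per period, in arbitrary capacity units.
demand = [100, 120, 150, 200, 260]

lead_cap, lag_cap, match_cap = [], [], []
current_match = demand[0]
for i, d in enumerate(demand):
    # Lead: provision ahead, for the anticipated next-period demand.
    lead_cap.append(demand[i + 1] if i + 1 < len(demand) else d)
    # Lag: provision only for the demand already observed.
    lag_cap.append(d)
    # Match: add capacity in small fixed increments (30 units here).
    if current_match < d:
        current_match += 30
    match_cap.append(current_match)

print(lead_cap)   # [120, 150, 200, 260, 260] - runs ahead of demand
print(lag_cap)    # [100, 120, 150, 200, 260] - tracks observed demand
print(match_cap)  # [100, 130, 160, 190, 220] - catches up in steps
```

Lead risks idle capacity, lag risks shortfall, and match trades a little of both (note `match_cap` falling behind when demand jumps).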
3. Organizational Agility
Organizational agility is the process by which an organization adapts and evolves in response to sudden changes caused by internal and external factors; it measures how quickly an organization gets back on its feet in the face of problems. Agility requires stability, and for an organization to achieve organizational agility, it should build a stable foundation. In the IT field, an organization should respond to business change by scaling its IT resources; if the infrastructure is the constraint, business needs should be re-prioritized according to the circumstances.
Principles of Organizational Agility –
The five principles of Organizational Agility are as follows.
• Frame your problems properly
• Limit Change
• Simplify Change
• Subtract before you Add
• Verify Outcomes
Cloud Computing (NIST Model)
The NIST Definition of Cloud Computing:
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to
a shared pool of configurable computing resources (e.g., networks, servers, storage,
applications, and services) that can be rapidly provisioned and released with minimal
management effort or service provider interaction. This cloud model is composed of five
essential characteristics, three service models, and four deployment models.
1) Essential Characteristics:
On-demand self-service. A consumer can unilaterally provision computing capabilities, such as
server time and network storage, as needed automatically without requiring human interaction
with each service provider.
Broad network access. Capabilities are available over the network and accessed through
standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g.,
mobile phones, tablets, laptops, and workstations).
Resource pooling. The provider’s computing resources are pooled to serve multiple consumers
using a multi-tenant model, with different physical and virtual resources dynamically assigned
and reassigned according to consumer demand. There is a sense of location independence in
that the customer generally has no control or knowledge over the exact location of the
provided resources but may be able to specify location at a higher level of abstraction (e.g.,
country, state, or datacenter). Examples of resources include storage, processing, memory, and
network bandwidth.
Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases
automatically, to scale rapidly outward and inward commensurate with demand. To the
consumer, the capabilities available for provisioning often appear to be unlimited and can be
appropriated in any quantity at any time.
Measured service. Cloud systems automatically control and optimize resource use by leveraging
a metering capability (typically done on a pay-per-usage or charge-per-usage basis) at some
level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and
active user accounts). Resource usage can be monitored, controlled, and reported, providing
transparency for both the provider and consumer of the utilized service.
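The metering and pay-per-usage idea can be sketched as a simple bill calculation; the resource names and unit prices below are purely illustrative, not any provider's actual rates:

```python
# Hypothetical unit prices; real providers publish their own rate cards.
RATES = {
    "storage_gb_month": 0.02,   # per GB-month stored
    "compute_hours": 0.05,      # per instance-hour
    "bandwidth_gb": 0.01,       # per GB transferred
}

def monthly_bill(usage):
    """Charge only for metered consumption, per resource type."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

usage = {"storage_gb_month": 500, "compute_hours": 720, "bandwidth_gb": 100}
print(round(monthly_bill(usage), 2))  # 10.0 + 36.0 + 1.0 = 47.0
```

Because usage is metered per resource, both the provider and the consumer can see exactly what each line item cost, which is the transparency the characteristic describes.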
2) Service Models:
Software as a Service (SaaS). The capability provided to the consumer is to use the provider’s
applications running on a cloud infrastructure (the collection of hardware and software that
enables the five essential characteristics of cloud computing). The applications are accessible
from various client devices through either a thin client interface, such as a web browser (e.g.,
web-based email), or a program interface. The consumer does not manage or control the
underlying cloud infrastructure including network, servers, operating systems, storage, or even
individual application capabilities, with the possible exception of limited user-specific
application configuration settings.
Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the
cloud infrastructure consumer-created or acquired applications created using programming
languages, libraries, services, and tools supported by the provider. The consumer does not
manage or control the underlying cloud infrastructure including network, servers, operating
systems, or storage, but has control over the deployed applications and possibly configuration
settings for the application-hosting environment.
Infrastructure as a Service (IaaS). The capability provided to the consumer is to provision
processing, storage, networks, and other fundamental computing resources where the
consumer is able to deploy and run arbitrary software, which can include operating systems
and applications. The consumer does not manage or control the underlying cloud infrastructure
but has control over operating systems, storage, and deployed applications; and possibly
limited control of select networking components (e.g., host firewalls).
3) Deployment Models:
Private cloud. The cloud infrastructure is provisioned for exclusive use by a single organization
comprising multiple consumers (e.g., business units). It may be owned, managed, and operated
by the organization, a third party, or some combination of them, and it may exist on or off
premises.
Community cloud. The cloud infrastructure is provisioned for exclusive use by a specific
community of consumers from organizations that have shared concerns (e.g., mission, security
requirements, policy, and compliance considerations). It may be owned, managed, and
operated by one or more of the organizations in the community, a third party, or some
combination of them, and it may exist on or off premises.
Public cloud. The cloud infrastructure is provisioned for open use by the general public. It may
be owned, managed, and operated by a business, academic, or government organization, or
some combination of them. It exists on the premises of the cloud provider.
Hybrid cloud. The cloud infrastructure is a composition of two or more distinct cloud
infrastructures (private, community, or public) that remain unique entities, but are bound
together by standardized or proprietary technology that enables data and application
portability (e.g., cloud bursting for load balancing between clouds).
History of Cloud Computing
During the time of ARPANET, the word cloud was used as a metaphor for the Internet and a
standardized cloud-like shape was used to denote a network on telephony schematics.
During the 1960s, the initial concepts of time-sharing became popularized via Remote Job Entry. Full time-sharing solutions were available by the early 1970s on platforms such as Multics (on GE hardware), Cambridge CTSS, and the earliest UNIX ports (on DEC hardware). Yet the "data center" model, where users submitted jobs to operators to run on IBM's mainframes, was overwhelmingly predominant.
In the 1990s, companies that provided point-to-point data circuits started offering VPNs (Virtual Private Networks). They utilized overall network bandwidth more effectively by switching traffic to balance server use. The cloud symbol was used to denote the demarcation point where the provider's responsibility ended and the user's began. Cloud computing extended this boundary to cover all servers as well as the network infrastructure. As computers became more diffused, scientists and technologists explored ways to make large-scale computing power available to more users through time-sharing. They experimented with algorithms to optimize the infrastructure, platform, and applications to prioritize CPUs and increase efficiency for end users.
The use of the word cloud for virtual services dates back to General Magic in 1994, where it was used to describe the universe of "places" that mobile agents in the Telescript environment (a distributed computing platform created by General Magic and AT&T) could go. 'Cloud' was also used in promoting AT&T's associated PersonaLink Services.
In July 2002, Amazon Web Services was created to "enable developers to build innovative and
entrepreneurial applications on their own." Simple Storage Service (S3) and Elastic Compute
Cloud (EC2) were introduced in 2006, which pioneered the delivery of IaaS through server
virtualization.
In 2008, Google App Engine (beta version) was released, which provided PaaS through fully
maintained infrastructure and a deployment platform for users to create web applications using
common high level languages. The goal was to eliminate the need for some administrative tasks
typical of an IaaS model, while creating a platform where users could easily deploy such
applications and scale them to demand. During the same period, NASA's Nebula became the
first open-source software for deploying private and hybrid clouds, and for the federation of
clouds.
In February 2010, Microsoft Azure was released. On March 1, 2011, IBM announced the IBM
SmartCloud framework to support Smarter Planet. On June 7, 2012, Oracle announced the Oracle Cloud, which was positioned as the first offering to provide users with access to an integrated set of IT solutions, including the Applications (SaaS), Platform (PaaS), and
Infrastructure (IaaS) layers. In May 2012, Google Compute Engine was released in preview,
before being rolled out into General Availability in December 2013. In December 2019, Amazon
announced AWS Outposts, which is a fully managed service that extends AWS infrastructure,
AWS services, APIs, and tools to virtually any customer datacenter, co-location space, or on-
premises facility for a truly consistent hybrid experience.
2) Microsoft Azure
Microsoft Azure, formerly known as Windows Azure, supports various operating systems, databases, programming languages, and frameworks that allow IT professionals to easily build, deploy, and manage applications through a worldwide network. It also allows users to create different groups for related utilities.
Features of Microsoft Azure
o Microsoft Azure provides scalable, flexible, and cost-effective services.
o It allows developers to quickly manage applications and websites.
o It manages each resource individually.
o Its IaaS infrastructure allows us to launch general-purpose virtual machines on different platforms such as Windows and Linux.
o It offers a Content Delivery Network (CDN) for delivering images, videos, audio, and applications.
6) Oracle cloud
The Oracle Cloud platform is offered by Oracle Corporation. It combines Platform as a Service, Infrastructure as a Service, Software as a Service, and Data as a Service with cloud infrastructure. It is used to perform tasks such as moving applications to the cloud, managing development environments in the cloud, and optimizing connection performance.
Features of Oracle cloud
o Oracle Cloud provides various tools to build, integrate, monitor, and secure applications.
o Its infrastructure supports various languages, including Java, Ruby, PHP, and Node.js.
o It integrates with Docker, VMware, and other DevOps tools.
o Oracle Cloud not only provides unparalleled integration between IaaS, PaaS, and SaaS, but also integrates with the on-premises platform to improve operational efficiency.
o It maximizes the value of IT investments.
o It offers customizable Virtual Cloud Networks, firewalls, and IP addresses to securely
support private networks.
7) Red Hat
Red Hat Virtualization is an open-source server and desktop virtualization platform produced by Red Hat. It is very popular in Linux environments, providing infrastructure solutions for virtualized servers as well as technical workstations. Many small and medium-sized organizations use Red Hat to run their operations smoothly. It offers higher density, better performance, agility, and security for resources, and it improves the organization's economics through cheaper and easier management capabilities.
Features of Red Hat
o Red Hat provides secure, certified, and updated container images via the Red Hat Container Catalog.
o Red Hat's cloud includes OpenShift, an app development platform that allows developers to access, modernize, and deploy apps.
o It supports up to 16 virtual machines, each having up to 256 GB of RAM.
o It offers better reliability, availability, and serviceability.
o It provides flexible storage capabilities, including very large SAN-based storage, better management of memory allocations, high availability of LVMs, and support for roll-back.
o In the desktop environment, it includes features such as a new on-screen keyboard and GNOME Software, which allows us to install and update applications, as well as extended device support.
8) DigitalOcean
DigitalOcean is a cloud provider that offers computing services to organizations. It was founded in 2011 by Moisey and Ben Uretsky. It is one of the best cloud providers for managing and deploying web applications.
Features of DigitalOcean
o It uses the KVM hypervisor to allocate physical resources to the virtual servers.
o It provides high-quality performance.
o It offers a digital community platform that helps answer queries and gather feedback.
o It allows developers to use cloud servers to quickly create new virtual machines for their
projects.
o It offers one-click apps for Droplets, including MySQL, Docker, MongoDB, WordPress, phpMyAdmin, the LAMP stack, Ghost, and machine-learning tools.
9) Rackspace
Rackspace offers cloud computing services such as hosting web applications, Cloud Backup, Cloud Block Storage, Databases, and Cloud Servers. The main aim of Rackspace is to easily manage private and public cloud deployments. Its data centers operate in the USA, UK, Hong Kong, and Australia.
Features of Rackspace
o Rackspace provides various tools that help organizations collaborate and communicate more efficiently.
o We can access files stored on the Rackspace cloud drive anywhere, anytime, using any device.
o It offers six data centers globally.
o It can manage both virtual servers and dedicated physical servers on the same network.
o It provides better performance at a lower cost.
PROPERTIES:
On Demand Self Service: Cloud computing allows users to use web services and resources on demand. One can log on to a website at any time and use them; human administrators are not required.
Broad Network Access: Since cloud computing is completely web-based, it can be accessed from anywhere and at any time, over standard networks and heterogeneous devices.
Resource Pooling: Cloud computing allows multiple tenants to share a pool of resources (e.g., networks, servers, storage, applications, and services) in an uncommitted manner. Tenants can share a single physical instance of hardware, a database, and basic infrastructure.
Rapid Elasticity: It is very easy to scale the resources vertically or horizontally at any time.
Scaling of resources means the ability of resources to deal with increasing or decreasing
demand. The resources being used by customers at any given point of time are automatically
monitored.
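A minimal sketch of the monitoring loop behind such automatic horizontal scaling (the thresholds and load samples are invented for illustration):

```python
def scale(servers, avg_load, low=0.3, high=0.8):
    """Horizontal scaling: add a server when average utilization is high,
    remove one (never below 1) when it is low."""
    if avg_load > high:
        return servers + 1
    if avg_load < low and servers > 1:
        return servers - 1
    return servers

servers = 2
for load in [0.9, 0.85, 0.5, 0.2, 0.1]:  # simulated utilization samples
    servers = scale(servers, load)
    print(servers)  # prints 3, 4, 4, 3, 2
```

The fleet grows while demand is high and shrinks as it falls, which is exactly the "deal with increasing or decreasing demand" behavior described above.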
Measured Service: Resource utilization is tracked for each application and tenant. The cloud service provider controls and monitors all aspects of the cloud service; resource optimization, billing, capacity planning, etc. depend on it.
Maintenance: Since cloud applications do not need to be installed on each user's computer and can be accessed from different places, cost is reduced and maintenance is easier.
Low Cost: Cloud computing reduces cost because, to use cloud services, an IT company need not set up its own infrastructure and pays only per usage of resources.
Advanced Security: Cloud service providers store encrypted data of users and provide
additional security features such as user authentication, security against breaches and other
threats, several layers of abstraction, data backup etc.
Resilient Computing: Resilience in cloud computing means its ability to recover from any
interruption. Disaster management earlier used to pose problems for service providers but now
due to a lot of investments and advancements in this field, clouds have become a lot more
resilient. Advanced backup and recovery methods make sure that your data is always safe.
Virtualization: A technique that allows a single physical instance of an application or resource to be shared among multiple organizations or tenants (customers). It does so by assigning a logical name to a physical resource and providing a pointer to that physical resource on demand.
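The logical-name/pointer mechanism can be sketched in a few lines; the tenant names and the single shared "physical" resource are hypothetical:

```python
# A single "physical" resource instance, shared by all tenants.
physical_db = {"rows": []}

# The mapping a virtualization layer maintains:
# logical name -> pointer to the physical resource.
logical_names = {}

def provision(tenant):
    handle = f"{tenant}-db"
    logical_names[handle] = physical_db  # a pointer, not a copy
    return handle

handle_a = provision("tenant-a")
handle_b = provision("tenant-b")
# Both tenants' logical names resolve to the same physical instance.
print(logical_names[handle_a] is logical_names[handle_b])  # True
```

Each tenant sees only its own logical name, while the provider serves all of them from one physical instance, which is what makes the multi-tenant sharing described above possible.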
Disadvantages of Cloud Computing
Cloud computing, an emergent technology, poses many challenges in different aspects of data and information handling. Some of these are:
Security and Privacy: Security and Privacy of information is the biggest challenge to cloud
computing. The user data can be accessed by the host company with or without permission.
The service provider may access the data that is on the cloud at any point in time. They could
accidentally or deliberately alter or even delete information. Security and privacy issues can be
overcome by employing encryption, security hardware and security applications.
Portability (Vendor Lock-in): Another challenge is that applications should be easily migrated from one cloud provider to another, yet organizations may face problems when transferring their services between vendors. Because different vendors provide different platforms and use different standard languages, moving from one cloud to another is difficult; this is called vendor lock-in.
Pay-per-use service charges: Cloud computing services are on-demand; a user can extend or shrink the volume of resources as needed and pays for exactly what has been consumed. This makes it difficult to define a fixed cost for a particular quantity of service, and such price variations make budgeting for cloud computing difficult and intricate.
Upkeep (management) of the cloud: Maintaining a cloud is a herculean task because a cloud architecture contains a large resource infrastructure and carries other challenges and risks, such as user satisfaction. Moreover, there is a lack of skilled expertise and resources, and as the workload in the cloud grows, cloud hosting companies need continuous, rapid advancement.
Interoperability: It means the application on one platform should be able to incorporate
services from the other platforms. It is made possible via web services, but developing such
web services is very complex.
Computing Performance: Data-intensive applications on the cloud require high network bandwidth, which results in high cost. Low bandwidth does not meet the desired computing performance of cloud applications.
Reliability and Availability: It is necessary for cloud systems to be reliable and robust because most businesses now depend on services provided by third parties.
Benefits of cloud computing:
• Universal document access
• Latest version availability of your documents
• Easier group collaboration
• Jobs can be executed in parallel, speeding up processing
Role of Open Standards
“Standard” is a document that provides requirements, specifications, guidelines or
characteristics that can be used consistently to ensure that materials, products, processes and
services are fit for their purpose.
An open standard is a standard that is freely available for adoption, implementation and
updates. A few famous examples of open standards are XML, SQL and HTML. They provide a
harmonized, stable and globally recognized framework for the dissemination and use of
technologies. They encompass best practices and agreements that encourage more equitable
development and promote the overall growth of the Information Society.
An open standard is different from "open source": open source refers to code that is created to be freely available, and most licenses allow anyone, anywhere, to redistribute and modify the code with attribution. In many cases the license further dictates that any updates from contributors also become free and open to the community. This allows a decentralized community of developers to collaborate on a project and jointly benefit from the resulting software.
Businesses within an industry share open standards because this allows them to bring huge
value to both themselves and to customers. Standards are often jointly managed by a
foundation of stakeholders. There are typically rules about what kind of adjustments or updates
users can make, to ensure that the standard maintains interoperability and quality.
Open cloud standards provide three important benefits to IT organizations:
1. Increased Choice: Open standards give customers the freedom to choose the products
that work best with their tools and work in their environment. Constraints around
specific interfaces disappear and decisions can be based upon performance.
2. Reduced Cost: Open standards lower costs by reducing the complexity and number of
tools required to support an environment. Training is also more efficient in this
environment.
3. Improved Interoperability: Ultimately, users want to integrate their business systems and the infrastructures that support them. Open standards enable that integration, which drives greater business agility and responsiveness.
Adding to these benefits, open standards can also help in preventing vendor lock-in. To explain
this with an example: a business might buy a PDF reader and editor from a vendor. Over time,
the team could create a huge number of PDF documents. Maybe these documents become a
valuable asset for the company. Since the PDF format is an open standard, the business would
have no problem switching from one PDF software to another. There is no concern that it
would be unable to access its documents. Even if the PDF reader software isn’t open source,
the PDF format is an open standard. Everyone uses this format.