Cloud_Computing_Unit_1
Cloud
"The cloud" refers to servers that are accessed over the Internet, and the software
and databases that run on those servers. Cloud servers are located in data centers
all over the world. By using cloud computing, users and companies do not have to
manage physical servers themselves or run software applications on their own
machines.
The cloud enables users to access the same files and applications from almost any
device, because the computing and storage take place on servers in a data center
instead of locally on the user's device. This is why a user can log in to their Instagram
account on a new phone after their old phone breaks and still find their old account
in place, with all their photos, videos, and conversation history. It works the same
way with cloud email providers like Gmail or Microsoft Office 365, and with cloud
storage providers like Dropbox or Google Drive.
For businesses, switching to cloud computing removes some IT costs and overhead:
for instance, they no longer need to update and maintain their own servers, as the
cloud vendor they are using will do that. This especially makes an impact for small
businesses that may not have been able to afford their own internal infrastructure but
can outsource their infrastructure needs affordably via the cloud. The cloud can also
make it easier for companies to operate internationally, because employees and
customers can access the same files and applications from any location.
Cloud Computing
Cloud computing is on-demand access, via the internet, to computing resources—
applications, servers (physical servers and virtual servers), data storage,
development tools, networking capabilities, and more—hosted at a remote data
center managed by a cloud services provider (or CSP). The CSP makes these
resources available for a monthly subscription fee or bills them according to usage.
NIST Definition
NIST defines cloud computing as "a model for enabling ubiquitous, convenient, on-demand
network access to a shared pool of configurable computing resources (e.g., networks,
servers, storage, applications, and services) that can be rapidly provisioned and released
with minimal management effort or service provider interaction."
1960s
With the vision of interconnecting computers on a global scale, J.C.R. Licklider introduced
the concepts of the "Galactic Network" and the "Intergalactic Computer Network", ideas
that led to the development of the Advanced Research Projects Agency Network (ARPANET).
1970
1997
Prof. Ramnath Chellappa introduced the concept of “Cloud Computing” in Dallas.
1999
2003
The Virtual Machine Monitor (VMM), which allows multiple guest operating systems to run
on a single physical machine, paved the way for further major inventions.
2006
Amazon started expanding into cloud services. With EC2 and the Simple Storage
Service (S3), it introduced the pay-as-you-go model, which remains a standard
practice even today.
2013
Web 2.0: It is the interface through which cloud computing services interact
with clients. It is because of Web 2.0 that we have interactive and dynamic
web pages, and it also makes web pages more flexible. Popular examples of
Web 2.0 include Google Maps, Facebook, Twitter, etc. Needless to say, social
media is made possible by this technology. It gained major popularity in 2004.
Parallel Computing
It is the use of multiple processing elements simultaneously to solve a problem.
Problems are broken down into instructions that are solved concurrently, since each
resource applied to the work operates at the same time.
1. It saves time and money, as many resources working together reduce run time and
cut potential costs.
2. It can take advantage of non-local resources when the local resources are finite.
Types of Parallelism:
1. Bit-level parallelism – It is the form of parallel computing based on increasing the
processor's word size. It reduces the number of instructions the system must execute
in order to perform an operation on data larger than the word size.
Example: Consider a scenario where an 8-bit processor must compute the sum of two
16-bit integers. It must first add the 8 lower-order bits and then add the 8 higher-order
bits (plus the carry), requiring two instructions to perform the operation. A 16-bit
processor can perform the operation with just one instruction.
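To make the idea concrete, here is a minimal Python sketch that mimics how an 8-bit
processor would add two 16-bit integers: two 8-bit additions plus carry handling, versus
a single instruction on a 16-bit processor. The helper name add16_on_8bit and the test
values are made up purely for illustration.

```python
def add16_on_8bit(a: int, b: int) -> int:
    """Add two 16-bit integers using only 8-bit operations."""
    a_lo, a_hi = a & 0xFF, (a >> 8) & 0xFF
    b_lo, b_hi = b & 0xFF, (b >> 8) & 0xFF

    lo_sum = a_lo + b_lo                   # first 8-bit add instruction
    carry = lo_sum >> 8                    # carry out of the low byte
    hi_sum = (a_hi + b_hi + carry) & 0xFF  # second 8-bit add instruction

    return (hi_sum << 8) | (lo_sum & 0xFF)


assert add16_on_8bit(300, 700) == (300 + 700) & 0xFFFF  # 1000
```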
The real world runs in a dynamic manner, i.e. many things happen at the same time in
different places concurrently, and this data is extremely large to manage.
Real-world problems need more dynamic simulation and modeling, and parallel computing
is the key to achieving this.
Complex, large datasets and their management can be organized only by using a parallel
computing approach.
The algorithms must be managed in such a way that they can be handled in a parallel
mechanism.
The algorithms or programs must have low coupling and high cohesion, but it is difficult
to create such programs.
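As a rough illustration of applying multiple processing elements at the same time, the
following sketch uses Python's standard concurrent.futures module to split an independent,
CPU-bound workload across processor cores. The work function and input sizes are invented
for the example.

```python
from concurrent.futures import ProcessPoolExecutor


def count_primes(limit: int) -> int:
    """CPU-bound work: naively count primes below `limit`."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count


if __name__ == "__main__":
    chunks = [50_000, 60_000, 70_000, 80_000]   # independent sub-problems
    with ProcessPoolExecutor() as pool:         # one worker per CPU core by default
        results = list(pool.map(count_primes, chunks))
    print(results)                              # each chunk was solved concurrently
```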
Distributed Computing
A distributed computer system consists of multiple software components that are on
multiple computers, but run as a single system. The computers that are in a
distributed system can be physically close together and connected by a local
network, or they can be geographically distant and connected by a wide area
network. A distributed system can consist of any number of possible configurations,
such as mainframes, personal computers, workstations, minicomputers, and so on.
The goal of distributed computing is to make such a network work as a single
computer.
Distributed systems offer many benefits over centralized systems, including the
following:
Scalability
The system can easily be expanded by adding more machines as needed.
Redundancy
Several machines can provide the same services, so if one is unavailable, work does
not stop. Additionally, because many smaller machines can be used, this
redundancy does not need to be prohibitively expensive.
Distributed computing systems can run on hardware that is provided by many
vendors, and can use a variety of standards-based software components. Such
systems are independent of the underlying software. They can run on various
operating systems, and can use various communications protocols. Some hardware
might use UNIX or Linux as the operating system, while other hardware might use
Windows operating systems. For intermachine communications, this hardware can
use SNA or TCP/IP on Ethernet or Token Ring.
You can organize software to run on distributed systems by separating functions into
two parts: clients and servers; this is known as the client/server model. A common
design of client/server systems uses three tiers, known as a three-tiered client/server
architecture.
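A minimal sketch of the client/server split, using Python's standard socket module. The
host, port, and the upper-casing "service" are assumptions made purely for illustration;
in practice the two sides would usually run on different machines, but they can be tried
locally by running run_server() in one process and run_client() in another.

```python
import socket

HOST, PORT = "127.0.0.1", 5050   # assumed values for local testing


def run_server() -> None:
    """Server side: offer a tiny 'service' that upper-cases one message."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _addr = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(data.upper())


def run_client(message: str) -> str:
    """Client side: send a request to the server and return its reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(message.encode())
        return cli.recv(1024).decode()
```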
Features of Cloud Computing
1. Resource Pooling
The customer generally has no control over or knowledge of the exact location of the
provided resources, but may be able to specify location at a higher level of abstraction
(e.g., country, state, or data center).
2. On-Demand Self-Service
It is one of the important and valuable features of cloud computing: the user can
continuously monitor server uptime, capabilities, and allotted network storage, as well
as the computing capabilities that have been provisioned.
3. Easy Maintenance
The servers are easily maintained and downtime is very low; in some cases there is no
downtime at all. Cloud computing is updated regularly, gradually improving over time.
The updates are more compatible with devices and perform faster than the older versions,
and known bugs are fixed along the way.
5. Availability
The capabilities of the cloud can be modified according to use and can be extended
considerably. The service analyzes storage usage and allows the user to buy extra cloud
storage, if needed, for a very small charge.
6. Automatic System
Cloud computing automatically analyzes the data needed and supports a metering capability
at some level of service. Usage can be monitored, controlled, and reported, which provides
transparency for the host as well as the customer.
7. Economical
It reduces IT expenditure: the organization pays only for the resources it actually uses
instead of buying and maintaining its own infrastructure.
8. Security
Cloud security is one of the best features of cloud computing. It creates a snapshot of
the stored data so that the data is not lost even if one of the servers is damaged.
The data is held within the storage devices in such a way that it cannot easily be
accessed or misused by any other person. The storage service is quick and reliable.
9. Pay as you go
In cloud computing, the user has to pay only for the service or the space they have
actually used. There are no hidden or extra charges. The service is economical, and most
of the time some space is allotted for free.
This means that resource usage, such as the virtual server instances running in the cloud,
is monitored, measured, and reported by the service provider, and the pay-as-you-go charge
varies with the consuming organization's actual consumption.
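The following sketch illustrates the pay-per-use idea: the provider meters consumption
and the bill follows actual usage. The resource names and rates are made-up numbers,
not any real provider's pricing.

```python
# Assumed, made-up rates; real providers publish their own price lists.
RATES = {
    "vm_hours": 0.05,     # per virtual-server hour
    "storage_gb": 0.02,   # per GB-month of storage
    "egress_gb": 0.09,    # per GB of outbound traffic
}


def monthly_bill(usage: dict) -> float:
    """Charge only for what was actually consumed; unused services cost nothing."""
    return round(sum(RATES[item] * amount for item, amount in usage.items()), 2)


print(monthly_bill({"vm_hours": 720, "storage_gb": 100, "egress_gb": 10}))  # 38.9
```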
Elasticity
Cloud Elasticity is the property of a cloud to grow or shrink capacity for CPU,
memory, and storage resources to adapt to the changing demands of an
organization. Cloud elasticity can be automatic, without the need to perform capacity
planning in advance, or it can be a manual process in which the organization is notified
that it is running low on resources and can then decide to
add or reduce capacity when needed. Monitoring tools offered by the cloud provider
dynamically adjust the resources allocated to an organization without impacting
existing cloud-based operations.
A cloud provider is said to have more or less elasticity depending on the degree to
which it is able to adapt to workload changes by provisioning or de-provisioning
resources automatically.
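A minimal sketch of an automatic elasticity rule of the kind such monitoring tools might
apply: grow or shrink the instance count based on observed load. The thresholds and
limits are assumptions for illustration, not any provider's defaults.

```python
def desired_instances(current: int, cpu_utilization: float,
                      scale_up_at: float = 0.80, scale_down_at: float = 0.30,
                      minimum: int = 1, maximum: int = 20) -> int:
    """Return the instance count for the next monitoring interval."""
    if cpu_utilization > scale_up_at:       # demand rising: add capacity
        current += 1
    elif cpu_utilization < scale_down_at:   # demand falling: release capacity
        current -= 1
    return max(minimum, min(maximum, current))


print(desired_instances(current=4, cpu_utilization=0.92))  # -> 5
print(desired_instances(current=4, cpu_utilization=0.10))  # -> 3
```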
On Demand Provisioning
Also referred to as "dynamic provisioning", this model provides customers with resources
at runtime: cloud resources are deployed to match customers' fluctuating demands.
Deployments can scale up to accommodate spikes
in usage and down when demands decrease. Customers are billed on a pay-per-use
basis. When this model is used to create a hybrid cloud environment, it is sometimes
called “cloud bursting.”
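The following sketch illustrates the cloud-bursting idea under assumed numbers: a fixed
private capacity absorbs the baseline load, and any overflow is provisioned on demand in
the public cloud and billed per use.

```python
PRIVATE_CAPACITY = 100   # requests/sec the on-premises cluster can absorb (assumed)


def route_load(total_load: int):
    """Split incoming load between private infrastructure and burst capacity."""
    private = min(total_load, PRIVATE_CAPACITY)
    burst = max(0, total_load - PRIVATE_CAPACITY)   # provisioned on demand, billed per use
    return private, burst


print(route_load(80))    # (80, 0)   -> private capacity is enough
print(route_load(150))   # (100, 50) -> 50 req/s burst to the public cloud
```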