Cloud Computing Unit 3
Cloud service management entails all the activities an organization performs to plan,
design, deliver, control and operate the IT and cloud services it offers to
consumers. In other words, it refers to the implementation and management of
cloud services in terms of quality, business value, customer satisfaction and operations,
performed by cloud providers.
A cloud service provider is responsible for delivering the best possible quality, control
and performance of cloud services to every cloud customer. However, these services are
not easy to implement and require strong, well-organized management skills
and experience to achieve the business goals and objectives.
In cloud computing, a Service Level Agreement (SLA) is the performance bond
negotiated between the cloud service provider and the client. It can be an
agreement between two or more parties, where one is the customer and the
others are service providers. It can be a legally binding formal contract or an informal
one (for example, between internal departments). The agreement may
involve separate organizations, or different teams within one organization.
Earlier, all Service Level Agreements in cloud computing were negotiated between
a client and the service provider. Nowadays, with the emergence of large utility-
like cloud computing providers, most Service Level Agreements are standardized
until a client becomes a large consumer of cloud services. Service level
agreements are also defined at different levels, as described below:
• Multilevel SLA: The SLA is split into different levels, each addressing
a different set of customers for the same services, within the same SLA.
o Corporate-level SLA: Covers all the generic service level
management (often abbreviated as SLM) issues appropriate to every
customer throughout the organization. These issues tend to be
less volatile, so updates (SLA reviews) are required less
frequently.
o Customer-level SLA: Covers all SLM issues relevant to a particular
customer group, regardless of the services being used.
o Service-level SLA: Covers all SLM issues relevant to a specific
service, in relation to a specific customer group.
Some Service Level Agreements are enforceable as legal contracts, but most are
agreements or contracts that do not carry the force of law. It is advisable to
have an attorney review the documents before entering a major agreement with a
cloud service provider. Service Level Agreements usually specify certain
parameters, including those below:
4. The steps for reporting issues with the service: This component specifies
the contact details for reporting the problem and the order in which details about
the issue have to be reported. The contract will also include a time range within which
the problem will be looked at, and by when the issue will be resolved.
5. Warranties
In any case, if a cloud service provider fails to meet the stated minimum targets,
the provider has to pay a penalty to the cloud service consumer
as per the agreement. Service Level Agreements are thus like insurance policies, under
which the provider has to pay out per the agreement if any casualty occurs.
Any SLA management strategy considers two well-differentiated phases:
negotiating the contract and monitoring its fulfillment in real time. Thus, SLA
management encompasses the SLA contract definition: the basic schema with the
QoS parameters; SLA negotiation; SLA monitoring; SLA violation detection; and
SLA enforcement—according to defined policies.
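As an illustration of the monitoring and enforcement phases, the following sketch checks measured monthly uptime against a negotiated target and computes the penalty (service credit) owed on a violation. The thresholds and credit rates are invented for illustration, not taken from any real provider's SLA:

```python
# Hypothetical SLA monitor: compares measured uptime against the
# negotiated target and computes a penalty (service credit) if the
# provider missed the minimum. Thresholds and rates are illustrative.

def sla_penalty(measured_uptime_pct, target_pct=99.9, monthly_fee=100.0):
    """Return the service credit owed when uptime falls below target."""
    if measured_uptime_pct >= target_pct:
        return 0.0  # SLA met: no violation, no penalty
    # Tiered credits, as many providers structure them (values assumed)
    if measured_uptime_pct >= 99.0:
        credit_rate = 0.10   # 10% credit for a minor miss
    elif measured_uptime_pct >= 95.0:
        credit_rate = 0.25   # 25% credit for a larger miss
    else:
        credit_rate = 1.00   # full credit for a severe outage
    return round(monthly_fee * credit_rate, 2)
```

In the negotiation phase the parties would agree on the target and the credit tiers; the monitoring phase supplies the measured value; enforcement applies the policy above.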
BILLING AND ACCOUNTING
BILLING
Cloud billing is the process of generating bills from resource data usage using
some predefined billing policies (defined in the SLA and agreed upon by both the
service provider and service user). The bill can be for real money or it can be a
more abstract notion depending upon the individual cloud computing general
policies.
• Working Module
• Billing Policies – Language and Syntax
• The Rules/Inference Engine and its role
Working Module:
Billing is not automated and must be initiated by the manager. All the
billing services are implemented as web services. The billing service establishes
a connection to the web server through a PHP interface and retrieves the usage record
document from the Usage Records repository.
The billing policies are not part of the billing service and are an external element
that the billing service makes use of. They are written by the resource manager
and stored in a repository. The billing service then uses the Billing Policies
Repository to extract the necessary guidelines and generate the bill. This
generated bill is then stored into the Billing Repository for later use by the
manager.
There are different languages that can be used to write the billing
policies. A natural language permits rules to be expressed in plain English, is
easy for any party to understand, and has an almost flat learning curve. The same
policies can also be described in a domain-specific language (a language that
uses the same syntax and semantics as the specification requirement document).
Such a language is more structured and organized, and makes quick edits
or changes easier than natural language does.
The process of matching new or existing facts against production rules is
called pattern matching. There are a number of algorithms used for pattern
matching by rules engines. The rules are stored in the production memory, and
the facts that the inference engine matches against them are stored in the working
memory. Facts are inserted into the working memory, where they may later be modified
or retracted. A system with a large number of rules and facts may result in many rules
being true for the same fact assertion; these rules are said to be in conflict.
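The production-rule idea behind billing can be sketched in a few lines. Rules in "production memory" are matched against usage facts in "working memory" to produce billing line items; the rule names, fact fields, and rates here are invented for illustration:

```python
# Toy rules/inference engine for billing: each production rule has a
# condition matched against usage facts and an action that prices the
# usage. Rule names, fields, and rates are illustrative assumptions.

rules = [  # "production memory"
    {"name": "cpu-hours",  "match": lambda f: f["resource"] == "cpu",
     "price": lambda f: f["hours"] * 0.05},
    {"name": "storage-gb", "match": lambda f: f["resource"] == "storage",
     "price": lambda f: f["gb"] * 0.02},
]

def generate_bill(facts):
    """Match every usage fact against every rule (pattern matching)."""
    bill = []
    for fact in facts:            # "working memory"
        for rule in rules:
            if rule["match"](fact):
                bill.append((rule["name"], round(rule["price"](fact), 2)))
    return bill
```

For example, the facts `[{"resource": "cpu", "hours": 10}, {"resource": "storage", "gb": 100}]` would yield the line items `("cpu-hours", 0.5)` and `("storage-gb", 2.0)`.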
ACCOUNTING
For one, cloud accounting is more flexible. Accounting data can be accessed from
anywhere on any device with an Internet connection, rather than on a few select
on-premises computers. Secondly, unlike traditional accounting software, cloud
accounting software updates financial information automatically and provides
financial reporting in real-time. This means account balances are always accurate
and fewer errors take place due to manual data entry. Cloud systems also handle
multi-currency and multi-company transactions more efficiently.
With cloud accounting, it’s also easier to get real-time reporting and visibility
throughout your organization, with greater mobile capabilities and collaboration.
Subscription-based models are popular among cloud accounting providers, and
in most cases these subscriptions are usage-based.
In the on-premises world, every time a firm grows it encounters greater
software license and maintenance costs, as well as new licenses and fees for
database, systems management and other software. The firm might also have to
make expensive capital purchases of new hardware, such as servers. With cloud
solutions, businesses don't get stuck with permanent, expensive equipment and
licenses when the business contracts and, likewise, there are no big
spikes in costs when it expands a little.
Sharing data is also less worrisome. With cloud accounting, two people simply
need access rights to the same system with their unique passwords. Traditional
methods often require flash drives to transport data, which could be lost or
stolen.
Also, cloud accounting requires far less maintenance than its traditional
counterpart. The cloud provider completes the backups, updates occur
automatically and nothing needs to be downloaded or installed on a company
computer.
Lastly, cloud providers usually have backup servers in two or more locations. If
one server network goes down, you still have access to your data. Information
kept just on-premises could be destroyed or damaged in a fire or natural disaster,
and may never be recovered.
Cloud-based accounting software now offers all the functionality and reliability
of your tried and trusted desktop accounting system, but with a wealth of
additional benefits that only online technology can deliver. If your business is
looking for a more effective way to manage its financial affairs, here are six
reasons for seriously considering a move to cloud accounting.
With cloud accounting, you can access your accounts and key financial figures at
any time, from anywhere.
When you use an old-fashioned, desktop-based system, you are effectively tied
to the office. Your software, your data and your accounts all sit on a local
drive. And that limits the access you can have to your financial information.
Cloud-based accounting frees you up from this restriction. Your data and records
are all safely encrypted and stored on a cloud server, and there is no software
application for you to download – you log in and work from your web browser,
wherever you have Wi-Fi and an Internet connection.
So, wherever you are, you can always check on the status of your business.
Working online reduces your IT costs and saves you time by keeping you
constantly connected to the business.
Online accounting is carried out entirely from the cloud. There is no costly IT
infrastructure for you to maintain, and you can access the software whether you
are in the office or not. You can immediately approve payments or send out
invoices to customers, saving you time and making your financial processes far
more effective.
When you are cloud-based, your accounts and records are all saved and backed
up with high levels of encryption. If you have used desktop accounting, you will
be aware of the need to back up your work at the end of each day. And you will
also know about the need for updates each time your provider brings out a new
version of the software.
Using the old, desktop approach, you had limited access to your accounts – and
that made collaboration with colleagues and advisers difficult. With a system like
online accounting, you, your colleagues, your management team and your
advisers can all access the same numbers – instantly, from any geographical
location.
Using cloud accounting can deliver the dream of having a paperless office. With
traditional accounting, dealing with paperwork means everything must be printed out
and handled in hard copy, which is slow, ineffective and bad for the
environment.
With an online accounting system, you can significantly reduce your reliance on
paperwork. Invoices can be emailed out directly to clients, removing the costs of
printing and postage – and speeding up the payment process. And because your
documents are all digitized and stored in the cloud, there is no need to keep the
paper originals.
The efficiencies of online accounting software give you greatly improved control
of your core financial processes. The online invoicing function streamlines the
whole invoice process, giving you a better view of expected income, an overview
of outstanding debts and a clear breakdown of what each customer owes your
business.
SCALING HARDWARE: CLOUD V/S TRADITIONAL
Cloud Computing vs Traditional IT Infrastructure
Cloud computing is really popular nowadays. More and more companies prefer
cloud infrastructure over the traditional kind, and it is often much more
reasonable to buy a cloud storage subscription than to invest in physical
in-house servers. But what exactly are the benefits of cloud computing over
the traditional approach? Let's review the main differences.
First of all, you do not need to buy the hardware and maintain it with your own
team. Information in the cloud is stored on several servers at the same time,
which means that even if one or two servers are damaged, you will not lose your
information. It also helps to provide high uptime, up to 99.9%.
With traditional infrastructure, you have to buy and maintain the hardware and
equipment yourself. If something goes wrong, you can lose data and spend a lot
of time and money fixing the issues.
Cloud computing is a perfect choice for those who do not require high
performance constantly but use it from time to time. You can get a subscription and
use the resources you paid for. Most providers even let you pause the subscription
when you do not need it, and at the same time you are able to control everything and
get instant help from the support team.
Traditional infrastructure requires its own team to maintain and monitor the
system, which takes a lot of time and effort.
Cost
With cloud computing, you do not need to pay for the services you don’t use: the
subscription model means you choose the amount of space, processing power,
and other components that you really need.
With traditional infrastructure, you are limited to the hardware you have. If your
business is growing, you will regularly have to expand your infrastructure. At the
same time, you will have to support and maintain it.
Security
Many people are not sure about the security of cloud services. Why might it be
less secure? Because the company uses a third-party solution to store data, it is
reasonable to worry that the provider could access the confidential data without
permission. However, there are good solutions for avoiding such leaks.
With traditional infrastructure, you and only you are responsible for who can
access the stored data. For companies that handle confidential
information, this can be the better solution.
These are the most common reasons organizations are motivated to move away
from traditional IT towards cloud computing. The list is by no means exhaustive, but
it offers a snapshot of what your peers report:
Cost Savings and Total Cost of Ownership- When you move to cloud computing
you tend to save money in several ways in comparison to traditional IT, including
the reduction of in-house equipment and infrastructure.
Time to Value and Ease of Implementation- Since you don't have to spend time
configuring hardware, maintaining systems, and patching software, you can invest
your resources elsewhere.
Access to Emerging Technology- Much like a car, the minute you purchase on-
premise hardware and software, it immediately starts to age. You can access the
latest and greatest innovations with a cloud provider who has more purchasing
power and stays up-to-date with available solutions. Also, as new innovations are
released, they become less likely to integrate with legacy solutions, which are often
inflexible. Instead, cloud-first development is coming to the forefront.
Using Hybrid to Optimize your Operations- Some organizations take the use
of the cloud even further by using multiple clouds simultaneously, thus giving
them even more agility. Workloads are placed where they perform best, and
where costs are most efficient.
Financial Benefits of Moving to the Cloud
As previously mentioned, there are many reasons to move your IT to the cloud,
but for many organizations the business case to move away from traditional IT is
the overriding factor in their decision.
Lower Power Expenses- When you use in-house servers, you must pay to run
them 24/7 even though you aren’t using them all the time. A cloud service
provider will charge less for energy than you would pay if you ran your own on-
premise data center.
No Capital Costs- When you send everything to the cloud you eliminate all capital
expenses. You don’t have to purchase the servers or set up the infrastructure.
Your cloud provider pays those expenses instead of you.
No Redundancy Costs- Most companies with in-house servers have to go the extra mile
and purchase backup hardware as well. Having extra hardware lying around just
in case is a huge waste of resources. Enterprise data centers also offer redundant
power and infrastructure, which is difficult to afford on your own. You can
eliminate these expenses and use the funds elsewhere when you move to the
cloud and away from traditional IT.
ECONOMICS OF SCALING
Economies of Scale refer to the cost advantage experienced by a
firm/organization when it increases its level of output or when production
becomes efficient. Companies can achieve economies of scale by increasing the
production and lowering the per-unit cost. This happens because costs are
spread over a larger number of goods. Costs can be both fixed and variable.
Economies of scale are an important concept for any business in any industry and
represent the cost-savings and competitive advantages larger businesses have
over smaller ones.
Most consumers don't understand why a smaller business charges more for a
similar product sold by a larger company. That's because the cost per unit
depends on how much the company produces. Larger companies are able to
produce more by spreading the cost of production over a larger amount of goods.
An industry may also be able to dictate the cost of a product if there are a number
of different companies producing similar goods within that industry.
There are several reasons why economies of scale give rise to lower per-unit
costs.
• First, fixed costs, such as machinery and facilities, are spread over more
units of output.
• Second, lower per-unit costs can come from bulk orders from suppliers,
larger advertising buys, or a lower cost of capital.
• Third, spreading internal function costs across more units produced and
sold helps to reduce costs.
For example, if a company produces video games, there is a one-time
cost of actually creating the game. As more copies of the game are created, the
cost per game decreases, because the one-time cost is distributed over more units.
Costs for discs and other raw materials may also fall, and manufacturing efficiency
may rise. So once a company establishes itself, it can expand the customer base for
a specific game and raise demand for it; the extra output required to meet that
demand lowers overall cost in the long run.
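The video-game example can be made concrete with a quick calculation (the cost figures below are invented for illustration): the per-unit cost is the one-time fixed cost spread over the number of copies, plus the variable cost per copy.

```python
# Per-unit cost under economies of scale: a one-time fixed cost
# (creating the game) is spread over every copy sold, on top of a
# small variable cost per copy (disc, packaging). Figures are assumed.

def per_unit_cost(fixed_cost, variable_cost, units):
    """Average cost per copy at a given production volume."""
    return fixed_cost / units + variable_cost

# Producing more copies drives the average cost per copy down:
small_run = per_unit_cost(1_000_000, 2.0, 10_000)     # 102.0 per copy
large_run = per_unit_cost(1_000_000, 2.0, 1_000_000)  # 3.0 per copy
```

The fixed cost dominates at small volumes; at large volumes the per-unit cost approaches the variable cost alone, which is exactly the cost advantage larger producers enjoy.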
Technology
Financial
Larger firms tend to be more creditworthy so that they have access to credit with
especially favorable rates of borrowing. They have better financing options and
lower interest rates. This means they access capital more cheaply. Additionally,
large firms listed on stock markets can raise money by selling equity more easily.
Risk-bearing
Investments that might end up being extremely worthwhile may also be quite
costly and high-risk. Generally, only large companies can afford to accept such
risks and make large investments.
Transportation
The presence of larger firms may create better transportation networks. This
reduces costs for companies that use these networks.
Skilled labor
High concentrations of skilled labor often appear as workers receive training and
education to serve particular firms and industries. One of the best-known
examples of this concentration is the Silicon Valley, where there are huge
concentrations of programmers drawn to the presence of massive tech firms like
Apple and Google.
As more and more businesses adopt cloud services, seizing on the latest software
tools and development methodologies, the lines between them are blurring. What
really distinguishes one business from the next is its data.
Much of the intrinsic value of a business resides in its data, but we’re not just
talking about customer and product data, there’s also supply chain data,
competitor data, and many other types of information that might fall under the
big data umbrella. Beyond that there are a multitude of smaller pieces of data,
from employee records to HVAC system logins, that are rarely considered, but
are necessary for the smooth running of any organization. And don’t forget about
source code. Your developers are using cloud-based repositories for version
control of application code. It also needs to be protected.
In the past, companies would typically try to centralize their data and lock it safely
away in an impenetrable vault, but hoarding data doesn’t allow you to extract
value from it. Data gains business value when it’s transported from place to place
as needed and available to be leveraged, not locked away in some dark place.
People need swift, easy access to data and real-time analysis to make innovative
leaps, achieve operational excellence and gain that all-important competitive
edge.
Businesses grow organically, so new systems and software are adopted, mergers
and acquisitions prompt integrations and migrations, and new devices and
endpoints are added to networks all the time. Even the most organized of
businesses inevitably ends up with a complex structure and data that’s
distributed globally.
1. Resting data
Most of the time data sits in storage. It’s often behind firewalls and other layers
of security, which it should be, but it’s also vital to ensure that your data is
encrypted. It should be encrypted all the time, even when you think it’s safely
tucked up in your vault.
If you properly protect your data at rest by encrypting it, then anyone stealing it
will end up with lines of garbled junk that they can’t decipher. You may think it’s
unlikely a cybercriminal will breach your defenses, but what about a motivated
insider with malicious intent, or even a careless intern? Hackers' most common
point of penetration is actually your employees' devices, through which they gain a
foothold that can be leveraged to go deeper into your networks. Encrypt
everything and take proper precautions to restrict access to the decryption key.
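As a sketch of encryption at rest, the snippet below uses the third-party `cryptography` package (an assumption: any authenticated symmetric cipher would serve); the point is that what sits in storage is unreadable without the key, which must be kept and access-restricted separately:

```python
# Encrypting data at rest: anyone who steals the ciphertext without the
# key sees only garbled junk. Uses the third-party "cryptography"
# package; the record contents are illustrative.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this out of the data store
vault = Fernet(key)

record = b"account=4711;balance=1000"
ciphertext = vault.encrypt(record)     # what actually sits in storage
plaintext = vault.decrypt(ciphertext)  # only possible with the key
```

Losing the key means losing the data, so key management matters as much as the encryption itself.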
2. Accessing data
It’s very important that your employees can access the data they need to do their
jobs whenever and wherever they want, but access must also be controlled. Start
by analyzing which people need access to what data and create tailored access
rights and controls that restrict unnecessary access. Any person requesting
access to data must be authenticated and every data transaction should be
recorded so you can audit later if necessary. Active Directory is the most common
place to manage and control such access today.
Access control should also scan the requesting device to ensure it’s secure and
doesn’t harbor any malware or viruses. Analyzing behavior to see if the user or
device requesting access falls into normal patterns of use can also be a great way
of highlighting nefarious activity.
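A minimal sketch of such tailored access rights (the roles, resources, and audit format are hypothetical): every request is checked against the role's permissions, and every decision is recorded so it can be audited later.

```python
# Toy role-based access control with an audit trail: access rights are
# tailored per role, every request is checked against them, and every
# decision is logged for later audit. Names are illustrative.

PERMISSIONS = {
    "accountant": {"ledger", "invoices"},
    "intern":     {"invoices"},
}
audit_log = []

def request_access(user, role, resource):
    """Grant or deny access and record the decision for auditing."""
    allowed = resource in PERMISSIONS.get(role, set())
    audit_log.append((user, role, resource, "GRANT" if allowed else "DENY"))
    return allowed
```

A real deployment would layer authentication, device scanning, and behavioral analysis on top of this check, but the grant/deny/audit core stays the same.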
3. Data in transit
It’s crucial to create a secure, authenticated and encrypted tunnel between the
authenticated user and device and the data they’re requesting. You want to make
the data transfer as swift and painless as possible for the end user, but without
compromising security. Make sure data remains encrypted in transit, so no
interceptor can read it. Choosing the right firewalls and virtual private network
(VPN) services is vital. You may also want to compartmentalize endpoints to keep
data safely siloed or employ virtualization to ensure it doesn’t reside on insecure
devices.
There’s no doubt that most companies focus their data protection efforts here
and it is important, but don’t focus on data in transit to the detriment of other
areas.
4. Arriving data
When the data arrives at its destination you want to be certain that it is authentic
and hasn’t been tampered with. Can you prove data integrity? Do you have a clear
audit trail? This is key to effectively managing data and reducing the risk of any
breach or infection. Phishing attacks often show up in the inbox as genuine data
to fool people into clicking somewhere they shouldn’t and downloading malware
that bypasses your carefully constructed defenses.
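One way to prove that arriving data is authentic and untampered, assuming sender and receiver share a secret key (the key and messages below are illustrative), is to attach a message authentication code and verify it on arrival:

```python
# Verifying data integrity on arrival with an HMAC: the sender attaches
# a tag computed over the message with a shared secret key; the receiver
# recomputes the tag and compares. A tampered message fails the check.
import hashlib
import hmac

SECRET = b"shared-secret-key"  # illustrative; manage real keys carefully

def sign(message):
    """Tag a message so the receiver can prove its integrity."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message, tag):
    """Constant-time comparison, resistant to timing attacks."""
    return hmac.compare_digest(sign(message), tag)
```

Together with a recorded audit trail, such checks make tampering detectable rather than silent.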
Even with the first four pillars solidly implemented, things can and do go
sideways from time to time when least expected. Most companies recognize the
importance of proper backup hygiene today and have implemented backup and
recovery processes. Be sure to actually test and validate your ability to restore
the backups and recover periodically.
In the cloud, there’s another critical area to carefully consider. Be careful not to
put all your data eggs in one basket. Do not store your backups in the same cloud
account where your production data resides. That’s a formula for disaster you
may not recover from should a hacker somehow gain access to your network and
delete everything.
That is, leverage multiple cloud accounts to segregate your backup data from
your production data. Be certain to back up your cloud infrastructure
configuration information as well, in case you ever need to rebuild it for any
reason.
The cloud is useful as a data storage tier for disaster recovery, backup, and long-
term archiving.
SCALABILITY AND CLOUD SERVICES
What is scalability?
2. Sizeable difference- You don’t use a cool word like “scalability” to describe a
minor change. It should mean adding a significant amount of users or data, or
hardware-like assets such as vCPUs and vRAM.
4. Relatively fast- Not all cloud solutions scale up in minutes or with the click
of a button, but scaling with the cloud should at least be faster than buying and
setting up the hardware yourself.
5. Relatively easy- If scalability were easy in every case, we'd all be AWS Architects
making a sweet $100,000+ per year. But the architecture of the cloud still makes
things easier than scaling locally. Without virtualization, you’d have to run your
largest apps on expensive, difficult-to-maintain mainframes.
A scalable system provides several benefits, here are the most important ones:
2. Cost-efficient: You can let your business grow without making expensive
changes to your current setup. This reduces the cost implications of
storage growth, making scalability in the cloud very cost-effective.
3. Easy and Quick: Scaling up or scaling out in the cloud is simpler; you can
commission additional VMs with a few clicks and after the payment is processed,
the additional resources are available without any delay.
4. Capacity: Scalability ensures that with the continuous growth of your business
the storage space in cloud grows as well. Scalable cloud computing systems
accommodate your data growth requirements. With scalability, you don’t have to
worry about additional capacity needs.
5. Incredible business speed and flexibility: Want to open a new branch? Add a
new team? Start a new project or campaign? Scalability lets you add the IT
resources for initiatives like this in minutes, not months.
Types of Scaling
Vertical scaling
Vertical is often thought of as the “easier” of the two methods. When scaling a
system vertically, you add more power to an existing instance. This can mean
more memory (RAM), faster storage such as Solid State Drives (SSDs), or more
powerful processors (CPUs).
The reason this is thought to be the easier option is that hardware is often trivial
to upgrade on cloud platforms like AWS, where servers are already virtualized.
There is also very little (if any) additional configuration you are required to do at
the software level.
Horizontal scaling
When scaling a system horizontally, you add more instances (nodes) and
distribute the load across them, rather than adding power to a single machine.
Diagonal scaling
Diagonal scaling is really the sweet spot of scalability in the cloud, because while
you can estimate and plan for growth, sometimes surges happen when we least
expect them to. The ability to keep your business going strong no matter what’s
thrown at it is the epitome of a nimble company – and one that is truly possible
only with the cloud.
Here are four ways to help you get the most out of your cloud in terms of scalability.
Many cloud providers offer auto scaling, which can even further help manage
resources and allocate load balances appropriately. Auto scaling is defined as
automatically scaling capacity either up or down based on user-defined
conditions. It allows you to ensure that you always have the precise number of
instances available in order to handle your application’s load.
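A sketch of such user-defined conditions (the metric, thresholds, and bounds are invented for illustration): capacity scales out when load is high and back in when it is low, within fixed limits.

```python
# Toy auto scaler: adjusts the instance count based on user-defined
# CPU-utilization thresholds, keeping it within min/max bounds.
# Thresholds and bounds are illustrative.

def autoscale(instances, cpu_pct, low=30, high=70, min_n=1, max_n=10):
    """Return the new instance count for the observed CPU utilization."""
    if cpu_pct > high and instances < max_n:
        return instances + 1   # scale out under heavy load
    if cpu_pct < low and instances > min_n:
        return instances - 1   # scale in when capacity sits idle
    return instances           # load is in the comfortable band
```

Real auto-scaling services evaluate conditions like these continuously against monitored metrics; the bounds prevent runaway growth and keep a minimum of capacity online.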
When dealing with a peak in users or resource consumption, for example, load
balancing attempts to balance your workloads among all available nodes in order
to equalize unused resources. Load balancers also generally monitor the health of
each instance continuously, to ensure traffic is sent only to healthy instances,
and they can move workloads they deem too heavy for a specific node to a less
burdened node instead.
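The health-checking behavior can be sketched as follows (node names and the round-robin policy are assumptions for illustration): requests are rotated only across nodes whose health check currently passes.

```python
# Toy health-aware load balancer: requests are rotated (round-robin)
# across only those nodes whose health check currently passes.
# Node names and health states are illustrative.
from itertools import cycle

nodes = {"node-a": True, "node-b": False, "node-c": True}  # health checks

def healthy_rotation():
    """Yield healthy nodes in round-robin order, skipping sick ones."""
    return cycle([n for n, ok in sorted(nodes.items()) if ok])

rr = healthy_rotation()
assigned = [next(rr) for _ in range(4)]  # node-b never receives traffic
```

A production balancer would re-run the health checks periodically and weigh nodes by current load rather than using a fixed rotation, but the principle is the same.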
Containers aren’t perfect for every application, so it’s important to assess your
current applications to determine which are suited for containerization.
Your cloud environment may be scalable – but is the application you want to scale
able to do so? Testing for scalability is a crucial part of growing and should be
done continuously to prevent bottlenecks later on. Make sure you add extra time
at the end of your application development cycle to test for scalability to ensure
you don’t stumble across major issues when scaling that application later on.
DATABASE AND DATA STORES IN CLOUD
Cloud Database
The behavior of the database should be the same whether accessed through
direct queries, such as SQL statements, or through API calls. However, it may be
possible to discern small differences in response time. An on-premises database,
accessed over a LAN, is likely to provide a slightly faster response than a cloud-based
database, which requires a round trip over the internet for each interaction
with the database.
Key features:
Cloud databases, like their traditional ancestors, can be divided into two broad
categories: relational and non-relational.
The DBaaS model is a fee-based subscription service in which the database runs
on the service provider's physical infrastructure. Different service levels are
usually available. In a classic DBaaS arrangement, the provider maintains the
physical infrastructure and database, leaving the customer to manage the
database's contents and operation.
Alternatively, a customer can set up a managed hosting arrangement, in which
the provider handles database maintenance and management. This latter option
may be especially attractive to small businesses that have database needs but
lack adequate IT expertise.
• Ease of access: Users can access cloud databases from virtually anywhere,
using a vendor's API or web interface.
• Fills the talent gap when IT lacks adequate in-house database expertise.
Cloud storage- Cloud storage is a service that enables saving data on an
offsite storage system.
1) Block Storage Devices – These devices provide raw storage to the
clients. This raw storage is partitioned to create volumes. A volume is a
recognizable unit of data storage.
2) File Storage Devices – File storage devices are provided to the client in
the form of files, for maintaining its file system. Stored data is accessed using
the Network File System (NFS).
• The customer cannot format the storage, install his own file system, or change
drive properties.
• This system appears to the user as a raw disk that the user can partition and
format.
LARGE SCALE DATA PROCESSING
What is MapReduce?
The input to each phase is key-value pairs. In addition, every programmer needs
to specify two functions: map function and reduce function.
The major advantage of MapReduce is that it is easy to scale data processing over
multiple computing nodes. Under the MapReduce model, the data processing
primitives are called mappers and reducers. Decomposing a data processing
application into mappers and reducers is sometimes nontrivial. But, once we
write an application in the MapReduce form, scaling the application to run over
hundreds, thousands, or even tens of thousands of machines in a cluster is
merely a configuration change. This simple scalability is what has attracted many
programmers to use the MapReduce model.
The Algorithm
• Generally, the MapReduce paradigm is based on sending the computation to where
the data resides.
o Map stage − The map or mapper’s job is to process the input data.
Generally the input data is in the form of a file or directory and is
stored in the Hadoop file system (HDFS). The input file is passed to
the mapper function line by line. The mapper processes the data and
creates several small chunks of data.
• During a MapReduce job, Hadoop sends the Map and Reduce tasks to the
appropriate servers in the cluster.
• Most of the computing takes place on nodes with data on local disks, which
reduces network traffic.
After completion of the given tasks, the cluster collects and reduces the data to
form an appropriate result, and sends it back to the Hadoop server.
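The map and reduce phases can be sketched with the classic word-count example in Python (an illustrative simulation of the model, not Hadoop's actual API):

```python
from collections import defaultdict

def mapper(line):
    # Map phase: emit a (word, 1) key-value pair for every word in the line.
    for word in line.split():
        yield (word.lower(), 1)

def reducer(word, counts):
    # Reduce phase: sum all counts emitted for the same word.
    return (word, sum(counts))

def map_reduce(lines):
    # Shuffle step: group intermediate values by key.
    groups = defaultdict(list)
    for line in lines:
        for key, value in mapper(line):
            groups[key].append(value)
    # Reduce step: one reducer call per distinct key.
    return dict(reducer(k, v) for k, v in groups.items())

result = map_reduce(["the quick brown fox", "the lazy dog"])
print(result["the"])  # "the" appears twice, so prints 2
```

In Hadoop the shuffle and the per-key reducer calls are distributed across many machines, but the mapper and reducer functions keep exactly this key-value shape.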
CLOUD MANAGEMENT WITH PUPPET
Puppet is a software configuration management tool which includes its
own declarative language to describe system configuration. It is a model-driven
solution that requires limited programming knowledge to use.
Puppet Server is installed on one or more servers, and Puppet Agent is installed
on all the machines to be managed. Puppet Agents communicate with the server
and fetch configuration instructions. The Agent then applies the configuration
on the system and sends a status report to the server. Machines can run Puppet
Agent as a daemon, have it triggered periodically as a cron job, or run it
manually whenever needed.
Rather than specifying the exact commands to perform a system action, the user
creates a resource, which Puppet then translates into system-specific instructions
which are sent to the machine being configured. For example, if a user wants to
install a package on three different nodes, each of which runs a different
operating system, they can declare one resource, and Puppet will determine
which commands need to be run based on the data obtained from Facter, a
program that collects data about the system it is running on, including its
operating system, IP address, and some hardware information.
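Such a resource declaration might look like the following Puppet manifest (a minimal sketch; the package name is illustrative):

```puppet
# Declare the desired state: the ntp package must be present.
# Puppet picks the right provider (apt, yum, etc.) for each node,
# using Facter data to determine the node's operating system.
package { 'ntp':
  ensure => installed,
}
```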
Providers on the node use Facter facts and other system details to translate
resource types in the catalog into machine instructions that will actually configure
the node.
3. The actual state of the system is then configured according to the desired
state described in the manifest file. If the system is already in the desired
state, Puppet will not make any changes, making transactions idempotent.
4. Finally, the agent sends a report to the master, detailing what changes were
made and any errors that occurred.
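The idempotent convergence in step 3 can be illustrated with a toy sketch in Python (a simplified model, not Puppet's actual implementation):

```python
def converge(actual, desired):
    # Compute only the changes needed to move the actual state
    # to the desired state; untouched keys are left alone.
    changes = {key: value for key, value in desired.items()
               if actual.get(key) != value}
    actual.update(changes)  # apply the changes in place
    return changes          # the "report": what was changed

system = {"ntp": "absent", "ssh": "installed"}
print(converge(system, {"ntp": "installed"}))  # first run applies the change
print(converge(system, {"ntp": "installed"}))  # second run changes nothing: {}
```

Because the second run produces no changes, applying the same configuration repeatedly is safe, which is what makes Puppet transactions idempotent.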
CASE STUDY: EUCALYPTUS
HISTORY
EUCALYPTUS ARCHITECTURE
Eucalyptus CLIs can manage both Amazon Web Services and their own private
instances. Clients can easily migrate instances from Eucalyptus to Amazon
Elastic Compute Cloud. Network, storage, and compute are managed by the
virtualisation layer, and instances are isolated by hardware virtualisation.
The following terminology is used by the Eucalyptus architecture in cloud
computing.
1. Cluster Controller: It manages one or more Node Controllers and is
responsible for deploying and managing instances on them.
2. Storage Controller: It permits the creation of snapshots of volumes.
3. Cloud Controller: It is the front end for the whole environment.
4. Walrus Storage Controller: It is a simple file storage system.
5. Node Controller: It is the basic component of a node. It maintains the life
cycle of the instances running on each node.
• Compatible with Amazon Web Services (AWS) and Simple Storage Service
(S3).
• Can be installed and deployed from source code or DEB and RPM packages.
Numerous other tools can be used to interact with AWS and Eucalyptus in cloud
computing; they are listed below.
1. Vagrant AWS Plugin: This tool provides config files to manage AWS
instances and manage VMs on the local system.
2. s3curl: This is a tool for interacting with AWS S3 and Eucalyptus
Walrus.
3. s3fs: This is a FUSE file system, which can be used to mount a bucket
from Walrus or S3 as a local file system.
4. Cloudberry S3 Explorer: This Windows tool is for managing files
between S3 and Walrus.
1. Eucalyptus can be used to build both private and public Eucalyptus clouds.
2. Eucalyptus can be used with DevOps tools like Chef and Puppet.
3. It is compatible with Simple Storage Service (S3) and Amazon Web Services
(AWS).
It is used to build hybrid, public and private clouds. It can also turn your
own datacentre into a private cloud and allow you to extend the functionality
to many other organisations.
CONCLUSION
Execution Environment
• Virtual Machine (VM role). This role allows you to host your own custom
instance of the Windows Server 2008 R2 Enterprise or Windows Server 2008
R2 Standard operating system within a Windows Azure data center.
Data Management
Windows Azure, SQL Azure, and the associated services provide opportunities for
storing and managing data in a range of ways. The following data management
services and features are available:
• Azure Storage: This provides four core services for persistent and durable
data storage in the cloud. The services support a REST interface that can
be accessed from within Azure-hosted or on-premises (remote)
applications. The four storage services are Blobs, Tables, Queues, and
Windows Azure Drives.
Networking Services
Windows Azure provides several networking services that you can take advantage
of to maximize performance, implement authentication, and improve
manageability of your hosted applications. These services include the following:
• Content Delivery Network (CDN). The CDN allows you to cache publicly
available static data for applications at strategic locations that are closer
(in network delivery terms) to end users. The CDN uses a number of data
centers at many locations around the world, which store the data in BLOB
storage that has anonymous access. These do not need to be locations
where the application is actually running.
• Virtual Network Traffic Manager. This is a service that allows you to set
up request redirection and load balancing based on three different
methods. Typically you will use Traffic Manager to maximize performance
by redirecting requests from users to the instance in the closest data center
using the Performance method. Alternative load balancing methods
available are Failover and Round Robin.
• Service Bus. This provides a secure messaging and data flow capability for
distributed and hybrid applications, such as communication between
Windows Azure hosted applications and on-premises applications and
services, without requiring complex firewall and security infrastructures. It
can use a range of communication and messaging protocols and patterns
to provide delivery assurance and reliable messaging; can scale to
accommodate varying loads; and can be integrated with on-premises
BizTalk Server artifacts.
CASE STUDY: AMAZON EC2
Elastic IP addresses
Amazon Relational Database Service (Amazon RDS)
It provides an easy way to set up, operate and scale a relational database in
the cloud. You can launch a DB Instance and get access to a full-featured
MySQL database without worrying about common database administration tasks
like backups, patch management etc.
Amazon Simple Queue Service (Amazon SQS)
It is a reliable, highly scalable, hosted distributed queue for storing
messages as they travel between computers and application components.
Amazon Virtual Private Cloud (Amazon VPC)
It allows you to extend your corporate network into a private cloud contained
within AWS. Amazon VPC uses IPSec tunnel mode to enable you to create a
secure connection between a gateway in your data center and a gateway in AWS.
Amazon Route53
It is a highly scalable DNS service that allows you to manage your DNS records
by creating a HostedZone for every domain you would like to manage.
AWS Identity and Access Management (IAM)
It enables you to create multiple Users with unique security credentials and
manage the permissions for each of these Users within your AWS Account. IAM
is natively integrated into AWS Services. No service APIs have changed to
support IAM, and existing applications and tools built on top of the AWS
service APIs will continue to work when using IAM. AWS also offers various
payment and billing services that leverage Amazon’s payment infrastructure.
All AWS infrastructure services offer utility-style pricing that requires no
long-term commitments or contracts. For example, you pay by the hour for Amazon EC2
instance usage and pay by the gigabyte for storage and data transfer in the case
of Amazon S3. More information about each of these services and their pay-as-
you-go pricing is available on the AWS Website. Note that using the AWS cloud
doesn’t require sacrificing the flexibility and control you’ve grown accustomed
to:
You are free to use the programming model, language, or operating system
(Windows, OpenSolaris or any flavor of Linux) of your choice. You are free to pick
and choose the AWS products that best satisfy your requirements—you can use
any of the services individually or in any combination. Because AWS provides
resizable (storage, bandwidth and computing) resources, you are free to
consume as much or as little and only pay for what you consume.
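As a sketch of this pay-as-you-go arithmetic, a monthly bill can be computed from usage alone (the hourly and per-gigabyte rates below are hypothetical, not actual AWS prices):

```python
def monthly_bill(ec2_hours, s3_gb_stored, gb_transferred,
                 hourly_rate=0.10, storage_rate=0.023, transfer_rate=0.09):
    # Hypothetical rates: $/hour for EC2, $/GB for S3 storage and transfer.
    ec2_cost = ec2_hours * hourly_rate
    s3_cost = s3_gb_stored * storage_rate + gb_transferred * transfer_rate
    return round(ec2_cost + s3_cost, 2)

# One instance running all month (720 h), 50 GB stored, 10 GB transferred out.
print(monthly_bill(720, 50, 10))  # 720*0.10 + 50*0.023 + 10*0.09 = 74.05
```

With no long-term commitment, a month of zero usage simply costs nothing, which is the defining property of utility-style pricing.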
You are free to use the system management tools you’ve used in the past and
extend your datacenter into the cloud.