
SERVICE MANAGEMENT IN CLOUD COMPUTING

Cloud Service Management entails all activities that an organization does to plan,
design, deliver, control and operate the IT and Cloud Services that it offers to the
consumers. In other words, it refers to the implementation and management of
cloud services in terms of quality, business, customer satisfaction and operation
that is performed by cloud providers.

A cloud service provider is responsible for delivering the best possible quality,
control and performance of cloud services to every cloud customer. However, these
services are not easy to implement and require sound, well-organized management
skills and experience to achieve the business goals and objectives.

Cloud Service Management is supported by three major support services:

➢ Architecture Service: It consists of two things – architecture design and
architecture deployment. It supports timely integration and alignment of
the architecture to the internal IT projects of the customer or to their
initiative deployment plan. This ensures early integration and alignment of
the cloud services to the business changes of the customer.
➢ Business Support: It interacts with business-related services that directly
deal with the cloud consumers. It includes components such as contracts,
billing and customer account management, which includes the pricing of
the services provided to the consumer.
➢ Operational Support: It deals directly with the service level agreement
management and the monitoring of the cloud service operations to ensure
that the agreed service level can be delivered to the consumer. Service Level
Agreements are included in technical support because they cover the quality,
performance and accessibility of a cloud service.
Service Level Agreement

A service-level agreement (SLA) is a commitment between a service provider and
a client. Aspects of the service – quality, availability, responsibilities – are agreed
between the service provider and the service user.

In cloud computing, a Service Level Agreement (SLA) is the bond for performance
negotiated between the cloud service provider and the client. It can be an
agreement between two or more parties, where one is the customer, and the
others are service providers. This can be a legally binding formal or an informal
"contract" (for example, internal department relationships). The agreement may
involve separate organizations, or different teams within one organization.

Earlier, in cloud computing all Service Level Agreements were negotiated between
a client and the service provider. Nowadays, with the emergence of large, utility-
like cloud computing providers, most Service Level Agreements are standardized
until a client becomes a large consumer of cloud services. Service level
agreements are also defined at different levels, which are mentioned below:

• Customer-based SLA: An agreement with an individual customer group,
covering all the services they use. For example, an SLA between a supplier
(cloud service provider) and the finance department of a large organization
for services such as the finance system, payroll system, billing system,
procurement/purchase system, etc.

• Service-based SLA: An agreement for all customers using the services
being delivered by the service provider. For example, a mobile service
provider offers a routine service to all customers and offers certain
maintenance as part of the offer, with universal charging. Another
example could be an email system for the entire organization.
Difficulties can arise with this type of SLA, as the level of service
being offered may vary for different customers (for example, head
office staff may use high-speed LAN connections while local offices may
have to use a lower-speed leased line).

• Multilevel SLA: The SLA is split into different levels, each addressing a
different set of customers for the same services, in the same SLA.
o Corporate-level SLA: Covering all the generic service level
management (often abbreviated as SLM) issues appropriate to every
customer throughout the organization. These issues are likely to be
less volatile, so updates (SLA reviews) are less frequently
required.
o Customer-level SLA: Covering all SLM issues relevant to the particular
customer group, regardless of the services being used.
o Service-level SLA: Covering all SLM issues relevant to the specific
services, in relation to this specific customer group.

Few Service Level Agreements are enforceable as legal contracts; most are
agreements or contracts that do not carry the force of law. It is advisable to
have an attorney review the documents before entering into a major agreement with
a cloud service provider. Service Level Agreements usually specify some
parameters, which are mentioned below:

1. Availability of the Service (uptime)

2. Latency or the response time

3. Service components’ reliability

4. Accountability of each party

5. Warranties

A well-defined and typical SLA will contain the following components:

1. Type of service to be provided: It specifies the type of service and any
additional details of the service to be provided. In the case of IP network
connectivity, the type of service will describe functions such as operation and
maintenance of networking equipment, connection bandwidth to be provided,
etc.

2. The service's desired performance level, especially its reliability and
responsiveness: A reliable service is one that suffers minimal disruptions in a
specific amount of time and is available at almost all times. A service with good
responsiveness will perform the desired action promptly after the customer
requests it.

3. Monitoring process and service level reporting: This component describes
how the performance levels are supervised and monitored. This process involves
gathering different types of statistics, how frequently these statistics will be
collected and how they will be accessed by the customers.

4. The steps for reporting issues with the service: This component specifies
the contact details to which a problem should be reported and the order in which
details about the issue have to be reported. The contract will also include the time
frame within which the problem will be investigated and by when it will be resolved.

5. Response and issue resolution time-frame/ Latency or the response time:
Response time-frame is the time period by which the service provider will start
the investigation of the issue. Issue resolution time-frame is the time period by
which the current service issue will be resolved and fixed.

6. Repercussions for the service provider not meeting its commitment: If the
provider is not able to meet the requirements as stated in the SLA, then the
service provider will have to face consequences for the same. These consequences
may include the customer's right to terminate the contract or to ask for a refund
for losses incurred by the customer due to failure of the service.

In any case, if a cloud service provider fails to meet the stated minimum targets,
the provider has to pay a penalty to the cloud service consumer as per the
agreement. So, Service Level Agreements are like insurance policies in
which the corporation has to pay as per the agreement if any casualty occurs.
Any SLA management strategy considers two well-differentiated phases:
negotiating the contract and monitoring its fulfillment in real time. Thus, SLA
management encompasses the SLA contract definition: the basic schema with the
QoS parameters; SLA negotiation; SLA monitoring; SLA violation detection; and
SLA enforcement—according to defined policies.
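As an illustration of the monitoring and violation-detection phases, the sketch
below (Python) compares measured metrics against agreed thresholds; the metric
names and threshold values are invented for the example, not taken from any real
provider's SLA.

    # Minimal sketch of SLA monitoring: measured metrics are compared against
    # the thresholds agreed in the SLA and any violations are flagged.
    AGREED_SLA = {
        "availability_pct": 99.9,   # minimum uptime percentage
        "max_response_ms": 200,     # maximum average latency
    }

    def check_sla(measured):
        """Return the list of violated SLA parameters for one monitoring interval."""
        violations = []
        if measured["availability_pct"] < AGREED_SLA["availability_pct"]:
            violations.append("availability")
        if measured["response_ms"] > AGREED_SLA["max_response_ms"]:
            violations.append("latency")
        return violations

    print(check_sla({"availability_pct": 99.5, "response_ms": 180}))
    # -> ['availability']  (this would trigger the agreed penalty/credit process)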
BILLING AND ACCOUNTING
BILLING

Cloud billing is the process of generating bills from resource data usage using
some predefined billing policies (defined in the SLA and agreed upon by both the
service provider and service user). The bill can be for real money, or it can be a
more abstract notion of cost, depending on the general policies of the individual
cloud deployment.

The cloud billing service module has three parts:

• Working Module
• Billing Policies – Language and Syntax
• The Rules/Inference Engine and its role

Working Module:

The billing is not automated and needs to be started by the manager. All the
billing services are implemented as web services. The billing service establishes
a connection to the web server through a PHP interface and gets the record
document from the Usage Records repository.

The billing policies are not part of the billing service and are an external element
that the billing service makes use of. They are written by the resource manager
and stored in a repository. The billing service then uses the Billing Policies
Repository to extract the necessary guidelines and generate the bill. The
generated bill is then stored in the Billing Repository for later use by the
manager.
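The sketch below (Python) illustrates this working module: usage records are
pulled from a repository, priced according to billing policies, and the resulting
bill is stored for the manager. The record fields and rates are assumptions made
for the example.

    # Illustrative billing run: apply policy rates to usage records, store the bill.
    usage_records = [                      # stand-in for the Usage Records repository
        {"resource": "vm_hours", "quantity": 120},
        {"resource": "storage_gb", "quantity": 500},
    ]

    billing_policies = {                   # stand-in for the Billing Policies repository
        "vm_hours": 0.05,                  # price per unit
        "storage_gb": 0.02,
    }

    def generate_bill(records, policies):
        lines = [(r["resource"], r["quantity"] * policies.get(r["resource"], 0.0))
                 for r in records]
        return {"lines": lines, "total": sum(amount for _, amount in lines)}

    bill = generate_bill(usage_records, billing_policies)
    billing_repository = [bill]            # stored in the Billing Repository for later use
    print(bill["total"])                   # -> 16.0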

Language and Syntax:

There are different types of languages that can be used to write the billing
policies. A natural language permits the expression of rules in plain English, is
easier to understand by any party and has an almost flat learning curve. The same
policies can also be described in a domain specific language (the language that
uses the same syntax and semantics as the specification requirement document).
This language is more structured and organized, and it is easier to make quick edits
or changes in, as compared to natural language.

The Rules/Inference Engine and its role:

The process of matching the new or existing facts against production rules is
called pattern matching. There are a number of algorithms used for pattern
matching by rules engines. The rules are stored in the production memory, while
the facts that the inference engine matches against them are held in the working
memory. Facts are inserted into the working memory, where they are modified or
retracted. A system with a large number of rules and facts may result in many
rules being true for the same fact assertion; these rules are said to be in conflict.
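A toy illustration of this pattern-matching loop is sketched below in Python:
production rules (a condition plus an action) are matched against facts in
working memory. The rule contents are invented purely to show the mechanism.

    # production memory: (name, condition, action) triples
    production_memory = [
        ("bulk_discount", lambda f: f.get("usage_gb", 0) > 1000,
                          lambda f: f.update(discount=0.10)),
        ("gold_support",  lambda f: f.get("tier") == "gold",
                          lambda f: f.update(support="24x7")),
    ]

    # working memory: facts the inference engine matches against the rules
    working_memory = [{"customer": "acme", "usage_gb": 1500, "tier": "gold"}]

    def run_engine(rules, facts):
        for fact in facts:
            matched = [r for r in rules if r[1](fact)]   # pattern matching
            # several matches for one fact form a conflict set; here conflict
            # resolution is simply firing the rules in declaration order
            for _, _, action in matched:
                action(fact)
        return facts

    print(run_engine(production_memory, working_memory))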

ACCOUNTING

Cloud accounting is the practice of using an accounting system that's accessed
through the internet. Cloud accounting software is similar to traditional, on-
premises, or self-install accounting software, only the accounting software is
hosted on remote servers, similar to the SaaS (Software as a Service) business
model. Data is sent into “the cloud,” where it is processed and returned to the
user.

For one, cloud accounting is more flexible. Accounting data can be accessed from
anywhere on any device with an Internet connection, rather than on a few select
on-premises computers. Secondly, unlike traditional accounting software, cloud
accounting software updates financial information automatically and provides
financial reporting in real time. This means account balances are always accurate
and fewer errors occur due to manual data entry. Cloud systems also handle
multi-currency and multi-company transactions more efficiently.

With cloud accounting, it’s also easier to get real-time reporting and visibility
throughout your organization, with greater mobile capabilities and collaboration.
Subscription-based models are popular among cloud accounting providers, and
in most cases these subscriptions are usage-based.

Cloud vs. Traditional Accounting

In the on-premises world, every time a firm grows, it encounters greater
software license and maintenance costs as well as new licenses and fees for
database, systems management and other software. The firm might also have to
make expensive capital purchases of new hardware, such as servers. With cloud
solutions, businesses don't get stuck with permanent, expensive equipment and
licenses when the business contracts and, likewise, there are no big
spikes in costs when it expands a little.

Cloud accounting solutions provide a method of storing financial information
that is at least as secure as traditional accounting software. For instance, a
company computer or laptop with critical financial information could be lost or
stolen, which could lead to an information breach. Cloud accounting, however,
leaves no trace of financial data on company computers, and access to that data
in the cloud is encrypted and password protected.

Sharing data is also less worrisome. With cloud accounting, two people simply
need access rights to the same system with their unique passwords. Traditional
methods often require flash drives to transport data, which could be lost or
stolen.

Also, cloud accounting requires far less maintenance than its traditional
counterpart. The cloud provider completes the backups, updates occur
automatically and nothing needs to be downloaded or installed on a company
computer.

Lastly, cloud providers usually have backup servers in two or more locations. If
one server network goes down, you still have access to your data. Information
kept just on-premises could be destroyed or damaged in a fire or natural disaster,
and may never be recovered.
There are two types of Cloud Billing accounts:

• Self-serve (or Online) account: Payment instrument is a credit or
debit card or ACH direct debit, depending on availability in each
country or region. Costs are charged automatically to the payment
instrument connected to the Cloud Billing account.

• Invoiced (or Offline) account: Payment instrument can be check or
wire transfer. Invoices are sent by mail or electronically. Invoices are
also accessible in the Cloud Console, as are payment receipts.

There are two types of payments profiles:

• Individual: You're using your account for your own personal
payments.

• Business: You're paying on behalf of a business, organization,
partnership, or educational institution.

Benefits of Cloud Accounting

Cloud-based accounting software now offers all the functionality and reliability
of your tried and trusted desktop accounting system, but with a wealth of
additional benefits that only online technology can deliver. If your business is
looking for a more effective way to manage its financial affairs, here are six
reasons to seriously consider a move to cloud accounting.

1. Mobile access at any time

With cloud accounting, you can access your accounts and key financial figures at
any time, from anywhere.

When you use an old-fashioned, desktop-based system, you are effectively tied
to the office. Your software, your data and your accounts all sit on a local
drive. And that limits the access you can have to your financial information.

Cloud-based accounting frees you up from this restriction. Your data and records
are all safely encrypted and stored on a cloud server, and there is no software
application for you to download – you log in and work from your web browser,
wherever you have Wi-Fi and an Internet connection.

So, wherever you are, you can always check on the status of your business.

2. A cost and time-effective solution

Working online reduces your IT costs and saves you time by keeping you
constantly connected to the business.

Desktop-based systems require an investment in IT hardware, plus the
maintenance of that hardware. You require a server to house the application
software and the related data. And you will need to pay an IT expert to maintain
both the server and the office network – that can be an expensive overhead.

Online accounting is carried out entirely from the cloud. There is no costly IT
infrastructure for you to maintain, and you can access the software whether you
are in the office or not. You can immediately approve payments or send out
invoices to customers, saving you time and making your financial processes far
more effective.

3. High security and no time-consuming back-ups

When you are cloud-based, your accounts and records are all saved and backed
up with high levels of encryption. If you have used desktop accounting, you will
be aware of the need to back-up your work at the end of each day. And you will
also know about the need for updates each time your provider brings out a new
version of the software.

Security is another area where cloud accounting outperforms a desktop system.
Your data no longer sits on a physical server in the office, or on the hard drive
of your laptop. All your accounting information is encrypted at source and saved
to the cloud. So the only people who can access your confidential information are
you, plus selected members of your team and advisers.

4. Share and collaborate with ease

Working with colleagues, and sharing data with your advisers, is an extremely
straightforward process when you’re based in the cloud.

Using the old, desktop approach, you had limited access to your accounts – and
that made collaboration with colleagues and advisers difficult. With a system like
online accounting, you, your colleagues, your management team and your
advisers can all access the same numbers – instantly, from any geographical
location.

5. Reduces paperwork and is more sustainable

Using cloud accounting can deliver the dream of having a paperless office. With
traditional accounting, everything must be printed out and dealt with in hard
copy, which is slow, inefficient and bad for the environment.

With an online accounting system, you can significantly reduce your reliance on
paperwork. Invoices can be emailed out directly to clients, removing the costs of
printing and postage – and speeding up the payment process. And because your
documents are all digitized and stored in the cloud, there is no need to keep the
paper originals.

6. Better control of your financial processes

The efficiencies of online accounting software give you greatly improved control
of your core financial processes. The online invoicing function streamlines the
whole invoice process, giving you a better view of expected income, an overview
of outstanding debts and a clear breakdown of what each customer owes your
business.
SCALING HARDWARE: CLOUD V/S TRADITIONAL
Cloud Computing vs Traditional IT Infrastructure

Cloud computing is very popular nowadays. More and more companies prefer
using cloud infrastructure rather than the traditional kind, since it is often more
reasonable to buy a cloud storage subscription than to invest in physical
in-house servers. But what exactly are the benefits of using cloud computing
instead of a traditional setup? Let's review the main differences.

The differences between cloud computing and traditional IT infrastructure

Elasticity and resilience

First of all, you do not need to buy the hardware and maintain it with your own
team. The information in the cloud is stored on several servers at the same time.
It means that even if 1 or 2 servers are damaged, you will not lose your
information. It also helps provide high uptime, up to 99.9%.

When we talk about traditional infrastructure, you have to buy and maintain the
hardware and equipment yourself. If something happens, you can lose data and
spend a lot of time and money fixing the issues.

Scalability and flexibility

Cloud computing is the perfect choice for those who do not require high
performance constantly but only use it from time to time. You can get a
subscription and use the resources you paid for. Most providers even let you
pause the subscription if you do not need it. At the same time, you're able to
control everything and get instant help from the support team.

Traditional infrastructure is not so flexible. You have to buy equipment
and maintain it even if you do not use it. In many cases, it's even more expensive
because you might need your own technical crew.
Automation

One of the biggest differences between cloud and traditional infrastructure is
how they are maintained. A cloud service is maintained by the provider's support
team. They take care of all the necessary aspects including security, updates,
hardware, etc.

Traditional infrastructure requires your own team to maintain and monitor the
system, which takes a lot of time and effort.

Cost

With cloud computing, you do not need to pay for the services you don’t use: the
subscription model means you choose the amount of space, processing power,
and other components that you really need.

With traditional infrastructure, you are limited to the hardware you have. If your
business is growing, you will regularly have to expand your infrastructure. At the
same time, you will have to support and maintain it.

Security

Many people are not sure about the security of cloud services. Why might they be
less secure? Since the company uses a third-party solution to store data, it's
reasonable to think that the provider could access the confidential data without
permission. However, there are good solutions to avoid such leaks.

As for traditional infrastructure, you and only you are responsible for who can
access the stored data. For companies that handle confidential information, this
can be the better solution.

What kind of infrastructure is a good choice for your business? It depends on
what your company does and what your needs are. Nevertheless, more and more
organisations today prefer cloud infrastructure.
Why Move from Traditional IT and to the Cloud?

These are the most common reasons organizations are motivated to move away
from traditional IT towards cloud computing. It's by no means exhaustive, but
offers a snapshot of what your peers report:

Cost Savings and Total Cost of Ownership- When you move to cloud computing
you tend to save money in several ways in comparison to traditional IT, including
the reduction of in-house equipment and infrastructure.

Dissatisfaction in Traditional IT- Many companies are frustrated with the
negatives that come with traditional IT, such as poor data backup, managing and
maintaining your own hardware, and lack of disaster recovery options.
Businesses often become dissatisfied that they don’t have the same capacity,
scalability, and flexibility as those who are leveraging the cloud. You also do not
have the same access to apps and data.

Time to Value and Ease of Implementation- Since you don’t have to spend time
configuring hardware, maintaining systems, and patching software, you can invest
your resources elsewhere.

Access to Emerging Technology- Much like a car, the minute you purchase on-
premise hardware and software, it immediately starts to age. You can access the
latest and greatest innovations with a cloud provider who has more purchasing
power and stays up-to-date with available solutions. Also, as new innovations are
released, they become less likely to integrate with legacy solutions that are often
inflexible. Instead, cloud-first development is coming to the forefront.

Using Hybrid to Optimize your Operations- Some organizations take the use
of the cloud even further by using multiple clouds simultaneously, thus giving
them even more agility. Workloads are placed where they perform best, and
where costs are most efficient.
Financial Benefits of Moving to the Cloud

As previously mentioned, there are many reasons to move your IT to the cloud,
but for many organizations the business case to move away from traditional IT is
the overriding factor in their decision.

Completely Exploited Hardware- Because cloud computing pools workloads, the ever-
changing demand evens out and becomes smoother. Thus, the hardware is
fully utilized, which means resources aren’t going to waste.

Lower Employee Costs- Most companies employ several IT professionals. Those
expenses add up quickly. When you move to the cloud, you can typically cut back
on the number of IT employees your organization needs, thus saving you a
significant amount of money.

Lower Power Expenses- When you use in-house servers, you must pay to run
them 24/7 even though you aren’t using them all the time. A cloud service
provider will charge less for energy than you would pay if you ran your own on-
premise data center.

No Capital Costs- When you send everything to the cloud you eliminate all capital
expenses. You don’t have to purchase the servers or set up the infrastructure.
Your cloud provider pays those expenses instead of you.

No Redundancy- Most companies with in-house servers have to go the extra mile
and purchase backup hardware as well. Having extra hardware lying around just
in case is a huge waste of resources. Enterprise data centers also offer redundant
power and infrastructure, which is difficult to afford on your own. You can
eliminate these expenses and use the funds elsewhere when you move to the
cloud and away from traditional IT.
ECONOMICS OF SCALING
Economies of Scale refer to the cost advantage experienced by a
firm/organization when it increases its level of output or when production
becomes efficient. Companies can achieve economies of scale by increasing the
production and lowering the per-unit cost. This happens because costs are
spread over a larger number of goods. Costs can be both fixed and variable.

Understanding Economies of Scale

Economies of scale are an important concept for any business in any industry and
represent the cost-savings and competitive advantages larger businesses have
over smaller ones.

Most consumers don't understand why a smaller business charges more for a
similar product sold by a larger company. That's because the cost per unit
depends on how much the company produces. Larger companies are able to
produce more by spreading the cost of production over a larger amount of goods.
An industry may also be able to dictate the cost of a product if there are a number
of different companies producing similar goods within that industry.

There are several reasons why economies of scale give rise to lower per-unit
costs.

• First, specialization of labor and more integrated technology boost
production volumes.

• Second, lower per-unit costs can come from bulk orders from suppliers,
larger advertising buys, or lower cost of capital.

• Third, spreading internal function costs across more units produced and
sold helps to reduce costs.

For example, if a company is producing video games, there is a one-time cost of
actually creating the game. As more copies of the game are created, the cost per
game decreases because the one-time cost is distributed over them. Costs of CDs
and other raw materials may also fall, and manufacturing efficiency may rise. So,
once a company can establish itself, it can expand the customer base for a
specific game and raise demand for that game. The extra output required to meet
that demand lowers overall cost in the long run.
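The arithmetic behind this example is simple enough to sketch in Python; the
numbers are invented for illustration.

    def per_unit_cost(fixed_cost, variable_cost_per_copy, copies):
        # the one-time (fixed) cost is spread over every copy produced
        return fixed_cost / copies + variable_cost_per_copy

    fixed = 1_000_000      # one-time cost of creating the game
    variable = 2.0         # media and raw-material cost per copy

    for copies in (10_000, 100_000, 1_000_000):
        print(copies, round(per_unit_cost(fixed, variable, copies), 2))
    # 10000 102.0, 100000 12.0, 1000000 3.0 -- per-unit cost falls as output grows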

Types of Economies of Scale

Economies of scale can be both internal as well as external. Internal economies
of scale are borne from within the company (based on management decisions),
while external ones are based on outside factors.

Internal economies: Internal economies of scale happen when a company cuts
costs internally, as they're unique to that particular firm. This may be the result
of the sheer size of a company or of decisions from the firm's
management. Larger companies may be able to achieve internal economies of
scale—lowering their costs and raising their production levels—because they can
buy resources in bulk, have special technology, or can access more capital.

Examples of internal economies of scale;

Specialization through the division of labor

Workers in larger-scale factories and other such production operations can do
more precise, specific jobs. This situation increases economic efficiency as
relatively limited training can allow workers to become excellent at their assigned
tasks.

Technology

A larger firm may be able to adopt production technologies that a
smaller firm just can’t. That’s because large-scale businesses can afford to invest
in expensive, specialized capital in the form of specialized machinery and other
manufacturing equipment.

Bulk buying power

With greater buying power, a large firm can purchase its factor inputs in bulk at
discounted prices. They can buy more from suppliers at a lower price.

Financial

Larger firms tend to be more creditworthy so that they have access to credit with
especially favorable rates of borrowing. They have better financing options and
lower interest rates. This means they access capital more cheaply. Additionally,
large firms listed on stock markets can raise money by selling equity more easily.

Risk-bearing

Investments that might end up being extremely worthwhile may also be quite
costly and high-risk. Generally, only large companies can afford to accept such
risks and make large investments.

External economies: External economies of scale originate from outside the
firm and are achieved because of external factors, or factors that affect an
entire industry. These factors benefit the entire industry, and no single firm
controls these costs on its own. They occur when there is a highly-skilled labor
pool, subsidies and/or tax reductions, and partnerships and joint ventures—
anything that can cut down on costs for many companies in a specific industry.

Examples of external economies of scale

Transportation

The presence of larger firms may create better transportation networks. This
reduces costs for companies that use these networks.

Skilled labor

High concentrations of skilled labor often appear as workers receive training and
education to serve particular firms and industries. One of the best-known
examples of this concentration is Silicon Valley, where there are huge
concentrations of programmers drawn by the presence of massive tech firms like
Apple and Google.

Benefits: Internal economies of scale are beneficial to the business because
decreased costs mean it is able to decrease prices to gain a competitive
advantage, or to increase profit margins. Technical economies describe a larger
company's ability to invest in the latest technology in order to increase efficiency.
Managerial economies are when a large company attracts a highly skilled
workforce because of the benefits it can offer an employee. Marketing
economies are seen in a larger company's ability to gain bulk buying discounts,
and financial economies occur when a company can attract investors and is able
to borrow at lower interest rates. Therefore, economies of scale are hugely
beneficial to an expanding business due to the competitive advantage that can
be achieved.
MANAGING DATA IN CLOUD
Cloud data management is a way to manage data across cloud platforms, either
with or instead of on-premises storage.

Cloud data management is emerging as an alternative to data management using
traditional on-premises software. Instead of buying on-premises storage resources
and managing them, resources are bought on demand in the cloud. This service
model allows organizations to receive dedicated cloud data management
resources on an as-needed basis.

As more and more businesses adopt cloud services, seizing on the latest software
tools and development methodologies, the lines between them are blurring. What
really distinguishes one business from the next is its data.

Much of the intrinsic value of a business resides in its data, but we’re not just
talking about customer and product data, there’s also supply chain data,
competitor data, and many other types of information that might fall under the
big data umbrella. Beyond that there are a multitude of smaller pieces of data,
from employee records to HVAC system logins, that are rarely considered, but
are necessary for the smooth running of any organization. And don’t forget about
source code. Your developers are using cloud-based repositories for version
control of application code. It also needs to be protected.

In the past, companies would typically try to centralize their data and lock it safely
away in an impenetrable vault, but hoarding data doesn’t allow you to extract
value from it. Data gains business value when it’s transported from place to place
as needed and available to be leveraged, not locked away in some dark place.
People need swift, easy access to data and real-time analysis to make innovative
leaps, achieve operational excellence and gain that all-important competitive
edge.

Businesses grow organically, so new systems and software are adopted, mergers
and acquisitions prompt integrations and migrations, and new devices and
endpoints are added to networks all the time. Even the most organized of
businesses inevitably ends up with a complex structure and data that’s
distributed globally.

Another layer that exacerbates this problem is people. Sometimes your
employees will show poor judgement. They may unexpectedly wipe out critical
data or accidentally delete configuration files. Disgruntled employees may even
do these things deliberately. Then you must consider all the employees and
contractors working for your partners and vendors, who often have access to
your business-critical data.

To effectively manage your data without shuttering it and blocking
legitimate requests for access, you need a solid cloud data management
strategy and that begins with five important considerations.

1. Resting data

Most of the time data sits in storage. It’s often behind firewalls and other layers
of security, which it should be, but it’s also vital to ensure that your data is
encrypted. It should be encrypted all the time, even when you think it’s safely
tucked up in your vault.

If you properly protect your data at rest by encrypting it, then anyone stealing it
will end up with lines of garbled junk that they can’t decipher. You may think it’s
unlikely a cybercriminal will breach your defenses, but what about a motivated
insider with malicious intent or even a careless intern? Hackers’ most common
point of penetration is actually your employees’ devices, whereby they gain a
foothold that can be leveraged to go deeper into your networks. Encrypt
everything and take proper precautions to restrict access to the decryption key.
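A minimal sketch of encryption at rest is shown below, using the third-party
Python "cryptography" package; key management (ideally via a KMS or secrets
vault) is the hard part in practice and is simplified away here.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # in production, store this in a KMS/vault
    cipher = Fernet(key)

    record = b"account=42;balance=1000"  # illustrative record
    stored = cipher.encrypt(record)      # what actually lands on disk or object storage

    # without the key, `stored` is unreadable ciphertext; with it, data is recovered
    assert cipher.decrypt(stored) == record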

2. Accessing data

It’s very important that your employees can access the data they need to do their
jobs whenever and wherever they want, but access must also be controlled. Start
by analyzing which people need access to what data and create tailored access
rights and controls that restrict unnecessary access. Any person requesting
access to data must be authenticated and every data transaction should be
recorded so you can audit later if necessary. Active Directory is the most common
place to manage and control such access today.

Access control should also scan the requesting device to ensure it’s secure and
doesn’t harbor any malware or viruses. Analyzing behavior to see if the user or
device requesting access falls into normal patterns of use can also be a great way
of highlighting nefarious activity.

3. Data in transit

It’s crucial to create a secure, authenticated and encrypted tunnel between the
authenticated user and device and the data they’re requesting. You want to make
the data transfer as swift and painless as possible for the end user, but without
compromising security. Make sure data remains encrypted in transit, so no
interceptor can read it. Choosing the right firewalls and virtual private network
(VPN) services is vital. You may also want to compartmentalize endpoints to keep
data safely siloed or employ virtualization to ensure it doesn’t reside on insecure
devices.

There’s no doubt that most companies focus their data protection efforts here
and it is important, but don’t focus on data in transit to the detriment of other
areas.

4. Arriving data

When the data arrives at its destination you want to be certain that it is authentic
and hasn’t been tampered with. Can you prove data integrity? Do you have a clear
audit trail? This is key to effectively managing data and reducing the risk of any
breach or infection. Phishing attacks often show up in the inbox as genuine data
to fool people into clicking somewhere they shouldn’t and downloading malware
that bypasses your carefully constructed defenses.

5. Defensible backup and recovery

Even with the first four pillars solidly implemented, things can and do go
sideways from time to time when least expected. Most companies recognize the
importance of proper backup hygiene today and have implemented backup and
recovery processes. Be sure to actually test and validate your ability to restore
the backups and recover periodically.
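One simple way to validate a trial restore, sketched below in Python, is to
compare a checksum of the restored file with the checksum recorded at backup
time; the file paths are hypothetical.

    import hashlib

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    backup_digest = sha256_of("backups/orders.db")        # recorded at backup time
    restored_digest = sha256_of("restore-test/orders.db") # computed after a trial restore

    if backup_digest != restored_digest:
        raise RuntimeError("Restore test failed: backup is not usable")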

In the cloud, there’s another critical area to carefully consider. Be careful not to
put all your data eggs in one basket. Do not store your backups in the same cloud
account where your production data resides. That’s a formula for disaster you
may not recover from should a hacker somehow gain access to your network and
delete everything.

That is, leverage multiple cloud accounts to segregate your backup data from
your production data. Be certain to back up your cloud infrastructure
configuration information as well, in case you ever need to rebuild it for any
reason.

In the unlikely event your production environment should somehow become
compromised, it’s critical that a copy of all backups and cloud configuration is
stored separately and secured from tampering and deletion. One way to do this
is to create a separate backup account (on the same cloud or different cloud) with
a “write only” policy that allows backup and archival data to be written and read,
but not deleted. This protects your business by ensuring your DR systems and
backups will always be available should you need them to recover.

The benefits of cloud data management are speeding up technology deployment
and reducing system maintenance costs; it can also provide increased flexibility
to help meet changing business requirements.

The cloud is useful as a data storage tier for disaster recovery, backup, and long-
term archiving.
SCALABILITY AND CLOUD SERVICES
What is scalability?

Scalability refers to the idea of a system in which every application or piece of
infrastructure can be expanded to handle increased load.

A scalable system is able to increase or reduce its performance, resources and
functionalities according to the user’s needs. We are talking about a very flexible
infrastructure, customizable for each company’s requirements and able to
respond to specific needs immediately. High scalability allows optimizing the
overall efficiency of the system and achieving cost savings. Scalability is one of
the key features of cloud computing solutions, and one of the reasons why the
cloud has been so successful on the market and will keep growing.

Key Features of Cloud Scalability

1. Grow or shrink- Scaling is a change in size. It can mean increasing or
decreasing.

2. Sizeable difference- You don’t use a cool word like “scalability” to describe a
minor change. It should mean adding a significant number of users or amount of
data, or hardware-like assets such as vCPUs and vRAM.

3. Non-disruptive- Scaling doesn’t mean replacing. You’re adding resources to an
existing deployment, so there should be minimal downtime or learning curve.
Adding seats to your Google Apps deployment as you grow – that’s scaling.
Switching to Office 365 because Google Apps can’t support you any longer, not
so much.

4. Relatively fast- Not all cloud solutions scale up in minutes or with the click
of a button, but scaling with the cloud should at least be faster than buying and
setting up the hardware yourself.

5. Relatively easy- If scalability was easy in every case, we’d all be AWS Architects
making a sweet $100,000+ per year. But the architecture of the cloud still makes
things easier than scaling locally. Without virtualization, you’d have to run your
largest apps on expensive, difficult-to-maintain mainframes.

Benefits of scalable systems

A scalable system provides several benefits, here are the most important ones:

1. Performance: One core benefit of scalability in the cloud is that it facilitates
performance. Scalable architecture has the ability to handle the bursts of traffic
and heavy workloads that will come with business growth.

2. Cost-efficient: You can allow your business to grow without making any
expensive changes to the current setup. This reduces the cost implications of
storage growth, making scalability in the cloud very cost-effective.

3. Easy and Quick: Scaling up or scaling out in the cloud is simpler; you can
commission additional VMs with a few clicks and after the payment is processed,
the additional resources are available without any delay.

4. Capacity: Scalability ensures that with the continuous growth of your business
the storage space in cloud grows as well. Scalable cloud computing systems
accommodate your data growth requirements. With scalability, you don’t have to
worry about additional capacity needs.

5. Business incredible speed and flexibility: Want to open a new branch? Add a
new team? Start a new project or campaign? Scalability lets you add the IT
resources for initiatives like this in minutes, not months.

6. Avoid costly, disruptive migrations: You don’t want to deploy your IT on a
platform, only to find that it can’t support you after several years of solid growth.
With a scalable platform, you only migrate when you want to – not when your
underlying platform lets you down.

7. Highly-customizable infrastructure according to specific customer needs

8. Possibility of increasing or decreasing the system power according to the
needs of the moment and the customer’s availability

9. Scalability ensures a minimum service level even in case of failure

Types of Scaling

Vertical scaling

Vertical scaling is often thought of as the “easier” of the two methods. When scaling a
system vertically, you add more power to an existing instance. This can mean
more memory (RAM), faster storage such as Solid State Drives (SSDs), or more
powerful processors (CPUs).
The reason this is thought to be the easier option is that hardware is often trivial
to upgrade on cloud platforms like AWS, where servers are already virtualized.
There is also very little (if any) additional configuration you are required to do at
the software level.

Horizontal scaling

Horizontal scaling is slightly more complex. When scaling your systems
horizontally, you generally add more servers to spread the load across multiple
machines.
With this, however, comes added complexity to your system. You now have
multiple servers that require general administration tasks such as updates,
security and monitoring, but you must also now sync your application, data and
backups across many instances.

Diagonal scaling

Diagonal scaling, then, is the combination of vertical and horizontal scaling,
where you grow (vertically) within your existing infrastructure until you hit the
tipping point, when you can simply clone the server to add more resources in a
new cloud server.

Diagonal scaling is really the sweet spot of scalability in the cloud, because while
you can estimate and plan for growth, sometimes surges happen when we least
expect them to. The ability to keep your business going strong no matter what’s
thrown at it is the epitome of a nimble company – and one that is truly possible
only with the cloud.

Four ways to help you get the most out of your cloud in terms of scalability.

1. Employ auto scaling

Many cloud providers offer auto scaling, which can even further help manage
resources and allocate load balances appropriately. Auto scaling is defined as
automatically scaling capacity either up or down based on user-defined
conditions. It allows you to ensure that you always have the precise number of
instances available in order to handle your application’s load.

Auto scaling works by IT defining specific policies or milestones that will
automatically trigger the creation of a new instance or the expansion of an
existing one, removing the need to constantly monitor traffic and resources used
by every application. You can use multiple policies for the same service or
application to have it scale up or down based on policies you create that are
based on specific events. For example, if you know that an application always sees
high use at night and low use in the morning, you can create a schedule-based
policy that scales up the number of nodes in the evening and back down the next
day. In addition, you can create auto scaling triggers for those events you don’t
anticipate.
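A simplified auto-scaling decision function is sketched below in Python, combining
a schedule-based policy with CPU-based triggers; the thresholds, schedule and
instance counts are assumptions for the example, not defaults of any provider.

    def desired_instances(current, avg_cpu, hour):
        # schedule-based policy: keep more nodes during the known evening peak
        minimum = 6 if (hour >= 20 or hour < 2) else 2
        if avg_cpu > 75:                       # trigger: scale out under heavy load
            current += 1
        elif avg_cpu < 25 and current > minimum:
            current -= 1                       # scale in, but never below the minimum
        return max(current, minimum)

    print(desired_instances(current=3, avg_cpu=82.0, hour=21))  # -> 6
    print(desired_instances(current=6, avg_cpu=15.0, hour=10))  # -> 5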

2. Use load balancing

Load balancers offer another automatic way of scaling up by distributing
workloads across various nodes in order to maximize resources. A load balancer
accepts all incoming application traffic and then acts as an usher, finding for
each incoming request the instance that will best utilize the resources on
hand.

When dealing with a peak in users or resource consumption, for example, load
balancing attempts to balance your workloads among all available nodes in order
to equalize unused resources. Load balancers also generally continuously
monitor the health of each instance to ensure traffic is sent only to healthy
instances, and they can also move workloads they deem too heavy for a specific
node, instead finding a less burdened node.
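The "usher" behavior can be sketched in Python as a least-loaded selection over
healthy instances; the instance data is invented for the example.

    instances = [
        {"name": "node-a", "healthy": True,  "active_requests": 12},
        {"name": "node-b", "healthy": True,  "active_requests": 4},
        {"name": "node-c", "healthy": False, "active_requests": 0},  # failed health check
    ]

    def route(request, pool):
        healthy = [i for i in pool if i["healthy"]]            # skip unhealthy nodes
        target = min(healthy, key=lambda i: i["active_requests"])
        target["active_requests"] += 1
        return target["name"]

    print(route({"path": "/checkout"}, instances))             # -> node-b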

3. Employ containers with container orchestration

Containers – and container orchestration systems – have quickly become a
popular way to create more scalable and portable infrastructure. Containers
share a single kernel yet they are isolated from their
surroundings, limiting issues to the single container, as opposed to the entire
machine. Containers require fewer resources and offer more flexibility than
virtual machines because they can share things like operating systems and other
components. This allows containers to function the same way across platforms,
and thus can be easily and rapidly moved between nodes.

The beauty of containers is the ability to deploy large numbers of identical
application instances, which, when combined with their low use of resources,
makes containers a great way to scale up certain microservices. Container
orchestration systems like Docker Swarm, Amazon ECS, and Kubernetes offer
automated management and coordination of containers, which allows for
automated services like auto-placement (similar to load balancing) and auto-
replication (similar to auto scaling) to more easily allow a container stack to
scale.

Containers aren’t perfect for every application, so it’s important to assess your
current applications to determine which are suited for containerization.

4. Test, test, test again

Your cloud environment may be scalable – but is the application you want to scale
able to do so? Testing for scalability is a crucial part of growing and should be
done continuously to prevent bottlenecks later on. Make sure you add extra time
at the end of your application development cycle to test for scalability to ensure
you don’t stumble across major issues when scaling that application later on.
DATABASE AND DATA STORES IN CLOUD
Cloud Database

A cloud database is a collection of informational content,
either structured or unstructured, that resides on a private, public or hybrid
cloud computing infrastructure platform.

From a structural and design perspective, a cloud database is no different than
one that operates on a business's own on-premises servers. The critical difference
lies in where the database resides.

Where an on-premises database is connected to local users through a
corporation's internal local area network (LAN), a cloud database resides on
servers and storage furnished by a cloud or database as a service (DBaaS)
provider and is accessed solely through the internet. To a software application,
for example, a SQL database residing on-premises or in the cloud should appear
identical.

The behavior of the database should be the same whether accessed through
direct queries, such as SQL statements, or through API calls. However, it may be
possible to discern small differences in response time. An on-premises database,
accessed over a LAN, is likely to provide a slightly faster response than a cloud-
based database, which requires a round trip on the internet for each interaction
with the database.
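The point that application code looks the same either way can be sketched with
the PyMySQL driver; only the host name changes between the on-premises and
cloud case. The host names, credentials and table are placeholders, not real
endpoints.

    import pymysql

    def fetch_customers(host):
        conn = pymysql.connect(host=host, user="app", password="secret", database="crm")
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT id, name FROM customers LIMIT 10")
                return cur.fetchall()
        finally:
            conn.close()

    rows_onprem = fetch_customers("db01.corp.local")         # LAN round trip
    rows_cloud = fetch_customers("mydb.example-dbaas.com")   # internet round trip, same SQL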

Key features:

• A database service built and accessed through a cloud platform

• Enables enterprise users to host databases without buying dedicated
hardware

• Can be managed by the user or offered as a service and managed by a
provider
• Can support relational databases (including MySQL and PostgreSQL)
and NoSQL databases (including MongoDB and Apache CouchDB)

• Accessed through a web interface or vendor-provided API

How Cloud Databases Work

Cloud databases, like their traditional ancestors, can be divided into two broad
categories: relational and non-relational.

A relational database, typically written in structured query language (SQL), is
composed of a set of interrelated tables that are organized into rows and
columns. The relationship between tables and columns (fields) is specified in
a schema. SQL databases, by design, rely on data that is highly consistent in
its format, such as banking transactions or a telephone directory. Popular
relational databases used in the cloud include MySQL, Oracle, IBM DB2 and
Microsoft SQL Server. Some, such as MySQL, are open source.

Non-relational databases, sometimes called NoSQL, do not employ a table model.
Instead, they store content, regardless of its structure, as a single document. This
technology is well-suited for unstructured data, such as social media content,
photos and videos.

Types of Cloud Databases

Two cloud database environment models exist:

1. Traditional 2. Database as a service (DBaaS)

In a traditional cloud model, a database runs on an IT department's
infrastructure on a virtual machine. Tasks of database oversight and
management fall upon IT staffers of the organization.

The DBaaS model is a fee-based subscription service in which the database runs
on the service provider's physical infrastructure. Different service levels are
usually available. In a classic DBaaS arrangement, the provider maintains the
physical infrastructure and database, leaving the customer to manage the
database's contents and operation.
Alternatively, a customer can set up a managed hosting arrangement, in which
the provider handles database maintenance and management. This latter option
may be especially attractive to small businesses that have database needs but
lack adequate IT expertise.

Cloud database benefits

Compared with operating a traditional database on an on-site physical server and
storage architecture, a cloud database offers the following distinct advantages:

• Elimination of physical infrastructure- In a cloud database environment,
the cloud computing provider of servers, storage and other infrastructure
is responsible for maintenance and keeping high availability. The
organization that owns and operates the database is only responsible for
supporting and maintaining the database software and its contents. In a
DBaaS environment, the service provider is responsible for managing and
operating the database software, leaving the DBaaS users responsible only
for their own data.

• Cost savings- Through the elimination of a physical infrastructure owned
and operated by an IT department, significant savings can be achieved
from reduced capital expenditures, less staff, decreased electrical
and HVAC operating costs and a smaller amount of needed physical space.

• Ease of access- Users can access cloud databases from virtually anywhere,
using a vendor’s API or web interface.

• Disaster recovery- In the event of a natural disaster, equipment failure or
power outage, data is kept secure through backups on remote servers.

• Scalability- Cloud databases can expand their storage capacities at run
time to accommodate changing needs. Organizations only pay for what
they use.

• DBaaS benefits also include performance guarantees, failover support,
declining pricing and specialized expertise.
Migrating legacy databases to the cloud

An on-premises database can migrate to a cloud implementation. Numerous
reasons exist for doing this, including the following:

• Allows IT to retire on-premises physical server and storage infrastructure.

• Fills the talent gap when IT lacks adequate in-house database expertise.

• Improves processing efficiency, especially when applications and analytics
that access the data also reside in the cloud.

• Achieves cost savings through several means, including:

o Reduction of in-house IT staff.

o Continually declining cloud service pricing.

o Paying for only the resources consumed, known as pay-as-you-
go pricing.

Relocating a database to the cloud can be an effective way to further enable
business application performance as part of a wider software-as-a-
service deployment. Doing so simplifies the processes required to make
information available through internet-based connections. Storage
consolidation can also be a benefit of moving a company's databases to the
cloud. Databases in multiple departments of a large company, for example, can
be combined in the cloud into a single hosted database management system.

Cloud storage- Cloud storage is a service that enables saving data on an
offsite storage system.

Cloud storage involves stashing data on hardware in a remote physical location,
which can be accessed from any device via the internet. Clients send files to a
data server maintained by a cloud provider instead of (or as well as) storing them
on their own hard drives. Dropbox, which lets users store and share files, is a good
example. Cloud storage systems generally encompass hundreds of data servers
linked together by a master control server, but the simplest system might involve
just one.
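A minimal sketch of a client sending a file to a cloud storage data server over
HTTP is shown below in Python; the endpoint URL and token are hypothetical, and
real providers each expose their own APIs or SDKs.

    import requests

    def upload(local_path, object_name):
        with open(local_path, "rb") as f:
            resp = requests.put(
                f"https://storage.example.com/mybucket/{object_name}",  # placeholder URL
                data=f,
                headers={"Authorization": "Bearer <token>"},            # placeholder auth
            )
        resp.raise_for_status()

    upload("report.pdf", "backups/report.pdf")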

Cloud computing also involves clients connecting to remote computing
infrastructure via a network, but this time that infrastructure includes shared
processing power, software and other resources. This frees users from having to
constantly update and maintain their software and systems, while at the same
time allowing them to harness the processing power of a vast network. Familiar
everyday services powered by cloud computing include social networks like
Facebook, webmail clients like Gmail, and online banking apps.

Following are the categories of storage devices:

1) Block Storage Devices – This type of device provides raw storage to
clients. The raw storage is partitioned to create volumes. A volume is a
recognizable unit of data storage.
2) File Storage Devices – File storage devices are provided to the client in
the form of files, for maintaining the client's file system. Storage data is
accessed using the Network File System (NFS).

Following are the categories of storage classes:

1) Unmanaged Cloud Storage

• The storage is preconfigured for the customer; this is known
as unmanaged cloud storage.

• The customer cannot format or install his own file system or change drive
properties.

2) Managed Cloud Storage

• Managed cloud storage provides online storage space on demand.

• This system appears to the user as a raw disk that the user can partition and
format.
LARGE SCALE DATA PROCESSING
What is MapReduce?

MapReduce is a software framework and programming model used for
processing huge amounts of data. MapReduce programs work in two phases,
namely Map and Reduce. Map tasks deal with splitting and mapping of data, while
Reduce tasks shuffle and reduce the data.

The input to each phase is key-value pairs. In addition, every programmer needs
to specify two functions: the map function and the reduce function.

MapReduce is a framework using which we can write applications to process huge
amounts of data, in parallel, on large clusters of commodity hardware in a reliable
manner.

MapReduce is a processing technique and a program model for distributed
computing based on Java. The MapReduce algorithm contains two important
tasks, namely Map and Reduce. Map takes a set of data and converts it into
another set of data, where individual elements are broken down into tuples
(key/value pairs). Secondly, the reduce task takes the output from a map as
an input and combines those data tuples into a smaller set of tuples. As the
sequence of the name MapReduce implies, the reduce task is always performed
after the map job.

The major advantage of MapReduce is that it is easy to scale data processing over
multiple computing nodes. Under the MapReduce model, the data processing
primitives are called mappers and reducers. Decomposing a data processing
application into mappers and reducers is sometimes nontrivial. But, once we
write an application in the MapReduce form, scaling the application to run over
hundreds, thousands, or even tens of thousands of machines in a cluster is
merely a configuration change. This simple scalability is what has attracted many
programmers to use the MapReduce model.
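
The classic word-count example below is a minimal, single-machine Python
sketch of this programming model: the map function emits (word, 1) pairs, a
shuffle step groups the pairs by key, and the reduce function sums each group.
Hadoop would run many mappers and reducers in parallel across a cluster; the
sketch only shows the key-value flow.

    from collections import defaultdict

    def map_phase(line):
        # Map: break a line of input into (key, value) pairs.
        for word in line.split():
            yield (word.lower(), 1)

    def reduce_phase(word, counts):
        # Reduce: combine all values that share a key into one result.
        return (word, sum(counts))

    lines = ["the quick brown fox", "the lazy dog", "the quick dog"]

    # Shuffle: group intermediate values by key before reducing.
    grouped = defaultdict(list)
    for line in lines:
        for word, count in map_phase(line):
            grouped[word].append(count)

    results = [reduce_phase(word, counts) for word, counts in grouped.items()]
    print(sorted(results))   # e.g. [('brown', 1), ('dog', 2), ('fox', 1), ...]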

The Algorithm
• Generally, the MapReduce paradigm is based on sending the computation to
where the data resides.

• A MapReduce program executes in three stages, namely the map stage, the
shuffle stage, and the reduce stage.

o Map stage − The map or mapper’s job is to process the input data.
Generally the input data is in the form of file or directory and is
stored in the Hadoop file system (HDFS). The input file is passed to
the mapper function line by line. The mapper processes the data and
creates several small chunks of data.

o Reduce stage − This stage is the combination of the Shuffle stage
and the Reduce stage. The Reducer’s job is to process the data that
comes from the mapper. After processing, it produces a new set of
output, which will be stored in the HDFS.

• During a MapReduce job, Hadoop sends the Map and Reduce tasks to the
appropriate servers in the cluster.

• The framework manages all the details of data-passing such as issuing
tasks, verifying task completion, and copying data around the cluster
between the nodes.

• Most of the computing takes place on nodes with data on local disks,
which reduces network traffic.

After completion of the given tasks, the cluster collects and reduces the data to
form an appropriate result, and sends it back to the Hadoop server.
CLOUD MANAGEMENT WITH PUPPET
Puppet is a software configuration management tool which includes its
own declarative language to describe system configuration. It is a model-driven
solution that requires limited programming knowledge to use.

Puppet is designed to manage the configuration of Unix-like and Microsoft
Windows systems declaratively. The user describes system resources and their
state, either using Puppet's declarative language or a Ruby DSL (domain-specific
language). This information is stored in files called "Puppet manifests". Puppet
discovers the system information via a utility called Facter and compiles the
Puppet manifests into a system-specific catalog containing resources and
resource dependencies, which are applied against the target systems. Any actions
taken by Puppet are then reported.

Puppet usually follows a client-server architecture. The client is known as an agent
and the server is known as the master. For testing and simple configuration, it
can also be used as a stand-alone application run from the command line.

Puppet Server is installed on one or more servers, and Puppet Agent is installed
on all the machines to be managed. Puppet Agents communicate with the server
and fetch configuration instructions. The Agent then applies the configuration
on the system and sends a status report to the server. The Agent can run as a
daemon, be triggered periodically by a cron job, or be run manually whenever
needed.

The Puppet programming language is a declarative language that describes the
state of a computer system in terms of "resources", which represent underlying
network and operating system constructs. The user assembles resources
into manifests that describe the desired state of the system. These manifests are
stored on the server and compiled into configuration instructions for agents on
request.

Rather than specifying the exact commands to perform a system action, the user
creates a resource, which Puppet then translates into system-specific instructions
which are sent to the machine being configured. For example, if a user wants to
install a package on three different nodes, each of which runs a different
operating system, they can declare one resource, and Puppet will determine
which commands need to be run based on the data obtained from Facter, a
program that collects data about the system it is running on, including its
operating system, IP address, and some hardware information.

Providers on the node use Facter facts and other system details to translate
resource types in the catalog into machine instructions that will actually configure
the node.
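
As a hedged illustration of such a resource declaration, the manifest fragment
below (not taken from any real deployment; the package and service names are
only examples) declares a package and a service in Puppet's declarative language,
leaving Puppet and Facter to choose the appropriate provider (apt, yum, and so
on) for each node's operating system.

    # Illustrative manifest: ensure the ntp package is installed and its
    # service is running, regardless of the node's operating system.
    package { 'ntp':
      ensure => installed,
    }

    service { 'ntp':
      ensure  => running,
      enable  => true,
      require => Package['ntp'],   # configure the service only after the package
    }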

A normal Puppet run has the following stages:

1. An agent sends facts from Facter to the master.

2. Puppet builds a graph of the resources and their inter-dependencies,
representing the order in which they need to be configured, for every client.
The master sends the appropriate catalog to each agent node.

3. The actual state of the system is then configured according to the desired
state described in the manifest file. If the system is already in the desired
state, Puppet will not make any changes, making transactions idempotent.

4. Finally, the agent sends a report to the master, detailing what changes were
made and any errors that occurred.
CASE STUDY: EUCALYPTUS
HISTORY

Development of Eucalyptus started as a research project at US-based Rice
University in 2003. In 2009, a company named Eucalyptus Systems was formed
to commercialise the Eucalyptus software. Later, in 2012, the firm entered into
an agreement with Amazon Web Services to maintain compatibility and
Application Programming Interface support. In 2014, it was acquired by
Hewlett-Packard (HP), which incidentally has its own cloud offerings under the
HP Helion brand. The Helion portfolio contains an assortment of cloud-related
products, including HP's own distribution of OpenStack called HP Helion
OpenStack. At present, Eucalyptus is part of the HPE portfolio and is known as
HPE Helion Eucalyptus.

EUCALYPTUS ARCHITECTURE

Eucalyptus command-line interfaces (CLIs) can manage both Amazon Web
Services and their own private instances. Users can easily migrate instances from
Eucalyptus to Amazon Elastic Compute Cloud. Network, storage and compute are
managed by the virtualisation layer, and instances are isolated from one another
by hardware virtualisation. The following terminology is used by the Eucalyptus
architecture in cloud computing.

1. Images: Any software application, configuration, module software or
framework software packaged and deployed in the Eucalyptus cloud is known
as a Eucalyptus Machine Image.
2. Instances: When an image is run and used, it becomes an instance.
3. Networking: The Eucalyptus network is partitioned into three modes: Static
mode, System mode, and Managed mode.
4. Access control: It is used to impose restrictions on users.
5. Eucalyptus elastic block storage: It provides block-level storage volumes
that can be attached to an instance.
6. Auto-scaling and load balancing: It is used to create or destroy instances
or services based on requirements.
EUCALYPTUS COMPONENTS

Components of Eucalyptus in cloud computing:

1. Cluster Controller: It manages one or more Node Controllers and is
responsible for deploying and managing instances on them.
2. Storage Controller: It allows the creation of snapshots of volumes.
3. Cloud Controller: It is the front end for the whole environment.
4. Walrus Storage Controller: It is a simple file storage system.
5. Node Controller: It is the basic component of a node. It maintains the life
cycle of the instances running on each node.

Eucalyptus features include:

• Supports both Linux and Windows virtual machines (VMs).

• Application programming interface (API) compatible with the Amazon EC2
platform (see the short sketch after this list).

• Compatible with Amazon Web Services (AWS) and Simple Storage Service
(S3).

• Works with multiple hypervisors including VMware, Xen and KVM.

• Can be installed and deployed from source code or DEB and RPM packages.

• Internal process communications are secured through SOAP and WS-Security.

• Multiple clusters can be virtualized as a single cloud.

• Administrative features such as user and group management and reports.
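
The sketch below hints at what this API compatibility means in practice: the
standard AWS SDK for Python (boto3) is simply pointed at a private Eucalyptus
endpoint instead of Amazon's. The endpoint URL, credentials and region shown
are hypothetical placeholders.

    import boto3

    # Hypothetical private-cloud endpoint; the same EC2 API calls used
    # against AWS are expected to work against a Eucalyptus front end.
    ec2 = boto3.client(
        "ec2",
        endpoint_url="https://compute.cloud.example.internal:8773/",
        aws_access_key_id="EUCA_ACCESS_KEY",
        aws_secret_access_key="EUCA_SECRET_KEY",
        region_name="eucalyptus",
    )

    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["State"]["Name"])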


OTHER TOOLS

Numerous other tools can be used to interact with AWS and Eucalyptus in
cloud computing, and they are listed below.

1. Vagrant AWS Plugin: This tool provides config files to manage AWS
instances and manage VMs on the local system.
2. s3curl: This is a tool for interaction between AWS S3 and Eucalyptus
Walrus.
3. s3fs: This is a FUSE file system, which can be used to mount a bucket from
Walrus or S3 as a local file system.
4. Cloudberry S3 Explorer: This Windows tool is for managing files between
S3 and Walrus.

THE ADVANTAGES OF THE EUCALYPTUS CLOUD

The benefits of Eucalyptus in cloud computing are:

1. Eucalyptus can be used to operate both a Eucalyptus private cloud and a
Eucalyptus public cloud.

2. Users can run Amazon or Eucalyptus machine images as instances on both
clouds.

3. It is not yet very popular in the market, but it is a strong competitor to
CloudStack and OpenStack.

4. It has 100% Application Programming Interface compatibility with all the
Amazon Web Services.

5. Eucalyptus can be used with DevOps tools like Chef and Puppet.


EUCALYPTUS VS OTHER IAAS PRIVATE CLOUDS

There are numerous Infrastructure-as-a-Service offerings available in the
market, such as OpenNebula, Eucalyptus, CloudStack and OpenStack, all of
which are used for private and public Infrastructure-as-a-Service deployments.

Of all the Infrastructure-as-a-Service offerings, OpenStack remains the most
popular, most active and largest open-source cloud computing project. Even so,
enthusiasm for OpenNebula, CloudStack and Eucalyptus remains strong.

WHAT IS THE USE OF EUCALYPTUS IN CLOUD COMPUTING?

It is used to build hybrid, public and private clouds. It can also turn your own
data centre into a private cloud and allow you to extend its functionality to many
other organisations.

CONCLUSION

Eucalyptus in cloud computing is open-source software that implements an
AWS-compatible cloud, which is cost-effective, secure and flexible. It can easily
be deployed in existing IT infrastructures to enjoy the advantages of both private
and public cloud models.
CASE STUDY: MICROSOFT AZURE

Execution Environment

The Windows Azure execution environment consists of a platform for
applications and services hosted within one or more roles. The types of roles you
can implement in Windows Azure are:

• Azure Compute (Web and Worker Roles). A Windows Azure application
consists of one or more hosted roles running within the Azure data centers.
Typically there will be at least one Web role that is exposed for access by
users of the application. The application may contain additional roles,
including Worker roles that are typically used to perform background
processing and support tasks for Web roles.

• Virtual Machine (VM role). This role allows you to host your own custom
instance of the Windows Server 2008 R2 Enterprise or Windows Server 2008
R2 Standard operating system within a Windows Azure data center.
Data Management

Windows Azure, SQL Azure, and the associated services provide opportunities for
storing and managing data in a range of ways. The following data management
services and features are available:

• Azure Storage: This provides four core services for persistent and durable
data storage in the cloud. The services support a REST interface that can
be accessed from within Azure-hosted or on-premises (remote)
applications. The four storage services are:

o The Azure Table Service provides a table-structured storage
mechanism based on the familiar rows and columns format, and
supports queries for managing the data. It is primarily aimed at
scenarios where large volumes of data must be stored, while being
easy to access and update.

o The Binary Large Object (BLOB) Service provides a series of
containers aimed at storing text or binary data. It provides both
Block BLOB containers for streaming data, and Page BLOB containers
for random read/write operations (a short upload sketch follows this
list).

o The Queue Service provides a mechanism for reliable, persistent
messaging between role instances, such as between a Web role and
a Worker role.

o Windows Azure Drives provide a mechanism for applications to
mount a single volume NTFS VHD as a Page BLOB, and upload and
download VHDs via the BLOB.

• SQL Azure Database: This is a highly available and scalable cloud
database service built on SQL Server technologies, and supports the
familiar T-SQL based relational database model. It can be used with
applications hosted in Windows Azure, and with other applications running
on-premises or hosted elsewhere.
• Data Synchronization: SQL Azure Data Sync is a cloud-based data
synchronization service built on Microsoft Sync Framework technologies. It
provides bi-directional data synchronization and data management
capabilities allowing data to be easily shared between multiple SQL Azure
databases and between on-premises and SQL Azure databases.

• Caching: This service provides a distributed, in-memory, low latency and
high throughput application cache service that requires no installation or
management, and dynamically increases and decreases the cache size
automatically as required. It can be used to cache application data, ASP.NET
session state information, and for ASP.NET page output caching.
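
As a brief sketch of the BLOB service (using the current Azure SDK for Python
rather than the raw REST interface described above), the snippet below uploads
a local file as a Block BLOB. The connection string and container name are
hypothetical placeholders, and the container is assumed to exist.

    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string("<storage-connection-string>")
    blob = service.get_blob_client(container="documents", blob="notes.txt")

    # Upload a local file as a Block BLOB via the storage service's REST API.
    with open("notes.txt", "rb") as data:
        blob.upload_blob(data, overwrite=True)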

Networking Services

Windows Azure provides several networking services that you can take advantage
of to maximize performance, implement authentication, and improve
manageability of your hosted applications. These services include the following:

• Content Delivery Network (CDN). The CDN allows you to cache publicly
available static data for applications at strategic locations that are closer
(in network delivery terms) to end users. The CDN uses a number of data
centers at many locations around the world, which store the data in BLOB
storage that has anonymous access. These do not need to be locations
where the application is actually running.

• Virtual Network Connect. This service allows you to configure roles of an
application running in Windows Azure and computers on your on-premises
network so that they appear to be on the same network. It uses a software
agent running on the on-premises computer to establish an IPsec-protected
connection to the Windows Azure roles in the cloud, and provides the
capability to administer, manage, monitor, and debug the roles directly.

• Virtual Network Traffic Manager. This is a service that allows you to set
up request redirection and load balancing based on three different
methods. Typically you will use Traffic Manager to maximize performance
by redirecting requests from users to the instance in the closest data center
using the Performance method. Alternative load balancing methods
available are Failover and Round Robin.

• Access Control. This is a standards-based service for identity and access
control that makes use of a range of identity providers (IdPs) that can
authenticate users. ACS acts as a Security Token Service (STS), or token
issuer, and makes it easier to take advantage of federated authentication
techniques where user identity is validated in a realm or domain other than
that in which the application resides. An example is controlling user access
based on an identity verified by an identity provider such as Windows Live
ID or Google.

• Service Bus. This provides a secure messaging and data flow capability for
distributed and hybrid applications, such as communication between
Windows Azure hosted applications and on-premises applications and
services, without requiring complex firewall and security infrastructures. It
can use a range of communication and messaging protocols and patterns
to provide delivery assurance and reliable messaging; can scale to
accommodate varying loads; and can be integrated with on-premises
BizTalk Server artifacts.
CASE STUDY: AMAZON EC2

Elastic IP addresses

They allow you to allocate a static IP address and programmatically assign it to
an instance. You can enable monitoring on an Amazon EC2 instance using
Amazon CloudWatch in order to gain visibility into resource utilization,
operational performance, and overall demand patterns (including metrics such
as CPU utilization, disk reads and writes, and network traffic). You can create an
Auto Scaling group using the Auto Scaling feature to automatically scale your
capacity under certain conditions, based on metrics that Amazon CloudWatch
collects. You can also distribute incoming traffic by creating an elastic load
balancer using the Elastic Load Balancing service.

Amazon Elastic Block Storage (EBS) volumes provide network-attached persistent
storage to Amazon EC2 instances. Point-in-time consistent snapshots of EBS
volumes can be created and stored on Amazon Simple Storage Service (Amazon
S3). Amazon S3 is a highly durable and distributed data store. With a simple web
services interface, you can store and retrieve large amounts of data as objects in
buckets (containers) at any time, from anywhere on the web, using standard HTTP
verbs. Copies of objects can be distributed and cached at 14 edge locations
around the world by creating a distribution using the Amazon CloudFront
service – a web service for content delivery (static or streaming content).

Amazon SimpleDB is a web service that provides the core functionality of a
database – real-time lookup and simple querying of structured data – without the
operational complexity. You can organize the dataset into domains and can run
queries across all of the data stored in a particular domain. Domains are
collections of items that are described by attribute-value pairs.
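
Returning to Elastic IP addresses, the short boto3 sketch below allocates a static
address and attaches it to a running instance; the instance ID is a hypothetical
placeholder and credentials are assumed to be configured.

    import boto3

    ec2 = boto3.client("ec2")

    # Reserve a static public IP address, then attach it to an instance.
    allocation = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(
        AllocationId=allocation["AllocationId"],
        InstanceId="i-0123456789abcdef0",   # placeholder instance ID
    )
    print("Elastic IP:", allocation["PublicIp"])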

Amazon Relational Database Service (Amazon RDS)

It provides an easy way to set up, operate and scale a relational database in the
cloud. You can launch a DB Instance and get access to a full-featured MySQL
database and not worry about common database administration tasks such as
backups and patch management. Amazon Simple Queue Service (Amazon SQS) is
a reliable, highly scalable, hosted distributed queue for storing messages as they
travel between computers and application components.
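
A minimal boto3 sketch of such a queue is shown below; the queue name and
message body are hypothetical.

    import boto3

    sqs = boto3.client("sqs")

    # One component places a message on the queue...
    queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]
    sqs.send_message(QueueUrl=queue_url, MessageBody="order-1042")

    # ...and another component picks it up later and deletes it.
    response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    for message in response.get("Messages", []):
        print(message["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])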

Amazon Simple Notifications Service (Amazon SNS)

It provides a simple way to notify applications or people from the cloud by
creating Topics and using a publish-subscribe protocol.
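
A brief publish-subscribe sketch with boto3 might look like the following; the
topic name and e-mail address are placeholders, and the subscriber must confirm
the subscription before receiving messages.

    import boto3

    sns = boto3.client("sns")

    # Create a Topic, add a subscriber, and publish a notification to it.
    topic_arn = sns.create_topic(Name="deployment-alerts")["TopicArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")
    sns.publish(TopicArn=topic_arn, Subject="Deploy finished", Message="Build 42 is live.")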

Amazon Elastic MapReduce

It provides a hosted Hadoop framework running on the web-scale infrastructure
of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage
Service (Amazon S3) and allows you to create customized JobFlows. A JobFlow is
a sequence of MapReduce steps.
Amazon Virtual Private Cloud (Amazon VPC)

It allows you to extend your corporate network into a private cloud contained
within AWS. Amazon VPC uses IPSec tunnel mode that enables you to create a
secure connection between a gateway in your data center and a gateway in AWS.

Amazon Route53

It is a highly scalable DNS service that allows you to manage your DNS records by
creating a HostedZone for every domain you would like to manage.
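
For instance, a HostedZone for a (hypothetical) domain could be created with
boto3 as sketched below; the caller reference just needs to be a unique string.

    import boto3

    route53 = boto3.client("route53")

    zone = route53.create_hosted_zone(
        Name="example.com.",                   # placeholder domain
        CallerReference="initial-setup-0001",  # any unique string
    )
    print(zone["HostedZone"]["Id"])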

AWS Identity and Access Management (IAM)

It enables you to create multiple Users with unique security credentials and
manage the permissions for each of these Users within your AWS Account. IAM
is natively integrated into AWS Services. No service APIs have changed to support
IAM, and existing applications and tools built on top of the AWS service APIs will
continue to work when using IAM. AWS also offers various payment and billing
services that leverage Amazon’s payment infrastructure.

All AWS infrastructure services offer utility-style pricing that requires no long-term
commitments or contracts. For example, you pay by the hour for Amazon EC2
instance usage and pay by the gigabyte for storage and data transfer in the case
of Amazon S3. More information about each of these services and their pay-as-
you-go pricing is available on the AWS Website. Note that using the AWS cloud
doesn’t require sacrificing the flexibility and control you’ve grown accustomed
to:
You are free to use the programming model, language, or operating system
(Windows, OpenSolaris or any flavor of Linux) of your choice. You are free to pick
and choose the AWS products that best satisfy your requirements—you can use
any of the services individually or in any combination. Because AWS provides
resizable (storage, bandwidth and computing) resources, you are free to
consume as much or as little as you need and pay only for what you consume.
You are free to use the system management tools you’ve used in the past and
extend your datacenter into the cloud.
