Cloud File Aaditya Mishra 19100btcse05628
Google Sheets allows users to edit, organize, and analyze many types
of information. It supports collaboration: multiple users can edit and
format files in real time, and any changes made to the spreadsheet are
tracked in its revision history.
1. Editing
2. Explore
3. Offline Editing
4. Supported file formats
5. Integration with other Google products
Google Sheets is a powerful tool—it's everything you'd expect from a
spreadsheet, with the extra perks of an online app. While the
example spreadsheet that we created may have been a bit silly, the
practical applications of using Sheets for your workflows (both
business and personal) are limitless.
Whether you need to make a budget, outline your next proposal, gather
data for a research project, or log info from any other app that connects
with Zapier, a Google Sheets spreadsheet can bring your data to life.
And with everything stored in Google Drive, you'll never worry about
losing your files again—even if your computer dies.
Experiment – 2
Aim – Service deployment and usage over cloud using VirtualBox.
Introduction to Oracle VirtualBox :-
Oracle VirtualBox (formerly Sun VirtualBox, Sun xVM VirtualBox and
Innotek VirtualBox) is a free and open-source hosted hypervisor for
x86 computers, currently developed by Oracle Corporation.
Developed initially by Innotek GmbH, it was acquired by Sun
Microsystems in 2008, which was, in turn, acquired by Oracle in 2010.
VirtualBox may be installed on a number of host operating systems,
including Linux, macOS, Windows, Solaris and Open Solaris. There are
also ports to FreeBSD and Genode. It supports the creation and
management of guest virtual machines running versions and derivations
of Windows, Linux, BSD, OS/2, Solaris, Haiku, OSx86 and others, and
limited virtualization of macOS guests on Apple hardware. For some
guest operating systems, a "Guest Additions" package of device drivers
and system applications is available, which typically improves
performance, especially graphics performance.
Software-based virtualization
Hardware-based virtualization
Hardware-assisted virtualization is the use of a computer's physical
components to support the software that creates and manages virtual
machines (VMs). Virtualization is an idea that traces its roots back to
legacy mainframe system designs of the 1960s. Early mainframes used
the Control Program/Conversational Monitor System (CP/CMS) operating
system and were adept at provisioning the mainframe's available
computing resources into isolated environments capable of running
enterprise workloads.
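Hardware-assisted virtualization depends on CPU extensions: on Linux, Intel VT-x is reported as the `vmx` flag and AMD-V as the `svm` flag in /proc/cpuinfo. As a minimal sketch (the function name and sample string below are illustrative, not part of any tool discussed here), a host's support can be checked like this:

```python
def has_virtualization_support(cpuinfo_text):
    """Return the virtualization flag found in /proc/cpuinfo-style text:
    'vmx' for Intel VT-x, 'svm' for AMD-V, or None if neither is present."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
    return None

# On a real Linux host you would pass the contents of /proc/cpuinfo:
#   with open("/proc/cpuinfo") as f:
#       print(has_virtualization_support(f.read()))
sample = "processor : 0\nflags : fpu vme de pse msr vmx sse2\n"
print(has_virtualization_support(sample))  # vmx
```

If neither flag is present, hypervisors such as VirtualBox fall back to the slower software-based virtualization described above.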
Step 3 :- After running the executable file, it will first show a
preparing-installation window.
Step 4 :- Click Next to continue the installation.
Step 5 :- Select the installation path for the software, and click
Continue.
Step 6 :- Click on Install to install Oracle VirtualBox.
https://downloads.cloudera.com/demo_vm/virtualbox/cloudera-quickstart-vm-5.12.0-0-virtualbox.zip
Step 3 :- After clicking Open, this window will appear; click Import.
When you click Import, the QuickStart VM is imported from the selected
path, and a new window then opens with the virtual machine in the
powered-off state.
When you power on the virtual machine, it will take some time to start,
which is completely dependent on the system hardware. The
recommended RAM is 2 to 12 GB for the proper functioning of the
Ubuntu cluster. You can use this as a Linux virtual machine, run Ubuntu
on it, and proceed with further work from here.
Step 6 :- When it is on, a screen like this will be observed, in which
the browser shows the Ubuntu machine.
Step 7 :- Install Ubuntu; once this completes, Ubuntu is successfully
installed.
EXPERIMENT 03
AIM: Performance evaluation of services over cloud using VMware tool.
VMware Workstation Pro is a hosted hypervisor that runs on x64 and x86
versions of Windows and Linux operating systems. It enables users to set
up virtual machines (VMs) on a single physical machine and use them
simultaneously alongside the host machine. Each virtual machine can
execute its own operating system, including versions of Microsoft
Windows, Linux, BSD, and MS-DOS. VMware Workstation is developed
and sold by VMware, Inc., a division of Dell Technologies. First, we will
install VMware Workstation, and then we will install the Cloudera
Hadoop software on an Ubuntu Linux server.
Step 2 :- Search for VMware Workstation and click Download for
Windows, then click Download Now; an .exe file will be downloaded.
Or use this link to download VMware :-
https://download3.vmware.com/software/wkst/file/VMware-workstation-full-15.5.6-16341506.exe
After clicking Install, the installation will proceed according to the
choices you made.
Step 10 :- The installation window will appear, and installation will be
in progress.
Step 11 :- Finally, after installation, click Finish to complete it.
Step 1 :- Open the VMware application and click on Open a Virtual
Machine.
This is the main screen interface of VMware Workstation.
Step 3 :- After importing, this window will appear. Now power on the
virtual machine.
When you power on the virtual machine, it will take some time to start,
which is completely dependent on the system hardware. The
recommended RAM is 2 to 8 GB for the proper functioning of the
Ubuntu cluster. You can use this as a Linux virtual machine and run
Hadoop on it, then start Ubuntu and check the Linux version before
proceeding with further work.
EXPERIMENT 04
Google App Engine (GAE) is a platform that allows software developers
to leverage Google's cloud computing infrastructure for web application
development. Computing resources (or services) similar to those used by
Google Docs are available under GAE. Google offers the same reliability
and openness as its other flagship products like Google Search and
Gmail. GAE is free up to a point, but there is plenty of room for the
experiments that are needed to evaluate the technology.
History
Google App Engine (often referred to as GAE or simply App Engine; its
Java flavour is abbreviated GAE/J) is a platform as a service (PaaS) cloud
computing platform for developing and hosting web applications in
Google-managed data centers. Applications are sandboxed and run
across multiple servers. App Engine offers automatic scaling for web
applications—as the number of requests increases for an application,
App Engine automatically allocates more resources for the web
application to handle the additional demand.
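The automatic scaling described above can be sketched as a simple control rule: instance count grows with the request rate, clamped to a configured range. The function name and all the numbers below are illustrative inventions, not App Engine's real internals or thresholds.

```python
import math

def desired_instances(requests_per_sec, capacity_per_instance=50,
                      min_instances=1, max_instances=20):
    """Pick an instance count proportional to load, clamped to a
    configured range (illustrative defaults, not App Engine's)."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))

print(desired_instances(10))    # 1  (light traffic stays at the minimum)
print(desired_instances(500))   # 10 (a spike allocates more instances)
print(desired_instances(5000))  # 20 (clamped at the configured maximum)
```

The clamp at both ends mirrors the billing model described below: scaling to zero or near-zero keeps the app within the free tier, while the maximum bounds the fees charged for additional instance hours.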
Google App Engine is free up to a certain level of consumed resources.
Fees are charged for additional storage, bandwidth, or instance hours
required by the application. It was first released as a preview version in
April 2008, and came out of preview in September 2011.
Characteristics
Quickly reach customers and end users by deploying web apps on App
Engine. With zero-config deployments and zero server management, App
Engine allows you to focus on writing code. Plus, App Engine
automatically scales to support sudden traffic spikes without
provisioning, patching, or monitoring.
Whether you’re building your first mobile app or looking to reach existing
users via a mobile experience, App Engine automatically scales the
hosting environment for you. Seamless integration with Firebase
provides an easy-to-use frontend mobile platform along with the scalable
and reliable back end.
Installation
You can download the Google App Engine SDK by going to:
https://storage.googleapis.com/appenginesdks/featured/GoogleAppEngine-1.9.85.msi
Then click on Next and set the installation path for Google App
Engine.
After setting the installation path, click on Next and then click on Install.
Once the installation has completed, click on Finish.
EXPERIMENT 05
AIM: Working on Aneka services for Cloud application.
INTRODUCTION:
Aneka is a market-oriented Cloud development and management
platform with rapid application development and workload
distribution capabilities. Aneka is an integrated middleware package
which allows you to seamlessly build and manage an interconnected
network, in addition to accelerating the development, deployment and
management of distributed applications using the Microsoft .NET
Framework on these networks. It is market-oriented since it allows
you to build, schedule, provision and monitor results using pricing,
accounting, and QoS/SLA services in private and/or public (leased)
network environments.
Aneka is a workload distribution and management platform that
accelerates applications in Microsoft .NET framework environments.
Some of the key advantages of Aneka over other grid- or cluster-based
workload distribution solutions include:
• rapid deployment tools and framework
• the ability to harness multiple virtual and/or physical machines for
accelerating application results
• provisioning based on QoS/SLA
• support for multiple programming and application environments
• simultaneous support of multiple run-time environments
• built on top of the Microsoft .NET Framework, with support for Linux
environments through Mono
BUILD
Aneka includes a Software Development Kit (SDK) comprising a
combination of APIs and tools that enable you to express your
application. Aneka also allows you to build different run-time
environments and build new applications.
Aneka provides APIs and tools that enable applications to be
virtualized over a heterogeneous network.
ANEKA ARCHITECTURE
Aneka is a platform and a framework for developing distributed
applications on the Cloud. It harnesses the spare CPU cycles of a
heterogeneous network of desktop PCs and servers or data centers on
demand. Aneka provides developers with a rich set of APIs for
transparently exploiting such resources and expressing the business
logic of applications by using the preferred programming abstractions.
System administrators can leverage a collection of tools to monitor
and control the deployed infrastructure. This can be a public cloud
available to anyone through the Internet, or a private cloud
constituted by a set of nodes with restricted access.
The Aneka-based computing cloud is a collection of physical and
virtualized resources connected through a network, which may be either
the Internet or a private intranet. Each of these resources hosts an
instance of the Aneka Container representing the runtime
environment where the distributed applications are executed. The
container provides the basic management features of the single node
and leverages all the other operations on the services that it is hosting.
The services are broken up into fabric, foundation, and execution
services. Fabric services directly interact with the node through the
Platform Abstraction Layer (PAL) and perform hardware profiling and
dynamic resource provisioning. Foundation services identify the core
system of the Aneka middleware, providing a set of basic features to
enable Aneka containers to perform specialized and specific sets of
tasks. Execution services directly deal with the scheduling and
execution of applications in the Cloud.
Overview of Aneka Framework
One of the key features of Aneka is its ability to provide different
ways of expressing distributed applications by offering different
programming models; execution services are mostly concerned with
providing the middleware with an implementation for these models.
Additional services such as persistence and security are transversal
to the entire stack of services that are hosted by the Container. At the
application level, a set of different components and tools are
provided to:
1) simplify the development of applications (SDK);
2) port existing applications to the Cloud;
3) monitor and manage the Aneka Cloud.
Foundation Services
Logical management of the distributed system built
on top of the infrastructure:
a) Storage management for applications
• Centralized file storage – more suitable for compute-intensive
applications
• Distributed file storage – more suitable for data-intensive
applications
• To support different protocols, the concept of a file channel is
introduced.
• A file channel identifies a pair of components:
o file channel controller : the server part
o file channel handler : the client part
• The storage service supports the execution of task-based
programming, such as the Task and Thread models, and
Parameter Sweep based applications.
c) Resource reservation
• Supports the execution of distributed applications
• Allows for reserving resources for exclusive use by specific
applications.
Application Services
• Manage the execution of applications
• Constitute a layer that differentiates according to the specific
programming model
• Scheduling Service and Execution Service
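The task-based programming that these scheduling and execution services support can be mimicked locally with a worker pool: a set of independent tasks is submitted and the results are collected in order. This is only a local sketch under that analogy; Aneka itself schedules tasks onto containers across a network, and the function names here are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def run_tasks(tasks, workers=4):
    """Run independent tasks on a pool of workers and collect results
    in submission order. A thread pool only mimics locally what Aneka
    does across networked containers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda task: task(), tasks))

# A parameter-sweep style workload: the same computation over many inputs.
sweep = [lambda n=n: n * n for n in range(5)]
print(run_tasks(sweep))  # [0, 1, 4, 9, 16]
```

The key property shared with the Task model is that each unit of work is independent, so the middleware (here, the pool) is free to decide where and when each task runs.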
Building Aneka Cloud
Specifying the installation folder: C:\Program Files\Manjrasoft\Aneka.3.0
Step 3 – Confirm and start the installation
Confirm Installation
At this point you are ready to begin the installation. Click “Next” to
start the installation or “Back” to change your installation folder.
Installation Complete
Once the installation is complete, close the wizard and launch Aneka
Management Studio from the start menu.
EXPERIMENT 06
AIM: Working on Application deployment & services of Microsoft Azure.
INTRODUCTION:
Microsoft Azure is a cloud service provider from Microsoft that offers cloud
computing services such as compute, storage, security and many other
domains. Microsoft is one of the global leaders when it comes to cloud
solutions and global cloud infrastructure. Microsoft Azure provides services in
60+ global regions and serves 140+ countries. It provides services in the form
of Infrastructure as a Service, Platform as a Service and Software as a Service.
It even provides serverless computing, meaning you just supply your code and
all your backend activities are managed by Microsoft Azure.
Azure Cloud Service (read Web/Worker Roles) is one of the earliest Platform
as a Service (PaaS) offerings from Microsoft Azure. In fact, when Azure started
in 2008, Cloud Services was the only compute option available in Azure (Virtual
Machines, Websites etc. came a bit later).
With Cloud Services you can run web applications (typically by hosting your
application in a Web Role) or run background applications (typically by hosting
your application/background service in a Worker Role). Since it is a PaaS
offering, you need not worry about the issues that come with IaaS (i.e.
patching, configuring etc.). You simply provide your application and the
desired settings in the form of a package to Microsoft, and based on that
Azure will create VMs for you and deploy your applications in those VMs.
Cloud Services offer you a lot of flexibility (when compared with WebApps), yet
take away the complexities that you would normally face when working with
Virtual Machines (IaaS).
Though not officially deprecated, Cloud Services is heading that way. Microsoft
is pushing for the use of other PaaS offerings (like WebApps, WebJobs,
Functions, Service Fabric etc.). If you're building a new app, my
recommendation would be not to use Cloud Services.
It integrates easily with other Microsoft products, which makes it very
popular among organizations that already use them. The platform is now
over a decade old and has grown to compete with the best in the market.
History
Microsoft Azure began in the mid-2000s as an internal initiative codenamed
Project Red Dog. At the time, Amazon had already launched its cloud
computing service, and Microsoft was rushing to catch up.
During the Microsoft Professional Developers Conference 2008, two years after
Amazon Web Services (AWS) had gone live with its Simple Storage Service, Ray
Ozzie, Microsoft’s chief software architect, announced that the company
planned to launch its own cloud computing service called Windows Azure. The
plan called for Microsoft to offer five key categories of cloud services: Windows
Azure for compute, storage and networking; Microsoft SQL Services for
databases; Microsoft .NET Services for developers; Live Services for file sharing;
and Microsoft SharePoint Services and Microsoft Dynamics CRM Services SaaS
offerings.
Ozzie told the crowd, “It’s a transformation of our software and a
transformation of our strategy.”
He acknowledged Amazon’s leadership, noting that AWS had “established a
base-level design pattern, architecture models, and business models that we’ll
all learn from.” And he predicted that one day, “all our enterprise software will
be delivered as an online service as an option.”
After the announcement, Microsoft began to roll out preview versions of its
cloud services, and in February 2010, the Windows Azure Platform became
commercially available. Early reviews of the service were mixed, with many
analysts comparing Azure unfavourably with AWS. However, Microsoft
improved Azure dramatically over time. It also added support for a wide variety
of programming languages, frameworks and operating systems, including Linux
— something that once would have been unthinkable for a Microsoft product.
Recognizing that its cloud computing service had moved far beyond Windows,
the company renamed Windows Azure as Microsoft Azure in April 2014.
In the years since, Microsoft has continued to expand its cloud capabilities,
largely living up to Ozzie’s predictions at the initial announcement in 2008. It
has also increased its support for open source software, and today Azure is a
reasonable choice even for enterprises that don’t run Windows servers.
Competing closely with AWS, Google Cloud and IBM, Microsoft Azure is one of
the unquestioned cloud leaders – and some observers say it has a chance to be
the top cloud vendor, long term.
Characteristics
Compute — includes Virtual Machines, Virtual Machine Scale Sets,
Functions for serverless computing, Batch for containerized batch
workloads, Service Fabric for microservices and container orchestration,
and Cloud Services for building cloud-based apps and APIs
Networking — includes a variety of networking tools, like the Virtual
Network, which can connect to on-premises data centers; Load Balancer;
Application Gateway; VPN Gateway; Azure DNS for domain hosting;
Content Delivery Network; and Traffic Manager.
Installation
Step-1: Log in to the Azure portal.
http://www.microsoft.com/windowsazure/
Step-2: Click Create a resource > Compute, and then scroll down to and click
Cloud Service.
Result :- We successfully configured cloud services on Microsoft Azure.
EXPERIMENT 07
AIM: Working on Application deployment & services of IBM Smart Cloud.
INTRODUCTION:
The IBM SmartCloud brand includes infrastructure as a service, software as a
service and platform as a service offered through public, private and hybrid
cloud delivery models. IBM places these offerings under three umbrellas:
SmartCloud Foundation, SmartCloud Services and SmartCloud Solutions.
SmartCloud Foundation consists of the infrastructure, hardware, provisioning,
management, integration and security that serve as the underpinnings of a
private or hybrid cloud. Built using those foundational components, PaaS, IaaS
and backup services make up SmartCloud Services. Running on this cloud
platform and infrastructure, SmartCloud Solutions consist of a number of
collaboration, analytics and marketing SaaS applications.
IBM also builds cloud environments for clients that are not necessarily on the
SmartCloud Platform.
For example, features of the SmartCloud platform—such as Tivoli management
software or IBM Systems Director virtualization—can be integrated separately
as part of a non-IBM cloud platform. The SmartCloud platform consists solely of
IBM hardware, software, services and practices.
IBM SmartCloud Enterprise and SmartCloud Enterprise+ compete with products
like those of Rackspace and Amazon Web Services. Erich Clementi, vice
president of Global Technology Services at IBM, said in 2012 that the goal with
SmartCloud Enterprise and SmartCloud Enterprise+ was to provide an Amazon
EC2-like experience primarily for test and development purposes and to
provide a more robust experience for production workloads.
For a standard server, define the IP address and host names in the
following format:
IP LONG-HOSTNAME SHORT-HOSTNAME
For a Dynamic Host Configuration Protocol (DHCP) server that uses a
loopback IP address, define the loopback IP address and the required
values in the following format:
LOOPBACK-IP LONG-HOSTNAME SHORT-HOSTNAME
To log in to IBM SmartCloud Analytics - Log Analysis after you set up the
DHCP server, use the Fully Qualified Domain Name (FQDN) or the host name.
You cannot use the IP address as you would for non-DHCP servers.
About this task
When you run the installation script, IBM SmartCloud Analytics - Log
Analysis and IBM Installation Manager Version 1.7 are installed. Where
necessary, IBM SmartCloud Analytics - Log Analysis upgrades the
currently installed version of IBM Installation Manager.
Results
You can now log on to the service with either the Notes client or the web client.
To return at a later time and use the web client, log on to
http://www.ibmcloud.com/social using your web login email and password.
EXPERIMENT 08
AIM: Working on Heroku for Cloud application deployment.
INTRODUCTION:
Managing resources at large scale while providing performance isolation and
efficient use of underlying hardware is a key challenge for any cloud
management software. Most virtual machine (VM) resource management
systems like VMware DRS clusters, Microsoft PRO and Eucalyptus, do not
currently scale to the number of hosts and VMs needed by cloud offerings to
support the elasticity required to handle peak demand. In addition to scale,
other problems a cloud-level resource management layer needs to solve
include heterogeneity of systems, compatibility constraints between virtual
machines and underlying hardware, islands of resources created due to storage
and network connectivity and limited scale of storage resources.
Managing compute and IO resources at large scale in both public and private
clouds is challenging. The success of any cloud management software critically
depends on the flexibility, scale and efficiency with which it can utilize the
underlying hardware resources while providing necessary performance
isolation. Customers expect cloud service providers to deliver quality of service
(QoS) controls for tenant VMs. Thus, resource management at cloud scale
requires the management platform to provide a rich set of resource controls
that balance the QoS of tenants with overall resource efficiencies of
datacenters.
For public clouds, some systems (e.g., Amazon EC2) provide largely a 1:1
mapping between virtual and physical CPU and memory resources. This leads to
poor consolidation ratios and customers are unable to exploit the benefits from
statistical multiplexing that they enjoy in private clouds.
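The consolidation-ratio argument above is simple arithmetic: with a strict 1:1 mapping, provisioned vCPUs cannot exceed physical cores, while over-commitment pushes the ratio above 1.0. The function name and figures below are illustrative, not drawn from any provider's published numbers.

```python
def consolidation_ratio(vcpus_provisioned, physical_cores):
    """Ratio of virtual CPUs sold to physical cores available:
    1.0 is a strict 1:1 mapping, above 1.0 means over-commitment."""
    return vcpus_provisioned / physical_cores

# Strict 1:1 mapping: 64 physical cores can back at most 64 vCPUs.
print(consolidation_ratio(64, 64))   # 1.0
# Over-committed private cloud: 192 vCPUs on the same 64 cores, relying
# on statistical multiplexing (most VMs are idle most of the time).
print(consolidation_ratio(192, 64))  # 3.0
```

A higher ratio lowers the cost per VM, but only remains safe when the management layer enforces the QoS controls discussed in the next section.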
For private clouds, resource management solutions like VMware DRS and
Microsoft PRO have led to better performance isolation, higher utilization of
underlying hardware resources via over-commitment and overall lower cost of
ownership.
Cloud Resource Management Operations
There are many ways to provide resource management controls in a cloud
environment. We pick
VMware’s Distributed Resource Scheduler (DRS) as an example because it has
a rich set of controls providing the services needed for successful multi-
resource management while providing differentiated QoS to groups of VMs,
albeit at a small scale compared to typical cloud deployments. In this section,
we first describe the services provided by DRS, then discuss the importance of
such services in the cloud setting, and finally highlight some of the challenges in
scaling the services DRS provides.
In a large scale environment, having these controls will alleviate the
noisy-neighbour problem for tenants if, like DRS, the underlying management
infrastructure natively supports automated enforcement and guarantees. At
the same time, the cloud service provider must be able to over-commit the
hardware resources safely allowing better efficiency from statistical
multiplexing of resources without sacrificing the exposed guarantees.
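How the standard controls interact when capacity is divided can be sketched in a few lines: each VM receives capacity in proportion to its shares, then its reservation (guaranteed minimum) and limit (hard maximum) are enforced. This is a simplified illustration under those definitions, not VMware's actual DRS algorithm, and the VM names and figures are invented.

```python
def allocate(capacity, vms):
    """Divide capacity in proportion to shares, then enforce each VM's
    reservation (guaranteed minimum) and limit (hard maximum)."""
    total_shares = sum(vm["shares"] for vm in vms)
    allocation = {}
    for vm in vms:
        proportional = capacity * vm["shares"] / total_shares
        allocation[vm["name"]] = min(vm["limit"],
                                     max(vm["reservation"], proportional))
    return allocation

vms = [
    {"name": "web", "shares": 2000, "reservation": 1024, "limit": 8192},
    {"name": "db",  "shares": 1000, "reservation": 2048, "limit": 8192},
]
# "web" gets 4000 MB by shares; "db" is lifted to its 2048 MB reservation.
print(allocate(6000, vms))
```

A real scheduler must additionally rebalance the surplus created when reservations and limits override the proportional split; this sketch only shows how the three controls compose for a single pass.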
VMware Clusters
Clusters are a new concept in virtual infrastructure management. They give you
the power of multiple hosts with the simplicity of managing a single entity. A
cluster is a group of loosely connected computers that work together, so that,
from the point of view of aggregating resources such as CPU processing
capability and memory, they can be viewed as though they are a single
computer.
VMware clusters let you aggregate the various hardware resources of individual
ESX Server hosts but manage the resources as if they resided on a single host.
When you power on a virtual machine, it can be given resources from
anywhere in the cluster, rather than be tied to a specific ESX Server host.
Resource Pools
In addition to the basic resource controls presented earlier, administrators and
users can specify flexible resource management policies for groups of VMs. This
is facilitated by introducing the concept of a logical resource pool – a container
that can be used to specify an aggregate resource allocation for a set of VMs. A
resource pool is a named object with associated settings for each managed
resource – the same familiar shares, reservation, and limit controls used for
VMs. Admission control is performed at the pool level; the sum of the
reservations for a pool’s children must not exceed the pool’s own
reservation. Separate, per-pool allocations provide both isolation between
pools, and sharing within pools. Resource pools are useful in dividing large
capacity into logically grouped users. Organizational administrators can use
resource pool hierarchies to mirror human organizational structures, and to
support delegated administration.
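The pool-level admission-control rule stated above, that the sum of the children's reservations must not exceed the pool's own reservation, can be written down directly. The function name and figures are illustrative only.

```python
def admit(pool_reservation, child_reservations, new_reservation):
    """Admit a new child only if the children's total reservation stays
    within the pool's own reservation."""
    return sum(child_reservations) + new_reservation <= pool_reservation

pool_mb = 8192              # memory reserved for the whole pool
children_mb = [2048, 3072]  # reservations of existing child VMs
print(admit(pool_mb, children_mb, 2048))  # True  (7168 <= 8192)
print(admit(pool_mb, children_mb, 4096))  # False (9216 > 8192)
```

Because the check is local to each pool, delegated administrators can manage their own subtree without being able to over-draw the capacity granted to them.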
Using VMware DRS
This section describes some of the setup and operation tasks you can perform
using DRS and VirtualCenter: adding and removing hosts from clusters, setting
up and allocating resource pools to virtual machines, and delegating resource
administration to pool administrators.
Enabling DRS
VMware DRS is included as an integrated component in VMware Infrastructure
3 Enterprise. It is also available as add-on license options to VMware
Infrastructure 3 Starter and VMware Infrastructure 3 Standard. To use DRS
when you create VMware clusters, you need to set the Enable VMware DRS
option, so that DRS can use the cluster load distribution information for initial
virtual machine placement, to make load balancing recommendations, and to
perform automatic runtime virtual machine migration.
To install VMware Workstation on a Windows host:
1. Log in to the Windows host system as the Administrator user or as a
user who is a member of the local Administrators group.
2. Open the folder where the VMware Workstation installer was
downloaded. The default location is the Downloads folder for the user
account on the Windows host.
3. Right-click the installer and click Run as Administrator.
4. Select a setup option:
Typical: Installs typical Workstation features. If the Integrated Virtual Debugger
for Visual Studio or Eclipse is present on the host system, the associated
Workstation plug-ins are installed.
EXPERIMENT 09
AIM: Working and configuration of Eucalyptus.
INTRODUCTION:
Eucalyptus is a Linux-based software architecture that implements scalable
private and hybrid clouds within your existing IT infrastructure. Eucalyptus
allows you to provision your own collections of resources (hardware,
storage, and network) through a self-service interface on an as-needed basis.
You deploy a Eucalyptus cloud across your enterprise’s on-premise data center.
Users access Eucalyptus over your enterprise's intranet. This allows sensitive
data to remain secure from external intrusion behind the enterprise firewall.
You can install Eucalyptus on the following Linux distributions:
• CentOS 7
• Red Hat Enterprise Linux (RHEL) 7
Eucalyptus was designed to be easy to install and as non-intrusive as possible.
The software framework is modular, with industry-standard, language-agnostic
communication. Eucalyptus provides a virtual network overlay that both isolates
network traffic of different users and allows two or more clusters to appear to
belong to the same Local Area Network (LAN). Also, Eucalyptus offers API
compatibility with Amazon’s EC2, S3, IAM, ELB, Auto Scaling, CloudFormation,
and CloudWatch services. This offers you the capability of a hybrid cloud.
Requirements
Compute Requirements :-
Eucalyptus can be installed via the Ubuntu Enterprise Cloud, introduced in
Ubuntu 9.04. In terms of hardware, the recommended minimum
specification is a dual-core 2.2 GHz processor with virtualization
extensions (Intel VT or AMD-V), 4 GB RAM and a 100 GB hard drive.
• Ubuntu 9.10, server edition
• Dual-core 2.2 GHz processor with virtualization extensions (Intel VT
or AMD-V), 4 GB RAM and 100 GB hard drive
• Port 22 needs to be open for admins (for maintenance)
• Port 8443 needs to be open for users for controlling and sending
requests to the cloud via a web interface
Network Requirements:-
For VPCMIDO, Eucalyptus needs MidoNet to be installed.
The network connecting machines that host components (except the CC and
NC) must support UDP multicast for IP address 239.193.7.3. Note that UDP
multicast is not used over the network that connects the CC to the NCs.
Once you are satisfied that your systems requirements are met, you are ready
to plan your Eucalyptus installation.
Eucalyptus components
Node Controller :- The Node Controller (NC) service runs on any machine that
hosts VM instances. The NC controls VM activities, including the execution,
inspection, and termination of VM instances. It also fetches and maintains a
local cache of instance images, and it queries and controls the system software
(host OS and the hypervisor) in response to queries and control requests from
the CC.
Eucanetd :- The eucanetd service implements artifacts to manage and define
Eucalyptus cloud networking. Eucanetd runs alongside the CLC or NC services,
depending on the configured networking mode.
Advantages
The benefits of Eucalyptus in cloud computing are:
1. Eucalyptus can be utilised for both the Eucalyptus private cloud
and the Eucalyptus public cloud.
2. Clients can run Amazon or Eucalyptus machine images as instances
on both clouds.
3. It is not yet very popular in the market, but it is a strong
competitor to CloudStack and OpenStack.
4. It has 100% Application Programming Interface compatibility with
all the Amazon Web Services.
5. Eucalyptus can be utilised with DevOps tools such as Chef and
Puppet.
Features
Eucalyptus features include:
• Supports both Linux and Windows virtual machines (VMs).
• Application program interface- (API) compatible with Amazon EC2
platform.
• Compatible with Amazon Web Services (AWS) and Simple Storage
Service (S3).
• Works with multiple hypervisors including VMware, Xen and KVM.
• Can be installed and deployed from source code or DEB and RPM
packages.
• Internal processes communications are secured through SOAP and WS-
Security.
• Multiple clusters can be virtualized as a single cloud.
• Administrative features such as user and group management and
reports.
• Version 3.3, which became generally available in June 2013, adds the
following features:
• Auto Scaling: Allows application developers to scale Eucalyptus
resources up or down based on policies defined using Amazon EC2-
compatible APIs and tools
• Elastic Load Balancing: AWS-compatible service that provides greater
fault tolerance for applications
• CloudWatch: An AWS-compatible service that allows users to collect
metrics, set alarms, identify trends, and take action to ensure
applications run smoothly
• Resource Tagging: Fine-grained reporting for showback and chargeback
scenarios; allows IT/DevOps to build reports that show cloud
utilization by application, department or user
• Expanded Instance Types: An expanded set of instance types that aligns
more closely with those available in Amazon EC2; the number of
instance types has grown from 5 to 15.
• Maintenance Mode: Replicates a virtual machine’s hard drive and
evacuates the server node, providing a maintenance window.
System Requirements :-
Compute Requirements :-
Physical Machines: All services must be installed on physical servers, not virtual
machines.
Central Processing Units (CPUs): We recommend that each host machine in
your cloud contain either an Intel or AMD processor with a minimum of four
2 GHz cores.
Operating Systems: Eucalyptus supports the following Linux distributions:
CentOS 7.9 and RHEL 7.9. Only the 64-bit architecture is supported.
Machine Clocks: Each host machine and any client machine clocks must be
synchronized (for example, using NTP). These clocks must be synchronized all
the time, not only during the installation process.
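The clock-synchronization requirement is normally met by running an NTP daemon such as ntpd or chronyd on every host. Purely as an illustration of what an NTP exchange looks like on the wire, the following Python sketch (ours, not part of any Eucalyptus tooling) builds a minimal SNTP client request and decodes the transmit timestamp a server reply would carry:

```python
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_EPOCH_OFFSET = 2208988800

def build_sntp_request():
    """48-byte SNTP request packet: LI=0, VN=3, Mode=3 (client)."""
    return b'\x1b' + 47 * b'\x00'

def decode_transmit_time(packet):
    """Extract the server transmit timestamp (seconds since the Unix epoch)
    from bytes 40-47 of a 48-byte SNTP reply."""
    secs, frac = struct.unpack('!II', packet[40:48])
    return secs - NTP_EPOCH_OFFSET + frac / 2**32
```

A real client would send the request over UDP port 123 and compare the decoded timestamp with the local clock; in production, leave this to the NTP daemon.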
Network Requirements :-
For VPCMIDO, Eucalyptus needs MidoNet to be installed.
The network connecting machines that host components (except the CC and
NC) must support UDP multicast for IP address 239.193.7.3. Note that UDP
multicast is not used over the network that connects the CC to the NCs.
Once you are satisfied that your system requirements are met, you are ready
to plan your Eucalyptus installation.
Cloud Services :- The main decision for cloud services is whether to install the
Cloud Controller (CLC) and Walrus on the same server. If they are on the same
server, they operate as separate web services within a single Java environment,
and they use a fast path for inter-service communication. If they are not on the
same server, they use SOAP and REST to work together. Sometimes the key
factor for cloud services is not performance, but server cost and data center
configuration. If you only have one server available for the cloud, then you have
to install the services on the same server. All services should be in the same
data center. They use aggressive time-outs to maintain system responsiveness
so separating them over a long-latency, lossy network link will not work.
User Services :- The User Facing Services (UFS) handle all of the AWS APIs and
provide an entry point for clients and users interacting with the Eucalyptus
cloud. The UFS and the Management Console are often hosted on the same
machine since both must be accessible from the public, client-facing network.
You may optionally choose to have redundant UFS and Management Console
host machines behind a load balancer.
Zone Services :- The Eucalyptus services deployed in the zone level of a
Eucalyptus deployment are the Cluster Controller (CC) and Storage Controller
(SC). You can install all zone services on a single server, or you can distribute
them across different servers. The choice of one or multiple servers is dictated
by the demands of the users.
Node Services :- The Node Controllers are the services that comprise the
Eucalyptus backend. All NCs must have network connectivity to whatever
machine(s) host their EBS volumes. Hosts are either a Ceph deployment or the
SC.
Physical Networks:-
NSDB: IP network that connects all nodes that participate in
MidoNet. While NSDB and Tunnel Zone networks can be the same,
it is recommended to have an isolated (physical or VLAN) segment.
API: in deployments only eucanetd/CLC needs access to the API
network. Only “special hosts/processes” should have access to this
network. The use of “localhost” network on the node running
CLC/eucanetd is sufficient and recommended in deployments.
Tunnel Zone: IP network that transports the MidoNet overlay
traffic (VM traffic), which is not “visible” on the physical network.
Public network: network with access to the Internet (or
corporate/enterprise) network.
Step 6 :- Prepare the network.
Reserve Ports :-
Port - Description
TCP 5005 - DEBUG ONLY: This port is used for debugging (using the --debug
flag).
TCP 8772 - DEBUG ONLY: JMX port. This is disabled by default, and can be
enabled with the --debug or --jmx options for CLOUD_OPTS.
TCP 8773 - Web services port for the CLC, user-facing services (UFS), object
storage gateway (OSG), and Walrus SC; also used for external and internal
communications by the CLC and Walrus. Configurable with euctl.
TCP 8774 - Web services port on the CC. Configured in the eucalyptus.conf
configuration file.
TCP 8775 - Web services port on the NC. Configured in the eucalyptus.conf
configuration file.
TCP 8777 - Database port on the CLC.
TCP 8779 (or the next available port, up to TCP 8849) - jGroups failure
detection port on the CLC, UFS, OSG, and Walrus SC. If port 8779 is available,
it will be used; otherwise, the next port in the range will be attempted until an
unused port is found.
TCP 8888 - The default port for the Management Console. Configured in the
/etc/eucalyptus-console/console.ini file.
TCP 16514 - TLS port on the Node Controller, required for instance
migrations.
UDP 7500 - Port for diagnostic probing on the CLC, UFS, OSG, and Walrus SC.
UDP 8773 - Membership port for any UFS, OSG, Walrus, and SC.
UDP 8778 - The bind port used to establish multicast communication.
TCP/UDP 53 - DNS port on the UFS.
UDP 63822 - eucanetd binds to localhost port 63822 and uses it to detect and
avoid running multiple instances of eucanetd.
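Before installing, it can be handy to confirm that none of the reserved TCP ports is already taken on a host. The following Python sketch (a convenience check of ours, not part of Eucalyptus) tries to bind each port locally and reports any conflicts:

```python
import socket

# TCP ports from the reservation table above.
EUCALYPTUS_TCP_PORTS = [5005, 8772, 8773, 8774, 8775, 8777, 8779, 8888, 16514]

def port_is_free(port, host="127.0.0.1"):
    """Return True if we can bind the TCP port, i.e. nothing is using it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

def conflicting_ports(ports):
    """Return the sublist of ports that are already in use on this host."""
    return [p for p in ports if not port_is_free(p)]

if __name__ == "__main__":
    print("ports already in use:", conflicting_ports(EUCALYPTUS_TCP_PORTS))
```

Run it on each machine that will host the corresponding service before starting the installation.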
Configure NTP :- Configure each host machine to use NTP so that its clock
stays synchronized, as described in the Machine Clocks requirement above.
Configure Java :- For the supported version of the Java Virtual
Machine (JVM), see the Compatibility Matrix in the Release Notes.
As of Eucalyptus 4.3, JVM 8 is required. Eucalyptus RPM packages
require java-1.8.0-openjdk, which will be installed
automatically. To use Java with a Eucalyptus cloud:
Open the /etc/eucalyptus/eucalyptus.conf file. Verify that the
CLOUD_OPTS setting does not set --java-home, or that --java-home
points to a supported JVM version.
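As a quick sanity check of that CLOUD_OPTS requirement, the following Python sketch (a hypothetical helper of ours, not a Eucalyptus tool) extracts any --java-home override from the configuration file's text:

```python
import re

def java_home_override(conf_text):
    """Return the value passed to --java-home in CLOUD_OPTS, or None if the
    option is not set (meaning the packaged JVM will be used)."""
    m = re.search(r'^CLOUD_OPTS="?([^"\n]*)"?', conf_text, re.MULTILINE)
    if not m:
        return None
    jm = re.search(r'--java-home[= ](\S+)', m.group(1))
    return jm.group(1) if jm else None

# Example configuration line (hypothetical JVM path).
sample = 'CLOUD_OPTS="--debug --java-home=/usr/lib/jvm/java-1.8.0-openjdk"\n'
print(java_home_override(sample))  # prints the overridden JVM path
```

In practice one would pass the contents of /etc/eucalyptus/eucalyptus.conf to the function and confirm the result is either None or a supported JVM path.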
Result :- We successfully installed Eucalyptus and studied its complete
installation process.
EXPERIMENT 10
AIM: Deployment & Services of Amazon Web Services.
INTRODUCTION:
Amazon Web Services (AWS) is the world’s most comprehensive and
broadly adopted cloud platform, offering over 200 fully featured
services from data centres globally. Millions of customers—including
the fastest-growing start-ups, largest enterprises, and leading
government agencies—are using AWS to lower costs, become more
agile, and innovate faster.
AWS has significantly more services, and more features within those
services, than any other cloud provider: from infrastructure
technologies such as compute, storage, and databases, to emerging
technologies such as machine learning and artificial intelligence,
data lakes and analytics, and the Internet of Things. This makes it faster,
easier, and more cost-effective to move your existing applications to
the cloud and build nearly anything you can imagine.
AWS is architected to be the most flexible and secure cloud
computing environment available today. Its core infrastructure is
built to satisfy the security requirements for the military, global
banks, and other high-sensitivity organizations. This is backed by a
deep set of cloud security tools, with 230 security, compliance, and
governance services and features. AWS supports 90 security
standards and compliance certifications, and all 117 AWS services
that store customer data offer the ability to encrypt that data.
Deployment
AWS CodeDeploy is a fully managed deployment service that
automates software deployments to a variety of compute services
such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-
premises servers. AWS CodeDeploy makes it easier for you to rapidly
release new features, helps you avoid downtime during application
deployment, and handles the complexity of updating your
applications. You can use AWS CodeDeploy to automate software
deployments, eliminating the need for error-prone manual
operations. The service scales to match your deployment needs.
Basically, CodeDeploy is a deployment service that automates
application deployments to Amazon EC2 instances, on-premises
instances, serverless Lambda functions, or Amazon ECS services.
You can deploy a nearly unlimited variety of application content, including:
Code
Serverless AWS Lambda functions
Web and configuration files
Executables
Packages
Scripts
Multimedia files
CodeDeploy can deploy application content that runs on a server and is stored
in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories.
CodeDeploy can also deploy a serverless Lambda function. You do not need to
make changes to your existing code before you can use CodeDeploy.
CodeDeploy makes it easier for you to:
Rapidly release new features.
Update AWS Lambda function versions.
Avoid downtime during application deployment.
Handle the complexity of updating your applications, without
many of the risks associated with error-prone manual
deployments.
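As an illustration, the sketch below shows roughly how such a deployment is requested programmatically. The application, deployment-group, and bucket names are hypothetical; with the AWS SDK for Python (boto3), the resulting dict would be passed to a CodeDeploy client's create_deployment call:

```python
def build_deployment_request(app, group, bucket, key):
    """Assemble the parameters for a CodeDeploy deployment whose revision
    (application bundle) is stored in an S3 bucket."""
    return {
        "applicationName": app,
        "deploymentGroupName": group,
        "revision": {
            "revisionType": "S3",
            "s3Location": {"bucket": bucket, "key": key, "bundleType": "zip"},
        },
    }

# Hypothetical names for a demo deployment.
params = build_deployment_request("demo-app", "demo-group",
                                  "my-deploy-bucket", "builds/demo-app.zip")
```

With credentials configured, `boto3.client("codedeploy").create_deployment(**params)` would start the rollout; CodeDeploy then handles the release across the deployment group's instances.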
Services
Amazon Web Services is an on-demand cloud computing platform that
offers flexible, reliable, scalable, managed, easy-to-use, and cost-
effective cloud computing solutions. These services come at different
levels of abstraction: Infrastructure as a Service (IaaS), Platform as a
Service (PaaS), and packaged Software as a Service (SaaS). All of them
can be used on a pay-as-you-go basis, meaning you pay only for the
computing resources you actually use, and only while you are using
them.
It is easy to scale as needed. Servers are distributed across the
world, so they are readily available, and data stored on them is
easily retrievable. It is secure and reliable. Nowadays, it is widely
used and preferred as a cloud service provider. It provides more than
100 services, including computing, storage, management tools,
analytics, deployment, IoT and much more. All these services are
provided under the Amazon portal as per the subscription of the
user.
List of Services offered by AWS
1. Analytics
Amazon EMR provides the Hadoop framework to process big
data. Amazon Kinesis helps in analyzing real-time streaming
data. AWS Data Pipeline and AWS Glue provide pipeline structures
to schedule data loading and processing. There are many more
applications provided by AWS for almost every operation.
2. Storage
Amazon Simple Storage Service (S3) provides scalable data
storage with backup and replication. Amazon Glacier offers
storage for archived data and affordable retrieval. AWS backup
service manages the backup of data. It automates the backup
process. Apart from these applications AWS storage offers
other services also.
3. Compute
Amazon Elastic Compute Cloud (EC2) provides virtual servers
or instances for computing. It is auto-scalable as per the
requirement. Amazon Elastic Container Service is a high-
performance container service that supports Docker
containers. AWS Lambda offers serverless computing to run
applications. Lightsail is an easy-to-use service that provides
virtual servers, storage, DNS management, etc. It provides all the
services required for the development of applications.
4. Blockchain
Amazon Managed Blockchain creates and manages a
blockchain network. Amazon Quantum Ledger Database
(QLDB) offers a fully managed ledger database to maintain
transactions.
5. Database
Amazon Relational Database Service provides a fully managed
database service that includes Oracle, SQL, MySQL, etc.
Amazon Aurora offers a high-performance, fully managed
relational database service. Amazon Timestream provides a
fully managed time-series database. Amazon DynamoDB
provides database services for the NoSQL database. Along with
these databases, AWS offers many other database services to
support almost every type of requirement.
6. Developer Tool
AWS CodeStar helps the user set up a continuous delivery
pipeline in minutes. AWS X-Ray helps to debug production
applications; with it, the user can analyze and identify
performance issues and application components.
AWS CodeCommit provides fully managed private Git
repositories to store code and manage versions. Apart from
these services, AWS provides AWS CodePipeline, AWS
CodeBuild, AWS CodeDeploy, and AWS Cloud9 to support
development and deployment.
7. Networking and Content Delivery
AWS offers its services over a network built around the Amazon
Virtual Private Cloud (VPC), ensuring that AWS can run any
workload over the network with security, performance,
manageability, and availability. It offers a set of resources
connected over a private network and gives users administrative
control over a virtual network. It provides load balancing for
applications within the network, and also offers DNS to route end
users to the application.
8. Security, Identity and Compliance
AWS Firewall Manager helps manage firewall rules for
applications. Amazon Inspector is an automated security scanning
service that helps improve the security and compliance of
applications. Amazon Macie is a machine learning-powered
service to identify, classify and protect sensitive data. Apart
from these security services, AWS provides many more
applications to keep the hosted applications secure and safe.
9. Machine Learning
AWS offers a wide range of services and pre-built models for
AI. Amazon SageMaker provides services to quickly build, train
and deploy models at large scale. It also supports custom
model building. Amazon Rekognition is used to analyze images
and videos. Along with these, AWS offers ML services for speech
recognition, language translation, chatbots, and many other
scenarios, with high speed and scalability.
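To make the catalogue above concrete, the sketch below builds the parameter sets for two common calls: an S3 object upload (Storage) and an EC2 instance launch (Compute). All names and IDs are hypothetical; with the AWS SDK for Python (boto3), the dicts would be passed to an S3 client's put_object and an EC2 client's run_instances respectively:

```python
def build_s3_put(bucket, key, body):
    """Parameters for uploading an object to S3 (Storage, item 2)."""
    return {"Bucket": bucket, "Key": key, "Body": body}

def build_ec2_run(ami_id, instance_type="t2.micro", count=1):
    """Parameters for launching EC2 instances (Compute, item 3)."""
    return {"ImageId": ami_id, "InstanceType": instance_type,
            "MinCount": count, "MaxCount": count}

# Hypothetical bucket name and AMI id for illustration only.
s3_params = build_s3_put("my-demo-bucket", "reports/latest.csv", b"a,b\n1,2\n")
ec2_params = build_ec2_run("ami-0123456789abcdef0")
```

The same pay-as-you-go model applies to both: the S3 upload is billed for storage used, while the EC2 launch is billed for the instance hours consumed.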
Installation
Step-1: Log in to the Amazon web services portal.
http://aws.amazon.com/
Step-2: Click on Search > Cloud, and then scroll down to and click AWS EC2.