
Shri Vaishnav Vidyapeeth Vishwavidyalaya

Shri Vaishnav Institute of Information & Technology


III-Year/VI-Semester - Section (A)
Session - 2021-22

Subject Name:- Cloud Computing


Subject Code:- BTCS-701

Submitted to:- Mr. Avdhesh Kumar Sharma, Assistant Professor, SVIIT, SVVV Indore
Submitted by:- Aaditya Mishra
INDEX
ENROLLMENT NO: 19100BTCSE05628    NAME: Aaditya Mishra    Batch: 01

Sr. No. | Name of Experiment | Date of Experiment | Remarks
1 | Working of Google Drive to make spreadsheet | 24/01/2022 |
2 | Service deployment & usage over cloud using VirtualBox | 07/02/2022 |
3 | Performance evaluation of services over cloud using VMware tool | 21/02/2022 |
4 | Working on services of Google App Engine | 14/03/2022 |
5 | Working on Aneka services for Cloud application | |
6 | Working on Application deployment & services of Microsoft Azure | |
7 | Working on Application deployment & services of IBM Smart Cloud | |
8 | Working on Heroku for Cloud application deployment | |
9 | Working and configuration of Eucalyptus | |
10 | Deployment & Services of Amazon Web Services | |
EXPERIMENT – 1
Aim: - Working of Google Drive to make spreadsheet.
Google Sheets: Google Sheets is a free, web-based spreadsheet
application that is provided by Google within the Google Drive service.
The application is also available as a desktop application on ChromeOS,
and as a mobile app on Android, Windows, iOS, and BlackBerry. The
Google Drive service also hosts other Google products such as Google
Docs, Slides, and Forms.

Google Sheets allows users to edit, organize, and analyze different types
of information. It supports collaboration: multiple users can edit and
format files in real time, and any changes made to the spreadsheet are
tracked in a revision history.

Features of Google Sheets

1. Editing
2. Explore
3. Offline Editing
4. Supported file format
5. Integration with other google products
Google Sheets is a powerful tool—it's everything you'd expect from a
spreadsheet, with the extra perks of an online app. While the
example spreadsheet that we created may have been a bit silly, the
practical applications of using Sheets for your workflows (both
business and personal) are limitless.
Whether you need to make a budget, outline your next proposal, gather
data for a research project, or log info from any other app that connects
with Zapier, a Google Sheets spreadsheet can bring your data to life.
And with everything stored in Google Drive, you'll never worry about
losing your files again—even if your computer dies.

Common terms associated with Google spreadsheets:

• Cell: A single data-point.


• Column: A vertical range of cells that runs down from the top of the
sheet.
• Row: A horizontal range of cells that run across from the left side of
the sheet.
• Range: A selection of multiple cells that runs across a column, row,
or a combination of both.
• Function: A built-in feature in Google Sheets that is used to calculate
values and manipulate data.
• Formula: A combination of functions, columns, rows, cells, and
ranges used to obtain a specific end result.
• Worksheet: A set of columns and rows that makes up one sheet within a
spreadsheet.
• Spreadsheet: The entire document. One spreadsheet can contain more
than one worksheet.

How to use Google Sheets

Google Sheets is a free-to-use application that can be accessed in the
Chrome web browser or through the Google Sheets app on Android or iOS.
Users need a free Google account to get started. To create a new
Google Sheet, follow these steps:

1. Go to the Google Drive dashboard, click the "New" button in the
top-left corner, and select Google Sheets.
2. Alternatively, open the menu bar in the spreadsheet window and go to
File, then New. This creates a blank spreadsheet, and the interface
will be as follows:

3. To start from a template, open the template window and select a
template. This creates a pre-filled spreadsheet, and the interface will
be as follows:
4. Add data to the spreadsheet.

5. Format data for easy viewing.


6. Add, average, and filter data with formulas. The most basic
formulas in Sheets include:
• SUM: adds up a range of cells (e.g., 1+2+3+4+5 = sum of 15)
• AVERAGE: finds the average of a range of cells (e.g.,
1,2,3,4,5 = average of 3)
• COUNT: counts the values in a range of cells (e.g.,
1, blank, 3, 4, 5 = 4 total cells with values)
• MAX: finds the highest value in a range of cells (e.g.,
1,2,3,4,5 = 5 is the highest)
• MIN: finds the lowest value in a range of cells (e.g., 1,2,3,4,5 = 1 is
the lowest)
• Basic arithmetic: you can also perform addition, subtraction, and
multiplication directly in a cell without calling a function.
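The behaviour of these basic Sheets functions can be mirrored in plain Python, treating blank cells as None (an illustrative sketch, not the Sheets API):

```python
# Plain-Python equivalents of SUM, AVERAGE, and COUNT as described
# above. Blank cells are modelled as None and ignored, which is how
# the Sheets functions treat empty cells in a range.
def cells_with_values(cell_range):
    """Drop blank cells, as Sheets functions do."""
    return [c for c in cell_range if c is not None]

def sheet_sum(cell_range):
    return sum(cells_with_values(cell_range))         # SUM

def sheet_average(cell_range):
    values = cells_with_values(cell_range)
    return sum(values) / len(values)                  # AVERAGE

def sheet_count(cell_range):
    return len(cells_with_values(cell_range))         # COUNT

column_a = [1, None, 3, 4, 5]                         # one blank cell
print(sheet_sum(column_a))                  # 13
print(sheet_count(column_a))                # 4
print(max(cells_with_values(column_a)))     # MAX -> 5
```

Running this over the example range from the list above shows the same results a spreadsheet would give for =SUM(A1:A5), =COUNT(A1:A5), and =MAX(A1:A5).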

Experiment – 2

Aim – Service deployment and usage over cloud using VirtualBox.
Introduction to Oracle Virtual Box :-

Oracle VirtualBox (formerly Sun VirtualBox, Sun xVM VirtualBox and
Innotek VirtualBox) is a free and open-source hosted hypervisor for
x86 computers, developed by Oracle Corporation. Developed initially by
Innotek GmbH, it was acquired by Sun Microsystems in 2008, which was,
in turn, acquired by Oracle in 2010. VirtualBox may be installed on a
number of host operating systems, including Linux, macOS, Windows,
Solaris and OpenSolaris. There are also ports to FreeBSD and Genode.
It supports the creation and management of guest virtual machines
running versions and derivations of Windows, Linux, BSD, OS/2,
Solaris, Haiku, OSx86 and others, and limited virtualization of macOS
guests on Apple hardware. For some guest operating systems, a "Guest
Additions" package of device drivers and system applications is
available, which typically improves performance, especially graphics
performance.

Software-based virtualization

In the absence of hardware-assisted virtualization, VirtualBox adopts a
standard software-based virtualization approach. This mode supports
32-bit guest OSs, which run in rings 0 and 3 of the Intel ring
architecture.
The system reconfigures the guest OS code, which would normally run in
ring 0, to execute in ring 1 on the host hardware. Because this code
contains many privileged instructions which cannot run natively in ring 1,
VirtualBox employs a Code Scanning and Analysis Manager (CSAM) to
scan the ring 0 code recursively before its first execution to identify
problematic instructions, and then calls the Patch Manager (PATM) to
perform in-situ patching. This replaces the instruction with a jump to a
VM-safe equivalent compiled code fragment in hypervisor memory.

The guest user-mode code, running in ring 3, generally runs directly on
the host hardware in ring 3. In both cases, VirtualBox uses CSAM and
PATM to inspect and patch the offending instructions whenever a fault
occurs. VirtualBox also contains a dynamic recompiler, based on QEMU,
to recompile any real-mode or protected-mode code entirely (e.g. BIOS
code, a DOS guest, or any operating system startup). Using these
techniques, VirtualBox can achieve performance comparable to that of
VMware.
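The scan-and-patch idea can be illustrated with a toy sketch. This is purely illustrative: real CSAM/PATM operates on x86 machine code in hypervisor memory, not on strings, and the instruction names here are only stand-ins.

```python
# Toy illustration of scan-and-patch: walk a "guest code" sequence,
# replace privileged instructions with jumps into patch memory where
# a VM-safe equivalent lives. Not the real CSAM/PATM algorithm.
PRIVILEGED = {"cli", "sti", "hlt"}   # stand-ins for instructions that fault in ring 1

def patch_guest_code(instructions):
    patch_memory = {}                # hypervisor-side safe fragments
    patched = []
    for i, ins in enumerate(instructions):
        if ins in PRIVILEGED:
            patch_memory[i] = f"safe_{ins}"      # VM-safe equivalent fragment
            patched.append(f"jmp patch[{i}]")    # redirect execution to it
        else:
            patched.append(ins)                  # unprivileged: runs as-is
    return patched, patch_memory

code = ["mov", "cli", "add", "hlt"]
patched, patches = patch_guest_code(code)
print(patched)   # ['mov', 'jmp patch[1]', 'add', 'jmp patch[3]']
```

The key point the sketch captures is that only the problematic instructions are rewritten; the rest of the guest code is left to run directly on the hardware.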

Hardware-Based Virtualization
Hardware-assisted virtualization is the use of a computer's physical
components to support the software that creates and manages virtual
machines (VMs). Virtualization is an idea that traces its roots back to
legacy mainframe system designs of the 1960s. Early mainframes used the
Control Program/Conversational Monitor System (CP/CMS) operating system
and were adept at provisioning the mainframe's available computing
resources into isolated environments capable of running enterprise
workloads.

System Requirements for VirtualBox

• Reasonably powerful x86 hardware (how much RAM will depend upon how
many VMs will be deployed, but 8 GB is a good minimum).
• Storage: VirtualBox itself requires only about 30 MB of hard disk
space, but you will need enough storage to house your VMs, and VMs can
easily start at 10 GB each.
Installation of VirtualBox :-
Step 1 :- Go to the https://www.virtualbox.org/wiki/Downloads website and
click on "Windows hosts".

Step 2 :- Download the required file.

Step 3 :- Run the executable file. It will first show the preparing-installation
window.
Step 4 :- Click Next to continue the installation.

Step 5 :- Select the path for installation of the VirtualBox software, and click
Next to continue.
Step 6 :- Click on Install to install Oracle VirtualBox.

Cloudera Quick Start VM :-

Ubuntu is a complete Linux operating system, freely available with both
community and professional support. The Ubuntu community is built on
the ideas enshrined in the Ubuntu Manifesto: that software should be
available free of charge, that software tools should be usable by people
in their local language and despite any disabilities, and that people
should have the freedom to customize and alter their software in
whatever way they see fit. We will use Ubuntu with VirtualBox to run
Linux on Windows.

Downloading the Quick Start VM for VirtualBox :-

Use this link to download the zip file of the Quick Start VM, and unzip it
for use.

https://downloads.cloudera.com/demo_vm/virtualbox/cloudera-quickstart-vm-5.12.0-0-virtualbox.zip

Configuration of Ubuntu on VirtualBox :-

Step 1 :- Open the Oracle VirtualBox application and click on Open a new
Virtual Machine.
This is the main screen interface of Oracle VirtualBox. We can also
create a new Virtual Machine and connect to a remote server.
Step 2 :- Select the Virtual Machine and click Open.

Step 3 :- After clicking Open, this window will be shown; click on Import.
When you click Import, the Quick Start VM is imported from the specified
path, and a new window is launched with the Virtual Machine in the
powered-off state.

Step 4 :- It's time to edit the virtual machine to make it work. Click on
Settings, and make the following changes to the Ubuntu machine:

1. Under System > Motherboard, increase the Base Memory to 2638 MB.
However, you can use a bit less memory if your system doesn't have
this much RAM.
2. Check Floppy in the Boot Order section.
3. Under the Processor tab, increase the processors to 2 or higher.
In the Display window, increase the Video Memory to 128 MB.
Step 5 :- After importing, this window will appear. Now power on the
Virtual Machine.

When you power on the Virtual Machine, it will take some time to start,
which depends entirely on the system hardware. The recommended RAM is
2 to 12 GB for the proper functioning of the Ubuntu cluster. You can use
this as a Linux virtual machine and run Ubuntu on it, and start further
work from here.
Step 6 :- When it is on, a screen like this will be observed, on which
the browser is showing the Ubuntu machine.
Step 7 :- Install Ubuntu, and your Ubuntu is successfully installed.
EXPERIMENT 03
AIM: Performance evaluation of services over cloud using VMware tool.

Introduction to VMware Workstation :-

VMware Workstation Pro is a hosted hypervisor that runs on x64 and x86
versions of Windows and Linux operating systems. It enables users to set
up virtual machines (VMs) on a single physical machine and use them
simultaneously along with the actual machine. Each virtual machine can
execute its own operating system, including versions of Microsoft
Windows, Linux, BSD, and MS-DOS. VMware Workstation is developed
and sold by VMware, Inc., a division of Dell Technologies. First, we will
install VMware Workstation, and then we are going to install the Cloudera
Hadoop software in an Ubuntu Linux server.

System Requirements for VMware Workstation :-

VMware recommends the following :-

• 64-bit x86 Intel or AMD processor from 2011 or later.
• 1.3 GHz or faster core speed.
• 2 GB RAM minimum / 4 GB RAM recommended.
• The host system should have either an NVIDIA GeForce 8800GT or later,
or an ATI Radeon HD 2600 or later, graphics processor.

Workstation Pro installation :-

• 1.2 GB of available disk space for the application.
• Additional hard disk space required for each virtual machine.
• Please refer to the vendor's recommended disk space for specific guest
operating systems.

Installation of VMware Workstation :-


Step 1 :- Log in to the http://www.vmware.com website with your login details.

Step 2 :- Search for VMware Workstation, click Download for Windows,
and then click on Download Now. An .exe file will be downloaded. Or use
this link to download VMware:
https://download3.vmware.com/software/wkst/file/VMware-workstation-full-15.5.6-16341506.exe

Step 3 :- Open Downloads and Run the Executable File.


Step 4 :- After running the executable file, it will first show the
preparing-installation window.

Step 5:- Click Next to continue the installation.


Step 6 :- Accept the License Agreement and click Next to continue the
installation.
Accepting the License Agreement is compulsory; by accepting it you agree
to all the terms, conditions, and licenses.

Step 7 :- Select the path for installation of the VMware software, and
click Continue.

The VMware Enhanced Keyboard Driver is software that gives you a better
experience when using your keyboard in virtual machines.
Step 8 :- Select the shortcuts to place on your system.
The shortcuts will help you quickly access the VMware software and do
your desired tasks in it.

Step 9 :- Click on Install to install VMware Workstation.

After clicking Install, the installation will start according to the
choices you made.
Step 10 :- Window of installation will appear and installation is in progress.
Step 11 :- At last after installing click finish to finish the installation.

After clicking Finish, you should restart your PC, which is recommended
for proper functioning. Check the Desktop: you should have a shortcut
named VMware Workstation Pro.
Your VMware is successfully installed!

Configuration of Ubuntu on VMware :-

Step 1 :- Open the VMware application and click on Open a new Virtual
Machine.
This is the main screen interface of VMware Workstation. We can also
create a new Virtual Machine and connect to a remote server.

Step 2 :- Select the Virtual Machine and click Open.

Step 3 :- After importing, this window will appear. Now power on the
Virtual Machine.
When you power on the Virtual Machine, it will take some time to start,
which depends entirely on the system hardware. The recommended RAM is
2 to 8 GB for the proper functioning of the Ubuntu cluster. You can use
this as a Linux virtual machine and run Hadoop on it. We can start our
Ubuntu here, check the Linux version, and proceed with further work.

Step 4 :- When it is on, a screen like this will be observed, on which
the browser is showing the Ubuntu machine.
Step 5 :- Close the welcome window, and your Ubuntu VMware machine is
successfully installed.

EXPERIMENT 04

AIM: Working on services of Google App Engine.

Google App Engine

Google App Engine (GAE) is a platform that allows software developers
to leverage Google's cloud computing infrastructure for web application
development. Computing resources (or services) similar to those used by
Google Docs are available under GAE. Google offers the same reliability
and openness as its other flagship products, like Google Search and
Google Mail. GAE is free up to a point, but there is plenty of room for
the experiments that are needed to evaluate the technology.

History

Google App Engine (often referred to as GAE or simply App Engine, and
also used by the acronym GAE/J) is a platform as a service (PaaS) cloud
computing platform for developing and hosting web applications in
Google-managed data centers. Applications are sandboxed and run
across multiple servers. App Engine offers automatic scaling for web
applications—as the number of requests increases for an application,
App Engine automatically allocates more resources for the web
application to handle the additional demand.
Google App Engine is free up to a certain level of consumed resources.
Fees are charged for additional storage, bandwidth, or instance hours
required by the application. It was first released as a preview version in
April 2008, and came out of preview in September 2011.

Runtimes and frameworks

Currently, the supported programming languages are Python, Java (and, by
extension, other JVM languages such as Groovy, JRuby, Scala, Clojure,
Jython, and PHP via a special version of Quercus), and Go. Google has
said that it plans to support more languages in the future, and that
Google App Engine has been written to be language independent.

Reliability and Support


All billed High-Replication Datastore App Engine applications have a
99.95% uptime SLA.

Characteristics

The following are some of GAE’s key features.


• Dynamic web serving, with full support for common web technologies
• Persistent storage with queries, sorting, and transactions
• Automatic scaling and load balancing
• APIs for authenticating users and sending email using Google Accounts
• A fully featured local development environment that simulates Google
App Engine on your computer
• Task queues for performing work outside of the scope of a web request
• Scheduled tasks for triggering events at specified times and regular
intervals
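The "dynamic web serving" above boils down to App Engine running an ordinary Python web application for you. As a sketch (assuming the Python runtime, and written here as a bare WSGI callable so it runs anywhere; a real App Engine deployment would also need an app.yaml and usually a framework, which are not shown):

```python
# A minimal WSGI "hello world" of the kind App Engine's Python runtime
# serves. This is a hypothetical sketch, not Google's sample code.
def application(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    body = f"Hello from {path}".encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the app directly, without a server:
status_seen = []
response = application({"PATH_INFO": "/"},
                       lambda status, headers: status_seen.append(status))
print(b"".join(response))   # b'Hello from /'
print(status_seen[0])       # 200 OK
```

Because the application is just a callable handling one request at a time, App Engine can run many copies of it behind its load balancer, which is what makes the automatic scaling described above possible.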

Advantage and Disadvantage

S.No.  Advantage / Disadvantage

1. Advantage: The main advantage of GAE is its automatic scaling.
   Disadvantage: GAE is not yet stable enough. Even Google says that GAE
   runs on the same infrastructure as Google's own internal projects,
   and even after you enable billing, the whole system is limited to
   supporting only 500 requests per second.

2. Advantage: The GAE feature set is good enough to build a decent
   website, and you don't need to do the maintenance work.
   Disadvantage: Without native file-system read/write access, it is
   hard to process some data transforms with existing libraries, and it
   does not support some libraries that depend on native file-system
   access.

3. Advantage: It doesn't require any server administration. It has free
   usage quotas and provides scalability. GAE has good access to Google
   user accounts, and the deployment process is very easy.
   Disadvantage: It does not provide a full-text search API. Also, the
   Java SDK works poorly with Maven, making it awkward to pull in many
   external libraries.

4. Advantage: GAE has a low administration load. Once you are set up,
   deploying and re-deploying is quick, and most things are automated.
   Disadvantage: It is not easy to run unit tests, you cannot always fix
   the root cause of a problem yourself, and it does not support adding
   SSL to your website.

5. Advantage: You can get many features out of the box with GAE.
   Disadvantage: It suffers from the inability to tweak the server
   software. The file system and many standard library modules are
   inaccessible.

Application and uses of Google App Engine

1. Modern web applications

Quickly reach customers and end users by deploying web apps on App
Engine. With zero-config deployments and zero server management, App
Engine allows you to focus on writing code. Plus, App Engine
automatically scales to support sudden traffic spikes without
provisioning, patching, or monitoring.

2. Scalable mobile back ends

Whether you’re building your first mobile app or looking to reach existing
users via a mobile experience, App Engine automatically scales the
hosting environment for you. Seamless integration with Firebase
provides an easy-to-use frontend mobile platform along with the scalable
and reliable back end.

Below is a sample reference architecture for a typical mobile app built
using Firebase and App Engine, along with other services in Google Cloud.

Installation

You can download the Google App Engine SDK by going to:
https://storage.googleapis.com/appenginesdks/featured/GoogleAppEngine-1.9.85.msi
Then click on Next and set the installation path for Google App Engine.

After setting the installation path, click on Next and then click on
Install. Once the installation has completed, click on Finish.
EXPERIMENT 05
AIM: Working on Aneka services for Cloud application.
INTRODUCTION:
Aneka is a market-oriented Cloud development and management
platform with rapid application development and workload
distribution capabilities. Aneka is an integrated middleware package
which allows you to seamlessly build and manage an interconnected
network, in addition to accelerating the development, deployment and
management of distributed applications using the Microsoft .NET
framework on these networks. It is market oriented since it allows
you to build, schedule, provision and monitor results using pricing,
accounting, and QoS/SLA services in private and/or public (leased)
network environments.
Aneka is a workload distribution and management platform that
accelerates applications in Microsoft .NET framework environments.
Some of the key advantages of Aneka over other grid- or cluster-based
workload distribution solutions include:
• rapid deployment tools and framework
• ability to harness multiple virtual and/or physical machines for
accelerating application results
• provisioning based on QoS/SLA
• support for multiple programming and application environments
• simultaneous support of multiple run-time environments
• built on top of the Microsoft .NET framework, with support for Linux
environments through Mono

BUILD
Aneka includes a Software Development Kit (SDK) containing a
combination of APIs and tools that enable you to express your
application. Aneka also allows you to build different run-time
environments and build new applications.
Aneka provides APIs and tools that enable applications to be
virtualized over a heterogeneous network.

Supported APIs include:

• Task Model for batch and legacy applications.
• Thread Model for applications that use object-oriented threads.
• MapReduce Model for data-intensive applications like data
mining or analytics.
• Others, such as MPI (Message Passing) and Actors (Distributed
Active Objects/Agents), can be added through customization.

Supported tools include:

• Design Explorer for Parameter Sweep applications. Built on top
of the Task Model with no additional programming required.
• Workflow applications. Built on top of the Task Model with some
additional programming required.
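As an analogy for the Task Model — independent units of work farmed out to a pool of workers — the idea can be sketched with Python's standard library. This is not the Aneka API (which is .NET-based); it only illustrates the programming pattern:

```python
# Analogy for Aneka's Task Model: independent tasks submitted to a
# pool and executed in parallel, with results collected at the end.
# NOT the Aneka .NET API, just the same idea in Python's stdlib.
from concurrent.futures import ThreadPoolExecutor

def render_frame(frame_no):
    """A stand-in for an independent, legacy-style batch task."""
    return f"frame-{frame_no}: rendered"

# The pool plays the role of Aneka's worker nodes; the scheduler
# decides which worker runs which task.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(render_frame, range(6)))

print(results[0])    # frame-0: rendered
print(len(results))  # 6
```

The essential property shared with the Task Model is that tasks carry no dependencies on each other, so the scheduler is free to place them on any available node.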
ACCELERATE
Aneka supports Rapid Development and Deployment of Applications in
Multiple Run-Time environments. Aneka uses physical machines as
much as possible to achieve maximum utilization in local environment.
As demand increases, Aneka provisions VMs via private clouds (Xen or
VMware) or public clouds (Amazon EC2).
How we accelerate Development and Deployment:
1) Rapid Deployment includes support of Parameter Sweep using
Design Explorer Tool. Parameter sweep takes existing
applications that are controlled by a set of parameters passed
as a command line and produces multiple distributed
executions of the same application with different parameter
sets.
2) Building on-top of Microsoft .NET framework allows multiple
programming languages to be supported, thereby making it
faster to get existing applications running.
3) Develop Application once and run in multiple environments
simultaneously. Support for Multiple Run-time environments
saves you time in programming your applications. Aneka
supports Virtual Machine and Physical hardware in private and
public networks.
4) Optimized for networked multi-core computers, Aneka
effectively virtualizes your application which allows you to
harness the power of multiple computers for the same
workload. This gives you results in near real-time allowing you
to make faster decisions.
5) Aneka Scheduler allows you to run multiple applications on
same Run-time environment either concurrently
(simultaneously) or in a queue arrangement.
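The parameter-sweep idea in point 1 can be sketched as follows. This is an illustrative helper, not the Design Explorer tool itself, and the application name "simulate" is hypothetical:

```python
# Illustrative parameter sweep: expand a parameter grid into one
# command line per combination, the way Design Explorer drives an
# existing application controlled by command-line parameters.
# (Hypothetical app name "simulate"; not an Aneka API.)
from itertools import product

def sweep(executable, grid):
    """Return one command-line invocation per parameter combination."""
    keys = sorted(grid)   # fixed ordering so output is deterministic
    return [
        [executable] + [f"--{k}={v}" for k, v in zip(keys, combo)]
        for combo in product(*(grid[k] for k in keys))
    ]

commands = sweep("simulate", {"temp": [300, 350], "pressure": [1, 2]})
print(len(commands))   # 4 distributed executions
print(commands[0])     # ['simulate', '--pressure=1', '--temp=300']
```

Each generated command line corresponds to one distributed execution of the same application with a different parameter set, which a scheduler can then dispatch across the cloud.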
MANAGE
Aneka Management includes a Graphical User Interface (GUI) and APIs
to set up, monitor, manage and maintain remote and global Aneka
compute clouds. Aneka also has an accounting mechanism and
manages priorities and scalability based on SLA/QoS, which enables
dynamic provisioning. Briefly, the operations performed through the
Management Studio are the following:
• Quick setup of computing clouds;
• Remote installation and configuration of nodes;
• System load monitoring and tuning;
• Monitoring aggregate dynamic statistics and probing individual
nodes for CPU and memory load;
• Extensible framework – add new features and services by
implementing management plug-ins.

ANEKA ARCHITECTURE
Aneka is a platform and a framework for developing distributed
applications on the Cloud. It harnesses the spare CPU cycles of a
heterogeneous network of desktop PCs and servers or data centers on
demand. Aneka provides developers with a rich set of APIs for
transparently exploiting such resources and expressing the business
logic of applications by using the preferred programming abstractions.
System administrators can leverage on a collection of tools to monitor
and control the deployed infrastructure. This can be a public cloud
available to anyone through the Internet, or a private cloud
constituted by a set of nodes with restricted access.
The Aneka based computing cloud is a collection of physical and
virtualized resources connected through a network, which are either
the Internet or a private intranet. Each of these resources hosts an
instance of the Aneka Container representing the runtime
environment where the distributed applications are executed. The
container provides the basic management features of the single node
and leverages all the other operations on the services that it is hosting.
The services are broken up into fabric, foundation, and execution
services. Fabric services directly interact with the node through the
Platform Abstraction Layer (PAL) and perform hardware profiling and
dynamic resource provisioning. Foundation services identify the core
system of the Aneka middleware, providing a set of basic features to
enable Aneka containers to perform specialized and specific sets of
tasks. Execution services directly deal with the scheduling and
execution of applications in the Cloud.
Overview of Aneka Framework
One of the key features of Aneka is the ability of providing different
ways for expressing distributed applications by offering different
programming models; execution services are mostly concerned with
providing the middleware with an implementation for these models.
Additional services such as persistence and security are transversal
to the entire stack of services that are hosted by the Container. At the
application level, a set of different components and tools is
provided to:
1) simplify the development of applications (SDK);
2) port existing applications to the Cloud;
3) monitor and manage the Aneka Cloud.

Platform Abstraction Layer (PAL)

• The core infrastructure of the system is based on .NET technology.
• Features provided by the PAL:
  o A uniform and platform-independent implementation interface for
    accessing the hosting platform
  o Access to remote nodes
  o Management interfaces
  o Uniform access to extended and additional properties of the
    hosting platform

Fabric Services

The lowest level of the software stack, representing the Aneka
container. It consists of:
a) Profiling and Monitoring Services
Heartbeat, monitoring and reporting services:
• The Heartbeat service periodically collects dynamic information
about the node.
• Basic information about memory space, disk space, CPU and
operating system is collected.
• All this information can be stored in an RDBMS or a flat file.
b) Resource Management Services
• Comprises these tasks: resource membership, resource
reservation and resource provisioning.
• Equivalent services: Index Service (Membership Catalogue),
Reservation Service, Resource Provisioning Service.
• The Membership Catalogue tracks the performance information of
nodes.
• The Resource Provisioning Service tracks the provisioning
and lifetime information of virtual nodes.

Foundation Services
Logical management of the distributed system, built on top of the
infrastructure:
a) Storage management for applications
• Centralized file storage – more suitable for compute-intensive
applications.
• Distributed file storage – more suitable for data-intensive
applications.
• To support different protocols, the concept of a file channel is
introduced.
• A file channel identifies a pair of components:
  o file channel controller : the server part
  o file channel handler : the client part
• The Storage service supports the execution of task-based
programming models, such as the Task and Thread Models, and
Parameter Sweep based applications.

b) Accounting, billing, and resource pricing

• Accounting keeps track of the status of applications in the Aneka
cloud.
• It shows the usage of the infrastructure and the execution of
applications.
• The Billing service provides detailed information about the
resource usage of each user, with the associated costs.
• Costs correspond to the Aneka container or the installed software
in the node.

c) Resource reservation
• Supports the execution of distributed applications
• Allows for reserving resources for exclusive use by specific
applications.

Application Services
• Manage the execution of applications
• Constitute a layer that differentiates according to the specific
programming model
• Scheduling Service and Execution Service
Building Aneka Cloud

Figure 5.3. High-level view of Aneka Cloud

System Components View


Aneka Cloud Deployment

Typical Aneka Cloud Deployment

Installing Aneka Cloud Management Studio


Aneka installation begins with installing Aneka Cloud Management
Studio. The Cloud Management Studio is your portal for creating,
configuring and managing Aneka Clouds. Installing Aneka using the
distributed Microsoft Installer Package (MSI) is a quick process
involving three steps as described below.
Step 1 – Run the installer package to start the Setup Wizard (Welcome
page).
Step 2 – Specify the installation folder. By default, Aneka is installed in
C:\Program Files\Manjrasoft\Aneka.3.0.
Step 3 – Confirm and start the installation.

At this point you are ready to begin the installation. Click "Next" to
start the installation or "Back" to change your installation folder.

Once the installation is complete, close the wizard and launch Aneka
Management Studio from the Start menu.
EXPERIMENT 06
AIM: Working on Application deployment & services of Microsoft Azure.
INTRODUCTION:
Microsoft Azure is a cloud service provider that offers cloud computing
services such as computation, storage, security and many other domains.
Microsoft is one of the global leaders when it comes to cloud solutions
and global cloud infrastructure. Microsoft Azure provides services in 60+
global regions and serves 140+ countries. It provides services in the
form of Infrastructure as a Service, Platform as a Service and Software
as a Service. It even provides serverless computing, meaning you just
supply your code, and all the backend activities are managed by Microsoft
Azure.

Azure Cloud Service (read Web/Worker Roles) is one of the earliest Platform as
a Service (PaaS) offered by Microsoft Azure. In fact, when Azure started in
2008, Cloud Services was the only Compute option available in Azure (Virtual
Machines, Websites etc. came a bit later).

With Cloud Services you can run web applications (typically by hosting your
application in a Web Role) or run background applications (typically by hosting
your application/background service in a Worker Role). Since it is a PaaS
offering, you need not worry about the issues that comes with IaaS (i.e.
patching, configuring etc.). You simply provide your application and the desired
settings in form of a package to Microsoft and based on that Azure will create
VMs for you and deploy your applications in those VMs.

Cloud Services offer you a lot of flexibility (when compared with WebApps), yet
take away the complexities that you would normally face when working with
Virtual Machines (IaaS).

Though not officially deprecated, Cloud Services is heading that way. Microsoft
is pushing for the use of other PaaS offerings (like WebApps, WebJobs,
Functions, Service Fabric etc.). If you're building a new app, my
recommendation would be not to use Cloud Services.
Azure integrates easily with other Microsoft products, which makes it very
popular among organizations that already use them. The platform is now more
than a decade old and has grown to compete with the best of the best.
History
Microsoft Azure began in the mid-2000s as an internal initiative codenamed
Project Red Dog. At the time, Amazon had already launched its cloud
computing service, and Microsoft was rushing to catch up.
During the Microsoft Professional Developers Conference 2008, two years after
Amazon Web Services (AWS) had gone live with its Simple Storage Service, Ray
Ozzie, Microsoft’s chief software architect, announced that the company
planned to launch its own cloud computing service called Windows Azure. The
plan called for Microsoft to offer five key categories of cloud services: Windows
Azure for compute, storage and networking; Microsoft SQL Services for
databases; Microsoft .NET Services for developers; Live Services for file sharing;
and Microsoft SharePoint Services and Microsoft Dynamics CRM Services SaaS
offerings.
Ozzie told the crowd, “It’s a transformation of our software and a
transformation of our strategy.”
He acknowledged Amazon’s leadership, noting that AWS had “established a
base-level design pattern, architecture models, and business models that we’ll
all learn from.” And he predicted that one day, “all our enterprise software will
be delivered as an online service as an option.”
After the announcement, Microsoft began to roll out preview versions of its
cloud services, and in February 2010, the Windows Azure Platform became
commercially available. Early reviews of the service were mixed, with many
analysts comparing Azure unfavourably with AWS. However, Microsoft
improved Azure dramatically over time. It also added support for a wide variety
of programming languages, frameworks and operating systems, including Linux
— something that once would have been unthinkable for a Microsoft product.
Recognizing that its cloud computing service had moved far beyond Windows,
the company renamed Windows Azure as Microsoft Azure in April 2014.
In the years since, Microsoft has continued to expand its cloud capabilities,
largely living up to Ozzie’s predictions at the initial announcement in 2008. It
has also increased its support for open source software, and today Azure is a
reasonable choice even for enterprises that don’t run Windows servers.
Competing closely with AWS, Google Cloud and IBM, Microsoft Azure is one of
the unquestioned cloud leaders – and some observers say it has a chance to be
the top cloud vendor, long term.

Characteristics
 Compute — includes Virtual Machines, Virtual Machine Scale Sets,
Functions for serverless computing, Batch for containerized batch
workloads, Service Fabric for microservices and container orchestration,
and Cloud Services for building cloud-based apps and APIs
 Networking — includes a variety of networking tools, like the Virtual
Network, which can connect to on-premises data centers; Load Balancer;
Application Gateway; VPN Gateway; Azure DNS for domain hosting;
Content Delivery Network; Traffic Manager; ExpressRoute dedicated
private network fiber connections; and Network Watcher monitoring and
diagnostics
 Storage — includes Blob, Queue, File and Disk Storage, as well as a Data
Lake Store, Backup and Site Recovery, among others
 Web + Mobile — includes several services for building and deploying
applications, but the most notable is probably the App Service, which
comprises services for Web Apps, Mobile Apps, Logic Apps (a low-code,
data-driven service) and API Apps (for creating and using APIs)
 Containers — includes Container Service, which supports Kubernetes,
DC/OS or Docker Swarm, and Container Registry, as well as tools for
microservices
 Databases — includes several SQL-based databases and related tools, as
well as Cosmos DB, Table Storage for NoSQL and Redis Cache in-memory
technology
 Data + Analytics — includes big data tools like HDInsight for Hadoop,
Spark, R Server, HBase and Storm clusters; Stream Analytics; Data Lake
Analytics; and Power BI Embedded, among others
 AI + Cognitive Services — includes multiple tools for developing
applications with artificial intelligence capabilities, like the Computer
Vision API, Face API, Bing Web Search, Video Indexer, Language
Understanding Intelligent Service and more
 Internet of Things — includes IoT Hub and IoT Edge services that can be
combined with a variety of machine learning, analytics and
communications services
 Enterprise Integration — includes multiple tools for building and
managing hybrid cloud computing environments
 Security + Identity — includes Security Centre, Azure Active Directory,
Key Vault and Multi-Factor Authentication Services
 Developer Tools — includes cloud development services like Visual Studio
Team Services, Azure DevTest Labs, HockeyApp mobile app deployment
and monitoring, Xamarin cross-platform mobile development and more
 Monitoring + Management — includes numerous tools for managing
Azure workloads and hybrid cloud environments, such as the Microsoft
Azure portal, Azure Resource Manager, Log Analytics, Automation,
Scheduler and more
 Microsoft Azure Stack — includes solutions for replicating Azure
infrastructure in enterprise data centers with the goal of facilitating
hybrid cloud deployments.

Advantages and Disadvantages:


1. Advantage: Microsoft Azure offers high availability.
   Disadvantage: Microsoft Azure does not help you manage your cloud-based
   data center.
2. Advantage: It offers you a strong security profile.
   Disadvantage: You must have platform expertise available.
3. Advantage: It is a cost-effective solution for an IT budget.
   Disadvantage: Microsoft Azure proposes a single-vendor strategy for your
   business. Although working with one vendor does increase convenience, it
   also increases your risk.
4. Advantage: There are multiple redundancies in place to maintain data
   access.
   Disadvantage: Speed can be an issue for some businesses.
5. Advantage: Azure allows businesses to build a hybrid infrastructure.
   Disadvantage: It may not be the right value offering for every business,
   especially some start-ups or new SMBs.

Application and uses


 Virtual machines: Create Windows or Linux virtual machines (VMs) in just
minutes from a wide selection of marketplace templates or from your
own custom machine images. These cloud-based VMs will host your apps
and services as if they resided in your own data centre.
 SQL databases: Azure offers managed SQL relational databases, from one
to an unlimited number, as a service. This saves you overhead and
expenses on hardware, software, and the need for in-house expertise.
 Azure Active Directory Domain services: Built on the same proven
technology as Windows Active Directory, this service for Azure lets you
remotely manage group policy, authentication, and everything else. This
makes moving an existing security structure partially or totally to the
cloud as easy as a few clicks.
 Application services: With Azure it’s easier than ever to create and
globally deploy applications that are compatible with all popular web and
mobile platforms. Reliable, scalable cloud access lets you respond
quickly to your business’s ebb and flow, saving time and money. With the
introduction of Azure WebApps to the Azure Marketplace, it’s easier than
ever to manage production, testing and deployment of web applications
that scale as quickly as your business. Prebuilt APIs for popular cloud
services like Office 365, Salesforce and more greatly accelerate
development.
 Visual Studio team services: An add-on service available under Azure,
Visual Studio team services offer a complete application lifecycle
management (ALM) solution in the Microsoft cloud. Developers can share
and track code changes, perform load testing, and deliver applications to
production while collaborating in Azure from all over the world. Visual
Studio team services simplify development and delivery for large
companies or new ones building a service portfolio.
 Storage: Count on Microsoft’s global infrastructure to provide safe, highly
accessible data storage. With massive scalability and an intelligent pricing
structure that lets you store infrequently accessed data at a huge savings,
building a safe and cost-effective storage plan is simple in Microsoft
Azure.

Installation
Step-1: Log in to the Azure portal.
http://www.microsoft.com/windowsazure/
Step-2: Click Create a resource > Compute, and then scroll down to and click
Cloud Service.
Result:- We successfully configure cloud services on Microsoft azure.
EXPERIMENT 07
AIM: Working on Application deployment & services of IBM Smart Cloud.
INTRODUCTION:
The IBM SmartCloud brand includes infrastructure as a service, software as a
service and platform as a service offered through public, private and hybrid
cloud delivery models. IBM places these offerings under three umbrellas:
SmartCloud Foundation, SmartCloud Services and SmartCloud Solutions.
SmartCloud Foundation consists of the infrastructure, hardware, provisioning,
management, integration and security that serve as the underpinnings of a
private or hybrid cloud. Built using those foundational components, PaaS, IaaS
and backup services make up SmartCloud Services. Running on this cloud
platform and infrastructure, SmartCloud Solutions consist of a number of
collaboration, analytics and marketing SaaS applications.
IBM also builds cloud environments for clients that are not necessarily on the
SmartCloud Platform.
For example, features of the SmartCloud platform—such as Tivoli management
software or IBM Systems Director virtualization—can be integrated separately
as part of a non-IBM cloud platform. The SmartCloud platform consists solely of
IBM hardware, software, services and practices.
IBM SmartCloud Enterprise and SmartCloud Enterprise+ compete with products
like those of Rackspace and Amazon Web Services. Erich Clementi, vice
president of Global Technology Services at IBM, said in 2012 that the goal with
SmartCloud Enterprise and SmartCloud Enterprise+ was to provide an Amazon
EC2-like experience primarily for test and development purposes and to
provide a more robust experience for production workloads.

Before you begin


 Ensure that your user has the access rights that are required to add files
to the location where you want to install IBM SmartCloud Analytics - Log
Analysis.
 Use a non-root user to install IBM SmartCloud Analytics - Log Analysis;
the installer must not be run as the root user.
 If you previously installed IBM Tivoli Monitoring Log File Agent 6.3 or
lower, the installation fails. To solve this problem, stop the existing IBM
Tivoli Monitoring Log File Agent installation or rename the folder that was
created when it was installed. For detailed information, see the topic
about the installation failure if IBM Tivoli Monitoring Log File Agent was
installed in the Troubleshooting Guide for IBM SmartCloud Analytics - Log
Analysis.
 Before you install IBM SmartCloud Analytics - Log Analysis, you must
ensure that the details for each host server are maintained correctly in
the /etc/hosts file on the target system. For a regular server that
uses a static IP address, define the static IP address and the required
values in the following format:

IP LONG-HOSTNAME SHORT-HOSTNAME
For a Dynamic Host Configuration Protocol (DHCP) server that uses a loopback
IP address, define the loopback IP address and the required values in the
following format:
LOOPBACK-IP LONG-HOSTNAME SHORT-HOSTNAME
To log in to IBM SmartCloud Analytics - Log Analysis after you set up the DHCP
server, use the Fully Qualified Domain Name (FQDN) or the host name to log in.
You cannot use the IP address as you would for non-DHCP servers.
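For example, the two entry formats above might look like this in /etc/hosts (the host name and IP addresses below are invented for illustration):

```
# Static IP address:        IP           LONG-HOSTNAME               SHORT-HOSTNAME
192.0.2.10    loganalysis01.example.com    loganalysis01

# DHCP server, loopback IP: LOOPBACK-IP  LONG-HOSTNAME               SHORT-HOSTNAME
127.0.0.1     loganalysis01.example.com    loganalysis01
```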
About this task
When you run the installation script, IBM SmartCloud Analytics - Log Analysis
and IBM Installation Manager Version 1.7 are installed. Where necessary, IBM
SmartCloud Analytics - Log
Analysis upgrades the currently installed version of IBM Installation Manager.

Installing IBM SmartCloud Analytics- Log Analysis Procedure


1) Copy and extract the installation archive to a location on your server.
2) From the directory to which you extracted the installation files, run the
command: ./install.sh
3) Click Next. The Install Packages screen displays.
4) To install IBM SmartCloud Analytics - Log Analysis in the default directory,
click Next. To install IBM SmartCloud Analytics - Log Analysis to a different
location, click Browse, select an alternative location, and click Next.
5) Click Next accepting the default option for the IBM Log File Agent.
6) The default ports that are used by IBM SmartCloud Analytics - Log Analysis
are displayed. Accept the default option if these ports are not in use by any
other application or change them if necessary. Click Next.
7) If you want to install a local instance of IBM InfoSphere Data Explorer,
ensure that the IBM InfoSphere Data Explorer 9.0 check box is selected. The
check box is selected by default. If you want to use a local installation of IBM
InfoSphere Data Explorer, you must install it now. You cannot install it after
the IBM SmartCloud Analytics - Log Analysis is installed. However, you can
install instances of IBM InfoSphere Data Explorer on remote machines after
the installation is completed. To enable search and indexing, you must install
at least one instance of IBM InfoSphere Data Explorer locally or remotely.
8) Review the summary information that is provided for the installation and
click Install.
9) To complete the installation click Finish.

IBM SmartCloud Notes Installation


Before you begin
Your administrator will make the Notes install kit available to you. Follow the
instructions provided to you by your administrator to download the install kit to
your computer. For information on supported versions of the client, see the
IBM SmartCloud Notes client requirements.

About this task


The steps in this procedure describe how to install Notes on Microsoft
Windows.
Procedure
1. Shut down all applications.
2. Obtain the Notes installation kit.
3. Uninstall any Notes Beta installations.
4. Save the installation kit to a local folder, for example, on Windows, to
your C:\temp folder.
5. Navigate to the folder in which you saved the installation kit.
6. Locate and run the installation executable. For example on Windows,
run SETUP.EXE.
7. Read the welcome information, and then click Next.
8. Read and accept the license agreement terms, and then click Next.
9. Enter your name and the name of your organization and then click
Next.
10. Accept the default install directory or specify a different installation
directory, and then click Next.
11. Select the features and sub-features to install, and then click Next.
 If you are installing version 8.5.2 or later, the service instant messaging
community can be integrated into the Notes client. To enable this feature,
select the Sametime (integrated) option.
 If you are installing version 8.5.2 or later, and you have a Connections
Cloud subscription, Connections Cloud Activities can be added to the
Notes sidebar. To enable this feature, select the IBM Connections option.

IBM SmartCloud Notes Configuration, Connecting to a mail Server


Before you begin
Before you set up Notes to connect to a mail server in the service,
complete the following procedures:
• Logging in to the service for the first time
• If you have not installed and set up the Notes client, complete the
procedures Installing Notes and Setting up Notes.

For hybrid environments, in which you continue to have access to on-premises


application servers at your company site:
• Start the Notes client and log in with your Notes ID file.
• Make sure that the current Location document allows you to connect to
an on-premises Domino® server. This step ensures that you can switch to
the Location later to use applications on your company servers.
• You might have an existing mail file on a company server that is being
transferred to your mail server in the service. In this case, status messages
display about a final replication of your mail file on the company server to
your mail file on the new SmartCloud Notes mail server. Make sure that
you get confirmation of a successful mail file transfer before continuing or
breaking out of the replication.

About this task


Your existing Notes client can be running or closed. The SmartCloud Notes
client configuration tool launches Notes for you if it is closed. After you have
completed this setup, a Location document called SmartCloud for username is
active and associated with a Notes ID for SmartCloud Notes. While this Location
document is active, you can send and receive mail through servers in the
service, use contacts, and schedule meetings with other service users in your
company. Location documents are not used with the web client.
The following procedure includes steps to download and run the client
configuration tool manually. However, you might receive a welcome email with
a Configuration button that automates these steps.
Procedure
1. Log in to http://www.ibmcloud.com/social using your service login email
address and password.
2. In your dashboard, click your photo and select Downloads and
Setup.
3. Under IBM SmartCloud Notes, click View IBM SmartCloud Notes options.
4. At the Welcome to IBM SmartCloud Notes window, click With IBM Notes
client.
5. If you see the Let's set your IBM Notes password window, provide a
password for your Notes ID, and then verify it. Then click Set Password.
6. At the Start using IBM SmartCloud Notes window, click Download.
7. At the Software License Agreement window for the client configuration
tool, select a language, agree to the terms and conditions, and
click Continue.
8. At the Your download will start in a few seconds window, when
prompted open the download file.
9. Notes starts if it is not already running. When the Join SmartCloud Notes
window displays:
a) Read the information provided about your account.
b) Close any other open tabs and save your work.
c) Select I have closed all other Notes windows and tabs
d) Click Join to run the tool.
10. When configuration has completed, read the prompt that is displayed,
select I have read the above, and then click Yes. The Notes client exits.
11. Restart Notes. Your name and your new Location are selected in the login
dialog. Do not switch locations at this time. If you do, the download will
fail and you will have to rerun the SmartCloud Notes client configuration
tool.
12. Log on to Notes using your Notes ID password. If you provided a
password in Step 5, enter that password. If you did not provide a
password in Step 5, enter the password for your current ID that you also
use to connect to servers at your company.
13. Click Yes if you are prompted to create a cross-certificate when first
accessing your mail server in the service. You may see this prompt in a
hybrid environment if your Notes ID is certified under a different
organization certifier than your mail server in the service.

Results
You can now log on to the service with either the Notes client or the web client.
To return at a later time and use the web client, log on to
http://www.ibmcloud.com/social using your web login email and password.
EXPERIMENT 08
AIM: Working on Heroku for Cloud application deployment.
INTRODUCTION:
Managing resources at large scale while providing performance isolation and
efficient use of underlying hardware is a key challenge for any cloud
management software. Most virtual machine (VM) resource management
systems like VMware DRS clusters, Microsoft PRO and Eucalyptus, do not
currently scale to the number of hosts and VMs needed by cloud offerings to
support the elasticity required to handle peak demand. In addition to scale,
other problems a cloud-level resource management layer needs to solve
include heterogeneity of systems, compatibility constraints between virtual
machines and underlying hardware, islands of resources created due to storage
and network connectivity and limited scale of storage resources.
Managing compute and IO resources at large scale in both public and private
clouds is challenging. The success of any cloud management software critically
depends on the flexibility, scale and efficiency with which it can utilize the
underlying hardware resources while providing necessary performance
isolation. Customers expect cloud service providers to deliver quality of service
(QoS) controls for tenant VMs. Thus, resource management at cloud scale
requires the management platform to provide a rich set of resource controls
that balance the QoS of tenants with overall resource efficiencies of
datacenters.
For public clouds, some systems (e.g., Amazon EC2) provide largely a 1:1
mapping between virtual and physical CPU and memory resources. This leads to
poor consolidation ratios and customers are unable to exploit the benefits from
statistical multiplexing that they enjoy in private clouds.
For private clouds, resource management solutions like VMware DRS,
Microsoft PRO have led to better performance isolation, higher utilization of
underlying hardware resources via over-commitment and overall lower cost of
ownership.
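The consolidation difference between the two models can be sketched with some simple arithmetic (the host size, VM size and over-commit ratio below are invented for illustration):

```python
def max_vms(physical_cores, vcpus_per_vm, overcommit_ratio=1.0):
    """How many VMs fit on a host for a given vCPU:pCPU over-commit ratio."""
    return int(physical_cores * overcommit_ratio) // vcpus_per_vm

# 1:1 mapping of virtual to physical CPUs (poor consolidation)
print(max_vms(16, 2, 1.0))   # -> 8

# 4:1 over-commitment, relying on statistical multiplexing
print(max_vms(16, 2, 4.0))   # -> 32
```

A 16-core host thus hosts four times as many 2-vCPU VMs at a 4:1 ratio, which is the efficiency gain the text attributes to private-cloud resource managers.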
Cloud Resource Management Operations
There are many ways to provide resource management controls in a cloud
environment. We pick
VMware’s Distributed Resource Scheduler (DRS) as an example because it has
a rich set of controls providing the services needed for successful multi-
resource management while providing differentiated QoS to groups of VMs,
albeit at a small scale compared to typical cloud deployments. In this section,
we first describe the services provided by DRS, then discuss the importance of
such services in the cloud setting, and finally highlight some of the challenges in
scaling the services DRS provides.
In a large-scale environment, having these controls will alleviate the
noisy-neighbour problem for tenants if, like DRS, the underlying management
infrastructure natively supports automated enforcement and guarantees. At
the same time, the cloud service provider must be able to over-commit the
hardware resources safely allowing better efficiency from statistical
multiplexing of resources without sacrificing the exposed guarantees.

Basic Resource Controls in DRS


VMware ESX and DRS provide resource controls which allow administrators and
users to express allocations in terms of either absolute VM service rates or
relative VM importance. The same control knobs are provided for CPU and
memory allocations, both at the host and cluster levels. Similar controls are
under development for I/O resources and have been demonstrated by a research
prototype. Note that VMware's Distributed Power Management product (DPM)
powers on/off hosts while respecting these controls.
Reservation: A reservation is used to specify a minimum guaranteed amount of
resources; a lower bound that applies even when a system is heavily
overcommitted. Reservations are expressed in absolute units, such as
megahertz (MHz) for CPU, and megabytes (MB) for memory. Admission control
during VM power-on ensures that the sum of reservations for a resource does
not exceed total capacity.
Limit: A limit is used to specify an upper bound on consumption, even when a
system is undercommitted. A VM is prevented from consuming more than its
limit, even if that leaves some resources idle. Like reservations, limits are
expressed in concrete absolute units, such as MHz and MB.
Shares: Shares are used to specify relative importance, and are expressed as
abstract numeric values. A VM is entitled to consume resources proportional to
its share allocation; it is guaranteed a minimum resource fraction equal to its
fraction of the total shares in the system. Shares represent relative resource
rights that depend on the total number of shares contending for a resource. VM
allocations degrade gracefully in overload situations, and VMs benefit
proportionally from extra resources when some allocations are underutilized.
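A simplified sketch of how these three controls interact (an illustrative model, not VMware's actual DRS divvying algorithm, which iterates to redistribute the surplus created by clamping): each VM gets a share-proportional slice of capacity, clamped to its [reservation, limit] range.

```python
def entitlement(capacity_mhz, vms):
    """Naive share-proportional allocation clamped by reservation/limit.

    vms maps a VM name to (shares, reservation_mhz, limit_mhz). This is a
    one-pass sketch; a real scheduler would rebalance so that the total
    allocation never exceeds capacity.
    """
    total_shares = sum(shares for shares, _, _ in vms.values())
    alloc = {}
    for name, (shares, reservation, limit) in vms.items():
        proportional = capacity_mhz * shares / total_shares
        alloc[name] = min(max(proportional, reservation), limit)
    return alloc

# Two VMs contend for 3000 MHz: vm1 holds twice the shares of vm2,
# but vm2's reservation guarantees it at least 1200 MHz.
print(entitlement(3000, {
    "vm1": (2000, 0, 3000),     # (shares, reservation MHz, limit MHz)
    "vm2": (1000, 1200, 3000),
}))
```

The example shows shares driving the proportional split (2000 vs 1000 MHz) while the reservation lifts vm2 to its guaranteed floor of 1200 MHz.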
Reservations and limits play an important role in the cloud. Without these
guarantees, users would suffer from performance unpredictability, unless the
cloud provider simply partitions the physical hardware leading to the
inefficiency of overprovisioning. This was the main reason for providing these
controls for enterprise workloads running on top of VMware ESX and VMware
DRS.

DRS Architecture and Conceptual Overview VMware Infrastructure


At the core of VMware Infrastructure, VMware ESX Server is the foundation for
delivering virtualization-based distributed services to IT environments. ESX
Server provides a robust virtualization layer that abstracts processor, memory,
storage and networking resources into multiple virtual machines that run
side-by-side on the same physical server.
ESX Server installs directly on the server hardware, or "bare metal", and inserts
a robust virtualization layer between the hardware and the operating system.
ESX Server partitions a physical server into multiple secure and portable virtual
machines that run on the same physical server. Each virtual machine represents
a complete system—with processors, memory, networking, storage and BIOS—
so that Windows, Linux, Solaris, and NetWare operating systems and software
applications run in virtualized environment without any modification.
VirtualCenter, another key building block of VMware Infrastructure, manages
all aspects of your virtual infrastructure—ESX Server hosts, virtual machines,
provisioning, migration, resource allocations, and so on.

VMware Infrastructure Configuration


VMware Infrastructure simplifies management with a single client called the
Virtual Infrastructure (VI) Client that you can use to perform all tasks. Every
ESX Server configuration task from configuring storage and network
connections, to managing the service console, can be accomplished centrally
through the VI Client.
The VI Client connects to ESX Server hosts, even those not under VirtualCenter
management, and also lets you remotely connect to any virtual machine for
console access. There is a Windows version of the VI Client, and for access from
any networked device, a web browser application provides virtual machine
management and VMware Console access. The browser version of the client,
Virtual Infrastructure Web Access, makes it as easy to give a user access to a
virtual machine as sending a bookmark URL.
VirtualCenter user access controls provide customizable roles and permissions,
so you create your own user roles by selecting from an extensive list of
permissions to grant to each role. Responsibilities for specific VMware
Infrastructure components such as resource pools can be delegated based on
business organization, or ownership. VirtualCenter also provides full audit
tracking to provide a detailed record of every action and operation performed
on the virtual infrastructure. Users can also access virtualization-based
distributed services provided by VMotion, DRS, and HA directly through
VirtualCenter and the VI client.

VMware Clusters
Clusters are a new concept in virtual infrastructure management. They give you
the power of multiple hosts with the simplicity of managing a single entity. A
cluster is a group of loosely connected computers that work together, so that,
from the point of view of aggregating resources such as CPU processing
capability and memory, they can be viewed as though they are a single
computer.
VMware clusters let you aggregate the various hardware resources of individual
ESX Server hosts but manage the resources as if they resided on a single host.
When you power on a virtual machine, it can be given resources from
anywhere in the cluster, rather than be tied to a specific ESX Server host.

VMware Infrastructure 3 provides two services to help with the management of


VMware clusters: VMware HA (high availability) and VMware DRS. VMware
HA allows virtual machines running on specific hosts to be switched over to use
other host resources in the cluster in the case of host machine failures. VMware
DRS provides automatic initial virtual machine placement and makes automatic
resource relocation and optimization decisions as hosts are added or removed
from the cluster or the load on individual virtual machines changes.

Distributed Resource Scheduling (DRS)


VMware DRS and VirtualCenter provide a view and management of all
resources in the cluster. A global scheduler within VirtualCenter enables
resource allocation and monitoring for all virtual machines running on ESX
Servers that are part of the cluster.

DRS Global Scheduler of Clusters in VirtualCenter


DRS provides automatic initial virtual machine placement on any of the hosts in
the cluster, and also makes automatic resource relocation and optimization
decisions as hosts or virtual machines are added or removed from the cluster.
DRS can also be configured for manual control, in which case it only makes
recommendations that you can review and carry out.
DRS provides several additional benefits to IT operations:
 Day-to-day IT operations are simplified as staff members are less affected
by localized events and dynamic changes in their environment. Loads on
individual virtual machines invariably change, but automatic resource
optimization and relocation of virtual machines reduces the need for
administrators to respond, allowing them to focus on the broader, higher-
level tasks of managing their infrastructure.
 DRS simplifies the job of handling new applications and adding new virtual
machines. Starting up new virtual machines to run new applications
becomes more of a task of high-level resource planning and determining
overall resource requirements, than needing to reconfigure and adjust
virtual machines settings on individual ESX Server machines.
 DRS simplifies the task of extracting or removing hardware when it is no
longer needed, or replacing older host machines with newer and larger
capacity hardware. To remove hosts from a cluster, you can simply place
them in maintenance mode, so that all virtual machines currently running
on those hosts get reallocated to other resources of the cluster. After
monitoring the performance of remaining systems to ensure that
adequate resources remain for currently running virtual machines, you
can remove the hosts from the cluster to allocate them to a different
cluster, or remove them from the network if the hardware resources are
no longer needed. Adding new resources to the cluster is also
straightforward, as you can simply drag and drop new ESX Server hosts
into a cluster.

Resource Pools
In addition to the basic resource controls presented earlier, administrators and
users can specify flexible resource management policies for groups of VMs. This
is facilitated by introducing the concept of a logical resource pool – a container
that can be used to specify an aggregate resource allocation for a set of VMs. A
resource pool is a named object with associated settings for each managed
resource – the same familiar shares, reservation, and limit controls used for
VMs. Admission control is performed at the pool level; the sum of the
reservations for a pool's children must not exceed the pool's own
reservation. Separate, per-pool allocations provide both isolation between
pools and sharing within pools. Resource pools are useful in dividing large
capacity into logically grouped users. Organizational administrators can use
resource pool hierarchies to mirror human organizational structures, and to
support delegated administration.
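The pool-level admission rule described above (the sum of child reservations must stay within the pool's own reservation) can be sketched as follows; the reservation figures are invented for illustration:

```python
def admit(pool_reservation_mhz, child_reservations_mhz, new_reservation_mhz):
    """Admission-control sketch: accept a new child reservation only if all
    child reservations together still fit within the pool's reservation."""
    return sum(child_reservations_mhz) + new_reservation_mhz <= pool_reservation_mhz

# A pool reserving 8000 MHz, with existing children reserving 3000 and 2500 MHz:
print(admit(8000, [3000, 2500], 2000))  # -> True  (7500 <= 8000)
print(admit(8000, [3000, 2500], 3000))  # -> False (8500 >  8000)
```

The same check applies recursively up a resource-pool hierarchy, which is what lets separate per-pool allocations isolate one organization's VMs from another's.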
Using VMware DRS
This section describes some of the setup and operation tasks you can perform
using DRS and VirtualCenter: adding and removing hosts from clusters, setting
up and allocating resource pools to virtual machines, and delegating resource
administration to pool administrators.
Enabling DRS
VMware DRS is included as an integrated component in VMware Infrastructure
3 Enterprise. It is also available as an add-on license option for VMware
Infrastructure 3 Starter and VMware Infrastructure 3 Standard. To use DRS
when you create VMware clusters, you need to set the Enable VMware DRS
option, so that DRS can use the cluster load distribution information for initial
virtual machine placement, to make load balancing recommendations, and to
perform automatic runtime virtual machine migration.
To install VMware Workstation on a Windows host:
1. Log in to the Windows host system as the Administrator user or as a
user who is a member of the local Administrators group.
2. Open the folder where the VMware Workstation installer was
downloaded. The default location is the Downloads folder for the user
account on the Windows host.
3. Right-click the installer and click Run as Administrator.
4. Select a setup option:
Typical: Installs typical Workstation features. If the Integrated Virtual Debugger
for Visual Studio or Eclipse is present on the host system, the associated
Workstation plug-ins are installed.
EXPERIMENT 09
AIM: Working and configuration of Eucalyptus.
INTRODUCTION:
Eucalyptus is a Linux-based software architecture that implements scalable
private and hybrid clouds within your existing IT infrastructure. Eucalyptus
allows you to use your own collections of resources (hardware, storage, and
network) using a self-service interface on an as-needed basis.
You deploy a Eucalyptus cloud across your enterprise’s on-premise data center.
Users access Eucalyptus over your enterprise's intranet. This allows sensitive
data to remain secure from external intrusion behind the enterprise firewall.
You can install Eucalyptus on the following Linux distributions:
• CentOS 7
• Red Hat Enterprise Linux (RHEL) 7
Eucalyptus was designed to be easy to install and as non-intrusive as possible.
The software framework is modular, with industry-standard, language-agnostic
communication. Eucalyptus provides a virtual network overlay that both isolates
network traffic of different users and allows two or more clusters to appear to
belong to the same Local Area Network (LAN). Also, Eucalyptus offers API
compatibility with Amazon’s EC2, S3, IAM, ELB, Auto Scaling, CloudFormation,
and CloudWatch services. This offers you the capability of a hybrid cloud.

Requirements
Compute Requirements :-
Eucalyptus can also be installed via the Ubuntu Enterprise Cloud, introduced in
Ubuntu 9.04. The recommended minimum hardware specification is a dual-core
2.2 GHz processor with virtualization extensions (Intel-VT or AMD-V), 4GB RAM,
and a 100 GB hard drive.
• Ubuntu 9.10, server edition
• Dual-core 2.2 GHz processor with virtualization extensions (Intel-VT
or AMD-V), 4GB RAM and a 100 GB hard drive
• Port 22 needs to be open for admins (for maintenance)
• Port 8443 needs to be open for users for controlling and sending
requests to the cloud via a web interface

Storage and Memory Requirements:-


Each machine needs a minimum of 100GB of storage. We recommend at least
500GB for Walrus and SC hosts.
We recommend 200GB per NC host running Linux VMs. Note that larger
available disk space enables a greater number of VMs.
Each machine needs a minimum of 16GB RAM. However, we recommend more
RAM for improved caching and on NCs to support more instances.

Network Requirements:-
For VPCMIDO, Eucalyptus needs MidoNet to be installed.
The network connecting machines that host components (except the CC and
NC) must support UDP multicast for IP address 239.193.7.3. Note that UDP
multicast is not used over the network that connects the CC to the NCs.
Once you are satisfied that your system requirements are met, you are ready
to plan your Eucalyptus installation.
Eucalyptus components

Components of Eucalyptus in cloud computing:


Cloud Controller :- In many deployments, the Cloud Controller (CLC) service and
the User-Facing Services (UFS) are on the same host machine. This server is the
entry-point into the cloud for administrators, developers, project managers,
and end-users. The CLC handles persistence and is the backend for the UFS. A
Eucalyptus cloud must have exactly one CLC.
User-Facing Services :- The User-Facing Services (UFS) serve as endpoints for
the AWS- compatible services offered by Eucalyptus : EC2 (compute), AS
(AutoScaling), CW (CloudWatch), ELB (LoadBalancing), IAM (Euare), and STS
(tokens). A Eucalyptus cloud can have several UFS host machines.
Object Storage Gateway :- The Object Storage Gateway (OSG) is part of the
UFS. The OSG passes requests to object storage providers and talks to the
persistence layer (DB) to authenticate requests. You can use Walrus, Riak CS, or
Ceph-RGW as the object storage provider.
Object Storage Provider :- The Object Storage Provider (OSP) can be either the
Eucalyptus Walrus backend, Riak CS, or Ceph-RGW. Walrus is intended for light
S3 usage and is a single service. Riak is an open source scalable general purpose
data platform; it is intended for deployments with heavy S3 usage. Ceph-RGW
is an object storage interface built on top of Librados.
Management Console :- The Eucalyptus Management Console is an easy-to-use
web- based interface that allows you to manage your Eucalyptus cloud. The
Management Console is often deployed on the same host machine as the UFS.
A Eucalyptus cloud can have multiple Management Console host machines.
Cluster Controller :- The Cluster Controller (CC) service must run on a host
machine that has network connectivity to the host machines running the Node
Controllers (NCs) and to the host machine for the CLC. CCs gather information
about a set of NCs and schedules virtual machine (VM) execution on specific
NCs.
Storage Controller :- The Storage Controller (SC) service provides functionality
similar to Amazon Elastic Block Store (Amazon EBS). The SC can interface with
various storage systems. Elastic block storage exports storage volumes that can
be attached by a VM and mounted or accessed as a raw block device. EBS
volumes can persist past VM termination and are commonly used to store
persistent data. An EBS volume cannot be shared between multiple VMs at
once and can be accessed only within the same availability zone in which the
VM is running. Users can create snapshots from EBS volumes.

Node Controller :- The Node Controller (NC) service runs on any machine that
hosts VM instances. The NC controls VM activities, including the execution,
inspection, and termination of VM instances. It also fetches and maintains a
local cache of instance images, and it queries and controls the system software
(host OS and the hypervisor) in response to queries and control requests from
the CC.
Eucanetd :- The eucanetd service implements artifacts to manage and define
Eucalyptus cloud networking. Eucanetd runs alongside the CLC or NC services,
depending on the configured networking mode.
Advantages
The benefits of Eucalyptus in cloud computing are:
1. Eucalyptus can be used to build both private and hybrid clouds.
2. Users can run Amazon or Eucalyptus machine images as instances on
both clouds.
3. It is not very popular in the market, but it is a strong competitor to
CloudStack and OpenStack.
4. It offers API compatibility with major Amazon Web Services (EC2, S3,
IAM, ELB, Auto Scaling, CloudFormation, and CloudWatch).
5. Eucalyptus can be used with DevOps tools such as Chef and
Puppet.

Features
Eucalyptus features include:
• Supports both Linux and Windows virtual machines (VMs).
• Application program interface- (API) compatible with Amazon EC2
platform.
• Compatible with Amazon Web Services (AWS) and Simple Storage
Service (S3).
• Works with multiple hypervisors including VMware, Xen and KVM.
• Can be installed and deployed from source code or DEB and RPM
packages.
• Internal processes communications are secured through SOAP and WS-
Security.
• Multiple clusters can be virtualized as a single cloud.
• Administrative features such as user and group management and
reports.
• Version 3.3, which became generally available in June 2013, adds the
following features:
• Auto Scaling: Allows application developers to scale Eucalyptus
resources up or down based on policies defined using Amazon EC2-
compatible APIs and tools
• Elastic Load Balancing: AWS-compatible service that provides greater
fault tolerance for applications
• CloudWatch: An AWS-compatible service that allows users to collect
metrics, set alarms, identify trends, and take action to ensure
applications run smoothly
• Resource Tagging: Fine-grained reporting for showback and chargeback
scenarios; allows IT/ DevOps to build reports that show cloud
utilization by application, department or user
• Expanded Instance Types: An expanded set of instance types that more
closely aligns with those available in Amazon EC2, growing from 5 to
15 instance types.
• Maintenance Mode: Allows for replication of a virtual machine’s hard
drive, evacuation of the server node and provides a maintenance
window.

History timelines of Eucalyptus:-


The software development had its roots in the Virtual Grid Application
Development Software project at Rice University and other institutions from
2003 to 2008. Rich Wolski led a group at the University of California, Santa
Barbara (UCSB), and became the chief technical officer at the company
headquartered in Goleta, California, before returning to teach at UCSB.
Eucalyptus software was included in the Ubuntu 9.04 distribution in 2009. The
company was formed in 2009 with $5.5 million in funding by Benchmark Capital
to commercialize the software. Eucalyptus Systems announced a formal
agreement with Amazon Web Services in March 2012.

Eucalyptus Architecture Overview


This topic describes the relationship of the components in a Eucalyptus
installation.

System Requirements :-
Compute Requirements :-
Physical Machines: All services must be installed on physical servers, not virtual
machines.
Central Processing Units (CPUs): We recommend that each host machine in
your cloud contain either an Intel or AMD processor with a minimum of 4 2GHz
cores.
Operating Systems: supports the following Linux distributions: CentOS 7.9 and
RHEL
7.9. supports only 64-bit architecture.
Machine Clocks: Each host machine and any client machine clocks must be
synchronized (for example, using NTP). These clocks must be synchronized all
the time, not only during the installation process.
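As a sketch, clock synchronization on CentOS/RHEL 7 might be set up like this (assumes root access and the distribution's ntp package; the server name is an example):

```shell
# Keep host clocks synchronized with NTP, as Eucalyptus requires
# (run as root on every host machine; the server name is an example).
yum install -y ntp
systemctl enable ntpd
ntpdate -u pool.ntp.org   # one-time sync before starting the daemon
systemctl start ntpd
hwclock --systohc         # write the synchronized time to the hardware clock
```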


Manual Installation of Eucalyptus :-


Prerequisite :- Before you install Eucalyptus components on your machines, we
recommend that you take the time to plan how you want to install them.
To successfully plan for your Eucalyptus installation, you must determine:
• the application workload performance and resource utilization tuning
you need, and how many machines you want on your system;
• the networking features you want to enable, based on your existing
architecture and policies: EC2 Classic Networking or EC2 VPC Networking.
The cloud components: Cloud Controller (CLC) and Walrus, as well as user
components: User-Facing Services (UFS) and the Management Console,
communicate with cluster components: the Cluster Controllers (CCs) and
Storage Controllers (SCs). The CCs and SCs, in turn, communicate with the Node
Controllers (NCs). The networks between machines hosting these components
must be able to allow TCP connections between them.

Step 1 :- Plan your Hardware.


You can run Eucalyptus services in any combination on the various physical
servers in a data center. For example, you can install the Cloud Controller (CLC),
Walrus, CC, and SC on one host machine, and NCs on one or more host
machines. Or you can install each service on an independent physical server.
This gives each service its own local resources to work with.
Often in installation decisions, you must trade deployment simplicity for
performance. For example, if you place all cloud (CLC) and zone (CC) services on
a single machine, it makes for simple administration. This is because there is
only one machine to monitor and control for the Eucalyptus control services.
But, each service acts as an independent web service; so if they share a single
machine, the reduced physical resources available to each service might
become a performance bottleneck.

Step 2 :- Plan Services Placement.

Cloud Services :- The main decision for cloud services is whether to install the
Cloud Controller (CLC) and Walrus on the same server. If they are on the same
server, they operate as separate web services within a single Java environment,
and they use a fast path for inter-service communication. If they are not on the
same server, they use SOAP and REST to work together. Sometimes the key
factor for cloud services is not performance, but server cost and data center
configuration. If you only have one server available for the cloud, then you have
to install the services on the same server. All services should be in the same
data center. They use aggressive time-outs to maintain system responsiveness
so separating them over a long-latency, lossy network link will not work.
User Services :- The User Facing Services (UFS) handle all of the AWS APIs and
provide an entry point for clients and users interacting with the Eucalyptus
cloud. The UFS and the Management Console are often hosted on the same
machine since both must be accessible from the public, client-facing network.
You may optionally choose to have redundant UFS and Management Console
host machines behind a load balancer.
Zone Services :- The Eucalyptus services deployed in the zone level of a
Eucalyptus deployment are the Cluster Controller (CC) and Storage Controller
(SC). You can install all zone services on a single server, or you can distribute
them on different servers. The choice of one or multiple servers is dictated by
the demands of user.
Node Services :- The Node Controllers are the services that comprise the
Eucalyptus backend. All NCs must have network connectivity to whatever
machine(s) host their EBS volumes. Hosts are either a Ceph deployment or the
SC.

Step 3 :- Plan Disk space.


We recommend that you choose a disk for the Walrus that is large enough to
hold all objects and buckets you ever expect to have, including all images that
will ever be registered to your system, plus any Amazon S3 application data. For
heavy S3 usage, Riak CS is a better choice for object storage.
Service                      Directory                        Minimum size
Cloud Controller (CLC)       /var/lib/eucalyptus/db           20GB
CLC logging                  /var/log/eucalyptus              2GB
Walrus                       /var/lib/eucalyptus/bukkits      250GB
Walrus logging               /var/log/eucalyptus              2GB
Storage Controller (SC)      /var/lib/eucalyptus/volumes      250GB
(EBS storage)                /var/log/eucalyptus
User-Facing Services (UFS)   /var/lib/eucalyptus              5GB
UFS logging                  /var/log/eucalyptus              2GB
Cluster Controller (CC)      /var/lib/eucalyptus/CC           5GB
CC logging                   /var/log/eucalyptus              2GB
Node Controller (NC)         /var/lib/eucalyptus/instances    250GB
NC logging                   /var/log/eucalyptus              2GB
Note: The disk space on the SC is only required if you are not using
Ceph. For DAS the space must not be used by an existing filesystem.
If necessary, create symbolic links or mount points to larger
filesystems from the above locations. Make sure that the
‘eucalyptus’ user owns the directories.
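The note about symbolic links can be illustrated as follows (our own sketch; temporary directories stand in for the real paths, where an NC would use /var/lib/eucalyptus/instances on a large filesystem):

```shell
# Relocate a Eucalyptus data directory onto a larger filesystem and point
# the expected path at it via a symbolic link. Temp dirs stand in for the
# real paths so the sketch can run anywhere.
BIGFS=$(mktemp -d)     # stand-in for a large mounted filesystem
EUCA_BASE=$(mktemp -d) # stand-in for /var/lib/eucalyptus
mkdir -p "$BIGFS/instances"
ln -s "$BIGFS/instances" "$EUCA_BASE/instances"
# On a real host, the 'eucalyptus' user must own the target directory:
# chown -R eucalyptus:eucalyptus "$BIGFS/instances"
ls -ld "$EUCA_BASE/instances"   # shows the symlink to the big filesystem
```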

Step 4 :- Plan Features.


Availability zone support :- Eucalyptus offers the ability to create
multiple local availability zones. An availability zone for AWS
denotes a large subset of their cloud environment. Eucalyptus
refines this definition to denote a subset of the cloud that shares a
local area network.

Object Storage :- Eucalyptus supports Walrus and Riak CS as its
object storage backend. There is no extra planning if you use
Walrus. If you use Riak CS, you can use a single Riak CS cluster for
several Eucalyptus clouds. Basho (the vendor of Riak CS)
recommends five nodes for each Riak CS cluster.
Step 5 :- Plan Networking modes.
These networking modes are designed to allow you to choose an
appropriate level of security and flexibility for your cloud. The
purpose is to direct Eucalyptus to use different network features
to manage the virtual networks that connect VMs to each other
and to clients external to Eucalyptus.
Eucalyptus networking modes are generally modeled after AWS
networking capabilities. In legacy AWS accounts, you have the
ability to choose EC2 Classic network mode or VPC network mode.
New AWS accounts do not have this flexibility and are forced into
using VPC. Eucalyptus VPCMIDO mode is similar to AWS VPC in
that it allows users to fully manage their cloud network, including
the definition of a Classless Inter-Domain Routing (CIDR) block,
subnets, and security groups with rules for additional protocols
beyond the default three (UDP, TCP, and ICMP) available in EC2
Classic networking.
Does your cloud need to mimic behavior in your AWS account? If
you need EC2-Classic behavior, select EDGE mode. If you need
EC2-VPC behavior, select VPCMIDO mode.
Do you need to create security group rules with additional
protocols (e.g., all protocols, RDP, XTP, etc.)? If so, choose
VPCMIDO mode.
If there is no specific requirement for either mode, then VPCMIDO
mode is recommended given its flexibility and networking
features.
Eucanetd :- The eucanetd service implements artifacts to manage
and define Eucalyptus cloud networking. Eucanetd runs alongside
the CLC or NC services, depending on the configured networking
mode. Eucanetd manages network functionality. For example:
• Installs network artifacts (iptables, ipsets, ebtables, dhcpd)
• Performs state management for the installed network artifacts
• Updates network artifact configuration as needed
In VPCMIDO mode:
• Interacts with MidoNet via the MidoNet API
• Defines network artifacts in MidoNet

Where the eucanetd service runs, by networking mode:

Host Machine    EDGE mode    VPCMIDO mode
CLC             No           Yes
NC              Yes          No

Eucalyptus EDGE Mode :- In EDGE networking mode, the
components responsible for implementing Eucalyptus VM
networking artifacts run at the edge of a Eucalyptus
deployment: the Linux host machines acting as Node Controllers
(NCs). On each NC host machine, a Eucalyptus stand-alone service,
eucanetd, runs side-by-side with the NC service. The eucanetd
service receives dynamically changing Eucalyptus networking views
and is responsible for configuring the Linux networking subsystem
to reflect the latest view. EDGE networking mode integrates with
your existing network infrastructure, allowing you to inform
Eucalyptus, through configuration parameters for EDGE mode,
about the existing network, which Eucalyptus then will consume
when implementing the networking view. EDGE networking mode
integrates with two basic types of pre-existing network setups:
• One flat IP network used to service component systems, VM
public IPs (elastic IPs), and VM private IPs.
• Two networks, one for components and VM public IPs, and the
other for VM private IPs.
EDGE Mode Requirements :-
• Each NC host machine must have an interface configured with an
IP on a VM public and a VM private network (which can be the
same network).
• There must be IP connectivity between each NC host machine
(where eucanetd runs) and the CLC host machine, so that a network
path from instances to the metadata server (running on the CLC
host machine) can be established.
• There must be a functioning router in place for the private
network. This router will be the default gateway for VM instances.
• The private and public networks can be the same network, but
they can also be separate networks.
• The NC host machines need a bridge configured on the private
network, with the bridge interface itself having been assigned an
IP from the network.
• If you're using a public network, the NC host machines need an
interface on the public network as well (if the public and private
networks are the same network, then the bridge needs an IP
assigned on the network).
• If you run multiple zones, each zone can use the same network as
its private network, or they can use separate networks as private
networks. If you use separate networks, you need to have a router
in place that is configured to route traffic between the networks.
• If you use private addressing only, the CLC host machine must
have a route back to the VM private network.

EDGE Mode Limitations :-


Global network updates (such as security group rule updates,
security group VM membership updates, and elastic IP updates)
are applied through an "eventually consistent" mechanism, as
opposed to an "atomic" mechanism. That is, there may be a brief
period of time where one NC has the new state implemented while
another still has the old state.
Mappings between VM MAC addresses and private IPs are strictly
enforced. This means that instances cannot communicate using
addresses the cloud has not assigned to them.
EDGE networking mode integrates with networks that already
exist. If the network, netmask, and router don't already exist, you
must create them outside of Eucalyptus before configuring EDGE mode.
VPCMIDO and MidoNet :- Eucalyptus VPCMIDO mode resembles
the Amazon Virtual Private Cloud (VPC) product wherein the
network is fully configurable by users. In Eucalyptus, it is
implemented with a Software-Defined Networking (SDN)
technology called MidoNet. MidoNet is a network virtualization
platform for Infrastructure-as-a-Service (IaaS) clouds that
implements and exposes virtual network components as software
abstractions, enabling programmatic provisioning of virtual
networks. This network mode requires configuration of MidoNet
in order to make cloud networking functional. It offers the most
advanced networking capabilities and therefore it is
recommended to be used on all new Eucalyptus installations.
MidoNet Components :- A MidoNet deployment consists of four
types of nodes (according to their logical functions or services
offered), connected via four IP networks as depicted in Figure 1.
MidoNet does not require any specific hardware, and can be
deployed in commodity x86_64 servers. Interactions with
MidoNet are accomplished through Application Programming
Interface (API) calls, which are translated into (virtual) network
topology changes. Network state information is stored in a
logically centralized data store, called the Network State Database
(NSDB), which is implemented on top of two open-source
distributed coordination and data store technologies: ZooKeeper
and Cassandra.
Node types :-
• MidoNet Network State Database (NSDB): consists of a cluster of
ZooKeeper and Cassandra. All MidoNet nodes must have IP
connectivity with NSDB.
• MidoNet API: consists of the MidoNet web app. Exposes MidoNet REST APIs.
• Hypervisor: MidoNet agents (Midolman) are required on all
hypervisors to enable VMs to be connected via MidoNet overlay
networks/SDN.
• Gateway: Gateway nodes are connected to the public network,
and enable the network flow from MidoNet overlays to the public
network.

Physical Networks :-
• NSDB: IP network that connects all nodes that participate in
MidoNet. While the NSDB and Tunnel Zone networks can be the same,
it is recommended to have an isolated (physical or VLAN) segment.
• API: in deployments only eucanetd/CLC needs access to the API
network. Only "special hosts/processes" should have access to this
network. The use of the "localhost" network on the node running
CLC/eucanetd is sufficient and recommended in deployments.
• Tunnel Zone: IP network that transports the MidoNet overlay
traffic (VM traffic), which is not "visible" on the physical network.
• Public network: network with access to the Internet (or
corporate/enterprise) network.
Step 6 :- Prepare the network.
Reserve Ports :-
Port Description
TCP 5005 DEBUG ONLY: This port is used for debugging (using
the --debug flag).
TCP 8772 DEBUG ONLY: JMX port. This is disabled by default,
and can be enabled with the --debug or --jmx options
for CLOUD_OPTS.
TCP 8773 Web services port for the CLC, user-facing services
(UFS), object storage gateway (OSG), Walrus SC;
also used for external and internal communications
by the CLC and Walrus. Configurable
with euctl.
TCP 8774 Web services port on the CC. Configured in the
eucalyptus.conf configuration file
TCP 8775 Web services port on the NC. Configured in the
eucalyptus.conf configuration file.
TCP 8777 Database port on the CLC
TCP 8779-8849 jGroups failure detection port on CLC, UFS, OSG,
Walrus SC. If port 8779 is available, it will be
used; otherwise, the next port in the range will
be attempted until an unused port is found.
TCP 8888 The default port for the Management Console.
Configured in the
/etc/eucalyptus-console/console.ini file.
TCP 16514 TLS port on Node Controller, required for instance
migrations
UDP 7500 Port for diagnostic probing on CLC, UFS, OSG, Walrus
SC
UDP 8773 Membership port for any UFS, OSG, Walrus, and SC
UDP 8778 The bind port used to establish multicast
communication
TCP/UDP 53 DNS port on UFS
UDP 63822 eucanetd binds to localhost port 63822 and uses
it to detect and avoid running multiple instances
(of eucanetd)

Step 7 :- Verify Connectivity.


Verify component connectivity by performing the following checks
on the machines that will be running the listed Eucalyptus
components.
• Verify connection from an end-user to the CLC on TCP port 8773
• Verify connection from an end-user to Walrus on TCP port 8773
• Verify connection from the CLC, SC, and NC to SC on TCP port 8773
• Verify connection from the CLC, SC, and NC to Walrus on TCP port 8773
• Verify connection from Walrus and SC to CLC on TCP port 8777
• Verify connection from CLC to CC on TCP port 8774
• Verify connection from CC to NC on TCP port 8775
• Verify connection from NC to Walrus on TCP port 8773. Or, you can
verify the connection from the CC to Walrus on TCP port 8773, and
from an NC to the CC on TCP port 8776
• Verify connection from public IP addresses of Eucalyptus instances
(metadata) and CC to CLC on TCP port 8773
• Verify TCP connectivity between CLC, Walrus, and SC on TCP port
8779 (or the first available port in range 8779-8849)
• Verify connection between CLC, Walrus, and SC on UDP port 7500
• Verify multicast connectivity for IP address 239.193.7.3 between
CLC and UFS, OSG, Walrus, and SC on UDP port 8773
• If DNS is enabled, verify connection from an end-user and instance
IPs to DNS ports
• If you use tgt (iSCSI open source target) for EBS in DAS or Overlay
modes, verify connection from NC to SC on TCP port 3260
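The TCP checks above can be scripted; a minimal sketch using bash's /dev/tcp pseudo-device (the host addresses are documentation placeholders, so substitute your own component IPs):

```shell
# Report whether a TCP port is reachable, using bash's /dev/tcp.
check_port() {
  local host=$1 port=$2
  if timeout 1 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port open"
  else
    echo "$host:$port closed"
  fi
}

check_port 192.0.2.10 8773   # e.g. end-user -> CLC/Walrus web services
check_port 192.0.2.20 8774   # e.g. CLC -> CC
check_port 192.0.2.30 8775   # e.g. CC -> NC
```

The UDP and multicast checks in the list need other tools (for example, iperf); this sketch covers only the TCP ports.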
Step 8 :- Configure dependencies.
Configure Bridges :- To configure a bridge on CentOS 7 or RHEL 7,
you need to create a file with bridge configuration (for example,
ifcfg-brX) and modify the file for the physical interface (for
example, ifcfg-ethX). The following steps describe how to set up a
bridge on both CentOS 7 and RHEL 7. We show examples for
configuring bridge devices that either obtain IP addresses using
DHCP or statically.
Create a new network script in the /etc/sysconfig/network-scripts
directory called ifcfg-br0 or something similar. The br0 is the name
of the bridge, but this can be anything as long as the name of the
file is the same as the DEVICE parameter, and the name is
specified correctly in the previously created physical interface
configuration (ifcfg-ethX).
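As a sketch, a static-IP bridge definition might look like this (addresses and interface names are placeholders; we write to a temporary directory here, where a real host would use /etc/sysconfig/network-scripts):

```shell
# Write a bridge definition (ifcfg-br0) and point the physical interface
# (ifcfg-eth0) at it. A temp dir stands in for the real network-scripts dir.
NETSCRIPTS=$(mktemp -d)   # stand-in for /etc/sysconfig/network-scripts
cat > "$NETSCRIPTS/ifcfg-br0" <<'EOF'
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.1.42
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes
EOF
cat > "$NETSCRIPTS/ifcfg-eth0" <<'EOF'
DEVICE=eth0
TYPE=Ethernet
BRIDGE=br0
ONBOOT=yes
EOF
cat "$NETSCRIPTS/ifcfg-br0"
```

For a DHCP-assigned bridge, replace the BOOTPROTO, IPADDR, NETMASK, and GATEWAY lines with BOOTPROTO=dhcp.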
Disable firewall on RHEL 7 :-You should have successfully installed
RHEL 7 before this task. If you have existing firewall rules on your
host machines, you must disable the firewall in order to install
Eucalyptus. You should re-enable it after installation.
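A minimal sketch of disabling the default RHEL 7 firewall (requires root; remember to re-enable the firewall after installation):

```shell
# Stop and disable firewalld on RHEL 7 before installing Eucalyptus.
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld   # verify it is inactive
```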

Configure Java :- For the supported version of the Java Virtual
Machine (JVM), see the Compatibility Matrix in the Release Notes.
As of Eucalyptus 4.3, JVM 8 is required. Eucalyptus RPM packages
require java-1.8.0-openjdk, which will be installed
automatically. To use Java with the Eucalyptus cloud:
Open the /etc/eucalyptus/eucalyptus.conf file. Verify that the
CLOUD_OPTS setting does not set --java-home, or that --java-home
points to a supported JVM version.

Configure an MTA :- You can use Sendmail, Exim, postfix, or


something simpler. The MTA server does not have to be able to
receive incoming mail. Many Linux distributions satisfy this
requirement with their default MTA. For details about configuring
your MTA, go to the documentation for your specific product. To
test your mail relay for localhost, send email to yourself from the
terminal using mail.
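For example, a local relay test might look like this (assumes an MTA and the mail utility are installed):

```shell
# Send a test message to the current user through the local MTA.
echo "MTA relay test" | mail -s "relay test" "$USER@localhost"
```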

Step 9 :- Install MidoNet.

Prerequisite :- You need to configure software repositories and
install the Network State Database (NSDB) services: ZooKeeper
and Cassandra.

ZooKeeper :- MidoNet uses Apache ZooKeeper to store critical
path data about the virtual and physical network topology. For a
simple single-server installation, install ZooKeeper on any server
that is IP accessible from all Midolman agents (for example: on the
CLC host machine itself). You can also cluster ZooKeeper for fault
tolerance. See MidoNet NSDB ZooKeeper Installation. Enable and
start the ZooKeeper service before installing the other MidoNet
services.
Cassandra :- MidoNet uses Apache Cassandra to store flow state
information. For a simple single-server installation, install
Cassandra on any server that is IP accessible from all Midolman
agents (for example: on the CLC host machine itself). You can also
cluster Cassandra for fault tolerance. Enable and start the
Cassandra service before installing the other MidoNet services.

Commands to install Midonet :-
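The original command listing is not reproduced here; as a hedged sketch, the installation might look like this, assuming the MidoNet and Cassandra yum repositories are already configured (package names follow the MidoNet 5.x documentation and may differ for your version):

```shell
# Install and start the NSDB services first (package names vary by repo):
yum install -y zookeeper cassandra
systemctl enable --now zookeeper cassandra
# Then install and start the MidoNet services:
yum install -y midonet-cluster midolman python-midonetclient
systemctl enable --now midonet-cluster midolman
```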

Step 10 :- Install Repositories.


Installing Eucalyptus from RPM package downloads. The first step
to installing Eucalyptus is to download the RPM packages. When
you're ready, continue to Software Signing. The following
terminology might help you as you proceed through this section.
Eucalyptus open source software: Eucalyptus release packages
and dependencies, which enable you to deploy a Eucalyptus cloud.
Euca2ools CLI: Euca2ools is the Eucalyptus command line
interface for interacting with web services. It is compatible with
many Amazon AWS services, so can be used with Eucalyptus as
well as AWS.
RPM and YUM and software signing : Eucalyptus CentOS and
RHEL download packages are in RPM (Red Hat Package Manager)
format and use the YUM package management tool. We use GPG
keys to sign our software packages and package repositories.

EPEL software : EPEL (Extra Packages for Enterprise Linux) is
free, open source software, which is fully separated from the
licensed RHEL distribution. It requires its own package repository.
Step 11 :- Configure Eucalyptus.
Configure SELinux :- Enable SELinux on host systems running
Eucalyptus 4.4 services to improve their security on RHEL 7.
Enabling SELinux, as described in this topic, can help contain
break-ins. For more information, see the Red Hat SELinux
documentation. You need to set boolean values on Storage
Controller (SC) and Management Console host machines.
If your network mode is VPCMIDO, you also set a boolean value
on the Cloud Controller (CLC) host machine. To configure SELinux
on Eucalyptus 4.4:
On each Storage Controller (SC) host machine, run the following command:
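The original command is not reproduced here; a hedged sketch of the boolean settings (boolean names as we recall them from the Eucalyptus 4.4 documentation; verify with getsebool -a before relying on them):

```shell
# On each Storage Controller (SC) host machine (run as root):
setsebool -P eucalyptus_storage_controller 1
# On each Management Console host machine (and the CLC in VPCMIDO mode):
setsebool -P httpd_can_network_connect 1
```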

Configure EDGE Network Mode :- To configure Eucalyptus for EDGE
mode, most networking configuration is handled through settings
in a global Cloud Controller (CLC) property file. The
/etc/eucalyptus/eucalyptus.conf file contains some network-
related options in the "Networking Configuration" section. These
options use the prefix VNET_.
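For illustration, an EDGE-mode fragment of that "Networking Configuration" section might look like this (interface and bridge names are placeholders for your hosts):

```shell
# /etc/eucalyptus/eucalyptus.conf -- EDGE mode networking options (sketch)
VNET_MODE="EDGE"
VNET_PRIVINTERFACE="br0"          # NC interface on the VM private network
VNET_PUBINTERFACE="br0"           # NC interface on the public network
VNET_BRIDGE="br0"                 # bridge that instances attach to
VNET_DHCPDAEMON="/usr/sbin/dhcpd" # DHCP server used for instance addressing
```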
Create Scheduling Policy :- This topic describes how to set up the
Cluster Controller (CC) to choose which Node Controller (NC) to
run each new instance. In the CC, open the
/etc/eucalyptus/eucalyptus.conf file and set the SCHEDPOLICY=
parameter to one of the following values:
GREEDY : When the CC receives a new instance run request, it runs
the instance on the first NC in an ordered list of NCs that has
capacity to run the instance. At partial capacity with some amount
of churn, this policy generally results in a steady state over time
where some nodes are running many instances and some nodes
are running few or no instances.
ROUNDROBIN (default) : When the CC receives a new instance run
request, it runs the instance on the next NC in an ordered list of
NCs that has capacity; the next NC is determined by the last NC to
have received an instance. At partial capacity with some amount
of churn, this policy generally results in a steady state over time
where instances are more evenly distributed across the set of NCs.
Save the file.
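The difference between the two policies can be sketched in a few lines of Python. This is a conceptual illustration only, not Eucalyptus code; the node list and its "free" capacity field are invented for the example:

```python
# Conceptual sketch of the two CC scheduling policies (not Eucalyptus code).

def greedy(nodes):
    """GREEDY: first NC in the ordered list that has spare capacity."""
    for node in nodes:
        if node["free"] > 0:
            return node["name"]
    return None  # no NC has capacity

def round_robin(nodes, last_used):
    """ROUNDROBIN: next NC after the last one used, wrapping around."""
    names = [n["name"] for n in nodes]
    start = (names.index(last_used) + 1) % len(nodes) if last_used in names else 0
    for i in range(len(nodes)):
        node = nodes[(start + i) % len(nodes)]
        if node["free"] > 0:
            return node["name"]
    return None

nodes = [{"name": "nc1", "free": 2},
         {"name": "nc2", "free": 1},
         {"name": "nc3", "free": 3}]
print(greedy(nodes))              # nc1 -- always fills the front of the list first
print(round_robin(nodes, "nc1"))  # nc2 -- spreads instances across the NCs
```

Under churn, GREEDY keeps piling instances onto the front of the list, while ROUNDROBIN cycles through the NCs, which is why it yields a more even distribution.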
Configure VPCMIDO Network Mode :- Eucalyptus requires network
connectivity between its clients (end-users) and the cloud
components (e.g., CC, CLC, and storage). To configure VPCMIDO
mode parameters, you must create a network.yaml configuration
file. Later in the installation process you will upload this network
configuration to the CLC. To create the network configuration file,
open a text editor, create a file similar to the following structure,
and save the network.yaml file.
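A sketch of what such a network.yaml might look like follows. The field names are recalled from the Eucalyptus 4.4 VPCMIDO documentation and all addresses, device names, and the ASN are placeholders, so treat the whole structure as an assumption to verify against the docs:

```yaml
# Sketch of a VPCMIDO network.yaml -- all values are placeholders.
Mode: VPCMIDO
InstanceDnsServers:
- "10.10.10.1"
PublicIps:
- "192.0.2.10-192.0.2.50"
Mido:
  BgpAsn: 64512
  Gateways:
  - Ip: "10.10.10.2"
    ExternalDevice: "em2"
    ExternalCidr: "198.51.100.0/24"
    ExternalIp: "198.51.100.10"
```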

Step 11 :- Start Eucalyptus.


Start the Eucalyptus services in the order presented in this section.
Make sure that each host machine you installed a Eucalyptus
service on resolves to an IP address. Edit the /etc/hosts file if
necessary.
Prerequisites : You should have installed and configured
Eucalyptus before starting the CLC. To initialize and start the CLC,
log in to the Cloud Controller (CLC) host machine and enter the
following command to initialize the CLC:
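As a sketch, in Eucalyptus 4.4 the initialization and startup commands look like the following. The command and service names are recalled from the 4.4 documentation, so treat them as assumptions to verify:

```shell
# Initialize the cloud (run once, on the CLC host; command name assumed):
clcadmin-initialize-cloud

# Then start the CLC service (service name assumed):
systemctl start eucalyptus-cloud.service
```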
Step 12 :- Start the Management Console.

If you plan on running multiple Management Console host
machines, we recommend turning off the default Memcached in
your console.ini file.
Eucalyptus in a nutshell :-
Eucalyptus is used to build hybrid, public, and private clouds. It
can also turn your own data centre into a private cloud and lets
you extend that functionality to many other organisations.
Eucalyptus vs Other Private Clouds :-

There are numerous Infrastructure-as-a-Service offerings
available in the market, such as OpenNebula, Eucalyptus,
CloudStack, and OpenStack, all used for private and public
Infrastructure-as-a-Service deployments.
Of all these offerings, OpenStack remains the most popular, most
active, and largest open-source cloud computing project, while
enthusiasm for OpenNebula, CloudStack, and Eucalyptus remains
strong.

Result :- We successfully installed Eucalyptus and walked through its
complete installation process.
EXPERIMENT 10
AIM: Deployment & Services of Amazon Web Services.
INTRODUCTION:
Amazon Web Services (AWS) is the world’s most comprehensive and
broadly adopted cloud platform, offering over 200 fully featured
services from data centres globally. Millions of customers—including
the fastest-growing start-ups, largest enterprises, and leading
government agencies—are using AWS to lower costs, become more
agile, and innovate faster.
AWS has significantly more services, and more features within those
services, than any other cloud provider: from infrastructure
technologies like compute, storage, and databases, to emerging
technologies such as machine learning and artificial intelligence,
data lakes and analytics, and the Internet of Things. This makes it
faster, easier, and more cost-effective to move your existing
applications to the cloud and build nearly anything you can imagine.
AWS is architected to be the most flexible and secure cloud
computing environment available today. Its core infrastructure is
built to satisfy the security requirements of the military, global
banks, and other high-sensitivity organizations. This is backed by a
deep set of cloud security tools, with 230 security, compliance, and
governance services and features. AWS supports 90 security
standards and compliance certifications, and all 117 AWS services
that store customer data offer the ability to encrypt that data.

Deployment
AWS CodeDeploy is a fully managed deployment service that
automates software deployments to a variety of compute services
such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-
premises servers. AWS CodeDeploy makes it easier for you to rapidly
release new features, helps you avoid downtime during application
deployment, and handles the complexity of updating your
applications. You can use AWS CodeDeploy to automate software
deployments, eliminating the need for error-prone manual
operations. The service scales to match your deployment needs.
Basically, CodeDeploy is a deployment service that automates
application deployments to Amazon EC2 instances, on-premises
instances, serverless Lambda functions, or Amazon ECS services.
You can deploy a nearly unlimited variety of application content, including:
 Code
 Serverless AWS Lambda functions
 Web and configuration files
 Executables
 Packages
 Scripts
 Multimedia files

CodeDeploy can deploy application content that runs on a server and is stored
in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories.
CodeDeploy can also deploy a serverless Lambda function. You do not need to
make changes to your existing code before you can use CodeDeploy.
CodeDeploy makes it easier for you to:
 Rapidly release new features.
 Update AWS Lambda function versions.
 Avoid downtime during application deployment.
 Handle the complexity of updating your applications, without
many of the risks associated with error-prone manual
deployments.
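Each revision that CodeDeploy deploys to EC2/On-Premises instances is described by an AppSpec file (appspec.yml) that says what to copy and which lifecycle scripts to run. A minimal sketch follows; the destination path and the hook script are placeholders invented for the example:

```yaml
# appspec.yml -- minimal EC2/On-Premises example; paths and the hook
# script name are placeholders for illustration.
version: 0.0
os: linux
files:
  - source: /                    # everything in the revision bundle...
    destination: /var/www/html   # ...is copied here on the instance
hooks:
  AfterInstall:
    - location: scripts/restart_server.sh
      timeout: 60
      runas: root
```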

Benefits of AWS CodeDeploy


CodeDeploy offers these benefits:
 Server, serverless, and container applications. CodeDeploy lets
you deploy both traditional applications on servers and
applications that deploy a serverless AWS Lambda function
version or an Amazon ECS application.
 Automated deployments. CodeDeploy fully automates your
application deployments across your development, test, and
production environments. CodeDeploy scales with your
infrastructure so that you can deploy to one instance or
thousands.
 Minimize downtime. If your application uses the EC2/On-
Premises compute platform, CodeDeploy helps maximize your
application availability. During an in-place deployment,
CodeDeploy performs a rolling update across Amazon EC2
instances. You can specify the number of instances to be taken
offline at a time for updates.
 Stop and roll back. You can automatically or manually stop and
roll back deployments if there are errors.
 Centralized control. You can launch and track the status of your
deployments through the CodeDeploy console or the AWS CLI.
You receive a report that lists when each application revision
was deployed and to which Amazon EC2 instances.
 Easy to adopt. CodeDeploy is platform-agnostic and works with
any application. You can easily reuse your setup code.
CodeDeploy can also integrate with your software release
process or continuous delivery toolchain.
 Concurrent deployments. If you have more than one
application that uses the EC2/On-Premises compute platform,
CodeDeploy can deploy them concurrently to the same set of
instances.

Overview of CodeDeploy compute platforms


CodeDeploy is able to deploy applications to three compute platforms:
 EC2/On-Premises: Describes instances of physical servers that
can be Amazon EC2 cloud instances, on-premises servers, or
both. Applications created using the EC2/On-Premises
compute platform can be composed of executable files,
configuration files, images, and more.
Deployments that use the EC2/On-Premises compute
platform manage the way in which traffic is directed to
instances by using an in-place or blue/green deployment type.
For more information, see Overview of CodeDeploy
deployment types.
 AWS Lambda: Used to deploy applications that consist of an
updated version of a Lambda function. AWS Lambda manages
the Lambda function in a serverless compute environment
made up of a high-availability compute structure. All
administration of the compute resources is performed by AWS
Lambda.
You can manage the way in which traffic is shifted to the
updated Lambda function versions during a deployment by
choosing a canary, linear, or all-at-once configuration.
 Amazon ECS: Used to deploy an Amazon ECS containerized
application as a task set. CodeDeploy performs a blue/green
deployment by installing an updated version of the application
as a new replacement task set. CodeDeploy reroutes production
traffic from the original application task set to the replacement
task set. The original task set is terminated after a successful
deployment. For more information about Amazon ECS, see
Amazon Elastic Container Service.
You can manage the way in which traffic is shifted to the
updated task set during a deployment by choosing a canary,
linear, or all-at-once configuration.
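A deployment on any of these platforms can be started from the AWS CLI with `aws deploy create-deployment`. A sketch for the EC2/On-Premises case follows; the application name, deployment group, and the S3 bucket/key holding the zipped revision are placeholders:

```shell
# Placeholders throughout: application name, deployment group, and the
# S3 location of the zipped revision bundle.
aws deploy create-deployment \
  --application-name MyApp \
  --deployment-group-name MyDeploymentGroup \
  --s3-location bucket=my-bucket,key=myapp.zip,bundleType=zip
```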

Services
Amazon Web Services is an on-demand cloud computing platform
that offers flexible, reliable, scalable, managed, easy-to-use, and
cost-effective cloud computing solutions. These services come at
different levels of abstraction: Infrastructure as a Service (IaaS),
Platform as a Service (PaaS), and packaged Software as a Service
(SaaS). All of them can be used on a pay-as-you-go basis, meaning
you pay only for the computing resources you use, while you are
using them.
It is easy to scale as per need. Servers are distributed across the
world; hence available readily, and data stored in these servers is
easily retrievable. It is secure and reliable. Nowadays, it is widely
used and preferred as a cloud service provider. It provides more than
100 services which include computing, storage, management tools,
analytics, deployment, IOT and much more. All these services are
provided under the Amazon portal as per the subscription of the
user.
List of Services offered by AWS
1. Analytics
Amazon EMR provides the Hadoop framework to process big
data. Amazon Kinesis helps in analyzing real-time streaming
data. AWS Data Pipeline and Glue provide pipeline structures
schedule data load and processing. There are many more
applications provided by AWS for almost every operation.
2. Storage
Amazon Simple Storage Service (S3) provides scalable data
storage with backup and replication. Amazon Glacier offers
storage for archived data and affordable retrieval. AWS backup
service manages the backup of data. It automates the backup
process. Apart from these applications AWS storage offers
other services also.
3. Compute
Amazon Elastic Compute Cloud (EC2) provides virtual servers
or instances for computing. It is auto-scalable as per the
requirement. Amazon Elastic Container Service is a high-
performance container service that supports Docker
containers. AWS Lambda offers serverless computing to run
applications. Amazon Lightsail is an easy-to-use service which provides
virtual server, storage, DNS management, etc. It provides all the
services required for the development of applications.
4. Blockchain
Amazon Managed Blockchain creates and manages a
blockchain network. Amazon Quantum Ledger Database
(QLDB) offers a fully managed ledger database to maintain
transactions.
5. Database
Amazon Relational Database Service provides a fully managed
database service that includes Oracle, SQL, MySQL, etc.
Amazon Aurora offers a high-performance, fully managed
relational database service. Amazon Timestream provides a
fully managed time-series database. Amazon DynamoDB
provides database services for the NoSQL database. Along with
these databases, AWS offers many other database services to
support almost every type of requirement.
6. Developer Tool
AWS Codestar helps the user set up a continuous delivery
pipeline in minutes. AWS X-Ray helps to debug production
applications. With the help of an X-Ray, the user can analyze
and identify performance issues and application components.
AWS CodeCommit provides fully managed private GIT
repositories to store code and manage versions. Apart from
these services, AWS provides AWS CodePipeline, AWS
CodeBuild, AWS CodeDeploy, AWS CLoud9 to support
development and deployment.
7. Networking and Content Delivery
AWS is a virtual private cloud; it offers services over a network.
Hence it ensures that AWS can run any workload over the
network with security, performance, manageability, and
availability. It offers a set of resources over the network by
connecting it privately. It gives administrative control to users
over a virtual network. It provides an application for load
balancing in the networks. It also offers DNS to route end users
to the application.
8. Security, Identity and Compliance
AWS Firewall Manager helps manage firewall rules for
application. Amazon Inspector is an automated security scan
which helps in improving the security and compliance of
applications. Amazon Macie is a machine learning-powered
service to identify, classify and protect sensitive data. Apart
from these security check services, AWS provides a lot more
applications to keep the hosted applications secure and safe.
9. Machine Learning
AWS offers a wide range of services and pre-defined models for
AI. Amazon SageMaker provides services to quickly build, train
and deploy models at a big scale. It also supports the custom
model building. Amazon Rekognition is used to analyze images
and videos. Along with these, AWS offers ML service for speech
recognition, language translation, chatbots, and many other
scenarios, with high speed and scalability.
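Several of the services above can be exercised directly from the AWS CLI. A brief sketch follows; the bucket name, AMI ID, and key pair name are placeholders invented for the example:

```shell
# Create an S3 bucket and upload a file (bucket name is a placeholder):
aws s3 mb s3://my-example-bucket
aws s3 cp report.csv s3://my-example-bucket/report.csv

# Launch a single EC2 instance (AMI ID and key name are placeholders):
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name my-key-pair \
  --count 1
```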

Installation
Step-1: Log in to the Amazon Web Services portal.
http://aws.amazon.com/
Step-2: Click on Search > Cloud, and then scroll down to and click AWS EC2.
