Virtualisation


Lab work                                                    Lab date    Submission date    Remark

1. Case Study of VMware 1
2. Case Study of VMware 2
3. Understanding the use of Map-reduce methodology 1
4. Understanding the use of Map-reduce methodology 2
5. Demonstrating provisioning and scaling in Codenvy
6. Working in Codenvy to create a project on cloud
7. Demonstrating provisioning and scaling of a website in Salesforce
8. Working in Salesforce to create a project on cloud
9. Working in Salesforce to execute queries on cloud
10. Case study of Amazon Web Services
LAB WORK NO. 1

Objective: Case Study of VMware 1


We can define virtualization as a technology that provides the capability to
logically separate the physical resources of a server and use them as different
isolated machines, called virtual machines. One physical CPU becomes many
virtual CPUs, and the same holds true for RAM and hard disks.

What are the different types of virtualization?


Server Virtualization
Server virtualization is the masking of server resources, including the number
and identity of individual physical servers, processors, and operating systems,
from server users.
The first type of virtualization on our list is server virtualization, or compute
abstraction, with VMware vSphere: ESXi is installed on the bare-metal
hardware (Dell, Cisco, IBM) and abstracts the underlying hardware resources
(CPU and memory), which are then provided to the resource consumers, i.e.
the virtual machines.

When it comes to the management of ESXi hosts and virtual machines, and to
availing all the important features provided with VMware vSphere, such as HA,
DRS and vMotion, we make use of vCenter Server, which can be installed on a
Windows machine or downloaded as a preconfigured virtual appliance running
SLES.
Storage Virtualization

The key things to understand when discussing virtualization are abstraction and
pooling. With VMware vSphere we have seen how ESXi abstracts the underlying
compute resources so that they can be provided to virtual machines; the same
concept extends to another component, storage, where the underlying physical
storage resources are abstracted and pooled together.

The VMware product in question, quite popular these days, is VMware vSAN.
It abstracts the underlying HDDs and SSDs on each ESXi host into a logical
entity known as a disk group, which can then be used to create a shared pool of
storage resources known as the vSAN shared aggregated datastore.

When working with VMware vSAN we need to create a cluster and enable
vSAN on it, ensuring that:
• we have a minimum of 3 ESXi hosts contributing local storage, each running
ESXi 5.5 or later;
• all ESXi hosts are managed by vCenter Server and configured as Virtual SAN
cluster members;
• ESXi hosts in a vSAN cluster do not participate in any other cluster;
• last but not least, each host has a VMkernel port dedicated to vSAN traffic.

Network Virtualization
We have already seen three important resources being virtualized in our data
center environment; now it is time to talk about another important component,
the network. VMware NSX leverages the underlying VMware vSphere platform
and provides a network virtualization platform that makes use of logical
networks.

VMware NSX removes a limitation on VM communication that we see when
working with plain VMware vSphere, where a VM can only communicate with
VMs connected on the same host and the same port group, and VMs defined
with different VLAN IDs cannot communicate with each other.

VMware NSX takes a broader approach to VM communication across hosts,
clusters and data centers by using VXLANs, which also removes any trunking
dependency on the underlying physical network.

VMware NSX provides Network and Security services like Logical Switching,
Logical Routing, Logical Firewall, Logical Load Balancing, NAT, DHCP and
VPN Services.

When working with NSX we have to consider the various planes:
• the data plane (the NSX switch and kernel modules, including VXLAN),
which helps abstract the physical network layer;
• the control plane, provided by the NSX Controller, which handles the logical
routing and logical switching functions;
• the management plane, aka NSX Manager, which provides centralized
management capabilities for the various components of NSX.

Desktop virtualization
Similar to application virtualization, described below, desktop virtualization
separates the desktop environment from the physical device; the result is
configured as a "virtual desktop infrastructure" (VDI). One of the biggest
advantages of desktop virtualization is that users are able to access all their
personal files and applications on any PC, meaning they can work from
anywhere without the need to bring their work computer. It also lowers the cost
of software licensing and updates. Maintenance and patch management are
simple, since all of the virtual desktops are hosted at the same location.

Horizon 7: Securely deliver virtualized desktops and published applications to
end users across devices and locations through a single platform.

Application virtualization
This is a process where applications are virtualized and delivered from a server
to the end user’s device, such as laptops, smartphones, and tablets. So instead of
logging into their computers at work, users will be able to gain access to the
application right from their device, provided an Internet connection is available.
This is particularly popular for businesses that require the use of their applications
on the go.

App Volumes: Deliver applications to desktop environments in seconds. With
the click of a button, IT can provision applications to users and desktops at scale.

What is the difference between Type 1 and Type 2 hypervisors?

Virtualization requires the use of a hypervisor, which was traditionally called a


virtual machine monitor or VMM. The hypervisor is a software program that
provides the layer of abstraction, handles the translations between physical
and virtual resources -- such as physical vs. virtual CPUs or memory -- and
manages the creation and support of virtual machines (VMs).

A Type 1 hypervisor runs directly on the host machine's physical hardware, and
it's referred to as a bare-metal hypervisor; it doesn't have to load an underlying
OS first. With direct access to the underlying hardware and no other software --
such as OSes and device drivers -- to contend with, Type 1 hypervisors
are regarded as the most efficient and best-performing hypervisors available for
enterprise computing. Hypervisors such as VMware ESXi, Microsoft Hyper-V
server and open source KVM are examples of Type 1 hypervisors.

A Type 2 hypervisor is typically installed on top of an existing OS, and it's called
a hosted hypervisor because it relies on the host machine's pre-existing OS to
manage calls to CPU, memory, storage and network resources. Type 2
hypervisors include VMware Fusion, Oracle VM VirtualBox, Oracle VM Server
for x86, Oracle Solaris Zones, Parallels and VMware Workstation.
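As a practical aside, a guest OS can often tell that it is running under a hypervisor of either type: on Linux x86 guests, the CPUID "hypervisor present" bit is exposed as a "hypervisor" flag in /proc/cpuinfo. The parsing helper below is an illustrative sketch (the sample strings are abbreviated, made-up cpuinfo fragments, not real dumps):

```python
def runs_under_hypervisor(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line in the given /proc/cpuinfo
    text lists the 'hypervisor' CPU flag."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            _, _, flags = line.partition(":")
            if "hypervisor" in flags.split():
                return True
    return False

# Abbreviated sample fragments for illustration:
guest_sample = "processor\t: 0\nflags\t\t: fpu vme de hypervisor\n"
bare_metal_sample = "processor\t: 0\nflags\t\t: fpu vme de\n"

print(runs_under_hypervisor(guest_sample))       # True
print(runs_under_hypervisor(bare_metal_sample))  # False
```

On a real Linux machine you would pass the contents of /proc/cpuinfo to this function instead of the sample strings.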
What is a Virtual Desktop Infrastructure (VDI)?
Virtual desktop infrastructure (VDI) is the technology for providing and
managing virtual desktops. VDI hosts desktop environments on a centralized
server and deploys them to end clients on request. These virtualized desktops are
created by a virtual machine controlled by a hypervisor. All computing activity
on the deployed virtual desktop instance occurs on the centralized server.
LAB WORK No. 2
Objective: Case Study of VMware 2.

VMware Workstation is a hosted hypervisor that runs on x64 versions of
Windows and Linux operating systems (an x86 version of earlier releases was
available); it enables users to set up virtual machines (VMs) on a single physical
machine and use them simultaneously along with the actual machine. Each
virtual machine can execute its own operating system, including versions
of Microsoft Windows, Linux, BSD, and MS-DOS. VMware Workstation is
developed and sold by VMware, Inc., a division of Dell Technologies. There is a
free-of-charge version, VMware Workstation Player, for non-commercial use.
An operating system license is needed to use proprietary ones such as Windows.
Ready-made Linux VMs set up for different purposes are available from several
sources.
VMware Workstation supports bridging existing host network adapters and
sharing physical disk drives and USB devices with a virtual machine. It can
simulate disk drives; an ISO image file can be mounted as a virtual optical disc
drive, and virtual hard disk drives are implemented as .vmdk files.
VMware Workstation Pro can save the state of a virtual machine (a "snapshot")
at any instant. These snapshots can later be restored, effectively returning the
virtual machine to the saved state, as it was and free from any post-snapshot
damage to the VM.
VMware Workstation includes the ability to group multiple virtual machines in
an inventory folder. The machines in such a folder can then be powered on and
powered off as a single object, useful for testing complex client-server
environments.
VMware Workstation 5.0: Setting Up a New Virtual Machine
The New Virtual Machine Wizard guides you through the key steps for setting
up a new virtual machine, helping you set various options and parameters. You
can then use the virtual machine settings editor (VM > Settings) if you need to
make any changes to your virtual machine's setup.
Steps to a New Virtual Machine

By default, the new virtual machine uses an IDE disk for Windows 95, Windows
98, Windows Me, Windows XP, Windows Server 2003, NetWare and FreeBSD
guests. The default for other guest operating systems is a SCSI disk.

Follow these steps to create a virtual machine using a virtual disk.

1. Start VMware Workstation.


Windows hosts: Double-click the VMware Workstation icon on your desktop
or use the Start menu (Start > Programs > VMware > VMware Workstation).
Linux hosts: In a terminal window, enter the command
vmware &

Note: On Linux hosts, the Workstation installer adds an entry to the Start
menu for VMware Workstation. However, this menu entry is located in
different submenus, depending on your Linux distribution. For example:
SUSE Linux 9.1 — Start > System > More Programs > VMware Workstation
Red Hat Enterprise Linux AS/WS Release 3 — Start > System Tools > More
System Tools > VMware Workstation
2. If this is the first time you have launched VMware Workstation and you did
not enter the serial number when you installed the product (an option available
on a Windows host), you are prompted to enter it. The serial number is on the
registration card in your package or in the email message confirming your
electronic distribution order. Enter your serial number and click OK.
The serial number you enter is saved and VMware Workstation does not ask
you for it again. For your convenience, VMware Workstation automatically
sends the serial number to the VMware Web site when you use certain Web
links built into the product (for example, Help > VMware on the
Web > Register Now! and Help > VMware on the Web> Request Support).
This allows us to direct you to the correct Web page to register and get support
for your product.
3. Start the New Virtual Machine Wizard.
When you start VMware Workstation, you can open an existing virtual
machine or create a new one. Choose File > New > Virtual Machine to begin
creating your virtual machine.
4. The New Virtual Machine Wizard presents you with a series of screens that
you navigate using the Next and Prev buttons at the bottom of each screen. At
each screen, follow the instructions, then click Next to proceed to the next
screen.
5. Select the method you want to use for configuring your virtual machine.

If you select Typical, the wizard prompts you to specify or accept defaults for
the following choices:
The guest operating system
The virtual machine name and the location of the virtual machine's files
The network connection type
Whether to allocate all the space for a virtual disk at the time you create it
Whether to split a virtual disk into 2GB files
If you select Custom, you also can specify how to set up your disk — create
a new virtual disk, use an existing virtual disk or use a physical disk — and
specify the settings needed for the type of disk you select. There is also an
option to create a legacy virtual disk for use in environments with other
VMware products.
Select Custom if you want to
Make a legacy virtual machine that is compatible with Workstation 4.x, GSX
Server 3.x, ESX Server 2.x and ACE 1.x.
Make a virtual disk larger or smaller than 4GB
Store your virtual disk's files in a particular location
Use an IDE virtual disk for a guest operating system that would otherwise have
a SCSI virtual disk created by default
Use a physical disk rather than a virtual disk (for expert users)
Set memory options that are different from the defaults
6. If you selected Typical as your configuration path, skip to step 7.
If you selected Custom as your configuration path, you may create a virtual
machine that fully supports all Workstation 5 features or a legacy virtual
machine compatible with specific VMware products.
This screen asks whether you want to create a Workstation 5 virtual machine
or a legacy virtual machine. See Legacy Virtual Disks for more information.
7. Select a guest operating system.

This screen asks which operating system you plan to install in the virtual
machine. Select both an operating system and a version. The New Virtual
Machine Wizard uses this information to

Select appropriate default values, such as the amount of memory needed


Name files associated with the virtual machine
Adjust settings for optimal performance
Work around special behaviors and bugs within a guest operating system
If the operating system you plan to use is not listed, select Other for both guest
operating system and version.
The remaining steps assume you plan to install a Windows XP Professional
guest operating system. You can find detailed installation notes for this and
other guest operating systems in the VMware Guest Operating System
Installation Guide, available from the VMware Web site or from the Help
menu.
8. Select a name and folder for the virtual machine.
The name specified here is used if you add this virtual machine to the VMware
Workstation Favorites list. This name is also used as the name of the folder
where the files associated with this virtual machine are stored.

Each virtual machine should have its own folder. All associated files, such as
the configuration file and the disk file, are placed in this folder.

Windows hosts: On Windows 2000, Windows XP and Windows Server 2003,


the default folder for this Windows XP Professional virtual machine
is C:\Documents and Settings\<username>\My Documents\My Virtual
Machines\Windows XP Professional. On Windows NT, the default folder
is C:\WINNT\Profiles\<username>\Personal\My Virtual Machines\Windows
XP Professional.
Linux hosts: The default location for this Windows XP Professional virtual
machine is <homedir>/vmware/winXPPro, where <homedir> is the home
directory of the user who is currently logged on.
Virtual machine performance may be slower if your virtual hard disk is on a
network drive. For best performance, be sure the virtual machine's folder is
on a local drive. However, if other users need to access this virtual machine,
you should consider placing the virtual machine files in a location that is
accessible to them. For more information, see Sharing Virtual Machines with
Other Users.
9. If you selected Typical as your configuration path, skip to step 10.
If you selected Custom as your configuration path, you may adjust the

memory settings or accept the defaults, then click Next to continue.


In most cases, it is best to keep the default memory setting. If you plan to use
the virtual machine to run many applications or applications that need high
amounts of memory, you may want to use a higher memory setting. For more
information, see Virtual Machine Memory Size.
Note: You cannot allocate more than 2GB of memory to a virtual machine if
the virtual machine's files are stored on a file system such as FAT32 that does
not support files greater than 2GB.
10. Configure the networking capabilities of the virtual machine.
If your host computer is on a network and you have a separate IP address for
your virtual machine (or can get one automatically from a DHCP server),
select Use bridged networking.
If you do not have a separate IP address for your virtual machine but you want
to be able to connect to the Internet, select Use network address translation
(NAT). NAT allows you to share files between the virtual machine and the
host operating system.
For more details about VMware Workstation networking options,
see Configuring a Virtual Network.
11. If you selected Typical as your configuration path, click Finish and the
wizard sets up the files needed for your virtual machine.
If you selected Custom as your configuration path, continue with the steps
below to configure a disk for your virtual machine.
12. Select the type of SCSI adapter you want to use with the virtual machine.
An IDE and a SCSI adapter are installed in the virtual machine. The IDE
adapter is always ATAPI. You can choose a BusLogic or an LSI Logic SCSI
adapter. The default for your guest operating system is already selected. All
guests except for Windows Server 2003, Red Hat Enterprise Linux 3 and
NetWare default to the BusLogic adapter.

The LSI Logic adapter has improved performance and works better with
generic SCSI devices. The LSI Logic adapter is also supported by ESX Server
2.0 and higher. Keep this in mind if you plan to migrate the virtual machine
to another VMware product.

Your choice of SCSI adapter does not affect your decision to make your
virtual disk an IDE or SCSI disk. However, some guest operating systems
(such as Windows XP) do not include a driver for the BusLogic or LSI Logic
adapter. You must download the driver from the LSI Logic Web site.

Note: Drivers for a Mylex (BusLogic) compatible host bus adapter are not
obvious on the LSI Logic Web site. Search the support area for the numeric
string in the model number. For example, search for "958" for BT/KT-958
drivers.
See the VMware Guest Operating System Installation Guide for details about
the driver and the guest operating system you plan to install in this virtual
machine.
13. Select the disk you want to use with the virtual machine.
Select Create a new virtual disk.
Virtual disks are the best choice for most virtual machines. They are quick
and easy to set up and can be moved to new locations on the same host
computer or to different host computers. By default, virtual disks start as small
files on the host computer's hard drive, then expand as needed — up to the
size you specify in the next step. The next step also allows you to allocate all
the disk space when the virtual disk is created, if you wish.

To use an existing operating system on a physical hard disk (a "raw" disk),


read Configuring a Dual-Boot Computer for Use with a Virtual Machine. To
install your guest operating system directly on an existing IDE disk partition,
read the reference note Installing an Operating System onto a Physical
Partition from a Virtual Machine.
Note: Raw disk configurations are recommended only for expert users.
Caution: If you are using a Windows Server 2003, Windows XP or Windows
2000 host, see Do Not Use Windows 2000, Windows XP and Windows Server
2003 Dynamic Disks as Raw Disks.
To install the guest operating system on a raw IDE disk, select Existing IDE
Disk Partition. To use a raw SCSI disk, add it to the virtual machine later with
the virtual machine settings editor. Booting from a raw SCSI disk is not
supported. For a discussion of some of the issues involved in using a raw SCSI
disk, see Configuring Dual- or Multiple-Boot SCSI Systems to Run with
VMware Workstation on a Linux Host.
14. Select whether to create an IDE or SCSI disk.

The wizard recommends the best choice based on the guest operating system
you selected. All Linux distributions you can select in the wizard use SCSI
virtual disks by default, as do Windows NT, Windows 2000 and Longhorn.
All Windows operating systems except Windows NT, Windows 2000 and
Longhorn use IDE virtual disks by default; NetWare, FreeBSD, MS-DOS and
other guests default to IDE virtual disks.

15. Specify the capacity of the virtual disk.

Enter the size of the virtual disk that you wish to create.

You can set a size between 0.1 GB and 950 GB for a SCSI virtual disk. The
default is 4 GB.

The option Allocate all disk space now gives somewhat better performance
for your virtual machine. If you do not select Allocate all disk space now, the
virtual disk's files start small and grow as needed, but they can never grow
larger than the size you set here.
Note: Allocate all disk space now is a time-consuming operation that cannot
be cancelled, and requires as much physical disk space as you specify for the
virtual disk.
Select the option Split disk into 2GB files if your virtual disk is stored on a
file system that does not support files larger than 2GB.

16. Specify the location of the virtual disk's files.

If you want to specify which device node should be used by your SCSI or IDE
virtual disk, click Advanced.
On the advanced settings panel, you can also specify a disk mode. This is
useful in certain special-purpose configurations in which you want to exclude
disks from snapshots. For more information on the snapshot feature, see Using
Snapshots.
Normal disks are included in snapshots. In most cases, you should use normal
disks, leaving Independent unchecked.
Independent disks are not included in snapshots.

Caution: The independent disk option should be used only by advanced users
who need it for special-purpose configurations.
You have the following options for an independent disk:

Persistent — changes are immediately and permanently written to the disk.


Nonpersistent — changes to the disk are discarded when you power off the
virtual machine.
17. Click Finish.
The wizard sets up the files needed for your virtual machine.
LAB WORK NO. 3
Objective: Understanding MapReduce technology.

What is MapReduce?


MapReduce is a programming model and an associated implementation for
processing and generating big data sets with a parallel, distributed algorithm on
a cluster.
A MapReduce program is composed of a map procedure (or method), which
performs filtering and sorting (such as sorting students by first name into queues,
one queue for each name), and a reduce method, which performs a summary
operation (such as counting the number of students in each queue, yielding name
frequencies). The "MapReduce System" (also called "infrastructure" or
"framework") orchestrates the processing by marshalling the distributed servers,
running the various tasks in parallel, managing all communications and data
transfers between the various parts of the system, and providing
for redundancy and fault tolerance.

How are key-value pairs generated?


The Map and Reduce functions of MapReduce are both defined with respect to
data structured in (key, value) pairs. Map takes one pair of data with a type in
one data domain, and returns a list of pairs in a different domain:
Map(k1,v1) → list(k2,v2)
The Map function is applied in parallel to every pair (keyed by k1 ) in the input
dataset. This produces a list of pairs (keyed by k2 ) for each call. After that, the
MapReduce framework collects all pairs with the same key ( k2 ) from all lists
and groups them together, creating one group for each key.
The Reduce function is then applied in parallel to each group, which in turn
produces a collection of values in the same domain:
Reduce(k2, list (v2)) → list(v3)
Thus the MapReduce framework transforms a list of (key, value) pairs into a list
of values. This behavior is different from the typical functional programming map
and reduce combination, which accepts a list of arbitrary values and returns one
single value that combines all the values returned by map.
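This data flow can be sketched in single-process Python. The run_mapreduce helper and its dict-based shuffle are an illustrative invention, not part of any real framework; the usage example mirrors the student-name illustration given earlier:

```python
from collections import defaultdict

def run_mapreduce(inputs, map_fn, reduce_fn):
    """Single-process sketch of the MapReduce data flow:
    Map(k1, v1) -> list(k2, v2); shuffle by k2; Reduce(k2, list(v2)) -> list(v3)."""
    grouped = defaultdict(list)
    for k1, v1 in inputs:                  # Map phase, applied pair by pair
        for k2, v2 in map_fn(k1, v1):
            grouped[k2].append(v2)         # shuffle: collect values per key k2
    results = []
    for k2, values in grouped.items():     # Reduce phase, one call per key group
        results.extend(reduce_fn(k2, values))
    return results

# Example: count students per first name (the "queues" illustration).
rosters = [("roster1", ["Alice Smith", "Bob Jones"]),
           ("roster2", ["Alice Brown"])]
freqs = run_mapreduce(
    rosters,
    map_fn=lambda name, students: [(s.split()[0], 1) for s in students],
    reduce_fn=lambda first_name, ones: [(first_name, sum(ones))],
)
print(sorted(freqs))  # [('Alice', 2), ('Bob', 1)]
```

A real MapReduce system would apply map_fn and reduce_fn in parallel across many machines; the grouping dict here stands in for the distributed shuffle.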

What are the advantages of using the MapReduce method?


Scalability - Hadoop is a platform that is highly scalable. This is largely because
of its ability to store as well as distribute large data sets across plenty of servers.
These servers can be inexpensive and can operate in parallel. And with each
addition of servers one adds more processing power.

Cost-effective solution - Hadoop’s highly scalable structure also implies that it


comes across as a very cost-effective solution for businesses that need to store
ever growing data dictated by today’s requirements. Hadoop’s scale-out
architecture with MapReduce programming, allows the storage and processing of
data in a very affordable manner.
Flexibility - Business organisations can make use of Hadoop MapReduce
programming to have access to various new sources of data and also operate on
different types of data, whether they are structured or unstructured. This allows
them to generate value from all of the data that can be accessed by them.

Fast - Hadoop uses a storage method known as distributed file system, which
basically implements a mapping system to locate data in a cluster. The tools used
for data processing, such as MapReduce programming, are also generally located
in the very same servers, which allows for faster processing of data.

Parallel processing - One of the primary aspects of the working of MapReduce


programming is that it divides tasks in a manner that allows their execution in
parallel. Parallel processing allows multiple processors to take on these divided
tasks, such that they run entire programs in less time.
LAB WORK No. 4
Objective: Understanding the use of map reduce methodology.

The canonical MapReduce example counts the appearance of each word in a set
of documents:

function map(String name, String document):


// name: document name
// document: document contents
for each word w in document:
emit (w, 1)

function reduce(String word, Iterator partialCounts):


// word: a word
// partialCounts: a list of aggregated partial counts
sum = 0
for each pc in partialCounts:
sum += pc
emit (word, sum)

Here, each document is split into words, and each word is counted by
the map function, using the word as the result key. The framework puts together
all the pairs with the same key and feeds them to the same call to reduce. Thus,
this function just needs to sum all of its input values to find the total appearances
of that word.
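A direct single-machine Python transcription of this pseudocode (the document names and contents below are made-up sample data; a real framework would distribute the map and reduce calls across machines):

```python
from collections import defaultdict

def map_doc(name, document):
    """map(): emit (word, 1) for every word in the document."""
    for word in document.split():
        yield (word, 1)

def reduce_word(word, partial_counts):
    """reduce(): sum all the partial counts for one word."""
    return (word, sum(partial_counts))

documents = {"doc1": "deer bear river", "doc2": "car car river"}

# The framework's shuffle step: group all pairs with the same key.
grouped = defaultdict(list)
for name, text in documents.items():
    for word, one in map_doc(name, text):
        grouped[word].append(one)

totals = dict(reduce_word(w, counts) for w, counts in grouped.items())
print(sorted(totals.items()))
# [('bear', 1), ('car', 2), ('deer', 1), ('river', 2)]
```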
As another example, imagine that for a database of 1.1 billion people, one would
like to compute the average number of social contacts a person has according to
age. In SQL, such a query could be expressed as:

SELECT age, AVG(contacts)


FROM social.person
GROUP BY age
ORDER BY age

Using MapReduce, the K1 key values could be the integers 1 through 1100, each
representing a batch of 1 million records, the K2 key value could be a person's
age in years, and this computation could be achieved using the following
functions:

function Map is
input: integer K1 between 1 and 1100, representing a batch of 1 million
social.person records
for each social.person record in the K1 batch do
let Y be the person's age
let N be the number of contacts the person has
produce one output record (Y,(N,1))
repeat
end function

function Reduce is
input: age (in years) Y
for each input record (Y,(N,C)) do
Accumulate in S the sum of N*C
Accumulate in Cnew the sum of C
repeat
let A be S/Cnew
produce one output record (Y,(A,Cnew))
end function
The MapReduce system would line up the 1100 Map processors, and would
provide each with its corresponding 1 million input records. The Map step would
produce 1.1 billion (Y,(N,1)) records, with Y values ranging between, say, 8 and
103. The MapReduce system would then line up the 96 Reduce processors by
shuffling the key/value pairs (since we need the average per age), and provide
each with its millions of corresponding input records. The Reduce step would
result in a much reduced set of only 96 output records (Y,A), which would be
put in the final result file, sorted by Y.
The count info in the record is important if the processing is reduced more than
one time. If we did not add the count of the records, the computed average would
be wrong, for example:

-- map output #1: age, quantity of contacts


10, 9
10, 9
10, 9
-- map output #2: age, quantity of contacts
10, 9
10, 9
-- map output #3: age, quantity of contacts
10, 10

If we reduce files #1 and #2, we will have a new file with an average of 9 contacts
for a 10-year-old person ((9+9+9+9+9)/5):

-- reduce step #1: age, average of contacts


10, 9

If we reduce it with file #3, we lose the count of how many records we've already
seen, so we end up with an average of 9.5 contacts for a 10-year-old person
((9+10)/2), which is wrong. The correct answer is 9.166 = 55 / 6 =
(9*3+9*2+10*1)/(3+2+1).
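The arithmetic above can be checked with a short sketch: combining partial results as (sum, count) pairs, as the Reduce function does with its (N, C) records, yields the right average, while averaging the partial averages does not. The combine helper is illustrative:

```python
def combine(partials):
    """Merge partial (sum_of_contacts, record_count) pairs into one
    overall (average, count), the way a counted Reduce step does."""
    total = sum(s for s, c in partials)
    count = sum(c for s, c in partials)
    return total / count, count

# Age-10 partials from the three map outputs above:
#   #1: three records of 9 contacts  -> (27, 3)
#   #2: two records of 9 contacts    -> (18, 2)
#   #3: one record of 10 contacts    -> (10, 1)
avg, n = combine([(27, 3), (18, 2), (10, 1)])
print(round(avg, 3), n)  # 9.167 6   (i.e. 55 / 6)

# Dropping the counts and averaging the two partial averages is wrong:
print((9 + 10) / 2)      # 9.5
```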
LAB WORK NO. 5
Objective: Working in Codenvy to demonstrate Provisioning and Scaling of a
website.

One of the advantages of coding in the cloud with Codenvy is deploying to a PaaS
of choice once the app has been built, run and tested in Codenvy. Users do not
need to install any plugins or administer their workspaces in any way. Codenvy
talks to the APIs of the most popular PaaS providers. Currently, the following
PaaS are supported:

• AppFog
• CloudBees
• AWS Elastic Beanstalk
• Google App Engine
• Heroku
• Openshift
• ManyMo (to run Android apps)

The mechanism of deploying, updating and configuring apps differs slightly
depending on the chosen PaaS provider. To be able to deploy to a PaaS,
authentication is required (Login or Switch Account in the PaaS menus).
Codenvy will handle the connection to a PaaS account, retrieving information
on existing apps and providing tools to manage them.

Some providers require the deployment of SSH keys and git operations to update
the apps (Heroku, OpenShift), while others (GAE, AWS) make it possible to
update apps in one click.

When deploying an application, it is created in Codenvy and then deployed to a
PaaS. OpenShift is an exception to this rule: the application is created there
and then pulled to a Codenvy workspace.

It is possible to import existing apps deployed to some PaaS (Heroku) or


overwrite existing applications (Google App Engine).

Registration and Login

There are several registration options available in Codenvy:


• The fastest and the easiest way is to register using
your Google or GitHub account. Click Sign in with Google or GitHub and
follow the registration process. Your Codenvy workspace name will be
identical to your Google or GitHub ID. Note that you will need a verified
email associated with your GitHub account.

• If you do not have accounts with Gmail or GitHub or just want to choose
a domain name by yourself, enter your email and the desired domain name,
and press Go.
Getting Started Using Codenvy Factories

You can find Codenvy Factory buttons at this site, Codenvy.com, or anywhere on
the net. If you click on a Factory button, we will create a temporary workspace
for you with the project of your choice. After a fruitful coding session in a
temporary workspace you can create a permanent account with Codenvy by
pressing the Create Account button in the top right corner of the temporary
workspace.

Create a Project from Scratch


You can place any number of projects into a workspace.

Projects are combinations of modules, folders and files. Projects can be mapped
1:1 to a source code repository. If a project is given a type, then Codenvy will
activate plug-ins related to that type. For example, projects with the maven
project type will automatically get the Java plug-in, which provides a variety of
intellisense features for Java projects.

You can also create a new project from the Welcome Screen - Create a New
Project From Scratch

Manage Projects Through IDE


Projects can be managed using the IDE with the project explorer and the drop-down
menu at the top. New projects can be imported from a remote source such
as Git/SVN or a hosted zip file (URL required) with the Workspace > Import Project... menu
item. New projects can be created from internal project samples with the Workspace >
Create Project... menu item.

Manage Projects Through Dashboard


Projects can also be managed in the dashboard.

Device Support

Codenvy currently supports all desktop and laptop devices. We currently provide
touch device support through the use of the Puffin Web Browser which virtualizes
double clicks and right clicks. We have not yet created a native touch UI design.
Browser Support

Browser          Version
Chrome           21+
Firefox          15+
Safari           5.1+
Puffin Browser   2.4+
Web Server Support

Web Server      Version
Apache Tomcat   7.0.39
Build Environment Support

Build System   Version
Maven          3.0.4
Version Control System Support

Version Control System   Version
-                        1.6
Language Support

Language     Version                      Syntax    Code      Code       Error      Debug    Cloud/
                                          coloring  outline   assistant  detection  mode     local run
CSS          2.0                          Yes       No        Yes        No         Preview  -
HTML         4.1                          Yes       Yes       Yes        Yes        Preview  -
Java         1.6 (runner, autocomplete)   Yes       Yes       Yes        Yes        Yes      Yes
JavaScript   Standard ECMA-262            Yes       Yes       Yes        Yes        No       No
PHP          5.3                          Yes       Yes       No         No         Yes      No
Python       2.7 (for runner only)        Yes       No        No         No         Yes      No
Ruby         1.9.2                        Yes       Yes       Yes        No         No       No
XML          1.0                          Yes       Yes       Yes        Yes        -        -

Framework Support

Framework       Version        Access    Templates
Android         -              Project   1) Simple Android application
Java            2.1            Project   1) Google App Engine Java project that uses the Search API
                                         2) Java Web project
                                         3) A demonstration of accessing Amazon S3 buckets and
                                            objects using the AWS Java SDK
Node.js         0.4.12         Project   1) Simple Node.js project
Ruby on Rails   1.8            Project   1) Simple Ruby on Rails application, v1.9
                                         2) Simple Ruby on Rails application, v1.8 (Heroku compatible)
Spring          All versions   Project   1) Simple Spring application
                                         2) Spring MVC application with AJAX usage
PaaS Support

PaaS                    Languages                          Features                             Cloud SDK Run
AWS Elastic Beanstalk   Java                               Application management,              Yes
                                                           EC2 and S3 console
-                       Java, PHP, Python, Ruby            Manage applications                  No
Cloud Foundry           Java, Ruby                         Manage applications                  Yes (Micro
                                                                                                Cloud Foundry)
-                       Java                               Manage applications                  No
Google App Engine       Java, Python, PHP (app IDs need    Application management, Logs,        Yes
                        to be whitelisted at GAE)          Indexes, Pagespeed, Queues, DoS,
                                                           Resource Limits, Crons, Backends
Heroku                  Ruby                               Manage applications                  No
OpenShift               Java, PHP, Ruby, Python, Node.js   Manage applications                  No
-                       Java, Ruby                         Manage applications                  -
Architecture
LAB WORK NO. 6
Objective: Working in Codenvy to create a project on the cloud.

Codenvy is a cloud environment for coding, building, and debugging apps.


Codenvy makes on-demand workspaces to give you a better agile experience.
Codenvy SaaS is available for self-service at codenvy.com. You can also install
Codenvy on your own infrastructure with Codenvy On-Prem.
Codenvy is a commercial offering based on the Eclipse Che open source
project, so Eclipse Che workspaces and plug-ins will work within Codenvy.
Eclipse Che was designed as a single-user system and can be used by Codenvy
users on their local machines to provide offline access to their Codenvy
workspaces. Simply clone the Codenvy workspace locally and start it in Che,
then push it back to Codenvy when you are back online.
Features
Java improvements: Maven, javac and classpath
On-demand workspaces for GitHub and GitLab
New plug-in for Subversion
UX improvements: notifications, debugger, dashboard and consoles
Editor personalization
Support for Docker Registry Mirrors
Simplified installation and more flexibility for configuration
Generic debugger API and structural refactoring

1.Set Up a New Account


Go to https://codenvy.com/site/login
Use GitHub / Google credentials to sign up, or sign up with your email
Verify your email ID and log in to the account
Note: for illustration we are using https://beta.codenvy.com

2.Create a Workspace in Codenvy


1.To create a workspace, go to https://beta.codenvy.com/dashboard/
2.Click on Dashboard → Workspaces
3.On the top bar, click on the ADD [+] button
4.Follow the steps below
5.The workspace is now running. We can create projects on this workspace

3.Create a Java Project in Codenvy


We can create a project in two ways:
• create a project using the Wizard
• create a project using the IDE

1.Create a project using the Wizard


Go to the Dashboard: https://beta.codenvy.com/dashboard/#/
1.Go to Workspaces
2.Choose the Projects tab
3.Click on the [+] button to create a new project

4.Follow the steps below to create a Java project:


Select Source → New
Select Stack → Java
Select Template → for now it is Console
Project meta → project name and description
Create Project → it will create the project

It will create and open the Java project.

2.Create a project using the IDE


1.Go to workspaces: https://beta.codenvy.com/dashboard/#/workspaces
2.Click on Open IDE. It will open the IDE.

3.Go to the top menu → Workspace → Create Project → Java → Java → Create

4.After that it will open the created project as below

In the topic above we successfully created a Java project. By default it
contains Main.java.
Before executing any Java file in Codenvy we must configure our runtime
environment.
Steps to Configure the Java Environment in Codenvy
1.Open the IDE and open a Java program
2.At the top, go to CMD → Edit Commands

3.To configure the Java environment, follow the steps below:


Expand Java [+]
Add a new environment named JAVA_RUNTIME
Choose the Main class
Save it!
4.To run the program: CMD → JAVA_RUNTIME → [>] Run button

5.In the same way we can configure a Maven environment as below:


CMD → Edit Commands → Maven → give details → Save

4.Create a Maven Project in Codenvy

Create a Maven project using the Wizard
1.Click on Create Project

2.Choose Java → Maven Project, provide the project details → Create Project

3.To start the IDE, go to Projects → select the project → Open IDE


4.It will open the dashboard as below.

5.Run a Maven build to create the Java folder structure. To do this, go to
[1].Top → CMD → Maven → Build, then click on the
[2].RUN button
LAB WORK NO. 7
Objective: Demonstrating provisioning and scaling of a website on Salesforce.

Salesforce began with the vision of reinventing Customer Relationship


Management (CRM). Since then we’ve changed the way enterprise software is
delivered and used, changing the industry forever. All Salesforce products run
entirely in the cloud, so there are no expensive setup costs and no maintenance,
and your employees can work from any device with an internet connection:
smartphone, tablet or laptop.
We make CRM easy to use for small businesses and large scale enterprises.
This approach has helped to make Sales Cloud the world’s number 1 CRM
system. But Salesforce doesn’t start and end with CRM for Sales and
Marketing. Our platform enables you to manage all interactions with your
customers and prospects, so your organisation can grow and succeed. That’s
why we call it the Customer Success Platform.

How does Salesforce CRM work, and who should use it?


Salesforce is a top-notch CRM application built on the Force.com platform. It can
manage all the customer interactions of an organization through different media,
like phone calls, site email enquiries, communities, as well as social media.
Salesforce handles all the customer relationships, by focusing on the sales,
marketing and support processes. This is done by working with the standard
objects (shown below), and facilitating the relationships between them.
Force.com platform

Force.com is a platform for creating applications in the cloud with absolutely no
software or hardware investment required. The apps thus created are data-centric
and collaborative. In fact, the data is never lost here, because there are
automatic backups.

The Standard Objects in Salesforce

1. Account: Partners or customers (be it companies or individuals) who are
   involved in the organization’s business interactions
2. Contact: Individuals that come within any account
3. Product: Items that the organization sells to the customer
4. Lead: Those interested in the product, be it individuals or companies
5. Opportunity: Any event or activity with potential for revenue generation
6. Case: A description of any issue that a customer has reported
7. Solution: The resolution of a customer problem. Salesforce’s solution
   knowledge base is the collection of all such solutions
8. Forecast: Estimation of the organization’s quarterly revenue
9. Report: Analysis of the standard or custom objects’ data
10. Dashboard: Representation of reports as a set of graphical data or charts
12. Activity: All the organization’s tasks as well as calendar events within
    the Activity object
13. Campaign: All marketing projects, for example mass emailing

What makes Salesforce CRM tick?


Salesforce works by managing the standard objects, maintaining the
relationships between them, and providing standard in-built functionalities. It is
built on the following different types of cloud:
Sales Cloud

Sales Cloud mainly works based on the Lead, Account, Contact and Opportunity
objects. Leads can be further converted into account, contact and opportunity
objects, an important built-in functionality of the Sales Cloud. What’s more, if
any custom fields are added to the lead object, it is also possible to set the
mapping for them.
Opportunities are well managed by giving them different stages and
probabilities.
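The lead-conversion step described above can be sketched as a small data transformation. Company, LastName and StageName below follow Salesforce's standard field names, but the convert_lead helper itself is an illustrative sketch, not an actual Salesforce API:

```python
def convert_lead(lead):
    """Split a Lead record into Account, Contact and Opportunity records,
    mirroring the standard Sales Cloud lead conversion (simplified sketch)."""
    account = {"Name": lead["Company"]}
    contact = {
        "LastName": lead["LastName"],
        "Email": lead.get("Email"),       # optional field
        "Account": account["Name"],       # illustrative link to the new account
    }
    opportunity = {
        "Name": lead["Company"],          # default opportunity name (simplified)
        "StageName": "Prospecting",       # assumed initial stage
    }
    return {"account": account, "contact": contact, "opportunity": opportunity}

result = convert_lead({"Company": "Acme Ltd", "LastName": "Jones",
                       "Email": "jones@acme.example"})
print(result["opportunity"]["StageName"])  # -> Prospecting
```

Custom field mappings, as mentioned above, would simply extend these dictionaries.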
Service Cloud
In Service Cloud, the base objects are cases and solutions. A Service Executive
can create a case on a customer enquiry or a complaint, and the corresponding
solution can be stored in a solution object. There is some standard functionality,
like email to case, which will automatically create a new case in the CRM on
every customer email.
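The email-to-case behaviour can be sketched in the same spirit. Origin, SuppliedEmail, Subject, Description and Status are genuine standard Case fields, but the case_from_email helper is illustrative; Salesforce implements this as a built-in service:

```python
def case_from_email(sender: str, subject: str, body: str) -> dict:
    """Build a new Case record from an inbound customer email,
    the way Salesforce's email-to-case feature does automatically."""
    return {
        "Origin": "Email",        # standard picklist value for email-originated cases
        "SuppliedEmail": sender,  # the customer's address from the email headers
        "Subject": subject,
        "Description": body,
        "Status": "New",          # assumed initial status
    }

case = case_from_email("customer@example.com", "Login issue", "I cannot sign in.")
print(case["Origin"])  # -> Email
```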

Marketing Cloud
Marketing Cloud is an application for marketing purposes. It helps in the creation
and execution of marketing campaigns, email promotions, and more.

Custom Cloud
Custom fields can be added to the standard objects, and custom workflows can
also be created. For custom views as well as business logic, Visualforce pages
and associated apex classes can be used. All of these customization facilities
make it possible to fulfill just about any need a CRM user may have.

Analytics
Every CRM application must be able to present reports with the data stored in it.
In Salesforce, the ‘Reports and Dashboards’ feature enables effective analytics.
There are a number of standard reports associated with the standard objects. Each
report can be used to create dashboard components like graphs. Standard Reports
are placed in the folders available in Salesforce, so finding the reports are easy.
Salesforce automation
Salesforce automation features include tracking leads, managing emails,
assigning tasks, notifications, approvals etc. This CRM will handle all the
automation required for the sales, marketing and service processes.

Who should use the Salesforce CRM?


The answer is simple: ‘Everyone’. Salesforce has editions for all varieties of users.
It offers different editions, like Group, Professional, Enterprise and Performance
(for Sales Cloud). An organization can select the edition they need, according to
their aims and the features they require. Do note, the editions have price variations
too.
Salesforce covers all the areas of customer relationships, ranging from marketing
to service. Any organization that wants to manage their customer relationships
holistically can come in and use Salesforce without losing time, and wasting
money for software development or hardware infrastructure.
LAB WORK NO. 8

Objective:- Working in salesforce to create project in cloud.

This technical note shows you the first steps in creating an iBOLT Salesforce.com
project.
Creating a Salesforce.com Account
1.Go to http://www.salesforce.com/developer/

2.Click the link.


3.Fill in the subscription form.
4.You should now receive an email with a login URL. Click the URL to
change your password.

5.After you have changed your password, you should ask for a security
token:
1. Click on the Setup link at the top of the page.
2. In the My Personal Information section, click on the Reset your
security token link to receive an email containing a security token.
You will use this later on, when you add the security token to your
password, and enter the combined result in the Password field in
iBOLT’s Connections Salesforce dialog box. For example, if the
password is 1234, and the security token is AABBCC, you should
enter 1234AABBCC in the iBOLT password field.

Note: You do not need the security token for logging into the
Salesforce.com website. The token is used only when you use
external applications such as iBOLT.

6.You should now go to https://login.salesforce.com/. Here, you can log in


to your Salesforce.com application.
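The password-plus-token rule from step 5 can be captured in a tiny helper (an illustrative sketch; combine_password_and_token is not part of any Salesforce or iBOLT API):

```python
def combine_password_and_token(password: str, security_token: str) -> str:
    """Build the value expected by external tools such as iBOLT:
    the Salesforce password immediately followed by the security token."""
    return password + security_token

# Example from the text: password 1234 with security token AABBCC
print(combine_password_and_token("1234", "AABBCC"))  # -> 1234AABBCC
```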

Creating a Simple Salesforce.com Project


In this example, a Query operation is performed for all Channel
Customers account types.
Configuring the connection to Salesforce.com:
1.Open the iBOLT Studio, and create a new project.
2.Drag a Salesforce.com connector to the flow area.
3.If this is the first time that you have dragged the Salesforce.com connector
to the flow area or the trigger area of the iBOLT Studio, the Component
Properties dialog box opens. When you click the Configuration button,
the Connections Salesforce dialog box opens automatically. If you want
to access the Connections Salesforce dialog box later on, click
the Connections button in the Salesforce Configuration dialog box.
4.In the Connections Salesforce dialog box, click New to add a new
connection.
5.In the Connections section (on the right hand side), enter your new
connection’s name in the Name field.
6.In the Details section (on the right hand side), enter your Salesforce.com
server user name and password in the relevant fields. The password is a
combination of your Salesforce.com password and the security token that
was supplied by Salesforce.com.
7.Click Validate to test your new connection.
8.Once the connection has been validated, click Build to retrieve the updated
supported object list.
9.Click OK to close the Connections Salesforce dialog box and to save your
connection definitions.

Configuring the Salesforce.com connector:


1.Click Configuration to open the Salesforce Configuration dialog box.
2.Click the button next to the Name field, and choose a connection name. If you
have only defined one connection, it will be selected automatically.
3.Click the button next to the Object field, and select the Account object.
4.In the Operation field, select Query from the drop-down list.
5.Press CTRL + L to open the Flow Variables for Flow dialog box. Here,
define a new BLOB variable called F_ResultBlob.
6.Click the button next to the Store Result In field, and choose F_ResultBlob to hold
the returned XML data.
7.Click OK to open the Data Mapper’s Source/Destination
Management dialog box.
8.Click Map to open the main Data Mapper screen.
9.On the Destination side, double-click on the Type element, and
enter Customer – Channel in the Node Properties dialog box’s Calculated
Value field. This allows you to query only the channel customers.
10.Click OK several times, until you exit the configuration screens.
Mapping the Salesforce.com XML query result to a flat file:
1.Drag a Data Mapper component into the flow as a child to
the Salesforce.com connector, and click Configuration.
2.Create a new XML Source type, and click Properties.
3.Select the account XSD file. You can find this under
<project name>/salesforce/XSD/<Salesforce.com connection
name>/Account.xsd
4.Select F_ResultBlob as the data source.
5.Create a new flat file Destination type, and click Properties.
6.In the Data Destination field, select File from the drop-down list. You
should give it the following value:
'%currentprojectdir%ChannelAccounts.txt'
7.In the table area, click New three times to create three Alpha fields.
8.Rename the new fields to Name, Country, and Website.
9.Click OK, followed by Map.
10.Map the Name Source node to the Name Destination node.
11.Map the BillingCountry Source node to the Country Destination node.
12.Map the Website Source node to the Website Destination node.
13.Click OK until the Data Mapper is closed.
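The field mapping in steps 10-12 can also be sketched outside iBOLT; this minimal Python sketch (illustrative, not iBOLT code) extracts the three mapped fields from an Account XML fragment like the one RMAN-style query returns:

```python
import xml.etree.ElementTree as ET

# A hypothetical fragment of the XML query result held in F_ResultBlob
XML = """<result>
  <Account><Name>Acme</Name><BillingCountry>US</BillingCountry><Website>acme.example</Website></Account>
  <Account><Name>Globex</Name><BillingCountry>DE</BillingCountry><Website>globex.example</Website></Account>
</result>"""

def map_accounts(xml_text):
    """Map each Account's Name, BillingCountry and Website to the
    flat-file columns Name, Country, Website (as in steps 10-12)."""
    rows = []
    for acct in ET.fromstring(xml_text).iter("Account"):
        rows.append({
            "Name": acct.findtext("Name"),
            "Country": acct.findtext("BillingCountry"),
            "Website": acct.findtext("Website"),
        })
    return rows

for row in map_accounts(XML):
    print(row["Name"], row["Country"], row["Website"])
```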
Running the flow:
1.Run the project in Debugger mode. Use the step buttons to follow the
Salesforce.com query result, and check that the ChannelAccounts.txt file
is available and that it contains all the channel records.
2.The final flow should look like this:
LAB WORK NO. 09
Objective: Working in Salesforce to execute queries on the cloud.

What is SOQL?
SQL is the standard ANSI query language, whereas SOQL is a specially
optimized version of SQL designed for working with the Salesforce.com
underlying database. SOQL is the abbreviation of Salesforce Object Query Language.
In addition, SOQL does not support some advanced SQL-specific features,
such as SELECT * wildcards and explicit join operations.

What is the difference between SOQL and SQL?


A big difference between SOQL and SQL is the simplified syntax in SOQL for
traversing object relationships.

The basic structure of SOQL is very similar to SQL, as you can see below:
SELECT Id, Name, Phone
FROM Contact
WHERE Phone <> null
AND Name LIKE '%adam%'
ORDER BY Name
LIMIT 50

However, the ease with which you can access related records in one query,
without needing to do complex joins, is amazing!
SELECT Id, Name, Phone, Account.Name
FROM Contact
WHERE Phone <> null
AND Name LIKE '%adam%'
ORDER BY Name
LIMIT 50
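To make the clause ordering concrete, here is a minimal sketch that assembles the query above from its parts. The build_soql helper is illustrative, not part of any Salesforce SDK:

```python
def build_soql(fields, sobject, where=None, order_by=None, limit=None):
    """Assemble a SOQL statement from its clauses in the fixed order
    SELECT ... FROM ... [WHERE ...] [ORDER BY ...] [LIMIT n]."""
    parts = ["SELECT " + ", ".join(fields), "FROM " + sobject]
    if where:
        parts.append("WHERE " + where)
    if order_by:
        parts.append("ORDER BY " + order_by)
    if limit is not None:
        parts.append("LIMIT " + str(limit))
    return "\n".join(parts)

query = build_soql(
    ["Id", "Name", "Phone", "Account.Name"],  # parent field via dot notation
    "Contact",
    where="Phone <> null AND Name LIKE '%adam%'",
    order_by="Name",
    limit=50,
)
print(query)
```

Note how the parent Account.Name field rides along in the field list, with no join clause anywhere.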

Contact is a child of Account; Account and Contact have a master-detail
relationship. In the above query you are able to reference the master record from
a query on detail records very easily.
Keep in mind that you will access custom objects and fields through their API
name (i.e. internal name), which has the suffix __c. The suffix __r is used when
referencing the related objects or fields:
SELECT Id, Name, APEX_Customer__r.Name, APEX_Status__c FROM APEX_Invoice__c
WHERE CreatedDate = TODAY

Execute a SOQL Query or SOSL Search

Execute SOQL queries or SOSL searches in the Query Editor panel of the
Developer Console.

1. Enter a SOQL query or SOSL search in the Query Editor panel.


2. If you want to query tooling entities instead of data entities, select Use
Tooling API.
3. Click Execute. If the query generates errors, they are displayed at the
bottom of the Query Editor panel. Your results display in the Query Results
grid in the Developer Console workspace.
4. Warning: if you rerun a query, unsaved changes in the Query Results
grid are lost. To rerun a query, click Refresh Grid, or click the query in
the History panel and click Execute.
LAB WORK NO. 10
Objective: Case study of AWS.

About Amazon.com
Amazon.com is the world’s largest online retailer. In 2011, Amazon.com
switched from tape backup to using Amazon Simple Storage Service (Amazon
S3) for backing up the majority of its Oracle databases. This strategy reduces
complexity and capital expenditures, provides faster backup and restore
performance, eliminates tape capacity planning for backup and archive, and
frees up administrative staff for higher value operations. The company was able
to replace their backup tape infrastructure with cloud-based Amazon S3 storage,
eliminate backup software, and experienced a 12X performance improvement,
reducing restore time from around 15 hours to 2.5 hours in select scenarios.

The Challenge
As Amazon.com grows larger, the sizes of their Oracle databases continue to
grow, and so does the sheer number of databases they maintain. This has caused
growing pains related to backing up legacy Oracle databases to tape and led to
the consideration of alternate strategies including the use of Cloud services of
Amazon Web Services (AWS), a subsidiary of Amazon.com. Some of the
business challenges Amazon.com faced included:
• Utilization and capacity planning is complex, and time and capital expense
budget are at a premium. Significant capital expenditures were required over
the years for tape hardware, data center space for this hardware, and
enterprise licensing fees for tape software. During that time, managing tape
infrastructure required highly skilled staff to spend time with setup,
certification and engineering archive planning instead of on higher value
projects. And at the end of every fiscal year, projecting future capacity
requirements required time consuming audits, forecasting, and budgeting.
• The cost of backup software required to support multiple tape devices sneaks
up on you. Tape robots provide basic read/write capability, but in order to
fully utilize them, you must invest in proprietary tape backup software. For
Amazon.com, the cost of the software had been high, and added significantly
to overall backup costs. The cost of this software was an ongoing budgeting
pain point, but one that was difficult to address as long as backups needed to
be written to tape devices.
• Maintaining reliable backups and being fast and efficient when retrieving data
requires a lot of time and effort with tape. When data needs to be durably
stored on tape, multiple copies are required. When everything is working
correctly, and there is minimal contention for tape resources, the tape robots
and backup software can easily find the required data. However, if there is a
hardware failure, human intervention is necessary to restore from tape.
Contention for tape drives resulting from multiple users’ tape requests slows
down restore processes even more. This adds to the recovery time objective
(RTO) and makes achieving it more challenging compared to backing up to
Cloud storage.

Why Amazon Web Services


Amazon.com initiated the evaluation of Amazon S3 for economic and
performance improvements related to data backup. As part of that evaluation,
they considered security, availability, and performance aspects of Amazon S3
backups. Amazon.com also executed a cost-benefit analysis to ensure that a
migration to Amazon S3 would be financially worthwhile. That cost benefit
analysis included the following elements:
• Performance advantage and cost competitiveness. It was important that the
overall costs of the backups did not increase. At the same time, Amazon.com
required faster backup and recovery performance. The time and effort
required for backup and for recovery operations proved to be a significant
improvement over tape, with restoring from Amazon S3 running from two to
twelve times faster than a similar restore from tape. Amazon.com required
any new backup medium to provide improved performance while maintaining
or reducing overall costs. Backing up to on-premises disk based storage
would have improved performance, but missed on cost competitiveness.
Amazon S3 Cloud based storage met both criteria.
• Greater durability and availability. Amazon S3 is designed to provide
99.999999999% durability and 99.99% availability of objects over a given
year. Amazon.com compared these figures with those observed from their
tape infrastructure, and determined that Amazon S3 offered significant
improvement.
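To put eleven nines of durability in perspective, the expected annual object loss can be computed directly. The fleet size below is an assumption chosen for illustration, not a figure from the case study:

```python
durability = 0.99999999999          # 11 nines, as Amazon S3 is designed for
p_loss = 1 - durability             # probability a given object is lost in a year
objects = 10_000_000                # assumed fleet size (illustrative)
expected_losses = objects * p_loss
print(f"{expected_losses:.4f} objects expected lost per year")  # -> 0.0001
```

In other words, even across ten million objects, the design target predicts a ten-thousandth of an object lost per year.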
• Less operational friction. Amazon.com DBAs had to evaluate whether
Amazon S3 backups would be viable for their database backups. They
determined that using Amazon S3 for backups was easy to implement
because it worked seamlessly with Oracle RMAN.
• Strong data security. Amazon.com found that AWS met all of their
requirements for physical security, security accreditations, and security
processes, protecting data in flight, data at rest, and utilizing suitable
encryption standards.

The Benefits
With the migration to Amazon S3 well along the way to completion,
Amazon.com has realized several benefits, including:
• Elimination of complex and time-consuming tape capacity planning.
Amazon.com is growing larger and more dynamic each year, both organically
and as a result of acquisitions. AWS has enabled Amazon.com to keep pace
with this rapid expansion, and to do so seamlessly. Historically, Amazon.com
business groups have had to write annual backup plans, quantifying the
amount of tape storage that they plan to use for the year and the frequency
with which they will use the tape resources. These plans are then used to
charge each organization for their tape usage, spreading the cost among many
teams. With Amazon S3, teams simply pay for what they use, and are billed
for their usage as they go. There are virtually no upper limits as to how much
data can be stored in Amazon S3, and so there are no worries about running
out of resources. For teams adopting Amazon S3 backups, the need for formal
planning has been all but eliminated.
• Reduced capital expenditures. Amazon.com no longer needs to acquire tape
robots, tape drives, tape inventory, data center space, networking gear,
enterprise backup software, or predict future tape consumption. This
eliminates the burden of budgeting for capital equipment well in advance as
well as the capital expense.
• Immediate availability of data for restoring – no need to locate or retrieve
physical tapes. Whenever a DBA needs to restore data from tape, they face
delays. The tape backup software needs to read the tape catalog to find the
correct files to restore, locate the correct tape, mount the tape, and read the
data from it. In almost all cases the data is spread across multiple tapes,
resulting in further delays. This, combined with contention for tape drives
resulting from multiple users’ tape requests, slows the process down even
more. This is especially severe during critical events such as a data center
outage, when many databases must be restored simultaneously and as soon as
possible. None of these problems occur with Amazon S3. Data restores can
begin immediately, with no waiting or tape queuing – and that means the
database can be recovered much faster.
• Backing up a database to Amazon S3 can be two to twelve times faster than
with tape drives. As one example, in a benchmark test a DBA was able to
restore 3.8 terabytes in 2.5 hours over gigabit Ethernet. This amounts to 25
gigabytes per minute, or 422 MB per second. In addition, since Amazon.com
uses RMAN data compression, the effective restore rate was 3.37 gigabytes
per second. This 2.5 hours compares to, conservatively, 10-15 hours that
would be required to restore from tape.
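The throughput figures quoted above can be checked with simple arithmetic (using decimal units). The roughly 8x compression ratio implied by the 3.37 GB/s effective rate is our inference, not a number from the case study:

```python
restored_bytes = 3.8e12            # 3.8 terabytes (decimal)
duration_s = 2.5 * 3600            # 2.5 hours in seconds

rate_mb_s = restored_bytes / duration_s / 1e6          # raw MB per second
rate_gb_min = restored_bytes / 1e9 / (duration_s / 60) # raw GB per minute
print(f"{rate_gb_min:.1f} GB/min, {rate_mb_s:.0f} MB/s")  # -> 25.3 GB/min, 422 MB/s

# The quoted "effective" rate of 3.37 GB/s with RMAN compression implies
# roughly an 8x compression ratio (our inference):
print(round(3.37e9 / (rate_mb_s * 1e6), 1))
```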
• Easy implementation of Oracle RMAN backups to Amazon S3. The DBAs
found it easy to start backing up their databases to Amazon S3. Directing
Oracle RMAN backups to Amazon S3 requires only a configuration of the
Oracle Secure Backup Cloud (SBC) module. The effort required to configure
the Oracle SBC module amounted to an hour or less per database. After this
one-time setup, the database backups were transparently redirected to
Amazon S3.
• Durable data storage provided by Amazon S3, which is designed for 11 nines
durability. On occasion, Amazon.com has experienced hardware failures with
tape infrastructure – tapes that break, tape drives that fail, and robotic
components that fail. Sometimes this happens when a DBA is trying to
restore a database, and dramatically increases the mean time to recover
(MTTR). With the durability and availability of Amazon S3, these issues are
no longer a concern.
• Freeing up valuable human resources. With tape infrastructure, Amazon.com
had to seek out engineers who were experienced with very large tape backup
installations – a specialized, vendor-specific skill set that is difficult to find.
They also needed to hire data center technicians and dedicate them to
problem-solving and troubleshooting hardware issues – replacing drives,
shuffling tapes around, shipping and tracking tapes, and so on. Amazon S3
allowed them to free up these specialists from day-to-day operations so that
they can work on more valuable, business-critical engineering tasks.
• Elimination of physical tape transport to off-site location. Any company that
has been storing Oracle backup data offsite should take a hard look at the
costs involved in transporting, securing and storing their tapes offsite – these
costs can be reduced or possibly eliminated by storing the data in Amazon S3.
As the world’s largest online retailer, Amazon.com continuously innovates in
order to provide improved customer experience and offer products at the lowest
possible prices. One such innovation has been to replace tape with Amazon S3
storage for database backups. This innovation is one that can be easily
replicated by other organizations that back up their Oracle databases to tape.
