Virtualisation
1. Case Study of VMware 1
To manage ESXi hosts and virtual machines, and to use all the important features provided with VMware vSphere such as HA, DRS and vMotion, we make use of vCenter Server, which can be installed on a Windows machine or downloaded as a preconfigured virtual appliance running SLES.
Storage Virtualization
Another VMware product that is quite popular these days is VMware vSAN, which abstracts the underlying HDDs and SSDs on each ESXi host into a logical entity known as a disk group. The disk groups are then combined into a shared pool of storage resources known as the vSAN shared aggregated datastore.
When working with VMware vSAN we need to create a cluster and enable vSAN on it, ensuring we have a minimum of three ESXi hosts contributing local storage, each running ESXi version 5.5 or later. All ESXi hosts must also be connected to a network configured for vSAN traffic.
Network Virtualization
We have already seen three important resources being virtualized in our data center environment; now it's time to talk about another important component, the network. VMware NSX leverages the underlying VMware vSphere platform and provides a network virtualization platform for building logical networks.
VMware NSX provides Network and Security services like Logical Switching,
Logical Routing, Logical Firewall, Logical Load Balancing, NAT, DHCP and
VPN Services.
When working with NSX we have to consider three planes. The data plane (the NSX vSwitch and its kernel modules, including VXLAN) abstracts the physical network layer. The control plane, provided by the NSX Controller, handles the logical routing and logical switching functions. The management plane, also known as the NSX Manager, provides centralized management capabilities for the various components of NSX.
Desktop virtualization
Similar to the application virtualization described below, desktop virtualization separates the desktop environment from the physical device and is configured as a
"virtual desktop infrastructure" (VDI). One of the biggest advantages of desktop
virtualization is that users are able to access all their personal files and
applications on any PC, meaning they can work from anywhere without the need
to bring their work computer. It also lowers the cost of software licensing and
updates. Maintenance and patch management are simple, since all of the virtual
desktops are hosted at the same location.
Application virtualization
This is a process where applications are virtualized and delivered from a server
to the end user’s device, such as laptops, smartphones, and tablets. So instead of
logging into their computers at work, users will be able to gain access to the
application right from their device, provided an Internet connection is available.
This is particularly popular for businesses that require the use of their applications
on the go.
A Type 1 hypervisor runs directly on the host machine's physical hardware, and
it's referred to as a bare-metal hypervisor; it doesn't have to load an underlying
OS first. With direct access to the underlying hardware and no other software --
such as OSes and device drivers -- to contend with, Type 1 hypervisors
are regarded as the most efficient and best-performing hypervisors available for
enterprise computing. Hypervisors such as VMware ESXi, Microsoft Hyper-V
server and open source KVM are examples of Type 1 hypervisors.
A Type 2 hypervisor is typically installed on top of an existing OS, and it's called
a hosted hypervisor because it relies on the host machine's pre-existing OS to
manage calls to CPU, memory, storage and network resources. Type 2
hypervisors include VMware Fusion, Oracle VM VirtualBox, Parallels Desktop
and VMware Workstation.
What is a Virtual Desktop Infrastructure (VDI)?
Virtual desktop infrastructure (VDI) is the technology for providing and
managing virtual desktops. VDI hosts desktop environments on a centralized
server and deploys them to end clients on request. These virtualized desktops are
created by a virtual machine controlled by a hypervisor. All computing activity
on the deployed virtual desktop instance occurs on the centralized server.
LAB WORK No. 2
Objective: Case Study of VMware 2.
By default, the new virtual machine uses an IDE disk for Windows 95, Windows
98, Windows Me, Windows XP, Windows Server 2003, NetWare and FreeBSD
guests. The default for other guest operating systems is a SCSI disk.
Note: On Linux hosts, the Workstation installer adds an entry to the Start
menu for VMware Workstation. However, this menu entry is located in
different submenus, depending on your Linux distribution. For example:
SUSE Linux 9.1 — Start > System > More Programs > VMware Workstation
Red Hat Enterprise Linux AS/WS Release 3 — Start > System Tools > More
System Tools > VMware Workstation
2. If this is the first time you have launched VMware Workstation and you did
not enter the serial number when you installed the product (an option available
on a Windows host), you are prompted to enter it. The serial number is on the
registration card in your package or in the email message confirming your
electronic distribution order. Enter your serial number and click OK.
The serial number you enter is saved and VMware Workstation does not ask
you for it again. For your convenience, VMware Workstation automatically
sends the serial number to the VMware Web site when you use certain Web
links built into the product (for example, Help > VMware on the
Web > Register Now! and Help > VMware on the Web> Request Support).
This allows us to direct you to the correct Web page to register and get support
for your product.
3. Start the New Virtual Machine Wizard.
When you start VMware Workstation, you can open an existing virtual
machine or create a new one. Choose File > New > Virtual Machine to begin
creating your virtual machine.
4. The New Virtual Machine Wizard presents you with a series of screens that
you navigate using the Next and Prev buttons at the bottom of each screen. At
each screen, follow the instructions, then click Next to proceed to the next
screen.
5. Select the method you want to use for configuring your virtual machine.
If you select Typical, the wizard prompts you to specify or accept defaults for
the following choices:
The guest operating system
The virtual machine name and the location of the virtual machine's files
The network connection type
Whether to allocate all the space for a virtual disk at the time you create it
Whether to split a virtual disk into 2GB files
If you select Custom, you also can specify how to set up your disk — create
a new virtual disk, use an existing virtual disk or use a physical disk — and
specify the settings needed for the type of disk you select. There is also an
option to create a legacy virtual disk for use in environments with other
VMware products.
Select Custom if you want to
Make a legacy virtual machine that is compatible with Workstation 4.x, GSX
Server 3.x, ESX Server 2.x and ACE 1.x.
Make a virtual disk larger or smaller than 4GB
Store your virtual disk's files in a particular location
Use an IDE virtual disk for a guest operating system that would otherwise have
a SCSI virtual disk created by default
Use a physical disk rather than a virtual disk (for expert users)
Set memory options that are different from the defaults
6. If you selected Typical as your configuration path, skip to step 7.
If you selected Custom as your configuration path, you may create a virtual
machine that fully supports all Workstation 5 features or a legacy virtual
machine compatible with specific VMware products.
This screen asks whether you want to create a Workstation 5 virtual machine
or a legacy virtual machine. See Legacy Virtual Disks for more information.
7. Select a guest operating system.
This screen asks which operating system you plan to install in the virtual
machine. Select both an operating system and a version. The New Virtual
Machine Wizard uses this information to select appropriate default settings for
the virtual machine.
Each virtual machine should have its own folder. All associated files, such as
the configuration file and the disk file, are placed in this folder.
The LSI Logic adapter has improved performance and works better with
generic SCSI devices. The LSI Logic adapter is also supported by ESX Server
2.0 and higher. Keep this in mind if you plan to migrate the virtual machine
to another VMware product.
Your choice of SCSI adapter does not affect your decision to make your
virtual disk an IDE or SCSI disk. However, some guest operating systems, such
as Windows XP, do not include a driver for the BusLogic or LSI Logic adapter.
You must download the driver from the LSI Logic Web site.
Note: Drivers for a Mylex (BusLogic) compatible host bus adapter are not
obvious on the LSI Logic Web site. Search the support area for the numeric
string in the model number. For example, search for "958" for BT/KT-958
drivers.
See the VMware Guest Operating System Installation Guide for details about
the driver and the guest operating system you plan to install in this virtual
machine.
13. Select the disk you want to use with the virtual machine.
Select Create a new virtual disk.
Virtual disks are the best choice for most virtual machines. They are quick
and easy to set up and can be moved to new locations on the same host
computer or to different host computers. By default, virtual disks start as small
files on the host computer's hard drive, then expand as needed — up to the
size you specify in the next step. The next step also allows you to allocate all
the disk space when the virtual disk is created, if you wish.
The wizard recommends the best choice based on the guest operating system
you selected. All Linux distributions you can select in the wizard use SCSI
virtual disks by default, as do Windows NT, Windows 2000 and Longhorn.
All Windows operating systems except Windows NT, Windows 2000 and
Longhorn use IDE virtual disks by default; NetWare, FreeBSD, MS-DOS and
other guests default to IDE virtual disks.
Enter the size of the virtual disk that you wish to create.
You can set a size between 0.1 GB and 950 GB for a SCSI virtual disk. The
default is 4 GB.
The option Allocate all disk space now gives somewhat better performance
for your virtual machine. If you do not select Allocate all disk space now, the
virtual disk's files start small and grow as needed, but they can never grow
larger than the size you set here.
Note: Allocate all disk space now is a time-consuming operation that cannot
be cancelled, and requires as much physical disk space as you specify for the
virtual disk.
Select the option Split disk into 2GB files if your virtual disk is stored on a
file system that does not support files larger than 2GB.
If you want to specify which device node should be used by your SCSI or IDE
virtual disk, click Advanced.
On the advanced settings panel, you can also specify a disk mode. This is
useful in certain special-purpose configurations in which you want to exclude
disks from snapshots. For more information on the snapshot feature, see Using
Snapshots.
Normal disks are included in snapshots. In most cases, you should use normal
disks, leaving Independent unchecked.
Independent disks are not included in snapshots.
Caution: The independent disk option should be used only by advanced users
who need it for special-purpose configurations.
You have the following options for an independent disk:
Persistent - changes are immediately and permanently written to the disk.
Nonpersistent - changes to the disk are discarded when you power off or revert
to a snapshot.
Hadoop and MapReduce
Hadoop uses a storage method known as a distributed file system, which
essentially implements a mapping system to locate data in a cluster. The tools
used for data processing, such as MapReduce programs, are generally located
on the very same servers, which allows for faster processing of the data.
The canonical MapReduce example counts the appearance of each word in a set
of documents:
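The code sample that originally accompanied this example appears to have been dropped. A minimal, runnable Python sketch of the idea follows; the function names and the toy single-process driver are illustrative, not part of any real MapReduce framework:

```python
from collections import defaultdict

def map_doc(name, document):
    # map: emit a (word, 1) pair for every word in the document
    return [(word, 1) for word in document.split()]

def reduce_word(word, counts):
    # reduce: sum all partial counts emitted for one word
    return word, sum(counts)

def word_count(documents):
    # toy "framework": run map, shuffle pairs by key, reduce each group
    groups = defaultdict(list)
    for name, text in documents.items():
        for word, one in map_doc(name, text):
            groups[word].append(one)
    return dict(reduce_word(w, c) for w, c in groups.items())

print(word_count({"doc1": "to be or not to be"}))
# {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```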
Here, each document is split into words, and each word is counted by
the map function, using the word as the result key. The framework puts together
all the pairs with the same key and feeds them to the same call to reduce. Thus,
this function just needs to sum all of its input values to find the total appearances
of that word.
As another example, imagine that for a database of 1.1 billion people, one would
like to compute the average number of social contacts a person has according to
age. In SQL, such a query could be expressed as:
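The SQL statement that originally accompanied this passage was dropped; based on the surrounding description (a social.person table holding each person's age and number of contacts), it would look something like this sketch:

```sql
SELECT age, AVG(contacts)
FROM social.person
GROUP BY age
ORDER BY age
```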
Using MapReduce, the K1 key values could be the integers 1 through 1100, each
representing a batch of 1 million records, the K2 key value could be a person's
age in years, and this computation could be achieved using the following
functions:
function Map is
    input: integer K1 between 1 and 1100, representing a batch of 1 million
        social.person records
    for each social.person record in the K1 batch do
        let Y be the person's age
        let N be the number of contacts the person has
        produce one output record (Y,(N,1))
    repeat
end function

function Reduce is
    input: age (in years) Y
    for each input record (Y,(N,C)) do
        Accumulate in S the sum of N*C
        Accumulate in Cnew the sum of C
    repeat
    let A be S/Cnew
    produce one output record (Y,(A,Cnew))
end function
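The two pseudocode functions above can be translated into a runnable Python sketch. This version is sequential and illustrative only; the batching, shuffling and parallel dispatch would be done by the MapReduce framework:

```python
def map_batch(k1, batch):
    # batch: the list of (age, contacts) person records in batch number k1
    out = []
    for age, contacts in batch:
        out.append((age, (contacts, 1)))   # produce (Y, (N, 1))
    return out

def reduce_age(age, records):
    # records: the (N, C) pairs shuffled to this age Y
    s = sum(n * c for n, c in records)     # accumulate N*C in S
    c_new = sum(c for n, c in records)     # accumulate C in Cnew
    return age, (s / c_new, c_new)         # produce (Y, (A, Cnew))

# two ten-year-olds with 9 contacts each
pairs = map_batch(1, [(10, 9), (10, 9)])
print(reduce_age(10, [rec for y, rec in pairs]))
# (10, (9.0, 2))
```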
The MapReduce system would line up the 1100 Map processors, and would
provide each with its corresponding 1 million input records. The Map step would
produce 1.1 billion (Y,(N,1)) records, with Y values ranging between, say, 8 and
103. The MapReduce system would then line up the 96 Reduce processors by
shuffling the key/value pairs (we need one average per age), and would provide
each with its millions of corresponding input records. The Reduce step would
result in the much reduced set of only 96 output records (Y,A), which would be
put in the final result file, sorted by Y.
The count info in the record is important if the processing is reduced more than
one time. If we did not add the count of the records, the computed average would
be wrong, for example:
If we reduce files #1 and #2, we will have a new file with an average of 9 contacts
for a 10-year-old person ((9+9+9+9+9)/5):
If we reduce it with file #3, we lose the count of how many records we've already
seen, so we end up with an average of 9.5 contacts for a 10-year-old person
((9+10)/2), which is wrong. The correct answer is 9.166 = 55 / 6 =
(9*3+9*2+10*1)/(3+2+1).
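The arithmetic above can be checked directly. A small Python sketch contrasts the count-weighted merge with the naive average-of-averages; the three (average, count) partials correspond to the files in the example:

```python
# per-file partial results for age 10, as (average contacts, record count)
partials = [(9, 3), (9, 2), (10, 1)]   # files #1, #2 and #3

# correct merge: weight every partial average by its record count
total = sum(avg * cnt for avg, cnt in partials)   # 9*3 + 9*2 + 10*1 = 55
count = sum(cnt for avg, cnt in partials)         # 3 + 2 + 1 = 6
print(round(total / count, 3))                    # 9.167

# wrong merge: averaging the averages while ignoring the counts
wrong = ((9 + 9) / 2 + 10) / 2                    # reduce #1+#2, then +#3
print(wrong)                                      # 9.5
```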
LAB WORK NO. 5
Objective: Working in Codenvy to demonstrate Provisioning and Scaling of a
website.
One of the advantages of coding in the cloud with Codenvy is deploying to a PaaS
of choice once the app has been built, run and tested in Codenvy. Users do not
need to install any plugins or administer their workspaces in any way. Codenvy
talks to the APIs of the most popular PaaS providers. Currently, the following
PaaS are supported:
• AppFog
• CloudBees
• AWS Elastic Beanstalk
• Google App Engine
• Heroku
• Openshift
• ManyMo (to run Android apps)
Some providers require deployment of SSH keys and git operations to update the
apps (Heroku, OpenShift), while others (GAE, AWS) make it possible to update
apps in one click.
• If you do not have accounts with Gmail or GitHub or just want to choose
a domain name by yourself, enter your email and the desired domain name,
and press Go.
Getting Started Using Codenvy Factories
You can find Codenvy Factory buttons at this site, Codenvy.com, or anywhere on
the net. If you click on a Factory button, we will create a temporary workspace
for you with the project of your choice. After a fruitful coding session in a
temporary workspace, you can create a permanent account with Codenvy by
pressing the Create Account button in the top right corner of the temporary
workspace.
Projects are combinations of modules, folders and files. Projects can be mapped
1:1 to a source code repository. If a project is given a type, then Codenvy will
activate plug-ins related to that type. For example, projects with the maven
project type will automatically get the Java plug-in, which provides a variety of
intellisense features for Java projects.
You can also create a new project from the Welcome Screen - Create a New
Project From Scratch
Device Support
Codenvy currently supports all desktop and laptop devices. We currently provide
touch device support through the use of the Puffin Web Browser which virtualizes
double clicks and right clicks. We have not yet created a native touch UI design.
Browser Support
• Chrome: 21+
• Firefox: 15+
• Safari: 5.1+
• Puffin Browser: 2.4+
The bundled runtime includes Apache Tomcat 7.0.39.
Language Support
[Language support matrix lost in extraction; its columns were Language,
Version, Syntax coloring, Code outline, Code assistant, Error detection, Debug
mode and Cloud local run.]
Framework Support
[Framework support matrix lost in extraction; its columns were Framework,
Version, Access and Templates.]
PaaS Support
[PaaS support matrix partially lost in extraction; its columns were PaaS,
Languages, Features and Cloud SDK/Run. Recoverable entries: AWS (Java;
EC2 and S3 console), Cloud Foundry (Java, Ruby; manage applications), and
Google App Engine (Java, Python, PHP with app IDs whitelisted at GAE;
application management, logs, indexes, Pagespeed, queues, DoS, resource
limits, crons, backends).]
LAB WORK NO. 6
Objective: Working in Codenvy to create a project in the cloud.
Create a Maven project using the wizard:
1. Click on Create Project.
5. Run a Maven build to create the Java folder structure. To do this, go to the
top menu, choose CMD → Maven → Build, then click the RUN button.
LAB WORK NO. 7
Objective: Demonstrating provisioning and scaling of a website on Salesforce.
Sales Cloud mainly works with the Lead, Account, Contact and Opportunity
objects. Leads can be converted into Account, Contact and Opportunity
objects, an important built-in feature of the Sales Cloud. What's more, if any
custom fields are added to the Lead object, it is also possible to set up the
mapping for them.
Opportunities are well managed by giving them different stages and
probabilities.
Service Cloud
In Service Cloud, the base objects are cases and solutions. A Service Executive
can create a case on a customer enquiry or a complaint, and the corresponding
solution can be stored in a solution object. There is some standard functionality,
like email to case, which will automatically create a new case in the CRM on
every customer email.
Marketing Cloud
Marketing Cloud is an application for marketing purposes. It helps in the creation
and execution of marketing campaigns, email promotions, and more.
Custom Cloud
Custom fields can be added to the standard objects, and custom workflows can
also be created. For custom views as well as business logic, Visualforce pages
and associated apex classes can be used. All of these customization facilities
make it possible to fulfill just about any need a CRM user may have.
Analytics
Every CRM application must be able to present reports with the data stored in it.
In Salesforce, the ‘Reports and Dashboards’ feature enables effective analytics.
There are a number of standard reports associated with the standard objects. Each
report can be used to create dashboard components like graphs. Standard reports
are placed in folders available in Salesforce, so finding a report is easy.
Salesforce automation
Salesforce automation features include tracking leads, managing emails,
assigning tasks, notifications, approvals etc. This CRM will handle all the
automation required for the sales, marketing and service processes.
This technical note shows you the first steps in creating an iBOLT Salesforce.com
project.
Creating a Salesforce.com Account
1. Go to http://www.salesforce.com/developer/
5. After you have changed your password, you should request a security
token:
1. Click on the Setup link at the top of the page.
2. In the My Personal Information section, click on the Reset your
security token link to receive an email containing a security token.
You will use this later on, when you add the security token to your
password, and enter the combined result in the Password field in
iBOLT’s Connections Salesforce dialog box. For example, if the
password is 1234, and the security token is AABBCC, you should
enter 1234AABBCC in the iBOLT password field.
Note: You do not need the security token for logging into the
Salesforce.com website. The token is used only when you use
external applications such as iBOLT.
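The password-plus-token rule is easy to get wrong, so here is a one-line sketch using the example values from the note above:

```python
password = "1234"           # the account password from the example
security_token = "AABBCC"   # the security token from the example

# iBOLT's password field expects the token appended directly to the password
combined = password + security_token
print(combined)   # 1234AABBCC
```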
What is SOQL?
SQL is the standard ANSI query language, whereas SOQL (short for Salesforce
Object Query Language) is a specially optimized version of SQL designed for
working with the Salesforce.com underlying database. SOQL does not support
some advanced SQL features, such as SELECT * wildcards and arbitrary join
operations.
The basic structure of SOQL is very similar to SQL as you see below:
SELECT Id, Name, Phone
FROM Contact
WHERE Phone != null
AND Name LIKE '%adam%'
ORDER BY Name
LIMIT 50
However, the ease with which you can access related records in one query
without needing to do complex joins is amazing!
SELECT Id, Name, Phone, Account.Name
FROM Contact
WHERE Phone != null
AND Name LIKE '%adam%'
ORDER BY Name
LIMIT 50
Execute SOQL queries or SOSL searches in the Query Editor panel of the
Developer Console.
About Amazon.com
Amazon.com is the world’s largest online retailer. In 2011, Amazon.com
switched from tape backup to using Amazon Simple Storage Service (Amazon
S3) for backing up the majority of its Oracle databases. This strategy reduces
complexity and capital expenditures, provides faster backup and restore
performance, eliminates tape capacity planning for backup and archive, and
frees up administrative staff for higher value operations. The company was able
to replace their backup tape infrastructure with cloud-based Amazon S3 storage,
eliminate backup software, and experienced a 12X performance improvement,
reducing restore time from around 15 hours to 2.5 hours in select scenarios.
The Challenge
As Amazon.com grows larger, the sizes of their Oracle databases continue to
grow, and so does the sheer number of databases they maintain. This has caused
growing pains related to backing up legacy Oracle databases to tape and led to
the consideration of alternate strategies including the use of Cloud services of
Amazon Web Services (AWS), a subsidiary of Amazon.com. Some of the
business challenges Amazon.com faced included:
• Utilization and capacity planning is complex, and time and capital expense
budget are at a premium. Significant capital expenditures were required over
the years for tape hardware, data center space for this hardware, and
enterprise licensing fees for tape software. During that time, managing tape
infrastructure required highly skilled staff to spend time with setup,
certification and engineering archive planning instead of on higher value
projects. And at the end of every fiscal year, projecting future capacity
requirements required time consuming audits, forecasting, and budgeting.
• The cost of backup software required to support multiple tape devices sneaks
up on you. Tape robots provide basic read/write capability, but in order to
fully utilize them, you must invest in proprietary tape backup software. For
Amazon.com, the cost of the software had been high, and added significantly
to overall backup costs. The cost of this software was an ongoing budgeting
pain point, but one that was difficult to address as long as backups needed to
be written to tape devices.
• Maintaining reliable backups and being fast and efficient when retrieving data
requires a lot of time and effort with tape. When data needs to be durably
stored on tape, multiple copies are required. When everything is working
correctly, and there is minimal contention for tape resources, the tape robots
and backup software can easily find the required data. However, if there is a
hardware failure, human intervention is necessary to restore from tape.
Contention for tape drives resulting from multiple users’ tape requests slows
down restore processes even more. This adds to the recovery time objective
(RTO) and makes achieving it more challenging compared to backing up to
Cloud storage.
The Benefits
With the migration to Amazon S3 well along the way to completion,
Amazon.com has realized several benefits, including:
• Elimination of complex and time-consuming tape capacity planning.
Amazon.com is growing larger and more dynamic each year, both organically
and as a result of acquisitions. AWS has enabled Amazon.com to keep pace
with this rapid expansion, and to do so seamlessly. Historically, Amazon.com
business groups have had to write annual backup plans, quantifying the
amount of tape storage that they plan to use for the year and the frequency
with which they will use the tape resources. These plans are then used to
charge each organization for their tape usage, spreading the cost among many
teams. With Amazon S3, teams simply pay for what they use, and are billed
for their usage as they go. There are virtually no upper limits as to how much
data can be stored in Amazon S3, and so there are no worries about running
out of resources. For teams adopting Amazon S3 backups, the need for formal
planning has been all but eliminated.
• Reduced capital expenditures. Amazon.com no longer needs to acquire tape
robots, tape drives, tape inventory, data center space, networking gear,
enterprise backup software, or predict future tape consumption. This
eliminates the burden of budgeting for capital equipment well in advance as
well as the capital expense.
• Immediate availability of data for restoring – no need to locate or retrieve
physical tapes. Whenever a DBA needs to restore data from tape, they face
delays. The tape backup software needs to read the tape catalog to find the
correct files to restore, locate the correct tape, mount the tape, and read the
data from it. In almost all cases the data is spread across multiple tapes,
resulting in further delays. This, combined with contention for tape drives
resulting from multiple users’ tape requests, slows the process down even
more. This is especially severe during critical events such as a data center
outage, when many databases must be restored simultaneously and as soon as
possible. None of these problems occur with Amazon S3. Data restores can
begin immediately, with no waiting or tape queuing – and that means the
database can be recovered much faster.
• Backing up a database to Amazon S3 can be two to twelve times faster than
with tape drives. As one example, in a benchmark test a DBA was able to
restore 3.8 terabytes in 2.5 hours over gigabit Ethernet. This amounts to 25
gigabytes per minute, or 422MB per second. In addition, since Amazon.com
uses RMAN data compression, the effective restore rate was 3.37 gigabytes
per second. This 2.5 hours compares to, conservatively, 10-15 hours that
would be required to restore from tape.
• Easy implementation of Oracle RMAN backups to Amazon S3. The DBAs
found it easy to start backing up their databases to Amazon S3. Directing
Oracle RMAN backups to Amazon S3 requires only a configuration of the
Oracle Secure Backup Cloud (SBC) module. The effort required to configure
the Oracle SBC module amounted to an hour or less per database. After this
one-time setup, the database backups were transparently redirected to
Amazon S3.
• Durable data storage provided by Amazon S3, which is designed for 11 nines
durability. On occasion, Amazon.com has experienced hardware failures with
tape infrastructure – tapes that break, tape drives that fail, and robotic
components that fail. Sometimes this happens when a DBA is trying to
restore a database, and dramatically increases the mean time to recover
(MTTR). With the durability and availability of Amazon S3, these issues are
no longer a concern.
• Freeing up valuable human resources. With tape infrastructure, Amazon.com
had to seek out engineers who were experienced with very large tape backup
installations – a specialized, vendor-specific skill set that is difficult to find.
They also needed to hire data center technicians and dedicate them to
problem-solving and troubleshooting hardware issues – replacing drives,
shuffling tapes around, shipping and tracking tapes, and so on. Amazon S3
allowed them to free up these specialists from day-to-day operations so that
they can work on more valuable, business-critical engineering tasks.
• Elimination of physical tape transport to off-site location. Any company that
has been storing Oracle backup data offsite should take a hard look at the
costs involved in transporting, securing and storing their tapes offsite – these
costs can be reduced or possibly eliminated by storing the data in Amazon S3.
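As a rough illustration of the one-time RMAN setup mentioned above, directing backups through the Oracle Secure Backup Cloud module is typically a matter of configuring an SBT channel. The library path, parameter file location and the choice to make SBT the default device are placeholder assumptions for this sketch, not Amazon.com's actual configuration:

```
RMAN> CONFIGURE CHANNEL DEVICE TYPE SBT
      PARMS 'SBT_LIBRARY=/orabackup/lib/libosbws.so,
      ENV=(OSB_WS_PFILE=/orabackup/config/osbws.ora)';
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO SBT;
RMAN> BACKUP DATABASE;
```

After this configuration, ordinary BACKUP and RESTORE commands read and write Amazon S3 transparently.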
As the world’s largest online retailer, Amazon.com continuously innovates in
order to provide improved customer experience and offer products at the lowest
possible prices. One such innovation has been to replace tape with Amazon S3
storage for database backups. This innovation is one that can be easily
replicated by other organizations that back up their Oracle databases to tape.