Veritas NetBackup Backup Planning and Performance Tuning Guide
Release 6.0
N281842
Technical support
For technical assistance, visit http://support.veritas.com and select phone or
email support. Use the Knowledge Base search feature to access resources such
as TechNotes, product alerts, software downloads, hardware compatibility lists,
and our customer email notification service.
Contents

Section I       Best practices

Chapter 1       NetBackup capacity planning

Chapter 2       Master Server configuration guidelines

Chapter 3       Media Server configuration guidelines

Chapter 4       Media configuration guidelines

Chapter 5       Database backup guidelines

Chapter 6       Best practices
Best practices: new tape drive technologies .................................................... 66
Best practices: tape drive cleaning ................................................................... 66
Best practices: storing tape cartridges ............................................................. 68
Best practices: recoverability ............................................................................. 68
Suggestions for data recovery planning .................................................. 69
Best practices: naming conventions ................................................................. 71

Section II      Performance tuning

Chapter 7       Measuring performance
Overview ................................................................................................................ 76
Controlling system variables for consistent testing conditions ................... 76
Server variables ............................................................................................ 76
Network variables ........................................................................................ 77
Client variables ............................................................................................. 78
Data variables ............................................................................................... 78
Evaluating performance ..................................................................................... 79
Evaluating UNIX system components .............................................................. 84
Monitoring CPU load ................................................................................... 84
Measuring performance independent of tape or disk output ............... 84
Evaluating Windows system components ....................................................... 85
Monitoring CPU load ................................................................................... 86
Monitoring memory use ............................................................................. 87
Monitoring disk load ................................................................................... 87

Chapter 8

Chapter 9

Chapter 10

Chapter 11

Appendix A      Additional resources
Performance tuning information at vision online ............................... 161
Performance monitoring utilities ........................................................... 161
Freeware tools for bottleneck detection ................................................ 161
Mailing list resources ................................................................................ 162

Index .............................................................................................................................. 163
Section I
Best Practices
Note: For a discussion of tuning factors and general recommendations that may
be applied to an existing installation, see Section II.
Chapter 1
NetBackup capacity planning
This chapter explains how to design your backup system as a foundation for
good performance.
This chapter includes the following sections:
Introduction on page 13
Veritas NetBackup is a high-performance data protection application. Its
architecture is designed for large and complex distributed computing
environments. NetBackup provides a scalable storage management server that
can be configured for network backup, recovery, archival, and file migration
services.
This manual is for administrators who want to analyze, evaluate, and tune
NetBackup performance. It is intended to answer questions such as the
following: How big should the backup server be? How can the NetBackup
server be tuned for maximum performance? How many CPUs and tape drives
are needed? How should backups be configured to run as fast as possible? How
can recovery times be improved? What tools can characterize or measure how
NetBackup is handling data?
Note: The factors most critical to performance are hardware-related rather than
software-related. Hardware selection and configuration have roughly four times
the weight that software has in determining performance. Although this guide
provides some hardware configuration assistance, it is assumed for the most
part that your devices are configured correctly.
Disclaimer
It is assumed you are familiar with NetBackup and your applications, operating
systems, and hardware. The information in this manual is advisory only,
presented in the form of guidelines. Changes to an installation undertaken as a
result of the information contained herein should be verified in advance for
appropriateness and accuracy. Some of the information contained herein may
apply only to certain hardware or operating system architectures.
Note: The information in this manual is subject to change.
Introduction
The first step toward accurately estimating your backup requirements is a
complete understanding of your environment. Many performance issues can be
traced to hardware or environmental issues. A basic understanding of the entire
backup data path is important in determining the maximum performance you
can expect from your installation.
Every backup environment has a bottleneck. It may be a fast bottleneck, but it
will determine the maximum performance obtainable with your system.
Example:
Consider the configuration illustrated below. In this environment, backups run
slowly (in other words, they are not completing in the scheduled backup
window). Total throughput is 8 to 10 megabytes per second.
What makes the backups run slowly? How can NetBackup or the environment be
configured to increase backup performance in this situation?
Figure 1-1
The explanation is that the LAN, having a speed of 100 megabits per second, has
a theoretical throughput of 12.5 megabytes per second. In practice, 100BaseT
throughput is unlikely to exceed 70% utilization. Therefore, the best delivered
data rate is about 8 megabytes per second to the NetBackup server. The
throughput can be even lower when TCP/IP packet headers, TCP window-size
constraints, router hops (packet latency for ACK packets delays the sending of
the next data packet), host CPU utilization, file system overhead, and other LAN
users' activity are taken into account. Since the LAN is the slowest element in
the backup path, it is the first place to look in order to increase backup
performance in this configuration.
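The arithmetic behind this estimate is simple enough to capture in a short calculation. The sketch below (Python, purely illustrative) uses the 70% utilization ceiling cited above; actual throughput also depends on the additional factors just listed.

# Rough LAN throughput estimate for the example above (illustrative only).
link_speed_megabits = 100          # 100BaseT LAN
utilization = 0.70                 # practical utilization ceiling cited above

theoretical_mb_per_sec = link_speed_megabits / 8        # 12.5 megabytes/second
delivered_mb_per_sec = theoretical_mb_per_sec * utilization

print(f"Theoretical rate: {theoretical_mb_per_sec:.1f} MB/second")
print(f"Best delivered rate: {delivered_mb_per_sec:.1f} MB/second")   # about 8 MB/second, as cited above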
What types of backups are needed and how often should they take place?
Tape requirements
Restore opportunities.
To properly size your backup system, you must decide on the type and
frequency of your backups. Will you perform daily incremental and weekly
full backups? Monthly or bi-weekly full backups?
If backups are sent off site, how long must they remain off site?
If you plan to send tapes to an off site location as a disaster recovery option,
you must identify which tapes to send off site and how long they remain off
site. You might decide to duplicate all your full backups, or only a select few.
You might also decide to duplicate certain systems and exclude others. As
tapes are sent off site, you will need to buy new tapes to replace them until
they are recycled back from off site storage. If you forget this simple detail,
you will run out of tapes when you most need them.
backup system, explains how to calculate the amount of data you can
transfer over those networks in a given time.
Depending on the amount of data that you want to back up and the
frequency of those backups, you might want to consider installing a private
network just for backups.
What new systems will be added to your site in the next six months?
It is important to plan for future growth when designing your backup
system. By analyzing the potential growth of your current and future
systems, you can ensure that your backup solution accommodates the
environment you will have in the future. Remember to add any resulting
growth to your total backup solution.
Data type: What are the types of data: text, graphics, database? How
compressible is the data? How many files are involved? Will the data be
encrypted? (Note that encrypted backups may run slower. See Encryption
on page 133 for more information.)
Data location: Is the data local or remote? What are the characteristics of
the storage subsystem? What is the exact data path? How busy is the
storage subsystem?
Note: The ideas and examples that follow are based on standard and ideal
calculations. Your numbers will differ based on your particular environment,
data, and compression rates.
Calculate the required data transfer rate for your backups on page 17
Calculate the required data transfer rate for your network(s) on page 21
Calculate how much media is needed for full and incremental backups on
page 25
Calculate the size of the tape library needed to store your backups on
page 26
Summary on page 36
Example: Calculating your ideal data transfer rate during the week
Assumptions:
Amount of data to back up during a full backup = 500 gigabytes
Amount of data to back up during an incremental backup = 20% of a full backup
Daily backup window = 8 hours
Solution 1:
Full backup = 500 gigabytes
Ideal data transfer rate = 500 gigabytes/8 hours = 62.5 gigabytes/hour
Solution 2:
Incremental backup = 100 gigabytes
Ideal data transfer rate = 100 gigabytes/8 hours = 12.5 gigabytes/hour
To calculate your ideal data transfer rate during the weekends, divide the
amount of data that needs to be backed up by the length of the weekend backup
window.
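The same arithmetic, expressed as a small Python sketch (purely illustrative) using the assumptions above:

# Ideal data transfer rate = (amount of data to back up) / (backup window).
def ideal_rate_gb_per_hour(data_gb, window_hours):
    return data_gb / window_hours

full_backup_gb = 500
incremental_gb = 0.20 * full_backup_gb    # 20% of a full backup
daily_window_hours = 8

print(ideal_rate_gb_per_hour(full_backup_gb, daily_window_hours))    # 62.5 gigabytes/hour
print(ideal_rate_gb_per_hour(incremental_gb, daily_window_hours))    # 12.5 gigabytes/hour

For the weekend calculation, substitute the weekend backup window for daily_window_hours.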
Drive        Theoretical gigabytes/hour   Theoretical gigabytes/hour   Typical gigabytes/hour
             (no compression)             (2:1 compression)
LTO gen 1    54                           108                          37-65
LTO gen 2    108                          216                          75-130
LTO gen 3    288                          576                          200-345
SDLT 320     57                           115                          40-70
SDLT 600     129                          259                          90-155
STK 9940B    108                          252 (2.33:1)                 75-100
Depending on several factors that can influence the transfer rates of your
tape drives, you may obtain higher or lower transfer rates. The solutions in
the examples above are approximations of what you can expect.
Note also that a backup of encrypted data may take more time. See Encryption
on page 133 for more information.
The table below displays the transfer rates for several drive controllers. In
practice, your transfer rates might be slower because of the inherent overhead
of several variables including your file system layout, system CPU load, and
memory usage.
Table 1-2    Drive controller data transfer rates

Drive controller       Theoretical megabytes/second   Theoretical gigabytes/hour
ATA-5 (ATA/ATAPI-5)    66                             237.6
                       80                             288
iSCSI                  100                            360
                       100                            360
SATA/150               150                            540
Ultra-3 SCSI           160                            576
                       200                            720
SATA/300               300                            1080
Ultra320 SCSI          320                            1152
                       400                            1440
Network technology     Theoretical gigabytes/hour   Typical gigabytes/hour
10BaseT (switched)     3.6                          2.7
100BaseT (switched)    36                           32
1000BaseT (switched)   360                          320
To calculate your NetBackup catalog size, you need to know how much data you
will be backing up for full and incremental backups, how often these backups
will be performed, and for how long they will be retained. Here are two simple
formulas to calculate these values:
Data being tracked = (Amount of data to back up) * (Number of backups) *
(Retention period)
NetBackup catalog size (in bytes) = 120 * (number of files)
Note: If you select NetBackup's True Image Restore option, your catalog will be
twice as large as a catalog without this option selected. True Image Restore
collects the information required to restore directories to their contents at the
time of any selected full or incremental backup. Because the additional
information that NetBackup collects for incremental backups is the same as that
of a full backup, incremental backups take much more disk space when you
collect True Image Restore information.
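As a rough worked example, the following sketch applies the formulas above. All of the input values here are assumptions chosen for illustration; substitute your own backup sizes, file counts, and retention figures.

# Rough catalog sizing based on the formulas above (illustrative values only).
# Assumes about 120 bytes of catalog data per file tracked; double the result
# if the True Image Restore option is selected, as noted above.
files_per_full = 1_000_000        # assumed number of files in a full backup
fulls_retained = 4                # assumed number of full backups retained
incrementals_retained = 20        # assumed number of incrementals retained
changed_fraction = 0.20           # assumed fraction of files in each incremental

files_tracked = (files_per_full * fulls_retained
                 + files_per_full * changed_fraction * incrementals_retained)
catalog_bytes = 120 * files_tracked
print(f"Estimated catalog size: {catalog_bytes / 2**30:.1f} GB")      # about 0.9 GB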
Note: This space must be included when determining size requirements for a
master or media server, depending on where the EMM server is installed.
Space for the NBDB on the EMM server is required in the following two
locations:
UNIX
/usr/openv/db/data
/usr/openv/db/staging
Windows
install_path\NetBackupDB\data
install_path\NetBackupDB\staging
Calculate the required space for the NBDB in each of the two directories, as
follows:
60 MB + (2 KB * number of volumes configured for EMM)
where EMM is the Enterprise Media Manager, and volumes are NetBackup
(EMM) media volumes. Note that 60 MB is the default amount of space needed
for the NBDB database used by the EMM server. It includes pre-allocated space
for configuration information for devices and storage units.
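For example, the following sketch applies this formula; the volume count is an assumption for illustration only.

# Space estimate for the NBDB in each of the two directories listed above:
# 60 MB + (2 KB * number of volumes configured for EMM).
volumes_configured = 5000          # assumed number of EMM volumes
space_mb = 60 + (2 * volumes_configured) / 1024
print(f"Approximately {space_mb:.0f} MB needed in each directory")    # about 70 MB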
Note: During NetBackup installation, the install script looks for 60 MB of free
space in the above /data directory; if there is insufficient space, the installation
fails. The space in /staging is only required when a hot catalog backup is run.
If you expect your site's workload to increase over time, you can ease the pain of
future upgrades by planning for expansion. Design your initial backup
architecture so it can evolve to support more clients and servers. Invest in the
faster, higher-capacity components that will serve your needs beyond the
present.
A simple formula for calculating your tape needs is shown here:
Number of tapes = (Amount of data to back up) / (Tape capacity)
To calculate how many tapes will be needed based on all your requirements, the
above formula can be expanded to
Number of tapes = ((Amount of data to back up) * (Frequency of backups) *
(Retention period)) / (Tape capacity)
Table 1-4    Tape capacities

Drive        Theoretical gigabytes   Theoretical gigabytes
             (no compression)        (2:1 compression)
LTO gen 1    100                     200
LTO gen 2    200                     400
LTO gen 3    400                     800
SDLT 320     160                     320
SDLT 600     300                     600
STK 9940B    200                     400
Example: Calculating how many tapes are needed to store all your
backups
Preliminary calculations:
Size of full backups = 500 gigabytes * 4 (per month) * 6 months = 12
terabytes
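Continuing the example with the expanded formula above, the sketch below computes the tapes needed for the full backups; the incremental portion of the calculation would be added in the same way. The tape capacity is taken from Table 1-4 (LTO gen 2, no compression) purely as an illustrative assumption.

# Tapes needed for the full backups in the example above.
full_backup_gb = 500
fulls_per_month = 4
retention_months = 6
tape_capacity_gb = 200             # LTO gen 2, no compression (assumed drive type)

full_backup_data_gb = full_backup_gb * fulls_per_month * retention_months   # 12,000 GB (12 terabytes)
tapes_for_fulls = full_backup_data_gb / tape_capacity_gb
print(f"Tapes needed for full backups: {tapes_for_fulls:.0f}")               # 60 tapes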
Calculate the size of the tape library needed to store your backups
To calculate how many robotic library tape slots are needed to store all your
backups, take the number of tapes for backup calculated in Calculate how much
media is needed for full and incremental backups on page 25 and add tapes for
catalog backup and cleaning:
Tape slots needed = (Number of tapes needed for backups) + (Number of
tapes needed for catalog backups) + 1 (for a cleaning tape)
A typical example of tapes needed for catalog backup is 2.
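As a minimal sketch of this formula, with placeholder tape counts to be replaced by your own figures:

# Library slot count per the formula above (placeholder values).
tapes_for_backups = 60             # from the media calculation (assumed)
tapes_for_catalog_backups = 2      # typical figure cited above
cleaning_tapes = 1

slots_needed = tapes_for_backups + tapes_for_catalog_backups + cleaning_tapes
print(f"Tape slots needed: {slots_needed}")    # 63, before allowing for growth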
Additional tapes may be needed for the following:
Add tapes needed for future data growth. Make sure your system has a
viable upgrade path as new tape drives become available.
Designing a backup server becomes a simple task once the basic design
constraints are known:
Given the above, a simple approach to designing your backup server can be
outlined as follows:
Add memory
Add CPUs
Figure 1-2
In some cases, it may not be practical to design a generic server to back up all of
your systems. You might have one or several large servers that cannot be backed
up over a network within your backup window. In such cases, it is best to back up
those servers using their own locally-attached tape drives. Although this section
discusses how to design a master backup server, you can still use its information
to properly add the necessary tape drives and components to your other servers.
The next example shows how to configure a master server using the design
elements gathered from the previous sections.
The master server must be able to periodically communicate with all its
media servers. If there are too many media servers, master server
processing may be overloaded.
If at all possible, design your configuration with one master server per
firewall domain. In addition, do not share robotic tape libraries between
firewall domains.
As a rule, the number of clients (separate physical hosts) per master server
is not a critical factor for NetBackup. Ordinary backup processing
performed by each client has little or no impact on the NetBackup server,
unless, for instance, the clients all have database extensions or are trying to
run ALL_LOCAL_DRIVES at the same time.
Note: This table provides a rough estimate only, as a guideline for initial
planning. Note also that the RAM amounts shown below are for a base
NetBackup installation; RAM requirements vary depending on the NetBackup
features, options, and agents being used.
Table 1-5    Number of media servers per master server

Solaris master server, 2 gigabytes RAM, not backing up clients; media servers backing up themselves only, with 10 - 20 tape drives in not more than 2 libraries: 25 - 40 media servers per master server.
Solaris master server, 4 gigabytes RAM, not backing up clients; media servers backing up themselves only, with 10 - 20 tape drives in not more than 2 libraries: 35 - 50 media servers per master server.
Solaris master server, 8+ gigabytes RAM, not backing up clients; media servers backing up network clients, with 20 - 40 tape drives in not more than 2 libraries: 50 - 70 media servers per master server.
Windows master server, 2 gigabytes RAM, not backing up clients; media servers backing up themselves only, with 15 - 30 tape drives in not more than 2 libraries: 10+ media servers per master server.
Windows master server, 4 gigabytes RAM, not backing up clients; media servers backing up themselves only, with 20 - 40 tape drives in not more than 2 libraries: 20+ media servers per master server.
Windows master server, 8+ gigabytes RAM, not backing up clients; media servers backing up network clients, with 40 - 128 tape drives in not more than 2 libraries: 50+ media servers per master server.
Table 1-6    CPUs needed per master/media server component
The components covered include network cards (for example, an ATM card), tape drives, and OS and NetBackup.

Table 1-7    Memory needed per master/media server component
The components covered include network cards, tape drives, OS and NetBackup, and NetBackup multiplexing; the memory figures listed include 64 megabytes, 128 megabytes, 256 megabytes, 1 gigabyte, and 1 or more gigabytes.
Consider how many CPUs are needed (see CPUs needed per master/media
server component on page 31). Here are some general guidelines:
Experiments (with Sun Microsystems) have shown that a useful,
conservative estimate is 5MHz of CPU capacity per 1MB/second of data
movement in and out of the NetBackup media server. Keep in mind that the
operating system and other applications also use the CPU. This estimate is
for the power available to NetBackup itself.
Example:
A system backing up clients over the network to a local tape drive at the
rate of 10MB/second would need 100MHz of available CPU power:
50MHz to move data from the network to the NetBackup server
50MHz to move data from the NetBackup server to tape.
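The same estimate can be written out as a short calculation; the 10 MB/second rate is the example figure above, and the data is counted twice because it crosses the media server once on the way in and once on the way out.

# CPU estimate per the 5 MHz per 1 MB/second guideline above.
mhz_per_mb_per_sec = 5
backup_rate_mb_per_sec = 10        # clients backed up over the network to local tape

cpu_mhz_needed = mhz_per_mb_per_sec * backup_rate_mb_per_sec * 2   # in + out
print(f"Approximate CPU power needed by NetBackup: {cpu_mhz_needed} MHz")   # 100 MHz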
The platform chosen must be able to drive all network interfaces and keep all
tape devices streaming.
NOM server software does not have to be installed on the same server as
NetBackup 6.0 master server software. Since the NOM server is also a web
server, installing NOM on a master server may impact security and
performance. (The guidelines provided here assume that the NOM server is
a standalone host not acting as a master server.)
Sizing considerations
The size of your NOM server depends largely on the number of NetBackup
objects that NOM manages. See the following table.
Factors in determining NOM server size
Number of master servers to manage
(the number of media servers is irrelevant)
Number of policies
Number of jobs run per day
Number of media
Number of catalog images
Based on the above factors, the following NOM server components should be
sized accordingly.
NOM server components
Disk space (installed NOM binary + NOM database, described below)
Type and number of CPUs
RAM
The next section describes the NOM database and how it affects disk space
requirements, followed by overall sizing guidelines for NOM.
NOM database
The Sybase database used by NOM is similar to that used by NetBackup and is
installed as part of the NOM server installation.
The disk space needed for the initial installation of NOM depends on the
volume of data initially loaded onto the server, based on the following:
number of policy data records, number of job data records, number of media
data records, and number of catalog image records.
The rate of NOM database growth depends on the quantity of data being
managed: policy data, job data, media data, and catalog data.
Sizing guidelines
The following guidelines are presented in groups based on the number of objects
that your NOM server manages.
It is assumed that your NOM server is a standalone host (the host is not acting as
a NetBackup master server).
Note: Symantec recommends multiple NOM servers for deployments larger than
those described in the following guidelines.
Note: The guidelines are intended for basic planning purposes, and do not
represent fixed recommendations or restrictions.
In the following table, find the installation category that matches your site,
based on number of master servers that your NOM server will manage, number
of jobs per day, and so forth. Then consult the following table for NOM sizing
guidelines.
Table 1-8    NetBackup installation categories

NetBackup installation category   Master servers   Jobs per day   Policies      Alerts per day   Media
A                                 1 - 3            200 - 500      200 - 300     100 - 200        5000
B                                 3 - 5            500 - 1000     300 - 500     200 - 300        10000
C                                 5 - 7            1000 - 5000    1000 - 4000   500 - 800        20000
D                                 8 - 10           5000 - 8000    4000 - 8000   800 - 3000       30000
Using the NetBackup installation category from above (A, B, C, D), read across to
the recommended NOM server capacities.
Table 1-9    NOM server sizing guidelines

NetBackup installation category   OS        CPU type    RAM    Disk space
A                                 Windows   Pentium V   2 GB   80 GB
A                                 Solaris               2 GB   80 GB
B                                 Windows   Pentium V   2 GB   80 GB
B                                 Solaris               2 GB   80 GB
C                                 Windows   Pentium V   4 GB   80 GB
C                                 Solaris               4 GB   80 GB
D                                 Windows   Pentium V   4 GB   80 GB
D                                 Solaris               8 GB   80 GB
Summary
Using the guidelines provided in this chapter, design a solution that can do a full
backup and incremental backups of your largest system within your time
window. The remainder of the backups can happen over successive days.
Eventually, your site may outgrow its initial backup solution. By following these
guidelines, you can add more capacity at a future date without having to
redesign your basic strategy. With proper design and planning, you can create a
backup strategy that will grow with your environment.
As outlined in the previous sections, the number and location of the backup
devices are dependent on a number of factors.
If one drive causes backup window time conflicts, another can be added,
providing an aggregate rate of two drives. The trade-off is that the second drive
imposes extra CPU, memory, and I/O loads on the media server.
If you find that you cannot complete backups in the allocated window, one
approach is to either increase your backup window or decrease the frequency of
your full and incremental backups.
Another approach is to reconfigure your site to speed up overall backup
performance. Before you make any such change, you should understand what
determines your current backup performance. List or diagram your site network
and systems configuration. Note the maximum data transfer rates for all the
components of your backup configuration and compare these against the rate
you must meet for your backup window. This will identify the slowest
components and, consequently, the cause of your bottlenecks. Some likely areas
for bottlenecks include the networks, tape drives, client OS load, and filesystem
fragmentation.
Backup questionnaire

Table 1-10    Backup questionnaire

System name: Any unique name to identify the machine. Hostname or any unique name for each system.
Vendor: The hardware vendor who made the system (for example, Sun, HP, IBM, generic PC).
Model
OS version
Building / location
Total storage
Used storage: Total used internal and external storage capacity. If the amount of data to be backed up is substantially different from the amount used, please note that.
Network connection: For example, 10/100MB, Gigabit, T1. It is important to know if the LAN is a switched network or not.
Database (DB)
Key application: For example: Exchange server, accounting system, software developer's code repository, NetBackup critical policies.
Backup window: For example: incrementals run M-F from 11PM to 6AM, fulls are all day Sunday. This information helps determine where potential bottlenecks will be and how to configure a solution.
Retention policy: For example: incrementals for 2 weeks, full backups for 13 weeks. This information will help determine how to size the number of slots needed in a library.
Comments?: Any special situations to be aware of? Any significant patches on the operating system? Will the backups be over a WAN? Do the backups need to go through a firewall?
Chapter 2
Master Server configuration guidelines
This chapter provides guidelines and recommendations for better performance
on the NetBackup master server.
This chapter includes the following sections:
Host Properties > Master Server > Properties > Global Attributes >
Maximum jobs per client (should be greater than 1).
Host Properties > Master Server > Properties > Client Attributes setting
for Maximum data streams (should be greater than 1).
Policy attribute Limit jobs per policy (should be greater than 1).
Note: The Activity Monitor may not update if there are many (thousands of) jobs
to view. If this happens, you may need to change the memory setting using the
NetBackup Java command jnbSA with the -mx option. Refer to the
INITIAL_MEMORY, MAX_MEMORY subsection in the NetBackup System
Administrators Guide for UNIX and Linux, Volume I. Note that this situation
does not affect NetBackup's ability to continue running jobs.
Consolidated job and job policy views per server (or group of servers), for
filtering and sorting job activity.
For more information on the capabilities of NOM, refer to the NOM online help
in the Administration console, or see the NetBackup Operations Manager Getting
Started Guide.
Windows
install_path\NetBackup\bin
Windows
cd install_path\NetBackup
Windows
echo 0 > NOexpire
Miscellaneous considerations
Consider the following issues when planning for or troubleshooting NetBackup.
Use storage units in the order in which they are listed in the group.
Configure the storage unit group as a failover group. This means the first
storage unit in the group will be the only storage unit used. If the storage
unit is busy, then backups will queue. The second storage unit will only be
used if the first storage unit is down.
Disk staging
With disk staging, images can be created on disk initially, then copied later to
another media type (as determined in the disk staging schedule). The media type
for the final destination is typically tape, but could be disk. This two-stage
process leverages the advantages of disk-based backups in the near term, while
preserving the advantages of tape-based backups for long term.
Note that disk staging can be used to increase backup speed. For more
information, refer to the NetBackup System Administrators Guide, Volume I.
Image database: The image database contains information about what has
been backed up. It is by far the largest part of the catalog.
NetBackup data stored in relational databases: This includes the media and
volume data describing media usage and volume information which is used
during the backups.
NetBackup configuration files: Policy, schedule and other flat files used by
NetBackup.
The catalog resides under /usr/openv/: the relational database files (server.conf, databases.conf, vxdbms.conf, EMM_DATA.db, EMM_INDEX.db, NBDB.log, BMRDB.db, BMRDB.log, BMR_DATA.db, BMR_INDEX.db) under /db/data and /var/global; the image database under /Netbackup/db/images, with a subdirectory for each client (for example, /client_1 through /client_n, /Master, /Media_server); and the configuration files under /Netbackup/db in directories such as /error, /class, /class_template, /config, /vault, /jobs, /failure_history, and /media, and under /Netbackup/vault.
When defining the file list, use absolute pathnames for the locations of the
NetBackup and Media Manager catalog paths and include the server name
in the path. This is in case the media server performing the backup is
changed.
Use catalog archiving. Catalog archiving reduces the size of online catalog
data by relocating the large catalog .f files to secondary storage.
Offload some policies, clients, and backup images from the current master
server to a new, additional master, so that each master has a window large
enough to allow its catalog backup to finish. Since a media server can be
connected to one master server only, additional media servers may be
needed. For assistance in adding another master server to lighten the
workload of the existing master, contact Symantec Consulting.
Catalog compression
When the NetBackup image catalog becomes too large for the available disk
space, there are two ways to manage this situation:
For details, refer to Moving the Image Catalog and Compressing and
Uncompressing the Image Catalog in the NetBackup System Administrators
Guide, Volume I.
Note that NetBackup compresses images after each backup session, regardless
of whether or not any backups were successful. This happens right before the
execution of the session_notify script and the backup of the catalog. The actual
backup session is extended until compression is complete.
Merging/splitting/moving servers
A master server schedules and maintains backup information for a given set of
systems. The Enterprise Media Manager (EMM) server and its database maintain
centralized device and media related information used on all servers that are
part of the configuration. By default, the EMM server and the NetBackup
Relational Database (NBDB) that contains the EMM data are located on the
master server. A large and dynamic data center can expect to periodically
reconfigure the number and organization of its backup servers.
excluded. Should disaster (or user error) strike, not being able to recover
files costs much more than backing up extra data.
When a policy specifies that all local drives be backed up
(ALL_LOCAL_DRIVES), nbpem initiates a parent job (nbgenjob) that
connects to the client and runs bpmount -i to get a list of mount points.
Then nbpem initiates a job with its own unique job identification number
for each mount point. Next the client bpbkar starts a stream for each job.
Then, and only then, the exclude list is read by NetBackup. When the entire
job is excluded, bpbkar exits with a status 0, stating that it sent 0 of 0 files
to backup. The resulting image files are treated just as any other successful
backup's image files. They expire in the normal fashion when the expiration
date in the image header files specifies they are to expire.
Critical policies
For online, hot catalog backups (a new feature in NetBackup 6.0), make sure to
identify those policies that are crucial to recovering your site in the event of a
disaster. For more information on hot catalog backup and critical policies, refer
to the NetBackup System Administrators Guide, Volume I.
Schedule frequency
To minimize the number of times you back up files that have not changed, and
to minimize your consumption of bandwidth, media, and other resources,
consider limiting the frequency of your full backups to monthly or even
quarterly, followed by weekly cumulative incremental backups and daily
incremental backups.
Managing logs
Optimizing the performance of vxlogview
As explained in the NetBackup Troubleshooting Guide, the vxlogview command
is used for viewing logs created by unified logging (VxUL). The vxlogview
command will deliver optimum performance when a file ID is specified in the
query.
For example: when viewing messages logged by the NetBackup Resource Broker
(nbrb) for a given day, you can filter out the library messages while viewing the
nbrb logs. To achieve this, run vxlogview as follows:
vxlogview -o nbrb -i nbrb -n 0
Note that -i nbrb specifies the file ID for nbrb. Specifying the file ID improves
the performance, because the search is confined to a smaller set of files.
The meaning of the various fields in this message (the fields are delimited by
blanks) is defined in the table below, Table 2-11, Meaning of daily_messages log
fields. The next table, Table 2-12, Message types, lists the values for the message
type, which is the third field in the log message.
Table 2-11    Meaning of daily_messages log fields

One field gives the type of message. Another gives the severity of the error: 1: Unknown, 2: Debug, 4: Informational, 8: Warning, 16: Error, 32: Critical. Other fields identify the process that logged the message (for example, bpjobd, or TERMINATED bpjobd) and optional entries that may appear as *NULL*.

Table 2-12    Message types

The message types include Unknown, General, Backup, Archive, Retrieve, Security, Backup status, and Media device; each type has a numeric value (the values shown include 16, 32, 64, and 128).
Chapter 3
Media Server configuration guidelines
This chapter provides configuration guidelines for the media server along with
related background information.
This chapter includes the following sections:
Windows
install_path\NetBackup\bin\goodies\available_media
The NetBackup Media List report may show that some media is frozen and
therefore cannot be used for backups.
One of the reasons NetBackup freezes media is because of recurring I/O errors.
The NetBackup Troubleshooting Guide describes the recommended approaches
for dealing with this issue, for example, under NetBackup error code 96. It is also
possible to configure the NetBackup error threshold value. The method for
doing this is described in this section.
Each time a read, write, or position error occurs, NetBackup records the time,
media ID, type of error, and drive index in the EMM database. Then NetBackup
scans to see whether that media has had m of the same errors within the past
n hours. The variable m is a tunable parameter known as
media_error_threshold. The default value of media_error_threshold is 2 errors.
If the volume has not been previously assigned for backups, then
NetBackup will:
log an error
If the volume is in the NetBackup media catalog and has been previously
selected for backups, then NetBackup will:
log an error
Adjusting media_error_threshold
To configure the NetBackup media error thresholds, use the nbemmcmd
command on the media server as follows. NetBackup freezes a tape volume or
downs a drive for which these values are exceeded. For more detail on the
nbemmcmd command, refer to the man page or to the NetBackup Commands
Guide.
UNIX
/usr/openv/netbackup/bin/admincmd/nbemmcmd -changesetting
-time_window unsigned integer -machinename string
-media_error_threshold unsigned integer -drive_error_threshold
unsigned integer
Windows
<install_path>\NetBackup\bin\admincmd\nbemmcmd.exe
-changesetting -time_window unsigned integer -machinename string
-media_error_threshold unsigned integer -drive_error_threshold
unsigned integer
Note: The following description has nothing to do with the number of times
NetBackup retries a backup/restore that fails. That situation is controlled by the
global configuration parameter Backup Tries for backups and the bp.conf
entry RESTORE_RETRIES for restores. This algorithm merely deals with
whether I/O errors on tape should cause media to be frozen or drives to be
downed.
When a read/write/position error occurs on tape, the error returned by the
operating system does not distinguish between whether the error is caused by
the tape or the drive. To prevent the failure of all backups in a given timeframe,
bptm tries to identify a bad tape volume or drive based on past history, using the
following logic:
Each time an I/O error occurs on a read/write/position, bptm logs the error
in the file /usr/openv/netbackup/db/media/errors (UNIX) or
install_path\NetBackup\db\media\errors (Windows). The error
message includes the time of the error, media ID, drive index and type of
error.
Examples of the entries in this file are these:
07/21/96 04:15:17 A00167 4 WRITE_ERROR
07/26/96 12:37:47 A00168 4 READ_ERROR
Each time an entry is made, the past entries are scanned to determine if the
same media ID and/or drive has had this type of error in the past n hours.
n is known as the time_window. The default time window is 12 hours.
When performing the history search for the time_window entries, EMM
notes past errors that match the media ID, the drive, or both the drive and
the media ID. The purpose of this is to determine the cause of the error. For
example, if a given media ID gets write errors on more than one drive, it is
assumed that the tape volume is bad and NetBackup freezes the volume. If
more than one media ID gets a particular error on the same drive, it is
assumed the drive is bad and the drive goes to a down state. If only past
errors are found on the same drive with the same media ID, then EMM
assumes that the volume is bad and freezes it.
Freezing or downing does not occur on the first error. There are two other
parameters, media_error_threshold and drive_error_threshold. The default
value of both of these parameters is 2. For a freeze or down to happen,
more than the threshold number of errors must occur (by default, at least
three errors must occur) in the time window for the same drive/media ID.
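The decision logic just described can be summarized in a short sketch. This is only an illustrative paraphrase of the documented behavior, not the actual bptm or EMM implementation, and the defaults shown are the default values cited above.

# Illustrative sketch of the freeze/down decision described above.
from datetime import timedelta

TIME_WINDOW_HOURS = 12             # default time_window
MEDIA_ERROR_THRESHOLD = 2          # default media_error_threshold
DRIVE_ERROR_THRESHOLD = 2          # default drive_error_threshold

def evaluate(history, new_error):
    """history: list of (timestamp, media_id, drive_index, error_type) tuples."""
    history.append(new_error)
    cutoff = new_error[0] - timedelta(hours=TIME_WINDOW_HOURS)
    recent = [e for e in history if e[0] >= cutoff and e[3] == new_error[3]]

    media_errors = [e for e in recent if e[1] == new_error[1]]
    drive_errors = [e for e in recent if e[2] == new_error[2]]

    # Same media ID failing on more than one drive: assume the volume is bad.
    if len({e[2] for e in media_errors}) > 1 and len(media_errors) > MEDIA_ERROR_THRESHOLD:
        return "freeze volume"
    # More than one media ID failing on the same drive: assume the drive is bad.
    if len({e[1] for e in drive_errors}) > 1 and len(drive_errors) > DRIVE_ERROR_THRESHOLD:
        return "down drive"
    # Errors confined to this drive with this media ID: assume the volume is bad.
    if len(media_errors) > MEDIA_ERROR_THRESHOLD:
        return "freeze volume"
    return "no action"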
Use devfsadm to recreate the device nodes in /devices and the device
links in /dev for tape devices by running any one (not all) of the following
commands:
/usr/sbin/devfsadm -i st
/usr/sbin/devfsadm -c tape
/usr/sbin/devfsadm -C -c tape (Use this command to enforce cleanup if
dangling logical links are present in /dev.)
Chapter 4
Media configuration guidelines
This chapter provides guidelines and recommendations for better performance
with NetBackup media.
This chapter includes the following sections:
Pooling on page 60
Pooling
Here are some useful conventions for media pools (formerly known as volume
pools):
Use the available_media script in the goodies directory. You can put the
available_media report into a script which redirects the report output to a
file and emails the file to the administrators daily or weekly. This helps
track which tapes are full, frozen, suspended, and so on. By means of a
script, you can also filter the output of the available_media report to
generate custom reports.
To monitor media, you can also use the NetBackup Operations Manager
(NOM). For instance, NOM can be configured to issue an alert if there are
fewer than X number of media available, or if more than X% of the media is
frozen or suspended.
Do not create too many pools. The existence of too many pools causes the
library capacity to become fragmented across the pools. Consequently, the
library becomes filled with many partially-full tapes.
No need to multiplex
Writing to disk does not need to be streamed. This means that
multiplexing is not necessary.
Multiplexing is only necessary with tape because the tape must be
streamed. Multiplexing allows multiple clients and multiple file
systems to be backed up to the same tape simultaneously, thus
streaming the drive. However, this functionality slows down the
restore. (See Tape streaming on page 126 for an explanation of
streaming.)
Chapter 5
Database backup guidelines
This chapter gives planning guidelines for database backup.
This chapter includes the following sections:
Introduction on page 64
Introduction
Before you create a database, decide how to protect the database against
potential failures. Answer the following questions before developing your
backup strategy.
For specific information on backing up and restoring your database, refer to the
NetBackup administrators guide for your database product. In addition, the
manufacturer of your database product may provide publications that document
backup recommendations and methods.
Chapter 6
Best practices
This chapter describes an assortment of best practices, and includes the
following sections:
Best practices: new tape drive technologies

Best practices: tape drive cleaning
Frequency-based cleaning
On-demand cleaning
TapeAlert
Robotic cleaning

Frequency-based cleaning
NetBackup does frequency-based cleaning by tracking the number of hours a
drive has been in use. When this time reaches a configurable parameter,
NetBackup creates a job that mounts and exercises a cleaning tape. This cleans
the drive in a preventive fashion. The advantage of this method is that typically
there are no drives unavailable awaiting cleaning. There is also no limitation on
platform or robot type. On the downside, cleaning is done more often than
necessary. This adds system wear and consumes time that could be used to write
to the drive. Another limitation is that this method is hard to tune. When new
tapes are used, drive cleaning is needed less frequently; the need for cleaning
increases as the tape inventory ages. This increases the amount of tuning
administration needed and, consequently, the margin of error.
On-demand cleaning
Refer to the NetBackup Media Manager System Administrators Guide for more
information on this topic.
TapeAlert
TapeAlert allows reactive cleaning for most drive types. TapeAlert allows a tape
drive to notify EMM when it needs to be cleaned. EMM then performs the
cleaning. You must have a cleaning tape configured in at least one library slot in
order to utilize this feature. TapeAlert is the recommended cleaning solution if
it can be implemented.
Not all drives, at all firmware levels, support this type of reactive cleaning. In
the case where reactive cleaning is not supported on a particular drive,
frequency-based cleaning may be substituted. This solution is not vendor or
platform specific. Symantec has not tested specific firmware levels; however,
the vendor should be able to confirm that the TapeAlert feature is supported.
Disabling TapeAlert
To disable TapeAlert, create a touch file called NO_TAPEALERT:
UNIX:
/usr/openv/volmgr/database/NO_TAPEALERT
Windows:
install_path\volmgr\database\NO_TAPEALERT
Robotic cleaning
Robotic cleaning is not proactive, and is not subject to the limitations detailed
above. By being reactive, unnecessary cleanings are eliminated, frequency
tuning is not an issue, and the drive can spend more time moving data, rather
than in maintenance operations.
Library-based cleaning is not supported by EMM for most robots, since robotic
library and operating systems vendors have implemented this type of cleaning
in many different ways.
Best practices: recoverability
The methods and procedures you adopt for your installation should be documented
and tested regularly to ensure that your installation can recover from disaster.
Table 6-13    Operational risks and whether recovery is possible. The risks listed include media failure and the loss of the NetBackup software; recovery is possible in most of the cases shown.
Implementing Backup and Recovery: The Readiness Guide for the Enterprise,
by David B. Little and David A. Chapa, published by Wiley Technology
Publishing.
Review your site-specific recovery procedures and verify that they are
accurate and up-to-date. Also, verify that the more complex systems, such
as the NetBackup master and media servers, have current procedures for
rebuilding the machines with the latest software.
Put the NetBackup catalog on different online storage than the data
being backed up.
In the case of a site storage disaster, the catalogs of the backed-up data
should not reside on the same disks as production data. The reason
behind this is straightforward: you want to avoid the case where, if a
disk drive loses production data, it also loses the catalog of the
production data, resulting in increased downtime.
Best practices: naming conventions

Policy names
One good naming convention for policies is platform_datatype_server(s).
Example 1: w2k_filesystems_trundle
This policy name designates a policy for a single Windows server doing file
system backups.
Example 2: w2k_sql_servers
This policy name designates a policy for backing up a set of Windows 2000 SQL
servers. Several servers may be backed up by this policy. Servers that are
candidates for being included in a single policy are those running the same
operating system and with the same backup requirements. Grouping servers
within a single policy reduces the number of policies and eases the management
of NetBackup.
Schedule names
Create a generic scheme for schedule naming. One recommended set of schedule
names is daily, weekly, and monthly. Another recommended set of names is
incremental, cumulative, and full. This convention keeps the management effort
for NetBackup to a minimum. It also helps with the implementation of Vault, if your
site uses Vault.
Section II
Performance tuning
Section II explains how to measure your current NetBackup performance, and
gives general recommendations and examples for tuning NetBackup.
Section II includes these chapters:
Measuring performance
Additional resources
Chapter 7
Measuring performance
This chapter provides suggestions for measuring NetBackup performance.
This chapter includes the following sections:
Overview on page 76
Overview
The final measure of NetBackup performance is the length of time required for
backup operations to complete (usually known as the backup window), or the
length of time required for a critical restore operation to complete. However, to
measure existing performance and improve future performance by means of
those measurements calls for performance metrics more reliable and
reproducible than simple wall clock time. This chapter will discuss these metrics
in more detail.
After establishing accurate metrics as described here, you can measure the
current performance of NetBackup and your system components to compile a
baseline performance benchmark. With a baseline, you can apply changes in a
controlled way. By measuring performance after each change, you can
accurately measure the effect of each change on NetBackup performance.
Controlling system variables for consistent testing conditions

Server variables
It is important to eliminate all other NetBackup activity from your environment
when you are measuring the performance of a particular NetBackup operation.
One area to consider is the automatic scheduling of backup jobs by the
NetBackup scheduler.
When policies are created, they are usually set up to allow the NetBackup
scheduler to initiate the backups. The NetBackup scheduler will initiate backups
based on the traditional NetBackup frequency-based scheduling or on certain
days of the week, month, or other time interval. This process is called
calendar-based scheduling. As part of the backup policy definition, the Start
Window is used to indicate when the NetBackup scheduler can start backups
using either frequency-based or calendar-based scheduling. When you perform
backups for the purpose of performance testing, this setup might interfere since
the NetBackup scheduler may initiate backups unexpectedly, especially if the
operations you intend to measure run for an extended period of time.
The simplest way to prevent the NetBackup scheduler from running backup jobs
during your performance testing is to create a new policy specifically for use in
performance testing and to leave the Start Window field blank in the schedule
definition for that policy. This prevents the NetBackup scheduler from initiating
any backups automatically for that policy. After creating the policy, you can run
the backup on demand by using the Manual Backup command from the
NetBackup Administration Console.
To prevent the NetBackup scheduler from running backup jobs unrelated to the
performance test, you may want to set all other backup policies to inactive by
using the Deactivate command from the NetBackup Administration Console. Of
course, you must reactivate the policies to start running backups again.
You can use a user-directed backup to run the performance test as well.
However, using the Manual Backup option for a policy is preferred. With a
manual backup, the policy contains the entire definition of the backup job,
including the clients and files that are part of the performance test. Running the
backup manually, straight from the policy, means there is no doubt which policy
will be used for the backup, and makes it easier to change and test individual
backup settings from the policy dialog.
Before you start the performance test, check the Activity Monitor to make sure
there is no NetBackup processing currently in progress. Similarly, check the
Activity Monitor after the performance test for unexpected activity (such as an
unanticipated restore job) that may have occurred during the test.
Additionally, check for non-NetBackup activity on the server during the
performance test and try to reduce or eliminate it.
Note: By default, NetBackup logging is set to a minimum level. To gather more
logging information, set the legacy and unified logging levels higher and create
the appropriate legacy logging directories. For details on how to use NetBackup
logging, refer to the logging chapter of the NetBackup Troubleshooting Guide.
Keep in mind that higher logging levels will consume more disk space.
Network variables
Network performance is key to achieving optimum performance with
NetBackup. Ideally, you would use a completely separate network for
performance testing to avoid the possibility of skewing the results by
encountering unrelated network activity during the course of the test.
In many cases, a separate network is not available. Ensure that non-NetBackup
activity is kept to an absolute minimum during the time you are evaluating
performance. If possible, schedule testing for times when backups are not
active. Even occasional short bursts of network activity may be enough to skew
the results during portions of the performance test. If you are sharing the
network with production backups occurring for other systems, you must
account for this activity during the performance test.
Another network variable you must consider is host name resolution.
NetBackup depends heavily upon a timely resolution of host names to operate
correctly. If you have any delays in host name resolution, including reverse
name lookup to identify a server name from an incoming connection from a
certain IP address, you may want to eliminate that delay by using the HOSTS
(Windows) or /etc/hosts (UNIX) file for host name resolution on systems
involved in your performance test environment.
Client variables
Make sure the client system is in a relatively quiescent state during
performance testing. A lot of activity, especially disk-intensive activity such as
virus scanning on Windows, will limit the data transfer rate and skew the results
of your tests.
One possible mistake is to allow another NetBackup server, such as a production
backup server, to have access to the client during the course of the test. This
may result in NetBackup attempting to back up the same client to two different
servers at the same time, which would severely impact the results of a
performance test in progress at that time.
Different file systems have different performance characteristics. For example,
comparing data throughput results from operations on a UNIX VxFS or
Windows FAT file system to those from operations on a UNIX NFS or Windows
NTFS system may not be valid, even if the systems are otherwise identical. If you
do need to make such a comparison, factor the difference between the file
systems into your performance evaluation testing, and into any conclusions you
may draw from that testing.
Data variables
Monitoring the data you are backing up improves the repeatability of
performance testing. If possible, move the data you will use for testing backups
to its own drive or logical partition (not a mirrored drive), and defragment the
drive before you begin performance testing. For testing restores, start with an
empty disk drive or a recently defragmented disk drive with ample empty space.
This will reduce the impact of disk fragmentation on the NetBackup
performance test and yield more consistent results between tests.
Similarly, for testing backups to tape, always start each test run with an empty
piece of media. You can do this by expiring existing images for that piece of
media through the Catalog node of the NetBackup Administration Console, or by
Evaluating performance
There are two primary locations from which to obtain NetBackup data
throughput statistics: the NetBackup Activity Monitor and the NetBackup All
Log Entries report. The choice of which location to use is determined by the type
of NetBackup operation you are measuring: non-multiplexed backup, restore, or
multiplexed backup.
You can obtain statistics for all three types of operations from the NetBackup All
Log Entries report. You can obtain statistics for non-multiplexed backup or
restore operations from the NetBackup Activity Monitor. For multiplexed
backup operations, you can obtain the overall statistics from the All Log Entries
report after all the individual backup operations which are part of the
multiplexed backup are complete. In this case, the statistics available in the
Activity Monitors for each of the individual backup operations are relative only
to that operation, and do not reflect the actual total data throughput to the tape
drive.
There may be small differences between the statistics available from these two
locations due to slight differences in rounding techniques between the entries in
the Activity Monitor and the entries in the All Logs report. For a given type of
operation, choose either the Activity Monitor or the All Log Entries report and
consistently record your statistics only from that location. In both the Activity
Monitor and the All Logs report, the data-streaming speed is reported in
kilobytes per second. If a backup or restore is repeated, the reported speed can
vary between repetitions depending on many factors, including the availability
of system resources and system utilization, but the reported speed can be used
to assess the performance of the data-streaming process.
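Because the sizing formulas earlier in this guide work in gigabytes per hour while these reports show kilobytes per second, a simple conversion is useful when comparing measured throughput against your required rate. A minimal sketch:

# Convert a reported KB/second rate to GB/hour for comparison with the
# required data transfer rates calculated in the capacity planning chapter.
def kb_per_sec_to_gb_per_hour(kb_per_sec):
    return kb_per_sec * 3600 / (1024 * 1024)

print(f"{kb_per_sec_to_gb_per_hour(10000):.1f} GB/hour")   # 10,000 KB/second is about 34.3 GB/hour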
The statistics from the NetBackup error logs show the actual amount of time
spent reading and writing data to and from tape. This does not include time
spent mounting and positioning the tape. Cross-referencing the information
from the error logs with data from the bpbkar log on the NetBackup client
(showing the end-to-end elapsed time of the entire process) indicates how much
time was spent on operations unrelated to reading and writing to and from the
tape.
To evaluate performance through the NetBackup activity monitor
1
View the log details for the job by selecting the Actions > Details menu
option, or by double-clicking on the entry for the job. Select the Detailed
Status tab.
Start Time/End Time: These fields show the time window during which
the backup or restore job took place.
Elapsed Time: This field shows the total elapsed time from when the job
was initiated to job completion and can be used as an indication of total
wall clock time for the operation.
Run the All Log Entries report from the NetBackup reports node in the
NetBackup Administrative Console. Be sure that the Date/Time Range that
you select covers the time period during which the job was run.
Verify that the job completed successfully by searching for an entry such as
the requested operation was successfully completed for a backup, or
successfully read (restore) backup id... for a restore.
The entries in the report provide statistics such as the following:

The Date and Time fields for the entry that marks the start of the backup job show the time at which the backup job started.

The Date and Time fields for the entry that marks the completion of the backup job show the time at which the backup job completed. This value is later than the successfully wrote entry above because it includes extra processing time at the end of the job for tasks such as NetBackup image validation.

The Date and Time fields for the entry that marks the start of a restore show the time at which the restore job started reading from the storage device. (Note that the latter part of the entry is not shown for restores from disk, as it does not apply.)
Additional information
The NetBackup All Log Entries report will also have entries similar to those
described above for other NetBackup operations such as image duplication
operations used to create additional copies of a backup image. Those entries
have a very similar format and may be useful for analyzing the performance of
NetBackup for those operations.
The bptm debug log file for tape backups (or bpdm log file for disk backups) will
contain the entries that are in the All Log Entries report, as well as additional
detail about the operation that may be useful for performance analysis. One
example of this additional detail is the intermediate data throughput rate
message for multiplexed backups, as shown below:
... intermediate after <number> successful, <number> Kbytes at
<number> Kbytes/sec
This message is generated whenever an individual backup job completes that is
part of a multiplexed backup group. In the debug log file for a multiplexed
backup group consisting of three individual backup jobs, for example, there
could be two intermediate status lines, then the final (overall) throughput rate.
For a backup operation, the bpbkar debug log file will also contain additional
detail about the operation that may be useful for performance analysis.
Keep in mind, however, that writing the debug log files during the NetBackup
operation introduces some overhead that would not normally be present in a
production environment. Factor that additional overhead into any calculations
done on data captures while debug log files are in use.
The information in the All Log Entries report is also found in
/usr/openv/netbackup/db/error (UNIX) or
install_path\NetBackup\db\error (Windows).
See the NetBackup Troubleshooting Guide to learn how to set up NetBackup to
write these debug log files.
Evaluating UNIX system components
Turn on the legacy bpbkar log by ensuring that the bpbkar directory exists.
UNIX: /usr/openv/netbackup/logs/bpbkar
Windows: install_path\NetBackup\logs\bpbkar
Check the time it took NetBackup to move the data from the client disk:
UNIX: The start time is the first PrintFile entry in the bpbkar log, the end
time is the entry Client completed sending data for backup, and the
amount of data is given in the entry Total Size.
Windows: Check the bpbkar log for the entry Elapsed time.
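For example, on a UNIX client the following commands sketch one way to enable the legacy bpbkar log and pull the timing entries out of it after the next backup runs. The log file name (shown here as log.*) varies by date and release, so adjust it to match the files actually created on your client:
mkdir -p /usr/openv/netbackup/logs/bpbkar        # ensure the bpbkar log directory exists
grep "PrintFile" /usr/openv/netbackup/logs/bpbkar/log.* | head -1
grep "Client completed sending data for backup" /usr/openv/netbackup/logs/bpbkar/log.*
grep "Total Size" /usr/openv/netbackup/logs/bpbkar/log.*
The first PrintFile entry gives the start time, the completion entry gives the end time, and Total Size gives the amount of data moved.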
To measure disk I/O using the bpdm_dev_null touch file (UNIX only)
For UNIX systems, the procedure below can be useful as a follow-on to the
bpbkar procedure (above). If the bpbkar procedure shows that the disk read
performance is not the bottleneck and does not help isolate the problem, then
the bpdm_dev_null procedure described below may be helpful. If the
bpdm_dev_null procedure shows poor performance, the bottleneck is
somewhere in the data transfer between the bpbkar process on the client and
the bpdm process on the server. The problem may involve the network, or shared
memory (such as not enough buffers, or buffers that are too small). To change
shared memory settings, see Shared memory (number and size of data buffers)
on page 102.
Caution: If not used correctly, the following procedure can lead to data loss.
Touching the bpdm_dev_null file redirects all disk backups to /dev/null, not
just those backups using the storage unit created by this procedure. You should
disable active production policies for the duration of this test and remove the
bpdm_dev_null touch file as soon as this test is complete.
1 Create (touch) the bpdm_dev_null file on the media server.
Note: The bpdm_dev_null file re-directs any backup that uses a disk
storage unit to /dev/null.
2 Create a new disk storage unit, using /tmp or some other directory as the
image directory path.
3 Create a policy that uses this storage unit and run a backup. NetBackup will
create a file in the storage unit directory as if this were a real backup to disk.
This degenerate image file will be zero bytes long.
4 To remove the zero-length file and clear the NetBackup catalog of a backup
that cannot be restored, run this command:
where backupid is the name of the file residing in the storage unit directory.
Evaluating Windows system components
Note: The default scale for the Processor Queue Length may not be equal to 1. Be
sure to read the data correctly. For example, if the default scale is 10x, then a
reading of 40 actually means that only 4 processes are waiting.
Committed Bytes. Committed Bytes displays the size of virtual memory that
has been committed, as opposed to reserved. Committed memory must have
disk storage available or must not require the disk storage because the main
memory is large enough. If the number of Committed Bytes approaches or
exceeds the amount of physical memory, you may encounter issues with
page swapping.
When you monitor disk performance, use the %Disk Time counter for the
PhysicalDisk object to track the percentage of elapsed time that the selected disk
drive is busy servicing read or write requests.
Also monitor the Avg. Disk Queue Length counter: sustained values greater
than 1 (lasting more than a second) indicate that multiple processes are waiting
for the disk to service their requests.
Several techniques may be used to increase disk performance, including:
Check the fragmentation level of the data. A highly fragmented disk limits
throughput levels. Use a disk maintenance utility to defragment the disk.
Determine what type of controller technology is being used to drive the disk.
Consider if a different system would yield better results. See the table
Drive controller data transfer rates on page 21 for throughput rates for
common controllers.
Chapter 8
Overview on page 90
Overview
This chapter contains information on ways to optimize NetBackup. This chapter
is not intended to provide tuning advice for particular systems. If you would like
help fine-tuning your NetBackup installation, please contact Symantec
Consulting Services.
Before examining the factors that affect backup performance, please note that
an important first step is to ensure that your system meets NetBackup's
recommended minimum requirements. Refer to the NetBackup Installation
Guide and NetBackup Release Notes for information about these requirements.
Additionally, Symantec recommends that you have the most recent NetBackup
software patch installed.
Many performance issues can be traced to hardware or other environmental
issues. A basic understanding of the entire data transfer path is essential in
determining the maximum obtainable performance in your environment. Poor
performance is often the result of poor planning, which may stem from
unrealistic expectations of individual components of the data transfer path.
Tuning suggestions:
Use multiplexing.
Multiplexing is a NetBackup option that lets you write multiple data
streams from several clients at once to a single tape drive or several tape
drives. Multiplexing can be used to improve the backup performance of
slow clients, multiple slow networks, and many small backups (such as
incremental backups). Multiplexing reduces the time each job spends
waiting for a device to become available, thereby making the best use of the
transfer rate of your storage devices.
Multiplexing is not recommended when restore speed is of paramount
interest or when your tape drives are slow. To reduce the impact of
multiplexing on restore times, you can improve your restore performance
by reducing the maximum fragment size for the storage units. If the
fragment size is small, so that the backup image is contained in several
fragments, a NetBackup restore can quickly skip to the specific fragment
containing the file to be restored. In contrast, if the fragment size is large
enough to contain the entire image, the NetBackup restore starts at the very
beginning of the image and reads through the image until it finds the
desired file.
Multiplexed backups can be de-multiplexed to improve restore performance
by using bpduplicate to move fragmented images to a sequential image
on a new tape.
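For example, a duplication that copies a multiplexed image to a non-multiplexed copy might look like the following sketch. The backup ID and storage unit name are placeholders, and the options should be verified against the bpduplicate command reference for your release:
/usr/openv/netbackup/bin/admincmd/bpduplicate -backupid client1_1122334455 -dstunit non_mpx_stu
If the duplicate copy is then made the primary copy, subsequent restores read the sequential image instead of the multiplexed one.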
Convert large clients into media servers to decrease backup times and
reduce network traffic.
Any machine with locally-attached drives can be used as a media server to
back up itself or other systems. By converting large client systems into
media servers, your backup data no longer travels over the network (except
for catalog data), and you get the fastest transfer speeds afforded by
locally-attached devices. Another benefit of media servers is that you can
use them to balance the load of backing up other clients for your NetBackup
master. A media server can back up clients on a network where it has a local
connection, thus saving network traffic for a master that might have to go
over routers to communicate with those clients. A special case of a media
server is a SAN Media Server, which is a NetBackup media server that backs
up itself only and comes at a lower cost than a regular media server.
Bandwidth limiting
The bandwidth limiting feature lets you restrict the network bandwidth
consumed by one or more NetBackup clients on a network. The bandwidth
setting appears under Host Properties > Master Servers, Properties. The
actual limiting occurs on the client side of the backup connection. This
feature only restricts bandwidth during backups. Restores are unaffected.
When a backup starts, NetBackup reads the bandwidth limit configuration
and then determines the appropriate bandwidth value and passes it to the
client. As the number of active backups increases or decreases on a subnet,
NetBackup dynamically adjusts the bandwidth limiting on that subnet. If
additional backups are started, the NetBackup server instructs the other
NetBackup clients running on that subnet to decrease their bandwidth
setting. Similarly, bandwidth per client is increased if the number of clients
decreases. Changes to the bandwidth value occur on a periodic basis rather
than as backups stop and start. This characteristic can reduce the number
of bandwidth value changes.
Load balancing
NetBackup provides ways to balance loads between servers, clients, policies,
and devices. Note that these settings may interact with each other:
compensating for one issue can cause another. The best approach is to use
the defaults unless you anticipate or encounter an issue.
Adjust the backup load on the server during specific time periods only.
Reconfigure schedules that execute during the time periods of interest,
so they use storage units on servers that can handle the load (assuming
you are using media servers).
Limit the number of devices that NetBackup can use concurrently for
each policy or limit the number of drives per storage unit. Another
approach is to exclude some of your devices from Media Manager
control.
The type of controller technology being used to drive the disk. Consider if a
different system would yield better results.
Job tracker. If the NetBackup Client Job Tracker is running on the client,
then NetBackup will gather an estimate of the data to be backed up prior to
the start of a backup job. Gathering this estimate will affect the startup
time, and therefore the data throughput rate, because no data is being
written to the NetBackup server during this estimation phase. You may
wish to avoid running the NetBackup Client Job Tracker to avoid this delay.
Client location. You may wish to consider adding a locally attached tape
device to the client and changing the client to a NetBackup media server if
you have a substantial amount of data on the client. For example, backing
up 100 gigabytes of data to a locally attached tape drive will generally be
more efficient than backing up the same amount of data across a network
connection to a NetBackup server. Of course, there are many variables to
consider, such as the bandwidth available on the network, that will affect
the decision to back up the data to a locally attached tape drive as opposed
to moving the data across the network.
Network interface cards (NICs) for NetBackup servers and clients must be
set to full-duplex.
Both ends of each network cable (the NIC card and the switch) must be set
identically as to speed and mode (both NIC and switch must be at full duplex).
If auto-negotiate is being used, make sure that both ends of the connection
are set at the same mode and speed. The higher the speed, the better.
In addition to NICs and switches, all routers must be set to full duplex.
Network load
There are two key considerations to monitor when you evaluate remote backup
performance:
Small bursts of high network traffic for short durations will have some negative
impact on the data throughput rate. However, if the network traffic remains
consistently high for a significant amount of time during the operation, the
network component of the NetBackup data transfer path will very likely be the
bottleneck. Always try to schedule backups during times when network traffic is
low. If your network is heavily loaded, you may wish to implement a secondary
network which can be dedicated to backup and restore traffic.
For tape: because the default value for the NetBackup data buffer size is 65536
bytes, this formula (network buffer size = (data buffer size x 4) + 1024 for
backups, and (data buffer size x 2) + 1024 for restores) results in a default
NetBackup network buffer size of 263168 bytes for backups and 132096 bytes
for restores.
For disk: because the default value for the NetBackup data buffer size is 262144
bytes, the same formula results in a default NetBackup network buffer size of
1049600 bytes for backups and 525312 bytes for restores.
To set this parameter, create the following files:
UNIX
/usr/openv/netbackup/NET_BUFFER_SZ
/usr/openv/netbackup/NET_BUFFER_SZ_REST
Windows
install_path\NetBackup\NET_BUFFER_SZ
install_path\NetBackup\NET_BUFFER_SZ_REST
These files contain a single integer specifying the network buffer size in bytes.
For example, to use a network buffer size of 64 Kilobytes, the file would contain
65536. If the files contain the integer 0 (zero), the default value for the network
buffer size is used.
If the NET_BUFFER_SZ file exists, and the NET_BUFFER_SZ_REST file does not
exist, the contents of NET_BUFFER_SZ will specify the network buffer size for
both backup and restores.
If the NET_BUFFER_SZ_REST file exists, its contents will specify the network
buffer size for restores.
If both files exist, the NET_BUFFER_SZ file will specify the network buffer size
for backups, and the NET_BUFFER_SZ_REST file will specify the network buffer
size for restores.
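For example, to try a 256-Kilobyte network buffer for backups on a UNIX media server while leaving restores at the default, you could create only the NET_BUFFER_SZ file (the value shown is illustrative; test backups and restores before adopting it):
echo "262144" > /usr/openv/netbackup/NET_BUFFER_SZ
cat /usr/openv/netbackup/NET_BUFFER_SZ        # verify the value
Removing the file returns the network buffer size to its default behavior.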
Because local backup or restore jobs on the media server do not send data over
the network, this parameter has no effect on those operations. It is used only by
the NetBackup media server processes which read from or write to the network,
specifically, the bptm or bpdm processes. It is not used by any other NetBackup
processes on a master server, media server, or client.
This parameter is the counterpart on the media server to the Communications
Buffer Size parameter on the client, which is described below. The network
buffer sizes are not required to be the same on all of your NetBackup systems for
NetBackup to function properly; however, setting the Network Buffer Size
parameter on the media server and the Communications Buffer Size parameter
on the client (see below) to the same value has significantly improved the
throughput of the network component of the NetBackup data transfer path in
some installations.
Similarly, the network buffer size does not have a direct relationship with the
NetBackup data buffer size (described under Shared memory (number and size
of data buffers) on page 102).
Changing this value can affect backup and restore operations on the media
servers. Test backups and restores to ensure that the change you make does not
negatively impact performance.
In the server's bp.conf file, add one entry for each network interface:
SERVER=server-neta
SERVER=server-netb
SERVER=server-netc
It is okay for a client to have an entry for a server that is not currently on
the same network.
Table 8-1 Default number of shared data buffers
[The table in this area of the original document cannot be fully reconstructed
from this copy. It lists the default number of shared data buffers used for each
NetBackup operation (non-multiplexed backup, multiplexed backup, verify,
import, and duplicate) on UNIX and Windows; the surviving values are 16 and
12 buffers, depending on the operation.]
On Windows, a single tape I/O operation is performed for each shared data
buffer. Therefore, this size must not exceed the maximum block size for the tape
device or operating system. For Windows systems, the maximum block size is
generally 64K, although in some cases customers are using a larger value
successfully. For this reason, the terms tape block size and shared data buffer
size are synonymous in this context.
UNIX
For tape
/usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
/usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_RESTORE
For disk
/usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_DISK
Windows
For tape
<install_path>\NetBackup\db\config\NUMBER_DATA_BUFFERS
<install_path>\NetBackup\db\config\NUMBER_DATA_BUFFERS_RESTORE
For disk
<install_path>\NetBackup\db\config\NUMBER_DATA_BUFFERS_DISK
These files contain a single integer specifying the number of shared data buffers
NetBackup will use. For backups (in the NUMBER_DATA_BUFFERS and
NUMBER_DATA_BUFFERS_DISK files), the integer's value must be a power of
2.
If the NUMBER_DATA_BUFFERS file exists, its contents will be used to
determine the number of shared data buffers to be used for multiplexed and
non-multiplexed backups.
NUMBER_DATA_BUFFERS_DISK allows for a different value when doing
backup to disk instead of tape. If NUMBER_DATA_BUFFERS exists but
NUMBER_DATA_BUFFERS_DISK does not, NUMBER_DATA_BUFFERS applies
to all backups. If both files exist, NUMBER_DATA_BUFFERS applies to tape
backups and NUMBER_DATA_BUFFERS_DISK applies to disk backups. If only
NUMBER_DATA_BUFFERS_DISK is present, it applies to disk backups only.
If the NUMBER_DATA_BUFFERS_RESTORE file exists, its contents will be used
to determine the number of shared data buffers to be used for multiplexed
restores from tape.
The NetBackup daemons do not have to be restarted for the new values to be
used. Each time a new job starts, bptm checks the configuration file and adjusts
its behavior.
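For example, the following command on a UNIX media server raises the number of shared data buffers for tape backups to 32 (a power of 2, as required) and leaves disk backups unchanged. The value and the /usr/openv/netbackup/db/config location mirror the touch-file conventions above and are illustrative only:
echo "32" > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
No daemon restart is needed; bptm re-reads this file each time a new job starts.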
Changing the size of shared data buffers
UNIX
For tape
/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
For disk
/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_DISK
Windows
For tape
install_path\NetBackup\db\config\SIZE_DATA_BUFFERS
For disk
install_path\NetBackup\db\config\SIZE_DATA_BUFFERS_DISK
This file contains a single integer specifying the size of each shared data buffer
in bytes. The integer must be a multiple of 1024 (a multiple of 32 kilobytes is
recommended); see the table below for typical values. For example, to use a
shared data buffer size of 64 Kilobytes, the file would contain the integer 65536.
The NetBackup daemons do not have to be restarted for the parameter values to
be used. Each time a new job starts, bptm checks the configuration file and
adjusts its behavior.
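For example, to try 256-Kilobyte shared data buffers for tape on a UNIX media server (a value from the table below), first confirm that the tape drive and operating system support a 256-Kilobyte block size, then:
echo "262144" > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
Run a test backup, and verify in the bptm debug log that the io_init line reports the new data buffer size.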
Analyze the buffer usage by checking the bptm debug log before and after
altering the size of buffer parameters.
Table 8-3
Kilobytes per data buffer    SIZE_DATA_BUFFERS value (bytes)
32                           32768
64                           65536
96                           98304
128                          131072
160                          163840
192                          196608
224                          229376
256                          262144
IMPORTANT: Because the data buffer size equals the tape I/O size, the value
specified in SIZE_DATA_BUFFERS must not exceed the maximum tape I/O size
supported by the tape drive or operating system. This is usually 256 or 128
Kilobytes. Check your operating system and hardware documentation for the
maximum values. Take into consideration the total system resources and the
entire network. The Maximum Transmission Unit (MTU) for the LAN network
may also have to be changed. NetBackup expects the value for NET_BUFFER_SZ
and SIZE_DATA_BUFFERS to be in bytes, so in order to use 32k, use 32768 (32 x
1024).
Note: Some Windows tape devices are not able to write with block sizes higher
than 65536 (64 Kilobytes). Backups created on a UNIX media server with
SIZE_DATA_BUFFERS set to more than 65536 cannot be read by some Windows
media servers. This means that the Windows media server would not be able to
import or restore any images from media that were written with
SIZE_DATA_BUFFERS greater than 65536.
Note: The size of the shared data buffers used for a restore operation is
determined by the size of the shared data buffers in use at the time the
backup was written. This file is not used by restores.
Run a backup.
or
15:26:01 [21544] <2> mpx_setup_restore_shm: using 12 data
buffers, buffer size is 65536
When you change these settings, take into consideration the total system
resources and the entire network. The Maximum Transmission Unit (MTU)
for the local area network (LAN) may also have to be changed.
UNIX
/usr/openv/netbackup/db/config/PARENT_DELAY
/usr/openv/netbackup/db/config/CHILD_DELAY
Windows
<install_path>\NetBackup\db\config\PARENT_DELAY
<install_path>\NetBackup\db\config\CHILD_DELAY
Achieving a good balance between the data producer and the data consumer
processes is an important factor in achieving optimal performance from the
NetBackup server component of the NetBackup data transfer path.
Producer - consumer relationship during a backup
[Diagram: the NetBackup client sends data across the network to the bptm child
process (the data producer), which fills the shared buffers; the bptm parent
process (the data consumer) empties the buffers and writes the data to tape.]
Local clients
When the NetBackup media server and the NetBackup client are part of the
same system, the NetBackup client is referred to as a local client.
Remote clients
When the NetBackup media server and the NetBackup client are part of two
different systems, the NetBackup client is referred to as a remote client.
Roles of processes during backup and restore operations
Operation         Data Producer                           Data Consumer
Local backup      bpbkar (UNIX) or bpbkar32 (Windows)     bptm
Remote backup     bptm (child)                            bptm (parent)
Local restore     bptm                                    tar (UNIX) or tar32 (Windows)
Remote restore    bptm (parent)                           bptm (child)
If a full buffer is needed by the data consumer but is not available, the data
consumer increments the Wait and Delay counters to indicate that it had to wait
for a full buffer. After a delay, the data consumer will check again for a full
buffer. If a full buffer is still not available, the data consumer increments the
Delay counter to indicate that it had to delay again while waiting for a full
buffer. The data consumer will repeat the delay and full buffer check steps until
a full buffer is available.
Data Producer >> Data Consumer. The data producer has substantially
larger Wait and Delay counter values than the data consumer.
The data consumer is unable to receive data fast enough to keep the data
producer busy. Investigate means to improve the performance of the data
consumer. For a backup operation, check if the data buffer size is
appropriate for the tape drive being used (see below).
If the data producer still has a substantially large value in this case, try
increasing the number of shared data buffers to improve performance (see
below).
Data Producer = Data Consumer (large value). The data producer and the
data consumer have very similar Wait and Delay counter values, but those
values are relatively large.
This may indicate that the data producer and data consumer are regularly
attempting to use the same shared data buffer. Try increasing the number
of shared data buffers to improve performance (see below).
Data Producer = Data Consumer (small value). The data producer and the
data consumer have very similar Wait and Delay counter values, but those
values are relatively small.
This indicates that there is a good balance between the data producer and
data consumer, which should yield good performance from the NetBackup
server component of the NetBackup data transfer path.
Data Producer << Data Consumer. The data producer has substantially
smaller Wait and Delay counter values than the data consumer.
The data producer is unable to deliver data fast enough to keep the data
consumer busy. Investigate ways to improve the performance of the data
producer. For a restore operation, check if the data buffer size (see below) is
appropriate for the tape drive being used.
If the data consumer still has a relatively large value in this case, try
increasing the number of shared data buffers to improve performance (see
below).
The bullets above describe the four basic relationships possible. Of primary
concern is the relationship and the size of the values. Information on
determining substantial versus trivial values appears on the following pages.
The relationship of these values only provides a starting point in the analysis.
Additional investigative work may be needed to positively identify the cause of a
bottleneck within the NetBackup data transfer path.
Determining wait and delay counter values
To determine wait and delay counter values for a local client backup:
1 Activate debug logging by creating these directories:
UNIX
/usr/openv/netbackup/logs/bpbkar
/usr/openv/netbackup/logs/bptm
Windows
install_path\NetBackup\logs\bpbkar
install_path\NetBackup\logs\bptm
2 Look at the log for the data producer (bpbkar on UNIX or bpbkar32 on
Windows) process in:
UNIX
/usr/openv/netbackup/logs/bpbkar
Windows
install_path\NetBackup\logs\bpbkar
The line you are looking for should be similar to the following, and will have
a timestamp corresponding to the completion time of the backup:
... waited 224 times for empty buffer, delayed 254 times
In this example the Wait counter value is 224 and the Delay counter value is
254.
3
Look at the log for the data consumer (bptm) process in:
UNIX
/usr/openv/netbackup/logs/bptm
Windows
install_path\NetBackup\logs\bptm
The line you are looking for should be similar to the following, and will have
a timestamp corresponding to the completion time of the backup:
... waited for full buffer 1 times, delayed 22 times
In this example, the Wait counter value is 1 and the Delay counter value is
22.
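On a UNIX system, a quick way to pull these lines out of the debug logs is to grep for them (the log file names vary by date):
grep "waited" /usr/openv/netbackup/logs/bpbkar/log.*
grep "waited" /usr/openv/netbackup/logs/bptm/log.*
The first command finds the data producer's wait and delay line; the second finds the data consumer's.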
To determine wait and delay counter values for a remote client backup:
1 Activate debug logging by creating the bptm log directory on the media
server:
UNIX
/usr/openv/netbackup/logs/bptm
Windows
install_path\NetBackup\logs\bptm
2 Look at the log for bptm in the bptm log directory created above.
Delays associated with the data producer (bptm child) process will appear as
follows:
... waited for empty buffer 22 times, delayed 151 times, ...
In this example, the Wait counter value is 22 and the Delay counter value is
151.
5
Delays associated with the data consumer (bptm parent) process will appear
as:
... waited for full buffer 12 times, delayed 69 times
In this example the Wait counter value is 12, and the Delay counter value is
69.
To determine wait and delay counter values for a local client restore:
1 Activate debug logging by creating these directories:
UNIX
/usr/openv/netbackup/logs/bptm
/usr/openv/netbackup/logs/tar
Windows
install_path\NetBackup\logs\bptm
install_path\NetBackup\logs\tar
2 Look at the log for the data consumer (tar on UNIX or tar32 on Windows)
process in the tar log directory created above. The line you are looking for
will have a timestamp corresponding to the completion time of the restore.
In this example, the Wait counter value is 27, and the Delay counter value is
79.
3
Look at the log for the data producer (bptm) process in the bptm log
directory created above.
The line you are looking for should be similar to the following, and will have
a timestamp corresponding to the completion time of the restore:
... waited for empty buffer 1 times, delayed 68 times
In this example, the Wait counter value is 1 and the delay counter value is
68.
To determine wait and delay counter values for a remote client restore:
1 Activate debug logging by creating the bptm log directory on the media
server:
UNIX
/usr/openv/netbackup/logs/bptm
Windows
install_path\NetBackup\logs\bptm
2 Look at the log for bptm in the bptm log directory created above.
Delays associated with the data consumer (bptm child) process will appear
as follows:
... waited for full buffer 36 times, delayed 139 times
In this example, the Wait counter value is 36 and the Delay counter value is
139.
5
Delays associated with the data producer (bptm parent) process will appear
as follows:
... waited for empty buffer 95 times, delayed 513 times
In this example the Wait counter value is 95 and the Delay counter value is
513.
Note: When you run multiple tests, you can rename the current log file.
NetBackup will automatically create a new log file, which prevents you from
erroneously reading the wrong set of values.
Deleting the debug log file will not stop NetBackup from generating the debug
logs. You must delete the entire directory. For example, to stop bptm logging,
you must delete the bptm subdirectory. NetBackup will automatically generate
debug logs at the specified verbose setting whenever the directory is detected.
Data buffer size. The size of each shared data buffer can be found on a line
similar to:
... io_init: using 65536 data buffer size
Number of data buffers. The number of shared data buffers may be found on
a line similar to:
... io_init: using 16 data buffers
Parent/child delay values. The values in use for the duration of the parent
and child delays can be found on a line similar to:
... io_init: child delay = 20, parent delay = 30 (milliseconds)
NetBackup Media Server Network Buffer Size. The values in use for the
Network Buffer Size parameter on the media server can be found on lines
similar to these in debug log files:
The receive network buffer is used by the bptm child process to read from
the network during a remote backup.
...setting receive network buffer to 263168 bytes
The send network buffer is used by the bptm child process to write to the
network during a remote restore.
...setting send network buffer to 131072 bytes
See NetBackup media server network buffer size on page 97 for more
information about the Network Buffer Size parameter on the media server.
Suppose you wanted to analyze a local backup in which there was a 30-minute
data transfer duration baselined at 5 Megabytes/second with a total data
transfer of 9,000 Megabytes. Because a local backup is involved, if you refer to
Roles of processes during backup and restore operations on page 110, you can
determine that bpbkar (UNIX) or bpbkar32 (Windows) is the data producer
and bptm is the data consumer.
You would next want to determine the Wait and Delay values for bpbkar (or
bpbkar32) and bptm by following the procedures described in Determining
wait and delay counter values on page 112. For this example, suppose those
values were:
Process                               Wait       Delay
bpbkar (UNIX) / bpbkar32 (Windows)    29364      58033
bptm                                  95         105
Using these values, you can determine that the bpbkar (or bpbkar32) process
is being forced to wait by a bptm process which cannot move data out of the
shared buffer fast enough.
Next, you can determine time lost due to delays by multiplying the Delay
counter value by the parent or child delay value, whichever applies.
In this example, the bpbkar (or bpbkar32) process uses the child delay value,
while the bptm process uses the parent delay value. (The defaults for these
values are 20 for child delay and 30 for parent delay.) The values are specified in
milliseconds. See Parent/child delay values on page 108 for more information
on how to modify these values.
Use the following equations to determine the amount of time lost due to these
delays:
bpbkar (UNIX) / bpbkar32 (Windows): 58033 delays X 0.020 seconds = 1160
seconds (19 minutes 20 seconds)
bptm: 105 delays X 0.030 seconds = approximately 3 seconds
This is useful in determining that the delay duration for the bpbkar (or
bpbkar32) process is significant. If this delay were entirely removed, the
resulting transfer time of 10:40 (total transfer time of 30 minutes minus delay of
19 minutes and 20 seconds) would indicate a throughput value of 14
Megabytes/sec, nearly a threefold increase. This type of performance increase
would warrant expending effort to investigate how the tape drive performance
can be improved.
The number of delays should be interpreted within the context of how much
data was moved. As the amount of data moved increases, the significance
threshold for counter values increases as well.
Again, using the example of a total of 9,000 Megabytes of data being transferred,
assume a 64-Kilobyte buffer size. You can determine the total number of
buffers to be transferred using the following equations:
Number_Kbytes = 9,000 X 1,024 = 9,216,000 Kilobytes
Number_Slots = 9,216,000 / 64 = 144,000
The Wait counter value can now be expressed as a percentage of the total
number of buffers transferred:
bpbkar (UNIX) / bpbkar32 (Windows): 29364 / 144,000 = 20.39%
bptm: 95 / 144,000 = 0.07%
In this example, in the 20 percent of cases where the bpbkar (or bpbkar32)
process needed an empty shared data buffer, that shared data buffer has not yet
been emptied by the bptm process. A value this large indicates a serious issue,
and the reason the buffers are not being emptied quickly enough should be
investigated.
The ratio of the Delay counter value to the Wait counter value can also be
computed:
bpbkar (UNIX) / bpbkar32 (Windows): 58033 / 29364 = 1.98
In this example, on average the bpbkar (or bpbkar32) process had to delay
twice for each wait condition that was encountered. If this ratio is substantially
large, you may wish to consider increasing the parent or child delay value,
whichever one applies, to avoid the unnecessary overhead of checking for a
shared data buffer in the correct state too often. Conversely, if this ratio is close
to 1, you may wish to consider reducing the applicable delay value to check more
often and see if that increases your data throughput performance. Keep in mind
that the parent and child delay values are rarely changed in most NetBackup
installations.
The preceding information explains how to determine if the values for Wait and
Delay counters are substantial enough for concern. The Wait and Delay counters
are related to the size of data transfer. A value of 1,000 may be extreme when
only 1 Megabyte of data is being moved. The same value may indicate a
well-tuned system when gigabytes of data are being moved. The final analysis
must determine how these counters affect performance by considering such
factors as how much time is being lost and what percentage of time a process is
being forced to delay.
The first number in the message is the number of times bptm waited for a
full buffer, which is the number of times bptm write operations waited for
data from the source. If, using the technique described in the section
Determining wait and delay counter values on page 112, you determine
that the Wait counter indicates a performance issue, then changing the
number of buffers will not help, but adding multiplexing may help.
The first number in the message is the number of times bptm waited for an
empty buffer, which is the number of times bptm experienced data arriving
from the source faster than the data could be written to tape. If, using the
technique described in the section Determining wait and delay counter
values on page 112, you determine that the Wait counter indicates a
performance issue, then reduce the multiplexing factor if you are using
multiplexing. Also, adding more buffers may help.
bptm delays
The bptm debug log contains messages such as,
...waited for empty buffer 1883 times, delayed 14645 times
The second number in the message is the number of times bptm waited for
an available buffer. If, using the technique described in the section
Determining wait and delay counter values on page 112, you determine
that the Delay counter indicates a performance issue, this will need
investigation. Each delay interval is 30 ms.
During restores, newer, faster devices can handle large fragments well.
Slower devices, especially if they do not use fast locate block positioning,
will restore individual files faster if fragment size is smaller. (In some cases,
SCSI fast tape positioning can improve restore performance.)
Note: Unless you have particular reasons for creating smaller fragments (such
as when restoring a few individual files, restoring from multiplexed backups, or
restoring from older equipment), larger fragment sizes are likely to yield better
overall performance.
Example 1:
Assume you are backing up four streams to a multiplexed tape, and each stream
is a single, 1 gigabyte file and a default maximum fragment size of 1 TB has been
specified. The resultant backup image logically looks like the following. TM
denotes a tape mark, or file mark, that indicates the start of a fragment.
TM <4 gigabytes data> TM
When restoring any one of the 1 gigabyte files, the restore positions to the TM
and then has to read all 4 gigabytes to get the 1 gigabyte file.
If you set the maximum fragment size to 1 gigabyte:
TM <1 gigabyte data> TM <1 gigabyte data> TM <1 gigabyte data> TM <1
gigabyte data> TM
this does not help, since the restore still has to read all four fragments to pull
out the 1 gigabyte of the file being restored.
Example 2:
This is the same as Example 1, but assume four streams are backing up 1
gigabyte worth of /home or C:\. With the maximum fragment size (Reduce
fragment size) set to a default of 1 TB (and assuming all streams are relatively
the same performance), you again end up with:
TM <4 gigabytes data> TM
Restoring /home/file1 or C:\file1 and /home/file2 or C:\file2 from one of the
streams will have to read as much of the 4 gigabytes as necessary to restore all
the data. But, if you set Reduce fragment size to 1 gigabyte, the image looks like
this:
TM <1 gigabyte data> TM <1 gigabyte data> TM <1 gigabyte data> TM <1
gigabyte data> TM
In this case, /home/file1 or C:\file1 starts in the second fragment, and bptm
positions to the second fragment to start the restore of /home/file1 or C:\file1
(this has saved reading 1 gigabyte so far). After /home/file1 is done, if
/home/file2 or C:\file2 is in the third or fourth fragment, the restore can position
to the beginning of that fragment before it starts reading as it looks for the data.
These examples illustrate that whether fragmentation benefits a restore
depends on what the data is, what is being restored, and where in the image the
data is. In Example 2, reducing the fragment size from 1 gigabyte to half a
gigabyte (512 Megabytes) increases the chance the restore can locate by
skipping instead of reading when restoring relatively small amounts of an
image.
NUMBER_DATA_BUFFERS_RESTORE setting
This parameter can help keep other NetBackup processes busy while a
multiplexed tape is positioned during a restore. Increasing this value causes
NetBackup buffers to occupy more physical RAM. This parameter only applies
to multiplexed restores. For more information on this parameter, see Shared
memory (number and size of data buffers) on page 102.
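For example, to set 16 restore buffers on a UNIX media server (an illustrative value; the path assumes the same db/config location used by the other buffer touch files):
echo "16" > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_RESTORE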
Windows:
install_directory\bin\admincmd\bpimage -create_image_list -client client_name
where client_name is the name of the client with many small backup images.
The command creates files (including IMAGE_INFO and IMAGE_FILES) in the
following directory:
UNIX
/usr/openv/netbackup/db/images/client_name
Windows
install_path\NetBackup\db\images\client_name
Do not edit these files, because they contain offsets and byte counts that are
used for seeking to and reading the image information.
Note: These files increase the size of the client directory.
where value is the number that provides the best performance for the system.
The optimum value varies from system to system and may have to be
determined by testing. A suggested starting value is 20. In any case, the value
must not exceed 500 milliseconds, or TCP/IP may break.
Once the optimum value for the system is found, the command for setting the
value can be permanently set in a script under the directory /etc/rc2.d so
that it can be executed at boot time.
When bprd, the request daemon on the master server, receives the first stream
of a multiplexed restore request, it triggers the MPX_RESTORE_DELAY timer to
start counting the configured amount of time. At this point, bprd watches and
waits for related multiplexed jobs from the same client before starting the
overall job. If another associated stream is received within the timeout period, it
is added to the total job, and the timer is reset to the MPX_RESTORE_DELAY
period. Once the timeout has been reached without an additional stream being
received by bprd, the timeout window closes, all associated restore requests are
sent to bptm, and a tape is mounted. If any associated restore requests are
received after this event, they are queued to wait until the tape that is now In
Use is returned to an idle state.
If MPX_RESTORE_DELAY is not set high enough, NetBackup may need to mount
and read the same tape multiple times to collect all of the header information
necessary for the restore. Ideally, NetBackup would read a
multiplexed tape, collecting all of the header information it needs, with a single
pass of the tape, thus minimizing the amount of time to restore.
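For example, to raise the delay from the default of 30 seconds to 90 seconds, an entry such as the following could be added to the bp.conf file on the master server (90 is an illustrative value; choose a number based on how far apart the related restore requests actually arrive, as in the Oracle example below):
MPX_RESTORE_DELAY = 90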
Example (Oracle):
Suppose that MPX_RESTORE_DELAY is not set in the bp.conf file, so its value is
the default of 30 seconds. Suppose also that you initiate a restore from an Oracle
RMAN backup that was backed up using 4 channels or 4 streams, and you use
the same number of channels to restore.
RMAN passes NetBackup a specific data request, telling NetBackup what
information it needs to start and complete the restore. The first request is
passed and received by NetBackup in 29 seconds, causing the
MPX_RESTORE_DELAY timer to be reset. The next request is passed and
received by NetBackup in 22 seconds, so again the timer is reset. The third
request is received 25 seconds later, resetting the timer a third time, but the
fourth request is received 31 seconds after the third. Since the fourth request
was not received within the restore delay interval, NetBackup only starts three
of the four restores. Instead of reading from the tape once, NetBackup queues
the fourth restore request until the previous three requests are completed. Since
all of the multiplexed images are on the same tape, NetBackup mounts, rewinds,
and reads the entire tape again to collect the multiplexed images for the fourth
restore request.
Note that in addition to NetBackup's reading the tape twice, RMAN waits to
receive all the necessary header information before it begins the restore.
If MPX_RESTORE_DELAY had been larger than 30 seconds, NetBackup would
have received all four restore requests within the restore delay windows and
collected all the necessary header information with one pass of the tape. Oracle
would have started the restore after this one tape pass, improving the restore
performance significantly.
Media positioning
When a backup or restore is performed, the storage device must position the
tape so that the data is over the read/write head. Depending on the location of
the data and the overall performance of the media device, this can take a
significant amount of time. When you conduct performance analysis with media
containing multiple images, it is important to account for the time lag that
occurs before the data transfer starts.
Tape streaming
If a tape device is being used at its most efficient speed, it is said to be streaming
the data onto the tape. Generally speaking, if a tape device is streaming, there
will be little physical stopping and starting of the media. Instead the media will
be constantly spinning within the tape drive. If the tape device is not being used
at its most efficient speed, it may continually start and stop the media from
spinning. This behavior is the opposite of tape streaming and usually results in a
poor data throughput rate.
Data compression
Most tape devices support some form of data compression within the tape
device itself. Compressible data (such as text files) yields a higher data
throughput rate than non-compressible data, if the tape device supports
hardware data compression.
Tape devices typically come with two performance rates: maximum throughput
and nominal throughput. Maximum throughput is based on how fast
compressible data can be written to the tape drive when hardware compression
is enabled in the drive. Nominal throughput refers to rates achievable with
non-compressible data.
Note: Tape drive data compression cannot be set by NetBackup. Follow the
instructions provided with your OS and tape drive to be sure data compression is
set correctly.
In general, tape drive data compression is preferable to client (software)
compression such as that available in NetBackup. Client compression may be
desirable in some cases, such as for reducing the amount of data transmitted
across the network for a remote client backup. See Tape versus client
compression on page 133 for more information.
Chapter 9
Figure 9-1 Multiplexing diagram (several clients backed up through one server
to a single tape)
Multi-streaming writes multiple data streams, each to its own tape drive,
unless multiplexing is used.
Figure 9-2 Multistreaming diagram (one server writing each stream to its own
tape drive)
Multiplexing
To use multiplexing effectively, you must understand the implications of
multiplexing on restore times. Multiplexing may decrease overall backup
time when you are backing up large numbers of clients over slow networks,
but it does so at the cost of recovery time. Restores from multiplexed tapes
must pass over all nonapplicable data. This action increases restore times.
When recovery is required, demultiplexing causes delays in the restore
process. This is because NetBackup must do more tape searching to
accomplish the restore.
Restores should be tested, before the need to do a restore arises, to
determine the impact of multiplexing on restore performance.
When you initially set up a new environment, keep the multiplexing factor
low. Typically, a multiplexing factor of four or less does not highly impact
the speed of restores, depending on the type of drive and the type of system.
If the backups do not finish within their assigned window, multiplexing can
be increased to meet the window. However, increasing the multiplexing
factor provides diminishing returns as the number of multiplexing clients
increases. The optimum multiplexing factor is the number of clients needed
to keep the buffers full for a single tape drive.
Set the multiplexing factor to four and do not multistream. Run
benchmarks in this environment. Then, if needed, you can begin to change
the values involved until both the backup and restore window parameters
are met.
Multi-streaming
The NEW_STREAM directive is useful for fine-tuning streams so that no
disk subsystem is under-utilized or over-utilized.
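For example, a policy's Backup Selections list might use NEW_STREAM to split file systems into separate streams (the paths are placeholders; group them so that each stream reads from a different disk or mount point):
NEW_STREAM
/fs1
NEW_STREAM
/fs2
NEW_STREAM
/fs3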
Encryption
When the NetBackup encryption option is enabled, your backups may run
slower. How much slower depends on the throttle point in your backup path. If
the network is the issue, encryption should not hinder performance. If the
network is not the issue, then encryption may slow down the backup.
Note that some local backups actually ran faster with encryption than without
it. In some field test cases, memory utilization has been found to be roughly the
same with and without encryption.
Compression
Two types of compression can be used with NetBackup, client compression
(configured in the NetBackup policy) and tape drive compression (handled by
the device hardware). Some or all of the files may also have been compressed by
other means prior to the backup.
Tape compression offloads the compression task from the client and server.
Avoid using both tape compression and client compression, as this can
actually increase the amount of backed-up data.
On UNIX: client compression reduces the amount of data sent over the
network, but impacts the client. The NetBackup client configuration setting
MEGABYTES_OF_MEMORY may help client performance. It is undesirable
to compress files which are already compressed. If you find that this is
happening with your backups, refer to the NetBackup configuration option
COMPRESS_SUFFIX. Edit this setting through bpsetconfig.
NetBackup Java
For performance improvement, refer to the following sections in the NetBackup
System Administrator's Guide for UNIX and Linux, Volume I: Configuring the
NetBackup-Java Administration Console, and the subsection NetBackup-Java
Performance Improvement Hints. In addition, the NetBackup Release Notes
may contain information about NetBackup Java performance.
Vault
Refer to the Best Practices chapter of the NetBackup Vault System
Administrator's Guide.
On Windows, make sure virus scans are turned off (this may double
performance).
Snap a mirror (such as with the FlashSnap method in Advanced Client) and
back that up as a raw partition. This does not allow individual file restore
from tape.
Make sure the NetBackup buffer size is the same size on both the servers
and clients.
Run the following bpbkar throughput test on the client with Windows:
C:\Veritas\Netbackup\bin\bpbkar32 -nocont > NUL 2> temp.f
(for example, C:\Veritas\Netbackup\bin\bpbkar32 -nocont c:\
> NUL 2> temp.f)
Turn off NetBackup Client Job Tracker if the client is a system server.
Regularly review the patch announcements for every server OS. Install
patches that affect TCP/IP functions, such as correcting out-of-sequence
delivery of packets.
FlashBackup
If using advanced client FlashBackup with a copy-on-write snapshot
method
If you are using the FlashBackup feature of Advanced Client with a
copy-on-write method such as nbu_snap, assign the snapshot cache device to a
separate hard drive. This will improve performance by reducing disk contention
and potential head thrashing due to the writing of data to maintain the
snapshot.
You can use the second line of the file to set the tape record write size, also
in bytes. The default is the same size as the read buffer. The first entry on
the second line sets the full backup write buffer size, and the second value sets
the incremental backup write buffer size.
Note: Resizing the read buffer for incremental backups can result in a faster
backup in some cases, and a slower backup in others. The result depends on such
factors as the location of the data to be read, the size of the data to be read
relative to the size of the read buffer, and the read characteristics of the storage
device and the I/O stack. Experimentation may be necessary to achieve the best
setting.
Chapter 10
Note: The critical factors in performance are not software-based. They are
hardware selection and configuration. Hardware has roughly four times the
weight that software has in determining performance.
[Performance hierarchy diagram: Level 5 is host memory; Level 4 is the PCI
bridges, buses, and cards; Level 3 is the fibre channel connections to the arrays;
Level 2 is the RAID controllers and shelf adaptors in each array; Level 1 is the
shelves of drives, along with tape, Ethernet, or other non-disk devices.]
In general, all data going to or coming from disk must pass through host
memory. In the following diagram, a dashed line shows the path that the data
takes through a media server.
Figure 10-4
[Diagram: a dashed line traces the data path through a media server; data enters
through one PCI card, crosses the PCI bus and bridge into host memory, and then
goes back out through another PCI card and fibre channel to the array's RAID
controller, shelf, and drives (or to tape, Ethernet, or another non-disk device).]
The data moves up through the ethernet PCI card at the far right. The card sends
the data across the PCI bus and through the PCI bridge into host memory.
NetBackup then writes this data to the appropriate location. In a disk example,
the data passes through one or more PCI bridges, over one or more PCI buses,
through one or more PCI cards, across one or more fibre channels, and so on.
Sending data through more than one PCI card increases bandwidth by breaking
up the data into large chunks and sending a group of chunks at the same time to
multiple destinations. For example, a write of 1 MB could be split into 2 chunks
going to 2 different arrays at the same time. If the path to each array is x
bandwidth, the aggregate bandwidth will be approximately 2x.
Each level in the Performance Hierarchy diagram represents the transitions
over which data will flow. These transitions have bandwidth limits.
Between each level there are elements that can affect performance as well.
[Diagram detail, Levels 1 and 2: each array's RAID controllers connect through
shelf adaptors to shelves of drives; tape, Ethernet, or other non-disk devices also
attach at this level.]
Larger disk arrays will have more than one internal FC-AL. Shelves may even
support 2 FC-AL so that there will be two paths between the RAID controller and
every shelf, which provides for redundancy and load balancing.
[Diagram detail, Level 3: the fibre channel connections between the host and
each array.]
[Diagram detail, Level 4: a PCI bridge with multiple PCI buses, each bus
supporting one or more PCI cards.]
A typical host will support 2 or more PCI buses, with each bus supporting 1 or
more PCI cards. A bus has a topology similar to FC-AL in that only 2 endpoints
can be communicating at the same time. That is, if there are 4 cards plugged into
a PCI bus, only one of them can be communicating with the host at a given
instant. Multiple PCI buses are implemented to allow multiple data paths to be
communicating at the same time, resulting in aggregate bandwidth gains.
PCI buses have 2 key factors involved in bandwidth potential: the width of the
bus - 32 or 64 bits, and the clock or cycle time of the bus (in Mhz).
As a rule of thumb, a 32-bit bus can transfer 4 bytes per clock and a 64-bit bus
can transfer 8 bytes per clock. Most modern PCI buses support both 64-bit and
32-bit cards. Currently PCI buses are available in 4 clock rates:
33 Mhz
66 Mhz
100 Mhz
133 Mhz
A drive has sequential access bandwidth and average latency times for seek
and rotational delays.
Drives perform optimally when doing sequential I/O to disk. Non-sequential
I/O forces movement of the disk head (that is, seek and rotational latency).
A RAID controller has cache memory of varying sizes. The controller also
does the parity calculations for RAID-5. Better controllers have this
calculation (called XOR) in hardware which makes it faster. If there is no
hardware-assisted calculation, the controller processor must perform it,
and controller processors are not usually high performance.
A PCI card can be limited either by the speed supported for the port(s) or the
clock rate to the PCI bus.
Memory can be a limit if there is other intensive non-I/O activity in the system.
Note that the host processor(s) (CPUs) do not appear in the Performance
hierarchy diagram on page 140.
While CPU performance is obviously a contributor to all performance, it is
generally not the bottleneck in most modern systems for I/O intensive
workloads, because there is very little work done at that level. The CPU must
execute a read operation and a write operation, but those operations do not take
up much bandwidth. An exception is when older gigabit ethernet card(s) are
involved, because the CPU has to do more of the overhead of network transfers.
Example 1
A general hardware configuration could have dual 2-gigabit fibre channel ports
on a single PCI card. In such a case, the following is true:
For maximum performance, the card must be plugged into at least a 66 Mhz
PCI slot.
No other cards on that bus should need to transfer data at the same time.
That single card will saturate the PCI bus.
Putting 2 of these cards (4 ports total) onto the same bus and expecting
them to aggregate to 800 MB/second will never work unless the bus and
cards are 133 Mhz.
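The arithmetic behind this example, using the rule of thumb above (approximate, rounded figures):
64-bit bus at 66 Mhz = 8 bytes X 66,000,000 = approximately 528 MB/second
One dual-port 2-gigabit fibre channel card = 2 X approximately 200 MB/second = approximately 400 MB/second
Two such cards (4 ports) = approximately 800 MB/second, which requires a 64-bit, 133 Mhz bus (8 bytes X 133,000,000 = approximately 1064 MB/second)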
Example 2
The following more detailed example shows a pyramid of bandwidth potentials
with aggregation capabilities at some points. Suppose you have the following
hardware:
1x Sun V880 server (2x 33 Mhz PCI buses and 1x 66 Mhz PCI bus)
In this case, for maximum backup and restore throughput with clients on the
network, the following is one way to assemble the hardware so that no
constraints limit throughput.
It will completely saturate the 66 Mhz bus, so do not put any other cards on
that bus that need significant I/O at the same time.
Since the disk arrays have only 1-gigabit fibre channel ports, the fibre channel
cards will degrade to 1 gigabit each.
Each card can therefore move approximately 100 MB/second. With four
cards, the total is approximately 400 MB/second.
However, you do not have a single PCI bus available that can support that
400 MB/second, since the 66-Mhz bus is already taken by the ethernet card.
There are two 33-Mhz buses which can each support approximately 200
MB/second. Therefore, you can put 2 of the fibre channel cards on each of
the 2 buses.
Figure 10-5
[Diagram: the example configuration; fibre channel cards on separate PCI buses
connect to two arrays, each with a RAID controller and shelves of drives, while
the Ethernet (or tape or other non-disk) connection occupies the 66 Mhz bus.]
Each shelf in the disk array has 9 drives because it uses a RAID 5 group of
8+1 (that is, 8 data disks + 1 parity disk).
The RAID controller in the array uses a stripe unit size when performing I/O
to these drives. Suppose that you know the stripe unit size to be 64KB. This
means that when writing a full stripe (8+1) it will write 64KB to each drive.
The amount of non-parity data is 8 * 64KB, or 512KB. So, internal to the
array, the optimal I/O size is 512KB. This means that crossing Level 3 to the
host PCI card should perform I/O at 512KB.
The diagram shows two separate RAID arrays on two separate PCI buses.
You want both to be performing I/O transfers at the same time.
If each is optimal at 512K, the two arrays together are optimal at 1MB.
You can implement software RAID-0 to make the two independent arrays
look like one logical device. RAID-0 is a plain stripe with no parity. Parity
protects against drive failure, and this configuration already has RAID-5
parity protecting the drives inside the array.
The software RAID-0 is configured for a stripe unit size of 512KB (the I/O
size of each unit) and a stripe width of 2 (1 for each of the arrays).
Since 1MB is the optimum I/O size for the volume (the RAID-0 entity on the
host), that size is used throughout the rest of the I/O stack.
If possible, configure the file system mounted over the volume for 1MB. The
application performing I/O to the file system also uses an I/O size of 1MB. In
NetBackup, I/O sizes are set in the configuration touch file
.../db/config/SIZE_DATA_BUFFERS_DISK. See Changing the size of
shared data buffers on page 105 for more information.
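For example, setting a 1-Megabyte I/O size for disk backups on the UNIX media server in this configuration might look like the following (the value must match the optimum volume I/O size worked out above, and should be confirmed with test backups):
echo "1048576" > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_DISK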
Chapter 11
Message queues
set msgsys:msginfo_msgmax = maximum message size
set msgsys:msginfo_msgmnb = maximum length of a message queue in
bytes. The length of the message queue is the sum of the lengths of all the
messages in the queue.
set msgsys:msginfo_msgmni = number of message queue identifiers
set msgsys:msginfo_msgtql = maximum number of outstanding messages
system-wide that are waiting to be read across all message queues.
Semaphores
set semsys:seminfo_semmap = number of entries in semaphore map
set semsys:seminfo_semmni = maximum number of semaphore identifiers
system-wide
set semsys:seminfo_semmns = number of semaphores system-wide
set semsys:seminfo_semmnu = maximum number of undo structures in
system
set semsys:seminfo_semmsl = maximum number of semaphores per id
Shared memory
set shmsys:shminfo_shmmin = minimum shared memory segment size
set shmsys:shminfo_shmmax = maximum shared memory segment size
set shmsys:shminfo_shmseg = maximum number of shared memory
segments that can be attached to a given process at one time
set shmsys:shminfo_shmmni = maximum number of shared memory
identifiers that the system will support
The ipcs -a command displays system resources and their allocation. It is a useful command when a process is hanging or sleeping, because it shows whether the resources the process needs are available.
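For example (a sample invocation; the output format varies by Solaris release):

# Display System V message queues, shared memory segments, and semaphores,
# along with their current allocation, owners, and permissions:
ipcs -a

Comparing the number of allocated queues and segments against the limits configured in /etc/system shows whether a limit has been reached.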
Example:
The following example tunes the kernel parameters for NetBackup master servers and media servers on a Solaris 8 or 9 system. Symantec provides this information only to assist in kernel tuning for NetBackup. For Solaris 10, see Kernel parameters in Solaris 10 on page 154.
These are recommended minimum values. If /etc/system already contains
any of these entries, use the larger of the existing setting and the setting
provided here. Before modifying /etc/system, use the command
/usr/sbin/sysdef -i to view the current kernel parameters.
After you have changed the settings in /etc/system, reboot the system to
allow the changed settings to take effect. After rebooting, the sysdef command
will display the new settings.
*BEGIN NetBackup recommended minimum settings for a Solaris /etc/system file
*Message queues
set msgsys:msginfo_msgmap=512
set msgsys:msginfo_msgmax=8192
set msgsys:msginfo_msgmnb=65536
set msgsys:msginfo_msgmni=256
set msgsys:msginfo_msgssz=16
set msgsys:msginfo_msgtql=512
set msgsys:msginfo_msgseg=8192
*Semaphores
set semsys:seminfo_semmap=64
set semsys:seminfo_semmni=1024
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmnu=1024
set semsys:seminfo_semmsl=300
set semsys:seminfo_semopm=32
set semsys:seminfo_semume=64
*Shared memory
set shmsys:shminfo_shmmax=16777216
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=220
set shmsys:shminfo_shmseg=100
*END NetBackup recommended minimum settings
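After the reboot, the new limits can be checked against the values above. For example (sample commands; the exact sysdef output format varies by Solaris release):

# Show the System V shared memory and semaphore limits now in effect:
/usr/sbin/sysdef -i | grep -i shm
/usr/sbin/sysdef -i | grep -i sem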
System V Semaphores

Table 11-4
Name         Minimum Value
mesg
msgmap       514
msgmax       8192
msgmnb       65536
msgssz
msgseg       8192
msgtql       512
msgmni       256
sema
semmap       semmni+2
semmni       300
semmns       300
semmnu       300
semume       64
semvmx       32767
shmem
shmmni       300
shmseg       120
shmmax
desired parameters. Once all the values have been changed, select Actions > Process New Kernel. A warning appears, stating that a reboot is required to move the values into place. After the reboot, use the sysdef command to confirm that the correct values are in place.
Caution: Any change to the kernel requires a reboot to move the new kernel into place. Do not change the parameters unless a system reboot can be performed; otherwise, the changes will not be saved.
Enter a value from 16 to 255 (0x10 to 0xFF hex). A value of 255 (0xFF) enables the maximum 1-Megabyte transfer size. Setting a value higher than 255 reverts to the default of 64-Kilobyte transfers. The default value is 33 (0x21).
Click OK.
Exit the Registry Editor, then shut down and reboot the system.
The key setting here is the so-called SGList (scatter/gather list): the number of memory pages that can be scattered or gathered (that is, read or written) in one DMA transfer. For the QLA2200, set the MaximumSGList parameter to 0xFF (or to 0x40 for 256KB) and you can then set 256KB buffer sizes for NetBackup. Use extreme caution when modifying this registry value, and always contact the vendor of the SCSI/fibre card first to determine the maximum value that the particular card can support. A command-line sketch of this registry change appears below.
The same should be possible for other HBAs as well, especially fiber cards.
The default for JNI fiber cards using driver version 1.16 is actually 0x80
(512Kb or 128 pages). The default for the Emulex LP8000 is 0x81 (513Kb or
129 pages).
Note that for this approach to work, the HBA must install its own SCSI miniport driver. If it does not, transfers are limited to 64 Kilobytes. This limitation applies to legacy cards, such as old SCSI cards.
In conclusion, the built-in limit on Windows is 1024 Kilobytes, unless you are using the default Microsoft miniport driver for legacy cards. The limitations are entirely a function of the HBA drivers and the limits of the physical devices attached to them.
For example, Quantum DLT7000 drives work best with 128-Kilobyte buffers, and StorageTek 9840 drives with 256-Kilobyte buffers. If these values are increased too far, damage could result to the HBA, to the tape drives, or to any devices in between (fibre bridges and switches, for example).
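As an illustration only, the QLA2200 change described above could also be made with the reg command instead of the Registry Editor. The registry path and the driver service name (ql2200) below are assumptions based on the standard miniport parameter location; verify both, and the maximum value the card supports, with the HBA vendor before applying the change.

rem Sketch only: set MaximumSGList to 255 (0xFF) for an assumed QLA2200
rem miniport service named ql2200, then shut down and reboot the system.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\ql2200\Parameters\Device" /v MaximumSGList /t REG_DWORD /d 255 /f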
Appendix A
Additional resources
This appendix lists additional sources of information.
The following article discusses how and why to design a scalable data
installation: High-Availability SANs, Richard Lyford, FC Focus Magazine,
April 30, 2002.
Index
Symbols
/dev/null 84
/dev/rmt/2cbn 133
/dev/rmt/2n 133
/etc/rc2.d 124
/etc/system 152
/proc/sys (Linux) 157
/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS 105
/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_DISK 105
/usr/openv/netbackup 100
/usr/openv/netbackup/bin/admincmd/bpimage 123
/usr/openv/netbackup/db/error 83
/usr/openv/netbackup/db/images 123
/usr/openv/netbackup/db/media 56
/usr/openv/netbackup/logs 84
/usr/sbin 57
/usr/sbin/devfsadm 58
/usr/sbin/modinfo 57
/usr/sbin/modload 58
/usr/sbin/modunload 57
/usr/sbin/ndd 124
/usr/sbin/tapes 57
Numerics
1000BaseT 21
100BaseT 21
100BaseT cards 31
10BaseT 21
10BaseT cards 31
A
ACS 58
acsd 58
ACSLS communications 58
Activity Monitor 80
additional info on tuning 161
adjusting
backup load 94
data buffer size (Windows) 157
error threshold 55
network communications buffer 97
read buffer size 136
Advanced Client 64, 135, 136
AIX 99
All Log Entries report 79, 81, 83
ALL_LOCAL_DRIVES 29, 50
alphabetical order, storage units 44
ANSI encoding 159
antivirus software 159
arbitrated loop 142
archive bit 159
archiving catalog 47
array RAID controller 142
arrays 95
ATA 21
ATM card 31
auto-negotiate 97
AUTOSENSE 97
available_media report 60
available_media script 54, 60
Avg. Disk Queue Length counter 88
B
backup
catalog 50
database 64
disk or tape 60
environment, dedicated or shared 60
large catalog 47
load adjusting 94
load leveling 93
monopolizing devices 94
user-directed 77
window 64, 92
Backup Central 161
Backup Tries parameter 56
balancing load 93
bandwidth 145
bandwidth limiting 93
Bare Metal Restore (BMR) 135
best practices 66, 68, 71
Bonnie 161
Bonnie++ 161
boot options (Linux) 157
bottlenecks 79, 97
freeware tools for detecting 161
bp.conf file 56, 101
bpbkar 96, 109, 110, 112
bpbkar log 83, 84
bpbkar32 96, 109, 110
bpdm log 83
bpdm_dev_null 85
bpend_notify.bat 95
bpimage 123
bpmount -i 50
bprd 41, 44
bpsetconfig 134
bpstart_notify.bat 95
bptm 108, 109, 110, 112, 115, 118, 120
bptm log 56, 83
buffers 97, 157
and FlashBackup 136
changing 104
changing Windows buffers 158
default number of 102
default size of 103
for network communications 99
shared 102
tape 102
testing 107
wait and delay 108
Windows 157
bus 54
C
cache device (snapshot) 136
calculate
actual data transfer rate required 19
length of backups 18
network transfer rate 21
number of robotic tape slots needed 26
number of tape drives needed 20
number of tapes needed 25
shared memory 103
size of catalog 23
space needed for NBDB database 23, 24
cartridges, storing 68
catalog 123
archiving 47
backup requirements 93
backups not finishing 47
backups, guidelines 46
calculating size of 23
compression 47, 48
large backups 47
managing 46
Checkpoint Restart 122
child delay values 108
CHILD_DELAY file 108
cleaning
robots 68
tape drives 66
tapes 60
client
compression 133
convert to media server 92
tuning performance 95
variables 78
Client Job Tracker 136
clock or cycle time 144
Committed Bytes 87
common system resources 85
communications
buffer 99
process 109
Communications Buffer Size parameter 98, 99
comp.arch.storage 162
COMPRESS_SUFFIX option 134
compression 88, 133
and encryption 134
catalog 47, 48
how to enable 133
tape vs client 133
configuration files (Windows) 159
configuration guidelines 49
CONNECT_OPTIONS 42
controller 88
copy-on-write snapshot 136
counters 108
algorithm 110
determining values of 112
in Windows performance 86
wait and delay 108
CPU 84, 86
and performance 146
load, monitoring 84
utilization 42
CPUs needed per media server component 31
critical policies 50
cumulative-incremental backup 17
custom reports
available media 60
cycle time 144
D
daily_messages log 51
data buffer
overview 102
size 97, 157
data compression 126
Data Consumer 111
data path through server 141
data producer 111
data recovery, planning for 68, 69
data stream and tape efficiency 126
data throughput 78
statistics 79
data transfer path 79, 90
basic tuning 91
data transfer rate
for drive controllers 21
for tape drives 19
required 19
data variables 78
database
backups 64, 130
protect against failure 64
restores 124
databases
list of pre-6.0 databases 24
DB2 restores 124
Deactivate command 77
dedicated backup servers 92
dedicated private networks 92
delay
buffer 108
in starting jobs 40
values, parent/child 108
de-multiplexing 91
designing
master server 27
media server 31
Detailed Status tab 80
devfsadmd daemon 57
device
names 133
reconfiguration 57
devlinks 57
disable TapeAlert 68
disaster recovery 68, 69
testing 43
disk
full 88
increase performance 88
load, monitoring 87
performance, issues affecting 139
speed, measuring 84
staging 44
versus tape 60
Disk Queue Length counter 88
disk speed, measuring 85
Disk Time counter 88
disk-based storage 60
diskperf command 87
disks, adding 95
DNS server 48
down drive 55, 56
drive controllers 21
drive selection 58
drive_error_threshold 55, 56
drives, number per network connection 54
drvconfig 57
E
email list (Veritas-bu) 162
EMM 41, 48, 54, 58, 60, 67
EMM database
derived from pre-6.0 databases 24
EMM server
calculating space needed for 23, 24
moving off master 49
encoding, file 159
encryption 133
and compression 134
error logs 56, 80
error threshold value 54
Ethernet connection 140
evaluating components 84, 85
evaluating performance
Activity Monitor 80
All Log Entries report 81
encryption 133
NetBackup clients 95
NetBackup servers 102
network 77, 96
overview 76
exclude lists 49
F
factors
in choosing disk vs tape 61
in job scheduling 41
failover, storage unit groups 44
fast-locate 120
FBU_READBLKS 137
FC-AL 142, 144
fibre channel 141, 143
arbitrated loop 142
connection 54
file encoding 159
file ID on vxlogview 50
file system space 45
files
backing up many small 135
Windows configuration 159
firewall settings 42
FlashBackup 135, 136
force load parameters (Solaris) 154
forward space filemark 120
fragment size 119, 121
considerations in choosing 119
fragmentation 95
databases 64
level 88
freeware tools 161
freeze media 54, 55, 56
frequency-based tape cleaning 66
frozen volume 55
full backup 61
full duplex 96
H
hardware
components and performance 145
I
I/O operations
scaling 148
I/O overhead 64
IMAGE_FILES 123
IMAGE_INFO 123
IMAGE_LIST 123
improving performance, see tuning
include lists 49
increase disk performance 88
incremental backups 61, 92, 131
index performance 123
insufficient memory 87
interfaces, multiple 101
ipcs -a command 153
Iperf 161
iSCSI 21
iSCSI bus 54
J
Java interface 33, 134
job
delays 41
scheduling 40, 41
scheduling, limiting factors 41
Job Tracker 96
jobs queued 40, 41
K
kernel tuning
Linux 157
Solaris 152
M
mailing lists 162
resources 161
managing
logs 50
the catalog 46
Manual Backup command 77
master server
CPU utilization 42
designing 27
determining number of 29
splitting 48
Maximum concurrent write drives 40
Maximum jobs per client 40
Maximum Jobs Per Client attribute 94
Maximum streams per drive 41
maximum throughput rate 127
Maximum Transmission Unit (MTU) 108
MaximumSGList 157, 158
MDS 58
measuring
disk read speed 84, 85
NetBackup performance 76
media
catalog 55
error threshold 55
not available 54
pools 60
positioning 126
threshold for errors 54
N
namespace.chksum 24
naming conventions 71
policies 71
storage units 72
NBDB database 23, 24
NBDB.log 47
nbemmcmd command 55
nbjm and job delays 41
nbpem 44
nbpem and job delays 40
nbu_snap 136
ndd 124
NET_BUFFER_SZ 98, 99, 106
NET_BUFFER_SZ_REST 98
NetBackup
capacity planning 11
catalog 123
job scheduling 40
news groups 162
restores 119
scheduler 76
NetBackup Client Job Tracker 136
NetBackup Java console 134
NetBackup Operations Manager, see NOM
NetBackup Relational Database 48
NetBackup relational database files 47
NetBackup Vault 134
network
bandwidth limiting 93
buffer size 97
communications buffer 99
connection options 42
connections 96
interface cards (NICs) 96
load 97
multiple interfaces 101
performance 77
private, dedicated 92
tapes drives and 54
traffic 97
transfer rate 21
tuning 96
tuning and servers 92
variables 77
Network Buffer Size parameter 99, 115
NEW_STREAM directive 132
O
OEMSETUP.INF file 158
offload work to additional master 48
on-demand tape cleaning 67
online (hot) catalog backup 50
Oracle 125
restores 124
order of using storage units 44
out-of-sequence delivery of packets 136
P
packets 136
Page Faults 87
parent/child delay values 108
PARENT_DELAY file 108
patches 136
PCI bridge 141, 145, 146
PCI bus 141, 144, 145
PCI card 141, 146
performance
and CPU 146
and disk hardware 139
and hardware issues 145
see also tuning
strategies and considerations 91
performance evaluation 76
Activity Monitor 80
All Log Entries report 81
monitoring CPU 86
monitoring disk load 87
monitoring memory use 84, 87
system components 84, 85
PhysicalDisk object 88
policies
critical 50
guidelines 49
naming conventions 71
Policy Update Interval 40
poolDB 24
pooling conventions 60
port configuration
for robot types 58
position error 56
Process Queue Length 86
Processor Time 86
Q
queued jobs 40, 41
R
RAID 61, 88, 95
controller 142, 146
rate of data transfer 17
raw partition backup 136
read buffer size
adjusting 136
and FlashBackup 136
reconfigure devices 57
recovering data, planning for 68, 69
recovery time 61
reduce CPU overhead 64
Reduce fragment size setting 119
reduce I/O 64
REGEDT32 158
registry 158
reload st driver without rebooting 57
report 83
All Log Entries 81
media 60
resizing read buffer (FlashBackup) 137
restore
and network 124
in mixed environment 124
multiplexed image 121
of database 124
performance of 122
S
SAN 64
SAN fabric 143
SAN Media Server 92
sar command 84
SATA 21
Scatter/Gather list 158
schedule naming, best practices 72
scheduling 40, 76
delays 40
disaster recovery 43
limiting factors 41
scratch pool 60
SCSI bus 54
SCSI connection 54
SCSI/FC connection 126
SDLT drives 25, 31
search performance 123
semaphore (Solaris) 152
Serial ATA (SATA) 142
server
data path through 141
splitting master from EMM 49
tuning 102
variables 76
SGList 158
shared data buffers 102
changing 104
default number of 102
default size of 103
shared memory 100, 102
amount required 103
parameters, HP-UX 155
recommended settings 107
Solaris parameters 152
testing 107
shared-access topology 142, 145
shelf 142
SIZE_DATA_BUFFERS 106, 107, 156, 157
SIZE_DATA_BUFFERS_DISK 105
small files, backup of 135
SMART diagnostic standard 67
snap mirror 135
snapshot cache device 136
snapshots 96
and databases 64
socket
communications 100
parameters (Solaris) 154
software
compression (client) 134
tuning 148
Solaris
clients and FlashBackup read buffer 137
kernel tuning 152
splitting master server 48
SSOhosts 24
st driver
reloading 57
staging, disk 44, 61
Start Window 76
STK drives 25
storage device performance 126
Storage Mountain 161
storage unit 44, 95
groups 44
naming conventions 72
not available 41
Storage Unit dialog 119
storage_units database 24
storing tape cartridges 68
streaming (tape drive) 61, 126
striped volumes (VxVM) 136
striping
block size 136
volumes on disks 92
stunit_groups 24
suspended volume 55
switches 143
synthetic backups 99
System Administration Manager (SAM) 156
system resources 85
system variables, controlling 76
T
Take checkpoints setting 122
tape
block size 103
buffers 102
cartridges, storing 68
cleaning 60, 67
compression 133
efficiency 126
full, frozen, suspended 60
number of tapes needed for backups 25
position error 56
streaming 61, 126
versus disk 60
tape connectivity 54
reload st driver 57
tape drive 126
cleaning 66
number needed 20
number per network connection 54
technologies 66
technology needed 18
transfer rates 19
types 31
tape library
number of tape slots needed 26
using drives 93
TapeAlert 67
tape-based storage 60
tar 110
tar32 110
TCP/IP 136
tcp_deferred_ack_interval 124
testing conditions 76
threshold
error, adjusting 55
for media errors 54
throughput 79
time to data 61
Tiobench 161
TLD robotic control 58
TLM 58
tlmd 58
tools (freeware) 161
topology (hardware) 144
touch files 44, 100
encoding 159
traffic on network 97
transaction log file 47
transfer rate
drive controllers 21
for backups 17
network 21
required 19
tape drives 19
True Image Restore option 23, 135
tuning
additional info 161
basic suggestions 91
buffer sizes 97, 99
client performance 95
data transfer path, overview 90
device performance 126
FlashBackup read buffer 136
Linux kernel 157
network performance 96
restore performance 119, 124
search performance 123
server performance 102
software 148
Solaris kernel 152
U
Ultra-3 SCSI 21
Ultra320 SCSI 21
Unicode encoding 159
unified logging, viewing 50
URL resources 161
Usenet news group 162
user-directed backup 77
V
Vault 134
verbosity level 135
Veritas-bu email list 162
viewing logs 50
virus scans 95, 135
Vision Online 161
vmstat 84
volDB 24
volume
frozen 55
pools 60
suspended 55
vxlogview 50
file ID 50
VxVM striped volumes 136
W
wait/delay counters 108, 109, 112