Veritas NetBackup Backup Planning and Performance Tuning Guide
Release 6.0
N281842
Technical support
For technical assistance, visit http://support.veritas.com and select phone or email support. Use the Knowledge Base search feature to access resources such as TechNotes, product alerts, software downloads, hardware compatibility lists, and our customer email notification service.
Contents
Section I
Chapter 1    NetBackup Capacity Planning
Chapter 2    Master Server Configuration Guidelines
Disk staging .......................................................... 44
File system capacity .................................................. 45
NetBackup catalog strategies .......................................... 45
Catalog backup types .................................................. 46
Guidelines for managing the catalog ................................... 46
Catalog backup not finishing in the available window .................. 47
Catalog compression ................................................... 48
Merging/splitting/moving servers ...................................... 48
Moving the EMM server ................................................. 49
Guidelines for policies ............................................... 49
Include and exclude lists ............................................. 49
Critical policies ..................................................... 50
Schedule frequency .................................................... 50
Managing logs ......................................................... 50
Optimizing the performance of vxlogview ............................... 50
Interpreting legacy error logs ........................................ 51
Chapter 3    Media Server Configuration Guidelines
Chapter 4    Media Configuration Guidelines
Chapter 5    Database Backup Guidelines
Chapter 6    Best Practices
Best practices: new tape drive technologies ........................... 66
Best practices: tape drive cleaning ................................... 66
Best practices: storing tape cartridges ............................... 68
Best practices: recoverability ........................................ 68
Suggestions for data recovery planning ................................ 69
Best practices: naming conventions .................................... 71
Policy names .......................................................... 71
Schedule names ........................................................ 72
Storage unit/storage group names ...................................... 72
Section II    Performance Tuning
Chapter 7     Measuring performance
Overview .............................................................. 76
Controlling system variables for consistent testing conditions ....... 76
Server variables ...................................................... 76
Network variables ..................................................... 77
Client variables ...................................................... 78
Data variables ........................................................ 78
Evaluating performance ................................................ 79
Evaluating UNIX system components ..................................... 84
Monitoring CPU load ................................................... 84
Measuring performance independent of tape or disk output ............. 84
Evaluating Windows system components .................................. 85
Monitoring CPU load ................................................... 86
Monitoring memory use ................................................. 87
Monitoring disk load .................................................. 87
Chapter 8
Chapter 9
Chapter 10
Chapter 11
Appendix A
Additional resources
Performance tuning information at vision online ...................... 161
Performance monitoring utilities ...................................... 161
Freeware tools for bottleneck detection ............................... 161
Mailing list resources ................................................ 162
Index ................................................................. 163
Section I
NetBackup Capacity Planning
Master Server Configuration Guidelines
Media Server Configuration Guidelines
Media Configuration Guidelines
Database Backup Guidelines
Best Practices
Note: For a discussion of tuning factors and general recommendations that may be applied to an existing installation, see Section II.
Chapter 1    NetBackup Capacity Planning

Introduction on page 13
Analyzing your backup requirements on page 14
Designing your backup system on page 16
Questionnaire for capacity planning on page 37
Veritas NetBackup is a high-performance data protection application. Its architecture is designed for large and complex distributed computing environments. NetBackup provides a scalable storage management server that can be configured for network backup, recovery, archival, and file migration services.
This manual is for administrators who want to analyze, evaluate, and tune NetBackup performance. It is intended to answer questions such as the following: How big should the backup server be? How can the NetBackup server be tuned for maximum performance? How many CPUs and tape drives are needed? How can backups be configured to run as fast as possible? How can recovery times be improved? What tools can characterize or measure how NetBackup is handling data?
Note: The most critical factors in performance are based in hardware rather than software. Hardware selection and configuration have roughly four times the weight that software has in determining performance. Although this guide provides some hardware configuration assistance, it is assumed for the most part that your devices are correctly configured.
Disclaimer
It is assumed you are familiar with NetBackup and your applications, operating systems, and hardware. The information in this manual is advisory only, presented in the form of guidelines. Changes to an installation undertaken as a result of the information contained herein should be verified in advance for appropriateness and accuracy. Some of the information contained herein may apply only to certain hardware or operating system architectures. Note: The information in this manual is subject to change.
Introduction
The first step toward accurately estimating your backup requirements is a complete understanding of your environment. Many performance issues can be traced to hardware or environmental issues. A basic understanding of the entire backup data path is important in determining the maximum performance you can expect from your installation. Every backup environment has a bottleneck. It may be a fast bottleneck, but it will determine the maximum performance obtainable with your system.
Example:
Consider the configuration illustrated below. In this environment, backups run slowly (in other words, they do not complete in the scheduled backup window). Total throughput is 8 to 10 megabytes per second. What makes the backups run slowly? How can NetBackup or the environment be configured to increase backup performance in this situation?

Figure 1-1    Dedicated NetBackup server
The explanation is that the LAN, having a speed of 100 megabits per second, has a theoretical throughput of 12.5 megabytes per second. In practice, 100BaseT throughput is unlikely to exceed 70% utilization. Therefore, the best delivered data rate is about 8 megabytes per second to the NetBackup server. The throughput can be even lower when TCP/IP packet headers, TCP window-size constraints, router hops (packet latency for ACK packets delays the sending of the next data packet), host CPU utilization, file system overhead, and other LAN users' activity are taken into account. Since the LAN is the slowest element in the backup path, it is the first place to look in order to increase backup performance in this configuration.
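The arithmetic above can be checked with a few lines of Python. This is only a sketch of the rule-of-thumb calculation quoted here, not a NetBackup utility, and the 70% utilization figure is the estimate from the text rather than a measured value.

    lan_megabits_per_sec = 100           # 100BaseT LAN
    practical_utilization = 0.70         # rule-of-thumb ceiling for 100BaseT, per the text

    theoretical_mb_per_sec = lan_megabits_per_sec / 8                      # 12.5 MB/sec
    delivered_mb_per_sec = theoretical_mb_per_sec * practical_utilization  # about 8.75 MB/sec

    print(f"Theoretical LAN throughput: {theoretical_mb_per_sec:.1f} MB/sec")
    print(f"Best delivered data rate:   {delivered_mb_per_sec:.2f} MB/sec")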
Analyzing your backup requirements

Which systems need to be backed up?
It is important that you identify all systems that need to be backed up and then list each system separately, so that you can identify any that require more resources to back up. Document which machines have local tape drives or libraries attached, and write down the model type of each tape drive or library. In addition, record each host name, operating system and version, database type and version, network technology (for example, ATM or 100BaseT), and location.

How much data will be backed up?
Calculate how much data you need to back up. Include the total disk space on each individual system, including that for databases. Remember to add the space on mirrored disks only once. By calculating the total size for all disks, you can design a system that takes future growth into account. You should also consider the future by estimating how much data you will need to back up in six months to a few years from now.
Do you plan to back up databases or raw partitions?
If you plan to back up databases, identify the database engines, their version numbers, and the method that you will use to back them up. NetBackup can back up several database engines and raw file systems, and databases can be backed up while they are online or offline. To back up any database while it is online, you need a NetBackup database agent for your particular database engine. If you use NetBackup Advanced Client to back up databases using raw partitions, you are actually backing up as much data as the total size of your raw partition. Also, remember to add the size of your database backups to your final calculations when figuring out how much data you need to back up.

Will you be backing up specialty servers such as MS-Exchange or Lotus Notes?
If you plan to back up any specialty servers, identify their types and application release numbers. As previously mentioned, you may need a special NetBackup agent to properly back up your particular servers.
What types of backups are needed and how often should they take place?
To properly size your backup system, you must decide on the type and frequency of your backups. Will you perform daily incremental and weekly full backups? Monthly or bi-weekly full backups? The frequency you choose also determines how many restore opportunities you have.
How much time is available to run each backup?
It is important to know the window of time that is available for each backup. The length of a window dictates several aspects of your backup strategy. For example, you may want a larger window of time to back up multiple, high-capacity servers. Or you may consider the use of advanced NetBackup features such as synthetic backups, a local snapshot method, or FlashBackup.

How long should backups be retained?
An important factor in designing your backup strategy is your policy for backup expiration. The amount of time a backup is kept is known as the retention period. A fairly common policy is to expire your incremental backups after one month and your full backups after six months. With this policy, you can restore any daily file change from the previous month and restore data from full backups for the previous six months. The length of the retention period depends on your own unique requirements and business needs, and perhaps regulatory requirements. Keep in mind, however, that the length of your retention period has a directly proportional effect on the number of tapes you will need and on the size of your NetBackup catalog database. Your NetBackup catalog database keeps track of all the information on all your tapes. The catalog size is closely tied to your retention period and the frequency of your backups. Also, database management daemons and services may become bottlenecks.

If backups are sent off site, how long must they remain off site?
If you plan to send tapes to an off-site location as a disaster recovery option, you must identify which tapes to send off site and how long they remain off site. You might decide to duplicate all your full backups, or only a select few. You might also decide to duplicate certain systems and exclude others. As tapes are sent off site, you will need to buy new tapes to replace them until they are recycled back from off-site storage. If you forget this simple detail, you will run out of tapes when you most need them.

What is your network technology?
If you plan to back up any system over a network, note the network types that you will be using. The next section, Designing your
backup system, explains how to calculate the amount of data you can transfer over those networks in a given time. Depending on the amount of data that you want to back up and the frequency of those backups, you might want to consider installing a private network just for backups.
What new systems will be added to your site in the next six months?
It is important to plan for future growth when designing your backup system. By analyzing the potential growth of your current or future systems, you can ensure that the backup solution you choose accommodates the kind of environment that you will have in the future. Remember to add any resulting growth to your total backup requirements.

Will user-directed backups or restores be allowed?
Allowing users to do their own backups and restores can reduce the time it takes to initiate certain operations. However, user-directed operations can also result in higher support costs and the loss of some flexibility. User-directed operations can monopolize media and tape drives when you most need them. They can also generate more support calls and training issues while the users become familiar with the new backup system. You will need to decide whether allowing user access to some of your backup system's functions is worth the potential costs.

Data type: What are the types of data: text, graphics, database? How compressible is the data? How many files are involved? Will the data be encrypted? (Note that encrypted backups may run slower. See Encryption on page 133 for more information.)

Data location: Is the data local or remote? What are the characteristics of the storage subsystem? What is the exact data path? How busy is the storage subsystem?

Change management: Because hardware and software infrastructure will change over time, is it worth the cost to create an independent test-backup environment to ensure that your production environment will work with the changed components?
Designing your backup system

Note: The ideas and examples that follow are based on standard and ideal calculations. Your numbers will differ based on your particular environment, data, and compression rates.
Calculate the required data transfer rate for your backups on page 17
Calculate how long it will take to back up to tape on page 18
Calculate how many tape drives are needed on page 20
Calculate the required data transfer rate for your network(s) on page 21
Calculate the size of your NetBackup catalog on page 22
Calculate the size of the EMM server on page 23
Calculate how much media is needed for full and incremental backups on page 25
Calculate the size of the tape library needed to store your backups on page 26
Design your master backup server based on your previous findings on page 27
Estimate the number of master servers needed on page 29
Design your media server on page 31
Estimate the number of media servers needed on page 32
Design your NOM server on page 33
Summary on page 36
Calculate the required data transfer rate for your backups

To calculate the rate at which data must move to complete a backup within its window:
Ideal data transfer rate = (Amount of data to back up) / (Backup window)
Note that the nature of the changes matters as well as their volume: if the same 20% of the data changes every day, a cumulative-incremental backup will be much smaller than if a completely different 20% changes every day.
Example: Calculating your ideal data transfer rate during the week

Assumptions:
Amount of data to back up during a full backup = 500 gigabytes
Amount of data to back up during an incremental backup = 20% of a full backup
Daily backup window = 8 hours

Solution 1:
Full backup = 500 gigabytes
Ideal data transfer rate = 500 gigabytes / 8 hours = 62.5 gigabytes/hour

Solution 2:
Incremental backup = 100 gigabytes
Ideal data transfer rate = 100 gigabytes / 8 hours = 12.5 gigabytes/hour

To calculate your ideal data transfer rate during the weekends, divide the amount of data that needs to be backed up by the length of the weekend backup window.
Calculate how long it will take to back up to tape

To estimate how long a backup will take with a given number of tape drives:
Time required to complete the backup = (Amount of data to back up) / ((Number of drives) * (Tape drive transfer rate))

Table 1-1    Tape drive data transfer rates

Drive         Theoretical gigabytes/hour    Theoretical gigabytes/hour    Typical gigabytes/hour
              (no compression)              (2:1 compression)
LTO gen 1     54                            108                           37-65
LTO gen 2     108                           216                           75-130
LTO gen 3     288                           576                           200-345
SDLT 320      57                            115                           40-70
SDLT 600      129                           259                           90-155
STK 9940B     108                           252 (2.33:1)                  75-100
Depending on several factors that can influence the transfer rates of your tape drives, it is possible to obtain higher or lower transfer rates. The solutions in the examples above are approximations of what you can expect. Note also that a backup of encrypted data may take more time. See Encryption on page 133 for more information.
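As a worked illustration of the backup-time formula, the following sketch uses the 500-gigabyte full backup from the earlier example and the low end of the typical LTO gen 3 range from Table 1-1; the drive count and rate chosen here are assumptions, not recommendations.

    data_to_back_up_gb = 500
    number_of_drives = 1
    typical_drive_rate_gb_per_hour = 200   # low end of the typical LTO gen 3 range in Table 1-1

    backup_hours = data_to_back_up_gb / (number_of_drives * typical_drive_rate_gb_per_hour)
    print(f"Estimated backup time: {backup_hours} hours")   # 2.5 hours in this example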
The table below displays the transfer rates for several drive controllers. In practice, your transfer rates might be slower because of the inherent overhead of several variables, including your file system layout, system CPU load, and memory usage.

Table 1-2    Drive controller data transfer rates

Drive controller             Theoretical gigabytes/hour
ATA-5 (ATA/ATAPI-5)          237.6
Wide Ultra 2 SCSI            288
iSCSI                        360
1 Gigabit Fibre Channel      360
SATA/150                     540
Ultra-3 SCSI                 576
2 Gigabit Fibre Channel      720
SATA/300                     1080
Ultra320 SCSI                1152
4 Gigabit Fibre Channel      1440

The next table shows typical transfer rates for common network technologies.

Table 1-3    Network data transfer rates

Network technology           Typical gigabytes/hour
10BaseT (switched)           2.7
100BaseT (switched)          32
1000BaseT (switched)         320
Note: For additional information on the importance of matching network bandwidth to your tape drives, see Network and SCSI/FC bus bandwidth on page 54.
Backing up your data over a faster network (1000BaseT)
Backing up large servers to dedicated tape drives (media servers)
Performing your backups during a longer time window
Performing your backups over faster, dedicated private networks
Solution 2:
Network technology = 1000BaseT (switched)
Typical transfer rate = 320 gigabytes/hour
Using the values from Table 1-3, Network data transfer rates, a single 1000BaseT network has a transfer rate of 320 gigabytes/hour. This network technology will be able to handle your backups with room to spare. Calculating the data transfer rates for your networks can help you identify potential bottlenecks by looking at the transfer rates of your slowest networks. Basic tuning suggestions for the data path on page 91 provides several solutions for dealing with multiple networks and bottlenecks.
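A short sketch of the comparison described above, using the required rate from the full-backup example (62.5 gigabytes/hour) and the typical network rates in Table 1-3:

    required_rate_gb_per_hour = 62.5     # from the 500 GB / 8 hour full-backup example

    typical_network_rates = {            # typical gigabytes/hour, from Table 1-3
        "10BaseT (switched)": 2.7,
        "100BaseT (switched)": 32,
        "1000BaseT (switched)": 320,
    }

    for technology, rate in typical_network_rates.items():
        verdict = "sufficient" if rate >= required_rate_gb_per_hour else "too slow"
        print(f"{technology}: {rate} gigabytes/hour -> {verdict}")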
Calculate the size of your NetBackup catalog

To calculate your NetBackup catalog size, you need to know how much data you will be backing up for full and incremental backups, how often these backups will be performed, and for how long they will be retained. Here are two simple formulas for calculating these values:
Data being tracked = (Amount of data to back up) * (Number of backups) * (Retention period)
NetBackup catalog size = 120 * (number of files)
Note: If you select NetBackup's True Image Restore option, your catalog will be twice as large as a catalog without this option selected. True Image Restore collects the information required to restore directories to their contents at the time of any selected full or incremental backup. Because the additional information that NetBackup collects for incremental backups is the same as that of a full backup, incremental backups take much more disk space when you collect True Image Restore information.
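A minimal sketch of the catalog-size formula. It assumes the 120 in the formula is bytes of catalog data per file backed up (the unit is not stated above), and the file count used here is a hypothetical figure:

    files_tracked = 50_000_000        # hypothetical: files across all retained backups
    true_image_restore = False        # set True if the True Image Restore option is selected

    catalog_bytes = 120 * files_tracked      # assumption: 120 bytes of catalog data per file
    if true_image_restore:
        catalog_bytes *= 2                   # TIR roughly doubles the catalog, per the note above

    print(f"Estimated catalog size: {catalog_bytes / 1000**3:.1f} GB")   # 6.0 GB in this example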
Calculate the size of the EMM server

Note: This space must be included when determining size requirements for a master or media server, depending on where the EMM server is installed.
Space for the NBDB on the EMM server is required in the following two locations:
UNIX
/usr/openv/db/data /usr/openv/db/staging
Windows
install_path\NetBackupDB\data install_path\NetBackupDB\staging
Calculate the required space for the NBDB in each of the two directories, as follows:
Space required = 60 MB + (2 KB * number of volumes configured for EMM)
where EMM is the Enterprise Media Manager, and volumes are NetBackup (EMM) media volumes. Note that 60 MB is the default amount of space needed for the NBDB database used by the EMM server. It includes pre-allocated space for configuration information for devices and storage units.
Note: During NetBackup installation, the install script looks for 60 MB of free space in the above /data directory. If there is insufficient space, the installation fails. The space in /staging is only required when a hot catalog backup is run.
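A small sketch of the NBDB space calculation; the volume count used here is a hypothetical figure:

    volumes_configured_for_emm = 1000        # hypothetical number of EMM media volumes

    space_per_directory_mb = 60 + (2 * volumes_configured_for_emm) / 1024   # 60 MB base + 2 KB per volume
    total_mb = 2 * space_per_directory_mb    # /data plus /staging (staging applies only to hot catalog backups)

    print(f"Space per directory:     {space_per_directory_mb:.1f} MB")
    print(f"Total (data + staging):  {total_mb:.1f} MB")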
Calculate how much media is needed for full and incremental backups
As part of planning your backup strategy, calculate how many tapes will be needed to store and retrieve your backups. The number of tapes that you will need depends on:
The amount of data that you are backing up
The frequency of your backups
The planned retention periods
The capacity of the media used to store your backups
If you expect your site's workload to increase over time, you can ease the pain of future upgrades by planning for expansion. Design your initial backup architecture so it can evolve to support more clients and servers. Invest in the faster, higher-capacity components that will serve your needs beyond the present.
A simple formula for calculating your tape needs is shown here:
Number of tapes = (Amount of data to back up) / (Tape capacity)
To calculate how many tapes will be needed based on all your requirements, the above formula can be expanded to:
Number of tapes = ((Amount of data to back up) * (Frequency of backups) * (Retention period)) / (Tape capacity)

Table 1-4    Tape drive capacities

Drive        Capacity (no compression)    Capacity (2:1 compression)
LTO gen 1    100 gigabytes                200 gigabytes
LTO gen 2    200 gigabytes                400 gigabytes
LTO gen 3    400 gigabytes                800 gigabytes
SDLT 320     160 gigabytes                320 gigabytes
SDLT 600     300 gigabytes                600 gigabytes
STK 9940B    200 gigabytes                400 gigabytes
Example: Calculating how many tapes are needed to store all your backups
Preliminary calculations:
Size of full backups = 500 gigabytes * 4 (per month) * 6 months = 12 terabytes
Size of incremental backups = (20% of 500 gigabytes) * 30 * 1 month = 3 terabytes
Total data tracked = 12 terabytes + 3 terabytes = 15 terabytes

Solution 1:
Tape drive type = LTO gen 1
Tape capacity without compression = 100 gigabytes
Tape capacity with compression = 200 gigabytes
Without compression:
Tapes needed for full backups = 12 terabytes / 100 gigabytes = 120
Tapes needed for incremental backups = 3 terabytes / 100 gigabytes = 30
Total tapes needed = 120 + 30 = 150 tapes
With 2:1 compression:
Tapes needed for full backups = 12 terabytes / 200 gigabytes = 60
Tapes needed for incremental backups = 3 terabytes / 200 gigabytes = 15
Total tapes needed = 60 + 15 = 75 tapes

Solution 2:
Tape drive type = LTO gen 3
Tape capacity without compression = 400 gigabytes
Tape capacity with compression = 800 gigabytes
Without compression:
Tapes needed for full backups = 12 terabytes / 400 gigabytes = 30
Tapes needed for incremental backups = 3 terabytes / 400 gigabytes = 7.5 ~= 8
Total tapes needed = 30 + 8 = 38 tapes
With 2:1 compression:
Tapes needed for full backups = 12 terabytes / 800 gigabytes = 15
Tapes needed for incremental backups = 3 terabytes / 800 gigabytes = 3.75 ~= 4
Total tapes needed = 15 + 4 = 19 tapes
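The example's figures can be reproduced with a short Python sketch. The capacities are the native and 2:1-compressed values used above; fractional tape counts are rounded up as in the example.

    import math

    full_backups_tb = (500 / 1000) * 4 * 6              # 500 GB fulls, 4 per month, retained 6 months = 12 TB
    incremental_backups_tb = (0.20 * 500 / 1000) * 30   # 100 GB daily incrementals, retained 1 month = 3 TB

    def tapes_needed(data_tb, tape_capacity_gb):
        return math.ceil(data_tb * 1000 / tape_capacity_gb)

    for drive, native_gb in (("LTO gen 1", 100), ("LTO gen 3", 400)):
        for label, capacity_gb in (("no compression", native_gb), ("2:1 compression", native_gb * 2)):
            total = tapes_needed(full_backups_tb, capacity_gb) + tapes_needed(incremental_backups_tb, capacity_gb)
            print(f"{drive}, {label}: {total} tapes")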
Calculate the size of the tape library needed to store your backups
To calculate how many robotic library tape slots are needed to store all your backups, take the number of tapes for backup calculated in Calculate how much media is needed for full and incremental backups on page 25 and add tapes for catalog backup and cleaning:
Tape slots needed = (Number of tapes needed for backups) + (Number of tapes needed for catalog backups) + 1 (for a cleaning tape)
Two tapes for catalog backups is typical. Additional tapes may be needed for the following:
If you plan to duplicate tapes or to reserve some media for special (non-backup) use, add those tapes to the above formula.
Add tapes needed for future data growth. Make sure your system has a viable upgrade path as new tape drives become available.
Design your master backup server based on your previous findings

To design your master backup server:
Perform an initial backup requirements analysis, as outlined in the section Analyzing your backup requirements on page 14.
Perform the calculations outlined in the previous steps of the current section.
Designing a backup server becomes a simple task once the basic design constraints are known:
Amount of data to back up
Size of the NetBackup catalog
Number of tape drives needed
Number of networks needed
Given the above, a simple approach to designing your backup server can be outlined as follows:
Acquire a dedicated server
Add tape drives and controllers (for saving your backups)
Add disk drives and controllers (for OS and NetBackup catalog)
Add network cards
Add memory
Add CPUs
Figure 1-2
In some cases, it may not be practical to design a generic server to back up all of your systems. You might have one or several large servers that cannot be backed up over a network within your backup window. In such cases, it is best to back up those servers using their own locally-attached tape drives. Although this section discusses how to design a master backup server, you can still use its information to properly add the necessary tape drives and components to your other servers. The next example shows how to configure a master server using the design elements gathered from the previous sections.
CPUs needed for OS = 1
Total CPUs needed = 1 + 1 + 1 = 3
Memory needed for network cards = 16 megabytes
Memory needed for tape drives = 128 megabytes
Memory needed for OS and NetBackup = 1 gigabyte
Total memory needed = 16 + 128 + 1000 = 1.144 gigabytes
Based on the above, your master server needs 3 CPUs and 1.144 gigabytes of memory. In addition, you need 60 gigabytes of disk space to store your NetBackup catalog, along with the necessary disks and drive controllers to install your operating system and NetBackup (2 gigabytes should be ample for most installations). This server also requires one SCSI card, or another faster adapter, for use with the tape drive (and robot arm), and a single 100BaseT card for network backups.
When designing your backup server solution, begin with a dedicated server for optimum performance. In addition, consult with your server's hardware manufacturer to ensure that the server can handle your other components. In most cases, servers have specific restrictions on the number and mixture of hardware components that can be supported concurrently. Overlooking this last detail can cripple even the best of plans.
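The totals above can be checked with a few lines of Python. The per-component figures come from Tables 1-6 and 1-7; the single 100BaseT card and single tape drive reflect the assumptions of this example.

    cpus_needed = {"network cards": 1, "tape drives": 1, "OS and NetBackup": 1}            # Table 1-6
    memory_needed_mb = {"100BaseT card": 16, "tape drive": 128, "OS and NetBackup": 1000}  # Table 1-7

    total_cpus = sum(cpus_needed.values())                   # 3
    total_memory_gb = sum(memory_needed_mb.values()) / 1000  # 1.144

    print(f"CPUs needed:   {total_cpus}")
    print(f"Memory needed: {total_memory_gb:.3f} gigabytes")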
Estimate the number of master servers needed

When estimating how many master servers you need, consider the following:
The master server must be able to periodically communicate with all its media servers. If there are too many media servers, master server processing may be overloaded.
Consider business-related requirements. For example, if an installation has different applications that require different backup windows, a single master may have to run backups continually, leaving no spare time for catalog cleaning, catalog backup, or maintenance.
If at all possible, design your configuration with one master server per firewall domain. In addition, do not share robotic tape libraries between firewall domains.
As a rule, the number of clients (separate physical hosts) per master server is not a critical factor for NetBackup. Ordinary backup processing performed by each client has little or no impact on the NetBackup server, unless, for instance, the clients all have database extensions or are all trying to run ALL_LOCAL_DRIVES backups at the same time.
Plan your configuration so that it contains no single point of failure. Provide sufficient redundancy to ensure high availability of the backup process. Having more tape drives or media may reduce the number of media servers needed per master server.
Consider limiting the number of media servers handled by a master to the lower end of the estimates in Table 1-5, Number of media servers supported by a master server. Although a well-managed NetBackup environment can handle more media servers than the numbers listed in this table, you may find your backup operations more efficient and manageable with fewer but larger media servers. The variation in the number of media servers per master server for each scenario in the table depends on the number of jobs submitted, multiplexing, multi-streaming, and network capacity.
For information on designing a master server, refer to Design your master backup server based on your previous findings on page 27.
Note: This table provides a rough estimate only, as a guideline for initial planning. Note also that the RAM amounts shown below are for a base NetBackup installation; RAM requirements vary depending on the NetBackup features, options, and agents being used.

Table 1-5    Number of media servers supported by a master server

Master server OS    RAM             Media server configuration                           Media servers per master server
Solaris             2 gigabytes     10 - 20 tape drives in not more than 2 libraries     25 - 40
Solaris             4 gigabytes     10 - 20 tape drives in not more than 2 libraries     35 - 50
Solaris             8+ gigabytes    20 - 40 tape drives in not more than 2 libraries     50 - 70
Windows             2 gigabytes     15 - 30 tape drives in not more than 2 libraries     10+
Windows             4 gigabytes     20 - 40 tape drives in not more than 2 libraries     20+
Windows             8+ gigabytes    40 - 128 tape drives in not more than 2 libraries    50+
Table 1-6    CPUs needed per master/media server component

Component        How many and what kind of component             Number of CPUs per component
Network cards    2-3 100BaseT cards                              1
                 5-7 10BaseT cards
                 1 ATM card
                 1-2 Gigabit Ethernet cards with coprocessor
Tape drives      2 LTO gen 3 drives                              1
                 2-3 SDLT 600 drives
                 2-3 LTO gen 2 drives
                 3-4 LTO gen 1 drives
Table 1-7    Memory needed per master/media server component

Type of component     Memory per component
Network card          16 megabytes
LTO gen 3 drive       256 megabytes
SDLT 600 drive        128 megabytes
LTO gen 2 drive       128 megabytes
LTO gen 1 drive       64 megabytes
OS and NetBackup      1 gigabyte
                      1 or more gigabytes
Multiplexing          8 megabytes * (# streams) * (# drives)
The information in the above tables is a rough estimate only, intended as a guideline for initial planning. In addition to the above media server components, you must also add the necessary disk drives to store the NetBackup catalog and your operating system. The size of the disks needed to store your catalog depends on the calculations explained earlier under Calculate the size of your NetBackup catalog on page 22.
I/O performance is generally more important than CPU performance. Consider CPU, I/O, and memory expandability when choosing a server. Consider how many CPUs are needed (see CPUs needed per master/media server component on page 31). Here are some general guidelines: Experiments (with Sun Microsystems) have shown that a useful, conservative estimate is 5 MHz of CPU capacity per 1 MB/second of data movement in and out of the NetBackup media server. Keep in mind that the operating system and other applications also use the CPU. This estimate is for the power available to NetBackup itself.
Example: A system backing up clients over the network to a local tape drive at the rate of 10 MB/second would need 100 MHz of available CPU power:
50 MHz to move data from the network to the NetBackup server
50 MHz to move data from the NetBackup server to tape
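A sketch of the estimate, assuming the data crosses the media server twice (network in, tape out) as in the example:

    data_rate_mb_per_sec = 10       # backup data rate in the example
    mhz_per_mb_per_sec = 5          # conservative estimate from the Sun experiments above

    cpu_mhz_needed = 2 * data_rate_mb_per_sec * mhz_per_mb_per_sec   # in once, out once = 100 MHz
    print(f"CPU capacity needed for NetBackup alone: {cpu_mhz_needed} MHz")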
Consider how much memory is needed (see Memory needed per master/media server component on page 32). At least 512 megabytes of RAM is recommended if the server is running a Java GUI. NetBackup uses shared memory for local backups. NetBackup buffer usage will affect how much memory is needed. See the Tuning the NetBackup data transfer path chapter for more information on NetBackup buffers. Keep in mind that non-NetBackup processes need memory in addition to what NetBackup needs. A media server moves data from disk (on relevant clients) to storage (usually disk or tape). The server must be carefully sized to maximize throughput. Maximum throughput is attained when the server keeps its tape devices streaming. (For an explanation of streaming, see Tape streaming on page 126.)
Media server factors to consider for sizing include:
Disk storage access time
Adapter (for example, SCSI) speed
Bus (for example, PCI) speed
Tape device speed
Network interface (for example, 100BaseT) speed
Amount of system RAM
Other applications, if the host is non-dedicated
The platform chosen must be able to drive all network interfaces and keep all tape devices streaming.
Design your NOM server

NOM server software does not have to be installed on the same server as the NetBackup 6.0 master server software. Since the NOM server is also a web server, installing NOM on a master server may affect security and performance. (The guidelines provided here assume that the NOM server is a standalone host that is not acting as a master server.)
Symantec recommends that you not install NOM software on a clustered NetBackup master server.
Sizing considerations
The size of your NOM server depends largely on the number of NetBackup objects that NOM manages. The factors in determining NOM server size are:
Number of master servers to manage (the number of media servers is irrelevant)
Number of policies
Number of jobs run per day
Number of media
Number of catalog images
Based on the above factors, the following NOM server components should be sized accordingly:
Disk space (installed NOM binary + NOM database, described below)
Type and number of CPUs
RAM
The next section describes the NOM database and how it affects disk space requirements, followed by overall sizing guidelines for NOM.
NOM database
The Sybase database used by NOM is similar to that used by NetBackup and is installed as part of the NOM server installation.
The disk space needed for the initial installation of NOM depends on the volume of data initially loaded onto the server, based on the following: number of policy data records, number of job data records, number of media data records, and number of catalog image records. The rate of NOM database growth depends on the quantity of data being managed: policy data, job data, media data, and catalog data.
Sizing guidelines
The following guidelines are presented in groups based on the number of objects that your NOM server manages. It is assumed that your NOM server is a standalone host (the host is not acting as a NetBackup master server). Note: Symantec recommends multiple NOM servers for deployments larger than those described in the following guidelines.
Note: The guidelines are intended for basic planning purposes, and do not represent fixed recommendations or restrictions. In the following table, find the installation category that matches your site, based on the number of master servers that your NOM server will manage, the number of jobs per day, and so forth. Then consult the table that follows it for NOM server capacities.

Table 1-8    NOM sizing guidelines: NetBackup installation categories A through D, characterized by the number of master servers, jobs per day, policies, alerts per day, and media managed
Using the NetBackup installation category from Table 1-8 (A, B, C, or D), read across to the recommended NOM server capacities.

Table 1-9    NOM server capacities: recommended OS (Windows or Solaris), CPU type, number of CPUs, RAM (2 to 8 gigabytes, depending on category), and disk space (80 gigabytes) for each installation category
Summary
Using the guidelines provided in this chapter, design a solution that can do a full backup and incremental backups of your largest system within your time window. The remainder of the backups can happen over successive days. Eventually, your site may outgrow its initial backup solution. By following these guidelines, you can add more capacity at a future date without having to redesign your basic strategy. With proper design and planning, you can create a backup strategy that will grow with your environment. As outlined in the previous sections, the number and location of the backup devices are dependent on a number of factors.
The amount of data on the target systems
The available backup and restore windows
The available network bandwidth
The speed of the backup devices
If one drive causes backup window time conflicts, another can be added, providing an aggregate rate of two drives. The trade-off is that the second drive imposes extra CPU, memory, and I/O loads on the media server. If you find that you cannot complete backups in the allocated window, one approach is to either increase your backup window or decrease the frequency of your full and incremental backups. Another approach is to reconfigure your site to speed up overall backup performance. Before you make any such change, you should understand what determines your current backup performance. List or diagram your site network and systems configuration. Note the maximum data transfer rates for all the components of your backup configuration and compare these against the rate you must meet for your backup window. This will identify the slowest
components and, consequently, the cause of your bottlenecks. Some likely areas for bottlenecks include the networks, tape drives, client OS load, and filesystem fragmentation.
Questionnaire for capacity planning

The questionnaire includes fields such as the backup window and the retention policy.
Chapter 2    Master Server Configuration Guidelines

Managing NetBackup job scheduling on page 40
Miscellaneous considerations on page 44
Merging/splitting/moving servers on page 48
Guidelines for policies on page 49
Managing logs on page 50
Host Properties > Master Server > Properties > Global Attributes > Maximum jobs per client (should be greater than 1).
Host Properties > Master Server > Properties > Client Attributes setting for Maximum data streams (should be greater than 1).
Policy attribute Limit jobs per policy (should be greater than 1).
Policy schedule attribute Media multiplexing (should be greater than 1).
Check the storage unit properties:
Is the storage unit enabled to use multiple drives (Maximum concurrent write drives)? If you want to increase this value, remember to set it to fewer than the number of drives available to this storage unit. Otherwise, restores and other non-backup activities will not be able to run while backups to the storage unit are running.
Is the storage unit enabled for multiplexing (Maximum streams per drive)? You can write a maximum of 32 jobs to one tape at the same time.
Note: The Activity Monitor may not update if there are many (thousands of) jobs to view. If this happens, you may need to change the memory setting using the NetBackup Java command jnbSA with the -mx option. Refer to the INITIAL_MEMORY, MAX_MEMORY subsection in the NetBackup System Administrator's Guide for UNIX and Linux, Volume I. Note that this situation does not affect NetBackup's ability to continue running jobs.
For an explanation of the CONNECT_OPTIONS values, refer to the NetBackup System Administrator's Guide for UNIX and Linux, Volume II.
The NetBackup Troubleshooting Guide also provides information on network connectivity issues.
Web-based interface for efficient, remote administration across multiple NetBackup servers from a single, centralized console.
Policy-based alert notification, using predefined alert conditions to specify typical issues or thresholds within NetBackup.
Flexible reporting on issues such as backup performance, media utilization, and rates of job success.
Consolidated job and job policy views per server (or group of servers), for filtering and sorting job activity.
For more information on the capabilities of NOM, refer to the NOM online help in the Administration console, or see the NetBackup Operations Manager Getting Started Guide.
Windows:
install_path\NetBackup\bin
cd install_path\NetBackup
echo 0 > NOexpire
Prevent backups from starting by shutting down bprd (NetBackup Request Manager). This will suspend scheduling of new jobs by nbpem. To shut down bprd, you can use the Activity Monitor in the NetBackup Administration Console. Restart bprd to resume scheduling.
Miscellaneous considerations
Consider the following issues when planning for or troubleshooting NetBackup.
Use storage units in the order in which they are listed in the group.
Choose the least recently selected storage unit in the group.
Configure the storage unit group as a failover group. This means the first storage unit in the group will be the only storage unit used. If the storage unit is busy, then backups will queue. The second storage unit will only be used if the first storage unit is down.
Disk staging
With disk staging, images can be created on disk initially, then copied later to another media type (as determined in the disk staging schedule). The media type for the final destination is typically tape, but could be disk. This two-stage process leverages the advantages of disk-based backups in the near term, while preserving the advantages of tape-based backups for the long term. Note that disk staging can be used to increase backup speed. For more information, refer to the NetBackup System Administrator's Guide, Volume I.
NetBackup catalog strategies

The NetBackup catalog consists of the following parts:
Image database: The image database contains information about what has been backed up. It is by far the largest part of the catalog.
NetBackup data stored in relational databases: This includes the media and volume data describing media usage and volume information, which is used during the backups.
NetBackup configuration files: Policy, schedule, and other flat files used by NetBackup.
For more information on the catalog, refer to Catalog Maintenance and Performance Optimization in the NetBackup System Administrator's Guide, Volume I. The NetBackup catalogs on the master server tend to grow large over time and eventually fail to fit on a single tape. Here is the layout of the first few directory levels of the NetBackup catalogs on the master server:
Figure 2-3    Directory layout on the master server (UNIX)

/usr/openv/
    /db/data                Relational database files: NBDB.db, EMM_DATA.db, EMM_INDEX.db,
                            NBDB.log, BMRDB.db, BMR_DATA.db, BMR_INDEX.db, BMRDB.log, vxdbms.conf
    /var                    License key and authentication information
    /var/global             server.conf, databases.conf
    /netbackup/db           Configuration files: /class, /class_template, /client, /config,
                            /error, /failure_history, /jobs, /media, /vault
    /netbackup/db/images    Image database (one subdirectory per client: /client_1 ... /client_n)
    /netbackup/vault
When defining the file list, use absolute pathnames for the locations of the NetBackup and Media Manager catalog paths and include the server name in the path. This is in case the media server performing the backup is changed.
Back up the catalog using an online, hot catalog backup
This type of catalog backup is for highly active NetBackup environments in which continual backup activity is occurring. It is considered an online, hot method because it can be performed while regular backup activity is taking place. This type of catalog backup is policy-based and can span more than one tape. It also allows for incremental backups, which can significantly reduce catalog backup times for large catalogs.
Store the catalog on a separate file system
The NetBackup catalog can grow quickly depending on backup frequency, retention periods, and the number of files being backed up. If you store the NetBackup catalog data on its own file system, catalog growth cannot impact other disk resources, root file systems, or the operating system. For information on how to move the catalog, refer to Catalog compression on page 48.
Change the location of the NetBackup relational database files
The location of the NetBackup relational database files can be changed, or the files can be split into multiple directories, for better performance. For example, by placing the transaction log file, NBDB.log, on a physically separate drive, you gain better protection against disk failure and increased efficiency in writing to the log file. Refer to the procedure in the section Moving NBDB Database Files After Installation in the NetBackup Relational Database appendix of the NetBackup System Administrator's Guide, Volume I.
Set a delay to compress the catalog
The default value for this parameter is 0, which means that NetBackup does not compress the catalog. As your catalog increases in size, you may want to use a value between 10 and 30 days for this parameter. When you restore old backups, NetBackup automatically uncompresses the catalog files as needed, with minimal performance impact. For information on how to compress the catalog, refer to Catalog compression on page 48.
Use catalog archiving. Catalog archiving reduces the size of online catalog data by relocating the large catalog .f files to secondary storage. NetBackup
administration will continue to require regularly scheduled catalog backups, but without the large amount of online catalog data, the backups will be faster.
Catalog backup not finishing in the available window

Off load some policies, clients, and backup images from the current master server to a new, additional master, so that each master has a window large enough to allow its catalog backup to finish. Since a media server can be connected to only one master server, additional media servers may be needed. For assistance in adding another master server to lighten the workload of the existing master, contact Symantec Consulting.
Determine whether most of the catalog backup time is spent expiring backup images. If this is the case, make sure the master's primary DNS server is available by running nslookup; the command should respond quickly. Also, investigate whether there are any media servers that no longer exist. If such media servers were not removed from the NetBackup configuration correctly, the image cleaning operation will repeatedly time out on them while trying to expire fragments.
Catalog compression
When the NetBackup image catalog becomes too large for the available disk space, there are two ways to manage this situation: compress the image catalog, or move the image catalog to a file system with more available space.
For details, refer to Moving the Image Catalog and Compressing and Uncompressing the Image Catalog in the NetBackup System Administrator's Guide, Volume I. Note that NetBackup compresses images after each backup session, regardless of whether or not any backups were successful. This happens right before the execution of the session_notify script and the backup of the catalog. The actual backup session is extended until compression is complete.
Merging/splitting/moving servers
A master server schedules and maintains backup information for a given set of systems. The Enterprise Media Manager (EMM) server and its database maintain centralized device and media related information used on all servers that are part of the configuration. By default, the EMM server and the NetBackup Relational Database (NBDB) that contains the EMM data are located on the master server. A large and dynamic data center can expect to periodically reconfigure the number and organization of its backup servers.
Centralized management, reporting, and maintenance are the benefits of working in a centralized NetBackup environment. Once a master server has been established, it is possible to merge its databases with another master server, giving control over its set of server backups to the new master server. Conversely, if the backup load on a master server has grown to the point where backups are not finishing in the backup window, it may be desirable to split that master server into two master servers. It is possible to merge or split NetBackup master servers or EMM servers. It is also possible to convert a media server to a master server or a master server to a media server. However, the procedures to accomplish this are complex and require a detailed knowledge of NetBackup database interactions. Merging or splitting NetBackup, Media Manager and EMM databases to another server is not recommended without involving a Symantec consultant to determine the changes needed, based on your specific configuration and requirements.
Guidelines for policies

Include and exclude lists

Do not use excessive wildcards in file lists. When wildcards are used, NetBackup compares every file name against the wildcards, which decreases NetBackup performance. Instead of placing /tmp/* (UNIX) or C:\Temp\* (Windows) in an include or exclude list, use /tmp/ or C:\Temp.
Use exclude lists to exclude large, unnecessary files. Reduce the size of your backups by using exclude lists for the files your installation does not need to preserve. For instance, you may decide to exclude temporary files. Use absolute paths for your exclude list entries, so that valuable files are not inadvertently excluded. Before adding files to the exclude list, confirm with the affected users that their files can be safely
excluded. Should disaster (or user error) strike, not being able to recover files costs much more than backing up extra data. When a policy specifies that all local drives be backed up (ALL_LOCAL_DRIVES), nbpem initiates a parent job (nbgenjob) that connects to the client and runs bpmount -i to get a list of mount points. Then nbpem initiates a job with its own unique job identification number for each mount point. Next the client bpbkar starts a stream for each job. Then, and only then, the exclude list is read by NetBackup. When the entire job is excluded, bpbkar exits with a status 0, stating that it sent 0 of 0 files to backup. The resulting image files are treated just as any other successful backup's image files. They expire in the normal fashion when the expiration date in the image header files specifies they are to expire.
Critical policies
For online, hot catalog backups (a new feature in NetBackup 6.0), make sure to identify those policies that are crucial to recovering your site in the event of a disaster. For more information on hot catalog backup and critical policies, refer to the NetBackup System Administrator's Guide, Volume I.
Schedule frequency
To minimize the number of times you back up files that have not changed, and to minimize your consumption of bandwidth, media, and other resources, consider limiting the frequency of your full backups to monthly or even quarterly, followed by weekly cumulative incremental backups and daily incremental backups.
Managing logs
Optimizing the performance of vxlogview
As explained in the NetBackup Troubleshooting Guide, the vxlogview command is used for viewing logs created by unified logging (VxUL). The vxlogview command will deliver optimum performance when a file ID is specified in the query. For example: when viewing messages logged by the NetBackup Resource Broker (nbrb) for a given day, you can filter out the library messages while viewing the nbrb logs. To achieve this, run vxlogview as follows:
vxlogview -o nbrb -i nbrb -n 0
Note that -i nbrb specifies the file ID for nbrb. Specifying the file ID improves the performance, because the search is confined to a smaller set of files.
Interpreting legacy error logs

The meaning of the various fields in a daily_messages log entry (the fields are delimited by blanks) is defined in the table below, Table 2-11, Meaning of daily_messages log fields. The next table, Table 2-12, Message types, lists the values for the message type, which is the third field in the log message.

Table 2-11    Meaning of daily_messages log fields

Field    Value in the example    Meaning
1        1021419793              Time at which the message was logged (number of seconds since 1970)
2        1                       Error database entry version
3        2                       Type of message
4        4                       Severity of error:
                                 1: Unknown
                                 2: Debug
                                 4: Informational
                                 8: Warning
                                 16: Error
                                 32: Critical
5        nabob                   Server on which the error was reported
6        0                       Job ID (included if pertinent to the log entry)
7        0                       (optional entry)
8        0                       (optional entry)
9        *NULL*                  Client on which the error occurred, if applicable; otherwise *NULL*
10       bpjobd                  Process that logged the message
11       TERMINATED bpjobd       Text of the log message

Table 2-12    Message types
Chapter 3    Media Server Configuration Guidelines

Network and SCSI/FC bus bandwidth on page 54
How to change the threshold for media errors on page 54
How to reload the st driver without rebooting Solaris on page 57
Media Manager drive selection on page 58
Robot types and NetBackup port configuration on page 58
Windows: install_path\NetBackup\bin\goodies\available_media
How to change the threshold for media errors

The NetBackup Media List report may show that some media is frozen and therefore cannot be used for backups. One of the reasons NetBackup freezes media is recurring I/O errors. The NetBackup Troubleshooting Guide describes the recommended approaches for dealing with this issue, for example, under NetBackup error code 96. It is also possible to configure the NetBackup error threshold value, as described in this section. Each time a read, write, or position error occurs, NetBackup records the time, media ID, type of error, and drive index in the EMM database. NetBackup then scans to see whether that media has had m of the same errors within the past n hours. The variable m is a tunable parameter known as media_error_threshold. The default value of media_error_threshold is 2 errors.
The variable n is known as time_window. The default value of time_window is 12 hours. If a tape volume has more than media_error_threshold errors, NetBackup will take the appropriate action:
If the volume has not been previously assigned for backups, then NetBackup will:
set the volume status to FROZEN
select a different volume
log an error
If the volume is in the NetBackup media catalog and has been previously selected for backups, then NetBackup will:
set the volume to SUSPENDED
abort the current backup
log an error
Adjusting media_error_threshold
To configure the NetBackup media error thresholds, use the nbemmcmd command on the media server as follows. NetBackup freezes a tape volume or downs a drive for which these values are exceeded. For more detail on the nbemmcmd command, refer to the man page or to the NetBackup Commands Guide. UNIX
/usr/openv/netbackup/bin/admincmd/nbemmcmd -changesetting -time_window unsigned integer -machinename string -media_error_threshold unsigned integer -drive_error_threshold unsigned integer
Windows
<install_path>\NetBackup\bin\admincmd\nbemmcmd.exe -changesetting -time_window unsigned integer -machinename string -media_error_threshold unsigned integer -drive_error_threshold unsigned integer
For example, if the -drive_error_threshold is set to the default value of 2, the drive is downed after 3 errors in 12 hours. If the -drive_error_threshold is set to a value of 6, it would take 7 errors in the same 12 hour period before the drive would be downed.
Note: The following description has nothing to do with the number of times NetBackup retries a backup/restore that fails. That situation is controlled by the global configuration parameter Backup Tries for backups and the bp.conf entry RESTORE_RETRIES for restores. This algorithm merely deals with whether I/O errors on tape should cause media to be frozen or drives to be downed. When a read/write/position error occurs on tape, the error returned by the operating system does not distinguish between whether the error is caused by the tape or the drive. To prevent the failure of all backups in a given timeframe, bptm tries to identify a bad tape volume or drive based on past history, using the following logic:
Each time an I/O error occurs on a read/write/position, bptm logs the error in the file /usr/openv/netbackup/db/media/errors (UNIX) or install_path\NetBackup\db\media\errors (Windows). The error message includes the time of the error, media ID, drive index and type of error. Examples of the entries in this file are these:
07/21/96 04:15:17 A00167 4 WRITE_ERROR
07/26/96 12:37:47 A00168 4 READ_ERROR
Each time an entry is made, the past entries are scanned to determine whether the same media ID and/or drive has had this type of error in the past n hours. The variable n is known as the time_window; the default time window is 12 hours.
When performing the history search for the time_window entries, EMM notes past errors that match the media ID, the drive, or both the drive and the media ID. The purpose of this is to determine the cause of the error. For example, if a given media ID gets write errors on more than one drive, it is assumed that the tape volume is bad and NetBackup freezes the volume. If more than one media ID gets a particular error on the same drive, it is assumed the drive is bad and the drive goes to a down state. If past errors are found only on the same drive with the same media ID, then EMM assumes that the volume is bad and freezes it.
Freezing or downing does not occur on the first error. There are two other parameters, media_error_threshold and drive_error_threshold, both of which default to 2. For a freeze or down to happen, more than the threshold number of errors must occur (by default, at least three errors must occur) in the time window for the same drive/media ID.
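The following Python sketch illustrates the decision logic described above. It is only an illustration under the stated defaults; the real implementation is internal to bptm and EMM, and the data structures and function names here are hypothetical.

    from collections import namedtuple

    TapeError = namedtuple("TapeError", "hours_ago media_id drive_index kind")

    TIME_WINDOW_HOURS = 12        # default time_window
    MEDIA_ERROR_THRESHOLD = 2     # default media_error_threshold
    DRIVE_ERROR_THRESHOLD = 2     # default drive_error_threshold

    def decide(past_errors, new_error):
        """Return 'freeze media', 'down drive', or None when a new tape I/O error occurs."""
        recent = [e for e in past_errors
                  if e.hours_ago <= TIME_WINDOW_HOURS and e.kind == new_error.kind]
        recent.append(new_error)
        same_media = [e for e in recent if e.media_id == new_error.media_id]
        same_drive = [e for e in recent if e.drive_index == new_error.drive_index]

        # Same media ID failing in more than one drive: assume the tape volume is bad.
        if len(same_media) > MEDIA_ERROR_THRESHOLD and len({e.drive_index for e in same_media}) > 1:
            return "freeze media"
        # More than one media ID failing in the same drive: assume the drive is bad.
        if len(same_drive) > DRIVE_ERROR_THRESHOLD and len({e.media_id for e in same_drive}) > 1:
            return "down drive"
        # Errors seen only for this media ID in this drive: blame the media.
        if len(same_media) > MEDIA_ERROR_THRESHOLD:
            return "freeze media"
        return None

    # Example: a third write error on tape A00167, this time in a different drive.
    history = [TapeError(3, "A00167", 4, "WRITE_ERROR"), TapeError(7, "A00167", 4, "WRITE_ERROR")]
    print(decide(history, TapeError(0, "A00167", 5, "WRITE_ERROR")))   # freeze media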
Note: If either media_error_threshold or drive_error_threshold is 0, freezing or downing occurs the first time any I/O error occurs. media_error_threshold is checked first, so if both values are 0, freezing overrides downing. Setting these values to 0 is not recommended. Changing the default values at all is not recommended unless there is a good reason to do so. One obvious change would be to set very large threshold values, effectively disabling the mechanism so that tapes are never frozen and drives are never downed. Freezing and downing is primarily intended to benefit backups. If read errors occur on a restore, freezing media has little effect: NetBackup still accesses the tape to perform the restore. In the restore case, downing a bad drive may help.
How to reload the st driver without rebooting Solaris
Use devfsadm to recreate the device nodes in /devices and the device links in /dev for tape devices by running any one (not all) of the following commands:
/usr/sbin/devfsadm -i st
/usr/sbin/devfsadm -c tape
/usr/sbin/devfsadm -C -c tape (use this command to enforce cleanup if dangling logical links are present in /dev)
Chapter
Dedicated or shared backup environment on page 60
Pooling on page 60
Disk versus tape on page 60
Pooling
Here are some useful conventions for media pools (formerly known as volume pools):
Configure a scratch pool for management of scratch tapes. If a scratch pool exists, EMM can move volumes from that pool to other pools that do not have volumes available.
Use the available_media script in the goodies directory. You can put the available_media report into a script that redirects the report output to a file and emails the file to the administrators daily or weekly. This helps track which tapes are full, frozen, suspended, and so on. By means of a script, you can also filter the output of the available_media report to generate custom reports. (A sketch of such a wrapper script appears after this list.) To monitor media, you can also use the NetBackup Operations Manager (NOM). For instance, NOM can be configured to issue an alert if there are fewer than X number of media available, or if more than X% of the media is frozen or suspended.
Use the none pool for cleaning tapes.
Do not create too many pools. The existence of too many pools causes the library capacity to become fragmented across the pools. Consequently, the library becomes filled with many partially-full tapes.
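The following is a minimal sketch of such a wrapper script, assuming the standard goodies location /usr/openv/netbackup/bin/goodies, a mailx command on the media server, and a hypothetical distribution address of backup-admins@example.com:
#!/bin/sh
# Sketch: capture the available_media report and mail it to the backup administrators.
REPORT=/tmp/available_media_report.txt
/usr/openv/netbackup/bin/goodies/available_media > "$REPORT"
mailx -s "NetBackup available media report" backup-admins@example.com < "$REPORT"
A cron entry can then run the script daily or weekly, as suggested above.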
copies is insufficient to ensure streaming to tape drives, writing to disk can speed the backup process and alleviate wear and tear on your tape drives. Here are some factors to consider when choosing to back up a given dataset to disk or tape:
Short or long retention period
Incremental or full backup
Intermediate (staging) or long-term storage
Delay in recovery time
Here are some benefits of backing up to disk rather than tape:
No need to multiplex. Writing to disk does not need to be streamed, so multiplexing is not necessary. Multiplexing is only necessary with tape because the tape must be streamed. Multiplexing allows multiple clients and multiple file systems to be backed up to the same tape simultaneously, thus streaming the drive. However, this functionality slows down the restore. (See Tape streaming on page 126 for an explanation of streaming.)
Instant access to data. Most tape drives on the market have a time to data of close to two minutes. This time includes the amount of time to move the tape from its slot, load it into the drive, and seek to an appropriate place on the tape. Disk has an effective time to data of zero seconds. To understand the significance of eliminating this delay, consider that restoring a large file system whose backups reside on 30 different tapes means that a two-minute delay per tape adds almost two hours to the restore. This includes the time it takes to eject and unload the 30 tapes.
Fewer full backups. With tape-based systems, full backups must be done regularly because of the instant-access-to-data issue described above. Otherwise, the number of tapes required for a restore significantly increases both the time to restore and the chance that a single tape will cause the restore to fail. Since disk arrays are protected by RAID software, they do not have this problem.
Chapter
Introduction
Before you create a database, decide how to protect the database against potential failures. Answer the following questions before developing your backup strategy.
Is it acceptable to lose any data if a hardware failure damages some of the files that constitute a database?
Will you ever need to recover to past points-in-time?
Does the database need to be available at all times (24x7)?
For specific information on backing up and restoring your database, refer to the NetBackup administrator's guide for your database product. In addition, the manufacturer of your database product may provide publications that document backup recommendations and methods.
Fragmentation and databases
Using a smaller fragment size in a backup of a database such as Oracle will not improve backup performance, and may hinder restore performance. Database backups (when not using Advanced Client) are unaffected by fragmentation, since there is only one file per backup image. There is no advantage in tape positioning with or without fast-locate blocks.
Using Advanced Client
NetBackup Advanced Client provides snapshot backup technology combined with off-host data movement for local networks and SAN environments. A data snapshot can be created on disk in seconds and then backed up directly to tape. Users can significantly reduce CPU and I/O overhead from application or database servers while eliminating the backup window altogether. Advanced Client helps reduce the impact on applications that require 24x7xforever availability. Advanced Client is available on UNIX and Windows systems, and supports all NetBackup libraries and drives. It can be used with multi-streaming and multiplexing, and with a variety of disk arrays.
Chapter
Best practices
This chapter describes an assortment of best practices, and includes the following sections:
Best practices: new tape drive technologies on page 66
Best practices: tape drive cleaning on page 66
Best practices: storing tape cartridges on page 68
Best practices: recoverability on page 68
Best practices: naming conventions on page 71
Frequency-based cleaning
NetBackup does frequency-based cleaning by tracking the number of hours a drive has been in use. When this time reaches a configurable parameter, NetBackup creates a job that mounts and exercises a cleaning tape. This cleans the drive in a preventive fashion. The advantage of this method is that typically there are no drives unavailable awaiting cleaning. There is also no limitation on platform or robot type. On the downside, cleaning is done more often than necessary. This adds system wear and consumes time that could be used to write to the drive. Another limitation is that this method is hard to tune. When new tapes are used, drive cleaning is needed less frequently; the need for cleaning increases as the tape inventory ages. This increases the amount of tuning administration needed and, consequently, the margin of error.
On-demand cleaning
Refer to the NetBackup Media Manager System Administrator's Guide for more information on this topic.
TapeAlert
TapeAlert allows reactive cleaning for most drive types. TapeAlert allows a tape drive to notify EMM when it needs to be cleaned, and EMM then performs the cleaning. You must have a cleaning tape configured in at least one library slot to use this feature. TapeAlert is the recommended cleaning solution if it can be implemented. Not all drives, at all firmware levels, support this type of reactive cleaning. Where reactive cleaning is not supported on a particular drive, frequency-based cleaning may be substituted. This solution is not vendor or platform specific. The specific firmware levels have not been tested by Symantec; however, the drive vendor should be able to confirm that the TapeAlert feature is supported.
How TapeAlert works
To understand NetBackup's behavior with drive-cleaning TapeAlerts, it is important to understand the TapeAlert interface to a drive. The TapeAlert interface to a tape drive is via the SCSI bus, based on a Log Sense page that contains 64 alert flags. The conditions that cause a flag to be set and cleared are device-specific and are determined by the device vendor. The Log Sense page is configured via a Mode Select page. The Mode Sense/Select configuration of the TapeAlert interface is compatible with the SMART diagnostic standard for disk drives.
NetBackup reads the TapeAlert Log Sense page at the beginning and end of a write/read job. TapeAlert flags 20 to 25 are used for cleaning management, although some drive vendors' implementations vary from this. NetBackup uses TapeAlert flag 20 (Clean Now) and TapeAlert flag 21 (Clean Periodic) to determine when it needs to clean a drive.
When NetBackup selects a drive for a backup, bptm reviews the Log Sense page for status. If one of the clean flags is set, the drive is cleaned before the job starts. If a backup is in progress and one of the clean flags is set, the flag is not read until a tape is dismounted from the drive. If a job spans media and one of the clean flags is set during the first tape, the cleaning light comes on and the drive is cleaned before the second piece of media is mounted in the drive. The implication is that the present job concludes its ongoing write despite a TapeAlert Clean Now or Clean Periodic message. That is, the TapeAlert does not require the loss of what has been written to tape so far.
This is true regardless of the number of NetBackup jobs involved in writing out the rest of the media. Note that the behavior described here may change in the future. If a large number of media become FROZEN as a result of having implemented TapeAlert, there is a strong likelihood of underlying media and/or tape drive issues.
Disabling TapeAlert
To disable TapeAlert, create a touch file called NO_TAPEALERT:
UNIX: /usr/openv/volmgr/database/NO_TAPEALERT
Windows: install_path\volmgr\database\NO_TAPEALERT
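On UNIX, for example, the touch file can be created with a single command, using the path given above:
touch /usr/openv/volmgr/database/NO_TAPEALERT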
Robotic cleaning
Robotic cleaning is not proactive, and is not subject to the limitations described above. Because this cleaning is reactive, unnecessary cleanings are eliminated, frequency tuning is not an issue, and the drive can spend more time moving data rather than in maintenance operations. Library-based cleaning is not supported by EMM for most robots, because robotic library and operating system vendors have implemented this type of cleaning in many different ways.
The methods and procedures you adopt for your installation should be documented and tested regularly to ensure that your installation can recover from a disaster.
Table 6-13 summarizes the operational risks to plan for, including: file deleted before backup, file deleted after backup, backup client failure, media failure, master/media server failure, loss of the backup database, and no NetBackup software.
The Resilient Enterprise: Recovering Information Services from Disasters, by Symantec and industry authors, published by Symantec Software Corporation.
Blueprints for High Availability: Designing Resilient Distributed Systems, by Evan Marcus and Hal Stern, published by John Wiley and Sons.
Implementing Backup and Recovery: The Readiness Guide for the Enterprise, by David B. Little and David A. Chapa, published by Wiley Technology Publishing.
Always use a regularly scheduled hot catalog backup
Refer to Catalog Recovery from an Online Backup in the NetBackup Troubleshooting Guide.
Review the disaster recovery plan often
Review your site-specific recovery procedures and verify that they are accurate and up-to-date. Also, verify that the more complex systems, such as the NetBackup master and media servers, have current procedures for rebuilding the machines with the latest software.
Perform test recoveries on a regular basis
Implement a plan to perform restores of various systems to alternate locations. This plan should include selecting random production backups and restoring the data to a non-production system. A checksum can then be performed on one or many of the restored files and compared to the actual production data. Be sure to include offsite storage as part of this testing. The end-user or application administrator can also be involved in determining the integrity of the restored data.
Support NetBackup recoverability:
Back up the NetBackup catalog to two tapes. The catalog contains information vital for NetBackup recovery. Its loss could result in hours or days of recovery time through manual processes. The cost of a single tape is a small price to pay for the added insurance of rapid recovery in the event of an emergency.
Back up the catalog after each backup. If a hot catalog backup is used, an incremental catalog backup can be done after each backup session. Extremely busy backup environments should also use a scheduled hot catalog backup, since their backup sessions end infrequently. In the event of a catastrophic failure, the recovery of images is slowed by not having all images available. If a manual backup occurs just before the master server or the drive that contains the backed-up files crashes, the manual backup must be imported to recover the most recent version of the files.
Record the IDs of catalog backup tapes. Record the catalog tapes in the site run book or another public location to ensure rapid identification in the event of an emergency. If the catalog tapes are not identified ahead of time, a significant amount of time may be lost by scanning every tape in a library to find them. The utility vmphyinv can be used to mount all tapes in a robotic library and identify the catalog tape(s); a hedged command sketch appears after this list. The vmphyinv utility will identify cold catalog tapes.
Designate label prefixes for catalog backups. Make it easy to identify the NetBackup catalog data in times of emergency. Label the catalog tapes with a unique prefix such as DB on the tape barcodes, so your operators can find the catalog tapes without delay.
Place NetBackup catalogs in specific robot slots. Place a catalog backup tape in the first or last slot of a robot to more easily identify the tape in an emergency. This also allows for easy tape movement if manual tape handling is necessary.
Put the NetBackup catalog on different online storage than the data being backed up. In the case of a site storage disaster, the catalogs of the backed-up data should not reside on the same disks as production data. The reason is straightforward: if a disk drive that loses production data also holds the catalog of that production data, both are lost at once, resulting in increased downtime.
Regularly confirm the integrity of the NetBackup catalog. On a regular basis, such as quarterly or after major operations or personnel changes, walk through the process of recovering a catalog from tape. This essential part of NetBackup administration can save hours in the event of a catastrophe.
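As noted in the list above, vmphyinv can be used to identify catalog tapes. A hedged sketch of invoking it against a robotic library follows; the robot number 0 is only an example, and the exact options should be confirmed in the NetBackup Commands Guide for your release:
/usr/openv/volmgr/bin/vmphyinv -rn 0
The utility mounts the media in the library and reads what is on each tape, which makes it possible to pick out the catalog tapes.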
Policy names
One good naming convention for policies is platform_datatype_server(s).
Example 1: w2k_filesystems_trundle
This policy name designates a policy for a single Windows server doing file system backups.
Example 2: w2k_sql_servers
This policy name designates a policy for backing up a set of Windows 2000 SQL servers. Several servers may be backed up by this policy. Servers that are candidates for being included in a single policy are those running the same operating system and with the same backup requirements. Grouping servers within a single policy reduces the number of policies and eases the management of NetBackup.
Schedule names
Create a generic scheme for schedule naming. One recommended set of schedule names is daily, weekly, and monthly. Another recommended set of names is incremental, cumulative, and full. This convention keeps the management of NetBackup at a minimum. It also helps with the implementation of Vault, if your site uses Vault.
Section II
Performance tuning
Section II explains how to measure your current NetBackup performance, and gives general recommendations and examples for tuning NetBackup. Section II includes these chapters:
Measuring performance
Tuning the NetBackup data transfer path
Tuning other NetBackup components
Tuning disk I/O performance
OS-related tuning factors
Additional resources
Chapter
Measuring performance
This chapter provides suggestions for measuring NetBackup performance. This chapter includes the following sections:
Overview on page 76
Controlling system variables for consistent testing conditions on page 76
Evaluating performance on page 79
Evaluating UNIX system components on page 84
Evaluating Windows system components on page 85
Overview
The final measure of NetBackup performance is the length of time required for backup operations to complete (usually known as the backup window), or the length of time required for a critical restore operation to complete. However, measuring existing performance and improving future performance on the basis of those measurements calls for performance metrics that are more reliable and reproducible than simple wall-clock time. This chapter discusses these metrics in more detail. After establishing accurate metrics as described here, you can measure the current performance of NetBackup and your system components to compile a baseline performance benchmark. With a baseline, you can apply changes in a controlled way. By measuring performance after each change, you can accurately measure the effect of each change on NetBackup performance.
Server variables
It is important to eliminate all other NetBackup activity from your environment when you are measuring the performance of a particular NetBackup operation. One area to consider is the automatic scheduling of backup jobs by the NetBackup scheduler. When policies are created, they are usually set up to allow the NetBackup scheduler to initiate the backups. The NetBackup scheduler will initiate backups based on the traditional NetBackup frequency-based scheduling or on certain days of the week, month, or other time interval. This process is called calendar-based scheduling. As part of the backup policy definition, the Start Window is used to indicate when the NetBackup scheduler can start backups using either frequency-based or calendar-based scheduling. When you perform backups for the purpose of performance testing, this setup might interfere since the NetBackup scheduler may initiate backups unexpectedly, especially if the operations you intend to measure run for an extended period of time.
The simplest way to prevent the NetBackup scheduler from running backup jobs during your performance testing is to create a new policy specifically for performance testing and to leave the Start Window field blank in the schedule definition for that policy. This prevents the NetBackup scheduler from initiating any backups automatically for that policy. After creating the policy, you can run the backup on demand by using the Manual Backup command from the NetBackup Administration Console.
To prevent the NetBackup scheduler from running backup jobs unrelated to the performance test, you may want to set all other backup policies to inactive by using the Deactivate command from the NetBackup Administration Console. Of course, you must reactivate the policies to start running backups again.
You can also use a user-directed backup to run the performance test. However, using the Manual Backup option for a policy is preferred. With a manual backup, the policy contains the entire definition of the backup job, including the clients and files that are part of the performance test. Running the backup manually, straight from the policy, leaves no doubt about which policy is used for the backup, and makes it easier to change and test individual backup settings from the policy dialog box.
Before you start the performance test, check the Activity Monitor to make sure no NetBackup processing is currently in progress. Similarly, check the Activity Monitor after the performance test for unexpected activity (such as an unanticipated restore job) that may have occurred during the test. Additionally, check for non-NetBackup activity on the server during the performance test and try to reduce or eliminate it.
Note: By default, NetBackup logging is set to a minimum level. To gather more logging information, set the legacy and unified logging levels higher and create the appropriate legacy logging directories. For details on how to use NetBackup logging, refer to the logging chapter of the NetBackup Troubleshooting Guide. Keep in mind that higher logging levels consume more disk space.
Network variables
Network performance is key to achieving optimum performance with NetBackup. Ideally, you would use a completely separate network for performance testing to avoid the possibility of skewing the results by encountering unrelated network activity during the course of the test. In many cases, a separate network is not available. Ensure that non-NetBackup activity is kept to an absolute minimum during the time you are evaluating performance. If possible, schedule testing for times when backups are not active. Even occasional short bursts of network activity may be enough to skew
the results during portions of the performance test. If you are sharing the network with production backups occurring for other systems, you must account for this activity during the performance test. Another network variable you must consider is host name resolution. NetBackup depends heavily upon a timely resolution of host names to operate correctly. If you have any delays in host name resolution, including reverse name lookup to identify a server name from an incoming connection from a certain IP address, you may want to eliminate that delay by using the HOSTS (Windows) or /etc/hosts (UNIX) file for host name resolution on systems involved in your performance test environment.
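For illustration, a hypothetical /etc/hosts (or Windows HOSTS) entry for a media server involved in the test might look like the following; the address and names are examples only:
192.168.10.21   mediaserver1   mediaserver1.example.com
Listing the test servers and clients this way avoids DNS lookups, including reverse lookups, during the measurement run.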
Client variables
Make sure the client system is in a relatively quiescent state during performance testing. A lot of activity, especially disk-intensive activity such as virus scanning on Windows, will limit the data transfer rate and skew the results of your tests. One possible mistake is to allow another NetBackup server, such as a production backup server, to have access to the client during the course of the test. This may result in NetBackup attempting to back up the same client to two different servers at the same time, which would severely impact the results of a performance test in progress at that time. Different file systems have different performance characteristics. For example, comparing data throughput results from operations on a UNIX VxFS or Windows FAT file system to those from operations on a UNIX NFS or Windows NTFS system may not be valid, even if the systems are otherwise identical. If you do need to make such a comparison, factor the difference between the file systems into your performance evaluation testing, and into any conclusions you may draw from that testing.
Data variables
Monitoring the data you are backing up improves the repeatability of performance testing. If possible, move the data you will use for testing backups to its own drive or logical partition (not a mirrored drive), and defragment the drive before you begin performance testing. For testing restores, start with an empty disk drive or a recently defragmented disk drive with ample empty space. This will reduce the impact of disk fragmentation on the NetBackup performance test and yield more consistent results between tests. Similarly, for testing backups to tape, always start each test run with an empty piece of media. You can do this by expiring existing images for that piece of media through the Catalog node of the NetBackup Administration Console, or by
running the bpexpdate command. Another approach is to use the bpmedia command to freeze any media containing existing backup images so that NetBackup selects a new piece of media for the backup operation. This step will help reduce the impact of tape positioning on the NetBackup performance test and will yield more consistent results between tests. It will also reduce the impact of tape mounting and unmounting of media that has NetBackup catalog images and that cannot be used for normal backups. When you test restores from tape, always restore from the same backup image on the tape to achieve consistent results between tests. In general, using a large data set will generate a more reliable and reproducible performance test than a small data set. A performance test using a small data set would probably be skewed by startup and shutdown overhead within the NetBackup operation. These variables are difficult to keep consistent between test runs and are therefore likely to produce inconsistent test results. Using a large data set will minimize the effect of start up and shutdown times. Design the makeup of the dataset to represent the makeup of the data in the intended production environment. For example, if the data set in the production environment contains many small files on file servers, then the data set for the performance testing should also contain many small files. A representative test data set will more accurately predict the NetBackup performance that you can reasonably expect in a production environment. The type of data can help reveal bottlenecks in the system. Files consisting of non-compressible (random) data cause the tape drive to run at its lower rated speed. As long as the other components of the data transfer path are keeping up, you may identify the tape drive as the bottleneck. On the other hand, files consisting of highly-compressible data can be processed at higher rates by the tape drive when hardware compression is enabled. This may result in a higher overall throughput and possibly expose the network as the bottleneck. Many values in NetBackup provide data amounts in kilobytes and rates in kilobytes per second. For greater accuracy, divide by 1024 rather than rounding off to 1000 when you convert from kilobytes to megabytes or from kilobytes per second to megabytes per second.
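As mentioned above, bpmedia can be used to freeze media that contains existing images so that NetBackup selects fresh media for the test run. A hedged sketch follows; the media ID A00167 is reused from the earlier error-log example purely as an illustration, and the exact options should be verified in the NetBackup Commands Guide:
/usr/openv/netbackup/bin/admincmd/bpmedia -freeze -m A00167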
Evaluating performance
There are two primary locations from which to obtain NetBackup data throughput statistics: the NetBackup Activity Monitor and the NetBackup All Log Entries report. The choice of which location to use is determined by the type of NetBackup operation you are measuring: non-multiplexed backup, restore, or multiplexed backup.
You can obtain statistics for all three types of operations from the NetBackup All Log Entries report. You can obtain statistics for non-multiplexed backup or restore operations from the NetBackup Activity Monitor. For multiplexed backup operations, you can obtain the overall statistics from the All Log Entries report after all the individual backup operations that are part of the multiplexed backup are complete. In this case, the statistics available in the Activity Monitor for each of the individual backup operations are relative only to that operation, and do not reflect the actual total data throughput to the tape drive.
There may be small differences between the statistics available from these two locations due to slight differences in rounding techniques between the entries in the Activity Monitor and the entries in the All Log Entries report. For a given type of operation, choose either the Activity Monitor or the All Log Entries report and consistently record your statistics only from that location. In both the Activity Monitor and the All Log Entries report, the data-streaming speed is reported in kilobytes per second. If a backup or restore is repeated, the reported speed can vary between repetitions depending on many factors, including the availability of system resources and system utilization, but the reported speed can be used to assess the performance of the data-streaming process.
The statistics from the NetBackup error logs show the actual amount of time spent reading and writing data to and from tape. This does not include time spent mounting and positioning the tape. Cross-referencing the information from the error logs with data from the bpbkar log on the NetBackup client (showing the end-to-end elapsed time of the entire process) indicates how much time was spent on operations unrelated to reading and writing to and from the tape.
To evaluate performance through the NetBackup Activity Monitor
1 Run the backup or restore job.
2 Open the NetBackup Activity Monitor.
3 Verify that the backup or restore job completed successfully. The Status column should contain a zero (0).
4 View the log details for the job by selecting the Actions > Details menu option, or by double-clicking on the entry for the job. Select the Detailed Status tab. Obtain the NetBackup performance statistics from the following fields in the Activity Monitor:
Start Time/End Time: These fields show the time window during which the backup or restore job took place.
Elapsed Time: This field shows the total elapsed time from when the job was initiated to job completion. It can be used as an indication of total wall clock time for the operation.
KB per Second: This is the data throughput rate.
Kilobytes: Compare this value to the amount of data backed up. Although it should be comparable, the NetBackup data amount will be slightly higher because of administrative information, known as metadata, saved with the backed-up data.
For example, if you display properties for a directory containing 500 files, each 1 megabyte in size, the directory shows a size of 500 megabytes, or 524,288,000 bytes, which is equal to 512,000 kilobytes. The NetBackup report may show 513,255 kilobytes written, which is 1255 kilobytes more than the file size of the directory. This is true for a flat directory. Subdirectory structures may diverge due to the way the operating system tracks used and available space on the disk. Also, be aware that the operating system may report how much space was allocated for the files in question, not just how much data is actually there. For example, if the allocation block size is 1 kilobyte, 1000 1-byte files report a total size of 1 megabyte, even though only 1 kilobyte of data exists. The greater the number of files, the larger this discrepancy may become.
To evaluate performance using the All Log Entries report
1 Run the backup or restore job.
2 Run the All Log Entries report from the NetBackup Reports node in the NetBackup Administration Console. Be sure that the Date/Time Range that you select covers the time period during which the job was run.
3 Verify that the job completed successfully by searching for an entry such as the requested operation was successfully completed for a backup, or successfully read (restore) backup id... for a restore.
4 Obtain the NetBackup performance statistics from the following entries in the report.
Note: The messages shown here will vary according to the locale setting of the master server.
Entry: started backup job for client <name>, policy <name>, schedule <name> on storage unit <name>
Statistic: The Date and Time fields for this entry show the time at which the backup job started.

Entry: successfully wrote backup id <name>, copy <number>, <number> Kbytes
Statistic: For a multiplexed backup, this entry shows the size of the individual backup job, and the Date and Time fields show the time at which the job finished writing to the storage device. The overall statistics for the multiplexed backup group, including the data throughput rate to the storage device, are found in a subsequent entry below.

Entry: successfully wrote <number> of <number> multiplexed backups, total Kbytes <number> at Kbytes/sec
Statistic: For multiplexed backups, this entry shows the overall statistics for the multiplexed backup group, including the data throughput rate.

Entry: successfully wrote backup id <name>, copy <number>, fragment <number>, <number> Kbytes at <number> Kbytes/sec
Statistic: For non-multiplexed backups, this entry essentially combines the information in the previous two entries for multiplexed backups into one entry showing the size of the backup job, the data throughput rate, and the time, in the Date and Time fields, at which the job finished writing to the storage device.

Entry: the requested operation was successfully completed
Statistic: The Date and Time fields for this entry show the time at which the backup job completed. This value is later than the successfully wrote entry above because it includes extra processing time at the end of the job for tasks such as NetBackup image validation.

Entry: begin reading backup id <name>, (restore), copy <number>, fragment <number> from media id <name> on drive index <number>
Statistic: The Date and Time fields for this entry show the time at which the restore job started reading from the storage device. (The latter part of the entry is not shown for restores from disk, as it does not apply.)

Entry: successfully restored from backup id <name>, copy <number>, <number> Kbytes
Statistic: For a multiplexed restore (generally speaking, all restores from tape are multiplexed restores, as non-multiplexed restores require additional action from the user), this entry shows the size of the individual restore job, and the Date and Time fields show the time at which the job finished reading from the storage device. The overall statistics for the multiplexed restore group, including the data throughput rate, are found in a subsequent entry below.

Entry: successfully restored <number> of <number> requests <name>, read total of <number> Kbytes at <number> Kbytes/sec
Statistic: For multiplexed restores, this entry shows the overall statistics for the multiplexed restore group, including the data throughput rate.

Entry: successfully read (restore) backup id media <number>, copy <number>, fragment <number>, <number> Kbytes at <number> Kbytes/sec
Statistic: For non-multiplexed restores (generally speaking, only restores from disk are treated as non-multiplexed restores), this entry essentially combines the information from the previous two entries for multiplexed restores into one entry showing the size of the restore job, the data throughput rate, and the time, in the Date and Time fields, at which the job finished reading from the storage device.
Additional information
The NetBackup All Log Entries report also has entries similar to those described above for other NetBackup operations, such as the image duplication operations used to create additional copies of a backup image. Those entries have a very similar format and may be useful for analyzing the performance of NetBackup for those operations.
The bptm debug log file for tape backups (or the bpdm log file for disk backups) contains the entries that are in the All Log Entries report, as well as additional detail about the operation that may be useful for performance analysis. One example of this additional detail is the intermediate data throughput rate message for multiplexed backups:
... intermediate after <number> successful, <number> Kbytes at <number> Kbytes/sec
This message is generated whenever an individual backup job completes that is part of a multiplexed backup group. In the debug log file for a multiplexed backup group consisting of three individual backup jobs, for example, there could be two intermediate status lines, then the final (overall) throughput rate.
For a backup operation, the bpbkar debug log file also contains additional detail about the operation that may be useful for performance analysis. Keep in mind, however, that writing the debug log files during the NetBackup operation introduces some overhead that would not normally be present in a production environment. Factor that additional overhead into any calculations done on data captured while debug log files are in use.
The information in the All Log Entries report is also found in /usr/openv/netbackup/db/error (UNIX) or install_path\NetBackup\db\error (Windows). See the NetBackup Troubleshooting Guide to learn how to set up NetBackup to write these debug log files.
Where X:\ is the path being backed up.
4 Check the time it took NetBackup to move the data from the client disk:
UNIX: The start time is the first PrintFile entry in the bpbkar log; the end time is the entry Client completed sending data for backup; and the amount of data is given in the entry Total Size.
Windows: Check the bpbkar log for the entry Elapsed time.
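The earlier steps of this procedure run bpbkar (UNIX) or bpbkar32 (Windows) with its output discarded so that only the client read speed is measured. A hedged sketch of that kind of invocation on UNIX follows; the -nocont option and the choice of / as the file system are assumptions here, so confirm the exact command against the bpbkar documentation for your release:
/usr/openv/netbackup/bin/bpbkar -nocont / > /dev/null
The timing entries referred to in step 4 are then taken from the bpbkar log.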
To measure disk I/O using the bpdm_dev_null touch file (UNIX only)
For UNIX systems, the procedure below can be useful as a follow-on to the bpbkar procedure (above). If the bpbkar procedure shows that disk read performance is not the bottleneck and does not help isolate the problem, the bpdm_dev_null procedure described below may be helpful. If the bpdm_dev_null procedure shows poor performance, the bottleneck is somewhere in the data transfer between the bpbkar process on the client and the bpdm process on the server. The problem may involve the network, or shared memory (such as not enough buffers, or buffers that are too small). To change shared memory settings, see Shared memory (number and size of data buffers) on page 102.
Caution: If not used correctly, the following procedure can lead to data loss. Touching the bpdm_dev_null file redirects all disk backups to /dev/null, not just those backups using the storage unit created by this procedure. Disable active production policies for the duration of this test and remove the bpdm_dev_null touch file as soon as this test is complete.
1 Enter the following:
touch /usr/openv/netbackup/bpdm_dev_null
Note: The bpdm_dev_null file redirects any backup that uses a disk storage unit to /dev/null.
2 Create a new disk storage unit, using /tmp or some other directory as the image directory path.
3 Create a policy that uses the new disk storage unit.
4 Run a backup using this policy. NetBackup creates a file in the storage unit directory as if this were a real backup to disk. This degenerate image file will be zero bytes long. To remove the zero-length file and clear the NetBackup catalog of a backup that cannot be restored, run this command:
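One hedged possibility, assuming the catalog entry is expired by its backup ID with bpexpdate (check the NetBackup Commands Guide for the exact form intended here):
/usr/openv/netbackup/bin/admincmd/bpexpdate -backupid backupid -d 0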
where backupid is the name of the file residing in the storage unit directory.
Windows Performance Monitor utility included with Windows. For information about using the Performance Monitor, refer to your Microsoft documentation. The Performance Monitor organizes information by object, counter, and instance. An object is a system resource category, such as a processor or physical disk. Properties of an object are counters. Counters for the Processor object include %Processor Time, which is the default counter, and Interrupts/sec. Duplicate counters are handled via instances. For example, to monitor the %Processor Time of a specific CPU on a multiple CPU system, the Processor object is selected, then the %Processor Time counter for that object is selected, followed by the specific CPU instance for the counter. When you use the Performance Monitor, you can view data in real time format or collect the data in a log for future analysis. Specific components to evaluate include CPU load, memory use, and disk load. Note: It is recommended that a remote host be used for monitoring of the test host, to reduce load that might otherwise skew results.
Note: The default scale for the Processor Queue Length may not be equal to 1. Be sure to read the data correctly. For example, if the default scale is 10x, then a reading of 40 actually means that only 4 processes are waiting.
Committed Bytes: Committed Bytes displays the size of virtual memory that has been committed, as opposed to reserved. Committed memory must have disk storage available, or must not require the disk storage because the main memory is large enough. If the number of Committed Bytes approaches or exceeds the amount of physical memory, you may encounter issues with page swapping.
Page Faults/sec: Page Faults/sec is a count of the page faults in the processor. A page fault occurs when a process refers to a virtual memory page that is not in its Working Set in main memory. A high page fault rate may indicate insufficient memory.
To disable these counters and cancel disk monitoring:
1 From a command prompt, type:
diskperf -n
When you monitor disk performance, use the %Disk Time counter for the PhysicalDisk object to track the percentage of elapsed time that the selected disk drive is busy servicing read or write requests. Also monitor the Avg. Disk Queue Length counter and watch for values greater than 1 that last for more than one second. Values greater than 1 for more than a second indicate that multiple processes are waiting for the disk to service their requests. Several techniques may be used to increase disk performance, including:
Check the fragmentation level of the data. A highly fragmented disk limits throughput levels. Use a disk maintenance utility to defragment the disk.
Consider adding additional disks to the system to increase performance. If multiple processes are attempting to log data simultaneously, dividing the data among multiple physical disks may help.
Determine if the data transfer involves a compressed disk. The use of Windows compression to automatically compress the data on the drive adds additional overhead to disk read or write operations, adversely affecting the performance of NetBackup. Use Windows compression only if it is needed to avoid a disk full condition.
Consider converting to a system based on a Redundant Array of Inexpensive Disks (RAID). Though more expensive, RAID devices generally offer greater throughput and, depending on the RAID level employed, improved reliability.
Determine what type of controller technology is being used to drive the disk. Consider whether a different system would yield better results. See the table Drive controller data transfer rates on page 21 for throughput rates for common controllers.
Chapter
Overview on page 90
The data transfer path on page 90
Basic tuning suggestions for the data path on page 91
NetBackup client performance on page 95
NetBackup network performance on page 96
NetBackup server performance on page 102
NetBackup storage device performance on page 126
Overview
This chapter contains information on ways to optimize NetBackup. This chapter is not intended to provide tuning advice for particular systems. If you would like help fine-tuning your NetBackup installation, please contact Symantec Consulting Services. Before examining the factors that affect backup performance, note that an important first step is to ensure that your system meets NetBackup's recommended minimum requirements. Refer to the NetBackup Installation Guide and NetBackup Release Notes for information about these requirements. Additionally, Symantec recommends that you have the most recent NetBackup software patch installed. Many performance issues can be traced to hardware or other environmental issues. A basic understanding of the entire data transfer path is essential in determining the maximum obtainable performance in your environment. Poor performance is often the result of poor planning, which can be based on unrealistic expectations of any particular component of the data transfer path.
Basic tuning suggestions for the data path
Tuning suggestions:
Use multiplexing. Multiplexing is a NetBackup option that lets you write multiple data streams from several clients at once to a single tape drive or several tape drives. Multiplexing can be used to improve the backup performance of slow clients, multiple slow networks, and many small backups (such as incremental backups). Multiplexing reduces the time each job spends waiting for a device to become available, thereby making the best use of the transfer rate of your storage devices. Multiplexing is not recommended when restore speed is of paramount interest or when your tape drives are slow. To reduce the impact of multiplexing on restore times, you can improve your restore performance by reducing the maximum fragment size for the storage units. If the fragment size is small, so that the backup image is contained in several fragments, a NetBackup restore can quickly skip to the specific fragment containing the file to be restored. In contrast, if the fragment size is large enough to contain the entire image, the NetBackup restore starts at the very beginning of the image and reads through the image until it finds the desired file. Multiplexed backups can be de-multiplexed to improve restore performance by using bpduplicate to move fragmented images to a sequential image on a new tape.
Refer to the Veritas NetBackup System Administrator's Guide for more information about using multiplexing.
Consider striping a disk volume across drives. A striped set of disks will pull data from all disk drives concurrently, allowing faster data transfers between disk drives and tape drives.
Maximize the use of your backup windows. You can configure all your incremental backups to happen at the same time every day and stagger the execution of your full backups across multiple days. Large systems could be backed up over the weekend while smaller systems are spread over the week. You can even start full backups earlier than the incremental backups. They might finish before the incremental backups and give you back all or most of your backup window to finish the incremental backups.
Convert large clients into media servers to decrease backup times and reduce network traffic. Any machine with locally-attached drives can be used as a media server to back up itself or other systems. By converting large client systems into media servers, your backup data no longer travels over the network (except for catalog data), and you get the fastest transfer speeds afforded by locally-attached devices. Another benefit of media servers is that you can use them to balance the load of backing up other clients for your NetBackup master. A media server can back up clients on a network where it has a local connection, thus saving network traffic for a master that might have to go over routers to communicate with those clients. A special case of a media server is a SAN Media Server, which is a NetBackup media server that backs up itself only and comes at a lower cost than a regular media server.
Use dedicated private networks to decrease backup times and network traffic. If you configure one or more networks dedicated to backups, you can reduce the time it takes to back up the systems on those networks and reduce or eliminate traffic on your enterprise networks. In addition, you can convert to faster technologies and even back up your systems at any time without affecting the performance of the enterprise networks (assuming that users do not mind the system loads while backups take place).
Avoid a concentration of servers on one network. If you have a concentration of large servers that you back up over the same general network, you might want to convert some of them into media servers or attach them to private backup networks. Doing either will decrease backup times and reduce network traffic for your other backups.
Use dedicated backup servers to perform your backups.
When selecting a server for performing your backups, use a dedicated system just for performing backups. A server that shares the load of running several applications unrelated to backups can severely affect your performance and maintenance windows.
Use drives from tape libraries attached to other systems. You can use tape drives from a tape library attached to your master server or another media server, or you can dedicate a library to your large servers. Systems using these drives become media servers that can back up themselves and others through locally-attached drives. The robotic control arm of the library can be connected to either the master server or the media server.
Consider the requirements of backing up your catalog. Remember that the NetBackup catalog needs to be backed up. To facilitate NetBackup catalog recovery, it is highly recommended that the master server have access to a dedicated tape drive, either standalone or within a robotic library.
Try to level the backup load. You can use multiple drives to reduce backup times; however, since you may not be able to split data streams evenly, you may need to experiment with the configuration of the streams or the configuration of the NetBackup policies to spread the load across multiple drives.
Bandwidth limiting
The bandwidth limiting feature lets you restrict the network bandwidth consumed by one or more NetBackup clients on a network. The bandwidth setting appears under Host Properties > Master Servers, Properties. The actual limiting occurs on the client side of the backup connection. This feature only restricts bandwidth during backups. Restores are unaffected. When a backup starts, NetBackup reads the bandwidth limit configuration and then determines the appropriate bandwidth value and passes it to the client. As the number of active backups increases or decreases on a subnet, NetBackup dynamically adjusts the bandwidth limiting on that subnet. If additional backups are started, the NetBackup server instructs the other NetBackup clients running on that subnet to decrease their bandwidth setting. Similarly, bandwidth per client is increased if the number of clients decreases. Changes to the bandwidth value occur on a periodic basis rather than as backups stop and start. This characteristic can reduce the number of bandwidth value changes.
Load balancing
NetBackup provides ways to balance loads between servers, clients, policies, and devices. Note that these settings may interact with each other:
compensating for one issue can cause another. The best approach is to use the defaults unless you anticipate or encounter an issue.
Adjust the backup load on the server. Change the Limit jobs per policy attribute for one or more of the policies that the server is backing up. For example, decreasing Limit jobs per policy reduces the load on a server on a specific subnetwork. Reconfiguring policies or schedules to use storage units on other servers also reduces the load. Another possibility is to use bandwidth limiting on one or more clients.
Adjust the backup load on the server during specific time periods only. Reconfigure schedules that execute during the time periods of interest, so they use storage units on servers that can handle the load (assuming you are using media servers).
Adjust the backup load on the clients. Change the Maximum jobs per client global attribute. For example, increasing Maximum jobs per client increases the number of concurrent jobs that any one client can process and therefore increases the load.
Reduce the time to back up clients. Increase the number of jobs that clients can perform concurrently, or use multiplexing. Another possibility is to increase the number of jobs that the server can perform concurrently for the policy or policies that are backing up the clients.
Give preference to a policy. Increase the Limit jobs per policy attribute value for the preferred policy relative to other policies. Alternatively, increase the priority for the policy.
Adjust the load between fast and slow networks. Increase the values of Limit jobs per policy and Maximum jobs per client for the policies and clients on a faster network. Decrease these values for slower networks. Another solution is to use bandwidth limiting.
Limit the backup load produced by one or more clients. Use bandwidth limiting to reduce the bandwidth used by the clients.
Maximize the use of devices. Use multiplexing. Also, allow as many concurrent jobs per storage unit, policy, and client as possible without causing server, client, or network performance issues.
Prevent backups from monopolizing devices.
Limit the number of devices that NetBackup can use concurrently for each policy or limit the number of drives per storage unit. Another approach is to exclude some of your devices from Media Manager control.
NetBackup client performance
Disk fragmentation. Fragmentation is a condition where data is scattered around the disk in non-contiguous blocks. This condition severely impacts the data transfer rate from the disk. Fragmentation can be repaired using hard disk management utility software offered by a variety of vendors.
Number of disks. Consider adding additional disks to the system to increase performance. If multiple processes are attempting to log data simultaneously, dividing the data among multiple physical disks may help.
Disk arrays. Consider converting to a system based on a Redundant Array of Inexpensive Disks (RAID). Though more expensive, RAID devices generally offer greater throughput and, depending on the RAID level employed, improved reliability.
Controller technology. Determine what type of controller technology is being used to drive the disk. Consider whether a different system would yield better results.
Virus scanning. If virus scanning is turned on for the system, it may severely impact the performance of the NetBackup client during a backup or restore operation. This may be especially true for systems such as large Windows file servers. You may wish to disable virus scanning during backup or restore operations to avoid the impact on performance.
NetBackup notify scripts. The bpstart_notify.bat and bpend_notify.bat scripts are very useful in certain situations, such as shutting down a running application to back up its data. However, these scripts must be written with care to avoid any unnecessary lengthy delays at the start or end of the backup job. If the scripts are not performing tasks essential to the backup operation, you may want to remove them.
NetBackup software location. If the data being backed up is located on the same physical disk drive as the NetBackup installation, performance may be adversely affected, especially if NetBackup debug log files are being used. If they are being used, the extent of the degradation will be greatly influenced by the NetBackup verbose setting for the debug logs. If possible, install
NetBackup on a separate physical disk drive to avoid this disk drive contention.
Snapshots (hardware or software). If snapshots need to be taken before the actual backup of data, the time needed to take the snapshot will affect the overall performance.
Job tracker. If the NetBackup Client Job Tracker is running on the client, NetBackup gathers an estimate of the data to be backed up prior to the start of a backup job. Gathering this estimate affects the startup time, and therefore the data throughput rate, because no data is written to the NetBackup server during this estimation phase. You may wish to avoid running the NetBackup Client Job Tracker to avoid this delay.
Client location. You may wish to consider adding a locally-attached tape device to the client and changing the client to a NetBackup media server if you have a substantial amount of data on the client. For example, backing up 100 gigabytes of data to a locally-attached tape drive will generally be more efficient than backing up the same amount of data across a network connection to a NetBackup server. Of course, there are many variables to consider, such as the bandwidth available on the network, that will affect the decision to back up the data to a locally-attached tape drive as opposed to moving the data across the network.
Determining the theoretical performance of the NetBackup client software. You can use the NetBackup client command bpbkar (UNIX) or bpbkar32 (Windows) to determine the speed at which the NetBackup client can read the data to be backed up from the disk drive. This may eliminate data read speed as a possible performance bottleneck. For the procedure, see Measuring performance independent of tape or disk output on page 84.
NetBackup network performance
Network interface cards (NICs) for NetBackup servers and clients must be set to full-duplex. Both ends of each network cable (the NIC card and the switch) must be set identically as to speed and mode (both the NIC and the switch must be at full
duplex). Otherwise, problems such as a downed link, excessive or late collisions, and errors will result.
If auto-negotiate is being used, make sure that both ends of the connection are set at the same mode and speed. The higher the speed, the better. In addition to NICs and switches, all routers must be set to full duplex.
Consult the operating system documentation for instructions on how to determine and change the NIC settings.
Note: Using AUTOSENSE may cause network problems and performance issues.
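As one illustration of checking these settings, on a Linux host the current speed and duplex of an interface can be displayed with ethtool (the interface name eth0 is an example; other operating systems provide their own tools, such as ndd on Solaris):
ethtool eth0
The Speed and Duplex lines in the output should match the configuration of the switch port.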
Network load
There are two key considerations to monitor when you evaluate remote backup performance:
The amount of network traffic
The amount of time that network traffic is high
Small bursts of high network traffic for short durations will have some negative impact on the data throughput rate. However, if the network traffic remains consistently high for a significant amount of time during the operation, the network component of the NetBackup data transfer path will very likely be the bottleneck. Always try to schedule backups during times when network traffic is low. If your network is heavily loaded, you may wish to implement a secondary network which can be dedicated to backup and restore traffic.
For tape: because the default value for the NetBackup data buffer size is 65536 bytes, this formula results in a default NetBackup network buffer size of 263168 bytes for backups and 132096 bytes for restores.
For disk: because the default value for the NetBackup data buffer size is 262144 bytes, this formula results in a default NetBackup network buffer size of 1049600 bytes for backups and 525312 bytes for restores.
To set this parameter, create the following files:
UNIX
/usr/openv/netbackup/NET_BUFFER_SZ
/usr/openv/netbackup/NET_BUFFER_SZ_REST
Windows
install_path\NetBackup\NET_BUFFER_SZ
install_path\NetBackup\NET_BUFFER_SZ_REST
These files contain a single integer specifying the network buffer size in bytes. For example, to use a network buffer size of 64 Kilobytes, the file would contain 65536. If the files contain the integer 0 (zero), the default value for the network buffer size is used.
If the NET_BUFFER_SZ file exists and the NET_BUFFER_SZ_REST file does not, the contents of NET_BUFFER_SZ specify the network buffer size for both backups and restores. If the NET_BUFFER_SZ_REST file exists, its contents specify the network buffer size for restores. If both files exist, the NET_BUFFER_SZ file specifies the network buffer size for backups, and the NET_BUFFER_SZ_REST file specifies the network buffer size for restores.
Because local backup or restore jobs on the media server do not send data over the network, this parameter has no effect on those operations. It is used only by the NetBackup media server processes which read from or write to the network, specifically the bptm or bpdm processes. It is not used by any other NetBackup processes on a master server, media server, or client.
This parameter is the counterpart on the media server to the Communications Buffer Size parameter on the client, which is described below. The network buffer sizes are not required to be the same on all of your NetBackup systems for NetBackup to function properly; however, setting the Network Buffer Size parameter on the media server and the Communications Buffer Size parameter on the client (see below) to the same value has significantly improved the throughput of the network component of the NetBackup data transfer path in some installations.
Similarly, the network buffer size does not have a direct relationship with the NetBackup data buffer size (described under Shared memory (number and size of data buffers) on page 102). They are separately tunable parameters. However, setting the network buffer size to a substantially larger value than the data buffer has achieved the best performance in many NetBackup installations.
Changing this value can affect backup and restore operations on the media servers. Test backups and restores to ensure that the change you make does not negatively impact performance.
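As a minimal sketch, the following UNIX commands create the two touch files described above with a 256 KB backup buffer and a 128 KB restore buffer; these values are only illustrative and should be validated with test backups and restores on your own media server:
# Illustrative values only; 0 in either file reverts that operation to the default size.
echo 262144 > /usr/openv/netbackup/NET_BUFFER_SZ
echo 131072 > /usr/openv/netbackup/NET_BUFFER_SZ_REST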
To set the communications buffer size parameter on Windows clients:
1  From Host Properties in the NetBackup Administration Console, expand Clients and open the Client Properties > Windows Client > Client Settings dialog for the Windows client on which the parameter is to be changed.
2  Enter the desired value in the Communications buffer field.
This parameter is specified in kilobytes. The default value is 32. An extra kilobyte is added internally for backup operations; therefore, the default network buffer size for backups is 33792 bytes. In some NetBackup installations this default value is too small. Increasing the value to 128 improves performance in these installations.
Because local backup jobs on the media server do not send data over the network, this parameter has no effect on those local operations. It is used only by the NetBackup client processes which write to the network, specifically the bpbkar32 process. It is not used by any other NetBackup for Windows processes on a master server, media server, or client.
If you modify the NetBackup buffer settings, test the performance of restores with the new settings.
Note: NOSHM only affects backups when it is applied to a system with a directly-attached storage unit. NOSHM forces a local backup to run as though it were a remote backup.
A local backup is a backup of a client that has a directly-attached storage unit, such as a client that happens to be a master or media server. A remote backup is a backup that passes the data across a network connection from the client to a master or media server's storage unit.
A local backup normally has one or more bpbkar processes that read from the disk and write into shared memory, and a bptm process that reads from shared memory and writes to the tape. A remote backup has one or more bptm (child) processes that read from a socket connection to bpbkar and write into shared memory, and a bptm (parent) process that reads from shared memory and writes to the tape. NOSHM forces the remote backup model even when the client and the media server are the same system. For a local backup without NOSHM, shared memory is used between bptm and bpbkar. Whether the backup is remote or local, and whether NOSHM exists or not, shared memory is always used between bptm (parent) and bptm (child).
Note: NOSHM does not affect the shared memory used by the bptm process to buffer data being written to tape. bptm uses shared memory for any backup, local or otherwise.
In the server's bp.conf file, add one entry for each network interface:
SERVER=server-neta SERVER=server-netb SERVER=server-netc
It is okay for a client to have an entry for a server that is not currently on the same network.
Shared memory (number and size of data buffers)
Parent/child delay values
Using NetBackup wait and delay counters
Fragment size and NetBackup restores
Other restore performance issues
Table 8-1    Default number and size of shared data buffers, by NetBackup operation, on Windows. The operations covered include multiplexed backup, restore of a non-multiplexed backup, restore of a multiplexed backup, verify, import, and duplicate. The defaults shown include 16 data buffers; buffer sizes of 64K (tape) or 256K (disk) for backups; restores use the same size as was used for the backup; and a duplicate uses the backup's buffer size on the read side and 64K (tape) or 256K (disk) on the write side.
On Windows, a single tape I/O operation is performed for each shared data buffer. Therefore, this size must not exceed the maximum block size for the tape device or operating system. For Windows systems, the maximum block size is generally 64K, although in some cases customers are using a larger value successfully. For this reason, the terms tape block size and shared data buffer size are synonymous in this context.
(number_data_buffers * size_data_buffers) * number_tape_drives * max_multiplexing_setting
For example, assume that the number of shared data buffers is 16, the size of the shared data buffers is 64 Kilobytes, there are two tape drives, and the maximum multiplexing setting is four. Following the formula above, the amount of shared memory required by NetBackup is:
(16 * 65536) * 2 * 4 = 8 MB
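A quick way to sanity-check this arithmetic from a shell; the figures below are the ones used in the example above, not recommended settings:
echo $(( (16 * 65536) * 2 * 4 / 1048576 ))   # prints 8 (megabytes of shared memory)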
For disk (UNIX):
/usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_DISK
For disk (Windows):
install_path\NetBackup\db\config\NUMBER_DATA_BUFFERS_DISK
These files contain a single integer specifying the number of shared data buffers NetBackup will use. For backups (in the NUMBER_DATA_BUFFERS and NUMBER_DATA_BUFFERS_DISK files), the integer's value must be a power of 2. If the NUMBER_DATA_BUFFERS file exists, its contents determine the number of shared data buffers to be used for multiplexed and non-multiplexed backups. NUMBER_DATA_BUFFERS_DISK allows a different value when backing up to disk instead of tape. If NUMBER_DATA_BUFFERS exists but NUMBER_DATA_BUFFERS_DISK does not, NUMBER_DATA_BUFFERS applies to all backups. If both files exist, NUMBER_DATA_BUFFERS applies to tape backups and NUMBER_DATA_BUFFERS_DISK applies to disk backups. If only NUMBER_DATA_BUFFERS_DISK is present, it applies to disk backups only. If the NUMBER_DATA_BUFFERS_RESTORE file exists, its contents determine the number of shared data buffers to be used for multiplexed restores from tape.
The NetBackup daemons do not have to be restarted for the new values to be used. Each time a new job starts, bptm checks the configuration file and adjusts its behavior.
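For illustration only, and assuming the NUMBER_DATA_BUFFERS touch file lives in the same db/config directory as the disk variant shown above, the following UNIX commands would set 32 tape buffers and 64 disk buffers; both values are assumptions to be verified against your own wait and delay counters, and each must be a power of 2:
echo 32 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
echo 64 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_DISK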
For disk (UNIX):
/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_DISK
For disk (Windows):
install_path\NetBackup\db\config\SIZE_DATA_BUFFERS_DISK
This file contains a single integer specifying the size of each shared data buffer in bytes. The integer must be a multiple of 1024 (a multiple of 32 kilobytes is recommended); see the table below for valid values. The integer represents the size of one tape or disk buffer in bytes. For example, to use a shared data buffer size of 64 Kilobytes, the file would contain the integer 65536. The NetBackup daemons do not have to be restarted for the parameter values to be used. Each time a new job starts, bptm checks the configuration file and adjusts its behavior.
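A hedged example: to try 256 KB buffers for disk backups while leaving the tape value alone, you could create only the disk file (262144 is taken from Table 8-3 below; confirm your device and operating system limits before raising any buffer size):
echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_DISK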
Analyze the buffer usage by checking the bptm debug log before and after altering the size of buffer parameters.
Table 8-3    Absolute byte values to be entered in SIZE_DATA_BUFFERS
Kilobytes    SIZE_DATA_BUFFERS value
32           32768
64           65536
96           98304
128          131072
160          163840
192          196608
224          229376
256          262144
IMPORTANT: Because the data buffer size equals the tape I/O size, the value specified in SIZE_DATA_BUFFERS must not exceed the maximum tape I/O size supported by the tape drive or operating system. This is usually 256 or 128 Kilobytes. Check your operating system and hardware documentation for the maximum values. Take into consideration the total system resources and the entire network. The Maximum Transmission Unit (MTU) for the LAN network may also have to be changed. NetBackup expects the value for NET_BUFFER_SZ and SIZE_DATA_BUFFERS to be in bytes, so in order to use 32k, use 32768 (32 x 1024). Note: Some Windows tape devices are not able to write with block sizes higher than 65536 (64 Kilobytes). Backups created on a UNIX media server with SIZE_DATA_BUFFERS set to more than 65536 cannot be read by some Windows media servers. This means that the Windows media server would not be able to import or restore any images from media that were written with SIZE_DATA_BUFFERS greater than 65536.
Note: The size of the shared data buffers used for a restore operation is determined by the size of the shared data buffers in use at the time the backup was written. This file is not used by restores.
or
15:26:01 [21544] <2> mpx_setup_restore_shm: using 12 data buffers, buffer size is 65536
When you change these settings, take into consideration the total system resources and the entire network. The Maximum Transmission Unit (MTU) for the local area network (LAN) may also have to be changed.
Windows
<install_path>\NetBackup\db\config\PARENT_DELAY
<install_path>\NetBackup\db\config\CHILD_DELAY
These files contain a single integer specifying the value in milliseconds to be used for the delay corresponding to the name of the file. For example, to use a parent delay of 50 milliseconds, the PARENT_DELAY file would contain the integer 50. See Using NetBackup wait and delay counters below for more information about how to determine if you should change these values. Note: The following section refers to the bptm process on the media server during back up and restore operations from a tape storage device. If you are backing up to or restoring from a disk storage device, substitute bpdm for bptm throughout the section. For example, to activate debug logging for a disk storage device, the following directory must be created: /usr/openv/netbackup/logs/bpdm (UNIX) or install_path\NetBackup\logs\bpdm (Windows).
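As a sketch only, and assuming the UNIX counterparts of these touch files live in the same db/config directory used for the other buffer parameters, the delays could be overridden like this (the 45 and 30 millisecond values are purely illustrative, and these settings are rarely changed):
echo 45 > /usr/openv/netbackup/db/config/PARENT_DELAY    # milliseconds
echo 30 > /usr/openv/netbackup/db/config/CHILD_DELAY     # milliseconds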
Achieving a good balance between the data producer and the data consumer processes is an important factor in achieving optimal performance from the NetBackup server component of the NetBackup data transfer path.
Figure: Producer-consumer relationship during a backup. Data flows from the NetBackup client across the network into the shared buffers and from there to tape; the data producer fills the shared buffers and the data consumer empties them.
Local clients
When the NetBackup media server and the NetBackup client are part of the same system, the NetBackup client is referred to as a local client.
Backup of local client
For a local client, the bpbkar (UNIX) or bpbkar32 (Windows) process reads data from the disk during a backup and places it in the shared buffers. The bptm process reads the data from the shared buffer and writes it to tape.
Restore of local client
During a restore of a local client, the bptm process reads data from the tape and places it in the shared buffers. The tar (UNIX) or tar32 (Windows) process reads the data from the shared buffers and writes it to disk.
Remote clients
When the NetBackup media server and the NetBackup client are part of two different systems, the NetBackup client is referred to as a remote client.
Backup of remote client
The bpbkar (UNIX) or bpbkar32 (Windows) process on the remote client reads data from the disk and writes it to the network. Then a child bptm process on the media server receives data from the network and places it in the shared buffers. The parent bptm process on the media server reads the data from the shared buffers and writes it to tape.
Restore of remote client
During the restore of the remote client, the parent bptm process reads data from the tape and places it into the shared buffers. The child bptm process reads the data from the shared buffers and writes it to the network. The tar (UNIX) or tar32 (Windows) process on the remote client receives the data from the network and writes it to disk.
Roles of processes during backup and restore operations:
Operation        Data Producer                          Data Consumer
Local backup     bpbkar (UNIX) or bpbkar32 (Windows)    bptm
Remote backup    bptm (child)                           bptm (parent)
Local restore    bptm                                   tar (UNIX) or tar32 (Windows)
Remote restore   bptm (parent)                          bptm (child)
If a full buffer is needed by the data consumer but is not available, the data consumer increments the Wait and Delay counters to indicate that it had to wait for a full buffer. After a delay, the data consumer will check again for a full buffer. If a full buffer is still not available, the data consumer increments the Delay counter to indicate that it had to delay again while waiting for a full buffer. The data consumer will repeat the delay and full buffer check steps until a full buffer is available.
This sequence is summarized in the following algorithm:
while (Buffer_Is_Not_Full) {
    ++Wait_Counter;
    while (Buffer_Is_Not_Full) {
        ++Delay_Counter;
        delay (DELAY_DURATION);
    }
}
If an empty buffer is needed by the data producer but is not available, the data producer increments the Wait and Delay counters to indicate that it had to wait for an empty buffer. After a delay, the data producer will check again for an empty buffer. If an empty buffer is still not available, the data producer increments the Delay counter to indicate that it had to delay again while waiting for an empty buffer. The data producer will repeat the delay and empty buffer check steps until an empty buffer is available.
The algorithm for a data producer has a similar structure:
while (Buffer_Is_Not_Empty) {
    ++Wait_Counter;
    while (Buffer_Is_Not_Empty) {
        ++Delay_Counter;
        delay (DELAY_DURATION);
    }
}
Analysis of the Wait and Delay counter values indicates which process, producer or consumer, has had to wait most often and for how long. There are four basic Wait and Delay counter relationships:
Data Producer >> Data Consumer. The data producer has substantially larger Wait and Delay counter values than the data consumer. The data consumer is unable to receive data fast enough to keep the data producer busy. Investigate means to improve the performance of the data consumer. For a backup operation, check whether the data buffer size is appropriate for the tape drive being used (see below). If the data consumer still has substantially larger values in this case, try increasing the number of shared data buffers to improve performance (see below).
Data Producer = Data Consumer (large value). The data producer and the data consumer have very similar Wait and Delay counter values, but those values are relatively large.
This may indicate that the data producer and data consumer are regularly attempting to use the same shared data buffer. Try increasing the number of shared data buffers to improve performance (see below).
Data Producer = Data Consumer (small value). The data producer and the data consumer have very similar Wait and Delay counter values, but those values are relatively small. This indicates that there is a good balance between the data producer and data consumer, which should yield good performance from the NetBackup server component of the NetBackup data transfer path.
Data Producer << Data Consumer. The data producer has substantially smaller Wait and Delay counter values than the data consumer. The data producer is unable to deliver data fast enough to keep the data consumer busy. Investigate ways to improve the performance of the data producer. For a restore operation, check if the data buffer size (see below) is appropriate for the tape drive being used. If the data producer still has a relatively large value in this case, try increasing the number of shared data buffers to improve performance (see below).
The bullets above describe the four basic relationships possible. Of primary concern is the relationship and the size of the values. Information on determining substantial versus trivial values appears on the following pages. The relationship of these values only provides a starting point in the analysis. Additional investigative work may be needed to positively identify the cause of a bottleneck within the NetBackup data transfer path.
To determine wait and delay counter values for a local client backup:
1  Activate debug logging by creating these directories:
UNIX
/usr/openv/netbackup/logs/bpbkar
/usr/openv/netbackup/logs/bptm
Windows
install_path\NetBackup\logs\bpbkar
install_path\NetBackup\logs\bptm
2  Execute your backup. Look at the log for the data producer (bpbkar on UNIX or bpbkar32 on Windows) process in: UNIX
/usr/openv/netbackup/logs/bpbkar
Windows
install_path\NetBackup\logs\bpbkar
The line you are looking for should be similar to the following, and will have a timestamp corresponding to the completion time of the backup:
... waited 224 times for empty buffer, delayed 254 times
In this example the Wait counter value is 224 and the Delay counter value is 254. 3 Look at the log for the data consumer (bptm) process in: UNIX
/usr/openv/netbackup/logs/bptm
Windows
install_path\NetBackup\logs\bptm
The line you are looking for should be similar to the following, and will have a timestamp corresponding to the completion time of the backup:
... waited for full buffer 1 times, delayed 22 times
In this example, the Wait counter value is 1 and the Delay counter value is 22. To determine wait and delay counter values for a remote client backup: 1 Activate debug logging by creating this directory on the media server: UNIX
/usr/openv/netbackup/logs/bptm
Windows
install_path\NetBackup\logs\bptm
2  Execute your backup.
3  Look at the log for the bptm process in: UNIX
/usr/openv/netbackup/logs/bptm
Windows
install_path\NetBackup\Logs\bptm
4  Delays associated with the data producer (bptm child) process will appear as follows:
... waited for empty buffer 22 times, delayed 151 times, ...
In this example, the Wait counter value is 22 and the Delay counter value is 151. 5 Delays associated with the data consumer (bptm parent) process will appear as:
... waited for full buffer 12 times, delayed 69 times
In this example the Wait counter value is 12, and the Delay counter value is 69. To determine wait and delay counter values for a local client restore: 1 Activate logging by creating the two directories on the NetBackup media server: UNIX
/usr/openv/netbackup/logs/bptm /usr/openv/netbackup/logs/tar
Windows
install_path\NetBackup\logs\bptm install_path\NetBackup\logs\tar
2  Execute your restore. Look at the log for the data consumer (tar or tar32) process in the tar log directory created above. The line you are looking for should be similar to the following, and will have a timestamp corresponding to the completion time of the restore:
... waited for full buffer 27 times, delayed 79 times
In this example, the Wait counter value is 27, and the Delay counter value is 79. 3 Look at the log for the data producer (bptm) process in the bptm log directory created above. The line you are looking for should be similar to the following, and will have a timestamp corresponding to the completion time of the restore:
... waited for empty buffer 1 times, delayed 68 times
In this example, the Wait counter value is 1 and the delay counter value is 68. To determine wait and delay counter values for a remote client restore: 1 Activate debug logging by creating the following directory on the media server: UNIX
/usr/openv/netbackup/logs/bptm
Windows
install_path\NetBackup\logs\bptm
2  Execute your restore.
3  Look at the log for bptm in the bptm log directory created above.
4  Delays associated with the data consumer (bptm child) process will appear as follows:
... waited for full buffer 36 times, delayed 139 times
In this example, the Wait counter value is 36 and the Delay counter value is 139. 5 Delays associated with the data producer (bptm parent) process will appear as follows:
... waited for empty buffer 95 times, delayed 513 times
In this example the Wait counter value is 95 and the Delay counter value is 513. Note: When you run multiple tests, you can rename the current log file. NetBackup will automatically create a new log file, which prevents you from erroneously reading the wrong set of values. Deleting the debug log file will not stop NetBackup from generating the debug logs. You must delete the entire directory. For example, to stop bptm logging, you must delete the bptm subdirectory. NetBackup will automatically generate debug logs at the specified verbose setting whenever the directory is detected.
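A small helper, assuming the legacy debug logs use the usual log.mmddyy file naming, to pull today's wait and delay summary lines on a UNIX media server or client:
today=$(date +%m%d%y)
grep "waited" /usr/openv/netbackup/logs/bpbkar/log.$today
grep "waited" /usr/openv/netbackup/logs/bptm/log.$today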
Data buffer size. The size of each shared data buffer can be found on a line similar to:
... io_init: using 65536 data buffer size
Number of data buffers. The number of shared data buffers may be found on a line similar to:
... io_init: using 16 data buffers
Parent/child delay values. The values in use for the duration of the parent and child delays can be found on a line similar to:
... io_init: child delay = 20, parent delay = 30 (milliseconds)
NetBackup Media Server Network Buffer Size. The values in use for the Network Buffer Size parameter on the media server can be found on lines similar to these in debug log files:
The receive network buffer is used by the bptm child process to read from the network during a remote backup.
...setting receive network buffer to 263168 bytes
The send network buffer is used by the bptm child process to write to the network during a remote restore.
...setting send network buffer to 131072 bytes
See NetBackup media server network buffer size on page 97 for more information about the Network Buffer Size parameter on the media server.
Suppose you wanted to analyze a local backup in which there was a 30-minute data transfer duration baselined at 5 Megabytes/second with a total data transfer of 9,000 Megabytes. Because a local backup is involved, if you refer to Roles of processes during backup and restore operations on page 110, you can determine that bpbkar (UNIX) or bpbkar32 (Windows) is the data producer and bptm is the data consumer. You would next want to determine the Wait and Delay values for bpbkar (or bpbkar32) and bptm by following the procedures described in Determining wait and delay counter values on page 112. For this example, suppose those values were:
Process                               Wait     Delay
bpbkar (UNIX) or bpbkar32 (Windows)   29364    58033
bptm                                  95       105
Using these values, you can determine that the bpbkar (or bpbkar32) process is being forced to wait by a bptm process which cannot move data out of the shared buffer fast enough. Next, you can determine time lost due to delays by multiplying the Delay counter value by the parent or child delay value, whichever applies. In this example, the bpbkar (or bpbkar32) process uses the child delay value, while the bptm process uses the parent delay value. (The defaults for these values are 20 for child delay and 30 for parent delay.) The values are specified in milliseconds. See Parent/child delay values on page 108 for more information on how to modify these values.
Use the following equations to determine the amount of time lost due to these delays:
bpbkar (UNIX) or bpbkar32 (Windows): 58033 delays X 0.020 seconds = 1160 seconds = 19 minutes 20 seconds
bptm: 105 delays X 0.030 seconds = 3 seconds
This is useful in determining that the delay duration for the bpbkar (or bpbkar32) process is significant. If this delay were entirely removed, the resulting transfer time of 10:40 (total transfer time of 30 minutes minus delay of 19 minutes and 20 seconds) would indicate a throughput value of 14 Megabytes/sec, nearly a threefold increase. This type of performance increase would warrant expending effort to investigate how the tape drive performance can be improved. The number of delays should be interpreted within the context of how much data was moved. As the amount of data moved increases, the significance threshold for counter values increases as well. Again, using the example of a total of 9,000 Megabytes of data being transferred, assume a 64-Kilobytes buffer size. You can determine the total number of buffers to be transferred using the following equation:
Number_Kbytes = 9,000 X 1024 = 9,216,000 Kilobytes
Number_Slots = 9,216,000 / 64 = 144,000
Each Wait counter value can now be expressed as a percentage of the total number of buffers transferred:
bpbkar (UNIX) or bpbkar32 (Windows): 29364 / 144,000 = 20.39%
bptm: 95 / 144,000 = 0.07%
In this example, in the 20 percent of cases where the bpbkar (or bpbkar32) process needed an empty shared data buffer, that shared data buffer has not yet been emptied by the bptm process. A value this large indicates a serious issue,
and additional investigation would be warranted to determine why the data consumer (bptm) is having issues keeping up. In contrast, the delays experienced by bptm are insignificant for the amount of data transferred. You can also view the Delay and Wait counters as a ratio:
bpbkar (UNIX) or bpbkar32 (Windows): 58033 / 29364 = 1.98
In this example, on average the bpbkar (or bpbkar32) process had to delay twice for each wait condition that was encountered. If this ratio is substantially large, you may wish to consider increasing the parent or child delay value, whichever one applies, to avoid the unnecessary overhead of checking for a shared data buffer in the correct state too often. Conversely, if this ratio is close to 1, you may wish to consider reducing the applicable delay value to check more often and see if that increases your data throughput performance. Keep in mind that the parent and child delay values are rarely changed in most NetBackup installations. The preceding information explains how to determine if the values for Wait and Delay counters are substantial enough for concern. The Wait and Delay counters are related to the size of data transfer. A value of 1,000 may be extreme when only 1 Megabyte of data is being moved. The same value may indicate a well-tuned system when gigabytes of data are being moved. The final analysis must determine how these counters affect performance by considering such factors as how much time is being lost and what percentage of time a process is being forced to delay.
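To avoid repeating this arithmetic by hand, a throwaway awk one-liner can reproduce the example's figures; the counters and the 20-millisecond child delay below are the example values above, not defaults to copy:
awk -v waits=29364 -v delays=58033 -v delay_ms=20 'BEGIN {
    lost = delays * delay_ms / 1000
    printf "time lost to delays: %d seconds (%.1f minutes)\n", lost, lost / 60
    printf "delay/wait ratio:    %.2f\n", delays / waits
}'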
bptm read waits The bptm debug log contains messages such as,
...waited for full buffer 1681 times, delayed 12296 times
The first number in the message is the number of times bptm waited for a full buffer, which is the number of times bptm write operations waited for data from the source. If, using the technique described in the section Determining wait and delay counter values on page 112, you determine that the Wait counter indicates a performance issue, then changing the number of buffers will not help, but adding multiplexing may help.
bptm write waits The bptm debug log contains messages such as,
...waited for empty buffer 1883 times, delayed 14645 times
The first number in the message is the number of times bptm waited for an empty buffer, which is the number of times bptm experienced data arriving from the source faster than the data could be written to tape. If, using the technique described in the section Determining wait and delay counter values on page 112, you determine that the Wait counter indicates a performance issue, then reduce the multiplexing factor if you are using multiplexing. Also, adding more buffers may help.
bptm delays The bptm debug log contains messages such as,
...waited for empty buffer 1883 times, delayed 14645 times
The second number in the message is the number of times bptm delayed while waiting for an available buffer. If, using the technique described in the section Determining wait and delay counter values on page 112, you determine that the Delay counter indicates a performance issue, investigate further. Each delay interval is 30 milliseconds.
Larger fragment sizes usually favor backup performance, especially when backing up large amounts of data. Creating smaller fragments will slow down large backups: each time a new fragment is created, the backup stream is interrupted.
Larger fragment sizes do not hinder performance when restoring large amounts of data. But when restoring a few individual files, larger fragments may slow down the restore. Larger fragment sizes do not hinder performance when restoring from non-multiplexed backups. For multiplexed backups, larger fragments may slow down the restore. In multiplexed backups, blocks from several images can be mixed together within a single fragment. During restore, NetBackup positions to the nearest fragment and starts reading the data from there, until it comes to the desired file. Splitting multiplexed backups into smaller fragments can improve restore performance. During restores, newer, faster devices can handle large fragments well. Slower devices, especially if they do not use fast locate block positioning, will restore individual files faster if fragment size is smaller. (In some cases, SCSI fast tape positioning can improve restore performance.)
Note: Unless you have particular reasons for creating smaller fragments (such as when restoring a few individual files, restoring from multiplexed backups, or restoring from older equipment), larger fragment sizes are likely to yield better overall performance.
Example 1:
Assume that you are backing up four streams to a multiplexed tape, that each stream is a single 1-gigabyte file, and that the default maximum fragment size of 1 TB has been specified. The resulting backup image logically looks like the following, where TM denotes a tape mark (file mark) that indicates the start of a fragment:
TM <4 gigabytes data> TM
When restoring any one of the 1-gigabyte files, the restore positions to the TM and then has to read all 4 gigabytes to get the 1-gigabyte file. If you set the maximum fragment size to 1 gigabyte:
TM <1 gigabyte data> TM <1 gigabyte data> TM <1 gigabyte data> TM <1 gigabyte data> TM
This does not help, since the restore still has to read all four fragments to pull out the 1 gigabyte of the file being restored.
Example 2:
This is the same as Example 1, but assume four streams are backing up 1 gigabyte worth of /home or C:\. With the maximum fragment size (Reduce fragment size) set to a default of 1 TB (and assuming all streams perform at roughly the same rate), you again end up with:
TM <4 gigabytes data> TM
Restoring /home/file1 or C:\file1 and /home/file2 or C:\file2 from one of the streams will have to read as much of the 4 gigabytes as necessary to restore all the data. But if you set Reduce fragment size to 1 gigabyte, the image looks like this:
TM <1 gigabyte data> TM <1 gigabyte data> TM <1 gigabyte data> TM <1 gigabyte data> TM
In this case, /home/file1 or C:\file1 starts in the second fragment, and bptm positions to the second fragment to start the restore of /home/file1 or C:\file1 (this has saved reading 1 gigabyte so far). After /home/file1 is done, if /home/file2 or C:\file2 is in the third or fourth fragment, the restore can position to the beginning of that fragment before it starts reading as it looks for the data. These examples illustrate that whether fragmentation benefits a restore depends on what the data is, what is being restored, and where in the image the data is. In Example 2, reducing the fragment size from 1 gigabyte to half a gigabyte (512 Megabytes) increases the chance that the restore can locate data by skipping instead of reading when restoring relatively small amounts of an image.
NUMBER_DATA_BUFFERS_RESTORE setting
This parameter can help keep other NetBackup processes busy while a multiplexed tape is positioned during a restore. Increasing this value causes NetBackup buffers to occupy more physical RAM. This parameter only applies to multiplexed restores. For more information on this parameter, see Shared memory (number and size of data buffers) on page 102.
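A hedged sketch, assuming this touch file sits in the same db/config directory as the other buffer files described earlier: to allow 32 restore buffers for multiplexed tape restores, you could enter:
echo 32 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_RESTORE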
Windows
install_directory\bin\admincmd\bpimage -create_image_list -client client_name
where client_name is the name of the client with many small backup images. In the directory: UNIX
/usr/openv/netbackup/db/images/client_name
Windows
install_path\NetBackup\db\images\client_name
Do not edit these files, because they contain offsets and byte counts that are used for seeking to and reading the image information. Note: These files increase the size of the client directory.
where value is the number that provides the best performance for the system. The optimum value varies from system to system and may require experimentation; a suggested starting value is 20. In any case, the value must not exceed 500 milliseconds, as this may break TCP/IP. Once the optimum value for the system is found, the command that sets the value can be placed in a script under the /etc/rc2.d directory so that it is executed at boot time.
When bprd, the request daemon on the master server, receives the first stream of a multiplexed restore request, it triggers the MPX_RESTORE_DELAY timer to start counting the configured amount of time. At this point, bprd watches and waits for related multiplexed jobs from the same client before starting the overall job. If another associated stream is received within the timeout period, it is added to the total job, and the timer is reset to the MPX_RESTORE_DELAY period. Once the timeout has been reached without an additional stream being received by bprd, the timeout window closes, all associated restore requests are sent to bptm, and a tape is mounted. If any associated restore requests are received after this event, they are queued to wait until the tape that is now In Use is returned to an idle state.
If MPX_RESTORE_DELAY is not set high enough, NetBackup may need to mount and read the same tape multiple times to collect all of the header information necessary for the restore. Ideally, NetBackup would read a multiplexed tape, collecting all of the header information it needs, with a single pass of the tape, thus minimizing the amount of time to restore.
Example (Oracle):
Suppose that MPX_RESTORE_DELAY is not set in the bp.conf file, so its value is the default of 30 seconds. Suppose also that you initiate a restore from an Oracle RMAN backup that was backed up using 4 channels or 4 streams, and you use the same number of channels to restore. RMAN passes NetBackup a specific data request, telling NetBackup what information it needs to start and complete the restore. The first request is passed and received by NetBackup in 29 seconds, causing the MPX_RESTORE_DELAY timer to be reset. The next request is passed and received by NetBackup in 22 seconds, so again the timer is reset. The third request is received 25 seconds later, resetting the timer a third time, but the fourth request is received 31 seconds after the third. Since the fourth request was not received within the restore delay interval, NetBackup only starts three of the four restores. Instead of reading from the tape once, NetBackup queues the fourth restore request until the previous three requests are completed. Since all of the multiplexed images are on the same tape, NetBackup mounts, rewinds, and reads the entire tape again to collect the multiplexed images for the fourth restore request. Note that in addition to NetBackup's reading the tape twice, RMAN waits to receive all the necessary header information before it begins the restore. If MPX_RESTORE_DELAY had been larger than 30 seconds, NetBackup would have received all four restore requests within the restore delay window and collected all the necessary header information with one pass of the tape. Oracle would have started the restore after this one tape pass, improving the restore performance significantly.
MPX_RESTORE_DELAY needs to be set with caution, because it can decrease performance if its value is set too high. Suppose, for instance, that MPX_RESTORE_DELAY is set to 1800 seconds. When the final associated restore request arrives, NetBackup resets the request delay timer as it did with the previous requests. NetBackup then must wait for the entire 1800-second interval before it can start the restore. Therefore, try to set the value of MPX_RESTORE_DELAY so that it is neither too high nor too low.
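As an illustration of the trade-off above, and assuming the usual OPTION = value syntax for bp.conf entries, a master server could raise the collection window to two minutes like this; pick a value only slightly larger than the observed gap between related restore requests:
echo "MPX_RESTORE_DELAY = 120" >> /usr/openv/netbackup/bp.conf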
Media positioning
When a backup or restore is performed, the storage device must position the tape so that the data is over the read/write head. Depending on the location of the data and the overall performance of the media device, this can take a significant amount of time. When you conduct performance analysis with media containing multiple images, it is important to account for the time lag that occurs before the data transfer starts.
Tape streaming
If a tape device is being used at its most efficient speed, it is said to be streaming the data onto the tape. Generally speaking, if a tape device is streaming, there will be little physical stopping and starting of the media. Instead the media will be constantly spinning within the tape drive. If the tape device is not being used at its most efficient speed, it may continually start and stop the media from spinning. This behavior is the opposite of tape streaming and usually results in a poor data throughput rate.
Data compression
Most tape devices support some form of data compression within the tape device itself. Compressible data (such as text files) yields a higher data throughput rate than non-compressible data, if the tape device supports hardware data compression.
Tape devices typically come with two performance rates: maximum throughput and nominal throughput. Maximum throughput is based on how fast compressible data can be written to the tape drive when hardware compression is enabled in the drive. Nominal throughput refers to rates achievable with non-compressible data. Note: Tape drive data compression cannot be set by NetBackup. Follow the instructions provided with your OS and tape drive to be sure data compression is set correctly. In general, tape drive data compression is preferable to client (software) compression such as that available in NetBackup. Client compression may be desirable in some cases, such as for reducing the amount of data transmitted across the network for a remote client backup. See Tape versus client compression on page 133 for more information.
Chapter 9
Tuning other NetBackup components
Multiplexing and multi-streaming on page 130
Encryption on page 133
Compression on page 133
Using both encryption and compression on page 134
NetBackup java on page 134
Vault on page 134
Fast recovery with bare metal restore on page 135
Backing up many small files on page 135
Figure 9-1    Multiplexing diagram (multiple backup streams from the server written to a single tape drive)
Multi-streaming writes multiple data streams, each to its own tape drive, unless multiplexing is used.
Figure 9-2    Multi-streaming diagram (multiple backup streams from the server, each written to its own tape drive)
Here are some things to consider with regard to multiplexing: Experiment with different multiplexing factors to find one where the tape drive is just streaming, that is, where the writes just fill the maximum bandwidth of your drive. This is the optimal multiplexing factor. For instance, if you determine that you can get 5 Megabytes/sec from each of multiple concurrent read streams, then you would use a multiplexing factor of two to get the maximum throughput to a DLT7000 (that is, 10 Megabytes/sec).
Use a higher multiplexing factor for incremental backups.
Use a lower multiplexing factor for local backups.
Expect the duplication of a multiplexed tape to take a longer period of time if it is demultiplexed, because multiple read passes of the source tape must be made.
When you duplicate a multiplexed backup, demultiplex it. By demultiplexing the backups when they are duplicated, the time for recovery is significantly reduced.
Do not use multi-streaming on single mount points. Multi-streaming takes advantage of the ability to stream data from several devices at once. This permits backups to take advantage of Read Ahead on a spindle or set of spindles in RAID environments. Multi-streaming from a single mount point encourages head thrashing and may result in degraded performance. Only conduct multistreamed backups against single mount points if they are mirrored (RAID 1). However, this also is likely to result in degraded performance.
Multiplexing
To use multiplexing effectively, you must understand the implications of multiplexing on restore times. Multiplexing may decrease overall backup time when you are backing up large numbers of clients over slow networks, but it does so at the cost of recovery time. Restores from multiplexed tapes must pass over all nonapplicable data, which increases restore times. When recovery is required, demultiplexing causes delays in the restore process, because NetBackup must do more tape searching to accomplish the restore. Test restores before the need to do a restore arises, to determine the impact of multiplexing on restore performance. When you initially set up a new environment, keep the multiplexing factor low. Typically, a multiplexing factor of four or less does not highly impact the speed of restores, depending on the type of drive and the type of system. If the backups do not finish within their assigned window, multiplexing can be increased to meet the window. However, increasing the multiplexing factor provides diminishing returns as the number of multiplexed clients increases. The optimum multiplexing factor is the number of clients needed to keep the buffers full for a single tape drive. Set the multiplexing factor to four and do not multistream. Run benchmarks in this environment. Then, if needed, you can begin to change the values involved until both the backup and restore window parameters are met.
Multi-streaming
The NEW_STREAM directive is useful for fine-tuning streams so that no disk subsystem is under-utilized or over-utilized.
Encryption
When the NetBackup encryption option is enabled, your backups may run slower. How much slower depends on the throttle point in your backup path. If the network is the issue, encryption should not hinder performance. If the network is not the issue, then encryption may slow down the backup. Note that some local backups actually ran faster with encryption than without it. In some field test cases, memory utilization has been found to be roughly the same with and without encryption.
Compression
Two types of compression can be used with NetBackup, client compression (configured in the NetBackup policy) and tape drive compression (handled by the device hardware). Some or all of the files may also have been compressed by other means prior to the backup.
Tape drive compression is almost always preferable to client compression. Tape compression offloads the compression task from the client and server.
Avoid using both tape compression and client compression, as this can actually increase the amount of backed-up data. Only in rare cases is it beneficial to use client (software) compression. For very dense data, compression algorithms take a long time and often increase the overall size of the images when compressing an already compressed image. In cases where the files are already compressed, devices should be pointed to native device drivers. In other cases, NetBackup client compression should be turned off, and the hardware should handle the compression. On UNIX: client compression reduces the amount of data sent over the network, but impacts the client. The NetBackup client configuration setting MEGABYTES_OF_MEMORY may help client performance. It is undesirable to compress files which are already compressed. If you find that this is happening with your backups, refer to the NetBackup configuration option COMPRESS_SUFFIX. Edit this setting through bpsetconfig.
NetBackup java
For performance improvement, refer to the following sections in the NetBackup System Administrators Guide for UNIX and Linux, Volume I: Configuring the NetBackup-Java Administration Console, and the subsection NetBackup-Java Performance Improvement Hints. In addition, the NetBackup Release Notes may contain information about NetBackup Java performance.
Vault
Refer to the Best Practices chapter of the NetBackup Vault System Administrators Guide.
Use the FlashBackup (or FlashBackup-Windows) policy type. This is a feature of NetBackup Advanced Client. FlashBackup is described in the NetBackup Advanced Client System Administrator's Guide. See FlashBackup on page 136 of this Tuning guide for a related tuning issue.
On Windows, make sure virus scans are turned off (this may double performance).
Snap a mirror (such as with the FlashSnap method in Advanced Client) and back that up as a raw partition. This does not allow individual file restore from tape.
Turn off or reduce logging. The NetBackup logging facility has the potential to impact the performance of backup and recovery processing. Logging is usually enabled only to troubleshoot a NetBackup problem, so that any performance impact is short-lived. The performance impact is determined by the amount of logging used and the verbosity level set.
Make sure the NetBackup buffer size is the same on both the servers and clients.
Consider upgrading NIC drivers as new releases appear.
Run the following bpbkar throughput test on a Windows client:
install_path\NetBackup\bin\bpbkar32 -nocont path > NUL 2> output_file
(for example, C:\Veritas\Netbackup\bin\bpbkar32 -nocont c:\ > NUL 2> temp.f)
When initially configuring the Windows server, optimize TCP/IP throughput as opposed to shared file access.
Always select the choice of boosting background performance on Windows versus foreground performance.
Turn off the NetBackup Client Job Tracker if the client is a system server.
Regularly review the patch announcements for every server OS. Install patches that affect TCP/IP functions, such as those correcting out-of-sequence delivery of packets.
FlashBackup
If using advanced client FlashBackup with a copy-on-write snapshot method
If you are using the FlashBackup feature of Advanced Client with a copy-on-write method such as nbu_snap, assign the snapshot cache device to a separate hard drive. This will improve performance by reducing disk contention and potential head thrashing due to the writing of data to maintain the snapshot.
How to adjust the FlashBackup read buffer for Solaris clients
1  Create the following touch file on each Solaris client:
/usr/openv/netbackup/FBU_READBLKS
2  Enter the desired values in the FBU_READBLKS file, as follows. On the first line of the file, enter an integer value for the read buffer size in bytes for full backups and/or the read buffer size in bytes for incremental backups. The default is to read the raw partition in 131072 bytes (128 KB) during full backups and in 32768 bytes (32 KB) for incremental backups. If changing both values, separate them with a space. For example, to set the full backup read buffer to 256 KB and the incremental read buffer to 64 KB, enter the following on the first line of the file:
262144 65536
You can use the second line of the file to set the tape record write size, also in bytes. The default is the same size as the read buffer. The first entry on the second line sets the full backup write buffer size, the second value sets the incremental backup write buffer size. Note: Resizing the read buffer for incremental backups can result in a faster backup in some cases, and a slower backup in others. The result depends on such factors as the location of the data to be read, the size of the data to be read relative to the size of the read buffer, and the read characteristics of the storage device and the I/O stack. Experimentation may be necessary to achieve the best setting.
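Using the values from the example above, one way to populate the file on a Solaris client (first value: full-backup read buffer in bytes; second value: incremental read buffer in bytes):
printf "262144 65536\n" > /usr/openv/netbackup/FBU_READBLKS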
Chapter 10
Hardware performance hierarchy on page 140
Hardware configuration examples on page 147
Tuning software for better performance on page 148
Note: The critical factors in performance are not software-based. They are hardware selection and configuration. Hardware has roughly four times the weight that software has in determining performance.
Performance hierarchy diagram:
Level 5 — host memory
Level 4 — PCI bridges and PCI buses connecting PCI cards (for example, PCI card 1, PCI card 2, and PCI card 3)
Level 3 — fibre channel connections
Level 2 — array RAID controllers
Level 1 — shelves, shelf adaptors, and drives
In general, all data going to or coming from disk must pass through host memory. In the following diagram, a dashed line shows the path that the data takes through a media server. Figure 10-4 Data stream in NetBackup media server to arrays
(Figure 10-4 labels: Memory, PCI bridge, PCI bus, PCI card; the dashed line marks data moving through host memory.)
The data moves up through the ethernet PCI card at the far right. The card sends the data across the PCI bus and through the PCI bridge into host memory. NetBackup then writes this data to the appropriate location. In a disk example, the data passes through one or more PCI bridges, over one or more PCI buses, through one or more PCI cards, across one or more fibre channels, and so on. Sending data through more than one PCI card increases bandwidth by breaking up the data into large chunks and sending a group of chunks at the same time to multiple destinations. For example, a write of 1 MB could be split into 2 chunks going to 2 different arrays at the same time. If the path to each array is x bandwidth, the aggregate bandwidth will be approximately 2x. Each level in the Performance Hierarchy diagram represents the transitions over which data will flow. These transitions have bandwidth limits. Between each level there are elements that can affect performance as well.
Level 1 bandwidth potential is determined by the technology used. For FC-AL, the arbitrated loop could be either 1-gigabit or 2-gigabit fibre channel. An arbitrated loop is a shared-access topology, which means that only 2 entities on the loop can communicate at one time. For example, one disk drive and the shelf adaptor can communicate. So even though a single disk drive might be capable of 2-gigabit bursts of data transfers, this bandwidth does not aggregate: multiple drives cannot communicate with the shelf adaptor at the same time to deliver multiples of the individual drive bandwidth.
Larger disk arrays will have more than one internal FC-AL. Shelves may even support 2 FC-AL so that there will be two paths between the RAID controller and every shelf, which provides for redundancy and load balancing.
Level 3 diagram: two arrays, each connected to the host by its own fibre channel link.
While this diagram shows a single point-to-point connection between an array and the host, a real-world use more typically includes a SAN fabric (having one or more fibre channel switches). The logical result is the same, in that either is a data path between the array and the host. When these paths are not arbitrated loops (for example, if they were fabric fibre channel), they do not have the shared-access topology limitations. That is, if two arrays are connected to a fibre channel switch and the host has a single fibre channel connection to the switch, the arrays can be communicating at the same time (the switch does the coordination with the host fibre channel connection). However, this does not aggregate bandwidth, since the host is still limited to a single fibre channel connection. Fibre channel is generally 1 or 2 gigabit (both arbitrated loop and fabric topology). Faster speeds are coming on the market. A general rule-of-thumb when considering protocol overhead is that one can divide the gigabit rate by 10 to get an approximate megabyte-per-second bandwidth. So, 1-gigabit fibre channel can theoretically achieve approximately 100 MB/second and 2-gigabit fibre channel can theoretically achieve approximately 200 MB/second. Fibre channel is also similar to traditional LANs, in that a given interface can support multiple connection rates. That is, a 2-gigabit fibre channel port will also connect to devices that only support 1 gigabit.
A typical host will support 2 or more PCI buses, with each bus supporting 1 or more PCI cards. A bus has a topology similar to FC-AL in that only 2 endpoints can be communicating at the same time. That is, if there are 4 cards plugged into a PCI bus, only one of them can be communicating with the host at a given instant. Multiple PCI buses are implemented to allow multiple data paths to be communicating at the same time, resulting in aggregate bandwidth gains. PCI buses have 2 key factors involved in bandwidth potential: the width of the bus - 32 or 64 bits, and the clock or cycle time of the bus (in Mhz). As a rule of thumb, a 32-bit bus can transfer 4 bytes per clock and a 64-bit bus can transfer 8 bytes per clock. Most modern PCI buses support both 64-bit and 32-bit cards. Currently PCI buses are available in 4 clock rates:
33 Mhz
66 Mhz
100 Mhz (sometimes referred to as PCI-X)
133 Mhz (sometimes referred to as PCI-X)
PCI cards also come in different clock rate capabilities. Backward-compatibility is very common; for example, a bus rated at 100 Mhz will support 100, 66, and 33 Mhz cards. Likewise, a 64-bit bus will support both 32-bit and 64-bit cards. They can also be mixed; for example, a 100-Mhz 64-bit bus can support any mix of clock and width that are at or below those values.
Note: In a shared-access topology, a slow card can negatively impact the performance of other fast cards on the same bus. This is because the bus adjusts to the right clock and width for each transfer. One moment it could be doing 100 Mhz 64 bit to card #2 and at another moment doing 33 Mhz 32 bit to card #3. Because the transfer to card #3 is so much slower, it takes longer to complete, and the time that is lost might otherwise have been used for moving data faster with card #2. Also remember that a PCI bus is a unidirectional bus: when it is doing a transfer in one direction, it cannot move data in the other direction, even from another card.
Real-world bandwidth is generally around 80% of the theoretical maximum (clock * width). The following are rough estimates for the bandwidths that can be expected:
64 bit / 33 Mhz = approximately 200 MB/second
64 bit / 66 Mhz = approximately 400 MB/second
64 bit / 100 Mhz = approximately 600 MB/second
64 bit / 133 Mhz = approximately 800 MB/second
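A small shell loop makes the same estimate; it assumes 8 bytes per clock for a 64-bit bus and the roughly 80% efficiency figure above, so its results land slightly above the rounded numbers in the list:
for mhz in 33 66 100 133; do
    peak=$(( mhz * 8 ))                        # MB/s peak, 8 bytes per clock on a 64-bit bus
    echo "64-bit/${mhz} MHz: ${peak} MB/s peak, ~$(( peak * 4 / 5 )) MB/s usable"
done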
A drive has sequential access bandwidth and average latency times for seek and rotational delays. Drives perform optimally when doing sequential I/O to disk. Non-sequential I/O forces movement of the disk head (that is, seek and rotational latency).
This movement is enormous overhead compared to the amount of data transferred, so the more non-sequential I/O a drive does, the slower it becomes. Reading or writing more than one stream at a time results in a mix of short bursts of sequential I/O with seek and rotational latency in between, which significantly degrades overall throughput. Because different drive types have different seek and rotational latency specifications, the type of drive selected has a large effect on how much degradation occurs. From best to worst, the drive types are fibre channel, SCSI, and SATA, with SATA drives usually having about twice the latency of fibre channel drives. On the other hand, SATA drives deliver about 80% of the sequential performance of fibre channel drives.
A RAID controller has cache memory of varying sizes. The controller also performs the parity calculations for RAID-5. Better controllers implement this calculation (called XOR) in hardware, which makes it faster; without hardware assistance, the controller's processor must perform it, and controller processors are not usually high performance. A PCI card can be limited either by the speed supported by its port(s) or by its clock rate to the PCI bus. A PCI bridge is usually not an issue, because it is sized to handle whatever PCI buses are attached to it.
Memory can be a limit if there is other intensive, non-I/O activity on the system. Note that the host processor(s) (CPUs) do not appear in the performance hierarchy diagram on page 140. Although CPU performance contributes to all performance, in most modern systems it is generally not the bottleneck for I/O-intensive workloads, because very little work is done at that level: the CPU must issue the read and write operations, but those operations consume little of its capacity. An exception is when older gigabit Ethernet cards are involved, because the CPU then has to perform more of the overhead of network transfers.
Example 1
A general hardware configuration could have dual 2-gigabit fibre channel ports on a single PCI card. In such a case, the following is true:
Potential bandwidth is approximately 400 MB/second. For maximum performance, the card must be plugged into at least a 66 MHz PCI slot, and no other cards on that bus should need to transfer data at the same time, because that single card will saturate the PCI bus. Putting two of these cards (4 ports total) onto the same bus and expecting them to aggregate to 800 MB/second will never work unless the bus and cards are 133 MHz.
Example 2
The following more detailed example shows a pyramid of bandwidth potentials with aggregation capabilities at some points. Suppose you have the following hardware:
1x quad 1-gigabit Ethernet card (66 MHz)
4x 2-gigabit fibre channel cards (66 MHz)
4x disk arrays, each with a 1-gigabit fibre channel port
1x Sun V880 server (2x 33 MHz PCI buses and 1x 66 MHz PCI bus)
In this case, for maximum backup and restore throughput with clients on the network, the following is one way to assemble the hardware so that no constraints limit throughput.
The quad 1-gigabit Ethernet card can sustain approximately 400 MB/second of throughput at 66 MHz. It requires at least a 66 MHz bus, because putting it in a 33 MHz bus would limit throughput to approximately 200 MB/second. It will completely saturate the 66 MHz bus, so do not put any other cards on that bus that need significant I/O at the same time.
Since the disk arrays have only 1-gigabit fibre channel ports, the fibre channel cards will degrade to 1 gigabit each.
Each card can therefore move approximately 100 MB/second, and with four cards the total is approximately 400 MB/second. However, no single PCI bus remains that can support 400 MB/second, because the 66 MHz bus is already occupied by the Ethernet card. The two 33 MHz buses can each support approximately 200 MB/second, so you can put two of the fibre channel cards on each of those buses.
This configuration can move approximately 400 MB/second for backup or restore. Real-world results of a configuration like this show approximately 350 MB/second.
Figure 10-5 (diagram not reproduced here) shows the performance hierarchy for this example configuration: host memory connected through PCI bridges and PCI buses (Levels 5 and 4) to three PCI cards at Level 3. Two of the cards connect over fibre channel to the arrays' RAID controllers and their drive shelves; another card connects to tape, Ethernet, or another non-disk device.
Each shelf in the disk array has 9 drives because it uses a RAID 5 group of 8+1 (that is, 8 data disks + 1 parity disk). The RAID controller in the array uses a stripe unit size when performing I/O to these drives. Suppose that you know the stripe unit size to be 64KB. This means that when writing a full stripe (8+1) it will write 64KB to each drive. The amount of non-parity data is 8 * 64KB, or 512KB. So, internal to the array, the optimal I/O size is 512KB. This means that crossing Level 3 to the host PCI card should perform I/O at 512KB. The diagram shows two separate RAID arrays on two separate PCI buses. You want both to be performing I/O transfers at the same time. If each is optimal at 512K, the two arrays together are optimal at 1MB.
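To restate that arithmetic as a quick check (the numbers come from the stripe unit size and RAID group described above):

echo $((8 * 64))        # 512 KB of non-parity data per full stripe in one array
echo $((2 * 8 * 64))    # 1024 KB (1MB) when both arrays transfer at the same time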
You can implement software RAID-0 to make the two independent arrays look like one logical device. RAID-0 is a plain stripe with no parity. Parity protects against drive failure, and this configuration already has RAID-5 parity protecting the drives inside the array. The software RAID-0 is configured for a stripe unit size of 512KB (the I/O size of each unit) and a stripe width of 2 (1 for each of the arrays). Since 1MB is the optimum I/O size for the volume (the RAID-0 entity on the host), that size is used throughout the rest of the I/O stack.
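As one possible illustration of such a software RAID-0 layer, the following Solaris Volume Manager command creates a two-column stripe with a 512KB interlace. The metadevice name and the two slices (standing in for the LUNs presented by the two arrays) are placeholders only, and other volume managers such as VxVM expose equivalent stripe-unit and column settings:

metainit d10 1 2 c1t0d0s2 c2t0d0s2 -i 512k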
If possible, configure the file system that is mounted over the volume for 1MB I/O as well, and have the application performing I/O to the file system also use an I/O size of 1MB. In NetBackup, I/O sizes are set in the configuration touch file .../db/config/SIZE_DATA_BUFFERS_DISK. See Changing the size of shared data buffers on page 105 for more information.
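For example, assuming the full path shown in the index for this touch file (/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_DISK) and that the value is specified in bytes, a 1MB I/O size could be requested on the media server as follows; verify the path and supported values against your NetBackup documentation before using it:

echo 1048576 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_DISK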
Chapter 11

Kernel tuning (UNIX) on page 152
Adjusting data buffer size (Windows) on page 157
Other Windows issues on page 159
Message queues
set msgsys:msginfo_msgmax = maximum message size
set msgsys:msginfo_msgmnb = maximum length of a message queue in bytes. The length of the message queue is the sum of the lengths of all the messages in the queue.
set msgsys:msginfo_msgmni = number of message queue identifiers
set msgsys:msginfo_msgtql = maximum number of outstanding messages system-wide that are waiting to be read across all message queues.

Semaphores
set semsys:seminfo_semmap = number of entries in semaphore map
set semsys:seminfo_semmni = maximum number of semaphore identifiers system-wide
set semsys:seminfo_semmns = number of semaphores system-wide
set semsys:seminfo_semmnu = maximum number of undo structures in system
set semsys:seminfo_semmsl = maximum number of semaphores per id
set semsys:seminfo_semopm = maximum number of operations per semop call
set semsys:seminfo_semume = maximum number of undo entries per process
Shared memory
set shmsys:shminfo_shmmin = minimum shared memory segment size
set shmsys:shminfo_shmmax = maximum shared memory segment size
set shmsys:shminfo_shmseg = maximum number of shared memory segments that can be attached to a given process at one time
set shmsys:shminfo_shmmni = maximum number of shared memory identifiers that the system will support
The ipcs -a command displays system resources and their current allocation. It is useful when a process is hanging or sleeping, to see whether the resources it needs are available.
Example:
This is an example of tuning the kernel parameters for NetBackup master servers and media servers on a Solaris 8 or 9 system. Symantec provides this information only to assist in kernel tuning for NetBackup. See Kernel parameters in Solaris 10 on page 154 for Solaris 10.

These are recommended minimum values. If /etc/system already contains any of these entries, use the larger of the existing setting and the setting provided here. Before modifying /etc/system, use the command /usr/sbin/sysdef -i to view the current kernel parameters. After you have changed the settings in /etc/system, reboot the system to allow the changed settings to take effect. After rebooting, the sysdef command displays the new settings.

*BEGIN NetBackup with the following recommended minimum settings in a Solaris /etc/system file
*Message queues
set msgsys:msginfo_msgmap=512
set msgsys:msginfo_msgmax=8192
set msgsys:msginfo_msgmnb=65536
set msgsys:msginfo_msgmni=256
set msgsys:msginfo_msgssz=16
set msgsys:msginfo_msgtql=512
set msgsys:msginfo_msgseg=8192
*Semaphores
set semsys:seminfo_semmap=64
set semsys:seminfo_semmni=1024
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmnu=1024
set semsys:seminfo_semmsl=300
set semsys:seminfo_semopm=32
set semsys:seminfo_semume=64
*Shared memory
set shmsys:shminfo_shmmax=16777216
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=220
set shmsys:shminfo_shmseg=100
*END NetBackup recommended minimum settings
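To spot-check the current values before editing /etc/system, and again after the reboot, you can filter the sysdef output for the IPC parameters (the exact output format varies by Solaris release):

/usr/sbin/sysdef -i | egrep -i 'shm|sem|msg'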
Socket parameters on Solaris 8 and 9

The TCP_TIME_WAIT_INTERVAL parameter sets how long a closed TCP socket must wait before it can be reused; that is, how long a TCP connection remains in the kernel's table after the connection has been closed. The default value on most systems is 240000 milliseconds, which is 4 minutes (240 seconds). If your server is slow because it handles many connections, check the current value of TCP_TIME_WAIT_INTERVAL and consider reducing it. On Solaris or HP-UX, use the following command (an example of changing the value follows the force load discussion below):

ndd -get /dev/tcp tcp_time_wait_interval

Force load parameters on Solaris 8 and 9

When system memory runs low, Solaris unloads unused drivers from memory and reloads them as needed. Tape drivers are a frequent candidate for unloading because they tend to be used less heavily than disk drivers. Depending on the timing of these unload and reload events for the st (Sun), sg (Symantec), and fibre channel drivers, various issues may result, ranging from devices disappearing from a SCSI bus to system panics. Symantec recommends adding the following forceload statements to the /etc/system file to prevent the st and sg drivers from being unloaded from memory:

forceload: drv/st
forceload: drv/sg

Other statements may be necessary for various fibre channel drivers, such as the following example for JNI:

forceload: drv/fcaw
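Returning to TCP_TIME_WAIT_INTERVAL: a reduced value can be set with ndd -set, as in the following sketch. The value 60000 (60 seconds) is only an illustration, not a recommendation, and the change does not persist across a reboot unless it is added to a startup script:

ndd -set /dev/tcp tcp_time_wait_interval 60000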
In Solaris 10, the System V IPC facilities (shared memory, message queues, and semaphores) are managed through resource controls. For information on tuning these system resources, see Chapter 6, Resource Controls (Overview), in the Sun System Administration Guide: Solaris Containers-Resource Management and Solaris Zones. For further assistance with Solaris parameters, refer to the Solaris Tunable Parameters Reference Manual, available at:
http://docs.sun.com/app/docs/doc/819-2724?q=Solaris+Tunable+Parameters
The following sections of the Solaris Tunable Parameters Reference Manual may be of particular interest:
What's New in Solaris System Tuning in the Solaris 10 Release?
System V Message Queues
System V Semaphores
System V Shared Memory
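As an illustration of the resource-control approach in Solaris 10, the following commands view and raise the shared memory cap for a project. The project name user.root and the 4GB limit are placeholders only; consult the Sun documentation listed above for the controls and values that apply to your configuration:

prctl -n project.max-shm-memory -i project user.root
projmod -s -K "project.max-shm-memory=(privileged,4GB,deny)" user.root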
shmmax = NetBackup shared memory allocation = (SIZE_DATA_BUFFERS * NUMBER_DATA_BUFFERS) * number of drives * MPX per drive

SIZE_DATA_BUFFERS and NUMBER_DATA_BUFFERS are also discussed under Recommended shared memory settings on page 107.

To change the above kernel parameters, use the System Administration Manager (SAM) unless you are very familiar with changing kernel parameters and rebuilding the kernel from the command line. From SAM, select Kernel Configuration > Configurable Parameters. Find the parameter to change and select Actions > Modify Configurable Parameter, then key in the new value. Repeat this for each parameter you want to change. Once all the values have been changed, select Actions > Process New Kernel. A warning appears, stating that a reboot is required to move the values into place. After the reboot, use the sysdef command to confirm that the correct values are in place.

Caution: Any change to the kernel requires a reboot to move the new kernel into place. Do not change the parameters unless a system reboot can be performed, or the changes will not take effect.
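As a worked example of the shmmax formula above, with purely illustrative values (256 KB data buffers, 16 buffers per drive, 2 drives, and a multiplexing level of 4), shell arithmetic gives:

echo $((262144 * 16 * 2 * 4))   # 33554432 bytes (32 MB) of shared memory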
For further information on setting boot options for st, see /usr/src/linux*/drivers/scsi/README.st, subsection BOOT TIME.
Note: The OEMSETUP.INF file has been updated to automatically update the registry to support 65 scatter/gather segments. Normally no additional changes are necessary, as this typically results in the best overall performance.

To change the data buffer size, do the following:
1 Click Start > Run and open the REGEDT32 program.
2 Select HKEY_LOCAL_MACHINE and follow the tree structure down to the QLogic driver as follows: HKEY_LOCAL_MACHINE > SYSTEM > CurrentControlSet > Services > Ql2200 > Parameters > Device.
3 Double-click MaximumSGList:REG_DWORD:0x21.
4 Enter a value from 16 to 255 (0x10 hex to 0xFF). A value of 255 (0xFF) enables the maximum 1-megabyte transfer size. Setting a value higher than 255 reverts to the default of 64-kilobyte transfers. The default value is 33 (0x21).
5 Click OK.
6 Exit the Registry Editor, then shut down and reboot the system.

The key setting here is the so-called SGList (scatter/gather list): the number of pages that can be scattered or gathered (that is, read or written) in one DMA transfer. For the QLA2200, set the parameter MaximumSGList to 0xFF (or to 0x40 for 256 KB) and you can then set 256 KB buffer sizes for NetBackup. Use extreme caution when modifying this registry value, and always contact the vendor of the SCSI or fibre card first to determine the maximum value that the particular card can support. The same approach should be possible for other HBAs, especially fibre cards. The default for JNI fibre cards using driver version 1.16 is 0x80 (512 KB, or 128 pages). The default for the Emulex LP8000 is 0x81 (513 KB, or 129 pages).

Note that for this approach to work, the HBA must install its own SCSI miniport driver; if it does not, transfers are limited to 64 kilobytes. This applies to legacy cards such as old SCSI cards. In short, the built-in limit on Windows is 1024 kilobytes unless you are using the default Microsoft miniport driver for legacy cards. The limitations all relate to the HBA drivers and the limits of the physical devices attached to them. For example, Quantum DLT7000 drives work best with 128-kilobyte buffers and StorageTek 9840 drives with 256-kilobyte buffers. If these values are increased too far, the result could be damage to the HBA, the tape drives, or any devices in between (fibre bridges and switches, for example).
Troubleshooting NetBackup's use of configuration files on Windows systems

If you create a configuration file on a Windows system for NetBackup's use (on UNIX systems, such files are called touch files), the file name must match the name that NetBackup expects. In particular, make sure the file name does not have an extension such as .txt. If, for instance, you create a file called NOexpire to prevent the expiration of backup images, the file will not have the desired effect if it is named NOexpire.txt. The file must also use a supported type of encoding, such as ANSI; Unicode encoding is not supported, and a file saved in Unicode will not have the desired effect. To check the encoding type, open the file with a tool that displays the current encoding, such as Notepad: select File > Save As and check the options in the Encoding field. ANSI encoding works properly.

Disable antivirus software when running file system backups on Windows 2000 or Windows XP

Antivirus applications scan every file that NetBackup backs up, which loads down the client's CPU and slows its backups. As a workaround, in the Backup, Archive, and Restore interface, on the NetBackup Client Properties dialog, General tab, clear the checkbox next to Perform incrementals based on archive bit.
Appendix
Additional resources
This chapter lists additional sources of information.
Storage Mountain, previously called Backup Central, is a resource for all backup-related issues. It is located at http://www.storagemountain.com. The following article discusses how and why to design a scalable data installation: High-Availability SANs, Richard Lyford, FC Focus Magazine, April 30, 2002.
Iperf, for measuring TCP and UDP bandwidth:
http://dast.nlanr.net/Projects/Iperf1.1.1/index.htm
Bonnie, for measuring the performance of UNIX file system operations:
http://www.textuality.com/bonnie
Bonnie++, which extends the capabilities of Bonnie:
http://www.coker.com.au/bonnie++/readme.html
Tiobench, for testing I/O performance with multiple running threads:
http://sourceforge.net/projects/tiobench/
You can find Veritas NetBackup news groups at http://forums.veritas.com; search on the keyword NetBackup to find threads relevant to NetBackup. The email list Veritas-bu discusses backup-related products such as NetBackup, and its archives are located at:
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
Also see the Usenet news group comp.arch.storage.
Index
Symbols
/dev/null 84
/dev/rmt/2cbn 133
/dev/rmt/2n 133
/etc/rc2.d 124
/etc/system 152
/proc/sys (Linux) 157
/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS 105
/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_DISK 105
/usr/openv/netbackup 100
/usr/openv/netbackup/bin/admincmd/bpimage 123
/usr/openv/netbackup/db/error 83
/usr/openv/netbackup/db/images 123
/usr/openv/netbackup/db/media 56
/usr/openv/netbackup/logs 84
/usr/sbin 57
/usr/sbin/devfsadm 58
/usr/sbin/modinfo 57
/usr/sbin/modload 58
/usr/sbin/modunload 57
/usr/sbin/ndd 124
/usr/sbin/tapes 57
backup load 94 data buffer size (Windows) 157 error threshold 55 network communications buffer 97 read buffer size 136 Advanced Client 64, 135, 136 AIX 99 All Log Entries report 79, 81, 83 ALL_LOCAL_DRIVES 29, 50 alphabetical order, storage units 44 ANSI encoding 159 antivirus software 159 arbitrated loop 142 archive bit 159 archiving catalog 47 array RAID controller 142 arrays 95 ATA 21 ATM card 31 auto-negotiate 97 AUTOSENSE 97 available_media report 60 available_media script 54, 60 Avg. Disk Queue Length counter 88
B
backup catalog 50 database 64 disk or tape 60 environment, dedicated or shared 60 large catalog 47 load adjusting 94 load leveling 93 monopolizing devices 94 user-directed 77 window 64, 92 Backup Central 161 Backup Tries parameter 56 balancing load 93 bandwidth 145
Numerics
1000BaseT 21 100BaseT 21 100BaseT cards 31 10BaseT 21 10BaseT cards 31
A
ACS 58 acsd 58 ACSLS communications 58 Activity Monitor 80 additional info on tuning 161 adjusting
bandwidth limiting 93 Bare Metal Restore (BMR) 135 best practices 66, 68, 71 Bonnie 161 Bonnie++ 161 boot options (Linux) 157 bottlenecks 79, 97 freeware tools for detecting 161 bp.conf file 56, 101 bpbkar 96, 109, 110, 112 bpbkar log 83, 84 bpbkar32 96, 109, 110 bpdm log 83 bpdm_dev_null 85 bpend_notify.bat 95 bpimage 123 bpmount -i 50 bprd 41, 44 bpsetconfig 134 bpstart_notify.bat 95 bptm 108, 109, 110, 112, 115, 118, 120 bptm log 56, 83 buffers 97, 157 and FlashBackup 136 changing 104 changing Windows buffers 158 default number of 102 default size of 103 for network communications 99 shared 102 tape 102 testing 107 wait and delay 108 Windows 157 bus 54
C
cache device (snapshot) 136 calculate actual data transfer rate required 19 length of backups 18 network transfer rate 21 number of robotic tape slots needed 26 number of tape drives needed 20 number of tapes needed 25 shared memory 103 size of catalog 23 space needed for NBDB database 23, 24 cartridges, storing 68
catalog 123 archiving 47 backup requirements 93 backups not finishing 47 backups, guidelines 46 calculating size of 23 compression 47, 48 large backups 47 managing 46 Checkpoint Restart 122 child delay values 108 CHILD_DELAY file 108 cleaning robots 68 tape drives 66 tapes 60 client compression 133 convert to media server 92 tuning performance 95 variables 78 Client Job Tracker 136 clock or cycle time 144 Committed Bytes 87 common system resources 85 communications buffer 99 process 109 Communications Buffer Size parameter 98, 99 comp.arch.storage 162 COMPRESS_SUFFIX option 134 compression 88, 133 and encryption 134 catalog 47, 48 how to enable 133 tape vs client 133 configuration files (Windows) 159 configuration guidelines 49 CONNECT_OPTIONS 42 controller 88 copy-on-write snapshot 136 counters 108 algorithm 110 determining values of 112 in Windows performance 86 wait and delay 108 CPU 84, 86 and performance 146 load, monitoring 84
utilization 42 CPUs needed per media server component 31 critical policies 50 cumulative-incremental backup 17 custom reports available media 60 cycle time 144
D
daily_messages log 51 data buffer overview 102 size 97, 157 data compression 126 Data Consumer 111 data path through server 141 data producer 111 data recovery, planning for 68, 69 data stream and tape efficiency 126 data throughput 78 statistics 79 data transfer path 79, 90 basic tuning 91 data transfer rate for drive controllers 21 for tape drives 19 required 19 data variables 78 database backups 64, 130 protect against failure 64 restores 124 databases list of pre-6.0 databases 24 DB2 restores 124 Deactivate command 77 dedicated backup servers 92 dedicated private networks 92 delay buffer 108 in starting jobs 40 values, parent/child 108 de-multiplexing 91 designing master server 27 media server 31 Detailed Status tab 80 devfsadmd daemon 57 device
names 133 reconfiguration 57 devlinks 57 disable TapeAlert 68 disaster recovery 68, 69 testing 43 disk full 88 increase performance 88 load, monitoring 87 performance, issues affecting 139 speed, measuring 84 staging 44 versus tape 60 Disk Queue Length counter 88 disk speed, measuring 85 Disk Time counter 88 disk-based storage 60 diskperf command 87 disks, adding 95 DNS server 48 down drive 55, 56 drive controllers 21 drive selection 58 drive_error_threshold 55, 56 drives, number per network connection 54 drvconfig 57
E
email list (Veritas-bu) 162 EMM 41, 48, 54, 58, 60, 67 EMM database derived from pre-6.0 databases 24 EMM server calculating space needed for 23, 24 moving off master 49 encoding, file 159 encryption 133 and compression 134 error logs 56, 80 error threshold value 54 Ethernet connection 140 evaluating components 84, 85 evaluating performance Activity Monitor 80 All Log Entries report 81 encryption 133 NetBackup clients 95 NetBackup servers 102
F
factors in choosing disk vs tape 61 in job scheduling 41 failover, storage unit groups 44 fast-locate 120 FBU_READBLKS 137 FC-AL 142, 144 fibre channel 141, 143 arbitrated loop 142 connection 54 file encoding 159 file ID on vxlogview 50 file system space 45 files backing up many small 135 Windows configuration 159 firewall settings 42 FlashBackup 135, 136 force load parameters (Solaris) 154 forward space filemark 120 fragment size 119, 121 considerations in choosing 119 fragmentation 95 databases 64 level 88 freeware tools 161 freeze media 54, 55, 56 frequency-based tape cleaning 66 frozen volume 55 full backup 61 full duplex 96
configuration examples 147 elements affecting performance 140 performance considerations 145 hierarchy, disk 140 host memory 141 host name resolution 78 hot catalog backup 46, 50
I
I/O operations scaling 148 I/O overhead 64 IMAGE_FILES 123 IMAGE_INFO 123 IMAGE_LIST 123 improving performance, see tuning include lists 49 increase disk performance 88 incremental backups 61, 92, 131 index performance 123 insufficient memory 87 interfaces, multiple 101 ipcs -a command 153 Iperf 161 iSCSI 21 iSCSI bus 54
J
Java interface 33, 134 job delays 41 scheduling 40, 41 scheduling, limiting factors 41 Job Tracker 96 jobs queued 40, 41
G
Gigabit Ethernet cards 31 Gigabit Fibre Channel 21 globDB 24 goodies directory 60 groups of storage units 44
K
kernel tuning Linux 157 Solaris 152
L
larger buffer (FlashBackup) 136 largest fragment size 119 latency 145 legacy logs 51 leveling load among backup components 93
H
hardware components and performance 145
library-based tape cleaning 68 Limit jobs per policy attribute 40, 94 limiting fragment size 119 link down 97 Linux, kernel tunable parameters 157 load leveling 93 monitoring 86 parameters (Solaris) 154 local backups 131 Log Sense page 67 logging 77 logs 51, 56, 80, 112, 135 managing 50 viewing 50 long-term storage 61 ltidevs 24 LTO drives 25, 31
M
mailing lists 162 resources 161 managing logs 50 the catalog 46 Manual Backup command 77 master server CPU utilization 42 designing 27 determining number of 29 splitting 48 Maximum concurrent write drives 40 Maximum jobs per client 40 Maximum Jobs Per Client attribute 94 Maximum streams per drive 41 maximum throughput rate 127 Maximum Transmission Unit (MTU) 108 MaximumSGList 157, 158 MDS 58 measuring disk read speed 84, 85 NetBackup performance 76 media catalog 55 error threshold 55 not available 54 pools 60 positioning 126 threshold for errors 54
media and device selection logic 58 media errors database 24 Media List report 54 media manager drive selection 58 Media multiplexing setting 40 media server 32 convert from client 92 designing 31 factors in sizing 33 not available 41 number needed 32 number supported by a master 30 media_error_threshold 55, 56 mediaDB 24 MEGABYTES_OF_MEMORY 134 memory 141, 145, 146 amount required 32, 103 insufficient 87 monitoring use of 84, 87 shared 102 merging master servers 48 message queue 152 message queue parameters HP-UX 155 migration 66 Mode Select page 67 Mode Sense 67 modload command 58 modunload command 57 monitoring data variables 78 MPX_RESTORE_DELAY option 124 MTFSF/MTFSR 120 multiple drives, storage unit 40 multiple interfaces 101 multiple small files, backing up 135 multiplexed backups and fragment size 120 database backups 124 multiplexed image, restoring 121 multiplexing 61, 91, 130 and memory required 32 effects of 132 schedule 40 set too high 124 when to use 130 multi-streaming 130 NEW_STREAM directive 132
N
namespace.chksum 24 naming conventions 71 policies 71 storage units 72 NBDB database 23, 24 NBDB.log 47 nbemmcmd command 55 nbjm and job delays 41 nbpem 44 nbpem and job delays 40 nbu_snap 136 ndd 124 NET_BUFFER_SZ 98, 99, 106 NET_BUFFER_SZ_REST 98 NetBackup capacity planning 11 catalog 123 job scheduling 40 news groups 162 restores 119 scheduler 76 NetBackup Client Job Tracker 136 NetBackup Java console 134 NetBackup Operations Manager, see NOM NetBackup Relational Database 48 NetBackup relational database files 47 NetBackup Vault 134 network bandwidth limiting 93 buffer size 97 communications buffer 99 connection options 42 connections 96 interface cards (NICs) 96 load 97 multiple interfaces 101 performance 77 private, dedicated 92 tapes drives and 54 traffic 97 transfer rate 21 tuning 96 tuning and servers 92 variables 77 Network Buffer Size parameter 99, 115 NEW_STREAM directive 132
news groups 162 no media available 54 NO_TAPEALERT touch file 68 NOexpire touch file 44, 159 NOM 32, 43, 60 database 34 guidelines for sizing 33 to monitor jobs 43 nominal throughput rate 127 none pool 60 non-multiplexed restores 120 no-rewind option 133 NOSHM file 100 Notepad, checking file encoding 159 notify scripts 95 nslookup 48 NUMBER_DATA_BUFFERS 104, 107, 156 NUMBER_DATA_BUFFERS_DISK 104 NUMBER_DATA_BUFFERS_RESTORE 104, 123
O
OEMSETUP.INF file 158 offload work to additional master 48 on-demand tape cleaning 67 online (hot) catalog backup 50 Oracle 125 restores 124 order of using storage units 44 out-of-sequence delivery of packets 136
P
packets 136 Page Faults 87 parent/child delay values 108 PARENT_DELAY file 108 patches 136 PCI bridge 141, 145, 146 PCI bus 141, 144, 145 PCI card 141, 146 performance and CPU 146 and disk hardware 139 and hardware issues 145 see also tuning strategies and considerations 91 performance evaluation 76 Activity Monitor 80 All Log Entries report 81
monitoring CPU 86 monitoring disk load 87 monitoring memory use 84, 87 system components 84, 85 PhysicalDisk object 88 policies critical 50 guidelines 49 naming conventions 71 Policy Update Interval 40 poolDB 24 pooling conventions 60 port configuration for robot types 58 position error 56 Process Queue Length 86 Processor Time 86
RESTORE_RETRIES for restores 56 retention period 61 RMAN 125 robot cleaning 68 types 58 robotic_def 24 routers 97 ruleDB 24
S
SAN 64 SAN fabric 143 SAN Media Server 92 sar command 84 SATA 21 Scatter/Gather list 158 schedule naming, best practices 72 scheduling 40, 76 delays 40 disaster recovery 43 limiting factors 41 scratch pool 60 SCSI bus 54 SCSI connection 54 SCSI/FC connection 126 SDLT drives 25, 31 search performance 123 semaphore (Solaris) 152 Serial ATA (SATA) 142 server data path through 141 splitting master from EMM 49 tuning 102 variables 76 SGList 158 shared data buffers 102 changing 104 default number of 102 default size of 103 shared memory 100, 102 amount required 103 parameters, HP-UX 155 recommended settings 107 Solaris parameters 152 testing 107 shared-access topology 142, 145 shelf 142 SIZE_DATA_BUFFERS 106, 107, 156, 157
Q
queued jobs 40, 41
R
RAID 61, 88, 95 controller 142, 146 rate of data transfer 17 raw partition backup 136 read buffer size adjusting 136 and FlashBackup 136 reconfigure devices 57 recovering data, planning for 68, 69 recovery time 61 reduce CPU overhead 64 Reduce fragment size setting 119 reduce I/O 64 REGEDT32 158 registry 158 reload st driver without rebooting 57 report 83 All Log Entries 81 media 60 resizing read buffer (FlashBackup) 137 restore and network 124 in mixed environment 124 multiplexed image 121 of database 124 performance of 122
SIZE_DATA_BUFFERS_DISK 105 small files, backup of 135 SMART diagnostic standard 67 snap mirror 135 snapshot cache device 136 snapshots 96 and databases 64 socket communications 100 parameters (Solaris) 154 software compression (client) 134 tuning 148 Solaris clients and FlashBackup read buffer 137 kernel tuning 152 splitting master server 48 SSOhosts 24 st driver reloading 57 staging, disk 44, 61 Start Window 76 STK drives 25 storage device performance 126 Storage Mountain 161 storage unit 44, 95 groups 44 naming conventions 72 not available 41 Storage Unit dialog 119 storage_units database 24 storing tape cartridges 68 streaming (tape drive) 61, 126 striped volumes (VxVM) 136 striping block size 136 volumes on disks 92 stunit_groups 24 suspended volume 55 switches 143 synthetic backups 99 System Administration Manager (SAM) 156 system resources 85 system variables, controlling 76
T
Take checkpoints setting 122 tape block size 103
buffers 102 cartridges, storing 68 cleaning 60, 67 compression 133 efficiency 126 full, frozen, suspended 60 number of tapes needed for backups 25 position error 56 streaming 61, 126 versus disk 60 tape connectivity 54 reload st driver 57 tape drive 126 cleaning 66 number needed 20 number per network connection 54 technologies 66 technology needed 18 transfer rates 19 types 31 tape library number of tape slots needed 26 using drives 93 TapeAlert 67 tape-based storage 60 tar 110 tar32 110 TCP/IP 136 tcp_deferred_ack_interval 124 testing conditions 76 threshold error, adjusting 55 for media errors 54 throughput 79 time to data 61 Tiobench 161 TLD robotic control 58 TLM 58 tlmd 58 tools (freeware) 161 topology (hardware) 144 touch files 44, 100 encoding 159 traffic on network 97 transaction log file 47 transfer rate drive controllers 21 for backups 17 network 21
required 19 tape drives 19 True Image Restore option 23, 135 tuning additional info 161 basic suggestions 91 buffer sizes 97, 99 client performance 95 data transfer path, overview 90 device performance 126 FlashBackup read buffer 136 Linux kernel 157 network performance 96 restore performance 119, 124 search performance 123 server performance 102 software 148 Solaris kernel 152
analyzing problems 115 correcting problems 118 for local client backup 112 for local client restore 114 for remote client backup 113 for remote client restore 114 wear of tape drives 126 Wide Ultra 2 SCSI 21 wild cards in file lists 49 Windows Performance Monitor 86 Working Set, in memory 87
U
Ultra-3 SCSI 21 Ultra320 SCSI 21 Unicode encoding 159 unified logging, viewing 50 URL resources 161 Usenet news group 162 user-directed backup 77
V
Vault 134 verbosity level 135 Veritas-bu email list 162 viewing logs 50 virus scans 95, 135 Vision Online 161 vmstat 84 volDB 24 volume frozen 55 pools 60 suspended 55 vxlogview 50 file ID 50 VxVM striped volumes 136
W
wait/delay counters 108, 109, 112