AVID Best Practices

Interplay Weekly Maintenance Checklist
Avid ISIS
• See "Avid ISIS Recommended Maintenance" on page 315.

Avid AirSpeed Multi Stream
• Check the Dashboard for "Warnings" or "Alerts." Clear and protect the inventory of materials as required for daily operation. See the Avid AirSpeed Multi Stream Installation and User's Guide.

Infortrend RAID disk set (for Interplay Engine Cluster)
• No reboot maintenance is required for the Infortrend RAID disk set. RAID disk sets such as the Infortrend are designed for 100% uptime and generally do not benefit from being power cycled. See the Interplay Engine Failover Guide.
Interplay Transfer
• Ensure that individual ingest folders do not contain more than 5,000 objects each. See "Files Per Folder Limitations" on page 262. (A scripted version of this check is sketched after this list.)
• Inspect the Transfer Engine internal disk drive to confirm normal free space. If the disks are reporting higher than normal use, inspect for the presence of large logging or other error-reporting files with recent creation dates. Report any unusual findings to site management for follow-on activities. See the Interplay Transfer Setup and User's Guide.
• Starting in Interplay v3.0, Interplay Transfer no longer requires a weekly reboot; a monthly reboot is sufficient. This is because Interplay Transfer v3.x creates a separate process for each playback, ingest, or DET job. If there is a problem with one job, it only affects that particular job and does not affect the Interplay Transfer Engine.
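The 5,000-object check lends itself to scripting from any Linux client that mounts the ingest workspaces. The following is a minimal sketch, not from the guide; the mount point /mnt/ISIS/ingest and the per-folder layout are assumptions to adapt for your site:

    #!/bin/bash
    # Warn about ingest folders approaching the 5,000-object limit.
    # /mnt/ISIS/ingest is an assumed mount point; adjust for your site.
    LIMIT=5000
    for dir in /mnt/ISIS/ingest/*/; do
        count=$(find "$dir" -mindepth 1 -maxdepth 1 | wc -l)
        if [ "$count" -gt "$LIMIT" ]; then
            echo "WARNING: $dir contains $count objects (limit $LIMIT)"
        fi
    done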
Interplay Production Services
• Purge all jobs. To purge jobs, use the Interplay Production Services and Transfer Status window. See the Interplay Production Services Setup and User's Guide.
• Avid recommends that you turn on Auto Purge so you don't have to manually purge jobs.
• Starting in Interplay v3.x, you do not need to restart the Interplay Production servers every week. Once a month is sufficient.
Interplay Media Indexer
• Rebalance the Interplay Media Indexer configuration and/or storages to make sure sufficient indexing memory is available for the Interplay Media Indexer. See the Interplay Production Software Installation and Configuration Guide.
• Use the Interplay Media Indexer Web interface or the Avid Service Framework Service Configuration tool to check and modify the Interplay Media Indexer configuration.
Interplay Monthly Maintenance Checklist
Interplay Engine
• Check the ratio between the number of database pages and the number of objects in the database. This value is automatically calculated and displayed in the Interplay Administrator tool. See "Determining Interplay Database Scalability" on page 171.
• Note that the Interplay Engine does not need to be restarted except for the following reasons:
  - To verify system health before an upgrade. See the section "Best Practices for Performing an Engine Upgrade" in the Interplay Production Readme.
  - Testing the failover capability on a cluster system during regular company maintenance windows (for example, twice a year).
Interplay Transfer
• Delete the Interplay Transfer temp files from the C: drive: My Computer > Local Disk (C:) > temp (C:\temp). See the Interplay Transfer Setup and User's Guide.
• Delete the Interplay Transfer server log files from the C: drive: C:\ProgramData\Avid\Temp\TMServerLog.
Avid ISIS
• See "Avid ISIS Recommended Maintenance" on page 315.

Avid Service Framework
• Use the Health Monitor to check memory and CPU usage of the server-side Framework services (Lookup, System Configuration, Time Sync Master, Email). Check to make sure that none have memory usage that is increasing at an unusual rate and that none have persistently high CPU usage. See the Avid Service Framework User's Guide.
Avid ISIS Recommended Maintenance

Typically, the Avid ISIS does not need to be power cycled. All components of the Avid ISIS stack can be individually replaced or restarted without disrupting production use of the Avid ISIS stack.

Warning: Power cycling the entire stack (all the components at the same time) could risk the stability of the Avid ISIS stack.

For a detailed description of the maintenance procedures for Avid ISIS, see the Avid ISIS 7000 Setup Guide, the Avid ISIS 5000 Setup Guide, or the Avid ISIS 2000 Setup Guide.
Recommended Maintenance

Avid recommends setting up a daily, weekly, and monthly maintenance routine for your ISIS system. The following is an outline of each schedule. For full details, see the "Avid ISIS Recommended Maintenance" portion of the setup guide.
NOTE: If you encounter an unexpected condition, consult the appropriate guides (specific to your release software versions) before executing any corrective measures, to ensure the protection and integrity of the shared storage data.

Daily Maintenance

[Dashboard screenshot removed: system status, storage usage, and read/write bandwidth panels.]

• Check the Storage Manager/Elements Status in the System Admin web page. All Storage Managers and Storage Elements should be green; investigate any error statuses.
• Log into the active and standby System Directors and open the dashboard.
• Check the system event logs on the System Director for errors or warnings.
For more information, see "Interpreting Failures in the Cluster" on page 84.
Weekly Maintenance

The following procedures should be completed on a weekly basis and should take approximately 2 minutes per server or node. These steps can be completed on a "live", in-production system.
Monthly Maintenance

The following procedures should be completed on a monthly basis and should take approximately 5 minutes per server. These steps can be completed on a "live", in-production system.

Throughout the process, watch for any warning messages or errors. If any issues are encountered, investigate and resolve them prior to releasing the system for use. If needed, contact Avid Customer Care at 800-800-AVID (2843) for assistance.
Confirm that all nodes, excluding the local node, are "(Connected)".
5. If your system has been configured with Gluster for cache volume replication, use the following command to verify that all Gluster volumes or "bricks" are mounted on each of the cluster nodes:

   gluster volume info

   Each cluster node should be listed with a Brick# entry under the "Bricks" section.
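For reference, healthy output on a two-node cluster looks roughly like the following; the volume name gl-cache-dl, the node names, and the brick paths are illustrative, not taken from this guide:

    Volume Name: gl-cache-dl
    Type: Replicate
    Status: Started
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: mcs-node1:/cache/gluster/gluster_data_download
    Brick2: mcs-node2:/cache/gluster/gluster_data_download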
6. Manually verify that Gluster is replicating data across the cluster nodes.

   a. From any cluster node, create test files on the following Gluster shares:

      touch /cache/download/test001.txt
      touch /cache/fl_cache/test002.txt

   b. Verify that the files created on your local system are replicated to all other cluster nodes. You can either open an SSH session to each cluster node, or use the Linux ssh command to verify the file replication from your current node:

      ssh root@<node> ls <folderpath>

      Where <node> is the hostname of the remote server and <folderpath> is the location of the test file.

      You might be prompted to confirm that you wish to connect to the remote system. Enter "yes" to continue. You will also be prompted for the "root" user password of the remote system.

   c. Once you have verified that file replication is functioning normally, remove the test files from the /cache directory:

      rm /cache/download/test001.txt && rm /cache/fl_cache/test002.txt

      You will be asked to confirm that you wish to remove the files. Type: yes

      The local and replicated copies of the files are deleted.
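If you run this replication check regularly, it can be scripted. The following is a minimal sketch under stated assumptions: the node hostnames mcs-node1 and mcs-node2 are placeholders, and key-based SSH access as root is configured between the nodes; adapt both to your cluster:

    #!/bin/bash
    # Create a test file on each replicated share, confirm it appears
    # on every node, then clean up. Hostnames are illustrative.
    NODES="mcs-node1 mcs-node2"
    FILES="/cache/download/test001.txt /cache/fl_cache/test002.txt"

    for f in $FILES; do
        touch "$f"
    done

    for n in $NODES; do
        for f in $FILES; do
            if ssh "root@$n" test -f "$f"; then
                echo "OK: $f replicated on $n"
            else
                echo "MISSING: $f not found on $n"
            fi
        done
    done

    # Remove the local copies; Gluster propagates the deletion.
    rm -f $FILES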
Command Description

corosync-cfgtool -s (cluster only)
    Returns the IP and other stats for the node on which you issue the command.

dmidecode | grep -A2 '^System Information'
    If you are accessing the server from a remote SSH session, this command prints the server information to the screen. Example:

        System Information
            Manufacturer: HP
            Product Name: ProLiant DL360p Gen8

watch 'crm_mon -f1 | grep -A100 "Migration summary"' (cluster only)
    Depending upon your configuration and the number of managed resources, it can be difficult to see all messages related to the cluster when using the crm_mon -f command. This watch command provides a live status of the last 100 lines of the output of crm_mon following the "Migration summary". The 100 value can be increased or decreased as desired.

drbd-overview (cluster only)
    Prints DRBD status information to the screen. This information can also be obtained through the following command: service drbd status
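These commands can be combined into a quick one-shot health pass. A minimal sketch, assuming a corosync/pacemaker/DRBD cluster as described above (include the Gluster check only if cache replication is configured):

    #!/bin/bash
    # One-shot cluster health pass using the commands listed above.
    corosync-cfgtool -s                              # node IP and ring status
    crm_mon -f1 | grep -A100 "Migration summary"     # resource fail counts
    drbd-overview                                    # DRBD sync state
    gluster volume info                              # Gluster brick layout (if used)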