
Unit V

Shell Scripting - II
Usually shells are interactive, which means they accept commands
as input from the user and execute them. However, sometimes we
want to execute a bunch of commands routinely, and typing every
command into the terminal each time is tedious.
Since the shell can also take commands as input from a file, we can
write these commands in a file and execute them in the shell to
avoid this repetitive work. These files are called shell scripts or
shell programs. Shell scripts are similar to batch files in MS-DOS.
Each shell script is saved with the .sh file extension, e.g. myscript.sh

A shell script has syntax just like any other programming
language. If you have prior experience with a programming
language like Python or C/C++, it is very easy to get started.
A shell script comprises the following elements –

 Shell keywords – if, else, break, etc.
 Shell commands – cd, ls, echo, pwd, touch, etc.
 Functions
 Control flow – if..then..else, case, and shell loops, etc.

Why do we need shell scripts

There are many reasons to write shell scripts –
 To automate repetitive work
 System admins use shell scripting for routine backups
 System monitoring
 Adding new functionality to the shell, etc.
Advantages of shell scripts
 The commands and syntax are exactly the same as
those entered directly on the command line, so the programmer
does not need to switch to an entirely different syntax
 Writing shell scripts is much quicker
 Quick start
 Interactive debugging, etc.


Disadvantages of shell scripts
 Prone to costly errors; a single mistake can change a
command in a way that may be harmful
 Slow execution speed
 Design flaws within the language syntax or
implementation
 Not well suited for large and complex tasks
 Provides only minimal data structures, unlike other
scripting languages, etc.

Simple demo of shell scripting using Bash Shell


If you work in the terminal, you sometimes traverse deep into
nested directories. Then, to come back up several levels in the
path, for example to a "python" directory higher up, we have to
run "cd ../" over and over, or a long chain such as cd ../../../../

This is quite frustrating, so why not have a utility
where we just type the name of a directory and jump
to it directly, without executing "cd ../" again and
again?
Save the script as “jump.sh”
#!/bin/bash

# A simple bash script to move up to a desired directory level directly

function jump()
{
    # save the original value of the Internal Field Separator
    OLDIFS=$IFS

    # set the field separator to "/"
    IFS=/

    # convert the working path into an array of directories in the path
    # eg. /my/path/is/like/this
    # into [, my, path, is, like, this]
    path_arr=($PWD)

    # restore IFS to its original value
    IFS=$OLDIFS

    local pos=-1

    # ${path_arr[@]} gives all the values in path_arr
    for dir in "${path_arr[@]}"
    do
        # find the number of directories to move up to
        # reach the target directory
        pos=$((pos+1))

        if [ "$1" = "$dir" ]; then

            # length of path_arr
            dir_in_path=${#path_arr[@]}

            # current working directory
            cwd=$PWD

            limit=$((dir_in_path-pos-1))

            for ((i=0; i<limit; i++))
            do
                cwd=$cwd/..
            done

            cd "$cwd"
            break
        fi
    done
}

For now we cannot execute our shell script because it does not
have execute permission. We have to make it executable by typing
the following command –
$ chmod +x path/to/our/file/jump.sh
Now to make this available on every terminal session, we have to
put this in “.bashrc” file.
“.bashrc” is a shell script that Bash shell runs whenever it is
started interactively. The purpose of a .bashrc file is to provide a
place where you can set up variables, functions and aliases,
define our prompt and define other settings that we want to use
whenever we open a new terminal window.
Now open a terminal and type the following command –
$ echo "source ~/path/to/our/file/jump.sh" >> ~/.bashrc
Now open your terminal and try out the new "jump" functionality by
typing the following command –
$ jump dir_name



Metacharacters
Metacharacters are special characters that are used to represent something
other than themselves. As a rule of thumb, characters that are neither
letters nor numbers may be metacharacters. Like grep, sed, and awk, the
shell has its own set of metacharacters, often called shell wildcards.

Symbol  Meaning

*       File substitution wildcard; zero or more characters

?       File substitution wildcard; one character

[]      File substitution wildcard; any character between brackets

`cmd`   Command substitution


The shell interprets these metacharacters itself rather than passing
them to the command. The redirection operators are also shell
metacharacters:

Symbol  Meaning

>       Output redirection

>>      Output redirection (append)

<       Input redirection
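The wildcard and command-substitution metacharacters above can be tried out directly; a short sketch (the directory and file names below are made up for illustration):

```shell
# create a scratch directory with a few sample files
mkdir -p /tmp/meta_demo
cd /tmp/meta_demo
touch file1.txt file2.txt file10.txt notes.md

echo file*.txt      # * matches zero or more characters: file1.txt file10.txt file2.txt
echo file?.txt      # ? matches exactly one character: file1.txt file2.txt
echo file[12].txt   # [] matches any one listed character: file1.txt file2.txt
echo "Kernel: `uname -s`"   # `cmd` is replaced by the command's output
```

Note that the shell, not the command, expands the wildcards before the command runs.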

A shell variable is created with the following syntax:
"variable_name=variable_value". For example, the command
"COMPUTER_NAME=mercury" creates the shell variable named
"COMPUTER_NAME" with a value of "mercury" (in the C shell the
equivalent is "set COMPUTER_NAME=mercury"). For values with spaces,
quotation marks must be used. Although not required, the convention in
Unix is to use uppercase letters for the variable names. Also, in Unix,
variable names, like filenames, are case sensitive.

A variable is a character string to which we assign a value. The value


assigned could be a number, text, filename, device, or any other type of data.
A variable is nothing more than a pointer to the actual data. The shell
enables you to create, assign, and delete variables.

Variable Names
The name of a variable can contain only letters (a to z or A to Z), numbers ( 0
to 9) or the underscore character ( _).
By convention, Unix shell variables will have their names in UPPERCASE.
The following examples are valid variable names −
_ALI TOKEN_A VAR_1 VAR_2
Following are the examples of invalid variable names −
2_VAR -VARIABLE VAR1-VAR2 VAR_A!
The reason you cannot use other characters such as !, *, or - is that these
characters have a special meaning for the shell.

Defining Variables
Variables are defined as follows −
variable_name=variable_value
For example −
NAME="Zara Ali"
The above example defines the variable NAME and assigns the value "Zara
Ali" to it. Variables of this type are called scalar variables. A scalar variable
can hold only one value at a time.
Shell enables you to store any value you want in a variable. For example −
VAR1="Zara Ali" VAR2=100
Accessing Values
To access the value stored in a variable, prefix its name with the dollar sign
($) −
For example, the following script will access the value of the defined
variable NAME and print it on STDOUT −
#!/bin/sh
NAME="Zara Ali"
echo $NAME
The above script will produce the following output −
Zara Ali

Read-only Variables
The shell provides a way to mark variables as read-only by using the
readonly command. After a variable is marked read-only, its value cannot
be changed.
For example, the following script generates an error while trying to change
the value of NAME −
#!/bin/sh
NAME="Zara Ali"
readonly NAME
NAME="Qadiri"
The above script will generate the following result −
/bin/sh: NAME: This variable is read only.

Unsetting Variables
Unsetting or deleting a variable directs the shell to remove the variable from
the list of variables that it tracks. Once you unset a variable, you cannot
access the stored value in the variable.
Following is the syntax to unset a defined variable using
the unset command −
unset variable_name
The above command unsets the value of a defined variable. Here is a simple
example that demonstrates how the command works −
#!/bin/sh
NAME="Zara Ali"
unset NAME
echo $NAME
The above example does not print anything. You cannot use the unset
command to unset variables that are marked readonly.

Variable Types
When a shell is running, three main types of variables are present −
 Local Variables − A local variable is a variable that is present within
the current instance of the shell. It is not available to programs that are
started by the shell. They are set at the command prompt.
 Environment Variables − An environment variable is available to any
child process of the shell. Some programs need environment variables in
order to function correctly. Usually, a shell script defines only those
environment variables that are needed by the programs that it runs.
 Shell Variables − A shell variable is a special variable that is set by
the shell and is required by the shell in order to function correctly. Some of
these variables are environment variables whereas others are local variables.
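The difference between a local and an environment variable can be demonstrated with export; a minimal sketch (the variable names are made up):

```shell
LOCAL_VAR="only here"        # local: exists only in the current shell
export ENV_VAR="inherited"   # environment: copied into every child process

# A child shell sees ENV_VAR but not LOCAL_VAR:
sh -c 'echo "child sees: [$LOCAL_VAR] [$ENV_VAR]"'
# prints: child sees: [] [inherited]
```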

A shell script is a computer program designed to be run by


the Unix shell, a command-line interpreter.[1] The various dialects
of shell scripts are considered to be scripting languages. Typical
operations performed by shell scripts include file manipulation,
program execution, and printing text. A script which sets up the
environment, runs the program, and does any necessary cleanup,
logging, etc. is called a wrapper.
The term is also used more generally to mean the automated
mode of running an operating system shell; in specific operating
systems such scripts are called other things, such as batch files
(MS-DOS-Win95 stream, OS/2), command procedures (VMS), and shell
scripts (Windows NT stream and third-party derivatives like 4NT—
article is at cmd.exe), and mainframe operating systems are
associated with a number of terms.
The typical Unix/Linux/POSIX-compliant installation includes
the KornShell ( ksh ) in several possible versions such as ksh88,
Korn Shell '93 and others. The oldest shell still in common use is
the Bourne shell ( sh ); Unix systems invariably also include the C
shell ( csh ), Bash ( bash ), a Remote Shell ( rsh ), a Secure
Shell ( ssh ) for SSL telnet connections, and a shell which is a
main component of the Tcl/Tk installation usually
called  tclsh ; wish is a GUI-based Tcl/Tk shell. The C and Tcl
shells have syntax quite similar to that of said programming
languages, and the Korn shells and Bash are developments of the
Bourne shell, which is based on the ALGOL language with
elements of a number of others added as well.[2] On the other
hand, the various shells plus tools like awk, sed, grep,
and BASIC, Lisp, C and so forth contributed to
the Perl programming language.[3]
Other shells available on a machine or available for download
and/or purchase include Almquist
shell ( ash ), PowerShell ( msh ), Z shell ( zsh , a particularly
common enhanced KornShell), the Tenex C Shell ( tcsh ), and a
Perl-like shell ( psh ). Related programs such as shells based
on Python, Ruby, C, Java, Perl, Pascal, Rexx &c in various forms
are also widely available. Another somewhat common shell is osh,
whose manual page states it "is an enhanced, backward-
compatible port of the standard command interpreter from Sixth
Edition UNIX."[4]
Windows-Unix interoperability software such as the MKS
Toolkit, Cygwin, UWIN, Interix and others make the above shells
and Unix programming available on Windows systems, providing
functionality all the way down to signals and other inter-process
communication, system calls and APIs. The Hamilton C shell is a
Windows shell that is very similar to the Unix C Shell. Microsoft
distributed Windows Services for UNIX for use with its NT-based
operating systems in particular, which have a
POSIX environmental subsystem.

Basic Shell Commands in Linux


A shell is a special user program that provides an interface to the user
to use operating system services. Shell accepts human-readable
commands from the user and converts them into something which the
kernel can understand. It is a command language interpreter that
executes commands read from input devices such as keyboards or
from files. The shell gets started when the user logs in or starts the
terminal. 

1). Displaying the file contents on the terminal:

 cat: Generally used to concatenate files. It writes its
output to standard output.
 more: A filter for paging through text one screenful at a time.
 less: Used to view files; similar to the more command, but it
allows backward as well as forward movement.
 head: Used to print the first N lines of a file. It accepts N as
input and the default value of N is 10.
 tail: Used to print the last N lines of a file. It accepts N as
input and the default value of N is 10.
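These commands can be tried on a small sample file; a sketch (the file path is arbitrary):

```shell
# build a 5-line sample file, one word per line
printf '%s\n' one two three four five > /tmp/sample.txt

cat /tmp/sample.txt         # prints the whole file
head -n 2 /tmp/sample.txt   # first 2 lines: one, two
tail -n 2 /tmp/sample.txt   # last 2 lines: four, five
```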

2). File and Directory Manipulation Commands:

 mkdir: Used to create a directory if it does not already exist.
It accepts the directory name as an input parameter.
 cp: Copies files and directories from the source path to the
destination path, optionally under a new name. It accepts the
source file/directory and destination file/directory.
 mv: Used to move or rename files and directories. It works
much like the cp command, but it removes the original file or
directory from the source path.
 rm: Used to remove files or directories.
 touch: Used to create an empty file or update a file's timestamp.

3). Extract, sort, and filter data Commands:

 grep: Used to search for specified text in a file.
 grep with regular expressions: Used to search for text matching
a specific regular expression in a file.
 sort: Used to sort the contents of files.
 wc: Used to count the number of lines, words, and characters
in a file.
 cut: Used to extract a specified part of each line of a file.
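A small demonstration of these filters; a sketch (the file path and contents are made up):

```shell
# sample data file
printf '%s\n' banana apple cherry > /tmp/fruits.txt

grep 'an' /tmp/fruits.txt    # lines containing "an": banana
sort /tmp/fruits.txt         # apple, banana, cherry
wc -l /tmp/fruits.txt        # number of lines in the file
cut -c1-3 /tmp/fruits.txt    # first 3 characters of each line: ban, app, che
```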

4). Basic Terminal Navigation Commands:

 ls: Gets the list of all files and folders in a directory.
 ls -l: Optional flags modify ls's default behaviour; -l lists
contents in long ("extended") form.
 ls -a: Lists all files including hidden files; add the -a flag.
 cd: Used to change the current directory.
 du: Shows disk usage.
 pwd: Shows the present working directory.
 man: Used to show the manual page of any command present in
Linux.
 rmdir: Used to delete a directory if it is empty.
 ln file1 file2: Creates a hard (physical) link.
 ln -s file1 file2: Creates a symbolic link.
 locate: Used to locate a file in the Linux system.
 echo: Prints text to standard output; with redirection it can
be used to write data into a file.
 df: Used to see the available disk space in each of the
partitions in your system.
 tar: Used to work with tarballs (archive files, optionally
compressed).
5). File Permissions Commands: The chmod and chown commands
are used to control access to files in UNIX and Linux systems.

 chown: Used to change the owner of the file.
 chgrp: Used to change the group owner of the file.
 chmod: Used to modify the access/permissions of a user.

Shell Integer Arithmetic Operators

The following arithmetic operators are supported by the Bourne Shell.
Assume variable a holds 10 and variable b holds 20 −

Operator            Description                                               Example

+ (Addition)        Adds values on either side of the operator                `expr $a + $b` will give 30

- (Subtraction)     Subtracts right hand operand from left hand operand       `expr $a - $b` will give -10

* (Multiplication)  Multiplies values on either side of the operator          `expr $a \* $b` will give 200

/ (Division)        Divides left hand operand by right hand operand           `expr $b / $a` will give 2

% (Modulus)         Divides left hand operand by right hand operand           `expr $b % $a` will give 0
                    and returns the remainder

= (Assignment)      Assigns right operand to left operand                     a=$b would assign the value
                                                                              of b into a

== (Equality)       Compares two numbers; returns true if both are the same   [ $a == $b ] would return false.

!= (Not Equality)   Compares two numbers; returns true if both are different  [ $a != $b ] would return true.

It is very important to understand that all the conditional expressions
should be inside square brackets with spaces around them; for example, [ $a
== $b ] is correct whereas [$a==$b] is incorrect.
All the arithmetic calculations are done using long integers.

Example
Here is an example which uses all the arithmetic operators −
#!/bin/sh
a=10
b=20
val=`expr $a + $b`
echo "a + b : $val"
val=`expr $a - $b`
echo "a - b : $val"

val=`expr $a \* $b`
echo "a * b : $val"

val=`expr $b / $a`
echo "b / a : $val"

val=`expr $b % $a`
echo "b % a : $val"

if [ $a == $b ]
then
echo "a is equal to b"
fi

if [ $a != $b ]
then
echo "a is not equal to b"
fi
The above script will produce the following result −
a + b : 30
a - b : -10
a * b : 200
b / a : 2
b % a : 0
a is not equal to b
The following points need to be considered when using the arithmetic
operators −
 There must be spaces between the operators and the expressions. For
example, 2+2 is not correct; it should be written as 2 + 2.
 The complete expression should be enclosed between backquotes
(` `), also called backticks.
 You should escape the * symbol as \* for multiplication.
 if...then...fi is a decision-making statement.
Integer Arithmetic and String Manipulation, Special Command line Characters, Decision
Making and Loop Control, Controlling Terminal Input, Trapping Signals, Arrays, I/O
Redirection and Piping,

String Manipulation in Shell Scripting


String manipulation means performing operations on a string that
result in changes to its contents. In shell scripting, this can be
done in two ways: pure bash string manipulation, and string
manipulation via external commands.
Basics of pure bash string manipulation:
1. Assigning content to a variable and printing its content: In
bash, ‘$‘ followed by the variable name is used to print the content of
the variable. Shell internally expands the variable with its value. This
feature of the shell is also known as parameter expansion. Shell does
not care about the type of variables and can store strings, integers, or
real numbers.
Syntax:

VariableName='value'
echo $VariableName
or
VariableName="value"
echo ${VariableName}
or
VariableName=value
echo "$VariableName"
Note: There should not be any space around the “=” sign in the
variable assignment. When you use VariableName=value, the shell
treats the “=” as an assignment operator and assigns the value to the
variable. When you use VariableName = value, the shell assumes
that VariableName is the name of a command and tries to execute it.

2. To print the length of a string inside the Bash shell: the '#'
symbol is used to print the length of a string.
Syntax:
variableName=value
echo ${#variableName}
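For example (the variable name and value are arbitrary):

```shell
var="GeeksForGeeks"
echo ${#var}    # prints 13, the number of characters in the string
```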

3. Concatenate strings inside Bash Shell using variables: In bash,


listing the strings together concatenates the string. The resulting
string so formed is a new string containing all the listed strings.
Syntax:
var=${var1}${var2}${var3}
or
var=$var1$var2$var3
or
var="$var1""$var2""$var3"
To concatenate any character between the strings:
The following will insert "**" between the strings:
var=${var1}**${var2}**${var3}
or
var=$var1**$var2**$var3
or
var="$var1"**"$var2"**"$var3"
To concatenate the strings with spaces between them, quote the
whole value:
var="${var1} ${var2} ${var3}"
or
var="$var1 $var2 $var3"
or
echo ${var1} ${var2} ${var3}
Note: While concatenating strings via space, avoid using var=$var1
$var2 $var3. Here, the shell assumes $var2 and $var3 as commands
and tries to execute them, resulting in an error.

4. Concatenate strings inside Bash Shell using an array: In bash,


arrays can also be used to concatenate strings. 
Syntax:
To create an array:
arr=("value1" value2 $value3)
To print an array:
echo ${arr[@]}
To print length of an array:
echo ${#arr[@]}
Using indices (index starts from 0):
echo ${arr[index]}
Note: echo ${arr} is the same as echo ${arr[0]}
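Putting the array operations above together; a sketch, assuming bash (arrays are a bash feature, and the element values are made up):

```shell
arr=("Geeks" "For" "Geeks")   # create the array
echo ${arr[@]}                # all values: Geeks For Geeks
echo ${#arr[@]}               # length of the array: 3
echo ${arr[1]}                # element at index 1: For
printf '%s\n' "${arr[@]}" | tr -d '\n'   # elements joined together: GeeksForGeeks
```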

5. Extract a substring from a string: In Bash, a substring of


characters can be extracted from a string.
Syntax:
${string:position}        --> returns the substring of $string from $position to the end
${string:position:length} --> returns a substring of $length characters starting at $position
Note: $length and $position must always be greater than or equal to
zero.
If the $position is less than 0, it will print the complete string.
If the $length is less than 0, it will raise an error and will not execute.
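For example, assuming bash and an arbitrary sample string:

```shell
string="GeeksForGeeks"
echo ${string:5}      # ForGeeks  (from index 5 to the end)
echo ${string:5:3}    # For       (3 characters starting at index 5)
```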

6. Substring matching: In Bash, the shortest and longest possible


match of a substring can be found and deleted from either front or
back.
Syntax:
To delete the shortest substring match from the front of $string: ${string#substring}
To delete the shortest substring match from the back of $string: ${string%substring}
To delete the longest substring match from the front of $string: ${string##substring}
To delete the longest substring match from the back of $string: ${string%%substring}

For example, given the string 'Welcome.to.GeeksForGeeks' −
 The first echo statement substring ‘*.‘ matches the characters
ending with a dot, and # deletes the shortest match of the substring
from the front of the string, so it strips the substring ‘Welcome.‘.
 The second echo statement substring ‘.*‘ matches the substring
starting with a dot and ending with characters, and % deletes the
shortest match of the substring from the back of the string, so it
strips the substring ‘.GeeksForGeeks‘
 The third echo statement substring ‘*.‘ matches the characters
ending with a dot, and ## deletes the longest match of the substring
from the front of the string, so it strips the substring ‘Welcome.to.‘
 The fourth echo statement substring ‘.*‘ matches the substring
starting with a dot and ending with characters, and %% deletes the
longest match of the substring from the back of the string, so it strips
the substring ‘.to.GeeksForGeeks‘.
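The example these points describe is not reproduced in the text; a reconstruction from the description, assuming the string is 'Welcome.to.GeeksForGeeks':

```shell
string="Welcome.to.GeeksForGeeks"

echo ${string#*.}    # to.GeeksForGeeks   (shortest '*.' match stripped from the front)
echo ${string%.*}    # Welcome.to         (shortest '.*' match stripped from the back)
echo ${string##*.}   # GeeksForGeeks      (longest '*.' match stripped from the front)
echo ${string%%.*}   # Welcome            (longest '.*' match stripped from the back)
```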

shell decision-making
While writing a shell script, there may be a situation when you need to
adopt one path out of the given two paths. So you need to make use of
conditional statements that allow your program to make correct decisions
and perform the right actions.
Unix Shell supports conditional statements which are used to perform
different actions based on different conditions. We will now understand two
decision-making statements here −
 The if...else statement
 The case...esac statement

The if...else statements


If else statements are useful decision-making statements which can be used
to select an option from a given set of options.
Unix Shell supports following forms of if…else statement −

 if...fi statement
 if...else...fi statement
 if...elif...else...fi statement
Most of the if statements check relations using relational operators
discussed in the previous chapter.
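A minimal sketch of the if...else...fi form, using arbitrary sample values:

```shell
#!/bin/sh
a=10
b=20

if [ $a -gt $b ]
then
   echo "a is greater than b"
else
   echo "a is not greater than b"
fi
# prints: a is not greater than b
```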

The case...esac Statement


You can use multiple if...elif statements to perform a multiway branch.
However, this is not always the best solution, especially when all of the
branches depend on the value of a single variable.
Unix Shell supports case...esac statement which handles exactly this
situation, and it does so more efficiently than repeated if...elif statements.
There is only one form of case...esac statement which has been described in
detail here −

 case...esac statement
The case...esac statement in the Unix shell is very similar to
the switch...case statement found in other programming languages
like C, C++, and Perl.
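A minimal case...esac sketch, with a made-up variable and branch values:

```shell
#!/bin/sh
FRUIT="banana"

case "$FRUIT" in
   "apple")  echo "This is an apple" ;;
   "banana") echo "This is a banana" ;;
   *)        echo "Unknown fruit" ;;     # default branch
esac
# prints: This is a banana
```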

shell loop control


We will learn following two statements that are used to control shell loops−
 The break statement
 The continue statement

The infinite Loop


All the loops have a limited life and they come out once the condition is false
or true depending on the loop.
A loop may continue forever if the required condition is not met. A loop that
executes forever without terminating executes for an infinite number of
times. For this reason, such loops are called infinite loops.

Example
Here is a simple example of an infinite loop using the until statement −
#!/bin/sh
a=10
until [ $a -lt 10 ]
do
   echo $a
   a=`expr $a + 1`
done
This loop continues forever because a is always greater than or equal to
10 and never becomes less than 10.

The break Statement


The break statement is used to terminate the execution of the entire loop,
after completing the execution of all of the lines of code up to the break
statement. It then steps down to the code following the end of the loop.

Syntax
The following break statement is used to come out of a loop −
break
The break command can also be used to exit from a nested loop using this
format −
break n
Here n specifies the nth enclosing loop to the exit from.

Example
Here is a simple example which shows that the loop terminates as soon
as a becomes 5 −
#!/bin/sh
a=0
while [ $a -lt 10 ]
do
   echo $a
   if [ $a -eq 5 ]
   then
      break
   fi
   a=`expr $a + 1`
done
Upon execution, you will receive the following result −
0
1
2
3
4
5
Here is a simple example of a nested for loop. This script breaks out of
both loops if var1 equals 2 and var2 equals 0 −
#!/bin/sh
for var1 in 1 2 3
do
   for var2 in 0 5
   do
      if [ $var1 -eq 2 -a $var2 -eq 0 ]
      then
         break 2
      else
         echo "$var1 $var2"
      fi
   done
done
Upon execution, you will receive the following result. The inner loop has
a break command with the argument 2. This indicates that when the
condition is met, execution breaks out of the outer loop and therefore
out of the inner loop as well.
1 0
1 5

The continue statement


The continue statement is similar to the break command, except that it
causes the current iteration of the loop to exit, rather than the entire loop.
This statement is useful when an error has occurred but you want to try to
execute the next iteration of the loop.

Syntax
continue
Like with the break statement, an integer argument can be given to the
continue command to skip commands from nested loops.
continue n
Here n specifies the nth enclosing loop to continue from.

Example
The following loop makes use of the continue statement, which skips the
rest of the current iteration and starts processing the next one −
#!/bin/sh
NUMS="1 2 3 4 5 6 7"
for NUM in $NUMS
do
   Q=`expr $NUM % 2`
   if [ $Q -eq 0 ]
   then
      echo "Number is an even number!!"
      continue
   fi
   echo "Found odd number"
done
Upon execution, you will receive the following result −
Found odd number
Number is an even number!!
Found odd number
Number is an even number!!
Found odd number
Number is an even number!!
Found odd number

Signals and Traps


Signals are software interrupts sent to a program to indicate that an
important event has occurred. The events can vary from user requests to
illegal memory access errors. Some signals, such as the interrupt signal,
indicate that a user has asked the program to do something that is not in
the usual flow of control.
The following table lists out common signals you might encounter and want
to use in your programs −

Signal Name   Signal Number   Description

SIGHUP        1               Hang up detected on controlling terminal or death of controlling process

SIGINT        2               Issued if the user sends an interrupt signal (Ctrl + C)

SIGQUIT       3               Issued if the user sends a quit signal (Ctrl + D)

SIGFPE        8               Issued if an illegal mathematical operation is attempted

SIGKILL       9               If a process gets this signal it must quit immediately and will not perform any clean-up operations

SIGALRM       14              Alarm clock signal (used for timers)

SIGTERM       15              Software termination signal (sent by kill by default)

List of Signals
There is an easy way to list all the signals supported by your system.
Just issue the kill -l command and it will display all the supported signals

$ kill -l 1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP
6) SIGABRT 7) SIGBUS 8) SIGFPE 9) SIGKILL 10) SIGUSR1 11)
SIGSEGV 12) SIGUSR2 13) SIGPIPE 14) SIGALRM 15) SIGTERM 16)
SIGSTKFLT 17) SIGCHLD 18) SIGCONT 19) SIGSTOP 20) SIGTSTP 21)
SIGTTIN 22) SIGTTOU 23) SIGURG 24) SIGXCPU 25) SIGXFSZ 26)
SIGVTALRM 27) SIGPROF 28) SIGWINCH 29) SIGIO 30) SIGPWR 31)
SIGSYS 34) SIGRTMIN 35) SIGRTMIN+1 36) SIGRTMIN+2 37) SIGRTMIN+3
38) SIGRTMIN+4 39) SIGRTMIN+5 40) SIGRTMIN+6 41) SIGRTMIN+7 42)
SIGRTMIN+8 43) SIGRTMIN+9 44) SIGRTMIN+10 45) SIGRTMIN+11 46)
SIGRTMIN+12 47) SIGRTMIN+13 48) SIGRTMIN+14 49) SIGRTMIN+15 50)
SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12 53) SIGRTMAX-11 54)
SIGRTMAX-10 55) SIGRTMAX-9 56) SIGRTMAX-8 57) SIGRTMAX-7 58)
SIGRTMAX-6 59) SIGRTMAX-5 60) SIGRTMAX-4 61) SIGRTMAX-3 62)
SIGRTMAX-2 63) SIGRTMAX-1 64) SIGRTMAX
The actual list of signals varies between Solaris, HP-UX, and Linux.

Default Actions
Every signal has a default action associated with it. The default action for a
signal is the action that a script or program performs when it receives a
signal.
Some of the possible default actions are −
 Terminate the process.
 Ignore the signal.
 Dump core. This creates a file called core containing the memory
image of the process when it received the signal.
 Stop the process.
 Continue a stopped process.

Sending Signals
There are several methods of delivering signals to a program or script. One of
the most common is for a user to type CONTROL-C or the INTERRUPT
key while a script is executing.
When you press the Ctrl+C key, a SIGINT is sent to the script and as per
defined default action script terminates.
The other common method for delivering signals is to use the kill command,
the syntax of which is as follows −
$ kill -signal pid
Here signal is either the number or name of the signal to deliver and pid is
the process ID that the signal should be sent to. For Example −
$ kill -1 1001
The above command sends the HUP or hang-up signal to the program that is
running with process ID 1001. To send a kill signal to the same process,
use the following command −
$ kill -9 1001
This kills the process running with process ID 1001.

Trapping Signals
When you press the Ctrl+C or Break key at your terminal during execution of
a shell program, normally that program is immediately terminated, and your
command prompt returns. This may not always be desirable. For instance,
you may end up leaving a bunch of temporary files that won't get cleaned
up.
Trapping these signals is quite easy, and the trap command has the
following syntax −
$ trap commands signals
Here command can be any valid Unix command, or even a user-defined
function, and signal can be a list of any number of signals you want to trap.
There are two common uses for trap in shell scripts −

 Clean up temporary files


 Ignore signals
Cleaning Up Temporary Files
As an example of the trap command, the following shows how you can
remove some files and then exit if someone tries to abort the program from
the terminal −
$ trap "rm -f $WORKDIR/work1$$ $WORKDIR/dataout$$; exit" 2
From the point in the shell program at which this trap is executed, the two
files work1$$ and dataout$$ will be automatically removed if signal
number 2 is received by the program.
Hence, if the user interrupts the execution of the program after this trap is
executed, you can be assured that these two files will be cleaned up.
The exit command that follows the rm is necessary because without it, the
execution would continue in the program at the point that it left off when the
signal was received.
Signal number 1 is generated for hangup. Either someone intentionally
hangs up the line or the line gets accidentally disconnected.
You can modify the preceding trap to also remove the two specified files in
this case by adding signal number 1 to the list of signals −
$ trap "rm $WORKDIR/work1$$ $WORKDIR/dataout$$; exit" 1 2
Now these files will be removed if the line gets hung up or if the Ctrl+C key
gets pressed.
The commands specified to trap must be enclosed in quotes, if they contain
more than one command. Also note that the shell scans the command line at
the time that the trap command gets executed and also when one of the
listed signals is received.
Thus, in the preceding example, the value of WORKDIR and $$ will be
substituted at the time that the trap command is executed. If you wanted
this substitution to occur at the time that either signal 1 or 2 was received,
you can put the commands inside single quotes −
$ trap 'rm $WORKDIR/work1$$ $WORKDIR/dataout$$; exit' 1 2
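The cleanup pattern can be demonstrated in a self-contained way by having a script send the interrupt signal to itself; a sketch (the temp-file path is made up, and the signal is delivered with kill rather than Ctrl+C so it runs non-interactively):

```shell
sh -c '
   WORKFILE=/tmp/trap_demo.$$           # hypothetical temporary file
   trap "rm -f $WORKFILE; echo cleaned up; exit" 1 2 15
   touch $WORKFILE
   echo working...
   kill -2 $$                           # simulate the user pressing Ctrl+C
   echo never reached                   # the trap exits before this runs
'
# prints: working...
#         cleaned up
```

Because the trap both removes the file and calls exit, the "never reached" line is skipped, just as described above.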

Ignoring Signals
If the command listed for trap is null, the specified signal will be ignored
when received. For example, the following command specifies that the
interrupt signal is to be ignored −
$ trap '' 2
You might want to ignore certain signals when performing an operation
that you don't want to be interrupted. You can specify multiple signals to
be ignored as follows −
$ trap '' 1 2 3 15
Note that the first argument (the null command) must be present for a signal
to be ignored; this is not equivalent to writing the following, which has a
separate meaning of its own −
$ trap 2
If you ignore a signal, all subshells also ignore that signal. However, if you
specify an action to be taken on the receipt of a signal, all subshells will still
take the default action on receipt of that signal.

Resetting Traps
After you've changed the default action to be taken on receipt of a signal, you
can change it back with trap by simply omitting the first argument; so −
$ trap 1 2
This resets the action to be taken on the receipt of signals 1 or 2 back to the
default.
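A short sketch of ignoring and then restoring a signal (the script sends itself the interrupt to show the effect):

```shell
#!/bin/bash
# Sketch: ignore SIGINT, prove it was ignored, then restore the default.
trap '' 2                 # null command: interrupts are now ignored
kill -2 $$                # this SIGINT has no effect
echo "survived the interrupt"
trap 2                    # omitting the command restores the default action
```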

Array Basics in Shell Scripting


Consider a situation where we want to store 1000 numbers and
perform operations on them. With simple variables we would have
to create 1000 variables and then operate on each of them, and it
is difficult to handle such a large number of variables.
It is better to store values of the same type in an array and then
access them via an index number.
Array in Shell Scripting
An array is a systematic arrangement of data of the same type.
In a shell script, however, an array is a variable that contains
multiple values, which may be of the same or different types, since
by default everything in a shell script is treated as a string. Arrays
are zero-based, i.e., indexing starts at 0.

How to Declare Array in Shell Scripting?

We can declare an array in a shell script in different ways.

1. Indirect Declaration
In indirect declaration, we assign a value at a particular index of
the array variable. There is no need to declare the array first.
ARRAYNAME[INDEXNR]=value

2. Explicit Declaration
In explicit declaration, we first declare the array and then assign
the values.

declare -a ARRAYNAME

3. Compound Assignment
In compound assignment, we declare the array with a set of
values.
We can add more values later too.
ARRAYNAME=(value1 value2 .... valueN)
or, with explicit indices:
ARRAYNAME=([1]=10 [2]=20 [3]=30)
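The three styles above can be seen side by side in this small sketch (the variable names and values are just examples):

```shell
#!/bin/bash
# Sketch of the three declaration styles; names and values are examples.
marks[0]=75                # 1. indirect: assign straight to an index
declare -a scores          # 2. explicit: declare first ...
scores[0]=10               # ... then assign
nums=(10 20 30)            # 3. compound: several values at once
nums+=(40)                 # more values can be appended later
echo "${marks[0]} ${scores[0]} ${nums[3]}"    # 75 10 40
```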

To Print Array Values in a Shell Script

To print all elements

[@] and [*] both refer to all elements of the array.

echo ${ARRAYNAME[*]}

#! /bin/bash

# To declare static Array 

arr=(prakhar ankit 1 rishabh manish abhinav)

# To print all elements of array

echo ${arr[@]}       

echo ${arr[*]}       
echo ${arr[@]:0}    

echo ${arr[*]:0}    

Output:
prakhar ankit 1 rishabh manish abhinav
prakhar ankit 1 rishabh manish abhinav
prakhar ankit 1 rishabh manish abhinav
prakhar ankit 1 rishabh manish abhinav
To Print first element

# To print first element

echo ${arr[0]}        

echo ${arr}        

Output:

prakhar
prakhar
To Print Selected index element
echo ${ARRAYNAME[INDEXNR]}

# To print particular element

echo ${arr[3]}        

echo ${arr[1]}        

Output:
rishabh
ankit
To print elements from a particular index
echo ${ARRAYNAME[@]:STARTING_INDEX}
(Using a single index instead of @, e.g. ${ARRAYNAME[0]:1}, takes a substring of that one element.)

# To print elements from a particular index

echo ${arr[@]:0}     

echo ${arr[@]:1}

echo ${arr[@]:2}     

echo ${arr[0]:1}    

Output:
prakhar ankit 1 rishabh manish abhinav
ankit 1 rishabh manish abhinav
1 rishabh manish abhinav
rakhar

To print elements in range


echo ${ARRAYNAME[@]:STARTING_INDEX:COUNT_ELEMENT}

# To print elements in range

echo ${arr[@]:1:4}     

echo ${arr[@]:2:3} 

echo ${arr[0]:1:3}    

Output:
ankit 1 rishabh manish
1 rishabh manish
rak
To count the length of an element in an array
Use # (hash) to print the length of a particular element:

# Length of Particular element

echo ${#arr[0]}        

echo ${#arr}        

Output:
7
7
To count length of Array.

# Size of an Array

echo ${#arr[@]}        

echo ${#arr[*]}        

Output:
6
6
To Search in an Array
arr[@] : all array elements.
/Search_using_Regular_Expression/ : the pattern to search for.
This expands to every element with the matched text removed, so
elements that fully match the pattern come out empty and only the
non-matching elements remain visible.
It does not alter the original array elements.

# Search in Array

echo ${arr[@]/*[aA]*/}    
Output:
1
To Search & Replace in Array
//Search_using_Regular_Expression/Replace  :
Search & Replace
Search & Replace does not change in Original Value of
Array Element. It just returned the new value. So you can
store this value in same or different variable.

# Replacing Substring Temporary

echo ${arr[@]//a/A}         

echo ${arr[@]}             

echo ${arr[0]//r/R}        

Output:
prAkhAr Ankit 1 rishAbh mAnish AbhinAv
prakhar ankit 1 rishabh manish abhinav
pRakhaR
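Since the original array is untouched, a common pattern is to capture the result in another array, as in this sketch:

```shell
#!/bin/bash
# Sketch: capturing the search-and-replace result; the array is unchanged.
arr=(prakhar ankit)
replaced=("${arr[@]//a/A}")    # store the new values in another array
echo "${replaced[0]}"          # prAkhAr
echo "${arr[0]}"               # prakhar (original is untouched)
```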
To delete an array variable in a shell script
To delete the element at index 1:
unset ARRAYNAME[1]
To delete the whole Array
unset ARRAYNAME
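A short sketch combining iteration with unset (the values are examples); note that deleting an element does not renumber the remaining indices:

```shell
#!/bin/bash
# Sketch: loop over an array, then delete one element with unset.
fruits=(apple banana cherry)
for f in "${fruits[@]}"; do
    echo "fruit: $f"
done
unset 'fruits[1]'             # delete index 1; other indices keep their numbers
echo "${#fruits[@]}"          # 2
echo "${!fruits[@]}"          # 0 2  (the surviving indices)
```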

#! /bin/bash

# To declare static Array 

arr=(prakhar ankit 1 rishabh manish abhinav)

  
# To print all elements of array

echo ${arr[@]}        # prakhar ankit 1 rishabh manish abhinav

echo ${arr[*]}        # prakhar ankit 1 rishabh manish abhinav

echo ${arr[@]:0}    # prakhar ankit 1 rishabh manish abhinav

echo ${arr[*]:0}    # prakhar ankit 1 rishabh manish abhinav

  

# To print first element

echo ${arr[0]}        # prakhar

echo ${arr}            # prakhar

  

# To print particular element

echo ${arr[3]}        # rishabh
echo ${arr[1]}        # ankit
  
# To print elements from a particular index
echo ${arr[@]:0}    # prakhar ankit 1 rishabh manish abhinav
echo ${arr[@]:1}    # ankit 1 rishabh manish abhinav
echo ${arr[@]:2}    # 1 rishabh manish abhinav
echo ${arr[0]:1}    # rakhar
  
# To print elements in range
echo ${arr[@]:1:4}    # ankit 1 rishabh manish
echo ${arr[@]:2:3}    # 1 rishabh manish
echo ${arr[0]:1:3}    # rak
  
# Length of Particular element
echo ${#arr[0]}        # 7
echo ${#arr}        # 7
  
# Size of an Array
echo ${#arr[@]}        # 6
echo ${#arr[*]}        # 6
  
# Search in Array
echo ${arr[@]/*[aA]*/}    # 1
  
# Replacing Substring Temporary

echo ${arr[@]//a/A}        # prAkhAr Ankit 1 rishAbh mAnish AbhinAv

echo ${arr[@]}            # prakhar ankit 1 rishabh manish abhinav

echo ${arr[0]//r/R}        # pRakhaR

Output:
prakhar ankit 1 rishabh manish abhinav
prakhar ankit 1 rishabh manish abhinav
prakhar ankit 1 rishabh manish abhinav
prakhar ankit 1 rishabh manish abhinav
prakhar
prakhar
rishabh
ankit
prakhar ankit 1 rishabh manish abhinav
ankit 1 rishabh manish abhinav
1 rishabh manish abhinav
rakhar
ankit 1 rishabh manish
1 rishabh manish
rak
7
7
6
6
1
prAkhAr Ankit 1 rishAbh mAnish AbhinAv
prakhar ankit 1 rishabh manish abhinav
pRakhaR
    

I/O Redirection
In this lesson, we will explore a powerful feature used by command line
programs called input/output redirection. As we have seen, many commands
such as ls print their output on the display. This does not have to be the
case, however. By using some special notations we can redirect the output of
many commands to files, devices, and even to the input of other commands.

Standard Output
Most command line programs that display their results do so by sending
their results to a facility called standard output. By default, standard output
directs its contents to the display. To redirect standard output to a file, the
">" character is used like this:

[me@linuxbox me]$ ls > file_list.txt

In this example, the ls command is executed and the results are written to a
file named file_list.txt. Since the output of ls was redirected to the file, no
results appear on the display.

Each time the command above is repeated, file_list.txt is overwritten from the
beginning with the output of the command ls. To have the new
results appended to the file instead, we use ">>" like this:

[me@linuxbox me]$ ls >> file_list.txt

When the results are appended, the new results are added to the end of the
file, thus making the file longer each time the command is repeated. If the file
does not exist when we attempt to append the redirected output, the file will
be created.
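The difference between ">" and ">>" can be seen in this small sketch, which uses a throwaway temp file instead of file_list.txt:

```shell
#!/bin/bash
# Sketch of overwrite versus append; the file is a throwaway temp file.
f=$(mktemp)
echo "first"  >  "$f"    # '>' truncates the file, then writes
echo "second" >> "$f"    # '>>' adds to the end instead
cat "$f"                 # first, then second
rm -f "$f"
```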

Standard Input
Many commands can accept input from a facility called standard input. By
default, standard input gets its contents from the keyboard, but like
standard output, it can be redirected. To redirect standard input from a file
instead of the keyboard, the "<" character is used like this:

[me@linuxbox me]$ sort < file_list.txt


In the example above, we used the sort command to process the contents
of file_list.txt. The results are output on the display since the standard
output was not redirected. We could redirect standard output to another file
like this:

[me@linuxbox me]$ sort < file_list.txt > sorted_file_list.txt

As we can see, a command can have both its input and output redirected. Be
aware that the order of the redirection does not matter. The only requirement
is that the redirection operators (the "<" and ">") must appear after the other
options and arguments in the command.
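Here is a self-contained sketch of the same idea, using temp files in place of file_list.txt and sorted_file_list.txt:

```shell
#!/bin/bash
# Sketch: redirecting both stdin and stdout of sort.
in=$(mktemp); out=$(mktemp)
printf 'banana\napple\ncherry\n' > "$in"
sort < "$in" > "$out"     # input from $in, sorted output into $out
cat "$out"                # apple, banana, cherry, one per line
rm -f "$in" "$out"
```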

Pipelines
The most useful and powerful thing we can do with I/O redirection is to
connect multiple commands together to form what are called pipelines. With
pipelines, the standard output of one command is fed into the standard
input of another. Here is a very useful example:

[me@linuxbox me]$ ls -l | less

In this example, the output of the ls command is fed into less. By using
this "| less" trick, we can make any command have scrolling output.

By connecting commands together, we can accomplish amazing feats. Here
are some examples to try:

Examples of commands used together with pipelines

Command                        What it does

ls -lt | head                  Displays the 10 newest files in the current directory.

du | sort -nr                  Displays a list of directories and how much space they
                               consume, sorted from the largest to the smallest.

find . -type f -print | wc -l  Displays the total number of files in the current working
                               directory and all of its subdirectories.
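In the same spirit as the table above, this one-liner counts the unique lines in some sample input:

```shell
# Pipeline sketch: sort the sample lines, drop duplicates, count what's left.
printf 'b\na\nb\nc\na\n' | sort | uniq | wc -l    # prints 3
```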

Filters
One kind of program frequently used in pipelines is called a filter. Filters take
standard input and perform an operation upon it and send the results to
standard output. In this way, they can be combined to process information in
powerful ways. Here are some of the common programs that can act as
filters:
Common filter commands

Program What it does

sort Sorts standard input then outputs the sorted result on standard
output.

uniq Given a sorted stream of data from standard input, it removes
duplicate lines of data (i.e., it makes sure that every line is
unique).

grep Examines each line of data it receives from standard input and
outputs every line that contains a specified pattern of characters.

fmt Reads text from standard input, then outputs formatted text on
standard output.

pr Takes text input from standard input and splits the data into
pages with page breaks, headers and footers in preparation for
printing.

head Outputs the first few lines of its input. Useful for getting the
header of a file.

tail Outputs the last few lines of its input. Useful for things like
getting the most recent entries from a log file.

tr Translates characters. Can be used to perform tasks such as
upper/lowercase conversions or changing line termination
characters from one type to another (for example, converting DOS
text files into Unix style text files).

sed Stream editor. Can perform more sophisticated text translations
than tr.

awk An entire programming language designed for constructing filters.
Extremely powerful.
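Several of these filters can be chained in a single pipeline, as in this sketch on sample input:

```shell
# Filter sketch: sort, de-duplicate, keep lines containing 'o', uppercase them.
printf 'dog\ncat\ndog\nowl\n' | sort | uniq | grep 'o' | tr 'a-z' 'A-Z'
# DOG
# OWL
```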

Performing tasks with pipelines


1. Printing from the command line. Linux provides a program
called lpr that accepts standard input and sends it to the printer. It is often
used with pipes and filters. Here are a couple of examples:

cat poorly_formatted_report.txt | fmt | pr | lpr

cat unsorted_list_with_dupes.txt | sort | uniq | pr | lpr

In the first example, we use cat to read the file and output it to standard
output, which is piped into the standard input of fmt. fmt formats the text
into neat paragraphs and outputs it to standard output, which is piped into
the standard input of pr. pr splits the text neatly into pages and outputs it to
standard output, which is piped into the standard input of lpr. lpr takes its
standard input and sends it to the printer.

The second example starts with an unsorted list of data with duplicate
entries. First, cat sends the list into sort which sorts it and feeds it
into uniq which removes any duplicates. Next pr and lpr are used to
paginate and print the list.

2. Viewing the contents of tar files. Often you will see software
distributed as a gzipped tar file. This is a traditional Unix style tape archive
file (created with tar) that has been compressed with gzip. You can recognize
these files by their traditional file extensions, ".tar.gz" or ".tgz". You can use
the following command to view the directory of such a file on a Linux system:

tar tzvf name_of_file.tar.gz | less
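To try this without downloading anything, the following sketch builds a tiny .tar.gz and lists it (the file and directory names are examples):

```shell
#!/bin/bash
# Sketch: create a small gzipped tar archive, then list its contents.
cd "$(mktemp -d)"
mkdir proj
touch proj/a.txt proj/b.txt
tar czf proj.tar.gz proj     # c = create, z = gzip, f = archive file name
tar tzf proj.tar.gz          # t = list; shows proj/ and the two files inside
```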

File handling in Shell Scripting

The bash shell provides lots of commands for manipulating files on the Linux
filesystem. This section walks you through the basic commands you will need to
work with files from the CLI for all your file-handling needs.

Creating files
Every once in a while you will run into a situation where you need to create an
empty file. Sometimes applications expect a log file to be present before they can
write to it. In these situations, you can use the touch command to easily create
an empty file:
$ touch test1
$ ls -il test1
1954793 -rw-r--r-- 1 rich rich 0 Sep 1 09:35 test1
$
The touch command creates the new file you specify and assigns your
username as the file owner. The -il parameters were used with the ls
command; the first entry in the listing shows the inode number assigned to
the file. Every file on the Linux system has a unique inode number.

Notice that the file size is zero, since the touch command just created an empty
file. The touch command can also be used to change the access and modification
times on an existing file without changing the file contents:
$ touch test1
$ ls -l test1
-rw-r--r-- 1 rich rich 0 Sep 1 09:37 test1
$
The modification time of test1 is now updated from the original time. If you want
to change only the access time, use the -a parameter. To change only the
modification time, use the -m parameter. By default touch uses the current
time. You can specify the time by using the -t parameter with a specific
timestamp:
$ touch -t 200812251200 test1
$ ls -l test1
-rw-r--r-- 1 rich rich 0 Dec 25 2008 test1
$
Now the modification time for the file is set to a date significantly in the future
from the current time.

Copying files
Copying files and directories from one location in the filesystem to another is a
common practice for system administrators. The cp command provides this
feature.

In its most basic form, the cp command uses two parameters: the source object
and the destination object: cp source destination

When both the source and destination parameters are filenames, the cp
command copies the source file to a new file with the filename specified as the
destination. The new file acts like a brand new file, with updated file creation
and last modified times:
$ cp test1 test2
$ ls -il
total 0
1954793 -rw-r--r-- 1 rich rich 0 Dec 25 2008 test1
1954794 -rw-r--r-- 1 rich rich 0 Sep 1 09:39 test2
$
The new file test2 shows a different inode number, indicating that it’s a
completely new file. You’ll also notice that the modification time for the test2 file
shows the time that it was created. If the destination file already exists, the cp
command will prompt you to answer whether or not you want to overwrite it:
$ cp test1 test2
cp: overwrite `test2’? y
$
If you don’t answer y, the file copy will not proceed. You can also copy a file to
an existing directory:
$ cp test1 dir1
$ ls -il dir1
total 0
1954887 -rw-r--r-- 1 rich rich 0 Sep 6 09:42 test1
$
The new file is now under the dir1 directory, using the same filename as the
original. These examples all used relative pathnames, but you can just as easily
use the absolute pathname for both the source and destination objects.

To copy a file to the current directory you’re in, you can use the dot symbol:
$ cp /home/rich/dir1/test1 .
cp: overwrite `./test1’?
As with most commands, the cp command has a few command line parameters
to help you out. These are shown in Table 3-63.

Use the -p parameter to preserve the file access or modification times of the
original file for the copied file.
$ cp -p test1 test3
$ ls -il
total 4
1954886 drwxr-xr-x 2 rich rich 4096 Sep 1 09:42 dir1/
1954793 -rw-r--r-- 1 rich rich 0 Dec 25 2008 test1
1954794 -rw-r--r-- 1 rich rich 0 Sep 1 09:39 test2
1954888 -rw-r--r-- 1 rich rich 0 Dec 25 2008 test3
$
Now, even though the test3 file is a completely new file, it has the same
timestamps as the original test1 file.
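This preservation can be checked from a script; the sketch below uses bash's -nt/-ot file tests, which compare modification times:

```shell
#!/bin/bash
# Sketch: verify that cp -p preserves the modification time.
cd "$(mktemp -d)"
touch -t 200812251200 test1   # give test1 a known timestamp
cp -p test1 test3             # -p carries the times over to the copy
# If test3 is neither newer nor older than test1, the times match.
[ ! test1 -nt test3 ] && [ ! test1 -ot test3 ] && echo "timestamps preserved"
```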

The -R parameter is extremely powerful. It allows you to recursively copy the


contents of an entire directory in one command:
$ cp -R dir1 dir2
$ ls -l
total 8
drwxr-xr-x 2 rich rich 4096 Sep 6 09:42 dir1/
drwxr-xr-x 2 rich rich 4096 Sep 6 09:45 dir2/
-rw-r--r-- 1 rich rich 0 Dec 25 2008 test1
-rw-r--r-- 1 rich rich 0 Sep 6 09:39 test2
-rw-r--r-- 1 rich rich 0 Dec 25 2008 test3
$
Now dir2 is a complete copy of dir1. You can also use wildcard characters in
your cp commands:
$ cp -f test* dir2
$ ls -al dir2
total 12
drwxr-xr-x 2 rich rich 4096 Sep 6 10:55 ./
drwxr-xr-x 4 rich rich 4096 Sep 6 10:46 ../
-rw-r--r-- 1 rich rich 0 Dec 25 2008 test1
-rw-r--r-- 1 rich rich 0 Sep 6 10:55 test2
-rw-r--r-- 1 rich rich 0 Dec 25 2008 test3
$
This command copied all of the files that started with test to dir2 (the cp
parameters are listed in Table 3-63). I included the -f parameter to force the
overwrite of the test1 file that was already in the directory without asking.
Linking files
You may have noticed a couple of the parameters for the cp command referred to
linking files. This is a pretty cool option available in the Linux filesystems. If you
need to maintain two (or more) copies of the same file on the system, instead of
having separate physical copies, you can use one physical copy and multiple
virtual copies, called links. A link is a placeholder in a directory that points to
the real location of the file. There are two different types of file links in Linux:
 A symbolic, or soft, link
 A hard link
The hard link creates a separate file that contains information about the original
file and where to locate it. When you reference the hard link file, it’s just as if
you’re referencing the original file:
$ cp -l test1 test4
$ ls -il
total 16
1954886 drwxr-xr-x 2 rich rich 4096 Sep 1 09:42 dir1/
1954889 drwxr-xr-x 2 rich rich 4096 Sep 1 09:45 dir2/
1954793 -rw-r--r-- 2 rich rich 0 Sep 1 09:51 test1
1954794 -rw-r--r-- 1 rich rich 0 Sep 1 09:39 test2
1954888 -rw-r--r-- 1 rich rich 0 Dec 25 2008 test3
1954793 -rw-r--r-- 2 rich rich 0 Sep 1 09:51 test4
$
The -l parameter created a hard link for the test1 file called test4. When I
performed the file listing, you can see that the inode number of both the test1
and test4 files are the same, indicating that, in reality, they are both the same
file. Also notice that the link count (the third item in the listing) now shows that
both files have two links.

On the other hand, the -s parameter creates a symbolic, or soft, link:

$ cp -s test1 test5
$ ls -il test*
total 16
1954793 -rw-r--r-- 2 rich rich 6 Sep 1 09:51 test1
1954794 -rw-r--r-- 1 rich rich 0 Sep 1 09:39 test2
1954888 -rw-r--r-- 1 rich rich 0 Dec 25 2008 test3
1954793 -rw-r--r-- 2 rich rich 6 Sep 1 09:51 test4
1954891 lrwxrwxrwx 1 rich rich 5 Sep 1 09:56 test5 -> test1
$
There are a couple of things to notice in the file listing. First, you'll notice that
the new test5 file has a different inode number than the test1 file, indicating
that the Linux system treats it as a separate file. Second, the file size is different.
A linked file needs to store only information about the source file, not the actual
data in the file. The filename area of the listing shows the relationship between
the two files.
Be careful when copying linked files. If you use the cp command to copy a file
that’s linked to another source file, all you’re doing is making another copy of
the source file. This can quickly get confusing. Instead of copying the linked file,
you can create another link to the original file. You can have many links to the
same file with no problems. However, you also don’t want to create soft links to
other soft-linked files. This creates a chain of links that can not only be
confusing but also be easily broken, causing all sorts of problems.
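The same kinds of links are more commonly made with the ln command (hard by default, -s for symbolic); a quick sketch:

```shell
#!/bin/bash
# Sketch: ln makes the same kinds of links as cp -l and cp -s.
cd "$(mktemp -d)"
echo data > test1
ln test1 test4           # hard link: shares test1's inode
ln -s test1 test5        # soft link: stores only the name "test1"
ls -i test1 test4        # the same inode number appears twice
readlink test5           # prints the link target: test1
```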

Renaming files
In the Linux world, renaming files is called moving. The mv command is
available to move both files and directories to another location:
$ mv test2 test6
$ ls -il test*
1954793 -rw-r--r-- 2 rich rich 6 Sep 1 09:51 test1
1954888 -rw-r--r-- 1 rich rich 0 Dec 25 2008 test3
1954793 -rw-r--r-- 2 rich rich 6 Sep 1 09:51 test4
1954891 lrwxrwxrwx 1 rich rich 5 Sep 1 09:56 test5 -> test1
1954794 -rw-r--r-- 1 rich rich 0 Sep 1 09:39 test6
$
Notice that moving the file changed the filename but kept the same inode
number and the timestamp value. Moving a file with soft links is a problem:
$ mv test1 test8
$ ls -il test*
total 16
1954888 -rw-r--r-- 1 rich rich 0 Dec 25 2008 test3
1954793 -rw-r--r-- 2 rich rich 6 Sep 1 09:51 test4
1954891 lrwxrwxrwx 1 rich rich 5 Sep 1 09:56 test5 -> test1
1954794 -rw-r--r-- 1 rich rich 0 Sep 1 09:39 test6
1954793 -rw-r--r-- 2 rich rich 6 Sep 1 09:51 test8
[rich@test2 clsc]$ mv test8 test1
The test4 file that uses a hard link still uses the same inode number, which is
perfectly fine. However, the test5 file now points to an invalid file, and it is no
longer a valid link.
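The breakage can be demonstrated in a few lines; in this sketch the hard link survives the move while the soft link dangles:

```shell
#!/bin/bash
# Sketch: after mv, the hard link still works; the soft link is broken.
cd "$(mktemp -d)"
echo hello > test1
ln test1 test4           # hard link to the data
ln -s test1 test5        # soft link to the name "test1"
mv test1 test8
cat test4                # hello: the data is still reachable via the inode
cat test5 2>/dev/null || echo "test5 is a broken link"
```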

You can also use the mv command to move directories:

$ mv dir2 dir4
The entire contents of the directory are unchanged. The only thing that changes
is the name of the directory.
Deleting files
Most likely at some point in your Linux career you'll want to be able to delete
existing files. Whether it's to clean up a filesystem or to remove a software
package, there are always opportunities to delete files.

In the Linux world, deleting is called removing. The command to remove files in
the bash shell is rm. The basic form of the rm command is pretty simple:

$ rm -i test2
rm: remove `test2'? y
$ ls -l
total 16
drwxr-xr-x 2 rich rich 4096 Sep 1 09:42 dir1/
drwxr-xr-x 2 rich rich 4096 Sep 1 09:45 dir2/
-rw-r--r-- 2 rich rich 6 Sep 1 09:51 test1
-rw-r--r-- 1 rich rich 0 Dec 25 2008 test3
-rw-r--r-- 2 rich rich 6 Sep 1 09:51 test4
lrwxrwxrwx 1 rich rich 5 Sep 1 09:56 test5 -> test1
$

Notice that the command prompts you to make sure that you’re serious about
removing the file. There’s no trashcan in the bash shell. Once you remove a file
it’s gone forever.
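A minimal sketch of removal in a throwaway directory; the -f parameter skips the confirmation prompt shown above:

```shell
#!/bin/bash
# Sketch: rm deletes for good; -f suppresses the -i style confirmation.
cd "$(mktemp -d)"
touch test2
rm -f test2
[ -e test2 ] || echo "test2 is gone"
```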
