
Unit-1

Syllabus:
Introduction: Data Types, Data structures, Types of data structures, Operations, ADTs,
Algorithms, Comparison of Algorithms, Complexity, Time and space trade off.
Recursion: Introduction, format of recursive functions, recursion Vs. iteration, examples

1. INTRODUCTION
1.1 DATA TYPES
The following are the different data types in C Programming.

Fig.1.1 Data types in C

Fig.1.2 Basic Data types


Table 1: Size, range, and specifier of ‘Integer’ data types

Table 2: Size, range, and specifier of ‘Floating point’ data types

Table 3: Size, range, and specifier of ‘Character’ data types

Enumerated data type:


An enumerated data type is a user-defined data type that consists of integer constants, and
each integer constant is given a name.
The keyword "enum" is used to define the enumerated data type.
The following is the way to define an enum in C:
enum flag{integer_const1, integer_const2, ..., integer_constN};
Example: enum fruits{mango, apple, strawberry, papaya};
enum fruits{
mango=2,
apple=1,
strawberry=5,
papaya=7,
};

void data type:


The void data type means "nothing" or "no value"; void is an empty data type that has no value.
Generally, void is used to specify a function that does not return any value.
We also use the void data type to specify an empty parameter list of a function.
Ex: 1) void fun1()
{
printf("hello world");
}
2) int function(void)

Derived data types:


Derived data types are also called user-defined or secondary data types.
In the C programming language, derived data types are created using the following
concepts...
 Arrays
 Structures
 Unions
 Enumeration

Variables:
Variables in the C programming language are named memory locations where the user can
store different values of the same data type during program execution. In other words, a
variable can be defined as a storage container that holds values of the same data type.
Every variable must be declared in the declaration section before it is used. Every variable
must have a data type that determines the range and type of values to be stored and the size
of the memory to be allocated. A variable name may contain letters, digits, and the
underscore symbol.
The following are the rules to specify a variable name...
 Variable name should not start with a digit.
 Keywords should not be used as variable names.
 A variable name should not contain any special symbols except underscore(_).
 A variable name can be of any length but compiler considers only the first 31
characters of the variable name.
Declaration of a variable:
Declaration of a variable tells the compiler to allocate the required amount of memory with
the specified variable name and allows only specified data type values into that memory
location.
In C programming language, the declaration can be performed either before the function as
global variables or inside any block or function. But it must be at the beginning of the block
or function.
Declaration Syntax:
Datatype variableName;
Example:
int number;
This tells the compiler to allocate 2 bytes of memory (on a compiler where int occupies 2
bytes) with the name number, and to allow only integer values into that memory location.

DATA TYPES IN ANY PROGRAMMING LANGUAGE:


There are two categories of data types that are applicable for all the programming
languages.
1. System defined data types/ Primitive Data Types
2. User defined data types
Primitive data types:
Data types that are defined by system are called primitive data types. The primitive data
types provided by many programming languages are:
EX: int, float, char, double, bool, etc.
The number of bits allocated for each primitive data type depends on
 The programming languages,
 The compiler and
 The operating system.
For the same primitive data type, different languages may use different sizes. Depending on
the size of the data type, the total available values (domain) will also change.
For example, “int” may take 2 bytes or 4 bytes.
If it takes 2 bytes (16 bits), then the total possible values are -32,768 to +32,767
(-2^15 to 2^15 - 1).
If it takes 4 bytes (32 bits), then the possible values are between -2,147,483,648 and
+2,147,483,647 (-2^31 to 2^31 - 1).

User – defined data types:


If the system-defined data types are not enough, then most programming languages allow
the users to define their own data types, called user-defined data types.
Example: structures in C/C++ and classes in Java.
We can combine many system-defined data types into a single user-defined data type and
call it by a name such as "newType". This gives more flexibility and comfort in dealing
with computer memory.

1.2 DATA STRUCTURES


When we have data in variables, we need some mechanism for manipulating that data to
solve problems. Data structure is a particular way of storing and organizing data in a
computer so that it can be used efficiently.
A data structure is a special format for organizing and storing data. General data structure
types include
 Arrays,
 Files,
 Linked lists,
 Stacks,
 Queues,
 Trees,
 Graphs
1.3 TYPES OF DATA STRUCTURES

Fig.1.3 Types of Data structures

Depending on the organization of the elements, data structures are classified into two types:
1.3.1 Linear data structures
If the elements of a data structure are stored in a linear or sequential order, then it is a linear
data structure.
Examples: arrays, linked lists, stacks, and queues.
Linear data structures can be represented in memory in two different ways. One way is to
have a linear relationship between elements by means of sequential memory locations.
The other way is to have a linear relationship between elements by means of links.
Elements are accessed in a sequential order, but it is not compulsory to store all elements
sequentially.
Arrays:
An array is a collection of similar data elements. These data elements have the same data
type.
The elements of the array are stored in consecutive memory locations and are referenced by
an index (also known as the subscript).
In C, arrays are declared using the following syntax:
type name[size];
Example: int marks[10];
The above statement declares an array marks that contains 10 elements. In C, the array
index starts from zero. This means that the array marks will contain 10 elements in all. The
first element will be stored in marks[0], the second element in marks[1], and so on.
Therefore, the last element, that is the 10th element, will be stored in marks[9]. In memory,
the array will be stored as shown in Fig. 1.4.

Fig. 1.4 Memory representation of an array of 10 elements


Arrays are generally used when we want to store large amount of similar type of data. But
they have the following limitations:
 Arrays are of fixed size.
 Data elements are stored in contiguous memory locations, which may not always be
available.
 Insertion and deletion of elements can be problematic because of the shifting of elements
from their positions.
However, these limitations can be overcome by using linked lists.
Linked Lists:
A linked list is a very flexible, dynamic data structure in which elements (called nodes)
form a sequential list. In contrast to static arrays, a programmer need not worry about how
many elements will be stored in the linked list. This feature enables the programmers to
write robust programs which require less maintenance.
In a linked list, each node is allocated space as it is added to the list. Every node in the list
points to the next node in the list. Therefore, in a linked list, every node contains the
following two types of data:
 The value of the node or any other data that corresponds to that node
 A pointer or link to the next node in the list
The last node in the list contains a NULL pointer to indicate that it is the end or tail of the
list. Since the memory for a node is dynamically allocated when it is added to the list, the
total number of nodes that may be added to a list is limited only by the amount of memory
available. Fig. 1.5 shows a linked list of seven nodes.

Fig. 1.5 Simple linked list


Advantage: Easier to insert or delete data elements
Disadvantage: Slow search operation and requires more memory space

Stacks:
A stack is a linear data structure in which insertion and deletion of elements are done at
only one end, which is known as the top of the stack. Stack is called a last-in, first-out
(LIFO) structure because the last element which is added to the stack is the first element
which is deleted from the stack.
In the computer’s memory, stacks can be implemented using arrays or linked lists. Fig.1.6
shows the array implementation of a stack. Every stack has a variable top associated with
it, top is used to store the address of the topmost element of the stack. It is this position
from where the element will be added or deleted. There is another variable MAX, which is
used to store the maximum number of elements that the stack can store.

Fig. 1.6 Array representation of a stack


If top = NULL, then it indicates that the stack is empty and if top = MAX–1, then the stack
is full.
A stack supports two basic operations: push and pop. The push operation adds an element
to the top of the stack. The pop operation removes the element from the top of the stack.
Queues:
A queue is a first-in, first-out (FIFO) data structure in which the element that is inserted
first is the first one to be taken out. The elements in a queue are added at one end called the
rear and removed from the other end called the front. Like stacks, queues can be
implemented by using either arrays or linked lists.
Every queue has front and rear variables that point to the position from where deletions and
insertions can be done, respectively.
Consider the queue shown in Fig. 1.7. Here, front = 0 and rear = 5.

Fig. 1.7 Array representation of a queue

1.3.2 Non – linear data structures


Elements of this data structure are stored/accessed in a non-linear order.
Examples: Trees and graphs.
Trees:
A tree is a non-linear data structure which consists of a collection of nodes arranged in a
hierarchical order. One of the nodes is designated as the root node, and the remaining nodes
can be partitioned into disjoint sets such that each set is a sub-tree of the root.
The simplest form of a tree is a binary tree. A binary tree consists of a root node and
left and right sub-trees, where both sub-trees are also binary trees. Each node contains a
data element, a left pointer which points to the left sub-tree, and a right pointer which points
to the right sub-tree. The root element is the topmost node which is pointed by a ‘root’
pointer. If root = NULL then the tree is empty.
Fig.1.8 shows a binary tree, where R is the root node and T1 and T2 are the left and right
subtrees.
Fig. 1.8 Binary tree
If T1 is non-empty, then T1 is said to be the left successor of R. Likewise, if T2 is non-
empty, then it is called the right successor of R.
Advantage: Provides quick search, insert, and delete operations
Disadvantage: Complicated deletion algorithm

Graphs:
A graph is a non-linear data structure which is a collection of vertices (also called nodes)
and edges that connect these vertices.
A graph is often viewed
1. As a generalization of the tree structure: instead of a purely parent-to-child
relationship between tree nodes, any kind of complex relationship between the nodes
can exist.
2. In a tree structure, nodes can have any number of children but only one parent; a
graph, on the other hand, relaxes all such restrictions.
Fig.1.9 shows a graph with five nodes.

Fig. 1.9 Graph
A node in the graph may represent a city and the edges connecting the nodes can represent
roads. A graph can also be used to represent a computer network where the nodes are
workstations and the edges are the network connections.
Advantage: Best models real-world situations
Disadvantage: Some algorithms are slow and very complex

1.4 OPERATIONS ON DATA STRUCTURES


 Traversing
It means to access each data item exactly once so that it can be processed. For
example, to print the names of all the students in a class.
 Searching
It is used to find the location of one or more data items that satisfy the given
constraint. For example, to find the names of all the students who secured 100 marks
in mathematics.
 Inserting
It is used to add new data items to the given list of data items. For example, adding
the details of a new student who has recently joined the course.
 Deleting
It means to remove (delete) a particular data item from the given collection of data
items. For example, deleting the name of a student who has left the course.
 Sorting
Data items can be arranged in some order like ascending order or descending order.
For example, arranging the names of students in a class in an alphabetical order, or
calculating the top three winners by arranging the participants’ scores in descending
order.
 Merging
Lists of two sorted data items can be combined to form a single list of sorted data
items.

1.5 ABSTRACT DATA TYPES (ADTS)


By default, all primitive data types support basic operations such as addition and
subtraction.
The system provides the implementations for the primitive data types.
For user-defined data types, we need to define operations. The implementation of these
operations can be done when we actually want to use them. User-defined data types are
defined along with their operations.
To simplify the process of solving problems, we combine the data structures with their
operations, and we call these Abstract Data Types (ADTs).
An ADT consists of two parts
 Declaration of data
 Declaration of operations
Commonly used ADTs include Linked Lists, Stacks, Queues, Priority Queues, Binary Trees,
Dictionaries, Disjoint Sets (Union and Find), Hash Tables, Graphs, and many others.
For example, a stack uses the LIFO (Last-In-First-Out) mechanism while storing data.
The last element inserted into the stack is the first element that gets deleted.
Common operations of a stack are
 creating the stack,
 pushing an element onto the stack,
 popping an element from stack,
 finding the current top of the stack,
 finding number of elements in the stack, etc.

Different kinds of ADTs are suited to different kinds of applications.

1.6 ALGORITHMS
Let us consider the problem of preparing an omelette.
To prepare an omelette, we follow the steps given below:
1. Get the frying pan.
2. Get the oil.
3. Do we have oil?
4. If yes, put it in the pan.
5. If no, do we want to buy oil?
6. If yes, then go out and buy.
7. If no, we can terminate.
8. Turn on the stove, etc...

For a given problem (preparing an omelette), we are providing a step-by-step procedure for
solving it.
The formal definition of an algorithm can be stated as:
An algorithm is the step-by-step unambiguous instructions to solve a given problem
Definition: An algorithm is a finite set of instructions which, if followed, accomplish a
particular task.
In addition every algorithm must satisfy the following criteria:
(i) input: there are zero or more quantities which are externally supplied;
(ii) output: at least one quantity is produced;
(iii) definiteness: each instruction must be clear and unambiguous;
(iv) finiteness: if we trace out the instructions of an algorithm, then for all cases the
algorithm will terminate after a finite number of steps;
(v) effectiveness: every instruction must be sufficiently basic that it can in principle be
carried out by a person using only pencil and paper. It is not enough that each operation be
definite as in (iii); it must also be feasible.

An algorithm can be described in many ways. A natural language such as English can be
used, but we must be very careful that the resulting instructions are definite (condition iii).

The Analysis of Algorithms:


To go from city “A” to city “B”, there can be many ways of accomplishing this: by flight,
by bus, by train and also by bicycle.
Depending on the availability and convenience, we choose the one that suits us.
Similarly, in computer science, multiple algorithms are available for solving the same
problem. For example, the sorting problem has many algorithms, like insertion sort,
selection sort, quick sort, and many more.
Algorithm analysis helps us to determine which algorithm is most efficient in terms of time
and space consumed.
The goal of the analysis of algorithms is to compare algorithms (or solutions) mainly in
terms of running time

Running Time Analysis:


It is the process of determining how processing time increases as the size of the problem
(input size) increases. Input size is the number of elements in the input; depending on the
problem type, the input may be of different kinds. The following are common measures of
input size:
 Size of an array
 Polynomial degree
 Number of elements in a matrix
 Number of bits in the binary representation of the input
 Vertices and edges in a graph.

1.7 COMPARISON OF ALGORITHMS


To compare algorithms, let us define a few objective measures:
 Execution times
o Not a good measure, since execution times are specific to a particular
computer
 Number of statements executed
o Not a good measure, since the number of statements varies with the
programming language as well as the style of the individual programmer.
 Ideal solution
o Express the running time of a given algorithm as a function of the input size n
(i.e., f(n)) andcompare these different functions corresponding to running
times.
o This kind of comparison is independent of machine time, programming style,
etc.
Rate of Growth:
The rate at which the running time increases as a function of input is called rate of growth.
Example:
Let us assume that you go to a shop to buy a car and a bicycle. If your friend sees you there
and asks what you are buying, then in general you say you are buying a car.
This is because the cost of the car is high compared to the cost of the bicycle
(approximating the cost of the bicycle to the cost of the car).
Similarly, for a given function we ignore the low-order terms that are relatively
insignificant (for large values of the input size n).
As an example, if n^4, 2n^2, 100n and 500 are the individual costs of some function, we
approximate the total to n^4, since n^4 has the highest rate of growth.
Commonly Used Rates of Growth:

Different approaches to designing an algorithm:


Algorithms are used to manipulate the data contained in data structures. When working
with data structures, algorithms are used to perform operations on the stored data. A
complex algorithm is often divided into smaller units called modules.
This process of dividing an algorithm into modules is called modularization.
The key advantages of modularization are
 It makes the complex algorithm simpler to design and implement.
 Each module can be designed independently
While designing one module, the details of other modules can be ignored, thereby
enhancing clarity in design which in turn simplifies implementation, debugging, testing,
documenting, and maintenance of the overall algorithm.

Fig. 1.10 Different approaches of designing an algorithm


Top-down approach:
 A top-down design approach starts by dividing the complex algorithm into one or
more modules.
 These modules can further be decomposed into one or more sub-modules.
 Top-down design method is a form of stepwise refinement where we begin with the
topmost module and incrementally add modules that it calls.
 Therefore, in a top-down approach, we start from an abstract design and then at each
step, this design is refined into more concrete levels until a level is reached that
requires no further refinement.
Bottom-up approach:
 A bottom-up approach is just the reverse of top-down approach.
 In the bottom-up design, we start with designing the most basic or concrete modules
and then proceed towards designing higher level modules.
 The higher level modules are implemented by using the operations performed by
lower level modules.
 Sub-modules are grouped together to form a higher level module.
 All the higher level modules are clubbed together to form even higher level modules.
 This process is repeated until the design of the complete algorithm is obtained.

Top-down vs bottom-up approach:


1. top-down approach follows a stepwise refinement by decomposing the algorithm
into manageable modules,
2. The bottom-up approach on the other hand defines a module and then groups
together several modules to form a new higher level module.
3. Top-down approach is highly appreciated for ease in documenting the modules,
generation of test cases, implementation of code,and debugging.
4. Bottom up advantage:Information hiding

Control structures used in algorithms:


An algorithm has a finite number of steps. Some steps may involve decision-making
and repetition.
An algorithm may employ one of the following control structures:
 Sequence,
 Decision, and
 Repetition.
Sequence:
Each step of an algorithm is executed in a specified order. The algorithm performs the steps
in a purely sequential order.
Ex: Algorithm to add two numbers

Decision:
Decision statements are used when the execution of a process depends on the outcome of
some condition. For example, if x = y, then print EQUAL
So the general form of IF construct can be given as:
IF condition Then process
A condition may evaluate to either a true value or a false value.
In the above example, the variable x can be either equal to y or not equal to y. However, it
cannot be both true and false. If the condition is true, then the process is executed.
A decision statement can also be stated in the following manner:
IF condition Then process1 ELSE process2
This form is popularly known as the IF-ELSE construct. Here, if the condition is true, then
process1 is executed; otherwise process2 is executed.

Repetition:
It involves executing one or more steps a number of times. It can be implemented using
constructs such as the while, do-while, and for loops. These loops execute one or more
times as long as some condition is true.
Ex: Algorithm to test for equality of two numbers; Algorithm to print the first 10 natural
numbers.

Programming Examples:
1. Write an algorithm for swapping two values.
Step 1: Input first number as A
Step 2: Input second number as B
Step 3: SET TEMP = A
Step 4: SET A = B
Step 5: SET B = TEMP
Step 6: PRINT A, B
Step 7: END

2. Write an algorithm to find the larger of two numbers.


Step 1: Input first number as A
Step 2: Input second number as B
Step 3: IF A>B
PRINT A
ELSE
IF A<B
PRINT B
ELSE
PRINT "The numbers are equal"
[END OF IF]
[END OF IF]
Step 4: END
3. Write an algorithm to find whether a number is even or odd.
Step 1: Input number as A
Step 2: IF A%2 =0
PRINT "EVEN"
ELSE
PRINT "ODD"
[END OF IF]
Step 3: END
4. Write an algorithm to print the grade obtained by a student using the following rules.
Step 1: Enter the Marks obtained as M
Step 2: IF M>=75
PRINT O
Step 3: IF M>=60 AND M<75
PRINT A
Step 4: IF M>=50 AND M<60
PRINT B
Step 5: IF M>=40 AND M<50
PRINT C
ELSE
PRINT D
END
5. Write an algorithm to find the sum of first N natural numbers.
Step 1: Input N
Step 2: SET I = 1, SUM = 0
Step 3: Repeat Step 4 while I <= N
Step 4: SET SUM = SUM + I
SET I = I + 1
[END OF LOOP]
Step 5: PRINT SUM
Step 6: END
Types of Run Time Analysis:
To analyze the given algorithm, we need to know with which inputs the algorithm takes less
time and with which inputs the algorithm takes a long time.
Three types of analysis:
To analyze an algorithm we need some kind of syntax, and that forms the base for
asymptotic analysis/notation.
Worst case
Defines the input for which the algorithm takes a long time (slowest time to complete).
The input is the one for which the algorithm runs the slowest.
Best case
Defines the input for which the algorithm takes the least time (fastest time to complete).
The input is the one for which the algorithm runs the fastest.
Average case
Provides a prediction about the running time of the algorithm. Run the algorithm many
times, using many different inputs that come from some distribution that generates these
inputs, compute the total running time (by adding the individual times), and divide by the
number of trials.

For a given algorithm, we can represent the best, worst and average cases in the form of
expressions.
As an example, let f(n) be the function which represents the given algorithm.

1.8 TIME AND SPACE COMPLEXITY


Analysing an algorithm means determining the amount of resources such as time and
memory needed to execute it.
Algorithms are generally designed to work with an arbitrary number of inputs. The
efficiency or complexity of an algorithm is stated in terms of time and space complexity.
The time complexity of an algorithm is basically the running time of a program as a
function of the input size.
The space complexity of an algorithm is the amount of computer memory that is required
during the program execution as a function of the input size.
The number of machine instructions which a program executes is called its time complexity.
This number is primarily dependent on the size of the program’s input and the algorithm
used.
The space needed by a program depends on the following two parts:
 Fixed part
It is fixed for a given program and does not vary with the problem instance. It includes
the space needed for storing instructions, constants, variables, and structured variables
(like arrays and structures).
 Variable part
It varies from program to program.It includes the space needed for recursion stack,
and for structured variables that are allocated space dynamically during the runtime of
a program.
Worst-case running time:
This denotes the behaviour of an algorithm with respect to the worst possible case of the
input instance. It is an upper bound on the running time for any input. Having knowledge
of the worst-case running time gives us an assurance that the algorithm will never go
beyond this time limit.
Average-case running time:
The average-case running time of an algorithm is an estimate of the running time for an
‘average’ input. It specifies the expected behaviour of the algorithm when the input is
randomly drawn from a given distribution. Average-case running time assumes that all
inputs of a given size are equally likely.
‘Best-case performance’ is used to analyse an algorithm under optimal conditions.
For example, the best case for a simple linear search on an array occurs when the desired
element is the first in the list. It is always recommended to improve the average-case and
worst-case performance of an algorithm.
Amortized running time:
It refers to the time required to perform a sequence of (related) operations averaged over all
the operations performed. Amortized analysis guarantees the average performance of each
operation in the worst case.
1.9 TIME–SPACE TRADE-OFF
The best algorithm to solve a particular problem requires less memory space and takes less
time to complete its execution.
There can be more than one algorithm to solve a particular problem. One may require less
memory space, while the other may require less CPU time to execute. Hence, there exists a
time–space trade-off among algorithms.
If space is a big constraint, then choose a program that takes less space at the cost of more
CPU time. On the contrary, if time is a major constraint, then choose a program that takes
minimum time to execute at the cost of more space.

Expressing Time and Space Complexity


The time and space complexity can be expressed using a function f(n), where n is the input
size for a given instance of the problem being solved.
Expressing the complexity is required when we want to predict the rate of growth of
complexity as the input size of the problem increases.
There are multiple algorithms that find a solution to a given problem and we need to find
the algorithm that is most efficient.
The most widely used notation to express this function f(n) is the Big O notation. It
provides an upper bound for the complexity.

1.9.1 Algorithm Efficiency


If a function is linear (without any loops or recursion), the efficiency of that algorithm, or
the running time of that algorithm, can be given as the number of instructions it contains.
If an algorithm contains loops, then its efficiency may vary depending on the number of
loops and the running time of each loop in the algorithm.
Linear Loops:
To calculate the efficiency of an algorithm that has a single loop, first determine the number
of times the statements in the loop will be executed.
For example, consider the loop given below:
for(i=0;i<100;i++)
statement block;
Here, 100 is the loop factor.
Hence, the general formula in the case of linear loops may be given as
f(n) = n
Consider the loop given below:
for(i=0;i<100;i+=2)
statement block;
Here, the number of iterations is half the number of the loop factor. So, here the efficiency
can be given as
f(n) = n/2
Logarithmic Loops:
In linear loops, the loop update statement either adds to or subtracts from the
loop-controlling variable. In logarithmic loops, the loop-controlling variable is either
multiplied or divided during each iteration of the loop.
For example, look at the loops given below:
for(i=1;i<1000;i*=2)
statement block;

for(i=1000;i>=1;i/=2)
statement block;

When n = 1000, the number of iterations can be given by log2 1000, which is
approximately equal to 10.
The efficiency of loops in which iterations divide or multiply the loop-controlling variables
can be given as
f(n) = log n
Nested Loops:
Loops that contain loops are known as nested loops.
In order to analyse nested loops, we need to determine the number of iterations each loop
completes.
The total is then obtained as the product of the number of iterations in the inner loop and the
number of iterations in the outer loop.
In this case, we analyse the efficiency of the algorithm based on whether it is a linear
logarithmic, quadratic, or dependent quadratic nested loop.
Linear logarithmic loops:
For example,
for(i=0;i<10;i++)
for(j=1; j<10;j*=2)
statement block;
Here, the inner logarithmic loop is controlled by an outer loop which iterates 10 times.
Therefore, according to the formula, the number of iterations for this code can be given as
10 log 10.
In more general terms, the efficiency of such loops can be given as
f(n) = n log n .
Quadratic loops:
In a quadratic loop, the number of iterations in the inner loop is equal to the number of
iterations in the outer loop.
Consider the following code in which the outer loop executes 10 times, and for each
iteration of the outer loop, the inner loop also executes 10 times. Therefore, the efficiency
here is 100.
for(i=0;i<10;i++)
for(j=0; j<10;j++)
statement block;
The generalized formula for a quadratic loop can be given as
f(n) = n^2
Dependent quadratic loop:
In a dependent quadratic loop, the number of iterations in the inner loop is dependent on
the outer loop. Consider the code given below:
for(i=0;i<10;i++)
for(j=0; j<=i;j++)
statement block;
In this code, the inner loop will execute just once in the first iteration, twice in the second
iteration, thrice in the third iteration, and so on. In this way, the number of iterations can be
calculated as 1 + 2 + 3 + ... + 9 + 10 = 55. If we calculate the average of this loop
(55/10 = 5.5), it is equal to the number of iterations in the outer loop (10) plus 1, divided
by 2. That is, the inner loop iterates (n + 1)/2 times on average.
Therefore, the efficiency of such a code can be given as
f(n) = n (n + 1)/2
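As a sanity check, the iteration count of this dependent loop can be verified with a small C sketch, where a counter stands in for the statement block:

```c
/* Count the iterations of the dependent nested loop for a given n.
   The counter stands in for the statement block. */
long dependent_loop_count(int n) {
    long count = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j <= i; j++)   /* inner bound depends on i */
            count++;
    return count;
}
/* dependent_loop_count(10) returns 55, matching n(n + 1)/2. */
```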
1.9.2 Asymptotic Analysis
Asymptotic notations are the mathematical notations used to describe the running time of an
algorithm when the input tends towards a particular value or a limiting value.
The efficiency of an algorithm depends on the amount of time, storage and other resources
required for executing the algorithm. The efficiency is measured with the help of asymp-
totic notations.
An algorithm may not have the same performance for different types of inputs. With the in-
crease in the input size, the performance will change.
The study of change in performance of the algorithm with the change in the order of the in-
put size is defined as asymptotic analysis.
There are mainly three asymptotic notations:
 Big-O notation
 Omega notation
 Theta notation
BIG O NOTATION:
The Big O notation represents the upper bound of the running time of an algorithm. Thus, it
gives the worst-case complexity of an algorithm.
O(g(n)) = {f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0}

It means that for large inputs, f(n) grows no faster than a constant multiple of g(n). Hence, g
provides an upper bound. Note that here c is a constant which depends on the
following factors:
o the programming language used,
o the quality of the compiler or interpreter,
o the CPU speed,
o the size of the main memory and the access time to it,
o the knowledge of the programmer

According to the Big O notation, we have five different categories of algorithms:


 Constant time algorithm: running time complexity given as O(1)
 Linear time algorithm: running time complexity given as O(n)
 Logarithmic time algorithm: running time complexity given as O(log n)
 Polynomial time algorithm: running time complexity given as O(nᵏ), where k > 1
 Exponential time algorithm: running time complexity given as O(kⁿ), where k > 1

Order of run time complexity:

1 < log n < √n < n < n log n < n² < n³ < ... < 2ⁿ < 3ⁿ < 4ⁿ < ... < nⁿ
Big-O Examples:
Example-1 Find upper bound for f(n) = 3n + 8
Solution: 3n + 8 ≤ 4n, for all n ≥ 8
∴ 3n + 8 = O(n) with c = 4 and n0 = 8

Example-2 f (n)=2 n+3


Solution:2 n+3 ≤ 5 n ,n ≥ 1
∴ f (n)=O(n)
Or
It can also be, 2 n+3 ≤ 5 n2 , n ≥ 1
∴ f (n)=O(n2)
n
f (n)=O(2 )
But f (n)≠ O(log n) ,
Since Big-O represents upper bound, the time complexities after n, only allowed.

Example-3 Find upper bound for f(n) = n² + 1

Solution: n² + 1 ≤ 2n², for all n ≥ 1
∴ n² + 1 = O(n²) with c = 2 and n0 = 1
Example-4 Find upper bound for f(n) = n⁴ + 100n² + 50
Solution: n⁴ + 100n² + 50 ≤ 2n⁴, for all n ≥ 11
∴ n⁴ + 100n² + 50 = O(n⁴) with c = 2 and n0 = 11

OMEGA NOTATION (Ω):


The Omega (Ω) notation represents the lower bound of the running time of an algorithm.
Thus, it gives the best-case complexity of an algorithm.
Ω notation is simply written as f(n) ∈ Ω(g(n)), where n is the problem size and
Ω(g(n)) = {f(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0}.

Hence, we can say that Ω(g(n)) comprises the set of all functions f(n) that are greater than
or equal to c·g(n) for all values of n ≥ n0.

Our objective is to give the largest rate of growth g(n) which is less than or equal to the
given algorithm’s rate of growth f(n).
Ω Examples
Example-1 Find lower bound for f(n) = 5n².
Solution: ∃ c, n0 such that 0 ≤ cn² ≤ 5n² ⇒ c = 5 and n0 = 1
∴ 5n² = Ω(n²) with c = 5 and n0 = 1
Example-2 Find lower bound for f(n) = 2n + 3
Solution: 2n + 3 ≥ 1·n, ∀ n ≥ 1
∴ f(n) = Ω(n)
It can also be shown that f(n) = Ω(log n),
but f(n) ≠ Ω(n²) and f(n) ≠ Ω(2ⁿ).
Since Big-Ω represents a lower bound, only growth rates of order n or lower are valid bounds here.

THETA NOTATION (Θ):


Theta notation encloses the function from above and below. Since it represents both the
upper and the lower bound of the running time of an algorithm, it is used for analyzing the
average-case complexity of an algorithm.
Θ notation is simply written as f(n) ∈ Θ(g(n)), where n is the problem size and
Θ(g(n)) = {f(n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n), for all n ≥ n0}.
Hence, we can say that Θ(g(n)) comprises the set of all functions f(n) that lie between
c1·g(n) and c2·g(n) for all values of n ≥ n0.

This notation decides whether the upper and lower bounds of a given function (algorithm)
are the same. The average running time of an algorithm is always between the lower bound
and the upper bound. If the upper bound (O) and lower bound (Ω) give the same result, then
the Θ notation will also have the same rate of growth.
Θ Examples

Example Find a tight bound for f(n) = 2n + 3
Solution:
1·n ≤ 2n + 3 ≤ 5n, ∀ n ≥ 1
∴ f(n) = Θ(n)
but f(n) ≠ Θ(log n), f(n) ≠ Θ(n²), and f(n) ≠ Θ(2ⁿ)

Guidelines for Asymptotic Analysis:


There are some general rules to help us determine the running time of an algorithm.
Loops: The running time of a loop is, at most, the running time of the statements inside the
loop (including tests) multiplied by the number of iterations.

Total time = O(n).

Nested loops: Analyze from the inside out. Total running time is the product of the
sizes of all the loops.

Total time = O(n²).


Consecutive statements: Add the time complexities of each statement.

Total time = O(n²).
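These rules can be illustrated with small counting sketches in C; each counter stands in for the statements actually executed, and the function names are chosen for this example:

```c
/* Loops: one pass over n items -> O(n). */
long simple_loop(int n) {
    long ops = 0;
    for (int i = 0; i < n; i++)
        ops++;
    return ops;
}

/* Nested loops: n iterations of an n-step loop -> O(n^2). */
long nested_loop(int n) {
    long ops = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            ops++;
    return ops;
}

/* Consecutive statements: costs add, O(n) + O(n^2) = O(n^2). */
long consecutive(int n) {
    return simple_loop(n) + nested_loop(n);
}
```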

2. RECURSION
2.1 INTRODUCTION
Any function which calls itself is called recursive function.
Recursion in the data structure can be defined as a method through which problems are
broken down into smaller sub-problems to find a solution. This is done by replicating the
function on a smaller scale, making it call itself and then combining them together to solve
problems.
In recursion, a function α either calls itself directly or calls a function β that in turn calls
the original function α. The function α is called recursive function.
Example − a function calling itself.
void function(int value) {
if(value < 1)
return;
function(value - 1);
printf("%d ",value);
}
Example − a function that calls another function which in turn calls it again.
void function1(int value1) {
if(value1 < 1)
return;
function2(value1 - 1);
printf("%d ",value1);
}
void function2(int value2) {
function1(value2);
}
The recursion step can result in many more such recursive calls. It is important to ensure
that the recursion terminates.

Recursion is a useful technique borrowed from mathematics. Recursive code is generally


shorter and easier to write than iterative code. Generally, loops are turned into recursive
functions when they are compiled or interpreted.
Recursion allows code to be more compact by calling a similar function inside a function.
This is somewhat similar to looping; however, there are important differences. Recursion
can make programs simpler to write and reason about. Recursion also reduces code
redundancy, making code easier to read and maintain.
Recursion is most useful for tasks that can be defined in terms of similar subtasks. For ex-
ample,
sort, search, and traversal problems often have simple recursive solutions.

2.2 FORMAT OF A RECURSIVE FUNCTION


A recursive function performs a task in part by calling itself to perform the subtasks. At
some point, the function encounters a subtask that it can perform without calling itself. This
case, where the function does not recur, is called the base case. The former, where the func-
tion calls itself to perform a subtask, is referred to as the recursive case.
We can write all recursive functions using the format:

As an example consider the factorial function: n! is the product of all integers between n
and 1. The recursive definition of factorial looks like:
n! = 1                  if n = 0
n! = n * (n − 1)!       if n > 0

This definition can easily be converted to a recursive implementation. Here the problem is
determining the value of n!, and the subproblem is determining the value of (n − 1)!.
In the recursive case, when n is greater than 1, the function calls itself to determine the
value of (n − 1)! and multiplies that with n.
In the base case, when n is 0 or 1, the function simply returns 1. This looks like the
following:
//calculates factorial of a non-negative integer
int Fact(int n){
if (n == 0 || n == 1) //base case: factorial of 0 or 1 is 1
return 1;
else //recursive case: multiply n by (n-1) factorial
return n*Fact(n-1);
}
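For comparison with the iteration discussion later in this unit, the same computation can also be written with a loop; both versions agree for every non-negative n (a sketch, with FactIter as an illustrative name):

```c
/* Recursive version, repeated here so the sketch is self-contained. */
int Fact(int n) {
    if (n == 0 || n == 1)
        return 1;
    return n * Fact(n - 1);
}

/* Iterative factorial for comparison with the recursive Fact. */
int FactIter(int n) {
    int result = 1;
    for (int i = 2; i <= n; i++)   /* multiply up from 2 to n */
        result *= i;
    return result;
}
```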

Recursion and Memory (Visualization):


Each recursive call makes a new copy of that method (actually only the variables) in mem-
ory. Once a method ends (that is, returns some data), the copy of that returning method is
removed from memory. The recursive solutions look simple but visualization and tracing
takes time. For better understanding, let us consider the following example.
//print numbers n down to 1
int Print(int n){
if (n == 0) //this is the terminating base case
return 0;
else {
printf(“%d ”,n);
return Print(n-1); //recursive case: print the remaining n-1 numbers
}
}

For this example, if we call the print function with n=4, visually our memory assignments
may look like:
2.3 RECURSION VERSUS ITERATION
A recursive approach makes it simpler to solve a problem that may not have the most obvi-
ous of answers. But, recursion adds overhead for each recursive call (needs space on the
stack frame).
Recursion
•Terminates when a base case is reached.
• Each recursive call requires extra space on the stack frame (memory).
• If we get infinite recursion, the program may run out of memory and result in stack
overflow.
• Solutions to some problems are easier to formulate recursively.
Iteration
• Terminates when a condition is proven to be false.
• Each iteration does not require extra space.
• An infinite loop could loop forever since there is no extra memory being created.
• Iterative solutions to a problem may not always be as obvious as a recursive solution.

Some problems are best suited for recursive solutions while others are not.
Difference between recursion and iteration
Iteration:
• A problem is converted into a train of steps that are finished one at a time, one after another.
• Each step clearly leads on to the next, like stepping stones across a river.
• Any iterative problem can be solved recursively.
• It does not use a stack.
Recursion:
• Recursion is like piling all of those steps on top of each other and then squashing them all into the solution.
• Each step replicates itself at a smaller scale, so that all of them combined together eventually solve the problem.
• Not every recursive problem can be solved by iteration.
• It uses a stack.

2.4 EXAMPLE ALGORITHMS OF RECURSION


• Fibonacci Series, Factorial Finding
• Merge Sort, Quick Sort
• Binary Search
• Tree Traversals and many Tree Problems: InOrder, PreOrder, PostOrder
• Graph Traversals: DFS [Depth First Search] and BFS [Breadth First Search]
• Dynamic Programming Examples
• Divide and Conquer Algorithms
• Towers of Hanoi

Recursion: Examples(Problems & Solutions)


Problem-1 Discuss Towers of Hanoi puzzle.
Solution: The Towers of Hanoi is a mathematical puzzle. It consists of three rods (or pegs
or towers), and a number of disks of different sizes which can slide onto any rod. The puz-
zle starts with the disks on one rod in ascending order of size, the smallest at the top, thus
making a conical shape. The objective of the puzzle is to move the entire stack to another
rod, satisfying the following rules:

1. Only one disk may be moved at a time.


2. No disk may be placed on top of a smaller disk.
3. Each move consists of taking the upper disk from one of the rods and sliding it
onto another rod, on top of the other disks that may already be present on that
rod.
Algorithm:
1. Move the top n – 1 disks from Source to Auxiliary tower
2. Move the nth disk from Source to Destination tower,
3. Move the n – 1 disks from Auxiliary tower to Destination tower.
4. Transferring the top n – 1 disks from Source to Auxiliary tower can again be
thought of as a fresh problem and can be solved in the same manner.
Once we solve Towers of Hanoi with three disks, we can solve it with any number of disks
with the above algorithm.
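The four steps above translate directly into a recursive C function. The sketch below prints each move; the rod labels and the returned move counter are additions for illustration:

```c
#include <stdio.h>

/* Recursive Towers of Hanoi: prints each move and returns the move count. */
long hanoi(int n, char src, char aux, char dst) {
    if (n == 0)
        return 0;
    long moves = hanoi(n - 1, src, dst, aux);            /* step 1: n-1 disks to auxiliary */
    printf("Move disk %d from %c to %c\n", n, src, dst); /* step 2: largest disk to destination */
    moves++;
    moves += hanoi(n - 1, aux, src, dst);                /* step 3: n-1 disks to destination */
    return moves;
}
/* For n disks the minimum number of moves is 2^n - 1. */
```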

Problem-2 Given an array, check whether the array is in sorted order with recursion.
Solution:

Time Complexity: O(n). Space Complexity: O(n) for recursive stack space.
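The solution code itself is not reproduced above; one possible recursive formulation matching the stated complexity is sketched below (the name isSorted is an assumption):

```c
/* A sketch: an array of size <= 1 is sorted; otherwise the first pair
   must be in order and the rest of the array must be sorted. */
int isSorted(int a[], int n) {
    if (n <= 1)
        return 1;                  /* base case */
    if (a[0] > a[1])
        return 0;                  /* first pair out of order */
    return isSorted(a + 1, n - 1); /* recurse on the remaining n-1 items */
}
```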
Unit-II
Topics to be covered
Linked Lists: Introduction, Linked lists and types ,Representation of
linked list, operations on linked list , Comparison of Linked Lists with
Arrays , Dynamic Arrays
2.1 : Introduction
Array: Arrays (or lists) are simple data structures used to hold a sequence of data. Array elements
are stored in consecutive memory locations. To occupy adjacent space, the block of memory that is
required for the array should be allocated beforehand.
Example : int a[ ]= {50,42,85,71,99};

 Once memory is allocated, it cannot be extended any more, so the array is called a static
data structure. Wastage of memory is more in arrays.

 Disadvantages of arrays as storage data structures:

– slow searching in unordered array

– insertion and deletion operations are slow. Because, we have to shift subsequent ele-
ments

– Fixed size

– Wastage of memory

 Linked lists solve some of these problems


- Linked list is able to grow in size as needed
- Does not require the shifting of items during insertions and deletions.
- No wastage of memory.

2.2. Linked List


Linked list is a linear data structure that supports dynamic memory allocation (the amount of
memory can vary during its use); it is therefore also a dynamic data structure. A linked list is used to
hold a sequence of data values. Data values need not be stored in adjacent memory cells. Each data
value has a pointer which indicates where its next data value is in computer memory. An element in a
linked list is known as a node. A node contains a data part and one or two pointer parts which
contain the addresses of the neighbouring nodes in the list.
Node Structure

Example: Linked list to store the sequence : 50, 42,85, 71, 99.
Assume each integer and pointer requires 4 bytes
Starting location of list (head) is 124

Address Data Next(link)


100 42 148
108 99 0(Null)
116
124 50 100
132 71 108
140
148 85 132
156

2.2 Types of linked list


• Depending on the requirements, the pointers are maintained accordingly, and linked lists can
be classified into the following groups:

1. Singly linked lists


2. Circular linked lists
3. Doubly linked lists
4. Circular doubly linked list
2.2.1. Singly linked list
In a singly linked list, each node has two parts: a data part and an address part.
- the data part stores the data value.
- the address part contains the address of its next node.

Structure of Linked List


2000
Header

10 2500 20 2600 30 2367 40 NULL


2000 2500 2600 2367

The head or header always points to the first node in the list. A NULL pointer used to mark the end
of the linked list.

Operations on Singly linked list:


1. Insert ( Start, End, Middle) 2. Deletion(Start, End, Middle) 3.Display 4. Search

1. Insertion operation
• There are various positions where node can be inserted.

1. Insert at front ( as a first element)

2. Insert at end ( as a last node)

3. Insert at middle ( any position)

Node Structure
struct node
{ int data;
struct node *link;
}*new, *ptr, *header, *ptr1;

Creating a node
new = malloc (sizeof(struct node));
new -> data = 10;
new -> link = NULL;

1. Insert new node at front in linked list


Algorithm
Step1- create a new node
Step2- new->data=item
Step3- new->link=header
Step4- header=new
Step5- stop
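The steps above can be sketched as a C function, assuming header points directly to the first node and is passed by address so that the caller's copy is updated (error handling on malloc is omitted):

```c
#include <stdlib.h>

struct node {
    int data;
    struct node *link;
};

/* Insert a new node holding item at the front of the list. */
void insert_front(struct node **header, int item) {
    struct node *new_node = malloc(sizeof(struct node));
    new_node->data = item;
    new_node->link = *header;  /* old first node follows the new one */
    *header = new_node;        /* new node becomes the first node */
}
```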

2. Insert new node at end of the linked list


• Algorithm

• Step1- create a new node

• Step2- ptr=header

• Step3- while(ptr->link!=null)

• 3.1. ptr=ptr->link

• Step4- ptr->link=new

• Step5- new->data=item

• Step6- new->link=null

• Step7-stop

3. Insert new node at any position in linked list


• Algorithm

1.Create new node


2. ptr=header
3. Enter the position
4. for(i=1;i<pos-1;i++)
4.1 ptr=ptr->link;
5. new->link=ptr->link;
6. ptr->link=new;
7. new->data=item
8.stop
Inserted position is – 3

Deletion of a node from singly linked list


Like insertion, there are also various cases of deletion:
1. Deletion at the front

2. Deletion at the end

3. Deletion at any position in the list

Deleting a node at the beginning


if (header == NULL)
print “List is Empty”;
else
{
ptr = header;
header = header -> link;
free(ptr);
}
Deleting a node at the end

ptr = header;
while(ptr -> link != NULL)
{
ptr1=ptr;
ptr = ptr -> link;
}
ptr1 -> link = NULL;
free(ptr);

Deleting a node at the given position

ptr = header ;
for(i=1;i<pos-1;i++)
ptr = ptr -> link;
ptr1 = ptr -> link;
ptr -> link = ptr1-> link;
free(ptr1);
Traversing an elements of a list
if(header = = NULL)
print “List is empty”;
else
for (ptr = header ; ptr != NULL ; ptr = ptr -> link)
print “ptr->data”;

Doubly linked list


 In a singly linked list one can move from the header node to any node in one direction only
(left-right).

 A doubly linked list is a two-way list because one can move in either direction. That is,
either from left to right or from right to left.

 It maintains two links or pointer. Hence it is called as doubly linked list.

 Where, DATA field - stores the element or data, PREV- contains the address of its previous
node, NEXT- contains the address of its next node.

PREV DATA NEXT

Structure of the node


Operations on doubly linked list
• All the operations as mentioned for the singly linked can be implemented on the doubly
linked list more efficiently.

• Insertion

• Deletion

• Traverse

Search.
Insertion on doubly linked list
• Insertion of a node at the front

• Insertion of a node at any position in the list

• Insertion of a node at the end

Deletion on doubly linked list


• Deletion at front

• Deletion at any position

• Deletion at end

Insertion of a node at the front


if(header==NULL)
{
new->prev=NULL;
new->next=NULL;
header=new;
}
else
{
new->next=ptr;
ptr->prev=new;
new->prev=NULL;
header=new;
}
Insertion of a node at the end
1. Create a new node

2. Read the item

3. new->data=item

4. ptr= header

5. while(ptr->next!=NULL)

5.1 ptr=ptr->next;
6. new->next=NULL;
7. ptr->next=new;
8. new->prev=ptr;
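The steps above can be sketched in C; this version also handles the empty-list case, which the steps assume away (the struct and function names are chosen for this sketch, and malloc error handling is omitted):

```c
#include <stdlib.h>

struct dnode {
    struct dnode *prev;
    int data;
    struct dnode *next;
};

/* Append item at the end of a doubly linked list. */
void dll_insert_end(struct dnode **header, int item) {
    struct dnode *new_node = malloc(sizeof(struct dnode));
    new_node->data = item;
    new_node->next = NULL;
    if (*header == NULL) {        /* empty list: new node becomes the header */
        new_node->prev = NULL;
        *header = new_node;
        return;
    }
    struct dnode *ptr = *header;
    while (ptr->next != NULL)     /* walk to the last node */
        ptr = ptr->next;
    ptr->next = new_node;
    new_node->prev = ptr;
}
```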

Insertion of a node at any position in the list


Algorithm
1. create a node new
2. read item
3. new->data=item
4. ptr=header;
5. Read the position where the element is to be inserted
6. for(i=1;i<pos-1;i++)
6.1 ptr=ptr->next;
7.1 ptr1=ptr->next;
7.2 new->next=ptr1;
7.3 ptr1->prev=new;
7.4 new->prev=ptr;
7.5 ptr->next=new;
Deletion at beginning
1.ptr=header
2.ptr1=ptr->next;
3.header=ptr1;
4.if(ptr1!=NULL)
1.ptr1->prev=NULL;
5. free(ptr);

Deletion at End
Algorithm:
1. ptr=header
2. while(ptr->next!=NULL)

2.1 ptr=ptr->next;
3. end while

4. ptr1=ptr->prev;

5. ptr1->next=NULL;

Deletion at any position


Algorithm
1. ptr=header
1.1.for(i=0;i<pos-1;i++)
1.1.1. ptr=ptr->next;
2. ptr1=ptr->prev;
3. ptr2=ptr->next;
4. ptr1->next=ptr2;
5. ptr2->prev=ptr1;
6. free(ptr);
Displaying elements of a list
Algorithm:
1. ptr=header;

2. if(header = = NULL)

1. printf("The list is empty\n");

3. else

1. print “The elements in forward order: “

2. while(ptr!=NULL)

1. print “ptr->data”;

2. if(ptr->next = = NULL)

1. break;

3. ptr=ptr->next;

3. print “The elements in reverse order: “

4. while(ptr!=header)

4.1 print “ptr->data”;


4.2ptr=ptr->prev;
5. End while
6. print “ptr->data”;
7.end else
Circular linked list
In a singly linked list the last node’s link is NULL, but a number of advantages can be gained if we
utilize this link field to store the pointer of the header node (the address of the first node). Definition: a
linked list where the last node points to the header node is called a circular linked list.

Advantages of circular linked list


1. Accessibility of a member node in the list
2. No Null link problem
3. Merging and splitting operations implemented easily
4. Saves time when you want to go from last node to first node.
Disadvantage
1. Goes into infinite loop, if proper care is not taken

2. It is not easy to reverse the elements

3. Visiting previous node is also difficult

Comparison of linked list and array


• Linked lists and arrays are both used to store collections of data. Their purpose is the same, but
they differ in usage.

Arrays:
- 1. To access an array element, the address of the element is computed by multiplying the
position by the element size and adding the result to the base address. Hence, it takes constant
time to access an array element.

- 2. the size of the array is fixed

- 3. random access is possible.

- 4. inserting new element is expensive as it takes shifting operation.

5. takes less space to store elements


6. memory allocated during compile time.
7. wastage of memory is more.
Linked list
• Accessing linked list elements are slower as it takes time proportional to i to find the i-th
member of a linked list by skipping over the first i−1 cells.
• Dynamic size.

• Insertion and deletion operations are easy.

• No possibility of random accessing.

• Extra space is required as it points to the next element.

• Memory allocated during run time.

• No wastage of memory

Dynamic arrays
• Dynamic array is also called as resizable array, growable array , mutable array or arraylist.

• Dynamic array is a random access and variable size list data structure and enables to add or
remove elements.

• The size of the array grows up automatically when we try to insert new element when there
is no space. Array size will be doubled.

• Dynamic array starts with predefined size and as soon as array become full, array size
would get doubled of the original size.

• Similarly, the array size is reduced to half when the number of elements in the array drops below half the capacity.
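The doubling behaviour described above can be sketched with realloc; this is a minimal growable int array where the names and the starting capacity are chosen for illustration, and allocation error handling is omitted:

```c
#include <stdlib.h>

/* A minimal growable int array: the capacity doubles when it is full. */
struct dyn_array {
    int *items;
    int size;      /* elements in use */
    int capacity;  /* allocated slots */
};

void da_init(struct dyn_array *a) {
    a->capacity = 2;               /* small starting capacity for the sketch */
    a->size = 0;
    a->items = malloc(a->capacity * sizeof(int));
}

void da_append(struct dyn_array *a, int value) {
    if (a->size == a->capacity) {  /* full: double the capacity */
        a->capacity *= 2;
        a->items = realloc(a->items, a->capacity * sizeof(int));
    }
    a->items[a->size++] = value;
}
```

Appending ten elements, for instance, grows the capacity from 2 to 4, 8, and then 16, so only a constant number of moves happens per append on average.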

Comparison of linked list with arrays and dynamic arrays


Applications of Linked list
 Implement stack and queues

 Represents trees and graphs

 Keep track of blocks of memory allocated to a file

 Web browser navigation.

 Playing next song in the music player

Applications of doubly linked list


• Represents a deck of cards in a game

• Undo and redo operations in text editors

• A music player which has next and previous button uses doubly linked list.

• Circular dll and doubly linked list are used to implement rewind and forward functions in
the playlist.

• web browser forward and back.

Applications of circular ll
• Used in operating system. When multiple applications are running in PC operating system
to put the running applications on a list and then to cycle through them, giving each of them
a slice of time to execute, and then making them wait while the CPU is given to another ap-
plication.

• To repeat songs in the music player.

• Escalator which will run circularly uses the circular linked list.

Question Bank
1. What is Linked list? CO2,BL1,2M
2. Why linked list is linear and dynamic data structure? CO2,BL2,2M
3. List out types of linked list? CO2,BL1,2M
4. What is dynamic array?CO2, BL1, 2M
5. Differentiate between array and dynamic array? CO2,BL2,2M
6. What are the disadvantages of linked list over an array? CO2,BL2,2M
7. What are the advantages of linked list over an array? CO2,BL3,3M
8. What are the application of SLL,DLL and CLL? CO2,BL1,3M
9. What is circular linked list? What are the advantages of circular linked list over linked list?
CO2, BL2,3M

10. Demonstrate an algorithm or function to insert a node at beginning in singly linked list?
CO2,BL2,5M
11. Demonstrate an algorithm or function to insert a node at end in singly linked list?
CO2,BL25M
12. Demonstrate an algorithm of function to insert a node at any position in singly linked list ?
CO2,BL2,5M
13. Demonstrate delete of a node (start, end, middle ) in singly linked list? CO2,BL2,5M
14. Explain about insertion and deletion operations on singly linked list with neat diagrams?
CO2,BL2,5M
UNIT-III
Stack is a linear data structure that follows a particular order in which the operations are
performed. The order may be LIFO(Last In First Out) or FILO(First In Last Out).
Mainly the following three basic operations are performed in the stack:
 Push: Adds an item in the stack. If the stack is full, then it is said to be an Overflow
condition.
 Pop: Removes an item from the stack. The items are popped in the reversed order in
which they are pushed. If the stack is empty, then it is said to be an Underflow condi-
tion.
 Peek or Top: Returns the top element of the stack.
 isEmpty: Returns true if the stack is empty, else false.

Stack applications in data structure:


Following are some of the important applications of a Stack data structure:

1. Evaluation of Arithmetic Expressions


2. Backtracking
3. Delimiter Checking
4. Reverse a Data
5. Processing Function Calls
6. Stacks can be used for Memory Management.
7. Syntax Parsing
8. String Reversal

Evaluation of Arithmetic Expressions

A stack is a very effective data structure for evaluating arithmetic expressions in


programming languages. An arithmetic expression consists of operands and operators.

Example: A + (B - C)

To evaluate the expressions, one needs to be aware of the standard precedence rules for arithmetic
expression. The precedence rules for the five basic arithmetic operators are:
Evaluation of Arithmetic Expression requires two steps:
o First, convert the given expression into special notation.
o Evaluate the expression in this new notation.

Notations for Arithmetic Expression

There are three notations to represent an arithmetic expression:

o Infix Notation
o Prefix Notation
o Postfix Notation

Infix Notation

The infix notation is a convenient way of writing an expression in which each operator is placed
between the operands. Infix expressions can be parenthesized or unparenthesized depending upon
the problem requirement.

Example: A + B, (C - D) etc.

Prefix Notation

• The prefix notation places the operator before the operands. This notation was introduced
by the Polish mathematician and hence often referred to as polish notation.
• Example: + A B, -CD etc.

Postfix Notation

• The postfix notation places the operator after the operands. This notation is just the reverse
of Polish notation and also known as Reverse Polish notation.
• Example: AB +, CD+, etc.
Infix Prefix Postfix Notation
Notation Notation

A*B *AB AB*

(A+B)/C /+ ABC AB+C/

(A*B) + (D-C) +*AB - DC AB*DC-+

Implementing Stack using Arrays


Algorithm: Insert an element in a stack
Step 1:IF TOP=MAX-1
PRINT “OVERFLOW”
GOTO Step -4
[END OF IF]
Step 2:SET TOP=TOP+1
Step 3:SET Stack[TOP]=VALUE
Step 4:END
Delete an element from a stack
Step 1:IF TOP=-1
PRINT “UNDERFLOW”
GOTO Step-4
Step 2:SET Val =Stack[TOP]
Step 3:SET TOP=TOP-1
Step 4:END
PEEK Operation
Step 1: IF TOP=-1
PRINT “Stack is Empty”
GOTO Step -3
Step 2: Return Stack[TOP]
Step 3: END
Displaying the elements of a stack
Step 1: IF TOP=-1
print “Stack is Empty”
GOTO Step-3
Step 2: FOR(I=TOP;I>=0;I--)
print “Stack[I]”
Step 3:END
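The four algorithms above can be combined into a small array-based stack in C. This is a sketch: the return codes signalling success or failure are an addition to the pseudocode.

```c
#include <stdio.h>
#define MAX 100

int Stack[MAX];
int TOP = -1;

int push(int value) {            /* returns 0 on overflow, 1 on success */
    if (TOP == MAX - 1) {
        printf("OVERFLOW\n");
        return 0;
    }
    Stack[++TOP] = value;
    return 1;
}

int pop(int *val) {              /* returns 0 on underflow, 1 on success */
    if (TOP == -1) {
        printf("UNDERFLOW\n");
        return 0;
    }
    *val = Stack[TOP--];
    return 1;
}

int peek(int *val) {             /* returns 0 when the stack is empty */
    if (TOP == -1)
        return 0;
    *val = Stack[TOP];
    return 1;
}
```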
Implementation of stack using Linked List
The major problem with the stack implemented using an array is, it works only for a fixed number
of data values. That means the amount of data must be specified at the beginning of the
implementation itself. Stack implemented using an array is not suitable, when we don't know the
size of data which we are going to use. A stack data structure can be implemented by using a linked
list data structure. The stack implemented using linked list can work for an unlimited number of
values. That means, stack implemented using linked list works for the variable size of data. So,
there is no need to fix the size at the beginning of the implementation. The Stack implemented
using linked list can organize as many data values as we want.
In linked list implementation of a stack, every new element is inserted as 'top' element. That means
every newly inserted element is pointed by 'top'. Whenever we want to remove an element from the
stack, simply remove the node which is pointed by 'top' by moving 'top' to its previous node in the
list. The next field of the first element must be always NULL.

Example

In the above example, the last inserted node is 99 and the first inserted node is 25. The order of
elements inserted is 25, 32,50 and 99.

Stack Operations using Linked List

To implement a stack using a linked list, we need to set the following things before implementing
actual operations.

 Step 1 - Include all the header files which are used in the program. And declare all the user
defined functions.
 Step 2 - Define a 'Node' structure with two members data and next.
 Step 3 - Define a Node pointer 'top' and set it to NULL.
 Step 4 - Implement the main method by displaying Menu with list of operations and make
suitable function calls in the main method.

push(value) - Inserting an element into the Stack

We can use the following steps to insert a new node into the stack...

 Step 1 - Create a newNode with given value.


 Step 2 - Check whether stack is Empty (top == NULL)
 Step 3 - If it is Empty, then set newNode → next = NULL.
 Step 4 - If it is Not Empty, then set newNode → next = top.
 Step 5 - Finally, set top = newNode.

pop() - Deleting an Element from a Stack


We can use the following steps to delete a node from the stack...

 Step 1 - Check whether stack is Empty (top == NULL).


 Step 2 - If it is Empty, then display "Stack is Empty!!! Deletion is not possible!!!" and
terminate the function
 Step 3 - If it is Not Empty, then define a Node pointer 'temp' and set it to 'top'.
 Step 4 - Then set 'top = top → next'.
 Step 5 - Finally, delete 'temp'. (free(temp)).
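The push and pop steps above can be sketched in C. Note that setting newNode->next = top covers both the empty and non-empty cases at once, since top is NULL when the stack is empty; the integer return code on pop is an addition for this sketch.

```c
#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node *next;
};
struct Node *top = NULL;

void push(int value) {
    struct Node *newNode = malloc(sizeof(struct Node));
    newNode->data = value;
    newNode->next = top;   /* NULL when the stack is empty */
    top = newNode;
}

int pop(int *value) {      /* returns 0 if the stack is empty */
    if (top == NULL) {
        printf("Stack is Empty!!! Deletion is not possible!!!\n");
        return 0;
    }
    struct Node *temp = top;
    *value = temp->data;
    top = top->next;       /* unlink the old top node */
    free(temp);
    return 1;
}
```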

display() - Displaying stack of elements

We can use the following steps to display the elements (nodes) of a stack...

 Step 1 - Check whether stack is Empty (top == NULL).


 Step 2 - If it is Empty, then display 'Stack is Empty!!!' and terminate the function.
 Step 3 - If it is Not Empty, then define a Node pointer 'temp' and initialize with top.
 Step 4 - Display 'temp → data --->' and move it to the next node. Repeat the same until
temp reaches to the first node in the stack. (temp → next != NULL).
 Step 5 - Finally! Display 'temp → data --->NULL'.

Queue Data Structure:

Queue is also an abstract data type or a linear data structure, just like stack data structure, in which
the first element is inserted from one end called the REAR(also called tail), and the removal of
existing element takes place from the other end called as FRONT(also called head).

This makes queue as FIFO(First in First Out) data structure, which means that element inserted
first will be removed first.

This is exactly how a queue works in the real world. If you go to a ticket counter to buy movie
tickets and are first in the queue, then you will be the first one to get the tickets. The same is the
case with the queue data structure: data inserted first will leave the queue first.

Enqueue and Dequeue

• The process to add an element into queue is called Enqueue and the process of removal of
an element from queue is called Dequeue.
Basic features of Queue

1. Like stack, queue is also an ordered list of elements of similar data types.

2. Queue is a FIFO( First in First Out ) structure.

3. Once a new element is inserted into the Queue, all the elements inserted before the new ele-
ment in the queue must be removed, to remove the new element.

4. The peek( ) function is often used to return the value of the first element without dequeuing it.

Applications of Queue

1. Serving requests on a single shared resource, like a printer, CPU task scheduling etc.

2. In real life scenario, Call Center phone systems uses Queues to hold people calling them in
an order, until a service representative is free.

3. Handling of interrupts in real-time systems. The interrupts are handled in the same order as
they arrive i.e First come first served.

Implementation of Queue Data Structure

Queue can be implemented using an Array, Stack or Linked List. The easiest way of implementing
a queue is by using an Array.

Initially the head (FRONT) and the tail (REAR) of the queue point at the first index of the array
(with array indexing starting from 0). As we add elements to the queue, the tail keeps moving
ahead, always pointing to the position where the next element will be inserted, while the head
remains at the first index.
When we remove an element from Queue, we can follow two possible approaches (mentioned [A]
and [B] in above diagram). In [A] approach, we remove the element at head position, and then one
by one shift all the other elements in forward position.

In approach [B] we remove the element from head position and then move head to the next
position.

In approach [A] there is an overhead of shifting the elements one position forward every time
we remove the first element.

In approach [B] there is no such overhead, but whenever we move the head one position ahead after
removing the first element, the usable size of the queue is reduced by one slot each time.

Dequeue in approach [B]: if the queue is not empty, return the element at the head and increment the head.

peek()
This function helps to see the data at the front of the queue. The algorithm of peek() function is as
follows −
Algorithm

begin procedure peek

return queue[front]

end procedure

isfull()
As we are using a single-dimension array to implement the queue, we just check whether the rear
pointer has reached MAXSIZE to determine that the queue is full. If we maintain the queue in a
circular linked list, the algorithm will differ. Algorithm of isfull() function −
Algorithm

begin procedure isfull

if rear equals to MAXSIZE

return true

else

return false

endif

end procedure

isempty()
Algorithm of isempty() function −
Algorithm

begin procedure isempty

if front is less than MIN OR front is greater than rear

return true

else

return false

endif

end procedure

Enqueue Operation
Queues maintain two data pointers, front and rear. Therefore, their operations are comparatively
more difficult to implement than those of stacks.
The following steps should be taken to enqueue (insert) data into a queue −
Step 1 − Check if the queue is full.
Step 2 − If the queue is full, produce overflow error and exit.
Step 3 − If the queue is not full, increment the rear pointer to point to the next empty space.
Step 4 − Add data element to the queue location, where the rear is pointing.
Step 5 − return success.

Algorithm for enqueue operation

procedure enqueue(data)

if queue is full

return overflow

endif

rear ← rear + 1

queue[rear] ← data

return true

end procedure
Dequeue Operation
Accessing data from the queue is a process of two tasks − access the data where front is pointing
and remove the data after access. The following steps are taken to perform dequeue operation −
Step 1 − Check if the queue is empty.
Step 2 − If the queue is empty, produce underflow error and exit.
Step 3 − If the queue is not empty, access the data where front is pointing.
Step 4 − Increment front pointer to point to the next available data element.
Step 5 − Return success.

Algorithm for dequeue operation


procedure dequeue
if queue is empty
return underflow
end if
data = queue[front]
front ← front + 1
return true
end procedure
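The enqueue and dequeue algorithms above can be sketched as an array-based queue in C. This is a minimal illustration following approach [B] (the head advances instead of shifting elements); the names Queue, MAXSIZE and the 1/0 success return codes are our own choices:

```c
#include <stdio.h>

#define MAXSIZE 100

typedef struct {
    int items[MAXSIZE];
    int front;   /* index of the first element */
    int rear;    /* index of the last element  */
} Queue;

void initQueue(Queue *q) { q->front = 0; q->rear = -1; }

int isEmpty(const Queue *q) { return q->front > q->rear; }
int isFull(const Queue *q)  { return q->rear == MAXSIZE - 1; }

/* returns 1 on success, 0 on overflow */
int enqueue(Queue *q, int data) {
    if (isFull(q)) return 0;        /* overflow */
    q->items[++q->rear] = data;     /* rear <- rear + 1; queue[rear] <- data */
    return 1;
}

/* returns 1 on success (value stored in *out), 0 on underflow */
int dequeue(Queue *q, int *out) {
    if (isEmpty(q)) return 0;       /* underflow */
    *out = q->items[q->front++];    /* data = queue[front]; front <- front + 1 */
    return 1;
}

/* assumes the queue is not empty */
int peek(const Queue *q) { return q->items[q->front]; }
```

Note how each function mirrors one pseudocode procedure: isfull() checks rear, isempty() compares front with rear, and dequeue() advances front rather than shifting the array.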

Priority Queues

A priority queue is a special type of queue in which each element is associated with a priority
value. And, elements are served on the basis of their priority. That is, higher priority elements are
served first.
However, if elements with the same priority occur, they are served according to their order in the
queue.
In a priority queue of characters, for example, the element with the maximum ASCII value can be given the highest priority.

A typical priority queue supports the following operations.


insert(item, priority): Inserts an item with given priority.
getHighestPriority(): Returns the highest priority item.
deleteHighestPriority(): Removes the highest priority item.

Assigning Priority Value

Generally, the value of the element itself is considered for assigning the priority. For example,

The element with the highest value is considered the highest priority element. However, in other
cases, we can assume the element with the lowest value as the highest priority element.

We can also set priorities according to our needs.

Difference between Priority Queue and Normal Queue

In a queue, the first-in-first-out rule is implemented whereas, in a priority queue, the values are
removed on the basis of priority. The element with the highest priority is removed first.
Implementation of Priority Queue
Priority queue can be implemented using an array, a linked list, a heap data structure, or a binary
search tree. Among these data structures, heap data structure provides an efficient implementation
of priority queues.
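As a minimal sketch of the array implementation (not the more efficient heap mentioned above), a priority queue can be built on a plain array: insert appends in O(1), while getHighestPriority and deleteHighestPriority scan for the maximum in O(n). All names here are illustrative:

```c
#include <stdio.h>

#define PQ_MAX 100

typedef struct {
    int items[PQ_MAX];
    int count;
} PriorityQueue;

void pqInit(PriorityQueue *pq) { pq->count = 0; }

/* insert(item): append at the end; returns 0 if the queue is full */
int pqInsert(PriorityQueue *pq, int item) {
    if (pq->count == PQ_MAX) return 0;
    pq->items[pq->count++] = item;
    return 1;
}

/* index of the maximum (highest-priority) element, or -1 if empty */
static int pqMaxIndex(const PriorityQueue *pq) {
    if (pq->count == 0) return -1;
    int best = 0;
    for (int i = 1; i < pq->count; i++)
        if (pq->items[i] > pq->items[best]) best = i;
    return best;
}

/* getHighestPriority(): assumes the queue is not empty */
int pqGetHighest(const PriorityQueue *pq) {
    return pq->items[pqMaxIndex(pq)];
}

/* deleteHighestPriority(): remove and return the maximum element */
int pqDeleteHighest(PriorityQueue *pq) {
    int i = pqMaxIndex(pq);
    int val = pq->items[i];
    pq->items[i] = pq->items[--pq->count];  /* fill the hole with the last item */
    return val;
}
```

A heap implementation would make both insertion and deletion O(log n), which is why heaps are the preferred implementation in practice.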

Question Bank

1. Write an algorithm to convert an infix to a postfix expression and explain with an example. CO3-L2
2. Write a function to evaluate a postfix expression. CO3-L2
3. Define a stack. How do you implement a stack using arrays? CO2-L3
4. Write the postfix form of each of the following infix expressions: CO3-L3
a) A-B+(M^N)*(O+P)-Q/R^S*T+Z

b) K+L-M*N+(O^P)*W/U/V*T+Q

5. What are the applications of stack? CO2-L2


6. Convert following prefix expression into infix CO3-L3
a. *,-,!PQRS
7. Write various operations on the stack. CO2-L2
8. Write an algorithm for evaluating a postfix expression and evaluate the following postfix
expression using the algorithm: AB+CD/AD-EA^+* where A=2, B=7, C=9, D=3, E=5.
CO3-L3
9. Write an algorithm to insert, deletion operations on queue CO2-L3
10. How a queue can be implemented using array? CO2-L3
11. Write the differences between queue and priority queue CO2-L2
12. Compare the implementation of stack using arrays and linked list. CO2-L2
13. What are the applications of Queue? CO2-L2
UNIT-5
 Topics: Graphs: Introduction
 Applications of graphs
 Graph representations
 graph traversals
 Minimal Spanning Trees
 Searching : Linear searching, binary Searching
 sorting algorithms- bubble sort, selection sort, quick sort, heap sort.
Introduction to Graphs: What is a Graph?
A graph is a pictorial representation of a set of objects where some pairs of objects are connected by
links. The interconnected objects are represented by points termed as vertices, and the links that
connect the vertices are called edges.
Formally, a graph is a pair of sets (V, E), where V is the set of vertices and E is the set of edges,
connecting the pairs of vertices. Take a look at the following graph –

(figure: a graph with vertices a, b, c, d, e and the edges listed below)

In the above graph,


V= {a,b,c,d,e}
E= {(a,b),(a,c),(a,d),(b,e),
(c,d),(c,e),(d,e)}

Graph terminology: Point


A point is a particular position in a one-dimensional, two-dimensional, or three-dimensional space.
For better understanding, a point can be denoted by an alphabet. It can be represented with a dot.

Example .a Here, the dot is a point named ‘a’.


Line
A Line is a connection between two points. It can be represented with a solid line.
Example

Here, ‘a’ and ‘b’ are the points. The link between these two points is called a line.
Vertex
A vertex is a point where multiple lines meet. It is also called a node. Similar to points, a vertex is also
denoted by an alphabet.

Example: .a Here, the vertex is named with an alphabet ‘a’.


Edge
An edge is the mathematical term for a line that connects two vertices. Many edges can be formed
from a single vertex. Without a vertex, an edge cannot be formed. There must be a starting vertex and
an ending vertex for an edge.
Example:
Here, ‘a’ and ‘b’ are the two vertices and the link between them is called an edge.
Graph
A graph ‘G’ is defined as G = (V, E) Where V is a set of all vertices and E is a set of all edges in the
graph.

In the above example, ab, ac, cd, and bd are the edges of the graph. Similarly, a, b, c, and d are the
vertices of the graph.
Loop
In a graph, if an edge is drawn from vertex to itself, it is called a loop.

In the above graph, V is a vertex for which it has an edge (V, V) forming a loop.
Degree of Vertex
It is the number of vertices adjacent to a vertex V.
Notation − deg(V).
In a simple graph with n vertices, the degree of any vertex is
deg(v) ≤ n – 1 ∀ v ∈ G
A vertex can form an edge with every other vertex except itself, so the degree of a vertex can be at
most the number of vertices in the graph minus 1; the 1 is subtracted because a vertex cannot form
an edge with itself in a simple graph. If there is a loop at any of the vertices, the graph is not a simple graph.
Degree of vertex can be considered under two cases of graphs −
 Undirected Graph
 Directed Graph
Degree of Vertex in an Undirected Graph
An undirected graph has no directed edges. Consider the following examples.

In the above Undirected Graph,


 deg(a) = 2, as there are 2 edges meeting at vertex ‘a’.
 deg(b) = 3, as there are 3 edges meeting at vertex ‘b’.
 deg(c) = 1, as there is 1 edge formed at vertex ‘c’
 So ‘c’ is a pendent vertex.
 deg(d) = 2, as there are 2 edges meeting at vertex ‘d’.
 deg(e) = 0, as there are 0 edges formed at vertex ‘e’.
The vertex ‘e’ is an isolated vertex. The graph does not have any pendent vertex.
Degree of Vertex in a Directed Graph
In a directed graph, each vertex has an indegree and an outdegree.

Indegree of a Graph
 Indegree of vertex V is the number of edges which are coming into the vertex V.
 Notation − deg−(V).
Outdegree of a Graph
 Outdegree of vertex V is the number of edges which are going out from the vertex V.
 Notation − deg+(V).

Adjacent vertices
If two nodes are connected by an edge, they are neighbors (and the nodes are adjacent to each
other).
 The size of a graph is the number of nodes in it
 The empty graph has size zero (no nodes).
 Loop- An edge that connects the vertex with itself
 Degree of a Vertex- the number of edges connected to that vertex
• For directed graph,
– the in-degree of a vertex v is the number of edges incident on to the vertex.
– the out-degree of a vertex v is the number of edges emanating from that vertex.
Examples
Simple path − a path in which no vertex is repeated.
Cycle − a path with distinct vertices except that the last vertex is the same as the first vertex (e.g. a → c → d → a).
Parallel edges − one or more than one edge between the same pair of vertices.
(figures: examples of a simple path, a cycle, and parallel edges)
 A graph without cycles is called acyclic graph.


 Sub graph is a graph with subset of vertices and edges of a graph.
 A graph is called a simple graph if it has no loops and no parallel edges.
 A graph is termed as weighted graph if all the edges in it are labeled with some weights.
 A complete graph is a graph if every node in a graph is connected to every other node.
 Two vertices are said to be connected if there is a path from vi to vj or vj to vi
 Connected graph: a graph in which any two vertices are connected by some path.
 A directed graph is said to be strongly connected if for every pair of distinct vertices v i,vj in G, there is
a path from vi to vj and also from vj to vi.
 Weakly connected graph- if a graph is not strongly connected but it is connected
(figures: a complete graph, a cyclic graph, an acyclic graph, a strongly connected graph, and a weighted graph)

• Representation of a Graph
There are two ways of representing a graph in memory:
 Sequential Representation by means of Adjacency Matrix.
 Linked Representation by means of Linked List.
For example, the adjacency matrix of an undirected graph with vertices A–E:

    A  B  C  D  E
A   0  1  1  1  0
B   1  0  1  1  0
C   1  1  0  1  1
D   1  1  1  0  1
E   0  0  1  1  0
 Linked List Representation
 It saves the memory.
 The number of lists depends on the number of vertices in the graph.
 The header node in each list maintains a list of all adjacent vertices of a node .

(figure: the same undirected graph with its adjacency matrix and adjacency list − each vertex's header node points to a list of its adjacent vertices)

Comparison among various representations:


• The adjacency matrix representation always requires an n x n matrix for n vertices, regardless of the
number of edges. From the manipulation point of view, however, this representation is the best.
• As far as insertion, deletion and searching are concerned, the linked list representation has an
advantage in some cases, but considering overall performance, the matrix representation of a graph is more powerful.
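The sequential (adjacency matrix) representation can be sketched in C as follows. The vertex numbering (A=0 … E=4) and the helper names are our own assumptions:

```c
#include <stdio.h>

#define V 5   /* vertices A..E mapped to indices 0..4 */

/* In an undirected graph the matrix is symmetric,
   so addEdge marks both directions. */
void addEdge(int adj[V][V], int u, int v) {
    adj[u][v] = 1;
    adj[v][u] = 1;
}

/* degree of a vertex = number of 1s in its row */
int degree(int adj[V][V], int u) {
    int d = 0;
    for (int v = 0; v < V; v++)
        d += adj[u][v];
    return d;
}
```

For the graph above (edges AB, AC, AD, BC, BD, CD, CE, DE), degree(A) is 3 and degree(C) is 4, matching the row sums of the matrix.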
Graph Traversal Techniques
• There are two standard graph traversal techniques:
• Depth-First Search (DFS)
• Breadth-First Search (BFS)
• Traversing a graph means visiting all the vertices in the graph exactly once.
• DFS and BFS traversals result in an acyclic graph (a spanning tree of the visited vertices).
• DFS and BFS traversal on the same graph do not give the same order of visit of vertices.
• The order of visiting nodes depends on the node we have selected first and hence the order of visiting
nodes may not be unique.
• Depth-First Search:
• DFS is similar to the preorder traversal of a tree.
• Stack is used to implement the Depth- First- Search.
• DFS follows the following rules:
• 1. Select an unvisited node X, visit it, and treat it as the current node.
• 2. Find an unvisited adjacent node of the current node, say Y, visit it, and make it the new current node.
• 3. Repeat steps 1 and 2 until there is a 'dead end' − a vertex that has no unvisited adjacent
nodes, so no more nodes can be visited from it.
• 4. After coming to a dead end, backtrack to the parent to see if it has another unvisited adjacent
vertex other than Y, and then repeat steps 1–3 until all vertices are visited.

 A stack can be used to maintain the track of all paths from any vertex so as to help backtracking.
 Initially starting vertex will be pushed onto the stack.
 To visit a vertex, we are to pop a vertex from the stack, and then push all the adjacent vertices onto it.
 A list is maintained to store the vertices already visited.
 When a vertex is popped, whether it is already visited or not that can be known by searching the list.
 If the vertex is already visited, we will simply ignore it and we will pop the stack for the next vertex to
be visited.
 This procedure will continue until the stack is empty.
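The stack-based DFS procedure above can be sketched in C on an adjacency matrix. The function name, the MAXV limit, and recording the visit order into an output array are illustrative assumptions:

```c
#include <stdio.h>

#define MAXV 10

/* Iterative DFS with an explicit stack.
   adj is an adjacency matrix, n the number of vertices.
   Visited vertices are recorded into order[]; returns how many were visited. */
int dfs(int adj[MAXV][MAXV], int n, int start, int order[]) {
    int visited[MAXV] = {0};
    int stack[MAXV * MAXV], top = -1;
    int count = 0;

    stack[++top] = start;                 /* push the starting vertex */
    while (top >= 0) {
        int v = stack[top--];             /* pop the next candidate */
        if (visited[v]) continue;         /* already visited: simply ignore it */
        visited[v] = 1;
        order[count++] = v;
        /* push all unvisited adjacent vertices; pushing in reverse index
           order makes lower-numbered neighbours get visited first */
        for (int u = n - 1; u >= 0; u--)
            if (adj[v][u] && !visited[u])
                stack[++top] = u;
    }
    return count;
}
```

The visited[] array plays the role of the "list of vertices already visited" described above: a popped vertex that is already marked is skipped.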
 Breadth First Traversal:
 The breadth first traversal is similar to the level-order traversal of a binary tree.
 The breadth first traversal of a graph visits the vertices
 level by level (the nodes at each level are visited from left to right).
 All the nodes at any level, i, are visited before visiting the nodes at level i + 1.
 To implement the breadth first search algorithm, we use a queue.
BFS follows the following rules:
1. Select an unvisited node x, visit it, have it be the root in a BFS tree being formed. Its level is called the
current level.
2. From each node x in the current level, visit all the unvisited neighbors of x. The newly visited nodes
from this level form a new level that becomes the next current level.
3. Repeat step 2 until no more nodes can be visited.
4. If there are still unvisited nodes, repeat from Step 1.
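The BFS rules above can be sketched with a simple array queue. The names and the MAXV limit are illustrative:

```c
#include <stdio.h>

#define MAXV 10

/* BFS: visit a node, then all its unvisited neighbours, level by level.
   Visited vertices are recorded into order[]; returns how many were visited. */
int bfs(int adj[MAXV][MAXV], int n, int start, int order[]) {
    int visited[MAXV] = {0};
    int queue[MAXV], front = 0, rear = -1;
    int count = 0;

    queue[++rear] = start;                /* enqueue the root of the BFS tree */
    visited[start] = 1;
    while (front <= rear) {
        int v = queue[front++];           /* dequeue the next vertex */
        order[count++] = v;
        for (int u = 0; u < n; u++)       /* enqueue all unvisited neighbours */
            if (adj[v][u] && !visited[u]) {
                visited[u] = 1;           /* mark on enqueue, not on dequeue */
                queue[++rear] = u;
            }
    }
    return count;
}
```

Marking a vertex visited when it is enqueued (rather than dequeued) guarantees each vertex enters the queue at most once, so a queue of size MAXV suffices.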
Applications of Graphs:
 Computer networks
 Local area network
 Internet
 Web
 Databases
 Entity-relationship diagram
 Spanning Trees
 A spanning tree of a graph is a subgraph that contains all the vertices and
is a tree (i.e., it contains no cycles).
 A graph may have many spanning trees.

(figure: Graph A and some of its spanning trees)

Minimum spanning tree:


• Take an example that connects four cities in India − Vijayawada, Hyderabad, Bangalore and Mumbai − by
a graph:

(figure: Hyderabad–Vijayawada 270 km, Hyderabad–Bangalore 580 km, Bangalore–Mumbai 980 km,
Mumbai–Vijayawada 1000 km)

The problem is to find the shortest way to connect all four cities.
 The minimum spanning tree for a given graph is the spanning tree of
minimum cost for that graph.
(figure: a weighted graph and its minimum spanning tree)
Algorithms to find Minimum Spanning Trees are:
 Kruskal‘s Algorithm
 Prim‘s Algorithm
Kruskal’s algorithm:
Kruskal’s algorithm finds the minimum cost spanning tree by selecting the edges one by one as
follows.
1. Draw all the vertices of the graph.
2. Select the smallest edge from the graph and add it into the spanning tree (initially it is empty).
3. Select the next smallest edge and add it into the spanning tree.
4. Repeat steps 2 and 3, skipping any edge that would form a cycle, until all the vertices are connected (n − 1 edges have been added).
Prim’s algorithm:
Prim’s algorithm finds the minimum cost spanning tree by selecting the edges one by one as follows.
1. All vertices are marked as not visited
2. Any vertex v you like is chosen as starting vertex and is marked as visited (define a cluster C).
3. The smallest- weighted edge e = (v,u), which connects one vertex v inside the cluster C with another
vertex u outside of C, is chosen and is added to the MST.
4. The process is repeated until a spanning tree is formed.
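Prim's steps above can be sketched in C on a cost adjacency matrix. This is a minimal, deliberately simple version (O(n³); the usual key-array version is O(n²)); the names are illustrative, and a cost of 0 is assumed to mean "no edge":

```c
#include <limits.h>

#define PV 5   /* maximum number of vertices (illustrative) */

/* Returns the total weight of the minimum spanning tree,
   starting the cluster from vertex 0. */
int primMST(int cost[PV][PV], int n) {
    int inMST[PV] = {0};
    int total = 0;
    inMST[0] = 1;                              /* step 2: start the cluster C */
    for (int added = 1; added < n; added++) {
        int best = INT_MAX, bv = -1;
        /* step 3: pick the smallest-weighted edge leaving the cluster */
        for (int u = 0; u < n; u++)
            if (inMST[u])
                for (int v = 0; v < n; v++)
                    if (!inMST[v] && cost[u][v] && cost[u][v] < best) {
                        best = cost[u][v];
                        bv = v;
                    }
        if (bv == -1) break;                   /* graph is not connected */
        inMST[bv] = 1;                         /* grow the cluster */
        total += best;
    }
    return total;
}
```

On the four-city example above, the MST keeps Hyderabad–Vijayawada (270), Hyderabad–Bangalore (580) and Bangalore–Mumbai (980), for a total of 1830 km.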

Searching and Sorting:


Linear searching:
A linear search, also known as a sequential search, simply scans the elements one at a time. To search
for an element in an array or list, we traverse from the start without jumping over any item.
Linear search is a very basic and simple search algorithm. In linear search, we search for an element or
value in a given array by traversing the array from the start until the desired element or value is
found. It compares the element to be searched with every element in the array; when the element is
matched successfully, it returns the index of the element in the array, else it returns -1.
Linear search is applied to unsorted or unordered lists, when there are few elements in a list.
Features of Linear Search Algorithm
1. It is used for unsorted and unordered small list of elements.
2. It has a time complexity of O(n), which means the time is linearly dependent on the number of
elements − not bad, but not that good either.
3. It has a very simple implementation.
/*
below we have implemented a simple function
for linear search in C

- values[] => array with all the values


- target => value to be found
- n => total number of elements in the array
*/
int linearSearch(int values[], int target, int n)
{
for(int i = 0; i < n; i++)
{
if (values[i] == target)
{
return i;
}
}
return -1;
}

Binary Search: Binary search is used with a sorted array or list. In binary search, we follow these
steps:
1. We start by comparing the element to be searched with the element in the middle of the list/array.
2. If we get a match, we return the index of the middle element.
3. If we do not get a match, we check whether the element to be searched is less than or greater than
the middle element.
4. If the element/number to be searched is greater in value than the middle number, then we pick the
elements on the right side of the middle element (as the list/array is sorted, on the right we will
have all the numbers greater than the middle number), and start again from step 1.
5. If the element/number to be searched is lesser in value than the middle number, then we pick the
elements on the left side of the middle element, and start again from step 1.
Binary search is useful when there are a large number of elements in an array and they are sorted.
A necessary condition for binary search to work is that the list/array must be sorted.
Features of Binary Search
1. It is great to search through large sorted arrays.
2. It has a time complexity of O(log n) which is a very good time complexity.
3. It has a simple implementation.
Binary Search Algorithm

BINARY_SEARCH(A, lower_bound, upper_bound, VAL)

Step 1: [INITIALIZE] SET BEG = lower_bound


END = upper_bound, POS = - 1
Step 2: Repeat Steps 3 and 4 while BEG <= END
Step 3:   SET MID = (BEG + END)/2
Step 4:   IF A[MID] = VAL
            SET POS = MID
            PRINT POS
            Go to Step 6
          ELSE IF A[MID] > VAL
            SET END = MID - 1
          ELSE
            SET BEG = MID + 1
          [END OF IF]
        [END OF LOOP]
Step 5: IF POS = -1
PRINT “VALUE IS NOT PRESENT IN THE ARRAY”
[END OF IF]
Step 6: EXIT
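The algorithm above translates directly into a C function. This is an illustrative sketch; computing the midpoint as beg + (end - beg)/2 is a common refinement of (BEG + END)/2 that avoids integer overflow on very large arrays:

```c
/* arr must be sorted in ascending order.
   Returns the index of val, or -1 if it is not present. */
int binarySearch(const int arr[], int n, int val) {
    int beg = 0, end = n - 1;
    while (beg <= end) {
        int mid = beg + (end - beg) / 2;  /* SET MID = (BEG + END)/2 */
        if (arr[mid] == val)
            return mid;                   /* POS = MID */
        else if (arr[mid] > val)
            end = mid - 1;                /* search the left half  */
        else
            beg = mid + 1;                /* search the right half */
    }
    return -1;                            /* value is not present */
}
```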

Selection Sort Algorithm


Selection Sort: The selection sort algorithm sorts an array by repeatedly finding the minimum element
(considering ascending order) from unsorted part and putting it at the beginning. The algorithm
maintains two subarrays in a given array.
1) The subarray which is already sorted.
2) Remaining subarray which is unsorted.
In every iteration of selection sort, the minimum element (considering ascending order) from the
unsorted subarray is picked and moved to the sorted subarray.
Time Complexity of Selection Sort
Best Case : O(n²) #Means array is already sorted.
Average Case : O(n²) #Means array with random numbers.
Worst Case : O(n²) #Means array with descending order.
Following example explains the above steps:
arr[] = 64 25 12 22 11

// Find the minimum element in arr[0...4]


// and place it at beginning
11 25 12 22 64

// Find the minimum element in arr[1...4]


// and place it at beginning of arr[1...4]
11 12 25 22 64

// Find the minimum element in arr[2...4]


// and place it at beginning of arr[2...4]
11 12 22 25 64

// Find the minimum element in arr[3...4]


// and place it at beginning of arr[3...4]
11 12 22 25 64
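The steps above can be sketched as a C function (an illustrative implementation; the names are our own):

```c
/* Selection sort: repeatedly find the minimum of the unsorted part
   and move it to the front (ascending order). */
void selectionSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++)   /* find minimum in arr[i..n-1] */
            if (arr[j] < arr[min])
                min = j;
        int tmp = arr[i];                 /* swap it into position i */
        arr[i] = arr[min];
        arr[min] = tmp;
    }
}
```

Each pass grows the sorted subarray on the left by one element, exactly as in the worked example above.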
Quick Sort Algorithm:
Quick Sort: QuickSort is an efficient and widely used sorting technique based on Divide and Conquer. It picks an
element as pivot and partitions the given array around the picked pivot. There are many different
versions of quickSort that pick the pivot in different ways.
1. Always pick first element as pivot.
2. Always pick last element as pivot
3. Pick a random element as pivot.
4. Pick median as pivot.
Time Complexity :
Best Case : O(nlogn) #Means array is already sorted.
Average Case : O(nlogn) #Means array with random numbers.
Worst Case : O(n^2) #Means array with descending order.
Algorithm:
1. First element is considered as pivot in the implemented code below.
2. Then we should move all the elements which are less than pivot to one side and greater than pivot to
other side.
3. We divide the array into two arrays in such a way that elements > pivot and elements < pivot.
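A sketch of the first-element-as-pivot scheme described above (an illustrative implementation; partition() is our own helper name):

```c
/* Partition arr[low..high] around arr[low] as pivot; elements <= pivot
   end up on the left, elements > pivot on the right. Returns the final
   position of the pivot. */
static int partition(int arr[], int low, int high) {
    int pivot = arr[low];
    int i = low, j = high;
    while (i < j) {
        while (i < high && arr[i] <= pivot) i++;  /* find an element > pivot  */
        while (arr[j] > pivot) j--;               /* find an element <= pivot */
        if (i < j) {                              /* move them to their sides */
            int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
        }
    }
    arr[low] = arr[j];   /* put the pivot between the two parts */
    arr[j] = pivot;
    return j;
}

void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);
        quickSort(arr, low, p - 1);   /* conquer the left part  */
        quickSort(arr, p + 1, high);  /* conquer the right part */
    }
}
```

Note how the pivot choice drives the worst case: on an already descending array, the first element is always the maximum, each partition is maximally unbalanced, and the running time degrades to O(n^2).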

Bubble Sort:

Bubble sort is a well-known algorithm that is easy to understand, though it is probably the least efficient.
The basic idea underlying bubble sort is to pass through the array from left to right several times. Each
pass consists of comparing each element in the array with its successor and interchanging the two
elements if they are not in the proper order. At the end of each pass, the largest element of the
unsorted part of the list has moved to its final position.
Time Complexity :
Best Case : O(n²) #Means array is already sorted.
Average Case : O(n²) #Means array with random numbers.
Worst Case : O(n²) #Means array with descending order.
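The pass-and-interchange idea above can be sketched in C (an illustrative implementation; the names are our own):

```c
/* Bubble sort: each pass bubbles the largest remaining element
   to the end of the unsorted part. */
void bubbleSort(int arr[], int n) {
    for (int pass = 0; pass < n - 1; pass++)
        for (int i = 0; i < n - 1 - pass; i++)   /* last `pass` slots are final */
            if (arr[i] > arr[i + 1]) {           /* out of order: interchange */
                int t = arr[i];
                arr[i] = arr[i + 1];
                arr[i + 1] = t;
            }
}
```

A common refinement (not shown) adds a "swapped" flag so that a pass with no interchanges ends the sort early, giving O(n) on an already sorted array.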

Heap Sort Algorithm

In this article, we will discuss the Heapsort Algorithm. Heap sort processes the elements by creating
the min-heap or max-heap using the elements of the given array. Min-heap or max-heap represents the
ordering of array in which the root element represents the minimum or maximum element of the array.
Heap sort basically recursively performs two main operations -
o Build a heap H, using the elements of array.
o Repeatedly delete the root element of the heap formed in 1st phase.
Before knowing more about the heap sort, let's first see a brief description of Heap.
What is a heap?
A heap is a complete binary tree. A binary tree is a tree in which each node can have at most two
children, and a complete binary tree is a binary tree in which all the levels except the last are
completely filled and all the nodes are left-justified.
What is heap sort?
Heapsort is a popular and efficient sorting algorithm. The concept of heap sort is to eliminate the
elements one by one from the heap part of the list and then insert them into the sorted part of the list.
Heapsort is an in-place sorting algorithm.
Now, let's see the algorithm of heap sort.
Algorithm
1. HeapSort(arr)
2. BuildMaxHeap(arr)
3. for i = length(arr) to 2
4. swap arr[1] with arr[i]
5. heap_size[arr] = heap_size[arr] - 1
6. MaxHeapify(arr,1)
7. End
BuildMaxHeap(arr)
1. BuildMaxHeap(arr)
2. heap_size(arr) = length(arr)
3. for i = length(arr)/2 to 1
4. MaxHeapify(arr,i)
5. End
MaxHeapify(arr,i)
1. MaxHeapify(arr,i)
2. L = left(i)
3. R = right(i)
4. if L ≤ heap_size[arr] and arr[L] > arr[i]
5. largest = L
6. else
7. largest = i
8. if R ≤ heap_size[arr] and arr[R] > arr[largest]
9. largest = R
10. if largest != i
11. swap arr[i] with arr[largest]
12. MaxHeapify(arr,largest)
13. End
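The pseudocode above can be sketched in C with 0-based indices (an illustrative translation; with 0-based arrays the left child of node i is 2i+1 and the right child is 2i+2):

```c
/* Restore the max-heap property at node i, assuming both subtrees
   below i are already max-heaps (MaxHeapify in the pseudocode). */
static void maxHeapify(int arr[], int heapSize, int i) {
    int l = 2 * i + 1, r = 2 * i + 2, largest = i;
    if (l < heapSize && arr[l] > arr[largest]) largest = l;
    if (r < heapSize && arr[r] > arr[largest]) largest = r;
    if (largest != i) {
        int t = arr[i]; arr[i] = arr[largest]; arr[largest] = t;
        maxHeapify(arr, heapSize, largest);   /* sift the swapped value down */
    }
}

void heapSort(int arr[], int n) {
    /* phase 1: BuildMaxHeap - heapify from the last internal node up */
    for (int i = n / 2 - 1; i >= 0; i--)
        maxHeapify(arr, n, i);
    /* phase 2: repeatedly move the root (maximum) to the end */
    for (int i = n - 1; i >= 1; i--) {
        int t = arr[0]; arr[0] = arr[i]; arr[i] = t;
        maxHeapify(arr, i, 0);                /* restore heap on the prefix */
    }
}
```

Deleting the root n − 1 times at O(log n) per deletion gives the overall O(n log n) running time.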
