18CS42 - Design and Analysis of Algorithm


Maharaja Education Trust (R), Mysuru

Maharaja Institute of Technology Mysore


Belawadi, Sriranga Pattana Taluk, Mandya – 571 477

Approved by AICTE, New Delhi,


Affiliated to VTU, Belagavi & Recognized by Government of Karnataka

Lecture Notes on
Design and Analysis of Algorithms (18CS42)

Prepared by

Department of Information Science and Engineering
Maharaja Education Trust (R), Mysuru
Maharaja Institute of Technology Mysore
Belawadi, Sriranga Pattana Taluk, Mandya – 571 477

Vision
“To be recognized as a premier technical and management institution promoting extensive
education fostering research, innovation and entrepreneurial attitude.”

Mission
 To empower students with indispensable knowledge through dedicated teaching and
collaborative learning.
 To advance extensive research in science, engineering and management disciplines.
 To facilitate entrepreneurial skills through effective institute-industry collaboration and
interaction with alumni.
 To instill the need to uphold ethics in every aspect.
 To mould holistic individuals capable of contributing to the advancement of the society.
Maharaja Institute of Technology Mysore
Department of Information Science and Engineering

VISION OF THE DEPARTMENT


To be recognized as the best centre for technical education and research in the field of
information science and engineering.

MISSION OF THE DEPARTMENT

 To facilitate adequate transformation in students through a proficient teaching-learning
process with the guidance of mentors and all-inclusive professional activities.
 To infuse students with professional, ethical and leadership attributes through industry
collaboration and alumni affiliation.
 To enhance research and entrepreneurship in associated domains and to facilitate real-
time problem solving.

PROGRAM EDUCATIONAL OBJECTIVES:

 Proficiency in being an IT professional, capable of providing genuine solutions to
information science problems.
 Capable of using basic concepts and skills of science and IT disciplines to pursue
greater competencies through higher education.
 Exhibit relevant professional skills and learned involvement to match the
requirements of technological trends.
PROGRAM SPECIFIC OUTCOME:

Students will be able to

 PSO1: Apply the principles of theoretical foundations, data organization,
networking concepts and data analytical methods in the evolving technologies.
 PSO2: Analyse proficient algorithms to develop software and hardware
competence in both professional and industrial areas.

Program Outcomes

1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering
fundamentals, and an engineering specialization to the solution of complex engineering
problems.
2. Problem analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of
mathematics, natural sciences, and engineering sciences.
3. Design/development of solutions: Design solutions for complex engineering problems
and design system components or processes that meet the specified needs with appropriate
consideration for the public health and safety, and the cultural, societal, and environmental
considerations.
4. Conduct investigations of complex problems: Use research-based knowledge and
research methods including design of experiments, analysis and interpretation of data, and
synthesis of the information to provide valid conclusions.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and
modern engineering and IT tools including prediction and modeling to complex engineering
activities with an understanding of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to
assess societal, health, safety, legal and cultural issues and the consequent responsibilities
relevant to the professional engineering practice.
7. Environment and sustainability: Understand the impact of the professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and need
for sustainable development.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or
leader in diverse teams, and in multidisciplinary settings.
10. Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as, being able to comprehend and
write effective reports and design documentation, make effective presentations, and give and
receive clear instructions.
11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member and
leader in a team, to manage projects and in multidisciplinary environments.
12. Life-long learning: Recognize the need for, and have the preparation and ability to
engage in independent and life-long learning in the broadest context of technological change.

Course Overview
SUBJECT: DESIGN AND ANALYSIS OF ALGORITHMS SUBJECT CODE: 18CS42

Algorithms are essential to the study of computer science and are increasingly important in
the natural sciences, social sciences and industry. This course teaches how to effectively
construct and apply techniques for analyzing algorithms, including sorting, searching, and
selection. Students gain an understanding of algorithm design techniques and work on
algorithms for fundamental graph problems, including depth-first search, worst- and
average-case analysis, connected components, and shortest paths.

This course introduces the fundamental concepts of the analysis of algorithms. It covers basic
techniques for designing and analyzing algorithms (dynamic programming, divide and
conquer, balancing); upper and lower bounds on time and space costs, with worst-case and
expected cost measures; a selection of applications such as disjoint set union/find, graph
algorithms, search trees and pattern matching; and the polynomial complexity classes P, NP
and co-NP, together with intractable problems.

Students will be able to apply suitable algorithms to solve real-time problems and to analyse
the performance of a program.
Course Objectives
1. Explain various computational problem solving techniques.
2. Apply appropriate method to solve a given problem.
3. Describe various methods of algorithm analysis.

Course Outcomes
CO’s      DESCRIPTION OF THE OUTCOMES

18CS42.1  Apply the knowledge of asymptotic notations, analysis framework and its importance.

18CS42.2  Apply appropriate design strategies for problem solving.

18CS42.3  Analyse the computational complexity of different algorithms.

18CS42.4  Inspect the need of efficient algorithms to solve real-world problems.
Syllabus
SUBJECT: DESIGN AND ANALYSIS OF ALGORITHMS SUBJECT CODE: 18CS42

Topics Covered as per Syllabus (Teaching Hours)

MODULE-1 (10 Hours)
Performance Analysis: Space complexity, Time complexity. Asymptotic Notations:
Big-Oh notation (O), Omega notation (Ω), Theta notation (Θ), and Little-oh notation
(o). Mathematical analysis of non-recursive and recursive algorithms with examples.
Important Problem Types: Sorting, Searching, String processing, Graph Problems,
Combinatorial Problems. Fundamental Data Structures: Stacks, Queues, Graphs,
Trees, Sets and Dictionaries.
MODULE-2 (10 Hours)
Divide and Conquer: General method, Binary search, Recurrence equation for divide
and conquer, Finding the maximum and minimum, Merge sort, Quick sort, Strassen’s
matrix multiplication, Advantages and Disadvantages of divide and conquer. Decrease
and Conquer Approach: Topological Sort.
MODULE-3 (10 Hours)
Greedy Method: General method, Coin Change Problem, Knapsack Problem, Job
sequencing with deadlines. Minimum cost spanning trees: Prim’s Algorithm, Kruskal’s
Algorithm. Single source shortest paths: Dijkstra's Algorithm. Optimal Tree
problem: Huffman Trees and Codes. Transform and Conquer Approach: Heaps and
Heap Sort.
MODULE-4 (10 Hours)
Dynamic Programming: General method with Examples, Multistage Graphs.
Transitive Closure: Warshall’s Algorithm. All Pairs Shortest Paths: Floyd's Algorithm,
Optimal Binary Search Trees, Knapsack problem, Bellman-Ford Algorithm (T2:5.4),
Travelling Sales Person problem, Reliability design.
MODULE-5 (10 Hours)
Backtracking: General method, N-Queens problem, Sum of subsets problem, Graph
coloring, Hamiltonian cycles. Branch and Bound: Assignment Problem, Travelling Sales
Person problem (T1:12.2), 0/1 Knapsack problem: LC Branch and Bound solution, FIFO
Branch and Bound solution. NP-Complete and NP-Hard problems: Basic concepts,
nondeterministic algorithms, and P, NP, NP-Complete, and NP-Hard classes.
List of Text Books
1. Introduction to the Design and Analysis of Algorithms, Anany Levitin, 2nd Edition, 2009, Pearson.
2. Computer Algorithms/C++, Ellis Horowitz, Sartaj Sahni and Rajasekaran, 2nd Edition, 2014, Universities Press.
List of Reference Books
1. Introduction to Algorithms, Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein, 3rd Edition, PHI.
2. Design and Analysis of Algorithms, S. Sridhar, Oxford (Higher Education).

Index

SUBJECT: DESIGN AND ANALYSIS OF ALGORITHMS SUBJECT CODE: 18CS42

SL.No  Topic  Page No
Module 1
1.1 Introduction 1
1.2 Performance Analysis 3
1.3 Asymptotic notations 5
1.4 Important problem types 15
1.5 Fundamental data structure 16
Module 2
2.1 General method 21
2.2 Recurrence equation for Divide and Conquer 22
2.3 Binary search 25
2.4 Finding Minimum and Maximum 27
2.5 Merge Sort 29
2.6 Quick Sort 32
2.7 Strassen's matrix multiplication 34
2.8 Decrease and Conquer 37
Module 3
3.1 Introduction to Greedy method 44
3.2 Coin change problem 44
3.3 Knapsack problem 45
3.4 Job sequencing with Deadlines 47
3.5 Minimum Spanning Tree 49
3.5.1 Kruskal's Algorithm 51
3.5.2 Prim's Algorithm 52
3.6 Single source shortest path Algorithm 53
3.6.1 Dijkstra's Algorithm 54
3.7 Optimal Tree Problem 56
3.8 Transform and conquer 59
3.8.1 Heap Sort 64
Module 4
4.1 Introduction to Dynamic Programming 65
4.2 Multistage Graphs 67
4.3 Backward Approach 70
4.4 Transitive Closure using Warshall’s Algorithm 71
4.5 All Pairs Shortest Paths using Floyd's Algorithm 82
4.6 Optimal Binary Search Trees 83
4.7 Knapsack problem 84
4.8 Bellman-Ford Algorithm 85
4.9 Reliability design 88
Module 5
5.1 General method of Backtracking 90
5.2 N-Queens Problem 91
5.3 Hamiltonian Circuit Problem 92
5.4 Subset-Sum Problem 94
5.5 Graph Coloring (for planar graphs) 95
5.6 Branch-and-Bound technique 98
5.7 Knapsack Problem 101
5.8 Traveling Salesman Problem 104
5.9 Approximation Algorithms for NP-hard Problems 106
5.10 Nearest-neighbor algorithm 108
5.11 Multi-fragment-heuristic algorithm 109
5.12 Approximation Algorithms for the Knapsack Problem 112

Module-1: Introduction
1.1 Introduction
1.1.1 What is an Algorithm?
Algorithm: An algorithm is a finite sequence of unambiguous instructions to solve a
particular problem.

An algorithm must satisfy the following criteria:

a. Input. Zero or more quantities are externally supplied.
b. Output. At least one quantity is produced.
c. Definiteness. Each instruction is clear and unambiguous. It must be perfectly
clear what should be done.
d. Finiteness. If we trace out the instructions of an algorithm, then for all cases
the algorithm terminates after a finite number of steps.
e. Effectiveness. Every instruction must be very basic so that it can be carried
out, in principle, by a person using only pencil and paper. It is not enough that
each operation be definite as in criterion (c); it also must be feasible.
1.2. Algorithm Specification
An algorithm can be specified in
1) Simple English
2) Graphical representation like flow chart
3) Programming language like C++/Java
4) Combination of above methods.
Example: Combination of simple English and C++, the algorithm for selection sort
is specified as follows.

Example: In C++ the same algorithm can be specified as follows. Here Type is a basic
or user defined data type.
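The listings referred to above were figures in the source and are missing from this copy. As a stand-in, here is a minimal sketch of selection sort in Python (not the exact C++ listing the text refers to):

```python
def selection_sort(a):
    """Sort list a in place by repeatedly selecting the smallest remaining element."""
    n = len(a)
    for i in range(n - 1):
        min_idx = i  # assume the first unsorted element is the smallest
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]  # swap it into position i
    return a
```

Each pass fixes one more element at the front, so the list is sorted after n − 1 passes.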

Design and Analysis of Algorithms (18CS42), Module I. Introduction Page | 1



Module-2: Divide and Conquer


2.1 General method


2.2 Recurrence equation for Divide and Conquer


» Substitution method (Guess the solution & verify by Induction)


In substitution method, we guess a bound and then use mathematical induction to
prove our guess correct.
Example 1:
T(n) = 0             if n = 0
T(n) = T(n−1) + n    if n > 0
Forward substitution:

For n = 0 T(0) = 0
For n = 1 T(1) = T(0) + 1 = 0 + 1 = 1 (0+1)
For n = 2 T(2) = T(1) + 2 = 1 + 2 = 3 (0+1+2)
For n = 3 T(3) = T(2) + 3 = 3 + 3 = 6 (0+1+2+3)
For n = 4 T(4) = T(3) + 4= 6 + 4 = 10 (0+1+2+3+4)
For n = 5 T(5) = T(4) + 5= 10 + 5 = 15 (0+1+2+3+4+5)
...
By observing the generated equations, we can derive the formula
T(n) = n(n+1)/2 = n²/2 + n/2
Example 2:
T(n) = 1             if n = 1
T(n) = 2T(n−1) + 1   if n > 1
Forward substitution:
For n = 1 T(1) = 1 (2-1)
For n = 2 T(2) = 2T(1) + 1 = 3 (4-1)
For n = 3 T(3) = 2T(2) + 1 = 7 (8-1)
For n = 4 T(4) = 2T(3) + 1 = 15 (16-1)
For n = 5 T(5) = 2T(4) + 1 = 31 (32-1)
...
We guess that a closed-form solution is: T(n) = 2^n − 1.
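The guess can be checked mechanically by evaluating the recurrence directly and comparing it with the closed form; a small Python sketch:

```python
def T(n):
    """Evaluate the recurrence T(1) = 1, T(n) = 2*T(n-1) + 1 directly."""
    return 1 if n == 1 else 2 * T(n - 1) + 1

# The guessed closed form 2^n - 1 agrees with the recurrence for small n.
assert all(T(n) == 2**n - 1 for n in range(1, 16))
```

Such a numeric check does not replace the induction proof, but it catches a wrong guess early.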

» Summations (unrolling and summing)


In this method we unroll (substitute) the given recurrence back into itself until a regular
pattern (or series) emerges.

Example 1:
𝟏 𝒊𝒇 𝒏 = 𝟏
𝑻(𝒏) = {
𝟐 𝑻(𝒏 − 𝟏) + 𝟏 𝒊𝒇 𝒏 > 𝟏

For i = 1 T(n) = 2T(n − 1) + 1


For i = 2 T(n) = 2[2T(n − 2) + 1] + 1= 4T(n − 2) + 3
For i = 3 T(n) = 4[2T(n − 3) + 1] + 3 = 8T(n − 3) + 7
For i = 4 T(n) = 8[2T(n − 4) + 1] + 7 = 16T(n − 4) + 15
For i = 5 T(n) = 16[2T(n − 5) + 1] + 15 = 32T(n − 5) + 31
···
After k substitutions: T(n) = 2^k T(n−k) + (2^k − 1)
Termination condition: when n − k = 1, i.e., k = n − 1.
Substitute this value of k back into the equation:
T(n) = 2^(n−1) T(1) + 2^(n−1) − 1
     = 2^(n−1) + 2^(n−1) − 1
     = 2^n − 1

Design and Analysis of Algorithms (18CS42), Module II. Divide and Conquer Page | 23
Maharaja Institute of Technology Mysore Department of Information Science and Engineering

Example 2:
T(n) = 7             if n = 3
T(n) = T(n−1) + c    if n > 3

For i = 1: T(n) = T(n−1) + c
For i = 2: T(n) = T(n−2) + 2c
For i = 3: T(n) = T(n−3) + 3c
For i = 4: T(n) = T(n−4) + 4c
After k substitutions:
T(n) = T(n − k) + kc
Terminate when the recurrence reaches a base case - for this problem,
When n − k = 3 or k = n − 3.
Substitute back into the recurrence
T(n) = T(3) + (n − 3)c
= 7 + (n − 3)c

Example 3:
T(n) = 1             if n = 1
T(n) = 2T(n−1) + n   if n > 1
For i = 1 T(n) = 2T(n − 1) + n
For i = 2 T(n) = 4T(n − 2) + 2n + n − 2
For i = 3 T(n) = 8T(n − 3) + 4n + 2n + n − 4(2) − 2
For i = 4 T(n) = 16T(n − 4) + 8n + 4n + 2n + n − 8(3) − 4(2) − 2
For i = 5 T(n) = 32T(n − 5) + 16n + 8n + 4n + 2n + n − 16(4) − 8(3) − 4(2) – 2
After k substitutions:
T(n) = 2^k T(n−k) + n · Σ_{j=0}^{k−1} 2^j − Σ_{j=0}^{k−1} j·2^j
     = 2^k T(n−k) + n(2^k − 1) − ((k−2)·2^k + 2)


We terminate when the recurring term reaches a base case - for this problem,
When n − k = 1 or k = n – 1.
Replace k with n − 1:
T(n) = 2^k T(n−k) + n(2^k − 1) − ((k−2)·2^k + 2)
     = 2^(n−1) T(1) + n(2^(n−1) − 1) − ((n−3)·2^(n−1) + 2)
     = 2^(n−1) + n·2^(n−1) − n − (n−3)·2^(n−1) − 2
     = 2^(n−1) + 3·2^(n−1) − n − 2
     = 4·2^(n−1) − n − 2
     = 2^(n+1) − n − 2

» The Master Theorem

For T(n) = a·T(n/b) + f(n), where a ≥ 1 and b > 1:
If f(n) is n^d where d ≥ 0 in the recurrence relation, then

T(n) = Θ(n^d)           if a < b^d
T(n) = Θ(n^d log₂ n)    if a = b^d
T(n) = Θ(n^(log_b a))   if a > b^d
Example 1:
T(n) = 3T(n/2) + n²
Here a = 3, b = 2 and d = 2
⇒ b^d = 2² = 4 > a (= 3), i.e., a < b^d (case 1)
Therefore T(n) = Θ(n^d) = Θ(n²)

Example 2:
T(n) = T(n/2) + 2^n
Here a = 1 and b = 2, but f(n) = 2^n is exponential rather than of the form n^d, so the
simplified theorem above does not apply directly. Because f(n) dominates the recursive
term for every n ≥ 1, the cost is governed by f(n):
T(n) = Θ(2^n)

Example 3:
T(n) = 2T(n/2) + n log n
Here a = 2 and b = 2, but f(n) = n log n is not a pure power n^d, so the simplified theorem
does not apply directly. Since f(n) = Θ(n^(log₂ 2) · log n), the extended form of case 2 gives
T(n) = Θ(n log² n)

Example 4:
T(n) = 8T(n/2) + n²
Here a = 8, b = 2 and d = 2
⇒ b^d = 2² = 4 < a (= 8), i.e., a > b^d (case 3)
T(n) = Θ(n^(log_b a)) = Θ(n³)    ⇐ log₂ 8 = log₂ 2³ = 3

Note: T(n) = 2^n T(n/2) + n^n ⇒ the master theorem does not apply (because a = 2^n is not a constant).
Note: T(n) = 0.5 T(n/2) + 1/n ⇒ the master theorem does not apply (because a = 0.5 < 1).
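The case analysis above is mechanical, so it can be captured in a small helper. This is an illustrative sketch (the function name and return convention are ours, not from the text); it handles only the simplified form with f(n) = n^d:

```python
import math

def master_theorem(a, b, d):
    """Classify T(n) = a*T(n/b) + n^d by the simplified master theorem.

    Returns a human-readable Theta bound. Requires a >= 1, b > 1 and d >= 0.
    """
    if a < 1 or b <= 1 or d < 0:
        raise ValueError("requires a >= 1, b > 1 and d >= 0")
    if a < b ** d:                      # case 1: the top level dominates
        return f"Theta(n^{d})"
    if a == b ** d:                     # case 2: every level contributes equally
        return f"Theta(n^{d} log n)"
    exp = math.log2(a) / math.log2(b)   # case 3: the leaves dominate
    return f"Theta(n^{exp:g})"

# Examples 1 and 4 from the text:
assert master_theorem(3, 2, 2) == "Theta(n^2)"
assert master_theorem(8, 2, 2) == "Theta(n^3)"
```

Recurrences where a is not a constant or a < 1 should raise or be rejected before calling this helper, matching the notes above.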

2.3 Binary search


Example 2: Search for 37 in a given array

We set our low and our high values to be at the beginning and ends of the array, and
our middle value to be their average:

We then compare 37 to the value at the middle location. Is 37 == 45? No, it is less than
45. So we update the last pointer to be middle − 1, and readjust the middle pointer
accordingly:

Is 37 == 35? No. It is greater than 35. So we update the first and middle pointers
accordingly:


Is 37 == 37? Yes! We found it:

Analysis: Time complexity is given by

T(n) = T(n/2) + 1
Master theorem: a = 1, b = 2 and d = 0 ⇒ a = b^d (case 2)
T(n) = Θ(n^d log n) = Θ(n⁰ log n) = Θ(log n)
T(n) ∈ Θ(log n)
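The walkthrough above can be sketched as an iterative function. The array below is an assumption (the original figure is not reproduced here), chosen so that the probes visit 45, then 35, then 37, as in the example:

```python
def binary_search(a, key):
    """Return the index of key in sorted list a, or -1 if absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2   # middle of the current search range
        if a[mid] == key:
            return mid
        elif key < a[mid]:
            high = mid - 1        # discard the right half
        else:
            low = mid + 1         # discard the left half
    return -1

data = [10, 35, 37, 40, 45, 50, 60, 70, 80]  # assumed sorted input
print(binary_search(data, 37))  # probes 45, then 35, then finds 37 at index 2
```

Each iteration halves the range, giving the Θ(log n) bound derived above.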

2.4 Finding Minimum and Maximum


In this approach, the array is divided into two halves. Using recursion, the maximum and
minimum of each half are found. The overall maximum is the larger of the two maxima,
and the overall minimum is the smaller of the two minima.

Example problem: Find the minimum and maximum value using divide and conquer method.
The elements are 20, 30, 1, 3, 12, 25, 56, 21, 31


Algorithm DAC_MinMax (A[0..n], i, j)


//Input: array A of n integers
//Output: return Min and Max
if i==j
Min ← Max ← A[i] //Min ← Max ← A[j];
else
{
mid ← (i+j)/2 // mid = i+(j-i)/2
{Lmin, Lmax} ← DAC_MinMax(A[0..n], i, mid) //left array
{Rmin, Rmax} ← DAC_MinMax(A[0..n], mid+1, j) //right array
if Lmin < Rmin
Min ← Lmin
else
Min ← Rmin
if Lmax > Rmax
Max ← Lmax
else
Max ← Rmax
return {Min, Max}
}

Analysis:
Step 1: Input size: Algorithm depends on input, i.e., array size ‘n’
Step 2: Basic Operation: Recursive Comparison

Step 3: No best, worst, average case exists


Step 4: Time complexity is given by
T(n) = 1              for n = 1
T(n) = 2T(n/2) + 2    for n > 1
Step 5: Solve T(n) = 2T(n/2) + 2
From the master theorem, with f(n) = 2 = 2·n⁰: a = 2, b = 2 and d = 0
(a = 2) > (b^d = 2⁰ = 1), i.e., a > b^d (case 3)
T(n) = Θ(n^(log₂ 2)) = Θ(n)
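The DAC_MinMax pseudocode above translates almost line for line into Python; a sketch, using the example data from the text:

```python
def dac_min_max(a, i, j):
    """Return (min, max) of a[i..j] by divide and conquer."""
    if i == j:                                  # one element: it is both min and max
        return a[i], a[i]
    mid = (i + j) // 2
    lmin, lmax = dac_min_max(a, i, mid)         # left half
    rmin, rmax = dac_min_max(a, mid + 1, j)     # right half
    return min(lmin, rmin), max(lmax, rmax)     # combine the two halves

data = [20, 30, 1, 3, 12, 25, 56, 21, 31]
print(dac_min_max(data, 0, len(data) - 1))  # (1, 56)
```

The combine step performs the two comparisons per call that give the T(n) = 2T(n/2) + 2 recurrence.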

2.5 Merge sort


Merge Sort is a divide and conquer algorithm: it first divides the array into two halves and
then combines them in sorted order.
Steps:
1. If there is only one element in the list, it is already sorted; return.
2. Otherwise, divide the list recursively into two halves until it can no longer be divided.
3. Merge the smaller lists into a new list in sorted order.

Example1: Sort given elements using merge sort: 54, 26, 93, 17, 77, 31, 44, 55, 20

Divide


Conquer

Example2: Sort the given elements using merge sort algorithm. Elements are 14, 7, 3, 12, 9, 11, 6, 2


Algorithm MERGE_SORT(A,p,r)
//Input: An array A[1…n] of n elements
//Output: A[1…n] sorted in increasing order.
if p < r
q ← ⎣(p+r)/2⎦
MERGE_SORT (A, p, q)
MERGE_SORT (A, q+1, r)
MERGE (A, p, q, r)

Algorithm MERGE(A,p,q,r)
//Input: An array A[1…m] of elements and three indices p, q and r, with 1 <= p <= q < r
<= m, such that both the subarrays A[p…q] and A[q + 1…r] are sorted individually in
nondecreasing order.
//Output: A[p…r] contains the result of merging the two subarrays A[p…q] and A[q +
1…r]. B[p…r] is an auxiliary array.
i←p; j←q + 1; k←p
while i <= q and j <= r
if A[i] <= A[j] then
B[k] ← A[i]
i←i+1
else
B[k] ← A[j]
j←j+1
end if
k←k + 1
end while
if i = q + 1 then
B[k…r] ← A[j…r]
else B[k…r] ← A[i…q]
end if
A[p…r] ←B[p…r]

Analysis:
Step 1: Input size: Algorithm depends on input, i.e., array size ‘n’
Step 2: Basic Operation: Recursive Comparison and swapping
Step 3: No best, worst, average case exists
Note: Worst case: During key comparison, neither of the two arrays becomes empty before
the other one contains just one element.
Step 4: Let T(n) denote the number of times the basic operation is executed. Then
T(n) = 1              for n = 1
T(n) = 2T(n/2) + n    for n > 1
Step 5: Solve T(n) = 2T(n/2) + n
From the master theorem, with a = 2, b = 2 and d = 1: a = b^d (case 2)
T(n) = Θ(n^d log n) = Θ(n log n) ⇒ T(n) ∈ Θ(n log n)
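A compact Python sketch of the MERGE_SORT/MERGE pseudocode above (using slicing instead of explicit p, q, r indices), applied to the first example's data:

```python
def merge_sort(a):
    """Return a new sorted list: divide, sort each half, then merge."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge step
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # at most one of these two is non-empty
    merged.extend(right[j:])
    return merged

print(merge_sort([54, 26, 93, 17, 77, 31, 44, 55, 20]))
# [17, 20, 26, 31, 44, 54, 55, 77, 93]
```

The merge loop does the linear work that gives the 2T(n/2) + n recurrence solved above.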

2.6 Quick sort

Example:


Apply quick sort algorithm to sort following elements: 40, 90, 60, 5, 13, 10

Algorithm QUICKSORT(A, p, r)
//Input:
//Output:
if p < r
q ← PARTITION( A, p, r)
QUICKSORT(A, p, q-1)
QUICKSORT(A, q+1, r)

Algorithm PARTITION(A, p, r) //A[p..r]

//Input: Subarray A[p..r]
//Output: Rearranges A[p..r] around the pivot and returns the pivot's final position
x ← A[p] //the leftmost element as pivot
i ← p; j ← r + 1
while i < j
    repeat i ← i + 1 until A[i] > x or i ≥ r
    repeat j ← j − 1 until A[j] ≤ x
    if i < j
        exchange A[i] ↔ A[j]
end while
exchange A[p] ↔ A[j] //place the pivot in its final position
return j

Analysis:
Step1: Input size: Array size, n
Step2: Basic operation: key comparison
Step3: Best, worst, average case exists:
Step4: Best case: when partition happens in the middle of the array each time.
Worst case: When input is already sorted. During key comparison, one half is empty,
while remaining n-1 elements are on the other partition.
Step5: Let T(n) denotes the number of times basic operation is executed in
Worst case:
T(n) = T(n−1) + (n+1) for n > 1 (two sub-problems, of size 0 and n−1 respectively)
T(1) = 1
⇒ T(n) = O(n²)
Best case:
T(n) = T(n/2) + T(n/2) + n (two sub-problems of size n/2 each)
     = 2T(n/2) + n
Using the master theorem: T(n) = Θ(n log n), i.e., T(n) ∈ Θ(n log n)
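A Python sketch of quicksort with a Hoare-style partition that, like the pseudocode above, takes the leftmost element as the pivot (variable names are ours):

```python
def partition(a, p, r):
    """Partition a[p..r] around pivot a[p]; return the pivot's final index."""
    x = a[p]
    i, j = p, r + 1
    while True:
        i += 1
        while i <= r and a[i] < x:   # scan right for an element >= pivot
            i += 1
        j -= 1
        while a[j] > x:              # scan left for an element <= pivot
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]
    a[p], a[j] = a[j], a[p]          # put the pivot in its final place
    return j

def quicksort(a, p, r):
    if p < r:
        q = partition(a, p, r)       # pivot is now fixed at index q
        quicksort(a, p, q - 1)
        quicksort(a, q + 1, r)

data = [40, 90, 60, 5, 13, 10]       # the example input from the text
quicksort(data, 0, len(data) - 1)
print(data)  # [5, 10, 13, 40, 60, 90]
```

Because the pivot is always the leftmost element, an already sorted input triggers the worst-case O(n²) behaviour analysed above.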

2.7 Strassen’s matrix multiplication


Standard matrix multiplication:

A × B = |1 2| × |2 3| = |(1×2)+(2×4)  (1×3)+(2×5)| = |2+8   3+10| = |10 13|
        |3 4|   |4 5|   |(3×2)+(4×4)  (3×3)+(4×5)|   |6+16  9+20|   |22 29|

T(N) = 8T(N/2) + O(N²)

From the master theorem, the time complexity of this divide-and-conquer method is
O(N³), which is unfortunately the same as the naive method above.


Strassen’s matrix multiplication:


Description: Strassen’s algorithm is used for matrix multiplication. It is asymptotically faster than the
standard matrix multiplication algorithm

With A = |a b| and B = |e f|, Strassen's seven products are
         |c d|         |g h|

P1 = a(f − h), P2 = (a + b)h, P3 = (c + d)e, P4 = d(g − e),
P5 = (a + d)(e + h), P6 = (b − d)(g + h), P7 = (a − c)(e + f),

and the product is

A × B = |(P5 + P4 − P2 + P6)   (P1 + P2)          |
        |(P3 + P4)             (P1 + P5 − P3 − P7)|

Example 1:
A × B = |1 2| × |2 3|
        |3 4|   |4 5|
P1 = 1 × (3 − 5) = −2
P2 = (1 + 2) × 5 = 15
P3 = (3 + 4) × 2 = 14
P4 = 4 × (4 − 2) = 8
P5 = (1 + 4) × (2 + 5) = 35
P6 = (2 − 4) × (4 + 5) = −18
P7 = (1 − 3) × (2 + 3) = −10

A × B = |35 + 8 − 15 + (−18)   −2 + 15                 | = |10 13|
        |14 + 8                35 + (−2) − 14 − (−10)  |   |22 29|
Example 2:

A = |5 2 6 1|   B = |7 5 8 0|   A × B = |96  68  69  69|
    |0 6 2 0|       |1 8 2 6|           |24  56  18  52|
    |3 8 1 4|       |9 4 3 8|           |58  95  71  92|
    |1 8 5 6|       |5 3 7 9|           |90 107  81 142|

Partition A and B into 2×2 blocks:

a = |5 2|  b = |6 1|  c = |3 8|  d = |1 4|
    |0 6|      |2 0|      |1 8|      |5 6|

e = |7 5|  f = |8 0|  g = |9 4|  h = |3 8|
    |1 8|      |2 6|      |5 3|      |7 9|

P1 = a × (f − h) = | 15 −46|
                   |−30 −18|
P2 = (a + b) × h = |54 115|
                   |48  70|
P3 = (c + d) × e = |40 116|
                   |56 142|
P4 = d × (g − e) = |18 −21|
                   |34 −35|
P5 = (a + d) × (e + h) = |108 180|
                         |146 269|
P6 = (b − d) × (g + h) = |  24   24|
                         |−108 −108|
P7 = (a − c) × (e + f) = | 12 −74|
                         |−21 −33|

P5 + P4 − P2 + P6 = |96 68|
                    |24 56|
P1 + P2 = |69 69|
          |18 52|
P3 + P4 = |58  95|
          |90 107|
P1 + P5 − P3 − P7 = |71  92|
                    |81 142|

|(P5 + P4 − P2 + P6)   (P1 + P2)          |   |96  68  69  69|
|(P3 + P4)             (P1 + P5 − P3 − P7)| = |24  56  18  52|
                                              |58  95  71  92|
                                              |90 107  81 142|
Algorithm:


T(N) = 7T(N/2) + O(N²)

From the master theorem, the time complexity of this method is O(N^(log₂ 7)), which is
approximately O(N^2.8074).
Analysis:
Input size: n – order of square matrix.
Basic operation: Multiplication (7)
Addition (11)
Subtraction (7)
No best, worst, average case
Let T(n) be the number of multiplications made by the algorithm. Then we have:
1. Divide: Partition A and B into (n/2)×(n/2) sub-matrices. Form terms to be
multiplied using + and – .
2. Conquer: Perform 7 multiplications of (n/2)×(n/2) sub-matrices recursively.
3. Combine: Form C using + and – on (n/2)×(n/2) sub-matrices.
T(n) = 7 T(n/2) + Θ(n2)
Solve:

Method 1 (counting multiplications only):
T(n) = 7 T(n/2) for n > 1, T(1) = 1
Assume n = 2^k:
T(2^k) = 7 T(2^(k−1))
       = 7 [7 T(2^(k−2))] = 7² T(2^(k−2))
       = . . .
       = 7^i T(2^(k−i))
When i = k: T(2^k) = 7^k T(2^0) = 7^k
Since k = log₂ n, we have T(n) = 7^(log₂ n) = n^(log₂ 7) ≈ n^2.80

Method 2 (Master Theorem):
T(n) = 7 T(n/2) + Θ(n²), so a = 7, b = 2, d = 2
Since a > b^d (7 > 2² = 4), T(n) ∈ Θ(n^(log_b a)) = Θ(n^(log₂ 7)) ≈ Θ(n^2.8)

2.7 Decrease and conquer


This approach is also known as the incremental or inductive approach.
The decrease-and-conquer technique is similar to divide-and-conquer, except that instead of
partitioning a problem into multiple subproblems of smaller size, we use some technique to
reduce our problem to a single problem that is smaller than the original.
The smaller problem might be:
• Decreased by some constant
• Decreased by some constant factor
• Decreased by a variable amount
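As an illustration of decrease by a constant factor, binary search reduces the instance to a single subproblem of half the size at each step. A minimal sketch (not part of the original notes):

```python
def binary_search(a, key):
    """Return an index of key in sorted list a, or -1 if absent.
    Each iteration halves the remaining search range (decrease by a constant factor)."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == key:
            return mid
        if a[mid] < key:
            lo = mid + 1     # key can only be in the right half
        else:
            hi = mid - 1     # key can only be in the left half
    return -1
```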
• Topological sort
The topological sort algorithm takes a directed graph and returns an array of the nodes where
each node appears before all the nodes it points to.
Or, equivalently:
Topological sort gives a linear ordering of the vertices of a directed acyclic graph such that, for
every directed edge a → b, vertex 'a' comes before vertex 'b'.
Applications of Topological Sort-
A few important applications of topological sort are-
• Scheduling jobs from the given dependencies among jobs
• Instruction scheduling
• Determining the order of compilation tasks to perform in makefiles
• Data serialization

Example 1:
Find the number of different topological orderings possible for the given graph-

Solution-
The topological orderings of the above graph are found in the following steps-
Step-01:
Write in-degree of each vertex-

Step-02:
• Vertex-A has the least in-degree.
• So, remove vertex-A and its associated edges.
• Now, update the in-degree of other vertices.
Step-03:
• Vertex-B has the least in-degree.
• So, remove vertex-B and its associated edges.
• Now, update the in-degree of other vertices.

Step-04:
There are two vertices with the least in-degree. So, the following 2 cases are possible-
In case-01,
• Remove vertex-C and its associated edges.
• Then, update the in-degree of other vertices.
In case-02,
• Remove vertex-D and its associated edges.
• Then, update the in-degree of other vertices.

Step-05:
Now, the above two cases are continued separately in the similar manner.
In case-01,
 Remove vertex-D since it has the least in-degree.
 Then, remove the remaining vertex-E.
In case-02,
 Remove vertex-C since it has the least in-degree.
 Then, remove the remaining vertex-E.

Conclusion-
For the given graph, following 2 different topological orderings are possible-
• ABCDE
• ABDCE
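The count obtained above can be checked by brute force. The sketch below keeps exactly those permutations in which every edge goes from left to right; the edge list is inferred from the worked steps, since the graph figure itself is not reproduced here:

```python
from itertools import permutations

def topological_orders(nodes, edges):
    """Brute-force enumeration: a permutation is a valid topological order
    exactly when every edge (u, v) has u placed before v."""
    orders = []
    for perm in permutations(nodes):
        pos = {v: i for i, v in enumerate(perm)}
        if all(pos[u] < pos[v] for u, v in edges):
            orders.append(''.join(perm))
    return sorted(orders)

# Edges consistent with the step-by-step removal trace above (assumed, figure missing).
nodes = ['A', 'B', 'C', 'D', 'E']
edges = [('A', 'B'), ('B', 'C'), ('B', 'D'), ('C', 'E'), ('D', 'E')]
```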
Example 2:
Find the number of different topological orderings possible for the given graph-

Solution-
The topological orderings of the above graph are found in the following steps-

Step-01:
Write in-degree of each vertex-

Step-02:
• Vertex-1 has the least in-degree.
• So, remove vertex-1 and its associated edges.
• Now, update the in-degree of other vertices.

Step-03:
There are two vertices with the least in-degree. So, the following 2 cases are possible-
In case-01,
• Remove vertex-2 and its associated edges.
• Then, update the in-degree of other vertices.
In case-02,
• Remove vertex-3 and its associated edges.
• Then, update the in-degree of other vertices.
Step-04:
Now, the above two cases are continued separately in a similar manner.
In case-01,
• Remove vertex-3 since it has the least in-degree.
• Then, update the in-degree of other vertices.
In case-02,
• Remove vertex-2 since it has the least in-degree.
• Then, update the in-degree of other vertices.

Step-05:
In case-01,
• Remove vertex-4 since it has the least in-degree.
• Then, update the in-degree of other vertices.
In case-02,
• Remove vertex-4 since it has the least in-degree.
• Then, update the in-degree of other vertices.
Step-06:
In case-01,
• There are 2 vertices with the least in-degree.
• So, 2 cases are possible.
• Either of the two vertices may be taken first.
The same holds in case-02.

Conclusion-
For the given graph, following 4 different topological orderings are possible-
 123456
 123465
 132456
 132465

Exercise-
Consider the following directed acyclic graph-

For this graph, the following 4 different topological orderings are possible-
• 123456
• 123465
• 132456
• 132465
Algorithm topoSort(u, visited, stack)
//Input: The start vertex u, an array visited[] recording which nodes have been visited, and a
stack to store nodes.
//Output: The vertices pushed onto the stack in topological sequence.
Begin
mark u as visited
for all vertices v adjacent to u, do
if v is not visited, then
topoSort(v, visited, stack)
done
push u onto the stack


End

OR
Algorithm TopoSort(G(V, E))
//Input: A directed graph G = (V, E)
//Output: A list S of the vertices in topological order, or a report that G has a cycle
C[ ] ← {} //make empty Container of vertices, C
S[ ] ← {} //make empty List of sorted vertices, S
for each vertex v in V do
if in-deg(v) == 0 then
C.add(v)
while C is not empty do
u ← C.remove()
S.add(u)
for each edge (u, w) do // Adjacent Edge i.e., edge is out from u to w
in-deg(w) ← in-deg(w)-1
if in-deg(w)== 0 then
C.add(w)
if S.size() == V.size() then
return S
else
return "G has a cycle"

Time complexity is given by O(|V| + |E|)
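The source-removal algorithm above can be rendered in Python as follows (a minimal sketch, using a deque as the container C):

```python
from collections import deque

def topo_sort(vertices, edges):
    """Source-removal (Kahn's) topological sort.
    Returns a topological order of the vertices, or raises ValueError on a cycle."""
    indeg = {v: 0 for v in vertices}
    adj = {v: [] for v in vertices}
    for u, w in edges:
        adj[u].append(w)
        indeg[w] += 1
    container = deque(v for v in vertices if indeg[v] == 0)  # all sources
    order = []
    while container:
        u = container.popleft()
        order.append(u)
        for w in adj[u]:              # delete u's outgoing edges
            indeg[w] -= 1
            if indeg[w] == 0:         # w has become a source
                container.append(w)
    if len(order) != len(vertices):
        raise ValueError("G has a cycle")
    return order
```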


*****
Module-3: Greedy Method


3.1 Introduction to Greedy method
3.1.1 General method

3.2 Coin Change Problem
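The greedy rule for making change picks, at every step, the largest denomination not exceeding the amount still owed; this is optimal for canonical coin systems (such as 1, 5, 10, 25) though not for arbitrary ones. A minimal sketch (the denominations below are illustrative):

```python
def greedy_coin_change(amount, denominations):
    """Greedy change-making: repeatedly take the largest coin that still fits.
    Optimal only for canonical coin systems."""
    result = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result
```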

Design and Analysis of Algorithms (18CS42), Module III. Greedy Technique Page | 44
Maharaja Institute of Technology Mysore Department of Information Science and Engineering

3.3 Knapsack Problem (Fractional knapsack problem)
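The greedy rule for the fractional knapsack problem is to take items in decreasing order of value-to-weight ratio, splitting the last item taken if it does not fit whole. A minimal sketch (the instance in the test is a standard illustrative one, not taken from these notes):

```python
def fractional_knapsack(weights, values, capacity):
    """Greedy by value/weight ratio; items may be taken fractionally.
    Returns the maximum total value that fits in the knapsack."""
    items = sorted(zip(weights, values), key=lambda wv: wv[1] / wv[0], reverse=True)
    total = 0.0
    for w, v in items:
        if capacity <= 0:
            break
        take = min(w, capacity)       # whole item if it fits, else a fraction
        total += v * (take / w)
        capacity -= take
    return total
```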
3.4 Job sequencing with deadlines

Example1:
Jobs Profit Deadline
J1 85 5
J2 25 4
J3 16 3
J4 40 3
J5 55 4
J6 19 5
J7 92 2
J8 80 3
J9 15 7

Step 1: Sort the jobs in descending order of profit


Jobs Profit Deadline
J7 92 2
J1 85 5
J8 80 3
J5 55 4
J4 40 3
J2 25 4
J6 19 5
J3 16 3
J9 15 7
Step 2:

Step 3:

Step 4:

Step 5:

Step 6:

Step 7:
So, the maximum profit = 40 + 92 + 80 + 55 + 85 + 15 = 367

Example2: Consider the following 5 jobs and their associated deadline and profit.
JOB j1 j2 j3 j4 j5
DEADLINE 2 1 3 2 1
PROFIT 60 100 20 40 20
Total number of jobs is 5. So we can write n = 5
Sort the jobs in descending order of profit
JOB j2 j1 j4 j3 j5
DEADLINE 1 2 2 3 1
PROFIT 100 60 40 20 20
Looking at the jobs we can say the max deadline value is 3. So, dmax = 3
Schedule time slot 1 2 3
Status EMPTY EMPTY EMPTY

if Schedule_timeslot[k] is EMPTY then
Schedule_timeslot[k] = job(i)
Step 1:
Schedule time slot 1 2 3
Status j2 EMPTY EMPTY
Step 2:
Schedule time slot 1 2 3
Status j2 j1 EMPTY
Step 3:
Schedule time slot 1 2 3
Status j2 j1 j3

Algorithm: Job-Sequencing-With-Deadline (J, D, P, n)
//Input: n jobs J[1..n] with profits P[1..n] and deadlines D[1..n]
//Output: Schedule S of jobs giving maximum total profit
Sort all the jobs in decreasing order of profit, so that
P1 ≥ P2 ≥ . . . ≥ Pn
dmax = maximum deadline among D[1..n]
Create array S[1..dmax] and initialize the contents of S with zero.
for i ← 1 to n
for k ← D[i] down to 1
if S[k] = 0 then
S[k] = J[i]
break //job i is scheduled; move on to the next job
End if
End for
End for

Analysis:
Sorting the n jobs: O(n log n)
Building the schedule: O(n · dmax), which is O(n²) in the worst case
T(n) = O(n log n) + O(n²) = Θ(n²)
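The algorithm above can be rendered in Python, placing each job in the latest free slot on or before its deadline. On Example 1 this reproduces the maximum profit of 367 computed above:

```python
def job_sequencing(jobs):
    """Greedy job sequencing with deadlines.
    jobs: list of (name, profit, deadline) tuples.
    Returns (total profit, schedule), where schedule[k] is the job in slot k+1."""
    jobs = sorted(jobs, key=lambda j: j[1], reverse=True)  # decreasing profit
    dmax = max(d for _, _, d in jobs)
    slots = [None] * (dmax + 1)        # slots[1..dmax]; index 0 unused
    profit = 0
    for name, p, d in jobs:
        for k in range(d, 0, -1):      # latest free slot not after deadline d
            if slots[k] is None:
                slots[k] = name
                profit += p
                break
    return profit, slots[1:]
```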

3.5 Minimum Spanning Tree


A tree is a connected graph without cycles/loops.
A spanning subgraph is a subgraph that contains all the vertices of the original
graph.
A spanning tree is a subgraph T of a connected graph G that includes all the vertices
of G and is also a tree (i.e., it has no cycles).

Graph G ⇒

Spanning Trees:

In a weighted graph, a minimum spanning tree is a spanning tree whose total weight
is no larger than that of any other spanning tree of the same graph.

Graph G ⇒

Spanning Tree:

Tree – 1 Tree – 2 Tree – 3 Tree - 4

Tree-3 is the minimum spanning tree (weight 7 < 8, 10, 11)

Solving the MST problem

Two algorithms for solving the minimum-spanning tree problem:


(a) Kruskal’s algorithm, and
(b) Prim’s algorithm

3.5.1 Kruskal’s algorithm

(Figure: step-by-step trace of Kruskal's algorithm, panels (a)–(g).)

ALGORITHM: Kruskal
//Input: A weighted connected undirected graph G = (V, E) with n vertices.
//Output: The set of edges T of a minimum cost spanning tree for G.
Sort the edges in E by increasing weight.
T={} //T-tree empty set
while |T| < n – 1 // tree with n node will have n-1 edges
Let (x, y) be the next edge in E.
if FIND(x) ≠ FIND(y) then //x and y lie in different trees, so edge (x, y) creates no cycle
Add (x, y) to T
UNION(x, y)
end if
end while

Analysis: The dominant cost is sorting the edges, which takes O(|E| log |E|) time; the union–find operations are nearly constant per edge, so the overall running time is O(|E| log |E|).
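A compact Python sketch of Kruskal's algorithm with a simple union–find structure; the test graph is illustrative, since the figures are not reproduced here:

```python
def kruskal(n, edges):
    """Kruskal's MST. edges: list of (weight, u, v) with vertices 0..n-1.
    Returns (total cost, list of accepted edges (u, v, w))."""
    parent = list(range(n))

    def find(x):
        """FIND: root of x's component, with path halving."""
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    cost, mst = 0, []
    for w, u, v in sorted(edges):          # increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # different components: no cycle
            parent[ru] = rv                 # UNION
            mst.append((u, v, w))
            cost += w
        if len(mst) == n - 1:               # tree complete
            break
    return cost, mst
```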

3.5.2 Prim's Algorithm

(Figure: step-by-step trace of Prim's algorithm, panels (a)–(h).)

ALGORITHM: Prim
//Input: A weighted connected undirected graph G = (V, E), where V = {v1,v2, . . . , vn}.
//Output: The set of edges T of a minimum cost spanning tree for G.
T ←{}; X ←{v1}; Y ← V – {v1}
for vi ←2 to n
if vi adjacent to v1 then
N[vi] ← v1
C[vi] ← Length[v1, vi]
else
C[vi] ← ∞
end if
end for

for j←1 to n - 1 //find n - 1 edges


Let vi ϵ Y be such that C[vi] is minimum
T ← T U {( vi, N[vi])} //add edge (vi, N[vi]) to T
X ← X U {vi} //add vertex vi to X
Y ← Y – {vi} //delete vertex vi from Y
for each vertex w ϵ Y that is adjacent to vi
if Length[vi,w] < C[w] then
N[w] ← vi
C[w] ← Length[vi,w]
end if
end for
end for

Analysis: Each of the n − 1 iterations scans the remaining vertices to find the minimum C[vi] and then updates C and N, so with this array-based implementation the running time is Θ(n²).
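A Python sketch of the array-based Prim's algorithm described above, with C[] holding each outside vertex's cheapest connection to the tree (the weight matrix in the test is illustrative):

```python
INF = float('inf')

def prim(W):
    """Prim's MST from an n x n weight matrix W (INF where no edge exists).
    Returns the total MST cost; runs in Theta(n^2)."""
    n = len(W)
    in_tree = [False] * n
    in_tree[0] = True                  # start from vertex 0 (v1 in the notes)
    C = W[0][:]                        # cheapest known connection to the tree
    cost = 0
    for _ in range(n - 1):
        vi = min((v for v in range(n) if not in_tree[v]), key=lambda v: C[v])
        cost += C[vi]                  # attach vi by its cheapest edge
        in_tree[vi] = True
        for w in range(n):             # relax connections through vi
            if not in_tree[w] and W[vi][w] < C[w]:
                C[w] = W[vi][w]
    return cost
```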

3.6 Single source shortest paths

3.6.1 Dijkstra's Algorithm

ALGORITHM: Dijkstra( )
//Input: A weighted directed graph G = (V,E), where V = {v1,v2, . . . . ,vn}.
//Output: The distance from vertex 1 to every other vertex in G.
X = {v1}; // X-visited vertices


Y ← V - {v1}; // Y-unvisited vertices
cost[v1]← 0; // source to source vertices cost is zero
for i←2 to n
if vi is adjacent to v1 then
cost[vi] ← length[v1, vi] //from source v1 to vi
else
cost[vi] ← ∞ //999 or infinite
end if
end for
for j←1 to n-1
Let vi ϵ Y be such that cost[vi] is minimum
X ← X U {vi} //add vertex vi to X
Y ← Y – {vi} //delete vertex vi from Y
for each edge (vi,w)
if w ϵ Y and cost[vi] + length[vi,w] < cost[w] then
cost[w] ← cost[vi] + length[vi,w]
end if
end for
end for

Analysis:
The time complexity of Dijkstra's algorithm with this array-based implementation is Θ(|V|²)
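A Python sketch of the array-based algorithm above; the example graph in the test is illustrative:

```python
INF = float('inf')

def dijkstra(length, source=0):
    """Dijkstra's single-source shortest paths.
    length: n x n matrix of non-negative edge lengths (INF where no edge).
    Returns cost[], the shortest distance from source to every vertex; Theta(n^2)."""
    n = len(length)
    visited = [False] * n              # the set X of the pseudocode
    cost = [INF] * n
    cost[source] = 0
    for _ in range(n):
        u = min((v for v in range(n) if not visited[v]), key=lambda v: cost[v])
        visited[u] = True              # u's distance is now final
        for w in range(n):             # relax edges out of u
            if not visited[w] and cost[u] + length[u][w] < cost[w]:
                cost[w] = cost[u] + length[u][w]
    return cost
```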

3.7 Optimal Tree Problem

3.7.1 Huffman Trees and Codes

The resulting codewords are as follows:
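(The codeword table itself is not reproduced in these notes.) The construction can be sketched in Python with a min-heap of partial trees; the frequencies used in the test are illustrative, not the notes' own table:

```python
import heapq

def huffman_codes(freq):
    """Huffman coding. freq: dict symbol -> weight (at least two symbols).
    Returns dict symbol -> binary codeword."""
    # Each heap entry: (total weight, tie-break counter, partial code table).
    heap = [(w, i, {s: ''}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)      # two smallest-weight trees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + code for s, code in c1.items()}  # left subtree gets 0
        merged.update({s: '1' + code for s, code in c2.items()})  # right gets 1
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]
```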
3.8 Transform and Conquer Approach

3.8.1 Heaps
3.8.2 Heap Sort
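Heap sort first builds a max-heap bottom-up and then repeatedly swaps the root with the last unsorted element and sifts the new root down. A minimal array-based, 0-indexed sketch (illustrative; the notes' own trace is not reproduced here):

```python
def heapify(a, n, i):
    """Sift a[i] down inside the max-heap a[0..n-1]."""
    while True:
        largest = i
        l, r = 2 * i + 1, 2 * i + 2       # children of i
        if l < n and a[l] > a[largest]:
            largest = l
        if r < n and a[r] > a[largest]:
            largest = r
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

def heap_sort(a):
    """In-place heap sort; returns a for convenience."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # bottom-up heap construction
        heapify(a, n, i)
    for end in range(n - 1, 0, -1):       # move the maximum to the end
        a[0], a[end] = a[end], a[0]
        heapify(a, end, 0)
    return a
```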
4.1 Introduction to Dynamic Programming


Dynamic programming is a technique for solving problems with overlapping subproblems.
Typically, these subproblems arise from a recurrence relating a given problem’s solution to
solutions of its smaller subproblems. Rather than solving overlapping subproblems again and
again, dynamic programming suggests solving each of the smaller subproblems only once and
recording the results in a table from which a solution to the original problem can then be
obtained. [From T1]
Dynamic programming can also be used when the solution to a problem can be viewed as
the result of a sequence of decisions. [From T2] Here are some examples.

Example 1

Example 2

Example 3

Example 4
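The core idea, computing each subproblem only once and recording it in a table, can be sketched with the classic Fibonacci recurrence F(n) = F(n−1) + F(n−2) (an illustration, not one of the numbered examples above):

```python
def fib(n):
    """Bottom-up dynamic programming: each F(i) is computed once and stored."""
    F = [0, 1] + [0] * (n - 1)     # table of subproblem results
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]
    return F[n]
```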

Design and Analysis of Algorithms (18CS42), Module 4. Dynamic Programming Page | 65
4.2 Multistage Graphs

Figure: Five stage graph
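The dynamic-programming computation on such a staged graph can be sketched as follows. Vertices are assumed numbered so that every edge goes from a lower to a higher number; the small DAG used in the test is illustrative, not the five-stage graph of the figure:

```python
INF = float('inf')

def multistage_shortest_path(n, edges, source, sink):
    """Backward DP on a staged DAG. edges: {u: [(v, weight), ...]}.
    cost[u] = min over edges (u, v) of weight + cost[v], computed from the sink back.
    Assumes vertices are topologically numbered 0..n-1 with sink = n-1."""
    cost = [INF] * n
    cost[sink] = 0
    for u in range(n - 2, -1, -1):          # sweep backward from the sink
        for v, w in edges.get(u, []):
            if w + cost[v] < cost[u]:
                cost[u] = w + cost[v]
    return cost[source]
```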
4.3 Backward Approach
4.4 Transitive Closure using Warshall’s Algorithm,


Definition: The transitive closure of a directed graph with n vertices can be defined as the n
× n boolean matrix T = {tij}, in which the element in the ith row and the jth column is 1 if there
exists a nontrivial path (i.e., a directed path of positive length) from the ith vertex to the jth
vertex; otherwise, tij is 0.
Example: An example of a digraph, its adjacency matrix, and its transitive closure is given
below.

(a) Digraph. (b) Its adjacency matrix. (c) Its transitive closure.

We can generate the transitive closure of a digraph with the help of depth-first search or
breadth-first search. Performing either traversal starting at the ith vertex gives the information
about the vertices reachable from it and hence the columns that contain 1's in the ith row of the
transitive closure. Thus, doing such a traversal for every vertex as a starting point yields the
transitive closure in its entirety.
Since this method traverses the same digraph several times, we can use a better algorithm, called
Warshall's algorithm. Warshall's algorithm constructs the transitive closure through a series
of n × n boolean matrices R(0), . . . , R(k), . . . , R(n).
Each of these matrices provides certain information about directed paths in the digraph.
Specifically, the element rij(k) in the ith row and jth column of matrix R(k) (i, j = 1, 2, . . . , n;
k = 0, 1, . . . , n) is equal to 1 if and only if there exists a directed path of a positive length from
the ith vertex to the jth vertex with each intermediate vertex, if any, numbered not higher than
k.
Thus, the series starts with R(0) , which does not allow any intermediate vertices in its paths;
hence, R(0) is nothing other than the adjacency matrix of the digraph. R (1) contains the
information about paths that can use the first vertex as intermediate. The last matrix in the
series, R(n) , reflects paths that can use all n vertices of the digraph as intermediate and hence
is nothing other than the digraph’s transitive closure.
This means that there exists a path from the ith vertex vi to the jth vertex vj with each
intermediate vertex numbered not higher than k:
vi, a list of intermediate vertices each numbered not higher than k, vj . --- (*)
Two situations regarding this path are possible.
1. In the first, the list of its intermediate vertices does not contain the kth vertex. Then this
path from vi to vj has intermediate vertices numbered not higher than k−1, i.e., rij(k–1) = 1.
2. The second possibility is that path (*) does contain the kth vertex vk among the
intermediate vertices. Then path (*) can be rewritten as
vi, vertices numbered ≤ k − 1, vk, vertices numbered ≤ k − 1, vj,
i.e., rik(k–1) = 1 and rkj(k–1) = 1.

Thus, we have the following formula for generating the elements of matrix R(k) from the
elements of matrix R(k−1):

rij(k) = rij(k−1) or (rik(k−1) and rkj(k−1))
The Warshall’s algorithm works based on the above formula.

As an example, the application of Warshall’s algorithm to the digraph is shown below. New
1’s are in bold.

Analysis
Its time efficiency is Θ(n³). We can make the algorithm run faster by treating matrix rows
as bit strings and employing the bitwise or operation available in most modern computer
languages.

Space efficiency: Although separate matrices for recording intermediate results of the
algorithm are used, that can be avoided.
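Warshall's algorithm itself is just a triple loop applying the formula for R(k); a Python sketch (the digraph in the test is illustrative):

```python
def warshall(A):
    """Transitive closure of a digraph from its boolean adjacency matrix A (0/1 entries)."""
    n = len(A)
    R = [row[:] for row in A]          # R(0) is the adjacency matrix itself
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if R[i][k] and R[k][j]:
                    R[i][j] = 1        # rij(k) = rij(k-1) or (rik(k-1) and rkj(k-1))
    return R
```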

4.5 All Pairs Shortest Paths using Floyd's Algorithm,


Problem definition: Given a weighted connected graph (undirected or directed), the all-pairs
shortest paths problem asks to find the distances—i.e., the lengths of the shortest paths - from
each vertex to all other vertices.
Applications: Solution to this problem finds applications in communications, transportation
networks, and operations research. Among recent applications of the all-pairs shortest-path
problem is pre-computing distances for motion planning in computer games.
We store the lengths of shortest paths in an n x n matrix D called the distance matrix: the
element dij in the ith row and the jth column of this matrix indicates the length of the shortest
path from the ith vertex to the jth vertex.

(a) Digraph. (b) Its weight matrix. (c) Its distance matrix
We can generate the distance matrix with an algorithm that is very similar to Warshall’s
algorithm. It is called Floyd’s algorithm.
Floyd’s algorithm computes the distance matrix of a weighted graph with n vertices through a
series of n × n matrices:

The element dij(k) in the ith row and the jth column of matrix D(k) (i, j = 1, 2, . . . , n;
k = 0, 1, . . . , n) is equal to the length of the shortest path among all paths from the ith vertex
to the jth vertex with each intermediate vertex, if any, numbered not higher than k.
As in Warshall's algorithm, we can compute all the elements of each matrix D(k) from its
immediate predecessor D(k−1)

Consider an arbitrary path from the ith vertex vi to the jth vertex vj with each intermediate
vertex numbered not higher than k:

vi, a list of intermediate vertices each numbered not higher than k, vj.


We can partition all such paths into two disjoint subsets: those that do not use the kth vertex vk
as intermediate and those that do.
i. Since the paths of the first subset have their intermediate vertices numbered not higher
than k − 1, the shortest of them is, by definition of our matrices, of length dij(k–1).
ii. In the second subset the paths are of the form
vi, vertices numbered ≤ k − 1, vk, vertices numbered ≤ k − 1, vj .

The situation is depicted symbolically in Figure, which shows


the underlying idea of Floyd’s algorithm.

Taking into account the lengths of the shortest paths in both subsets leads to the following
recurrence:

dij(k) = min{ dij(k−1), dik(k−1) + dkj(k−1) } for k ≥ 1, with dij(0) = wij
Analysis: Its time efficiency is Θ(n³), the same as Warshall's algorithm.

Application of Floyd’s algorithm to the digraph is shown below. Updated elements are shown
in bold.
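A Python sketch of Floyd's algorithm; the weight matrix used in the test is an illustrative four-vertex digraph:

```python
INF = float('inf')

def floyd(W):
    """All-pairs shortest distances from weight matrix W (INF where there is no edge)."""
    n = len(W)
    D = [row[:] for row in W]          # D(0) is the weight matrix itself
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:   # shorter path through vertex k
                    D[i][j] = D[i][k] + D[k][j]
    return D
```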

4.6 Optimal Binary Search Trees


A binary search tree is one of the most important data structures in computer science. One of
its principal applications is to implement a dictionary, a set of elements with the operations of
searching, insertion, and deletion.
If probabilities of searching for elements of a set are known (e.g., from accumulated data
about past searches), it is natural to pose a question about an optimal binary search tree, for which
the average number of comparisons in a search is the smallest possible.
As an example, consider four keys A, B, C, and D to be searched for with probabilities 0.1,
0.2, 0.4, and 0.3, respectively. The figure depicts two out of the 14 possible binary search trees
containing these keys.

The average number of comparisons in a successful search in the first of these trees is 0.1 *
1+ 0.2 * 2 + 0.4 * 3+ 0.3 * 4 = 2.9, and for the second one it is 0.1 * 2 + 0.2 * 1+ 0.4 * 2 +
0.3 * 3= 2.1. Neither of these two trees is, in fact, optimal.
For our tiny example, we could find the optimal tree by generating all 14 binary search trees
with these keys. As a general algorithm, this exhaustive-search approach is unrealistic: the total
number of binary search trees with n keys is equal to the nth Catalan number,

which grows to infinity as fast as 4^n / n^1.5


So let a1, . . . , an be distinct keys ordered from the smallest to the largest and let p1, . . . , pn be
the probabilities of searching for them. Let C(i, j) be the smallest average number of
comparisons made in a successful search in a binary search tree Tij made up of keys ai, . . . , aj,
where i, j are some integer indices, 1 ≤ i ≤ j ≤ n.
Following the classic dynamic programming approach, we will find values of C(i, j) for all
smaller instances of the problem, although we are interested just in C(1, n). To derive a
recurrence underlying a dynamic programming algorithm, we will consider all possible ways
to choose a root ak among the keys ai, . . . , aj . For such a binary search tree (Figure 8.8), the
root contains key ak, the left subtree Ti,k−1 contains keys ai, . . . , ak−1 optimally arranged, and
the right subtree Tk+1,j contains keys ak+1, . . . , aj also optimally arranged. (Note how we are
taking advantage of the principle of optimality here.)

If we count tree levels starting with 1 to make the comparison numbers equal the keys’ levels,
the following recurrence relation is obtained:

The two-dimensional table in Figure 8.9 shows the values needed for computing C(i, j) by
formula (8.8): they are in row i and the columns to the left of column j and in column j and the
rows below row i. The arrows point to the pairs of entries whose sums are computed in order
to find the smallest one to be recorded as the value of C(i, j). This suggests filling the table
along its diagonals, starting with all zeros on the main diagonal and given probabilities pi, 1≤ i
≤ n, right above it and moving toward the upper right corner.
The algorithm we just sketched computes C(1, n)—the average number of comparisons for
successful searches in the optimal binary tree. If we also want to get the optimal tree itself, we
need to maintain another two-dimensional table to record the value of k for which the minimum
in (8.8) is achieved. The table has the same shape as the table in Figure 8.9 and is filled in the
same manner, starting with entries R(i, i) = i for 1≤ i ≤ n. When the table is filled, its entries
indicate indices of the roots of the optimal subtrees, which makes it possible to reconstruct an
optimal tree for the entire set given.

Example: Let us illustrate the algorithm by applying it to the four-key set we used at the
beginning of this section:

The initial tables look like this:

Let us compute C(1, 2):

Thus, out of two possible binary trees containing the first two keys, A and B, the root of the
optimal tree has index 2 (i.e., it contains B), and the average number of comparisons in a
successful search in this tree is 0.4. On finishing the computations we get the following final
tables:

Thus, the average number of key comparisons in the optimal tree is equal to 1.7. Since R(1,
4) = 3, the root of the optimal tree contains the third key, i.e., C. Its left subtree is made up of
keys A and B, and its right subtree contains just key D. To find the specific structure of these
subtrees, we find first their roots by consulting the root table again as follows. Since R(1, 2) =
2, the root of the optimal tree containing A and B is B, with A being its left child (and the root
of the one-node tree: R(1, 1) = 1). Since R(4, 4) = 4, the root of this one-node optimal tree is
its only key D. Figure given below presents the optimal tree in its entirety.

Here is Pseudocode of the dynamic programming algorithm.
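(The pseudocode itself is not reproduced in these notes.) A Python sketch of the algorithm just described, filling the C and R tables diagonal by diagonal, with 1-based indexing and C[i][i−1] = 0 as the empty-tree boundary:

```python
def optimal_bst(p):
    """Optimal binary search tree (successful searches only).
    p[1..n]: probabilities of the sorted keys; p[0] is unused.
    Returns (C, R): cost table and root table, both 1-indexed."""
    n = len(p) - 1
    C = [[0.0] * (n + 2) for _ in range(n + 2)]   # C[i][i-1] = 0 boundary rows
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i]                 # a single-key tree costs p[i]
        R[i][i] = i
    for d in range(1, n):              # diagonal: j - i = d
        for i in range(1, n - d + 1):
            j = i + d
            best, best_k = float('inf'), i
            for k in range(i, j + 1):  # try each key ak as the root
                val = C[i][k - 1] + C[k + 1][j]
                if val < best:
                    best, best_k = val, k
            C[i][j] = best + sum(p[i:j + 1])
            R[i][j] = best_k
    return C, R
```

On the four-key instance above (p = 0.1, 0.2, 0.4, 0.3), C(1, 4) comes out to 1.7 with key 3 (i.e., C) as the root, matching the final tables discussed in the text.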

4.7 Knapsack problem


We start this section with designing a dynamic programming algorithm for the knapsack

problem: given n items of known weights w1, . . . , wn and values v1, . . . , vn and a knapsack
of capacity W, find the most valuable subset of the items that fit into the knapsack.
To design a dynamic programming algorithm, we need to derive a recurrence relation that
expresses a solution to an instance of the knapsack problem in terms of solutions to its
smaller subinstances.

Let us consider an instance defined by the first i items, 1≤ i ≤ n, with weights w 1, . . . , wi,
values v1, . . . , vi , and knapsack capacity j, 1 ≤ j ≤ W. Let F(i, j) be the value of an optimal
solution to this instance. We can divide all the subsets of the first i items that fit the knapsack
of capacity j into two categories: those that do not include the ith item and those that do. Note
the following:
i. Among the subsets that do not include the ith item, the value of an optimal subset is, by
definition, F(i − 1, j).
ii. Among the subsets that do include the ith item (hence, j − wi ≥ 0), an optimal subset is
made up of this item and an optimal subset of the first i−1 items that fits into the
knapsack of capacity j − wi . The value of such an optimal subset is vi + F(i − 1, j − wi).

Thus, the value of an optimal solution among all feasible subsets of the first i items is the
maximum of these two values:

F(i, j) = max{ F(i − 1, j), vi + F(i − 1, j − wi) }  if j − wi ≥ 0
F(i, j) = F(i − 1, j)                               if j − wi < 0

It is convenient to define the initial conditions as follows:
F(0, j) = 0 for j ≥ 0 and F(i, 0) = 0 for i ≥ 0.
Our goal is to find F(n, W), the maximal value of a subset of the n given items that fit into
the knapsack of capacity W, and an optimal subset itself.

Example-1: Let us consider the instance given by the following data:

The dynamic programming table, filled by applying formulas is given below

Thus, the maximal value is F(4, 5) = $37.

We can find the composition of an optimal subset by backtracing the computations of this entry
in the table. Since F(4, 5) > F(3, 5), item 4 has to be included in an optimal solution along with
an optimal subset for filling 5 − 2 = 3 remaining units of the knapsack capacity. The value of
the latter is F(3, 3). Since F(3, 3) = F(2, 3), item 3 need not be in an optimal subset. Since F(2,
3) > F(1, 3), item 2 is a part of an optimal selection, which leaves element F(1, 3 − 1) to specify
its remaining composition. Similarly, since F(1, 2) > F(0, 2), item 1 is the final part of the
optimal solution {item 1, item 2, item 4}.

Analysis
The time efficiency and space efficiency of this algorithm are both in Θ(nW). The time
needed to find the composition of an optimal solution is in O(n).
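A Python sketch of the bottom-up computation plus the backtracing step. The instance data table is not reproduced above; the test uses the standard instance w = (2, 1, 3, 2), v = (12, 10, 20, 15), W = 5, which is consistent with F(4, 5) = $37 and the optimal subset {item 1, item 2, item 4} reported in the text:

```python
def knapsack(weights, values, W):
    """0/1 knapsack, bottom-up table F per the recurrence above.
    Returns (maximum value, sorted list of chosen item numbers, 1-based)."""
    n = len(weights)
    F = [[0] * (W + 1) for _ in range(n + 1)]      # row 0 and column 0 are zeros
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            F[i][j] = F[i - 1][j]                  # without item i
            if weights[i - 1] <= j:                # with item i, if it fits
                F[i][j] = max(F[i][j],
                              values[i - 1] + F[i - 1][j - weights[i - 1]])
    # backtrace the composition of an optimal subset
    items, j = [], W
    for i in range(n, 0, -1):
        if F[i][j] != F[i - 1][j]:                 # item i was included
            items.append(i)
            j -= weights[i - 1]
    return F[n][W], sorted(items)
```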

Memory Functions
The direct top-down approach to finding a solution to such a recurrence leads to an algorithm
that solves common subproblems more than once and hence is very inefficient.
The classic dynamic programming approach, on the other hand, works bottom up: it fills a table
with solutions to all smaller subproblems, but each of them is solved only once. An unsatisfying
aspect of this approach is that solutions to some of these smaller subproblems are often not
necessary for getting a solution to the problem given. Since this drawback is not present in the
top-down approach, it is natural to try to combine the strengths of the top-down and bottom-up
approaches. The goal is to get a method that solves only subproblems that are
necessary and does so only once. Such a method exists; it is based on using memory

functions.

This method solves a given problem in the top-down manner but, in addition, maintains a table
of the kind that would have been used by a bottom-up dynamic programming algorithm.
Initially, all the table’s entries are initialized with a special “null” symbol to indicate that they
have not yet been calculated. Thereafter, whenever a new value needs to be calculated, the
method checks the corresponding entry in the table first: if this entry is not “null,” it is simply
retrieved from the table; otherwise, it is computed by the recursive call whose result is then
recorded in the table.
The following algorithm implements this idea for the knapsack problem. After initializing the
table, the recursive function needs to be called with i = n (the number of items) and j = W (the
knapsack capacity).
4.8 Algorithm MFKnapsack(i, j)
//Implements the memory function method for the knapsack problem
//Input: A nonnegative integer i indicating the number of the first items being
//       considered and a nonnegative integer j indicating the knapsack capacity
//Output: The value of an optimal feasible subset of the first i items
//Note: Uses as global variables input arrays Weights[1..n], Values[1..n], and
//      table F[0..n, 0..W] whose entries are initialized with −1’s except for
//      row 0 and column 0 initialized with 0’s
if F[i, j] < 0
    if j < Weights[i]
        value ← MFKnapsack(i − 1, j)
    else
        value ← max(MFKnapsack(i − 1, j),
                    Values[i] + MFKnapsack(i − 1, j − Weights[i]))
    F[i, j] ← value
return F[i, j]
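The memory function scheme described above can be sketched in Python (same instance as Example 1; the −1 entries mark values not yet computed):

```python
def mf_knapsack(weights, values, W):
    n = len(weights)
    # Row 0 and column 0 are 0; every other entry starts as -1 ("not yet computed").
    F = [[0] * (W + 1)] + [[0] + [-1] * W for _ in range(n)]

    def solve(i, j):
        if F[i][j] < 0:                       # compute only on first demand
            if j < weights[i - 1]:
                value = solve(i - 1, j)
            else:
                value = max(solve(i - 1, j),
                            values[i - 1] + solve(i - 1, j - weights[i - 1]))
            F[i][j] = value                   # record for later retrieval
        return F[i][j]

    return solve(n, W)

print(mf_knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))  # 37
```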


Example-2 Let us apply the memory function method to the instance considered in Example
1. The table in Figure given below gives the results. Only 11 out of 20 nontrivial values (i.e.,
not those in row 0 or in column 0) have been computed. Just one nontrivial entry, F(1, 2), is
retrieved rather than being recomputed. For larger instances, the proportion of such entries can
be significantly larger.

Figure: Example of solving an instance of the knapsack problem by the memory function algorithm

In general, we cannot expect more than a constant-factor gain in using the memory function
method for the knapsack problem, because its time efficiency class is the same as that of the
bottom-up algorithm.
4.9 Bellman-Ford Algorithm (single-source shortest paths with negative weights)
4.10 Problem definition
Single source shortest path - Given a graph and a source vertex s in graph, find shortest paths
from s to all vertices in the given graph. The graph may contain negative weight edges.
Note that we have discussed Dijkstra’s algorithm for the single-source shortest path problem.
Dijkstra’s algorithm is a greedy algorithm with time complexity O((V + E) log V) using a
binary-heap priority queue. But Dijkstra’s algorithm does not work for graphs with negative
weight edges.
Bellman-Ford works for such graphs. Bellman-Ford is also simpler than Dijkstra’s algorithm and
suits distributed systems well. But the time complexity of Bellman-Ford is O(VE), which is more
than that of Dijkstra’s.
4.11 How does it work?
Like other Dynamic Programming Problems, the algorithm calculates shortest paths in
bottom-up manner. It first calculates the shortest distances for the shortest paths which have at-
most one edge in the path. Then, it calculates shortest paths with at-most 2 edges, and so on.
After the ith iteration of the outer loop, the shortest paths with at most i edges are calculated.
There can be at most |V| − 1 edges in any simple path, which is why the outer loop runs |V| − 1
times. The idea is: assuming that there is no negative weight cycle, if we have calculated the
shortest paths with at most i edges, then an iteration over all edges guarantees to give the
shortest paths with at most (i + 1) edges.
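The edge-relaxation passes described above can be sketched in Python; the sample graph at the end is a made-up instance with one negative edge:

```python
def bellman_ford(n, edges, src):
    """edges: list of (u, v, w). Returns the distance list,
    or None if a negative weight cycle is reachable from src."""
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):               # |V| - 1 passes over all edges
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                # one extra pass detects a negative cycle
        if dist[u] != INF and dist[u] + w < dist[v]:
            return None
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 3, 7), (2, 1, -3), (3, 2, 2)]
print(bellman_ford(4, edges, 0))  # [0, 2, 5, 9]
```

Each pass relaxes every edge once, so the running time is O(VE), matching the analysis above.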


Bellman-Ford algorithm to compute shortest path


4.12 Travelling Salesperson Problem (T2: 5.9)
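As a sketch, the standard Held-Karp dynamic-programming formulation of the travelling salesperson problem can be implemented as follows; the 4-city distance matrix is an illustrative assumption, not an instance from the text:

```python
from itertools import combinations

def held_karp(dist):
    """dist: n x n distance matrix.
    Returns the length of a shortest tour that starts and ends at city 0."""
    n = len(dist)
    # C[(S, j)] = length of a shortest path from 0 through all cities in S, ending at j
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            S = frozenset(S)
            for j in S:
                C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j]
                                for k in S if k != j)
    full = frozenset(range(1, n))
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(held_karp(dist))  # 21
```

The table has O(2^n · n) entries, each filled in O(n) time, giving O(2^n · n^2) overall, far better than checking all (n − 1)! tours.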


4.13 Reliability design


Module 5 : Backtracking and Branch and Bound

5.1 General method of Backtracking


The principal idea is to construct solutions one component at a time and evaluate such partially
constructed candidates as follows. If a partially constructed solution can be developed further
without violating the problem's constraints, it is done by taking the first remaining legitimate
option for the next component. If there is no legitimate option for the next component, no
alternatives for any remaining component need to be considered. In this case, the algorithm
backtracks to replace the last component of the partially constructed solution with its next option.

It is convenient to implement this kind of processing by constructing a tree of choices being made,
called the state-space tree. Its root represents an initial state before the search for a solution begins.
The nodes of the first level in the tree represent the choices made for the first component of a
solution, the nodes of the second level represent the choices for the second component, and so on.
A node in a state-space tree is said to be promising if it corresponds to a partially constructed
solution that may still lead to a complete solution; otherwise, it is called nonpromising. Leaves
represent either nonpromising dead ends or complete solutions found by the algorithm. If the
current node is promising, its child is generated by adding the first remaining legitimate option for
the next component of a solution, and the processing moves to this child. If the current node turns
out to be nonpromising, the algorithm backtracks to the node's parent to consider the next possible
option for its last component; if there is no such option, it backtracks one more level up the tree,
and so on. Finally, if the algorithm reaches a complete solution to the problem, it either stops (if
just one solution is required) or continues searching for other possible solutions.

An output of a backtracking algorithm can be thought of as an n-tuple (x1, x2, ..., xn) where each
coordinate xi is an element of some finite linearly ordered set Si; for example, for the n-queens
problem, each Si is the set of integers (column numbers) 1 through n. The tuple may need to satisfy
some additional constraints (e.g., the nonattacking requirements in the n-queens problem).
Depending on the problem, all solution tuples can be of the same length (the n-queens and the
Hamiltonian circuit problems) or of different lengths (the subset-sum problem).

If such a tuple (x1, x2, ..., xi) is not a solution, the algorithm finds the next element in Si+1 that
is consistent with the values of (x1, x2, ..., xi) and the problem's constraints and adds it to the
tuple. If such an element does not exist, the algorithm backtracks to consider the next value of
xi, and so on.

ALGORITHM Backtrack(X[1..i])
//Gives a template of a generic backtracking algorithm
//Input: X[1..i] specifies first i promising components of a solution
//Output: All the tuples representing the problem's solutions
if X[1..i] is a solution write X[1..i]
else


for each element x ∈ Si+1 consistent with X[1..i] and the constraints do
    X[i + 1] ← x
    Backtrack(X[1..i + 1])

5.2 N-Queens Problem


The problem is to place n queens on an n-by-n chessboard so that no two queens attack each other
by being in the same row or in the same column or on the same diagonal. For n = 1, the problem
has a trivial solution, and it is easy to see that there is no solution for n = 2 and n = 3. So let us
consider the four-queens problem and solve it by the backtracking technique. Since each of the
four queens has to be placed in its own row, all we need to do is to assign a column for each
queen on the board presented in Figure below.

We start with the empty board and then place queen 1 in the first possible position of its row,
which is in column 1 of row 1. Then we place queen 2, after trying unsuccessfully columns 1 and
2, in the first acceptable position for it, which is square (2,3), the square in row 2 and column 3.
This proves to be a dead end because there is no acceptable position for queen 3. So, the algorithm
backtracks and puts queen 2 in the next possible position at (2,4). Then queen 3 is placed at (3,2),
which proves to be another dead end. The algorithm then backtracks all the way to queen 1 and
moves it to (1,2). Queen 2 then goes to (2,4), queen 3 to (3,1), and queen 4 to (4,3), which is a
solution to the problem. The state-space tree of this search is shown in Figure below.


Fig: State-space tree of solving the four-queens problem by backtracking.


x denotes an unsuccessful attempt to place a queen in the indicated column. The numbers
above the nodes indicate the order in which the nodes are generated.
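The four-queens search above generalizes to any n; a minimal backtracking sketch in Python, where each solution is the tuple of column numbers, one per row, as in the figure:

```python
def solve_n_queens(n):
    solutions = []
    cols = []  # cols[r] = column (1..n) of the queen placed in row r

    def safe(r, c):
        # No earlier queen in the same column or on the same diagonal.
        return all(c != cols[i] and abs(c - cols[i]) != r - i for i in range(r))

    def place(r):
        if r == n:                       # all n queens placed: record a solution
            solutions.append(tuple(cols))
            return
        for c in range(1, n + 1):        # try each column for the queen in row r
            if safe(r, c):
                cols.append(c)
                place(r + 1)
                cols.pop()               # backtrack and try the next column

    place(0)
    return solutions

print(solve_n_queens(4))  # [(2, 4, 1, 3), (3, 1, 4, 2)]
```

The first solution found, (2, 4, 1, 3), is exactly the placement reached in the walkthrough above.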

5.3 Hamiltonian Circuit Problem


It is a cycle in an undirected graph that visits each vertex exactly once and returns to the starting
vertex. We make use of DFS algorithm to solve this problem.

Let us consider the problem of finding a Hamiltonian circuit in the figure below.


a) Graph. (b) State-space tree for finding a Hamiltonian circuit. The numbers above the
nodes of the tree indicate the order in which the nodes are generated.

We assume that the circuit starts at ‘a’. We then move to the vertices adjacent to ‘a’. Vertices
‘b’, ‘c’, ‘d’ are adjacent to ‘a’. Considering the alphabetical order we move to ‘b’ first. From b, the
algorithm proceeds to c, then to d, then to e, and finally to f, which proves to be a dead end. So the
algorithm backtracks from f to e, then to d, and then to c. Going from c to e eventually proves
useless, and the algorithm has to backtrack from e to c and then to b. From there, it goes to the
vertices f, e, c, and d, from which it can legitimately return to a, yielding the Hamiltonian circuit
a, b, f, e, c, d, a. If we wanted to find another Hamiltonian circuit, we could continue this process
by backtracking from the leaf of the solution found.
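The DFS-based search just described can be sketched in Python; the adjacency lists below are assumed from the figure's graph (they reproduce the walkthrough above):

```python
def hamiltonian_circuit(adj, start):
    """adj: dict mapping each vertex to its neighbours in alphabetical order.
    Returns one Hamiltonian circuit as a vertex list, or None if none exists."""
    n = len(adj)
    path = [start]
    visited = {start}

    def extend():
        if len(path) == n:
            return start in adj[path[-1]]   # can the cycle be closed?
        for v in adj[path[-1]]:             # try neighbours in alphabetical order
            if v not in visited:
                path.append(v)
                visited.add(v)
                if extend():
                    return True
                visited.remove(v)           # dead end: backtrack
                path.pop()
        return False

    return path + [start] if extend() else None

# Adjacency assumed from the figure's graph.
adj = {
    "a": ["b", "c", "d"],
    "b": ["a", "c", "f"],
    "c": ["b", "d", "e"],
    "d": ["a", "c", "e"],
    "e": ["c", "d", "f"],
    "f": ["b", "e"],
}
print(hamiltonian_circuit(adj, "a"))  # ['a', 'b', 'f', 'e', 'c', 'd', 'a']
```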

Note: Travelling salesman problem is finding the shortest Hamiltonian circuit.

(Question frequently asked in exam) Apply backtracking to TSP for the following graph.
- (10 marks)


5.4 Subset-Sum Problem


Subset-sum problem: Find a subset of a given set S = {s1, ..., sn} of n positive integers whose sum
is equal to a given positive integer d. For example, for S = {1, 2, 5, 6, 8} and d = 9, there are two
solutions: {1, 2, 6} and {1, 8}. Some instances of this problem may have no solutions.

Sort the set's elements in increasing order.

Eg: S={3, 5, 6, 7} and d=15.


The root of the tree represents the starting point, with no decisions about the given elements made
as yet.

Its left and right children represent, respectively, inclusion and exclusion of s1 in a set being
sought.
Similarly, going to the left from a node of the first level corresponds to inclusion of s2, while going
to the right corresponds to its exclusion, and so on. Thus, a path from the root to a node on the ith
level of the tree indicates which of the first i numbers have been included in the subsets represented
by that node.

We record the value of s', the sum of these numbers, in the node. If s' is equal to d, we have a
solution to the problem. We can either report this result and stop or, if all the solutions need to be
found, continue by backtracking to the node's parent. If s' is not equal to d, we can terminate the
node as nonpromising if either of the following two inequalities holds:

s' + si+1 > d (adding the next element would make the sum too large)
s' + si+1 + . . . + sn < d (even adding all remaining elements would leave the sum too small)
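A Python sketch of this backtracking search, pruning a branch when the next element would overshoot d or when even all remaining elements cannot reach d:

```python
def subset_sum(s, d):
    """Return all subsets of s (sorted, increasing) that sum to d."""
    s = sorted(s)
    n = len(s)
    suffix = [0] * (n + 1)                 # suffix[i] = s[i] + ... + s[n-1]
    for i in range(n - 1, -1, -1):
        suffix[i] = suffix[i + 1] + s[i]
    solutions = []

    def explore(i, chosen, total):
        if total == d:                     # s' == d: a solution
            solutions.append(list(chosen))
            return
        if i == n:
            return
        if total + s[i] <= d:              # include s[i] only if it does not overshoot
            chosen.append(s[i])
            explore(i + 1, chosen, total + s[i])
            chosen.pop()
        if total + suffix[i + 1] >= d:     # exclude s[i] only if the rest can still reach d
            explore(i + 1, chosen, total)

    explore(0, [], 0)
    return solutions

print(subset_sum([3, 5, 6, 7], 15))  # [[3, 5, 7]]
```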


FIGURE State-space tree of the backtracking algorithm applied to the instance S = {3, 5, 6,
7} and d = 15 of the subset-sum problem. The number inside a node is the sum of the elements
already included in subsets represented by the node. The inequality below a leaf indicates
the reason for its termination.

5.5 Graph Coloring (for planar graphs):


Let G be a graph and m be a given positive integer. We want to discover whether the
nodes of G can be colored in such a way that no two adjacent nodes have the same
color, yet only m colors are used. This is termed the m-colorability decision problem.
The m-colorability optimization problem asks for the smallest integer m for which the
graph G can be colored.

Given any map, if the regions are to be colored in such a way that no two adjacent
regions have the same color, only four colors are needed.

For many years it was known that five colors were sufficient to color any map, but no
map that required more than four colors had ever been found. More than a century after
the problem was posed, it was solved by a group of mathematicians with the help of a
computer. They showed that four colors are in fact sufficient for planar graphs.

The function m-coloring will begin by first assigning the graph to its adjacency matrix,
setting the array x [] to zero. The colors are represented by the integers 1, 2, . . . , m
and the solutions are given by the n-tuple (x1, x2, . . ., xn), where xi is the color of node
i.

A recursive backtracking algorithm for graph coloring is carried out by invoking the
statement mcoloring(1);

Algorithm mcoloring (k)
// This algorithm was formed using the recursive backtracking schema. The graph is
// represented by its Boolean adjacency matrix G[1:n, 1:n]. All assignments of
// 1, 2, ..., m to the vertices of the graph such that adjacent vertices are assigned
// distinct integers are printed. k is the index of the next vertex to color.
{
    repeat
    { // Generate all legal assignments for x[k].
        NextValue(k);                  // Assign to x[k] a legal color.
        if (x[k] = 0) then return;     // No new color possible.
        if (k = n) then                // At most m colors have been used
            write (x[1:n]);            // to color the n vertices.
        else mcoloring(k + 1);
    } until (false);
}

Algorithm NextValue (k)
// x[1], ..., x[k-1] have been assigned integer values in the range [1, m] such that
// adjacent vertices have distinct integers. A value for x[k] is determined in the
// range [0, m]. x[k] is assigned the next highest numbered color while maintaining
// distinctness from the adjacent vertices of vertex k. If no such color exists, then
// x[k] is 0.
{
    repeat
    {
        x[k] := (x[k] + 1) mod (m + 1);   // Next highest color.
        if (x[k] = 0) then return;        // All colors have been used.
        for j := 1 to n do
        {
            if ((G[k, j] ≠ 0) and (x[k] = x[j]))
                // If (k, j) is an edge and adjacent vertices have the same color.
                then break;
        }
        if (j = n + 1) then return;       // New color found.
    } until (false);                      // Otherwise try to find another color.
}
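The same scheme can be sketched more compactly in Python; colors are tried in increasing order, exactly as NextValue does, and every valid assignment is collected:

```python
def m_coloring(adj_matrix, m):
    """Return all assignments of colors 1..m to the n vertices of the graph
    (given by its Boolean adjacency matrix) with adjacent vertices distinct."""
    n = len(adj_matrix)
    x = [0] * n                # x[k] = color of vertex k (0 = uncolored)
    results = []

    def color(k):
        for c in range(1, m + 1):
            # Legal if no already-colored neighbour of k has color c.
            if all(not adj_matrix[k][j] or x[j] != c for j in range(k)):
                x[k] = c
                if k == n - 1:
                    results.append(tuple(x))
                else:
                    color(k + 1)
                x[k] = 0       # backtrack and try the next color

    color(0)
    return results

# A path on 3 vertices is 2-colorable in exactly two ways.
print(m_coloring([[0, 1, 0], [1, 0, 1], [0, 1, 0]], 2))  # [(1, 2, 1), (2, 1, 2)]
```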

Example

Color the graph given below with minimum number of colors by backtracking using state space tree.


Fig: A 4-node graph and all possible 3-colorings (state-space tree with levels x1, x2, x3, x4).


5.6 Branch-and-Bound technique:


This technique is used for optimization problems (maximization or minimization). In this method
a state-space tree of all possible solutions is generated. Then partitioning or branching is done at
each node of the tree. We compute a bound value at each node. We then compare a node’s bound
value with the value of the best solution obtained so far. If the bound value of some node is not
better than the best solution, then that corresponding node is not expanded further. Such a node is
called a non-promising node and is terminated there.
For maximization problems such as Knapsack problem upper bound (ub) is calculated and for
minimization problems such as Travelling Salesman problem and Job Assignment problem lower
bound (lb) is calculated.

(Note: In the following 3 problems exhaustive approach is given only to understand what the
problem is about. Study the branch and bound technique for exam).

5.6.1 Assignment Problem (Job Assignment Problem)

There are n people who need to be assigned to execute n jobs, one person per job. The problem is
to find an assignment with the minimum total cost.

Exhaustive approach:

Eg: The cost of assignment is given by the following matrix:

J1 J2 J3 J4
a 10 3 8 9
b 7 5 4 8
c 6 9 2 9
d 8 7 10 5

The different permutations of assignment are :


Person a, Person b, Person c, Person d
<1, 2, 3, 4> Cost = 10+5+2+5 = 22
<1, 2, 4, 3> Cost= 10+5+9+10 =34
<1, 3, 4, 2> Cost = 10+4+9+7 = 30
<1, 3, 2, 4> Cost = 10+4+9+5 = 28
<1, 4, 3, 2> Cost = 10+8+2+7 = 27
<1, 4, 2, 3> Cost= 10+8+9+10 =37

<2, 1, 3, 4> Cost = 3+7+2+5 = 17


<2, 1, 4, 3> Cost= 3+7+9+10 =29


<2, 3, 4, 1> Cost = 3+4+9+8 = 24


<2, 3, 1, 4> Cost= 3+4 +6+5 =18
<2, 4, 3, 1> Cost = 3+8+2+8 = 21
<2, 4, 1, 3> Cost= 3+8+6+10 =27

<3, 2, 1, 4> Cost = 8+5+6+5 = 24


<3, 2, 4, 1> Cost= 8+5+9+8 =30
<3, 4, 1, 2> Cost = 8+8+6+7 =29
<3, 4, 2, 1> Cost= 8+8 +9+8 =33
<3, 1, 4, 2> Cost = 8+7+9+7 = 31
<3, 1, 2, 4> Cost= 8+7+9+5 =29

<4, 1, 2, 3> Cost= 9+7+9+10 =35


<4, 1, 3, 2> Cost = 9+7+2+7 = 25
<4, 2, 1, 3> Cost= 9+5+6+10 =30
<4, 2, 3, 1> Cost = 9+5+2+8 = 24
<4, 3, 1, 2> Cost= 9+4 +6+7 =26
<4, 3, 2, 1> Cost = 9+4+9+8 = 30

Out of these solutions the solution that takes the least cost is <2, 1, 3, 4> Cost=3+7+2+5=17
which is the optimal solution.
The above procedure takes a long time since n! (where n= number of jobs/persons) permutations
are to be generated.
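The exhaustive approach above is easy to check with a short brute-force sketch over all n! permutations:

```python
from itertools import permutations

cost = [
    [10, 3, 8, 9],   # person a
    [7, 5, 4, 8],    # person b
    [6, 9, 2, 9],    # person c
    [8, 7, 10, 5],   # person d
]

# Each permutation p assigns job p[person] to each person (0-based indices).
best = min(permutations(range(4)),
           key=lambda p: sum(cost[person][job] for person, job in enumerate(p)))
best_cost = sum(cost[person][job] for person, job in enumerate(best))
print(best, best_cost)  # (1, 0, 2, 3) 17  -> <2, 1, 3, 4> in the 1-based job numbering
```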

Branch and bound technique: Uses lesser time.

Eg: The cost of assignment is given by the following matrix:


We have to calculate the lower bound value for this problem because it is a minimization problem
(minimize the cost of assignment). The lower bound value for the root node is calculated by taking
the minimum values in each row. The nodes at level 1 show the possible job assignments for person
a. Continue with the node where the lower bound is the least. Since node 2 has least lb we continue
with it. The nodes in level 2 show the possible assignments for person b. Once person a is assigned
with job 2, person b cannot be assigned with job 2. The procedure continues to find the solution.
Since the lower bound of nodes 1, 3, 4 are greater than the solution obtained in node 8 we do not
continue with nodes 1, 3 or 4.

Fig: Complete state-space tree for the instance of the assignment problem solved with the
best-first branch-and-bound algorithm


5.7 Knapsack Problem


Let us now discuss how we can apply the branch-and-bound technique to solving the knapsack
problem.

What is knapsack problem?

Given n items of known weights wi and values vi , i = 1, 2, ..., n, and a knapsack of capacity W,
find the most valuable subset of the items that fit in the knapsack.

Exhaustive Search Technique:


Fig: a) Instance of the knapsack problem. b) The solution using the exhaustive search technique
is items = {3, 4} with optimal value 65.

Branch and Bound Technique:

Order the items of a given instance in descending order by their value-to-weight ratios. This
ratio is called the payoff ratio.

Each node on the ith level of the state space tree represents all the subsets of n items that include
a particular selection made from the first i ordered items. A branch going to the left indicates the
inclusion of the next item, while a branch going to the right indicates its exclusion.

We record the total weight w and the total value v of this selection in the node, along with some
upper bound ub(maximization).

A simple way to compute the upper bound ub is to add to v, the total value of the items already
selected, the product of the remaining capacity of the knapsack W − w and the best per-unit
payoff among the remaining items, which is vi+1/wi+1.


Example:

v1/w1=10
v2/w2=6
v3/w3=5
v4/w4=4

Fig: State-space tree of the branch-and-bound algorithm for the instance of


the knapsack problem


1) At the root node no items have been selected. Thus the value and weight are zero, and i = 0
at the root node.
Thus ub = 0 + 10*10 = 100.
2) At level 1, i = 1.
The upper bound at node 1 is ub = 40 + (10 − 4)*6 = 76.

Continue this procedure to get the state space tree shown above.
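A best-first branch-and-bound sketch in Python for this instance; the bound ub = v + (W − w)·(v_{i+1}/w_{i+1}) is the one described above, and a priority queue always expands the node with the largest ub first:

```python
import heapq

def knapsack_bb(weights, values, W):
    # Items are assumed pre-sorted in descending order of payoff ratio.
    n = len(weights)

    def ub(i, w, v):
        # Current value plus the best per-unit payoff among the remaining items.
        return v + (W - w) * (values[i] / weights[i]) if i < n else v

    best_v = 0
    heap = [(-ub(0, 0, 0), 0, 0, 0)]      # (-upper bound, level, weight, value)
    while heap:
        neg_ub, i, w, v = heapq.heappop(heap)
        if -neg_ub <= best_v or i == n:   # node cannot beat the best solution so far
            best_v = max(best_v, v)
            continue
        if w + weights[i] <= W:           # left child: include item i
            nv = v + values[i]
            best_v = max(best_v, nv)
            heapq.heappush(heap, (-ub(i + 1, w + weights[i], nv),
                                  i + 1, w + weights[i], nv))
        heapq.heappush(heap, (-ub(i + 1, w, v), i + 1, w, v))  # right: exclude item i
    return best_v

print(knapsack_bb([4, 7, 5, 3], [40, 42, 25, 12], 10))  # 65
```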

5.8 Traveling Salesman Problem


The problem asks to find the shortest tour through a given set of n cities that visits each city
exactly once before returning to the city where it started.

Exhaustive approach:
Eg:


Fig: Solution to a small instance of the traveling salesman problem by exhaustive search

Exhaustive search takes a longer time to calculate because it generates (n-1)! routes. Hence we
go for branch and bound technique.

Using branch and bound technique:

We need to compute the lower bound here because it is a minimization problem (minimize the
travel cost):
lb = ⌈s/2⌉,
where s = the sum of the two smallest distances from each city.

Note: For all problems of this kind we consider only routes where b is visited before c. This is
done to eliminate repeated tours. For example, the routes a-b-c-d-a and a-d-c-b-a are the same
tour traversed in opposite directions.


Eg: Find the optimal Travelling Salesman tour for the following graph below:

In the above example the lower bound is calculated as:

For node 0: lb = ⌈[(1+3) + (3+6) + (1+2) + (3+4) + (2+3)]/2⌉ = 14.

For node 3 the calculation of lb must include edge ‘ad’ and edge ‘da’:
lb = ⌈[(1+5) + (3+6) + (1+2) + (4+5) + (2+3)]/2⌉ = 16.
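The root-node bound can be checked with a short sketch; the distance matrix is assumed from the figure:

```python
from math import ceil

cities = "abcde"
# Distances assumed from the figure (symmetric).
d = {
    ("a", "b"): 3, ("a", "c"): 1, ("a", "d"): 5, ("a", "e"): 8,
    ("b", "c"): 6, ("b", "d"): 7, ("b", "e"): 9,
    ("c", "d"): 4, ("c", "e"): 2, ("d", "e"): 3,
}

def dist(u, v):
    return d.get((u, v)) or d.get((v, u))

def root_lb():
    """lb = ceil(s/2), where s sums the two smallest distances from each city."""
    s = 0
    for u in cities:
        nearest = sorted(dist(u, v) for v in cities if v != u)
        s += nearest[0] + nearest[1]
    return ceil(s / 2)

print(root_lb())  # 14
```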

5.9 Approximation Algorithms for NP-hard Problems


NP-hard problems are problems that are at least as hard as NP-complete problems.

[From unit 6: Class P is a class of decision problems that can be solved in polynomial time by
(deterministic) algorithms. This class of problems is called polynomial. Class NP is the class of
decision problems that can be solved by non-deterministic polynomial algorithms. This class of
problems is called nondeterministic polynomial. A nondeterministic algorithm is a two-stage
procedure that takes as its input an instance I of a decision problem and does the following.
Nondeterministic ("guessing") stage: An arbitrary string S is generated that can be thought of as
a candidate solution to the given instance I.


Deterministic ("verification") stage: A deterministic algorithm takes both I and S as its input and
outputs yes if S represents a solution to instance I. (If S is not a solution to instance I, the
algorithm either returns no or is allowed not to halt at all.)]

Why should we go for approximation algorithms?

Problem-solving techniques such as exhaustive search, branch and bound, and backtracking take
a long time to compute the answer. This is practically not applicable most of the time. Hence, in
real-time applications we may have to go for approximation algorithms, which are a faster
approach to solving problems. Approximation algorithms take only polynomial time to compute
the answer.

Approximation algorithms are also called heuristic algorithms (a heuristic is a common-sense rule
drawn from experience, according to which you work out the problem).

Accuracy Ratio:

Using approximation algorithms we may get the optimal answer or we may not get an optimal
answer. Hence to know the accuracy of approximation we calculate the accuracy ratio.

Accuracy ratio:
1) for minimization problems: r(sa) = f(sa) / f(s*)
2) for maximization problems: r(sa) = f(s*) / f(sa)
where s* is an exact solution and sa is an approximate solution.

DEFINITION A polynomial-time approximation algorithm is said to be a c-approximation
algorithm, where c ≥ 1, if the accuracy ratio of the approximation it produces does not exceed c
for any instance of the problem in question:
r(sa) ≤ c.

The best (i.e., the smallest) value of c for which this inequality holds for all instances of the
problem is called the performance ratio of the algorithm and is denoted RA.

Some approximation algorithms have infinitely large performance ratios (RA = ∞).

Approximation Algorithms for the Traveling Salesman Problem


THEOREM 1: If P ≠ NP, there exists no c-approximation algorithm for the traveling salesman
problem, i.e., there exists no polynomial-time approximation algorithm for this problem such that
for all instances
f(sa) ≤ c f(s*)
for some constant c.


PROOF: By way of contradiction, suppose that such an approximation algorithm A and a constant
c exist. (Without loss of generality, we can assume that c is a positive integer.) We will show that
this algorithm could then be used for solving the Hamiltonian circuit problem in polynomial time.
We will reduce the Hamiltonian circuit problem to the traveling salesman problem.

Let G be an arbitrary graph with n vertices. We map G to a complete weighted graph G' by
assigning weight 1 to each edge in G and adding an edge of weight cn + 1 between each pair of
vertices not adjacent in G. If G has a Hamiltonian circuit, its length in G' is n; hence, it is the exact
solution s* to the traveling salesman problem for G'.

Note that if sa is an approximate solution obtained for G' by algorithm A, then f(sa) ≤ cn by the
assumption. If G does not have a Hamiltonian circuit, the shortest tour in G' will contain at
least one edge of weight cn + 1, and hence f(sa) ≥ f(s*) > cn. Taking into account the two derived
inequalities, we could solve the Hamiltonian circuit problem for graph G in polynomial time by
mapping G to G', applying algorithm A to get tour sa in G', and comparing its length with cn.
Since the Hamiltonian circuit problem is NP-complete, we have a contradiction
unless P = NP.

5.10 Nearest-neighbor algorithm

The following simple greedy algorithm is based on the nearest-neighbor heuristic: the idea of
always going to the nearest unvisited city next.

Step 1: Choose an arbitrary city as the start.


Step 2: Repeat the following operation until all the cities have been visited:
Go to the unvisited city nearest the one visited last (ties can be broken arbitrarily).
Step 3: Return to the starting city.

Eg:

The nearest-neighbor algorithm yields the tour (Hamiltonian circuit) sa: a - b - c - d - a of length
10.
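A sketch reproducing this tour; the distance matrix is assumed from the figure (ab = 1, bc = 2, cd = 1, ac = 3, bd = 3, ad = 6):

```python
def nearest_neighbor_tour(dist, start):
    """Greedily visit the nearest unvisited city, then return to the start."""
    unvisited = set(dist) - {start}
    tour = [start]
    while unvisited:
        # Nearest unvisited city; ties broken alphabetically.
        nxt = min(sorted(unvisited), key=lambda c: dist[tour[-1]][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)                       # close the circuit
    length = sum(dist[u][v] for u, v in zip(tour, tour[1:]))
    return tour, length

# Distances assumed from the figure.
dist = {
    "a": {"b": 1, "c": 3, "d": 6},
    "b": {"a": 1, "c": 2, "d": 3},
    "c": {"a": 3, "b": 2, "d": 1},
    "d": {"a": 6, "b": 3, "c": 1},
}
print(nearest_neighbor_tour(dist, "a"))  # (['a', 'b', 'c', 'd', 'a'], 10)
```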


The optimal solution, as can be easily checked by exhaustive search, is the tour s*: a - b - d - c - a
of length 8. Thus, the accuracy ratio of this approximation is
r(sa) = f(sa) / f(s*) = 10/8 = 1.25

(i.e., tour sa is 25% longer than the optimal tour s*).

Note: The last edge selected will be edge d-a. Even if the weight of this edge is ∞, it will still
be selected by this algorithm. Hence RA = ∞ for this algorithm.

5.11 Multifragment-heuristic algorithm


Step 1: Sort the edges in increasing order of their weights. (Ties can be broken arbitrarily.)
Initialize the set of tour edges to be constructed to the empty set.

Step 2: Repeat this step until a tour of length n is obtained, where n is the number of cities in the
instance being solved: add the next edge on the sorted edge list to the set of tour edges, provided
this addition does not create a vertex of degree 3 or a cycle of length less than n; otherwise, skip
the edge.

Step 3: Return the set of tour edges.

Eg:

Step 1: Sorted edges are: ab, cd, bc, ac, bd, ad


Step 2: Applying the algorithm we get edges ab, cd, bc, ad.
Step 3: Thus the tour of the salesperson is a-b-c-d-a.
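A sketch of the multifragment heuristic; the edge weights are assumed from the figure (ab = 1, cd = 1, bc = 2, ac = 3, bd = 3, ad = 6), and the degree and short-cycle checks implement Step 2:

```python
def multifragment_tour(weighted_edges, n):
    """weighted_edges: list of (w, u, v). Returns the tour edges in selection order."""
    edges = sorted(weighted_edges)       # Step 1: increasing order of weight
    degree = {}
    parent = {}                          # union-find to detect premature cycles

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tour = []
    for w, u, v in edges:                # Step 2
        if degree.get(u, 0) == 2 or degree.get(v, 0) == 2:
            continue                     # would create a vertex of degree 3
        ru, rv = find(u), find(v)
        if ru == rv and len(tour) < n - 1:
            continue                     # would close a cycle of length < n
        tour.append((u, v))
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
        if ru != rv:
            parent[ru] = rv
        if len(tour) == n:
            break
    return tour                          # Step 3

edges = [(1, "a", "b"), (1, "c", "d"), (2, "b", "c"),
         (3, "a", "c"), (3, "b", "d"), (6, "a", "d")]
print(multifragment_tour(edges, 4))  # [('a', 'b'), ('c', 'd'), ('b', 'c'), ('a', 'd')]
```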

The performance ratio of this algorithm is also unbounded (i.e. ∞).

Euclidean Instances:

The graphs that satisfy the following 2 properties are called Euclidean instances:

1) Triangle inequality


d[i, j] ≤ d[i, k] + d[k, j] for any triple of cities i, j, and k


(the distance between cities i and j cannot exceed the length of a two-leg path
from i to some intermediate city k to j);

2) Symmetry
d[i, j] = d[j, i] for any pair of cities i and j
(the distance from i to j is the same as the distance from j to i).

For Euclidean instances, the accuracy ratio of the nearest-neighbor and multifragment-heuristic
algorithms satisfies r(sa) ≤ (1/2)(⌈log2 n⌉ + 1), where n is the number of cities.

5.12 Minimum-spanning-tree-based algorithms

 Twice-around-the-tree algorithm
This algorithm works only for Euclidean instances. It is a 2-approximation algorithm.

Step 1: Construct a minimum spanning tree of the graph corresponding to a given instance of the
traveling salesman problem.

Step 2: Starting at an arbitrary vertex, perform a walk around the minimum spanning tree recording
all the vertices passed by. (This can be done by a DFS traversal.)

Step 3: Scan the vertex list obtained in Step 2 and eliminate from it all repeated occurrences of
the same vertex except the starting one at the end of the list. (This step is equivalent to making
shortcuts in the walk.) The vertices remaining on the list will form a Hamiltonian circuit, which is
the output of the algorithm.

Eg:


Step 1: The minimum spanning tree (by Kruskal’s algorithm) is

Step 2: Starting at ‘a’, we perform a walk to get the walk around the tree as shown. (We have to
return to ‘a’.) The walk is a-b-c-b-d-e-d-b-a.

Step 3: Remove the repeated vertices from the walk obtained in Step 2. We get the tour a-b-c-d-e-a of
length 39. (The optimal tour is a-b-c-e-d-a, whose cost is 37.)
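Steps 2 and 3 can be sketched as follows. The MST figure is not reproduced here, so the adjacency used in the usage comment (edges a-b, b-c, b-d, d-e) is inferred from the walk a-b-c-b-d-e-d-b-a in the worked example.

```python
def twice_around_the_tree(mst_adj, start):
    """mst_adj: adjacency dict of the minimum spanning tree."""
    walk = []

    # Step 2: DFS walk around the tree, recording every vertex passed,
    # including each return through a vertex on the way back.
    def dfs(u, parent):
        walk.append(u)
        for v in sorted(mst_adj[u]):
            if v != parent:
                dfs(v, u)
                walk.append(u)

    dfs(start, None)

    # Step 3: shortcut repeated vertices, keeping only the first occurrence
    # of each, then close the circuit with the starting vertex.
    seen, tour = set(), []
    for v in walk:
        if v not in seen:
            seen.add(v)
            tour.append(v)
    tour.append(start)
    return walk, tour

# Usage on the assumed MST of the worked example:
# mst = {'a': ['b'], 'b': ['a', 'c', 'd'], 'c': ['b'],
#        'd': ['b', 'e'], 'e': ['d']}
# twice_around_the_tree(mst, 'a') gives the walk a-b-c-b-d-e-d-b-a
# and the tour a-b-c-d-e-a.
```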

THEOREM 2: The twice-around-the-tree algorithm is a 2-approximation algorithm for the
traveling salesman problem with Euclidean distances.


PROOF: We need to show that for any Euclidean instance of the traveling salesman problem, the
length of the tour sa obtained by the twice-around-the-tree algorithm is at most twice the length of
the optimal tour s*; that is,
f(sa) ≤ 2f(s*).

Since removing any edge from s* yields a spanning tree T of weight w(T), which must be greater
than or equal to the weight of the graph's minimum spanning tree w(T*), we get the inequality
f(s*) > w(T) ≥ w(T*).
This inequality implies that
2f(s*) > 2w(T*) = the length of the walk obtained in Step 2 of the algorithm.

The possible shortcuts outlined in Step 3 of the algorithm to obtain sa cannot increase the total
length of the walk in a Euclidean graph; that is,
the length of the walk obtained in Step 2 ≥ the length of the tour sa.

Combining the last two inequalities, we get the inequality 2f(s*) > f(sa),
which is, in fact, a slightly stronger assertion than the one we needed to prove.

Other approximation algorithms for TSP are


1) Christofides algorithm
2) Local Search Heuristics
 2-opt
 3-opt
 Lin-Kernighan

[Refer pp. 441–446 for the above three algorithms; they have not been asked in the exam.]

5.12 Approximation Algorithms for the Knapsack Problem

Given n items of known weights w1, ..., wn and values v1, ..., vn, and a knapsack of weight
capacity W, find the most valuable subset of the items that fits into the knapsack.

1) Greedy algorithms for the knapsack problem

a) Greedy algorithm for the discrete knapsack problem: In the discrete (0/1) knapsack problem,
each item is either taken in full or not taken at all.

Step 1: Compute the value-to-weight ratios ri = vi / wi , i = 1, ... , n, for the items given.
Step 2: Sort the items in non-increasing (decreasing) order of the ratios computed in Step 1. (Ties
can be broken arbitrarily.)

Step 3: Repeat the following operation until no item is left in the sorted list: if the current item on
the list fits into the knapsack, place it in the knapsack; otherwise, proceed to the next item.


Eg 1: Solve the following knapsack instance given

W=10,

Step 1 and 2: Compute the v/w ratios and arrange in decreasing order.

Step 3: The greedy algorithm will select the first item of weight 4, skip the next item of weight 7,
select the next item of weight 5, and skip the last item of weight 3. The solution obtained happens
to be optimal for this instance.

Solution obtained is items {1, 3} with value=65.
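The greedy 0/1 procedure can be sketched as below. The notes state the weights (4, 7, 5, 3) and, via the continuous example, the values of items 1 and 2 (40 and 42); the values 25 and 12 used for items 3 and 4 in the usage comment are assumptions consistent with the reported answer of 65.

```python
def greedy_discrete_knapsack(weights, values, W):
    """Greedy heuristic for the 0/1 knapsack (items are 0-indexed here)."""
    # Steps 1-2: sort item indices by value-to-weight ratio, decreasing.
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    chosen, load, value = [], 0, 0
    # Step 3: take each item that still fits; otherwise move on to the next.
    for i in order:
        if load + weights[i] <= W:
            chosen.append(i)
            load += weights[i]
            value += values[i]
    return chosen, value

# Usage on the instance above (values of items 3 and 4 assumed):
# greedy_discrete_knapsack([4, 7, 5, 3], [40, 42, 25, 12], 10)
# picks items 0 and 2 (items 1 and 3 in the notes' 1-based numbering)
# with total value 65.
```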

Eg 2:

Since the items are already ordered as required, the algorithm takes the first item and skips the
second one; the value of this subset is 2. The optimal selection is item 2, whose value is W. Hence,
the accuracy ratio r(sa) of this approximate solution is W/2, which is unbounded above.


b) Greedy algorithm for the continuous knapsack problem

The continuous knapsack problem is where the fraction of item can also be taken.

Step 1: Compute the value-to-weight ratios vi/wi, i = 1, ..., n, for the items given.

Step 2: Sort the items in non-increasing order of the ratios computed in Step 1. (Ties can be
broken arbitrarily.)

Step 3: Repeat the following operation until the knapsack is filled to its full capacity or no item is
left in the sorted list: if the current item on the list fits into the knapsack in its entirety, take it and
proceed to the next item; otherwise, take its largest fraction to fill the knapsack to its full
capacity and stop.

Eg:
Solve the following knapsack instance given

W=10,

Step 1 and 2: Compute the v/w ratios and arrange in decreasing order.

Step 3: This algorithm will take the first item of weight 4 and then 6/7 of the next item on the
sorted list to fill the knapsack to its full capacity
[4 + (6/7)·7 = 10].
Solution obtained: item 1 plus 6/7 of item 2, with value = 40 + (6/7)·42 = 76.
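The continuous variant differs from the discrete one only in its last step: when an item no longer fits, its largest fitting fraction is taken and the algorithm stops. A sketch on the same instance (values of items 3 and 4 again assumed, as the table is not reproduced):

```python
def greedy_continuous_knapsack(weights, values, W):
    """Greedy algorithm for the fractional knapsack; optimal for this variant."""
    # Steps 1-2: sort item indices by value-to-weight ratio, decreasing.
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    remaining, value, taken = W, 0.0, []
    # Step 3: take whole items while they fit; finish with a fraction.
    for i in order:
        if weights[i] <= remaining:
            taken.append((i, 1.0))            # take the whole item
            remaining -= weights[i]
            value += values[i]
        else:
            frac = remaining / weights[i]     # largest fraction that fits
            taken.append((i, frac))
            value += frac * values[i]
            break
    return taken, value

# Usage:
# greedy_continuous_knapsack([4, 7, 5, 3], [40, 42, 25, 12], 10)
# takes item 0 fully and 6/7 of item 1, for value 40 + (6/7)*42 = 76.
```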


2) Approximation schemes for discrete knapsack problem

This approximation scheme was suggested by S. Sahni in 1975 [Sah75]. The algorithm generates all
subsets of k items or less and completes each of them greedily.

Step 1: Compute the value-to-weight ratios vi/wi, i = 1, ..., n, for the items given.

Step 2: Sort the items in non-increasing order of the ratios computed in Step 1. (Ties can be
broken arbitrarily.)

Step 3: Generate all subsets of k items or less. For every subset that fits into the knapsack, add
the remaining items in ratio order as the greedy algorithm would; among all the solutions obtained,
select the most valuable one.

Eg: Solve the following knapsack instance using approximation scheme: Given W=10 and k=2.

Step 1 and 2: Compute the v/w ratios and arrange in decreasing order.


Step 3: Generate the subsets with at most k = 2 items and complete each of them greedily.

Solution obtained is items {1, 3, 4} with value = 69.
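The scheme can be sketched as below. The item table for this example is not reproduced in the notes, so the weights (4, 7, 5, 1) and values (40, 42, 25, 4) in the usage comment are assumptions consistent with the reported answer {1, 3, 4} with value 69.

```python
from itertools import combinations

def sahni_scheme(weights, values, W, k):
    """Sahni's approximation scheme (items are 0-indexed here)."""
    n = len(weights)
    # Steps 1-2: item indices in non-increasing ratio order.
    order = sorted(range(n),
                   key=lambda i: values[i] / weights[i], reverse=True)
    best_set, best_value = set(), 0
    # Step 3: try every subset of at most k items that fits ...
    for size in range(k + 1):
        for subset in combinations(range(n), size):
            load = sum(weights[i] for i in subset)
            if load > W:
                continue
            value = sum(values[i] for i in subset)
            chosen = set(subset)
            # ... and complete it greedily with the remaining items.
            for i in order:
                if i not in chosen and load + weights[i] <= W:
                    chosen.add(i)
                    load += weights[i]
                    value += values[i]
            if value > best_value:
                best_set, best_value = chosen, value
    return best_set, best_value

# Usage on the assumed instance:
# sahni_scheme([4, 7, 5, 1], [40, 42, 25, 4], 10, 2)
# returns the set {0, 2, 3} (items {1, 3, 4} in 1-based numbering)
# with value 69.
```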
