
Divide and Conquer

Greedy Techniques
Dynamic Programming
Dr R Manimegalai
Professor and Head / CSE
Divide and Conquer Strategy

* Divide and Conquer / Dr R Manimegalai, Professor and Head / CSE 2


Typical Case of Divide and Conquer - Two Sub-problems
Computing Time of DAndC Algorithm

Master’s Theorem
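For reference, the standard statement of the Master Theorem (as given in most algorithms texts, not transcribed from this slide) for recurrences of the form T(n) = aT(n/b) + f(n):

```latex
T(n) = a\,T(n/b) + f(n), \qquad a \ge 1,\ b > 1
\begin{cases}
T(n) = \Theta\!\left(n^{\log_b a}\right) & \text{if } f(n) = O\!\left(n^{\log_b a - \epsilon}\right) \text{ for some } \epsilon > 0,\\
T(n) = \Theta\!\left(n^{\log_b a}\log n\right) & \text{if } f(n) = \Theta\!\left(n^{\log_b a}\right),\\
T(n) = \Theta(f(n)) & \text{if } f(n) = \Omega\!\left(n^{\log_b a + \epsilon}\right) \text{ and } a\,f(n/b) \le c\,f(n) \text{ for some } c < 1.
\end{cases}
```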

Brute Force Approach – MaxMin( )
Recursive MaxMin( )
Analysis of Recursive MaxMin
Trees of Recursive Calls of MaxMin
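The recursive MaxMin outlined above can be sketched as follows — a divide-and-conquer version that finds both extremes with fewer comparisons than a brute-force scan (function and variable names here are illustrative):

```python
def max_min(a, lo, hi):
    """Return (minimum, maximum) of a[lo..hi] by divide and conquer."""
    if lo == hi:                      # one element: no comparison needed
        return a[lo], a[lo]
    if hi == lo + 1:                  # two elements: one comparison
        return (a[lo], a[hi]) if a[lo] < a[hi] else (a[hi], a[lo])
    mid = (lo + hi) // 2              # divide into two halves
    min1, max1 = max_min(a, lo, mid)
    min2, max2 = max_min(a, mid + 1, hi)
    # combine: one comparison for the min, one for the max
    return min(min1, min2), max(max1, max2)

data = [22, 13, -5, -8, 15, 60, 17, 31, 47]
print(max_min(data, 0, len(data) - 1))  # (-8, 60)
```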
Merge Sort Algorithm

Merge Algorithm

Example: Merge and Mergesort
• 8, 3, 2, 9, 7, 1, 5, 4
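A minimal merge sort sketch on the input above (illustrative code, not the exact slide algorithm):

```python
def merge_sort(a):
    """Sort a list by recursively splitting and merging."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(merge_sort([8, 3, 2, 9, 7, 1, 5, 4]))  # [1, 2, 3, 4, 5, 7, 8, 9]
```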

Analysis of Merge Sort

Merge Sort - Horowitz
Merge - Horowitz
QuickSort

Three Cases While Scanning the Elements
Quicksort Algorithm

QuickSort – Partition Algorithm

Quick Sort Example
• 5, 3, 1, 9, 8, 2, 4, 7
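A sketch of quicksort on the input above, using a Lomuto-style partition (illustrative; the slides' partition scheme may differ, e.g. Hoare's):

```python
def partition(a, lo, hi):
    """Place a[hi] (the pivot) at its final position; smaller elements left."""
    pivot = a[hi]
    i = lo - 1
    for j in range(lo, hi):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[hi] = a[hi], a[i + 1]
    return i + 1                      # final pivot position

def quick_sort(a, lo, hi):
    if lo < hi:
        p = partition(a, lo, hi)
        quick_sort(a, lo, p - 1)      # sort elements before the pivot
        quick_sort(a, p + 1, hi)      # sort elements after the pivot

data = [5, 3, 1, 9, 8, 2, 4, 7]
quick_sort(data, 0, len(data) - 1)
print(data)  # [1, 2, 3, 4, 5, 7, 8, 9]
```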



Quick Sort Analysis
Quick Sort - Best Case

Quick Sort – Worst Case


Quick Sort Analysis
Quick Sort – GeeksforGeeks




https://www.geeksforgeeks.org/quick-sort/
Stable Sorting Algorithms

• A stable sort preserves the original order of the input set whenever the
comparison does not distinguish between two or more items: data items with the
same rank appear in the output in the same order as in the input.

• A sorting algorithm is said to be stable if two objects with equal keys appear in the
same order in the sorted output as they appear in the unsorted input array.
– Stable sorting algorithms: Insertion Sort, Merge Sort, and Bubble Sort.
– Not stable: Quick Sort, Heap Sort.
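Stability can be observed directly; a small sketch using Python's built-in `sorted` (which is guaranteed stable) on records sharing equal keys:

```python
# Records with duplicate keys; names are illustrative.
records = [("dave", 2), ("alice", 1), ("bob", 2), ("carol", 1)]

# Sort by the numeric key only; Python's sort is stable (Timsort),
# so items with equal keys keep their original relative order.
by_key = sorted(records, key=lambda r: r[1])

print(by_key)
# alice stays before carol, and dave stays before bob.
```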
Inplace Sorting Algorithms
• An in-place sorting algorithm directly modifies the list that is
received as input instead of creating a new list that is then modified;
it uses only a small amount of extra space to manipulate the input.
– An in-place sorting algorithm updates the input only through replacement or
swapping of elements.

• Such an algorithm may use only a constant amount of extra space;
counting everything, including function-call stack and pointers, this
may be O(log n).

• Bubble sort, insertion sort, and selection sort are in-place sorting
algorithms, because they require only swapping of elements within the
input array.

• Bubble sort and insertion sort are stable algorithms, but selection sort
is not.
In-place vs. Stable Sorting Algorithms

Time and Space Complexities of Sorting Algorithms

Dynamic Programming

Dynamic Programming

Dynamic Programming

Principle of Optimality

Two main properties of a problem suggest that it can be solved using
Dynamic Programming:

1) Overlapping Sub-problems
2) Optimal Substructure

Properties of Problems that are Suitable to Apply DP
• Optimal Substructure
– A given problem has the optimal substructure property if an optimal solution to
the problem can be obtained by using optimal solutions of its sub-problems.
• For example, the Shortest Path problem has the following optimal substructure property:
if a node x lies on the shortest path from a source node u to a destination node v, then
the shortest path from u to v is the combination of the shortest path from u to x and the
shortest path from x to v.

• Overlapping Sub-problems
– Like Divide and Conquer, Dynamic Programming combines solutions to
sub-problems
– Dynamic Programming is mainly used when solutions of the same
sub-problems are needed again and again
– In dynamic programming, computed solutions to sub-problems are stored in a
table so that they don't have to be recomputed
– Dynamic Programming is not useful when there are no common (overlapping)
sub-problems, because there is no point storing solutions that are never
needed again.
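The overlapping-sub-problems idea can be sketched with the classic Fibonacci recurrence — each sub-result is stored in a table (memo) so it is computed only once (an illustrative example, not from the slides):

```python
def fib(n, memo={0: 0, 1: 1}):
    """Fibonacci with memoization: each fib(k) is computed once,
    then looked up from the table on every later request."""
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(10))  # 55
print(fib(40))  # 102334155 -- instant, vs. exponential time without the table
```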
Multi-stage Graph Problem
Multi-stage Graph - Example

Recurrence for Multi-Stage Graph Problem

Multi-stage Graph Problem – Forward Approach
Multi-stage Graph Problem – Backward Approach



Optimal Binary Search Tree

BSTs with External Nodes

Optimal Binary Search Tree

Possible BSTs for the Identifier Set {do, if, while}

Cost Associated with each BST

OBST – Dynamic Programming


OBST - Example



Algorithm for Constructing OBST

* PSGiTech – CSE, Inputs for Writing Project Report by RM 63


Algorithm for Constructing OBST (cntd.)

Algorithm for Constructing OBST – Find( ) Function

Matrix Chain Multiplication Problem
• Problem: In what order should n matrices A1, A2, A3, …, An be multiplied so
that the result is obtained with the minimum number of computations?

• Cost of matrix multiplication: Two matrices are called compatible only if the
number of columns in the first matrix equals the number of rows in the second.
Matrix multiplication is possible only if the matrices are compatible.

• Let A and B be two compatible matrices of dimensions p × q and q × r. Each
element of each row of the first matrix is multiplied with the corresponding
elements of the appropriate column in the second matrix, so the total number of
multiplications required to multiply A and B is p × q × r.
Matrix Chain Multiplication Problem
• Suppose the dimensions of three matrices are: A1 = 5 × 4, A2 = 4 × 6, and
A3 = 6 × 2

The result of both multiplication sequences is the same, but the numbers of
multiplications differ: (A1A2)A3 costs 5·4·6 + 5·6·2 = 120 + 60 = 180
multiplications, while A1(A2A3) costs 4·6·2 + 5·4·2 = 48 + 40 = 88. This leads
to the question: what order should be selected for a chain of matrices to
minimize the number of multiplications?
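This question is answered by the standard matrix-chain dynamic program; a compact sketch (illustrative code, where matrix Ai has dimensions dims[i-1] × dims[i]):

```python
def matrix_chain_cost(dims):
    """Minimum scalar multiplications to evaluate A1..An,
    where Ai has dimensions dims[i-1] x dims[i]."""
    n = len(dims) - 1                       # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][i] = 0: single matrix
    for length in range(2, n + 1):          # chain length, filled diagonally
        for i in range(1, n - length + 2):
            j = i + length - 1
            # try every split point k and keep the cheapest
            m[i][j] = min(
                m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
    return m[1][n]

# A1 = 5x4, A2 = 4x6, A3 = 6x2 (the example above)
print(matrix_chain_cost([5, 4, 6, 2]))  # 88
```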
Counting the Number of Parenthesizations
Matrix Chain Multiplication - Example 1

Iteration 1 : Difference between i and j is 1, (j – i) = 1


Matrix Chain Multiplication - Example 1

Iteration 2 : Difference between i and j is 2, (j – i) = 2


Matrix Chain Multiplication - Example 1

Iteration 3 : Difference between i and j is 3, (j – i) = 3


Matrix Chain Multiplication - Example 1

Iteration 4 : Difference between i and j is 4, (j – i) = 4


Matrix Chain Multiplication - Example 1
Matrix Chain Multiplication - Example 2

In dynamic programming, the table is initialized with 0 on the main
diagonal (the cost of a single matrix is 0), and the table is then filled
diagonal by diagonal. All split-point combinations are worked out, but
only the combination with the minimum cost is taken into consideration.
Matrix Chain Multiplication Algorithm using DP

Total Complexity is: O(n³)


Matrix Chain Multiplication - Parenthesization

Parenthesizing Matrix Chain Multiplication – Example 1
Parenthesizing Matrix Chain Multiplication – Example 2
Basics of Greedy Approach
• The Greedy method is one of the most straightforward design techniques
and can be applied to a wide variety of problems

• The algorithm works in steps. In each step it selects the best available
option until all options are exhausted.

• Most of these problems have n inputs and require us to obtain a subset
that satisfies some constraints

• Any subset that satisfies these constraints is called a feasible
solution.

• A feasible solution that either minimizes or maximizes a given
objective function is called an Optimal Solution.
94
Basics of Greedy Approach
• The Greedy method suggests that one can devise an algorithm that
works in stages, considering one input at a time.

• At each stage, a decision is made regarding whether a particular input
belongs in an optimal solution. This is done by considering the inputs in
an order determined by some selection procedure.

• If the inclusion of the next input into the partially constructed optimal
solution results in a suboptimal / infeasible solution, then that input is not
added to the partial solution. Otherwise, it is added.

• The selection procedure itself is based on some optimization
measure.

Basics of Greedy Approach
• Greedy is an algorithmic paradigm that builds up a solution piece by
piece, always choosing the next piece that offers the most obvious and
immediate benefit. Greedy algorithms are used for optimization
problems.

• An optimization problem can be solved using Greedy if the problem
has the following property:
– At every step, we can make a choice that looks best at the
moment, and we get the optimal solution to the complete
problem.

Basics of Greedy Approach

Subset Paradigm and Ordering Paradigm

• Subset Paradigm: To solve a problem (or possibly find the optimal /
best solution), the greedy approach generates subsets by selecting one or
more available choices.
– Knapsack problem, job sequencing with deadlines
– The greedy approach creates a subset of items or jobs which satisfies all the
constraints

• Ordering Paradigm: The greedy approach generates some arrangement /
order to get the best solution.
– Minimum Spanning Tree: Prim's and Kruskal's Algorithms

Greedy Control Abstraction for the Subset Problem

Select selects an input from a[ ] and removes it. The selected input’s value is
assigned to x. Feasible is a Boolean-valued function that determines whether x
can be included into the solution vector or not. Union combines x with the
solution and updates the objective function.
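The control abstraction described above can be sketched generically; Select, Feasible, and Union are problem-specific callables here, with names following the slide's description (an illustrative sketch, not the textbook pseudocode verbatim):

```python
def greedy(a, feasible, select, union):
    """Generic greedy control abstraction for subset problems."""
    solution = []
    a = list(a)                    # work on a copy of the inputs
    while a:
        x = select(a)              # Select picks the best remaining input
        a.remove(x)                # ... and removes it from a[]
        if feasible(solution, x):  # can x join the solution vector?
            solution = union(solution, x)   # Union combines x with solution
    return solution

# Toy usage: pick numbers while the running total stays <= 10, largest first.
picked = greedy(
    [7, 2, 5, 8, 1],
    feasible=lambda sol, x: sum(sol) + x <= 10,
    select=max,
    union=lambda sol, x: sol + [x],
)
print(picked)  # [8, 2]
```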
Components of a Greedy Algorithm
• Candidate set: A solution that is created from the set is known as a
candidate set.
• Selection function: This function is used to choose the candidate or
subset which can be added in the solution.
• Feasibility function: A function that is used to determine whether the
candidate or subset can be used to contribute to the solution or not.
• Objective function: A function is used to assign the value to the
solution or the partial solution.
• Solution function: This function is used to indicate whether a
complete solution has been reached or not.
• What is the greedy choice property?
– A global optimum can be arrived at by selecting local optima. An optimal
solution to the problem contains optimal solutions to its sub-problems.

Greedy Properties

Greedy vs. Dynamic Programming

Pros and Cons of Greedy

Greedy Applications

• Container Loading Problem
• Fractional knapsack algorithm
• Optimal Storage on tapes
• Optimal Merge Patterns
• Job sequencing with deadline
• Single source shortest path
• Dijkstra's Single Source Shortest Path Algorithm
• Activity Selection Problem
• Minimum Cost Spanning Tree: Prim’s and Kruskal’s Algorithms
• Construction of Huffman Tree and Huffman Codes

Activity Selection Problem
• There are n activities with their start and finish times. Select the
maximum number of activities that can be performed by a single
person, assuming that a person can only work on a single activity at a
time.
• Input: start[] = {10, 12, 20}, finish[] = {20, 25, 30}
• Output: 0 2
• Explanation: A person can perform at most two activities. The
maximum set of activities that can be executed
is {0, 2} [ These are indexes in start[] and finish[] ]
• Input: start[] = {1, 3, 0, 5, 8, 5}, finish[] = {2, 4, 6, 7, 9, 9};
• Output: 0 1 3 4
Explanation: A person can perform at most four activities. The
maximum set of activities that can be executed
is {0, 1, 3, 4} [ These are indexes in start[] and finish[] ]
Activity Selection Problem
• The greedy choice is to always pick the next activity whose finish time is the
least among the remaining activities and the start time is more than or equal
to the finish time of the previously selected activity.

• We can sort the activities according to their finishing time so that we always
consider the next activity as the minimum finishing time activity

• Follow the given steps to solve the problem:


– Sort the activities according to their finishing time
– Select the first activity from the sorted array and print it
– Do the following for the remaining activities in the sorted array
– If the start time of this activity is greater than or equal to the finish time
of the previously selected activity then select this activity and print it
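The steps above can be sketched as follows (illustrative code; it returns indices into the original arrays, matching the slide's examples):

```python
def select_activities(start, finish):
    """Greedy activity selection: sort by finish time, then take each
    activity whose start is >= the finish of the last one selected."""
    order = sorted(range(len(start)), key=lambda i: finish[i])
    chosen, last_finish = [], float("-inf")
    for i in order:
        if start[i] >= last_finish:   # non-interfering with the last pick
            chosen.append(i)
            last_finish = finish[i]
    return chosen

print(select_activities([10, 12, 20], [20, 25, 30]))          # [0, 2]
print(select_activities([1, 3, 0, 5, 8, 5], [2, 4, 6, 7, 9, 9]))  # [0, 1, 3, 4]
```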

Activity Selection Problem (source: geeksforgeeks)

int s[] = { 1, 3, 0, 5, 8, 5 };
int f[] = { 2, 4, 6, 7, 9, 9 };

Following activities are selected: 0 1 3 4
Time Complexity: O(N)
Auxiliary Space: O(1)
Activity Selection Problem (source: javatpoint)

Activity Selection Problem (source: javatpoint)

Arrange the Activities in increasing order of end time
Activity Selection Problem (source: javatpoint)
• First, schedule A1
• Next, schedule A3, as A1 and A3 are non-interfering.
• Next, skip A2, as it is interfering.
• Next, schedule A4, as A1, A3, and A4 are non-interfering; then
schedule A6, as A1, A3, A4, and A6 are non-interfering.
• Skip A5, as it is interfering.
• Next, schedule A7, as A1, A3, A4, A6, and A7 are non-interfering.
• Next, schedule A9, as A1, A3, A4, A6, A7, and A9 are non-interfering.
• Skip A8, as it is interfering.
• Next, schedule A10, as A1, A3, A4, A6, A7, A9, and A10 are non-interfering.
• Thus the final activity schedule is: A1, A3, A4, A6, A7, A9, A10

Optimal Merge Patterns


Binary Merge Tree Representing a Merge Pattern

Two-Way Merge Patterns
Algorithm to Generate a Two-way Merge Tree
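The greedy two-way merge pattern can be sketched with a min-heap: repeatedly merge the two smallest lists, and the accumulated total is the optimal merge cost (illustrative code; the file sizes below are the classic textbook example):

```python
import heapq

def optimal_merge_cost(sizes):
    """Total record moves for the optimal two-way merge pattern."""
    heap = list(sizes)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        x = heapq.heappop(heap)   # the two smallest remaining lists
        y = heapq.heappop(heap)
        total += x + y            # merging them moves x + y records
        heapq.heappush(heap, x + y)
    return total

print(optimal_merge_cost([20, 30, 10, 5, 30]))  # 205
```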

Two-way Merge Example



Huffman Trees
Huffman Codes and Construction of Huffman Tree
• Encoding of text by using sequences of bits, called codewords
• Fixed-length encoding vs. variable-length encoding
– Fixed-length encoding assigns each character a bit string of the same length m
– Variable-length encoding assigns codewords of different lengths to different
characters
– Prefix-free codes: no codeword is a prefix of the codeword of another character
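Huffman-tree construction follows the same min-heap pattern as optimal merging: repeatedly combine the two least-frequent subtrees. A sketch (the frequency table is illustrative, not from the slides):

```python
import heapq

def huffman_codes(freq):
    """Build prefix-free codewords from a {symbol: frequency} table."""
    # Each heap entry: (frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    tick = len(heap)                       # unique tie-breaker for merged nodes
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)    # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}   # left branch
        merged.update({s: "1" + code for s, code in c2.items()})  # right branch
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

codes = huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
# Frequent symbols get short codewords; no codeword is a prefix of another.
print(codes)
```

Tie-breaking can change the individual codewords, but the weighted path length (total encoded bits) is always optimal.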
Huffman Codes - Example



Minimum Weighted Path Tree using Huffman Codes

Huffman Codes – Advantages and Applications

• Dynamic Huffman coding – the coding tree is updated each time a
new character is read from the source text
• Data Compression
• Binary tree with minimum weighted path length
– Decision trees in gaming

Thank You!
