
Unit II

Brute Force Method, Divide and Conquer and Decrease and Conquer
Contents
Brute force approach:
• Selection sort
• Bubble sort
• Sequential search
• String Matching
• Exhaustive search
• Breadth-First search (BFS)
• Depth-First search (DFS)

Decrease and Conquer:


• Insertion sort
• Topological sorting
• Algorithms for generating combinatorial objects
• Decrease by-a-constant-factor algorithms

Divide and conquer


• Merge sort
• Quick sort
• Binary tree traversals and related properties
• Strassen’s Matrix Multiplication
Brute Force Approach:
Selection sort, Bubble sort, Sequential
search, and String Matching
 Brute force is a straightforward approach to solving a problem, usually
directly based on the problem statement and definitions of the concepts
involved.
 The "force" implied by the strategy's definition is that of a computer
and not that of one's intellect. "Just do it!" would be another way to
describe the prescription of the brute-force approach.
 And often, the brute-force strategy is indeed the one that is easiest to
apply.
 As an example, consider the exponentiation problem: compute a^n
for a given number a and a nonnegative integer n.
Brute Force Approach:
Selection sort, Bubble sort, Sequential
search, and String Matching
 Though this problem might seem trivial, it provides a useful vehicle
for illustrating several algorithm design techniques, including the
brute-force approach.
 Also note that computing a^n mod m for some large integers is a
principal component of a leading encryption algorithm.

By the definition of exponentiation,

 a^n = a ∗ a ∗ · · · ∗ a (n times)
 This suggests simply computing a^n by multiplying 1 by a, n times.
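A minimal Python sketch of this brute-force computation (the function name is illustrative):

def brute_force_power(a, n):
    # Multiply 1 by a, n times: a^n by the definition of exponentiation.
    result = 1
    for _ in range(n):
        result *= a
    return result

print(brute_force_power(2, 10))  # 1024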
Brute Force Approach: Selection sort

Selection Sort:
 We start selection sort by scanning the entire given list to find its
smallest element and exchange it with the first element, putting the
smallest element in its final position in the sorted list.
 Then we scan the list, starting with the second element, to find the
smallest among the last n − 1 elements and exchange it with the second
element, putting the second smallest element in its final position.
 Generally, on the ith pass through the list, the algorithm searches
for the smallest item among the last n − i elements and swaps it
with A[i].
Brute Force Approach: Selection sort
Selection Sort:
 Each pass puts one more element in its final position; after n − 1
passes, the list is sorted.

Example:
|89 45 68 90 29 34 17
17 | 45 68 90 29 34 89
17 29 | 68 90 45 34 89
17 29 34 | 90 45 68 89
17 29 34 45 | 90 68 89
17 29 34 45 68 | 90 89
17 29 34 45 68 89 | 90
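A Python sketch of selection sort matching the description above (names are illustrative):

def selection_sort(a):
    # On pass i, find the smallest element of a[i..n-1] and swap it into a[i].
    n = len(a)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:      # basic operation: key comparison
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]
    return a

print(selection_sort([89, 45, 68, 90, 29, 34, 17]))
# [17, 29, 34, 45, 68, 89, 90]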
Brute Force Approach: Selection sort
Selection Sort:
 The analysis of selection sort is straightforward. The input size is
given by the number of elements n; the basic operation is the key
comparison A[j ] < A[min].
 The number of times it is executed depends only on the array size and
is given by the sum C(n) = Σ (i = 0 to n − 2) Σ (j = i + 1 to n − 1) 1 = n(n − 1)/2.

 Thus, selection sort is an O(n²) algorithm on all inputs. Its space
complexity is O(1): it needs only a constant amount of extra memory
(a temp variable for swapping).
Brute Force Approach: Bubble Sort:
Bubble Sort:
 Another brute-force application to the sorting problem is to
compare adjacent elements of the list and exchange them if they are
out of order.
 By doing it repeatedly, we end up "bubbling up" the largest
element to the last position on the list.
 The next pass bubbles up the second largest element, and so on
until, after n - 1 passes, the list is sorted.
Brute Force Approach: Bubble Sort:
Bubble Sort:
Example: III-PASS
45  68 29 34 17 89 90
45 68  29 34 17 89 90
45 29 68  34 17 89 90
45 29 34 68  17 89 90
45 29 34 17 68  89 90
45 29 34 17 68 89  90
45 29 34 17 68 89 90
Brute Force Approach: Bubble Sort:
Bubble Sort:
Example: IV-PASS
45  29 34 17 68 89 90
29 45  34 17 68 89 90
29 34 45  17 68 89 90
29 34 17 45  68 89 90
29 34 17 45 68  89 90
29 34 17 45 68 89  90
29 34 17 45 68 89 90
Brute Force Approach: Bubble Sort:
Bubble Sort:
Example: V-PASS
29  34 17 45 68 89 90
29 34  17 45 68 89 90
29 17 34  45 68 89 90
29 17 34 45  68 89 90
29 17 34 45 68  89 90
29 17 34 45 68 89  90
29 17 34 45 68 89 90
Brute Force Approach: Bubble Sort:
Bubble Sort:
Example: VI-PASS
29  17 34 45 68 89 90
17 29  34 45 68 89 90
17 29 34  45 68 89 90
17 29 34 45  68 89 90
17 29 34 45 68  89 90
17 29 34 45 68 89  90
17 29 34 45 68 89 90
Brute Force Approach: Bubble Sort:

for i = 1 to n−1
    for j = 0 to n−i−1
        if a[j] > a[j+1]
            temp = a[j]
            a[j] = a[j+1]
            a[j+1] = temp
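The same pseudocode as a runnable Python sketch:

def bubble_sort(a):
    # Pass i bubbles the i-th largest element up to its final position.
    n = len(a)
    for i in range(1, n):              # passes 1 .. n-1
        for j in range(n - i):         # j = 0 .. n-i-1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(bubble_sort([89, 45, 68, 90, 29, 34, 17]))
# [17, 29, 34, 45, 68, 89, 90]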
Brute Force Approach: Bubble Sort
Bubble Sort:
• We have two loops: the outer loop, which runs n − 1 times (i = 1 to
n − 1), and the inner loop, which iterates over the unsorted part of
the array (j = 0 to n − i − 1).
• When pass = 1, j ranges from 0 to (7 − 1 − 1), i.e., 0 to 5, so 6
comparisons take place.
• When pass = 2, j ranges from 0 to (7 − 2 − 1), i.e., 0 to 4, so 5
comparisons take place.

• And so on…
Brute Force Approach: Bubble Sort:
Brute Force Approach:
Sequential search
Sequential search
 We have already encountered a brute-force algorithm for the
general searching problem: it is called sequential search.
 To repeat, the algorithm simply compares successive elements of a
given list with a given search key until either a match is
encountered (successful search) or the list is exhausted without
finding a match (unsuccessful search).
 A simple extra trick is often employed in implementing sequential
search: if we append the search key to the end of the list, the search
for the key will have to be successful, and therefore we can
eliminate a check for the list's end on each iteration of the
algorithm.
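A Python sketch of the sentinel version described above (returning -1 for an unsuccessful search is an illustrative choice):

def sequential_search(a, key):
    # Append the key as a sentinel so the scan needs no end-of-list check.
    a = a + [key]
    i = 0
    while a[i] != key:
        i += 1
    return i if i < len(a) - 1 else -1   # -1 signals an unsuccessful search

print(sequential_search([89, 45, 68, 90, 29], 90))  # 3
print(sequential_search([89, 45, 68, 90, 29], 7))   # -1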
Brute Force Approach:
Sequential search
Example
Brute Force Approach:
String Matching
String Matching
 A brute-force algorithm for the string-matching problem is quite
obvious: align the pattern against the first m characters of the text
and start matching the corresponding pairs of characters from left to
right until either all m pairs of the characters match (then the
algorithm can stop) or a mismatching pair is encountered.
 In the latter case, shift the pattern one position to the right and
resume character comparisons, starting again with the first
character of the pattern and its counterpart in the text.
Brute Force Approach:
String Matching
String Matching

 Given a string of n characters called the text and a string of m
characters (m ≤ n) called the pattern, find a substring of the text
that matches the pattern.
 To put it more precisely, we want to find i, the index of the
leftmost character of the first matching substring in the text.
 Note that the last position in the text that can still begin a matching
substring is n − m; beyond that position, there are not enough
characters to match the entire pattern, so the algorithm need not
make any comparisons there.
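A Python sketch of brute-force string matching (the sample strings are illustrative):

def brute_force_match(text, pattern):
    # Try every alignment i = 0 .. n-m; return the leftmost match, or -1.
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1
        if j == m:                     # all m pairs of characters matched
            return i
    return -1

print(brute_force_match("NOBODY_NOTICED_HIM", "NOT"))  # 7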
Brute Force Approach:
String Matching
String Matching Algorithms:

• The Naive String Matching Algorithm


• The Rabin-Karp-Algorithm
• The Knuth-Morris-Pratt (KMP) Algorithm
• The Boyer-Moore Algorithm
• Horspool’s Algorithm
Brute Force Approach:
String Matching
String Matching

Example
Exhaustive Search
• Exhaustive search is simply a brute-force approach to combinatorial
problems.
• It suggests generating each and every element of the problem
domain, selecting those of them that satisfy all the constraints, and
then finding a desired element.

• The three important problems:


1. The traveling salesman problem
2. The knapsack problem and
3. The assignment problem.
Exhaustive Search
Travelling Salesman problem

• The problem asks to find the shortest tour through a given set of n
cities that visits each city exactly once before returning to the city
where it started.
• The problem can be conveniently modeled by a weighted graph,
with the graph’s vertices representing the cities and the edge
weights specifying the distances.
• Then the problem can be stated as the problem of finding the
shortest Hamiltonian circuit of the graph.
• A Hamiltonian circuit is defined as a cycle that passes through all
the vertices of the graph exactly once (It is named after the Irish
mathematician Sir William Rowan Hamilton).
Exhaustive Search
Travelling Salesman problem

• It is easy to see that a Hamiltonian circuit can also be defined as a
sequence of n + 1 adjacent vertices v_i0, v_i1, ..., v_in−1, v_i0, where the first
vertex of the sequence is the same as the last one and all the other n − 1
vertices are distinct.
• Further, we can assume, with no loss of generality, that all circuits start
and end at one particular vertex.
• Thus, we can get all the tours by generating all the permutations of the
n − 1 intermediate cities, computing the tour lengths, and finding
the shortest among them.
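An exhaustive-search sketch in Python: generate the permutations of the intermediate cities and keep the shortest tour (the distance matrix is an illustrative instance):

from itertools import permutations

def tsp_brute_force(dist):
    # Fix city 0 as start/end; try all (n-1)! orderings of the other cities.
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[k]][tour[k + 1]] for k in range(n))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

d = [[0, 2, 5, 7],
     [2, 0, 8, 3],
     [5, 8, 0, 1],
     [7, 3, 1, 0]]
print(tsp_brute_force(d))  # ((0, 1, 3, 2, 0), 11)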
Exhaustive Search
Travelling Salesman problem
Exhaustive Search
Knapsack Problem
• Given n items of known weights w1, w2,...,wn and values v1, v2,...,vn
and a knapsack of capacity W, find the most valuable subset of the
items that fit into the knapsack.
• The exhaustive-search approach to this problem leads to generating all
the subsets of the set of n items given, computing the total weight of
each subset in order to identify feasible subsets (i.e., the ones with the
total weight not exceeding the knapsack capacity), and finding a subset
of the largest value among them.
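A Python sketch of this exhaustive search (the instance data are illustrative):

from itertools import combinations

def knapsack_brute_force(weights, values, capacity):
    # Generate all 2^n subsets and keep the most valuable feasible one.
    n = len(weights)
    best_value, best_subset = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            w = sum(weights[i] for i in subset)
            if w <= capacity:                      # feasible subset
                v = sum(values[i] for i in subset)
                if v > best_value:
                    best_value, best_subset = v, subset
    return best_subset, best_value

print(knapsack_brute_force([7, 3, 4, 5], [42, 12, 40, 25], 10))
# ((2, 3), 65): the items of weight 4 and 5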
Exhaustive Search
Knapsack Problem
• Since the number of subsets of an n-element set is 2^n, the exhaustive
search leads to an Ω(2^n) algorithm, no matter how efficiently individual
subsets are generated.
• In fact, these two problems (traveling salesman and knapsack) are the
best-known examples of so-called NP-hard problems, which most people
believe cannot be solved in polynomial time.
• No polynomial-time algorithm is known for any NP-hard problem.
Exhaustive Search
Assignment Problem
• There are n people who need to be assigned to execute n jobs, one
person per job.
• That is, each person is assigned to exactly one job and each job is
assigned to exactly one person.
• The cost that would accrue if the ith person is assigned to the jth job is
a known quantity C[i, j] for each pair i, j = 1, 2, ..., n.
• The problem is to find an assignment with the minimum total cost.
Exhaustive Search
Assignment Problem
• It is easy to see that an instance of the assignment problem is
completely specified by its cost matrix C.
• In terms of this matrix, the problem is to select one element in each
row of the matrix so that all selected elements are in different columns
and the total sum of the selected elements is the smallest possible.
• Note that no obvious strategy for finding a solution works here.
• For example, we cannot select the smallest element in each row,
because the smallest elements may happen to be in the same column.
• In fact, the smallest element in the entire matrix need not be a
component of an optimal solution.
• Thus, opting for the exhaustive search may appear as an unavoidable
evil.
Exhaustive Search
Assignment Problem
• We can describe feasible solutions to the assignment problem as n-
tuples j1,...,jn in which the ith component, i = 1,...,n, indicates the
column of the element selected in the ith row (i.e., the job number
assigned to the ith person).
• For example, for the cost matrix above, ⟨2, 3, 4, 1⟩ indicates the
assignment of Person 1 to Job 2, Person 2 to Job 3, Person 3 to Job 4,
and Person 4 to Job 1.
• We generate all n! possible job assignments and, for each such
assignment, compute its total cost and keep the least expensive
assignment.
• Since the solutions are the permutations of the n jobs, the complexity is
O(n!).
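A Python sketch of the exhaustive search over all n! assignments (the 4 × 4 cost matrix below is an illustrative instance consistent with the example assignment ⟨2, 3, 4, 1⟩ above):

from itertools import permutations

def assignment_brute_force(cost):
    # perm[i] is the job assigned to person i; try every permutation.
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

c = [[9, 2, 7, 8],
     [6, 4, 3, 7],
     [5, 8, 1, 8],
     [7, 6, 9, 4]]
print(assignment_brute_force(c))  # ((1, 0, 2, 3), 13), i.e. jobs 2, 1, 3, 4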
Exhaustive Search
• The term “exhaustive search” can also be applied to two very
important algorithms that systematically process all vertices and edges
of a graph.
• These two traversal algorithms are

1. Depth-first search (DFS) and


2. Breadth-first search (BFS).

• These algorithms have proved to be very useful for many applications


involving graphs in artificial intelligence and operations research.
Exhaustive Search
Depth-first search (DFS)
• Depth-first search starts a graph’s traversal at an arbitrary vertex by marking it
as visited.
• On each iteration, the algorithm proceeds to an unvisited vertex that is adjacent
to the one it is currently in.
• If there are several such vertices, a tie can be resolved arbitrarily.
• As a practical matter, which of the adjacent unvisited candidates is chosen is
dictated by the data structure representing the graph.
• In our examples, we always break ties by the alphabetical order of the vertices.
• This process continues until a dead end—a vertex with no adjacent unvisited
vertices— is encountered
• At a dead end, the algorithm backs up one edge to the vertex it came from and
tries to continue visiting unvisited vertices from there.
• The algorithm eventually halts after backing up to the starting vertex, with the
latter being a dead end.
Exhaustive Search
Depth-first search (DFS)
• We use a stack to trace the operation of depth-first search.
• We push a vertex onto the stack when the vertex is reached for the first
time (i.e., the visit of the vertex starts)
• And we pop a vertex off the stack when it becomes a dead end (i.e., the
visit of the vertex ends).
• A depth-first search traversal is accompanied by constructing the
depth-first search forest.
Exhaustive Search
Depth-first search (DFS)
• The starting vertex of the traversal serves as the root of the first tree in
such a forest.
• Whenever a new unvisited vertex is reached for the first time, it is
attached as a child to the vertex from which it is being reached.
• Such an edge is called a tree edge because the set of all such edges forms
a forest.
• The algorithm may also encounter an edge leading to a previously visited
vertex other than its immediate predecessor (i.e., its parent in the tree).
• Such an edge is called a back edge because it connects a vertex to its
ancestor, other than the parent, in the depth-first search forest
Exhaustive Search
Depth-first search (DFS)
 For the adjacency matrix representation, the traversal time is in Θ(|V|²);
for the adjacency list representation, it is in Θ(|V| + |E|), where |V| and |E|
are the number of the graph’s vertices and edges, respectively.
Exhaustive Search
Depth-first search (DFS)
• The pseudo code is as follows:
Exhaustive Search
Depth-first search (DFS)
• The pseudo code is as follows: (contd…)
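As a sketch, a recursive Python version of DFS following the description above (the example graph is an assumed reconstruction of the one traversed on a later slide):

def dfs(graph):
    # graph: dict mapping each vertex to a list of its neighbours.
    visited, order = set(), []

    def explore(v):
        visited.add(v)
        order.append(v)                  # v is "pushed": reached for the first time
        for w in sorted(graph[v]):       # break ties in increasing order
            if w not in visited:
                explore(w)
        # returning here corresponds to popping v off the stack (dead end)

    for v in sorted(graph):              # restart to cover all components
        if v not in visited:
            explore(v)
    return order

g = {1: [2, 3], 2: [1, 4], 3: [1, 7], 4: [2, 5, 6],
     5: [4, 6], 6: [4, 5], 7: [3]}
print(dfs(g))  # [1, 2, 4, 5, 6, 3, 7]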
Exhaustive Search
Depth-first search (DFS)

[Figure: a graph on vertices 1–7 traversed depth-first, with its stack (LIFO) trace]
DFS: 1 2 4 5 6 3 7
Exhaustive Search
Breadth-first search (BFS)
 It proceeds in a concentric manner by visiting first all the vertices that
are adjacent to a starting vertex, then all unvisited vertices two edges
apart from it, and so on, until all the vertices in the same connected
component as the starting vertex are visited.
 It is convenient to use a queue to trace the operation of breadth-first
search.
 The queue is initialized with the traversal’s starting vertex, which is
marked as visited.
 On each iteration, the algorithm identifies all unvisited vertices that are
adjacent to the front vertex, marks them as visited, and adds them to the
queue; after that, the front vertex is removed from the queue.
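A Python sketch of queue-based BFS as just described (the example graph is the same assumed reconstruction used in the DFS sketch):

from collections import deque

def bfs(graph, start):
    # Visit vertices in concentric layers around the start vertex.
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        v = queue.popleft()              # remove the front vertex
        order.append(v)
        for w in sorted(graph[v]):       # mark and enqueue unvisited neighbours
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return order

g = {1: [2, 3], 2: [1, 4], 3: [1, 7], 4: [2, 5, 6],
     5: [4, 6], 6: [4, 5], 7: [3]}
print(bfs(g, 1))  # [1, 2, 3, 4, 7, 5, 6]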
Exhaustive Search
Breadth-first search (BFS)
 The traversal’s starting vertex serves as the root of the first tree in such a
forest.
 Whenever a new unvisited vertex is reached for the first time, the vertex
is attached as a child to the vertex it is being reached from with an edge
called a tree edge.
 If an edge leading to a previously visited vertex other than its immediate
predecessor (i.e., its parent in the tree) is encountered, the edge is noted
as a cross edge.
Exhaustive Search
Breadth-first search (BFS)
 Breadth-first search has the same efficiency as depth-first search: it is in
Θ(|V|²) for the adjacency matrix representation and in Θ(|V| + |E|) for the
adjacency list representation.
 Unlike depth-first search, it yields a single ordering of vertices because
the queue is a FIFO (first-in first-out) structure and hence the order in
which vertices are added to the queue is the same order in which they are
removed from it.
 BFS can be used to check connectivity and acyclicity of a graph,
essentially in the same manner as DFS can.
Exhaustive Search
Step1: Initially queue and visited arrays are empty.
Exhaustive Search
Step2: Push node 0 into queue and mark it visited.
Exhaustive Search
Step 3: Remove node 0 from the front of queue and visit the unvisited
neighbors and push them into queue.
Exhaustive Search
Step 4: Remove node 1 from the front of queue and visit the unvisited
neighbors and push them into queue.
Exhaustive Search
Step 5: Remove node 2 from the front of queue and visit the unvisited
neighbors and push them into queue.
Exhaustive Search
Step 6: Remove node 3 from the front of queue and visit the unvisited
neighbours and push them into queue.
As we can see, every neighbour of node 3 is already visited, so move to the
next node at the front of the queue.
Exhaustive Search
Step 7: Remove node 4 from the front of queue and visit the unvisited
neighbours and push them into queue.
As we can see, every neighbour of node 4 is already visited, so move to the
next node at the front of the queue.

Now the queue becomes empty, so the process of iteration terminates.


Exhaustive Search
Breadth-first search (BFS)

[Figure: the same graph traversed breadth-first, with its queue (FIFO) trace]
BFS: 1 2 3 4 7 5 6
Exhaustive Search
Breadth-first search (BFS)
Chapter 2
Decrease and Conquer:
• Insertion sort
• Topological sorting
• Algorithms for generating combinatorial objects
• Decrease by-a-constant-factor algorithms
Decrease and Conquer
• The decrease-and-conquer technique is based on exploiting the
relationship between a solution to a given instance of a problem and a
solution to its smaller instance.
• Once such a relationship is established, it can be exploited either top
down or bottom up.
• The former leads naturally to a recursive implementation.
• The bottom-up variation is usually implemented iteratively, starting with
a solution to the smallest instance of the problem; it is sometimes called
the incremental approach.
Decrease and Conquer
• There are three major variations of decrease-and-conquer
1. decrease by a constant
2. decrease by a constant factor
3. variable size decrease
Decrease by a constant
• In the decrease-by-a-constant
variation, the size of an instance is
reduced by the same constant on each
iteration of the algorithm.
• Typically, this constant is equal to 1

Exponentiation problem of computing a^n
• a ≠ 0 and n is a nonnegative integer.
• The relationship between a solution to an
instance of size n and an instance of
size n − 1 is given by the formula
a^n = a^(n−1) · a
• The relationship can be exploited either
top down (recursively) or bottom up (iteratively).
Decrease by a constant factor
• The decrease-by-a-constant-factor technique
suggests reducing a problem instance by the
same constant factor on each iteration of the
algorithm.
• In most applications, this constant
factor is equal to two.
Exponentiation problem
• If the instance of size n is to compute a^n, the
instance of half its size is to compute a^(n/2),
with the relationship between the two:
a^n = (a^(n/2))².
• If n is odd, we have to compute a^(n−1) by using
the rule for even-valued
exponents and then multiply the result by a.
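A Python sketch of this decrease-by-half rule (recursive, with base case a^0 = 1):

def fast_power(a, n):
    # a^n = (a^(n/2))^2 for even n; for odd n, apply the even rule to n-1.
    if n == 0:
        return 1
    if n % 2 == 0:
        half = fast_power(a, n // 2)
        return half * half
    return fast_power(a, n - 1) * a

print(fast_power(2, 10))  # 1024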
Variable size decrease
• The size-reduction pattern varies from one iteration of an
algorithm to another.
• Euclid’s algorithm for computing the greatest common divisor
provides a good example of such a situation.

gcd(m, n) = gcd(n, m mod n)
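A Python sketch of Euclid's algorithm; here the second argument decreases by a variable amount on each iteration:

def gcd(m, n):
    while n != 0:
        m, n = n, m % n      # gcd(m, n) = gcd(n, m mod n)
    return m

print(gcd(60, 24))  # 12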


Insertion sort (decrease by one)
• We need to find an appropriate position for A[n − 1] among the
sorted elements and insert it there.
• This is usually done by scanning the sorted subarray from right to
left until the first element smaller than or equal to A[n − 1] is
encountered, and inserting A[n − 1] right after that element.
• The resulting algorithm is called straight insertion sort or simply
insertion sort.
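A Python sketch of straight insertion sort (names are illustrative):

def insertion_sort(a):
    # Insert a[i] into the sorted subarray a[0..i-1], scanning right to left.
    for i in range(1, len(a)):
        v = a[i]
        j = i - 1
        while j >= 0 and a[j] > v:   # basic operation: key comparison A[j] > v
            a[j + 1] = a[j]          # shift larger elements one position right
            j -= 1
        a[j + 1] = v
    return a

print(insertion_sort([89, 45, 68, 90, 29, 34, 17]))
# [17, 29, 34, 45, 68, 89, 90]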
Insertion sort (decrease by one)

• Insertion sort is an efficient
algorithm for sorting a small number of
elements.
• Insertion sort works the way many
people sort a hand of playing cards.
• We start with an empty left hand and
the cards face down on the table.
• We then remove one card at a time
from the table and insert it into the
correct position in the left hand.
• To find the correct position for a card,
we compare it with each of the cards
already in the hand, from right to left.
Insertion sort (decrease by one)
First Pass:
Insertion sort (decrease by one)
Sort the numbers: 89, 45, 68, 90, 29, 34, 17
Insertion sort (decrease by one)

Time Complexity of Insertion Sort


• The worst-case time complexity of the Insertion sort is O(N^2)
• The average case time complexity of the Insertion sort is O(N^2)
• The time complexity of the best case is O(N).

Space Complexity of Insertion Sort


• The auxiliary space complexity of Insertion Sort is O(1)
Insertion sort (decrease by one)

• The basic operation of the algorithm is the key comparison A[j] > v.
Although insertion sort is clearly based on a recursive idea, it is
implemented bottom up (iteratively), because the iterative version is
almost certainly faster in an actual computer implementation.
• In the best case, the comparison A[j] > v is executed only once on
every iteration of the outer loop, so the number of key comparisons
for such an input is C_best(n) = n − 1 ∈ Θ(n).
• The best case happens if and only if A[i − 1] ≤ A[i] for every
i = 1, ..., n − 1, i.e., if the input array is already sorted in nondecreasing
order.
Insertion sort (decrease by one)

Characteristics

• Insertion sort is used when number of elements is small.


• It can also be useful when the input array is almost sorted, and only a
few elements are misplaced in a complete big array.
• Insertion sort is adaptive in nature, i.e. it is appropriate for data sets
that are already partially sorted.
Topological Sorting
Digraph
• A directed graph, or digraph for short, is a graph with directions
specified for all its edges.

• The adjacency matrix and adjacency lists


are still two principal means of
representing a digraph.
• There are two notable differences
between undirected and directed graphs
in representing them

1. The adjacency matrix of a directed


graph does not have to be symmetric
2. An edge in a directed graph has just
one (not two) corresponding nodes in
the digraph’s adjacency lists
Topological Sorting
Directed Acyclic Graph (DAG)
• A directed cycle in a digraph is a sequence of three or more of its
vertices that starts and ends with the same vertex and in which every
vertex is connected to its immediate predecessor by an edge directed
from the predecessor to the successor
• For example a, b, a is a directed cycle
in the digraph

• Conversely, if a DFS forest of a digraph has no back edges, the


digraph is a DAG, an acronym for directed acyclic graph.
Topological Sorting
• In computer science, a topological sort or topological ordering of
a directed graph is a linear ordering of its vertices such that
• For every directed edge u → v from vertex u to vertex v, u comes
before v in the ordering.
• For instance, the vertices of the graph may represent tasks to be
performed, and the edges may represent constraints that one task
must be performed before another
• A topological ordering is possible if and only if the graph has
no directed cycles, that is, if it is a directed acyclic graph (DAG).
EXAMPLE
Five courses C1–C5 modelled as a digraph; a DFS traversal is traced with a
stack (LIFO). The stack grows C1, C3, C4, C5; these pop off in reverse, and
C2 is then pushed and popped.
The popping-off order is C5, C4, C3, C1, C2; reversing it gives the
topological sorting:
C2 → C1 → C3 → C4 → C5
Decrease By One Technique
• Repeatedly identify a source in the remaining digraph, i.e., a vertex
with no incoming edges, and delete it along with all the edges
outgoing from it.
• If there are several sources, break the tie arbitrarily. If there are
none, stop, because the problem cannot be solved.
• The order in which the vertices are deleted yields a solution to the
topological sorting problem.
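A Python sketch of this source-removal technique (the digraph below is an assumed reconstruction of the course example above):

def topological_sort(graph):
    # graph: dict mapping each vertex to the list of vertices it points to.
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] += 1
    sources = [v for v in graph if indegree[v] == 0]
    order = []
    while sources:
        v = sources.pop()            # several sources: tie broken arbitrarily
        order.append(v)
        for w in graph[v]:           # deleting v removes its outgoing edges
            indegree[w] -= 1
            if indegree[w] == 0:
                sources.append(w)
    return order if len(order) == len(graph) else None  # None: not a DAG

g = {"C1": ["C3"], "C2": ["C3"], "C3": ["C4"], "C4": ["C5"], "C5": []}
print(topological_sort(g))  # ['C2', 'C1', 'C3', 'C4', 'C5']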
Chapter 3
Divide and conquer
• Merge sort
• Quick sort
• Binary tree traversals and related properties
• Strassen’s Matrix Multiplication
Divide and Conquer
• Divide-and-conquer algorithms work according to the following
general plan
1. A problem is divided into several subproblems of the same type,
ideally of about equal size.
2. The subproblems are solved (typically recursively, though
sometimes a different algorithm is employed, especially when
subproblems become small enough).
3. If necessary, the solutions to the subproblems are combined to get a
solution to the original problem
Divide and Conquer
Example: computing the sum of n numbers a0, ..., an−1 by divide and
conquer: split the list into two halves, compute Sum1 (the sum of the first
half) and Sum2 (the sum of the second half), and return Sum = Sum1 + Sum2.
Divide and Conquer
• Not every divide-and-conquer algorithm is necessarily more efficient
than even a brute-force solution.
• Divide-and-conquer technique is ideally suited for parallel
computations, in which each subproblem can be solved
simultaneously by its own processor.
• Consider the most typical case, in which a problem’s instance of size n
is divided into two instances of size n/2.
Divide and Conquer

Problem of size n
[Diagram: the problem divided into b subproblems, each of size n/b]
Divide and Conquer
• As mentioned above, in the most typical case of divide-and-conquer
a problem’s instance of size n is divided into two instances of size
n/2.
• More generally, an instance of size n can be divided into b instances
of size n/b, with a of them needing to be solved. (Here, a and b are
constants; a ≥ 1 and b > 1.)
• Assuming that size n is a power of b to simplify our analysis, we get
the following recurrence for the running time T (n):
T(n) = aT(n/b) + f(n),

where f(n) is a function that accounts for the time
spent on dividing an instance of size n into
instances of size n/b and combining their
solutions.
Divide and Conquer
Merge Sort
 Merge sort is a perfect example of a successful application of the
divide-and-conquer technique.
 It sorts a given array A[0..n − 1] by dividing it into two halves
A[0..n/2 − 1] and A[n/2..n − 1], sorting each of them recursively,
and then merging the two smaller sorted arrays into a single sorted
one.
Divide and Conquer
Merge Sort example (array indices 0–7):

8 3 2 9 7 1 5 4
split: 8 3 2 9 | 7 1 5 4
split: 8 3 | 2 9 | 7 1 | 5 4
split: 8 | 3 | 2 | 9 | 7 | 1 | 5 | 4
merge: 3 8 | 2 9 | 1 7 | 4 5
merge: 2 3 8 9 | 1 4 5 7
merge: 1 2 3 4 5 7 8 9
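A Python sketch of merge sort following this plan (returns a new sorted list):

def merge_sort(a):
    # Divide into two halves, sort each recursively, then merge.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # one key comparison per step
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])                   # append the unprocessed tail
    merged.extend(right[j:])
    return merged

print(merge_sort([8, 3, 2, 9, 7, 1, 5, 4]))  # [1, 2, 3, 4, 5, 7, 8, 9]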
Divide and Conquer
Merge Sort
• C_merge(n) is the number of key comparisons performed during the
merging stage.
• At each step, exactly one comparison is made, after which the total
number of elements in the two arrays still needing to be processed is
reduced by 1.
• In the worst case, neither of the two arrays becomes empty before the
other one contains just one element, so C_merge(n) = n − 1 in the worst
case.
Divide and Conquer
Merge Sort
Divide and Conquer
Quick Sort
• Quicksort is the other important sorting algorithm that is based on
the divide-and-conquer approach.
• Unlike merge sort, which divides its input elements according to
their position in the array, quicksort divides them according to their
value.
• A partition is an arrangement of the array’s elements so that all the
elements to the left of some element A[s] are less than or equal to
A[s], and all the elements to the right of A[s] are greater than or
equal to it.
Divide and Conquer
Quick Sort
• After a partition is achieved, A[s] will be in its final position in the
sorted array, and we can continue sorting the two subarrays to the
left and to the right of A[s] independently.
• The entire work in Quicksort happens in the division stage, with no
work required to combine the solutions to the subproblems.
• We start by selecting a pivot—an element with respect to whose
value we are going to divide the subarray.
• The simplest strategy is to select the subarray’s first element:
p = A[l].
Divide and Conquer
Quick Sort
• Scan the subarray from both ends, comparing the subarray’s
elements to the pivot.
• The left-to-right scan, denoted below by index pointer i, starts with
the second element.
• Since we want elements smaller than the pivot to be in the left part
of the subarray, this scan skips over elements that are smaller than
the pivot and stops upon encountering the first element greater than
or equal to the pivot.
• The right-to-left scan, denoted below by index pointer j, starts with
the last element of the subarray.
• Since we want elements larger than the pivot to be in the right part of
the subarray, this scan skips over elements that are larger than the
pivot and stops on encountering the first element smaller than or
equal to the pivot
Divide and Conquer
Quick Sort
EXAMPLE: partition 5 3 1 9 8 2 4 7, with the first element 5 as the pivot.

After partitioning: 2 3 1 4 | 5 | 8 9 7

All elements ≤ 5 end up to the left of the partitioning element and all
elements ≥ 5 to its right; the two subarrays are then partitioned
recursively.
Divide and Conquer
Quick Sort
After both scans stop, three situations may arise, depending on whether or
not the scanning indices have crossed.
SCENARIO-1
If scanning indices i & j have not crossed, i.e., i < j, we simply
exchange A[i] and A[j] and resume the scans by incrementing i and
decrementing j, respectively
Divide and Conquer
Quick Sort

SCENARIO-2
If the scanning indices have crossed over, i.e., i > j, we will have
partitioned the subarray after exchanging the pivot with A[j].
Divide and Conquer
Quick Sort

SCENARIO-3
If the scanning indices stop while pointing to the same element, i.e.,
i = j, the value they are pointing to must be equal to p. Thus, we have
the subarray partitioned, with the split position s = i = j :

We can combine the last case with the case of crossed-over indices (i >
j ) by exchanging the pivot with A[j ] whenever i ≥ j .
Divide and Conquer
Quick Sort
Pseudocode implementing this partitioning procedure.
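A Python sketch of the partitioning procedure described above (first element as pivot, two opposite scans), together with a quicksort driver:

def hoare_partition(a, l, r):
    # Partition a[l..r] around pivot p = a[l]; return the split position s.
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i < r and a[i] < p:    # left-to-right scan
            i += 1
        j -= 1
        while a[j] > p:              # right-to-left scan
            j -= 1
        if i >= j:                   # scanning indices have crossed (or met)
            a[l], a[j] = a[j], a[l]  # put the pivot in its final position
            return j
        a[i], a[j] = a[j], a[i]      # i < j: exchange and resume the scans

def quicksort(a, l=0, r=None):
    if r is None:
        r = len(a) - 1
    if l < r:
        s = hoare_partition(a, l, r)
        quicksort(a, l, s - 1)       # subarray of elements <= pivot
        quicksort(a, s + 1, r)       # subarray of elements >= pivot
    return a

print(quicksort([5, 3, 1, 9, 8, 2, 4, 7]))  # [1, 2, 3, 4, 5, 7, 8, 9]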
Divide and Conquer
Quick Sort
Time Complexity
 The average time complexity of quick sort is O(N log(N)).
 The best case complexity is O(N log(N))
 The worst case complexity is O(N^2)
Divide and Conquer
Quick Sort

Advantages of Quick Sort:


 It is a divide-and-conquer algorithm that makes it easier to solve
problems.
 It is efficient on large data sets.
 It has a low overhead, as it only requires a small amount of memory
to function.
Divide and Conquer
Quick Sort

Disadvantages of Quick Sort:


 It has a worst-case time complexity of O(N²), which occurs when
the pivot is chosen poorly.
 It is not a good choice for small data sets.
 It is not a stable sort, meaning that if two elements have the same
key, their relative order will not be preserved in the sorted output in
case of quick sort, because here we are swapping elements
according to the pivot’s position (without considering their original
positions).
Divide and Conquer
Binary Tree Traversals and related problems:
 A binary tree T is defined as a finite set of nodes that is either empty
or consists of a root and two disjoint binary trees TL and TR called,
respectively, the left and right subtree of the root.
Divide and Conquer
Binary Tree Traversals and related problems:
 As an example, let us consider a recursive algorithm for computing
the height of a binary tree.
 Recall that the height is defined as the length of the longest path
from the root to a leaf.
 Hence, it can be computed as the maximum of the heights of the
root’s left and right subtrees plus 1. (We have to add 1 to account
for the extra level of the root.)
 Also note that it is convenient to define the height of the empty tree
as −1.
Divide and Conquer
Binary Tree Traversals and related problems:
 We measure the problem’s instance size by the number of nodes
n(T) in a given binary tree T.
 Recurrence relation for A(n(T)), the number of additions made in
computing the height:
A(n(T)) = A(n(T_left)) + A(n(T_right)) + 1 for n(T) > 0,
A(0) = 0.
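A Python sketch of the height computation (the Node class is an illustrative representation):

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def height(node):
    # Height of the empty tree is -1; otherwise 1 + max of subtree heights.
    if node is None:
        return -1
    return 1 + max(height(node.left), height(node.right))

root = Node(1, Node(2, Node(4)), Node(3))
print(height(root))  # 2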
Divide and Conquer
Binary Tree Traversals and related problems:
 It is useful to draw the tree’s extension by replacing the empty
subtrees with special nodes.
 The extra nodes (shown by little squares in Figure 5.5) are called
external; the original nodes (shown by little circles) are called
internal.

 After checking Figure 5.5 and a few similar examples, it is easy to


hypothesize that the number of external nodes x is always 1 more
than the number of internal nodes n: x = n + 1
Divide and Conquer
Binary Tree Traversals and related problems:
 The three classic traversals of a binary tree are preorder, inorder, and
postorder.
 All three traversals visit nodes of a binary tree recursively, i.e., by
visiting the tree’s root and its left and right subtrees.
 They differ only by the timing of the root’s visit:
 In the preorder traversal, the root is visited before the left and
right subtrees are visited (in that order).
 In the inorder traversal, the root is visited after visiting its left
subtree but before visiting the right subtree.
 In the postorder traversal, the root is visited after visiting the
left and right subtrees (in that order).
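Python sketches of the three traversals, reusing the illustrative Node class from the height sketch above:

def preorder(node, visit):
    if node is not None:
        visit(node.key)              # root first
        preorder(node.left, visit)
        preorder(node.right, visit)

def inorder(node, visit):
    if node is not None:
        inorder(node.left, visit)
        visit(node.key)              # root between the two subtrees
        inorder(node.right, visit)

def postorder(node, visit):
    if node is not None:
        postorder(node.left, visit)
        postorder(node.right, visit)
        visit(node.key)              # root last

# With root = Node(1, Node(2, Node(4)), Node(3)) as above:
# preorder: 1 2 4 3    inorder: 4 2 1 3    postorder: 4 2 3 1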
Divide and Conquer
Binary Tree Traversals and related problems:
Divide and Conquer
Strassen’s Matrix Multiplication
 The principal insight of the algorithm lies in the discovery
that we can find the product C of two 2 × 2 matrices A and
B with just seven multiplications as opposed to the eight
required by the brute-force algorithm.
 This is accomplished by using the following formulas
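For reference, with A = [a_ij], B = [b_ij], and C = [c_ij] (i, j ∈ {0, 1}), the seven products and the four entries of C in Strassen’s method are:

m1 = (a00 + a11)(b00 + b11)
m2 = (a10 + a11) b00
m3 = a00 (b01 − b11)
m4 = a11 (b10 − b00)
m5 = (a00 + a01) b11
m6 = (a10 − a00)(b00 + b01)
m7 = (a01 − a11)(b10 + b11)

c00 = m1 + m4 − m5 + m7
c01 = m3 + m5
c10 = m2 + m4
c11 = m1 + m3 − m2 + m6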
Divide and Conquer
Strassen’s Matrix Multiplication
 In linear algebra, the Strassen algorithm is named after
Volker Strassen.
 It is faster than the standard matrix multiplication
algorithm for large matrices, with a better asymptotic
complexity.
Divide and Conquer
Strassen’s Matrix Multiplication

• Thus, to multiply two 2 × 2 matrices, Strassen’s algorithm makes seven


multiplications and 18 additions/subtractions, whereas the brute-force
algorithm requires eight multiplications and four additions.
Algorithms for generating combinatorial objects
 The most important types of combinatorial objects are
permutations, combinations, and subsets of a given set.
 The advantage of this order of generating permutations
stems from the fact that it satisfies the minimal-change
requirement: each permutation can be obtained from its
immediate predecessor by exchanging just two elements.
Algorithms for generating combinatorial objects
 It is possible to get the same ordering of permutations of n elements
without explicitly generating permutations for smaller values of n.
 It can be done by associating a direction with each element k in a
permutation.
 We indicate such a direction by a small arrow written above the
element in question, e.g., →3 ←2 →4 ←1 (with the arrows placed
over the elements in the original figure).
 The element k is said to be mobile in such an arrow-marked
permutation if its arrow points to a smaller number adjacent to it.
 In the example above, 3 and 4 are mobile, while 2 and 1 are not.
Algorithms for generating combinatorial objects
Generating Subsets
 All subsets of A = {a1, ..., an} can be divided into two groups: those
that do not contain an and those that do.
 The former group is nothing but all the subsets of {a1, ..., an−1}, while
each and every element of the latter can be obtained by adding an to
a subset of {a1, ..., an−1}.
 Thus, once we have a list of all subsets of {a1, ..., an−1}, we can get all
the subsets of {a1, ..., an} by adding to the list all its elements with an
put into each of them.
 A convenient way of generating subsets directly is based on a one-to-one
correspondence between all 2^n subsets of an n-element set and all 2^n
bit strings b1, ..., bn of length n.
 The easiest way to establish such a correspondence is to assign to a
subset the bit string in which bi = 1 if ai belongs to the subset and bi
= 0 if ai does not belong to it.
Generating Subsets

 For example, for the case of n = 3, we obtain:
bit strings: 000  001   010   011      100   101      110      111
subsets:     ∅    {a3}  {a2}  {a2,a3}  {a1}  {a1,a3}  {a1,a2}  {a1,a2,a3}
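A Python sketch of subset generation via this bit-string correspondence:

def all_subsets(items):
    # Bit i of the counter k selects items[i]; k runs over all 2^n bit strings.
    n = len(items)
    for k in range(2 ** n):
        bits = format(k, "0{}b".format(n))            # e.g. '011' for n = 3
        subset = [items[i] for i in range(n) if bits[i] == "1"]
        yield bits, subset

for bits, s in all_subsets(["a1", "a2", "a3"]):
    print(bits, s)   # 000 [] ... 111 ['a1', 'a2', 'a3']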
