Unit II Brute Force Divide and Conquer Decrease and Conquer
Brute Force Method, Divide and Conquer and Decrease and Conquer
Contents
Brute force approach:
• Selection sort
• Bubble sort
• Sequential search
• String Matching
• Exhaustive search
• Breadth-First search (BFS)
• Depth-First search (DFS)
Selection Sort:
We start selection sort by scanning the entire given list to find its
smallest element and exchanging it with the first element, putting the
smallest element in its final position in the sorted list. Then we scan
the list, starting with the second element, to find the smallest among
the last n - 1 elements and exchange it with the second element, putting
the second smallest element in its final position.
Generally, on the i-th pass through the list, the algorithm searches
for the smallest item among the last n - i elements and swaps it
with A[i].
After n - 1 passes, every element is in its final position and the
list is sorted.
Example:
|89 45 68 90 29 34 17
17 | 45 68 90 29 34 89
17 29 | 68 90 45 34 89
17 29 34 | 90 45 68 89
17 29 34 45 | 90 68 89
17 29 34 45 68 | 90 89
17 29 34 45 68 89 | 90
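The procedure traced above can be sketched in Python (a minimal sketch; the function name is our own):

```python
def selection_sort(a):
    """Sort list a in place by repeatedly selecting the smallest remaining element."""
    n = len(a)
    for i in range(n - 1):             # the i-th pass fixes position i
        m = i                          # index of the smallest element found so far
        for j in range(i + 1, n):
            if a[j] < a[m]:            # basic operation: key comparison
                m = j
        a[i], a[m] = a[m], a[i]        # swap it into its final position
    return a
```

Calling `selection_sort([89, 45, 68, 90, 29, 34, 17])` reproduces the trace, yielding [17, 29, 34, 45, 68, 89, 90].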
The analysis of selection sort is straightforward. The input size is
given by the number of elements n; the basic operation is the key
comparison A[j] < A[min].
The number of times it is executed depends only on the array size and
is given by the following sum:
C(n) = sum over i = 0 to n-2 of sum over j = i+1 to n-1 of 1
     = n(n - 1)/2, which is in Θ(n²).
Brute Force Approach: Bubble Sort
Bubble Sort:
The algorithm compares adjacent elements and swaps them when they are
out of order:
for i = 1 to n-1
    for j = 0 to n-i-1
        if A[j] > A[j+1]
            temp = A[j]
            A[j] = A[j+1]
            A[j+1] = temp
• We have two loops: the outer loop, which runs n - 1 times, and the
inner loop, which iterates over the unsorted part of the array (j = 0 to
n-i-1).
• For an array of n = 7 elements: when pass = 1, j ranges from 0 to
(7-1-1), i.e., 0 to 5, so 6 comparisons take place.
• When pass = 2, j ranges from 0 to (7-2-1), i.e., 0 to 4, so 5
comparisons take place.
• And so on…
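The two nested loops described above can be sketched in Python (a minimal sketch; the function name is our own):

```python
def bubble_sort(a):
    """Sort list a in place by repeatedly swapping out-of-order adjacent pairs."""
    n = len(a)
    for i in range(n - 1):                    # pass number (0-based)
        for j in range(n - i - 1):            # scan the unsorted part of the array
            if a[j] > a[j + 1]:               # basic operation: compare neighbours
                a[j], a[j + 1] = a[j + 1], a[j]
    return a
```

Each pass bubbles the largest remaining element to the end of the unsorted part, so the inner range shrinks by one per pass.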
Brute Force Approach:
Sequential search
We have already encountered a brute-force algorithm for the
general searching problem: it is called sequential search.
To repeat, the algorithm simply compares successive elements of a
given list with a given search key until either a match is
encountered (successful search) or the list is exhausted without
finding a match (unsuccessful search).
A simple extra trick is often employed in implementing sequential
search: if we append the search key to the end of the list, the search
for the key will have to be successful, and therefore we can
eliminate a check for the list's end on each iteration of the
algorithm.
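The sentinel trick just described can be sketched in Python (a minimal sketch; the function name is our own):

```python
def sequential_search(a, key):
    """Return the index of key in a, or -1 if absent, using a sentinel."""
    a = a + [key]                       # append the key as a sentinel: the search must succeed
    i = 0
    while a[i] != key:                  # no need to check for the list's end
        i += 1
    return i if i < len(a) - 1 else -1  # stopping at the sentinel means "not found"
```

Note that `a + [key]` builds a local copy, so the caller's list is not modified.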
Brute Force Approach:
String Matching
A brute-force algorithm for the string-matching problem is quite
obvious: align the pattern against the first m characters of the text
and start matching the corresponding pairs of characters from left to
right until either all m pairs of the characters match (then the
algorithm can stop) or a mismatching pair is encountered.
In the latter case, shift the pattern one position to the right and
resume character comparisons, starting again with the first
character of the pattern and its counterpart in the text.
Given a string of n characters called the text and a string of m
characters (m ≤ n) called the pattern, find a substring of the text
that matches the pattern.
To put it more precisely, we want to find i, the index of the
leftmost character of the first matching substring in the text.
Note that the last position in the text at which a matching substring
can begin is n - m. Beyond that position, there are not enough
characters to match the entire pattern; hence, the algorithm need not
make any comparisons there.
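The brute-force matcher described above can be sketched in Python (a minimal sketch; the function name and the sample strings are our own):

```python
def brute_force_match(text, pattern):
    """Return the index of the leftmost occurrence of pattern in text, or -1."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):          # the last feasible alignment starts at n - m
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1                      # compare corresponding characters left to right
        if j == m:                      # all m pairs matched
            return i
    return -1                           # pattern shifted past every alignment
```

For example, `brute_force_match("NOBODY NOTICED HIM", "NOT")` returns 7, the index where "NOT" begins.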
Exhaustive Search
• Exhaustive search is simply a brute-force approach to combinatorial
problems.
• It suggests generating each and every element of the problem
domain, selecting those of them that satisfy all the constraints, and
then finding a desired element.
• A classic instance is the traveling salesman problem, which asks to
find the shortest tour through a given set of n cities that visits each
city exactly once before returning to the city where it started.
• The problem can be conveniently modeled by a weighted graph,
with the graph’s vertices representing the cities and the edge
weights specifying the distances.
• Then the problem can be stated as the problem of finding the
shortest Hamiltonian circuit of the graph.
• A Hamiltonian circuit is defined as a cycle that passes through all
the vertices of the graph exactly once (It is named after the Irish
mathematician Sir William Rowan Hamilton).
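The exhaustive-search approach generates every tour and keeps the cheapest. A minimal Python sketch follows; the function name and the 4-city distance matrix are made-up examples, not from the slides:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exhaustive search for the shortest Hamiltonian circuit given a distance matrix."""
    n = len(dist)
    best_tour, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):      # fix city 0 as the starting city
        tour = (0,) + perm + (0,)               # a candidate Hamiltonian circuit
        cost = sum(dist[tour[k]][tour[k + 1]] for k in range(n))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

# Hypothetical symmetric distance matrix for 4 cities
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
```

Fixing the start city cuts the search to (n - 1)! tours, which is still exponential; this is why exhaustive search is impractical for all but small n.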
Exhaustive Search
Depth-first search (DFS)
[Figure: DFS traversal of a sample graph, traced with a stack (LIFO);
visit order: 1 2 4 5 6 3 7]
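A recursive DFS can be sketched in Python; the adjacency lists below are an assumption reconstructed so that the traversal matches the visit order 1 2 4 5 6 3 7 shown above:

```python
def dfs(adj, v, visited=None):
    """Depth-first search from v; returns vertices in the order they are visited."""
    if visited is None:
        visited = []
    visited.append(v)                 # mark v as visited
    for w in adj[v]:                  # explore neighbours in list order
        if w not in visited:
            dfs(adj, w, visited)      # go as deep as possible before backtracking
    return visited

# Assumed graph matching the figure's traversal (adjacency lists)
adj = {1: [2, 3], 2: [1, 4], 3: [1, 7], 4: [2, 5, 6], 5: [4], 6: [4], 7: [3]}
```

The recursion stack plays the role of the explicit LIFO stack in the figure.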
Exhaustive Search
Breadth-first search (BFS)
It proceeds in a concentric manner by visiting first all the vertices that
are adjacent to a starting vertex, then all unvisited vertices two edges
apart from it, and so on, until all the vertices in the same connected
component as the starting vertex are visited.
It is convenient to use a queue to trace the operation of breadth-first
search.
The queue is initialized with the traversal’s starting vertex, which is
marked as visited.
On each iteration, the algorithm identifies all unvisited vertices that are
adjacent to the front vertex, marks them as visited, and adds them to the
queue; after that, the front vertex is removed from the queue.
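The queue-based procedure just described can be sketched in Python; the adjacency lists are an assumption matching the sample graph used in the traversal figures:

```python
from collections import deque

def bfs(adj, start):
    """Breadth-first search from start; returns vertices in the order they are visited."""
    visited = [start]                 # the start vertex is marked as visited
    queue = deque([start])            # FIFO queue initialized with the start vertex
    while queue:
        v = queue.popleft()           # remove the front vertex
        for w in adj[v]:
            if w not in visited:      # identify unvisited adjacent vertices
                visited.append(w)     # mark them as visited
                queue.append(w)       # and add them to the queue
    return visited

# Assumed sample graph (adjacency lists)
adj = {1: [2, 3], 2: [1, 4], 3: [1, 7], 4: [2, 5, 6], 5: [4], 6: [4], 7: [3]}
```

On this graph, `bfs(adj, 1)` visits the vertices in concentric layers: 1, then 2 and 3, then 4 and 7, then 5 and 6.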
It is useful to accompany a BFS traversal by constructing the so-called
breadth-first search forest. The traversal’s starting vertex serves as
the root of the first tree in such a forest.
Whenever a new unvisited vertex is reached for the first time, the vertex
is attached as a child to the vertex it is being reached from with an edge
called a tree edge.
If an edge leading to a previously visited vertex other than its immediate
predecessor (i.e., its parent in the tree) is encountered, the edge is noted
as a cross edge.
Breadth-first search has the same efficiency as depth-first search: it is in
Θ(|V|²) for the adjacency matrix representation and in Θ(|V| + |E|) for the
adjacency list representation.
Unlike depth-first search, it yields a single ordering of vertices because
the queue is a FIFO (first-in first-out) structure and hence the order in
which vertices are added to the queue is the same order in which they are
removed from it.
BFS can be used to check connectivity and acyclicity of a graph,
essentially in the same manner as DFS can.
Step 1: Initially, the queue and the visited list are empty.
Step 2: Push node 0 into the queue and mark it visited.
Step 3: Remove node 0 from the front of the queue, visit its unvisited
neighbours, and push them into the queue.
Step 4: Remove node 1 from the front of the queue, visit its unvisited
neighbours, and push them into the queue.
Step 5: Remove node 2 from the front of the queue, visit its unvisited
neighbours, and push them into the queue.
Step 6: Remove node 3 from the front of the queue. Since every neighbour
of node 3 is already visited, move on to the next node at the front of
the queue.
Step 7: Remove node 4 from the front of the queue. Since every neighbour
of node 4 is already visited, move on to the next node at the front of
the queue.
[Figure: BFS traversal of a sample graph, traced with a queue (FIFO);
visit order: 1 2 3 4 7 5 6]
Chapter 2
Decrease and Conquer:
• Insertion sort
• Topological sorting
• Algorithms for generating combinatorial objects
• Decrease by-a-constant-factor algorithms
Decrease and Conquer
• The decrease-and-conquer technique is based on exploiting the
relationship between a solution to a given instance of a problem and a
solution to its smaller instance.
• Once such a relationship is established, it can be exploited either top
down or bottom up.
• The former leads naturally to a recursive implementation.
• The bottom-up variation is usually implemented iteratively, starting with
a solution to the smallest instance of the problem; it is sometimes called
the incremental approach.
Decrease and Conquer
• There are three major variations of decrease-and-conquer
1. decrease by a constant
2. decrease by a constant factor
3. variable size decrease
Decrease by a constant
• In the decrease-by-a-constant
variation, the size of an instance is
reduced by the same constant on each
iteration of the algorithm.
• Typically, this constant is equal to 1
• In insertion sort, a decrease-by-one algorithm, the basic operation is
the key comparison A[j] > v (chosen rather than j ≥ 0 because it is
almost certainly faster in an actual computer implementation).
• In the best case, the comparison A[j] > v is executed only once on
every iteration of the outer loop. This happens if and only if
A[i - 1] ≤ A[i] for every i = 1, ..., n - 1, i.e., if the input array is
already sorted in nondecreasing order.
• The number of key comparisons for such an input is
Cbest(n) = n - 1, which is in Θ(n).
Insertion sort (decrease by one)
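The decrease-by-one idea of insertion sort can be sketched in Python (a minimal sketch; the function name is our own):

```python
def insertion_sort(a):
    """Sort list a in place by inserting each element into the sorted prefix."""
    for i in range(1, len(a)):
        v, j = a[i], i - 1
        while j >= 0 and a[j] > v:    # basic operation: key comparison A[j] > v
            a[j + 1] = a[j]           # shift larger elements one position right
            j -= 1
        a[j + 1] = v                  # insert v into its place in the sorted prefix
    return a
```

On an already sorted input the inner loop's comparison fails immediately on every pass, giving the best-case count of n - 1 comparisons.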
Divide and Conquer
• Not every divide-and-conquer algorithm is necessarily more efficient
than even a brute-force solution.
• Divide-and-conquer technique is ideally suited for parallel
computations, in which each subproblem can be solved
simultaneously by its own processor.
• Consider a problem instance of size n that is divided into two
instances of size n/2.
Example: merge sort applied to a problem instance of size n = 8.
8 3 2 9 7 1 5 4
8 3 2 9 | 7 1 5 4
8 3 | 2 9 | 7 1 | 5 4
8 | 3 | 2 | 9 | 7 | 1 | 5 | 4
3 8 | 2 9 | 1 7 | 4 5
2 3 8 9 | 1 4 5 7
1 2 3 4 5 7 8 9
Divide and Conquer
Merge Sort
• Let Cmerge(n) be the number of key comparisons performed during the
merging stage.
• At each step, exactly one comparison is made, after which the total
number of elements in the two arrays still needing to be processed is
reduced by 1.
• In the worst case, neither of the two arrays becomes empty before the
other one contains just one element, so Cmerge(n) = n - 1.
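Merge sort can be sketched in Python (a minimal sketch; the function name is our own); the comment marks where the single key comparison per merge step happens:

```python
def merge_sort(a):
    """Return a sorted copy of a using divide and conquer."""
    if len(a) <= 1:                   # an instance of size 1 is already sorted
        return a
    mid = len(a) // 2                 # divide into two instances of size n/2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:       # exactly one key comparison per merge step
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]   # append the remaining elements
```

Running it on the traced instance, `merge_sort([8, 3, 2, 9, 7, 1, 5, 4])` yields [1, 2, 3, 4, 5, 7, 8, 9].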
Divide and Conquer
Quick Sort
• Quicksort is the other important sorting algorithm that is based on
the divide-and-conquer approach.
• Unlike merge sort, which divides its input elements according to
their position in the array, quicksort divides them according to their
value.
• A partition is an arrangement of the array’s elements so that all the
elements to the left of some element A[s] are less than or equal to
A[s], and all the elements to the right of A[s] are greater than or
equal to it.
• After a partition is achieved, A[s] will be in its final position in the
sorted array, and we can continue sorting the two subarrays to the
left and to the right of A[s] independently.
• The entire work in Quicksort happens in the division stage, with no
work required to combine the solutions to the subproblems.
• We start by selecting a pivot—an element with respect to whose
value we are going to divide the subarray.
• The simplest strategy is to select the subarray’s first element:
p = A[l].
• Scan the subarray from both ends, comparing the subarray’s
elements to the pivot.
• The left-to-right scan, denoted below by index pointer i, starts with
the second element.
• Since we want elements smaller than the pivot to be in the left part
of the subarray, this scan skips over elements that are smaller than
the pivot and stops upon encountering the first element greater than
or equal to the pivot.
• The right-to-left scan, denoted below by index pointer j, starts with
the last element of the subarray.
• Since we want elements larger than the pivot to be in the right part of
the subarray, this scan skips over elements that are larger than the
pivot and stops on encountering the first element smaller than or
equal to the pivot
EXAMPLE
5 3 1 9 8 2 4 7        (pivot p = 5, the first element)
After one partition: 2 3 1 4 | 5 | 8 9 7. Elements ≤ 5 lie to the left
of the partitioning element, elements ≥ 5 to its right, and 5 is in its
final position.
The subarrays 2 3 1 4 and 8 9 7 are then partitioned recursively.
After both scans stop, three situations might arise, depending on
whether or not the scanning indices have crossed.
SCENARIO-1
If scanning indices i & j have not crossed, i.e., i < j, we simply
exchange A[i] and A[j] and resume the scans by incrementing i and
decrementing j, respectively
SCENARIO-2
If the scanning indices have crossed over, i.e., i > j, we will have
partitioned the subarray after exchanging the pivot with A[j].
SCENARIO-3
If the scanning indices stop while pointing to the same element, i.e.,
i = j, the value they are pointing to must be equal to p. Thus, we have
the subarray partitioned, with the split position s = i = j :
We can combine the last case with the case of crossed-over indices (i >
j ) by exchanging the pivot with A[j ] whenever i ≥ j .
Pseudocode implementing this partitioning procedure.
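The slides' pseudocode is not reproduced here; below is a Python sketch of the two-scan partitioning procedure described above (pivot p = A[l], left-to-right pointer i, right-to-left pointer j). The function names are our own:

```python
def hoare_partition(a, l, r):
    """Partition a[l..r] around pivot a[l]; return the pivot's final position s."""
    p = a[l]
    i, j = l + 1, r
    while True:
        while i <= r and a[i] < p:    # left-to-right scan stops at an element >= p
            i += 1
        while a[j] > p:               # right-to-left scan stops at an element <= p
            j -= 1
        if i >= j:                    # indices crossed (or met): partition is done
            break
        a[i], a[j] = a[j], a[i]       # exchange A[i] and A[j], resume both scans
        i += 1
        j -= 1
    a[l], a[j] = a[j], a[l]           # exchange the pivot with A[j]; s = j
    return j

def quicksort(a, l=0, r=None):
    """Sort a[l..r] in place by partitioning and recursing on both subarrays."""
    if r is None:
        r = len(a) - 1
    if l < r:
        s = hoare_partition(a, l, r)
        quicksort(a, l, s - 1)        # sort the left subarray independently
        quicksort(a, s + 1, r)        # sort the right subarray independently
```

On the example array [5, 3, 1, 9, 8, 2, 4, 7], one call to `hoare_partition` produces [2, 3, 1, 4, 5, 8, 9, 7] with split position s = 4, matching the trace above.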
Time Complexity
The average-case time complexity of quicksort is O(N log N).
The best-case complexity is O(N log N).
The worst-case complexity is O(N²); with the first element chosen as the
pivot, it occurs for already sorted (or reverse-sorted) inputs.