Divide-And-Conquer: Conquer, Combine. Here's How To View One Step
Divide-and-conquer
This paradigm, divide-and-conquer, breaks a problem into subproblems that are similar to the
original problem, recursively solves the subproblems, and finally combines the solutions to the
subproblems to solve the original problem. Because divide-and-conquer solves subproblems recursively,
each subproblem must be smaller than the original problem, and there must be a base case for
subproblems. You should think of a divide-and-conquer algorithm as having three parts:
1. Divide the problem into a number of subproblems that are smaller instances of the same
problem.
2. Conquer the subproblems by solving them recursively. If they are small enough, solve the
subproblems as base cases.
3. Combine the solutions to the subproblems into the solution for the original problem.
You can easily remember the steps of a divide-and-conquer algorithm as divide, conquer,
combine. Here's how to view one step, assuming that each divide step creates two subproblems
(though some divide-and-conquer algorithms create more than two). Because divide-and-conquer
creates at least two subproblems, a divide-and-conquer algorithm makes multiple recursive calls.
Merge Sort Algorithm
Merge Sort is quite fast, and has a time complexity of O(n log n). It is also a stable sort,
which means the "equal" elements keep the same order in the sorted list as in the input.
In this section we will understand why the running time for merge sort is O(n log n).
As we have already learned in Binary Search, whenever we divide a number in half at every
step, it can be represented using a logarithmic function, which is log n, and the number of
steps can be represented by log n + 1 (at most).
Also, we perform a single-step operation to find the middle of any subarray, i.e. O(1).
And to merge the subarrays made by dividing the original array of n elements, a running
time of O(n) is required.
Hence the total time for the merge sort function becomes n(log n + 1), which gives us a
time complexity of O(n log n).
Worst Case Time Complexity [Big-O]: n log n
Best Case Time Complexity [Big-Omega]: n log n
Average Time Complexity [Big-Theta]: n log n
For Example
# get the array
cin >> n;
for (i = 1; i <= n; i++)
    cin >> A[i];
# divide the array
m = n / 2
L[1 … m] and R[m+1 … n]
for (i = 1 to m)
    L[i] = A[i]
for (j = m+1 to n)
    R[j] = A[j]
# merge the subarrays
if (L[i] <= R[j])
    A[k] = L[i]
    i = i + 1
else
    A[k] = R[j]
    j = j + 1
Binary Search
Binary Search is applied on a sorted array or list of large size. Its time complexity of O(log n) makes it very
fast compared to other searching algorithms.
First we find the mid of the array using mid = low + (high - low) / 2. Here it is 0 + (9 - 0) / 2 = 4 (the integer
part of 4.5). So, 4 is the mid of the array.
Now we compare the value stored at location 4 with the value being searched, i.e. 31. We find that the value at
location 4 is 27, which is not a match. As the target value 31 is greater than 27 and we have a sorted array, we also
know that the target value must be in the upper portion of the array.
We change our low to mid + 1 and find the new mid value again.
low = mid + 1
mid = low + (high - low) / 2
Our new mid is 7 now. We compare the value stored at location 7 with our target value 31.
The value stored at location 7 is not a match; rather, it is more than what we are looking for. So, the value must
be in the lower part from this location. We therefore change our high to mid - 1 and calculate the mid again,
which comes out to 5.
We compare the value stored at location 5 with our target value. We find that it is a match.
The complete procedure can be written in pseudocode as follows:
Procedure binary_search
   A ← sorted array
   n ← size of array
   x ← value to be searched
   Set lowerBound = 1
   Set upperBound = n
   while x not found
      if upperBound < lowerBound
         EXIT: x does not exist
      set midPoint = lowerBound + (upperBound - lowerBound) / 2
      if A[midPoint] = x
         EXIT: x found at location midPoint
      if A[midPoint] < x
         set lowerBound = midPoint + 1
      if A[midPoint] > x
         set upperBound = midPoint - 1
   end while
end procedure
Dynamic Programming
The dynamic programming approach is similar to divide and conquer in breaking down the
problem into smaller and yet smaller possible sub-problems. But unlike divide and conquer,
these sub-problems are not solved independently. Rather, the results of these smaller sub-problems
are remembered and used for similar or overlapping sub-problems.
Dynamic programming is used where we have problems which can be divided into similar
sub-problems, so that their results can be re-used. Mostly, these algorithms are used for
optimization. Before solving the sub-problem in hand, a dynamic algorithm will try to examine
the results of the previously solved sub-problems. The solutions of sub-problems are combined
in order to achieve the best solution.
Example
The Fibonacci number series below is a classic computer problem that can be solved using the
dynamic programming approach.
Dynamic programming is basically recursion plus using common sense. What it means is that
recursion allows you to express the value of a function in terms of other values of that function,
while common sense tells you that if you implement your function in a way that the
recursive calls are done in advance, and stored for easy access, it will make your program
faster. This is what we call Memoization - it is memorizing the results of some specific states,
which can then be accessed later to solve other sub-problems.
The intuition behind dynamic programming is that we trade space for time, i.e. to say that
instead of calculating all the states taking a lot of time but no space, we take up space to store
the results of all the sub-problems to save time later.
Fibonacci(n) = 1,                               if n = 0
Fibonacci(n) = 1,                               if n = 1
Fibonacci(n) = Fibonacci(n-1) + Fibonacci(n-2), otherwise
So, the first few numbers in this series will be: 1, 1, 2, 3, 5, 8, 13, 21... and so on!
int fibresult[MAX]; // table of solved sub-problems; MAX must be larger than n

void fib(int n) {
    fibresult[0] = 1;
    fibresult[1] = 1;
    for (int i = 2; i < n; i++)
        fibresult[i] = fibresult[i-1] + fibresult[i-2];
}
Binomial Coefficient
Following are common definition of Binomial Coefficients.
1. A binomial coefficient C(n, k) can be defined as the coefficient of X^k in the expansion of
(1 + X)^n.
2. A binomial coefficient C(n, k) also gives the number of ways, disregarding order, that k
objects can be chosen from among n objects; more formally, the number of k-element
subsets (or k-combinations) of an n-element set.
Optimal Substructure
The value of C(n, k) can be recursively calculated using the following standard formula for
Binomial Coefficients:
C(n, k) = C(n-1, k-1) + C(n-1, k)
C(n, 0) = C(n, n) = 1
Warshall Algorithm
Graph
Prim's Minimal Spanning Tree Algorithm
Design and Analysis
In designing a greedy algorithm, we have the following general guideline:
1) Break the problem into a sequence of decisions, just like in dynamic programming. But
bear in mind that a greedy algorithm does not always yield the optimal solution. For
example, it is not optimal to run a greedy algorithm for the Longest Subsequence problem.
2) Identify a rule for the "best" option. At each step, you immediately make the decision
in a greedy manner, without considering other future decisions.
The hard part usually lies in the analysis. In divide-and-conquer and dynamic programming,
the proof sometimes follows from an easy induction, but that is not generally the case for
greedy algorithms. The key thing to remember is that a greedy algorithm often fails if you cannot
find a proof. A common technique for proving the correctness of greedy algorithms is
proof by contradiction: one starts by assuming that there is a better solution, and the goal is to
refute it by showing that it is actually not better than what the algorithm did.
Graph
A graph is a pictorial representation of a set of objects where some pairs of objects are
connected by links. The interconnected objects are represented by points termed as vertices,
and the links that connect the vertices are called edges.
Formally, a graph is a pair of sets (V, E), where V is the set of vertices and E is the set of edges
connecting pairs of vertices. Take a look at the following graph −
V = {a, b, c, d, e}
Types of Graph
1. Directed Graph
2. Undirected Graph
3. Weighted Graph
In a weighted graph, each edge has an associated numerical value, called a weight. Usually, the edge
weights are nonnegative integers. Weighted graphs may be either directed or undirected.
4. Unweighted Graph
Mathematical graphs can be represented in data structures. We can represent a graph using an
array of vertices and a two-dimensional array of edges. Before we proceed further, let's
familiarize ourselves with some important terms −
Vertex − Each node of the graph is represented as a vertex. In the following example, the
labeled circle represents vertices. Thus, A to G are vertices. We can represent them using
an array as shown in the following image. Here A can be identified by index 0. B can be
identified using index 1 and so on.
Edge − An edge represents a path between two vertices or a line between two vertices. In the
following example, the lines from A to B, B to C, and so on represent edges. We can use
a two-dimensional array to represent the edges as shown in the following image. Here AB
can be represented as 1 at row 0, column 1, BC as 1 at row 1, column 2 and so on,
keeping other combinations as 0.
Adjacency − Two nodes or vertices are adjacent if they are connected to each other
through an edge. In the following example, B is adjacent to A, C is adjacent to B, and so
on.
Path − A path represents a sequence of edges between two vertices. In the following
example, ABCD represents a path from A to D.
Basic Operations
Spanning Tree
We now understand that one graph can have more than one spanning tree. Following are a few
properties of a spanning tree connected to graph G −
Cluster Analysis
Let us understand this through a small example. Consider a city network as a huge graph, and
suppose we now plan to deploy telephone lines in such a way that we can connect all the
city nodes with the minimum number of lines. This is where the spanning tree comes into the picture.