Divide-and-conquer

This paradigm, divide-and-conquer, breaks a problem into subproblems that are similar to the original problem, recursively solves the subproblems, and finally combines the solutions to the subproblems to solve the original problem. Because divide-and-conquer solves subproblems recursively, each subproblem must be smaller than the original problem, and there must be a base case for subproblems. You should think of a divide-and-conquer algorithm as having three parts:

1. Divide the problem into a number of subproblems that are smaller instances of the same
problem.
2. Conquer the subproblems by solving them recursively. If they are small enough, solve the
subproblems as base cases.
3. Combine the solutions to the subproblems into the solution for the original problem.

You can easily remember the steps of a divide-and-conquer algorithm as divide, conquer,
combine. Each divide step typically creates two subproblems, though some divide-and-conquer
algorithms create more than two. Because divide-and-conquer creates at least two subproblems,
a divide-and-conquer algorithm makes multiple recursive calls.
Merge Sort Algorithm

For example, merge sort is a sorting algorithm that follows the rule of divide and conquer to
sort a given set of numbers/elements. The algorithm divides the array into two halves,
recursively sorts them, and finally merges the two sorted halves.

Merge Algorithm

# get the array
cin >> n
for (i = 1 to n)
    cin >> A[i]

# divide the array
m = n / 2
L[i = 1 to m] and R[j = m+1 ... n]
for (i = 1 to m)
    L[i] = A[i]
for (j = m+1 to n)
    R[j] = A[j]

# merge the arrays of n elements
if (L[i] <= R[j])
    A[k] = L[i]
    i = i + 1
else
    A[k] = R[j]
    j = j + 1

Complexity Analysis of Merge Sort

Merge sort is quite fast and has a time complexity of O(n log n). It is also a stable sort,
which means the "equal" elements keep the same relative order in the sorted list.

In this section we will understand why the running time for merge sort is O(n log n).

As we have already learned in binary search, whenever we divide a number in half at every
step, the process can be represented using a logarithmic function, log n, and the number of
steps is at most log n + 1. Also, we perform a single-step operation to find the middle of
any subarray, i.e. O(1). And to merge the subarrays made by dividing the original array of
n elements, a running time of O(n) is required at each level. Hence the total time for the
merge sort function becomes n(log n + 1), which gives us a time complexity of O(n log n).

Worst Case Time Complexity [Big-O]: n log n
Best Case Time Complexity [Big-omega]: n log n
Average Time Complexity [Big-theta]: n log n
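The pseudocode above can be fleshed out into a complete routine. Below is a minimal C++ sketch of merge sort (the function names mergeSort and merge are our own, not from the text); taking ties from the left half first in the merge step is what makes the sort stable.

```cpp
#include <cassert>
#include <vector>

// Combine: merge the sorted halves a[lo..mid] and a[mid+1..hi].
void merge(std::vector<int>& a, int lo, int mid, int hi) {
    std::vector<int> L(a.begin() + lo, a.begin() + mid + 1);     // copy left half
    std::vector<int> R(a.begin() + mid + 1, a.begin() + hi + 1); // copy right half
    std::size_t i = 0, j = 0;
    int k = lo;
    while (i < L.size() && j < R.size())
        a[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];  // <= keeps the sort stable
    while (i < L.size()) a[k++] = L[i++];
    while (j < R.size()) a[k++] = R[j++];
}

// Divide and conquer: sort a[lo..hi] in place.
void mergeSort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;           // base case: zero or one element
    int mid = lo + (hi - lo) / 2;   // divide
    mergeSort(a, lo, mid);          // conquer the left half
    mergeSort(a, mid + 1, hi);      // conquer the right half
    merge(a, lo, mid, hi);          // combine
}
```

Calling mergeSort(v, 0, v.size() - 1) sorts the whole vector; each level of recursion does O(n) merging work across about log n levels, matching the O(n log n) analysis above.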

Binary Search Algorithm

Binary search is applied to a sorted array or list of large size. Its time complexity of O(log n) makes it very
fast compared to other search algorithms such as linear search. The only limitation is that the array or list of
elements must be sorted for the binary search algorithm to work on it.

How Binary Search Works

For a binary search to work, it is mandatory for the target array to be sorted. We shall walk through the
process with an example. Suppose we have a sorted array of ten elements (indices 0 through 9) and we need to
find the location of the value 31 using binary search.

First, we shall determine the middle of the array by using this formula:

mid = low + (high - low) / 2

Here it is 0 + (9 - 0) / 2 = 4 (the integer part of 4.5). So, 4 is the mid of the array.

Now we compare the value stored at location 4 with the value being searched, i.e. 31. We find that the value at
location 4 is 27, which is not a match. Because the target is greater than 27 and we have a sorted array, we also
know that the target value must be in the upper portion of the array.

We change our low to mid + 1 and find the new mid value again.

low = mid + 1
mid = low + (high - low) / 2

Our new mid is 7 now. We compare the value stored at location 7 with our target value 31.

The value stored at location 7 is not a match; rather, it is greater than what we are looking for. So, the value
must be in the lower part of the array, below this location.

Hence, we calculate the mid again. This time it is 5.

We compare the value stored at location 5 with our target value. We find that it is a match.

We conclude that the target value 31 is stored at location 5.

Binary search halves the search space at each step and thus reduces the number of comparisons to be made to
a very small number.
Pseudocode:

Procedure binary_search
   A ← sorted array
   n ← size of array
   x ← value to be searched

   Set lowerBound = 1
   Set upperBound = n

   while x not found
      if upperBound < lowerBound
         EXIT: x does not exist

      set midPoint = lowerBound + (upperBound - lowerBound) / 2

      if A[midPoint] = x
         EXIT: x found at location midPoint

      if A[midPoint] < x
         set lowerBound = midPoint + 1

      if A[midPoint] > x
         set upperBound = midPoint - 1
   end while
end procedure
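The pseudocode can be turned into working code. Here is a minimal C++ sketch of binary search over a 0-indexed vector (the pseudocode above uses 1-based bounds); the function name binarySearch is our own.

```cpp
#include <cassert>
#include <vector>

// Returns the index of x in the sorted vector a, or -1 if x is absent.
int binarySearch(const std::vector<int>& a, int x) {
    int low = 0, high = (int)a.size() - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  // same mid formula as in the text
        if (a[mid] == x) return mid;       // match found
        if (a[mid] < x) low = mid + 1;     // target is in the upper portion
        else            high = mid - 1;    // target is in the lower portion
    }
    return -1;                             // upperBound crossed lowerBound
}
```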

Dynamic Programming

The dynamic programming approach is similar to divide and conquer in breaking down the
problem into smaller and still smaller possible sub-problems. But unlike divide and conquer,
these sub-problems are not solved independently. Rather, the results of these smaller
sub-problems are remembered and used for similar or overlapping sub-problems.

Dynamic programming is used where we have problems that can be divided into similar
sub-problems, so that their results can be re-used. Mostly, these algorithms are used for
optimization. Before solving the sub-problem at hand, a dynamic algorithm will try to examine
the results of the previously solved sub-problems. The solutions of sub-problems are combined
in order to achieve the best solution.

Example
The following computer problems can be solved using the dynamic programming approach:

 Fibonacci number series
 Warshall's algorithm
Comparison

Dynamic Programming and Recursion:

Dynamic programming is basically recursion plus common sense. Recursion allows you to express
the value of a function in terms of other values of that function, while the common sense tells
you that if you implement your function in a way that the recursive calls are done in advance
and stored for easy access, it will make your program faster. This is what we call memoization:
memorizing the results of some specific states, which can then be accessed later to solve other
sub-problems.

The intuition behind dynamic programming is that we trade space for time: instead of
recalculating all the states, which takes a lot of time but no space, we take up space to store
the results of the sub-problems to save time later.

Let's try to understand this by taking an example of Fibonacci numbers.

Fibonacci (n) = 1; if n = 0
Fibonacci (n) = 1; if n = 1
Fibonacci (n) = Fibonacci(n-1) + Fibonacci(n-2)

So, the first few numbers in this series will be: 1, 1, 2, 3, 5, 8, 13, 21... and so on!

Code for it using pure recursion:

int fib (int n)
{
   if (n < 2)
      return 1;
   return fib(n-1) + fib(n-2);
}

Using the dynamic programming approach with memoization (here fibresult is assumed to be a
global array with at least n+1 entries):

void fib (int n) {
   fibresult[0] = 1;
   fibresult[1] = 1;
   for (int i = 2; i <= n; i++)
      fibresult[i] = fibresult[i-1] + fibresult[i-2];
}

Binomial Coefficient
Following are common definitions of binomial coefficients.
1. A binomial coefficient C(n, k) can be defined as the coefficient of X^k in the expansion of
(1 + X)^n.
2. A binomial coefficient C(n, k) also gives the number of ways, disregarding order, that k
objects can be chosen from among n objects; more formally, the number of k-element
subsets (or k-combinations) of an n-element set.
Optimal Substructure

The value of C(n, k) can be recursively calculated using the following standard formula for
binomial coefficients.

C(n, k) = C(n-1, k-1) + C(n-1, k)


C(n, 0) = C(n, n) = 1
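The recurrence above lends itself directly to a bottom-up dynamic programming table. Below is a minimal C++ sketch (the function name binomial is our own) that fills the table row by row, reusing stored subproblem results exactly as the formula C(n, k) = C(n-1, k-1) + C(n-1, k) prescribes.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Bottom-up DP table for C(n, k) using
// C(i, j) = C(i-1, j-1) + C(i-1, j), with C(i, 0) = C(i, i) = 1.
long long binomial(int n, int k) {
    std::vector<std::vector<long long>> C(n + 1, std::vector<long long>(k + 1, 0));
    for (int i = 0; i <= n; i++) {
        for (int j = 0; j <= std::min(i, k); j++) {
            if (j == 0 || j == i)
                C[i][j] = 1;                              // base cases
            else
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j];  // reuse stored results
        }
    }
    return C[n][k];
}
```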

Warshall's Algorithm

Warshall's Algorithm: Transitive Closure

 Computes the transitive closure of a relation
 Alternatively: finds all paths in a directed graph

Graph Algorithms: Greedy Techniques


Introduction to Greedy Algorithms
A greedy algorithm, as the name suggests, always makes the choice that seems best at
that moment. This means that it makes a locally optimal choice in the hope that this choice will
lead to a globally optimal solution. In general, however, greedy algorithms do not guarantee
globally optimal solutions.
Formal Definition
A greedy algorithm is an algorithmic paradigm that follows the problem solving heuristic of
making the locally optimal choice at each stage with the hope of finding a global optimum.
In general, greedy algorithms have five components:

 A candidate set, from which a solution is created
 A selection function, which chooses the best candidate to be added to the solution
 A feasibility function, which is used to determine whether a candidate can contribute
to a solution
 An objective function, which assigns a value to a solution, or a partial solution, and
 A solution function, which indicates when we have discovered a complete solution

Advantages and Disadvantages of Greedy Algorithms

 Finding a solution to a problem is quite easy with a greedy algorithm.
 Analyzing the running time of greedy algorithms is generally much easier than for other
techniques (like divide and conquer).
 The difficult part is that for greedy algorithms you have to work much harder to
understand correctness issues. Even with the correct algorithm, it is hard to prove why it
is correct. Proving that a greedy algorithm is correct is more of an art than a science.

List of Algorithms Based on the Greedy Approach

The greedy approach is quite powerful and works well for a wide range of problems. Many
algorithms can be viewed as applications of greedy algorithms, such as:

 Graph
 Prim's Minimal Spanning Tree Algorithm
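As a concrete instance of the greedy paradigm, here is a minimal C++ sketch of Prim's Minimal Spanning Tree Algorithm on an adjacency matrix (the function name primMST and the O(V^2) vertex-scan formulation are our own choices); at each step it greedily adds the cheapest vertex not yet in the tree.

```cpp
#include <cassert>
#include <limits>
#include <vector>

// Prim's algorithm on an adjacency matrix (w[u][v] == 0 means no edge).
// Returns the total weight of a minimum spanning tree of a connected graph.
int primMST(const std::vector<std::vector<int>>& w) {
    int n = (int)w.size();
    const int INF = std::numeric_limits<int>::max();
    std::vector<int> key(n, INF);        // cheapest edge linking v to the tree
    std::vector<bool> inTree(n, false);
    key[0] = 0;                          // start the tree at vertex 0
    int total = 0;
    for (int count = 0; count < n; count++) {
        int u = -1;
        for (int v = 0; v < n; v++)      // greedy choice: cheapest fringe vertex
            if (!inTree[v] && (u == -1 || key[v] < key[u]))
                u = v;
        inTree[u] = true;
        total += key[u];
        for (int v = 0; v < n; v++)      // update keys of the remaining vertices
            if (!inTree[v] && w[u][v] != 0 && w[u][v] < key[v])
                key[v] = w[u][v];
    }
    return total;
}
```

The greedy choice here is provably safe (the "cut property" of spanning trees), which is exactly the kind of correctness argument the section below discusses.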

Design and Analysis

In designing a greedy algorithm, we have the following general guideline:

1) Break the problem into a sequence of decisions, just like in dynamic programming. But
bear in mind that a greedy algorithm does not always yield the optimal solution. For
example, it is not optimal to run a greedy algorithm for the Longest Subsequence problem.

2) Identify a rule for the "best" option. Once step 1 is completed, you immediately make
each decision in a greedy manner, without considering other future decisions.

The hard part usually lies in the analysis. In divide-and-conquer and dynamic programming,
the proof sometimes follows from an easy induction, but that is generally not the case for
greedy algorithms. The key thing to remember is that a greedy algorithm often fails if you
cannot find a proof. A common technique for proving the correctness of greedy algorithms is
proof by contradiction: one starts by assuming that there is a better solution, and the goal is
to refute this by showing that it is actually not better than what the algorithm did.

Graph

A graph is a pictorial representation of a set of objects where some pairs of objects are
connected by links. The interconnected objects are represented by points termed as vertices,
and the links that connect the vertices are called edges.

Formally, a graph is a pair of sets (V, E), where V is the set of vertices and E is the set of
edges connecting pairs of vertices. For example, consider a graph with

V = {a, b, c, d, e}

E = {ab, ac, bd, cd, de}


 A graph is a non-linear data structure; it contains a set of points known as nodes (or
vertices) and a set of links known as edges (or arcs) which connect the vertices.
 Generally, a graph G is represented as G = (V, E), where V is the set of vertices and E is
the set of edges.
 Vertex: An individual data element of a graph is called a vertex, also known as a node.
For example, A, B, C, D and E may be the vertices of a graph.
 Edge: An edge is a connecting link between two vertices; it is also known as an arc. An edge is
represented as (starting vertex, ending vertex). For example, the link between vertices A
and B is represented as (A,B). A graph on these five vertices might have 7 edges, i.e.,
(A,B), (A,C), (A,D), (B,D), (B,E), (C,D), (D,E).

Types of Graph

1. Directed Graph

A directed graph is a graph, i.e., a set of objects (called vertices or nodes) that are
connected together, where all the edges are directed from one vertex to another. A directed
graph is sometimes called a digraph or a directed network. In contrast, a graph where the edges
are bidirectional is called an undirected graph.

When drawing a directed graph, the edges are typically drawn as arrows indicating the direction.

2. Undirected Graph

An undirected graph is a graph, i.e., a set of objects (called vertices or nodes) that are
connected together, where all the edges are bidirectional. An undirected graph is sometimes
called an undirected network. In contrast, a graph where the edges point in a direction is
called a directed graph.

3. Weighted Graph

If each edge of a graph has an associated numerical value, called a weight, the graph is a
weighted graph. Usually, the edge weights are nonnegative integers. Weighted graphs may be
either directed or undirected.

4. Unweighted Graph

An unweighted graph is one in which an edge does not have any cost or weight associated with
it, whereas a weighted graph does.

Graph Data Structure

Mathematical graphs can be represented in a data structure. We can represent a graph using an
array of vertices and a two-dimensional array of edges. Before we proceed further, let's
familiarize ourselves with some important terms:

 Vertex − Each node of the graph is represented as a vertex. For example, vertices A to G
can be stored in an array, where A is identified by index 0, B by index 1, and so on.
 Edge − An edge represents a path or line between two vertices. We can use a
two-dimensional array to represent the edges: the edge AB is stored as a 1 at row 0,
column 1, BC as a 1 at row 1, column 2, and so on, keeping other combinations as 0.
 Adjacency − Two nodes or vertices are adjacent if they are connected to each other
through an edge. For example, B is adjacent to A if the edge AB exists.
 Path − A path represents a sequence of edges between two vertices. For example, ABCD
represents a path from A to D.
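The vertex-array-plus-matrix representation described above can be sketched as a small C++ structure (the struct name Graph and its members are our own illustration):

```cpp
#include <cassert>
#include <string>
#include <vector>

// An undirected graph stored as a vertex array plus a 2-D adjacency matrix,
// where adj[u][v] == 1 marks an edge and 0 marks its absence.
struct Graph {
    std::vector<std::string> vertices;
    std::vector<std::vector<int>> adj;

    explicit Graph(std::vector<std::string> names)
        : vertices(std::move(names)),
          adj(vertices.size(), std::vector<int>(vertices.size(), 0)) {}

    void addEdge(int u, int v) {  // undirected: mark both directions
        adj[u][v] = 1;
        adj[v][u] = 1;
    }
    bool adjacent(int u, int v) const { return adj[u][v] == 1; }
};
```

For a graph with vertices {"A", "B", "C"}, calling addEdge(0, 1) records the edge AB as a 1 at row 0, column 1 and at row 1, column 0, exactly as the bullet list describes.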

Basic Operations

Following are the basic primary operations of a graph:

 Add Vertex − Adds a vertex to the graph.
 Add Edge − Adds an edge between two vertices of the graph.
 Display Vertex − Displays a vertex of the graph.

Spanning Tree Algorithm

A spanning tree is a subset of a graph G which has all the vertices covered with the minimum
possible number of edges. Hence, a spanning tree does not have cycles and cannot be
disconnected.

By this definition, we can conclude that every connected and undirected graph G has at least
one spanning tree. A disconnected graph does not have any spanning tree, as it cannot be
spanned to all its vertices.

General Properties of Spanning Trees

We now understand that one graph can have more than one spanning tree. Following are a few
properties of a spanning tree of a connected graph G:

 A connected graph G can have more than one spanning tree.
 All possible spanning trees of graph G have the same number of edges and vertices.
 A spanning tree does not have any cycles (loops).
 Removing one edge from the spanning tree will make the graph disconnected, i.e., the
spanning tree is minimally connected.
 Adding one edge to the spanning tree will create a circuit or loop, i.e., the spanning tree
is maximally acyclic.
Applications of Spanning Trees
A spanning tree is basically used to find a minimum path to connect all nodes in a graph.
Common applications of spanning trees are:

 Civil Network Planning
 Computer Network Routing Protocols
 Cluster Analysis

Let us understand this through a small example. Consider a city network as a huge graph, and
suppose we plan to deploy telephone lines in such a way that with the minimum number of lines
we can connect all city nodes. This is where the spanning tree comes into the picture.
