DESIGN & ANALYSIS OF
ALGORITHMS ASSIGNMENT
PANJAB UNIVERSITY SSG-RC

SUBMITTED TO:
MR. GURPINDER SINGH

SUBMITTED BY:
Ankur
BE(IT) 5th SEM
SG-17801
QUICK SORT

 Partition of elements in the array :

In quick sort, the array can be partitioned in any ratio; there is no requirement that the elements be divided into equal parts.

 Worst case complexity :

The worst-case complexity of quick sort is O(n²), since a badly chosen pivot (e.g. the last element of an already sorted array) forces a large number of comparisons.

 Usage with datasets :

Quick sort may not work well with very large datasets, since an unlucky pivot sequence degrades it to its O(n²) worst case.

 Additional storage space requirement :

Quick sort is in-place: apart from the recursion stack, it requires no additional storage.

 Efficiency :

Quick sort is generally more efficient than merge sort and works faster for smaller arrays or datasets.

 Sorting method :

Quick sort is an internal sorting method: the data is sorted entirely in main memory.

 Stability :

Quick sort is not stable by default, but it can be made stable with some changes to the code (at extra cost).

 Preferred for :

Quick sort is preferred for arrays.


 Locality of reference :

Quicksort exhibits good cache locality, and this often makes it faster than merge sort (for example, in virtual-memory environments).
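The properties above can be illustrated with a short Python sketch (my own minimal illustration, not a reference implementation; the Lomuto partition with the last element as pivot is an assumed choice):

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort using the Lomuto partition scheme.

    Average case O(n log n); worst case O(n^2) (e.g. an already
    sorted array with the last element as pivot). Needs no auxiliary
    array, only the recursion stack.
    """
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        # Partition around the last element; items <= pivot go left.
        pivot = a[hi]
        i = lo - 1
        for j in range(lo, hi):
            if a[j] <= pivot:
                i += 1
                a[i], a[j] = a[j], a[i]
        a[i + 1], a[hi] = a[hi], a[i + 1]
        p = i + 1                  # pivot's final position
        quicksort(a, lo, p - 1)    # sort left part
        quicksort(a, p + 1, hi)    # sort right part
    return a
```

Note that the equal-element swaps in the partition are what break stability, as mentioned above.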
MERGE SORT

 Partition of elements in the array :

In merge sort, the array is always partitioned into exactly two halves (of size n/2).

 Worst case complexity :

In merge sort, the worst case and average case have the same complexity, O(n log n).

 Usage with datasets :

Merge sort works well on datasets of any type and size, whether large or small.

 Additional storage space requirement :

Merge sort is not in-place, because it requires O(n) additional memory to store the auxiliary arrays used while merging.

 Efficiency :

Merge sort is generally more efficient than quick sort and works faster for larger arrays or datasets.

 Sorting method :

Merge sort is suited to external sorting, where the data to be sorted cannot be accommodated in main memory and auxiliary storage is needed during sorting.

 Stability :

Merge sort is stable: two elements with equal values appear in the same order in the sorted output as they did in the unsorted input array.

 Preferred for :

Merge sort is preferred for linked lists.


 Locality of reference :

Merge sort has poorer cache locality than quicksort, which is one reason quicksort is often faster in practice (for example, in virtual-memory environments).
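A minimal top-down merge sort sketch in Python (my own illustration; it returns a new list, using the O(n) auxiliary space noted above):

```python
def merge_sort(a):
    """Merge sort: O(n log n) in all cases, stable, but requires
    O(n) auxiliary space for the merged output."""
    if len(a) <= 1:
        return a[:]
    mid = len(a) // 2                 # split into two n/2 halves
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Merge the two sorted halves; '<=' keeps the sort stable.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out
```

Taking the left element on ties (`<=`) is exactly what preserves the input order of equal keys, i.e. the stability property above.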
BINARY SEARCH

 Binary search implements the divide and conquer approach.

 The time complexity of binary search is O(log₂ N).

 The best case occurs when the target is the middle element, giving O(1) time.

 In the worst case, searching an element requires about log₂ N comparisons.

 Binary search cannot be implemented directly on a linked list, because linked lists do not provide O(1) access to the middle element.

 Binary search requires a sorted array; for that reason, each insertion needs processing to place the new element at its proper position and keep the list sorted.

 The search algorithm is, however, somewhat tricky to implement correctly, and the elements must necessarily be arranged in order.
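The iterative form of binary search over a sorted list, as a sketch of the points above (my own minimal Python):

```python
def binary_search(a, target):
    """Return an index of target in sorted list a, or -1 if absent.

    O(log n) comparisons in the worst case; O(1) in the best case,
    when the target is the middle element.
    """
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # divide: look at the middle element
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1          # conquer the right half
        else:
            hi = mid - 1          # conquer the left half
    return -1
```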
Dijkstra’s Algorithm

 Dijkstra’s algorithm is an example of a single-source shortest path (SSSP) algorithm, i.e., given a source vertex it finds the shortest path from the source to all other vertices.

 Time complexity of Dijkstra’s algorithm: O(E log V) with a binary heap.

 We can use Dijkstra’s shortest path algorithm to find all-pairs shortest paths by running it once from every vertex.

 The time complexity of this approach is O(VE log V), which can reach O(V³ log V) in the worst case (dense graphs with E ≈ V²).

 Another important differentiating factor between the algorithms is their suitability for distributed systems: unlike Floyd Warshall, Dijkstra’s algorithm is not well suited to a distributed implementation.

 Dijkstra’s algorithm does not work with negative edge weights.

 It is a greedy algorithm.
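A binary-heap sketch of Dijkstra's algorithm in Python (my own illustration; the adjacency-list input format is an assumption):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths with a binary heap: O(E log V).

    graph: {u: [(v, weight), ...]} with non-negative weights;
    Dijkstra is incorrect if any weight is negative.
    """
    dist = {source: 0}
    pq = [(0, source)]                    # (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)          # greedy: closest unsettled vertex
        if d > dist.get(u, float('inf')):
            continue                      # stale heap entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd              # relax edge (u, v)
                heapq.heappush(pq, (nd, v))
    return dist
```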
Floyd Warshall

 The Floyd Warshall algorithm is an example of an all-pairs shortest path algorithm: it computes the shortest path between every pair of nodes.

 Time complexity of Floyd Warshall: O(V³).

 Floyd Warshall works with negative edge weights, but not with negative cycles.

 Floyd Warshall can be implemented in a distributed system, making it suitable for structures such as a graph of graphs (as used in maps).
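A sketch of Floyd Warshall in Python (my own illustration; the edge-list input format is an assumption):

```python
def floyd_warshall(n, edges):
    """All-pairs shortest paths in O(V^3).

    Handles negative edge weights, but assumes no negative cycles.
    n: number of vertices (0..n-1); edges: [(u, v, w), ...].
    """
    INF = float('inf')
    # dist[i][j] starts as the direct edge weight (0 on the diagonal).
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
    # Allow each vertex k in turn as an intermediate point.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```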


PRIM'S ALGORITHM

 It starts building the MST from any node.

 An adjacency matrix, binary heap, or Fibonacci heap is used in Prim's algorithm.

 Prim's algorithm runs faster on dense graphs.

 Time complexity is O(E log V) with a binary heap and O(E + V log V) with a Fibonacci heap.

 The next node included must be connected to a node already traversed.

 It may consider a node several times before fixing its minimum distance.

 It is a greedy algorithm.
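A binary-heap sketch of Prim's algorithm in Python (my own illustration), showing how a node can be pushed several times before its cheapest connecting edge is chosen:

```python
import heapq

def prim_mst(graph, start):
    """Prim's MST with a binary heap: O(E log V).

    graph: {u: [(v, weight), ...]}, undirected (each edge listed in
    both directions). Returns the total MST weight.
    """
    visited = {start}
    # Candidate edges leaving the visited set, keyed by weight.
    pq = [(w, v) for v, w in graph[start]]
    heapq.heapify(pq)
    total = 0
    while pq and len(visited) < len(graph):
        w, v = heapq.heappop(pq)
        if v in visited:
            continue          # node seen again via a heavier edge
        visited.add(v)        # greedy: cheapest edge out of the tree
        total += w
        for x, wx in graph[v]:
            if x not in visited:
                heapq.heappush(pq, (wx, x))
    return total
```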
KRUSKAL ALGORITHM

 It starts building the MST from the minimum-weight edge in the graph.

 A disjoint-set (union-find) structure is used in Kruskal's algorithm.

 Kruskal's algorithm runs faster on sparse graphs.

 Time complexity is O(E log V).

 The next edge included may or may not be connected to the edges chosen so far, but it must not form a cycle.

 It traverses each edge only once and, based on whether it would form a cycle, either rejects or accepts it.

 It is also a greedy algorithm.
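A sketch of Kruskal's algorithm in Python (my own minimal union-find with path compression; union by rank is omitted for brevity):

```python
def kruskal_mst(n, edges):
    """Kruskal's MST using a disjoint-set structure: O(E log E).

    n: vertex count (0..n-1); edges: [(w, u, v), ...].
    Returns the total MST weight.
    """
    parent = list(range(n))

    def find(x):
        # Follow parent pointers to the root, compressing the path.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for w, u, v in sorted(edges):     # edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # accept: no cycle formed
            parent[ru] = rv
            total += w
        # else reject: edge would close a cycle
    return total
```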
BELLMAN FORD

 The calculation for node n involves knowledge of the link cost to every neighboring node plus the total path cost to each neighbor.

 Each node can maintain a set of costs and paths for every other node.

 It can exchange information with its neighbors.

 It can update its costs and paths based on the information received.

 Worst-case amount of computation to find the shortest path lengths:

 The algorithm iterates up to n times.

 Each iteration is done for n − 1 nodes.

 The minimization step requires considering up to n − 1 alternatives.

 The complexity is therefore O(n³).
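A sketch of Bellman Ford in Python matching the iteration count above, relaxing all edges up to n − 1 times (the edge-list format is an assumption):

```python
def bellman_ford(n, edges, source):
    """Bellman-Ford: up to n-1 rounds of relaxing every edge.

    O(V*E) overall, i.e. O(n^3) when E is on the order of n^2.
    Works with negative edge weights and detects negative cycles.
    edges: [(u, v, w), ...], vertices 0..n-1.
    """
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):            # up to n-1 iterations
        changed = False
        for u, v, w in edges:         # minimization over alternatives
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break                     # early exit once stable
    # One extra pass: any further improvement means a negative cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("graph contains a negative cycle")
    return dist
```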
0/1 KNAPSACK vs FRACTIONAL KNAPSACK

 About :

0/1 Knapsack: Given the weights and values of n items, put these items in a knapsack of capacity W so as to obtain the maximum total value in the knapsack. In other words, given two integer arrays val[0..n-1] and wt[0..n-1], which represent the values and weights associated with the n items respectively, and an integer W representing the knapsack capacity, find the maximum-value subset of val[] such that the sum of the weights of this subset is smaller than or equal to W. You cannot break an item: either pick the complete item or don't pick it (the 0-1 property).

Fractional Knapsack: Fractions of items can be taken, rather than having to make binary (0-1) choices for each item. The fractional knapsack problem is solvable by a greedy strategy, whereas the 0-1 problem is not. Compute the value per pound vi/wi for each item. Obeying the greedy strategy, we take as much as possible of the item with the highest value per pound. If the supply of that item is exhausted and we can still carry more, we take as much as possible of the item with the next-highest value per pound. After sorting the items by value per pound, the greedy algorithm runs in O(n log n) time.

 Time complexity :

0/1 Knapsack: O(nW)
Fractional Knapsack: O(n log n)

 Algorithm :

0/1 Knapsack:

Knapsack (n, W)
1. for w = 0 to W
2.     do V[0, w] ← 0
3. for i = 1 to n
4.     do V[i, 0] ← 0
5. for i = 1 to n
6.     for w = 1 to W
7.         do if (wi ≤ w and vi + V[i-1, w - wi] > V[i-1, w])
8.             then V[i, w] ← vi + V[i-1, w - wi]
9.             else V[i, w] ← V[i-1, w]
10. return V[n, W]

Fractional Knapsack:

Fractional-Knapsack (Array W, Array V, int M)
1. for i ← 1 to size(V)
2.     cost[i] ← V[i] / W[i]
3. Sort-Descending (items by cost)
4. i ← 1
5. while (i ≤ size(V))
6.     if W[i] ≤ M
7.         M ← M − W[i]
8.         total ← total + V[i]
9.     else
10.        total ← total + V[i] × (M / W[i])
11.        M ← 0
12.    i ← i + 1
13. return total
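Both strategies can be sketched as runnable Python (function names are mine; the DP table mirrors the 0/1 pseudocode, and the greedy loop mirrors the fractional one):

```python
def knapsack_01(values, weights, W):
    """0/1 knapsack by dynamic programming: O(nW) time and space."""
    n = len(values)
    # V[i][w] = best value using the first i items with capacity w.
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            if weights[i - 1] <= w:
                # Take item i, or skip it, whichever is better.
                V[i][w] = max(V[i - 1][w],
                              values[i - 1] + V[i - 1][w - weights[i - 1]])
            else:
                V[i][w] = V[i - 1][w]   # item i doesn't fit
    return V[n][W]

def knapsack_fractional(values, weights, M):
    """Fractional knapsack by greedy value-per-weight: O(n log n)."""
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for v, w in items:
        if w <= M:
            M -= w                     # take the whole item
            total += v
        else:
            total += v * (M / w)       # take only a fraction
            break
    return total
```

With values [60, 100, 120] and weights [10, 20, 30] at capacity 50, the 0/1 version must leave an item behind, while the fractional version tops up with two-thirds of the last item, illustrating why the greedy strategy works only for the fractional variant.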
