
Methods of Designing Algorithms

Course No.: 0714 09 CSE 2151, Course Title: Data Structures and Algorithms
Electronics and Communication Engineering Discipline, Khulna University, Khulna
Md. Farhan Sadique
Email: farhan@cse.ku.ac.bd
This lecture is not study material on its own. Use it as an outline and study from the sources mentioned after each
section header; the sections of this lecture have been prepared from those sources.

1. Divide and Conquer (Ellis Horowitz et al.: 3.1)


Given a function to compute on n inputs, the divide-and-conquer strategy suggests splitting the inputs into k
distinct subsets, 1 < k ≤ n, yielding k subproblems. These subproblems must be solved, and then a method
must be found to combine the subsolutions into a solution of the whole. If the subproblems are still relatively
large, the divide-and-conquer strategy can possibly be reapplied.
Binary search, merge sort, and quick sort are examples of the divide-and-conquer approach.
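As an illustration (not taken from the textbook), a minimal merge sort sketch in Python; the function name and the
sample input are my own:

def merge_sort(a):
    # Divide-and-conquer sort: split, solve subproblems, combine.
    if len(a) <= 1:                 # small problem: solve directly
        return list(a)
    mid = len(a) // 2
    left = merge_sort(a[:mid])      # conquer each half recursively
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0         # combine: merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]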
2. The Greedy Method (Ellis Horowitz et al.: 4.1, https://www.programiz.com/dsa/greedy-algorithm)
A greedy algorithm is an approach for solving a problem by selecting the best option available at the moment.
It does not consider whether this locally best choice will lead to the overall optimal result.
2.1. Knapsack Problem (Ellis Horowitz et al.: 4.3)
We are given n objects and a knapsack or bag. Object i has a weight wi and the knapsack has a capacity m. If a
fraction xi of object i is placed into the knapsack, then a profit of pixi is earned. The objective is to obtain a filling
of the knapsack that maximizes the total profit earned. Formally, the problem can be stated as

maximize Σ pixi (1 ≤ i ≤ n) subject to Σ wixi ≤ m, where 0 ≤ xi ≤ 1 and the profits pi and weights wi are positive.
For the knapsack instance in the textbook's example (Horowitz et al., Section 4.3), which solutions are obtained if
the greedy method selects objects by largest profit, and if it selects them by capacity (least weight first)?
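As a sketch (not taken from the textbook), the usual greedy strategy for the fractional knapsack orders objects by
profit-to-weight ratio. The function name and the sample instance below are illustrative:

def fractional_knapsack(profits, weights, capacity):
    # Greedy choice: consider objects in order of profit/weight ratio.
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i],
                   reverse=True)
    total_profit = 0.0
    remaining = capacity
    fractions = [0.0] * len(profits)
    for i in order:
        if remaining <= 0:
            break
        take = min(weights[i], remaining)   # take as much of object i as fits
        fractions[i] = take / weights[i]
        total_profit += profits[i] * fractions[i]
        remaining -= take
    return fractions, total_profit

# Illustrative instance: 3 objects, capacity 20
print(fractional_knapsack([25, 24, 15], [18, 15, 10], 20))

Changing the sort key to profits[i] (largest profit first) or to -weights[i] reversed (least weight first) gives the two
strategies asked about in the question above, so the same loop can be reused to compare them.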

2.2. Minimum-cost Spanning Trees (Ellis Horowitz et al.: 4.6)


Prim’s algorithm and Kruskal’s algorithm are used to obtain a minimum-cost spanning tree of a graph. These
have been discussed in an earlier lecture.
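For reference, a minimal sketch of Kruskal's greedy algorithm with a simple union-find, in Python; the function name
and the example graph are illustrative, not taken from the textbook:

def kruskal(num_vertices, edges):
    # Kruskal's MST: repeatedly add the cheapest edge that does not form a
    # cycle. edges is a list of (weight, u, v) tuples for an undirected graph.
    parent = list(range(num_vertices))

    def find(x):                        # union-find root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):       # edges in nondecreasing order of cost
        ru, rv = find(u), find(v)
        if ru != rv:                    # endpoints lie in different trees
            parent[ru] = rv             # union the two trees
            mst.append((u, v, w))
            total += w
    return mst, total

# Illustrative graph: 4 vertices, weighted undirected edges
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))                # MST edges and total cost 6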
3. Dynamic Programming (Ellis Horowitz et al.: 5.1, https://www.javatpoint.com/dynamic-programming)
Dynamic programming is an algorithm design method that can be used when the solution to a problem can be
viewed as the result of a sequence of decisions.
Dynamic programming is a technique for solving a complex problem by first breaking it into a collection of
simpler subproblems, solving each subproblem just once, and storing their solutions to avoid repeated
computation.
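As a small illustration (not from the cited sources), memoizing the Fibonacci recurrence shows the idea of solving
each subproblem once and storing its solution:

from functools import lru_cache

@lru_cache(maxsize=None)        # store each subproblem's solution
def fib(n):
    if n < 2:                   # base cases
        return n
    return fib(n - 1) + fib(n - 2)   # reuse stored subresults

print(fib(40))   # 102334155, computed with only 41 distinct subproblems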
3.1. Single-Source Shortest Paths (Ellis Horowitz et al.: 5.4)
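Section 5.4 of Horowitz et al. treats single-source shortest paths with general (possibly negative) edge weights; the
dynamic-programming recurrence there corresponds to the Bellman-Ford method. A minimal sketch in Python, assuming
the graph is given as (u, v, cost) triples; the example digraph is illustrative:

def bellman_ford(num_vertices, edges, source):
    # dist holds the cost of the best path found so far from source to each
    # vertex; relaxing every edge n-1 times yields the shortest-path costs
    # when the graph has no negative-cost cycles.
    INF = float("inf")
    dist = [INF] * num_vertices
    dist[source] = 0
    for _ in range(num_vertices - 1):
        for u, v, cost in edges:
            if dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost
    return dist

# Illustrative digraph with negative edges (no negative cycles)
edges = [(0, 1, 6), (0, 2, 5), (1, 3, -1), (2, 1, -2), (3, 4, 3)]
print(bellman_ford(5, edges, 0))   # [0, 3, 5, 2, 5]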


Bibliography
• Book: Fundamentals of Computer Algorithms, Second Edition – Ellis Horowitz, Sartaj Sahni and Sanguthevar
Rajasekaran.
• Website: https://www.programiz.com/dsa/greedy-algorithm
• Website: https://www.javatpoint.com/dynamic-programming
