
HND in Computing and Software Engineering
SEC5213: Data Structures and Algorithm
Level: 5
Credit Value: 20

Lesson 04 – Algorithmic paradigms

Lecturer: Ms. Sathananthy
Learning Outcome 01

Evaluate algorithms and data structures in terms of time complexity

Outline

Divide and Conquer

Greedy Approach

Dynamic Programming

Algorithmic paradigms

General approaches to the construction of efficient solutions to problems.

Different problems require the use of different kinds of techniques. A good programmer chooses among these techniques based on the type of problem. Some commonly used techniques are:
1. Divide and conquer
2. Greedy algorithms (greediness is a design technique rather than a single algorithm)
3. Dynamic programming

Divide and Conquer

In the divide-and-conquer approach, the problem at hand is divided into smaller sub-problems, and each sub-problem is then solved independently.

When we keep dividing the sub-problems into even smaller sub-problems, we eventually reach a stage where no further division is possible.

Those "atomic", smallest-possible sub-problems are solved.

The solutions of all the sub-problems are finally merged in order to obtain the solution to the original problem.


Divide and Conquer
Broadly, the divide-and-conquer approach is a three-step process.
Divide/Break
This step involves breaking the problem into smaller sub-problems. Sub-problems should represent
a part of the original problem. This step generally takes a recursive approach to divide the problem
until no sub-problem is further divisible. At this stage, sub-problems become atomic in nature but
still represent some part of the actual problem.
Conquer/Solve
This step receives a lot of smaller sub-problems to be solved. Generally, at this level, the problems
are considered 'solved' on their own.
Merge/Combine
When the smaller sub-problems are solved, this stage recursively combines their solutions until they form a solution to the original problem. This algorithmic approach works recursively, and the conquer and merge steps work so closely together that they often appear as one.
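
To make the three steps concrete, here is a minimal merge sort sketch in Python (an illustration added here, not taken from the slides): divide splits the list in half, conquer sorts each half recursively, and merge combines the two sorted halves.

```python
def merge_sort(items):
    """Sort a list using divide and conquer (merge sort)."""
    # Base case: a list of 0 or 1 elements is already sorted (the "atomic" sub-problem).
    if len(items) <= 1:
        return items

    # Divide: split the list into two halves.
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # Conquer: sort each half recursively.
    right = merge_sort(items[mid:])

    # Merge: combine the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged


print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```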
Divide and Conquer
Examples: The following computer algorithms are based on the divide-and-conquer approach:

• Merge Sort

• Quick Sort

• Binary Search

• Strassen's Matrix Multiplication

• Closest pair (points)

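As a further illustrative sketch (not from the slides), recursive binary search divides a sorted list around its middle element and conquers only the half that can still contain the target.

```python
def binary_search(sorted_items, target, low=0, high=None):
    """Return the index of target in sorted_items, or -1 if it is absent."""
    if high is None:
        high = len(sorted_items) - 1
    # Base case: the search range is empty, so the target is not present.
    if low > high:
        return -1

    # Divide: inspect the middle element and discard the half that cannot hold the target.
    mid = (low + high) // 2
    if sorted_items[mid] == target:
        return mid
    elif sorted_items[mid] < target:
        return binary_search(sorted_items, target, mid + 1, high)  # search the right half
    else:
        return binary_search(sorted_items, target, low, mid - 1)   # search the left half


print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```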
Greedy Algorithms

An algorithm is designed to achieve an optimum solution for a given problem. In the greedy approach, decisions are made from the given solution domain: being greedy, the choice that seems to provide an optimum solution at that moment is chosen.
Greedy algorithms try to find a localized optimum solution, which may eventually lead to a globally optimized solution. In general, however, greedy algorithms do not guarantee globally optimized solutions.
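
For example (an illustrative sketch, not from the slides), a greedy coin-change routine always takes the largest coin that still fits. With the hypothetical denominations 1, 3 and 4 it returns three coins for the amount 6 (4 + 1 + 1), even though two coins (3 + 3) would be optimal, showing that the locally optimal choice need not be globally optimal.

```python
def greedy_coin_change(amount, denominations):
    """Make change by always taking the largest coin that still fits (the greedy choice)."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            coins.append(coin)   # locally optimal choice: biggest coin first
            amount -= coin
    return coins if amount == 0 else None  # None if exact change is impossible


print(greedy_coin_change(6, [1, 3, 4]))  # [4, 1, 1] - three coins, although 3 + 3 uses only two
```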

Greedy Algorithms

Idea: find a solution by always making the choice that looks optimal at the moment; don't look ahead, never go back.

A greedy algorithm, as the name suggests, always makes the choice that seems to be the best at that moment. This means that it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution.

It is the problem-solving heuristic of making the locally optimal choice at each stage. Example: the travelling salesman problem.
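
As an illustrative sketch (not from the slides), a nearest-neighbour heuristic for the travelling salesman problem makes exactly this kind of locally optimal choice: from the current city it always moves to the closest unvisited city, without looking ahead or going back.

```python
def nearest_neighbour_tour(distances, start=0):
    """Build a TSP tour greedily: always visit the closest unvisited city next.

    distances is a square matrix (list of lists) of pairwise distances.
    The tour returned is generally not optimal - it is only a greedy heuristic.
    """
    n = len(distances)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        current = tour[-1]
        # Locally optimal choice: the nearest city we have not visited yet.
        next_city = min(unvisited, key=lambda city: distances[current][city])
        tour.append(next_city)
        unvisited.remove(next_city)
    return tour


example = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
print(nearest_neighbour_tour(example))  # [0, 1, 3, 2]
```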

Greedy Algorithms
Examples

Travelling Salesman Problem

Prim's Minimal Spanning Tree Algorithm

Kruskal's Minimal Spanning Tree Algorithm

Dijkstra's Shortest Path Algorithm

Graph - Map Coloring

Graph - Vertex Cover

Knapsack Problem

Job Scheduling Problem
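
To make one of these concrete, the following sketch (my own illustration, not from the slides) solves the fractional knapsack problem greedily; for this particular variant, taking items in decreasing order of value per unit weight happens to be globally optimal.

```python
def fractional_knapsack(capacity, items):
    """Greedy fractional knapsack: items is a list of (value, weight) pairs.

    Taking items in decreasing order of value/weight ratio is optimal
    when fractions of items are allowed.
    """
    # Greedy choice: best value-to-weight ratio first.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total_value = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)            # take as much of this item as fits
        total_value += value * (take / weight)
        capacity -= take
    return total_value


print(fractional_knapsack(50, [(60, 10), (100, 20), (120, 30)]))  # 240.0
```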


Dynamic Programming

 The dynamic programming approach is similar to divide and conquer in breaking the problem down into smaller and yet smaller possible sub-problems. But unlike divide and conquer, these sub-problems are not solved independently. Rather, the results of these smaller sub-problems are remembered and reused for similar or overlapping sub-problems.
 Dynamic programming is used where we have problems that can be divided into similar sub-problems, so that their results can be reused. Mostly, these algorithms are used for optimization. Before solving the sub-problem at hand, a dynamic algorithm will examine the results of previously solved sub-problems. The solutions of the sub-problems are combined in order to achieve the best solution.
Dynamic Programming

Examples
Fibonacci number series

Knapsack problem

Tower of Hanoi

All pair shortest path by Floyd-Warshall

Shortest path by Dijkstra

Project scheduling

Dynamic programming can be used in both a top-down and a bottom-up manner.

Dynamic Programming

• Dynamic programming is basically recursion plus common sense. Recursion allows you to express the value of a function in terms of other values of that function.
• The common sense tells you that if you implement your function in such a way that the recursive results are computed in advance and stored for easy access, it will make your program faster.
• This is what we call memoization - memorizing the results of some specific states, which can then be accessed later to solve other sub-problems.

Dynamic Programming
The intuition behind dynamic programming is that we trade space for time: instead of recalculating all the states, which takes a lot of time but no space, we use space to store the results of the sub-problems so that we save time later.
Let's try to understand this by taking the example of Fibonacci numbers.
Fibonacci(n) = 1, if n = 0
Fibonacci(n) = 1, if n = 1
Fibonacci(n) = Fibonacci(n - 1) + Fibonacci(n - 2), otherwise
So, the first few numbers in this series will be: 1, 1, 2, 3, 5, 8, 13, 21, ... and so on.
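
Following this definition, the sketch below (an illustration, not from the slides) computes Fibonacci numbers top-down with memoization and bottom-up with a table; both reuse previously solved sub-problems instead of recomputing them.

```python
from functools import lru_cache


# Top-down: plain recursion plus memoization of previously solved sub-problems.
@lru_cache(maxsize=None)
def fib_top_down(n):
    if n <= 1:          # Fibonacci(0) = Fibonacci(1) = 1, as defined above
        return 1
    return fib_top_down(n - 1) + fib_top_down(n - 2)


# Bottom-up: fill a table of results from the smallest sub-problems upwards.
def fib_bottom_up(n):
    table = [1, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]


print([fib_top_down(i) for i in range(8)])   # [1, 1, 2, 3, 5, 8, 13, 21]
print([fib_bottom_up(i) for i in range(8)])  # [1, 1, 2, 3, 5, 8, 13, 21]
```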

END

