Design and Analysis of Algorithms: Course Code
Why Design and Analysis of Algorithms?
• Theoretical importance
• Practical importance
Algorithm
8. Analyzing an Algorithm
Time efficiency
Space efficiency
Simplicity and generality
9. Coding an algorithm
Fundamentals of Analysis of Algorithms
⚫ The Analysis Framework
⚫ Time efficiency, also called time complexity, indicates how fast the
algorithm in question runs.
⚫ Space efficiency, also called space complexity, refers to the amount of
memory units required by the algorithm in addition to the space
needed for its input and output.
1. Measuring an Input’s Size
(Note that the best case does not mean the smallest input; it means the
input of size n for which the algorithm runs the fastest.)
Fundamentals of Analysis of Algorithms
⚫ Average-Case
⚫ Take all possible inputs and calculate the computing time for each of
the inputs. Sum all the calculated values and divide the sum by the
total number of inputs.
Important Problem Types /
Classification of Problems
⚫Sorting
⚫String Processing
⚫Searching
⚫Recurrences
⚫Shortest paths in a graph
⚫Minimum spanning tree
⚫Traveling salesman problem
⚫Knapsack problem
⚫Chess
⚫Towers of Hanoi
⚫Geometric and numerical problems
Algorithm design / problem solving
strategies
⚫ Brute force
⚫ Greedy approach
1. O (big oh),
2. Ω (big omega), and
3. Θ (big theta).
Asymptotic Notations
O-notation
Mathematical Analysis of Nonrecursive
Algorithms
EXAMPLE 1 Consider the problem of finding the value of the largest element
in a list of n numbers. For simplicity, we assume that the list is implemented as
an array. The following is pseudocode of a standard algorithm for solving the
problem.
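The pseudocode referred to above is not reproduced in the text; a minimal sketch of such an algorithm in Python (a plain linear scan, with the comparison as the basic operation) might look like:

```python
def max_element(a):
    """Return the largest value in a non-empty list.

    The basic operation is the comparison a[i] > maxval,
    executed exactly n - 1 times for a list of size n.
    """
    maxval = a[0]
    for i in range(1, len(a)):
        if a[i] > maxval:   # basic operation: one comparison per iteration
            maxval = a[i]
    return maxval
```

Since the comparison count depends only on n, no separate worst-, best-, or average-case analysis is needed for this algorithm.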
⚫2. Identify the algorithm’s basic operation. (As a rule, it is located in the
innermost loop.)
⚫3. Check whether the number of times the basic operation is executed
depends only on the size of an input. If it also depends on some additional
property, the worst-case, average-case, and, if necessary, best-case
efficiencies have to be investigated separately.
⚫4. Set up a sum expressing the number of times the algorithm’s basic
operation is executed.
⚫5. Using standard formulas and rules of sum manipulation, either find a
closed form formula for the count or, at the very least, establish its order of
growth.
Mathematical Analysis of Recursive
Algorithms
General Plan for Analyzing the Time
Efficiency of Recursive Algorithms
1. Decide on a parameter (or parameters) indicating an input’s
size.
2. Identify the algorithm’s basic operation.
3. Check whether the number of times the basic operation is
executed can vary on different inputs of the same size; if it
can, the worst-case, average-case, and best-case efficiencies
must be investigated separately.
4. Set up a recurrence relation, with an appropriate initial
condition, for the number of times the basic operation is
executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its solution.
Tower of Hanoi
Rules:
1. Only one disk can be moved among
the towers at any given time.
2. Only the "top" disk can be removed.
3. No larger disk may be placed on top of a
smaller disk.
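The rules above translate directly into the classic recursive solution; a short sketch in Python (the recurrence M(n) = 2M(n-1) + 1 gives 2^n - 1 moves):

```python
def hanoi(n, source, target, spare, moves=None):
    """Move n disks from source to target; returns the list of moves."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))              # move the single (top) disk
    else:
        hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller disks
        moves.append((source, target))              # move the largest disk
        hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks
    return moves
```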
Prepared By
SWEET SUBHASHREE
Asst. Prof., Dept. of IT, MIT-SOE
Divide and Conquer
• Recursive in structure
• Divide the problem into sub-problems that are similar to the
original but smaller in size
• Conquer the sub-problems by solving them recursively. If
they are small enough, just solve them in a straightforward
manner.
• Combine the solutions to create a solution to the original
problem
• If the array has zero or one element, it is already sorted; return
• Else
• Pick one element to use as pivot
• Partition elements into two sub-arrays:
• Elements less than or equal to pivot
• Elements greater than pivot
• Quick-sort the two sub-arrays
• Return results
RANDOMIZED-QUICKSORT(A, p, r)
if p < r
    then q ← RANDOMIZED-PARTITION(A, p, r)
         RANDOMIZED-QUICKSORT(A, p, q)
         RANDOMIZED-QUICKSORT(A, q + 1, r)
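A runnable sketch of the same idea in Python, using a Hoare-style partition so the recursive calls match the (A, p, q) and (A, q+1, r) pattern above:

```python
import random

def _hoare_partition(a, p, r):
    x = a[p]                       # pivot value
    i, j = p - 1, r + 1
    while True:
        j -= 1
        while a[j] > x:
            j -= 1
        i += 1
        while a[i] < x:
            i += 1
        if i < j:
            a[i], a[j] = a[j], a[i]
        else:
            return j               # p <= j < r, so both recursive calls shrink

def randomized_quicksort(a, p, r):
    """Sort a[p..r] in place."""
    if p < r:
        k = random.randint(p, r)   # randomized pivot choice
        a[p], a[k] = a[k], a[p]
        q = _hoare_partition(a, p, r)
        randomized_quicksort(a, p, q)
        randomized_quicksort(a, q + 1, r)
```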
Square-Matrix-Multiplication(A, B)
{
    n = A.rows
    let C be a new n × n matrix
    for i = 1 to n
        for j = 1 to n
            C[i, j] = 0
            for k = 1 to n
                C[i, j] = C[i, j] + A[i, k] * B[k, j]
    return C
}
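The same Θ(n³) procedure in runnable Python (plain nested lists, no libraries):

```python
def square_matrix_multiply(A, B):
    """Multiply two n x n matrices given as lists of rows."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):               # innermost loop: the basic operation
                C[i][j] += A[i][k] * B[k][j]
    return C
```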
Strassen’s Matrix Multiplication
• Strassen showed that 2×2 matrix multiplication can be
accomplished in 7 multiplications and 18 additions or
subtractions, which leads to an n^(log2 7) = n^2.807 algorithm
for n×n matrices.
• Divide-and-conquer is a general algorithm design paradigm:
• Divide: divide the input data S into two or more disjoint subsets
S1, S2, …
• Recur: solve the subproblems recursively
• Conquer: combine the solutions for S1, S2, …, into a solution
for S
• The base case for the recursion is subproblems of constant size
• Analysis can be done using recurrence equations.
Prepared By - REETIKA KERKETTA
A short list of categories
◼ Algorithm types we will consider include:
◼ Simple recursive algorithms
◼ Backtracking algorithms
◼ Divide and conquer algorithms
◼ Dynamic programming algorithms
◼ Greedy algorithms
◼ Branch and bound algorithms
◼ Brute force algorithms
◼ Randomized algorithms
Optimization problems
Example: Counting money
◼ Suppose you want to count out a certain amount of
money, using the fewest possible bills and coins
◼ A greedy algorithm to do this would be:
At each step, take the largest possible bill or coin
that does not overshoot
◼ Example: To make $6.39, you can choose:
◼ a $5 bill
◼ a $1 bill, to make $6
◼ a 25¢ coin, to make $6.25
◼ a 10¢ coin, to make $6.35
◼ four 1¢ coins, to make $6.39
◼ For US money, the greedy algorithm always gives
the optimum solution
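The greedy procedure can be sketched in a few lines of Python (amounts in cents to avoid floating point; denominations are the $5 and $1 bills plus the 25¢, 10¢, 5¢, and 1¢ coins from the example):

```python
def greedy_change(amount, denominations):
    """Repeatedly take the largest denomination that does not overshoot."""
    result = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            amount -= d
            result.append(d)
    return result

# US bills and coins used in the $6.39 example, in cents
us = [500, 100, 25, 10, 5, 1]
```

For 639 cents this yields a $5 bill, a $1 bill, a quarter, a dime, and four pennies, matching the slide.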
A failure of the greedy algorithm
◼ In some (fictional) monetary system, “krons” come
in 1 kron, 7 kron, and 10 kron coins
◼ Using a greedy algorithm to count out 15 krons,
you would get
◼ A 10 kron piece
◼ Five 1 kron pieces, for a total of 15 krons
◼ This requires six coins
◼ A better solution would be to use two 7 kron pieces
and one 1 kron piece
◼ This only requires three coins
◼ The greedy algorithm results in a solution, but not
in an optimal solution
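The kron example can be checked mechanically; a small sketch comparing the greedy count with the true minimum number of coins (found by a classic dynamic program):

```python
def greedy_count(amount, coins):
    """Number of coins the greedy algorithm uses."""
    n = 0
    for c in sorted(coins, reverse=True):
        n += amount // c
        amount %= c
    return n

def min_coins(amount, coins):
    """Dynamic programming: best[a] = fewest coins summing to a."""
    best = [0] + [float('inf')] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a:
                best[a] = min(best[a], best[a - c] + 1)
    return best[amount]
```

For 15 krons with coins {1, 7, 10}, greedy uses six coins while the optimum (two 7s and a 1) uses three.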
A scheduling problem
◼ You have to run nine jobs, with running times of 3, 5, 6, 10, 11,
14, 15, 18, and 20 minutes
◼ You have three processors on which you can run these jobs
◼ You decide to do the longest-running jobs first, on whatever
processor is available
P1: 20, 10, 3
P2: 18, 11, 6
P3: 15, 14, 5
◼ Time to completion: 18 + 11 + 6 = 35 minutes
◼ Doing the shortest-running jobs first instead gives:
P1: 3, 10, 15
P2: 5, 11, 18
P3: 6, 14, 20
◼ That wasn’t such a good idea; time to completion is now
6 + 14 + 20 = 40 minutes
◼ Note, however, that the greedy algorithm itself is fast
◼ All we had to do at each stage was pick the minimum or maximum
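The two orderings above can be simulated; a sketch that assigns each job, in the given order, to whichever processor frees up first, and reports the completion time (makespan):

```python
import heapq

def makespan(jobs, processors=3):
    """Greedy list scheduling: each job goes to the earliest-free processor."""
    free = [0] * processors            # next free time of each processor
    heapq.heapify(free)
    finish = 0
    for t in jobs:
        start = heapq.heappop(free)    # earliest-available processor
        heapq.heappush(free, start + t)
        finish = max(finish, start + t)
    return finish
```

Longest-first gives a makespan of 35 minutes on the nine jobs; shortest-first gives 40.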
An optimum solution
◼ Better solutions do exist:
P1: 20, 14 (34 minutes)
P2: 18, 11, 5 (34 minutes)
P3: 15, 10, 6, 3 (34 minutes)
◼ This solution is clearly optimal (why?)
◼ Clearly, there are other optimal solutions (why?)
◼ How do we find such a solution?
◼ One way: Try all possible assignments of jobs to processors
◼ Unfortunately, this approach can take exponential time
Huffman encoding
◼ The Huffman encoding algorithm is a greedy algorithm
◼ You always pick the two smallest numbers to combine
◼ Average bits/char:
0.22*2 + 0.12*3 + 0.24*2 + 0.06*4 + 0.27*2 + 0.09*4 = 2.42
◼ The resulting codes:
A = 00, B = 100, C = 01, D = 1010, E = 11, F = 1011
(character counts out of 100: A = 22, B = 12, C = 24, D = 6, E = 27, F = 9)
◼ The Huffman algorithm finds an optimal solution
[Figure: the Huffman tree for A–F, with internal node weights 15, 27, 46, 54, 100; not reproducible in text]
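The 2.42 figure can be reproduced with a few lines of Python: repeatedly merge the two smallest weights; the sum of all merge weights equals the total number of encoded bits (a standard property of Huffman trees):

```python
import heapq

def huffman_total_bits(weights):
    """Total encoded length = sum of the weights of all internal (merged) nodes."""
    heap = list(weights)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)   # the two smallest weights...
        b = heapq.heappop(heap)
        total += a + b            # ...are merged, at cost a + b
        heapq.heappush(heap, a + b)
    return total
```

With the character counts 22, 12, 24, 6, 27, 9 (out of 100), the total is 242 bits, i.e. 2.42 bits/char.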
Minimum spanning tree
◼ A minimum spanning tree is a least-cost subset of the edges of a
graph that connects all the nodes
◼ Start by picking any node and adding it to the tree
◼ Repeatedly: Pick any least-cost edge from a node in the tree to a
node not in the tree, and add the edge and new node to the tree
◼ Stop when all nodes have been added to the tree
◼ The result is a least-cost (3 + 3 + 2 + 2 + 2 = 12) spanning tree
[Figure: the example graph, with edge weights 1 to 6; not reproducible in text]
◼ Minimum spanning tree: At each new node, must include new edges and
keep them sorted, which is O(n log n) overall
◼ Therefore, MST is O(n log n) + O(n) = O(n log n)
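A sketch of this tree-growing (Prim-style) procedure using a priority queue of candidate edges; the example graph here is made up for illustration:

```python
import heapq

def prim_mst_cost(graph, start):
    """graph: dict node -> list of (weight, neighbour). Returns total MST cost."""
    visited = {start}
    frontier = list(graph[start])
    heapq.heapify(frontier)
    cost = 0
    while frontier and len(visited) < len(graph):
        w, v = heapq.heappop(frontier)     # least-cost edge leaving the tree
        if v in visited:
            continue
        visited.add(v)
        cost += w
        for edge in graph[v]:
            heapq.heappush(frontier, edge)
    return cost

g = {                                       # a small made-up example graph
    'a': [(3, 'b'), (5, 'c')],
    'b': [(3, 'a'), (2, 'c'), (3, 'd')],
    'c': [(5, 'a'), (2, 'b'), (2, 'd')],
    'd': [(3, 'b'), (2, 'c'), (2, 'e')],
    'e': [(2, 'd')],
}
```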
Other greedy algorithms
◼ Dijkstra’s algorithm for finding the shortest path in a
graph
◼ Always takes the shortest edge connecting a known node to an
unknown node
◼ Kruskal’s algorithm for finding a minimum-cost
spanning tree
◼ Always tries the lowest-cost remaining edge
◼ Prim’s algorithm for finding a minimum-cost spanning
tree
◼ Always takes the lowest-cost edge between nodes in the
spanning tree and nodes not yet in the spanning tree
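Dijkstra's algorithm has the same greedy shape; a minimal sketch (the example graph in the test is made up for illustration):

```python
import heapq

def dijkstra(graph, source):
    """graph: dict node -> list of (weight, neighbour). Returns shortest distances."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale queue entry; skip it
        for w, v in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd              # found a shorter path to v
                heapq.heappush(heap, (nd, v))
    return dist
```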
The End
Dynamic Programming
Overview of Serial Dynamic Programming
Computing entries of table F for the 0/1 knapsack problem. The computation of
entry F[i,j] requires communication with processing elements containing
entries F[i-1,j] and F[i-1,j-wi].
0/1 Knapsack Problem
• We have:
Optimal Matrix-chain multiplication Problem
and the time to compute n/p entries of the table in the lth diagonal is
ltcn/p.
• Because we have probabilities of searches for each key
and each dummy key, we can determine the expected
cost of a search in a given binary search tree T. Let us
assume that the actual cost of a search is the number of
nodes examined, i.e., the depth of the node found by the
search in T, plus 1. Then the expected cost of a search in
T is:
• E[search cost in T]
= ∑(i=1..n) pi · (depthT(ki) + 1) + ∑(i=0..n) qi · (depthT(di) + 1)
= 1 + ∑(i=1..n) pi · depthT(ki) + ∑(i=0..n) qi · depthT(di)
(since the probabilities pi and qi sum to 1),
where depthT denotes a node’s depth in the tree T.
[Figure (a) and Figure (b): two binary search trees over the keys k1 to k5 with dummy leaves d0 to d5; both trees have root k2. Figure not reproducible in text.]
Cost of each node in Figure (a):
Node  Depth  Probability  Cost = Probability * (Depth + 1)
k1    1      0.15         0.30
k2    0      0.10         0.10
k3    2      0.05         0.15
k4    1      0.10         0.20
k5    2      0.20         0.60
d0    2      0.05         0.15
d1    2      0.10         0.30
d2    3      0.05         0.20
d3    3      0.05         0.20
d4    3      0.05         0.20
d5    3      0.10         0.40
• And the total cost = 0.30 + 0.10 + 0.15 + 0.20 + 0.60 +
0.15 + 0.30 + 0.20 + 0.20 + 0.20 + 0.40 = 2.80
• So Figure (a) costs 2.80; Figure (b), on the other hand, costs
2.75, and that tree is in fact optimal.
• We can see that the height of (b) is greater than that of (a), and the
key k5 has the greatest search probability of any key, yet
the root of the OBST shown is k2. (The lowest expected
cost of any BST with k5 at the root is 2.85.)
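The 2.80 total is just the weighted sum over the nodes; a quick check in Python, with depths and probabilities as in the cost table for Figure (a) (d1 at depth 2, consistent with its listed cost of 0.30):

```python
# (depth, probability) for each node of the tree in Figure (a)
nodes = {
    'k1': (1, 0.15), 'k2': (0, 0.10), 'k3': (2, 0.05),
    'k4': (1, 0.10), 'k5': (2, 0.20),
    'd0': (2, 0.05), 'd1': (2, 0.10), 'd2': (3, 0.05),
    'd3': (3, 0.05), 'd4': (3, 0.05), 'd5': (3, 0.10),
}
# expected search cost: each node is examined (depth + 1) times per hit
expected_cost = sum(p * (depth + 1) for depth, p in nodes.values())
```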
Step1:The structure of an OBST
Step 1: The structure of an OBST
[Figure: the triangular root[i, j] table for the example, i = 0..5; not reproducible in text]
The expected output is a binary matrix which has 1s for the blocks where queens are placed.
For example, following is the output matrix for the above 4 queen solution.
{ 0, 1, 0, 0}
{ 0, 0, 0, 1}
{ 1, 0, 0, 0}
{ 0, 0, 1, 0}
Backtracking Algorithm: The idea is to place queens one by one in different columns,
starting from the leftmost column. When we place a queen in a column, we check for clashes
with already placed queens. In the current column, if we find a row for which there is no clash,
we mark this row and column as part of the solution. If we do not find such a row due to
clashes then we backtrack and return false.
1) Start in the leftmost column
2) If all queens are placed, return true
3) Try all rows in the current column. Do the following for every tried row:
a) If the queen can be placed safely in this row, then mark this [row,
column] as part of the solution and recursively check if placing the
queen here leads to a solution.
b) If placing the queen in [row, column] leads to a solution, then return
true.
c) If placing the queen doesn't lead to a solution, then unmark this [row,
column] (backtrack) and go to step (a) to try other rows.
4) If all rows have been tried and nothing worked, return false to trigger
backtracking.
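The steps above can be sketched directly in Python; the safety check only looks at columns to the left of the current one, because queens are placed column by column:

```python
def solve_n_queens(n):
    """Return an n x n 0/1 board for one solution, or None if none exists."""
    board = [[0] * n for _ in range(n)]

    def safe(row, col):
        for c in range(col):                           # columns already filled
            r = next(r for r in range(n) if board[r][c])
            if r == row or abs(r - row) == abs(c - col):
                return False                           # same row or diagonal
        return True

    def place(col):
        if col == n:
            return True                                # all queens placed
        for row in range(n):
            if safe(row, col):
                board[row][col] = 1
                if place(col + 1):
                    return True
                board[row][col] = 0                    # backtrack, try next row
        return False

    return board if place(0) else None
```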
6/9/2021
Computational Complexity
NP Class Problems
P-Class Problems
Is P = NP ?
If P = NP is proved, then the
security domain becomes vulnerable to attacks
Reduction
[Diagram: a reduction from problem A to problem B, and the relationship between the NP and NP-hard classes; not reproducible in text]
NP Complete
Summary
Computational complexity
Polynomial (P-Class)
Non-deterministic polynomial (NP-Class)
Backtracking
Backtracking - In general
Backtracking algorithm
(Given an instance of any computational problem P and data D corresponding to the instance,
all the constraints that need to be satisfied in order to solve the problem are represented by C.)
The algorithm begins to build up a solution, starting with an empty solution set S = {}.
1. Add to S the first move that is still left (all possible moves are added to S one by one). This
now creates a new sub-tree s in the search tree of the algorithm.
2. Check if S + s satisfies each of the constraints in C. If it does, go to step 3. Else, the entire
sub-tree s is useless, so recurse back to step 1 using argument S.
3. In the event of “eligibility” of the newly formed sub-tree s, recurse back to step 1, using
argument S + s.
4. If the check for S + s returns that it is a solution for the entire data D, output it and terminate
the program. If not, then return that no solution is possible with the current S and hence discard it.
N Queen Problem
Backtracking algorithm for N queen’s problem
TSP nearest neighbour method
Step 1:
1) Begin at any city and visit the nearest city.
2) Then go to the unvisited city closest to the city
most recently visited.
3) Continue in this fashion until a tour is obtained.
Step 2:
1) After applying this procedure, repeat it beginning
at a different city.
2) Take the best tour found.
Example
     A    B    C    D    E
A   --  132  217  164   58
B  132   --  290  201   79
C  217  290   --  113  303
D  164  201  113   --  196
E   58   79  303  196   --
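Running the nearest-neighbour method on this table from every starting city and keeping the best tour (Step 2) can be sketched as:

```python
def nearest_neighbour_tour(dist, start):
    """dist: dict of dicts of distances. Returns (tour, length)."""
    cities = set(dist) - {start}
    tour, length, current = [start], 0, start
    while cities:
        nxt = min(cities, key=lambda c: dist[current][c])  # closest unvisited
        length += dist[current][nxt]
        tour.append(nxt)
        cities.remove(nxt)
        current = nxt
    length += dist[current][start]                         # close the tour
    tour.append(start)
    return tour, length

D = {
    'A': {'B': 132, 'C': 217, 'D': 164, 'E': 58},
    'B': {'A': 132, 'C': 290, 'D': 201, 'E': 79},
    'C': {'A': 217, 'B': 290, 'D': 113, 'E': 303},
    'D': {'A': 164, 'B': 201, 'C': 113, 'E': 196},
    'E': {'A': 58, 'B': 79, 'C': 303, 'D': 196},
}
best = min((nearest_neighbour_tour(D, s) for s in D), key=lambda t: t[1])
```

Starting from A gives the tour A-E-B-D-C-A of length 668, which is also the best over all five starts.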
Tours obtained
Hamilton Circuit Problem
Given a graph G = (V, E), we have to find a Hamiltonian circuit using the backtracking approach.
We start our search from an arbitrary vertex, say 'a'. This vertex 'a' becomes the root of our
implicit tree.
The first element of our partial solution is the first intermediate vertex of the Hamiltonian cycle
that is to be constructed.
The next adjacent vertex is selected in alphabetical order. If at any stage an arbitrary vertex
makes a cycle with any vertex other than vertex 'a', then we say that a dead end is reached.
In this case, we backtrack one step; the element must be removed from the partial solution,
and the search begins again by selecting another vertex.
The search using backtracking is successful if a Hamiltonian cycle is obtained.
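A sketch of the search described above, with adjacency given as sets (the example graphs in the test are made up):

```python
def hamiltonian_cycle(adj, start):
    """adj: dict vertex -> set of neighbours. Returns a cycle as a list, or None."""
    path = [start]

    def extend():
        if len(path) == len(adj):
            return start in adj[path[-1]]    # must close back to the start
        for v in sorted(adj[path[-1]]):      # next vertices in alphabetical order
            if v not in path:
                path.append(v)
                if extend():
                    return True
                path.pop()                   # dead end: backtrack one step
        return False

    return path + [start] if extend() else None
```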
Subset Sum Problem
The Subset-sum Problem
Problem statement
given the set S = {x1, x2, x3, …, xn} of positive integers and t, is
there a subset of S that adds up to t?
as an optimization problem: what subset of S adds up to the
greatest total ≤ t?
What we will show
first we develop an exponential-time algorithm to solve the
problem exactly
then we modify this algorithm to give us a fully polynomial-time
approximation scheme that has a running time that is
polynomial in 1/ε and n
The exponential algorithm
Some preliminaries
we use the notation S + x = { s + x : s ∈ S }
Some notation
Pi is the set of sums of all
subsets of the values up to xi: Pi = Pi−1 ∪ (Pi−1 + xi)
it can be shown that
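The iteration Pi = Pi−1 ∪ (Pi−1 + xi) is a one-liner with Python sets; this is the exponential-time exact algorithm, since |Pi| can double at every step:

```python
def subset_sums(xs):
    """Return the set of all subset sums of xs (P_n in the notation above)."""
    P = {0}                          # P_0: only the empty subset, summing to 0
    for x in xs:
        P = P | {s + x for s in P}   # P_i = P_{i-1} union (P_{i-1} + x_i)
    return P

def best_sum_at_most(xs, t):
    """The optimization version: greatest subset sum not exceeding t."""
    return max(s for s in subset_sums(xs) if s <= t)
```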
Bound = 21
Traveling Salesman Problem
Next, the bound for the node for the partial tour
from 1 to 2 is calculated using the formula:
Bound = Length from 1 to 2 + sum of min outgoing edges for
vertices 2 to 5 = 14 + (7 + 4 + 2 + 4) = 31
Brute Force
The naïve way to solve this problem is to cycle through all 2^n
subsets of the n items and pick the subset with a legal weight
that maximizes the value of the knapsack.
The best set of items from {I0, I1, I2} is {I0, I1, I2}
BUT the best set of items from {I0, I1, I2, I3} is {I0, I2, I3}.
In this example, note that this optimal solution, {I0, I2, I3}, does
NOT build upon the previous optimal solution, {I0, I1, I2}.
(Instead it builds upon the solution, {I0, I2}, which is really the optimal subset of
{I0, I1, I2} with weight 12 or less.)
Knapsack 0-1 problem
So now we must re-work the way we build upon previous sub-
problems…
Let B[k, w] represent the maximum total value of a subset Sk with
total weight at most w.
Our goal is to find B[n, W], where n is the total number of items and
W is the maximal weight the knapsack can carry.
In English, this means that the best subset of Sk that has total
weight w is:
1) The best subset of Sk-1 that has total weight w, or
2) The best subset of Sk-1 that has total weight w-wk plus the item k
Knapsack 0-1 Problem –
Recursive Formula
First case: wk > w. Item k can’t be part of the solution: B[k, w] = B[k-1, w].
Second case: wk ≤ w. Then the item k can be in the solution, and we choose the case with
greater value: B[k, w] = max(B[k-1, w], B[k-1, w-wk] + vk).
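The two cases give a direct recursive definition; a memoized sketch in Python, with items as (weight, value) pairs and 1-based indexing to match B[k, w]:

```python
from functools import lru_cache

def knapsack_value(items, W):
    """items: list of (weight, value) pairs. Returns B[n, W]."""
    @lru_cache(maxsize=None)
    def B(k, w):
        if k == 0:
            return 0                     # no items: value 0
        wk, vk = items[k - 1]
        if wk > w:                       # first case: item k cannot fit
            return B(k - 1, w)
        return max(B(k - 1, w),          # second case: take the better option
                   B(k - 1, w - wk) + vk)
    return B(len(items), W)
```

On the worked example that follows, with items (2,3), (3,4), (4,5), (5,6) and W = 5, this returns 7.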
Knapsack 0-1 Algorithm
W = 5 (max weight)
for i = 1 to n
B[i,0] = 0
Knapsack 0-1 Example
Items (wi, vi): 1: (2,3)  2: (3,4)  3: (4,5)  4: (5,6)

i/w  0  1  2  3  4  5
0    0  0  0  0  0  0
1    0  0
2    0
3    0
4    0

i = 1, vi = 3, wi = 2, w = 1, w - wi = -1

if wi <= w  // item i can be in the solution
    if vi + B[i-1, w-wi] > B[i-1, w]
        B[i,w] = vi + B[i-1, w-wi]
    else
        B[i,w] = B[i-1, w]
else B[i,w] = B[i-1, w]  // wi > w
Knapsack 0-1 Example
Items (wi, vi): 1: (2,3)  2: (3,4)  3: (4,5)  4: (5,6)

i/w  0  1  2  3  4  5
0    0  0  0  0  0  0
1    0  0  3  3
2    0
3    0
4    0

i = 1, vi = 3, wi = 2, w = 3, w - wi = 1

if wi <= w  // item i can be in the solution
    if vi + B[i-1, w-wi] > B[i-1, w]
        B[i,w] = vi + B[i-1, w-wi]
    else
        B[i,w] = B[i-1, w]
else B[i,w] = B[i-1, w]  // wi > w
Knapsack 0-1 Example
Items (wi, vi): 1: (2,3)  2: (3,4)  3: (4,5)  4: (5,6)

i/w  0  1  2  3  4  5
0    0  0  0  0  0  0
1    0  0  3  3  3  3
2    0  0  3
3    0
4    0

i = 2, vi = 4, wi = 3, w = 2, w - wi = -1

if wi <= w  // item i can be in the solution
    if vi + B[i-1, w-wi] > B[i-1, w]
        B[i,w] = vi + B[i-1, w-wi]
    else
        B[i,w] = B[i-1, w]
else B[i,w] = B[i-1, w]  // wi > w
Knapsack 0-1 Example
Items (wi, vi): 1: (2,3)  2: (3,4)  3: (4,5)  4: (5,6)

i/w  0  1  2  3  4  5
0    0  0  0  0  0  0
1    0  0  3  3  3  3
2    0  0  3  4  4
3    0
4    0

i = 2, vi = 4, wi = 3, w = 4, w - wi = 1

if wi <= w  // item i can be in the solution
    if vi + B[i-1, w-wi] > B[i-1, w]
        B[i,w] = vi + B[i-1, w-wi]
    else
        B[i,w] = B[i-1, w]
else B[i,w] = B[i-1, w]  // wi > w
Knapsack 0-1 Example
Items (wi, vi): 1: (2,3)  2: (3,4)  3: (4,5)  4: (5,6)

i/w  0  1  2  3  4  5
0    0  0  0  0  0  0
1    0  0  3  3  3  3
2    0  0  3  4  4  7
3    0  0  3  4  5  7
4    0

i = 3, vi = 5, wi = 4, w = 5, w - wi = 1

if wi <= w  // item i can be in the solution
    if vi + B[i-1, w-wi] > B[i-1, w]
        B[i,w] = vi + B[i-1, w-wi]
    else
        B[i,w] = B[i-1, w]
else B[i,w] = B[i-1, w]  // wi > w
Knapsack 0-1 Example
Items (wi, vi): 1: (2,3)  2: (3,4)  3: (4,5)  4: (5,6)

i/w  0  1  2  3  4  5
0    0  0  0  0  0  0
1    0  0  3  3  3  3
2    0  0  3  4  4  7
3    0  0  3  4  5  7
4    0  0  3  4  5

i = 4, vi = 6, wi = 5, w = 1..4, w - wi = -4..-1

We’re DONE!!
The max possible value that can be carried in this knapsack is $7
Knapsack 0-1 Algorithm: Finding the Items

i/w  0  1  2  3  4  5
0    0  0  0  0  0  0
1    0  0  3  3  3  3
2    0  0  3  4  4  7
3    0  0  3  4  5  7
4    0  0  3  4  5  7

i = 4, k = 5, vi = 6, wi = 5, B[i,k] = 7, B[i-1,k] = 7

i = n, k = W
while i, k > 0
    if B[i, k] ≠ B[i-1, k] then
        mark the ith item as in the knapsack
        i = i-1, k = k-wi
    else
        i = i-1
Knapsack 0-1 Algorithm: Finding the Items
Items (wi, vi): 1: (2,3)  2: (3,4)  3: (4,5)  4: (5,6)
Knapsack so far: (empty)

i/w  0  1  2  3  4  5
0    0  0  0  0  0  0
1    0  0  3  3  3  3
2    0  0  3  4  4  7
3    0  0  3  4  5  7
4    0  0  3  4  5  7

i = 3, k = 5, vi = 5, wi = 4, B[i,k] = 7, B[i-1,k] = 7

i = n, k = W
while i, k > 0
    if B[i, k] ≠ B[i-1, k] then
        mark the ith item as in the knapsack
        i = i-1, k = k-wi
    else
        i = i-1
Knapsack 0-1 Algorithm: Finding the Items
Items (wi, vi): 1: (2,3)  2: (3,4)  3: (4,5)  4: (5,6)
Knapsack so far: Item 2

i/w  0  1  2  3  4  5
0    0  0  0  0  0  0
1    0  0  3  3  3  3
2    0  0  3  4  4  7
3    0  0  3  4  5  7
4    0  0  3  4  5  7

i = 2, k = 5, vi = 4, wi = 3, B[i,k] = 7, B[i-1,k] = 3, k - wi = 2

i = n, k = W
while i, k > 0
    if B[i, k] ≠ B[i-1, k] then
        mark the ith item as in the knapsack
        i = i-1, k = k-wi
    else
        i = i-1
Knapsack 0-1 Algorithm: Finding the Items
Items (wi, vi): 1: (2,3)  2: (3,4)  3: (4,5)  4: (5,6)
Knapsack so far: Item 2, Item 1

i/w  0  1  2  3  4  5
0    0  0  0  0  0  0
1    0  0  3  3  3  3
2    0  0  3  4  4  7
3    0  0  3  4  5  7
4    0  0  3  4  5  7

i = 1, k = 2, vi = 3, wi = 2, B[i,k] = 3, B[i-1,k] = 0, k - wi = 0

i = n, k = W
while i, k > 0
    if B[i, k] ≠ B[i-1, k] then
        mark the ith item as in the knapsack
        i = i-1, k = k-wi
    else
        i = i-1

k = 0, so we’re DONE!
for w = 0 to W      // O(W)
    B[0, w] = 0
for i = 1 to n      // O(n)
    B[i, 0] = 0
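Putting the table construction and the item-recovery loop together (same items and W = 5 as in the worked example):

```python
def knapsack(items, W):
    """items: list of (weight, value), referred to as items 1..n.
    Returns (best value, set of chosen item numbers)."""
    n = len(items)
    B = [[0] * (W + 1) for _ in range(n + 1)]   # B[0][w] = B[i][0] = 0
    for i in range(1, n + 1):
        wi, vi = items[i - 1]
        for w in range(1, W + 1):
            if wi <= w and vi + B[i - 1][w - wi] > B[i - 1][w]:
                B[i][w] = vi + B[i - 1][w - wi]
            else:
                B[i][w] = B[i - 1][w]
    chosen, i, k = set(), n, W                  # trace back through the table
    while i > 0 and k > 0:
        if B[i][k] != B[i - 1][k]:              # item i must be in the knapsack
            chosen.add(i)
            k -= items[i - 1][0]
        i -= 1
    return B[n][W], chosen
```

On the example it returns value 7 with items {1, 2}, matching the traceback shown in the slides.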
Optimization Problems
We have some problem instance x that has many feasible
“solutions”.
We are trying to minimize (or maximize) some cost
e.g.
◼ There is a Hamiltonian
cycle, A-B-D-C-E-F-G-A, in
G(BD).
◼ The optimal solution is 13.
Theorem for Hamiltonian cycles
Def: The t-th power of G = (V, E), denoted
Gt = (V, Et), is the graph in which (u, v) ∈ Et if
there is a path from u to v with at most t
edges in G.
Theorem: If a graph G is bi-connected, then
G2 has a Hamiltonian cycle.
An example for the theorem
G2
A Hamiltonian cycle:
A-B-C-D-E-F-G-A
An approximation algorithm for BTSP
A Hamiltonian cycle:
A-G-F-E-D-C-B-A.
The longest edge: 16
Time complexity:
polynomial time
How good is the solution ?
An approximation algorithm:
(first-fit) place ai into the lowest-indexed bin
which can accommodate ai.
Let C be the bin capacity and S(ai) the size of item ai. Then:
OPT(I) ≥ (1/C) ∑(i=1..n) S(ai)
For any two adjacent nonempty bins, C(Bi) + C(Bi+1) ≥ C
(otherwise the items of Bi+1 would have fit into Bi).
If m nonempty bins are used in FF:
C(B1) + C(B2) + … + C(Bm) ≥ (m/2) · C
FF(I) = m ≤ (2/C) ∑(i=1..m) C(Bi) = (2/C) ∑(i=1..n) S(ai) ≤ 2 · OPT(I)
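A sketch of the first-fit rule, with bins of capacity C kept as running load totals:

```python
def first_fit(sizes, C):
    """Place each item into the lowest-indexed bin that can hold it."""
    bins = []                            # current load of each open bin
    for s in sizes:
        for i, load in enumerate(bins):
            if load + s <= C:
                bins[i] += s             # lowest-indexed bin that fits
                break
        else:
            bins.append(s)               # no existing bin fits: open a new one
    return bins
```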
Knapsack problem
Approximation
PTAS
Fractional knapsack problem
Maximize ∑(1≤i≤n) pi xi
Subject to ∑(1≤i≤n) wi xi ≤ M
0 ≤ xi ≤ 1, 1 ≤ i ≤ n
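The fractional version is solved exactly by a greedy rule: take items in decreasing profit/weight ratio, splitting the last item if needed. A sketch:

```python
def fractional_knapsack(items, M):
    """items: list of (profit, weight) pairs. Returns the maximum total profit."""
    total = 0.0
    for p, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if w <= M:
            total += p          # take the whole item
            M -= w
        else:
            total += p * M / w  # take the fraction that still fits
            break
    return total
```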
The knapsack algorithm
Each item has a profit pi > 0 and a weight wi > 0;
capacity of knapsack: M
Maximize ∑(1≤i≤n) pi xi
Subject to ∑(1≤i≤n) wi xi ≤ M
xi = 0 or 1, 1 ≤ i ≤ n
Decision version:
Given K, is ∑(1≤i≤n) pi xi ≥ K?
i    1   2   3   4   5   6   7   8
pi  90  61  50  33  29  23  15  13
wi  33  30  25  17  15  12  10   9
Solution 1:
We select items 1 and 2. The sum of normalized profits
is 15. The corresponding sum of original profits is 90 +
61 = 151. The sum of weights is 63.
Solution 2:
We select items 1, 2, and 3. The sum of normalized
profits is 20. The corresponding sum of original profits
is 90 + 61 + 50 = 201. The sum of weights is 88.
Knapsack: PTAS
Solution 1:
For Solution 1, we can add items 4 and 6. The sum of profits will be
151 + 33 + 23 = 207.
Solution 2:
For Solution 2, we cannot add any item from SMALL. Thus the sum
of profits is 201.
A bad example