
Student Id: Student Name:

Academic year: 2021-22


Sem-In Examinations-II,
B. Tech (Branch), 2020 Batch
II/IV, 1st Semester
Subject Code:20CS2105 Subject Name: Mathematical Programming-1
SET-2
Time: 2 hours Max. Marks: 50

CO3 Max. Marks: 25
Q. No. 1, 2, 3, 4 from CO3. Preferred to be at a lower BTL than the Max BTL of CO1; no sub-questions,
and each has an internal choice.
Q. No. 5, 6 from CO3, with an internal choice between Q. No. 5 and Q. No. 6.
Answer All Questions

1. Find a minimum subset of vertices that covers all the edges of the following
undirected graph (the vertex cover problem).

Key:

VC = {b, c, e, f, d, g} [4.5M]

(OR)
2. Compare and contrast heuristics and metaheuristics.
Key:
Heuristics
Heuristic algorithms are based on trial and error, whereas "metaheuristics" can be split into "meta"
plus "heuristics", i.e., heuristics applied on top of heuristics. Metaheuristics add randomization to
heuristics, which generally helps these algorithms reach near-optimal solutions even on complex
problems where plain heuristic algorithms fail. In other words, metaheuristics combine exploration
and exploitation while searching for acceptable solutions.
Both are approximate methods. A heuristic is a well-defined sequence of steps constructed from
intuition and individual rules of thumb; it gives a near-optimal solution with minimal time
consumption. A metaheuristic is a general-purpose algorithmic framework that can be applied to
different optimisation problems. It guides and modifies subordinate heuristics to efficiently produce
good-quality solutions. Metaheuristics usually obtain better solutions than heuristics, at the expense
of CPU time. [2M]
Metaheuristics
• Meta – Greek for "upper level"; metaheuristics are upper-level methods.
• Heuristics – from the Greek heuriskein, the art of discovering new strategies to solve
problems.
• Optimization methods are either exact or approximate:
• Exact
– Mathematical programming: LP, IP, NLP, DP
• Approximate
– Heuristics
• Metaheuristics are used for:
– Combinatorial optimization problems – a general class of IP problems with
discrete decision variables and a finite solution space. The objective function and
constraints may be non-linear too. Relaxation techniques are used to prune the
search space.
– Constraint programming problems – used for timetabling and scheduling
problems. Constraint propagation techniques reduce the variable domains, and
the declaration of variables is much more compact. [2.5M]

Need for Metaheuristics

• What if the objective function can only be simulated, and there is no (or only an
inaccurate) mathematical model that connects the variables?
– Mathematical and constraint programming need exact mathematical
formulations, and therefore cannot be applied in such cases.
• Large solution spaces. Example: in the TSP, 5 cities give 120 (= 5!) solutions,
10 cities give 10! ≈ 3.6 million, and 75 cities give 75! ≈ 2.5 × 10^109 solutions.
• Example: parallel machine job scheduling – processing n jobs on m identical
machines admits m^n solutions. (A short snippet reproducing these counts follows
this list.)
– Complexity – time and space complexity.
– Time is the number of steps required to solve a problem of size n. We look
for the worst-case asymptotic bound on the step count (not an exact count);
asymptotic behavior is the limiting behavior as n tends to a large number.
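As a quick check on these counts (standard factorial and power values; the snippet itself is not part of the original key), a few lines of Python reproduce them:

import math

# TSP: the number of solutions (orderings) grows factorially with the city count
for cities in (5, 10, 75):
    print(cities, 'cities:', format(math.factorial(cities), '.3e'), 'solutions')

# Parallel machine scheduling: each of n jobs goes to one of m machines -> m^n
n_jobs, m_machines = 10, 3
print(m_machines, '^', n_jobs, '=', m_machines ** n_jobs, 'solutions')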

3. Determine whether the following optimization problem is convex:

Min f(x)

Subject to the given constraints.

Solution:

All linear functions are convex, so the linear parts of the problem (in particular, the linear constraints) are convex.

For the quadratic function, form the Hessian and check its leading principal minors:

D1 = 2 > 0, D2 = 4 > 0.

Since D1, D2 > 0, the Hessian is positive definite, so the quadratic function is convex.

The remaining constraint function is likewise convex.

The objective function and all constraints are convex; therefore the given problem is a convex
programming problem. [8M]
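Since the explicit objective did not survive in this copy, the sketch below assumes a representative quadratic f(x1, x2) = x1^2 + x2^2 (a hypothetical choice consistent with the minors D1 = 2 and D2 = 4 quoted above) and automates the Hessian minor test with sympy:

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**2 + x2**2            # assumed quadratic objective (hypothetical)

H = sp.hessian(f, (x1, x2))  # Hessian matrix of f
D1 = H[0, 0]                 # first leading principal minor
D2 = H.det()                 # second leading principal minor (determinant)

print('Hessian:', H)                             # Matrix([[2, 0], [0, 2]])
print('D1 =', D1, ', D2 =', D2)                  # D1 = 2 , D2 = 4
print('Convex:', bool(D1 > 0) and bool(D2 > 0))  # True: positive definite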

(OR)
4. Define simulated annealing and write the pseudo code.

Key:
Simulated annealing (SA) is a probabilistic technique for approximating the
global optimum of a given function. Specifically, it is a metaheuristic for
approximating the global optimum in the large search space of an optimization
problem.
Annealing Process:
Raising the temperature to a very high level (the melting temperature, for example), the
atoms reach a higher energy state and have a high possibility of re-arranging the
crystalline structure.
Cooling down slowly, the atoms reach a lower and lower energy state, with a smaller
and smaller possibility of re-arranging the crystalline structure. [3M]
Pseudo code:

create random initial solution γ
Eold = cost(γ)
for (temp = tempmax; temp >= tempmin; temp = next_temp(temp)) {
    for (i = 0; i < imax; i++) {
        successor_func(γ)                // randomized move to a neighbor solution
        Enew = cost(γ)
        delta = Enew - Eold
        if (delta > 0) {                 // worse (uphill) move
            if (random() >= exp(-delta / (K * temp)))
                undo_func(γ)             // rejected bad move
            else
                Eold = Enew              // accepted bad move
        } else {
            Eold = Enew                  // always accept good moves
        }
    }
} [5M]
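As an illustration, here is a minimal runnable Python version of the pseudocode above (the one-dimensional cost function, neighbor move, and geometric cooling schedule are hypothetical choices for the sketch, not part of the key):

import math
import random

def cost(x):
    # Hypothetical objective with its minimum at x = 2
    return (x - 2.0) ** 2

def simulated_annealing(temp_max=10.0, temp_min=1e-3, alpha=0.9, i_max=100, k=1.0):
    x = random.uniform(-10, 10)                 # random initial solution
    e_old = cost(x)
    temp = temp_max
    while temp >= temp_min:
        for _ in range(i_max):
            x_new = x + random.uniform(-1, 1)   # randomized successor move
            e_new = cost(x_new)
            delta = e_new - e_old
            # Always accept good moves; accept bad moves with prob exp(-delta/(k*temp))
            if delta <= 0 or random.random() < math.exp(-delta / (k * temp)):
                x, e_old = x_new, e_new
        temp *= alpha                           # geometric cooling schedule
    return x, e_old

best_x, best_e = simulated_annealing()
print('best x = %.3f, cost = %.4f' % (best_x, best_e))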

Answer Any One Question (1 × 12.5M = 12.5M)


5.
Here, items of various weights need to be packed into a set of bins with a
common capacity, assuming there are enough bins to hold all the items.
Find the fewest bins that will suffice using a bin packing algorithm. The weights are (48,
30, 19, 36, 36, 27, 42, 42, 36, 24, 30) and the bin capacity is 100. Find the optimal
solution using bin packing, and solve using Python.

Ans.
Total weight = 48 + 30 + 19 + 36 + 36 + 27 + 42 + 42 + 36 + 24 + 30
             = 370
Bin capacity = 100
Expected number of bins = 370/100
                        = 3.7, rounded up
                        = 4 bins

48 [48]
30 [78]
19 [97]
36 [97, 36]
36 [97, 72]
27 [97, 99]
42 [97, 99, 42]
42 [97, 99, 84]
36 [97, 99, 84, 36]
24 [97, 99, 84, 60]
30 [97, 99, 84, 90]

So the minimum number of bins required is 4 (optimal, since it matches the lower bound). [8M]


Code: [4.5M]

# next fit
import math

def nextFit(weight, n, c):
    print('total weight = ', sum(weight))
    print('size of bin = ', c)
    lb = math.ceil(sum(weight) / c)
    print("lower bound", lb)
    res = 0                # result (count of used bins)
    bin_rem = [0] * n      # load of each bin, initially zero
    j = 0                  # index of the bin currently being filled

    # Place items one by one; j never moves backwards (next fit)
    for i in range(n):
        while j < n:
            if bin_rem[j] + weight[i] <= c:
                bin_rem[j] = bin_rem[j] + weight[i]
                break
            else:
                j = j + 1
        print(weight[i], bin_rem)

    # Count the number of used bins
    for i in range(len(bin_rem)):
        if bin_rem[i] != 0:
            res += 1

    return res

weight = [48, 30, 19, 36, 36, 27, 42, 42, 36, 24, 30]
c = 100
# weight = [5, 7, 5, 2, 4, 2, 5, 1, 6]
# c = 10
n = len(weight)
print("Number of bins required in Next Fit : ", nextFit(weight, n, c))

Output
total weight = 370
size of bin = 100
lower bound 4
48 [48, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
30 [78, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
19 [97, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
36 [97, 36, 0, 0, 0, 0, 0, 0, 0, 0, 0]
36 [97, 72, 0, 0, 0, 0, 0, 0, 0, 0, 0]
27 [97, 99, 0, 0, 0, 0, 0, 0, 0, 0, 0]
42 [97, 99, 42, 0, 0, 0, 0, 0, 0, 0, 0]
42 [97, 99, 84, 0, 0, 0, 0, 0, 0, 0, 0]
36 [97, 99, 84, 36, 0, 0, 0, 0, 0, 0, 0]
24 [97, 99, 84, 60, 0, 0, 0, 0, 0, 0, 0]
30 [97, 99, 84, 90, 0, 0, 0, 0, 0, 0, 0]
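Next fit is sensitive to item order. As a cross-check (not part of the original key), a first-fit decreasing sketch on the same data also uses exactly 4 bins, matching the lower bound and confirming that the solution is optimal:

def first_fit_decreasing(weights, capacity):
    bins = []                                  # current load of each open bin
    for w in sorted(weights, reverse=True):    # place largest items first
        for i, load in enumerate(bins):
            if load + w <= capacity:           # first bin with enough room
                bins[i] += w
                break
        else:
            bins.append(w)                     # no bin fits: open a new one
    return bins

print(first_fit_decreasing([48, 30, 19, 36, 36, 27, 42, 42, 36, 24, 30], 100))
# -> 4 bins, e.g. [90, 97, 99, 84]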

(Or)

6.
a) Define the knapsack problem.

• The knapsack problem (or rucksack problem) is a problem in combinatorial
optimization. It derives its name from the maximization problem of choosing the
best essentials that can fit into one bag to be carried on a trip. Given a set of
items, each with a weight and a value, determine the number of each item to
include in a collection so that the total weight is at most a given limit and the
total value is as large as possible.

The knapsack problem has the following two variants:

1. Fractional Knapsack Problem

2. 0/1 Knapsack Problem

[2.5M]

b) Find the optimal solution for the 0/1 knapsack problem using the dynamic
programming approach. Consider n = 4, W = 5, profits (p1, p2, p3, p4) = (3, 4, 5, 6) and
weights (w1, w2, w3, w4) = (2, 3, 4, 5).

Ans.

Item    Weight    Value
1       2         3
2       3         4
3       4         5
4       5         6

Step-1:

Draw a table 'T' with (n+1) = 4 + 1 = 5 rows and (W+1) = 5 + 1 = 6 columns.

Fill all the boxes of the 0th row and the 0th column with 0:
0 0 0 0 0 0

Step-2:

Start filling the table row-wise, top to bottom, from left to right, using the formula

T(i, j) = max { T(i-1, j) , value_i + T(i-1, j - weight_i) }

i \ j   0   1   2   3   4   5
0       0   0   0   0   0   0
1       0   0   3   3   3   3
2       0   0   3   4   4   7
3       0   0   3   4   5   7
4       0   0   3   4   5   7

[5M]

Filling T(1,1)
i = 1, j = 1, (value)i = (value)1 = 3, (weight)i = (weight)1 = 2
Substituting the values, we get-
T(1,1) = max { T(1-1 , 1) , 3 + T(1-1 , 1-2) }
T(1,1) = max { T(0,1) , 3 + T(0,-1) }
T(1,1) = T(0,1) { Ignore T(0,-1) }
T(1,1) = 0

Filling T(1,2)
i = 1, j = 2, (value)i = (value)1 = 3, (weight)i = (weight)1 = 2
Substituting the values, we get-
T(1,2) = max { T(1-1 , 2) , 3 + T(1-1 , 2-2) }
T(1,2) = max { T(0,2) , 3 + T(0,0) }
T(1,2) = max {0 , 3+0}
T(1,2) = 3

Filling T(1,3)
i = 1, j = 3, (value)i = (value)1 = 3, (weight)i = (weight)1 = 2
Substituting the values, we get-
T(1,3) = max { T(1-1 , 3) , 3 + T(1-1 , 3-2) }
T(1,3) = max { T(0,3) , 3 + T(0,1) }
T(1,3) = max {0 , 3+0}
T(1,3) = 3

Filling T(1,4)
i = 1, j = 4, (value)i = (value)1 = 3, (weight)i = (weight)1 = 2
Substituting the values, we get-
T(1,4) = max { T(1-1 , 4) , 3 + T(1-1 , 4-2) }
T(1,4) = max { T(0,4) , 3 + T(0,2) }
T(1,4) = max {0 , 3+0}
T(1,4) = 3

Filling T(1,5)
i = 1, j = 5 , (value)i = (value)1 = 3, (weight)i = (weight)1 = 2

Substituting the values, we get-


T(1,5) = max { T(1-1 , 5) , 3 + T(1-1 , 5-2) }
T(1,5) = max { T(0,5) , 3 + T(0,3) }
T(1,5) = max {0 , 3+0}
T(1,5) = 3
Filling T(2,1)
i = 2, j = 1, (value)i = (value)2 = 4, (weight)i = (weight)2 = 3
Substituting the values, we get-
T(2,1) = max { T(2-1 , 1) , 4 + T(2-1 , 1-3) }
T(2,1) = max { T(1,1) , 4 + T(1,-2) }
T(2,1) = T(1,1) { Ignore T(1,-2) }
T(2,1) = 0

Filling T(2,2)
i = 2, j = 2, (value)i = (value)2 = 4, (weight)i = (weight)2 = 3
Substituting the values, we get-
T(2,2) = max { T(2-1 , 2) , 4 + T(2-1 , 2-3) }
T(2,2) = max { T(1,2) , 4 + T(1,-1) }
T(2,2) = T(1,2) { Ignore T(1,-1) }
T(2,2) = 3

Filling T(2,3)
i = 2, j = 3, (value)i = (value)2 = 4, (weight)i = (weight)2 = 3
Substituting the values,
T(2,3) = max { T(2-1 , 3) , 4 + T(2-1 , 3-3) }
T(2,3) = max { T(1,3) , 4 + T(1,0) }
T(2,3) = max { 3 , 4+0 }
T(2,3) = 4

Filling T(2,4)
i = 2, j = 4, (value)i = (value)2 = 4, (weight)i = (weight)2 = 3
Substituting the values, we get-
T(2,4) = max { T(2-1 , 4) , 4 + T(2-1 , 4-3) }
T(2,4) = max { T(1,4) , 4 + T(1,1) }
T(2,4) = max { 3 , 4+0 }
T(2,4) = 4

Filling T(2,5)
i = 2, j = 5, (value)i = (value)2 = 4, (weight)i = (weight)2 = 3
Substituting the values, we get-
T(2,5) = max { T(2-1 , 5) , 4 + T(2-1 , 5-3) }
T(2,5) = max { T(1,5) , 4 + T(1,2) }
T(2,5) = max { 3 , 4+3 }
T(2,5) = 7

Filling T(3,1)
i = 3, j = 1, (value)i = (value)3 = 5, (weight)i = (weight)3 = 4
Substituting the values, we get-

T(3,1) = max { T(3-1 , 1) , 5 + T(3-1 , 1-4) }
T(3,1) = max { T(2,1) , 5 + T(2,-3) }
T(3,1) = T(2,1) { Ignore T(2,-3) }
T(3,1) = 0

We can fill the rest of the table in a similar way.

• The last entry, T(4,5), represents the maximum possible value that can be put into the
knapsack.
• So, the maximum possible value that can be put into the knapsack = 7.
[5M]
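The table above can also be generated programmatically. Here is a minimal tabulation sketch using the data of this question (the function name and structure are my own, not prescribed by the key):

def knapsack_01(values, weights, capacity):
    n = len(values)
    # T[i][j]: best value using the first i items with remaining capacity j
    T = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(capacity + 1):
            T[i][j] = T[i - 1][j]            # option 1: skip item i
            if weights[i - 1] <= j:          # option 2: take item i if it fits
                T[i][j] = max(T[i][j], values[i - 1] + T[i - 1][j - weights[i - 1]])
    return T

T = knapsack_01(values=[3, 4, 5, 6], weights=[2, 3, 4, 5], capacity=5)
for row in T:
    print(row)        # reproduces the table above; last entry T[4][5] == 7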

CO4 Max. Marks: 25

Q. No. 7, 8, 9, 10 from CO4. Preferred to be at a lower BTL than the Max BTL of CO2; no sub-questions,
and each has an internal choice.
Q. No. 11, 12 from CO4, with an internal choice between Q. No. 11 and Q. No. 12.
Answer All Questions
7
Solve the following using the barrier function approach:

Subject to:

(OR)
8
Differentiate between the penalty and barrier function approaches.
[4.5M]
9 Find the shortest paths in the following graph using the Bellman-Ford
algorithm.

Solution:
1st pass: [4.5M], 2nd pass: [4M], and 3rd pass: [4M]
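Since the graph for this question is not reproduced in this copy, the sketch below shows the general relaxation structure of the Bellman-Ford algorithm on a small hypothetical edge list (the edge data is illustrative, not the graph from the question paper):

def bellman_ford(n, edges, source):
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    # Each pass relaxes every edge once; n - 1 passes suffice
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One extra pass detects negative-weight cycles
    for u, v, w in edges:
        if dist[u] != INF and dist[u] + w < dist[v]:
            raise ValueError("graph contains a negative-weight cycle")
    return dist

# Hypothetical directed graph as (u, v, weight) triples
edges = [(0, 1, 6), (0, 2, 7), (1, 3, 5), (2, 3, -3), (1, 2, 8)]
print(bellman_ford(n=4, edges=edges, source=0))   # [0, 6, 7, 4]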
(OR)
10
State robust optimization and analyze the concept of uncertain Linear
Optimization.
Key:

Robust optimization is a field of optimization theory that deals with optimization problems in which a
certain measure of robustness is sought against uncertainty, where the uncertainty can be represented
as deterministic variability in the values of the parameters of the problem itself and/or its solution.

(2.5M)

DATA UNCERTAINTY IN LINEAR OPTIMIZATION (10 M)


Recall that the Linear Optimization (LO) problem is of the form

    min_x { c^T x + d : Ax ≤ b },        (1.1)

where x ∈ R^n is the vector of decision variables, c ∈ R^n and d ∈ R form the objective, A is an m × n
constraint matrix, and b ∈ R^m is the right-hand side vector.
Clearly, the constant term d in the objective, while affecting the optimal value, does not affect the
optimal solution; this is why it is traditionally skipped. As we shall see, when treating LO
problems with uncertain data, there are good reasons not to neglect this constant term.

The structure of problem (1.1) is given by the number m of constraints and the number n of variables,
while the data of the problem are the collection (c, d, A, b), which we arrange into an (m + 1) × (n + 1)
data matrix

    D = [ c^T  d ]
        [ A    b ].
Usually not all constraints of an LO program, as it arises in applications, are of the form a^T x ≤
const; there can be linear "≥" inequalities and linear equalities as well. Clearly, constraints of the
latter two types can be represented equivalently by linear "≤" inequalities, and we will assume
henceforth that these are the only constraints in the problem.
Indeed, the contribution of a particular decision variable x_j to the left-hand side of constraint i is the
product a_ij x_j. Hence the consequences of an additive implementation error x_j → x_j + g are as if there
were no implementation error at all, but the left-hand side of the constraint got an extra additive term
a_ij g, which, in turn, is equivalent to the perturbation b_i → b_i − a_ij g in the right-hand side of the
constraint. The consequences of a more typical multiplicative implementation error
x_j → (1 + g) x_j are as if there were no implementation error, but each of the data coefficients a_ij was
subject to the perturbation a_ij → (1 + g) a_ij. Similarly, the influence of additive and multiplicative
implementation errors in x_j on the value of the objective can be mimicked by appropriate perturbations
in d or c_j.

In the traditional LO methodology, a small data uncertainty (say, 1% or less) is just ignored; the
problem is solved as if the given ("nominal") data were exact, and the resulting nominal optimal
solution is what is recommended for use, in the hope that small data uncertainties will not
significantly affect the feasibility and optimality properties of this solution, or that small adjustments
of the nominal solution will be sufficient to make it feasible. We are about to demonstrate that
these hopes are not necessarily justified, and sometimes even small data uncertainty deserves
significant attention.
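To see why, consider a toy illustration (my own construction, not an example from the text): a 1% perturbation of the constraint coefficients makes the nominal optimal solution of a small LP infeasible. A sketch using scipy.optimize.linprog:

import numpy as np
from scipy.optimize import linprog

# Nominal LP: min -x1 - x2  s.t.  x1 + x2 <= 10,  x1, x2 >= 0
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([10.0])

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)], method="highs")
x_nom = res.x
print("nominal solution:", x_nom)   # the optimum saturates x1 + x2 = 10

# Perturb the constraint coefficients by +1% and re-check feasibility
A_pert = A * 1.01
slack = b - A_pert @ x_nom
print("slack under perturbed data:", slack)  # negative: x_nom is now infeasible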

Answer Any One Question (1 × 12.5M = 12.5M)


11
The sales of a company (in million dollars) for each year are shown in the table
below.

a) Find the least squares regression line y = a x + b.

b) Use the least squares regression line as a model to estimate the sales of the
company in 2012.

c) Compute the total deviation in the prices based on the estimated line.

Year     2005    2006    2007    2008    2009

Price      12      19      29      37      45

Solution:
Regression coefficient equations: 2 Marks
Part A = 4 Marks
Part B = 4 Marks
Part C = 2.5 Marks
Part C:

t    Estimated    Actual    Error in price
0    11.6         12        0.4
1    20.0         19        1.0
2    28.4         29        0.6
3    36.8         37        0.2
4    45.2         45        0.2

Total deviation: 2.4

Mark (5)
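As a numerical check (taking t = 0 at 2005, as in the table; the fitted values 11.6, 20.0, ... imply the line y = 8.4 t + 11.6), a short NumPy least squares fit reproduces the line, the estimates, the total deviation of 2.4, and the 2012 estimate asked for in part (b):

import numpy as np

t = np.array([0, 1, 2, 3, 4])         # years since 2005
y = np.array([12, 19, 29, 37, 45])    # sales (million dollars)

a, b = np.polyfit(t, y, 1)            # least squares line y = a*t + b
print("y = %.1f t + %.1f" % (a, b))   # y = 8.4 t + 11.6

est = a * t + b
print("estimates:", est)              # [11.6 20.  28.4 36.8 45.2]
print("total deviation:", round(float(np.abs(y - est).sum()), 1))  # 2.4
print("2012 estimate:", a * 7 + b)    # t = 7 -> 70.4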

(Or)
12
Solve the following using the exterior penalty function approach:

Subject to:
