
UNIVERSITY OF BUEA

FACULTY OF ENGINEERING AND TECHNOLOGY


DEPARTMENT OF ELECTRICAL AND ELECTRONIC
ENGINEERING

EEF 490
HYBRID ENERGY COMPONENTS

REPORT ON PRACTICAL I ACTIVE FILTERS

NAME OF STUDENT: BRYAN AGBOR TOKO -- FE17A108
COURSE INSTRUCTOR: Dr LELE
QUESTION:

RESEARCH AND REPORT ON THE VARIOUS RESOLUTION METHODS AIMED AT


MINIMIZING THE DAILY OPERATING COST.

• LINEAR PROGRAMMING (LP):

Linear programming (LP) is one of the most widely used optimization techniques and one of the most
effective. The term linear programming was coined by George Dantzig in 1947 to refer to the procedure of
optimization in problems in which both the objective function and the constraints are linear (Dantzig, 1963).
"Programming" does not specifically require computer coding, but you will find that the solution of almost
all practical linear programming problems does involve the use of a computer code.

Examples of LP problems which occur in plant management are:

1. Assign employees to schedules so that the work force is adequate each day of the week and worker
satisfaction and productivity are as high as possible.
2. Select products to manufacture in the upcoming period, taking best advantage of existing resources
and current prices to yield maximum profit.
3. Find a pattern of distribution from plants to warehouses that will minimize costs within the capacity
limitations.
4. Submit bids on procurement contracts to take into account profit, competitors' bids, and operating
constraints.

When stated mathematically, each of these problems potentially involves many variables, many equations,
and inequalities. A solution must not only satisfy all of the equations, but also must achieve an extremum
of the objective function, such as maximizing profit or minimizing cost. With the aid of computer codes
you can solve LP problems with hundreds and even thousands of variables and constraints.

Mathematical Formulation of Linear Programming Problem

There are four basic components of an LPP:

1. Decision variables - The quantities that need to be determined in order to solve the LPP are called decision variables.
2. Objective function - The linear function of the decision variables, which is to be maximized or minimized, is called the objective function.
3. Constraints - A constraint represents a physical, social or financial restriction such as labor, machine capacity, raw material, space or money. These restrictions limit the degree to which the objective can be achieved.
4. Sign restrictions - Each decision variable carries a sign restriction, typically that it must be non-negative.
A linear programming problem (LPP) is an optimization problem in which
(i) The linear objective function is to be maximized (or minimized);
(ii) The values of the decision variables must satisfy a set of constraints where each constraint
must be a linear equation or linear inequality;
(iii) A sign restriction must be associated with each decision variable.

In 1947 George Dantzig first advanced a general analytical procedure for handling large-dimensional linear programming problems. The iterative procedure he employed is called the Simplex algorithm.
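As a sketch of this formulation, the LP below dispatches two hypothetical generating units to meet a daily demand at minimum operating cost, using SciPy's `linprog` solver; the per-MWh costs, the 100 MWh demand, and the capacity bounds are all invented for illustration.

```python
# Minimal LP sketch: minimize daily operating cost of two generating units.
from scipy.optimize import linprog

# Decision variables: x1, x2 = daily output (MWh) of units 1 and 2
c = [30, 50]                 # objective: cost per MWh of each unit
A_ub = [[-1, -1]]            # -x1 - x2 <= -100, i.e. x1 + x2 >= 100 (demand)
b_ub = [-100]
bounds = [(0, 80), (0, 60)]  # capacity limits plus the sign restrictions

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, res.fun)        # optimal dispatch and minimum daily cost
```

The solver loads the cheaper unit to its capacity (80 MWh) and covers the remaining 20 MWh with the second unit, for a cost of 3400.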

Linear Programming Applications

It has been estimated that a considerable fraction of the computer time expended at oil and chemical
companies is devoted to solving LPs of various types. The kinds of problems solved, with references, include:

1. Multiplant production/distribution (Hadley, 1962), including oil tanker routing and scheduling (Garvin, 1960).
2. Gasoline blending (Garvin, 1960; Johnson and Williamson, 1967).
3. Petroleum refinery operations (Beightler et al., 1979; Pike, 1986).
4. Power generation, steam systems (Bouillod, 1969).
5. Olefin manufacture (Sourander et al., 1984).

Advantages of linear programming

1. Linear programming helps in attaining the optimum use of productive resources. It also
indicates how a decision-maker can employ his productive factors effectively by selecting
and distributing (allocating) these resources.
2. Linear programming techniques improve the quality of decisions. The decision-making
approach of the user of this technique becomes more objective and less subjective.
3. Linear programming techniques provide possible and practical solutions, since there might
be other constraints operating outside the problem which must be taken into account. Just
because we can produce so many units does not mean that they can be sold. Thus, necessary
modification of the mathematical solution is required for the convenience of the
decision-maker.
4. Highlighting of bottlenecks in the production processes is the most significant advantage
of this technique. For example, when a bottleneck occurs, some machines cannot meet
demand while others remain idle for some of the time.
5. Linear programming also helps in re-evaluating a basic plan under changing conditions. If
conditions change while the plan is partly carried out, the changes can be assessed so as to adjust
the remainder of the plan for best results.

Limitations of linear programming

1. There should be an objective which is clearly identifiable and measurable in
quantitative terms - for example, maximisation of sales or profit, or minimisation of cost. In
real life, however, such a single measurable objective is often not available.
2. The activities to be included should be distinctly identifiable and measurable in quantitative
terms - for instance, the products included in a production planning problem. In practice, not
all activities can be measured this way: if a labourer falls sick, for example, the resulting
decrease in his performance cannot be quantified.
3. The resources of the system which are to be allocated for the attainment of the goal should
also be identifiable and measurable quantitatively. They must be in limited supply. The
technique involves allocating these resources in a manner that trades off the
returns on the investment of the resources against the attainment of the objective.
4. The relationships representing the objective and the resource limitations - the objective
function and the constraint equations or inequalities, respectively - must be linear in
nature, which is often not the case in practice.
5. There should be a series of feasible alternative courses of action available to the decision
makers, which are determined by the resource constraints.
• GENETIC ALGORITHM (GA)

Genetic algorithms (GAs) have been used in numerous fields to solve problems, especially those
with very large search spaces. The genetic algorithm was developed by John Holland
(Srinivas and Patnaik, 2012) at the University of Michigan in 1970. His research goals were to abstract
and explain the adaptive process of natural systems and to design artificial system software that retains the
important mechanisms of natural selective processes (Tippabhatla, 1998). Genetic algorithms are search
algorithms based on the mechanics of natural selection and natural genetics; a genetic algorithm uses a fitness
function, intended to measure the quality of each candidate solution, to determine the performance of each
artificial chromosome (Chow et al., 2001).

Genetic algorithm phases

The genetic algorithm is a search technique based on natural selection and natural genetics.
It uses probabilistic rules to guide itself toward an optimal solution, minimizing a cost (fitness) function,
in contrast with other search algorithms (Holland, 1992). The process in a GA is as follows:

1) Initial population: in a simple GA (SGA), candidate solutions are usually generated randomly across the search
space, whereas a parallel GA (PGA) divides the main population into N sub-populations.
2) Reproduction is generational: the population is largely replaced at each generation.
3) The fitness function - the objective function to be optimized - provides the mechanism for
evaluating each string.
4) Selection picks solutions with higher fitness values. Many selection procedures
have been proposed, such as:
a) Roulette-wheel selection, a fitness-based selection (Khurana et al., 2011): each chromosome,
such as [1111001001, 0010110010], has a chance of selection that is directly proportional to its fitness.
b) Rank-based selection: selection probabilities are based on a chromosome's relative rank or position
in the population rather than its raw fitness.
c) Tournament-based selection: the original tournament selection chooses K parents at random and
returns the fittest of them.
5) Mutations occur randomly, and some mutations are advantageous. Mutation of a bit involves
flipping it, i.e., changing a 0 to a 1 or vice versa (Srinivas and Patnaik, 1994; Paulinas and Ušinskas,
2007). For example: M = 01000010, M1 = 01000100.
6) Crossover is a crucial GA operation: pairs of strings are picked at random from the population,
and recombining parts of two or more parental solutions creates new chromosomes that are
possibly better solutions.
7) Termination: the run ends, for example, when the total number of fitness
evaluations reaches a given limit, or when the fitness has remained under a threshold value for
a given period of time.
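The seven phases above can be sketched in a few dozen lines. The bitstring length, population size, mutation rate, and the "count the ones" fitness function below are all arbitrary illustrative choices; the sketch uses roulette-wheel selection (step 4a) and one-point crossover.

```python
# Compact genetic algorithm sketch following phases 1-7.
import random
random.seed(1)

N_BITS, POP, GENS = 10, 20, 40

def fitness(chrom):              # phase 3: objective to optimize (count of 1s)
    return sum(chrom)

def roulette(pop, fits):         # phase 4a: chance proportional to fitness
    total = sum(fits)
    r = random.uniform(0, total)
    acc = 0.0
    for chrom, f in zip(pop, fits):
        acc += f
        if acc >= r:
            return chrom
    return pop[-1]

# phase 1: random initial population across the search space
pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):            # phase 2: generational replacement
    fits = [fitness(c) for c in pop]
    new_pop = []
    while len(new_pop) < POP:
        p1, p2 = roulette(pop, fits), roulette(pop, fits)
        cut = random.randint(1, N_BITS - 1)   # phase 6: one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.1:             # phase 5: flip one random bit
            child[random.randrange(N_BITS)] ^= 1
        new_pop.append(child)
    pop = new_pop                # phase 7: stop after a fixed number of generations

best = max(pop, key=fitness)
print(best, fitness(best))
```

With these settings the population typically converges to chromosomes of all (or nearly all) ones, the maximum of this toy fitness function.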

Strengths of the Genetic Algorithm

1) A genetic algorithm has the ability to manipulate many parameters simultaneously (Forrest, 1993). Many
problems cannot be stated in terms of a single parameter, but must be expressed in terms of multiple
objectives. GAs are very good at solving such problems: in particular, their parallelism enables
them to produce multiple equally good solutions, possibly with one candidate solution
optimizing one parameter and another candidate optimizing a different one.
2) The parallelism of a GA allows it to implicitly evaluate many schemata at once, so GAs are
well suited to problems where the space of all potential solutions is truly huge - too vast to
search exhaustively in any reasonable amount of time. Problems in this category are typically
nonlinear: changing one component may have effects on the full system, and many changes that
are individually detrimental may lead to much greater improvements in fitness when combined. In a
linear problem, by contrast, the fitness of each component is separable, and any improvement to
one part results in an improvement of the system as a whole; few real problems fall into that category.
3) GAs perform well on problems for which the fitness landscape is complex - ones where the fitness
function is discontinuous, changes over time, or has many local optima. Most such problems have a
wide solution space (Craenen et al., 2001).

Limitations of the Genetic Algorithm

1) The fitness function should be defined so that a higher value is attainable and equates to a better solution
for the given problem. If the fitness function is chosen poorly or defined inaccurately, the GA may be unable
to find a solution to the problem, or may converge on a wrong solution.
2) A most important consideration in constructing a genetic algorithm is defining a representation for the
problem. The language used to define candidate solutions must be robust; it must be able to tolerate
random changes such that fatal errors do not consistently result.
3) Besides the choice of fitness function, the other parameters of a GA - the size of the population and the
rates of mutation and crossover, which together determine the type and strength of selection - must
also be chosen carefully. If the population is too small, the genetic algorithm may not explore enough
of the solution space to consistently find good solutions.

• DYNAMIC PROGRAMMING

Dynamic programming was the brainchild of an American mathematician, Richard Bellman, who described
a way of solving problems where you need to find the best decisions one after another. The word
"programming" in the name has nothing to do with writing code or computer programs: mathematicians
use the term for a set of rules which anyone can follow to solve a problem, and these rules need not
be written in a computer programming language. The word "programming" in "dynamic
programming" is a synonym for optimization and is meant as "planning, or a tabular method". It is basically
a stage-wise search method for optimization problems whose solutions may be viewed as the result of a
sequence of decisions.

General working methodology for achieving solution using this approach is given as:

1) Divide into Subproblems - The main problem is divided into a number of smaller, similar
subproblems. The solution to the main problem is expressed in terms of the solutions to the smaller
subproblems. Stage-wise solution starts with the smallest subproblems.
2) Construction of a Table for Storage - The underlying idea of dynamic programming is to avoid
calculating the same thing twice, and usually a table of known results of subproblems is constructed
for this purpose. Dynamic programming thus takes advantage of the duplication and arranges to
solve each subproblem only once, saving the solution in the table for later use [4, 25]. The key to the
efficiency of a dynamic programming algorithm is that once it computes the solution to a
constrained version of the problem, it stores that solution in a table until the solution is no longer
needed by any future computation. The initial solution is trivial [16]. In other words, we trade
space for time to avoid repeating the computation of a subproblem.
3) Combining Bottom-up - Solutions of the smallest subproblems are combined to obtain the
solutions to subproblems of increasing size. The process is continued until we arrive at the solution
of the original problem.
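The three steps above can be sketched on a toy multistage cost problem: minimize the total cost of traversing a sequence of stages when each decision advances one or two stages at a time. The stage costs are invented for illustration; the table `dp` stores each subproblem's solution so that none is computed twice.

```python
# Bottom-up dynamic programming sketch: dp[i] is the minimum cumulative cost
# to reach stage i, paying a stage's cost when departing from it.
def min_total_cost(cost):
    n = len(cost)
    dp = [0] * (n + 1)                 # step 2: table of subproblem solutions
    # dp[0] = dp[1] = 0: we may enter at stage 0 or stage 1 for free
    for i in range(2, n + 1):          # step 3: combine smaller solutions bottom-up
        dp[i] = min(dp[i - 1] + cost[i - 1],   # arrive from the previous stage
                    dp[i - 2] + cost[i - 2])   # or skip one stage
    return dp[n]

print(min_total_cost([10, 15, 20]))    # -> 15: enter at stage 1, pay 15, skip to the end
```

Each `dp[i]` is computed exactly once from the two already-stored entries, which is precisely the space-for-time trade described in step 2.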

Dynamic programming involves selection of optimal decision rules that optimizes a certain performance
criterion:

1) The Principle of Optimality - An optimal sequence of decisions has the property that each of its
subsequences must itself be optimal. That means that if the initial state and decisions are optimal, then
the remaining decisions must constitute an optimal sequence with respect to the state resulting from the
first decision. Combinatorial problems may have this property, yet may still demand too much memory
and/or time to be solved efficiently.
2) Polynomial Break up - The original problem is divided into several subproblems. The division is
done in such a way that the total number of subproblems to be solved should be a polynomial or
almost a polynomial number. This is done for efficient performance of dynamic programming.

Applications of Dynamic programming

• Information theory.
• Control theory.
• Bioinformatics.
• Operations research.
• Computer science - theory, graphics, Artificial Intelligence, etc.

Strengths of Dynamic Programming

1) Creativity is necessary before we can recognize that a particular problem can be cast effectively
as a dynamic program, and clever insights to restructure the formulation are often essential to a
useful solution [24, 25]. This idea of reusing subproblems is the main advantage of the dynamic
programming paradigm over plain recursion.
2) Much of its appeal lies in the selection of optimal decision rules - the Principle of Optimality and
Polynomial Break-up - which optimize the performance criterion. The approach is both a full
problem-solving method and a subroutine solver [4, 14, 18]; this simplicity makes the dynamic
programming technique appealing within more complicated algorithmic solutions.
3) The key to the efficiency of the dynamic programming approach lies in a table that stores partial
solutions for future reference. A further attraction is that, during the search for a solution, full
enumeration is avoided by pruning early those partial decision sequences that cannot possibly lead
to an optimal solution. In a word, dynamic programming makes the optimization procedure
multistage in nature.

Limitations of dynamic programming

What kinds of problems can be solved using dynamic programming? Evidently, the answer is optimization
problems - but only those in which the optimal solution involves solving a subproblem and then using the
optimal solution to that subproblem [18]. The key property of the solutions produced by dynamic
programming is that they are time consistent, which is a direct implication of the principle of optimality [10].
Another drawback of the technique is that it works best on objects which are linearly ordered and cannot be
rearranged, such as characters in a string, points around the boundary of a polygon, matrices in a chain, or
the left-to-right order of leaves in a search tree [15, 19]. The major shortcoming of using dynamic
programming in practice is that it is often nontrivial to write code that evaluates the subproblems in the most
efficient order [5, 25]. The challenge of devising a good solution method lies in deciding what the
subproblems are, how they should be computed, and in what order. Apart from the obvious requirements
- the Principle of Optimality and Polynomial Break-up - an efficient dynamic program induces only a
"small" number of distinct subproblems.
• MIXED-INTEGER LINEAR PROGRAMMING

Mathematical programming, especially Mixed Integer Linear Programming (MILP), because of its
rigorousness, flexibility and extensive modeling capability, has become one of the most widely explored
methods for process scheduling problems.

MILP mathematical formulations

In the context of chemical processing systems, the scheduling problem generally consists of the following
components:

(i) production recipes, which specify the sequences of tasks to be performed for manufacturing given
products;
(ii) available processing/storage equipment;
(iii) intermediate storage policy;
(iv) production requirements;
(v) specifications of resources, such as utilities and manpower;
(vi) a time horizon of interest.

The goal is to determine a schedule which includes the details of

(i) the sequence of tasks to be performed in each piece of equipment;
(ii) the timing of each task;
(iii) the amount of material to be processed (i.e., the batch size) by each task.

The performance of a schedule is measured with one or more criteria, for example the overall profit,
the operating costs, and the makespan.
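A toy version of such a commitment/scheduling problem can illustrate the mixed nature of the variables: a binary on/off decision per unit plus a continuous output level. The sketch below enumerates the binary variables by brute force (a real MILP solver would use branch-and-bound over an LP relaxation); all unit data and the demand figure are invented.

```python
# Toy MILP-style unit commitment: binary on/off decisions plus continuous
# dispatch, solved here by enumerating the binary variables exhaustively.
from itertools import product

units = [                      # fixed cost if on, variable cost per MWh, capacity
    {"fixed": 100, "var": 20, "cap": 80},
    {"fixed": 50,  "var": 40, "cap": 60},
]
demand = 100

best_cost, best_plan = float("inf"), None
for on in product([0, 1], repeat=len(units)):      # enumerate binary variables
    cap = sum(u["cap"] for u, y in zip(units, on) if y)
    if cap < demand:
        continue                                   # infeasible commitment
    # Given a commitment, the continuous dispatch here is trivial:
    # load the cheapest committed unit first (merit order).
    cost, left = 0, demand
    for u, y in sorted(zip(units, on), key=lambda p: p[0]["var"]):
        if not y:
            continue
        x = min(u["cap"], left)                    # continuous output variable
        cost += u["fixed"] + u["var"] * x
        left -= x
    if cost < best_cost:
        best_cost, best_plan = cost, on

print(best_plan, best_cost)    # cheapest feasible commitment and its cost
```

With these figures, no single unit can cover the 100 MWh demand, so both must be committed; the cheap unit runs at capacity and the expensive one covers the remainder, for a total cost of 2550.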
Mixed Integer Linear Programming limits and supporting techniques

• First, non-linear effects obviously cannot be taken into account. In particular, when dealing with
the optimal design problem, the efficiency of the units must be kept constant. Therefore, several
effects cannot be considered, such as: the variation of the nominal efficiency of the components in
relation to their size, the variation of the component unitary cost in relation to their size, and
part-load effects on nominal efficiency. This problem can be tackled by a decomposition strategy,
based either on an iterative procedure [23] or on a multi-stage algorithm.
• Instead, when dealing with the optimal scheduling problem, a linear relation between the efficiency
of the components and their load factor can be easily considered. Nevertheless, real performance
curves are usually nonlinear, and a further expedient must be adopted, namely piece-wise
linearization. For each unit, a piecewise linear approximation, with an appropriate number of
intervals, can be selected. Since ambient temperature may affect unit performance, the range as
well as the shape of the performance curves can vary with temperature.
• Another limit afflicting the MILP formulation is the need to consider the whole time horizon at
once when dealing with the synthesis and/or design problem: the synthesis and design
problem and the scheduling problem must be tackled simultaneously. This results in a very large
number of variables and constraints, making the problem very challenging from the
computational point of view. To tackle this issue, several approaches have been proposed; one kind
of approach is based on decomposition methods.
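To make the piece-wise linearization expedient concrete, the sketch below approximates a hypothetical quadratic fuel-consumption curve by linear segments between breakpoints, which is the form a MILP model can accept. The curve coefficients, the 0-100 MW load range, and the choice of four intervals are all illustrative assumptions.

```python
# Piece-wise linearization sketch: replace a nonlinear performance curve with
# linear segments between breakpoints, and measure the worst-case error.
def fuel(x):                                  # true (nonlinear) curve, invented
    return 5.0 + 2.0 * x + 0.01 * x ** 2

breaks = [0.0, 25.0, 50.0, 75.0, 100.0]       # interval endpoints (4 intervals)
values = [fuel(b) for b in breaks]            # exact curve values at breakpoints

def piecewise(x):
    # locate the segment containing x and interpolate linearly along it
    for (x0, y0), (x1, y1) in zip(zip(breaks, values),
                                  zip(breaks[1:], values[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("load outside modeled range")

loads = [i * 0.5 for i in range(201)]         # sample the 0-100 MW range
max_err = max(abs(piecewise(x) - fuel(x)) for x in loads)
print(max_err)                                # worst-case linearization error
```

Halving the interval width quarters this worst-case error (it scales with the square of the interval length for a quadratic curve), so the accuracy/size trade-off is controlled by the number of intervals chosen per unit.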
