Engineering Design Optimization
Remember! Get word from Gitte and Grethe about scheduling the final lecture. The EMSD students must come up with an evaluation exercise. - Otherwise I will do it for you.
John Rasmussen, Institute of Mechanical Engineering, Aalborg University, 2001
Lecture 9
Sequential programming techniques
- Basic ideas. 1st and 2nd order methods
- Sequential Linear Programming, SLP
  - Formulation
  - Convergence
  - Move limits
- Convex programming
  - CONLIN
  - MMA

Problem definition
- we start the usual way

[Diagram: Input x -> Model -> Output gi(x)]

We wish to find the design variable vector x which minimizes g0(x) while honoring the constraints gi(x) ≤ 0, i = 1..n. As we have seen, we can only use numerical methods to solve the problem if we do not know the mathematical structure of the functions gi.
Sequential Programming
- ba sic idea
The basic idea of sequential programming is to make approximations of the functions gi(x). Since the approximations are explicit, the resulting problem can be solved either analytically or with a very efficient numerical method. The approximated problem is often called a subproblem. The solution to the subproblem is usually only an approximation of the solution to the real problem, so the procedure must be applied iteratively as a sequence of subproblems. Hence the term sequential programming. Another common term is subproblem methods.
Sequential programming
Example: sequential linear programming
gi(x) ≈ gi(x(k)) + ∇gi(x(k))T (x − x(k))
We can make a Taylor expansion of the functions from the current point, x(k). If we only include up to linear terms, the resulting subproblem is linear and can be solved by the Simplex method. This type of subproblem method is called sequential linear programming.
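As a hedged illustration (the one-dimensional problem, the fixed move limit, and the step logic are all invented for this sketch, not taken from the lecture), the SLP loop can be written out for minimizing g0(x) = x². The linear subproblem over a bounded interval is always solved at one of its end points, which is why the iterates jump from corner to corner:

```python
# Minimal SLP sketch on a 1-D problem: minimize g0(x) = x**2.
# The linearized subproblem  min g0(xk) + g0'(xk)*(x - xk)
# subject to  xk - d <= x <= xk + d  has its optimum at the lower
# or upper bound, depending on the sign of the gradient.

def slp_step(xk, grad, d):
    """Solve the 1-D linear subproblem with move limit d."""
    return xk - d if grad > 0 else xk + d

def g0_grad(x):
    return 2.0 * x          # derivative of x**2

x, d = 1.0, 0.4             # start point and fixed move limit
history = [x]
for _ in range(6):
    x = slp_step(x, g0_grad(x), d)
    history.append(x)

print(history)              # approaches 0, then oscillates around it
```

With a fixed move limit the iterates end up bouncing between +0.2 and −0.2 around the optimum at 0, which is exactly the oscillation that adaptive move limits are meant to suppress.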
Move limits
- make the problem bounded and limit oscillation
An approximation is only valid in a certain region around x(k). We call it the trust region. It is a good idea to constrain the solution of the subproblem to the trust region. Such constraints are called move limits.
xj(k) − Δxj ≤ xj ≤ xj(k) + Δxj
Move limits
- adaptive adjustment
Move limits can be adjusted according to the progress of the optimization process. If the process is going well, they can be extended or kept constant. If the process is oscillating, they can be tightened. When a design variable approaches the optimum from one side, it fulfils the condition:

(xj(k−2) − xj(k−1)) (xj(k−1) − xj(k)) ≥ 0

[Figure: design variable histories over the iterations]

In this case, we can relax the move limit on xj a little. Otherwise, we tighten it. This way, the move limits on each variable adjust gradually to the nature of the problem.

SLP
- properties

- It is based on linear approximations of the objective function and constraints.
- The subproblems can be solved by Simplex.
- Convergence cannot be mathematically proved, but it works fine in most cases. It works best if the optimum is fully constrained, i.e., lies in a corner of the design space.
- Move limits are required. No convergence is obtained if the move limits are not adaptive.
- The final iterations may be many.
- It handles problems with many design variables and constraints.
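The adaptive move-limit rule above can be sketched as follows; the relaxation and tightening factors are assumptions for illustration, not values from the lecture:

```python
# Sketch of adaptive move-limit adjustment (factors are assumptions).
# If the last two steps of a variable have the same sign, the variable
# approaches the optimum from one side and the move limit is relaxed;
# if the steps alternate in sign, the variable oscillates and the
# move limit is tightened.

def adjust_move_limit(d, x_prev2, x_prev1, x_curr,
                      relax=1.3, tighten=0.5):
    step1 = x_prev1 - x_prev2
    step2 = x_curr - x_prev1
    if step1 * step2 >= 0.0:    # monotonic approach: relax
        return d * relax
    return d * tighten          # oscillation: tighten

print(adjust_move_limit(0.4, 1.0, 0.6, 0.2))   # monotonic -> relaxed
print(adjust_move_limit(0.4, 0.2, -0.2, 0.2))  # oscillating -> tightened
```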
Convex programming
- basic idea
First order approximations like SLP are attractive because they only require gradients and no higher derivatives. But SLP has the problem that it requires move limits and is prone to oscillation. Is there any other first order approximation we could use which does not have this problem? This question has created the idea of convex programming.
(1) Linearization in the direct variables xj:

gi(x) ≈ gi(x(k)) + Σj (∂gi/∂xj) (xj − xj(k))

(2) Linearization in the reciprocal variables yj = 1/xj:

gi(x) ≈ gi(x(k)) + Σj (∂gi/∂yj) (yj − yj(k)) = gi(x(k)) + Σj (∂gi/∂xj) (xj(k)/xj) (xj − xj(k))

CONLIN combines the two: each variable with a positive derivative uses the direct linearization (1), and each variable with a negative derivative uses the reciprocal linearization (2). This makes the approximation convex.
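A minimal sketch of evaluating the combined CONLIN approximation; the function name and the test function 1/x are illustrative assumptions, not from the lecture:

```python
# Hedged sketch of the CONLIN approximation of one function g_i around
# x^(k): variables with a positive derivative are linearized directly,
# variables with a negative derivative are linearized in the reciprocal
# variable 1/x_j, which makes the approximation convex (for x_j > 0).

def conlin(g_k, grad_k, x_k, x):
    """Evaluate the CONLIN approximation at point x."""
    val = g_k
    for dg, xj_k, xj in zip(grad_k, x_k, x):
        if dg >= 0.0:
            val += dg * (xj - xj_k)                # direct term
        else:
            val += dg * (xj_k / xj) * (xj - xj_k)  # reciprocal term
    return val

# Example: g(x) = 1/x around x_k = 1 has derivative -1; the reciprocal
# term reproduces this function exactly.
print(conlin(1.0, [-1.0], [1.0], [2.0]))   # 0.5, equal to g(2) = 1/2
```

Because CONLIN is exact for pure reciprocal functions, it is a natural fit for stress-like constraints that behave roughly as 1/x in sizing variables.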
Convex Programming
- properties
- They are usually first order methods, such as CONLIN and MMA.
- Convex programming uses approximations that are more conservative than SLP.
- The solution technique is partly analytical and therefore very fast.
- Convergence is often through feasible solutions.
- They handle problems with many variables very well, but they can have problems with many constraints.
Quadratic Approximations
- have some attractive features
gi(x) ≈ gi(x(k)) + ∇gi(x(k))T (x − x(k)) + ½ (x − x(k))T H (x − x(k))

- If H is positive definite, then the approximation will curve upwards and therefore increase at some distance from the current point. This means that the problem is automatically bounded, and we do not need move limits.
- It is possible to derive linear optimality conditions for a quadratic problem. This means that it can be solved by an algorithm using Simplex as a subroutine.
H is the Hessian matrix containing second derivatives:
H = [ ∂²gi/∂x1²   ∂²gi/∂x1∂x2   ...   ∂²gi/∂x1∂xn ]
    [             ∂²gi/∂x2²     ...   ∂²gi/∂x2∂xn ]
    [   Symm.                   ...   ∂²gi/∂xn²   ]
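A small sketch of evaluating the quadratic approximation for a given gradient and Hessian; the 2-D example function is an assumption for illustration:

```python
# Evaluate the quadratic approximation
#   g(x) ≈ g(xk) + grad(xk)·(x - xk) + 1/2 (x - xk)^T H (x - xk)
# in plain Python (a real implementation would use numpy).

def quad_model(g_k, grad_k, H, x_k, x):
    d = [xi - xki for xi, xki in zip(x, x_k)]
    lin = sum(gi * di for gi, di in zip(grad_k, d))
    quad = 0.5 * sum(d[i] * H[i][j] * d[j]
                     for i in range(len(d)) for j in range(len(d)))
    return g_k + lin + quad

# For g(x) = x1^2 + x2^2 the model is exact: at xk = (1, 1) we have
# g = 2, grad = (2, 2) and H = [[2, 0], [0, 2]].
print(quad_model(2.0, [2.0, 2.0], [[2.0, 0.0], [0.0, 2.0]],
                 [1.0, 1.0], [3.0, 0.0]))   # 9.0 = 3^2 + 0^2
```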
Quasi-Newton Methods
- the solution to some of the QP problems
A class of methods called quasi-Newton methods solves at least some of these problems.

- We start out with a linear approximation and a Hessian H = I.
- For each step in the process, we save the computed gradients of all functions.
- The gradients of multiple design points are used to create an overall approximation of H.
- This approximation improves as more iterations are performed.
Quasi-Newton Methods
- continued
- Some methods can approximate the inverse Hessian directly. This eliminates the need for inversion or factorization.
- It only works if the functions of the real problem can be globally approximated well by quadratic functions.
- If the process does not converge in, say, 10 iterations, then the overall behavior of the functions is probably not quadratic, and it is better to reinitialize H to I.
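One standard way to build H from saved gradients is the BFGS update; the lecture does not name a specific update, so the formula below is a hedged plain-Python sketch of that particular choice:

```python
# One BFGS update of the Hessian approximation H, built only from the
# step s = x_new - x_old and the gradient change y = grad_new - grad_old
# (no second derivatives needed). Plain Python on small lists; a real
# implementation would use numpy and guard against y·s <= 0.

def bfgs_update(H, s, y):
    n = len(s)
    Hs = [sum(H[i][j] * s[j] for j in range(n)) for i in range(n)]
    sHs = sum(s[i] * Hs[i] for i in range(n))
    ys = sum(y[i] * s[i] for i in range(n))
    return [[H[i][j] - Hs[i] * Hs[j] / sHs + y[i] * y[j] / ys
             for j in range(n)] for i in range(n)]

# For a quadratic g with exact Hessian [[2,0],[0,4]], y = H_exact @ s,
# so one update recovers the curvature along the step direction.
H = [[1.0, 0.0], [0.0, 1.0]]        # start from H = I
s = [1.0, 0.0]                      # step in x1 only
y = [2.0, 0.0]                      # gradient change along the step
H = bfgs_update(H, s, y)
print(H[0][0])                      # 2.0: curvature in x1 recovered
```

The updated H satisfies the secant condition H s = y, which is exactly the "gradients of multiple design points" idea on the previous slide.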
Penalty methods: Easy, but not very robust.
Feasible directions: Robust and versatile, but also a little complicated. Can have trouble with problems with different scales. Requires gradients only for direction choice.
Subproblem methods: Good for large problems with many variables and scale differences. Require gradients for all iterations.
Assign ment
SLP in ODESSY
ODESSY's standard optimizer is SLP with adaptive move limits. Download the sample problem bumper.acd from my homepage and study the definition of the objective function and constraints. Run it as a shape optimization in ODESSY and record the convergence history. Initially, it converges well, but it requires a lot of iterations to obtain the final convergence. Knowing that the system uses SLP, explain why the convergence is slow.