
ACTIVE & SEMI-ACTIVE VIBRATION CONTROL: Two techniques, called active control and semi-active control, can be applied.

The first can be defined as an active device that reacts to the vibrations. It can destabilize the system if the smart structure is not correctly tuned, but because the system is active, its response to large-bandwidth disturbances is better. The second can be defined as a passive device whose properties (stiffness, damping, ...) can be varied in real time with a low power input. As such devices are inherently passive, they cannot destabilize the system.

Electromechanical transducer: a device for converting mechanical motion (vibrations) into variations of an electric current or voltage (electric signals), and vice versa. Electromechanical transducers are used primarily as actuating mechanisms in automatic control systems and as sensors of mechanical motion in automation and measurement technology. They may be classified according to the conversion principle used as resistive, electromagnetic, magnetoelectric, and electrostatic types; they may also be classified according to the type of output signal as analogue and digital types (with analogue and discrete output signals, respectively). Electromechanical transducers are evaluated with respect to their static and dynamic characteristics, the sensitivity (or transfer ratio) E = Δy/Δx (where Δy is the change in the output quantity y when the input quantity x is changed by Δx), the operating frequency range of the output signal, the static error of the signal, and the static error of conversion. Examples of electromechanical transducers are the measuring mechanism of a permanent-magnet instrument, a loudspeaker, a microphone, and a piezoelectric transducer.

Linear-quadratic regulator
From Wikipedia, the free encyclopedia

The theory of optimal control is concerned with operating a dynamic system at minimum cost. The case where the system dynamics are described by a set of linear differential equations and the cost is described by a quadratic functional is called the LQ problem. One of the main results in the theory is that the solution is provided by the linear-quadratic regulator (LQR), a feedback controller whose equations are given below. The LQR is an important part of the solution to the LQG problem. Like the LQR problem itself, the LQG problem is one of the most fundamental problems in control theory.

Contents

1 General description
2 Finite-horizon, continuous-time LQR
3 Infinite-horizon, continuous-time LQR
4 Finite-horizon, discrete-time LQR
5 Infinite-horizon, discrete-time LQR
6 References
7 External links

General description
In layman's terms this means that the settings of a (regulating) controller governing either a machine or process (like an airplane or chemical reactor) are found by using a mathematical algorithm that minimizes a cost function with weighting factors supplied by a human (engineer). The "cost" (function) is often defined as a sum of the deviations of key measurements from their desired values. In effect this algorithm therefore finds those controller settings that minimize the undesired deviations, like deviations from desired altitude or process temperature. Often the magnitude of the control action itself is included in this sum so as to keep the energy expended by the control action limited.

In effect, the LQR algorithm takes care of the tedious work done by the control systems engineer in optimizing the controller. However, the engineer still needs to specify the weighting factors and compare the results with the specified design goals. Often this means that controller synthesis will still be an iterative process in which the engineer judges the produced "optimal" controllers through simulation and then adjusts the weighting factors to get a controller more in line with the specified design goals.

The LQR algorithm is, at its core, just an automated way of finding an appropriate state-feedback controller. As such, it is not uncommon for control engineers to prefer alternative methods, like full state feedback (also known as pole placement), over the LQR algorithm; with these the engineer has a much clearer linkage between adjusted parameters and the resulting changes in controller behaviour. Difficulty in finding the right weighting factors limits the application of LQR-based controller synthesis.

Finite-horizon, continuous-time LQR

For a continuous-time linear system, defined on t \in [t_0, t_1], described by

    \dot{x} = Ax + Bu

with a quadratic cost function defined as

    J = x^T(t_1) F(t_1) x(t_1) + \int_{t_0}^{t_1} \left( x^T Q x + u^T R u \right) dt

the feedback control law that minimizes the value of the cost is

    u = -Kx

where K is given by

    K = R^{-1} B^T P(t)

and P is found by solving the continuous-time Riccati differential equation

    -\dot{P}(t) = A^T P(t) + P(t) A - P(t) B R^{-1} B^T P(t) + Q, \quad P(t_1) = F(t_1)

The first-order conditions for J_min are:

(i) State equation

    \dot{x} = Ax + Bu

(ii) Co-state equation

    -\dot{\lambda} = Qx + A^T \lambda

(iii) Stationary equation

    0 = Ru + B^T \lambda

(iv) Boundary conditions

    x(t_0) = x_0 \quad \text{and} \quad \lambda(t_1) = F(t_1) x(t_1)
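As an illustration, the Riccati differential equation can be integrated numerically backward from the boundary condition P(t_1) = F(t_1). The following Python sketch does this with SciPy's solve_ivp; the double-integrator plant and the weights are hypothetical choices made only for the example, not part of the theory above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical double-integrator plant: x = [position, velocity]
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)            # state weighting (arbitrary example values)
R = np.array([[1.0]])    # control weighting
F = np.eye(2)            # terminal weight F(t1)
t0, t1 = 0.0, 5.0

def riccati_rhs(t, p_flat):
    # dP/dt = -(A^T P + P A - P B R^{-1} B^T P + Q)
    P = p_flat.reshape(2, 2)
    dP = -(A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q)
    return dP.ravel()

# Integrate backward from t1 to t0, starting at the boundary condition P(t1) = F.
sol = solve_ivp(riccati_rhs, (t1, t0), F.ravel(), rtol=1e-8)
P_t0 = sol.y[:, -1].reshape(2, 2)

# Time-varying gain at the initial time: K(t0) = R^{-1} B^T P(t0)
K_t0 = np.linalg.solve(R, B.T @ P_t0)
print(K_t0)
```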

Infinite-horizon, continuous-time LQR

For a continuous-time linear system described by

    \dot{x} = Ax + Bu

with a cost functional defined as

    J = \int_0^{\infty} \left( x^T Q x + u^T R u \right) dt

the feedback control law that minimizes the value of the cost is

    u = -Kx

where K is given by

    K = R^{-1} B^T P

and P is found by solving the continuous-time algebraic Riccati equation

    A^T P + P A - P B R^{-1} B^T P + Q = 0
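For the infinite-horizon case the algebraic Riccati equation can be handed to a standard solver. A minimal Python sketch using SciPy's solve_continuous_are, again with a hypothetical double-integrator plant:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve A^T P + P A - P B R^{-1} B^T P + Q = 0 for P
P = solve_continuous_are(A, B, Q, R)

# Optimal state feedback u = -Kx with K = R^{-1} B^T P
K = np.linalg.solve(R, B.T @ P)
print(K)   # for this particular plant and weights, K ≈ [[1.0, 1.732]]
```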

Finite-horizon, discrete-time LQR


For a discrete-time linear system described by[1]

    x_{k+1} = A x_k + B u_k

with a performance index defined as

    J = \sum_{k=0}^{N} \left( x_k^T Q x_k + u_k^T R u_k \right)

the optimal control sequence minimizing the performance index is given by

    u_k = -F_k x_k

where

    F_k = \left( R + B^T P_k B \right)^{-1} B^T P_k A

and P_k is found iteratively backwards in time by the dynamic Riccati equation

    P_{k-1} = Q + A^T \left( P_k - P_k B \left( R + B^T P_k B \right)^{-1} B^T P_k \right) A

from initial condition P_N = Q.
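The backward recursion translates almost line for line into code. A minimal Python sketch follows, with a hypothetical discretized double integrator as the plant; the indexing mirrors the equations above (P_N = Q, then P_{k-1} from P_k).

```python
import numpy as np

# Hypothetical discretized double integrator (0.1 s sample time)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
N = 50                     # horizon length

# Backward pass: start at P_N = Q and apply the dynamic Riccati equation.
P = Q.copy()
P_list = [P]               # will hold P_N, ..., P_0
for _ in range(N):
    M = np.linalg.solve(R + B.T @ P @ B, B.T @ P)   # (R + B^T P B)^{-1} B^T P
    P = Q + A.T @ (P - P @ B @ M) @ A               # P_{k-1}
    P_list.append(P)
P_list.reverse()           # P_list[k] is now P_k

# Time-varying gains F_k = (R + B^T P_k B)^{-1} B^T P_k A, applied as u_k = -F_k x_k
F = [np.linalg.solve(R + B.T @ Pk @ B, B.T @ Pk @ A) for Pk in P_list]
print(F[0])
```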

Infinite-horizon, discrete-time LQR


For a discrete-time linear system described by

    x_{k+1} = A x_k + B u_k

with a performance index defined as

    J = \sum_{k=0}^{\infty} \left( x_k^T Q x_k + u_k^T R u_k \right)

the optimal control sequence minimizing the performance index is given by

    u_k = -F x_k

where

    F = \left( R + B^T P B \right)^{-1} B^T P A

and P is the unique positive definite solution to the discrete-time algebraic Riccati equation (DARE)

    P = Q + A^T \left( P - P B \left( R + B^T P B \right)^{-1} B^T P \right) A.

Note that one way to solve this equation is by iterating the dynamic Riccati equation of the finite-horizon case until it converges.
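In practice the DARE is usually solved directly rather than by iteration. A minimal Python sketch using SciPy's solve_discrete_are, with the same hypothetical plant as in the finite-horizon example:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the DARE for the unique positive definite P
P = solve_discrete_are(A, B, Q, R)

# Constant feedback gain F = (R + B^T P B)^{-1} B^T P A, applied as u_k = -F x_k
F = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print(F)
```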

References
1. ^ Chow, Gregory C. (1986). Analysis and Control of Dynamic Economic Systems. Krieger Publ. Co. ISBN 0-89874-969-7.

- Kwakernaak, Huibert and Sivan, Raphael (1972). Linear Optimal Control Systems. First Edition. Wiley-Interscience. ISBN 0-471-51110-2.
- Sontag, Eduardo (1998). Mathematical Control Theory: Deterministic Finite Dimensional Systems. Second Edition. Springer. ISBN 0-387-98489-5.

Linear-quadratic-Gaussian control
From Wikipedia, the free encyclopedia

In control theory, the linear-quadratic-Gaussian (LQG) control problem is one of the most fundamental optimal control problems. It concerns uncertain linear systems disturbed by additive white Gaussian noise, having incomplete state information (i.e. not all the state variables are measured and available for feedback) and undergoing control subject to quadratic costs. Moreover, the solution is unique and constitutes a linear dynamic feedback control law that is easily computed and implemented. Finally, the LQG controller is also fundamental to the optimal perturbation control of non-linear systems[1].

The LQG controller is simply the combination of a Kalman filter, i.e. a linear-quadratic estimator (LQE), with a linear-quadratic regulator (LQR). The separation principle guarantees that these can be designed and computed independently. LQG control applies to both linear time-invariant systems and linear time-varying systems. The application to linear time-invariant systems is well known. The application to linear time-varying systems enables the design of linear feedback controllers for non-linear uncertain systems.

The LQG controller itself is a dynamic system, like the system it controls. Both systems have the same state dimension. Therefore implementing the LQG controller may be problematic if the dimension of the system state is large. The reduced-order LQG problem (fixed-order LQG problem) overcomes this by fixing a priori the number of states of the LQG controller. This problem is more difficult to solve because it is no longer separable. Also, the solution is no longer unique. Despite these facts, numerical algorithms are available[2][3][4][5] to solve the associated optimal projection equations[6][7], which constitute necessary and sufficient conditions for a locally optimal reduced-order LQG controller[2].

Finally, a word of caution. LQG optimality does not automatically ensure good robustness properties.[8] The robust stability of the closed-loop system must be checked separately after the LQG controller has been designed. To promote robustness some of the system parameters may be assumed stochastic instead of deterministic. The associated, more difficult control problem leads to a similar optimal controller of which only the controller parameters are different[3].

Contents

1 Mathematical description of the problem and solution
  1.1 Continuous time
  1.2 Discrete time
2 See also
3 References

Mathematical description of the problem and solution


Continuous time

Consider the continuous-time linear dynamic system

    \dot{x}(t) = A(t) x(t) + B(t) u(t) + v(t)
    y(t) = C(t) x(t) + w(t)

where x represents the vector of state variables of the system, u the vector of control inputs and y the vector of measured outputs available for feedback. Both additive white Gaussian system noise v(t) and additive white Gaussian measurement noise w(t) affect the system. Given this system, the objective is to find the control input history u(t), which at every time t may depend only on the past measurements y(t'), 0 \le t' < t, such that the following cost function is minimized:

    J = E\left[ x^T(T) F x(T) + \int_0^T \left( x^T(t) Q(t) x(t) + u^T(t) R(t) u(t) \right) dt \right]

where E denotes expectation (average value). The final time (horizon) T may be either finite or infinite. If the horizon tends to infinity the first term x^T(T) F x(T) of the cost function becomes negligible and irrelevant to the problem. Also, to keep the costs finite, the cost function has to be taken to be J/T in this case.

The LQG controller that solves the LQG control problem is specified by the following equations:

    \dot{\hat{x}}(t) = A(t) \hat{x}(t) + B(t) u(t) + K(t) \left( y(t) - C(t) \hat{x}(t) \right), \quad \hat{x}(0) = E[x(0)]
    u(t) = -L(t) \hat{x}(t)

The matrix K(t) is called the Kalman gain of the associated Kalman filter represented by the first equation. At each time t this filter generates estimates \hat{x}(t) of the state x(t) using the past measurements and inputs. The Kalman gain K(t) is computed from the matrices A(t), C(t), the two intensity matrices V(t), W(t) associated to the white Gaussian noises v(t) and w(t), and finally E[x(0) x^T(0)]. These five matrices determine the Kalman gain through the following associated matrix Riccati differential equation:

    \dot{P}(t) = A(t) P(t) + P(t) A^T(t) - P(t) C^T(t) W^{-1}(t) C(t) P(t) + V(t), \quad P(0) = E[x(0) x^T(0)]

Given the solution P(t), 0 \le t \le T, the Kalman gain equals

    K(t) = P(t) C^T(t) W^{-1}(t)

The matrix L(t) is called the feedback gain matrix. This matrix is determined by the matrices Q(t), R(t) and F through the following associated matrix Riccati differential equation:

    -\dot{S}(t) = A^T(t) S(t) + S(t) A(t) - S(t) B(t) R^{-1}(t) B^T(t) S(t) + Q(t), \quad S(T) = F

Given the solution S(t), 0 \le t \le T, the feedback gain equals

    L(t) = R^{-1}(t) B^T(t) S(t)

Observe the similarity of the two matrix Riccati differential equations, the first one running forward in time, the second one running backward in time. This similarity is called duality. The first matrix Riccati differential equation solves the linear-quadratic estimation problem (LQE). The second matrix Riccati differential equation solves the linear-quadratic regulator problem (LQR). These problems are dual and together they solve the linear-quadratic-Gaussian control problem (LQG). So the LQG problem separates into the LQE and LQR problems, which can be solved independently. Therefore the LQG problem is called separable.

When A(t), B(t), C(t), Q(t), R(t) and the noise intensity matrices V(t), W(t) do not depend on t and when T tends to infinity, the LQG controller becomes a time-invariant dynamic system. In that case both matrix Riccati differential equations may be replaced by the two associated algebraic Riccati equations.
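For the time-invariant, infinite-horizon case just described, both Riccati differential equations reduce to algebraic Riccati equations, and duality lets a single ARE solver compute both gains. A minimal Python sketch follows; the plant, noise intensities and cost weights are hypothetical example values, and the variable names follow the text (V, W noise intensities; Q, R cost weights).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical plant: noisy double integrator with position measurement
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
V = np.eye(2)             # system (process) noise intensity
W = np.array([[0.01]])    # measurement noise intensity
Q = np.eye(2)             # state cost weight
R = np.array([[1.0]])     # control cost weight

# LQR half: the control Riccati equation in its algebraic (steady-state) form
S = solve_continuous_are(A, B, Q, R)
L = np.linalg.solve(R, B.T @ S)          # feedback gain, u = -L xhat

# LQE half: by duality, the filter Riccati equation is the control one
# with A -> A^T, B -> C^T, Q -> V, R -> W.
P = solve_continuous_are(A.T, C.T, V, W)
K = P @ C.T @ np.linalg.inv(W)           # Kalman gain

# Controller dynamics: xhat_dot = A xhat + B u + K (y - C xhat), u = -L xhat
print(L, K, sep="\n")
```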
Discrete time

Since the discrete-time LQG control problem is similar to the one in continuous time, the description below focuses on the mathematical equations.

Discrete-time linear system equations:

    x_{i+1} = A_i x_i + B_i u_i + v_i
    y_i = C_i x_i + w_i

Here i represents the discrete time index and v_i, w_i represent discrete-time Gaussian white noise processes with covariance matrices V_i, W_i respectively.

The quadratic cost function to be minimized:

    J = E\left[ x_N^T F x_N + \sum_{i=0}^{N-1} \left( x_i^T Q_i x_i + u_i^T R_i u_i \right) \right]

The discrete-time LQG controller:

    \hat{x}_{i+1} = A_i \hat{x}_i + B_i u_i + K_{i+1} \left( y_{i+1} - C_{i+1} (A_i \hat{x}_i + B_i u_i) \right), \quad \hat{x}_0 = E[x_0]
    u_i = -L_i \hat{x}_i

The Kalman gain equals

    K_i = P_i C_i^T \left( C_i P_i C_i^T + W_i \right)^{-1}

where P_i is determined by the following matrix Riccati difference equation that runs forward in time:

    P_{i+1} = A_i \left( P_i - P_i C_i^T \left( C_i P_i C_i^T + W_i \right)^{-1} C_i P_i \right) A_i^T + V_i, \quad P_0 = E[x_0 x_0^T]

The feedback gain matrix equals

    L_i = \left( B_i^T S_{i+1} B_i + R_i \right)^{-1} B_i^T S_{i+1} A_i

where S_i is determined by the following matrix Riccati difference equation that runs backward in time:

    S_i = A_i^T \left( S_{i+1} - S_{i+1} B_i \left( B_i^T S_{i+1} B_i + R_i \right)^{-1} B_i^T S_{i+1} \right) A_i + Q_i, \quad S_N = F

If all the matrices in the problem formulation are time-invariant and if the horizon N tends to infinity, the discrete-time LQG controller becomes time-invariant. In that case the matrix Riccati difference equations may be replaced by their associated discrete-time algebraic Riccati equations. These determine the time-invariant linear-quadratic estimator and the time-invariant linear-quadratic regulator in discrete time. To keep the costs finite, instead of J one has to consider J/N in this case.
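A minimal Python sketch of this steady-state discrete-time case, again exploiting duality so that SciPy's DARE solver yields both constant gains; all matrices are hypothetical example values, and lqg_step is an illustrative helper, not a library routine.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical time-invariant discrete plant
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
C = np.array([[1.0, 0.0]])
V = 0.01 * np.eye(2)      # process noise covariance
W = np.array([[0.1]])     # measurement noise covariance
Q = np.eye(2)
R = np.array([[1.0]])

# Regulator DARE -> constant feedback gain L
S = solve_discrete_are(A, B, Q, R)
L = np.linalg.solve(B.T @ S @ B + R, B.T @ S @ A)

# Estimator DARE (the dual problem) -> constant Kalman gain K
P = solve_discrete_are(A.T, C.T, V, W)
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + W)

def lqg_step(xhat, u_prev, y_new):
    # One controller step: predict, correct with the new measurement, feed back.
    xpred = A @ xhat + B @ u_prev
    xhat = xpred + K @ (y_new - C @ xpred)
    u = -L @ xhat
    return xhat, u
```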

See also

- Stochastic control
- Witsenhausen's counterexample

References
1. ^ Athans M. (1971). "The role and use of the stochastic linear-quadratic-Gaussian problem in control system design". IEEE Transactions on Automatic Control AC-16 (6): 529–552. doi:10.1109/TAC.1971.1099818.
2. ^ a b Van Willigenburg L.G., De Koning W.L. (2000). "Numerical algorithms and issues concerning the discrete-time optimal projection equations". European Journal of Control 6 (1): 93–100. Associated software download from Matlab Central.
3. ^ a b Van Willigenburg L.G., De Koning W.L. (1999). "Optimal reduced-order compensators for time-varying discrete-time systems with deterministic and white parameters". Automatica 35: 129–138. doi:10.1016/S0005-1098(98)00138-1. Associated software download from Matlab Central.
4. ^ Zigic D., Watson L.T., Collins E.G., Haddad W.M., Ying S. (1996). "Homotopy methods for solving the optimal projection equations for the H2 reduced order model problem". International Journal of Control 56 (1): 173–191. doi:10.1080/00207179208934308.
5. ^ Collins Jr. E.G., Haddad W.M., Ying S. (1996). "A homotopy algorithm for reduced-order dynamic compensation using the Hyland-Bernstein optimal projection equations". Journal of Guidance, Control & Dynamics 19 (2): 407–417. doi:10.2514/3.21633.
6. ^ Hyland D.C., Bernstein D.S. (1984). "The optimal projection equations for fixed-order dynamic compensation". IEEE Transactions on Automatic Control AC-29 (11): 1034–1037. doi:10.1109/TAC.1984.1103418.
7. ^ Bernstein D.S., Davis L.D., Hyland D.C. (1986). "The optimal projection equations for reduced-order discrete-time modeling, estimation and control". Journal of Guidance, Control, and Dynamics 9 (3): 288–293. doi:10.2514/3.20105.
8. ^ Green, Limebeer: Linear Robust Control, p. 27.
