Feedback Control Theory and Applications in Engineering


Table of Contents
I. Introduction

A. Definition of feedback control

B. Importance of feedback control in engineering

C. Basic components of a feedback control system

II. Feedback Control Theory

A. Feedback loop and closed-loop control

B. Block diagram representation

C. Transfer function and frequency response

D. Stability analysis

• Routh-Hurwitz criterion
• Bode plot
• Nyquist plot

III. PID Control

A. Definition of proportional-integral-derivative control

B. Mathematical model of a PID controller

C. Tuning methods

• Ziegler-Nichols method
• Cohen-Coon method
• Trial-and-error method
• Auto-tuning method
• Frequency response method

D. Limitations of PID control

IV. Advanced Control Techniques

A. Nonlinear control

• Feedback linearization
• Sliding mode control
• Backstepping control

B. Optimal control

• Linear quadratic regulator (LQR)
• Model predictive control (MPC)
• Optimal control of distributed parameter systems

C. Robust control

• H-infinity control
• Mu-synthesis
• Robust control of nonlinear systems

D. Adaptive control

• Model reference adaptive control
• Self-tuning control
• Adaptive control of nonlinear systems

V. Applications of Feedback Control

A. Control of mechanical systems

• Motion control
• Robotic control

B. Control of chemical and process systems

• Process control
• Batch control

C. Control of electrical and electronic systems

• Power electronics control
• Motor control

VI. Conclusion

• Summary of feedback control theory and applications
• Future developments in feedback control

I. Introduction
Feedback control is a fundamental concept in engineering: the output of a system is
measured and used to adjust its input so that the system achieves a desired behavior.
It is widely used in mechanical, electrical, chemical, and aerospace engineering,
among other fields.

In a feedback control system, a sensor measures the output of the system, and the
measured value is compared to a reference or setpoint value. The difference between
the two values, called the error, is used to adjust the input of the system in order to
reduce the error and bring the output closer to the desired value. This process of
comparing the output to the reference value and adjusting the input is repeated
continuously in a closed-loop fashion, resulting in the desired behavior of the system.

A. Definition of feedback control


Feedback control is a process in which the output of a system is measured and used
to adjust the input of the system in order to achieve a desired behavior. It involves
comparing the actual output of a system to a reference or setpoint value, calculating
the difference between the two values (called the error), and using this error to adjust
the input of the system in a closed-loop fashion. By continuously adjusting the input
based on the measured output, feedback control can compensate for changes in the
system or its environment, maintain stability, and improve performance.

In essence, feedback control aims to regulate a dynamic system by using information
from the system's output to influence its input, with the goal of achieving a desired
output. This is achieved by continuously measuring the output and adjusting the input
to minimize the difference between the output and the desired value. Feedback control
is used in a wide range of applications, including mechanical, electrical, chemical, and
aerospace engineering, among others, where it is crucial for maintaining stability and
achieving optimal performance.

B. Importance of feedback control in engineering


Feedback control is of significant importance in engineering for various reasons, some
of which are outlined below:

Regulation of dynamic systems: Many systems in engineering are dynamic and
subject to changes and uncertainties. Feedback control can regulate these systems
by continuously adjusting the input based on the measured output, compensating for
changes and disturbances, and maintaining the system's stability and performance.

Improvement of system performance: Feedback control can improve the performance
of systems by reducing errors and ensuring that the output follows a desired trajectory.
By adjusting the input of the system based on the measured output, feedback control
can optimize the system's response time, accuracy, and stability.

Robustness: Feedback control can ensure that the system is robust to changes and
uncertainties in the environment. By continuously measuring the output and adjusting
the input, feedback control can compensate for changes in the system's behavior and
maintain stability.

Automation: Feedback control is essential for automation in engineering. By using
sensors and actuators, feedback control can automatically adjust the input of the
system to achieve a desired output, reducing the need for human intervention.

Safety: Feedback control can improve safety in engineering systems. By continuously
monitoring the output and adjusting the input, feedback control can prevent the system
from exceeding safe operating limits or from entering dangerous states.

C. Basic components of a feedback control system


Sensor: A sensor is used to measure the output of the system, which is then fed back
to the controller. The sensor can be a physical device that converts the output into an
electrical or mechanical signal, such as a temperature sensor, pressure sensor, or
position sensor.

Controller: The controller is the brain of the feedback control system. It processes the
measured output from the sensor and calculates the required input to achieve the
desired behavior of the system. The controller can be implemented using analog or
digital circuits, microcontrollers, or software algorithms.

Actuator: The actuator is the device that adjusts the input of the system based on the
controller output. It can be a motor, valve, heater, or any other device that can modify
the system's behavior. The actuator converts the electrical or mechanical signal from
the controller into a physical action that affects the system's input.

Feedback loop: The feedback loop is the connection between the output, sensor,
controller, and actuator. It enables the continuous adjustment of the input based on
the measured output, allowing the system to regulate itself and achieve the desired
behavior.

Setpoint: The setpoint is the desired value of the output that the system is designed
to achieve. It is set by the operator or the system designer and serves as the reference
for the feedback control system.

These components work together in a closed-loop system to continuously monitor and
adjust the system's behavior. The sensor measures the output, the controller
calculates the required input, the actuator adjusts the input, and the feedback loop
ensures that the output follows the desired trajectory. By continuously adjusting the
input based on the measured output, the feedback control system can regulate the
system's behavior and achieve the desired performance.

II. Feedback Control Theory

A. Feedback loop and closed-loop control


A feedback loop is an essential component of a closed-loop control system. It enables
the continuous adjustment of the system's input based on the measured output,
allowing the system to regulate itself and achieve the desired behavior. In a closed-
loop control system, the feedback loop connects the system's output to the input,
forming a closed circuit.

The feedback loop in a closed-loop control system operates as follows:

1. The output of the system is measured using a sensor, and the resulting signal
is fed back to the controller.
2. The controller compares the measured output to the desired setpoint and
calculates the difference (error).
3. The controller uses the error to adjust the system's input through an actuator,
which modifies the system's behavior.
4. The adjusted input affects the output of the system, which is measured again,
and the feedback loop repeats the cycle.

By continuously adjusting the input based on the measured output, the closed-loop
control system can regulate the system's behavior and achieve the desired
performance. The closed-loop control system is designed to maintain the output within
a specific range around the setpoint, ensuring that the system operates reliably and
efficiently.
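As a minimal illustration of this cycle, the following sketch simulates a proportional feedback loop around a hypothetical first-order thermal plant; the plant model, gains, and time step are illustrative assumptions rather than values from any particular system.

    # Hypothetical first-order plant: the output rises with the actuator
    # input and relaxes toward a 20-degree ambient. All values are assumed.
    dt = 0.1          # time step
    tau = 5.0         # plant time constant
    gain = 2.0        # plant gain (output units per unit input)
    ambient = 20.0

    setpoint = 50.0   # desired output (reference)
    Kp = 3.0          # proportional controller gain

    y = ambient       # initial sensor reading
    for _ in range(500):
        error = setpoint - y                       # compare output to setpoint
        u = Kp * error                             # controller computes the input
        y += dt / tau * (ambient - y + gain * u)   # plant responds (Euler step)

    print(f"final output: {y:.2f}")

Note that the output settles near, but below, the setpoint: pure proportional control leaves a steady-state error, which is one motivation for the integral term introduced in Section III.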

Closed-loop control systems are used in a wide range of applications, including
process control, robotics, and aerospace engineering. They are preferred over open-
loop control systems because they are more robust to disturbances and uncertainties,
and they can maintain stability even in the presence of changes in the system or the
environment.

C. Transfer function and frequency response


The transfer function and frequency response of a feedback control system can be
derived from the transfer functions of its individual components, together with the
feedback-loop equation that relates the system's input and output. Consider a simple
feedback control system with a plant G(s) and a feedback path containing the
controller C(s) and the sensor dynamics H(s). Let U(s), Y(s), and E(s) denote the
Laplace transforms of the input u(t), the output y(t), and the error signal e(t),
respectively.

Working in the Laplace domain, where time-domain convolution becomes
multiplication, the output of the plant driven by the error signal is:

Y(s) = G(s) E(s)

The error signal is the input minus the feedback signal C(s)H(s)Y(s):

E(s) = U(s) - C(s)H(s)Y(s)

Substituting the expression for E(s) into the plant equation gives:

Y(s) = G(s) [U(s) - C(s)H(s)Y(s)]

Solving for Y(s)/U(s), we get the closed-loop transfer function of the system:

Y(s)         G(s)
---- = ------------------
U(s)   1 + G(s)C(s)H(s)

where H(s) is the transfer function of the sensor in the feedback path.

The frequency response of the system can be obtained by substituting s with jω, where
ω is the angular frequency, and plotting the magnitude response and phase response
of the transfer function on a logarithmic scale versus the angular frequency. The
frequency response analysis can be used to analyze the stability, sensitivity, and
performance of the feedback control system, and to design appropriate controllers
based on the system's frequency response characteristics.
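As a minimal sketch of this procedure, the code below forms the closed-loop transfer function for an assumed plant G(s) = 1/(s^2 + 2s + 1) with C(s) = H(s) = 1, and evaluates its frequency response with SciPy; the plant and loop choices are illustrative assumptions.

    import numpy as np
    from scipy import signal

    # Assumed plant G(s) = 1/(s^2 + 2s + 1); with C(s) = H(s) = 1 the
    # closed loop is T(s) = G/(1 + G), so den_T = den_G + num_G.
    num_G, den_G = [1.0], [1.0, 2.0, 1.0]
    T = signal.TransferFunction(num_G, np.polyadd(den_G, num_G))

    w, mag, phase = signal.bode(T)   # magnitude in dB, phase in degrees
    for wi, mi, pi in zip(w[::20], mag[::20], phase[::20]):
        print(f"w = {wi:8.3f} rad/s  |T| = {mi:7.2f} dB  phase = {pi:7.2f} deg")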

D. Stability analysis
Stability analysis is an important aspect of feedback control theory, as it helps
determine whether a feedback control system is stable or unstable, and how the
system responds to disturbances or changes in the input.

A feedback control system is said to be stable if the output of the system remains
bounded for any bounded input, and unstable if the output grows without bound for
some inputs. Stability is essential in control engineering, as it ensures that the
system responds predictably to its input and does not exhibit sustained oscillations
or runaway behavior.

There are several methods for analyzing the stability of a feedback control system,
including:

Bode stability criterion: This method uses the open-loop frequency response of the
system to assess closed-loop stability. For an open-loop stable, minimum-phase
system, the closed loop is stable if the phase lag is less than 180 degrees at the
gain-crossover frequency, where the magnitude of the open-loop transfer function is
unity (0 dB).

Routh-Hurwitz stability criterion: This method uses the characteristic equation of
the closed-loop system to determine stability. According to the Routh-Hurwitz
criterion, the system is stable if and only if there are no sign changes in the first
column of the Routh array; all coefficients of the characteristic equation having the
same sign is a necessary, but not sufficient, condition.
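The following sketch builds the Routh array for a given characteristic polynomial and counts the sign changes in its first column; the special cases (a zero pivot or an all-zero row) are deliberately not handled, and the example polynomial is an arbitrary stable cubic.

    import numpy as np

    def routh_array(coeffs):
        """Build the Routh array from characteristic-equation coefficients,
        highest power first. Zero-pivot and zero-row special cases are not
        handled in this sketch."""
        coeffs = np.asarray(coeffs, dtype=float)
        n = len(coeffs)                 # degree + 1 rows
        cols = (n + 1) // 2
        R = np.zeros((n, cols))
        R[0, :len(coeffs[0::2])] = coeffs[0::2]   # a_n, a_{n-2}, ...
        R[1, :len(coeffs[1::2])] = coeffs[1::2]   # a_{n-1}, a_{n-3}, ...
        for i in range(2, n):
            for j in range(cols - 1):
                R[i, j] = (R[i-1, 0] * R[i-2, j+1]
                           - R[i-2, 0] * R[i-1, j+1]) / R[i-1, 0]
        return R

    # Example: s^3 + 2s^2 + 3s + 1, a stable cubic
    R = routh_array([1, 2, 3, 1])
    first_col = R[:, 0]
    sign_changes = np.sum(np.diff(np.sign(first_col)) != 0)
    print(R)
    print("sign changes in first column:", sign_changes)   # 0 -> stable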

Nyquist stability criterion: This method uses the complex-plane representation of
the open-loop frequency response to determine stability. For a system whose
open-loop transfer function has no right-half-plane poles, the closed loop is stable
if and only if the Nyquist plot does not encircle the point (-1, 0) in the complex
plane; more generally, the number of counterclockwise encirclements of (-1, 0) must
equal the number of open-loop right-half-plane poles.
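A rough numerical counterpart, assuming an arbitrary stable open-loop example L(s) = 5/((s+1)(s+2)(s+3)): evaluate L(jw) on a frequency grid with SciPy and inspect how close the locus comes to the critical point -1 (plotting the real part against the imaginary part, together with its mirror image, would give the Nyquist plot itself).

    import numpy as np
    from scipy import signal

    # Assumed open-loop example: L(s) = 5 / ((s + 1)(s + 2)(s + 3))
    den = np.polymul([1, 1], np.polymul([1, 2], [1, 3]))
    L = signal.TransferFunction([5.0], den)

    w = np.logspace(-2, 2, 1000)
    _, H = signal.freqresp(L, w)        # complex open-loop response L(jw)

    # Here we only report the distance to the critical point -1 + 0j;
    # the locus never reaching -1 is consistent with no encirclement.
    print(f"minimum distance to -1: {np.min(np.abs(H + 1)):.3f}")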

Root locus method: This method uses the root locus plot of the system to determine
the stability and response of the system to changes in the system's parameters. The
root locus plot shows the movement of the closed-loop poles of the system as a
function of a parameter in the system's transfer function.

III. PID Control


Proportional-Integral-Derivative (PID) control is a common feedback control technique
used in many industrial processes to control the output of a system. The PID controller
continuously calculates an error value as the difference between the desired set point
and the actual output of the system. It then applies proportional, integral, and
derivative terms to the error value to compute the output of the controller.

The proportional term of the controller produces an output that is proportional to the
current error value. The integral term of the controller produces an output that is
proportional to the accumulated error over time, while the derivative term produces an
output that is proportional to the rate of change of the error over time. By combining
these three terms, the PID controller produces an output that compensates for both
the steady-state error and the transient response of the system, leading to faster and
more accurate control.

The PID control algorithm can be mathematically represented by the following
equation:

u(t) = Kp*e(t) + Ki*∫ e(t) dt + Kd*de(t)/dt

where u(t) is the output of the controller at time t, e(t) is the error between the
desired set point and the actual output of the system at time t, and Kp, Ki, and Kd
are the proportional, integral, and derivative gain coefficients, respectively; the
second and third terms involve the integral and the derivative of the error signal
with respect to time.
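A minimal discrete-time realization of this law, using rectangular integration for the integral term and a backward difference for the derivative term; the gains and sample time below are placeholders, not recommended values.

    class PID:
        """Minimal discrete PID: rectangular integral, backward-difference
        derivative. Gains and sample time are illustrative placeholders."""
        def __init__(self, Kp, Ki, Kd, dt):
            self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt                   # integral of e(t)
            derivative = (error - self.prev_error) / self.dt   # derivative of e(t)
            self.prev_error = error
            return (self.Kp * error + self.Ki * self.integral
                    + self.Kd * derivative)

    pid = PID(Kp=2.0, Ki=0.5, Kd=0.1, dt=0.01)
    u = pid.update(setpoint=1.0, measurement=0.8)

In practice the derivative term is usually computed on the filtered measurement rather than on the raw error, to avoid amplifying sensor noise and setpoint kicks.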

The PID controller is widely used in many applications, such as temperature control,
speed control, level control, and pressure control, due to its simplicity, flexibility, and
robustness. However, the tuning of the PID parameters can be challenging and
requires some knowledge of the system's dynamics and response characteristics.

B. Mathematical model of a PID controller


The mathematical model of a PID controller can be expressed as a transfer function
that relates the output of the controller to the input error signal. The transfer function
of a PID controller can be written as:

G(s) = Kp + Ki/s + Kd*s

where G(s) is the transfer function of the controller, Kp, Ki, and Kd are the proportional,
integral, and derivative gains, respectively, and s is the Laplace variable.

The first term, Kp, represents the proportional gain, which is multiplied by the error
signal to produce the output of the controller. The proportional gain determines how
much the controller responds to the current error value.

The second term, Ki/s, represents the integral gain, which is multiplied by the integral
of the error signal over time. The integral gain reduces the steady-state error of the
system and helps the controller to reach the desired set point.

The third term, Kd*s, represents the derivative gain, which is multiplied by the
derivative of the error signal with respect to time. The derivative gain improves the
transient response of the system and helps the controller to respond faster to changes
in the input.

The transfer function of the PID controller can be used to analyze the stability and
performance of the closed-loop system and to design appropriate controller
parameters for a given system. The PID controller is a widely used feedback control
technique due to its simplicity, flexibility, and robustness, and it is applied in many
industrial and engineering applications, such as temperature control, speed control,
and process control.
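As a sketch of such an analysis, the code below closes the loop around an assumed plant G(s) = 1/(s^2 + 2s + 1) with the PID transfer function in a unity-feedback configuration, T(s) = C(s)G(s)/(1 + C(s)G(s)), and computes the step response; the plant and gains are illustrative.

    import numpy as np
    from scipy import signal

    # PID transfer function C(s) = (Kd*s^2 + Kp*s + Ki) / s
    Kp, Ki, Kd = 10.0, 5.0, 1.0
    num_C, den_C = [Kd, Kp, Ki], [1.0, 0.0]

    # Assumed plant for illustration: G(s) = 1 / (s^2 + 2s + 1)
    num_G, den_G = [1.0], [1.0, 2.0, 1.0]

    # Open loop C(s)G(s); closed loop T(s) = CG / (1 + CG), unity feedback
    num_ol = np.polymul(num_C, num_G)
    den_ol = np.polymul(den_C, den_G)
    T = signal.TransferFunction(num_ol, np.polyadd(den_ol, num_ol))

    t, y = signal.step(T)                 # closed-loop step response
    print(f"final value: {y[-1]:.3f}")    # approaches 1 due to integral action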

C. Tuning methods

PID control tuning is the process of adjusting the controller parameters to achieve
the desired closed-loop system performance. There are several methods for tuning
PID controllers, and the choice of method depends on the application requirements,
the system dynamics, and the available resources.

Ziegler-Nichols Method: This is a popular and widely used tuning method that
involves applying a step input to the system and then measuring the system response
to determine the ultimate gain and period of oscillation. These values are then used to
calculate the PID controller parameters.
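In code, the classic Ziegler-Nichols closed-loop (ultimate gain) table reduces to a few lines; the ultimate gain Ku and ultimate period Tu must first be measured on the actual system, and the values passed below are purely illustrative.

    def ziegler_nichols_pid(Ku, Tu):
        """Classic Ziegler-Nichols PID rules from the ultimate gain Ku and
        ultimate period Tu, both measured from the sustained oscillation."""
        Kp = 0.6 * Ku
        Ti = 0.5 * Tu            # integral time
        Td = 0.125 * Tu          # derivative time
        return Kp, Kp / Ti, Kp * Td   # Kp, Ki, Kd

    Kp, Ki, Kd = ziegler_nichols_pid(Ku=8.0, Tu=2.5)  # illustrative measurements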

Cohen-Coon Method: This is another popular tuning method that involves estimating
the process time constant and the process gain based on the system response to a
step input. These estimates are then used to calculate the PID controller parameters.

Trial and Error Method: This method involves manually adjusting the PID controller
parameters until the desired closed-loop system performance is achieved. This
method is simple but can be time-consuming and may not guarantee optimal
performance.

Auto-Tuning Method: This method involves using an algorithm or software to
automatically adjust the PID controller parameters based on the system response.
Auto-tuning methods can save time and improve performance but require more
resources and may not work well for highly nonlinear or time-varying systems.

Frequency Response Method: This method involves analyzing the frequency
response of the closed-loop system and designing the PID controller parameters
based on the desired gain and phase margins. This method is effective for systems
with known transfer functions and is commonly used in aerospace and control systems
engineering.

D. Limitations of PID control


Nonlinear systems: PID control assumes that the system dynamics are linear and
time-invariant, which may not be the case for highly nonlinear systems. In such cases,
more advanced control techniques, such as adaptive control or model predictive
control, may be required.

Dead time: Dead time is the delay between the input and output of the system, which
can lead to instability and poor control performance in PID control. Dead time can be
compensated for by using a Smith predictor or a modified PID controller.

Saturation and nonlinearity: PID control assumes that the control signal can vary
continuously over the actuator's full operating range, which may not be the case for
systems with saturation or other nonlinearities. In such cases, anti-windup schemes
or nonlinear control techniques may be required, as in the sketch below.
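A minimal sketch of one such anti-windup scheme, conditional integration, building on the discrete PID form shown earlier; the limits and the dictionary-based state are illustrative choices.

    def pid_antiwindup_step(state, setpoint, y, Kp, Ki, Kd, dt, u_min, u_max):
        """One PID step with conditional integration: the integral term is
        frozen whenever the raw command would exceed the actuator limits."""
        error = setpoint - y
        derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        u_raw = (Kp * error + Ki * (state["integral"] + error * dt)
                 + Kd * derivative)
        if u_min <= u_raw <= u_max:
            state["integral"] += error * dt   # integrate only when unsaturated
            return u_raw
        return max(u_min, min(u_raw, u_max))  # clamp the saturated command

    state = {"integral": 0.0, "prev_error": 0.0}
    u = pid_antiwindup_step(state, setpoint=1.0, y=0.2,
                            Kp=2.0, Ki=1.0, Kd=0.0, dt=0.01,
                            u_min=-1.0, u_max=1.0)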

Robustness: PID control is not inherently robust to disturbances or model
uncertainties, which can affect the closed-loop system performance. Robust control
techniques, such as H-infinity control or sliding mode control, can be used to improve
the robustness of the system.

Tuning: PID control requires careful tuning of the controller parameters to achieve the
desired closed-loop system performance. The tuning process can be time-consuming
and may require expert knowledge and experience.

IV. Advanced Control Techniques

A. Nonlinear control
Nonlinear control techniques are used to address the limitations of linear control
methods such as PID control and are effective in handling systems with nonlinearities
and uncertainties. Some of the commonly used nonlinear control techniques are:

Feedback linearization: This technique transforms a nonlinear system into a linear
system by using a change of coordinates and feedback control. It is particularly useful
for systems with known or measurable inputs and outputs.

Sliding mode control: This technique creates a sliding surface where the system
behavior is constrained to follow a desired trajectory. The control law is designed such
that the sliding motion is maintained, resulting in robustness to uncertainties and
disturbances.
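A minimal sliding mode sketch for an assumed double-integrator plant x'' = u + d with a bounded unmeasured disturbance d: the surface is s = c*x + x', and the switching gain k is chosen larger than the disturbance bound. All parameter values are illustrative.

    import numpy as np

    # Assumed plant: double integrator x'' = u + d, with a bounded
    # unknown disturbance d. Goal: drive x to zero despite d.
    c, k = 2.0, 1.5          # surface slope and switching gain (k > |d|)
    dt, x, v = 0.001, 1.0, 0.0

    for i in range(20000):
        d = 0.8 * np.sin(0.01 * i)          # unmeasured disturbance, |d| < k
        s = c * x + v                       # sliding surface s = c*x + x'
        u = -c * v - k * np.sign(s)         # equivalent term + switching term
        v += (u + d) * dt                   # integrate the plant
        x += v * dt

    print(f"x = {x:.4f}, v = {v:.4f}")      # both near zero on the surface

The sign term causes the well-known chattering of sliding mode control; in practice it is often replaced by a saturation or tanh function inside a boundary layer.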

Backstepping control: This technique is a recursive feedback design that constructs
a control law for the system based on a virtual subsystem. The control law is designed
such that the closed-loop system asymptotically follows the desired trajectory.

B. Optimal control
Optimal control techniques are used to optimize a certain objective function while
satisfying the system constraints. These techniques are particularly useful for systems
with complex dynamics and multiple inputs and outputs. Some of the commonly used
optimal control techniques are:

Linear quadratic regulator (LQR): This technique is a state-feedback control design
that minimizes a quadratic cost function of the system state and control inputs. It
applies to systems with linear dynamics; paired with a Kalman filter (the LQG
design), it also accommodates Gaussian disturbances.
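A minimal LQR sketch for an assumed double integrator, solving the continuous-time algebraic Riccati equation with SciPy; the weighting matrices Q and R are illustrative tuning choices.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Assumed system: double integrator x1' = x2, x2' = u
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([1.0, 1.0])   # state weighting (tuning choice)
    R = np.array([[1.0]])     # control weighting (tuning choice)

    # Solve A'P + PA - P B R^-1 B' P + Q = 0, then K = R^-1 B' P
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)           # optimal gain, u = -K x
    print("K =", K)
    # Closed-loop eigenvalues should lie in the left half-plane:
    print("eigs:", np.linalg.eigvals(A - B @ K))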

Model predictive control (MPC): This technique uses a dynamic model of the system
to predict the future behavior of the system and optimize a performance index over a
finite time horizon. It is particularly useful for systems with nonlinear dynamics and
constraints on the inputs and outputs.

Optimal control of distributed parameter systems: This technique is used to design
control laws for systems described by partial differential equations, such as heat
transfer or fluid flow systems. The control law is designed to minimize an objective
function over the entire domain.

C. Robust control
Robust control techniques are used to design control laws that are insensitive to
uncertainties and disturbances in the system. These techniques are particularly useful
for systems with modeling errors and parameter variations. Some of the commonly
used robust control techniques are:

H-infinity control: This technique is a frequency-domain control design that
minimizes the worst-case performance of the closed-loop system subject to a
prescribed level of disturbance attenuation. It is particularly useful for systems with
uncertain dynamics and disturbances.

Mu-synthesis: This technique is a control design method that combines the H-infinity
and classical control approaches to obtain a robust control law. It is particularly useful
for systems with both parametric and nonparametric uncertainties.

Robust control of nonlinear systems: This technique uses feedback linearization,
sliding mode control, and other nonlinear control techniques to design control laws
that are robust to uncertainties and disturbances in nonlinear systems.

D. Adaptive control
Adaptive control techniques are used to design control laws that adapt to changes in
the system dynamics or parameter variations. These techniques are particularly useful
for systems with unknown or time-varying parameters. Some of the commonly used
adaptive control techniques are:

Model reference adaptive control: This technique uses a reference model of the
system to adjust the control law based on the difference between the actual and
desired system behavior. It is particularly useful for systems with linear dynamics and
parameter variations.
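As a minimal illustration of the model-reference idea, the classic scalar MIT-rule example adapts a feedforward gain so that a first-order plant with an unknown gain tracks a reference model; the plant, model, and adaptation gain below are illustrative, and MIT-rule convergence is not guaranteed for large adaptation gains.

    import numpy as np

    # Classic scalar MIT-rule sketch: plant  y' = -a*y + k*u  with unknown
    # gain k; reference model  ym' = -a*ym + k0*r. Control u = theta*r;
    # the adaptive law drives theta toward the ideal value k0/k.
    a, k, k0 = 1.0, 2.0, 1.0     # k is "unknown" to the controller
    gamma = 0.5                  # adaptation gain (tuning assumption)
    dt, y, ym, theta = 0.001, 0.0, 0.0, 0.0

    for i in range(200000):
        r = 1.0 if (i // 20000) % 2 == 0 else -1.0   # square-wave reference
        u = theta * r
        y  += (-a * y + k * u) * dt
        ym += (-a * ym + k0 * r) * dt
        e = y - ym
        theta += (-gamma * e * ym) * dt              # MIT rule: theta' = -g*e*ym

    print(f"theta = {theta:.3f}  (ideal k0/k = {k0/k:.3f})")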

Self-tuning control: This technique adjusts the control law parameters based on an
estimate of the system parameters. It is particularly useful for systems with unknown
parameters or systems that operate under varying conditions.

Adaptive control of nonlinear systems: This technique uses nonlinear control
techniques, such as feedback linearization and sliding mode control, to design control
laws that adapt to changes in the system dynamics or parameter variations.

V. Applications of Feedback Control

A. Control of mechanical systems


Control of mechanical systems involves the design of control laws for machines and
mechanical systems such as robots, vehicles, and industrial equipment. Some of the
common applications of control of mechanical systems include:

Motion control: This involves the control of position, velocity, and acceleration of
mechanical systems such as vehicles, aircraft, and robotics. Motion control is critical
for achieving accurate and reliable performance in applications such as autonomous
vehicles, unmanned aerial vehicles, and robotic manufacturing.

Robotic control: Robotic control involves the design of control laws for robots,
including manipulators, mobile robots, and humanoid robots. Robotic control is
essential for achieving precise and efficient robotic movements and interactions with
the environment, and has applications in fields such as manufacturing, healthcare,
and search and rescue operations.

The design of control laws for mechanical systems involves modeling the dynamics of
the system, designing a suitable control algorithm, and implementing the control
system on the hardware platform. Control of mechanical systems requires a
multidisciplinary approach that integrates principles of mechanical engineering,
electrical engineering, and computer science. Advances in control theory, sensing and
actuation technologies, and artificial intelligence have enabled the development of
more sophisticated control systems for mechanical systems, leading to improved
performance, efficiency, and safety.

B. Control of chemical and process systems


Control of chemical and process systems involves the design of control strategies for
chemical processes, which include various unit operations such as reactors,
separators, and heat exchangers. Control of chemical and process systems is critical
for ensuring safe, efficient, and reliable operation of chemical plants, refineries, and
other process industries. Some of the common applications of control of chemical and
process systems include:

Process control: Process control involves the control of process variables such as
temperature, pressure, flow rate, and chemical concentrations to achieve desired
process performance. Process control is critical for maintaining product quality,
minimizing waste, and maximizing production efficiency.

Batch control: Batch control involves the control of a sequence of operations that are
carried out in batches, such as in pharmaceutical manufacturing or food processing.
Batch control is critical for achieving consistent product quality and minimizing waste
in batch processes.

C. Control of electrical and electronic systems


Control of electrical and electronic systems involves the design of control strategies
for electrical systems, including power electronics, electric machines, and electronic
circuits. Control of electrical and electronic systems is critical for ensuring safe,
efficient, and reliable operation of electrical systems in various applications, such as
electric vehicles, renewable energy systems, and power grids. Some of the common
applications of control of electrical and electronic systems include:

Power electronics control: Power electronics control involves the design of control
strategies for power converters, such as inverters and rectifiers, used in various
applications such as renewable energy systems, electric vehicles, and motor drives.
Power electronics control is critical for achieving efficient and reliable power
conversion and regulation.

Motor control: Motor control involves the design of control strategies for electric
machines such as motors and generators. Motor control is critical for achieving
efficient and reliable operation of electric machines in various applications such as
electric vehicles, robotics, and industrial automation.

VI. Conclusion
In summary, feedback control theory is a fundamental discipline in engineering that
deals with the design of control systems that can achieve desired performance and
stability. Feedback control systems consist of sensors, actuators, and controllers that
work together to regulate a process or a system. The basic components of a feedback
control system include a plant, a controller, and a feedback loop, which together can
achieve desired performance specifications such as stability, accuracy, and
robustness. Proportional-Integral-Derivative (PID) control is a widely used control
technique that can provide good performance for many applications. However, other
advanced control techniques such as nonlinear control, optimal control, robust control,
and adaptive control are also important for more complex and challenging
applications.

Feedback control theory has wide-ranging applications in various fields of engineering,
including mechanical systems, chemical and process systems, and electrical and
electronic systems. Control of mechanical systems involves the design of control
strategies for motion control and robotic control. Control of chemical and process
systems involves the design of control strategies for process control and batch control.
Control of electrical and electronic systems involves the design of control strategies
for power electronics control and motor control. In all these applications, feedback
control is critical for ensuring safe, efficient, and reliable operation of various systems
and processes.
