CHAPTER 1
1.1. INTRODUCTION:
Speed control was used in automobiles as early as 1900 in the Wilson-Pilcher, and in the 1910s by Peerless, who advertised that their system would "maintain speed whether up hill or down".
The underlying technology had been adopted by James Watt and Matthew Boulton in 1788 to control steam engines, and the use of governors dates back at least to the 17th century. On an engine, the governor adjusts the throttle position as the speed of the engine changes with different loads, so as to maintain a near-constant speed.
The first car with Teetor's system was the 1958 Imperial (marketed as "Auto-pilot"), which used a speed dial on the dashboard. This system calculated ground speed from driveshaft rotations, taken off the rotating speedometer cable, and used a bi-directional screw-drive electric motor to vary the throttle position as needed.
Fig. 1. Cruise control
1.2. OPERATION:
The driver must bring the vehicle up to speed manually and use a button to set
the cruise control to the current speed.
The cruise control takes its speed signal from a rotating driveshaft, the speedometer cable, a wheel speed sensor, the engine's RPM, or from internal speed pulses produced electronically by the vehicle. Most systems do not allow the use of cruise control below a certain speed, typically around 25 mph (40 km/h). The vehicle maintains the desired speed by pulling the throttle cable with a solenoid, by a vacuum-driven servomechanism, or, if the vehicle uses a 'drive-by-wire' system, entirely through the electronic systems built into the vehicle.
All cruise control systems must be capable of being turned off both explicitly and
automatically when the driver depresses the brake, and often also the clutch. Cruise
control often includes a memory feature to resume the set speed after braking, and a
coast feature to reduce the set speed without braking.
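The engage/cancel/resume/coast behaviour described above can be sketched as a small state machine. This is a minimal illustration in Python; the class name, the 40 km/h engage threshold and the 2 km/h coast step are assumptions for the sketch, not taken from any production system.

```python
class CruiseControl:
    """Sketch of the set / brake-cancel / resume / coast logic described above."""

    MIN_SPEED = 40.0  # km/h; most systems refuse to engage below roughly this speed

    def __init__(self):
        self.set_speed = None   # remembered even after braking (memory feature)
        self.engaged = False

    def set(self, current_speed):
        # The driver brings the car up to speed manually, then presses "set".
        if current_speed >= self.MIN_SPEED:
            self.set_speed = current_speed
            self.engaged = True
        return self.engaged

    def brake(self):
        # Braking (or pressing the clutch) must always disengage the system,
        # but the set speed is kept in memory.
        self.engaged = False

    def resume(self):
        # Resume returns to the remembered set speed after braking.
        if self.set_speed is not None:
            self.engaged = True
        return self.engaged

    def coast(self, step=2.0):
        # Coast reduces the set speed without braking.
        if self.engaged:
            self.set_speed = max(self.MIN_SPEED, self.set_speed - step)
        return self.set_speed
```

The key property the sketch captures is that braking clears `engaged` but not `set_speed`, which is exactly the memory feature the text describes.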
When the cruise control is engaged, the throttle can still be used to accelerate
the car, but once the pedal is released the car will then slow down until it reaches the
previously set speed.
On the latest vehicles fitted with electronic throttle control, cruise control can be
easily integrated into the vehicle's engine management system. Modern "adaptive"
systems (see below) include the ability to automatically reduce speed when the distance
to a car in front, or the speed limit, decreases. This is an advantage for those driving in
unfamiliar areas.
Fig.2. Block diagram of the cruise control system: the set point is compared with the output speed, and the error drives a P, PI, PID or neuro controller acting on the plant transfer function.
Cruise control is useful for long drives across highways and sparsely populated roads, reducing driver fatigue and improving comfort by allowing position changes to be made more safely. Some drivers also use it to avoid subconsciously violating speed limits: a driver who otherwise tends to increase speed gradually over the course of a highway journey may avoid speeding.
However, when used incorrectly, cruise control can lead to accidents due to several factors, such as rough or loose terrain that could negatively affect the cruise control's operation.
CHAPTER 2
LITERATURE SURVEY
2.1. CRUISE CONTROL:
This section is based on a journal article by Murray Cole.
Simple Cruise Control:
The car model that we will consider in this chapter is made up of:
• An accurate positioning motor, which accepts a low-power (almost no current) voltage signal and moves the accelerator pedal in a manner proportional to the voltage signal.
• The engine, which produces a torque that is related to the position of the accelerator pedal.
• The drivetrain of the car, which transmits this torque to the ground through the driven wheels. We will assume that the car is also subjected to air and rolling resistance.
• The vehicle body, which is accelerated by the forces acting on it.
• Changes in the slope of the highway, which act as a disturbance force on the car.
• A speed sensor, which measures the vehicle's speed for feedback purposes. We will assume for now that we get 1 volt for every mile/hour.
2.3. dSPACE:
Recently, software tools for real-time control have become available. Using these software tools it is possible to output values while the simulation program is running, and also to add signals obtained from external sensors. This scheme is known as "hardware in the loop" simulation. Control and supervisory strategies are designed graphically in the Simulink block-diagram environment. Control algorithms are then downloaded to a real-time prototyping system, instead of designing specific hardware. However, a complete and integrated environment is required to support a designer throughout the development of a control system, from the initial design phase to the final steps of code generation. In response, several rapid control prototyping modules have been proposed using MATLAB/Simulink. A controller board such as the dSPACE DS1104 is appropriate for motion control and is fully programmable from the MATLAB/Simulink environment. dSPACE uses its own real-time interface implementation software to generate and then download the real-time code to specific dSPACE boards. It enables the user to design a digital controller simply by drawing its block diagram using the graphical interface of Simulink.
In this work, the model of the plant and the control algorithm are developed using MATLAB/Simulink. The code for the dSPACE board is generated using the Real-Time Workshop toolbox. Real-Time Workshop produces code directly from Simulink models and automatically builds programs that can be run in a variety of environments, including real-time systems and stand-alone simulations. After downloading the software to the real-time platform, the data and system parameters can be observed and modified using ControlDesk. The software allows the user to create graphical user interfaces using predefined objects like plots, buttons, sliders, labels, etc. The main features of this environment are graphical design of the control algorithm, automatic real-time code generation, and online observation and modification of data and system parameters.
dSPACE DS1104 R&D CONTROLLER BOARD:
The DS1104 R&D Controller Board is a standard board that can be plugged into a PCI slot of a PC. The DS1104 is specifically designed for the development of high-speed multivariable digital controllers and real-time simulations in various fields. It is a complete real-time control system based on a PowerPC 603e floating-point processor running at 250 MHz. For advanced I/O purposes, the board includes a slave-DSP subsystem based on the TMS320F240 DSP microcontroller. For rapid control prototyping, the DAC and encoder interface modules of the connector panel provide easy access to all input and output signals of the board.
The control program is written in the Simulink environment combined with the real-time interface of the DS1104 board. The software used in the laboratory experiment is based mainly on MATLAB/Simulink programs. The control law is designed in Simulink and executed in real time using the dSPACE DS1104 DSP board. Once the controller has been built from the Simulink block-set, machine code is generated that runs on the DS1104's TMS320F240 DSP processor.
Many well-structured layouts enable the user to gain full control over the system.
ControlDesk allows users to generate convenient, graphical user interfaces (layouts)
with a great variety of control elements, from simple GUI elements (push-buttons,
displays, radio buttons, etc.) to complex plotters and photorealistic graphics. This
virtual environment allows the user not only to control or monitor any control algorithm
parameter during the real time simulation, but also to access any I/O signal connected
to the hardware components.
CHAPTER 3
METHODOLOGY
Let m be the total mass of the vehicle and v the speed of the car. F is the force generated at the contact between the wheels and the road surface, and Fd is the disturbance force due to friction, gravity and aerodynamic drag.
$$ m\frac{dv}{dt} = F - F_d $$

The engine torque is modeled as a function of the engine speed \(\omega\), with maximum torque \(T_m\) attained at engine speed \(\omega_m\):

$$ T(\omega) = T_m\left(1 - \beta\left(\frac{\omega}{\omega_m} - 1\right)^2\right) $$

The engine speed is related to the vehicle speed v through the gear ratio n and the wheel radius r:

$$ \omega = \frac{n}{r}v =: \alpha_n v $$

and, with throttle input u, the driving force is

$$ F = \frac{nu}{r}T(\omega) = \alpha_n u\,T(\alpha_n v) $$
The disturbance force Fd has three major components: Fg, the force due to gravity; Fr, the rolling friction; and Fa, the aerodynamic drag. Letting \(\theta\) be the slope of the road,

$$ F_g = mg\sin\theta, \qquad g = 9.8\ \mathrm{m/s^2} $$

The rolling friction, with coefficient \(C_r\), opposes the motion:

$$ F_r = mgC_r\,\mathrm{sgn}(v) $$

The aerodynamic drag, with air density \(\rho\), drag coefficient \(C_d\) and frontal area A, is

$$ F_a = \frac{1}{2}\rho C_d A v^2 $$
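Putting the model together, the longitudinal dynamics m dv/dt = F − Fg − Fr − Fa can be simulated with a simple forward-Euler loop. The sketch below is in Python for illustration, and every parameter value is an assumed placeholder of typical textbook magnitude, not a value from this report.

```python
import math

# Illustrative parameters (assumed, not from this report)
m, g = 1600.0, 9.8                      # vehicle mass [kg], gravity [m/s^2]
Cr, Cd, A, rho = 0.01, 0.32, 2.4, 1.2   # rolling coeff., drag coeff., area [m^2], air density [kg/m^3]
Tm, wm, beta = 190.0, 420.0, 0.4        # peak torque [Nm], peak-torque engine speed [rad/s], shape factor
alpha_n = 16.0                          # alpha_n = n / r [rad/m]

def torque(w):
    # T(w) = Tm * (1 - beta * (w / wm - 1)^2)
    return Tm * (1.0 - beta * (w / wm - 1.0) ** 2)

def dvdt(v, u, theta=0.0):
    # m dv/dt = F - Fg - Fr - Fa
    F = alpha_n * u * torque(alpha_n * v)                   # driving force, (n u / r) T(alpha_n v)
    Fg = m * g * math.sin(theta)                            # gravity on a slope of angle theta
    Fr = m * g * Cr * math.copysign(1.0, v) if v else 0.0   # rolling friction, sgn(v)
    Fa = 0.5 * rho * Cd * A * v * v                         # aerodynamic drag
    return (F - Fg - Fr - Fa) / m

def simulate(v0, u, t_end, dt=0.1, theta=0.0):
    # Forward-Euler integration of the vehicle speed
    v = v0
    for _ in range(int(t_end / dt)):
        v += dvdt(v, u, theta) * dt
    return v
```

With zero throttle the derivative is negative (drag and rolling friction slow the car), and with a partial throttle the speed climbs toward an equilibrium where driving force balances the resistances.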
Fig.3. Simulink model of cruise control system
CHAPTER 4
CALCULATIONS
In an open-loop system, the output has no influence over the control action applied at the input.
4.2. ZIEGLER-NICHOLS TUNING METHOD:
Parameter    P    PI         PID
Ti           -    3L = 15    2L = 10
Td           -    -          L/2 = 2.5

Tab.1. Controller parameters of P, PI and PID controllers using the ZN open-loop response method (L = 5 s)
PI: Ki = 0.0015
PID: Ki = 0.003, Kd = 0.075

For the 0 % overshoot in set point (SP) tuning rule:

Parameter    P    PI           PID
Ti           -    1.2T = 120   T = 100
Td           -    -            L/2 = 2.5

PI: Ki = 0.000007291
PID: Ki = 0.00015, Kd = 0.0375
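The tabulated settings relate to the quoted gains through Ki = Kp/Ti and Kd = Kp·Td. A quick sanity check, sketched in Python; note that the proportional gains 0.03 and 0.015 used below are back-computed from the quoted Ki values, since the Kp row of the table did not survive extraction.

```python
def parallel_gains(Kp, Ti, Td=0.0):
    """Convert (Kp, Ti, Td) settings to parallel-form gains (Kp, Ki, Kd)."""
    return Kp, Kp / Ti, Kp * Td

# Open-loop ZN PID row: Ti = 2L = 10, Td = L/2 = 2.5 (with L = 5)
Kp, Ki, Kd = parallel_gains(0.03, 10.0, 2.5)       # matches the quoted Ki = 0.003, Kd = 0.075

# 0%-overshoot PID row: Ti = T = 100, Td = 2.5 (with T = 100)
Kp2, Ki2, Kd2 = parallel_gains(0.015, 100.0, 2.5)  # matches the quoted Ki = 0.00015, Kd = 0.0375
```

That both rows reproduce the quoted Ki and Kd values suggests the table and the gains are mutually consistent.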
CHAPTER 5
CONTROLLERS
• ON-OFF CONTROLLER
• P CONTROLLER
• PI CONTROLLER
• PID CONTROLLER
• FUZZY CONTROLLER
• NEURO CONTROLLER
5.2. P CONTROLLER:
• P: Proportional control.
• The actuating signal (the input to the system being controlled) is proportional to the error.
5.4. PID CONTROLLER:
P (Proportional)
I (Integral)
D (Derivative)
These controllers have proven to be robust and extremely beneficial in the control of
many important applications.
• A PID controller operates on the error in a feedback system and does the
following:
1) The proportional, integral and derivative terms are each computed from the error.
2) The three terms - the P, I and D terms - are added together to produce a control
signal that is applied to the system being controlled.
• PID controllers are a family of controllers. PID controllers are sold in large
quantities and are often the solution of choice when a controller is needed to close
the loop. The reason PID controllers are so popular is that using PID gives the
designer a larger number of options, and those options mean that there are more
possibilities for changing the dynamics of the system in a way that helps the
designer. In particular, starting with a proportional controller and adding integral
and derivative terms to the control, the designer can take advantage of the
following effects.
• An integral controller gives zero steady-state error (SSE) for a step input.
• Compared with proportional control alone, the response becomes more oscillatory and needs a longer time to settle, but the steady-state error disappears.

The combined PID control signal is

$$ c(t) = K_p e(t) + K_i \int_0^t e(\tau)\,d\tau + K_d \frac{de(t)}{dt} $$
Parameter    Rise time       Overshoot    Settling time    Steady-state error
Kd           Small change    Decrease     Decrease         Small change
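A discrete-time version of the PID law above can be sketched as follows. This is Python for illustration; rectangular integration and a backward-difference derivative are assumed, which is the simplest possible discretization, not necessarily the one used in the Simulink blocks.

```python
class PID:
    """Discrete sketch of c(t) = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, Kp, Ki, Kd):
        self.Kp, self.Ki, self.Kd = Kp, Ki, Kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        # I term: rectangular accumulation of the error
        self.integral += error * dt
        # D term: backward difference (zero on the very first sample)
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # The three terms are added to produce one control signal
        return self.Kp * error + self.Ki * self.integral + self.Kd * derivative
```

Driving it with a constant error shows the integral term ramping the output upward sample by sample, which is exactly the mechanism that removes steady-state error.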
5.6. NEURAL NETWORK CONTROLLER:
(The terms "node values" and "activations" are used interchangeably here.) Weight values
are associated with each vector and node in the network, and these values constrain how
input data (e.g., satellite image values) are related to output data (e.g., land-cover
classes). Weight values associated with individual nodes are also known as biases.
Weight values are determined by the iterative flow of training data through the network
(i.e., weight values are established during a training phase in which the network learns
how to identify particular classes by their typical input data characteristics). A more
formal description of the foundations of multi-layer, feedforward, backpropagation
neural networks is given in Section 5.
Once trained, the neural network can be applied toward the classification of new data.
Classifications are performed by trained networks through 1) the activation of network
input nodes by relevant data sources [these data sources must directly match those used
in the training of the network], 2) the forward flow of this data through the network,
and 3) the ultimate activation of the output nodes. The pattern of activation of the
network’s output nodes determines the outcome of each pixel’s classification. Useful
summaries of fundamental neural network principles are given by Rumelhart et al.
(1986), McClelland and Rumelhart (1988), Rich and Knight (1991), Winston (1991),
Anzai (1992), Luger and Stubblefield (1993), Gallant (1993), and Richards and Jia
(2005). Parts of this chapter draw on these summaries. A brief historical account of
the development of connectionist theories is given in Gallant (1993).
The development of the perceptron was a large step toward the goal of creating
useful connectionist networks capable of learning complex relations between inputs and
outputs. In the late 1950s, the connectionist community understood that what was
needed for the further development of connectionist models was a mathematically
derived (and thus potentially more flexible and powerful) rule for learning. By the early
1960s, the Delta Rule [also known as the Widrow-Hoff learning rule or the least
mean square (LMS) rule] was invented (Widrow and Hoff, 1960). This rule is similar
to the perceptron learning rule above (McClelland and Rumelhart, 1988), but is also
characterized by a mathematical utility and elegance missing in the perceptron and other
early learning rules. The Delta Rule uses the difference between target activation (i.e.,
target output values) and obtained activation to drive learning. For reasons discussed
below, the use of a threshold activation function (as used in both the McCulloch-Pitts
network and the perceptron) is dropped; instead, a linear sum of products is used to
calculate the activation of the output neuron (alternative activation functions can also
be applied - see Section 5.2). Thus, the activation function in this case is called a linear
activation function, in which the output node’s activation is simply equal to the sum of
the network’s respective input/weight products. The strengths of network’s connections
(i.e., the values of the weights) are adjusted to reduce the difference between target and
actual output activation (i.e., error).
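As a minimal sketch of the Delta rule with a linear activation function (Python for illustration; the inputs, target and learning rate eta below are arbitrary illustrative values): the output is the linear sum of input/weight products, and each weight is nudged by the error times its own input.

```python
def linear_output(weights, inputs):
    # Linear activation: output is just the sum of input/weight products
    return sum(w * x for w, x in zip(weights, inputs))

def delta_rule_step(weights, inputs, target, eta=0.1):
    # Delta (Widrow-Hoff / LMS) rule: w_i <- w_i + eta * (target - output) * x_i
    error = target - linear_output(weights, inputs)
    return [w + eta * error * x for w, x in zip(weights, inputs)]

# Repeated updates shrink the difference between target and obtained activation
w = [0.0, 0.0]
for _ in range(50):
    w = delta_rule_step(w, [1.0, 2.0], target=1.0, eta=0.1)
```

For this single training pattern the error contracts geometrically (each update scales it by 1 − eta·‖x‖²), so after a few dozen steps the output essentially equals the target.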
In the network illustrated here, non-binary values may be used. Weights are identified by w's, and inputs are identified by i's. A simple linear sum of products is used as the activation function at the output node of the network.
The weighted sum at each node is passed through an activation function before being sent out from the node. Thus, we have the following:
$$ S_j = \sum_i w_{ij} a_i \qquad \text{(Eqn 3a)} $$

and

$$ a_j = f(S_j) \qquad \text{(Eqn 3b)} $$
where Sj is the sum of all relevant products of weights and outputs from the previous layer i, wij represents the relevant weights connecting layer i with layer j, ai represents the activations of the nodes in the previous layer i, aj is the activation of the node at hand, and f is the activation function.
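Eqn 3a and 3b amount to only a few lines of code. A sketch in Python, with tanh standing in for a generic activation function f:

```python
import math

def node_activation(weights, prev_activations, f=math.tanh):
    # Eqn 3a: Sj = sum over i of w_ij * a_i
    Sj = sum(w * a for w, a in zip(weights, prev_activations))
    # Eqn 3b: aj = f(Sj)
    return f(Sj)
```

A full forward pass is just this function applied to every node of a layer, layer by layer; passing `f=lambda s: s` gives the linear (purelin-style) output node discussed earlier.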
Fig.8. Schematic representation of an error function for a network containing only two
weights (w1 and w2) (after Luger and Stubblefield, 1993).
5.8. NETWORK TERMINOLOGY:
A multi-layer feedforward network consists of: 1) an input layer of nodes, 2) one or more hidden layers of nodes, and 3) an output layer of nodes (Figure 1). The output layer can consist of one or
more nodes, depending on the problem at hand. In most classification applications, there
will either be a single output node (the value of which will identify a predicted class),
or the same number of nodes in the output layer as there are classes (under this latter
scheme, the predicted class for a given set of input data will correspond to that class
associated with the output node with the highest activation). It is important to recognize
that the term “multi-layer” is often used to refer to multiple layers of weights. This
contrasts with the usual meaning of “layer”, which refers to a row of nodes (Vemuri,
1992). For clarity, it is often best to describe a particular network by its number of
layers, and the number of nodes in each layer (e.g., a "4-3-5" network has an input layer
with 4 nodes, a hidden layer with 3 nodes, and an output layer with 5 nodes).
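Under this naming convention, the number of trainable values follows directly from the layer sizes: a "4-3-5" network, for example, has (4·3 + 3) + (3·5 + 5) = 35 weights and biases. A small illustrative helper in Python:

```python
def parameter_count(layer_sizes):
    """Weights plus biases of a fully connected feedforward network,
    given the node counts per layer (input, hidden..., output)."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out   # weight matrix plus one bias per node
    return total
```

The same count applied to a 1-10-1 network, such as the one generated in the Appendix, gives 31 values: 10 input weights, 10 hidden biases, 10 output weights and 1 output bias.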
A consensus rule can be applied: for a given pixel, the class label with the largest number of network
“votes” is that which is assigned (that is, the results of the individual neural-network
executions are combined through a simple majority vote) (Hansen and Salamon,
1990). The reasoning behind such a consensus rule is that a consensus of numerous
neural networks should be less fallible than any of the individual networks, with each
network generating results with different error attributes as a consequence of differing
weight initializations (Hansen and Salamon, 1990). Of interest in the neural network
community is the use of consensus algorithms to generate final results that are
superior to any individual neural network classification.
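The consensus rule above is a plain majority vote across independently trained networks. A sketch in Python (the class labels are illustrative):

```python
from collections import Counter

def consensus_label(votes):
    """Majority-vote consensus: the class with the most network 'votes' wins.
    Ties are broken by first occurrence."""
    return Counter(votes).most_common(1)[0][0]

# Five independently initialized networks classify one pixel:
label = consensus_label(["water", "forest", "water", "urban", "water"])
```

Because each network's errors stem from its own random weight initialization, the vote tends to cancel uncorrelated mistakes, which is the intuition behind Hansen and Salamon's argument.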
Neural networks are widely used to obtain an optimal solution and minimize errors. The model is used to predict the outputs for a combination of inputs. Main steam temperature control using a neural network involves the collection of data, creation of the neural network, and training of the network.
The input parameters used for training the neural network as described earlier
are Qgfsh, Wsifsh, Pdr and Wsprayfsh. The neural network is trained for a set of inputs
using “nntool” toolbox.
Fig.9. Neural network.
70% of the data are used for training, 15% for validation and 15% for testing. The neural network contains input, hidden and output layers. The number of neurons in the hidden layer is 10. The hidden layer has a sigmoidal activation function and the output layer has a purelin (linear) function. The Levenberg-Marquardt training algorithm is adopted for training. The trained network is evaluated using the test data.
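The 70/15/15 random division can be sketched as follows. This is Python for illustration; MATLAB's 'dividerand' performs the equivalent job inside nntool, and the seed below is an arbitrary choice for reproducibility.

```python
import random

def split_indices(n, train=0.70, val=0.15, seed=0):
    """Randomly split n sample indices into train/validation/test sets
    using the 70/15/15 ratios described above."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # reproducible shuffle
    n_train = int(round(train * n))
    n_val = int(round(val * n))
    return (idx[:n_train],                    # training set
            idx[n_train:n_train + n_val],     # validation set
            idx[n_train + n_val:])            # test set
```

The three slices are disjoint and together cover every sample, so no data point is used both for training and for the final evaluation.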
CHAPTER 6
SIMULINK MODEL OF CRUISE CONTROL SYSTEM
6.1. P CONTROLLER:
6.2. PI CONTROLLER:
6.3. PID CONTROLLER:
6.4. NEURO CONTROLLER:
Fig.14. NN Tool Performance plot
Fig.16. NN Tool Regression plot
CHAPTER 7
SIMULATION RESULTS
7.1. P CONTROLLER:
7.2. PI CONTROLLER:
7.3. PID CONTROLLER:
7.4. NEURAL NETWORK CONTROLLER:
7.5. PERFORMANCE PARAMETERS:
Performance Parameter     TUNED                      BLOCK
Kp                        0.0728                     0.43807
Ki                        0.0076886                  0.025043
Kd                        -0.014523                  -0.030757
Rise Time (sec)           1.25                       0.206
Settling Time (sec)       21.1                       NaN
Overshoot (%)             21.1                       7.6
Peak                      1.21                       1.08
Gain Margin               -17.8 dB @ 0.129 rad/s     -33.5 dB @ 0.094 rad/s
Phase Margin              69 deg @ 1.08 rad/s        63 deg @ 6.53 rad/s
Closed-loop stability     Stable                     Stable
7.6. dSPACE OUTPUT:
CHAPTER 8
CONCLUSION
As the tuned and block results do not vary much, the tuned results are taken into account. The gains of the PI and PID controllers are low. When the gain of these controllers is increased to achieve the desired response, the system's response becomes faster but also oscillatory. In order to anticipate the future error, a derivative term is introduced, which stabilizes the system. The peak overshoot of the PID controller is higher than that of the P and PI controllers; the higher the overshoot, the lower the stability. The settling time of the P controller is lower than that of the PI and PID controllers, and the rise time of the P controller is also lower compared to PI and PID. Even so, among the P, PI and PID controllers, PID gives the best overall response. Comparing the P controller with the neuro controller, the P controller settles before it reaches the target, whereas the neuro controller has an accurate settling point and achieves the target. So the neuro controller is judged to be the best. The Simulink model was generated into code and fed into dSPACE. From the image (Fig.21.), it is clear that the set speed (30 km/h) is maintained by dSPACE.
CHAPTER 9
FUTURE SCOPE
Recently, software tools for real-time control have become available. Using the dSPACE software tools it is possible to output values while the simulation program is running, and also to add signals obtained from external sensors. This scheme is known as "hardware in the loop" simulation. In real time, by eliminating the ECU, a dSPACE implementation will be cost-efficient.
CHAPTER 10
REFERENCES
1. Jaswandi Sawant and Divyesh Ginoya, "dSPACE DSP DS-1104 based State Observer Design for Position Control of DC Servo Motor", dSPACE User Conference 2010 - India, Sept. 24, 2010, Department of Instrumentation and Control, College of Engineering, Pune.
2. http://www.webpages.ttu.edu/dleverin/neural_network/neural_networks.html
3. MATLAB/Simulink User's Guide, The MathWorks Inc., Natick, MA, 1998.
4. K. Ogata, Modern Control Engineering, Prentice Hall, 2002.
5. U. Manwong, S. Boonpiyathud and S. Tunyasrirut, "Implementation of a dSPACE DSP-Based State Feedback with State Observer Using MATLAB/Simulink for a Speed Control of DC Motor System", International Conference on Control, Automation and Systems 2008, Oct. 14-17, 2008, COEX, Seoul, Korea.
6. D. G. Luenberger, "An Introduction to Observers", IEEE Transactions on Automatic Control, Vol. AC-16, No. 6, December 1971.
7. H. Temeltas and G. Aktas, "Friction State Observation in Positioning Systems using Luenberger Observer", Proceedings of the 2003 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2003).
8. S. Yurkovich, D. J. Clancy and J. K. Hurtig, Control Systems Laboratory, Simon & Schuster Custom Publishing, 1998.
9. K. Ogata, Modern Control Engineering, 3rd Ed., Prentice Hall, 1997.
10. G. F. Franklin, D. J. Powell and M. L. Workman, Digital Control of Dynamic Systems, 3rd Ed., Addison-Wesley, 1997.
11. Anzai, Y., 1992. Pattern Recognition and Machine Learning. Academic Press, Boston.
12. Bishop, C.M., 1995a. Neural Networks for Pattern Recognition. Oxford University Press, New York.
13. Bishop, C.M., 1995b. "Training with noise is equivalent to Tikhonov regularization", Neural Computation, 7: 108-116.
14. Gallant, S.I., 1993. Neural Network Learning and Expert Systems. MIT Press, Cambridge.
15. Sunan Huang and Wei Ren, "Use of Neural Fuzzy Networks with Mixed Genetic/Gradient Algorithm in Automated Vehicle Control", IEEE.
16. P. Varaiya and S. E. Shladover, "Sketch of an IVHS systems architecture", PATH, Berkeley, CA, Res. Rep. UCB-ITS-PRR-91-3, 1991.
17. P. Varaiya, "Smart cars on smart roads: problems for control", IEEE Trans. Automat. Contr., vol. 38, pp. 195-207, Feb. 1993.
18. S. N. Huang and W. Ren, "Design of vehicle following control systems with actuator delays", Int. J. Syst. Sci., vol. 28, no. 2, pp. 145-151, 1997.
19. P. Ioannou, Z. Xu et al., "Intelligent cruise control: Theory and experiment", in Proc. 32nd IEEE Conf. Decision and Control, San Antonio, TX, 1993, pp. 1885-1890.
CHAPTER 11
APPENDIX
x = Speed;
t = Speed1;

% Create a fitting network with 10 hidden neurons, trained with
% Levenberg-Marquardt (as described in Section 5.6)
trainFcn = 'trainlm';
hiddenLayerSize = 10;
net = fitnet(hiddenLayerSize, trainFcn);

% Setup Division of Data for Training, Validation, Testing
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

% Train the Network
[net, tr] = train(net, x, t);

% Test the Network
y = net(x);
e = gsubtract(t, y);

% Plots
% Uncomment these lines to enable various plots.
%figure, plotperform(tr)
%figure, plottrainstate(tr)
%figure, plotfit(net,x,t)
%figure, plotregression(t,y)
%figure, ploterrhist(e)
ADVANCED SCRIPT OF NEURAL NETWORK TOOL:
x = Speed;
t = Speed1;
% Setup Division of Data for Training, Validation, Testing
% For a list of all data division functions type: help nndivide
net.divideFcn = 'dividerand'; % Divide data randomly
net.divideMode = 'sample'; % Divide up every sample
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% Recalculate Training, Validation and Test Performance
trainTargets = t .* tr.trainMask{1};
valTargets = t .* tr.valMask{1};
testTargets = t .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,y)
valPerformance = perform(net,valTargets,y)
testPerformance = perform(net,testTargets,y)
% Plots
% Uncomment these lines to enable various plots.
%figure, plotperform(tr)
%figure, plottrainstate(tr)
%figure, plotfit(net,x,t)
%figure, plotregression(t,y)
%figure, ploterrhist(e)
% Deployment
% Change the (false) values to (true) to enable the following code blocks.
if (false)
% Generate MATLAB function for neural network for application deployment
% in MATLAB scripts or with MATLAB Compiler and Builder tools, or simply
% to examine the calculations your trained neural network performs.
genFunction(net,'myNeuralNetworkFunction');
y = myNeuralNetworkFunction(x);
end
if (false)
% Generate a matrix-only MATLAB function for neural network code
% generation with MATLAB Coder tools.
genFunction(net,'myNeuralNetworkFunction','MatrixOnly','yes');
y = myNeuralNetworkFunction(x);
end
if (false)
% Generate a Simulink diagram for simulation or deployment with
% Simulink Coder tools.
gensim(net);
end
%#ok<*RPMT0>
% Input 1
x1_step1_xoffset = 20;
x1_step1_gain = 0.02;
x1_step1_ymin = -1;
% Layer 1
b1 = [14; 10.888888888888889; -7.7777777777777786; 4.666666666666667; ...
      -1.5555555555555562; -1.5555555555555562; 4.6666666666666661; ...
      -7.7777777777777786; -10.888888888888888; -14];
IW1_1 = [-13.999999999999998; -14; 14; -14; 14; -14; 13.999999999999998; ...
         -14; -14; -14];
% Layer 2
b2 = -0.23910830604928668;
LW2_1 = [0.23208935229327832 -0.053422302194541471 -0.29668098587400649 ...
         0.66165725579258172 0.17052818230544853 0.099447216582279063 ...
         0.83438732765962009 -0.42832196235925291 0.51440045822144254 ...
         0.50745818855699065];
% Output 1
y1_step1_ymin = -1;
y1_step1_gain = 0.1;
y1_step1_xoffset = 20;
% Dimensions
TS = size(X,2); % timesteps
if ~isempty(X)
Q = size(X{1},2); % samples/series
else
Q = 0;
end
% Allocate Outputs
Y = cell(1,TS);
% Time loop
for ts=1:TS
% Input 1
Xp1 = mapminmax_apply(X{1,ts},x1_step1_gain,x1_step1_xoffset,x1_step1_ymin);
% Layer 1
a1 = tansig_apply(repmat(b1,1,Q) + IW1_1*Xp1);
% Layer 2
a2 = repmat(b2,1,Q) + LW2_1*a1;
% Output 1
Y{1,ts} = mapminmax_reverse(a2,y1_step1_gain,y1_step1_xoffset,y1_step1_ymin);
end
% Format Output Arguments
if ~isCellX, Y = cell2mat(Y); end
end
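For reference, the generated 1-10-1 network above ports almost line-for-line to Python: mapminmax normalizes values into [-1, 1], tansig is tanh, and the output layer is linear. The constants below are copied from the MATLAB listing; a single scalar input is assumed, so the time loop is dropped.

```python
import math

# Constants copied from the generated MATLAB listing (1 input, 10 hidden, 1 output)
x_gain, x_off, x_ymin = 0.02, 20.0, -1.0   # input mapminmax settings
y_gain, y_off, y_ymin = 0.1, 20.0, -1.0    # output mapminmax settings
b1 = [14.0, 10.888888888888889, -7.7777777777777786, 4.666666666666667,
      -1.5555555555555562, -1.5555555555555562, 4.6666666666666661,
      -7.7777777777777786, -10.888888888888888, -14.0]
IW1_1 = [-13.999999999999998, -14.0, 14.0, -14.0, 14.0, -14.0,
         13.999999999999998, -14.0, -14.0, -14.0]
b2 = -0.23910830604928668
LW2_1 = [0.23208935229327832, -0.053422302194541471, -0.29668098587400649,
         0.66165725579258172, 0.17052818230544853, 0.099447216582279063,
         0.83438732765962009, -0.42832196235925291, 0.51440045822144254,
         0.50745818855699065]

def mapminmax_apply(x, gain, xoffset, ymin):
    # Normalize a raw value into the [-1, 1] training range
    return (x - xoffset) * gain + ymin

def mapminmax_reverse(y, gain, xoffset, ymin):
    # Invert the normalization back to engineering units
    return (y - ymin) / gain + xoffset

def net(x):
    xp = mapminmax_apply(x, x_gain, x_off, x_ymin)             # Input 1
    a1 = [math.tanh(b + w * xp) for b, w in zip(b1, IW1_1)]    # Layer 1 (tansig)
    a2 = b2 + sum(w * a for w, a in zip(LW2_1, a1))            # Layer 2 (purelin)
    return mapminmax_reverse(a2, y_gain, y_off, y_ymin)        # Output 1
```

Note how the apply/reverse pair is exactly inverse: with offset 20, gain 0.02 and ymin -1, an input of 70 maps to 0 in the normalized range and back to 70 on the reverse path.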