
CHAPTER 1

INTRODUCTION

1.1. CRUISE CONTROL SYSTEM:

Cruise control (sometimes known as speed control, auto cruise, or tempomat in some countries) is a system that automatically controls the speed of a motor vehicle. The system is a servomechanism that takes over the throttle of the car to maintain a steady speed as set by the driver.

Speed control was used in automobiles as early as 1900 in the Wilson-Pilcher and
also in the 1910s by Peerless. Peerless advertised that their system would "maintain
speed whether up hill or down".

The underlying technology was employed by James Watt and Matthew Boulton in 1788 to control steam engines, but the use of governors dates back at least to the 17th century. On an engine, the governor adjusts the throttle position as the speed of the engine changes with different loads, so as to maintain a near-constant speed.

Modern cruise control (also known as a speedostat or tempomat) was invented in 1948 by the inventor and mechanical engineer Ralph Teetor. His idea was born out of the frustration of riding in a car driven by his lawyer, who kept speeding up and slowing down as he talked.

The first car with Teetor's system was the 1958 Imperial (where it was called "Auto-pilot"), which used a speed dial on the dashboard. This system calculated ground speed from driveshaft rotations via the rotating speedometer cable, and used a bi-directional screw-drive electric motor to vary the throttle position as needed.

Fig. 1. Cruise control

1.2. OPERATION:

The driver must bring the vehicle up to speed manually and use a button to set
the cruise control to the current speed.

The cruise control takes its speed signal from a rotating driveshaft, a speedometer cable, a wheel speed sensor, the engine's RPM, or from internal speed pulses produced electronically by the vehicle. Most systems do not allow the use of cruise control below a certain speed, typically around 25 mph (40 km/h). The vehicle maintains the desired speed by pulling the throttle cable with a solenoid, by a vacuum-driven servomechanism, or, if the vehicle uses a 'drive-by-wire' system, through the electronic systems built into the vehicle (fully electronic).

All cruise control systems must be capable of being turned off both explicitly and
automatically when the driver depresses the brake, and often also the clutch. Cruise
control often includes a memory feature to resume the set speed after braking, and a
coast feature to reduce the set speed without braking.

When the cruise control is engaged, the throttle can still be used to accelerate
the car, but once the pedal is released the car will then slow down until it reaches the
previously set speed.

On the latest vehicles fitted with electronic throttle control, cruise control can be
easily integrated into the vehicle's engine management system. Modern "adaptive"
systems (see below) include the ability to automatically reduce speed when the distance
to a car in front, or the speed limit, decreases. This is an advantage for those driving in
unfamiliar areas.

The cruise control systems of some vehicles incorporate a "speed limiter" function, which will not allow the vehicle to accelerate beyond a pre-set maximum; this can usually be overridden by fully depressing the accelerator pedal. (Most systems will prevent the vehicle accelerating beyond the chosen speed, but will not apply the brakes in the event of overspeeding downhill.)

On vehicles with a manual transmission, cruise control is less flexible because the act of depressing the clutch pedal and shifting gears usually disengages the cruise control. The "resume" feature has to be used each time after selecting the new gear and releasing the clutch. Therefore, cruise control is of most benefit at motorway/highway speeds, when top gear is used virtually all the time.

[Block diagram: the set point is compared with the output speed and fed through one of four controllers (P, PI, PID, or neuro) to the plant transfer function, which produces the output speed.]
Fig 2. Block Diagram of Cruise Control

1.3. ADVANTAGES AND DISADVANTAGES:

Advantages of cruise control include:

 Its usefulness on long drives across highways and sparsely populated roads: it reduces driver fatigue and improves comfort by allowing the driver to change position more safely.

 Some drivers use it to avoid subconsciously violating speed limits: a driver who otherwise tends to creep above the limit over the course of a highway journey may thereby avoid speeding.

However, when used incorrectly cruise control can lead to accidents due to several
factors, such as:

 speeding around curves that require slowing down

 rough or loose terrain that could negatively affect cruise control operation

 rainy or wet weather, in which the vehicle could lose traction

CHAPTER 2

LITERATURE SURVEY

2.1. CRUISE CONTROL:

 Allows the driver to set a speed to be maintained without his/her intervention (e.g. 70 mph down a long, straight motorway)
 No need to keep the accelerator pressed (less driver fatigue)
2.2. SPECIFICATION:

 Pin down some requirements:
 The driver can request the system to maintain the current speed
 The driver can always turn it off
 The system should not operate after braking
 The system should allow the driver to travel faster than the set speed

Things that need to be specified:
 specify the inputs
 specify the outputs
 decide on the required states (and a start state)
 specify the transitions

The inputs (events) are:
 on: on/off button
 set: set the cruise speed to the current speed
 brake: the brake has been pressed
 accP: the accelerator has been pressed
 accR: the accelerator has been released
 resume: resume travelling at the set speed
 store: store the current speed as the cruise speed
 inc: increase the throttle
 dec: decrease the throttle
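The event list above can be sketched as a small state machine. The state names used here ("off", "inactive", "cruising", "standby", "override") are assumptions, since the report specifies only the inputs and the required behaviour:

```python
# Minimal cruise-control state-machine sketch based on the listed events.
# State names are illustrative; the report does not name its states.
class CruiseControl:
    def __init__(self):
        self.state = "off"
        self.cruise_speed = None

    def event(self, name, current_speed=None):
        if name == "on":                        # on/off button toggles the system
            self.state = "off" if self.state != "off" else "inactive"
        elif self.state == "off":
            return                              # nothing else works while off
        elif name == "set":
            self.cruise_speed = current_speed   # store current speed as cruise speed
            self.state = "cruising"
        elif name == "brake":
            self.state = "standby"              # must not operate after braking
        elif name == "accP" and self.state == "cruising":
            self.state = "override"             # driver may exceed the set speed
        elif name == "accR" and self.state == "override":
            self.state = "cruising"             # slow back down to the set speed
        elif name == "resume" and self.state == "standby" \
                and self.cruise_speed is not None:
            self.state = "cruising"             # resume travelling at set speed

cc = CruiseControl()
cc.event("on")
cc.event("set", current_speed=70)   # cruise at 70 mph
cc.event("brake")                   # cruising -> standby
cc.event("resume")                  # standby -> cruising
```

Note that releasing the accelerator drops back to the set speed automatically, whereas braking requires an explicit "resume", matching the requirements above.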

This journal article was written by Murray Cole.

Simple Cruise-Control:
The car model that we will consider in this chapter is made up of:
 An accurate positioning motor, which accepts a low-power (almost no current) voltage signal and moves the accelerator pedal in proportion to the voltage signal.
 The engine, which produces a torque that is related to the position of the accelerator pedal.
 The drivetrain of the car, which transmits this torque to the ground through the driven wheels. We will assume that the car is also subjected to air and rolling resistance.
 A vehicle, which is accelerated by the forces which act on it.
 Changes in the slope of the highway, which act as a disturbance force on the car.
 A speed measurement for feedback purposes. We will assume for now that we get 1 volt for every mile/hour.

2.3. dSPACE:

Recently, software tools for real-time control have become available. Using these tools it is possible to output values while the simulation program is running, and also to add signals obtained from external sensors. This scheme is known as "hardware in the loop" simulation. Control and supervisory strategies are designed graphically in the Simulink block diagram environment. Control algorithms are then downloaded to a real-time prototyping system instead of designing specific hardware. However, a complete and integrated environment is required to support a designer throughout the development of a control system, from the initial design phase to the final steps of code generation. In response, several rapid control prototyping modules have been proposed using MATLAB/Simulink. A controller board such as the dSPACE DS1104 is appropriate for motion control and is fully programmable from the MATLAB/Simulink environment. dSPACE uses its own real-time interface software to generate and then download the real-time code to specific dSPACE boards. It enables the user to design a digital controller simply by drawing its block diagram in the graphical interface of Simulink.

In this work, the model of the plant and the control algorithm are developed using the MATLAB/Simulink module. The code for the dSPACE board is generated using the Real-Time Workshop toolbox. Real-Time Workshop produces code directly from Simulink models and automatically builds programs that can be run in a variety of environments, including real-time systems and stand-alone simulations. After the software is downloaded to the real-time platform, the data and system parameters can be observed and modified using ControlDesk. The software allows the user to create graphical user interfaces using predefined objects such as plots, buttons, sliders, and labels. The main features of this environment are:

1) Controller code can be generated automatically for hardware implementation;

2) Different languages can be used to describe different parts of the system;

3) Simulink block diagrams can be used to define the control structure;

4) Controller parameters can be tuned online while experiments are in progress, without having to rebuild and download a new Simulink model to the DSP board; and

5) Ease of operation especially by means of a simple graphical user interface.

dSPACE DS 1104 R&D CONTROLLER BOARD:

The DS1104 R&D Controller Board is a standard board that can be plugged into a PCI slot of a PC. The DS1104 is specifically designed for the development of high-speed multivariable digital controllers and real-time simulations in various fields. It is a complete real-time control system based on a 603 PowerPC floating-point processor running at 250 MHz. For advanced I/O purposes, the board includes a slave-DSP subsystem based on the TMS320F240 DSP microcontroller. For rapid control prototyping, the DAC and encoder interface modules of the connector panel provide easy access to all input and output signals of the board.

The control program is written in the Simulink environment combined with the real-time interface of the DS1104 board. The software used in the laboratory experiment is based mainly on MATLAB/Simulink programs. The control law is designed in Simulink and executed in real time using the dSPACE DS1104 DSP board. Once the controller has been built from Simulink blocks, machine code is generated that runs on the DS1104's TMS320F240 DSP processor.

While the experiment is running, the dSPACE DS1104 provides a mechanism that allows the user to change controller parameters online. Thus, it is possible for the user to view the real process while the experiment is in progress.

ControlDesk developer version 3.5, dSPACE's experiment software, provides all the functions to control, monitor, and automate experiments, and makes the development of controllers more effective. Its graphical user interfaces allow the user to control the real-time simulation manually.

Many well-structured layouts enable the user to gain full control over the system.
ControlDesk allows users to generate convenient, graphical user interfaces (layouts)
with a great variety of control elements, from simple GUI elements (push-buttons,
displays, radio buttons, etc.) to complex plotters and photorealistic graphics. This
virtual environment allows the user not only to control or monitor any control algorithm parameter during the real-time simulation, but also to access any I/O signal connected to the hardware components.

The following advantages of using the DS1104 controller board for implementation were observed:

 Less time required for implementation of different control algorithms.
 Reconfigurability.
 Fewer external passive components.
 Less sensitivity to temperature variation.
 High efficiency.
 Higher reliability and flexibility.

CHAPTER 3

METHODOLOGY

3.1. EXPRESSIONS FOR CRUISE CONTROL SYSTEM:

To develop a mathematical model, the force balance is the core factor to consider.

Let m be the total mass of the vehicle and v the speed of the car. F is the force generated where the wheels contact the road surface, and Fd is the disturbance force due to friction, gravity, and aerodynamic drag.

The equation of motion of the vehicle is

m dv/dt = F − Fd

The torque at full throttle is given by

T(ω) = Tm (1 − β (ω/ωm − 1)²)

Let r be taken as wheel radius

and n be taken as gear ratio.

From the expression, the engine speed is related to the velocity.

ω = (n/r) v =: αn v

The driving force can be written as

F = (n u / r) T(ω) = αn u T(αn v)

The disturbance force Fd has three main components:

Fr , the forces due to rolling friction.

Fg , the forces due to gravity.

Fa , the forces due to aerodynamic drag.

θ , slope of the road.

Fg = m g sin θ

g = 9.8 m/s2

Fr = m g Cr sgn(v)

Cr , the coefficient of rolling friction

Fa = (1/2) ρ Cd A v²

ρ, the density of air.

Cd , the shape dependent aerodynamic drag coefficient.

A , the frontal area of the car.
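The force balance above can be checked numerically with a simple Euler integration. All parameter values below are illustrative assumptions, not values taken from the report:

```python
import math

# Illustrative parameters (assumptions, not from the report)
m = 1500.0      # vehicle mass, kg
g = 9.8         # gravitational acceleration, m/s^2
Cr = 0.01       # rolling friction coefficient
rho = 1.2       # air density, kg/m^3
Cd = 0.32       # aerodynamic drag coefficient
A = 2.4         # frontal area, m^2
Tm = 190.0      # maximum engine torque, N*m
wm = 420.0      # engine speed at maximum torque, rad/s
beta = 0.4      # torque curve shape factor
alpha_n = 16.0  # alpha_n = n / r for the current gear

def engine_torque(w):
    """Full-throttle torque curve T(w) = Tm (1 - beta (w/wm - 1)^2)."""
    return Tm * (1.0 - beta * (w / wm - 1.0) ** 2)

def dvdt(v, u, theta):
    """Acceleration from the force balance m dv/dt = F - Fd."""
    w = alpha_n * v                        # engine speed from vehicle speed
    F = alpha_n * u * engine_torque(w)     # driving force, throttle u in [0, 1]
    Fr = m * g * Cr * math.copysign(1.0, v) if v != 0 else 0.0
    Fg = m * g * math.sin(theta)           # gravity component along the slope
    Fa = 0.5 * rho * Cd * A * v ** 2       # aerodynamic drag
    return (F - (Fr + Fg + Fa)) / m

# Euler integration: full throttle on a level road, starting from 20 m/s
v, dt = 20.0, 0.1
for _ in range(100):
    v += dt * dvdt(v, u=1.0, theta=0.0)
```

At full throttle on a level road the driving force exceeds the disturbance forces at moderate speed, so the simulated vehicle accelerates, as expected from the model.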

Fig.3. Simulink model of cruise control system

CHAPTER 4

CALCULATIONS

4.1. OPEN LOOP SYSTEM:

 It is also referred to as a non-feedback system.

 The output has no influence on the control action applied to the input.

Fig.4. Open loop system

4.2. ZIEGLER-NICHOLS TUNING METHOD:

Fig.5. Open loop response curve

Parameter   P             PI               PID

Kp          1/a = 0.025   0.9/a = 0.0225   1.2/a = 0.03

Ti          -             3L = 15          2L = 10

Td          -             -                L/2 = 2.5

Tab.1. Controller parameters of P, PI, PID controllers using the ZN open-loop response method

where a = 40 and L = 5

Ki = Kp/Ti and Kd = Kp*Td

PI, Ki = 0.0015

PID, Ki = 0.003

PID, Kd = 0.075.
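These gains can be reproduced with a few lines of arithmetic, using the values a = 40 and L = 5 stated above:

```python
# Ziegler-Nichols open-loop tuning from the process parameters a and L
a, L = 40.0, 5.0

# PI controller
Kp_pi = 0.9 / a            # proportional gain
Ti_pi = 3.0 * L            # integral time
Ki_pi = Kp_pi / Ti_pi      # integral gain, Ki = Kp / Ti  -> 0.0015

# PID controller
Kp_pid = 1.2 / a           # proportional gain
Ti_pid = 2.0 * L           # integral time
Td_pid = L / 2.0           # derivative time
Ki_pid = Kp_pid / Ti_pid   # 0.003
Kd_pid = Kp_pid * Td_pid   # Kd = Kp * Td -> 0.075
```

The computed Ki and Kd values match those listed above, confirming the arithmetic of the ZN open-loop rules.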

4.3. CHIEN-HRONES-RESWICK (CHR) METHOD:

0% Overshoot in SP

Parameter   P                PI                 PID

Kp          0.3/a = 0.0075   0.35/a = 0.00875   0.6/a = 0.015

Ti          -                1.2T = 120         T = 100

Td          -                -                  L/2 = 2.5

Tab.2. Controller parameters of P,PI, PID controllers using Chein-Hrones and


Reswick (CHR) method

Ki = Kp/Ti and Kd= Kp*Td

PI, Ki = 0.0000729

PID, Ki = 0.00015

PID, Kd = 0.0375.
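As a quick check of the CHR arithmetic (using the same a = 40, T = 100, L = 5 implied by the table): note that Kp/Ti = 0.00875/120 evaluates to about 7.29e-5.

```python
# Chien-Hrones-Reswick (0% overshoot, setpoint response) gains
a, T, L = 40.0, 100.0, 5.0

Kp_pi = 0.35 / a           # 0.00875
Ti_pi = 1.2 * T            # 120
Ki_pi = Kp_pi / Ti_pi      # ~7.29e-5

Kp_pid = 0.6 / a           # 0.015
Ti_pid = T                 # 100
Td_pid = L / 2.0           # 2.5
Ki_pid = Kp_pid / Ti_pid   # 0.00015
Kd_pid = Kp_pid * Td_pid   # 0.0375
```

The PID values agree with the figures above; the PI integral gain comes out near 7.29e-5.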

CHAPTER 5

CONTROLLERS

5.1. TYPES OF CONTROLLER:

• ON-OFF CONTROLLER

• P CONTROLLER

• PI CONTROLLER

• PID CONTROLLER

• FUZZY CONTROLLER

• NEURO CONTROLLER
5.2. P CONTROLLER:

• P- Proportional control

• Amplifies the error signal

• Proportional control has a tendency to make a system faster

5.3. INTEGRAL CONTROLLER:

• The controller integrates the error

• The integral controller has a transfer function of Ki/s

• So, the actuating signal (the input to the system being controlled) is proportional
to the integral of the error.

• With an integral controller, the steady-state error is driven to zero, provided the closed-loop system is stable.

• Integral control has a tendency to make a system slower.

5.4. PID CONTROLLER:

PID stands for P (Proportional), I (Integral), D (Derivative).

These controllers have proven to be robust and extremely beneficial in the control of
many important applications.

• A PID controller operates on the error in a feedback system and does the following:

• It calculates a term proportional to the error (the P term).

• It calculates a term proportional to the integral of the error (the I term).

• It calculates a term proportional to the derivative of the error (the D term).

• The three terms (P, I, and D) are added together to produce a control signal that is applied to the system being controlled.
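The three terms can be sketched as a minimal discrete-time PID loop. The class below is an illustrative implementation, not the report's Simulink block, and the sample time dt is an assumption; the example gains are the ZN PID values from Chapter 4:

```python
# Minimal discrete-time PID controller sketch (illustrative).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                   # I term accumulator
        derivative = (error - self.prev_error) / self.dt   # D term estimate
        self.prev_error = error
        return (self.kp * error            # P term
                + self.ki * self.integral  # I term
                + self.kd * derivative)    # D term

# ZN-tuned PID gains from Chapter 4; setpoint 30 matches the dSPACE test speed
pid = PID(kp=0.03, ki=0.003, kd=0.075, dt=0.1)
u = pid.update(setpoint=30.0, measurement=25.0)
```

Note that a practical implementation would also limit the integral term (anti-windup) when the throttle actuator saturates; this sketch omits that for brevity.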

5.5. NEED FOR PID CONTROLLERS:

• PID controllers are a family of controllers. They are sold in large quantities and are often the solution of choice when a controller is needed to close the loop. The reason PID controllers are so popular is that PID gives the designer a larger number of options, and those options mean there are more possibilities for changing the dynamics of the system in a way that helps the designer. In particular, starting with a proportional controller and adding integral and derivative terms to the controller, the designer can take advantage of the following effects:

• An integral controller gives zero SSE for a step input.

• A derivative control term often produces a faster response.

Proportional Control

• Employing proportional control alone leaves a steady-state error; the controller merely amplifies the error signal.

Proportional and Integral Control

• The response becomes more oscillatory and needs longer to settle, but the steady-state error disappears.

Proportional, Integral and Derivative Control

• All design specifications can be reached.

PID(s) = Kp + Ki/s + Kd s

c(t) = Kp e(t) + Ki ∫ e(t) dt + Kd de(t)/dt

THE CHARACTERISTICS OF P, PI, PID CONTROLLERS:

CL RESPONSE   RISE TIME      OVERSHOOT   SETTLING TIME   S-S ERROR

Kp            Decrease       Increase    Small Change    Decrease

Ki            Decrease       Increase    Increase        Eliminate

Kd            Small Change   Decrease    Decrease        Small Change

Tab.3. The characteristic of P, PI, PID Controllers

5.6. NEURAL NETWORK CONTROLLER:

Neural networks are members of a family of computational architectures inspired by biological brains (e.g., McClelland et al., 1986; Luger and Stubblefield, 1993). Such architectures are commonly called "connectionist systems", and are composed of interconnected and interacting components called nodes or neurons (these terms are generally considered synonyms in connectionist terminology, and are used interchangeably here). Neural networks are characterized by a lack of explicit representation of knowledge; there are no symbols or values that directly correspond to classes of interest. Rather, knowledge is implicitly represented in the patterns of interactions between network components (Luger and Stubblefield, 1993). A graphical depiction of a typical feedforward neural network is given in Fig.6. The term "feedforward" indicates that the network has links that extend in only one direction. Except during training, there are no backward links in a feedforward network; all links proceed from input nodes toward output nodes.

Fig.6. A typical feedforward neural network.

Individual nodes in a neural network emulate biological neurons by taking input data and performing simple operations on the data, selectively passing the results on to other neurons (Figure 2). The output of each node is called its "activation" (the terms "node values" and "activations" are used interchangeably here). Weight values are associated with each vector and node in the network, and these values constrain how input data (e.g., satellite image values) are related to output data (e.g., land-cover classes). Weight values associated with individual nodes are also known as biases. Weight values are determined by the iterative flow of training data through the network (i.e., weight values are established during a training phase in which the network learns how to identify particular classes by their typical input data characteristics). A more formal description of the foundations of multi-layer, feedforward, backpropagation neural networks is given in Section 5.
Once trained, the neural network can be applied to the classification of new data. Classifications are performed by trained networks through 1) the activation of network input nodes by relevant data sources (these data sources must directly match those used in the training of the network), 2) the forward flow of this data through the network, and 3) the ultimate activation of the output nodes. The pattern of activation of the network's output nodes determines the outcome of each pixel's classification. Useful summaries of fundamental neural network principles are given by Rumelhart et al. (1986), McClelland and Rumelhart (1988), Rich and Knight (1991), Winston (1991), Anzai (1992), Luger and Stubblefield (1993), Gallant (1993), and Richards and Jia (2005). Parts of this section draw on these summaries. A brief historical account of the development of connectionist theories is given in Gallant (1993).

5.7. THE DELTA RULE:

The development of the perceptron was a large step toward the goal of creating useful connectionist networks capable of learning complex relations between inputs and outputs. In the late 1950s, the connectionist community understood that what was needed for further development of connectionist models was a mathematically derived (and thus potentially more flexible and powerful) rule for learning. By the early 1960s, the Delta Rule [also known as the Widrow and Hoff learning rule or the least mean square (LMS) rule] was invented (Widrow and Hoff, 1960). This rule is similar to the perceptron learning rule above (McClelland and Rumelhart, 1988), but is also characterized by a mathematical utility and elegance missing in the perceptron and other early learning rules. The Delta Rule uses the difference between target activation (i.e., target output values) and obtained activation to drive learning. For reasons discussed below, the use of a threshold activation function (as used in both the McCulloch-Pitts network and the perceptron) is dropped; instead, a linear sum of products is used to calculate the activation of the output neuron (alternative activation functions can also be applied; see Section 5.2). Thus, the activation function in this case is called a linear activation function, in which the output node's activation is simply equal to the sum of the network's respective input/weight products. The strengths of the network's connections (i.e., the values of the weights) are adjusted to reduce the difference between target and actual output activation (i.e., error).

Fig.7. A network capable of implementing the Delta Rule.

Non-binary values may be used. Weights are identified by w’s, and inputs are
identified by i’s. A simple linear sum of products (represented by the symbol at top) is
used as the activation function at the output node of the network shown here.

During forward propagation through a network, the output (activation) of a given node is a function of its inputs. The inputs to a node, which are simply the products of the outputs of preceding nodes with their associated weights, are summed and then passed through an activation function before being sent out from the node. Thus, we have the following:

Sj = Σi wij ai    (Eqn 3a)

and

aj = f(Sj)    (Eqn 3b)

where Sj is the sum of all relevant products of weights and outputs from the previous layer i, wij represents the relevant weights connecting layer i with layer j, ai represents the activations of the nodes in the previous layer i, aj is the activation of the node at hand, and f is the activation function.
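The two equations amount to a weighted sum followed by an activation function. A small sketch, with illustrative weights and inputs:

```python
# Forward pass for one node: S_j = sum_i(w_ij * a_i), then a_j = f(S_j).
def node_activation(weights, inputs, f):
    s = sum(w * a for w, a in zip(weights, inputs))  # Eqn 3a: weighted sum
    return f(s)                                      # Eqn 3b: apply activation

# Linear activation, as used at the Delta Rule output node
linear = lambda s: s

# 0.5*1.0 + (-0.25)*2.0 + 0.1*3.0 = 0.3
a_j = node_activation([0.5, -0.25, 0.1], [1.0, 2.0, 3.0], linear)
```

Swapping `linear` for a sigmoid or threshold function changes only `f`, which is why the text treats the activation function as a pluggable choice.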

Fig.8. Schematic representation of an error function for a network containing only two
weights (w1 and w2) (after Lugar and Stubblefield, 1993).

Any given combination of weights will be associated with a particular error


measure. The Delta Rule uses gradient descent learning to iteratively change network
weights to minimize error (i.e., to locate the global minimum in the error surface).
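The gradient-descent update described above can be sketched directly. The dataset, learning rate, and epoch count below are illustrative assumptions:

```python
# Delta Rule (LMS) sketch with a linear output node: weights are adjusted
# by gradient descent to reduce (target - output).
def delta_rule_train(samples, n_inputs, lr=0.1, epochs=100):
    w = [0.0] * n_inputs
    for _ in range(epochs):
        for inputs, target in samples:
            # Linear activation: output is the sum of input/weight products
            output = sum(wi * xi for wi, xi in zip(w, inputs))
            error = target - output
            # Delta Rule update: w_i += lr * error * x_i
            w = [wi + lr * error * xi for wi, xi in zip(w, inputs)]
    return w

# Learn the linear relation y = 2*x1 - x2 from four examples
samples = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0),
           ([1.0, 1.0], 1.0), ([2.0, 1.0], 3.0)]
w = delta_rule_train(samples, n_inputs=2)
```

Because the target relation is exactly linear, repeated passes drive the error toward zero and the weights toward [2, -1], illustrating the descent to the minimum of the error surface.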

5.8. NETWORK TERMINOLOGY:

A multi-layer, feedforward, backpropagation neural network is composed of

1) an input layer of nodes,

2) one or more intermediate (hidden) layers of nodes, and

3) an output layer of nodes (Figure 1). The output layer can consist of one or
more nodes, depending on the problem at hand. In most classification applications, there
will either be a single output node (the value of which will identify a predicted class),
or the same number of nodes in the output layer as there are classes (under this latter
scheme, the predicted class for a given set of input data will correspond to that class
associated with the output node with the highest activation). It is important to recognize
that the term "multi-layer" is often used to refer to multiple layers of weights. This contrasts with the usual meaning of "layer", which refers to a row of nodes (Vemuri, 1992). For clarity, it is often best to describe a particular network by its number of layers and the number of nodes in each layer (e.g., a "4-3-5" network has an input layer with 4 nodes, a hidden layer with 3 nodes, and an output layer with 5 nodes).

5.9. COMBINING NEURAL NETWORK RESULTS:

The random initialization of network weights prior to each execution of the neural network training algorithm can in some cases cause final classification results to vary from execution to execution, even when all other factors (e.g., training data, learning rate, momentum, network topology) are kept constant. Particularly when working with very limited training datasets, the variation in results can be large. Under such circumstances, it is best to expand the training data on the basis of improved ground truth. If this is not possible, optimum results can sometimes be generated by combining the results of multiple neural network classifications. For example, multiple neural network results can be combined using a simple consensus rule: for a given pixel, the class label with the largest number of network "votes" is the one assigned (that is, the results of the individual neural-network executions are combined through a simple majority vote) (Hansen and Salamon, 1990). The reasoning behind such a consensus rule is that a consensus of numerous neural networks should be less fallible than any of the individual networks, with each network generating results with different error attributes as a consequence of differing weight initializations (Hansen and Salamon, 1990). Of interest in the neural network community is the use of consensus algorithms to generate final results that are superior to any individual neural network classification.
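The consensus rule amounts to a majority vote over the per-network labels. A small sketch, with hypothetical class labels:

```python
from collections import Counter

# Consensus rule: each trained network votes a class label for a pixel,
# and the label with the most votes wins.
def consensus_label(votes):
    """Return the class label with the largest number of network votes."""
    return Counter(votes).most_common(1)[0][0]

# Five networks classify the same pixel; "water" wins 3 votes to 2
label = consensus_label(["water", "forest", "water", "urban", "water"])
```

`Counter.most_common(1)` returns the single most frequent label; ties are broken by first occurrence, which a production system might instead flag for manual review.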

Neural networks are widely used to obtain an optimal solution with minimized error. The model is used to predict the outputs for a given combination of inputs. Main steam temperature control using a neural network involves collecting data, creating the neural network, and training the network.

The input parameters used for training the neural network, as described earlier, are Qgfsh, Wsifsh, Pdr and Wsprayfsh. The neural network is trained for a set of inputs using the "nntool" toolbox.

Fig.9. Neural network.

70% of the data are used for training, 15% for validation, and 15% for testing. The neural network contains input, hidden, and output layers. The number of neurons in the hidden layer is 10. The hidden layer has a sigmoidal activation function and the output layer has a purelin (linear) function. The Levenberg-Marquardt training algorithm is adopted for training. The trained network is evaluated using the test data.
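The network shape described above (10 sigmoidal hidden neurons feeding a linear output) can be sketched outside MATLAB as a plain forward pass. The weights below are random stand-ins, not trained values, and numpy is used in place of nntool:

```python
import numpy as np

# Forward pass matching the described topology: 4 inputs -> 10 sigmoid
# hidden neurons -> 1 linear ("purelin") output. Weights are random
# placeholders for what nntool's Levenberg-Marquardt training would produce.
rng = np.random.default_rng(0)
n_inputs, n_hidden, n_outputs = 4, 10, 1   # 4 inputs per the report

W1 = rng.normal(size=(n_hidden, n_inputs))
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=(n_outputs, n_hidden))
b2 = rng.normal(size=n_outputs)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    h = sigmoid(W1 @ x + b1)   # sigmoidal hidden layer
    return W2 @ h + b2         # purelin (linear) output layer

y = forward(np.array([0.5, 0.2, 0.1, 0.7]))
```

Training these weights would require an optimizer such as Levenberg-Marquardt or plain gradient descent; only the inference structure is shown here.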

CHAPTER 6
SIMULINK MODEL OF CRUISE CONTROL SYSTEM

6.1. P CONTROLLER:

Fig.10. Simulink Model of P Controller.

6.2. PI CONTROLLER:

Fig.11. Simulink model of PI Controller.

6.3. PID CONTROLLER:

Fig.12. Simulink model of PID Controller

6.4. NEURO CONTROLLER:

Fig.13. Simulink Model of Neural Network Controller

Fig.14. NN Tool Performance plot

Fig.15. NN Tool Training State plot

Fig.16. NN Tool Regression plot

CHAPTER 7

SIMULATION RESULTS

7.1. P CONTROLLER:

Fig.17. Scope of P Controller

7.2. PI CONTROLLER:

Fig.18. Scope of PI Controller

7.3. PID CONTROLLER:

Fig.19. Scope of PID Controller

7.4. NEURAL NETWORK CONTROLLER:

Fig.20. Scope of Neural Network Controller

7.5. PERFORMANCE PARAMETERS:

Performance Parameters    TUNED                  BLOCK
Kp                        0.0058817              0.0070581
Rise Time (sec)           3.79                   2.17
Settling Time (sec)       6.75                   4.82
Overshoot (%)             0                      0
Peak                      2                      1.71
Gain Margin               -6.02 dB @ 0 rad/s     -7.61 dB @ 0 rad/s
Phase Margin              60 deg @ 1 rad/s       65.4 deg @ 1.26 rad/s
Closed loop stability     Stable                 Stable

Tab.4. Simulation Results of P Controller

Performance Parameters    TUNED                   BLOCK
Rise Time (sec)           30                      26
Settling Time (sec)       84                      86.8
Overshoot (%)             1                       11.6
Peak                      1                       1.12
Gain Margin               Inf dB @ NaN            Inf dB @ NaN
Phase Margin              97.8 deg @ 0.107 rad/s  60 deg @ 0.0562 rad/s
Closed loop stability     Stable                  Stable

Tab.5. Simulation Results of PI Controller

Performance Parameters    TUNED                    BLOCK
Kp                        0.0728                   0.43807
Ki                        0.0076886                0.025043
Kd                        -0.014523                -0.030757
Rise Time (sec)           1.25                     0.206
Settling Time (sec)       21.1                     NaN
Overshoot (%)             21.1                     7.6
Peak                      1.21                     1.08
Gain Margin               -17.8 dB @ 0.129 rad/s   -33.5 dB @ 0.094 rad/s
Phase Margin              69 deg @ 1.08 rad/s      63 deg @ 6.53 rad/s
Closed loop stability     Stable                   Stable

Tab.6. Simulation Results of PID Controller

7.6. dSPACE OUTPUT:

Fig.21. dSPACE Output

CHAPTER 8

CONCLUSION

As the tuned and block results do not vary much, the tuned results are taken into account. The gains of the PI and PID controllers are small. When the gains of these controllers are increased to achieve the desired response, the system's response becomes faster but also oscillatory. In order to anticipate the future error, a derivative term is introduced, which stabilizes the system. The peak overshoot of the PID controller is larger than that of the P and PI controllers; the higher the overshoot, the lower the stability. The settling time of the P controller is less than that of the PI and PID controllers, and its rise time is also lower. Even so, among the P, PI, and PID controllers, the PID is the best overall. Comparing the P controller with the neuro controller, the P controller settles before it reaches the target, whereas the neuro controller has an accurate settling point and achieves the target. The neuro controller is therefore judged the best. The Simulink model was then generated into code and fed into dSPACE. From the image (Fig.21.), it is clear that the set speed (30 km/hr) is maintained by dSPACE.

CHAPTER 9

FUTURE SCOPE

Recently, software tools for real-time control have become available. Using dSPACE software tools it is possible to output values while the simulation program is running, and also to add signals obtained from external sensors. This scheme is known as "hardware in the loop" simulation. In real-time use, implementing the controller on dSPACE and eliminating the ECU would make the system cost efficient.

CHAPTER 10

REFERENCES

1. Jaswandi Sawant and Divyesh Ginoya, "dSPACE DSP DS-1104 based State
Observer Design for Position Control of DC Servo Motor," dSPACE User
Conference 2010 – India, Sept. 24, 2010, Department of Instrumentation
and Control, College of Engineering, Pune.
2. http://www.webpages.ttu.edu/dleverin/neural_network/neural_networks.html
3. MATLAB/Simulink User's Guide, The MathWorks Inc., Natick, MA, 1998.
4. K. Ogata, Modern Control Engineering, Prentice Hall, 2002.
5. U. Manwong, S. Boonpiyathud and S. Tunyasrirut, "Implementation of a
dSPACE DSP-Based State Feedback with State Observer Using
MATLAB/Simulink for a Speed Control of DC Motor System,"
International Conference on Control, Automation and Systems,
Oct. 14-17, 2008, COEX, Seoul, Korea.
6. D. G. Luenberger, "An Introduction to Observers," IEEE Transactions on
Automatic Control, Vol. AC-16, No. 6, December 1971.
7. H. Temeltas and G. Aktas, "Friction State Observation in Positioning
Systems using Luenberger Observer," Proceedings of the 2003 IEEE/ASME
International Conference on Advanced Intelligent Mechatronics (AIM 2003).
8. S. Yurkovich, D. J. Clancy and J. K. Hurtig, Control Systems Laboratory,
Simon & Schuster Custom Publishing, 1998.
9. K. Ogata, Modern Control Engineering, 3rd Ed., Prentice Hall, 1997.
10. G. F. Franklin, D. J. Powell and M. L. Workman, Digital Control of
Dynamic Systems, 3rd Ed., Addison-Wesley, 1997.
11. Anzai, Y., 1992. Pattern Recognition and Machine Learning. Academic
Press, Boston.
12. Bishop, C.M., 1995a. Neural Networks for Pattern Recognition. Oxford
University Press, New York.
13. Bishop, C.M., 1995b. "Training with noise is equivalent to Tikhonov
regularization," Neural Computation, 7: 108-116.
14. Gallant, S.I., 1993. Neural Network Learning and Expert Systems. MIT
Press, Cambridge.
15. Sunan Huang and Wei Ren, Senior Member, IEEE, "Use of Neural Fuzzy
Networks with Mixed Genetic/Gradient Algorithm in Automated Vehicle
Control."
16. P. Varaiya and S. E. Shladover, "Sketch of an IVHS systems
architecture," PATH, Berkeley, CA, Res. Rep. UCB-ITS-PRR-91-3, 1991.
17. P. Varaiya, "Smart cars on smart roads: problems for control," IEEE
Trans. Automat. Contr., vol. 38, pp. 195–207, Feb. 1993.
18. S. N. Huang and W. Ren, "Design of vehicle following control systems
with actuator delays," Int. J. Syst. Sci., vol. 28, no. 2, pp. 145–151, 1997.
19. P. Ioannou, Z. Xu et al., "Intelligent cruise control: Theory and
experiment," in Proc. 32nd IEEE Conf. Decision and Control, San Antonio,
TX, 1993, pp. 1885–1890.

CHAPTER 11
APPENDIX

SIMPLE SCRIPT OF NEURAL NETWORK TOOL:

% Solve an Input-Output Fitting problem with a Neural Network


% Script generated by Neural Fitting app
% Created Mon Nov 27 13:16:21 IST 2017
%
% This script assumes these variables are defined:
%
% Speed - input data.
% Speed1 - target data.

x = Speed;
t = Speed1;

% Choose a Training Function


% For a list of all training functions type: help nntrain
% 'trainlm' is usually fastest.
% 'trainbr' takes longer but may be better for challenging problems.
% 'trainscg' uses less memory. NFTOOL falls back to this in low memory situations.
trainFcn = 'trainlm'; % Levenberg-Marquardt

% Create a Fitting Network


hiddenLayerSize = 10;
net = fitnet(hiddenLayerSize,trainFcn);

% Setup Division of Data for Training, Validation, Testing

net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

% Train the Network


[net,tr] = train(net,x,t);

% Test the Network


y = net(x);
e = gsubtract(t,y);
performance = perform(net,t,y)

% View the Network


view(net)

% Plots
% Uncomment these lines to enable various plots.
%figure, plotperform(tr)
%figure, plottrainstate(tr)
%figure, plotfit(net,x,t)
%figure, plotregression(t,y)
%figure, ploterrhist(e)
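The 70/15/15 division configured above can be sketched in plain Python (an
illustration of what a random train/validation/test split does, not the Neural
Network Toolbox implementation):

```python
import random

def split_indices(n, train=0.70, val=0.15, seed=0):
    """Shuffle sample indices 0..n-1 and cut them into train/val/test sets."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = round(train * n)
    n_val = round(val * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

tr, va, te = split_indices(100)    # 70, 15 and 15 samples respectively
```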

ADVANCED SCRIPT OF NEURAL NETWORK TOOL:

% Solve an Input-Output Fitting problem with a Neural Network


% Script generated by Neural Fitting app
% Created Mon Nov 27 13:18:04 IST 2017

% This script assumes these variables are defined:


% Speed - input data.
% Speed1 - target data.

x = Speed;
t = Speed1;

% Choose a Training Function


% For a list of all training functions type: help nntrain
% 'trainlm' is usually fastest.
% 'trainbr' takes longer but may be better for challenging problems.
% 'trainscg' uses less memory. NFTOOL falls back to this in low memory situations.
trainFcn = 'trainlm'; % Levenberg-Marquardt

% Create a Fitting Network


hiddenLayerSize = 10;
net = fitnet(hiddenLayerSize,trainFcn);

% Choose Input and Output Pre/Post-Processing Functions


% For a list of all processing functions type: help nnprocess
net.input.processFcns = {'removeconstantrows','mapminmax'};
net.output.processFcns = {'removeconstantrows','mapminmax'};

% Setup Division of Data for Training, Validation, Testing
% For a list of all data division functions type: help nndivide
net.divideFcn = 'dividerand'; % Divide data randomly
net.divideMode = 'sample'; % Divide up every sample
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

% Choose a Performance Function


% For a list of all performance functions type: help nnperformance
net.performFcn = 'mse'; % Mean squared error

% Choose Plot Functions


% For a list of all plot functions type: help nnplot
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...
'plotregression', 'plotfit'};

% Train the Network


[net,tr] = train(net,x,t);

% Test the Network


y = net(x);
e = gsubtract(t,y);
performance = perform(net,t,y)

% Recalculate Training, Validation and Test Performance


trainTargets = t .* tr.trainMask{1};
valTargets = t .* tr.valMask{1};
testTargets = t .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,y)

valPerformance = perform(net,valTargets,y)
testPerformance = perform(net,testTargets,y)

% View the Network


view(net)

% Plots
% Uncomment these lines to enable various plots.
%figure, plotperform(tr)
%figure, plottrainstate(tr)
%figure, plotfit(net,x,t)
%figure, plotregression(t,y)
%figure, ploterrhist(e)

% Deployment
% Change the (false) values to (true) to enable the following code blocks.
if (false)
% Generate MATLAB function for neural network for application deployment
% in MATLAB scripts or with MATLAB Compiler and Builder tools, or simply
% to examine the calculations your trained neural network performs.
genFunction(net,'myNeuralNetworkFunction');
y = myNeuralNetworkFunction(x);
end
if (false)
% Generate a matrix-only MATLAB function for neural network code
% generation with MATLAB Coder tools.
genFunction(net,'myNeuralNetworkFunction','MatrixOnly','yes');
y = myNeuralNetworkFunction(x);
end
if (false)

% Generate a Simulink diagram for simulation or deployment with
% Simulink Coder tools.
gensim(net);
end

MATLAB FUNCTION OF NN TOOL:

function [Y,Xf,Af] = myNeuralNetworkFunction(X,~,~)


%MYNEURALNETWORKFUNCTION neural network simulation function.
%
% Generated by Neural Network Toolbox function genFunction, 27-Nov-2017 13:11:48.
%
% [Y] = myNeuralNetworkFunction(X,~,~) takes these arguments:
%
% X = 1xTS cell, 1 inputs over TS timesteps
% Each X{1,ts} = 1xQ matrix, input #1 at timestep ts.
%
% and returns:
% Y = 1xTS cell of 1 outputs over TS timesteps.
% Each Y{1,ts} = 1xQ matrix, output #1 at timestep ts.
%
% where Q is number of samples (or series) and TS is the number of timesteps.

%#ok<*RPMT0>

% ===== NEURAL NETWORK CONSTANTS =====

% Input 1
x1_step1_xoffset = 20;

x1_step1_gain = 0.02;
x1_step1_ymin = -1;

% Layer 1
b1 = [14; 10.888888888888889; -7.7777777777777786; 4.666666666666667; ...
    -1.5555555555555562; -1.5555555555555562; 4.6666666666666661; ...
    -7.7777777777777786; -10.888888888888888; -14];
IW1_1 = [-13.999999999999998; -14; 14; -14; 14; -14; ...
    13.999999999999998; -14; -14; -14];

% Layer 2
b2 = -0.23910830604928668;
LW2_1 = [0.23208935229327832 -0.053422302194541471 -0.29668098587400649 ...
    0.66165725579258172 0.17052818230544853 0.099447216582279063 ...
    0.83438732765962009 -0.42832196235925291 0.51440045822144254 ...
    0.50745818855699065];

% Output 1
y1_step1_ymin = -1;
y1_step1_gain = 0.1;
y1_step1_xoffset = 20;

% ===== SIMULATION ========

% Format Input Arguments


isCellX = iscell(X);
if ~isCellX, X = {X}; end;

% Dimensions
TS = size(X,2); % timesteps

if ~isempty(X)
Q = size(X{1},2); % samples/series
else
Q = 0;
end

% Allocate Outputs
Y = cell(1,TS);

% Time loop
for ts = 1:TS

% Input 1
Xp1 = mapminmax_apply(X{1,ts},x1_step1_gain,x1_step1_xoffset,x1_step1_ymin);

% Layer 1
a1 = tansig_apply(repmat(b1,1,Q) + IW1_1*Xp1);

% Layer 2
a2 = repmat(b2,1,Q) + LW2_1*a1;

% Output 1
Y{1,ts} = mapminmax_reverse(a2,y1_step1_gain,y1_step1_xoffset,y1_step1_ymin);
end

% Final Delay States


Xf = cell(1,0);
Af = cell(2,0);

% Format Output Arguments
if ~isCellX, Y = cell2mat(Y); end
end

% ===== MODULE FUNCTIONS ========

% Map Minimum and Maximum Input Processing Function


function y = mapminmax_apply(x,settings_gain,settings_xoffset,settings_ymin)
y = bsxfun(@minus,x,settings_xoffset);
y = bsxfun(@times,y,settings_gain);
y = bsxfun(@plus,y,settings_ymin);
end

% Sigmoid Symmetric Transfer Function


function a = tansig_apply(n)
a = 2 ./ (1 + exp(-2*n)) - 1;
end

% Map Minimum and Maximum Output Reverse-Processing Function


function x = mapminmax_reverse(y,settings_gain,settings_xoffset,settings_ymin)
x = bsxfun(@minus,y,settings_ymin);
x = bsxfun(@rdivide,x,settings_gain);
x = bsxfun(@plus,x,settings_xoffset);
end
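The module functions above are easy to sanity-check. The Python sketch below
(illustrative only, using the settings hard-coded in the generated function:
offset 20, gain 0.02, ymin -1, which imply the recorded speeds spanned roughly
20 to 120) confirms that tansig_apply is algebraically identical to tanh and
that the reverse mapping inverts the forward one:

```python
import math

def mapminmax_apply(x, gain, xoffset, ymin):
    # linear map of x into the normalised range starting at ymin
    return (x - xoffset) * gain + ymin

def mapminmax_reverse(y, gain, xoffset, ymin):
    # exact inverse of mapminmax_apply with the same settings
    return (y - ymin) / gain + xoffset

def tansig_apply(n):
    # 2/(1+exp(-2n)) - 1, which equals tanh(n)
    return 2.0 / (1.0 + math.exp(-2.0 * n)) - 1.0

assert abs(mapminmax_apply(20, 0.02, 20, -1) - (-1.0)) < 1e-12
assert abs(mapminmax_apply(120, 0.02, 20, -1) - 1.0) < 1e-12
assert abs(mapminmax_reverse(mapminmax_apply(70, 0.02, 20, -1),
                             0.02, 20, -1) - 70) < 1e-12
assert abs(tansig_apply(0.7) - math.tanh(0.7)) < 1e-12
```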

