IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 39, NO. 6, DECEMBER 1992

Theory and Applications of Neural Networks for Industrial Control Systems

Toshio Fukuda, Member, IEEE, and Takanori Shibata

Abstract—This paper describes the theory and the applications of artificial neural networks, especially in the control field. Artificial neural networks try to mimic the nervous system of a mammalian brain in a mathematical model. Therefore, neural networks have desirable characteristics and capabilities that stem from the biological system, such as parallel processing, learning, nonlinear mapping, and generalization. Recently, many researchers have developed neural networks as new tools in many fields such as pattern recognition, information processing, design, planning, diagnosis, and control. We survey hybrid systems of neural networks, fuzzy sets, and Artificial Intelligence (AI) technologies. Fuzzy sets and AI technologies have also been implemented as new tools in many fields and shown to be useful. Therefore we also deal with these hybrid systems as key technologies for the future.

I. INTRODUCTION

RESEARCH in artificial neural networks has recently been active as a new means of information processing. Artificial neural networks try to mimic the biological brain's neural networks in a mathematical model. The brain is a large-scale system connecting many neural cells called neurons (Fig. 1); it is estimated that a human being has around 10^10 brain cells. The brain has many excellent characteristics: parallel processing of information, a learning function, self-organization capabilities, and so forth [1], [2]. The brain can also provide an associative memory [2] and is good at information processing such as pattern recognition [3].

An artificial neural network, as a model of the brain, connects many linear or nonlinear neuron models and processes information in a parallel distributed manner [4]. In conventional single-processor von Neumann computers, the speed of computation is limited by the propagation delay of the transistors.
Because of their massively parallel nature, neural networks can perform computations at much higher speed [1]. In addition, the neural network has many interesting and attractive features. Neural networks have learning and self-organization capabilities. Therefore, neural networks can adapt to changes in data, learning the characteristics of input signals. That is, neural networks can learn a mapping between an input and an output space and synthesize an associative memory that retrieves the appropriate output when presented with an input and generalizes when presented with new inputs [5]. Moreover, because of their nonlinear nature, neural networks can perform functional approximation and signal filtering operations that are beyond optimal linear techniques [3].

Manuscript received December 2, 1991; revised February 6, 1992. The authors are with the Department of Mechanical Engineering, Nagoya University, Nagoya 464-01, Japan. IEEE Log Number 920507.

Fig. 1. Biological neuron.

From 1946 to 1960, a movement attempted to carry out interdisciplinary research on the brain and computers to discover the basic principles of intelligent information processing. McCulloch and Pitts proposed the idea that a "mindlike machine" could be manufactured by interconnecting models based on the behavior of biological neurons, the concept of "Neurological Networks" [6]. They made a neuron model representing a basic component of the brain and showed its versatility as a logical operation system. In this era, cybernetics, proposed by Wiener [7], was studied in depth. The principle of cybernetics is the relationship between engineering principles, feedback, and brain function. This research led to the basis of today's von Neumann-type computer [8].

With the progress of research on the brain and computers, the objective changed from the "mindlike machine" to "manufacturing a learning machine," for which Hebb's learning model was proposed [9].
In addition, by the early 1960's, specific design guidelines for learning systems were given by Rosenblatt's Perceptron [10], Widrow and Hoff's Adaline (ADAptive LINear Element) [11], and Steinbuch's Learning Matrix [12], which was a pattern recognition machine based on linear discriminant functions. The Perceptron generated considerable excitement when it was first introduced because of its conceptual simplicity. However, Minsky and Papert (1969) proved mathematically that the Perceptron cannot be used for complex logic functions [13]. Then, the excitement subsided, and research in Artificial Intelligence (AI) independent from the structure of the biological brain was started.

© 1992 IEEE

On the other hand, the Adaline forms a weighted sum of the inputs, together with a least mean square (LMS) algorithm to adjust the weights to minimize the difference between the desired signal and the output [11], [14]. Because of the rigorous mathematical foundation of the LMS algorithm, the Adaline has been developed into a powerful tool for adaptive signal processing [15] and adaptive control [16]. Early work on competitive learning and self-organization was also performed [17], [18].

Although few researchers worked on neural networks during the 1970's, Grossberg and Kohonen made significant contributions. Grossberg developed Adaptive Resonance Theory (ART) [19], based on the idea that the brain spontaneously organizes itself into recognition codes. The dynamics of the network were modeled by first-order differential equations. There are three architectures, and these are self-organizing neural implementations of pattern clustering algorithms [20]. On the other hand, Kohonen developed his work on self-organizing maps, based on the idea that neurons organize themselves to tune to various and specific patterns [2], [21].
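The Adaline's LMS rule described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the learning rate and the tiny training set are illustrative assumptions.

```python
# Minimal Adaline / LMS sketch: each weight is nudged along the negative
# gradient of the squared error between the desired and actual output.
def lms_step(w, x, d, lr=0.1):
    """One LMS update: w_i <- w_i + lr * (d - y) * x_i, where y = w . x."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    err = d - y
    return [wi + lr * err * xi for wi, xi in zip(w, x)], err

# Illustrative use: learn the linear map y = 2*x0 - x1 from a few samples.
w = [0.0, 0.0]
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]
for _ in range(200):
    for x, d in data:
        w, _ = lms_step(w, x, d)
print(w)  # approaches [2.0, -1.0]
```

Because the update is a gradient step on a quadratic error surface, convergence for consistent linear data is guaranteed for a small enough learning rate, which is the "rigorous mathematical foundation" the text refers to.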
Moreover, Albus developed an adaptive Cerebellar Model Articulation Controller (CMAC), which is a distributed table-lookup system based on models of human memory [22].

In the 1970's, Werbos originally developed the back-propagation algorithm [23], and its first practical application was for estimating a dynamic model to predict nationalism and social communications. However, Werbos' work remained almost unknown in the scientific community. In the mid-1980's, the back-propagation algorithm, as the learning algorithm of the feed-forward neural network, was rediscovered by Parker [24] and Rumelhart et al. [25]. In Section IV we survey the precise algorithm of back-propagation. Moreover, back-propagation through time is also a powerful tool to deal with dynamical systems such as recurrent neural networks, feed-forward systems of equations, and systems with time lags [26]. On the other hand, in the early 1980's, Hopfield introduced a recurrent-type neural network based on the interaction of neurons [27], and his approach was based on the Hebbian learning law [9]. The model consists of a set of first-order (nonlinear) differential equations that minimize a certain energy function, and this model is known as a Hopfield net. In Section VI we survey the basis of the Hopfield net. Furthermore, Kosko extended some of the ideas of Grossberg and Hopfield to develop his adaptive Bidirectional Associative Memory (BAM) [28]. Hinton, Sejnowski, and Ackley developed the Boltzmann machine [29], [30], which is a kind of Hopfield net that settles into solutions by a simulated annealing process [31] as a stochastic technique. This research in the 1980's triggered the present boom in the scientific community. Recently, neural networks have found wide applications in many different fields.

Application of neural networks to pattern recognition has been widely studied.
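To make the idea of back-propagation concrete before the survey in Section IV, the following is a minimal sketch (not the paper's notation) of one gradient step for a tiny 1-1-1 feed-forward network: the output error is propagated backward through the chain rule to both weights. The initial weights, learning rate, and target are illustrative assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One back-propagation step for a tiny network y = f(w2 * f(w1 * x)),
# minimizing E = (y - target)^2 / 2 by gradient descent.
def backprop_step(w1, w2, x, target, lr=0.5):
    h = sigmoid(w1 * x)                       # forward pass: hidden activation
    y = sigmoid(w2 * h)                       # forward pass: output
    delta_out = (y - target) * y * (1 - y)    # dE/d(net) at the output unit
    delta_hid = delta_out * w2 * h * (1 - h)  # error propagated backward
    return w1 - lr * delta_hid * x, w2 - lr * delta_out * h, (y - target) ** 2 / 2

w1, w2 = 0.5, -0.5
for _ in range(2000):
    w1, w2, err = backprop_step(w1, w2, 1.0, 0.8)
print(err)  # squared error shrinks toward 0
```

The same two-pass structure (forward evaluation, backward error propagation) generalizes to networks of any depth and width.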
Neural networks such as the Hopfield net and the feed-forward net with the back-propagation algorithm are applied and studied mainly for image and speech recognition. Research in image recognition includes initial vision (stereo vision with both eyes, outline extraction, etc.) close to the biological (particularly brain) function, handwritten character recognition by the cognitron and neocognitron at the practical level [32], and cell recognition for mammalian cell cultivation using the feed-forward neural network [33]. Speech recognition deals with time series, and it was reported that the Time Delay Neural Network (TDNN), with time-delayed input, was effective as a buffer model of a feed-forward net [34].

Optimization is often required for the planning of actions, motions, and tasks, but when there are many parameters the amount of calculation becomes tremendous and ordinary methods cannot be applied. The Hopfield network is an effective tool to find the optimal solution of an optimization problem [35] by defining an energy function.

Recently, in the control field, there have been many cases where automatic control theories and techniques have played an important role. With the progress of control theories, there are now many applications for automatic control with increased performance. However, the applicable systems become increasingly complicated and highly composite. It is therefore expected that control theories and techniques will make further progress.
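The energy-function idea behind the Hopfield net can be sketched concretely. The discrete version below stores one pattern with Hebbian weights; each asynchronous update can only lower (or keep) the energy E = -1/2 Σ w_ij s_i s_j, so the state settles into a minimum. The 3-neuron pattern and noisy start are illustrative assumptions.

```python
# Sketch of a discrete Hopfield net: states s_i in {-1, +1}, symmetric
# weights w[i][j] with zero self-connections.
def energy(w, s):
    """E = -1/2 * sum_ij w_ij * s_i * s_j."""
    n = len(s)
    return -0.5 * sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

def update(w, s):
    """One asynchronous sweep; each flip follows the sign of the local field."""
    s = list(s)
    for i in range(len(s)):
        field = sum(w[i][j] * s[j] for j in range(len(s)))
        s[i] = 1 if field >= 0 else -1
    return s

# Illustrative 3-neuron net storing the pattern (+1, +1, -1) via Hebbian weights.
p = [1, 1, -1]
w = [[(p[i] * p[j] if i != j else 0) for j in range(3)] for i in range(3)]
s = [-1, 1, -1]               # noisy starting state
for _ in range(3):
    s = update(w, s)
print(s)  # settles to the stored pattern [1, 1, -1]
```

For optimization, the same mechanism is used in reverse: the cost function of the problem is encoded as the energy, and the settled state is read out as a (locally) optimal solution.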
Adaptive control, such as Model Reference Adaptive Control (MRAC) [36] and the Self-Tuning Regulator (STR) [37], [38], has become available for systems having much uncertainty [39]. Nevertheless, traditional adaptive control has had problems such as calculations that grow exponentially with the number of unknown parameters and limitations on applicability to nonlinear systems [1], [40], [41]. Many attempts have been made to apply the neural network to control fields, where the neural network is used to deal with nonlinearities and uncertainty of the control system and to approximate functions [1], as in system identification [40]. Research in neural network applications to control can be classified into several major methods depending on the structure of the control system [42]: supervised control [43], [44], inverse control [45]-[51], neural adaptive control [40], [41], [52], back-propagation of utility [42], which is an extended method of back-propagation through time [26], and adaptive critics, which include a reinforcement learning algorithm [53]-[56]. In Sections V-VIII we explain these methods in control fields and survey some examples.

When we discuss intelligent control systems, we expect the intelligent control systems to have reasoning mechanisms with knowledge-based systems, such as expert systems, and adaptive controllers for the changing environment [57]-[60]. The reasoning mechanism produces control strategies symbolically for complex and composite tasks by using knowledge and database systems after recognition of the environment [60], [61].
As the reasoning mechanism deals with symbols, whereas the real world is numerical, it is necessary to classify the data in order to map the numerical sensed data into symbolic data for understanding of the process state [59]-[61]. However, it is difficult to decide the rules that classify the numerical data. In expert systems, human experts must decide these rules by using their domain-dependent knowledge [62]. Moreover, in order to achieve tasks following the control strategy produced by the reasoning mechanism, it is necessary for the intelligent control system to use an effective adaptive controller for servo control. Therefore, as described before, the neural network has potential for intelligent control systems through its learning and adaptation, self-organization, function approximation, and massively parallel processing capabilities. Neural networks can be used as classifiers and adaptive controllers together with expert systems for intelligent control.

Furthermore, fuzzy set theory [63]-[66] also has potential for intelligent control systems. The fuzzy set is characterized as an extension of binary Boolean logic [63]. A fuzzy set is a class in which the transition from membership to nonmembership is gradual rather than abrupt. In an expert system, the fuzzy set can handle the complexity and ambiguity that may result from communication gaps between the expert and the user, since the fuzzy set can model somewhat ambiguous language that cannot be modeled by conventional two-valued logic ("if-then" rules). Therefore, the fuzzy set is another powerful tool to model phenomena associated with human thinking and perception [67]. However, both the neural network and the fuzzy set have some difficulties.
The neural network can produce mapping rules from empirical training sets through supervised learning, but the mapping rules in the network are not visible and are difficult to understand. On the other hand, since the fuzzy set does not have learning capability, it is difficult to modify its rules. In order to solve these difficulties, much recent research has tried to hybridize fuzzy set theory and the neural network [67]-[70]; the result is often called a Fuzzy Neural Network (FNN). In Section IX we survey and discuss the FNN for intelligent control systems.

This tutorial describes theories, current status, and trends when neural networks are applied to control problems. This tutorial also describes synthesis techniques of artificial intelligence as a human interface, fuzzy sets, and neural networks for intelligent control systems as key technologies for the future.

II. THE NEURAL NETWORK MODEL

A. The Neuron Model

Each neuron model [1], [3], [4], [25] building up a network simulates a biological neuron as shown in Fig. 2. The neuron unit consists of multiple inputs x_i and one output y, and its internal state is given as the weighted sum of the input signals. The output of the neuron unit is as follows:

    y(t) = f( Σ_{i=1}^{n} w_i x_i(t) - θ )    (1)

Fig. 2. Neuron model.

where w_i is the weight of a connection, θ is a bias of the neuron unit, t is time, and n is the number of inputs. The weight coefficient w_i, which represents the strength of the connection, indicates the synapse load. It takes a positive value for excitation and a negative one for inhibition. The neuron output function f(x) is often either the two-valued function of 1 and 0 using a threshold (hard limiter) or the sigmoid function, which is a continuous and nonlinear function (Fig. 3) [25]. A conventional sigmoid function is represented by the following expression:

    f(x) = 1 / (1 + exp(-x))
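The neuron model just described (weighted sum of inputs, bias, sigmoid output function) translates directly into code. This is a minimal sketch; the input values, weights, and bias below are illustrative assumptions, not values from the paper.

```python
import math

def neuron_output(x, w, theta):
    """Weighted-sum neuron: y = f(sum_i w_i * x_i - theta), f = sigmoid."""
    net = sum(wi * xi for wi, xi in zip(w, x)) - theta
    return 1.0 / (1.0 + math.exp(-net))   # sigmoid f(x) = 1/(1+exp(-x))

# Illustrative: two excitatory inputs (positive weights), one inhibitory
# (negative weight), with a small bias theta.
y = neuron_output(x=[1.0, 0.5, 1.0], w=[0.8, 0.6, -0.4], theta=0.2)
print(round(y, 3))  # a value in (0, 1)
```

The sigmoid keeps the output in the open interval (0, 1) and is differentiable everywhere, which is what makes gradient-based learning algorithms such as back-propagation applicable.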
