Sensor fault detection using fuzzy logic and neural networks

Gilles MOUROT, Souad BOUSGHIRI and Frédéric KRATZ
Centre de Recherche en Automatique de Nancy - CNRS UA 821
BP 40, Rue du doyen Marcel Roubault - F 54500 Vandœuvre
Tél. 80 50 30 80

Abstract - The validation of signals is a technique which integrates information from redundant and from functionally diverse sensors to provide highly reliable information to operators and to automatic control systems. Signal validation is generally performed by like-sensor comparisons (direct redundancy). When increasing the number of sensors is impossible, we use in preference analytical redundancy. Analytical redundancy refers to the physical relationships between measured or calculated quantities, describing the way the variables behave in a system. These detection techniques must be implemented to increase system reliability and to facilitate the detection of failures. This paper presents several methods of detection and isolation of measurement faults using a model of the process. For the model-based methods we present the standardized imbalance residuals analysis and the standardized least square residuals; usually, these techniques are presented using linear models and here we extend them to non-linear models. We then compare these approaches with fault detection using neural nets.

I. INTRODUCTION
The monitoring of a plant strongly depends on reliable sets of steady state data. Unfortunately these measurement data are generally corrupted by random noise and systematic errors. As a consequence, the process constraints are violated; this violation, however, is also a common tool for fault detection and isolation. The analysis of the magnitudes of these residual constraints may be used for detecting the presence of errors, isolating the faulty sensor and estimating the magnitude of the errors. Consequently, an adjustment of the data (after elimination of the gross errors, in order to avoid severe impairment of the adjustment process) may be achieved, which gives a statistical estimation of the process variables.

The paper will focus only on fault (or gross error) detection and identification. The classical approach using the analysis of the normalized residuals will be presented; moreover, we will define a complete strategy based on a priori and a posteriori residuals [1]. More recently, some authors have analysed the possibilities offered by neural nets [2] and fuzzy logic [3].

The paper is divided into two major parts: the conventional statistical approach and the use of neural nets. An example will be provided; it concerns a process involving non-linear equations on which errors have been simulated. The main contribution of this paper is the comparison of these approaches for gross error detection.

From the numerical point of view, only one example will be used. Consider the network, shown in figure 1, related by four non-linear equations, the first two of which are:

f1(X) = 0.0045 X(1) X(2)^2 - X(3)

f2(X) = X(3) - 280.86 ...

For this simulation example, the true values vector X* (which satisfies the constraint equations fi(X*) = 0) and the measurements vector X are:

X* = [ 7.29  58.39  111.85  146.62  0.037  62.18  60.71  91.40 ]T
X  = [ 7.18  60.31  173.13  141.49  0.039  77.46  58.04  95.57 ]T

We define the measurement covariance matrix as a diagonal matrix (the measurements are independent), whose ith diagonal term is:

V(i,i) = (0.05 X(i))^2    (5)
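A minimal sketch (Python/NumPy, our own illustration rather than material from the paper) encoding the example data and the covariance matrix of equation (5):

    import numpy as np

    # True values X* and measurements X of the eight process variables
    X_true = np.array([7.29, 58.39, 111.85, 146.62, 0.037, 62.18, 60.71, 91.40])
    X_meas = np.array([7.18, 60.31, 173.13, 141.49, 0.039, 77.46, 58.04, 95.57])

    # Diagonal measurement covariance matrix, V(i,i) = (0.05 X(i))^2, cf. (5)
    V = np.diag((0.05 * X_meas) ** 2)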
II. RESIDUALS GENERATION

We consider a process characterized by a state vector X of v variables and a set of n model equations f. Residuals may be generated either directly (a priori generation), by considering the unbalance of the model constraints, or indirectly (a posteriori generation), after estimating the state X (by a classical procedure of data reconciliation) and computing the corrective terms. In both cases, a global criterion may be computed involving the sum of squares of the residuals, which is sensitive to gross errors; under some hypotheses, the equivalence of the two criteria has been proved [4]. In order to achieve the isolation of the gross errors, one may analyse the components of these residuals.

In the general case, the process model can be written:

f(X*) = 0    (6)

where f is an n-vector of non-linear functions and X* is the set of true values.

A. A priori residuals generation

As in the linear case, the measurement vector X does not satisfy the constraint equations. The constraint residuals vector, or imbalance residuals vector, R is given by:

R = f(X)    (7)

With the hypothesis that the measurement errors are of constant variance V, one shows that the vector R follows a normal distribution with zero mean and covariance VR = g(X) V g(X)^T, where g(.) is the Jacobian matrix of f [5]. One defines a normalized imbalance vector as:

RN = VR^(-1/2) R    (8)
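A minimal sketch of this a priori generation, assuming a user-supplied constraint function f(x) that returns the vector of model equations; the finite-difference Jacobian is our implementation choice, not the authors':

    import numpy as np

    def jacobian(f, x, eps=1e-6):
        # Finite-difference approximation of the Jacobian g(x) of f
        f0 = np.asarray(f(x), dtype=float)
        J = np.zeros((f0.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps * max(1.0, abs(x[j]))
            J[:, j] = (np.asarray(f(x + dx), dtype=float) - f0) / dx[j]
        return J

    def a_priori_residuals(f, X_meas, V):
        # Imbalance residuals R = f(X) (7) and their standardized form RN (8)
        R = np.asarray(f(X_meas), dtype=float)
        g = jacobian(f, X_meas)
        VR = g @ V @ g.T                       # covariance of R
        RN = R / np.sqrt(np.diag(VR))          # component-wise standardization
        return R, RN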
If RN(i) exceeds the critical value, this denotes that equation i is a bad equation; the location of the suspect variable necessitates the fusion of not necessarily linear equations. Notice that these fusions, which correspond to the elimination of one variable between two equations, are not easy from an analytical point of view and, what is more, these fusions can not always be achieved, nor are they unique.

If we denote by Si the set of indices of the variables on which the ith equation explicitly depends, and by qij (= Si ∩ Sj) the intersection of the sets of variable indices of two equations, the fusion between the ith equation and the jth equation is possible only if qij ≠ ∅. Then, for each equation, we seek to explain (if it is possible) the variables X(p) (p ∈ qij) as a function of the other variables X(i) (i ∈ Si, i ≠ p).

Generally, the fusion of two equations induces the elimination of a single variable X(p), on condition that one can express this variable as a function of the other variables X(i) from at least one equation. We observe that the fusion of two equations is possible in several ways, depending on the variable that we eliminate. The resultant equation of the fusion is obtained by substituting this relation into the other equation.

To illustrate this method, we now consider the following example:

f1(X) = X(1) - log(X(2)) - X(3) X(2)^2    (9)

f2(X) = X(4) - X(2)^2 X(5)    (10)

S1 = {1, 2, 3},  S2 = {2, 4, 5},  q12 = {2}

By fusion of (9) and (10), we suppress the variable X(2): we seek to explain this variable X(2) as a function of the other variables. From (9), it is not possible to obtain this relation; but, from (10), we get X(2), which we substitute into equation (9). The resultant equation is:

X(1) - (1/2) log( X(4)/X(5) ) - X(3) X(4)/X(5) = 0
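The same elimination can be carried out symbolically; the small sketch below uses SymPy (our choice of tool, not one used in the paper) and reproduces the fused constraint:

    import sympy as sp

    x1, x2, x3, x4, x5 = sp.symbols('x1 x2 x3 x4 x5', positive=True)
    f1 = x1 - sp.log(x2) - x3 * x2**2        # equation (9)
    f2 = x4 - x2**2 * x5                     # equation (10)

    # Solve (10) for x2 (positive root) and substitute into (9)
    x2_expr = sp.solve(sp.Eq(f2, 0), x2)[0]
    fused = sp.simplify(f1.subs(x2, x2_expr))
    print(fused)                             # the fused constraint, free of x2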

B. A posteriori residuals generation

Besides the problem of the fusion of non-linear equations, the supplementary problem that we meet with this method is the computation of the estimate of the true values vector. A Newton-Raphson type iteration, coupled with the numerical elimination of n variables as functions of the v - n other variables, is found to be an effective solution strategy [6]. Notice that in the linear case we have an exact solution by direct matrix computation.

Summarizing, the different steps are:
- compute the estimate vector X̂;
- compute the least square residuals vector: Ê = X - X̂;
- compute its covariance matrix: VÊ = V g(X̂)^T [ g(X̂) V g(X̂)^T ]^(-1) g(X̂) V;
- compute the standardized least square residuals: EN = VÊ^(-1/2) Ê    (14)
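A sketch of these steps, reusing the finite-difference Jacobian above and assuming the same user-supplied constraint function f; the reconciliation is done here with a general-purpose constrained optimizer (SciPy's SLSQP) rather than the authors' Newton-Raphson elimination scheme:

    import numpy as np
    from scipy.optimize import minimize

    def a_posteriori_residuals(f, X_meas, V):
        # Weighted least squares estimate X_hat subject to f(X_hat) = 0
        Vinv = np.linalg.inv(V)
        obj = lambda x: (x - X_meas) @ Vinv @ (x - X_meas)
        sol = minimize(obj, X_meas, method='SLSQP',
                       constraints={'type': 'eq', 'fun': f})
        X_hat = sol.x
        E = X_meas - X_hat                           # least square residuals
        g = jacobian(f, X_hat)                       # Jacobian at the estimate
        GVG = g @ V @ g.T
        VE = V @ g.T @ np.linalg.solve(GVG, g @ V)   # linearized covariance of E
        EN = E / np.sqrt(np.diag(VE))                # standardized residuals (14)
        return X_hat, EN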
III. STATISTICAL ANALYSIS OF RESIDUALS

A given statistical threshold allows one to detect the faulty data. Then the isolation of the faults is achieved by analysing the residuals. A better isolation is achieved by optimizing the structure of the residual equations; moreover, forming a subset of secondary equations obtained by combining the basic model equations will improve this isolation. Despite the difficulties of equation aggregation in the general case, the method gives good results. Each component of the residual vector is compared with a critical test value (defined by the overall probability α of type I error). If at least one component is out of the confidence interval, this denotes the presence of a bad measurement; this measurement is then deleted by equation aggregation. As several errors might occur, the detection is iterative; to take into account the modification of the number of variables, we choose the overall probability β instead of α [7], with β = 1 - (1 - α)^(1/v), where v is the number of tested measurements.
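The adjusted probability β and the corresponding two-sided critical value can be obtained from the normal quantile function; a small sketch:

    from scipy.stats import norm

    def critical_value(alpha, v):
        # beta = 1 - (1 - alpha)**(1/v), then the two-sided normal quantile
        beta = 1.0 - (1.0 - alpha) ** (1.0 / v)
        return norm.ppf(1.0 - beta / 2.0)

    # critical_value(0.05, 8) ~ 2.72,  (0.05, 7) ~ 2.68,  (0.05, 6) ~ 2.63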
A. A priori residuals analysis

Stage 1: Computation of the residuals vector
R = f(X) = [ -55.61  53.41  -96.86  1409.48 ]T
Stage 2: Computation of the standardized imbalance residuals
RN = [ -3.53  4.41  -3.57  3.22 ]T

Stage 3: Search for "bad" equations
For the overall probability α = 0.05, we have a confidence interval [ -1.96 , 1.96 ]. We look for the components out of the confidence interval and we find that all equations are "bad".

Stage 4: Aggregation procedure
We compute the following fusions:
(1+2)2, (1+2)3, 2+3, 3+4, (1+2)2+3, 2+3+4, (1+2)2+3+4, (1+2)3+3+4
where the subscript refers to the variable deleted by the fusion. The search for the fault is summarized in table I: for each basic or aggregated equation, an entry 1 indicates that the variable occurs in a violated ("bad") equation, an entry 0 that it occurs in a satisfied equation, and "." that it does not occur; the logical product retains the variables that never appear in a satisfied equation. The method used is the same as in the linear case [1].

TABLE I
FAULTY SENSOR DETECTION

Equation          bad          1  2  3  4  5  6  7  8
aggregation       equation
1                 yes          1  1  1  .  .  .  .  .
2                 yes          .  1  1  1  .  .  .  .
3                 yes          .  .  .  1  1  1  .  .
4                 yes          .  .  .  .  .  1  1  1
(1+2)2            yes          1  .  1  1  .  .  .  .
(1+2)3            no           0  0  .  0  .  .  .  .
2+3               yes          .  1  1  .  1  1  .  .
3+4               no           .  .  .  0  0  .  0  0
(1+2)2+3          yes          1  .  1  .  1  1  .  .
2+3+4             yes          .  1  1  .  1  .  1  1
(1+2)2+3+4        yes          1  .  1  .  1  .  1  1
(1+2)3+3+4        no           0  0  .  .  0  .  0  0
Logical product                0  0  1  0  0  1  0  0
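The logical product of table I can be reproduced mechanically; the sketch below encodes each basic or aggregated equation by the variables it involves and its bad/good status (our own coding of the table):

    import numpy as np

    # occurrence[i, j] = 1 if equation i of table I involves variable j+1
    occurrence = np.array([
        [1,1,1,0,0,0,0,0],   # 1
        [0,1,1,1,0,0,0,0],   # 2
        [0,0,0,1,1,1,0,0],   # 3
        [0,0,0,0,0,1,1,1],   # 4
        [1,0,1,1,0,0,0,0],   # (1+2)2
        [1,1,0,1,0,0,0,0],   # (1+2)3
        [0,1,1,0,1,1,0,0],   # 2+3
        [0,0,0,1,1,0,1,1],   # 3+4
        [1,0,1,0,1,1,0,0],   # (1+2)2+3
        [0,1,1,0,1,0,1,1],   # 2+3+4
        [1,0,1,0,1,0,1,1],   # (1+2)2+3+4
        [1,1,0,0,1,0,1,1],   # (1+2)3+3+4
    ], dtype=bool)
    bad = np.array([1,1,1,1,1,0,1,0,1,1,1,0], dtype=bool)

    in_bad  = (occurrence &  bad[:, None]).any(axis=0)  # occurs in a violated equation
    in_good = (occurrence & ~bad[:, None]).any(axis=0)  # occurs in a satisfied equation
    suspected = in_bad & ~in_good                       # logical product of table I
    print(np.where(suspected)[0] + 1)                   # -> [3 6]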

The variables 3 and 6 are identified as "bad" variables. We must notice that the location of the failures can be more difficult. Besides the problem of symmetry in the linear case, the problem of the fusion of non-linear equations is not easy and is generally difficult to automate. The use of formal (symbolic) computation should permit, to a certain extent, the resolution of this problem.

B. A posteriori residuals analysis

Step 1: Compute the estimate vector
X̂ = [ 7.11  63.49  128.95  138.27  0.04  60.96  59.39  95.63 ]T

Step 2: Compute the standardized least square residuals vector
EN = [ 0.32  -1.40  7.10  0.56  2.71  4.60  -0.55  -0.52 ]T

Step 3: Location of the failed variable
For α = 0.05 and v = 8, we have β equal to 0.0064 and a confidence interval [ -2.72 , 2.72 ]. One observes that the 3rd measurement, corresponding to the largest value of |EN(i)| out of the confidence interval, denotes that variable 3 is a faulty variable.

Step 4: Elimination of the faulty measure
By fusion of (1) and (2): (1+2)3, we delete the variable 3.

Step 5: Compute the estimate vector
X̂ = [ 7.08  62.07  142.10  0.04  61.57  60.02  95.64 ]T

Step 6: Compute the standardized least square residuals vector
EN = [ 0.47  -0.73  -0.10  2.41  4.43  -0.80  -0.64 ]T

Step 7: Location of the failed variable
For α = 0.05 and v = 7, we have β equal to 0.0073 and a confidence interval [ -2.68 , 2.68 ]. One observes that the 6th measurement, corresponding to the largest value of |EN(i)| out of the confidence interval, denotes that variable 6 is a "bad" variable.

Step 8: Elimination of the faulty measure
By fusion of (3) and (4), we delete the variable 6.

Step 9: Compute the estimate vector
X̂ = [ 7.19  61.15  141.82  0.04  58.20  95.57 ]T

Step 10: Compute the standardized least square residuals vector
EN = [ -0.07  -0.34  -0.05  0.45  -0.07  -0.04 ]T

Step 11: Location of the failed variable
For α = 0.05 and v = 6, we have β equal to 0.0085 and a confidence interval [ -2.63 , 2.63 ]. One observes that all measures are "good".

We find again, as in the standardized imbalance method, that the variables 3 and 6 are at fault.
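The whole procedure of steps 1 to 11 can be summarized as an iterative loop; the sketch below reuses the helpers given earlier and abstracts the equation fusion (performed analytically in the paper) into a caller-supplied model-reduction routine:

    import numpy as np

    def iterative_detection(f, X_meas, V, reduce_model, alpha=0.05):
        # Repeatedly reconcile, test |EN| against the beta-adjusted critical value,
        # and eliminate the worst measurement until the remaining ones are accepted.
        faulty, variables = [], list(range(len(X_meas)))
        while True:
            X_hat, EN = a_posteriori_residuals(f, X_meas, V)
            if np.max(np.abs(EN)) <= critical_value(alpha, len(variables)):
                return faulty
            worst = int(np.argmax(np.abs(EN)))
            faulty.append(variables[worst] + 1)         # 1-based variable index
            # the caller supplies the fused model and the reduced data without 'worst'
            f, X_meas, V, variables = reduce_model(f, X_meas, V, variables, worst)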
IV. THE ASSUMPTION-BASED APPROACH

All the assumption-based approaches to diagnosis use the fact that the violation of a model equation indicates that at least one of its associated variables is invalid; moreover, if a model equation is not violated, all its individual variables are valid. The method proceeds as follows: establish the constraint equations of the concerned process, evaluate and check these constraints for their satisfaction, and identify the fault set based on a Boolean logic scheme. In order to clarify this logic scheme, let us define the following sets:

R    the set of residuals (Ri), i = 1, ..., n
S    the set of variables
Rn   the set of normal residuals (below a given threshold)
Rz   the set of abnormal residuals
ST   the set of variables occurring in the normal residuals of Rn
SF   the set of variables occurring in the abnormal residuals of Rz

Finally, the variables which are the possible fault origins are defined by the set

P = SF - ST
Let us illustrate the concept with the aid of the given example. All the variables X(i) are supposed to be measured. The assumption on a measured variable is that the corresponding sensor is not defective. A failure occurring on a sensor would cause a certain number of constraints to be violated. These constraint structures are given below (only in structural form):

f1 (X(1), X(2), X(3)) = 0
f2 (X(2), X(3), X(4)) = 0
f3 (X(4), X(5), X(6)) = 0
f4 (X(6), X(7), X(8)) = 0

Each of these constraints may be checked for consistency by evaluating an equation residual (which may be turned into a normalized form in order to take into account the different precisions of the measurements). To explain the analysis, let us suppose that the residuals 2 and 3 are violated. We then have:

ST = f1 ∪ f4 = { X(1), X(2), X(3), X(6), X(7), X(8) }
SF = f2 ∪ f3 = { X(2), X(3), X(4), X(5), X(6) }
P = SF - ST = { X(4), X(5) }

Subsets of P = { }, { X(4) }, { X(5) }, { X(4), X(5) }.
Among these subsets, { X(4) } and { X(4), X(5) } cover both f2 and f3. Therefore, taking the minimal cover, { X(4) } is the minimal subset and the diagnosis is that X(4) is probably faulty.
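This set manipulation is easy to reproduce; a minimal sketch in plain Python for the case where the residuals 2 and 3 are violated:

    from itertools import combinations

    structure = {1: {1, 2, 3}, 2: {2, 3, 4}, 3: {4, 5, 6}, 4: {6, 7, 8}}
    violated = {2, 3}

    ST = set().union(*(structure[i] for i in structure if i not in violated))
    SF = set().union(*(structure[i] for i in violated))
    P = SF - ST                                    # possible fault origins: {4, 5}

    # smallest subset of P that covers every violated constraint
    cover = min((set(c) for r in range(1, len(P) + 1)
                 for c in combinations(sorted(P), r)
                 if all(set(c) & structure[i] for i in violated)),
                key=len)
    print(P, cover)                                # {4, 5} {4}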
Although they suffer from several requirements, assumption-based methods have the great advantage of giving a logical analysis of the structures of the process equations according to the suspicious variables [8].

As we have just seen, the failure signatures are closely related to the structure of the residual equations. This structure may be characterized by an occurrence matrix containing 0 or 1 at any given position according to whether the corresponding equation depends or not on a variable [9]. For the given example, we have the following matrix:

TABLE II
OCCURRENCE MATRIX

      1  2  3  4  5  6  7  8
1     1  1  1  0  0  0  0  0
2     0  1  1  1  0  0  0  0
3     0  0  0  1  1  1  0  0
4     0  0  0  0  0  1  1  1

Inspecting this matrix shows that each failure is detectable, since no column of the matrix contains only zero elements; unfortunately, some columns are identical and therefore the corresponding faults are not isolable. This is the case for variables 2 and 3, and for variables 7 and 8. As this original set of equations does not satisfy the isolability requirements, we may look for a modified set obtained by a linear or a non-linear transformation of the basic set [10]. Hence, for the isolation of X(2) and X(3), two additional equations may be obtained, first by removing the element X(2) between f1 and f2 and second by removing the element X(3) between the same equations (which gives a new equation with the variables X(1), X(2) and X(4)). The modified occurrence matrix shows that the signatures of the faults on X(2) and X(3) become different. The reader should note that no transformation of the basic set of equations exists that would distinguish the signatures of X(7) and X(8). For complex systems, the preceding analysis may be realized in a decentralized way; see for example [11].
However, these notions may fail because the procedure evaluating the residual equations may become inaccurate due to model uncertainties and sensor noise. Non-Boolean methods must be used in order to improve the stability of the diagnosis. The consideration of the sensitivity of the model equations to assumption deviations [12], the use of belief functions on the residuals [13], and the fuzzification of the thresholds [3] are additional computations which may increase the capability to detect and isolate the faults.
V. BACK PROPAGATION NEURAL NET

In a new approach [14] and [15], it has been shown that auto-associative neural networks can be implemented to detect and eliminate bias or gross errors in measurement data subject to constraints. In this paper we want to highlight the use of back propagation neural net methods to detect gross errors. For this purpose, two structures may be used [16]. In the first one, the input layer consists of the process constraint residuals and the output layer consists of elements corresponding to the process variables. The state of the computational elements in the output layer of the net (one for each variable) indicates the presence (output value equal to 1) or the absence (output value equal to 0) of a gross error. In the second approach, the input layer is extended by joining the process variables to the residual vector. Numerical results are given to compare the two schemes and a discussion will be provided about the neural net parameters.
A neural net consists of a number of primitive processing units arranged in sequential layers; generally, a net is comprised of an input and an output layer which may be separated by one or more hidden layers. Units in different layers are interconnected and the strengths of these links are described by the weight matrix of the net. In the training step, which entails the modification of these weights, one may use a learning algorithm designed to minimize a function of the error between the desired and the actual output of the net. In this way, the net progressively forms an internal representation of the relationship between the input and the output data presented to it.

In our case, for the error detection and localisation objective, the inputs are the equation residuals and the outputs the numbers of the faulty variables. In fact we try to identify the relation X = f^(-1)(R) in order to recognize the faulty measurement from the knowledge of the residuals. In our experiment, a two-layer back propagation net, consisting of an input layer of c computational elements (the process constraints), an output layer of v computational elements (corresponding to the v process variables) and a hidden layer of p elements, was constructed to identify gross errors in the measured values of the X variables. The state of the elements in the output layer (one element for each variable) indicates the presence (output value = 1), the absence (output value = 0) or an ambiguous situation (value between 0 and 1 when the fault isolation is not perfect) of a gross error.
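A compact sketch of such a classifier, using scikit-learn's MLPClassifier as a stand-in for the authors' back-propagation implementation (the hidden-layer size and solver below are arbitrary choices of ours):

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def train_fault_classifier(residuals, fault_index, n_hidden=12):
        # residuals:   (n_cases, 7) normalized residual vectors
        # fault_index: (n_cases,)   index of the faulty variable, 0 for fault-free cases
        net = MLPClassifier(hidden_layer_sizes=(n_hidden,), activation='logistic',
                            solver='adam', max_iter=2000)
        net.fit(residuals, fault_index)
        return net

    # After training, net.predict_proba(r) plays the role of table V: values close
    # to 1 point at the suspected variable, values close to 0 exonerate it.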
Fig. 2. A two-layer artificial neural network

TABLE III
MEASUREMENTS

        1      2      3      4      5      6     7     8
1     7.32   52.4   91.3   162.1   0.040  64.0  59.6  149.4
2     6.76   58.8   105.5  153.2   0.034  67.5  51.9  101.1
3     6.09   64.4   112.1  158.1   0.0    62.4  57.2  290.8
4     6.77   70.5   153.6  130.6   0.026  69.5  56.2  882.2
5     7.94   62.5   132.8  131.3   0.032  63.3  59.3  250
6     7.55   67.7   157.5  121.9   0.034  59.1  67.9  -550
7     8.38   54.3   111.2  139.7   0.034  61.7  62.1  44.3
8     6.41   51.3   75.1   190.2   0.050  60.8  57.4  179.3
9     6.43   63.7   121.2  148.8   0.034  66.2  59.1  463.7

TABLE IV
RESIDUALS

        1      2      3      4      5      6      7
1     12.67   0.07   0.38   0.47   2.72   3.34   0.37
2      6.30   5.25   0.23   0.14   0.39   3.41   3.86
3      4.02   5.88   0.20   0.13   2.75   0.03   2.92
4      0.11   6.68   4.02   0.12   5.31   3.71   0.10
5      0.43   0.06   3.03   0.01   0.30   0.35   2.82
6      0.08   0.11   6.10   5.66   0.06   0.00   5.82
7      0.01   0.25   0.33   9.19   0.34   0.17   0.13
8      0.07   0.08   0.04   0.23   0.04   0.02   0.02
9      0.27   0.10   0.16   0.01  10.12   0.19   0.21

TABLE V
NEURAL NETWORK OUTPUTS

        1      2      3      4      5      6      7      8
1     0.998  0.004  0      0.003  0.000  0      0      0
2     0.002  0.997  0.002  0      0      0.002  0      0
3     0      0.002  0.997  0.002  0.001  0      0      0
4     0.002  0      0      0.996  0.023  0      0      0
5     0      0      0.001  0.001  0.995  0.004  0      0
6     0      0.002  0      0      0.002  0.996  0.002  0
7     0.001  0.002  0      0      0      0.003  0.998  0.002
8     0      0      0      0      0      0      0      0.252
9     0      0      0      0      0      0      0      0
The set of exemplars for the training consists of feature vectors containing the normalized residuals and the number of the faulty variable (only one for each exemplar); the data have been selected in a domain around the actual state of the process system. After the training has been achieved, the net may be used to detect errors in a given data set. The choice of the technical parameters of the learning algorithm is not discussed here. The use of different transfer functions was investigated and sigmoidal transfer functions were found to be the most effective. We only give in tables III, IV and V some numerical results pointing out the performance of the method. Generally, the error classification could fail in different ways: first of all, a gross error could pass unrecognized and be classified as a random error with no bias; secondly, a random error could be mistaken for a gross error; thirdly, an error on a variable may be detected but attributed to another variable.

The results given in tables III, IV and V are issued from a simulation of 900 cases. They correspond to 100 sets of 9 configurations, each with 8 variables randomly selected around the normal functioning point. For each set, the first configuration is characterized by a fault on the variable 1, the second by a fault on the variable 2, and so on, while the 9th configuration is fault free. Table III shows one of these sets; the columns correspond to the 8 variables and the rows to the 9 configurations. Table IV contains the 7 normalized residuals (the 4 independent residuals and the 3 residuals obtained by aggregation of the equations). Finally, table V presents the results of the classification of the neural network and shows that all the faults have been detected and identified (note however the imprecision of the detection of the fault on variable 8). For the 900 cases, 7 cases are not well recognized, among which 3 faults on variable 8 are not detected and 4 faults are overdetected.

VI. CONCLUSION

Several approaches have been described in this paper aimed at enhancing the detection and the localisation of gross errors. The underlying numerical techniques use parity equations of the observed system; these parity equations are here simply generated from the model of the process and correspond to its residual equations. The parity equations generate a failure signature which has been tested by different approaches: statistical analysis and neural network. From the point of view of detection, these approaches seem to be equivalent, although the computation times are not comparable.

Fault diagnosis by artificial neural network seems to be attractive; however, the training of the network may be time consuming. Indeed, in order to extend the applicability of this technique, it would be interesting to explore the ability of the network to diagnose multiple faults after an appropriate training. Statistical analysis of residuals with the help of equation aggregation seems to be very powerful for fault isolation. However, the method presents some numerical difficulties for the elimination of one variable between two non-linear equations. Some extensions are envisaged in this direction, using approximate elimination by local linearization of the equations.

From our own experience [17], a fuzzy pattern recognition approach may be helpful for the gross error detection problem. Indeed, the problem of a rigid threshold in the statistical analysis of residuals and the problem of training in the neural net approach may be avoided by this approach. Moreover, it allows the detection of drift faults that the two other techniques do not detect and recognize.
REFERENCES

J. Ragot, M. Darouach, D. Maquin, G. Bloch. Validation de données par équilibrage de bilan. Traité des nouvelles technologies, série diagnostic et maintenance. HERMES, juin 1990.

M.A. Kramer, J.A. Leonard. Diagnosis using back propagation neural networks - analysis and criticism. Computers Chemical Engineering, 14, p. 1323-1338, 1990.

D. Sauter, C. Aubrun, H. Noura, M. Robert. Fault diagnosis and reconfiguration of systems using fuzzy logic, application to a thermal plant. 6th IAR Kolloquium "Fuzzy signal processing and team production", Duisbourg, 1993.

D. Maquin, J. Ragot. Comparison of gross errors detection methods in process data. 30th IEEE Conference on Decision and Control, Brighton, december 1991.

F. Kratz. Utilisation des techniques de redondances matérielles et analytiques à la détection de pannes de capteurs. Application aux centrales nucléaires. juin 1991.

F. Kratz, J. Ragot, D. Maquin, A. Despujols. Detection of a bias in measurement using analytical redundancy. Proceedings of the 7th IMEKO symposium, Helsinki, p. 455-462, 1990.

R.S.H. Mah, A.C. Tamhane. Detection of gross errors in process data. AIChE Journal, 28 (5), 1982.

S.N. Kavuri, V. Venkatasubramanian. Combining pattern classification and assumption-based techniques for process fault diagnosis. Computers Chemical Engineering, 16 (4), p. 299-312, 1992.

M. Staroswiecki, V. Cocquempot, J.P. Cassar. Generation of analytical redundancy relations in a linear interconnected system. Proceedings of the IMACS MIM-S2, p. 2A3.1-2A3.6, Bruxelles, 1990.

J. Gda, K.C. Anderson. An evidential reasoning extension to quantitative model-based failure diagnosis. IEEE Trans. Systems, Man and Cybernetics, 22 (2), p. 275-289, 1992.

J.M. Koscielny. Rules of diagnostic of complex objects in decentralized structure. Proceedings of the IFAC Symposium, Delaware, p. 319-324, 1992.

T.F. Petti, J. Klein, P.S. Dhurjati. Diagnostic model processor: using deep knowledge for process fault diagnosis. AIChE Journal, 36 (4), p. 565-575, 1990.

M.A. Kramer. Malfunction diagnosis using quantitative models and non-boolean reasoning in expert systems. AIChE Journal, 33, p. 130-140, 1987.

K. Watanabe, I. Matsuura, M. Abe, M. Kubota, D.M. Himmelblau. Incipient fault diagnosis of chemical processes via artificial neural networks. AIChE Journal, 35 (11), p. 1803-1812, 1989.

M.A. Kramer. Autoassociative neural networks. Computers Chemical Engineering, 16 (4), p. 313-328, 1992.

C. Aldrich, J.S.J. van Deventer. The identification of systematic errors in steady state mineral processing systems by means of back propagation neural nets. Submitted to International Journal of Mineral Processing, 1993.

G. Mourot. Contribution au diagnostic des systèmes industriels par reconnaissance des formes. Thèse de l'Institut National Polytechnique de Lorraine, février 1993.
