

Final Version Submitted for Publication in ISA Transactions

Wavelet-based Information Filtering for Fault Diagnosis of Electric Drive Systems in Electric Ships

Abstract

Electric machines and drives have enjoyed extensive applications in the field of electric vehicles (e.g., electric ships, boats, cars, and underwater vessels) due to their ease of scalability and wide range of operating conditions. This stems from their ability to generate the desired torque and power levels for propulsion under various external load conditions. However, as with most electrical systems, electric drives are prone to component failures that can degrade their performance, reduce efficiency, and require expensive maintenance. Therefore, for safe and reliable operation of electric vehicles, there is a need for automated early diagnostics of critical failures such as broken rotor bars and electrical phase failures.

In this regard, this paper presents a fault diagnosis methodology for electric drives in electric ships. This methodology utilizes the two-dimensional (i.e., scale-shift) wavelet transform of the sensor data to filter optimal information-rich regions, which enhances the diagnosis accuracy as well as reduces the computational complexity of the classifier. The methodology was tested on sensor data generated from an experimentally validated simulation model of electric drives under various cruising speed conditions. The results, in comparison with other existing techniques, show a high correct classification rate with low false alarm and missed detection rates.

I. INTRODUCTION

The induction motor, and electric machine and drive systems in general, are the de facto standard in industry due to their consistency of speed control, cost effectiveness, and range of applications, including electric vehicles (e.g., electric ships, boats, cars, and underwater vessels) and other applications such as air handling systems, extruders, hoists, and conveyors. However, as with most electrical systems, electric drives are prone to component faults that can degrade their performance, reduce efficiency, and require expensive maintenance. Therefore, for safe and reliable operation of electric vehicles, there is a need for automated early diagnostics of critical failures (e.g., broken rotor bars and electrical phase failures) to provide early warnings that may help in system recovery and condition-based maintenance (CBM) [1]–[4].

This paper presents a fault diagnosis methodology for the electric drive systems in electric ships, with a focus on broken rotor bar faults and stator winding short-circuit faults. The stator winding short-circuit faults are among the most common types of faults, accounting for 21% of all faults in electric motor drive systems [5]. These faults mainly occur due to the unforeseen breakdown of insulation between components, which may happen across one or more of the phase windings in the stator or between a phase and nearby components. Typically, these faults are caused by intermittent voltage overloads, winding displacements due to mechanical vibrations, excessive heating, etc. Such faults produce high currents and additional heating, which results in further growth of the faults. Broken rotor bar faults, which are more difficult to detect, have also received notable attention. Broken rotor bars are generally caused by stresses from electromagnetic forces or overloaded operating conditions, inadequate rotor fabrication, and rotor component wear from poor operating environments or lack of maintenance [6]. Broken rotor bars lead to redistribution of current to the other bars, sparking, and torque fluctuations that cause premature wearing of bearings and other components.

Most of the techniques reported in the literature for motor fault diagnosis rely on current and/or voltage signal analysis. Since the above faults directly affect the current signals, the motor current-signature analysis (MCSA) technique has gained wide popularity for motor fault diagnosis [7]. Another advantage of the MCSA technique is that it is non-invasive. Using current-signature analysis, both time-domain methods (e.g., observer-based residual computation [8] and neural network based detection [9]) and frequency-domain methods [10] have been investigated. However, due to the periodic nature of motor rotation, it was observed that the effects of faults are well reflected in the frequency spectrum of the motor current. Thus, frequency-domain techniques were accepted in the literature as more reliable means for motor fault diagnosis. As such, methods such as the Fast Fourier Transform (FFT) were used to extract the most pertinent information for motor fault diagnosis [11]. However, due to the averaging properties of the FFT in the time domain, FFT-based methods proved to be insufficient for general operating conditions. Therefore, more recently, wavelet transform based methods were proposed, which provide both time and frequency resolution [10], [12].

The wavelet transform converts the one-dimensional time-series data into two-dimensional scale-shift data, which contains more information in the sense that it provides both time and frequency localization. However, despite the additional benefits of the two-dimensional information present in the wavelet domain, there is added computational complexity for machine learning based fault classification. Furthermore, the information pertinent to the fault classification problem might be hidden in localized regions of the wavelet domain. Thus, certain regions might contain useful information that facilitates better separation of classes, while other regions might not be useful for class separation. In fact, using information that is not useful for separating fault classes can degrade the performance of any well-known classifier. Thus, there exists a gap in understanding whether the entire two-dimensional domain of the wavelet transform is necessary for motor diagnosis.

In this regard, this paper presents a wavelet-based filtering method that selects the optimal information-rich regions in the wavelet domain which provide maximal separation between fault classes. The data from these regions is then used to extract compact features for training a classifier for fault diagnosis. The advantages are enhancement of the diagnosis accuracy as well as reduction of the computational complexity.

The fault diagnosis methodology is built upon the following four main processes: 1) wavelet transformation of the motor current time-series data, 2) filtering of optimal regions from the wavelet domain based on their available information content to separate different fault classes, 3) feature extraction via further reduction of the filtered data using Principal Component Analysis (PCA), and 4) pattern classification using a diagnostic-tree classifier to diagnose different faults in the system. The methodology was validated under various cruising speed conditions on the motor current data generated from an experimentally validated model of the electric drives [13], [14]. The results show a high correct classification rate with low false alarm and missed detection rates.

The main contributions of the paper are as follows:


• Development of a method for filtering of information-rich regions in the wavelet-domain for extraction of
useful features for enhancement of the fault classification accuracy.
• Construction of a diagnostic-tree classifier for sequential fault diagnosis.
• Testing and validation of the methodology using an experimentally validated simulation model of the motor
drive system for electric ships.

The paper is organized in six sections including the introduction. Section II provides the relevant background
information. Section III presents a brief description of the motor drive system in electric ships and provides the
details of data generation for the nominal and different faulty conditions. Section IV describes the wavelet-based
fault diagnosis methodology developed in this paper. Section V presents the results and discussion, and finally, the
paper is concluded in Section VI with suggestions for future work.

II. LITERATURE REVIEW

Technical literature reports several approaches for motor current-signature analysis (MCSA) for the purpose of fault diagnosis. Amongst these, neural network based classification methods have been commonly used. Schoen et al. [11] implemented an Artificial Neural Network (ANN) classifier for unsupervised, online learning of induction motor failures. Tallam et al. [9] extended the application of stator winding turn-fault detection for closed-loop induction motor drives based on ANNs. Murphey et al. [15] proposed a fault diagnostic ANN for single-switch and post-short-circuit faults. Martins et al. [16] proposed a Hebbian-based ANN for unsupervised, online diagnosis of stator faults utilizing vector current information. Ghate et al. [17] proposed an optimal multi-layer perceptron ANN and later explored cascaded ANN systems for induction motor fault detection [18].

Some methods based on residual computation have also been proposed. Kallesoe et al. [8] presented an observer-based estimation of interturn short-circuit faults in delta-connected induction motors. Tabbache et al. [19] implemented the Extended Kalman Filter (EKF) for residual generation of the motor parameters for sensor fault detection and post-fault tolerance. De Angelo et al. [7] generated vectorial residuals for stator-interturn short-circuit detection. Cheng et al. [20] proposed a fault detection and identification approach for stator-turn faults using the transfer impedance of closed-loop multiple-motor drives. Besides these, some machine learning methods have also been proposed [21]. Georgoulas et al. [22] applied Principal Component Analysis (PCA) with Hidden Markov Models (HMM) for broken rotor fault diagnosis in asynchronous machines. Tran et al. [23] proposed a feature selection of current sensors based on decision trees to implement a neuro-fuzzy inference system.
As discussed earlier, frequency-domain methods have been accepted as reliable diagnostic tools for motor fault diagnosis. Specifically, pattern classification using wavelet analysis [24] for fault diagnosis [25], [26] has gained recent attention. Mohammed et al. [27] implemented wavelets for fault diagnosis of permanent magnet machines using a recently validated model based on Finite Element Analysis (FEA). Ordaz-Moreno et al. [28] designed a broken bar detection algorithm based on discrete wavelets for FPGA implementation. Cusido et al. [10] utilized power spectral density techniques in wavelet decomposition for machine fault detection. Li et al. [29] applied wavelet-based kurtosis statistics for fault diagnosis in rolling bearings. Rajagopalan et al. [30] implemented the Zhao-Atlas-Marks distribution for nonstationary motor fault detection. Rosero et al. [31] utilized the Empirical Mode Decomposition (EMD) and Wigner-Ville distribution for short-circuit detection of permanent magnet machines. Sadeghian et al. [32] detected broken rotor bar faults using wavelet packets and neural networks. Konar et al. [33] utilized wavelet analysis with Support Vector Machines (SVM) for bearing fault detection. Seshadrinath et al. [34], [35] proposed dual-tree complex wavelets for interturn fault diagnosis, and more recently a classification methodology applying wavelet analysis for optimized Bayesian inference [12]. However, the applications of the above methods to electric vehicles have been limited [36]–[39].

The above methods have shown the utility of using the wavelet transform for extracting features for motor fault diagnosis. But, as mentioned earlier, there is still a gap in understanding whether the entire scale-shift domain of wavelets is necessary for fault classification or whether there are some specific regions which carry more pertinent information to separate classes. Furthermore, the wavelet transform converts the one-dimensional time-series data to two-dimensional data, thus adding complexity for data analysis. Moreover, the information in the wavelet domain that is not useful for separation of fault classes can in fact degrade the performance of any well-known classifier. Thus, this paper presents a novel filtering approach that takes advantage of the benefits of two-dimensional wavelet information while reducing the computational complexity by selecting the optimal regions in the wavelet domain to extract features for improving the overall classifier performance.

III. MODEL DESCRIPTION & DATA GENERATION

A simulation model of the electric motor drive [13], [14] that was developed and experimentally validated in [40]
is used for this research. The block diagram of the model is shown in Fig. 1. The system under study is a three-phase
induction motor drive operating under indirect Field-Oriented Control (FOC). The motor drive includes a 2300V
inverter fed from a 3500V dc source and connected to a 2300V/500A, four-pole, 2250 HP induction machine.

Fig. 1: Block Diagram of Electric Drive System for an Electric Ship with Fault Locations

Fig. 2: Driving Profile for the 1750 peak RPM

TABLE I: Fault Classes and Descriptions

Class Abbreviation | Fault Description
NOM                | Nominal (No Fault Present)
BR                 | Broken Rotor Bar
PP                 | Phase-to-Phase
SCG                | Short Circuit to Ground

TABLE II: Data Generation and Specifications

Simulation Parameters | Values
Fault Class           | NOM, BR, PP, SCG
Cruising Speed        | [1562.50 : 12.50 : 1750] RPM
Inputs                | Speed command (RPM), Rotor flux (V·sec)
Sensor Outputs        | Speed (RPM), Current (A)
Sampling Rate         | 1 kHz
Sensor Noise          | AWGN with 20 dB SNR

Typical motors of this rating are used for driving voluminous marine vessels, such as cargo ships, cruise ships, and
other watercraft.

Fig. 3: Stator current time-series data collected for the nominal condition and for different fault classes, at different
cruising speeds: a) 1575 RPM, b) 1625 RPM, c) 1675 RPM, and d) 1725 RPM.

Fig. 2 shows a typical drive schedule of an electric ship which is a trapezoidal profile consisting of the following
three velocity phases: i) start-up phase with an upward ramp, ii) steady state at the cruising speed, and iii)
deceleration phase with a downward ramp. The profile is scaled-down in time as compared to an actual ship
driving profile for faster execution. The phase times of the driving profiles of actual ships could be significantly
different especially for the cruising phase which depends on the total distance travelled. In order to study the
variations of the fault signatures with respect to the motor speed, sixteen trapezoidal profiles, similar to the one
shown in Fig. 2, were simulated with different cruising speeds ranging from 1562.5 RPM to 1750 RPM with
increments of 12.5 RPM. For this study, the input flux is set to 0.4 V·sec and the torque is set proportional to a
quadratic load. The sensor outputs of speed and current are collected after each simulation run.
Table I summarizes the typical faults of the electric drive system, and their locations are shown in Fig. 1 with red marks. The broken rotor bar (BR) faults were simulated by increasing the resistance of the squirrel-cage rotor. This amounts to about a 9-10% loss of torque compared to the Nominal (NOM) condition, as determined by collecting the torque values at steady state for the BR and NOM conditions. On the other hand, the short circuit (SC) faults included in the analysis are the short circuits from phase to ground (SCG) and from phase to phase (PP). These faults were simulated by shorting the desired locations using switches.

Furthermore, to model the effects of uncertainties, the sensor data is corrupted with Additive White Gaussian Noise (AWGN), which leads to a 20 dB SNR. For each cruising speed and vehicle health condition, four random instances of noise are generated that serve as additional observations of the sensor data. Table II shows the specifications for sensor data generation. Thus, in total, 256 data sets were generated from the combination of data collected for 4 classes (i.e., the nominal condition and the three fault classes), 16 different cruising speed profiles, and 4 instances of AWGN added to the sensor data. Figure 3 shows instances of the time-series data collected for the nominal and the faulty conditions at the 1575, 1625, 1675, and 1725 RPM cruising speeds. As seen in Fig. 3, the time-series data for different fault classes overlap, especially under the broken rotor bar and the nominal conditions; thus, they are hard to separate in the time domain. Therefore, this paper presents the wavelet-based method which showed promising results in separating the different fault classes, as explained below.

Fig. 4: Fault Diagnosis Methodology including the Training and the Testing Phases
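To make the data-generation step concrete, the following minimal sketch shows one way to corrupt a clean current waveform with AWGN at a 20 dB SNR. The drive simulation itself is not reproduced here; the synthetic `clean_current` is only a stand-in for the simulated stator current of one run.

```python
import numpy as np

def add_awgn(signal, snr_db=20.0, rng=None):
    """Corrupt a clean sensor signal with AWGN at the target SNR (in dB)."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return signal + rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)

# Stand-in for the simulated stator current of one (class, cruising speed) run,
# sampled at 1 kHz as specified in Table II.
t = np.arange(0.0, 1.0, 1e-3)
clean_current = np.sin(2 * np.pi * 60 * t)

# Four noisy observations per run, as described above.
observations = [add_awgn(clean_current, snr_db=20.0) for _ in range(4)]
```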

IV. FAULT DIAGNOSIS METHODOLOGY

This section presents the fault diagnosis methodology for classifying the signatures of the nominal condition and the three electric drive system faults described above. Figure 4 shows the architecture of the methodology, which is divided into training and testing phases. In the training phase, the system model is simulated for different cruising speed conditions. For each cruising speed, the simulation is run to generate sensor data for the nominal and the three faulty conditions as described in the previous section. Subsequently, the wavelet transform is computed for each time series of the stator current. Then the optimal regions in the wavelet domain, called pockets or cells, are filtered; these contain the information that maximizes the separation between different classes. The filtered data is then used for feature extraction and for training a classifier for fault diagnosis. In the testing phase, the trained classifier is applied to diagnose the system using current sensor data with an a priori unknown class. Further details are presented in the following subsections.

A. Wavelet Analysis of the Current Data

For a time-domain signal f(t) in the L^2(R) space, the signal can be expanded using a family of orthonormal wavelet functions, such that

\[ [W_\psi f](s,\tau) = \frac{1}{\sqrt{|s|}} \int_{\mathbb{R}} f(t)\, \psi\!\left(\frac{t-\tau}{s}\right) dt \qquad (1) \]

where ψ(t) is the mother wavelet, s = 1, ..., m and τ = 1, ..., n are the scale and translation parameters respectively, and [W_ψ f](s, τ) is the wavelet transform of the signal f(t) [25]. Wavelet analysis is an effective tool that extracts the two-dimensional scale-shift information from time-domain signals. Thus, the m × n wavelet coefficient matrix is generated from the current time-series data. The magnitude-square is computed from the wavelet data for mathematical convenience.
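As an illustrative sketch of this step (not the exact implementation used in the paper), the continuous wavelet transform of one current observation can be computed and squared as follows. The paper reports using the Meyer mother wavelet; the Morlet wavelet is used below only because PyWavelets' `cwt` does not provide a Meyer CWT, so it serves as a stand-in.

```python
import numpy as np
import pywt

def wavelet_power(signal, num_scales=25):
    """Return the m x n matrix |[W_psi f](s, tau)|^2 of Eq. (1) over num_scales scales."""
    scales = np.arange(1, num_scales + 1)
    # Morlet wavelet as a stand-in; the paper reports using the Meyer wavelet.
    coeffs, _ = pywt.cwt(signal, scales, 'morl')
    return np.abs(coeffs) ** 2          # magnitude-square, as described above

# Example with a synthetic 1 kHz current-like waveform
t = np.arange(0.0, 1.0, 1e-3)
signal = np.sin(2 * np.pi * 60 * t)
W = wavelet_power(signal, num_scales=25)   # shape (25, 1000)
```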

B. Partitioning of the Wavelet Domain

Once the wavelet transform is computed from the current data, the two-dimensional wavelet domain is partitioned into a series of regions called cells, as described here. Let a ∈ N+ and b ∈ N+ be the number of partitions of the scale and translation axes of the wavelet domain respectively, such that m mod a = 0 and n mod b = 0. Then the total number of cells is equal to ab and each cell is of size (m/a × n/b).

Now let (i, j) denote the index of any particular cell, where i ∈ {1, ..., a} and j ∈ {1, ..., b}. Then the contents, or the wavelet coefficients, inside this cell are given by

\[ W_{i,j} = [W_\psi f](S_i, T_j) \qquad (2) \]

where

\[ S_i = \left\{ (i-1)\tfrac{m}{a} + 1, \ldots, i\tfrac{m}{a} \right\} \quad \text{and} \quad T_j = \left\{ (j-1)\tfrac{n}{b} + 1, \ldots, j\tfrac{n}{b} \right\} \qquad (3) \]

are the subsets containing the indices of points inside the cell (i, j) along the scale and translation axes respectively. Further, for the sake of convenience, let the index of a cell be represented by a single parameter θ ∈ {1, ..., ab}, where θ = (i − 1)b + j, ∀i = 1, ..., a and j = 1, ..., b. Also, let R(θ) denote the data matrix for a cell θ such that

\[ R(\theta) = W_{i,j}^{T} \qquad (4) \]
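A minimal sketch of this partitioning step, assuming the wavelet power matrix `W` from the previous sketch and divisor choices a and b, is given below; the cell indexing follows Eqs. (2)-(4).

```python
import numpy as np

def partition_cells(W, a, b):
    """Split an m x n wavelet matrix into a*b cells of size (m/a) x (n/b).

    Returns a dict mapping theta = (i-1)*b + j (1-based, as in the text)
    to the cell data R(theta).
    """
    m, n = W.shape
    assert m % a == 0 and n % b == 0, "a must divide m and b must divide n"
    cells = {}
    for i in range(1, a + 1):
        for j in range(1, b + 1):
            rows = slice((i - 1) * m // a, i * m // a)   # indices S_i
            cols = slice((j - 1) * n // b, j * n // b)   # indices T_j
            cells[(i - 1) * b + j] = W[rows, cols]
    return cells

# Example: the 1 x 250 cells mentioned in Section V correspond to a = 25, b = 4
# for a 25 x 1000 wavelet matrix.
cells = partition_cells(np.random.rand(25, 1000), a=25, b=4)
```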

C. Filtering of Optimal Cells

Once the wavelet domain is partitioned into cells, a set of optimal cells is filtered from all cells; these contain the information that maximizes the separability between all classes, thus enhancing the classifier performance for fault diagnosis. Therefore, the filtering approach utilizes a metric that measures the separability between fault classes, as described here.

To begin with, let C^0 = {C_1, ..., C_N} represent the set of all classes, which is equal to {NOM, BR, SCG, PP} in this paper. Then pick any class, say C_α ∈ C^0, α ∈ {1, ..., N}. Now define the set C^0_α = {C^0 \ C_α}, |C^0_α| = N − 1, which contains all classes in C^0 excluding the class C_α. Let C_β ∈ C^0_α be any other class.

Now we describe the process of ranking each cell. Pick any cell θ ∈ {1, ..., ab}. As per Eq. (4), denote R_{C_α}(θ) and R_{C_β}(θ) to be the data matrices containing the wavelet coefficient data in the cell with index θ, when the data is generated for class C_α ∈ C^0 and C_β ∈ C^0_α, respectively. Furthermore, let P(R_•(θ)) be the probability distribution of the data in R_•(θ), where • is either C_α or C_β. This distribution is obtained by dividing the range of wavelet coefficient values into eight uniformly spaced intervals forming bins and computing the number of points falling inside each bin.

Then the efficacy of the cell θ ∈ {1, ..., ab} to separate the class pair C_α and C_β is measured by the total variation distance [41] between the probability distributions P(R_{C_α}(θ)) and P(R_{C_β}(θ)) as follows

\[ d_{C_\alpha,C_\beta}(\theta) = \frac{1}{2}\, \big\| P(R_{C_\alpha}(\theta)) - P(R_{C_\beta}(\theta)) \big\| \qquad (5) \]

In this manner, the distance d_{C_α,C_β}(θ) is computed for all cells θ ∈ {1, ..., ab} and the set of distances D_{C_α,C_β} = {d_{C_α,C_β}(1), d_{C_α,C_β}(2), ..., d_{C_α,C_β}(ab)} is constructed, which consists of the measures of all cells to separate the class pair C_α and C_β. Subsequently, the set D_{C_α,C_β} is sorted in descending order as follows

\[ d_{C_\alpha,C_\beta}(\theta^1) \ge d_{C_\alpha,C_\beta}(\theta^2) \ge \cdots \ge d_{C_\alpha,C_\beta}(\theta^{ab}) \qquad (6) \]

where θ^k ∈ {1, ..., ab}. This also defines the rank of cells, such that rank(θ^1) ≥ rank(θ^2) ≥ ... ≥ rank(θ^{ab}). Thus, the higher the distance a cell generates, the higher its rank. Then the set of the top r ranked cells which can maximally separate class C_α from C_β is obtained as

\[ \Theta_{C_\alpha,C_\beta} = \left\{ \theta^1, \ldots, \theta^r \right\} \qquad (7) \]

Similarly, the above process including Eqs. (5)-(7) is repeated for every other class C_β ∈ C^0_α to generate the corresponding optimal cells. Now, the optimal cells that can separate class C_α from all other classes are obtained from the intersection of the sets {Θ_{C_α,C_β}, ∀C_β ∈ C^0_α}, as follows:

\[ \Theta^{*}_{C_\alpha} = \bigcap_{C_\beta \in C^0_\alpha} \Theta_{C_\alpha,C_\beta} \qquad (8) \]

where the number of winning cells is denoted by |Θ^*_{C_α}| = η. Here Θ^*_{C_α} is the set of optimal cells for class C_α that can be used to separate it from all other classes. Now the above process is repeated to generate the set Θ^*_{C_α} for all classes C_α ∈ C^0, ∀α = 1, ..., N. However, this is done in a manner such that every time a class is separated, that class is excluded from the list. This forms a diagnostic-tree classifier which separates one class at each node using its optimal cells. Before delving into the details of the diagnostic-tree classifier construction, a data reduction method is presented below.
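The cell-ranking and filtering step can be sketched as follows, under the assumption that one representative cell partition per class is available (in practice the distributions would be estimated from the training observations of each class). The binning uses a shared range over the pair of cells being compared, which is one reasonable reading of the eight-bin description above.

```python
import numpy as np

def tv_distance(cell_a, cell_b, bins=8):
    """Total variation distance between the binned distributions of two cells (Eq. 5)."""
    lo = min(cell_a.min(), cell_b.min())
    hi = max(cell_a.max(), cell_b.max())
    pa, _ = np.histogram(cell_a, bins=bins, range=(lo, hi))
    pb, _ = np.histogram(cell_b, bins=bins, range=(lo, hi))
    return 0.5 * np.abs(pa / pa.sum() - pb / pb.sum()).sum()

def optimal_cells(cells_by_class, target, r):
    """Intersection of the top-r cell sets separating `target` from every other class (Eqs. 6-8).

    cells_by_class: {class_name: {theta: cell_data}}; returns the set Theta*_target.
    """
    winners = None
    for other, other_cells in cells_by_class.items():
        if other == target:
            continue
        d = {theta: tv_distance(cells_by_class[target][theta], other_cells[theta])
             for theta in other_cells}
        top_r = set(sorted(d, key=d.get, reverse=True)[:r])   # highest-ranked cells
        winners = top_r if winners is None else winners & top_r
    return winners
```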

D. Data Reduction for Classification

Once the wavelet data from the optimal cells is extracted, the Principal Component Analysis (PCA) method is applied for data reduction and feature extraction [42]–[44]. For this purpose, the data from the η winning cells of each class C_α are placed into a matrix X_{C_α}, where X_{C_α} = [R(θ), ∀θ ∈ Θ^*_{C_α}].

Using the Karhunen-Loève (KL) algorithm, the uncorrelated features, called Principal Components (PCs) or score vectors, are inferred from the data matrix X_{C_α} based on the variance maximization principle. These PCs capture the most information in the classes from the original data matrix, as can be seen by the formation of separable clusters in the feature space. The KL algorithm is briefly summarized as follows:

1) The covariance matrix Σ of X_{C_α} is computed, and the eigenvalues {λ_i} and the corresponding eigenvectors {e_i} are obtained, where i = 1, ..., η.
2) The eigenvalues are sorted and the q < η largest dominating eigenvalues are selected.
3) Using the q eigenvectors that correspond to the largest eigenvalues, a transformation matrix T is obtained which transforms the data set X_{C_α} into a feature vector Y_{C_α}, using Eq. (9) below.

\[ Y_{C_\alpha} = X_{C_\alpha} \times T \qquad (9) \]

The feature vector Y_{C_α} can be viewed as n feature points in a q-dimensional feature space. The above process is repeated for all classes C_α to obtain the feature points for all classes, which are included in the feature space. Furthermore, since electric ships could possibly operate at different cruising speeds, the feature vectors Y_{C_α} are obtained for different cruising speeds as described earlier. The feature space is then augmented by an additional axis of the cruising speed ω to make the approach adaptive to different cruising conditions.
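A compact sketch of the KL/PCA reduction is given below, assuming the filtered cell data has already been arranged into a matrix X with one column per winning cell (one plausible layout; the paper does not spell out the exact arrangement). The speed augmentation is shown as a comment with a hypothetical per-run value `speed_rpm`.

```python
import numpy as np

def kl_features(X, q):
    """Project the cell-data matrix X (n x eta) onto its q dominant principal
    directions and return the n x q score matrix Y of Eq. (9)."""
    Xc = X - X.mean(axis=0)                          # center the data
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    T = eigvecs[:, np.argsort(eigvals)[::-1][:q]]    # top-q eigenvectors as columns
    return Xc @ T

# The feature space is then augmented with the cruising speed as an extra axis,
# e.g. for a run at 1575 RPM (speed_rpm is a hypothetical per-run value):
# features = np.column_stack([kl_features(X, q=2), np.full(X.shape[0], speed_rpm)])
```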

E. Fault Classification using a Diagnostic Tree

The fault diagnosis approach is formulated as a diagnostic-tree classifier which separates one class from the rest at each node, as shown in Fig. 5. This approach is useful for sequential diagnosis when there are multiple classes and their optimal cells are different. Hence, each node of the diagnostic tree uses the optimal cells for the class that is separated at that node. First, the tree performs optimization to identify the best class to separate at each level. Then, a classifier is trained to isolate that class using the feature data extracted from the optimal cells for that class. The construction of the tree is explained here.

At node 1 of the tree, the full class set C^0 = {C_1, C_2, ..., C_N} forms the entering class set. Then the most separable class, say φ^1 ∈ C^0, is separated from the others using the optimal cells for φ^1. Then the other branch of the tree starts at node 2 with the entering class set C^1 = {C^0 \ φ^1}. The above process is repeated at all nodes until all classes are separated. Now we explain how the optimal cells and the class are obtained at every level of the tree for sequential classification.

In general, let ℓ represent a certain level of the tree, where ℓ = 1, ..., N − 1. Let C^{ℓ−1} be the entering class set at level ℓ of the tree, such that it contains |C^{ℓ−1}| = N − ℓ + 1 classes. Further, let C_α ∈ C^{ℓ−1} be any class that has not already been separated at a previous level. Let C^{ℓ−1}_α = {C^{ℓ−1} \ C_α} be the set of other remaining classes, such that it contains |C^{ℓ−1}_α| = N − ℓ classes. Now the optimal cells to separate class C_α ∈ C^{ℓ−1} from all other classes C_β ∈ C^{ℓ−1}_α are collected in a set Θ^{ℓ*}_{C_α}, which is obtained using the procedure described in Eqs. (5)-(8). However, as described above, the procedure is performed only over the classes available at level ℓ, i.e., the class set C^{ℓ−1}_α. Similarly, the optimal cells Θ^{ℓ*}_{C_α} are obtained for every class C_α ∈ C^{ℓ−1} at level ℓ of the tree.

Now the optimal class φ^ℓ ∈ C^{ℓ−1} to be separated at level ℓ of the tree is obtained as follows. First, the total separability measure of each class C_α ∈ C^{ℓ−1} is computed as

\[ \Delta^{\ell}_{C_\alpha} = \frac{1}{\eta} \sum_{C_\beta \in C^{\ell-1}_\alpha} \; \sum_{\theta \in \Theta^{\ell*}_{C_\alpha}} d_{C_\alpha,C_\beta}(\theta) \qquad (10) \]

Then, the optimal class to separate at level ℓ is

\[ \phi^{\ell} = \arg\max_{C_\alpha \in C^{\ell-1}} \left( \Delta^{\ell}_{C_\alpha} \right) \qquad (11) \]

The exit class set of level ℓ, which forms the entering class set of level ℓ + 1, is then given as

\[ C^{\ell} = \{ C^{\ell-1} \setminus \phi^{\ell} \} \qquad (12) \]

Figure 5 shows the tree for the electric ship fault diagnosis problem, and the table underneath describes the optimal class separated as well as the entering class set at each level. This tree will be further discussed in the results section. At each level of the tree, the optimal class is selected using Eq. (11) and features are extracted from the optimal cells for that class using the method described in Section IV-D. Subsequently, a classifier is constructed at each level to separate the optimal class vs. the rest. The k-Nearest Neighbor (k-NN) classifier is used in this paper, which acts according to the majority vote rule where any test point is assigned to the class with the majority occurrence among its k nearest neighbors [42] in the feature space. Since at each level of the diagnostic tree the classifier makes only binary decisions for separation of the optimal class from the rest, the feature data of all classes other than the optimal class are grouped together and re-labeled. It was observed that the binary tree architecture simplifies the construction of the classifier and also improves the classification accuracy.

Fig. 5: Diagnostic Tree
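The per-level logic can be sketched as follows. The separability totals Δ are assumed to have been computed from Eq. (10); `delta_by_class`, `features`, and `labels` are hypothetical containers for that intermediate data, and the neighborhood size k = 5 is an assumed value since the paper does not report the one used.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def select_optimal_class(delta_by_class):
    """Eq. (11): pick the class with the largest total separability Delta at this level."""
    return max(delta_by_class, key=delta_by_class.get)

def train_level_classifier(features, labels, optimal_class, k=5):
    """Train one binary k-NN node: `optimal_class` vs. all remaining classes.

    features: n x d array of PCA scores augmented with cruising speed;
    labels: length-n array of class names. k is an assumed value.
    """
    binary = np.where(np.asarray(labels) == optimal_class, optimal_class, "REST")
    return KNeighborsClassifier(n_neighbors=k).fit(features, binary)

# Sequential construction sketch (Eqs. 10-12): one level per separated class.
# remaining = {"NOM", "BR", "SCG", "PP"}
# while len(remaining) > 1:
#     phi = select_optimal_class(compute_deltas(remaining))       # hypothetical helper
#     node = train_level_classifier(feats(remaining), labs(remaining), phi)
#     remaining.remove(phi)
```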

F. Training and Testing of the Tree

In the training phase, the diagnostic tree is constructed and fixed. The tree construction includes the following at each level: the optimal class to be separated, the optimal cells for that class, and the corresponding classifier. In the testing phase, decisions are made for new test data with an unknown class label. Starting with ℓ = 1, the test data is transformed into the wavelet domain. Then the optimal cells for the optimal class at level ℓ = 1, which were found in the training phase, are used to extract features. Subsequently, the classifier for this level is employed to make a binary decision between the optimal class and the rest. If the decision happens to be that of the optimal class, then the operation stops. Otherwise, the operation moves down to the next level of the tree and the same process is repeated, until the bottom of the tree is reached and a final decision is made.

The performance of the overall diagnostic tree classifier was evaluated using the K-fold Cross-Validation (CV) algorithm, where K−1 data sets are randomly selected for training the classifier while the remaining one is used for testing, and this process is repeated K times. In this paper K = 256, which is equal to the total number of data sets generated from different simulation runs. At each iteration, the output of the diagnostic tree is recorded into a confusion matrix, which compares the classifier decisions against the actual class labels of the test data.
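Since K equals the number of data sets (256), the evaluation amounts to leave-one-out cross-validation. A minimal sketch of that loop is shown below; `train_tree` and `predict_tree` are hypothetical stand-ins for the full training and testing procedures of the diagnostic tree described above.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import confusion_matrix

def evaluate_tree(datasets, labels, train_tree, predict_tree):
    """Leave-one-out CV over all data sets; returns the aggregated confusion matrix."""
    datasets, labels = np.asarray(datasets), np.asarray(labels)
    predictions = []
    for train_idx, test_idx in LeaveOneOut().split(datasets):
        tree = train_tree(datasets[train_idx], labels[train_idx])        # training phase
        predictions.append(predict_tree(tree, datasets[test_idx[0]]))    # testing phase
    return confusion_matrix(labels, predictions)
```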

V. RESULTS & DISCUSSION

This section presents the results for fault diagnosis of the motor drives of electric ships, obtained by applying the wavelet-based methodology presented in this paper. First, the effect of faults is observed in the wavelet transforms of the current sensor data. Figures 6a i)-iv) show examples of the wavelet transforms of the current data over m = 25 scales generated at the 1575 RPM cruising speed. The wavelet data is generated for the four classes studied in this paper, i.e., the nominal (NOM) condition and the three fault classes (BR, PP, and SCG), respectively. After experimentation with different mother wavelets, the Meyer wavelet was chosen since it provided the best classification accuracy. As seen in Figures 6a i)-iv), the wavelet transforms capture the changes between different classes, especially between the NOM and BR classes. However, the information is dispersed over different regions in the wavelet domain, which need to be filtered for improving classification accuracy. In this respect, Figs. 6b i)-iv) show the corresponding optimal cells of size 1 × 250 that were filtered for each class using the procedure described in Section IV-C.

The optimal class separated at each level is found using the diagnostic tree as explained in Sec. IV-E. From the results of the diagnostic tree, shown in Fig. 5, it is observed that the best class to separate at ℓ = 1 is φ^1 = PP from C^1 = {NOM, BR, SCG}, at ℓ = 2 it is φ^2 = SCG from C^2 = {NOM, BR}, and at ℓ = 3 it is φ^3 = BR from C^3 = {NOM}. When testing data of an unknown class using the diagnostic tree, the wavelet transform of the testing data is first filtered using the optimal cells at ℓ = 1 to separate PP from the rest {NOM, BR, SCG}. The data of the optimal cells is passed through PCA and subsequently the k-NN classifier is applied to obtain a decision, as explained in Sections IV-D and IV-E, respectively. If the decision on the unknown-class data is PP, then the decision is finalized and the algorithm stops. If it is not PP, then the algorithm moves down the tree and the wavelet transform of the testing data is filtered with the optimal cells that can separate SCG from {NOM, BR}, and subsequently passed through PCA and the corresponding classifier. If the decision is SCG, the tree operation terminates. If it is not SCG, then the operation moves further down the tree and the wavelet data is filtered with the optimal cells that can separate NOM from BR, and a decision is obtained using the corresponding classifier. Since this is the last level of the diagnostic tree, the operation terminates at this step.
Figures 7a)-c) show the feature space generated at each level of the diagnostic tree: a) level ℓ = 1, which separates PP from C^1 = {NOM, BR, SCG}, b) level ℓ = 2, which separates SCG from C^2 = {NOM, BR}, and c) level ℓ = 3, which separates BR from C^3 = {NOM}. As seen, the three dimensions of the feature spaces consist of two principal components and the cruising speed. The class colors for {PP, SCG, BR, NOM} are blue, pink, cyan, and green, respectively. The colored data represents the class that is separated at each level, while all other classes are shown in black.

Fig. 6: (a) Wavelet transform data for the four classes; (b) optimal cells filtered for each class using Eq. (8). Top row shows the wavelet transform for the different classes NOM, PP, SCG, and BR at the 1575 RPM cruising speed. Bottom row shows the corresponding optimal cells.

Fig. 7: Feature space generated at: a) level 1 - separates PP (blue) vs. C^1 = {NOM, BR, SCG} (black), b) level 2 - separates SCG (pink) vs. C^2 = {NOM, BR} (black), and c) level 3 - separates BR (cyan) vs. C^3 = {NOM} (black). The third axis in all plots is the cruising speed in RPM.

The fault diagnosis methodology presented in this paper is evaluated in comparison with several different existing methods. For this purpose, we chose different feature extractor and classifier combinations for data analysis, as shown in Table III. For the evaluation of each feature extractor and classifier combination, the K-fold cross-validation process [42] is employed as explained earlier. Each diagnosis decision is recorded into the confusion matrix. In the ideal case the prediction should match the actual class; thus a confusion matrix with large tallies on the diagonal indicates an accurate classifier. The Correct Classification Rate (CCR) is computed by taking the trace of the confusion matrix and dividing by the sum of all entries. Similarly, the False Alarm Rate (FA) is computed by taking the sum of all entries in the c0 row that are not predicted as c0 and dividing it by the total sum of all entries in that row. Also, the Missed Detection Rate (MD) is computed by taking the sum of all entries that are predicted to be c0 but that belong to classes other than c0 and dividing it by the total sum of all entries for all rows corresponding to classes other than c0. Table III provides the confusion matrices, CCRs, FAs, and MDs for the different feature and classifier combinations. The computational times per testing time-series data for these different techniques are summarized in Table IV. During the testing phase, a trained classifier can be used to directly produce a diagnostic decision: it takes the time-series data of the motor current as input and provides the fault class as its output.
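These three metrics can be computed directly from a confusion matrix. In the sketch below, rows are actual classes and columns are predicted classes, and the first row/column is taken to be the nominal class c0, as implied by the false-alarm definition above; the example matrix is the k-NN result from the last row of Table III.

```python
import numpy as np

def performance_metrics(cm):
    """CCR, FA, and MD from a confusion matrix (rows: actual, cols: predicted; c0 first)."""
    ccr = np.trace(cm) / cm.sum()                       # correct classification rate
    fa = (cm[0, :].sum() - cm[0, 0]) / cm[0, :].sum()   # nominal classified as faulty
    md = cm[1:, 0].sum() / cm[1:, :].sum()              # faulty classified as nominal
    return ccr, fa, md

# Example: Optimal Cell Selection -> LDA with k-NN (last row of Table III)
cm = np.array([[63, 1, 0, 0],
               [0, 64, 0, 0],
               [0, 0, 64, 0],
               [0, 0, 0, 64]])
print(performance_metrics(cm))   # ~ (0.9961, 0.0156, 0.0), i.e., CCR 99.61%, FA 1.56%, MD 0%
```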
As seen in Table III, the first row shows the results when the time-series data of the stator current were analyzed using PCA to extract the principal components as the class features, which were then sent to the different (k-NN, SVM, and C4.5) classifiers. The second row shows the results when the time-series data of the stator current were analyzed using the Linear Discriminant Analysis (LDA) method [42] for feature extraction and then sent to the different (k-NN, SVM, and C4.5) classifiers. The above two methods were the fastest; however, they did not produce good overall results in terms of CCRs, FAs, and MDs.

In the second approach, the wavelet coefficients were first computed from the data and applied directly to PCA and LDA respectively, without filtering, to generate the features. This set of results is shown in rows 3 and 4 of Table III. Since the wavelet transform generates the two-dimensional shift-scale information from the one-dimensional time-series data, it can be seen that using wavelets improved the results, yielding higher CCRs and lower FAs and MDs for all classifiers; however, the corresponding computation time increased slightly.
Finally, the proposed approach of optimal cell filtering was employed to select the information-rich cells in the wavelet domain which contain the maximum information to separate the different classes. This filtered data was then applied to PCA and LDA and then sent to the different classifiers, and the results are shown in the bottom two rows of Table III. As seen, the proposed approach significantly improved the classifier performances and resulted in improved CCRs and lower FAs and MDs.

TABLE III: Confusion Matrices and Performance Values for Various Techniques.
(Confusion matrix rows: actual class c0-c3; columns: predicted class c0-c3.)

1) Time Series Data → PCA
   k-NN:  [48 16 0 0; 19 44 1 0; 4 4 56 0; 0 0 0 64]
          CCR = 82.81%, FA = 25.00%, MD = 11.98%
   SVM:   [38 26 0 0; 7 57 0 0; 0 0 64 0; 0 0 0 64]
          CCR = 87.11%, FA = 40.63%, MD = 3.65%
   C4.5:  [52 11 1 x; 16 46 2 0; 3 2 59 0; 0 0 0 64]
          CCR = 86.33%, FA = 18.75%, MD = 9.89%

2) Time Series Data → LDA
   k-NN:  [39 21 4 0; 25 33 6 0; 11 10 43 0; 0 0 0 64]
          CCR = 69.92%, FA = 39.06%, MD = 18.75%
   SVM:   [34 21 9 0; 25 31 8 0; 17 14 33 0; 0 0 0 64]
          CCR = 63.28%, FA = 46.87%, MD = 21.88%
   C4.5:  [37 22 5 0; 23 35 6 0; 6 7 51 0; 0 0 0 64]
          CCR = 73.05%, FA = 42.19%, MD = 15.1%

3) Time Series Data → Wavelet Transform → PCA
   k-NN:  [57 3 3 1; 0 63 1 0; 1 2 60 1; 4 1 1 58]
          CCR = 92.97%, FA = 10.94%, MD = 2.60%
   SVM:   [48 11 0 5; 0 64 0 0; 0 0 64 0; 5 1 2 56]
          CCR = 90.63%, FA = 25.00%, MD = 2.60%
   C4.5:  [54 2 0 8; 1 63 0 0; 0 0 64 0; 3 0 0 61]
          CCR = 94.53%, FA = 15.62%, MD = 2.08%

4) Time Series Data → Wavelet Transform → LDA
   k-NN:  [57 0 7 0; 0 64 0 0; 9 0 55 0; 0 0 0 64]
          CCR = 93.75%, FA = 10.94%, MD = 4.69%
   SVM:   [57 0 7 0; 0 64 0 0; 12 1 51 0; 7 0 0 57]
          CCR = 89.45%, FA = 10.94%, MD = 4.89%
   C4.5:  [57 0 7 0; 0 63 1 0; 9 0 55 0; 0 0 0 64]
          CCR = 93.36%, FA = 10.94%, MD = 4.69%

5) Time Series Data → Wavelet Transform → Optimal Cell Selection → PCA
   k-NN:  [60 4 0 0; 0 64 0 0; 0 0 64 0; 0 0 0 64]
          CCR = 98.44%, FA = 6.25%, MD = 0.00%
   SVM:   [58 6 0 0; 1 63 0 0; 0 0 64 0; 2 1 0 61]
          CCR = 96.10%, FA = 9.38%, MD = 1.56%
   C4.5:  [60 4 0 0; 0 64 0 0; 0 0 64 0; 0 0 0 64]
          CCR = 98.44%, FA = 6.25%, MD = 0%

6) Time Series Data → Wavelet Transform → Optimal Cell Selection → LDA
   k-NN:  [63 1 0 0; 0 64 0 0; 0 0 64 0; 0 0 0 64]
          CCR = 99.61%, FA = 1.56%, MD = 0.00%
   SVM:   [58 0 6 0; 0 64 0 0; 6 0 58 0; 2 1 0 61]
          CCR = 94.14%, FA = 9.38%, MD = 4.17%
   C4.5:  [63 1 0 0; 0 64 0 0; 0 0 64 0; 0 0 0 64]
          CCR = 99.61%, FA = 1.56%, MD = 0.00%

TABLE IV: Computational Testing Times per Time Series Observation.

Feature Extraction Process                                            | k-NN   | SVM    | C4.5
Time Series Data → PCA                                                | 0.012s | 0.011s | 0.006s
Time Series Data → LDA                                                | 0.002s | 0.002s | 0.001s
Time Series Data → Wavelet Transform → PCA                            | 0.255s | 0.354s | 0.107s
Time Series Data → Wavelet Transform → LDA                            | 0.2s   | 0.082s | 0.133s
Time Series Data → Wavelet Transform → Optimal Cell Selection → PCA   | 0.145s | 0.249s | 0.178s
Time Series Data → Wavelet Transform → Optimal Cell Selection → LDA   | 0.154s | 0.053s | 0.044s

Overall, it can be seen from Table III that the results progressively improved from the time-domain based methods (rows 1 and 2 of the table) to the wavelet-domain based methods (rows 3 and 4 of the table) to the proposed wavelet-domain based filtering method (rows 5 and 6 of the table). As discussed in the introduction, the wavelet-domain based methods provided better classification performance than the time-domain based methods due to the more subtle fault information present in the two-dimensional wavelet domain. The proposed wavelet-domain based filtering method further improved the performance of the wavelet-domain based methods by extracting the information-rich regions in the wavelet domain which can maximally separate the fault classes. Thus, it is observed that while the wavelet domain can enhance the class separability, there are certain regions in the wavelet domain that contain the most useful information and which can further enhance the classifier performance.

All the above results were generated using a personal computer running Windows 7 Enterprise SP1 64-bit, with an Intel(R) Core(TM) i5-2400 CPU @ 3.1 GHz and 16 GB RAM.

VI. CONCLUSIONS & FUTURE WORK

This paper presented a method for fault diagnosis in electric drive systems with applications to electric vehicles, in particular electric ships. The proposed diagnosis method utilizes a wavelet-based filtering approach for feature extraction, where optimal cells in the wavelet domain are selected which provide maximum separability between classes. In addition, a diagnostic tree was constructed to classify the wavelet-based features. The proposed approach was validated in comparison with several different feature extraction and classifier combinations. It was shown that the proposed filtering approach significantly improved the classifier performances and resulted in improved CCRs and lower FAs and MDs. The machine learning framework was trained to be robust to uncertainties while also being adaptive to varying cruising speeds. Furthermore, all the classifier training steps can be performed off-line; thus, in the implementation phase, the method needs only a small computational time to achieve a consistently high degree of accuracy.
Future work consists of the following directions:
• Online implementation of the diagnostic tool on an experimental test-bed.
• Extension of the proposed method to electric vehicles with different driving schedules (e.g., a typical driving profile in New York City), where the driving input can be broken down into stop-and-go traffic dynamics.
• Inclusion of environmental factors in the data for fault diagnosis.
• Extension of the current work to include a larger set of electric drive faults.
• Development of a supervisory control approach [45] for resilience to motor faults in electric vehicles.

ACKNOWLEDGEMENTS

The authors would like to acknowledge the support provided by Khushboo Mittal in the comparison of the proposed wavelet-based filtering method with other existing techniques in the literature.

REFERENCES

[1] R. Schoen, T. Habetler et al., “Motor bearing damage detection using stator current monitoring,” IEEE Transactions on Industry
Applications, vol. 31, no. 6, pp. 1274–1279, Dec. 1995.
[2] A. Siddique, G. Yadava, and B. Singh, “A review of stator fault monitoring techniques of induction motors,” IEEE Transactions on
Energy Conversion, vol. 20, no. 1, pp. 106–114, Mar. 2005.
[3] R. Tallam, S. Lee et al., “A survey of methods for detection of stator-related faults in induction machines,” IEEE Transactions on
Industry Application, vol. 43, no. 4, pp. 920–933, Aug. 2007.
[4] A. Gandhi, T. Corrigan, and L. Parsa, “Recent advances in modeling and online detection of stator interturn faults in electrical motors,”
IEEE Transactions on Industrial Electronics, vol. 58, no. 5, pp. 1564–1575, May 2011.
[5] A. Bonnett and C. Yung, “Increased efficiency versus increased reliability,” IEEE Industry Applications Magazine, vol. 14, no. 1, pp.
29–36, Feb. 2008.
[6] A. Bonnett and G. Soukup, “Cause and analysis of stator and rotor failures in three-phase squirrel-cage induction motors,” IEEE
Transactions on Industry Applications, vol. 28, no. 4, pp. 921–937, Aug. 1992.
[7] C. De Angelo, G. Bossio et al., “Online model-based stator-fault detection and identification in induction motors,” IEEE Transactions
on Industrial Electronics, vol. 56, no. 11, pp. 4671–4680, Nov. 2009.
[8] C. Kallesoe, R. Izadi-Zamanabadi et al., “Observer-based estimation of stator-winding faults in delta-connected induction motors: A
linear matrix inequality approach,” IEEE Transactions on Industry Applications, vol. 43, no. 4, pp. 1022–1031, July 2007.
[9] R. Tallam, T. Habetler, and R. Harley, “Stator winding turn-fault detection for closed-loop induction motor drives,” IEEE Transactions
on Industry Applications, vol. 39, no. 3, pp. 720–724, May 2003.
[10] J. Cusido, L. Romeral et al., “Fault detection in induction machines using power spectral density in wavelet decomposition,” IEEE
Transactions on Industrial Electronics, vol. 55, no. 2, pp. 633–643, Feb. 2008.
[11] R. Schoen, B. Lin et al., “An unsupervised, on-line system for induction motor fault detection using stator current monitoring,” IEEE
Transactions on Industry Applications, vol. 31, no. 6, pp. 1280–1286, Dec. 1995.

[12] J. Seshadrinath, B. Singh, and B. Panigrahi, “Incipient interturn fault diagnosis in induction machines using an analytic wavelet-based
optimized bayesian inference,” IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 5, pp. 990–1001, May 2014.
[13] A. Bazzi, A. Dominguez-Garcia, and P. Krein, “Markov reliability modeling for induction motor drives under field-oriented control,”
IEEE Transactions on Power Electronics, vol. 27, no. 2, pp. 534–546, Feb. 2012.
[14] A. Bazzi and P. Krein, “Utilization of median filters in power electronics: Traction drive applications,” in 2013 Twenty-Eighth Annual IEEE Applied Power Electronics Conference and Exposition (APEC), Mar. 2013, pp. 3055–3060.
[15] Y. Murphey, M. Masrur et al., “Model-based fault diagnosis in electric drives using machine learning,” IEEE/ASME Transactions on
Mechatronics, vol. 11, no. 3, pp. 290–303, Jun. 2006.
[16] J. Martins, V. Pires, and A. Pires, “Unsupervised neural-network-based algorithm for an on-line diagnosis of three-phase induction
motor stator fault,” IEEE Transactions on Industrial Electronics, vol. 54, no. 1, pp. 259–264, Feb. 2007.
[17] V. Ghate and S. Dudul, “Optimal mlp neural network classifier for fault detection of three phase induction motor,” Expert Systems with
Applications, vol. 37, no. 4, pp. 3468–3481, Apr. 2010.
[18] ——, “Cascade neural-network-based fault classifier for three-phase induction motor,” IEEE Transactions on Industrial Electronics,
vol. 58, no. 5, pp. 1555–1563, May 2011.
[19] B. Tabbache, M. Benbouzid et al., “Dsp-based sensor fault detection and post fault-tolerant control of an induction motor-based electric
vehicle,” International Journal of Vehicular Technology, vol. 2012, no. 1, pp. 1–7, Nov. 2012.
[20] S. Cheng, P. Zhang, and T. Habetler, “An impedance identification approach to sensitive detection and location of stator turn-to-turn
faults in a closed-loop multiple-motor drive,” IEEE Transactions on Industrial Electronics, vol. 58, no. 5, pp. 1545–1554, May 2011.
[21] A. Silva, S. Gupta, and A. Bazzi, “Fault diagnosis in electric drives using machine learning approaches,” in IEEE International Electric
Machines and Drives Conference (IEMDC), May 2013, pp. 722–726.
[22] G. Georgoulas, M. Mustafa et al., “Principal component analysis of the start-up transient and hidden markov modeling for broken rotor
bar fault diagnosis in asynchronous machines,” Expert Systems with Applications, vol. 40, no. 17, pp. 7024–7033, Dec. 2013.
[23] V. Tran, B. Yang et al., “Fault diagnosis of induction motor based on decision trees and adaptive neuro-fuzzy inference,” Expert Systems
with Applications, vol. 36, no. 2, pp. 1840–1849, Mar. 2009.
[24] X. Jin, S. Gupta, K. Mukherjee, and A. Ray, “Wavelet-based feature extraction using probabilistic finite state automata for pattern
classification,” Pattern Recognition, vol. 44, no. 11, pp. 1343–1356, July 2011.
[25] S. Mallat, A wavelet tour of signal processing: the sparse way. Burlington, MA: Academic Press, 2008.
[26] K. Gaeid, H. Ping et al., “Survey of wavelet fault diagnosis and tolerant of induction machines with case study,” International Review
of Electrical Engineering, vol. 27, no. 3, pp. 4437–4456, June 2012.
[27] O. Mohammed, Z. Liu et al., “Internal short circuit fault diagnosis for pm machines using fe-based phase variable model and wavelets
analysis,” IEEE Transactions on Magnetics, vol. 43, no. 4, pp. 1729–1732, Apr. 2007.
[28] A. Ordaz-Moreno, R. Romero-Troncoso et al., “Automatic online diagnosis algorithm for broken-bar detection on induction motors based
on discrete wavelet transform for fpga implementation,” IEEE Transactions on Industrial Electronics, vol. 55, no. 5, pp. 2193–2202,
May 2008.
[29] F. Li, G. Meng et al., “Wavelet transform-based higher-order statistics for fault diagnosis in rolling element bearings,” Journal of
Vibration and Control, vol. 14, no. 11, pp. 1691–1709, Nov. 2008.
[30] S. Rajagopalan, J. Restrepo et al., “Nonstationary motor fault detection using recent quadratic time-frequency representations,” IEEE
Transactions on Industry Applications, vol. 44, no. 3, pp. 735–744, June 2008.
[31] J. Rosero, L. Romeral et al., “Short-circuit detection by means of empirical mode decomposition and wigner-ville distribution for pmsm
running under dynamic condition,” IEEE Transactions on Industrial Electronics, vol. 56, no. 11, pp. 4534–4547, Nov. 2009.
[32] A. Sadeghian, Y. Zhongming, and B. Wu, “Online detection of broken rotor bars in induction motors by wavelet packet decomposition
and artificial neural networks,” IEEE Transactions on Instrumentation and Measurement, vol. 58, no. 7, pp. 2253–2263, Feb. 2009.
[33] P. Konar and P. Chattopadhyay, “Bearing fault detection of induction motor using wavelet and support vector machines (svms),” Applied
Soft Computing, vol. 11, no. 6, pp. 4203–4211, Sept. 2011.

[34] J. Seshadrinath, B. Singh, and B. Panigrahi, “Incipient turn fault detection and condition monitoring of induction machine using
analytical wavelet transform,” IEEE Transactions on Industry Applications, vol. 50, no. 3, pp. 2235–2242, Sept. 2013.
[35] ——, “Investigation of vibration signatures for multiple fault diagnosis in variable frequency drives using complex wavelets,” IEEE
Transactions on Power Electronics, vol. 29, no. 2, pp. 936–945, Apr. 2013.
[36] K. Logan, “Intelligent diagnostic requirements of future all-electric ship integrated power system,” IEEE Transactions on Industry
Applications, vol. 43, no. 1, pp. 139–149, Jan. 2007.
[37] Y. Jeong, S. Sul, S. E. Schulz, and N. R. Patel, “Fault detection and fault-tolerant control of interior permanent-magnet motor drive
system for electric vehicle,” IEEE Transactions on Vehicular Technology, vol. 41, no. 1, pp. 46–51, 2005.
[38] B. Akin, S. B. Ozturk, H. A. Toliyat, and M. Rayner, “Dsp-based sensorless electric motor fault-diagnosis tools for electric and hybrid
electric vehicle powertrain applications,” IEEE Transactions on Vehicular Technology, vol. 58, no. 6, pp. 2679–2688, 2009.
[39] R. Wang and J. Wang, “Fault-tolerant control with active fault diagnosis for four-wheel independently driven electric ground vehicles,”
IEEE Transactions on Vehicular Technology, vol. 60, no. 9, pp. 4276–4287, 2011.
[40] M. Amrhein and P. T. Krein, “Dynamic simulation for analysis of hybrid electric vehicle system and subsystem interactions, including
power electronics,” IEEE Transactions on Vehicular Technology, vol. 54, no. 3, pp. 825–836, 2005.
[41] D. Levin, Y. Peres, and E. L. Wilmer, Markov Chains and Mixing Times. Providence, RI: American Mathematical Society, 2008.
[42] C. Bishop, Pattern Recognition and Machine Learning. New York, NY: Springer, 2006.
[43] N. Najjar, J. Hare et al., “Heat exchanger fouling diagnosis for an aircraft air-conditioning system,” in SAE 2013 AeroTech Congress
& Exhibition, Sep. 2013, pp. 3055–3060.
[44] N. Najjar, S. Gupta, J. Hare, S. Kandil, and R. Walthall, “Optimal sensor selection and fusion for heat exchanger fouling diagnosis in
aerospace systems,” IEEE Sensors Journal, vol. 16, no. 12, pp. 4866–4881, 2016.
[45] K. Mittal, J. P. Wilson, B. P. Bailie, S. Gupta, G. M. Bollas, and P. B. Luh, “Supervisory control for resilient chiller plants under
condenser fouling,” IEEE Access, vol. 5, pp. 14 028–14 046, August 2017.
