
Signal Processing and Machine Learning Applied to Power Frequency Artifacts in Frequency Disturbance Recorder Signals


Ebholo, Ijieh, Tennessee State University, Nashville.
Buckner, Mark, Ph.D., Power and Energy Systems Research Division, Oak Ridge National Laboratory, Oak Ridge.

Background

• Signals from individual Frequency Disturbance Recorder (FDR) units have unique characteristics.
• These characteristics, such as frequency, phase angle, and voltage, can be used to distinguish one FDR from another.
• The Fast Fourier Transform (FFT) and supervised Machine Learning Algorithms (MLA) are used to filter the signals and extract features, to differentiate the electrical signals from different outlets in the same building, and potentially to identify the source of the signal.

Figure: Frequency Disturbance Recorder

Preprocessing and Features Extraction

• Signals are preprocessed by normalizing them to their mean, and the amplitudes are scaled by the standard deviation (unit variance), so that the smaller samples are not dominated by the larger samples during training and testing.
• Signals are smoothed with a constant window size prior to performing the FFT.
• Smoothing reduces the power of the amplitude at the frame ends ("spectral leakage"); the effect is further reduced by overlapping the frames (number of cycles/sec) so that new frames are streamed in before previous frames are finished.
• The overlapping ensures the robustness of the shifted signal; the FFT computes the spectrum of each data frame, and its output is the FFT feature data used by the MLAs (a short sketch of this pipeline follows the list).
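A minimal sketch of this preprocessing and feature-extraction pipeline, assuming the raw FDR signal is a 1-D NumPy array; the Hann window, frame length, and overlap fraction are hypothetical choices, since the poster does not report the exact window shape or sizes:

```python
import numpy as np

def fft_features(signal, frame_len=1024, overlap=0.5):
    """Scale a raw FDR signal and extract overlapped, windowed FFT features."""
    # Normalize to zero mean and unit variance so larger samples do not
    # dominate the smaller ones during training and testing.
    x = (signal - signal.mean()) / signal.std()

    # Slice the signal into overlapping frames of constant window size.
    hop = int(frame_len * (1.0 - overlap))
    window = np.hanning(frame_len)  # smoothing window that reduces spectral leakage
    frames = []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len] * window
        # The magnitude spectrum of each frame becomes one feature vector.
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.vstack(frames)  # rows = frames, columns = FFT bins (MLA input)
```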
Figure: Machine Learning Flowchart

Modeling and Classification Algorithms

Signals from the FDR units are not linearly separable, so the Support Vector Machine (SVM), Naïve Bayes (NB), and Neural Network (NN) classifiers are used to build the five-class nonlinear models.

SVM Model
• Radial Basis Function (RBF) kernels enable the SVM to map the nonlinear samples into a high-dimensional space, with the kernel's parametric values used to select the best model; this project identified the best parameters using a grid-search approach (a sketch follows these bullets).
• The SVM classifier, trained and tested using 10-fold cross-validation, resulted in an accuracy of 44.8%.
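A minimal sketch of the grid search and 10-fold cross-validation described above, assuming scikit-learn; the C and gamma ranges are hypothetical, since the poster does not report the grid that was actually searched:

```python
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# X: FFT feature matrix (one row per frame), y: five-class FDR unit labels.
def train_svm(X, y):
    # Hypothetical parameter grid for the RBF kernel.
    param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10)
    search.fit(X, y)

    # Re-estimate the accuracy of the best model with 10-fold cross-validation.
    scores = cross_val_score(search.best_estimator_, X, y, cv=10)
    return search.best_params_, scores.mean()
```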
NB Model
• The NB model estimated the prior probability and the Gaussian of each class, which are used as the basis for the classifier model.
• The NB model multiplies the Gaussians for each class to estimate that class's probability (a sketch follows these bullets).
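A minimal sketch of this Gaussian Naïve Bayes rule, fitting a prior and per-feature Gaussians for each class and multiplying them (in log space) to score a sample; scikit-learn's GaussianNB implements the same idea:

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """Estimate the class priors and per-feature Gaussians from training data."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (len(Xc) / len(X),       # prior probability of the class
                     Xc.mean(axis=0),        # Gaussian means, one per feature
                     Xc.var(axis=0) + 1e-9)  # Gaussian variances, one per feature
    return params

def predict_gaussian_nb(params, x):
    """Multiply the prior by the Gaussian of each feature and pick the best class."""
    def log_score(prior, mu, var):
        log_lik = -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        return np.log(prior) + log_lik.sum()  # log of prior times product of Gaussians
    return max(params, key=lambda c: log_score(*params[c]))
```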
ANN Model
• In the Multi-Layer Perceptron (MLP), the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm searches for the global minimum to optimize the network.
• A hyperbolic tangent function activates the hidden neurons and a softmax function activates the output units; the algorithm evaluates the cross-entropy error function on the output error and updates the weights with a backpropagation technique (a sketch follows these bullets).
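A minimal sketch of such an MLP, assuming scikit-learn's MLPClassifier: its "lbfgs" solver is the limited-memory variant of BFGS, and for multi-class problems it uses softmax outputs with a cross-entropy loss; the hidden layer size here is a hypothetical choice:

```python
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# X: FFT feature matrix, y: five-class FDR unit labels.
def train_mlp(X, y):
    mlp = MLPClassifier(hidden_layer_sizes=(50,),  # hypothetical hidden layer size
                        activation="tanh",         # hyperbolic tangent hidden units
                        solver="lbfgs",            # quasi-Newton (limited-memory BFGS) optimizer
                        max_iter=1000)
    # 10-fold cross-validation, matching the evaluation used for the other models.
    return cross_val_score(mlp, X, y, cv=10).mean()
```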

Results

The confusion matrices below show the results for the different MLAs, SVM (left), NB (top right), and MLP (bottom right), for the classified FDR units (a sketch of how such a matrix is computed follows the figure captions).

Figure: Classifier true positive signal rate; classifier true positive and negative signal rate.
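A minimal sketch of how one of these confusion matrices can be produced, assuming scikit-learn and held-out predictions from any of the trained models above:

```python
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

# y_true: actual FDR unit labels, y_pred: labels predicted by a trained classifier.
def show_confusion(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)  # rows: true class, columns: predicted class
    ConfusionMatrixDisplay(cm).plot()      # heat map; the diagonal holds the true positive counts
    return cm
```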
Conclusions

• Scaling and the FFT generally improve the performance of the MLA models.
• The MLP performance is 5% better than that of the other MLA models used.

Acknowledgments
• Thanks to my mentor for his contributions to the success of this work.
• This work was supported in part by Volkswagen Chattanooga through the Volkswagen Distinguished Scholars Program.
