
sensors

Article
Learning to See the Vibration: A Neural Network for
Vibration Frequency Prediction
Jiantao Liu and Xiaoxiang Yang *
School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China;
m150210008@fzu.edu.cn
* Correspondence: yangxx@fzu.edu.cn; Tel.: +86-138-5010-0672

Received: 9 July 2018; Accepted: 1 August 2018; Published: 2 August 2018

Abstract: Vibration measurement serves as the basis for various engineering practices such as natural
frequency or resonant frequency estimation. As image acquisition devices become cheaper and
faster, vibration measurement and frequency estimation through image sequence analysis continue to
receive increasing attention. In the conventional photogrammetry and optical methods of frequency
measurement, vibration signals are first extracted before implementing the vibration frequency
analysis algorithm. In this work, we demonstrate that frequency prediction can be achieved using a
single feed-forward convolutional neural network. The proposed method is verified using a vibration
signal generator and excitation system, and the results are compared with those of an industrial contact
vibrometer in a real application. Our experimental results demonstrate that the proposed method can
achieve acceptable prediction accuracy even in unfavorable field conditions.

Keywords: vibration measurement; frequency prediction; deep learning; convolutional neural network; photogrammetry; computer vision; non-contact measurement

1. Introduction
Structural vibration measurement is widely used in civil engineering, mechanical engineering,
and other engineering practices. The frequency components can be estimated from the measured
vibration signal; thus, the resonant frequency or the natural frequency can be determined from the
signal. The estimation of the resonant frequency or natural frequency of a mechanical system is of
great importance in many engineering applications. The resonant frequency and natural frequency
can be used in monitoring the mechanical behavior of important structural components, detecting
variations in the mechanical properties, design optimization, avoiding the occurrence of resonance
disasters, and so forth.
Vibration measurement methods can be divided into two categories: contact measurement and
non-contact measurement. Contact measurement uses contact sensors attached to the measurement
target, such as velocity transducers, strain gauges, and accelerometers. However, installing and deploying
contact sensors is both time and labor intensive. Non-contact measurement utilizes certain types
of electromagnetic radiation to transmit information, such as laser Doppler vibrometer (LDV) [1–3]
and microwave interferometry techniques [4,5]. The use of cameras to record visible light to carry
out non-contact vibration measurement (also called image-based or computer vision vibration
measurement) has received significant attention in recent decades, such as in digital image correlation
(DIC) [6–8], marker tracking [9–12], and target-less [13,14] image-based vibration measurement
methods. In the conventional image-based methods, the vibration signals should be extracted first
to determine the vibration frequency. In the DIC method, patterns must be manually applied to the
target object surface. Then, the image processing algorithm is implemented to track variations in the
projected [15,16] or printed pattern [6] on the surface for correlation analysis of the vibration signals.

Sensors 2018, 18, 2530; doi:10.3390/s18082530 www.mdpi.com/journal/sensors



Then, algorithms such as FFT are used to analyze the frequency components and finally determine the
resonant frequency. Marker tracking also requires an optical target such as LED light [10] or marker [17]
printed or mounted on the target surface, while the image algorithm and signal analysis algorithm
are implemented consecutively. In the target-less method, the image processing algorithm must
also be applied first to analyze and track the intrinsic features of the target object surface [13,18,19],
and an algorithm is then implemented to determine the frequency spectrum. As discussed above,
the present image-based non-contact vibration frequency measurement method requires a complex
image algorithm for vibration signal extraction, as well as a signal analysis algorithm to estimate
the vibration frequency, which requires the use of a substantial amount of computational resources.
This makes it difficult to deploy conventional image-based non-contact methods in real applications.
In this work, we do not explicitly analyze and extract vibration signals from the image sequence;
rather, we propose a convolutional neural network (CNN) trained with generated artificial signals,
and utilize the learned features to discriminate the pixel brightness variation in the time domain for
vibration frequency prediction. Furthermore, down sampling, de-noising, or other signal enhancement
algorithms are not required. The proposed method predicts the vibration frequency at pixel level or the
statistical result of pixels in the region of interest (ROI) directly from the raw noisy brightness
signals of each pixel; no additional image processing or signal processing algorithms are
required. Verification tests were conducted using a vibration signal generator and excitation system.
Further laboratory and field experiments were conducted to compare the performance of the proposed
method with that of an industrial vibrometer. The limitations of this study and challenges in a real
application are then discussed.

2. Methods
This section describes the learning of deep features for vibration frequency prediction using our
CNN model. First, the proposed artificial neural network architecture is introduced. Then, we will
describe how the artificial signals were generated in the dataset preparation stage and the training
procedure. The entire pipeline of the proposed method is presented at the end of the section.

2.1. Network Architecture


To discriminate the brightness signals of the raw noisy pixels, we treated the vibration frequency
prediction as a multi-class classification problem, utilizing the learned deep features to classify input
information into different classes, which are the values of the predicted frequencies. Figure 1 shows
the proposed neural network architecture. The proposed network is composed of D one-dimensional
convolution layers and two fully connected layers; each convolution layer has W outputs and is
followed by a batch normalization (BN) [20] layer and a max pooling layer [21], and is activated by a rectified
linear unit (ReLU) [22] activation function. We used K × 1 kernels for each convolution layer. The input
one-dimensional information length L is the product of the sampling rate and video clip duration.
The fully connected layers connect all input neural nodes and perform the high-level reasoning of the
artificial neural network, capturing and correlating features activated by different parts of the input
signal. Since most parameters are concentrated in the fully connected layers, overfitting can easily occur.
A dropout [23] layer, which is a regularization method that randomly sets several neural nodes to
zero, follows the first fully connected layer to effectively prevent overfitting, and each fully
connected layer is also followed by a ReLU. The final fully connected layer maps all activations to the
frequency range (FR) output. The FR is defined as the frequency range multiplied by the reciprocal
of the prediction precision. For example, if the measurement frequency range is 0–200 Hz and the
precision is 0.2 Hz, the FR value is 200 × 1/0.2, which is equal to 1000. FR outputs are fed into a
softmax function to yield a probability distribution over every FR class.
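As a concrete illustration, the architecture described above can be sketched in PyTorch. This is a minimal sketch, not the authors' released code: the fully connected width of 256 is an assumption, while the defaults D = 4, W = 10, K = 11, L = 1000, and FR = 500 are taken from the experimental configuration reported in Section 2.5.

```python
import torch
import torch.nn as nn

class FrequencyCNN(nn.Module):
    """1-D CNN that classifies a pixel brightness signal into FR frequency bins."""

    def __init__(self, depth=4, width=10, kernel=11, length=1000, fr=500):
        super().__init__()
        layers, in_ch = [], 1
        for _ in range(depth):
            layers += [
                # 'same' padding of (K - 1) // 2 keeps the temporal length,
                # then max pooling halves it
                nn.Conv1d(in_ch, width, kernel, padding=(kernel - 1) // 2),
                nn.BatchNorm1d(width),
                nn.MaxPool1d(2),
                nn.ReLU(),
            ]
            in_ch = width
        self.features = nn.Sequential(*layers)
        feat_len = length // (2 ** depth)          # length after D poolings
        self.classifier = nn.Sequential(
            nn.Linear(width * feat_len, 256),      # hidden width 256 is assumed
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, fr),                    # FR = range / precision bins
        )

    def forward(self, x):                          # x: (batch, 1, L)
        h = self.features(x)
        return self.classifier(h.flatten(1))       # logits over FR classes

model = FrequencyCNN()
logits = model(torch.randn(8, 1, 1000))            # 8 signals of length 1000
print(logits.shape)                                # torch.Size([8, 500])
```

A softmax over the FR logits then yields the probability distribution over frequency classes described above.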
In general, we optimize for a deeper (larger D) and thinner (smaller W) network architecture.
We varied D from 2 up to 6, and for every convolution layer, we used a kernel with kernel size K in the
range 3–13. Furthermore, we padded the input of every convolution layer by (K − 1)/2 pixels with 0 on
both sides to maintain the spatial dimensions along the depth.

Figure 1. Convolutional neural network architecture of proposed method.

2.2. Dataset Preparation
The proposed CNN was trained with a large amount of purely artificial data. Both the training data
and testing data were generated by an artificial random process.
To simulate real vibration signals, we used a simple artificial vibration signal generator to
generate vibration signals with a specific vibration frequency. Artificial noise randomly sampled from a
Gaussian distribution with zero mean and varying standard deviation was added to the artificial
signals to improve the generalization of the CNN and avoid overfitting. The standard deviation of the
artificial noise was set to vary between 0.6 and 2.1, which achieved the best result in tests over the
range 0 to 10. The artificial signal generating function is given in Equation (1), where T is the discrete
time value from zero to the duration of the artificial signal with an even step of 1/FS, FS is the
sampling rate, f is the frequency of the artificial signal, and A is the signal amplitude. Examples of
generated artificial signals are shown in Figure 2.

A = sin(2π f × T) + Noise (1)

Figure 2. Examples of artificial signals: (a) Artificial signal of 5.6 Hz vibration frequency with random
noise added; (b) Artificial signal of 26.5 Hz vibration frequency with random noise added; (c) Artificial
signal of 46.8 Hz vibration frequency with random noise added.
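A minimal sketch of this dataset generation, assuming NumPy; the function names and the dataset layout are illustrative, not the authors' code:

```python
import numpy as np

def artificial_signal(freq, fs=100.0, duration=10.0, noise_std=1.0, rng=None):
    """Generate A = sin(2*pi*f*T) + Noise as in Equation (1)."""
    rng = rng or np.random.default_rng()
    t = np.arange(0.0, duration, 1.0 / fs)         # even step of 1/FS
    noise = rng.normal(0.0, noise_std, t.size)     # zero-mean Gaussian noise
    return np.sin(2.0 * np.pi * freq * t) + noise

def make_dataset(fr=100, precision=0.5, per_class=3000, fs=100.0, duration=10.0):
    """Label each signal with its ground-truth frequency bin (FR classes)."""
    signals, labels = [], []
    rng = np.random.default_rng(0)
    for k in range(fr):                            # one bin per precision step
        freq = k * precision
        for _ in range(per_class):
            std = rng.uniform(0.6, 2.1)            # noise level found best in the 0-10 sweep
            signals.append(artificial_signal(freq, fs, duration, std, rng))
            labels.append(k)
    return np.stack(signals), np.array(labels)
```

With fr = 100 and per_class = 3000 this produces the 300,000 labelled signals mentioned in the text.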
For each frequency precision step, we generated approximately 3000 or even more artificial signals,
and the corresponding ground truth vibration frequency value was labelled with each artificial signal.
For instance, if FR is 100, there would be at least 300,000 artificial signals generated for the
training process.

2.3. Training Procedure

At the training stage, we optimized the weights and bias parameters by minimizing the negative
log likelihood loss over the entire training dataset. In all our experiments, we used the adaptive
moment estimation (Adam) optimizer [24], a stochastic gradient descent-based optimizer that
maintains adaptive estimates of the first- and second-order moments of the gradient. We used a
learning rate of 1 × 10−4 and did not decrease the learning rate, instead using a larger batch size of
4096, which was inspired by Smith et al. [25]. The training was carried out on PyTorch [26] with an
nVidia GTX1080ti, and normally requires 12 h to 1 day to converge to a good solution.
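The training procedure can be sketched as a minimal loop using the stated settings (Adam, fixed 1 × 10−4 learning rate, cross-entropy as the log-softmax plus negative log likelihood loss); this is an illustrative sketch, not the authors' code, and assumes a loader that yields batches of (signal, label) pairs:

```python
import torch
import torch.nn.functional as F

def train(model, loader, epochs=10, device="cpu"):
    """Minimize the negative log likelihood with Adam at a fixed learning rate.

    In the paper, batches of 4096 signals are used on a GTX1080ti ("cuda").
    """
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # LR kept constant
    for _ in range(epochs):
        for signals, labels in loader:
            signals = signals.to(device).unsqueeze(1)    # (B, 1, L)
            logits = model(signals)
            # cross_entropy = log-softmax + NLL over the FR classes
            loss = F.cross_entropy(logits, labels.to(device))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```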
2.4. Implementation Pipeline

The ROI for frequency prediction was selected in the input video; then, the cropped video of the
ROI was read out as image sequences. The brightness values of every pixel of the image sequence in
the time domain were saved as separate pixel brightness variation signals. As shown in Figure 3a,
every pixel in the ROI is read out along the time dimension from the first frame to the end frame.
If the width and height of the ROI are W and H, respectively, we will obtain W × H pixel signals
along the time dimension. For instance, if we have a 500 fps 9 × 9 image sequence with a duration of
5 s, we will obtain 81 (9 × 9) instances of input with length L equal to 2500 (500 Hz × 5 s).
Then, the instances are directly fed into the trained CNN to yield the predicted vibration
frequency as output. After obtaining the predicted vibration frequency of each pixel, we can generate
a vibration frequency map corresponding to the original image spatial information, as shown in the left
plot in Figure 3b. A histogram of the predicted frequency distribution is also plotted in Figure 3b, so that
the result can be quantified for an overall frequency prediction. For the frequency corresponding to
the maximum of the histogram, the pixels predicted around this frequency are highlighted for a better
understanding of the prediction result, as shown in the middle of Figure 3b.
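The per-pixel readout and histogram step above can be sketched as follows; `predict_roi` is a hypothetical helper, assuming a trained model that accepts (batch, 1, L) tensors:

```python
import numpy as np
import torch

def predict_roi(model, frames):
    """frames: (T, H, W) grayscale ROI video -> frequency map, histogram, peak bin.

    Each pixel's brightness trace over the T frames is one input signal,
    giving W * H signals of length T.
    """
    t, h, w = frames.shape
    signals = frames.reshape(t, h * w).T.astype(np.float32)     # (H*W, T)
    with torch.no_grad():
        logits = model(torch.from_numpy(signals).unsqueeze(1))  # (H*W, FR)
        bins = logits.argmax(dim=1).numpy()
    freq_map = bins.reshape(h, w)            # predicted class index per pixel
    hist = np.bincount(bins, minlength=logits.shape[1])
    dominant_bin = hist.argmax()             # overall frequency prediction
    return freq_map, hist, dominant_bin
```

Multiplying `dominant_bin` by the prediction precision (e.g., 0.1 Hz) converts the class index back to a frequency.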

Figure 3. Implementation pipeline of the proposed method: (a) Read in the ROI video as image
sequence and save as separate pixel brightness variation signals, then feed in the ConvNet;
(b) Network output prediction result visualization; (c) Optional edge enhancement operation.

An optional operation can be implemented if the input video is recorded in unfavorable conditions,
such as extremely noisy signals, variations in the lighting condition, or camera shake. We describe
this operation as the edge enhancement operation: Canny edge detection is performed on the first
frame of the image, and only pixels on the edge are taken into consideration in the statistical
procedure when generating the histogram plot, while pixels outside the edge are ignored. An improved
vibration frequency prediction result can be obtained because pixels around the image edge always
have better contrast, which is very important in all photogrammetry methods.

2.5. Experimental Methods

Several experiments were conducted, including a verification test using a vibration test system,
an impact hammer excitation test in a controlled laboratory condition, and a field experiment similar
to a practical application. An industrial camera with an Aptina MT9P031 image sensor was used in all
the experiments, and a DongHua DH5906 industrial vibrometer was used for comparison with the results
obtained from the proposed method in the laboratory and field experiments.

In all experiments presented in this paper, the camera sampling rate was set to 100 Hz, and 10 s of
video was recorded for each experiment case. The input information length L was 1000 (100 Hz × 10 s),
and the prediction FR value was 500, since the frequency range is 0–50 Hz and the precision is defined
as 0.1 Hz. The experiments described in this section used a network with D = 4 layers (each with
W = 10 and K = 11). Having tested D from 2 to 6, W from 5 to 15, and K from 3 to 13, we found that
this configuration ensured fast convergence to a usable model and acceptable prediction accuracy
without many redundant parameters. The prediction accuracy did not benefit from a deeper network.
Choosing W = 10 features per layer worked well; W = 15 is superfluous, while W = 5 significantly
decreases the convergence speed and prediction accuracy. K = 11 was found to be the best choice
considering accuracy and convergence speed. In the edge enhancement operation, the high and low
thresholds for the Canny edge detection algorithm were set to values that would generate a clear edge
for the target.
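The edge-enhancement masking can be sketched as follows. For portability this sketch uses a simple gradient-magnitude mask as a stand-in for the Canny detector (in practice `cv2.Canny` on the first frame would be used, with the normalized thresholds scaled to 0–255); the function names are illustrative:

```python
import numpy as np

def edge_mask(frame, thresh=0.2):
    """Gradient-magnitude edge mask (a stand-in for the Canny detector used
    in the paper); thresh is given in the normalized 0-1 range."""
    g = frame.astype(float)
    g = (g - g.min()) / (np.ptp(g) + 1e-9)      # normalize brightness to 0-1
    gy, gx = np.gradient(g)                     # spatial gradients
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()             # keep high-contrast pixels

def edge_enhanced_histogram(first_frame, freq_map, fr, thresh=0.2):
    """Build the predicted-frequency histogram from edge pixels only."""
    kept = freq_map[edge_mask(first_frame, thresh)]
    return np.bincount(kept.ravel(), minlength=fr)
```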
2.5.1. Verification Test

An experiment was conducted to validate the proposed method using a vibration test system,
which consisted of a RIGOL DG1022 arbitrary waveform generator, an MB Dynamics MODAL 50 exciter,
and an SL500VCF amplifier. The experimental setup is shown in Figure 4.

Sine signals were generated every 5 Hz between 0 and 50 Hz; the signals then passed through
the amplifier with minimum gain to the exciter. A modular steel structure was excited by the
precisely controlled vibration signals, and a video of the measurement ROI in Figure 4d, which was
set to the beam near the exciter with a resolution of 468 × 14 pixels, was recorded simultaneously.
Then, the proposed method was applied to the recorded videos to determine the vibration frequency
prediction of the measurement target ROI.

Figure 4. Experimental setup for verification test: (a) Camera, laptop, and steel structure; (b) MB
Dynamics MODAL 50 exciter (ROI highlighted in red rectangle); (c) RIGOL DG1022 arbitrary
waveform generator; (d) ROI for vibration frequency measurement.

2.5.2. Laboratory Experiment
An experiment was conducted in a well-controlled laboratory environment to investigate the
performance of the proposed method, and the results were compared with those obtained using an
industrial vibrometer. A carbon plate was struck on the left support point using an impact hammer
as the excitation. The resulting vibrations of the carbon plate were captured by the vibrometer
mounted at the midpoint of the simply supported carbon plate, and the sampling rate of the
vibrometer was set to 100 Hz. The ROI of the camera was also set to the midpoint of the plate, with a
resolution of 176 × 304 pixels, as shown in Figure 5c. An LED torch was used as an additional light
source for the target object, so that the camera could maintain a high sampling rate. The configuration
is shown in Figure 5a.

Figure 5. Laboratory experimental setup: (a) Camera, laptop, carbon plate, and vibrometer mounted
at the midpoint; (b) Field of view of the camera (ROI highlighted in red rectangle); (c) ROI for
vibration frequency measurement.

The raw acceleration signal was transformed to the frequency domain using fast Fourier transform
(FFT). The frequency components of the vibration signal were examined in the frequency domain and
compared with the vibration frequency prediction result obtained from the proposed method.
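The FFT peak extraction used for the vibrometer reference can be sketched as follows, assuming NumPy; the helper name is illustrative:

```python
import numpy as np

def dominant_frequency(accel, fs=100.0):
    """Return the peak frequency of an acceleration record via the FFT."""
    spectrum = np.abs(np.fft.rfft(accel - accel.mean()))  # drop the DC offset
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)       # bin width fs / N
    return freqs[spectrum.argmax()]
```

With fs = 100 Hz and a 10 s record, the frequency resolution is 0.1 Hz, matching the prediction precision used in the experiments.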
2.5.3. Field Experiment
To verify the applicability of the proposed method to practical engineering applications, a field
experiment was conducted on the Wuyuan bridge, Xiamen. Since the bridge is a steel structure built
in an extremely salty marine environment, it is surrounded by water vapor evaporating from the sea;
as the vapor cools, it condenses into fog with high salinity, which inevitably speeds up corrosion of
the steel structural components. The local department of transportation actively monitors the
dynamic behavior of bridge cables and other structural components. The vibration frequency is one of
the most important parameters used in predicting the structure's modal shape and the tension force
of structural components. A bridge cable was selected as the test object.
In the field experiment, the bridge cable was excited by randomly passing vehicles of varying
speed, pattern, and weight. The vibrometer was attached to the bridge cable with a sampling rate of
100 Hz, and a video of the ROI of the bridge cable was captured; the resolution of the ROI was
72 × 86 pixels, as shown in Figure 6c. The frequency components of the bridge cable were obtained
using FFT from the vibrometer acceleration time history. The proposed method was implemented on
the video recorded in the field to determine the vibration frequency, which was compared with that
of the vibrometer.

Figure 6. Field experimental setup: (a) Camera, receiver, vibrometer (highlighted in blue circle);
(b) Field of view of the camera (ROI highlighted in red rectangle); (c) ROI for vibration frequency
measurement.
3. Results

3.1. Verification Test
Figure 7 shows four representative histograms of the predicted frequency distribution, with excitation frequencies of 5 Hz, 15 Hz, 25 Hz, and 35 Hz. The visualized prediction results for 30 Hz and 40 Hz are shown in Figure 8; the prediction result of every pixel is mapped to its original spatial position on the input image, and the pixels predicted around the maximum value of the histogram are highlighted. The optional operation of finding pixels on the edge using the Canny edge detection algorithm is shown in Figure 8c; the high and low thresholds for the algorithm were set to 0.6 and 0.24, respectively, in the normalized threshold range of 0 to 1. Figure 9 shows the results for the corresponding excitation frequencies of 5 Hz, 15 Hz, 25 Hz, and 35 Hz obtained after edge enhancement, in which only the pixels on the edge were taken into consideration in generating the histogram.
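The histogram-and-peak step described above can be sketched as follows; `pixel_preds` is a hypothetical array of per-pixel frequency predictions standing in for the network output (NumPy only, not the authors' implementation):

```python
import numpy as np

def histogram_peak(pixel_preds, bin_width=0.1):
    """Bin the per-pixel frequency predictions and return the most populated bin centre."""
    preds = np.asarray(pixel_preds).ravel()
    edges = np.arange(0.0, 50.0 + bin_width, bin_width)   # 0-50 Hz range, 0.1 Hz bins
    counts, _ = np.histogram(preds, bins=edges)
    k = np.argmax(counts)
    return 0.5 * (edges[k] + edges[k + 1])

# Toy ROI: most pixels agree on ~25 Hz, a few scattered noisy predictions
rng = np.random.default_rng(0)
pixel_preds = np.full((72, 86), 25.12)
pixel_preds[:5, :5] = rng.uniform(0.0, 50.0, (5, 5))      # noise pixels
print(round(histogram_peak(pixel_preds), 1))              # peak near 25.1 Hz
```

Taking the mode of the histogram rather than, say, the mean makes the estimate robust to a minority of noisy pixel predictions.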
results from nine different test scenarios are summarized in Table 1, including excitation with
The results
frequencies from
of 5 Hz, 10 nine
Hz, 15 different
Hz, 20 Hz,test25scenarios
Hz, 30 Hz, are35summarized in Table
Hz, 40 Hz, and 45 Hz.1, including excitation
withItfrequencies
can be observedof 5 Hz,
from 10 Hz, 15 Hz,results
the above 20 Hz, that
25 Hz,the30 Hz, 35 Hz,
proposed 40 Hz,successfully
method and 45 Hz. predicted the
It can be observed from the above results that the proposed method
excitation frequency within an error range of 0.1 Hz. The optional edge enhancement operation successfully predicted the
is not
excitationunder
necessary frequency
such within
conditionan error
and didrangenotof 0.1 Hz. the
improve Thedirect
optional edge enhancement
prediction results. Sinceoperation is not
the excitation
isnecessary
a standard under
clearsuch
sine condition
wave without and did notthe
noise, improve the direct
background and prediction results. Since
lighting conditions in thethe excitation
verification
is a are
test standard clearstable
relatively sine wave
so thatwithout
noise in noise, the background
the captured videos and
was lighting
also low.conditions in the verification
test are relatively stable so that noise in the captured videos was also low.
Table 1. Verification test result summary.

Scenario   Excitation Frequency (Hz)   Predicted Frequency (Hz)
                                       Direct Result   Edge Enhanced Result
   1                 5.0                    4.9               4.9
   2                10.0                   10.0              10.0
   3                15.0                   14.9              14.9
   4                20.0                   20.1              20.0
   5                25.0                   25.1              25.1
   6                30.0                   30.0              30.0
   7                35.0                   35.0              34.9
   8                40.0                   40.0              39.9
   9                45.0                   45.0              45.0
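The 0.1 Hz worst-case error stated above can be checked directly against the values in Table 1:

```python
# Excitation and predicted frequencies (Hz) transcribed from Table 1
excitation    = [5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0, 45.0]
direct        = [4.9, 10.0, 14.9, 20.1, 25.1, 30.0, 35.0, 40.0, 45.0]
edge_enhanced = [4.9, 10.0, 14.9, 20.0, 25.1, 30.0, 34.9, 39.9, 45.0]

max_error = max(abs(f - p)
                for preds in (direct, edge_enhanced)
                for f, p in zip(excitation, preds))
print(round(max_error, 1))  # 0.1 -> every prediction falls within 0.1 Hz
```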
Figure 7. Four representative histograms of the predicted frequency distribution: (a) 5 Hz excitation result; (b) 15 Hz excitation result; (c) 25 Hz excitation result; (d) 35 Hz excitation result.
Figure 8. Two representative results and the Canny edge detection algorithm result: (a) 30 Hz excitation result: prediction result map and pixels predicted around 30 Hz (maximum value of histogram) highlighted; (b) 40 Hz excitation result: prediction result map and pixels predicted around 40 Hz (maximum value of histogram) highlighted; (c) Pixels used in edge enhancement.
Figure 9. Four representative histograms of the predicted frequency distribution after edge enhancement: (a) 5 Hz excitation result; (b) 15 Hz excitation result; (c) 25 Hz excitation result; (d) 35 Hz excitation result.
3.2. Laboratory Experiment

The histogram of the predicted frequency distribution obtained from the proposed method in the laboratory experiment is shown in Figure 10a, while the histogram obtained after edge enhancement is shown in Figure 10b. The results of every pixel in the ROI are visualized in their original spatial positions, as shown in Figure 11a, while the pixels predicted with a frequency of approximately 12.7 Hz are highlighted in Figure 11b. The pixels selected using Canny edge detection with high and low thresholds of 0.2 and 0.08, respectively, which were utilized for edge enhancement, are shown in Figure 11c.

Figure 12 shows the vibration measurement result from the contact vibrometer; the time history of the normalized acceleration signal is plotted in Figure 12a, while the corresponding power spectral density (PSD) obtained using the FFT with peak picking is shown in Figure 12b.

It can be observed that the result obtained from the proposed method closely matched that from the vibrometer, namely 12.4 Hz and 12.75 Hz, respectively. After the edge enhancement operation was applied, the noise floor was significantly reduced, and a better prediction result of 12.7 Hz was achieved, which is very close to that of the vibrometer. In addition, the highlighted pixels in Figure 11b also demonstrate that the predicted frequency came from the pixels of the vibrometer and carbon plate rather than from noise in other parts of the image.
Figure 10. Laboratory experimental results of the proposed method: (a) Histogram of the predicted frequency distribution; (b) Histogram of the predicted frequency distribution after edge enhancement.
Figure 11. Laboratory experimental result visualization using the proposed method: (a) Prediction result map; (b) Pixels predicted at 12.4 Hz (maximum value of histogram) highlighted; (c) Pixels used in edge enhancement.
Figure 12. Vibrometer laboratory experimental result: (a) Time history of the normalized acceleration signal; (b) Normalized power spectral density.
3.3. Field Experiment

A comparison of the direct prediction result histogram of the field experiment and the corresponding result after edge enhancement is shown in Figure 13. The field experimental result visualization is shown in Figure 14, including the mapped prediction result, while the pixels predicted around the maximum value of the histogram are highlighted in Figure 14b. The pixels used in the edge enhancement procedure are shown in Figure 14c; the high and low thresholds for the Canny edge algorithm were set to 0.2 and 0.08, respectively.

The vibration signal of the bridge cable collected from the vibrometer was normalized and plotted as shown in Figure 15a, while the normalized PSD obtained by FFT is plotted in Figure 15b to examine the frequency components of the bridge cable vibration.
Figure 13. Field experimental results of the proposed method: (a) Histogram of the predicted frequency distribution; (b) Histogram of the predicted frequency distribution after edge enhancement.
Figure 14. Proposed method field experimental result visualization: (a) Prediction result map; (b) Pixels predicted around 12.7 Hz (maximum value of histogram) highlighted; (c) Pixels used in edge enhancement.
The direct result of the proposed method under field conditions, which are challenging for many image-based measurement methods, is shown in Figure 13a. The figure shows a significantly higher noise floor than the results obtained from the verification test and laboratory experiment. Many pixels were predicted in the low-frequency range below 1 Hz, which may be due to noise from variations in the lighting conditions and background of the measurement target, or other forms of disturbance in the image while capturing the video. Excluding the low-frequency noise, we can still find the peak with a predicted frequency of 12.6 Hz. However, the result with the edge enhancement operation shows a clear peak at 12.7 Hz in Figure 13b, and the noise was mostly eliminated. The highlighted pixels shown in Figure 14b also confirm that pixels around the measurement object edge are more robust to real-world noise in our method.
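The edge enhancement step behind this noise suppression can be sketched as follows. A simple gradient-magnitude threshold stands in for the Canny detector used in the paper, and both arrays are synthetic placeholders:

```python
import numpy as np

def edge_mask(image, thresh=0.2):
    """Crude edge detector: normalized gradient magnitude above a threshold.
    (A stand-in for the Canny detector used in the paper.)"""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()

def edge_enhanced_mode(pixel_preds, image, bin_width=0.1):
    """Histogram the per-pixel frequency predictions over edge pixels only."""
    preds = pixel_preds[edge_mask(image)]
    edges = np.arange(0.0, 50.0 + bin_width, bin_width)
    counts, _ = np.histogram(preds, bins=edges)
    k = np.argmax(counts)
    return 0.5 * (edges[k] + edges[k + 1])

# Synthetic example: a bright bar on a dark background; pixels on and near the
# bar predict 12.7 Hz, everything else is low-frequency "noise" below 1 Hz
image = np.zeros((72, 86))
image[30:42, :] = 1.0
preds = np.random.default_rng(1).uniform(0.0, 1.0, (72, 86))
preds[28:44, :] = 12.7
print(round(edge_enhanced_mode(preds, image), 1))  # peak near 12.7 Hz
```

Restricting the histogram to edge pixels discards the textureless background pixels, which carry no motion information and dominate the sub-1 Hz noise floor.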
Figure 15. Vibrometer field experimental result: (a) Time history of the normalized acceleration signal; (b) Normalized power spectral density.
The result obtained from the vibrometer was 12.97 Hz, which gives an error of approximately 0.27 Hz (2.1%) compared to the result obtained using the proposed method, that is, 12.7 Hz. One possible explanation for the error is that the vibrometer was installed near the hinge of the bridge cable, as this location was convenient and accessible for hand installation, as highlighted by the blue circle in Figure 6a, while the ROI for vibration frequency prediction, highlighted by the red rectangle in Figure 6b, was significantly higher than the vibrometer installation position to avoid disturbance from passing vehicles and pedestrians in the image.
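The quoted error figures follow directly from the two measurements:

```python
# Frequencies from the field experiment (Hz)
f_vibrometer = 12.97   # contact vibrometer (reference)
f_proposed   = 12.7    # proposed method with edge enhancement

abs_error = abs(f_vibrometer - f_proposed)
rel_error = 100.0 * abs_error / f_vibrometer
print(round(abs_error, 2), "Hz,", round(rel_error, 1), "%")  # 0.27 Hz, 2.1 %
```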
4. Discussion
Based on the experiments conducted with different excitation sources and different test objects vibrating at different frequencies, the following observations were made on the performance of our method. (1) On average, the proposed method provides acceptable vibration frequency prediction results, and the visualized results can improve understanding of the distribution of the vibration frequencies. (2) Under good experimental conditions with adequate and stable lighting, as well as good contrast on the measurement target, the proposed method provides acceptable vibration frequency prediction accuracy without any additional operation. Implementing the edge enhancement operation improved the predicted result; furthermore, edge enhancement can significantly improve the usability of the predicted result.
Compared to conventional image-based methods, our method does not explicitly analyze the vibration signal to determine the vibration frequency. It uses a single feed-forward CNN to directly predict the vibration frequency. No extra algorithm, image, or signal processing technique is required, which makes for fast, simple, and easy cross-platform deployment. In addition, the CNN model used in the proposed method was trained with purely artificial data, making this method scalable to other frequency ranges or higher precision.
This work also has some limitations. The sampling rate of the input video is restricted to the training configuration; a new model must be trained if a different input video sampling rate is to be used. Furthermore, only the peak or resonant frequency is estimated, while other frequency components are not determined. The proposed method is vulnerable to variations in the lighting conditions, changes in the background of the measurement target, camera shake, and other forms of electrical or mechanical noise.
The method proposed in this paper offers some opportunities for future research. Our method does not require signal preprocessing before the input is fed into the CNN; a proper signal preprocessing procedure such as signal denoising may improve the performance of the proposed method. The dataset is also limited in that it was generated by a simple algorithm and only simple noise was added. The use of a better artificial signal generator that emulates real situations and real noise more closely will further improve the generalization and robustness of the CNN model in real-world applications.

5. Conclusions
In this work, we propose an artificial neural network for vibration frequency prediction, along with an optional operation for result enhancement. Experiments were conducted, including a verification test, a laboratory experiment, and a field experiment. A comparison of the results obtained from the proposed method with those of an industrial vibrometer indicates that the proposed method can predict the vibration frequency within acceptable error.
The major limitations of this method are that the input video sampling rate is restricted by
the trained model, and it cannot estimate all the frequency components of the target. In the future,
we will attempt to address these limitations and further research will be conducted focusing on signal
preprocessing and dataset preparation for this method.

Author Contributions: J.L. conceived and conducted the research. X.Y. contributed to this work by making
valuable comments and suggestions on writing and revising the paper.
Funding: This work was supported by the National Natural Science Foundation of China (11372074).
Conflicts of Interest: The authors declare no conflicts of interest.


