Complete Theory
Chapter 3 Functions
3.1 Time domain functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Time Record . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Autocorrelation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Crosscorrelation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Probability Density . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Probability Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.2 Frequency domain functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Autopower Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Crosspower spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Coherence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Principal Component Spectra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Frequency Response Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Impulse Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.3 Composite functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Overall level (OA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Frequency section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Order sections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Octave sections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.4 Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.5 Rms calculations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Part II Acoustics and Sound Quality
Chapter 18
Modal validation
18.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
18.2 MSF and MAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
18.3 Mode participation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
18.4 Reciprocity between inputs and outputs . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
18.5 Generalized modal parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
18.6 Mode complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
18.7 Modal phase collinearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
18.8 Comparison of models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
18.9 Mode indicator functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
18.10 Summation of FRFs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
18.11 Synthesis of FRFs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Chapter 20 Design
20.1 Using the modal model for modal design . . . . . . . . . . . . . . . . . . . . . . . . . . 322
20.2 Sensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
20.2.1 Mathematical background to sensitivity analysis . . . . . . . . . . . . . . 325
20.3 Modification prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
20.3.1 Mathematical background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
20.3.2 Implementation of Modification prediction . . . . . . . . . . . . . . . . . . 338
20.3.3 Definition of modifications to the model . . . . . . . . . . . . . . . . . . . . 339
20.3.4 Modification prediction calculation . . . . . . . . . . . . . . . . . . . . . . . . . 347
20.3.5 Units of scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Example of the application of a beam element . . . . . . . . . . . . . . . . . . . . . 349
Static condensation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
20.4 Forced response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
20.4.1 Mathematical background for forced response . . . . . . . . . . . . . . . . 354
Chapter 21 Geometry concepts
21.1 The geometry of a test structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
21.2 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Theory and Background
Part I
Signal processing
Chapter 1
Spectral processing . . . . . . . . . . . . . . . . . . . . . 1
Chapter 2
Structural dynamics testing . . . . . . . . . . . . . . 23
Chapter 3
Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Chapter 1
Spectral processing
Aliasing
Averaging
This is by no means a comprehensive treatment of the subject and a
reading list is given at the end.
(Figure: a sine wave in the time domain and its amplitude spectrum - a single line - in the frequency domain)
Each sine wave in the time domain is represented by one spectral line in the
frequency domain. The series of lines describing a waveform is known as its
frequency spectrum.
Fourier transform
The conversion of a time signal to the frequency domain (and its inverse) is
achieved using the Fourier Transform as defined below.
X(f) = \int_{-\infty}^{+\infty} x(t)\, e^{-j 2\pi f t}\, dt    Eqn. 1-1

x(t) = \int_{-\infty}^{+\infty} X(f)\, e^{+j 2\pi f t}\, df    Eqn. 1-2
This function is continuous and in order to use the Fourier Transform digitally
a numerical integration must be performed between fixed limits.
X(m\,\Delta f) = \Delta t \sum_{n=0}^{N-1} x(n\,\Delta t)\, e^{-j 2\pi\, m\,\Delta f\, n\,\Delta t}, \quad m = 0 \ldots N-1
Since the waveform is being sampled at discrete intervals and during a finite
observation time, we do not have an exact representation of it in either domain.
This gives rise to shortcomings which are discussed later.
Hermitian symmetry
The Fourier transform of a sinusoidal function results in a complex function made up of real and imaginary parts that are symmetrical. This is illustrated below. In the majority of cases only the real part is taken into account, and of this only the positive frequencies are shown. So the representation of the frequency spectrum of the sine wave shown below would become the area shaded in grey.
(Figure: the double sided spectrum of a sine wave, with lines of amplitude A/2 at -f and +f; only the positive frequency half is retained)
(Figure: a block of N time samples converts, via the forward and inverse transform, to N/2 spectral lines in the frequency domain)
To achieve high calculation performance the FFT algorithm requires that the number of time samples (N) be a power of 2 (such as 2, 4, 8, ..., 512, 1024, 2048).
Blocksize
Such a time record of N samples is referred to as a block of data with N being
the blocksize. N samples in the time domain converts to N/2 spectral (frequency)
lines. Each line contains information about both amplitude and phase.
Frequency range
The time taken to collect the sample block is T. The lowest frequency that can be detected is the reciprocal of this observation time T.
The frequency spacing between the spectral lines is therefore 1/T, and the highest frequency that can be determined is (N/2)·(1/T).

\Delta f = \frac{1}{T} \qquad f_{max} = \frac{N}{2}\cdot\frac{1}{T} = \frac{N}{2}\,\Delta f
The frequency range that can be covered is dependent on both the blocksize (N) and the sampling period (T). To cover high frequencies you need to sample at a fast rate, which implies a short sample period.
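These relations can be checked numerically. The sketch below uses assumed acquisition settings (N = 1024 samples at fs = 2048 Hz - illustrative values, not from this chapter):

```python
# Assumed acquisition settings (illustrative, not from the text).
N = 1024             # blocksize: number of time samples, a power of 2
fs = 2048.0          # sampling frequency in Hz

T = N / fs           # time taken to collect one block of samples
df = 1.0 / T         # frequency resolution: spacing between spectral lines
fmax = (N / 2) * df  # highest frequency: N/2 spectral lines of spacing df

print(T, df, fmax)   # 0.5 s, 2.0 Hz, 1024.0 Hz (= fs/2)
```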
This is not the case, however, if the computation takes longer than the measurement time, or if the acquisition requires a trigger condition.
Overlap
Overlap processing involves using time records that are not completely independent of each other, as illustrated below.
(Figure: four successive overlapping time records taken from a continuous signal)
If the time data is not being weighted at all by the application of a window,
then overlap processing does not include any new data and therefore makes no
statistical improvement to the estimation procedure. When windows are being
applied however, the overlap process can utilize data that would otherwise be
ignored.
The figure below shows data that is weighted with a Hanning window. In this case the first and last 20% of each sample period is practically lost and contributes hardly anything towards the averaging process.
(Figure: sampled data weighted with a Hanning window and processed with no overlap)
Applying an overlap of at least 30% means that this data is once again included - as shown below. This not only speeds up the acquisition (for the same number of averages) but also makes it statistically more reliable, since a much higher proportion of the acquired data is included in the averaging process.
(Figure: the same sampled data processed with overlapping windows)
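As an illustration of the acquisition speed-up, the sketch below counts how many averages can be formed from a fixed record; the function name and values are hypothetical, not from the text:

```python
def num_averages(total_samples, blocksize, overlap):
    """Number of (possibly overlapping) blocks available for averaging.

    overlap is the fraction of each block shared with the previous one,
    e.g. 0.5 for 50% overlap."""
    step = int(blocksize * (1.0 - overlap))  # samples advanced per block
    if step <= 0:
        raise ValueError("overlap must be below 100%")
    return 1 + max(0, (total_samples - blocksize) // step)

# With no overlap, 8192 samples give 8 blocks of 1024; a 50% overlap
# nearly doubles the number of averages from the same acquired data.
print(num_averages(8192, 1024, 0.0))   # 8
print(num_averages(8192, 1024, 0.5))   # 15
```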
1.2 Aliasing
Sampling at too low a frequency can give rise to the problem of aliasing which
can lead to erroneous results as illustrated below.
(Figure: a sine wave sampled at too low a rate, f_s < 2 f_m, appears as a lower frequency alias)
The highest frequency that can be measured is fmax which is half the sampling
frequency (fs ), and is also known as the Nyquist frequency (fn ).
f_{max} = \frac{f_s}{2} = f_n
The problem of aliasing can also be illustrated in the frequency domain.
(Figure: measured frequency versus input frequency; inputs at f_2, f_3, f_4 fold back about the lines f_n, 2 f_n = f_s, 3 f_n and 4 f_n onto the measured frequency f_1)
All multiples of the Nyquist frequency (f_n) act as 'folding lines'. So f_4 is folded back on f_3 around the line 3 f_n, f_3 is folded back on f_2 around the line 2 f_n, and f_2 is folded back on f_1 around the line f_n. Therefore signals at f_2, f_3 and f_4 are all seen as signals at frequency f_1.
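The folding construction can be expressed directly in code. This is a sketch with an assumed sampling rate; the function name is illustrative:

```python
def apparent_frequency(f_in, fs):
    """Frequency at which a tone at f_in appears when sampled at fs.

    Aliasing is periodic in fs; anything above the Nyquist frequency
    fn = fs/2 is folded back around the nearest multiple of fn."""
    fn = fs / 2.0
    f = f_in % fs
    return f if f <= fn else fs - f

fs = 1000.0                              # assumed sampling rate, fn = 500 Hz
print(apparent_frequency(300.0, fs))     # 300.0 - below fn, measured correctly
print(apparent_frequency(700.0, fs))     # 300.0 - folded back around fn
print(apparent_frequency(1300.0, fs))    # 300.0 - folded around 2*fn = fs
```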
The only sure way to avoid such problems is to apply an analog or digital anti-aliasing filter to limit the high frequency content of the signal. Filters are less than ideal however, so the cut-off frequency of the filter must be positioned with respect to f_max and the roll off characteristics of the filter.
(Figure: an ideal filter cutting off sharply at f_max, compared with the roll off characteristics of a real filter between f_max and f_s)
1.3 Leakage
A further problem associated with the discrete time sampling of the data is that of leakage. A continuous sine wave such as the one shown below should result in a single spectral line.
(Figure: a continuous waveform and its single line frequency spectrum)
Because the signals are measured over a sample period T, the DFT assumes that
this is representative for all time. When the sine wave is not periodic in the
sample time window, the result is a consequent leakage of energy from the
original line spectrum due to the discontinuities at the edges.
(Figure: the discretely sampled waveform, the waveform the DFT assumes - with discontinuities at the block edges - and the resulting smeared frequency spectrum)
The user should be aware that leakage is one of the most serious problems associated with digital signal processing. Whilst aliasing errors can be reduced by various techniques, leakage errors can never be eliminated. Leakage can be reduced by using different excitation techniques and increasing the frequency resolution, or through the use of windows as described below.
1.3.1 Windows
The problem of discontinuities at the edge can be alleviated either by ensuring that the signal and the sampling period are synchronous, or by ensuring that the function is zero at the start and end of the sampling period. This latter situation can be achieved by applying what is called a 'window function', which normally takes the form of an amplitude modulated sine wave.
(Figure: the sampled signal multiplied by a window function over the sample period T)
The use of windows itself introduces errors of which the user should be aware, so windows should be avoided where possible. The various types of windowing function distribute the energy in different ways. The choice of window depends on the input function and on your area of interest.
Note! Synchronizing the signal and the sampling time, or using a self windowing function, is preferable to using a window.
Window characteristics
The time windows provided take a number of forms - many of which are amplitude modulated sine waves. They are all in effect filters, and the properties of the various windows can be compared by examining their filter characteristics in the frequency domain, where they can be characterized by the factors shown below.
(Figure: window filter characteristic on a log frequency scale, showing the noise bandwidth of the central lobe, the 0 dB reference and the highest side lobe)
The windows vary in the amount of energy squeezed into the central lobe as compared to that in the side lobes. The choice of window depends on both the aim of the analysis and the type of signal you are using. In general, the broader the noise bandwidth, the worse the frequency resolution, since it becomes more difficult to pick out adjacent frequencies with similar amplitudes. On the other hand, selectivity (i.e. the ability to pick out a small component next to a large one) improves with side lobe fall off. Typically a window that scores well on bandwidth is weak on side lobe fall off, so the choice is a trade off between the two. A summary of these characteristics of the windows provided is given in Table 1.1.
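The noise bandwidth comparison can be reproduced numerically. The sketch below computes the equivalent noise bandwidth (in spectral lines) of a few windows using the standard definition - an illustration, not the table from this manual:

```python
import numpy as np

def enbw_lines(w):
    """Equivalent noise bandwidth of a window, in spectral lines."""
    return len(w) * np.sum(w ** 2) / np.sum(w) ** 2

N = 4096
print(enbw_lines(np.ones(N)))     # 1.0  - uniform window, best resolution
print(enbw_lines(np.hamming(N)))  # ~1.36
print(enbw_lines(np.hanning(N)))  # ~1.50 - broader band, better side lobes
```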
Window types
Uniform window
This window is used when leakage is not a probĆ
lem since it does not affect the energy distribuĆ
tion. It is applied in the case of periodic sine
waves, impulses, transients... where the function
is naturally zero at the start and end of the samĆ
pling period.
Hanning
This window is most commonly applied for general purpose analysis of random signals with discrete frequency components. It has the effect of applying a round topped filter. The ability to distinguish between adjacent frequencies of similar amplitude is low, so it is not suitable for accurate measurements of small signals.
Hamming
This window has a higher side lobe than the Hanning but a lower fall off rate
and is best used when the dynamic range is about 50dB.
Blackman
This window is useful for detecting a weak component in the presence of a
strong one.
Kaiser–Bessel
The filter characteristics of this window provide good selectivity, and thus make it suitable for distinguishing multiple tone signals with widely different levels. It can cause more leakage than a Hanning window when used with random excitation.
Flattop
This window's name derives from its low ripple characteristics in the filter pass
band. This window should be used for accurate amplitude measurements of
single tone frequencies and is best suited for calibration purposes.
Force window
Exponential window
Amplitude correction
Consider the example of a sine wave signal and a Hanning window.
(Figure: a sine wave and the same sine wave after windowing, shown in both the time and frequency domains)
Energy correction
Windowing also affects broadband signals.
In this case however it is the energy in the signal which it is usually important to maintain, and an energy correction factor will be applied to restore the energy level of the windowed signal to that of the original signal.
In the case of a Hanning window, the energy in the windowed signal is 61% of that of the original signal. The windowed data therefore needs to be multiplied by 1.63 to correct the energy level.
The actual correction factor that is needed to compensate for the application of
the time window depends on the window correction mode and the number of
windows applied. Table 1.2 lists the values used.
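The Hanning correction factors quoted above can be verified numerically; a minimal sketch, assuming numpy's Hann window:

```python
import numpy as np

w = np.hanning(4096)                 # Hann ("Hanning") window

# Amplitude correction: restores the level of a single tone.
amp_corr = 1.0 / np.mean(w)          # -> ~2.0

# Energy correction: restores the rms level of a broadband signal.
energy_corr = 1.0 / np.sqrt(np.mean(w ** 2))   # -> ~1.63 (= sqrt(8/3))

# The windowed signal's rms level is ~61% of the original,
# hence the correction factor 1/0.61 ~= 1.63.
print(amp_corr, energy_corr, np.sqrt(np.mean(w ** 2)))
```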
1.4 Averaging
Signals in the real world are contaminated by noise - both random and bias.
This contamination can be reduced by averaging a number of measurements in
which the random noise signal will average to zero. Bias errors however, such
as nonlinearities, leakage and mass loading are not reduced by the averaging
process. A number of different techniques for averaging of measurements are
provided.
Linear
This produces a linearly weighted average in which all the individual measurements have the same influence on the final averaged value. If the average value of M consecutive measurement ensembles is \bar{x}, then -

\bar{x} = \frac{1}{M} \sum_{m=0}^{M-1} x_m    Eqn 1-3

The intermediate result is accumulated without scaling,

\bar{x}_{a(n)} = \bar{x}_{a(n-1)} + x_n

The intermediate average is \bar{x}_{a(n)}; the final scaling by 1/M can be done at the end of the acquisition.
Stable
In the case of stable averaging again all the individual measurements have the same influence on the final averaged value. In this case though, the intermediate averaging result is based on -

\bar{x}_n = \frac{n-1}{n}\,\bar{x}_{n-1} + \frac{1}{n}\,x_n    Eqn 1-4
The advantage of stable averaging is that the intermediate averaging results are
always properly scaled. This scaling however makes the procedure slightly
more time consuming.
Exponential
Exponential averaging on the other hand yields an averaging result to which
the newest measurement has the largest influence while the effect of the older
ones is gradually diminished. In this case -
\bar{x}_n = \alpha\,x_n + (1 - \alpha)\,\bar{x}_{n-1}    Eqn 1-5

where the weighting factor α (0 < α ≤ 1) determines how quickly the influence of older measurements dies away.
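The averaging schemes can be compared in a short sketch (illustrative scalar ensembles; the weighting factor alpha is an assumed value, the text does not fix it):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=50)          # 50 measurement ensembles (scalars here)

# Linear (Eqn 1-3): unscaled accumulation, one division at the end.
linear = np.sum(data) / len(data)

# Stable (Eqn 1-4): the intermediate result is properly scaled at each step.
stable = 0.0
for n, x in enumerate(data, start=1):
    stable = (n - 1) / n * stable + x / n

# Exponential: the newest measurement has the largest influence.
alpha = 0.1                         # assumed weighting factor
expo = data[0]
for x in data[1:]:
    expo = alpha * x + (1.0 - alpha) * expo

print(np.isclose(linear, stable))   # True - same average, different scaling
```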
Peak hold
With peak hold, the averaging result contains, for each spectral line k, the maximum value in an absolute sense of all the ensembles considered during the averaging process. Alternatively, with a reference channel, the averaging result contains all values that coincide with the maximum values of the reference channel.
Reading list
R.W. Ramirez
The FFT : Fundamentals and Concepts
Prentice Hall, Englewood Cliffs N.J., 1985.
H.J. Nussbaumer
Fast Fourier Transform and Convolution Algorithms
Springer Verlag, 1982.
R.E. Blahut
Fast Algorithms for Digital Signal Processing
Addison Wesley, 1985.
IEEE-ASSP Society
Programs for Digital Signal Processing
IEEE Press, New York, 1979.
Chapter 2
Structural dynamics testing
Signature analysis
System analysis
Noise levels are a common problem, and specific information about acoustic measurement functions is given in a separate set of documentation on Acoustics and sound quality.
(Figure: a system with multiple inputs and outputs; the frequency response function H(f) relates an input X_i to an output X_j)
For modal purposes the response signal is most commonly the acceleration at
the response DOF due to a force input at another. In this case peaks in the FRF
indicate that low input levels generate high response levels (resonances), while
minima indicate low response levels, even for high inputs (anti-resonances).
(Figure: FRF amplitude on a log scale, showing resonance peaks and anti-resonance minima against frequency)
Measurement points
The number of acquisition channels determines the number of response and excitation points that can be measured at any one time. Their position on the test system can be defined as part of the geometry of the structure. In order to visualize the response of each DOF, their geometrical positions must be defined.
If the response is measured at several response DOFs and the system excited at
a number of inputs then the resulting FRFs are termed Multiple Input Multiple
Output.
(Figure: a composite function is built up from a basic function measured at successive values of a tracking parameter)
Tracking
The dominant parameter describing the change of a signal is termed the tracking parameter. This could be time, rpm, temperature or another quantity. The rotational speed is commonly used as a tracking parameter, and for this a tacho signal is used to determine the rpm.
While a number of channels can be used to measure tracking values, one must
be used to control the acquisition, i.e. to determine when the measurements
will be made.
If the data block covers P revolutions with M samples per revolution, then the blocksize is N = M·P samples, with

P = number of revs per block = (number of revs/sec) · (number of secs per block) = rpm(Hz) · T

M = number of samples per rev = (number of samples/sec) / (number of revs/sec) = f_s / rpm(Hz)
Orders
For rotating machinery most signal phenomena are related to the rotational
speed and its harmonics.
A rotational speed harmonic is called an order. It is the proportionality
constant (O) between the rotational speed (rpm) and the frequency (f).
f = O · rpm (Hz)
Fixed sampling
This is another term for basic signature analysis, where signals are measured
using the standard data acquisition techniques as described above i.e. with a
fixed sampling frequency and sampling period. The rpm is measured but is
used only for control of the acquisition, and annotation of the acquired blocks.
In this case, the maximum order and the order resolution will vary with the rotational speed (rpm).
Order tracking
This involves measuring signals at different rotational speeds but in this case,
the sampling frequency (fs ) and observation time (T) are dependent on the rpm.
The data is sampled synchronously with the rotational speed (rpm). In this
way the number of samples per revolution is kept constant. The signals are in fact
sampled at constant shaft angle increments rather than time increments. This
implies that the maximum order measured remains constant (Omax= M / 2).
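The bookkeeping for order tracking can be sketched with assumed values (a 3000 rpm shaft and 64 samples per revolution - illustrative, not from the text):

```python
rpm = 3000.0
rev_per_sec = rpm / 60.0      # rotational speed in Hz: 50 revs/s
T = 0.4                       # observation time of one block, in seconds

P = rev_per_sec * T           # revs per block                     -> 20.0
M = 64                        # samples per rev, constant while tracking
N = int(M * P)                # blocksize N = M * P                -> 1280

fs = M * rev_per_sec          # rpm-dependent sampling frequency   -> 3200.0 Hz
O_max = M / 2                 # maximum measurable order stays M/2 -> 32.0

print(P, N, fs, O_max)
```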
Chapter 3
Functions
3.1 Time domain functions
Time Record
N instantaneous time samples x(n) are taken, where N = the blocksize. The result of a time record measurement \bar{x}(n) is the ensemble average of a series of M instantaneous time records, where M = the number of averages and A designates the averaging operator.

\bar{x}(n) = A_{m=0}^{M-1}\left(x_m(n)\right), \quad n = 0 \ldots N-1    Eqn 3-1
Autocorrelation
Correlation is a measure of the similarity between two quantities. The autocorrelation function is found by taking a signal and comparing it with a time shifted version of itself.
The time domain autocorrelation function R_xx(τ) is thus acquired by multiplying a signal by the same signal displaced by a time τ and integrating the product over all time.

R_{xx}(\tau) = \lim_{T\to\infty} \frac{1}{T} \int_{0}^{T} x(t)\,x(t+\tau)\,dt    Eqn 3-2
In discrete form the autocorrelation function is calculated as

R_{xx}(n) = F^{-1}\left[S_{xx}(k)\right]    Eqn 3-3

where F^{-1} is the inverse Fourier Transform and S_xx(k) is the discrete autopower spectrum.
It can be seen that the greatest correlation will occur when τ = 0, and the autocorrelation function will thus be a maximum at this point, equal to the mean square value of x(t). Purely random signals will therefore exhibit just one peak, at τ = 0. Periodic signals however will exhibit another peak whenever the time shift equals a multiple of the period.
The autocorrelation function of a periodic signal is also periodic and has the same period as the waveform itself. This property is useful in detecting signals hidden by noise. The advantage of using the autocorrelation function rather than linear averaging is that no synchronizing trigger is required. Certain impulse type signals also show up better using the autocorrelation function rather than using a frequency domain function.
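The relation R_xx(n) = F^-1[S_xx(k)] can be exercised directly. The sketch below forms the autocorrelation from the autopower spectrum (zero-padding avoids circular wrap-around); the routine is illustrative, not the product's implementation:

```python
import numpy as np

def autocorrelation(x):
    """Biased autocorrelation via the autopower spectrum, lags 0..N-1."""
    N = len(x)
    X = np.fft.fft(x, 2 * N)            # zero-pad to 2N against wrap-around
    Sxx = X * np.conj(X)                # autopower spectrum
    return np.fft.ifft(Sxx).real[:N] / N

# A sine wave that is periodic in the block: the autocorrelation peaks
# at zero shift (the mean square value, A^2/2) and at every period.
t = np.arange(256)
x = np.sin(2 * np.pi * t / 32)          # period of 32 samples
r = autocorrelation(x)
print(np.argmax(r), r[0])               # maximum at lag 0, value 0.5
```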
Crosscorrelation
The time domain crosscorrelation function R_xy(τ) compares two different signals and is defined in the same way,

R_{xy}(\tau) = \lim_{T\to\infty} \frac{1}{T} \int_{0}^{T} x(t)\,y(t+\tau)\,dt    Eqn 3-4

As in the case of the autocorrelation function, the discrete crosscorrelation function R_xy(n) between two sampled signals x(n) and y(n) is calculated as

R_{xy}(n) = F^{-1}\left[S_{xy}(k)\right]    Eqn 3-5

with S_xy(k) being the discrete crosspower spectrum between the two signals.
Histogram
The probability histogram q(j) describes the relative occurrence of specific signal levels. Let the signal input range of a sampled signal x(n) be divided into J classes. Each class j, j = 0 ... J-1, can be characterized by an average value x_j and a class increment Δx.
(Figure: a sampled signal with its range divided into classes, and the resulting probability histogram - number of occurrences per class)
q(j) = \frac{1}{N} \sum_{n=0}^{N-1} k_j\left[x(n)\right], \quad j = 0 \ldots J-1    Eqn 3-6

where

k_j\left[x(n)\right] = 1, \quad \text{if } x_j - \frac{\Delta x}{2} \le x(n) < x_j + \frac{\Delta x}{2}

k_j\left[x(n)\right] = 0, \quad \text{otherwise}
The maximum value of J is either the number of time samples (Time data) or
spectral lines in the block.
Probability Density
The probability density p(j) is a normalized representation of the probability histogram q(j),

p(j) = \frac{q(j)}{\Delta x}, \quad j = 0 \ldots J-1    Eqn 3-7
Probability Distribution
The probability distribution d(j) gives the probability (in percent) that the signal level is below a given value. This function is calculated from the probability histogram q(j) given in equation 3-6.

d(j) = \sum_{i=0}^{j} q(i), \quad j = 0 \ldots J-1    Eqn 3-8
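The probability functions above can be sketched together; numpy's histogram stands in for the class counting, and the names are illustrative:

```python
import numpy as np

def probability_functions(x, J):
    """Histogram q(j), density p(j), distribution d(j) over J classes."""
    N = len(x)
    counts, edges = np.histogram(x, bins=J)   # class counting over the range
    dx = edges[1] - edges[0]                  # class increment
    q = counts / N                            # relative occurrence
    p = q / dx                                # normalized density
    d = np.cumsum(q) * 100.0                  # distribution in percent
    return q, p, d

x = np.random.default_rng(1).normal(size=10000)
q, p, d = probability_functions(x, 32)
print(q.sum(), d[-1])   # -> 1 and 100 (up to rounding): every sample
                        # falls in exactly one class
```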
3.2 Frequency domain functions
Spectrum
The instantaneous discrete frequency spectrum X(k) is defined as the discrete Fourier transform of the instantaneous sampled time record. The result of a spectrum measurement \bar{X}(k) is the ensemble average of a series of M instantaneous spectra,

\bar{X}(k) = A_{m=0}^{M-1}\left(X_m(k)\right), \quad k = 0 \ldots N-1    Eqn 3-10

Since only real valued time records are considered, the frequency spectrum has a Hermitian symmetry.
Autopower Spectrum
The autopower spectrum S_xx(k) is the product of the averaged frequency spectrum with its complex conjugate,

S_{xx}(k) = \bar{X}(k)\,\bar{X}^*(k)

Thus while the frequency spectrum is complex and carries phase information, the autopower spectrum is real and contains no phase information.
Since only real valued time records are considered, the autopower spectrum is symmetric with respect to zero frequency.
(Figure 3-2 Autopower spectra: a sine wave of amplitude A has a double sided frequency spectrum X with lines of A/2 at -f and +f, a double sided autopower spectrum S_xx with lines of (A/2)^2, and a single sided (rms power) autopower spectrum G_xx with a line of A^2/2 at +f)
Of this double sided frequency spectrum, only the positive frequency values are considered. In order to obtain a time signal power estimate, a summation of the power spectra values at the positive and negative frequencies must be made, resulting in the so-called RMS Autopower spectrum G_xx(k),

G_{xx}(k) = 2\,S_{xx}(k), \quad k = 1 \ldots \frac{N}{2} - 1
The Power Spectral Density normalizes the level with respect to the frequency resolution. This overcomes differences that may arise from using a specific bandwidth. This is the standard way of measuring stationary broadband signals.
For transient signals the Energy Spectral Density may be more interesting, since this looks at the level of the energy rather than the average power over the total acquisition time, and is obtained by multiplying the Power Spectral Density by the measurement period.
The interrelationship of these autopower formats is shown in Table 3.1. The parameters A and T are as illustrated in Figure 3-2, and Δf is the frequency resolution. Examples of the different modes and units are shown below.
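The A/2, (A/2)^2 and A^2/2 relationships of Figure 3-2 can be verified for a leakage-free sine wave; a minimal sketch:

```python
import numpy as np

N = 1024
A = 3.0
t = np.arange(N)
x = A * np.sin(2 * np.pi * 16 * t / N)   # exactly 16 cycles: no leakage

X = np.fft.fft(x) / N                    # double sided spectrum
Sxx = (X * np.conj(X)).real              # double sided autopower
Gxx = 2.0 * Sxx[1:N // 2]                # single sided (rms power), no DC

print(abs(X[16]))     # A/2     = 1.5
print(Sxx[16])        # (A/2)^2 = 2.25
print(Gxx.max())      # A^2/2   = 4.5, the mean square value of the signal
```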
Crosspower spectrum
The crosspower spectrum S_xy is a measure of the mutual power between two signals at each frequency in the analysis band. It is the dual of the crosscorrelation function.
Coherence
There are three types of coherence functions: the ordinary coherence, the partial coherence and the virtual coherence.
Ordinary Coherence
The (squared) ordinary coherence between a signal X_i(k) and X_j(k) is defined by,

\gamma_{ij}^2(k) = \frac{\left|S_{ij}(k)\right|^2}{S_{ii}(k)\,S_{jj}(k)}    Eqn 3-16

where S_ij(k) is the averaged crosspower, and S_ii(k) and S_jj(k) are the averaged autopowers.
It is a ratio of the maximum energy in a combined output signal due to its various components, and the total amount of energy in the output signal. Coherence can be used as a measure of the power in one channel that is caused by the power in another channel. As such it is useful in assessing the accuracy of transfer function measurements. It does not however need to apply to an input and an output, and can also be measured between two inputs (e.g. shakers).
The coherence function can take values that range between 0 and 1. A high value (near 1) indicates that the output is due almost entirely to the input and you can feel confident in the frequency response function measurements. A low value (near 0) indicates problems such as extraneous input signals not being measured, noise, nonlinearities or time delays in the system.
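Eqn 3-16 with block averaging can be sketched as follows; the routine and test signals are illustrative (note that without averaging over several blocks the estimate is identically 1):

```python
import numpy as np

def ordinary_coherence(x, y, blocksize):
    """Averaged squared ordinary coherence per spectral line (Eqn 3-16)."""
    M = len(x) // blocksize
    w = np.hanning(blocksize)
    Sxx = Syy = 0.0
    Sxy = 0.0 + 0.0j
    for m in range(M):
        s = slice(m * blocksize, (m + 1) * blocksize)
        X = np.fft.rfft(w * x[s])
        Y = np.fft.rfft(w * y[s])
        Sxx = Sxx + (X * np.conj(X)).real   # averaged autopowers
        Syy = Syy + (Y * np.conj(Y)).real
        Sxy = Sxy + X * np.conj(Y)          # averaged crosspower
    return np.abs(Sxy) ** 2 / (Sxx * Syy)

rng = np.random.default_rng(2)
x = rng.normal(size=64 * 256)
related = ordinary_coherence(x, 2.5 * x, 256)              # pure linear system
unrelated = ordinary_coherence(x, rng.normal(size=x.size), 256)
print(related.mean())     # ~1.0 - output entirely due to the input
print(unrelated.mean())   # near 0 - independent signals
```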
The multiple coherence function between a single response spectrum Y(k) and a set of reference spectra X_i(k) is calculated from

\gamma_{y:x}^2(k) = 1 - \frac{S_{yy\cdot x!}(k)}{S_{yy}(k)}    Eqn 3-17

where S_{yy·x!}(k) is the conditioned autopower of the response, with the contributions of all references removed.
Partial Coherence
The partial coherence is the ordinary coherence between conditioned signals. Conditioned signals are those where the causal effects of other signals are removed in a linear least squares sense.
To define the partial coherence, consider the signals X_1, ..., X_i, X_j, ... The partial coherence between X_i and X_j, after eliminating the signals X_1 ... X_g, is given by,

\gamma_{ij\cdot g}^2(k) = \frac{\left|S_{ij\cdot g}(k)\right|^2}{S_{ii\cdot g}(k)\,S_{jj\cdot g}(k)}    Eqn 3-18
with:
S_{ii·g}(k) = autopower of signal X_i without the influence of the signals X_1 ... X_g
S_{jj·g}(k) = autopower of signal X_j without the influence of the signals X_1 ... X_g
S_{ij·g}(k) = crosspower between signals X_i and X_j without the influence of the signals X_1 ... X_g.
The partial coherence can take values between 0 and 1.
Virtual Coherence
The virtual coherence is an ordinary coherence between a signal and a principal component, which is discussed below. The virtual coherence is calculated from,

\gamma_{vij}^2(k) = \frac{\left|S'_{ij}(k)\right|^2}{S'_{ii}(k)\,S_{jj}(k)}    Eqn 3-19
with:
S'_{ii}(k) = autopower of principal component X'_i
S'_{ij}(k) = crosspower between signal X_j and principal component X'_i
The value of the virtual coherence is always between 0 and 1. The sum of the
virtual coherences between any signal and all principal components is also in
the range [0,1].
[ U ] h[ U ]Ă Ă I
where
S'xx = diagonal matrix with the autopower of the principal component
spectra on the diagonal.
{X'(K)} = an uncorrelated set of principal component signals.
[U] = unitary transformation matrix.
The major application of the principal component spectra is in determining the number of uncorrelated mechanisms (sources) in a signal set. A well known example is the diagnosis of multiple input excitation for multiple input/multiple output FRF estimation.
Frequency Response Function
Let N_i be the number of system inputs and N_o the number of system outputs, let {X(k)} be an N_i-vector with the system input signals and {Y(k)} an N_o-vector with the system output signals. A frequency response function matrix [H(k)] of size (N_o, N_i) can then be defined such that,

\{Y(k)\} = [H(k)]\,\{X(k)\}
The system described above is an ideal one where the output is related directly to the input and there is no contamination by noise. This is not the case in reality, and various estimators are used to estimate [H(k)] from the measured input and output signals.
The H1 Estimator
The most commonly used one is the H1 estimator, which assumes that there is no noise on the input and consequently that all the X measurements are accurate.

(Diagram: noise N on the output, Y = H X + N)
It minimizes the noise on the output in a least squares sense. In this case the
transfer function is given by -
H_1(k) = \frac{S_{yx}(k)}{S_{xx}(k)}    Eqn 3-22
This estimator tends to give an underestimate of the FRF if there is noise on the
input. H1 estimates the anti-resonances better than the resonances. Best results
are obtained with this estimator when the inputs are uncorrelated.
The H2 Estimator
Alternatively, the H2 estimator can be used. This assumes that there is no noise on the output and consequently that all the Y measurements are accurate.

(Diagram: noise M on the input, Y = H (X - M))
It minimizes the noise on the input in a least squares sense and in this case the
transfer function is given by -
H_2(k) = \frac{S_{yy}(k)}{S_{yx}(k)}    Eqn 3-23
This estimator tends to give an overestimate of the FRF if there is noise on the output. This estimator estimates the resonances better than the anti-resonances.
Note! This estimator can only be implemented in the case of a single output.
The Hv Estimator
Finally, with the Hv estimator, [H(k)] is calculated from the eigenvector corresponding to the smallest eigenvalue of a matrix [S_xxy]:

[S_{xxy}] = \begin{bmatrix} S_{xx} & S_{xy} \\ S_{yx} & S_{yy} \end{bmatrix}    Eqn 3-24
This estimator minimizes the global noise contribution in a total least squares
sense. When using this estimator the partitioning of the noise over the input
and output signals can be scaled.
(Diagram: noise M on the input and N on the output, Y - N = H (X - M))
This estimator provides the best overall estimate of the frequency function. It
approximates to the H2 estimator at the resonances and the H1 estimator at the
anti-resonances. It does however require more computational time than the
other two.
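The H1 and H2 estimators for a single input/single output case can be sketched as follows (the test system - y = 4x plus output noise - is an assumed example; with noise on the output only, H1 remains unbiased while H2 reads high):

```python
import numpy as np

def h1_h2(x, y, blocksize):
    """Averaged H1 (Eqn 3-22) and H2 (Eqn 3-23) estimates, SISO case."""
    M = len(x) // blocksize
    Sxx = Syy = 0.0
    Syx = 0.0 + 0.0j
    for m in range(M):
        s = slice(m * blocksize, (m + 1) * blocksize)
        X = np.fft.rfft(x[s])
        Y = np.fft.rfft(y[s])
        Sxx = Sxx + (X * np.conj(X)).real
        Syy = Syy + (Y * np.conj(Y)).real
        Syx = Syx + Y * np.conj(X)
    return Syx / Sxx, Syy / Syx

rng = np.random.default_rng(3)
x = rng.normal(size=128 * 256)
y = 4.0 * x + 0.5 * rng.normal(size=x.size)   # noise on the output only
H1, H2 = h1_h2(x, y, 256)
print(np.abs(H1).mean())   # ~4.0: H1 minimizes output noise, stays unbiased
print(np.abs(H2).mean())   # > H1: H2 overestimates when the output is noisy
```

Per bin, |H2|/|H1| = 1/γ², so H2 always lies at or above H1; the two coincide only where the coherence is 1.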
Impulse Response
The impulse response (IR) function matrix [h(t)] expresses the time domain relationship between the inputs and outputs of a linear system. This relationship takes the form of a convolution integral.
[h(t)] is calculated using the inverse Fourier transform of the frequency response function as shown below -

[h(t)] = F^{-1}\left([H(f)]\right)

Impulse response functions depend on there being at least one reference channel and one response channel.
3.3 Composite functions
The functions described in this section represent functions that can be acquired
or processed during a Signature analysis. Since this type of analysis is intended
to examine the evolution of signals as a function of changing environment (e.g.
rpm, time, ...), there need to be functions that express this evolution. These are
called composite functions as they are derived from the `basic' measurement
functions described in the previous section, for different environmental
conditions.
Overall level (OA)
This function describes the evolution of the total energy in the measured signal.
As such it is always expressed as a frequency spectrum rms value. It is
available with all basic measurement functions. Energy correction is applied to
this function.
When the signal contains spikes and is therefore defined as `impulse', an
additional peak detector mechanism is implemented. In this case the signal is
first averaged using the 35 ms averaging time constant and then peaks are
detected using a decay rate of 1500 ms.
Frequency section
This function describes the evolution of the energy of the measured signal over
the rpm range in a specified frequency band. It is always expressed as an rms
frequency spectrum and is available only when the basic measurement function
is a frequency domain function.
Bandwidth
The center frequency is the frequency at which the section will be calculated
and is specified by the Center parameter. The Lower band value and the Upper
band value follow from the selected band mode, as illustrated below.
[Figure: frequency section band definitions versus rpm - with Band mode =
frequency or Band mode = lines the band width Δf is constant; with Band mode
= % the band width is a percentage of the center frequency fc]
Order sections
This function describes the evolution of the energy of the measured signal in a
specified `order' band. Orders are introduced in section 2.3, in the chapter on
types of testing. An `order' band is a frequency band whose center frequency
changes as a function of the measurement environment or tracking parameter.
It is necessary therefore that the tracking parameter be a `frequency' type of
parameter (e.g. rotation speed in rpm). An order is nothing other than a
multiple of this basic tracking parameter. The evolution of the energy in a
specified order band is expressed as a function of the measured rpm. Through
post processing it is also possible to examine it in terms of measured time or
frequency.
Possible means of defining the span for integration are:
- a fixed frequency range
- a fixed number of spectral lines (the lines closest to the exact value are used)
- a fixed order bandwidth
- a percentage of the selected order value
These options are illustrated below.
[Figure: order section band definitions versus rpm - Band mode = frequency or
lines (f = constant), Band mode = order (O = constant, so f = constant · rpm),
and Band mode = % (ΔO_i = bandwidth(%) · i for order i)]
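The band definitions can be sketched as follows; the function name and the exact parameter semantics are assumptions based on the figure, not a documented API:

```python
def order_band_limits(order, rpm, band_mode, width):
    """Frequency limits of an order band at a given rpm (illustrative sketch).

    band_mode: 'frequency' - width fixed in Hz
               'order'     - width fixed in orders
               'percent'   - width as a percentage of the selected order value
    """
    fc = order * rpm / 60.0                # centre frequency of the order line
    if band_mode == 'frequency':
        half = width / 2.0
    elif band_mode == 'order':
        half = width * rpm / 60.0 / 2.0    # convert the order width to Hz
    elif band_mode == 'percent':
        half = (width / 100.0) * order * rpm / 60.0 / 2.0
    else:
        raise ValueError(band_mode)
    return fc - half, fc + half
```

For example, order 2 at 3000 rpm has its centre at 100 Hz, and the three modes place the band limits symmetrically around it.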
Octave sections
An octave section represents the summation of values over octave bands. The
center frequencies of the bands are defined in the ISO 266 norm. Possible
octave bands are 1/1, 1/2, 1/3, 1/12 and 1/24 octaves.
3.4 Units
[Table of canonical dimensions and their reference units; e.g. light (luminous
intensity) li: candela, cd]
This means that all data in either the internal data structures of the LMS
software or the database is stored in these units. A physical quantity with a
dimension that is a combination of the above canonical dimensions will be
allocated a unit in the internal unit system that is a combination of the
corresponding reference units.
This section describes the ways in which rms calculations are performed for
different measurement functions. RMS stands for Root Mean Square and is a
measure of the energy in a signal.
When dealing with time samples, a certain number of samples must be
analyzed in order to obtain a measure of the nature of, and the energy in, the
signal. This is done by squaring the values, summing them, and then taking an
average (mean) to remove the influence of the number of samples. Then the
square root of the mean is taken to arrive at the rms value. So for a range of
samples starting at sample 0 and ending at sample k
Rms = √( (1/(k+1)) Σ_{i=0}^{k} y_i² )        Eqn. 3-27

[Figure: time record samples y_i from i = 0 to i = k]
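Eqn 3-27 amounts to the following minimal numpy sketch (the helper name is illustrative):

```python
import numpy as np

def time_rms(y):
    """RMS of the samples y_0 .. y_k: square, average over the k+1 values,
    then take the square root (Eqn 3-27)."""
    y = np.asarray(y, dtype=float)
    return np.sqrt(np.mean(y ** 2))
```

A full-scale sine of amplitude A sampled over whole periods gives the familiar A/√2.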
Frequency spectra
The frequency range over which you want the rms value computed is defined
by the lower and upper limits f1 and f2. All lines completely within the range
(Ai, where i takes values 1 to k-1) will be included in the calculations. For the
lines at the beginning and the end (A0 and Ak), half of each value is taken.
[Figure: amplitude spectrum lines A_i between the band limits f1 and f2, with
i = 0 at f1 and i = k at f2]
Rms = √( A_0²/2 + Σ_{i=1}^{k-1} A_i² + A_k²/2 )        Eqn. 3-28
For functions whose line values already represent squared (power) quantities:

Rms = √( A_0/2 + Σ_{i=1}^{k-1} A_i + A_k/2 )        Eqn. 3-29
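A sketch of the amplitude-spectrum case of Eqn 3-28, with half weight on the squared end lines (the helper name is illustrative):

```python
import numpy as np

def band_rms(A, i0, i1):
    """RMS over the spectral lines i0..i1 of an amplitude spectrum A,
    counting the squared end lines at half weight (Eqn 3-28)."""
    A = np.asarray(A, dtype=float)
    s = 0.5 * A[i0] ** 2 + np.sum(A[i0 + 1:i1] ** 2) + 0.5 * A[i1] ** 2
    return np.sqrt(s)
```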
Frequency response functions
An FRF is a ratio of two spectra (H_rms relates X_rms and F_rms) rather than
an energy quantity, so its rms value is formed as a mean over the k+1 spectral
lines:

H_rms = √( (1/(k+1)) ( A_0²/2 + Σ_{i=1}^{k-1} A_i² + A_k²/2 ) )        Eqn. 3-31
Sound power, sound intensity (active and reactive), SFTVI and SFUI
SFTVI (sound field temporal uniformity indicator) and SFUI (sound field
uniformity indicator) are ISO defined functions for acoustic measurements and
analysis. The rms computes the total energy in a band, so since these are
already a measure of energy, the values of the spectral lines can simply be
added.
Rms = A_0/2 + Σ_{i=1}^{k-1} A_i + A_k/2        Eqn. 3-32

Rms = √( A_0²/2 + Σ_{i=1}^{k-1} A_i² + A_k²/2 )        Eqn. 3-33
Part II
Acoustics and Sound Quality
Chapter 4
Terminology and definitions . . . . . . . . . . . . . . 55
Chapter 5
Acoustic measurements . . . . . . . . . . . . . . . . . 67
Chapter 6
Sound quality . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Chapter 7
Sound metrics . . . . . . . . . . . . . . . . . . . . . . . . . 99
Chapter 8
Acoustic holography . . . . . . . . . . . . . . . . . . . . 117
Chapter 4
Terminology and definitions
Reference conditions
Octave bands
Acoustic weighting
The amount of noise emitted from a source depends on the sound power of that
source. The sound power is a basic characteristic of a noise source, providing
an absolute parameter that can be used for comparison. This differs from the
sound pressure levels it gives rise to, which depend on a number of external
factors.
The total sound power P_I is the sum of the partial sound powers P_i:

P_I = Σ_{i=1}^{N} P_i        Eqn 4-1
Sound pressure
The effect of the sound power emanating from a source is the level of sound
pressure. Sound pressure is what the ear detects as noise, the level of which
depends to a great extent on the acoustic environment and the distance from
the source. The sound pressure is defined as the difference between the actual
and ambient pressure.
This is a scalar quantity that can be derived from measured sound pressure
spectra or autopower spectra either at one specific frequency (spectral line), or
integrated over a certain frequency band.
Sound pressure measurements can be obtained at each measurement point, and
are independent of the measurement direction (X, Y, or Z). The units are Pascal
(Pa) or N/m².
The total sound power passing through a surface S is obtained by integrating
the intensity over that surface:

Total power P = ∮_S I · dS        Eqn 4-2

where the intensity I is the time average of the instantaneous intensity:

I = (1/T) ∫_0^T I(t) dt        Eqn 4-3
As such, if the energy is flowing back and forth resulting in zero net energy
flow then there will be zero intensity.
Free field
Diffuse field
Particle velocity
Pressure variations give rise to movements of the air particles. It is the product
of pressure and particle velocity that results in the intensity. In a medium
without mean flow, therefore,

I = p · v        Eqn 4-4
By combining equations 4-4 and 4-5 it can be seen that in a free field a
relationship exists enabling the acoustic intensity to be determined from the
effective pressure of a plane wave.
I = p_e² / ( ρ · c )        Eqn 4-6
This is defined as the product ρ·c of the mass density of a medium and the
velocity of sound in that medium, where
ρ = mass density (kg/m³)
c = velocity of sound in the medium (m/s)
dB scale
Since the range of pressure levels that can be detected is large and the ear
responds logarithmically to a stimulus, it is practical to express acoustic
parameters as a logarithmic ratio of a measured value to a reference value.
Hence the use of the decibel scales for which the reference values for intensity,
pressure and power are defined below.
L_W = 10 log_10( |P_I| / P_0 )        Eqn 4-8
L_v = 20 log_10( v / v_0 )        Eqn 4-9
This is the logarithmic measure of the absolute value of the intensity vector.
L_I = 10 log_10( |I| / I_0 )        Eqn 4-10
This is the logarithmic measure of the absolute value of the normal intensity
vector.
L_In = 10 log_10( |I_n| / I_0 )   dB        Eqn 4-11
This is defined as

L_p = 10 log_10( p² / p_0² ) = 20 log_10( p / p_0 )        Eqn 4-12
The above reference values for intensity and power correspond to an effective
rms reference pressure of

p_0 = 0.00002 Pa = 20 µPa
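The decibel definitions above can be sketched as follows. The intensity reference value used here (1 pW/m²) is the standard ISO value and is an assumption in this sketch, since only the pressure reference survives in the text:

```python
import numpy as np

P0 = 20e-6    # reference pressure, 20 µPa
I0 = 1e-12    # reference intensity, 1 pW/m² - standard ISO value, assumed here

def pressure_level(p):
    """Sound pressure level Lp = 20 log10(p/p0), Eqn 4-12."""
    return 20 * np.log10(p / P0)

def intensity_level(I):
    """Intensity level LI = 10 log10(|I|/I0), Eqn 4-10."""
    return 10 * np.log10(np.abs(I) / I0)
```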
Complete (1/1) octave bands represent frequency bands where the center
frequency of one band is approximately twice (according to standardized
values) that of the previous one.
f_{c,i+1} = 2 · f_{c,i}
Partial octave bands (1/3, 1/12, 1/24, ...) represent frequency bands where the
band limits are derived from the center frequency fc as follows:

The Lower band limit of a 1/x octave band is fc · 2^(-1/(2x))
The Upper band limit of a 1/x octave band is fc · 2^(+1/(2x))
The bands defined by these formulas are termed the `natural' bands. The
International ISO norm 266 defines normalized center frequencies for octave
bands and the values for 1/1, 1/2 and 1/3 octave bands are listed in table 4.1.
Natural frequencies are used for calculations but the normalized frequencies
are used for annotation. Octave bands above or below the normalized values
are annotated with the natural frequencies.
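The natural band limits follow directly from the formulas above (a minimal sketch):

```python
def octave_band_limits(fc, x):
    """Natural lower and upper limits of a 1/x octave band centred on fc:
    lower = fc * 2**(-1/(2x)), upper = fc * 2**(+1/(2x))."""
    lower = fc * 2.0 ** (-1.0 / (2.0 * x))
    upper = fc * 2.0 ** (+1.0 / (2.0 * x))
    return lower, upper
```

The upper/lower ratio is 2^(1/x), and fc remains the geometric mean of the two limits.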
Freq     1/1 1/2 1/3    Freq     1/1 1/2 1/3    Freq     1/1 1/2 1/3
16        x   x   x     160               x     1600              x
18                      180           x         1800
20                x     200               x     2000      x   x   x
22.4          x         224                     2240
25                x     250       x   x   x     2500              x
28                      280                     2800          x
31.5      x   x   x     315               x     3150              x
35.5                    355           x         3550
40                x     400               x     4000      x   x   x
45            x         450                     4500
50                x     500       x   x   x     5000              x
56                      560                     5600          x
63        x   x   x     630               x     6300              x
71                      710           x         7100
80                x     800               x     8000      x   x   x
90            x         900                     9000
100               x     1000      x   x   x     10000             x
112                     1120                    11200         x
125       x   x   x     1250              x     12500             x
140                     1400          x         14000
160               x     1600              x     16000     x   x   x
Table 4.1 Normalized frequencies (Hz)
Frequency weighting
[Figure: relative response (dB) of the A, B, C and D frequency weighting curves
as a function of frequency (10 Hz to 20 kHz); the A curve attenuates low
frequencies most strongly]
Chapter 5
Acoustic measurements
Frequency bands
Field indicators
This section describes the acoustic quantities that can be measured. From
measured quantities it is possible to derive further quantities as described in
section 5.2.
Sound Intensity
The sound intensity in a specified direction at a point is the average rate of
sound energy transmitted in the specified direction through a unit area normal
to this direction at the point considered.
In most situations it is the component of the sound intensity vector normal to
the measurement surface, I_n, which is measured.
In order to determine sound intensity, both the instantaneous pressure and the
corresponding particle velocity must be measured simultaneously. In practice,
the sound pressure can be obtained directly using a microphone. The
instantaneous particle velocity can be calculated from the pressure gradient
between two closely spaced microphones. A sound intensity probe can
therefore consist of two closely spaced pressure microphones which measure
both the sound pressure and the pressure gradient between the microphones.
For frequency domain calculations, it can be shown that the sound intensity can
be calculated from the imaginary part of the crosspower between the two
microphone signals. The following formula is used
I = Imag( S_{1,2} ) / ( 2π f ρ d )        Eqn 5-1

where S_{1,2} is the double sided crosspower between the two microphone
signals, f is the signal frequency, d is the microphone distance and ρ is the air
density.
For this function, all channels are processed as channel pairs, each pair
consisting of two consecutive channels. It therefore requires that an even
number of channels is defined.
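Eqn 5-1 for one channel pair can be sketched as follows; the air density value is an assumed, illustrative constant:

```python
import numpy as np

RHO_AIR = 1.21   # air density in kg/m³ - an assumed, illustrative value

def intensity_from_crosspower(S12, f, d, rho=RHO_AIR):
    """Active sound intensity from the double sided crosspower S12 of two
    microphone signals a distance d apart (Eqn 5-1)."""
    return np.imag(S12) / (2.0 * np.pi * f * rho * d)
```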
I_reactive = ( S_{1,1} - S_{2,2} ) / ( 2π f ρ d )        Eqn. 5-2
For the idealized case of measurements in the free field (free space without
reflections) and in the direction of propagation, the reactive intensity is zero.
Residual intensity
This is defined as

L_residual = L_p - δ_pI0

where L_p is the measured sound pressure level and δ_pI0 is the pressure
residual intensity index. To calculate the residual intensity, therefore, it is
necessary to have the pressure residual intensity index available. This is
described below.
Intensity measurements can be made in a sound field where the sound
intensity level is in the range above the residual intensity level. L_p is defined
in equation 4-12, and L_I in equation 4-10. In a free field the pressure and
intensity levels are the same, whereas in all other cases the measured intensity
will be less than the pressure. The residual intensity (L_p - δ_pI0) represents
the lowest intensity level which can be detected by the system for the given
sound pressure level.
For the calculation of the pressure residual intensity index of a sound intensity
probe, it is required to place the intensity probe in a sound field such that the
sound pressure is uniform over the volume. In these conditions there will be
no difference between the two signals at both microphones, and hence the
measured intensity should be zero. However, the phase mismatch between the
two measuring channels causes a small difference between the two signals,
making it appear as if there is some intensity. The intensity detected can be
likened to a noise floor below which measurements cannot be made. This
intensity lower limit is not fixed but varies with the pressure level. What is
fixed is the difference between the pressure and the intensity level when the
same signal is fed to both channels. It is this which is defined as the pressure
residual intensity index. Mathematically, therefore, the pressure residual
intensity index is

δ_pI0 = L_p - L_In

where L_p is the sound pressure level and L_In is the normal sound intensity
level.
[Figure: sound pressure level L_p and intensity level L_I (dB) versus frequency;
the residual intensity level lies δ_pI0 below L_p, and the gap L_d is the dynamic
capability]
The `bias error factor' is selected according to the grade of accuracy required
from the table below.
Acoustic functions can be derived from ones that have been measured. This
section describes these analysis functions and Table 5.2 gives an overview of
them and the measured quantities required for their derivation.
Calculations will be made over specific frequency bands. This subject is
discussed in section 5.4. Some functions are computed over a known area. The
subject of defining surfaces (meshes) for acoustic functions is discussed in
section 5.3.
Effective pressure
This is derived by integrating the measured pressure autopower spectrum
Φ_p(f) (equivalently, the squared amplitude spectrum A_p(f)) over the band of
interest:

p_e² = ∫_{f1}^{f2} Φ_p(f) df = ∫_{f1}^{f2} A_p²(f) df        Eqn 5-8
Acoustic intensity
This is a vector quantity calculated directly from measured acoustic intensity
functions:

I = ∫_{f1}^{f2} I(f) df        Eqn 5-9
When intensity measurements are not available but sound pressure
measurements are available, then the magnitude of the acoustic intensity can
be computed from the effective sound pressure p_e and the acoustic impedance
ρ·c

|I| = p_e² / ( ρ_0 · c )        Eqn 5-10

but only under the assumption of plane progressive waves in a free field.
Sound power
This is calculated from the geometrical area S and the acoustic intensity
component perpendicular to the surface; under the free field assumption the
intensity can in turn be obtained from the effective pressure:

P = ( p_e² / ( ρ_0 · c ) ) · S        Eqn 5-12
Particle velocities
These can be calculated when both acoustic intensity and sound pressure data
are available

v = I / p        Eqn 5-13
All the possible analysis functions are summarized in Table 5.2. (These are
based on the assumption of plane progressive waves in a free field.)

Function        Derived from                       Formula              Units
Intensity       measured intensity I               I                    W/m²
Sound power P   intensity and area                 I · S                W
                sound pressure spectrum and area   (p_e²/(ρ_0·c)) · S   W
Acoustic measurements differ from other types of signals in that they are
measured some distance away from the object rather than on the test structure
itself. The measurement points are termed associated nodes, which are
surrounded by a hypothetical measurement surface. An organized collection of
measurement surfaces and nodes is termed a measurement mesh and there are
ISO standards that define such meshes for particular measurement types.
[Figure: acoustic measurement nodes on a measurement surface surrounding a
source above a reflecting plane]
a = ∫_{f1}^{f2} a(f) df        Eqn 5-14

The integration of a continuous function a(f) is replaced by a finite sum over the
corresponding discrete samples:

a = a_1/2 + Σ_i a_i + a_2/2        Eqn 5-15

where a_1 = a(f_1), a_2 = a(f_2), and f_1 < f_i < f_2.
This integration takes into account the full value of all data samples between
the two limits, and 50 % of the first and last sample. It can be obtained between
any two measured frequency limits.
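Eqn 5-15 amounts to the following minimal sketch (the helper name is illustrative):

```python
import numpy as np

def band_sum(a):
    """Discrete band integration (Eqn 5-15): full weight for the interior
    samples, half weight for the first and last sample."""
    a = np.asarray(a, dtype=float)
    return 0.5 * a[0] + a[1:-1].sum() + 0.5 * a[-1]
```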
It is good practice to maintain the type of frequency band that was used in the
acquisition of the data for the calculation. In fact, data acquired in octave bands
must remain in those bands for the analysis. The calculation of the field
indicators also makes little sense unless the analysis bands correspond with the
measurement bands.
When attempting to analyze the sound power being radiated from a noise
source in situ, the international standard ISO 9614-1 lays out a number of
measurement conditions which must be adhered to if the results are to be
considered acceptable for this purpose. A number of criteria must be satisfied,
based on the values of particular indicator functions, to ensure the requisite
adequacy of the measurements and meshes. This section describes both the
field indicators themselves and the criteria used to assess the results.
F1 Temporal variability indicator
This gives the measure of temporal (or time) variability of the field. It is
defined as follows

F1 = (1/Īn) √( (1/(M-1)) Σ_{k=1}^{M} ( I_nk - Īn )² )        Eqn 5-16

where Īn is the mean value of the M short time averages I_nk, defined in the
following equation:

Īn = (1/M) Σ_{k=1}^{M} I_nk        Eqn 5-17
F2 Pressure-intensity indicator
In a free field where sound is only radiating out from a source, the pressure and
intensity levels are equal in magnitude. In a diffuse or reactive field however,
intensity can be low when the pressure is high. A lower measured intensity
can also arise if the sound wave is incident at an angle to the probe, since this
also affects the phase change detected across the probe. The pressure-intensity
indicator examines the difference between the pressure and the absolute values
of intensity. This function can be determined on a point to point basis during
the acquisition, but the function F2 described here represents the value
averaged over all the measured surfaces.
F2 = L_p - L_|In|        Eqn 5-18

L_p = 10 log_10( (1/N) Σ_{i=1}^{N} ( p_i / p_0 )² )        Eqn 5-19
where i indicates the measurement surface and N is the total number of
surfaces (of the local component).
L_|In| = 10 log_10( (1/N) Σ_{i=1}^{N} |I_ni| / I_0 )        Eqn 5-20

where |I_ni| is the absolute (unsigned) value of the normal intensity vector.
Note! A large difference between intensity and pressure suggests that the probe is
not well aligned or that you are operating in a diffuse field.
F3 Surface pressure-intensity indicator
This indicator is formed in the same way as F2, but from the signed normal
intensity level

L_In = 10 log_10( |(1/N) Σ_{i=1}^{N} I_ni| / I_0 )        Eqn 5-22

If F3 - F2 is greater than 3 dB, the fraction of negative partial power flows is
too great and the set of measurements does not satisfy the ISO requirements.
F4 Non–uniformity indicator
This indicates the measure of spatial (or positional) variability that exists in the
field. It can be compared with the statistical parameter standard deviation.
F4 = (1/Īn) √( (1/(N-1)) Σ_{i=1}^{N} ( I_ni - Īn )² )        Eqn 5-23
where i indicates the measurement surface and N is the total number of
surfaces. Īn is the mean of the normal acoustic intensity vectors taken over the
N surfaces.
Īn = (1/N) Σ_{i=1}^{N} I_ni        Eqn 5-24
L_d - F2 ≥ 0        Criterion 1
If this criterion is not satisfied then it is an indication that the levels being
measured are too low for the source and that it is necessary to reduce the
average distance between the measurement surface and the source.
If the difference between these two indicators is greater than 3 dB, then the
situation can be improved by reducing the average distance between the
measurement surface and the source, shielding measurement sources from the
extraneous noises or reducing some reflections towards the source under
investigation.
A check on the adequacy of the measurement positions (mesh) can be made
using the following criterion.
N ≥ C · F4²        Criterion 2
Where the same mesh is used for a number of bands, then the maximum value
of C · F4² will be considered when evaluating the criterion.
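The indicators and criterion 2 can be sketched as follows; the function names are illustrative, and the intensity reference value is the standard ISO one (an assumption, since it does not survive in the text):

```python
import numpy as np

P0 = 20e-6   # reference pressure (Pa)
I0 = 1e-12   # reference intensity (W/m²) - standard ISO value, assumed here

def f2_indicator(p, In):
    """Pressure-intensity indicator F2 = Lp - L|In| (Eqns 5-18 to 5-20);
    p and In hold one rms pressure / normal intensity per surface."""
    Lp = 10.0 * np.log10(np.mean((np.asarray(p) / P0) ** 2))
    LIn = 10.0 * np.log10(np.mean(np.abs(In) / I0))
    return Lp - LIn

def f4_indicator(In):
    """Non-uniformity indicator F4 (Eqn 5-23)."""
    In = np.asarray(In, dtype=float)
    m = In.mean()
    return np.sqrt(np.sum((In - m) ** 2) / (len(In) - 1)) / m

def mesh_adequate(N, C, F4):
    """Criterion 2: enough measurement positions if N >= C * F4²."""
    return N >= C * F4 ** 2
```

A perfectly uniform field gives F4 = 0, so any mesh passes criterion 2; the more non-uniform the field, the more positions are required.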
Chapter 6
Sound quality
Sound signals
The characteristics of a sound as it is perceived are not exactly the same as the
characteristics of sound being emitted. The discussion starts with definitions
which describe the actual sound signals, and then discusses the physical and
psychological effects that influence the perception of a particular signal.
Sound power and sound pressure
The amount of noise emitted from a source depends on the sound power of that
source.
The effect of the sound power emanating from a source is the level of sound (or
acoustic) pressure. Sound pressure is what the eardrum detects - the level of
which depends to a great extent on the acoustic environment and the distance
from the source.
Sound pressure is what is measured by microphones, and the majority of data
used in a sound quality analysis would have the dimension pressure and thus
be referred to as a sound signal. This is not an absolute condition however and
vibrational data too can be analyzed.
Sound pressure level
The basic descriptor of a sound signal is the sound pressure level (SPL) denoted
by L and described in equation 4-12. The sound pressure of 20 µPa is known as
the standardized normal hearing threshold and represents the quietest sound
at 1000 Hz that can be heard by the average person.
Since the range of pressure levels that can be detected is large and the ear
responds logarithmically to a stimulus, it is practical to express acoustic
parameters as a logarithmic ratio of a measured value to a reference value.
Hence the use of the decibel scales.
Hearing frequency range
The upper threshold frequency for human hearing is around 20 kHz. Signals
with a frequency content below this value are referred to as audio signals.
Sampling of audio signals therefore requires a sampling rate of at least twice
the maximum frequency that can be detected by the ear in order to avoid
aliasing problems. You will find therefore that CD recorders use a sampling
rate of 44.1 kHz and DAT recorders 48 kHz.
Loudness and pitch
A sound can be characterized by its loudness (related to the SPL) and its
frequency content. The common term for describing the frequency content of a
sound (or tone) is its `pitch'. However pitch is very much a perceived frequency
sensation and depends on both the frequency and the sound pressure level.
Both loudness and pitch are discussed further below.
Physics alone, however, is not sufficient to explain all aspects of sound
perception. It is also influenced by psychological factors such as attitude,
background, expectations, environment, context, etc. As a consequence, there is no
better `judge' of sound quality than the human listener, despite all efforts at
quantification and modelling.
The purpose of this section is merely to highlight the salient points of this
subject. For a more thorough understanding of this topic you should refer to
the reading list at the end of the chapter. Specific references to items in this list
are contained within brackets, thus: {1}.
Figure 6-1 shows the various parts of the ear (from {5}). The outer ear consists
of the pinna and the ear canal. Diffraction effects at the pinna and direction
independent effects within the ear canal result in the human ear being most
sensitive in the frequency range 1 to 10 kHz. The middle ear links the eardrum
to the cochlea, which is the actual sound receptor. The final link between an
acoustic signal and a neural response takes place in the cochlea, which is in the
inner ear.
[Figure 6-1: the parts of the ear - pinna, ear canal, ear drum, Eustachian tube,
semicircular canals, cochlea (scala vestibuli, scala tympani, round window) and
nerve fibers]
Binaural hearing
Another essential characteristic of human hearing is that it is binaural in nature.
The sound signals received by the left and right ear show a relative time delay
as well as a spectral difference dependent on the direction of the sound. Below
about 1500 Hz, the phase difference between the two signals will be the main
contribution to localization, while above this frequency the interaural level difĆ
ference and difference in spectrum will be the principal factors.
Processing in the human brain not only allows the sound to be spatially
localized, but also to suppress unwanted sounds and to concentrate on a sound
coming from a specific direction {2, 6}. This is the well known `cocktail party'
effect where it is possible to focus one's hearing on an individual a certain
distance away in the presence of significant background noise.
Sound perception
The body, head and outer ear effects consist mainly of a spatial and spectral filĆ
tering that is applied to the acoustic stimulus. Consequently, just looking at the
frequency spectrum of a free positioned microphone does not necessarily lead
to a correct assessment of the human response. In other words, there is no simĆ
ple relationship between the measured physical sound pressure level and the
human perception of the same sound.
The effects of the inner ear are many, but the most important are its nonlinear
characteristics. This means that the auditory impression of sound strength,
which is referred to by the term `loudness' is not linearly related to the sound
pressure level. In addition, the perceived loudness of a pure tone of constant
sound pressure level varies with its frequency. Also the auditory impression of
frequency, which is referred to by the term `pitch' is not linearly related to the
frequency itself. These and other effects are described below.
Loudness
The sound pressure level is not linearly related to the auditory impression of
sound strength (or loudness). Together with the frequency dependencies
discussed above, this means that the sensation of loudness cannot be correctly
described by the acoustic pressure level or its spectrum. Figure 6-2 {5} shows a
number of curves representing levels of perceived equal loudness (for sinusoidal
tones) across a frequency range as a function of acoustic pressure level.
Pitch
The pitch of a pure tone varies with both the frequency and the sound pressure
level, and this relationship is itself dependent on the frequency of the tone.
Pure tones can be used though to determine how pitch is perceived. One
possibility is to measure the sensation of `half pitch'. In this case the subject is
asked to listen to one pure tone, and then adjust the frequency of a second one
such that it produces half the pitch of the first one. At low frequencies, the
halving of the pitch sensation corresponds to a ratio of 2:1 in frequency. At
high frequencies however this does not occur and the corresponding frequency
ratio is larger than 2:1. For example a pure tone of 8 kHz produces a `half pitch'
of only 1300 Hz.
At high frequencies, the numerical value of frequency and that of the ratio pitch
deviate substantially from one another. The experimental finding that a pure
tone of 8 kHz has a `half pitch' of 1300 Hz is reflected in the numerical values of
the corresponding ratio pitch. The frequency of 8 kHz corresponds to a ratio
pitch of 2100 mel and the frequency of 1300 Hz corresponds to a ratio pitch of
1050 mel, which is half of 2100 mel.
Critical bands
The inner ear can be considered to act as a set of overlapping constant
percentage bandwidth filters. The noise bandwidths concerned are
approximately constant at around 110 Hz for frequencies below 500 Hz,
evolving to a constant percentage value (about 23 %) at higher frequencies.
This corresponds with the nonlinear frequency-distance characteristics of the
cochlea. These bandwidths are often referred to as `critical bandwidths' and a
`Bark' scale is associated with them as shown in Table 6.1.
Masking
The critical bands described above have important implications for sounds
composed of multiple components. For example, narrow band random sounds
falling within one such filter bandwidth will add up to the global sensation of
loudness at the center frequency of the filter. On the other hand, a high level
sound component may `mask' another lower level sound which is too close in
frequency.
An example of masking is shown below {5}. A 50 dB, 4 kHz tone (marked +)
can be heard in the presence of narrow-band noise, centered around 1200 Hz,
up to a level of 90 dB. If the noise level rises to 100 dB, the tone is not heard.
[Figure: threshold of hearing versus frequency, raised in the region around the
masking noise band]
The higher the level of the masking sound, the wider the frequency band over
which masking occurs. Again, it turns out that multiple sound components
falling within one of the ear filter bandwidths add up to the masking level,
whereas when they are further apart each can be considered as a separate
sound with its own masking properties.
Temporal effects
Finally, a number of temporal effects are associated with the hearing process.
Sounds must `build up' before causing a neural reaction, the reaction time howĆ
ever is dependent on the sound level. This has an effect on the perceived loudĆ
ness since the loudness of a tone burst decreases for durations smaller than
about 200 ms. For larger durations, the loudness is almost independent of
duration.
- A similar effect may occur after switching off a loud sound. During a time
  interval up to 200 ms (dependent on masking and tone level), short tone
  bursts may be masked (post-masking).
- In the presence of a given continuous sound, tone bursts with levels
  exceeding that of the first signal might be obscured, depending on their
  length. This is called `simultaneous masking'.
But not everything you hear is either bad or unwanted. A sound can be an
important messenger of information, in which case it conveys a positive feeling.
Examples are the solidity of a door-slam, the feeling of sportiveness of a car
engine (or exhaust) during acceleration, the smoothness of a limousine engine,
the `catching' of a door lock, or a seat belt....
In these cases, the noise does not need to be removed, but it has to sound
`right'.
[Figure 6-4: Sound quality analysis - digital input, spectral processing, comfort
analysis, analysis and reporting, digital output]
Measurements
Sound quality measurements are acoustic measurements made with microĆ
phones. These can be digitally recorded and imported into the computer sysĆ
tem, but in order to successfully evaluate a sound it is absolutely essential that
it is both recorded and replayed in the most accurate and representative way
possible. Binaural recording is a technique whereby microphones are mounted
inside the ear in an artificial head to represent the sensation of human hearing
as closely as possible.
Evaluation
The next step in dealing with a sound quality issue is to gain a proper
understanding of the quality of the sound. In order to evaluate sound quality
characteristics, different (non-exclusive) approaches may be followed.
(a) The acoustic signal can be evaluated subjectively by a specialist or jury of
listeners. This can be achieved by replaying the signal either digitally via a
recorder or directly via an analog output to headphones or speakers. When
using direct replay, cyclic repetition of a particular segment can be performed
and techniques are provided to suppress the `click' at the start and end of a
segment, as well as on-line notch filtering. This latter facility can give a very
fast assessment of the critical spectral characteristics of a sound.
(b) The acoustic pressure signal is processed in such a way that
perception-relevant quantitative values can be obtained through the use of
adequate sound quality metrics. Such metrics form part of the comfort analysis.
Modification
Important information on the nature of a sound can be obtained by modifying
the sound signal and comparing its perceived quality with the original. This
modification can be imposed in the time, frequency or order domains.
[Figure: the sound quality measurement chain - artificial head (recording,
calibration, equalization), DAT recorder, computer (analysis, sound quality
equalization/de-equalization), listener]
A free field refers to an idealized situation where the sound flows directly out
from the source and the pressure levels drop with increasing distance from the
source. A diffuse field occupies a smaller space and the sound is reflected
many times.
Thus, when you are recording a sound you can determine the type of field you
wish to reconstruct it in and the appropriate compensation or equalization will
be applied. If you only wish to replay the sound through headphones, then
you do not need equalization, and so you can either select to have a
non-equalized recording or you will have to de-equalize it before it is replayed
through headphones.
Transfer to computer
The recording on the DAT recorder is held in a 16 bit audio format. When this is transferred to a computer system, it is converted to a 32 bit floating point format. To achieve this conversion a calibration factor is required.
Replay
When you need to replay the signal on the headphones, then de-equalization
may be necessary if free-field or diffuse field equalization has been applied to
the original recording. In addition, compensation is required to take account of
the transfer function associated with the particular set of headphones to be
used.
1 D. LUBMAN, Noise Quality, Toward a Larger Vision of Noise Control Engineering, Journal of Noise Control Engineering, ....
2 J. BLAUERT, Spatial Hearing, MIT Press, Cambridge (MA), 1983.
3 W. BRAY ET AL, Development and Use of Binaural Measurement Technique, Proc. Noise-Con '91, Tarrytown (NY), July 14-16, 1991, pp 443-450.
4 D. HAMMERSHOI, H. MOLLER, Binaural Auralisation: Head-Related Transfer Functions Measured on Human Subjects, Proceedings 93rd AES Convention, Vienna (A), March 24-27, 1992, 7pp.
5 J. HASSAL, K. ZAVERI, Acoustic Noise Measurements, Bruel & Kjaer, DK-2850 Naerum, Denmark, 1988.
6 E. ZWICKER, H. FASTL, Psychoacoustics, Facts and Models, Springer Verlag, Berlin (Germany), 1990.
7 J. HOLMES, Speech Synthesis and Recognition, Van Nostrand Reinhold, Wokingham, Berkshire (UK), 1988.
8 M. HUSSAIN, J. GOELLES, Statistical Evaluation of an Annoyance Index for Engine Noise Recordings, SAE Paper 911080, Proc. SAE Noise and Vibration Conference, Traverse City (MI), May 16-18, 1991, pp 359-368.
9 H. SHIFFBAENKER ET AL, Development and Application of an Evaluation Technique to Assess the Subjective Character of Engine Noise, SAE Paper 911081, Proc. SAE Noise and Vibration Conference, Traverse City (MI), May 16-18, 1991, pp 369-379.
10 K. TAKANAMI ET AL, Improving Interior Noise Produced During Acceleration, SAE Paper 911078, Proc. SAE Noise and Vibration Conference, Traverse City (MI), May 16-18, 1991, pp 339-348.
11 G. IRATO, G. RUSPA, Influence of the Experimental Setting on the Evaluation of Subjective Noise Quality, Proceedings of the Second International Conference on Vehicle Comfort, Oct 14-16, 1992, Bologna (Italy), pp 1033-1044.
12 INTERNATIONAL ORGANIZATION FOR STANDARDIZATION, Method for Calculating Loudness Level, ISO 532-1975 (E).
13 E. ZWICKER ET AL, Program for Calculating Loudness According to DIN 45631 (ISO 532B), Journal Acoustic Society Jpn (E), Vol. 12, Nr. 1, 1991.
14 S. J. STEVENS, Procedure for Calculating Loudness: Mark VII, J. Acoust. Soc. Am., Vol. 33, Nr. 11, pp 1577-1585, 1961.
15 S. J. STEVENS, Perceived Level of Noise by Mark VII and Decibel, J. Acoust. Soc. Am., Vol. 51, Nr. 2, pp 575-601, 1971.
16 E. ZWICKER, Procedure for Calculating Loudness of Temporally Variable Sounds, J. Acoust. Soc. Am., Vol. 62, Nr. 3, pp 675-681, 1977.
17 L. L. BERANEK, Criteria for Noise and Vibration in Communities, Buildings and Vehicles, in Noise and Vibration Control, revised edition, McGraw-Hill Inc., 1988.
18 W. AURES, Berechnungsverfahren für den Sensorischen Wohlklang beliebiger Schallsignale, Acustica, Vol. 59, pp 130-141, 1985.
Sound metrics
It may be said that the best way to evaluate the quality of a sound is
to listen to it and express an opinion about it, but in a lot of cases there
is also a strong interest in correlating the results from these subjective
evaluations with measurable parameters. Therefore a number of
sound quality metrics exist where perception-relevant quantitative
values are calculated from the acoustic pressure signal.
Sound pressure levels
Loudness metrics
Sharpness
Roughness
Fluctuation strength
Pitch
Articulation index
Impulsiveness
The references are listed in chapter 6
Chapter 7 Sound metrics
The basic descriptor of a sound signal is the sound pressure level (SPL), denoted by L and described in equation 4-12.
This function calculates the frequency and time weighted sound pressure level according to the IEC 651 and ANSI S1.4-1983 standards. By selecting the type of signal (mode), the appropriate time constant is applied.
When the signal contains spikes and is therefore defined by the mode "impulse", an additional peak detector mechanism is implemented. In this case, when an increase in the averaged signal is detected, the signal is followed exactly. When the signal is decreasing, exponential averaging is used with a long time constant, set by default to 1500 ms. The time constant used in this situation is termed the decay time constant.
This function gives the value of the A-weighted sound pressure level of a continuous, steady sound that, within a specified time interval T, has the same mean square sound pressure as the sound under consideration, whose level varies with time. This leads to the expression:
L_{Aeq,T} = 10 \log_{10} \left[ \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} \frac{p_A^2(t)}{p_0^2}\, dt \right]   (Eqn 7-1)
where p_A(t) is the A-weighted instantaneous sound pressure, p_0 is the reference sound pressure and (t_1, t_2) defines the interval T.

In practice, with sampled data, the equivalent sound pressure level is computed by a summation of the sampled values of the pressure level, in dB, over the number of samples required.
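This summation can be sketched as follows; a minimal illustration of the averaging order (mean-square pressures are averaged, not the dB values), with function and variable names that are illustrative only and not the product's API:

```python
import math

def leq_from_levels(levels_db):
    """Equivalent continuous level from sampled SPL values in dB (discrete Eqn 7-1)."""
    # average the mean-square pressure ratios, then convert back to dB
    mean_square = sum(10.0 ** (level / 10.0) for level in levels_db) / len(levels_db)
    return 10.0 * math.log10(mean_square)

# A steady 70 dB signal keeps Leq = 70 dB; mixing 60 dB and 80 dB halves
# gives a result dominated by the louder half, not the 70 dB arithmetic mean.
steady = leq_from_levels([70.0] * 100)
mixed = leq_from_levels([60.0] * 50 + [80.0] * 50)
```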
7.3 Loudness
The equal loudness contours shown in Figure 6-2 in the document "Sound quality" are the result of large numbers of psycho-acoustical experiments and are in principle only valid for the specific sound types involved in the test. These curves are valid for pure tones and depict the actual experienced loudness for a tone of given frequency and sound pressure level when compared to a reference tone. The resulting value is called the `loudness level'.
The loudness level itself is expressed in Phons. 1 kHz tones are used as the reference, which means that for a 1 kHz tone the Phon value corresponds to the dB sound pressure level. The equal loudness contours for free field pure tones and diffuse field narrow-band random noise are standardized as ISO 226-1987 (E).
A linear unit derived from the (logarithmic) Phon values is the Sone (S), which is related to the Phon (P) in the following way:

S = 2^{(P - 40)/10}
The Sone scale's linear relationship to the experienced loudness makes it easier
to interpret. A loudness of 1 Sone corresponds to a loudness level of 40 Phons.
A tone which is twice as loud, will have double the loudness (Sone) value, and a
loudness level which is 10 Phons higher.
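The Phon/Sone bookkeeping can be sketched directly from this relation (function names are illustrative only):

```python
import math

def phon_to_sone(phon):
    # S = 2 ** ((P - 40) / 10): 40 Phons -> 1 Sone; +10 Phons doubles the loudness
    return 2.0 ** ((phon - 40.0) / 10.0)

def sone_to_phon(sone):
    # inverse relation: P = 40 + 10 * log2(S)
    return 40.0 + 10.0 * math.log2(sone)
```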
For steady state sounds, standardized calculation procedures have been defined by Zwicker and Stevens and are accepted as ISO standards {12, 13, 14}. A more recent procedure by Stevens {15} has not yet been accepted as an ISO standard.
V a convention for the relation between octave band sound pressure levels and octave band partial (specific) loudness descriptions
For temporally varying sounds, Zwicker has also proposed an approach taking
into account temporal effects {16}, which is not yet accepted as an ISO standard.
The Stevens (Mark VI) method, standardized as ISO 532-A-1975 and ANSI S3.4-1980, starts from octave band sound pressure levels. Their loudness is compared to that of a critical band noise at 1 kHz. It is only defined for diffuse sound fields with relatively smooth, broadband spectra. Through a set of standardized curves, each octave band level is converted into a partial loudness index (s), see Figure 7-1. The partial loudness values are then combined into a total loudness (in Sones), using equation 7-3.
The method takes masking effects into account. Masking effects are important
for sounds composed of multiple components. A high level sound component
may `mask' another lower level sound which is too close in frequency. An example of masking is shown below {5}. A 50 dB, 4 kHz tone (marked +) can be
heard in the presence of narrow-band noise, centered around 1200 Hz, up to a
level of 90 dB. If the noise level rises to 100 dB, the tone is not heard.
[Figure: masking example — threshold of hearing as a function of frequency]
The method uses different sets of graphs for diffuse and free fields that relate loudness level to sound pressure level and that take masking into account by a sloping-edge filter characteristic for each octave band. In this way, dominant (and hence masking) frequency bands will show their influence over a large frequency range and prevent masked sounds from contributing to the total level.
Figure 7-4 shows an example of the Zwicker method. The 1/3 octave band
data are transferred to the appropriate Zwicker diagram.
The partial loudness contours are computed for each defined segment (global
evaluation) or frame (tracked evaluation) using a classical Zwicker loudness
calculation. The frame or segment size should be selected to ensure that the
spectral resolution needed for the FFT-based octave band analysis can be
achieved. The frame size can be used to restrict the analysis to time periods
over which time-varying signals can be regarded as stationary.
The total loudness is calculated as the surface under the enveloping partial loudness contours, and can be expressed in Sones, or as a loudness level in Phons, as a function of time. This is presented as a single value in the global evaluation and as a trace of values for the tracked evaluation.
7.4 Sharpness
The specific sharpness is calculated from the specific loudness N'(z) along the critical band rate z:

S'(z) = \frac{0.11\, N'(z)\, g(z)\, z}{\int_{0\,\mathrm{Bark}}^{24\,\mathrm{Bark}} N'(z)\, dz}   (Eqn 7-4)

The total sharpness is then

S = \int_{0\,\mathrm{Bark}}^{24\,\mathrm{Bark}} S'(z)\, dz   (Eqn 7-6)
7.5 Roughness
At high modulation frequencies (above 150-300 Hz), three separate tones can be heard. In the intermediate frequency range (15-300 Hz), the sensation is of a stationary but rough tone, which renders it rather unpleasant. This sensation is often associated with engine noise, where fractional orders can cause the modulation effects.
7.6 Fluctuation strength

When the sound functions have modulation frequencies below 20 Hz, they are perceived as changes in the sound volume over time. Typically, fluctuating signals sound louder (and more annoying) than steady state signals of the same rms amplitude. In this case, the intensity of the sensation is referred to as "fluctuation strength", with the unit "vacil". A reference sound of 1 vacil corresponds to a 1 kHz tone of 60 dB with a 100 % amplitude modulation at 4 Hz. The ear is most sensitive to fluctuations at 4 Hz. Quantitative models have been proposed for the fluctuation strength {6} which take into account the temporal masking effects due to the sound fluctuation.
F \propto \frac{\Delta L}{(f_{mod}/4\,\mathrm{Hz}) + (4\,\mathrm{Hz}/f_{mod})}   (Eqn 7-8)
7.7 Pitch
Pitch is a sound attribute that classifies sounds on a scale from low to high. For pure tones, pitch depends largely on the frequency of the tone, but it is also influenced by its level.
Pitches, both for pure and complex tones, which can be derived from the spectral content of the signals, are called spectral pitches. If, in the calculation, the effect of the tone level on the pitch is taken into account, the calculated pitch is called true pitch. If the influence of level on the tone is neglected, it is called nominal pitch.
V Modified
These calculations are based upon the AIM method which has been described in the work mentioned above, but which opens up the internal floating range of 30 dB to a fixed range of 80 dB between the limits of 20 and 100 dB. The results of this method will lie in the range -107% to almost 160%.
When the comprehension of speech is the goal, background sound or noise has the negative quality of interference. It can cause annoyance, and can even be hazardous in a working environment where instructions need to be correctly understood. Therefore, a noise rating called the `Speech Interference Level' (SIL) was developed.
Figure 7-9 Communication limits in the presence of background noise (after Webster)
7.10 Impulsiveness
This metric is used to quantify the impulsive nature of a signal. It is used, for instance, in the quantification of diesel engine noise. The algorithm for calculating impulsiveness is based on the signal envelope, and results in a number of output values: the mean impulse peak level, mean impulse rise rate and mean impulse duration. Each of these parameters is described in the Figure below. In addition, the mean impulse rate (occurrence) is determined.
[Figure: impulse parameters on the signal envelope — peak level, rise rate, threshold and rms level]
The start of the impulse is defined by the minimum which occurs before the crossing of the threshold. The rise time is the time between the impulse start and the moment at which the impulse peak level is reached. The peak level is expressed in dB, and is the difference between the impulse peak and the threshold level. The end of the peak is defined by the first minimum which occurs after the threshold level. The duration of the peak is the sum of the rise time and fall time.
The rise rate is the maximum rise rate occurring between the impulse start and
the impulse peak.
Chapter 8 Acoustic holography
8.1 Introduction
V estimates the acoustic power and the spectral content emitted by the
object under examination.
Basic principles
In performing acoustic holography, you need to measure cross spectra between a set of reference transducers and the hologram microphones. From these measurements you can derive sound intensity, particle velocity and sound power values.
A basic assumption is that you are operating in free field conditions and that
the energy flow is coming directly from the source. Measurements need to be
taken close to the source.
It provides you with an accurate 3D characterization of the sound field and the source, with a higher spatial resolution than is possible with conventional intensity measurements.
[Figure: the hologram microphone array on the measurement plane in front of the source]
The goal is to determine the whole acoustic wavefront from the known pressure on the measurement plane. Each microphone in the array measures the complex pressure (amplitude and phase).
[Figure: pressure at microphone m as a function of time — period T, frequency f = 1/T, wavelength λ = c/f]
The transformation from the time domain to the frequency domain is achieved using the Fourier transform.
Spatial domain
If we now consider measurements where time is fixed and pressure varies as a
function of distance, we can obtain a measure of energy flow.
[Figure: pressure at a fixed time as a function of distance across the measurement plane]
If we fix the temporal frequency, this means that the acoustic wavelength is
fixed too.
The complex pressure as a function of space is called the pressure image at the specified frequency.
Conversion from the spatial domain is also done using a Fourier transform. In acoustic holography, pressure is measured in two dimensions (x and y for example), so a 2-dimensional transformation is performed. S(k_x, k_y) is the spatial transform of the measured pressure field to the wavenumber (k_x, k_y) domain, resulting in the 2-D hologram pressure field.
[Figure: measured pressure levels P and the corresponding 2-D hologram in the wavenumber (k_x, k_y) domain]
The spatial Fourier transform implies that a measured pressure field can be
considered as a sum of sinusoidal functions.
Each of these sinusoidal functions can be understood as the result of cutting the
wavefronts of a plane wave by the measurement plane.
[Figure: a plane wave of wavelength λ = c/f cut by the measurement plane; the trace on the plane has a spatial periodicity greater than λ]
There is a coincidence between the nodes of the sinusoidal function and the wavefronts. In effect, decomposing the pressure field into a sum of sinusoidal functions means decomposing the real acoustic wave into a sum of plane waves. Whatever the angle of incidence, the spatial periodicity must be greater than the wavelength.
Propagating waves represent the sound field that is propagated away from the
near towards the far field. Evanescent waves describe the complex sound field
in the near field of the source.
To understand why we must take evanescent plane waves into account, let us consider our decomposition of the pressure field into sinusoidal functions. If the spatial periodicity of a sinusoidal function is shorter than the wavelength, it cannot be the result of cutting a propagating plane wave by the measurement plane.
Whatever the direction of the propagating plane wave may be, there is no possible coincidence between the nodes of the sinusoidal function and the wavefronts. Therefore, this sinusoidal function must be understood as the intersection between an evanescent wave (which can have a smaller spatial periodicity than propagating waves) and the measurement plane.
[Figure: wavenumber components k_x, k_y and k_z relative to the measurement plane]
k_z can be determined from the wave number k_0 = \omega/c and the known values of k_x and k_y from the transformation:

k_0^2 = \left(\frac{\omega}{c}\right)^2 = k_x^2 + k_y^2 + k_z^2   (Eqn 8-3)

so that

k_z = \sqrt{\left(\frac{\omega}{c}\right)^2 - k_x^2 - k_y^2}
k_z is real when k_x^2 + k_y^2 \le (\omega/c)^2 (that is, when the spatial periodicity is greater than the wavelength). This means that the propagating waves lie in the circle of radius \omega/c in the wavenumber domain; k_z is imaginary outside of this region.
[Figure: the wavenumber domain — inside the circle k_x^2 + k_y^2 = k_0^2 (radius k_0 = \omega/c), k_z is real and the waves are propagating; outside it, k_z is imaginary and the waves are evanescent]
Pressure levels at other planes can be found using Rayleigh's integral equation with Dirichlet's Green function. We can use wave domain properties (k) to predict the pressure at a different spatial position (z):

S(k_x, k_y, z) = S(k_x, k_y, z')\, g_d(k_x, k_y, z - z')   for z > z'   (Eqn 8-5)

where z' is the measurement plane and z is the position of the required plane.
The Green function is given by

g_d = e^{-j k_z (z - z')}
The final step is to perform an inverse transformation back to the temporal domain.
[Figure: evanescent waves lie outside the circle of radius k_0 in the wavenumber domain]
When propagating towards the source, a Wiener filter can be used to include a certain number of evanescent waves to improve the resolution. Taking a higher number of waves into account may result in the amplification becoming unstable. This depends on a parameter of the Wiener filter known as the Signal to Noise Ratio (SNR). When the SNR value is greater than 15 dB, the amplification will become unstable as the number of evanescent waves included increases. Using a low SNR value (5 dB for example) means that the evanescent waves are taken into account, but they are so attenuated that the improvement in resolution is negligible. The default value of 15 dB provides the best compromise in terms of resolution and amplification.
When the Wiener filter is used, the pressure image needs to be multiplied by a two-dimensional window. As is the case with a single FFT, the observed pressure must be `periodic' within the observed hologram. If this is not the case, then truncation errors occur, as with a single FFT. These truncation errors manifest themselves as ghost sources at the borders of the observed area.
Two windows are used. Each is a cosine-tapered window of length N with flat-top fraction \alpha:

W[I] = 1   for   \frac{(N+1)(1-\alpha)}{2} \le I \le \frac{(N+1)(1+\alpha)}{2}

W[I] = 0.5 + 0.5 \cos\left(\pi\, \frac{I - (N+1)(1-\alpha)/2}{(N+1)(1-\alpha)/2}\right)   for   I < \frac{(N+1)(1-\alpha)}{2}

W[I] = 0.5 + 0.5 \cos\left(\pi\, \frac{I - (N+1)(1+\alpha)/2}{(N+1)(1-\alpha)/2}\right)   for   I > \frac{(N+1)(1+\alpha)}{2}
[Figure: measurement plane with correct and incorrect choices of calculation plane]
Knowing the pressure field on the parallel plane, it is possible to calculate the particle velocity and eventually the intensity on this plane. The particle velocity (V) will be known if the pressure differential can be determined, which is the case with acoustic holography since the pressure can be measured at r and (r + Δr):
V = \frac{j}{\rho c k}\, \nabla P(r)
Once the pressure and the velocity are known then the intensity is just the
product of the two.
Part III
Time data processing
Chapter 9
Statistical functions . . . . . . . . . . . . . . . . . . . . . 129
Chapter 10
Time frequency analysis . . . . . . . . . . . . . . . . . 139
Chapter 11
Resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Chapter 12
Digital filtering . . . . . . . . . . . . . . . . . . . . . . . . . 163
Chapter 13
Harmonic tracking . . . . . . . . . . . . . . . . . . . . . . 193
Chapter 14
Counting and histogramming . . . . . . . . . . . . . 203
Chapter 9 Statistical functions
[Figure: a frame of N samples with time increment Δt, showing the range and minimum; the statistics can be computed on the real or absolute values]
Sum
This is the summation of all the (N) values within the frame:

Sum = \sum_{j=0}^{N-1} x_j   (Eqn 9-1)
Integration
This is the area under the curve of values, found by multiplying half of the sum of each pair of adjacent values by the time increment (the trapezoid rule):

area = \sum_{j=0}^{N-2} \frac{x_j + x_{j+1}}{2}\, \Delta t   (Eqn 9-2)
RMS
The root mean square of the values is given by

RMS = \sqrt{\frac{1}{N} \sum_{j=0}^{N-1} x_j^2}   (Eqn 9-3)
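Eqns 9-1 to 9-3 can be sketched directly (names illustrative only):

```python
import math

def frame_stats(x, dt):
    """Sum, trapezoidal area and RMS of one frame of samples."""
    n = len(x)
    total = sum(x)                                                    # Eqn 9-1
    area = sum((x[j] + x[j + 1]) / 2.0 * dt for j in range(n - 1))    # Eqn 9-2
    rms = math.sqrt(sum(v * v for v in x) / n)                        # Eqn 9-3
    return total, area, rms

# One full cycle of a unit-amplitude sine: sum and area are ~0, RMS is 1/sqrt(2)
x = [math.sin(2.0 * math.pi * j / 1000.0) for j in range(1000)]
total, area, rms = frame_stats(x, dt=0.001)
```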
Crest factor
The crest factor is given by

crest factor = \frac{|max - min|}{2 \cdot RMS}   (Eqn 9-4)
The crest factor provides a measure of the `spikiness' of the data. A sine signal has a crest factor of 1.4. A random signal has a crest factor of about 3 or 4. A short spike will yield a high crest factor.
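A sketch of Eqn 9-4, reproducing the quoted sine value (names illustrative only):

```python
import math

def crest_factor(x):
    # Eqn 9-4: |max - min| / (2 * RMS)
    rms = math.sqrt(sum(v * v for v in x) / len(x))
    return abs(max(x) - min(x)) / (2.0 * rms)

sine = [math.sin(2.0 * math.pi * j / 1000.0) for j in range(1000)]
spiky = [0.1] * 999 + [10.0]   # a short spike on a low-level signal
```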
Mean
The mean of a set of data values (x) estimates the central value contained within the set. It is defined as

\bar{x} = \frac{1}{N} \sum_{j=0}^{N-1} x_j   (Eqn 9-5)
The mean is not the only parameter which characterizes the central value of a
distribution. An alternative is the median.
The mean and the median both provide information on the average or central
value of the data. The choice of the most suitable one to use depends on the
skewness described on page 134.
Median
The median of a probability function p(x) is the value for which larger and smaller values of x are equally probable:

\int_{-\infty}^{x_{med}} p(x)\, dx = \int_{x_{med}}^{\infty} p(x)\, dx = \frac{1}{2}
For discrete data, the median is defined as the middle value of the data samples
when they are arranged in increasing (or decreasing) order.
When N is odd, the median is the single central value of the ordered samples:

x_{med} = x_{(N-1)/2}   (Eqn 9-7)
Thus half the values are numerically greater than the median and half are
smaller.
When N is even, the median is estimated as the mean of the two central values:

x_{med} = \frac{x_{N/2-1} + x_{N/2}}{2}   (Eqn 9-8)
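A sketch of the odd/even definitions (0-based indexing assumed; names illustrative only):

```python
def median(values):
    """Middle value of the ordered samples; mean of the two central ones if N is even."""
    s = sorted(values)
    n = len(s)
    if n % 2 == 1:
        return s[(n - 1) // 2]
    return (s[n // 2 - 1] + s[n // 2]) / 2.0

# A few extreme high values pull the mean above the median (positive skew):
data = [1.0, 2.0, 2.0, 3.0, 3.0, 4.0, 50.0]
```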
The mean and median both provide information on the average or central value of a set of data. Which is the most suitable one to use in a particular circumstance depends on the skewness of the data. Skewness is illustrated in Figure 9-2.
[Figure 9-2: distributions with (a) mean = median, (b) mean > median, (c) mean < median]
Skewness refers to the shape of the distribution about the central value. Perfectly symmetrical data has no skew. Data distributions where there is a small number of extremely high values are said to exhibit positive skew. Those with a few extremely low values show negative skew. The mean is more influenced by such extreme values than the median, but can be used with confidence if the skewness lies within the range -1 to 1. For the calculation of skewness see Equation 9-13 below.
Percentiles
The median can also be expressed as the 50th percentile since it represents the
value where 50% of all the values in the data set are below it and 50% lie above
it. It is also possible to compute the 10th, 25th, 75th and 90th percentiles.
The nth percentile of a probability function p(x) is the value at which n% of the values in the set are smaller than the percentile value. So 10% of the values are smaller than the 10th percentile and 90% are larger.
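One common (nearest-rank) reading of this definition can be sketched as follows; the product may well use a different interpolation rule, so this is illustrative only:

```python
import math

def percentile(values, p):
    """Smallest sample such that at least p% of the data is <= it (nearest rank)."""
    s = sorted(values)
    rank = max(1, math.ceil(p / 100.0 * len(s)))
    return s[rank - 1]

scores = list(range(1, 101))   # 1, 2, ..., 100
```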
Variance
The variance of the data is defined as

var(x_0, ..., x_{N-1}) = \frac{1}{N-1} \sum_{j=0}^{N-1} (x_j - \bar{x})^2   (Eqn 9-9)

and as such can also be regarded as the second order moment of a distribution.
Average deviation
The average deviation is defined as

ADev(x_0, ..., x_{N-1}) = \frac{1}{N} \sum_{j=0}^{N-1} |x_j - \bar{x}|   (Eqn 9-11)
Extreme deviation
The extreme deviation is given by
The extreme deviation is similar to the crest factor, except that it is referenced to
the mean and will therefore follow data which drifts away from zero.
Skewness
Skewness was illustrated in Figure 9-2. It characterizes the degree of asymmetry of the distribution around its central value. It is defined as

skew(x_0, ..., x_{N-1}) = \frac{1}{N} \sum_{j=0}^{N-1} \left(\frac{x_j - \bar{x}}{\sigma}\right)^3   (Eqn 9-13)

where \sigma is the standard deviation of the samples.
The skewness is a unitless parameter known as the third order moment of a distribution. Even if the estimated skewness is other than zero, it does not necessarily mean that the data is in fact skewed. You can have confidence in the skewness only when the estimated skewness is larger than the standard deviation of this estimated parameter (Eqn 9-13). For the idealized case of a normal (Gaussian) distribution, the standard deviation of the estimated skewness is approximately \sqrt{6/N}. In real life it is good practice to place confidence in the skewness only when the estimated value is several times as large as this.
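A sketch of Eqn 9-13 together with the sqrt(6/N) rule of thumb (the confidence factor and all names are illustrative choices, not the product's API):

```python
import math

def skewness(x):
    """Third standardized moment of the samples (Eqn 9-13)."""
    n = len(x)
    mean = sum(x) / n
    sigma = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    return sum(((v - mean) / sigma) ** 3 for v in x) / n

def skew_is_credible(x, factor=2.0):
    # trust the estimate only if it is several times sqrt(6/N) (Gaussian case)
    return abs(skewness(x)) > factor * math.sqrt(6.0 / len(x))
```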
Kurtosis
One further characteristic of a distribution can be obtained from the kurtosis of a function. This is also a unitless parameter, one that measures the relative sharpness or flatness of a distribution relative to a normal or Gaussian one.
This is illustrated in Figure 9-3.
Figure 9-3 Distributions with positive and negative kurtosis compared to a normal
distribution.
The kurtosis is defined as

kurt(x_0, ..., x_{N-1}) = \left[\frac{1}{N} \sum_{j=0}^{N-1} \left(\frac{x_j - \bar{x}}{\sigma}\right)^4\right] - 3   (Eqn 9-14)
The standard deviation of (Eqn 9-14) is \sqrt{24/N} for the idealized case of a normal (Gaussian) distribution. However, the kurtosis depends on such a high moment that there are many real-life distributions for which the standard deviation of equation 9-14 is effectively infinite.
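A sketch of Eqn 9-14, with the -3 offset making the Gaussian reference value zero (names illustrative only):

```python
import math

def kurtosis(x):
    """Fourth standardized moment minus 3 (Eqn 9-14); zero for a Gaussian."""
    n = len(x)
    mean = sum(x) / n
    sigma = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    return sum(((v - mean) / sigma) ** 4 for v in x) / n - 3.0

flat = [1.0, 2.0, 3.0, 4.0, 5.0]    # flatter than a Gaussian: negative kurtosis
spiky = [0.0] * 20 + [10.0]         # a sharp outlier: positive kurtosis
```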
Note! Higher order moments (skewness and kurtosis) are often less robust than lower order moments, which are based on linear sums. (It is possible that the calculation of the skewness or kurtosis generates an overflow.) They must be used with caution.
Markov regression
This function provides you with a measure of the likelihood of one data value
within a set being similar to another.
It is based on the circular autocorrelation R(.) of a set of data. This calculates the correlation between one particular value and a value displaced by a certain lag. The circular correlation takes the last shifted value and wraps it to the start.
The circular correlation for a lag of 1 data sample is given by

R(1) = \sum_{j=0}^{N-2} x_j\, x_{j+1} + x_0\, x_{N-1}   (Eqn 9-15)

The circular correlation for a lag of 0 is given by

R(0) = \sum_{j=0}^{N-1} x_j^2   (Eqn 9-16)

The Markov regression coefficient is then

Markov regression coefficient = \frac{R(1)}{R(0)}   (Eqn 9-17)
This function can therefore take values between 0 (very low correlation) and 1 (high similarity). It approaches 1 for a narrow or filtered band and 0 for broadband signals. It therefore provides an indication of how much a broadband signal has been filtered.
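Eqns 9-15 to 9-17 can be sketched directly; the narrowband/broadband test signals below are illustrative choices:

```python
import math
import random

def markov_regression(x):
    """Circular lag-1 autocorrelation over the zero-lag power (Eqns 9-15 to 9-17)."""
    n = len(x)
    r1 = sum(x[j] * x[j + 1] for j in range(n - 1)) + x[0] * x[n - 1]  # Eqn 9-15
    r0 = sum(v * v for v in x)                                         # Eqn 9-16
    return r1 / r0                                                     # Eqn 9-17

narrow = [math.sin(2.0 * math.pi * j / 100.0) for j in range(100)]  # slow sine
random.seed(0)
broad = [random.uniform(-1.0, 1.0) for _ in range(1000)]            # broadband
```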
Linear representations
Quadratic representations
Chapter 10 Time frequency analysis
10.1 Introduction
A great many physical signals are non-stationary. Fourier analysis establishes a one-to-one relationship between the time and the frequency domain, but provides no time localization of a signal's frequency components. While an overall representation of all frequencies that appeared during the observation period is presented, there is no indication of exactly which frequencies were present at what time.
Time-frequency analysis methods describe a signal jointly in terms of both time and frequency. The aim is to find a distribution that determines the portion of the signal's energy which lies in a particular time and/or frequency range. In addition, these distributions might or might not satisfy some other interesting mathematical properties, such as the "marginal equations".
The instantaneous power of a signal at time t is given by |s(t)|^2. The intensity per unit frequency is given by the squared modulus of the Fourier transform, |S(\omega)|^2. The joint function P(\omega, t) should represent the energy per unit time and per unit frequency:

P(\omega, t) = energy or intensity per unit frequency (at frequency \omega) per unit time (at time t)

Ideally, summing this energy distribution over all frequencies should give the instantaneous power,

\int P(\omega, t)\, d\omega = |s(t)|^2   (Eqn 10-1)

and summing over all time should give the energy density spectrum,

\int P(\omega, t)\, dt = |S(\omega)|^2   (Eqn 10-2)
Equations 10-1 and 10-2 are known as the `marginal' equations. In addition, the total energy E of the distribution should be equal to the total energy in the signal while satisfying the marginals. There are a number of distributions which satisfy equations 10-1 and 10-2 but which demonstrate very dissimilar behavior.
These are representations that satisfy the linearity principle. If x_1 and x_2 are signals, then T(t,f) is a linear time-frequency representation if

T_{c_1 x_1 + c_2 x_2}(t, f) = c_1\, T_{x_1}(t, f) + c_2\, T_{x_2}(t, f)
V Wavelet analysis
For a time signal s(t) multiplied by a window function g(t), the Short Time Fourier Transform located at time \tau is given by

STFT(\omega, \tau) = \frac{1}{\sqrt{2\pi}} \int e^{-j\omega t}\, s(t)\, g^*(t - \tau)\, dt   (Eqn 10-4)

or, equivalently, in terms of the spectra S and G of the signal and the window,

STFT(\omega, \tau) = \frac{1}{\sqrt{2\pi}} \int e^{j\xi\tau}\, S(\xi)\, G^*(\xi - \omega)\, d\xi   (Eqn 10-5)
By analogy with the previous discussion, this reflects the behavior around the frequency \omega `for all times', as illustrated by the horizontal bands in Figure 10-1. These bands can be regarded as a bank of bandpass filters which have impulse responses corresponding to the window function.
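A toy illustration of this constant-bandwidth behavior (pure Python, using a direct DFT rather than an FFT; the Hann window, frame length and hop are arbitrary illustrative choices):

```python
import cmath
import math

def stft_mag(signal, frame_len, hop):
    """Magnitude STFT sketch: Hann-windowed frames, direct DFT of each frame.
    Every frame uses the same window, so time and frequency resolution are
    identical at all frequencies."""
    win = [0.5 - 0.5 * math.cos(2.0 * math.pi * i / frame_len)
           for i in range(frame_len)]
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        seg = [signal[start + i] * win[i] for i in range(frame_len)]
        frames.append([abs(sum(seg[i] * cmath.exp(-2j * cmath.pi * k * i / frame_len)
                               for i in range(frame_len)))
                       for k in range(frame_len // 2)])
    return frames

# A tone that jumps from bin 4 to bin 12 halfway through the record:
N, F = 512, 64
sig = [math.sin(2.0 * math.pi * 4.0 * n / F) if n < N // 2
       else math.sin(2.0 * math.pi * 12.0 * n / F) for n in range(N)]
tf = stft_mag(sig, frame_len=F, hop=F)
```

The spectrogram frames localize the frequency jump in time, which a single Fourier transform of the whole record cannot do.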
Wavelet analysis
Wavelet analysis provides an alternative method for the analysis of non-stationary signals in cases where it is difficult to find the right compromise between time and frequency resolution for the analysis window of the STFT.
In effect, the Fourier transform decomposes the signal using a set of basis functions, which in this case are sine waves. The Wavelet transform also decomposes the signal, but it uses another set of basis functions, called wavelets. These basis functions are concentrated in time, which results in a higher time localization of the signal's energy. One prototype basis function is defined, and a scaling factor is then used to dilate or contract this prototype function to arrive at the series of basis functions needed for the analysis.
h_a(t) = \frac{1}{\sqrt{|a|}}\, h\!\left(\frac{t}{a}\right)   (Eqn 10-6)

CWT(a, t) = \frac{1}{\sqrt{|a|}} \int s(\tau)\, h^*\!\left(\frac{\tau - t}{a}\right) d\tau   (Eqn 10-7)
The use of the scaling factor to dilate or contract the basic wavelet results in an analysis window that is narrow at high frequencies and wide at low frequencies. Figure 10-1 likens the STFT to a series of constant width bandpass filters. Using this concept again, the wavelet transform can be considered as a bank of constant relative bandwidth filters.
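The constant relative bandwidth follows directly from the dilation in Eqn 10-6: stretching the prototype by a stretches its duration by a, which narrows its bandwidth by 1/a while its center frequency also drops by 1/a. A small numerical check (the Morlet-style prototype is an assumed example, not the package's wavelet):

```python
import math

def prototype(t):
    # assumed Morlet-style prototype: a Gaussian-windowed oscillation
    return math.exp(-t * t / 2.0) * math.cos(5.0 * t)

def dilated(t, a):
    # Eqn 10-6: h_a(t) = (1 / sqrt|a|) * h(t / a)
    return prototype(t / a) / math.sqrt(abs(a))

def time_spread(a, dt=0.002, t_max=40.0):
    """Standard deviation of t under the energy density |h_a(t)|^2."""
    steps = int(t_max / dt)
    ts = [i * dt for i in range(-steps, steps + 1)]
    e = [dilated(t, a) ** 2 for t in ts]
    total = sum(e)
    mean = sum(t * w for t, w in zip(ts, e)) / total
    return math.sqrt(sum((t - mean) ** 2 * w for t, w in zip(ts, e)) / total)

# doubling the scale doubles the duration, so the bandwidth is halved
ratio = time_spread(2.0) / time_spread(1.0)
```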
[Figure 10-1: time-frequency tiling of (a) the STFT, with constant bandwidth at all frequencies, and (b) wavelet analysis, with constant relative bandwidth]
This is in fact a very natural way to analyse a signal. Low frequencies are phenomena that change slowly with time, so requiring a low resolution in this domain. In this situation, a good time resolution can be sacrificed for a high frequency resolution. High frequency phenomena vary rapidly with time, which then becomes the important dimension, so under these conditions wavelet analysis increases the time resolution at the cost of frequency resolution. This type of analysis is also very closely related to the human hearing process, since the human ear seems to analyse sounds in terms of octave bands.
z(t) = c_1 x(t) + c_2 y(t) \;\Rightarrow\; T_z(t, f) = |c_1|^2\, T_x(t, f) + |c_2|^2\, T_y(t, f) + c_1 c_2^*\, T_{xy}(t, f) + c_2 c_1^*\, T_{yx}(t, f)   (Eqn 10-8)
The first two terms in this result can be seen as "signal terms", and the last two terms as "interference terms". These interference terms are necessary to satisfy mathematically desirable properties like the "marginal equations", but they often make interpretation of the results difficult. The interference terms can be recognized by their oscillatory nature, and different so-called "smoothing" techniques can be used to reduce their effect. This, however, leads us to a new tradeoff: that of a reduction of interference terms against time-frequency localization. The spectral smearing effect of the smoothing windows will disperse the signal's energy in the time-frequency plane, thereby reducing the time-frequency localization of all signal components.
Two examples of quadratic time-frequency representations are the spectrogram
and the scalogram.
spectrogram = |STFT|^2
scalogram = |WT|^2
which are the two energy-counterparts of the Short-Time Fourier Transform
(STFT) and the Wavelet Transform (WT) respectively. The interference terms
for these representations only exist where different signal components overlap.
Hence, if the signal components are sufficiently far apart in the time-frequency plane, the interference terms will be essentially zero. While neither of these representations satisfies the marginal equations, this is not of great concern for a qualitative energy localization assessment. For an adequate interpretation of time-frequency analysis results, it is often good practice to use several techniques (STFT or WT together with a quadratic method), which makes it possible to distinguish the "signal components" from the "interference terms".
W(\omega, t) = \frac{1}{2\pi} \int s^*\!\left(t - \frac{\tau}{2}\right) e^{-j\tau\omega}\, s\!\left(t + \frac{\tau}{2}\right) d\tau   (Eqn 10-9)

W(\omega, t) = \frac{1}{2\pi} \int S^*\!\left(\omega + \frac{\theta}{2}\right) e^{-j\theta t}\, S\!\left(\omega - \frac{\theta}{2}\right) d\theta   (Eqn 10-10)
[Figure: a signal of finite duration between t_start and t_end]
Thus one characteristic of the Wigner-Ville distribution is that for a signal of finite duration, the distribution is zero before the start and beyond the end. The same can be said of the frequency version, which means that for a band limited signal, the Wigner-Ville distribution will be zero outside of that range.
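This finite-duration property can be checked numerically. Below is a minimal discrete sketch of the Wigner-Ville distribution (one common discretization, with normalization constants omitted); the 64-point record and the tone frequency are arbitrary illustrative choices.

```python
import numpy as np

def wigner_ville(x):
    # discrete pseudo Wigner-Ville sketch: for each time n, FFT the lag
    # product p(k) = x(n+k) * conj(x(n-k)) over the lags inside the record
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        kmax = min(n, N - 1 - n)
        p = np.zeros(N, dtype=complex)
        for k in range(-kmax, kmax + 1):
            p[k % N] = x[n + k] * np.conj(x[n - k])
        W[:, n] = np.fft.fft(p).real
    return W

x = np.zeros(64, dtype=complex)
n = np.arange(20, 41)
x[20:41] = np.exp(2j * np.pi * 0.2 * n)   # a tone that exists only for n = 20..40
W = wigner_ville(x)
assert np.allclose(W[:, :20], 0) and np.allclose(W[:, 41:], 0)  # zero outside the duration
assert np.abs(W[:, 30]).max() > 0                               # non-zero inside it
```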
The same manoeuvre can be used to see why the reverse is not true when at some point the signal level drops to zero. Consider the situation illustrated below.
[Figure: a signal whose level drops to zero at an interior point t0]
At a point (t0) where the signal itself is zero, multiplying the section to the left by the section to the right results in a non-zero value. In general it can be said that the Wigner distribution is not necessarily zero where the signal is. This unwelcome characteristic makes it difficult to interpret, especially when analyzing signals with many components.
The same mechanism accounts for noisiness that can be seen in the distribution
in places where it is not present in the signal as shown below.
[Figure: a signal containing noise over a limited interval, with evaluation points t1 and t2]
When evaluating the distribution at point t1 the overlapping sections will not include the noise, but even at point t2, where there is no noise in the signal, the noise will already influence the distribution. Noise will therefore be spread over a wider period in the distribution than it occupied in the actual signal.
The same reasoning can be used to explain the appearance of interference terms along the frequency axis. This is especially so when a signal contains multiple frequency components at the same moment in time, which results in interference terms at a frequency midway between the frequencies of the different components. As mentioned above, these terms can easily be recognized by their oscillatory nature, and smoothing techniques can reduce their effect. Some possible smoothing techniques are discussed below.
Generalization
A generalization of the Wigner-Ville distribution leads to a whole class of time-frequency representations, whose main desirable mathematical property is invariance under operations such as time shift, frequency shift or time/frequency scaling. This means that a shift in time or frequency of the signal leads to an equivalent shift of the time-frequency representation of that signal, and that scaling the signal leads to a corresponding scaling of the time-frequency representation.
T_x(t,f) = ∬ Ψ_T(t − t', f − f') W_x(t', f') dt' df'
where W_x(t',f') is the Wigner-Ville distribution of the signal x(t), and Ψ_T is the "kernel function". It is the choice of this kernel function that determines the basic properties of each specific time-frequency representation derived from this general definition. The kernel function can also be seen as a smoothing function applied to the Wigner-Ville distribution.
Spectrogram
where the kernel = Wigner distribution of the analysis window.
The class of time shift/time scale invariant representations is also known as the Affine class; examples of representations belonging to this class are the scalogram, the Wigner-Ville distribution, the CWD, ...
10.4 References
Books
Time-frequency analysis:
Leon Cohen - Prentice Hall - 1995 - 299 pp. - ISBN 0-13-594532-1
Papers
Linear and Quadratic Time-frequency Signal Representations:
F. Hlawatsch, G.F. Boudreaux-Bartels (IEEE SP Magazine, April 1992)
Resampling
Adaptive resampling
Chapter 11 Resampling
The process of converting a signal that has been sampled at a particular rate to
one that is sampled at a different rate is known as resampling.
Resampling may be necessary for a number of reasons. A DAT recorder, for example, samples a signal at a rate of 48000 samples per second. If the signal has a bandwidth of only 200 Hz then 500 samples a second would be adequate, and as a consequence far more data exists than is needed to describe the signal. In this situation the sample rate could be decreased, a process which is referred to as decimation or downsampling.
On the other hand, while a critically sampled signal may contain all the information needed to adequately describe the frequency contents of the signal, it may not look good, or be easy to interpret, in the time domain. Increasing the sampling rate will generate a signal which has identical spectral contents but a much better defined time waveform. When the resampling involves an increase in the sampling rate it is referred to as interpolation or upsampling.
This section considers the theoretical background to the process of digital resampling and the factors that must be taken into account to realize resampling and achieve the required accuracy of results. It should be noted, however, that the contents of this document are by no means a comprehensive treatment of this subject. For a more thorough understanding you should refer to the reading list given at the end of the section, and in particular to references [3] and [4].
Downsampling by a factor of 5 will reduce the sample rate to 200 Hz and the bandwidth to 100 Hz. It is first necessary to apply a low pass filter to limit the spectral content of the data to the 100 Hz bandwidth. This will remove the higher frequency component, leaving a time domain signal containing 125 points per period for the remaining 8 Hz component.
[Figure: spectrum after low pass filtering - only the 8 Hz component remains within the 100 Hz bandwidth; the time signal has 125 points per period]
Not applying the filter would result in the following: the 325 Hz component will fold to 75 Hz in the 100 Hz bandwidth, and as a consequence the result is heavily distorted.
[Figure: spectrum after unfiltered downsampling - the 8 Hz component is joined by the 325 Hz component folded to 75 Hz inside the 100 Hz bandwidth]
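The folding of the 325 Hz component can be reproduced with a small helper (the function name is illustrative):

```python
def folded_frequency(f, fs):
    # frequency at which a component of f Hz appears after sampling at fs Hz
    f = f % fs
    return fs - f if f > fs / 2 else f

# the example from the text: a 325 Hz component sampled at 200 Hz folds to 75 Hz
assert folded_frequency(325, 200) == 75
# the 8 Hz component lies inside the 100 Hz bandwidth and is unaffected
assert folded_frequency(8, 200) == 8
```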
It can be proven that the spectrum of the upsampled signal consists of the original one plus a mirrored version of it at all higher frequencies.
[Figure: spectrum of the upsampled signal - the original band of width f_bw and its mirrored images at higher frequencies]
The resulting signal has identical spectral contents to the original. The increased number of points per cycle provides a much improved time domain description of the waveform.
When the new sample rate is not an integer ratio of the original, another strategy is used. Consider the signal shown below, which was originally sampled at a rate indicated by the white circles. The required sample rate is indicated by the filled circles.
The first stage is to upsample by a relatively high factor (a). This factor is known as the `Upsampling factor before interpolation' parameter and the default value used is 15. The resulting sample rate is indicated by the squares.
The second stage then involves performing a linear interpolation on the upsampled signal to arrive at a new sample rate that is an integer multiple (b) of the target frequency. This introduces an error which will be small as long as the source trace is upsampled at a high enough ratio. The maximum distortion that can occur with the upsampling factor is indicated by the software.
This error is indicated in the form of the `SDR' (Signal to Distortion Ratio). It depends on the `Upsampling factor before interpolation' parameter and the filter's cut-off frequency as shown below:
The final stage in this process is to downsample by this integer factor (b) to the required rate. It is also possible that the downsampling is achieved directly by the interpolation process itself, as long as the downsampling rate being performed is lower than the preceding upsampling rate (a).
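A minimal sketch of the overall idea, collapsing the stages into a single linear interpolation onto the target grid (the real procedure first upsamples by the factor (a) with a proper interpolation filter, precisely so that this linear-interpolation error stays small; the function name is illustrative):

```python
import numpy as np

def resample_by_interpolation(x, up, down):
    # linearly interpolate the input onto a grid whose spacing is down/up
    # input samples, i.e. change the sample rate by the ratio up/down
    n_out = int((len(x) - 1) * up / down) + 1
    new_positions = np.arange(n_out) * down / up   # in units of input samples
    return np.interp(new_positions, np.arange(len(x)), x)

# linear interpolation reproduces a ramp exactly, whatever the rate ratio
x = np.arange(10.0)
y = resample_by_interpolation(x, up=3, down=2)
assert np.allclose(y, (2.0 / 3.0) * np.arange(len(y)))
```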
In the same way as the Fourier transform presents the contents of time domain data in the frequency domain, it converts angle domain data to the order domain. Just as something that happens twice every second has a frequency of 2 Hz, something that occurs twice every cycle is related to order 2. Consider the example of measurements taken on an engine at a supposedly constant rpm. Even very slight variations in rpm will result in a frequency domain representation where the related spectral components are sharp for the low orders, but become smeared out at higher frequencies. The small rpm variations lead to leakage errors in the frequency domain.
Implementation example
This example below illustrates the procedure involved in converting from the
time to the angle domain. The principle can be used to convert between any
two domains.
Your original time signal must be measured in conjunction with a tracking sigĆ
nal. This is most likely to be a tacho signal; a pulse train, that can be converted
to an rpm /time function and then integrated to obtain an angle/time function.
[Figure: the angle/time function - ordinate: angle, abscissa: time]
In the case of a transformation from the time domain into the angle domain, the required (constant) resolution in the angle domain (Δθ) defines the time intervals at which data samples of the vibration measurement should be available.
[Figure: measured (time-equidistant) points and required (angle-equidistant) points on the angle/time curve]
The most appropriate resolution (Δθ) is based on the minimum slew rate which must be coped with. When sampling in the time domain, the time increment is the reciprocal of the sampling frequency.
Δt = T = 1 / Fs
Δθ = (dθ/dt)_min / Fs
So for example if the minimum slew rate (dθ/dt) corresponds to 500 rpm and the sample frequency is 2000 Hz, then the threshold angle will be (500 × 360°/60 s) / 2000 Hz = 1.5°.
Using an angle increment less than this value will yield more data points in the angle domain without any gain in information, thus representing excessive processing. Using a higher increment value will result in a loss of information in the lower rpm ranges, which will not be recovered if the data is transformed back to the original domain.
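The threshold-angle calculation can be sketched as follows (the function name is illustrative); for a 500 rpm minimum slew rate and a 2000 Hz sample rate it gives 1.5 degrees:

```python
def min_angle_increment_deg(rpm_min, fs):
    # slew rate converted to degrees/second, divided by the time-domain sample rate
    deg_per_sec = rpm_min * 360.0 / 60.0
    return deg_per_sec / fs

# the example from the text: 500 rpm minimum slew rate, 2000 Hz sampling
assert min_angle_increment_deg(500, 2000) == 1.5
```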
11.3 References
Digital filtering
Analysis of filters
Application of filters
This is by no means a comprehensive text and aims just to give some
insight into the subject. A reading list is appended at the end of the
chapter.
Chapter 12 Digital filtering
[Figure: an arbitrary sequence a(n)]
Any sequence can be expressed as a sum of weighted, delayed unit samples u_0(n):
{a(n)} = Σ_m a(m) u_0(n − m)        Eqn 12-1
A linear system implies that applying the input ax1 +bx2 will result in the output
ay1 +by2 where a and b are arbitrary constants.
A time-invariant system implies that the input sequence x(n-n0 ) will result in
the output y(n-n0 ) for all n0 .
From equation 12-1 the input x(n) to a system can be expressed as
x(n) = Σ_m x(m) u_0(n − m)        Eqn 12-3
If h(n) is defined as the impulse response of a system, that is, the response to the sequence u_0(n), then by time invariance h(n−m) is the response to u_0(n−m). By linearity, the response to the sequence x(m)u_0(n−m) must be x(m)h(n−m). Thus the response to x(n) is given by
y(n) = Σ_m x(m) h(n − m) = Σ_m h(m) x(n − m)        Eqn 12-4
Equation 12-4 is known as the convolution sum and y(n) is known as the convolution of x(n) and h(n), designated by x(n) * h(n). Thus for a linear time invariant (LTI) system a relation exists between the input and output that is completely characterized by the impulse response h(n) of the system.
[Figure: an LTI system with impulse response h(n), input x(n) and output y(n)]
A causal system is one for which the output for any n = n0 depends only on the input for n ≤ n0. A linear time-invariant system is causal if and only if the unit sample response is zero for n < 0, in which case it may be referred to as a causal sequence.
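The convolution sum of equation 12-4 can be sketched directly (a naive double loop, for illustration only):

```python
def convolve(x, h):
    # y(n) = sum over m of h(m) * x(n - m)   (equation 12-4)
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for m in range(len(h)):
            if 0 <= n - m < len(x):
                y[n] += h[m] * x[n - m]
    return y

assert convolve([1, 2, 3], [1, 1]) == [1, 3, 5, 3]
```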
Difference equations
Some linear time-invariant systems have input and output sequences that are related by a constant coefficient linear difference equation. Representing systems in this way can provide a means of making them realizable, and the appropriate difference equation reveals useful information on the characteristics of the system under investigation, such as the natural frequencies and their multiplicity, the order of the system, the frequencies for which there is zero transmission, ...
The general form of an Mth order linear constant coefficient difference equation is given in equation 12-6.
y(n) = Σ_{i=0..M} b_i x(n − i) − Σ_{i=1..M} a_i y(n − i)        Eqn 12-6
[Figure: Direct Form 1 realization of a first order difference equation - the input x(n) and its delayed value x(n−1) are weighted by b_0 and b_1, and the delayed output y(n−1) is fed back through −a_1]
Each delay element represents a one sample delay. A realization such as this, where separate delays are used for both input and output, is known as Direct Form 1. More detailed information on filter realizations can be obtained from the references listed at the end of this chapter.
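A minimal sketch of a Direct Form 1 evaluation of the difference equation, using the feedback sign convention of the −a_1 branch in the figure (function and parameter names are illustrative):

```python
def direct_form_1(x, b, a):
    # y(n) = sum b[i]*x(n-i) - sum a[i]*y(n-i), with a[0] assumed to be 1
    y = []
    for n in range(len(x)):
        acc = sum(b[i] * x[n - i] for i in range(len(b)) if n - i >= 0)
        acc -= sum(a[i] * y[n - i] for i in range(1, len(a)) if n - i >= 0)
        y.append(acc)
    return y

# a single pole at z = 0.5: the impulse response decays as 0.5**n
h = direct_form_1([1, 0, 0, 0], b=[1.0], a=[1.0, -0.5])
assert h == [1.0, 0.5, 0.25, 0.125]
```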
The z transform
The z transform of a sequence x(n) is given by
X(z) = Σ_n x(n) z^(−n)        Eqn 12-8
H(z) = Y(z) / X(z)        Eqn 12-9
and H(z) can again be expressed in the general form of difference equations
H(z) = (a_0 + a_1 z^(−1) + a_2 z^(−2) + ... + a_M z^(−M)) / (1 + b_1 z^(−1) + b_2 z^(−2) + ... + b_N z^(−N))        Eqn 12-10
For an input x(n) = e^(jω_0 n) the output is
y(n) = Σ_m h(m) e^(jω_0 (n−m)) = e^(jω_0 n) Σ_m h(m) e^(−jω_0 m)
The quantity H(e^(jω)) is the frequency response function of the filter, which gives the transmission of the system for every value of ω. This is in fact the z transform of the impulse response function with z = e^(jω).
H(z)|_{z=e^(jω)} = H(e^(jω)) = Σ_n h(n) e^(−jωn)        Eqn 12-12
H(e^(jω)) = Σ_n h(n) e^(−jωn)        Eqn 12-13
h(n) = (1/2π) ∫ H(e^(jω)) e^(jωn) dω
where the impulse response coefficients are also the Fourier series coefficients.
Since the above relationships are valid for any sequence that can be summed, the same can apply to x(n) and y(n), and it can be shown that
Y(e^(jω)) = H(e^(jω)) X(e^(jω))
and so the convolution in the time domain has been converted to multiplication in the frequency domain.
H_p(k) = Σ_{n=0..N−1} h_p(n) e^(−j(2π/N)nk)        Eqn 12-15
and the DFT coefficients are identical to the z transform of that same sequence evaluated at N equally spaced points around the unit circle. The DFT coefficients are therefore a unique representation of a sequence of finite duration.
The continuous frequency response can be obtained from the DFT coefficients by artificially increasing the number of points equally spaced around the unit circle. So by augmenting a finite duration sequence with additional equally spaced zero valued samples, the Fourier transform can be calculated with arbitrary resolution.
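The zero-padding argument can be checked numerically: padding a short sequence before the DFT simply samples the same underlying transform more densely around the unit circle.

```python
import numpy as np

h = np.array([1.0, 2.0, 1.0])   # a short finite-duration sequence
H8 = np.fft.fft(h, 8)           # DFT: 8 samples around the unit circle
H64 = np.fft.fft(h, 64)         # zero-padded: 64 samples of the same response
# every 8th point of the dense evaluation coincides with the coarse DFT
assert np.allclose(H8, H64[::8])
```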
[Figure: a finite impulse response h(n), nonzero only between N1 and N2]
Such filters are always stable and can be realized by delaying the impulse response by an appropriate amount. The design of FIR filters is described in section 12.2.2.
Filters fall into two distinct categories - the Finite Impulse Response (FIR) filters and the Infinite Impulse Response (IIR) filters. A comparison of the two categories of filters is given below.
There are nine basic designs of filters that are described in this chapter as listed
below.
This section begins with an introduction to the terminology used in filter design. The following subsections deal with the processes and parameters involved in each sort of filter mentioned above.
Filter characteristics
[Figure: filter attenuation characteristic]
The filter design functions operate with normalized frequencies, with a unit frequency equal to the sampling frequency.
h(n) = h(N − 1 − n)
This means that for each value of N there is only one value of the delay for which exactly linear phase will be obtained. Figure 12-2 shows the type of symmetry required when N is odd and even.
[Figure 12-2: the centers of symmetry of h(n) for N odd and N even]
Filter types
Several types of filter are provided (some of which are illustrated below) as
well as multipoint filters where the required response can be of an arbitrary
shape.
[Figure: magnitude responses of several filter types, including high pass and band stop]
Differentiator filter
Such a filter takes the derivative of a signal, and an ideal differentiator has a desired frequency response of H_d(ω) = jω. The impulse response is then
h(n) = (1/2π) ∫ H_d(ω) e^(jωn) dω = (1/2π) ∫ jω e^(jωn) dω = cos(πn)/n   (n ≠ 0)        Eqn 12-17
Hilbert transformer
This filter imparts a 90° phase shift to the input. The ideal Hilbert transformer has a desired frequency response of
H_d(ω) = −j for 0 < ω < π,  and  +j for −π < ω < 0
h(n) = (1/2π) [ ∫_{−π..0} j e^(jωn) dω − ∫_{0..π} j e^(jωn) dω ] = (2/πn) sin²(πn/2)   (n ≠ 0)        Eqn 12-19
In practice however the ideal case is not required, and the desired frequency response of a Hilbert transformer can be specified as H_d(ω) = 1 between the limits ω_l < ω < ω_u, as shown below.
[Figure: desired Hilbert transformer response, unity between ω_l and ω_u]
H(e^(jω)) = Σ_n h(n) e^(−jωn)        Eqn 12-20
h(n) = (1/2π) ∫ H(e^(jω)) e^(jωn) dω
The coefficients of the Fourier series are identical to the impulse response of the filter. Such a filter is not realizable, however, since it begins at −∞ and is infinitely long. It needs to be both truncated to make it finite and shifted to make it realizable. Direct truncation is possible but leads to the Gibbs phenomenon of overshoot and ripple illustrated below.
A solution to this is to truncate the Fourier series with a window function. This is a finite weighting sequence which modifies the Fourier coefficients to control the convergence of the series. Then
ĥ(n) = h(n) w(n)        Eqn 12-21
where w(n) is the window function sequence and ĥ(n) gives the required impulse response.
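A minimal sketch of this window-method design for a low pass filter, using a Hanning taper (the tap count and cutoff below are arbitrary illustrative values):

```python
import math

def fir_lowpass(num_taps, cutoff):
    # truncate the ideal lowpass impulse response, shift it to make it
    # causal, and taper it with a Hanning window (cutoff as a fraction of fs)
    M = num_taps - 1
    h = []
    for n in range(num_taps):
        k = n - M / 2.0
        ideal = 2.0 * cutoff if k == 0 else math.sin(2.0 * math.pi * cutoff * k) / (math.pi * k)
        w = 0.5 - 0.5 * math.cos(2.0 * math.pi * n / M)   # Hanning taper
        h.append(ideal * w)
    return h

h = fir_lowpass(31, 0.25)
assert all(abs(h[i] - h[30 - i]) < 1e-12 for i in range(31))  # symmetric -> linear phase
assert abs(sum(h) - 1.0) < 0.01                               # DC gain close to unity
```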
The desirable characteristics of a window function are
- a narrow main lobe containing as much energy as possible
- side lobes that decrease in energy rapidly as ω tends to π.
The windows supported are listed below.
Rectangular
This is equivalent to direct truncation.
W(n) = 1 when −(N−1)/2 ≤ n ≤ (N−1)/2
     = 0 elsewhere
[Figure: rectangular window, unity between −(N−1)/2 and (N−1)/2]
Hanning
This type of window trades off transition width for ripple cancellation. In this case
W(n) = α + (1 − α) cos(2πn/(N − 1)) when |n| ≤ (N−1)/2, with α = 0.5
     = 0 elsewhere
Hamming
This has similar properties to the Hanning window described above. The formula is the same but in this case α = 0.54.
Kaiser
The Kaiser window function is a simplified approximation of a prolate spheroidal wave function, which exhibits the desirable qualities of being a time-limited function whose Fourier transform approximates a band-limited function. It displays minimum energy outside a selected frequency band and is described by the following formula
W(n) = I_0(β √(1 − [2n/(N − 1)]²)) / I_0(β)   when −(N−1)/2 ≤ n ≤ (N−1)/2
where I_0 is the zeroth order modified Bessel function.
Chebyshev
This is another example of an essentially optimum window like the Kaiser window, in the sense that it is a finite duration sequence that has the minimum spectral energy beyond the specified limits. The window function is derived from the Chebyshev polynomial, which satisfies
T_0 = 1,  T_r(1) = 1,  T_r(−1) = (−1)^r,  T_2r(0) = (−1)^r,  T_{2r+1}(0) = 0
The window function W(n) is obtained from the inverse DFT of the Chebyshev polynomial evaluated at N equally spaced points around the unit circle.
This allows you to design a filter of arbitrary shape, and is suited for narrow band selective filters. It uses the design technique known as frequency sampling.
It will be recalled from equation 12-15 that a filter can be defined by its DFT coefficients, and that the DFT coefficients can be regarded as samples of the z transform of the function evaluated at N points around the unit circle.
H(k) = Σ_{n=0..N−1} h(n) e^(−j(2π/N)nk)
h(n) = (1/N) Σ_{k=0..N−1} H(k) e^(j(2π/N)nk)
From these relationships, and since e^(j2πk) = 1, it can be shown that
H(z) = ((1 − z^(−N)) / N) Σ_{k=0..N−1} H(k) / (1 − z^(−1) e^(j(2π/N)k))        Eqn. 12-22
The filter coefficients are obtained after applying an inverse FFT on the interpolated response. The coefficients are tapered smoothly to zero at the ends by multiplying the impulse response by the specified window function.
This uses the Remez exchange algorithm and Chebyshev approximation theory to arrive at filters that optimally fit the desired and the actual frequency responses, in the sense that the error between them is minimized. The Parks-McClellan algorithm employed enables you to design an equi-ripple optimal FIR filter.
The weighted approximation error between the desired frequency response and the actual response is spread evenly across the passbands and the stopbands, and the maximum error is minimized by linear optimization techniques. The approximation errors in both the pass and stop bands for a low pass filter are illustrated in Figure 12-7.
[Figure 12-7: pass band approximation error δ1 and stop band approximation error δ2 of a low pass filter]
The filter coefficients are obtained after applying an inverse DFT on the optimum frequency response.
Weighting
For each frequency band the approximation errors can be weighted. This is done by specifying a weighting function W(ω). Applying a weighting function of 1 (unity) in all bands implies an even distribution of the errors over the whole frequency band. To reduce the ripple in one particular band it is necessary to change the relative weighting across the bands, ensuring that the band of interest has a relatively high weighting. It is convenient to normalize W(ω) to unity in the stopband and to set it to the ratio of the approximation errors (δ2/δ1) in the passband.
The required filter characteristics are described in Figure 12-8. These will of course depend on the type of filter required.
[Figure 12-8: filter specification - maximum ripple in the pass band (dB), attenuation (dB), lower cutoff ω_l and upper cutoff ω_u]
A prototype low pass filter will be designed based on the required digital cutoff frequency ω_c. First, however, the digital frequency ω_d must be converted to an analog one ω_a. This is achieved through a bilinear transformation from the digital (z) plane to the analog (s) plane, where s and z are related by
s = (2/T) (1 − z^(−1)) / (1 + z^(−1))        Eqn 12-23
With z = e^(jωT), the analog axis is mapped onto one revolution of the unit circle, but in a non-linear fashion. It is necessary to compensate for this nonlinearity (warping) as shown below.
[Figure: the warping between analog frequencies (ω_a) and the defined digital frequencies (ω_c, ω_d)]
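The prewarping that compensates this effect can be sketched as follows (assuming the usual compensation formula ω_a = (2/T) tan(ω_d T/2); the sample rates and frequencies below are illustrative):

```python
import math

def prewarp_hz(f_digital, fs):
    # analog frequency that the bilinear transform maps onto f_digital:
    # omega_a = (2/T) * tan(omega_d * T / 2), expressed here in Hz
    return (fs / math.pi) * math.tan(math.pi * f_digital / fs)

# far below fs the mapping is nearly the identity ...
assert abs(prewarp_hz(10.0, 48000.0) - 10.0) < 1e-3
# ... but the warping grows as the digital frequency approaches fs/2
assert prewarp_hz(12000.0, 48000.0) > 12000.0
```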
It is now necessary to select a suitable low pass analog prototype filter that will produce the required characteristics. The selection can be made from the following types of filter.
Bessel filters
Butterworth filters
Bessel filters
The goal of the Bessel approximation for filter design is to obtain a flat delay characteristic in the passband. The delay characteristics of the Bessel approximation are far superior to those of the Butterworth and the Chebyshev approximations; however, the flat delay is achieved at the expense of the stopband attenuation, which is even lower than that of the Butterworth. The poor stopband characteristics of the Bessel approximation make it impractical for most filtering applications!
Bessel filters have sloping pass and stop bands and a wide transition width, resulting in a cutoff frequency that is not well defined.
H(s) = d_0 / B_n(s)        Eqn 12-26
where B_n(s) is a Bessel polynomial and
d_0 = (2n)! / (2^n n!)        Eqn 12-28
Butterworth filters
These are characterized by the response being maximally flat in the pass band and monotonic in both the pass band and the stop band. Maximally flat means that as many derivatives as possible are zero at the origin. The squared magnitude response of a Butterworth filter is
|H(jω)|² = 1 / (1 + (ω/ω_c)^(2n))        Eqn 12-29
where n is the order of the filter. The transfer function of this filter can be determined by evaluating equation 12-29 at s = jω
H(s)H(−s) = 1 / (1 + (s/jω_c)^(2n))        Eqn 12-30
Butterworth filters are all-pole filters, i.e. the zeros of H(s) are all at s = ∞. They have magnitude 1/√2 when ω/ω_c = 1, i.e. the magnitude response is down 3 dB at the cutoff frequency.
[Figure: Butterworth squared magnitude responses for n = 4 and n = 10, showing the 3 dB point at ω_c]
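Both properties follow directly from the squared magnitude response and can be checked numerically:

```python
import math

def butterworth_mag_squared(w, wc, n):
    # |H(jw)|^2 = 1 / (1 + (w/wc)^(2n))
    return 1.0 / (1.0 + (w / wc) ** (2 * n))

assert butterworth_mag_squared(1.0, 1.0, 4) == 0.5   # -3 dB at the cutoff, any order
# one octave above cutoff, an n = 4 filter is down about 6n = 24 dB
assert abs(10 * math.log10(butterworth_mag_squared(2.0, 1.0, 4)) + 24.1) < 0.1
```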
|H(jω)|² = 1 / (1 + ε² C_n²(ω/ω_c))
where C_n(ω) are the Chebyshev polynomials and ε is the parameter related to the ripple in the pass band, as shown below for n odd and even.
[Figure: Chebyshev pass band ripple between 1 and 1/(1+ε²), for n odd and n even]
For the same loss requirements, the Chebyshev approximation usually requires
a lower order than the Butterworth approximation, but at the expense of an
equi-ripple passband. Therefore, the transition width of a Chebyshev filter is
narrower than for a Butterworth filter of the same order.
These contain poles and zeros and have equi-ripple stop bands with maximally flat pass bands.
[Figure: Inverse Chebyshev responses with equi-ripple stop bands, for n odd and n even]
For the same loss requirements, the Inverse Chebyshev approximation usually
requires a lower order than the Butterworth approximation, but at the expense
of an equi-ripple stopband.
These filters are optimum in the sense that for a given filter order and ripple specifications, they achieve the fastest transition between the pass and the stop band (i.e. the narrowest transition band). They have equi-ripple stop bands and pass bands.
[Figure: elliptic (Cauer) responses for n odd and n even]
This group of filters is characterized by the property that the group delay is maximally flat at the origin of the s plane. However, this characteristic is not normally preserved by the bilinear transformation, and the resulting filter has poor stop band characteristics.
For a given requirement, this approximation will in general require a lower order than the Butterworth or the Chebyshev ones. The Cauer approximation will thus lead to the least costly filter realization, but at the expense of the worst delay characteristics.
In the Chebyshev and Butterworth approximations, the stopband loss keeps increasing at the maximum possible rate of 6·<Order> dB/octave. These approximations therefore provide increasingly more loss than the flat attenuation that is really needed above the edge of the stopband. This source of inefficiency in both approximations is remedied by the Cauer or elliptic approximation.
Low pass to high pass:   s → ω_c / s
Low pass to band pass:   s → (s² + ω_u ω_l) / (s (ω_u − ω_l))
Low pass to band stop:   s → (s (ω_u − ω_l)) / (s² + ω_u ω_l)
The final result is a set of filter coefficients a and b, stored in vectors of length n+1, where n is the order of the filter. A facility, described below, enables you to determine the optimum order of a filter required for a particular design.
[Figure: low pass tolerance scheme - pass band ripple δ1 (magnitude between 1 and 1 − δ1), stop band attenuation δ2, band edges ω_p and ω_s]
The filter can be any one of the types mentioned above, and the prototype can be either a Butterworth, Chebyshev type I or type II, or a Cauer filter. This process does not apply to the Bessel filter because of the particular condition pertaining to these filters, in that the filter order affects the cutoff frequency.
The minimum filter order required is determined from a set of functions described below.
One function relates the pass band and stop band ripple specifications to a filter design parameter d, where
d = √[ ((1/(1 − δ1))² − 1) / ((1/δ2)² − 1) ]
Another function relates the pass band cut off frequency ω_p, the transition width and the low pass filter transition ratio k, where
k = ω_p / ω_s   (analog)        k = tan(ω_p/2) / tan(ω_s/2)   (digital)
A final function relates the filter order n, the low pass filter transition ratio k and the filter design parameter d. This relationship depends on the type of prototype analog filter:
n ≥ log(1/d) / log(1/k)        (Butterworth)
n ≥ cosh⁻¹(1/d) / cosh⁻¹(1/k)        (Chebyshev)
n ≥ (K(k) K(√(1 − d²))) / (K(d) K(√(1 − k²)))        (Elliptic)
where K(·) is the complete elliptic integral of the first kind.
The required response is obtained from a specified gabarit (tolerance template) that contains the necessary frequency and magnitude break points, which are mapped onto a grid. The outcome is a set of filter coefficients.
12.3 Analysis
This section describes the functions that provide information on the characteristics of filters.
Group delay
The group delay of a filter provides a measure of the average delay of the filter as a function of frequency. The frequency response of a filter can be written as
H(e^(jω)) = |H(e^(jω))| e^(jφ(ω))        Eqn 12-35
and the group delay is defined as the first derivative of the phase
τ_g(ω) = −dφ(ω)/dω        Eqn 12-36
If the waveform is not to be distorted, the group delay should be constant over the frequency bands being passed by the filter.
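The group delay can be evaluated numerically by central differences on the phase (a sketch; the three-tap symmetric filter below is an illustrative example whose group delay is exactly one sample at every passed frequency):

```python
import cmath

def group_delay(h, w, dw=1e-6):
    # tau_g(w) = -d(phi)/dw, estimated by a central difference on the phase
    H = lambda w: sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))
    phi = lambda w: cmath.phase(H(w))
    return -(phi(w + dw) - phi(w - dw)) / (2 * dw)

# a symmetric (linear phase) FIR of length 3 delays every frequency by 1 sample
assert abs(group_delay([1.0, 2.0, 1.0], 1.0) - 1.0) < 1e-3
```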
The z transform of a time reversed sequence is
Z{x(−n)} = Σ_n x(−n) z^(−n)        Eqn 12-37
which, with the substitution u = −n, becomes Σ_u x(u) (z^(−1))^(−u). So if X(z) = Z{x(n)}, then Z{x(−n)} = X(1/z).
Time reversal filtering can be realized using the method shown in Figure 12-12. The overall response is then H(z)H(1/z), i.e. zero phase and squared magnitude. Using this filtering method results in starting and end transients, which in this implementation are minimized by carefully matching the initial conditions.
12.5 References
Harmonic tracking
Practical considerations
Chapter 13 Harmonic tracking
13.1 Introduction
- fine spectral resolution of the orders (e.g. 0.01 Hz) obtained after just a few measurement samples (not even one cycle of the fundamental component),
- no phase distortion.
In order to use the Kalman filter the following conditions must apply -
This equation defines the shape or structure of the waveform you wish to track. A sine wave x(t) of frequency ω, sampled at time increment Δt, satisfies the following second order difference equation
x(n) − 2 cos(ωΔt) x(n−1) + x(n−2) = 0        Eqn 13-2
When tracking a sine wave which is changing in frequency, and which is contaminated by noise and other sinusoids, a non homogeneity term ε(n) is introduced. This allows the sine wave to vary in frequency, amplitude and phase, and Equation 13-2 then becomes
x(n) − c(n) x(n−1) + x(n−2) = ε(n),   with c(n) = 2 cos(ω(n)Δt)
ε(n) is a deterministic but unknown term which allows for deviations from the true stationary wave.
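That a noiseless sampled sine satisfies this second order difference equation can be verified directly (the frequency, sample interval and phase below are arbitrary illustrative values):

```python
import math

# a sampled sine x(n) = sin(w*n*dt + phi) satisfies
#   x(n) - 2*cos(w*dt)*x(n-1) + x(n-2) = 0
w, dt, phi = 2 * math.pi * 7.0, 1.0 / 100.0, 0.3   # chosen freely
x = [math.sin(w * n * dt + phi) for n in range(50)]
c = 2.0 * math.cos(w * dt)
residual = max(abs(x[n] - c * x[n - 1] + x[n - 2]) for n in range(2, 50))
assert residual < 1e-12
```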
It is also useful to define s_ε(n) as the standard deviation of the non homogeneity ε(n) of the structural equation.
x(n) is the time history defined by the structural equation, but the measured signal y(n) contains both the signal that matches the structural equation and other, nuisance, components
y(n) = x(n) + η(n)
where η(n) contains noise and periodic components at frequencies other than the target signal.
Once again, s_η(n) is defined as the standard deviation of the nuisance element of the data equation.
[1  −c(n)  1] · (x(n−2), x(n−1), x(n))ᵀ = ε(n)
x(n) = y(n) − η(n)        Eqn. 13-5
The error in equation 13-5 is made isotropic by applying a weighting factor r(n), which is defined as the ratio of the standard deviations of the errors in the structural and data equations.
r(n) = s_ε(n) / s_η(n)        Eqn. 13-6
[1  −c(n)  1] · (x(n−2), x(n−1), x(n))ᵀ = ε(n)
r(n) x(n) = r(n) (y(n) − η(n))        Eqn. 13-7
The weighting function r(n) expresses the degree of confidence between the structural equation and the data equation, or the certainty of the presence of orders in the data. This function shapes the nature of the Kalman filter and influences its tracking characteristics. A small value of r(n) leads to a filter that is highly discriminating in frequency, but which takes time to converge. Conversely, fast convergence with low frequency resolution is achieved by choosing a large r(n).
When applied to all observed time points, Equation 13-7 provides an overdetermined system of equations which may be solved using standard least squares techniques.
This section considers some practical characteristics of the Kalman filter and the
parameters that influence them.
Frequency resolution
In principle the Kalman filter is capable of tracking sinusoidal components of any frequency up to half the sample frequency. In practice, however, it has been found that the ability to distinguish between two closely spaced sine waves is inversely proportional to the total observation time. As a consequence, the observation time should be equal to the inverse of the minimum frequency spacing required between components.
Filter characteristics
It was mentioned above that the weighting r(n) used in Equation 13-7 can be used to influence the nature of the tracking filter. This weighting can be adjusted through the specification of a harmonic confidence factor, which is defined as the inverse of the weighting factor.
HC = 1/r(n) = s_η(n) / s_ε(n)        Eqn. 13-8
Applying a high value implies confidence in the harmonic (structural equation) and assumes that the error in your measured data is high. In this case the filter will be narrow so that it is highly discriminating in frequency. This is obtained at the cost of time to converge in amplitude. Applying a low value implies that the error in the measured data is low; consequently a wider filter can be used which, while less discriminating in frequency, has the advantage that the amplitude converges more quickly.
The three Kalman filters shown below are characterized by different harmonic confidence factors, which influence the width of the filter.
[Figure: filter shapes for HC = 50, HC = 100 and HC = 200]
Bandwidth characteristics
Equation 13-7 shows that the weighting function r(n), which is the inverse of the harmonic confidence factor, can be different for every time point. This means that the bandwidth of the filter can vary as a function of the frequency or order being tracked.
Using a frequency defined bandwidth means that at low rpm values, a number of orders will be encompassed by the filter range.
Figure 13-2 Defining the filter bandwidth in terms of frequency and amplitude.
Tracking closely spaced order signals with a high slew rate requires sampling at a high frequency over a long period, which imposes a heavy computational effort. However, even the significant slew rate encountered during the deceleration of a gas turbine, 75 Hz/sec over 5 seconds, implies a sample rate of only 750 Hz. It can be seen therefore that such an extreme slew rate does not impose any realistic limitation on the sample rate.
Counting and histogramming
Chapter 14 Counting and histogramming
14.1 Introduction
[Figure: an acceleration (g) time history, varying between +0.4 and −0.4 g]
- Stage 1, counting
The data is scanned for the occurrence of one of the events listed above. This in effect reduces the full time history to a set of mechanical or thermal load events.
- Stage 2, histogramming
This involves dividing the counted occurrences into classes, where for each event its number of occurrences is specified.
The procedures described above deal with the counting of `single events' or occurrences, which are further explored in this section.
Figure 14-4 Histograms of peaks (maxima), valleys (minima) and both (extrema)
-1
-2
Peak counts and level cross counts are closely related. The number of positive
crossings of a certain level is equal to the number of peaks above that level
minus the number of valleys above it. This implies that a level cross count can be
derived from a peak-valley count.
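This relation can be stated directly in code. The sketch below is a minimal illustration (the function name and list-based representation are ours, not the manual's):

```python
def level_cross_counts(peaks, valleys, levels):
    """Positive crossings of each level: peaks above the level
    minus valleys above it, as stated in the text."""
    return [sum(p > lv for p in peaks) - sum(v > lv for v in valleys)
            for lv in levels]
```

For example, peaks [2, 1] and valleys [−1, 0] give two positive crossings of level 0.5 but only one of level 1.5.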
[Figure: histograms of number of occurrences versus level for up (+) crossings, down (−) crossings, and up (+) & down (−) crossings combined.]
The range between successive peak-valley pairs is counted. Ranges are
considered positive when the slope is rising and negative when the slope is falling.
[Figure: signal with alternating positive (+) and negative (−) ranges, and the resulting histogram of number of occurrences versus range (−4 … 4).]
Counting of range–pairs
The counting of single ranges (usually indicated as a range count) is both
simple and straightforward, but sensitive to small variations of the signal. Thus in
the analysis of the left hand signal illustrated in Figure 14-9, single range
counting would result in a large number of relatively small ranges.
If a pair of extremities is separated by a range that is less than the defined
range of interest (R), then they are `filtered out' of the range count.
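A minimal sketch of such a range filter on an alternating peak-valley sequence (the function name and the backtracking detail are our assumptions):

```python
def range_filter(extrema, R):
    """Remove adjacent extremum pairs whose range is below R."""
    out = list(extrema)
    i = 0
    while i < len(out) - 1:
        if abs(out[i + 1] - out[i]) < R:
            del out[i:i + 2]      # drop the small peak-valley pair
            i = max(i - 1, 0)     # re-examine around the join
        else:
            i += 1
    return out
```

For instance, `range_filter([0, 3, 2.8, 3.2, 0], 1.0)` keeps only `[0, 3.2, 0]`, eliminating the small dip between 3 and 2.8.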
The counting methods described so far consider the occurrence of single events
in isolation from any other circumstances which may affect these events.
However, it is also meaningful to count events differently, depending on other
circumstances, using `two-dimensional' methods. Such methods are discussed in
this section.
14.3.1 From–to–counting
Such a ``combined'' event can be the occurrence of a peak at level j followed by
a valley at level i. As an example, consider the combination of a valley at level
A followed by a peak at level C as illustrated in Figure 14-11.
[Figure 14-11: signal crossing levels A–D, with the from-to transitions 1→2, 3→4 and 11→12 indicated.]
In this example, the from-to sequence (1→2) is counted separately from the
sequences (3→4) and (11→12), although the ranges involved are identical
(C−A = D−B).
[Table: Markov matrix of from-to counts; From j (A–D) across the columns, To i (A–D) down the rows, X on the diagonal, with separate columns giving the counting results for peaks and valleys at each level.]
The lower left triangle of the Markov matrix contains the positive from-to
events; the upper right triangle summarizes the negative transitions. The
additional separate columns contain the counting results for peaks and valleys at a
particular level. These results are easily obtained from the triangles of the
Markov matrix.
[Figure: the transitions C−A and D−B of Figure 14-11 extracted as cycles, and the resulting histogram of number of events versus range and mean.]
Essentially the signal is split into separate cycles, having a specific amplitude
(or range) and a mean. The result can be put directly into cumulative fatigue
damage calculations according to Miner's rule and into simple crack growth
calculations. Three steps are involved in the complete procedure.
[Figure: time signal with extrema e0 … e6 and range filter size R, before and after peak-valley reduction.]
After counting the first peak, the next valid valley is looked for, which in
this case is e2. This point is validated as a valley as the signal rises by
more than R to go to e3. The algorithm then searches for the next valid
peak. The first peak encountered is e3, but this is not counted as a valid
peak as the signal does not drop sufficiently before reaching the next
extremum in the signal (e4). So the algorithm checks whether the following
peak is a valid one. Peak e5 is regarded as valid since the drop in signal
level following it is greater than R.
In this example the range filter eliminated the small signal variation (e3,e4)
from the peak-valley sequence.
Note that increasing the range filter eliminates only those transitions from
the histogram for which the range is smaller than the new value of R.
This is important for fatigue purposes since it proves that the filtering is
not that sensitive to the range filter size.
This phase of the counting procedure consists of taking a set of four
consecutive points and checking whether a range-pair is contained in it. If not,
the search through the peak-valley sequence continues by shifting one
data point ahead. Once a range-pair is detected, the pair is counted and
removed from the sequence. After this, the next new set of four points is
formed by adding the closest two previously scanned points to the two
remaining after removal of the range pair. The fact that earlier scanned
points are re-considered clearly distinguishes range-pair range counting
from single range counting.
At the end of the second phase, a ``residue'' of peaks and valleys is left,
which is analyzed according to the single range principle. It can be
shown that this residue has a specific shape, namely a diverging part
followed by a converging part.
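The counting phase described above can be sketched as a simple four-point loop (the names and the "inner range contained in both neighbouring ranges" test are our reading of the procedure, not the manual's exact algorithm):

```python
def range_pair_count(extrema):
    """Four-point range-pair counting: returns counted ranges and the residue."""
    seq = list(extrema)
    counted = []
    i = 0
    while i + 3 < len(seq):
        e0, e1, e2, e3 = seq[i:i + 4]
        inner = abs(e2 - e1)
        # count the inner pair when it fits inside both adjacent ranges
        if inner <= abs(e1 - e0) and inner <= abs(e3 - e2):
            counted.append(inner)
            del seq[i + 1:i + 3]   # remove the counted pair
            i = max(i - 1, 0)      # step back: earlier points are re-considered
        else:
            i += 1
    return counted, seq            # seq is the residue
```

The residue returned here is what the text describes as being counted afterwards according to the single range principle.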
Example
The following example shows how the range-pair range method operates.
Consider the time signal shown below. A peak-valley reduction with a range
filter of size R results in the peak-valley sequence shown below.
[Figure: time signal and its peak-valley sequence S1 … S8 after range filtering with size R.]
Counting a range-pair implies deleting the counted extremes from the signal.
``Stepping backwards", the extremes S1,S2,S3, and S6 are now considered and
another pair (S2,S3) is found.
[Figure: the sequence after counting the pair (S2, S3); the extremes S1, S6, S7 and S8 remain.]
From the remaining four extremes, no ``pairs'' can be subtracted. This forms
the residue which is further counted as single ``from-to-ranges''.
Further considerations
The result of the range pair-range counting depends on the length of the data
record being analyzed at one time, because the largest range counted will be
between the lowest valley and the highest peak. This largest variation is often
referred to as the `half load cycle'. If the lowest valley occurs near the beginning
of a very long load cycle, and the highest peak near the end, you should
consider whether it makes physical sense to combine such occurrences, so remote
in time, into one cycle.
The counting method is insensitive to the size of the range filter applied. The
only effect of increasing the range filter size from R to 3R, for example, is that
all elements in a From-to counting for which |from-to|<3*R, become zero. In
other words, the choice of the range filter size is not critical.
14.4 References
[1] de Jonghe J.B., Fatigue load monitoring of tactical aircraft, 29th Meeting of
the AGARD SMP, Istanbul, September 1969.
[3] van Dijk C.M., Statistical load data processing, 6th ICAF Symposium,
Miami, Florida, USA, May 1971.
[4] Matsuiski M. & Endo T., Fatigue of metals subjected to varying stress,
Kyushu district meeting, Japan Society of Mechanical Engineers, March 1968.
[5] Watson P., Cycle counting and fatigue damage, SEE Symposium of 12th
February 1975, Journal of the Society of Environmental Engineers, September 1976.
Part IV
Analysis and design
Chapter 15
Estimation of modal parameters . . . . . . . . . . 219
Chapter 16
Operational modal analysis . . . . . . . . . . . . . . 267
Chapter 17
Running modes analysis . . . . . . . . . . . . . . . . 281
Chapter 18
Modal validation . . . . . . . . . . . . . . . . . . . . . . . . 293
Chapter 19
Rigid body modes . . . . . . . . . . . . . . . . . . . . . . 309
Chapter 20
Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Chapter 21
Geometry concepts . . . . . . . . . . . . . . . . . . . . . 357
Chapter 15 Estimation of modal parameters
A modal analysis provides a set of modal parameters that characterize the
dynamic behavior of a structure. These modal parameters form the modal model,
and Figure 15-1 illustrates the process of arriving at the modal parameters.
[Figure 15-1: the process of arriving at the modal parameters from measured input and output data.]

h_ij(t) = Σ_{k=1..N} [ r_ijk e^(λ_k t) + r*_ijk e^(λ*_k t) ]    Eqn 15-1

h_ij(jω) = Σ_{k=1..N} [ r_ijk / (jω − λ_k) + r*_ijk / (jω − λ*_k) ]    Eqn 15-2

where
h_ij(t) = IR between the response (or output) degree of freedom i and the
reference (or input) DOF j
The pole value can be expressed as shown in Equations 15-3 and 15-4.
λ_k = −σ_k + jω_dk    Eqn 15-3

where σ_k is the damping factor and ω_dk the damped natural frequency of mode k, or

λ_k = −ζ_k ω_nk + jω_nk √(1 − ζ_k²)    Eqn 15-4

where ζ_k is the damping ratio and ω_nk the undamped natural frequency.
The residue can be proven to be the product of three terms (Equation 15-5),

r_ijk = a_k v_ik v_jk    Eqn 15-5

where a_k is a scaling constant and v_ik, v_jk are the mode shape coefficients of
mode k at DOFs i and j.
Note that the mode shape coefficients can be either real (normal mode shapes)
or complex. If the mode shapes are real, the scaling constant can be expressed
as,
a_k = 1 / (2j m_k ω_dk)    Eqn 15-6

where m_k is the modal mass of mode k.
The residues, as appearing in Equation 15-1 or 15-2, have the same dimension
as the measurement data. As an aside, it is important to note that residues have
a dimension. Residues are composed of a product of mode shape coefficients
and a scaling constant (Equation 15-5). The mode shape coefficients by
themselves do not have any dimension, nor absolute (or scaled) magnitude.
Dimension, and therefore units, will be viewed as attributes of the scaling constant.
Finally, for multiple input analysis, the residues are written in factored form as
the product of mode shapes with modal participation factors. Again, the product
of the factors has a dimension and absolute magnitude. Formally, the mode
shape coefficients will again be considered as without dimension, and therefore
units will be viewed as attributes of the residues.
Under the assumption that the data have the dimension of displacement over
force, the FRF equation 15-2 can be simplified to equation 15-7.
h_ij(jω) = Σ_{k=1..N} [ r_ijk / (jω − λ_k) + r*_ijk / (jω − λ*_k) ]    Eqn 15-7

When only the modes between k_min and k_max are retained,

h_ij(jω) = Σ_{k=k_min..k_max} [ r_ijk / (jω − λ_k) + r*_ijk / (jω − λ*_k) ] − lr_ij/ω² + ur_ij    Eqn 15-8

where lr_ij and ur_ij are the lower and upper residuals.
[Figure 15-3: FRF (d/f) versus frequency, showing the mass line with the lower residual lr_ij below the analysis band and the stiffness line with the upper residual ur_ij above it.]
h_ij(jω) ≈ r_ijk / (jω − λ_k)    Eqn 15-9
h_ij(t) = Σ_{k=1..N} [ r_ijk e^(λ_k t) + r*_ijk e^(λ*_k t) ]    Eqn 15-10
you will see that the pole values λ_k are independent of both the response and
the reference DOFs. In other words, the pole value λ_k is a characteristic of the
system and should be found in any function that is measured on the structure.
When applying parameter estimation techniques, one of two strategies can be
employed: making local or global estimates.
[H(t)] = Σ_{k=1..N} [ [R_k] e^(λ_k t) + [R*_k] e^(λ*_k t) ]    Eqn 15-11

where
where
[H] = (No, Ni) matrix with h_ij as elements
[R_k] = (No, Ni) matrix with r_ijk as elements
Equation 15-5 can be used to express the residue matrix in factored form,

[R_k] = a_k {V}_k <V>_k

where
{V}_k = No vector (column) with mode shape coefficients at the output DOFs
<V>_k = Ni vector (row) with mode shape coefficients at the input DOFs
If DOFs i and j are both output and input DOFs, then the above equation
implies Maxwell-Betti reciprocity. This assumption is not essential however,
since the residue matrix can be expressed in a more general form,
[R_k] = {V}_k <L>_k, with <L>_k a row vector of modal participation factors.
Using the factored form of the residue matrix, equation 15-11 can be written as,
[H(t)] = Σ_{k=1..N} [ {V}_k <L>_k e^(λ_k t) + {V*}_k <L*>_k e^(λ*_k t) ]    Eqn 15-15
If just the data between any output DOF and all input DOFs are considered
then
<H(t)>_i = Σ_{k=1..N} [ v_ik <L>_k e^(λ_k t) + v*_ik <L*>_k e^(λ*_k t) ]    Eqn 15-16
where
It is essential in the model of equation 15-16 that both the poles and the modal
participation factors are independent of the output DOF. In other words, in this
formulation these become the characteristics of the system. For two coupled
modes with λ₁ ≈ λ₂, the response data relative to an input DOF j shows a
combination of the coupled modes and not the individual modes. The
combination coefficients for the modes are the modal participation factors
l_1j and l_2j.
The only difference between these last two equations is the modal participation
factors l_1l and l_2l. If they are linearly independent of the modal participation
factors for input i, then the modes will appear in a different combination in the
response data relative to input l. As a multiple input parameter estimation
technique analyses data relative to several inputs simultaneously, and the modal
participation factors are identified, it is possible to detect highly
coupled or repeated modes.
For modal parameter estimation applications with the data measured in the
frequency domain, introducing the sampled nature of the data transforms the
equation for the model to

h_ij,n = Σ_{k=1..N} [ r_ijk / (jω_n − λ_k) + r*_ijk / (jω_n − λ*_k) ]    Eqn 15-21
where
A frequency domain parameter estimation method uses data directly in the
frequency domain to estimate modal parameters. It is therefore irrelevant
whether the frequency lines are equally spaced or not. Such methods are based
directly on the model expressed by equation 15-21.
If the data are sampled at equally spaced frequency lines, then the FRF can be
transformed back to the time domain to obtain a corresponding Impulse
Response (IR). A Fast Fourier Transform (FFT) algorithm is used for this
transformation, but the restriction that the number of frequency lines be equal to a
power of 2 (e.g. 32, 64, 128...) no longer applies. After transformation, a series
of equally spaced samples of corresponding impulse response functions is
obtained. A time domain parameter estimation technique allows you to analyze
such equally spaced time samples to estimate modal parameters.
In practice, a variety of conditions mean that the frequency band over which
data is analyzed is smaller than the full measurement band. This is illustrated
in Figure 15-4.
[Figure 15-4: FRF h_ij; the analysis band (ω_min, ω_max) covers only part of the measurement band.]
The analysis frequency band includes only three modes whereas the
measurement band includes five. If the data is transformed from frequency to time
domain, then the time increment between samples will be determined by the
analysis frequency band and not the measurement band. If the frequency band of
analysis is bounded by ω_max and ω_min then Δt is determined from

Δt = 2π / (2(ω_max − ω_min))    Eqn 15-22
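Equation 15-22 is a one-liner; the sketch below assumes angular frequencies in rad/s (the function name is ours):

```python
import math

def time_increment(omega_min, omega_max):
    """Sample spacing implied by the analysis band (Eqn 15-22)."""
    return 2 * math.pi / (2 * (omega_max - omega_min))
```

An analysis band of width π rad/s thus gives Δt = 1 s, regardless of the width of the measurement band.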
h_ij,n = Σ_{k=1..N} [ r_ijk e^(λ_k nΔt) + r*_ijk e^(λ*_k nΔt) ]    Eqn 15-23

or

h_ij,n = Σ_{k=1..N} [ r_ijk z_k^n + r*_ijk z_k^(*n) ]    Eqn 15-24

where z_k = e^(λ_k Δt).
Time domain parameter estimation methods are based on the model defined by
equation 15-24. They analyze h_ij,n to estimate z_k. λ_k is then calculated from
equation 15-25. Note however that this calculation is not unique, since
e^(λ_k nΔt) = e^((λ_k + j2π/Δt) nΔt). This implies that no poles outside a
frequency band of width 2π/Δt can be identified. In other words, with a time
domain parameter estimation method, all estimated poles are to be found in the
frequency band of analysis (ω_min, ω_max).
This may cause problems in estimating modal parameters if the data in the
frequency band of analysis is strongly influenced by modes outside this band
(residual effects). Since with frequency domain methods λ_k is estimated directly,
no such limitation arises. A frequency domain technique may therefore
sometimes be preferred over a time domain technique for analyzing data over a
narrow frequency band where residual effects are important.
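The frequency folding described above can be demonstrated numerically. The sketch below (our names) recovers λ_k = ln(z_k)/Δt and shows a pole outside the band coming back shifted by 2π/Δt:

```python
import cmath

def pole_from_z(z, dt):
    """lambda = ln(z)/dt; the imaginary part is only known modulo 2*pi/dt."""
    return cmath.log(z) / dt

# A pole whose frequency lies outside the identifiable band...
dt = 0.1
true_pole = -1 + 70j
z = cmath.exp(true_pole * dt)
aliased = pole_from_z(z, dt)
# ...comes back with its frequency shifted by 2*pi/dt; the damping is unaffected.
```

Here the imaginary part 70 rad/s folds to about 7.17 rad/s, while the real part −1 is recovered exactly.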
with
M_S, C_S, K_S the structural mass, damping and stiffness matrices
{f} the externally applied forces
{l_p} the acoustical pressure loading vectors
In the fluid, the indirect acoustical formulation states:
with
M_f, C_f, K_f matrices describing the pressure-volume acceleration
ω²{l_f} the acoustical pressure loading vectors
Combining these equations with

{l_f} = ∫_Sb {N} ẍ dS    Eqn 15-30

leads to the coupled vibro-acoustic system

( [K_S K_c; 0 K_f] + iω [C_S 0; 0 C_f] − ω² [M_S 0; M_c M_f] ) {x; p} = {f; q}    Eqn 15-31
Selection of a method
SDOF
Single degree of freedom curve fitters are rough and ready and will give you a
quick impression of the most dominant modes (frequency, damping and mode
shapes) influencing a structure under test. As such they are useful in checking
the measurement setup and can help assess:
V whether the accelerometers are correctly labelled with their node and
direction;
V whether all the nodes are instrumented.
For this purpose it is recommended to identify real modes since these are the
easiest to interpret when displayed.
The circle fitter gives the most accurate estimates of the SDOF techniques, but
may create large errors on nodal points of the mode shapes.
Complex MIF
This method can be used in the same way as the SDOF techniques to give you
an idea of the most dominant modes and check the test setup. It has the
advantage that multiple input FRFs can be used and the mode shape estimates
are of a higher quality. Furthermore, it can extract a modal model that includes
the most dominant modes in a particular frequency band.
Time domain MDOF
This is the most general purpose parameter estimation technique and is
probably the standard tool used in modal analysis. It provides a complete and
accurate modal model from MIMO FRFs. Its major weakness appears when
analyzing heavily damped systems where the damping is greater than 5%, such
as in the case of a fully equipped car.
Frequency domain MDOF
The Frequency Domain Direct Parameter technique provides similar results to
the Time domain technique described above in terms of accuracy, but is
generally slower. It is weak when dealing with lightly damped systems (damping
less than 0.3%) but performs better on heavily damped ones, thus
complementing the other MDOF technique. Since it operates in the frequency
domain it is able to analyze FRFs with an unequally spaced frequency axis.
A corresponding estimate of the damping can be found with the 3 dB rule. The
frequency values ω₁ and ω₂, on both sides of the peak of the FRF, at which the
amplitude is 1/√2 of the peak amplitude (3 dB down), are introduced in the
formula in equation 15-33 to yield the critical damping ratio. The method is also
illustrated in Figure 15-5 below. ω₁ and ω₂ are also called half power points.

ζ = (ω₂ − ω₁) / (2 ω_r)    Eqn 15-33
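The half power rule is a one-line computation; the sketch below works in Hz or rad/s, as long as the units are consistent (the function name is ours):

```python
def half_power_damping(w1, w2, wr):
    """Critical damping ratio from the half power points (Eqn 15-33)."""
    return (w2 - w1) / (2.0 * wr)
```

A mode at 10 Hz with half power points at 9.9 and 10.1 Hz thus gives ζ = 0.01, i.e. 1% of critical damping.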
[Figure 15-5: FRF amplitude |h_ij| in dB versus frequency; the half power points ω₁ and ω₂ lie 3 dB below the peak at ω_r.]
Since the curve fitter locates the resonance frequency on a spectral line,
significant errors can be introduced if the FRF has a low frequency resolution and the
peaks of modes fall between two spectral lines. This can be compensated for by
extrapolating the slopes on either side of the picked line to determine the
amplitude of the FRF more precisely.
It may be necessary to deal with the situation when one of the half power
points is not found. This may arise when the frequency of one mode is close to
that of another mode, or it is near to the ends of the measured frequency range.
Note! Peak picking is a single DOF method: it is therefore only suitable for data with
well separated modes.
As this method yields local estimates, it requires only one data record to obtain
frequency and damping values for all modes. However, if several data records
are available, it may be that different records identify different modes.
If you assume that the modes are uncoupled and lightly damped, the modal
amplitude can be computed from the peak quadrature or peak amplitude of the
FRF. With this assumption, the data in the neighborhood of the resonant
frequency can be approximated by

h_ij,n ≈ r_ijk / (jω_n − λ_k)    Eqn 15-34

so that at the resonant frequency ω_n = ω_dk the modal amplitude is

|h_ij| ≈ r_ijk / σ_k    Eqn 15-35

Note that from the modal amplitude a residue or mode shape estimate is
obtained by multiplying by the modal damping.
To use the Mode picking method you must have an estimate of ω_dk. This
estimate can be obtained with the Peak picking method (see section 15.3.1) or other
techniques.
The Mode Picking method is obviously quite sensitive to frequency shifts in the
data. If, for example, the resonant frequency of a mode in a data record is
shifted a few spectral lines with respect to the frequency that is used as
resonant frequency for that mode, then the modal amplitude would be erroneously
picked. To accommodate situations where frequency shifts occur, you need to
specify an allowed frequency shift around the resonant frequencies ω_dk that are
used to calculate the modal amplitudes. Rather than picking the modal
amplitude at the resonant frequencies, the method then scans a band around each
modal frequency for each data record. The maximum amplitude in this band is
used to determine the modal amplitude and thus the mode shape coefficient.
Mode picking allows you to make a very quick determination of a modal
model. The accuracy of this model however depends on how well the assumptions
of the method apply to the data.
h_ij,n = r_ijk / (jω_n − λ_k) + r*_ijk / (jω_n − λ*_k)    Eqn 15-36

In the neighborhood of a resonance this can be written as

h_n = (U + jV) / (σ + j(ω_n − ω_d)) + R + jI    Eqn 15-37
[Figure: the FRF h_n traces a circle in the (Re(h), Im(h)) plane; the circle has diameter √(U² + V²)/σ_d, offset (R, I) and orientation angle arctan(V/U).]
Figure 15-6 Relation between circle fitting parameters and modal parameters
Having determined the natural frequency and assuming a lightly damped
system, the damping is given by equation 15-38.

σ_d = (ω₂ − ω₁) / (tan(θ₁/2) + tan(θ₂/2))    Eqn 15-38

The circle diameter and orientation angle follow as

√(U² + V²) / σ_d    Eqn 15-39

arctan(V / U)    Eqn 15-40
The FRF matrix of a system with No (output) and Ni (input) degrees of freedom
can be expressed as follows
[H(ω)] = Σ_{r=1..2N} {V}_r ( Q_r / (jω − λ_r) ) {L}_r^T    Eqn 15-41
Or in matrix form as
where
Taking the singular value decomposition of the FRF matrix at each spectral line
results in

[H(ω)] = [U] [Σ] [V]^H    Eqn 15-43

where
[U] = the left singular matrix, corresponding to the matrix of mode shape vectors
[V] = the right singular matrix, corresponding to the matrix of modal participation vectors
In comparing equations 15-42 and 15-43, the mode shape and modal
participation vectors in equation 15-42 are, through the singular value decomposition,
scaled to be unitary vectors, and the mass matrix in equation 15-43 is assumed
to be an identity matrix, so that the orthogonality of the modal vectors is still
satisfied.
For any one mode, the natural frequency is the one where the maximum
singular value occurs.
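Computing CMIFs amounts to a singular value decomposition per spectral line. A minimal sketch with NumPy (the array layout is our choice, not the manual's):

```python
import numpy as np

def cmif(frf):
    """CMIFs: singular values of the No x Ni FRF matrix at each line.

    frf has shape (n_freq, No, Ni); returns shape (n_freq, min(No, Ni)).
    """
    return np.array([np.linalg.svd(h, compute_uv=False) for h in frf])
```

At a spectral line where the FRF matrix has one dominant rank-one contribution, the first CMIF peaks while the others stay small.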
[Figure: CMIF plotted on a logarithmic scale (1, .1, .01, .001) versus frequency.]
When the frequencies have been selected, equations 15-43 and 15-44 can be
used to yield the complex conjugate of the modal participation factors [V],
and the as yet unscaled mode shape vectors [U].
The unscaled mode shape vectors and the modal participation factors are used
to generate an enhanced FRF for each mode (r), defined by
H^E_r(ω) = {U}_r^H [H(ω)] {V}_r    Eqn 15-46
Since the mode shape vectors and modal participation factors are normalized
to unitary vectors by the singular value decomposition, the enhanced FRF is
actually the decoupled single mode response function
H^E_r(ω) = Q_r / (jω − λ_r)    Eqn 15-47
A single degree of freedom method (such as the circle fitter technique) can now
be applied to improve the accuracy of the natural frequency estimate and then
to extract damping values and the scaling factor for the mode shape.
[Figure: CMIF on a logarithmic scale and the corresponding enhanced FRF amplitude for one mode, both versus frequency.]
One CMIF can be calculated for each reference DOF. They can be sorted in
terms of the magnitude of the eigenvalues, and they can all be plotted as a
function of frequency as shown in the example in Figure 15-9.
[Figure 15-9: CMIF_1 and CMIF_2 plotted on a logarithmic scale versus frequency.]
At any one frequency these functions will indicate how many significant
independent phenomena are taking place, as well as their relative importance.
At a resonance, at least one CMIF will peak, implying that at least one mode is
active. At a different frequency however it may be that a different mode has
increased its influence and is the major contributor to the response. Between
resonances, a cross over point can occur where the contributions of two modes are
equal. This can result in a higher order CMIF exhibiting peaks if they are
sorted as shown in Figure 15-9, and in the effect of one CMIF exhibiting a dip at
the same time as a lower order function is exhibiting a peak.
A check on peaks in the second order CMIF functions can be made to
determine whether or not they are due to the cross over effect or a genuine pole of
second order. This is done by calculating the MAC matrix using data on either
side of the frequency of interest.
[Figure: MAC matrices between the mode shapes at frequencies a and b on either side of a peak in CMIF_2.]

When this MAC matrix approximates a unity matrix,

[ ≈1  ≈0 ]
[ ≈0  ≈1 ]

then the peak in CMIF_2 represents a resonance peak: the mode is not changing
between frequencies a and b. When the MAC matrix is anti-diagonal,

[ ≈0  ≈1 ]
[ ≈1  ≈0 ]

then the peak in CMIF_2 represents a cross over point: the mode is switching
between frequencies a and b.
Peak picking can be facilitated by using tracked CMIFs. This alters the display
of the CMIFs: when the mode shapes represented by the two CMIFs are
switched, the CMIFs are also switched. This is determined by the cross over
check described above.
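The MAC matrix used in this cross over check can be sketched as follows (columns are shape vectors; the naming is ours):

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two sets of shape vectors (columns)."""
    num = np.abs(phi_a.conj().T @ phi_b) ** 2
    den = (np.sum(np.abs(phi_a) ** 2, axis=0)[:, None]
           * np.sum(np.abs(phi_b) ** 2, axis=0)[None, :])
    return num / den
```

Identical shape sets on either side of the peak give a near-unity diagonal; swapped shapes give the anti-diagonal pattern of a cross over.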
[Figure: tracked CMIFs plotted on a logarithmic scale versus frequency.]
To understand how the method works, recall the expression for an impulse reĆ
sponse (IR) given below
h_ij(t) = Σ_{k=1..N} [ r_ijk e^(λ_k t) + r*_ijk e^(λ*_k t) ]    Eqn 15-48
A particular problem when trying to work with equation 15-48 to achieve the
above objective is that it contains residues r_ijk which do depend on the response
and reference DOFs. It is therefore essential to define another parametric
model for the data h_ij, in which the coefficients are independent of response and
reference DOFs and can be used to identify estimates for λ_k. It can be proved that
such a model takes the form of a linear differential equation of order 2N with
constant real coefficients
(d/dt)^(2N) f(t) + a_1 (d/dt)^(2N−1) f(t) + … + a_2N f(t) = 0    Eqn 15-50

whose characteristic equation

λ^(2N) + a_1 λ^(2N−1) + … + a_2N = 0    Eqn 15-51

has as solutions λ_k, λ*_k, k = 1 … N.
Turning the reasoning around therefore, one could first try to estimate the
coefficients in equation 15-49 using all available data. Estimates of the complex
exponential coefficients λ_k can then be found by solving equation 15-51.
z_k = e^(λ_k Δt)

Instead of damped complex exponentials, the characteristics are now power
series with base numbers z_k.

z_k^(2N) + a_1 z_k^(2N−1) + … + a_2N = 0    Eqn 15-54
The Least Squares Complex Exponential is a method that estimates the
coefficients in equation 15-53 using data measured on the system.
In principle any data record h_ij,n can be used. Applying the method to just a
single data record at a time will result in local estimates of the poles.
To estimate the coefficients in equation 15-53 in a least squares sense, the
equations for all possible time points and all possible response and reference DOFs
are to be solved simultaneously, as indicated in equation 15-55. This equation
system will be greatly overdetermined. To find the least squares solution the
normal equations technique can be applied, so that the final solution is
calculated from a compact equation with a square coefficient matrix, equation 15-56.
The coefficient matrix in this equation is called a covariance matrix.
where

r_k,l = Σ_{i=1..No} Σ_{j=1..Ni} Σ_{n=1..Nt} h_ij,n+k h_ij,n+l    Eqn 15-57
Building this covariance matrix is the first stage in applying the Least Squares
Complex Exponential method. This phase is usually the most time consuming,
since all the available data is used to build the inner products expressed by
equation 15-57.
Note that after solving equation 15-56, all that is required to calculate the
estimates of modal frequency and damping is to substitute the estimated
coefficients in equation 15-54 and to solve for z_k.
The solution of equation 15-56 results in least squares estimates of the
coefficients in the model expressed by equation 15-53. It is also possible therefore to
calculate the corresponding least squares error. This error is of importance in
determining the minimum number of modes in the data.
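For a single IR, the whole chain — a least squares fit of the difference model, then the roots of equation 15-54 — fits in a few lines. This is only a sketch with our own names; the production method instead builds the covariance matrix of equation 15-57 over all data records:

```python
import numpy as np

def lsce_poles(h, dt, n_modes):
    """Fit h[n+2N] + a1*h[n+2N-1] + ... + a2N*h[n] = 0, then solve for poles."""
    m = 2 * n_modes
    A = np.array([h[n:n + m][::-1] for n in range(len(h) - m)])
    coeffs, *_ = np.linalg.lstsq(A, -h[m:], rcond=None)
    z = np.roots(np.concatenate(([1.0], coeffs)))   # Eqn 15-54
    return np.log(z) / dt                           # z_k = exp(lambda_k * dt)
```

On a noise-free synthetic IR with a single mode, the pole pair is recovered accurately.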
In the preceding discussion it has been assumed that N modes are present in
the data. However, the number of modes contained in the data is in fact
unknown. It is preferable that this should be determined by the method itself.
Using the Least Squares Complex Exponential method, this can be achieved by
observing the evolution of the least squares error on the solutions of equation
15-56 as a function of the number of assumed modes.
[ r_1,1  r_1,2 ] { a_1 }     { r_1,0 }
[   .    r_2,2 ] { a_2 } = − { r_2,0 }

When 2 modes are assumed in the data, the subset to be solved is the
corresponding 4 × 4 subset of the covariance matrix.
[Figure: least squares error versus number of modes (1–7), with and without noise on the data.]
To determine the optimal number of modes you could try to compare
frequency and damping estimates that are calculated from models with various
numbers of modes. Physical intuition would lead you to expect that estimates of
frequency and damping corresponding to true structural modes should recur (in
approximately the same place) as the number of modes is increased.
Computational modes will not reappear with identical frequency and damping. A
diagram that shows the evolution of frequency and damping as the number of
modes is increased is called a Stabilization diagram. The optimal number of
modes can then be seen as the number for which the frequency and damping
values of the physical modes no longer change significantly. In other words,
those which have stabilized.
[Figure: stabilization diagram; symbols (s, v, f, d, o) mark poles versus frequency as the number of modes is increased, overlaid on an amplitude curve.]
Example
Let two data records be measured on a system, both shown in Figure 15-13.
[Figure 15-13: the two measured data records h11 and h21 plotted against time t, with samples at t = 0 … 4.]
Let four data samples be measured of which the values are listed in the Table
below.
n    h11    h21
0     1      0
1     0      1
2    -1      0
3     0     -1
Consider a model for 1 mode (N=1). Equations 15-55 and 15-56 become
respectively

[  0   1 ]           { 1 }
[ -1   0 ] { a_1 }   { 0 }
[  1   0 ] { a_2 } = { 0 }
[  0   1 ]           { 1 }

[ 2  0 ] { a_1 }   { 0 }
[ 0  2 ] { a_2 } = { 2 }
The solution is therefore a_1 = 0, a_2 = 1. Now equation 15-54 is used to calculate z_k
and so λ_k,

z² + 1 = 0
z = ±j

and since z = e^(λΔt),

z = +j:  σ = 0,  ω_d = π/(2Δt)
z = −j:  σ = 0,  ω_d = −π/(2Δt)

The solution indicates a mode with a period 4Δt and zero damping. This is
compatible with the trend of the data shown in Figure 15-13.
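The numbers in this example can be checked mechanically. The script below assembles the same equations and normal equations in pure Python (variable names are ours):

```python
# Data records from the example: four samples each.
h11 = [1.0, 0.0, -1.0, 0.0]
h21 = [0.0, 1.0, 0.0, -1.0]

# Rows of equation 15-55: a1*h[n+1] + a2*h[n] = -h[n+2], for both records.
rows, rhs = [], []
for h in (h11, h21):
    for n in range(2):
        rows.append((h[n + 1], h[n]))
        rhs.append(-h[n + 2])

# Normal equations (15-56): form A^T A and A^T b, then solve the 2x2 system.
s11 = sum(r[0] * r[0] for r in rows)
s12 = sum(r[0] * r[1] for r in rows)
s22 = sum(r[1] * r[1] for r in rows)
b1 = sum(r[0] * y for r, y in zip(rows, rhs))
b2 = sum(r[1] * y for r, y in zip(rows, rhs))
det = s11 * s22 - s12 * s12
a1 = (s22 * b1 - s12 * b2) / det
a2 = (s11 * b2 - s12 * b1) / det
# a1 = 0, a2 = 1  ->  z**2 + 1 = 0, z = +/-j: zero damping, period 4*dt.
```

The solution reproduces a_1 = 0 and a_2 = 1 exactly, as in the worked example.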
The basis for the Multiple Input Least Squares Complex Exponential method is
the model of the data introduced in section 15.2.3 equation 15-16.
<H(t)>_i = Σ_{k=1..N} [ v_ik <L>_k e^(λ_k t) + v*_ik <L*>_k e^(λ*_k t) ]    Eqn 15-58

where
<H(t)>_i = Ni vector (row) of IRs between output DOF i and all input DOFs
are independent of the particular response DOF. It should therefore be possible
to estimate these coefficients using all the available data simultaneously.
Introducing firstly the sampled nature of the data, equation 15-58 is rewritten
as,

<H_n>_i = Σ_{k=1..N} [ v_ik <L>_k z_k^n + v*_ik <L*>_k z_k^(*n) ]    Eqn 15-59
It can be proved that if the data can be described by equation 15-59, it can also
be described by the following model of finite difference equations

<H_{n+p}>_i + <H_{n+p−1}>_i [A_1] + … + <H_n>_i [A_p] = 0    Eqn 15-60

<L>_k [ z_k^p [I] + z_k^(p−1) [A_1] + … + [A_p] ] = 0    Eqn 15-61

p Ni ≥ 2N    Eqn 15-62
(The proof of this follows from basic calculus along the same lines as for Least
Squares Complex Exponential in section 15.3.5).
The condition expressed by equation 15-61 states that the terms <L>_k z_k^n
are characteristic solutions of this system of finite difference equations. As
equation 15-59 is a superposition of 2N of such terms, it is essential that the
number of characteristic solutions of this system of equations, p Ni, at least equals
2N, as expressed by equation 15-62.
Note finally that if data for each reference DOF is treated individually, i.e. Ni =
1, then equations 15-60 and 15-61 simplify to equations 15-53 and 15-54. Thus
the least squares complex exponential method is a special case of the multiple
input least squares complex exponential method.
Assembled for all available time samples n and all response DOFs i, equation
15-60 gives the greatly overdetermined system

[ <H_{p−1}>   …   <H_0>   ] [ [A_1] ]     [ <H_p> ]
[     ⋮                ⋮    ] [   ⋮   ] = − [   ⋮    ]    Eqn 15-63
[ <H_{N−1}>  …  <H_{N−p}> ] [ [A_p] ]     [ <H_N> ]
where

[R_k,l] = Σ_{i=1..No} Σ_n [H_{n+k}]_i^T [H_{n+l}]_i    Eqn 15-64

and the normal equations become

[ [R_1,1]  [R_1,2]  …  [R_1,p] ] [ [A_1] ]     [ [R_1,0] ]
[    .     [R_2,2]  …  [R_2,p] ] [ [A_2] ] = − [ [R_2,0] ]    Eqn 15-65
[    .        .     …  [R_p,p] ] [ [A_p] ]     [ [R_p,0] ]
The order (p) of the finite difference equation is related to the number of modes
in the data by equation 15-62. It is preferable that this be determined by the
method itself. As the coefficients of the finite difference equation are solved for
in a least squares sense, this can be done by observing the least squares error as
a function of the assumed order. As an order is reached such that the model
can describe as many modes as are present in the data, the error should drop
considerably.
Due to the condition expressed by equation 15-62 there is no linear relation
between the number of modes that can be described by the model and the order
of the model. The relation between the number of modes, the order of the
model and the number of reference DOFs is listed in Table 15.2. It can be seen
that a model of order 8 can describe 11 or 12 modes if data for 3 inputs are
analyzed simultaneously. In the error diagrams therefore the same least squares
error is shown for 11 and 12 modes.
Example
To clarify the method, consider again the example discussed on page 248. Let
the example system satisfy reciprocity, so that h12 is also equal to h21. The
vector [h12 h21] then represents the data between response DOF 1 and reference
DOFs 1 and 2.
Setting up the finite difference equations for these data and solving for the symmetric coefficient matrix

$$[A] = \begin{bmatrix} a_{11} & a_{12}\\ a_{12} & a_{22} \end{bmatrix}$$

yields the characteristic roots $z_{1,2} = \lambda_1, \lambda_2$ and the corresponding modal participation vector

$$L = [\,\mp j,\ 1\,]$$
Notice that the solution for the frequency and damping is the same as that found with the Least Squares Complex Exponential method (see page 249). In addition you also find an estimate of the modal participation factors. For this example they indicate that there should be a phase difference of 90° in the system response between excitation from reference DOFs 1 and 2, as h11 is a cosine and h12 a sine. This estimate seems to be correct.
$$h_{ij}(\omega) = \sum_{k=1}^{N}\left(\frac{r_{ijk}}{j\omega - \lambda_k} + \frac{r^{*}_{ijk}}{j\omega - \lambda^{*}_{k}}\right) \qquad\text{Eqn 15-66}$$
If estimates of the modal frequency and damping are available, then the residues appear linearly as unknowns in this model.
$$h_{ij}(\omega_p) = \sum_{k=1}^{N}\left(\frac{r_{ijk}}{j\omega_p - \lambda_k} + \frac{r^{*}_{ijk}}{j\omega_p - \lambda^{*}_{k}}\right) - \frac{lr_{ij}}{\omega_p^{2}} + ur_{ij} \qquad\text{Eqn 15-67}$$

where lr_ij and ur_ij are respectively the lower and upper residuals between response DOF i and reference DOF j.
These are illustrated in Figure 15-3. Note that the residues as well as the lower and upper residuals are local characteristics; in other words, they depend on the particular response and reference DOF.
The Least Squares Frequency Domain method is based on the model expressed by equation 15-67. Least squares estimates of the residues and of the lower and upper residuals are calculated by analyzing all data values in a selected frequency range.
$$[H_i(\omega)] = \sum_{k=1}^{N}\left(\frac{[R_k]_i}{j\omega - \lambda_k} + \frac{[R_k]^{*}_{i}}{j\omega - \lambda^{*}_{k}}\right) - \frac{[LR]_i}{\omega^{2}} + [UR]_i \qquad\text{Eqn 15-68}$$

where
[UR]_i = upper residuals between response DOF i and all reference DOFs, a vector of dimension Ni
[LR]_i = lower residuals between response DOF i and all reference DOFs, a vector of dimension Ni
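Since equation 15-67 is linear in the residues and residuals once the poles are known, the LSFD step reduces to a single linear least squares solve. The following Python sketch illustrates this; the poles, residues, residuals and frequency axis are invented for the example.

```python
import numpy as np

# Known poles (e.g. from a time domain method); residues/residuals unknown
lam = np.array([-0.5 + 2j * np.pi * 5, -0.8 + 2j * np.pi * 12])
r_true = np.array([0.3 - 0.1j, 0.2 + 0.05j])
lr_true, ur_true = 2.0, 5e-4

w = 2 * np.pi * np.linspace(2, 20, 200)      # frequency axis [rad/s]
jw = 1j * w
H = sum(r / (jw - l) + np.conj(r) / (jw - np.conj(l))
        for r, l in zip(r_true, lam)) - lr_true / w**2 + ur_true

# The model is linear in (Re r_k, Im r_k, lr, ur): build the columns
cols = []
for l in lam:
    cols.append(1 / (jw - l) + 1 / (jw - np.conj(l)))     # d/d(Re r_k)
    cols.append(1j / (jw - l) - 1j / (jw - np.conj(l)))   # d/d(Im r_k)
cols.append(-1 / w**2)                                    # lower residual
cols.append(np.ones_like(w))                              # upper residual
A = np.column_stack(cols)

# Stack real and imaginary parts so the unknowns stay real-valued
theta, *_ = np.linalg.lstsq(np.vstack([A.real, A.imag]),
                            np.concatenate([H.real, H.imag]), rcond=None)
```

For this noiseless data the solution vector reproduces the residues and the lower and upper residuals exactly.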
Theoretical background
The basis of the FDPI method is the second order differential equation for mechanical structures

$$[M]\,\ddot y(t) + [C]\,\dot y(t) + [K]\,y(t) = f(t) \qquad\text{Eqn 15-70}$$

When transformed into the frequency domain, this equation can be reformulated in terms of measured FRFs

$$\left[(j\omega)^{2}[I] + j\omega\,[A_1] + [A_0]\right] H(\omega) = j\omega\,[B_1] + [B_0] \qquad\text{Eqn 15-71}$$

where

ω = frequency variable

Note that for the single input case, the H(ω) matrix becomes a column vector of frequency dependent FRFs.
Equation 15-71 is valid for every discrete frequency value ω. When these equations are assembled for all available FRFs, including multiple input - multiple output test cases, the unknown matrix coefficients A0, A1, B0 and B1 can be estimated from the measurement data H(ω). Equation 15-71 thus means that the measurement data H(ω) can be described by a second order linear model with constant matrix coefficients. From the identified matrices, the system's poles and mode shapes can be estimated via an eigenvalue and eigenvector decomposition of the system matrix.
$$\begin{bmatrix} [0] & [I]\\ -[A_0] & -[A_1] \end{bmatrix}\,[\Theta] = [\Theta]\,[\Lambda] \qquad\text{Eqn 15-72}$$

This will yield the diagonal matrix [Λ] of poles and a matrix [Θ] of eigenvectors. It will become clear from the following section that the matrix [Θ] thus obtained is not equal to the matrix of mode shapes, although it is related to it.
In a final step, the modal participation factors are estimated from another least squares problem, using the obtained [Λ] and [Θ] matrices.
Data reduction
Prior to estimating the system matrix, all available data are condensed via a projection on their principal components. For all response stations, a maximum of Nm principal components are first calculated and then analyzed. The obtained matrix [Θ] then represents the modal matrix for this set of fictitious response stations.
Out-of-band modes can be accounted for by residual terms of the form

$$\omega^{-2}[C_{-2}] + \omega^{-1}[C_{-1}] + [C_0] + \omega\,[C_1] + \omega^{2}[C_2]$$

The presence of these residual terms will influence the estimates for frequency, damping and mode shapes (as well as the modal participation factors for multiple input analysis).
In principle the method requires that the number of response stations N0 is at least equal to the number of modes Nm. However, using a similar approach as for the time domain LSCE method, it is possible to create so-called "pseudo" degrees of freedom from the measurements that are available, thus generating enough "new" measurements to allow a full identification on as few as one measurement.
Normal modes
From the meaning of the matrices [A0] and [A1] and the eigenvalue problem (15-72), it is possible to estimate damped (generally complex) mode shapes, or undamped real normal modes.
Normal modes can be identified via the FDPI technique by solving an eigenvalue problem for the reduced mass and stiffness matrices only

$$[M]^{-1}[K]\,\{\psi\}_n = \omega_n^{2}\,\{\psi\}_n \qquad\text{Eqn 15-74}$$
This eigenvalue problem is closely related to the one that is solved by FEM software packages that ignore the damping contribution in a system. This is an entirely different approach from the one used to estimate real modes via the LSFD technique. The latter technique estimates the real-valued mode shape coefficients that curve-fit the data set in a best least squares sense (proportional damping assumed), while the FDPI method uses an FEM-like approach.
Damping values are computed by applying a circle-fitter to enhanced FRFs for
each mode. The enhanced FRFs are calculated by projecting the principal FRFs
on the reduced mode shapes.
V the polyreference LSCE estimator does not always work well when the number of references (inputs) is larger than about 3
V the method does not deliver confidence intervals on the estimated modal parameters.
$$\hat H_{oi}(f) = \frac{N_{oi}(f)}{D(f)} \qquad\text{Eqn 15-75}$$

for i = 1, ..., Ni and o = 1, ..., No
with

$$N_{oi}(f) = \sum_{j=0}^{n} \Omega_j(f)\,B_{oij}$$

and

$$D(f) = \sum_{j=0}^{n} \Omega_j(f)\,A_j$$
The polynomial basis functions Ωj(f) are given by $\Omega_j(f) = e^{\,i\,2\pi f\,T_s\,j}$ for a discrete-time model (with Ts the sampling period). The complex-valued coefficients Aj and Boij are the parameters to be estimated. The approach used to optimize the computation speed and memory requirements will first be explained for the Least Squares solver and then these results will be extrapolated to the ML estimator.
$$\sum_{j=0}^{n} \Omega_j(f)\,B_{oij} \;-\; \sum_{j=0}^{n} \Omega_j(f)\,H_{oi}(f)\,A_j \;\approx\; 0 \qquad\text{Eqn 15-76}$$

for i = 1, ..., Ni, o = 1, ..., No and f = 1, ..., Nf
Note that equation 15-76 can be multiplied with a weighting function Woi (f ).
The quality of the estimate can often be improved by using an adequate
weighting function.
As the elements in equation 15-76 are linear in the parameters, they can be reformulated as

$$\begin{bmatrix} X_1 & 0 & \cdots & 0 & Y_1\\ 0 & X_2 & \cdots & 0 & Y_2\\ \vdots & & \ddots & & \vdots\\ 0 & 0 & \cdots & X_{N_oN_i} & Y_{N_oN_i} \end{bmatrix} \begin{bmatrix} B_1\\ B_2\\ \vdots\\ B_{N_oN_i}\\ A \end{bmatrix} \approx 0$$

with

$$B_k = \begin{bmatrix} B_{oi0}\\ B_{oi1}\\ \vdots\\ B_{oin} \end{bmatrix},\qquad A = \begin{bmatrix} A_0\\ A_1\\ \vdots\\ A_n \end{bmatrix}$$

The Jacobian matrix

$$J = \begin{bmatrix} X_1 & 0 & \cdots & 0 & Y_1\\ 0 & X_2 & \cdots & 0 & Y_2\\ \vdots & & \ddots & & \vdots\\ 0 & 0 & \cdots & X_{N_oN_i} & Y_{N_oN_i} \end{bmatrix} \qquad\text{Eqn 15-77}$$
has Nf No Ni rows and (n+1)(No Ni +1) columns (with Nf >> n, where n is the
order of the polynomials). Because every element in equation 15-76 has been
weighted with Woi (f ), the Xk 's in equation 15-77 can all be different.
The ML equations
Assuming the different FRFs to be uncorrelated, the (negative) log-likelihood function reduces to

$$\ell_{ML}(\theta) = \sum_{o=1}^{N_o}\sum_{i=1}^{N_i}\sum_{f=1}^{N_f} \frac{\left|\hat H_{oi}(\theta, f) - H_{oi}(f)\right|^{2}}{\mathrm{var}\{H_{oi}(f)\}} \qquad\text{Eqn 15-78}$$
(a) solve $J_m^{H} J_m\,\delta_m = -J_m^{H}\,r_m$ for $\delta_m$
(b) set $\theta_{m+1} = \theta_m + \delta_m$

$$\mathrm{CRLB} \approx \left[J_m^{H} J_m\right]^{-1} \qquad\text{Eqn 15-79}$$
with Jm the Jacobian matrix evaluated in the last iteration step of the Gauss-Newton algorithm. As one is mainly interested in the uncertainty on the resonance frequencies and damping ratios, only the covariance matrix of the denominator coefficients is in fact required.
Hence, it is not necessary to invert the full matrix to obtain the uncertainty on
the poles (or on the resonance frequencies and the damping ratios).
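The Gauss-Newton steps (a) and (b) and the CRLB-style covariance of equation 15-79 can be illustrated on a toy nonlinear least squares problem. The exponential model and all numbers below are invented for the sketch; they are not the FRF model of the text.

```python
import numpy as np

# Toy Gauss-Newton iteration: fit y = a * exp(b * t) to noiseless data
t = np.linspace(0.0, 3.0, 50)
y = 2.0 * np.exp(-0.5 * t)               # "measured" data

def model(theta):
    a, b = theta
    return a * np.exp(b * t)

theta = np.array([1.5, -0.8])            # starting values
for _ in range(30):
    r = model(theta) - y                 # residual vector r_m
    a, b = theta
    J = np.column_stack([np.exp(b * t),              # d model / d a
                         a * t * np.exp(b * t)])     # d model / d b
    delta = np.linalg.solve(J.T @ J, -J.T @ r)       # step (a)
    theta = theta + delta                            # step (b)

cov = np.linalg.inv(J.T @ J)   # CRLB-style covariance approximation
```

The diagonal of `cov` plays the same role as the covariance matrix of equation 15-79: it bounds the uncertainty on the fitted parameters.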
The upper residual matrix can be decomposed by means of an SVD

$$[R_{upper\ residual}] = [U]\,[\Sigma]\,[V]^{T}$$

The mode shape values of the static compensation mode {ψ} are related to the left singular vectors, the singular values and the frequency value (ω0); the participation factor values (L) can then be derived from the mode shape values and the right singular vectors.
Chapter 16 Operational modal analysis
It is also the case that large in-operation data sets are measured anyway, for
level verification, operating field shape analysis and other purposes. Hence,
extending classical operating data analysis procedures with modal parameter
identification capabilities will allow a better exploitation of these data.
Finally, the availability of in-operation established models opens the way for in situ model-based diagnosis and damage detection. Hence, a considerable interest exists in extracting valid models directly from operating data.
Correlation of the operating data set with the modal database measured in the lab allows an assessment of the modes which are dominant for a particular operating condition. In the case of partially correlated inputs (e.g. road analysis), principal component techniques are employed to decompose the multi-reference problem into subsets of single reference problems, which can be analyzed in parallel. These decomposed sets of data can be fed to an animation program, to interpret the operational deflection shapes for each principal component as a function of frequency.
Theoretically, one could consider the case where the input forces are measured in such conditions, which means that conventional FRF processing and modal analysis techniques could be used. However, the Operational modal analysis software is aimed specifically at applications where the inputs cannot be measured, and works when only responses, such as acceleration signals, are available. The ideal situation is when the input has a flat spectrum.
Three methods are discussed, all of which use time domain correlation functions. These auto- and cross-correlation functions can be calculated directly from raw time data, or be derived from measured auto- and crosspowers by an inverse FFT.
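The equivalence of the two routes, direct correlation estimation and an inverse FFT of the power spectrum, can be sketched as follows (synthetic noise signal; zero padding avoids circular wrap-around):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 4096, 64
y = rng.standard_normal(N)

# Direct (biased) auto-correlation estimate for lags 0 .. K-1
R_direct = np.array([np.dot(y[k:], y[:N - k]) / N for k in range(K)])

# Same estimate as the inverse FFT of the power spectrum of the
# zero-padded record (zero padding makes the correlation linear)
Y = np.fft.fft(y, 2 * N)
R_ifft = np.fft.ifft(np.abs(Y) ** 2).real[:K] / N
```

Both arrays agree to machine precision; the same construction applies to cross-correlations between two different response signals.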
$$\{x_{k+1}\} = [A]\,\{x_k\} + \{w_k\}$$
$$\{y_k\} = [C]\,\{x_k\} + \{v_k\} \qquad\text{Eqn 16-1}$$
For p and q large enough, the matrices [A] and [C] are respectively the state
space matrix and the output matrix. Along with this model, the observability
matrix [Op ] of order p and the controllability matrix [Cq ] of order q are defined :
$$[O_p] = \begin{bmatrix} [C]\\ [C][A]\\ \vdots\\ [C][A]^{p-1} \end{bmatrix};\qquad [C_q] = \left[\,[G]\ \ [A][G]\ \ \cdots\ \ [A]^{q-1}[G]\,\right] \qquad\text{Eqn 16-2}$$
where $[G] = E\left[\{x_{k+1}\}\{y_k\}^{T}\right]$ and E[.] denotes the expectation operator. The matrices [Op] and [Cq] are assumed to be of rank 2Nm, where Nm is the number of system modes.
where σr is the damping factor and ωr the damped natural frequency of the r-th mode. The damping ratio ζr then follows as

$$\zeta_r = \frac{-\sigma_r}{\sqrt{\sigma_r^{2} + \omega_r^{2}}} \qquad\text{Eqn 16-5}$$
The mode shape {φ}r of the r-th mode at the sensor locations is the observed part of the system eigenvector {ψ}r of [Ψ], given by $\{\phi\}_r = [C]\,\{\psi\}_r$. The extracted mode shapes cannot be mass-normalized, as this would require measurement of the input force.
$$[R_k] = E\left[\{y_{k+m}\}\{y_m\}^{T}_{ref}\right] \qquad\text{Eqn 16-7}$$

$$[H_{p,q}] = \begin{bmatrix} [R_1] & [R_2] & \cdots & [R_q]\\ [R_2] & [R_3] & \cdots & [R_{q+1}]\\ \vdots & \vdots & & \vdots\\ [R_p] & [R_{p+1}] & \cdots & [R_{p+q-1}] \end{bmatrix} \qquad\text{Eqn 16-8}$$
Direct computation of the [Rk] from the model equations leads to the following factorization property

$$[H_{p,q}] = [O_p]\,[C_q] \qquad\text{Eqn 16-9}$$
Let [W1] and [W2] be two user-defined invertible weighting matrices of size pNresp and qNresp, respectively. Pre- and post-multiplying the Hankel matrix with [W1] and [W2] and performing an SVD on the weighted Hankel matrix gives the following
$$[W_1]\,[H_{p,q}]\,[W_2] = \left[\,[U_1]\ [U_2]\,\right] \begin{bmatrix} [S_1] & [0]\\ [0] & [0] \end{bmatrix} \begin{bmatrix} [V_1]^{T}\\ [V_2]^{T} \end{bmatrix} = [U_1]\,[S_1]\,[V_1]^{T} \qquad\text{Eqn 16-10}$$

where [S1] contains the n non-zero singular values in decreasing order, the n columns of [U1] are the corresponding left singular vectors and the n columns of [V1] are the corresponding right singular vectors.
On the other hand, the factorization property of the weighted Hankel matrix results in

$$[W_1]\,[H_{p,q}]\,[W_2] = \left([W_1][O_p]\right)\left([C_q][W_2]\right) \qquad\text{Eqn 16-11}$$

From equations 16-10 and 16-11, it can easily be seen that the observability matrix can be recovered, up to a similarity transformation, as

$$[O_p] = [W_1]^{-1}\,[U_1]\,[S_1]^{1/2} \qquad\text{Eqn 16-12}$$

The state space matrix [A] then follows from the shift structure of the observability matrix, $[O_{p-1}]\,[A] = [O_{p-1}^{\ \uparrow}]$, where [Op-1] is the matrix obtained by deleting the last block row of [Op] and [Op-1 ↑] is the matrix shifted upwards by one block row.
Different choices of weighting will lead to different stochastic subspace identification methods. Two particular choices for the weighting matrices give rise to the Balanced Realization and the Canonical Variate Analysis methods.
For the Balanced Realization method the weighting matrices are identity matrices

$$[W_1] = [I];\qquad [W_2] = [I] \qquad\text{Eqn 16-16}$$

so no weighting is involved. For the Canonical Variate Analysis method the weightings are derived from the Cholesky factors of the output correlation matrices

$$[\Lambda^{+}] = [L^{+}]\,[L^{+}]^{T};\qquad [\Lambda^{-}] = [L^{-}]\,[L^{-}]^{T} \qquad\text{Eqn 16-17}$$
With this weighting, the singular values in equation 16-10 correspond to the
so-called canonical angles. A physical interpretation of the CVA weighting is
that the system modes are balanced in terms of energy. Modes which are less
well excited in operational conditions might be better identified.
$$[\hat R_k] = \frac{1}{M}\sum_{m=0}^{M} \{y_{m+k}\}\{y_m\}^{T}_{ref} \qquad\text{Eqn 16-19}$$
The SVD of the weighted empirical Hankel matrix will then result in the following

$$[W_1]\,[\hat H_{p,q}]\,[W_2] = [\hat U_1]\,[\hat S_1]\,[\hat V_1]^{T} + [\hat U_2]\,[\hat S_2]\,[\hat V_2]^{T} \qquad\text{Eqn 16-20}$$

with

$$[\hat S_1] = \mathrm{diag}(\sigma_1 \cdots \sigma_n),\qquad \sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_n > 0$$
$$[\hat S_2] = \mathrm{diag}(\sigma_{n+1} \cdots \sigma_{pN_{resp}}),\qquad \sigma_{n+1} \geq \sigma_{n+2} \geq \cdots \geq \sigma_{pN_{resp}} \approx 0 \qquad\text{Eqn 16-21}$$

$$[\hat O_p] = [W_1]^{-1}\,[\hat U_1]\,[\hat S_1]^{1/2} \qquad\text{Eqn 16-22}$$
The remaining steps of the algorithm are similar to those described in equations
16-11 to 16-18, where theoretical quantities are replaced with empirical ones.
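A minimal numerical sketch of the subspace steps above, under the simplifying assumption that the correlations are known exactly from the factorization property $R_k = [C][A]^{k-1}[G]$ (the model matrices A, C, G are invented): build the block Hankel matrix, take an unweighted (BR-type) SVD, recover the observability matrix, and read the poles from the shift-structure estimate of [A].

```python
import numpy as np

# True discrete-time model: one mode, pole pair at angle 0.3 rad, radius 0.98
rho, ang = 0.98, 0.3
A = rho * np.array([[np.cos(ang), -np.sin(ang)],
                    [np.sin(ang),  np.cos(ang)]])
C = np.array([[1.0, 0.0],
              [0.3, 0.7]])          # 2 response stations
G = np.array([[0.5], [1.0]])        # assumed "next state / output" matrix

def R(k):
    """Correlations from the factorization property R_k = C A^(k-1) G."""
    return C @ np.linalg.matrix_power(A, k - 1) @ G

p = q = 5
H = np.block([[R(i + j + 1) for j in range(q)] for i in range(p)])

U, s, Vt = np.linalg.svd(H)
n = 2                                # model order (known in this sketch)
Op = U[:, :n] * np.sqrt(s[:n])       # observability matrix, up to similarity

nresp = C.shape[0]
A_id = np.linalg.pinv(Op[:-nresp]) @ Op[nresp:]   # shift invariance
poles = np.linalg.eigvals(A_id)
```

The identified state matrix equals the true [A] up to a similarity transformation, so its eigenvalues reproduce the pole pair exactly.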
Each decaying sinusoid has a damped natural frequency and damping ratio that is identical to that of a corresponding structural mode. Consequently, the classical modal parameter techniques using impulse response functions as input, like polyreference LSCE, the Eigensystem Realization Algorithm (ERA) and Ibrahim Time Domain, are also appropriate to extract the modal parameters from response-only data measured under operational conditions.
This technique is also referred to as NExT, standing for Natural Excitation Technique. An interesting remark is that the ERA method applied to correlation functions instead of impulse response functions is basically the same as the Balanced Realization method.
where $\lambda_r = e^{\mu_r\,\Delta t}$ and {L}r is a column vector of Nref multipliers which are constant for all response stations for the r-th mode. (Note that in conventional modal analysis, these constant multipliers are the modal participation factors.)
where [F1 ]...[Ft ] are coefficient matrices with dimension Nref x Nref .
In case the system has Nm physical modes, the order t in equation 16-24 should
be theoretically equal to 2Nm/Nref in order to find the 2Nm characteristic poles.
In practice, over specification of the model order will be needed.
Equation 16-25, which uses all response stations simultaneously, enables a global least squares estimate of the coefficient matrices [F1]...[Ft]. Overdetermination is also achieved by considering all available or selected time intervals. Once the coefficient matrices are known, equation 16-24 can be reformulated into a generalized eigenvalue problem resulting in Nref·t eigenvalues λr, yielding estimates for the system poles μr and the corresponding left eigenvectors {L}rT.
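For the single-reference case (Nref = 1) the generalized eigenvalue problem reduces to the eigenvalues of an ordinary companion matrix. The following sketch (all poles invented) builds the scalar coefficients from known discrete roots and recovers the continuous-time poles from the companion eigenvalues.

```python
import numpy as np

dt = 0.01
mu = np.array([-0.5 + 2j*np.pi*5, -0.5 - 2j*np.pi*5,
               -0.8 + 2j*np.pi*12, -0.8 - 2j*np.pi*12])   # system poles
z = np.exp(mu * dt)                                       # discrete roots

# Coefficients F1..Ft of z^t + F1 z^(t-1) + ... + Ft = 0 (Nref = 1, t = 4)
F = np.poly(z)[1:]            # np.poly returns [1, F1, ..., Ft]

t_ord = len(F)
companion = np.zeros((t_ord, t_ord), dtype=complex)
companion[0, :] = -F
companion[1:, :-1] = np.eye(t_ord - 1)
z_est = np.linalg.eigvals(companion)
mu_est = np.log(z_est) / dt   # back to continuous-time poles
```

With matrix coefficients [F1]...[Ft] the same construction applies block-wise, giving Nref·t eigenvalues.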
$$X_{mn}(j\omega) = \sum_{r=1}^{N_m}\left(\frac{A^{r}_{mn}}{j\omega - \lambda_r} + \frac{A^{r*}_{mn}}{j\omega - \lambda^{*}_{r}} + \frac{B^{r}_{mn}}{-j\omega - \lambda_r} + \frac{B^{r*}_{mn}}{-j\omega - \lambda^{*}_{r}}\right) \qquad\text{Eqn 16-26}$$

where Xmn(jω) is the crosspower between the m-th response station and the n-th response station serving as a reference.
In the case of autopowers (m = n), A^r_mn equals B^r_mn. The residue A^r_mn is proportional to the m-th component of the mode shape {φ}r and the residue B^r_mn is proportional to the n-th component of the mode shape {φ}r. Consequently, by fitting the crosspowers between all response stations and one reference station, the complete mode shape can be derived.
The power spectra fitting step offers the advantage that not all responses need be included in the time-domain parameter extraction scheme; consequently, mode shapes of a large number of response stations can easily be processed by consecutively fitting the spectra. Additionally, it provides a graphical quality check by overlaying the actual test data with the synthesized data. In comparison with modal FRF synthesis, it can be observed in equation 16-26 that two additional terms as a function of -jω need to be included for a correct synthesis of the auto- and crosspowers, which are assumed to be estimated on the basis of the FFT and segment averaging. If Xmn(jω) were not calculated with the FFT segment averaging approach, but as the FFT of the correlation function between response m and response n estimated using equation 16-19, the last two terms in equation 16-26 could be neglected.
Mode shapes are identified in a secondary process using the Least Squares Frequency Domain procedure. For the theoretical background on this method see section 16.2.2.
BR (Balanced Realization)
This is one of the "subspace" techniques which identifies frequency, damping and mode shapes.
In this case all the response functions must be selected as references, which are used in the computation of the crosspower functions from the original time domain data. This method thus requires more computational effort, but the algorithm gives "equal importance" to all modes and can identify modes which are not well excited under operational conditions. For the theoretical background on subspace methods see section 16.2.1.
Chapter 17 Running modes analysis
The aim of modal analysis is to identify a modal model that describes the dynamic behavior of a (mechanical) system. This behavior is identified by means of the transfer functions measured between any two degrees of freedom of the system.
One of the most common ways of estimating the modal parameters is based upon the measurement of FRFs between one or more inputs (reference DOFs) and all response DOFs of interest. These measurements are made under well defined and controlled conditions, where all input and output signals are measured and no unknown forces (external or internal) are acting on the system. The modal model is (ideally) valid under any circumstances; that is to say, whatever the frequency contents, level or nature of the acting forces. This makes modal analysis a very powerful tool, and the modal model (once identified) can be used in a number of ways, such as troubleshooting, forced response prediction, sensitivity analysis or modification prediction.
Animating the system's wire frame model can lead to a better understanding of these phenomena. This makes it possible to show each motion (or acceleration) level at the corresponding DOF, in a cyclic manner. Because of the external resemblance of the animated representation of the vector quantity {X} to the mode shape vector {V}, the vector {X} is called a running mode, or an operational deflection shape.
These running modes must be interpreted entirely differently from modal modes. Running modes only reflect the cyclic motion of each DOF under specific operational conditions, and at a specific frequency. Using a modal model based on displacement/force frequency response functions {H}, the displacement running mode {X} can be described as follows.
$$\{X_i(\omega_p)\} = \{H_{i1}(\omega_p)\}\,F_1(\omega_p) + \{H_{i2}(\omega_p)\}\,F_2(\omega_p) + \cdots + \{H_{im}(\omega_p)\}\,F_m(\omega_p) \qquad\text{Eqn 17-1}$$

$$X_i(\omega_p) = \left(\sum_{k=1}^{2N}\frac{V_{ik}V_{1k}}{j\omega_p - \lambda_k}\right)F_1(\omega_p) + \cdots + \left(\sum_{k=1}^{2N}\frac{V_{ik}V_{mk}}{j\omega_p - \lambda_k}\right)F_m(\omega_p) \qquad\text{Eqn 17-2}$$

where,
i = the DOF counter
ωp = the particular angular frequency
Fj(ωp) = the force input spectrum at DOF j
m = the number of acting forces
The above equation clearly shows that running modes:
V can be identified at any of the measured frequencies ωp, whereas a modal mode has a fixed natural frequency determined by the structural characteristics of the system (mass, size, Young's modulus, etc.).
V depend on the level and nature of the acting force(s).
V depend on the structural characteristics of the system, through its FRF behavior.
V depend on the frequency contents of each of the acting forces: if F3(ωp) happens to be zero at ωp, it will not contribute to the running mode {X(ωp)}.
V will be dominant at structural resonances (ωp ≈ ωk), but also at peaks in the acting force spectra.
Ideally, all response spectra for a running mode analysis would be acquired:
V simultaneously
The two measured functions available for running mode analysis are transmissibility functions and crosspower spectra.
$$T_{ij}(\omega) = \frac{X_i(\omega)}{X_j(\omega)} \qquad\text{Eqn 17-3}$$

$$T_{ij}(\omega) = \frac{G_{ij}(\omega)}{G_{jj}(\omega)} \qquad\text{Eqn 17-4}$$

$$\gamma^{2}_{ij}(\omega) = \frac{\left|G_{ij}(\omega)\right|^{2}}{G_{ii}(\omega)\,G_{jj}(\omega)} \qquad\text{Eqn 17-5}$$
The coherence function expresses the linear relationship between the two response signals of the measured system. This coherence function is expected to be high, since both responses are caused by the same acting forces. In practice, however, it can be low for the same reasons as those affecting the measurement of FRFs, that is to say due to a low signal to noise ratio for one or both of the signals, bad signal conditioning, etc.
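The transmissibility and coherence estimates of equations 17-4 and 17-5 can be sketched with segment-averaged spectra. The FRFs H1 and H2 and the noise levels below are invented; because both responses are driven by the same single force, the coherence comes out close to one and the transmissibility close to H1/H2.

```python
import numpy as np

rng = np.random.default_rng(1)
nfft, n_seg = 256, 64
freq = np.fft.rfftfreq(nfft)
H1 = 1.0 / (0.05 + 1j * freq)            # hypothetical FRFs from one force
H2 = 2.0 / (0.05 + 1j * freq)

Gii = np.zeros(freq.size)
Gjj = np.zeros(freq.size)
Gij = np.zeros(freq.size, dtype=complex)
for _ in range(n_seg):                   # segment averaging
    F = np.fft.rfft(rng.standard_normal(nfft))   # unmeasured broadband force
    Xi = H1 * F + 0.01 * np.fft.rfft(rng.standard_normal(nfft))
    Xj = H2 * F + 0.01 * np.fft.rfft(rng.standard_normal(nfft))
    Gii += np.abs(Xi) ** 2
    Gjj += np.abs(Xj) ** 2
    Gij += Xi * np.conj(Xj)

Tij = Gij / Gjj                          # transmissibility (eqn 17-4)
coh = np.abs(Gij) ** 2 / (Gii * Gjj)     # coherence (eqn 17-5)
```

Here Tij estimates H1/H2 = 0.5 at every line; a drop in coherence would flag noise or non-stationary operating conditions.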
Another interesting reason why the coherence between two measured signals may be low can be derived from equation 17-1 when it is substituted in equation 17-3. The linear relationship (and hence the coherence) will vary as a function of the weighting factors Fj(ω), for example because of changing operating conditions during the averaging process. High coherence function values in the frequency regions of interest therefore indicate both a high quality of the measurement signals and stationary operating conditions.
Absolutely scaled running mode coefficients for each DOF i can be obtained by multiplying the transmissibility spectra by the RMS value of the reference autopower spectrum.

$$\dot X_i(\omega_0) = X_i(\omega_0)\cdot j\,\omega_0 \quad [\mathrm{m/s}] \qquad\text{Eqn 17-8}$$
Absolutely scaled running modes can in this case again be obtained by means of the autopower spectrum of the reference station j

$$\{X_i(\omega)\} = \frac{\{G_{ij}(\omega)\}}{\sqrt{G_{jj}(\omega)}} \qquad\text{Eqn 17-11}$$
When displacements were measured, the running mode coefficients will have units of displacement. Equations 17-8 and 17-9 can be used to derive velocity or acceleration values.
Unlike modal modes, a running mode can be identified at any arbitrary frequency of the measured spectra.
Simple peak picking and mode picking methods can be used to extract the sampled values corresponding to a certain spectral line from the measured spectra. They can then be scaled, and assembled into a vector which can be listed, or animated using a 3D wire frame model of the measured object. For a measurement blocksize of 1024 (512 spectral lines), it is thus possible to identify 512 running modes - or even more when interpolating between the spectral lines.
Note! There is no such quantity as damping defined for a running mode. Similarly
other modal parameter concepts such as residues or modal participation fac-
tors have no meaning for running mode analysis.
The scaling of running mode coefficients that have been determined using peak picking methods depends upon the nature of the measurement data (e.g. transmissibilities, or autopowers). Several ways of scaling running modes can be considered.
Each one of the above scaling methods may change and influence the units of
the scaled running mode. The scaling factor's units will be incorporated into
the mode shape coefficient units, which were initially obtained from the meaĆ
surement data.
A set of functions exists that are designed to assess the validity of modes. These include the Modal Scale Factor, the Modal Assurance Criterion and Modal decomposition.
Both the Modal Scale Factor and the Modal Assurance Criterion are mathematical tools used to compare two vectors of equal length. They can be used to compare running and modal mode shape information.
The Modal Scale Factor between columns l and j of mode shape k, or MSFjlk, is the ratio between two vectors. Although this ratio should be independent of the row index i (the response station), a least squares estimate has to be computed for it when more than one output station coefficient is available.
Modal Scale Factors and Modal Assurance Criterion values can be used to
compare an obtained modal model with the accepted running modes. The
MAC values for corresponding modeshapes should be near 100 % and the MSF
between corresponding vectors should be close to unity. When multiple inputs
are used, the MSF can be calculated for each input, while the corresponding
MAC will be the same for all of them.
Modal decomposition
When a modal model for the same DOFs is available for a measured object, it is
possible to compare modal and running modes and to track down resonance
phenomena causing a particular running mode to become predominant. This is
termed Modal decomposition. By using a decomposition of each running
mode in a linear combination of the modal modes, it becomes clear whether or
not a running mode originates primarily from a resonance phenomenon.
The modal modes form what is termed the 'basis' group of modes. The running modes are in a separate group that is to be decomposed. The following formula applies.

$$\{X_i(\omega_0)\} = a_1\{V_1\} + a_2\{V_2\} + \cdots + a_n\{V_n\} + \text{Rest}$$
Where
Xi is the i th mode of the group to be decomposed (running modes)
Vi is the i th mode of the basis group (modal modes)
ai are the scaling coefficients needed to satisfy the above equation.
$$\{X_i(\omega_0)\} = \frac{a_1}{a_{max}}\cdot 100\%\cdot\{V_1\} + \cdots + \frac{a_n}{a_{max}}\cdot 100\%\cdot\{V_n\} + \text{Rest} \qquad\text{Eqn 17-15}$$

where a_max is the largest of the coefficients a_1 ... a_n, so that the dominant modal mode contributes 100%.
Note! Take care when interpreting these values since resemblance of the modal and
the running mode may purely be coincidental. A running mode at 56 Hz will
have no connection with a modal mode at 200 Hz even if they look alike.
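The decomposition itself is a linear least squares problem. A minimal sketch (all mode shapes, coefficients and the "Rest" term are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
V = rng.standard_normal((8, 3))        # basis group: 3 modal modes at 8 DOFs
a_true = np.array([2.0, -0.5, 0.1])
X = V @ a_true + 0.01 * rng.standard_normal(8)   # running mode + "Rest"

a, *_ = np.linalg.lstsq(V, X, rcond=None)        # scaling coefficients a_i
share = 100.0 * np.abs(a) / np.abs(a).max()      # percentages, as in eqn 17-15
rest = X - V @ a                                 # unexplained part
```

A small `rest` and one dominant percentage indicate that the running mode originates primarily from a single resonance.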
Modal validation
Mode participation
Reciprocity
Scaling
Summation of FRFs
Synthesis of FRFs
Chapter 18 Modal validation
18.1 Introduction
Some validation procedures allow you to convert the complex mode shape vectors to normalized ones. Normalized mode shapes are obtained from the amplitudes of the complex mode shape coefficients after a rotation over their weighted mean phase angle in the complex plane.
The FRF between input j and output i on a structure can be written in partial fraction expansion form as

$$h_{ij}(\omega) = \sum_{k=1}^{N}\left(\frac{r_{ijk}}{j\omega - \lambda_k} + \frac{r^{*}_{ijk}}{j\omega - \lambda^{*}_{k}}\right) \qquad\text{Eqn 18-1}$$

$$[H(\omega)] = \sum_{k=1}^{N}\left(\frac{[R_k]}{j\omega - \lambda_k} + \frac{[R_k]^{*}}{j\omega - \lambda^{*}_{k}}\right) \qquad\text{Eqn 18-2}$$
where [Rk] represents the matrix of residues. When Maxwell's reciprocity principle holds for the tested structure, this residue matrix is symmetric and can be rewritten as

$$[R_k] = a_k\,\{V\}_k\,\{V\}_k^{t}$$
The ratio between two residue elements on the same row i but in two different columns j and l can be computed as

$$\frac{r_{ij,k}}{r_{il,k}} = \frac{v_{jk}}{v_{lk}} = MSF_{jlk} \qquad\text{Eqn 18-4}$$

This ratio MSFjlk is called the Modal Scale Factor between columns l and j of mode k. Although this ratio should be independent of the row index i (the response station), a least squares estimate has to be computed for it when more than one output station residue coefficient is available.
If a linear relationship exists between the two complex vectors {R}jk and {R}lk, the MSF is the corresponding proportionality constant between them and the MAC value will be near to one. If they are linearly independent, the MAC value will be small (near zero), and the MSF is not very meaningful.
In a more general way, the MAC concept can be applied to two arbitrary complex vectors. This is useful in comparing two arbitrarily scaled mode shape vectors, since similar mode shapes have a high MAC value.
Modal Scale Factor and Modal Assurance Criterion values can be used to compare two modal models obtained, for example, from two different modal parameter estimation processes on the same test data. When comparing mode shapes, the MAC values for corresponding modes should be near 100% and the MSF between corresponding residue vectors (mode shapes, scaled by the modal participation factors) should be close to unity. When multiple inputs were used, this MSF can be calculated for each input, while the corresponding MAC will be the same for all of them.
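Both quantities are only a few lines of code. The helper names `msf` and `mac` and the example vectors below are invented for illustration; the MSF is computed as the least squares proportionality constant, the MAC as a normalized squared inner product.

```python
import numpy as np

def msf(v1, v2):
    """Least squares Modal Scale Factor such that v1 ~ msf(v1, v2) * v2."""
    return np.vdot(v2, v1) / np.vdot(v2, v2)

def mac(v1, v2):
    """Modal Assurance Criterion between two complex vectors (0 .. 1)."""
    num = np.abs(np.vdot(v1, v2)) ** 2
    den = np.vdot(v1, v1).real * np.vdot(v2, v2).real
    return num / den

v = np.array([1.0 + 0.1j, -0.5, 0.25 - 0.2j, 0.8 + 0.3j])
w = (2.0 - 0.5j) * v                     # fully proportional: MAC = 1
u = np.array([0.0, 1.0, 0.0, -1.0])      # unrelated vector: low MAC
```

For the proportional pair the MAC is exactly 1 and the MSF recovers the (complex) proportionality constant; for the unrelated pair the MAC is low and the MSF is not meaningful.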
A second application of the MAC value is derived from the orthogonality of mode shape vectors when weighted by the mass matrix:

$$\{V\}_k^{t}\,[M]\,\{V\}_l = \begin{cases} m_k & \text{when } k = l\\ 0 & \text{otherwise} \end{cases} \qquad\text{Eqn 18-7}$$
For more specific information on using MSF and MAC for interpreting results
in a running mode analysis see section 17.4.
Note! These evaluations are only meaningful when the same response and reference
stations are included for all modes.
When a comparison is made of the residue sums for one mode at all the references, it evaluates the reference point selection for that mode. The reference with the highest residue sum is the best one to excite that mode.
When these sums are added together for all references, the importance of the modes themselves is evaluated. The mode with the highest result is the most important one.
Finally, the sums of residues can be added for all modes. Comparison of these results between different inputs allows you to evaluate the selection of reference stations in a global sense for all modes.
Reciprocity of FRFs
Reciprocity of FRFs means that -
measuring the response at DOF i while exciting at DOF j is the same as measuring
the response at DOF j while exciting at DOF i
This means that the FRF matrix is symmetric. Note that this property is inherently assumed when performing hammer impact testing to measure FRFs or impulse responses.
$$[H(\omega)] = \sum_{k=1}^{N}\left(\frac{\{V\}_k\,\langle L\rangle_k}{j\omega - \lambda_k} + \frac{\{V\}^{*}_{k}\,\langle L\rangle^{*}_{k}}{j\omega - \lambda^{*}_{k}}\right) \qquad\text{Eqn 18-9}$$
it becomes clear that, when this matrix is symmetric, the roles of the mode shape vectors and the modal participation vectors can be switched. Making an abstraction of the absolute scaling of the residues, this property can be expressed as follows.
For a reciprocal test structure, the modal participation factors should be proportional to the mode shape coefficients at the input stations.
Using this proportionality between mode shapes and modal participation factors, reciprocity can be checked for each mode when data for more than one input station has been used for the modal parameter estimation.
If reciprocity is not satisfied, then really only the transfer functions between the measured response and reference DOFs can be correctly synthesized. If reciprocity is required, then it can be imposed on the model, and a number of options are available to calculate the proportionality factor needed to do this.
1 Select one driving point for each mode. The best choice in this case is the one with the largest driving point residue, since it is the one from which that mode is best excited and observed.
2 Select one specific driving point for all modes. Other participation facĆ
tors are disregarded for scaling.
$$RSF = \frac{\displaystyle\sum_{i=1}^{n} v^{*}_{i}\,l_{i}}{\displaystyle\sum_{i=1}^{n} v^{*}_{i}\,v_{i}}$$

where vi are the mode shape coefficients at the input stations and li the corresponding modal participation factors.
This section deals with mode shape scaling and generalized parameters (modal
mass).
The residue rij,k between locations i and j for mode k can be written as the product of a scaling factor ak (which is independent of the location) and the modal vector components at both locations. If the structure is proportionally damped, the modal vectors of the structure are real whereas the residues are purely imaginary. As a consequence, the scaling factor ak is also purely imaginary.
$$r_{ij,k} = a_k\,v_{ik}\,v_{jk},\qquad a_k = \frac{1}{2j\,\omega_{dk}\,m_k} \qquad\text{Eqn 18-10}$$

$$H_{ij}(j\omega) = \sum_{k=1}^{N}\frac{1}{2j\,\omega_{dk}\,m_k}\left(\frac{v_{ik}\,v_{jk}}{j\omega - \lambda_k} - \frac{v^{*}_{ik}\,v^{*}_{jk}}{j\omega - \lambda^{*}_{k}}\right) \qquad\text{Eqn 18-11}$$

where ωdk is the damped natural frequency, mk the modal mass and λk the pole of mode k.
At this point, it should be pointed out that equation 18-11 contains N more parameters than equation 18-1, i.e. one more parameter per mode. This is due to the fact that residues are scaled quantities whereas the modal vectors are determined only up to a scaling factor. In equation 18-11 the modal mass values play the role of the scaling constants. It is clear that the value of the modal mass depends on the scaling scheme that was used to obtain the numerical values of the modal vector amplitudes.
$$r_{ijk} = \frac{v_{ik}\,v_{jk}}{2j\,\omega_{dk}\,m_k} \qquad\text{Eqn 18-12}$$
To compute the amplitudes of one modal vector and the corresponding modal mass from a set of residues with respect to a given input location j, you need one additional equation, since the set of equations that can be written for all output locations i in the form of equation 18-12 is underdetermined: N equations in N+1 unknowns are obtained. This last equation will actually determine the scaling of the modal vector.
Note that an eigenvector determines only a direction in the state space and has
no absolutely scaled amplitude, while a residue has a magnitude with physical
meaning. The scaling of the eigenvectors will determine the modal mass. MoĆ
dal stiffness is determined as the modal mass multiplied by the natural freĆ
quency squared. Modal damping is twice the modal mass multiplied by the
natural frequency and the damping ratio.
V Unity mass
In this case the mode shapes and participation factors are scaled such
that the modal mass (mk ) in equation 18-12 is equal to 1.
V Unity stiffness
In this case the mode shapes and participation factors are scaled such that the modal stiffness (kk = mk ωk²) is scaled to 1.
V Unity modal A
In this case the mode shapes and participation factors are scaled such
that the scaling factor (ak ) is scaled to 1. This scaling factor is indepenĆ
dent of the DOFs.
V Unity length
In this case the mode shapes and participation factors are scaled such
that the squared norm of the vector vik is scaled to unity.
N0
Ă v2ikĂ 1
i1
V Unity maximum
In this case the mode shapes and participation factors are scaled such
that the vector vik is scaled to 1 where i is the DOF with the largest
mode shape amplitude.
V Unity component
In this case the mode shapes and participation factors are scaled such
that the vector vik is scaled to 1 where i is any DOF selected by the user.
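Three of these scaling schemes can be sketched numerically; a minimal illustration assuming a real unscaled mode shape vector v (all values hypothetical):

```python
import numpy as np

# hypothetical unscaled mode shape vector (one column of the modal matrix)
v = np.array([0.2, -0.5, 1.0, 0.7])

# unity length: the squared norm of the vector is scaled to 1
v_length = v / np.linalg.norm(v)

# unity maximum: the DOF with the largest amplitude is scaled to 1
imax = np.argmax(np.abs(v))
v_max = v / v[imax]

# unity component: a user-selected DOF (here index 1) is scaled to 1
i = 1
v_comp = v / v[i]
```

Whichever scheme is chosen, the corresponding modal mass changes accordingly, so the residues of equation 18-12 stay invariant.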
For each response station, the sensitivity of each natural frequency to a mass increase at that station can be calculated and should be negative. A quantity called the "Mode Overcomplexity Value" (MOV) is defined as the (weighted) percentage of the response stations for which a mass addition indeed decreases the natural frequency for a specific mode,

MOV_k = ( Σ_{i=1}^{N0} w_i a_ik / Σ_{i=1}^{N0} w_i ) × 100%        Eqn 18-13
where w_i is a weighting factor for response station i, and a_ik equals 1 when a mass addition at station i decreases the natural frequency of mode k, and 0 otherwise.
This MOV index should be high (near 100%) for high quality modes. If this index is low, the considered mode shape vector is either computational or wrongly estimated. It is called "overcomplex", which means that the phase angle of some modal coefficients exceeds a reasonable limit.
However, if this MOV is low for all modes for a specific input station (say, below 10%), this might indicate that the excitation force direction was wrongly entered while measuring the FRFs for that input station. This error may be corrected by changing the signs of the modal participation factors for all modes for that particular input.
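A minimal sketch of the MOV computation of Eqn 18-13, assuming the frequency sensitivities to a mass addition and the weighting factors are already available (the function name and data are illustrative only):

```python
import numpy as np

def mov(freq_sensitivities, weights):
    """Mode Overcomplexity Value (Eqn 18-13) for one mode."""
    # a_ik = 1 where a mass addition decreases the natural frequency
    a = (np.asarray(freq_sensitivities, float) < 0).astype(float)
    w = np.asarray(weights, float)
    return 100.0 * np.sum(w * a) / np.sum(w)

# hypothetical sensitivities at 4 response stations, equal weighting:
# 3 of the 4 stations behave as expected, so MOV = 75%
value = mov([-1e-3, -2e-3, 5e-4, -1e-3], [1.0, 1.0, 1.0, 1.0])
```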
This index should be high (near 100%) for real normal modes. A low MPC index indicates a rather complex mode, either because of local damping elements in the tested structure or because of an erroneous measurement or analysis procedure.
When you have two groups of modes representing the same modal space then you can compare the two groups. The comparison concerns the damped frequencies, the damping values, the modal phase collinearities and the MAC values of the two groups. This is a useful way of comparing sets of modes generated from the same data but using different estimation techniques, for example.
Mode Indicator Functions (MIFs) are frequency domain functions that exhibit
local minima at the natural frequencies of real normal modes. The number of
MIFs that can be computed for a given data set equals the number of input
locations that are available. The so-called primary MIF will exhibit a local
minimum at each of the structure's natural frequencies. The secondary MIF
will have local minima only in the case of repeated roots. Depending on the
number of input locations for which data is available, higher order MIFs can be
computed to determine the multiplicity of the repeated root. So a root with a
multiplicity of four will cause a minimum in the first, second, third and fourth
MIF for example. An example of a MIF is shown below.
Removing the brackets from the notation, equation 18-14 can be split into real and imaginary parts

{X_r} + j{X_i} = ([H_r] + j[H_i]) ({F_r} + j{F_i})        Eqn 18-15
For real normal modes, the structural response must lag the excitation forces by 90°. Therefore, when the structure is excited at the correct frequency according to one of these modes (modal tuning), the contribution of the real part of the response vector X to its total length must become minimal. Mathematically this can be formulated in the following minimisation problem
min_{|F|=1}  {X_r}^t {X_r} / ( {X_r}^t {X_r} + {X_i}^t {X_i} )        Eqn 18-16
Substituting the expressions for the real and imaginary parts of the response (equation 18-15) in this expression yields

min_{|F|=1}  {F}^t [H_r]^t [H_r] {F} / ( {F}^t ([H_r]^t [H_r] + [H_i]^t [H_i]) {F} )        Eqn 18-17
The square matrices H_r^t H_r and H_i^t H_i have as many rows and columns as the number of input or reference locations that were used to create them (i.e. the number of columns of the FRF matrix that were measured). The primary Mode Indicator Function is now constructed from the smallest eigenvalue of expression 18-18 at each spectral line. It exhibits noticeable local minima at the frequencies where real normal modes exist. A second MIF can be constructed using the second smallest eigenvalue of 18-18 for each spectral line. It will contain noticeable local minima if the structure has repeated modes. This can be repeated for all other eigenvalues of equation 18-18. The number of functions that can be constructed is equal to the number of eigenvalues, which is the same as the number of input stations. From these functions, you can then deduce the multiplicity of each of the normal modes.
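As an illustration of this construction, the MIFs can be computed per spectral line from the eigenvalues of the generalized problem built from H_r^t H_r and H_r^t H_r + H_i^t H_i; a sketch assuming the FRF matrix is stored as a (frequencies x outputs x inputs) array (the function name and storage layout are assumptions):

```python
import numpy as np

def mifs(H):
    """H: complex FRF array, shape (n_freq, n_outputs, n_inputs).
    Returns one MIF per input; column 0 is the primary MIF."""
    n_freq, _, n_in = H.shape
    out = np.empty((n_freq, n_in))
    for f in range(n_freq):
        Hr, Hi = H[f].real, H[f].imag
        A = Hr.T @ Hr                 # Hr^t Hr
        B = A + Hi.T @ Hi             # Hr^t Hr + Hi^t Hi
        # eigenvalues of the ratio in expression 18-17 lie between 0 and 1
        mu = np.linalg.eigvals(np.linalg.solve(B, A))
        out[f] = np.sort(mu.real)
    return out
```

At a real normal mode the response is dominated by its imaginary part, so the primary MIF dips toward zero, while purely real (mass-line-like) data gives values near one.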
Graphically comparing this summation of FRFs with the values of the natural frequencies of modes in a display module can be useful. Problems like missing modes, erroneous frequency estimates or shifting resonances because of mass loading by the transducers can easily be detected this way.
The FRFs that you have obtained from a modal model can be synthesized in a number of ways. Scaled mode shapes (i.e. mode shapes and modal participation factors) have to be available for at least one input station for which a mode shape coefficient is also available. Using the Maxwell-Betti reciprocity principle between inputs and outputs (section 18.4) it is however possible to calculate the FRF between any two measurement stations.
The correlation is the normalized complex product of the synthesized and measured values.

correlation = | Σ_i S_i M*_i |² / ( (Σ_i S_i S*_i) (Σ_i M_i M*_i) )        Eqn 18-19
with S_i the synthesized and M_i the measured FRF value at spectral line i.
The LS error is the least squares difference normalized to the synthesized values.

LS error = Σ_i (S_i - M_i)(S_i - M_i)* / Σ_i S_i S*_i        Eqn 18-20
A listing of FRFs where the correlation is lower than a specified percentage and which exhibit an error higher than a specified percentage provides useful information on the quality of the synthesized FRFs.
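The two quality measures of Eqns 18-19 and 18-20 are straightforward to evaluate over the spectral lines; a sketch (function names are illustrative):

```python
import numpy as np

def frf_correlation(S, M):
    """Normalized complex product of synthesized (S) and measured (M)
    FRF values over the spectral lines (Eqn 18-19)."""
    num = abs(np.vdot(M, S)) ** 2              # |sum_i S_i M*_i|^2
    den = np.vdot(S, S).real * np.vdot(M, M).real
    return num / den

def frf_ls_error(S, M):
    """Least squares difference normalized to the synthesized values
    (Eqn 18-20)."""
    d = S - M
    return np.vdot(d, d).real / np.vdot(S, S).real
```

Identical synthesized and measured FRFs give a correlation of 1 and an LS error of 0; the correlation is also insensitive to a global scaling difference between the two FRFs.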
Chapter 19 Rigid body modes
This section discusses the theory used in the calculation of rigid body properties. Experimental frequency response functions (FRFs) can be used to derive the structural modes of a structure and the inertia properties of a system. These properties are: the moments of inertia, the products of inertia and the principal moments of inertia.
In general two types of method are applied.
1 A first type determines the inertia characteristics using the rigid body mode shapes obtained from test data. This is the Modal Model Method described in reference [1].
2 The second type starts from the mass line, i.e. the FRF inertia restraint of the softly suspended structure. This mass line is used in a set of kinematic and dynamic equations, from which the rigid body characteristics (mass, center of gravity, principal directions and moments of inertia) can be determined (reference [2]). Some of these methods also look for the suspension stiffnesses while others consider the mass of the system as known (reference [3]).
[Figure: typical FRF (acc/force) of a softly suspended structure, showing the rigid body mode at low frequency, the mass line frequency band, and the first deformation mode.]
Input data
FRFs are required in order to determine the rigid body properties. The input
format is required to be acceleration/force, and if this is not the case then a
transformation can be applied. Rotational or scalar (acoustic) measurements
are not used in the rigid body calculations.
In theory 2 excitations and 6 responses are needed for the calculations. Practical tests show that the best results are obtained when at least 6 excitations (e.g. 2 nodes in 3 directions) and 12 responses are measured.
All three directions (+X, +Y, +Z) are required. For the three measured (local) accelerations of output node "o":

{Ẍ}_g = [T]_o^(-1) {Ẍ}_l        Eqn 19-1

where
{Ẍ}_g is the global acceleration vector
{Ẍ}_l is the local acceleration vector
[T]_o^(-1) is the rotation matrix (global to local) of node "o"

When a reference is specified which does not coincide with the global origin, the three measured accelerations of output node "o" are also rotated according to the axes of the reference system.

{Ẍ}_r = ([T]_o^(-1) [T]_r) {Ẍ}_l        Eqn 19-2

where
[T]_r is the rotation matrix (global to local) of node "r".
1.2 System of equations
For all spectral lines of the selected band, for all response nodes P, Q,...
and for all inputs 1, 2, ... under consideration
.. ..
X.. 1P X 2Px ---
Z P Y P X.. 1g
.. ..
X 2g x ---
X1P
x
..
---
1 0 0 0 x
X P X 1g X 2g y ---
..
X 2Py
.. y
0 1 0 ZP 0
..
X1P --- 0 0 1 YP XP 0
.. y
X 2g z ---
..
.. X 2Pz
X 1g
X1Q
z
.. z
1 0 0 0 Z Y Q
--- .. 1g 2g x ---
.. ..
.. X 2Qx Q
X x
.. 0 1 0 ZQ 0 XQ
--- .. 1g
x
)
z
) --- ) ) ---
where X_P, Y_P, and Z_P are the global coordinates of node P (or towards the reference axis system).
This over-determined system of equations (the number of output DOFs is higher than or equal to 6) is solved for each spectral line in a least squares sense. In this way, at each spectral line, the reference acceleration matrix is found. Further, a general solution of the reference acceleration matrix over the total frequency band is calculated by solving in a least squares sense the global set of equations containing all outputs and all spectral lines.
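The per-node kinematic rows and the least squares solve can be sketched as follows (node coordinates and the simulated reference acceleration are hypothetical):

```python
import numpy as np

def kinematic_rows(X, Y, Z):
    # 3x6 rigid body kinematic matrix for one response node
    return np.array([[1, 0, 0,  0,  Z, -Y],
                     [0, 1, 0, -Z,  0,  X],
                     [0, 0, 1,  Y, -X,  0]], float)

# hypothetical response nodes: 4 nodes -> 12 equations for 6 unknowns
nodes = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
A = np.vstack([kinematic_rows(*n) for n in nodes])

# simulate measurements from a known reference acceleration
# (3 translations + 3 rotations), then recover it per spectral line
# in a least squares sense
x_ref = np.array([0.1, -0.2, 0.3, 0.01, 0.02, -0.03])
x_meas = A @ x_ref
x_est, *_ = np.linalg.lstsq(A, x_meas, rcond=None)
```

In practice one such solve is done per spectral line and per input, and a band-wide solution is obtained by stacking all spectral lines into one larger least squares system.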
When the input at node "i" acts in the X-direction:

{F_1} = [T]_i^(-1) {1.0, 0.0, 0.0}^t        Eqn 19-4

[T]_i^(-1) is the rotation matrix (global to local) of node "i"

When the reference "r" is not coincident with the global origin:

{F_1} = ([T]_r [T]_i^(-1)) {1.0, 0.0, 0.0}^t        Eqn 19-5

[T]_r is the rotation matrix (global to local) of reference node "r"
Similar equations are used when the input has Y-direction or Z-direction.
[ F_1gx ]   [  1     0     0  ]
[ F_1gy ]   [  0     1     0  ]
[ F_1gz ] = [  0     0     1  ] {F_1}        Eqn 19-6
[ M_1gx ]   [  0   -Z_1   Y_1 ]
[ M_1gy ]   [ Z_1    0   -X_1 ]
[ M_1gz ]   [ -Y_1  X_1    0  ]
For
(i) each input and for each spectral line
(ii) each input over the total band:

{ {F_g} - m{a_g} }   [ m[θ̈]x   [0]_3x6 ] { {r_cog} }
{                } = [                  ] {         }        Eqn 19-7
{      {M_g}     }   [  [F]x     [Θ]   ] {   {I}   }

where [θ̈]x and [F]x are the skew-symmetric (cross product) matrices of the angular acceleration vector and the global force vector, {r_cog} = {X_cog, Y_cog, Z_cog}^t, {I} = {I_xx, I_yy, I_zz, I_xy, I_yz, I_xz}^t and

      [ θ̈_x   0    0   -θ̈_y    0   -θ̈_z ]
[Θ] = [  0   θ̈_y   0   -θ̈_x  -θ̈_z    0  ]
      [  0    0   θ̈_z    0   -θ̈_y  -θ̈_x ]
X_cog, Y_cog and Z_cog are the global coordinates of the center of gravity.
I_xx, I_yy, I_zz are the moments of inertia towards the global axis system.
I_xy, I_yz, I_xz are the products of inertia towards the global axis system.
This set of equations can be solved in two steps. First, the coordinates of the center of gravity can be solved from the first three equations (per reference). Afterwards, these values can be filled in the last equations to solve the inertia moments and products.
Step 1
for each input and for each spectral line
and for each input over the total band:

[ F_gx - m·a_gx ]     [   0   -θ̈_z   θ̈_y ] [ X_cog ]
[ F_gy - m·a_gy ] = m [  θ̈_z    0   -θ̈_x ] [ Y_cog ]        Eqn 19-8
[ F_gz - m·a_gz ]     [ -θ̈_y   θ̈_x    0  ] [ Z_cog ]
Step 2
for each input and for each spectral line
and for each input over the total band:

[ M_gx - Y_cog F_gz + Z_cog F_gy ]   [ θ̈_x   0    0   -θ̈_y    0   -θ̈_z ] [ I_xx ]
[ M_gy - Z_cog F_gx + X_cog F_gz ] = [  0   θ̈_y   0   -θ̈_x  -θ̈_z    0  ] [ I_yy ]        Eqn 19-9
[ M_gz - X_cog F_gy + Y_cog F_gx ]   [  0    0   θ̈_z    0   -θ̈_y  -θ̈_x ] [ I_zz ]
                                                                         [ I_xy ]
                                                                         [ I_yz ]
                                                                         [ I_xz ]
If wanted, only the second set of equations is solved. In this case the coordinates of the center of gravity are presumed to be known and specified by the user.
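Step 1 can be sketched as follows, simulating the force residuals from a known center of gravity and recovering it in a least squares sense over two hypothetical inputs (Step 2 is solved analogously with the six inertia unknowns):

```python
import numpy as np

def skew(v):
    # cross-product matrix: skew(v) @ r == np.cross(v, r)
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]], float)

# hypothetical rigid body data: mass, center of gravity, and the
# angular accelerations obtained for two different inputs
m = 2.0
r_cog = np.array([0.1, 0.2, -0.05])
alphas = [np.array([1.0, 0.0, 0.5]), np.array([0.0, 2.0, -1.0])]

# Step 1 (Eqn 19-8): {F_g - m a_g} = m [alpha]x {r_cog}; stack both
# inputs and solve for the center of gravity in a least squares sense
A = np.vstack([m * skew(al) for al in alphas])
b = A @ r_cog                        # simulated force residuals
r_est, *_ = np.linalg.lstsq(A, b, rcond=None)
```

A single skew-symmetric matrix has rank 2, which is why more than one input is needed before the center of gravity is fully determined.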
In general: {L_g} = [A]{ω_g}
where
{L_g} is the vector of total angular momentum towards the global (reference) axis system
[A] is the matrix of inertia (symmetrical)
{ω_g} is the vector of angular velocity
A rigid body is a (part of a) structure that does not deform of itself, but that moves periodically as a whole at a certain frequency.
The modal parameters for such a rigid body mode are determined not by the dynamics of the structure itself, but by the dynamic properties of the boundary conditions of that structure. This includes the way it is attached to its surroundings (or the rest of the structure), the stiffness and damping characteristics of suspending elements, its global mass, etc. A rigid body can be compared to a simple system with a mass attached to a fixed point by a spring and a damper element.
It has 6 modes of vibration, i.e. translation along the X, Y, and Z axes, and rotation about these axes. Every mode which is measured for such a system will be a linear combination of these 6 modes.
This section discusses how rigid body modes are used and describes two methods by which the modes can be determined; namely
1 Use the geometry data to construct the 6 rigid body motions of the structure.
2 Determine the 6 weighting coefficients that express a measured mode as a linear combination of these rigid body motions.
3 Calculate the mode shape coefficients for the requested DOFs based upon the geometry and the 6 weighting coefficients.
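The construction of the 6 rigid body motions from the geometry and the least squares fit of the weighting coefficients can be sketched as follows (node coordinates and weights are hypothetical):

```python
import numpy as np

def rigid_body_modes(nodes):
    # 6 rigid body motions (3 translations + 3 unit rotations about the
    # global axes) for the translational DOFs at the given node coordinates
    cols = [np.tile(t, len(nodes)) for t in np.eye(3)]
    for axis in np.eye(3):
        # small-rotation displacement of a point at r: u = theta x r
        cols.append(np.concatenate([np.cross(axis, r) for r in nodes]))
    return np.array(cols).T              # shape (3*n_nodes, 6)

nodes = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R = rigid_body_modes(nodes)

# a measured rigid body mode is a linear combination of the 6 motions;
# the weighting coefficients follow from a least squares fit
w_true = np.array([1., 0., 0., 0., 0.5, 0.])
measured = R @ w_true
w, *_ = np.linalg.lstsq(R, measured, rcond=None)
```

Once the 6 weighting coefficients are known, evaluating R at any requested DOF (step 3) yields the corresponding mode shape coefficient.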
Limitations
Calculating the rigid body motion for a part of the structure (for example one single component) can sometimes prove a little awkward. The component will indeed move as a rigid body but is not constrained to still be connected to the rest of the structure. When applied to the tail wing of an airplane, for example, this wing may rotate about a horizontal axis through the middle of the wing but may no longer be connected to the fuselage at its base. The same may happen to an engine block of a car, which may be disconnected from the supports when a rigid body motion is applied to it.
R_trans = 1 / (2m)

R_rot = (r_f · r_x) / (2I)

where m is the total mass, I is the moment of inertia about the considered axis, and r_f and r_x are the distances from the excitation and response stations to that axis.
Rigid body modes are useful in completing the modal model of a structure that
is being used for structural modification purposes.
19.3 References
[2] Okuzumi, H., Identification of the Rigid Body Characteristics of a Powerplant by Using Experimentally Obtained Transfer Functions, Central Engineering Laboratories, Nissan Motor Co., Ltd., Jun 1991
Design
This chapter discusses the three types of analysis that can be performed to determine the effect of design changes on the modal behavior of a structure. These are
Sensitivity
Modification prediction
Forced response
Chapter 20 Design
Correctly scaled mode shapes are an absolute prerequisite for the correct application of the design procedures described here.
The dynamic behavior of a structure can therefore be fully described and modelled if the poles λ_k and the residues r_ijk for each mode k and each pair of response and reference DOFs i and j are known.
In practice, however, the modal model is often defined by the poles (frequency and damping values) and the residues for only one (or a few) reference station(s) j. The question now arises as to how this limited modal model can be used for the prediction of responses when forces are acting on a degree of freedom for which residues are not readily available. The residues required between any two degrees of freedom can be derived as follows.
For a linear structure which obeys the Maxwell-Betti reciprocity principle between inputs and outputs, the FRF between two DOFs i and j can be obtained by exciting the structure at DOF j and measuring the response at DOF i, or by exciting at DOF i and measuring the response at j:

H_ij(ω) = H_ji(ω)

Under these circumstances, the residue for each mode k between two response DOFs m and n can be obtained from the ones between each of them and the available set of residues for reference j:

r_mnk = r_mjk r_njk / r_jjk
where

{W}_k = { W_1, W_2, ..., W_Ni }_k^t        Eqn 20-6

is the unscaled mode shape vector of mode k.
The residues R_k are defined as the product of mode shapes and modal participation factors:

r_ijk = W_ik L_jk        Eqn 20-7
The scaled mode shapes {V}_k used in the theoretical derivation of the previous chapter are related to the unscaled mode shapes {W}_k via a complex scaling factor γ_k for each mode:

{V}_k = γ_k {W}_k        Eqn 20-8
From the definition of the residues, these mode shapes are scaled such that

γ²_k {W}*_k^t {W}_k = {W}*_k^t {L}_k        Eqn 20-10

i.e.

γ²_k = ( W*_1k L_1k + W*_2k L_2k + ... + W*_Nik L_Nik ) / ( {W}*_k^t {W}_k )
In the special case where only one input is considered, i.e. only one set of residues is available, the scaling factor becomes

γ²_k = L_1k / W_1k        Eqn 20-11
The scaling of equation 20-8 actually converts the generally valid modal model of mode shape vectors W and modal participation factors L to a model of scaled mode shape vectors V, in which the modal participation factors are absorbed via equation 20-10. Obviously some information is lost by removing the scaling factors L from the model, and as a consequence, the resulting model is only valid for reciprocal structures with a symmetric FRF matrix. The calculation of the scaling factor γ_k according to equation 20-10 is in fact the best compromise in a least squares sense to approximate a non-reciprocal modal model by a reduced reciprocal one.
20.2 Sensitivity
For structures with complex dynamic behavior, predictions about the effect of physical changes on modal parameters are usually very difficult - if not impossible - to make. When unsatisfactory dynamic behavior is detected or suspected, the designer can use trial and error procedures to try out a number of modifications, but there is no guarantee that any of these attempts will yield satisfying results. On the other hand, numerical techniques can be employed which use the quantitative results of a modal test to evaluate the effects of structural changes.
H_ij(ω) = Σ_{k=1}^{2N} r_ijk / (jω - λ_k)        Eqn 20-12
Using this equation and the theory of adjoint matrices, equation 20-13 can be rewritten in the form

∂H_ij/∂P = -{H_ic}^t (∂Q/∂P) {H_cj}        Eqn 20-15

∂H_ij/∂P = -( Σ_{k=1}^{2N} {r_ick}/(jω - λ_k) )^t (∂Q/∂P) ( Σ_{k=1}^{2N} {r_cjk}/(jω - λ_k) )        Eqn 20-16
Splitting up equation 20-16 into partial fractions, and identifying the corresponding terms of equation 20-13, gives the sensitivities for the frequency (20-17) and the mode shape (20-18).
∂λ_k/∂P = -(1/r_ijk) {r_ick}^t (∂Q/∂P)|_{s=λ_k} {r_cjk}        Eqn 20-17

∂r_ijk/∂P = -{r_ick}^t (∂Q/∂P)|_{s=λ_k} Σ_{m=1,m≠k}^{2N} {r_cjm}/(λ_k - λ_m)
            - Σ_{m=1,m≠k}^{2N} ( {r_icm}^t/(λ_k - λ_m) ) (∂Q/∂P)|_{s=λ_k} {r_cjk}        Eqn 20-18
So from equations 20-17 and 20-18, the residues r_ick and r_cjk for each DOF c that is influenced by the structural change are required in order to calculate the sensitivity to that change. Even if not all the residues are available, the Maxwell-Betti reciprocity principle can be used to calculate the required values. The residue r_ick can be derived for any reference DOF c when the residues for DOFs i and c are available for an arbitrary reference j, on condition that the driving point residue r_jjk is also available. The driving point residue is also required if the mode shapes are to be correctly scaled.
From the general formula of equation 20-18, it is now possible to calculate the sensitivity value of a mode shape coefficient for DOF i when a structural change is considered for the parameter P, which will affect DOFs a and b. The corresponding scaled mode shape coefficients for each mode in the modal model are required. From the definition of the dynamic stiffness matrix Q, the three specific cases of P being a mass, a linear spring (stiffness) or a viscous damper can be considered.
Mass
This is the case where P is a mass at a specific DOF a. Equations 20-17 and 20-18 are then simplified to

∂λ_k/∂m_a = -λ²_k φ²_ak        Eqn 20-19

∂φ_ik/∂m_a = -λ_k φ²_ak φ_ik - φ_ak Σ_{m=1,m≠k}^{2N} ( λ²_k / (λ_k - λ_m) ) φ_am φ_im        Eqn 20-20
Stiffness
This is the case where P is a linear spring between DOFs a and b. Equations 20-17 and 20-18 are then simplified to

∂λ_k/∂k_ab = -(φ_ak - φ_bk)²        Eqn 20-21

∂φ_ik/∂k_ab = -(φ_ak - φ_bk) Σ_{m=1,m≠k}^{2N} (φ_am - φ_bm) φ_im / (λ_k - λ_m)        Eqn 20-22
Damping
This is the case where P is a viscous damper between DOFs a and b. Equations 20-17 and 20-18 then become

∂λ_k/∂c_ab = -λ_k (φ_ak - φ_bk)²        Eqn 20-23

∂φ_ik/∂c_ab = -( (φ_ak - φ_bk)² / 2 ) φ_ik - λ_k (φ_ak - φ_bk) Σ_{m=1,m≠k}^{2N} (φ_am - φ_bm) φ_im / (λ_k - λ_m)        Eqn 20-24
The imaginary parts of equations 20-19, 20-21 and 20-23 are used to compute the sensitivities of the damped natural frequencies. The corresponding real parts express the sensitivities of damping factors or exponential decay rates.
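The mass sensitivity can be checked numerically; a sketch for the undamped analogue of Eqn 20-19, d(ω_k²)/dm_a = -ω_k² φ_ak² for mass-normalized mode shapes, on a hypothetical 2-DOF system, verified against a finite difference:

```python
import numpy as np

# hypothetical undamped 2-DOF system
M = np.diag([2.0, 1.0])
K = np.array([[300.0, -100.0], [-100.0, 100.0]])

def modes(M, K):
    # mass-normalized mode shapes from K phi = w^2 M phi
    L = np.linalg.cholesky(M)
    Linv = np.linalg.inv(L)
    w2, q = np.linalg.eigh(Linv @ K @ Linv.T)
    return w2, Linv.T @ q

w2, phi = modes(M, K)
a, k = 0, 0                          # mass added at DOF a, mode k
pred = -w2[k] * phi[a, k] ** 2       # predicted d(w_k^2)/d(m_a)

# finite difference check of the sensitivity
dm = 1e-6
M2 = M.copy()
M2[a, a] += dm
w2b, _ = modes(M2, K)
fd = (w2b[k] - w2[k]) / dm
```

The negative sign confirms the statement made for the MOV: adding mass at a response station should lower the natural frequency.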
This section describes the use of a dynamics modification theory to predict the effect of structural modifications on a mechanical structure's modal parameters. These modifications can take the form of local mass, stiffness and/or damping, or FEM-like rod, truss, beam or plate reinforcements. In addition to local modifications, a substructure assembly theory allows you to predict the modal model for a structure that consists of an assembly of substructures.
- the effect of any number and type of connections between any number of substructures (only if installed)
Such an analysis avoids time consuming experimental trial and error procedures of modifying prototypes or scale models of mechanical structures, measuring and analyzing the dynamic behavior and evaluating the effects of these modifications.
The first section of this theoretical background deals with the coupling and
modification of substructures using flexible coupling and general viscous
damping. It continues with the cases of rigid coupling and flexible coupling
with proportional damping.
Modal models for the assembly of substructures with flexible coupling and
viscous damping
Modal models of substructures
Consider two structures, 1 and 2. They obey the following equations of motion
in the Laplace domain :
The matrices Mi , Ci and Ki are the mass, damping and stiffness matrices of the
structure 1 or 2 corresponding to the subscript i. General viscous damping is
allowed. The system matrices are symmetric. The displacement vectors are
{x1 } and {x2 }, and the force vectors {f1 } and {f2 } respectively.
The modal parameters for substructure 1 will first be derived in a general way. For substructure 2 the same method can be used but will not be entirely repeated.
The transformation to decouple the equations of motion can be found by adding a set of dummy equations (Duncan's method):

where

A_1 = [  0   M_1 ]        B_1 = [ -M_1   0  ]
      [ M_1  C_1 ]              [   0   K_1 ]

{y_1} = { s{x_1} }        {p_1} = {   0   }
        {  {x_1} }                { {f_1} }
Using expressions 20-29 and 20-30 in the equation of motion 20-28 after pre-
multiplication with the transpose of V1 and substitution with expression 20-31
one obtains the equations of motion in modal coordinates for substructure 1 :
It can be seen that the equations of motion in modal space are uncoupled.
The same procedure can be repeated for substructure 2, yielding a diagonal eigenvalue matrix Λ_2 and an eigenvector matrix V_2. The eigenvector matrix V_2 defines a transformation to modal coordinates {q_2}. The equations of motion for substructure 2 in modal space are:
Substructure assembly
The system matrices of both substructures can be merged to give a structure
composed of two dynamically independent substructures. For this assembled
structure one can easily derive the modal parameters since they are the same as
those of the two substructures but gathered in one eigenvalue matrix and one
eigenvector matrix.
A = [ A_1   0  ]        B = [ B_1   0  ]        Eqn 20-35
    [  0   A_2 ]            [  0   B_2 ]

{y} = { {y_1} }        {p} = { {p_1} }
      { {y_2} }              { {p_2} }

It can be verified that the matrices of equation 20-35 are diagonalized by the eigenvector matrix V composed as follows:

V = [ V_1   0  ]        Eqn 20-36
    [  0   V_2 ]

Λ = [ Λ_1   0  ]        Eqn 20-37
    [  0   Λ_2 ]

where

{q} = { {q_1} }
      { {q_2} }
An expression of the type of equation 20-33, using the eigenvector and eigenvalue matrices, yields:
A close look at the matrix of eigenvectors V shows that the two substructures 1 and 2 are still dynamically independent. Indeed, any force at any point of one substructure will not induce any motion at any point of the other substructure.
The two substructures can now be connected with flexible connections modelled as springs and dampers. With the connection matrices K_c and C_c equation 20-35 becomes:
where

A_c = [ 0   0   0   0  ]        B_c = [ 0   0   0   0  ]
      [ 0  C_c  0 -C_c ]              [ 0  K_c  0 -K_c ]
      [ 0   0   0   0  ]              [ 0   0   0   0  ]
      [ 0 -C_c  0  C_c ]              [ 0 -K_c  0  K_c ]
Modification of structures
Before decoupling the equations of motion of the connected substructures, a number of modifications to each substructure can be added. Let the structural modifications be gathered in the modification matrices.
These changes can be brought together in system matrices for the modifications:

ΔA = [  0    ΔM_1   0     0   ]        ΔB = [ -ΔM_1   0     0     0   ]
     [ ΔM_1  ΔC_1   0     0   ]             [   0    ΔK_1   0     0   ]        Eqn 20-42
     [  0     0     0    ΔM_2 ]             [   0     0   -ΔM_2   0   ]
     [  0     0    ΔM_2  ΔC_2 ]             [   0     0     0    ΔK_2 ]
It is clear from the matrices of the previous expression that the modifications do not couple the substructures; they only modify each substructure separately.
When the modifications of expression 20-42 are added to the system equation of the connected structure (Eqn. 20-40), one obtains the final equation in physical coordinates

s[A_m]{q} + [B_m]{q} = [V]^t {p}        Eqn 20-44

where
The matrices A_m and B_m for the modified structure can again be diagonalized by a general eigenvalue decomposition. When the new eigenvalues and eigenvectors are represented by Λ' and W, one has:

W^t A_m W = diag(a_k)
W^t B_m W = diag(b_k)

The natural frequencies and the damping factors can be found as the imaginary resp. the real part of the eigenvalues in Λ'. The mode shapes in physical coordinates are the columns of the matrix V·W.
Zero damping
In the case of no damping ([C] = [0]), the next eigenvalue problem is to be solved, with eigenvalues ω²_r and eigenvectors {ψ}_r:

( [K] - ω²_r [M] ) {ψ}_r = {0}

This system has purely imaginary poles, occurring in complex conjugate pairs.

λ_1 = jω_1, λ*_1 = -jω_1, ..., λ_N = jω_N, λ*_N = -jω_N        Eqn 20-53

The modal vectors are real, and are called normal modes (phase: 0° or 180°). The equation of motion can be diagonalized, based on the orthogonality of the modal vectors. Transformation to modal coordinates leads to an equation of motion with diagonal system matrices, being the modal mass and modal stiffness matrices:

[Ψ] = [ {ψ_1} ... {ψ_N} ]        Eqn 20-54
Proportional damping
In the case of proportional damping, the damping system matrix is a linear combination of the mass system matrix and the stiffness system matrix: [C] = α[M] + β[K]. The eigenvalue problem to be solved becomes

( (s² + αs)[M] + (βs + 1)[K] ) {X} = {0}        Eqn 20-60

The eigenvalues are related to the complex poles by

-ω²_r = (λ²_r + αλ_r) / (βλ_r + 1)        Eqn 20-61

The complex poles are solved from the real eigenvalues (-ω²_r) and the damping factors (α, β). When more than two original modes are taken into account (in practical cases this is always the case), the damping factors can be solved in a least squares way from the modal masses, modal stiffnesses and modal damping factors.
Modal synthesis
Only mass and stiffness coupling modifications (ΔM, ΔK), and not damping coupling modifications, can be applied. The equations of motion of the coupled system are
In modal space:
Where:

[Ψ]_m = [Ψ][q_r]_m        Eqn 20-64
Rigid coupling
The above theory relates to flexible coupling, but it is also possible to place
constraints on DOFs connecting substructures to create rigid coupling between
them, or to constrain a single DOF, thus fixing it rigidly to `ground'. In this
case the restrained DOFs will have zero displacement.
The modal coordinates are split up into dependent modal coordinates q_d and independent modal coordinates q_i. The constraint matrix [T] is also split up:

[ [T_d] [T_i] ] { {q_d} } = {0}        Eqn 20-69
                { {q_i} }

{ {q_d} } = [ -[T_d]^(-1) [T_i] ] {q_i} = [T']{q_i}        Eqn 20-70
{ {q_i} }   [         I         ]
When the eigenvalues and the eigenvectors with the independent modal coordinates q_i are solved, the dependent modal coordinates q_d of the eigenvectors can be calculated. In a last step, the mode shapes in physical coordinates are found by the inverse modal transformation.
The starting point for modal synthesis applications is the available modal model for the structure to be modified or for each of the substructures to be assembled.
All modal parameters (natural frequencies, damping values, and scaled mode shapes) have to be available for the calculation procedure. It is important however that some conditions are met.
To obtain correct results, the modal model should include all structural modes to accurately describe the dynamic response for the frequency band of interest. This aspect is especially important when an experimental modal model was obtained from a set of FRFs relative to only one reference station, which happened not to excite some structural modes. This may arise if the reference station was located on or near a nodal point for these modes. In this case the modal model may be well suited to describe the measured FRFs but not the dynamic behavior of the structure as such.
A truss element between two nodes is translated into elementary mass and stiffness modifications. The longitudinal stiffness is related to a 6 by 6 stiffness matrix for 6 Degrees Of Freedom (3 for each node). This matrix is obtained by projecting the longitudinal stiffness along each of the 3 coordinate axes.
A rod element can be added between any two separate nodes on the structure. Rods are modelled with hinges at their ends, so (modal) forces acting on the ends are directed along the axis of the rod. Bending and torsion moments cannot be transmitted from one element to the next.
In effect it provides a means of adding stiffness and mass between two points by the addition of a connection for which you know the mass and the stiffness. To add a rod element you have to specify the nodes between which the rod is to be fixed and the physical characteristics of the rod. A rod element is characterized by its
- longitudinal stiffness K_ij
- its mass M.
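A minimal sketch of such a rod element (hinged ends, longitudinal stiffness projected along the rod axis, mass lumped half per node; the function name and values are illustrative):

```python
import numpy as np

def rod_matrices(n1, n2, k_long, m):
    """Elementary 6x6 stiffness and mass matrices (3 translational DOFs
    per node) for a rod between nodes n1 and n2."""
    d = np.asarray(n2, float) - np.asarray(n1, float)
    e = d / np.linalg.norm(d)            # unit vector along the rod axis
    k = k_long * np.outer(e, e)          # 3x3 projected stiffness
    K = np.block([[k, -k], [-k, k]])     # hinged ends: axial forces only
    M = (m / 2.0) * np.eye(6)            # lumped mass matrix
    return K, M
```

Because the stiffness is a rank-one projection along the rod axis, any rigid body translation of both end nodes produces zero elastic force, as it should.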
A beam element is an element that can transfer translational forces and moments of bending and torsion.
To add a beam element you have to specify the following parameters, which are illustrated below:

[Figure: beam element defined by two end nodes and a reference node, with material properties E, G, m; cross-sectional area A; torsional moment of inertia I_t; and bending moments of inertia I_b and I_p.]
The reference node together with the two end nodes defines the so-called reference plane. The moments of inertia for bending are defined in two directions:
I_b for bending in the reference plane
I_p for bending in a plane perpendicular to the reference plane
The 2 end nodes have six Degrees Of Freedom each: 3 translations and 3 rotations. A beam element can therefore transmit six forces to another beam element: 3 translational forces and 3 moments. For end nodes that are not connected to another beam, only the translational forces can be transmitted, as for example in the case of a stand-alone beam. In the same way, beams that are positioned on a straight line (collinear beams) will not be subjected to torsion.
To add a plate element you have to specify the following parameters, which are illustrated below:
- The number of divisions along the first side, between c1 and c2 (a)
- The number of divisions along the second side, between c2 and c3 (b)
- Material properties of the plate, i.e. Young's Modulus (E), Poisson's ratio (ν), mass density (m).

[Figure: plate element defined by corner nodes c1, c2, c3, c4, with a divisions along the first side, b divisions along the second side, and thickness t.]

When a plate is defined with a and b divisions along its two sides, a mesh of (a x b) rectangles is created as shown in the diagram. As the corner nodes already exist, this means that ((a+1).(b+1) - 4) new nodes are generated.
If there are connection nodes defined then the mesh point situated closest to a connection node is replaced by that node.
- the mesh elements should not deviate too much from a rectangular form, i.e. each corner angle should be approximately 90°
(1) The calculation of the mass and stiffness matrices of a plate membrane described here is based on the plate theory of Mindlin.
Each of the corner nodes of the mesh elements has 6 Degrees Of Freedom - 3 translations and 3 rotations - and so can transmit six forces to another mesh element. This is also the case between elements of different plate membranes, as long as they are connected either at a corner or at a common connection node.
The parameters m, k and c of this SDOF system are designed such that the motion of the coupling point in the direction of this absorber is decreased (damped) as much as possible for a certain frequency, typically at resonance.

[Figure: SDOF absorber (mass m, spring k, damper c) attached to the coupling point; x_a e^{jωt} is the motion of the attachment point and x_r e^{jωt} the relative motion of the absorber mass.]

If the motion of the coupling point in the direction of the absorber is designated by x_a and the frequency to be damped by f (= ω/2π), then the following formulae apply for the equations of motion of m (x_r is the relative displacement between the absorber's mass and the attachment point).
(k x_r + jωc x_r) e^{jωt} = m (x_a + x_r) ω² e^{jωt}        Eqn 20-72

x_r = m ω² x_a / (-m ω² + jωc + k)        Eqn 20-73

F = ( (k + jωc) m / (-m ω² + jωc + k) ) ω² x_a        Eqn 20-75

m_eq = (k + jωc) m / (-m ω² + jωc + k)        Eqn 20-77
It can be shown that if no damping is used (c = 0) the mass and stiffness of the absorber can be designed such that the vibration of the attachment point is eliminated entirely (x_a = 0). This happens if the natural frequency of the absorber equals the forcing frequency ω.
where μ is the ratio between the absorber's mass and the "equivalent" mass of the system at resonance:

μ = m / m_eq        Eqn 20-79
ζ_opt = c / (2√(km)) = √( 3μ / (8(1 + μ)³) )        Eqn 20-80
From equations 20-78, 20-79 and 20-80 the physical parameters m, c and k of
the attached absorber can be computed if the following values are known.
m_eq  the equivalent mass (see further)

m_eq = 1 / | V_i² · 2 j ω_d |        Eqn 20-81

where
V_i is the scaled mode shape coefficient of the mode to be tuned at
the attachment point
ω_d is the damped natural frequency of the mode to be tuned.
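Given equations 20-79 and 20-80, the physical absorber parameters can be sketched as follows. The function name and the tuning rule k = m ω² (the absorber's natural frequency set equal to the frequency to be damped, as described above for the undamped case) are assumptions for illustration; m_eq is taken as a real value:

```python
import math

def absorber_parameters(mu: float, m_eq: float, omega: float):
    """Sketch of a tuned-absorber design.

    mu    -- chosen mass ratio m / m_eq (eqn 20-79)
    m_eq  -- equivalent mass of the system at resonance (eqn 20-81)
    omega -- frequency to be damped, rad/s
    """
    m = mu * m_eq                      # eqn 20-79: mu = m / m_eq
    k = m * omega ** 2                 # tune absorber natural frequency to omega
    # eqn 20-80: optimal damping ratio zeta_opt = c / (2 sqrt(k m))
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    c = zeta_opt * 2.0 * math.sqrt(k * m)
    return m, k, c

m, k, c = absorber_parameters(mu=0.05, m_eq=10.0, omega=2 * math.pi * 50)
```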
20.3.3.9 Constraints
Physical constraints can be defined between separate DOFs or between one
DOF and itself.
Defining a constraint between two separate DOFs applies a rigid coupling between
them. Defining a constraint between a DOF and itself effectively fixes it
to `ground'.
For the simplified case of two substructures which are possibly modified (symbol
Δ) and connected to each other (subscript c), the following procedure is followed
to predict the modal model of the resulting structure :
1. Retrieve the modal models for each substructure. Build the diagonal matrices
Λ1 and Λ2 of poles and the (possibly complex) modal matrices V1
and V2 of scaled mode shapes.
2. Join both modal models into the global matrices Λ (equation 20-37) and V
(20-36).
3. Define the connecting elements (springs and dash-pots) between both
substructures. This yields matrices Ac and Bc (equation 20-40).
4. Define the necessary modifications and join them into matrices ΔA and ΔB
(equation 20-42).
5. Use the modal matrix V to transform the connection and modification matrices
to the modal space.
6. Add the diagonalized matrices in modal space (equation 20-44) to yield the
system matrix of the resulting structure.
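The steps above can be illustrated for a deliberately simplified case: undamped substructures with real, mass-normalized mode shapes, where a physical stiffness modification or connecting spring dK transforms to modal space as Vᵀ dK V and is added to the diagonal modal stiffness. The text's first-order formulation (equations 20-36 to 20-44) is more general, handling complex modes and damping; this sketch only shows the principle, and all names are illustrative:

```python
import numpy as np

def modified_frequencies(omega, V, dK):
    """Sketch: natural frequencies after a stiffness modification.

    omega -- natural frequencies (rad/s) of the joined substructures
    V     -- mass-normalized mode shape matrix (n_dof x n_modes)
    dK    -- physical stiffness modification/connection matrix
    """
    Lam = np.diag(np.asarray(omega) ** 2)   # modal stiffness V.T K V
    Lam_mod = Lam + V.T @ dK @ V            # add modification in modal space
    w2, _ = np.linalg.eigh(Lam_mod)         # eigenvalues = new omega^2
    return np.sqrt(np.clip(w2, 0.0, None))

# example: two uncoupled 1-DOF "substructures" joined by a unit spring
omega = [1.0, 2.0]                          # rad/s
V = np.eye(2)                               # trivial mass-normalized shapes
dK = np.array([[ 1.0, -1.0],
               [-1.0,  1.0]])               # spring between DOF 0 and DOF 1
print(modified_frequencies(omega, V, dK))
```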
Numerical problems
The eigenvalue problem mentioned above, which is to be solved for the modified
system, can be subject to numerical problems. These can arise from two
sources.
The criterion used in this respect is the condition number of the system matrix.
The system matrix is the one whose eigenvalues and eigenvectors yield the modal
parameters. If this condition number exceeds a certain (critical) value this
is reported to the user. The critical value used has been established by
empirical tests and is by default set to 1e+8.
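The conditioning check described here can be reproduced with NumPy; the function name is illustrative, but the critical value 1e+8 is the default stated in the text:

```python
import numpy as np

CRITICAL = 1e8   # default critical value from the text

def check_conditioning(system_matrix: np.ndarray) -> bool:
    """Return True if the system matrix is well enough conditioned
    for the eigenvalue problem (2-norm condition number check)."""
    cond = np.linalg.cond(system_matrix)
    return bool(cond <= CRITICAL)

# a nearly singular matrix trips the check
bad = np.array([[1.0, 1.0],
                [1.0, 1.0 + 1e-12]])
print(check_conditioning(np.eye(2)), check_conditioning(bad))  # prints: True False
```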
The scaled mode shapes of the original structure have a physical dimension
related to the measurement data from which they were extracted by modal
parameter estimation techniques. Since this modal model is a valid description
for the relation between input forces and response displacements, the applied
modifications should be defined in a unit set which is consistent for these
quantities. The same rule applies to the interpretation of the resulting modal
model.
Erroneous results are bound to occur when the original mode shape vectors are
not scaled correctly. This might arise because of the incorrect definition of the
reference point for the data (wrong driving point residue), not using the correct
transducer sensitivity or calibration factors for the experimental FRFs (force as
well as response transducers), or the use of an inconsistent unit set during the
modal test or analysis phase. These errors may cause an entirely wrong
transformation of the applied physical modifications to the modal space, and a
small mass modification for example may grow out of proportion because of this
bad scaling.
[Figure: a stiffening beam on the main plate, meshed into 4 elements between 5 nodes]
From the geometrical properties of the beam the user can calculate the cross
sectional area and the different moments of inertia. Tables listing the
characteristics of various beam types can be found in the literature.
Figure 20-1 Stiffening rib orientation and local co-ordinate system (Axes 1 2 and 3)
Each node i of the beam carries a translational DOF vector T_i = {U_i, V_i, W_i}
and a rotational DOF vector R_i containing the three corresponding rotations.

[Figure: beam meshed into elements between five nodes, each node carrying a
translational and a rotational DOF set: T1 R1, T2 R2, T3 R3, T4 R4, T5 R5]
Static condensation
Static condensation in a dynamic analysis is based upon the assumption that
the mass at some Degrees Of Freedom can be neglected without a significant
loss of accuracy of the dynamic model in the frequency range of interest. More
explicitly, for the beam elements in the application of interest, consider the
rotational Degrees Of Freedom to be without mass. The assembled mass and
stiffness matrices of the entire beam can then be partitioned as follows,
where
T refers to the translational DOFs
R refers to the rotational DOFs.
The modal parameters describing the dynamic behavior of this structure are
then obtained by solving the following eigenvalue problem,

[ K_TT  K_TR ] { V_T }        [ M_TT  [0] ] { V_T }
[ K_RT  K_RR ] { V_R }  = ω²  [ [0]   [0] ] { V_R }        Eqn 20-83
From the bottom half of equation 20-83 a relation between the translational and
the rotational DOFs is derived,

[K_RT] {V_T} + [K_RR] {V_R} = {0}        Eqn 20-84

which can be solved to express the rotational DOFs in terms of the translational
ones,

{V_R} = −[K_RR]⁻¹ [K_RT] {V_T}        Eqn 20-85
Substituting this result into the top half of equation 20-83 yields the condensed
eigenvalue problem,

[K_T] {V_T} = ω² [M_TT] {V_T}        Eqn 20-86

with

[K_T] = [K_TT] − [K_TR] [K_RR]⁻¹ [K_RT]        Eqn 20-87
The matrices [K_T] and [M_TT] of equation 20-86 are used to dynamically model
the beam structure. The model will only be valid in the frequency range where
the mass effects of the rotational DOFs are negligible. Mass effects only
contribute significantly to the dynamic behavior around and above those
resonances where they are capable of storing a considerable amount of kinetic
energy.
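A minimal numerical sketch of this condensation, using the partitioned stiffness blocks and the translational mass matrix (function and variable names are illustrative):

```python
import numpy as np

def condense(K_TT, K_TR, K_RT, K_RR, M_TT):
    """Static condensation of massless rotational DOFs.

    Returns the condensed stiffness K_T = K_TT - K_TR K_RR^-1 K_RT
    (cf. equation 20-87) and the natural frequencies / translational
    mode shapes from K_T V_T = w^2 M_TT V_T."""
    K_T = K_TT - K_TR @ np.linalg.solve(K_RR, K_RT)
    w2, V_T = np.linalg.eig(np.linalg.solve(M_TT, K_T))
    order = np.argsort(w2.real)
    return K_T, np.sqrt(np.abs(w2[order].real)), V_T[:, order]

# tiny 2-DOF example: one translational, one massless rotational DOF
K_T, freqs, V_T = condense(np.array([[2.0]]), np.array([[-1.0]]),
                           np.array([[-1.0]]), np.array([[1.0]]),
                           np.array([[1.0]]))
```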
Note that [K_T] as expressed in equation 20-87 can only be computed if [K_RR] is
non-singular. The stiffness matrix is singular if rigid body motion is possible.
The rigid body mode of a beam along its longitudinal axis is not naturally
eliminated by constraining its three translational DOFs, in general causing a
first order singularity. With such configurations it will not be possible to
store torsional deformation energy in the beam; therefore the corresponding
off-diagonal elements of the assembled stiffness matrix can be neglected and the
diagonal elements made relatively small. In this way the matrix becomes
invertible and the predicted dynamic behavior will reflect the inability to store
torsional deformation energy in the beam. This operation will, however, not be
necessary when the beam is two or three dimensional, as in such cases rigid body
motion through rotation around one of the axes is no longer possible.
The natural frequencies of the modes of vibration, which seem to be the most
important parameters in the modal model, may well not dominate the response if
conditions are such that the modes are not excited.
The Forced response functions enable you to investigate this by determining
the response of the modal model to known force spectra.
The equations of motion of a linear, time invariant mechanical structure are
expressed in the frequency domain as follows:

{X(ω)} = [H(ω)] {F(ω)}        Eqn 20-88

When the response at one specific degree of freedom (DOF), say i, is needed, the
above equations become:
X_i(ω) = Σ_{j=1}^{N0} H_ij(ω) F_j(ω)        Eqn 20-89
This means that the response at DOF i can be written as a linear combination of
the applied forces, each weighted by the corresponding FRF between input
DOF j and output DOF i. These frequency dependent weighting factors describe
the dynamic flexibility between two degrees of freedom i and j of a mechanical
structure.
When the modal model for that structure is available, e.g. from modal test data
or finite element calculations, the FRF can be modelled as given by
H_ij(ω) = Σ_{k=1}^{2N} r_ijk / (jω − λ_k)        Eqn 20-90
Even if not all the residues are available, the Maxwell-Betti reciprocity
principle can be used to calculate the required values. Equation 20-4 allows the
residue r_ick to be derived for any reference DOF c when the residues for DOFs i
and c are available for an arbitrary reference j, on condition that the driving
point residue r_jjk is also available. The driving point residue is also required
if the mode shapes are to be correctly scaled.
Equation 20-91 represents the response at all DOFs to all forces with a
contribution from all modes. The contribution of each mode is given by

mode k, k = 1 to N:
f_k(ω) = [1/(jω − λ_k)] Σ_{j=1}^{N0} v_jk F_j(ω) = p_k(ω) Σ_{j=1}^{N0} v_jk F_j(ω)

mode k, k = N+1 to 2N (complex conjugate modes):
f_k(ω) = [1/(jω − λ_k*)] Σ_{j=1}^{N0} v_jk* F_j(ω) = p_k(ω)* Σ_{j=1}^{N0} v_jk* F_j(ω)

        Eqn 20-92
The response at each DOF, taking into account the contribution of each mode, is
then given by

X_i(ω) = Σ_{k=1}^{N} v_ik f_k(ω) + Σ_{k=N+1}^{2N} v_ik* f_k(ω)        Eqn 20-93
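This forced-response synthesis (equations 20-89 and 20-90) can be sketched numerically, assuming the poles and residue matrices are given with the conjugate pairs included in the 2N terms; names are illustrative:

```python
import numpy as np

def forced_response(poles, residues, F, omega):
    """Sketch of forced response from a modal model.

    poles    -- (2N,) complex poles lambda_k, conjugates included
    residues -- (2N, Ni, Nj) residue matrices r_ijk
    F        -- (Nj, Nf) force spectra F_j(omega)
    omega    -- (Nf,) frequency axis (rad/s)
    returns  -- (Ni, Nf) response spectra X_i(omega)
    """
    jw = 1j * np.asarray(omega)
    X = np.zeros((residues.shape[1], len(omega)), dtype=complex)
    for lam, r in zip(poles, residues):
        # per-pole FRF contribution r_ij / (jw - lambda_k), eqn 20-90,
        # applied to the forces as in eqn 20-89
        X += (r @ F) / (jw - lam)
    return X
```

For a single SDOF system (m = 1, k = 1, c = 0.1) with residue 1/(2jω_d), the static response X(0) recovers 1/k as expected.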
Chapter 21 Geometry concepts
The most important part of the model is the nodes. These define the points
where measurements will be taken on the structure, and the points where the
mode shape deformations are calculated. It is common practice to define
connections or edges between specific nodes to form a wire frame model of the
structure. In addition, surfaces can be defined that aid in the visual
representation of the structure.
[Figure: wire frame model showing a node (with its x, y axes), a connection and a surface]
Note that the definition of nodes and meshes for acoustic measurements is
described in the "Acoustic" documentation.
21.2 Nodes
A node is defined by its location and its orientation.
Location
The location of a node in the 3D space is defined by a set of 3 real numbers
known as the coordinates. Coordinates are always defined relative to a reference
coordinate system.
The reference coordinates are normally shown along with the model in the
display window. The origin of the global coordinate system is the origin of the
3D space that contains the test structure, and the global symmetry of the
structure should be considered when defining this.
The reference coordinate system can be either Cartesian, cylindrical or
spherical.
[Figure: Cartesian, cylindrical and spherical reference coordinate systems]

Coordinate system    Coordinates    Example
Cartesian            X, Y, Z        1, 1, 1
Cylindrical          r, θ, Z        2, 45°, 1
Spherical            r, θ, φ        3, 45°, 55°
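Converting cylindrical and spherical coordinates to Cartesian can be sketched as below. The angle conventions assumed here (θ measured from X in the XY plane, φ measured from the Z axis) are an assumption and should be verified against the software's own definitions:

```python
import math

def cylindrical_to_cartesian(r, theta, z):
    """Cylindrical (r, theta, z) -> Cartesian; theta from X toward Y."""
    return (r * math.cos(theta), r * math.sin(theta), z)

def spherical_to_cartesian(r, theta, phi):
    """Spherical (r, theta, phi) -> Cartesian; phi measured from Z
    (assumed convention), theta from X in the XY plane."""
    return (r * math.sin(phi) * math.cos(theta),
            r * math.sin(phi) * math.sin(theta),
            r * math.cos(phi))

# cylindrical (2, 45°, 1) -> (sqrt(2), sqrt(2), 1)
print(cylindrical_to_cartesian(2.0, math.radians(45.0), 1.0))
```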
Orientation
Nodal orientation is defined using a Cartesian coordinate system. In many
applications the orientation of the node defines the measurement directions.
The origin of the nodal coordinate system coincides with the node's location. If
the principal axes of the nodal coordinate system are not coincident with the
measurement directions, in either a positive or a negative sense, then the
difference must be defined with Euler angles.
Euler angles
Three Euler angles are used to define the orientation of one coordinate system
relative to a reference coordinate system with the same origin.
The first angle, θxy (Euler XY), is a rotation about the Zr axis of the reference
system (positive from the Xr axis to the Yr axis). This generates a first
intermediate system, indicated by a single quote (') on the axis labels.
"xz z"
z'
The second angle "xz (Euler XZ) is a rotation
about the y' axis of the first intermediate system.
(Positive from the x' axis to the z' axis). This genĆ
erates a second intermediate system, indicated x" y'y''
by two quotes " on the axis labels.
+
"xz
x'
"yz
Z z''
Finally the third angle, "yz (Euler YZ) is a rotaĆ
tion about the x" axis of the second intermediate
system, positive from the y" axis to the z" axis.
Y
This last orientation generates the desired new
coordinate system orientation. + "yz
y''
x''X
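The three successive rotations can be composed into a single direction cosine matrix. The sketch below follows the positive senses stated above (X→Y about Z, x'→z' about y', y"→z" about x"), composing the intrinsic rotations by right-multiplication; the function name is illustrative and the convention should be verified against the software in use:

```python
import numpy as np

def euler_to_matrix(t_xy, t_xz, t_yz):
    """Direction cosine matrix for the three Euler angles described
    above (intrinsic rotations: about Zr, then y', then x'').
    Columns give the nodal axes expressed in reference coordinates."""
    c, s = np.cos, np.sin
    Rz = np.array([[c(t_xy), -s(t_xy), 0.0],      # positive X -> Y
                   [s(t_xy),  c(t_xy), 0.0],
                   [0.0,      0.0,     1.0]])
    Ry = np.array([[c(t_xz), 0.0, -s(t_xz)],      # positive x' -> z'
                   [0.0,     1.0,  0.0    ],
                   [s(t_xz), 0.0,  c(t_xz)]])
    Rx = np.array([[1.0, 0.0,      0.0     ],     # positive y'' -> z''
                   [0.0, c(t_yz), -s(t_yz)],
                   [0.0, s(t_yz),  c(t_yz)]])
    return Rz @ Ry @ Rx

# e.g. theta_xy = 90 deg rotates the nodal x axis onto the reference Y axis
R = euler_to_matrix(np.pi / 2, 0.0, 0.0)
```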