
Mechanical Systems and Signal Processing 20 (2006) 1783–1818
www.elsevier.com/locate/jnlabr/ymssp

The history of random vibrations through 1958


Thomas L. Paez
Sandia National Laboratories, Albuquerque, NM, USA
Received 6 March 2006; received in revised form 5 July 2006; accepted 5 July 2006

Abstract

Interest in the analysis of random vibrations of mechanical systems started to grow about a half century ago in response
to the need for a theory that could accurately predict structural response to jet engine noise and missile launch-induced
environments. However, the work that enabled development of the theory of random vibrations started about a half
century earlier. This paper discusses contributions to the theory of random vibrations from the time of Einstein to the time
of an MIT workshop that was organized by Crandall in 1958.
© 2006 Elsevier Ltd. All rights reserved.

1. Introduction

Random vibrations are the oscillations of mechanical systems subjected to temporally, and perhaps
spatially, randomly varying dynamic environments. Their study is particularly important because practically
all real physical systems are subjected to random dynamic environments at some times during their lives, and
many systems will fail due to the effects of these exposures. Mathematical and experimental studies of random
vibrations have historically been pursued to explain observed phenomena, to predict the characteristics of
system responses to as yet unrealised environments, to aid in the design of mechanical systems and systems
that isolate them, and to demonstrate the survivability and response character of physical systems in the
laboratory.
Though random vibrations have been observed for millennia because of the effects on structures of
earthquakes, wind, ocean waves, and other natural environments, they have only been studied in a
mathematical framework since about the turn of the previous century. Einstein performed the first
mathematical analysis that could be considered a random vibration analysis when he considered the Brownian
movement of particles suspended in a liquid medium. The results of his study were published in 1905. (This is
the same year in which his results on the photoelectric effect, for which he received the 1921 Nobel prize in
physics, and his results on special relativity were published.) Numerous studies whose results would later be
used to explain the random vibration of mechanical systems were carried out in the decades to follow, and in
1930 Norbert Wiener formally defined the spectral density of a stationary random process, i.e., a random
process in a temporal steady state. Spectral density is the de facto fundamental quantitative descriptor of
stationary random processes in use today. But it was not until the 1950s that the subject of random vibrations

E-mail address: tlpaez@sandia.gov.

0888-3270/$ - see front matter © 2006 Elsevier Ltd. All rights reserved.
doi:10.1016/j.ymssp.2006.07.001

of mechanical systems would be addressed directly. A comprehensive theory of random vibrations was needed
to accurately predict structural response to jet engine noise and missile launch-induced environments. In 1958,
Crandall organised a special summer programme at the Massachusetts Institute of Technology to address
problems in the various areas of random vibrations of mechanical systems. Specifically, the papers covered
topics such as analysis of random vibrations and design for random environments, random vibration testing,
and the analysis of data from random environments. The history of many contributions to the theory and
practice of random vibrations from the time of Einstein to 1958, and the years immediately following, is described.
The scheme used for the presentation of historical material is chronological and, to the limited extent
possible, graphical. The major developments from four eras, seen by the author as well defined yet necessarily
overlapping, are covered, to the extent possible, in order. A few of the mathematical ideas are supported with
graphics. Some limited mathematics are included.
Many texts are available for those seeking a detailed introduction to, or even a more advanced presentation
of the mathematics of random vibrations. Among these are the texts by Crandall [1,2], Crandall and Mark [3],
Robson [4], Lin [5], Elishakoff [6], Nigam [7], Newland [8], Bolotin [9], Augusti et al. [10], Ibrahim [11], Yang
[12], Schueller and Shinozuka [13], Roberts and Spanos [14], Ghanem and Spanos [15], Soong and Grigoriu
[16], Wirsching et al. [17], and Bendat and Piersol [18]. Texts that contain discussions on historical
developments in the theory of stochastic processes (The term ‘‘stochastic’’ is used interchangeably with
‘‘random’’.) include those by Gnedenko [19], and Feller [20]. Many important topics in the theory of random
vibration of mechanical systems that are not discussed in this paper are discussed in the texts listed, including
first passage and peak response of structures, numerical techniques in random vibration and signal analysis,
random vibration of structures that are themselves random, stochastic fatigue, random vibration of structures
modelled via finite elements, random vibration of non-linear structures, and non-Gaussian random vibrations.
The important ideas of a random vibrations framework can be described using a simple schematic diagram.
Consider Fig. 1. Random vibrations occur in a mechanical system when it is subjected to a stochastic
environment—one applied as forces or pressure on the system, or one applied at system boundaries through its
support structure. Each mechanical system has its own characteristics and features—simple or complex, linear
(quasilinear) or non-linear, time varying or not, etc. Ensembles of mechanical systems have their own ranges
of characteristic random variations; however, classical random vibration studies consider excitation
randomness only. The effect of a random environment on a structural system is stochastic response motion.
The activities of random vibration analysis can be succinctly described with the following (scalar or vector)
equation representation:
\dot{\mathbf{X}} = \mathbf{g}(\mathbf{X}, \mathbf{Q}, \mathbf{a}), \qquad \mathbf{X}(0) = \mathbf{X}_0, \qquad -\infty < t < \infty.   (1.1)
The quantity X represents system response (such as displacement), Q represents system excitation (such as a
force applied on a structural surface), a represents system parameters, the dot denotes differentiation with
respect to time, and g(.) is the deterministic functional form that relates the former quantities to the response
derivative. Bold type is used to denote vector and matrix quantities. The purpose of random vibration analysis
is to specify the stochastic system response, X, in terms of the system characteristics g(X; a) (deterministic
mathematical form, g, and deterministic parameters, a), and the random excitation, Q. The response, X, can
be described in the framework of probability theory. The probabilistic character of the excitation and response
may be specified completely, at one extreme, by higher-order probability distributions, or, as more commonly
occurs, only partially specified by some of their average features. Random vibration analyses can be

Fig. 1. Schematic of excitation/system/response.



performed by representing the mechanical system with a set of differential equations, then solving the
equations, or they can be performed without writing the differential equations of dynamic response, by writing
equations that govern the probability distribution of system response.
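To make the notation of Eq. (1.1) concrete, the short sketch below (an illustration added here, not part of the original paper) integrates the state equation numerically for a linear single-degree-of-freedom oscillator; the function name simulate_sdf, the parameter values, and the Gaussian pulse-train excitation model are all assumptions made only for this example.

# Illustrative sketch only: Monte Carlo integration of Eq. (1.1) for a linear
# SDF oscillator.  The state is X = [displacement, velocity], the parameters a
# are (m, c, k), and the random excitation Q is modelled here as a sequence of
# independent Gaussian pulses of assumed strength.
import numpy as np

def simulate_sdf(m=1.0, c=0.1, k=(2.0 * np.pi) ** 2, q_std=50.0,
                 dt=0.001, duration=10.0, seed=0):
    """Euler integration of X_dot = g(X, Q, a) for one excitation realisation."""
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    x = np.zeros((n + 1, 2))                      # columns: displacement, velocity
    q = rng.normal(0.0, q_std, n)                 # one random force value per step
    for j in range(n):
        d, v = x[j]
        x[j + 1, 0] = d + v * dt                           # displacement update
        x[j + 1, 1] = v + (q[j] - c * v - k * d) / m * dt  # velocity update
    return x

x = simulate_sdf()                                # one response realisation
print("RMS displacement of this realisation:", np.sqrt(np.mean(x[:, 0] ** 2)))

Repeating the simulation over many excitation realisations and averaging gives estimates of the ensemble measures (mean, variance, and so on) discussed below.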
The purpose of design for random vibration is to specify the characteristics of a random excitation, Q, specify
the stochastic characteristics of the system response, X, or, perhaps, a deterministic limit on the random
response, then establish the characteristics of a system, g(X; a), that will yield the desired response.
Testing of mechanical systems in random environments may serve many purposes. It may be done simply to
establish the character of a particular system in a random environment. It may be used to show that the
random response satisfies certain criteria. Testing of physical systems may also be used to explore our
capabilities to model the same physical systems. However, it is usually only out of indirect necessity that a
system model relating excitation to response is developed in the course of random vibration testing.
Random signal analysis uses measured data to estimate the measures critical for description of random
processes. Of course, this is fundamental to the pursuit of practical environment description and test
specification.
In spite of the importance of random vibration design, testing and signal analysis, this paper will focus
mainly on the description of historical developments in random vibration analysis. Section 2 summarizes the
first investigations into random vibrations, from Einstein’s description of Brownian motion as a diffusion
process to description of mechanical system response in terms of averages. Section 3 summarizes the
development of spectral density, the fundamental descriptor of stationary random processes, and traces some
preliminary thoughts on the subject back to 1889. Section 4 summarizes advances that were motivated by
problems in electrical and communications systems that arose prior to and during World War II. Analysis of
the random vibrations of mechanical systems, as practised today, started in the 1950s, and the beginnings of
the analytical developments are covered in Section 5. The summary provided here concludes with a description
of some of the works from Crandall’s 1958 workshop and a few others that followed immediately thereafter.

2. Einstein’s era

Around the turn of the previous century, Einstein [21] constructed a framework for analysing the Brownian
movement—the random oscillation of particles suspended in a fluid medium and caused by the molecular
motion associated with the kinetic theory of matter. Brownian movement had been recognised about a century
earlier during observations of microscopic particles of pollen immersed in a liquid medium; it is characterized
by the erratic movement of the pollen particles. The particle motion characteristics depend on the mass and
geometry of the particle and the physical characteristics (such as viscosity and temperature) of the fluid
medium.
Because the problem Einstein solved yields the probabilistic description of the motion of a mass attached via
a viscous damper to a fixed boundary and excited with white noise (a random excitation with frequencies
covering a broad band), his development can be thought of as the first solution to a random vibration problem
and the dawning of the era of random vibration analysis. Einstein, however, did not consider the solution of
the random vibration problem as the most important breakthrough of the analysis. He stated, ‘‘If the
movement discussed here can actually be observed (together with the laws relating to it that one would expect
to find), then classical thermodynamics can no longer be looked upon as applicable with precision to bodies
even of dimensions distinguishable in a microscope: an exact determination of actual atomic dimensions is
then possible.’’
In his solution of the problem of Brownian movement Einstein did not use a direct formulation that writes
and analyses the differential equation governing motion of the system. (The direct approach would eventually
become the one most commonly used for random vibration analysis.) However, for reference, the governing
equation is
m\ddot{X} + c\dot{X} = W(t), \quad t \ge 0, \qquad X(0) = 0, \; \dot{X}(0) = 0,   (2.1)

where {X(t), t ≥ 0} is the one-dimensional particle displacement response random process, m is particle mass, c is the damping that ties the mass to an inertial frame, {W(t), −∞ < t < ∞} is the white noise excitation random process, and dots denote differentiation (in a sense appropriate for a random process) with respect to time.

The white noise excitation random process has the constant spectral density S_WW(ω) = S_WW, −∞ < ω < ∞.
(Spectral density defines the mean square signal content of a random source as a function of frequency. The
spectral density defined here is two-sided because it is defined for positive and negative frequencies. Negative
frequencies are to be interpreted in the sense that harmonic functions are defined for negative arguments. The
definition of spectral density, along with some examples, and some ideas underlying random processes and
their notations are provided in the following section.) The system of Eq. (2.1) is shown schematically in Fig. 2.
The white noise random process is a source with mean square signal content that is uniformly distributed over
the entire range of frequencies (up to infinity, in theory). This idea will be discussed in more detail in the
following section, and examples will be presented.
One way of thinking about a random process is to consider it as a sequence of random variables. In this
interpretation, the excitation and all measures of the response are random processes. The random variables in
these random processes characterize the quantity under consideration (excitation or response) at a given time,
t. For example, X(t) is the random variable representing displacement response at time t. The random variable
has a formal definition which we will not explore, here, but the practical idea behind a random variable is that
when we perform a sequence of random experiments, the values that the random variable assumes (called
realisations) are observed empirically to follow a probability distribution. One descriptor of a probability
distribution of a random variable X is the probability density function (PDF), f_X(x), −∞ < x < ∞. The PDF is non-negative and has a unit integral on (−∞, ∞). The integral of the PDF over an interval (a, b], \int_a^b f_X(x)\,dx, where a ≤ b, is the relative chance—probability—that the realisation of the random variable X will occupy the interval (a, b] when one random experiment is performed. The expected value, or mean, of a random variable is the average value of all possible random variable realisations. It is denoted E[X(t)] = μ_X(t). The variance of a random variable is the average of the square of the deviations of the random variable realisations from the mean. The variance is denoted V[X(t)] = σ_X²(t). The standard deviation of a random variable is the square
root of its variance. Every random process has PDF, mean, variance, standard deviation, and many other
measures for each of its random variables. Beyond these things, a random process also has other measures that
characterize the simultaneous behaviour of pairs, triplets, etc., of its random variables. Some of these will be
considered later in this section and in the following sections. (See [22], for more details.)
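As a small numerical aside (not part of the paper), the quantities just defined can be estimated from repeated random experiments; the data, sample size, and bin choices below are arbitrary assumptions made only for illustration.

# Sketch: estimating the PDF, mean, variance, and standard deviation of a
# random variable from a set of realisations produced by repeated experiments.
import numpy as np

rng = np.random.default_rng(1)
realisations = rng.normal(0.0, 2.0, 10_000)       # hypothetical experimental data

mean = realisations.mean()                        # estimate of E[X(t)]
variance = realisations.var()                     # estimate of V[X(t)]
std_dev = np.sqrt(variance)                       # standard deviation

# empirical PDF: histogram normalised so that its total integral is one
density, edges = np.histogram(realisations, bins=50, density=True)
width = edges[1] - edges[0]
in_ab = (edges[:-1] >= -1.0) & (edges[1:] <= 1.0) # bins lying inside (-1, 1]
print(mean, variance, std_dev, density[in_ab].sum() * width)  # last value ~ P(-1 < X <= 1)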
Einstein developed the diffusion construct for analysing the random vibration of mechanical systems. This
framework models diffusion of a particle (rigid mass) under the influence of applied impacts. In the
application he considered, the impacts arose from the motions of the molecular constituents. The paper
Einstein wrote has two parts. The first part uses the idea of osmotic pressure and equilibrium of a sphere
moving in a fluid medium to derive the coefficient of diffusion of such a particle. He showed that the
coefficient of diffusion can be modelled as D = (RT/N)(1/(6π c_f r)), where R is the universal gas constant, T is absolute temperature, N is Avogadro's number, c_f is the coefficient of viscosity of the fluid, and r is the radius
of the sphere. The coefficient of diffusion (to be used, later) governs the rate at which particles will spread
throughout a fluid in an equilibrium condition.
The second part of Einstein’s paper considered particle diffusion in one dimension in more detail. He
defined a time interval τ that is short compared to an interval of visual observation but long enough that particle movements executed in two consecutive intervals are independent. He assumed that during the time τ
Fig. 2. Schematic of the system considered by Einstein in his solution of the Brownian movement problem/random vibration analysis.

the x coordinate of an individual particle will increase by a random amount Δ (positive or negative). The quantity Δ is a random variable with symmetric PDF f_Δ(α), −∞ < α < ∞. (Einstein used neither the term "random variable" nor "PDF" because the terminology was not yet established; however, these are what he described.) He then specified that the PDF of the x coordinate of a particle after time t, i.e., the PDF of a random variable X(t) in the random process {X(t), t ≥ 0}, is f_{X(t)}(x), t ≥ 0, −∞ < x < ∞, and wrote the relation

dx \cdot f_{X(t+\tau)}(x) = dx \cdot \int_{-\infty}^{\infty} f_{X(t)}(x + \alpha)\, f_{\Delta}(\alpha)\, d\alpha, \quad t \ge 0, \; -\infty < x < \infty.   (2.2)

This equation states that the probability of finding the x coordinate of a particle in the interval (x, x + dx] at time t + τ (left side) equals the "sum" of probabilities that the particle starts in the interval (x + α, x + α + dα] at time t, and moves a distance α during the time increment τ (right side). He expanded f_{X(t+τ)}(x) in a Taylor's series in τ, and f_{X(t)}(x + α) in a Taylor's series in α, and substituted the series into Eq. (2.2). He simplified the integral on the right-hand side and noted that, because of the symmetry of f_Δ(α), only the odd-numbered terms (those containing even powers of α) are non-zero. The term f_{X(t)}(x) occurs on both sides and the two terms cancel. He retained only the linear term in τ on the left-hand side, arguing the permissibility of this approximation because of the small magnitude of τ. He set the integral in the third term on the right-hand side (the first non-zero term remaining) to the coefficient of diffusion:

D = \frac{1}{\tau} \int_{-\infty}^{\infty} \frac{\alpha^2}{2}\, f_{\Delta}(\alpha)\, d\alpha.   (2.3)

And he eliminated the remainder of the terms on the right-hand side, arguing that they are small relative to the term retained. The result he obtained is

\frac{\partial f_{X(t)}(x)}{\partial t} = D\, \frac{\partial^2 f_{X(t)}(x)}{\partial x^2}, \quad t \ge 0, \; -\infty < x < \infty.   (2.4)
He pointed out that this is the equation governing diffusion of a particle in a liquid medium with coefficient
of diffusion D.
Einstein specified as the initial condition

f_{X(0)}(x) = \delta(x), \quad -\infty < x < \infty,   (2.5)

where δ(x), −∞ < x < ∞, is the Dirac delta function. (The Dirac delta function was not actually defined until later, so Einstein described an initial condition with this character, but not using the terminology. The Dirac delta function is a "distribution," a function described by its behaviour under an integral. Its integral is one, \int_{-\infty}^{\infty} \delta(x)\, dx = 1, and it has "small" values away from the origin, δ(x) = 0, x ≠ 0. In the application written above, it indicates certainty that the system starts with zero displacement.) Under these conditions, the solution to Eq. (2.4) is

f_{X(t)}(x) = \frac{1}{\sqrt{4 \pi D t}} \exp\left( -\frac{x^2}{4 D t} \right), \quad t \ge 0, \; -\infty < x < \infty.   (2.6)
The displacement response random process has a normal (Gaussian) distribution with mean zero and
variance 2Dt. (This also equals the mean square, or mean of the square of X(t), because the random process
has zero mean.) The displacement of a particle in Brownian motion has a variance that increases linearly with time. Its root-mean-square (RMS) departure from the origin increases in proportion to √t. Increases in the
coefficient of diffusion, D, imply increased response variance at a given time. D increases linearly with
temperature of the medium, and decreases as the inverse of coefficient of viscosity and particle size. Einstein
pointed out, and it was confirmed later, by more accurate analyses, that this result is not accurate for times
that are small compared to (m/c). Nevertheless, Eq. (2.6) stands as the first solution of a random vibration
problem.
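A quick numerical check of Eq. (2.6) is easy to set up. The sketch below is an added illustration with assumed parameter values, not part of the original development; the white-noise strength is specified directly through D, an assumption made only to keep the example independent of any particular spectral-density convention.

# Sketch: ensemble simulation of Eq. (2.1) checking Var[X(t)] = 2*D*t, which
# holds for times large compared with m/c.  The white noise is fixed through D
# via the assumed relation E[W(t1)W(t2)] = 2*c**2*D*delta(t2 - t1).
import numpy as np

m, c, D = 0.5, 2.0, 1.0
dt, T, n_paths = 0.001, 10.0, 5000
rng = np.random.default_rng(2)

x = np.zeros(n_paths)                             # particle displacements
v = np.zeros(n_paths)                             # particle velocities
w_std = np.sqrt(2.0 * c ** 2 * D / dt)            # std of the piecewise-constant noise
for _ in range(int(T / dt)):
    w = rng.normal(0.0, w_std, n_paths)           # independent impacts, one per step
    x, v = x + v * dt, v + (w - c * v) / m * dt

print("ensemble variance:", x.var(), "  Einstein's 2*D*t:", 2.0 * D * T)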
It is interesting to note that the diffusion coefficient is related to the parameters of Eq. (2.1) via D = S_WW/(2c²) when S_WW has units of lb²/(rad/s). Fig. 3 shows five marginal PDFs (i.e., PDFs of single random variables) of response at normalised times τ = 2Dt = 0.1, 1, 4, 7, 10. The normalised time versus x plane also

Fig. 3. Marginal PDFs of some normal distributions of the displacement response of a particle in Brownian motion for normalised times τ = 2Dt = 0.1, 1, 4, 7, 10.

shows the plus and minus one standard deviation curves, ±√τ. The plots clearly indicate the effects
of the uniformly increasing variance. As time increases the standard deviation—and, thus, the width—of each
normal PDF increases, and the peak of the PDF decreases. We reiterate the important fact that Einstein’s
analysis, summarised here, was the first random vibration analysis of a discrete mechanical system.
One of many important results demonstrated in Einstein’s work is that the motion of a mechanical system
excited by a large number of independent impacts is governed by a normal probability distribution. He
augmented his initial study with further investigations (all reprinted in [23], and written during the period of
1905–1908). He considered, among other things, extensions to his original development and the problem of
molecular parameter identification.
A characteristic of random vibrations of mechanical systems that Einstein (and other early researchers in
this field) did not fully pursue is the average temporal character of the excitation and particle response
motions. A sample time history drawn from a random source (a random process) is known as a realisation
of the random process, and it may bear a certain average relation to itself in time. For example, consider
Figs. 4(a–c). The first of these shows a realisation of a band-limited white noise force. It has signal content up
to a maximum frequency of 50 Hz, a spectral density of 1 lb²/Hz, and, therefore, a mean square of 50 lb². The second shows the corresponding realisation of displacement response random process of a rigid particle with mass m = 1 lb s²/in attached via a viscous damper with constant c = 0.1 lb s/in to an inertial frame. The third shows the corresponding realisation of the velocity response random process. All the plots have a time increment of Δt = 0.01 s. Clearly, the random variable realisations in the first figure bear little relation to one
another, on average; the sequence of random variable realisations is quite erratic. On the other hand, the
random variable realisations in the displacement response are highly correlated; the displacement response
realisation is quite smooth. The random variable realisations in the velocity response bear an intermediate
level of correlation to one another, so, the curve shown is not as erratic as the force realisation, but not as
smooth as the displacement response. It may be useful and important to know the averages of the responses at
specific times t, their mean squares at times t, the average of the product of one of the responses at times t and
s, and even the average of the cross-product between the displacement response at time t and the velocity
response at time s. These quantities are particularly important for systems that execute oscillatory responses,
and the reason will be discussed in the following section.
According to Uhlenbeck and Ornstein [24], Smoluchowski [88] and Fürth [25] independently generalised
Einstein’s analysis and performed Brownian motion experiments to verify the predictions of the theory.
Smoluchowski was first to write a form of the equation that would later be known as Fokker–Planck equation
for systems in which a displacement-related force restrains the mass. In other words, he wrote the diffusion
equation governing the PDF of the response of a single-degree-of-freedom (SDF) system connected to a fixed

Fig. 4. (a) Realisation of a band-limited white noise excitation. (b) Realisation of displacement response of the system shown in Fig. 2 to the excitation shown in (a). (c) Realisation of velocity response of the system shown in Fig. 2 to the excitation shown in (a).

Fig. 5. Schematic of the system considered by Smoluchowski in his random vibration analysis.

boundary via a linear spring and viscous damper. The SDF system in which linear damping and stiffness are
present was called the case of the ‘‘harmonically bound’’ particle, and a schematic of such a system is shown in
Fig. 5. But as Smoluchowski and, later, Uhlenbeck and Ornstein pointed out, the Fokker–Planck equation
that Smoluchowski developed, and its solution, govern only the response of an over-damped SDF system, i.e.,
a system with relatively high damping—one that executes a non-oscillatory response in free vibrations caused
by non-zero initial conditions. The over-damped solution is less interesting to structural dynamicists than the
lightly damped one because most practical structural dynamic systems are lightly damped.
The limitation on the Fokker–Planck equation would later be overcome [26], but in the short term Ornstein
[27] developed a method for assessing the displacement and velocity response PDFs for the linear SDF system.
Ornstein’s solution was summarised in detail in the 1930 paper by Uhlenbeck and Ornstein. The method
involves direct consideration of the response based on its governing equation of motion
m\ddot{X} + c\dot{X} + kX = W(t), \quad t \ge 0, \qquad X(0) = 0, \; \dot{X}(0) = 0,   (2.7)

where the parameters and variables have the same meanings as defined following Eq. (2.1), and k is the SDF system stiffness. Their first step was to mathematically characterize the white noise excitation. They required that the random process have zero mean, E[W(t)] = 0, −∞ < t < ∞, and a second-order average, known now as the autocorrelation function, with the following behaviour:

E[W(t_1) W(t_2)] = R_{WW}(t_1, t_2) = 2 \pi S_{WW}\, \delta(t_2 - t_1), \quad -\infty < t_1, t_2 < \infty,   (2.8)

where δ(t), −∞ < t < ∞, is Dirac's unit delta function. This terminology was still not used in 1930, but it does, effectively, describe the characteristics Ornstein specified. The requirement of Eq. (2.8) indicates that the average of the product of the excitation at times t₁ and t₂ is zero for t₁ ≠ t₂ and, effectively, infinite for t₁ = t₂.

One interpretation of this will be given in the following section, but the important implication is that mean
square displacement response to such an excitation is finite. Uhlenbeck and Ornstein also defined some higher
order average characteristics of the excitation, but we will not summarise those here.
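To make the idealisation of Eq. (2.8) concrete, a discrete sequence with the corresponding correlation structure can be generated as in the sketch below (an added illustration; the sample interval and spectral density level are assumptions).

# Sketch: discrete-time approximation to white noise with the autocorrelation
# of Eq. (2.8).  Choosing the sample variance 2*pi*S_WW/dt concentrates the
# delta function's area into a single step of width dt; correlation estimates
# at non-zero lags should be statistically indistinguishable from zero.
import numpy as np

S_ww, dt, n = 1.0, 0.01, 200_000
rng = np.random.default_rng(3)
w = rng.normal(0.0, np.sqrt(2.0 * np.pi * S_ww / dt), n)

for lag in (0, 1, 5):                             # lags of 0, dt, and 5*dt seconds
    r = np.mean(w[: n - lag] * w[lag:])           # estimate of E[W(t) W(t + lag*dt)]
    print("lag", lag * dt, "s :", round(r, 2))
# the zero-lag value is near 2*pi*S_WW/dt; the non-zero lags are near zero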
Any measure of the response of the system governed by Eq. (2.7) (or any linear system) can be expressed
with a convolution integral. The displacement response is
X(t) = \int_0^t h(t - \tau)\, W(\tau)\, d\tau, \quad t \ge 0,   (2.9)

where h(t), t ≥ 0, is the impulse response function relating the system excitation to the response measure of interest. In this case, the impulse response function is the displacement response following application of the unit delta function excitation, and is given by

h(t) = \frac{1}{m \omega_d}\, e^{-\zeta \omega_n t} \sin(\omega_d t), \quad t \ge 0,   (2.10)

where ω_n = √(k/m) is the natural frequency of the SDF system, the frequency in rad/s at which the system responds in free vibration, ζ = c/(2√(km)) is the (unitless) system damping factor (or critical damping ratio), an indicator of the rate at which energy is dissipated in the system, and ω_d = ω_n√(1 − ζ²) is the damped natural frequency, the actual frequency at which the damped SDF system responds. Uhlenbeck and Ornstein argued
that because the expected value of excitation is zero, the expected value of the response is also zero. The idea is
that, based on Eq. (2.9), the mean of X(t) equals the mean of the integral, which equals the integral of the
mean of the integrand. The quantity W(t) in the integrand has a mean of zero, therefore, the integral is zero.
(The formal conditions under which the mean of an integral equals the integral of the mean were
mathematically clarified, later.) Next, they wrote the expression for the square of the convolution integral, Eq.
(2.9), and computed the expectation to obtain the mean square displacement response:

E[X^2(t)] = \sigma_X^2(t) = \frac{\pi S_{WW}}{2 \zeta \omega_n^3 m^2} \left\{ 1 - e^{-2 \zeta \omega_n t} \left[ 1 + \frac{2 \zeta^2}{1 - \zeta^2} \sin^2(\omega_d t) + \frac{\zeta}{\sqrt{1 - \zeta^2}} \sin(2 \omega_d t) \right] \right\}, \quad t \ge 0.   (2.11)
This equation establishes the growth of displacement response mean square with time for a linear SDF
system. Uhlenbeck and Ornstein went on to show that certain conditions on the random excitation yield a
response that is Gaussian distributed.
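Eq. (2.11) is straightforward to evaluate; the sketch below (an added illustration with assumed parameter values) shows the transient growth of the mean square toward its stationary value πS_WW/(2ζω_n³m²).

# Sketch: direct evaluation of the Uhlenbeck-Ornstein result, Eq. (2.11),
# for an illustrative lightly damped SDF system.
import numpy as np

def mean_square_displacement(t, m=1.0, wn=2.0 * np.pi, zeta=0.05, S_ww=1.0):
    wd = wn * np.sqrt(1.0 - zeta ** 2)            # damped natural frequency
    stationary = np.pi * S_ww / (2.0 * zeta * wn ** 3 * m ** 2)
    bracket = (1.0
               + 2.0 * zeta ** 2 / (1.0 - zeta ** 2) * np.sin(wd * t) ** 2
               + zeta / np.sqrt(1.0 - zeta ** 2) * np.sin(2.0 * wd * t))
    return stationary * (1.0 - np.exp(-2.0 * zeta * wn * t) * bracket)

t = np.array([0.0, 1.0, 5.0, 10.0, 20.0])
print(mean_square_displacement(t))                # grows from 0 toward the stationary value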

Fig. 6. Marginal PDFs of some normal distributions of the displacement response of a linear SDF system excited by white noise, for normalised times τ = 1, 10, 20, 30, 40, 50, as a function of the normalised displacements ξ = x/σ_X(∞).

Fig. 6 shows a sequence of Gaussian displacement response PDFs in normalised coordinates for a damping factor of ζ = 0.05. Normalised time is τ = ω_n t. The normalised displacement is ξ = x/σ_X(∞), where σ_X²(∞) = πS_WW/(2ζω_n³m²), i.e., the displacement divided by the RMS response as t → ∞. The normalised PDFs governing the displacement response at normalised times τ = 1, 10, 20, 30, 40, 50 are plotted.
Uhlenbeck and Ornstein showed many other things in their paper. They showed, for example, how to
compute a cross-correlation between the displacement and velocity responses. This is the expected value of the
product between the displacement response at time t and the velocity response at time t. They showed that, for
an SDF system, this cross-correlation starts at zero, then oscillates, and finally decays to zero. The reason is
that these two response measures tend, on average, to be ninety degrees out of phase.
This work of Uhlenbeck and Ornstein is extraordinarily important in the history of random vibrations for
several reasons. Among them, it is the first treatment of random vibration for the under-damped case pertinent
to structural dynamics. Further, it is the first analysis of response characteristics that directly uses the
convolution integral representation of the response, which is the approach that is used most frequently today
for random vibration analyses.
Soon after publication of the 1930 paper by Uhlenbeck and Ornstein, Van Lear and Uhlenbeck [28] wrote
another paper using Ornstein’s approach to solve some problems in mechanical system random vibrations.
They sought to analyse the oscillatory Brownian motion response of strings and elastic rods. Their specific
objective was to use modal analysis (or normal vibration analysis, as they called it) to decompose continuous
system motion into a collection of components, each of which is governed by the equation for an SDF system.
The simpler equations were analysed to establish mean square component response to a white noise excitation,
then modal components were combined to establish mean square responses of the systems—string and rod—
at various locations and at all times. Of course, the mean square responses in the various modes are transient
at the start of response, then reach a steady state as time increases, at a rate dependant on damping in the
system modes.
It is understandable that the collection of papers summarised in this section on the Brownian motion of
particles, including particles that respond to input in an oscillatory fashion, may not have spurred the
imagination of structural dynamics researchers interested in the motions of large-scale structures. However,
this paper, written more than two decades prior to the time in the 1950s when widespread interest grew in
random vibration of mechanical systems, might have served as an impetus to practical random vibration
investigations. It appears, though, that the paper was not widely known by structural dynamics researchers,
and some of the earliest and strongest motivations for random vibrations research, namely the effects caused
by jet noise and missile launch environments, did not yet exist. It appears that among those who participated
in early random vibrations research, only Lyon [29] referred to the 1931 paper by Van Lear and Uhlenbeck.
For their part, Van Lear and Uhlenbeck refer to even earlier investigations into the random vibration of
structural systems. Ornstein [30] wrote a paper on the random vibration of an elastic string, and Houdijk [31]
wrote a paper on the random vibration of an elastic rod. Still, these three papers cannot really be considered
part of the literature that inspired the popular development of random vibrations of mechanical systems.
Planck [32] and Fokker [33] used a different approach to obtain the diffusion problem formulation and
solution for Brownian motion. They started with the discrete space/discrete time framework of the random
walk to characterize the dynamic motion of a particle in Brownian motion. Fig. 7 shows a particle on the real

Fig. 7. The schematic idea that forms the basis for the derivation of the diffusion model from a discrete time/discrete space foundation—the random walk.

line. During this simple random walk a particle starts at the origin at time t = 0, and moves at each step with probability p = 1/2 one step to the right, and with probability q = 1/2 one step to the left. They specified the time duration of each step as Δt, and the length of each step as Δx, so that the times under consideration are t = jΔt, j = 0, 1, 2, …, and the displacements under consideration are x = kΔx, k = 0, ±1, …, ±j.
Let the notation f_{X(t)}(x) = f_{X(jΔt)}(kΔx), j = 0, 1, 2, …, k = 0, ±1, ±2, …, ±j, denote the probability that at time t = jΔt, following j impacts, the particle lies at the location x = kΔx, k steps to the right of the origin (k can be positive or negative). If i equals the total number of steps to the right and j − i the total number of steps to the left, then it must be that k = i − (j − i) = 2i − j because k represents the net number of steps to the right. This can be solved for i, and because x = kΔx, i = (1/2)(k + j) = (1/2)(x/Δx + j). Because each step in the random process involves a random binary movement, the probability distribution of the discrete location of the particle is binomial (see [22]), and Planck and Fokker wrote

P\left( \frac{X(t)}{\Delta x} = k \right) = \binom{j}{\tfrac{1}{2}(j+k)} \left( \tfrac{1}{2} \right)^{(1/2)(j+k)} \left( \tfrac{1}{2} \right)^{(1/2)(j-k)} = \binom{j}{\tfrac{1}{2}(j+k)} \left( \tfrac{1}{2} \right)^{j} = P(X(t) = k \Delta x), \quad j = 0, 1, 2, \ldots, \; k = 0, \pm 1, \ldots, \pm j,   (2.12)

where the notation \binom{j}{r} = j!/(r!(j − r)!) refers to the binomial coefficient. Based on this, the expected value of X(t) is zero, E[X(t)] = 0, and its variance is V[X(t)] = (Δx)²j.
The law of total probability indicates that the chance that the particle lies at the point x = kΔx at time t + Δt, i.e., X(t + Δt) = kΔx, can be decomposed into two parts: the chance that the particle lies at the point (k − 1)Δx = x − Δx at time t times the chance that it moves one step to the right between t and t + Δt, plus the chance that the particle lies at the point (k + 1)Δx = x + Δx at time t times the chance that it moves one step to the left between t and t + Δt. This is

f_{X(t + \Delta t)}(x) = \frac{1}{2} f_{X(t)}(x - \Delta x) + \frac{1}{2} f_{X(t)}(x + \Delta x).   (2.13)

This difference equation governs motion of the particle, subject to the initial condition

f_{X(0)}(x) = \begin{cases} 1, & x = 0, \\ 0, & x \ne 0. \end{cases}   (2.14)

Planck and Fokker subtracted f_{X(t)}(x) from both sides of Eq. (2.13), then divided the left side of the result by Δt and the right side by a scaled version of Δt to obtain

\frac{f_{X(t + \Delta t)}(x) - f_{X(t)}(x)}{\Delta t} = \frac{1}{2} \frac{(\Delta x)^2}{\Delta t} \left[ \frac{\tfrac{1}{2} f_{X(t)}(x - \Delta x) - f_{X(t)}(x) + \tfrac{1}{2} f_{X(t)}(x + \Delta x)}{(\Delta x)^2} \right].   (2.15)

Next, they let (Δx)²/2Δt = D, a diffusion coefficient, and took the limit as Δt → 0 to obtain a continuous equation governing the PDF of the displacement response:

\frac{\partial f_X}{\partial t} = D\, \frac{\partial^2 f_X}{\partial x^2}, \quad t \ge 0, \; -\infty < x < \infty,   (2.16)

with initial condition f_{X(0)}(x) = δ(x), −∞ < x < ∞. This is, of course, identical to the equation developed by Einstein, except that the present development has its origins in a scenario remote from physical diffusion.
Planck and Fokker could have stopped at this point and simply solved Eq. (2.16) for the PDF governing
response, but instead they noted that the probability distribution of the response is already given, in its discrete
form, by Eq. (2.12). The DeMoivre–Laplace Theorem (see [34]) holds that as the number of trials, j in
Eq. (2.12), increases, the binomial distribution approaches a normal distribution with the same mean and variance. Note that the mean and variance of X(t) were shown to be E[X(t)] = 0 and V[X(t)] = (Δx)²j; therefore, because of the definition given in the previous paragraph, (Δx)² = 2(Δt)D, and the variance

approaches 2Dt. The PDF of the response is the same as that given in Eq. (2.6), but the result was derived
from a completely different perspective. Among other things, it is interesting to note here that the random
response is the accumulated result of numerous transitions, none of which is governed by a normal
distribution.
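The convergence described here is easy to see numerically; the sketch below (an added illustration with assumed step sizes) simulates the random walk behind Eq. (2.12) and compares its spread with the diffusion result 2Dt.

# Sketch: the Planck/Fokker random walk approaching the normal distribution of
# Eq. (2.6), as the DeMoivre-Laplace theorem asserts.
import numpy as np

dt, dx, j = 0.01, 0.1, 1000                       # time step, space step, number of steps
D = dx ** 2 / (2.0 * dt)                          # diffusion coefficient, as in Eq. (2.15)
t = j * dt
rng = np.random.default_rng(4)

steps = rng.choice([-dx, dx], size=(20_000, j))   # p = q = 1/2 at every step
x_t = steps.sum(axis=1)                           # particle positions at time t = j*dt

print("sample variance :", x_t.var())
print("diffusion 2*D*t :", 2.0 * D * t)           # equals (dx**2)*j
print("fraction within one standard deviation:",
      np.mean(np.abs(x_t) < np.sqrt(2.0 * D * t)))  # ~0.68 for a normal distribution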
This discrete formulation was extended to cover several other cases including the one in which the particle
experiences a drift caused by a constant force superimposed over the oscillatory force, and the one in which
the particle is harmonically bound. A summary of much of the early work is given by Kac [35].
Many special cases of the Fokker–Planck equation (written in its fully general form in [5]) have been
developed over the years, and work in this area continues. Kolmogorov [36] is credited with having greatly
generalised the ideas of the Fokker–Planck equation. (Gnedenko [19] states that in the 1931 paper cited above,
Kolmogorov started the ‘‘construction of the general theory of stochastic processes.’’) Also, among many
other things, Wang and Uhlenbeck [26] systematised the development of a Fokker–Planck equation for a
system governed by specific equations.

3. The development of spectral density

It is unlikely that the mathematical theory of random vibrations of mechanical systems would have become
as popular and practically useful as it is today, had it not been for the development of spectral density by
Wiener [37] and those who preceded him. Spectral density (also known as mean square spectral density, power
spectral density, and by other descriptive names) is the fundamental characteristic of a weakly stationary
random process. It describes the distribution in the frequency domain of the mean square signal content of a
stationary random process.
Here is what the terms in the previous paragraph mean, qualitatively, and why spectral density is essential
to random vibration analysis. First, an alternate way of thinking about random processes (alternate to the
sequence-of-random-variables description, provided in the previous section) is to consider a random process
to be an infinite ensemble (collection) of signals. Frequently, almost all the signals in the ensemble look alike,
in a general sense, but, in detail, they all differ. For example, finite duration segments of two signals from the
ensemble of a single random process are shown in Fig. 8. They look alike in their general random character,
but they differ in their details. When practically all of the signals in the ensemble of a random process are in a
random steady state from the infinite past to the infinite future, the random process is stationary. (Precisely
what is meant by the phrase ‘‘random steady state’’ will be described in a moment. Of course, no real signal
maintains a steady state from the infinite past to the infinite future. In a practical sense, if signals from a
random source maintain a steady state for a ‘‘long time,’’ then the source is considered stationary.) The

Fig. 8. Segments of two realisations from a single random process.



notation {X(t), −∞ < t < ∞} indicates a random process defined at times in (−∞, ∞). With the current interpretation this might be thought of as a collection of signals x_j(t), j = 1, 2, …, −∞ < t < ∞. Each signal x_j(t) is a realisation of the random process {X(t), −∞ < t < ∞}.
Second, the stock in trade of structural dynamic analysts and experimentalists is frequency domain analysis
of structures and their characteristics via Fourier expansion. Frequency domain analysis often reveals the
modal characteristics of structural behaviour. The modal frequencies of a mechanical system are the
frequencies at which motion is amplified in structural response, and the mode shapes of the system are
the shapes the mechanical system naturally assumes when it responds at specific modal frequencies. For any
excitation, it is useful to know the frequencies of substantial signal content because this information indicates
the frequencies where substantial response might be excited in a structure. For any response, it is useful to
know the frequencies of substantial signal content because these frequencies indicate where the excitation has
at least some signal content, and the response amplified the input signal content. When the input and response
are finite duration and deterministic, direct Fourier analysis can be used to answer these questions about
excitation and response signal content. But when a source is a random process the ensemble of the random
process contains an infinite collection of signals, and further, each one has infinite duration. Infinite duration
signals do not have Fourier transforms in a traditional sense. Spectral density considers signal content in a
mean square sense, and in so doing enables characterization of the signal content of an infinite collection of
random process realisations, each of which has infinite duration.
Wiener started the process of defining the spectral density (in strict terminology, the autospectral density,
for reasons that will be seen below) by defining a related function—the autocorrelation function. It is
R_{XX}(\tau) = E[X(t) X(t + \tau)] = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)\, x(t + \tau)\, dt, \quad -\infty < \tau < \infty,   (3.1)

where x(t), −∞ < t < ∞, is a realisation of the random process {X(t), −∞ < t < ∞}. Eq. (3.1) defines the
autocorrelation function in terms of a temporal average. With this definition, it is assumed that the random
process realisation chosen to perform the time averaging computation in Eq. (3.1) is representative of
practically all the other realisations in the random process. (When one arbitrarily chosen stationary random
process realisation can be used in the above formula to define a valid autocorrelation function, the random
process is said to be ergodic. See [5].) Wiener could have defined the autocorrelation function in terms of an
average over all j of products like x_j(t) x_j(t + τ) across the ensemble of the random process, but did not do so.
An assertion of the formula in Eq. (3.1) is that the average presented is a function of τ only. When this
assertion is correct for the random process, and when the mean of the random process is constant over all
time, then the random process is weakly stationary, and maintains the random steady state mentioned above.
(The term ‘‘weak’’ used to describe stationarity refers to a random steady state that is characterized exclusively
in terms of the mean and autocorrelation functions.) The function defined in Eq. (3.1) is referred to as the
autocorrelation function because it defines the correlation of one random variable in the process, X(t), to another random variable in the same random process, X(t + τ). Finally, the autocorrelation function is a symmetric function of τ, and it can be shown to be non-negative definite.
When a random process is weakly stationary the spectral density exists; it was defined by Wiener as the
Fourier transform of the autocorrelation function:
S_{XX}(\omega) = \frac{1}{2 \pi} \int_{-\infty}^{\infty} R_{XX}(\tau)\, e^{-i \omega \tau}\, d\tau, \quad -\infty < \omega < \infty.   (3.2)
Because of the characteristics of the autocorrelation function, mentioned above, the spectral
density is symmetric and non-negative. Eq. (3.2) is called a two-sided spectral density because it is defined
for negative as well as positive frequencies. (As mentioned earlier, negative frequencies are to be interpreted in
the sense that harmonic functions are defined for negative arguments.) Among many other things, Wiener
showed that the autocorrelation function can be recovered from the spectral density via inverse Fourier
transformation:
R_{XX}(\tau) = \int_{-\infty}^{\infty} S_{XX}(\omega)\, e^{i \omega \tau}\, d\omega, \quad -\infty < \tau < \infty.   (3.3)

Because, from Eq. (3.1), R_XX(τ) = E[X(t)X(t + τ)], the mean square of the random process is the constant σ_X² = R_XX(0). Therefore, evaluation of Eq. (3.3) at τ = 0 gives the mean square of the random process {X(t), −∞ < t < ∞}. This is

R_{XX}(0) = \int_{-\infty}^{\infty} S_{XX}(\omega)\, d\omega.   (3.4)

As Wiener pointed out, Eq. (3.4) shows that the mean square of a random process is the area under the
spectral density curve, and it is clear that the integral of the spectral density over a portion of the frequency
axis yields the part of the mean square of the random process due to contributions over that frequency range.
The function defined in Eq. (3.2) is, in strict terms, an autospectral density because it is the Fourier
transform of the autocorrelation function, and because it is the frequency domain characterization of the
mean square behaviour of a random process.
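The transform pair of Eqs. (3.2)–(3.4) can be illustrated with a standard closed-form example, added here for illustration; the exponential autocorrelation model and its parameters are assumptions. For R_XX(τ) = σ² e^(−a|τ|), Eq. (3.2) gives S_XX(ω) = σ²a/(π(a² + ω²)), and integrating the spectral density over frequency recovers the mean square R_XX(0) = σ², as Eq. (3.4) requires.

# Sketch: checking Eq. (3.4) for the assumed pair
#   R_XX(tau) = sigma2*exp(-a*|tau|)  <-->  S_XX(w) = sigma2*a/(pi*(a**2 + w**2)).
import numpy as np

sigma2, a = 4.0, 10.0                             # mean square and decay rate (assumed)
w = np.linspace(-2000.0, 2000.0, 200_001)         # frequency grid, rad/s
S_xx = sigma2 * a / (np.pi * (a ** 2 + w ** 2))   # Eq. (3.2) evaluated in closed form

dw = w[1] - w[0]
area = np.sum(S_xx) * dw                          # Eq. (3.4): area under the spectral density
print("area under S_XX:", area, "  mean square R_XX(0):", sigma2)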
In his 1930 paper defining spectral density, Wiener attributed the fundamental idea underlying spectral
density to Schuster [38–40]. He wrote: ‘‘The germs of the generalised harmonic analysis of this paper are
already in the work of Schuster, but only the germs. To make the Schuster theory assume a form suitable for
extension and generalisation, a radical recasting is necessary.’’
However, some of the ideas underlying spectral density preceded Schuster’s 1899 paper. Rayleigh [41]
introduced the idea that a random process has an autocorrelation, and, in a footnote, he gave an excellent
example of how and why it exists. He did not define the autocorrelation via an equation, though. In the same
paper he showed how an energy spectrum can be defined as a frequency domain description of a stationary
random process. He recognised the difficulty in writing the Fourier transform of a signal that extends from
minus infinity to infinity, and he multiplied the time-domain signal by a symmetric, exponential lag weighting
to make the Fourier transform well-defined. He did not hint at the possibility that the autocorrelation might
be related to the spectral density, but he certainly introduced the idea that a spectral representation can be
developed for a stationary source.
Schuster wrote another paper [42] that discusses the idea of a spectral representation for random signals.
His topic was the characterization of light, but, as Rayleigh had done earlier, he suggested that random
sources can be described in terms of their energy spectra, and that energy over limited ranges of frequency
represents the content of signal components. In his 1894 paper, Schuster appears not to have accommodated
the problem of growth of the Fourier transform magnitude of a stationary signal as greater duration signals
are represented. But in a later paper Schuster [43] indicated that the modulus of the Fourier transform of a
segment of stationary random signal taken on the interval (t, t + T) increases as the square root of T when the
signal has no periodic components. Still, he did not apply the correction in his spectral characterization in the
1897 paper. It was not until later that Schuster [40] inserted the correction into his definition of the
periodogram. The expression he developed did not take the limit as T → ∞, but it is still the earliest definition
of a quantity that is very similar to the spectral density in use today.
During the time frame of the early 1900s and later, Rayleigh [44–47] wrote many other papers that
considered the spectral representation and probability distributions of random signals.
The first mathematical definition of the autocorrelation function appears to have been written by Taylor
[48]. In a paper that considered the diffusion of heat in fluids in turbulent motion, he defined a random process
that proceeds in discrete, temporal steps. He defined the correlation between random variables in the process
at adjacent steps and inferred a correlation structure for the process.
To understand in a practical and intuitive sense the meaning of spectral density, consider the schematic of
Fig. 9. It indicates that every real signal, like the one shown at left in the figure, can be filtered into a finite
number of components. A sequence of idealised filter gain functions is shown in the second column. The filters
are non-overlapping, band-pass, equal width, and vary from low to high frequency, top to bottom. Some
components of the signal at left are shown by the third column of signals. The signals vary from the lowest
frequency component at the top to the highest frequency component at the bottom. The mean square value of
each signal can be estimated by squaring a long-duration segment of each component and computing the
average. The mean square values of the signal components are plotted as a function of band-pass filter centre
frequencies in the fourth column of figures. Finally, the mean square values of the signal components are
divided by the effective filter bandwidths to obtain the estimate of spectral density of the random source from

Fig. 9. Schematic describing the meaning of spectral density.

which the measured signal arose. Of course, the units of the spectral density are the units of the measured
signal squared per unit of frequency (usually Hz). This is how spectral density is related to a random source. The
explanation is meant only to be a rough description of the meaning of spectral density. Issues like the filter
type, shape and widths, and the measured signal duration are all important. Even so, a physical
implementation of the process described here was used to estimate the spectral density of measured sources
into the 1960s and 1970s. Miles [49] mentions the need to obtain a spectral density estimate from
experimentally measured signals, in order to perform random vibration analyses, and suggests a method like
the one outlined here. A paper by Rona [50] describes the process, shown graphically here, for estimation of
spectral density of mechanical system motions.
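A minimal digital version of the filter-bank procedure just described might look like the following sketch (an added illustration; the record, filter order, and bandwidths are arbitrary assumptions, and a white, unit-variance record stands in for a measured signal).

# Sketch: estimate a one-sided spectral density by band-pass filtering a long
# record into equal-width bands and dividing each band's mean square by the
# filter bandwidth, as in the scheme of Fig. 9.
import numpy as np
from scipy import signal

fs, T = 1000.0, 200.0                             # sampling rate (Hz) and record length (s)
rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, int(fs * T))             # stand-in for a measured broadband signal

bandwidth = 25.0                                  # Hz, width of each idealised filter
for lo in np.arange(0.0, 200.0, bandwidth):       # analyse the 0-200 Hz range
    hi = lo + bandwidth
    b, a = signal.butter(4, [max(lo, 0.01), hi], btype="bandpass", fs=fs)
    component = signal.lfilter(b, a, x)           # one band-passed signal component
    G_hat = np.mean(component ** 2) / bandwidth   # mean square / bandwidth = PSD estimate
    print(f"{lo:5.1f}-{hi:5.1f} Hz : G_XX ~ {G_hat:.4f} (signal units squared per Hz)")

Modern practice favours FFT-based estimators, but this band-pass average is the digital counterpart of the analogue filter banks mentioned above.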
Here are some examples of random process realisations, their autocorrelation functions, and spectral
densities. First, a zero-mean, wide-band random process {X(t), −∞ < t < ∞} is considered in Fig. 10a. It has the one-sided spectral density G_XX(f), f ≥ 0, shown on the right. (The one-sided spectral density is defined on
non-negative frequencies and has twice the magnitude of the two-sided spectral density. The one-sided spectral
density is traditionally used to describe the results of laboratory and field signal analyses, and it is used in most
plots of spectral density.) The random process is termed wide-band because the frequency band over which the
spectral density has non-zero values is wide relative to the ‘‘centre’’ frequency. This particular random process
is known as a band-limited white noise, by analogy to visible light, because the density of mean square signal
content is evenly distributed over all the frequencies where it is non-zero. Such a random process has
realisations most of which resemble the signal shown on the left. The autocorrelation function of a wide-band
random process, R_XX(τ), −∞ < τ < ∞, is narrow and sharp, as shown by the middle figure. It indicates that
the random process loses correlation with itself over a very short time lag. Second, a zero-mean, narrowband
random process is considered in Fig. 10b. It has the one-sided spectral density shown on the right. The random
process is called narrowband because the frequency band over which the spectral density has non-zero values
is narrow relative to the ‘‘centre’’ frequency. Such a random process has realisations most of which resemble
the signal shown on the left. The autocorrelation function of a narrowband random process is periodic and
decaying, as shown by the middle figure. It indicates that the random process has correlation with itself that is
nearly periodic, and that diminishes slowly. The period of the autocorrelation function matches the average
period of the realisations, and these are the reciprocals of the ‘‘centre’’ frequency of the spectral density
measured in Hertz. Third, the response random process of a three degree-of-freedom (dof) system is
considered in Fig. 10c. It has the one-sided spectral density shown on the right. There is a peak in the spectral

Fig. 10. (a) Band-limited white noise random process: realisation (left), autocorrelation function (centre), spectral density (right). (b) Narrowband random process: realisation (left), autocorrelation function (centre), spectral density (right). (c) Random response of three dof system: realisation (left), autocorrelation function (centre), spectral density (right).

density for each mode in the system that is substantially excited. The frequencies where the peaks occur
indicate the modal frequencies of the system. Such a random process has realisations most of which resemble
the signal shown on the left. The realisations of this random process are superpositions of three narrowband
random processes. The autocorrelation function of the random response of the three dof system is the
superposition of autocorrelation functions of three narrowband random processes with centre frequencies at
the modal frequencies of the system, as shown by the middle figure. In most situations it is difficult to interpret
the character of a stationary random process using response realisations or autocorrelation function, but
relatively easy to interpret stationary random process mean square signal content using spectral density.
A random process known as an ideal white noise is one whose two-sided spectral density is finite and
constant on (−∞, ∞). Indeed, the excitation characteristics described mathematically by Uhlenbeck and
Ornstein [24] imply a white noise excitation, though not in those words. Even prior to that description, the
assumptions made by Einstein [21] in his early development of a Fokker–Planck equation imply an
assumption of white noise-type excitation.
It was stated following Eq. (3.4) that the area under the spectral density curve is the mean square of a zero-
mean random process; therefore, an ideal white noise has infinite mean square. In spite of this, the ideal white
noise model is important because some measures of the mean square response of stable linear systems to white
noise are finite. For example, the mean square displacement and velocity responses of force-excited, fixed-base,
linear structures to a white noise excitation are finite. For this reason, the white noise excitation model is used
even today to perform relatively simple, yet accurate, analyses. Part of modern analyses is the development of
the relation between the spectral density of a random excitation and the spectral density of the response it
excites. In view of this, an understanding of the history of random vibrations requires knowledge of the origins
of input/output spectral density relations.
Wiener pursued this topic in his 1930 paper. He did so in a far-ranging section entitled ‘‘Spectra and
Integration in Function Space.’’ Without writing the formula, he stated the relation that is most fundamental
to the modern practice of random vibrations of linear mechanical systems. He stated the result for a white
noise excitation in three ways; here are two of them. First, he wrote, ‘‘the spectral density of [random linear
system response] is half the square of the modulus of the Fourier transform of [the system impulse response

function].’’ Later, ‘‘if a linear resonator is stimulated by a uniformly haphazard sequence of impulses, each
frequency responds with an amplitude proportional to that which it would have if stimulated by an impulse of
that frequency and of unit energy.’’ The excitation he is referring to is a type of white noise. (See discussions of
shot noise in [34].) The Fourier transform of the system impulse response function is the system frequency
response function (FRF). The FRF of a linear system is the coefficient of proportionality relating magnitude
and phase of a harmonic response component to magnitude and phase of a harmonic input component. That
is, H(o) is the FRF in the relation
$$X(\omega) = H(\omega)\, Q(\omega), \quad -\infty < \omega < \infty, \qquad (3.5)$$

where $Q(\omega) = \int_{-\infty}^{\infty} q(t)\, e^{-i\omega t}\, dt$, $-\infty < \omega < \infty$, is the Fourier transform of a system excitation, and $X(\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-i\omega t}\, dt$, $-\infty < \omega < \infty$, is the Fourier transform of a linear system response.

When $\{W(t), -\infty < t < \infty\}$ is a zero-mean, ideal white noise random process with spectral density $S_{WW}(\omega) = S_{WW} = \text{constant}$, $-\infty < \omega < \infty$, Wiener's statement is

$$S_{XX}(\omega) = |H(\omega)|^2\, S_{WW}, \quad -\infty < \omega < \infty, \qquad (3.6)$$

where $S_{XX}(\omega)$, $-\infty < \omega < \infty$, is the spectral density of the response random process $\{X(t), -\infty < t < \infty\}$. For example, when a linear, SDF system with mass $m = 1.0\ \mathrm{lb\,s^2/in}$, damping factor $\zeta = 0.05$, and natural frequency $\omega_n = 2\pi\ \mathrm{rad/s}$, is excited by a zero-mean, ideal white noise with spectral density $S_{WW} = 1\ \mathrm{lb^2/Hz}$, the displacement response spectral density is

$$S_{XX}(\omega) = \frac{1}{m^2\left((\omega_n^2 - \omega^2)^2 + (2\zeta\omega_n\omega)^2\right)} = \frac{1}{\left((2\pi)^2 - \omega^2\right)^2 + \left(0.1(2\pi)\omega\right)^2}, \quad -\infty < \omega < \infty, \qquad (3.7)$$

because

$$H(\omega) = \frac{1}{m\left(\omega_n^2 - \omega^2 + 2i\zeta\omega_n\omega\right)}, \quad -\infty < \omega < \infty. \qquad (3.8)$$
This response spectral density is graphed in Fig. 11.
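As a brief illustration of Eqs. (3.6)–(3.8), the following sketch evaluates the FRF and the white noise response spectral density for the SDF parameters quoted above; the frequency grid and the crude numerical integration of the area under the curve are assumptions made for the example, not part of Wiener's development.

```python
import numpy as np

# Sketch of Eqs. (3.6)-(3.8): displacement spectral density of an SDF system
# excited by ideal white noise.  The system parameters follow the example in the
# text; the frequency grid and the simple numerical integration are assumptions.
m, zeta, wn = 1.0, 0.05, 2.0 * np.pi     # mass (lb s^2/in), damping factor, rad/s
Sww = 1.0                                # two-sided white noise spectral density

w = np.linspace(-20.0 * wn, 20.0 * wn, 8001)            # frequency grid, rad/s
H = 1.0 / (m * (wn**2 - w**2 + 2j * zeta * wn * w))     # FRF, Eq. (3.8)
Sxx = np.abs(H)**2 * Sww                                 # response PSD, Eq. (3.6)

# The area under the spectral density approximates the response mean square.
mean_square = np.sum(Sxx) * (w[1] - w[0])
print(mean_square)
```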
It appears that Carson [51] had previously defined the input/output relation for spectral densities in work
Wiener was not aware of. Carson was motivated by a need to describe the effects of noise on an electrical
communications system. He defined what he called the energy spectrum of random interference, as the modulus
squared of the Fourier transform of a finite duration segment of a random process realisation, divided by the
duration of the segment. This quantity is essentially the same as Schuster’s [40] definition of the periodogram,
and, like Schuster, Carson failed to take the limit as $T \to \infty$. But Carson wrote the expression for the mean square response of a linear system to the random excitation in terms of the energy spectrum. Essentially, he wrote Eq. (3.6), but he wrote it in terms of the linear system impedance, the inverse of the system FRF. A few years later, Carson [52] modified the definition of what he now called the energy-frequency spectrum by taking the limit as $T \to \infty$. He re-wrote the input/output spectral density relations for a linear system in the 1931

Fig. 11. Spectral density of the response of an SDF system.



paper, and, in addition, proposed several shot noise-type models (see [5]) for the electrical noise. He ventured
that the energy-frequency spectrum of practical noise sources must be wide-band and slowly varying.
Wiener went much further in his 1930 paper and defined what is now known as the cross-spectral
density. He called it coherency, and used it to define frequency domain relations among random processes
in a quadratic mean sense. He developed the cross-spectral density in terms of cross-correlations between
random processes. Consider a collection of $M$ zero-mean, stationary, ergodic random processes $\{X_m(t), -\infty < t < \infty\}$, $m = 1, \ldots, M$. Each random process consists of an infinite collection of realisations, and one of the realisations in any of the random processes is representative of practically all the other realisations in the random process. The signals $x_m(t)$, $m = 1, \ldots, M$, $-\infty < t < \infty$, are realisations, one from each of the $M$ random processes, respectively. He defined a function, one that we now call the cross-correlation function between a pair of random processes $\{X_j(t), -\infty < t < \infty\}$, $j \in (1, \ldots, M)$, and $\{X_k(t), -\infty < t < \infty\}$, $k \in (1, \ldots, M)$, as

$$R_{X_j X_k}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x_j(t)\, x_k(t + \tau)\, dt, \quad j, k \in (1, 2, \ldots, M), \ -\infty < \tau < \infty. \qquad (3.9)$$

Normally, we consider $j \neq k$. Wiener defined the cross-spectral density as the Fourier transform of the cross-correlation function:

$$S_{X_j X_k}(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} R_{X_j X_k}(\tau)\, e^{-i\omega\tau}\, d\tau, \quad j, k \in (1, 2, \ldots, M), \ -\infty < \omega < \infty. \qquad (3.10)$$
He described many of the characteristics of the cross-spectral density, and noted that the matrix of all the cross-spectral densities of the random processes $\{X_m(t), -\infty < t < \infty\}$, $m = 1, \ldots, M$, namely,

$$\mathbf{S}_{XX}(\omega) = \begin{bmatrix} S_{X_1X_1}(\omega) & S_{X_1X_2}(\omega) & \cdots & S_{X_1X_M}(\omega) \\ S_{X_2X_1}(\omega) & S_{X_2X_2}(\omega) & \cdots & S_{X_2X_M}(\omega) \\ \vdots & \vdots & \ddots & \vdots \\ S_{X_MX_1}(\omega) & S_{X_MX_2}(\omega) & \cdots & S_{X_MX_M}(\omega) \end{bmatrix}, \quad -\infty < \omega < \infty, \qquad (3.11)$$

‘‘determines the spectra of all possible linear combinations of [the random processes].’’
He noted that the cross-spectral density matrix is Hermitian (i.e., $S_{X_jX_k}(\omega) = S^{*}_{X_kX_j}(\omega)$), and that every cross-spectral density matrix can be diagonalised. He pointed out that this makes it easy to generate correlated
random processes starting with completely uncorrelated random processes—a fact that has been rediscovered
many times. This is important, for example, in the laboratory-experimental generation of coherent, stationary
random signals for multi-axis testing. (See [53,54].)
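The diagonalisation remark lends itself to a short modern illustration. The sketch below is an assumption-laden example, not Wiener's own construction: it factors an assumed two-by-two cross-spectral density matrix at each frequency and uses the factor to shape two independent white sequences into correlated realisations; the target matrix, record length, and scaling conventions are all illustrative choices.

```python
import numpy as np

# Illustrative sketch: shape two independent white sequences so that their
# spectral colouring and mutual coherence follow an assumed 2x2 cross-spectral
# density matrix.  Absolute scaling is not tracked here; only the relative
# frequency content and the correlation structure are of interest.
rng = np.random.default_rng(0)
n, dt = 8192, 0.001
freqs = np.fft.rfftfreq(n, dt)

def target_csd(f):
    """Assumed target cross-spectral density matrix at frequency f (Hz)."""
    s11 = 1.0 / (1.0 + (f / 50.0) ** 2)
    s22 = 2.0 / (1.0 + (f / 20.0) ** 2)
    s12 = 0.6 * np.sqrt(s11 * s22)          # partial coherence between the processes
    return np.array([[s11, s12], [s12, s22]])

white = rng.standard_normal((2, n))          # two uncorrelated white sequences
W = np.fft.rfft(white, axis=-1)
X = np.empty_like(W)
for i, f in enumerate(freqs):
    L = np.linalg.cholesky(target_csd(f) + 1e-12 * np.eye(2))   # lower-triangular factor
    X[:, i] = L @ W[:, i]
x = np.fft.irfft(X, n=n, axis=-1)            # two correlated realisations
```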
In terms of the cross-spectral density, Wiener defined what he called the coefficient of coherence as
$$\gamma_{X_j X_k}(\omega) = \frac{S_{X_j X_k}(\omega)}{\sqrt{S_{X_j X_j}(\omega)\, S_{X_k X_k}(\omega)}}, \quad j, k \in (1, 2, \ldots, M), \ -\infty < \omega < \infty, \qquad (3.12)$$

where $S_{X_j X_j}(\omega)$ and $S_{X_k X_k}(\omega)$ are the autospectral densities of the two random processes. He stated, ‘‘The
modulus of the coefficient of coherency represents the amount of linear coherency between [the two random
processes], and the argument [phase] the phase lag of this coherency.’’ Today, we frequently define a quantity
that is the modulus squared of the expression in Eq. (3.12) and call it the coherence. It is used frequently with
experimentally measured data to judge the extent of linear relation between a pair of random processes. (See,
for example, [18].)
The use of the cross-spectral density to define the coefficient of coherency is useful and interesting, but the
most important use for the cross-spectral density involves the estimation of the FRF. It appears that this is one
very important application that Wiener did not explicitly include in his 1930 paper. It happens that the
frequency domain input/output relation for a linear system, Eq. (3.5), above, can be used to develop a relation
among the excitation autospectral density, $S_{QQ}(\omega)$, $-\infty < \omega < \infty$, the cross-spectral density between the response and the excitation, $S_{XQ}(\omega)$, $-\infty < \omega < \infty$, and the FRF, $H(\omega)$, $-\infty < \omega < \infty$, of the linear system:

$$S_{XQ}(\omega) = H(\omega)\, S_{QQ}(\omega), \quad -\infty < \omega < \infty. \qquad (3.13)$$

When the auto- and cross-spectral densities in the equation are estimated, the FRF can be inferred. Wiener
did suggest methods for estimating auto- and cross-spectral densities, though they are not the approaches used
in the current digital age. Details on this sort of estimation can be obtained in any text that discusses random
signal analysis, for example, [18], or Wirsching et al. [17].
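In current practice that inference is usually made with digital spectral estimates. The sketch below is a minimal modern example, not a method described by Wiener: it synthesises an excitation and a response from an assumed discrete-time filter, estimates the auto- and cross-spectral densities with SciPy's Welch-type routines, and forms the FRF estimate implied by Eq. (3.13).

```python
import numpy as np
from scipy import signal

# Sketch of the modern use of Eq. (3.13): H(f) is estimated as S_XQ(f)/S_QQ(f)
# from measured excitation q(t) and response x(t).  The synthetic "measurement"
# (a simple first-order digital filter plus noise) is purely an assumption.
rng = np.random.default_rng(1)
fs = 1000.0
q = rng.standard_normal(200_000)                    # broadband excitation record
x = signal.lfilter([0.1], [1.0, -0.9], q)           # assumed linear system response
x = x + 0.01 * rng.standard_normal(x.size)          # measurement noise

f, Sqq = signal.welch(q, fs=fs, nperseg=1024)       # excitation autospectral density
_, Sxq = signal.csd(q, x, fs=fs, nperseg=1024)      # cross-spectral density
H_est = Sxq / Sqq                                   # FRF estimate from Eq. (3.13)
```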
It is inappropriate to end this section without noting that the spectral density was independently defined in
1934 by Khintchine [84]. He also defined the autospectral density as in Eq. (3.2), and in honour of his work,
and, of course, that of Wiener, Eqs. (3.2) and (3.3) are usually known as the Wiener–Khintchine relations. In
fact, many papers and texts, especially those written in Russia, acknowledge Khintchine as the person who
developed spectral density.

4. More foundations for random vibrations

The early and continued development of the Fokker–Planck equation, the development of direct analysis
techniques for random vibration, and the definition of the spectral density all formed a foundation sufficient
to develop a modern theory of random vibrations of mechanical systems. Yet many elements that would prove
extraordinarily useful to the solution of real problems of mechanical systems were not present as of 1944. For
example, the modern practice of random vibration testing of physical systems and the Monte-Carlo analysis of
complex, perhaps non-linear systems, require the generation of random process realisations. (Monte-Carlo
analysis involves (1) the specification of a random process model for excitation, (2) the generation of excitation
random process realisations, (3) the deterministic computation of the response of a system to the generated
excitations, and (4) the statistical analysis of the family of generated responses.) The framework for doing so
had not been developed.
In a far ranging paper, Rice [55] developed techniques to perform these analyses, and many others,
including a thorough investigation of the shot effect, an investigation into many measures of peak response,
and an investigation of the averages and probability distributions of the outputs of non-linear devices excited
by noise inputs. (The shot effect is observed in phenomena which give rise to sequences of pulses that occur at
random times, with potentially random amplitudes, and potentially random shapes. Any random process with
such realisations is called a shot noise. A good summary of several types of shot noise is given in [52].)
We consider, first, representations for a stationary random process that can be used to generate stationary
random process realisations. Rice developed two methods for accomplishing this. He had preceded this
development with the analysis of a shot noise random process, and stated that ‘‘the Fourier series
representation of the shot effect current y suggests the representation’’
$$X(t) = \sum_{k=1}^{N} \left(A_k \cos \omega_k t + B_k \sin \omega_k t\right), \quad -\infty < t < \infty, \qquad (4.1)$$

for the random process $\{X(t), -\infty < t < \infty\}$, where

$$\omega_k = 2\pi f_k, \quad f_k = k\,\Delta f, \quad k = 1, \ldots, N, \qquad (4.1a)$$

and the $A_k, B_k$, $k = 1, \ldots, N$, are zero-mean, uncorrelated, normally distributed random variables with variances $\Delta f\, G_{XX}(f_k)$, $k = 1, \ldots, N$. The function $G_{XX}(f)$, $0 \le f \le f_{\max}$, is the one-sided spectral density of the random process. Normally, $f_N$ is defined so that $f_N = f_{\max}$. He showed that if the ensemble of random process realisations generated with Eq. (4.1) was to have the desired spectral density, then the random variables must have the stated variance.

Realisations of the stationary random process can be generated by generating realisations of the $A_k, B_k$, $k = 1, \ldots, N$, and using them in place of the $A_k, B_k$, $k = 1, \ldots, N$, in Eq. (4.1).
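A minimal sketch of Eq. (4.1) follows; the one-sided spectral density, the frequency spacing, and the time grid are assumptions chosen only to make the example concrete.

```python
import numpy as np

# Sketch of Rice's representation, Eq. (4.1): cosine and sine amplitudes are
# zero-mean normal random variables with variance df*Gxx(f_k).  The target
# one-sided spectral density Gxx used here is an assumed band-limited shape.
rng = np.random.default_rng(2)
df, N = 0.5, 200
fk = df * np.arange(1, N + 1)                     # f_k = k*df, Hz
Gxx = np.where(fk <= 20.0, 1.0, 0.0)              # assumed one-sided PSD, units^2/Hz

A = rng.normal(0.0, np.sqrt(df * Gxx))            # A_k realisations
B = rng.normal(0.0, np.sqrt(df * Gxx))            # B_k realisations
t = np.arange(0.0, 10.0, 0.01)                    # time grid, s
wk = 2.0 * np.pi * fk
x = (A[:, None] * np.cos(np.outer(wk, t))
     + B[:, None] * np.sin(np.outer(wk, t))).sum(axis=0)   # one realisation of X(t)
```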
He also mentioned in a footnote that ‘‘this sort of representation was used by Einstein and Hopf for
radiation.’’
The representation of Eqs. (4.1) and (4.1a) is usually associated with Rice because of his 1944 and 1945
papers. For example, Wang and Uhlenbeck [26] include a section entitled ‘‘The Gaussian Random Process;
Method of Rice.’’ But the idea of representing a random process and its realisations in a Fourier expansion
goes back at least as far as Rayleigh [41,44–47] and Schuster [40,42,43]. Both authors used the harmonic

representation to motivate the idea of a spectral representation of a random process, but the harmonic
representation can be used for much more. In their 1945 paper, Wang and Uhlenbeck use it to discuss the
probability distribution of one random variable in a stationary, Gaussian random process; the joint
probability distribution of a stationary, Gaussian random process and its derivative at a fixed time, t; the joint
probability distribution of two separate random variables in a stationary, Gaussian random process; and the
joint probability distribution of any finite collection of random variables in a stationary, Gaussian random
process and/or its derivatives.
In what appears to be his only reference to the input/output relation for spectral density, he states ‘‘suppose
we are interested in the output of a certain filter when a source of thermal noise is applied to the input. Let
$|H(f)|$ be the absolute value of the ratio of the output current to the input current when a steady sinusoidal voltage of frequency $f$ is applied to the input. Then

$$S_{XX}(f) = C\, |H(f)|^2.\text{''} \qquad (4.2)$$

He went on to explain that this formula can be used to establish $C$, the spectral density of a broadband input, when both $|H(f)|$ and $S_{XX}(f)$ are known.
He also suggested an alternate representation for the stationary random process that is trigonometrically
equivalent to Eq. (4.1), yet different in its details:
$$X(t) = \sum_{k=1}^{N} C_k \cos(\omega_k t - \phi_k), \quad -\infty < t < \infty, \qquad (4.3)$$

where $\phi_k$, $k = 1, \ldots, N$, are independent random variables, uniformly distributed on $(0, 2\pi)$, the $C_k$, $k = 1, \ldots, N$, are defined as constants

$$C_k = \sqrt{2\,\Delta f\, G_{XX}(f_k)}, \quad k = 1, \ldots, N, \qquad (4.3a)$$

and the $\omega_k$ and $f_k$ are defined as in Eq. (4.1a). This definition uses harmonic components with fixed amplitudes,
and random phase angles. When N is sufficiently large, the random process in Eq. (4.3) has a normal
distribution by virtue of the Central Limit Theorem. (See [34].)
Realisations of the stationary random process can be obtained by generating realisations of the $\phi_k$, $k = 1, \ldots, N$, and using them in place of the $\phi_k$, $k = 1, \ldots, N$, in Eq. (4.3). This formula is used to
generate random process realisations for many applications, including the generation of random process
realisations for laboratory testing.
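The same assumed spectral density can be used to sketch Eq. (4.3), where the amplitudes are fixed and only the phases are random.

```python
import numpy as np

# Companion sketch for Eqs. (4.3) and (4.3a): fixed amplitudes C_k = sqrt(2*df*Gxx(f_k))
# and independent phases uniform on (0, 2*pi).  Gxx is the same assumed shape as in the
# previous sketch; for large N the realisation is approximately Gaussian.
rng = np.random.default_rng(3)
df, N = 0.5, 200
fk = df * np.arange(1, N + 1)
Gxx = np.where(fk <= 20.0, 1.0, 0.0)

Ck = np.sqrt(2.0 * df * Gxx)                          # Eq. (4.3a)
phik = rng.uniform(0.0, 2.0 * np.pi, size=N)          # random phase angles
t = np.arange(0.0, 10.0, 0.01)
x = (Ck[:, None]
     * np.cos(np.outer(2.0 * np.pi * fk, t) - phik[:, None])).sum(axis=0)
```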
Much of the material in Rice's paper deals with crossing rates of stationary random processes, and the probability distribution of peak values in stationary random processes and their envelopes. We will consider
only his work on level-crossings. But first, it is necessary to summarise another part of Rice’s work. Much of
the work that Rice did depends on the joint distribution of the response and its derivatives. And the work we
will summarise below depends on the joint probability distribution of the random process and its first
derivative, i.e., the displacement and velocity responses. The zero-mean random process $\{X(t), -\infty < t < \infty\}$ with spectral density $S_{XX}(\omega)$, $-\infty < \omega < \infty$, has the mean square

$$\sigma_X^2 = \int_{-\infty}^{\infty} S_{XX}(\omega)\, d\omega \qquad (4.4)$$

as shown in Section 3. In addition, though, Rice showed that the derivative of the random process $\{X(t), -\infty < t < \infty\}$ has the mean square

$$\sigma_{\dot X}^2 = \int_{-\infty}^{\infty} \omega^2 S_{XX}(\omega)\, d\omega \qquad (4.5)$$

if the random process derivative exists in some probabilistic sense.


Rice also showed (Uhlenbeck and Ornstein [24] had shown this previously) that the displacement and
velocity random processes are uncorrelated.
The level crossing problem addresses the number of times that a stationary random process $\{X(t), -\infty < t < \infty\}$ crosses the level $x = a$ in the time interval $(t, t + dt)$. The problem is critical in structural

Fig. 12. Schematic diagram indicating terminology with reference to level crossings and peaks.


Fig. 13. Schematic that shows signal conditions when an up-crossing of the level $x(t) = a$ occurs.

dynamics because of its association with failure analysis of certain system responses. With reference to Fig. 12, let $N_{a+}(t, t+dt)$ denote the discrete, random number of up-crossings of the level $x = a$ by the random process in the time interval $(t, t+dt)$. $N_{a+}(t, t+dt)$ is a discrete random variable with a probability mass function, i.e., a probability that the random variable equals 0, a probability that the random variable equals 1, etc.

Rice assumed that the time interval $dt$ can be defined such that either zero or one up-crossings occur in $(t, t+dt)$. We denote the up-crossing rate of the level $x = a$ as $\nu_{a+}$. Because the random process is stationary, the number of up-crossings depends directly on the interval length $dt$. These assumptions lead to

$$E[N_{a+}(dt)] = \nu_{a+}\, dt = P(\text{up-crossing}). \qquad (4.6)$$

Consider Fig. 13. An up-crossing occurs in $(t, t+dt)$ when

$$X(t) < a, \quad \dot X(t) > 0, \quad X(t) + \dot X(t)\, dt > a. \qquad (4.7)$$

The probability of up-crossing is

$$P\left(\left(a - \dot X(t)\, dt < X(t) < a\right) \cap \left(\dot X(t) > 0\right)\right). \qquad (4.8)$$

The region of integration implied by the probability of Eq. (4.8) is shown in Fig. 14. The probability is

$$\int_{0}^{\infty} dv \int_{a - v\, dt}^{a} dx\, f_{X\dot X}(x, v). \qquad (4.9)$$

Combining Eqs. (4.6) and (4.9) yields

$$\nu_{a+}\, dt = \int_{0}^{\infty} dv \int_{a - v\, dt}^{a} dx\, f_{X\dot X}(x, v). \qquad (4.10)$$

Fig. 14. Region of integration implied by the probability statement of Eq. (4.8).

When the random process is Gaussian distributed, the mean up-crossing rate of the level $x = a$ is

$$\nu_{a+} = \frac{1}{2\pi} \frac{\sigma_{\dot X}}{\sigma_X} \exp\left(-\frac{a^2}{2\sigma_X^2}\right). \qquad (4.11)$$

The mean up-crossing rate of the level $x = 0$ is

$$\nu_{0+} = \frac{1}{2\pi} \frac{\sigma_{\dot X}}{\sigma_X} = \frac{1}{2\pi} \sqrt{\frac{\int_{-\infty}^{\infty} \omega^2 S_{XX}(\omega)\, d\omega}{\int_{-\infty}^{\infty} S_{XX}(\omega)\, d\omega}}. \qquad (4.12)$$

Later, researchers in random vibrations and many other fields used this and other results developed by Rice
to characterize system responses and their probabilities of barrier crossings. For example, see Siebert [56],
Crandall [57], and Powell [58].
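A short numerical sketch of these crossing-rate formulas is given below for the white-noise-excited SDF displacement spectral density used earlier in this paper; the frequency grid, its truncation, and the chosen level $a = 2\sigma_X$ are assumptions of the example.

```python
import numpy as np

# Sketch of Rice's crossing-rate results, Eqs. (4.4), (4.5), (4.11) and (4.12),
# applied to the two-sided displacement spectral density of a white-noise-excited
# SDF system.  The grid, its truncation, and the level a = 2*sigma_X are assumptions.
wn, zeta, S0 = 2.0 * np.pi, 0.05, 1.0
w = np.linspace(-40.0 * np.pi, 40.0 * np.pi, 200_001)
Sxx = S0 / ((wn**2 - w**2)**2 + (2.0 * zeta * wn * w)**2)

dw = w[1] - w[0]
var_x = np.sum(Sxx) * dw               # Eq. (4.4): displacement mean square
var_xdot = np.sum(w**2 * Sxx) * dw     # Eq. (4.5): velocity mean square

nu_0 = np.sqrt(var_xdot / var_x) / (2.0 * np.pi)        # Eq. (4.12), close to 1 Hz here
a = 2.0 * np.sqrt(var_x)                                 # an example level
nu_a = nu_0 * np.exp(-a**2 / (2.0 * var_x))              # Eq. (4.11)
print(nu_0, nu_a)
```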

5. The popularization of random vibration of mechanical systems

The modern field of random vibrations of mechanical systems and probabilistic structural dynamics, in
general, has gained importance as the awareness that real mechanical environments are stochastic has
broadened. Today random vibration analyses are performed frequently and in practical settings, usually
within the framework of a commercial finite element code. Commercial finite element codes include, in
general, rather limited capabilities to perform probabilistic structural dynamic analyses, at least without much
pre- and post-processing. The most common analyses are those wherein the auto- (and, perhaps, cross-)
spectral densities of stationary excitation random processes are specified, and response auto- and cross-
spectral densities are computed. Some of the developments that led to the formulas that have been
implemented are summarised below.
Crandall is normally credited as the person (at least, in the United States) who made the topic of random
vibrations of mechanical systems accessible to practicing engineers. He organised a summer programme at the
Massachusetts Institute of Technology devoted to presentations on the fundamental topics in random
vibrations. The presentations are published in Crandall [1], and cover analysis of and design for random
vibrations, testing, data analysis, spectral density estimation, and other topics. Chapters 1, 2, and 4, in the
proceedings present an introduction to random vibrations on a most elementary level. Because of their
historical significance, and, in particular, the historical significance of Chapter 4, those three chapters will be
summarised first. Next, a number of papers whose publication actually preceded the MIT summer programme
will be discussed. These papers contain some of the earliest modern efforts to develop methods for the analysis

of random vibration of mechanical systems. Then we will return briefly to the proceedings of the summer
programme to define the apparent extent of understanding of random vibrations as of 1958.
The first chapter in the programme proceedings by Crandall [59], entitled ‘‘Mechanical Vibrations with
Deterministic Excitations,’’ develops, very briefly, the ideas of impulse response function and FRF, and it uses
these to write expressions for the responses of linear systems in terms of the convolution integral and its
Fourier transform. The time-domain expression for scalar, transient response, $x(t)$, $t \ge 0$, is

$$x(t) = \int_{0}^{t} q(\tau)\, h(t - \tau)\, d\tau, \quad t \ge 0, \qquad (5.1)$$

where $q(t)$, $t \ge 0$, is the scalar excitation to the system, and $h(t)$, $t \ge 0$, is the system unit impulse response function (also simply called the impulse response function). Eq. (5.1) is the same as the convolution integral of Eq. (2.9). When there is a single excitation under consideration and when response at a single point is of interest, then the functions $x(t)$, $q(t)$, $h(t)$ are scalar. The equation can be written for any measure of response desired, for example, displacement, velocity or acceleration. The excitation can be any quantity of interest, for example, force or imposed motion. The impulse response function is the response of the system at the location where $x(t)$, $t \ge 0$, is measured to a unit-impulse excitation (a unit delta function) applied at the location where $q(t)$, $t \ge 0$, is applied. Crandall's development and explanation of Eq. (5.1) was very intuitive.
When the excitation to the system is applied starting at a time less than zero, perhaps at $t \to -\infty$, then the lower limit on the integral in Eq. (5.1) can be changed to $-\infty$ and the upper limit can be changed to $\infty$. The Fourier transform of the resulting expression can be taken to obtain the frequency-domain equivalent to the convolution integral. It is

$$X(\omega) = H(\omega)\, Q(\omega), \quad -\infty < \omega < \infty, \qquad (5.2)$$

where $X(\omega)$, $-\infty < \omega < \infty$, $H(\omega)$, $-\infty < \omega < \infty$, and $Q(\omega)$, $-\infty < \omega < \infty$, are, respectively, the Fourier transforms of $x(t)$, $-\infty < t < \infty$, $h(t)$, $t \ge 0$, and $q(t)$, $-\infty < t < \infty$, defined

$$X(\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-i\omega t}\, dt, \quad -\infty < \omega < \infty,$$
$$H(\omega) = \int_{0}^{\infty} h(t)\, e^{-i\omega t}\, dt, \quad -\infty < \omega < \infty,$$
$$Q(\omega) = \int_{-\infty}^{\infty} q(t)\, e^{-i\omega t}\, dt, \quad -\infty < \omega < \infty. \qquad (5.3)$$

The function $H(\omega)$, $-\infty < \omega < \infty$, is called the FRF of the system, and it is the fundamental
descriptor of linear system behaviour in the frequency domain. It is the factor by which a
harmonic excitation can be multiplied to obtain the harmonic response of a linear system at a single
point. Its magnitude is the scale factor between input and response amplitudes, and its phase is the phase
difference between input and response. Crandall’s derivation of Eq. (5.2) was also very intuitive. He developed
Eqs. (5.1) and (5.2) because they form the basis for the fundamental input/output relations for linear random
vibrations.
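The following sketch illustrates Eqs. (5.1) and (5.3) for an SDF system; $h(t)$ is the standard SDF unit impulse response, while the rectangular force pulse and the discretisation (time step, record length) are assumptions made only for the illustration.

```python
import numpy as np

# Sketch of Eqs. (5.1) and (5.3) for an SDF system.  h(t) is the standard SDF
# unit impulse response; the rectangular force pulse and the discretisation
# are assumptions of the example.
m, zeta, wn = 1.0, 0.05, 2.0 * np.pi
dt = 0.001
t = np.arange(0.0, 20.0, dt)
wd = wn * np.sqrt(1.0 - zeta**2)
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)    # unit impulse response

q = np.zeros_like(t)
q[t < 0.5] = 1.0                                          # assumed rectangular force pulse
x = np.convolve(q, h)[: t.size] * dt                      # discrete form of Eq. (5.1)

H = np.fft.rfft(h) * dt                                   # discrete counterpart of Eq. (5.3)
freqs = np.fft.rfftfreq(t.size, dt)
print(freqs[np.argmax(np.abs(H))])                        # peak near the 1 Hz natural frequency
```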
The second chapter in the programme proceedings was written by Siebert [56], and is entitled ‘‘The
Description of Random Processes.’’ It provides an introductory discussion of the ideas of probability, random
processes, moments, correlation functions and spectral density. It seeks to establish a motivation for
describing structural dynamic excitations and responses as random processes.
Chapter 4 in Crandall [57], entitled ‘‘Statistical Properties of Response to Random Vibration,’’ starts with
the expression of linear system response in terms of the convolution integral and its Fourier transform, Eqs.
(5.1) and (5.2). He proceeds to develop the single-input/single-output relations for randomly excited linear
systems in terms of integrals involving autocorrelation and impulse response functions, and autospectral
densities and FRFs. Specifically, he derived the expression for the response autocorrelation function
$R_{XX}(\tau)$, $-\infty < \tau < \infty$, of the scalar, stationary random process $\{X(t), -\infty < t < \infty\}$:

$$R_{XX}(\tau) = \int_{-\infty}^{\infty} d\tau_1 \int_{-\infty}^{\infty} d\tau_2\, h(\tau_1)\, h(\tau_2)\, R_{QQ}(\tau + \tau_2 - \tau_1), \quad -\infty < \tau < \infty, \qquad (5.4)$$

where $R_{QQ}(\tau)$, $-\infty < \tau < \infty$, is the autocorrelation function of the stationary excitation random process $\{Q(t), -\infty < t < \infty\}$. Intuitive interpretation of Eq. (5.4) is difficult, except, perhaps, to note that $R_{XX}(0) = \sigma_X^2$ is the mean square of the response random process when the means of the excitation and response are zero.
In the same chapter, Crandall also derived the expression for the response spectral density $S_{XX}(\omega)$, $-\infty < \omega < \infty$, of the stationary random process $\{X(t), -\infty < t < \infty\}$. It is obtained through Fourier transformation of Eq. (5.4):

$$S_{XX}(\omega) = |H(\omega)|^2\, S_{QQ}(\omega), \quad -\infty < \omega < \infty, \qquad (5.5)$$

where $S_{QQ}(\omega)$, $-\infty < \omega < \infty$, is the autospectral density of the stationary excitation random process. Eq. (5.5) describes the distribution of the mean square of the response of a randomly excited system in the frequency domain. It is much easier to visualise than Eq. (5.4), and is, Crandall said, the ‘‘central result’’ of random vibrations. Using the inverse Fourier transform, Crandall next wrote the expression for the response autocorrelation function as

$$R_{XX}(\tau) = \int_{-\infty}^{\infty} |H(\omega)|^2\, S_{QQ}(\omega)\, e^{i\omega\tau}\, d\omega, \quad -\infty < \tau < \infty. \qquad (5.6)$$

When evaluated at $\tau = 0$, it shows that the response mean square is strongly influenced by the structural system modes, the frequencies where the greatest amplifications occur in the FRF, $H(\omega)$, $-\infty < \omega < \infty$, and the mean square distribution of signal content in the stationary excitation. The expression is

$$R_{XX}(0) = \sigma_X^2 = \int_{-\infty}^{\infty} |H(\omega)|^2\, S_{QQ}(\omega)\, d\omega. \qquad (5.7)$$

Following these developments, Crandall discussed the probability distribution of response in the Gaussian
excitation case. He developed expressions for the response of an SDF system to ideal white noise, and he
discussed how these results might be applied in the practical case where the excitation is not ideal white, but
only relatively constant in the vicinity of the natural frequency of a lightly damped structure. He went on to
apply one of the techniques of Rice [55], the technique for estimating the number of zero-crossings (and peaks)
in a narrowband random process, to the response of a lightly damped, SDF system.
The developments of researchers who investigated the random vibration responses of linear systems, several
years before Crandall’s workshop, are, in some cases, much more detailed and complex. Some of those
developments will be summarised in a moment. But this seems to indicate that Crandall sought to keep his
developments and results simple and clear, and he certainly accomplished that goal.
The result in Eq. (5.5) is, in a practical sense, the most important result in linear random vibrations because
it is the formula applied in the vast majority of practical analyses. Indeed, practitioners seeking to characterize
the random vibration response of systems which they are willing to approximate as linear use the formulas in
the following way. Consider a mechanical system that has been either experimentally or analytically evaluated,
and whose FRF relating acceleration response at one location to force excitation at another location has the
modulus shown in Fig. 15. The system motion may involve multiple modes, but the displayed FRF indicates
that three of them are dominant—the ones at 821, 1381, and 1706 Hz. Suppose that an analyst or designer

Fig. 15. Modulus of the FRF of acceleration response on a system to a force excitation.

Fig. 16. Spectral density of the system excitation.

Fig. 17. Spectral density of the acceleration response of the system.

wishes to characterize the response motion via its spectral density to the excitation whose spectral density is
specified by the curve of Fig. 16. She or he would compute the product of the curves in Figs. 15 and 16 to
obtain the approximate spectral density of the response shown in Fig. 17. Many features of the response
character can be inferred from its spectral density, but perhaps most important is the fact that its RMS value is approximately 22 g. (This is the square root of the area under the response spectral density curve.) The RMS value of the excitation is approximately 181 lb. As Crandall explained, if the excitation is
normally distributed with zero mean, then the response at each time is a normally distributed random variable
with zero mean and standard deviation 22 g. Gaussian realisations of the excitation and response random
processes can be obtained using a process described, for example, by Wirsching et al. [17]. That was done here,
and the excitation and response realisations are shown in Figs. 18 and 19.
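A minimal sketch of that recipe follows. The FRF modulus and excitation spectral density below are assumed stand-ins for the curves of Figs. 15 and 16, not the published data, but the final two lines are exactly the computation described above.

```python
import numpy as np

# Sketch of the practical use of Eq. (5.5): multiply the squared FRF modulus by
# the excitation spectral density and take the square root of the area to get the
# RMS response.  The arrays below are assumed stand-ins, not the curves of
# Figs. 15 and 16.
f = np.linspace(100.0, 2000.0, 4000)                           # Hz
Hmag = 1.0 + 30.0 / (1.0 + ((f - 821.0) / 20.0)**2)            # stand-in FRF modulus, g/lb
Gqq = np.interp(f, [100.0, 2000.0], [5.0, 2.0])                # stand-in excitation PSD, lb^2/Hz

Gxx = Hmag**2 * Gqq                                            # response PSD, g^2/Hz, Eq. (5.5)
rms = np.sqrt(np.sum(Gxx) * (f[1] - f[0]))                     # RMS from the area under Gxx
print(rms)
```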
Because Crandall’s is one of the earliest popular works on the subject of random vibrations of mechanical
systems, it is important to note the references he provides as the sources for his work. He cites Davenport and
Root [60] for ideas in random processes, but more importantly, he cites Laning and Battin [61] for ideas in the
response of linear systems to random excitation. Their book arose from a set of lecture notes on random
processes in the field of automatic control, first offered at MIT in 1951. They devote an entire chapter to the
subject of ‘‘Analysis of Effects of Time-Invariant Linear Systems on Stationary Random Processes.’’ In that
chapter, they express the autocorrelation function and autospectral density of the response of a linear system
to a stationary random excitation, and this is apparently the source of Crandall’s corresponding expressions.
They also refer to the idea of filtering of random processes and mention the relation of that activity to the
analysis of system response. However, Laning and Battin do not explicitly cite the source for their approach to the development of the input/output spectral density relation for randomly excited linear systems.
The reference to filtering by Laning and Battin is interesting in view of the fact that Siebert [56], in the
chapter mentioned above, also briefly develops the input/output spectral density relation for randomly excited
linear systems, and he references an early source for that development as James, Nichols and Phillips [83].
Their text is written on the subject of servomechanisms. The input/output spectral density relations are
developed in a chapter entitled ‘‘Statistical Properties of Time-Variable Data.’’ It was written by Phillips [62]
with reference to the input/output relations for servomechanisms, and with much reference to the terminology
of signal filtering. Though the equations relating the spectral density of a stationary, white noise random input

Fig. 18. Realisation of the excitation random process.

Fig. 19. Realisation of the response random process.

to the spectral density of the output of a linear system had appeared several years earlier, no reference is made
by Phillips to any other derivation.
As mentioned in Section 3, an early development of the input/output spectral density relation for linear
systems is to be found in Wiener [37] in his paper defining spectral density. The 13th section of his paper is
entitled ‘‘Spectra and Integration in Function Space,’’ and in that section he proves that ‘‘if a linear resonator
is stimulated by a uniformly haphazard sequence of impulses, each frequency responds with an amplitude
proportional to that which it would have if stimulated by an impulse of that frequency and of unit energy.’’
This is a statement of Eq. (5.5) for the case of a white noise excitation—the case where $S_{QQ}(\omega) = S_{QQ} = \text{constant}$, $-\infty < \omega < \infty$. Still, if the importance of his development to practical applications had been recognised, it might have been applied more broadly, and earlier.
Another early development of the input/output spectral density relation for linear systems excited by ideal
white noise is the one given by Wang and Uhlenbeck [26]. That paper was not written with reference to
mechanical systems in particular, but with reference to linear systems, in general. In it, the authors focus most
of their attention on finding the probability distributions of the solutions to white noise excited, first and
second order linear differential equations, and systems of second-order linear equations using the
Fokker–Planck equation approach. However, they did write the input/output spectral density relation for a
first-order linear system and for a second-order linear system excited by white noise. The first order linear
system they considered is a massless particle connected to a fixed boundary via a spring and damper, and
excited by white noise. They did not write the relations in a general way using FRF or impedance, and they did
not discuss the implications of the expressions for interpreting signal content of the response mean square.
Nevertheless, the relations are presented, perhaps, for the first time since Wiener wrote them. Whether or not they independently rediscovered the input/output spectral density relation is not clear, but their footnotes and comments on other parts of the work of Wiener appear not to credit him with the idea.
Finally, as mentioned in Section 3, the earliest reference (found by the author) to an input/output spectral
density relation for linear systems is the one given by Carson [51] in his analysis of electrical systems. His
expression of the input/output relation has its limitations—namely, it is not based on expressions for spectral
density defined in the limit as $T \to \infty$, and it is not written explicitly—but it does convey the correct concept for the
input/output spectral density relation, and it does so for a fully general (non-white noise) input random
process.
Many papers on the subject of random vibrations of mechanical systems were written prior to the
publication of the proceedings of Crandall’s MIT summer programme. Among those was a paper by Fung

[63], though it was not referenced by Crandall. It is a development that appears in the mechanical systems
literature of the input/output spectral density relation for randomly excited linear systems. His paper on the
structural dynamics of aeronautical systems used the relation between the spectral density of an excitation
force and the spectral density of a system response to express the mean square value of the response. (Instead
of using the FRF, he used system impedance.) The equation is
$$\sigma_X^2 = \int_{-\infty}^{\infty} \frac{S_{QQ}(\omega)}{|Z(\omega)|^2}\, d\omega, \qquad (5.8)$$

where $Z(\omega) = 1/H(\omega)$, $-\infty < \omega < \infty$, is the impedance of a linear system. The complex impedance of a linear
system at a given frequency is the quantity by which a harmonic component of the input must be divided in
order to obtain the harmonic component of the response. Fung did not separately express the spectral density
input/output relation, as in Eq. (5.5), but he clearly opened the door to future developments in the area.
Further, though he only wrote the relation for a simple system, he noted ‘‘The preceding relation holds for a
much wider class of dynamic systems than that represented by’’ the equation for an SDF system. ‘‘It holds for
higher order linear differential equations, linear integral equations, or linear integral-differential equations
under mild restrictions.’’ Eq. (5.8) appears to be the first occurrence of the formula in a paper on structural
dynamics. In the paper Fung referenced Liepmann [64] who applied random process concepts to the study of
aeronautical buffeting.
Fung’s paper considers many aspects of stochastic structural dynamic response, and in addition to writing
the input/output relations for linear systems, he also considered the distribution of extremes in the response of
a system. He based his analysis of extremes on developments presented by Cramer [65], and showed that the
approximate probability distribution of response extreme values can be written. He also noted that formulas
developed by Gumbel [66] can be used with data to estimate the parameters of the response extreme value
distribution. This is a practical issue, because engineers seeking to analyse the random vibrations of linear
systems were and are interested in characterizing the extremes of the response.
Further, Fung used the work of Housner [67] to express an envelope on the response of a structural dynamic
system. Housner’s work is interesting in its own right because he considers earthquakes to be sequences of
random pulses that are time-varying. In his paper, Housner developed a means for analysing earthquakes
based on averages of their Fourier transform magnitudes. Specifically, he showed that the sample mean of the
square of the Fourier transform modulus (the estimate of spectral density, when averaged) of ten earthquakes
is nearly constant in the frequency range [0.5,5] Hz. However, he did not generalise the idea, or speculate on its
meaning relative to limiting arguments on the number of signals averaged. Nor did Housner refer to the
quantity he was estimating as a spectral density.
Fung’s [63] paper led to another contribution [68]. In the latter paper he developed—in an extraordinarily
direct and intuitive manner—results far beyond any that had previously been presented, and that were not
used until much later. The objective of his main development was to obtain the formula for the expected value
of the nth power of the non-stationary random response of a linear structure given information on the
character of a non-stationary random excitation. He accomplished this in four steps. First, he specified that
non-stationary random processes that are a function of time can be modelled as the sum of a time-varying
mean function plus a non-stationary deviation from the mean. (Here, the term non-stationary refers to
random processes that are not stationary, i.e., random processes that are not in a random steady-state.) Such
an excitation random process is $\{Q(t), t \ge 0\}$, and its mean and $n$th order autocorrelation functions are $m_Q(t) = E[Q(t)]$, $t \ge 0$, and $R_{Q,\ldots,Q}(t_1, \ldots, t_n) = E[Q(t_1) \cdots Q(t_n)]$, $t_i \ge 0$. Second, starting with a deterministic convolution integral similar to Eq. (5.1), he expressed the mean and mean square responses at location $\mathbf{r}$ of
a continuous linear system in the time domain as (integral) functions of the non-stationary excitation random
process mean and autocorrelation functions. (The location on the structure, r, is written in bold because it may
denote a vector location on a multidimensional structure.) His expressions for the response mean and mean square are

$$E[X(\mathbf{r}, t)] = \int_{0}^{t} h(\mathbf{r}, t - \tau)\, m_Q(\tau)\, d\tau, \quad \mathbf{r} \in V,\ t \ge 0, \qquad (5.9)$$

$$E[X^2(\mathbf{r}, t)] = \int_{0}^{t} d\tau_1 \int_{0}^{t} d\tau_2\, h(\mathbf{r}, t - \tau_1)\, h(\mathbf{r}, t - \tau_2)\, R_{QQ}(\tau_1, \tau_2), \quad \mathbf{r} \in V,\ t \ge 0, \qquad (5.10)$$

where $h(\mathbf{r}, t)$, $\mathbf{r} \in V$, $t \ge 0$, is the impulse response function of a linear structure, i.e., the response of a structure at location $\mathbf{r}$ to an impulsive load applied at a specific, but unspecified, location at time zero. The second of these equations differs from Eq. (5.4), written three years later by Crandall, in that this is simply the expression for the response mean square. But, in addition, both these expressions are for non-stationary excitation and response, and further, they represent the response moments for a continuous structure.
His third step was to write the general, higher-order, response correlation functions of linear system
response in the time domain in terms of the multivariate autocovariance function of the random excitation.
That is,
$$E[X^n(\mathbf{r}, t)] = \int_{0}^{t} d\tau_1 \cdots \int_{0}^{t} d\tau_n \prod_{i=1}^{n} h(\mathbf{r}, t - \tau_i)\, R_{Q \ldots Q}(\tau_1, \ldots, \tau_n), \quad \mathbf{r} \in V,\ t \ge 0. \qquad (5.11)$$

Fourth, he wrote the general, higher-order, response autocorrelation functions of linear system response in
terms of frequency domain integrals. These final expressions were obtained by, first, writing the expression for
linear system response in terms of Fourier transforms:
$$X(\mathbf{r}, t) = \int_{-\infty}^{\infty} \frac{Q(\omega)}{Z(\mathbf{r}, \omega)}\, e^{i\omega t}\, d\omega, \quad \mathbf{r} \in V,\ t \ge 0, \qquad (5.12)$$

where $Q(\omega)$, $-\infty < \omega < \infty$, is the Fourier transform of the excitation $Q(t)$, $t \ge 0$, and $Z(\mathbf{r}, \omega)$, $\mathbf{r} \in V$, $-\infty < \omega < \infty$, is the impedance of the linear structure at location $\mathbf{r}$ to an applied input. As well, the impedance is the reciprocal of the Fourier transform of the impulse response function $h(\mathbf{r}, t)$, $\mathbf{r} \in V$, $t \ge 0$. Next, Fung formed general, higher-order products of the expressions at times $t_1, \ldots, t_n$, then took the expected value of the result. The expression he obtained is

$$R_{X,\ldots,X}(\mathbf{r}, t_1, \ldots, t_n) = \int_{-\infty}^{\infty} d\omega_1 \cdots \int_{-\infty}^{\infty} d\omega_n\, \frac{\overline{Q(\omega_1) \cdots Q(\omega_n)}}{Z(\mathbf{r}, \omega_1) \cdots Z(\mathbf{r}, \omega_n)}\, e^{i(\omega_1 t_1 + \cdots + \omega_n t_n)}, \quad \mathbf{r} \in V,\ t_i \ge 0. \qquad (5.13)$$
The quantity $\overline{Q(\omega_1) \cdots Q(\omega_n)}$ refers to the mean of the product of the function $Q(\omega)$, $-\infty < \omega < \infty$, evaluated at the frequencies $\omega_k$, $k = 1, \ldots, n$. If Eq. (5.5) is referred to as the ‘‘central result’’ of linear random
vibrations, then Eq. (5.13) completely generalises linear random vibrations to continuous structures with non-
stationary excitations and responses. Of course, Eq. (5.13) reduces to the important second order case for
autocorrelation function of the linear system response when n ¼ 2.
A byproduct of his development is the indirect definition of the spectral density of a non-stationary random
process. It is
$$S_{QQ}(\omega_1, \omega_2) = \overline{Q(\omega_1)\, Q(\omega_2)}, \quad -\infty < \omega_1, \omega_2 < \infty. \qquad (5.14)$$
Fung did not refer to the non-stationary random process spectral density by any special name. In fact, he
did not write the autospectral density of the linear system response. He wrote only the moments and the
higher-order correlation functions of the linear, time-domain response. (Note that Fung’s definition of the
spectral density of a non-stationary random process does not correspond exactly to the definition used in
much modern modelling and analysis. See, for example [69].)
Fung’s [68] paper covered many other subjects useful in random vibrations including the simplification of
random vibration problems using normal mode analysis, analysis of bending moments, and applications to
structural design, including extreme value considerations.
Here is an example of the use of the formulas from Fung's [68] paper. Consider, again, the system whose FRF magnitude is shown in Fig. 15. It is excited by a non-stationary, Gaussian random process with the (estimated) spectral density magnitude (of Eq. (5.14)) shown in Fig. 20. The system has a random response whose spectral density magnitude (of Eq. (5.14)) is shown in Fig. 21. Realisations of both the random excitation and response can be generated as shown in Figs. 22 and 23.
The subject of structural fatigue was and remains one of great importance in structural dynamics. Among
many other sources, it is strongly motivated by the response of structural components in the presence of a

Fig. 20. Magnitude of the spectral density of the excitation.

Fig. 21. Magnitude of the response spectral density for the system with the FRF of Fig. 15 to the excitation with the spectral density shown in Fig. 20.

rapidly varying pressure field of the sort that arises in connection with aerodynamic turbulence. Of course,
developments were starting to take place in the field of jet propulsion during the early to mid 1950s. Miles [49]
wrote perhaps the first paper that considered the subject of random fatigue of structural components in
mechanical systems. His paper started with the assumption that many lightly damped structures can be
approximated as SDF systems (an assumption essential to his mode of analysis). He expressed the response
spectral density and noted that, as far as displacement and stress responses are concerned, the spectral density
approximately equals the one that would be excited in a system by a white noise excitation with spectral
density equal to the actual input spectral density at the natural frequency of the SDF system. Specifically, if
the excitation random process is zero mean and stationary, with spectral density $S_{QQ}(\omega)$, $-\infty < \omega < \infty$, and if the SDF system has FRF $H(\omega)$, $-\infty < \omega < \infty$, then the response spectral density is given by Eq. (5.5). Miles’

Fig. 22. Realisation of the non-stationary random process with spectral density shown in Fig. 20.

Fig. 23. Realisation of the non-stationary response random process with spectral density shown in Fig. 21.

approximation states that the response spectral density can be approximated by

$$S_{XX}(\omega) = |H(\omega)|^2\, S_{QQ}(\omega_n), \quad -\infty < \omega < \infty, \qquad (5.15)$$

where $\omega_n$ is the natural frequency of the SDF system. For example, when an SDF system with natural frequency $\omega_n = 2\pi\ \mathrm{rad/s}$ and damping factor $\zeta = 0.05$ is excited by an input with the spectral density shown in Fig. 24, its displacement response is a random process with the spectral density shown in Fig. 24. The response spectral density is the product of the modulus squared of the FRF (also shown) and the input spectral density. Miles’ approximation to the response spectral density is the product of the white noise spectral density with magnitude $S_{QQ}(\omega_n)$ and the modulus squared of the FRF, both shown in Fig. 24. The RMS value of the response is the square root of the area under the response spectral density curve. In this case the exact value is 0.312 in, and the value that comes from the approximation is 0.297 in. For this example, the approximation yields accuracy of about five percent. (Of course, in individual cases, the approximation may be better or worse.) Miles’ paper may be better known for this intermediate result than for its final result, to follow.
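A sketch of this comparison is given below. The broadband excitation spectral density is an assumption standing in for the curve of Fig. 24, so the printed numbers will not reproduce the 0.312 in and 0.297 in quoted above; the two lines marked with equation numbers are the computations being compared.

```python
import numpy as np

# Sketch comparing the exact response RMS from Eq. (5.5) with Miles' white noise
# approximation, Eq. (5.15), for an SDF system.  The one-sided excitation
# spectral density used here is an assumed broadband shape, not the curve of Fig. 24.
m, zeta, wn = 1.0, 0.05, 2.0 * np.pi
fn = wn / (2.0 * np.pi)
f = np.linspace(0.01, 20.0, 40_000)                 # Hz
w = 2.0 * np.pi * f
Gqq = 1.0 / (1.0 + (f / 5.0)**2)                    # assumed excitation PSD, slowly varying

H2 = 1.0 / (m**2 * ((wn**2 - w**2)**2 + (2.0 * zeta * wn * w)**2))
df = f[1] - f[0]
rms_exact = np.sqrt(np.sum(H2 * Gqq) * df)                      # Eq. (5.5), integrated
rms_miles = np.sqrt(np.sum(H2 * np.interp(fn, f, Gqq)) * df)    # Eq. (5.15)
print(rms_exact, rms_miles)
```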
Miles continued his analysis by using his white noise approximation and noting (with reference to [55]) that the probability distribution of the peaks in a zero-mean, stationary, narrowband random process follows a Rayleigh distribution [86]. Further, he assumed that the number of cycles to a high-cycle fatigue failure is related to the operating stress level via a power law. He adopted Miner’s rule for fatigue damage accumulation of a specimen loaded at varying levels. And he used these facts to show that the single stress that is equivalent, on average, to the varying stress amplitudes of an SDF system in narrowband random response is

$$s_r \cong \left(\frac{a\, s_s^2}{e}\right)^{1/2}, \quad a \gg 1, \qquad (5.16)$$

where $a \gg 1$ is the slope on a log–log plot of the stress amplitude that corresponds to the number of cycles to failure, $s_s^2$ is the mean square value of stress at the point of interest, and $e$ is Euler’s constant. Finally, using the white noise approximation, Miles showed that the amplification ratio of this equivalent stress over the stress $s_0$

Fig. 24. Graphic showing the meaning of Miles’ approximation to the response spectral density.

caused by a static force $F_0$, is

$$\frac{s_r}{s_0} = \left[\frac{\pi\, a\, \omega_n\, S_{QQ}(\omega_n)}{4\, e\, \zeta\, F_0^2}\right]^{1/2}, \qquad (5.17)$$

where $\zeta$ is the SDF system damping factor. Given knowledge of the system’s natural frequency, the formula permits estimation of the expected time to failure.
The response spectral density approximation that Miles defined is so attractive that it has gained widespread
acceptance, and is in use in many applications. For example, NASA uses it in combination with finite element
models and a load combination scheme to establish an approximation for the overall load on a system. (See,
for example, [70–72].)
Lyon [29,73] wrote two papers in 1956 that consider, first, in a very general sense, then in a specific sense,
how random pressure fields $\{Q(\mathbf{r}, t),\ \mathbf{r} \in V,\ -\infty < t < \infty\}$ propagate and excite mechanical system responses.
The random field specification shows that it is a function of a spatial parameter as well as the time parameter;
the spatial parameter, r, is bold because it can describe the locations where the random process is to be
considered in one, two, or three dimensions, and this parameter is limited to the volume, V. The random
response Lyon considered is also a random field; it varies as a function of space and time. As Miles did, he also
based his analysis on the work of Rice [55], but in addition to considering processes that are random in time,
he also considered spatial randomness. In the first (and more general) paper he expressed the excitation field as
a ‘‘random superposition of elementary events (eddies)’’ that propagate in space and time. The spatial
randomness was included by permitting the eddies to originate at random locations. He defined a probability
law to govern the random field, permitting the time and location of origination of eddies to be dependent, and
requiring amplitudes of the eddies to be independent. He used the joint probability distribution with the form
of the random field to establish a space–time correlation function for the noise field. It is $E[Q(\mathbf{r}_1, t_1)\, Q(\mathbf{r}_2, t_2)]$, $\mathbf{r}_1, \mathbf{r}_2 \in V$, $t_1, t_2 \ge 0$. He then ‘‘propagated’’ the noise field through a linear system to obtain the correlation function of the response. To do so, he noted that the response of a linear, continuous system to a deterministic excitation $q(\mathbf{r}_0, t_0)$, $\mathbf{r}_0 \in V$, $-\infty < t_0 \le t$, is

$$x(\mathbf{r}, t) = \int_{-\infty}^{t} dt_0 \int_{V} d\mathbf{r}_0\, g(\mathbf{r}, t; \mathbf{r}_0, t_0)\, q(\mathbf{r}_0, t_0), \quad \mathbf{r} \in V,\ -\infty < t < \infty, \qquad (5.18)$$

where $g(\mathbf{r}, t; \mathbf{r}_0, t_0)$, $\mathbf{r}, \mathbf{r}_0 \in V$, $-\infty < t_0, t < \infty$, is the Green’s function of the system, i.e., the response of the structure at time $t$ and location $\mathbf{r}$ to a temporally impulsive excitation at time $t_0$ and location $\mathbf{r}_0$. By writing this expression using the random field excitation, $Q(\mathbf{r}_0, t_0)$, in place of the deterministic excitation, $q(\mathbf{r}_0, t_0)$, in Eq. (5.18), evaluating the result at $(\mathbf{r}_a, t)$, multiplying the result by the expression for the response $x(\mathbf{r}_b, s)$, and

averaging the result over the random excitation products, Lyon obtained
$$R_{XX}(\mathbf{r}_a, t; \mathbf{r}_b, s) = \int_{-\infty}^{t} dt_1 \int_{-\infty}^{s} dt_2 \int_{V} d\mathbf{r}_1 \int_{V} d\mathbf{r}_2\, g(\mathbf{r}_a, t; \mathbf{r}_1, t_1)\, g(\mathbf{r}_b, s; \mathbf{r}_2, t_2)\, R_{QQ}(\mathbf{r}_1, t_1; \mathbf{r}_2, t_2), \quad \mathbf{r}_a, \mathbf{r}_b \in V,\ -\infty < t, s < \infty, \qquad (5.19)$$

where $R_{QQ}(\mathbf{r}_1, t_1; \mathbf{r}_2, t_2) = E[Q(\mathbf{r}_1, t_1)\, Q(\mathbf{r}_2, t_2)]$, the expected value of the product $Q(\mathbf{r}_1, t_1)\, Q(\mathbf{r}_2, t_2)$, is the autocorrelation function of the random field $\{Q(\mathbf{r}, t),\ \mathbf{r} \in V,\ -\infty < t < \infty\}$, and $R_{XX}(\mathbf{r}_a, t; \mathbf{r}_b, s) = E[X(\mathbf{r}_a, t)\, X(\mathbf{r}_b, s)]$ is the autocorrelation function of the response field. Lyon concluded the first paper without
solving a numerical example for a specific physical system. He also pointed out that an alternate means to
obtain this result had been developed by Eckart [74] who propagated individual excitation realisations to
response realisations, then averaged over the response realisations to obtain the autocorrelation function of
Eq. (5.19).
Lyon’s second (and more specific) paper applied the theory of the first paper (i.e., Eq. (5.19)) to several
different types of strings. He considered both finite and infinite strings, and experimented with some of the
finite strings. He compared his model-predicted results to experimental measurements to show the
applicability of the mathematical formulation.
Thomson and Barton [75] wrote a paper that extended the approximation of Miles [49]. They pointed out
that through modal analysis the equations of motion of a complex linear structure can be reduced to a set of
equations that have the form of the equation governing motion of an SDF system. These simple equations can
be evaluated individually, then the mean square responses in the modes can be synthesised to approximate the
mean square response at a point on the system. Their modal assumption is that the response at a location $\mathbf{r}$ on a complex system, $x(\mathbf{r}, t)$, $\mathbf{r} \in V$, $-\infty < t < \infty$, can be expressed as a series

$$x(\mathbf{r}, t) = \sum_{k} x_k(t)\, \phi_k(\mathbf{r}), \quad \mathbf{r} \in V,\ -\infty < t < \infty, \qquad (5.20)$$

where $\phi_k(\mathbf{r})$, $k = 1, 2, \ldots$, $\mathbf{r} \in V$, are the mode shapes of the system, i.e., the shapes the system assumes when the response is harmonic and occurs at the modal frequencies $\omega_k$, $k = 1, 2, \ldots$, and $x_k(t)$, $k = 1, 2, \ldots$, $-\infty < t < \infty$, are the modal coordinates of the system, i.e., the amplitudes of the responses in the individual modes. When a system is excited by a single excitation $q(t)$, $-\infty < t < \infty$, the response in the $k$th mode is governed by

$$\ddot{x}_k + 2\zeta_k \omega_k \dot{x}_k + \omega_k^2 x_k = \phi_k(\mathbf{r}_{in})\, q(t), \quad k = 1, 2, \ldots,\ -\infty < t < \infty, \qquad (5.21)$$


where $\zeta_k$, $k = 1, 2, \ldots$, are the modal damping factors, and $\mathbf{r}_{in}$ is the location on the continuous structure where the excitation is applied. Based on this equation, when the actual input is a stationary random process, an approximation to the mean square response in each mode can be established using Miles’ approximation. This is the approximation that Thomson and Barton made to obtain the overall mean square response at a point on the complex linear structure. When the excitation random process applied at the point $\mathbf{r}_{in}$ has zero mean, and is stationary with spectral density $S_{QQ}(\omega)$, $-\infty < \omega < \infty$, then the approximate mean square response of the system at location $\mathbf{r}$ is (when the response achieves a stationary

Fig. 25. Magnitude of the force-in/displacement-out FRF for a three-degree-of-freedom linear system.

state) given by
$$\sigma_X^2(\mathbf{r}) = \sum_{k} \frac{\pi\, \phi_k^2(\mathbf{r}_{in})\, \phi_k^2(\mathbf{r})\, S_{QQ}(\omega_k)}{2\, \zeta_k\, \omega_k^3}, \quad \mathbf{r} \in V. \qquad (5.22)$$

Development of this approximation includes the assumption that the correlation between pairs of modal
responses is zero.
Here is an example that tests the accuracy of Thomson and Barton’s approximation in a specific case. Consider the system whose force input/displacement output FRF magnitude is given in Fig. 25. (The system is the one considered in Fig. 15, except that here, displacement response is of interest.) The modal frequencies were previously given as 821, 1384, and 1706 Hz. It is now necessary to specify that the system has modal dampings of 0.020, 0.035, and 0.015. The system is a three dof system with masses of 0.025, 0.010 and 0.005 lb s²/in. The mode shape amplitudes (normalised with respect to mass) at the input location are 2.66, 5.68 and 0.79, for the first, second and third modes, respectively. The mode shape amplitudes at the response location are 5.25, 1.30 and 8.41. The system is excited with a zero-mean stationary random process with the spectral density shown in Fig. 26. The spectral density values at the modal frequencies are 3.72, 2.84, and 4.78 lb²/Hz. The exact spectral density of the system response is shown in Fig. 27. The mean square response is the area under the response spectral density curve. It is $3.45 \times 10^{-8}$ in² with corresponding RMS response $1.86 \times 10^{-4}$ in. Thomson and Barton’s approximation can be used to estimate the same quantities using Eq. (5.22). The approximate results are a mean square response of $3.52 \times 10^{-8}$ in² and an RMS response of $1.88 \times 10^{-4}$ in. In this case, the accuracy of the RMS prediction is about 1%.
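The numbers quoted above can be checked with a few lines of arithmetic. The sketch below applies Eq. (5.22) directly; the only assumption beyond the quoted data is the conversion of the one-sided spectral density values in lb²/Hz to the two-sided, rad/s form in which Eq. (5.22) is written.

```python
import numpy as np

# Sketch re-computing Thomson and Barton's approximation, Eq. (5.22), from the
# quantities quoted in the example above.  The quoted spectral densities are
# one-sided in Hz, so they are converted (an assumed S = G/(4*pi) relation) to the
# two-sided rad/s form before Eq. (5.22) is applied.
fk = np.array([821.0, 1384.0, 1706.0])         # modal frequencies, Hz
wk = 2.0 * np.pi * fk                          # rad/s
zk = np.array([0.020, 0.035, 0.015])           # modal damping factors
phi_in = np.array([2.66, 5.68, 0.79])          # mode shapes at the excitation point
phi_out = np.array([5.25, 1.30, 8.41])         # mode shapes at the response point
Gqq = np.array([3.72, 2.84, 4.78])             # one-sided excitation PSD, lb^2/Hz
Sqq = Gqq / (4.0 * np.pi)                      # two-sided PSD in rad/s units

mean_square = np.sum(np.pi * phi_in**2 * phi_out**2 * Sqq / (2.0 * zk * wk**3))
print(mean_square, np.sqrt(mean_square))       # close to the 3.52e-8 in^2 and 1.88e-4 in quoted above
```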
The final paper (published before 1958) to be summarised here was written by Eringen [76]. He considered,
specifically, the random vibrations of linear beams and plates. The approach he took was to write the
deterministic solution for the response in terms of Fourier expansion. The response expressions so obtained
are series in the eigenfrequencies and eigenshapes of the beam or plate, and the excitation component
amplitudes. Eringen assumed that the excitation is the product of a temporal white noise and a shape function
defined over the beam or plate. For these systems he developed expressions for the response auto- and cross-
correlation functions, and the response auto- and cross-spectral densities. The autocorrelation functions and
the autospectral densities are the quantities that other researchers of the time were developing for general and

Fig. 26. Spectral density of force applied to the system (S_QQ(f), lb²/Hz, versus frequency, Hz).


Fig. 27. Spectral density of system response (S_XX(f), in²/Hz, versus frequency, Hz).



specific structural response random vibrations. The cross-correlation functions and cross-spectral densities
were not commonly developed at the time, but are required to establish exact expressions for random response
spectral densities, mean squares, etc., when the response is the superposition of multiple components like
modes, or responses excited by multiple inputs simultaneously.
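To make that distinction concrete in the notation already used for Eq. (5.22), consider a single stationary input applied at r_in; the expressions below are a generic modal-superposition sketch rather than Eringen's own notation. The exact stationary mean square at a response point r is

\sigma_X^2(r) = \sum_j \sum_k \phi_j(r)\, \phi_k(r)\, \phi_j(r_{in})\, \phi_k(r_{in}) \int_{-\infty}^{\infty} H_j(\omega)\, H_k^{*}(\omega)\, S_{QQ}(\omega)\, d\omega, \qquad H_k(\omega) = \frac{1}{\omega_k^2 - \omega^2 + 2 i \zeta_k \omega_k \omega}.

The j = k terms, evaluated with the broadband approximation \int_{-\infty}^{\infty} |H_k(\omega)|^2 S_{QQ}(\omega)\, d\omega \approx \pi S_{QQ}(\omega_k)/(2 \zeta_k \omega_k^3), recover Eq. (5.22); the j ≠ k terms are the cross-spectral contributions that the approximation neglects.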
Eringen solved some specific random vibrations problems for beams and plates, and in those he considered
the case of spatially concentrated loads. His discussion covers practical issues of linear system response
analysis using the framework of Fourier analysis.
Several other noteworthy papers appeared in the proceedings of Crandall's [1] workshop. Among these is
one by Powell [58], entitled ‘‘On the Response of Structures to Random Pressures and to Jet Noise in
Particular.’’ He also used modal analysis to express the response of continuous linear systems, and explored
the importance of spatial distribution of loads. Further, he expressed the correlations between modal response
components. In many respects the paper is similar to Fung’s [68] paper, and is, in fact, very advanced for the
era. A paper by Dyer [77] in the MIT workshop, entitled ‘‘Estimation of Sound-Induced Missile Vibrations,’’
is a very practical, empirical presentation of the acoustic load field, followed by a vibration response
characterisation for missiles subjected to sound excitation.
Another paper in the 1958 Random Vibration volume, by Rona [50], entitled ‘‘Instrumentation for Random
Vibration,’’ considers both issues of experimental instrumentation and the estimation of spectral density. He
discussed several types of accelerometer, and described how analog-to-digital conversion of measured signals
is performed. He also discussed the possibility of digital signal analysis, but pointed out that it was not in
general use at the time. He described what was, at the time of that writing, the technique used for estimation of
spectral density. In essence, his description is the one given by Fig. 9 in Section 3. He referenced Blackman and
Tukey [78] for the spectral density estimation technique.
Hardware implementations of the approach he described were used to estimate spectral densities of
stationary random sources in the laboratory and the field, through at least the mid-1960s. The input signals
were, of course, analog. The filters were band-pass filters, and were used in banks of 40–80. Spectral density
estimates were frequently obtained up to 1000–2000 Hz. Mean square values of signal components were
obtained in two ways. First, the filtered signals might be run through a full-wave rectifier. The means of the
resulting signals could be computed, and from this, the RMS values of the filtered signal components inferred.
These quantities could be squared to obtain estimates of the component mean squares. A second method
involved the squaring of the filtered signal components using an analog squarer, then the averaging of the
resulting signals to obtain the component mean squares. Finally, the mean square of each signal component
was scaled by the factor 1/(BW) (the inverse of the effective bandwidth of the band-pass filters) to obtain the
analog estimate of spectral density.
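As a rough illustration of the procedure just described, the sketch below emulates a single channel of such a filter bank digitally, using the second (squaring) method: band-pass filter, square, time-average, and scale by 1/BW. The Butterworth band-pass design from scipy is used purely for convenience and stands in for the analog filters; function and variable names are illustrative, not taken from any of the referenced works.

```python
import numpy as np
from scipy import signal

def band_psd_estimate(x, fs, f_center, bw, order=4):
    """Estimate the spectral density of x in a single band centred at f_center
    (units of x squared per Hz), mimicking one channel of the analog filter-bank
    method: band-pass filter, square, average, then scale by 1/BW."""
    sos = signal.butter(order, [f_center - bw / 2.0, f_center + bw / 2.0],
                        btype="bandpass", fs=fs, output="sos")
    y = signal.sosfilt(sos, x)      # band-limited component (analog band-pass filter)
    mean_square = np.mean(y ** 2)   # analog squarer followed by an averager
    return mean_square / bw         # scale by the inverse of the effective bandwidth

# Example: a 20-s random record sampled at 10 kHz, analysed in 50-Hz bands up to 2000 Hz
fs = 10000.0
x = np.random.default_rng(0).normal(size=int(20 * fs))
centers = np.arange(100.0, 2000.0, 50.0)   # one centre frequency per filter in the bank
psd = [band_psd_estimate(x, fs, fc, bw=50.0) for fc in centers]
```

A bank of 40 to 80 such channels, each with its own centre frequency, corresponds to the hardware arrangement described above; the first (rectifier) method differs only in how the component mean square is formed.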
Rona’s paper presented an early discussion (one of many) on the subject of test instrumentation, random
signal analysis, and random vibration test control. Through the years, as much time, effort and funding have
gone into the performance of laboratory and field random vibration testing and analysis as into the
development of analytical techniques. The reason is simple: those responsible for assuring that real, testable
systems are robust and reliable have tended to trust the results of physical experiments over, or in addition to,
analytical predictions. That attitude is changing, but a discussion of the reasons why requires another paper.
For that matter, a complete history of random vibration and mechanical shock testing also deserves a
separate treatment.
The papers discussed in this section form a part of the legacy of the modern development of the theory of
random vibrations. During the 1960s and beyond the study of random vibrations of mechanical systems
flourished, and many papers dealing with the subject were published. Even the briefest summary of the topics
covered in random vibration studies during the past 45 years would require a substantial discussion and admit
an enormous bibliography. For those reasons, the papers summarised here stand out, forming the starting
point of the modern theory of random vibrations.

6. Conclusion

The history of the mathematical theory of random vibrations of mechanical systems spans the previous
century, starting with the work of Einstein, and continues to the present. This paper briefly presents some

Fig. 28. Time line of contributors to the mathematical theory of random vibrations of mechanical systems: Rayleigh (1880, 1889); Einstein and Schuster (1905); Smoluchowski, Fokker and Planck (1916); Uhlenbeck, Ornstein and Wiener (1930); Rice (1944); Wang and Uhlenbeck (1945); Fung et al. (1953); Crandall (1958).

developments in that history through the year 1958. And though the groundwork for random vibration
analysis was laid over that entire period, the work dealing with application to structural and mechanical
systems started in earnest in the 1950s. While numerous papers contributing to the development of random
vibrations have been summarised, there are some early texts that rate special mention. These are some of the
texts that form the foundations for the rich, formative contributions to come after 1958:

• The text by Crandall and Mark [3] provides a particularly accessible development of the formulas and
techniques of random vibration of linear structures. It covers in great detail the random vibration of two-
dof structures, and provides tables of formulas useful in the computation of random vibration integrals.
• Lin [5] provided the first text with a detailed mathematical presentation of random vibration of mechanical
systems. He included a concise, yet thorough summary of probability and random process theory. He
covered SDF and multi-dof linear systems, as well as non-linear systems. He covered direct moment-based
approaches to random vibration as well as Markov vector-based approaches.
• The text by Robson [4] provides a very concise summary of deterministic structural dynamics and random
vibration of linear systems. It includes several developments fundamental to random vibrations and
random processes that are omitted from other texts.
• The introduction of random vibrations as a theory useful for the solution of engineering problems
necessitated the development of techniques for estimation of the quantities required to apply the theory.
The text by Bendat [79] satisfied many of these needs. It provides methods for the estimation of the spectral
density and autocorrelation functions.

Finally, for the convenience of the reader, we present a time line of the major contributors to the field of
random vibrations. Fig. 28 shows many of the major contributors, the years of their contributions, and their
connections to earlier and later contributors.

Acknowledgement

The author owes a great debt of gratitude to the individuals who reviewed the manuscript. They corrected
numerous errors. They demanded clarity of wording and presentation, and suggested how to achieve it. When
the sourcing of historical developments appeared inadequate, they encouraged more research. They are Allan
Piersol, Harry Himelblau, David Smallwood, Tim Hasselman, George Lloyd, Bill Hughes and Mark
McNellis. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin
Company, for the United States Department of Energy under Contract DE-AC04-94AL85000.

References

[1] S. Crandall (Ed.), Random Vibration, Technology Press of MIT and Wiley, New York, 1958.
[2] S. Crandall (Ed.), Random Vibration, MIT Press, Cambridge, MA, 1963.
[3] S. Crandall, W. Mark, Random Vibration in Mechanical Systems, Academic, New York, 1963.
[4] J. Robson, An Introduction to Random Vibration, Elsevier, New York, 1964.
[5] Y. Lin, Probabilistic Theory of Structural Dynamics, McGraw-Hill, New York, 1967 (Republished as Y. K. Lin, Probabilistic Theory
of Structural Dynamics, Krieger, Huntington, NY, 1967).
[6] I. Elishakoff, Probabilistic Methods in the Theory of Structures, Wiley, New York, 1983.
[7] N. Nigam, Introduction to Random Vibrations, MIT Press, Cambridge, MA, 1983.
[8] D. Newland, Random Vibrations, Spectral and Wavelet Analysis, Longman, New York, 1993.
[9] V. Bolotin, Random Vibration of Elastic Systems, Martinus Nijhoff, The Hague, The Netherlands, 1984.
[10] G. Augusti, A. Baratta, F. Casciati, Probability Methods in Structural Engineering, Chapman & Hall, New York, 1984.
[11] R. Ibrahim, Parametric Random Vibration, Wiley, New York, 1985.
[12] C. Yang, Random Vibration of Structures, Wiley, New York, 1986.
[13] G. Schueller, M. Shinozuka (Eds.), Stochastic Methods in Structural Dynamics, Martinus Nijhoff, Boston, 1987.
[14] J. Roberts, P. Spanos, Random Vibration and Statistical Linearization, Wiley, New York, 1990.
[15] R.G. Ghanem, P.D. Spanos, Stochastic Finite Elements: A Spectral Approach, Springer, New York, 1991.
[16] T. Soong, M. Grigoriu, Random Vibration of Mechanical and Structural Systems, Prentice-Hall, Englewood Cliffs, NJ, 1993.
[17] P. Wirsching, T. Paez, K. Ortiz, Random Vibrations: Theory and Practice, Wiley, New York, 1995.
[18] J. Bendat, A. Piersol, Random Data: Analysis and Measurement Procedures, third ed, Wiley, New York, 2000.
[19] B. Gnedenko, Theory of Probability, sixth ed, Gordon and Breach, UK, 1997.
[20] W. Feller, An Introduction to Probability Theory and Its Applications, Wiley, New York, 1971.
[21] A. Einstein, On the movement of small particles suspended in a stationary liquid demanded by the molecular kinetic theory of heat,
Annalen der Physik 17 (1905) 549 (also reprinted in Einstein, 1956).
[22] A. Ang, W. Tang, Probability Concepts in Engineering Planning and Design, vol. 1—Basic Principles, Wiley, New York, 1975.
[23] A. Einstein, Investigations on the Theory of the Brownian Movement, Dover Publications, New York, 1956 (edited by R. Furth).
[24] G. Uhlenbeck, L. Ornstein, On the theory of the Brownian motion, Physical Review 36 (1930) 823–841 (reprinted in Wax, 1954); (see
also Ref. [87]).
[25] R. Furth, Annalen der Physik 53 (1917) 177.
[26] M. Wang, G. Uhlenbeck, On the theory of Brownian motion II, Reviews of Modern Physics 17 (2 and 3) (1945) 323–342 (reprinted in
Wax, 1954); (see also Ref. [87]).
[27] L. Ornstein, Proceedings of the Academy Amsterdam 21 (96) (1919).
[28] G.A. Van Lear, G.E. Uhlenbeck, The Brownian motion of strings and elastic rods, Physical Review 38 (1931) 1583–1598.
[29] R.H. Lyon, Response of strings to random noise fields, Journal of the Acoustical Society of America 28 (3) (1956) 391–398.
[30] L.S. Ornstein, Zeits. F. Physik. 41 (1927) 848.
[31] A. Houdijk, Archives Neerlandaises des Sciences Exactes et Naturelles, Series III A 11 (1928) 212.
[32] M. Planck, Berl. Ber. (1927) 324.
[33] A. Fokker, Dissertation, Leiden, 1913.
[34] A. Papoulis, S. Unnikrishna Pillai, Probability, Random Variables and Stochastic Processes, fourth ed, McGraw-Hill, New York,
2002.
[35] M. Kac, Random walk and the theory of Brownian motion, American Mathematical Monthly 54 (7) (1947) (reprinted in Wax, 1954).
[36] A.N. Kolmogorov, On analytical methods in the theory of probability, Mathematische Annalen 104 (1931) 415–458.
[37] N. Wiener, Generalized harmonic analysis, Acta Mathematica 55 (118) (1930).
[38] A. Schuster, The periodogram of magnetic declination, Cambridge Philosophical Transactions 18 (1899) 108.
[39] A. Schuster, The Periodogram of magnetic declination, Transactions of the Cambridge Philosophical Society (1900) 107–135.
[40] A. Schuster, The periodogram and its optical analogy, Proceedings of the Royal Society 77 (1905) 136–140.
[41] L. Rayleigh, On the character of the complete radiation at a given temperature, Philosophical Magazine 27 (1889) 460–469.
[42] A. Schuster, On interference phenomena, Philosophical Magazine 37 (1894) 509–545.
[43] A. Schuster, On lunar and solar periodicities of earthquakes, Proceedings of the Royal Society of London 61 (1897) 455–465.
[44] L. Rayleigh, On the spectrum of an irregular disturbance, Philosophical Magazine 5 (1903) 238–243.
[45] L. Rayleigh, Remarks concerning Fourier’s theorem as applied to physical problems, Philosophical Magazine 24 (1912) 864–869.
[46] L. Rayleigh, On the problem of random vibrations, and of random flights in one, two, or three dimensions, Philosophical Magazine
37 (1919) 321–347.
[47] L. Rayleigh, On the resultant of a number of unit vibrations, whose phases are at random over a range not limited to an integral
number of periods, Philosophical Magazine 37 (1919) 498–515.
[48] G.I. Taylor, Diffusion by continuous movements, Proceedings of the London Mathematical Society 20 (1920) 196–212.
[49] J.W. Miles, On structural fatigue under random loading, Journal of the Aeronautical Sciences 21 (1954) 753–762.
[50] T. Rona, Instrumentation for random vibration, in: S. Crandall (Ed.), Random Vibration, 1958 (Chapter 7).
[51] J.R. Carson, Selective circuits and static interference, Bell System Technical Journal (1925) 265–279.
[52] J.R. Carson, The statistical energy-frequency spectrum of random disturbances, Bell System Technical Journal (1931) 374–381.

[53] D. Smallwood, Random vibration testing of a single test item with a multiple input control system, in: Proceedings of the Institute of
Environmental Sciences Annual Meeting, IES, 1982.
[54] D. Smallwood, Random vibration control system for testing a single test item with multiple inputs, Advances in Dynamic Analysis
and Testing, SAE Publication SP-529, Paper No. 821482, 1982.
[55] S.O. Rice, Mathematical analysis of random noise, Bell System Technical Journal 23 (1944, 1945) 282–332 (V. 24, pp. 46–156.
Reprinted in Wax, 1954); (see also Ref. [87]).
[56] W.M. Siebert, The description of random processes, in: S. Crandall (Ed.), Random Vibration, 1958 (Chapter 2).
[57] S. Crandall, Statistical properties of response to random vibration, in: S. Crandall (Ed.), Random Vibration, 1958 (Chapter 4).
[58] A. Powell, On the response of structures to random pressures and to jet noise in particular, in: S. Crandall (Ed.), Random Vibration,
1958 (Chapter 8).
[59] S. Crandall, Mechanical Vibrations with Deterministic Excitations, in: S. Crandall (Ed.,) Random Vibration, 1958 (Chapter 1).
[60] W.B. Davenport, W.L. Root, Random Signals and Noise, McGraw-Hill Book Co., New York, 1956.
[61] J.H. Laning, R.H. Battin, Random Processes in Automatic Control, McGraw-Hill, New York, 1956.
[62] R. Phillips, Statistical properties of time variable data, James, Nichols and Phillips, 1947 (Chapter 6).
[63] Y. Fung, Statistical aspects of dynamic loads, Journal of the Aeronautical Sciences 20 (1953) 317–330.
[64] H. Liepmann, On the application of statistical concepts to the buffeting problem, Journal of the Aeronautical Sciences 19 (12) (1952)
793 (also, An Approach to the buffeting problem from turbulence considerations, Report No. SM-13940, Douglas Aircraft
Company, Inc., 1951).
[65] H. Cramer, Mathematical Methods of Statistics, Princeton University Press, Princeton, NJ, 1946.
[66] E. Gumbel, Les Valeurs Extremes des Distributions Statistiques, Annales de l'Institut Henri Poincaré 5 (2) (1935) 115–158.
[67] G. Housner, Characteristics of strong motion earthquakes, Bulletin of the Seismological Society of America 37 (1947) 19–31.
[68] Y. Fung, The analysis of dynamic stresses in aircraft structures during landing as nonstationary random processes, Journal of
Applied Mechanics 22 (1955) 449–457.
[69] L. Cohen, Time-frequency distributions—a review, Proceedings of the IEEE 77 (7) (1989) 941–981.
[70] R. Ferebee, Loads combination research at Marshall Space Flight Center, Marshall Space Flight Center, Alabama, Report No.
NASA/TM-2000-210331, 2000.
[71] H. Himelblau, D.L. Kern, J.E. Manning, A.G. Piersol, S. Rubin, Dynamic environmental criteria, NASA Technical Handbook,
NASA-HDBK-7005, 2001.
[72] NASA, Payload flight equipment requirements and guidelines for safety-critical structures, International Space Station Program,
NASA, No. SSP 52005 Revision C, 2002.
[73] R.H. Lyon, Propagation of correlation functions in continuous media, Journal of the Acoustical Society of America 28 (1) (1956)
76–79.
[74] C. Eckart, Physical Review 91 (1953) 784.
[75] W.T. Thomson, M.V. Barton, The response of mechanical systems to random excitations, Journal of Applied Mechanics 24 (1957)
248–251.
[76] A.C. Eringen, Response of beams and plates to random loads, Journal of Applied Mechanics 24 (1957) 46–52.
[77] I. Dyer, Estimation of sound-induced missile vibrations, in: S. Crandall (Ed.,) Random Vibration, 1958 (Chapter 9).
[78] R.B. Blackman, J.W. Tukey, Measurements of power spectra from the viewpoint of communication engineering, Bell System
Technical Journal, 1958.
[79] J.S. Bendat, Principles and Applications of Random Noise Theory, Wiley, New York, 1958.
[83] H. James, N. Nichols, R. Phillips (Eds.), Theory of Servomechanisms, Radiation Laboratory Series, vol. 25, MIT, McGraw-Hill,
New York, 1947.
[84] A. Khintchine, Korrelations Theorie der Stationaren Stochastischen Prozesse, Mathematische Annalen 109 (1934) 604–615.
[86] L. Rayleigh, On the resultant of a large number of vibrations of the same pitch and of arbitrary phase, Philosophical Magazine 10
(1880) 73–78.
[87] N. Wax (Ed.), Selected Papers on Noise and Stochastic Processes, Dover Publications, New York, 1954.
[88] M. v. Smoluchowski, Physikalische Zeitschrift 17 (1916) 557.
