Computers & Structures Vol. 41, No. 4, pp. 649-659, 1992
Printed in Great Britain.

0045-7949/92 $5.00 + 0.00
© 1992 Pergamon Press plc

COMPENDIUM

USE OF NEURAL NETWORKS IN DETECTION OF


STRUCTURAL DAMAGE
X. Wu,† J. Ghaboussi† and J. H. Garrett, Jr‡
†Department of Civil Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, U.S.A.
‡Department of Civil Engineering, Carnegie-Mellon University, Pittsburgh, PA 15213, U.S.A.

(Received 15 November 1990)

Abstract--Damage to structures may be caused as a result of normal operations, accidents, deterioration
or severe natural events such as earthquakes and storms. Most often the extent and the location of the
damage can be determined through visual inspection. However, in some cases visual inspection may not
be feasible. The study reported in this paper is the first stage of a research aimed at developing automatic
monitoring methods for detection of structural damage. In this feasibility study we have explored the use
of the self-organization and learning capabilities of neural networks in structural damage assessment. The
basic strategy is to train a neural network to recognize the behavior of the undamaged structure as well
as the behavior of the structure with various possible damage states. When the trained network is subjected
to the measurements of the structural response, it should be able to detect any existing damage. We have
tried this basic idea on a simple structure and the results are promising.

1. INTRODUCTION

An effective and reliable damage assessment methodology will be a valuable tool in the timely determination of the damage and deterioration state of structural members. The information produced by a damage assessment process can play a vital role in the development of economical repair and retrofit programs. The most common method of damage assessment is by visual inspection. However, visual inspection of large and complex structures may be difficult and costly due to problems of accessibility, and in some cases visual inspection may prove to be unreliable. Examples of such structures are offshore oil and gas exploration and production platforms and aircraft fuselages. In recent years, the indirect diagnostic methodology for structural damage assessment has attracted the attention of many researchers. The basic approach is to detect changes in the dynamic response and/or in the dynamic characteristics of the structure. These changes are subsequently related to various damage states. The objective of any damage assessment is to determine whether structural damage has occurred and, if so, to determine the location and the extent of the damage.

Most of the damage assessment methods proposed in the literature follow more or less the same approach. A number of basic steps can be identified in these traditional damage assessment methods. First, a mathematical model for the structure is constructed. The mathematical model is then used to develop an understanding of the structural behavior and to establish correlations between specific member damage conditions and changes in the structural response. These mathematical models are inherently direct process models, proceeding linearly from causes to effects. However, the identification of member damage from the response of the damaged structure is an inverse process: causes must be discerned from the effects. This diagnostic step is a comprehensive search for the causes of the observed structural behavior; this process is computationally expensive and requires a well developed data tracking system.

Researchers have proposed damage assessment schemes based on analysis of measured dynamic responses of the structure before and after damage. Some of the proposed schemes only use the frequency response of the structure under applied loads or the frequency content of the ambient vibration tests [1-5]. More recent works in this area also utilize the information on the mode shapes of the structure [6]. While the existing mathematical models for complex structures may be sufficiently accurate for the purposes of analysis and design, these models are not in themselves sufficient for diagnosing which members may be causing the observed changes in the dynamic response of the structure. A robust damage assessment methodology must be capable of recognizing patterns in the observed response of the structure resulting from individual member damage, including the capability of determining the extent of member damage. This capability appears to be within the scope of the pattern matching capabilities of neural networks.

‡ To whom correspondence should be addressed.


The utilization of these capabilities of neural networks in damage assessment forms the basis of the investigation described in this paper.

In this study, neural networks are used to extract and store the knowledge of the patterns in the response of the undamaged and the damaged structure. Thus, the need for construction of the mathematical models and the comprehensive inverse search is avoided. Neural networks are computational models, loosely based on the structure and the information processing capabilities of the human brain. A neural network is an assembly of a large number of highly interconnected simple processing units (neurons). The connections between the neurons have numerical values which represent the strength of these connections, called weights. Knowledge is stored in the form of a collection of connection strengths. These neural networks are capable of self-organization and knowledge acquisition (learning). This capability allows automatic determination of the connection strengths from the data containing the knowledge to be extracted.

The objective of the pilot study presented here is to explore the applicability of neural networks to the assessment of structural damage. Appropriate neural networks are trained to recognize the frequency response of an undamaged structure as well as the frequency responses of the structure in which individual members have sustained varying degrees of damage. These trained neural networks will have the capability of recognizing the location and the extent of individual member damage from the measured response of the structure.

2. NEURAL NETWORKS

As was stated in the introduction, neural networks have been inspired by the neuronal architecture and operation of the brain. Rumelhart et al. [7] argue that the reason the brain is capable of high performance in natural information processing tasks, such as perception, language understanding, motor control, etc., is that:

• these tasks require the simultaneous consideration of a large number of pieces of information and constraints, and
• the neuronal architecture, consisting of billions of neurons that are interconnected with a fan-in and fan-out on the order of about 1000-100,000, is ideally suited for the relaxation-type computation necessary for the simultaneous consideration of the above mentioned multitude of information and constraints [7].

Hence, this neurally inspired computation and knowledge representation paradigm consists of a model of an individual neuron, called a processing unit, that is instantiated thousands of times, with many connections existing between these various instances. The basic model of an individual neuron was developed in 1943 by Warren McCulloch and Walter Pitts and still remains at the heart of most neural networks [8]. Given that the physiological structure of a neuron looks like that shown at the top of Fig. 1, McCulloch and Pitts put forth the following model of a neuron (see bottom of Fig. 1):

• each neuron has an activity, x_j, that is the sum of the inputs that arrive via weighted pathways,
• the input from a particular pathway is an incoming signal, S_i, multiplied by the weight, w_ij, of that pathway,
• w_ij represents synaptic efficacy (the amount of post-synaptic potential transmitted to soma j if a spike occurs at soma i), and
• the outgoing signal from neuron j is computed as S_j = f(x_j), where f(x_j) is a binary threshold function of the activity, x_j, in that neuron.

The McCulloch-Pitts neuron can also have a bias term, θ_j, which acts as a form of threshold. In the McCulloch-Pitts neuron model, the signals, S_j, are restricted to take on the values of 0 and 1, as is obvious from the output function f(x_j).

Fig. 1. The McCulloch-Pitts computational model of a neuron.
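The McCulloch-Pitts unit described above can be sketched in a few lines of Python (an illustrative sketch, not code from the paper; the function name and the AND-gate weights below are our own):

```python
def mcculloch_pitts_neuron(signals, weights, theta=0.0):
    """Binary threshold unit: S_j = f(x_j), where the activity x_j is the
    weighted sum of incoming binary signals plus a bias term theta_j."""
    x = sum(w * s for w, s in zip(weights, signals)) + theta
    return 1 if x > 0 else 0  # f is a binary threshold; output is 0 or 1

# Illustrative use: a two-input AND gate realized with suitable weights/bias
assert mcculloch_pitts_neuron([1, 1], [1.0, 1.0], theta=-1.5) == 1
assert mcculloch_pitts_neuron([1, 0], [1.0, 1.0], theta=-1.5) == 0
```

The bias theta plays the role of the threshold term θ_j mentioned above: the unit fires only when the weighted input sum exceeds 1.5.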



Each processor in the network thus maintains only one piece of dynamic information (its current level of activation) and is capable of only a few simple computations (adding inputs, computing a new activation level, or comparing input to a threshold value). A neural network performs 'computations' by propagating changes in activation (i.e., level of stimulation) between the processors, which may or may not involve feedback and relaxation.

Rosenblatt [9] extended the McCulloch-Pitts model with the intent of giving self-organization capabilities to networks of these units through localized, dynamic weight modification. His work, while highly criticized for dealing with only simple two-layered networks which were incapable of learning certain types of functions, did pave the way for more modern research into learning mechanisms for multi-layer networks, such as backpropagation [10-12] (discussed later in this paper). Without such capabilities, any complex computation by a McCulloch-Pitts style neural network would require the manual setting of all connection strengths, which would be impossible. Hence, a key characteristic of neural networks is their capability of self-organization or 'learning'. Unlike traditional sequential programming techniques, neural networks are trained with examples of the concepts to capture. The network then internally organizes itself to be able to reconstruct the presented examples. Several other interesting and valuable characteristics are: (1) their ability to produce correct, or nearly correct, responses when presented with partially incorrect or incomplete stimuli; and (2) their ability to perform a limited amount of generalization from the cases on which they are trained. Both of these latter characteristics stem from the fact that a neural network, through self-organization, develops an internal set of features that it uses to classify the stimuli presented to it and return the expected response.

In the early stages of learning, the immature connection strengths cause the actual response of the network to be quite different from that expected. Error is defined as a measure of the difference between the computed output pattern and the expected output pattern. Most learning mechanisms strive to reduce this error for all associations (input/output pattern pairs) to be learned through an iterative, localized weight modification strategy that serves to reduce the sum of the squares of errors for each pattern pair.

Rumelhart et al. [7] provide an excellent description of the basic anatomy of neural networks, which they divide into seven basic aspects: (1) a set of processing units; (2) the state of activation of a processing unit; (3) the function used to compute the output of a processing unit; (4) the pattern of connectivity among the processing units; (5) the rule of propagation employed; (6) the activation function employed; and (7) the rule of learning employed. The network topology (i.e., the number of nodes and their connectivity) and the form of the rules and functions are all variables in a neural network and lead to a wide variety of network types. Some of the well-known types of neural network described in [12] are: competitive learning [13,14], the Boltzmann machine [15], the Hopfield network [16], the Kohonen network [17], and the backpropagation network [10-12]. The backpropagation network is given its name due to the way that it learns: by backpropagating the errors seen at the output nodes. Although there are many other variations of neural networks, backpropagation networks are currently being considered by most neural network application developers.

2.1. Backpropagation neural networks

The processing units in backpropagation neural networks are arranged in layers. Each neural network has an input layer, an output layer and a number of hidden layers. As illustrated by the examples shown in Fig. 2, it is the presence of these hidden units that allows these networks to represent and compute more complicated associations between patterns. Propagation takes place in a feed-forward manner, from the input layer to the output layer.

Fig. 2. A sample backpropagation neural network.



The pattern of connectivity and the number of processing units in each layer may vary, with some constraints. No communication is permitted between the processing units within a layer, but the processing units in each layer may send their output to the processing units in higher layers. In the specific type of neural network architecture used in this study, each unit receives input from all the units in the preceding layer and sends its output to all the units in the next layer. A typical example of such a neural network architecture is shown in Fig. 2.

Associated with each connection is a numerical value which is the strength or the weight of that connection: w_ij = strength of the connection between units i and j. The connection strengths are developed during training of the neural network. At the beginning of a training process, the connection strengths are assigned random values. As examples are presented during the training, application of the 'rules of learning' (to be described later) modifies the connection strengths in an iterative process. At the successful completion of the training, when the iterative process has converged, the collection of connection strengths captures and stores the knowledge and the information present in the examples used in its training. Such a trained neural network is ready to be used. When presented an input pattern, a feed-forward network computation results in an output pattern which is the result of the generalization and synthesis of what it has learned and stored in its connection strengths.

A feed-forward network computation with these backpropagation neural networks proceeds as follows:

• the units in the input layer receive their activations in the form of an input pattern and this initiates the feed-forward process;
• the processing units in each layer receive outputs from other units and perform the following computations:
  - compute their net input, N_j:

        N_j = Σ (k = 1 to M) w_jk o_k,

    where o_k = output from units impinging on unit j and M = number of units impinging on unit j;
  - compute their activation values from the net input values:

        a_j = F_j(N_j),

    where F_j is usually a sigmoid function,

        F_j(N_j) = 1 / (1 + e^-(N_j + θ_j));

  - compute their outputs from their activation values; in the neural network type used in this study the output is the same as the activation value:

        o_j = a_j;

• the output values are sent to other processing units along the outgoing connections;
• this process continues until the processing units in the output layer compute their activation values. These activation values are the output of the neural computations.

2.2. Self-organization and rules of learning

Several mechanisms for imparting self-organization to these multi-layer networks have been developed and described in [7]. One of the main characteristics of these learning mechanisms is whether learning occurs in a supervised or unsupervised fashion. Supervised learning means that the expected output is included in what the network is to learn. Unsupervised learning means that the network is not told what it is to learn about the input with which it is presented and must, on its own, discover regularities and similarities among the input patterns. One form of supervised learning, developed by Rumelhart, Hinton, and Williams, is called the generalized delta rule [11] and is the learning mechanism used in backpropagation neural networks. All backpropagation networks, which use the generalized delta rule for self-organization and derive their name from the need to backpropagate error, have the same general architecture shown in Fig. 2.

The training (i.e., supervised learning) of a multi-layer backpropagation neural network, via the generalized delta rule, is an iterative process. Each step involves the determination of the error associated with each unit and then the modification of the weights on the connections coming into that unit. Each presentation of one training case and subsequent modification of connection strengths is called a cycle. Each cycle actually involves three substeps: (1) for the training case to be learned, the network is presented with the input pattern and then propagates the activation through to the processing units; (2) the error at the output units is then backpropagated back to the hidden processing units; and (3) connections coming into the hidden units then modify their connection strengths using this backpropagated error.

A set of cycles, made up of one cycle for each training case, is called an epoch. The training process for a network may require several hundreds or thousands of epochs for all of the training cases to be 'learned' within a specified error tolerance. The generalized delta rule basically performs a gradient descent on the error space consisting of the Euclidean norms of the errors of all patterns presented during training. Convergence is not guaranteed. Convergence depends in some ways on the network architecture, its capacity and the amount of information it is to learn. This relationship is as yet not well understood and is the subject of current research.

The modification of the strengths of the connections in the generalized delta rule, described in [11], is accomplished through gradient descent on the total error in a given training case:

    Δw_ij = η δ_j o_i.

In this equation, η = a learning constant called the 'learning rate' and δ_j = the gradient of the total error with respect to the net input at unit j. At the output units δ_j is determined from the difference between the expected activations, t_j, and the computed activations, a_j:

    δ_j = (t_j - a_j) F'(N_j),

where F' is the derivative of the activation function. At the hidden units the expected activations are not known a priori. The following equation gives a reasonable estimate of δ_j for the hidden units:

    δ_j = (Σ_k δ_k w_kj) F'(N_j).

In this equation, the error attributed to a hidden unit depends on the error of the units it influences. The amount of error from these units attributed to the hidden unit depends on the strength of the connection from the hidden unit to those units; a hidden unit with a strong excitatory connection to a unit exhibiting error will be strongly 'blamed' for this error, causing this connection strength to be reduced.

3. NEURAL NETWORK-BASED APPROACH TO DAMAGE ASSESSMENT

The basic strategy for developing a neural network-based approach to damage assessment of a structural system is to train a backpropagation neural network to recognize individual member damage from the measured response of structures. This strategy is based on the fact that the measured response of any structure contains information regarding its natural frequencies and vibration mode shapes. Structural damage affects these dynamic characteristics of the structure. Thus, in theory, the measured response of a damaged structure, when compared with the response of the undamaged structure, contains information regarding the location and the extent of damage. However, this information may be extremely difficult to extract by the traditional methods; the rules governing the cause and effect relationships must be established explicitly and a methodology for using these relationships must be developed a priori. In the proposed approach, the training process is used to extract the cause and effect relationships, which are then stored within the connection strengths of an appropriate backpropagation neural network. In this way, the self-organization and learning capabilities of these neural networks are utilized to eliminate the need for explicitly extracting the cause and effect relationships. These relationships exist in a distributed way within the connection strengths of the neural networks but are not obvious or externally available to the user.

The training of a neural network with appropriate data containing the information regarding the cause and effect relationships is at the heart of the proposed method of damage assessment. Thus, the first step is to generate a battery of data that can be used in training an appropriate neural network to recognize damage patterns from the response of the structure. Ideally, this data set should contain the response of the undamaged structure as well as the responses of the structure in various damaged states. This data can be generated through measurement of structural response, model test results or numerical simulation, or a combination of all three types of data. In this first study, we have used numerical simulations.

In any typical application of a backpropagation neural network, at first an appropriate network architecture must be determined and a training algorithm selected. Then, this particular neural network is trained with the training data set. The trained network is then tested to verify how well it has learned the training cases. The last step is usually to test the generalization capability of the neural network. In this step, the trained network is tested with data which were not present in the training data set. This last step is actually the test of the performance capability of the network. How well a trained network is able to generalize depends very much on the adequacy of the selected network architecture and the richness of the training data set in the information needed for damage assessment. Often, changes in the network architecture and/or additions to the training data set are needed. Such changes are followed by the repetition of the whole training and testing process. This quasi-iterative fine tuning is repeated until satisfactory performance is obtained.

In this study we have used the computed acceleration time histories as the measured response of the structure. These acceleration time histories are then passed through an FFT process and the resulting Fourier spectra of the acceleration time histories are used as input to the neural network. The Fourier spectrum between 0 and 20 Hz is discretized at intervals of 0.1 Hz and the spectral values are input at 200 nodes in the input layer, as shown in Fig. 4. The output layer contains one node per member and the activation values at these nodes represent the damage state of that member; an activation value of one means no damage and an activation value of zero means complete damage. We have tried various numbers of hidden layers. Generally, the capacity of a neural network is a function of the number of hidden layers, the number of processing units in each layer, and the pattern of connectivity between layers [18].
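The input preprocessing described above (FFT of an acceleration time history, with the 0-20 Hz Fourier spectrum discretized at 0.1 Hz intervals into 200 input-node values) might be sketched as follows. The sampling interval, the interpolation onto the 0.1 Hz grid and the normalization to [0, 1] are our assumptions, not details given in the paper:

```python
import numpy as np

def spectrum_input(accel, dt):
    """Return 200 spectral amplitudes covering 0-20 Hz at 0.1 Hz steps."""
    amp = np.abs(np.fft.rfft(accel))        # one-sided Fourier amplitude spectrum
    freqs = np.fft.rfftfreq(len(accel), d=dt)  # frequency of each FFT bin
    targets = np.arange(0.0, 20.0, 0.1)     # 200 frequencies: 0.0, 0.1, ... 19.9 Hz
    vals = np.interp(targets, freqs, amp)   # resample onto the 0.1 Hz grid
    return vals / vals.max()                # normalize to [0, 1] input activations

# Illustrative use: a 5 Hz test signal sampled for 10 s at 100 Hz
accel = np.sin(2 * np.pi * 5.0 * np.arange(0.0, 10.0, 0.01))
x = spectrum_input(accel, dt=0.01)
```

The 200 resulting values are what would be presented to the 200 input-layer nodes; the spectral peak of the test signal lands near the 5 Hz input node.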

In this study a simple pattern of connectivity is used: each processing unit has outgoing connections to all the processing units in the next layer. The capacity of the neural network is somehow related to the amount of information in the training data and the complexity of the knowledge contained in that data. Currently there are no reliable rules for determining the capacity of a feed-forward multi-layer neural network, as this question is not yet well understood.

4. CASE STUDY

In this case study the proposed method has been applied to a simple three story frame, as shown in Fig. 3. The frame is modeled as a 'shear building', with the girders assumed to be infinitely rigid and the columns being flexible. The structure has three degrees of freedom (floor translations) and the columns in each story represent a member. The structure was subjected to earthquake base acceleration and the transient response was computed in the time domain with the Newmark method [19]. The Fourier spectra of the computed relative acceleration time histories of the top floor were used in training the neural network. Member damage was defined as a reduction in the member stiffness. The dynamic responses of the structure at the third or top floor for various member damage conditions were computed. For this investigation, only one member was 'damaged' at a time; damage was represented by a percentage reduction in stiffness for the structural member, and the damage state of the element is defined as the percentage retention in stiffness in the neural network output representation, that is, if the structure is intact, the damage state is designated as 100%. A Fast Fourier Transformation (FFT) was performed on the transient relative acceleration response to convert it from the time domain to the frequency domain, and the discretized Fourier spectra of the relative accelerations were used as input data in the training data sets. The selected neural network was trained on a number of the Fourier spectra of relative accelerations recorded at the top floor to identify the structural member damage, and the approach is schematically shown in Fig. 3. The following sections describe the scheme for input and output representation, network architecture determination, and the training and testing of the network in more detail.

4.1. Input and output to the neural network

The input to the neural network was taken as the Fourier spectra of the relative accelerations recorded at the third or top floor. Note that, for more complicated structures, more accelerometers would have to be used in order to be able to discern the damage state corresponding to one structural member from another. The purpose of this investigation was to illustrate the neural network-based methodology and to show that this approach could be developed to identify member damage. However, more complicated structures will have to be investigated in future research to determine how this method 'scales up'.
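The case-study simulation can be sketched as a three degree of freedom shear building integrated with the Newmark (average acceleration) method under base acceleration. The masses, story stiffnesses and the undamped formulation below are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

def shear_building_matrices(masses, stiffs):
    """Mass and stiffness matrices for a shear building (rigid girders)."""
    n = len(masses)
    M = np.diag(masses)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] = stiffs[i] + (stiffs[i + 1] if i + 1 < n else 0.0)
        if i + 1 < n:
            K[i, i + 1] = K[i + 1, i] = -stiffs[i + 1]
    return M, K

def newmark(M, K, ag, dt, beta=0.25, gamma=0.5):
    """Relative acceleration response to base acceleration ag, computed with
    the Newmark average-acceleration method (damping omitted for brevity)."""
    n = M.shape[0]
    r = np.ones(n)                       # influence vector for base motion
    u = np.zeros(n)
    v = np.zeros(n)
    a = np.linalg.solve(M, -M @ r * ag[0] - K @ u)
    Keff = K + M / (beta * dt ** 2)      # effective stiffness, constant in time
    out = [a.copy()]
    for g in ag[1:]:
        rhs = (-M @ r * g
               + M @ (u / (beta * dt ** 2) + v / (beta * dt)
                      + (1.0 / (2.0 * beta) - 1.0) * a))
        u_new = np.linalg.solve(Keff, rhs)
        a_new = ((u_new - u) / (beta * dt ** 2) - v / (beta * dt)
                 - (1.0 / (2.0 * beta) - 1.0) * a)
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
        out.append(a.copy())
    return np.array(out)                 # rows: time steps, columns: floors

# A 75% stiffness reduction in story 1 is a damage state of 25% retention
masses = [1.0, 1.0, 1.0]                 # illustrative values, not the paper's
stiffs = [1000.0, 1000.0, 1000.0]
damaged = [0.25 * stiffs[0]] + stiffs[1:]
```

The relative acceleration of the top floor (last column of the returned array) is what would be passed through the FFT step to produce training spectra; reducing a story stiffness lowers the natural frequencies, which is the signature the network is trained to read.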

Fig. 3. The damage assessment problem (base input: acceleration time history from the El Centro earthquake, May 18, 1940, SE component).
Fig. 4. Architecture of the neural network and the damage state of structural elements.
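The output encoding described above (one output node per member, with the target activation equal to the fraction of stiffness retained: 1.0 for an intact member, 0.25 for a 75% stiffness reduction) can be sketched as follows; the function name is our own:

```python
def damage_targets(n_members, damaged_member=None, stiffness_reduction=0.0):
    """Target output activations for one training case: 1.0 = intact member,
    1.0 - reduction = fraction of stiffness retained, 0.0 = complete damage."""
    t = [1.0] * n_members
    if damaged_member is not None:
        t[damaged_member] = 1.0 - stiffness_reduction
    return t

assert damage_targets(3) == [1.0, 1.0, 1.0]  # undamaged frame
assert damage_targets(3, damaged_member=0, stiffness_reduction=0.75) == [0.25, 1.0, 1.0]
```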

The Fourier spectrum recorded at the third floor was represented by a set of 200 processing units, where each unit represented a portion of the frequency spectrum. Hence, the input to the neural network consists of 200 units with varying levels of activation value representing the amplitudes of the Fourier spectrum. The output of the neural network is an identification of the extent of the damage for each member in the structure; units with low activation represent severe damage of a structural member and an activation value of 1.0 represents no damaged structural members in the structure. Hence, the output layer of the neural network in this investigation consisted of three nodes, each representing the damage condition of one member in the structure.

4.2. Architecture of the neural network

As stated previously, a backpropagation neural network architecture was used in this investigation. In the previous section, the input and output layers were described. It was determined that one hidden layer with 10 nodes be used. The final architecture of the neural network, with an illustration of the representation scheme for the damage state of structural elements, is shown in Fig. 4. It was discovered through experiments that if the hidden layer was too small, the network would not converge; if the network was too large, it would not converge either. With a hidden layer of 10 nodes, the network converged, and modification of the architecture ceased at that point.

4.3. Training of the neural network

A total of 42 training cases (consisting of 42 frequency spectra of relative acceleration recorded at the top floor along with the information that indicates which member is damaged and to what extent it is damaged) were presented to the neural network as follows:

• no structural member is damaged, i.e., all of the output units have activation levels near 1.0, for the six different earthquakes used as base accelerations for the structure (six cases);
• the structural member 1 for the first floor is assumed damaged to a certain extent, i.e., its stiffness is reduced by 75% and 50% (represented by activation levels of 0.25 and 0.50, respectively), for six different earthquakes (12 cases);

Fig. 5. Damage assessment training results for one-layer network (○ expected activation, ■ actual activation). Earthquakes: 1. Castaic, 2. El Centro, 3. Melendy, 4. Mexican, 5. Parkfield, 6. Taft.

• the structural member 2 for the second floor is damaged to a certain extent, i.e., its stiffness is reduced by 75% and 50%, for six different earthquakes (12 cases);
• the structural member 3 for the third floor is damaged to a certain extent, i.e., its stiffness is reduced by 75% and 50%, for six different earthquakes (12 cases).

It was discovered in our research on material modeling with neural networks [20,21] that incremental training was more efficient than simply dumping the whole training set on the network. The training patterns were presented to the neural network in stages, with the frequency response spectra corresponding to no damage and 50% damage for all earthquake records being presented and learned first, followed by those corresponding to the 75% damage cases. For the network to 'learn' the first 24 cases, it took 1280 epochs to convergence (an epoch is defined as all the operations needed for feed-forward and error backpropagation of all the cases), with a learning rate of η = 0.2 and a maximum permissible error of 10%. For the second set of 18 cases, it took the network an additional 1437 epochs. As can be seen from the training results shown in Fig. 5, the network was able to learn the relationship between patterns of frequency response spectra and the corresponding damage state of the structure, and to identify all 42 damage cases when trained on the Fourier spectra of relative acceleration recorded at the top floor.

4.4. Testing of the neural network

After the network was trained on the 42 training cases, it was tested to see how well it would recognize other states of damage, such as those corresponding to 60% structural member damage. The results of these tests are shown in Fig. 6 and, as can be seen from this figure, the network was able to adequately recognize the damage of structural member 3. However, the trained network was not able to adequately identify the damaged member and to recognize the extent of damage for structural members 1 and 2. Nevertheless, the network was able to detect that damage had occurred in all three cases, but it was confused as to whether member 1 or member 2 was damaged.
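The staged presentation of training cases used in Section 4.3 (the no-damage and 50% damage spectra learned first, the 75% cases added afterwards) can be sketched as follows; `train_to_tolerance` is a hypothetical stand-in for any routine that runs epochs of the generalized delta rule until the presented cases are learned within the error tolerance:

```python
def staged_training(network, cases, train_to_tolerance, eta=0.2, tol=0.10):
    """Incremental training: present the easier cases first, then continue
    training on the full set.  Each case is a dict with a 'damage' fraction
    (0.0 = intact, 0.50 / 0.75 = stiffness reduction); this layout is an
    assumption for illustration, not the authors' data format."""
    stage1 = [c for c in cases if c["damage"] in (0.0, 0.50)]  # 24 cases here
    stage2 = [c for c in cases if c["damage"] == 0.75]         # 18 cases here
    epochs1 = train_to_tolerance(network, stage1, eta, tol)
    epochs2 = train_to_tolerance(network, stage1 + stage2, eta, tol)
    return epochs1, epochs2
```

With the 42 cases of the paper (6 undamaged, 18 at 50% and 18 at 75% damage), the first call sees 24 cases and the second sees all 42, mirroring the 1280-epoch and 1437-epoch stages reported above.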

Fig. 6. Damage assessment test results for one-layer network (○ expected activation, ■ actual activation).

While this error may be due to a variety of different causes, it was felt that this inability was most likely due to the network not being provided with enough information to distinguish between the structural member damage states. It has been pointed out by Norton [22] that the selection of suitable locations for the measurement transducers or accelerometers is of primary importance in correctly capturing the vibration characteristics of the structure.

To increase the information on which the damaged member identification is based, it was decided to: (1) provide the network with Fourier spectra recorded at two floor levels (the second and the third floors); (2) modify the network architecture to consist of two 200 node input sets (one obtained from the second floor and another from the third floor) and two hidden layers; and (3) re-train the network. Two hidden layers were thought to be necessary so as to be able to look for combinations (i.e., features) of the frequencies in the Fourier spectra recorded at the two floor levels. The size of the hidden layers for which the network was able to converge turned out to be eight units in each layer. The training results for this new network were almost identical to those shown in Fig. 5, and thus are not shown here in the interest of space. However, the testing results of the new network did change from those of Fig. 6 and are shown in Fig. 7. Note that the network, when provided with more information about the dynamic behavior of the structure, was able to better distinguish between the various states of damage and was able to identify the structural member experiencing 60% damage. The network was able to correctly identify damage to both the first and third members, but it was still unable to identify that the second structural member was damaged by 60%.

5. CONCLUDING REMARKS

On the basis of the case studies presented, it is felt that the use of neural networks for structural damage assessment is a promising line of research.

Fig. 7. Damage assessment test results for two-layer network (expected vs. actual output activations).

results indicate that neural networks are capable of learning about the behavior of undamaged and damaged structures, and of identifying the damaged member and the extent of the damage from patterns in the frequency response of the structure. However, some issues remain to be resolved before this approach becomes a truly viable method of structural damage assessment. First, for real-world complex structures, the neural network after proper training should be able to distinguish between the frequency response patterns recorded at several locations of the structure so that the damaged member and its extent of damage can be identified. Secondly, the amount of information needed for training a neural network, such that the trained network not only learns the training cases but can also accurately generalize to other cases, must be quantified; that is, how many accelerometer recordings, and at what locations, must be obtained in order to reliably identify the damaged structural members? For more complicated structures, it is anticipated that many more than two accelerometer recordings (as in this case study) will have to be used as input to the network. Finally, how many damage states for each structural member must be presented to the network before it attains adequate performance for all states of single-member damage? It is anticipated, based on the generalization capability of neural networks, that only several variations of damage state for selected key structural members may be sufficient to adequately train the network.

Further research in this area should not only address the above-mentioned issues, but should also investigate the role of noise in actual measurements, application to large and complex structures, and the use of measurements from applied force tests or ambient vibration tests.

REFERENCES

1. B. G. Burke, C. Sundararajan and F. M. Safaie, Characterization of ambient vibration data by response shape vectors. In Proceedings of the Offshore Technology Conference, OTC paper 3861, Houston, TX, pp. 53-61 (1980).
2. R. N. Coppolino and K. J. Rubin, Detectability of structural failures in offshore platforms by ambient vibration monitoring. In Proceedings of the Offshore Technology Conference, OTC paper 3865, Houston, TX, pp. 101-110 (1980).
3. S. Rubin, Ambient vibration survey of offshore platform. J. Engng Mech., ASCE 106, 425-441 (1980).
4. F. Shahrivar and J. G. Bouwkamp, Damage detection in offshore platforms using vibration information. In Proceedings of the Third International Offshore Mechanics and Arctic Engineering Symp., ASME, II, pp. 174-185 (1984).
5. J. K. Vandiver, Detection of structural failure on fixed platforms by measurement of dynamic response. In Proceedings of the Offshore Technology Conference, OTC paper 2267, Houston, TX, pp. 243-252 (1975).
6. F. Shahrivar and J. G. Bouwkamp, Signal separation method for tower mode shape measurement. J. Struct. Engng, ASCE 115, 707-723 (1989).
7. D. E. Rumelhart, J. L. McClelland and the PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations. MIT Press, Cambridge, MA (1986).
8. W. S. McCulloch and W. Pitts, A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5, 115-133 (1943).
9. F. Rosenblatt, Principles of Neurodynamics. Spartan, New York (1962).
10. D. B. Parker, Learning logic. Invention report S81-64, File 1, Office of Technology Licensing, Stanford University (1982).
11. D. E. Rumelhart, G. E. Hinton and R. J. Williams, Learning internal representations by error propagation. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations (Edited by D. E. Rumelhart and J. L. McClelland). MIT Press, Cambridge, MA (1986).
12. P. Werbos, Beyond regression: New tools for prediction and analysis in behavioral sciences. Ph.D. dissertation, Harvard University (1974).
13. S. Grossberg, Adaptive pattern classification and universal recoding: Part 1. Parallel development and coding of natural feature detectors. Biological Cybernetics 23, 121-134 (1976).
14. D. E. Rumelhart and D. Zipser, Feature discovery by competitive learning. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations (Edited by D. E. Rumelhart and J. L. McClelland). MIT Press, Cambridge, MA (1986).
15. G. E. Hinton, T. J. Sejnowski and D. H. Ackley, Boltzmann machines: Constraint satisfaction networks that learn. Technical report No. CMU-CS-84-119, Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA (1984).
16. J. J. Hopfield, Neural networks and physical systems with emergent collective computational abilities. Proc. natl Acad. Sci., U.S.A. 79, 2554-2558 (1982).
17. T. Kohonen, Self-Organization and Associative Memory. Springer, Berlin (1984).
18. K. Hornik, M. Stinchcombe and H. White, Multilayer feedforward networks are universal approximators. Neural Networks 2, 359-366 (1989).
19. N. M. Newmark, A method of computation for structural dynamics. J. Engng Mech., ASCE 3, 67-94 (1959).
20. J. H. Garrett, Jr, J. Ghaboussi and X. Wu, Neural networks. In Expert Systems in Civil Engineering: Knowledge Representations (Edited by R. H. Allen). The Expert Systems Committee of the American Society of Civil Engineers (1990).
21. J. Ghaboussi, J. H. Garrett, Jr and X. Wu, Knowledge-based modeling of material behavior with neural networks. J. Engng Mech. 117, Jan. (1991).
22. M. P. Norton, Fundamentals of Noise and Vibration Analysis for Engineers. Cambridge University Press, Cambridge (1989).
23. IEEE, Proceedings of the First IEEE International Conference on Neural Networks. San Diego, CA, June 21-24 (1987).
24. IEEE, Proceedings of the 1988 IEEE International Conference on Neural Networks. San Diego, CA, July 24-27 (1988).
25. IJCNN, Proceedings of the International Joint Conference on Neural Networks. Co-sponsored by IEEE and the International Neural Network Society, Washington DC, June 18-22 (1989).
26. IJCNN, Proceedings of the International Joint Conference on Neural Networks. Co-sponsored by IEEE and the International Neural Network Society, Washington DC, January (1990).
27. IJCNN, Proceedings of the International Joint Conference on Neural Networks. Co-sponsored by IEEE and
the International Neural Network Society, San Diego, CA, June 17-21 (1990).
28. I. Aleksander (Ed.), Neural Computing Architecture. MIT Press, Cambridge, MA (1989).
29. T. Ash, Dynamic node creation in backpropagation networks. In Proceedings of the International Joint Conference on Neural Networks, Washington DC, June 18-22 (1989).
30. A. G. Barto, Connectionist learning for control: An overview. COINS technical report 89-89, Department of Computer and Information Science, University of Massachusetts, Amherst, MA (1989).
31. G. Carpenter and S. Grossberg, Adaptive resonance theory: Stable self-organization of neural recognition codes in response to arbitrary lists of input patterns. In Eighth Annual Conference of the Cognitive Science Society. Lawrence Erlbaum Associates (1986).
32. R. W. Clough and J. Penzien, Dynamics of Structures. McGraw-Hill, New York (1975).
33. S. E. Fahlman and C. Lebiere, The cascade-correlation learning architecture. Technical report CMU-CS-90-100, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA (1990).
34. D. O. Hebb, The Organization of Behavior: A Neuropsychological Theory. John Wiley, New York (1949).
35. M. Minsky and S. Papert, Perceptrons. MIT Press, Cambridge, MA (1969).
36. D. R. Rehak, C. R. Thewalt and L. B. Doo, Neural network approaches in structural mechanics computations. In Computer Utilization in Structural Engineering (Proceedings of the sessions related to computer utilization at Structures Congress '89). ASCE, pp. 168-176 (1989).
37. M. F. Tenorio and W.-T. Lee, Self-organizing neural network for optimum supervised learning. Technical report TR-EE-89-30, School of Electrical Engineering, Purdue University, West Lafayette, IN (1989).
38. M. D. Trifunac, Comparisons between ambient and forced vibration experiments. Earthquake Engng Struct. Dynamics 1, 133-150 (1972).
39. B. Widrow and M. E. Hoff, Adaptive switching circuits. 1960 IRE WESCON Convention Record, New York, IRE, pp. 96-104 (1960).
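The staged training scheme discussed in the results above (patterns presented in stages, learning rate η = 0.2, a 10% maximum permissible output error as the convergence criterion, an epoch being one feed-forward and back-propagation pass over all cases) can be sketched as follows. This is a minimal illustrative reconstruction in modern NumPy, not the authors' code: the toy patterns, layer sizes, and weight initialization below are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BackpropNet:
    """Feed-forward network trained by error back-propagation [11]."""

    def __init__(self, sizes, eta=0.2):  # eta = 0.2, as in the paper
        self.eta = eta
        # One weight matrix and bias vector per layer (random init is assumed)
        self.W = [rng.normal(0.0, 0.5, (m, n))
                  for n, m in zip(sizes[:-1], sizes[1:])]
        self.b = [np.zeros(m) for m in sizes[1:]]

    def forward(self, x):
        # Return the activations of every layer, input included
        acts = [np.asarray(x, dtype=float)]
        for W, b in zip(self.W, self.b):
            acts.append(sigmoid(W @ acts[-1] + b))
        return acts

    def train_epoch(self, X, T):
        """One epoch: feed-forward and back-propagate every case once;
        returns the largest absolute output error encountered."""
        max_err = 0.0
        for x, t in zip(X, T):
            acts = self.forward(x)
            err = acts[-1] - t
            max_err = max(max_err, float(np.max(np.abs(err))))
            delta = err * acts[-1] * (1.0 - acts[-1])
            for i in reversed(range(len(self.W))):
                grad_W, grad_b = np.outer(delta, acts[i]), delta
                if i > 0:  # delta for the layer below, from pre-update weights
                    delta = (self.W[i].T @ delta) * acts[i] * (1.0 - acts[i])
                self.W[i] -= self.eta * grad_W
                self.b[i] -= self.eta * grad_b
        return max_err

def train_staged(net, stages, max_err=0.10, max_epochs=20000):
    """Staged presentation: each stage's cases are added to those already
    seen and trained to the maximum-error criterion before the next stage."""
    X_seen, T_seen, epochs_per_stage = [], [], []
    for X, T in stages:
        X_seen += list(X)
        T_seen += list(T)
        for epoch in range(1, max_epochs + 1):
            if net.train_epoch(X_seen, T_seen) <= max_err:
                break
        epochs_per_stage.append(epoch)
    return epochs_per_stage
```

In the paper's terms, the first stage would hold the no-damage and 50% damage spectra and the second stage the 75% damage cases, with 200 Fourier-spectrum ordinates per floor as inputs; the toy example in the test below substitutes four-component patterns only to keep the sketch self-contained.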
